Fast Approximate Dynamic Programming for Infinite-Horizon Markov Decision Processes
M A S Kolarijani
G F Max
P Mohajerin Esfahani
Keywords: stochastic optimal control, value iteration, input-affine systems, Fenchel duality, computational complexity
Abstract. In this study, we consider the infinite-horizon, discounted cost, optimal control of stochastic nonlinear systems with separable cost and constraints in the state and input variables. Using the linear-time Legendre transform, we propose a novel numerical scheme for the implementation of the corresponding value iteration (VI) algorithm in the conjugate domain. Detailed analyses of the convergence, time complexity, and error of the proposed algorithm are provided. In particular, with a discretization of size X and U for the state and input spaces, respectively, the proposed approach reduces the time complexity of each iteration in the VI algorithm from O(XU) to O(X + U), by replacing the minimization operation in the primal domain with a simple addition in the conjugate domain.
Introduction
Value iteration (VI) is one of the most basic and widespread algorithms employed for tackling problems in reinforcement learning (RL) and optimal control [10,30] formulated as Markov decision processes (MDPs). The VI algorithm simply involves the consecutive applications of the dynamic programming (DP) operator T J(x t ) = min ut C(x t , u t ) + γEJ(x t+1 ) ,
where C(x t , u t ) is the cost of taking the control action u t at the state x t . This fixed point iteration is known to converge to the optimal value function for discount factors γ ∈ (0, 1). However, this algorithm suffers from a high computational cost for large-scale finite state spaces. For problems with a continuous state space, the DP operation becomes an infinite-dimensional optimization problem, rendering the exact implementation of VI impossible in most cases. A common solution is to incorporate function approximation techniques and compute the output of the DP operator for a finite sample (i.e., a discretization) of the underlying continuous state space. This approximation again suffers from a high computational cost for fine discretizations of the state space, particularly in high-dimensional problems. We refer the reader to [10,27] for various approximation schemes for the implementation of VI.
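To make the fixed-point iteration above concrete, the following is a minimal sketch of VI for a finite MDP; the transition probabilities P and stage cost C below are random placeholders rather than data from the paper.

```python
import numpy as np

# Value iteration for a finite MDP (illustrative random data, not from the paper).
# C[x, u] is the stage cost and P[u, x, y] the probability of moving from x to y under u.
rng = np.random.default_rng(0)
X, U, gamma = 5, 3, 0.95
C = rng.random((X, U))
P = rng.random((U, X, X))
P /= P.sum(axis=2, keepdims=True)              # make each row a probability distribution

def bellman(J):
    # (T J)(x) = min_u { C(x, u) + gamma * E[J(x')] }
    Q = C + gamma * np.einsum('uxy,y->xu', P, J)
    return Q.min(axis=1)

J = np.zeros(X)
for _ in range(1000):
    J_new = bellman(J)
    done = np.max(np.abs(J_new - J)) < 1e-8    # gamma-contraction guarantees convergence
    J = J_new
    if done:
        break
```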
For some problems, however, it is possible to partially address this issue by using duality theory, i.e., approaching the minimization problem in the conjugate domain. In particular, as we will see in Section 3, the minimization in the primal domain in DP can be transformed to a simple addition in the dual domain, at the expense of three conjugate transforms. However, the proper application of this transformation relies on efficient numerical algorithms for conjugation. Fortunately, such an algorithm, known as the linear-time Legendre transform (LLT), was developed in the late 1990s [24]. Other than the classical application of LLT (and other fast algorithms for the conjugate transform) in solving the Hamilton-Jacobi equation [1,14,15], these algorithms are used in image processing [25], thermodynamics [13], and optimal transport [18].

The application of conjugate duality to the DP problem is not new and goes back to Bellman [5]. Further applications of this idea for reducing the computational complexity were later explored in [16,19]. However, surprisingly, the application of LLT to solving discrete-time optimal control problems has been limited. In particular, in [12], the authors propose the "fast value iteration" algorithm (without a rigorous analysis of the complexity and error of the proposed algorithm) for a particular class of infinite-horizon optimal control problems with state-independent stage cost C(x, u) = C(u) and deterministic linear dynamics x_{t+1} = A x_t + B u_t, where A is a non-negative, monotone, invertible matrix. More recently, in [21], we also considered the application of LLT for solving the DP operation in the finite-horizon, optimal control of input-affine dynamics. In particular, we introduced the "discrete conjugate DP" (d-CDP) operator, and provided a detailed analysis of its complexity and error. As we will discuss shortly, the current study is an extension of the corresponding d-CDP algorithm that, among other things, considers infinite-horizon, discounted cost problems. We note that the algorithms developed in [17,25] for the "distance transform" can also potentially tackle optimal control problems similar to the ones of interest in the current study. In particular, these algorithms require the stage cost to be reformulated as a convex function of the "distance" between the current and next states. While this property might arise naturally, it can generally be restrictive, as it is in the problem class considered in this study. Another line of work that is closely related to ours involves utilizing max-plus algebra in solving deterministic, continuous-state, continuous-time optimal control problems; see, e.g., [2,26]. These works exploit the compatibility of the DP operation with max-plus operations, and approximate the value function as a max-plus linear combination. Recently, in [3,6], the authors used this idea to propose an approximate VI algorithm for continuous-state, deterministic MDPs. In this regard, we note that the proposed approach in the current study also involves approximating the value function as a max-plus linear combination, namely, the maximum of affine functions. The key difference, however, is that by choosing a grid-like (factorized) set of slopes for the linear terms (i.e., the basis of the max-plus linear combination), we take advantage of the linear time complexity of LLT in computing the constant terms (i.e., the coefficients of the max-plus linear combination).

Main contribution. In this study, we focus on an approximate implementation of VI involving discretization of the state and input spaces for solving the optimal control problem of discrete-time systems with continuous state-input space. Building upon our earlier work [21], we employ conjugate duality to speed up VI for problems with separable stage cost (in state and input) and input-affine dynamics. We propose the conjugate VI (ConjVI) algorithm based on a modified version of the d-CDP operator introduced in [21], and extend the existing results in three directions: we consider infinite-horizon, discounted cost problems with stochastic dynamics, while incorporating a numerical scheme for the approximation of the conjugate of the input cost. The main contributions of this paper are as follows: (i) we provide sufficient conditions for the convergence of ConjVI (Theorem 3.11); (ii) we show that ConjVI can achieve a linear time complexity of O(X + U) in each iteration (Theorem 3.12), compared to the quadratic time complexity of O(XU) of the standard VI, where X and U are the cardinalities of the discrete state and input spaces, respectively; (iii) we analyze the error of ConjVI (Theorem 3.13) and use that result to provide specific guidelines on the construction of the discrete dual domain (Section 3.4); (iv) we provide a MATLAB package for the implementation of the proposed ConjVI algorithm [22].

Paper organization. The problem statement and its standard solution via the VI algorithm (in the primal domain) are presented in Section 2. In Section 3, we present our main results: we begin by presenting the class of problems of interest, and then introduce the alternative approach for VI in the conjugate domain and its numerical implementation. The theoretical results on the convergence, complexity, and error of the proposed algorithm, along with the guidelines on the construction of the dual grids, are also provided in this section. In Section 4, we compare the performance of ConjVI with that of the VI algorithm through three numerical examples. Section 5 concludes the paper with some final remarks. All the technical proofs are provided in Appendix A.

Notations. We use R and R̄ = R ∪ {∞} to denote the real line and the extended reals, respectively, and E_w[·] to denote expectation with respect to (w.r.t.) the random variable w. The standard inner product in R^n and the corresponding induced 2-norm are denoted by ⟨·, ·⟩ and ‖·‖_2, respectively. We also use ‖·‖_2 to denote the operator norm (w.r.t. the 2-norm) of a matrix; i.e., for A ∈ R^{m×n}, we denote ‖A‖_2 = sup{‖Ax‖_2 : ‖x‖_2 = 1}. The infinity-norm is denoted by ‖·‖_∞. Continuous (infinite, uncountable) sets are denoted as X, Y, .... For finite (discrete) sets, we use the superscript d, as in X^d, Y^d, ..., to differentiate them from infinite sets. Moreover, we use the superscript g to differentiate grid-like finite sets. Precisely, a grid X^g ⊂ R^n is the Cartesian product X^g = Π_{i=1}^n X_i^g = X_1^g × ... × X_n^g, where X_i^g is a finite subset of R. We also use X^g_sub to denote the sub-grid of X^g derived by omitting the smallest and the largest elements of X^g in each dimension. The cardinality of a finite set X^d or X^g is denoted by X. Let X, Y be two arbitrary sets in R^n. The convex hull of X is denoted by co(X). The diameter of X is defined as ∆_X := sup_{x,x̃∈X} ‖x − x̃‖_2. We use d(X, Y) := inf_{x∈X, y∈Y} ‖x − y‖_2 to denote the distance between X and Y. The one-sided Hausdorff distance from X to Y is defined as d_H(X, Y) := sup_{x∈X} inf_{y∈Y} ‖x − y‖_2.
Let h : R^n → R̄ be an extended real-valued function with a non-empty effective domain dom(h) = X := {x ∈ R^n : h(x) < ∞}, and range rng(h) = max_{x∈X} h(x) − min_{x∈X} h(x). We use h^d : X^d → R to denote the discretization of h, where X^d is a finite subset of R^n. Whether a function is discrete is usually also clarified by providing its domain explicitly. We particularly use this notation in combination with a second operation to emphasize that the second operation is applied to the discretized version of the operand. E.g., we use [h^d] : R^n → R̄ to denote a generic extension of h^d; when the domain X^d = X^g of h^d is grid-like, the extension of interest is the one using multi-linear interpolation and extrapolation (LERP). The Lipschitz constant of h over a set Y ⊂ dom(h) is denoted by L(h; Y) := sup_{x,y∈Y} |h(x) − h(y)|/‖x − y‖_2. We also denote L(h) := L(h; dom(h)) and L(h) := Π_{i=1}^n [L_i^−(h), L_i^+(h)], where L_i^+(h) (resp. L_i^−(h)) is the maximum (resp. minimum) slope of the function h along the i-th dimension. The subdifferential of h at a point x ∈ X is defined as ∂h(x) := {y ∈ R^n : h(x̃) ≥ h(x) + ⟨y, x̃ − x⟩, ∀x̃ ∈ X}. Note that ∂h(x) ⊆ L(h) for all x ∈ X; in particular, L(h) = ∪_{x∈X} ∂h(x) if h is convex. The Legendre-Fenchel transform (convex conjugate) of h is the function h^* : R^n → R̄, defined by h^*(y) = sup_x {⟨y, x⟩ − h(x)}. Note that the conjugate function h^* is convex by construction. We again use the notation h^{d*} to emphasize that the domain of the underlying function is finite, that is, h^{d*}(y) = sup_{x∈X^d} {⟨y, x⟩ − h(x)}. The biconjugate and discrete biconjugate operators are defined accordingly. We report the complexities using the standard big-O notations O and Õ, where the latter hides the logarithmic factors. In this study, we are mainly concerned with the dependence of the computational complexities on the size of the finite sets involved (discretization of the primal and dual domains). In particular, we ignore the possible dependence of the computational complexities on the dimension of the variables, unless they appear in the power of the size of those discrete sets; e.g., the complexity of a single evaluation of an analytically available function is taken to be of O(1), regardless of the dimension of its input and output arguments.
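As an illustration of the conjugate notation h^{d*} used above, the following sketch evaluates a discrete conjugate by brute force over a 1-D grid. This is the O(XY) definition; the LLT algorithm of [24] returns the same values over a grid-like dual domain in essentially linear time. The quadratic test function below is only an illustration.

```python
import numpy as np

def discrete_conjugate(xs, h, ys):
    """Brute-force discrete conjugate h^{d*}(y) = max_{x in X^d} { <y, x> - h(x) } (1-D).

    This is the O(XY) definition; LLT [24] computes the same values over a
    grid-like dual domain in essentially linear time.
    """
    return np.max(np.outer(ys, xs) - h[None, :], axis=1)

# closed-form check: (0.5*x^2)^*(y) = 0.5*y^2
xs = np.linspace(-5.0, 5.0, 2001)
ys = np.array([-2.0, -1.0, 0.0, 1.0, 2.0])
print(discrete_conjugate(xs, 0.5 * xs ** 2, ys))   # ≈ [2.0, 0.5, 0.0, 0.5, 2.0]
```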
VI in primal domain
We are concerned with the infinite-horizon, discounted cost, optimal control problems of the form
J_⋆(x) = min E_{w_t} [ Σ_{t=0}^∞ γ^t C(x_t, u_t) | x_0 = x ]
s.t. x_{t+1} = g(x_t, u_t, w_t), x_t ∈ X, u_t ∈ U, w_t ∼ P(W), ∀t ∈ {0, 1, . . .},

where x_t ∈ R^n, u_t ∈ R^m, and w_t ∈ R^l are the state, input, and disturbance variables at time t, respectively; γ ∈ (0, 1) is the discount factor; C : X × U → R is the stage cost; g : R^n × R^m × R^l → R^n describes the dynamics; X ⊂ R^n and U ⊂ R^m describe the state and input constraints, respectively; and P(·) is the distribution of the disturbance over the support W ⊂ R^l. Assuming the stage cost C is bounded, the optimal value function J_⋆ solves the Bellman equation J_⋆ = T J_⋆, where T is the DP operator (C and J are extended to infinity outside their effective domains) [8, Prop. 1.2.2]:

(1)  T J(x) := min_u { C(x, u) + γ · E_w [J(g(x, u, w))] }, ∀x ∈ X.

Indeed, T is γ-contractive in the infinity-norm, i.e., ‖T J_1 − T J_2‖_∞ ≤ γ ‖J_1 − J_2‖_∞ [8, Prop. 1.2.4].
This property then gives rise to the VI algorithm J_{k+1} = T J_k, which converges to J_⋆ as k → ∞, for arbitrary initialization J_0. Moreover, assuming that the composition J ∘ g (for each w) and the cost C are jointly convex in the state and input variables, T also preserves convexity [9, Prop. 3.3.1].
For the numerical implementation of VI, we need to address three issues. First, we need to compute the expectation in (1). To simplify the exposition and include the computational cost of this operation explicitly, we consider disturbances with finite support in this study:

Assumption 2.1 (Disturbance with finite support). The disturbance w has a finite support W^d ⊂ R^l with a given probability mass function (p.m.f.) p : W^d → [0, 1].
Under the preceding assumption, we have E_w [J(g(x, u, w))] = Σ_{w∈W^d} p(w) · J(g(x, u, w)).¹ The second and more important issue is that the optimization problem (1) is infinite-dimensional for the continuous state space X. This renders the exact implementation of VI impossible, except for a few cases with available closed-form solutions. A common solution to this problem is to deploy a sample-based approach, accompanied by a function approximation scheme. To be precise, for a finite subset X^d of X, at each iteration k = 0, 1, . . ., we take the discrete function J^d_k : X^d → R as the input, and compute the discrete function J^d_{k+1} = [T [J^d_k]]^d : X^d → R, where [J^d_k] : X → R is an extension of J^d_k.²
Finally, for each x ∈ X d , we have to solve the minimization problem in (1) over the control input. Since this minimization problem is often a difficult, non-convex problem, a common approximation again involves enumeration over a discretization U d ⊂ U of the input space.
Incorporating these approximations, we end up with the approximate VI algorithm J d k+1 = T d J d k , characterized by the discrete DP (d-DP) operator
(2)  T^d J^d(x) := min_{u∈U^d} { C(x, u) + γ · Σ_{w∈W^d} p(w) · [J^d](g(x, u, w)) }, ∀x ∈ X^d.
The convergence of approximate VI described above depends on the properties of the extension operation [·].
In particular, if [·] is non-expansive (in the infinity-norm), then T^d is also γ-contractive. For example, for a grid-like discretization of the state space X^d = X^g, the extension using interpolative LERP is non-expansive; see Lemma A.2. The error of this approximation (lim_k ‖J^d_k − J_⋆^d‖_∞) also depends on the extension operation [·] and its representative power. We refer the interested reader to [8,11,27] for detailed discussions on the convergence and error of different approximation schemes for VI.

¹ Indeed, W^d can be considered as a finite approximation of the true support W of the disturbance. Moreover, one can consider other approximation schemes, such as Monte Carlo simulation, for this expectation operation.
² The extension can be considered as a generic parametric approximation J^θ_k : X → R, where the parameters θ_k are computed using regression, i.e., by fitting J^θ_k to the data points J^d_k : X^d → R.
The d-DP operator and the corresponding approximate VI algorithm will be our benchmark for evaluating the performance of the alternative algorithm developed in this study. To this end, we finish this section with some remarks on the time complexity of the d-DP operation. Let the time complexity of a single evaluation of the extension operator [·] in (2) be of O(E). Then, the time complexity of the d-DP operation (2) is of O(XUWE). In this regard, note that the scheme described above essentially involves approximating a continuous-state/action MDP with a finite-state/action MDP, and then applying VI. This, in turn, implies the lower bound Ω(XU) for the time complexity (corresponding to the enumeration over u ∈ U^d for each x ∈ X^d). This lower bound is also compatible with the best existing time complexities in the literature for VI for finite MDPs; see, e.g., [3,28]. However, as we will see in the next section, for a particular class of problems, it is possible to exploit the structure of the underlying continuous system to achieve a better time complexity in the corresponding discretized problem.
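The following is a minimal sketch of the d-DP operation (2) for a hypothetical 1-D problem, with linear interpolation playing the role of the extension [·]; the dynamics x⁺ = 0.9x + u + w and the quadratic stage cost are illustrative placeholders, not one of the paper's examples. The nested enumeration over X^d and U^d makes the O(XU·W·E) per-iteration cost visible.

```python
import numpy as np

gamma = 0.95
Xg = np.linspace(-1.0, 1.0, 41)                  # state grid X^d
Ug = np.linspace(-0.5, 0.5, 21)                  # input grid U^d
Wd, pw = np.array([-0.05, 0.0, 0.05]), np.full(3, 1 / 3)

def C(x, u):                                     # placeholder stage cost
    return x ** 2 + u ** 2

def d_dp(Jd):
    J_next = np.empty_like(Jd)
    for i, x in enumerate(Xg):
        EJ = np.zeros_like(Ug)
        for w, p in zip(Wd, pw):
            # [J^d] extended by linear interpolation, evaluated at the next states
            EJ += p * np.interp(0.9 * x + Ug + w, Xg, Jd)
        # enumeration over u in U^d: O(U) per state, hence O(XU) per iteration
        J_next[i] = np.min(C(x, Ug) + gamma * EJ)
    return J_next

Jd = np.zeros_like(Xg)
for _ in range(300):
    Jd = d_dp(Jd)
```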
Reducing complexity via conjugate duality
In this section, we present the class of problems that allows us to employ conjugate duality and propose an alternative path for solving the corresponding DP operator. We also present the numerical scheme for implementing the proposed alternative path and analyze its convergence, complexity, and error. We note that the proposed algorithm and its analysis are based on the d-CDP algorithm presented in [21,Sec. 5] for finite-horizon, optimal control of deterministic systems. Here, we extend those results for infinite-horizon, discounted cost, optimal control of stochastic systems. Moreover, unlike [21], our analysis includes the case where the conjugate of input cost is not analytically available and has to be computed numerically; see [21,Assump. 5.1] for more details.
VI in conjugate domain
Throughout this section, we assume that the problem data satisfy the following conditions. Assumption 3.1 (Problem class). The problem data has the following properties:
(i) The dynamics is of the form g(x, u, w) = f (x, u) + w = f s (x) + Bu + w, with additive disturbance,
where f s : R n → R n is a Lipschitz continuous, possibly nonlinear map, and B ∈ R n×m . (ii) The stage cost C is separable in state and input; that is, C(x, u) = C s (x) + C i (u), where the state cost C s : X → R and the input cost C i : U → R are Lipschitz continuous. (iii) The constraint sets X ⊂ R n and U ⊂ R m are compact. Moreover, for each x ∈ X, the set of admissible inputs U(x) := {u ∈ U : g(x, u, w) ∈ X, ∀w ∈ W d } is nonempty.
Some remarks are in order regarding the preceding assumptions. We first note that the setting of Assumption 3.1 goes beyond the classical LQR. In particular, it includes nonlinear dynamics, state and input constraints, and non-quadratic stage costs. Second, the properties laid out in Assumption 3.1 imply that the set of admissible inputs U(x) is a compact set for each x ∈ X. This, in turn, implies that the optimal value in (1) is achieved if J : X → R is also assumed to be lower semi-continuous. Finally, as we discuss shortly, the two assumptions on the dynamics and the cost play an essential role in the derivation of the alternative algorithm and its computationally efficient implementation.
For the problem class of Assumption 3.1, we can use duality theory to present an alternative path for computing the output of the DP operator. This path forms the basis for the algorithm proposed in this study. To this end, let us fix x ∈ X and consider the following reformulation of the optimization problem (1):

T J(x) = C_s(x) + min_{u,z} { C_i(u) + γ · E_w J(z + w) : z = f(x, u) },

where we used the additivity of the disturbance and the separability of the stage cost. The corresponding dual problem then reads as

(3)  T̂ J(x) := C_s(x) + max_y min_{u,z} { C_i(u) + γ · E_w J(z + w) + ⟨y, f(x, u) − z⟩ },

where y ∈ R^n is the dual variable corresponding to the equality constraint. For the dynamics of Assumption 3.1-(i), we can then obtain the following representation of the dual problem:

ε(x) := γ · E_w J(x + w),  x ∈ X,  (4a)
φ(y) := C_i^*(−B^T y) + ε^*(y),  y ∈ R^n,  (4b)
T̂ J(x) = C_s(x) + φ^*(f_s(x)),  x ∈ X,  (4c)
where [·] * denotes the conjugate operation.
Following [21], we call the operator T̂ in (4) the conjugate DP (CDP) operator. We next provide an alternative representation of the CDP operator that captures the essence of this operation:

(5)  T̂ J(x) = C_s(x) + min_u { C_i^{**}(u) + γ · [E_w J(· + w)]^{**}(f(x, u)) },
where [·] * * denotes the biconjugate operation.
The preceding result implies that the indirect path through the conjugate domain essentially involves substituting the input cost and (the expectation of the) value function by their biconjugates. In particular, it points to a sufficient condition for zero duality gap: T̂ has the same properties as T if C_i and J are convex. More importantly, if T and T̂ preserve convexity, then the conjugate VI (ConjVI) algorithm J_{k+1} = T̂ J_k also converges to the optimal value function J_⋆, with arbitrary convex initialization J_0. For convexity to be preserved, however, we need more assumptions: first, the state cost C_s : X → R needs to be also convex; then, for T̂ J to be convex, a sufficient condition is convexity of J ∘ f (jointly in x and u), given that J is convex. The following assumption summarizes the sufficient conditions for equivalence of the VI and ConjVI algorithms.

Algorithm 1: ConjVI
initialization:
2: use LLT to compute C_i^{d*d} : V^g → R from C_i^d : U^d → R;
3: construct the grid Z^g;
4: construct the grid Y^g;
5: J^d(x) ← 0 for x ∈ X^d;
6: J_+^d(x) ← C_s^d(x) − min C_i^d for x ∈ X^d;
iteration:
7: while ‖J_+^d − J^d‖_∞ ≥ e_t do
8:   J^d ← J_+^d;
d-CDP operation:
9:   ε^d(x) ← γ · Σ_{w∈W^d} p(w) · [J^d](x + w) for x ∈ X^d;
10:  use LLT to compute ε^{d*d} : Y^g → R from ε^d : X^d → R;
11:  for each y ∈ Y^g do
12:    use LERP to compute [C_i^{d*d}](−B^T y) from C_i^{d*d} : V^g → R;
13:    ϕ^d(y) ← [C_i^{d*d}](−B^T y) + ε^{d*d}(y);
14:  end for
15:  use LLT to compute ϕ^{d*d} : Z^g → R from ϕ^d : Y^g → R;
16:  for each x ∈ X^d do
17:    use LERP to compute [ϕ^{d*d}](f_s(x)) from ϕ^{d*d} : Z^g → R;
18:    J_+^d(x) ← C_s(x) + [ϕ^{d*d}](f_s(x));
19:  end for
20: end while
21: output J^d ← J_+^d.
We note that the last condition in the preceding assumption usually does not hold for nonlinear dynamics; however, for f_s(x) = Ax with A ∈ R^{n×n}, this is indeed the case for problems satisfying Assumptions 3.1 and 3.5 [7]. Note that, if convexity is not preserved, then the alternative path suffers from a duality gap, in the sense that in each iteration it uses the convex envelope of (the expectation of) the output of the previous iteration.
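As a sanity check of the conjugate path (4) against the primal operator (1), the following sketch compares the two on a convex, deterministic 1-D instance (quadratic costs, linear dynamics), so that the duality gap is zero; the conjugates are computed by brute force over fine grids standing in for the continuous domains, so the two values agree only up to discretization error. All problem data here are illustrative.

```python
import numpy as np

def conj(xs, h, ys):
    # discrete conjugate h*(y) = max_x { y*x - h(x) }, brute force in 1-D
    return np.max(np.outer(ys, xs) - h[None, :], axis=1)

# Convex, deterministic 1-D instance (illustrative data): f(x, u) = f_s(x) + B*u.
gamma, B = 0.95, 1.0
f_s = lambda x: 0.5 * x
C_s = lambda x: x ** 2
C_i = lambda u: u ** 2
J   = lambda z: 2.0 * z ** 2            # a convex "current" value function

xs = np.linspace(-3.0, 3.0, 1201)       # fine grids standing in for the continuous domains
us = np.linspace(-3.0, 3.0, 1201)
ys = np.linspace(-12.0, 12.0, 1201)

x0 = 0.8
# primal path (1): C_s(x) + min_u { C_i(u) + gamma * J(f(x, u)) }
direct = C_s(x0) + np.min(C_i(us) + gamma * J(f_s(x0) + B * us))
# conjugate path (4): eps = gamma*J, phi(y) = C_i*(-B*y) + eps*(y), then C_s(x) + phi*(f_s(x))
eps_star = conj(xs, gamma * J(xs), ys)
phi = conj(us, C_i(us), -B * ys) + eps_star
dual = C_s(x0) + np.max(ys * f_s(x0) - phi)
print(direct, dual)                     # both ~0.7448: zero duality gap for convex data
```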
ConjVI algorithm
The approximate ConjVI algorithm involves consecutive applications of an approximate implementation of the CDP operator (4) until some termination condition is satisfied. Algorithm 1 provides the pseudo-code of this procedure. In particular, we consider solving (4) for a finite set X d ⊂ X, and terminate the iterations when the difference between two consecutive discrete value functions (in the infinity-norm) is less than a given constant e t > 0; see Algorithm 1:7. Since we are working with a finite subset of the state space, we can restrict the feasibility condition of Assumption 3.1-(iii) to all x ∈ X d (as opposed to all x ∈ X):
Assumption 3.6 (Feasible discretization). The set of admissible inputs U(x) is nonempty for all x ∈ X^d.
In what follows, we describe the main steps within the initialization and iterations of Algorithm 1. In particular, the conjugate operations in (4) are handled numerically via the linear-time Legendre transform (LLT) algorithm [24]. LLT is an efficient algorithm for computing the discrete conjugate function over a finite grid-like dual domain. Precisely, to compute the conjugate of the function h : X → R, LLT takes its discretization h d : X d → R as an input, and outputs h d * d : Y g → R, for the grid-like dual domain Y g . We refer the reader to [24] for a detailed description of LLT. The main steps of the proposed approximate implementation of the CDP operator (4) are as follows:
(i) For the expectation operation in (4a), by Assumption 2.1, we again have E_w J(· + w) = Σ_{w∈W^d} p(w) · J(· + w). Hence, we need to pass the value function J^d : X^d → R through the "scaled expectation filter" to obtain ε^d : X^d → R in (6a) as an approximation of ε in (4a). Notice that here we are using an extension [J^d] : X → R of J^d (recall that we only have access to the discrete value function J^d).
(ii) To compute φ in (4b), we need access to two conjugate functions:
(a) For ε^*, we use the approximation ε^{d*d} : Y^g → R in (6b), obtained by applying LLT to the data points ε^d : X^d → R for a properly chosen state dual grid Y^g ⊂ R^n.
(b) If the conjugate C_i^* of the input cost is not analytically available, we approximate it as follows: for a properly chosen input dual grid V^g ⊂ R^m, we employ LLT to compute C_i^{d*d} : V^g → R in (6c), using the data points C_i^d : U^d → R, where U^d is a finite subset of U.
With these conjugate functions at hand, we can now compute ϕ^d : Y^g → R in (6d) as an approximation of φ in (4b). In particular, notice that we use the LERP extension [C_i^{d*d}] of C_i^{d*d} to approximate C_i^* at the required point (−B^T y) for each y ∈ Y^g.
(iii) To be able to compute the output according to (4c), we need to perform another conjugate transform. In particular, we need the value of φ^* at f_s(x) for x ∈ X^d. Here, we use the approximation ϕ^{d*d} : Z^g → R in (6e), obtained by applying LLT to the data points ϕ^d : Y^g → R for a properly chosen grid Z^g ⊂ R^n. Finally, we use the LERP extension [ϕ^{d*d}] of ϕ^{d*d} to approximate ϕ^{d*} at the required point f_s(x) for each x ∈ X^d, and compute T̂^d J^d in (6f) as an approximation of T̂ J in (4c).
With these approximations, we can introduce the discrete CDP (d-CDP) operator as follows
ε^d(x) := γ · Σ_{w∈W^d} p(w) · [J^d](x + w),  x ∈ X^d,  (6a)
ε^{d*d}(y) = max_{x∈X^d} { ⟨x, y⟩ − ε^d(x) },  y ∈ Y^g,  (6b)
C_i^{d*d}(v) = max_{u∈U^d} { ⟨u, v⟩ − C_i^d(u) },  v ∈ V^g,  (6c)
ϕ^d(y) := [C_i^{d*d}](−B^T y) + ε^{d*d}(y),  y ∈ Y^g,  (6d)
ϕ^{d*d}(z) = max_{y∈Y^g} { ⟨y, z⟩ − ϕ^d(y) },  z ∈ Z^g,  (6e)
T̂^d J^d(x) := C_s(x) + [ϕ^{d*d}](f_s(x)),  x ∈ X^d.  (6f)
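The following is a compact sketch of one application of the d-CDP operator (6) on a hypothetical 1-D stochastic problem. The two LLT calls are replaced by brute-force discrete conjugation and the LERP extensions by clamped linear interpolation, so the sketch runs in O(XY) rather than the linear time that LLT provides; it is meant only to make the data flow (6a)–(6f) explicit. The grids and problem data are illustrative and not taken from the paper's examples.

```python
import numpy as np

def dconj(xs, h, ys):
    # stand-in for LLT: brute-force discrete conjugate over a 1-D dual grid
    return np.max(np.outer(ys, xs) - h[None, :], axis=1)

# Hypothetical 1-D problem data (not taken from the paper's examples).
gamma, B = 0.95, 1.0
f_s = lambda x: 0.5 * x + 0.1 * np.sin(x)
C_s = lambda x: x ** 2
C_i = lambda u: u ** 2

Xd = np.linspace(-1.0, 1.0, 41)                       # X^d
Ud = np.linspace(-0.5, 0.5, 21)                       # U^d
Wd, pw = np.array([-0.05, 0.0, 0.05]), np.full(3, 1 / 3)
Yg = np.linspace(-8.0, 8.0, 41)                       # state dual grid Y^g
Vg = np.linspace(-1.2, 1.2, 23)                       # input dual grid V^g
Zg = np.linspace(f_s(Xd).min(), f_s(Xd).max(), 41)    # co(Z^g) covers f_s(X^d)

Ci_conj = dconj(Ud, C_i(Ud), Vg)                      # (6c), computed once at initialization

def d_cdp(Jd):
    ext_J = lambda x: np.interp(x, Xd, Jd)                        # extension [J^d] (clamped interp)
    eps = gamma * sum(p * ext_J(Xd + w) for p, w in zip(pw, Wd))  # (6a)
    eps_conj = dconj(Xd, eps, Yg)                                 # (6b)
    phi = np.interp(-B * Yg, Vg, Ci_conj) + eps_conj              # (6d); np.interp clamps where LERP extrapolates
    phi_conj = dconj(Yg, phi, Zg)                                 # (6e)
    return C_s(Xd) + np.interp(f_s(Xd), Zg, phi_conj)             # (6f)

Jd = np.zeros_like(Xd)
for _ in range(200):
    Jd = d_cdp(Jd)
```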
The proper construction of the grids Y g , V g , and Z g will be discussed in Section 3.4. We finish this subsection with the following remarks on the modification of the proposed algorithm for two special cases.
Remark 3.7 (Deterministic systems). For deterministic systems, i.e., g(x, u, w) = f (x, u), we do not need to compute any expectation. Then, the operation in (6a) becomes the simple scaling ε d = γ · J d .
Remark 3.8 (Analytically available C_i^*). If the conjugate C_i^* of the input cost is analytically available, we can use it directly in (6d) instead of [C_i^{d*d}] and avoid the corresponding approximation; i.e., there is no need for the construction of V^g and the computation of C_i^{d*d} in (6c).
Analysis of ConjVI algorithm
We now provide our main theoretical results concerning the convergence, complexity, and error of the proposed algorithm. Let us begin by presenting the assumptions to be called in this subsection. Assumption 3.9 (Grids). Consider the following properties for the grids in Algorithm 1 (consult the Notations in Section 1):
(i) The grid V^g is constructed such that co(V^g_sub) ⊇ L(C_i^d).
(ii) The grid Z^g is constructed such that co(Z^g) ⊇ f_s(X^d).
(iii) The construction of Y^g, V^g, and Z^g requires at most O(X + U) operations. The cardinality of the grids Y^g and Z^g (resp. V^g) in each dimension is the same as that of X^d (resp. U^d) in that dimension, so that Y, Z = X and V = U.

Assumption 3.10 (Extension operator). Consider the following properties for the extension operator [·]:
(i) The extension operator is non-expansive w.r.t. the infinity-norm; that is, for two discrete functions J_i^d : X^d → R (i = 1, 2) and their extensions [J_i^d] : X → R, we have ‖[J_1^d] − [J_2^d]‖_∞ ≤ ‖J_1^d − J_2^d‖_∞.
(ii) Given a function J : X → R and its discretization J^d : X^d → R, the error of the extension operator is uniformly bounded, that is, ‖J − [J^d]‖_∞ ≤ e_e for some constant e_e ≥ 0.
Our first result concerns the contractiveness of the d-CDP operator.
Theorem 3.11 (Convergence). Let Assumptions 3.9-(ii) and 3.10-(i) hold. Then, the d-CDP operator (6) is γ-contractive w.r.t. the infinity-norm.
The preceding theorem implies that the approximate ConjVI Algorithm 1 is indeed convergent given that the required conditions are satisfied. In particular, for deterministic dynamics, co(Z g ) ⊇ f s X d is sufficient for Algorithm 1 to be convergent. We next consider the time complexity of our algorithm. The requirements of Assumption 3.9-(iii) will be discussed in Section 3.4. Recall that each iteration of VI (in primal domain) has a complexity of O(XU W E), where E denotes the complexity of the extension operation used in (2). This observation points to a basic characteristic of the proposed approach: ConjVI reduces the quadratic complexity of VI to a linear one by replacing the minimization operation in the primal domain with a simple addition in the conjugate domain. Hence, for the problem class of Assumption 3.1, ConjVI is expected to lead to a reduction in the computational cost. We note that ConjVI, like VI and other approximation schemes that utilize discretization/abstraction of the continuous state and input spaces, still suffers from the so-called "curse of dimensionality." This is because the sizes X and U of the discretizations increase exponentially with the dimensions n and m of the corresponding spaces. However, for ConjVI, this exponential increase is of rate max{m, n}, compared to the rate m + n for VI.
Let us also note that the most crucial step that allows the speedup discussed above is the interpolative discrete conjugation in (6f) that approximates ϕ d * d at the point f s (x). In this regard, notice that we can alternatively compute ϕ d * d f s (x) = max y∈Y g y, f s (x) − ϕ d (y) exactly via enumeration over y ∈ Y g for each x ∈ X d (then, the computation of ϕ d * d : Z g → R in (6e) is not needed anymore). However, this approach requires O(XY ) = O(X 2 ) operations in the last step, hence rendering the proposed approach computationally impractical. Of course, the application of interpolative discrete conjugation has its cost: The LERP extension in (6f) can lead to non-convex outputs (even if Assumption 3.5 holds true). This, in turn, can introduce a dualization error. We finish with the following result on the error of the proposed ConjVI algorithm.
Theorem 3.13 (Error). Let Assumptions 3.5, 3.9-(i)&(ii), and 3.10-(i) hold. Consider the true optimal value function J_⋆ = T J_⋆ : X → R and its discretization J_⋆^d : X^d → R, and let Assumption 3.10-(ii) hold for J_⋆. Also, let J^d : X^d → R be the output of Algorithm 1. Then,

(7)  ‖J^d − J_⋆^d‖_∞ ≤ [ γ(e_e + e_t) + e_d ] / (1 − γ),

where e_d = e_u + e_v + e_x + e_y + e_z, and

e_u = c_u · d_H(U, U^d),  (8a)
e_v = c_v · d_H(co(V^g), V^g),  (8b)
e_x = c_x · d_H(X, X^d),  (8c)
e_y = c_y · max_{x∈X^d} d(∂(J_⋆ − C_s)(x), Y^g),  (8d)
e_z = c_z · d_H(f_s(X^d), Z^g),  (8e)

with constants c_u, c_v, c_x, c_y, c_z > 0 depending on the problem data.
Let us first note that Assumption 3.5 implies that the DP and CDP operators preserve convexity, and they both have the true optimal value function J as their fixed point (i.e., the duality gap is zero). Otherwise, the proposed scheme can suffer from large errors due to dualization. Moreover, Assumptions 3.9-(i)&(ii) on the grids V g and Z g are required for bounding the error of approximate discrete conjugations using LERP in (6d) and (6f); see the proof of Lemmas A.6 and A.8. The remaining sources of error in the proposed approximate implementation of ConjVI are captured by the three error terms in (7):
(i) e e is due to the approximation of the value function using the extension operator [·]; (ii) e t corresponds to the termination of the algorithm after a finite number of iterations; (iii) e d captures the error due to the discretization of the primal and dual state and input domains.
We again finish with the following remarks on the modification of the proposed algorithm for deterministic systems and analytically available C * i .
Remark 3.14 (Deterministic systems). If the dynamics are deterministic, then the complexity of each iteration of Algorithm 1 reduces to O(X). Moreover, in this case, the error term e e disappears.
Remark 3.15 (Analytically available C * i ). If the conjugate C * i of the input cost is analytically available and used in (6d) instead of the LERP extension C d * d i , the error term due to discretization modifies to e d = e x + e y + e z . That is, the error terms e u and e v corresponding to the discretization of the primal and dual input spaces disappear.
Construction of the grids
In this subsection, we provide specific guidelines for the construction of the grids Y g , V g and Z g . We note that these discrete sets must be grid-like since they form the dual grid for the three conjugate transforms that are handled using LLT. The presented guidelines aim to minimize the error terms in (8) while taking into account the properties laid out in Assumption 3.9. In particular, the schemes described below satisfy the requirements of Assumption 3.9-(iii).
Construction of V^g. Assumption 3.9-(i) and the error term e_v in (8b) suggest that we find the smallest input dual grid V^g such that co(V^g_sub) ⊇ L(C_i^d). This latter condition essentially means that V^g must "more than cover the range of slope" of the function C_i^d; recall that L(C_i^d) = Π_{j=1}^m [L_j^−(C_i^d), L_j^+(C_i^d)], where L_j^−(C_i^d) (resp. L_j^+(C_i^d)) is the minimum (resp. maximum) slope of C_i^d along the j-th dimension. Hence, we need to compute/approximate L_j^±(C_i^d) for j = 1, . . . , m. A conservative approximation is L_j^−(C_i) = min ∂C_i/∂u_j and L_j^+(C_i) = max ∂C_i/∂u_j, assuming C_i is differentiable. Alternatively, we can directly use the discrete input cost C_i^d for computing L_j^±(C_i^d). In particular, if the domain U^d = U^g = Π_{j=1}^m U_j^g of C_i^d is grid-like and C_i is convex, we can take L_j^−(C_i^d) (resp. L_j^+(C_i^d)) to be the minimum first forward difference (resp. maximum last backward difference) of C_i^d along the j-th dimension (this scheme requires O(U) operations). Having L_j^±(C_i^d) at our disposal, we can then construct V^g_sub = Π_{j=1}^m V^g_{sub,j} such that, in each dimension j, V^g_{sub,j} is uniform and has the same cardinality as U_j^g, and co(V^g_{sub,j}) = [L_j^−(C_i^d), L_j^+(C_i^d)]. Finally, we construct V^g by extending V^g_sub uniformly in each dimension (by adding a smaller and a larger element to V^g_sub in each dimension while preserving the resolution in that dimension).
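A minimal sketch of this V^g construction for a scalar input, assuming a convex C_i given on a uniform grid U^g; the slopes are estimated from the first forward and last backward differences as described above, and the grid is then extended by one point on each side.

```python
import numpy as np

def build_Vg(Ug, Ci_d):
    """Input dual grid V^g for a scalar input, following Section 3.4.

    Ug: sorted 1-D input grid U^g; Ci_d: values of a convex C_i on Ug.
    co(V^g_sub) covers [min slope, max slope] of C_i^d, estimated from the first
    forward and last backward differences; V^g adds one point on each side.
    """
    slopes = np.diff(Ci_d) / np.diff(Ug)
    L_minus, L_plus = slopes[0], slopes[-1]           # min / max slope for convex C_i
    V_sub = np.linspace(L_minus, L_plus, len(Ug))     # same cardinality as U^g
    step = V_sub[1] - V_sub[0] if len(V_sub) > 1 and L_plus > L_minus else 1.0
    return np.concatenate(([V_sub[0] - step], V_sub, [V_sub[-1] + step]))

Ug = np.linspace(-2.0, 2.0, 21)
Vg = build_Vg(Ug, np.exp(np.abs(Ug)) - 1.0)           # e.g. one channel of the cost in Example 1
```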
Construction of Z^g. According to Assumption 3.9-(ii), the grid Z^g must be constructed such that co(Z^g) ⊇ f_s(X^d). This can simply be done by finding the vertices of the smallest box that contains the set f_s(X^d). These vertices give the diameter of Z^g in each dimension. We can then, for example, take Z^g to be the uniform grid with the same cardinality as Y^g in each dimension (so that Z = Y). This way, d_H(f_s(X^d), Z^g) ≤ d_H(co(Z^g), Z^g), and hence e_z in (8e) reduces by using finer grids Z^g. This construction has a time complexity of O(X).
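A corresponding sketch of the Z^g construction from the bounding box of f_s(X^d); here X^d is stored as an (X, n) array and the output is one uniform 1-D grid per dimension.

```python
import numpy as np

def build_Zg(fs_Xd, N):
    """Uniform Z^g with co(Z^g) covering f_s(X^d), one 1-D grid per dimension.

    fs_Xd: (X, n) array holding f_s(x) for every x in X^d;
    N: points per dimension (matched to Y^g so that Z = Y).
    """
    lo, hi = fs_Xd.min(axis=0), fs_Xd.max(axis=0)     # smallest enclosing box: O(X)
    return [np.linspace(lo[i], hi[i], N) for i in range(fs_Xd.shape[1])]
```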
Construction of Y^g. Construction of the state dual grid Y^g is more involved. According to Theorem 3.13, we need to choose a grid that minimizes e_y in (8d). This can be done by choosing Y^g such that Y^g ∩ ∂(J_⋆ − C_s)(x) ≠ ∅ for all x ∈ X^d, so that e_y = 0. Even if we had access to the optimal value function J_⋆, satisfying such a condition could lead to a dual grid Y^g ⊂ R^n of size O(X^n). Such a large size violates Assumption 3.9-(iii) on the size of Y^g, and essentially renders the proposed algorithm impractical for dimensions n ≥ 2. A more practical condition is co(Y^g) ∩ ∂(J_⋆ − C_s)(x) ≠ ∅ for all x ∈ X^d, so that

max_{x∈X^d} d(∂(J_⋆ − C_s)(x), Y^g) ≤ d_H(co(Y^g), Y^g),

and hence e_y reduces by using a finer grid Y^g. The latter condition is satisfied if co(Y^g) ⊇ L(J_⋆ − C_s), i.e., if co(Y^g) "covers the range of slope" of (J_⋆ − C_s). Hence, we need to approximate the range of slope of (J_⋆ − C_s). To this end, we first use the fact that J_⋆ is the fixed point of the DP operator (1) to approximate rng(J_⋆ − C_s) by

R = [rng(C_i^d) + γ · rng(C_s^d)] / (1 − γ).

We then construct the grid Y^g = Π_{i=1}^n Y_i^g such that, for each dimension i, we have

(9)  ±αR/∆_i(X^d) ∈ co(Y_i^g),

where ∆_i(X^d) denotes the diameter of the projection of X^d on the i-th dimension. Here, the coefficient α > 0 is a scaling factor mainly depending on the dimension of the state space. In particular, by setting α = 1, the value R/∆_i(X^d) is the slope of a linear function with range R over the domain ∆_i(X^d). This construction has a one-time computational cost of O(X + U) for computing rng(C_i^d) and rng(C_s^d).

Dynamic construction of Y^g. Alternatively, we can construct Y^g dynamically at each iteration to minimize the corresponding error in each application of the d-CDP operator, given by (see Lemma A.7 and Proposition A.3)

e_y = c_y · max_{x∈X^d} d(∂(T̂ J − C_s)(x), Y^g).

This means that line 4 in Algorithm 1 is moved inside the iterations, after line 8. Similar to the static scheme described above, the aim here is to construct Y^g such that co(Y^g) ⊇ L(T̂ J − C_s). Since we do not have access to T̂ J (it is the output of the current iteration), we can again use the definition of the DP operator (1) to approximate rng(T̂ J − C_s) by R = rng(C_i^d) + γ · rng(J^d), where J^d is the output of the previous iteration. We then construct the grid Y^g = Π_{i=1}^n Y_i^g such that, for each dimension i, the condition (9) holds. This construction has a one-time computational cost of O(U) for computing rng(C_i^d) and a per-iteration computational cost of O(X) for computing rng(J^d). Notice, however, that under this dynamic construction, the error bound of Theorem 3.13 does not hold true. More importantly, with a dynamic grid Y^g that varies in each iteration, there is no guarantee for ConjVI to converge.
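A sketch of the Y^g construction per (9), covering both the static slope-range estimate R = [rng(C_i^d) + γ·rng(C_s^d)]/(1 − γ) and the dynamic estimate R = rng(C_i^d) + γ·rng(J^d); the helper names below are hypothetical.

```python
import numpy as np

def build_Yg(Xd, N, R, alpha=1.0):
    """Uniform Y^g per dimension satisfying (9): +/- alpha*R/Delta_i(X^d) in co(Y_i^g).

    Xd: (X, n) array of discrete states, N: points per dimension, R: slope-range
    estimate -- static: (rng(Ci_d) + gamma*rng(Cs_d)) / (1 - gamma); dynamic:
    rng(Ci_d) + gamma*rng(Jd), recomputed in every iteration.
    """
    deltas = Xd.max(axis=0) - Xd.min(axis=0)          # Delta_i(X^d) per dimension
    return [np.linspace(-alpha * R / d, alpha * R / d, N) for d in deltas]

rng_of = lambda v: float(v.max() - v.min())
# static construction (once, before the iterations):
#   Yg = build_Yg(Xd, N, (rng_of(Ci_d) + gamma * rng_of(Cs_d)) / (1 - gamma))
# dynamic construction (inside the loop, from the previous iterate J^d):
#   Yg = build_Yg(Xd, N, rng_of(Ci_d) + gamma * rng_of(Jd))
```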
Numerical simulations
In this section, we compare the performance of the proposed ConjVI algorithm with the benchmark VI algorithm (in primal domain) through three numerical examples. For the first example, we focus on a synthetic system satisfying the conditions of assumptions considered in this study to examine our theoretical results. We then showcase the application of ConjVI in solving the optimal control problem of an inverted pendulum and a batch reactor. The simulations were implemented via MATLAB version R2017b, on a PC with an Intel Xeon 3.60 GHz processor and 16 GB RAM. We also provide the ConjVI MATLAB package [22] for the implementation of the proposed algorithm. The package also includes the numerical simulations of this section. We note that multiple routines in the developed package are borrowed from the d-CDP MATLAB package [23]. Also, for the discrete conjugation (LLT), we used the MATLAB package (in particular, the LLTd routine) provided in [24].
Example 1 -Synthetic
We consider the linear system x⁺ = Ax + Bu + w with A = [2 1; 1 3], B = [1 1; 1 2]. The problem of interest is the infinite-horizon optimal control of this system with cost functions C_s(x) = 10‖x‖_2^2 and C_i(u) = e^{|u_1|} + e^{|u_2|} − 2, and discount factor γ = 0.95. We consider state and input constraint sets X = [−1, 1]^2 and U = [−2, 2]^2, respectively. The disturbance is assumed to have a uniform distribution over the finite support W^d = {0, ±0.05} × {0} of size W = 3. Notice how the stage cost is a combination of a quadratic term (in state) and an exponential term (in input). In particular, the control problem at hand does not have a closed-form solution. We use uniform, grid-like discretizations X^g and U^g for the state and input spaces such that co(X^g) = X and co(U^g) = U. This choice allows us to deploy multilinear interpolation, which is non-expansive, as the extension operator [·] in the d-DP operation (2) in VI, and in the d-CDP operation (6a) in ConjVI. The grids V^g, Z^g ⊂ R^2 are also constructed uniformly, following the guidelines provided in Section 3.4. For the construction of Y^g ⊂ R^2, we also follow the guidelines of Section 3.4 with α = 1. In particular, we also consider the dynamic scheme for the construction of Y^g in ConjVI (hereafter referred to as ConjVI-d). Moreover, in each implementation of VI and ConjVI(-d), all of the involved grids (X^g, U^g, Y^g, V^g, Z^g) are chosen to be of the same size N^2 (with N points in each dimension). We are particularly interested in the performance of these algorithms as N increases. We note that the described setup satisfies all of the assumptions in this study.
The results of our numerical simulations are shown in Figure 1. As shown in Figures 1a, both VI and ConjVI are indeed convergent with a rate less than or equal to the discount factor γ = 0.95; see Theorem 3.11. In particular, ConjVI terminates in k t = 55 iterations, compared to k t = 102 iterations required for VI to reach the termination bound e t = 0.001. Not surprisingly, this faster convergence, combined with the lower time complexity of ConjVI in each iteration, leads to a significant reduction in the running time of this algorithm compared to VI. This effect can be seen in Figure 1b, where the run-time of ConjVI for N = 41 is an order of magnitude less than that of VI for N = 11. In this regard, we note that the setting of this numerical example leads to O(k t N 4 W ) and O(k t N 2 W ) time complexities for VI and ConjVI, respectively; see Theorem 3.12 and the discussion after that. Indeed, the running times in Figure 1b match these complexities.
Since we do not have access to the true optimal value function, in order to evaluate the performance of the outputs of the VI and ConjVI, we consider the performance of the greedy policy
µ(x) ∈ argmin_{u∈U(x)∩U^g} { C(x, u) + γ · E_w [J^d](g(x, u, w)) },
w.r.t. the discrete value function J d computed using these algorithms (we note that, for finding the greedy action, we used the same discretization U g of the input space and the same extension J d of the value function as the one used in VI and ConjVI, however, this need not be the case in general). Figure 1c reports the average cost of one hundred instances of the optimal control problem with greedy control actions. As shown, the reduction in the run-time in ConjVI comes with an increase in the cost of the controlled trajectories.
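For completeness, a sketch of such a greedy-policy evaluation for a hypothetical 1-D problem with a deterministic map f plus additive noise; the admissibility check U(x) is omitted for brevity and J^d is extended by linear interpolation, mirroring the setup of this example.

```python
import numpy as np

def greedy_action(x, Jd, Xd, Ug, f, C, Wd, pw, gamma):
    """Greedy input at state x w.r.t. a computed discrete value function J^d (1-D sketch).

    Enumerates u over the input grid U^g (the admissibility check U(x) is omitted)
    and extends J^d by linear interpolation over the state grid X^d.
    """
    q = []
    for u in Ug:
        EJ = sum(p * np.interp(f(x, u) + w, Xd, Jd) for p, w in zip(pw, Wd))
        q.append(C(x, u) + gamma * EJ)
    return Ug[int(np.argmin(q))]
```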
Let us now consider the effect of dynamic construction of the state dual grid Y g . As can be seen in Figure 1a, using a dynamic Y g leads to a slower convergence (ConjVI-d terminates in k t = 100 iterations). We note that the relative behavior of the convergence rates in Figures 1a was also seen for other grid sizes in the discretization scheme. However, we see a small increase in the running time of ConjVI-d compared to ConjVI since the per iteration complexity for ConjVI-d is again of O(k t N 2 W ); see Figure 1b. More importantly, as depicted in Figure 1c, ConjVI-d shows almost the same performance as VI when it comes to the quality of the greedy actions. This is because the dynamic construction of Y g in ConjVI-d uses the available computational power (related to the size of the discretization) smartly by finding the smallest grid Y g in each iteration, to minimize the error of that same iteration.
We note that our simulations show that, for the deterministic system, ConjVI-d has a similar convergence rate as ConjVI. This effect can be seen in Figure 2, where ConjVI-d terminates in 10 iterations. Interestingly, in this particular example, ConjVI converges to the fixed point after 7 iterations (J_8^d = T̂^d J_7^d) for the deterministic system. Let us finally note that the conjugate C_i^* of the input cost in the provided example is indeed analytically available. One can use this analytic representation to exactly compute C_i^* in (6d) and avoid the corresponding numerical approximation. With such a modification, the computational cost reduces; however, our numerical experiments show that, for the provided example, ConjVI outputs effectively the same value function within the same number of iterations (results are not shown here).
Example 2 -Inverted pendulum
We use the setup (model and stage cost) of [21, App. C.2.2] with discount factor γ = 0.95. In particular, the state and input costs are both quadratic ( · 2 2 ), and the discrete-time, nonlinear dynamics is of the form
x⁺ = f_s(x) + Bu + w, where f_s(x_1, x_2) = [x_1 + α_12 x_2; α_21 sin x_1 + α_22 x_2], B = [0; β], with α_12, α_21, α_22, β ∈ R.
State and input constraints are described by
X = [− π 3 , π 3 ]×[−π, π] ⊂ R 2 and U = [−3, 3] ⊂ R.
The disturbance has a uniform distribution over the finite support W g = {0, ±0.025 π 3 , ±0.05 π 3 } × {0, ±0.025π, ±0.05π} ⊂ R 2 of size W = 5 2 . We use uniform, grid-like discretizations X g and U g for the state and input spaces such that co(X g ) = [− π 4 , π 4 ] × [−π, π] ⊂ X and co(U g ) = U. This choice of discrete state space X g particularly satisfies the feasibility condition of Assumption 3.6. (Note however that the set X does not satisfy the feasibility condition of Assumption 3.1-(iii)). Also, we use nearest neighbor extension (which is non-expansive) for the extension operators in (2) for VI and in (6a) for ConjVI. The grids V g ⊂ R and Z g , Y g ⊂ R 2 are also constructed uniformly, following the guidelines of Section 3.4 (with α = 1). We again also consider the dynamic scheme for the construction of Y g . Moreover, in each implementation of VI and ConjVI(-d) the termination bound is e t = 0.001, and all of the involved grids are chosen to be of the same size N in each dimension, i.e., X = Y = Z = N 2 and U = V = N .
The results of simulations are shown in Figures 3 and 4. As reported, we essentially observe the same behaviors as before. In particular, the application of ConjVI(-d), especially for deterministic dynamics, leads to faster convergence and a significant reduction in the running time; see Figures 3a, 3b and 4. Note that Figure 4 also shows the non-monotone behavior of ConjVI-d for scaling factor α = 3. In this regard, recall that when the grid Y g is constructed dynamically and varies at each iteration, the d-CDP operator is not necessarily contractive. Moreover, as shown in Figures 3b and 3c, this dynamic scheme leads to a huge improvement in the performance of the corresponding greedy policy at the expense of a small increase in the computational cost.
Example 3 -Batch Reactor
Our last numerical example concerns the optimal control of a system with four states and two input channels, namely, an unstable batch reactor. The setup (dynamics, cost, and constraints) are borrowed from [20,Sec. 6]. In particular, we consider a deterministic linear dynamics x + = Ax + Bu, with costs C s (x) = 2 x 2 2 and C i (u) = u 2 2 , discount factor γ = 0.95, and constraints x ∈ X = [−2, 2] 4 ⊂ R 4 and u ∈ U = [−2, 2] 2 ⊂ R 2 . Once again, we use uniform, grid-like discretizations X g and U g for the state and input spaces such that co(X g ) = [−1, 1] 4 ⊂ X and co(U g ) = U. The grids V g ⊂ R 2 and Z g , Y g ⊂ R 4 are also constructed uniformly, following the guidelines of Section 3.4 (with α = 1). Moreover, in each implementation of VI and ConjVI, the termination bound is e t = 0.001 and all of the involved grids are chosen to be of the same size N in each dimension, i.e., X = Y = Z = N 4 and U = V = N 2 . Finally, we note that we use multi-linear interpolation and extrapolation for the extension operator in (2) for VI. Due to the extrapolation, the extension operator is no longer non-expansive and hence the convergence of VI is not guaranteed. On the other hand, since the dynamics is deterministic, there is no need for extension in ConjVI (recall that the scaled expectation in (6a) in ConjVI reduces to the simple scaling ε d = γ · J d for deterministic dynamics), and hence the convergence of ConjVI only requires co(Z g ) ⊇ f s X g and is guaranteed.
The results of our numerical simulations are shown in Figure 5. Once again, we see the trade-off between the time complexity and the greedy control performance in VI and ConjVI. On the other hand, ConjVI-d has the same control performance as VI with an insignificant increase in running time compared to ConjVI. In Figure 5a, we again observe the non-monotone behavior of ConjVI-d (the d-CDP operator is expansive in the first six iterations). The VI algorithm is also showing a non-monotone behavior, where for the first nine iterations the d-DP operation is actually expansive. As we noted earlier, this is because the multi-linear extrapolation operation used in extension is expansive.
Final remarks
In this paper, we proposed the ConjVI algorithm, which reduces the time complexity of the VI algorithm from O(XU) to O(X + U). This better time complexity, however, comes at the expense of restricting the class of problems. In particular, there are two main conditions that must be satisfied in order to be able to apply the ConjVI algorithm:
(i) the dynamics must be of the form x + = f s (x) + Bu + w; and, (ii) the stage cost C(x, u) = C s (x) + C i (u) must be separable.
Moreover, since ConjVI essentially solves the dual problem, for non-convex problems, it suffers from a non-zero duality gap. Based on our simulation results, we also notice a trade-off between computational complexity and control action quality: While ConjVI has a lower computational cost, VI generates better control actions. However, the dynamic scheme for the construction of state dual grid Y g allows us to achieve almost the same performance as VI when it comes to the quality of control actions, with a small extra computational burden. In what follows, we provide our final remarks on the limitations of the proposed ConjVI algorithm and its relation to existing approximate VI algorithms.
Relation to existing approximate VI algorithms. The basic idea for complexity reduction introduced in this study can potentially be combined with, and further improve, the existing sample-based VI algorithms. These sample-based algorithms solely focus on transforming the infinite-dimensional optimization in DP problems into computationally tractable ones, and, in general, they have a time complexity of O(XU), i.e., depending on the product of the cardinalities of the discrete state and action spaces. The proposed ConjVI algorithm, on the other hand, focuses on reducing this time complexity to O(X + U), by avoiding the minimization over the input in each iteration. Take, for example, the aggregation technique in [27, Sec. 8.1] that leads to a piecewise constant approximation of the value function. It is straightforward to combine ConjVI with this type of state space aggregation. Indeed, the numerical example of Section 4.2 essentially uses such an aggregation by approximating the value function via a nearest-neighbor extension; a minimal sketch of such an extension is given below.
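The following is an illustrative Python sketch of a piecewise constant (nearest-neighbor) extension of a discrete value function (our own code, not the implementation used in Section 4.2).

```python
import numpy as np

def nearest_neighbor_extension(X_d, J_d):
    """Piecewise constant extension of a discrete value function J_d given on X_d.

    X_d: (X, n) array of discrete states; J_d: (X,) array of corresponding values.
    Returns a function that maps any state x to the value at its closest grid point.
    """
    def J_bar(x):
        idx = np.argmin(np.linalg.norm(X_d - np.asarray(x), axis=1))
        return J_d[idx]
    return J_bar
```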
Cost functions with a large Lipschitz constant. Recall that for the proposed ConjVI algorithm to be computationally efficient, the size Y of the state dual grid Y^g must be controlled by the size X of the discrete state space X^d (Assumption 3.9-(iii)). Then, as the range of the slopes of the value function J increases, the corresponding error e_y in (8d) due to the discretization of the dual state space increases. The proposed dynamic approach for the construction of Y^g partially addresses this issue by focusing on the range of the slopes of J_k^d in each iteration, so as to minimize the discretization error of that same iteration k. However, when the cost function has a large Lipschitz constant, even this latter approach can fail to provide a good approximation of the value function. Table 1 reports the results of the numerical simulation of the unstable batch reactor with the stage cost
(10)   C(x, u) = −4/(1 + η) + Σ_{i=1}^{4} 1/(1 + η − |x_i|) − 2/(2 + η) + Σ_{j=1}^{2} 1/(2 + η − |u_j|),   ‖x‖_∞ ≤ 1, ‖u‖_∞ ≤ 2.
Clearly, as η → 0, we increase the range of the slopes of the cost function. As can be seen in Table 1, the quality of the greedy actions generated by ConjVI-d also deteriorates in this case.
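For reference, the stage cost (10) can be evaluated directly as follows (an illustrative Python transcription of the displayed formula, not part of the original implementation); it makes explicit that the summands become arbitrarily steep near the constraint boundary as η → 0, which is the large-Lipschitz-constant regime discussed above.

```python
def stage_cost(x, u, eta):
    """Stage cost (10) of the batch reactor example; requires |x_i| <= 1 and |u_j| <= 2.

    The summands 1/(1 + eta - |x_i|) and 1/(2 + eta - |u_j|) become arbitrarily
    steep near the constraint boundary as eta -> 0.
    """
    assert max(abs(xi) for xi in x) <= 1 and max(abs(uj) for uj in u) <= 2
    return (-4 / (1 + eta) + sum(1 / (1 + eta - abs(xi)) for xi in x)
            - 2 / (2 + eta) + sum(1 / (2 + eta - abs(uj)) for uj in u))
```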
Gradient-based algorithms for solving the minimization over input. Let us first note that the minimization over u in sample-based VI algorithms usually involves solving a difficult non-convex problem. This is particularly because the extension operation employed in these algorithms for approximating the value function using the sample points does not lead to a convex function in u (e.g., take kernel-based approximations or neural networks). This is why in MDP and RL literature, it is quite common to consider a finite action space in the first place [11,27]. Moreover, the minimization over u again must be solved for each sample point in each iteration, while the application of ConjVI avoids solving this minimization in each iteration. In this regard, let us note that ConjVI uses a convex approximation of the value function, which allows for the application of a gradient-based algorithm for minimization over u within the ConjVI algorithm. Indeed, in each iteration k = 0, 1, . . ., ConjVI solves (for deterministic dynamics)
J_{k+1}^d(x) = C_s(x) + min_u { C_i(u) + γ · max_{y∈Y^g} [ ⟨y, f_s(x) + Bu⟩ − J_k^{d∗d}(y) ] },   x ∈ X^d, where J_k^{d∗d}(y) = max_{x∈X^d} [ ⟨x, y⟩ − J_k^d(x) ],   y ∈ Y^g,
is the discrete conjugate of the output of the previous iteration (computed using the LLT algorithm). Then, it is not hard to see that a subgradient of the objective of this minimization can be computed using O(Y) operations: for a given u, assuming we have access to the subdifferential ∂C_i(u), a subgradient of the objective function is given by any element of ∂C_i(u) + γ · B^⊤ y_u, where y_u ∈ argmax_{y∈Y^g} { ⟨y, f_s(x) + Bu⟩ − J_k^{d∗d}(y) }.
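A minimal sketch of this subgradient computation is given below (illustrative Python with our own names; Y_g stores the dual grid and J_conj the values of J_k^{d∗d} on it). Any element of ∂C_i(u) added to the returned vector is a subgradient of the full objective, which can then be fed to a projected subgradient method over U.

```python
import numpy as np

def conjugate_term_subgradient(u, x, f_s, B, Y_g, J_conj, gamma):
    """Subgradient w.r.t. u of  gamma * max_{y in Y_g} [<y, f_s(x) + B u> - J_conj(y)].

    Y_g: (Y, n) array of dual grid points; J_conj: (Y,) array of J_k^{d*d} on Y_g.
    Evaluating the maximizer y_u costs O(Y) operations.
    """
    z = f_s(x) + B @ np.asarray(u)
    y_u = Y_g[np.argmax(Y_g @ z - J_conj)]   # maximizer over the dual grid
    return gamma * (B.T @ y_u)
```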
Appendix A. Technical proofs

A.1. Proof of Proposition 3.2

This result is an extension of [21, Lem. 4.2] that accounts for the separable cost, the discount factor, and the additive disturbance. Inserting the dynamics of Assumption 3.1-(i) into (3), we can use the definition of the conjugate transform to obtain (all the functions are extended to infinity outside their effective domains)
T̂J(x) − C_s(x) = max_y min_{u,z} { C_i(u) + γ · E_w J(z + w) + ⟨y, f_s(x) + Bu − z⟩ }
= max_y { ⟨y, f_s(x)⟩ − max_u [ ⟨−B^⊤y, u⟩ − C_i(u) ] − max_z [ ⟨y, z⟩ − γ · E_w J(z + w) ] }
= max_y { ⟨y, f_s(x)⟩ − C_i^∗(−B^⊤y) − [γ · E_w J(· + w)]^∗(y) }
= max_y { ⟨y, f_s(x)⟩ − C_i^∗(−B^⊤y) − ϵ^∗(y) }
= max_y { ⟨y, f_s(x)⟩ − φ(y) } = φ^∗(f_s(x)),
where we used the definitions of ϵ and φ in (4a) and (4b), respectively.
A.2. Proof of Proposition 3.3
We can use the representation (4) and the definition of conjugate operation to obtain
T̂J(x) − C_s(x) = max_y { ⟨f_s(x), y⟩ − φ(y) }
= max_y { ⟨f_s(x), y⟩ − C_i^∗(−B^⊤y) − ϵ^∗(y) }
= max_y { ⟨f_s(x), y⟩ − [C_i^∗]^{∗∗}(−B^⊤y) − ϵ^∗(y) }
= max_y { ⟨f_s(x), y⟩ − max_{u∈co(U)} [ ⟨−B^⊤y, u⟩ − C_i^{∗∗}(u) ] − ϵ^∗(y) }
= max_y min_{u∈co(U)} { C_i^{∗∗}(u) + ⟨y, f_s(x) + Bu⟩ − ϵ^∗(y) },
where we used the fact that C_i^∗ : R^m → R is proper, closed, and convex, and hence [C_i^∗]^{∗∗} = C_i^∗. This follows from the fact that dom(C_i) = U is assumed to be compact (Assumption 3.1-(iii)). Hence, the objective function of this maximin problem is convex in u (by the convexity of C_i^{∗∗} : co(U) → R), with co(U) being compact. Also, the objective function is concave in y, which follows from the convexity of ϵ^∗. Then, by Sion's Minimax Theorem (see, e.g., [29, Thm. 3]), we have the minimax-maximin equality, i.e.,
T̂J(x) − C_s(x) = min_u max_y { C_i^{∗∗}(u) + ⟨y, f(x, u)⟩ − ϵ^∗(y) }
= min_u { C_i^{∗∗}(u) + max_y [ ⟨y, f(x, u)⟩ − ϵ^∗(y) ] }
= min_u { C_i^{∗∗}(u) + ϵ^{∗∗}(f(x, u)) }
= min_u { C_i^{∗∗}(u) + γ · [E_w J(· + w)]^{∗∗}(f(x, u)) },
where, for the last equality, we used the fact that [γh]^{∗∗} = γ · h^{∗∗}; see [4, Prop. 13.23-(i)&(iv)].
A.3. Proof of Corollary 3.4
By Proposition 3.3, we need to show that C_i^{∗∗} = C_i and [E_w J(· + w)]^{∗∗} = E_w J(· + w), so that
C_i^{∗∗}(u) + γ · [E_w J(· + w)]^{∗∗}(f(x, u)) = C_i(u) + γ · [E_w J(· + w)](f(x, u)) = C_i(u) + γ · E_w J(f(x, u) + w) = C_i(u) + γ · E_w J(g(x, u, w)).
This holds if C i and E w J(· + w) are proper, closed and convex. This is indeed the case since X and U are compact, and C i : U → R and J : X → R are assumed to be convex.
A.4. Proof of Theorem 3.11

We begin with two preliminary lemmas on the non-expansiveness of the conjugate and multilinear interpolation operations within the d-CDP operation (6).

Lemma A.1 (Non-expansiveness of conjugate operator). Consider two functions h_i (i = 1, 2) with the same nonempty effective domain X. For any y ∈ dom(h_1^∗) ∩ dom(h_2^∗), we have |h_1^∗(y) − h_2^∗(y)| ≤ ‖h_1 − h_2‖_∞. Hence, ‖h_1^∗ − h_2^∗‖_∞ ≤ ‖h_1 − h_2‖_∞ on dom(h_1^∗) ∩ dom(h_2^∗).

Lemma A.2 (Non-expansiveness of multilinear interpolation). Consider two discrete functions h_i^d : X^g → R (i = 1, 2) and their multilinear (LERP) extensions [h_i^d] : co(X^g) → R. Then, ‖[h_1^d] − [h_2^d]‖_∞ ≤ ‖h_1^d − h_2^d‖_∞.

Proof. For any x ∈ co(X^g), we have (i = 1, 2) [h_i^d](x) = Σ_{j=1}^{2^n} α_j · h_i^d(x_j), where x_j, j = 1, . . . , 2^n, are the vertices of the hyper-rectangular cell that contains x, and α_j, j = 1, . . . , 2^n, are convex coefficients (i.e., α_j ∈ [0, 1] and Σ_j α_j = 1). Then, |[h_1^d](x) − [h_2^d](x)| ≤ Σ_j α_j · |h_1^d(x_j) − h_2^d(x_j)| ≤ ‖h_1^d − h_2^d‖_∞.

With these preliminary results at hand, we can now show that T^d is γ-contractive. Consider two discrete functions J_i^d : X^d → R (i = 1, 2). For any x ∈ X^d ⊂ R^n, following the construction in (6), we obtain |T^d J_1^d(x) − T^d J_2^d(x)| ≤ γ · ‖J_1^d − J_2^d‖_∞. We note that we are using: (i) Assumption 3.9-(ii) in the application of Lemma A.2, (ii) the fact that dom(ϕ_i^{d∗}) = dom(ε_i^{d∗}) = R^n for i = 1, 2 in the two applications of Lemma A.1, and (iii) Assumption 3.10-(i) in the last inequality.
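As a small numerical sanity check of Lemma A.1 (not part of the formal argument), the non-expansiveness of discrete conjugation can be verified by brute force:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 51)      # common effective domain X^d
y = np.linspace(-5.0, 5.0, 101)     # dual grid Y^g
h1, h2 = rng.normal(size=x.size), rng.normal(size=x.size)

conjugate = lambda h: (np.outer(y, x) - h).max(axis=1)   # h*(y_j) = max_i (y_j x_i - h_i)
assert np.abs(conjugate(h1) - conjugate(h2)).max() <= np.abs(h1 - h2).max() + 1e-12
```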
A.5. Proof of Theorem 3.12

In what follows, we provide the time complexity of each line of Algorithm 1. In particular, we use the fact that Y, Z = O(X) and V = O(U) by Assumption 3.9-(iii). The complexity of the construction of V^g in line 1 is of O(X + U) by Assumption 3.9-(iii). The LLT of line 2 requires O(U + V) = O(U) operations [24, Cor. 5]. The complexity of lines 3 and 4 is of O(X + U) by Assumption 3.9-(iii) on the complexity of the construction of Z^g and Y^g. The operation of line 5 also has a complexity of O(X), and line 6 requires O(X + U) operations. This leads to the reported O(X + U) time complexity for initialization. In each iteration, line 8 requires O(X) operations. The complexity of line 9 is of O(XWE) by the assumption on the complexity of the extension operator [·]. The LLT of line 10 requires O(X + Y) = O(X) operations [24, Cor. 5]. The application of LERP in line 12 has a complexity of O(log V) [21, Rem. 2.2]. Hence, the for loop over y ∈ Y^g requires O(Y log V) = O(X log U) = O(X) operations. The LLT of line 15 requires O(Z + Y) = O(X) operations [24, Cor. 5]. The application of LERP in line 17 has a complexity of O(log Z) [21, Rem. 2.2]. Hence, the for loop over x ∈ X^d requires O(X log Z) = O(X log X) = O(X) operations. The time complexity of each iteration is then of O(XWE).

A.6. Proof of Theorem 3.13

Note that the ConjVI Algorithm 1 involves consecutive applications of the d-CDP operator T^d in (6), and terminates after a finite number of iterations corresponding to the bound e_t. We begin with bounding the difference between the DP and d-CDP operators. We note that this result extends [21, Thm. 5.3] by considering the error of the extension operation for computing the expectation w.r.t. the additive disturbance in (6a) and the approximate discrete conjugation of the input cost in (6d).

Proposition A.3 (Error of d-CDP operation). Let J : X → R be a Lipschitz continuous, convex function that satisfies the condition of Assumption 3.10-(ii). Assume C_i : U → R is convex. Also, let Assumptions 3.9-(i)&(ii) hold. Consider the output of the d-CDP operator T^d J^d : X^d → R and the discretization of the output of the DP operator [T J]^d : X^d → R. We have
(11)   ‖T^d J^d − [T J]^d‖_∞ ≤ γ · e_e + e_d.

Proof. First note that, by Corollary 3.4, the DP and CDP operators are equivalent, i.e., T J = T̂J. Hence, it suffices to bound the error of the d-CDP operator T^d w.r.t. the CDP operator T̂. We begin with the following preliminary lemma.
Lemma A.4. The scaled expectation ϵ in (4a) is Lipschitz continuous and convex with a nonempty, compact effective domain. Moreover, L(ϵ) ≤ γ · L(J).

Proof. The convexity follows from the fact that expectation preserves convexity and γ > 0. The effective domain of ϵ is nonempty by the feasibility condition of Assumption 3.1-(iii), and is compact since X is assumed to be compact. Finally, the bound on the Lipschitz constant of ϵ immediately follows from (4a).

We now provide our step-by-step proof. Consider the function ϵ in (4a) and its discretization ϵ^d : X^d → R. Also, consider the discrete function ε^d : X^d → R in (6a).

Lemma A.5. We have dom(ϵ^d) = dom(ε^d) ≠ ∅. Moreover, ‖ϵ^d − ε^d‖_∞ ≤ γ · e_e.

Proof. The first statement follows from the feasibility condition of Assumption 3.6. For the second statement, note that for every x ∈ dom(ϵ^d) = dom(ε^d), we can use (4a) and (6a) to write
|ϵ^d(x) − ε^d(x)| = γ · |Σ_{w∈W^d} p(w) · (J(x + w) − [J^d](x + w))| ≤ γ · Σ_{w∈W^d} p(w) · |J(x + w) − [J^d](x + w)| ≤ γ · ‖J − [J^d]‖_∞.
The result then follows from Assumption 3.10-(ii) on J.
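For concreteness, the discrete scaled expectation of (6a) compared above amounts to the following computation (an illustrative sketch with our own names; each evaluation of the extension J_bar = [J^d] costs O(E) operations).

```python
def scaled_expectation(x, W_d, p, J_bar, gamma):
    """Discrete scaled expectation of (6a): gamma * sum_w p(w) * [J^d](x + w).

    W_d: iterable of disturbance points w; p: matching probability masses;
    J_bar: the extension [J^d] of the discrete value function (e.g., LERP).
    x and the points in W_d are assumed to be NumPy arrays so that x + w is elementwise.
    """
    return gamma * sum(p_w * J_bar(x + w) for w, p_w in zip(W_d, p))
```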
Now, consider the function φ : R n → R in (4b) and its discretization φ d : Y g → R. Also, consider the discrete function ϕ d : Y g → R in (6d).
Lemma A.6. We have ‖φ^d − ϕ^d‖_∞ ≤ γ · e_e + e_u + e_v + e_x, where
e_u = [ ‖B‖_2 · Δ_{Y^g} + L(C_i) ] · d_H(U, U^d),   e_v = Δ_{U^d} · d_H(co(V^g), V^g),   e_x = [ Δ_{Y^g} + γ · L(J) ] · d_H(X, X^d).

Proof. Let y ∈ Y^g. According to (4b) and (6d), we have (note that ε^{d∗d}(y) = ε^{d∗}(y))
φ^d(y) − ϕ^d(y) = φ(y) − ϕ(y) = [ C_i^∗(−B^⊤y) − C_i^{d∗d}(−B^⊤y) ] + [ ϵ^∗(y) − ε^{d∗}(y) ].   (12)
First, let us use [21, Lem. 2.5] to write
0 ≤ C_i^∗(−B^⊤y) − C_i^{d∗}(−B^⊤y) ≤ [ ‖−B^⊤y‖_2 + L(C_i) ] · d_H(U, U^d) ≤ [ ‖B‖_2 · Δ_{Y^g} + L(C_i) ] · d_H(U, U^d) = e_u.   (13)
Also, Assumption 3.9-(i) allows us to use [21, Cor. 2.7] and write
0 ≤ C_i^{d∗d}(−B^⊤y) − C_i^{d∗}(−B^⊤y) ≤ Δ_{U^d} · d_H(co(V^g), V^g) = e_v.   (14)
Now, by Lemma A.1 (non-expansiveness of conjugation) and Lemma A.5, we have
|ϵ^{d∗}(y) − ε^{d∗}(y)| ≤ ‖ϵ^d − ε^d‖_∞ ≤ γ · e_e.   (15)
Moreover, we can use [21, Lem. 2.5] and Lemma A.4 to obtain
0 ≤ ϵ^∗(y) − ϵ^{d∗}(y) ≤ [ ‖y‖_2 + L(ϵ) ] · d_H(X, X^d) ≤ [ Δ_{Y^g} + γ · L(J) ] · d_H(X, X^d) = e_x.   (16)
Combining (12)–(16), we then have
|φ^d(y) − ϕ^d(y)| = |C_i^∗(−B^⊤y) − C_i^{d∗d}(−B^⊤y) + ϵ^∗(y) − ε^{d∗}(y)|
≤ |C_i^∗(−B^⊤y) − C_i^{d∗}(−B^⊤y)| + |C_i^{d∗}(−B^⊤y) − C_i^{d∗d}(−B^⊤y)| + |ϵ^∗(y) − ϵ^{d∗}(y)| + |ϵ^{d∗}(y) − ε^{d∗}(y)|
≤ e_u + e_v + γ · e_e + e_x.

Next, consider the discrete composite functions [φ^∗ ∘ f_s]^d : X^d → R and [ϕ^{d∗} ∘ f_s]^d : X^d → R. In particular, notice that φ^∗ ∘ f_s appears in (4c).

Lemma A.7. We have ‖[φ^∗ ∘ f_s]^d − [ϕ^{d∗} ∘ f_s]^d‖_∞ ≤ γ · e_e + e_u + e_v + e_x + e_y, where e_y = [ Δ_{f_s(X^d)} + Δ_X + ‖B‖_2 · Δ_U ] · max_{x∈X^d} d( ∂(T J − C_s)(x), Y^g ).
Proof. Let x ∈ X^d. Also, let φ^d : Y^g → R be the discretization of φ : R^n → R. Since φ is convex by construction, we can use [21, Lem. 2.5] to obtain (recall that L(h; X) denotes the Lipschitz constant of h restricted to the set X ⊂ dom(h))
0 ≤ φ^∗(f_s(x)) − φ^{d∗}(f_s(x)) ≤ min_{y∈∂φ^∗(f_s(x))} [ ‖f_s(x)‖_2 + L(φ; {y} ∪ Y^g) ] · d(y, Y^g).   (17)
By using (4c) and the equivalence of the DP and CDP operators, we have φ^∗ ∘ f_s = T̂J − C_s = T J − C_s. Also, the definition (4b) implies that
L(φ) ≤ L(C_i^∗ ∘ (−B^⊤)) + L(ϵ^∗) ≤ ‖B‖_2 · L(C_i^∗) + L(ϵ^∗) ≤ ‖B‖_2 · Δ_{dom(C_i)} + Δ_{dom(ϵ)} ≤ ‖B‖_2 · Δ_U + Δ_X,
where for the last inequality we used the fact that dom(ϵ) ⊆ dom(J) = X. Using these results in (17), we have
0 ≤ φ^∗(f_s(x)) − φ^{d∗}(f_s(x)) ≤ min_{y∈∂(T J−C_s)(x)} [ ‖f_s(x)‖_2 + Δ_X + ‖B‖_2 · Δ_U ] · d(y, Y^g) ≤ [ Δ_{f_s(X^d)} + Δ_X + ‖B‖_2 · Δ_U ] · max_{x'∈X^d} d( ∂(T J − C_s)(x'), Y^g ) = e_y.   (18)
Second, by Lemmas A.1 and A.6, we have
|φ^{d∗}(z) − ϕ^{d∗}(z)| ≤ ‖φ^d − ϕ^d‖_∞ ≤ γ · e_e + e_u + e_v + e_x,   (19)
for all z ∈ R n , including z = f s (x). Here, we are using the fact that dom(φ d ) = dom(ϕ d ) = Y g and dom(φ d * ) = dom(ϕ d * ) = R n . Combining inequalities (18) and (19), we obtain
|φ^∗(f_s(x)) − ϕ^{d∗}(f_s(x))| ≤ |φ^∗(f_s(x)) − φ^{d∗}(f_s(x))| + |φ^{d∗}(f_s(x)) − ϕ^{d∗}(f_s(x))| ≤ e_y + γ · e_e + e_u + e_v + e_x.
This completes the proof.
We are now left with the final step. Consider the output of the d-CDP operator T^d J^d : X^d → R. Also, consider the output of the CDP operator T̂J : X → R and its discretization [T̂J]^d : X^d → R.

Lemma A.8. We have ‖T^d J^d − [T̂J]^d‖_∞ ≤ γ · e_e + e_u + e_v + e_x + e_y + e_z = γ · e_e + e_d, where e_z = Δ_{Y^g} · d_H( f_s(X^d), Z^g ).
Proof. Let x ∈ X d . According to (4c) and (6f), we have
T^d J^d(x) − [T̂J]^d(x) = T^d J^d(x) − T̂J(x) = ϕ^{d∗d}(f_s(x)) − φ^∗(f_s(x)).   (20)
Now, by Lemma A.7, we have |φ^∗(f_s(x)) − ϕ^{d∗}(f_s(x))| ≤ γ · e_e + e_u + e_v + e_x + e_y.   (21)
Moreover, Assumption 3.9-(ii) allows us to use [21,Cor. 2.7] and obtain
0 ≤ ϕ^{d∗d}(f_s(x)) − ϕ^{d∗}(f_s(x)) ≤ Δ_{Y^g} · d_H( f_s(X^d), Z^g ) = e_z.   (22)
Combining (20), (21), and (22), we then have
|T^d J^d(x) − [T̂J]^d(x)| = |ϕ^{d∗d}(f_s(x)) − φ^∗(f_s(x))| ≤ |ϕ^{d∗d}(f_s(x)) − ϕ^{d∗}(f_s(x))| + |ϕ^{d∗}(f_s(x)) − φ^∗(f_s(x))| ≤ γ · e_e + e_u + e_v + e_x + e_y + e_z.
The inequality (11) then follows from Lemma A.8 by noticing the equivalence of the DP and CDP operators.
With the preceding result at hand, we can now provide a bound for the difference between the fixed points of the d-CDP and DP operators. To this end, let Ĵ^d = T^d Ĵ^d : X^d → R be the fixed point of the d-CDP operator. Recall that J_⋆ = T J_⋆ : X → R and J_⋆^d : X^d → R are the true optimal value function and its discretization.

Lemma A.9 (Error of fixed point of d-CDP operator). We have ‖Ĵ^d − J_⋆^d‖_∞ ≤ (γ · e_e + e_d) / (1 − γ).
Proof. By Assumptions 3.9-(ii) and 3.10-(i), the operator T d is γ-contractive (Theorem 3.11) and hence
‖T^d Ĵ^d − T^d J_⋆^d‖_∞ ≤ γ · ‖Ĵ^d − J_⋆^d‖_∞.
Also, notice that Assumptions 3.1 and 3.5 imply that J_⋆ is Lipschitz continuous and convex. Moreover, J_⋆ is assumed to satisfy the condition of Assumption 3.10-(ii). Hence, by Proposition A.3, we have
‖T^d J_⋆^d − [T J_⋆]^d‖_∞ ≤ γ · e_e + e_d.
Using these two inequalities, we can then write
‖Ĵ^d − J_⋆^d‖_∞ = ‖Ĵ^d − T^d J_⋆^d + T^d J_⋆^d − J_⋆^d‖_∞ ≤ ‖Ĵ^d − T^d J_⋆^d‖_∞ + ‖T^d J_⋆^d − J_⋆^d‖_∞ = ‖T^d Ĵ^d − T^d J_⋆^d‖_∞ + ‖T^d J_⋆^d − [T J_⋆]^d‖_∞ ≤ γ · ‖Ĵ^d − J_⋆^d‖_∞ + γ · e_e + e_d.
This completes the proof.
Finally, we can use the fact that T^d is γ-contractive to provide the following bound on the error due to the finite termination of the algorithm. Recall that J^d : X^d → R is the output of Algorithm 1.
Lemma A.10 (Error of finite termination). We have ‖J^d − Ĵ^d‖_∞ ≤ γ · e_t / (1 − γ).

Proof. By Assumptions 3.9-(ii) and 3.10-(i), the operator T^d is γ-contractive (Theorem 3.11). Let us assume that Algorithm 1 terminates after k ≥ 0 iterations, so that J^d = J_{k+1}^d and ‖J_{k+1}^d − J_k^d‖_∞ ≤ e_t. Then,
‖J^d − Ĵ^d‖_∞ = ‖J_{k+1}^d − T^d J_{k+1}^d + T^d J_{k+1}^d − Ĵ^d‖_∞ ≤ ‖J_{k+1}^d − T^d J_{k+1}^d‖_∞ + ‖T^d J_{k+1}^d − Ĵ^d‖_∞ = ‖T^d J_k^d − T^d J_{k+1}^d‖_∞ + ‖T^d J_{k+1}^d − T^d Ĵ^d‖_∞ ≤ γ · ‖J_k^d − J_{k+1}^d‖_∞ + γ · ‖J_{k+1}^d − Ĵ^d‖_∞ ≤ γ · e_t + γ · ‖J^d − Ĵ^d‖_∞,
where for the second inequality we used the fact that T d is a contraction.
The inequality (7) is then derived by combining the results of Lemmas A.9 and A.10.
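Explicitly, combining Lemmas A.9 and A.10 via the triangle inequality gives
\[
\|J^d - J_\star^d\|_\infty \le \|J^d - \hat{J}^d\|_\infty + \|\hat{J}^d - J_\star^d\|_\infty \le \frac{\gamma\, e_t}{1-\gamma} + \frac{\gamma\, e_e + e_d}{1-\gamma}.
\]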
Proposition 3.2 (CDP operator). The dual problem (3) equivalently reads as
Proposition 3.3 (CDP reformulation). The CDP operator T equivalently reads as
Corollary 3.4 (Equivalence of T and T̂). If C_i : U → R and J : X → R are convex, then T J = T̂J.
Assumption 3.5 (Convexity). Consider the following properties for the constraints, costs, and dynamics: (i) The sets X ⊂ R^n and U ⊂ R^m are convex. (ii) The costs C_s : X → R and C_i : U → R are convex. (iii) The deterministic dynamics f : R^n × R^m → R^n is such that, given a convex function J : X → R, the composition J ∘ f is jointly convex in the state and input variables.
Assumption 3.10 (Extension operator). Consider the following properties for the operator [·] in (6a):
Theorem 3.12 (Complexity). Let Assumption 3.9-(iii) hold. Also assume that each evaluation of the extension operator [·] in (6a) requires O(E) operations. Then, the time complexities of initialization and each iteration in Algorithm 1 are of O(X + U ) and O(XW E), respectively.
Figure 1. VI vs. ConjVI (CVI) - synthetic example with stochastic dynamics x^+ = Ax + Bu + w: (a) Convergence rate for N = 41; (b) Running time; (c) Average cost of one hundred instances of the control problem with random initial conditions over T = 100 time steps. The black dashed-dotted line in (a) corresponds to exponential convergence with coefficient γ = 0.95. CVI-d corresponds to dynamic construction of the dual grid Y^g in the ConjVI algorithm.

Figure 2. Convergence of VI and ConjVI with deterministic dynamics x^+ = Ax + Bu; cf. Figure 1a.

Figure 3. VI vs. ConjVI (CVI) - optimal control of noisy inverted pendulum: (a) Convergence rate for N = 41; (b) Running time; (c) Average cost of one hundred instances of the control problem with random initial conditions over T = 100 time steps. The black dashed-dotted line in (a) corresponds to exponential convergence with coefficient γ = 0.95. CVI-d corresponds to dynamic construction of the dual grid Y^g in the ConjVI algorithm.

Figure 4. Convergence of VI and ConjVI with deterministic dynamics x^+ = f_s(x) + Bu; cf. Figure 3a.

Figure 5. VI vs. ConjVI (CVI) - optimal control of batch reactor: (a) Convergence rate for N = 25; (b) Running time; (c) Average cost of one hundred instances of the control problem with random initial conditions over T = 100 time steps. The black dashed-dotted line in (a) corresponds to exponential convergence with coefficient γ = 0.95. CVI-d corresponds to dynamic construction of the dual grid Y^g in the ConjVI algorithm.
Here, the operations [·]^{∗∗} and [·]^{d∗d∗} denote [[·]^∗]^∗ and [[·]^{d∗}]^{d∗}, respectively.
Algorithm 1 ConjVI: Approximate VI in conjugate domain
Input: dynamics f_s : R^n → R^n, B ∈ R^{n×m}; finite state space X^d ⊂ X; finite input space U^d ⊂ U; state cost function C_s^d : X^d → R; input cost function C_i^d : U^d → R; finite disturbance space W^d and its p.m.f. p : W^d → [0, 1]; discount factor γ; termination bound e_t.
Output: discrete value function J^d : X^d → R.
initialization:
1: construct the grid V^g;
2: use LLT to compute C_i^{d∗d};
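The discrete conjugate transforms in Algorithm 1 (e.g., line 2 above, as well as the LLTs referenced in the complexity analysis) can be computed with the linear-time Legendre transform of [24]. The following one-dimensional sketch (our own illustrative Python, not the interface of the MATLAB packages [22, 23]) conveys the idea: only the vertices of the lower convex hull matter, and for sorted slopes the maximizer moves monotonically, giving O(X + Y) total work; the multivariate transform can be obtained by applying such one-dimensional passes along each dimension.

```python
def lower_hull(xs, hs):
    """Vertices of the lower convex hull of the points (xs[i], hs[i]); xs strictly increasing."""
    hull = []
    for x, h in zip(xs, hs):
        while len(hull) >= 2:
            (x1, h1), (x2, h2) = hull[-2], hull[-1]
            if (h2 - h1) * (x - x2) >= (h - h2) * (x2 - x1):  # middle point is not a vertex
                hull.pop()
            else:
                break
        hull.append((x, h))
    return hull

def llt_1d(xs, hs, ys):
    """Discrete conjugate h*(y) = max_i (y * xs[i] - hs[i]) for increasing ys, in O(X + Y)."""
    hull, k, out = lower_hull(xs, hs), 0, []
    for y in ys:
        # the maximizing hull vertex is nondecreasing in y
        while k + 1 < len(hull) and y * hull[k + 1][0] - hull[k + 1][1] >= y * hull[k][0] - hull[k][1]:
            k += 1
        out.append(y * hull[k][0] - hull[k][1])
    return out
```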
Table 1. VI vs. ConjVI - optimal control of the batch reactor with stage cost (10) and η = 0.01.

Algorithm    Run-time (sec)    Average cost (100 runs)
VI           7669              33.9
ConjVI       55                73.5
ConjVI-d     90                74.0
For example, for the linear approximation [J^d](x) = Σ_{i=1}^{B} α_i · b_i(x), we have E = B (the size of the basis), while for the kernel-based approximation [J^d](x) = Σ_{x̄∈X^d} α_{x̄} · r(x, x̄), we generally have E ≤ X. In particular, if X^d = X^g is grid-like, and the extension [J^d] is computed using LERP, then E = log X [21, Rem. 2.2].
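A one-dimensional sketch of the LERP extension with the O(log X) cell lookup mentioned above (illustrative Python; the grid is assumed sorted, and points outside the grid are extrapolated using the boundary cell, as in the VI implementation of Example 3):

```python
import bisect

def lerp_1d(grid, values, x):
    """Piecewise-linear interpolation of a discrete function on a sorted 1-D grid.

    Locating the cell containing x takes O(log X) comparisons, matching E = log X.
    """
    j = min(max(bisect.bisect_right(grid, x), 1), len(grid) - 1)
    x0, x1 = grid[j - 1], grid[j]
    t = (x - x0) / (x1 - x0)
    return (1 - t) * values[j - 1] + t * values[j]
```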
References

[1] Y. Achdou, F. Camilli, and L. Corrias. On numerical approximation of the Hamilton-Jacobi-transport system arising in high frequency approximations. Discrete & Continuous Dynamical Systems - Series B, 19(3), 2014.
[2] M. Akian, S. Gaubert, and A. Lakhoua. The max-plus finite element method for solving deterministic optimal control problems: Basic properties and convergence analysis. SIAM Journal on Control and Optimization, 47(2):817-848, 2008.
[3] F. Bach. Max-plus matching pursuit for deterministic Markov decision processes. arXiv preprint arXiv:1906.08524, 2019.
[4] H. H. Bauschke and P. L. Combettes. Convex Analysis and Monotone Operator Theory in Hilbert Spaces. Springer, New York, NY, 2nd edition, 2017.
[5] R. Bellman and W. Karush. Mathematical programming and the maximum transform. Journal of the Society for Industrial and Applied Mathematics, 10(3):550-567, 1962.
[6] E. Berthier and F. Bach. Max-plus linear approximations for deterministic continuous-state Markov decision processes. IEEE Control Systems Letters, pages 1-1, 2020.
[7] D. Bertsekas. Linear convex stochastic control problems over an infinite horizon. IEEE Transactions on Automatic Control, 18(3):314-315, 1973.
[8] D. P. Bertsekas. Dynamic Programming and Optimal Control, Vol. II. Athena Scientific, Belmont, MA, 3rd edition, 2007.
[9] D. P. Bertsekas. Convex Optimization Theory. Athena Scientific, Belmont, MA, 2009.
[10] D. P. Bertsekas. Reinforcement Learning and Optimal Control. Athena Scientific, Belmont, MA, 2019.
[11] L. Busoniu, R. Babuska, B. De Schutter, and D. Ernst. Reinforcement Learning and Dynamic Programming Using Function Approximators. CRC Press, 2017.
[12] R. Carpio and T. Kamihigashi. Fast value iteration: an application of Legendre-Fenchel duality to a class of deterministic dynamic programming problems in discrete time. Journal of Difference Equations and Applications, 26(2):209-222, 2020.
[13] L. Contento, A. Ern, and R. Vermiglio. A linear-time approximate convex envelope algorithm using the double Legendre-Fenchel transform with application to phase separation. Computational Optimization and Applications, 60(1):231-261, 2015.
[14] L. Corrias. Fast Legendre-Fenchel transform and applications to Hamilton-Jacobi equations and conservation laws. SIAM Journal on Numerical Analysis, 33(4):1534-1558, 1996.
[15] G. Costeseque and J.-P. Lebacque. A variational formulation for higher order macroscopic traffic flow models: Numerical investigation. Transportation Research Part B: Methodological, 70:112-133, 2014.
[16] A. O. Esogbue and C. W. Ahn. Computational experiments with a class of dynamic programming algorithms of higher dimensions. Computers & Mathematics with Applications, 19(11):3-23, 1990.
[17] P. F. Felzenszwalb and D. P. Huttenlocher. Distance transforms of sampled functions. Theory of Computing, 8(1):415-428, 2012.
[18] M. Jacobs and F. Léger. A fast approach to optimal transport: The back-and-forth method. Numerische Mathematik, 146(3):513-544, 2020.
[19] C. M. Klein and T. L. Morin. Conjugate duality and the curse of dimensionality. European Journal of Operational Research, 50(2):220-228, 1991.
[20] A. S. Kolarijani, S. C. Bregman, P. Mohajerin Esfahani, and T. Keviczky. A decentralized event-based approach for robust model predictive control. IEEE Transactions on Automatic Control, 65(8):3517-3529, 2020.
[21] M. A. S. Kolarijani and P. Mohajerin Esfahani. Fast approximate dynamic programming for input-affine dynamics. Preprint arXiv:2008.10362, 2021.
[22] M. A. S. Kolarijani and P. Mohajerin Esfahani. Conjugate value iteration (ConjVI) MATLAB package, 2021. Licensed under the MIT License, available online at https://github.com/AminKolarijani/ConjVI.
[23] M. A. S. Kolarijani and P. Mohajerin Esfahani. Discrete conjugate dynamic programming (d-CDP) MATLAB package, 2021. Licensed under the MIT License, available online at https://github.com/AminKolarijani/d-CDP.
[24] Y. Lucet. Faster than the fast Legendre transform, the linear-time Legendre transform. Numerical Algorithms, 16(2):171-185, 1997.
[25] Y. Lucet. New sequential exact Euclidean distance transform algorithms based on convex analysis. Image and Vision Computing, 27(1):37-44, 2009.
[26] W. M. McEneaney. Max-plus eigenvector representations for solution of nonlinear H∞ problems: basic concepts. IEEE Transactions on Automatic Control, 48(7):1150-1163, 2003.
[27] W. B. Powell. Approximate Dynamic Programming: Solving the Curses of Dimensionality. John Wiley & Sons, Hoboken, NJ, 2nd edition, 2011.
[28] A. Sidford, M. Wang, X. Wu, and Y. Ye. Variance reduced value iteration and faster algorithms for solving Markov decision processes. In Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 770-787. SIAM, 2018.
[29] S. Simons. Minimax theorems and their proofs. In D.-Z. Du and P. M. Pardalos, editors, Minimax and Applications, pages 1-23. Springer US, Boston, MA, 1995.
[30] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
[
"A quantum XOR oblivious transfer protocol compatible with classical partially homomorphic encryption",
"A quantum XOR oblivious transfer protocol compatible with classical partially homomorphic encryption"
] | [
"Li Yu \nSchool of Physics\nHangzhou Normal University\n311121HangzhouZhejiangChina\n",
"Jie Xu \nSchool of Physics\nHangzhou Normal University\n311121HangzhouZhejiangChina\n",
"Fuqun Wang \nSchool of Mathematics\nHangzhou Normal University\n311121HangzhouZhejiangChina\n\nKey Laboratory of Cryptography of Zhejiang Province\n311121HangzhouChina\n\nWestone Cryptologic Research Center\n100071BeijingChina\n",
"Chui-Ping Yang \nSchool of Physics\nHangzhou Normal University\n311121HangzhouZhejiangChina\n"
] | [
"School of Physics\nHangzhou Normal University\n311121HangzhouZhejiangChina",
"School of Physics\nHangzhou Normal University\n311121HangzhouZhejiangChina",
"School of Mathematics\nHangzhou Normal University\n311121HangzhouZhejiangChina",
"Key Laboratory of Cryptography of Zhejiang Province\n311121HangzhouChina",
"Westone Cryptologic Research Center\n100071BeijingChina",
"School of Physics\nHangzhou Normal University\n311121HangzhouZhejiangChina"
] | [] | XOR oblivious transfer (XOT) is a classical cryptographic primitive which is apparently weaker than 1-out-of-2 oblivious transfer, yet still universal for secure two-party computation. In ideal XOT, Bob initially has two bits, and Alice may choose to obtain either the first bit of Bob's, or the second bit, or their exclusive-or, but does not obtain any more information, while Bob does not learn anything about her choice. In this work we firstly introduce a quantum protocol which implements the functionality of XOT on classical inputs, and we show that such protocol is insecure if Alice cheats. By building on a variant of such protocol, we present a protocol for XOT with partial security for both parties. We then propose a protocol for evaluating linear polynomials. It has near-perfect security for Alice under uniform input distributions, for large input size, but only partial security for Bob. On the hybrid security front, all these protocols can be easily combined with a classical XOR homomorphic encryption scheme to save quantum costs when evaluating linear functions. | null | [
"https://export.arxiv.org/pdf/2305.11114v3.pdf"
] | 258,762,552 | 2305.11114 | 5113fde66cc682c0f33ca3e76feae55ca063772e |
A quantum XOR oblivious transfer protocol compatible with classical partially homomorphic encryption
30 May 2023
Li Yu
School of Physics
Hangzhou Normal University
311121HangzhouZhejiangChina
Jie Xu
School of Physics
Hangzhou Normal University
311121HangzhouZhejiangChina
Fuqun Wang
School of Mathematics
Hangzhou Normal University
311121HangzhouZhejiangChina
Key Laboratory of Cryptography of Zhejiang Province
311121HangzhouChina
Westone Cryptologic Research Center
100071BeijingChina
Chui-Ping Yang
School of Physics
Hangzhou Normal University
311121HangzhouZhejiangChina
A quantum XOR oblivious transfer protocol compatible with classical partially homomorphic encryption
30 May 2023
XOR oblivious transfer (XOT) is a classical cryptographic primitive which is apparently weaker than 1-out-of-2 oblivious transfer, yet still universal for secure two-party computation. In ideal XOT, Bob initially has two bits, and Alice may choose to obtain either the first bit of Bob's, or the second bit, or their exclusive-or, but does not obtain any more information, while Bob does not learn anything about her choice. In this work we firstly introduce a quantum protocol which implements the functionality of XOT on classical inputs, and we show that such protocol is insecure if Alice cheats. By building on a variant of such protocol, we present a protocol for XOT with partial security for both parties. We then propose a protocol for evaluating linear polynomials. It has near-perfect security for Alice under uniform input distributions, for large input size, but only partial security for Bob. On the hybrid security front, all these protocols can be easily combined with a classical XOR homomorphic encryption scheme to save quantum costs when evaluating linear functions.
I. INTRODUCTION
Secure two-party function evaluation is a central problem in classical cryptography. Ideally, the two parties wish to compute some function correctly using inputs that they provide, while the information about the inputs are kept unknown to the opposite party except what could be learnt in the computation result. Possible approaches to this problem include classical homomorphic encryption [1,2], or Yao's "Garbled Circuit" [3] and its variants. The 1-out-of-2 oblivious transfer (1-2 OT) [4][5][6] is a universal cryptographic primitive which may be used by these approaches. Another approach is to assume prior classical correlations, such as precomputed oblivious transfer, which has the same type of correlation as in the Popescu-Rohrlich nonlocal box [7].
XOR oblivious transfer (XOT) is a cryptographic primitive which is a variant of oblivious transfer. In ideal XOT, Bob has two bits, and Alice obtains either the first bit, the second bit, or their exclusive-or (XOR). Alice should not learn anything more than this, and Bob should not learn what Alice has learnt. The problem of XOT is almost equivalent to a problem of computing a function f = x 1 y 1 ⊕ x 2 y 2 (viewed as a linear function * Electronic address: [email protected] † Electronic address: [email protected] of x 1 , x 2 ) with similar security requirements, where the x i and y i (i = 1, 2) are Alice's and Bob's input bits, respectively, and ⊕ stands for addition modulo 2. And the latter problem is almost the same as the problem of 2BP in [5], except that boolean functions such as AND and OR are also allowed for 2BP, but this difference is not important here. It is known that 1-2 OT can be implemented using many instances of 2BP with near-perfect security (see protocol 1 of [5]), and this implies that if we can compute the function f = x 1 y 1 ⊕ x 2 y 2 many times with random inputs and with the required security in ideal XOT, we can implement 1-2 OT with near-perfect security. Since the 1-2 OT is a universal cryptographic primitive for two-party computation, the XOT is also universal. In this work, the XOTs from our protocols are with partial security on both sides. Hence they cannot be used to generate 1-2 OT with satisfactory security. But based on our protocol for XOT, we developed a protocol for evaluating general linear functions, in which Alice's data is quite secure under some common conditions.
In this work, we firstly present a quantum protocol that implements the functionality of XOT on classical inputs with some weakened security. Specifically, Alice's input is perfectly secure, but Bob may leak both bits of information to Alice when she initially prepares an entangled state on the main system and some ancilla. We then introduce a second protocol which also implements the functionality of XOT. It uses a variant of the first protocol as a subprocedure, and it has partial security for both parties. Based on the second protocol, we propose a protocol for evaluating general linear functions, in which Alice's data is asymptotically secure for uniformly distributed input, for large input size. But for small input size or biased input, Alice's data is not very secure, which is consistent with the security characteristics of the second protocol.
In secure two-party quantum computation, the two parties with quantum capabilities wish to correctly compute an output according to some public or private program while keeping their (quantum) inputs as secure as possible. A typical problem in this field is quantum homomorphic encryption (QHE) [8][9][10][11][12][13][14][15][16][17][18][19][20]. Since our last protocol can perform two-party evaluation of linear functions, it should be of help in many QHE schemes, provided that the input size is large and the input data has a nearly uniform prior probability distribution.
For securely evaluating linear functions, there is an alternative approach: it is simple to combine our protocols with a classical additive homomorphic encryption scheme, and this would save the number of runs of the quantum protocol for achieving comparable practical levels of security, but the combined protocol only has hybrid security.
The rest of the paper is organized as follows. In Sec. II we introduce some background knowledge. In Sec. III we introduce the first quantum protocol for XOT and explain why its security in the cheating case is not good. In Sec. IV, we introduce a composite quantum protocol for XOT, and explain why its security is partial for both parties. In Sec. V, we introduce a quantum protocol for evaluating general linear polynomials with much enhanced data security, under some assumption about the input distribution and input size. We also introduce the method to combine the protocol with classical XOR homomorphic encryption. The Sec. VI contains the conclusion and some open questions.
II. PRELIMINARIES
Lo [21] presented a no-go theorem for two-party secure quantum computation of generic publicly known classical functions with the output on one party only. The proof of the no-go theorem in the case of perfect data privacy assumes a purified and deterministic protocol. Buhrman et al [22] presented a similar claim on the security of twoparty quantum computation for publicly known classical functions in the case that both parties know the outcome, although with some limitations in the security notions.
Some notations are as follows. Denote |+ := 1 √ 2 (|0 + |1 ), and |− := 1 √ 2 (|0 − |1 ). Let I = diag(1, 1),
Z = diag(1, −1), X = 1 0 0 1 , and R z (θ) := diag(1, e iθ ).
When the symbol ⊕ is used between numbers, it represents addition modulo 2.
III. THE FIRST QUANTUM PROTOCOL FOR XOR OBLIVIOUS TRANSFER WITH LIMITED SECURITY
Firstly, we introduce the problem of 2BP in [5], with a slightly strengthened security requirement in the last sentence, and the roles of Alice and Bob swapped. The revised requirement is as follows: Bob has two secret input bits and he is willing to disclose some information about them to Alice, at her choosing. Alice must not be allowed to learn more than one bit of information on Bob's bits, but Alice is allowed to learn the value of any deterministic one-bit function of these two bits, such as their exclusive-or (XOR). Bob does not know what information Alice learns.
A quantum protocol for a restricted version of the above problem is presented as Protocol 1 below. The restriction is that the other function in the description of 2BP is now limited to the XOR only, that is, functions such as AND/OR are not allowed. If Alice's input bits x 1 and x 2 are promised to be not both zero, the protocol implements the functionality of XOT on classical inputs. The correctness is easily verified, and we describe the security characteristics below.
The data privacy of honest Alice's is perfect. After the Step 1 in the protocol, the three qubits are in a maximally mixed state in Bob's view, regardless of the values of x 1 , x 2 . Thus, the values of x 1 , x 2 are perfectly hidden from Bob.
The security of honest Bob's input is described as follows. If Alice is completely honest, she learns one of {y 1 , y 2 , y 1 ⊕ y 2 } completely, and she does not deterministically learn the other bit of Bob's. If she cheats by using an initial state entangled with key registers, she can get all information about Bob's two input bits. This is explained in the next paragraphs.
In the following discussion, we always assume Alice uses quantum registers called S 1 , S 2 , S 3 for storing s 1 , s 2 , s 3 , respectively, to maximize her information Protocol 1 Computing f = x 1 y 1 ⊕ x 2 y 2 with limited security.
Input: Alice has two input bits x1, x2; Bob has two input bits y1, y2. Output: The output of Alice equals x1y1 ⊕ x2y2.
1. Alice prepares three uniformly random key bits s1, s2, s3. Alice picks two qubits according to x1, x2: if x2 = 0, she picks the first and the third qubit; if x1 = 0 and x2 = 1, she picks the second and the third qubit; if x1 = x2 = 1, she picks the first two qubits. She prepares one of the four Bell states 1
√ 2 (|01 + |10 ), 1 √ 2 (|01 − |10 ), 1 √ 2 (|00 + |11 ), 1 √ 2 (|00 − |11
) on the two picked qubits, corresponding to (s1, s2) being (0, 0), (0, 1), (1, 0), (1, 1), respectively; on the remaining qubit, she prepares the |+ state, or the |− state, corresponding to s3 being 0 and 1, respectively. She sends the three qubits to Bob with explicit labels.
2. Bob receives the three qubits from Alice. He performs the Z y 1 gate on the first qubit, and the Z y 2 gate on the second qubit. Bob generates a uniformly random integer k ∈ {0, 1, 2, 3}. He performs Rz( kπ 2 ) on all three qubits. He measures all three qubits in the X basis. The measurement outcome corresponding to |+ is recorded as 0, and the outcome corresponding to |− is recorded as 1. He sends Alice the three outcome bits, as well as the bit k0 := k mod 2.
3. If x1 = x2 = 0, Alice outputs 0. Otherwise, she picks two of the three outcome bits according to x1, x2 in exactly the same way as she picked the initial qubits for preparing Bell states, and she calculates the XOR of these two bits and records it as r0, then she outputs r0 ⊕ s2 ⊕ (s1k0).
about Bob's input bits. For Alice to get as much information as possible, we assume that the initial quantum state of these three registers are all |+ , and then uses a coherent encoding process. It is implemented by turning the classically controlled encoding operation into a joint unitary on the key registers and the main registers (three qubits).
Let us consider the case that Alice learns about y 1 ⊕ y 2 deterministically. Firstly, consider the case that she follows the Step 1 of the protocol but with the initial superposed state in all key registers. Let us consider the case that she uses the default input x 1 = 1, x 2 = 1 (both values are 1 because y 1 and y 2 both appear in the expression of y 1 ⊕ y 2 ). Note that k 0 is known to Alice. In the case k 0 = 0, Alice can learn about the higher bit of k from the measurement outcome on the third qubit, and then in the following we will show that some information about y 1 would be present in the phase of final quantum state of S 1 , due to a phase-gathering effect from measurements. We discuss an example as follows:
The example is for x 1 = x 2 = 1, y 1 = 1, y 2 = 0, and k 0 = 0. The initial state 1 √ 2 (|01 + |10 ) (corresponding to s 1 = s 2 = 0) on the first two qubits would become
(−1) k1 √ 2 (|01 − |10 )(1)
after Bob's Z or I gates, and R z ( kπ 2 ) rotations, where k 1 is the higher bit in the binary form of Bob's key k. A simplest unitary model for the X-basis measurement on the state in (1) would give
(−1) k1 √ 2 (−| + − |01 + | − + |10 )(2)
as the output state, where the latter two qubits are for storing the measurement outcomes. After Bob does a measurement, the outcome effectively chooses a term in the above state. Suppose the first term remains, then the state of the latter two qubits are now |01 , the same as in the case s 1 = 1, s 2 = 0 except for a (−1) k1+1 phase. Then, we may regard the state of the latter two qubits as |01 , and a phase of (−1) k1+1 would be collected by the |0 state of Alice's S 1 register under the case s 2 = 0. On the other hand, suppose the second term remains, then similar statement holds with a phase of (−1) k1 . The expression in (2) would be almost the same as the output state for y 1 = 0, y 2 = 1 under such model with the same values of x 1 , x 2 , k 0 except that the "overall" phase is flipped (the word overall is in quotations since such phase and the state are still dependent on the state of S 1 ). But for s 1 = 1, s 2 = 0, the phases in front of the |1 in S 1 are the same for these two inputs of Bob, while there is no phase (−1) k1 . Thus, given that the initial state of S 1 is |+ , under the case s 2 = 0, Alice could use the phase information in S 1 in the end together with the received measurement outcomes to distinguish the two cases of y 1 = 1, y 2 = 0 and y 1 = 0, y 2 = 1, and determine y 1 . The case of s 2 = 1 is similar. If k 0 is switched to 1 in the above analysis, a cheating Alice can use the final state of her S 3 register, as well as the measurement outcome on the third qubit, to learn all information about k 1 . To do this, she needs to entangle her third qubit with S 3 initially, so that they are in the state |0 |+ + |1 |− when she sends it to Bob, and in the end she needs to measure S 3 in the Y basis. Then from following the discussion above, it can be found that Alice can distinguish the cases of y 1 = 1, y 2 = 0 and y 1 = 0, y 2 = 1. Thus she can learn about y 1 deterministically. On the other hand, if she does not entangle her third qubit initially, and just prepares the |+ or |− states, she can learn no information about k 0 , and thus she cannot learn about y 1 at all.
The cases that Alice learns about y 1 or y 2 deterministically are similar. In summary, a cheating Alice can always learn all information about y 1 , y 2 . The cheating is only in preparing the initial state of the key registers, and the encoding is as described in the protocol, except that it must be an overall unitary operation on the systems including the key registers. On the other hand, a completely honest Alice cannot deterministically learn any information except the 1 bit of information implied by the output.
IV. THE SECOND QUANTUM PROTOCOL FOR XOR OBLIVIOUS TRANSFER WITH PARTIAL SECURITY
In this section, we use a modified version of the Protocol 1, in which Bob does not send k 0 to Alice. Thus Alice does not get the function value in the end, and what she obtains is lacking the XOR with a term s 1 k 0 . We construct the Protocol 2 below using many instances of such modified protocol, but with some extra requirements: let Alice guarantee that she uses the same inputs x 1 , x 2 for these instances, with an even number of times that s 1 = 1. Let Bob guarantee that he uses the same k 0 for all these instances. Now the errors from the terms s 1 k 0 of these instances all cancel out. Thus in the end of the main protocol, Alice still does not know the value of k 0 deterministically. Therefore, she cannot determine both y 1 and y 2 deterministically.
For the overall protocol composed of n instances of the modified Protocol 1, Bob's effective input (y 1 , y 2 ) satisfies that each y j (j = 1, 2) is the XOR of the y Alice's data security in Protocol 2 is partial, and Bob has the following strategy to guess Alice's common input (x 1 , x 2 ) in all these instances: he performs a Z-basis measurement in all 3n qubits, and for each of the three possible combinations of (x 1 , x 2 ), he calculates the XOR of the two bits in the corresponding two positions in each instance, and then take the XOR over all n instances, and compare the final bit with the suspected result (related to the parity of n); if the result agrees with the suspected result, he believes there is some probability that the input was (x 1 , x 2 ), otherwise this combination of (x 1 , x 2 ) is certainty impossible. After he calculates all three cases of the possible (x 1 , x 2 ), he obtains a final probability distribution of (x 1 , x 2 ), and this represents partial information about the true value of (x 1 , x 2 ).
The reason that the above attack works is a joint result of these factors: Alice uses the same inputs x 1 , x 2 for all instances of the subprocedure; the form of the Bell states lead to the simple rules of the XOR of Z-basis measurement outcomes; the requirement in the protocol that the number of instances of s 1 = 1 is even; and that n is known to Bob. We note that Bob's such attack would in general make the computation result incorrect.
From the discussion about security in the previous section, and that the k 0 is not told Alice now, we deduce that in each subprocedure, Alice cannot deterministically learn k 0 nor k (i) 1 , although she may try to measure her key register S 3 in some suitable basis to get as much information about k (i) 1 as possible. Bob's security in the protocol is partial, and this is the joint result of the fact that he only sends back three bits, and the uncertainty principle of quantum mechanics. From the discussion about Protocol 1, Alice needs to measure the key register S 3 in the Z or Y basis to learn about k 1 in each instance of the subprocedure, where the choice of basis depends on k 0 . These are two incompatible bases. We note that by guessing the value of the common k 0 , Alice has one half chance of being correct in her guess, and on such occasions she may measure the key registers S 3 in the correct bases and know the k (i) 1 in each instance, and hence would know the overall effective
k 1 = n i=1 k (i)
1 . In such case she would know Bob's input bits y 1 , y 2 , by measuring her other two key registers in each instance. Hence Bob's data security in Protocol 2 is at best partial. From the method in [23], we estimate that Alice can learn about half of the n-bit information about the k (i) 1 . But the final quantity of interest is Alice's information about Bob's inputs y 1 , y 2 , so learning some bits among the k (i) 1 while being ignorant of others is not of much use for a cheating Alice. We suspect that her best strategy is to just guess the value of k 0 . Note Protocol 2 Computing f = x 1 y 1 ⊕ x 2 y 2 with partial security for both sides. (Restricting Alice's input to be nonzero gives partially-secure XOT.)
Input: Alice has two input bits x1, x2; Bob has two input bits y1, y2. Output: The output of Alice equals x1y1 ⊕ x2y2.
1. The two parties agree on an integer n, the number of instances of a subprocedure. The subprocedure is a modified Protocol 1, where the modification is that Bob does not send k0 to Alice, and accordingly Alice's calculation in the end does not include the s1k0 term; and Bob's k0 are the same among all instances of the subprocedure. Alice performs the
Step 1 of the n instances of the subprocedure, with the extra requirements that the inputs x1 and x2 are the same among instances, and the number of instances of s1 = 1 is even. She sends the 3n qubits to Bob with explicit labels.
2. Bob generates the bits y 2 ) on all three qubits of the i-th instance of the subprocedure. He measures all 3n qubits in the X basis. The measurement outcome corresponding to |+ is recorded as 0, and the outcome corresponding to |− is recorded as 1. He takes the XOR of the corresponding outcome bits of all instances of the subprocedure, and obtains three combined outcome bits. He sends Alice the 3 combined outcome bits.
3. If x1 = x2 = 0, Alice outputs 0. Otherwise, she picks two of the three combined outcome bits according to x1, x2 in exactly the same way as she picked the initial qubits for preparing Bell states, and then she calculates the XOR of these two bits and records it as R0. She calculates S2 = n j=1 s j . If prior entanglement with ancilla registers are considered, it may be hard to apply existing methods to analyze the problem. We leave the detailed analysis to further work.
The partial security of Bob's data can be enhanced in the implementation of 1-2 OT from XOT using the method in [5], but Alice's security gets worse as the used number of instances of the XOT protocol increases. Since Alice's security is partial in Protocol 2 here, we expect that in the implemented 1-2 OT, the security of Alice is quite bad. That is why we introduce the Protocol 3 below, which is for implementing general linear polynomials rather than a two-term linear polynomial.
V. A QUANTUM PROTOCOL FOR EVALUATING LINEAR POLYNOMIALS
The following Protocol 3 is for evaluating general linear polynomials with much enhanced security for Alice. The linear polynomial has 2n terms, e.g. f = 2n j=1 x j y j mod 2, and each instance of the subprocedure (which is modified from Protocol 1) implements two terms. Alice's security is nearly perfect if the input distribution of {x j } is uniform and the input size is large, but is much worse if the input distribution is not uniform, or when n is very small. Hence, there is no contradiction with the partial security of the Protocol 2, if we regard the current protocol with some particular biased input distribution as a special variant of Protocol 2.
The reason why Alice's security in Protocol 3 is good for generic input distributions is that Bob can no longer infer the information about Alice's input in the instances by looking at the Z-basis measurement outcomes (or any other similar single-qubit measurement, or Bellstate measurements), and the latter is due to that Alice changes her input from instance to instance, so there is a large chance Bob measured on a random |+ or |− state (effectively a maximally mixed qubit state) in at least one instance. Thus when he takes the XOR among instances, he does not know much information, except when Alice's input among the instances are likely to be all equal. The last condition explains why when the input distribution is very biased, Bob may obtain some partial information about Alice's input {x j }. When n is very small, there is a non-negligible chance that Alice's input among the instances are equal, and thus Bob's cheating measurement strategy would work to some degree.
Bob's security in Protocol 3 is partial, and the reason Protocol 3 Computing f = 2n j=1 x j y j mod 2 with enhanced security for Alice and partial security for Bob.
Input: Alice has 2n input bits xj, and Bob has 2n input bits yj, where j = 1, . . . , 2n.
Output: The output of Alice equals 2n j=1 xjyj mod 2. 1. The two parties both know the integer n, the number of instances of the subprocedure, from the form of the linear polynomial. The subprocedure is a modified Protocol 1, where the modification is that Bob does not send k0 to Alice, and accordingly Alice's calculation in the end does not include the s1k0 term; and Bob's k0 are the same among all instances of the subprocedure. Alice performs the Step 1 of the n instances of the subprocedure, with the extra requirement that the number of instances of s1 = 1 is even. She sends the 3n qubits to Bob with explicit labels.
2. Bob receives the 3n qubits from Alice. He performs the Z y 2i−1 gate on the first qubit, and the Z y 2i gate on the second qubit of the i-th instance of the subprocedure. Bob generates a uniformly random bit k0 which is the same for all subprocedures. He generates uniformly random bits k (i) 1 for i ∈ {1, . . . , n}. Then he calculates k (i) = 2k
(i) 1 + k0 for each i. He performs Rz( k (i) π
2 ) on all three qubits of the i-th instance of the subprocedure. He measures all 3n qubits in the X basis. The measurement outcome corresponding to |+ is recorded as 0, and the outcome corresponding to |− is recorded as 1. He sends Alice the 3n outcome bits.
3. Alice receives the outcome bits from Bob. For each i, if x2i−1 = x2i = 0, she sets r (i) 0 = 0; otherwise, she picks two of the three outcome bits for this instance according to x2i−1, x2i in exactly the same way as she picked the initial qubits for preparing Bell states, and then she calculates the XOR of these two bits and records it as r is similar to that in the analysis of Protocol 2. Such security may be improved by repeating the Protocol 3 several times with the same x j but different y j (the XOR of them may give the true y j for the target linear polynomial), but of course this is at the cost of harming Alice's security, especially when the input distribution of {x j } is biased. Sometimes we may choose to just run Protocol 3 once. This may be acceptable when Bob's security is not regarded as important or when the protocol is part of a larger computation so that Bob's input in the part of the circuit does not carry much information for the overall implemented circuit (due to that there are many different recompiled forms of the overall circuit).
All protocols introduced above can in principle be used together with a classical XOR-homomorphic encryption scheme, to enhance the practical level of security and lower the quantum costs. We describe how Protocol 3 can be used in conjunction with a classical XOR-homomorphic encryption scheme. In the last step of Protocol 3, when the {x_j} are not all zero, Alice needs to calculate R_0, and since it is the XOR of many bits (at most 2n bits), it can be understood as the evaluation of a linear polynomial with n terms. This calculation is easy to perform under a classical XOR-homomorphic encryption scheme: Bob sends the encryptions of the outcome bits to Alice, and Alice does the homomorphic XOR calculations. She then homomorphically takes the XOR of the result with an encrypted mask bit of her choice, and sends the result to Bob for decryption. The decrypted bit can be sent back to Alice for her to recover the final result. The many instances of the subprocedure jointly call the classical XOR-homomorphic encryption scheme once (this is possible because r_0 appears as a separate term in Alice's output in the last step of Protocol 1). Thus, Alice only learns the 1-bit information about the XOR of all r_0 from different instances, provided the classical computationally-secure encryption is not broken; and this is essentially the information contained in the output of the function.
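To make the masking step above concrete, the following is a minimal sketch, assuming a toy one-time-pad-style XOR-homomorphic scheme purely for illustration (a real deployment would use a computationally secure XOR-homomorphic scheme); the outcome bits and the number of instances are hypothetical.

```python
# Toy illustration of the XOR-homomorphic masking step: Enc_k(b) = b XOR k, so
# XORing ciphertexts yields an encryption of the XOR of the plaintexts under the
# XOR of the keys, which is all the step above needs.
import secrets

def encrypt(bit: int, key: int) -> int:
    return bit ^ key

def decrypt(cipher: int, key: int) -> int:
    return cipher ^ key

# Bob encrypts his outcome bits (one r0 per instance) and sends the ciphertexts.
outcome_bits = [1, 0, 1, 1]                       # hypothetical r0 bits
keys = [secrets.randbits(1) for _ in outcome_bits]
ciphertexts = [encrypt(b, k) for b, k in zip(outcome_bits, keys)]

# Alice homomorphically XORs the ciphertexts and masks the result with her bit.
mask = secrets.randbits(1)
masked_cipher = mask
for c in ciphertexts:
    masked_cipher ^= c

# Bob decrypts with the XOR of his keys and returns the plaintext to Alice.
key_xor = 0
for k in keys:
    key_xor ^= k
decrypted = decrypt(masked_cipher, key_xor)

# Alice removes her mask to recover R0, the XOR of all r0 bits.
R0 = decrypted ^ mask
assert R0 == outcome_bits[0] ^ outcome_bits[1] ^ outcome_bits[2] ^ outcome_bits[3]
```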
VI. CONCLUSION
A partially secure quantum protocol for XOR oblivious transfer has been proposed as Protocol 2, and a protocol for evaluating general linear polynomials is given as Protocol 3. Alice's data security in the latter protocol improves as the size of the polynomial increases, provided that the input distribution is nearly uniform. In the case of a single use of Protocol 3 applied to classical input, it provides asymptotic information-theoretic data security for Alice's input under the above assumptions, while the security of Bob's input is partial. We also mentioned that the protocols can be used together with a classical XOR-homomorphic encryption scheme to achieve hybrid security while having lower quantum costs compared to the case where statistical security is required.
There are some unresolved questions: how to prove an upper bound for Bob's data leakage in Protocol 3 in the case that Alice entangles her quantum state with some ancilla; whether there is a protocol with similar functionality but with fewer qubits; whether there exists a protocol for generating a precomputed version of the XOT or linear polynomial evaluation with similar security characteristics; whether there are better ways to use the XOT in two-party classical computations or cryptographic tasks; what new assertions can be stated for secure two-party computation with quantum inputs or quantum outputs (note that requiring the overall protocol to be non-interactive might limit the uses of our protocols).
In a variant of Protocol 3, Bob replaces his fixed input bits by random bits y_j^{(i)}, i in {1, . . . , n}, j in {1, 2}, which are random except for the requirement that y_j = sum_i y_j^{(i)} mod 2 for j = 1, 2 (y_j^{(i)} being the j-th bit of Bob's input in the i-th instance of the subprocedure); he then applies the Z^{y_1^{(i)}} gate to the first qubit and the Z^{y_2^{(i)}} gate to the second qubit of the i-th instance, with the remaining steps unchanged, and Alice again outputs R_0 XOR S_2. In this variant there are large hidden key registers: Alice's choice of the instances of the subprocedure with s_1 = 1, and Bob's choice of the random bits among the y_j^{(i)}.
| [] |
[
"Knowledge Graph Embedding with Electronic Health Records Data via Latent Graphical Block Model",
"Knowledge Graph Embedding with Electronic Health Records Data via Latent Graphical Block Model"
] | [
"Junwei Lu [email protected] \nDepartment of Biostatistics\nHarvard T.H. Chan School of Public Health\n\n",
"Ying Jin \nDepartment of Statistics\nStanford University\n\n",
"Tianxi Cai [email protected] \nDepartment of Biostatistics\nSchool of Public Health\nHarvard T.H. Chan\n",
"‡ "
] | [
"Department of Biostatistics\nHarvard T.H. Chan School of Public Health\n",
"Department of Statistics\nStanford University\n",
"Department of Biostatistics\nSchool of Public Health\nHarvard T.H. Chan"
] | [] | Due to the increasing adoption of electronic health records (EHR), large scale EHRs have become another rich data source for translational clinical research. The richness of the EHR data enables us to derive unbiased knowledge maps and large scale semantic embedding vectors for EHR features, which are valuable and shareable resources for translational research. Despite its potential, deriving generalizable knowledge from EHR data remains challenging. First, EHR data are generated as part of clinical care with data elements too detailed and fragmented for research. Despite recent progress in mapping EHR data to common ontology with hierarchical structures, much development is still needed to enable automatic grouping of local EHR codes to meaningful clinical concepts at a large scale. Second, the total number of unique EHR features is large, imposing methodological challenges to derive reproducible knowledge graph, especially when interest lies in conditional dependency structure. Third, the detailed EHR data on a very large patient cohort imposes additional computational challenge to deriving a knowledge network. To overcome these challenges, we propose to infer the conditional dependency structure among EHR features via a latent graphical block model (LGBM). The LGBM has a two layer structure with the first providing semantic embedding vector (SEV) representation for the EHR features and the second overlaying a graphical block model on the latent SEVs. The block structures on the graphical model also allows us to cluster synonymous features in EHR. We propose to learn the LGBM efficiently, in both statistical and computational sense, based on the empirical point mutual information matrix. We establish the statistical rates of the proposed estimators and show the perfect recovery of the block structure. Numerical results from simulation studies and real EHR data analyses suggest that the proposed LGBM estimator performs well in finite sample. | null | [
"https://export.arxiv.org/pdf/2305.19997v1.pdf"
] | 258,987,768 | 2305.19997 | feb3e98e1da916162a63c2773ae4ce7a17a6028a |
Knowledge Graph Embedding with Electronic Health Records Data via Latent Graphical Block Model
Junwei Lu [email protected]
Department of Biostatistics
Harvard T.H. Chan School of Public Health
Ying Jin
Department of Statistics
Stanford University
Tianxi Cai [email protected]
Department of Biostatistics
School of Public Health
Harvard T.H. Chan
‡
Knowledge Graph Embedding with Electronic Health Records Data via Latent Graphical Block Model
Due to the increasing adoption of electronic health records (EHR), large scale EHRs have become another rich data source for translational clinical research. The richness of the EHR data enables us to derive unbiased knowledge maps and large scale semantic embedding vectors for EHR features, which are valuable and shareable resources for translational research. Despite its potential, deriving generalizable knowledge from EHR data remains challenging. First, EHR data are generated as part of clinical care with data elements too detailed and fragmented for research. Despite recent progress in mapping EHR data to common ontology with hierarchical structures, much development is still needed to enable automatic grouping of local EHR codes to meaningful clinical concepts at a large scale. Second, the total number of unique EHR features is large, imposing methodological challenges to derive reproducible knowledge graph, especially when interest lies in conditional dependency structure. Third, the detailed EHR data on a very large patient cohort imposes additional computational challenge to deriving a knowledge network. To overcome these challenges, we propose to infer the conditional dependency structure among EHR features via a latent graphical block model (LGBM). The LGBM has a two layer structure with the first providing semantic embedding vector (SEV) representation for the EHR features and the second overlaying a graphical block model on the latent SEVs. The block structures on the graphical model also allows us to cluster synonymous features in EHR. We propose to learn the LGBM efficiently, in both statistical and computational sense, based on the empirical point mutual information matrix. We establish the statistical rates of the proposed estimators and show the perfect recovery of the block structure. Numerical results from simulation studies and real EHR data analyses suggest that the proposed LGBM estimator performs well in finite sample.
Introduction
The increasing adoption of electronic health records (EHR) has led to EHR data becoming a rich data source for translational clinical research. Detailed longitudinal clinical information on broad patient populations have allowed researchers to derive comprehensive prediction models for translational and precision medicine research (Lipton et al., 2015;Choi et al., 2016a,c;Rajkomar et al., 2018;Ma et al., 2017, e.g.). The longitudinal co-occurrence patterns of EHR features observed on a large number of patients have also enabled researchers to derive comprehensive knowledge graphs and large scale representation learning of semantic embedding vectors to represent EHR features (Che et al., 2015;Choi et al., 2016d,b;Miotto et al., 2016), which are valuable and shareable resources for translational clinical research.
There is a rich literature on machine learning approaches to deriving knowledge graphs (Che et al., 2015; Choi et al., 2016d). Some traditional approaches to learning knowledge graphs, such as Nickel et al. (2011); Bordes et al. (2013); Wang et al. (2014); Lin et al. (2015), only consider the information on the connectivity and relationship between the nodes. The symbolic nature of the graph, while useful, makes it challenging to manipulate. To overcome these challenges, several useful knowledge graph embedding methods, which aim to embed entities and relations into continuous vector spaces, have found much success in recent years (Du et al., 2018; Nickel et al., 2011; Nguyen et al., 2017; Shang et al., 2019; Bordes et al., 2013, e.g.). Most of these existing methods require training on patient-level longitudinal data, which would be both computationally prohibitive and subject to data sharing constraints due to patient privacy.
In addition, these methods do not address another challenge arising from EHR data elements being overly detailed with many near-synonymous features that need to be grouped into a broader concept for better clinical interpretation (Schulam et al., 2015). For example, among COVID-19 hospitalized patients, 4 distinct LOINC (Logical Observation Identifiers Names and Codes) codes are commonly used for C-reactive protein (CRP) at Mass General Brigham (MGB), and we manually grouped them to represent CRP to track the progression of COVID patients Brat et al. (2020).
Valuable hierarchical ontologies, such as the "PheCodes" from the phenome-wide association studies (PheWAS) catalog for grouping of International Classification of Diseases (ICD) codes, the clinical classification software (CCS) for grouping of the Current Procedural Terminology (CPT) codes, RxNorm (Prescription for Electronic Drug Information Exchange) hierarchy for medication, and LOINC hierarchy for laboratory tests, have been curated via tremendous manual efforts (Steindel et al., 2002;Healthcare Cost and Utilization Project, 2017;Wu et al., 2019). However, these mappings are incomplete and only applicable to codes that have been mapped to common ontology.
These challenges motivate us to propose a latent graphical block modeling (LGBM) approach to knowledge graph embedding with EHR data. The LGBM has a two-layer structure linked by the latent semantic embedding vectors (SEVs) that represent the EHR features. The co-occurrence patterns of the EHR features are generated from the first layer of a hidden Markov model parametrized by the latent SEVs, similar to those proposed in Arora et al. (2016). The conditional dependency structure of the SEVs is encoded by the second layer of a vector-valued block graphical model. Specifically, let V_w ∈ R^p be the SEV of feature w ∈ V, where V is the corpus of all features. The vector-valued graphical model associates a feature network to the vectors by assuming that there is an edge between codes j and k if and only if V_j and V_k are not conditionally independent given all the other vectors {V_w}_{w ≠ j,k}. The proposed LGBM has several key advantages over existing methods.
First, we learn the LGBM from a co-occurrence matrix of EHR features, which only involve simple summary statistics that can be computed at scale, overcoming both computational and privacy constraints. Second, the LGBM characterizes the conditional dependence structure of the EHR features, not marginal relationships. Third, the learned block structure also enables automatic grouping of near synonymous codes, which improves both interpretability and reproducibility.
There is a rich statistical literature on vector-valued graphical models with high dimensional features (Yuan and Lin, 2007; Rothman et al., 2008; Friedman et al., 2008; d'Aspremont et al., 2008; Fan et al., 2009; Lam and Fan, 2009; Yuan, 2010; Cai et al., 2011b; Liu and Wang, 2017; Zhao and Liu, 2014; Kolar et al., 2014; Du and Ghosal, 2019, e.g.). The block structured graphical model has also been extensively studied by, e.g., Bunea et al. (2020); Eisenach et al. (2020); Eisenach and Liu (2019). For example, Bunea et al. (2020) proposed the G-block model which assumes that the weighted matrix of the graph is block-wise constant, and they estimate the network by a convex relaxation optimization. Eisenach et al. (2020) considered the inference of the block-wise constant graphical model. On the other hand, these graphical modeling methods largely require observations on the nodes of the graph and/or the random vectors on the nodes and hence are not applicable to the EHR setting where only co-occurrence patterns of the codes are observed. The latent structure of the LGBM makes it substantially more challenging to analyze the theoretical properties of the estimated network. Although a few estimators have also been proposed for latent graphical models (Choi et al., 2011; Bernardo et al., 2003; Chandrasekaran et al., 2010; Wu et al., 2017; Bunea et al., 2020; Eisenach et al., 2020; Eisenach and Liu, 2019), they require subject-level data and are computationally intensive. In this paper, we propose a two-step approach to efficiently learn the LGBM to infer about the conditional dependence and grouping structures of EHR features based on the summary level co-occurrence matrix. We first conduct spectral decomposition on an empirical pointwise mutual information (PMI) matrix derived from the co-occurrence matrix to learn the SEVs for the features. In the second step, we obtain a block graphical model estimator on the representation vectors learned from the first step. We establish statistical rates of the learned PMI matrix and knowledge network. The remainder of the paper is organized as follows. We detail the LGBM assumptions in Section 2 and present statistical methods for estimating the SEVs along with the block graphical model in Section 3. Theoretical properties of our proposed estimators are discussed in Section 4, including the estimation precision and clustering recovery. Section 5 applies our method to both synthetic simulations and a real EHR data set to learn a knowledge graph based on a large number of codified EHR features. Although our methods are generally applicable to both codified and narrative features, we describe our methods below in the context of EHR codes to be more concrete.
Latent Graphical Block Model
In this section, we detail a two-layer generative process for the proposed LGBM, with the two layers linked by the latent $p$-dimensional SEVs for the $d$ EHR codes, denoted by $\mathbf{V}_{d\times p} = [V_1, \ldots, V_d]^{\mathsf{T}}$, where $V_j$ is the SEV for the $j$th code and, without loss of generality, we index the $d$ codes as $\mathcal{V} = [d] \equiv \{1, \ldots, d\}$. The first layer of the LGBM is a hidden Markov model for the code sequence given the SEVs, and the second layer is a latent Gaussian graphical block model that encodes the joint distribution of $\{V_j, j \in [d]\}$.
Overview of the Hidden Markov Model
We first give a high-level picture of how the longitudinal EHR data are assumed to be generated from a hidden Markov model. Consider an observed length-$T$ longitudinal sequence of EHR features from a patient, $w_1, w_2, \ldots, w_T \in \mathcal{V}$. We assume that the $t$th code, $w_t$, is generated from a hidden Markov model (Arora et al., 2016): it takes the value $j \in \mathcal{V} = [d]$ with probability $\mathbb{P}(w_t = j \mid c_t) \propto e^{\langle V_j, c_t\rangle}$, where $V_j = (V_{j1}, \ldots, V_{jp})^{\mathsf{T}} \in \mathbb{R}^p$ is the latent SEV of code $j$ with $V_{\cdot\ell} = (V_{1\ell}, \ldots, V_{d\ell})^{\mathsf{T}}$ following a Gaussian prior $V_{\cdot\ell} \overset{iid}{\sim} N(0, \Sigma)$, and $\{c_t\}_{t\ge1}$ is the discourse vector, which follows the hidden Markov process
$$z_2 = \sqrt{\alpha}\cdot z_1/\|z_1\|_2 + \sqrt{1-\alpha}\cdot r_2; \qquad z_{t+1} = \sqrt{\alpha}\cdot z_t + \sqrt{1-\alpha}\cdot r_{t+1}, \quad c_t = z_{t+1}/\|z_{t+1}\|_2 \text{ when } t \ge 2, \tag{2.1}$$
where $\alpha = 1 - \log d/k^2$, $r_t \overset{i.i.d.}{\sim} N(0, I_k/k)$ for all $t \ge 2$, and $r_{t+1}$ is independent of $z_1, \cdots, z_t$.
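For intuition, here is a minimal simulation sketch of the generative process in (2.1), assuming a small hypothetical corpus; the embedding matrix is drawn i.i.d. purely for illustration (rather than from the block model introduced below), and the dimension $k$ in (2.1) is identified with $p$.

```python
# Simulate the discourse random walk on the unit sphere and emit codes with
# probability proportional to exp(<V_j, c_t>), as in (2.1).
import numpy as np

rng = np.random.default_rng(0)
d, p, T = 200, 50, 10_000                 # hypothetical corpus sizes
V = rng.normal(scale=1.0 / np.sqrt(p), size=(d, p))   # latent SEVs (illustration only)
alpha = 1.0 - np.log(d) / p**2            # mixing rate from (2.1), with k = p

z = rng.normal(size=p) / np.sqrt(p)
codes = np.empty(T, dtype=int)
for t in range(T):
    z = np.sqrt(alpha) * z + np.sqrt(1.0 - alpha) * rng.normal(size=p) / np.sqrt(p)
    c = z / np.linalg.norm(z)             # discourse vector on the unit sphere
    logits = V @ c
    prob = np.exp(logits - logits.max())
    prob /= prob.sum()
    codes[t] = rng.choice(d, p=prob)      # emit one code at time t
```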
From the hidden Markov model in (2.1), we generate the code sequence $w_1, \ldots, w_T \in \mathcal{V}$. We will show that, under the model in (2.1), the dependency structure of the latent SEVs can be connected with the code sequence via the point-wise mutual information (PMI) matrix. Specifically, we will show that under the hidden Markov model with the Gaussian assumption on the distribution of $V_{\cdot\ell}$, the PMI matrix is close to the covariance matrix of $V_{\cdot\ell}$:
$$\|\mathrm{PMI} - \Sigma\|_{\max} = O\big(\sqrt{\log d/p}\big), \tag{2.2}$$
where PMI is the population PMI matrix defined based on the so-called co-occurrence matrix of the longitudinal code occurrence data $\{w_t\}_{t\ge1}$. Define the context of a feature $w_t$ as the codes occurring within $q$ days of time $t$, denoted as $c_q(w_t) = \{w_s : |s - t| \le q\}$. Given a pair of codes $w, w' \in \mathcal{V}$, we define the co-occurrence between $w$ and $w'$, similar to Beam et al. (2020), as the number of times the feature $w'$ occurs in the context of $w$, denoted by
$$C(w, w') = \big|\{(t, s) : |t - s| \le q \text{ and } w = w_t, w' = w_s, \ t = 1, \ldots, T\}\big|. \tag{2.3}$$
We denote the co-occurrence matrix as $\mathbf{C} = [C(j, j')] \in \mathbb{R}^{d\times d}$. The PMI between codes $w, w'$ is defined as
$$\mathrm{PMI}(w, w') = \log \frac{N \cdot N(w, w')}{N(w, \cdot)\, N(w', \cdot)}, \tag{2.4}$$
where $N(w, w')$ is the expectation of $C(w, w')$, and the marginal occurrences are $N(w, \cdot) = \sum_{c\in\mathcal{V}} N(w, c)$ and $N = \sum_{w\in\mathcal{V}}\sum_{w'\in\mathcal{V}} N(w, w')$.
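As an illustration, the following sketch accumulates the co-occurrence counts in (2.3) for a single patient's integer-encoded code sequence with window size $q$ (in practice the counts are summed over all patients); the sequence and sizes are synthetic stand-ins.

```python
# Accumulate C(w, w') over all ordered pairs within distance q, in both directions.
import numpy as np

def cooccurrence(codes: np.ndarray, d: int, q: int) -> np.ndarray:
    C = np.zeros((d, d))
    T = len(codes)
    for t in range(T):
        for s in range(t + 1, min(t + q + 1, T)):
            C[codes[t], codes[s]] += 1
            C[codes[s], codes[t]] += 1
    return C

# A synthetic stand-in for one patient's integer-encoded code sequence.
codes = np.random.default_rng(3).integers(0, 200, size=5000)
C = cooccurrence(codes, d=200, q=30)
```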
Remark 2.1. Arora et al. (2016) show a similar relationship to (2.2). However, one of the major differences between the results in Arora et al. (2016) and our model is that they assumed all code vectors are independent of each other, while our representation vectors are generated from the block graphical model described below, which enables us to characterize the dependency structure between the codes. Moreover, they did not characterize the specific rate for the distance between the PMI and the covariance matrix.
Graphical Block Model for the Latent Embedding Vectors
In this section, we will describe the latent vector-valued block graphical model that captures the conditional dependency structure of {V i , i ∈ [d]} as well as the unknown block structure that encodes information on which codes are near-synonymous. Specifically, we assume that
$$V_i \mid \mathbf{V}_{-i,\cdot} \sim N\Big(\sum_{j\in[d]\setminus\{i\}} B_{ij} V_j, \ \sigma^2 I_p\Big) \quad \text{for all } i \in [d] \text{ and some } \sigma > 0. \tag{2.5}$$
Let $\vec{V} = (V_{\cdot1}^{\mathsf{T}}, \ldots, V_{\cdot p}^{\mathsf{T}})^{\mathsf{T}}$ denote the vector concatenating the columns of $\mathbf{V}$. We can show (see Appendix A.1 for details) that (2.5) is satisfied if $\vec{V}$ follows the multivariate normal distribution
$$\vec{V} \sim N(0, I_p \otimes \Sigma) \quad \text{with} \quad \Sigma = \Omega^{-1}, \tag{2.6}$$
where $\Omega = \sigma^{-2}(I_d - B)$, the identity matrix comes from the variance in (2.5), and $B = [B_{ij}]$ is the same as in (2.5). Under (2.6), we show in Lemma A.1 that the conditional distribution of $V_i \mid \mathbf{V}_{-i,\cdot}$ follows (2.5). Therefore, (2.6) defines a vector-valued Gaussian graphical model such that the node of the graph for code $i$ is represented by the corresponding SEV $V_i$, and code $i$ is connected to code $j$ if and only if $B_{ij} \neq 0$. We assume that $B$ is sparse to enable recovery of the code dependency structure. From (2.5) and (2.6), we can see that $V_i$ is conditionally independent of $V_j$ given all $\{V_k\}_{k\neq i,j}$ if and only if $B_{ij} = 0$. Thus the dependency structure is encoded by the support of $B$, which is assumed to be sparse.
In order to characterize the synonymous structure, we further impose a block-structured Gaussian graphical model on the SEVs such that
$$V_{\cdot\ell} = \mathbf{A}_{d\times K} Z_\ell + E_\ell, \quad \text{with } Z_\ell \overset{iid}{\sim} N(0, Q), \ E_\ell \overset{iid}{\sim} N(0, \Gamma), \quad \text{for } \ell = 1, \ldots, p, \tag{2.7}$$
where the covariance matrix $Q_{K\times K}$ is positive definite and $\Gamma_{d\times d} = \mathrm{diag}(\Gamma_{11}, \ldots, \Gamma_{dd})$ with $\Gamma_{jj} > 0$ for $j \in [d]$. The clustering assignment matrix $\mathbf{A} = [A_{ik}] = [\mathbb{I}(i \in G_k)]$ essentially encodes the partition of $[d]$ into disjoint sets $G = \{G_1, \ldots, G_K\}$ with $[d] = \cup_{k=1}^K G_k$. The codes within each group $G_k$ are considered near-synonymous and can be grouped into a clinically meaningful code concept. With the additional block structure, we may express the covariance structure as
$$\Sigma = \Omega^{-1} = \sigma^2(I_d - B)^{-1} = \mathbf{A} Q \mathbf{A}^{\mathsf{T}} + \Gamma. \tag{2.8}$$
Under this model, embedding vectors for codes within a cluster share similar behavior. To see this, let $\mathbf{Z} = (Z_1, \ldots, Z_p)$ and $\mathbf{E} = (E_1, \ldots, E_p)$. Then $\mathbf{V} = \mathbf{A}\mathbf{Z} + \mathbf{E}$, and hence $\{V_i = Z_{k\cdot} + E_{i\cdot}, i \in G_k\}$ share the same center $Z_{k\cdot}$. Moreover, according to the decomposition in (2.8), we have $\Sigma_{ij} = Q_{g_i g_j}$, meaning that the noiseless part of the large covariance matrix $\Sigma$ is entrywise identical within groups. We aim to leverage the structure of $\Sigma$ to improve the estimation of $B$ and infer the dependency structure of the codes.
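The following is a minimal sketch of sampling from the block model (2.7)-(2.8), with hypothetical sizes, cluster assignments, and covariances chosen only for illustration.

```python
# Sample latent SEVs V = A Z + E: codes in the same group share the cluster-level
# signal Z_l in every embedding coordinate l, as in (2.7)-(2.8).
import numpy as np

rng = np.random.default_rng(1)
d, K, p = 200, 8, 50
labels = rng.integers(0, K, size=d)           # cluster assignment g(i)
A = np.zeros((d, K))
A[np.arange(d), labels] = 1.0                 # assignment matrix A_ik = I(i in G_k)

Q = 0.5 * np.eye(K) + 0.1                     # positive-definite cluster covariance
Gamma = np.diag(rng.uniform(0.1, 0.3, size=d))

Z = rng.multivariate_normal(np.zeros(K), Q, size=p).T       # K x p
E = rng.multivariate_normal(np.zeros(d), Gamma, size=p).T   # d x p
V = A @ Z + E                                 # d x p matrix of SEVs
Sigma = A @ Q @ A.T + Gamma                   # implied covariance, cf. (2.8)
```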
The block model in (2.8) is not identifiable due to the additivity of $\mathbf{A}Q\mathbf{A}^{\mathsf{T}}$ and $\Gamma$. For the identifiability of the model, we introduce the following definition of a feasible covariance.

Definition 2.2. Define the cluster gap of any positive definite matrix $\Sigma$ induced by a clustering $G$ as
$$\Delta(\Sigma, G) = \min_{g_i \neq g_j} \max_{\ell \neq i,j} |\Sigma_{i\ell} - \Sigma_{j\ell}|.$$
A partition $G$, along with its associated assignment matrix $\mathbf{A}$ and decomposition $\Sigma = \mathbf{A}Q\mathbf{A}^{\mathsf{T}} + \Gamma$, is feasible if $\Delta(\Sigma, G) > 0$ and $\Gamma$ is diagonal with $\Gamma_{ii} = 0$ for all $i$ such that $|G_{g_i}| = 1$.
The decomposition of $\Sigma$ by $G$ implies that items belonging to the same cluster behave in the same way. In a feasible partition and decomposition, $\Delta(\Sigma, G) > 0$ means that the matrix $\Sigma$ can be accurately separated by $G$, and codes in the same cluster will not be separated into distinct groups. Setting $\Gamma_{ii} = 0$ for $|G_{g_i}| = 1$ removes noise from singleton codes to ensure identifiability, and hence allows us to remove the requirement of a minimum cluster size of at least 2 needed in Bunea et al. (2020). Formally, we have the following result on the identifiability of our model, saying that two feasible decompositions must be identical. We leave the proof of the proposition to Appendix A.2.
Proposition 2.1 (Identifiability of block model for code clusters). Define the assignment $g_i = k$ iff $i \in G_k$. Let the true decomposition be $\Sigma = \mathbf{A}Q\mathbf{A}^{\mathsf{T}} + \Gamma$ with true partition $G$, so that $G$, $\mathbf{A}$ and the decomposition are feasible. Then for any partition $\widetilde{G}$ with assignment matrix $\widetilde{\mathbf{A}}$ and decomposition $\Sigma = \widetilde{\mathbf{A}}\widetilde{Q}\widetilde{\mathbf{A}}^{\mathsf{T}} + \widetilde{\Gamma}$ that is also feasible, it holds that $\widetilde{G} = G$, $\widetilde{Q} = Q$, $\widetilde{\Gamma} = \Gamma$.
Estimation of the Latent Code Graphical Model
To estimate the precision matrix for the graphical model, we first obtain estimators for $\Sigma$ and then use the CLIME estimator of Cai et al. (2011a) to estimate the precision matrix $\Omega$. We propose a three-step estimator for $\Sigma$ and $\mathbf{A}$. In step (I), we obtain an initial estimator for $\Sigma$, $\widehat{\Sigma}_{\rm ini}$, based on the co-occurrence matrix. In step (II), we perform clustering of the codes based on $\widehat{\Sigma}_{\rm ini}$ to obtain $\widehat{\mathbf{A}}$ and the resulting $\widehat{G} = \{\widehat{G}_1, \ldots, \widehat{G}_K\}$. Finally, in step (III), we update the estimate of $\Sigma$ as $\widehat{\Sigma}$ by leveraging the estimated group structure $\widehat{G}$.
Step I: If $\{V_i, i = 1, \ldots, d\}$ were observed, $\Sigma$ could be estimated empirically using the empirical covariances of $V_i$ and $V_j$. However, since the $V_i$ are latent, we instead estimate $\Sigma$ directly from the co-occurrence matrix $\mathbf{C}_{d\times d} = [C_{jj'}] = [C(j, j')]$ with $C(j, j')$ calculated as in (2.3) across all patients. From $\mathbf{C}$, we derive the shifted and truncated empirical PMI matrix as an estimator for $\Sigma$. Specifically, we define the PMI matrix estimator as $\widehat{\mathrm{PMI}}_{d\times d} = [\widehat{\mathrm{PMI}}(j, j')]$ with
$$\widehat{\mathrm{PMI}}(j, j') = \log \frac{C \cdot C(j, j')}{C(j, \cdot)\, C(j', \cdot)}, \quad \text{where } C(j, \cdot) = \sum_{j'\in\mathcal{V}} C(j, j'), \ \ C = \sum_{j, j'\in\mathcal{V}} C(j, j'), \tag{3.1}$$
and the shifted positive PMI matrix estimator as
$$\widehat{\mathrm{SPPMI}} = [\widehat{\mathrm{SPPMI}}(j, j')] \quad \text{with} \quad \widehat{\mathrm{SPPMI}}(j, j') = \max\big\{\widehat{\mathrm{PMI}}(j, j'), \ \eta\big\}, \tag{3.2}$$
where $\eta > -\infty$ is a threshold used in practice to prevent the values of $\widehat{\mathrm{SPPMI}}$ from being minus infinity. In our theoretical analysis, we will show that $\widehat{\mathrm{PMI}}(j, j')$ is lower bounded by some constant with high probability under appropriate assumptions, so $\max\{\widehat{\mathrm{PMI}}(j, j'), \eta\}$ is closer to the truth than $\widehat{\mathrm{PMI}}(j, j')$ with high probability if $\eta$ is chosen properly. Our initial estimator for $\Sigma$ is then set as $\widehat{\Sigma}_{\rm ini} = \widehat{\mathrm{SPPMI}}$.
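A numpy sketch of Step I, treating (3.1) as the empirical analogue of (2.4) and using a synthetic stand-in for the aggregated co-occurrence matrix; the threshold `eta` is a user choice.

```python
# Build the SPPMI estimator (3.1)-(3.2) from a co-occurrence matrix C.
import numpy as np

def sppmi(C: np.ndarray, eta: float = 0.0) -> np.ndarray:
    total = C.sum()
    margins = C.sum(axis=1)
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(total * C / np.outer(margins, margins))
    pmi[~np.isfinite(pmi)] = eta          # cells with zero counts fall back to eta
    return np.maximum(pmi, eta)

# C would be the co-occurrence matrix summed over patients; synthetic here.
C = np.random.default_rng(4).poisson(5.0, size=(200, 200)).astype(float)
C = (C + C.T) / 2
Sigma_ini = sppmi(C, eta=0.0)
```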
Step II: With $\widehat{\Sigma}_{\rm ini}$, we estimate the code clusters $G = \{G_1, \ldots, G_K\}$ by $\widehat{G} = \{\widehat{G}_1, \ldots, \widehat{G}_K\}$ based on Algorithm 1 proposed by Bunea et al. (2020), with the distance between the two rows of $\widehat{\Sigma}_{\rm ini}$ corresponding to codes $j$ and $j'$ defined as
$$\widehat{d}(j, j') = \max_{c \neq j, j'} \big|\widehat{\mathrm{SPPMI}}(j, c) - \widehat{\mathrm{SPPMI}}(j', c)\big|.$$
Algorithm 1: The COD Algorithm (Bunea et al., 2020)
Input: $\widehat{\mathrm{SPPMI}}$ and $\alpha > 0$.
Initialize the candidate node set $V = [d]$ and $\ell = 0$.
while $V \neq \emptyset$ do
  $\ell = \ell + 1$
  If $|V| = 1$, then $\widehat{G}_\ell = V$.
  If $|V| > 1$, then
    1. $(j_\ell, j'_\ell) = \arg\min_{j \neq j'} \widehat{d}(j, j')$;
    2. If $\widehat{d}(j_\ell, j'_\ell) > \alpha$, then $\widehat{G}_\ell = \{j_\ell\}$;
       Else $\widehat{G}_\ell = \{c \in V \mid \widehat{d}(j_\ell, c) \wedge \widehat{d}(j'_\ell, c) \le \alpha\}$.
  Update $V = V \setminus \widehat{G}_\ell$.
end while
Output: the cluster estimator $\widehat{G} = \{\widehat{G}_1, \ldots, \widehat{G}_K\}$.
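A direct, unoptimized Python sketch of Algorithm 1, where `S` plays the role of the SPPMI matrix and `alpha` is the tuning parameter; the "$\wedge$" in the pseudocode is read as a minimum.

```python
import numpy as np

def cod(S: np.ndarray, alpha: float) -> list[set[int]]:
    def dist(j, jp, active):
        others = [c for c in active if c != j and c != jp]
        if not others:
            return 0.0
        return float(np.max(np.abs(S[j, others] - S[jp, others])))

    active = set(range(S.shape[0]))
    groups = []
    while active:
        if len(active) == 1:
            groups.append(set(active))
            break
        nodes = sorted(active)
        best, pair = np.inf, None
        for a in range(len(nodes)):          # find the closest pair of remaining codes
            for b in range(a + 1, len(nodes)):
                dd = dist(nodes[a], nodes[b], active)
                if dd < best:
                    best, pair = dd, (nodes[a], nodes[b])
        j, jp = pair
        if best > alpha:
            group = {j}                      # split off a singleton
        else:                                # group everything within alpha of the pair
            group = {c for c in active
                     if min(dist(j, c, active), dist(jp, c, active)) <= alpha}
        groups.append(group)
        active -= group
    return groups

# S would be Sigma_ini from Step I; a small synthetic stand-in is used here.
S = np.random.default_rng(5).normal(size=(30, 30))
S = (S + S.T) / 2
clusters = cod(S, alpha=1.0)
```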
Step III: In the final step, we refine the estimator for $\Sigma$ by averaging the entries of $\widehat{\Sigma}_{\rm ini}$ belonging to the same cluster, since $\Sigma_{ii'} = Q_{kk'}$ and $\Sigma_{ii} = Q_{kk} + \Gamma_{ii}$ for all $i \in G_k$ and $i' \in G_{k'}\setminus\{i\}$. Thus, for $k, k' = 1, \ldots, K$, we estimate $Q_{kk'}$ as
$$\widehat{Q}_{kk'} = \begin{cases} \{|\widehat{G}_k|\cdot|\widehat{G}_{k'}|\}^{-1}\sum_{w\in\widehat{G}_k, w'\in\widehat{G}_{k'}} \widehat{\mathrm{SPPMI}}(w, w'), & \text{if } k \neq k'; \\ \{|\widehat{G}_k|\cdot(|\widehat{G}_k|-1)\}^{-1}\sum_{w, w'\in\widehat{G}_k} \mathbb{I}(w\neq w')\,\widehat{\mathrm{SPPMI}}(w, w'), & \text{if } k = k', \ |\widehat{G}_k| > 1; \\ \widehat{\mathrm{SPPMI}}(w, w) \text{ with } \widehat{G}_k = \{w\}, & \text{if } k = k', \ |\widehat{G}_k| = 1. \end{cases}$$
Finally, we estimate the $k$th column (corresponding to cluster $\widehat{G}_k$) of the precision matrix $O = Q^{-1}$ based on $\widehat{Q} = [\widehat{Q}_{kk'}]$, for $k = 1, \ldots, K$, via the CLIME estimator of Cai et al. (2011a):
$$\widehat{O}_k = \arg\min_{\beta\in\mathbb{R}^K} \|\beta\|_1 \quad \text{subject to} \quad \|\widehat{Q}\beta - e_k\|_\infty \le \lambda. \tag{3.3}$$
Our final estimators of $\Gamma$ and the precision matrix $\Omega$ are then given by
$$\widehat{\Gamma} = \mathrm{diag}\big(\widehat{\Sigma}_{\rm ini} - \widehat{\mathbf{A}}\widehat{Q}\widehat{\mathbf{A}}^{\mathsf{T}}\big) \quad \text{and} \quad \widehat{\Omega} = \widehat{\Gamma}^{-1} - \widehat{\Gamma}^{-1}\widehat{\mathbf{A}}\big(\widehat{O} + \widehat{\mathbf{A}}^{\mathsf{T}}\widehat{\Gamma}^{-1}\widehat{\mathbf{A}}\big)^{-1}\widehat{\mathbf{A}}^{\mathsf{T}}\widehat{\Gamma}^{-1}. \tag{3.4}$$
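A sketch of Step III under the assumptions above: cluster-wise averaging of the SPPMI matrix followed by a column-by-column CLIME fit, cast as a linear program and solved with scipy for illustration; the tuning parameter `lam` and the inputs are hypothetical stand-ins.

```python
import numpy as np
from scipy.optimize import linprog

def estimate_Q(S, groups):
    # Average SPPMI entries within/between the recovered clusters.
    K = len(groups)
    Q = np.zeros((K, K))
    for k, Gk in enumerate(groups):
        gk = sorted(Gk)
        for kp, Gkp in enumerate(groups):
            gkp = sorted(Gkp)
            block = S[np.ix_(gk, gkp)]
            if k != kp:
                Q[k, kp] = block.mean()
            elif len(gk) > 1:
                Q[k, kp] = (block.sum() - np.trace(block)) / (len(gk) * (len(gk) - 1))
            else:
                Q[k, kp] = block[0, 0]
    return Q

def clime(Q, lam):
    # Solve (3.3) for each column: min ||beta||_1 s.t. ||Q beta - e_k||_inf <= lam,
    # written as an LP with beta = u - v, u, v >= 0.
    K = Q.shape[0]
    O = np.zeros((K, K))
    for k in range(K):
        e_k = np.zeros(K)
        e_k[k] = 1.0
        c = np.ones(2 * K)
        A_ub = np.vstack([np.hstack([Q, -Q]), np.hstack([-Q, Q])])
        b_ub = np.concatenate([lam + e_k, lam - e_k])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
        if res.success:
            O[:, k] = res.x[:K] - res.x[K:]
    return O

# Stand-ins: a symmetric SPPMI-like matrix and a partition of its rows.
rng = np.random.default_rng(6)
S = rng.normal(size=(12, 12)); S = (S + S.T) / 2 + 3 * np.eye(12)
groups = [{0, 1, 2}, {3, 4, 5, 6}, {7, 8}, {9}, {10, 11}]
Q_hat = estimate_Q(S, groups)
O_hat = clime(Q_hat, lam=0.5)
```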
Theoretical Properties
In this section, we establish the estimation rates of the PMI matrix and the precision matrix estimators. We also establish the consistency of the clustering recovery. The high-level summary of our theoretical analysis has three major components: (1) the statistical error between $\widehat{\mathrm{SPPMI}}$ and the true PMI matrix; (2) the approximation error between the true PMI and the feature SEV inner products $\mathbf{V}\mathbf{V}^{\mathsf{T}}/p$; and (3) the statistical error of the precision matrix estimator and the clustering recovery. We then integrate the three parts together to get the final rates. We first detail the key assumptions required by our theoretical analyses.
Assumption 4.1. The true precision matrix $O$ belongs to the parameter space
$$\mathcal{U}_s(M, \rho) = \Big\{ O \in \mathbb{R}^{K\times K} : O \succ I_K/\rho, \ \|O\|_2 \le \rho, \ \max_{j\in[K]} \|O_{j\cdot}\|_0 \le s, \ \|O\|_1 \le M \Big\} \tag{4.1}$$
for some constants $\rho$ and $M$.
Assumption 4.2. Assume that the latent model follows the block structure so that the true decomposition $\Sigma = \mathbf{A}Q\mathbf{A}^{\mathsf{T}} + \Gamma$ is feasible, and $\max_i \Gamma_{ii} \le c_0$ for some constant $c_0 > 0$. Moreover, the distance between clusters satisfies $\Delta(Q) = \Delta(\Sigma, G) = \epsilon > 0$, with $\sqrt{\log d/p} = o(\epsilon)$.

Assumption 4.3. Assume that $\log d = o(\sqrt{p})$, $s\sqrt{\log d/p} = o(1)$, and $p(\log^2 d)\cdot \max_k |G_k| = O(d^{1-\gamma})$ for some $\gamma \in (0, 1/2)$. Also assume that the corpus size $T$ satisfies $T = \Omega(p^5 d^4 \log^4 d)$.
Remark 4.
Statistical Rate of PMI Estimators
The following proposition provides the approximation error rate of $\widehat{\mathrm{SPPMI}}$ for the population PMI:
$$\|\widehat{\mathrm{SPPMI}} - \mathrm{PMI}\|_{\max} \le 5/\sqrt{p} + dq/T,$$
with probability at least $1 - 1/d$ for appropriately large $p$. Next, we establish the rate of the approximation error between PMI and the population covariance matrix $\Sigma = \Omega^{-1}$ in Proposition 4.4.
Clustering Recovery and Precision Matrix Estimation Consistency
We next summarize the results on the clustering recovery and the estimation consistency of $\widehat{O}$ for the precision matrix $O = Q^{-1}$. First, we have the following theorem on the perfect recovery of $\widehat{G} = \{\widehat{G}_1, \ldots, \widehat{G}_K\}$ for $G = \{G_1, \ldots, G_K\}$, with proof given in Appendix C.1. Recall the latent model $V_{\cdot\ell} = \mathbf{A}Z_\ell + E_\ell$ and the decomposition $\Sigma = \mathbf{A}Q\mathbf{A}^{\mathsf{T}} + \Gamma$. In order to differentiate between different groups in the LGBM, we define the distance between clusters as
$$\Delta(Q) := \min_{i\neq j} \max_{k\neq i,j} |Q_{ik} - Q_{jk}|.$$
Such a quantity is also defined in the study of block matrix estimation (Bunea et al., 2020; Eisenach et al., 2020; Eisenach and Liu, 2019). With the perfect recovery of clusters, we have the following corollary, whose proof is given in Appendix C.2.
Corollary 4.7. Under the assumptions of Theorem 4.6, with probability no less than $1 - \exp(-\omega(\log^2 d)) - O(d^{-\tau})$ for some large constant $\tau > 0$, we have
$$\|\widehat{Q} - Q\|_{\max} \le O\Big(\sqrt{\frac{\log d}{p}}\Big).$$
In the following, we present the main theorem on the convergence rate of our precision estimator with proof given in Appendix C.2 as well.
Theorem 4.8. Under the settings of Theorem 4.6, for sufficiently large $p, d, T$, with probability no less than $1 - \exp(-\omega(\log^2 d)) - O(d^{-\tau})$ for some large constant $\tau > 0$, we have
$$\|\widehat{O} - O\|_{\max} \le \lambda\|O\|_1 + O\Big(\sqrt{\frac{\log d}{p}}\Big), \qquad \|\widehat{O} - O\|_1 \le s\lambda\|O\|_1 + O\Big(s\sqrt{\frac{\log d}{p}}\Big),$$
and $\mathrm{supp}(\widehat{O}) = \mathrm{supp}(O)$. If we choose $\lambda = C\sqrt{\log d/p}$ for some sufficiently large constant $C$, we also have, with probability no less than $1 - \exp(-\omega(\log^2 d)) - O(d^{-\tau})$ for some large constant $\tau > 0$,
$$\|\widehat{\Omega} - \Omega\|_1 = O\big(s\sqrt{\log d/p}\big).$$
Numerical Experiments for Synthetic and Real Datasets
In this subsection, we conduct simulation studies to evaluate the performance of our proposed algorithm and compare it with the GloVe method (Pennington et al., 2014). We consider the settings (p, d) = (50, 25), (100, 50), (500, 1000) and (1000,2000). We generate the precision matrix O = Q −1 via the following two types of graphs.
• Independent Graph - The basic model in which all nodes are independent with the same variance. We set $O = cI_K$ for some $c$.

• Erdős-Rényi Graph - This model generates a graph with adjacency matrix $A_{ij} \overset{i.i.d.}{\sim} \mathrm{Bern}(p)$, $i < j$, for some $p \in (0, 1)$, i.e., all edges are added independently with probability $p$. In an Erdős-Rényi graph with $K$ nodes, the expected number of edges is $K(K-1)p/2$. To satisfy the sparsity conditions, we typically choose a small $p$, specified later on. After generating the adjacency matrix, we let $O = cA + (|\lambda_{\min}(cA)| + c_1)I$ for some $(c, c_1)$; a generation sketch follows this list.
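A sketch of the Erdős-Rényi construction just described, with a hypothetical size and parameters taken from one of the settings below.

```python
# Generate a sparse precision matrix O = c*A + (|lambda_min(c*A)| + c1) * I from
# a symmetric Erdos-Renyi adjacency matrix with edge probability p_edge.
import numpy as np

def erdos_renyi_precision(K: int, p_edge: float, c: float, c1: float,
                          rng: np.random.Generator) -> np.ndarray:
    upper = rng.random((K, K)) < p_edge
    A = np.triu(upper, k=1).astype(float)
    A = A + A.T                                  # symmetric, zero diagonal
    lam_min = np.linalg.eigvalsh(c * A).min()
    return c * A + (abs(lam_min) + c1) * np.eye(K)

O = erdos_renyi_precision(K=50, p_edge=0.2, c=0.3, c1=0.2,
                          rng=np.random.default_rng(2))
```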
Under these two types of graphs, we consider a total of six sets of hyperparameters to generate $O$, as detailed in Table 1.

Table 1: Settings used to generate $O$.
G3: Erdős-Rényi Graph with p = 0.2, (c, c1) = (0.3, 0.2).
G4: Erdős-Rényi Graph with p = 0.2, (c, c1) = (0.5, 0.3).
G5: Erdős-Rényi Graph with p = 0.05, (c, c1) = (0.3, 0.2).
G6: Erdős-Rényi Graph with p = 0.05, (c, c1) = (0.5, 0.3).

Evaluations of the estimated knowledge network focus on three components: (1) accuracy in cluster recovery with cluster partition $\widehat{G}$;
(2) average error in precision matrix estimation; and (3) support recovery accuracy. We evaluate the performance of cluster recovery by the Rand index averaged over all repetitions. Specifically, for the true partition $G$ and the estimator $\widehat{G}$ for nodes $\{1, \ldots, d\}$, let $g(j)$ be the cluster of node $j$ in $G$ and $\widehat{g}(j)$ that in $\widehat{G}$; the Rand index is then calculated as
$$\mathrm{RI} = \frac{\big|\{i\neq j : g(i) = g(j), \widehat{g}(i) = \widehat{g}(j)\}\big| + \big|\{i\neq j : g(i) \neq g(j), \widehat{g}(i) \neq \widehat{g}(j)\}\big|}{d(d-1)/2}.$$
We evaluate the performance of our precision matrix estimator via the average relative error, $\%\mathrm{Err} = \mathrm{average}(\|\widehat{O} - O\|/\|O\|)$, where the matrix norm $\|\cdot\|$ is either $\|\cdot\|_{\max}$ or $\|\cdot\|_F$. We evaluate the support recovery of the true graph via the F-score, which is defined based on the true positives (TP), false positives (FP) and false negatives (FN) as
$$\mathrm{Precision} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FP}}, \quad \mathrm{Recall} = \frac{\mathrm{TP}}{\mathrm{TP} + \mathrm{FN}}, \quad \text{F-score} = \frac{2\cdot\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision} + \mathrm{Recall}}.$$
Tables 3 and 4 give the $\%\mathrm{Err}$ for the estimation of $O$ by $\widehat{O}$ and the F-score of the support recovery for the graphical model estimator without and with clustering. In summary, the proposed procedure can identify most signals in the graph. We can also see that utilizing the block structure of the precision matrix helps the estimation of the precision matrix as well as the inference of the graph structure.
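The evaluation metrics above can be computed with a short sketch like the following, using toy inputs for illustration.

```python
# Rand index over node pairs and F-score for support recovery of the
# off-diagonal pattern of the precision matrix.
import numpy as np

def rand_index(g: np.ndarray, g_hat: np.ndarray) -> float:
    n = len(g)
    agree = 0
    for i in range(n):
        for j in range(i + 1, n):
            same_true = g[i] == g[j]
            same_est = g_hat[i] == g_hat[j]
            agree += same_true == same_est
    return agree / (n * (n - 1) / 2)

def f_score(O_true: np.ndarray, O_hat: np.ndarray, tol: float = 1e-8) -> float:
    mask = ~np.eye(O_true.shape[0], dtype=bool)
    s_true = np.abs(O_true[mask]) > tol
    s_hat = np.abs(O_hat[mask]) > tol
    tp = np.sum(s_true & s_hat)
    fp = np.sum(~s_true & s_hat)
    fn = np.sum(s_true & ~s_hat)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

g = np.array([0, 0, 1, 1, 2]); g_hat = np.array([1, 1, 0, 0, 2])
print(rand_index(g, g_hat))
```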
Applications to Electronic Health Record Data
In this section, we apply the proposed LGBM inference procedure to derive a knowledge network using codified EHR data of 2.5 million patients from a large tertiary hospital system. We analyzed four categories of codes including ICD, medication prescription, laboratory tests, and CPT procedures. We mapped ICD codes to PheCodes using the ICD-to-PheCode mapping from PheWAS catalog (https://phewascatalog.org/phecodes). The CPT procedure codes are mapped into medical procedure categories according to the clinical classifications software (CCS) (https://www.hcupus.ahrq.gov/toolssoftware/ccs svcsproc/ccssvcproc.jsp). The medication codes are mapped to the ingredient level RxNorm codes, which is part of the Unified Medical Language System (UMLS) (Bodenreider, 2004). The laboratory codes are mapped to LOINC codes of the UMLS. We included a total of d = 5507 mapped codes that have at least 1000 occurrences and calculated the co-occurrence of these codes within 30 day window across all patients. We then applied our proposed procedures to obtain estimates of group structure G and the precision matrix O.
We grouped the 5507 codes into K = 8 code clusters. Because the network is too large to illustrate, we focus on two specific codes of interest: rheumatoid arthritis and type-II diabetes.
The code clouds of the selected neighbors of rheumatoid arthritis and depression are illustrated in Figure 1. We also focus only on the clustering of the LOINC lab codes. We choose the tuning parameter $\alpha$ in Algorithm 1 over a range from the value at which each code forms its own cluster to the value at which all codes merge into one cluster. In Figure 2, we visualize the clustering result via the clustering tree, and we can observe that similar codes tend to be merged together earlier.
References
Arora, S., Li, Y., Liang, Y., Ma, T. and Risteski, A. (2016). A latent variable model approach to PMI-based word embeddings. Transactions of the Association for Computational Linguistics, 4, 385-399.
A.1 Vector-Valued Graphical Model on V

Lemma A.1. Let $V_j \in \mathbb{R}^p$ be the code vector variable for code $j$ ($j \in [d]$), and let $V_{-i}$ be the set of all code vectors except $V_i$, i.e., $V_{-i} = \{V_j : j \in [d]\setminus\{i\}\}$. Let $[V_j]_\ell$ be the $\ell$-th component of $V_j$ ($\ell \in [k]$, $j \in [d]$), and let $[\mathbf{V}]_\ell := ([V_1]_\ell, [V_2]_\ell, \cdots, [V_d]_\ell)^{\mathsf{T}} \in \mathbb{R}^d$ concatenate the $\ell$-th components of all code vectors. The $[\mathbf{V}]_\ell$ can be stacked into a column vector as $\vec{V} := ([\mathbf{V}]_1^{\mathsf{T}}, [\mathbf{V}]_2^{\mathsf{T}}, \cdots, [\mathbf{V}]_p^{\mathsf{T}})^{\mathsf{T}} \in \mathbb{R}^{dp}$.
There exists a multivariate Gaussian distribution $N(0, \Sigma_V)$ for $\vec{V}$ such that
$$V_i \mid V_{-i\cdot} = v_{-i} \sim N\Big(\sum_{j\in[d]\setminus\{i\}} B_{ij} v_j, \ \sigma^2 I_p\Big) \quad \text{for all } i \in [d],$$
where $\sigma^2 > 0$ is a constant. One such distribution is
$$\vec{V} \sim N(0, \Sigma_V) = N(0, \Omega^{-1}) = N\big(0, \big(I_p \otimes (I_d - B)/\sigma^2\big)^{-1}\big),$$
where $B$ is a symmetric hollow matrix and $I_d - B$ is positive definite.
Proof of Lemma A.1. The conditional Gaussian distribution assumption is
$$V_i \mid V_{-i\cdot} = v_{-i} \sim N\Big(\sum_{j=1}^{d} B_{ij} v_j, \ \sigma^2 I_p\Big) \quad \text{for all } i \in [d], \tag{A.1}$$
where $B$ is a hollow matrix whose diagonal entries are zeros.

Let $Y_0$ be the $\ell$-th component of $V_i$, i.e., $Y_0 := [V_i]_\ell$, and let $Z_0$ be the vector of all components in $\vec{V}$ except $Y_0$. Define the covariance matrix of $(Y_0, Z_0^{\mathsf{T}})^{\mathsf{T}}$ as
$$\mathrm{Var}\begin{pmatrix} Y_0 \\ Z_0 \end{pmatrix} := \begin{pmatrix} \sigma_{Y_0 Y_0} & \sigma_{Y_0 Z_0} \\ \sigma_{Z_0 Y_0} & \Sigma_{Z_0 Z_0}\end{pmatrix}.$$
Assume that $(Y_0, Z_0^{\mathsf{T}})^{\mathsf{T}}$ is a multivariate Gaussian random variable; then
$$Y_0 \mid Z_0 = z_0 \sim N\big(\mu_{Y_0} + (z_0 - \mu_{Z_0})^{\mathsf{T}} \Sigma_{Z_0 Z_0}^{-1}\sigma_{Z_0 Y_0},\ \sigma_{Y_0 Y_0} - \sigma_{Y_0 Z_0}\Sigma_{Z_0 Z_0}^{-1}\sigma_{Z_0 Y_0}\big), \tag{A.2}$$
where $\mu_{Y_0}$ and $\mu_{Z_0}$ are the means of $Y_0$ and $Z_0$, respectively.

Let $[a]_\ell$ be the $\ell$-th component of a vector $a$. Based on (A.1) and (A.2), it can be inferred that
$$\sum_{j=1}^{d} B_{ij}[v_j]_\ell = z_0^{\mathsf{T}} \Sigma_{Z_0 Z_0}^{-1}\sigma_{Z_0 Y_0}.$$
Note that $[v_j]_m$ for $m \neq \ell$ does not show up in $\sum_{j=1}^{d} B_{ij}[v_j]_\ell$, and thus the parameters of $[v_j]_m$ ($j \in [d]$, $m \in [k]\setminus\{\ell\}$) are zeros. One way to realize this is to impose the assumption that $\Sigma_V$ is a block diagonal matrix:
$$\Sigma_V = \mathrm{diag}\big(\mathrm{Var}([\mathbf{V}]_1), \mathrm{Var}([\mathbf{V}]_2), \cdots, \mathrm{Var}([\mathbf{V}]_p)\big). \tag{A.3}$$
By the property of the multivariate Gaussian distribution that zero covariance is equivalent to independence, the assumption in (A.3) indicates that $[\mathbf{V}]_1, [\mathbf{V}]_2, \cdots, [\mathbf{V}]_p$ are independent. In addition, the conditional Gaussian distribution in (A.1) is the same for $[\mathbf{V}]_1, [\mathbf{V}]_2, \cdots, [\mathbf{V}]_p$, and therefore $[\mathbf{V}]_1, [\mathbf{V}]_2, \cdots, [\mathbf{V}]_p$ are not only independent but also identically distributed.
Without loss of generality, we here analyze the distribution of [V] 1 .
Let $Y_1 := [V_i]_1$ and $Z_1 := ([V_1]_1, \cdots, [V_{i-1}]_1, [V_{i+1}]_1, \cdots, [V_d]_1)^{\mathsf{T}}$. Then we have
$$\mathrm{Var}\begin{pmatrix} Y_1 \\ Z_1\end{pmatrix}\bigg[\mathrm{Var}\begin{pmatrix}Y_1\\Z_1\end{pmatrix}\bigg]^{-1} := \begin{pmatrix}\sigma_{Y_1Y_1} & \sigma_{Y_1Z_1}\\ \sigma_{Z_1Y_1} & \Sigma_{Z_1Z_1}\end{pmatrix}\begin{pmatrix}\theta_{Y_1Y_1} & \theta_{Y_1Z_1}\\ \theta_{Z_1Y_1} & \Omega_{Z_1Z_1}\end{pmatrix} = \begin{pmatrix}1 & 0\\ 0 & I_{d-1}\end{pmatrix}$$
$$\Longrightarrow\quad \sigma_{Y_1Y_1}\theta_{Y_1Y_1} + \sigma_{Y_1Z_1}\theta_{Z_1Y_1} = 1,\quad \sigma_{Y_1Y_1}\theta_{Y_1Z_1} + \sigma_{Y_1Z_1}\Omega_{Z_1Z_1} = 0,\quad \sigma_{Z_1Y_1}\theta_{Y_1Y_1} + \Sigma_{Z_1Z_1}\theta_{Z_1Y_1} = 0,\quad \sigma_{Z_1Y_1}\theta_{Y_1Z_1} + \Sigma_{Z_1Z_1}\Omega_{Z_1Z_1} = I_{d-1}. \tag{A.4}$$
Based on (A.1), the conditional Gaussian distribution of $Y_1$ is
$$Y_1 \mid Z_1 = z_1 \sim N\big(B_{i,-i}\, z_1, \sigma^2\big) = N\big(\mu_{Y_1} + (z_1 - \mu_{Z_1})^{\mathsf{T}}\Sigma_{Z_1Z_1}^{-1}\sigma_{Z_1Y_1},\ \sigma_{Y_1Y_1} - \sigma_{Y_1Z_1}\Sigma_{Z_1Z_1}^{-1}\sigma_{Z_1Y_1}\big),$$
where $B_{i,-i} := (B_{i,1}, \cdots, B_{i,i-1}, B_{i,i+1}, \cdots, B_{i,d})$ is $B_{i,:}$ without $B_{i,i}$, and $\mu_{Y_1}, \mu_{Z_1}$ are the means of $Y_1, Z_1$, respectively.
Here we assume $\mu_{Y_1} = \mu_{Z_1} = 0$. So we have
$$Y_1 \mid Z_1 = z_1 \sim N\big(B_{i,-i}\, z_1, \sigma^2\big) = N\big(\sigma_{Y_1Z_1}\Sigma_{Z_1Z_1}^{-1} z_1,\ \sigma_{Y_1Y_1} - \sigma_{Y_1Z_1}\Sigma_{Z_1Z_1}^{-1}\sigma_{Z_1Y_1}\big). \tag{A.5}$$
According to (A.5),
$$B_{i,-i} = \sigma_{Y_1Z_1}\Sigma_{Z_1Z_1}^{-1} = \big(\Sigma_{Z_1Z_1}^{-1}\sigma_{Z_1Y_1}\big)^{\mathsf{T}}, \qquad \sigma_{Y_1Y_1} - \sigma_{Y_1Z_1}\Sigma_{Z_1Z_1}^{-1}\sigma_{Z_1Y_1} = \sigma^2. \tag{A.6}$$
Then by (A.4) and (A.6), we have
$$\sigma_{Z_1Y_1}\theta_{Y_1Y_1} + \Sigma_{Z_1Z_1}\theta_{Z_1Y_1} = 0 \ \Longrightarrow\ \theta_{Z_1Y_1} = -\theta_{Y_1Y_1}\Sigma_{Z_1Z_1}^{-1}\sigma_{Z_1Y_1} \ \Longrightarrow\ \theta_{Z_1Y_1} = -\theta_{Y_1Y_1} B_{i,-i}^{\mathsf{T}}. \tag{A.7}$$
Furthermore, by $\sigma_{Y_1Y_1}\theta_{Y_1Y_1} + \sigma_{Y_1Z_1}\theta_{Z_1Y_1} = 1$ from (A.4) and $\theta_{Z_1Y_1} = -\theta_{Y_1Y_1} B_{i,-i}^{\mathsf{T}}$ from (A.7), we have
$$1/\theta_{Y_1Y_1} = \sigma_{Y_1Y_1} + \sigma_{Y_1Z_1}\theta_{Z_1Y_1}/\theta_{Y_1Y_1} = \sigma_{Y_1Y_1} - \sigma_{Y_1Z_1} B_{i,-i}^{\mathsf{T}}. \tag{A.8}$$
Combining (A.6) and (A.8), we have
$$1/\theta_{Y_1Y_1} = \sigma_{Y_1Y_1} - \sigma_{Y_1Z_1} B_{i,-i}^{\mathsf{T}} = \sigma_{Y_1Y_1} - \sigma_{Y_1Z_1}\Sigma_{Z_1Z_1}^{-1}\sigma_{Z_1Y_1} = \sigma^2. \tag{A.9}$$
Note that (A.9) holds for any $i \in [d]$, so the diagonal entries of $[\mathrm{Var}([\mathbf{V}]_1)]^{-1}$ are all equal to $1/\sigma^2$. Plugging $\theta_{Y_1Y_1} = 1/\sigma^2$ into $\theta_{Z_1Y_1} = -\theta_{Y_1Y_1} B_{i,-i}^{\mathsf{T}}$ from (A.7), we have $\theta_{Z_1Y_1} = -B_{i,-i}^{\mathsf{T}}/\sigma^2$, which holds for all $i \in [d]$. Therefore, $[\mathrm{Var}([\mathbf{V}]_1)]^{-1} = (I_d - B)/\sigma^2$, where we let $B$ be a symmetric hollow matrix. Since $[\mathbf{V}]_1, [\mathbf{V}]_2, \cdots, [\mathbf{V}]_p$ are i.i.d., the Gaussian distribution of $\vec{V}$ is
$$N(0, \Omega^{-1}) = N\big(0, I_p \otimes [(I_d - B)/\sigma^2]^{-1}\big) = N\big(0, [I_p \otimes (I_d - B)/\sigma^2]^{-1}\big),$$
which satisfies the conditional Gaussian distribution in (A.1). The $B$ here is a symmetric hollow matrix such that $I_d - B$ is of full rank.
A.2 Identifiability of the Model
Proof of Proposition 2.1. We first prove that $\widetilde{G} = G$. Firstly, by the decompositions $\Sigma = \mathbf{A}Q\mathbf{A}^{\mathsf{T}} + \Gamma = \widetilde{\mathbf{A}}\widetilde{Q}\widetilde{\mathbf{A}}^{\mathsf{T}} + \widetilde{\Gamma}$, we know that $\max_{\ell\neq i,j}|\Sigma_{i\ell} - \Sigma_{j\ell}| = 0$ for all $i, j$ such that $g(i) = g(j)$, and the same holds for all $i, j$ such that $\widetilde{g}(i) = \widetilde{g}(j)$. On the other side, since $\Delta(\Sigma, G) > 0$, we know that for any $i \neq j$ with $g(i) \neq g(j)$, there exists some $\ell \neq i, j$ such that $\Sigma_{i\ell} \neq \Sigma_{j\ell}$; and the same holds for $\widetilde{G}$. This means that if $g(i) = g(j)$, then $\widetilde{g}(i) = \widetilde{g}(j)$, since $\max_{\ell\neq i,j}|\Sigma_{i\ell} - \Sigma_{j\ell}| = 0$; and if $g(i) \neq g(j)$, then $\max_{\ell\neq i,j}|\Sigma_{i\ell} - \Sigma_{j\ell}| \neq 0$, thus it must be that $\widetilde{g}(i) \neq \widetilde{g}(j)$, since $\Delta(\Sigma, \widetilde{G}) > 0$. Therefore the partitions $\widetilde{G}$ and $G$ are the same, and $\widetilde{\mathbf{A}} = \mathbf{A}$.

We then show the identifiability of $Q$ and $\Gamma$. For any clusters $k \neq k'$ and any $i \in G_k$, $j \in G_{k'}$, since $\widetilde{\mathbf{A}} = \mathbf{A}$ and $\Gamma_{ij} = 0$, we have $\Sigma_{ij} = Q_{kk'} = \widetilde{Q}_{kk'}$. For any cluster $k$ with $|G_k| > 1$, for two arbitrary members $i \neq j$, $i, j \in G_k$, by the decomposition we have $\widetilde{Q}_{kk} = \Sigma_{ij} = Q_{kk}$; hence $\widetilde{\Gamma}_{ii} = \Sigma_{ii} - \widetilde{Q}_{kk} = \Sigma_{ii} - Q_{kk} = \Gamma_{ii}$ also holds for all $i \in G_k$. For any cluster $k$ with $|G_k| = 1$, we have $\{i\} = G_k$ for some $i$; thus, since $\widetilde{G} = G$, we have $\widetilde{\Gamma}_{ii} = \Gamma_{ii} = 0$, hence $\widetilde{Q}_{kk} = \Sigma_{ii} = Q_{kk}$. Wrapping up all the above cases, we have $\widetilde{Q} = Q$ and $\widetilde{\Gamma} = \Gamma$.
B Proof on the Statistical Rate of PMI
In this section, we provide the proof of Proposition 4.3, the concentration of $\widehat{\mathrm{PMI}}(w, w')$ around the true $\mathrm{PMI}(w, w')$ with high probability, and the proof of Proposition 4.4, that $\mathrm{PMI}(w, w')$ converges to $\Sigma_{ww'}$. The theoretical analysis is based on the concentration of code occurrence probabilities when the discourse variables follow their (marginal or joint) stationary distribution, which builds on the analysis of the discourse variables in Section D and the analysis of partition functions in Section F.
Recall that the true PMI matrix for codes $w, w'$ and window size $q$ is defined in (2.4) as
$$\mathrm{PMI}(w, w') = \log \frac{N \cdot N(w, w')}{N(w, \cdot)N(w', \cdot)}, \tag{B.1}$$
where $N(w, w')$ is the stationary version of the expected total co-occurrence of codes $w, w'$ within window size $q$, and $N(w, \cdot) = \sum_{c\in\mathcal{V}} N(w, c)$, $N = \sum_{w, w'} N(w, w')$.
We will start with formally defining how the occurrences are counted and how the empirical PMI matrix is calculated. The proofs of technical lemmas used in this section are left to Section H.
B.1 Notations for Code Occurrences
For ease of analysis, in this section, we formally define how the empirical PMI matrix is obtained, and how these stationary versions are defined.
For a code $w$, let $X_w(t)$ be the indicator of the occurrence of code $w$ at time $t$, and let $X_w = \sum_{t=1}^T X_w(t)$ be the total occurrence of code $w$. Conditional on a realization of the discourse variables $\{c_t\}$, define
$$p_w(t) := E[X_w(t)\mid \{c_t\}] = \frac{\exp(\langle v_w, c_t\rangle)}{\sum_{w'}\exp(\langle v_{w'}, c_t\rangle)}$$
and $S_w = \sum_{t=1}^T p_w(t)$, the conditional expectation of the total occurrence of code $w$ over all time steps. Let $p_w = E_{c\sim\mathcal{C}}[p_w(t)]$, where $\mathcal{C}$ is the stationary distribution of $c_t$, the uniform distribution over the unit sphere. This is the stationary version of the code occurrence probabilities and does not depend on the specific $t$. Let $N_w := \sum_{t=1}^T p_w = T p_w$ be the (stationary) expectation of the total code occurrences.
For a pair $w, w'$ and distance $u \ge 1$, let $X_{w,w'}(t, t+u)$ be the indicator of the occurrence of code $w$ at time $t$ and $w'$ at time $t+u$, and let $X^{(u)}_{w,w'} = \sum_{t=1}^{T-u} X_{w,w'}(t, t+u)$ be the total occurrence of $(w, w')$ with distance $u$ over all time steps. We omit $u$ in the case $u = 1$ for simplicity, and write $X_{w,w'}(t)$ for $X_{w,w'}(t, t+1)$ and $X_{w,w'}$ for $X^{(1)}_{w,w'}$. For realizations of the discourse variables $\{c_t\}_{t\ge1}$, denote the conditional expectations as $p_{w,w'}(t, t+u) = E[X_{w,w'}(t, t+u)\mid\{c_t\}]$ and $S^{(u)}_{w,w'} = \sum_{t=1}^{T-u} p_{w,w'}(t, t+u)$. Similarly, let $\mathcal{D}_u$ denote the joint stationary distribution of $(c_t, c_{t+u})$, as specified in Lemma D.1, and denote by $p^{(u)}_{w,w'} = E_{(c_t, c_{t+u})\sim\mathcal{D}_u}[p_{w,w'}(t, t+u)]$ the stationary version of the code co-occurrence probabilities. Note that $p^{(u)}_{w,w'}$ is constant across all $t$. Define $N^{(u)}_{w,w'} = \sum_{t=1}^{T-u} p^{(u)}_{w,w'} = (T-u)p^{(u)}_{w,w'}$ to be the sum of the stationary co-occurrence probabilities. Note that here the relative position of $w, w'$ matters, i.e., these are the probabilities for $w'$ appearing after $w$ with distance $u$.
For window size $q$, compute the total co-occurrences of $(w, w')$ within window size $q$ as
$$X^{[q]}_{w,w'} = \sum_{u=1}^{q}\sum_{t=1}^{T-u}\big[X_{w,w'}(t, t+u) + X_{w',w}(t, t+u)\big] = C(w, w').$$
Note that this definition is equivalent to the $C(w, w')$ in (3.1) and (2.3), but for now we use the notation $X^{[q]}_{w,w'}$ to indicate the window size $q$. Denote by $S^{[q]}_{w,w'} = E[X^{[q]}_{w,w'}\mid\{c_t\}]$ its conditional expectation given $\{c_t\}_{t\ge1}$, and by $N^{[q]}_{w,w'} = E[X^{[q]}_{w,w'}]$ its expectation under the stationary distribution, i.e.,
$$N^{[q]}_{w,w'} = \sum_{u=1}^{q}\sum_{t=1}^{T-u}\big[p^{(u)}_{w,w'} + p^{(u)}_{w',w}\big] = \sum_{u=1}^{q}\big[N^{(u)}_{w,w'} + N^{(u)}_{w',w}\big].$$
Note that this definition is equivalent to the $N(w, w')$ in (2.4), but for now we use the notation $N^{[q]}_{w,w'}$ to indicate the window size $q$. Denote by $X^{[q]}_w = \sum_{w'} X^{[q]}_{w,w'}$ the total count of co-occurrences of code $w$ with other codes within window size $q$, and by $X^{[q]} = \sum_{w,w'} X^{[q]}_{w,w'}$ the total count of co-occurrences within window size $q$, with stationary version $N^{[q]}$. Note that in describing co-occurrence within a window, whether $w$ occurs before $w'$ or vice versa does not matter. This notation coincides with the previous one in that $X^{[q]}_w = C(w, \cdot)$ and $X^{[q]} = C$ as in (3.1) and (2.3).
With the co-occurrences computed above, the empirical PMI for codes $w, w'$ and window size $q$ is computed as
$$\widehat{\mathrm{PMI}}^{(q)}_{w,w'} = \log \frac{X^{[q]}\, X^{[q]}_{w,w'}}{X^{[q]}_w\, X^{[q]}_{w'}},$$
and the true (stationary) PMI as in (2.4) for codes $w, w'$ and window size $q$ is
$$\mathrm{PMI}(w, w') = \log \frac{N^{[q]}\, N^{[q]}_{w,w'}}{N^{[q]}_w\, N^{[q]}_{w'}}.$$
Furthermore, we illustrate some simple observations on the relationships between these occurrence notations. For simplicity we use $E_0$ to denote the expectation under the stationary (joint and marginal) distributions of the discourse variables. Firstly, a simple observation is that the count of total co-occurrences within window $q$ is
$$X^{[q]} = \sum_{i=1}^{T}\sum_{j=1}^{T} I\{0 < |j - i| \le q\} = \sum_{t=1}^{q}(t-1+q) + \sum_{t=q+1}^{T-q}(2q) + \sum_{t=T-q+1}^{T}(T-t+q) = 2qT - q^2 - q, \tag{B.2}$$
which is a constant. Thus $N^{[q]} \equiv X^{[q]}$. Also, by definition we know $\sum_{w'} X_{w,w'}(t, t+u) = X_w(t)$; hence, by linearity of expectations, for fixed $u$, $\sum_{w'} p^{(u)}_{w,w'} = p_w \in [0, 1]$.
By definition,
$$N^{[q]}_{w,w'} = \sum_{u=1}^{q}\sum_{t=1}^{T-u} E_0\big[X_{w,w'}(t, t+u) + X_{w',w}(t, t+u)\big] = 2\sum_{u=1}^{q}\sum_{t=1}^{T-u} p^{(u)}_{w,w'} = 2\sum_{u=1}^{q}(T-u)p^{(u)}_{w,w'}.$$
Therefore, since $p^{(u)}_{w,w'} \in [0, 1]$, we have
$$2T\sum_{u=1}^{q} p^{(u)}_{w,w'} - q(q+1) \le N^{[q]}_{w,w'} \le 2T\sum_{u=1}^{q} p^{(u)}_{w,w'}. \tag{B.3}$$
For a single code $w$, counting all its pairs within window size $q$, we have
$$N^{[q]}_w = \sum_{w'} N^{[q]}_{w,w'} = 2\sum_{u=1}^{q}(T-u)\sum_{w'} p^{(u)}_{w,w'}.$$
Therefore, as $\sum_{w'} p^{(u)}_{w,w'} = p_w \in [0, 1]$, we have
$$2Tq\,p_w - q(q+1) \le N^{[q]}_w \le 2qT\,p_w. \tag{B.4}$$
Also note that the empirical $X^{[q]}_{w,w'}$ defined here is exactly the number of times codes $w, w'$ co-occur within window $q$. However,
$$X^{[q]}_w = \sum_{w'} X^{[q]}_{w,w'} = \sum_{u=1}^{q}\sum_{t=1}^{T-u}\sum_{w'} X_{w,w'}(t, t+u) + \sum_{u=1}^{q}\sum_{t=1}^{T-u}\sum_{w'} X_{w',w}(t, t+u),$$
where
$$\sum_{u=1}^{q}\sum_{t=1}^{T-u}\sum_{w'} X_{w,w'}(t, t+u) = \sum_{t=1}^{T-1}\sum_{u=1}^{\min(q, T-t)}\sum_{w'} X_{w,w'}(t, t+u) = \sum_{t=1}^{T-1}\min(q, T-t)X_w(t) = q\sum_{t=1}^{T-1} X_w(t) + \sum_{t=1}^{q-1}(t-q)X_w(T-t) = (1+o(1))\,q\sum_{t=1}^{T} X_w(t),$$
and similarly for the second term. Hence $X^{[q]}_w \approx 2(q-1)\sum_{t=1}^{T} X_w(t)$, which is $2(q-1)$ times the total number of times that code $w$ appears within time $T$, since each occurrence is counted $2(q-1)$ times within window $q$. So we have
$$\frac{X^{[q]}\, X^{[q]}_{w,w'}}{X^{[q]}_w\, X^{[q]}_{w'}} \approx \frac{(2qT - q^2 - q)\cdot X^{[q]}_{w,w'}}{4q^2\sum_{t=1}^T X_w(t)\sum_{t=1}^T X_{w'}(t)} = \frac{2qT - q^2 - q}{4q^2}\cdot\frac{\sum_{u=1}^{q-1}\big(\frac1T\sum_{t=1}^{T-u} X_{w,w'}(t, t+u) + \frac1T\sum_{t=1}^{T-u} X_{w',w}(t, t+u)\big)}{\frac1T\sum_{t=1}^T X_w(t)\cdot\frac1T\sum_{t=1}^T X_{w'}(t)},$$
and
$$\frac{N^{[q]}\, N^{[q]}_{w,w'}}{N^{[q]}_w\, N^{[q]}_{w'}} \approx \frac{2qT\cdot 2T\sum_{u=1}^{q-1} p^{(u)}_{w,w'}}{4q^2T^2\, p_w p_{w'}} = \frac{\sum_{u=1}^{q-1}\big(p^{(u)}_{w,w'} + p^{(u)}_{w',w}\big)}{2q\cdot p_w p_{w'}}.$$
The above two relationships will be made more precise later.
B.2 Proof of Proposition 4.3
Recall that $S^{[q]}_{w,w'} = E[X^{[q]}_{w,w'}\mid\{c_t\}]$ is the conditional expectation given $\{c_t\}_{t\ge1}$, and $N^{[q]}_{w,w'} = E[X^{[q]}_{w,w'}]$ denotes its expectation. In Lemma G.3, we prove the concentration of the conditional occurrence probabilities $S^{(u)}_{w,w'}$ around the stationary ones $N^{(u)}_{w,w'}$ and investigate their approximate scale. In Lemma G.4, we proceed to show the concentration of the empirical occurrences $X^{(u)}_{w,w'}$ around $S^{(u)}_{w,w'}$. We refer the proofs of these lemmas to Section G.
Proof of Proposition 4.3. By Lemma G.3 and Lemma G.4, for fixed window size $q > 0$ we have
$$\mathbb{P}\Big(\max_w \Big|\frac{S_w}{N_w} - 1\Big| \ge \frac{1}{\sqrt p}\Big) \le \exp(-\omega(\log^2 d)) + d^{-\tau'}, \qquad \mathbb{P}\Big(\max_{w,w'} \Big|\frac{S^{[q]}_{w,w'}}{N^{[q]}_{w,w'}} - 1\Big| \ge \frac{1}{\sqrt p}\Big) \le \exp(-\omega(\log^2 d)) + O(d^{-\tau'}),$$
for some large constant $\tau' > 0$. By Lemma G.6 and Lemma G.7 we know that
$$\mathbb{P}\Big(\max_w \Big|\frac{X_w}{S_w} - 1\Big| \ge \frac{1}{\sqrt p}\Big) \le \exp(-\omega(\log^2 d)) + d^{-\tau'}, \qquad \mathbb{P}\Big(\max_{w,w'} \Big|\frac{X^{[q]}_{w,w'}}{S^{[q]}_{w,w'}} - 1\Big| \ge \frac{1}{\sqrt p}\Big) \le \exp(-\omega(\log^2 d)) + O(d^{-\tau'}),$$
for some large constant $\tau' > 0$. Thus, with probability at least $1 - \exp(-\omega(\log^2 d)) - O(d^{-\tau'})$, it holds that
$$\Big(1 - \frac{1}{\sqrt p}\Big)^2 \le \min_w \frac{X^{[q]}_w}{N^{[q]}_w} \le \max_w \frac{X^{[q]}_w}{N^{[q]}_w} \le \Big(1 + \frac{1}{\sqrt p}\Big)^2, \qquad \Big(1 - \frac{1}{\sqrt p}\Big)^2 \le \min_{w,w'} \frac{X^{[q]}_{w,w'}}{N^{[q]}_{w,w'}} \le \max_{w,w'} \frac{X^{[q]}_{w,w'}}{N^{[q]}_{w,w'}} \le \Big(1 + \frac{1}{\sqrt p}\Big)^2.$$
Also recall the definitions $\widehat{\mathrm{PMI}}(w, w') = \log\frac{X^{[q]} X^{[q]}_{w,w'}}{X^{[q]}_w X^{[q]}_{w'}}$ and $\mathrm{PMI}(w, w') = \log\frac{N^{[q]} N^{[q]}_{w,w'}}{N^{[q]}_w N^{[q]}_{w'}}$. Therefore, with probability at least $1 - \exp(-\omega(\log^2 d)) - O(d^{-\tau'})$, it holds for all $w, w'$ that
$$1 - \frac{4}{\sqrt p} \le \frac{1 - \frac{1}{\sqrt p}}{(1 + \frac{1}{\sqrt p})^2} \le \frac{X^{[q]} X^{[q]}_{w,w'}}{X^{[q]}_w X^{[q]}_{w'}} \Big/ \frac{N^{[q]} N^{[q]}_{w,w'}}{N^{[q]}_w N^{[q]}_{w'}} = \frac{X^{[q]}_{w,w'}}{X^{[q]}_w X^{[q]}_{w'}} \Big/ \frac{N^{[q]}_{w,w'}}{N^{[q]}_w N^{[q]}_{w'}} \le \frac{1 + \frac{1}{\sqrt p}}{(1 - \frac{1}{\sqrt p})^2} \le 1 + \frac{4}{\sqrt p},$$
for appropriately large $p$, in which case
$$\max_{w,w'}\big|\widehat{\mathrm{PMI}}(w, w') - \mathrm{PMI}(w, w')\big| \le \max\Big\{\Big|\log\Big(1 - \frac{4}{\sqrt p}\Big)\Big|, \Big|\log\Big(1 + \frac{4}{\sqrt p}\Big)\Big|\Big\} \le \frac{5}{\sqrt p},$$
for appropriately large $p$.
B.3 Proof of Proposition 4.4
We now provide the main result for the concentration of stationary PMI to the covariance matrix of code vectors in our graphical model. It is based on the concentration of stationary co-occurrence probabilities, stated in the following lemma.
Lemma B.1 (Concentration and boundedness of stationary probabilities). Assume all Σ ww ′ 's are bounded from above and below by some constants c ≤ min w,w ′ Σ ww ′ ≤ max w,w ′ Σ ww ′ ≤ c, and Assumptions 4.1 and 4.3 hold. Then with probability at least 1 − exp(−ω(log 2 d)) − d −τ , it holds for any constant τ > 0 and constant C τ = 12 3(τ + 4) that
max w log(p w ) − Σ ww 2 − log Z ≤ 2 + C τ ∥Σ∥ max √ log d √ p , (B.5) max w,w ′ log(p w,w ′ ) − Σ ww + Σ w ′ w ′ + 2Σ ww ′ 2 − 2 log Z ≤ (2C τ ∥Σ∥ max + 6) log d p , (B.6) max w,w ′ log(p (u) w,w ′ ) − Σ ww + Σ w ′ w ′ + 2Σ ww ′ 2 − 2 log Z ≤ (2C τ ∥Σ∥ max + 7 √ 2u) log d p , (B.7)
in which case for u ≤ q for some fixed window size q, there exists some constants p, p such that
p/d ≤ min w p w ≤ max w p w ≤ p/d, (B.8) p/d 2 ≤ min w,w ′ p (u) w,w ′ ≤ max w,w ′ p (u) w,w ′ ≤ p/d 2 . (B.9)
The proof of the lemma is presented in Section H.2.
Proof of Proposition 4.4. By the analysis in (B.2)-(B.4), exp( PMI(w, w ′ )) = N [q] N [q] w,w ′ N [q] w N [q] w ′ ≤ 2qT · 2T q u=1 p (u) w,w ′ q 2 (2T p w − q)(2T p w ′ − q) = 1 (1 − q 2T pw )(1 − q 2T p w ′ ) · 1 q q u=1 p (u) w,w ′ p w p w ′ , (B.10)
where for each u, due to (B.5) and (B.7) we have
Σ w,w ′ − 4 + (4C τ ∥Σ∥ max + 7 √ 2q) log d √ p ≤ log( p (u) w,w ′ p w p w ′ ) ≤ Σ w,w ′ + 4 + (4C τ ∥Σ∥ max + 7 √ 2q) √ log d √ p .
Taking exponential and averaging over u = 1, . . . , q − 1 we have
exp(Σ w,w ′ − 4 + (4C τ ∥Σ∥ max + 7 √ 2q) √ log d √ p ) ≤ 1 q−1 q−1 u=1 p (u) w,w ′ p w p w ′ ≤ exp(Σ w,w ′ + 4 + (4C τ ∥Σ∥ max + 7 √ 2q) √ log d √ p
).
Taking logarithms on both sides of (B.10), we have log
N [q] N [q] w,w ′ N [q] w N [q] w ′ ≤ log( p (u) w,w ′ p w p w ′ ) − log(1 − q − 1 2T p w ) − log(1 − q − 1 2T p w ′ ) ≤Σ w,w ′ + 4 + (4C τ ∥Σ∥ max + 7 √ 2u) log d √ p − log(1 − q − 1 2T p w ) − log(1 − q − 1 2T p w ′ ).
Due to the bound for p w and p w ′ in (B.8), as well as the fact that d = o(T ), we know √ p = o(T p w ), hence since log(1 − x) ≥ −2x for x ∈ (0, 1/2), for appropriately large (d, p, T ),
log N [q] N [q] w,w ′ N [q] w N [q] w ′ ≤Σ w,w ′ + 4 + (4C τ ∥Σ∥ max + 7 √ 2q) log d √ p + q − 1 T p w + q − 1 T p w ′ ≤Σ w,w ′ + 5 + (4C τ ∥Σ∥ max + 7 √ 2q) log d √ p .
On the other hand, from the analysis in (B.2)-(B.4), we have
PMI(w, w ′ ) = N [q] N [q] w,w ′ N [q] w N [q] w ′ ≥ (q − 1)(T − q−1 2 ) · [2T q−1 u=1 p (u) w,w ′ − q(q − 1)] 4T 2 (q − 1) 2 p w p w ′ = (1 − q−1 2T ) · [ 1 q−1 q−1 u=1 p (u) w,w ′ − q 2T ] 2p w p w ′ where for appropriately large (d, p, T ), we have q(q−1) T q−1 u=1 p (u) w,w ′ ≤ 1 2 √ p < 1 2 and q−1 T ≤ 1 2 √ p < 1 2 since p (u) w,w ′ = Ω(1/d 2 ). Thus since log(1 − x) ≥ −2x for x ∈ (0, 1/2), log N [q] N [q] w,w ′ N [q] w N [q] w ′ ≥ log 1 q−1 q−1 u=1 p (u) w,w ′ p w p w ′ + 2 log(1 − 1 4 √ p ) ≥Σ w,w ′ − 5 + (4C τ ∥Σ∥ max + 7 √ 2q) √ log d √ p .
Combining the above two directions we have that with probability at least 1−exp(−ω(log 2 d))−2d −τ for some large constant τ , it holds that log(
N [q] N [q] w,w ′ N [q] w N [q] w ′ ) − Σ ww ′ ≤ 5 + (4C τ ∥Σ∥ max + 7 √ 2q) log d √ p .
C Word Cluster Analysis
In this section we provide the proof for exact cluster recovery and precision matrix estimation under block model. In Section C.1, we show that our algorithm achieves exact recovery of word clusters with high probability, and in Section C.2, we provide estimation accuracy of the precision matrix after recovering the block structure.
C.1 Exact Cluster Recovery
Proof of Theorem 4.6. Denote the mapping g such that g(w) = j if w ∈ G j . Define the difference of word w, w ′ as
d(w, w ′ ) = max c̸ =w,w ′ Σ wc − Σ w ′ c .
Then by the block-wise structure in Assumption 4.2, d(w, w ′ ) = max k̸ =g(w),g(w ′ ) |Q k,g(w) − Q k,g(w ′ ) |.
So Assumption 4.2 ensures min
g(w)̸ =g(w ′ ) d(w, w ′ ) = ∆(Q) ≥ η, with log d p = o(η).
By Propositions 4.3 and 4.4, with probability at least
1 − exp(−ω(log 2 d)) − O(d −τ ) for some large constant τ > 0, ∥ SPPMI − Σ∥ max = O( log d p ). Thus for any w, w ′ , c ∈ [d], |SPPMI(w, c) − SPPMI(w ′ , c)| − |Σ wc − Σ w ′ c | ≤ 2∥ SPPMI − Σ∥ max .
Hence for any
w ̸ = w ′ ∈ [d], the analogous difference computed from SPPMI deviates from d(w, w ′ ) by at most 2∥ SPPMI − Σ∥ max = O( log d p ).
Thus for any w, w ′ with g(w) = g(w ′ ), by the block structure it holds that d(w, w ′ ) = 0, so d(w, w ′ ) ≤ 2∥ SPPMI − Σ∥ max . Also, for any w, w ′ with g(w) ̸ = g(w ′ ), we have d(w, w ′ ) ≥ η − 2∥ SPPMI − Σ∥ max . Now since 0 < α ≤ η/2, for sufficiently large k, d, T such that 2∥ SPPMI − Σ∥ max < α/2, we have that for any w, w ′ with g(w) = g(w ′ ), d(w, w ′ ) < α, and for any w, w ′ with g(w) ̸ = g(w ′ ), d(w, w ′ ) > α.
We prove the exact recovery in the above case, which happens with probability at least 1 − exp(−ω(log 2 d)) − O(d −τ ), by induction on the number of steps ℓ. Suppose it is consistent up to the ℓ-th step, i.e. G j = G g(w j ) for j = 1, . . . , ℓ − 1. Then if |S| = 1, it directly follows that G = G.
Otherwise, if d(w ℓ , w ′ ℓ ) > α, then no w ′ ∈ S is in the same group as w ℓ . By assumption the algorithm is consistent up to the ℓ-th step, so w ℓ is also not in the same group as those in ∪ ℓ−1 j=1 G j , hence w ℓ is a singleton, and G ℓ = {w ℓ } = G g(w ℓ ) .
If d(w ℓ , w ′ ℓ ) ≤ α, then w ℓ , w ′ ℓ must be in the same group. Also, we have seen that d(x, y) ≤ α if and only if g(x) = g(y). So for any c ̸ = w ℓ , w ′ ℓ , g(c) = g(w ℓ ) if and only if d(w ℓ , c) ≤ α and d(w ′ ℓ , c) ≤ α. So G ℓ = S ∩ G g(w ℓ ) . Since the algorithm is consistent up to step ℓ, no member of the same group as w ℓ has been included in ∪ ℓ−1 j=1 G j , so G g(w ℓ ) ⊂ S, hence G ℓ = G ⋆ g(w ℓ ) . So the algorithm is also consistent at the ℓ-th step. The consistency of the Word Clustering algorithm follows by induction.
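The following Python sketch illustrates the thresholding clustering step as we read it from the proof above: distances are maximal row differences of SPPMI, and a cluster is formed by all remaining codes within α of both members of the selected pair. Picking (w ℓ , w ′ ℓ ) as the closest remaining pair is our assumption; this is a reconstruction from the proof, not the paper's reference implementation.

```python
# Rough sketch (assumptions ours) of the thresholding word-clustering step
# described in the proof of Theorem 4.6.
import numpy as np

def row_distance(S, w, wp):
    """d(w, w') = max_{c != w, w'} |S[w, c] - S[wp, c]| for a symmetric matrix S."""
    mask = np.ones(S.shape[0], dtype=bool)
    mask[[w, wp]] = False
    if not mask.any():
        return 0.0
    return float(np.max(np.abs(S[w, mask] - S[wp, mask])))

def cluster_words(S, alpha):
    remaining = list(range(S.shape[0]))
    groups = []
    while remaining:
        if len(remaining) == 1:
            groups.append([remaining.pop()])
            break
        # our assumption: pick the closest remaining pair as (w_l, w'_l)
        dist, w, wp = min((row_distance(S, a, b), a, b)
                          for i, a in enumerate(remaining) for b in remaining[i + 1:])
        if dist > alpha:                      # w_l forms a singleton cluster
            groups.append([w])
            remaining.remove(w)
        else:                                 # collect all codes close to both w_l and w'_l
            grp = [c for c in remaining
                   if c in (w, wp) or (row_distance(S, w, c) <= alpha
                                       and row_distance(S, wp, c) <= alpha)]
            groups.append(grp)
            remaining = [c for c in remaining if c not in grp]
    return groups
```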
C.2 Precision Matrix Estimation after Exact Recovery
Once the partition G is exactly recovered, the refined SPPMI matrix is sufficiently close to the true covariance matrix, which guarantees an accurate estimate of the precision matrix. In this section, we provide the proof of Corollary 4.7 about the concentration of SPPMI to Σ = Ω −1 , and the estimation accuracy for the true precision matrix Ω.
Proof of Corollary 4.7. By Propositions 4.3 and 4.4, with probability at least 1−exp(−ω(log
2 d))− O(d −τ ) for some large constant τ > 0, ∥PMI − Σ + log 2∥ max = O( log d p ). By definition of SPPMI, since η ≤ min Σ ij , we have ∥ SPPMI − Σ∥ max ≤ ∥PMI − Σ + log 2∥ max = O( log d p ).
By the block structure in Assumption 4.2, for any w ̸ = w ′ , Σ ww ′ = Q g(w),g(w ′ ) . Therefore with
probability at least 1 − exp(−ω(log 2 d)) − O(d −τ ), for any two members w ∈ G i , c ∈ G j , w ̸ = c, it holds that SPPMI(w, c) − Σ wc = SPPMI(w, c) − Q ij ≤ c log d p
for some constant c > 0. When all clusters are perfectly recovered with G = G, for all
i ̸ = j, we have Q ij − c log d p ≤ SPPMI(i, j) = 1 |G i | · |G j | w∈G i ,c∈G j SPPMI(w, c) ≤ Q ij + c log d p .
On the other hand, for diagonal entries of SPPMI, for any j ∈ [K] with |G j | > 1, note that for any w ̸ = c ∈ G j , the block model implies Σ wc = Q jj , thus
Q jj − c log d p ≤ SPPMI(j, j) = 2 |G j | · (|G j | − 1) w,c∈G j ,w̸ =c SPPMI(w, c) ≤ Q jj + c log d p .
Lastly, for those j ∈ [K] such that |G j | = 1, G j = {w} with w the i-th word, since Γ ii = 0 we have
Σ ii = Q jj , hence Q jj − c log d p ≤ SPPMI(j, j) = SPPMI(w, w) ≤ Q jj + c log d p .
Wrapping up all cases above yields max i,j∈[K]
SPPMI(i, j) − Q ij ≤ c log d p ,
which by union bound happens with probability at least 1 − exp(−ω(log 2 d)) − O(d −τ ).
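For concreteness, the block-averaging construction of SPPMI(i, j) used above (off-block averages for i ̸ = j, off-diagonal within-block averages when |G j | > 1, and the diagonal entry for singletons) can be sketched as follows; this is our own illustration, not the paper's code.

```python
# Sketch (ours) of the block-averaged SPPMI estimate of Q behind Corollary 4.7.
import numpy as np

def block_average(SPPMI, groups):
    K = len(groups)
    Q_hat = np.zeros((K, K))
    for i, Gi in enumerate(groups):
        for j, Gj in enumerate(groups):
            if i != j:
                Q_hat[i, j] = SPPMI[np.ix_(Gi, Gj)].mean()
            elif len(Gi) > 1:
                block = SPPMI[np.ix_(Gi, Gi)]
                Q_hat[i, j] = block[~np.eye(len(Gi), dtype=bool)].mean()
            else:
                Q_hat[i, j] = SPPMI[Gi[0], Gi[0]]
    return Q_hat
```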
We now provide proof for the consistency of the CLIME-type estimator.
Proof of Theorem 4.8. Since ∥ SPPMI−Q∥ max = o(λ), for sufficiently large k, d, T , with probability at least 1 − exp(−ω(log 2 d)) − O(d −τ ), β = C ·ℓ is feasible for the ℓ-th optimization problem in Eq.
(3.3), thus ∥ C ·ℓ ∥ 1 ≤ ∥C ·ℓ ∥ 1 and ∥ C∥ 1 ≤ ∥C∥ 1 . Also by definition, ∥ SPPMI C − I∥ max ≤ λ. Hence with probability at least 1 − exp(−ω(log 2 d)) − O(d −τ ) for some large constant τ ,
∥ C − C∥ max =∥C(Q( C − C))∥ max ≤∥C∥ 1 ∥Q( C − C))∥ max ≤∥C∥ 1 ∥ SPPMI( C − C))∥ max + ∥( SPPMI − Q)( C − C))∥ max ≤∥C∥ 1 ∥ SPPMI · C − I∥ max + ∥ SPPMI · C − I∥ max + ∥ SPPMI − Q∥ max ∥ Ω − C∥ 1 ≤∥C∥ 1 λ + 3∥C∥ 1 ∥ SPPMI − Q∥ max ≤λ∥C∥ 1 + O log d p .
Here the first equality uses CQ = I, and the second line is due to ∥AB∥ max ≤ ∥A∥ 1 ∥B∥ max for two matrices A, B. The third line is the triangle inequality. The fourth line combines the triangle inequality with the inequality ∥AB∥ max ≤ ∥A∥ 1 ∥B∥ max . The last two lines are due to the assumptions.
Then by Assumption 4.1, the sparsity yields
∥ C − C∥ 1 ≤ s∥ C − C∥ max ≤ sλ∥C∥ 1 + O s log d p .
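A minimal sketch (ours) of a CLIME-type column-wise estimator in the spirit of Eq. (3.3) is given below: each column solves min ∥β∥ 1 subject to ∥ SPPMI β − e ℓ ∥ ∞ ≤ λ, recast as a linear program with β = u − v, u, v ≥ 0. Using scipy's LP solver is our choice for illustration; the paper does not prescribe a solver, and any symmetrization post-processing is omitted.

```python
# Sketch (ours) of the column-wise CLIME-type program in Eq. (3.3) via an LP.
import numpy as np
from scipy.optimize import linprog

def clime_column(S, l, lam):
    K = S.shape[0]
    e = np.zeros(K); e[l] = 1.0
    cost = np.ones(2 * K)                  # objective: sum(u) + sum(v) = ||beta||_1
    A = np.hstack([S, -S])                 # S @ (u - v)
    A_ub = np.vstack([A, -A])              # |S beta - e|_inf <= lam
    b_ub = np.concatenate([lam + e, lam - e])
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (2 * K), method="highs")
    x = res.x
    return x[:K] - x[K:]

def clime(S, lam):
    return np.column_stack([clime_column(S, l, lam) for l in range(S.shape[0])])
```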
D Properties of Discourse Variables
In this section, we provide some useful properties of the hidden Markov process {z t , c t } t≥1 that are related to concentration properties of stationary distributions. In Subsection D.1 we specify the (joint) stationary distributions of {z t } t≥1 and {c t } t≥1 , and show that under the stationary distribution the discourse vectors {c t } t≥0 "move slowly" on the unit sphere, which will be of use later in showing the convergence of the true PMI matrix to the covariance in our Gaussian graphical model. We also show the mixing properties of {z t } t≥1 in Subsections D.2 and D.3, i.e., the convergence of the marginal distribution of {z t } t≥1 to its stationary version; the total variation distances decay exponentially.
Recall that the hidden Markov process of the discourse variables {c t } t≥1 is specified as
z 2 = √ α z 1 ∥z 1 ∥ 2 + √ 1 − αr 2 , z t+1 = √ αz t + √ 1 − αr t+1 , t ≥ 2. (D.1)
Here z t is nonzero with probability 1, α = 1 − log d p 2 , r t i.i.d. ∼ N (0, I p /p). And
c t = z t /∥z t ∥ 2 . (D.2)
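The process (D.1)-(D.2) is straightforward to simulate; the sketch below (ours, with illustrative parameter values) also lets one check empirically the "slow moving" behaviour quantified in Lemma D.2 for u = 1.

```python
# Direct simulation sketch of the hidden Markov discourse process (D.1)-(D.2).
# Parameter values are illustrative only.
import numpy as np

def simulate_discourse(T, p, d, rng=None):
    rng = np.random.default_rng(rng)
    alpha = 1.0 - np.log(d) / p**2
    z = rng.normal(scale=1.0 / np.sqrt(p), size=p)    # z_1 (any nonzero start)
    cs = []
    for t in range(T):
        cs.append(z / np.linalg.norm(z))              # c_t = z_t / ||z_t||_2
        r = rng.normal(scale=1.0 / np.sqrt(p), size=p)
        if t == 0:   # first transition uses the normalized z_1, as in (D.1)
            z = np.sqrt(alpha) * z / np.linalg.norm(z) + np.sqrt(1 - alpha) * r
        else:
            z = np.sqrt(alpha) * z + np.sqrt(1 - alpha) * r
    return np.array(cs)

if __name__ == "__main__":
    C = simulate_discourse(T=2000, p=400, d=5000)
    steps = np.linalg.norm(np.diff(C, axis=0), axis=1)
    # Lemma D.2 bounds the step size by 5*sqrt(log d)/p for u = 1.
    print(steps.max(), 5 * np.sqrt(np.log(5000)) / 400)
```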
D.1 Slow Moving of Stationary Discourse
Denote D u the joint stationary distribution of (c t , c t+u ), which does not depend on t. We first specify the stationary distributions of the hidden Markov process.
Lemma D.1 (Joint stationary distributions of c t ). The distribution D u is the same as ( z ∥z∥ 2 , z ′ ∥z ′ ∥ 2 ) where (z, z ′ ) is the joint stationary distribution of (z t , z t+u ), which is jointly Gaussian with mean
zero, Cov(z) = Cov(z ′ ) = I p /p, E[z(z ′ ) T ] = α u I p /p.
Proof of Lemma D.1. By construction of z t , we have
z t+u α (t+u)/2 = z t α t/2 + u j=1 √ 1 − α α (t+j)/2 r t+j ,
where r t+1 , . . . , r t+u are i.i.d. N (0, I p /p) random variables that are independent of z t . Hence
z t+u = α u/2 z t + √ 1 − α u r, (D.3)
where r ∼ N (0, I p /p) and is independent of z t .
The stationary distribution of Markov process {z t } t≥1 is N (0, I p /p). One straightforward way to see this is to let z t ∼ N (0, I p /p) for t ≥ 2, since r t+1 and z t are independent, z t+1 ∼ N (0, ( √ α) 2 + ( √ 1 − α) 2 · I p /p) = N (0, I p /p). This holds true for any t ≥ 2, so N (0, I p /p) is a stationary distribution of {z t } t≥1 . Also, the marginal distribution of {z t } converges to N (0, I p /p) as t → ∞, regardless of the starting point z 1 . To see this, note that given z 1 , the distribution of
z t+1 is z t+1 |z 1 ∼ N (α t 2 z 1 ∥z 1 ∥ 2 , (1 − α t )I p /p),
where the mean converges to zero and the covariance matrix converges to I p /p. In this case, (z t , z t+u ) is jointly normal with mean zero, Cov(z t ) = Cov(z t+u ) = I p /p and E[z t z T t+u ] = α u I p /p. By definition, since c t = z t /∥z t ∥ 2 , the distribution D u is the same as described.
With the joint stationary distribution in hand, we have the following result on the "moving step" ∥c t − c t+u ∥ 2 under stationary distributions.
Lemma D.2 (Slow moving of c t under stationary distribution). Suppose (c, c ′ ) ∼ D u for some fixed u > 0. Then with probability at least 1 − exp(−ω(log 2 d)),
∥c − c ′ ∥ 2 ≤ 20 √ 2u log d 3p = Ω( √ log d p ). (D.4)
Particularly, for u = 1, with probability at least 1 − exp(−ω(log 2 d)),
∥c − c ′ ∥ 2 ≤ 5 √ log d p . (D.5) Proof of LemmaD.2. By Lemma D.1, we have (c, c ′ ) d = z ∥z∥ 2 , z ′ ∥z ′ ∥ 2 d = z ∥z∥ 2 , α u/2 z + √ 1 − α u r ∥α u/2 z + √ 1 − α u r∥ 2 ,
where z, r are independent N (0, I p /p) random vectors. By the property of chi-squared random variables stated in Lemma D.7, with probability 1 − exp(−ω(log 2 d)), 1 2 ≤ ∥z∥/∥r∥ 2 ≤ 2. Also recall that α = 1 − log d p 2 where log d = o( √ p), so for fixed u, we safely assume α u > 8/9. Now we
consider z ∥z∥ 2 − z ′ ∥z ′ ∥ 2 , where z ′ = α u/2 z + √ 1 − α u r. z ∥z∥ 2 − z ′ ∥z ′ ∥ 2 2 = ∥z ′ ∥ 2 z − ∥z∥ 2 z ′ ∥z∥ 2 · ∥z ′ ∥ 2 2 , where α u/2 ∥z∥ 2 − √ 1 − α u ∥r∥ 2 ≤ ∥z ′ ∥ 2 ≤ α u/2 ∥z∥ 2 + √ 1 − α u ∥r∥ 2 .
With probability 1 − exp(−ω(log 2 d)), we have
α u/2 ∥z∥ 2 − √ 1 − α u ∥r∥ 2 ≥ (2α u/2 − √ 1 − α u )∥r∥ 2 ≥ 0.
Thus with probability 1 − exp(−ω(log 2 d)), we have
z ∥z∥ 2 − z ′ ∥z ′ ∥ 2 2 ≤ max (α u/2 ∥z∥ 2 + √ 1 − α u ∥r∥ 2 )z − ∥z∥z ′ ∥z∥ 2 · ∥z ′ ∥ 2 2 , (α u/2 ∥z∥ 2 − √ 1 − α u ∥r∥ 2 )z − ∥z∥ 2 z ′ ∥z∥ 2 · ∥z ′ ∥ 2 2 , where (α u/2 ∥z∥ 2 + √ 1 − α u ∥r∥ 2 )z − ∥z∥z ′ ∥z∥ 2 · ∥z ′ ∥ 2 2 = (α u/2 ∥z∥ 2 + √ 1 − α u ∥r∥ 2 )z − ∥z∥(α u/2 z + √ 1 − α u r) ∥z∥ 2 · ∥z ′ ∥ 2 2 = √ 1 − α u ∥r∥ 2 z − ∥z∥ 2 r ∥z∥ 2 · ∥z ′ ∥ 2 2 ≤ 2 √ 1 − α u ∥r∥ 2 ∥z ′ ∥ 2 , (α u/2 ∥z∥ 2 − √ 1 − α u ∥r∥ 2 )z − ∥z∥z ′ ∥z∥ 2 · ∥z ′ ∥ 2 2 = (α u/2 ∥z∥ 2 − √ 1 − α u ∥r∥ 2 )z − ∥z∥(α u/2 z + √ 1 − α u r) ∥z∥ 2 · ∥z ′ ∥ 2 2 = √ 1 − α u ∥r∥ 2 z + ∥z∥ 2 r ∥z∥ 2 · ∥z ′ ∥ 2 2 ≤ 2 √ 1 − α u ∥r∥ 2 ∥z ′ ∥ 2 .
Furthermore, since 1 2 ≤ ∥z∥ 2 ∥r∥ 2 ≤ 2 and 1 − α u ≤ 1 9 hence α u/2 −
√ 1 − α u ≥ 3 5 , 2 √ 1 − α u ∥r∥ 2 ∥z ′ ∥ 2 ≤2 √ 1 − α u ∥r∥ 2 α u/2 ∥z∥ 2 − √ 1 − α u ∥r∥ 2 ≤ 4 √ 1 − α u α u/2 − √ 1 − α u ≤ 20 3 √ 1 − α u ≤ 20 3 1 − (1 − log d p 2 ) u ≤ 20 √ 2u log d 3p .
Therefore for fixed u, with probability 1
− exp(−ω(log 2 d)), ∥c − c ′ ∥ 2 ≤ 20 √ 2u log d 3p
. Specifically, for
u = 1, 2 √ 1 − α u ∥r∥ 2 ∥z ′ ∥ 2 ≤ 4 √ 1 − α √ α − √ 1 − α ≤ 5 √ 1 − α ≤ 5 √ log d p
with probability at least 1 − exp(−ω(log 2 d)).
D.2 Mixing Property of Random Walks
In this subsection we provide the mixing property of {z t }. To this end, we first provide a lower bound of density ratios of conditional distributions of {z t }.
Lemma D.3. Let m = 4p 2 log d, λ = 1 − 1 e . Let f z m+1 |z 1 (·|z 1 ) be the density function of z m+1 conditional on z 1 , and π(·) be the density of the stationary distribution N (0, I p /p). Then for
∥x∥ 2 ≤ 2 T /p 3 , we have (i) For all values of z 1 ̸ = 0, f z m+1 |z 1 (x|z 1 ) ≥ λπ(x); (ii) For t 0 > 1 and ∥z t 0 ∥ 2 ≤ 2 T /p 3 , f z m+t 0 |zt 0 (x|z t 0 ) ≥ λπ(x).
Proof. We first prove (ii). By construction, the distribution of z m+t 0 |z t 0 is
z m+t 0 |z t 0 ∼ N (α m 2 z t 0 , (1 − α m )I p /p).
Thus the ratio of two densities is
f z m+t 0 |zt 0 (x|z t 0 ) π(x) = exp(− 1 2(1−α m ) (x − α m 2 z t 0 ) T (I p /p) −1 (x − α m 2 z t 0 )) (2π) p/2 |det((1 − α m )I p /p)| 1/2 exp(− 1 2 x T (I p /p) −1 x) (2π) p/2 |det(I p /p)| 1/2 = 1 (1 − α m ) p/2 exp(− p 2(1 − α m ) ∥x − α m 2 z t 0 ∥ 2 2 + p 2 ∥x∥ 2 2 ) ≥ exp( p 2 α m − p 2(1 − α m ) ∥x − α m 2 z t 0 ∥ 2 2 + p 2 ∥x∥ 2 2 ) ≥ exp( p 2 α m − p 2(1 − α m ) ∥x∥ 2 + α m 2 ∥z t 0 ∥ 2 + p 2 ∥x∥ 2 2 ),
where since ∥x∥ 2 , ∥z t 0 ∥ 2 ≤ 2 T /p 3 ,
1 1 − α m ∥x∥ 2 + α m 2 ∥z t 0 ∥ 2 − α m − ∥x∥ 2 2 = (1 + α m 2 ) 2 1 − α m − 1 4T p 3 − α m ≤ 2α m + 2α m 2 1 − α m 4T p 3 . Also α m = (1 − log d p 2 ) 4p 2 log d ≤ exp(−4 log 2 (d)) ≤ 0.01 for d ≥ 3. Thus 1 − α m ≥ 0.99, α m ≤ α m 2 /10, hence 2α m + 2α m 2 1 − α m ≤ 2.2α m 2 0.99 ≤ 7 4 α m 2 ,
and still by α m ≤ exp(−4 log 2 (d)), we have
f z m+t 0 |zt 0 (x|z t 0 ) π(x) ≥ exp(− p 2 · 7α m 2 T 2p 3 ) ≥ exp(− 7T 4p 2 exp(−4 log 2 (d))) = exp(− 7T 4p 2 d 4 log d ) ≥ λ, since T = Ω(p 5 d 4 ).
For (i), similarly the distribution of z m+1 |z 1 is
z m+1 |z 1 ∼ N (α m 2 z 1 ∥z 1 ∥ 2 , (1 − α m )I p /p).
Exactly the same procedure but with z t 0 replaced by z 1 ∥z 1 ∥ 2 and naturally ∥ z 1 ∥z 1 ∥ 2 ∥ 2 = 1 ≤ 2 T /p 3 yields the lower bound λ for the ratio of densities.
With this lower bound in hand, we provide a mixing property of {z t }, which is of use in the generalized Hoeffding's inequality for code occurrences.
Lemma D.4 (Exponential decay of total variation distances). Let m = 4p 2 log d, λ = 1 − 2 e . Then under Assumption 4.3, for the hidden Markov process specified in (2.1) and T ≥ m, it holds that
∥f z T +1 |z 1 (·|z 1 ) − π(·)∥ var ≤ (1 − λ) ⌊T /m⌋ .
Proof of Lemma D.4. We construct a coupling for {z t } and {y t }, where the distribution of {z t } t≥1 is the same as in the hidden Markov model (2.1), and each y t follows the stationary distribution π(·). Specifically, consider the steps i = 1, m + 1, 2m + 1, . . . , ⌊T /m⌋m + 1.
We consider the following coupling.
(i) If i = 1 or ∥z i ∥ 2 and ∥y i ∥ 2 ≤ 2 T /p 3 , draw x i , u i independently with x i ∼ π(·) and u i ∼
Unif[0, 1]; (a) if ∥x i ∥ 2 ≤ 2 T /p 3 and u i ≤ λ, then set z i+m = y i+m = x i ; (b) if ∥x i ∥ 2 > 2 T /p 3 , or u i > λ, set y i+m = x i and choose z i+m ∼ h(·|z i ), where h(x|z i ) = I{∥x∥ 2 ≤ 2 T /p 3 }[f z i+m |z i (x|z i ) − λπ(x)] + I{∥x∥ 2 > 2 T /p 3 }f z i+m |z i (x|z i ) 1 − λ {∥x∥ 2 ≤2 √ T /p 3 } π(x)dx .
(D.6) (ii) If i > 1, ∥z i ∥ 2 and ∥y i ∥ 2 > 2 T /p 3 , then z i+m and y i+m are chosen independently from f z i+m |z i (·|z i ) and π(·).
Firstly, the distribution specified in (D.6) is well-defined, because according to Lemma D.3,
f z i+m |z i (x|z i ) − λπ(x) ≥ 0 on {∥x∥ 2 ≤ 2 T /p 3 }, provided that ∥z i ∥ 2 ≤ 2 T /p 3 .
Furthermore, the coupling is well-defined. Note that marginal distribution of y t is y t ∼ π(·) since y t is always chosen according to π(·). For z i+m , in (i), the distribution of z i+m is a mixture of π(·)I{∥ · ∥ 2 ≤ 2 T /p 3 } with probability λ {∥x∥ 2 ≤2 √ T /p 3 } π(x)dx and h(·|z 1 ) as specified in (D.6), with probability 1 − λ {∥x∥ 2 ≤2 √ T /p 3 } π(x)dx. This mixture is exactly f z i+m |z i (·|z i ). So in both cases (i) and (ii), the marginal distribution is z i+m ∼ f z i+m |z i (·|z i ), thus the distribution of {z t } is exactly the target.
Let T be the coupling time of {z t } and {y t }, T = inf{t : z t = y t }. Let M = {m + 1, 2m + 1, . . . , ⌊T /m⌋m + 1}. Then by the coupling inequality, ∥f z T +1 |z 1 (·|z 1 ) − π(·)∥ var ≤ P( T > T + 1).
By construction of the coupling, for any given nonzero vector z 1 we have
P( T > T + 1|z 1 ) ≤ P ∃i ∈ M, ∥z i ∥ or ∥y i ∥ 2 > 2 T /p 3 z 1 + P T > T + 1, ∥z i ∥ 2 , ∥y i ∥ 2 ≤ 2 T /p 3 , ∀i ∈ M z 1 . (D.7)
By the bounds for chi-squared random variables stated in Lemma D.8, for t > 1, P(∥z t ∥ 2 ≥ 2 T /p 3 | z 1 ) ≤ exp(− T 8p 2 ) and P(∥y t ∥ 2 ≥ 2 T /p 3 ) ≤ exp(− T 8p 2 ). Hence the first term in (D.7) is bounded as
P ∃i ∈ M, ∥z i ∥ or ∥y i ∥ 2 > 2 T /p 3 z 1 ≤ 2⌊ T m ⌋ exp(− T 8p 2 ). (D.8)
The event in the second term in (D.7) corresponds to the subcase (i)(b) with u i > λ for all steps i ∈ M and ∥x i ∥ 2 = ∥y i+m ∥ 2 ≤ 2 T /p 3 , which happens with probability at most 1 − λ in each step. Therefore by independence of u i , we have
P T > T + 1, ∥z i ∥ 2 , ∥y i ∥ 2 ≤ 2 T /p 3 , ∀i ∈ M z 1 ≤ P |M | i=1 {u i > λ} = (1 − λ) |M | ≤ (1 − λ) ⌊T /m⌋ . (D.9)
Comparing the two probabilities in (D.8) and (D.9),
2⌊ T m ⌋ exp(− T 8p 2 ) (1 − λ) ⌊T /m⌋ ≤ exp(− T 8p 2 + log( 2T m ) − ( T m − 1) log(1 − λ)) ≤ exp(− T 8p 2 + log( 2T m ) − T 2m log(1 − λ)) ≤ exp(− T 8p 2 log(e(1 − λ)) + log( T 2p 2 log d )) = exp(− T 8p 2 log 2 + log( T 2p 2 log d )) = exp(−ω( T p 2 )) = o(1).
Here we utilize the fact that log(T /(2p 2 log d)) = o(T /p 2 ) obtained from Assumption 4.3. This shows that the bound in (D.8) is dominated by the one in (D.9). So there exists some λ ∈ (0, 1), for which we may let λ = 1 − 2 e , such that for appropriately large k, d, T ,
∥f z T +1 |z 1 (·|z 1 ) − π(·)∥ var ≤ P( T > T + 1) ≤ (1 − λ) ⌊T /m⌋ . (D.10)
D.3 Mixing Property of Joint Random Walks
For constant gap u > 0, mixing properties of joint (z t , z t+u ) can be obtained in a similar way. We first provide a counterpart of Lemma D.3 for the joint distributions. Let f zt,z t+u (·, ·|z 1 ) denote the density of joint distribution of (z t , z t+u ) given z 1 . Let π u (·, ·) be density of the stationary joint distribution of (z t , z t+u ), which is the joint Gaussian distribution specified in Lemma D.1.
Lemma D.5. Let m = 4p 2 log d, λ = 1 − 1 e . Then for ∥x∥ 2 ≤ 2 T /p 3 , we have
(i) For all values of z 1 ̸ = 0, f z m+1 ,z m+1+u (x, y|z 1 ) ≥ λπ u (x, y);
(ii) For t 0 > 1 and ∥z t 0 ∥ 2 ≤ 2 T /p 3 , f z m+t 0 ,z m+t 0 +u |zt 0 (x, y|z t 0 ) ≥ λπ u (x, y).
Proof of Lemma D.5. Firstly, for any fixed t and u > 0, according to (D.3), (z t , z t+u ) is actually a linear transformation of (z t , r) where z t , r are independent Gaussian random vectors. By the change-of-variable formula for density functions, let g t,t+u (·, ·|z 1 ) be the density of the joint distribution of z t , r
given z 1 , we know
f t,t+u (z t , z t+u |z 1 ) = f t,t+u (z t , α u/2 z t + √ 1 − α u r|z 1 ) = (1−α u ) −k/2 g t,t+u (z t , r|z 1 ) = (1−α u ) −k/2 g t (z t |z 1 )g(r),
where z t+u = α u/2 z t + √ 1 − α u r, g t (·|z 1 ) is the density of z t given z 1 , and g(r) is the density for r ∼ N (0, I p /p). And the last equality is due to the independence of z t and r. Similarly, the stationary distribution of (z t , z t+u ) can be decomposed in the same way with z t , r independent N (0, I p /p). Again by change-of-variable formula,
π u (z t , z t+u ) = (1 − α u ) −k/2 π t,t+u (z t , r) = (1 − α u ) −k/2 π(z t )g(r),
where z t+u = α u/2 z t + √ 1 − α u r. Thus the ratio of densities are simply
f t,t+u (z t , z t+u |z 1 ) π u (z t , z t+u ) = g t (z t |z 1 ) π(z t ) ,
which is exactly the same as in Lemma D.3. Thus we have the same result as Lemma D.3.
The mixing property of (z t , z t+u ) can be obtained from a similar coupling method, as follows.
Lemma D.6 (Exponential decay of total variation distances). Let m = 4p 2 log d, λ = 1 − 2 e . Then under Assumption 4.3, for the hidden Markov process specified in (2.1), for any T ≥ m, it holds that ∥f z T +1 ,z T +u+1 |z 1 (·, ·|z 1 ) − π u (·, ·)∥ var ≤ (1 − λ) ⌊T /m⌋ .
Proof of Lemma D.6. For constant gap u > 0, a similar coupling for the joint (z i , z i+u ) and (y i , y i+u ) can be constructed for i = 1, m + 1, . . . , ⌊T /m⌋m + 1, where (z i , z i+u ) follows the distribution in our Markov process and (y i , y i+u ) follows the stationary distribution for (z i , z i+u ), which is multivariate Gaussian.
Consider the following coupling of (z i , z i+u ) and (y i , y i+u ) for i ∈ M .
(i) If i = 1 or ∥z i ∥ 2 , ∥y i ∥ 2 ≤ 2 T /p 3 , choose (x i , x i+u ) and v i independently, where (x i , x i+u ) ∼ π u (·, ·) and v i ∼ Unif[0, 1]; (a) if ∥x i ∥ 2 ≤ 2 T /p 3 and v i ≤ λ, then set (z i+m , z i+m+u ) = (y i+m , y i+m+u ) = (x i , x i+u ); (b) if ∥x i ∥ 2 > 2 T /p 3 or v i > λ, set (y i+m , y i+m+u ) = (x i , x i+u ) and independently choose (z i+m , z i+m+u ) ∼ h(x, y|z i ), where h(x, y|z i ) = I{∥x∥ 2 ≤ 2 T /p 3 }[f z i+m ,z i+m+u |z i (x, y|z i ) − λπ u (x)] 1 − λ {∥x∥ 2 ≤2 √ T /p 3 } π u (x, y)dxdy + I{∥x∥ 2 > 2 T /p 3 }f z i+m ,z i+m+u |z i (x, y|z i ) 1 − λ {∥x∥ 2 ≤2 √ T /p 3 } π u (x, y)dxdy (D.11) (ii) If i > 1, ∥z i ∥ 2 and ∥y i ∥ 2 > 2 T /p 3 , then choose (z i+m , z i+m+u ∼ f z i+m ,z i+m+u |z i (·, ·|z i ) and (y i+m , y i+m+u ) ∼ π u (·, ·) independently.
Firstly, the distribution in (D.11) is well-defined, since according to Lemma D.5,
f z i+m ,z i+m+u |z i (x, y|z i ) − λπ u (x) ≥ 0 on {∥x∥ 2 ≤ 2 T /p 3 } given ∥z i ∥ 2 ≤ 2 T /p 3 .
Furthermore, the coupling is well-defined. Note that the marginal distribution of (y i , y i+u ) is
(y i , y i+u ) ∼ π u (·, ·) since it is always drawn from that. The marginal distribution of {(z i , z i+u } i∈M
is the same as in (2.1), because the conditional distribution (z i+m , z i+m+u )|z i is the same as in the hidden Markov model in (2.1). To see this, note that similar to the coupling in the proof of Lemma D.4, in case (i) the marginal distribution of (z i+m , z i+m+u ) is a mixture of π u (x, y)I{∥x∥ 2 ≤ 2 T /p 3 } and h(x, y|z i ), with weights λ {∥x∥ 2 ≤2 √ T /p 3 } π u (x, y)dxdy and 1 − λ {∥x∥ 2 ≤2 √ T /p 3 } π u (x, y)dxdy, respectively. Thus in both cases (i) and (ii), (z i+m , z i+m+u ) follows f z i+m ,z i+m+u |z i (x, y|z i ), and so are their marginal joint distributions.
We again employ the coupling inequality to bound the total variation distance between (z i , z i+u )|z 1 and π u (·, ·). Let T be the coupling time of {(z t , z t+u )} and {(y t , y t+u )}, T = inf{t : (z t , z t+u ) = (y t , y t+u )}. Let M = {m + 1, 2m + 1, . . . , ⌊T /m⌋m + 1}. Then by the coupling inequality, ∥f z T +1 ,z T +1+u |z 1 (·, ·|z 1 ) − π u (·, ·)∥ var ≤ P( T > T + 1|z 1 ).
We have the decomposition as
P( T > T + 1|z 1 ) ≤ P ∃i ∈ M, ∥z i ∥ 2 or ∥y i ∥ 2 > 2 T /p 3 z 1 + P T > T + 1, ∥z i ∥ 2 , ∥y i ∥ 2 ≤ 2 T /p 3 , ∀i ∈ M z 1 .
Here the first term is similarly bounded by
P ∃i ∈ M, ∥z i ∥ or ∥y i ∥ 2 > 2 T /p 3 |z 1 ≤ 2⌊ T m ⌋ exp(− T 8p 2 ),
and the second term implies that ∥x i ∥ 2 ≤ 2 T /p 3 but v i > λ for all steps i ∈ M . Thus
P T > T + 1, ∥z i ∥ 2 , ∥y i ∥ 2 ≤ 2 T /p 3 , ∀i ∈ M z 1 ≤ (1 − λ) |M | ≤ (1 − λ) ⌊T /m⌋ .
Comparing the two terms as in the proof of Lemma D.4 then yields the claimed bound.
Lemma D.7. Under Assumption 4.1, if z t , r t+1 ∼ N (0, I p /p) and they are independent, then
P ∥z t ∥ 2 /∥r t+1 ∥ 2 ≤ 1 2 = exp(−ω(log 2 d)).
Proof of Lemma D.7. p∥z t ∥ 2 2 ∼ χ 2 p , and by Lemma E.3, we have
P p∥z t ∥ 2 2 ≥ 2p ≤ exp − p 4(1 + c ′ 1 ) , P p∥z t ∥ 2 2 ≤ p/2 ≤ exp − p 16(1 − c ′ 2 ) , (D.12)
where c ′ 1 ≥ 1 and c ′ 2 ≤ 1/2. Let c ′ 1 = 1 and c ′ 2 = 0, then P ∥z t ∥ 2 2 ≥ 2 = exp(−Ω(p)), P ∥z t ∥ 2 2 ≤ 1/2 = exp(−Ω(p)).
This holds true for r t+1 as well.
So we have P ∥z t ∥ 2 /∥r t+1 ∥ 2 ≤ 1 2 ≤ P {∥z t ∥ 2 2 ≤ 1/2} ∪ {∥r t+1 ∥ 2 2 ≥ 2} ≤ P ∥z t ∥ 2 2 ≤ 1/2 + P ∥r t+1 ∥ 2 2 ≥ 2 = exp(−Ω(p)) = exp(−ω(log 2 d)).
(D.13)
Lemma D.8. Under Assumption 4.1, the ℓ 2 norm of the following two vectors can be bounded by 2 T /p 3 with high probability.
1. For z ∼ N (0, I p /p), P[∥z∥ 2 ≥ 2 T /p 3 ] ≤ exp − T 8p 2 .
2. For z t |z 1 (t ≥ 2) in {z t } t≥1 defined in (2.1), we have P ∥z t ∥ 2 ≥ 2 T /p 3 z 1 ≤ exp − T 8p 2 . Proof of Lemma D.8. For z ∼ N (0, I p /p), p∥z∥ 2 2 ∼ χ 2 p , then by Lemma E.3, ∥z∥ 2 2 can be bounded with high probability as
P[∥z∥ 2 2 ≥ 1 + T /p 3 ] ≤ exp − T 2 /p 5 4(1 + T /p 3 ) ≤ exp − T 8p 2 =⇒P[∥z∥ 2 ≥ 2 T /p 3 ] ≤ exp − T 8p 2 , (D.14)
where T ≫ p 3 under Assumption 4.1.
E Properties of code vectors
In this section, we prove several properties of code vectors, which are frequently used in Section F.
Lemma E.1 (Bounds on the variances of code vectors). Under Assumptions 4.1 and 4.2, the variances σ 2 w = Σ ww of the code-vector entries satisfy
1 ρ ≤ min w σ 2 w ≤ max w σ 2 w ≤ ρ + c 0 .
Hereafter we denote σ 2 min = 1/ρ, σ 2 max = ρ + c 0 as the lower and upper bounds of variances.
Proof of Lemma E.1. For the i-th code w, i ∈ [d], w ∈ V , we do not distinguish between saying 'word i' or 'word w', and denote σ 2 i := σ 2 w in this proof. From the decomposition Σ = AQA T + Γ, we see that σ 2 i = Q kk if i ∈ G k and |G k | = 1, and σ 2 i = Q kk + Γ ii if i ∈ G k and |G k | > 1. Therefore
min k Q kk ≤ min i (σ 2 i ) ≤ max i (σ 2 i ) ≤ max k Q kk + max i Γ ii .
By Assumption 4.1, we know
min k Q kk ≥ min ∥x∥ 2 =1 x T Qx = λ min (Q) = ∥C∥ −1 L 2 ≥ 1/ρ,
where λ min (·) denotes the smallest eigenvalue of a matrix. On the other hand, as ∆(Q) ≥
C 0 log d/p, we know max k Q kk + max i Γ ii ≤ c 0 + λ max (Q) = c 0 + λ min (C) −1 ≤ c 0 + ρ,
where λ max (·) denotes the largest eigenvalue of a matrix.
Corollary E.2. With probability at least 1 − exp(−ω(log 2 d)), the code vectors V w generated from our Gaussian graphical model satisfy that
σ min √ 2 √ p ≤ min w ∥V w ∥ 2 ≤ max w ∥V w ∥ 2 ≤ σ max 2p. Proof of Corollary E.2. For code w, [V w ] ℓ i.i.d. ∼ N (0, σ 2 w ) for 1 ≤ ℓ ≤ p. Then p ℓ=1 [V w ] 2 ℓ /σ 2 w = ∥V w ∥ 2 2 /σ 2 w ∼ χ 2 p . We first analyze P max w ∥V w ∥ 2 ≥ 2σ
√ p . By Lemma E.3, we have the tail probability bound
P ∥V w ∥ 2 2 /σ 2 w ≥ 2p ≤ exp − p 4(1 + c ′ 1 ) = exp (−p/8) = exp(−ω(p)), where c ′ 1 = 1 ≥ √ p/ √ p satisfies the condition in Lemma E.3. By Lemma E.1, σ 2 w ≤ σ 2 max , so P ∥V w ∥ 2 2 ≥ 2σ 2 max p = P ∥V w ∥ 2 ≥ σ max √ 2p = exp (−ω(p)) .
With union bound, we can bound the ℓ 2 norm of all code vectors with high probability as
P max w ∥V w ∥ 2 ≥ σ max √ 2p ≤ d exp (−ω(p)) = exp(−ω(p)) = exp(−ω(log 2 d)), since log d = o( √ p) by Assumption 4.3.
Similar procedure can be used to analyze P min w ∥V w ∥ 2 ≤ σ min √ 2 · √ p . By Lemma E.3, we
have P ∥V w ∥ 2 2 /σ 2 w ≤ p/2 ≤ exp − p 16(1 − c ′ 2 ) , where c ′ 2 ≤ √ p/2 √ p = 1 2 .
Let c ′ 2 = 0 and by σ 2 w ≥ min w σ 2 w ≥ σ 2 min ,
P ∥V w ∥ 2 2 ≤ σ 2 min p/2 = P ∥V w ∥ 2 ≤ σ min √ 2 · √ p ≤ exp − p 16 = exp (−ω(p)) .
With union bound, we have
P min w ∥V w ∥ 2 ≤ σ min √ 2 · √ p = d exp (−ω(p)) = exp(−ω(p)) = exp(−ω(log 2 d)), where log d = o( √ p) by Assumption 4.3.
Lemma E.3 (Concentration of Chi-square random variable). Q ∼ χ 2 p concentrates around its mean with high probability. Specifically,
P[Q ≥ p + t 1 √ p] ≤ exp − t 2 1 4(1 + c ′ 1 ) ,
where t 1 > 0 and constant c ′ 1 satisfies c ′ 1 ≥ t 1 / √ p.
And
P[Q ≤ p − t 2 √ p] ≤ exp − t 2 2 4(1 − c ′ 2 )
, where 0 < t 2 < √ p and c ′ 2 ≤ t 2 / √ p is a constant.
Proof of Lemma E.3. We first bound P(Q ≥ p + t 1 √ p). By the Chernoff bound, for some s < 1/2, we
have P Q ≥ p + t 1 √ p ≤ E[exp(sQ)] exp(s(p + t 1 √ p)) = (1 − 2s) −p/2 exp(s(p + t 1 √ p)) . Let s = t 1 2( √ p+t 1 ) < 1 2 , we have P Q ≥ p + t 1 √ p ≤ (1 − t 1 √ p+t 1 ) −p/2 exp(t 1 √ p/2) . Let g 1 (t 1 ) := ln (1− t 1 √ p+t 1 ) −p/2 exp(t 1 √ p/2)
, then we will show that g 1 (t 1 ) ≤ −
t 2 1 4(1+c ′ 1 ) for some constant c ′ 1 ≥ t 1 / √ p. Define G 1 (t 1 ) as G 1 (t 1 ) := g 1 (t 1 ) + t 2 1 4(1+c ′ 1 ) , then dG 1 (t 1 ) dt 1 = p 2( √ p + t 1 ) − √ p 2 + t 1 2(1 + c ′ 1 ) = − t 1 2(1 + t 1 / √ p) + t 1 2(1 + c ′ 1 ) ≤ 0.
Note that G 1 (·) is a continuous function in [0, +∞], so we have G 1 (t 1 ) ≤ G 1 (0) = 0, which gives us g 1 (t 1 ) ≤ − t 2 1 4(1 + c ′ 1 )
.
And thus
P(Q ≥ p + t 1 √ p) ≤ exp(g 1 (t 1 )) ≤ exp − t 2 1 4(1 + c ′ 1 ) , where c ′ 1 ≥ t 1 / √ p is a constant.
We next bound P(Q ≤ p − t 2 √ p) using similar method.
P Q ≤ p − t 2 √ p ≤ E [exp{−sQ}] exp{−s(p − t 2 √ p)} = (1 + 2s) −p/2 exp{−s(p − t 2 √ p)} , where s > −1/2 and 0 < t 2 < √ p. Let s = t 2 2( √ p−t 2 ) > − 1 2 , then we have P Q ≤ p − t 2 √ p ≤ (1 + t 2 √ p−t 2 ) −p/2 exp{−t 2 √ p/2} . Let g 2 (t 2 ) := ln (1− t 2 √ p−t 2 ) −p/2 exp(−t 2 √ p/2)
, for some constant 0 ≤ c ′ 2 ≤ t 2 √ p , and G 2 (t 2 ) := g 2 (t 2 ) +
t 2 2 4(1−c ′ 2 ) . Then dG 2 (t 2 ) dt 2 = p 2( √ p − t 2 ) − √ p 2 + t 2 2(1 − c ′ 2 ) = − t 2 2(1 − t 2 / √ p) + t 2 2(1 − c ′ 2 ) ≤ 0.
Here G 2 (·) is a continuous function in [0, +∞], so we have G 2 (t 2 ) ≤ G 2 (0) = 0, and thus
g 2 (t 2 ) ≤ − t 2 2 4(1 − c ′ 2 )
.
Lastly, we have
P(Q ≤ p − t 2 √ p) ≤ exp(g 2 (t 2 )) ≤ exp − t 2 2 4(1 − c ′ 2 ) , where 0 ≤ c ′ 2 ≤ t 2 / √ p is a constant.
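As a sanity check (ours) of the upper-tail bound, one can compare the empirical tail of a χ 2 p variable at 2p (i.e., t 1 = √ p and c ′ 1 = 1) with the stated bound exp(−p/8):

```python
# Monte Carlo check (ours) of the upper-tail bound in Lemma E.3 with t1 = sqrt(p).
import numpy as np

rng = np.random.default_rng(0)
p = 40
Q = rng.chisquare(df=p, size=200_000)
emp = np.mean(Q >= 2 * p)                  # empirical P[Q >= p + t1*sqrt(p)] = P[Q >= 2p]
print(emp, np.exp(-p / 8))                 # should satisfy emp <= exp(-t1^2 / (4(1+c1')))
```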
F Concentration of Partition Function
Based on results of properties of code vectors developed in Section E, we prove the main result in this section, Lemma F.2. It mainly says that the partition function Z(V, c) = w exp(⟨V w , c⟩) is close to its expectation with high probability, where the randomness comes from both code vectors V w and discourse vectors c.
Before stating our concentration result, we first state the following lemma about concentration of functions of i.i.d. N (0, 1) variables, which plays a key role in our proof of Lemma F.2. It is from Corollary 2.5 in Ledoux (1999) applied to local gradient operator Γ(f ) = ∥∇f ∥ 2 2 .
Lemma F.1 (Gaussian concentration inequality). Let X 1 , . . . , X n be independent Gaussian random variables with zero mean and unit variance.
Then
P(f (X 1 , . . . , X n ) − E[f (X 1 , . . . , X n )] ≥ t) ≤ exp(− t 2 2σ 2 ), holds for all t ≥ 0, where σ 2 = ∥ ∥∇f ∥ 2 2 ∥ ∞ .
Proof. See Corollary 2.5 in Ledoux (1999), or Theorem 3.25 in van Handel (2016).
Below is the main result of this section.
Lemma F.2. Suppose the code vectors V w are generated from the specified Gaussian graphical model, and v w are their realizations. Then under Assumptions 4.1, 4.2 and 4.3, for some large constant τ ′ , with probability at least 1 − exp(−ω(log 2 d)), (with respect to the randomness in generating code vectors), the realized partition function Z(V, c) = w exp(⟨v w , c⟩) satisfies
P c∼C (1 − ϵ z )Z ≤ Z(V, c) ≤ (1 + ϵ z )Z V w = 1 − exp(−ω(log 2 d)). (F.1) where Z = d i=1 exp(Σ ii /2), ϵ z = log d p .
If we count in the randomness of V w , we still have
P c∼C,Vw (1 − ϵ z )Z ≤ Z(V, c) ≤ (1 + ϵ z )Z ≥ 1 − exp(−ω(log 2 d)). (F.2)
Though Lemma F.2 is similar to Lemma 2.1 in Arora et al. (2016), their proof is based on a Bayesian prior on code vectors, which assumes the code vectors are i.i.d., produced by v = s · v̂, where s is a scalar random variable and v̂ comes from a spherical Gaussian distribution. In our lemma, however, this Bayesian prior assumption is relaxed; instead we assume the code vectors are generated from our Gaussian graphical model in Section 2.2. Our proof is essentially harder than that in Arora et al. (2016), because the code vectors are correlated rather than i.i.d.
Proof of Lemma F.2. When both discourse variable c and code vectors are random, the partition function Z c = Z c (V w , c) is a function of both V w and c, which makes the analysis complicated.
We prove the lemma in two steps. Firstly, we analyze the concentration of Z(V w , c) around Z with c fixed. Then we switch the randomness in V w and c to obtain the final results. In the first step, we first truncate the values ⟨V i , c⟩ and prove concentration for the truncated version; the remaining event has sufficiently small probability.
Analysis of Z(V w , c) with c fixed. Firstly, as is shown in Lemma F.3, for fixed c, the vector Y = (⟨V 1 , c⟩, · · · , ⟨V d , c⟩) T follows a multivariate Gaussian distribution with mean zero and covariance matrix Σ. Let Σ = LL T be the Cholesky decomposition of Σ, where L is a lower triangular matrix with positive diagonal entries. Let Y = LX or X = L −1 Y , we know X ∼ N (0, I d ).
Denote the event F = {∥Y i ∥ 2 ≤ (γ/2) log d, ∀i = 1, . . . , d}, where γ > 0 is the constant such
that k log 2 d = O(d 1−γ ) in Assumption 4.3. Since Y i ∼ N (0, Σ ii ), we have P(F c ) = P ∃i : |Y i | > (γ/2) log d ≤ d i=1 P(|Y i | > (γ/2) log d) ≤ d i=1 exp(− γ 2 log 2 d 8Σ ii ) = exp(−ω(log 2 d)).
Also note that F = {∥LX∥ ∞ ≤ (γ/2) log d} in terms of X. Now define our target
f (X) = f (X 1 , . . . , X d ) = d i=1 exp(Y i )I{F } = d i=1 exp( i j=1 L ij X j )I{F }.
Since the discontinuity points (also non-differentiability points) of I{F }(X) are of measure zero under both the Lebesgue measure and the probability measure of X, we have the gradient of f (almost surely)
(∇f ) i (X 1 , . . . , X d ) = d j=i L ji exp( j p=1 L jp X p ) I{F } = d j=i L ji e Y j I{F }.
And thus ∇f = L T YI{F } (almost surely). Hence
∥∇f ∥ 2 2 = Y T LL T YI{F } = Y T Σ YI{F } ≤ ∥Σ∥ 2 ∥ Y∥ 2 2 I{F }, where ∥ Y∥ 2 2 I{F } = d i=1 e 2Y i I{F } ≤ d 1+γ .
We now proceed to bound ∥Σ∥ 2 from our model assumptions. By the decomposition Σ = AQA T + Γ, we have ∥Σ∥ 2 ≤ ∥AQA T ∥ 2 + ∥Γ∥ 2 , where ∥Γ∥ 2 = max i Γ ii ≤ c 0 since it is diagonal and satisfies
Assumption 4.2. The L 2 -norm of AQA T is ∥AQA T ∥ 2 = max ∥x∥ 2 =1 x T AQA T x ≤ max ∥x∥ 2 =1 ∥A T x∥ 2 2 ∥Q∥ 2 ,
where for ∥x∥ 2 = 1,
∥A T x∥ 2 2 = K j=1 i∈G j x i 2 ≤ K j=1 |G j | i∈G j x 2 i ≤ max j |G j | ∥x∥ 2 2 = max j |G j |.
Thus we have the bound of ∥Σ∥ 2 as
∥Σ∥ 2 ≤ ∥Q∥ 2 max j |G j | + c 0 ≤ ρ max j |G j | + c 0 ,
where ρ, c 0 are constants from Assumptions 4.1 and 4.2. Therefore by Lemma F.1, with σ 2 = d 1+γ ,
for t ≥ 0 we have
P f (X 1 , . . . , X d ) − E[f (X 1 , . . . , X d )] ≥ t ≤ exp(− t 2 2d 1+γ ).
With the same argument applied to −f and using union bound, we know that for any δ ≥ 0,
P Z(V w , c)I{F } − E[Z(V w , c)I{F }] E[Z(V w , c)] ≥ δ ≤2 exp(− δ 2 (E[Z(V w , c)]) 2 2d 1+γ ) ≤ 2 exp(− d 1−γ δ 2 2(ρ max j |G j | + c 0 ) ) = exp(−ω(log 2 d)), (F.3) since d 1−γ /(k max j |G j |) = ω(log 2 d) by Assumption 4.3, and E[Z(V w , c)] ≥ d proved in Lemma F.3.
Moreover, by the Cauchy-Schwarz inequality we have
0 ≤E[Z(V w , c)] − E[Z(V w , c)I{F }] =E[Z(V w , c)I{F c }] ≤(E[Z(V w , c) 2 ]) 1/2 (E[I{F c }]) 1/2 ≤(d d i=1 E[e 2Y i ])(E[I{F c }]) 1/2
≤d 2 e 2σ 2 max · exp(−ω(log 2 d)) = exp(−ω(log 2 d)).
Therefore letting δ = 1 2 √ p in (F.3), we have P Z(V w , c) − E[Z(V w , c)] E[Z(V w , c)] ≥ 1 √ p ≤P Z(V w , c) ̸ = Z(V w , c)I{F } + P Z(V w , c) = Z(V w , c)I{F }, Z(V w , c)I{F } − E[Z(V w , c)] E[Z(V w , c)] ≥ 1 √ p ≤P(F c ) + P Z(V w , c)I{F } − E[Z(V w , c)] E[Z(V w , c)] ≥ 1 √ p − E[Z(V w , c)I{F }] − E[Z(V w , c)] E[Z(V w , c)] ≤ exp(−ω(log 2 d)) + P Z(V w , c)I{F } − E[Z(V w , c)] E[Z(V w , c)] ≥ 1 √ p − exp(−ω(log 2 d)) ≤ exp(−ω(log 2 d)) + P Z(V w , c)I{F } − E[Z(V w , c)] E[Z(V w , c)] ≥ 1 2 √ p ≤ exp(−ω(log 2 d)) + 2 exp(− d 1−γ 8p ) = exp(−ω(log 2 d))
for appropriately large (k, d, T ), under the condition that k log 2 d = O(d 1−γ ) in Assumption 4.3 so that d 1−γ /k = ω(log 2 d). Note that all of these analyses are carried out with c fixed. Thus, taking expectation over all c, we have our last assertion that
P c∼C,Vw (1 − ϵ z )Z ≤ Z(V, c) ≤ (1 + ϵ z )Z ≥ 1 − exp(−ω(log 2 d)).
Switch the randomness in V w and c. We then proceed to prove the assertion with respect to V w by switching the randomness in V w and c. By applying Lemma F.4 to V w , c and F =
{(1 − ϵ z )Z ≤ Z(V w , c) ≤ (1 + ϵ z )Z}, and some ϵ = exp(−ω(log 2 d)), we have that for κ = 1/ √ ϵ, it holds that P P c∼C (1 − ϵ z )Z ≤ Z(V w , c) ≤ (1 + ϵ z )Z V w > 1 − κϵ ≥ 1 − 1 κ ,
which actually reads
P P c∼C (1 − ϵ z )Z ≤ Z(V w , c) ≤ (1 + ϵ z )Z V w > 1 − exp(−ω(log 2 d)) ≥ 1 − exp(−ω(log 2 d)).
Next we provide bounds on the mean of Z(V w , c), which supports Lemma F.2.
Lemma F.3 (Distribution and mean of Z c ). Under Assumption 4.1, for any fixed unit vector c ∈ R p , and random code vector V w from our Gaussian graphical model, the vector
Y = (⟨V 1 , c⟩, · · · , ⟨V d , c⟩) T ∼ N (0, Σ),
where Σ is the covariance matrix in our Gaussian graphical model. Also we have the bounds on
the mean of Z(V w , c) that d ≤ d exp(σ 2 min /2) ≤ E[Z(V w , c)] ≤ d exp(σ 2 max /2)
Proof of Lemma F.3. Throughout this proof, we let c be any fixed unit vector in R p and code vectors be random variables following our Gaussian graphical model. Recall that partition function
Z c (V w , c) is Z(V w , c) = w exp(⟨V w , c⟩).
Based on our Gaussian graphical model, p components in a code vector random variable V w are i.i.d. Gaussian random variables. Since c is a given unit vector, by the property that linear combination of jointly Gaussian random variables is still a Gaussian r.v., then the vector (⟨V 1 , c⟩, · · · , ⟨V d , c⟩) T follows a multivariate Gaussian distribution. Clearly the mean is zero, and the pairwise covariance is
Cov(⟨V i , c⟩, ⟨V j , c⟩) = E[c T V i V T j c] = c T E[V i V T j ]c. According to Lemma A.1, V i , V j have entries such that the pairs ([V i ] ℓ , [V j ] ℓ ) T are i.i.d. over ℓ with covariance Σ ij . Therefore E[V i V T j ] = Σ ij I p . Thus we have Cov(⟨V i , c⟩, ⟨V j , c⟩) = Σ ij ,
where Σ ij is the (i, j)-th entry of Σ. Therefore, the vector (⟨V 1 , c⟩, · · · , ⟨V d , c⟩) T ∼ N (0, Σ), and Z = (exp(⟨V 1 , c⟩), · · · , exp(⟨V d , c⟩)) T ∼ lognormal(0, Σ).
According to the distribution of Z, the mean of Z(V w , c) is
E[Z(V w , c)] = d i=1 E[Z i ] = d i=1 exp(Σ ii /2). (F.4)
Applying Lemma E.1 and Assumptions 4.1 and 4.2 to Σ ii the variance of the i-th word,
E[Z(V w , c)] is bounded as d ≤ d exp(σ 2 min /2) ≤ E[Z(V w , c)] ≤ d exp max w σ 2 w /2 ≤ d exp(σ 2 max /2), (F.5)
where the left-most bound is straightforward since σ 2 min ≥ 0.
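The identity (F.4) and the bounds (F.5) are easy to verify numerically for a fixed unit vector c; in the sketch below (ours), Σ is an arbitrary small positive semi-definite matrix chosen purely for illustration.

```python
# Monte Carlo check (ours) of E[Z(V, c)] = sum_i exp(Sigma_ii / 2) for fixed unit c.
import numpy as np

rng = np.random.default_rng(1)
d, p = 6, 200
Sigma = 0.1 * np.ones((d, d)) + 0.3 * np.eye(d)   # illustrative covariance of code entries
c = rng.normal(size=p); c /= np.linalg.norm(c)    # fixed unit discourse vector
L = np.linalg.cholesky(Sigma)

def sample_Z():
    # V has p i.i.d. coordinates; each coordinate is an N(0, Sigma) vector across the d codes
    V = L @ rng.normal(size=(d, p))               # row i is the code vector of word i
    return np.exp(V @ c).sum()                    # Z(V, c) = sum_w exp(<V_w, c>)

mc = np.mean([sample_Z() for _ in range(20_000)])
print(mc, np.exp(np.diag(Sigma) / 2).sum())       # the two values should be close
```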
Lemma F.4. Let F 2 be an event about random variables X 1 , X 2 , formally, F 2 ∈ σ(X 1 , X 2 ) and I{F 2 } be the indicator function of F 2 . Assume that conditioning on X 2 , it satisfies that E[I{F 2 }|X 2 ] = P(F 2 |X 2 ) ≥ 1 − ϵ for some 0 < ϵ < 1, then
P X 1 [P X 2 (F 2 |X 1 ) > 1 − κϵ] ≥ 1 − 1 κ ,
where κ > 0, and P X 1 indicates that the probability is with respect to X 1 .
Proof of Lemma F.4. We prove the lemma by contradiction. Assume that P X 1 (P
X 2 (F 2 |X 1 ) > 1 − κϵ) = 1 − 1 κ − δ, where δ > 0. Then E[I{F 2 }] satisfies that E[I{F 2 }] =E I{F 2 }I{E[I{F 2 }|X 1 ] > 1 − κϵ} + E I{F 2 }I{E[I{F 2 }|x 1 ] ≤ 1 − κϵ} ≤E I{E[I{F 2 }|X 1 ] > 1 − κϵ} + E E[I{F 2 }I{E[I{F 2 }|X 1 ] ≤ 1 − κϵ} X 1 ] ≤P(E[I{F 2 }|x 1 ] > 1 − κϵ) + E E[I{F 2 } X 1 ]I{E[I{F 2 }|X 1 ] ≤ 1 − κϵ} ≤P(E[I{F 2 }|x 1 ] > 1 − κϵ) + (1 − κϵ)E I{E[I{F 2 }|X 1 ] ≤ 1 − κϵ} =P X 1 (P X 2 (F 2 |X 1 ) > 1 − κϵ) + (1 − κϵ)P x 1 [P x 2 (F 2 |x 1 ) ≤ 1 − κϵ] =1 − 1 κ − δ + (1 − κϵ) 1 κ + δ = 1 − ϵ − κϵδ < 1 − ϵ. (F.6)
Here the first inequality is due to the monotonicity of expectation and the tower property of conditional expectation, and the second inequality is because I{E[I{F 2 }|X 1 ] ≤ 1 − κϵ} is σ(X 1 )-measurable.
However, by the condition that
E X 1 [I{F 2 }|X 2 ] = P X 1 [F 2 |X 2 ] ≥ 1 − ϵ, it follows that E[I{F 2 }] = E X 2 [E X 1 [I{F 2 }|X 2 ]] = E X 2 [P X 1 [F 2 |X 2 ]] ≥ 1 − ϵ, (F.7)
a contradiction to (F.6). Thus P x 1 [P x 2 (F 2 |x 1 ) > 1 − κϵ] ≤ 1 − 1 κ − δ cannot hold for any δ > 0. Therefore, it must hold that
P x 1 [P x 2 (F 2 |x 1 ) > 1 − κϵ] ≥ 1 − 1 κ .
G Technical Lemmas on the Markov Processes
G.1 Concentration of Conditional Occurrence Expectations
Note that p w (t) is a function of c t , which is in turn a function of z t . Also p w (t) ∈ [0, 1]. We employ a generalized Hoeffding's inequality to provide concentration properties of the expectations of word occurrences.
Lemma G.1. Let S be the state space of a Markov process {X t } t≥0 . Assume that for each x ∈ S, there exists a probability measure φ on S, λ > 0 and an integer m ≥ 1 such that P[X m ∈ ·|X 0 = x] ≥ λφ(·). For a function f : S → R, let Y i := f (X i ). If ∥f ∥ := sup{|f (x)| : x ∈ S} < ∞, then we can apply a generalized Hoeffding's inequality to S n := n−1 i=0 Y i as
P[S n − ES n ≥ nε | X 0 = x] ≤ exp − λ 2 (nε − 2∥f ∥m/λ) 2 2n∥f ∥ 2 m 2 ,
for n > 2∥f ∥m/(λε).
Lemma G.1 here is a generalized Hoeffding's inequality proven in Glynn and Ormoneit (2002).
With minor modification, it can be presented as Lemma G.2. We will use the Hoeffding inequality in
Lemma G.2 to prove the concentration of T t=1 f 2 • f 1 (z t ).
Lemma G.2. Let S be the continuous state space of a Markov process {X t } t≥1 . Let π(·) be a stationary distribution of {X t } t≥1 . Assume that there exists an integer m ≥ 1 and λ > 0 such that ∥f X n+1 |X 1 (·|X 1 = x 1 ) − π(·)∥ var ≤ (1 − λ) ⌊n/m⌋ . Let f : S → R be a function on S, and
S n := n i=1 f (X i ). Let α = E π [f (X i )] = S f (x)dπ(x) be the expectation of f (X i ) when X i ∼ π(·). Then, if ∥f ∥ S < ∞, P S n − nα ≥ nϵ|X 1 = x 1 ≤ exp − λ 2 (nϵ − 2m∥f ∥ S λ ) 2 2nm 2 ∥f ∥ 2 S for nϵ ≥ 2m∥f ∥ S /λ.
Proof of Lemma G.2. We prove Lemma G.2 by proving the condition here is the same as Lemma G.1. Lemma G.1 assumes P[X m ∈ ·|X 0 = x] ≥ λφ(·) for any x ∈ S. In the proof in Glynn and Ormoneit (2002), this assumption is only used to prove
E[f (X n )|X 0 = x] − S f (x)π(dx) ≤ ∥f ∥ · (1 − λ) ⌊n/m⌋ . (G.1)
However, Lemma G.2 does not assume P[X m ∈ ·|X 0 = x] ≥ λφ(·), but assumes ∥f X n+1 |x 1 (·|X 1 =
x 1 ) − π(·)∥ var ≤ (1 − λ) ⌊n/m⌋ . Also, note that the Markov process in Lemma G.2 starts at t = 1.
We here show that condition ∥f X n+1 |x 1 (·|X 1 = x 1 ) − π(·)∥ var ≤ (1 − λ) ⌊n/m⌋ can also derive Eq. G.1, only with different subscripts:
∥f X n+1 |x 1 (·|X 1 = x 1 ) − π(·)∥ var ≤ (1 − λ) ⌊n/m⌋ =⇒|f (·)| × |f X n+1 |x 1 (·|X 1 = x 1 ) − π(·)| ≤ ∥f ∥(1 − λ) ⌊n/m⌋ =⇒ E[f (X n+1 )|X 1 = x 1 ] − S f (x)π(dx) ≤ ∥f ∥(1 − λ) ⌊n/m⌋ . (G.2)
With this, Lemma G.2 can be proven in the same way as Lemma G.1.
Recall our definition in Section B.1 of conditional occurrence probabilities p w (t) = E[X t (w)|{c t } t>0 ], and the conditional total occurrence S w = T t=1 p w (t) of word w. Also recall that
N w = T p w = T t=1 E c∼D [p w (t)]
is the stationary expectation of S w . Based on Lemma G.2, we show the relative concentration of S w to N w provided that p w is of order Ω(1/d), which is determined by V w . Note that the stationary probabilities p w and hence N w only depend on the word vectors V w .
Lemma G.3. Suppose Assumptions 4.1, 4.2 and 4.3 hold, and let p, p be the constants in Lemma B.1. Denote the set of word vectors
V * = {V w : p/d ≤ min w p w ≤ max w p w ≤ p/d}. Then for V w ∈ V * , it holds that P S w N w − 1 ≥ 1 √ p V w = exp(−ω(log 2 (d))),
If we remove the above condition on V w , then P
(V w / ∈ V * ) ≤ exp(−ω(log 2 d)) + d −τ and P S w N w − 1 ≥ 1 √ p = exp(−ω(log 2 (d))) + d −τ ,
for some large constant τ > 0 as in Lemma B.1.
Proof. Adopt the choice of m = 4p 2 log d and λ = 1 − 2 e in Lemma D.4, then conditions in Lemma G.2 are satisfied. By Lemma G.2 applied to f (z t ) = p w (t), with ∥f ∥ S ≤ 1, where S is the continuous state space of z t , we have that for T ϵ ≥ 2m/λ,
P T t=1 p w (t) − T p w ≥ T ϵ V w ≤ 2 exp(− λ 2 (T ϵ − 2m λ ) 2 2T m 2 ). (G.3)
Here we choose ϵ = δp w , where δ = 1 √ p . Then as λ ≥ 1/4 we have
T ϵ 2m/λ = λT p w 8p 2 log d ≥ T p w 32p 2 log d ≥ 2,
since p w = Ω(1/d) for V w ∈ V * . Thus T ϵ ≥ 2m/λ and
P 1 T p w T t=1 p w (t)−1 ≥ δ V w ≤ 2 exp(− λ 2 T 2 δ 2 p 2 w
8T m 2 ) = exp(−ω( T p 5 d 2 log 2 d )) = exp(−ω(log 2 (d))).
(G.4) Furthermore, by the results about the scale of p (u) w,w ′ in Lemma B.1, P(V w / ∈ V * ) ≤ exp(−ω(log 2 d))+ d −τ for the large constant τ > 0 as in Lemma B.1. Thus, incorporating the randomness in V w ,
P 1 T p w T t=1 p w (t) − 1 ≥ 1 √ p =P V w ∈ V * ∩ 1 T p w T t=1 p w (t) − 1 ≥ 1 √ p + P V w ∈ V * c ∩ 1 T p w T t=1 p w (t) − 1 ≥ 1 √ p ≤E I{V w ∈ V * }P 1 T p w T t=1 p w (t) − 1 ≥ δ V w + P( V w ∈ V * c ) ≤ exp(−ω(log 2 d))E I{V w ∈ V * } + P( V w ∈ V * c ) = exp(−ω(log 2 d)) + d −τ . (G.5)
Here the first inequality is due to the tower property of conditional expectation (conditioning on σ(V w )) and the fact that I{V w ∈ V * } is σ(V w )-measurable, and the second inequality is due to Eq. (G.4) and the fact that P(V w / ∈ V * ) ≤ exp(−ω(log 2 d)) + d −τ , as proved in Lemma B.1.
We analyze T −1 t=1 p (u)
w,w ′ (t) in the same way as above, combined with the results for T t=1 p w , where u is a constant distance. Recall that S (u)
w,w ′ = T −u t=1 p w,w ′ (t, t+u) is the conditionally expected co-occurrence counts, and N (u) w,w ′ is its stationary version, which is a function of V w . According to the coupling for joint (z t , z t+u ) and the fact that p (u) w,w ′ (t) is a function of (z t , z t+u ) taking value in [0, 1], we similarly have the following result.
Lemma G.4. Suppose Assumptions 4.1, 4.2 and 4.3 hold, and let p, p be the constant in Lemma B.1.
Denote the set of word vectors
V * * = {V w : p/d 2 ≤ min w,w ′ p (u) w,w ′ ≤ max w,w ′ p (u) w,w ′ ≤ p/d 2 } with constants p, p as in Lemma B.1. Then when V w ∈ V * * , it holds that
P S (u) w,w ′ N (u) w,w ′ − 1 ≥ 1 √ p V w = exp(−ω(log 2 (d))).
Furthermore, if we remove the condition that V w ∈ V * * , then P
(V w / ∈ V * * ) ≤ exp(−ω(log 2 d)) + d −τ and P S (u) w,w ′ N (u) w,w ′ − 1 ≥ 1 √ p = exp(−ω(log 2 (d))) + d −τ ,
for some large constant τ > 0 as in Lemma B.1.
Proof of Lemma G.4. This proof is essentially the same as the proof for Lemma G.3. Note that f (z t , z t+u ) = p w,w ′ (t, t + u) is a function of (z t , z t+u ) with ∥f ∥ S ≤ 1. Then with the choice of m = 4p 2 log d and λ = 1 − 2 e in Lemma D.6, the conditions in Lemma G.2 are satisfied. Therefore for T ϵ ≥ 2m/λ and all values of V w ,
P T −u t=1 p w,w ′ (t, t + u) − (T − u)p (u) w,w ′ ≥ (T − u)p (u) w,w ′ √ p V w ≤ 2 exp(− λ 2 ((T − u)ϵ − 2m λ ) 2 2(T − u)m 2 ), where ϵ = p (u)
w,w ′ / √ p, and as λ ≥ 1/4 we have
(T − u)ϵ 2m/λ = λ(T − u)p (u) w,w ′ 8p 2 log d ≥ (T − u)p (u) w,w ′ 32p 2 log d ≥ 2, since p (u)
w,w ′ = Ω(1/d 2 ) when V w ∈ V * * , and T = ω(p 5 d 4 log 2 d). Therefore (T − u)ϵ ≥ 2m/λ and
P T −u t=1 p w,w ′ (t, t + u) (T − u)p (u) w,w ′ − 1 ≥ 1 √ p V w ≤2 exp(− λ 2 (T − u)(p (u) w,w ′ ) 2 8 · 16p 5 log d ) = exp(−ω( T p 5 d 4 log 2 d )) = exp(−ω(log 2 d)),
since T = ω(p 5 d 4 log 4 d) by Assumption 4.3, and p (u) w,w ′ = Ω(1/d 2 ). Furthermore, incorporating the randomness in V w , similar to the reasoning in Eq.(G.5) we have
P T −u t=1 p w,w ′ (t, t + u) (T − u)p (u) w,w ′ − 1 ≥ 1 √ p ≤ exp(−ω(log 2 d))P(V w ∈ V * * ) + P(V w / ∈ V * * ) = exp(−ω(log 2 d)) + d −τ , where (T − u)p (u) w,w ′ = N (u)
w,w ′ and τ > 0 is a large constant as in Lemma B.1.
G.2 Concentration of Empirical Occurrences
The main result in this part is the concentration of empirical occurrences X w and X (u) w,w ′ , X
[q] w,w ′ defined in Section B.1 to their conditional expected counterparts. This is based on the fact that conditional on {c t } t>0 , X w (t) and X w,w ′ (t, t + u) can be viewed as independent Bernoulli random variables (or at least independent within some carefully filtered subsequence). Then we use a Chernoff bound for sum of independent Bernoulli r.v.s in Lemma G.5 to guarantee their concentrations.
Lemma G.5. [Chernoff bound for sum of Bernoulli r.v.'s.] Let X 1 , X 2 , · · · , X n be independent {0, 1}-valued random variables. Let S n = n i=1 X i whose expectation is E n = E(S n ). Then
P [S n ≥ (1 + δ)E n ] ≤ exp −δ 2 E n 2 + δ , δ > 0; P [S n ≤ (1 − δ)E n ] ≤ exp −δ 2 E n 2 , 0 < δ < 1.
Proof of Lemma G.5. Let µ i := E[X i ] for i = 1, 2, · · · , n. For t > 0 and δ > 0, by Chernoff bound we have
P [S n ≥ (1 + δ)E n ] ≤ exp (−t(1 + δ)E n ) n i=1 E [exp (tX i )] = exp (−t(1 + δ)E n ) n i=1 1 + µ i (e t − 1) ≤ exp (−t(1 + δ)E n ) n i=1 exp[µ i (e t − 1)]. (G.6) Let t = ln(1 + δ) > 0, then we have exp (−t(1 + δ)E n ) n i=1 exp[µ i (e t − 1)] = e δ (1 + δ) (1+δ) En ≤ exp E n δ − (1 + δ) 2δ 2 + δ = exp −δ 2 E n 2 + δ . (G.7) So P [S n ≥ (1 + δ)E n ] ≤ exp −δ 2 E n 2 + δ , δ > 0.
And a similar proof shows that
P [S n ≤ (1 − δ)E n ] ≤ e −δ (1 − δ) (1−δ) En ≤ exp −δ 2 E n 2 , 0 < δ < 1.
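A small Monte Carlo illustration (ours) of Lemma G.5 with heterogeneous success probabilities; the empirical tail frequencies should fall below the stated bounds.

```python
# Monte Carlo illustration (ours) of the Chernoff bounds in Lemma G.5.
import numpy as np

rng = np.random.default_rng(2)
n, reps, delta = 200, 20_000, 0.3
probs = rng.uniform(0.01, 0.2, size=n)     # heterogeneous Bernoulli success probabilities
E_n = probs.sum()
S = (rng.random((reps, n)) < probs).sum(axis=1)
print(np.mean(S >= (1 + delta) * E_n), np.exp(-delta**2 * E_n / (2 + delta)))  # upper tail
print(np.mean(S <= (1 - delta) * E_n), np.exp(-delta**2 * E_n / 2))            # lower tail
```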
In particular, conditional on {c t } t≥1 and V w , {X w (t)} t≥1 are independent with X w (t)|c t , V w ∼ Bernoulli(p w (t)).
Recall that X w = T t=1 X w (t) is the total occurrence of word w and S w = T t=1 p w (t) is its conditional expectation (conditional on discourse variables {c t } t>0 and naturally V w ). We have the following lemma for this setting.
Lemma G.6. Assume Assumptions 4.1, 4.2 and 4.3 hold. Let V * = {V w : p/d ≤ min w p w ≤ max w p w ≤ p/d}. Then for all V w ∈ V * , it holds that
P max w X w S w − 1 ≥ 1 √ p V w = exp(−ω(log 2 d)) + d −τ ′ , (G.8)
for a large constant τ ′ > 0. If we remove the condition that V w ∈ V * , then we have
P max w X w S w − 1 ≥ 1 √ p = exp(−ω(log 2 d)) + O(d −τ ′ ), (G.9)
for the large constant τ ′ > 0.
Proof of Lemma G.6. Applying the Chernoff bound in Lemma G.5 to X w (t)|{c t , V w }, which are independent Ber(p w (t)) variables conditional on {c t } t>0 , and recalling that
S w = T t=1 p w (t) = T t=1 E[X w (t)|{c t , V w }] = E[ T t=1 X w (t)|{c t , V w }] is a function of {c t } t>0 and V w , we have that P T t=1 X w (t) ≥ (1 + δ)S w {c t } t>0 , V w ≤ exp − δ 2 S w 2 + δ , δ > 0; P T t=1 X w (t) ≤ (1 − δ)S w {c t } t>0 , V w ≤ exp − δ 2 S w 2 , 0 < δ < 1.
(G.10)
Note that S w is a function of {c t } t>0 and V w . Combining the two bounds in Eq.(G.10) and using union bound, for δ ∈ (0, 1) we have
P X w S w − 1 ≥ δ {c t } t>0 , V w ≤ 2 exp − δ 2 S w 2 + δ . (G.11)
Define the event
F = | S w N w − 1| < 1 √ p , and p/d ≤ min w p w ≤ max w p w ≤ p/d ,
where p, p are constants as in Lemma B.1. Note that
F = | S w N w − 1| < 1 √ p ∩ V w ∈ V * .
therefore by Lemma G.3, we know for all V w ∈ V * as in Lemma G.3,
P(F |V w ) = E I{F }I{V w ∈ V * } V w = I{V w ∈ V * }E I{F } V w = E I{F } V w ≥ 1−exp(−ω(log 2 (d))).
Also V w ∈ V * with high probability, specifically, P(V w / ∈ V * ) ≤ exp(−ω(log 2 d)) + d −τ for some (large) constant τ > 0.
Also recall p w is a function of V w , N w is a function of V w and S w is a function of {c t } t>0 and V w , so I{F } is σ(V w , {c t } t>0 )-measurable. And on F , we have N w = T p w ≥ pT /d. Therefore,
P X w S w − 1 ≥ δ V w ≤P X w S w − 1 ≥ δ ∩ F V w + P(F c |V w ) =E E I{|X w − S w | ≥ δS w }I{F } {c t } t>0 , V w V w + P(F c |V w ) =E E I{|X w − S w | ≥ δS w } {c t } t>0 , V w I{F } V w + P(F c |V w ) ≤2E exp − δ 2 S w 2 + δ I{F } V w + P(F c |V w ).
(G.12)
Here the first inequality is just union bound. The two equalities that follow are due to tower property and the fact that I{F } is σ({c t } t>0 , V w )-measurable. The second inequality is due to
Eq.(G.11).
Furthermore, according to the definition of F , on F it holds that S w ≥ (1 − 1 √ p )N w , and N w ≥ T p/d. Continuing the bound in Eq.(G.12) we have
P X w S w − 1 ≥ δ V w ≤2E exp − δ 2 2 + δ (1 − 1 √ p ) 2 N 2 w I{F } V w + P(F c |V w ) ≤2E exp − δ 2 2 + δ (1 − 1 √ p ) 2 T 2 p 2 d 2 I{F } V w + P(F c |V w ) ≤2E exp − δ 2 2 + δ (1 − 1 √ p ) 2 T 2 p 2 d 2 V w + P(F c |V w ) ≤2 exp − δ 2 2 + δ (1 − 1 √ p ) 2 T 2 p 2 d 2 + P(F c |V w ).
(G.13)
Here the first two inequalities are due to the definition of F . The third inequality is due to monotonicity of expectation, and the last is because the quantity inside is a constant thus we remove the conditioning.
Letting δ = 1 √ p , then for all V w ∈ V * , according to Eq.(G.12) and Assumption 4.1, we have
P X w S w − 1 ≥ 1 √ p V w ≤2 exp − 1 2k + √ p (1 − 1 √ p ) 2 T 2 p 2 * d 2 + exp(−ω(log 2 d)) + d −τ = exp(−ω(log 2 d)) + d −τ .
(G.14)
Since the quantity does not depend on V w , combined with the fact that P(V w / ∈ V * ) ≤ exp(−ω(log 2 d))+ d −τ , according to Eq.(G.13) we have
P Vw,ct X w S w − 1 ≥ 1 √ p =E I{V w ∈ V * }P X w S w − 1 ≥ 1 √ p V w + P(V w / ∈ V * ) ≤ exp(−ω(log 2 d)) + O(d −τ ),
for large constant τ > 0. Finally, applying union bounds for all w and note that d exp(−ω(log 2 d))+ d · d −τ = exp(−ω(log 2 d)) + d −τ ′ for another large constant τ ′ > 0, we have the desired results in Eq.(G.8) and (G.9).
Similar to Lemma G.6, as S
(u) w,w ′ is also centered around N (u)
w,w ′ , the empirical co-occurrences also concentrate to their conditional expectations. Recall that X
(u) w,w ′ = T −u t=1 X w,w ′ (t, t + u)
is the total co-occurrence of words w, w ′ , and S (u)
w,w ′ = T t=1 p (u)
w,w ′ (t, t + u) is its conditional expectation. We have the following result for concentration of empirical co-occurrences.
Lemma G.7. Suppose Assumptions 4.1, 4.2 and 4.3 hold. Denote the set of word vectors V * * = {V w : p/d 2 ≤ min w,w ′ p (u) w,w ′ ≤ max w,w ′ p (u)
w,w ′ ≤ p/d 2 } with constants p, p as in Lemma B.1. Then for u = 1, . . . , q, for V w ∈ V * * , it holds that
P max w,w ′ X (u) w,w ′ S (u) w,w ′ − 1 ≥ 1 √ p V w = exp(−ω(log 2 d)) + d −τ ′ , P max w,w ′ X [q] w,w ′ S [q] w,w ′ − 1 ≥ 1 √ p V w = exp(−ω(log 2 d)) + d −τ ′ , (G.15)
for some large constant τ ′ > 0. If we remove the conditions on V w , then
P max w,w ′ X (u) w,w ′ S (u) w,w ′ − 1 ≥ 1 √ p = exp(−ω(log 2 d)) + O(d −τ ′ ), P max w,w ′ X [q] w,w ′ S [q] w,w ′ − 1 ≥ 1 √ p = exp(−ω(log 2 d)) + O(d −τ ′ ).
(G.16)
Note that, different from the occurrence of one single word at one single step, co-occurrence indicators X w,w ′ (t, t + u) are not independent conditional on {c t } t>0 , V w . To circumvent such difficulty, in our proof, we carefully choose a subsequence of {1, . . . , T } so that along the sequence, these Bernoulli random variables are conditionally independent. Apart from this modification, other arguments are essentially the same as the proof of Lemma G.6.
Proof of Lemma G.7. We first analyze for a fixed u ≤ q. Consider 2u subsequences of {X t,t+u (w, w ′ )} T −u t=1 denoted as
X (u,i) = {X 2ku+i,(2k+1)u+i (w, w ′ )} ⌊ T −u−i 2u ⌋ k=0
, i = 1, . . . , 2u.
Then X (u,i) are disjoint, and ∪ i X (u,i) = {X t,t+u (w, w ′ )} T −u t=1 . Moreover, for any fixed 1 ≤ i ≤ 2u, the elements within the subsequence X (u,i) are independent Bernoulli (p w,w ′ (t, t + u)) random variables conditional on {c t } t>0 and V w , where by definition p w,w ′ (t, t + u) = E[X w,w ′ (t, t + u)|{c t } t>0 , V w ].
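The splitting into 2u subsequences can be sketched as follows (our illustration): for a fixed offset i, consecutive start times differ by 2u, so the index pairs (t, t + u) within one subsequence are disjoint, and the corresponding indicators are conditionally independent given {c t } t>0 .

```python
# Sketch (ours) of the index splitting used in the proof of Lemma G.7.
def split_indices(T, u):
    """For each offset i = 1..2u, return the start times t of the pairs (t, t+u)."""
    subseqs = {}
    for i in range(1, 2 * u + 1):
        ts = []
        k = 0
        while 2 * k * u + i <= T - u:
            ts.append(2 * k * u + i)
            k += 1
        subseqs[i] = ts
    return subseqs

# Example: T = 20, u = 3 -> within each offset, the intervals [t, t+u] are disjoint.
print(split_indices(20, 3))
```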
Denote total co-occurrence, conditional expectations and stationary versions for these subsequences as
X (u,i) w,w ′ = ⌊ T −u−i 2u ⌋ k=0 X 2ku+i,(2k+1)u+i (w, w ′ ), S (u,i) w,w ′ = ⌊ T −u−i 2u ⌋ k=0 E[X 2ku+i,(2k+1)u+i (w, w ′ )|{c t } t>0 , V w ] = ⌊ T −u−i 2u ⌋ k=0 p w,w ′ (2ku + i, (2k + 1)u + i), N (u,i) w,w ′ = ⌊ T −u−i 2u ⌋ k=0 p (u) w,w ′ = ⌊ T − u − i 2u ⌋p (u) w,w ′ .
For each fixed u, i, define the event (for simplicity drop the superscript (u, i))
F = S (u,i) w,w ′ N (u,i) w,w ′ − 1 < 1 √ p ∩ V w ∈ V * * .
Since u, i are all bounded by fixed window size q, similar to the proof of Lemma G.4 modified for subsequence X (u,i) , we have for all V w ∈ V * * it holds that
P(F |V w ) = I{V w ∈ V * * }P S (u,i) w,w ′ N (u,i) w,w ′ − 1 < 1 √ p V w ≥ 1 − exp(−ω(log 2 d)),
with P(V w / ∈ V * * ) ≤ exp(−ω(log 2 d)) + d −τ for a large constant τ > 0 as in Lemma B.1. Also note that F ∈ σ(V w , {c t } t>0 ). Applying Lemma G.5 to conditionally independent Bernoulli random variables inside X (u,i) , we know that for δ ∈ (0, 1),
P X (u,i) w,w ′ S (u,i) w,w ′ − 1 ≥ δ V w ≤P X (u,i) w,w ′ S (u,i) w,w ′ − 1 ≥ δ ∩ F V w + P(F c |V w ) =E E I{|X (u,i) w,w ′ − S (u,i) w,w ′ | ≥ δS (u,i) w,w ′ }I{F } {c t } t>0 , V w V w + P(F c |V w ) =E E I{|X (u,i) w,w ′ − S (u,i) w,w ′ | ≥ δS (u,i) w,w ′ } {c t } t>0 , V w I{F } V w + P(F c |V w ) ≤2E exp − δ 2 S (u,i) w,w ′ 2 + δ I{F } V w + P(F c |V w ).
(G.17)
In (G.17), the first inequality is the union bound. The second line follows from the tower property of conditional expectations. The third line uses the fact that $F \in \sigma(\{c_t\}_{t>0}, V_w)$, and the last line is the Chernoff bound of Lemma G.5. Note that on $F$ we have $S^{(u,i)}_{w,w'} \ge (1 - \frac{1}{\sqrt{p}})N^{(u,i)}_{w,w'}$ and $N^{(u,i)}_{w,w'} \ge \lfloor (T-u-i)/(2u)\rfloor\,\underline{p}/d^2 \ge T\underline{p}/(5u^2 d^2)$ for appropriately large $d, T$. Continuing from (G.17),
$$\begin{aligned} \mathbb{P}\left(\left|\frac{X^{(u,i)}_{w,w'}}{S^{(u,i)}_{w,w'}} - 1\right| \ge \delta \,\middle|\, V_w\right) &\le 2\,\mathbb{E}\left[\exp\left(-\frac{\delta^2}{2+\delta}\left(1 - \frac{1}{\sqrt{p}}\right)N^{(u,i)}_{w,w'}\right)\mathbb{1}\{F\} \,\middle|\, V_w\right] + \mathbb{P}(F^c \mid V_w)\\ &\le 2\,\mathbb{E}\left[\exp\left(-\frac{\delta^2}{2+\delta}\left(1 - \frac{1}{\sqrt{p}}\right)\frac{T\underline{p}}{5u^2 d^2}\right)\mathbb{1}\{F\} \,\middle|\, V_w\right] + \mathbb{P}(F^c \mid V_w)\\ &\le 2\exp\left(-\frac{\delta^2}{2+\delta}\left(1 - \frac{1}{\sqrt{p}}\right)\frac{T\underline{p}}{5u^2 d^2}\right) + \mathbb{P}(F^c \mid V_w). \end{aligned}\tag{G.18}$$
Then, similarly to the reasoning leading to (G.14) in the proof of Lemma G.6, for $\delta = \frac{1}{\sqrt{p}}$ we have
$$\mathbb{P}\left(\left|\frac{X^{(u,i)}_{w,w'}}{S^{(u,i)}_{w,w'}} - 1\right| \ge \frac{1}{\sqrt{p}} \,\middle|\, V_w\right) \le 2\exp\left(-\frac{1}{2p + \sqrt{p}}\left(1 - \frac{1}{\sqrt{p}}\right)\frac{T\underline{p}}{5u^2 d^2}\right) + \exp(-\omega(\log^2 d)) + d^{-\tau} = \exp(-\omega(\log^2 d)) + d^{-\tau}. \tag{G.19}$$
Taking a union bound over the bound in (G.19) for all $u, i$ with $1 \le i \le 2u$, $1 \le u \le q$, for all $V_w \in \mathcal{V}^{**}$ we have
$$\mathbb{P}\left(\left|\frac{X^{(u,i)}_{w,w'}}{S^{(u,i)}_{w,w'}} - 1\right| < \frac{1}{\sqrt{p}} \ \ \forall u, i \,\middle|\, V_w\right) \ge 1 - \exp(-\omega(\log^2 d)) - O(d^{-\tau}). \tag{G.20}$$
Further taking a union bound over all pairs $w, w'$ we have
$$\mathbb{P}\left(\max_{w,w',u,i}\left|\frac{X^{(u,i)}_{w,w'}}{S^{(u,i)}_{w,w'}} - 1\right| \ge \frac{1}{\sqrt{p}} \,\middle|\, V_w\right) \le d^2 \exp(-\omega(\log^2 d)) + O(d^{-\tau+2}) = \exp(-\omega(\log^2 d)) + d^{-\tau'} \tag{G.21}$$
for some large constant $\tau' > 0$, since $\tau > 0$ can be taken sufficiently large.
Also note that $X^{(u)}_{w,w'} = \sum_{i=1}^{2u} X^{(u,i)}_{w,w'}$ and $S^{(u)}_{w,w'} = \sum_{i=1}^{2u} S^{(u,i)}_{w,w'}$, so by (G.19) together with a union bound over all $i = 1, \dots, 2u$ with fixed $u$ and over all pairs $(w, w')$, for all $V_w \in \mathcal{V}^{**}$ we have
$$\mathbb{P}\left(\max_{w,w'}\left|\frac{X^{(u)}_{w,w'}}{S^{(u)}_{w,w'}} - 1\right| \ge \frac{1}{\sqrt{p}} \,\middle|\, V_w\right) \le \exp(-\omega(\log^2 d)) + d^{-\tau'} \tag{G.22}$$
for some large constant $\tau'$ and appropriately large $d, T$. Similarly, since $X^{[q]}_{w,w'} = \sum_{u=1}^{q}\big[X^{(u)}_{w,w'} + X^{(u)}_{w',w}\big]$, we have
$$\mathbb{P}\left(\max_{w,w'}\left|\frac{X^{[q]}_{w,w'}}{S^{[q]}_{w,w'}} - 1\right| \ge \frac{1}{\sqrt{p}} \,\middle|\, V_w\right) \le \exp(-\omega(\log^2 d)) + d^{-\tau'} \tag{G.23}$$
for some large constant $\tau' > 0$ and appropriately large $d, T$. Removing the conditioning on $V_w$ and combining with the fact that $\mathbb{P}(V_w \notin \mathcal{V}^{**}) \le \exp(-\omega(\log^2 d)) + d^{-\tau}$, the same arguments as in Lemma G.6 yield the desired results in (G.16).
H Concentration of Stationary PMI
H.1 Concentration of Stationary Occurrence Probabilities
We provide a concentration property for the (stationary) occurrence probabilities $p_w$ and co-occurrence probabilities $p_{w,w'}$ for appropriately large $p$; the conditions required below hold with probability at least $1 - \exp(-\omega(\log^2 d))$.
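To make the object concrete: $p_w = \mathbb{E}_{c\sim\mathcal{C}}[\exp(\langle v_w, c\rangle)/Z(V,c) \mid V]$ can be approximated by straightforward Monte Carlo over discourse vectors drawn uniformly from the unit sphere. The sketch below is ours and uses arbitrary dimensions purely for illustration.

```python
import numpy as np

def stationary_occurrence_probs(V, n_samples=100000, seed=0):
    """Monte Carlo estimate of p_w = E_c[ exp(<v_w, c>) / Z(V, c) ], c uniform on the sphere.

    V has shape (d, p): one code vector per row.
    """
    rng = np.random.default_rng(seed)
    d, p = V.shape
    C = rng.standard_normal((n_samples, p))
    C /= np.linalg.norm(C, axis=1, keepdims=True)      # uniform on the unit sphere
    logits = C @ V.T                                   # shape (n_samples, d)
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)          # exp(<v_w, c>) / Z(V, c)
    return probs.mean(axis=0)                          # average over c

V = np.random.default_rng(1).standard_normal((50, 20))
p_w = stationary_occurrence_probs(V)
assert np.isclose(p_w.sum(), 1.0)
```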
Proof of Lemma H.2. Given all code vectors $V$ with $\sigma_{\min}\sqrt{p/2} \le \|v_w\|_2 \le \sigma_{\max}\sqrt{2p}$ for all $w$ and satisfying the condition in (F.1), by previous results we know $\mathbb{P}(F \mid V) \ge 1 - \exp(-\omega(\log^2 d))$ for
$$F = \left\{\left|\frac{Z(V, c)}{Z} - 1\right| \le \frac{1}{\sqrt{p}}\right\}, \tag{H.2}$$
where $Z(V, c) = \sum_w \exp(\langle v_w, c\rangle)$ and the probability measure corresponds to generating $c \sim \mathcal{C}$, the uniform distribution over the unit sphere. Since $Z(V, c) \ge \exp(\langle v_w, c\rangle)$,
$$p_w = \mathbb{E}_{c\sim\mathcal{C}}\left[\frac{\exp(\langle v_w, c\rangle)}{Z(V, c)} \,\middle|\, V\right] = \mathbb{E}_{c\sim\mathcal{C}}\left[\frac{\exp(\langle v_w, c\rangle)}{Z(V, c)}\mathbb{1}\{F\} \,\middle|\, V\right] + \mathbb{E}_{c\sim\mathcal{C}}\left[\frac{\exp(\langle v_w, c\rangle)}{Z(V, c)}\mathbb{1}\{F^c\} \,\middle|\, V\right], \tag{H.3}$$
where
$$\mathbb{E}_{c\sim\mathcal{C}}\left[\frac{\exp(\langle v_w, c\rangle)}{Z(V, c)}\mathbb{1}\{F^c\} \,\middle|\, V\right] \le \mathbb{E}[\mathbb{1}\{F^c\} \mid V] = \exp(-\omega(\log^2 d)).$$
(The analogous statements for the co-occurrence probabilities, stated in Lemmas H.3 and H.4 below, read $\max_{w,w'}\big|\log(p_{w,w'}) - \tfrac{\Sigma_{ww} + \Sigma_{w'w'} + 2\Sigma_{ww'}}{2} - 2\log Z\big| \le (6 + 2C_\tau\|\Sigma\|_{\max})\sqrt{\tfrac{\log d}{p}}$ and $\max_{w,w'}\big|\log(p^{(u)}_{w,w'}) - \tfrac{\Sigma_{ww} + \Sigma_{w'w'} + 2\Sigma_{ww'}}{2} - 2\log Z\big| \le (7\sqrt{2u} + 2C_\tau\|\Sigma\|_{\max})\sqrt{\tfrac{\log d}{p}}$.)
The last assertion follows from the fact that all entries $\Sigma_{ww'}$ are bounded below and above, and that $d \le Z \le cd$ for some universal constant $c$ under our model assumptions.
Below is a standard result for the convergence of sample covariance to true covariance, which we include and prove here for completeness and for accurate specification of the constants in tail bounds. It serves as the intermediate step for the convergence of stationary PMI to covariance matrix.
Lemma H.5 (Concentration of $\langle v_w, v_{w'}\rangle/p$ to the covariance). For a fixed pair of words $w, w'$, denote the covariances in our Gaussian graphical model by $\Sigma_{ww}$, $\Sigma_{w'w'}$ and $\Sigma_{ww'}$. Then for fixed $a > 0$ we have
$$\mathbb{P}\left(\left|\frac{\langle v_w, v_{w'}\rangle}{p} - \Sigma_{ww'}\right| \ge a\sqrt{\frac{\log d}{p}}\right) \le \exp\left(-\frac{a^2\log d}{4\cdot 432\,\Sigma_{ww}\Sigma_{w'w'}}\right). \tag{H.18}$$
Taking $a = 12\sqrt{3(\tau+2)}\,\|\Sigma\|_{\max}$ for any constant $\tau > 0$ yields that, with probability at least $1 - d^{-\tau}$,
$$\max_{w,w'}\left|\frac{\langle v_w, v_{w'}\rangle}{p} - \Sigma_{ww'}\right| \le 12\|\Sigma\|_{\max}\sqrt{3(\tau+2)}\sqrt{\frac{\log d}{p}}. \tag{H.19}$$
Proof of Lemma H.5. In our setting, the components of the code vectors $v_w$ are i.i.d. copies of Gaussian random variables with variances $\Sigma_{ww}$, $\Sigma_{w'w'}$ and covariance $\Sigma_{ww'}$. Applying Lemma H.6 and noting that $(1+|\rho|)^2 \le 4$ yields the desired tail probability bound. Moreover, since the $\Sigma_{ww}$ are bounded in both directions, taking a union bound over the $d^2$ pairs $(w, w')$ and noting that $d^2\exp(-(\tau+2)\log d) = d^{-\tau}$, we obtain the uniform control of the covariance deviation.
Below we prove a result for the convergence of sample covariance of two Gaussian samples to the true covariance and provide precise constants for tails, which are of use in Lemma H.5.
Lemma H.6 (Concentration of sample covariance). Let $(X_i, Y_i)_{i=1}^{p}$ be i.i.d. samples of a bivariate Gaussian random vector $(X, Y)$ with $\mathbb{E}[X] = \mathbb{E}[Y] = 0$, $\mathrm{Var}(X) = \sigma_x^2$, $\mathrm{Var}(Y) = \sigma_y^2$ and $\mathrm{Cov}(X, Y) = \Sigma_{xy} = \rho\sigma_x\sigma_y$. Then the sample covariance $\widehat{\Sigma}_{xy} := \frac{1}{p}\sum_{i=1}^{p} X_i Y_i$ satisfies the tail bound
$$\mathbb{P}\left(\left|\widehat{\Sigma}_{xy} - \Sigma_{xy}\right| > \delta\right) \le 4\exp\left(-\frac{p\delta^2}{432(1+|\rho|)^2\sigma_x^2\sigma_y^2}\right) \tag{H.20}$$
for any $\delta \in (0, 12\sigma_x\sigma_y)$.
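A quick numerical sanity check of this tail bound can be run as follows; the sketch is ours, the constants and sample sizes are arbitrary, and the bound is expected to be loose because of the explicit constant 432.

```python
import numpy as np

def sample_cov_tail(p, sigma_x, sigma_y, rho, delta, n_rep=20000, seed=0):
    """Empirical P(|hat Sigma_xy - Sigma_xy| > delta) for bivariate Gaussian samples."""
    rng = np.random.default_rng(seed)
    cov = np.array([[sigma_x**2, rho * sigma_x * sigma_y],
                    [rho * sigma_x * sigma_y, sigma_y**2]])
    samples = rng.multivariate_normal([0.0, 0.0], cov, size=(n_rep, p))
    hat = (samples[:, :, 0] * samples[:, :, 1]).mean(axis=1)   # 1/p sum X_i Y_i
    return np.mean(np.abs(hat - rho * sigma_x * sigma_y) > delta)

p, sigma_x, sigma_y, rho, delta = 400, 1.0, 1.0, 0.3, 0.2
empirical = sample_cov_tail(p, sigma_x, sigma_y, rho, delta)
bound = 4 * np.exp(-p * delta**2 / (432 * (1 + abs(rho))**2 * sigma_x**2 * sigma_y**2))
print(empirical, bound)   # the empirical tail should sit well below the bound
```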
Therefore for 4ϵ/b ≤ 1, i.e., δ ≤ 12σ x σ y , we have
P p i=1 Z i ≥ 2kϵ ≤ exp(− 8kϵ 26b
Theorem 4.6. Under Assumptions 4.1, 4.2 and 4.3, with any $\alpha \in (0, \epsilon/2]$ in Algorithm 1, if $\Delta(Q) \ge C_0\sqrt{\log d/p}$ where $C_0$ is defined in Proposition 4.4, then $\widehat{\mathcal{G}} = \mathcal{G}$ with probability at least $1 - \exp(-\omega(\log^2 d)) - O(d^{-\tau})$ for some large constant $\tau > 0$.
Figure 1 :Figure 2 :
12GASTRIC BYPASS AND VOLUME REDUCTION PRC:DEBRIDEMENT OF WOUND, INFECTION OR BURN MED:OMEPRAZOLE PRC:CREATION, REVISION AND REMOVAL OF ARTERIOVENOUS FISTULA OR VESSEL−TO−VESSEL CANNULA FOR DIALYSIS DIS:RETINAL VASCULAR CHANGES AND ABNOMALITIES DIS:ERECTILE DYSFUNCTION [ED] Word cloud of selected features of rheumatoid arthritis and type 2 diabetes. Different colors represent different types of codes. Clustering of laboratory codes for rheumatoid arthritis and type 2 diabetes 4 385-399.Beam, A. L.,Kompa, B., Schmaltz, A., Fried, I., Weber, G., Palmer, N., Shi, X., Cai, T. and Kohane, I. S. (2020). Clinical concept embeddings learned from massive sources of multimodal medical data. In Pacific Symposium onBiocomputing, vol. 25. Bernardo, J., Bayarri, M., Berger, J., Dawid, A., Heckerman, D., Smith, A., West, M. et al. (2003). The variational bayesian em algorithm for incomplete data: with application to scoring graphical model structures. Bayesian statistics 7 210.Bodenreider, O. (2004). The unified medical language system (umls): integrating biomedical terminology. Nucleic acids research 32 D267-D270. Bordes, A., Usunier, N., Garcia-Duran, A., Weston, J. and Yakhnenko, O. (2013). Translating embeddings for modeling multi-relational data. In Neural Information Processing Systems (NIPS). Brat, G. A., Weber, G. M., Gehlenborg, N., Avillach, P., Palmer, N. P., Chiovato, L., Cimino, J., Waitman, L. R., Omenn, G. S., Malovini, A. et al. (2020). International electronic health record-derived covid-19 clinical course profiles: the 4ce consortium. medRxiv . Bunea, F., Giraud, C., Luo, X., Royer, M., Verzelen, N. et al. (2020). Model assisted variable clustering: minimax-optimal recovery and algorithms. The Annals of Statistics 48 111-137. Cai, T., Liu, W. and Luo, X. (2011a). A constrained ℓ 1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association 106 594-607. Cai, T. T., Liu, W. and Luo, X. (2011b). A constrained ℓ 1 minimization approach to sparse precision matrix estimation. Journal of the American Statistical Association 106 594-607. Cai, T. T., Liu, W. and Zhou, H. H. (2016). Estimating sparse precision matrix: Optimal rates of convergence and adaptive estimation. Ann. Statist. 44 455-488. Chandrasekaran, V., Parrilo, P. A. and Willsky, A. S. (2010). Latent variable graphical model selection via convex optimization. In 2010 48th Annual Allerton Conference on Communication, Control, and Computing (Allerton). IEEE. Che, Z., Kale, D., Li, W., Bahadori, M. T. and Liu, Y. (2015). Deep computational phenotyping. In SIGKDD. Choi, E., Bahadori, M. T., Schuetz, A., Stewart, W. F. and Sun, J. (2016a). Doctor ai: Predicting clinical events via recurrent neural networks. In Machine Learning for Healthcare Conference. Lam, C. and Fan, J. (2009). Sparsistency and rates of convergence in large covariance matrix estimation. Ann. Statist. 37 4254-4278. Ledoux, M. (1999). Concentration of measure and logarithmic sobolev inequalities. Séminaire de probabilités de Strasbourg 33 120-216. Lin, Y., Liu, Z., Sun, M., Liu, Y. and Zhu, X. (2015). Learning entity and relation embeddings for knowledge graph completion. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 29. Lipton, Z. C., Kale, D. C., Elkan, C. and Wetzel, R. (2015). Learning to diagnose with lstm recurrent neural networks. arXiv preprint arXiv:1511.03677 . Liu, H. and Wang, L. (2017). Tiger: A tuning-insensitive approach for optimally estimating gaussian graphical models. Electron. 
J. Statist. 11 241-294. Ma, F., Chitta, R., Zhou, J., You, Q., Sun, T. and Gao, J. (2017). Dipole: Diagnosis prediction in healthcare via attention-based bidirectional recurrent neural networks. In SIGKDD. Miotto, R., Li, L., Kidd, B. A. and Dudley, J. T. (2016). Deep patient: an unsupervised representation to predict the future of patients from the electronic health records. Scientific reports . Nguyen, D. Q., Nguyen, T. D., Nguyen, D. Q. and Phung, D. (2017). A novel embedding model for knowledge base completion based on convolutional neural network. arXiv preprint arXiv:1712.02121 . Nickel, M., Tresp, V. and Kriegel, H.-P. (2011). A three-way model for collective learning on multi-relational data. In Icml. Pennington, J., Socher, R. and Manning, C. D. (2014). Glove: Global vectors for word representation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP). Rajkomar, A., Oren, E., Chen, K., Dai, A. M., Hajaj, N., Hardt, M., Liu, P. J., Liu, X., Marcus, J., Sun, M. et al. (2018). Scalable and accurate deep learning with electronic health records. NPJ Digital Medicine . Rothman, A. J., Bickel, P. J., Levina, E. and Zhu, J. (2008). Sparse permutation invariant covariance estimation. Electron. J. Statist. 2 494-515. Schulam, P., Wigley, F. and Saria, S. (2015). Clustering longitudinal clinical marker trajectories from electronic health data: Applications to phenotyping and endotype discovery. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 29. Shang, C., Tang, Y., Huang, J., Bi, J., He, X. and Zhou, B. (2019). End-to-end structureaware convolutional networks for knowledge base completion. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33. Steindel, S., Loonsk, J. W., Sim, A., Doyle, T. J., Chapman, R. S. and Groseclose, S. L. (2002). Introduction of a hierarchy to loinc to facilitate public health reporting. In Proceedings of the AMIA Symposium. American Medical Informatics Association. van Handel, R. (2016). Apc 550: Probability in high dimension. Lecture Notes. Princeton University. Retrieved from https://web. math. princeton. edu/rvan/APC550. pdf on December 21 2016. Wang, Z., Zhang, J., Feng, J. and Chen, Z. (2014). Knowledge graph embedding by translating on hyperplanes. In Proceedings of the AAAI Conference on Artificial Intelligence, vol. 28. Wu, C., Zhao, H., Fang, H., Deng, M. et al. (2017). Graphical model selection with latent variables. Electronic Journal of Statistics 11 3485-3521. Wu, P., Gifford, A., Meng, X., Li, X., Campbell, H., Varley, T., Zhao, J., Carroll, R., Bastarache, L., Denny, J. C., Theodoratou, E. and Wei, W.-Q. (2019). Mapping icd-10 and icd-10-cm codes to phecodes: workflow development and initial evaluation. JMIR Medical Informatics 7 e14325. Yuan, M. (2010). High dimensional inverse covariance matrix estimation via linear programming 11 2261-2286. Yuan, M. and Lin, Y. (2007). Model selection and estimation in the gaussian graphical model. Biometrika 94 19-35. Zhao, T. and Liu, H. (2014). Calibrated precision matrix estimation for high-dimensional elliptical distributions. IEEE transactions on information theory 60 7874-7887.
Proof of Proposition 4.4. By Lemma B.1, with probability at least $1 - \exp(-\omega(\log^2 d)) - 2d^{-\tau}$, the conditions in (B.5)–(B.7) hold and the bounds in (B.8) and (B.9) hold simultaneously for all $w, w'$. In this case, from the analysis of the stationary occurrence expectations in (B.2), (B.3) and (B.4), we have PMI
m⌋ . The bound in (D.3) is dominated by (D.3), as stated in the proof of Lemma D.4. Hence with same arguments we have an upper bound 2(1 − λ) ⌊T /m⌋ ≤ (1 − λ) ⌊T /m⌋ for T /m ≥ 1.
Lemma G.3. Suppose Assumptions 4.1, 4.2 and 4.3 hold, and let $\underline{p}, \overline{p}$ be the constants in Lemma B.1.
Lemma G.7. Consider a fixed window size $q$. Assume Assumptions 4.1, 4.2 and 4.3 hold. Recall the set of word vectors
w ′ . The proof is generally based on Theorem 2.2 in Arora et al. (2016) but with some modifications for our model. We first cite Lemma A.5 from Arora et al. (2016) here. Lemma H.1 (Lemma A.5 in Arora et al. (2016)). Let v ∈ R p be a fixed vector with norm ∥v∥ 2 ≤ κ √ p for some constant κ. Then for random vector c ∼ C, we have that log E[exp(⟨v, c⟩)]
let W i = V 2 i − 2(1 − ρ), exactly the same result applied to −
and $\widehat{O} = (\widehat{O}_1, \dots, \widehat{O}_K)$. The latent code graph can thus be estimated from the support of $\widehat{O}$. By the model assumption in Eq. (2.8), we can estimate $\Gamma$ and $\Omega$ by
1. The matrix class $\mathcal{U}_s(M, \rho)$ is frequently considered in the literature on inverse covariance matrix estimation (Cai et al., 2016). Assumption 4.1 of $O \in \mathcal{U}_s(M, \rho)$ bounds the variances of the code SEVs from both sides, thus ensuring that the norms of the SEVs concentrate around $O(k)$. The bounds on the norms of $O$ also prevent the SEVs in different clusters from being too correlated. Assumption 4.2 ensures that different code clusters have a sufficiently large distance and that the number of codes in the same cluster is controlled by $\sqrt{d}/p$. We also require the corpus size $T$ to be appropriately large (polynomial in $p$, $d$) so that code occurrences are able to reveal the desired properties of the underlying feature SEVs, with diminishing random deviations.
Remark 4.2. Assumption 4.3 requires that the code SEVs are relatively compact, with dimension $p = O(\sqrt{d})$
matrix PMI. The result is a consequence of the concentration of the empirical code occurrences to their expectations conditional on $\{c_t\}$ and $V$, while the latter further concentrate to the true PMI due to the good mixing properties of $\{c_t\}$. A detailed proof of Proposition 4.3 is given in Appendix B.2.
Proposition 4.3. Suppose Assumptions 4.1, 4.2 and 4.3 hold. For a fixed window size $q \ge 2$,
are independent, and thus they can apply concentration inequalities to the code vectors. However, in our latent graphical model, the word vectors $V$ are dependent and their proof can no longer be applied. We introduce the log-Sobolev inequality and a whitening trick to establish the rate of the PMI under the Gaussian graphical model prior; this yields Proposition 4.4, with proof given in Appendix B.3.
Proposition 4.4. Under Assumptions 4.1, 4.2 and 4.3, with probability at least $1 - 3d^{-\tau}$, for sufficiently large $(k, d, T)$ and constant $C_0 = 5 + (7\sqrt{2q} + 48\sqrt{3(\tau+4)})\|\Sigma\|_{\max}$, we have
$$\|\mathrm{PMI} - \Sigma\|_{\max} \le C_0\sqrt{\log d/p}. \tag{4.2}$$
Remark 4.5. Arora et al. (2016) only established the consistency of the PMI matrix without
showing the concrete statistical rate. On the contrary, we establish the exact statistical rates of
the PMI estimator with respect to d, k and T which involves finer analysis on the hidden Markov
process. Moreover, the analysis of Arora et al. (2016) assumes that the prior of the word vectors
Finally, we construct $A$ by evenly assigning the groups with $G_k = \{(k-1)m+1, \dots, km\}$ for $k = 1, \dots, K$ and generate the diagonal entries of $\Gamma$ from $\mathrm{Unif}[0.25, 0.5]$ to form the final $\Sigma$, where $m = d/K$ and we choose $K = 10, 25, 50$. After generating the underlying graph, we generate $Z$ and the code vectors $V$, and simulate a corpus of length $T$ of the word sequence with the discourse process $\{c_t\}$ specified in (2.1). The PMI matrix is then calculated with window size $q = 10$.
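The following sketch (ours) outlines one way such a simulation pipeline could be coded, under simplifying assumptions: a block covariance $\Sigma$ with placeholder off-diagonal entries, Gaussian code vectors, a generic slowly drifting unit-sphere discourse process standing in for (2.1), and an empirical PMI computed from window-$q$ co-occurrences. It is meant as an illustration of the steps, not a reproduction of the paper's exact settings.

```python
import numpy as np

def simulate_pmi(d=100, K=10, p=50, T=20000, q=10, seed=0):
    rng = np.random.default_rng(seed)
    # block covariance Sigma: K equal groups; diagonal entries from Unif[0.25, 0.5]
    m = d // K
    Sigma = np.zeros((d, d))
    for k in range(K):
        idx = slice(k * m, (k + 1) * m)
        Sigma[idx, idx] = 0.1                     # within-block covariance (placeholder)
    Sigma[np.diag_indices(d)] = rng.uniform(0.25, 0.5, size=d)
    # code vectors: the p coordinates are i.i.d. N(0, Sigma) draws across codes
    V = rng.multivariate_normal(np.zeros(d), Sigma, size=p).T   # shape (d, p)
    # slowly drifting discourse vectors c_t on the unit sphere (placeholder dynamics)
    c = rng.standard_normal(p); c /= np.linalg.norm(c)
    tokens = np.empty(T, dtype=int)
    for t in range(T):
        c = c + 0.05 * rng.standard_normal(p); c /= np.linalg.norm(c)
        logits = V @ c
        prob = np.exp(logits - logits.max()); prob /= prob.sum()
        tokens[t] = rng.choice(d, p=prob)
    # empirical PMI from co-occurrences within window q
    counts = np.zeros((d, d))
    for u in range(1, q + 1):
        np.add.at(counts, (tokens[:-u], tokens[u:]), 1.0)
    counts = counts + counts.T
    occ = np.bincount(tokens, minlength=d).astype(float)
    joint = counts / counts.sum()
    marg = occ / occ.sum()
    with np.errstate(divide="ignore"):
        pmi = np.log(joint) - np.log(np.outer(marg, marg))
    return Sigma, pmi

Sigma, pmi = simulate_pmi()
```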
Name  Setting
G1    Independent Graph with c = 0.5.
G2    Independent Graph with c = 2.
G3
Table 1: The scenarios of the graphs generating the precision matrix.
The threshold $\alpha$ in Algorithm 1 for estimating $\mathcal{G}$ is set as $\alpha = c\cdot\sqrt{\log d/k}$, where $c$ is tuned over the
grid {0.1, 0.2, . . . , 2} to have the most stable cluster assignment (quantified by Rand index specified
later). For $\widehat{O}$ estimated via the CLIME-type estimator as in Eq. (3.3), we suppose the clusters are recovered perfectly and use the true partition $\mathcal{G}$ to calculate the SPPMI. We set the tuning parameter in (3.3) as $\lambda = c\cdot\sqrt{\log d/k}$, where $c$ is chosen over the grid $\{0.1, 0.2, \dots, 2\}$ to be the most stable choice, i.e., the one with the smallest entry-wise change from the previous grid value.
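A minimal sketch of this "most stable over the grid" selection rule (our paraphrase, written against a generic estimator callback rather than the CLIME estimator in (3.3)) could look as follows.

```python
import numpy as np

def most_stable_choice(estimate, grid):
    """Pick the grid value whose estimate changes least (entry-wise) from the previous one.

    `estimate` is any callable mapping a tuning value to a matrix estimate.
    """
    fits = [estimate(c) for c in grid]
    changes = [np.max(np.abs(fits[i] - fits[i - 1])) for i in range(1, len(fits))]
    best = int(np.argmin(changes)) + 1           # index of the later, more stable fit
    return grid[best], fits[best]

# usage with a toy estimator: soft-thresholding a fixed matrix at level c
rng = np.random.default_rng(1)
M = rng.standard_normal((20, 20))
grid = [0.1 * (i + 1) for i in range(20)]
c_star, fit = most_stable_choice(lambda c: np.sign(M) * np.maximum(np.abs(M) - c, 0.0), grid)
```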
Table 2 summarizes the clustering accuracy of the estimator. The clustering accuracy is generally high, with RI above 90% across all settings. With a fixed d, the accuracy tends to be slightly
higher with larger K, which corresponds to a smaller number of codes per code concept group. We do not observe a big difference between the different configurations for O.
Table 2: Averaged Rand index for cluster recovery.
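For completeness, the Rand index used to quantify cluster recovery can be computed in a few lines; the sketch below (ours) simply counts the pairs on which the estimated and true partitions agree.

```python
from itertools import combinations

def rand_index(labels_true, labels_pred):
    """Fraction of pairs on which the two clusterings agree (same cluster / different cluster)."""
    pairs = list(combinations(range(len(labels_true)), 2))
    agree = sum(
        (labels_true[i] == labels_true[j]) == (labels_pred[i] == labels_pred[j])
        for i, j in pairs
    )
    return agree / len(pairs)

# toy usage
print(rand_index([0, 0, 1, 1, 2], [0, 0, 1, 2, 2]))
```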
Table 3: Averaged empirical relative error of the precision matrix O without using the clustering method in Algorithm 1.
K    G1        G2        G3        G4        G5        G6
d = 500, k = 1000
∥·∥max
50   3.47 %    9.76 %    31.83 %   28.61 %   23.33 %   27.73 %
25   2.89 %    7.35 %    19.16 %   21.00 %   28.30 %   25.92 %
10   1.29 %    4.36 %    21.64 %             22.28 %   15.81 %
∥·∥F
50   8.62 %    8.14 %    41.35 %   40.61 %   28.58 %   25.27 %
25   7.22 %    6.10 %    23.32 %   27.83 %   18.55 %   18.52 %
10   6.83 %    5.62 %    24.14 %   16.88 %   12.70 %   10.29 %
F-score
50   73.47 %   76.48 %   72.89 %   71.17 %   67.29 %   71.52 %
25   72.90 %   73.10 %   66.21 %   69.97 %   66.73 %   67.55 %
10   71.29 %   72.11 %   69.36 %   74.12 %   74.42 %   68.35 %

K    G1        G2        G3        G4        G5        G6
d = 1000, k = 2000
∥·∥max
50   12.15 %   21.56 %   34.89 %   35.54 %   26.20 %   27.12 %
25   2.16 %    6.28 %    24.91 %   27.61 %   28.29 %   25.01 %
10   1.82 %    5.74 %    20.11 %   14.79 %   19.29 %   14.06 %
∥·∥F
50   11.70 %   20.42 %   61.14 %   57.99 %   40.15 %   48.64 %
25   3.02 %    6.76 %    38.72 %   46.03 %   24.07 %   29.40 %
10   3.66 %    5.81 %    18.54 %   19.26 %   11.10 %   13.66 %
F-score
50   73.66 %   75.69 %   73.61 %   76.79 %   75.96 %   64.21 %
25   77.16 %   71.45 %   70.36 %   72.22 %   74.25 %   71.81 %
10   76.82 %   73.75 %   73.87 %   72.95 %   71.42 %   65.03 %
Table 4: Averaged empirical relative error of the precision matrix O using the clustering method in Algorithm 1.
Rheumatoid arthritis    Type 2 diabetes
The next two lemmas provide key results for the concentration of the stationary PMI.
Lemma H.2. Suppose Assumptions 4.1, 4.2 and 4.3 hold. Given that all code vectors are bounded with $\sigma_{\min}\sqrt{p/2} \le \|v_w\|_2 \le \sigma_{\max}\sqrt{2p}$ for all $w$ and the condition in (F.1) holds, the stationary occurrence probabilities $p_w = \mathbb{E}_{c\sim\mathcal{C}}\big[\frac{\exp(\langle v_w, c\rangle)}{Z(V,c)} \,\big|\, V\big]$ satisfy
$$\max_w\left|\log(p_w) - \frac{\|v_w\|_2^2}{2p} - \log Z\right| \le \frac{2}{\sqrt{p}}, \tag{H.1}$$
3and E c∼C [exp(⟨v w , c⟩)I{F }|V ] (1 + ϵ z )Z ≤ E c∼C exp(⟨v w , c⟩) Z(V, c) I{F } V ≤ E c∼C [exp(⟨v w , c⟩)I{F }|V ] (1 − ϵ z )Z . (H.4)Hereafter we omit the footscript indicating c ∼ C. Note thatE[exp(⟨v w , c⟩)I{F }|V ] = E[exp(⟨v w , c⟩)|V ] − E[exp(⟨v w , c⟩)IF C |V ] ≤ E[exp(⟨v w , c⟩)|V ],where by Cauchy-Schwarz inequality,E[exp(⟨v w , c⟩)IF C |V ] ≤ E[exp(⟨2v w , c⟩)|V ] · E[IF C |V ]Here E[IF C |V ] = exp(−ω 2 (log 2 d)) and by Lemma H.1, since 2∥v w ∥ 2 ≤ 2σ max √ 2p, we haveE[exp(⟨2v w , c⟩)|V ] = exp(O(∥v∥ 2 2 /(2k) + 1/k)) = O(1),thus E[exp(⟨v w , c⟩)|V ] − exp(−ω(log 2 d)) ≤ E[exp(⟨v w , c⟩)I{F }|V ] ≤ E[exp(⟨v w , c⟩)|V ]. (H.6)1/2 .
(H.5)
Appendix A Proofs on the Model PropertiesIn this section, we prove the identifiability of the vector-valued graphical model. (1 + ϵ z )Z ≤E c∼C exp(⟨v w , c⟩)where E[exp(⟨v w , c⟩)|V ] = exp( ∥vw∥ 2 2 2p + O(1/p)) and ∥v w ∥ 2 2 /2k ≥ σ 2 min /2 = Ω(1). Thusandfor appropriately large p. The last assertion in the lemma follows union bound as well as the fact that P( σ √ 2 √ p ≤ ∥v w ∥ 2 ≤ 2 √ p, ∀w) = 1 − exp(−ω(log 2 d)) and probability bound in Lemma F.2.A similar result involving more techniques holds for stationary co-occurrence probabilities p w,w ′ , stated as follows.Lemma H.3. Given the code vectors with σ min p/2 ≤ ∥v w ∥ 2 ≤ σ max √ 2p for all w and condition of (F.1) holds. And suppose Assumptions 4.1, 4.2 and 4.3 hold. Then the stationary co-occurrencefor appropriately large p. Furthermore, it holds with probability at least 1 − exp(−ω(log 2 d)).Proof of Lemma H.3. Suppose Assumptions 4.1, 4.2 and 4.3 hold. Given all code vectors V where σ min p/2 ≤ ∥v w ∥ 2 ≤ σ max √ 2p for all w and satisfies the condition in (F.1), by previous results we know P(F |V ) = 1 − exp(−ω(log 2 d)) for constant C ≤ 5 andwhere Z(V, c) = w exp(⟨v w , c⟩) and the probability measure corresponds to generating (c, c ′ ) ∼ D 1 . By definition, given v w and v w ′ ,where since both two ratios are less than 1, we have(H.14)AndAlso, the second term in Equation (H.14) and (H.15) satisfies thatTherefore we have+O(1/p)).(H.16)Combining results in Equations (H.11)-(H.16) we havefor appropriately large p (and also d). Therefore we have the boundAnd the last assertion of the lemma follows union bound applied to the fact that σ √ 2 √ p ≤ ∥v w ∥ 2 ≤ 2 √ p for all w happens with probability at least 1−exp(−ω(log 2 d)) and condition in Equation (F.1)is satisfied with probability at least 1 − exp(−ω(log 2 d)).A more general result for all u is as follows:Lemma H.4. Suppose Assumptions 4.1, 4.2 and 4.3 hold. Given the code vectors with σ min p/2 ≤ ∥v w ∥ 2 ≤ σ max √ 2p for all w and condition of (F.1) holds. Then the stationary co-occurrence prob-for appropriately large p. Furthermore, the aforementioned conditions hold with probability at least 1 − exp(ω(log 2 d)).Proof. The proof for this lemma is exactly the same as that for Lemma H.3, except that the bound. Then (H.17) holds for appropriately large p, and the whole event has the same probability bound.H.2 Concentration of Stationary PMI to Covariance MatrixThe main result in this subsection is the convergence of stationary PMI to the covariance matrix, which is essentially because of the convergence of ⟨v w , v w ′ ⟩/p to covariance matrix Σ, stated in which at most consist of d 2 pairs, we know that with probability at least 1 − d −τ which corresponds to the process of generating code vectors, we have that for constant C τ = 12 3(τ + 4),Thus we haveMoreover, from results in Lemma H.2, H.3 and H.4, with probability (incorporating the randomness in the underlying code discourse variables) at least 1 − exp(−ω(log 2 d)),Combining the above results, we know that with probability at least 1−exp(−ω(log 2 d))−d −τ where universal constant C only depends on the ρ in Assumption 4.1,for constant C τ = 12 3(τ + 4),. By union bound and triangle inequality,where by Gaussianity, E[U 2m i ] = 2 m (1 + ρ) m (2m − 1)!!. ThusNote that for m ≥ 3, we have 1 + (2m − 1)!! ≤ (2m)!! 3 , thus for m ≥ 3,(2m)!! 3 2 2m−1 (1 + ρ) m = 2 3m−1 3 (1 + ρ) m · m!.We apply Bernstein's inequality for X = Z i with σ 2 = E[Z 2 i ] = 8(1 + ρ) 2 , and b such that m! 
2 σ 2 b m−2 = 4m!(1 + ρ) 2 b m−2 ≥ 2 3m−1 (1 + ρ) m 3 m!,where we may choose b = 24(1 + ρ). Therefore by Bernstein's inequality, with ϵ = δ/(σ x σ y ),∈ [0, 1), the tail bound becomes1 + 4ϵ/b − 1 1 + 1 + 4ϵ/b ) = exp(− 8kϵ 2 b 2 (1 + 1 + 4ϵ/b) 2 ).
Choi, E., Bahadori, M. T., Searles, E., Coffey, C., Thompson, M., Bost, J., Tejedor-Sojo, J. and Sun, J. (2016b). Multi-layer representation learning for medical concepts. In SIGKDD.
Choi, E., Bahadori, M. T., Sun, J., Kulas, J., Schuetz, A. and Stewart, W. (2016c). Retain: An interpretable predictive model for healthcare using reverse time attention mechanism. In NIPS.
Choi, M. J., Tan, V. Y., Anandkumar, A. and Willsky, A. S. (2011). Learning latent tree graphical models. Journal of Machine Learning Research 12 1771-1812.
Choi, Y., Chiu, C. Y.-I. and Sontag, D. (2016d). Learning low-dimensional representations of medical concepts. AMIA Summits on Translational Science Proceedings.
d'Aspremont, A., Banerjee, O. and El Ghaoui, L. (2008). First-order methods for sparse covariance selection. SIAM J. Matrix Anal. Appl. 30 56-66.
Du, L., Wang, Y., Song, G., Lu, Z. and Wang, J. (2018). Dynamic network embedding: An extended approach for skip-gram based network embedding. In IJCAI, vol. 2018.
Du, X. and Ghosal, S. (2019). Multivariate gaussian network structure learning. Journal of Statistical Planning and Inference 199 327-342.
Eisenach, C., Bunea, F., Ning, Y. and Dinicu, C. (2020). High-dimensional inference for cluster-based graphical models. Journal of Machine Learning Research 21 1-55.
Eisenach, C. and Liu, H. (2019). Efficient, certifiably optimal clustering with applications to latent variable graphical models. Mathematical Programming 176 137-173.
Fan, J., Feng, Y. and Wu, Y. (2009). Network exploration via the adaptive lasso and scad penalties. Ann. Appl. Stat. 3 521-541.
Friedman, J. H., Hastie, T. J. and Tibshirani, R. (2008). Sparse inverse covariance estimation with the graphical lasso. Biostatistics 9(3) 432-441.
Glynn, P. W. and Ormoneit, D. (2002). Hoeffding's inequality for uniformly ergodic markov chains. Statistics & Probability Letters 56 143-146.
Healthcare Cost and Utilization Project (2017). Clinical classification software. Agency for Healthcare Research and Quality.
Kolar, M., Liu, H. and Xing, E. P. (2014). Graph estimation from multi-attribute data. Journal of Machine Learning Research.
Keat Eng
Hng
CHARACTERISATION OF FLIP PROCESS RULES WITH THE SAME TRAJECTORIES
May 2023
Garbe, Hladký, Šileikis and Skerman recently introduced a general class of random graph processes called flip processes and proved that the typical evolution of these discrete-time random graph processes correspond to certain continuous-time deterministic graphon trajectories. We obtain a complete characterization of the equivalence classes of flip process rules with the same graphon trajectories. As an application, we characterize the flip process rules which are unique in their equivalence classes. These include several natural families of rules such as the complementing rules, the component completion rules, the extremist rules, and the clique removal rules.
Introduction
Graphs are mathematical structures which underpin the study of numerous real-life settings, many of which evolve in time according to a stochastic rule. This leads us to random graph processes, the study of which was initiated by Erdős and Rényi in their seminal paper [5] from 1960. In the ensuing decades, a variety of random graph processes have been introduced and studied extensively. One example is the triangle removal process, which was introduced by Bollobás and Erdős [4] in 1990. The process begins with the initial graph G 0 = K n , and in each step ℓ we obtain the graph G ℓ by deleting the edges of a uniformly random triangle in G ℓ−1 . The process terminates when G ℓ is triangle-free. There is a long line of research on random graph processes with triangle-free final graphs, which has led to progressive breakthroughs on lower bounds for the Ramsey numbers R (3, t) in [8,2,3,6].
Consider the following modification of the triangle removal process. Instead of deleting the edges of a uniformly random triangle, in each step we sample a uniformly random triple of distinct vertices, delete the edges of any triangle induced and do nothing otherwise. Note that this has the form of a local replacement rule: triples of distinct vertices represent 'localities' where we replace triangles by empty graphs and do nothing otherwise. Furthermore, observe that we may recover the original triangle removal process from the modified process by ignoring all steps that do nothing. Indeed, conditioned on sampling a triangle, every step of the modified process deletes a uniformly random triangle.
Generalising the idea of local replacements, Garbe, Hladký, Šileikis and Skerman [7] recently introduced a general class of random graph processes called flip processes. Before we describe these processes, we first introduce some notation and terminology. Write N for the set of positive integers and N 0 for the set N ∪ {0}. For a ∈ N 0 write [a] for the set {1, . . . , a} and [a] 0 for [a] ∪ {0}. All our graphs are simple and undirected. Let k ∈ N and let H k be the set of all labelled graphs on the vertex set [k]. A rule is a matrix R = (R F,H ) F,H∈H k ∈ [0, 1] H k ×H k such that for each F ∈ H k we have H∈H k R F,H = 1. We call k the order of the rule R. A flip process with rule R is a random graph process (G ℓ ) ℓ∈N 0 where we start with an initial graph G 0 on the vertex set [n] with n ≥ k and in step ℓ ∈ N we obtain the graph G ℓ from G ℓ−1 as follows. First, we sample an ordered tuple v = (v i ) i∈ [k] of distinct vertices in [n] at random. Next, we define the drawn graph F ∈ H k by ij ∈ E(F ) if and only if v i v j ∈ E(G ℓ−1 ) and generate the replacement graph H ∈ H k according to the probability distribution (R F,H ) H∈H k . Finally, we replace G ℓ−1 [v] with H to obtain G ℓ from G ℓ−1 .
Research supported by Czech Science Foundation project GX21-21762X and with institutional support RVO:67985807.
Flip processes were studied in [7] through the lens of dense graph limits [9]. Let (Ω, π) be an atomless probability space with an implicit separable sigma-algebra. Write W ⊆ L ∞ (Ω 2 ) for the set of kernels, that is, bounded symmetric measurable functions. Write W 0 for the set of graphons, that is, the elements of W whose range is a subset of [0,1]. An important metric on W is the cut norm distance d given by d (U, W ) = sup S,T ⊆Ω S×T (U − W ) dπ 2 . Let G be a graph and (Ω i ) i∈V (G) be a partition of Ω into parts of equal measure. We write W G for the graphon which is equal to I {ij∈E(G)} on Ω i × Ω j and call it a graphon representation of G.
A key insight of Garbe, Hladký, Šileikis and Skerman is that the typical evolution of flip processes over quadratic timescales is highly concentrated along graphon-valued trajectories. This is formalized in their Transference Theorem, which we state below in a simplified form. It transfers problems about the typical evolution of random discrete-time graph-valued flip processes to problems about deterministic continuous-time graphon-valued trajectories. Consequently, understanding these graphon-valued trajectories is key to understanding the typical evolution of flip processes. Theorem 1.1 (Theorem 5.1 in [7]). For every rule R there is a function Φ R : W 0 ×[0, ∞) → W 0 such that given ε, T > 0 and a flip process (G ℓ ) ℓ∈N 0 with rule R where G 0 has vertex set [n], with probability at least 1 − e −Ω(n 2 ) we have d (W G ℓ , Φ R (W G 0 , ℓ n 2 )) < ε for all ℓ ∈ [T n 2 ] 0 . We often write Φ t R W instead of Φ R (W, t) for the sake of brevity. We remark that in [7] the function Φ R is constructed for each rule R by first explicitly defining a velocity operator V R : W 0 → W and then defining the trajectory Φ t R W t∈[0,∞) starting at each W ∈ W 0 to be the unique solution to the Banach-space-valued differential equation
d dt Φ t R W = V R Φ t R W with the initial condition Φ 0 R W = W .
More details are given in Section 2. A rule R is ignorant if the replacement graph distribution (R F,H ) H∈H k is independent of the drawn graph F ∈ H k . It was observed in [1, Section 4] that two ignorant rules R 1 and R 2 of the same order and with the same expected replacement edge count d R = H∈H k R F,H e(H) have the same trajectories, that is, Φ R 1 = Φ R 2 . We say that two rules R 1 and R 2 of orders k 1 and k 2 respectively have the same flip process distributions if for all finite graphs G on n ≥ max{k 1 , k 2 } vertices the flip process started at G with rule R 1 has the same distribution as the flip process started at G with rule R 2 . Note that two rules with the same trajectories could have different flip process distributions. Indeed, the lens of dense graph limits captures the macroscopic profile of flip processes but is insensitive to lower order fluctuations. Consider, for example, ignorant rules R 1 and R 2 of order 4 such that for R 1 the replacement graph is either complete or empty with probability 1 2 each and for R 2 the replacement graph is a copy of K 1,3 . Clearly, they have different replacement behaviour. On the other hand, both rules are ignorant rules with the same expected replacement edge count d R 1 = d R 2 = 3, so they have the same trajectories. This leads us to the following two related questions. In this paper, we focus on the dense graph limits perspective of flip processes and fully resolve Question 1.2, thereby providing a complete characterization of equivalence classes of flip process rules with the same trajectories. We defer discussion of Question 1.3 to Section 5.1.
We introduce some notation, terminology and concepts. An ordered set is a set S equipped with a linear order ≤ S . For a set S write S 2 = {{i, j} ∈ S 2 : i = j} and S (2) = {(i, j) ∈ S 2 : i = j}. For the sake of brevity we often write ij for an unordered pair {i, j}. A rooted graph is a pair (F, R) consisting of a graph F and an ordered set R ⊆ V (F ) of roots; we often write F R for the rooted graph (F, R). ) is an ordered pair, we often write F a,b instead of (F, (a, b)) or F (a,b) . For k ∈ N write G k := H k × [k] (2) for the collection of rooted graphs F a,b where F ∈ H k and (a, b) ∈ [k] (2) . Let us explain the ideas motivating our main result. Let R be a rule of order k. Heuristically, a graphon W represents the adjacency matrix of a large graph G on n vertices and W (x, y) represents the edge density of an auxiliary bipartite pair (X, Y ) of tiny subsets of V (G) at (x, y) ∈ Ω 2 . Since X and Y are tiny, the rate of change V R W (x, y) of the edge density of (X, Y ) is dominated by replacements where the drawn graph has exactly one vertex in each of X and Y . We sample a copy of a labelled graph F ∈ H k with distinct vertices a, b ∈ [k] in X and Y respectively with probability p F a,b ,W (x, y) = |X||Y |n −2 · T F a,b W (x, y) (see (1)), and the expected change in edge density on the pair (X, Y ) due to the replacement of F is given by
Set V (F R ) = V (F ), v(F R ) = v(F ), ρ(F R ) = R, r(F R ) = |R|, V ′ (F R ) = V (F ) \ R and v ′ (F R ) = v(F ) − |R|. When R = (a, bZ F a,b ,R |X||Y | where Z F a,b ,R = H∈H k R F,H · I {ab∈E(H)} − I {ab∈E(F )} .
Hence, by the law of total expectation, ignoring lower order terms, and rescaling time by n 2 because (Ω, π) is a probability space rather than a set of n vertices, we find that V R W (x, y) has the form
F a,b ∈G k Z F a,b ,R · T F a,b W (x, y) .
Let us briefly comment on the k = 1 case. One obviously cannot find distinct vertices a, b ∈ [k], so the sum above is vacuous and trivially always zero; this is consistent with the observation that the only rule of order 1 is a rule which does nothing.
Since T F a,b W (x, y) is invariant under the relabelling of the vertices of F a,b , we may consider equivalence classes of rooted graphs F a,b related by relabelling; write J k for the set of such equivalence classes for G k . For each J ∈ J k we set T J W (x, y) to be the unique value of
T F a,b W (x, y) for triples F a,b ∈ J and set Z J,R = F a,b ∈J Z F a,b ,R . Hence, we have V R W (x, y) = J∈J k Z J,R · T J W (x, y). Now consider two rules R 1 and R 2 of the same order k. Suppose that Z J,R 1 = Z J,R 2 for all J ∈ J k . Then we have V R 1 W (x, y) = V R 2 W (x, y)
for all graphons W and (x, y) ∈ Ω 2 , which implies that R 1 and R 2 have the same trajectories. Our result states that the converse also holds, thereby giving a characterization of the equivalence classes of flip process rules of the same order with the same trajectories. More details are given in Section 2.
Theorem 1.4. The following are equivalent for rules R 1 and R 2 of the same order k.
(A1) We have Z J,R 1 = Z J,R 2 for all J ∈ J k . (A2) Φ R 1 = Φ R 2 .
Theorem 1.4 characterizes equivalence classes of rules of the same order with the same trajectories. Let us now demonstrate how it may be applied to determine whether two rules R 1 and R 2 of different orders have the same trajectories. Say the orders of R 1 and R 2 are k 1 and k 2 respectively; by symmetry we may assume k 1 < k 2 . Let R * 1 be the rule of order k 2 where the replacement graph H is obtained from the drawn graph F by replacing the subgraph of F induced on [k 1 ] according to rule R 1 and leaving all other pairs unchanged. Since R * 1 acts exclusively on pairs on [k 1 ] and on those pairs it acts in accordance with R 1 , it follows that R * 1 and R 1 have the same flip process distributions and have the same trajectories. Let F a,b be a rooted graph with F ∈ H k 2 and distinct a, b
∈ [k 2 ]. If a, b ∈ [k 1 ], then we have Z F a,b ,R * 1 = Z G a,b ,R 1 with G being the subgraph of F induced on [k 1 ]. Otherwise, we have Z F a,b ,R * 1 = 0. Now we apply Theorem 1.4 to determine whether Φ R * 1 = Φ R 2 , which is equivalent to Φ R 1 = Φ R 2 .
All in all, this gives us a complete characterization of equivalence classes of flip process rules with the same trajectories.
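In computational terms, the resulting criterion is easy to check for small orders: group the rooted graphs $F_{a,b}$ into isomorphism classes $J$ and compare the sums $Z_{J,R}$ of the two rules. The brute-force sketch below is ours (graphs on $[k]$ are encoded as sets of pairs, rules as dictionaries of replacement distributions) and is only meant to illustrate condition (A1) of Theorem 1.4.

```python
import itertools

def same_trajectories(rule1, rule2, k):
    """Check condition (A1): equal Z_{J,R} over all isomorphism classes J of rooted graphs."""
    pairs = list(itertools.combinations(range(k), 2))
    graphs = [frozenset(s) for r in range(len(pairs) + 1)
              for s in itertools.combinations(pairs, r)]

    def z(rule, F, a, b):
        ind = lambda H: 1.0 if (min(a, b), max(a, b)) in H else 0.0
        return sum(prob * ind(H) for H, prob in rule[F].items()) - ind(F)

    def canon(F, a, b):
        # canonical form of the rooted graph (F, (a, b)) under relabellings fixing the root order
        best = None
        for perm in itertools.permutations(range(k)):
            if perm[a] != 0 or perm[b] != 1:
                continue
            img = frozenset((min(perm[i], perm[j]), max(perm[i], perm[j])) for i, j in F)
            best = img if best is None or sorted(img) < sorted(best) else best
        return best

    totals = {}
    for F in graphs:
        for a, b in itertools.permutations(range(k), 2):
            J = canon(F, a, b)
            z1, z2 = totals.get(J, (0.0, 0.0))
            totals[J] = (z1 + z(rule1, F, a, b), z2 + z(rule2, F, a, b))
    return all(abs(z1 - z2) < 1e-9 for z1, z2 in totals.values())

# usage: an order-2 rule that flips an edge off / a non-edge on with fixed probabilities
e, o = frozenset({(0, 1)}), frozenset()
r1 = {o: {e: 0.3, o: 0.7}, e: {o: 0.5, e: 0.5}}
print(same_trajectories(r1, r1, 2))
```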
Many combinatorially interesting rules turn out to be symmetric deterministic rules. Write S k for the symmetric group on [k]. We work with the following natural action of S k on H k . Given σ ∈ S k and F ∈ H k we define σ ·F to be the unique graph on [k] such that for all i, j ∈ [k] we have σ(i)σ(j) ∈ E(σ · F ) if and only if ij ∈ E(F ). A rule R of order k is symmetric if for all F, H ∈ H k and σ ∈ S k we have R σ·F,σ·H = R F,H and deterministic if for each F ∈ H k there is a unique G ∈ H k such that R F,H = I {H=G} . Indeed, naturally arising rules likely exhibit symmetric behaviour on isomorphic graphs. For example, naturally arising rules of order 3 are likely to treat the cherry graph K 1,2 symmetrically, no matter whether the middle vertex is labelled 1, 2 or 3. The class of symmetric deterministic rules contains many natural families of rules which were highlighted in [7] and studied in [1]. These include the complementing rules, the component completion rules, the extremist rules, and the clique removal rules.
Another interesting class of rules is the collection of unique rules, that is, rules such that no other rule of the same order has the same trajectories. As an application of our main theorem, we prove that the unique rules are precisely the symmetric deterministic rules and the rules of order 2. Observe that the rules of order 2 form a very simple family of symmetric rules which is parametrized by just two parameters: the probabilities of replacing an edge by a non-edge and a non-edge by an edge. The remainder of this paper is organized as follows. In Section 2 we formally state our main theorem, Theorem 2.3, after introducing the necessary notation, terminology and concepts. In Section 3 we state the main lemmas for our proof of Theorem 2.3 and give our proof of Theorem 2.3. We provide the proofs of the main lemmas in Section 3.3. In Section 4 we provide our proof of Theorem 1.5.
Main Theorem
In this section we give the full statement of our main result. We begin with some definitions we need to formally introduce the concept of a graphon-valued trajectory. For a rooted graph F a,b we define an operator T F a,b : W → L ∞ (Ω 2 ) by setting
(1) T F a,b W (x, y) := ij∈E(F ) W (x i , x j ) ij / ∈E(F ) (1 − W (x i , x j )) dπ V ′ (F R ) ,
with x a = x and x b = y fixed and the integral taken over (
x i ) i∈V ′ (F R ) ∈ Ω V ′ (F R ) .
The velocity operator for a rule R of order k is the operator V R : W → W given by
(2) V R W := F a,b ∈G k T F a,b W H∈H k R F,H · I {ab∈E(H)} − I {ab∈E(F )} .
Now we state the definition of a graphon-valued trajectory.
d dt Φ t R W = V R Φ t R W with the initial condition Φ 0 R W = W .
The following theorem guarantees that there is a unique trajectory with maximal domain starting at each kernel and establishes some useful facts about trajectories. Theorem 2.2 (Theorem 4.5 in [7]). The following hold for any rule R and any kernel W ∈ W.
(i) There is an open interval D R,W ⊆ R containing 0 and a trajectory Φ · R W : D R,W → W starting at W such that any other trajectory starting at W is a restriction of Φ · R W to a subinterval of D R,W .
(ii) For any u ∈ D R,W we have D R,Φ u W = {t ∈ R : t+u ∈ D R,W } and for every t
∈ D R,Φ u W we have Φ t R Φ u R W = Φ t+u R W . (iii) If W ∈ W 0 is a graphon, then the set L R,W := {t ∈ D R,W : Φ t R W ∈ W 0 } is a closed interval containing [0, ∞).
We introduce some further notation, terminology and concepts. Let F and H be graphs. A function φ :
V (F ) → V (H) is relation-preserving if for all u, v ∈ V (F ) we have uv ∈ E(F ) if and only if φ(u)φ(v) ∈ E(H). We say that F and H are isomorphic if there is an isomorphism from F to H, that is, a relation-preserving bijective function φ : V (F ) → V (H). Let F R and H S be rooted graphs. A function φ : V (F ) → V (H) is root-respecting if for all v ∈ V (F ) we have φ(v) ∈ S if and only if v ∈ R; it is root-order-preserving if for all u, v ∈ R we have φ(u) ≤ S φ(v) if and only if u ≤ R v. We say that F R and H S are isomorphic if there is an isomorphism from F R to H S , that is, a root-order-preserving root-respecting isomorphism from F to H; write F R ∼ = H S . Let Aut(F R ) denote the set of automorphisms of F R , that is, isomorphisms from F R to F R .
Let k ∈ N. Write J k for the set of isomorphism classes of the collection G k . For J ∈ J k and a rule R of order k, set
(3) a J,R = F a,b ∈J H∈H k R F,H · I {ab∈E(H)} − I {ab∈E(F )} .
We say that two rules R 1 and
R 2 have the same trajectories if for all W ∈ W we have D R 1 ,W = D R 2 ,W and furthermore for all t ∈ D R 1 ,W we have Φ t R 1 W = Φ t R 2 W .
Our main theorem, which we state below, characterizes rules of the same order with the same trajectories. Together with the 'lifting' procedure relating rules of different orders described just below Theorem 1.4, this fully resolves Question 1.2.
Theorem 2.3. The following are equivalent for rules R 1 and R 2 of the same order k ∈ N.
(S1) For all J ∈ J k we have a J,R 1 = a J,R 2 . (S2) V R 1 = V R 2 on W 0 . (S3) V R 1 = V R 2 on W . (S4) R 1 and R 2 have the same trajectories.
Proving Theorem 2.3
The goal in this section is to provide a proof of Theorem 2.3. Let us briefly describe our proof strategy here. Our proof scheme is (S1) ⇒ (S3) ⇒ (S4) ⇒ (S2) ⇒ (S1). We note that rooted induced densities are invariant under rooted graph isomorphism, so for J ∈ J k we may set T J to be the unique operator equal to T F a,b for all F a,b ∈ J and the first implication readily follows. The second implication follows from the definition and uniqueness of trajectories (Definition 2.1 and Theorem 2.2), while the third implication follows from the smoothness of the velocity operator (Lemmas 3.5 and 3.6). Indeed, the most difficult step is to show that (S2) implies (S1). The goal is to show that the collection I = {T J : J ∈ J k } of operators is linearly independent.
We apply a couple of key insights to achieve this goal. Firstly, we focus on parametrized graphon representations of graphs because evaluating induced densities at these graphons discretizes them into sums of monomials indexed over root-order-preserving root-respecting relationpreserving functions between rooted graphs. Secondly, we focus on so-called twinfree versions of rooted graphs because Lemma 3.1 tells us that the aforementioned functions are necessarily injective when the graphs involved are so-called twinfree; in particular, this induces a partial order on the set of twinfree versions of rooted graphs. Combining these two insights leads us to Lemma 3.3, which provides an explicit quantification of induced densities at graphon representations of graphs with a unique monomial assigned to each relevant rooted graph. Finally, the desired linear independence follows from the linear independence of monomials in polynomial functions (see Lemma 3.7).
The remainder of this section is organized into subsections as follows. First, we introduce the key concepts and the associated key lemmas in Section 3.1. Next, we provide the proof of Theorem 2.3 in Section 3.2. Finally, we conclude with proofs of the key lemmas in Section 3.3.
Key concepts and main lemmas.
In this subsection we introduce some notation, the key concepts and our main lemmas. The proofs of the lemmas are given later in Section 3.3.
For a graph F and v ∈ V (F ) we write N F (v) for the neighbourhood of v in F . Let F R be a rooted graph. We say that u, v ∈ V (F R ) are twins if N F (u) = N F (v). Write U (F R ) for the set of vertices in V ′ (F R ) with a twin in R. Set V * (F R ) = V ′ (F R ) \ U (F R ), u(F R ) = |U (F R )| and v * (F R ) = v ′ (F R ) − u(F R ). We say that F R is twinfree if for all u, v ∈ V (F ) we have that |R ∩ {u, v}| = 1 or that N (u) = N (v) implies u = v.
In particular, we permit pairs of twins with one root vertex and one non-root vertex.
Our first lemma deals with twin vertices in the context of root-respecting relation-preserving functions between rooted graphs. Lemma 3.1. The following hold for any root-respecting relation-preserving function φ :
V (F ) → V (H) from a rooted graph F R to a rooted graph H S . (i) If u, v ∈ V (F ) are such that φ(u) and φ(v) are twins in H, then u and v are twins in F . (ii) If F R is twinfree, then φ is injective.
Let (S, ≤ S ) be a finite ordered set with a partition S = {S i } i∈I indexed by a finite set I. Writing a i for the least element of S i according to ≤ S , the linear order ≤ I on I induced by ≤ S is given by i ≤ I j if and only if a i ≤ S a j . For a rooted graph F R we define an equivalence relation
∼ tf(R) on V (F ) as follows. For u, v ∈ V (F ) we have u ∼ tf(R) v if u = v or both N (u) = N (v) and |R ∩ {u, v}| = 1 hold. Write C(F R ) for the set of equivalence classes of ∼ tf(R) . For v ∈ V (F R ) we write v tf(R) to denote the unique equivalence class of ∼ tf(R) containing v. For A ⊆ V (F R ) we set A tf(R) = {a tf(R) : a ∈ A}. The twinfree version of F R is the rooted graph F tf(R) on C(F R )
whose edges are precisely the pairs IJ for which there are i ∈ I and j ∈ J such that ij ∈ E(F ) and whose roots are precisely the elements of R tf(R) with the linear order induced by ≤ R . We often omit the set R of roots when we intend R = ∅. For example, we often write ∼ tf and F tf instead of ∼ tf(∅) and F tf(∅) respectively.
m i ) i∈V (F ) ∈ N V (F ) 0
the m-blowup of F R is the rooted graph F R (m) obtained from F (m) by designating the set of roots to be i∈R S i ordered so that each S i forms an interval and for all u ∈ S i and v ∈ S j we have that u ≤ ρ(F R (m)) v implies i ≤ R j. Note that the linear order is not specified within each interval S i ; any order on each interval S i would give the same rooted graph up to isomorphism, so we may conveniently avoid making these arbitrary decisions without detriment. The following lemma gives some useful properties about twinfree versions.
Lemma 3.2. The following hold for any rooted graph
F R . (i) For all i ∈ I ∈ C(F R ) we have N F (i) = L∈N F tf(R) (I) L. (ii) F tf(R) is twinfree. (iii) There is a vector m ∈ N V (F tf(R) ) such that F R ∼ = F tf(R) (m). (iv) |R tf | = |R tf(R) |. (v) v * (F tf(R) ) = v(F tf ) − |R tf |. Let z = (z i ) i∈V (G) ∈ [0, 1] V (G) satisfy i∈V (G) z i = 1 and (Ω i ) i∈V (G)
be a partition of Ω such that Ω i has measure z i . We call the graphon W z G which is equal to I {ij∈E(G)} on Ω i × Ω j a z-scaled graphon representation of G. For a graph G and an ordered subset X ⊆ V (G) the X-rooted version of G is the rooted graph H T obtained from G by blowing up each vertex in X by a factor of 2 and collecting the duplicate vertices in a set T of roots with the linear order induced by ≤ X . For (x, y) ∈ Ω 2 the (x, y)-rooted version of G for partition (
Ω i ) i∈V (G) is the X-rooted version of G where X = {v ∈ V (G) : {x, y} ∩ Ω v = ∅}
and the first element w in the linear order induced by ≤ R satisfies x ∈ Ω w . We shall write x rt (resp. y rt ) for the root vertex whose twin w (resp. z) in X satisfies x ∈ Ω w (resp. y ∈ Ω z ).
The following lemma gives an explicit quantification of induced densities at parametrized graphon representations of graphs. The availability of such an explicit quantification is central to our proof of Theorem 2.3.
Lemma 3.3. Let F R be a twinfree rooted graph and G be a twinfree graph. Let z = (z i ) i∈V (G) ∈ [0, 1] V (G) satisfy i∈V (G) z i = 1 and W z G be a z-scaled graphon representation of G with partition (Ω i ) i∈V (G) . Let m = (m i ) i∈V (F ) ∈ N V (F ) satisfy i∈R m i = 2. Let (x, y) ∈ Ω 2 and let H T be the (x, y)-rooted version of G for partition (Ω i ) i∈V (G) . The following hold. (i) Suppose that |R| > |T |. Then we have T F R (m) W z G (x, y) = 0. (ii) Suppose that v * (F R ) ≥ v * (H T ) and |R| ≥ |T |. Then we have T F R (m) W z G (x, y) = φ : V (F )→V (H) i∈V (F )\R z m i φ(i) ,
where the sum is over all root-order-preserving root-respecting relation-preserving func-
tions φ : V (F ) → V (H) from F R to H T . Let A(k, ℓ, R, S) be the set of vectors n = (n i ) i∈[ℓ]∪R∪S with non-negative integer entries such that n i is positive for i ∈ [ℓ] ∪ R, i∈[ℓ]∪S n i = k and i∈R n i = 2. Let F R be a rooted graph on [ℓ] ∪ S ∪ R. Define an equivalence relation ∼ v F R on A(k, ℓ, R, S) as follows. For m, n ∈ A(k, ℓ, R, S) we have m ∼ v F R n if there exists φ ∈ Aut(F R ) such that m i = n φ(i) . Write B(k, F R ) for the set of equivalence classes of ∼ v F R .
The following lemma characterizes isomorphic blowups of twinfree rooted graphs.
Lemma 3.4. For each twinfree rooted graph F R we have m ∼ v F R n if and only if F R (m) ∼ = F R (n).
We state two lemmas which give bounds on trajectories and the velocity operator. For k ∈ N set (k) 2 = k(k − 1) and C k = (k) 2 2 2 ( k 2 )−1 . Lemma 3.5 (Lemma 4.8 in [7]). Given a rule R of order k ∈ N and U, W ∈ W 0 we have
$$\|V_R U - V_R W\|_\infty \le C_k\,\|U - W\|_\infty.$$
Lemma 3.6 (Lemma 4.15 in [7]). Given a rule R of order k ∈ N, W ∈ W 0 and δ > 0 we have
$$\left\|\frac{\Phi^\delta_R W - W}{\delta} - V_R W\right\|_\infty \le C_k(k)_2\,\delta.$$
Finally, we state a technical lemma which establishes the linear independence of monomials in polynomial functions. We first introduce the necessary notation, terminology and concepts. A k-variable polynomial function is a function f : R k → R given by
$$f(x_1, \dots, x_k) = \sum_{m = (m_i)_{i\in[k]}\in\mathbb{N}_0^k} a_m \prod_{i\in[k]} x_i^{m_i}, \tag{4}$$
where only finitely many of the coefficients a m ∈ R are nonzero. A k-variable polynomial function is nonzero if it has at least one nonzero coefficient a m . Otherwise, we say that it is zero. The degree of a nonzero k-variable polynomial function f is
deg(f ) := max i∈[k] m i : m = (m i ) i∈[k] ∈ N k 0 , a m = 0 .
A zero of a k-variable polynomial function f is a vector (x i ) i∈ [k] ∈ R k such that f (x 1 , . . . , x k ) = 0. The following lemma, which is a special case of the Schwartz-Zippel Lemma [10,11], asserts that each nonzero multivariate polynomial function is not identically zero when taking values in sufficiently large sets.
Proof of Theorem 2.3.
In this subsection we prove Theorem 2.3. As mentioned at the beginning of this section, we use the proof scheme (S1) ⇒ (S3) ⇒ (S4) ⇒ (S2) ⇒ (S1). We shall present the proof of each implication separately.
Proof of (S1) ⇒ (S3) . Let F a,b , G
c,d ∈ J ∈ J k , W ∈ W and (x, y) ∈ Ω 2 . Fix an isomorphism φ from F a,b to G c,d . Set x a = x c = x and x b = x d = y. By (1) we have T F a,b W (x, y) = ij∈E(F ) W (x i , x j ) ij / ∈E(F ) (1 − W (x i , x j )) dπ V ′ (F a,b ) = φ(i)φ(j)∈E(H) W (x i , x j ) φ(i)φ(j) / ∈E(H) (1 − W (x i , x j )) dπ V ′ (F a,b ) = ij∈E(H) W (x i , x j ) ij / ∈E(H) (1 − W (x i , x j )) dπ V ′ (G c,d ) = T G c,d W (x, y) .
Hence, for J ∈ J k we may write T J for the unique operator equal to T F a,b for all F a,b ∈ J. For a rule R of order k we rewrite (2) to obtain
$$V_R W = \sum_{J\in\mathcal{J}_k} a_{J,R}\cdot T_J W. \tag{5}$$
It follows that (S1) implies (S3).
Proof of (S3) ⇒ (S4). Suppose that V R 1 = V R 2 on W and fix W ∈ W. By Theorem 2.2(i) we may pick trajectories Φ · R 1 W and Φ · R 2 W starting at W which have maximal domains D R 1 ,W and D R 2 ,W , respectively, that are open intervals containing 0. Let the function f : D R 1 ,W ∪D R 2 ,W → (W, · ∞ ) be given by (6) f
$$f(t) = \begin{cases}\Phi^t_{R_1}W & \text{if } t\in D_{R_1,W},\\ \Phi^t_{R_2}W & \text{if } t\in D_{R_2,W}\setminus D_{R_1,W}.\end{cases}$$
. Now the combination of (S3) and Definition 2.1 implies that f is a trajectory starting at W for both R 1 and R 2 , so by Theorem 2.2(i) we have D R 1 ,W = D R 2 ,W and Φ t R 1 W = f (t) = Φ t R 2 W for all t ∈ D R 1 ,W . Hence, R 1 and R 2 have the same trajectories.
Proof of (S4) ⇒ (S2). Suppose that we may pick W ∈ W 0 such that
$V_{R_1}W \neq V_{R_2}W$. Set $\varepsilon = \|V_{R_1}W - V_{R_2}W\|_\infty > 0$, $\delta = \frac{\varepsilon}{5C_k}$ and $\eta_0 = \frac{\varepsilon}{5C_k(k)_2}$. Let $\eta\in(0,\eta_0]$ and $U\in\mathcal{W}_0$ satisfy $\|U - W\|_\infty\in[0,\delta]$.
By the triangle inequality for · ∞ and Lemmas 3.5 and 3.6, we have
ε = V R 1 W − V R 2 W ∞ ≤ Φ η R 1 U − Φ η R 2 U η ∞ + Φ η R 1 U − U η − V R 1 U ∞ + Φ η R 2 U − U η − V R 2 U ∞ + V R 1 W − V R 1 U ∞ + V R 2 W − V R 2 U ∞ ≤ η −1 Φ η R 1 U − Φ η R 2 U ∞ + 2C k (k) 2 η + 2C k δ. Rearranging yields Φ η R 1 U − Φ η R 2 U ∞ ≥ η(ε − 2C k (k) 2 η − 2C k δ) ≥ ηε 5 > 0, which implies Φ η R 1 U = Φ η R 2 U .
Hence, R_1 and R_2 do not have the same trajectories.
Proof of (S2) ⇒ (S1). For J ∈ J_k set α_J = a_{J,R_1} − a_{J,R_2}. Suppose that (S1) does not hold; fix J^R ∈ J_k satisfying α_{J^R} ≠ 0 which is minimal in the following quantities in the given order of priority: r(J^{tf(R)}) and then v*(J^{tf(R)}). Equip R^{tf} with the linear order induced by ≤_R and write G_T for the R^{tf}-rooted version of J^{tf}. We shall prove that (S2) does not hold. We first prove the following claim.
Claim 3.8. r(G_T) = r(J^{tf(R)}) and v*(G_T) = v*(J^{tf(R)}).
Proof. T is a copy of R^{tf}, so by Lemma 3.2(iv) we have r(G_T) = |T| = |R^{tf}| = |R^{tf(R)}| = r(J^{tf(R)}). By the definition of G_T and Lemma 3.2(v) we have v*(G_T) = v(J^{tf}) − |R^{tf}| = v*(J^{tf(R)}).
Pick (x,y) ∈ Ω² so that R^{tf} = {v ∈ V(J^{tf}) : {x,y} ∩ Ω_v ≠ ∅} and the first element w in the linear order induced by ≤_R satisfies x ∈ Ω_w. In particular, G_T is the (x,y)-rooted version of J^{tf} for the partition (Ω_i)_{i∈V(J^{tf})}. By (5) we have
(7) V_{R_1} W_z^{J^{tf}}(x,y) − V_{R_2} W_z^{J^{tf}}(x,y) = ∑_{I^S ∈ J_k} T_{I^S} W_z^{J^{tf}}(x,y) · α_{I^S}.
We evaluate the summands in (7) by considering the following categorization of I^S.
(C1) r(I^{tf(S)}) < r(G_T), or r(I^{tf(S)}) = r(G_T) and v*(I^{tf(S)}) < v*(G_T) hold.
(C2) r(I^{tf(S)}) = r(G_T) and v*(I^{tf(S)}) = v*(G_T) hold.
(C3) r(I^{tf(S)}) > r(G_T), or r(I^{tf(S)}) = r(G_T) and v*(I^{tf(S)}) > v*(G_T) hold.
Summands of category (C1) are trivial because we have α_{I^S} = 0 by Claim 3.8 and the choice of J^R. For summands of categories (C2) and (C3) we focus on T_{I^S} W_z^{J^{tf}}(x,y). By Lemma 3.2(ii) I^{tf(S)} and J^{tf} are twinfree, so by Lemmas 3.2(iii) and 3.3 we have T_{I^S} W_z^{J^{tf}}(x,y) > 0 only if r(I^{tf(S)}) = r(G_T), v*(I^{tf(S)}) ≥ v*(G_T)
and there is a root-order-preserving root-respecting relation-preserving function φ :
V (I tf(S) ) → V (G T ). Suppose that φ : V (I tf(S) ) → V (G T )
is such a function for some I S in category (C2) or (C3) with r(I tf(S) ) = r(G T ) and v * (I tf(S) ) ≥ v * (G T ). The function φ is injective by Lemma 3.1(ii), so φ maps ρ(I tf(S) ) to T . Now by Lemma 3.1(i) we have φ(v) ∈ V * (G T ) for all v ∈ V * (I tf(S) ), so the injectivity of φ implies that v * (I tf(S) ) = v * (G T ) and φ maps V * (I tf(S) ) to V * (G T ). In particular, I S is in category (C2).
Set Q = V * (G T ) and q = |Q|; without loss of generality, let us assume Q = [q]. By adding to I tf(S) dummy non-root twins of root vertices, we may extend φ (uniquely, by Lemma 3.1(i)) to an isomorphism χ : V (G T ) → V (G T ); equivalently, I tf(S) is a copy of G T with some non-root twins of root vertices deleted. Furthermore, each blowup vector m ∈ A(k, q, ρ(I tf (S) ), U (I tf(S) )) can be extended to n ∈ A(k, q, T, U ) (where U denotes the set U (G T ) of non-root vertices of G T with a twin in T ) by adding a dummy zero-entry for each dummy vertex. Hence, by (7), the definition of B(k, G T ) and Lemmas 3.2(iii), 3.3 and 3.4, we obtain
V_{R_1} W_z^{J^{tf}}(x,y) − V_{R_2} W_z^{J^{tf}}(x,y) = ∑_{m∈B(k,G_T)} T_{G_T(m)} W_z^{J^{tf}}(x,y) · α_{G_T(m)} = ∑_{m∈B(k,G_T)} ∑_{φ∈Aut(G_T)} ∏_{i∈Q∪U} z_{φ(i)}^{m_i} · α_{G_T(m)} = ∑_{m∈A(k,q,T,U)} ∏_{i∈Q∪U} z_i^{m_i} · α_{G_T(m)}.
Since z is subject to the condition ∑_{i∈[q]∪U} z_i = 1, we replace z_q with 1 − ∑_{i∈[q−1]∪U} z_i in the expression above to obtain a (q − 1 + |U|)-variable polynomial function p. Observe that the coefficient of a monomial ∏_{i∈[q−1]∪U} z_i^{m_i} in p has the form ∑_n C_{m,n} · α_{G_T(n)}, where we have some coefficients C_{m,n} with C_{m,m} ≠ 0 and the sum is over all vectors n such that for all i ∈ U ∪ [q−1] we have n_i ≤ m_i. Since α_{J^R} ≠ 0, we may pick p such that α_{G_T(p)} ≠ 0 and α_{G_T(n)} = 0 for all n ≠ p such that for all i ∈ V(G_T) we have n_i ≤ p_i. Hence, the monomial ∏_{i∈[q−1]∪U} z_i^{p_i} has nonzero coefficient in p and in particular p is not the zero polynomial function. Now Lemma 3.7 implies that p is not identically zero on [0, (q+|U|)^{−1}]^{[q−1+|U|]}, so we may pick y = (y_i)_{i∈[q]∪U} such that ∑_{i∈[q]∪U} y_i = 1 and V_{R_1} W_y^{J^{tf}} ≠ V_{R_2} W_y^{J^{tf}}. This completes the proof.
Proofs of lemmas.
In this subsection we give the proofs of our lemmas from Section 3.1.
Proof of Lemma 3.1. We first consider part (i). Take u, v ∈ V (F ) such that φ(u) and φ(v) are twins in H; by definition we have N H (φ(u)) = N H (φ(v)). Since φ is relation-preserving, we have N F (u) = N F (v), that is, u and v are twins in F . Now we consider part (ii). Let u, v ∈ V (F ) satisfy φ(u) = φ(v). By part (i) u and v are twins in F . Since F R is twinfree and we cannot have |R ∩ {u, v}| = 1 because φ is root-respecting, we must have u = v. This completes the proof that φ is injective.
Proof of Lemma 3.2. We begin with part (i). Fix i ∈ I ∈ C(F^R). For j ∈ N_F(i) note that there exists J ∈ C(F^R) such that j ∈ J. Since ij ∈ E(F), by definition of F^{tf(R)} we have IJ ∈ E(F^{tf(R)}), which implies j ∈ J ∈ N_{F^{tf(R)}}(I). On the other hand, for j ∈ J ∈ N_{F^{tf(R)}}(I) we have IJ ∈ E(F^{tf(R)}), so by definition there exist u ∈ I and v ∈ J such that uv ∈ E(F). Since
u ∼ tf(R) i, we have v ∈ N F (u) = N F (i), which implies iv ∈ E(F ). But we also have v ∼ tf(R) j, so we have i ∈ N F (v) = N F (j). Hence, we have j ∈ N F (i) as required.
For part (ii) let I, J ∈ C(F^R) and pick i ∈ I and j ∈ J arbitrarily. Suppose that we have |R^{tf(R)} ∩ {I,J}| ≠ 1 and N_{F^{tf(R)}}(I) = N_{F^{tf(R)}}(J). By part (i) we have N_F(i) = N_F(j), so we have i ∼_{tf(R)} j. This in turn implies I = J as required.
For part (iii) set m = (|I|) I∈C(F R ) . For each I ∈ C(F R ) select an arbitrary bijection φ I : I → S m I . Note that by part (i) the function φ :
V (F R ) → V (F tf(R) (m)) given by φ(v) = φ I (v) whenever v ∈ I ∈ C(F R ) is an isomorphism.
For part (iv) we note that for all a, b ∈ R we have a ∼ tf(R) b if and only if a ∼ tf b, so for all a ∈ R we have a tf(R) = a tf ∩ R. Now since any two equivalence classes with nonempty intersection must be equal, we deduce that for all a, b ∈ R we have a tf = b tf if and only if
a tf(R) = a tf ∩ R = b tf ∩ R = b tf(R) . It follows that |R tf | = |R tf(R) |. Finally, we consider part (v). Set A = {v tf : v ∈ V (F ), v tf ∩ R = ∅}. Since A ∩ R tf = ∅ and A ∪ R tf = V (F tf ), we have |A| = v(F tf ) − |R tf |. Note that for all a, b ∈ V (F ) \ R we have a ∼ tf(R) b if and only if a ∼ tf b, so for all a ∈ V (F ) \ R we have a tf(R) = a tf \ R.
To complete the proof, we shall show that A = V * (F tf(R) ). Let a ∈ A. By the definition of A we have v ∈ V (F ) \ R such that a = v tf and v tf ∩ R = ∅, so a = v tf(R) ∈ V (F tf(R) ) and v has no twin in R. Now by part (i) a must have no twin in R tf(R) , so we have a ∈ V * (F tf (R) ). Now take I ∈ V * (F tf(R) ) and fix v ∈ I. Since I ∈ V (F tf(R) ) has no twin in R tf(R) , by part (i) the vertex v ∈ V (F R ) has no twin in R. Hence, we have v tf ∩ R = ∅, which implies I = v tf(R) = v tf ∈ A. This completes the proof.
Proof of Lemma 3.3. For x = (x_i)_{i∈V(F^R(m))} ∈ Ω^{V(F^R(m))} set
a(x) = ∏_{ij∈E(F^R(m))} W_z^G(x_i, x_j) ∏_{ij∉E(F^R(m))} (1 − W_z^G(x_i, x_j)).
Let (p,q) be the ordered pair of roots of F^R(m). For x ∈ Ω^{V(F^R(m))} define a function f_x : V(F^R(m)) → V(G) as follows. For i ∈ V(F^R(m)) set f_x(i) = w where x_i ∈ Ω_w. Set P = {x ∈ Ω^{V(F^R(m))} : f_x is relation-preserving} and Q = {x ∈ P : x_p = x, x_q = y}. For x ∈ Q define functions g_x : V(F^R(m)) → V(H^T) and h_x : V(F^R) → V(H^T) as follows. For i ∈ V(F^R(m)) set g_x(i) = f_x(i) if i ≠ p, q; g_x(i) = x^{rt} if i = p; and g_x(i) = y^{rt} if i = q.
For i ∈ V(F^R) pick a representative a_i ∈ S_i and set h_x(i) = g_x(a_i). The following claim establishes some useful properties.
Claim 3.9. The following hold. (i) For all x ∈ Ω^{V(F^R(m))} we have a(x) = I_{{x∈P}}. (ii) For all x ∈ Q the functions g_x and h_x are relation-preserving, root-respecting and root-order-preserving.
(iii) If |R| > |T | then Q = ∅.
The following hold when v * (F R ) ≥ v * (H T ) and |R| ≥ |T |.
(iv) Given x ∈ Q, for all ℓ ∈ V (F R ) and i, j ∈ S ℓ we have g x (i) = g x (j). In particular, the function h x is uniquely defined. (v) Given a root-order-preserving root-respecting relation-preserving function h :
V (F R ) → V (H T ) and a vector x ∈ Ω V (F R (m)) , the following are equivalent. (a) x ∈ Q and h x = h. (b) x p = x, x q = y, and for all ℓ ∈ V (F R ) \ R and i ∈ S ℓ we have x i ∈ Ω h(ℓ) .
Proof. For part (i) note that the definitions of W z G and f x imply that for all x ∈ Ω V (F R (m)) and
all i, j ∈ V(F^R(m)) we have W_z^G(x_i, x_j) = I_{{f_x(i)f_x(j)∈E(G)}}. Hence, we have a(x) = ∏_{i,j∈V(F^R(m))} I_{{ij∈E(F^R(m)) ⇔ f_x(i)f_x(j)∈E(G)}} = I_{{x∈P}}.
For part (ii) note that g x is root-respecting and root-order-preserving by definition. Since f x is relation-preserving for x ∈ Q ⊆ P and x rt and y rt are copies of f x (p) and f x (q) respectively, it follows that g x is relation-preserving. The function h x inherits the desired properties from g x .
For part (iii) suppose that we may take x ∈ Q. Since F R is twinfree, by Lemma 3.1(ii) the function h x is injective, which implies |h x (R)| ≥ |R| > |T |. But h x is root-respecting, so we have h x (R) ⊆ T . This gives a contradiction, so we must have Q = ∅.
For part (iv) fix x ∈ Q and take ℓ ∈ V (F R ) and i, j ∈ S ℓ . By the definition of F R (m) we have N F R (m) (i) = N F R (m) (j) and by part (ii) g x is relation-preserving and root-respecting, so we have N Im(gx) (g x (i)) = N Im(gx) (g x (j)). Since F R is twinfree, by Lemma 3.1(ii) the function h x is injective, which implies |h x (R)| ≥ |R| ≥ |T | and |h
x (V * (F R ))| ≥ v * (F R ) ≥ v * (H T ).
But h x is root-respecting, so we have h x (R) = T . Furthermore, by Lemma 3.1(i) applied to h x we have h x (V * (F R )) = V * (H T ). Now we have h x (V * (F R ) ∪ R) = V * (H T ) ∪ T and Im(h x ) ⊆ Im(g x ), so every vertex in V (H T ) \ Im(g x ) has a twin in Im(g x ). Hence, we have N H T (g x (i)) = N H T (g x (j)). But H T is twinfree, so we have g x (i) = g x (j). In particular, the function h x obtained is independent of the specific choices of the representatives a i ∈ S i . Finally, we consider part (v). Let h : V (F R ) → V (H T ) be a root-order-preserving rootrespecting relation-preserving function. Observe that the definitions of Q and h x imply that any x satisfying the conditions in (a) also satisfies the conditions in (b). Now suppose that x satisfies the conditions in (b). Let i ∈ V (F R (m)). By assumption we have x i ∈ Ω (h(i tf )) tf , so we have f x (i) = (h(i tf )) tf . Since h is relation-preserving, the definitions of H T and of blowups imply that f x is relation-preserving, so we have x ∈ Q. Now the definition of h x and its uniqueness by (iv) imply that h x = h, completing the proof that x satisfies the conditions in (a).
Let us apply Claim 3.9 to prove parts (i) and (ii). By (1) we have
T_{F^R(m)} W_z^G(x,y) = ∫_{y∈Ω^{V′(F^R(m))}} a(y, x, y) dπ^{V′(F^R(m))}.
For part (i) note that by Claim 3.9(i) and Claim 3.9(iii) we obtain T F R (m) W z G (x, y) = 0 as desired. Now consider part (ii). Since h x is uniquely defined for x ∈ Q by Claim 3.9(iv) and Claim 3.9(v) gives an exact characterization of the partition of Q according to the associated root-order-preserving root-respecting relation-preserving function h x , we obtain
T_{F^R(m)} W_z^G(x,y) = ∑_{φ : V(F)→V(H)} ∏_{i∈V(F)\R} z_{φ(i)}^{m_i},
where the sum is over all root-order-preserving root-respecting relation-preserving functions φ : V (F ) → V (H) from F R to H T . This completes the proof.
Proof of Lemma 3.4. Let F R be a twinfree rooted graph. Suppose that m ∼ v F R n. Fix φ ∈ Aut(F R ) such that for all i ∈ V (F R ) we have m i = n φ(i) . For each i ∈ V (F R ) select an arbitrary bijection ψ i : S m i → S n φ(i) . Then, observe that the function ψ : V (F R (m)) → V (F R (n)) given by ψ(v) = ψ i (v) whenever v ∈ S m i is an isomorphism. Now suppose that F R (m) ∼ = F R (n) and fix an isomorphism ψ : V (F R (m)) → V (F R (n)). Define a function χ :
V (F R (m)) → V (F R ) by χ(i) = j where ψ(i) ∈ S n j . For i ∈ V (F R ) pick a representative a i ∈ S m i and define a function φ : V (F R ) → V (F R ) by φ(i) = χ(a i )
. We show that the function φ is uniquely defined by proving that for all ℓ ∈ V (F R ) and i, j ∈ S m ℓ we have χ(i) = χ(j). Indeed, take ℓ ∈ V (F R ) and i, j ∈ S m ℓ . We have N F R (m) (i) = N F R (m) (j) by the definition of F R (m). Since the function χ inherits the properties of rootrespecting, relation-preserving and surjectivity from ψ, we have N F R (χ(i)) = N F R (χ(j)). But F R is twinfree, so we have χ(i) = χ(j). Hence, the function φ is uniquely defined as desired. Furthermore, the definitions of φ, F R (m) and F R (n) imply that φ is a root-order-preserving rootrespecting relation-preserving function. Now φ inherits surjectivity from χ and by Lemma 3.1(ii) φ is injective, so φ is an isomorphism from F R to itself. Observe that the derivation of φ from ψ implies that we have m i = n φ(i) for all i ∈ V (F R ), so this establishes that m ∼ v F R n.
Proving Theorem 1.5
In this section we shall prove Theorem 1.5. We briefly describe our proof strategy here. Theorem 2.3 recasts the question of whether two rules of the same order have the same trajectories in terms of the quantities a J,R , so understanding these quantities is key. The presence of indicator functions I {ab∈E(H)} in the expression for a J,R given by (3) renders the behaviour of a J,R rather intractable in general; indeed, this is a key difficulty in understanding rules with the same trajectories. Our key observation is that we may utilize the symmetries present in symmetric rules to obtain cleaner expressions for the quantities a J,R ; this is the focus of Lemma 4.2. Indeed, we can directly verify the (⇐) direction by combining Theorem 2.3 with the simplified expressions from this key lemma. For the (⇒) direction, the simplified expressions from Lemma 4.2 gives us enough leeway and control to construct 'perturbed' versions of symmetric non-deterministic rules of order k ≥ 3 which have the same trajectories. To handle non-symmetric rules and complete the proof, we observe and directly verify that every such rule has a natural symmetric version which is a rule with the same trajectories. This gives our characterization of unique rules.
To prepare for our proof of Theorem 1.5, we introduce some notation, terminology and useful lemmas. The proof of Theorem 1.5 is given in Section 4.1.
We use the language of group actions to describe the symmetries observed. Let a group Γ act on a set X. For x ∈ X the orbit of x is Orb Γ (x) = {γ · x : γ ∈ Γ} and the stabilizer of
x is Stab Γ (x) = {γ ∈ Γ : γ · x = x}. Set ORB Γ (X) = {Orb Γ (x) : x ∈ X}.
Aside from the natural action of S_k on H_k described just before Theorem 1.5, we also work with the following natural actions of S_k on [k]^2 and [k]^(2). Given σ ∈ S_k and ab ∈ [k]^2 we set σ·ab = σ(a)σ(b). Given σ ∈ S_k and (a,b) ∈ [k]^(2) we set σ·(a,b) = (σ(a), σ(b)). For ℓ ∈ ℕ and A_1, …, A_ℓ ∈ {H_k, [k]^2, [k]^(2)} we have the following action of S_k on ∏_{i∈[ℓ]} A_i. Given σ ∈ S_k and (a_i)_{i∈[ℓ]} we set σ·(a_i)_{i∈[ℓ]} = (σ·a_i)_{i∈[ℓ]}. Note that every group action of S_k on a set X naturally induces a group action of each subgroup Γ of S_k on the same set X. Recall that G_k = H_k × [k]^(2). Observe that J_k is precisely the set ORB_{S_k}(G_k) of orbits for the action of S_k on G_k.
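For concreteness, the sketch below is my own illustration (all names are ad hoc): it enumerates the orbits of S_3 acting on G_3 = H_3 × [3]^(2), i.e. labelled graphs on [3] together with an ordered pair of distinct roots, which is exactly the index set J_3 appearing in (5).

```python
from itertools import permutations, combinations, product

k = 3
vertices = list(range(1, k + 1))
all_edges = list(combinations(vertices, 2))

# H_k: labelled graphs on [k], encoded as frozensets of edges.
graphs = [frozenset(E) for r in range(len(all_edges) + 1)
          for E in combinations(all_edges, r)]
# [k]^(2): ordered pairs of distinct vertices (the roots a, b).
root_pairs = [(a, b) for a in vertices for b in vertices if a != b]

def act(sigma, element):
    E, (a, b) = element
    E_new = frozenset(tuple(sorted((sigma[u - 1], sigma[v - 1]))) for u, v in E)
    return (E_new, (sigma[a - 1], sigma[b - 1]))

G_k = set(product(graphs, root_pairs))
orbits, seen = [], set()
for x in G_k:
    if x in seen:
        continue
    orbit = {act(sigma, x) for sigma in permutations(vertices)}
    seen |= orbit
    orbits.append(orbit)

print("elements of G_3:", len(G_k), " orbits (= |J_3|):", len(orbits))
```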
Let Σ be a subgroup of a group Γ. A left coset of Σ in Γ is a set γΣ = {γσ : σ ∈ Σ} with γ ∈ Γ. Write Γ/Σ for the collection of left cosets of Σ in Γ. The following is a useful technical lemma that allows us to recast sums over orbits of group actions as sums over groups.
Lemma 4.1.
Given an action of a group Γ on a set X, a function f : X → R and an element x ∈ X, we have
(1/|Γ|) ∑_{γ∈Γ} f(γ·x) = (1/|Orb_Γ(x)|) ∑_{y∈Orb_Γ(x)} f(y).
Proof. Observe that the left cosets of Σ := Stab Γ (x) partition Γ, the map γΣ → γ · x gives a well-defined bijection from Γ/Σ to Orb Γ (x) and for each γ ∈ Γ the map σ → γσ gives a bijection from Σ to γΣ, so by grouping terms according to left cosets of Σ we have
(1/|Γ|) ∑_{γ∈Γ} f(γ·x) = (1/|Γ|) ∑_{ρΣ∈Γ/Σ} ∑_{γ∈ρΣ} f(γ·x) = (1/|Orb_Γ(x)|) ∑_{y∈Orb_Γ(x)} f(y), as required.
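As a sanity check (my own illustration, not part of the proof), the sketch below verifies the identity of Lemma 4.1 for the action of S_3 on ordered pairs [3]^2 with an arbitrary test function f.

```python
from itertools import permutations
from fractions import Fraction

k = 3
group = list(permutations(range(1, k + 1)))                      # S_3
X = [(a, b) for a in range(1, k + 1) for b in range(1, k + 1)]   # [3]^2

def act(sigma, ab):
    a, b = ab
    return (sigma[a - 1], sigma[b - 1])

f = lambda ab: Fraction(ab[0] ** 2 + 3 * ab[1])                  # arbitrary test function

for x in X:
    orbit = {act(sigma, x) for sigma in group}
    lhs = sum(f(act(sigma, x)) for sigma in group) / len(group)  # group average
    rhs = sum(f(y) for y in orbit) / len(orbit)                  # orbit average
    assert lhs == rhs
print("Lemma 4.1 verified for all", len(X), "base points")
```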
To prepare for our key lemma, we introduce some notation.
Let F ∈ H_k. Set S_F = Stab_{S_k}(F) and T_F = ORB_{S_F}([k]^2). We assign to each G ∈ H_k a vector p^{F,G} = (p^{F,G}_I)_{I∈T_F} given by p^{F,G}_I = |E(G) ∩ I| for all I ∈ T_F. For each vector p ∈ ∏_{I∈T_F}[|I|]_0 set H_k(F,p) = {H ∈ H_k : p^{F,H} = p} and R(F,p) = ∑_{H∈H_k(F,p)} R_{F,H}. For I ∈ T_F and ℓ ∈ [|I|]_0 set H_k(F,I,ℓ) = {H ∈ H_k : p^{F,H}_I = ℓ} and R(F,I,ℓ) = ∑_{H∈H_k(F,I,ℓ)} R_{F,H}. For ab ∈ [k]^2 and ℓ ∈ [|Orb_{S_F}(ab)|]_0 set H_k(F,ab,ℓ) = H_k(F, Orb_{S_F}(ab), ℓ) and R(F,ab,ℓ) = R(F, Orb_{S_F}(ab), ℓ). Our key lemma gives useful expressions for a_{J,R} when R is a symmetric rule.
Lemma 4.2. Given a symmetric rule R of order k, a graph F ∈ H_k and ab ∈ [k]^2 we have
a_{Orb_{S_k}(F^{a,b}),R} = |Orb_{S_k}(F^{a,b})| ( ∑_{H∈H_k} R_{F,H} · I_{ab∈E(H)} − I_{ab∈E(F)} ) = |Orb_{S_k}(F^{a,b})| ( ∑_{ℓ=0}^{|Orb_{S_F}(ab)|} ℓ · R(F,ab,ℓ) / |Orb_{S_F}(ab)| − I_{ab∈E(F)} ).
Proof. For G^{c,d} ∈ Orb_{S_k}(F^{a,b}) pick σ ∈ S_k such that G = σ·F and cd = σ·ab. Clearly we have I_{cd∈E(G)} = I_{ab∈E(F)}. Since R is symmetric and H → σ·H defines a bijection on H_k, we may replace sums over H ∈ H_k with sums over σ·H ∈ H_k and obtain
∑_{H∈H_k} R_{G,H} · I_{cd∈E(H)} = ∑_{H∈H_k} R_{G,σ·H} · I_{σ·ab∈E(σ·H)} = ∑_{H∈H_k} R_{F,H} · I_{ab∈E(H)}.
Hence, by (3) we have
(8) a_{Orb_{S_k}(F^{a,b}),R} = |Orb_{S_k}(F^{a,b})| ( ∑_{H∈H_k} R_{F,H} · I_{ab∈E(H)} − I_{ab∈E(F)} ).
Take G ∈ J ∈ ORB S F (H k ). By replacing sums over σ ∈ S F with sums over σ −1 ∈ S F and applying Lemma 4.1 for G ∈ H k and for ab ∈ [k] 2 , we obtain
(1/|Orb_{S_F}(G)|) ∑_{H∈Orb_{S_F}(G)} I_{ab∈E(H)} = (1/|S_F|) ∑_{σ∈S_F} I_{ab∈E(σ·G)} = (1/|S_F|) ∑_{σ∈S_F} I_{σ·ab∈E(G)} = (1/|Orb_{S_F}(ab)|) ∑_{cd∈Orb_{S_F}(ab)} I_{cd∈E(G)}.
Rearranging yields
(9) ∑_{H∈Orb_{S_F}(G)} I_{ab∈E(H)} = (|Orb_{S_F}(G)| / |Orb_{S_F}(ab)|) · ∑_{cd∈Orb_{S_F}(ab)} I_{cd∈E(G)}.
Now R is symmetric, so for all G ∈ H k and H ∈ Orb Stab S k (F ) (G) we have R F,G = R F,H . Hence, in combination with (8) and (9) we obtain
a_{Orb_{S_k}(F^{a,b}),R} = |Orb_{S_k}(F^{a,b})| ( ∑_{ℓ=0}^{|Orb_{S_F}(ab)|} ℓ · R(F,ab,ℓ) / |Orb_{S_F}(ab)| − I_{ab∈E(F)} ), as desired.
4.1. Proof of Theorem 1.5. We present each direction of the proof of Theorem 1.5 separately.
Proof of Theorem 1.5 (⇐). We first consider when R is a rule of order 2. Let R ′ be a rule of the same order with the same trajectories as R. Take F ∈ H 2 and set J = Orb S k (F 1,2 ) = {F 1,2 , F 2,1 }. By (S1) of Theorem 2.3 we have
2(R′_{F,K_2} − I_{12∈E(F)}) = a_{J,R′} = a_{J,R} = 2(R_{F,K_2} − I_{12∈E(F)}), so we obtain R′_{F,K_2} = R_{F,K_2}. But H_2 = {K_2, K̄_2}, so this gives R = R′.
Hence, R is unique. Next, we consider when R is a symmetric deterministic rule of order k. Let R′ be a rule of the same order with the same trajectories as R. Let F ∈ H_k and fix the unique H_0 ∈ H_k such that R_{F,H_0} = 1. Take H′ ≠ H_0 and pick ab ∈ E(H′) \ E(H_0). Set J = Orb_{S_k}(F^{a,b}). By (3) and noting that for all G^{c,d} ∈ J we have I_{cd∈E(G)} = I_{ab∈E(F)}, we obtain
a_{J,R′} = ∑_{G^{c,d}∈J} ∑_{H∈H_k} R′_{G,H} · I_{cd∈E(H)} − |J| I_{ab∈E(F)}.
By (S1) of Theorem 2.3 and Lemma 4.2 we have a_{J,R′} = a_{J,R} = −|J| I_{ab∈E(F)}, so we deduce that ∑_{H∈H_k} R′_{F,H} · I_{ab∈E(H)} = I_{ab∈E(H_0)} = 0. Since ab ∈ E(H′), we have R′_{F,H′} = 0. This gives R = R′, so R is unique.
Proof of Theorem 1.5 (⇒). We prove the contrapositive statement. We first consider when R is a rule which is not symmetric.
Set A = H_k × H_k and B = ORB_{S_k}(A). The symmetric version of R is the matrix R^× = (R^×_{F,H})_{F,H∈H_k} where for all (F,H) ∈ B ∈ B we have R^×_{F,H} = (1/|B|) ∑_{(F′,H′)∈B} R_{F′,H′}.
The following claim establishes that R^× is a symmetric rule of the same order with the same trajectories as R; since R is not symmetric, this shows that R is not unique.
Claim 4.3. R^× is a symmetric rule with the same trajectories as R.
Proof. We first verify that R^× is a symmetric rule. By the definition of R^× it is symmetric and we have R^×_{F,H} ∈ [0,1] for all (F,H) ∈ A, so it remains to show that for all F ∈ H_k we have ∑_{H∈H_k} R^×_{F,H} = 1. Fix J ∈ ORB_{S_k}(H_k). Note that C = {A ∈ ORB_{S_k}(A) : A ∩ (J × H_k) ≠ ∅} partitions J × H_k. Indeed, this follows from the fact that the collection ORB_{S_k}(A) of orbits partitions A and for all (G,H) ∈ J × H_k we have Orb_{S_k}(G,H) ⊆ J × H_k. Hence, by the definition of R^× we have
(10) ∑_{(G,H)∈J×H_k} R^×_{G,H} = ∑_{A∈C} ∑_{(G,H)∈A} R^×_{G,H} = ∑_{A∈C} ∑_{(G,H)∈A} R_{G,H} = ∑_{(G,H)∈J×H_k} R_{G,H} = |J|.
Now take F ∈ J. By the definition of J, for each F′ ∈ J we may pick σ ∈ S_k such that F′ = σ·F, so we have ∑_{H∈H_k} R^×_{F′,H} = ∑_{H∈H_k} R^×_{σ·F,σ·H} = ∑_{H∈H_k} R^×_{F,H}. In combination with (10), we obtain |J| = ∑_{(G,H)∈J×H_k} R^×_{G,H} = |J| · ∑_{H∈H_k} R^×_{F,H}, which implies ∑_{H∈H_k} R^×_{F,H} = 1. Hence, R^× is a rule as required.
Now we show that R^× and R have the same trajectories by verifying the conditions of (S1). Take J ∈ J_k. Pick (F^{a,b}, H) ∈ A ∈ ORB_{S_k}(J × H_k) and set B = Orb_{S_k}(F,H). By noting that I_{σ·ab∈E(σ·H)} = I_{ab∈E(H)} for all σ ∈ S_k and applying Lemma 4.1 for (F^{a,b},H) ∈ G_k × H_k and for (F,H) ∈ H_k × H_k, we obtain
(1/|Orb_{S_k}(F^{a,b},H)|) ∑_{(G^{c,d},H′)∈Orb_{S_k}(F^{a,b},H)} R_{G,H′} I_{cd∈E(H′)} = (1/|S_k|) ∑_{σ∈S_k} R_{σ·F,σ·H} I_{σ·ab∈E(σ·H)} = (1/|S_k|) ∑_{σ∈S_k} R_{σ·F,σ·H} I_{ab∈E(H)} = (1/|Orb_{S_k}(F,H)|) ∑_{(G,H′)∈Orb_{S_k}(F,H)} I_{ab∈E(H)} R_{G,H′}.
Since the analogous equation for R^× also holds and by the definition of R^× we have ∑_{(G,H′)∈Orb_{S_k}(F,H)} R_{G,H′} = ∑_{(G,H′)∈Orb_{S_k}(F,H)} R^×_{G,H′}, it follows that ∑_{(G^{c,d},H′)∈A} R_{G,H′} I_{cd∈E(H′)} = ∑_{(G^{c,d},H′)∈A} R^×_{G,H′} I_{cd∈E(H′)}. Hence, by (3) we have
a_{J,R} = ∑_{A∈ORB_{S_k}(J×H_k)} ∑_{(F^{a,b},H)∈A} R_{F,H} · I_{ab∈E(H)} − ∑_{F^{a,b}∈J} I_{ab∈E(F)} = ∑_{A∈ORB_{S_k}(J×H_k)} ∑_{(F^{a,b},H)∈A} R^×_{F,H} · I_{ab∈E(H)} − ∑_{F^{a,b}∈J} I_{ab∈E(F)} = a_{J,R^×}.
This establishes that the conditions of (S1) hold, so by Theorem 2.3 the rules R and R × have the same trajectories as required.
Since the only rule of order 1 is symmetric and deterministic, it remains to consider when R is a non-deterministic symmetric rule of order k ≥ 3. We have the following cases.
(D1) There are F ∈ H_k, I ∈ T_F, p ∈ ∏_{A∈T_F}[|A|]_0 and H ∈ H_k(F,p) such that p_I ∈ [|I| − 1] and R_{F,H} > 0.
(D2) There are F ∈ H_k, I ∈ T_F, p^{(i)} ∈ ∏_{A∈T_F}[|A|]_0 and H^{(i)} ∈ H_k(F,p^{(i)}) for all i ∈ {0,1} such that |I| > 1 and the following hold for all i ∈ {0,1}. We have p^{(i)}_I = i|I|, p^{(i)}_A = p^{(0)}_A for all A ≠ I and R_{F,H^{(i)}} > 0.
(D3) There are F ∈ H_k, distinct I, L ∈ T_F, p^{i,ℓ} ∈ ∏_{A∈T_F}[|A|]_0 and H^{i,ℓ} ∈ H_k(F,p^{i,ℓ}) for all (i,ℓ) ∈ {0,1}^2 such that |I| = |L| = 1 and the following hold for all (i,ℓ) ∈ {0,1}^2. We have p^{i,ℓ}_I = i, p^{i,ℓ}_L = ℓ, p^{i,ℓ}_A = p^{0,0}_A for all A ≠ I, L and R_{F,H^{i,ℓ}} > 0.
(D4) There are F ∈ H_k, {ab} ∈ T_F, p^−, p^+ ∈ ∏_{A∈T_F}[|A|]_0, H^− ∈ H_k(F,p^−) and H^+ ∈ H_k(F,p^+) such that p^−_{{ab}} = 0, p^+_{{ab}} = 1, p^−_A = p^+_A for all A ≠ {ab} and R_{F,H^−}, R_{F,H^+} > 0.
For (D1) define p^−, p^+ ∈ ∏_{A∈T_F}[|A|]_0 by setting p^−_I = p_I − 1, p^+_I = p_I + 1 and p^−_A, p^+_A = p_A for all A ≠ I. Pick H^− ∈ H_k(F,p^−) and H^+ ∈ H_k(F,p^+). Set δ = |Orb_{S_k}(F,H)| R_{F,H} > 0 and take ε ∈ (0,δ]. Define a modification R^ε of R as follows. For (F′,H′) ∈ H_k × H_k we set
R^ε_{F′,H′} = R_{F′,H′} − ε/|Orb_{S_k}(F,H)| if (F′,H′) ∈ Orb_{S_k}(F,H); R_{F′,H′} + ε/(2|Orb_{S_k}(F,H^−)|) if (F′,H′) ∈ Orb_{S_k}(F,H^−); R_{F′,H′} + ε/(2|Orb_{S_k}(F,H^+)|) if (F′,H′) ∈ Orb_{S_k}(F,H^+); R_{F′,H′} otherwise.
It is straightforward to check that R^ε is a symmetric rule distinct from R. To show that R is not unique, we shall show that R and R^ε have the same trajectories by verifying the conditions of (S1) from Theorem 2.3. We consider three categories of J ∈ J_k. First, we have J ≠ Orb_{S_k}(F^{a,b}) for all ab ∈ [k]^2. In this case, by (3) and the definition of R^ε we have a_{J,R} = a_{J,R^ε}. Next, we have J = Orb_{S_k}(F^{a,b}) for some ab ∉ I. Here, by the definition of R^ε we have R^ε(F,ab,ℓ) = R(F,ab,ℓ) for all ℓ, so by Lemma 4.2 we have a_{J,R} = a_{J,R^ε}. Finally, we have J = Orb_{S_k}(F^{a,b}) for some ab ∈ I. In this case, we have R^ε(F,ab,p_I) = R(F,ab,p_I) − ε, R^ε(F,ab,p_I − 1) = R(F,ab,p_I − 1) + ε/2, R^ε(F,ab,p_I + 1) = R(F,ab,p_I + 1) + ε/2 and R^ε(F,ab,ℓ) = R(F,ab,ℓ) otherwise. Hence, by Lemma 4.2 we have a_{J,R} = a_{J,R^ε}. Since the conditions of (S1) from Theorem 2.3 hold, R and R^ε have the same trajectories and R is not unique.
For (D2) set δ = min{|Orb_{S_k}(F,H^{(0)})| R_{F,H^{(0)}}, |Orb_{S_k}(F,H^{(1)})| R_{F,H^{(1)}}} > 0, take ε ∈ (0,δ] and distinguish two subcases: |I| ≥ 3 and |I| = 2. Let us begin with the former. Define q^−, q^+ ∈ ∏_{A∈T_F}[|A|]_0 by setting q^−_I = 1, q^+_I = |I| − 1 and q^−_A = q^+_A = p^{(0)}_A for all A ≠ I. Pick G^− ∈ H_k(F,q^−) and G^+ ∈ H_k(F,q^+). Define a modification R^ε of R as follows. For (F′,H′) ∈ H_k × H_k we set
R^ε_{F′,H′} = R_{F′,H′} − ε/|Orb_{S_k}(F,H^{(i)})| if (F′,H′) ∈ Orb_{S_k}(F,H^{(i)}) for some i ∈ {0,1}; R_{F′,H′} + ε/|Orb_{S_k}(F,G^−)| if (F′,H′) ∈ Orb_{S_k}(F,G^−); R_{F′,H′} + ε/|Orb_{S_k}(F,G^+)| if (F′,H′) ∈ Orb_{S_k}(F,G^+); R_{F′,H′} otherwise.
In a manner similar to that for (D1), we may verify that R^ε is a symmetric rule distinct from R which has the same trajectories, thereby establishing that R is not unique. For the latter subcase define q ∈ ∏_{A∈T_F}[|A|]_0 by setting q_I = 1 and q_A = p^{(0)}_A for all A ≠ I. Pick G ∈ H_k(F,q) and define a modification R^ε of R as follows. For (F′,H′) ∈ H_k × H_k we set
R^ε_{F′,H′} = R_{F′,H′} − ε/|Orb_{S_k}(F,H^{(i)})| if (F′,H′) ∈ Orb_{S_k}(F,H^{(i)}) for some i ∈ {0,1}; R_{F′,H′} + 2ε/|Orb_{S_k}(F,G)| if (F′,H′) ∈ Orb_{S_k}(F,G); R_{F′,H′} otherwise.
In a manner similar to that for (D1), we may verify that R^ε is a symmetric rule distinct from R which has the same trajectories, thereby establishing that R is not unique. For (D3) we set δ = min{ 1/2 + (−1)^{i+j}(1/2 − |Orb_{S_k}(F,H^{i,j})| R_{F,H^{i,j}}) : (i,j) ∈ {0,1}^2 }, take ε ∈ (0,δ] and define a modification R^ε of R as follows. For (F′,H′) ∈ H_k × H_k we set
R^ε_{F′,H′} = R_{F′,H′} + (−1)^{i+j} ε/|Orb_{S_k}(F,H^{i,j})| if (F′,H′) ∈ Orb_{S_k}(F,H^{i,j}) for some (i,j) ∈ {0,1}^2; R_{F′,H′} otherwise.
In a manner similar to that for (D1), we may verify that R ε is a symmetric rule distinct from R which has the same trajectories, thereby establishing that R is not unique. For (D4) pick G ∈ H k distinct from F so that for some cd ∈ [k] 2 we have G c,d ∈ Orb S k (F a,b ); we may always do so because k ≥ 3 and Orb S F (ab) = {ab}. Fix σ ∈ S k such that G = σ · F , set δ = min{R F,H − , R G,H + , 1 − R F,H + , 1 − R G,H − } and take ε ∈ (0, δ]. Define a modification R ε of R as follows. For (F ′ , H ′ ) ∈ H k × H k we set
R^ε_{F′,H′} = R_{F′,H′} − ε if (F′,H′) ∈ {(F,H^−), (G, σ·H^+)}; R_{F′,H′} + ε if (F′,H′) ∈ {(F,H^+), (G, σ·H^−)}; R_{F′,H′} otherwise.
In a manner similar to that for (D1), we may verify that R ε is a rule distinct from R which has the same trajectories, thereby establishing that R is not unique.
Concluding remarks
In this paper, we obtain a complete characterization of the equivalence classes of flip process rules with the same graphon trajectories and characterize the flip process rules which are unique in their equivalence classes. In this section, we shall discuss a number of possible directions for further study.
Characterization by flip process distributions.
Here we revisit Question 1.3, which asks for an analogous characterization of flip process rules with the same flip process distributions. Since flip processes are time-homogeneous Markov processes, it suffices to consider their one-step evolutions, that is, the distribution of G_1 given an initial graph G_0 = G. This is analogous to studying the velocity operator for graphon trajectories and leads us to the following conjecture.
Conjecture 5.1. The following are equivalent for rules R and R′ of the same order k ∈ ℕ.
(K1) For all B ∈ ORB_{S_k}(H_k × H_k) we have ∑_{(F′,H′)∈B} R_{F′,H′} = ∑_{(F′,H′)∈B} R′_{F′,H′}.
(K2) R and R′ have the same flip process distributions.
Together with the 'lifting' procedure relating rules of different orders described just below Theorem 1.4, a positive answer would fully resolve Question 1.3. It would be interesting to resolve Conjecture 5.1. One potential approach is to replicate the proof framework of Theorem 2.3. Indeed, much of the proof has a graph theoretic flavour and could carry over by analogy. In particular, one could likely prove an analogue of Lemma 3.3 with a suitable notion of rooted induced densities. However, the flip process sampling method means that we would need to count injective copies of graphs and so we would likely get terms ∏_{i∈[m]}(z − i + 1) resembling falling factorials instead of simple monomials z^m. As a consequence, we would not be able to directly utilize the linear independence of monomials.
Trajectories with varying speeds.
Our definition of rules R_1 and R_2 with the same trajectories requires Φ^t_{R_1} W = Φ^t_{R_2} W for all t. That is, the trajectories starting at W have to be the same in 'real-time' under both R_1 and R_2. One might ask what happens if we were to relax this time-restriction and allow speed variation. Here we state an example question.
Question 5.2. When do two rules R_1 and R_2 satisfy Φ^t_{R_1} W = Φ^{f(W,t)}_{R_2} W for all W ∈ W_0 and t ∈ [0,∞) for a suitable 'time function' f : W_0 × [0,∞) → [0,∞)?
One notable case is that of uniform time dilation, that is, when f(W,t) = Ct for some constant C > 0. Here the proof framework of Theorem 2.3 can be adapted to give the characterization a_{J,R_1} = C · a_{J,R_2} (analogous to (S1)). There are concrete examples of this form of speed variation. Indeed, let R_1 be the triangle removal rule and R_2 be the rule which deletes a random edge from any sampled triangle (and does nothing otherwise). It is straightforward to check that Φ^t_{R_1} W = Φ^{3t}_{R_2} W. We remark that there are also concrete examples of non-uniform time dilation. Let R_1 and R_2 be rules of order 6 which act as follows. If both sets {v_1,v_2,v_3} and {v_4,v_5,v_6} of drawn vertices induce a triangle, R_1 deletes the edges of the triangle on {v_1,v_2,v_3}. If the drawn vertices {v_1,v_2,v_3} induce a triangle, R_2 deletes the edges of that triangle. Otherwise, they do nothing. It is straightforward to check that V_{R_1} W = f(W) · V_{R_2} W, where f(W) represents the triangle density of W. By Definition 2.1 and Theorem 2.2, the trajectories Φ^t_{R_1} W and Φ^t_{R_2} W are unique and satisfy Φ^t_{R_1} W = Φ^{g(t,W)}_{R_2} W with g(t,W) = ∫_0^t f(Φ^s_{R_1} W) ds.
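The uniform time dilation can be mimicked in a scalar toy model (a sketch of my own, not a computation with actual graphon trajectories): if one velocity field is three times another, the slower trajectory at time 3t coincides with the faster one at time t.

```python
def euler(velocity, x0, t_end, n_steps=200_000):
    # Explicit Euler integration of dx/dt = velocity(x).
    x, dt = x0, t_end / n_steps
    for _ in range(n_steps):
        x += dt * velocity(x)
    return x

V1 = lambda x: -x ** 3          # stand-in for the "remove all three edges" rule
V2 = lambda x: V1(x) / 3.0      # stand-in for the "remove one random edge" rule

x0, t = 0.8, 2.0
print(euler(V1, x0, t), euler(V2, x0, 3.0 * t))   # agree up to discretization error
```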
Question 1.2. When do two rules R_1 and R_2 have the same trajectories?
Question 1.3. When do two rules R_1 and R_2 have the same flip process distributions?
Theorem 1.5. A rule R is unique if and only if it is a symmetric deterministic rule or a rule of order 2.
For a graph F and a vector m = (m i ) i∈V (F ) ∈ N V (F ) 0 the m-blowup of F is the graph F (m) obtained by replacing each i ∈ V (F ) by an independent set S i of m i vertices and each edge ij ∈ E(F ) with a complete bipartite graph on (S i , S j ). We say that we blow up each vertex i ∈ V (G) by a factor of m i to obtain F (m) from F and often omit mention of vertices i ∈ V (G) where m i = 1. For a rooted graph F R and a vector m = (
AcknowledgementsThe author would like to thank Jan Hladký for helpful discussions and insightful comments.
References
[1] P. Araújo, J. Hladký, E. K. Hng, and M. Šileikis. Prominent examples of flip processes, 2022. arXiv:2206.03884.
[2] T. Bohman. The triangle-free process. Adv. Math., 221(5):1653-1677, 2009.
[3] T. Bohman and P. Keevash. Dynamic concentration of the triangle-free process. Random Structures Algorithms, 58(2):221-293, 2021.
[4] B. Bollobás. To prove and conjecture: Paul Erdős and his mathematics. Amer. Math. Monthly, 105(3):209-237, 1998.
[5] P. Erdős and A. Rényi. On the evolution of random graphs. Magyar Tud. Akad. Mat. Kutató Int. Közl., 5:17-61, 1960.
[6] G. Fiz Pontiveros, S. Griffiths, and R. Morris. The triangle-free process and the Ramsey number R(3, k). Mem. Amer. Math. Soc., 263(1274):v+125, 2020.
[7] F. Garbe, J. Hladký, M. Šileikis, and F. Skerman. From flip processes to dynamical systems on graphons, 2022. arXiv:2201.12272.
[8] J. H. Kim. The Ramsey number R(3, t) has order of magnitude t^2 / log t. Random Structures Algorithms, 7(3):173-207, 1995.
[9] L. Lovász. Large networks and graph limits, volume 60 of American Mathematical Society Colloquium Publications. American Mathematical Society, Providence, RI, 2012.
[10] J. T. Schwartz. Fast probabilistic algorithms for verification of polynomial identities. J. Assoc. Comput. Mach., 27(4):701-717, 1980.
[11] R. Zippel. Probabilistic algorithms for sparse polynomials. In Symbolic and algebraic computation (EUROSAM '79, Internat. Sympos., Marseille, 1979), volume 72 of Lecture Notes in Comput. Sci., pages 216-226. Springer, Berlin-New York, 1979.
Email address: [email protected]
Institute of Computer Science of the Czech Academy of Sciences, Pod Vodárenskou věží 2, 182 00, Praha 8, Czech Republic
| [] |
[
"The Yukawa's forgotten interaction",
"The Yukawa's forgotten interaction"
] | [
"Luiz L Lopes \nCentro Federal de Educação Tecnológica de Minas Gerais Campus VIII\nCEP 37.022-560VarginhaMGBrazil\n"
] | [
"Centro Federal de Educação Tecnológica de Minas Gerais Campus VIII\nCEP 37.022-560VarginhaMGBrazil"
] | [] | I investigate the use of the SU(3) Clebsch-Gordan coefficients in light of the relations of completeness and closure. I show that these relations are not satisfied in most works that use the symmetry group arguments to fix hyperon-mesons coupling constants because the set of coupling constants usually utilized is incomplete. There is a forgotten interaction: the exchange of a ρ meson between a Λ and a Σ 0 hyperon. I then calculate the missing coupling constants and show that this recovers the completeness and closure of the SU(3) Clebsch-Gordan coefficients, besides, it increases the symmetry of the theory, once now we can group the baryon octet into four doublets. Finally, I add the new coupling constants to study numerical results in the hyperon onset in dense nuclear matter. | null | [
"https://export.arxiv.org/pdf/2305.19388v1.pdf"
] | 258,987,948 | 2305.19388 | b2d8256971d6cf10db20e95b718ff67be9ce5116 |
The Yukawa's forgotten interaction
30 May 2023
Luiz L Lopes
Centro Federal de Educação Tecnológica de Minas Gerais Campus VIII
CEP 37.022-560, Varginha, MG, Brazil
(Dated: June 1, 2023)
Keywords: Yukawa coupling, Symmetry group, Clebsch-Gordan coefficients
I investigate the use of the SU(3) Clebsch-Gordan coefficients in light of the relations of completeness and closure. I show that these relations are not satisfied in most works that use the symmetry group arguments to fix hyperon-mesons coupling constants because the set of coupling constants usually utilized is incomplete. There is a forgotten interaction: the exchange of a ρ meson between a Λ and a Σ 0 hyperon. I then calculate the missing coupling constants and show that this recovers the completeness and closure of the SU(3) Clebsch-Gordan coefficients, besides, it increases the symmetry of the theory, once now we can group the baryon octet into four doublets. Finally, I add the new coupling constants to study numerical results in the hyperon onset in dense nuclear matter.
I. INTRODUCTION
The study of nuclear physics is almost a century old. And despite its senility, some techniques developed in the early years are still helpful today in describing strongly interacting matter. In 1935, H. Yukawa [1] proposed that the interaction between nucleons was mediated by an exchange of massive particles. Nowadays, such interaction is called a one-boson exchange, or Yukawa coupling [2], and it is expressed as the so-called Yukawa Lagrangian:
L Y UK = −g BBM (ψ B ψ B )M.(1)
The theory of strong force and the use of the Yukawa couplings had a great leap with the works of J. Schwinger [3] and especially with the elegant and imperative work of J. J. Sakurai [4]. Based on current conservations and local gauge invariance, Sakurai proposed a model that deals explicitly with baryon-baryon interaction via vector mesons exchange. In such a model, the ω meson couples to the hypercharge while the ρ 0 meson couples to the isospin.
With the development of symmetry group theories, Sakurai's theory was relegated as just a particular case of the more powerful and well-accepted flavor SU(3) symmetry group theory [5][6][7][8]. However, with the onset of the more restrictive flavor-spin hybrid SU(6) group: SU(6) ⊃ SU(3) ⊗ SU(2), Sakurai's theory was restored in its full glory; and again, the ω meson couples to the hypercharge and the ρ meson couples to the isospin [8][9][10][11].
Although the Yukawa coupling explicitly deals with baryon-baryon interaction via one-boson exchange, such interaction has proven extremely useful also in many-body theories. In 1974 J. D. Walecka applied the Yukawa coupling to describe dense nuclear matter in mean field approximation (MFA) [12]. In this approach, the mesonic fields are replaced by their expected values and the nucleons do not interact with each other but instead behave like a free Fermi gas in a classical background field. The Walecka model and its extensions are today known as quantum hadrodynamics [13] and soon became a standard effective field theory to describe dense nuclear matter.
From the early 1990s on, the interest in studying neutron stars with exotic matter has increased significantly, and to reduce the huge uncertainties about the hyperon-meson coupling constants, the use of the SU(6) symmetry group became a standard approach and is widely used, even nowadays [14][15][16][17][18][19][20][21]. However, the discovery and confirmation of hypermassive neutron stars in the early 2010s has shaken our trust in SU(6) coupling constants. For instance, the J0348+0432 with a mass range of 2.01 ± 0.04 M ⊙ [22] and especially the PSR J0740+6620, whose gravitational mass is 2.08 ± 0.07 M ⊙ [23,24], bring great tension between the onset of energetically favorable hyperons and their well-known softening of the equation of state (EoS). This phenomenon is called the hyperon puzzle. Quickly, several authors realized that it was possible to reconcile massive neutron stars with hyperons in their core by partially breaking the SU(6) symmetry in favor of the less restrictive flavor SU(3) symmetry [25][26][27][28][29][30][31][32][33][34][35][36][37][38].
Although in the SU(3) scheme the ρ 0 meson does not necessarily couple directly to the isospin, its sign depends on the isospin projection [7,8]. This implies that the coupling of the ρ between the neutrons is the opposite of that between the protons. The same is true for the Ξ's and for the Σ's. Such behavior is summarized in Eq. 2.
g nnρ = −g ppρ , g Ξ − Ξ − ρ = −g Ξ 0 Ξ 0 ρ , g Σ − Σ − ρ = −g Σ + Σ + ρ .(2)
Moreover, as one can correctly guess, the coupling constants between Λ's and between Σ 0 's are null:
g ΛΛρ = g Σ 0 Σ 0 ρ = 0,(3)
since their isospin projection is zero. When we are dealing with the Yukawa coupling (Eq. 1), especially in quantum hadrodynamics, we usually assume that the Dirac field ψ̄ B is the complex conjugate of the field ψ B. From the SU(3) point of view, that is almost always true. Most of the g BBM vanish for crossed terms, that is, when ψ̄ B and ψ B are not complex conjugates of each other.
The KEY point of the present work is that if we assume that ψ̄ B and ψ B are always complex conjugates of each other, the relation of completeness and closure of the SU(3) Clebsch-Gordan coefficients is violated. This implies that the set of coupling constants utilized is incomplete. Indeed, there are crossed Yukawa couplings (sometimes called coupled channels):
−g Σ 0 Λρ (ψ Σ 0 ψ Λ )ρ 0 , and − g ΛΣ 0 ρ (ψ Λ ψ Σ 0 )ρ 0 ,(4)
that may in fact differ from zero. Such interactions seem to have been forgotten in all previous works dealing with broken SU(6) symmetry (at least they are not present in any of refs. [25][26][27][28][29][30][31][32][33][34][35][36][37][38]), which implies that the Lagrangians used in these previous works are incomplete. From the field theory point of view [3], Eq. 4 indicates that the Σ 0 and the Λ interact with each other via ρ meson exchange. However, in the mean field approximation, the Λ and the Σ 0 instead interact with the background field of the ρ meson. The strength of this interaction depends only on the coupling constant.
In this work, I calculate the crossed coupling constants from Eq. 4 by imposing that the Yukawa Lagrangian (Eq. 1) is invariant under the SU(3) flavor symmetry group and show that this restores the relation of completeness and closure. Thereafter, I explicitly add the crossed Yukawa terms to build a more complete QHD Lagrangian. Then, I calculate the new energy eigenvalues for the Λ and Σ 0 hyperons. Finally, we see how the modified energy eigenvalues affect some of the microscopic and macroscopic properties in neutron stars and dense nuclear matter.
II. THE SU(3) GROUP FORMALISM
In the SU(3) symmetry group formalism (see ref. [7][8][9][10]38] and the references therein to additional discussion), each eigenstate can be labeled as |N Y I I 3 , where N is the dimension of the representation, Y is the hypercharge, I is the total isospin and I 3 is the isospin projection. Assuming that the Yukawa coupling of the QHD (Eq. 1) is invariant under the SU(3) flavor symmetry group, implies that its eigenstate is |0 0 0 0 , or simply a unitary singlet.
The eigenstate of the ρ 0 is |8 0 1 0 . Therefore, in order to produce a Yukawa Lagrangian that is a unitary singlet, the direct product (ψ B ⊗ ψ B ) also must have the same eigenstate: |8 0 1 0 . As the hypercharge and isospin projection are additive numbers, the simplest way to couple (ψ B ⊗ ψ B ) to result in |8 0 1 0 is to assume thatψ B and ψ B are complex conjugates to each other. For the use of the Speiser method [7], there are two ways to couple (ψ B ⊗ ψ B ) to result in the |8 0 1 0 state, typically, the antisymmetric and the symmetric coupling. After that, we must couple the resulting |8 0 1 0 state to the ρ 0 meson in order to obtain the unitary singlet: (ψ B ⊗ ψ B ) ⊗ ρ 0 = |0 0 0 0 . The Yukawa Lagrangian of Eq. 1 can be rewritten as:
L Yukawa = −((gC 8 + g ′ C ′ 8 ) × C 1 )(ψ B ψ B )ρ 0 ,(5)
The g (g ′ ) is the constant associated with the antisymmetric (symmetric) coupling, while the C 8 (C ′ 8 ) is the SU(3) Clebsch-Gordan (CG) coefficients of the antisymmetric (symmetric) coupling to result in the |8 0 1 0 state. Furthermore, C 1 is the CG coefficients to the product (ψ B ψ B ) × ρ 0 to result in the unitary singlet. The SU(3) CG coefficients can be calculated from the isoscalar factors, as discussed in Ref. [7]. Once its values are well known, we use the tables presented in Ref. [39]. Explicitly, we have:
g_{ppρ} = −(−√(3/20) g − √(1/12) g′) × √(1/8),
g_{nnρ} = −(−√(3/20) g − √(1/12) g′) × (−√(1/8)),
g_{ΛΛρ} = −(0 g + 0 g′) × 0,
g_{Σ⁰Σ⁰ρ} = −(0 g + 0 g′) × 0,
g_{Σ⁺Σ⁺ρ} = −(0 g − √(1/3) g′) × √(1/8),
g_{Σ⁻Σ⁻ρ} = −(0 g + √(1/3) g′) × √(1/8),
g_{Ξ⁰Ξ⁰ρ} = −(−√(3/20) g + √(1/12) g′) × (−√(1/8)),
g_{Ξ⁻Ξ⁻ρ} = −(−√(3/20) g + √(1/12) g′) × √(1/8),    (6)
Nevertheless, the SU(3) CG coefficients, as their SU(2) counterparts (see for instance chapter 3 in Sakurai's classical book [40]), must satisfy the relations of completeness and closure. In other words, we must have:
∑C_8² = ∑C_8′² = ∑C_1² = 1.
However, one can easily check that:
∑C_8² = 0.6,  ∑C_8′² = 1,  ∑C_1² = 0.75.    (7)
The results in Eq. 7 show us that the set of coupling constants presented in Eq. 6 are not complete. There are some forgotten (ψ B ⊗ ψ B ) product that still results in the |8 0 1 0 state, but are not complex conjugate to each other. Indeed, the direct productψ Σ 0 ⊗ ψ Λ , as well theψ Λ ⊗ ψ Σ 0 produce an eigenstate |8 0 1 0 . The coupling constants g ΣΛρ and g ΛΣρ can be calculated with the SU(3) Clebsch-Gordan (CG) coefficients:
g_{Σ⁰Λρ} = −(−√(1/5) g + 0 g′) × √(1/8),  g_{ΛΣ⁰ρ} = −(−√(1/5) g + 0 g′) × √(1/8).    (8)
When we add these two forgotten coupling constants, we recover the relations of completeness and closure:
∑C_8² = ∑C_8′² = ∑C_1² = 1,
implying that we now have a complete set of coupling constants in agreement with the SU(3) group. Moreover, as can be seen, unlike the cases of isospin doublets (as protons and neutrons; Ξ 0 and Ξ − , etc) the g ΣΛρ and g ΛΣρ are both positives and not opposite to each other as the ones in Eq. 2. Now, following ref. [7] we introduce the coupling constants:
g_8 = (√30/40) g + (√6/24) g′,  and  α_V = (√6/24) g′ / g_8,    (9)
which results in:
g_{Σ⁰Λρ} = g_{ΛΣ⁰ρ} = (2/3)√3 g_8 (1 − α_V),  implying  g_{Σ⁰Λρ} / g_{NNρ} = (2/3)√3 (1 − α_V).    (10)
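The closure relations of Eq. (7) and their restoration can be checked directly from the coefficients listed in Eqs. (6) and (8). The short script below is my own numerical check (it merely restates those coefficients; the dictionary labels are ad hoc):

```python
from math import sqrt, isclose

# (antisymmetric C8, symmetric C8', singlet C1) factors read off Eq. (6)
octet = {
    "p":    (-sqrt(3/20), -sqrt(1/12),  sqrt(1/8)),
    "n":    (-sqrt(3/20), -sqrt(1/12), -sqrt(1/8)),
    "Lam":  (0.0, 0.0, 0.0),
    "Sig0": (0.0, 0.0, 0.0),
    "Sig+": (0.0, -sqrt(1/3),  sqrt(1/8)),
    "Sig-": (0.0,  sqrt(1/3),  sqrt(1/8)),
    "Xi0":  (-sqrt(3/20),  sqrt(1/12), -sqrt(1/8)),
    "Xi-":  (-sqrt(3/20),  sqrt(1/12),  sqrt(1/8)),
}
# the forgotten Sigma0-Lambda entries of Eq. (8)
crossed = {
    "Sig0Lam": (-sqrt(1/5), 0.0, sqrt(1/8)),
    "LamSig0": (-sqrt(1/5), 0.0, sqrt(1/8)),
}

def sums(table):
    return tuple(sum(c[i] ** 2 for c in table.values()) for i in range(3))

print("diagonal only :", sums(octet))            # approximately (0.6, 1.0, 0.75), Eq. (7)
full = {**octet, **crossed}
print("with crossed  :", sums(full))             # approximately (1.0, 1.0, 1.0)
assert all(isclose(s, 1.0) for s in sums(full))
```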
Within the flavor SU(3) symmetry, we have in principle three free parameters: α_V, the ratio z = g_8/g_1, and the mixing angle θ_V (see refs. [8,29,38] for additional discussion). When we assume the SU(6) symmetry we have:
α_V = 1.00,  z = 1/√6,  θ_V = 35.264°,    (11)
and the Sakurai proposals [4] are restored: the ρ meson couples to the isospin, therefore g_{ΣΛρ} = 0. However, if α_V ≠ 1, then g_{ΣΛρ} ≠ 0 and these forgotten interactions must be considered to account for the completeness of the theory. The now complete set of coupling constants in agreement with the SU(3) theory is presented in Tab. I. These results are fully model-independent and can be applied to a diversity of calculations in future works.
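Since Table I survives only as a caption in this extracted text, the sketch below is my own reconstruction of its content from Eqs. (6), (8) and (9): it gives the ρ couplings in units of g_8 as functions of α_V (the function name and labels are mine, not from the original):

```python
from math import sqrt

def rho_couplings(alpha_v, g8=1.0):
    # Invert Eq. (9): g and g' in terms of g8 and alpha_v.
    g  = 40.0 * (1.0 - alpha_v) * g8 / sqrt(30.0)
    gp = 24.0 * alpha_v * g8 / sqrt(6.0)
    c1 = sqrt(1.0 / 8.0)
    return {
        "g_NNrho":       (sqrt(3/20) * g + sqrt(1/12) * gp) * c1,
        "g_Sig+Sig+rho":  sqrt(1/3) * gp * c1,
        "g_Xi0Xi0rho":   (sqrt(1/12) * gp - sqrt(3/20) * g) * c1,
        "g_LamLamrho":    0.0,
        "g_Sig0Lamrho":   sqrt(1/5) * g * c1,   # the forgotten coupling, Eq. (8)
    }

for a in (1.0, 0.75, 0.5, 0.25):
    print(a, {k: round(v, 3) for k, v in rho_couplings(a).items()})
# At alpha_v = 1 (SU(6)) the Sigma0-Lambda coupling vanishes, recovering Sakurai;
# the ratio g_Sig0Lamrho / g_NNrho equals (2/3)*sqrt(3)*(1 - alpha_v), i.e. Eq. (10).
```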
III. THE QHD FORMALISM AND NUMERICAL RESULTS
Although the primary goal of the present work was to restore the completeness and closure relation of the SU(3) CG coefficients and construct a complete set of coupling constants -which was fully achieved in the last sectionit is good to study the influence of the coupled channel of Eq. 10 in the dense nuclear matter.
I begin by imposing chemical equilibrium and zero net electric charge, a situation expected in neutron star interiors, to investigate the influence of the crossed terms. Let us start with a classical QHD Lagrangian without crossed couplings. Its Lagrangian reads [27,38]:
L = ∑_B ψ̄_B [γ^μ (i∂_μ − g_{Bω} ω_μ − g_{Bφ} φ_μ − g_{Bρ} ½ τ·ρ_μ) − (M_B − g_{Bσ} σ)] ψ_B − U(σ) + ½ (∂^μ σ ∂_μ σ − m_s² σ²) − ¼ Ω^{μν} Ω_{μν} + ½ m_v² ω_μ ω^μ + Λ_{ωρ} (g_ρ² ρ^μ·ρ_μ)(g_ω² ω^μ ω_μ) − ¼ Φ^{μν} Φ_{μν} + ½ m_φ² φ_μ φ^μ + ½ m_ρ² ρ_μ·ρ^μ − ¼ P^{μν}·P_{μν},    (12)
in natural units. Additional discussion about the parameters and the formalism can be found in ref. [12,13,27,31] and the references therein. The g's in Eq. 12 have only two instead three subscripts to let clear that in this Lagrangianψ B is always the complex conjugate of ψ B . Applying Euler-Lagrange and the quantization rules we obtain the energy eigenvalues (which at T = 0 K is also the chemical potential). In MFA we have:
E_B = √(k² + M*_B²) + g_{Bω} ω_0 + g_{Bφ} φ_0 + (τ_3/2) g_{Bρ} ρ_0    (13)
Now I add the coupled channels in the Lagrangian of Eq. 12:
L_{ΛΣ⁰ρ} = −½ g_{ΣΛρ} (ψ̄_Λ ψ_Σ + ψ̄_Σ ψ_Λ) ρ_0,    (14)
where the 1/2 factor was added to keep internal coherence with Eq. 12. When we apply the Euler-Lagrange equations to the now complete SU(3) Lagrangian, we see that the energy eigenvalue for all other six baryons is kept as in Eq. 13. For the Λ and the Σ⁰ we have two coupled equations:
[γ^μ (i∂_μ − g_{Λω} ω_μ) − M*_Λ] ψ_Λ − ½ g_{Σ⁰Λρ} ρ_0 ψ_Σ = 0,
[γ^μ (i∂_μ − g_{Σω} ω_μ) − M*_Σ] ψ_Σ − ½ g_{ΛΣ⁰ρ} ρ_0 ψ_Λ = 0.    (15)
However, as we already knew the energy eigenvalue without the coupled channel, their inclusion is much easier in Hamiltonian formalism. The diagonal terms are the well-known unperturbed energy eigenvalues given by Eq. 13, while the crossed terms are off-diagonal. We have:
H = [ E_B  Δ ; Δ  E_B ]   and   H|ψ_B⟩ = E|ψ_B⟩,    (16)
where |ψ_B⟩ = (ψ_Λ, ψ_Σ) and Δ = ½ g_{Σ⁰Λρ} ρ_0. As we are dealing with beta-stable matter, μ_Λ = μ_Σ, the new energy eigenvalues are (see for instance chapter 5 of Sakurai's book [40] for a complete discussion):
E_1 = √(k² + M*_Λ²) + g_{Λω} ω_0 + g_{Λφ} φ_0 − (g_{ΣΛρ}/2) ρ_0,
E_2 = √(k² + M*_Σ²) + g_{Σω} ω_0 + g_{Σφ} φ_0 + (g_{ΣΛρ}/2) ρ_0.    (17)
Despite the energy eigenvalues from Eq. 17 being exact, the issue here is that the coupled channels lead us to mixed states [41]. In other words, the ψ Λ and ψ Σ are not eigenstates of the Hamiltonian of Eq. 16 anymore. Instead, we have a superposition [40,41]. However, as we have E B >> ∆ in Eq. 16, and following Sakurai's nomenclature [40], ψ Λ and ψ Σ are "almost good" eigenstates of Eq. 16. Therefore we can recognize E 1 as the eigenvalue of the Λ and E 2 as the eigenvalue of the Σ 0 .
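A quick numerical illustration of the "almost good" eigenstates is sketched below; it is my own example with made-up MeV-scale numbers, not the model's actual field values. For a degenerate diagonal the exact eigenvalues are E_B ∓ Δ, and for E_Λ ≠ E_Σ with |E_Λ − E_Σ| ≫ Δ the exact eigenvalues and eigenvectors stay close to the unperturbed ones.

```python
import numpy as np

def mixed_levels(E_lambda, E_sigma, delta):
    H = np.array([[E_lambda, delta],
                  [delta,    E_sigma]])
    return np.linalg.eigh(H)          # eigenvalues ascending, orthonormal eigenvectors

# Degenerate case: eigenvalues are exactly E_B -/+ delta.
print(mixed_levels(1100.0, 1100.0, 5.0)[0])      # [1095. 1105.]

# Non-degenerate, weakly coupled case (illustrative numbers only):
vals, vecs = mixed_levels(1100.0, 1250.0, 5.0)
print(vals)                  # close to (1100, 1250); shift ~ delta^2 / (E_sigma - E_lambda)
print(np.abs(vecs[:, 0]))    # dominated by the Lambda component: an "almost good" state
```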
This approach allows to use MFA for coupled channels but it is not new. It was successfully used to account for the kaon interaction in nuclear medium in MFA (see, for instance, section 10.1 of Glendenning's book [42] and the references therein.), though such interaction is explicitly a coupled channel coming from the g N ΛK and g N ΣK couplings [43,44] (indeed, as the g ΛΛρ , the g N N K is null [7]). Finally, the eigenvalues of the other six baryons are given by their usual expression, Eq. 13.
It is interesting to notice that when I calculated the g Σ 0 Λρ and the g ΛΣ 0 ρ coupling constants from the SU(3) Clebsch-Gordan coefficients, I showed that both have positive signs. However, as they are off-diagonal contributions, they ultimately contribute with opposite signs to the energy eigenvalues, as displayed in Eq. 17. So, for practical purposes, the (Σ 0 , Λ) forms a new isospin doubled, exactly as the (p,n), (Σ + , Σ − ), and (Ξ 0 , Ξ − ), with the coupling constants given by Tab. I. The total EoS is given by [27]:
ε = ∑_B (1/π²) ∫_0^{k_{Bf}} dk k² √(k² + M*_B²) + U(σ_0) + ½ m_σ² σ_0² + ½ m_ω² ω_0² + ½ m_φ² φ_0² + ½ m_ρ² ρ_0² + 3Λ_v ω_0² ρ_0² + ∑_l (1/π²) ∫_0^{k_{lf}} dk k² √(k² + m_l²),    (18)
where B indicates baryons and l indicates leptons. The pressure is easily obtained from thermodynamic relations:
p = ∑_f μ_f n_f − ε,
where the sum runs over all the fermions and µ f is the corresponding chemical potential.
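For a single free species at T = 0, the energy density integral in Eq. (18) and the relation p = ∑_f μ_f n_f − ε can be checked numerically. The sketch below is my own consistency check with arbitrary parameter values in natural units; it assumes SciPy is available and compares p from the standard pressure integral with μn − ε.

```python
from math import sqrt, pi
from scipy.integrate import quad

M, kF = 939.0, 300.0          # illustrative values in MeV (natural units)

eps, _ = quad(lambda k: k**2 * sqrt(k**2 + M**2) / pi**2, 0.0, kF)      # energy density
p_int, _ = quad(lambda k: k**4 / (3.0 * pi**2 * sqrt(k**2 + M**2)), 0.0, kF)  # pressure
n = kF**3 / (3.0 * pi**2)     # number density (degeneracy 2, as in Eq. 18)
mu = sqrt(kF**2 + M**2)       # chemical potential at the Fermi surface

print(p_int, mu * n - eps)    # the two pressures agree (thermodynamic consistency)
```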
To obtain numerical results, I use only α_V = 0.25, which gives the strongest influence of the g_{Σ⁰Λρ} coupling, in order not to saturate the figures. Also, I use two different parametrizations: the eL3ωρ [38], which virtually fulfills every constraint of symmetric nuclear matter, and the well-known and widely used GM1 parametrization [45]. All parameters and predictions for the eL3ωρ are presented in Tab. I of ref. [38], while those of the GM1 can be found in Tab. I of ref. [31]. The coupling constants of the hyperons with the scalar meson are fixed to reproduce the hyperon potential depth values: U_Λ = −28 MeV and U_Σ = +30 MeV. For U_Ξ, I chose U_Ξ = −18 MeV as suggested in ref. [46] when I use the GM1 parametrization (which allows a direct comparison with the results presented in ref. [31]), and U_Ξ = −4 MeV as suggested in ref. [47] for the eL3ωρ parametrization (which allows a comparison with the results presented in ref. [38]).
The reason I use two different parametrizations is that in the eL3ωρ there is a non-linear coupling between the ω and ρ mesons, as introduced in the IUFSU model [48], while for the GM1 there isn't. Such coupling influences the mass of the ρ meson, which ultimately affects the strength of the ρ field at high densities.
The particle population for beta-stable matter at T = 0 K and α_V = 0.25 is displayed in Fig. 1. We can see that the main effect of the g_{Σ⁰Λρ} coupling is to suppress the Λ onset, pushing it to higher densities, whilst at the same time favoring the Ξ⁻. In the case of the eL3ωρ parametrization, the presence of the g_{Σ⁰Λρ} coupling pushes the Λ threshold from 0.4114 fm⁻³ to 0.4416 fm⁻³, whilst the Ξ⁻ threshold is drawn closer, from 0.5821 fm⁻³ to 0.5168 fm⁻³. This indicates an increase of around 10% in the onset density of the Λ and a decrease of around 10% in that of the Ξ⁻. In the case of the GM1 parametrization, the results are more extreme. The g_{Σ⁰Λρ} coupling not only suppresses the Λ threshold whilst favoring the Ξ⁻, but it exchanges their roles. Within it, the Ξ⁻ is now the first hyperon to appear and becomes the most populous hyperon at higher densities. The Λ threshold is pushed from 0.3264 fm⁻³ to 0.4405 fm⁻³, an increase of around 35%. On the other hand, the Ξ⁻ threshold is drawn closer, from 0.4079 fm⁻³ to 0.3655 fm⁻³, a decrease of around 10%. Now I use the EoS of the beta-stable, electrically neutral matter to solve the TOV equations [49]. For both parametrizations, I use the BPS EoS [50] for the outer crust and the BBP EoS [51] for the inner crust. I do not plot the EoS itself because the effects of the g_{Σ⁰Λρ} coupling are visually indistinguishable. The numerical results are presented in Fig. 2. We can also discuss some constraints related to neutron stars. Today, maybe the most important constraint is the undoubted existence of supermassive neutron stars. Using the NICER x-ray telescope, ref. [24] was able to constrain the mass and the radius of the PSR J0740+6620 to the ranges M = 2.08 ± 0.07 M ⊙ and 11.41 km < R < 13.70 km, respectively. This constraint is plotted as a hatched area in Fig. 2. As can be seen, both the eL3ωρ and the GM1 fulfill this constraint.
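For readers unfamiliar with the mass-radius calculation, a minimal TOV integrator is sketched below. It is my own toy sketch: it uses a simple relativistic polytrope as a stand-in EoS (not the eL3ωρ or GM1 tables) and rough geometrized units, just to show the structure of the computation.

```python
import numpy as np

# G = c = 1, lengths in km; energy density and pressure in km^-2 (schematic values).
K, Gamma = 100.0, 2.0                      # toy polytrope p = K * rho^Gamma
def eps_of_p(p):                           # invert the polytrope, add internal energy
    rho = (p / K) ** (1.0 / Gamma)
    return rho + p / (Gamma - 1.0)

def tov(p_c, dr=0.001):
    r, m, p = dr, 0.0, p_c
    while p > 1e-12:
        e = eps_of_p(p)
        dpdr = -(e + p) * (m + 4.0 * np.pi * r**3 * p) / (r * (r - 2.0 * m))
        dmdr = 4.0 * np.pi * r**2 * e
        p, m, r = p + dr * dpdr, m + dr * dmdr, r + dr
    return r, m                            # radius (km) and mass (km; 1 Msun ~ 1.4766 km)

R, M = tov(1e-3)                           # arbitrary central pressure, toy output only
print("R = %.2f km,  M = %.2f Msun" % (R, M / 1.4766))
```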
Other constraints are related to the radius and tidal parameter of the canonical 1.4 M ⊙ star, however, they are still the subject of high debate about their true values. Recently, results obtained from Bayesian analysis indicate that the radius of the canonical star lies between 10.8 km and 13.2 km [52]; and 11.3 km to 13.5 km [53]; whilst results coming from the NICER x-ray telescope points out that R 1.4 lies between 11.52 km and 13.85 km from ref. [54] and between 11.96 km and 14.26 km from ref. [55]. State-of-the-art theoretical results at low and high baryon density point to an upper limit of R 1.4 < 13.6 km [56]. Finally, PREX2 results [57] indicate that the radius of the canonical star lies between 13.25 km < R 1.4 < 14.26 km. I use the intersection between the two NICER results [54,55]: 11.96 km < R 1.4 < 13.85 km as a constraint for the canonical star.
In relation to the tidal parameter, an upper limit of 860 was found in ref. [53]. A close limit, Λ 1.4 < 800 was pointed out in ref. [58]. In ref. [52], an upper limit of 686 was deduced from Bayesian analysis. On the other hand, two mutually exclusive constraints are presented in ref. [59], which proposed a limit between 70 < Λ 1.4 < 580, and the PREX2 inferred values, whose limit lies between 642 < Λ 1.4 < 955 [57]. As hyperons are not present at a 1.4 M ⊙ star, we always have R 1 .4 = 12.82 km and Λ 1.4 = 516 for the eL3ωρ, and R 1.4 = 13.68 km and Λ 1.4 = 696 for the GM1. Other results are presented in Tab. II.
As can be seen, for massive neutron stars the influence of the g_{Σ⁰Λρ} coupling is very limited. The g_{Σ⁰Λρ} coupling causes a small increase of the maximum mass, as well as an increase of the radius for a fixed mass value. All these increments are of only about 0.5%. This may sound a little disappointing, but we must remember that no one could know how strong the influence of the g_{Σ⁰Λρ} would be until someone calculated its value. The effect of the g_{Σ⁰Λρ} coupling is more evident when we consider matter consisting of only neutrons and Λ's. In ref. [60,61] the authors study a liquid-gas-like phase transition within neutron-Λ matter. The neutron-Λ matter was also used to study spinodal instability in ref. [62]. Moreover, the existence of a neutral bound state consisting of only neutrons and Λ's was investigated in ref. [63,64]. Here I follow ref. [62] and use μ_n = μ_Λ. The EoS and the square of the speed of sound v_s² = ∂p/∂ε are displayed in Fig. 3.
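Given any tabulated EoS (ε_i, p_i), for instance the neutron-Λ EoS of Fig. 3, the squared speed of sound can be estimated by finite differences. The sketch below is my own illustration with a made-up analytic table, not the eL3ωρ or GM1 results.

```python
import numpy as np

def speed_of_sound_sq(eps, p):
    # Central finite difference of p(eps) on a tabulated equation of state.
    return np.gradient(p, eps)

# Illustrative stand-in table: a quadratic p(eps) region, in MeV/fm^3.
eps = np.linspace(100.0, 1000.0, 200)
p = 2.0e-4 * eps**2
cs2 = speed_of_sound_sq(eps, p)
print(cs2[:3], cs2[-3:])      # grows with density, stays below 1 (causal) here
```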
As can be seen, the presence of the g Σ 0 Λρ stiffens the EoS, as well as increases the speed of sound at high densities and pushes away the onset of the Λ. For the eL3ωρ the Λ threshold is pushed from 0.3634 fm −3 to 0.4164 fm −3 , while within the GM1 parametrization the onset is pushed from 0.2819 fm −3 to 0.3586 fm −3 . For the GM1 the increase of the density threshold is higher than 25%, while for the eL3ωρ it is around 15%.
IV. CONCLUSIONS
In this work, I investigate the use of the symmetry groups and the SU(3) Clebsch-Gordan coefficients to fix the coupling constants of the baryon octet with the vector mesons in order to keep the Yukawa Lagrangian as a singlet. The main results of the present work are summarized below:
• I found that the current set of coupling constants for the SU(3) symmetry group does not satisfy the relations of completeness and closure.
• There are two forgotten Yukawa interactions related to the exchange of the neutral ρ meson between the Σ 0 and the Λ hyperon. When these interactions are taken into account the relations of completeness and closure are restored.
• I then calculate the g_Σ0Λρ coupling constant within the SU(3) and SU(6) symmetry groups. In SU(6) we have g_Σ0Λρ = 0, and Sakurai's theory of the strong interaction is restored [4]. However, if α_V ≠ 1, then g_Σ0Λρ ≠ 0 and these forgotten interactions must be taken into account for the completeness of the theory. These results are fully model-independent.
In order to study the effects of the g Σ 0 Λρ couplings, I add these crossed Yukawa couplings to the QHD model.
• I show that these crossed terms enter as off-diagonal terms in the Hamiltonian. As a consequence, the couplings to the Λ and to the Σ0 carry opposite signs, despite having the same Clebsch-Gordan coefficients (a toy numerical illustration is given after this list).
• I then obtain numerical results within two different parametrizations: the eL3ωρ [38] and the GM1 [45]. I show that the g_Σ0Λρ coupling suppresses the Λ onset while favoring the Ξ− one. In the case of the GM1, this is enough to make the Ξ− the first hyperon to appear. For massive neutron stars, the g_Σ0Λρ coupling causes only a very small increase of the maximum masses and of the radii at fixed mass (around 0.5%).
• Finally, I study hadronic matter composed only of neutrons and Λ's. I show that the g_Σ0Λρ coupling stiffens the EoS and pushes the hyperon threshold to higher densities. It also affects the speed of sound.
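As promised above, a toy numerical illustration of the off-diagonal mixing: diagonalizing a 2x2 block with equal off-diagonal entries pushes the two diagonal entries in opposite directions. The masses and the size of the off-diagonal term below are made-up numbers, not outputs of the QHD model.

import numpy as np

m_lambda, m_sigma0 = 1116.0, 1193.0    # MeV, illustrative diagonal entries
delta = 15.0                           # MeV, illustrative off-diagonal crossed term

h = np.array([[m_lambda, delta],
              [delta, m_sigma0]])
print(np.linalg.eigvalsh(h))
# the lower eigenvalue is pushed below m_lambda and the upper one above
# m_sigma0: the same off-diagonal entry acts with opposite signs on the pair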
TABLE I. Complete set of baryon-vector meson coupling constants for different values of α_V within the SU(3) symmetry group. These results are fully model-independent.
FIG. 1. (Color online) Particle population for the eL3ωρ and for the GM1. Results with (without) * indicate the presence (absence) of the g_Σ0Λρ coupling.
FIG. 3. (Color online) The EoS and v_s^2 for neutron-Λ matter. The dotted (solid) lines indicate the presence (absence) of the g_Σ0Λρ coupling.
FIG. 2. (Color online) Above: neutron-star mass-radius relation (M/M⊙ versus R in km) for the eL3ωρ and the GM1 models. The solid (dotted) lines indicate the presence (absence) of the g_Σ0Λρ coupling. The orange hatched area is the mass-radius uncertainty of the PSR J0740+6620 pulsar [24], and the blue hatched area is the intersection of the two NICER estimates for the 1.4 M⊙ star [54,55]. Below: zoom-in for M ≥ 2.0 M⊙.
TABLE II. Some neutron-star properties. Results with (without) * indicate the presence (absence) of the g_Σ0Λρ coupling.
Model      Mmax/M⊙   Hyperon onset (fm−3)   R2.0 (km)
eL3ωρ      2.202     Λ at 0.4114            12.379
eL3ωρ*     2.206     Λ at 0.4416            12.420
GM1        2.223     Λ at 0.3264            13.092
GM1*       2.208     Ξ− at 0.3655           13.193
[1] H. Yukawa, Prog. Theor. Phys. Suppl. 1, 1 (1955).
[2] E. E. Salpeter and H. A. Bethe, Phys. Rev. 84, 1232 (1951).
[3] J. Schwinger, Ann. Phys. 2, 407 (1957).
[4] J. Sakurai, Ann. Phys. 11, 1 (1960).
[5] M. Gell-Mann, Phys. Rev. 125, 1067 (1962).
[6] R. E. Behrends, J. Dreitlein, C. Fronsdal, and W. Lee, Rev. Mod. Phys. 34, 1 (1962).
[7] J. J. de Swart, Rev. Mod. Phys. 35, 916 (1963).
[8] C. Dover and A. Gal, Prog. Part. Nucl. Phys. 12, 171 (1984).
[9] A. Pais, Phys. Rev. Lett. 13, 175 (1964).
[10] A. Pais, Rev. Mod. Phys. 38, 215 (1966).
[11] M. Nagels, T. Rijken, J. de Swart, et al., Nucl. Phys. B 147, 189 (1979).
[12] J. Walecka, Ann. Phys. 83, 491 (1974).
[13] B. D. Serot, Rep. Prog. Phys. 55, 1855 (1992).
[14] J. Ellis, J. I. Kapusta, and K. A. Olive, Nucl. Phys. B 348, 345 (1991).
[15] J. Schaffner, C. Dover, A. Gal, C. Greiner, D. Millener, and H. Stocker, Ann. Phys. 235, 35 (1994).
[16] J. Schaffner and I. N. Mishustin, Phys. Rev. C 53, 1416 (1996).
[17] S. Pal, M. Hanauske, I. Zakout, H. Stöcker, and W. Greiner, Phys. Rev. C 60, 015802 (1999).
[18] J. Schaffner-Bielich and A. Gal, Phys. Rev. C 62, 034311 (2000).
[19] S. Weissenborn, D. Chatterjee, and J. Schaffner-Bielich, Nucl. Phys. A 881, 62 (2012).
[20] L. Tolos, M. Centelles, and A. Ramos, PASA 34, E065 (2017).
[21] L. Lopes and D. Menezes, Eur. Phys. J. A 56, 122 (2020).
[22] J. Antoniadis, P. C. C. Freire, N. Wex, et al., Science 340, 1233232 (2013).
[23] M. Miller et al., Astrophys. J. Lett. 918, L28 (2021).
[24] T. Riley et al., Astrophys. J. Lett. 918, L27 (2021).
[25] S. Weissenborn, D. Chatterjee, and J. Schaffner-Bielich, Phys. Rev. C 85, 065802 (2012).
[26] S. Weissenborn, D. Chatterjee, and J. Schaffner-Bielich, Phys. Rev. C 90, 019904 (2014).
[27] T. Miyatsu, M.-K. Cheoun, and K. Saito, Phys. Rev. C 88, 015802 (2013).
[28] M. E. Gusakov, P. Haensel, and E. M. Kantor, Mon. Not. Roy. Astron. Soc. 439, 318 (2014).
[29] M. Oertel, F. Gulminelli, C. Providencia, and A. Raduta, Eur. Phys. J. A 52, 50 (2016).
[30] J. Li, W. Long, and A. Sedrakian, Eur. Phys. J. A 54, 133 (2018).
[31] L. Lopes and D. Menezes, Nucl. Phys. A 1009, 122171 (2021).
[32] I. Rather, U. Rahaman, V. Dexheimer, et al., Astrophys. J. 917, 46 (2021).
[33] L. L. Lopes, C. Biesdorf, and D. P. Menezes, Mon. Not. R. Astron. Soc. 512, 5110 (2022).
[34] L. L. Lopes, Commun. Theor. Phys. 74, 015302 (2022).
[35] B. Hong, Z. Ren, and X.-L. Mu, Chin. Phys. C 46, 065104 (2022).
[36] M. Pelicer and D. Menezes, Eur. Phys. J. A 58, 177 (2022).
[37] H. R. Fu, J. J. Li, A. Sedrakian, and F. Weber, Phys. Lett. B 834, 137470 (2022).
[38] L. Lopes, K. Marquez, and D. P. Menezes, Phys. Rev. D 107, 036011 (2023).
[39] P. McNamee, S. J., and F. Chilton, Rev. Mod. Phys. 36, 1005 (1964).
[40] J. J. Sakurai, Modern Quantum Mechanics (Addison Wesley Longman, 1994).
[41] J. Providencia and C. Fiolhais, Nucl. Phys. A 435, 190 (1985).
[42] N. K. Glendenning, Compact Stars, 2nd ed. (Springer-Verlag, New York, 2000).
[43] P. B. Siegel and W. Weise, Phys. Rev. C 38, 2221 (1988).
[44] V. Koch, Phys. Lett. B 337, 7 (1994).
[45] N. K. Glendenning and S. A. Moszkowski, Phys. Rev. Lett. 67, 2414 (1991).
[46] J. Schaffner-Bielich and A. Gal, Phys. Rev. C 62, 034311 (2000).
[47] T. Inoue, JPS Conf. Proc. 26, 023018 (2019).
[48] F. Fattoyev et al., Phys. Rev. C 82, 055803 (2010).
[49] J. R. Oppenheimer and G. M. Volkoff, Phys. Rev. 55, 374 (1939).
[50] G. Baym, C. Pethick, and P. Sutherland, Astrophys. J. 170, 299 (1971).
[51] G. Baym, H. A. Bethe, and C. J. Pethick, Nucl. Phys. A 175, 225 (1971).
[52] Y. Li et al., Eur. Phys. J. A 57, 31 (2021).
[53] M. Coughlin et al., Mon. Not. Roy. Astron. Soc. Lett. 489, L91 (2019).
[54] T. E. Riley et al., Astrophys. J. Lett. 887, L21 (2019).
[55] M. Miller et al., Astrophys. J. Lett. 887, L24 (2019).
[56] E. Annala et al., Phys. Rev. Lett. 120, 172703 (2018).
[57] B. Reed et al., Phys. Rev. Lett. 126, 172503 (2020).
[58] B. Abbott et al., Phys. Rev. Lett. 119, 161101 (2017).
[59] B. Abbott et al., Phys. Rev. Lett. 121, 161101 (2018).
[60] F. Gulminelli, A. R. Raduta, and M. Oertel, Phys. Rev. C 86, 025805 (2012).
[61] J. R. Torres, F. Gulminelli, and D. P. Menezes, Phys. Rev. C 93, 024306 (2016).
[62] J. R. Torres, F. Gulminelli, and D. P. Menezes, Phys. Rev. C 95, 025201 (2017).
[63] C. Rappold, E. Kim, T. R. Saito, et al. (HypHI Collaboration), Phys. Rev. C 88, 041001 (2013).
[64] H. Garcilazo, A. Valcarce, and J. Vijande, Chin. Phys. C 41, 074102 (2017).
A Proof That Measured Data and Equations of Quantum Mechanics Can Be Linked Only by Guesswork
John M. Myers and F. Hadi Madjid
arXiv:quant-ph/0003144v1, 30 Mar 2000. Contemporary Mathematics.
1991 Mathematics Subject Classification. Primary 81P68, 68Q05; Secondary 81P15, 68Q85.
Abstract. The design and operation of a quantum-mechanical device as a laboratory instrument puts models written in equations of quantum mechanics in contact with instruments. In designing a quantum-mechanical device of high precision, such as a quantum computer, a scientist faces choices of models and of instruments, and the scientist must choose which model to link to which arrangement of instruments. This contact is recordable in files of a Classical Digital Process-control Computer (CPC) used both to calculate with the equations and to manage the instruments. By noticing that equations and instruments make contact in a CPC, we rewrite equations of quantum mechanics to explicitly include functions of CPC-commands to the instruments. This sets up a proof that a scientist's choice in linking mathematical models to instruments is unresolvable without guesswork to narrow the set of models from which one is to be chosen.
The proof presents the challenge of pursuing its implications. Scientists in any investigative endeavor inherit choices from the past and frame choices for the future, choices open to guesswork and visible in CPC files. To picture the framing of choices and relations among them, we adapt colored Petri nets. Constraining the events of the nets to produce output colors defined by definite functions of input colors excludes guesswork from the firing of net events, and by contrast highlights guesses entering a net fragment as colored tokens placed by a scientist or by instruments on input conditions. The availability of these net fragments makes choice and guesswork part and parcel of physics.
Net fragments as a means of expressing guess-demanding choices are applied to portray guesswork needed in testing and calibrating a quantum computer. The sample size required to test a quantum gate in a quantum computer is shown to grow as the inverse square of the error allowed in implementing the gate.
1. Introduction
This paper stems from earlier work [1] and a proof presented here showing that inquiry in quantum physics continually presents a scientist with choices of equations and of instruments, unresolvable by calculation and measurement. Something else is demanded of the scientist, which may as well be called a guess. 1 Challenged by the proof to look at its implications, we noticed that people in investigative endeavors inherit and frame choices open to guesswork, some of which show up clearly in the computers used in the endeavors.
Section 2 introduces the Classical Process-control Computer (CPC) with its special capacity to manipulate abstractions expressed as equations without contaminating them with its own physics. 2 A scientist can use a CPC not only to calculate with equations, but also to mediate the command of laboratory instruments via digital/analog (D/A) converters and to record experimental results returned from the instruments via analog/digital (A/D) converters. By noticing that both equations and instruments make contact in a CPC, we rewrite equations of quantum mechanics to explicitly include functions of CPCcommands to the instruments. This sets up the proof that the scientist's choice in linking equations to instruments is unresolvable without guesswork to narrow the set of models. A lattice of sets of models is defined, two widely used guesses that narrow the set of models are noted, and the concept of statistical distance between probability distributions is applied to quantum-mechanical models.
Section 3 provides language for displaying and analyzing guessdemanding choices visible in CPC files. To this end Turing machines are introduced and adapted to formalize the definition of a CPC. This allows fragments of colored Petri-nets, opened to exogenous influences, to portray the programming and running of programs in a network of CPC's operated by collaborating scientists. Many of these programs incorporate guesses. This general picture of process-control computation shows programs and other guesses as colors on tokens that a scientist enters on a Petri net that acts as a game board. Mechanisms for one scientist to judge programs (and hence guesses) made by another are sketched, leading to the first of many needs for concurrently operating CPC's. Section 4 describes some examples in which guessing, quantum mechanics and CPC structures must interact in the building of a quantum computer as a laboratory instrument specified by equations of quantum mechanics. We show the need for guesses to link equations and instruments brings with it a need to test the quantum computing instruments and to calibrate them, guided by test results and guesses. Quantum mechanics imposes a peculiar structure on this testing, related to the statistical distance between models. For the measure of precision conventionally used in quantum computing, the sample size needed for testing a quantum gate is shown to increase as the inverse square of the tolerated imprecision. While many questions are left open to future work, the example demonstrates a frame for analysis and experiment broader than any quantum model alone, a frame that includes the testing of the mathematical models by results of the use of instruments, and so distinguishes what the model says from what the instruments do, allowing provision for guesswork as an ingredient in advancing both models and instruments.
2. Quantum-mechanical models and their links to instruments
Proving the necessity of guesses demands language to describe the linking of numbers in mathematical models to numbers pertaining to laboratory instruments, starting with mathematical language to describe a scientist's choosing one arrangement of laboratory instruments rather than another. We shall describe a situation in which a scientist chooses instruments by using a CPC keyboard to type strings of characters, much as Gödel, in mathematical logic, described equations as strings of characters. The scientist at the CPC keyboard writes and executes programs to command the operation of laboratory instruments and record their results. These programs make use of quantum-mechanical equations, which the scientist also writes into the CPC.
Quantum mechanics as a mathematical language expresses different measuring instruments by different operators, and thus has built into it a recognition that phenomena to be described cannot be independent of the instruments used to study them [2]. Still, this dependence is emphasized more some times than others. Some modeling merely assumes that instruments can be found, without saying how, to implement various combinations of state vectors and operators. Such models appear in theories of quantum computing to relate the multiplication of unitary operators to the solving of problems of interest. To see the need for another kind of model, suppose a scientist has computer-controlled instruments (such as lasers) with the potential to implement a quantum computer, and faces the question of what commands the CPC should transmit to the instruments and when it should transmit them in order implement one or another quantum gate. Determining the commands and their timing to implement a quantum gate expressed as a unitary matrix U j takes a model that expresses the gates as unitary transformations in terms of commands that a process-control computer can transmit to the instruments. Curiously, models of this kind have not been much stressed in physics, and it is a merit of efforts to build quantum computers to make the importance of such models apparent.
2.1. Models and instruments make contact in a CPC. Part of a scientist's control of instruments can work through the use of a process-control computer that transmits commands to the (computercontrolled) instruments and records results produced by them. We confine our analysis to this part, excluding from consideration here (but by no means denigrating) hand work beyond the reach of a processcontrol computer. We shall portray cases in which a scientist chooses arrangements of instruments, chooses models, and puts the two in contact, linking models to instruments, during a CPC session starting after the instruments have been set up and put under control of a CPC and ending before the scientist has to tinker with the instruments in ways unreachable by the CPC. Within the CPC, laboratory instruments and mathematical models make contact when: 1. a model resident in a CPC file is used to derive commands for the CPC to transmit to the instruments; 2. instrumental results collected by a CPC are used to narrow down a set of models. (We shall later see feedback as an example of this.) Such contact does not spring from nothing, but is brought about by design and depends on choices made by a scientist, including choices of what set of models to start with, what model to choose for use by a CPC in generating commands, and what experiments to run. To picture the design and operation of contact between models and instruments, imagine eavesdropping on CPC's used in various investigations. Commands sent to the instruments by the CPC and the results received from them, both numerical, are amenable to analysis, as is the scientist's writing of equations, programs, calls for program execution, etc.; we also eavesdrop on displays produced by the CPC for the scientist.
Although the CPC puts instruments in contact with equations involving quantum superposition, the CPC itself is a classical machine, free of quantum superposition, for it needs no quantum behavior within itself, neither to manipulate equations of quantum mechanics nor to manage laboratory instruments. For example, the writing of an expression |0 + |1 for a superposition of quantum states makes use of written characters that themselves exhibit no superposition. And any command to instruments is likewise a character string, including a command to rotate a polarizing filter by 45 degrees to implement the superposed state |0 + |1 . Similarly, results of the use of instruments interpreted as demonstrating superposition arrive as bit strings, themselves devoid of superposition.
The CPC is situated between a scientist to its left and laboratory instruments to its right, as shown in figure 1. Working at the CPC, a scientist is limited in action at any moment to the resolution of the choice presented by the CPC at that moment, a choice defined by the files stored in its memory and the state of its processor, and exemplified by a menu displayed by the CPC. Our analysis of the CPC cannot reach beyond its buffers: neither to its left into the scientist, nor to its right where, invisible to eavesdropping, reside digital-to-analog (D/A) and analog-to-digital (A/D) converters and beyond them the laboratory instruments.
2.2. Quantum-mechanical models that recognize commands sent to instruments. For equations of quantum mechanics to model effects of a scientist's choices in arranging instruments, these choices must show up in the equations. To see how this can work, recall that quantum mechanics parses the functioning of instruments into state preparation, transformation, and measurement, three coordinated activities that generate outcomes, supposed visible in experimental results by some means unspecified. The three activities are described, respectively, by a state (as a unit vector representing a ray in a Hilbert space), a unitary operator, and a hermitian operator. The only way to make the scientist's choices in arranging instruments show up in quantum-mechanical equations is to make the state vector |v , the transformation operator U, or the measurement operator M, or some combination of them, depend on how these choices are resolved.
A simple and yet, so far as we know, original way to analyze a scientist's choice of arrangements of instruments is to suppose that during a CPC-mediated session the instruments are controlled by CPC-transmitted commands from a set B of possible commands, where B is a subset of the set of all finite binary strings. We formulate a core set of quantum-mechanical models that express the probability of an outcome of instruments in response to a command b ∈ B sent to the instruments by the CPC, as follows. Let V, U, and M denote sets of functions on B whose values are, respectively, unit vectors |v(b)⟩ of a separable Hilbert space H, unitary operators U(b) on H, and hermitian operators M(b) on H; a model is then a triple (|v⟩, U, M)_B with |v⟩ ∈ V, U ∈ U, and M ∈ M. The core models exhibit discrete spectra for all M ∈ M:
Property 1. (∀b ∈ B) M(b) = Σ_j m_j(b) M_j(b),    (2.1)
where m_j : B → R (with R denoting the real numbers) is the j-th eigenvalue of M, and M_j is the projection onto the j-th eigenspace (so M_j M_k = δ_{j,k} M_j).
Let Pr(j|b) denote the probability of obtaining the j-th outcome, given transmission by the CPC of a command b. Although not commonly seen in texts, this probability of an outcome given a command is the hinge pin for focusing on quantum mechanical modeling of uses of instruments. Quantum mechanics constrains the models to satisfy:
Property 2. Pr(j|b) = ⟨v(b)| U†(b) M_j(b) U(b) |v(b)⟩,    (2.2)
where the † denotes the hermitian adjoint. More models are available in more general formulations. When we show that guesswork is necessary even to resolve choices among models of the core set, it follows that guesswork is necessary also to resolve choices among a larger set of models involving positive-operator-valued measures, superoperators, etc.
Two models (|v⟩, U, M)_B and (|v′⟩, U′, M′)_B are called unitarily equivalent if there is a family of unitary operators Q(b) on H such that (∀b ∈ B) |v′(b)⟩ = Q(b)|v(b)⟩, U′(b) = Q(b)U(b)Q†(b), and M′(b) = Q(b)M(b)Q†(b).
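A minimal numerical sketch of Property 2 and of unitary equivalence, for a toy qubit model with a two-command set; the particular vectors, unitaries, and projections are illustrative assumptions, not part of the formalism.

import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]], dtype=complex)

B = ["b0", "b1"]
v = {"b0": np.array([1.0, 0.0], dtype=complex),
     "b1": np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)}
U = {"b0": rot(0.3), "b1": rot(1.1)}
# projections M_0, M_1 onto the computational basis (taken independent of b here)
M = [np.diag([1.0, 0.0]).astype(complex), np.diag([0.0, 1.0]).astype(complex)]

def prob(j, b):
    """Pr(j|b) = <v(b)| U(b)^dagger M_j U(b) |v(b)>."""
    w = U[b] @ v[b]
    return float(np.real(np.vdot(w, M[j] @ w)))

for b in B:
    print(b, [round(prob(j, b), 4) for j in (0, 1)])

# unitary equivalence: conjugating v, U, M by a unitary Q leaves Pr(j|b) unchanged
Q = rot(0.7)
v2 = {b: Q @ v[b] for b in B}
U2 = {b: Q @ U[b] @ Q.conj().T for b in B}
M2 = [Q @ Mj @ Q.conj().T for Mj in M]
w = U2["b0"] @ v2["b0"]
assert abs(np.real(np.vdot(w, M2[0] @ w)) - prob(0, "b0")) < 1e-12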
2.3. From results to quantum-mechanical outcomes. Before stating and proving the proposition that calculations and measurements cannot by themselves link models to outcomes obtained from instruments, we call to the reader's attention that outcomes themselves, in the sense of quantum mechanics, are produced by instruments only with the help of interpretive guesswork. Claim 1. To speak of actual instruments in the language of quantum mechanics one needs to associate results of the use of the instruments, recorded in a CPC, with outcomes in the sense of quantum mechanics or with averages of outcomes.
Experimental results of the use of instruments become quantummechanical outcomes only by a scientist's act of interpreting the results as outcomes. The interpretation involves judgment and guesswork, not only to sidestep imperfections in the instruments, but as a matter of principle, even for the limiting case of instruments supposed free of imperfections. For example, light detectors used in experiments described by models of quantum optics generate experimental results; typically, each of L detectors reports to the CPC at each of a succession of K time intervals a detection result, consisting of 0 (for no detection) or 1 (for detection), so a record contains LK bits. Depending on judgments made about correlations from time interval to time interval and detector to detector, these LK bits may constitute LK quantum outcomes, or one quantum outcome, or some number in between. The number of outcomes in LK bits is determined neither by the experimental results (which in this case are just these bits) nor by general principles of quantum mechanics; yet the parsing of results into outcomes must occur, at least provisionally, before any comparison between equations and measured outcomes can be made. Henceforth, when we speak of outcomes, we presuppose that this piece of guesswork has been accomplished and a decision made to define the parsing of results into outcomes.
2.4. Calculation and measurement by themselves cannot link quantum models to recorded outcomes. Could it be that the general properties 1 and 2 suffice to determine a model (up to unitary equivalence) if only one collects enough measured results interpreted as outcomes? The answer is: "no; unless some special properties restrict the model more tightly than the form established by properties 1 and 2 alone, one can always find many unitarily inequivalent models (|v , U, M) B , all of which produce probabilities that match perfectly the relative frequencies of outcomes."
To prove this we define some things to pose the issue more sharply. Let B denote the set of commands used to generate some set of outcomes interpreted from measured results. 3 For any b ∈ B, let N(b) be the number of times that an outcome has been entered in the record for a run of the experiment for command b, and let J(b) be the number of distinct outcomes for command b. For j = 1, . . . , J(b), let λ j (b) be the j-th distinct outcome obtained for command b, and let n(j, b) be the number of times this j-th distinct outcome λ j is recorded in response to command b. For all j > J(b) let µ j (b) be arbitrary real numbers, and for all j ≥ 1 let φ(j, b) be arbitrary real numbers.
Proposition 2.1. Given any set of recorded outcomes associated with any set B of commands, the set of models satisfying properties 1 and 2 contains many unitarily inequivalent models (|v , U, M) B , each of which has a perfect fit with the set of outcomes, in the sense that
(∀b)(∀1 ≤ j ≤ J(b)) Pr(j|b) = n(j, b)/N(b). (2.3)
Proof. It is instructive to start with the special case in which for some b ∈ B, there exist two or more distinct values of j for which n(j) > 0. For this case let the set {|j } be an orthonormal basis of a separable Hilbert space. Define a subset S of models satisfying properties 1 and 2 as all models of the form (|v , U, M) B , where
|v(b)⟩ := Σ_{j=1}^{J(b)} [n(j, b)/N(b)]^{1/2} exp(iφ(j, b)) |j⟩,    (2.4)
U(b) := 1,    (2.5)
M(b) := Σ_{j=1}^{J(b)} λ_j(b) |j⟩⟨j| + Σ_{j=J(b)+1}^{∞} µ_j(b) |j⟩⟨j|,    (2.6)
with µ j and φ arbitrary real-valued functions. By invoking property 2, one checks that any such model has the claimed perfect fit; yet the set contains many unitarily inequivalent models, which predict conflicting statistics for some possible quantum measurement. 4 This proves the special case.
For the general case, modify the definitions above to
|v(b)⟩ := Σ_{j=1}^{J(b)} [n(j, b)/N(b)]^{1/2} |w_j⟩,    (2.7)
U(b) := 1,    (2.8)
M(b) := Σ_{j=1}^{J(b)} λ_j(b) P_j + Σ_{j=J(b)+1}^{∞} µ_j(b) P_j,    (2.9)
where P j P k = δ j,k P j , for all j the projection P j has dimension greater than 1, and |w j ranges over all unit vectors of the eigenspace defined by P j |w j = |w j . In particular, for any j, dim(P j ) can be as large as one pleases. Then even if there is only one outcome that is ever recorded, there are still many unitarily inequivalent models that perfectly fit the data.
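The construction in the proof is easy to check numerically. The following sketch builds the model of Eqs. (2.4)-(2.6) from illustrative counts n(j, b) and verifies the perfect fit of Eq. (2.3) for several arbitrary phase choices; the counts and phases are assumptions made only for the example.

import numpy as np

counts = {"b0": [7, 3], "b1": [2, 2, 6]}     # recorded n(j, b) for J(b) outcomes

def model_state(n, phases):
    """Amplitudes sqrt(n_j/N) * exp(i*phi_j) in the span of |1>,...,|J(b)>."""
    N = sum(n)
    return np.array([np.sqrt(nj / N) * np.exp(1j * ph)
                     for nj, ph in zip(n, phases)], dtype=complex)

for b, n in counts.items():
    J, N = len(n), sum(n)
    for phases in (np.zeros(J), np.random.uniform(0, 2 * np.pi, J)):
        v = model_state(n, phases)
        # with M_j = |j><j| and U = 1, Pr(j|b) = |<j|v(b)>|^2
        probs = np.abs(v) ** 2
        assert np.allclose(probs, np.array(n) / N)
# different phase choices give unitarily inequivalent models, yet every one of
# them reproduces the recorded relative frequencies exactly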
Proposition 2.1 implies that the density matrix, often supposed to be determined from measured data [3], is undetermined without assuming special properties shortly to be discussed; this follows by expressing the density matrix as |v v| and noticing that the phases of the off-diagonal elements are undetermined. We leave to the future the demonstration of additional ambiguity in the link between any set of recorded outcomes and models expressed in the mathematical language of quantum mechanics.
2.5. Statistically significant differences between models. In practice, a scientist has little interest in a model chosen so that its probabilities exactly fit measured relative frequencies. Rather, the scientist wants a simpler model with some appealing structure that comes reasonably close to fitting. Quantum mechanics encourages this predilection, because on account of statistical variation in the sample mean, functions that perfectly fit outcomes on hand at one time are not apt to fit perfectly outcomes acquired subsequently. We show here that accepting statistics in no way takes away from the proof that measurements and equations by themselves cannot link models to instruments.
One needs a criterion for the statistical significance of a difference between two quantum-mechanical models (or between a model and measured relative frequencies). Here we limit our attention to models α and β which have a set B of commands in common and for which the spectra of M_α and M_β are the same. For a single command b, the question is whether the difference between the probability distributions Pr_α(·|b) and Pr_β(·|b) is bigger than typical fluctuations expected in N(b) trials. An answer is that two distributions are indistinguishable statistically in N(b) trials unless
N(b)^{1/2} d(Pr_α(·|b), Pr_β(·|b)) > 1,    (2.10)
where d is the statistical distance defined by Wootters in Eq. (10) of [4]. Furthermore, Wootters's Eq. (12) shows, for two models α and β that differ only in the function |v⟩,
d(Pr_α(·|b), Pr_β(·|b)) ≤ cos^{-1} |⟨v_α(b)|v_β(b)⟩|.    (2.11)
To judge the significance of the difference between two models with respect to a set B of commands common to them, a scientist who chooses some weighting of different commands can define a weighted average of d(Pr α (·|b), Pr β (·|b)) over all b ∈ B. The same holds if model β is replaced by relative frequencies of outcomes interpreted from measured results.
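A small sketch of the criterion (2.10), assuming the Bhattacharyya-angle form d(p, q) = cos^{-1} Σ_j (p_j q_j)^{1/2} for the statistical distance between two discrete distributions; the two example distributions are arbitrary.

import numpy as np

def stat_distance(p, q):
    """Angle between the square-root embeddings of two probability vectors."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return np.arccos(np.clip(np.sum(np.sqrt(p * q)), -1.0, 1.0))

p = [0.50, 0.50]
q = [0.55, 0.45]
d = stat_distance(p, q)
# distributions are statistically indistinguishable in N trials unless
# sqrt(N) * d > 1, so roughly N > 1 / d**2 trials are needed to tell them apart
print(d, 1.0 / d**2)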
It is noteworthy that the set of models statistically indistinguishable from a given model can be much larger than would be the case if the "≤" of (2.11) were an equality, as follows.
Proposition 2.2. For any set of outcomes, two models α and β of the form (|v⟩, 1, M)_B can perfectly fit the relative frequencies of the outcomes (Proposition 2.1) and yet be mutually orthogonal in the sense that ⟨v_α|v_β⟩ = 0.
Proof. For any set of measured outcomes, there exists a perfectly fitting model α of the form in the proof of Proposition 2.1 for the general case, for which, for all j and b, the eigenspace containing |w_j(b)⟩ has dimension greater than 1, and a corresponding perfectly fitting model β such that (∀j, b) ⟨w_{α,j}(b)|w_{β,j}(b)⟩ = 0. For these two models, ⟨v_α(b)|v_β(b)⟩ = Σ_j [n(j, b)/N(b)] ⟨w_{α,j}(b)|w_{β,j}(b)⟩ = 0.
Wootters extended the definition of statistical distance to unit vectors. While for any two unit vectors there exist measurement operators that maximize the statistical distance between them, for any such operator there exist other vectors, mutually orthogonal, that have zero statistical distance relative to this operator. For this reason, among others, statistics still leaves the scientist needing something beyond calculation and measurement to determine a model, for the set of models closer than ε in weighted statistical distance to certain measured results certainly includes all the models that exactly fit the data and, without special restrictions dependent on guesses, this set includes models that are mutually orthogonal. Models close to given measured data are not necessarily close to each other in the predictions they make.
2.6. Lattices of models. Properties 1 and 2 set up a big set of models (|v , U, M) B , B ⊆ B, |v ∈ V, U ∈ U, M ∈ M. Subsets of models of this set are a lattice under set intersection and union. Each command set B establishes a smaller lattice of sets of models, and these lattices will play a part in the testing and calibrating of quantum computers, discussed in section 4, where a scientist encountering problems with a model chooses a set of possible alternatives, and then tries to narrow it. Often this narrowing is seen as choosing values of parameters within a form of model in order to obtain a best fit, say with a criterion of minimizing statistical distance between frequencies of outcomes interpreted from measured results and probabilities calculated from the model. One is free to think of the estimating of parameters in the language of a lattice of models as the using of measured results to select a model from a set of models.
From Proposition 2.1 that showed that the whole set of models defined by properties 1 and 2 is too big to permit measured results to select a model, we have:
Proposition 2.3. For measured data to uniquely decide, to within unitary equivalence, which quantum-mechanical model of a set of models best fits experimental results interpreted as outcomes by a criterion of least statistical distance (or any other plausible criterion), the set of models must first be sufficiently narrowed, and this narrowing is underivable from the results and the basic properties 1 and 2 of quantum mechanics.
Something beyond measured results and calculations from equations is required to narrow a set of models so that measured results can select a model that is "best" by some criterion. Such an act of choosing undefined by calculation and results of observation is what we have called a guess.
2.7. Hidden guesswork in conventional quantum mechanical models. The proof casts in a clear light maneuvers conventionally made to narrow down the set of models. Sometimes a community of physicists is in mutual agreement about guesses deemed appropriate, and this agreement obscures from notice the fact that a guess is invoked. As an example of a widely invoked guess, most modeling in quantum physics supposes that the scientist can vary b so as to vary U(b) while holding v(b) and M(b) constant. Indeed, most models used in quantum physics are restricted to the subset of models having the special Property 3. The command b is the concatenation of separate commands for the three types of operations, so that
b = b_v b_U b_M,    (2.12)
where the juxtaposition denotes concatenation of the corresponding commands.
According to these models, one can vary any one of the three while holding the other two fixed. This specializes (2.1) to the more restrictive form:
Pr(j|b) = ⟨v(b_v)| U†(b_U) M_j(b_M) U(b_U) |v(b_v)⟩.    (2.13)
An additional constraining guess characterizes models widely used in the analysis of quantum computers, a guess prompted by the desire to generate a unitary transformation as a product of other unitary transformations that serve as "elementary quantum gates." For example, one may want to generate the unitary transformation U(b U,1 ) U(b U,2 ). To generate it one causes the CPC to transmit some b U . For quantum computing to have an advantage over classical computing, the determination of this b U in terms of b U,1 and b U,2 must be of polynomial complexity [5]. It is usually assumed that b U is the simplest possible function of b U,1 and b U,2 , as follows.
Let B_U ⊂ B be a set of instrument-controlling commands, thought of as strings that can be concatenated. Suppose the function U has the form U(b_1 b_2) = U(b_2) U(b_1) for all b_1 b_2 ∈ B_U (note the reversal of order). Then we say the function U respects concatenation.
Property 4. Quantum computation employs a subset of models in which U respects concatenation.
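A toy sketch of what it means for U to respect concatenation: commands are strings of tokens, each token is mapped to a rotation, and the induced U satisfies U(b_1 b_2) = U(b_2) U(b_1). The token-to-angle table is an illustrative assumption.

import numpy as np

ANGLES = {"x": 0.2, "y": 0.5, "z": 1.3}      # one token per elementary gate

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def U(b):
    """Apply the gates in written order, so later tokens act after earlier ones."""
    out = np.eye(2)
    for token in b:
        out = rot(ANGLES[token]) @ out
    return out

b1, b2 = "xy", "zz"
assert np.allclose(U(b1 + b2), U(b2) @ U(b1))   # note the reversal of order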
Remark 2.1. We present properties 1 through 4 not as properties of laboratory instruments, but as properties that a scientist can choose to demand of models. Whether the instruments act that way is another question. There are reasons, relaxation and other forms of decoherence among them, to expect limits to the precision with which instruments can behave in accord with properties 3 and 4. All four properties are used often enough to be conventions, in the sense that a convention is a guess endorsed by a community.
3. Petri nets to show choices open to guesswork
In orchestrating contact between mathematical models and laboratory instruments, scientists set up chains of cause and effect, expressed in computer programs with their "if-then" structure, not as static propositions but as designs for action. Such designs are implemented in experiments; an example is a feedback loop that adjusts the orientation of a filter according to a rule that tells what adjustment to make in immediate response to a result recorded by a light detector. On a more relaxed time scale, physicists make other connections by analyzing outcomes of one generation of experiment, using the equations of a model, to set up design instruments for a next generation. As remarked above, contact between equations and instruments depends on choices made by scientists, including choices of what set of models to start with, what model to choose for use by a CPC in generating commands, and what experiments to run. If these choices could be resolved by some combination of calculation and measurement, one could argue that they are irrelevant to physics. But the propositions of the preceding section show this is not the case, so the design and operation of contact between equations and instruments, with its ineradicable dependence on guesswork, cries out for attention as part and parcel of physics.
Although widespread in practice, the design of contact between equations and instruments is in its infancy as a topic for theoretical attention. A beginning can be seen in Benioff's analysis of sequences of measurements (described quantum mechanically) in which subsequent measurements are functions of outcomes of preceding measurements [6]. Called decision procedures, these involve classical feedback control equations to control instruments described quantum mechanically, in some cases with proved advantages [7]. These efforts dealt with measurements occurring at a single location. Designs that put equations and instruments in contact over a network of cooperating investigators are wide open for future attention.
Logic in experiments, in feedback loops at many time scales, is logic in action. This is the logic of models that relate instrument commands to quantum vectors and operators. Here we adapt Petri nets to provide mathematical language by which to express and analyze designs for contact between equations and instruments, designs that include sequencing of effects, decision rules, and interactions among sequences of effects that scientists implement in their instruments. The nets will highlight choices resolvable only by resort to guesswork; they serve as a language with which one can express formally how guessing works in physics, case by case, within CPC-mediated investigations.
3.1. Requirements on CPC's. In order to adapt Petri nets to showing guess-demanding choices visible in CPC's, we start by clarifying how a CPC differs from a Turing machine, on the way to adapting the Turing machine to process control and to use in a network of collaborating scientists. This lays the groundwork for introducing Petri nets.
3.1.1. Timing in the execution of commands. The first thing that makes process-control computing special is timing. In the context of quantum-mechanical models, each unitary transformation maps states possible in one situation to states possible in another situation; for quantum computing this means mapping states possible at an earlier time to states possible at a later time. Thus a unitary transformation is implemented not all at once, but over a time duration. In practice, that duration depends on how the instruments implement the transformation. A written command b U acts as a musical score. Like sight reading at a piano, executing a program containing the command b U requires converting the character string b U -the score-into precisely timed actions-the music. The piano keys, in this analogy, include the output buffers that control the amplitude, phase, frequency, and polarization of lasers of an ion-trap quantum computer or of radio-frequency transmitters for a nuclear-magnetic-resonance (NMR) quantum computer.
For this reason executing a command b U requires parsing it into pieces (signals) and implementing each signal at a time, the specification of which is contained in the string b U . Either the CPC that executes a program in which b U is written parses the command into signals and transmits each signal at its appointed time, or the instruments receiving the command b U , unparsed, contain programmable counters operating in conjunction with a clock that do this timed parsing. Such programmable counters themselves constitute a special-purpose CPC. So either the scientist's CPC must execute commands by issuing an appropriately timed sequence of signals, or some other CPC attached to the instruments must do this. Either way, the capacity to execute programmed motion in step with a clock is a requirement for a CPC, distinct from and in addition to requirements to act as a Turing machine.
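A minimal sketch of the score-to-music step: a command b_U written as "tick:signal" pairs is parsed and then dispatched in clock order, either by the scientist's CPC or by a counter attached to the instruments. The command format and signal names are assumptions made for the example.

def parse_command(b_u):
    """Turn 'tick:signal' pairs into a list of (tick, signal), sorted by tick."""
    events = []
    for piece in b_u.split(";"):
        t, sig = piece.split(":")
        events.append((int(t), sig))
    return sorted(events)

def execute(b_u, transmit):
    # one output-buffer write per appointed clock tick
    for tick, sig in parse_command(b_u):
        transmit(tick, sig)

b_u = "0:laser_on;40:phase+90;95:laser_off"
execute(b_u, lambda t, s: print(f"tick {t:3d}: {s}"))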
3.1.2. Firewalls in a network of computers. Just as axioms set up branches of mathematics, guesses set up rules for the conduct of experiments and the interpretation of their results, rules often embedded in CPC's. Collaborating scientists accept guesses from each other, at least provisionally, use these in experimenting and modeling; they evaluate some of them, sometimes refining or replacing them. This poses a problem for CPC-mediated inquiry, where guesses engender computer programs, for a scientist's guess can reprogram a CPC, often for better but sometimes, by malice or accident, for worse. Scientists in a collaboration need to test each other's programs and to limit the influence of any program, making the scope of influence of a CPC program a matter for negotiation among the collaborators.
An easy but narrow case is that of a computer running Gödel's test for validity of a claimed derivation [8]. To think about such testing, one models the computer by a Turing machine designed to start from a tape on which the claimed derivation is written and to halt leaving a "yes" or "no" on the tape, according to whether the claim is or is not valid. Such a Turing machine can be emulated by a universal Turing machine executing a testing program to check a passive (non-executed) file containing the claimed derivation.
Not just derivations, but also programs need to be tested with respect to what they do when they are executed. But what is to keep an executing program under test from infecting the program that tests it? Hardware walls of some kind are needed. By limiting our analysis to exclude remote login and insisting on computers that distinguish physically one interface from another, we can see a basic structure for testing programs and for limiting the reach of guesses of any one scientist in a network of CPC's, based on operating two or more CPC's concurrently with controlled interfaces between them, so the testing program and the program under test execute on separate CPC's, with an interface controlled by the testing CPC. By virtue of concurrent operation of CPC's with controlled interfaces, guesses made by collaborators can set up programs that frame choices open to guessing by any one scientist, and that test the performance of the scientist's programs within that frame of choice, allowing freedom to a scientist to program one part of the investigation while insulating other parts. Hardware walls that limit the reach of one person's guesses at any moment are one many motivations for stressing a network of concurrently operation CPC's.
3.2. Turing machines and Petri nets.
Here we provide language for displaying and analyzing guess-demanding choices visible in files of CPC's used by collaborating scientists who on occasion reprogram those choices. As a model of a CPC, we assume that each CPC of a network is a Turing Machine adapted for Process-control (TMP), to be defined. Making sense of networks of TMP's handling equations and controlling instruments calls for a descriptive capacity that allows for various viewpoints at various levels of detail. We introduce a specialized use of fragments of colored Petri-nets, opened to exogenous influences, to portray the programming and running of programs in a network of TMP's operated by collaborating scientists. 5 Different viewpoints and levels of detail are accommodated by morphisms in the category of nets. Isomorphisms between Petri nets trade net detail for color detail [9]. These will be combined with coarsening maps that suppress detail, for example by mapping colored tokens to black tokens. We will show how the programming of a universal TMP (UTMP) portrayable as a single Petri net can produce any number of patterns of use of instruments and equations, portrayable by a host of different Petri nets. This general picture of process-control computation will show programs and other guesses as colors on tokens that a scientist enters on a game board defined by a fragment of a Petri net, and equations of quantum mechanics written as guesses by a scientist will be seen as colors on tokens that take part in directing and interpreting the use of laboratory instruments.
3.2.1. Writing vs. executing a program. Computers rest on the writing of motionless characters on a page to describe something moving, a puzzle solved in music by writing notes on staves, to be read in step with a swinging pendulum that chops time into moments, so that written notes that portray a still picture for each moment direct the motion of the playing of a musical instruments [10]. The logical machinery of a computer moves in response to triggering signals, "tick" and "tock", synchronized to distinct phases of the swinging of a pendulum. Computer designers employ truth tables, each of which specifies the response of a clocked circuit at a tock to a stimulus present as an input at a preceding tick. A row of a truth-table can be drawn as a transition in a Finite State Machine (FSM). By coupling an FSM to a memory of unlimited capacity, one arrives at the theoretical concept of a Turing machine, various special cases of which perform various special tasks [11,12,13]. And here is the crux of programming: because a state machine is describable by still writing-a table-a Turing machine can be designed to be universal. By coding into its memory the table that describes any given special Turing machine, one causes the universal Turing machine to emulate the given special Turing machine. So, apart from speed and memory requirements, the single universal Turing machine can be put to doing any of the things that any of the special Turing machines can do, making it potentially convenient, once adapted to process control, to designing and implementing contact between equations and instruments. (But demands for quick response require in some cases devices streamlined to a special task better modeled by a special Turing machine than by a universal one.) The next tasks are to adapt the Turing machines, special and universal, to process control, and after that to express them formally by use of colored Petri net fragments.
3.2.2. Turing machine for process control (TMP).
To adapt a Turing machine as a model of a process-control computer, we leave the coupling of the FSM to the memory unchanged but add input and output buffers to the FSM. As for the FSM, at whatever level of detail of description one chooses, the control structure of a program (with its "if-then" statements) can be viewed as an FSM consisting of (classical) states drawn as circles, connected by directed arcs, with each arc labeled by an input I that selects it and by an output O [12]; a fragment of such a picture is shown in figure 2(a). An FSM serves as a game board on which a single token can be placed to mark the "current state." Heading toward the hooking together of FSM's to make a Petri net, we suppose that each arc in the FSM is punctuated by a tick event and a tock event, drawn as small boxes, enlarging the FSM into a special case of a condition-event Petri net fragment, as shown in figure 2(b). Once colors are introduced, states shown as dashed circles pointing into an event of the FSM from outside will become the means to express the entrance of guesses. These states are assumed to receive tokens put into them by scientists and instruments undescribed by events of the net. Similarly, dashed states pointed to by arcs from an event are assumed to have tokens taken from them by agents undescribed by events of the net. Figure 2(c) streamlines the picture to the form we shall use, in which more or less vertical arcs are understood to point downward, the dashed states are left undrawn, as are all states with one input and one output event. To emphasize the input and output arcs with their extra tokens, we often call this an FSM fragment to distinguish it from the FSM form of figure 2(a). To define a Turing machine for Process-control (TMP), we adapt the FSM of a Turing machine to have for each of its states a cartesian product of states of a set of clocked internal registers and, in addition, input buffers and output buffers, which allow input/output transactions with a scientist, with laboratory instruments, and with other TMP's.
[Figure 2(a)-(c): a fragment of an FSM with inputs I_1, I_2 and outputs O_1, O_2; (a) the FSM form, (b) the condition-event net form with tick and tock events, (c) the streamlined form used in the text.]
3.2.3. Colored tokens. By replacing the black tokens of an FSM fragment by colored tokens and adjoining to each event a function that defines colors on output tokens in terms of colors on input tokens, any FSM fragment can be mapped one-to-one to the drastic form of figure 3 (an FSM with detail pushed to coloring), in which color changes substitute for most of the moves of black tokens on a bigger net. A "fork in the road" for black tokens turns into a choice between red and green, so to speak, so the descriptive burden is taken up by the functions f_tick and f_tock: f_tick defines the color of a token placed on an internal state depending on a list of colors, one for each input, while f_tock defines a list of output colors depending on the color of the token on the internal state. The vertical arc is to be read as directed downward, and the big circles at the top and bottom of a path signify that the path is wrapped around a cylinder, so the top is a continuation of the bottom, i.e. a loop. An FSM fragment in which the token carries a color will be called a colored FSM.
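A minimal sketch of the colored-FSM reading of an event: f_tick maps the tuple of input colors to the color of the internal-state token, f_tock maps that color to the output colors, and the event is enabled only when the input colors lie in the domain of f_tick (the firing rule assumed in the text). The color sets and functions below are illustrative.

def make_event(f_tick, f_tock):
    def fire(input_colors):
        if input_colors not in f_tick:      # firing rule: colors outside the domain block firing
            raise ValueError("event not enabled for these colors")
        internal = f_tick[input_colors]     # tick: consume inputs, color the internal state
        return f_tock[internal]             # tock: produce the tuple of output colors
    return fire

f_tick = {("red", "0"): "A", ("green", "0"): "B"}
f_tock = {"A": ("ack", "left"), "B": ("ack", "right")}
step = make_event(f_tick, f_tock)

print(step(("red", "0")))      # -> ('ack', 'left')
print(step(("green", "0")))    # -> ('ack', 'right')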
3.2.4. Other mappings. Less drastic mappings are also possible. Any two states of a single FSM can be merged without breaking any arcs by augmenting the color rules in the events that feed them and the events fed by them. If a set of states connected to one another by events is mapped into a single state, the single state then connects to an event that loops back to it; this results in a place-transition Petri net, but not a condition-event net. We restrict the mappings dealt with here to ones that avoid pasting tick and tock events together, thereby avoiding self loops. Two events of an FSM that link the same pair of states can be merged by distinguishing external inputs and outputs by color instead of by place.
The mappings discussed so far are net isomorphisms: they map markings of one net bijectively to markings of the other and preserve the one-step reachability of one marking from another (by the firing of an event). Inverses of these bijections take more richly to less richly colored nets. Going in this direction depends on each state of a colored FSM having a set of possible colors associated with it [9]; then any colored transition corresponds one-to-one to a set of transitions obtained by partitioning sets of colors of input states, as illustrated in figure 4 for a two-in, two-out transition with color sets A, B, C, and D, each partitioned into "+" and "−" subsets. For this to make sense, it must be that an event which has tokens in all its inputs cannot fire unless the colors of the tokens comprise an element of the domain of its color function; we assume this firing rule. One gets a coarser description by use of a surjective map that is not an isomorphism by dropping the color distinction and dropping the color functions from the transitions; this coarsening, however, preserves a one-to-one correspondence between the number of firings in one net and the number in another. All these maps are continuous in the net topology [14], and, as emphasized by Petri [15], nets form a category in which the morphisms are continuous maps, an idea that extends to nets with colored tokens [9].
3.2.5. Disciplined coarsening of time. Some other kinds of continuous coarsening maps bundle up multiple event firings into a single firing, as when one describes e.g. "running a program" as a single event. This brings us to the first of several areas open to future work, for, more than other computing, process control benefits from well-defined timing, and in particular from machine and software design that allows systematic, well-controlled mappings that take a certain number of firings in an FSM to a single firing, so that one can think at a coarser level while still maintaining discipline in timing.
A striking example of the need to design programs that run in the same time for all inputs from some set I occurs in quantum computing. For example, suppose that U is the universal unitary operator defined by Deutsch to operate on basis states of the form |s; n; m⟩, where s is the location of the scanned square, n is the state of the FSM-processor (n = 0 is the starting state and n = 1 the halt state) and m is the tape [16]. For this to work in a computation that takes advantage of quantum superposition, one needs ∃r [(∀x ∈ I) U^r |0; 0; x, 0⟩ = |0; 1; x, f(x)⟩]; however, this is by no means implied for a program π f for which (as is usual in borrowed classical programs) one can assure only (∀x ∈ I)(∃r(x)) [U^{r(x)} |0; 0; x, 0⟩ = |0; 1; x, f(x)⟩] [17]. An interesting topic for future study is the complexity of converting various classes of programs with variable running time to programs running in a time independent of the input for some set of inputs.

3.2.6. Cartoon of UTMP. Ignoring the laboratory instruments for the moment, by connecting input- and output-signals from a suitable FSM to a scientist and coupling the FSM to an unlimited memory, one gets a Universal Turing Machine (UTM) that provides for continual communication with a scientist, as shown in figure 5(a), in which boxes connected by a horizontal line are read as a single event. We cartoon the UTM in the condensed form of figure 5(b). By adding input- and output-signals from the FSM to laboratory instruments and to other FSM's, one gets a Universal Turing Machine adapted for Process control (UTMP), as shown in figure 5(c); again almost all of the burden of description is in the color functions, here called T 1 and T 2 (for Turing), that define a finite state machine that operates a UTMP. We assume that at some level of description, the ticks and tocks of the UTMP slice time into moments not only for the UTMP but also for the scientist at a keyboard and the instruments on the laboratory bench; we assume that input tokens from the scientist and from the instruments arrive at the UTMP synchronized with the UTMP pendulum. If the scientist enters nothing at a given clock tick, then the token taken by the UTMP from the input buffer for the scientist carries the color "empty," and if the instruments enter nothing, the token from the input buffer for the instruments carries the color "empty"; similarly the UTMP marks output tokens with the color "empty" if it writes nothing else on them.
3.2.7. A scientist controls a UTMP. To see the structure imposed on physics by the UTMP, one must think as if the UTMP were delivered to a scientist in a bare condition: no installed software, 6 the FSM in a starting state, and the memory all blank. We assume that the function T 1 operating on empty input tokens, the starting state of the FSM, and a blank memory produces empty output tokens and makes no change in the FSM state or the memory or the memory location scanned. Finally, we invoke the universality of a UTM to assume that the functions T 1 and T 2 are fixed (by a manufacturer, so to speak) independent of whatever laboratory instruments need to be considered and independent of all action by the scientist. These assumptions imply Proposition 3.1. Whatever a UTMP does besides staying in its starting state and taking in and putting out empty tokens is in response to input tokens.
We invoke this proposition to view the scientist as precluded from defending questionable management of equations or instruments by saying "the computer did it." If a CPC does something, it executes a program; we view the scientist as responsible for any program entered (as a colored token) into the UTMP and for running the program on any particular occasion. 7

3.2.8. Reprogramming always an option. We assume the UTMP is isomorphic to the net shown in figure 6, so that the scientist has a recurring choice of letting the UTMP run as programmed or of interrupting it to reprogram it. By programming a UTMP, a scientist can simulate an arbitrary special Turing machine. At will, the scientist can interrupt a program in execution to change to a program that simulates a different special Turing machine, corresponding to a different FSM and a different net. One can glimpse this in figure 4, where it is apparent that if the colors are limited to the sets A + and B + , then six of the eight events are precluded from firing, and the net is in effect reduced to the fragment defined by the selected colors. In this way the part of the net that actually fires, corresponding to the event "Use existing program" of figure 6, is variable in how it acts and in the net by which one portrays it in more detail, according to the scientist's actions in providing and running programs.

3.2.9. Plug and play. To see how UTMP's can be connected (as well as the detail of how the FSM of a TM or TMP is connected to the memory), we introduce a signal that is phased just opposite to an FSM: the signal takes an input at a tock event and issues an output at a tick event. Then FSM A can send a signal (which can convey a message as a token color) to B (which can be either another FSM or a memory), as shown in figure 7, provided the signal path is short enough compared to the clock rate of the machines. This use of a signal synchronizes A with B. For two-way communication, one adds a signal going the other way. If communication over a distance long compared to the clock period is called for, then a chain of communication over intermediating UTMP's is necessary, with the result that more firings of an event of A are required before a consequence of one firing can propagate to B and return as a property of a color on a token at a later firing of the A-event. The use of colored tokens sets up an area for future investigation of replacing the awkward definition of synchronic distance [18] with a measure of synchronization that counts firings in circuits of color effects, without having to add artificial elements to a net.

3.3. Net fragments formalized. Portraying logic operating in CPC's calls for fragments of Petri nets, not complete nets, to allow for guesses as token colors definable neither by results of experiments nor by calculations. From among the standard definitions of a Petri net, the one we use is (S, E, F), where S is a set of states, E is a set of events, and F ⊆ S × E ∪ E × S is the flow relation. In order to make room for guesses from a scientist and results of instruments inexpressible in the logic defined by a net but essential to setting it up, the nets used are all net fragments, which we define as follows. A net fragment is a structure (S, S I , S O , E, F) where S is a set of states of CPC's, and S I is a set of states of input signals (e.g. from A/D converters to a CPC input buffer), disjoint from S, allowing for input to the CPC from a scientist and laboratory instruments. S O is a set of states of output signals disjoint from both S and S I , allowing for output from the CPC; the flow relation is expanded so
F ⊆ [(S ∪ S I ) × E] ∪ [E × (S ∪ S O )].
States of S I are assumed to have tokens placed in them by some means beyond the net, and states of S O are assumed to have tokens removed from them by means beyond the net. Our pictures show stubs of arcs from states of S I to events and from events to states of S O while omitting the circles for these states. Associated with a net fragment is a "reduced net" obtained by omitting the states of S I and S O (and dropping the arc stubs); using this reduced net, one can explore issues of liveness and safety [19]. The events of E express computer logic and nothing else. As an example of a guess used in designing contact between equations and instruments, a mathematical model entered by a scientist as a colored token in an S I state can assert whatever rules the scientist chooses to relate tokens received from instruments in S I states to commands sent to them as colored tokens in S O states. In this way the net fragment expresses the difference between such a model, with its guesswork, as a color on a token and how the instruments actually behave by producing colored tokens on their own.
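The definition above translates directly into a small data structure. The sketch below is our own illustration with hypothetical names, not part of the original; it represents a net fragment (S, S I , S O , E, F) and enforces the firing rule assumed in section 3.2.4: an event with tokens on all of its input states fires only if the tuple of their colors lies in the domain of its color function.

# A net fragment: events read colored tokens from input states (including
# S_I states filled by the scientist or instruments) and write colored
# tokens to output states (including S_O states emptied by outside agents).

marking = {"s_in": "guess-A", "s1": "ready"}       # state -> token color (absent = no token)

event = {
    "inputs": ("s_in", "s1"),                      # states read (may include S_I states)
    "outputs": ("s2", "s_out"),                    # states written (may include S_O states)
    # color function: defined only on part of the input-color space
    "color_fn": {("guess-A", "ready"): ("busy", "command-A")},
}

def enabled(event, marking):
    if not all(s in marking for s in event["inputs"]):
        return False                               # some input state lacks a token
    colors = tuple(marking[s] for s in event["inputs"])
    return colors in event["color_fn"]             # firing rule on colors

def fire(event, marking):
    assert enabled(event, marking)
    colors = tuple(marking.pop(s) for s in event["inputs"])
    for state, color in zip(event["outputs"], event["color_fn"][colors]):
        marking[state] = color
    return marking

print(fire(event, dict(marking)))                  # {'s2': 'busy', 's_out': 'command-A'}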
Net-based portrait of guesswork needed to test and calibrate a quantum computer
In section 2 choices of equations to link to instruments were shown inescapably open to guesswork, bidding to make guesswork part and parcel of physics. The availability of net fragments described in section 3 brings within physics the study of contacts between equations and instruments by making available to analysis relations of sequence, concurrency and choice expressed in these contacts and in the guess-dependent actions that set the contacts up. Here we turn from nets themselves to an example problem in which a net illustrates an important structure needed to link equations to instruments. Besides the net explicitly shown in figure 8, the availability of nets provides a framework in which to view the main topic of this section, the problem of resolving a choice of commands by which a CPC manages a quantum computer. That framework can be used in the future to ask other questions, to do with: how do the necessities of quantum-mechanical models, classical process control, and guesswork interact; how are FSM's as program structures affected by use of models that are quantum mechanical; how does the need for CPC's to mediate between quantum-mechanical equations and instruments change our understanding of quantum mechanics?
Turning to the case at hand, some telling illustrations of guesswork needed to link models to instruments arise in quantum computing. To build a quantum computer, say to solve problems of factorizing [20] and searching [21], a scientist must choose quantum-mechanical equations and laboratory instruments to work in harmony. Quantum computational models call for quantum gates that are unitary transformations, each a tensor product of an operator on a 1-bit or 2-bit subspace of the Hilbert space H and identity operators for the other factors of the tensor product. Note that each permutation of a non-identity factor with an identity factor is a distinct gate, calling for a distinct command to the instruments that implement it. For this reason, the number of quantum gates for an n-bit quantum computer grows faster than n. Call this number G(n) and let the set of gates be U 1 , . . . , U G . The most commonly used models of quantum computers can be put in the form [22]:
• Prepare a starting state independent of the input (e.g. the integer to be factorized).
• Transform the state by a product of quantum gates that depends on the input.
• Make a measurement independent of the input.
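To make the gate count G(n) introduced above concrete: each placement of a non-identity factor among the identity factors is a distinct gate and hence a distinct command. The NumPy sketch below is our own illustration (not from the original text) of embedding a single-qubit gate into an n-qubit Hilbert space by tensoring with identities, one embedding per position.

import numpy as np

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # a 1-qubit Hadamard gate
I2 = np.eye(2)

def embed(gate, position, n):
    # Tensor `gate` acting on qubit `position` with identities elsewhere.
    factors = [gate if k == position else I2 for k in range(n)]
    full = factors[0]
    for f in factors[1:]:
        full = np.kron(full, f)
    return full                                     # a (2**n x 2**n) unitary

n = 3
gates = [embed(H1, k, n) for k in range(n)]         # n distinct 1-qubit placements
print(gates[0].shape)                               # (8, 8)
print(np.allclose(gates[0] @ gates[0].conj().T, np.eye(2**n)))   # unitary: True

# Counting 2-qubit placements as well (for a single choice of base gates),
# the number of distinct gates already grows faster than n: n one-qubit
# slots plus n*(n-1) ordered two-qubit slots.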
For an example, suppose the scientist assumes properties of models 1 through 4 and looks for the model that gives the least mean-square deviation between relative frequencies of outcomes and probabilities calculated by (2.13). To factorize an integer I, a classical computer program is converted to a product of K(I) quantum gates, a number that rises faster than linearly with log I. To obtain the effect of multiplying the gate transformations, the scientist must first have solved the model to determine the command b U,j for each gate U j occurring in the product. As in the portrait in section 3 of putting tokens into a net fragment, the scientist programs a CPC to transmit a command b v to prepare an initial state |v⟩, commands b U,j for the gates needed, and a command b M for a measurement. This endeavor is known to exhibit the following four features:
1. The instruments are valuable as a quantum computer insofar as their results substitute for a more costly classical calculation defined by the model.
2. An inexpensive classical computation (e.g. with the CPC) tests whether outcomes interpreted from results correctly solve the problem.
3. Quantum indeterminacy imposes a positive probability that a result fails to provide a correct answer, so multiple tries with the instruments are the rule, and a wrong answer does not by itself imply a fault in the instruments.
4. The tolerable imprecision of instruments implementing the chosen model of a quantum gate diminishes as the inverse of the number of gates K(I) in the sequence [23].
Because the number of gates required in the product rises with the size of the integer to be factorized, feature 4 implies that passing the test for smaller integers is no guarantee against failure of the instruments to factorize larger integers, unless the model or the instruments or both are refined. This requires, in turn, that a CPC intended for use on progressively larger integers be organized to switch between a mode of using the quantum computer and a mode of inquiring into its performance, e.g. so as to determine commands that make it behave more precisely in accord with the desired quantum gates. This calls for a program for the CPC that expands the events "Use existing program" of figure 6 to that of figure 8.
4.1. Navigating the lattice of models to get better commands. As an example of what goes on within the coarsely portrayed event "Calibrate," suppose a scientist who uses a model α of the form (|v⟩, U, M) B finds it works for small integers, but fails for bigger ones, which require more precise gates, which in turn requires calibrating (i.e. adjusting) the commands used to generate gates. This means giving up model α and choosing some alternative model β. A scientist does not choose a model all at once, but starts with some set of models and then narrows down on a smaller set, sometimes to a single model, a process open to guesswork at various stages. At one stage, the scientist may need to relax a constraint on models, leading to a bigger set of models from which to choose; at another stage the scientist may guess a new constraint, narrowing the set of models under consideration. By such a back and forth procedure, the scientist gives up U α and arrives at a new function U β (and hence a new model), with the hope that solving this function for commands b U,j,β for the gates U j , j = 1, 2, . . ., will yield commands that succeed in factorizing larger integers than did the commands obtained from U α. (This creates a need for models adapted to homing in on results, with some metric on B, so that a small change in the command b U results in a small change in e.g. U(b U ); while properties 3 and 4 are a start, going beyond them is left to the future.)
To get a better model, the scientist guesses a set of models and hopes to find within it a model that better fits measured results interpreted as outcomes. If no model of the set adequately fits these outcomes, the scientist can first broaden the set of models and next try to guess a property that will narrow the set, not to the original model, but to one that fits better. The recognition of guesswork assures us that so long as progressively more ambitious goals of precision keep being introduced, there is no end to the need for adjusting both models and the laboratory instruments.
4.2. Sample sizes needed to choose between models of gates. As discussed in section 2.5, the number of trials needed to statistically distinguish one model from another is bounded from below by the inverse square of a weighted statistical distance between the two models. Small numbers of experimental results can sometimes decide between distant models, but never between models that are close. In particular, distinguishing experimentally between two models for quantum gates can demand large samples:

Proposition 4.1. Models α and β that differ only in U, with spectral norm ‖U α (b U ) − U β (b U )‖ = ǫ > 0, are statistically indistinguishable for a command b unless N(b) ≥ ǫ −2 .

Proof. The models α and β under the stated condition are unitarily equivalent to a pair of models that differ only in |v⟩ with cos |⟨v α |v β ⟩| ≤ ǫ. The proposition then follows from (2.10) and (2.11).
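To give a feel for the scale involved, and to connect the bound to feature 4 above, the following back-of-the-envelope Python illustration is ours, not part of the original argument:

# Proposition 4.1: two gate models differing by epsilon in spectral norm need
# at least epsilon**-2 trials to be told apart.
for eps in (0.1, 0.01, 0.001):
    print(f"epsilon = {eps}: at least {round(eps**-2):,} trials")

# Feature 4 says the tolerable gate imprecision shrinks roughly as 1/K(I), so
# checking a gate at that precision needs on the order of K(I)**2 trials.
for K in (10, 100, 1000):
    print(f"K = {K}: order {K**2:,} trials per gate command")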
We argue elsewhere that this is a serious and heretofore unappreciated challenge to bringing instruments into working order as quantum computers, made visible by attention to the need for guesswork in linking of laboratory instruments to equations of quantum mechanics [24].
Concluding remarks
Gödel proved that no one true structure could be generated by sitting in a room with blinds drawn, writing down axioms. Quantum mechanics tells us that with the blinds up and the world of physical measurement available, the situation remains much the same. Just as the openings for new axioms are uncloseable in mathematical logic, so in physics guesswork is part of the foundation.
The net formalism can be put to use both to address improving the contacts between equations and instruments, fostering advances in theory and in instrumentation, and, at a more abstract level, to pose problems pertaining to universal Turing machines adapted to process control. By formalizing commands to instruments, the techniques presented here extend the reach of set-based mathematics into the area of contact between equations and instruments, and open to study within physics some of what physicists do in the course of doing physics. This extends a parallel beachhead established already in mathematics by Gödel's study of what a mathematician does to prove a theorem and Turing's analysis of a mathematician who makes a note by which to resume an interrupted computation.
Acknowledgment
We acknowledge Amr Fahmy for showing us our debt to Gödel's proof of incompleteness in mathematical logic. We are indebted to Steffen Glaser, Raimund Marx, and Wolfgang Bermel for introducing us to the subtleties of laboratory work aimed at nuclear-magnetic-resonance quantum computers [25]. To David Mumford we owe our introduction to quantum computing from the standpoint of pure mathematics. We are greatly indebted to Anatol W. Holt and to C. A. Petri for conversations years ago, in which each pointed in his own way to the still mysterious expressive potential of nets.
Figure 1. Computer mediating contact between scientist and instruments.

Let V B , U B , and M B be the sets of all functions |v⟩, U, and M, respectively, with |v⟩ : B → H, U : B → {unitary operators on H}, and M : B → {hermitian operators on H}. (Within this modeling scheme, the Schrödinger equation relates a model at a later time to a model at an earlier time by a certain transformation operator U, dependent on the situation.) Any choice from the sets B, V B , U B , and M B produces some quantum-mechanical model (|v⟩, U, M) B . Two models (|v⟩, U, M) B and (|v′⟩, U′, M′) B generate the same probabilities Pr(j|b) if they are unitarily equivalent, meaning there exists a Q : B → {unitary operators on H} relating them. For this reason, any model (|v⟩, U, M) B can be reduced to (|v′⟩, 1, M) B , where |v′⟩ = U|v⟩ and M′ = M or, alternatively, to (|v⟩, 1, M′) B where M′ = U†MU.

Figure 2. Fragment of FSM.

Figure 4. From color detail to net detail.

Figure 5. From FSM to UTMP.

Figure 6. Alternative modes controlled by scientist.

Figure 7. Signal from A to B.

Figure 8. Alternating between running and testing a QC.

Gordon McKay Laboratory, Harvard University, Cambridge, MA 02138, USA
E-mail address: [email protected]
82 Powers Road, Concord, MA 01742, USA

Other words are hypothesis, Ansatz, assumption, axiom, postulate, and sometimes principle.
2 This capacity stems from regenerative amplifiers and clock-gated memory registers, two inventions used to make all computer hardware insensitive to manufacturing variations, so that, like the placing of a chess piece not quite in the center of a square, deviations in performance, within limits, do not matter. Its independence from its own physics distinguishes a classical computer from a quantum computer.
Practically speaking, B must be a finite set, but the proof holds also for B denumerably infinite.
This happens e.g. for primed and unprimed models if for any b, n(1, b) = 0 = n(2, b) and φ(1, b) − φ(2, b) = φ′(1, b) − φ′(2, b).
Our use of Petri nets is impressionistic and a more technical presentation will doubtless be rewarded by exposing issues here overlooked.
The scientist can borrow software and install it, but is responsible for it.
7 This rules out taking for granted the operating system, instrument-managing programs, a simulator, and whatever other programs come pre-installed in a commercially available CPC.
References

[1] F. H. Madjid and John M. Myers, Formal distinction between quantum states and outcomes of their measurement, Meas. Sci. Technol. 8 (1997), 465-472.
[2] P. A. M. Dirac, The principles of quantum mechanics, 4th ed., Oxford University Press, Oxford, 1958.
[3] E. G. Beltrametti and G. Cassinelli, The Logic of Quantum Mechanics, Addison-Wesley, Reading, MA, 1981.
[4] W. K. Wootters, Statistical distance and Hilbert space, Phys. Rev. D 23 (1981), 357-362.
[5] C. H. Papadimitriou, Computational complexity, Addison-Wesley, Reading, MA, 1994.
[6] P. A. Benioff, Decision procedures in quantum mechanics, J. Math. Phys. 13 (1972), 908-915.
[7] F. H. Madjid and J. M. Myers, Linkages between the calculable and the incalculable in quantum theory, Annals of Physics 221 (1993), 258-305.
[8] K. Gödel, Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I, Monatshefte für Mathematik und Physik 38 (1931), 173-198.
[9] K. Jensen, Coloured Petri nets: Basic concepts, analysis methods and practical use, Monographs in Theoretical Computer Science, an EATCS Series, Springer-Verlag, Berlin, Vol. 1, 2nd ed., 1996; Vol. 2, 1995.
[10] A. W. Crosby, The measure of reality: Quantification and western society, 1250-1600, Cambridge University Press, Cambridge, 1997.
[11] A. M. Turing, On computable numbers, with an application to the Entscheidungsproblem, Proc. London Math. Soc. 42 (1937), 230-265.
[12] R. P. Feynman, Feynman lectures on computation, Addison-Wesley, Reading, MA, 1996.
[13] G. S. Boolos and R. C. Jeffrey, Computability and logic, 3rd ed., Cambridge University Press, Cambridge, 1989.
[14] H. J. Genrich, K. Lautenbach, and P. S. Thiagarajan, Elements of general net theory, in W. Brauer, ed., Net theory and applications, Lecture Notes in Computer Science 254, Springer-Verlag, Berlin (1987), 338-358.
[15] C. A. Petri, General net theory, in B. Shaw, ed., Computing System Design, Proceedings of the Joint IBM University of Newcastle upon Tyne Seminar, University of Newcastle upon Tyne, 7-10 September 1976.
[16] D. Deutsch, Quantum theory, the Church-Turing principle and the universal quantum computer, Proc. R. Soc. Lond. A 400 (1985), 97-117.
[17] J. M. Myers, Can a universal quantum computer be fully quantum?, Phys. Rev. Letters 78 (1997), 1823-1824.
[18] U. Goltz, Synchronic distance, in W. Brauer, W. Reisig, and G. Rozenburg, eds., Petri nets: central models and their properties, Lecture Notes in Computer Science 254, Springer-Verlag, Berlin (1987), 338-358.
[19] G. Berthelot, Transformations and decompositions of nets, in W. Brauer, W. Reisig, and G. Rozenburg, eds., Petri nets: central models and their properties, Lecture Notes in Computer Science 84, Springer-Verlag, Berlin (1980), 359-376.
[20] P. Shor, Algorithms for quantum computation: Discrete logarithms and factoring, Proc. of the 35th annual symposium on foundations of computer science, IEEE Computer Society, Los Alamitos, CA, 1994, pp. 124-134.
[21] L. K. Grover, A fast quantum mechanical algorithm for database search, Proc. 28th annual ACM symposium on the theory of computing, ACM Press, New York, NY, 1996, pp. 212-219.
[22] D. Deutsch, Quantum computational networks, Proc. R. Soc. Lond. A 425 (1989), 73-90.
[23] E. Bernstein and U. Vazirani, Quantum complexity theory, SIAM Journal on Computing 26 (1997), 1411-1473.
[24] J. M. Myers and F. H. Madjid, Contact between laboratory instruments and equations of quantum mechanics, to be published in E. Donker and A. R. Pirich, eds., Quantum Computing, Proc. SPIE 4047, 2000.
[25] R. Marx, A. F. Fahmy, J. M. Myers, W. Bermel, and S. J. Glaser, Approaching five-bit NMR quantum computing, Phys. Rev. A, accepted for publication, 2000.
| [] |
[
"The Physical Thickness of Stellar Disks to z ∼ 2",
"The Physical Thickness of Stellar Disks to z ∼ 2"
] | [
"Kathleen A Hamilton-Campos \nWilliam H. Miller III Department of Physics and Astronomy\nJohns Hopkins University\n3400 N. Charles Street Baltimore21218MDUSA\n\nSpace Telescope Science Institute\n3700 San Martin Drive Baltimore21218MDUSA\n",
"Raymond C Simons \nSpace Telescope Science Institute\n3700 San Martin Drive Baltimore21218MDUSA\n\nUniversity of Connecticut\n196 Auditorium Road Storrs06269CTUSA\n",
"Molly S Peeples \nWilliam H. Miller III Department of Physics and Astronomy\nJohns Hopkins University\n3400 N. Charles Street Baltimore21218MDUSA\n\nSpace Telescope Science Institute\n3700 San Martin Drive Baltimore21218MDUSA\n",
"Gregory F Snyder \nSpace Telescope Science Institute\n3700 San Martin Drive Baltimore21218MDUSA\n",
"Timothy M Heckman \nWilliam H. Miller III Department of Physics and Astronomy\nJohns Hopkins University\n3400 N. Charles Street Baltimore21218MDUSA\n"
] | [
"William H. Miller III Department of Physics and Astronomy\nJohns Hopkins University\n3400 N. Charles Street Baltimore21218MDUSA",
"Space Telescope Science Institute\n3700 San Martin Drive Baltimore21218MDUSA",
"Space Telescope Science Institute\n3700 San Martin Drive Baltimore21218MDUSA",
"University of Connecticut\n196 Auditorium Road Storrs06269CTUSA",
"William H. Miller III Department of Physics and Astronomy\nJohns Hopkins University\n3400 N. Charles Street Baltimore21218MDUSA",
"Space Telescope Science Institute\n3700 San Martin Drive Baltimore21218MDUSA",
"Space Telescope Science Institute\n3700 San Martin Drive Baltimore21218MDUSA",
"William H. Miller III Department of Physics and Astronomy\nJohns Hopkins University\n3400 N. Charles Street Baltimore21218MDUSA"
] | [] | In local disk galaxies such as our Milky Way, older stars generally inhabit a thicker disk than their younger counterparts. Two competing models have attempted to explain this result: one in which stars first form in thin disks that gradually thicken with time through dynamical heating, and one in which stars form in thick disks at early times and in progressively thinner disks at later times. We use a direct measure of the thicknesses of stellar disks at high redshift to discriminate between these scenarios. Using legacy HST imaging from the CANDELS and GOODS surveys, we measure the rest-optical scale heights of 491 edge-on disk galaxies spanning 0.4 ≤ z ≤ 2.5. We measure a median intrinsic scale height for the full sample of 0.74 ± 0.03 kpc, with little redshift evolution of both the population median and scatter. The median is consistent with the thick disk of the Milky Way today (0.6 − 1.1 kpc), but is smaller than the median scale height of local disks (∼1.5 kpc) which are matched to our high-redshift sample by descendant mass. These findings indicate that (1) while disks as thick as the Milky Way's thick disk were in place at early times, (2) to explain the full disk galaxy population today, the stellar disks in galaxies need to on average physically thicken after formation. | null | [
"https://export.arxiv.org/pdf/2303.04171v1.pdf"
] | 257,404,997 | 2303.04171 | 9172a7a32af0dd1b746e73cac497814857865f37 |
The Physical Thickness of Stellar Disks to z ∼ 2
Kathleen A Hamilton-Campos
William H. Miller III Department of Physics and Astronomy
Johns Hopkins University
3400 N. Charles Street Baltimore21218MDUSA
Space Telescope Science Institute
3700 San Martin Drive Baltimore21218MDUSA
Raymond C Simons
Space Telescope Science Institute
3700 San Martin Drive Baltimore21218MDUSA
University of Connecticut
196 Auditorium Road Storrs06269CTUSA
Molly S Peeples
William H. Miller III Department of Physics and Astronomy
Johns Hopkins University
3400 N. Charles Street Baltimore21218MDUSA
Space Telescope Science Institute
3700 San Martin Drive Baltimore21218MDUSA
Gregory F Snyder
Space Telescope Science Institute
3700 San Martin Drive Baltimore21218MDUSA
Timothy M Heckman
William H. Miller III Department of Physics and Astronomy
Johns Hopkins University
3400 N. Charles Street Baltimore21218MDUSA
The Physical Thickness of Stellar Disks to z ∼ 2
Draft version March 9, 2023. Typeset using LaTeX twocolumn style in AASTeX631.
Keywords: Galaxy evolution (594); Scale height (1429); Disk galaxies (391); High-redshift galaxies (734)
In local disk galaxies such as our Milky Way, older stars generally inhabit a thicker disk than their younger counterparts. Two competing models have attempted to explain this result: one in which stars first form in thin disks that gradually thicken with time through dynamical heating, and one in which stars form in thick disks at early times and in progressively thinner disks at later times. We use a direct measure of the thicknesses of stellar disks at high redshift to discriminate between these scenarios. Using legacy HST imaging from the CANDELS and GOODS surveys, we measure the rest-optical scale heights of 491 edge-on disk galaxies spanning 0.4 ≤ z ≤ 2.5. We measure a median intrinsic scale height for the full sample of 0.74 ± 0.03 kpc, with little redshift evolution of both the population median and scatter. The median is consistent with the thick disk of the Milky Way today (0.6 − 1.1 kpc), but is smaller than the median scale height of local disks (∼1.5 kpc) which are matched to our high-redshift sample by descendant mass. These findings indicate that (1) while disks as thick as the Milky Way's thick disk were in place at early times, (2) to explain the full disk galaxy population today, the stellar disks in galaxies need to on average physically thicken after formation.
INTRODUCTION
In present-day disk galaxies like our own Milky Way, older stars are generally found at larger distances above and below the galaxy midplane than their younger counterparts (Wyse & Gilmore 1995;Yoachim & Dalcanton 2006;Holmberg et al. 2009;Leaman et al. 2017). They are also "kinematically hotter" with larger vertical velocity dispersions-on average, the orbit of an older star will loft further from the midplane than that of a younger star. These two observational facts are closely related.
The collection of older stars in the Milky Way comprises what is known as its "thick disk" (with a scale height of ∼ 1 kpc), while its gas and younger stars comprise its flatter "thin disk" (scale height ∼ 270 pc) (Gilmore & Reid 1983;Bland-Hawthorn & Gerhard 2016 and references therein). In reality, the relation between disk thickness (or vertical velocity dispersion) and stellar age in the Milky Way is likely a continuum (Bovy et al. 2012)-with younger stars comprising thinner, kinematically-colder components of the disk and older stars comprising thicker, kinematically-hotter components of the disk. Most disk galaxies in the local universe appear to have a thick(-er) disk component made up of old(-er) stars (Yoachim & Dalcanton 2006). Understanding when and how the older, thicker components of today's stellar disks developed is an open question-and a key piece of our story of disk galaxy formation.
The classic explanation for thick disks starts by assuming that the stars that comprise today's thick disks first formed in thin disks. Those initially-thin disks are then thought to have thickened with time through the dynamical heating of the vertical components of the stellar orbits. That heating could come from gradual internal processes, such as vertical "kicks" from gravitational interactions between the stars and asymmetries in the disk (e.g., spiral arms and Giant Molecular Clouds; Villumsen 1985). The heating could also come from fast external processes, such as galaxy-galaxy mergers (Wyse et al. 2006). It is perhaps reasonable to expect that the older stars-having spent a longer time in the disk with more opportunities for such encounters-would comprise a kinematically hotter and physically thicker component of the disk.
A more recent explanation (one that contrasts the historical picture above) contends that today's thick stellar disks were first formed thick. This idea is motivated by observations that the velocity dispersion of the ionized gas in high-redshift galaxies (back to z ∼ 3 or 11.5 Gyr in lookback time) was up to a few times higher than it is in today's galaxies (e.g., Weiner et al. 2006; Kassin et al. 2007, 2012; Wisnioski et al. 2015; Simons et al. 2016, 2017; Übler et al. 2019). If the velocity dispersion of the ionized gas in these galaxies reflects that of their cold molecular star-forming gas and the stars that are newly forming inherit the kinematics of the gas from which they form, then we might expect for the young stellar disks of these high redshift galaxies to be physically thicker than the young stellar disks today. In this picture, today's old thick stellar disks are simply the descendants of the young thick disks formed at high redshift. As the velocity dispersion of the ionized gas in galaxies gradually declines with time (as is observed; Kassin et al. 2012; Simons et al. 2017; Übler et al. 2019), we might expect for later generations of stars to form in progressively thinner disks.
In brief, the question is whether the stars that comprise today's disk galaxies:
1. formed in thin disks (i.e., near the galaxy midplane) at all cosmic times which subsequently thicken into thick disks, or

2. formed in thick disks (i.e., at large distances above and below the midplane) at early times and in progressively thinner disks at later times.
Both of these scenarios qualitatively reproduce the observed trend in the Milky Way-that older stars comprise progressively thicker portions of the disk. Numerical simulations of galaxy formation in the context of a ΛCDM cosmology have made considerable progress in addressing this question (e.g., Bird et al. 2021;Meng & Gnedin 2021), but there is not yet theoretical consensus. For instance, Bird et al. (2021) argue that both scenarios are at play-older stars are formed in thicker disks at higher redshifts and are also then kinematically-heated after birth into even thicker disks. Meng & Gnedin (2021) report that the stars in their simulations form in thin disks at all times, but that the continuous rearrangement of the orientation of the starforming disk leads to a gradual thickening of the disk.
To distinguish between these scenarios, one needs direct measurements of the thicknesses of galaxy disks back to the early universe. At early times, the scenarios offer opposing predictions. The first scenario predicts that disks (as a composite including both the thick and thin components, which is what is observable) should be physically thinner at higher redshift and progressively thicker closer to the present day. The second scenario predicts that the composite disks should be as thick at high redshift as the thick disks today and progressively thinner closer to the present day.
In this paper, we measure the vertical scale heights of 491 galaxies from z ∼ 0.4 to z ∼ 2.5 to reveal how and when galaxies formed their thick stellar disks. To do that, we use archival Hubble Space Telescope (HST ) IR imaging from the CANDELS and GOODS surveys (Giavalisco et al. 2004;Grogin et al. 2011;Koekemoer et al. 2011). Given the limited resolution of the HST imaging, we are only able to infer the composite thickness of these external galaxies-we can not distinguish between their thin and thick components. The galaxies in this sample span a stellar mass range of 9 ≤ log M * /M ≤ 11 (with the majority at log M * /M ≤ 10) and a redshift range of 0.4 ≤ z ≤ 2.5 (with the majority at z > 1). This work builds on previous formative studies using HST imaging to measure the disk scale heights of high redshift galaxies in the Ultra Deep Field and Frontier Fields (Elmegreen & Elmegreen 2006;Elmegreen et al. 2017). The study presented in this paper includes a number of advances:
(1) we study a large sample of galaxies (N = 491) that allows us to infer the statistics of galaxy populations in redshift, (2) we focus on a fixed rest-optical wavelength range (0.46-0.66 µm) to mitigate degeneracies between scale height, stellar age, and redshift, and (3) we compare with a population-matched sample of disk galaxies at z = 0 to infer evolution.
In §2, we discuss the HST imaging used and the selection of our galaxy sample. In §3, we discuss the measurements of galaxy scale heights from the images. In §4, we present our results on the distribution and redshift evolution of scale height. We compare our results against a local sample of disk galaxies that are selected to reflect the expected descendants of our sample in terms of mass, and discuss our findings. In §5, we briefly conclude. In Appendix A, we discuss a correction that we apply to the scale heights to account for galaxy inclination. In Appendix B, we discuss the 1D PSF kernel we use in our model. Where relevant, we adopt a Planck15 ΛCDM cosmology (Planck Collaboration et al. 2016) with (h, Ω m , Ω λ ) = (0.67, 0.31, 0.69).
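For readers who want to reproduce the angular-to-physical conversions used below, a minimal Python sketch with astropy (our own snippet; the cosmology object is simply built from the parameter values quoted above) is:

from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

# Cosmology with the parameter values quoted in the text.
cosmo = FlatLambdaCDM(H0=67 * u.km / u.s / u.Mpc, Om0=0.31)

for z in (0.4, 1.0, 2.0, 2.5):
    # Proper transverse scale and lookback time at each redshift.
    kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec).value
    print(f"z = {z}: {kpc_per_arcsec:.2f} kpc per arcsec, "
          f"lookback time {cosmo.lookback_time(z).value:.1f} Gyr")

# At z = 2 this gives roughly 8.6 kpc per arcsec, i.e. about 0.5 kpc per
# 0.06-arcsec column used in Section 3.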
DATA AND SAMPLE SELECTION
We use archival Hubble Space Telescope (HST) near-infrared imaging to measure the photometric scale heights of 491 edge-on disk galaxies over 0.4 ≤ z ≤ 2.5 in the GOODS-S galaxy field. Here, we discuss the HST imaging used, the catalogs containing the physical properties of the galaxies in GOODS-S (§2.1), and the selection of the galaxy sample studied in this paper (§2.2). Figure 1 shows the distribution of the galaxy sample in mass, size, photometric axis ratio, and redshift. Figure 2 shows HST imaging for a random subset of the galaxies in the sample.

Figure 1. The distributions of stellar mass, redshift, WFC3/F160W effective radius, and WFC3/F160W photometric axis ratio for the galaxies in our sample are shown. Our sample spans a relatively uniform distribution in redshift. The majority of the galaxy sample have low stellar mass (9 < log M * /M < 10). To include galaxies that are sufficiently edge-on, we select galaxies with a WFC3/F160W photometric axis ratio less than 0.4.
HST Imaging and Catalogs
We use 3-band optical and near-infrared imaging of the GOODS-S galaxy field observed with the Advanced Camera for Surveys (ACS/F850LP) and the Wide Field Camera 3 (WFC3/F125W+F160W) on HST. The imaging was taken as a part of two HST Treasury programs: the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey (CANDELS; Grogin et al. 2011; Koekemoer et al. 2011) and the Great Observatories Origins Deep Survey (GOODS; Giavalisco et al. 2004). We use the image mosaics, source detection, segmentation maps, and photometric catalogs of this imaging field as provided by the 3D-HST survey (Skelton et al. 2014) 1 . The mosaics include an estimate of the 2D point-spread function, which was constructed by stacking isolated stars in the mosaic (Skelton et al. 2014). The catalogs also include estimates of stellar masses as derived from the FAST stellar population synthesis fitting routine (Kriek et al. 2009; Mobasher et al. 2015). We take the photometric redshifts provided in the 3D-HST catalogs as measured by the EAZY photometric redshift code (Brammer et al. 2008). The typical uncertainty on the photometric redshifts is 0.1 × (1 + z) (R. C. Simons et al. in prep). We adopt measurements of the F160W photometric axis ratios, position angles, and effective radii of the galaxies in GOODS-S from the GALFIT catalogs described in van der Wel et al. (2012). The effective radius is defined as the semi-major axis of the ellipse that contains half of the total light in the best fitting GALFIT single-Sérsic model. The axis ratio is defined as the ratio of the semi-minor and semi-major axes from the best-fitting Sérsic model (van der Wel et al. 2012).

Figure 2. HST/ACS+WFC3 postage stamps for a random subset of the galaxies in the sample are shown. We select edge-on galaxies using a cut on axis ratio. The top row are high-mass galaxies (10 < log M * /M < 11) and the bottom row are low-mass galaxies (9 < log M * /M < 10). Redshift increases from left to right. The filters shown (and used to make the measurements in this paper) vary with redshift to target a fixed rest frame wavelength of 0.46 − 0.66 µm (see Figure 3 and §2.2). We use ACS/F850LP imaging for galaxies over 0.4 ≤ z ≤ 0.8, WFC3/F125W for 0.8 < z ≤ 1.7, and WFC3/F160W for 1.7 < z ≤ 2.5. The postage stamps are 4.0 arcsec on a side. A red bar is included to indicate the physical scale. Black circles in the lower left corners represent the point-spread functions for each filter: the full-width half-maximum is the radius of each circle.
Galaxy Sample Selection
To select the galaxies studied in this paper, we use the masses, effective radii, photometric position angles, and photometric redshifts tabulated in the 3D-HST and GALFIT catalogs, as described above.
We select a parent sample of 6933 galaxies in GOODS-S that span a fixed range in mass (log M * /M = 9 − 11) and photometric redshift (z phot = 0.4 − 2.5). The redshift minima and maxima are chosen to bound a fixed rest wavelength window for the HST ACS and WFC3 filters used in this paper (Figure 3). From the parent sample, we use two criteria to select galaxies with a well-defined orientation, i.e., a well-defined major and minor axis, so that we can reliably orient the plane of the galaxy. To do that, we first keep galaxies that have been flagged with the "good fit" designation in the GALFIT catalogs. We then compare the position angle measured by the 3D-HST team using the SExtractor code (Bertin & Arnouts 1996) to that measured from GALFIT for each galaxy. If the two position angles disagree by more than 10°, we consider the position angle to be uncertain and the galaxy is discarded. Finally, we remove 3 galaxies with errant effective radii (> 5 arcsec). Together, these criteria reduce the sample to 3032 galaxies.
To mitigate projection effects in the scale height measurements, we ideally want to use galaxies whose disks are oriented perfectly edge-on relative to our line of sight (LOS). In reality, all galaxies possess a finite LOS inclination and so our practical goal is to select galaxies that are nearly edge-on, and to model and correct for the measurement biases introduced by the mean residual inclination of the sample. We consider a galaxy sufficiently edge-on if it has an F160W photometric axis ratio of b/a ≤ 0.4. This corresponds to an inclination of < 20° from edge-on for disks with intrinsic axial ratios (commonly denoted q) of 0.25. We then use simulations of randomly-inclined galaxy disks to calculate the bias to the median of the measured scale heights due to the residual inclination (see Appendix A). Following the above selection of nearly edge-on galaxies, our sample is reduced to 1248 galaxies. Finally, we remove galaxies with a sufficiently close neighbor (< 2 arcsec) in the full 3D-HST source catalog. The rejected galaxies include those in late-stage encounters and those with chance projections-both of which would challenge our fitting routine. An additional 36 galaxies were discarded due to bad surface brightness fits (see §3).
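The quoted < 20° figure follows from the standard relation between observed axis ratio, intrinsic (edge-on) axial ratio, and inclination; a short Python sketch of that relation (ours, not the paper's code) is:

import numpy as np

def inclination_from_axis_ratio(b_over_a, q0=0.25):
    # cos^2(i) = ((b/a)^2 - q0^2) / (1 - q0^2) for an oblate disk of
    # intrinsic axial ratio q0, with i measured from face-on; the value
    # returned is the tilt away from exactly edge-on.
    cos_i = np.sqrt((b_over_a**2 - q0**2) / (1 - q0**2))
    i_from_face_on = np.degrees(np.arccos(cos_i))
    return 90.0 - i_from_face_on

print(f"{inclination_from_axis_ratio(0.4):.1f} deg from edge-on")   # ~18.8 deg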
The final sample includes 491 galaxies. The distribution of the sample in size, mass, photometric redshift, and photometric axis ratio is shown in Figure 1. The HST imaging for a random subset of galaxies in the sample is shown in Figure 2.
SDSS Comparison Sample
We compare the measurements of the galaxy scale heights of our sample against those of disk galaxies in the local universe measured from the Sloan Digital Sky Survey (SDSS; Abazajian et al. 2003; Bizyaev et al. 2014). Bizyaev et al. (2014) used SDSS g-band imaging (∼0.41 − 0.53 µm; characteristic wavelength ∼ 0.48 µm) to measure the scale heights of 4768 local edge-on disks. This wavelength window roughly matches the rest frame window targeted for our sample. For a fair comparison, we want to compare the galaxies in our sample with their anticipated descendants in the local universe. To that end, we downselect the Bizyaev et al. (2014) sample to only include galaxies in the stellar mass range 9.5 < log M * /M < 10.5. We calculate the stellar mass of each galaxy in the full Bizyaev et al. (2014) SDSS sample using their absolute g-band magnitudes, redshift-determined distances, and a fixed mass-to-light ratio (Faber & Gallagher 1979). The down-selected SDSS sample includes 1,679 galaxies and spans a stellar mass range (p16th, p50th, p84th) of log M * /M = (9.9, 10.2, 10.4). Given galaxy mass growth expectations from abundance matching (Moster et al. 2013), the galaxies in our sample, log M * /M ∼ (p16th, p50th, p84th) = (9.1, 9.3, 9.9), are expected to evolve in mass (Papovich et al. 2015; Simons et al. 2017) to roughly match the mass range of this down-selected SDSS sample.
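As a rough guide to that mass conversion, the sketch below is our own; the mass-to-light ratio and the solar g-band absolute magnitude used here are assumed placeholder values, not taken from Bizyaev et al. (2014):

def stellar_mass_from_gband(M_g, ml_ratio=1.0, M_g_sun=5.11):
    # Luminosity in solar g-band units from the absolute magnitude, scaled by
    # a fixed mass-to-light ratio; both constants are illustrative only.
    L_g = 10.0 ** (-0.4 * (M_g - M_g_sun))
    return ml_ratio * L_g

print(f"{stellar_mass_from_gband(-20.5):.1e} Msun")   # ~1.8e10 for M/L = 1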
MEASUREMENTS OF GALAXY SCALE HEIGHT
In this section, we describe the fits to the galaxy scale height from the HST imaging.
For each galaxy in the sample, we extract 4 arcsec × 4 arcsec postage stamps from the ACS/F850LP, WFC3/F125W, and WFC3/F160W 3D-HST imaging mosaics. The stamps are centered on the galaxy center. We extract the corresponding stamps of the inverse variance weight and segmentation maps using their respective mosaics. In each stamp, we use the segmentation map to mask extraneous sources and identify background pixels. We rectify the postage stamps so that the major axis of the galaxy (i.e., the disk midplane, as defined by the F160W photometric position angle) lies along the horizontal direction of the image. Figure 2 shows the rectified stamps for a random subset of 6 galaxies in the final sample.
We extract vertical surface brightness (and uncertainty) profiles along each column of the postage stamps. The columns are separated by 0.06 arcsec, which corresponds to 0.33 kpc at z = 0.4 and 0.52 kpc at z = 2.
We fit each of the observed surface brightness profiles using a model that includes a 1D convolution of the HST point-spread function (PSF) and a 1D sech 2 surface brightness profile,
sech^2(A, z, z_0) = A × 4 / (e^{∆z/z_0} + e^{−∆z/z_0})^2        (1)

Figure 3. We measure the scale heights of the galaxies in the same rest frame wavelength window. This figure shows the redshift evolution of the rest wavelength traced by the characteristic wavelength of the three HST imaging filters we use-WFC3/F160W, WFC3/F125W, and ACS/F850LP. The gray shaded region shows our targeted rest wavelength window, spanning 0.46-0.66 µm. For each galaxy, we carry out measurements in the filter appropriate for its redshift: F160W for 1.7 < z ≤ 2.5, F125W for 0.8 < z ≤ 1.7, and F850LP for 0.4 ≤ z ≤ 0.8.
where A, ∆z, and z 0 are the amplitude, the position of the galaxy midplane, and the scale height 2 , respectively. We construct and use a 1D convolution kernel appropriate for each imaging filter (see Appendix B). The sech 2 profile assumes that the stars are distributed above and below the disk isothermally (van der Kruit 1988; van der Kruit & Freeman 2011), with a number density that decreases with the distance from the midplane as n(z) = n 0 × sech 2 (z/z 0 ), where n 0 is the number density of stars in the galaxy midplane. We assume that the galaxies have an intrinsic geometry of a disk. A number of studies have argued from both an empirical (Ravindranath et al. 2006;van der Wel et al. 2014;Zhang et al. 2019) and theoretical perspective (Ceverino et al. 2015;Tomassetti et al. 2016;Pandya et al. 2019), that the majority of the galaxy population at z > 1 and log M * /M < 10 (which comprises the bulk of our sample, Figure 1) have stellar structures that are intrinsically elongated (i.e. prolate) and not disky. For galaxies in our sample that are intrinsically prolate, the "scale height" that we measure represents the physical thickness of the short axis of the system. For each galaxy in our sample, we measure the scale height using the imaging band that covers a fixed rest wavelength window (0.46 − 0.66 µm) at the redshift of the source (Figure 3). This allows us to track the same portion of the rest frame SED across our full sample of galaxies. In doing so, we mitigate pseudo-evolution that might arise from a dependence between stellar population age and scale height at fixed redshift. To first order, this fixed portion of the SED is contributed to by similarly-aged stellar populations at different redshifts. The types (i.e., ages and evolutionary states) of stellar populations that contribute light at this rest frame wavelength window will depend somewhat on the starformation rate history. Typically, this portion of the spectrum (∼0.5 µm) will be dominated by > 9 Gyr old main-sequence stars (Conroy 2013). We do not consider the effect of dust attenuation on the observed surface brightness profiles. Dust tends to lie in the midplanes of galaxies and will suppress midplane light. For a galaxy with high midplane dust obscuration, the observed vertical surface brightness profile will be broadened and the disk would appear thicker than its intrinsic thickness. Using imaging from the SDSS survey, Bizyaev et al. (2014) inferred that dust attenuation leads to a < 15% increase in the observed disk thickness in local galaxies. Without an empirical constraint on the vertical distribution (and attenuation properties) of the dust in our high-redshift sample, we exclude the minor correction for dust.
For each column of each galaxy, we employ the Bayesian Markov chain Monte Carlo (MCMC) Python package emcee (Foreman-Mackey et al. 2013) to derive the probability distribution of the scale height. The model of each profile (Eq. 1) has three free parameters: the amplitude, the position of the galaxy midplane, and the scale height. We adopt a top-hat prior for each. The lower and upper bounds of the amplitude prior are set to 0.5× and 2.5× the maximum surface brightness of the profile. The bounds of the midplane prior are set to 3 pixels above or below, respectively, the location of the maximum surface brightness. Finally, the lower and upper bounds of the scale height prior are set to 0 pixels and 10 pixels (0. 6), respectively. We run emcee using 100 walkers and 2000 steps per walker. We discard the first 100 steps for "burn-in". A small (∼ 5%) fraction of the fits are discarded due to unreliable results-generally surface brightness profiles with unflagged contamination for which the posterior of the scale height is either unconstrained or diverges to unrealistically large values. Figure 4 shows the results of the fit for a single column of an example galaxy (log M * /M = 9.5 at z = 0.90), which rests near the median redshift and stellar mass of the sample. The vertical surface brightness profile shown is measured at one effective radius from the center of the galaxy. The effective radius is marked by the vertical line. We show a circle indicating the full-width at half-maximum of the 2D PSF of the HST /WFC3 image. In the top right panel, the posterior distribution of the scale height is shown, and the 50 th percentile is marked. The bottom left panel shows the vertical surface brightness profile and its uncertainty, the 50 th percentile model, and the 1D PSF of the image. We note that the observed surface brightness profile is broadened beyond the 1D PSF. This indicates a physical thickness that is detectable with the HST image. Repeating this process for each column of each galaxy, we derive the radial profiles of the scale height for all 491 galaxies in the sample.
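A compact, self-contained sketch of this per-column fit (our own illustration on synthetic data, not the authors' code; the variable names are hypothetical, the Gaussian stand-in kernel replaces the empirical PSF of Appendix B, and the top-hat priors and sampler settings mirror the description above) might look like:

import numpy as np
import emcee

def sech2_profile(z, amp, z_mid, z0):
    # Eq. (1): isothermal vertical profile.
    x = (z - z_mid) / z0
    return amp * 4.0 / (np.exp(x) + np.exp(-x))**2

def model_profile(z, amp, z_mid, z0, psf_1d):
    # Intrinsic sech^2 profile convolved with the 1D PSF kernel.
    return np.convolve(sech2_profile(z, amp, z_mid, z0), psf_1d, mode="same")

def log_prior(theta, flux, z_pix):
    amp, z_mid, z0 = theta
    peak, z_peak = flux.max(), z_pix[np.argmax(flux)]
    if (0.5 * peak < amp < 2.5 * peak and
            abs(z_mid - z_peak) < 3.0 and          # pixels
            0.0 < z0 < 10.0):                      # pixels
        return 0.0                                 # flat (top-hat) prior
    return -np.inf

def log_posterior(theta, z_pix, flux, flux_err, psf_1d):
    lp = log_prior(theta, flux, z_pix)
    if not np.isfinite(lp):
        return -np.inf
    resid = (flux - model_profile(z_pix, *theta, psf_1d)) / flux_err
    return lp - 0.5 * np.sum(resid**2)

# Synthetic column for demonstration: a sech^2 disk blurred by a Gaussian PSF.
z_pix = np.arange(-20, 21, dtype=float)
psf_1d = np.exp(-0.5 * (z_pix / 2.0)**2); psf_1d /= psf_1d.sum()
flux = model_profile(z_pix, 10.0, 0.0, 3.0, psf_1d) + np.random.normal(0, 0.2, z_pix.size)
flux_err = np.full_like(flux, 0.2)

nwalkers, ndim = 100, 3
p0 = np.array([10.0, 0.0, 3.0]) + 1e-2 * np.random.randn(nwalkers, ndim)
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior,
                                args=(z_pix, flux, flux_err, psf_1d))
sampler.run_mcmc(p0, 2000, progress=False)
z0_samples = sampler.get_chain(discard=100, flat=True)[:, 2]
print(np.percentile(z0_samples, 50))   # scale height in pixels; convert to kpc
                                       # with the angular scale at the redshift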
For simplicity, in what follows we present a single scale height measurement for each galaxy. Specifically, we report the weighted-average and uncertainty of the two posterior distributions derived at one effective radius from either end of the galaxy center.
RESULTS AND DISCUSSION
In Figures 5 and 6, we present the vertical scale heights as measured at one effective radius for 491 galaxies spanning a redshift range of 0.4 ≤ z ≤ 2.5 and a stellar mass range of 9 ≤ log M * /M ≤ 11. The majority of the sample (74%) lie at z > 1 and log M * /M < 10. As discussed in §2.2, the measurements are carried out at a fixed rest frame wavelength of 0.46 − 0.66 µm, mitigating potential differences between older (redder) and younger (bluer) stellar populations at a fixed redshift.
In the left panel of Figure 5, we show the scale heights and uncertainties of the full sample as a function of redshift (lookback time). The majority of the sample are consistent with a measurable finite thickness: 97% (84%) of the sample have scale heights that are at least 1σ (2σ) larger than zero. The red-shaded region shows the median and the 16th−84th percentile span of the sample running with lookback time. The solid red line shows the as-is measured median of the population, and the dashed red line shows the median corrected for the average (small, but non-zero) inclination of the galaxy sample. For the latter, we apply a correction factor of 22% (see Appendix A for details). In the right panel of Figure 5, we show the distribution of scale heights of the full sample with the median and inclination-corrected median marked: 0.94 (±0.04; standard error on the median) kpc and 0.74 (±0.03) kpc, respectively.
We do not find evidence for an evolution in either the median or the scatter of the population with redshift: the red-shaded region in the left panel of Figure 5 is flat. For the full sample, we measure a span of 0.6 to 1.4 kpc (16th−84th percentiles), or a 1σ scatter of 0.35 kpc.
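The running statistics quoted here (median and 16th−84th percentile span versus lookback time) can be computed as sketched below. The cosmology (astropy's Planck15 is used for illustration), the window size, and the mock inputs are our assumptions; the real inputs are the 491 measured scale heights and redshifts.

```python
import numpy as np
from astropy.cosmology import Planck15

# Placeholder inputs standing in for the measured sample (redshifts and scale heights in kpc).
rng = np.random.default_rng(1)
z_gal = rng.uniform(0.4, 2.5, 491)
z0_kpc = rng.lognormal(mean=np.log(0.9), sigma=0.4, size=491)

t_lb = Planck15.lookback_time(z_gal).to_value("Gyr")
order = np.argsort(t_lb)
t_sorted, z0_sorted = t_lb[order], z0_kpc[order]

window, step = 75, 10          # galaxies per running window and window stride (our choices)
running = []
for i in range(0, len(t_sorted) - window, step):
    chunk = z0_sorted[i:i + window]
    running.append((np.median(t_sorted[i:i + window]),
                    *np.percentile(chunk, [16, 50, 84])))
running = np.array(running)     # columns: lookback time, p16, p50, p84
```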
We split the sample into three redshift bins. At high (1.7 < z ≤ 2.5) and intermediate (0.8 < z ≤ 1.7) redshifts, the as-is median scale heights of the sample are 0.96 ± 0.07 kpc and 0.88 ± 0.04 kpc, respectively. At low redshift (0.4 ≤ z ≤ 0.8), where the sample is sparse, the median scale height is 1.15 ± 0.15 kpc. The inclination-corrected median scale heights of the three bins are 0.75 (±0.05), 0.69 (±0.03), and 0.90 (±0.12) kpc, respectively. See Table 1 for a summary.
Figure 4. The WFC3/F125W image and fitting results for a single galaxy in the sample are shown. For consistency among galaxies, we measure the scale height as the value at the effective radius. This column is shown as a light brown vertical line. The histogram in the upper right shows the results of our MCMC fitting process for the effective-radius column. The final reported scale height, the 50th percentile of these values, is marked by the vertical dark red line. Above the center of the galaxy, we show the HST WFC3 PSF as a grey circle whose diameter is the full-width at half-maximum of the PSF. We compare this to the data with associated error bars (light brown) and the model (dashed red line) in the lower left corner. This plot demonstrates that we measure broadening above and beyond the nominal PSF for our galaxies.
We compare our results with previous studies (Elmegreen & Elmegreen 2006;Elmegreen et al. 2017) that measured the scale heights of high redshift galaxies with HST /ACS imaging in the Hubble Ultra Deep Field and Frontier Field Parallels. For context, we note that these measurements were carried out using bluer filters (F435W, F606W, F775W, and F850LP) than those used in this paper and are not corrected for inclination. In total, these samples include ∼200 galaxies spanning 7 ≤ log M * /M ≤ 12 and redshifts 0.5 ≤ z ≤ 4.5. Over this redshift range, Elmegreen & Elmegreen (2006) report an average scale height of 1 ± 0.4 kpc. Binning by redshift, Elmegreen et al. (2017) report an average of 1.03 ± 0.25 kpc at 0.5 < z < 1.5 and 0.63 ± 0.24 kpc at 1.5 < z < 2.5. Combining the low and intermediate redshift bins for our sample (i.e., 0.4 ≤ z ≤ 1.7), we measure an as-is median (i.e., not inclination-corrected) scale height of 0.91 ± 0.05 kpc-in good agreement with the low redshift average from Elmegreen et al. (2017). Moreover, the high redshift average is consistent with our measurements in the same redshift bin (0.96 ± 0.07 kpc). For both the sample studied in this paper and in Elmegreen et al. (2017), there is no significant evidence for a redshift evolution in the average scale height from z ∼ 2.5 to z ∼ 0.4.
In Figure 6, we compare the distribution of the scale heights of our high-redshift galaxy sample with those of a population-matched sample of disk galaxies in the local universe as measured from the Sloan Digital Sky Survey (SDSS, see §2.3; Bizyaev et al. 2014). The high-redshift distribution is shown in red, and the low-redshift distribution from SDSS is shown in blue. In addition, we show the range of estimates of the scale heights of the thin and thick disks of the Milky Way (Bland-Hawthorn & Gerhard 2016, and references therein). Figure 6 shows that the inclination-corrected median of the high-redshift population (0.74 ± 0.03 kpc) is consistent with the range of estimates (0.63−1.08 kpc) of the Milky Way thick disk. The thin disk of the Milky Way is significantly thinner than both the high-redshift and the SDSS comparison samples. Importantly, we note that we are only able to measure the composite thicknesses of the external galaxies. While we cannot rule out the presence of a thin disk in these galaxies, we can conclude that the majority of their stellar light arises from a thick component.
Table 1. The population median (z0,median), inclination-corrected median (z0,median^incl-corr), and scatter (∆z0) of the galaxy scale height as measured at one effective radius are reported for different bins in redshift (top three rows) and for the full sample (bottom row). The scatter is defined as (z0,84th − z0,16th)/2. The reported uncertainties are the standard error on the median.
Figure 6. The distribution of the galaxy scale heights of our sample is shown in red. The sample spans 0.4 ≤ z ≤ 2.5. The measurements are carried out at one effective radius from the center of each galaxy. The distribution of a population-matched sample of disk galaxies in the local universe (SDSS; Bizyaev et al. 2014) is shown in blue. The median and inclination-corrected median of our high-redshift sample are shown with solid and dashed red lines, respectively. The median of the local sample is indicated with a vertical blue line. The ranges of estimates for the thick (light blue) and thin (grey) disks of the Milky Way are also shown (Bland-Hawthorn & Gerhard 2016, and references therein). The inclination-corrected median scale height of the high-redshift galaxies (∼750 pc) is smaller than that of the disk galaxies today (∼1500 pc) but is similar to that of the thick disk of the Milky Way (∼600−1100 pc).
While the median scale height of the high-redshift sample is consistent with that of the thick disk of the Milky Way, it is appreciably and statistically smaller than that of the SDSS comparison sample (1.46 ± 0.04 kpc; Bizyaev et al. 2014).
We next consider differences in the shapes of the distributions. The low-redshift distribution not only has an offset in the median, but is also wider than the high-redshift distribution. The (16, 50, 84)th percentiles of the SDSS and high-redshift distributions are (0.87, 1.46, 2.25) kpc and (0.60, 0.94, 1.52) kpc, respectively. We compute the widths of the two distributions (p84 − p16) as 1.39 and 0.91 kpc, respectively: the SDSS distribution is ∼53% wider than that of our high-redshift sample.
Together, the differences in the median and width of the distributions imply that with cosmic time the disk population needs to both (i) on average become physically thicker and also (ii) develop higher variety with some galaxies remaining thin and some thickening. We also note that both distributions contain a tail towards higher scale heights, and that the tail extends further in the low redshift distribution. This implies that the thickness of the thickest galaxies at a given redshift increases with decreasing redshift. Interestingly, we do not detect such evolution in our sample ( Figure 5). However, we note that Elmegreen et al. (2017) report this exact result (a thickening of the thick-end of the population at a given redshift) for the observed I-band (rest NUV -blue light) of clumpy galaxies in the mass range of our sample (log M/M ∼ 9 − 10) over z ∼ 3 to z ∼ 1. This raises an important question: do the thickest galaxies (those that comprise the tail) at high redshift evolve into the thickest galaxies in the present-day? This is a question that will need to be answered with cosmological numerical simulations that both incorporate high star particle resolution (e.g., Bird et al. 2013;Peeples et al. 2019;Bird et al. 2021;Meng & Gnedin 2021) and which can model large populations of galaxies.
As a whole, we draw two conclusions: (1) disks as thick as the Milky Way are established as early as cosmic noon at z ∼ 2, but also that (2) these high-redshift stellar disks are as thin or thinner than their (expected) descendant galaxies today. This suggests that the thickest components of today's galaxy disks start thick, and subsequently thicken at later times. This is consistent with the scenario outlined in Bird et al. (2021), where stars are formed in thicker disks (with higher velocity dispersion) at higher redshifts and also subsequently thicken at later times.
We do not detect an evolution in the scale height across our redshift range. However, we note that (as seen in the distribution of points in Figure 5) we lack the sampling at z < 1 to make a strong statistical statement on late-time evolution. We postulate that the population-wide thickening that is inferred in Figure 6 must occur in the period of cosmic time where our sample loses statistical power: after z ∼ 1, or in the last 8 Gyr.
5. CONCLUSION
Using Hubble Space Telescope/ACS+WFC3 imaging of the GOODS-S galaxy field, we measure the rest-optical scale heights of 491 galaxies spanning 0.4 < z < 2.5 and 9 < log M∗/M⊙ < 11. We use these measurements to track the redshift evolution of the composite thicknesses of stellar disks to cosmic noon. We then compare our results with a population-matched sample of disk galaxies today from the Sloan Digital Sky Survey (SDSS; Bizyaev et al. 2014). Our main conclusions are as follows:
• We measure a median intrinsic (inclination-corrected) scale height of 0.74 (±0.03) kpc and a population scatter of 0.35 kpc. This median scale height is consistent with the range of estimates of that of the Milky Way's thick disk today (∼0.6−1.1 kpc). This indicates that disks that are as thick as the Milky Way's thick disk are in place at early cosmic times.
• Comparing with the distribution of scale heights of local disks measured from SDSS (Bizyaev et al. 2014), we find that the high-redshift population is on average physically thinner than their population-matched descendant disks today. This indicates that the disk population must on average thicken towards the present day. Moreover, the width of the low-redshift distribution is larger than that of the high-redshift population (by 53%), indicating that the galaxy population must develop a higher variety in thickness with cosmic time.
• From z ∼ 2.5 to z ∼ 0.4, we find no evidence for an evolution in the median and scatter of the scale heights of the galaxy population with redshift. We suggest that the bulk of the scale height evolution that is implied by the comparison with SDSS above must occur at cosmic times later than z ∼ 1, where our sample loses statistical power.
In brief, our results indicate that the stellar disks of galaxies both start thick and subsequently thicken with time.
With the near-infrared imaging capabilities of JWST /NIRCam + NIRISS and imaging observations from a number of large public programs available (e.g., CEERS, PRIMER, COSMOS-Web, UNCOVER), it is now possible to extend this study to earlier cosmic times at z > 3. Early findings from these and other programs are indicating a surprisingly high fraction of morphologically-regular stellar disks at these redshifts (Robertson et al. 2022;Kartaltepe et al. 2022;Ferreira et al. 2022a,b). It is not yet clear how the detailed physical structures of these galaxies are connected with the structures of the galaxy populations studied in this paper at later cosmic times.
6. ACKNOWLEDGEMENTS
This work is supported through the Hubble Space Telescope program number AR-15052. Support for Program number AR-15052 was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy. This work is based on observations taken by the 3D-HST Treasury Program (GO 12177 and 12328) with the NASA/ESA HST, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. This research made use of Astropy (http://www.astropy.org), a community-developed core Python package for Astronomy (Astropy Collaboration et al. 2013, 2018).
APPENDIX
A. QUANTIFYING THE BIAS OF INCLINATION ON THE POPULATION STATISTICS
In this study, we measure the scale heights of a sample of galaxies that are selected to be "edge-on". We define a galaxy to be edge-on if its HST/F160W photometric axis ratio is < 0.4. In practice, every galaxy in the sample carries a finite inclination to our line-of-sight. Due to projection, this small inclination will artificially inflate the observed scale height. Ideally, we would correct each galaxy's scale height based on its known inclination. However, from the projected 2D images alone there is a degeneracy between disk thickness and galaxy inclination: we cannot distinguish between thick galaxies and inclined galaxies. However, we are able to correct the population median scale height given the known population-averaged inclination. To do this, we rely on the known distribution of inclinations that a random sample of disks will take with respect to an arbitrary line-of-sight. In this appendix, we aim to quantify the bias of inclination on our population statistics by fitting a suite of randomly-inclined disk models using the same procedures adopted in this paper (§3).
First, we generate three toy exponential disk galaxies in 3D (ρ(r) ∝ e^(−r/r_d)). The three toy galaxies differ only in the intrinsic ratio (q) of their scale height to scale length (r_d), taking on q = 0.15, 0.25, and 0.35, respectively. For each toy galaxy, we progressively incline and project it to a mock observer that is fixed in space. We sample an inclination array that follows a flat distribution in the cosine of the inclination angle. This is the distribution expected for randomly-inclined galaxies. In doing so, we create a suite of inclined projections.
We then impose our selection criterion so that we only retain nearly edge-on galaxies as defined in this paper: those with a projected axis ratio of < 0.4. For each galaxy that survives this cut, we perform the sech² fitting procedure described in §3. We take the scale height to be the median of the posterior of the model. By comparing the intrinsic scale height (which is fixed for all galaxies in our toy sample) with what we recover using the fits, we quantify the effect of inclination on the population statistics studied in this paper, notably the median. Figure 7 shows how the percent difference between the intrinsic scale height of the toy galaxies and that recovered from our fitting procedure varies as a function of galaxy inclination and q. The median correction of the toy population is marked with a dashed line.
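To make the geometry of this exercise concrete, the sketch below samples random orientations (flat in cos i), applies the projected axis-ratio cut of < 0.4, and tracks a crude proxy for the apparent thickening. It uses the analytic oblate-spheroid projection formula instead of the full image simulation and sech² refitting performed in the paper, so the numbers it prints are only indicative.

```python
import numpy as np

rng = np.random.default_rng(2)
n_mock = 100_000
cos_i = rng.uniform(0.0, 1.0, n_mock)        # random orientations are flat in cos(i)
sin2_i = 1.0 - cos_i ** 2

for q in (0.15, 0.25, 0.35):                 # intrinsic scale height / scale length
    # Projected axis ratio of an oblate spheroid with intrinsic flattening q (approximation).
    b_over_a = np.sqrt(q ** 2 * sin2_i + cos_i ** 2)
    edge_on = b_over_a < 0.4                 # the "edge-on" selection used for the real sample
    # Crude proxy: treat the projected minor axis as the apparent thickness.
    inflation = 100.0 * (b_over_a[edge_on] / q - 1.0)
    print(f"q = {q}: median apparent inflation ~ {np.median(inflation):.0f}% (toy proxy)")
```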
As the inclination of the toy disk model with respect to the observer increases, the percent difference between the intrinsic and recovered scale heights also increases. Galaxies with inclinations larger than 12 degrees do not survive our selection criterion for the thickest galaxy models considered (q = 0.35). The solid line in Figure 7 shows the q = 0.25 model. The shaded region shows the scatter, encompassing the lowest and highest axis ratios considered (q = 0.15−0.35). We calculate the median percentage difference between the intrinsic and recovered scale heights for all three axis ratios. The median of these medians is the dashed line, 22%. We adopt this factor to correct the median of the observations studied in this paper.
Figure 7. This figure shows the percentage difference between the intrinsic and recovered scale heights for toy models of disk galaxies that are randomly inclined to our line-of-sight. The toy models have ratios (q) of intrinsic scale height to scale length that vary between 0.15 and 0.35. The horizontal line represents the median of the models, 22%. We use the results shown here to correct the population median of the observed scale heights measured in this paper.
In conclusion, we derive a 22% median difference between the intrinsic scale heights and those recovered for randomly-inclined disk galaxies that pass our selection criterion. This indicates that, on average, the scale heights for galaxies in our sample are artificially inflated by ∼ 22% and we correct the median results shown in Figures 5 and 6 by this factor.
B. CONSTRUCTING THE 1D CONVOLUTION KERNEL
To fit the 1D vertical surface brightness profiles, we construct and convolve a 1D line spread function (LSF) with a sech² model (see §3). The 1D LSF describes how the shape of the blurring from the 2D point-spread function varies with the distance above and below the galaxy midplane. To create the appropriate 1D kernel for our observations, we follow the approach of Elmegreen et al. (2017) and simulate a convolution between the 2D point-spread function and a line of infinitesimal thickness.
We first create a suite of 2D images of mock disk galaxies that are perfectly edge-on and infinitesimally thick. The mock images adopt the pixel scale of the HST images (0.″06 per pixel side). Each mock galaxy has a surface brightness profile that follows a 1D Sérsic model (with n = 1, normalized to a total flux of 1) with a given scale length. We create a suite of mock galaxies with scale lengths that vary from 0.″2 to 1.″5, generally reflecting the distribution of the real galaxy sample. The major axis of each mock disk is set to lie along the horizontal of the image, and the disks span a single HST pixel (i.e., they are a line). For each mock disk, we simulate a convolved image in each of the three filters that we use in this paper (HST/ACS+WFC3: F850LP, F125W, F160W), convolving with the 2D PSF provided by the 3D-HST survey (Skelton et al. 2014) for each of the filters.
For each image of each mock disk, we extract the surface brightness profile above and below the midplane of the mock disk. The shape of the resulting surface brightness profile corresponds to the appropriate 1D projection of the 2D PSF for a given pair of disk model and filter. Figure 8 shows the line spread functions derived for the suite of toy models in each of the three HST filters.
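A compact version of this construction is sketched below. The Gaussian 2D PSF is a placeholder for the filter-specific 3D-HST PSF images used in practice, and the specific array sizes are ours; everything else follows the recipe above (a unit-flux exponential line at the HST pixel scale, convolved in 2D, then cut vertically at one effective radius).

```python
import numpy as np
from astropy.convolution import convolve_fft

# Mock an infinitesimally thin, perfectly edge-on disk: an exponential (Sersic n=1)
# profile placed along a single pixel row at the HST pixel scale (0.06" per pixel).
ny, nx = 101, 301
pix = 0.06                                   # arcsec per pixel
r_d = 0.4 / pix                              # scale length in pixels (the fiducial 0."4 model)
x = np.arange(nx) - nx // 2
disk = np.zeros((ny, nx))
disk[ny // 2, :] = np.exp(-np.abs(x) / r_d)
disk /= disk.sum()                           # normalize to a total flux of 1

# Placeholder Gaussian 2D PSF; in practice the 3D-HST PSF image for each filter is used.
yy, xx = np.mgrid[:25, :25] - 12.0
sigma = 0.18 / 2.355 / pix                   # FWHM ~ 0."18, expressed in pixels
psf = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
psf /= psf.sum()

image = convolve_fft(disk, psf)

# The 1D LSF is the vertical cut through the convolved image at one effective radius.
col = nx // 2 + int(round(1.678 * r_d))      # R_e ~ 1.678 r_d for an exponential profile
lsf = image[:, col]
lsf /= lsf.sum()
```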
In a given filter (i.e., a given 2D PSF), we find that the shapes of the extracted 1D PSFs are relatively similar (< 10% variation in the width) from column to column and for mock disks of different scale lengths. For all of the fits described in this paper, and for each of the three filters, we adopt the 1D PSF defined at one effective radius from the center of the mock disk with scale length 0.″4.
Figure 5. The scale heights of the galaxy sample are shown as a function of lookback time (bottom x-axis) and redshift (top x-axis). The scale height is measured at one effective radius from the galaxy center. The weighted average and uncertainty of the two measurements carried out on both sides of the galaxy are reported. The running median (solid red line) and 16th−84th percentiles (shaded region) of the sample are shown. The dashed red line shows the median corrected for inclination (see Appendix A). The median galaxy scale height of the sample is generally constant over a large period of cosmic time (0.4 ≤ z ≤ 2.5). We truncate the vertical axis at 2.5 kpc, and note that 2% of the sample have larger scale heights and are not shown here. In the right panel, we show the distribution of the sample, with the median and inclination-corrected median marked.
Figure 8. The line spread functions (LSF) that are used to model the observed surface brightness profiles in this paper are shown by the black dashed lines. An LSF is constructed for each of the three HST filters used in this paper: ACS/F850LP, WFC3/F125W, and WFC3/F160W. They are determined by convolving a 2D point-spread function with an infinitesimally thick disk model of a given size (color-coding). The LSF are applicable at one effective radius, where the measurements in this study are carried out. A suite of LSF for disk models of varying sizes are shown with the color-coded lines. There is negligible variation in the LSF between the disk models. The FWHMs of the LSFs range from 0.″18 to 0.″25, depending on the filter.
https://archive.stsci.edu/prepds/3d-hst/
2. The scale height defined in the sech² model, and adopted in this paper, is ∼ one-half of the exponential scale height.
REFERENCES
Abazajian, K., Adelman-McCarthy, J. K., Agüeros, M. A., et al. 2003, AJ, 126, 2081, doi: 10.1086/378165
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393, doi: 10.1051/aas:1996164
Bird, J. C., Kazantzidis, S., Weinberg, D. H., et al. 2013, ApJ, 773, 43, doi: 10.1088/0004-637X/773/1/43
Bird, J. C., Loebman, S. R., Weinberg, D. H., et al. 2021, MNRAS, 503, 1815, doi: 10.1093/mnras/stab289
Bizyaev, D. V., Kautsch, S. J., Mosenkov, A. V., et al. 2014, ApJ, 787, 24, doi: 10.1088/0004-637X/787/1/24
Bland-Hawthorn, J., & Gerhard, O. 2016, ARA&A, 54, 529, doi: 10.1146/annurev-astro-081915-023441
Bovy, J., Rix, H.-W., & Hogg, D. W. 2012, ApJ, 751, 131, doi: 10.1088/0004-637X/751/2/131
Brammer, G. B., van Dokkum, P. G., & Coppi, P. 2008, ApJ, 686, 1503, doi: 10.1086/591786
Bruzual, G., & Charlot, S. 2003, MNRAS, 344, 1000, doi: 10.1046/j.1365-8711.2003.06897.x
Calzetti, D., Armus, L., Bohlin, R. C., et al. 2000, ApJ, 533, 682, doi: 10.1086/308692
Ceverino, D., Primack, J., & Dekel, A. 2015, MNRAS, 453, 408, doi: 10.1093/mnras/stv1603
Chabrier, G. 2003, PASP, 115, 763, doi: 10.1086/376392
Conroy, C. 2013, ARA&A, 51, 393, doi: 10.1146/annurev-astro-082812-141017
Elmegreen, B. G., & Elmegreen, D. M. 2006, ApJ, 650, 644, doi: 10.1086/507578
Elmegreen, B. G., Elmegreen, D. M., Tompkins, B., & Jenks, L. G. 2017, ApJ, 847, 14, doi: 10.3847/1538-4357/aa88d4
Faber, S. M., & Gallagher, J. S. 1979, ARA&A, 17, 135, doi: 10.1146/annurev.aa.17.090179.001031
Ferreira, L., Adams, N., Conselice, C. J., et al. 2022a, ApJL, 938, L2, doi: 10.3847/2041-8213/ac947c
Ferreira, L., Conselice, C. J., Sazonova, E., et al. 2022b, arXiv e-prints, arXiv:2210.01110, doi: 10.48550/arXiv.2210.01110
Foreman-Mackey, D., Hogg, D. W., Lang, D., & Goodman, J. 2013, PASP, 125, 306, doi: 10.1086/670067
Giavalisco, M., Ferguson, H. C., Koekemoer, A. M., et al. 2004, ApJL, 600, L93, doi: 10.1086/379232
Gilmore, G., & Reid, N. 1983, MNRAS, 202, 1025, doi: 10.1093/mnras/202.4.1025
Grogin, N. A., Kocevski, D. D., Faber, S. M., et al. 2011, ApJS, 197, 35, doi: 10.1088/0067-0049/197/2/35
Holmberg, J., Nordström, B., & Andersen, J. 2009, A&A, 501, 941, doi: 10.1051/0004-6361/200811191
Kartaltepe, J. S., Rose, C., Vanderhoof, B. N., et al. 2022, arXiv e-prints, arXiv:2210.14713. https://arxiv.org/abs/2210.14713
Kassin, S. A., Weiner, B. J., Faber, S. M., et al. 2007, ApJL, 660, L35, doi: 10.1086/517932
—. 2012, ApJ, 758, 106, doi: 10.1088/0004-637X/758/2/106
Koekemoer, A. M., Faber, S. M., Ferguson, H. C., et al. 2011, ApJS, 197, 36, doi: 10.1088/0067-0049/197/2/36
Kriek, M., van Dokkum, P. G., Labbé, I., et al. 2009, ApJ, 700, 221, doi: 10.1088/0004-637X/700/1/221
Leaman, R., Mendel, J. T., Wisnioski, E., et al. 2017, MNRAS, 472, 1879, doi: 10.1093/mnras/stx2014
Meng, X., & Gnedin, O. Y. 2021, MNRAS, 502, 1433, doi: 10.1093/mnras/stab088
Mobasher, B., Dahlen, T., Ferguson, H. C., et al. 2015, ApJ, 808, 101, doi: 10.1088/0004-637X/808/1/101
Moster, B. P., Naab, T., & White, S. D. M. 2013, MNRAS, 428, 3121, doi: 10.1093/mnras/sts261
Pandya, V., Primack, J., Behroozi, P., et al. 2019, MNRAS, 488, 5580, doi: 10.1093/mnras/stz2129
Papovich, C., Labbé, I., Quadri, R., et al. 2015, ApJ, 803, 26, doi: 10.1088/0004-637X/803/1/26
Peeples, M. S., Corlies, L., Tumlinson, J., et al. 2019, ApJ, 873, 129, doi: 10.3847/1538-4357/ab0654
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A13, doi: 10.1051/0004-6361/201525830
Ravindranath, S., Giavalisco, M., Ferguson, H. C., et al. 2006, ApJ, 652, 963, doi: 10.1086/507016
Robertson, B. E., Tacchella, S., Johnson, B. D., et al. 2022, arXiv e-prints, arXiv:2208.11456. https://arxiv.org/abs/2208.11456
Simons, R. C., Kassin, S. A., Trump, J. R., et al. 2016, ApJ, 830, 14, doi: 10.3847/0004-637X/830/1/14
Simons, R. C., Kassin, S. A., Weiner, B. J., et al. 2017, ApJ, 843, 46, doi: 10.3847/1538-4357/aa740c
Skelton, R. E., Whitaker, K. E., Momcheva, I. G., et al. 2014, ApJS, 214, 24, doi: 10.1088/0067-0049/214/2/24
Tomassetti, M., Dekel, A., Mandelker, N., et al. 2016, MNRAS, 458, 4477, doi: 10.1093/mnras/stw606
Übler, H., Genzel, R., Wisnioski, E., et al. 2019, ApJ, 880, 48, doi: 10.3847/1538-4357/ab27cc
van der Kruit, P. C. 1988, A&A, 192, 117
van der Kruit, P. C., & Freeman, K. C. 2011, ARA&A, 49, 301, doi: 10.1146/annurev-astro-083109-153241
van der Wel, A., Bell, E. F., Häussler, B., et al. 2012, ApJS, 203, 24, doi: 10.1088/0067-0049/203/2/24
van der Wel, A., Chang, Y.-Y., Bell, E. F., et al. 2014, ApJL, 792, L6, doi: 10.1088/2041-8205/792/1/L6
Villumsen, J. V. 1985, ApJ, 290, 75, doi: 10.1086/162960
Weiner, B. J., Willmer, C. N. A., Faber, S. M., et al. 2006, ApJ, 653, 1027, doi: 10.1086/508921
Wisnioski, E., Förster Schreiber, N. M., Wuyts, S., et al. 2015, ApJ, 799, 209, doi: 10.1088/0004-637X/799/2/209
Wyse, R. F. G., & Gilmore, G. 1995, AJ, 110, 2771, doi: 10.1086/117729
Wyse, R. F. G., Gilmore, G., Norris, J. E., et al. 2006, ApJL, 639, L13, doi: 10.1086/501228
Yoachim, P., & Dalcanton, J. J. 2006, AJ, 131, 226, doi: 10.1086/497970
Zhang, H., Primack, J. R., Faber, S. M., et al. 2019, MNRAS, 484, 5170, doi: 10.1093/mnras/stz339
| [] |
[
"IWAHORI-HECKE ALGEBRA AND UNRAMIFIED LOCAL L-FUNCTIONS",
"IWAHORI-HECKE ALGEBRA AND UNRAMIFIED LOCAL L-FUNCTIONS"
] | [
"Masao Oi ",
"Ryotaro Sakamoto ",
"Hiroyoshi Tamori "
] | [] | [] | In this paper, we compute the Hecke action of a certain test function on the space of an unramified principal series of a connected reductive group over a non-archimedean local field by using the theory of Iwahori-Hecke algebra. As an application, we obtain a new expression of the local L-functions of unramified representations. | 10.1007/s00209-023-03214-9 | [
"https://arxiv.org/pdf/1903.07613v4.pdf"
] | 246,485,858 | 1903.07613 | 4f8d4299ee31b3a477a2988fffe06064a5754b01 |
IWAHORI-HECKE ALGEBRA AND UNRAMIFIED LOCAL L-FUNCTIONS
8 Feb 2022
Masao Oi
Ryotaro Sakamoto
Hiroyoshi Tamori
In this paper, we compute the Hecke action of a certain test function on the space of an unramified principal series of a connected reductive group over a non-archimedean local field by using the theory of Iwahori-Hecke algebra. As an application, we obtain a new expression of the local L-functions of unramified representations.
parahoric subgroup J of GSp_4(Q_p), which is defined by
$$
J := \left\{ \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in \mathrm{GSp}_4(\mathbb{Q}_p) \;\middle|\; A, D \in \mathrm{GL}_2(\mathbb{Z}_p),\ B \in \mathrm{M}_2(\mathbb{Z}_p),\ C \in \mathrm{M}_2(p\mathbb{Z}_p) \right\},
$$
Taylor proved that
$$
L(s, \pi, \mathrm{Spin}) = \det\!\left(1 - p^{-(s+3/2)}\, \pi(U_J)\big|_{V^J}\right)^{-1},
$$
where U_J is the characteristic function of the open compact subset J·diag(p, p, 1, 1)·J, normalized so that U_J(diag(p, p, 1, 1)) = vol(J)^{-1}. These formulas are, in addition to their original importance in the study of modular forms, also interesting from a purely representation-theoretic viewpoint, as follows. In the definition of the local L-functions for unramified representations, we utilize the Satake parameters determined by the Satake isomorphism. This amounts to looking at the action of the spherical Hecke algebra on the subspace of spherical vectors, which is 1-dimensional. For example, in the case of GL_2 mentioned above, we consider the action of all elements of C_c^∞(GL_2(Z_p)\GL_2(Q_p)/GL_2(Z_p)) (bi-GL_2(Z_p)-invariant test functions on GL_2(Q_p)) on the 1-dimensional subspace V_χ^{GL_2(Z_p)} of GL_2(Z_p)-fixed vectors. On the other hand, in the above formulas, the local L-function is expressed by the characteristic polynomial of the action of only one test function on a subspace whose dimension is the same as the degree of the local L-function. For instance, in the case of GL_2, the local L-function L(s, π, Std) is described by the action of a single test function U_J on the subspace V_χ^J, which is 2-dimensional.
In this paper, we establish these kinds of formulas for connected reductive groups and general finite-dimensional representations of the Langlands dual groups. For simplicity, we assume that G is split in the rest of this introduction. Let T be a split maximal torus of G defined over F. By fixing a Borel subgroup B containing T, a dominance is determined on the characters and cocharacters of T. Then, to each dominant cocharacter µ of T, we can associate an open compact subgroup J_µ of G(F) (see Section 2.3) and a normalized characteristic function 𝟙_µ of a certain J_µ-double coset (see Sections 3.2 and 3.3). For a finite-dimensional representation r of the Langlands dual group Ĝ, we put P^+(r) to be the set of dominant weights in r. Note that each element µ of P^+(r) can be regarded as a dominant cocharacter of T through the duality between G and Ĝ. For each µ ∈ P^+(r), we write m_µ for the multiplicity of µ in r. The following is the main result of this paper.
Theorem 1.1 (Theorem 4.8 and Remark 4.10). Let π be an irreducible unramified representation of G(F). We take an unramified character χ of T(F) such that π is realized as a subquotient of the normalized parabolic induction (I_χ, V_χ) of χ. Then we have an equality
$$
L(s, \pi, r) = \prod_{\mu \in P^+(r)} \det\!\left(1 - q^{-(s + \langle \rho_B, \mu \rangle)}\, I_\chi(\mathbb{1}_\mu)\big|_{V_\chi^{J_\mu}}\right)^{-m_\mu},
$$
where ρ_B is the half sum of the positive roots of T in G.
Note that if (G, r) is (GL_2, Std) or (GSp_4, Spin), then the set P^+(r) is a singleton and the formula in Theorem 1.1 is nothing but the identity in the above examples (see Sections 5.1 and 5.3). More generally, when r is a quasi-minuscule representation (see Definition 4.11), we get a formula similar to the above examples (see Corollary 4.13). See Remark 4.14 and Table 1 for a list of pairs (G, r) such that G is simple and r is quasi-minuscule.
We also remark that Theorem 1.1 (Theorem 4.8) is proved in a slightly more general setting where G might not be split and π is a parahoric-spherical representation of G(F ) (i.e., an irreducible smooth representation having a nonzero vector fixed by a parahoric subgroup, see Definition 4.1). When π is not unramified but spherical for some parahoric subgroup, we consider the semisimple L-function (see Definition 4.4) instead of the usual L-function.
We explain the outline of the proof of Theorem 1.1. The key in our proof is that the action of I_χ(𝟙_µ) on the space V_χ^{J_µ} can be triangulated with respect to an ordered basis of V_χ^{J_µ}. To explain this, we assume that µ is strictly dominant for simplicity. In this case, J_µ is an Iwahori subgroup, hence let us simply write I for J_µ. Then we can find an explicit basis {v_w^∨}_{w∈W} of the subspace V_χ^I of I-fixed vectors in V_χ, which is labelled by the elements of the Weyl group W of T in G. With respect to this ordered basis of V_χ^I, we have the following:
Proposition 1.2 (Proposition 3.4). For any w ∈ W, there exists a family {c_{w'}}_{w'∈W, w'≥w} of complex numbers satisfying
$$
I_\chi(\mathbb{1}_\mu) \cdot v_w^\vee = c_w \cdot v_w^\vee + \sum_{\substack{w' \in W \\ w' > w}} c_{w'} \cdot v_{w'}^\vee.
$$
Moreover, the number c_w can be determined explicitly.
Once this proposition is proved, we immediately get a description of the characteristic polynomial of the action of 𝟙_µ on V_χ^I. Then we obtain Theorem 1.1 by tracking the construction of the Satake parameter and rewriting the local L-function L(s, π, r) in terms of the weights of the representation r.
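The linear-algebra step hiding in this outline is that, once the action is triangular with respect to an ordered basis, det(1 − X·(operator)) only depends on the diagonal constants c_w. The toy check below (plain NumPy, unrelated to any actual Hecke operator) illustrates exactly this point.

```python
import numpy as np

# Toy illustration: for an upper-triangular operator, the characteristic polynomial,
# and hence det(1 - X * operator), only sees the diagonal entries c_w; this is the
# passage from the triangular action of Proposition 1.2 to the product in Theorem 1.1.
rng = np.random.default_rng(0)
n = 5
c = rng.normal(size=n)                       # stand-ins for the diagonal constants c_w
A = np.triu(rng.normal(size=(n, n)), k=1) + np.diag(c)

X = 0.3                                      # stand-in for q^{-(s + <rho_B, mu>)}
lhs = np.linalg.det(np.eye(n) - X * A)
rhs = np.prod(1.0 - X * c)
print(np.isclose(lhs, rhs))                  # True
```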
Originally, we proved this proposition by making full use of the Chevalley basis, assuming that our group G is split. By utilizing various relations of the Chevalley basis, we carried out an induction on the length of w ∈ W; the problem is then essentially reduced to the case of SL_2. Although the basic idea of our original proof is fairly simple in this way, we had to show a lot of technical statements about group-theoretic properties of parahoric subgroups to justify the induction step (cf. the older version of this paper; [OST19]).
However, after we released the first version of this paper, Thomas Haines told the authors that the above triangularity result can be proved in a more sophisticated way if we appeal to the theory of the Iwahori-Hecke algebra. Furthermore, he also explained that his approach naturally enables us to prove Proposition 1.2 for any general (i.e., possibly non-split) connected reductive group G. Hence we decided to follow his idea and present his simplified version of the proof in this paper.
The outline of the proof of Proposition 1.2 is as follows. We continue to assume that G is split in the following for simplicity. We write N for the unipotent radical of B and put M to be the space C_c^∞(T(O_F)N(F)\G(F)/I), where I denotes an Iwahori subgroup of G(F). Then the space M has commuting actions of two kinds of C-algebras; one is the group algebra R of the cocharacter group of T, and the other one is the Iwahori-Hecke algebra H_I := C_c^∞(I\G(F)/I). (See Section 2.6 for the details.)
This space M can be understood as the space of I-fixed vectors in the universal unramified principal series. More precisely, any unramified character χ of T(F) defines a C-algebra homomorphism from R to C (which we again denote by χ). Then, by specializing the R-module M to a C-module via χ, we obtain the space (n-Ind_{B(F)}^{G(F)} χ^{-1})^I of I-fixed vectors in the unramified principal series of χ^{-1}, i.e., we have
$$
\mathbb{C} \otimes_{R, \chi} M \cong \left(\text{n-Ind}_{B(F)}^{G(F)} \chi^{-1}\right)^{I}.
$$
Also, we can find an R-basis {v w } w∈W of M labelled by the elements of W . With this language, Proposition 1.2 is rephrased as follows:
Proposition 1.3 (Proposition 3.3). For any w ∈ W, there exists a family {a_{w'}}_{w'∈W, w'≤w} of elements of R satisfying
$$
v_w * \Theta_\mu = a_w \cdot v_w + \sum_{\substack{w' \in W \\ w' < w}} a_{w'} \cdot v_{w'}.
$$
Moreover, a_w can be explicitly determined. Here Θ_µ is an element of the Iwahori-Hecke algebra which is a constant multiple of 𝟙_µ (see Section 2.7).
The point here is that the ring structure of H_I and its action on R are well investigated, especially in the works of Haines-Kottwitz-Prasad (split case, [HKP10]) and Rostami (general case, [Ros15]). By using several basic relations of the Iwahori-Hecke algebra (e.g., the Bernstein relation, see Proposition 2.15), we can prove Proposition 1.3 by an induction argument on the length of w ∈ W.
It seems that our computations in the previous version of the proof are essentially encoded in the various identities in the theory of the Iwahori-Hecke algebra. In this sense, the core of the new proof presented in this paper is not totally different to our original proof. Nevertheless, we would like to emphasize that most of the arguments are drastically simplified and our main result is far more generalized by following the formulation suggested by Haines.
Notations and conventions. Let F be a non-archimedean local field. We let O, p, and k denote the ring of integers of F, its maximal ideal, and its residue field, respectively. Let q be the order of k. We write W_F and I_F for the Weil group of F and its inertia subgroup, respectively. We fix a lift Frob of the geometric Frobenius in Gal(k̄/k) (i.e., x ↦ x^{q^{-1}}) to W_F.
For an algebraic variety J over F (written in a bold letter), we let J := J(F) (written in the usual italic letter) denote the set of its F-valued points. For an algebraic group T, we write X^*(T) (resp. X_*(T)) for the group of characters Hom(T, G_m) (resp. cocharacters Hom(G_m, T)) of T. When an algebraic group T is defined over F, we write X^*(T)_F and X_*(T)_F for the groups of F-rational characters and cocharacters of T, respectively.
For an abelian group M, we write M_R for M ⊗_Z R.
2. Iwahori subgroup and Iwahori-Hecke algebra
In this section, we review the fundamental properties of the Iwahori-Hecke algebra needed for us. The content of this section is based on the paper [HKP10] of Haines-Kottwitz-Prasad and also the paper [Ros15] of Rostami, which generalizes the results of [HKP10] from the split case to the non-split case.
2.1. Iwahori subgroup and Kottwitz homomorphism. Let G be a connected reductive group over F . We write B(G, F ) (resp. B red (G, F )) for the Bruhat-Tits building (resp. reduced Bruhat-Tits building) of G over F . We fix a point o ∈ B(G, F ) whose image in B red (G, F ) is a special vertex. Let K denote the special maximal parahoric subgroup of G associated with o. We fix a maximal Fsplit torus A of G whose apartment A(A, F ) contains the point o. Note that, by using the fixed special point o, the apartment A(A, F ) is identified with X * (A) R :
X * (A) R ∼ = A(A, F ) : µ → o + µ.
We furthermore fix an Iwahori subgroup I contained in K. Then I determines an alcove C of the apartment A(A, F ) whose closure contains the special point o. Let Φ := Φ(G, A) be the set of roots of A in G. Then the alcove C determines a system Φ + (resp. Φ − ) of positive (resp. negative) roots in Φ. We put Φ red to be the set of reduced roots in Φ and put Φ ± red := Φ ± ∩ Φ red . We write ∆ for the set of simple roots.
Let M be the centralizer of the fixed maximal F -split torus A in G, which a minimal F -rational Levi subgroup of G. Let P be the minimal parabolic subgroup with Levi factor M such that the corresponding set of positive roots is given by Φ + . We write κ M for the Kottwitz homomorphism for M (see [Kot97,Section 7.7]):
κ M : M ։ X * (Z(M) IF ) Frob ,
where •M is the Langlands dual group of M, • (−) IF denotes the group of I F -coinvariants, and • (−) Frob denotes the group of Frobenius invariants.
In the following, we simply write Λ M for X * (Z(M) IF ) Frob . We put
M 1 := Ker(κ M : M ։ Λ M ).
Thus we have an identification M/M 1 ∼ = Λ M . For an element µ ∈ Λ M , we write µ for the inverse image κ −1 M (µ) of µ in M/M 1 (we often loosely regard µ ∈ M/M 1 as an element of M as long as it does not cause any confusion).
According to [Ros15, Section 5.2], we introduce a dominance on Λ M as follows. We put ν M : M → Hom(X * (M) F , Z) to be the homomorphism defined by
ν M (m) := [χ → val F (χ(m))].
Then there exists a homomorphism q M : Λ M → Hom(X * (M) F , Z) such that q M • κ M = ν M . By tensoring R over Z and composing with a natural isomorphism
Hom(X * (M) F , R) ∼ = X * (A) R , we get an identification Λ M,R ∼ = − → X * (A) R : M κM / / νM & & ◆ ◆ ◆ ◆ ◆ ◆ ◆ ◆ ◆ ◆ ◆ ◆ Λ M qM / / Λ M,R ∼ = Hom(X * (M) F , Z) / / Hom(X * (M) F , R) ∼ = / / X * (A) R (see [Ros15,
Sections 2.5-2.7] for details). Hence we can transport a dominance on X * (A) R ( ∼ = A(A, F )), which is determined by the alcove C, to Λ M,R . We say that
an element µ of Λ M is dominant if its image in Λ M,R is dominant.
For any µ ∈ Λ M and α ∈ X * (A) R , we often simply write α, µ for α, q M (µ) , which is the value at (α, q M (µ)) of the natural pairing −, − on X * (A) R ×X * (A) R .
Remark 2.1. In [Ros15], κ M and ν M are defined to be −κ M and −ν M , respectively (see [Ros15, Section 2.7, 519 page]). Since q M is not affected by the difference of these normalizations (the sign differences cancel out), the identification between Λ M,R and X * (A) R in this paper is the same as that in [Ros15].
2.2. Iwahori-Weyl group. Let W̃ denote the Iwahori-Weyl group defined by
$$
\widetilde{W} := N_G(A)(F)/M_1,
$$
where N_G(A) is the normalizer group of A in G. We write W := W_G(A)(F) = (N_G(A)/M)(F) for the relative Weyl group. The special point o determines a subgroup W_o of W̃ which maps isomorphically onto W, and hence a splitting
$$
\widetilde{W} \cong \Lambda_M \rtimes W.
$$
See [Ros15, Sections 2.8 and 2.9] and also [Ric16] for the details.
2.3. Parahoric subgroups. For any facet F of the apartment A(A, F ), we let J F denote the parahoric subgroup associated with F . Note that then, with this notation, we have K = J o and I = J C .
The fixed special point o defines "a valuation of root datum" of G, which consists of group-theoretic data satisfying several axiomatic properties (see [BT72,Section 6.1] for the definition of a valuation of root datum). In particular, for each α ∈ Φ, the root subgroup U α = U α (F ) of G has a descending filtration {U α,r } r∈R .
Remark 2.2. When G is split, the choice of a special point o of the Bruhat-Tits building B(G, F ), or equivalently, its associated valuation of root data can be made explicitly in terms of a Chevalley basis. More precisely, a Chevalley basis of G consists of homomorphisms x α : G a → U α ⊂ G for each α ∈ Φ satisfying several axioms, where U α denotes the root subgroup of α in G (cf. [Ste16, page 21, Corollary 1]). Then, for α ∈ Φ, the filtration {U α,r } r∈R of U α = U α (F ) is given by
U α,r = x α ({a ∈ F | val F (a) ≥ r}).
For a dominant element µ ∈ Λ_M, we define an open compact subgroup J_µ such that I ⊂ J_µ ⊂ K by
$$
J_\mu := \left\langle M_1,\ U_{\alpha, f_\mu(\alpha)} \;\middle|\; \alpha \in \Phi_{\mathrm{red}} \right\rangle, \qquad \text{where } f_\mu \colon \Phi_{\mathrm{red}} \to \mathbb{R},\quad f_\mu(\alpha) := \begin{cases} 0 & \text{if } \langle \alpha, \mu \rangle \geq 0, \\ 0+ & \text{if } \langle \alpha, \mu \rangle < 0 \end{cases}
$$
(0+ denotes any sufficiently small positive number). This group J µ is nothing but the parahoric subgroup J F associated with the facet F such that • F is contained in the closure C of the fixed alcove C, • F contains o, and • F contains o + εµ for any sufficiently small ε > 0. If we put W F to be the subgroup ofW generated by the reflections with respect to the walls containing the facet F (note that W F is automatically contained in W o ), then we have
J µ (= J F ) = IW F I.
This follows from that the Iwahori subgroup I and the Iwahori-Weyl groupW form a Tits system and that a parahoric subgroup is a parabolic subgroup in the sense of a Tits system (see [
W µ := s α | α ∈ Φ, s α (µ) = µ = s α | α ∈ Φ, α, µ = 0 ,
where s α denotes the reflection with respect to a root α ∈ Φ.
2.4. Some lemmas on Iwahori subgroups. In terms of the valuation of root datum associated with o, the Iwahori subgroup I is explicitly described as follows:
I = M 1 , U α,0 , U β,0+ | α ∈ Φ + red , β ∈ Φ − red .
Furthermore, the Iwahori subgroup I has the following uniqueness of the product expression (see [Tit79, Section 3.1.1]).
Proposition 2.3. The natural multiplication map
α∈Φ + red U α,0 × M 1 × β∈Φ − red U β,0+ → I
is bijective with any orders on Φ + red and Φ − red (also, the products over Φ + red and Φ − red can be swapped).
For an F -rational standard parabolic subgroup Q of G with Levi decomposition Q = LU, we introduce the following notation (U denotes the opposite to U):
• We put Φ + red (U) := {α ∈ Φ + red | U α ⊂ U} and define
I U := α∈Φ + red (U) U α,0 ⊂ G.
• We put Φ ± red (L) := {α ∈ Φ ± red | U α ⊂ L} and define
I L := α∈Φ + red (L) U α,0 × M 1 × α∈Φ − red (L) U α,0+ ⊂ G, • We put Φ − red (U) := {α ∈ Φ − red | U α ⊂ U} and define I U := α∈Φ − red (U) U α,0+ ⊂ G.
Note that the definitions of I U , I L , I U are independent of the choice of orders on the sets of roots and that these sets are subgroups of G.
Lemma 2.4. For any F -rational standard parabolic subgroup Q of G with Levi decomposition Q = LU, the following hold.
(1) We have I = I U I L I U = I U I L I U .
(2) For any w ∈ W , we have wI U w −1 ⊂ I.
(3) For any dominant µ ∈ Λ M , we have µI U µ −1 ⊂ I U and µ −1 I U µ ⊂ I U .
Proof.
(1) This is clear from Proposition 2.3 and the definitions of I U , I L , and I U .
(2) Since we regard w ∈ W as an element ofW through the isomorphism W o ∼ = W , the action of w on the apartment A(A, F ) stabilizes the special point o. Hence w stabilizes the valuation of root datum associated with o. In particular, we have wU α,r w −1 = U w(α),r for any α ∈ Φ and r ∈ R.
Thus we get wU α,0+ w −1 ⊂ I for any α ∈ Φ − red , which implies that we have wI U w −1 ⊂ I.
(3) Since {U α,r } r∈R consists of a part of the valuation of root datum, we have µU α,0 µ −1 = U α, α,νM (µ) (see[BT72, Proposition 6.2.10]). The fact that
q M • κ M = ν M shows that α, ν M (µ) = α, ν M (κ −1 M (µ)) = α, q M (µ)
. Since the dominance on Λ M is introduced through the homomorphism q M (see Section 2.1), we have α, q M (µ) ≥ 0 for any α ∈ Φ + . Thus we have µU α,0 µ −1 ⊂ U α,0 , hence get µI U µ −1 ⊂ I U .
We can check that µ −1 U α,0+ µ ⊂ U α,0+ for any α ∈ Φ − red (hence µ −1 I U µ ⊂ I U ) in a similar way.
Lemma 2.5.
(1) For any w ∈ W , we have wIw −1 I ∩ N I = I.
Φ red → R by f w (α) = ® 0 if w −1 (α) ∈ Φ + red , 0+ if w −1 (α) ∈ Φ −wIw −1 = M 1 , U α,fw(α) | α ∈ Φ red .
We put I ′ N := α∈Φ + red U α,fw(α) and I ′ N := α∈Φ − red U α,fw(α) . Note that wI M w −1 = I M = M 1 . Then, similarly to Proposition 2.3, we see that the multiplication map N × M × N → G induces a bijection 2.5. Iwahori-Hecke algebra. Let H I := C ∞ c (I\G/I) be the Iwahori Hecke algebra, which has a structure of a C-algebra via convolution product denoted by * .
I ′ N × I M × I ′ N → wIw −1 . Since I ′ N ⊂ I N
Here we use the Haar measure dg on G normalized so that dg(I) = 1 in the definition of the convolution product. Recall that we have the Iwahori decomposition (see [Hai14,Lemma 4.57]):
$$
G = \bigsqcup_{w \in \widetilde{W}} I w I.
$$
Thus, if we put T w to be the characteristic function ½ IwI of the double coset IwI for w ∈W , then the set {T w } w∈W forms a C-basis of H I .
According to [Ros15,Definition 5.3.1], we normalize T w for w ∈W by
T w := q(w) − 1 2 T w .
Here, we define a function q :W → Z >0 by This quantity can be expressed in a root-theoretic way as follows (see [Ric16, Section 1.4] for the details). We letW nr denote the Iwahori-Weyl group over the completion F of the maximal unramified extension of F . Then, by [Ric16, Proposition 1.11], W is contained inW nr and we have
q(w) = q ℓ nr (w)
for any w ∈W , where ℓ nr denotes the length function onW nr .
Remark 2.6. For any dominant element λ ∈ Λ M , we can compute ℓ nr (w) by using the result of Lusztig [Lus89] on affine Weyl groups as follows. Let S be a maximal F -split torus of G which is defined over F and contains A. Let Σ be the scaled root system associated with Φ(G, S), i.e., the unique reduced root system in X * (S) R such that hyperplanes determined by the affine functions Σ + Z on the apartment A(S,F ) coincide with those determined by the affine roots with respect to Φ(G, S) (see [Ros15, Section 2.3] for details). Then Λ M can be regarded as a subgroup of the affine Weyl group associated with the reduced root system Σ (see [Ros15, Section 3.3]). By putting ρ nr to be the half sum of all positive roots in Σ, we have 2.6. Universal unramified principal series. Recall that we fixed a minimal Frational parabolic subgroup P of G with Levi factor M. We let N denote the unipotent radical of P. Hence we have a Levi decomposition P = MN. We put
M := C ∞ c (M 1 N \G/I).
For w ∈W , we put v w := ½ M1N wI . Since we have
$$
G = \bigsqcup_{w \in \widetilde{W}} I w I = \bigsqcup_{w \in \widetilde{W}} M_1 N w I
$$
(see [Hai14, Lemma 4.61]), the set {v_w}_{w∈W̃} forms a C-basis of M. Let R be the group algebra C[Λ_M] of Λ_M, which is isomorphic to C_c^∞(M/M_1). For µ ∈ Λ_M, we let R_µ denote the element of the group algebra C[Λ_M] corresponding to µ. Then {R_µ}_{µ∈Λ_M} forms a C-basis of R. We make M into a left R-module by
$$
(r \cdot f)(g) := \int_M r(y)\, \delta_P^{1/2}(y)\, f(y^{-1} g)\, dy
$$
for any r ∈ R and f ∈ M, where δ P denotes the modulus character of P and the Haar measure dy on M is normalized so that dy(M 1 ) = 1.
We will next make M into a right H I -module. For this, we consider the set
C ∞ c (M 1 N \G) of compactly supported left-M 1 N -invariant smooth functions.
(We call this space the universal unramified principal series.) Then we have a right action of the full Hecke algebra H :
= C ∞ c (G) on C ∞ c (M 1 N \G) given by f → f * h for any f ∈ C ∞ c (M 1 N \G) and h ∈ H.
This action naturally induces a right action of the Iwahori-Hecke algebra H I on M = C ∞ c (M 1 N \G) I . In summary, with respect to these actions, M has a structure of an (R, H I )bimodule.
Remark 2.7. Since C ∞ c (M 1 N \G) is a smooth representation of G via right translation (let ρ right denote this representation), we may also consider the left action of H on C ∞ c (M 1 N \G) given by
ρ right (h)(f ) := G h(g) · ρ right (g)(f ) dg for h ∈ H and f ∈ C ∞ c (M 1 N \G).
The relationship between the right action (−) * h and the left action ρ right (h)(−) is described as follows. Let ι : H → H be the anti-involution given by
ι(h)(g) := h(g −1 ) (i.e., ι is a C-linear automorphism of H satisfying ι(h 1 * h 2 ) = ι(h 2 ) * ι(h 1 ) for any h 1 , h 2 ∈ H). Then we have (−) * ι(h) = ρ right (h)(−). Indeed, for any h ∈ H, f ∈ C ∞ c (M 1 N \G), and x ∈ G, we have f * ι(h) (x) = G f (g) · ι(h)(g −1 x) dg = G f (g) · h(x −1 g) dg = G f (xg) · h(g) dg = G h(g) · ρ right (g)(f ) (x) · dg = ρ right (h)(f ) (x).
We put
X w (M ) := Hom(M/M 1 , C × )
and call an element of X w (M ) an weakly unramified character of M . Each element χ ∈ X w (M ) defines a C-algebra homomorphism
R = C[Λ M ] ։ C : R µ → χ(µ).
If we again write χ for this homomorphism, then we have an isomorphism
C ⊗ R,χ M ∼ = (n-Ind G P χ −1 ) I as right H I -modules ([Hai14,• we put T w := ½ IwI ∈ H I for w ∈W (hence {T w } w∈W is a C-basis of H I ),
• we put {R µ } µ∈ΛM to be the natural C-basis of the group algebra R = C[Λ M ], and
• we put v_w := 𝟙_{M_1NwI} ∈ M for w ∈ W̃ (hence {v_w}_{w∈W̃} is a C-basis of M).
Let ρ_P ∈ X^*(A)_R be the element satisfying δ_P^{1/2}(µ) = q^{−⟨ρ_P, µ⟩}. Note that this is explicitly given by
$$
\rho_P = \frac{1}{2} \sum_{\alpha \in \Phi^+_{\mathrm{red}}} \Big( \dim_F(\mathfrak{g}_\alpha) \cdot \alpha + \dim_F(\mathfrak{g}_{2\alpha}) \cdot 2\alpha \Big),
$$
where g_α and g_{2α} denote the root subspaces of g associated with the roots α and 2α, respectively (we simply put g_{2α} := 0 when 2α is not a root).
Lemma 2.8. For any µ ∈ Λ M , we have R µ · v 1 = q − ρP,µ · v µ .
Proof. By the definition of the left R-module structure of M, we have
(R µ · v 1 )(g) = M R µ (y)δ 1 2 P (y)½ M1N I (y −1 g) dy = µM1 δ 1 2 P (y)½ M1N I (y −1 g) dy = δ 1 2 P (µ) M1 ½ M1N I (y −1 µ −1 g) dy
for any g ∈ G. This is not zero if only if y −1 µ −1 g belongs to M 1 N I for some y ∈ M 1 , which is equivalent to that g belongs to µM 1 N I = M 1 N µI. In other words, R µ · v 1 is supported on M 1 N µI. When g belongs to M 1 N µI, we have
R µ · v 1 (g) = δ 1 2 P (µ) M1 ½ M1N I (y −1 µ −1 g) dy = δ 1 2 P (µ)dy(M 1 ) = q − ρP,µ . Thus we have R µ · v 1 = q − ρP,µ · v µ .
The following proposition in the split case can be found in Proposition 2.9.
(1) For any w ∈ W ⊂W , we have v
1 * T w = v w . (2) For any dominant element µ ∈ Λ M , we have v 1 * T µ = v µ .
Proof.
(1) By the definitions of v 1 and T w , we have
(v 1 * T w )(x) = G ½ M1N I (g) · ½ IwI (g −1 x) dg = M1N I ½ IwI (g −1 x) dg. Let g ∈ M 1 N I. If the integrand ½ IwI (g −1 x) is not zero, then x must belong to gIwI. By Lemma 2.4 (1), we have M 1 N I = M 1 N I N I M I N = M 1 N I M I N . Since M normalizes N and I M ⊂ M 1 , we have M 1 N I M I N = M 1 N I N . Hence gIwI is contained in M 1 N I N wI, which is equal to M 1 N wI by Lemma 2.4 (2). Thus the function v 1 * T w is supported on M 1 N wI.
Let x be an element of M 1 N wI. Let us write x = mnwy with m ∈ M 1 , n ∈ N , y ∈ I. Then g −1 x belongs to IwI if and only if g belongs to mnwyIw −1 I = mnwIw −1 I. Hence we get
(v 1 * T w )(x) = dg(mnwIw −1 I ∩ M 1 N I) = dg(wIw −1 I ∩ M 1 N I). By Lemma 2.5 (1), we have dg(wIw −1 I ∩ M 1 N I) = dg(I) = 1. Thus we conclude that v 1 * T w is equal to ½ M1N wI , which equals v w by definition.
(2) The proof is similar to that of claim (1) (the same argument works by using Lemmas 2.4 (3) and 2.5 (2) instead of Lemmas 2.4 (2) and 2.5 (1), respectively).
By [HKP10, Lemma 1.6.1] (split case) and [Hai14, Lemma 4.63 (b)] (non-split case), M is free of rank 1 with generator v 1 as an H I -module. In particular, we have an isomorphism of C-algebras
H I ∼ = End HI (M) : h ′ → [v 1 * h → v 1 * h ′ * h].
Accordingly, the left R-action on M induces an injective C-algebra homomorphism
R ֒→ End HI (M) ∼ = H I .
Definition 2.10 ([Ros15, Definition 5.3.1]). For any element µ ∈ Λ M , we put Remark 2.11. Note that, for µ ∈ Λ M , the quantity q(µ) and the element T µ are defined by regarding µ as an element of the Iwahori-Weyl groupW through the Kottwitz homomorphism κ M : M ։ Λ M . As we mentioned in Remark 2.1, in [Ros15], the symbol κ M denotes the (−1)-multiple of the usual Kottwitz homomorphism κ M . Accordingly, our Θ µ is equal to Rostami's Θ −µ .
Θ µ := T λ1 * T −1 λ2 by taking dominant elements λ 1 , λ 2 ∈ Λ M satisfying µ = λ 1 − λ 2 . (See [Ros15,
Proposition 2.12. The image of R µ under the above homomorphism R ֒→ H I is given by q ρ nr −ρP,µ · Θ µ . In other words, we have
q ρ nr −ρP,µ · v 1 * Θ µ = R µ · v 1 .
Proof. Let µ be an element of Λ M . Note that, for any dominant element λ ∈ Λ M , we have
R λ · v 1 = q − ρP,λ · v λ = q − ρP,λ · v 1 * T λ , or, equivalently, R −1 λ · v 1 = q ρP,λ · v 1 * T −1 λ
by Lemma 2.8 and Proposition 2.9 (2). Hence, by taking dominant elements λ 1 and λ 2 of Λ M such that µ = λ 1 − λ 2 and applying this identity to λ 1 and λ 2 , we get
R_µ · v_1 = R_{λ_1} · R^{−1}_{λ_2} · v_1 = q^{⟨ρ_P,λ_2⟩} · R_{λ_1} · v_1 * T^{−1}_{λ_2} = q^{⟨ρ_P,λ_2−λ_1⟩} · v_1 * T_{λ_1} * T^{−1}_{λ_2} = q^{−⟨ρ_P,µ⟩} · v_1 * T_{λ_1} * T^{−1}_{λ_2}.
Since Θ_µ is defined by Θ_µ = q(λ_1)^{−1/2} · q(λ_2)^{1/2} · T_{λ_1} * T^{−1}_{λ_2}, we get R_µ · v_1 = q^{−⟨ρ_P,µ⟩} · q(λ_1)^{1/2} · q(λ_2)^{−1/2} · v_1 * Θ_µ. Since we have q(λ_i)^{1/2} = q^{ℓ_nr(λ_i)/2} (see Section 2.2) and ℓ_nr(λ_i)/2 = ⟨ρ_nr, λ_i⟩ (see Remark 2.6), we get q^{−⟨ρ_P,µ⟩} · q(λ_1)^{1/2} · q(λ_2)^{−1/2} = q^{⟨ρ_nr−ρ_P,µ⟩}.
Corollary 2.13. For any dominant element µ ∈ Λ M , we have
Θ µ = q − ρ nr ,µ · T µ .
Proof. Since M is a free H I -module of rank 1 with generator v 1 , it suffices to check
that v 1 * Θ µ = q − ρ nr ,µ · v 1 * T µ . By Proposition 2.12, we have v 1 * Θ µ = q −ρ nr +ρP,µ · R µ · v 1 . Since we have R µ · v 1 = q − ρP,µ · v µ = q − ρP,µ · v 1 * T µ by Lemma 2.8 and Proposition 2.9 (2), we get q −ρ nr +ρP,µ · R µ · v 1 = q − ρ nr ,µ · v 1 * T µ .
Remark 2.14. Assume that G is split overF . In this case, the set of affine roots for the apartment A(S,F ) is given by
Φ(G, S) + Z under the identification A(S,F ) ∼ = X * (S) R given by the Chevalley special point (Remark 2.2) since val F •x −1 α (U α ) = Z for any α ∈ Φ(G, S).
Hence the scaled root system Σ (see Remark 2.6) equals Φ(G, S) as Φ(G, S) is reduced. By the definition of the positive system of Σ = Φ(G, S), any positive root in Φ(G, S) restricts to a positive root in Φ(G, A) or zero. Therefore ρ nr maps to ρ P under the restriction from X * (S) R to X * (A) R , and q ρ nr −ρP,µ = 1 for any µ ∈ Λ M . In particular, the definition of Θ µ given in this paper coincides with that by [HKP10, Section 1.7] when G is split.
Finally, we introduce the Bernstein relation, which will play an important role in the induction step of the proof of Proposition 3.3.
Proposition 2.15 (Bernstein relation). Let α ∈ ∆ be a simple root with simple reflection s_α ∈ W. Then, for any µ ∈ Λ_M, there exists a family of complex numbers {q_j(s_α)}_{j=0,...,N−1} satisfying
T_{s_α} * Θ_µ = Θ_{s_α(µ)} * T_{s_α} + Σ^{N−1}_{j=0} q_j(s_α) Θ_{µ−jα^∨},
where α^∨ denotes the coroot corresponding to α.
3. Hecke action on the unramified principal series
3.1. Triangularity of the action of Θ_µ. The space M is free as an R-module with a basis {v_w}_{w∈W} (see [Hai14, Lemma 4.63 (c)]). Our aim in this section is to compute the action of Θ_µ on M in terms of the basis {v_w}_{w∈W}. For this, we recall basics on the Bruhat order on W.
For α ∈ Φ, we write s_α for the reflection with respect to α. For each w ∈ W, we put ℓ(w) to be the length of w, which is defined by
ℓ(w) := #{α ∈ Φ + red | w(α) ∈ Φ − }. For w, w ′ ∈ W , write w ′ → w if ℓ(w ′ ) < ℓ(w) and w = w ′ s α for some α ∈ Φ. Then we define w ′ ≤ w if there is a sequence w ′ = w 0 → w 1 → · · · → w m = w for some nonnegative integer m and w 0 , . . . , w m ∈ W .
The relation is a partial order on W and is called the Bruhat order. It is immediate that we have
ℓ(w ′ ) < ℓ(w) if w ′ < w.
Lemma 3.1 ([Hum90, Lemma 1.6]). For every w ∈ W and α ∈ ∆, we have
w < ws_α if w(α) ∈ Φ^+, and w > ws_α if w(α) ∈ Φ^−.
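As a quick illustration of Lemma 3.1 (not needed later), take W of type A_2, generated by the simple reflections s_1 = s_{α_1} and s_2 = s_{α_2}. Since s_1(α_2) = α_1 + α_2 ∈ Φ^+ while s_1(α_1) = −α_1 ∈ Φ^−, the lemma gives 1 < s_1 < s_1s_2, and continuing in the same way one obtains the chain
1 < s_1 < s_1s_2 < s_1s_2s_1,
along which ℓ takes the values 0, 1, 2, 3.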
The bijection w → w −1 of W is an automorphism as an ordered set (see [BB05, Corollary 2.2.5]). From this fact and [BB05, Proposition 2.2.7], we obtain the following Lemma 3.2. Let w, w ′′ ∈ W and α ∈ ∆. If w ′ = ws α < w, w ′′ < w and w ′′ < w ′′ s α , then we have w ′′ s α < w and w ′′ < w ′ .
The following is the key to prove our main theorem of this paper.
Proposition 3.3. For any w ∈ W and µ ∈ Λ_M, there exists a family {a_{w'}}_{w'∈W, w'<w} of elements of R satisfying
v_w * Θ_µ = q^{⟨ρ_P−ρ_nr, w(µ)⟩} · R_{w(µ)} · v_w + Σ_{w'∈W, w'<w} a_{w'} · v_{w'}.  (1)
Proof. We prove the assertion by the induction on the length ℓ(w) of w ∈ W . When ℓ(w) = 0, i.e., w = 1, the equality (1) is nothing but Proposition 2.12.
We consider the case where ℓ(w) = 1, i.e., w = s α for some simple root α ∈ Φ. Since any element w ′ ∈ W satisfying w ′ < w is necessarily equal to 1, our task in this case is to find an element a 1 of R satisfying v sα * Θ µ = q ρP−ρ nr ,sα(µ) · R sα(µ) · v sα + a 1 · v 1 . By using Propositions 2.9 (1), 2.15, and 2.12 in this order, we get
v_{s_α} * Θ_µ = v_1 * T_{s_α} * Θ_µ (by Proposition 2.9) = v_1 * (Θ_{s_α(µ)} * T_{s_α} + Σ^{N−1}_{j=0} q_j(s_α) Θ_{µ−jα^∨}) (by Proposition 2.15) = q^{⟨ρ_P−ρ_nr, s_α(µ)⟩} · R_{s_α(µ)} · v_1 * T_{s_α} + Σ^{N−1}_{j=0} a_{s_α,j} · R_{µ−jα^∨} · v_1 (by Proposition 2.12),
where a_{s_α,j} is given by q^{⟨ρ_P−ρ_nr, µ−jα^∨⟩} · q_j(s_α). Thus it suffices to put
a_1 := Σ^{N−1}_{j=0} a_{s_α,j} · R_{µ−jα^∨} ∈ R.
Next, we consider the case where ℓ(w) > 1. In this case, there exists a simple root α ∈ ∆ such that w(α) ∈ Φ − . By Lemma 3.1, w ′ := ws α satisfies w ′ < w. Then we have T w = T w ′ * T sα by the Iwahori-Matsumoto relation (see [HKP10,Section 7.2] (split case) and [Ros15, Proposition 4.1.1 (ii)] (non-split case)). Thus, by using Proposition 2.9 (1) and this relation, we have
v w * Θ µ = v 1 * T w * Θ µ = v 1 * T w ′ * T sα * Θ µ .
By using Propositions 2.15 and 2.9 (1) in this order, we get
v_1 * T_{w'} * T_{s_α} * Θ_µ = v_1 * T_{w'} * (Θ_{s_α(µ)} * T_{s_α} + Σ^{N−1}_{j=0} q_j(s_α) Θ_{µ−jα^∨}) (by Proposition 2.15) = v_{w'} * (Θ_{s_α(µ)} * T_{s_α} + Σ^{N−1}_{j=0} q_j(s_α) Θ_{µ−jα^∨}) (by Proposition 2.9).
By the induction hypothesis, the second term Σ^{N−1}_{j=0} q_j(s_α) v_{w'} * Θ_{µ−jα^∨} can be written as the R-linear sum of v_{w''}'s for w'' ∈ W satisfying w'' ≤ w' (hence, in particular, w'' < w). Let us consider the first term v_{w'} * Θ_{s_α(µ)} * T_{s_α}. By the induction hypothesis, there exists a family {a_{w''}}_{w''∈W, w''<w'} of elements of R such that
v_{w'} * Θ_{s_α(µ)} = q^{⟨ρ_P−ρ_nr, w'(s_α(µ))⟩} · R_{w'(s_α(µ))} · v_{w'} + Σ_{w''∈W, w''<w'} a_{w''} · v_{w''}.
Hence, by noting that w ′ (s α (µ)) = w(µ), we get
v_{w'} * Θ_{s_α(µ)} * T_{s_α} = q^{⟨ρ_P−ρ_nr, w(µ)⟩} · R_{w(µ)} · v_{w'} * T_{s_α} + Σ_{w''∈W, w''<w'} a_{w''} · v_{w''} * T_{s_α}.
The first term of the right-hand side equals q^{⟨ρ_P−ρ_nr, w(µ)⟩} · R_{w(µ)} · v_w, since
v_{w'} * T_{s_α} = v_1 * T_{w'} * T_{s_α} = v_1 * T_w = v_w
by Proposition 2.9 (1) and the Iwahori-Matsumoto relation. Therefore it suffices to show that for any w ′′ ∈ W with w ′′ < w ′ , the element v w ′′ * T sα = v 1 * T w ′′ * T sα is expressed as a C-linear combination of elements in {v w ′′′ | w ′′′ < w}.
If w ′′ < w ′′ s α , we see v 1 * T w ′′ * T sα = v 1 * T w ′′ sα = v w ′′ sα by Proposition 2.9 (1) and the Iwahori-Matsumoto relation. Since w ′ = ws α < w, w ′′ < w and w ′′ < w ′′ s α , we have w ′′ s α < w by Lemma 3.2. Hence the assertion holds when w ′′ < w ′′ s α .
If w'' > w''s_α, Proposition 2.9 (1) together with the Iwahori-Matsumoto relations T_{w''} = T_{w''s_α} * T_{s_α} and T_{s_α} * T_{s_α} = (q(s_α) − 1)T_{s_α} + q(s_α)T_1 (see [Ros15, Proposition 4.1.1 (ii), (iii)]) shows that
v_1 * T_{w''} * T_{s_α} = v_1 * T_{w''s_α} * T_{s_α} * T_{s_α} = v_1 * T_{w''s_α} * ((q(s_α) − 1)T_{s_α} + q(s_α)T_1) = v_1 * ((q(s_α) − 1)T_{w''} + q(s_α)T_{w''s_α}) = (q(s_α) − 1)v_{w''} + q(s_α)v_{w''s_α}.
Since w ′′ s α < w ′′ < w, the assertion also holds.
3.2. The case of Iwahori. Let V_χ := n-Ind^G_P χ be the principal series with respect to a weakly unramified character χ : M/M_1 → C^×. Recall that the space C ⊗_{R,χ^{−1}} M equipped with the right H_I-action is nothing but V^I_χ, as noted in Section 2.6. Hence the image of {v_w}_{w∈W} in C ⊗_{R,χ} M (for which we again write {v_w}_{w∈W}) forms a C-basis of V^I_{χ^{−1}} for any χ.
Proposition 3.4. Let µ ∈ Λ_M be a strictly dominant element, i.e., a dominant element satisfying ⟨α, µ⟩ > 0 for any positive root α ∈ Φ. Then there exists a C-basis {v^∨_w}_{w∈W} of V^I_χ such that, for any w ∈ W, there exists a family {c_{w'}}_{w'∈W, w'>w} of complex numbers satisfying
I_χ(½_µ) · v^∨_w = q(w,µ) · χ∘κ^{−1}_M(w(µ)) · v^∨_w + Σ_{w'∈W, w'>w} c_{w'} · v^∨_{w'},
where ½_µ := ½_{IµI} and q(w,µ) := q^{⟨ρ_nr,µ⟩ + ⟨ρ_P−ρ_nr, w(µ)⟩}.
Proof. Note that we have ½_µ = T_µ. Thus, by Remark 2.7, the left action I_χ(½_µ) on V^I_χ coincides with the right action of ι(T_µ) ∈ H_I on C ⊗_{R,χ^{−1}} M. By Corollary 2.13, we have ι(T_µ) = q^{⟨ρ_nr,µ⟩} · ι(Θ_µ).
Let us consider the left action of ι(Θ_µ) on M. Note that we have an R-valued perfect pairing (−,−) : M × M → R satisfying the following conditions (see [HKP10, Section 1.9]):
(A) (r_1 · m_1, r_2 · m_2) = ι_R(r_1) · r_2 · (m_1, m_2) for any r_1, r_2 ∈ R and m_1, m_2 ∈ M,
(B) (m_1 * h, m_2) = (m_1, m_2 * ι(h)) for any h ∈ H_I and m_1, m_2 ∈ M.
Here ι_R denotes the anti-involution of R defined by ι_R(r)(x) := r(x^{−1}) for any r ∈ R = C[Λ_M] ≅ C^∞_c(M/M_1). Then the pairing (−,−) induces a perfect pairing
(−,−)_χ : V^I_χ × V^I_{χ^{−1}} → C.
Let {v^∨_w}_{w∈W} be the dual basis of V^I_χ ≅ C ⊗_{R,χ^{−1}} M to {v_w}_{w∈W} with respect to this perfect pairing, that is, each v^∨_w satisfies (v^∨_w, v_{w'})_χ = δ_{w,w'}. Then, by Proposition 3.3, we have
(v^∨_w * ι(Θ_µ), v_{w'})_χ = (v^∨_w, v_{w'} * Θ_µ)_χ = q^{⟨ρ_P−ρ_nr, w(µ)⟩} · χ(R_{w(µ)}) if w = w', c_{w'} if w < w', and 0 otherwise,
with some complex numbers c_{w'}. By noting that χ(R_{w(µ)}) = χ∘κ^{−1}_M(w(µ)), we get the assertion.
Remark 3.5. By Remark 2.14, we simply have q(w, µ) = q ρP,µ when G is split overF .
Corollary 3.6. With the notations as in Proposition 3.4, we have
det(1 − q^{−s} · I_χ(½_µ) | V^I_χ) = Π_{w∈W} (1 − q^{−s} · q(w,µ) · χ∘κ^{−1}_M(w(µ))).
Proof. For each k ∈ Z ≥0 , we put
W (k) := {w ∈ W | ℓ(w) = k}.
Then obviously we have W = W (0) ⊔ · · · ⊔ W (h) for h := max{ℓ(w) | w ∈ W }. We choose a labeling W = {w 1 , . . . , w #W } so that we have
W (0) = {w 1 , . . . , w #W (0) }, W (1) = {w #W (0)+1 , . . . , w #W (0)+#W (1) }, . . . W (h) = {w #W (0)+···#W (h−1)+1 , . . . , w #W }.
We take a C-basis {v ∨ w } w∈W of V I χ as in Proposition 3.4 and consider a matrix representation of I χ (½ µ ) on V I χ with respect to the basis {v ∨ wi } w∈W ordered according to the above labeling on W . Then, by Proposition 3.4, we have
I_χ(½_µ) · v^∨_{w_i} = q(w_i,µ) · χ∘κ^{−1}_M(w_i(µ)) · v^∨_{w_i} + Σ_{i'∈{1,...,#W}, w_{i'}>w_i} c_{w_{i'}} · v^∨_{w_{i'}}.
When i ′ satisfies w i ′ > w i , we necessarily have ℓ(w i ′ ) > ℓ(w i ) by the definition of the Bruhat order (see the beginning of Section 3.1). In particular, we have i ′ > i. This means that the action of I χ (½ µ ) on V I χ is triangulated with respect to the ordered basis {v ∨ wi } i=1,...,#W . As the diagonal entry corresponding to v ∨ wi is given by q(w i , µ) · χ • κ −1 M (w i (µ)), we get the assertion.
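To make the triangular structure concrete, consider the smallest nontrivial case #W = 2, say W = {1, s_α} (this specialization is only an illustration). Then Corollary 3.6 reads
det(1 − q^{−s} · I_χ(½_µ) | V^I_χ) = (1 − q^{−s} q(1,µ) χ∘κ^{−1}_M(µ)) (1 − q^{−s} q(s_α,µ) χ∘κ^{−1}_M(s_α(µ))),
and in the split case both factors carry the same power q(w,µ) = q^{⟨ρ_P,µ⟩} by Remark 3.5.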
3.3. General case. We next consider the general case. Let µ ∈ Λ M be a dominant element. As explained in Section 2.3, µ defines the parahoric subgroup J µ satisfying Proposition 3.7. We have
I ⊂ J µ ⊂ K. Recall that we have J µ = IW µ I,IW µ I = I Mµ N W µ I = w∈Wµ I Mµ N wI.
Proof. Since the second equality follows from the disjointness of the Iwahori decomposition G = w∈W IwI (see Section 2.5), it is enough to show the first equality. By Lemma 2.4 (1), we have I = I N I M I N , which implies IW µ I = I N I M I N W µ I. Lemma 2.4 (2) shows that I N I M I N W µ I = I N I M W µ I. As W µ normalizes I M , we get IW µ I = I N W µ I.
Since we have I_N = Π_{α∈Φ^+_red} U_{α,0} with any order on Φ^+_red and wU_{α,0}w^{−1} = U_{w(α),0}, it suffices to check that w(α) ∈ Φ^+_red for any w ∈ W_µ and any α ∈ Φ^+_red satisfying ⟨α, µ⟩ ≠ 0. Let w ∈ W_µ. By the definition of W_µ, we can write w = s_{β_1} ··· s_{β_r} with β_i ∈ Φ such that ⟨β_i, µ⟩ = 0. If α ∈ Φ^+_red is a root satisfying ⟨α, µ⟩ ≠ 0, then we have ⟨α, µ⟩ > 0 as µ is dominant. Hence we have
⟨s_{β_r}(α), µ⟩ = ⟨α − ⟨α, β^∨_r⟩β_r, µ⟩ = ⟨α, µ⟩ − ⟨α, β^∨_r⟩⟨β_r, µ⟩ = ⟨α, µ⟩ > 0.
Thus the dominance of µ implies that s βr (α) is positive. By applying the same argument to s βr (α), we know that s βr−1 (s βr (α)) satisfies s βr−1 (s βr (α)), µ > 0 and is positive. Repeating this procedure, we get w(α) ∈ Φ + red .
Recall that an order on the quotient W/W_µ is induced by the Bruhat order on W as follows. Define
W µ := {w ∈ W | ℓ(w) ≤ ℓ(ws α ) for all α ∈ ∆ with α, µ = 0}.
Then it follows from [Hum90, Proposition 1.10 (c)] that the canonical quotient W µ → W/W µ is bijective. Since the set W µ has a partial order induced from the Bruhat order of W , we can transport it to W/W µ via the bijection W µ ∼ = W/W µ .
Lemma 3.8 ([BB05, Proposition 2.5.1]). The quotient map W ↠ W/W_µ preserves the orders, namely, wW_µ ≤ w'W_µ if w ≤ w' in W.
Remark 3.9. For any w ∈ W and α ∈ ∆, we have ℓ(w) ≤ ℓ(ws α ) if and only if w(α) ∈ Φ + by Lemma 3.1. Thus we have W µ = {w ∈ W | w(α) ∈ Φ + for all α ∈ ∆ with α, µ = 0}.
Since µ ∈ Λ M is dominant, any positive root α satisfying α, µ = 0 can be written as the sum of simple roots α i 's satisfying α i , µ = 0 with non-negative integer coefficients. Hence W µ furthermore equals
{w ∈ W | w(α) ∈ Φ + for all α ∈ Φ + with α, µ = 0}.
We let e Jµ ∈ H I denote the idempotent corresponding to J µ , which is given explicitly by dg(J µ ) −1 ½ Jµ . We put U α,rα , r α :=
½ µ = dg(J µ ) −1 ½ JµµJµ .® 0 w −1 (α) > 0, 0+ w −1 (α) < 0.
Proof. The first statement is an immediate consequence of Proposition 3.7.
To show the second statement, let us take two elements x, y ∈ I Mµ N such that xwI = ywI. Then we have y −1 x ∈ wIw −1 , hence y −1 x ∈ I Mµ N ∩ wIw −1 . By a similar argument to the proof of Lemma 2.5, we can check that
I Mµ N ∩ wIw −1 = α∈Φ + red (Mµ) U α,rα ,
where r α is as in the statement.
Proposition 3.11. We have
½ µ = e Jµ * T µ * e Jµ ,
Proof. Recall that T µ = ½ IµI . Thus our task is to show that dg(J µ ) · ½ JµµJµ = ½ Jµ * ½ IµI * ½ Jµ . Let us compute ½ Jµ * ½ IµI * ½ Jµ . In general, for any f 1 , f 2 , f 3 ∈ C ∞ c (G), we have
f 1 * f 2 * f 3 (x) = G f 1 (g)(f 2 * f 3 )(g −1 x) dg = G f 1 (g) Å G f 2 (h)f 3 (h −1 g −1 x) dh ã dg = G f 1 (g) Å G f 2 (g −1 xh)f 3 (h −1 ) dh ã dg.
(In the last equality, we replaced h with g −1 xh by noting that dh is a Haar measure on G.) Hence we have
½ Jµ * ½ IµI * ½ Jµ (x) = G ½ Jµ (g) Å G ½ IµI (g −1 xh)½ Jµ (h −1 ) dh ã dg = Jµ Jµ ½ IµI (g −1 xh) dg dh.
The integrand of the right-hand side is not zero if and only if x belongs to J µ µJ µ .
Furthermore, we see that ½ Jµ * ½ IµI * ½ Jµ (x) is constant for any x ∈ J µ µJ µ again by noting that dg and dh are Haar measures on G (hence of J µ ).
Thus now it is enough to check that ½ Jµ * ½ IµI * ½ Jµ (µ) is given by dg(J µ ). By
µx i µ −1 I Mµ N [w] = x i ′ I Mµ N [w].(2)
On the other hand, as w commutes with µ ∈ Λ M (as elements ofW ), we have
µwµ −1 I M = wI M .(3)
By combining equalities (2) and (3), we can check that
µx i wµ −1 I = x i ′ wI,
or equivalently, µg i µ −1 I = g i ′ I. By taking the inverse, we get Iµg −1 i = Ig −1 i ′ µ. This implies that, for any g ∈ g i ′ I and h ∈ g j I, we have
½ IµI (g −1 µh) = ½ IµI (g −1 i ′ µg j ) = ½ IµI (µg −1 i g j ).
Thus, by noting that the association [g i → g i ′ ] gives a bijection from {g i } i to itself, we get
½ Jµ * ½ IµI * ½ Jµ (µ) = Jµ Jµ ½ IµI (g −1 µh) dg dh = #Jµ/I i=1 #Jµ/I j=1 ½ IµI (µg −1 i g j ).
Now our task is to show that ½ IµI (µg −1 i g j ) = 0 if and only if i = j. The "if" part is obviously true, so let us consider the "only if" part. We suppose that ½ IµI (µg −1 i g j ) = 0, namely, g −1 i g j ∈ µ −1 IµI. Let N µ be the unipotent radical of the standard parabolic subgroup with Levi subgroup M µ , and let N µ be its opposite. If we put I N µ := I ∩ N µ , I Mµ := I ∩ M µ , I Nµ := I ∩ N µ , then we have I = I Nµ I Mµ I N µ and I = I N µ I Mµ I Nµ (Lemma 2.4 (1)).
By Lemma 2.4 (3), we have µ −1 I N µ µ ⊂ I N µ and µ −1 I Nµ µ ⊃ I Nµ by the dominance of µ. Moreover, by a similar argument to the proof of Lemma 2.4 (3), we can show that µ −1 U α,r µ = U α,r for any r ∈ R and any α whose root subgroup U α is contained in M µ . Accordingly, we have µ −1 I Mµ µ = I Mµ . Thus we have
µ −1 IµI = µ −1 (I Nµ I Mµ I N µ )µI = µ −1 I Nµ µI = µ −1 I Nµ µ(I Nµ I Mµ I N µ ) = µ −1 I Nµ µI Mµ I N µ .
Recall that the multiplication map
N µ × M µ × N µ → G
is injective (see [BT84, Théorème 2.2.3]) and note that µ −1 I Nµ µ, I Mµ , and I N µ are contained in N µ , M µ , and N µ , respectively. Hence, as g −1 i g j lies in M µ , the assumption that g −1 i g j ∈ µ −1 IµI implies that g −1 i g j belongs to I Mµ . This means that g i I and g j I are the same I-coset, thus we have g i = g j .
Lemma 3.12. For any w ∈ W , we have
M 1 N wJ µ = w ′ ∈Wµ M 1 N ww ′ I.
Proof. Since W µ 1:1 − − → W/W µ and J µ contains W µ , it is enough to treat only the case where w ∈ W µ . By Proposition 3.7, we have J µ = w ′ ∈Wµ I Mµ N w ′ I. Hence we have
M 1 N wJ µ = w ′ ∈Wµ M 1 N wI Mµ N w ′ I.
By the definition of W µ and Remark 3.9, we have w(α) ∈ Φ + for any α ∈ Φ + satisfying α, µ = 0. This fact shows that wI Mµ N w −1 ⊂ N , and hence we get
M 1 N wJ µ = w ′ ∈Wµ M 1 N ww ′ I.
Since the decomposition G = w ′ ∈W M 1 N w ′ I is disjoint (see Section 2.6), this decomposition is disjoint.
For w ∈ W, we put v^{J_µ}_w := Σ_{w'∈W_µ} v_{ww'}.
Lemma 3.13. For any w ∈ W, we have v_w * e_{J_µ} = #W^{−1}_µ · v^{J_µ}_w.
Proof. Recall that v w = ½ M1N wI and e Jµ = dg(J µ ) −1 · ½ Jµ . We have
½_{M_1NwI} * ½_{J_µ}(x) = ∫_G ½_{M_1NwI}(g) ½_{J_µ}(g^{−1}x) dg = dg(M_1NwI ∩ xJ_µ).
Thus, since I ⊂ J µ , we have Supp(½ M1N wI * ½ Jµ ) = M 1 N wJ µ . Suppose that
x ∈ M 1 N wJ µ and write x = mnwj with m ∈ M 1 , n ∈ N , and j ∈ J µ . Then, by noting that dg is a Haar measure on G and that M normalizes N , we have
dg(M 1 N wI ∩ xJ µ ) = dg(M 1 N wI ∩ mnwJ µ ) = dg(M 1 N wI ∩ wJ µ ).
This fact implies that ½ M1N wI * ½ Jµ is equal to constant multiple of ½ M1N wJµ . Since
we have M 1 N wJ µ = w ′ ∈Wµ M 1 N ww ′ I by Lemma 3.12, there is a constant C ∈ C such that v w * e Jµ = C · v Jµ w . Since e Jµ is an idempotent, we have v Jµ w * e Jµ = (C −1 · v w * e Jµ ) * e Jµ = C −1 · v w * e Jµ = v Jµ w . On the other hand, we have v Jµ w * e Jµ = w ′ ∈Wµ v ww ′ * e Jµ = w ′ ∈Wµ C · v Jµ w = #W µ · C · v Jµ w .
Thus we get C = #W −1 µ .
Since (−) * e Jµ gives a projector from M onto M Jµ , Lemma 3.13 implies that {v Jµ w } w∈W/Wµ forms an R-basis of M Jµ .
Proposition 3.14. For any w ∈ W/W_µ, there exists a family {a_{w'}}_{w'∈W/W_µ, w'<w} of elements of R satisfying
v^{J_µ}_w * (e_{J_µ} * Θ_µ * e_{J_µ}) = q^{⟨ρ_P−ρ_nr, w(µ)⟩} · R_{w(µ)} · v^{J_µ}_w + Σ_{w'∈W/W_µ, w'<w} a_{w'} · v^{J_µ}_{w'}.
Proof. We have
v^{J_µ}_w * (e_{J_µ} * Θ_µ) = v^{J_µ}_w * Θ_µ = Σ_{w'∈W_µ} v_{ww'} * Θ_µ.
By applying Proposition 3.3 to each v ww ′ * Θ µ , we have
Σ_{w'∈W_µ} v_{ww'} * Θ_µ = Σ_{w'∈W_µ} ( q^{⟨ρ_P−ρ_nr, ww'(µ)⟩} · R_{ww'(µ)} · v_{ww'} + Σ_{w''∈W, w''<ww'} a^{(w')}_{w''} · v_{w''} ),
where a^{(w')}_{w''} ∈ R is an element determined by w' and w''. By noting that ww'(µ) = w(µ) for any w' ∈ W_µ, we get
Σ_{w'∈W_µ} q^{⟨ρ_P−ρ_nr, ww'(µ)⟩} · R_{ww'(µ)} · v_{ww'} = q^{⟨ρ_P−ρ_nr, w(µ)⟩} · R_{w(µ)} · Σ_{w'∈W_µ} v_{ww'} = q^{⟨ρ_P−ρ_nr, w(µ)⟩} · R_{w(µ)} · v^{J_µ}_w.
On the other hand, by Lemma 3.13, we have
( Σ_{w'∈W_µ} Σ_{w''∈W, w''<ww'} a^{(w')}_{w''} · v_{w''} ) * e_{J_µ} = #W^{−1}_µ · Σ_{w'∈W_µ} Σ_{w''∈W, w''<ww'} a^{(w')}_{w''} · v^{J_µ}_{w''}.
By Lemma 3.8, for any w' ∈ W_µ and w'' ∈ W satisfying w'' < ww', we have w''W_µ < wW_µ. This implies that we have
#W^{−1}_µ · Σ_{w'∈W_µ} Σ_{w''∈W, w''<ww'} a^{(w')}_{w''} · v^{J_µ}_{w''} = Σ_{w'∈W/W_µ, w'<w} a'_{w'} · v^{J_µ}_{w'}
by choosing a'_{w'} for each w' ∈ W/W_µ satisfying w' < w appropriately.
Proposition 3.15. There exists a C-basis {v^{J_µ,∨}_w}_{w∈W/W_µ} of V^{J_µ}_χ such that, for any w ∈ W/W_µ, there exists a family {c_{w'}}_{w'∈W/W_µ, w'>w} of complex numbers satisfying
I_χ(½_µ) · v^{J_µ,∨}_w = q(w,µ) · χ∘κ^{−1}_M(w(µ)) · v^{J_µ,∨}_w + Σ_{w'∈W/W_µ, w'>w} c_{w'} · v^{J_µ,∨}_{w'}.
Proof. Note that, by Proposition 3.11, we have ½_µ = e_{J_µ} * T_µ * e_{J_µ}. Thus, by Remark 2.7, the left action I_χ(½_µ) on V^{J_µ}_χ coincides with the right action of ι(e_{J_µ} * T_µ * e_{J_µ}) ∈ H_I on C ⊗_{R,χ^{−1}} M^{J_µ}. By Corollary 2.13, we have ι(e_{J_µ} * T_µ * e_{J_µ}) = q^{⟨ρ_nr,µ⟩} · ι(e_{J_µ} * Θ_µ * e_{J_µ}). Since the perfect pairing (−,−) introduced in the proof of Proposition 3.4 is anti-invariant with respect to the action of the Iwahori-Hecke algebra (the property (B) in the proof of Proposition 3.4), it canonically induces a perfect pairing (−,−)_χ : V^{J_µ}_χ × V^{J_µ}_{χ^{−1}} → C. Thus, by choosing a C-basis of V^{J_µ}_χ dual to {v^{J_µ}_w}_{w∈W/W_µ} with respect to this pairing, the same argument as in the proof of Proposition 3.4 works, using Proposition 3.14 instead of Proposition 3.3.
With notations as in Proposition 3.15, we introduce a diagonalizable operator A_µ on V^{J_µ}_χ given by A_µ(v^{J_µ,∨}_w) = q(w,µ)^{−1} · v^{J_µ,∨}_w.
Corollary 3.16. We have
det(1 − q^{−s} · c · A_µ ∘ I_χ(½_µ) | V^{J_µ}_χ) = Π_{w∈W/W_µ} (1 − q^{−s} · c · χ∘κ^{−1}_M(w(µ)))
for any c ∈ C.
Proof. Recall that there exists a complete set W µ of representatives of the quotient W/W µ and that the order on W/W µ is nothing but the order transported from the Bruhat order on W µ ⊂ W . By noting this, we can carry out the same argument as in the proof of Corollary 3.6. To be more precise, we put
W µ (k) := {w ∈ W µ | ℓ(w) = k}
for each k ∈ Z ≥0 and define a total order on W µ such that
W µ (0) = {w 1 , . . . , w #W µ (0) }, W µ (1) = {w #W µ (0)+1 , . . . , w #W µ (0)+#W µ (1) }, . . . W µ (h) = {w #W µ (0)+···+#W µ (h−1)+1 , . . . , w #W µ }.
Then, if we order the C-basis {v^{J_µ,∨}_w}_{w∈W/W_µ} of Proposition 3.15 according to this total order, Proposition 3.15 shows that the action of I_χ(½_µ) on V^{J_µ}_χ is triangulated with respect to the ordered basis {v^{J_µ,∨}_{w_i}}_{i=1,...,#W^µ}. As the diagonal entry corresponding to v^{J_µ,∨}_{w_i} is given by q(w_i,µ) · χ∘κ^{−1}_M(w_i(µ)), we get the assertion.
4. Relation to the local L-functions
4.1. Representations with parahoric fixed vectors. We recall a basic fact about irreducible smooth representations of G having a non-zero fixed vector by a parahoric subgroup, following [Hai14].
Let J ⊂ G be a parahoric subgroup of G.
Definition 4.1 (J-spherical representation). We say that an irreducible smooth representation π of G is J-spherical if π has a nonzero vector fixed by J.
In the following, we assume that J contains the fixed Iwahori subgroup I. Note that then any J-spherical representation is I-spherical. We also remark that this assumption is always satisfied up to conjugacy since • any parahoric subgroup contains an Iwahori subgroup, and • any Iwahori subgroups are conjugate.
Proposition 4.2 ([Hai14, Section 11.5]). Let π be an I-spherical irreducible smooth representation of G. Then there exists a weakly unramified character χ ∈ X_w(M) of M such that π is a subquotient of the normalized parabolic induction n-Ind^G_P χ.
Moreover, such a weakly unramified character χ is unique up to the action of the Weyl group W = W(G, A).
Remark 4.3. When G is unramified (i.e., quasi-split and splits over an unramified extension of F) and J is a hyperspecial maximal open compact subgroup of G, the above result is nothing but the well-known classification of unramified representations via the Satake isomorphism (e.g., see [Car79, Section 4]).
4.2. Satake parameters of parahoric-spherical representations. We review the construction of the Satake parameters of parahoric-spherical representations according to Haines [Hai15, Hai17].
4.2.1. Quasi-split case. We first consider the case where G is quasi-split (see [Hai15, Sections 6 and 7] for the details of the content of this section). In this case, the centralizer M of the maximal F-split torus A in G is a maximal torus, so we write T for M. As the minimal parabolic subgroup P is Borel, let us write B for P. From the tuple (G, B, T), we get the corresponding root datum
Ψ(G) = (X^*(T), ∆_B, X_*(T), ∆^∨_B),
where ∆_B (resp. ∆^∨_B) is the set of simple roots (resp. coroots) of T determined by B. By taking the dual of this root datum, we get the Langlands dual group Ĝ of G. To be more precise, Ĝ is a connected reductive group over C with the following fixed data:
• a maximal torus T ofĜ,
χ(κ −1 T (λ)) = λ(χ) for any λ ∈ X * (T IF ) Frob .
We consider a mapT
IF ֒→ (Ĝ IF ⋊ Frob) ss : t → t ⋊ Frob,
where (Ĝ IF ⋊ Frob) ss denotes the semisimple locus inĜ IF ⋊ Frob. HereT is regarded as a subgroup ofĜ via the isomorphismT ∼ = T induced by the fixed isomorphism ι. Then, according to [Hai15, Proposition 6.1], this map induces a bijection
(T IF ) Frob /W ∼ = − → (Ĝ IF ⋊ Frob) ss /Ĝ IF .
Let π be an Iwahori-spherical irreducible smooth representation of G. Then, by Proposition 4.2, an element χ of X w (T ) is determined by π uniquely up to W -conjugation. We define the Satake parameter s(π) of π to be the image of χ ⋊ Frob ∈ (Ĝ IF ⋊ Frob) ss in (Ĝ IF ⋊ Frob) ss /Ĝ IF .
4.2.2. Non-quasi-split case. We next consider the case where G is not quasi-split (see [Hai15, Sections 8 and 9] for the details of the content of this section). In this case, we take the quasi-split inner form G* of G over F with an inner twist ψ : G → G*. We fix a maximal F-split torus A* of G* and put T* to be the centralizer of A* in G*. We also fix a Borel subgroup B* of G* containing T*. For the F-rational parabolic subgroup P of G with minimal Levi subgroup M of G, by replacing ψ if necessary, there exists a parabolic subgroup P* of G* such that ψ(P) = P*, ψ(M) = M*, and P* ⊃ B*. Then we get a Galois-equivariant isomorphism
ψ̂ : Z(M̂) ≅ Z(M̂*).
Since the Langlands dual groupM * of M * is realized as a Levi subgroup ofĜ * containing the maximal torusT * , we have an inclusion Z(M * ) ֒→T * . Thus we get a Galois-equivariant homomorphismψ 0 :
Z(M) ∼ = Z(M * ) ֒→T * . We define a map t A * ,A from (Z(M) IF ) Frob to (T * IF ) Frob bỹ t A * ,A : (Z(M) IF ) Frob → (T * IF ) Frob χ → δ − 1 2 B * ·ψ 0 (δ 1 2 Pχ ).
Here, δ On the other hand, as explained in the quasi-split case, we have
(T * IF ) Frob /W (G * , A * ) ∼ = − → (Ĝ * IF ⋊ Frob) ss /Ĝ * IF .
Since the Langlands dual groupsĜ andĜ * are isomorphic Galois-equivariantly, we have
(Ĝ * IF ⋊ Frob) ss /Ĝ * IF ∼ = (Ĝ IF ⋊ Frob) ss /Ĝ IF .
Therefore, by putting all of these maps together, we get a map
(Z(M) IF ) Frob /W (G, A) → (Ĝ IF ⋊ Frob) ss /Ĝ IF χ →t A * ,A (χ) ⋊ Frob.
Let π be an Iwahori-spherical irreducible smooth representation of G. Then, by Proposition 4.2, an element χ of X w (M ) is determined by π uniquely up to W -conjugation. We define the Satake parameter s(π) of π to be the image of
t A * ,A (χ) ⋊ Frob ∈ (Ĝ IF ⋊ Frob) ss in (Ĝ IF ⋊ Frob) ss /Ĝ IF .
For our convenience, for any χ ∈ X w (M ), we let χ * ∈ X w (T * ) denote the image oft A * ,A (χ) ∈ (T * IF ) Frob under the map X w (T * ) ∼ = (T * IF ) Frob :
X w (M ) ∼ = / / (−) * (Z(M) IF ) Frob t A * ,A χ ✤ / /χ ❴ X w (T * ) ∼ = / / (T * IF ) Frob χ * t A * ,A (χ) ✤ o o 4.3.
Local L-functions for parahoric-spherical representations. According to [Bor79, Section 2.6], we take a finite-dimensional continuous representation (r, V ) of L G whose restriction toĜ is an algebraic homomorphism of complex Lie groupŝ G → GL C (V ). Note that the continuity implies that r factors through the quotient G ⋊ Gal(E/F ) for a finite Galois extension E of F over which G splits.
Definition 4.4. For an I-spherical irreducible smooth representation π of G, we define the semi-simple local L-function of π with respect to r by
L^{ss}(s, π, r) := det(1 − q^{−s} · r(s(π)) | V^{I_F})^{−1},
where V IF denotes the subspace of V consisting of I F -fixed vectors.
Remark 4.5. A meaning of the semi-simple local L-function can be explained as follows. If we believe the conjectural local Langlands correspondence for G, we should have an L-parameter φ π of G for any irreducible smooth representation π of G. Recall that an L-parameter of G is a homomorphism from the product W F × SL 2 (C) of the Weil group and SL 2 (C) to the L-group L G satisfying several conditions (see, for example, [GR10, Section 3.2] or [Hai14, Section 4] for the precise definition). For an L-parameter φ of G, its local L-function with respect to r is defined by
L(s, φ, r) := det 1 − q −s · r(φ(Frob)) V φ(IF ) −1 .
On the other hand, according to [Hai14, Section 5.1], for an L-parameter φ of G, its infinitesimal character φ ss :
W_F → {}^L G of φ is defined by φ_{ss} := φ ∘ η, where
η : W_F → W_F × SL_2(C); σ ↦ (σ, diag(|σ|^{1/2}, |σ|^{−1/2})).
Here |σ| denotes the absolute value of σ ∈ W F normalized so that |Frob| = q −1 . It is expected that any parahoric-spherical representation of G corresponds to an L-parameter φ which is trivial on I F (i.e., φ(σ, 1) = 1 ⋊ σ for any σ ∈ I F ) under the local Langlands correspondence. Furthermore, it is expected that the Satake parameter s(π) of a parahoric-spherical representation π describes the image of the geometric Frobenius under the infinitesimal character φ π,ss of the L-parameter φ π of π, i.e., s(π) = φ π,ss (Frob) (see [Hai15,Conjecture 13.1]). Therefore, for any parahoricspherical representation π of G, we should have L ss (s, π, r) = L(s, φ π,ss , r).
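In particular, evaluating the displayed formula at the geometric Frobenius and using |Frob| = q^{−1}, one gets
φ_{ss}(Frob) = φ(Frob, diag(q^{−1/2}, q^{1/2})),
which is the element whose image under r enters the local L-function above.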
Remark 4.6. When G is unramified (i.e., G is quasi-split and splits over an unramified extension of F ) and π is an unramified representation (i.e., a J-spherical representation for a hyperspecial parahoric subgroup J of G), the Satake parameter s(π) is nothing but the classical Satake parameter of π (see, for example, [Car79]).
In this case, the L-parameter φ π of π is defined just by
φ π : W F × SL 2 (C) →Ĝ ⋊ W F ; ® (Frob, 1) → s(π), (σ, g) → 1 ⋊ σ for any (σ, g) ∈ I F × SL 2 (C).
Hence we have φ π,ss = φ π and L ss (s, π, r) = L(s, π, r) = L(s, φ π , r).
We will rewrite the above definition of the semisimple local L-function in a different form by using the next lemma.
Lemma 4.7. Let W be a finite-dimensional C-vector space and A : W → W a C-linear automorphism. Suppose that we have a decomposition W = ⊕^l_{i=1} W_i such that A maps W_i to W_{i+1} (we put W_{l+1} := W_1). Then we have
det(1 − A | W) = det(1 − A^l | W_l).
Proof. By fixing a basis of W_i for each i, we let A_i be the representation matrix of A|_{W_i} : W_i → W_{i+1}. Then, with respect to the decomposition W = W_1 ⊕ ··· ⊕ W_l, the operator 1 − A is represented by the block matrix
[ I_m    0    ···   0     −A_l ]
[ −A_1   I_m  ···   0      0   ]
[  ⋮      ⋱    ⋱    ⋮      ⋮   ]
[  0     ···  −A_{l−1}    I_m  ],
where m denotes the dimension of W_1 and I_m denotes the identity matrix of size m. Adding the first block column multiplied by A_l to the last block column does not change the determinant; it replaces the corner block −A_l by 0 and creates the block −A_1A_l in the last column. Adding next the second block column multiplied by A_1A_l to the last block column, and repeating this procedure, we eventually get
|1 − A| = |1 − A_{l−1} ··· A_1 A_l|.
Since A_{l−1} ··· A_1 A_l is nothing but the restriction of A^l to W_l, we get the conclusion.
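For example, when l = 2 the lemma can be checked directly (this computation is only illustrative): writing A in block form as A = ( 0  A_2 ; A_1  0 ) with respect to W = W_1 ⊕ W_2, we get
det(1 − A | W) = det( I  −A_2 ; −A_1  I ) = det(I − A_1A_2) = det(1 − A^2 | W_2),
since A^2|_{W_2} = A_1A_2.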
We take the quasi-split group G * over F equipped with an inner twist ψ and use notations in Section 4.2.2. Put W * := W (G * , A * ). Recall that the action of W * × W F on X * (T * ) induces that of W * × Frob on X * (T * IF ). Let P(r IF ) denote the (W * × Frob )-stable subset consisting of all weights in V IF with respect toT * IF , i.e., P(r IF ) := {µ ∈ X * (T * IF ) | µ appears in V IF }.
For each µ ∈ P(r^{I_F}), we write [µ] for the image of µ under the canonical quotient map from P(r^{I_F}) onto P(r^{I_F})/⟨Frob⟩, and define l_µ ∈ Z_{>0} to be the cardinality of the ⟨Frob⟩-orbit {Frob^i(µ) | i ∈ Z}. We also define
N(µ) := Σ^{l_µ−1}_{i=0} Frob^i(µ) ∈ Λ_{T*} = X^*(T̂*^{I_F})^{Frob}.
For each µ ∈ P(r^{I_F}), we put V^{I_F}_µ to be the µ-eigenspace in V^{I_F}. For any (λ,l) ∈ I, a complete set S of representatives of P_{λ,l}/⟨Frob⟩, and η ∈ P(r^{I_F}), we define
V^{I_F}_{λ,l} := ⊕_{µ∈P_{λ,l}} V^{I_F}_µ,  V^{I_F}_S := ⊕_{µ∈S} V^{I_F}_µ,  V^{I_F}_{[η]} := ⊕_{µ∈[η]} V^{I_F}_µ.
Then we have
V IF λ,l = l−1 i=0 r(Frob) i (V IF S ), V IF [η] = l [η] −1 i=0 r(Frob) i (V IF η ), V IF = [µ]∈P(r I F )/ Frob V IF [µ] .
We remark that r(Frob) l gives an automorphism on V IF S and r(Frob) l [η] gives one on V IF η . Since the multiset of eigenvalues of the automorphism r(Frob) l (resp. r(Frob) l [η] ) do not depend on the choice of S (resp. η), we may write C λ,l (resp. C [η] ) for it. Note that the cardinality of C λ,l equals the dimension of
V IF S . Recall that A µ is a diagonalizable operator on V Jµ χ * given by A µ (v Jµ,∨ w ) = q(w, µ) −1 · v Jµ,∨ w
(see the paragraph before Corollary 3.16).
Theorem 4.8. Let π be an I-spherical representation of G. Let χ ∈ X_w(M) be a weakly unramified character of M such that π is a subquotient of the normalized parabolic induction of χ. With the notations as in Corollary 3.16, we have
L^{ss}(s, π, r) = Π_{(λ,l)∈I^+} Π_{c∈C_{λ,l}} det(1 − q^{−ls} · c · A_λ ∘ I_{χ*}(½_λ) | V^{J_λ}_{χ*})^{−1}.  (4)
Proof. Recall that, by definition, we have L ss (s, π, r) = det 1 − q −s · r(s(π)) V IF −1 .
Since r(s(π)) preserves V IF
[µ] for each [µ] ∈ P(r IF )/ Frob , we have
det 1 − q −s · r(s(π)) V IF = [µ]∈P(r I F )/ Frob det 1 − q −s · r(s(π)) V IF [µ] .
Note that, by fixing a representative µ of [µ], we have
V IF [µ] = V IF µ ⊕ V IF Frob(µ) ⊕ · · · ⊕ V IF Frob l [µ] −1 (µ)
and q −s · r(s(π)) maps V IF Frob i (µ) to V IF Frob i+1 (µ) for each i. Hence, by Lemma 4.7, we get det 1 − q −s · r(s(π)) V IF
[µ] = det 1 − q −l [µ] s · r(s(π)) l [µ] V IF µ .
Recall that s(π) =t A * ,A (χ) ⋊ Frob. Thus we have
r(s(π)) l [µ] = r(N (t A * ,A (χ))) · r(Frob) l [µ] ,
where we put N (t A * ,A (χ)) :=
l [µ] −1 i=0
Frob i (t A * ,A (χ)). As N (t A * ,A (χ)) belongs toT * IF , r(N (t A * ,A (χ))) acts on V IF µ by a scalar multiplication µ(N (t A * ,A (χ))). Note that we have
µ(N (t A * ,A (χ))) = l [µ] −1 i=0 Frob i (µ)(t A * ,A (χ)) = N ([µ])(t A * ,A (χ)).
By the definition of χ * (see Section 4.2.2), we have λ(t A * ,A (χ)) = χ * • κ −1 T * (λ) for any λ ∈ Λ T * = X * (T * IF ) Frob . Hence we get
N ([µ])(t A * ,A (χ)) = χ * • κ −1 T * (N ([µ])
). From the above argument, we obtain L ss (s, π, r) = (
1 − q −l [µ] s · c · χ * • κ −1 T * (N ([µ]))) −1 .(5)
We next rewrite the index set. Recall that we have a surjection N × l from P(r IF )/ Frob onto I. By
V IF λ,l = µ∈P λ,l V IF µ = [µ]∈P λ,l / Frob V IF [µ] ,
we have an equality of multisets C λ,l = [µ]∈P λ,l / Frob C [µ] for any (λ, l) ∈ I. Therefore (5) equals
(λ,l)∈I [µ]∈P λ,l / Frob c∈C [µ] (1 − q −ls · c · χ * • κ −1 T * (λ)) −1 = (λ,l)∈I c∈C λ,l (1 − q −ls · c · χ * • κ −1 T * (λ)) −1 .(6)
Note that the map l : P λ,l / Frob → Z >0 is W * -invariant, and that we have C w(λ),l = C λ,l as multisets for any w ∈ W * since the action of w induces a Frob -
equivalent isomorphism V IF (λ,l) ∼ = V IF (w(λ),l)
. Moreover, we have a bijection
W * /W * λ 1:1 − − → W * · λ; w → w(λ).
Therefore (6) equals
(λ,l)∈I + w∈W * /W * λ c∈C w(λ),l (1 − q −ls · c · χ * • κ −1 T * (w(λ))) −1 = (λ,l)∈I + c∈C λ,l w∈W * /W * λ (1 − q −ls · c · χ * • κ −1 T * (w(λ))) −1 .(7)
By applying Corollary 3.16 to (G * , χ * , λ, c), the right-hand side of the equation (7) is written as
(λ,l)∈I + c∈C λ,l det 1 − q −ls · c · A λ • I χ * (½ λ ) V J λ χ * −1 .
Hence we get the assertion.
4.4. The case of induced representations. In this section, we consider the case where G is unramified and the representation r of L G is induced from the one of G. In this case, the expression of Theorem 4.8 can be slightly simplified as we see in the following.
Assume that G is unramified, i.e., G is quasi-split and splits over an unramified extension of F . As we have G = G * , we use the notation as in Section 4.2.1; for example, T denotes the centralizer of A in G. Since the action of I F onĜ is trivial, we obtain the action of Frob onĜ. There exists l 0 ∈ Z >0 such that the action of Frob l0 onĜ is trivial. Let (r 0 , V 0 ) be a finite-dimensional algebraic representation ofĜ. Via the quotient homomorphism L G →Ĝ ⋊ Frob →Ĝ ⋊ (Z/l 0 Z), we regard the induced representation (r = IndĜ
⋊(Z/l0Z) G r 0 , V = i∈Z/l0Z V 0 ) (8)
as a representation of L G, where Frob permutes each component of V .
Write W = W (G, A). We define a (W × Frob )-equivalent map N 0 by
N 0 : X * (T) → Λ T = X * (T) Frob ; µ → l0−1 i=0
Frob i (µ), and put I 0 (resp. I + 0 ) to be N 0 (P(r 0 )) (resp. the set of dominant elements in N 0 (P(r 0 ))). Then the canonical map I + 0 → I 0 /W is bijective, as discussed for I + before Theorem 4.8.
For µ ∈ P(r 0 ), we write V 0,µ for the µ-eigenspace in V 0 . For µ ∈ P(r 0 ), we define
V µ := l0−1 i=0 r(Frob) i (V 0,µ ).
Then we see V = µ∈P(r0) V µ . For λ ∈ I 0 , we set m 0,λ := µ∈N −1 0 (λ) dim V 0,µ . Since w(V 0,µ ) = V 0,w(µ) , we have m 0,λ = m 0,w(λ) for any w ∈ W .
Theorem 4.9. Assume that G is unramified and r is given by (8). Let π be an I-spherical representation of G. Let χ ∈ X w (T ) be a weakly unramified character of T such that π is a subquotient of the normalized parabolic induction of χ. Then we have L ss (s, π, r) =
Π_{λ∈I^+_0} det(1 − q^{−(l_0s+⟨ρ_B,λ⟩)} I_χ(½_λ) | V^{J_λ}_χ)^{−m_{0,λ}}.
Proof. The proof is similar to that of Theorem 4.8. Since r(s(π)) preserves V µ for each µ ∈ P(r 0 ), we have
L ss (s, π, r) = det 1 − q −s · r(s(π)) V −1 = µ∈P(r0) det 1 − q −s · r(s(π)) V µ −1 .
Since q −s · r(s(π)) maps r(Frob) i (V 0,µ ) to r(Frob) i+1 (V 0,µ ) for each i, Lemma 4.7 shows det 1 − q −s · r(s(π)) V µ = det 1 − q −l0s · r(s(π)) l0 V 0,µ .
By s(π) =χ ⋊ Frob, we have r(s(π)) l0 = r(N 0 (χ)) · r(Frob) l0 = r(N 0 (χ)),
where we put N 0 (χ) := l0−1 i=0 Frob i (χ) ∈T Frob . For µ ∈ P(r 0 ), we have
µ(N 0 (χ)) = l0−1 i=0 Frob i (µ)(χ) = N 0 (µ)(χ) = χ • κ −1 T (N 0 (µ)).
From the above argument, we obtain L ss (s, π, r) = µ∈P(r0)
(1 − q −l0s · χ • κ −1 T (N 0 (µ))) − dim V0,µ = λ∈I0 (1 − q −l0s · χ • κ −1 T (λ)) −m 0,λ = λ∈I + 0 w∈W/W λ (1 − q −l0s · χ • κ −1 T (w(λ))) −m 0,λ = λ∈I + 0 det 1 − q −(l0s+ ρB,λ ) I χ (½ λ ) V J λ χ −m 0,λ ,
where we used m 0,λ = m 0,w(λ) at the third equality, and Corollary 3.16 and Remark 3.5 at the last equality. Hence we get the assertion.
Remark 4.10. When G is split and the finite-dimensional continuous representation r of L G =Ĝ × W F is trivial on W F , we can apply Theorem 4.9 to l 0 = 1 and r 0 = r. In this case, there is no difference between P(r 0 ) and I. Hence the formula in Theorem 4.9 is simplified as follows:
L^{ss}(s, π, r) = Π_{µ∈P^+(r)} det(1 − q^{−(s+⟨ρ_B,µ⟩)} I_χ(½_µ) | V^{J_µ}_χ)^{−m_µ}.
Here m µ denotes the multiplicity of the weight µ in r.
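As a sanity check of Remark 4.10 (this specialization is ours), take G = GL_2 and r the standard representation of Ĝ = GL_2(C). Then, in the notation of Section 5.1 below, P^+(r) = {e^∨_1} with m_{e^∨_1} = 1, ⟨ρ_B, e^∨_1⟩ = 1/2, and J_{e^∨_1} = I, so the formula specializes to the degree-2 Euler factor
L^{ss}(s, π, r) = det(1 − q^{−(s+1/2)} I_χ(½_{e^∨_1}) | V^I_χ)^{−1}.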
4.5. The case of quasi-minuscule representations. In this section, we focus on the case where G is split. Let us investigate simpler cases where the right-hand side of the formula of Theorem 4.8 consists of essentially one nontrivial factor.
Definition 4.11. We say that an irreducible finite-dimensional representation of G is minuscule (resp. quasi-minuscule) if the Weyl group W acts transitively on the set of weights (resp. the set of weights not fixed by W ).
Remark 4.12. Let r be an irreducible representation ofĜ with highest weight µ. Recall that the map P + (r) → P(r)/W is bijective as discussed for I + before Theorem 4.8. Hence, we have #P + (r) = 1 if r is minuscule. Moreover we can check that if r is quasi-minuscule and not minuscule, then we have #P + (r) = 2 as follows: Let us suppose that µ 1 and µ 2 are dominant weights of r fixed by W .
Then it suffices to show that µ 1 = µ 2 , which is equivalent to α, µ 1 = α, µ 2 (9) for any α ∈ X * (T ). LetĜ der denote the derived group ofĜ. As we have T = T der ZĜ, where T der := T ∩Ĝ der and ZĜ is the center ofĜ, it is enough to check the equality (9) for every α ∈ X * (T der ) and α ∈ X * (ZĜ). We first check the former case. For every coroot α ∈ X * (T der ), since µ 1 is W -invariant, we have α, µ 1 = α, s −1 α µ 1 = s α α, µ 1 = − α, µ 1 , where s α is the reflection with respect to α. Thus we have α, µ 1 = 0. As the space X * (T der ) R is spanned by the set of coroots of T der inĜ der , the equality α, µ 1 = 0 holds for any element α of X * (T der ). Similarly, we have α, µ 2 = 0 for any α ∈ X * (T der ). Second, as the representation r is irreducible, it has a central character by Schur's lemma. In other words, all weights of r has the same value on the center ZĜ. Thus the equality (9) holds for any α ∈ X * (ZĜ).
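For instance, for ĝ = sl_2(C) and r the adjoint representation, the weights are {α, 0, −α}; the Weyl group permutes ±α transitively and fixes 0, so r is quasi-minuscule but not minuscule, and
P^+(r) = {α, 0}, with m_0 = 1,
in accordance with the sl_n row of Table 1 (where m_0 = n − 1).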
Corollary 4.13. Let r be a quasi-minuscule representation of the Langlands dual groupĜ with highest weight µ.
(1) Assume that r is minuscule. Then we have
L^{ss}(s, π, r) = det(1 − q^{−(s+⟨ρ_B,µ⟩)} I_χ(½_µ) | V^{J_µ}_χ)^{−1}.
(2) Assume that r is not minuscule. Then the set P + (r) of dominant weights in r consists of µ and a dominant weight µ ′ fixed by W , and we have
L^{ss}(s, π, r) = (1 − q^{−s} χ∘κ^{−1}_T(µ'))^{−m_{µ'}} · det(1 − q^{−(s+⟨ρ_B,µ⟩)} I_χ(½_µ) | V^{J_µ}_χ)^{−1}.
Proof. Assertion (1) is a direct consequence of Theorem 4.8 and Remark 4.12 (recall that the multiplicity of the highest weight of r is one). Let us show assertion (2). Again by Theorem 4.8 and Remark 4.12, we get L ss (s, π, r) = det 1−q −(s+ ρB,µ ′ )
I χ (½ µ ′ ) V J µ ′ χ −m µ ′ det 1−q −(s+ ρB,µ ) I χ (½ µ ) V Jµ χ −1 .
Since µ ′ is a W -invariant weight, by the same argument as in Remark 4.12, we have α, µ ′ = 0 for any α ∈ Φ. Hence W µ ′ = W and ρ B , µ ′ vanishes. Then Corollary 3.16 shows that
det(1 − q^{−(s+⟨ρ_B,µ'⟩)} I_χ(½_{µ'}) | V^{J_{µ'}}_χ) = 1 − q^{−s} χ∘κ^{−1}_T(µ').
Remark 4.14. Assume that G is a split connected simple group with trivial center. In Table 1 in the end of this paper, we list all isomorphism classes of nontrivial quasi-minuscule representations ofĜ (cf. [LR08, 221 page, Fig. A. 1]). Note that since we are assuming that G is simple, a nontrivial quasi-minuscule representation r is minuscule exactly when m 0 = 0. We also remark that the Langlands dual group of the adjoint group is simply-connected, and that there is a natural oneto-one correspondence between finite-dimensional representations of a connected simply-connected simple complex Lie group and finite-dimensional representations of its Lie algebra. For a split connected simple group G ′ whose center is not necessarily trivial, we remark that quasi-minuscule representations of G ′ are exactly those of the Langlands dual group ÷ G ′ /Z ′ of G ′ /Z ′ factoring G ′ , where Z ′ denotes the center of G ′ .
Let ∆ B be the set of simple (with respect to the fixed Borel subgroup B) roots of T inĜ. Let I denote the subset of ∆ B consisting of the boxed simple roots in the Dynkin diagram on Table 1. Then the highest weight µ of a quasi-minuscule representation r ofĜ is characterized as the unique character satisfying α, µ = ® 1 if α ∈ I, 0 otherwise.
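For example, in the first row of Table 1 (g = sl_n, r = ∧^l C^n, boxed node α_l), this characterization recovers, in the GL_n-normalization of Section 5.1 below, the dominant weight
µ = e^∨_1 + ··· + e^∨_l, since ⟨e_i − e_{i+1}, e^∨_1 + ··· + e^∨_l⟩ = δ_{i,l}.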
Examples in the unramified case
In this section, we present some examples in the cases where G is GL_n, Res_{E/F} GL_n, and GSp_{2n}.
5.1. The case of GL_n. Let G = GL_n (n ≥ 2). We take the split maximal torus T consisting of diagonal matrices, and the Borel subgroup B consisting of upper-triangular matrices. We take Z-bases for the character group X^*(T) and the cocharacter group X_*(T) to be {e_i}^n_{i=1} and {e^∨_i}^n_{i=1}, where e_i and e^∨_i are given by
e_i(diag(t_1, ..., t_n)) = t_i and e^∨_i(s) = diag(1, ..., 1, s, 1, ..., 1) (with s in the i-th entry), s ∈ G_m.
Then we see
Φ = {±(e_i − e_j) | 1 ≤ i < j ≤ n}, ∆_B = {e_1 − e_2, ..., e_{n−1} − e_n},
Φ^∨ = {±(e^∨_i − e^∨_j) | 1 ≤ i < j ≤ n}, ∆^∨_B = {e^∨_1 − e^∨_2, ..., e^∨_{n−1} − e^∨_n}.
From these expressions, it follows that the Langlands dual group ' GL n is GL n (C). Since the set of positive roots is given by {e i − e j | 1 ≤ i < j ≤ n}, we have
ρ_B = Σ^n_{i=1} ((n+1−2i)/2) e_i.
For i ≠ j, we define homomorphisms x_{e_i−e_j} : G_a → U_α ⊂ G by x_{e_i−e_j}(a) := I_n + aE_{i,j} for each a ∈ G_a. Here I_n denotes the n × n unit matrix and E_{i,j} denotes the n × n matrix whose (i,j)-entry is 1 and whose other entries are 0. Then {x_α : G_a → U_α}_{α∈Φ} forms a Chevalley basis of G. We take the special point o ∈ B(GL_n, F) corresponding to this Chevalley basis. In other words, as explained in Remark 2.2, for α ∈ Φ, the filtration {U_{α,r}}_{r∈R} of the root subgroup U_α = U_α(F) is given by U_{α,r} = x_α({a ∈ F | val_F(a) ≥ r}). The corresponding special parahoric subgroup K is simply given by GL_n(O).
5.1.1. Exterior L-functions. Consider the l-th exterior power r = ∧^l of the standard representation of Ĝ = GL_n(C) for 1 ≤ l ≤ n−1. It has the unique dominant weight µ = Σ^l_{i=1} e^∨_i. Hence ∧^l is minuscule. We have ⟨ρ_B, µ⟩ = l(n−l)/2, so that (by Remark 4.10)
L(s, π, ∧^l) = det(1 − q^{−(s+l(n−l)/2)} I_χ(½_µ) | V^{J_µ}_χ)^{−1}.
Given a ∈ T^+_l, define m ≥ 1 and r_1(a), ..., r_m(a) so that r_1(a) + ··· + r_m(a) = n and a_1 = a_{r_1(a)} > a_{r_1(a)+1} = a_{r_1(a)+r_2(a)} > ··· > a_{r_1(a)+···+r_{m−1}(a)+1} = a_{r_1(a)+···+r_m(a)}.
The set P^+(Sym^l) of dominant weights is given by {µ_a | a ∈ T^+_l}, and their multiplicities are one. Therefore Theorem 4.8 gives
L(s, π, Sym^l) = Π_{a∈T^+_l} det(1 − q^{−(s+Σ^n_{i=1} a_i(n+1−2i)/2)} I_χ(½_{µ_a}) | V^{J_{µ_a}}_χ)^{−1},
where we have
J_{µ_a} = (A_{ij})_{1≤i,j≤m}, A_{ii} ∈ GL_{r_i(a)}(O) for 1 ≤ i ≤ m, A_{ij} ∈ M_{r_i(a),r_j(a)}(O).
5.2. The case of Res_{E/F} GL_n. Let E be the unramified quadratic extension of F. Let us take G to be the Weil restriction Res_{E/F} GL_{n,E} of the general linear group GL_{n,E} over E with respect to E/F (note that G is unramified). We take A to be the maximal F-split torus of G whose F-valued points consist of diagonal matrices of GL_n(F), T to be the F-rational E-split torus of G consisting of diagonal matrices, and B to be the F-rational Borel subgroup of G consisting of upper triangular matrices. The Langlands dual group Ĝ of G is given by GL_n(C) × GL_n(C), and the Weil group W_F acts on Ĝ by
σ(g_1, g_2) = (g_1, g_2) if σ ∈ I_F, and (g_2, g_1) if σ = Frob.
Hence L G has LḠ :=Ĝ ⋊ Gal(E/F ) = (GL n (C) × GL n (C)) ⋊ Z/2Z as its quotient. We write T n for the E-split maximal torus of GL n,E in Section 5.1, and use notations e i , e ∨ i therein. Then we have X * (T) = X * (T n ) ⊕ X * (T n ). Since the set of positive roots is given by {(e i − e j , 0), (0, e i − e j ) | 1 ≤ i < j ≤ n}, we have
ρ B = n i=1 n + 1 − 2i 2 e i , n i=1 n + 1 − 2i 2 e i .
We take a special point o ∈ B(G, F ) in the apartment attached to A so that the corresponding special parahoric subgroup K is simply given by GL n (O E ), where O E denotes the ring of integers of E. 5.2.1. Asai L-function. Let ǫ ∈ {±1}. Consider the Asai representation As ǫ of L G, which is characterized by the following properties:
• The restriction of As ǫ toĜ = GL n (C) × GL n (C) is given by the tensor product C n ⊠ C n of the standard representations of GL n (C). • The representation As ǫ factors through LḠ , and As ǫ (Frob)(v⊗w) = ǫ·w⊗v for any v, w ∈ C n .
(See Table 1 at the end of the paper for the list of all nontrivial quasi-minuscule representations; cf. Remark 4.14. Footnotes to Table 1:)
*a C^n denotes the n-dimensional representation defining g, and (∧^2 C^{2n})_0 denotes the unique nontrivial irreducible component of the sp_{2n}(C)-module ∧^2 C^{2n}.
*b The spin representation of so_{2n}(C) decomposes into the direct sum of two inequivalent irreducible submodules, which are called half spin.
*c C^{27} × 2 denotes the two 27-dimensional irreducible e_6(C)-modules which are inequivalent.
*d C^{56}, C^{26}, C^7 denote the irreducible 56-, 26-, 7-dimensional g-modules, respectively.
( 2 )
2For any dominant element µ ∈ Λ M , we have µIµ −1 I ∩ N I = I. Proof. Let us show (1). Since the inclusion wIw −1 I ∩ N I ⊃ I is obvious, we only need to prove the converse inclusion wIw −1 I ∩ N I ⊂ I. To see this, it suffices to check that wIw −1 ∩ N I ⊂ I. As we have I = I N I M I N by Lemma 2.4 (1), we have N I = N I M I N . Hence the multiplication map N × M × N → G, which is injective ([BT84, Théorème 2.2.3]), induces a bijection N ×I M ×I N If we define a function f w :
red (0+ denotes any sufficiently small positive number), then we have
by Lemma 2.4 (2), we obtain wIw −1 ∩ N I ⊂ I ′ N I M (I ′ N ∩ I N ) ⊂ I N I M I N = I. The same argument works for (2) by using Lemma 2.4 (3) instead of Lemma 2.4 (2).
q(w) := [IwI : I].
nr (λ) = ρ nr , λ for any dominant element λ ∈ Λ M by [Lus89, Section 1.4 (f)]. Here, we consider the positivity on Σ determined by the alcove of A(S,F ) whose Frobenius fixed part coincides with our fixed alcove of A(A, F ) (see [Ric16, Section 1.2]).
Definition 5.3.1] for the well-definedness of this definition.)
where we regard W µ as a subgroup of W o ⊂W by using the bijection W o1:1 − − → W .We let M µ ⊃ M denote the Levi subgroup of G determined by µ, i.e., for a root α ∈ Φ, U α ⊂ M µ if and only if α, µ = 0. We put Φ + red (M µ ) := {α ∈ Φ
N
Moreover, for each w ∈ W µ , we have a bijection wI/I : x → xwI,
/I : x → xwI. Thus we can take a complete set of representatives {g i } #Jµ/I i=1 of the quotient J µ /I so that each g i is given by x i w with some x i ∈ I Mµ N . We note that µ-conjugation preserves I Mµ N and I Mµ N [w]. This fact follows from that α, µ = 0 for any α ∈ Φ + red (M µ ) by a similar argument to the proof of Lemma 2.4 (3). Hence, for each x i ∈ I Mµ N , there exists a unique x i ′ ∈ I Mµ N satisfying
• a Borel subgroup B ofĜ containing T , • an isomorphism ι between the root datum Ψ(Ĝ) = (X * (T ), ∆ B , X * (T ), ∆ ∨ B ) ofĜ and the dual root datum Ψ(G) ∨ = (X * (T), ∆ ∨ B , X * (T), ∆ B ) of G. Recall that the Kottwitz homomorphism gives an isomorphism κ T : T /T 1 ∼ = − → X * (T IF ) Frob (see Section 2.1, note that now we have Z(T) =T). This induces an isomorphism X w (M ) = Hom(T /T 1 , C × ) ∼ = (T IF ) Frob : χ →χ, which is characterized by the identity
B
* is an weakly unramified character of T * , hence can be regarded as an element of (T * IF ) Frob through the isomorphism X w (T * ) ∼ = (T * IF ) Frob induced from the Kottwitz homomorphism. Similarly, δ 1 2 P is an weakly unramified character of M and regarded as an element of Z(M IF ) Frob through the isomorphism X w (M ) ∼ = Z(M IF ) Frob induced from the Kottwitz homomorphism. The mapt A * ,A induces a map (Z(M) IF ) Frob /W (G, A) → (T * IF ) Frob /W (G * , A * ) (see [Hai15, Lemma 8.1]), for which we again writet A * ,A .
We remark that the maps l and N are (W * × Frob )equivalent, where W * × Frob acts on Z >0 trivially. Hence we can regard N and l as maps defined on P(r IF )/ Frob . Put I to be the image of the W * -equivalent map N × l :P(r IF )/ Frob → Λ T * × Z >0 ; [µ] → (N ([µ]), l [µ] ).DefineI + := I ∩ ((the set of dominant elements in Λ T * ) × Z >0 ).Then the canonical map I + → I/W * is bijective. Indeed, at least one element of each W * -orbit in I belongs to I + since the Weyl group acts on the set of Weyl chambers transitively. The uniqueness follows from, for example, [Hum78, Lemma 10.3.B]. Put P λ,l to be the inverse image of (λ, l) ∈ I under the map P(r IF ) → I, i.e.,P λ,l := {µ ∈ P(r IF ) | (N ([µ]), l [µ] ) = (λ, l)} ⊂ P(r IF ).
[µ]∈P(r I F )/ Frob c∈C[µ]
ri(a) (O) for 1 ≤ i ≤ m, A ij ∈ M ri(a),rj(a) (O)and A ji ∈ M rj(a),ri(a) element µ a ∈ T /T 1 is represented by diag(̟ a1 , . . . , ̟ an ).
for the relative Weyl group of the relative root system Φ. Then we have a short exact sequence (see[Ros15, Lemma 3.1.1]) 1 → Λ M →W → W → 1.Let W o be the subgroup ofW generated by the reflections with respect to the walls of the fixed alcove C containing the point o. By [HR10, Lemma 5.0.1], the natural map W o ⊂W ։ W is bijective since o is a special point. Accordingly, we can expressW as a semi-direct product (i.e., W is regarded as a subgroup ofW through the splitting W1:1
Through the isomorphism W o ∼ = W mentioned in Section 2.2, the subgroup W F of W o is identified with the subgroup W µ of W given byBT84, Proposition 5.2.12] and [BT72, Section 1.5]). See also
an expository of Yu [Yu15, Section 7.3].
Table 1. All nontrivial quasi-minuscule representations of simple Lie algebras. (The columns of the table record the Lie algebra g, its dual ĝ, the representation r of ĝ, the multiplicity m_0 of the weight 0, and the subset I of boxed simple roots in the Dynkin diagram of ĝ; see Remark 4.14 and the footnotes *a-*d.)
5.3. The case of GSp_{2n}. This root datum is the dual root datum of GSpin_{2n+1} given in [Asg02, Proposition 2.4]. Hence the Langlands dual group of GSp_{2n} is GSpin_{2n+1}(C). Here we fix an isomorphism between the root data Ψ(GSp_{2n})^∨ and Ψ(GSpin_{2n+1}) in the following way. Let sim_{GSpin_{2n+1}} be the similitude character of GSpin_{2n+1}(C) defined by composing the covering map GSpin_{2n+1}(C) ↠ GSO_{2n+1}(C) with that sim_{GSO_{2n+1}} of GSO_{2n+1}(C), which is given by
Then we choose a unique isomorphism between the root data Ψ(GSp_{2n})^∨ and Ψ(GSpin_{2n+1}) such that 2e^∨_0 − Σ^n_{i=1} e^∨_i corresponds to sim_{GSpin_{2n+1}}. Since the set of positive roots Φ^+ is given by
Similarly to the case of GL_n, we choose a special point o of the Bruhat-Tits building B(GSp_{2n}, F) associated with the following Chevalley basis, for which the filtration of each root subgroup is again given by U_{α,r} = x_α({a ∈ F | val_F(a) ≥ r}). The corresponding special parahoric subgroup K is simply given by GSp_{2n}(O).
5.3.1. Spin L-function. Consider the spin representation r = Spin of Ĝ = GSpin_{2n+1}. By checking weights in the spin representation of the derived group Spin_{2n+1} (see [Kna02, Chapter V.9.27]), we see that the spin representation of GSpin_{2n+1} is minuscule and that the highest weight µ ∈ X^*(T̂) satisfies ⟨e_i − e_{i+1}, µ⟩ = 0 for 1 ≤ i ≤ n−1 and ⟨2e_n + e_0, µ⟩ = 1. Since the restriction of the similitude character of GSpin_{2n+1} to its center is twice the character defined by the spin representation, we have ⟨e_0, µ⟩ = ⟨e_0, 2e^∨_0 − Σ^n_{i=1} e^∨_i⟩/2 = 1. Therefore we obtain µ = e^∨_0. We have ⟨ρ_B, µ⟩ = n(n+1)/4. Therefore Corollary 4.13 gives
L(s, π, Spin) = det(1 − q^{−(s+n(n+1)/4)} I_χ(½_µ) | V^{J_µ}_χ)^{−1},
where we have
Note that when n = 2, this formula recovers Taylor's formula for L(s, π, Spin) explained in Section 1 (see [Tay88, Section 2.4]).
5.3.2. Standard L-function. Composing the quotient GSpin_{2n+1} → SO_{2n+1} with the standard representation Std of SO_{2n+1}, we obtain an irreducible (2n+1)-dimensional representation r = Std of Ĝ = GSpin_{2n+1}. Its highest weight is given by µ = e^∨_1. The other dominant weight is µ' = 0, whose multiplicity is one. Hence the representation Std is quasi-minuscule.
Acknowledgments. The authors are grateful to Miyu Suzuki for encouragement and constructive advice on a draft of this paper. The authors also thank Hiraku Atobe, Yoichi Mieda, and Lei Zhang for their helpful comments. Finally, the authors express their sincere gratitude to Thomas Haines for his detailed explanation about how to prove our result via Iwahori-Hecke algebras. He also kindly answered a lot of questions by the authors and encouraged them. This work was supported by the Program for Leading Graduate Schools, MEXT, Japan, and JSPS KAKENHI Grant Numbers 17J05451 and 20K14287 (Oi), 17J02456 (Sakamoto), and 17J01075 and 20J00024 (Tamori). R.S. was also supported by the RIKEN Center for Advanced Intelligence Project (AIP).
M. Asgari, Local L-functions for split spinor groups, Canad. J. Math. 54 (2002), no. 4, 673-693.
A. Björner and F. Brenti, Combinatorics of Coxeter groups, Graduate Texts in Mathematics, vol. 231, Springer, New York, 2005.
A. Borel, Automorphic L-functions, Automorphic forms, representations and L-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 2, Proc. Sympos. Pure Math., XXXIII, Amer. Math. Soc., Providence, R.I., 1979, pp. 27-61.
F. Bruhat and J. Tits, Groupes réductifs sur un corps local, Inst. Hautes Études Sci. Publ. Math. (1972), no. 41, 5-251.
F. Bruhat and J. Tits, Groupes réductifs sur un corps local. II. Schémas en groupes. Existence d'une donnée radicielle valuée, Inst. Hautes Études Sci. Publ. Math. (1984), no. 60, 197-376.
P. Cartier, Representations of p-adic groups: a survey, Automorphic forms, representations and L-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1, Proc. Sympos. Pure Math., XXXIII, Amer. Math. Soc., Providence, R.I., 1979, pp. 111-155.
B. H. Gross and M. Reeder, Arithmetic invariants of discrete Langlands parameters, Duke Math. J. 154 (2010), no. 3, 431-508.
T. J. Haines, The stable Bernstein center and test functions for Shimura varieties, Automorphic forms and Galois representations. Vol. 2, London Math. Soc. Lecture Note Ser., vol. 415, Cambridge Univ. Press, Cambridge, 2014, pp. 118-186.
T. J. Haines, On Satake parameters for representations with parahoric fixed vectors, Int. Math. Res. Not. IMRN (2015), no. 20, 10367-10398.
T. J. Haines, Correction to "On Satake parameters for representations with parahoric fixed vectors" [MR3455870], Int. Math. Res. Not. IMRN (2017), no. 13, 4160-4170.
T. J. Haines, R. E. Kottwitz, and A. Prasad, Iwahori-Hecke algebras, J. Ramanujan Math. Soc. 25 (2010), no. 2, 113-145.
T. J. Haines and S. Rostami, The Satake isomorphism for special maximal parahoric Hecke algebras, Represent. Theory 14 (2010), 264-284.
J. E. Humphreys, Introduction to Lie algebras and representation theory, Graduate Texts in Mathematics, vol. 9, Springer-Verlag, New York-Berlin, 1978, Second printing, revised.
J. E. Humphreys, Reflection groups and Coxeter groups, Cambridge Studies in Advanced Mathematics, vol. 29, Cambridge University Press, Cambridge, 1990.
A. W. Knapp, Lie groups beyond an introduction, second ed., Progress in Mathematics, vol. 140, Birkhäuser Boston, Inc., Boston, MA, 2002.
R. E. Kottwitz, Isocrystals with additional structure. II, Compositio Math. 109 (1997), no. 3, 255-339.
V. Lakshmibai and K. N. Raghavan, Standard monomial theory. Invariant theoretic approach, Encyclopaedia of Mathematical Sciences, vol. 137, Invariant Theory and Algebraic Transformation Groups, 8, Springer-Verlag, Berlin, 2008.
D. Loeffler, C. Skinner, and S. Zerbes, Euler systems for GSp(4), preprint, arXiv:1706.00201, 2017.
G. Lusztig, Affine Hecke algebras and their graded version, J. Amer. Math. Soc. 2 (1989), no. 3, 599-635.
M. Oi, R. Sakamoto, and H. Tamori, New expression of unramified local L-functions by certain Hecke operators, preprint, arXiv:1903.07613v2, 2019.
T. Richarz, On the Iwahori Weyl group, Bull. Soc. Math. France 144 (2016), no. 1, 117-124.
S. Rostami, The Bernstein presentation for general connected reductive groups, J. Lond. Math. Soc. (2) 91 (2015), no. 2, 514-536.
R. Steinberg, Lectures on Chevalley groups, University Lecture Series, vol. 66, American Mathematical Society, Providence, RI, 2016. Notes prepared by John Faulkner and Robert Wilson; revised and corrected edition of the 1968 original [MR0466335]; with a foreword by Robert R. Snapp.
R. Taylor, On congruences between modular forms, Thesis (Ph.D.), Princeton University, ProQuest LLC, Ann Arbor, MI, 1988.
J. Tits, Reductive groups over local fields, Automorphic forms, representations and L-functions (Proc. Sympos. Pure Math., Oregon State Univ., Corvallis, Ore., 1977), Part 1, Proc. Sympos. Pure Math., XXXIII, Amer. Math. Soc., Providence, R.I., 1979, pp. 29-69.
J.-K. Yu, Smooth models associated to concave functions in Bruhat-Tits theory, Autour des schémas en groupes. Vol. III, Panor. Synthèses, vol. 47, Soc. Math. France, Paris, 2015, pp. 227-258.
Department of Mathematics (Hakubi center), Kyoto University, Kitashirakawa, Oiwake-cho, Sakyo-ku, Kyoto 606-8502, Japan. Email address: [email protected]
RIKEN Center for Advanced Intelligence Project (AIP), 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan. Email address: [email protected]
Department of Mathematics, Faculty of Science, Hokkaido University, Kita 10, Nishi 8, Kita-Ku, Sapporo, Hokkaido, 060-0810, Japan. Email address: [email protected]
| [] |
[
"Ab-initio study of model guanine assemblies: the role of π-π coupling and band transport",
"Ab-initio study of model guanine assemblies: the role of π-π coupling and band transport"
] | [
"Rosa Di Felice \nIstituto Nazionale di Fisica della Materia (INFM)\nDipartimento di Fisica\nUniversità di Modena e Reggio Emilia\nVia Campi 213/A41100ModenaItaly\n",
"Arrigo Calzolari \nIstituto Nazionale di Fisica della Materia (INFM)\nDipartimento di Fisica\nUniversità di Modena e Reggio Emilia\nVia Campi 213/A41100ModenaItaly\n",
"Elisa Molinari \nIstituto Nazionale di Fisica della Materia (INFM)\nDipartimento di Fisica\nUniversità di Modena e Reggio Emilia\nVia Campi 213/A41100ModenaItaly\n",
"Anna Garbesi \nCNR ISOF\nArea della Ricerca, Via P. Gobetti 10140129BolognaItaly\n"
] | [
"Istituto Nazionale di Fisica della Materia (INFM)\nDipartimento di Fisica\nUniversità di Modena e Reggio Emilia\nVia Campi 213/A41100ModenaItaly",
"Istituto Nazionale di Fisica della Materia (INFM)\nDipartimento di Fisica\nUniversità di Modena e Reggio Emilia\nVia Campi 213/A41100ModenaItaly",
"Istituto Nazionale di Fisica della Materia (INFM)\nDipartimento di Fisica\nUniversità di Modena e Reggio Emilia\nVia Campi 213/A41100ModenaItaly",
"CNR ISOF\nArea della Ricerca, Via P. Gobetti 10140129BolognaItaly"
] | [] | Several assemblies of guanine molecules are investigated by means of first-principle calculations.Such structures include stacked and hydrogen-bonded dimers, as well as vertical columns and planar ribbons, respectively, obtained by periodically replicating the dimers. Our results are in good agreement with experimental data for isolated molecules, isolated dimers, and periodic ribbons. For stacked dimers and columns, the stability is affected by the relative charge distribution of the π orbitals in adjacent guanine molecules. π-π coupling in some stacked columns induces dispersive energy bands, while no dispersion is identified in the planar ribbons along the connections of hydrogen bonds. The implications for different materials comprised of guanine aggregates are discussed. The bandstructure of dispersive configurations may justify a contribution of band transport (Bloch type) in the conduction mechanism of deoxyguanosine fibres, while in DNA-like configurations band transport should be negligible. | 10.1103/physrevb.65.045104 | [
"https://export.arxiv.org/pdf/cond-mat/0110636v1.pdf"
] | 118,962,968 | cond-mat/0110636 | f7dbaf9858c80f352d44de6ad2807e123a31ea05 |
Ab-initio study of model guanine assemblies: the role of π-π coupling and band transport
30 Oct 2001
Rosa Di Felice
Istituto Nazionale di Fisica della Materia (INFM)
Dipartimento di Fisica
Università di Modena e Reggio Emilia
Via Campi 213/A, 41100 Modena, Italy
Arrigo Calzolari
Istituto Nazionale di Fisica della Materia (INFM)
Dipartimento di Fisica
Università di Modena e Reggio Emilia
Via Campi 213/A, 41100 Modena, Italy
Elisa Molinari
Istituto Nazionale di Fisica della Materia (INFM)
Dipartimento di Fisica
Università di Modena e Reggio Emilia
Via Campi 213/A, 41100 Modena, Italy
Anna Garbesi
CNR ISOF
Area della Ricerca, Via P. Gobetti 101, 40129 Bologna, Italy
Ab-initio study of model guanine assemblies: the role of π-π coupling and band transport
30 Oct 2001. PACS numbers: 73.22.Dj, 71.15.Mb, 87.14.Gg
Several assemblies of guanine molecules are investigated by means of first-principle calculations.Such structures include stacked and hydrogen-bonded dimers, as well as vertical columns and planar ribbons, respectively, obtained by periodically replicating the dimers. Our results are in good agreement with experimental data for isolated molecules, isolated dimers, and periodic ribbons. For stacked dimers and columns, the stability is affected by the relative charge distribution of the π orbitals in adjacent guanine molecules. π-π coupling in some stacked columns induces dispersive energy bands, while no dispersion is identified in the planar ribbons along the connections of hydrogen bonds. The implications for different materials comprised of guanine aggregates are discussed. The bandstructure of dispersive configurations may justify a contribution of band transport (Bloch type) in the conduction mechanism of deoxyguanosine fibres, while in DNA-like configurations band transport should be negligible.
I. INTRODUCTION.
The twofold issue of scaling down the electronic devices and of realizing a high circuit integration density on a single chip has recently seen the rise of the field of molecular electronics, which consists of the use of molecules to realize electrically conducting structures [1,2]. Because of their sequence-specific recognition properties, DNA molecules are attracting attention for the construction of nanometer scale devices, where they might be used, by virtue of their self-assembling capabilities, to wire the electronic materials in a programmable way [3]. This research path has led recently to a set of controlled experiments for the direct measurement of the d.c. conductivity.
Using interdigital electrodes, anisotropic conductivity was found in an aligned DNA cast film: At room temperature, a large ohmic current, linearly increasing with the applied voltage, was measured [4]. Ohmic behavior and high conductivity were also found for a 600 nm long rope made of a few λ-DNA molecules [5]. Instead, nonlinear current/voltage curves, exhibiting a voltage gap at low applied voltage, were measured through a single 10 nm long poly(dG)/poly(dC) DNA molecule trapped between two metal nanoelectrodes [6]. Large currents were observed, in air and in vacuum, both at ambient temperature and at 4 K. The authors suggested that the observed electron transport is best explained by a semiconductor-like band model where the electronic states are delocalized over the entire length of the base pair stack [6].
The guanine (G) base is particularly interesting in view of obtaining conductive molecular aggregates, because of its low ionization potential, which suggests its viability to mediate charge motion along a sequence of bases [7,8]. This peculiar property has opened the way to the measurement of the electrical conductivity of G aggregates. Interestingly, a metallic nanogate filled with a dried solution of a lipophilic derivative of 2'-deoxyguanosine [9,10] displayed a current/voltage behavior [11] similar to that of the semiconducting poly(G)/poly(C) sample [6]. Previous investigations had shown that, in organic solvents, this molecule undergoes extensive self-assembly, mediated by H-bonding among the guanine bases, to give ribbon-like aggregates that, upon drying, form fiber structures within which the guanine cores of the ribbons lie on parallel planes at a distance of about 3.4 Å [12,13].
While a direct comparison between the findings for the deoxyguanosine materials and for DNA cannot be made, because of the different overall experimental settings and chemical nature of the molecules, one cannot avoid noticing the similar and peculiar current/voltage characteristics of the nanogates interconnected by molecules featuring self-assembled [9,10] or inherent [6] guanine stacks.
In the following, we have chosen to perform ab-initio calculations of the structural and ground state electronic properties of extended model structures whose building block is the G base alone. Of course, we are well aware that these model structures are only partially related to the structure of a poly(G)/poly(C) duplex [6], where each guanine is H-bonded to a cytosine in the opposite strand. However, self-assembled G structures are among the simplest base aggregates characterized experimentally [10,12], thus allowing accurate theoretical calculations as well as comparison with experimental data. Additionally, our choice is motivated by the role played by guanine, the DNA base with the lowest ionization potential, in the mechanism of charge transport [14,15,16].
The main question that we address here is whether the electronic properties of extended G-based structures can account for a band-like mechanism for charge transport. Previous theoretical studies for DNA bases and for their assemblies can be found in the existing literature [17,18,19,20,21,22]. Both MP2/Hartree-Fock and density-functional-based calculations were performed in the past with localized basis sets (GAUSSIAN) to treat isolated guanines and small clusters of guanines [17]. Here, we focus on density-functional-based calculations using a plane wave basis and ab-initio pseudopotentials [23]: this technique should allow for a correct description of solid aggregates of G's, resembling those of long-range deoxyguanosine fibers, provided the single molecules and the clusters are well described.
Before simulating the structures of our interest, which are planar ribbons and stacked sequences of G's, we check that our technique is well suitable to describe isolated G molecules as well as pairs of molecules in different (lateral or vertical) configurations. We then consider (GG) n vertical stacks, and (GG) n isolated and stacked periodic ribbons. We study the stability of different vertical configurations as a function of the relative atomic positions between adjacent G molecules, and identify the role of π-π coupling. We demonstrate that pseudopotential plane-wave Density Functional Theory (DFT) calculations at the current level of accuracy are able to reproduce not only the equilibrium length of isolated hydrogen bonds, but also periodic sequences of such bonds. Concerning the electronic properties, all the model solids that we considered are semiconducting with large energy gaps. The dispersion of the highest valence and lowest conduction bands is always negligible in the (x, y) plane containing the molecules (the guanines are connected laterally by H bonds).
Conversely, the dispersion along the z direction perpendicular to the G planes is found to be extremely sensitive to the detailed geometric stacking of the bases, which in turn can be affected by different environments. In vertical stacking geometries that are similar to those present in DNA, the calculated dispersions of both the valence band maximum (VBM) and the conduction band minimum (CBM) are extremely small. Consequently, both electrons and holes introduced by "doping" or photoexcitation would have very large effective masses, and hence low mobility, in these types of structures. Instead, our results show that a band-like contribution may be responsible for the conduction mechanism in the 2'-deoxyguanosine lipophilic derivative [10].
The paper is organized as follows. Section II describes the computational method. Section III deals with the calculated equilibrium geometries and the electronic structure for the isolated G molecule and for the model guanine assemblies. Section IV presents a discussion of our results with an outlook to the implications for the physics of guanine ribbons and DNA-like stackings. Finally, section V contains a summary of the arguments presented in the paper.
II. METHOD.
Our calculations are based on the DFT in the Local Density Approximation (LDA) [24].
For the hydrogen-bonded pairs and ribbons, we take into account BLYP gradient corrections to the exchange-correlation functional [25]. The electron-ion interaction is described via ab-initio norm conserving pseudopotentials in the factorized form of Kleinman and Bylander [26]. The search for optimized metastable structures is performed by total energy minimizations with respect to the ionic and electronic degrees of freedom. The former are represented by the ionic coordinates, the latter by the electronic wavefunctions. The ions are treated in a classical formalism, and are displaced according to the forces derived from the potential determined by the full quantum mechanical electronic structure [23], within a Car-Parrinello-like scheme [27]. For any selected geometry, all the atoms are allowed to relax, until the forces vanish within an accuracy of 0.05 eV/Å. Thus, for each metastable structure, we obtain both the geometry and the consistent single-particle electron energies and wavefunctions. The electron wavefunctions are expanded in a basis of plane waves with kinetic energy up to 50 Ry. This cutoff is very high with respect to standard first-principle calculations for most solids, and is due to the presence of the first-row elements C, N, and O, whose valence electron wavefunctions have strong oscillations in the region around the nucleus, thus needing many plane waves for an accurate treatment. For one of the model systems, we have verified that increasing the precision of the calculations both in the planewave kinetic energy cutoff (up to 60 Ry) and in the accuracy within which the atomic forces vanish (0.025 eV/Å), gives changes of the bond lengths smaller than 0.01Å and of the bond angles smaller than 0.5 degrees, within a guanine molecule, whereas the inter-planar distance in the stacks is not affected at all. Additionally, for the diatomic molecules N 2 and O 2 the employed pseudopotential was tested up to a cutoff of 80 Ry, with no significant improvement.
Our method employs periodic boundary conditions in three dimensions. In order to simulate isolated molecules, we choose a large supercell of size 15.9Å × 15.9Å × 10.6Å. Such a choice ensures that the minimum distance (in any spatial direction) between two molecules is larger than 8.5Å. In particular, the distance is 10.6Å in the direction perpendicular to the plane of the molecules, very large with respect to the distance of 3.37Å between two neighboring bases in B-DNA. The supercells for different assemblies of G molecules are described in section III, where the model structures are discussed in detail.
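To give a feel for the basis-set size implied by these settings, the short sketch below (added here for illustration; it is not code from the original work) estimates the number of plane waves below the 50 Ry cutoff for the isolated-molecule supercell, using the standard counting N ≈ Ω k_cut³/(6π²), with k_cut = sqrt(2 E_cut) in Hartree atomic units. The cell dimensions and cutoff are the ones quoted in the text.

```python
import math

BOHR_PER_ANGSTROM = 1.8897259886  # length conversion factor
RY_TO_HARTREE = 0.5

def n_plane_waves(cell_angstrom, cutoff_ry):
    """Rough count of plane waves with kinetic energy below the cutoff.

    Uses N ~ Omega * k_cut^3 / (6 pi^2): the volume of the cutoff sphere in
    reciprocal space divided by the volume per reciprocal-lattice point.
    """
    a, b, c = (x * BOHR_PER_ANGSTROM for x in cell_angstrom)
    omega = a * b * c                                    # cell volume, bohr^3
    k_cut = math.sqrt(2.0 * cutoff_ry * RY_TO_HARTREE)   # bohr^-1
    return omega * k_cut**3 / (6.0 * math.pi**2)

# Supercell used for the isolated molecule (values taken from the text).
print(int(n_plane_waves((15.9, 15.9, 10.6), cutoff_ry=50)))  # roughly 1e5 plane waves
```

For the 15.9Å × 15.9Å × 10.6Å cell this gives on the order of 10^5 plane waves per wavefunction, which is why the 50 Ry cutoff is described above as demanding compared with standard calculations for most solids.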
For Brillouin Zone sums, the single high-symmetry Γ point has been employed in the case of isolated molecules and dimers, while one or two (depending on the symmetries) special k points [28] have been employed in the case of periodic columns and ribbons, and in the case of stacked ribbons.
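The special points of Ref. [28] follow the Monkhorst-Pack construction. As a minimal illustration (the paper only states that "one or two special k points" were used, so the 1×1×2 grid below is merely an example, not the authors' actual sampling), the fractional coordinates of a q1×q2×q3 Monkhorst-Pack grid can be generated as follows:

```python
import itertools

def monkhorst_pack(q1, q2, q3):
    """Fractional k-point coordinates u_r = (2r - q - 1) / (2q), r = 1..q,
    along each reciprocal-lattice direction (Monkhorst and Pack, 1976)."""
    def axis(q):
        return [(2 * r - q - 1) / (2 * q) for r in range(1, q + 1)]
    return list(itertools.product(axis(q1), axis(q2), axis(q3)))

# Example: a 1x1x2 grid along the stacking direction yields two special points.
print(monkhorst_pack(1, 1, 2))  # [(0.0, 0.0, -0.25), (0.0, 0.0, 0.25)]
```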
The computational technique has been successfully applied in many investigations of the structure and electronic properties of inorganic and organic materials [24,29,30]. In section III, before reporting the results of our calculations for the model guanine assemblies, we address the issue of extending such calculations to biomolecules, by presenting a test on the G base.
III. RESULTS.
In this section we present our results about the isolated G molecules and about their assemblies and discuss them in the frame of the existing theoretical literature, which is limited to isolated G's and G-pairs [18,19,20] and does not give any account of periodic G columns and ribbons. We relate the outcome of the calculations to new experimental findings about the structure and electrical behavior of deoxyguanosine-based solids [10].
The section is divided into sub-sections for the different systems: (A) the isolated guanine molecule; (B) the stacked GG dimers; (C) the stacked (GG)n columns; (D) the hydrogen-bonded GG pairs; (E) the hydrogen-bonded (GG)n ribbons and the stacked ribbons.

A. The Isolated Guanine Molecule.

We start by presenting the calculated structure and selected electron states for the G molecule. This simple system allows us to evaluate the accuracy of our method, by comparison with X-ray data and with the outcome of previous ab-initio calculations. Furthermore, the total energy of the equilibrium structure of the isolated molecules is necessary in order to evaluate formation energies of dimers, ribbons, and columns.
The structure of guanine is well known from X-ray studies [31,32]. Figure 1(a,b) shows a plane view of the molecule, and the isosurface plot for the total charge density. The calculated bond lengths and angles are in good agreement with X-ray data (see Table I). Most of the bond lengths are underestimated within 2% and 3% with respect to the experimental structure, the only exceptions being the underestimate of 4.3% for the C 6 -O 6 bond and the overestimate of 1.6% for the C 6 -N 1 bond. The average C-H and N-H distances, not reported in Table I, are 1.08Å and 1.01Å, respectively, in good agreement with bond lengths in NH 3 and CH 3 molecules. The bond angles in guanine are also reproduced with a high degree of accuracy (Table I): the discrepancy with respect to the experimental data is below 2.5%, but the average percentage error is 1%. The formation energy of G, with respect to its elemental components in stable phases (O 2 , H 2 , and N 2 molecules, crystalline diamond) is 87.5 Kcal/mole. By starting the atomic relaxation with a planar G molecule as the initial condition, we find that the planar configuration is indeed a metastable state. By considering a different initial condition with a non-planar amino group (the NH 2 complex bonded to the site C 2 , see Figure 1a), we find another metastable state. However, the deviation from planarity is small, and the total energy difference between the planar and the puckered geometries is also very small, within the precision of our calculations (estimated to be about 10 meV/G [33]). While previous calculations have pointed out a stronger stabilization effect of non-planarity [19], no direct gas phase experimental data exist. Moreover, the hydrogen bonds tend to flatten the structure when forming dimers and ribbons of G's; thus, we are confident that our results for the periodic structures are not affected by this issue.
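As a quick consistency check (an editorial note, using only numbers quoted in this paper), the formation energy of the isolated molecule can be converted with the conversion factor 1 eV/G = 23.25 kcal/mole given in the reference list of this work:

```latex
E_{\mathrm{form}}(\mathrm{G}) \;\simeq\; \frac{87.5\ \text{kcal/mole}}{23.25\ \text{kcal/mole per eV/G}} \;\approx\; 3.76\ \text{eV/G},
```

consistent in magnitude with the E^1_form entry reported for the single molecule in Table II (-3.80 eV/G).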
The single particle eigenvalues are characterized by a DFT-LDA energy gap of 4.8 eV between the highest occupied (HOMO) and the lowest unoccupied (LUMO) electron states [24]. We have identified σ and π orbitals. The HOMO (see Figure 1c) has a π character and is localized on the C 8 , O 6 , N 2 atoms, and on the C 4 -C 5 and N 3 -C 2 bonds. The LUMO (see Figure 1d) has a π character and is localized on the N 9 , C 4 , C 5 , C 2 , and N 1 atoms. Because they extend out of the guanine plane, both the HOMO and the LUMO states are well suitable for interactions with adjacent similar states when forming vertical stacks, inducing a splitting of degenerate molecular orbitals by an amount that depends on the strength of such interactions. In the case of an infinite periodic stack, this is the mechanism that may give rise to band dispersion for sufficiently strong coupling, and to mobile carriers if the bands are partially occupied (for instance, as a consequence of doping or photoexcitation).
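A minimal way to picture how this orbital splitting turns into dispersion in a periodic stack (an editorial illustration, not a model used in the paper) is a one-dimensional tight-binding band built from one π orbital per molecular plane, with nearest-neighbor hopping t along the stacking direction and interplanar spacing d:

```latex
E(k_z) \;=\; \varepsilon_{\pi} \;-\; 2t\,\cos(k_z d), \qquad -\frac{\pi}{d} < k_z \le \frac{\pi}{d}.
```

The total bandwidth is 4|t| and the band-edge effective mass is m* = ħ²/(2|t|d²), so a larger π-π overlap (larger |t|) means both a wider band and a lighter, more mobile carrier.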
B. Stacked GG dimers.
In order to select low-energy geometries for the vertical columnar structures, whose electronic properties are the main subject under investigation, we have first considered stacked dimers. We name stacked dimer a pair of G molecules lying in parallel planes whose distance in the perpendicular direction is an output of our calculations. A column is obtained by periodically replicating a dimer along the stacking (perpendicular) direction.
We have analyzed several configurations (Figure 2) characterized by the relative azimuthal rotation angle of the two G's in the pair, with respect to an axis perpendicular to the G plane, and by an in-plane translation. The supercell used in the calculations was 15.9Å × 15.9Å × 19.1Å. With periodic boundary conditions, two dimers in neighboring supercells are 15.7Å apart in the stacking direction, sufficient to avoid spurious interactions between them. By allowing all the atoms to relax, the interplanar distance between the two G's in a pair was a free parameter. In this way we determined the equilibrium interplanar distance to be used in the calculations for the periodic columns. For all the configurations shown in Figure 2 (e.g., independently of the relative rotation angle between the bases), the average interplanar distance was 3.37Å, typical of base stacks in B-DNA. Each base maintained a planar geometry, with out-of-plane fluctuations smaller than 0.05Å. Top views of the computed stacked dimers are illustrated in Figure 2. In Figure 2a, the two G's of the dimer are perfectly eclipsed, in particular the hexagonal rings are on top of each other, and there is maximum superposition of the π-like HOMO and LUMO of the two guanines (label GGv.A). In Figure 2b, the rotation angle is zero as for GGv.A, but the center of mass of one molecule is shifted with respect to the other, in order to lower the π-π superposition (label GGv.B). In Figure 2c, the azimuthal rotation angle is zero, as well as the relative translation of the two G's, but there is a reflection of the upper molecule with respect to its plane: this configuration (label GGv.C) is the only one, among those considered in this work, that exhibits a reflection and allows one to discriminate between two molecular faces. In Figure 2d, the rotation angle is 180°, and the π-π superposition is large, though smaller than in GGv.A (label GGv.D). In Figure 2e, the azimuthal rotation angle is 36°, and a translation brings the two molecules in a configuration similar to that in B-DNA (label GGv.E).
Although some of these stacked structures are not likely to occur in nature, they allow to understand important microscopic features that may be relevant in real structures. Most notably, the dependence of the stability and of the electronic properties on the azimuthal angle and π superposition is accessible. The lowest-energy configuration among those shown in Figure 2 is GGv.D, while GGv.A has the highest formation energy, and the other dimers have intermediate formation energies [34]. However, the energy difference between the two extreme cases GGv.A and GGv.D is small, about 250 meV/G. We attribute the highest formation energy of GGv.A to the electrostatic repulsion due to the complete π-π superposition. In fact, the π-like HOMO's of guanines in neighboring planes are mostly responsible for their interaction: the superposition of negative charge in the same region of space (the hexagonal ring) for configuration GGv.A contributes with a Coulomb repulsion. The π-π superposition is large also in GGv.D, but the repulsion is much smaller, making GGv.D a more viable model for a stacked GG pair [18].
We wish to point out that, although the lower superposition of adjacent π orbitals of GGv.D with respect to GGv.A decreases the electrostatic repulsion between the two molecules and makes the dimer more viable, such interaction is still rather large and is the origin of energy dispersion in columnar structures based on the GGv.D dimer. This issue is addressed in the next sub-section, by computing the band structures of model periodic stacks built up with the dimers just described. It is also worth stressing that, for the stacked dimers, without inclusion of sugars, phosphates, and water as in real situations, the rotation angle of 36 • is not preferred. This result is in line with the fact that the inner core of the base pair stack is not completeley responsible for the stability of double-stranded DNA, but also the backbone and the environment are relevant factors.
C. Periodic stacked poly(G) columns.
The building block of each periodic stack is a GG pair; we have analyzed five different model columns starting from the dimers described above (Figure 2). The supercell for these calculations is 15.9Å × 15.9Å × 6.74Å: no vacuum region separates two adjacent dimers in the stacking direction. The periodic structures are labeled with the same names as the stacked dimers. Direct and reciprocal one-dimensional crystal lattices are associated to the periodic columns: the basis vectors of these lattices are $a_3 = 6.74$ Å and $b_3 = (2\pi/a_3)\,\hat{a}_3$. The formation energies are reported in Table II. The energetical order is the same as for the dimers. The most stable configuration is GGv.D. We stress again that configuration A, with very large π-π superposition, has the highest formation energy. The energy is substantially reduced in the column GGv.D, where the superposition is still large but the repulsion is weaker: the hexagonal rings lie on top of each other, but the O atoms in adjacent planes lie opposite to each other. Thus, we attribute the dominant repulsive contribution of the π-π interaction to the O atoms.
A detailed analysis of the electronic properties reveals other interesting features. We report the numerical data in Table II and we show the bandstructure for columns GGv.A, GGv.D, and GGv.E in Figure 3.
GGv.E, with the two G's rotated by 36° as in portions of nucleic acids, has flat HOMO- and LUMO-derived bands, very large effective masses, and is therefore incompatible with even a partial band transport mechanism for the conduction in guanine-based devices [10]. The periodic columns GGv.A and GGv.D, instead, have electronic properties that may support electronic transport through the base stack. For GGv.A, the HOMO-band disperses downwards by 0.65 eV and the LUMO-band disperses upwards by 0.52 eV, between the center (Γ) and the edge (A = b_3/2, b_3 being the reciprocal lattice vector along the stacking direction) of the Brillouin Zone. For GGv.D, the dispersions of the HOMO- and LUMO-bands are, respectively, 0.26 eV downwards and 0.13 eV upwards. The effective masses of GGv.A and GGv.D, reported in Table II, are of the order of 1-2 free electron masses.
Although these values are much larger than those of conducting organic polymers [35], they are similar to those of inorganic materials for which band transport is demonstrated. Notable examples of such materials are wide-bandgap semiconductors, such as group-III nitrides [36].
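The effective masses quoted in Table II follow from the curvature of the HOMO- and LUMO-derived bands at the band edge, m* = ħ² [d²E/dk²]⁻¹. As an illustration of how such a number can be extracted from a sampled band (a generic sketch, not the authors' actual post-processing), one can fit a parabola to E(k) near the extremum:

```python
import numpy as np

HBAR2_OVER_2M0 = 3.81  # eV * Angstrom^2, the value of hbar^2 / (2 m0)

def effective_mass(k_per_angstrom, e_ev):
    """Band effective mass in units of the free-electron mass m0, from a
    parabolic fit E(k) ~ E0 + c k^2 near a band edge: m*/m0 = (hbar^2/2m0)/c."""
    c = np.polyfit(k_per_angstrom, e_ev, 2)[0]   # band curvature, eV * Angstrom^2
    return HBAR2_OVER_2M0 / abs(c)

# Toy check: a free-electron band E = (hbar^2/2m0) k^2 must return m*/m0 = 1.
k = np.linspace(-0.1, 0.1, 11)                   # 1/Angstrom
e = HBAR2_OVER_2M0 * k**2                        # eV
print(round(effective_mass(k, e), 3))            # 1.0
```

A flat band (negligible curvature) then corresponds to a very large, formally infinite, effective mass, as listed for GGv.E.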
As a note to support the strength of our results, we wish to point out that a test calculation for the model GGv.A demonstrates that the values of the band dispersions do not depend (within 0.02 eV) on the accuracy required for vanishing atomic forces (0.025 eV/Å instead of 0.05 eV/Å), nor on the approximation employed for the exchange-correlation functional (BLYP instead of LDA). To illustrate how the π-π stack may originate channels for charge migration through a band transport mechanism, in Figure 4 we show an isosurface of the HOMO state at the A point for the columnar structure GGv.D: the interaction between the two molecules in the cell is evident in the superposition resulting in a delocalized orbital.
Summarizing our study of the columnar structures, we emphasize that the orbital interaction is strong. By contrast, we will show below that it is practically absent in hydrogen-bonded pairs. This is in agreement with the common knowledge that hydrogen bonds have an electrostatic (rather than covalent) nature. In the following subsections, we compare the behavior of planar G pairs and ribbons with that of stacked G pairs and columns. In particular, we find that hydrogen bonding does not give rise to dispersive electron bands, contrary to what we have seen for the π stacking.
D. Planar hydrogen-bonded GG dimers.
We have selected one possible arrangement of hydrogen bonds, giving a structure named GG3 [17,21] (Figure 5a). We have not considered other documented hydrogen-bonded GG pairs [17], because we focus our interest here on the guanine ribbons present in the fibers of a lipophilic derivative of 2'-deoxyguanosine: such fibers [13] have the bond network illustrated in Figure 5(b). The equilibrium structure that we obtain after atomic relaxation is in good agreement with that of the GG3 hydrogen-bonded pair previously described via quantum chemistry and DFT cluster calculations [17]. The two individual G molecules in the planar pair remain very similar to their isolated form. The N7(H)···N1 hydrogen bond has a length of 2.93Å and forms an angle of 177° (2.96Å and 173° in the theoretical literature [17]). The N2(H)···O6 hydrogen bond has a length of 2.85Å and forms an angle of 169° (3.27Å and 166° in the theoretical literature [17]). Our calculated value for the DFT-BLYP energy gap [24] of the GG3 dimer is 2.45 eV. The formation energy of the structure, with respect to two isolated guanine molecules, is -310 meV/G. This value, compared to the formation energy of the most stable stacked dimer GGv.D (≃ -200 meV/G), is in agreement with the previous demonstration that hydrogen bonds are stronger than stacking interactions [17].
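The formation (binding) energies per guanine quoted here and in Tables II and III are differences of DFT total energies. Written out explicitly (an editorial restatement of the definition used in the text, with E_tot the total energy of the relaxed assembly containing N_G guanines and E(G) that of the relaxed isolated molecule):

```latex
E^{\,2}_{\mathrm{form}} \;=\; \frac{E_{\mathrm{tot}}(\text{assembly}) - N_{\mathrm{G}}\,E(\mathrm{G})}{N_{\mathrm{G}}},
```

so that, for example, the value of -310 meV/G for the GG3 pair corresponds to a total binding energy of about 0.62 eV for the two hydrogen bonds of the dimer.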
The structure of the GG3 dimer was calculated only as a basic building block of the fiber-state ribbons, whose electronic properties we are interested in. Thus, we believe that the underestimate of the N2(H)···O6 distance with respect to the results of other computations is not a serious issue. A slight underestimate of the hydrogen bonds calculated in the frame of DFT is documented [17]. Furthermore, the contraction of the N2(H)···O6 hydrogen bridge is due to an imprecise account of secondary interactions such as C8(H)···O6. It is likely that such secondary interaction is reduced in one-dimensional ribbons, thus minimizing the shortcomings of DFT.
E. Planar hydrogen-bonded ribbons.
In a periodic ribbon obtained by piling up replicas of the GG3 dimer (Figure 5b), the equilibrium structure maintains the bonding characteristics of the dimer. There is a huge energy gain in forming the one-dimensional ribbon, of 820 meV/G with respect to isolated G molecules, and of 510 meV/G with respect to the hydrogen-bonded GG3 pair (Table III).
Note that G has a large dipole moment [19]; the dipole moments of G's add up to give a nonvanishing dipole also in the GG3 dimer and in the ribbon, parallel to the ribbon axis [10].
Such electrostatic interactions account for the high stability of the planar hydrogen-bonded structures, well known both in solution and in the solid state.
The bandstructure of the ribbon was calculated along a symmetry line parallel to its axis.
The HOMO- and LUMO-derived bands are separated by a DFT-BLYP energy gap [24] of 3.84 eV, and are both dispersionless. Consequently, these bands have practically infinite effective masses, so that the electrons and holes in these states are not mobile according to this picture. The electronic state analysis shows that the HOMO and the LUMO have a π character, similar to isolated G. The electronic states are localized around single G molecules; no delocalized intermolecular states, extended through the hydrogen bonds in the ribbon, are present. The dispersion induced by hydrogen-bonding is not compatible with band transport. The only possible conduction mechanism through hydrogen bonds is via electrostatic interactions. Therefore, in a device where dried deoxyguanosine fibers are deposited between two metal electrodes [10], if the ribbons are stretched between the electrodes, band transport cannot contribute to conduction.
In order to investigate the competition of π-π coupling versus hydrogen-bonding interactions, we have simulated two different configurations of stacked ribbons, periodic along the stacking direction. Top views are shown in Figure 5(b,c). Figure 5b shows indeed a single ribbon, but the top view is equivalent for two exactly eclipsed ribbons on top of each other (label SR.A): the stacking between adjacent bases is similar to that of column GGv.A, and all the individual bases are perfectly aligned. Figure 5c shows a stack of two ribbons where only half of the individual G molecules lie on top of each other, with a stacking similar to that of configuration GGv.D (this structure is labeled SR.D). In both the SR.A and SR.D geometries, the projections of the ribbon axes on the (x, y) plane coincide, identifying an axis for the stack: this axis defines the Γ − X direction in the one-dimensional BZ. The supercells are 21.2Å × 11.3Å × 6.74Å for SR.A and 24.3Å × 11.3Å × 6.74Å for SR.D. The choice of different supercells allows us to have a vacuum region of the same volume between neighboring ribbons in the two configurations SR.A and SR.D. The relaxed structures maintain the geometry of the single ribbon: no G deformations, no variations of H-bond lengths and angles, and no out-of-plane buckling are observed. This finding indicates that the stacking does not affect the hydrogen-bonding mechanism that determines the structure of the isolated ribbon. Numerical data for the energetics and the electronic properties are reported in Table III. Both structures SR.A and SR.D are energetically favorable with respect to isolated G molecules and to stacked columns, but unfavorable with respect to isolated ribbons, as indicated by the smaller energy gain of the stacked ribbons. The configuration SR.D is more stable than SR.A by 280 meV: this trend is consistent with what was found previously for the columnar stacks, where a higher stability is achieved by lowering the π-π superposition in building the stacks. Although the configurations SR.A and SR.D that we have calculated here are not viable models for piling up ribbons into solid-state crystals, ordered phases for deoxyguanosine fibers are known to exist [12]. We note that other factors, such as the presence of the phosphate-sugar backbone, or the possibility of different relative positions of the neighboring ribbons, should be taken into account to achieve a full description, which is beyond the scope of this work.
In Figure 6 we show the calculated bandstructure for the SR.A and SR.D configurations.
In both cases, the HOMO-and LUMO-derived bands are dispersionless along the Γ − X direction, and the charge carriers are not mobile in this direction. This behavior is the same as in isolated ribbons: therefore, it is a further evidence that the base-base interactions due to the stacking do not change the features of the ribbons. In the Γ − A direction, parallel to the stacking direction, the HOMO-and LUMO-derived bands are dispersive, as in the columnar structures of subsection III C.
The bands for structure SR.A in the Γ − A direction are more dispersive than those for structure SR.D, due to the higher π-π superposition. As a consequence, the corresponding electron and hole effective masses along the Γ − A direction are smaller. The effective masses of the stacked-ribbon periodic structures are very similar to those of the stacked periodic columns (subsection III C): this is an indication that the hydrogen-bonding network of the individual ribbons does not affect the electronic properties in the perpendicular direction. The only exception to this trend is m_e, which becomes infinite for the stacked ribbons SR.D while it is finite for the equivalent stacked column GGv.D: this is due to the fact that in structure SR.D one half of the individual bases are on top of each other (see Figure 5c), while the other half do not participate in the π-π coupling. Summarizing the results discussed above, we wish to point out that the π-π coupling and H-bonding interaction mechanisms are not in competition. H-bonding has an electrostatic nature and accounts for in-plane stabilization: it is not compatible with a band transport mechanism, and it is not modified by base stacking. π-π coupling is weaker than H-bonding:
it is compatible with band transport along the stacking direction and it is not affected by the local details of the planar base sequence.
As outlined in the Introduction, the recent lively research activities in molecular electronics have pointed out peculiar guanine assemblies to be exploited as electrical conductors.
In fact, the experiments on a lipophilic derivative of 2'-deoxyguanosine [10] demonstrated conduction through the biomolecular material deposited in a nanogate between metal electrodes. The details of the conductivity depend on the experimental conditions, in particular on the gate width. Our results indicate that the observed conductivity, resembling that of a semiconductor in the intermediate gate length regime (a few hundred nm) and that of a diode junction in the short gate length regime (less than 100 nm), is compatible with a Bloch contribution to the transport mechanism. We have shown that delocalized orbitals through the base stack may be formed, provided that a relevant superposition of the hexagonal rings of the single guanines is maintained. Therefore, if the guanosine ribbons in the device gate align locally with their plane perpendicular to the direction connecting the electrodes, in a geometry similar to that proposed as SR.A, then extended Bloch-like states may be partially responsible for the observed conduction. This stacking orientation does not need to be complete in order for the proposed mechanism to work: it is sufficient that randomly aligned stacks form locally, and that the total resulting component in the direction connecting the electrodes is non vanishing. In the gate length regime ranging from 100 nm to 300 nm, semiconducting-like conductivity is revealed: we propose that, in such a condition, the ribbons are locally stacked in such a way as to form partially delocalized orbitals that give a global band-like contribution. In the gate length regime below 100 nm, a diode-like behavior is observed: this characteristic is assigned to an interaction of the π stack with the total dipole moment of the ribbons, as discussed elsewhere [10].

While our study is limited to a single type of nucleoside, it also allows a discussion of the electronic properties of DNA molecules, where G sequences in base stacks play an important role [15,37,38]. Depending on the energetics of the base sequence, and on the overall structural aspects of the system under investigation, the mechanisms proposed for DNA-mediated charge migration include single-step superexchange [39,40], multistep hole hopping [8,14], phonon-assisted polaron hopping [15], and band transport [6].
In DNA double strands, the relative arrangement of neighboring bases along the axis of the helix [31] is characterized by a rotation angle of around 36 degrees. Our results show that periodic G columns in such a configuration do not support the formation of extended molecular orbitals. Therefore, band transport would not be effective. Concerning the mechanism of conductivity in DNA molecules, our results support the conclusion that the contribution of band transport would be very small [14,41,42], in contrast to recently proposed interpretations of experimental data [6], unless structural distortions, possibly activated by temperature effects, induce rotations that support the formation of partially delocalized electron states. Although in real DNA molecules the backbone phosphate and sugar groups may affect the overall charge mobility, we believe that the presence of the outer mantle would not change our present conclusion that band transport is not supported by the native DNA stack: it may possibly contribute a hopping or ionic mechanism, but it is unlikely to contribute to the formation of extended electron orbitals. The importance of the base stack for electron transfer through DNA molecules has been highlighted in recent theoretical investigations [41].
V. CONCLUSIONS.
We have reported the results of ab-initio calculations for the structure, energetics, and electronic properties of several guanine assemblies. The geometry of isolated molecules and hydrogen-bonded dimers is well described with our technique, based on plane-wave pseudopotential density functional theory, as demonstrated by comparison to available theoretical and experimental data.
We found that hydrogen-bonding and π-π coupling are independent mechanisms that control the self-assemblying of guanine bases. Hydrogen-bonding is not responsible for band transport: In fact, we have shown that no band dispersion is present along a planar ribbon of H-bonded guanines. Instead, base stacking is accompanied by π-π interactions that, for the case of sufficiently large overlap between adjacent π orbitals (e.g., GGv.D configuration), induce energy dispersion and are consistent with charge mobility. Therefore, band transport may be partially responsible for charge mobility in nucleotide aggregates, in structures characterized by a large base-base superposition. This mechanism is likely complemented by hopping to connect (through space) different regions where such superposition is realized.
VI. ACKNOWLEDGEMENTS.
We acknowledge the allocation of computer resources from INFM Progetto Calcolo Parallelo. Fruitful discussions with M. Buongiorno Nardelli, R. Cingolani, G. Gottarelli, and R. Rinaldi are also sincerely acknowledged. We are grateful to F. Grepioni for communicating results prior to publication.
FIG. 1: (a) Planar view of the isolated guanine molecule, with indication of the chemical species. (b) Isosurface plot of the total charge density from the ab-initio calculation. (c) Isosurface plot of the HOMO. (d) Isosurface plot of the LUMO.
FIG. 2: Top views of different stacked dimer configurations. The structures are explicitly defined in the text. Gray (black) dots and lines are used to represent atoms and bonds in the upper (lower) plane. Columnar structures are obtained by replicating the dimer units along the direction perpendicular to the plane of the figure. The chemical species are read from Figure 1(a).
FIG. 3: Bandstructure for configurations GGv.A, GGv.D, and GGv.E, calculated along the symmetry line parallel to the stacking direction. Larger dots indicate HOMO and LUMO states. The single-particle energies reported in these plots are relative to the top of the highest valence band.
FIG. 4: Isosurface plot of the HOMO state at the point A for the columnar structure GGv.D.
FIG. 5: (a) Top view of the hydrogen-bonded GG dimer. (b) Top view of the hydrogen-bonded ribbon obtained by periodically replicating the dimer, representing also the structure SR.A for the stacked ribbons. (c) Top view for the stacked ribbon structure SR.D. Black and gray dots identify atoms lying in two different parallel planes.
FIG. 6: Bandstructure for configurations SR.A and SR.D, calculated along the symmetry lines parallel to the ribbon direction (Γ − X) and to the stacking direction (Γ − A). Large dots indicate HOMO and LUMO states.
TABLE I: Structural data for the isolated guanine molecule. The experimental structure taken as reference [31,32] is that for guanosine, with a sugar moiety instead of an H atom in position 9 (e.g., attached to atom N9 in Figure 1).
TABLE II: Energetical and electronic data of different periodic stacked configurations. E^1_form is calculated with respect to the N2, O2, H2 molecules, and C in the diamond phase. E^2_form is calculated with respect to isolated guanine molecules. E_gap is the energy difference between the HOMO and LUMO single-particle eigenstates, calculated by DFT-LDA [24]. m_e (m_h) is the electron (hole) effective mass, in units of the free electron mass m_0. The structures are labeled as in Figure 2; G is a single molecule.
                      G       GGv.A   GGv.B   GGv.C   GGv.D   GGv.E
E^1_form (eV/G)      -3.80   -3.69   -3.79   -3.93   -4.07   -3.89
E^2_form (meV/G)      0      +100     0      -130    -280    -100
E_gap (eV)            4.8     2.97    3.71    4.12    3.63    3.54
m_e (m_0)                     1.41                    2.80    ∞
m_h (m_0)                     1.04                    2.20    5.25
TABLE III: Energetical and electronic data of isolated and stacked ribbons, as defined in Table II.

                      ribbon   SR.A   SR.D
A. Aviram and M. Ratner, "Molecular Electronics: Science and Technology", Ann. N.Y. Acad. Sci. 852 (1998).
C. Joachim, J. K. Gimzewski, and A. Aviram, Nature 408, 541 (2000).
E. Braun, Y. Eichen, U. Sivan, and G. Ben-Yoseph, Nature 391, 775 (1998).
Y. Okahata, T. Kobayashi, K. Tanaka, and M. Shimomura, J. Am. Chem. Soc. 120, 6165 (1998).
H.-W. Fink and C. Schönenberger, Nature 398, 407 (1999).
D. Porath, A. Bezryadin, S. de Vries, and C. Dekker, Nature 403, 635 (2000).
E. Meggers, M. E. Michel-Beyerle, and B. Giese, J. Am. Chem. Soc. 120, 12950 (1998).
J. Jortner, M. Bixon, T. Langenbacher, and M. E. Michel-Beyerle, Proc. Nat. Acad. Sci. USA 95, 12759 (1998).
R. Rinaldi, E. Branca, R. Cingolani, S. Masiero, G. P. Spada, and G. Gottarelli, Appl. Phys. Lett. 78, 3541 (2001).
R. Rinaldi et al., arXiv: cond-mat/0006402 (2000);
R. Rinaldi et al., preprint (2001).
G. Gottarelli, S. Masiero, E. Mezzina, G. P. Spada, P. Mariani, and M. Recanatini, Helv. Chimica Acta 81, 2078 (1998).
G. Gottarelli, S. Masiero, E. Mezzina, S. Pieraccini, J. P. Rabe, P. Samorí, and G. P. Spada, Chem. Eur. J. A 6, 3242 (2000).
M. Bixon, B. Giese, S. Wessely, T. Langenbacher, M. E. Michel-Beyerle, and J. Jortner, Proc. Natl. Acad. Sci. USA 96, 11713 (1999).
G. B. Schuster, Acc. Chem. Res. 33, 253 (2000).
B. Giese, Acc. Chem. Res. 33, 631 (2000);
K. Nakatani, C. Dohno, and I. Saito, J. Am. Chem. Soc. 122, 5893 (2000).
J. Šponer, J. Leszczynski, and P. Hobza, J. Phys. Chem. 100, 1965 (1996).
J. Šponer, J. Leszczynski, and P. Hobza, J. Phys. Chem. 100, 5590 (1996).
P. Hobza and J. Šponer, Chem. Rev. 99, 3247 (1999), and references therein.
H. Sugiyama and I. Saito, J. Am. Chem. Soc. 118, 7063 (1996).
M. Machado, P. Ordejon, E. Artacho, D. Sanchez-Portal, and J. M. Soler, physics/9908022, preprint (2000).
J. Hutter, P. Carloni, and M. Parrinello, J. Am. Chem. Soc. 118, 8710 (1996).
M. Bockstedte, A. Kley, J. Neugebauer, and M. Scheffler, Comp. Phys. Comm. 107, 187 (1997).
R. M. Dreizler and E. K. U. Gross, Density Functional Theory. An Approach to the Quantum Many-Body Problem, Springer-Verlag, Berlin, 1990. This book also discusses the well-known underestimate of the bandgap energies that is characteristic of DFT in the different approximations that are commonly adopted.
A. D. Becke, Phys. Rev. A 38, 3098 (1988);
C. Lee, W. Yang, and R. C. Parr, Phys. Rev. B 37, 785 (1988).
N. Troullier and J. L. Martins, Phys. Rev. B 43, 1993 (1991);
L. Kleinman and D. M. Bylander, Phys. Rev. Lett. 48, 1425 (1982).
R. Car and M. Parrinello, Phys. Rev. Lett. 55, 2471 (1985).
H. J. Monkhorst and J. D. Pack, Phys. Rev. B 13, 5188 (1976).
For a review on the surfaces of inorganic materials, see, e.g., C. M. Bertoni, G. Roma, and R. Di Felice, Electronic Structure of Adsorbates on Surfaces. Adsorption on Semiconductors, in Handbook of Surface Science, Volume 2, edited by K. Horn and M. Scheffler, Elsevier (2000), and references therein.
E. Wimmer, "Density Functional Approaches for Molecular and Materials Design", ACS Symposium Series 629, p. 423, edited by B. B. Laird, R. B. Ross, and T. Ziegler, American Chemical Society, Washington DC (1996).
W. Saenger, Principles of Nucleic Acid Structure, Springer-Verlag, New York, 1984.
R. Taylor and O. Kennard, J. Mol. Struct. 78, 1 (1988).
The unit eV/G that we use here for the formation energies means eV per guanine molecule. The conversion factor is 1 eV/G = 23.25 Kcal/mole.
The fact that the dimer GGv.C (the only structure in which one molecule is reflected) is not energetically favorable with respect to other geometries indicates that the two faces of guanine are inequivalent for what concerns the stacking properties.
P. Gomes Da Costa and E. M. Conwell, Phys. Rev. B 48, 1993 (1993).
S. K. Pugh, D. J. Douglas, S. Brand, and R. A. Abram, Semicond. Sci. Technol. 14, 23 (1999).
M. W. Grinstaff, Angew. Chem. Int. Ed. 38, 3629 (1999).
P. F. Barbara and E. J. C. Olson, Adv. Chem. Phys. 107, 647 (1999).
F. D. Lewis, T. Wu, X. Liu, R. L. Letsinger, S. R. Greenfield, S. E. Miller, and M. R. Wasielewski, J. Am. Chem. Soc. 122, 2889 (2000).
This is the basic mechanism for electron/hole transfer through tunneling between localized molecular states; R. A. Marcus and N. Sutin, Biochim. Biophys. Acta 811, 265 (1985).
Y.-J. Ye, R.-S. Chen, A. Martinez, P. Otto, and J. Ladik, Solid State Commun. 112, 139 (1999);
Y.-J. Ye and Y. Jiang, Int. J. Quant. Chem. 78, 112 (2000).
P. J. de Pablo, F. Moreno-Herrero, J. G. Herrero, P. Herrero, A. M. Baró, P. Ordejon, J. M. Soler, and E. Artacho, Phys. Rev. Lett. 85, 499 (2000).
| [] |
[
"The CGM 2 Survey: Quenching and the Transformation of the Circumgalactic Medium",
"The CGM 2 Survey: Quenching and the Transformation of the Circumgalactic Medium"
] | [
"Kirill Tchernyshyov \nDepartment of Astronomy\nUniversity of Washington\n98195SeattleWAUSA\n",
"Jessica K Werk \nDepartment of Astronomy\nUniversity of Washington\n98195SeattleWAUSA\n",
"Matthew C Wilde \nDepartment of Astronomy\nUniversity of Washington\n98195SeattleWAUSA\n",
"J Xavier Prochaska \nUniversity of California\n1156 High Street95064Santa Cruz, Santa CruzCAUSA\n\nKavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan\n\nDivision of Science\nNational Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan\n\nSimons Pivot Fellow\n\n",
"Todd M Tripp \nDepartment of Astronomy\nUniversity of Massachusetts\n710 North Pleasant Street01003-9305AmherstMAUSA\n",
"Joseph N Burchett \nUniversity of California\n1156 High Street95064Santa Cruz, Santa CruzCAUSA\n\nDepartment of Astronomy\nNew Mexico State University\nMSC 4500PO Box 3000188001Las CrucesNMUSA\n",
"Rongmon Bordoloi \nDepartment of Physics\nNorth Carolina State University\n27695-8202RaleighNCUSA\n",
"J Christopher Howk \nDepartment of Physics and Astronomy\nUniversity of Notre Dame\nNotre Dame\n10 W. M. Keck Observatory, 65-1120 Mamalahoa Hwy46556, 96743KamuelaIN, HIUSA\n",
"Nicolas Lehner \nDepartment of Physics and Astronomy\nUniversity of Notre Dame\nNotre Dame\n10 W. M. Keck Observatory, 65-1120 Mamalahoa Hwy46556, 96743KamuelaIN, HIUSA\n",
"John M O'meara ",
"Nicolas Tejos \nInstituto de Física\nPontificia Universidad Católica de Valparaíso\nCasilla 4059ValparaísoChile\n",
"Jason Tumlinson \nSpace Telescope Science Institute\nBaltimoreMDUSA\n"
] | [
"Department of Astronomy\nUniversity of Washington\n98195SeattleWAUSA",
"Department of Astronomy\nUniversity of Washington\n98195SeattleWAUSA",
"Department of Astronomy\nUniversity of Washington\n98195SeattleWAUSA",
"University of California\n1156 High Street95064Santa Cruz, Santa CruzCAUSA",
"Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU\nThe University of Tokyo\n5-1-5 Kashiwanoha277-8583KashiwaJapan",
"Division of Science\nNational Astronomical Observatory of Japan\n2-21-1 Osawa181-8588MitakaTokyoJapan",
"Simons Pivot Fellow\n",
"Department of Astronomy\nUniversity of Massachusetts\n710 North Pleasant Street01003-9305AmherstMAUSA",
"University of California\n1156 High Street95064Santa Cruz, Santa CruzCAUSA",
"Department of Astronomy\nNew Mexico State University\nMSC 4500PO Box 3000188001Las CrucesNMUSA",
"Department of Physics\nNorth Carolina State University\n27695-8202RaleighNCUSA",
"Department of Physics and Astronomy\nUniversity of Notre Dame\nNotre Dame\n10 W. M. Keck Observatory, 65-1120 Mamalahoa Hwy46556, 96743KamuelaIN, HIUSA",
"Department of Physics and Astronomy\nUniversity of Notre Dame\nNotre Dame\n10 W. M. Keck Observatory, 65-1120 Mamalahoa Hwy46556, 96743KamuelaIN, HIUSA",
"Instituto de Física\nPontificia Universidad Católica de Valparaíso\nCasilla 4059ValparaísoChile",
"Space Telescope Science Institute\nBaltimoreMDUSA"
] | [] | This study addresses how the incidence rate of strong O VI absorbers in a galaxy's circumgalactic medium (CGM) depends on galaxy mass and, independently, on the amount of star formation in the galaxy. We use HST/COS absorption spectroscopy of quasars to measure O VI absorption within 400 projected kpc and 300 km s −1 of 52 M * ∼ 10 10 M galaxies. The galaxies have redshifts 0.12 < z < 0.6, stellar masses 10 10.1 < M * < 10 10.9 M , and spectroscopic classifications as star-forming or passive. We compare the incidence rates of high column density O VI absorption (N O VI ≥ 10 14.3 cm −2 ) near star-forming and passive galaxies in two narrow stellar mass ranges and, separately, in a matched halo mass range. In all three mass ranges, the O VI covering fraction within 150 kpc is higher around starforming galaxies than around passive galaxies with greater than 3σ-equivalent statistical significance. On average, the CGM of M * ∼ 10 10 M star-forming galaxies contains more O VI than the CGM of passive galaxies with the same mass. This difference is evidence for a CGM transformation that happens together with galaxy quenching and is not driven primarily by halo mass. | 10.3847/1538-4357/acc86a | [
"https://export.arxiv.org/pdf/2211.06436v1.pdf"
] | 253,510,372 | 2211.06436 | 7845d92a6eaf14ac9bd9404e5ec17dce4979d5bc |
The CGM 2 Survey: Quenching and the Transformation of the Circumgalactic Medium
Kirill Tchernyshyov
Department of Astronomy
University of Washington
Seattle, WA 98195, USA
Jessica K Werk
Department of Astronomy
University of Washington
Seattle, WA 98195, USA
Matthew C Wilde
Department of Astronomy
University of Washington
Seattle, WA 98195, USA
J Xavier Prochaska
University of California
1156 High Street, Santa Cruz, CA 95064, USA
Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU)
The University of Tokyo
5-1-5 Kashiwanoha, Kashiwa 277-8583, Japan
Division of Science
National Astronomical Observatory of Japan
2-21-1 Osawa, Mitaka, Tokyo 181-8588, Japan
Simons Pivot Fellow
Todd M Tripp
Department of Astronomy
University of Massachusetts
710 North Pleasant Street, Amherst, MA 01003-9305, USA
Joseph N Burchett
University of California
1156 High Street, Santa Cruz, CA 95064, USA
Department of Astronomy
New Mexico State University
MSC 4500, PO Box 30001, Las Cruces, NM 88001, USA
Rongmon Bordoloi
Department of Physics
North Carolina State University
Raleigh, NC 27695-8202, USA
J Christopher Howk
Department of Physics and Astronomy
University of Notre Dame
Notre Dame, IN 46556, USA
W. M. Keck Observatory, 65-1120 Mamalahoa Hwy, Kamuela, HI 96743, USA
Nicolas Lehner
Department of Physics and Astronomy
University of Notre Dame
Notre Dame, IN 46556, USA
W. M. Keck Observatory, 65-1120 Mamalahoa Hwy, Kamuela, HI 96743, USA
John M O'Meara
Nicolas Tejos
Instituto de Física
Pontificia Universidad Católica de Valparaíso
Casilla 4059, Valparaíso, Chile
Jason Tumlinson
Space Telescope Science Institute
Baltimore, MD, USA
The CGM 2 Survey: Quenching and the Transformation of the Circumgalactic Medium
Draft version November 15, 2022. Typeset using LaTeX twocolumn style in AASTeX631.
This study addresses how the incidence rate of strong O VI absorbers in a galaxy's circumgalactic medium (CGM) depends on galaxy mass and, independently, on the amount of star formation in the galaxy. We use HST/COS absorption spectroscopy of quasars to measure O VI absorption within 400 projected kpc and 300 km s −1 of 52 M * ∼ 10 10 M galaxies. The galaxies have redshifts 0.12 < z < 0.6, stellar masses 10 10.1 < M * < 10 10.9 M , and spectroscopic classifications as star-forming or passive. We compare the incidence rates of high column density O VI absorption (N O VI ≥ 10 14.3 cm −2 ) near star-forming and passive galaxies in two narrow stellar mass ranges and, separately, in a matched halo mass range. In all three mass ranges, the O VI covering fraction within 150 kpc is higher around star-forming galaxies than around passive galaxies with greater than 3σ-equivalent statistical significance. On average, the CGM of M * ∼ 10 10 M star-forming galaxies contains more O VI than the CGM of passive galaxies with the same mass. This difference is evidence for a CGM transformation that happens together with galaxy quenching and is not driven primarily by halo mass.
INTRODUCTION
The circumgalactic medium (CGM) is the extended halo of gas surrounding a galaxy and a key site in the baryon cycle that governs a galaxy's supply of fuel for star formation. Its physical state mediates the accretion of intergalactic gas and can affect the outcome of feedback processes (i.e., whether winds stall or escape). The physical state of the CGM is set by the interaction of many factors: radiative cooling, the gravitational potential of the host dark matter halo, energy and momentum injection by feedback, and a variety of other possibly important effects such as cosmic rays and magnetic fields. Different relative contributions of these factors can yield qualitatively different CGM structures. A classic example is that when considering gravity and radiative cooling, maintaining a hot, quasi-static CGM inside a stable virial shock requires a sufficiently high halo mass (e.g. Birnboim & Dekel 2003;Dekel & Birnboim 2006). In the absence of other factors, intergalactic gas can accrete onto the host galaxies of lower mass haloes without shocking (e.g. White & Rees 1978;Kereš et al. 2005). How the balance of these factors affects and is affected by host galaxy properties is a key question for understanding galaxy evolution.
One part of this question is the role of the CGM in how central galaxies in sub-group scale halos (M h ∼ 10 11 -10 12 M ) quench. We are still learning how CGM observables differ around star-forming and passive galaxies. When comparing the CGM between these galaxy classes, it is necessary to control for a number of potentially confounding variables. CGM properties and observables depend on distance (observationally, on the impact parameter, R ⊥ ) from a galaxy, galaxy mass (stellar or halo) and color, redshift, environment, and, for some tracers, angular location relative to a galaxy (e.g., Bergeron 1986;Bahcall et al. 1991;Chen et al. 2001;Stocke et al. 2006;Bordoloi et al. 2011;Werk et al. 2013;Johnson et al. 2015;Tejos et al. 2016;Burchett et al. 2016). At the very least, it is necessary to control for impact parameter and galaxy mass and to restrict comparisons to reasonably narrow redshift ranges. Comparisons between star-forming and passive galaxies have been done with the necessary controls for cool (T ∼ 10 4 K) metal enriched gas traced by Mg II (Bordoloi et al. 2011;Lan 2020;Anand et al. 2021) and for hot (T 10 6 K) gas traced by X-ray emission (Comparat et al. 2022;Chadayammuri et al. 2022). Star-forming galaxies have higher equivalent widths of cool gas tracers than passive galaxies, but results are still unclear for X-ray emission.
There is a paucity of constraints on gas in the intermediate temperature range, T ∼ 10 5 K. When metal-enriched, gas in this temperature range can be traced by the ion O VI. This gas may be near thermal equilibrium (Faerman et al. 2017;Voit 2019) or it may be part of cooling inflows or outflows (Heckman et al. 2002;Bordoloi et al. 2017;McQuinn & Werk 2018;Qu & Bregman 2018). Some O VI may instead be tracing cool gas (T ∼ 10 4 K) that is diffuse enough for the extragalactic background to photoionize oxygen to O VI (n H ∼ 10 −5 -10 −4 cm −3 , Tripp et al. 2008;Stern et al. 2018). Ne VIII measurements strongly suggest the presence of a warm-hot phase (Burchett et al. 2018).
Low redshift star-forming galaxies in general (i.e., without controlling for galaxy mass) have higher detection rates and typical O VI column densities than passive galaxies (Tumlinson et al. 2011;Johnson et al. 2015;Zahedy et al. 2019). The only mass-controlled comparison of O VI absorber statistics between star-forming and passive galaxies thus far is found in the supplementary materials of Tumlinson et al. (2011), but its results are inconclusive. Because the star-forming galaxies in all other comparisons have lower average masses than the passive galaxies, the interpretation of the difference in O VI absorber statistics is ambiguous: is the O VI dichotomy driven by differences in halo mass or is it the result of some distinct process associated with quenching?
Which interpretation is correct bears on the connection between feedback, the CGM, and galaxy quenching. If there is no difference in O VI around star-forming and passive galaxies at fixed mass, then the O VI dichotomy is caused by a change in CGM structure as a function of halo mass and the different halo mass distributions of star-forming and passive galaxies. This scenario has been suggested in works such as Oppenheimer et al. (2016), Fielding et al. (2017), and Sanchez et al. (2019). If instead there is a difference at fixed mass, then the O VI dichotomy is caused by some process other than the quasi-hydrostatic evolution of a growing halo. In one example of such a scenario, integrated active galactic nucleus (AGN) feedback heats and partially drives out the CGM (Mathews & Prochaska 2017;Suresh et al. 2017;Davies et al. 2020a;Oppenheimer et al. 2020;Terrazas et al. 2020;Zinger et al. 2020). Finding that the O VI dichotomy persists at fixed mass would not automatically mean that quenching disrupts the CGM or some change in the CGM causes quenching: both changes could be caused by a third process, such as interaction with large scale structure. However, it would mean a more direct connection between the state of a galaxy and its CGM than the scenario where star-forming and passive galaxies of the same mass can have the same CGM.
In this work, we compare O VI incidence rates around star-forming and passive galaxies with stellar masses (M * ) between 10 10.1 and 10 10.9 M controlling for impact parameter and stellar mass and, in a separate comparison, approximately controlling for halo mass (M h ). For the halo mass comparison, we account for the different clustering properties of star-forming and passive galaxies by using star-formation-dependent stellar-mass-to-halo-mass relations when estimating halo masses. We build our sample by combining galaxy-O VI absorber pairs from a compilation of new and literature measurements published in Tchernyshyov et al. (2022) (Paper I) with a small number of additional observations from the CUBS survey (Chen et al. 2020). The dataset is described in §2. The mass-controlled comparison between star-forming and passive galaxies is described in §3. We discuss the implications of our findings in §4 and summarize our results in §5. We assume a flat-universe ΛCDM cosmology with H 0 = 67.8 km s −1 Mpc −1 and Ω m = 0.308 (Planck Collaboration et al. 2016). Stellar masses are derived assuming a Chabrier (2003) initial mass function.
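The adopted cosmology fixes the conversion between angular offsets on the sky and projected physical distances. As a brief illustration (not necessarily the exact pipeline used for this sample; the redshift and angle below are placeholder values), the conversion can be written with astropy:

    import astropy.units as u
    from astropy.cosmology import FlatLambdaCDM

    cosmo = FlatLambdaCDM(H0=67.8, Om0=0.308)    # cosmology adopted in this work
    z, theta = 0.3, 30.0 * u.arcsec              # illustrative galaxy redshift and angular offset
    # projected physical distance ("impact parameter") corresponding to theta at redshift z
    R_perp = (cosmo.kpc_proper_per_arcmin(z) * theta).to(u.kpc)
    print(f"R_perp = {R_perp:.1f}")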
DATA
We analyze a mass and impact parameter matched set of galaxy-O VI absorber pairs. Most of the galaxy masses, impact parameters, and associated O VI column densities are taken from the galaxy-absorber data compiled in Paper I. This dataset combines measurements from the CGM 2 survey (Wilde et al. 2021) and the literature (Werk et al. 2013;Johnson et al. 2015;Keeney et al. 2018;Zahedy et al. 2019). We also include three galaxy-absorber pairs from the CUBS survey (Chen et al. 2020;Cooper et al. 2021;Boettcher et al. 2021). In cases where information on galaxy environment is available, we exclude galaxies from our analysis if they are within 1 Mpc and 600 km s −1 of a more massive galaxy.
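Where environment information is available, the isolation requirement above amounts to a simple filter. The sketch below assumes that, for each candidate galaxy, the projected distances, velocity offsets, and stellar masses of its neighbors are already tabulated; the argument names are illustrative rather than those of any specific catalog.

    import numpy as np

    def is_isolated(m_star, neighbor_m_star, neighbor_dist_mpc, neighbor_dv_kms):
        """True if no more massive galaxy lies within 1 Mpc and 600 km/s."""
        near = (np.asarray(neighbor_dist_mpc) < 1.0) & \
               (np.abs(np.asarray(neighbor_dv_kms)) < 600.0)
        return not np.any(np.asarray(neighbor_m_star)[near] > m_star)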
We do not use the galaxy classifications from Paper I, which were based on fitting galaxy templates to photometric measurements. Instead, we use spectroscopic classifications. For galaxies from Johnson et al. (2015) and the CUBS survey, we adopt the classifications given in those works. For the remainder of the sample, we make our own spectroscopic classifications.
We use three classification criteria: Hα emission equivalent width, Hβ emission equivalent width, and the 4000Å decrement D 4000 1 . We use multiple criteria because the galaxy spectra were taken with different instruments and cover different galaxy restframe wavelength ranges. We measure Hα and Hβ equivalent widths by fitting the galaxy spectra with superpositions of stellar templates and emission lines. Fitting is done with pPXF (Cappellari 2017) using the MILES stellar library (Falcón-Barroso et al. 2011). We measure D 4000 by integrating the galaxy spectra over the appropriate interval and taking the ratio of the results.
Not all quantities are measurable from all spectra. Of the ones that are measured from a spectrum, we adopt the classification according to the most preferred quantity, where the order of preference is Hα, then Hβ, then D 4000 . A galaxy is classified as star-forming if it has an Hα equivalent width greater than 6Å, a Hβ equivalent width greater than 2Å, or D 4000 less than 1.6 ( Kauffmann et al. 2003;Sánchez et al. 2014). The quantities usually agree on a galaxy's classification. Taken pairwise, Hα and Hβ, Hα and D 4000 , and Hβ and D 4000 agree in 42/42, 34/36, and 41/49 instances, respectively.
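The classification rule reduces to a few lines of logic. In the sketch below the equivalent widths (in Å) and D 4000 are assumed to have been measured beforehand (e.g., from the pPXF fits described above), with None marking an indicator that could not be measured from a given spectrum; this illustrates the stated thresholds and is not the survey code itself.

    def classify_galaxy(ew_halpha=None, ew_hbeta=None, d4000=None):
        """Return 'star-forming' or 'passive' from the most preferred available indicator."""
        if ew_halpha is not None:      # most preferred: Halpha emission equivalent width > 6 A
            return "star-forming" if ew_halpha > 6.0 else "passive"
        if ew_hbeta is not None:       # next: Hbeta emission equivalent width > 2 A
            return "star-forming" if ew_hbeta > 2.0 else "passive"
        if d4000 is not None:          # last: 4000 A break strength D4000 < 1.6
            return "star-forming" if d4000 < 1.6 else "passive"
        raise ValueError("no usable star formation indicator")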
The star formation indicator cuts split the galaxies at low impact parameters into mostly low and mostly high N O VI subsamples. Figure 1 shows N O VI as a function of the three indicators around galaxies with R ⊥ < 200 kpc. The two star-forming N O VI non-detections have larger impact parameters than all but one of the other star-forming galaxies shown. Apart from these two non-detections, galaxies on the star-forming side of the classification thresholds have higher N O VI than most passive galaxies.
We focus on galaxies with stellar masses between 10 10.1 and 10 10.9 M . Paper I included galaxies with masses ranging from 10 7.8 to 10 11.2 M . In that sample, the overwhelming majority of photometrically-classified passive galaxies have M * ≥ 10 10 M . For this work, we initially spectroscopically classified all of those galaxies with M * ≥ 10 10 M . The sample contains no passive galaxies with mass less than 10 10.1 M and no star-forming galaxies with mass greater than 10 10.9 M . To get a closer match in M * , we restrict the sample to bracket this mass range. The stellar masses and impact parameters of galaxies in this range are shown in Figure 2.
To enable a test of the hypothesis that differences between star-forming and passive galaxies are driven by halo mass, we estimate halo masses for the galaxies in the sample. We match the observed galaxies to central galaxies from the UNIVERSEMACHINE mock catalogs (Behroozi et al. 2019) conditioning on star-formation class, stellar mass, and redshift. We classify mock galaxies to be star-forming or passive using a 10 −11 yr −1 sSFR cut, use a tolerance of ±0.01 dex for matching on stellar mass, and take the nearest redshift available (in all cases, |δz| ≤ 0.1). This procedure yields between 800 and 2800 matching mock galaxies per observed galaxy, each with a halo mass. We summarize an observed galaxy's possible halo mass distribution by its 16th, 50th, and 84th percentiles. The median halo masses of the star-forming and passive samples overlap for M h ≈ 10 11.6 -10 12.2 M . With this halo mass estimation procedure, star-forming and passive galaxies in this halo mass range have M * = 10 10.2 -10 10.9 M and M * = 10 10.1 -10 10.7 M , respectively.
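The matching step can be sketched as follows. The mock-catalog field names below are assumptions (the actual UNIVERSEMACHINE column names may differ), and the sSFR cut is the 10 −11 yr −1 threshold quoted above.

    import numpy as np

    def halo_mass_percentiles(log_mstar, z, is_star_forming, mock, tol=0.01):
        """16th/50th/84th percentiles of log halo mass among matching mock central galaxies."""
        mock_is_sf = mock["ssfr"] > 1e-11                       # sSFR cut in yr^-1
        z_near = mock["z"][np.argmin(np.abs(mock["z"] - z))]    # nearest available redshift
        sel = ((mock_is_sf == is_star_forming)
               & (np.abs(mock["log_mstar"] - log_mstar) <= tol)  # +/- 0.01 dex in stellar mass
               & (mock["z"] == z_near))
        return np.percentile(mock["log_mhalo"][sel], [16, 50, 84])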
We compare star-forming and passive galaxies in three mass sub-samples: M * = 10 10.1 -10 10.5 M (lower stellar mass), M * = 10 10.5 -10 10.9 M (higher stellar mass), and M h ≈ 10 11.6 -10 12.2 M (matched halo mass). The matched halo mass sub-sample partially overlaps with each of the matched stellar mass sub-samples. The impact parameters and O VI column densities of galaxies in these sub-samples are shown in Figure 3. The right panel shows a subset of the galaxies in the left and middle panels. The distributions of measurements in all three panels are qualitatively similar. Star-forming galaxies have distinct inner and outer column density regimes, with uniformly high column densities in the inner region and a broad, generally lower distribution of column densities in the outer region. Passive galaxies have a broad distribution of column densities at all impact parameters, with a possible tendency towards higher column densities at low impact parameters. The inner star-forming galaxy column densities are greater than almost all of the inner passive galaxy column densities.
Figure 1. O VI column density as a function of three star formation indicators for galaxies with R ⊥ < 200 kpc. Data point colors and shapes denote whether a galaxy is classified as star-forming (blue diamonds) or passive (red and red-outlined circles). Data points that are outlined rather than filled are upper limits on O VI non-detections. Because not all galaxies have measurements of all three indicators, some points only appear in some of the panels. Vertical dashed gray lines denote thresholds used for galaxy classification and were taken from the literature. The three star formation indicators almost always agree on a galaxy's class. There is a clear change in the N O VI distribution from one side of a classification threshold to the other.
Figure 2. Stellar masses and impact parameters of galaxies in the sample. Blue diamonds represent star-forming galaxies, red circles represent passive galaxies. The distribution of the two classes in stellar mass is shown by the histogram above the scatter plot.
ANALYSIS AND RESULTS
We quantify the incidence of strong O VI absorbers around star-forming and passive galaxies by calculating covering fractions, the number of detections above a threshold ("hits") over the number of observations. We adopt a detection threshold of N O VI = 10 14.3 cm −2 , which is just above the least constraining upper limit in the sample. An upper limit that is greater than the threshold is ambiguous, meaning that using a lower threshold would require discarding part of the sample.
The number of hits in a sample of fixed size given a covering fraction has a binomial distribution. Assuming a beta distribution prior on the covering fraction, the posterior probability distribution for the covering fraction is itself a beta distribution. We use the Jeffreys prior, Beta(f C ; α = 1/2, β = 1/2), and use the 16th and 84th quantiles of the covering fraction posterior probability distribution as a 68% (1σ-equivalent) credible interval.
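Concretely, with k detections above the threshold out of n usable sightlines, the posterior is Beta(k + 1/2, n − k + 1/2). A minimal sketch (the counts in the example call are illustrative, not values from this paper):

    from scipy import stats

    def covering_fraction(k_hits, n_obs):
        """Median and 68% credible interval of the covering fraction under a Jeffreys prior."""
        posterior = stats.beta(k_hits + 0.5, n_obs - k_hits + 0.5)
        lo, med, hi = posterior.ppf([0.16, 0.5, 0.84])
        return med, (lo, hi)

    med, (lo, hi) = covering_fraction(9, 10)   # e.g., 9 hits out of 10 sightlines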
From visual inspection of Figure 3, there is an obvious need for covering fractions to depend on impact parameter. We include an impact parameter dependence by splitting each mass-selected sub-sample into inner and outer regions. Figure 4 shows inner and outer covering fractions for the three mass sub-samples and a dividing R ⊥ of 200 kpc. The inner covering fraction around star-forming galaxies is higher than that around passive galaxies. The outer covering fractions of the two galaxy classes are consistent, with overlapping 68% credible intervals.
Figure 3. O VI column density as a function of impact parameter for three mass-selected star-forming and passive galaxy comparison samples. Filled data points mark O VI detections while outlined data points mark upper limits on non-detections. Blue diamonds are star-forming galaxies and red circles are passive galaxies. The left and middle panels show galaxies selected to be in the same stellar mass range. The right panel shows galaxies selected to be in the same halo mass range, where halo masses are estimated using separate stellar mass-halo mass relations for the two galaxy classes. The halo mass selected sample overlaps with both stellar mass selected samples.
Figure 4. O VI covering fractions of star-forming (shown in blue) and passive (shown in red) galaxies in three mass-selected samples. Shaded regions are 68% credible intervals about the median for the covering fraction of absorbers with R ⊥ = 0-200 kpc and 200-400 kpc. Star-forming galaxies have higher inner covering fractions than passive galaxies, but similar outer covering fractions.
Figure 5. (top row) O VI covering fractions within a maximum impact parameter of star-forming (shown in blue) and passive (shown in red) galaxies in three mass-selected samples. In this row and the next, solid lines are medians and shaded regions are central 68% credible intervals. (middle row) Ratio of the passive galaxy covering fraction to the star-forming galaxy covering fraction. (bottom row) Probability that the passive galaxy covering fraction is greater than or equal to the star-forming galaxy covering fraction. One-sided 2σ- and 3σ-equivalent probabilities are indicated by horizontal lines. Maximum impact parameters where the probability can never be less than 3σ-equivalent because there are too few measurements are shaded in gray. The star-forming galaxy covering fraction declines for maximum impact parameters greater than approximately 150 kpc. Between the region with too few measurements and 150 kpc, the star-forming galaxy covering fraction is greater than the passive galaxy covering fraction with conclusive, greater than 3σ-equivalent statistical significance.
To quantitatively compare the inner covering fractions for the two galaxy classes, we calculate their ratio as a function of maximum inner impact parameter. The top row of Figure 5 shows inner covering fractions at all impact parameters spanned by each mass-selected sub-sample. These covering fractions are cumulative: measurements used to calculate the covering fraction at R ⊥,1 are also used to calculate the covering fraction at R ⊥,2 > R ⊥,1 . The star-forming galaxy covering fractions decline outside about 150 kpc. The passive galaxy covering fractions are similar at all impact parameters. The middle row shows the ratio f C,P /f C,SF . The probability distribution of the ratio is estimated by integrating the joint distribution of the two covering fractions, p(f C,P , f C,SF ), along lines of fixed ratio.
The bottom row of the figure shows the probability that this ratio is greater than one. We consider a probability less than about 0.00135 to be statistically significant evidence 2 that f C,SF is greater than f C,P . If there are too few measurements, the probability can never be below this threshold. For all maximum impact parameters that are less than about 150 kpc and where there are enough measurements, the probability is below the threshold. The decrease in significance for greater dividing impact parameters is likely physical and can be explained by the decline in the star-forming galaxy covering fraction. In all three mass-matched sub-samples, there are significantly more strong O VI absorbers near star-forming galaxies than near passive galaxies. These results show that there is a statistically significant dichotomy in O VI around star-forming and passive galaxies at fixed stellar mass and at fixed halo mass.
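The quoted probabilities can be reproduced to Monte Carlo precision by drawing from the two independent Beta posteriors instead of integrating the joint density directly; the counts in the example call are placeholders, not values from this paper.

    import numpy as np

    def prob_passive_ge_sf(k_p, n_p, k_sf, n_sf, ndraw=200_000, seed=0):
        """Monte Carlo estimate of P(f_C,P >= f_C,SF) from Jeffreys-prior Beta posteriors."""
        rng = np.random.default_rng(seed)
        f_p = rng.beta(k_p + 0.5, n_p - k_p + 0.5, ndraw)
        f_sf = rng.beta(k_sf + 0.5, n_sf - k_sf + 0.5, ndraw)
        return float(np.mean(f_p >= f_sf))     # compare to the ~0.00135 (3-sigma) threshold

    print(prob_passive_ge_sf(k_p=1, n_p=10, k_sf=9, n_sf=10))   # illustrative counts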
DISCUSSION
Comparison with previous work on O VI and star formation
The present study builds on Tumlinson et al. (2011), which found that there is a higher incidence rate of strong O VI absorption around star-forming galaxies than around passive galaxies at 0.1 < z < 0.4. The supplementary material of Tumlinson et al. (2011) contains a comparison of O VI detection rates between the galaxy classes restricted to a mass range where the star-forming and passive galaxies in their sample overlap, M * > 10 10.5 M . They find that with 2.6σ-level significance, the detection rate is higher for star-forming galaxies, a suggestive result that motivated our current work.
Like a number of other observational studies (Johnson et al. 2015;Zahedy et al. 2019), we confirm the general finding that there is an O VI dichotomy between star-forming and passive galaxies. We extend the result by establishing with high statistical significance that the dichotomy persists when controlling for stellar mass or halo mass. The difference in O VI incidence around star-forming and passive galaxies is evidence for a CGM transformation that is associated with how central galaxies in this mass range quench.
Galaxy quenching and CGM transformation
Recent theory and analyses of cosmological hydrodynamic simulations offer a candidate for the required CGM transformation: the heating and ejection of CGM gas by integrated black hole feedback (Mathews & Prochaska 2017;Suresh et al. 2017;Davies et al. 2020a;Terrazas et al. 2020;Zinger et al. 2020;Oppenheimer et al. 2020). In the EAGLE and Illustris-TNG simulations, galaxies quench when the integrated amount of black hole feedback of the appropriate type exceeds the binding energy of the CGM and ejects some fraction of it from the halo. The bulk of the gas remaining in the CGM after quenching is hotter and more diffuse than before, and as a result has a long cooling time. Nelson et al. (2018) find that in Illustris-TNG, the O VI mass also drops once a galaxy quenches. This galaxy quenching mechanism is consistent with our observations (and some observational studies of central galaxy quenching, e.g., Reines & Volonteri 2015;Piotrowska et al. 2022) because its onset is determined by black hole mass, rather than by galaxy stellar mass or halo mass. Other quenching mechanisms, such as those in which the interaction of a halo with large scale structure cuts off the supply of intergalactic gas to the central galaxy (Aragon Calvo et al. 2019;Winkel et al. 2021), would also be consistent.
High passive galaxy N O VI: geometric or temporal variation?
While most absorbers associated with passive galaxies have low O VI column densities, this is not universal. Of the sixteen passive galaxies in our sample with R ⊥ < 200 kpc, two have N O VI values typical of star-forming galaxies. Put another way, the distribution p(N O VI ) for sightlines near passive galaxies is shifted to lower N O VI compared to p(N O VI ) for star-forming galaxies, but does have a high N O VI tail.
The simplest explanations for these two cases are misclassification or interloping absorption. Both galaxies have secure classifications, with very low hydrogen emission equivalent widths and D 4000 values that are greater than the threshold of 1.6. Their D 4000 values are less than the median for the passive galaxy sample, so there is a possibility that these galaxies quenched relatively recently. Interloping absorption can be significant: Ho et al. (2021) find that in the EAGLE simulations, the column density of O VI absorbers within ±300 km s −1 of a galaxy can be twice that of absorbers with radius less than R vir . This could be the explanation for one of the galaxies, where we do not have good information on the environment. The other galaxy is found in a CGM 2 quasar field with spectroscopy that is complete to a g-band magnitude of 22 within 600 kpc, a depth sufficient to detect galaxies down to M * ∼ 10 9.5 M . No other galaxies are detected within 600 kpc of the sightline and 600 km s −1 of the passive galaxy in question, suggesting that the high O VI column density is not attributable to an interloping galaxy's CGM.
Possible physical explanations for the high N O VI tail include a patchy O VI distribution and a lag between quenching and CGM transformation. If O VI around a typical passive galaxy is found in localized, anisotropically distributed structures, the low sample-averaged O VI incidence rate would reflect a low per-galaxy O VI covering fraction. However, these structures would need to have total O VI column densities close to those of sightlines through a star-forming galaxy's CGM. A different explanation is that the CGM transformation as seen in O VI starts later or takes longer than quenching. In this interpretation, the passive galaxies with high N O VI column densities are ones that quenched recently.
Examples of both of these options, within-halo spatial and between-halo temporal variation, have been seen in cosmological hydrodynamic simulations. Nelson et al. (2018) find that in the TNG100 simulation, the O VI distribution is smooth and approximately isotropic around star-forming galaxies but patchy and obviously anisotropic around passive galaxies. Oppenheimer et al. (2020) analyze the evolution of black hole mass, star formation rate, and the CGM in the EAGLE simulation. They find that a galaxy quenches at approximately the same time as the most rapid phase of growth for the galaxy's black hole. The fraction of total halo mass found as gas in a galaxy's CGM starts to sharply decline at the same time, but the covering fractions of ion absorption, including O VI, do not drop until about 1 Gyr after quenching. Appleby et al. (2022) compare CGM properties between star-forming, passive, and intermediate ("green valley") galaxies from the SIMBA simulation. They define z = 0 galaxies to be in the green valley if their sSFR is between 10 −10.8 and 10 −11.8 yr −1 . Based on studies of the correlation between sSFR and D 4000 , galaxies with these sSFRs are more likely than not to have D 4000 > 1.6 (e.g. Bluck et al. 2020;Angthopo et al. 2020), meaning that we would classify most of them as being passive. Appleby et al. (2022) find that these galaxies have O VI covering fractions that are intermediate between those of galaxies with lower or higher sSFR, which would be another example of between-halo temporal variation.
While all three of these simulations show a decline in the incidence rate of O VI absorption as a result of integrated black hole feedback, they implement the feedback in different ways (Schaye et al. 2015;Weinberger et al. 2017;Davé et al. 2019). These implementation differences lead to differences in bulk properties such as CGM mass as a fraction of halo mass (e.g. Davies et al. 2020b;Tillman et al. 2022) and could mean that different mechanisms are responsible for the reduction in O VI incidence rates in each simulation. It is not obvious whether simulation resolution has a noticeable effect on the comparisons we make above because O VI covering fractions are a relatively coarse CGM property. Some works find that refinement beyond the resolution of these three simulations (m gas ∼ 10 6 -10 7 M ) does not substantially change coarse CGM properties (van de Voort et al. 2019; Peeples et al. 2019; Suresh et al. 2019). Conversely, Hummels et al. (2019) find that using finer resolution can reduce O VI column densities throughout the CGM and Lochhaas et al. (2021) find that resolution affects what fraction of energy in CGM gas is thermal rather than non-thermal.
The case of M31
Our observations consist of a single sightline per galaxy, and so cannot distinguish within-halo variation from between-halo variation. The CGM of the Milky Way's nearest massive neighbor, M31, has been measured along multiple sightlines (Lehner et al. 2015, 2020). We would classify M31 as passive and slightly more massive than the most massive galaxies we consider 3 . Lehner et al. (2020) measure O VI along eight sightlines with R ⊥ < 400 kpc, with four of these eight sightlines at R ⊥ < 250 kpc. 4/4 and 7/8 N O VI measurements within 250 and 400 kpc, respectively, are greater than 10 14.3 cm −2 . In our higher stellar mass passive galaxy sub-sample, the corresponding hit rates are 1/11 and 1/12. Using Barnard's exact test for 2-by-2 contingency tables (Barnard 1947), the probability of M31 having a covering fraction that is less than or equal to that of the M * = 10 10.5 to 10 10.9 M passive sample is less than 0.1%, a greater-than-3σ tension. We propose three (non-mutually exclusive) ways of resolving this tension: a coincidence of observations with a coherent spatial feature; O VI variation being a between-halo temporal effect; and the possibility of different modes of quenching. Lehner et al. (2020) find that ions other than O VI with R ⊥ > R vir tend to be detected in a particular direction relative to M31 and suggest that this absorption may be arising in an accreting IGM filament. The O VI sightlines are in this direction, and so could also be related to this hypothetical filament. Large structures between the Milky Way and M31 are found in simulations of Local Group analogues (Nuza et al. 2014;Damle et al. 2022) and a bridge of hot ∼ 2×10 6 K gas between the galaxies has been detected in X-ray emission (Qu et al. 2021).
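The 2-by-2 comparison quoted above can be checked with SciPy's implementation of Barnard's exact test. The table below uses the 4/4 (M31) and 1/11 (passive sub-sample) hit counts within 250 kpc; note that SciPy's default is a two-sided test, whereas the text quotes a one-sided probability, so the alternative argument would need to be set according to SciPy's sign convention.

    from scipy.stats import barnard_exact

    # rows: N_OVI >= 10^14.3 cm^-2 (hit) / below threshold; columns: M31 / passive sub-sample
    table = [[4, 1],
             [0, 10]]
    result = barnard_exact(table)              # two-sided by default
    print(result.pvalue)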
The second possibility is that M31 quenched recently and the CGM transformation is not yet apparent in UV-accessible CGM tracers. Williams et al. (2017) find that M31 had a burst of star formation about 2 Gyr ago and Lewis et al. (2015) find that the star formation rate has been mostly declining for the past 400 Myr. M31 could therefore still be in the period between quenching and a drop in metal absorber incidence.
Finally, it is possible that not all quenching is accompanied by a CGM transformation. Unlike many passive galaxies, M31 has spiral structure and a substantial gaseous disk (M H I = 5 × 10 9 M ; Carignan et al. 2006). The K S band luminosity of M31 is L K = 5 × 10 10 L (Huchra et al. 2012;Willmer 2018) and its M H I /L K is about 1/10. This H I-mass-to-luminosity ratio is greater than that of ≈ 97% of early type galaxies in the ATLAS 3D survey, but is typical for a low redshift massive spiral galaxy (Serra et al. 2012). Its high gas mass would suggest that M31 has not undergone the interstellar medium "blowout" associated with quenching in recent cosmological hydrodynamic simulations. Internal dynamics driven by structure in a galaxy can reduce the star formation efficiency of available gas ("bar quenching" ;Tubbs 1982;Khoperskov et al. 2018;Newnham et al. 2020). M31 is known to have a bar and so could be affected by this mechanism (Athanassoula & Beaton 2006;Dorman et al. 2015;Feng et al. 2022). The CGM would not be affected by the bar, allowing a star-forming galaxy level of O VI with a low sSFR.
Cool and hot gas in the CGM of quenched galaxies
If O VI around galaxies with M * = 10 10 -10 11 M is mostly collisionally ionized and found in ∼ 10 5 K gas, then the O VI dichotomy implies that quenching is associated with a drop in the amount of warm gas in a galaxy's CGM. There is evidence that passive galaxies in the same mass range also have less T ∼ 10 4 K gas than the corresponding star-forming galaxies. Cool gas is traced by ions such as Mg II. Analyses of the covering fractions of strong Mg II absorbers as a function of impact parameter, stellar mass, and star formation rate find that the covering fraction is several times lower near passive galaxies than near star-forming galaxies (Bordoloi et al. 2011;Lan 2020;Anand et al. 2021).
The situation is less clear for gas that is too hot to be traced by O VI (T 10 6 K). Emission from hot gas can be detected in X-rays. Comparat et al. (2022) and Chadayammuri et al. (2022) use eROSITA data to measure X-ray emission around stellar-mass-controlled star-forming and passive galaxy samples and find conflicting results. Comparat et al. (2022) find that passive galaxies are associated with more X-ray emission while Chadayammuri et al. (2022) find the opposite. Chadayammuri et al. (2022) argue that the discrepancy is driven by differences in how emission from groups and clusters is treated for galaxies other than the group and cluster centrals. Determining which interpretation of the data is correct will tell us whether the CGM transformation associated with quenching mostly heats gas or also drives some of it out of the halo. If warm gas is driven out as well as heated, the passive galaxy X-ray emission should be weaker. If the gas is heated but remains in the CGM, then the passive galaxy X-ray emission should be stronger.
CONCLUSION
We study the incidence rate of strong O VI absorption in the circumgalactic medium of z < 0.6 star-forming and passive central galaxies with stellar masses between 10 10.1 and 10 10.9 M . The galaxy-O VI-absorber pair sample is drawn from Tchernyshyov et al. (2022) and references therein and from the CUBS survey (Chen et al. 2020). The absorber impact parameters span ≈ 0 to 400 physical kpc. To separate differences due to galaxy stellar or halo mass from differences related to galaxy quenching, we compare the two galaxy classes in narrow stellar mass ranges, M * = 10 10.1 -10 10.5 M and M * = 10 10.5 -10 10.9 M , and, separately, in a relatively narrow estimated halo mass range, M h ≈ 10 11.6 -10 12.2 M .
For each combination of mass range and galaxy class, we further split the galaxies by impact parameter into an inner and outer region. We measure covering fractions of sightlines with N O VI ≥ 10 14.3 cm −2 for these two regions and calculate probabilities that the inner star-forming galaxy covering fractions are less than or equal to the inner passive galaxy covering fractions. We also explore some possible origin scenarios for the small number of strong O VI absorbers that are still found around passive galaxies.
Our observational results on the incidence rate of strong O VI absorbers around star-forming and passive galaxies in the mass ranges M * = 10 10.1 -10 10.5 M , M * = 10 10.5 -10 10.9 M , and M h ≈ 10 11.6 -10 12.2 M are as follows:
• Within 150 kpc, the covering fraction of strong O VI absorbers is approximately 0.9-1 for starforming galaxies and 0-0.2 for passive galaxies.
• In each mass range and within 150 kpc, the probability that star-forming galaxy covering fractions are less than or equal to passive galaxy covering fractions is less than 0.001. At greater than 3σ-equivalent statistical significance, the incidence rate of strong O VI absorption in the CGM is greater around star-forming galaxies than around passive galaxies with the same stellar or halo mass.
From these observational results, we reach the following conclusions:
• There is a dichotomy in the incidence rate of strong O VI absorption in the CGM of star-forming and passive M * ∼ 10 10 M galaxies at fixed stellar mass and at fixed halo mass.
• The quenching of a M * ∼ 10 10 M central galaxy at low redshift is, in most cases, accompanied by a transformation of the galaxy's CGM. This change is not driven by the mass of the galaxy's dark matter halo.
• There may be a delay between galaxy quenching and an observable change in the incidence rate of O VI. Alternatively (or jointly), there may be a less common mode of quenching in which the CGM is not substantially changed.
Software: astropy (Astropy Collaboration et al. 2013, 2018, 2022), linetools (Prochaska et al. 2017), matplotlib (Hunter 2007), numpy (Harris et al. 2020), pandas (Wes McKinney 2010)
KT thanks Alison Coil, Yakov Faerman, Chris McKee, Evan Schneider, Fakhri Zahedy, and Yong Zheng for useful discussions. KT, JKW, and MW acknowledge support for this work from NSF-AST 1812521 and NSF-CAREER 2044303. JKW acknowledges additional support as a Cottrell Scholar, from the Research Corporation for Science Advancement, grant ID number 26842. The authors gratefully acknowledge the UW Werk SQuAD (Student Quasar Absorption Diagnosticians), a team of more than 40 dedicated undergraduate researchers since 2017, who made significant contributions to the CGM 2 survey over the last five years and thus enabled some of the science presented in this work. This work benefited from lively Zoom discussions during KITP's "Fundamentals of Gaseous Halos" program, and thus was supported in part by the National Science Foundation under Grant No. NSF PHY-1748958.
We use the 100Å-wide interval definition of D 4000 , with intervals 3850-3950Å and 4000-4100Å (Balogh et al. 1999).
This is the probability of drawing a value that is lower than µ−3σ from a Gaussian distribution.
The nominal log 10 M * /M is 10.93, where we have applied a Kroupa to Chabrier initial mass function conversion factor of 1/1.06 (Zahid et al. 2012). The sSFR is 7 × 10 −12 yr −1 (Lewis et al. 2015;Williams et al. 2017).
Anand, A., Nelson, D., & Kauffmann, G. 2021, MNRAS, 504, 65, doi: 10.1093/mnras/stab871
Angthopo, J., Ferreras, I., & Silk, J. 2020, MNRAS, 495, 2720, doi: 10.1093/mnras/staa1276
Appleby, S., Davé, R., Sorini, D., Cui, W., & Christiansen, J. 2022, arXiv e-prints, arXiv:2207.04068. https://arxiv.org/abs/2207.04068
Aragon Calvo, M. A., Neyrinck, M. C., & Silk, J. 2019, The Open Journal of Astrophysics, 2, 7, doi: 10.21105/astro.1697.07881
Astropy Collaboration, Robitaille, T. P., Tollerud, E. J., et al. 2013, A&A, 558, A33, doi: 10.1051/0004-6361/201322068
Astropy Collaboration, Price-Whelan, A. M., Sipőcz, B. M., et al. 2018, AJ, 156, 123, doi: 10.3847/1538-3881/aabc4f
Astropy Collaboration, Price-Whelan, A. M., Lim, P. L., et al. 2022, ApJ, 935, 167, doi: 10.3847/1538-4357/ac7c74
Athanassoula, E., & Beaton, R. L. 2006, MNRAS, 370, 1499, doi: 10.1111/j.1365-2966.2006.10567.x
Bahcall, J. N., Jannuzi, B. T., Schneider, D. P., et al. 1991, ApJL, 377, L5, doi: 10.1086/186103
Balogh, M. L., Morris, S. L., Yee, H. K. C., Carlberg, R. G., & Ellingson, E. 1999, ApJ, 527, 54, doi: 10.1086/308056
Barnard, G. A. 1947, Biometrika, 34, 123, doi: 10.1093/biomet/34.1-2.123
Behroozi, P., Wechsler, R. H., Hearin, A. P., & Conroy, C. 2019, MNRAS, 488, 3143. https://arxiv.org/abs/1806.07893
Bergeron, J. 1986, A&A, 155, L8
Birnboim, Y., & Dekel, A. 2003, MNRAS, 345, 349, doi: 10.1046/j.1365-8711.2003.06955.x
Bluck, A. F. L., Maiolino, R., Sánchez, S. F., et al. 2020, MNRAS, 492, 96, doi: 10.1093/mnras/stz3264
Boettcher, E., Chen, H.-W., Zahedy, F. S., et al. 2021, ApJ, 913, 18, doi: 10.3847/1538-4357/abf0a0
Bordoloi, R., Wagner, A. Y., Heckman, T. M., & Norman, C. A. 2017, ApJ, 848, 122. https://arxiv.org/abs/1605.07187
Bordoloi, R., Lilly, S. J., Knobel, C., et al. 2011, ApJ, 743, 10. https://arxiv.org/abs/1106.0616
Burchett, J. N., Tripp, T. M., Wang, Q. D., et al. 2018, MNRAS, 475, 2067. https://arxiv.org/abs/1705.05892
Burchett, J. N., Tripp, T. M., Bordoloi, R., et al. 2016, ApJ, 832, 124. https://arxiv.org/abs/1512.00853
Cappellari, M. 2017, MNRAS, 466, 798, doi: 10.1093/mnras/stw3020
Carignan, C., Chemin, L., Huchtmeier, W. K., & Lockman, F. J. 2006, ApJL, 641, L109, doi: 10.1086/503869
Chabrier, G. 2003, PASP, 115, 763. https://arxiv.org/abs/astro-ph/0304382
Chadayammuri, U., Bogdán, Á., Oppenheimer, B. D., et al. 2022, ApJL, 936, L15, doi: 10.3847/2041-8213/ac8936
Chen, H.-W., Lanzetta, K. M., & Webb, J. K. 2001, ApJ, 556, 158, doi: 10.1086/321537
Chen, H.-W., Zahedy, F. S., Boettcher, E., et al. 2020, MNRAS, 497, 498, doi: 10.1093/mnras/staa1773
Comparat, J., Truong, N., Merloni, A., et al. 2022, A&A, 666, A156, doi: 10.1051/0004-6361/202243101
Cooper, T. J., Rudie, G. C., Chen, H.-W., et al. 2021, MNRAS, 508, 4359, doi: 10.1093/mnras/stab2869
Damle, M., Sparre, M., Richter, P., et al. 2022, MNRAS, 512, 3717, doi: 10.1093/mnras/stac663
Davé, R., Anglés-Alcázar, D., Narayanan, D., et al. 2019, MNRAS, 486, 2827, doi: 10.1093/mnras/stz937
Davies, J. J., Crain, R. A., Oppenheimer, B. D., & Schaye, J. 2020a, MNRAS, 491, 4462. https://arxiv.org/abs/1908.11380
Davies, J. J., Crain, R. A., & Pontzen, A. 2020b, arXiv e-prints, arXiv:2006.13221. https://arxiv.org/abs/2006.13221
Dekel, A., & Birnboim, Y. 2006, MNRAS, 368, 2, doi: 10.1111/j.1365-2966.2006.10145.x
Dorman, C. E., Guhathakurta, P., Seth, A. C., et al. 2015, ApJ, 803, 24, doi: 10.1088/0004-637X/803/1/24
Faerman, Y., Sternberg, A., & McKee, C. F. 2017, ApJ, 835, 52. https://arxiv.org/abs/1602.00689
Falcón-Barroso, J., Sánchez-Blázquez, P., Vazdekis, A., et al. 2011, A&A, 532, A95, doi: 10.1051/0004-6361/201116842
Feng, Z.-X., Li, Z., Shen, J., et al. 2022, ApJ, 933, 233, doi: 10.3847/1538-4357/ac7964
Fielding, D., Quataert, E., McCourt, M., & Thompson, T. A. 2017, MNRAS, 466, 3810. https://arxiv.org/abs/1606.06734
Harris, C. R., Millman, K. J., van der Walt, S. J., et al. 2020, Nature, 585, 357, doi: 10.1038/s41586-020-2649-2
Heckman, T. M., Norman, C. A., Strickland, D. K., & Sembach, K. R. 2002, ApJ, 577, 691. https://arxiv.org/abs/astro-ph/0205556
Ho, S. H., Martin, C. L., & Schaye, J. 2021, arXiv e-prints, arXiv:2110.01633. https://arxiv.org/abs/2110.01633
Huchra, J. P., Macri, L. M., Masters, K. L., et al. 2012, ApJS, 199, 26, doi: 10.1088/0067-0049/199/2/26
Hummels, C. B., Smith, B. D., Hopkins, P. F., et al. 2019, ApJ, 882, 156, doi: 10.3847/1538-4357/ab378f
Hunter, J. D. 2007, Computing in Science and Engineering, 9, 90, doi: 10.1109/MCSE.2007.55
Johnson, S. D., Chen, H.-W., & Mulchaey, J. S. 2015, MNRAS, 449, 3263. https://arxiv.org/abs/1503.04199
Kauffmann, G., Heckman, T. M., White, S. D. M., et al. 2003, MNRAS, 341, 33, doi: 10.1046/j.1365-8711.2003.06291.x
Keeney, B. A., Stocke, J. T., Pratt, C. T., et al. 2018, ApJS, 237, 11. https://arxiv.org/abs/1805.08767
Kereš, D., Katz, N., Weinberg, D. H., & Davé, R. 2005, MNRAS, 363, 2, doi: 10.1111/j.1365-2966.2005.09451.x
Khoperskov, S., Haywood, M., Di Matteo, P., Lehnert, M. D., & Combes, F. 2018, A&A, 609, A60, doi: 10.1051/0004-6361/201731211
Lan, T.-W. 2020, ApJ, 897, 97, doi: 10.3847/1538-4357/ab989a
Lehner, N., Howk, J. C., & Wakker, B. P. 2015, ApJ, 804, 79, doi: 10.1088/0004-637X/804/2/79
Lehner, N., Berek, S. C., Howk, J. C., et al. 2020, ApJ, 900, 9. https://arxiv.org/abs/2002.07818
Lewis, A. R., Dolphin, A. E., Dalcanton, J. J., et al. 2015, ApJ, 805, 183, doi: 10.1088/0004-637X/805/2/183
Lochhaas, C., Tumlinson, J., O'Shea, B. W., et al. 2021, ApJ, 922, 121, doi: 10.3847/1538-4357/ac2496
Mathews, W. G., & Prochaska, J. X. 2017, ApJL, 846, L24. https://arxiv.org/abs/1708.07140
McQuinn, M., & Werk, J. K. 2018, ApJ, 852, 33. https://arxiv.org/abs/1703.03422
Nelson, D., Kauffmann, G., Pillepich, A., et al. 2018, MNRAS, 477, 450. https://arxiv.org/abs/1712.00016
Newnham, L., Hess, K. M., Masters, K. L., et al. 2020, MNRAS, 492, 4697, doi: 10.1093/mnras/staa064
Nuza, S. E., Parisi, F., Scannapieco, C., et al. 2014, MNRAS, 441, 2593, doi: 10.1093/mnras/stu643
Oppenheimer, B. D., Crain, R. A., Schaye, J., et al. 2016, MNRAS, 460, 2157. https://arxiv.org/abs/1603.05984
Oppenheimer, B. D., Davies, J. J., Crain, R. A., et al. 2020, MNRAS, 491, 2939. https://arxiv.org/abs/1904.05904
Peeples, M. S., Corlies, L., Tumlinson, J., et al. 2019, ApJ, 873, 129, doi: 10.3847/1538-4357/ab0654
Piotrowska, J. M., Bluck, A. F. L., Maiolino, R., & Peng, Y. 2022, MNRAS, 512, 1052, doi: 10.1093/mnras/stab3673
Planck Collaboration, Ade, P. A. R., Aghanim, N., et al. 2016, A&A, 594, A13. https://arxiv.org/abs/1502.01589
Prochaska, J. X., Tejos, N., Crighton, N., et al. 2017, Linetools/Linetools: Third Minor Release, v0.3, Zenodo, doi: 10.5281/zenodo.1036773
Qu, Z., & Bregman, J. N. 2018, ApJ, 862, 23. https://arxiv.org/abs/1804.08784
Qu, Z., Huang, R., Bregman, J. N., & Li, J.-T. 2021, ApJ, 907, 14, doi: 10.3847/1538-4357/abc9b9
Reines, A. E., & Volonteri, M. 2015, ApJ, 813, 82, doi: 10.1088/0004-637X/813/2/82
Sanchez, N. N., Werk, J. K., Tremmel, M., et al. 2019, ApJ, 882, 8, doi: 10.3847/1538-4357/ab3045
Sánchez, S. F., Rosales-Ortega, F. F., Iglesias-Páramo, J., et al. 2014, A&A, 563, A49, doi: 10.1051/0004-6361/201322343
Schaye, J., Crain, R. A., Bower, R. G., et al. 2015, MNRAS, 446, 521, doi: 10.1093/mnras/stu2058
Serra, P., Oosterloo, T., Morganti, R., et al. 2012, MNRAS, 422, 1835, doi: 10.1111/j.1365-2966.2012.20219.x
Stern, J., Faucher-Giguère, C.-A., Hennawi, J. F., et al. 2018, ApJ, 865, 91. https://arxiv.org/abs/1803.05446
Stocke, J. T., Penton, S. V., Danforth, C. W., et al. 2006, ApJ, 641, 217, doi: 10.1086/500386
Suresh, J., Nelson, D., Genel, S., Rubin, K. H. R., & Hernquist, L. 2019, MNRAS, 483, 4040, doi: 10.1093/mnras/sty3402
Suresh, J., Rubin, K. H. R., Kannan, R., et al. 2017, MNRAS, 465, 2966. https://arxiv.org/abs/1511.00687
Tchernyshyov, K., Werk, J. K., Wilde, M. C., et al. 2022, ApJ, 927, 147, doi: 10.3847/1538-4357/ac450c
Tejos, N., Prochaska, J. X., Crighton, N. H. M., et al. 2016, MNRAS, 455, 2662, doi: 10.1093/mnras/stv2376
Terrazas, B. A., Bell, E. F., Pillepich, A., et al. 2020, MNRAS, 493, 1888. https://arxiv.org/abs/1906.02747
Tillman, M. T., Burkhart, B., Tonnesen, S., et al. 2022, arXiv e-prints, arXiv:2210.02467. https://arxiv.org/abs/2210.02467
Tripp, T. M., Sembach, K. R., Bowen, D. V., et al. 2008, ApJS, 177, 39. https://arxiv.org/abs/0706.1214
Tubbs, A. D. 1982, ApJ, 255, 458, doi: 10.1086/159846
Tumlinson, J., Thom, C., Werk, J. K., et al. 2011, Science, 334, 948. https://arxiv.org/abs/1111.3980
van de Voort, F., Springel, V., Mandelker, N., van den Bosch, F. C., & Pakmor, R. 2019, MNRAS, 482, L85, doi: 10.1093/mnrasl/sly190
Voit, G. M. 2019, ApJ, 880, 139. https://arxiv.org/abs/1811.04976
Weinberger, R., Springel, V., Hernquist, L., et al. 2017, MNRAS, 465, 3291, doi: 10.1093/mnras/stw2944
Werk, J. K., Prochaska, J. X., Thom, C., et al. 2013, ApJS, 204, 17. https://arxiv.org/abs/1212.0558
Wes McKinney. 2010, in Proceedings of the 9th Python in Science Conference, ed. Stéfan van der Walt & Jarrod Millman, 56-61, doi: 10.25080/Majora-92bf1922-00a
White, S. D. M., & Rees, M. J. 1978, MNRAS, 183, 341, doi: 10.1093/mnras/183.3.341
Wilde, M. C., Werk, J. K., Burchett, J. N., et al. 2021, ApJ, 912, 9. https://arxiv.org/abs/2008.08092
Williams, B. F., Dolphin, A. E., Dalcanton, J. J., et al. 2017, ApJ, 846, 145, doi: 10.3847/1538-4357/aa862a
Willmer, C. N. A. 2018, ApJS, 236, 47, doi: 10.3847/1538-4365/aabfdf
Winkel, N., Pasquali, A., Kraljic, K., et al. 2021, MNRAS, 505, 4920, doi: 10.1093/mnras/stab1562
Zahedy, F. S., Chen, H.-W., Johnson, S. D., et al. 2019, MNRAS, 484, 2257. https://arxiv.org/abs/1809.05115
Zahid, H. J., Dima, G. I., Kewley, L. J., Erb, D. K., & Davé, R. 2012, ApJ, 757, 54, doi: 10.1088/0004-637X/757/1/54
Zinger, E., Pillepich, A., Nelson, D., et al. 2020, MNRAS, 499, 768. https://arxiv.org/abs/2004.06132
| [] |
[
"REACTION-DIFFUSION EQUATIONS WITH TRANSPORT NOISE AND CRITICAL SUPERLINEAR DIFFUSION: LOCAL WELL-POSEDNESS AND POSITIVITY",
"REACTION-DIFFUSION EQUATIONS WITH TRANSPORT NOISE AND CRITICAL SUPERLINEAR DIFFUSION: LOCAL WELL-POSEDNESS AND POSITIVITY"
] | [
"Antonio Agresti ",
"Mark Veraar "
] | [] | []
In this paper we consider a class of stochastic reaction-diffusion equations. We provide local well-posedness, regularity, blow-up criteria and positivity of solutions. The key novelties of this work are related to the use of transport noise, critical spaces and the proof of higher-order regularity of solutions, even in the case of non-smooth initial data. Crucial tools are L^p(L^q)-theory, maximal regularity estimates and sharp blow-up criteria. We view the results of this paper as a general toolbox for establishing global well-posedness for a large class of reaction-diffusion systems of practical interest, of which many are completely open. In our follow-up work [AV23b], the results of this paper are applied in the specific cases of the Lotka-Volterra equations and the Brusselator model. | 10.2139/ssrn.4296026 | [
"https://export.arxiv.org/pdf/2209.14759v2.pdf"
] | 252,596,256 | 2209.14759 | 3c51441dc724fe706ebf8a071b5e859630dcded1 |
REACTION-DIFFUSION EQUATIONS WITH TRANSPORT NOISE AND CRITICAL SUPERLINEAR DIFFUSION: LOCAL WELL-POSEDNESS AND POSITIVITY
Antonio Agresti
Mark Veraar
REACTION-DIFFUSION EQUATIONS WITH TRANSPORT NOISE AND CRITICAL SUPERLINEAR DIFFUSION: LOCAL WELL-POSEDNESS AND POSITIVITY
In this paper we consider a class of stochastic reaction-diffusion equations. We provide local well-posedness, regularity, blow-up criteria and positivity of solutions. The key novelties of this work are related to the use of transport noise, critical spaces and the proof of higher-order regularity of solutions, even in the case of non-smooth initial data. Crucial tools are L^p(L^q)-theory, maximal regularity estimates and sharp blow-up criteria. We view the results of this paper as a general toolbox for establishing global well-posedness for a large class of reaction-diffusion systems of practical interest, of which many are completely open. In our follow-up work [AV23b], the results of this paper are applied in the specific cases of the Lotka-Volterra equations and the Brusselator model.
Introduction
In this paper we investigate local/global existence, uniqueness, (sharp) blow-up criteria, positivity and regularity of solutions to the following stochastic reaction-diffusion equations with transport noise:
(1.1)  du_i − ν_i ∆u_i dt = [div(F_i(·, u)) + f_i(·, u)] dt + ∑_{n≥1} [(b_{n,i} · ∇)u_i + g_{n,i}(·, u)] dw^n_t,   u_i(0) = u_{i,0},
where i ∈ {1, . . . , ℓ} for some integer ℓ ≥ 1. For simplicity we restrict ourselves to the d-dimensional torus T^d, but we expect that many results can be extended to R^d, and even to bounded smooth domains with suitable boundary conditions. The unknown process is denoted by u = (u_i)_{i=1}^ℓ : [0, ∞) × Ω × T^d → R^ℓ, (w^n)_{n≥1} is a sequence of standard independent Brownian motions on a filtered probability space, F_i, f_i, g_{n,i} are given nonlinearities, and
(b_{n,i} · ∇)u_i := ∑_{j=1}^d b^j_{n,i} ∂_j u_i.
The nonlinearities F i , f i , g n,i are assumed to have polynomial growth. Moreover, the leading operator ν i ∆ can be replaced by div(a i · ∇). This is included in the main results of this work, as this is useful for reformulating (1.1) with Stratonovich noise instead, see (1.8) below. Lower order terms in the differential operators can be allowed as well, and they can be included in the nonlinearities f, F and g.
1.1. Deterministic setting. Systems of PDEs of the form (1.1) with b = 0 and g = 0, are usually called reaction-diffusion equations. Such equations can be used to model a wide class of physical phenomena ranging from chemical reactions to predatory-prey systems, as well as phase separation processes. Further examples can be found in the standard reference [Rot84] and in the recent survey [Pie10]. In the deterministic case there are many global well-posedness results available (see [CGV19, FMT20, Kan90, Pie10, PSY19, Rot84] and references therein).
In particular, many important systems with rather weak forms of coercivity are included, but some structure is essential. As a matter of fact, existence of global smooth solutions to (1.1) (or, more generally, global well-posedness) under polynomial growth and smoothness assumptions, positivity, and mass preservation is known to be false [Pie10, Section 4]. This shows that even in the deterministic setting, the problem of global well-posedness is rather delicate. Under additional entropy structures, existence of global renormalized solutions has been established in [Fis15]. Such solutions have rather poor regularity in time and space. Moreover, the uniqueness of such solutions is still open; see also [Fis17] for the weaker notion of weak-strong uniqueness.
Next we discuss a well-known example of reaction-diffusion equations arising in the study of chemical reactions. For an integer ℓ ≥ 1 and two collections of nonnegative integers (q_i)_{i=1}^ℓ, (p_i)_{i=1}^ℓ (note that either q_i = 0 or p_i = 0 for some i is allowed), consider the (reversible) chemical reaction:
(1.2)  q_1 U_1 + · · · + q_ℓ U_ℓ  ⇌  p_1 U_1 + · · · + p_ℓ U_ℓ   (forward rate R_+, backward rate R_−),
where R_± are the reaction rates and (U_i)_{i=1}^ℓ are chemical substances. Let u_i be the concentration of the substance U_i, and let ν_i > 0 be its diffusivity. The law of mass action postulates that the concentration u_i satisfies the deterministic version of (1.1) with
(1.3)  f_i(·, u) = (p_i − q_i) ( R_+ ∏_{j=1}^ℓ u_j^{q_j} − R_− ∏_{j=1}^ℓ u_j^{p_j} ),   i ∈ {1, . . . , ℓ}.
The results of the current paper apply to (1.1) with f_i as in (1.3). From a modeling point of view, especially in the context of chemical reactions, it is natural to ask for mass conservation along the flow, i.e. that ∫_{T^d} u(t, x) dx is constant in time. For the special case of (1.3), mass conservation turns out to be equivalent to the existence of strictly positive constants (α_i)_{i=1}^ℓ such that ∑_{i=1}^ℓ α_i (q_i − p_i) = 0.
Weaker notions of mass conservation are also employed, see property (M) in [Pie10]. Although it will not be needed in most of our results below, mass conservation will be used in Theorem 5.1 to provide a simple proof of global existence of (sufficiently) smooth solutions to (1.1) for small initial data.
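To make the algebra of (1.3) and the mass-conservation condition concrete, here is a minimal numerical sketch (not taken from the paper); the toy reaction 2U_1 ⇌ U_2, the rates and the vector alpha are hypothetical choices used only for illustration. For this choice, alpha · f(u) vanishes identically, which is the pointwise counterpart of the conservation of ∑_i α_i ∫_{T^d} u_i dx under the deterministic flow.

```python
# Minimal numerical sketch (not from the paper): the law-of-mass-action
# nonlinearity f_i from (1.3) and the mass-conservation condition
# sum_i alpha_i (q_i - p_i) = 0, for the hypothetical toy reaction 2U_1 <-> U_2.
import numpy as np

q_exp = np.array([2, 0])      # stoichiometric coefficients q_i (left-hand side)
p_exp = np.array([0, 1])      # stoichiometric coefficients p_i (right-hand side)
R_plus, R_minus = 1.0, 0.5    # reaction rates R_+ and R_-

def f(u):
    """Reaction term (1.3): f_i(u) = (p_i - q_i) * (R_+ prod_j u_j^{q_j} - R_- prod_j u_j^{p_j})."""
    u = np.asarray(u, dtype=float)
    forward = R_plus * np.prod(u ** q_exp)
    backward = R_minus * np.prod(u ** p_exp)
    return (p_exp - q_exp) * (forward - backward)

# Mass conservation: alpha_i > 0 with sum_i alpha_i (q_i - p_i) = 0.
# For 2U_1 <-> U_2 one can take alpha = (1, 2).
alpha = np.array([1.0, 2.0])
print("f(u) at u=(1.0, 0.3):", f([1.0, 0.3]))
print("sum_i alpha_i (q_i - p_i) =", float(alpha @ (q_exp - p_exp)))  # 0 -> mass is conserved
print("alpha . f(u) =", float(alpha @ f([1.0, 0.3])))                 # vanishes pointwise as well
```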
1.2. Stochastic setting. A lot of work has already been done on stochastic reaction-diffusion equations (see [Cer03, Cer05, CR05, DKZ19, Fla91, KN20, KvN12, Mar18, Sal21, Sal22a, Sal22b] and references therein). Unfortunately, little is known for weakly dissipative systems in which the equations are coupled through nonlinear terms such as ±u_i^{p_i} u_j^{q_j}. These types of nonlinearities are very common since they model reaction terms (e.g. Lotka-Volterra equations or chemical reactions). In the follow-up paper [AV23b] we prove global well-posedness for some of the important concrete systems, including the above mentioned Lotka-Volterra equations and the Brusselator model. For these applications we refer to [AV23b, Section 5]. Although with our methods we can include a general class of equations, the full setting of [FMT20] and [Pie10, Section 3] seems still completely out of reach.
The results of the current paper can be seen as a first natural step in the study of global wellposedness of weakly dissipative systems. Some general global well-posedness results will already be included in case of small initial data. Moreover, we obtain several new regularization effects, sharp blow-up criteria, and positivity results. Each of the above turns out to be crucial for proving the global well-posedness results in our follow-up work [AV23b]. Indeed, higher order regularity is needed to apply stochastic calculus pointwise in space. This is essential in checking energy estimates and mass conservation type conditions, which in turn can often be combined with blowup criteria to show global existence of smooth solutions. Positivity often plays a central role in these calculations as it provides a uniform lower bound and important information on the sign of the nonlinear terms.
Let us briefly explain the main difference of our setting from the existing literature on stochastic reaction-diffusion equations. To the best of our knowledge, only few papers consider superlinear diffusion (e.g. ±u_i^{p_i} u_j^{q_j}). Also very few papers allow transport noise (i.e. (b_{n,i} · ∇)u_i dw^n), which seems motivated by small scale turbulence (see Subsection 1.3 below). Furthermore, there is very little L^p-theory for stochastic reaction-diffusion equations available. L^p-theory with p > 2 is often essential when dealing with either (rough) Kraichnan noise, or nonlinearities of higher order growth and dimensions d ≥ 2. In particular, to establish global well-posedness, L^p-energy estimates are typically needed for large p when working with d ≥ 2. For applications to concrete systems we refer to [AV23b]. Moreover, in our work, weighted L^p(L^q)-theory turns out to be key in proving results on higher order regularity of solutions.
1.3. A derivation via separation of scales. In this subsection we explain where the transport term in (1.1) comes from and where it has been considered before. In fluid dynamics, transport noise is typically used to model turbulent phenomena (see e.g. [BCF92,Fla08,Fla11,FL21,MR01,MR04]), and it is usually referred to as Kraichnan's noise due to his seminal works on turbulent flows [Kra68,Kra94].
In engineering applications, highly turbulent flows are often employed to improve the development and efficiency of chemical reactions (compared to reactions occurring in more "regular" flows), see e.g. [FB18, MS06, VC12, ZCB20]. We refer to [GJ63, KD86, LW76, MC98, SM91] for further applications and discussions concerning turbulent flows and chemical reactions. Here, to motivate the transport noise we follow the heuristic argument in [FL21, Subsection 1.2]. We refer to [DP22, FP22] for (different) situations where the argument below can be made rigorous. Before going further, let us note that our setting only requires some Hölder smoothness of b^k_{n,i}, and thus we are able to include the Kraichnan noise with arbitrarily small correlation parameter (see [AV21, MR04] and [GY21, Section 5]). In particular this includes the case where b^k_{n,i} reproduces the Kolmogorov spectrum of turbulence according to [MK99, pp. 427 and 436].
Suppose that (1.1) models a chemical reaction taking place in a fluid, where the u_i's are the concentrations of the reactants. As commented below (1.2), from a deterministic viewpoint, one can consider the model:
(1.4)  ∂_t u_i = ν_i ∆u_i + (v_i · ∇)u_i + div(F_i(·, u)) + f_i(·, u),
where v_i models the transport effects of the fluid, f_i is as in (1.3) and F_i is a given nonlinearity (of polynomial growth) modeling conservative source terms. In this situation, as in [FL21, Subsection 1.2], we can assume that v_i splits into a "Small" and "Large" part:
(1.5)  v_i = v^{(L)}_i + v^{(S)}_i.
In a turbulent regime, the small-scale component v^{(S)}_i fluctuates much faster than v^{(L)}_i. Therefore, in some sense, v^{(S)}_i models turbulent phenomena (for instance thermal fluctuations if the reaction is related to combustion processes). In practice, there is no efficient way to model the small scale component. Hence, the latter is often modeled as an approximation of white noise:
(1.6)  v^{(S)}_i = ∑_{n≥1} b_{n,i} ẇ^n_t,
where (w^n)_{n≥1} is a sequence of independent standard Brownian motions. In the case of incompressible flows, one also has the divergence-free condition:
(1.7)  div b_{n,i} := ∑_{j=1}^d ∂_j b^j_{n,i} = 0,   for all n ≥ 1.
Using the ansatz (1.6) for the small scale behavior of v_i = v^{(L)}_i + v^{(S)}_i in (1.4) one obtains (1.1). The same heuristic argument can also be used in other contexts. For instance, in the case of the famous Lotka-Volterra equations (see e.g. [AV23b, Subsection 5.2]), which model the dynamics of predator-prey systems, the flow v_i may model migratory phenomena of the i-th species. In particular, in (1.5) the term v^{(L)}_i takes into account the large scale movements of the i-th species, while v^{(S)}_i models small fluctuations of the movements due to local effects (e.g. unusual dryness of the fields, adverse weather events and local changes of the territory).
It is worth mentioning two other properties of transport noise. Assume that u_i satisfies (1.1) with g_{n,i} ≡ 0. Firstly, if (1.7) holds, then at least formally the total mass ∑_{i=1}^ℓ ∫_{T^d} u_i(t, x) dx is controlled pathwise along the flow provided
∑_{i=1}^ℓ f_i(·, u) ≲ 1 + ∑_{i=1}^ℓ u_i,
which is typical in the deterministic theory, see (1.2) and [Pie10, (M)]. To see this it is enough to integrate the first equation of (1.1) over T^d. Secondly, if the positivity preserving condition of the deterministic theory holds (see e.g. [Pie10, (P)]), then the flow induced by (1.1) is also positivity preserving (see Theorem 2.13). Here we do not need (1.7).
From a mathematical point of view, there is no reason to prefer the Itô formulation over the Stratonovich one in (1.1). In our paper, we are able to deal with both situations, as we will consider (1.1) with ν_i ∆u_i replaced by div(a_i · ∇u_i) + (r_i · ∇)u_i. To see this it is enough to recall that (at least formally, in case g_{n,i} ≡ 0)
(1.8)  (b_{n,i} · ∇)u_i ∘ dw^n_t = [div(a_{b,i} · ∇u_i) + (r_{b,i} · ∇)u_i] dt + (b_{n,i} · ∇)u_i dw^n_t,
where a_{b,i} := ( (1/2) ∑_{n≥1} b^j_{n,i} b^k_{n,i} )_{j,k=1}^d and r_{b,i} := ( −(1/2) ∑_{n≥1} (div b_{n,i}) b^j_{n,i} )_{j=1}^d.
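As a sanity check of the Itô correction in (1.8), the following minimal sketch (not from the paper) computes a_b and r_b at a single point from a finite, randomly generated family of coefficients b_1, . . . , b_N; all values are hypothetical and the truncation of the sum over n ≥ 1 is purely illustrative.

```python
# Minimal illustrative sketch (not from the paper): the Ito correction in (1.8)
# evaluated at a single point x for a finite family b_1, ..., b_N of R^d-valued
# coefficients. Here b[n, j] = b^j_n(x) and div_b[n] = (div b_n)(x) are assumed given.
import numpy as np

rng = np.random.default_rng(0)
N, d = 4, 3
b = rng.standard_normal((N, d))       # hypothetical values of b^j_n(x)
div_b = rng.standard_normal(N)        # (div b_n)(x); zero for incompressible noise (1.7)

# a_b = (1/2) sum_n b^j_n b^k_n (a d x d matrix), r_b = -(1/2) sum_n (div b_n) b_n.
a_b = 0.5 * b.T @ b
r_b = -0.5 * div_b @ b

print("a_b =\n", a_b)
print("r_b =", r_b)
# Under the divergence-free condition (1.7) the drift correction r_b vanishes:
print("r_b with div b_n = 0:", -0.5 * np.zeros(N) @ b)
```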
In the context of SPDEs, transport noise has attracted much attention in the last decades. Indeed, under structural assumptions on the b_{n,i}'s one can show that the solution u to a certain SPDE has better properties than its deterministic counterpart. This phenomenon is usually called regularization by noise, see e.g. [FGP10, Fla11] and the references therein for further details. Let us mention two situations where it occurs:
• Delayed blow-up [FGL21a,FL21].
• Dissipation enhancement and/or stabilization [FGL21b, GY21, Luo21].
To the best of our knowledge, none of the above phenomena have been shown in the general context of reaction-diffusion equations. In the follow-up paper [Agr22] first steps are being made by the first author.
The homogeneous versions of such spaces are invariant under the map u_0 → u_{0,λ}:
∥u_{0,λ}∥_{Ḃ^{d/q − 2/(h−1)}_{q,p}(R^d)} ≂ ∥u_0∥_{Ḃ^{d/q − 2/(h−1)}_{q,p}(R^d)},   ∥u_{0,λ}∥_{L^{d(h−1)/2}(R^d)} ≂ ∥u_0∥_{L^{d(h−1)/2}(R^d)}.
The Sobolev index of the spaces B^{d/q − 2/(h−1)}_{q,p} and L^{d(h−1)/2} is −2/(h−1), and it is therefore independent of d and q (and of p in the case of Besov spaces). This number will appear several times in the paper, and it distinguishes the "critical" from the "non-critical" situation.
Although the above choice seems very restrictive, it can be thought of as a "toy example" for the case of F, f, g with polynomial growth of order h > 1, i.e., as u → ∞,
|F(u)| + ∥(g_n(u))_{n≥1}∥_{ℓ^2} ≲ |u|^{(h+1)/2}   and   |f(u)| ≲ |u|^h.
1.5. Overview. Below we give an overview of the results of the current paper. In the manuscript we consider a (slightly) generalized version of (1.1), namely (2.1) below.
• Local well-posedness in critical spaces of (2.1) -see Theorem 2.7 and Proposition 2.9.
• Instantaneous regularization of solutions to (2.1) -see Theorems 2.7 and 4.2.
• (Sharp) blow-up criteria for (2.1) -see Theorem 2.10.
• Positivity of solutions to (2.1) -see Theorem 2.13.
• Global well-posedness for small initial data -see Theorem 5.1.
Although we formulate the main results only for d ≥ 2, a detailed explanation of the simpler case d = 1 can be found in Section 6. The special case p = q = 2 is presented separately in Section 7 as it requires a different argument. Finally, there is an appendix on the maximum principle for scalar SPDEs in Appendix A. The latter plays a crucial role in the positivity of the solution to (2.1).
The proofs of the above results are based on our recent theory on stochastic evolution equations [AV22b,AV22c]. It was already applied to stochastic Navier-Stokes equations [AV21] and a large class of SPDEs which fit into a variational setting [AV22a]. The current paper is the first in a series of papers in which we apply our new framework to reaction-diffusion equations. In the companion papers [AV23a,AV23b], based on the analysis worked out in this paper, we prove global wellposedness results in several cases, and extend some of the results to the quasilinear case. Finally, we mention that the local well-posedness and positivity results proven in the current paper have been already used by the first author in [Agr22] to prove delay of the blow-up of strong solutions and to establish an enhanced diffusion effect in presence of sufficiently intense transport noise.
1.6. Notation. Here we collect some notation which will be used throughout the paper. Further notation will be introduced where needed. We write A ≲_P B (resp. A ≳_P B) whenever there is a constant C > 0 depending only on P such that A ≤ CB (resp. A ≥ CB). We write C(P) if the constant C depends only on P.
Let p ∈ (1, ∞) and κ ∈ (−1, p − 1); we denote by w_κ the weight w_κ(t) = |t|^κ for t ∈ R. For a Banach space X and an interval I = (a, b) ⊆ R, L^p(a, b, w_κ; X) denotes the set of all strongly measurable maps f : I → X such that
∥f∥_{L^p(a,b,w_κ;X)} := ( ∫_a^b ∥f(t)∥_X^p w_κ(t) dt )^{1/p} < ∞.
Furthermore, W^{1,p}(a, b, w_κ; X) ⊆ L^p(a, b, w_κ; X) denotes the set of all f such that f′ ∈ L^p(a, b, w_κ; X) (here the derivative is taken in the distributional sense), and we set ∥f∥_{W^{1,p}(a,b,w_κ;X)} := ∥f∥_{L^p(a,b,w_κ;X)} + ∥f′∥_{L^p(a,b,w_κ;X)}.
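As a small worked example of the weighted norm just defined (not part of the paper), the following sketch compares a numerical quadrature of ∥f∥_{L^p(0,1,w_κ;R)} for f(t) = t^{−a} with its closed form (κ − ap + 1)^{−1/p}, which is finite iff a < (1+κ)/p; the parameter values are hypothetical and only illustrate how the power weight admits a blow-up of f at t = 0.

```python
# Minimal numerical sketch (not from the paper): the weighted norm
# ||f||_{L^p(0,1,w_kappa;R)} = ( int_0^1 |f(t)|^p t^kappa dt )^{1/p} for f(t) = t^{-a}.
# Closed form: (kappa - a*p + 1)^{-1/p}, finite iff a < (1 + kappa)/p.
from scipy.integrate import quad

def weighted_Lp_norm(f, p, kappa):
    val, _ = quad(lambda t: abs(f(t)) ** p * t ** kappa, 0.0, 1.0)
    return val ** (1.0 / p)

p, a = 4.0, 0.2                      # hypothetical parameters
f = lambda t: t ** (-a)
for kappa in (1.0, 0.0):             # weighted vs. unweighted case
    exact = (kappa - a * p + 1.0) ** (-1.0 / p)
    print(f"kappa={kappa}: quadrature {weighted_Lp_norm(f, p, kappa):.6f}, closed form {exact:.6f}")
# The weight enlarges the admissible blow-up rate a at t = 0:
print("admissible a < (1+kappa)/p:", {k: (1 + k) / p for k in (0.0, 1.0)})
```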
Let (·, ·) θ,p and [·, ·] θ be the real and complex interpolation functor, respectively. We refer to [HNVW16,Tri95,BL76] for details on interpolation and functions spaces. For each θ ∈ (0, 1), we set H θ,p (a, b, w κ ; X) := [L p (a, b, w κ ; X), W 1,p (a, b, w κ ; X)] θ .
In the unweighted case, i.e. κ = 0, we set H θ,p (a, b; X) := H θ,p (a, b, w 0 ; X) and similar. For
A ∈ {L p , H θ,p , W 1,p }, we denote by A loc (a, b, w κ ; X) (resp. A loc ([a, b), w κ ; X)) the set of all strongly measurable maps f : (c, d) → X such that f ∈ A(c, d, w κ ; X) for all c, d ∈ (a, b) (resp. f ∈ A(a, c, w κ ; X) for all c ∈ (a, b)).
The d-dimensional (flat) torus is denoted by T d where d ≥ 1. For K ≥ 1 and θ 1 , θ 2 ∈ (0, 1),
C θ1,θ2 loc ((a, b) × T d ; R ℓ ) denotes the space of all maps v : (a, b) × T d → R ℓ such that for all a < c < d < b we have |v(t, x) − v(t ′ , x ′ )| ≲ c,d |t − t ′ | θ1 + |x − x ′ | θ2 , for all t, t ′ ∈ [c, d], x, x ′ ∈ T d .
This definition is extended to θ 1 , θ 2 ≥ 1 by requiring that the partial derivatives ∂ α,β v (with α ∈ N and β ∈ N d ) exist and are in C θ1−|α|,θ2−|β| loc
((a, b) × T d ; R ℓ ) for all α ≤ ⌊θ 1 ⌋ and d i=1 β i ≤ ⌊θ 2 ⌋.
We will also need the Besov spaces B s q,p (T d ; R ℓ ) and Bessel potential spaces H s,q (T d ; R ℓ ) to formulate our main results. These spaces can be defined by real and complex interpolation or more directly using Littlewood-Paley decompositions (see [Saw18,Section 6.6] and [ST87]). Throughout the paper section, to abbreviate the notation, we often write L q , H s,q , B s q,p instead of
L q (T d ; R ℓ ), H s,q (T d ; R ℓ ), B s q,p (T d ; R ℓ )
if no confusion seems likely. Finally we collect the main probabilistic notation. In the paper we fix a filtered probability space (Ω, A , (F t ) t≥0 , P) and we denote by E[·] = Ω · dP the expected value. A map σ : Ω → [0, ∞] is called a stopping time if {σ ≤ t} ∈ F t for all t ≥ 0. For two stopping times σ and τ , we let
[τ, σ] × Ω := {(t, ω) ∈ [0, ∞) × Ω : τ (ω) ≤ t ≤ σ(ω)}.
Similar definition holds for [τ, σ) × Ω, (τ, σ) × Ω etc. Finally, P denotes the progressive σ-algebra on the above mentioned probability space.
Acknowledgments. The authors thank the referee and Udo Böhm for helpful comments.
Statement of the main results
In this section we state our main results on local well-posedness, regularity, blow-up criteria, and positivity for systems of reaction-diffusion equations on the d-dimensional torus T d . The results will be presented in a very flexible setting. This has the advantage that using the results of this paper, one can address global well-posedness issues in an efficient way by checking sharp blow-up criteria. Regularity and positivity often play a crucial role in dealing with these issues. As mentioned in the introduction, in the stochastic case there are many important cases in which global well-posedness is completely open. Using our new framework we are able to settle some of these problems in [AV23b].
The proofs of the main results are postponed to Section 3 and are based on our abstract framework developed in [AV22b,AV22c], and our recent maximal regularity estimates [AV23c]. Consider the following system of stochastic reaction-diffusion equations:
(2.1)  du_i − div(a_i · ∇u_i) dt = [div(F_i(·, u)) + f_i(·, u)] dt + ∑_{n≥1} [(b_{n,i} · ∇)u_i + g_{n,i}(·, u)] dw^n_t   on T^d,
       u_i(0) = u_{0,i}   on T^d,
where i ∈ {1, . . . , ℓ} and ℓ ≥ 1 is an integer. Here u = (u_i)_{i=1}^ℓ : [0, ∞) × Ω × T^d → R^ℓ is the unknown process, (w^n)_{n≥1} is a sequence of standard independent Brownian motions on the above mentioned filtered probability space and
div(a_i · ∇u_i) := ∑_{j,k=1}^d ∂_j(a^{j,k}_i ∂_k u_i),   (b_{n,i} · ∇)u_i := ∑_{j=1}^d b^j_{n,i} ∂_j u_i.
As explained in Subsection 1.3, the coefficients b^j_{n,i} model small scale turbulent effects, while the coefficients a^{j,k}_i model inhomogeneous conductivity and may also take into account the Itô correction in case of Stratonovich noise (see (1.8)). Note that the SPDEs (2.1) are coupled only through the nonlinearities F, f and g, but there are no cross interactions in the diffusion terms div(a_i · ∇u_i) and (b_{n,i} · ∇)u_i, which is a standard assumption in reaction-diffusion systems. The absence of cross-diffusion in the deterministic part div(a_i · ∇u_i) dt can be weakened in all the results below except for Theorem 2.13, in whose proof we argue component-wise.
Lower order terms in the leading differential operators in (2.1) can be included as well. Since they can be modeled through the nonlinearities F , f , and g as well, we do not have to write them explicitly.
2.1. Assumptions and definitions. In this subsection we collect the main assumptions and definitions. Additional assumptions will be employed where needed. Below B and P denotes the Borel and the progressive σ-algebra, respectively. The space H α,q (T d ; Y ) denotes the Bessel potential space with smoothness α and integrability q, defined on T d with values in the Banach space Y .
Assumption 2.1. Let d ≥ 2 and ℓ ≥ 1 be integers. We say that Assumption 2.1(p, q, h, δ) holds if p ∈ (2, ∞), q ∈ [2, ∞), h > 1, δ ∈ [1, 2) and for all i ∈ {1, . . . , ℓ} the following hold:
(1) For each j, k ∈ {1, . . . , d}, the maps a^{j,k}_i : R_+ × Ω × T^d → R and b^j_i := (b^j_{n,i})_{n≥1} : R_+ × Ω × T^d → ℓ^2 are P ⊗ B(T^d)-measurable.
(2) There exist N > 0 and α > max{d/ρ, δ − 1}, where ρ ∈ [2, ∞), such that a.s. for all t ∈ R_+ and j, k ∈ {1, . . . , d},
∥a^{j,k}_i(t, ·)∥_{H^{α,ρ}(T^d)} + ∥(b^j_{n,i}(t, ·))_{n≥1}∥_{H^{α,ρ}(T^d;ℓ^2)} ≤ N.
(3) There exists ν_i > 0 such that, a.s. for all t ∈ R_+, x ∈ T^d and ξ ∈ R^d,
∑_{j,k=1}^d ( a^{j,k}_i(t, x) − (1/2) ∑_{n≥1} b^j_{n,i}(t, x) b^k_{n,i}(t, x) ) ξ_j ξ_k ≥ ν_i |ξ|^2.
(4) For all j ∈ {1, . . . , d}, the maps F^j_i, f_i : R_+ × Ω × T^d × R^ℓ → R and g_i := (g_{n,i})_{n≥1} : R_+ × Ω × T^d × R^ℓ → ℓ^2 are P ⊗ B(T^d) ⊗ B(R^ℓ)-measurable. Set F_i := (F^j_i)_{j=1}^d. Assume that F^j_i(·, 0), f_i(·, 0) ∈ L^∞(R_+ × Ω × T^d), g_i(·, 0) ∈ L^∞(R_+ × Ω × T^d; ℓ^2), and a.s. for all t ∈ R_+, x ∈ T^d and y, y′ ∈ R^ℓ,
|f_i(t, x, y) − f_i(t, x, y′)| ≲ (1 + |y|^{h−1} + |y′|^{h−1}) |y − y′|,
|F_i(t, x, y) − F_i(t, x, y′)| ≲ (1 + |y|^{(h−1)/2} + |y′|^{(h−1)/2}) |y − y′|,
∥g_i(t, x, y) − g_i(t, x, y′)∥_{ℓ^2} ≲ (1 + |y|^{(h−1)/2} + |y′|^{(h−1)/2}) |y − y′|.
The parameters p and q will be used for temporal and spatial integrability, respectively. Finally, δ will be related to the order of smoothness of the underlying Sobolev space with integrability q. Although we allow δ ∈ [1, 2), in applications to (2.1) it turns out to be enough to consider δ ∈ [1, (h+1)/h), see Assumption 2.4 below. Note that Assumption 2.1(2) and Sobolev embeddings give
∥a^{j,k}_i∥_{C^{α − d/ρ}(T^d)} + ∥(b^j_{n,i})_{n≥1}∥_{C^{α − d/ρ}(T^d;ℓ^2)} ≲_{α,d,ρ} N.
For future convenience, we collect some observations in the following remark.
Remark 2.2.
(a) If Assumption 2.1(p, q, h, δ) holds for some δ ∈ [1, 2), then there exists an ε > 0 such that it holds for all δ ∈ [1, δ + ε]. (d) The case d = 1 is excluded in Assumption 2.1 to avoid many subcases in our main results.
However, it can be deduced by more direct methods (see Section 6), or from the d = 2 case by adding a dummy variable (under some restrictions). Often one cannot identify any critical spaces in the case d = 1. (e) The globally Lipschitz case h = 1 is excluded in the above. Global well-posedness always holds in this case and can be derived from [AV22c,Theorem 4.15]. Similar to (d), if h = 1, then no critical spaces can be identified as no rescaling of solutions can (locally) preserve the structure of (2.1).
Next we introduce the notion of solution to (2.1). To stress the dependence on (p, κ, δ, q) we will keep these parameters in the definition of solutions. The parameter κ ≥ 0 is used for the power weight w_κ(t) = t^κ in time. Finally, let us recall that the sequence (w^n)_{n≥1} uniquely induces an ℓ^2-cylindrical Brownian motion (see e.g. [AV22b, Definition 2.11]) given by
W_{ℓ^2}(v) := ∑_{n≥1} ∫_{R_+} v_n dw^n_t,   where v = (v_n)_{n≥1} ∈ L^2(R_+; ℓ^2).
Definition 2.3. Suppose that Assumption 2.1(p, q, h, δ) is satisfied for some h > 1 and let κ ∈ [0, p/2 − 1).
• Let σ be a stopping time and let u = (u_i)_{i=1}^ℓ : [0, σ) × Ω → H^{2−δ,q}(T^d; R^ℓ) be a stochastic process. We say that (u, σ) is a local (p, κ, δ, q)-solution to (2.1) if there exists a sequence of stopping times (σ_j)_{j≥1} such that the following hold for all i ∈ {1, . . . , ℓ}:
σ_j ≤ σ a.s. for all j ≥ 1 and lim_{j→∞} σ_j = σ a.s.; for all j ≥ 1 the process 1_{[0,σ_j]×Ω} u_i is progressively measurable; a.s. for all j ≥ 1 we have u_i ∈ L^p(0, σ_j, w_κ; H^{2−δ,q}(T^d)) and
(2.2)  div(F_i(·, u)) + f_i(·, u) ∈ L^p(0, σ_j, w_κ; H^{−δ,q}(T^d)),   (g_{n,i}(·, u))_{n≥1} ∈ L^p(0, σ_j, w_κ; H^{1−δ,q}(T^d; ℓ^2));
a.s. for all j ≥ 1 the following identity holds for all t ∈ [0, σ_j]:
(2.3)  u_i(t) − u_{0,i} = ∫_0^t [div(a_i · ∇u_i) + div(F_i(·, u)) + f_i(·, u)] ds + ∫_0^t 1_{[0,σ_j]} ((b_{n,i} · ∇)u_i + g_{n,i}(·, u))_{n≥1} dW_{ℓ^2}(s).
• Finally, (u, σ) is called a (p, κ, δ, q)-solution to (2.1) if for any other local (p, κ, δ, q)-solution (u′, σ′) to (2.1) we have σ′ ≤ σ a.s. and u = u′ on [0, σ′) × Ω.
Note that a (p, κ, δ, q)-solution is unique by definition. Later on in Proposition 3.5 we will prove a further uniqueness result: a different choice of the coefficients (p, κ, δ, q, h) leads to the same solution.
All the integrals in (2.3) are well-defined. To see this, fix i ∈ {1, . . . , ℓ}. By Assumption 2.1(2), [AV23c, Proposition 4.1] and u i ∈ L p (0, σ j , w κ ; H 2−δ,q (T d )) a.s. for all j ≥ 1, we get
(2.4) div(a i · ∇u i ) ∈ L p (0, σ j , w κ ; H −δ,q (T d )), ((b n,i · ∇)u i ) n≥1 ∈ L p (0, σ j , w κ ; H 1−δ,q (T d ; ℓ 2 )),
a.s. for all j ≥ 1. The deterministic integrals are well-defined as H −δ,q (T d )-valued Bochner integrals. For the stochastic integrals, recall that (2.5) γ(ℓ 2 , H ζ,r (T d )) = H ζ,r (T d ; ℓ 2 ), for all ζ ∈ R and r ∈ (1, ∞),
where γ(ℓ 2 ; X) denotes the set of all γ-radonifying operators with values in the Banach space X (see [HNVW17,Chapter 9] and in particular Theorem 9.4.8 there for details). Therefore, due to (2.2) and (2.4), the stochastic integrals are well-defined as H 1−δ,q (T d )-valued stochastic integrals by (2.5), [NVW15, Theorem 4.7] and L p (0, T ; w κ ) → L 2 (0, T ) since κ < p 2 − 1.
2.2.
Local well-posedness and regularity in critical spaces. Before we state our main local well-posedness result for (2.1) in critical spaces, we first introduce the set of admissible exponents (p, q, h, δ).
Assumption 2.4. Let d ≥ 2. We say that Assumption 2.4(p, q, h, δ) holds if h > 1, δ ∈ [1, (h+1)/h), p ∈ (2, ∞), and q ∈ [2, ∞) satisfy
(2.6)  1/p + (1/2)(δ + d/q) ≤ h/(h − 1)   and   d/(d − δ) < q < d(h − 1)/(h + 1 − δ(h − 1)).
In the above assumption we avoided the case p = 2 since this is an exceptional case, which can be included provided q = 2. The latter situation is discussed in Section 7. The following lemma characterizes for which exponents h we can find (p, q, δ) such that Assumption 2.4(p, q, h, δ) holds. Recall that we may always enlarge h if needed (see Remark 2.2(b)).
Lemma 2.5. Let d ≥ 2 and set
(2.7)  h_d := 3 if d = 2,   and   h_d := 1/2 + 1/d + √( (1/2 + 1/d)^2 + 2/d ) if d ≥ 3.
Then there exist (p, q, δ) such that Assumption 2.4(p, q, h, δ) holds if and only if h > h_d.
Proof. Since we can take p as large as we want, the first part of (2.6) is equivalent to d(h−1)/(2h − δ(h−1)) < q. Therefore, we can find admissible (p, q, h, δ) if and only if there exist δ ∈ [1, (h+1)/h) and q ≥ 2 such that
max{ d/(d − δ), d(h−1)/(2h − δ(h−1)) } < q < d(h−1)/(h + 1 − δ(h−1)).
The numbers h_d in (2.7) are connected to the Fujita exponent 1 + 2/d introduced in the seminal paper [Fuj66] in the study of the blow-up of positive (smooth) solutions to the PDE ∂_t u − ∆u = u^{1+h}. In the next remark we compare this to our setting. In case h ≤ h_d, one can still apply Theorem 2.7 by using one of the following strategies:
• enlarge h in Assumption 2.1, see Remark 2.2(b);
• add dummy variables to increase the dimension to d_2 > d in order to have h > h_{d_2} (here we are using that lim_{d→∞} h_d = 1).
Via Theorem 2.10 one can show non-explosion (in probability) on large time intervals for solutions to (2.1) in case of small initial data and admissible exponents without further conditions, see Section 5. Therefore, by Lemma 2.5, one can allow nonlinearities as in [Fuj66] for h > h_d. Such a threshold h_d seems optimal for these results to hold in the presence of a non-trivial transport noise term, i.e. (b · ∇)u dw. Therefore, it seems natural to call h_d the stochastic Fujita exponent.
Recently, there has been increasing interest in extending [Fuj66] to the stochastic framework, see e.g. [Cho09, Cho11, CK15, FLN19] and the references therein. In the latter works, equations on R^d are considered, but transport noise does not appear. In view of the scaling argument in Subsection 1.4, we expect that the same stochastic Fujita exponent h_d appears in the R^d-case of (2.1).
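For a quick numerical illustration of Lemma 2.5 and the threshold (2.7) (a sketch for intuition only, not part of the paper's argument), the following snippet evaluates h_d and scans δ ∈ [1, (h+1)/h) for a nonempty admissible q-interval, following the reduction in the proof of Lemma 2.5; the grid size and the test offsets ±0.05 are arbitrary.

```python
# Minimal numerical sketch (not from the paper): the stochastic Fujita exponent h_d
# from (2.7) and a brute-force check of Lemma 2.5. For p large, admissibility in (2.6)
# reduces to finding delta in [1,(h+1)/h) and q >= 2 with
# max{d/(d-delta), d(h-1)/(2h-delta(h-1))} < q < d(h-1)/(h+1-delta(h-1)).
import numpy as np

def h_fujita(d):
    if d == 2:
        return 3.0
    c = 0.5 + 1.0 / d
    return c + np.sqrt(c**2 + 2.0 / d)

def admissible_exponents_exist(d, h, n_grid=2000):
    """Scan delta and test whether the q-interval above is nonempty (with q >= 2)."""
    for delta in np.linspace(1.0, (h + 1.0) / h, n_grid, endpoint=False):
        upper = d * (h - 1.0) / (h + 1.0 - delta * (h - 1.0))
        lower = max(2.0, d / (d - delta), d * (h - 1.0) / (2.0 * h - delta * (h - 1.0)))
        if upper > lower:
            return True
    return False

for d in (2, 3, 4):
    hd = h_fujita(d)
    print(f"d={d}: h_d = {hd:.4f}, "
          f"admissible at h_d + 0.05: {admissible_exponents_exist(d, hd + 0.05)}, "
          f"admissible at h_d - 0.05: {admissible_exponents_exist(d, hd - 0.05)}")
```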
The main result of this section is the following local existence and regularity result for (2.1) in critical spaces, and it will be proved in Subsection 3.1. Recall that B^s_{q,p}(T^d; R^ℓ) denotes the Besov space with smoothness s ∈ R, integrability q, and microscopic parameter p. To abbreviate notation we write B^s_{q,p} and H^{s,q} for the spaces B^s_{q,p}(T^d; R^ℓ) and H^{s,q}(T^d; R^ℓ).
Theorem 2.7 (Local existence and uniqueness in critical spaces, and regularity). Let Assumptions 2.1(p, q, h, δ) and 2.4(p, q, h, δ) be satisfied. Set
κ := κ_c := p ( h/(h−1) − (1/2)(δ + d/q) ) − 1.
Then for any
(2.10)  u_0 ∈ L^0_{F_0}(Ω; B^{d/q − 2/(h−1)}_{q,p}),
the problem (2.1) has a (unique) (p, κ_c, δ, q)-solution (u, σ) such that σ > 0 a.s. and
(2.11)  u ∈ C([0, σ); B^{d/q − 2/(h−1)}_{q,p}) a.s.,
(2.12)  u ∈ H^{θ,p}_{loc}([0, σ), w_{κ_c}; H^{2−δ−2θ,q}) a.s. for all θ ∈ [0, 1/2).
Finally, u instantaneously regularizes in space and time:
(2.13)  u ∈ H^{θ,r}_{loc}(0, σ; H^{1−2θ,ζ}) a.s. for all θ ∈ [0, 1/2), r, ζ ∈ (2, ∞),
(2.14)  u ∈ C^{θ_1,θ_2}_{loc}((0, σ) × T^d; R^ℓ) a.s. for all θ_1 ∈ [0, 1/2), θ_2 ∈ (0, 1).
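As a numerical sanity check of the exponents in Theorem 2.7 (a sketch, not from the paper), the snippet below evaluates κ_c for one hypothetical admissible choice of (d, h, δ, q, p) and verifies that the trace-space smoothness 2 − δ − 2(1+κ_c)/p coincides with the critical smoothness d/q − 2/(h−1) from (2.10); this identity is used in the proof of Theorem 2.7 in Subsection 3.1.

```python
# Minimal numerical sketch (not from the paper): the critical weight kappa_c of Theorem 2.7
# and a consistency check of the smoothness of the critical space. The parameters below are
# hypothetical, chosen so that (2.6) holds (d=3, h=2.5, delta=1, q=2, p=16).
d, h, delta, q, p = 3, 2.5, 1.0, 2.0, 16.0

# Admissibility (2.6):
cond1 = 1/p + 0.5*(delta + d/q) <= h/(h - 1)
cond2 = d/(d - delta) < q < d*(h - 1)/(h + 1 - delta*(h - 1))
print("condition (2.6) satisfied:", cond1 and cond2)

kappa_c = p*(h/(h - 1) - 0.5*(delta + d/q)) - 1
print(f"kappa_c = {kappa_c:.4f}, admissible weight range [0, p/2 - 1) = [0, {p/2 - 1})")

smoothness_trace = 2 - delta - 2*(1 + kappa_c)/p   # smoothness of the trace space
smoothness_critical = d/q - 2/(h - 1)              # smoothness in (2.10)
print(f"2 - delta - 2(1+kappa_c)/p = {smoothness_trace:.4f}")
print(f"d/q - 2/(h-1)              = {smoothness_critical:.4f}")
```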
The standard set of initial data in the theory of reaction-diffusion equations is L ∞ (T d ; R ℓ ) (see e.g. [Pie10]), and it is always included as a special case in the above result (see Remark 2.8(c)).
The regularity (2.13)-(2.14) can be improved by imposing further smoothness conditions on (a, b, F, f, g), but keeping the same space of initial data for u 0 (see Theorem 4.2 below). We will prove later on that if Theorem 2.7 is applicable for two sets of exponents (p, q, h, δ), then the corresponding solutions coincide, see Proposition 3.5.
For future reference, we collect several observations in the following remark.
Remark 2.8.
(a) The space of initial data in (2.10) has the right local scaling for (2.1). For this reason it is often called critical for (2.1). It coincides with the abstract notion of criticality which will be considered in Section 3. Note that the Sobolev index of the initial value space is
(d/q − 2/(h−1)) − d/q = −2/(h−1),
which is independent of q and δ. Moreover, by Sobolev embeddings,
B^{d/q − 2/(h−1)}_{q,p}(T^d; R^ℓ) → B^{d/r − 2/(h−1)}_{r,s}(T^d; R^ℓ) for all r ≥ q and s ≥ p.
(b) The freedom in the choice of δ allows us to reduce the smoothness of the above critical spaces. Indeed, choosing δ ↑ (h+1)/h and letting q ↑ d(h−1)/(h + 1 − ((h+1)/h)(h−1)) = dh(h−1)/(h+1), it follows that we can treat initial data with smoothness d/q − 2/(h−1) ↓ −1/h.
(c) By increasing h (see Remark 2.2(b)) we can suppose that either h > 1 + 4/d, or h = 1 + 4/d and d ≥ 3. Setting q = (d/2)(h − 1), Theorem 2.7 gives local well-posedness for (2.1) for the important class of initial data u_0 ∈ L^0_{F_0}(Ω; L^q(T^d; R^ℓ)). Indeed, even if Assumptions 2.1(p, q, h, δ) and 2.4(p, q, h, δ) hold with δ = 1, they self-improve to some δ > 1 (see Remark 2.2(a)) and p ≥ max{q, 2/(2−δ)}. Thus, since L^q → B^0_{q,p} = B^{d/q − 2/(h−1)}_{q,p}, local well-posedness with initial data from the space L^0_{F_0}(Ω; L^q) follows from Theorem 2.7. In the above setting, one can also prove that u ∈ C([0, σ); L^q) a.s. by using the local continuity w.r.t. u_0 (see Proposition 2.9 below) and a stopped version of the arguments used in [AV23b, Proposition 6.3] (see also the comments below its statement).
The next rather technical local continuity result will be used in the proof of positivity of solutions (u, σ) (see Theorem 2.13 below).
Proposition 2.9 (Local continuity). Let Assumptions 2.1(p, q, h, δ) and 2.4(p, q, h, δ) be satisfied. Set κ := κ_c := p(h/(h−1) − (1/2)(δ + d/q)) − 1. Assume that u_0 satisfies (2.10) and let (u, σ) be the (p, κ_c, δ, q)-solution to (2.1) provided by Theorem 2.7. There exist constants C_0, T_0, ε_0 > 0 and stopping times σ_0, σ_1 ∈ (0, σ] a.s. for which the following assertion holds:
For each v_0 ∈ L^p_{F_0}(Ω; B^{d/q − 2/(h−1)}_{q,p}) with E∥u_0 − v_0∥^p_{B^{d/q − 2/(h−1)}_{q,p}} ≤ ε_0, the (p, κ_c, δ, q)-solution (v, τ) to (2.1) with initial data v_0 has the property that there exists a stopping time τ_0 ∈ (0, τ] a.s. such that for all t ∈ [0, T_0] and γ > 0, one has
(2.15)  P( sup_{r∈[0,t]} ∥u(r) − v(r)∥_{B^{d/q − 2/(h−1)}_{q,p}} ≥ γ, σ_0 ∧ τ_0 > t ) ≤ (C_0/γ^p) E∥u_0 − v_0∥^p_{B^{d/q − 2/(h−1)}_{q,p}},
(2.16)  P( ∥u − v∥_{L^p(0,t,w_{κ_c};H^{2−δ,q})} ≥ γ, σ_0 ∧ τ_0 > t ) ≤ (C_0/γ^p) E∥u_0 − v_0∥^p_{B^{d/q − 2/(h−1)}_{q,p}},
(2.17)  P(σ_0 ∧ τ_0 ≤ t) ≤ C_0 ( E∥u_0 − v_0∥^p_{B^{d/q − 2/(h−1)}_{q,p}} + P(σ_1 ≤ t) ).
The stopping time τ_0 depends on (u_0, v_0). To some extent, the estimates (2.15)-(2.16) show that (u, σ) depends continuously on the initial data u_0, while (2.17) gives a measure of the size of the time interval on which the continuity estimates (2.15)-(2.16) hold. The key point is that the right-hand side of (2.17) depends on v_0, but not on v. In particular, {τ_0 ≤ t} has small probability if t ∼ 0 and v_0 is close to u_0. We actually prove a slightly stronger result than Proposition 2.9, see Remark 3.4.
2.3. Blow-up criteria. Below we state some blow-up criteria for the solution to (2.1) provided by Theorem 2.7. Roughly speaking, blow-up criteria ensure that, if there exists a fixed time T > 0 such that the stopping time σ satisfies P(σ < T ) > 0, then the norm of u in an appropriate space explodes. Blow-up criteria are often used to prove that a certain solution (u, σ) is global in time, i.e. σ = ∞ a.s. In practice, to prove global existence, it is enough to prove that the norm of u in the above mentioned function space cannot explode. In our follow-up paper [AV23b], we will use this strategy to prove that solutions provided by Theorem 2.7 are global in time in several situations. A version of such results for small initial data can be found in Section 5.
Blow-up criteria are most powerful when they are formulated in function spaces which are as rough (i.e. large) as possible. On the other hand, the regularity cannot be arbitrarily low, since at least the nonlinearities need to be well-defined. Hence, from a scaling point of view it is natural to ask for blow-up criteria involving function spaces with Sobolev index − 2 h−1 , because such critical threshold (see Subsection 1.4) is generically optimal for local and global well-posedness of (S)PDEs (see [PSW18, Section 2.2] for deterministic evidence on this). Our general theory from [AV22c] leads to the following criteria which at the moment is the best we can expect with abstract methods.
Theorem 2.10 (Blow-up criteria). Let the assumptions of Theorem 2.7 be satisfied and let (u, σ) be the (p, κ_c, δ, q)-solution to (2.1). Suppose that p_0 ∈ (2, ∞), q_0 ∈ [2, ∞), h_0 ≥ h, δ_0 ∈ [1, 2) are such that Assumptions 2.1(p_0, q_0, h_0, δ_0) and 2.4(p_0, q_0, h_0, δ_0) hold. Set
β_0 := d/q_0 − 2/(h_0 − 1)   and   γ_0 := d/q_0 + 2/p_0 − 2/(h_0 − 1).
Then for all 0 < s < T < ∞:
(1) P( s < σ < T, sup_{t∈[s,σ)} ∥u(t)∥_{B^{β_0}_{q_1,∞}} < ∞ ) = 0 for all q_1 > q_0.
(2) P( s < σ < T, sup_{t∈[s,σ)} ∥u(t)∥_{B^{β_0}_{q_0,p_0}} + ∥u∥_{L^{p_0}(s,σ;H^{γ_0,q_0})} < ∞ ) = 0.
Note that the norms in the blow-up criteria are well-defined thanks to (2.13)-(2.14) and s > 0. In particular, the parameter s makes it possible to consider rough initial data. It is possible to take (p, q, δ, h) = (p 0 , q 0 , δ 0 , h 0 ), but the extra flexibility turns out to be very helpful in proving global well-posedness.
The proof of Theorem 2.10 will be given in Subsection 3.2. As a consequence we also obtain:
Corollary 2.11. Let the assumptions of Theorem 2.7 be satisfied and let (u, σ) be the (p, κ_c, δ, q)-solution to (2.1). Suppose that p_0 ∈ (2, ∞), q_0 ∈ [2, ∞), h_0 ≥ max{h, 1 + 4/d}, δ_0 ∈ (1, 2) are such that Assumptions 2.1(p_0, q_0, h_0, δ_0) and 2.4(p_0, q_0, h_0, δ_0) hold. Let ζ_0 = (d/2)(h_0 − 1). The following hold for each 0 < s < T < ∞:
(1) If q_0 = ζ_0, then for all ζ_1 > q_0,
P( s < σ < T, sup_{t∈[s,σ)} ∥u(t)∥_{L^{ζ_1}} < ∞ ) = 0.
(2) If q_0 > ζ_0, p_0 ∈ (2/(δ_0 − 1), ∞), p_0 ≥ q_0, and d/q_0 + 2/p_0 = 2/(h_0 − 1), then
P( s < σ < T, sup_{t∈[s,σ)} ∥u(t)∥_{L^{ζ_0}} + ∥u∥_{L^{p_0}(s,σ;L^{q_0})} < ∞ ) = 0.
Although Theorem 2.10 is more general, in the follow-up work [AV23b] on global well-posedness we mainly use Corollary 2.11. Considering T + ε instead of T in Corollary 2.11(1) and letting ε ↓ 0, we find
(2.18)  P( s < σ ≤ T, sup_{t∈[s,σ)} ∥u(t)∥_{L^{ζ_1}} < ∞ ) = 0.
Note that (2.18) also contains information on the set {σ = T}. The same also holds for Corollary 2.11(2) and the assertions in Theorem 2.10. Such variants of the blow-up criteria can sometimes be useful (see e.g. Theorem 5.1).
Remark 2.12.
(a) Keeping in mind the parabolic scaling, the spaces L^∞(s, σ; B^{β_0}_{q_0,∞}) and L^{p_0}(s, σ; H^{γ_0,q_0}) have (space-time) Sobolev index −2/(h_0−1). Thus, from a scaling point of view, Theorem 2.10(2) is optimal, while (1) is only sub-optimal. A similar remark holds for Corollary 2.11.
(b) In Theorem 2.10(1) and Corollary 2.11(1), p_0 does not appear, and thus it can be taken arbitrarily large.
(c) Choosing q_0, p_0 large enough and δ_0 > 1, one has β_0, γ_0 < 0. Thus Theorem 2.10 yields blow-up criteria in Sobolev spaces of negative smoothness. To see how far below zero one can get, as in Remark 2.8(b), we take δ_0 ↑ (h_0+1)/h_0 and q_0 ↓ d h_0(h_0−1)/(h_0+1). This gives β_0, γ_0 ↓ −1/h_0.
(d) Under the assumptions of Theorem 2.10, for p_0 large enough (depending on h) one also has
P( s < σ < T, ∥u∥_{L^{p_0}(s,σ;H^{γ_0,q_0})} < ∞ ) = 0 for all 0 < s < T.
To prove this one can argue as in the proof of Theorem 2.10 below by using [AV22c, Theorem 4.11] instead. We leave the details to the reader.
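The scaling bookkeeping in Remark 2.12(a) and (c) can be checked mechanically; the following sketch (not from the paper, with hypothetical parameter values) evaluates the space-time Sobolev indices of the spaces in Theorem 2.10 and the limiting smoothness −1/h_0.

```python
# Minimal numerical sketch (not from the paper): the parabolic Sobolev index of the
# spaces in Theorem 2.10 (cf. Remark 2.12(a)) and the limiting smoothness -1/h_0 of
# Remark 2.12(c). All parameter values below are hypothetical.
d, h0 = 3, 2.5

def beta_gamma(q0, p0):
    beta0 = d/q0 - 2/(h0 - 1)
    gamma0 = d/q0 + 2/p0 - 2/(h0 - 1)
    return beta0, gamma0

q0, p0 = 4.0, 20.0
beta0, gamma0 = beta_gamma(q0, p0)
# Space-time Sobolev index: smoothness minus d/q0 (and minus 2/p0 on the L^{p0} scale).
print("index of L^oo(B^{beta0}_{q0,oo}):", beta0 - d/q0, " expected:", -2/(h0 - 1))
print("index of L^{p0}(H^{gamma0,q0})  :", gamma0 - d/q0 - 2/p0, " expected:", -2/(h0 - 1))

# Limit of Remark 2.12(c): q0 -> d*h0*(h0-1)/(h0+1) and p0 -> infinity give beta0, gamma0 -> -1/h0.
q0_lim = d*h0*(h0 - 1)/(h0 + 1)
beta0_lim, gamma0_lim = beta_gamma(q0_lim, 1e9)
print("beta0, gamma0 near the limit:", round(beta0_lim, 6), round(gamma0_lim, 6), " -1/h0 =", -1/h0)
```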
2.4. Positivity. In this subsection we investigate the positive preserving property of the stochastic reaction-diffusion equations (2.1). Existence of positive solutions to stochastic reaction-diffusion equations has been studied by many authors see e.g. [Ass99, CPT16, CES13, Mar19, MS21] and the references therein. To the best of our knowledge, positivity of solutions to (2.1) is not known in our setting (e.g. rough data, transport noise and (t, ω)-dependent coefficients). Considering (t, ω)-dependence of the coefficients is also very useful in applications to quasilinear SPDEs, in which case a j,k i (t, ω, x) := A j,k i (u(t, ω, x)) and A j,k i (·) is a nonlinear map. These applications will be considered in [AV23a].
The strategy of proof which we use seems to be new in the stochastic setting, but folklore for deterministic reaction-diffusion equations. It is based on a linearization argument, and on the maximum principle. The stochastic version of the maximum principle we use is for linear scalar SPDEs and due to [Kry13] (see Lemma A.1 for a slight variation of the latter). To apply this to obtain positivity in the case of nonlinear systems, an essential ingredient is the instantaneous regularization (2.13)-(2.14) of solutions to (2.1) proven in Section 3.1.
Below
we write v ≥ 0 for v ∈ D ′ (T d ) provided ⟨φ, v⟩ ≥ 0 for all φ ∈ D(T d ) such that φ ≥ 0 on T d .
If v ∈ L 1 (T d ), then the above coincides with its natural meaning. Recall that positive distributions can be identified with finite positive measures. For an
R ℓ -valued distribution v = (v i ) ℓ i=1 ∈ D ′ (T d ; R ℓ ), we say that v ≥ 0 provided v i ≥ 0 for all i ∈ {1, . . . , ℓ}.
Our main result on positivity is the following.
Theorem 2.13 (Positivity). Let the assumptions of Theorem 2.7 be satisfied. Let (u, σ) be the (p, κ c , δ, q)-solution to (2.1) provided by Theorem 2.7. Suppose that
u 0 ≥ 0 a.s.,
and that there exist progressively measurable processes c_1, . . . , c_ℓ : R_+ × Ω → R such that for all i ∈ {1, . . . , ℓ}, n ≥ 1, y = (y_i)_{i=1}^ℓ ∈ [0, ∞)^ℓ and a.e. on R_+ × Ω × T^d,
(2.19)  f_i(·, y_1, . . . , y_{i−1}, 0, y_{i+1}, . . . , y_ℓ) ≥ 0,
(2.20)  F_i(·, y_1, . . . , y_{i−1}, 0, y_{i+1}, . . . , y_ℓ) = c_i(·),
(2.21)  g_{n,i}(·, y_1, . . . , y_{i−1}, 0, y_{i+1}, . . . , y_ℓ) = 0.
Then a.s. for all x ∈ T d and t ∈ [0, σ), one has u(t, x) ≥ 0.
By (2.14), the pointwise expression u(t, x) is well-defined in the above. The condition (2.19) is standard in the theory of (deterministic) reaction-diffusion equations (see e.g. [Pie10, eq. (1.7)]), while (2.21) is (almost) optimal since it excludes the additive noise case (in which positivity cannot be preserved). Condition (2.20) might be new. For ℓ = 1 it holds trivially if F does not depend on x ∈ T^d. In case ℓ = 2 it is for instance fulfilled for
F_i(t, ω, x, y_1, y_2) = ψ_i(t, ω, x) ϕ_{i,1}(x, y_1) ϕ_{i,2}(x, y_2) if ϕ_{i,i}(x, 0) = 0 and ψ_i is P ⊗ B(T^d)-measurable.
The proof of Theorem 2.13 will be given in Section 3.3. From the proof it will be clear that it is possible to replace T d by a domain O ⊆ R d if one assumes Dirichlet boundary conditions (for instance), and b n,i | ∂O = 0.
Proofs of the main results
3.1. Local well-posedness and regularity. The aim of this subsection is to prove local wellposedness and smoothness of (p, κ, δ, q)-solutions to (2.1). In particular, the next result contains Theorem 2.7 as a special case.
Proposition 3.1 (Local existence, uniqueness, and regularity). Let Assumption 2.1(p, q, h, δ) be satisfied. Suppose that q > max{ d/(d−δ), d(h−1)/(2h − δ(h−1)) } and that κ ∈ [0, p/2 − 1) satisfies one of the following conditions:
(3.1)  q < d(h−1)/δ and (1+κ)/p + (1/2)(δ + d/q) ≤ h/(h−1);
(3.2)  q ≥ d(h−1)/δ and (1+κ)/p ≤ (h/(h−1))(1 − δ/2).
Then for any u_0 ∈ L^0_{F_0}(Ω; B^{2−δ−2(1+κ)/p}_{q,p}), (2.1) has a (unique) (p, κ, δ, q)-solution satisfying σ > 0 a.s. and, for all θ ∈ [0, 1/2),
(3.3)  u ∈ H^{θ,p}_{loc}([0, σ), w_κ; H^{2−δ−2θ,q}) ∩ C([0, σ); B^{2−δ−2(1+κ)/p}_{q,p}) a.s.
Moreover, u instantaneously regularizes:
(3.4)  u ∈ H^{θ,r}_{loc}(0, σ; H^{1−2θ,ζ}) a.s. for all θ ∈ [0, 1/2), r, ζ ∈ (2, ∞),
(3.5)  u ∈ C^{θ_1,θ_2}_{loc}((0, σ) × T^d; R^ℓ) a.s. for all θ_1 ∈ [0, 1/2), θ_2 ∈ (0, 1).
The weight κ is called critical if equality holds in the above condition on κ in (3.1) or (3.2), i.e.
in (3.1): κ = κ_c = p( h/(h−1) − (1/2)(δ + d/q) ) − 1,   in (3.2): κ = κ_c = (ph/(h−1))(1 − δ/2) − 1.
Moreover, the space of initial data B^{2−δ−2(1+κ)/p}_{q,p} is called critical as well. For details on criticality we refer to [AV22b, Section 4]. This explains the subscript 'c' in Theorem 2.7. This abstract notion of criticality turns out to be the one that leads to scaling invariant spaces in many examples.
Before we prove the above result, let us first show how Theorem 2.7 can be deduced from Proposition 3.1.
Proof of Theorem 2.7. The upper bound q < d(h−1)/(h+1−δ(h−1)) and δ < (h+1)/h imply q < d(h−1)/δ. In particular, this is the first case of Proposition 3.1. Thus, it remains to check the inequality (1+κ)/p + (1/2)(δ + d/q) ≤ h/(h−1). Since κ_c = p( h/(h−1) − (1/2)(δ + d/q) ) − 1, the assumptions 1/p + (1/2)(δ + d/q) ≤ h/(h−1) and q < d(h−1)/(h+1−δ(h−1)) imply κ_c ≥ 0 and κ_c < p/2 − 1, respectively. In other words, κ_c belongs to the admissible range [0, p/2 − 1). Hence, the assumptions of Theorem 2.7 imply that Proposition 3.1 is applicable with κ = κ_c. It remains to show that the space of initial data u_0 is the one claimed in Theorem 2.7. To this end, note that B^{2−δ−2(1+κ_c)/p}_{q,p} = B^{d/q − 2/(h−1)}_{q,p}, as desired. □
Next we prove Proposition 3.1. The idea is to reformulate the system of SPDEs (2.1) as a stochastic evolution equations (SEE in the following) and then use the results in [AV22b,AV22c]. To this end, we need two ingredients:
• Stochastic maximal L p (L q )-regularity for the linearized problem (see e.g. [AV22b, Section 3] for the definition); • Estimates for the nonlinearities. Recently, we obtained stochastic maximal L p (L q )-regularity for second order systems on the ddimensional torus [AV23c]. Required estimates for the nonlinearities will be formulated in Lemma 3.2 below.
Before we state the lemma we reformulate (2.1) as an SEE. To this end, throughout this subsection we set
(3.6) X 0 = H −δ,q , X 1 = H 2−δ,q , and X λ := [X 0 , X 1 ] λ = H −δ+2λ,q ,
where λ ∈ (0, 1), and a.s. for all t ∈ R + , v ∈ X 1 ,
(3.7) A(t)v = div(a(t) · ∇v), B(t)v = (b n (t) · ∇)v n≥1 , Φ(t, v) = div(F (t, v)) + f (t, v), Γ(t, v) = g n (t, v) n≥1 .
With the above notation, (2.1) can be rewritten as a semilinear SEE on X 0 :
(3.8) du − A(t)u dt = Φ(t, u) dt + (B(t)u + Γ(t, u)) dW ℓ 2 (t), t ∈ R + , u(0) = u 0 ,
where W ℓ 2 is the ℓ 2 -cylindrical Brownian motion induced by (w n ) n≥1 , see the text before Definition 2.3. Recall that γ(ℓ 2 , X 1/2 ) = γ(ℓ 2 , H 1−δ,q ) = H 1−δ,q (ℓ 2 ), cf. (2.5).
Lemma 3.2. Let Assumption 2.1(p, q, h, δ) be satisfied. Let Φ, Γ be as in (3.7). Suppose that q > max{ d/(d−δ), d(h−1)/(2h − δ(h−1)) }. Set ρ_1 = h − 1, ρ_2 = (h−1)/2 and
β_1 := (1/2)(δ + d/q)(1 − 1/h) if q < d(h−1)/δ,   β_1 := δ/2 if q ≥ d(h−1)/δ,
β_2 := 1/(h+1) + (1/2)(δ + d/q)(h−1)/(h+1) if q < d(h−1)/(2(δ−1)),   β_2 := δ/2 if q ≥ d(h−1)/(2(δ−1)).
Then β_1, β_2 ∈ (0, 1) and for each v, v′ ∈ X_1,
∥Φ(·, v) − Φ(·, v′)∥_{X_0} ≲ ∑_{j∈{1,2}} (1 + ∥v∥^{ρ_j}_{X_{β_j}} + ∥v′∥^{ρ_j}_{X_{β_j}}) ∥v − v′∥_{X_{β_j}},
∥Φ(·, v)∥_{X_0} ≲ ∑_{j∈{1,2}} (1 + ∥v∥^{ρ_j}_{X_{β_j}}) ∥v∥_{X_{β_j}},
∥Γ(·, v) − Γ(·, v′)∥_{γ(ℓ^2, X_{1/2})} ≲ (1 + ∥v∥^{ρ_2}_{X_{β_2}} + ∥v′∥^{ρ_2}_{X_{β_2}}) ∥v − v′∥_{X_{β_2}},
∥Γ(·, v)∥_{γ(ℓ^2, X_{1/2})} ≲ (1 + ∥v∥^{ρ_2}_{X_{β_2}}) ∥v∥_{X_{β_2}}.
Since β j < 1, the above result shows that Φ and Γ are lower-order nonlinearities.
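The exponents β_1, β_2 are elementary to evaluate; the following sketch (not part of the paper) computes them and checks β_j ∈ (0, 1) on a grid of (δ, q) respecting the lower bound on q assumed in Lemma 3.2; the dimension, growth parameter and grid are hypothetical.

```python
# Minimal numerical sketch (not from the paper): the exponents beta_1, beta_2 of Lemma 3.2
# and a grid check that beta_j in (0,1) whenever q > max{d/(d-delta), d(h-1)/(2h-delta(h-1))}.
import numpy as np

def betas(d, h, delta, q):
    # beta_1: threshold q = d(h-1)/delta
    beta1 = 0.5*(delta + d/q)*(1 - 1/h) if q < d*(h - 1)/delta else delta/2
    # beta_2: threshold q = d(h-1)/(2(delta-1)), read as +infinity when delta = 1
    thr2 = np.inf if delta == 1 else d*(h - 1)/(2*(delta - 1))
    beta2 = (1/(h + 1) + 0.5*(delta + d/q)*(h - 1)/(h + 1)) if q < thr2 else delta/2
    return beta1, beta2

d, h = 3, 2.5                                        # hypothetical choices
ok = True
for delta in np.linspace(1.0, (h + 1)/h, 50, endpoint=False):
    q_min = max(d/(d - delta), d*(h - 1)/(2*h - delta*(h - 1)), 2.0)
    for q in np.linspace(q_min + 1e-3, 10.0, 50):
        b1, b2 = betas(d, h, delta, q)
        ok &= (0 < b1 < 1) and (0 < b2 < 1)
print("beta_1, beta_2 in (0,1) on the whole grid:", ok)
print("example (delta=1, q=2):", betas(d, h, 1.0, 2.0))
```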
Proof. Since f (·, 0), F j (·, 0) ∈ L ∞ and (g n,i (·, 0)) n≥1 ∈ L ∞ by Assumption 2.1(4), it is enough to
estimate the differences Φ(·, v) − Φ(·, v ′ ) and Γ(·, v) − Γ(·, v ′ ).
We break the proof into two steps.
Step 1:
Estimate for Φ. Let us write Φ = Φ 0 + Φ 1 where Φ 0 (·, v) := f (·, v) and Φ 1 (·, v) = div(F (·, v)).
Substep 1a: Estimate for Φ 0 . By Assumption 2.1(4), a.e. on R + × Ω and for all v, v ′ ∈ X 1 ,
(3.9) ∥Φ 0 (·, v) − Φ 0 (·, v ′ )∥ H −δ,q (i) ≲ ∥f (t, ·, v) − f (t, ·, v ′ )∥ L ξ ≲ (1 + |v| h−1 + |v ′ | h−1 )|v − v ′ | L ξ (ii) ≲ (1 + ∥v∥ h−1 L hξ + ∥v ′ ∥ h−1 L hξ )∥v − v ′ ∥ L hξ (iii) ≲ (1 + ∥v∥ h−1 H θ,q + ∥v ′ ∥ h−1 H θ,q )∥v − v ′ ∥ H θ,q , where in (i) we used Sobolev embedding with − d ξ = −δ − d q and q > d d−δ to ensure ξ ∈ (1, ∞)
. Estimate (ii) follows from Hölder's inequality. In (iii) we used Sobolev embedding with θ − d q ≥ − d hξ , and where we need θ < 2 − δ to ensure that Φ 0 is of lower-order (see (3.6)). To choose θ we consider two cases:
• Case q < d(h−1) δ . In this situation we set θ = d q − d hξ = d(h−1) hq − δ h > 0. Note that θ < 2 − δ follows from the assumption q > d(h−1) 2h−δ(h−1) ; • Case q ≥ d(h−1) δ .
Here we set θ = 0. Since δ < 2 by Assumption 2.1, we also have θ = 0 < 2 − δ. In both of the above cases, X β1 = H θ,q (see (3.6)). Thus (3.9) gives
(3.10) ∥Φ 0 (t, ·, v) − Φ 0 (t, ·, v ′ )∥ X0 ≲ (1 + ∥v∥ ρ1 X β 1 + ∥v ′ ∥ ρ1 X β 1 )∥v − v ′ ∥ X β 1 .
Substep 1b: Estimate for Φ 1 . As in substep 1a, by Assumption 2.1(4) we have, a.e. on R + × Ω and for all v, v ′ ∈ X 1 , (3.11)
∥Φ 1 (·, v) − Φ 1 (·, v ′ )∥ H −δ,q (iv) ≲ ∥F (t, ·, v) − F (t, ·, v ′ )∥ L η ≲ (1 + |v| h−1 2 + |v ′ | h−1 2 )|v − v ′ | L η (v) ≲ (1 + ∥v∥ h−1 2 L h+1 2 η + ∥v ′ ∥ h−1 2 L h+1 2 η )∥v − v ′ ∥ L h+1 2 η (vi) ≲ (1 + ∥v∥ h−1 2 H ϕ,q + ∥v ′ ∥ h−1 2 H ϕ,q )∥v − v ′ ∥ H ϕ,q ,where
in (iv) we used div : H 1−δ,q → H −δ,q boundedly, and Sobolev embedding with
− d η = 1 − δ − d q , where η ∈ (1, q) since q > d d−δ .
In (v) we used Hölder's inequality, and in (vi) the Sobolev embedding with ϕ ∈ [0, 2 − δ) and ϕ − d q ≥ − 2d η(h+1) . As in substep 1a, to choose ϕ we distinguish two cases.
• Case q < d(h−1) 2(δ−1) . In this situation we have ϕ :
= d q − 2d η(h+1) = d q h−1 h+1 + 2 1−δ h+1 > 0. Note that ϕ < 2 − δ since q > d(h−1) 2h−δ(h−1) by assumption; • Case q ≥ d(h−1)
2(δ−1) . Here we set ϕ = 0 and thus ϕ < 2 − δ. Again one can check X β2 = H ϕ,q in both cases. Thus (3.11) gives
(3.12) ∥Φ 1 (·, v) − Φ 1 (·, v ′ )∥ X0 ≲ (1 + ∥v∥ ρ2 X β 2 + ∥v ′ ∥ ρ2 X β 2 )∥v − v ′ ∥ X β 2 .
The required estimate for Φ(·, v) − Φ(·, v ′ ) follows from (3.10) and (3.12), which completes Step 1.
Step 2: Estimate for Γ. Here we prove that Γ(·, v) − Γ(·, v ′ ) satisfies the same bound of Φ 1 (·, v) − Φ 1 (·, v ′ ) in (3.11). Thus the required estimate for Γ follows as in Substep 1b. Indeed, a.e. on R + × Ω and for all v, v ′ ∈ X 1 ,
(3.13) ∥Γ(·, v) − Γ(·, v ′ )∥ γ(ℓ 2 ,X 1/2 ) (vii) ≲ ∥g(t, ·, v) − g(t, ·, v ′ )∥ γ(ℓ 2 ,L η ) (viii) ≂ ∥g(t, ·, v) − g(t, ·, v ′ )∥ L η (ℓ 2 ) (ix) ≲ (1 + |v| h−1 2 + |v ′ | h−1 2 )|v − v ′ | L η ,
where in (vii) we used Sobolev embeddings with − d η = 1 − δ − d q , in (viii) (2.5) and in (ix) Assumption 2.1(4). Comparing (3.13) with the second line in (3.11), one can check that the claimed estimate for Γ follows as in Substep 1b. □ Next we prove Proposition 3.1. For the reader's convenience, the proof will be divided into two parts. In Part (A) we prove the existence of a (p, κ, δ, q)-solution to (2.1) with pathwise regularity as in (3.3) and in Part (B) we prove (3.4)-(3.5).
Proof of Proposition 3.1 Part (A) -Local existence and uniqueness. We break the proof of Part (A) into two steps. Recall that (A, B, Φ, Γ) are defined in (3.7). In the following, we use the definition of criticality of [AV22b] for the trace space of initial data (see e.g. [ALV21] for details on trace theory)
(3.14)  X^{Tr}_{κ,p} := (X_0, X_1)_{1−(1+κ)/p, p} = (H^{−δ,q}, H^{2−δ,q})_{1−(1+κ)/p, p} = B^{2−δ−2(1+κ)/p}_{q,p},
where we used [BL76, Theorem 6.4.5].
Step 1: The assumptions (HF) and (HG) of [AV22b, Section 4.1] hold with (F, G) replaced by (Φ, Γ). Moreover, the trace space X^{Tr}_{κ,p} = B^{2−δ−2(1+κ)/p}_{q,p} is critical for (2.1) if and only if one of the following conditions holds:
• q < d(h−1)/δ and (1+κ)/p + (1/2)(δ + d/q) = h/(h−1);
• q ≥ d(h−1)/δ and (1+κ)/p = (h/(h−1))(1 − δ/2).
To prove the claim of this step, by Lemma 3.2 it suffices to show that
(3.15)  (1+κ)/p ≤ ((ρ_j + 1)/ρ_j)(1 − β_j) for j ∈ {1, 2},
where ρ_j, β_j are as in Lemma 3.2. Note that d(h−1)/δ < d(h−1)/(2(δ−1)) for all h > 1 and δ ∈ [1, 2). Therefore, to check (3.15), we can split into the following three cases:
(a) Case q < d(h−1)/δ. In this situation one can check that the inequalities in (3.15) for j ∈ {1, 2} are equivalent to the following restriction:
(1+κ)/p ≤ h/(h−1) − (1/2)(δ + d/q).
(b) Case d(h−1)/δ ≤ q < d(h−1)/(2(δ−1)). Then (3.15) for j ∈ {1, 2} holds if and only if
(1+κ)/p ≤ h/(h−1) − (1/2)(δ + d/q)   and   (1+κ)/p ≤ (h/(h−1))(1 − δ/2).
Note that q ≥ d(h−1)/δ implies (h/(h−1))(1 − δ/2) ≤ h/(h−1) − (1/2)(δ + d/q). Therefore, it is enough to assume the second of the above conditions.
(c) Case q ≥ d(h−1)/(2(δ−1)). Then (3.15) for j ∈ {1, 2} leads to the same condition
(1+κ)/p ≤ (h/(h−1))(1 − δ/2).
One can check that the conditions in the cases (a)-(c) coincide with the ones assumed in Proposition 3.1. Moreover, criticality holds if and only if the estimates in cases (a)-(c) hold with equality.
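The case distinction (a)-(c) amounts to the elementary identity min_{j∈{1,2}} ((ρ_j+1)/ρ_j)(1 − β_j) = h/(h−1) − (1/2)(δ + d/q) for q < d(h−1)/δ and = (h/(h−1))(1 − δ/2) otherwise. The following sketch (not from the paper; parameters and grid are hypothetical) verifies this identity numerically.

```python
# Minimal numerical sketch (not from the paper): a grid check of Step 1, i.e. that
# min_j (rho_j+1)/rho_j * (1 - beta_j) coincides with h/(h-1) - (delta + d/q)/2 when
# q < d(h-1)/delta (case (a)) and with (h/(h-1))*(1 - delta/2) otherwise (cases (b), (c)).
import numpy as np

def betas(d, h, delta, q):
    beta1 = 0.5*(delta + d/q)*(1 - 1/h) if q < d*(h - 1)/delta else delta/2
    thr2 = np.inf if delta == 1 else d*(h - 1)/(2*(delta - 1))
    beta2 = (1/(h + 1) + 0.5*(delta + d/q)*(h - 1)/(h + 1)) if q < thr2 else delta/2
    return beta1, beta2

d, h = 3, 2.5
rho1, rho2 = h - 1, (h - 1)/2
max_err = 0.0
for delta in np.linspace(1.0, 1.9, 40):
    q_min = max(d/(d - delta), d*(h - 1)/(2*h - delta*(h - 1)), 2.0)
    for q in np.linspace(q_min + 1e-3, 12.0, 60):
        b1, b2 = betas(d, h, delta, q)
        lhs = min((rho1 + 1)/rho1*(1 - b1), (rho2 + 1)/rho2*(1 - b2))
        rhs = h/(h - 1) - 0.5*(delta + d/q) if q < d*(h - 1)/delta else (h/(h - 1))*(1 - delta/2)
        max_err = max(max_err, abs(lhs - rhs))
print("max |lhs - rhs| over the grid:", max_err)   # expected ~ 0 up to floating-point error
```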
Step 2: There exists a (unique) (p, κ, δ, q)-solution (u, σ) to (2.1) such that
u ∈ H θ,p loc ([0, σ); H 2θ−δ,Y j = H 2j−δ,q to Y j = H 2j−1,q (see
Step 3). In each of the steps in the proof below and without further mentioning it, we use the stochastic maximal L r wα -regularity result of [AV23c, Theorem 5.2 and Remark 5.6] for (A, B). By Assumption 2.1 the latter holds on X 0 = H −s,ζ and all r ∈ (2, ∞), ζ ∈ [2, ∞), κ ∈ [0, r 2 − 1), and s such that 1 ≤ s ≤ δ + γ, for some (small) γ > 0.
Proof of Proposition 3.1 Part (B) -Instantaneous regularization (3.4)-(3.5). Let (u, σ) denote the (p, κ, δ, q)-solution to (2.1) provided by Part (A).
Step 1: For all r ∈ (2, ∞),
(3.16) u ∈ θ∈[0,1/2) H θ,r loc (0, σ; H 2−δ−2θ,q ), a.s.
The proof of (3.16) consists of two sub-steps, where
Step 1a is not needed if κ > 0.
Step 1a: If κ = 0, then (3.16) holds for some r > p. Here we apply [AV22c, Proposition 6.8]. Let (β j ) j∈{1,2} be as in Lemma 3.2. Note that β 1 , β 2 ∈ (0, 1) and p ∈ (2, ∞) under the assumption of Proposition 3.1. Fix r ∈ (p, ∞) and α ∈ (0, r 2 − 1) such that (3.17) 1 p = 1 + α r , and 1 r ≥ max j β j − 1 + 1 p .
With the above choice, Step 1 of the proof of Proposition 3.1 Part (A) ensures that [AV22c, Proposition 6.8] is applicable with Y j = X j = H 2−δ,q and (r, α) as above. This yields (3.16) for all r ∈ (p, ∞) and α > 0 satisfying (3.17).
Step 1b: (3.16) holds for all r ∈ (2, ∞). Let
either $(\widehat r, \widehat\alpha) = (p, \kappa)$ if $\kappa > 0$, or $(\widehat r, \widehat\alpha)$ as in Step 1a if $\kappa = 0$.
In all cases $\widehat\alpha > 0$. Let $r \in (2, \infty)$ be arbitrary and let $\alpha \in [0, \frac r2 - 1)$ be such that $\frac{1+\alpha}{r} < \frac{1+\widehat\alpha}{\widehat r}$. Set $Y_j := H^{2j-\delta,q}$ for $j \in \{0, 1\}$. Combining Step 1 of the proof of Proposition 3.1 Part (A) and
Step 1a if κ = 0, one can check that the assumptions of [AV22c, Corollary 6.5] hold and this yields the claim of Step 1b.
Step 2: For all r, ζ ∈ (2, ∞),
u ∈ θ∈[0,1/2) H θ,r loc (0, σ; H 2−δ−2θ,ζ ) a.s.
It suffices to consider $r \in (p, \infty)$ such that $\frac1r + \frac12\big(\delta + \frac dq\big) < \frac{h}{h-1}$. The set of such $r$ is nonempty, since in each of the cases (3.1) and (3.2) one can check that $\frac12\big(\delta + \frac dq\big) < \frac{h}{h-1}$. To prove the above claim for $u$ it is enough to show the existence of $\varepsilon > 0$, depending only on $(r, \delta, q, h, d)$, such that for all $\zeta \in [q, \infty)$,
(3.18) u ∈ θ∈[0,1/2) H θ,r loc (0, σ; H 2−δ−2θ,ζ ) a.s. =⇒ u ∈ θ∈[0,1/2) H θ,r loc (0, σ; H 2−δ−2θ,ζ+ε ) a.s.
Indeed, by
Step 1 we know that the RHS(3.18) holds with ζ = q and r as above. Thus the claim of this step follows by iterating (3.18). To prove (3.18) suppose that u ∈ θ∈[0,1/2) H θ,r loc (0, σ; H 2−δ−2θ,ζ ) a.s. We will apply [AV22c, Theorem 6.3]. Since 1 r + 1 2 (δ + d q ) < h h−1 by assumption, there exists α > 0 (depending only on (r, δ, q, h, d)) such that 1+α
r + 1 2 (δ + d q ) < h h−1 . By
Step 1 of the proof of Proposition 3.1 Part (A) we know that (HF) and (HG) of [AV22b, Section 4.1] hold in the (H −δ,ζ , H 2−δ,ζ , α, r)-setting with ζ ∈ [q, ∞), and the corresponding trace space is not critical for (2.1) in this setting. Next we check the assumptions of [AV22c, Theorem 6.3] with the choice
Y i = H 2j−δ,ζ , Y i = H 2j−δ,ζ+ε , r = r, α = α
where ε will be chosen below. It is easy to see that conditions (1) and (2)
(3.19) Y r → Y α, r = Y α,r .
The latter will require ε to be small enough. Recall that Y r = B 2−δ− 2 r ζ,r and Y α,r = B 2−δ−2 1+α r ζ+ε,r by (3.14). By Sobolev embedding (3.19) holds provided
(3.20) 2 − δ − 2 r − d ζ ≥ 2 − δ − 2 1 + α r − d ζ + ε ⇔ 1 ζ − 1 ζ + ε ≤ 2α dr .
For (3.20) we can for instance take ε = 2α dr > 0.
Step 3: For all r, ζ ∈ (2, ∞),
(3.21) u ∈ θ∈[0,1/2) H θ,r loc (0, σ; H 1−2θ,ζ ) a.s.
Note that, if δ = 1, then (3.21) follows from Step 2. Thus below we may assume δ ∈ (1, 2). It suffices to prove (3.21) for r and ζ large. Therefore, we may suppose that
$$\zeta \ge \max\Big\{\frac{d(h-1)}{\delta},\, q\Big\}, \qquad r > \max\Big\{p,\, \frac{2}{2-\delta}\Big\}, \qquad\text{and}\qquad \frac1r + \frac{\delta-1}{2} < \frac{h}{2(h-1)}.$$
For the latter note that δ−1 2 < h 2(h−1) always holds. As in the previous step, we use [AV22c, Theorem 6.3] to improve the differentiability in space. To prove (3.21), for j ∈ {0, 1}, we let
(3.22) $Y_j = H^{2j-\delta,\zeta}$, $\ \widehat Y_j = H^{2j-1,\zeta}$, $\ \widehat r = r$, $\ \alpha = 0$, $\ \widehat\alpha = \tfrac{r(\delta-1)}{2}$.
Moreover, $\widehat\alpha \in [0, \frac r2 - 1)$ since $r > \frac{2}{2-\delta}$. We claim that (3.2) is satisfied in the $(Y_0, Y_1, r, \alpha)$-setting and in the $(\widehat Y_0, \widehat Y_1, \widehat r, \widehat\alpha)$-setting, and that both are not critical. Indeed, for $Y$ and $\widehat Y$ this follows from
$$\frac1r < 1 - \frac\delta2 < \frac{h}{h-1}\Big(1 - \frac\delta2\Big) \qquad\text{and}\qquad \frac{1+\widehat\alpha}{r} = \frac1r + \frac{\delta-1}{2} < \frac{h}{2(h-1)},$$
respectively. To apply [AV22c, Theorem 6.3] it remains to check condition (3) there, which states
(3.23) $\widehat Y_{1-\varepsilon} = [\widehat Y_0, \widehat Y_1]_{1-\varepsilon} = Y_1$, $\quad \widehat Y_0 = [Y_0, Y_1]_{\varepsilon} = Y_{\varepsilon}$, $\quad\text{and}\quad \frac{1+\widehat\alpha}{\widehat r} = \varepsilon + \frac{1+\alpha}{r}$.
Since α = 0 and ε < 1 2 − 1 r by construction, [AV22c, Lemma 6.2(4)] applies and thus (3.23) (b) follows. Hence [AV22c, Theorem 6.3] yields (3.21).
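For convenience, here are the two elementary exponent checks used in this step, with $\widehat\alpha = \frac{r(\delta-1)}{2}$ as in (3.22) and $\varepsilon$ determined by the last identity in (3.23), i.e. $\varepsilon = \frac{\widehat\alpha}{r} = \frac{\delta-1}{2}$; both only use $r > \frac{2}{2-\delta}$:
\begin{align*}
\widehat\alpha < \frac r2 - 1 &\iff r(\delta-1) < r - 2 \iff r(2-\delta) > 2 \iff r > \frac{2}{2-\delta},\\
\varepsilon < \frac12 - \frac1r &\iff \frac1r < \frac{2-\delta}{2} \iff r > \frac{2}{2-\delta}.
\end{align*}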
Step 4: Conclusion. Note that (3.4) is equivalent to (3.21). In addition, (3.5) follows from (3.4) and Sobolev embedding. Hence the proof of Proposition 3.1 is completed. □
Next we turn to the local continuity result.
Proposition 3.3 (Local continuity).
Let the assumptions of Proposition 3.1 be satisfied. Let (u, σ) be the (p, κ, δ, q)-solution to (2.1). Then there exist positive constants (C 0 , T 0 , ε 0 ) and stopping times σ 0 , σ 1 such that σ 0 , σ 1 ∈ (0, σ] a.s. for which the following assertion holds:
For each $v_0 \in L^p_{\mathcal F_0}(\Omega; B^{2-\delta-2\frac{1+\kappa}{p}}_{q,p})$ with $E\|u_0 - v_0\|^p_{B^{2-\delta-2\frac{1+\kappa}{p}}_{q,p}} \le \varepsilon_0$, the (p, κ, δ, q)-solution $(v, \tau)$ to (2.1) with initial data $v_0$ has the property that there exists a stopping time $\tau_0 \in (0, \tau]$ a.s. such that for all $t \in [0, T_0]$ and $\gamma > 0$, one has
(3.24) $P\big(\sup_{r\in[0,t]} \|u(r) - v(r)\|_{B^{2-\delta-2\frac{1+\kappa}{p}}_{q,p}} \ge \gamma,\ \sigma_0 \wedge \tau_0 > t\big) \le \frac{C_0}{\gamma^p}\, E\|u_0 - v_0\|^p_{B^{2-\delta-2\frac{1+\kappa}{p}}_{q,p}}$,
(3.25) $P\big(\|u - v\|_{L^p(0,t,w_\kappa;H^{2-\delta,q})} \ge \gamma,\ \sigma_0 \wedge \tau_0 > t\big) \le \frac{C_0}{\gamma^p}\, E\|u_0 - v_0\|^p_{B^{2-\delta-2\frac{1+\kappa}{p}}_{q,p}}$,
(3.26) $P(\sigma_0 \wedge \tau_0 \le t) \le C_0\big(E\|u_0 - v_0\|^p_{B^{2-\delta-2\frac{1+\kappa}{p}}_{q,p}} + P(\sigma_1 \le t)\big)$.
We first show that Proposition 2.9 is included.
Proof of Proposition 2.9. The claim follows from Proposition 3.3 with the choice $\kappa_c = p\big(\frac{h}{h-1} - \frac12\big(\delta + \frac dq\big)\big) - 1$, as in the proof of Theorem 2.7. □
Proof of Proposition 3.3. For the proof of Proposition 3.3 we need some of the arguments in the abstract local well-posedness result of [AV22b, Theorem 4.5] (see also [AV22b, Theorem 4.8]). Let $\xi \in W^{1,\infty}(\mathbb R)$ be such that $\xi|_{[0,1]} = 1$, $\xi|_{[2,\infty)} = 0$ and $\xi$ is linear on $[1, 2]$. For $\lambda > 0$ consider the following truncated version of (3.8):
(3.27) $\mathrm du - A(t)u\,\mathrm dt = \xi_\lambda(t, u)\Phi(t, u)\,\mathrm dt + \big(B(t)u + \xi_\lambda(t, u)\Gamma(t, u)\big)\,\mathrm dW_{\ell^2}(t)$, $\quad t \in \mathbb R_+$, $\quad u(0) = u_0$,
where
(3.28) $\xi_\lambda(t, u) := \xi\big(\tfrac1\lambda \|u\|_{\mathcal X(t)}\big)$,
where $\mathcal X$ is as in [AV22b, eq. (4.14)] with $(\rho_j, \beta_j)$ as in Lemma 3.2 and $\varphi_j = \beta_j$. For the choice of the cut-off in (3.28) we also use [AV22b, Remark 4.14] and the fact that the implicit constants in the estimates of Lemma 3.2 are independent of $v, v'$. As noticed in Step 2 of the proof of Proposition 3.1 Part (A), the (p, κ, δ, q)-solution of (2.1) is the $L^p_\kappa$-maximal solution in the terminology of [AV22b, Definition 4.4] with the choice (3.6)-(3.7). Recall that $X^{\mathrm{Tr}}_{\kappa,p}$ has been defined in (3.14).
Now
Steps 1-2 in the proof of [AV22b, Theorem 4.5] show the existence of constants (λ 0 , T 0 , ε 0 ) for which the following assertion holds: For all v 0 ∈ L p F0 (Ω; X Tr κ,p ) such that
E∥u 0 − v 0 ∥ p X Tr κ,p ≤ ε 0
there exists a local (p, κ, δ, q)-solution (v, T 0 ) to (3.27) with initial data v 0 and λ = λ 0 satisfying
(3.29) E∥u − v∥ p C([0,T0];X Tr κ,p ) + E∥u − v∥ p L p (0,T0,wκ;H 2−δ,q ) ≤ C 0 E∥u 0 − v 0 ∥ p X Tr κ,p ,
where (u, T 0 ) is the local (p, κ, δ, q)-solution to (3.27) with λ = λ 0 and initial data u 0 . As in Step 4 of [AV22b, Theorem 3.5], we set
σ 0 := inf t ∈ [0, T 0 ] : ∥u∥ X (t) ≥ λ 0 , and τ 0 := inf t ∈ [0, T 0 ] : ∥v∥ X (t) ≥ λ 0 . (3.30)
Note that a.s. $\sigma_0 > 0$ and $\tau_0 > 0$. Arguing as in [AV22b, Step 4], one can check that $(u|_{[0,\sigma_0)\times\Omega}, \sigma_0)$ (resp. $(v|_{[0,\tau_0)\times\Omega}, \tau_0)$) is a local (p, κ, δ, q)-solution to (2.1) with initial data $u_0$ (resp. $v_0$). By maximality of the (p, κ, δ, q)-solutions $(u, \sigma)$ and $(v, \tau)$, we have $\sigma_0 \in (0, \sigma]$, $\tau_0 \in (0, \tau]$ a.s., and (3.31) holds, i.e. on $[0, \sigma_0)$ and $[0, \tau_0)$ the solutions $u$ and $v$ agree with the corresponding solutions of the truncated problem (3.27). We are ready to prove (3.24). By (3.31), for all $t \in [0, T_0]$,
$$P\Big(\sup_{r\in[0,t]} \|u(r) - v(r)\|_{X^{\mathrm{Tr}}_{\kappa,p}} \ge \gamma,\ \sigma_0 \wedge \tau_0 > t\Big) \le P\Big(\sup_{r\in[0,t]} \|u(r) - v(r)\|_{X^{\mathrm{Tr}}_{\kappa,p}} \ge \gamma\Big) \le \frac{1}{\gamma^p}\, E\|u - v\|^p_{C([0,t];X^{\mathrm{Tr}}_{\kappa,p})} \le \frac{C_0}{\gamma^p}\, E\|u_0 - v_0\|^p_{X^{\mathrm{Tr}}_{\kappa,p}},$$
where in the last inequality we used (3.29) and t ≤ T 0 . The same argument also yields (3.25). Next we prove (3.26). For all t ∈ [0, T 0 ],
P(σ 0 ∧ τ 0 ≤ t) ≤ P ∥u∥ X (t) + ∥v∥ X (t) ≥ λ 0 ≤ P 2∥u∥ X (t) + ∥v − u∥ X (t) ≥ λ 0 ≤ P ∥v − u∥ X (t) ≥ λ 0 2 + P ∥u∥ X (t) ≥ λ 0 4 ≤ 2 p C 0 λ p 0 E∥u 0 − v 0 ∥ p X Tr κ,p + P(σ 1 ≤ t),
where in the last step we used (3.29) and σ 1 := inf t ∈ [0, T 0 ] : ∥u∥ X (t) ≥ λ0 4 . □ Remark 3.4. The proof of Proposition 3.3 also yields the following facts. (a) By (3.29) and (3.31), the estimates (3.24)-(3.26) can be also formulated as L p (Ω)-estimates. For instance, (3.24) holds in the stronger form:
E 1 {σ0∧τ0>t} sup s∈[0,t] ∥u(s) − v(s)∥ p B 2−δ−2 1+κ p q,p ≤ C 0 E∥u 0 − v 0 ∥ p B 2−δ−2 1+κ p q,p , t ∈ [0, T 0 ].
(b) Let (u, v) be as in (3.29), i.e. the (p, κ, δ, q)-solutions to (3.27) with data (u 0 , v 0 ), respectively.
By Steps 1-2 of [AV22b, Theorem 4.5] and maximal L p -regularity estimates (cf. [AV23c]), we have the following stronger version of (3.29):
E∥u − v∥ H θ,p (0,T0,wκ;H 2−δ−2θ,q ) ≲ θ E∥u 0 − v 0 ∥ p B 2−δ−2 1+κ p q,p , for all θ ∈ [0, 1 2 ).
Whence (3.25) also holds with L p (0, t, w κ ; H 2−δ,q ) replaced by H θ,p (0, t, w κ ; H 2−2θ−δ,q ). (c) The proof of Proposition 3.3 shows that (3.24)-(3.26) holds also for quasilinear SPDEs as considered in [AV22b] but taking F L = G L ≡ 0. The above proofs need the following modifications: (3.28) needs to be replaced with ξ λ (t, u) = ξ( 1 λ [∥u∥ X (t) + sup s∈[0,t] ∥u(s)∥ X Tr κ,p ]) for t ∈ [0, T 0 ], and (3.30) needs to be replaced by
σ 0 = inf t ∈ [0, T 0 ] : ∥u∥ X (t) + sup s∈[0,t] ∥u(s) − u 0 ∥ X Tr κ,p ≥ λ 0 , τ 0 = inf t ∈ [0, T 0 ] : ∥v∥ X (t) + sup s∈[0,t] ∥v(s) − u 0 ∥ X Tr κ,p ≥ λ 0 .
The same assertion as in Proposition 3.3 holds in the quasilinear setting, but the set {τ 0 = 0} might have positive measure as we are only imposing smallness on E∥u 0 − v 0 ∥ p X Tr κ,p .
Blow-up criteria.
Here we prove Theorem 2.10. The argument follows the one in [AV22c, Lemma 6.10]. However, Theorem 2.10 cannot be deduced from that result, since in the present situation we are also considering a parameter $h_0$ that is (possibly) different from $h$. Thus we provide a proof below. For the reader's convenience, we first give a (rough) idea of the argument, which is based on the fact that solutions to (2.1) instantaneously regularize, cf. (2.13)-(2.14). Indeed, for any $s > 0$, $u(s)$ is smooth and we may 'restart' the system of SPDEs (2.1), considering the solution of this problem on $[s, \infty)$ with data $u(s)$, which will be denoted by $v$. Note that, a priori, we do not know how $u(t)|_{t>s}$ and $v$ are related. Since $u(s)$ is smooth, the restarted problem (2.1) can be considered in a different 'setting', i.e. replacing the parameters $(p, \kappa, q, \delta, h)$ by (possibly) different ones $(p_0, \kappa_0, q_0, \delta_0, h_0)$. With the latter choice, the results in [AV22c, Section 4] show that $v$ satisfies a blow-up criterion in the $(p_0, \kappa_0, q_0, \delta_0, h_0)$-setting which is the analogue of the one claimed for $u$. The conclusion follows by showing that $u = v$ on $[s, \infty)$, so that the blow-up criterion for $v$ 'transfers' to $u$.
Proof of Theorem 2.10.
(1): We begin by collecting some useful facts. Fix 0 < s < T < ∞ and let (u, σ) be the (p, κ c , δ, q)-solution to (2.1) provided by Theorem 2.7. By [AV22c, Theorem 4.10(3)], (3.6) and (3.14) we have
(3.32) $P\Big(\sigma < \infty,\ \sup_{t\in[0,\sigma)} \|u(t)\|_{B^{\beta}_{q,p}} + \|u\|_{L^p(0,\sigma;H^{\gamma,q})} < \infty\Big) = 0$, where $\beta = \frac dq - \frac{2}{h-1}$, $\ \gamma = \frac dq + \frac 2p - \frac{2}{h-1}$, and $\kappa_c = p\big(\frac{h}{h-1} - \frac12\big(\delta + \frac dq\big)\big) - 1$.
Moreover, let us recall that, by (2.12) for $\theta_c := \frac{\kappa_c}{p} < \frac12 - \frac1p$ and the weighted Sobolev embeddings (see e.g. [AV22b, Proposition 2.7]), we have
(3.33) $u \in H^{\theta_c,p}_{\mathrm{loc}}([0, \sigma), w_{\kappa_c}; H^{2-\delta-2\theta_c,q}) \hookrightarrow L^p_{\mathrm{loc}}([0, \sigma); H^{\gamma,q})$ a.s.
Let $(q_0, p_0, \delta_0, h_0, \beta_0)$ be as in Theorem 2.10 and $q_1 > q_0$. For $i \in \{0, 1\}$ set $\kappa_{c,i} := p_0\big(\frac{h_0}{h_0-1} - \frac12\big(\delta_0 + \frac{d}{q_i}\big)\big) - 1$, and put $\kappa_0 = \kappa_{c,0}$. Fix $\kappa \in (\kappa_{c,0}, \kappa_{c,1})$. Set $\beta := 2 - \delta_0 - 2\frac{1+\kappa}{p_0}$ and note that $\beta < \beta_0$. By (2.14) with $\theta_1 = 0$, $\theta = \theta_2 \in (\beta, 1)$ and the progressive measurability of $u$, we have $1_{\{\sigma>s\}} u(s) \in L^0_{\mathcal F_s}(\Omega; C^\theta)$. Combining this with $C^\theta = B^\theta_{\infty,\infty} \hookrightarrow B^\beta_{q_1,p_0}$ since $\theta > \beta$, we get
1 {σ>s} u(s) ∈ L 0 Fs (Ω; B β q1,p0 ), where V := {σ > s}.
Up to a shift argument, Proposition 3.1 ensures the existence of a (p 0 , κ 0 , δ 0 ,
q 1 )-solution (v, τ ) on [s, ∞) to (3.34) dv i − div(a i · ∇v i ) dt = div(F i (·, v)) + f i (·, v) dt + n≥1 (b n,i · ∇)v i + g n,i (·, v) dw n t , on T d , v i (s) = 1 {σ>s} u i (s), on T d , where v = (v i ) ℓ i=1 .
Moreover, the solution (v, τ ) to (3.34) instantaneously regularizes in time and space:
(3.35)
v ∈ H θ,r loc (s, τ ; H 1−2θ,ζ ) a.s. for all θ ∈ [0, 1/2), r, ζ ∈ (2, ∞). The notion of (p 0 , κ 0 , δ 0 , q 1 )-solutions to (3.34) follows as in Definition 2.3.
By
Step 1 of Proposition 3.1 and the fact that κ < κ c,1 we know that B β q1,p0 is not critical for (3.34). Thus, applying [AV22c, Theorem 4.10(2)] to (3.34),
P τ < T, sup t∈[s,τ ) ∥v(t)∥ B β q 1 ,p 0 < ∞ = 0.
Since β < β 0 , we have B β0 q1,∞ → B β q1,p0 . Hence the previous implies
(3.36) P τ < T, sup t∈[s,τ ) ∥v(t)∥ B β 0 q 1 ,∞ < ∞ = 0.
Recall that V = {σ > s}. Since τ > s a.s., (3.36) shows that (1) follows as soon as we have shown
(3.37) $\tau = \sigma$ a.s. on $V$ and $u = v$ a.e. on $[s, \sigma) \times V$.
The remaining part of this step is devoted to the proof of (3.37). Let us begin by noticing that, by $h_0 \ge h$ and (2.13), $(u|_{[s,\sigma)\times V}, 1_V \sigma + 1_{\Omega\setminus V}\, s)$ is a local $(p_0, \kappa_0, \delta_0, q_1)$-solution to (3.34). The maximality of $(v, \tau)$ yields (see the last item in Definition 2.3)
(3.38) $\sigma \le \tau$ a.s. on $V$ and $u = v$ a.e. on $[s, \sigma) \times V$.
To conclude it is enough to show that $P(V \cap \{\sigma < \tau\}) = 0$. To this end we employ the blow-up criterion in (3.32). Indeed, by (3.35) and (3.38), we have $u = v \in L^p_{\mathrm{loc}}((s, \sigma]; H^{\gamma,q})$ a.s. on $V \cap \{\sigma < \tau\}$. Combining this with (3.33), we find $u \in L^p(0, \sigma; H^{\gamma,q})$ a.s. on $V \cap \{\sigma < \tau\}$. Similarly, one can check that $\sup_{t\in[0,\sigma)} \|u(t)\|_{B^\beta_{q,p}} < \infty$ a.s. on $V \cap \{\sigma < \tau\}$, and therefore
$$P(V \cap \{\sigma < \tau\}) = P\Big(V \cap \{\sigma < \tau\} \cap \Big\{\sup_{t\in[0,\sigma)} \|u(t)\|_{B^\beta_{q,p}} + \|u\|_{L^p(0,\sigma;H^{\gamma,q})} < \infty\Big\}\Big) \le P\Big(\sigma < \infty,\ \sup_{t\in[0,\sigma)} \|u(t)\|_{B^\beta_{q,p}} + \|u\|_{L^p(0,\sigma;H^{\gamma,q})} < \infty\Big) \overset{(3.32)}{=} 0.$$
(2): The proof is similar to the one of (1). Indeed, let us consider the (p 0 , κ c,0 , δ 0 , q 0 )-solution to (3.34) where κ c,0 = p( h0 h0−1 − 1 2 (δ 0 + d q0 )) − 1. Here the subscript 'c' stresses that the corresponding space for the initial data B β0 q0,p0 is critical for (3.34) (cf.
Step 1 of Proposition 3.1 and note that the spatial integrability is q 0 ). Compared to Step 1, the only difference is that instead of (3.36) we use [AV22c, Theorem 4.10(3)] (which holds also in critical situations) and it yields
P τ < T, sup t∈[s,τ ) ∥v(t)∥ B β 0 q 0 ,p 0 + ∥v∥ L p 0 (s,τ ;H γ 0 ,q 0 ) < ∞ = 0,
where β 0 , γ 0 are as in the statement of Theorem 2.10. □ Proof of Corollary 2.11. To prove (1) we use Theorem 2.10(1) with an appropriate choice of (q 0 , q 1 ). Recall that h 0 ≥ 1 + 4 d , ζ 0 = d 2 (h 0 − 1) and let ζ 1 > ζ 0 . Choose δ 0 > 1 small enough so that Assumption 2.1(2) holds. Fix q 1 ≤ ζ 1 such that
ζ 0 < q 1 < d(h 0 − 1) h 0 + 1 − δ 0 (h 0 − 1) .
The above choice is possible since δ 0 > 1. Since δ 0 < 2, we may fix p 0 ∈ (q 1 , ∞) such that
1 p 0 + 1 2 δ 0 + d ζ 0 ≤ h 0 h 0 − 1 .
One can check the condition in Theorem 2.7 with (p, q, δ, h) replaced by (p 0 , q 0 , δ 0 , h 0 ). By ζ 1 ≥ q 1 and elementary embeddings for Besov spaces, L ζ1 → L q1 → B 0 q1,∞ . Hence
s < σ < T, sup t∈[s,σ) ∥u(t)∥ L ζ 1 < ∞ ⊆ s < σ < T, sup t∈[s,σ) ∥u(t)∥ B 0 q 1 ,∞ < ∞
Thus (1) follows from Theorem 2.10(1) with $q_0 = \zeta_0$, noticing that $\beta_0 = \frac{d}{\zeta_0} - \frac{2}{h_0-1} = 0$. To prove (2) we use Theorem 2.10(2). Let $\delta_0 > 1$ be as above. By assumption $q_0 < \frac{d(h_0-1)}{h_0+1-\delta_0(h_0-1)}$ and therefore
d q 0 > 2 h 0 − 1 − (δ 0 − 1).
Hence to ensure the existence of p 0 such that 2 p0 + d q0 = 2 h0−1 we need p 0 > 2 δ0−1 as required in (2). Since q 0 > ζ 0 ,
L ζ0 (i) → B β0 q0,q0 (ii) → B β0 q0,p0
where in (i) we used the Sobolev embeddings for Besov spaces (recall β 0 = d q0 − 2 h0−1 ) and in (ii) the fact that p 0 ≥ q 0 by assumption. Hence
s < σ < T, sup t∈[s,σ) ∥u(t)∥ L ζ 0 + ∥u∥ L p 0 (s,σ;L q 0 ) < ∞ ⊆ s < σ < T, sup t∈[s,σ) ∥u(t)∥ B β 0 q 0 ,p 0 + ∥u∥ L p 0 (s,σ;L q 0 ) < ∞ .
Thus (2) follows from Theorem 2.10(2) by noticing that
γ 0 = 2 p0 + d q0 − 2 h0−1 = 0. □
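For the reader's convenience, a sketch of the elementary exponent manipulations used in the proof of (2) above (the denominator $h_0+1-\delta_0(h_0-1)$ is positive for $\delta_0$ close to $1$):
\begin{align*}
q_0 < \frac{d(h_0-1)}{h_0+1-\delta_0(h_0-1)}
&\iff \frac{d}{q_0} > \frac{h_0+1}{h_0-1} - \delta_0 = \frac{2}{h_0-1} - (\delta_0 - 1),\\
\frac{2}{p_0} + \frac{d}{q_0} = \frac{2}{h_0-1}
&\implies \frac{2}{p_0} = \frac{2}{h_0-1} - \frac{d}{q_0} < \delta_0 - 1 \implies p_0 > \frac{2}{\delta_0-1}.
\end{align*}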
Finally we prove a compatibility result for the solutions obtained in different settings.
Proposition 3.5 (Compatibility of different settings). If Proposition 3.1 is applicable for two sets of exponents (p 1 , κ 1 , δ 1 , q 1 , h 1 ) and (p 2 , κ 2 , δ 2 , q 2 , h 2 ), then the corresponding solutions (u 1 , σ 1 ) and (u 2 , σ 2 ) coincide, i.e. σ 1 = σ 2 a.s. and u 1 = u 2 a.e. on [0, σ 1 ) × Ω.
As Theorem 2.7 is a special case of Proposition 3.1, the above compatibility also holds for solutions provided by Theorem 2.7. To explain the difficulty in proving the above result, let us consider two settings where Theorem 2.7 applies with p 1 ̸ = p 2 and (δ 1 , q 1 , h 1 ) = (δ 2 , q 2 , h 2 ). Note that the corresponding critical weights κ c,i :
= p i h h−1 − 1 2 (δ + d q ) − 1 satisfy 1+κ1 p1 = 1+κ2 p2 .
In particular, the L pi (w κc,i )-spaces on RHS(2.2) of Definition 2.3 cannot be embedded one in the other (cf. [AV22c, Proposition 2.1(3) and Remark 2.2]). Hence, a priori it is unclear how to compare the solutions, and use the uniqueness in one of the two settings. To solve this, we use an approximation argument, local continuity, and regularization results.
∩ i∈{1,2} ∥u 0 ∥ B 2−δ i −2 1+κ i p i q i ,p i ≤ n ∈ F 0 ,
it is enough to consider the case
u 0 ∈ ∩ i∈{1,2} L pi (Ω; B 2−δi−2 1+κ i p i qi,pi
).
In the following we let (u
(n) 0 ) n≥1 ⊆ L ∞ F0 (Ω; C 1 ) be such that u (n) 0 → u 0 in L pi (Ω; B 2−δi−2 1+κ i p i qi,pi ) for i ∈ {1, 2}. Fix some r > max{p 1 , p 2 , q 1 , q 2 , 2d + 2, d(h 1 − 1), d(h 2 − 1)} and set h = max{h 1 , h 2 }.
Then one can check that Assumption 2.1(r, r, h, 1) holds, and (3.2) holds with (p, κ, q, δ) replaced by (r, 0, r, 1). Therefore, by Proposition 3.1, for each n ≥ 1 there exists a (unique) (r, 0, 1, r)solution (u (n) , σ (n) ) to (2.1) such that a.s. σ (n) > 0 and
(3.39) u (n) ∈ H θ,r loc ([0, σ (n) ); H 1−2θ,r ) ∩ C([0, σ (n) ); B 1− 2 r
r,r), $\theta \in [0, 1/2)$. By Sobolev embedding (since $1 - \frac 2r > \frac dr$, as $r > 2d + 2$) we find that $u^{(n)} \in C([0, \sigma^{(n)}); C(\mathbb T^d; \mathbb R^\ell))$ a.s. Now let us fix $i \in \{1, 2\}$ and $n \ge 1$. Consider the (
$p_i, \kappa_i, \delta_i, q_i$)-solution $(u^{(n)}_i, \sigma^{(n)}_i)$ provided by Proposition 3.1 with the parameters $(p_i, \kappa_i, \delta_i, q_i)$ and initial data $u^{(n)}_0$. By Definition 2.3, (3.39), and the special choice of $r$, one obtains that $(u^{(n)}, \sigma^{(n)})$ is a local $(p_i, \kappa_i, \delta_i, q_i)$-solution to (2.1). Hence $\sigma^{(n)} \le \sigma^{(n)}_i$ and $u^{(n)} = u^{(n)}_i$ a.e. on $[0, \sigma^{(n)}) \times \Omega$ by maximality of $(u^{(n)}_i, \sigma^{(n)}_i)$. Now reasoning as in the proof of Theorem 2.10, by instantaneous regularization of $(p_i, \kappa_i, \delta_i, q_i)$-solutions (i.e. (3.4)-(3.5)), we also obtain
(3.40) $\sigma^{(n)}_i = \sigma^{(n)}$ a.s., and $u^{(n)}_i = u^{(n)}$ a.e. on $[0, \sigma^{(n)}) \times \Omega$.
Thus in the following we write $(u^{(n)}, \sigma^{(n)})$ instead of $(u^{(n)}_i, \sigma^{(n)}_i)$.
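Since (3.2) enters only through the inequalities recalled in Step 1 of the proof of Proposition 3.1 (the case $q \ge \frac{d(h-1)}{\delta}$), we sketch the exponent check used at the beginning of this step in the $(p, \kappa, q, \delta) = (r, 0, r, 1)$-setting (a sketch, under that recalled form of (3.2)):
\begin{align*}
q = r &> \max\{d(h_1-1),\, d(h_2-1)\} = d(h-1) = \frac{d(h-1)}{\delta}\quad (\delta = 1),\\
\frac{1+\kappa}{p} = \frac1r &< \frac12 < \frac{h}{h-1}\Big(1 - \frac12\Big) = \frac{h}{2(h-1)},
\end{align*}
so the sub-criticality condition holds with strict inequality, hence non-critically.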
Step 2: For all i ∈ {1, 2}, up to extracting a (not relabeled) subsequence of ((u (n) , σ (n) )) n≥1 , there exists a stopping time τ i ∈ (0, σ i ) such that a.s. τ i < lim inf n→∞ σ (n) and
(3.41) u i = lim n→∞ u (n) a.e. on [0, τ i ) × Ω.
Note that the RHS(3.41) makes sense since τ i < lim inf n→∞ σ (n) . In this step we fix i ∈ {1, 2}. Moreover, we use the notation introduced in the proof of Proposition 3.3 with the subscript i to keep track of the setting we are considering. For instance X i denotes the space introduced in [AV22b, eq. (4.14)] in the (p i , κ i , δ i , q i )-setting. Here we prove the claim with τ i given by (cf. the definition of σ 1 at the end of the proof of Proposition 3.3) (3.40). Finally, Step 1 and (3.42) give (3.41).
τ i := inf t ∈ [0, T 0,i ] : ∥u i ∥ Xi(t) ≥ λ 0,i 4 , where u i is the (p i , κ i , δ i , q i )-u (n) i → u i a.s. in X i (T 0,i ) ∩ C([0, T 0,i ]; B 2−δi−2 1+κ i p i qi,piτ i < lim inf n→∞ σ (n) 0,i . Recall that (see (3.31)) σ (n) 0,i ≤ σ (n) i a.s. and u (n) i = u (n) i a.e. on [0, σ (n) 0,i ) × Ω, σ 0,i ≤ σ i a.s. and u i = u i a.e. on [0, σ 0,i ) × Ω. Hence τ i < lim inf n→∞ σ (n) i = lim inf n→∞ σ (n) by
Step 3: Conclusion. By Steps 1 and 2, we deduce that u 1 (t) = u 2 (t) a.s. for all t ∈ [0, τ 1 ∧ τ 2 ).
Set τ := τ 1 ∧τ 2 ∈ (0, σ 1 ∧σ 2 ] and let s > 0. Using the instantaneous regularization (i.e. (3.4)-(3.5)) we have 1 {τ >s} u 1 (s) = 1 {τ >s} u 2 (s) ∈ C θ for all θ ∈ (0, 1). Hence, as in Step 1, the conclusion follows by repeating the argument used in Theorem 2.10. □ Remark 3.6. (a) (A proof involving X -space). Proposition 3.5 can be also proved by using embedding results for the X -spaces (cf. the proof of [AV22c, Proposition 6.8] where κ i = 0 for some i ∈ {1, 2}).
Besides being technically more difficult, this approach also requires additional assumptions on the parameters (p i , κ i , δ i , q i ). These restrictions can be removed by tedious iteration arguments. Hence we prefer the above more direct argument based on local continuity. (b) The proof of Proposition 3.5 extends verbatim to other situations such as the Navier-Stokes equations with transport noise as analyzed in [AV21].
3.3. Positivity. Next we will prove the positivity of the solution stated in Theorem 2.13. For the proof we need the well-posedness and regularity results of Theorem 2.7, Proposition 2.9, the blow-up criteria of Theorem 2.10, and a maximum principle for linear scalar equations, which is a variation of [Kry13] (see Lemma A.1 in the appendix).
In case of smooth initial data the proof below can be shortened considerably. In particular, the approximation argument in Step 1 in the proof below can be omitted. Note that Step 1 relies on the rather technical local continuity result of Proposition 2.9.
Proof of Theorem 2.13. Below we write $Y := X^{\mathrm{Tr}}_{\kappa_c,p} = B^{\frac dq - \frac{2}{h-1}}_{q,p}(\mathbb T^d; \mathbb R^\ell)$ for convenience.
Step 0: Reduction to the case u 0 ∈ L p (Ω; Y ). To prove the claim of this step, assume that the claim of Theorem 2.13 holds for L p (Ω)-integrable data. For any n ≥ 1, set V n := {∥u 0 ∥ Y ≤ n} and let (u (n) , σ (n) ) be the (p, κ c , δ, q)-solution to (2.1) with initial data 1 Vn u 0 . Thus, by assumption, Theorem 2.13 holds for (u (n) , σ (n) ) and therefore
(3.43) u (n) (t, x) ≥ 0 a.s. for all t ∈ [0, σ (n) ) and x ∈ T d .
By localization (i.e. [AV22b, Theorem 4.7(d)]), we have σ = σ (n) a.s. on V n , and u = u (n) a.e. on [0, σ) × V n .
The previous identity, the arbitrariness of n ≥ 1 and (3.43) yield the claim of this step.
Step 1: Reduction to the case $u_0 \in L^0(\Omega; C^\alpha(\mathbb T^d; \mathbb R^\ell))$ where $\alpha \in (0, 1)$. Fix $\alpha \in (0, 1)$. By Step 0 we can assume that $u_0 \in L^p(\Omega; Y)$. In the current step we assume that Theorem 2.13 holds for initial data from $L^p(\Omega; C^\alpha(\mathbb T^d; \mathbb R^\ell))$. Note that from (2.14), we know that $u$ is smooth on $(0, \sigma) \times \mathbb T^d$. This will be used several times below.
Set
A + := {φ ∈ D(T d ; R ℓ ) : φ ≥ 0 and ∥φ∥ Y * ≤ 1}.
It is important to note that A + is separable due to the separability of D(T d ; R ℓ ), in order to have measurable sets below. Recall that u 0 ≥ 0 by assumption. Let (C 0 , T 0 , ε 0 , σ 0 , σ 1 ) be as in the statement of Proposition 2.9. Choose a sequence (u
(n) 0 ) n≥1 in L p (Ω; C α (T d ; R ℓ )) such that u (n) 0 ≥ 0 a.s. on T d and u (n) 0 → u 0 in L p (Ω; Y ).
Without loss of generality we may assume that
E∥u 0 − u (n) 0 ∥ p Y ≤ ε 0 for all n ≥ 1. Let (u (n)
, σ (n) ) be the (p, κ c , δ, q)-solution to (2.1) with initial data u (n) 0 . The reductive assumption ensures that (3.44) u (n) (t, x) ≥ 0 a.s. for all t ∈ (0, σ (n) ) and x ∈ T d .
Note that, for all t ∈ (0, T 0 ], n ≥ 1 and γ > 0,
P inf r∈[0,t] T d u(r) · φ dx ≤ −γ for some φ ∈ A + , σ 0 > t ≤ P inf r∈[0,t] T d u(r) · φ dx ≤ −γ for some φ ∈ A + , σ 0 ∧ τ (n) 0 > t + P(σ 0 ∧ τ (n) 0 ≤ t),
where τ (n) 0 is as in Theorem 2.7 with v 0 replaced by u (n) 0 . Note that, by combining (2.11), (3.44) and
∥φ∥ Y * ≤ 1 for φ ∈ A + , inf r∈[0,t] T d u(r) · φ dx ≤ −γ, σ 0 ∧ τ (n) 0 ≥ t ∩ sup r∈[0,t] ∥u(r) − u (n) (r)∥ Y < γ = ∅. Hence P inf r∈[0,t] T d u(r) · φ dx ≤ −γ for some φ ∈ A + , σ 0 > t ≤ P sup r∈[0,t] ∥u(r) − u (n) (r)∥ Y ≥ γ, σ 0 ∧ τ (n) 0 > t + P(σ 0 ∧ τ (n) 0 ≤ t) ≤ C 0 (1 + γ −p )E∥u 0 − u (n) 0 ∥ p Y + C 0 P(σ 1 ≤ t),
where in the last estimate we used (2.15) and (2.17). Letting n → ∞ and γ = k −1 ↓ 0, the above estimate yields
(3.45) P(U t ) ≤ C 0 P(σ 1 ≤ t), where U t := inf r∈[0,t] T d u(r) · φ dx < 0 for some φ ∈ A + , σ 0 > t . Note that {σ 0 > t} = U t ∪ V t where V t := inf r∈[0,t] T d u(r) · φ dx ≥ 0 for all φ ∈ A + , σ 0 > t = u(r, x) ≥ 0 for all x ∈ T d , r ∈ (0, t] ∩ {σ 0 > t},
where we used the smoothness of u. By definition V s ⊇ V t as s ≤ t and V t ∈ F t for all t ∈ [0, T 0 ]. The estimate (3.45) gives (3.46) P(V t ) = P(σ 0 > t) − P(U t ) ≥ P(σ 0 > t) − C 0 P(σ 1 ≤ t) → 1 as t ↓ 0, where we used σ 0 > 0 and σ 1 > 0 a.s. Fix t ∈ [0, T 0 ] and consider (v, τ ) the (p, κ c , δ, q)-solution to (2.1) on [t, ∞) with initial data v(t) = v t := 1 Vt u(t). By definition of V t , we have a.s. v t ≥ 0 on T d , and by the smoothness of (u, σ) (see (2.14)), we have v t ∈ L 0 Ft (Ω; C α (T d ; R ℓ )). In particular, by the reductive assumption (applied at initial time t instead of 0), we have a.s. v(r, x) ≥ 0 for all r ∈ [t, τ ) and x ∈ T d .
As before, by localization (i.e. [AV22b, Theorem 4.5(4)]), τ = σ a.s. on V t and v = u a.e. on [t, τ ) × V t . It follows that
P u(r, x) ≥ 0 ∀r ∈ [t, σ) and x ∈ T d = lim s↓0 P u(r, x) ≥ 0 ∀r ∈ [t, σ) and x ∈ T d ∩ V s = lim s↓0 P v(r, x) ≥ 0 ∀r ∈ [t, τ ) and x ∈ T d ∩ V s = lim s↓0 P(V s ) = 1,
by (3.46). Therefore, letting t ↓ 0 we obtain that a.s. u(r, x) ≥ 0 for all r ∈ (0, σ) and x ∈ T d .
Step 2: Reduction to the case u 0 ∈ L ∞ (Ω; C α (T d ; R ℓ )) where α ∈ (0, 1). Due to Step 1, the claim of Step 2 follows by localization as in Step 0.
Step 3: Reduction to positivity of a new function u + . By the previous steps we may suppose that u 0 ∈ L ∞ (Ω; C α (T d ; R ℓ )) for some α > 0. For all (t, ω, x) ∈ R + × Ω × T d , y ∈ R ℓ and i ∈ {1, . . . , ℓ}, let
f + i (t, ω, x, y) := f i (t, ω, x, (y ∨ 0)), F + i (t, ω, x, y) := F i (t, ω, x, (y ∨ 0)), g + n,i (t,
ω, x, y) := g n,i (t, ω, x, (y ∨ 0)). We denote by (2.1) + the system of SPDEs (2.1) with (F, f, g) replaced by (F + , f + , g + ). Since the assignment y → y ∨ 0 is globally Lipschitz, (F + , f + , g + ) satisfies Assumption 2.1(4) with the same parameters. Thus, by Theorem 2.7 there exists a (p, κ c , δ, q)-solution (u + , σ + ) to (2.1) + . Moreover, Theorem 2.10(1) implies (with T ↑ ∞)
(3.47) P s < σ + < ∞, sup t∈[s,σ + ) ∥u + (t)∥ L ∞ < ∞ = 0, for all s > 0.
We claim that (3.48) u + ≥ 0 a.e. on [0, σ + ) × Ω × T d .
Next we show that (3.48) yields the claim of Theorem 2.13. More precisely we prove that, if (3.48) holds, then (3.49) σ + = σ a.s. and u + = u a.e. on [0, σ) × Ω × T d .
Suppose that (3.48) holds. Thus, by Definition 2.3, (u + , σ + ) is a local (p, κ c , δ, q)-solution to the original problem (2.1). Since (u, σ) is a (p, κ c , δ, q)-solution to (2.1), we have σ + ≤ σ a.s. and u + = u a.e. on [0, σ + ) × Ω × T d .
Since σ + > 0 a.s., to prove (3.49) it remains to show that P(s < σ + < σ) = 0 for all s > 0. To this end, fix s ∈ (0, ∞). Note that u + = u a.s. on [s, σ + ∧ σ), and u ∈ C([s, σ + ] × T d ; R ℓ ) a.s. on {s < σ + < σ} by (2.14). Thus
P(s < σ + < σ) = P {s < σ + < σ} ∩ sup t∈[0,σ + ) ∥u + (t)∥ L ∞ < ∞ ≤ P s < σ + < ∞, sup t∈[s,σ + ) ∥u + (t)∥ L ∞ < ∞ (3.47) = 0.
This proves (3.49) in case (3.48) holds.
Step 4: Proof of (3.48). Fix i ∈ {1, . . . , ℓ}. As usual, for all j ≥ 1, we set
$$\sigma^+_j := \inf\big\{t \in [0, \sigma^+) : \|u^+(t) - u_0\|_{L^\infty} + \|u^+\|_{L^2(0,t;H^1)} \ge j\big\} \wedge j, \qquad\text{where } \inf\emptyset := \sigma^+.$$
By (2.13)-(2.14) (applied with $(F, f, g)$ replaced by $(F^+, f^+, g^+)$) we have $\lim_{j\to\infty} \sigma^+_j = \sigma^+$. Therefore, to show (3.48) it suffices to prove $u^+ \ge 0$ a.e. on $[0, \sigma^+_j] \times \Omega \times \mathbb T^d$. In the following we fix $j \ge 1$ such that $\|u_0\|_{L^\infty(\Omega;L^\infty)} < j$, and we drop it from the notation. Moreover, we set $\tau^+ := \sigma^+_j$. Note that
(3.50) $\sup_{t\in[0,\tau^+)} \|u^+(t)\|_{L^\infty} \le 2j$ a.s. and $\|u^+\|_{L^2(0,\tau^+;H^1)} \le j$ a.s.
Next we turn the nonlinearities into globally Lipschitz function by a cut-off argument. Let ζ : R ℓ → R ℓ be a smooth map such that ζ| {|y|≤2j} = 1 and ζ| {|y|≥2j+1} = 0. Set f i (·, y) := f + i (·, ζ(y)), F i (·, y) := F + i (·, ζ(y)), g n,i (·, y) := g + n,i (·, ζ(y)).
Then f i , F i , g n,i are globally Lipschitz w.r.t. y ∈ R ℓ uniformly in (t, ω, x) ∈ [0, ∞) × Ω × T d .
For a vector y ∈ R ℓ we set y i = (y 1 , . . . , y i−1 , 0, y i+1 , . . . , y ℓ ). Note that, a.e. on [0, τ + ) × Ω × T d ,
f + i (·, u) = f + i (·, u) − f + i (·, u i ) + f + i (·, u i ) (3.50) = f i (·, u) − f i (·, u i ) + f i (·, u i ).
Below we will exploit that f i (·, u i ) ≥ 0 by (2.19). Similarly, by (2.20)-(2.21),
div(F + i (·, u)) = div[F + i (·, u) − F + i (·, u i )], g +
n,i (·, u) = g n,i (·, u) − g n,i (·, u i ). Recall that Lipschitz functions are weakly differentiable. Hence, for a Lipschitz function R writing
R(u)−R(v) = 1 0 d ds [R(u+s(v −u))] ds = 1 0 R ′ (u+s(v −u)) ds (v −u), one can check that there exists bounded P ⊗ B(T d )-measurable maps r fi , r Fi , r gi,n (depending on u on [0, τ + ) × Ω × T d ) such that (3.51) F + i (·, u) − F + i (·, u i ) = r Fi u i , f i (·, u) − f i (·, u i ) = r fi u i , g n,i (·, u) − g n,i (·, u i ) = r gi,n u i a.e. on [0, τ + ) × Ω × T d .
Now consider the following linearization of (2.1):
(3.52)
dv i − div(a i · ∇v i ) dt = 1 [0,τ + ) div(r Fi v i ) + r fi,n v i + f i (·, u i ) dt + n≥1 (b n,i · ∇)v i + 1 [0,τ + ) r gi,n v i dw n t , on T d , v i (0) = u i,0 , on T d .
Let $v_i \in L^2(\Omega; C([0, j]; L^2)) \cap L^2((0, j) \times \Omega; H^1)$ be the global (2, 0, 1, 2)-solution to the linear problem (3.52) (well-posedness follows from [LR15, Chapter 4]). By (3.51), $u^+_i$ is a solution to the problem (3.52) on $[0, \tau^+)$. Therefore, by uniqueness, $u^+_i = v_i$ on $[0, \tau^+)$. Thus it remains to show $v_i \ge 0$ on $[0, j]$. By (2.19), the inhomogeneity satisfies $1_{[0,\tau^+)} f_i(\cdot, u^i) \ge 0$ a.e., and the coefficients of the linear parts are bounded. Therefore, the conditions of the maximum principle of Lemma A.1 are fulfilled, and thus a.e. on $[0, j] \times \Omega$, $v_i \ge 0$ on $\mathbb T^d$. Hence, a.e. on $[0, \tau^+] \times \Omega$, we have $u^+_i \ge 0$ on $\mathbb T^d$ as desired. □
Higher order regularity
In this section we briefly explain higher order regularity of the solution to (2.1) provided by Theorem 2.7.
The next assumption roughly says that F, f and (g n ) are C ⌈α+1⌉ in the y-variable, where α > 0 is some fixed number.
Assumption 4.1. Let α > 0, F, f and g n be as in Assumption 2.1(4). We assume that F, f and g n are x-independent, C ⌈α+1⌉ in y and, for all N ≥ 1 there is a C N > 0 such that a.s. ⌈α+1⌉ j=1 |∂ j y F i (t, y)| + |∂ j y f i (t, y)| + ∥(∂ j y g n,i (t, y)) n≥1 ∥ ℓ 2 ≤ C N , |y| ≤ N, i ∈ {1, . . . , ℓ}, t ≥ 0.
Theorem 4.2 (Higher order regularity). Let the assumptions of Theorem 2.7 be satisfied, where (η, ρ) are such that Assumption 2.1(2) holds, i.e. α > max{d/ρ, δ − 1}, ρ ∈ [2, ∞), and there exists an N such that
∥a j,k i (t, ·)∥ H α,ρ (T d ) + ∥(b j n,i (t, ·)) n≥1 ∥ H α,ρ (T d ;ℓ 2 ) ≤ N, t ≥ 0, i ∈ {1, .
. . , ℓ}, a.s. Furthermore, suppose that Assumption 4.1 holds. Let (u, σ) be the (p, κ c , δ, q)-solution to (2.1) provided by Theorem 2.7. Then a.s.
u ∈ H θ,r loc (0, σ; H 1+α−2θ,ρ (T d ; R ℓ )) for all θ ∈ [0, 1/2), r ∈ (2, ∞), (4.1) u ∈ C θ1,θ2+α− d ρ loc ((0, σ) × T d ; R ℓ ) for all θ 1 ∈ [0, 1/2), θ 2 ∈ (0, 1). (4.2)
From the above theorem one can see how the regularity of order η of the coefficients appears in (4.1) and (4.2). In particular, (4.1) with θ = 0 shows that the regularity of u is one order higher than the regularity of (a, b, h). In the above, we can also allow x-dependency of the nonlinearities F i , f i and g n,i under suitable smoothness assumptions on the spatial variable. (T d ; R ℓ )) for some fixed r ∈ (2, ∞), then one can check from the proofs that the regularity result (4.1) (for the fixed r) holds locally on [0, σ) instead of (0, σ). However, this will not be used in the sequel.
To prove the result one can argue in the same way as in [AV21, Theorem 2.7]. Similar as in the proof of (3.4), the ingredients in the proof are stochastic maximal L p -regularity (see [AV23c]) and mapping properties for the nonlinearities as we have encountered in the proof of Proposition 3.1. Since the proofs go through almost verbatim, details are left to the reader.
Existence and uniqueness for large times in presence of small data
In this section we prove that the solution of reaction-diffusion equations provided by Theorem 2.7 exists on large time intervals whenever the initial data is sufficiently small.
Theorem 5.1 (Existence and uniqueness for large times in presence of small data). Suppose that Assumptions 2.1(p, q, h, δ) and 2.4(p, q, h, δ) hold and set κ := κ c := p( h h−1 − 1 2 (δ + d q ))−1. Assume that there are M 1 , M 2 > 0 such that a.s. for all t ≥ 0 and y ∈ R ℓ ,
(5.1) |f (t, x, y)| ≤ M 1 + M 2 (|y| + |y| h ),
|F (t, x, y)| + ∥(g n (t, x, y)) n≥1 ∥ ℓ 2 ≤ M 1 + M 2 (|y| + |y| h+1 2 ).
Fix u 0 ∈ L p F0 (Ω; B d q − 2 h−1 q,p
). Let (u, σ) be the (p, κ c , δ, q)-solution to (2.1) provided by Theorem 2.7. For all ε ∈ (0, 1) and T ∈ (0, ∞), there exists C ε,T > 0, independent of u 0 such that
(5.2) E∥u 0 ∥ p B d q − 2 h−1 q,p + M p 1 ≤ C ε,T =⇒ P(σ > T ) > 1 − ε.
Roughly speaking, Theorem 5.1 shows that if u 0 and M 1 are close to 0, then u exists up to T with probability > 1 − ε. Reasoning as in Remark 2.8(c), the above result also implies existence for large time of unique solutions with small data in L .11], we present an alternative approach under the additional assumption that u ≥ 0, and the mass conservation property: there exist α 1 , . . . , α ℓ , C 0 > 0 such that, for all t ≥ 0, x ∈ T d and y ∈ [0, ∞) ℓ ,
(5.4) ℓ i=1 α i f i (t, x, y) ≤ C 0 1 + ℓ i=1 y i .
Both conditions are natural for reaction-diffusion equations, see Subsection 1.1 and [Pie10]. Due to assumption (5.4) we can control the lower order term M 2 |y| on the RHS(5.1) by exploiting the mass balance, i.e. for all T < ∞ and i ∈ {1, . . . , ℓ},
(5.5) E T d u i (τ, x) dx ≲ T E T d u 0,i (x) dx, for any stopping time τ ∈ (0, σ ∧ T ].
We refer to
Step 1 in the proof of Theorem 5.1 for the precise statement. Before going into the proof of the simplified version of Theorem 5.1, we introduce some more notation. Recall that $(X_\lambda, A, B, \Phi, \Gamma)$ and $\kappa_c = p\big(\frac{h}{h-1} - \frac12\big(\delta + \frac dq\big)\big) - 1$ have been introduced in (3.6)-(3.7) and Theorem 2.7, respectively. Moreover, $X^{\mathrm{Tr}}_{\kappa_c,p} = B^{\frac dq - \frac{2}{h-1}}_{q,p}$, and for $\beta_1, \beta_2$ as in Lemma 3.2 (with $q < \frac{d(h-1)}{\delta}$ and thus $q < \frac{d(h-1)}{2(\delta-1)}$), we let
(5.6) $\mathcal X(t) := L^{hp}(0, t, w_{\kappa_c}; X_{\beta_1}) \cap L^{\frac{h+1}{2}p}(0, t, w_{\kappa_c}; X_{\beta_2})$.
One can readily check that the above space coincides with the one introduced in [AV22b, Subsection 4.3, eq. (4.14)]. By [AV22b,Lemma 4.19], there exists θ ∈ [0, 1 2 ) such that
H θ,p (0, t; w κc ; X 1−θ ) ∩ L p (0, t, w κc ; X 1 ) ⊆ X (t), t > 0.
In particular, the solution (u, σ) provided by Theorem 2.7 satisfies a.s. for all t ∈ (0, σ), u ∈ X (t).
As in [AV21], we need the following special case of [AV22c, Lemma 5.3] and the maximal L p -regularity estimates of [AV23c].
Lemma 5.2. Let Assumption 2.1(p, q, h, δ) be satisfied. Fix T ∈ (0, ∞). Let (A, B) be as in (3.7). Then there exists K > 0 such that for every stopping time τ ∈ [0, T ], every v 0 ∈ L 0 F0 (Ω; X Tr κc,p ), f ∈ L p P ((0, τ ) × Ω, w κc ; X 0 ), g ∈ L p P ((0, τ ) × Ω, w κc ; γ(ℓ 2 , X 1/2 )), and every (p, κ c , q, δ)-solution v ∈ L p P ((0, τ ) × Ω, w κc ; X 1 ) to
dv + Av dt = f dt + Bv + g dW ℓ 2 , v(0) = v 0 ,
on (0, τ ) × Ω, the following estimate holds ∥v∥ p L p (Ω;X (τ )) ≤ K p ∥v 0 ∥ p L p (Ω;X Tr κc,p ) + ∥f ∥ p L p ((0,τ )×Ω,wκ c ;X0) + ∥g∥ p L p ((0,τ )×Ω,wκ c ;γ(ℓ 2 ,X 1/2 )) .
Proof of Theorem 5.1 -Case u ≥ 0 a.e. on [0, σ) × Ω × T d and the mass conservation (5.4) holds.
Through the proof we fix ε ∈ (0, 1) and T ∈ (0, ∞). Let
σ n := inf t ∈ [0, σ) : ∥u∥ L p (0,t,wκ c ;X1) + ∥u∥ X (t) ≥ n ∧ T, n ≥ 1,
where inf ∅ := σ. We split the proof into several steps.
Step 1: (Mass conservation). There exists L > 0, depending only on (C 0 , α i , T ) in (5.4) such that, for all t ∈ [0, T ] and n ≥ 1,
(5.7) E T d ℓ i=1 u i (t ∧ σ n , x) dx ≤ LE T d ℓ i=1 u 0,i (x) dx.
On the RHS(5.7) we understood
T d u 0 (x) dx := ⟨1 T d , u 0 ⟩. Note that T d u 0 (x) dx ≲ ∥u 0 ∥ X Tr κc,p .
To see (5.7) it is enough to stop (2.1) at time t ∧ σ n , multiply each equation in (2.1) by α i and then sum them up. After integrating over T d and canceling the divergence terms and martingale terms, and using the mass conservation (5.4), we find that
E T d ℓ i=1 α i u i (t ∧ σ n , x) dx = E T d ℓ i=1 α i u 0,i (x) dx + E T d t∧σn 0 α i f i (s, x, u)ds dx ≤ E T d ℓ i=1 α i u 0,i (x) dx + C 0 E t∧σn 0 1 + T d ℓ i=1 u i (s, x) dx ds ≤ E T d ℓ i=1 α i u 0,i (x) dx + C 0 E t 0 1 + T d ℓ i=1 u i (s ∧ σ n , x) dx ds,
where we used the positivity of u i in the last step, which holds by assumption. Now (5.7) follows from Gronwall's inequality applied to the function U (s) := E T d ℓ i=1 u i (s ∧ σ n , x)dx and the fact that α i > 0.
Step 2: (Estimates for the nonlinearities). Let K be as in Lemma 5.2. There exists c 0 , c 1 > 0 independent of M 1 such that, for all stopping times 0 ≤ µ ≤ σ ∧ T a.s., one has
E∥Φ(·, u)∥ p L p (0,µ,wκ c ;H 2−δ,q ) + E∥Γ(·, u)∥ p L p (0,µ,wκ c ;H 1−δ,q (ℓ 2 )) ≤ c 0 M p 1 + 1 2K p E∥u∥ p X (µ) + c 1 M p 2 E∥u 0 ∥ X Tr κc ,p + E∥u∥ ph X (µ) ,
where (Φ, Γ) is as in (3.7). Finally, c 0 is also independent of M 2 .
Recall that Φ = f + div(F ). We only provide the details for the estimate of f . The estimates for divF and Γ are similar. Following the proof of the Φ-estimates in (3.9), we obtain that
(5.8) ∥f (·, v)∥ H 2−δ,q ≤ c 0 (M 1 + M 2 ∥v∥ L ξ + M 2 ∥v∥ h L hξ ), v ∈ H 2−δ,q ,
where c 0 is a constant independent of (M 1 , M 2 , v) and ξ is as in (3.9).
By Fatou's lemma it is enough to show the claim of Step 2 where µ is replaced by µ n := σ n ∧ µ for n ≥ 1 and with constants independent of n ≥ 1. Hence, by using (5.8) we have E∥f (·, u)∥ p L p (0,µn,wκ c ;H 2−δ,q ) ≤ c 0 T M 1 + c 1 M 2 E∥u∥ p L p (0,µn,wκ c ;L ξ ) + E∥u∥ ph L ph (0,µn,wκ c ;L hξ ) ).
Next we conveniently estimate the lower order term E∥u∥ p L p (0,µn;L ξ ) appearing on the RHS of the above estimate. Let λ > 0 be arbitrary for the moment. Note that, by standard interpolation,
E∥u∥ p L p (0,µn,wκ c ;L ξ ) ≤ 1 λ E∥u∥ p L p (0,µn,wκ c ;L hξ ) + C 1 E∥u∥ p L p (0,µn;L 1 ) ≤ 1 λ E∥u∥ p L p (0,µn,wκ c ;L hξ ) + C 2 E∥u∥ L 1 (0,µn,wκ c ;L 1 ) + E∥u∥ ph L ph (0,µn,wκ c ;L 1 ) ,
where C 1 , C 2 are constants which depend only on (p, c 1 , λ, M 2 , h, ξ, d) and we used (5.6). Now, let C T be the constant of the embedding
L hp (0, t, w κc ; X β1 ) → L p (0, t, w κc ; H −δ+2β1,q ) → L p (0, t, w κc ; L hξ )
for any t ∈ (0, T ]. Then, the previous shows
E∥u∥ p L p (0,µn,wκ c ;L ξ ) ≤ C T λ
E∥u∥ p X (µn) + C 2 E∥u∥ L 1 (0,µn,wκ c ;L 1 ) + E∥u∥ ph X (µn) .
To conclude, recall that, µ n ≤ σ n ≤ T a.s. and therefore
E∥u∥ L 1 (0,µn,wκ c ;L 1 ) ≤ E T 0 T d |u(t ∧ σ n , x)|w κc (t) dx dt ≲ E∥u 0 ∥ X Tr κc ,p ,
where in the last inequality we used Step 1. Note that, by Step 1, the implicit constant in the above estimate is independent of n ≥ 1 as desired. Putting together the previous estimates, one obtains the claim for f (·, u) by choosing λ large enough. The remaining ones are similar.
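For the reader's convenience, a sketch of the last estimate above (assuming, as in this part of the proof, that $u \ge 0$, and writing $w_{\kappa_c}(t) = t^{\kappa_c}$ for the power weight used throughout, which is our reading of the notation):
\begin{align*}
E\|u\|_{L^1(0,\mu_n,w_{\kappa_c};L^1)}
&\le E\int_0^T \int_{\mathbb T^d} |u(t\wedge\sigma_n, x)|\, w_{\kappa_c}(t)\, \mathrm dx\, \mathrm dt
\le \int_0^T t^{\kappa_c}\, E\int_{\mathbb T^d} \sum_{i=1}^\ell u_i(t\wedge\sigma_n, x)\, \mathrm dx\, \mathrm dt\\
&\overset{\text{Step 1}}{\le} L\, E\int_{\mathbb T^d} \sum_{i=1}^\ell u_{0,i}(x)\, \mathrm dx \int_0^T t^{\kappa_c}\,\mathrm dt
\ \lesssim_T\ E\|u_0\|_{X^{\mathrm{Tr}}_{\kappa_c,p}},
\end{align*}
where we used $u(t) = u(t\wedge\sigma_n)$ for $t \le \mu_n \le \sigma_n$, $|u| \le \sum_i u_i$ (by positivity), Tonelli's theorem, and $\kappa_c \ge 0$ so that $\int_0^T t^{\kappa_c}\,\mathrm dt < \infty$.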
Step 3: Let K be as in Lemma 5.2. Then there exists R > 0, independent of (M 1 , u 0 ), such that for any N ≥ 1 and any stopping time µ satisfying 0 ≤ µ ≤ σ ∧ T and ∥u∥ X (µ) ≤ N a.s.,
E ψ R (∥u∥ p X (µ) ) ≤ E∥u 0 ∥ X Tr κc,p + E∥u 0 ∥ p X Tr κc ,p + M p 1 , with ψ R (x) = x R − x h .
As in the previous step we may prove the claim with $\mu$ replaced by $\mu_n := \sigma_n \wedge \mu$, since $\|u\|_{\mathcal X(\mu)} \le N$ a.s. for some $N \ge 1$. The estimates of Lemma 5.2 and Step 2 readily imply
E∥u∥ p X (µn) ≤ K p E∥u 0 ∥ p X Tr κc,p + E∥Φ(·, u)∥ p L p (0,µn,wc;X0) + E∥Γ(·, u)∥ p L p (0,µn,wc;X 1/2 ) ≤ K p c 0 M p 1 + K p E∥u 0 ∥ X Tr κc,p + c 1 M 2 E∥u 0 ∥ p X Tr κc,p + 1 2 E∥u∥ p X (µn) + K p c 1 M 2 E∥u∥ ph X (µn) .
Since ∥u∥ X (µn) ≤ n a.s. by definition of σ n , the term 1 2 E∥u∥ p X (µn) can be absorbed on the LHS and hence
E∥u∥ p X (µn) ≤ 2K p c 0 M p 1 + 2K p (E∥u 0 ∥ X Tr κc,p + c 1 M 2 E∥u 0 ∥ p X Tr κc,p ) + 2K p c 1 M 2 E∥u∥ ph X (µn) .
Letting n → ∞, the desired estimate follows after division by R = 2K p max{c 0 , 1 + c 1 M 2 }.
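Taking the final display above at face value, the claim of Step 3 indeed follows after division by $R = 2K^p\max\{c_0, 1+c_1M_2\}$; a sketch:
\begin{align*}
\frac1R\,E\|u\|^p_{\mathcal X(\mu_n)}
&\le \frac{c_0}{\max\{c_0, 1+c_1M_2\}}\, M_1^p + \frac{E\|u_0\|_{X^{\mathrm{Tr}}_{\kappa_c,p}} + c_1M_2\, E\|u_0\|^p_{X^{\mathrm{Tr}}_{\kappa_c,p}}}{\max\{c_0, 1+c_1M_2\}} + \frac{c_1M_2}{\max\{c_0, 1+c_1M_2\}}\,E\|u\|^{ph}_{\mathcal X(\mu_n)}\\
&\le M_1^p + E\|u_0\|_{X^{\mathrm{Tr}}_{\kappa_c,p}} + E\|u_0\|^p_{X^{\mathrm{Tr}}_{\kappa_c,p}} + E\|u\|^{ph}_{\mathcal X(\mu_n)},
\end{align*}
so that $E\,\psi_R\big(\|u\|^p_{\mathcal X(\mu_n)}\big) = \frac1R E\|u\|^p_{\mathcal X(\mu_n)} - E\|u\|^{ph}_{\mathcal X(\mu_n)} \le E\|u_0\|_{X^{\mathrm{Tr}}_{\kappa_c,p}} + E\|u_0\|^p_{X^{\mathrm{Tr}}_{\kappa_c,p}} + M_1^p$; letting $n\to\infty$ (using $\|u\|_{\mathcal X(\mu)} \le N$ and dominated convergence) yields the claim.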
Step 4: (A reduction). To prove Theorem 5.1 (i.e. the implication (5.2)) it is sufficient to prove the existence of C ε,T , r T > 0 independent of u 0 such that
(5.9) E∥u 0 ∥ X Tr κc,p + E u 0 p X Tr κc,p + M p 1 ≤ C ε,T =⇒ P(O) > 1 − ε, where O = {∥u∥ X (σ∧T ) ≤ r T }.
Arguing as in Step 2 (cf. (5.8) and the text before it), one can check that there exists C * depending only on (M 1 , M 2 , K, c 0 , c 1 , r T ) such that ∥Φ(·, u)∥ L p (0,µ,wκ c ;H 2−δ,q ) + ∥Γ(·, u)∥ L p (0,µ,wκ c ;H 1−δ,q (ℓ 2 )) ≤ C * on O,
where µ ∈ [0, σ] is a stopping time. Define the stopping time τ by τ = inf t ∈ [0, σ) : ∥Φ(·, u)∥ L p (0,µ,wκ c ;H 2−δ,q ) + ∥Γ(·, u)∥ L p (0,µ,wκ c ;H 1−δ,q (ℓ 2 )) ≥ C * + 1 ∧ T,
where we set inf ∅ = σ ∧ T . Then τ = σ ∧ T on O. By [AV23c, Theorem 1.2], (A, B) has stochastic maximal L p -regularity. Since (u, σ) is a (p, κ c , q, δ)-solution to (2.1), as in [AV22b, Proposition 3.12(2)] it follows that a.s. on [0, τ )
du + Au dt = 1 [0,τ ) F (·, u) dt + Bu + 1 [0,τ ) G(·, u) dW ℓ 2 ,
and u(0) = u 0 . Now [AV23c, Theorem 1.2] (see also Theorem 5.2 there) also gives (5.10) u ∈ L p (Ω; H κc p ,p (0, τ, w κc ; X 1− κc p )) ∩ L p (Ω; C([0, τ ]; X Tr κc,p )).
Using τ = σ ∧ T on O, by Sobolev embedding [AV22b, Proposition 2.7] we obtain
(5.11) u ∈ H κc p ,p (0, σ ∧ T, w κc ; X 1− κc p ) → L p (0, σ ∧ T ; X 1− κc p ) = L p (0, σ ∧ T ; H γ,q ) a.s. on O,
where γ = 1 + δ − 2 κc p = 2 p + d q − 2 h−1 . Let β = d q − 2 h−1 , and note that X Tr κc,p = B β q,p , see (3.14). Thus (5.10) also implies
(5.12) u ∈ C([0, σ ∧ T ]; B β q,p ) a.s. on O.
Hence, it follows that
P {σ ≤ T } ∩ O (i) = lim s↓0 P s < σ ≤ T, sup t∈[s,σ) ∥u(s)∥ B β q,p + ∥u∥ L p (s,σ;H γ,q ) < ∞ ∩ O ≤ lim sup s↓0 P s < σ ≤ T, sup t∈[s,σ) ∥u(s)∥ B β q,p + ∥u∥ L p (s,σ;H γ,q ) < ∞ (ii) = 0.
Here in (i) we used σ > 0 a.s. (see Theorem 2.7) and (5.11)-(5.12). In (ii) we used Theorem 2.10(2) with p 0 = p, q 0 = q, γ 0 = γ and β 0 = β (see also the comments below (2.18) on the set {σ = T }). Therefore, σ > T on O and therefore we showed that (5.9) implies the claim of Theorem 5.1.
Step 5: Conclusion, i.e. (5.9) holds. Let $\psi_R$ be as in Step 3. It is easy to check that $\psi_R$ has a unique maximum on $\mathbb R_+$, attained at $x_\star := (Rh)^{-1/(h-1)}$, and it is given by
$$\psi_\star := \frac{x_\star}{R}\cdot\frac{h-1}{h}.$$
Set $r_{\varepsilon,T} = x_\star^{1/p}$ and hence $O = \{\|u\|_{\mathcal X(\sigma)} \le x_\star^{1/p}\}$. Define
(5.13) $\mu := \inf\{t \in [0, \sigma) : \|u\|_{\mathcal X(t)} \ge x_\star^{1/p}\} \wedge T$,
where inf ∅ := σ ∧ T . We prove (5.9) with C ε,T = εψ⋆ 2 . To derive a contradiction suppose that (5.14)
E∥u 0 ∥ X Tr κc ,p + E∥u 0 ∥ p X Tr κc,p + M p 1 ≤ εψ ⋆ 2 , and P(O) ≤ 1 − ε.
From the definition of O and (5.13), we find that µ < σ ∧ T a.s. on Ω \ O. Moreover,
ψ R (∥u∥ p X (µ) ) = ψ ⋆ a.s. on Ω \ O, and ψ R (∥u∥ p X (µ) ) ≥ 0, a.s. on O. (5.15) Therefore, E ψ R (∥u∥ p X (µ) ) = E ψ R (∥u∥ p X (µ) )1 O + E ψ R (∥u∥ p X (µ) )1 Ω\O (5.15) ≥ P(Ω \ O)ψ ⋆ (5.14)
≥ εψ ⋆ (5.14)
$\ge 2\big(E\|u_0\|^p_{X^{\mathrm{Tr}}_{\kappa_c,p}} + M_1^p\big)$. The latter contradicts Step 3. Thus $P(O) > 1 - \varepsilon$ as desired. □
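For completeness, here is the elementary calculus behind $x_\star$ and $\psi_\star$ used in Step 5 (nothing beyond one-variable calculus is involved):
\[
\psi_R'(x) = \frac1R - h x^{h-1} = 0 \iff x = (Rh)^{-1/(h-1)} =: x_\star,
\qquad
\psi_\star = \psi_R(x_\star) = \frac{x_\star}{R} - x_\star\, x_\star^{h-1} = \frac{x_\star}{R} - \frac{x_\star}{Rh} = \frac{x_\star}{R}\cdot\frac{h-1}{h},
\]
and $\psi_R''(x) = -h(h-1)x^{h-2} < 0$ on $(0,\infty)$, so this is indeed the unique maximum of $\psi_R$ on $\mathbb R_+$.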
Extension to the one-dimensional case
Many of the results of the previous sections extend to the one-dimensional setting. However, different restrictions on the parameters will appear. The reason for this is that certain sharp Sobolev embeddings become invalid. An example is the condition on ξ in (3.9): − d ξ = −δ − d q . The latter can no longer hold for δ ∈ [1, 2) and d = 1, and therefore one takes the best possible choice ξ = 1, which in turn leads to other conditions on h and δ in the Sobolev embedding H θ,q → L hξ used in (3.9). Similar changes are needed for Subset 2. As these restrictions lead to sub-optimal exponents, it is not really interesting to consider critical spaces anymore. Therefore, there is no need to state Theorem 2.7 for d = 1. However, we will include the conditions on the exponents under which the one-dimension variant of Proposition 3.1 holds: Proposition 6.1 (Local existence, uniqueness, and regularity for d = 1). Let Assumption 2.1(p, q, h, δ) be satisfied for d = 1. Suppose that q ≥ 2 and 1 q − 1 h < 2 − δ and that one of the following holds:
(1) δ + 1 q > 2 and 1+κ p ≤ h h−1 min 1 − δ 2 , 1 − δ 2 + 1 2h − 1 2q . (2) δ + 1 q < 2 and 1+κ p ≤ h h−1 min 1 − δ 2 , 1 − δ 2 + 1 2h − 1 2q , 1 − h−1 2h (δ + 1 q ) .
Then for any $u_0 \in L^0_{\mathcal F_0}(\Omega; B^{2-\delta-2\frac{1+\kappa}{p}}_{q,p})$, problem (2.1) has a (unique) (p, κ, δ, q)-solution satisfying a.s. $\sigma > 0$ and $u \in L^p_{\mathrm{loc}}([0, \sigma), w_\kappa; H^{2-\delta,q}) \cap C([0, \sigma); B^{2-\delta-2\frac{1+\kappa}{p}}_{q,p})$. Moreover, $u$ instantaneously regularizes: $u \in H^{\theta,r}_{\mathrm{loc}}(0, \sigma; H^{1-2\theta,\zeta})$ a.s. for all $\theta \in [0, 1/2)$, $r, \zeta \in (2, \infty)$,
$u \in C^{\theta_1,\theta_2}_{\mathrm{loc}}((0, \sigma) \times \mathbb T^d; \mathbb R^\ell)$ a.s. for all $\theta_1 \in [0, 1/2)$, $\theta_2 \in (0, 1)$.
Moreover, the assertions of Proposition 3.3 hold under these conditions as well.
We left out the case δ + 1 q = 2 since it leads to slightly different conditions because one needs η = 1 + ε in this case, because the Sobolev embedding L 1 → H 1−δ,q does not hold for the L 1 -endpoint (here η is as in Substep 1b of Lemma 3.2).
Proof. First we discuss the required changes in the proof of Lemma 3.2. Taking ξ = 1 in (3.9) we can set θ = max{ 1 q − 1 h , 0}. We need the condition 1 q − 1 h < 2 − δ to ensure Φ 0 is of lower order. This leads to the choice
β 1 = max 1 2q − 1 2h , 0 + δ 2 . (6.1)
For Φ 1 and Γ we consider two cases: Case δ + 1 q > 2. In this case we choose η = 1 and ϕ = max{0, 1 q − 2 h+1 }. We need the condition 1 q − 2 h+1 < 2 − δ to ensure that Φ 1 is of lower order, but the latter is automatically satisfied since 2 h+1 > 1 h . This leads to
β 2 = max 1 2q − 1 h + 1 , 0 + δ 2 .
It turns out that the sub-criticality condition (see (3.15))
(6.2) 1 + κ p ≤ ρ j + 1 ρ j (1 − β j ) for j ∈ {1, 2},
is most restrictive for j = 1, and this leads to the condition as stated in (1). Case δ + 1 q < 2. In this case we can take η, ϕ and β 2 as in the proof of Lemma 3.2. Since δ + 1 q < 2 < 2h h−1 , elementary computations show that the condition ϕ < 2 − δ is automatically satisfied. This time the condition (6.2) gets an additional restriction as stated in (2). It only plays a role if q < h−1 2(δ−1) . Now the rest of the assertions follow in the same way as in Propositions 3.1 and 3.3. □
The following analogues of the previous results hold in the case d = 1 as well:
Remark 6.2. Let the conditions of Proposition 6.1 be satisfied with exponents (p, q, h, δ, κ).
(1) (Blow-up criteria). Suppose that Assumption 2.1(p 0 , q 0 , h 0 , δ 0 ) holds with h 0 ≥ h, and that Proposition 6.1(1) or (2) hold for (p 0 , q 0 , h 0 , δ 0 , κ 0 ). Let β 0 = 2 − δ 0 − 2 1+κ0 p0 and γ 0 = 2 − δ 0 − 2κ0 p0 . Then for all 0 < s < T < ∞, Theorem 2.10(1)-(2) for d = 1 hold. (2) (Positivity). The assertion of Theorem 2.13 holds for d = 1 if the conditions of Theorem 2.7 are replaced by the conditions of Proposition 6.1. (3) In a similar way Theorems 4.2 and 5.1 hold for d = 1 in the setting of Proposition 6.1.
Here Assumption 2.4 should be omitted and κ should be as in Proposition 6.1 instead of κ c . Some changes are required in the arguments.
In Remark 2.2(d) we mentioned an alternative way to include the case d = 1 by adding a dummy variable. However, this leads to additional restrictions on the parameters. 7. Extensions to the case p = q = 2
In this section we explain how to extend the results of the previous sections to p = q = 2 and κ = 0. This end-point case follows in the same way as in [AV22a], where we cover the so-called variational setting which can be seen as an abstract version of the case p = q = 2 and κ = 0. Its importance lies in the fact that it often allows to prove energy bounds which lead to global existence. All results in Sections 2 and 6 extend to p = q = 2 and κ = 0 under suitable restrictions which we explain below.
The variational framework is very effective in the weak setting (i.e. δ = 1), where coercivity conditions are easy to check. The case δ ∈ (1, 2) allowed in Theorem 2.7, is typically not included as the fractional scale leads to difficulties with coercivity conditions. The results of this section (e.g. Proposition 7.3) might be combined to some of the results in [AV22a] for instance to allow rougher initial data and/or to obtain higher order regularity (see Theorem 2.7 and 4.2, respectively). However, one should be aware that using the case p = q = 2 and κ = 0 requires low dimension, or nonlinearities which do not grow too rapidly (see Subsection 1.4 and [AV23b, Subsection 5.2]).
As in [AV22a, Subsection 5.3] one can check that Definition 2.3 can be extended to $p = q = 2$, $\kappa = 0$ and $\delta = 1$ if Assumption 2.1(1),(3), and (4) hold and there exists a constant $K$ such that
(7.1) $|a^{j,k}_i| + \|b^j_i\|_{\ell^2} \le K$, for all $i, j, k$ and a.e. on $\mathbb R_+ \times \Omega \times \mathbb T^d$.
Note that the regularity conditions on the coefficients in Assumption 2.1(2) are left out. In this section we often use the abbreviation $H^s = H^{s,2}(\mathbb T^d; \mathbb R^\ell)$ for $s \in \mathbb R$.
Proposition 7.1 (Local existence and uniqueness, and blow-up criteria for p = q = 2). Suppose that for all $i \in \{1, \dots, \ell\}$ parts (1),(3), and (4) of Assumption 2.1 hold, and (7.1) holds. Let $u_0 \in L^0_{\mathcal F_0}(\Omega; L^2)$. Then there exists a unique (2, 0, 1, 2)-solution $(u, \sigma)$ to (2.1) satisfying $\sigma > 0$ a.s. and $u \in L^2_{\mathrm{loc}}([0, \sigma); H^1) \cap C([0, \sigma); L^2)$ a.s. Moreover, for all $T < \infty$,
(7.3) $P\Big(\sigma < T,\ \sup_{t\in[0,\sigma)} \|u(t)\|_{L^2} + \|u\|_{L^2(0,\sigma;H^1)} < \infty\Big) = 0$.
Note that in $d = 2$, one cannot reach the scaling invariant case $h = 3$, cf. Subsection 1.4.
Proof. This follows by the same reasoning as in [AV22a, Theorems 3.3, 3.4 and Section 5.3]. □
Of course, a natural question is whether, under further conditions on the coefficients, the solution of Proposition 7.1 is a (p, κ, δ, q)-solution and has higher regularity than stated in Proposition 7.1. This indeed turns out to be the case.
Proposition 7.2 (Regularity for p = q = 2). Suppose that (7.2) holds with the additional restriction that h < 4 for d = 1. Suppose Assumption 2.1(p, q, h, δ) holds for some δ ∈ (1, 2). Let u 0 ∈ L 0 F0 (Ω; L 2 ). Let (u, σ) be the (2, 0, 1, 2)-solution to (2.1) provided by Proposition 7.1. Then the regularity assertions (2.13)-(2.14) hold, i.e.
$u \in H^{\theta,r}_{\mathrm{loc}}(0, \sigma; H^{1-2\theta,\zeta})$ a.s. for all $\theta \in [0, 1/2)$, $r, \zeta \in (2, \infty)$, and $u \in C^{\theta_1,\theta_2}_{\mathrm{loc}}((0, \sigma) \times \mathbb T^d; \mathbb R^\ell)$ a.s. for all $\theta_1 \in [0, 1/2)$, $\theta_2 \in (0, 1)$. Proof. First consider $d \ge 3$. Then without loss of generality we can assume $h = 1 + \frac4d$. Fix $\varepsilon \in (0, 1/2)$ so small that $\delta_0 := \varepsilon + 1 < \delta$.
To prove the claim we will apply [AV22c, Proposition 6.8] with (7.4) Y i = H −1+2i−ε , X i = H −1+2i , p = 2, r ∈ (2, ∞), 1 2 = 1 + α r + ε 2 .
Note that since ε ∈ (0, 1 2 ), (7.4) yields α ∈ (0, r 2 − 1). First note that the conditions of Proposition 3.1 with variant (3.1) hold with (p, q, κ, δ) replaced by (r, 2, α, δ 0 ). Therefore, Part (A) of the proof of Proposition 3.1 shows that the conditions of [AV22c, Proposition 6.8] are satisfied if we choose r such that 1 r = max j∈{1,2} β j − 1 2 , where β j is as in Lemma 3.2. From the proof of [AV22c, Proposition 6.8] one sees that (u, σ) coincides with the (r, α, δ 0 , 2)-solution. Therefore, the required regularity follows from Proposition 3.1 (or equivalently the extrapolation result of [AV22c, Lemma 6.10]).
Next let d = 2. Without loss of generality we can assume h ∈ (2, 3). In this case we need a slight modification of Lemma 3.2. To this end, let 1 < δ 0 ≤ min{δ, 5/3} be fixed but arbitrary. The nonlinearity Φ 0 satisfies the required estimates with h < 3, β 1 = δ0 2 + 1 2 − 1 h . Indeed, to see this in (3.9) one can take ξ = 1 (using δ 0 > 1), and θ = d q − d hξ = 1 − 2 h < 1 3 . Note that we are in the case q < d(h−1) δ and θ < 2 − δ 0 follows from δ 0 ≤ 5/3. The estimates for Φ 1 and Γ can be done by taking the optimal choices for η and ϕ in the Sobolev embeddings where we replace δ by δ 0 .
Note that in Step 1 of the proof of Proposition 3.1 we have q < d(h−1) δ0 and 1+κ p + 1 2 (δ 0 + d q ) ≤ h h−1 with d = p = q = 2 and κ = 0 if we take δ 0 ≤ h+1 h−1 . Now we are in the situation that we can repeat the argument of the case d ≥ 3, where we take δ 0 = 1 + ε with ε ∈ (0, 1/2) small enough.
In the case d = 1, we argue in a similar way as for d = 2. We may suppose that h ∈ (3, 4). We first check Proposition 6.1(2). One can check that the minimum is attained at the middle expression. Let r > 2, α ∈ (0, r−1 2 ) and δ 0 ∈ (1, δ ∧ 7 4 ] be arbitrary but fixed. Using h < 4 and that the right-hand side is strictly decreasing in h, we find that for δ 0 small enough
1 + α r < 1 2 < h h − 1 1 − δ 0 2 + 1 2h − 1 4 .
Thus Proposition 6.1 is applicable with (p, q, κ, δ) replaced by (r, 2, α, δ 0 ). Recall from (6.1) that β 1 = max 1 2q − 1 2h , 0 + δ0 2 ∈ 1 2 , 1 . Also recall that from the proof of Proposition 6.1 one can see that β 2 can be taken as in Lemma 3.2. Therefore, we can repeat the argument of the case d ≥ 3 once more. □
The following complements the blow-up criteria of Theorem 2.10 and of Corollary 2.11. In particular, it shows that the solutions provided by Proposition 3.1 (or Theorem 2.7) are global in time if one can obtain energy estimates in an $L^2$-setting. Here the initial data can be taken from a space with lower smoothness than $L^2(\mathbb T^d; \mathbb R^\ell)$. Thus the result extends the class of initial data covered by Proposition 7.1 under some smoothness conditions on the coefficients.
Proposition 7.3 (Global existence for rough initial data). Suppose that the conditions of Proposition 3.1 are satisfied, in particular $u_0 \in L^0_{\mathcal F_0}(\Omega; B^{2-\delta-2\frac{1+\kappa}{p}}_{q,p})$. Let $(u, \sigma)$ be the (p, κ, δ, q)-solution obtained there. Suppose that (7.2) holds with $h < 4$ if $d = 1$. Then, for all $0 < s < T < \infty$,
(7.5) $P\Big(s < \sigma < T,\ \sup_{t\in[s,\sigma)} \|u(t)\|_{L^2} + \|u\|_{L^2(s,\sigma;H^1)} < \infty\Big) = 0$.
Proof. We extend the argument in Theorem 2.10 to $p = q = 2$. First note that $u$ satisfies the regularity stated in (3.4) and (3.5). In particular, the $L^2$-norm and $H^1$-norm appearing in (7.5) are well-defined.
Proposition 7.1 (up to translation) yields the existence of a (2, 0, 1, 2)-solution (v, τ ) on [s, ∞) to (3.34) with initial data 1 {σ>s} u(s) which satisfies τ > s a.s., and by Proposition 7.2 (recall that h < 4 if d = 1), (7.6) v ∈ H θ,r loc (s, τ ; H 1−2θ,ζ ) a.s. for all θ ∈ [0, 1/2), r, ζ ∈ (2, ∞). Moreover, by Proposition 7.1 (up to translation), (7.7) P τ < T, sup t∈[s,τ ) ∥v(t)∥ L 2 + ∥v∥ L 2 (s,τ ;H 1 ) < ∞ = 0.
We claim that
(7.8) $\tau = \sigma$ a.s. on $\{\sigma > s\}$ and $u = v$ a.e. on $[s, \sigma) \times \{\sigma > s\}$.
Hence (7.5) follows from (7.7) and (7.8).
It remains to prove the claim (7.8). Since $(u|_{[s,\sigma)\times V}, 1_V \sigma + 1_{\Omega\setminus V}\, s)$, with $V := \{\sigma > s\}$, is a local (2, 0, 1, 2)-solution to (3.34) with initial data $1_{\{\sigma>s\}} u(s)$, the maximality of $(v, \tau)$ yields
(7.9) $\sigma \le \tau$ a.s. on $\{\sigma > s\}$ and $u = v$ a.e. on $[s, \sigma)$.
To conclude it is enough to show that P(s < σ < τ ) = 0. To this end we will apply the blow-up criteria (3.32). Indeed, by (7.6) and (7.9) we have u = v ∈ L p loc ((s, σ]; H γ,q ) a.s. on {s < σ < τ }.
Combining this with (3.33) we find $u \in L^p(0, \sigma; H^{\gamma,q})$ a.s. on $\{s < \sigma < \tau\}$. Similarly, one can check that $\sup_{t\in[0,\sigma)} \|u(t)\|_{B^\beta_{q,p}} < \infty$ a.s. on $\{s < \sigma < \tau\}$, and therefore
$$P(s < \sigma < \tau) = P\Big(\{s < \sigma < \tau\} \cap \Big\{\sup_{t\in[0,\sigma)} \|u(t)\|_{B^\beta_{q,p}} + \|u\|_{L^p(0,\sigma;H^{\gamma,q})} < \infty\Big\}\Big) \le P\Big(\sigma < \infty,\ \sup_{t\in[0,\sigma)} \|u(t)\|_{B^\beta_{q,p}} + \|u\|_{L^p(0,\sigma;H^{\gamma,q})} < \infty\Big) \overset{(3.32)}{=} 0.\ \square$$
The assumption on h f remains as it was for h. This leads to a slightly weaker assumptions on F and g for d ∈ {1, 2}. The same actually applies to the more general of Lemma 3.2.
Remark 7.6. One can also replace (L 2 , H 1 ) by (H s , H s+1 ) in the above results. This gives a wider range of nonlinearities which can be treated if s is large, but at the same time this choice requires more restrictions on the regularity of the coefficients, the spatial smoothness of the nonlinearities f, F, g and on the initial data (see e.g. [AV22a, Section 5.4]).
Appendix A. A maximum principle for SPDEs
In [Kry13], Krylov presented a maximum principle for linear scalar second order SPDEs, which are allowed to be degenerate. In the proof of the positivity result of Theorem 2.13 we need such a result in the non-degenerate setting, but with coefficients which have less smoothness. Below we extend the maximum principle to the case of non-smooth coefficients as one can use an approximation argument in the non-degenerate case. As before Theorem 2.13, here we say that v ∈ D ′ (T d ) is positive (or v ≥ 0) if ⟨φ, v⟩ ≥ 0 for all test functions φ satisfying φ ≥ 0 on T d .
Lemma A.1 (Maximum principle for second order SPDEs of scalar type). Suppose that $a^{ij}, a^i, b^i, c : [0, T] \times \Omega \times \mathbb T^d \to \mathbb R$ and $(\sigma^{ik})_{k\ge1}, (\nu^k)_{k\ge1} : [0, T] \times \Omega \times \mathbb T^d \to \ell^2$ are bounded and $\mathcal P \otimes \mathcal B(\mathbb T^d)$-measurable, and there is a $\gamma > 0$ such that a.s.
(A.1) $\sum_{i,j=1}^d \Big(a^{ij} - \frac{\alpha^{ij}}{2}\Big)\xi_i\xi_j \ge \gamma|\xi|^2$ on $[0, T] \times \mathbb T^d$ for all $\xi \in \mathbb R^d$,
where $\alpha^{ij} = (\sigma^i, \sigma^j)_{\ell^2}$. Let $u_0 \in L^2(\Omega; L^2(\mathbb T^d))$ and $f \in L^2(\Omega \times (0, T); H^{-1}(\mathbb T^d))$ be such that a.e. $u_0 \ge 0$ and $f \ge 0$. Let $u \in L^2(\Omega; L^2(0, T; H^1(\mathbb T^d))) \cap L^2(\Omega; C([0, T]; L^2(\mathbb T^d)))$ be the solution to
(A.2) $\mathrm du - Au\,\mathrm dt = f\,\mathrm dt + \sum_{k\ge1} B_k u\,\mathrm dw^k_t$, $\quad u(0) = u_0$, where $Au = \sum_{i,j=1}^d \partial_i(a^{ij}\partial_j u) + \sum_{i=1}^d \partial_i(a^i u) + \sum_{i=1}^d b^i\partial_i u + cu$, and $B_k u = \sum_{i=1}^d \sigma^{ik}\partial_i u + \nu^k u$.
Then a.s. for all t ∈ [0, T ], one has u ≥ 0.
In the above solutions to (A.2) are understood in the sense of Defintion 2.3 with q = p = 2 and κ = 0. A similar result holds for general domains O ⊆ R d with (for instance) Dirichlet boundary conditions.
varies in time very rapidly compared to the larger one v (L)
(b) If Assumption 2.1(p, q, h, δ) holds for some h, then it holds for all h ∈ [h, ∞). (c) The growth of f, F and g is chosen in accordance with the scaling argument of Subsection 1.4.
Remark
of α in (3.22) immediately yields (3.23) (a) and both spaces equal B ) (b) we apply [AV22c, Lemma 6.2(4)]. To this end note that, for ε = δ−1 2 ,
(3.31) u = u a.e. on [0, σ_0) × Ω, and v = v a.e. on [0, τ_0) × Ω.
(3.37) τ = σ a.s. on V and u = v a.e. on [s, σ) × V. The remaining part of this step is devoted to the proof of (3.37). Let us begin by noticing that, by h_0 ≥ h and (2.13), (u|_{[s,σ)×V}, 1_V σ + 1_{Ω\V} s) is a (p_0, κ_0, δ_0, q_1)-solution to (3.34). The maximality of (v, τ) yields (see the last item in Definition 2.3) (3.38) σ ≤ τ a.s. on V and u = v a.e. on [s, σ) × V.
.s. as the assumptions of Proposition 3.1 are verified in both settings. By localization in Ω, see [AV22b, Theorem 4.7(d)] with Γ =
and u^{(n)} = u^{(n)}_i a.e. on [0, σ^{(n)}) × Ω by maximality of (u^{(n)}_i, σ^{(n)}_i). Now reasoning as in the proof of Theorem 2.10, by instantaneous regularization of (p_i, κ_i, δ_i, q_i)-solutions (i.e. (3.4)-(3.5)), we also obtain (3.40) σ^{(n)}_i = σ^{(n)} a.s., and u^{(n)}_i = u^{(n)} a.e. on [0, σ^{(n)}) × Ω.
solution on [0, T 0,i ] solution to (3.27) in the (p i , κ i , δ i , q i )-setting, and where T 0,i and λ 0,i are as in the proof of Proposition 3.3. To prove the claim of Step 2, let σ (n) 0,i be as in (3.30) with (u, X , λ 0 ) replaced by (u (n) i , X i , λ 0,i ). By [AV22b, Lemma 4.9], Remark 3.4(b) and [ALV21, Corollary 5.2] it follows that
+
(T d ; R ℓ ). Under the assumptions of Theorem 5.1, the proof below also yields the following assertion:If (u 0 , M 1 ) satisfies the condition on LHS(5.2), then there exists a stopping time τ ∈ (0, σ] a.s. such that P(τ ≥ T ) > 1 − ε and(5.3) E 1 {τ ≥T } ∥u∥ p H θ,p (0,T,wκ c ;H 2+δ−2θ,q ) ≲ θ E∥u 0 ∥ M p 1 , for all θ ∈ [0, 1 2 ). To prove Theorem 5.1 and (5.3) one can modify the arguments used in the proof of [AV21, Theorem 2.11(1)] and [AV21, Theorem 2.11(2)], respectively. Instead of repeating the technical iteration argument used in [AV21, Theorem 2
has a (unique) (p, κ, δ, q)-solution satisfying a.s. σ > 0 and u ∈ L p loc ([0, σ), w κ ; H 2−δ,q ) ∩ C([0, σ); B 2−δ−2 1+κ p q,p
Proposition 7.1 (Local existence and uniqueness, and blow-up criteria for p = q = 2). Suppose that for all i ∈ {1, . . . , ℓ} parts (1), (3), and (4) of Assumption 2.1 hold, and (7.1) holds. Let u_0 ∈ L^0_{F_0}(Ω; L²). Then there exists a unique (2, 0, 1, 2)-solution (u, σ) to (2.1) satisfying σ > 0 a.s. and u ∈ L²_{loc}([0, σ); H¹) ∩ C([0, σ); L²) a.s. Moreover, for all T < ∞,
(7.3) P( σ < T, sup_{t∈[0,σ)} ∥u(t)∥_{L²} + ∥u∥_{L²(0,σ;H¹)} < ∞ ) = 0.
Note that in d = 2, one cannot reach the scaling invariant case h = 3, cf. Subsection 1.4.
Proof. This follows by the same reasoning as in [AV22a, Theorems 3.3, 3.4 and Section 5.3]. □
Proposition 7.3 (Global existence for rough initial data). Suppose that the conditions of Proposition 3.1 are satisfied, in particular u_0 ∈ L^0_{F_0}(Ω;
= σ a.s. on {σ > s} and u = v a.e. on [s, σ) × {σ > s}.
From the proof of Proposition 7.2 it follows that the compatibility result of Proposition 3.5 extends to p = q = 2 and δ = 1 under the restrictions on h and d stated in Proposition 7.2.
Σ_{i,j=1}^d ( a^{ij} − ½ α^{ij} ) ξ_i ξ_j ≥ γ|ξ|² on [0, T] × T^d for all ξ ∈ R^d. (A.1)
By elementary considerations one can see that the range of q's in (2.8) is nontrivial if and only if h > (d+1)/(d−1). Since additionally q ≥ 2, admissibility is equivalent to
(2.9)  h > (d+1)/(d−1)  and  d(h−1)/(h+1−δ(h−1)) > 2 for some δ ∈ [1, (h+1)/h).
Taking δ ↑ (h+1)/h, the second part of (2.9) becomes h² − (1 + 2/d)h − 2/d > 0, which is equivalent to
h > 1/2 + 1/d + sqrt( (1/2 + 1/d)² + 2/d ) =: h_d.
In case d ≥ 3, one can check that (d+1)/(d−1) ≤ h_d. In case d = 2, one has 3 = (d+1)/(d−1) > h_d. Hence, admissibility is equivalent to h > h_d if d ≥ 3, and to h > h_d = 3 if d = 2.
□
Remark 2.6 (Stochastic Fujita exponent). Note that h_d in Lemma 2.5 satisfies 1 + 2/d < h_d ≤ 1 + 4/d. In particular it is always larger than the classical Fujita exponent 1 + 2/d (note that h > 1 + 2/d corresponds to the fact that the scaling invariant space L^{d(h−1)/2} has integrability > 1). Moreover, h_d is decreasing in d, h_d ↓ 1 as d → ∞, and h_2 = 3, h_3 = 2, h_4 ≈ 1.781, h_5 ≈ 1.643, h_6 ≈ 1.549.
Therefore, up to extracting a subsequence, we have (3.42).
Proof. For convenience of the reader we give the details of the approximation argument. Note that a unique solution exists by the classical variational setting (see e.g. [LR15, Theorem 4.2.4]) applied to the linear problem (A.2). In case of smooth coefficients and smooth f, it follows that u ≥ 0 (see [Kry13]). In order to prove u ≥ 0 in the above setting, it suffices to construct (u_n)_{n≥1} such that u_n ≥ 0 and u_n → u in L²(Ω; C([0,T]; L²(T^d))).
To approximate u we use a standard mollifier argument. Let ρ ∈ C^∞(T^d) be such that ρ ≥ 0 and …, where ã^{ij}_n = a^{ij}_n + ½(σ^i_n, σ^j_n)_{ℓ²} − ½ α^{ij}_n. Note that in general (σ^i_n, σ^j_n)_{ℓ²} ≠ α^{ij}_n, but the equality holds pointwise a.e. in the limit as n → ∞, (possibly) up to a subsequence. Due to this seemingly unnatural definition we can again check the parabolicity condition (A.1). Since the coefficients in the above linear SPDE are smooth, we apply the periodic case of [Kry13, Theorem 4.3] to obtain u_n ≥ 0.
It remains to show u_n → u in L²(Ω; C([0,T]; L²(T^d))), where F_n := (A − A_n)u + f − f_n and G^k_n := (B^k − B^k_n)u. Therefore, by standard regularity estimates (see [LR15, Theorem 4.2.4] and its proof), for a suitable subsequence, and since u ∈ L²((0,T) × Ω; H¹(T^d)), it follows from the dominated convergence theorem that ∥(A − A_n)u∥_{L²((0,T)×Ω;H^{−1}(T^d))} → 0 and ∥((B^k − B^k_n)u)_{k≥1}∥_{L²((0,T)×Ω;L²(T^d;ℓ²))} → 0. Note that in the above we used that u ∈ L²((0,T) × Ω; H¹(T^d)), as γ > 0 in (A.1). For the inhomogeneity f, writing f = (1 − ∆)^{−1/2} f̃, we have ∥f − f_n∥_{L²((0,T)×Ω;H^{−1}(T^d))} ≂ ∥f̃ − ρ_n * f̃∥_{L²((0,T)×Ω;L²(T^d))} → 0. Combining the above we have ∥v_n∥_Z → 0, as required. □
A. Agresti. Delayed blow-up and enhanced diffusion by transport noise for systems of reaction-diffusion equations. arXiv preprint arXiv:2207.08293, 2022.
A. Agresti, N. Lindemulder, and M. Veraar. On the trace embedding and its applications to evolution equations. Online first in Mathematische Nachrichten, 2021.
A. Agresti and M.C. Veraar. Stochastic Navier-Stokes equations for turbulent flows in critical spaces. arXiv preprint arXiv:2107.03953, 2021.
A. Agresti and M.C. Veraar. The critical variational setting for stochastic evolution equations. arXiv preprint arXiv:2206.00230, 2022.
A. Agresti and M.C. Veraar. Nonlinear parabolic stochastic evolution equations in critical spaces part I. Stochastic maximal regularity and local existence. Nonlinearity, 35(8):4100-4210, 2022.
A. Agresti and M.C. Veraar. Nonlinear parabolic stochastic evolution equations in critical spaces part II: Blow-up criteria and instantaneous regularization. J. Evol. Equ., 22(2):Paper No. 56, 96, 2022.
A. Agresti and M.C. Veraar. Global existence and regularity for quasilinear SPDEs with transport noise. In preparation, 2023.
A. Agresti and M.C. Veraar. Reaction-diffusion equations with transport noise and critical superlinear diffusion: Global well-posedness of weakly dissipative systems. arXiv preprint arXiv:2301.06897, 2023.
A. Agresti and M.C. Veraar. Stochastic maximal L^p(L^q)-regularity for second order systems with periodic boundary conditions. To appear in Annales de l'Institut Henri Poincaré (B) Probabilités et Statistiques, 2023.
S. Assing. Comparison of systems of stochastic partial differential equations. Stochastic Process. Appl., 82(2):259-282, 1999.
J. Bergh and J. Löfström. Interpolation spaces. An introduction. Springer-Verlag, Berlin, 1976. Grundlehren der Mathematischen Wissenschaften, No. 223.
Z. Brzeźniak, M. Capiński, and F. Flandoli. Stochastic Navier-Stokes equations with multiplicative noise. Stochastic Anal. Appl., 10(5):523-532, 1992.
M.C. Caputo, T. Goudon, and A.F. Vasseur. Solutions of the 4-species quadratic reaction-diffusion system are bounded and C^∞-smooth, in any space dimension. Anal. PDE, 12(7):1773-1804, 2019.
S. Cerrai. Stochastic reaction-diffusion systems with multiplicative noise and non-Lipschitz reaction term. Probab. Theory Related Fields, 125(2):271-304, 2003.
S. Cerrai. Stabilization by noise for a class of stochastic reaction-diffusion equations. Probab. Theory Related Fields, 133(2):190-214, 2005.
S. Cerrai and M. Röckner. Large deviations for invariant measures of stochastic reaction-diffusion systems with multiplicative noise and non-Lipschitz reaction term. Ann. Inst. H. Poincaré Probab. Statist., 41(1):69-105, 2005.
M.D. Chekroun, E. Park, and R. Temam. The Stampacchia maximum principle for stochastic partial differential equations and applications. J. Differential Equations, 260(3):2926-2972, 2016.
P.-L. Chow. Unbounded positive solutions of nonlinear parabolic Itô equations. Commun. Stoch. Anal., 3(2):211-222, 2009.
P.-L. Chow. Explosive solutions of stochastic reaction-diffusion equations in mean L^p-norm. J. Differential Equations, 250(5):2567-2580, 2011.
P.-L. Chow and R. Khasminskii. Almost-sure explosive solutions of some nonlinear parabolic Itô equations. Commun. Stoch. Anal., 9(2):159-168, 2015.
J. Cresson, M. Efendiev, and S. Sonner. On the positivity of solutions of systems of stochastic PDEs. ZAMM Z. Angew. Math. Mech., 93(6-7):414-422, 2013.
R.C. Dalang, D. Khoshnevisan, and T. Zhang. Global solutions to stochastic reaction-diffusion equations with super-linear drift and multiplicative noise. Ann. Probab., 47(1):519-559, 2019.
A. Debussche and U. Pappalettera. Second order perturbation theory of two-scale systems in fluid dynamics. arXiv preprint arXiv:2206.07775, 2022.
M. Farokhi and M. Birouk. A new EDC approach for modeling turbulence/chemistry interaction of the gas-phase of biomass combustion. Fuel, 220:420-436, 2018.
K. Fellner, J. Morgan, and B. Q. Tang. Global classical solutions to quadratic systems with mass control in arbitrary dimensions. Ann. Inst. H. Poincaré Anal. Non Linéaire, 37(2):281-307, 2020.
J. Fischer. Global existence of renormalized solutions to entropy-dissipating reaction-diffusion systems. Arch. Ration. Mech. Anal., 218(1):553-587, 2015.
J. Fischer. Weak-strong uniqueness of solutions to entropy-dissipating reaction-diffusion equations. Nonlinear Anal., 159:181-207, 2017.
F. Flandoli. A stochastic reaction-diffusion equation with multiplicative noise. Appl. Math. Lett., 4(4):45-48, 1991.
F. Flandoli. An introduction to 3D stochastic fluid dynamics. In SPDE in hydrodynamic: recent progress and prospects, pages 51-150. Springer, 2008.
F. Flandoli. Random perturbation of PDEs and fluid dynamic models, volume 2015 of Lecture Notes in Mathematics. Springer, Heidelberg, 2011. Lectures from the 40th Probability Summer School held in Saint-Flour, 2010, École d'Été de Probabilités de Saint-Flour.
F. Flandoli, L. Galeati, and D. Luo. Delayed blow-up by transport noise. Comm. Partial Differential Equations, 46(9):1757-1788, 2021.
F. Flandoli, L. Galeati, and D. Luo. Mixing, dissipation enhancement and convergence rates for scaling limit of SPDEs with transport noise. arXiv preprint arXiv:2104.01740, 2021.
F. Flandoli, M. Gubinelli, and E. Priola. Well-posedness of the transport equation by stochastic perturbation. Invent. Math., 180(1):1-53, 2010.
F. Flandoli and D. Luo. High mode transport noise improves vorticity blow-up control in 3D Navier-Stokes equations. Probab. Theory Related Fields, 180, 2021.
F. Flandoli and U. Pappalettera. From additive to transport noise in 2D fluid dynamics. Stoch. Partial Differ. Equ. Anal. Comput., 10(3):964-1004, 2022.
M. Foondun, W. Liu, and E. Nane. Some non-existence results for a class of stochastic partial differential equations. J. Differential Equations, 266(5):2575-2596, 2019.
H. Fujita. On the blowing up of solutions of the Cauchy problem for u_t = ∆u + u^{1+α}. J. Fac. Sci. Univ. Tokyo Sect. I, 13:109-124, 1966.
B. Gess and I. Yaroslavtsev. Stabilization by transport noise and enhanced dissipation in the Kraichnan model. arXiv preprint arXiv:2104.03949, 2021.
I. Glassman and I. J. Eberstein. Turbulence effects in chemical reaction kinetics measurements. AIAA Journal, 1(6):1424-1426, 1963.
T.P. Hytönen, J.M.A.M. van Neerven, M.C. Veraar, and L. Weis. Analysis in Banach spaces. Vol. I. Martingales and Littlewood-Paley theory, volume 63 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. Springer, 2016.
T.P. Hytönen, J.M.A.M. van Neerven, M.C. Veraar, and L. Weis. Analysis in Banach spaces. Vol. II. Probabilistic methods and operator theory, volume 67 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. Springer, 2017.
Y. I. Kanel. Solvability in the large of a system of reaction-diffusion equations with the balance condition. Differentsial'nye Uravneniya, 26(3):448-458, 549, 1990.
M. M. Koochesfahani and P. E. Dimotakis. Mixing and chemical reactions in a turbulent liquid mixing layer. Journal of Fluid Mechanics, 170:83-112, 1986.
R. H. Kraichnan. Small-scale structure of a scalar field convected by turbulence. The Physics of Fluids, 11(5):945-953, 1968.
R. H. Kraichnan. Anomalous scaling of a randomly advected passive scalar. Physical Review Letters, 72(7):1016, 1994.
N.V. Krylov. A relatively short proof of Itô's formula for SPDEs and its applications. Stoch. Partial Differ. Equ. Anal. Comput., 1(1):152-174, 2013.
C. Kuehn and A. Neamtu. Dynamics of stochastic reaction-diffusion equations. In Infinite dimensional and finite dimensional stochastic equations and applications in physics, pages 1-60. World Sci. Publ., Hackensack, NJ, 2020.
M. Kunze and J. van Neerven. Continuous dependence on the coefficients and global existence for stochastic reaction diffusion equations. J. Differential Equations, 253(3):1036-1068, 2012.
P.A. Libby and F.A. Williams. Turbulent flows involving chemical reactions. Annual Review of Fluid Mechanics, 8(1):351-376, 1976.
W. Liu and M. Röckner. Stochastic partial differential equations: an introduction. Universitext. Springer, Cham, 2015.
D. Luo. Enhanced dissipation for stochastic Navier-Stokes equations with transport noise. arXiv preprint arXiv:2111.12931, 2021.
A.J. Majda and P.R. Kramer. Simplified models for turbulent diffusion: theory, numerical modelling, and physical phenomena. Phys. Rep., 314(4-5):237-574, 1999.
C. Marinelli. On well-posedness of semilinear stochastic evolution equations on L^p spaces. SIAM J. Math. Anal., 50(2):2111-2143, 2018.
C. Marinelli. Positivity of mild solution to stochastic evolution equations with an application to forward rates. arXiv preprint arXiv:1912.12472, 2019.
C. Marinelli and L. Scarpa. On the positivity of local mild solutions to stochastic evolution equations. In Geometry and invariance in stochastic dynamics, volume 378 of Springer Proc. Math. Stat., pages 231-245. Springer, Cham, 2021.
M. P. Martín and G. V. Candler. Effect of chemical reactions on decaying isotropic turbulence. Physics of Fluids, 10(7):1715-1724, 1998.
A.S. Mikhailov and K. Showalter. Control of waves, patterns and turbulence in chemical systems. Physics Reports, 425(2-3):79-194, 2006.
R. Mikulevicius and B. Rozovskii. On equations of stochastic fluid mechanics. In Stochastics in finite and infinite dimensions, Trends Math., pages 285-302. Birkhäuser Boston, Boston, MA, 2001.
R. Mikulevicius and B. Rozovskii. Stochastic Navier-Stokes equations for turbulent flows. SIAM J. Math. Anal., 35(5):1250-1310, 2004.
J.M.A.M. van Neerven, M.C. Veraar, and L.W. Weis. Stochastic integration in Banach spaces - a survey. In Stochastic analysis: a series of lectures, volume 68 of Progr. Probab., pages 297-332. Birkhäuser/Springer, Basel, 2015.
M. Pierre. Global existence in reaction-diffusion systems with control of mass: a survey. Milan J. Math., 78(2):417-455, 2010.
M. Pierre, T. Suzuki, and Y. Yamada. Dissipative reaction diffusion systems with quadratic growth. Indiana Univ. Math. J., 68(1):291-322, 2019.
J. Prüss, G. Simonett, and M. Wilke. Critical spaces for quasilinear parabolic evolution equations and applications. J. Differential Equations, 264(3):2028-2074, 2018.
F. Rothe. Global solutions of reaction-diffusion systems, volume 1072 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 1984.
M. Salins. Existence and uniqueness for the mild solution of the stochastic heat equation with non-Lipschitz drift on an unbounded spatial domain. Stoch. Partial Differ. Equ. Anal. Comput., 9(3):714-745, 2021.
M. Salins. Global solutions for the stochastic reaction-diffusion equation with super-linear multiplicative noise and strong dissipativity. Electron. J. Probab., 27:Paper No. 12, 17, 2022.
M. Salins. Global solutions to the stochastic reaction-diffusion equation with superlinear accretive reaction term and superlinear multiplicative noise term on a bounded spatial domain. Trans. Amer. Math. Soc., 375(11):8083-8099, 2022.
Y. Sawano. Theory of Besov spaces, volume 56 of Developments in Mathematics. Springer, Singapore, 2018.
H.-J. Schmeisser and H. Triebel. Topics in Fourier analysis and function spaces. A Wiley-Interscience Publication. John Wiley & Sons, Ltd., Chichester, 1987.
P. Stapountzis, H. Tzavellas, and T. Moros. Effects of turbulence on the mixing and chemical reaction for cross flow and coflowing jets. In Advances in Turbulence 3, pages 300-311. Springer Berlin Heidelberg, 1991.
H. Triebel. Interpolation theory, function spaces, differential operators. Johann Ambrosius Barth, Heidelberg, second edition, 1995.
M. Vascellari and G. Cau. Influence of turbulence-chemical interaction on CFD pulverized coal MILD combustion modeling. Fuel, 101:90-101, 2012.
B. Zhang, X. Chang, and C. Bai. End-wall ignition of methane-air mixtures under the effects of CO2/Ar/N2 fluidic jets. Fuel, 270:117485, 2020.
| [] |
[
"Word of low complexity without uniform frequencies",
"Word of low complexity without uniform frequencies"
] | [
"Julien Cassaigne ",
"Idrissa Kaboré "
] | [] | [] | In this paper, we construct a uniformely recurrent infinite word of low complexity without uniform frequencies of letters. This shows the optimality of a bound of Boshernitzan, which gives a sufficient condition for a uniformly recurrent infinite word to admit uniform frequencies.Mathematics Subject Clasification: 37B10, 37A25, 68R15 . 1 1 | 10.1017/etds.2023.37 | [
"https://export.arxiv.org/pdf/2210.02371v1.pdf"
] | 252,715,897 | 2210.02371 | 0ecac35a6c584ae5079baaa748f07b406abc6035 |
Word of low complexity without uniform frequencies
5 Oct 2022
Julien Cassaigne
Idrissa Kaboré
Word of low complexity without uniform frequencies
5 Oct 2022Mathematics Subject Clasification: 37B1037A2568R15 1
In this paper, we construct a uniformely recurrent infinite word of low complexity without uniform frequencies of letters. This shows the optimality of a bound of Boshernitzan, which gives a sufficient condition for a uniformly recurrent infinite word to admit uniform frequencies.Mathematics Subject Clasification: 37B10, 37A25, 68R15 . 1 1
Introduction
Let us consider an infinite word u over a finite alphabet. We can naturally associate to it a subshift. The goal of this paper is to describe some ergodic properties of this subshift. By Oxtoby theorem, we know that the subshift is uniquely ergodic if and only if, in u, each finite word has uniform frequency. Moreover the subshift is minimal if u is uniformly recurrent.
For a long time, people have tried to find conditions on infinite words which imply one of these properties. M. Keane gave in [10] a uniformly recurrent infinite word with complexity 3n+1 (arising from a 4-interval exchange map) which does not possess uniform frequencies. Later, Boshernitzan, in [1], proved that a uniformly recurrent infinite word admits uniform frequencies if either of the following sufficient conditions is satisfied:
lim inf p(n)/n < 2 or lim sup p(n)/n < 3,
where p denotes the complexity function of u (see [3], chap. 4, by J. Cassaigne and F. Nicolas, for more details on the subject). The bound in the second sufficient condition of Boshernitzan being already optimal by Keane's result, our goal in this work is to establish the optimality of the bound in the first sufficient condition. We succeed in constructing a uniformly recurrent infinite word without uniform frequencies whose complexity function verifies lim inf p(n)/n = 2. This result relates properties of the complexity function to the ergodic measures of the subshift. This type of question has been investigated in recent years, the goal being to bound the number of ergodic measures of the subshift in terms of the complexity function.
Boshernitzan was the first to look at it, see [1]. During his PhD thesis, T. Monteil (see [11] and [3], chap. 3, by S. Ferenczi and Th. Monteil) proved the same result with different techniques: if lim sup p(n)/n = K ≥ 2, then the subshift has at most K − 2 ergodic measures. Since then, some results have appeared in the same vein: V. Cyr and B. Kra have also obtained similar results, see [6, 7]. In the first paper, they prove that the bound of Boshernitzan is sharp. In the second one, they construct minimal subshifts with complexity function arbitrarily close to linear but having uncountably many ergodic measures. We can also cite M. Damron and J. Fickenscher [8], who obtained the bound (K + 1)/2 under a condition on the bispecial words. Nevertheless, it seems that our proof is of a different nature, with an explicit construction of the infinite word.
After the preliminaries (Section 2), we construct a uniformly recurrent infinite word in Section 3. We then show in Section 4 that this word does not have uniform frequencies of letters. Finally, we study the complexity of this word in Section 5 and give in Section 6 the proof of the main statement of Section 3.
Preliminaries
In all that follows we consider the alphabet A = {0, 1}. We denote by A* the set of finite words over A and by ε the empty word. For all u in A*, |u| denotes the length of u (the number of letters it contains, with |ε| = 0), and for any letter x of A, |u|_x is the number of occurrences of the letter x in u. We call Parikh vector of a finite word u the vector U = (|u|_0, |u|_1)^T.
A finite word u of length n formed by repeating a single letter x is typically denoted x n . We define the n-th power of a finite word w as being the concatenation of n copies of w; we denote it w n . An infinite word is an infinite sequence of letters of A. We denote A ω the set of infinite words on A.
We say a finite word v is a factor of u if there exist two words u_1 and u_2 on the alphabet A such that u = u_1 v u_2; we also say that u contains v. The factor v is said to be a prefix (resp. a suffix) if u_1 (resp. u_2) is the empty word. For any word u, the set of factors of length n is denoted L_n(u). The set of all factors of u is simply denoted L(u). A factor v of u is moreover said to be • a bispecial factor of u if v is simultaneously a right special factor and a left special factor of u.
• a strong bispecial factor of u if 0v0, 0v1, 1v0, 1v1 are all factors of u, and a weak bispecial factor if only 0v0 and 1v1, or only 0v1 and 1v0, are factors of u.
• an ordinary bispecial factor of u if v is a bispecial factor of u which is neither strong nor weak.
An infinite word u is said to be recurrent if any factor of u appears infinitely often. An infinite word u is uniformly recurrent if for all n ∈ N, there exists N such that any factor of u of length N contains all the factors of u of length n. Definition 2.2. Let u be an infinite word on an alphabet A. The complexity function of u is a function counting the number of distinct factor of u of length n for any given n. It is denoted p and so that:
p(n) = #L n (u).
Let us denote s and b the functions respectively called first difference and second difference of the complexity of u; they are defined as follows: s(n) = p(n + 1) − p(n) and b(n) = s(n + 1) − s(n).
On a binary alphabet the function s counts the number of special factors for a given length in u. Let us denote m the map from L(u) to {−1, 0, +1} defined by
∀v ∈ L(u), m(v) = −1 if v is a weak bispecial factor, +1 if v is a strong bispecial factor, and 0 otherwise.
The following formula was given by the first author in [5]:
∀n ≥ 0, s(n) = 1 + Σ_{w ∈ L(u), |w| < n} m(w) = 1 + Σ_{w bispecial, |w| < n} m(w).
This relation allows one to compute the complexity p(n) provided we are able to describe the set of strong and weak bispecial factors of the binary infinite word u. • We say that u admits frequencies if for any factor w and any sequence (u_n) of prefixes of u such that lim_{n→∞} |u_n| = ∞, the limit lim_{n→∞} |u_n|_w/|u_n| exists. • We say that u admits uniform frequencies if for any factor w and any sequence (u_n) of factors of u such that lim_{n→∞} |u_n| = ∞, the limit lim_{n→∞} |u_n|_w/|u_n| exists.
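The quantities just defined (the factor sets L_n, the complexity p, its first difference s, and the map m on bispecial factors) can be computed by brute force on a finite word. The Python sketch below is only an illustration and is not taken from the paper; applied to a long prefix of an infinite word, its output is reliable only for lengths n much smaller than the prefix length, and the example substitution at the end is an arbitrary choice.

def factors(w, n):
    """All distinct factors of length n of the finite word w."""
    return {w[i:i + n] for i in range(len(w) - n + 1)}

def p(w, n):
    return len(factors(w, n))

def s(w, n):
    return p(w, n + 1) - p(w, n)

def m(w, v):
    """+1 if v is strong bispecial, -1 if weak bispecial, 0 otherwise
    (computed inside the finite word w, ignoring boundary effects)."""
    ext = {(w[i], w[i + 1 + len(v)])
           for i in range(len(w) - len(v) - 1) if w[i + 1:i + 1 + len(v)] == v}
    if len({a for a, _ in ext}) < 2 or len({b for _, b in ext}) < 2:
        return 0                               # v is not bispecial
    return {4: 1, 2: -1}.get(len(ext), 0)      # 4 extensions: strong; 2: weak; 3: ordinary

# Example usage on a prefix of the fixed point of 0 -> 0010, 1 -> 0011
# (chosen only for illustration; any finite binary word can be used):
w = "0"
for _ in range(6):
    w = "".join("0010" if c == "0" else "0011" for c in w)
for n in range(8):
    bispecials = [v for v in factors(w, n) if m(w, v) != 0]
    print(n, p(w, n), s(w, n), bispecials)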
In [10], M. Keane gave an example of a uniformly recurrent infinite word with complexity 3n + 1 which does not possess uniform frequencies. Later, Boshernitzan [1] obtained the following results: Theorem 2.1. Let u be an infinite word on an alphabet A. Then, u admits uniform frequencies if its complexity function verifies at least one of the following conditions:
• lim inf p(n)/n < 2, • lim sup p(n)/n < 3.
The example of Keane ensures that the constant 3 is optimal in the second condition, i.e., it cannot be replaced with a larger constant.
Construction of a class of uniformly recurrent words
Let (l i ), (m i ), (n i ) be three integer sequences which are strictly increasing and verify the following conditions:
• l_i < m_i < n_i, • m_i/l_i increases exponentially to +∞, • n_i/m_i increases exponentially to +∞.
Let us define in A * two sequences (u i ) and (v i ) in the following way:
u_0 = 0, v_0 = 1 and, for all i ∈ N, u_{i+1} = u_i^{m_i} v_i^{l_i} and v_{i+1} = u_i^{m_i} v_i^{n_i}. The sequence (u_i) converges towards an infinite word u. For i ≥ 0, consider the substitution σ_i defined by σ_i(0) = 0^{m_i} 1^{l_i}, σ_i(1) = 0^{m_i} 1^{n_i}. Then, we have u_i = σ_0 σ_1 σ_2 ... σ_{i−1}(0) and v_i = σ_0 σ_1 σ_2 ... σ_{i−1}(1).
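A minimal Python sketch of the construction (not from the paper) simply iterates the recurrences u_{i+1} = u_i^{m_i} v_i^{l_i}, v_{i+1} = u_i^{m_i} v_i^{n_i}. The toy parameters below are hypothetical, chosen only so that l_i < m_i < n_i; the parameters actually fixed later in ( * ) give words far too long to write out explicitly.

toy_params = [(1, 2, 4), (2, 8, 32), (4, 64, 512)]   # (l_i, m_i, n_i) for i = 0, 1, 2

u, v = "0", "1"
for (l, m, n) in toy_params:
    u, v = u * m + v * l, u * m + v * n
    assert v.startswith(u)   # u_i is a strict prefix of v_i, a fact used in Lemma 4.1 below
    print(len(u), len(v), u.count("0") / len(u), v.count("1") / len(v))

Equivalently, u_i = σ_0 σ_1 ... σ_{i−1}(0) with σ_i(0) = 0^{m_i} 1^{l_i} and σ_i(1) = 0^{m_i} 1^{n_i}.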
Theorem 3.1. Any infinite word u so defined is uniformly recurrent.
The proof is given at the end of this paper.
4 The word u is without uniform frequencies Lemma 4.1. For all i ≥ 1 we have:
1. |u_i|_0/|u_i| ≥ (1 + l_0/m_0)^{−1} Π_{j=1}^{i−1} (1 + l_j n_{j−1}/(m_j l_{j−1}))^{−1};
2. |v_i|_1/|v_i| ≥ Π_{j=0}^{i−1} (1 + m_j/n_j)^{−1}.
Proof. • Lower bound on |u_{i+1}|_0/|u_{i+1}|. Firstly, we have for all i ≥ 0, |u_i| ≤ |v_i|, since |u_0| = |v_0| = 1 and u_i is a strict prefix of v_i for i ≥ 1. Then
|v_i|/|u_i| = (m_{i−1}|u_{i−1}| + n_{i−1}|v_{i−1}|)/(m_{i−1}|u_{i−1}| + l_{i−1}|v_{i−1}|) ≤ n_{i−1}/l_{i−1},
since l_{i−1} < n_{i−1}, for i ≥ 1. As |u_{i+1}|_0 = m_i|u_i|_0 + l_i|v_i|_0 ≥ m_i|u_i|_0 and |u_{i+1}| = m_i|u_i| + l_i|v_i| = |u_i|(m_i + l_i|v_i|/|u_i|),
we deduce the following inequalities:
|u_{i+1}| ≤ |u_i|(m_i + l_i n_{i−1}/l_{i−1}) and |u_{i+1}|_0/|u_{i+1}| ≥ (1 + l_i n_{i−1}/(m_i l_{i−1}))^{−1} · |u_i|_0/|u_i|.
Thus
|u_i|_0/|u_i| ≥ (|u_1|_0/|u_1|) Π_{j=1}^{i−1} (1 + l_j n_{j−1}/(m_j l_{j−1}))^{−1}. • Lower bound on |v_{i+1}|_1/|v_{i+1}|.
We have
|v_{i+1}|_1 = m_i|u_i|_1 + n_i|v_i|_1 ≥ n_i|v_i|_1 and |v_{i+1}| = m_i|u_i| + n_i|v_i| ≤ |v_i|(m_i + n_i), since |u_i| ≤ |v_i|. So |v_{i+1}|_1/|v_{i+1}| ≥ n_i/(m_i + n_i) · |v_i|_1/|v_i|.
Hence
|v_i|_1/|v_i| ≥ Π_{j=0}^{i−1} (1 + m_j/n_j)^{−1}.
In the rest of the paper we need to fix
l_i = 2^{2·2^i+4}, m_i = 2^{8·2^i}, and n_i = 2^{10·2^i}, for i ≥ 0. ( * )
Then the inequalities of Lemma 4.1 become:
1. |u_i|_0/|u_i| ≥ Π_{j=1}^{i} (1 + 2^{−2^j})^{−1};
2. |v_i|_1/|v_i| ≥ Π_{j=1}^{i} (1 + 2^{−2^j})^{−1}.
So we get Lemma 4.2. ∀i ≥ 1, min( |u_i|_0/|u_i|, |v_i|_1/|v_i| ) ≥ Π_{j=1}^{i} (1 + 2^{−2^j})^{−1}.
Then we have the following lemma:
Lemma 4.3. ∀i ∈ N, |u_i|_0/|u_i| + |v_i|_1/|v_i| ≥ 3/2.
Proof.
• For i = 0, the inequality is evident.
• For i ≥ 1, write P_i = Π_{j=1}^{i} (1 + 2^{−2^j})^{−1}. The sequence (P_i) is decreasing and satisfies the following induction formula:
P_{i+1} = (1 + 2^{−2^{i+1}})^{−1} P_i.
Let us show, by induction, that
(4/3) P_i = (1 − 2^{−2^{i+1}})^{−1}. We have (4/3) P_0 = (1 − 2^{−2})^{−1}.
Assuming that for some i ≥ 0,
(4/3) P_i = (1 − 2^{−2^{i+1}})^{−1}, it follows:
(4/3) P_{i+1} = (4/3) P_i × (1 + 2^{−2^{i+1}})^{−1} = (1 − 2^{−2^{i+1}})^{−1} × (1 + 2^{−2^{i+1}})^{−1} = (1 − 2^{−2^{i+2}})^{−1}.
So P_i = (3/4) × (1 − 2^{−2^{i+1}})^{−1}. Hence, with Lemma 4.2 we get
|u_i|_0/|u_i| + |v_i|_1/|v_i| ≥ 2 × (3/4) × (1 − 2^{−2^{i+1}})^{−1} ≥ 3/2.
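Before moving on, here is a small numerical illustration of Lemmas 4.1-4.3 (this sketch is not part of the paper). With the parameters fixed in ( * ), only the Parikh vectors of u_i and v_i need to be tracked (the words themselves are astronomically long), and exact rational arithmetic is used to compare the letter frequencies with the bounds above.

from fractions import Fraction

def lmn(i):
    return 2**(2 * 2**i + 4), 2**(8 * 2**i), 2**(10 * 2**i)   # (l_i, m_i, n_i) from (*)

U, V = (1, 0), (0, 1)                     # Parikh vectors of u_0 = 0 and v_0 = 1
bound = Fraction(1)
for i in range(5):
    l, m, n = lmn(i)
    U, V = (m*U[0] + l*V[0], m*U[1] + l*V[1]), (m*U[0] + n*V[0], m*U[1] + n*V[1])
    bound /= 1 + Fraction(1, 2**(2**(i + 1)))       # P_{i+1} = Pi_{j=1}^{i+1} (1 + 2^{-2^j})^{-1}
    freq0_u = Fraction(U[0], U[0] + U[1])           # |u_{i+1}|_0 / |u_{i+1}|
    freq1_v = Fraction(V[1], V[0] + V[1])           # |v_{i+1}|_1 / |v_{i+1}|
    assert min(freq0_u, freq1_v) >= bound           # Lemma 4.2
    assert freq0_u + freq1_v >= Fraction(3, 2)      # Lemma 4.3
    print(i + 1, float(freq0_u), float(freq1_v), float(freq0_u + freq1_v))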
Lemma 4.4. The letters of the word u do not admit uniform frequencies.
Proof. If the letters of u possessed uniform frequencies, then the frequencies of 0 and 1, respectively denoted f u (0) and
f_u(1), should verify f_u(0) = lim_{i→∞} |u_i|_0/|u_i|, f_u(1) = lim_{i→∞} |v_i|_1/|v_i| and f_u(0) + f_u(1) = 1.
That is contradictory with Lemma 4.3.
Complexity of u
To estimate the complexity of u we are going to observe its bispecial factors.
Notation 5.1. Let h, i ∈ N. We denote by u^{(h)}_i the finite word σ_h σ_{h+1} σ_{h+2} ... σ_{h+i−1}(0) and by u^{(h)} the infinite word lim_{i→∞} u^{(h)}_i. However, u^{(0)}_i and u^{(0)} are simply denoted u_i and u, respectively.
Definition 5.1. A factor of u^{(h)} is said to be short if it does not contain 10 as a factor. A factor of u^{(h)} which is not short is said to be long.
Proof of Lemma 5.1 (synchronization lemma). Since w is long, it cannot occur inside the image of one letter. Any occurrence of w in u^{(h)} is therefore of the form sσ_h(v)p, so existence follows.
Uniqueness is a consequence of the fact that 10 occurs in u^{(h)} only at the border between two images of letters under σ_h.
Proof of Lemma 5.2. Let us first observe that in u^{(h)}, the factor 01 is always preceded by 10^{m_h−1}. Therefore a bispecial factor containing 01 must also contain 10 and is long.
Then the short bispecial factors are all of the form 0^k or 1^k, k ≥ 0. We see that ε is strong bispecial (extensions 00, 01, 10, 11); 0^k (0 < k < m_h − 1) is ordinary bispecial (extensions 00^k0, 00^k1, 10^k0), as well as 1^k (1 ≤ k < n_h − 1, k ≠ l_h); 1^{l_h} is strong bispecial (extensions 01^{l_h}0, 01^{l_h}1, 11^{l_h}0, 11^{l_h}1); 0^{m_h−1} is weak bispecial (extensions 00^{m_h−1}1 and 10^{m_h−1}0), as well as 1^{n_h−1}; 0^{m_h} and 1^{n_h} are not special, and 0^k (k > m_h) and 1^k (k > n_h) are not factors.
of u^{(h+1)} such that w = σ̄_h(v), where σ̄_h(v) denotes 1^{l_h} σ_h(v) 0^{m_h} 1^{l_h}.
Moreover v and w have the same type and |v| < |w|.
Proof. First, let us observe the following fact: if a finite word v is a factor of u^{(h+1)}, then σ̄_h(v) = 1^{l_h} σ_h(v) 0^{m_h} 1^{l_h} is a factor of u^{(h)}. Now, let us consider a bispecial factor v of u^{(h+1)}. Therefore the words σ̄_h(0v), σ̄_h(1v), σ̄_h(v0) and σ̄_h(v1) are factors of u^{(h)}; moreover 0σ̄_h(v) and 1σ̄_h(v) are respectively suffixes of the first two words, whereas σ̄_h(v)0 and σ̄_h(v)1 are respectively prefixes of the last two words. Hence, the word w = σ̄_h(v) is bispecial in u^{(h)}, and m(w) ≥ m(v).
Conversely, let w be a long bispecial factor of u (h) . Then, according to the synchronization lemma, we can write w uniquely in the form sσ h (v)p where s and p are respectively non-empty suffix and prefix of images of letters.
As 0w and 1w are factors of u^{(h)}, and σ_h(v)p starts with 0, it follows that 0s0 and 1s0 are factors of u^{(h)}. This is only possible if s = 1^{l_h} (s = 1^k with 1 ≤ k < l_h or l_h < k < n_h is excluded since 0s0 ∉ L(u^{(h)}); s = 0^k 1^{l_h} with 1 ≤ k < m_h and s = 0^k 1^{n_h} with 0 ≤ k < m_h are excluded since 1s0 ∉ L(u^{(h)}); and s = 0^{m_h} 1^{l_h} and s = 0^{m_h} 1^{n_h} are excluded since 0s0 ∉ L(u^{(h)})). Similarly, 1p0 and 1p1 are factors of u^{(h)}, and this is only possible if
p = 0^{m_h} 1^{l_h}. Therefore w = σ̄_h(v).
If w extends as awb with a, b ∈ A, then v also extends as avb. Therefore m(v) ≥ m(w). It follows that m(v) = m(w): v and w have the same type. Moreover, it is clear that |v| < |w|.
In fact, long bispecial factors of u^{(h)} are the images by σ̄_h of the "less long" bispecial factors of u^{(h+1)}. Thus, step by step, any non-ordinary bispecial factor w of u^{(h)} of a given type can be written in the form σ̄_h σ̄_{h+1} ... σ̄_{h+i−1}(v), where v is a short bispecial factor of u^{(h+i)} with the same type. We will call bispecial factors of rank i (i ≥ 0) of u^{(h)}, and write a^{(h)}_i, b^{(h)}_i, c^{(h)}_i, d^{(h)}_i, the following words: a^{(h)}_i = σ̄_h σ̄_{h+1} ... σ̄_{h+i−1}(ε), b^{(h)}_i = σ̄_h σ̄_{h+1} ... σ̄_{h+i−1}(1^{l_{h+i}}), c^{(h)}_i = σ̄_h σ̄_{h+1} ... σ̄_{h+i−1}(0^{m_{h+i}−1}) and d^{(h)}_i = σ̄_h σ̄_{h+1} ... σ̄_{h+i−1}(1^{n_{h+i}−1}). The short bispecial factors ε, 1^{l_h}, 0^{m_h−1} and 1^{n_h−1} of u^{(h)} are the bispecial factors of rank 0: a^{(h)}_0, b^{(h)}_0, c^{(h)}_0, and d^{(h)}_0.
The non-ordinary bispecial factors of u are therefore a_i = a^{(0)}_i, b_i = b^{(0)}_i, c_i = c^{(0)}_i, d_i = d^{(0)}_i.
Definition 5.2.
Let v, w ∈ A * and V, W be their corresponding Parikh vectors. Let us say that V is less than W and write V < W when |v| a ≤ |w| a for all a ∈ A and |v| < |w|.
Proposition 5.1. Let v, w, v′, w′ be four words such that v′ = σ̄_i(v) and w′ = σ̄_i(w). Then V < W ⟹ V′ < W′.
Proof. Assume that V < W . Then, |v| 0 ≤ |w| 0 , |v| 1 ≤ |w| 1 , and |v| < |w|.
On the one hand, we have |v ′ | 0 = m i (|v| + 1) and |w ′ | 0 = m i (|w| + 1); hence |v ′ | 0 < |w ′ | 0 . On the other hand, we have |v ′ | 1 = l i |v| 0 + n i |v| 1 + 2l i and
|w′|_1 = l_i|w|_0 + n_i|w|_1 + 2l_i; so |v′|_1 ≤ |w′|_1. Finally, |v′| = |v′|_0 + |v′|_1 < |w′|_0 + |w′|_1 = |w′|.
Lemma 5.4. For all i ≥ 0, let A i , B i , C i , D i be the Parikh vectors corresponding to the non-ordinary bispecial factors of u,
a_i, b_i, c_i, d_i. Then we have, ∀i ≥ 1, D_{i−1} < B_i < C_i < A_{i+1} < D_i. Proof. Applying σ̄_{i−1} to the words b^{(i)}_0, c^{(i)}_0, σ̄_i(a^{(i+1)}_0) = 1^{l_i} 0^{m_i} 1^{l_i}, and d^{(i)}_0,
we get the following words
d^{(i−1)}_0 = 1^{n_{i−1}−1},
b^{(i−1)}_1 = 1^{l_{i−1}} (0^{m_{i−1}} 1^{n_{i−1}})^{l_i} 0^{m_{i−1}} 1^{l_{i−1}},
c^{(i−1)}_1 = 1^{l_{i−1}} (0^{m_{i−1}} 1^{l_{i−1}})^{m_i−1} 0^{m_{i−1}} 1^{l_{i−1}},
a^{(i−1)}_2 = 1^{l_{i−1}} (0^{m_{i−1}} 1^{n_{i−1}})^{l_i} (0^{m_{i−1}} 1^{l_{i−1}})^{m_i} (0^{m_{i−1}} 1^{n_{i−1}})^{l_i} 0^{m_{i−1}} 1^{l_{i−1}},
d^{(i−1)}_1 = 1^{l_{i−1}} (0^{m_{i−1}} 1^{n_{i−1}})^{n_i−1} 0^{m_{i−1}} 1^{l_{i−1}}.
The Parikh vectors corresponding to these words are:
D^{(i−1)}_0 = (0, n_{i−1} − 1)^T, B^{(i−1)}_1 = (m_{i−1}(l_i + 1), n_{i−1} l_i + 2 l_{i−1})^T, C^{(i−1)}_1 = (m_i m_{i−1}, l_{i−1}(m_i + 1))^T, A^{(i−1)}_2 = (m_{i−1}(m_i + 2 l_i + 1), l_{i−1}(m_i + 2) + 2 l_i n_{i−1})^T, D^{(i−1)}_1 = (m_{i−1} n_i, n_{i−1}(n_i − 1) + 2 l_{i−1})^T.
From ( * ) we have
n i−1 l i + l i−1 < l i−1 m i , m i + 2l i + 1 < n i , l i−1 m i + 2n i−1 l i < n i−1 (n i − 1) .
The following inequalities follow:
D^{(i−1)}_0 < B^{(i−1)}_1 < C^{(i−1)}_1 < A^{(i−1)}_2 < D^{(i−1)}_1.
Applying σ̄_{i−2} to the words d^{(i−1)}_0, b^{(i−1)}_1, c^{(i−1)}_1, a^{(i−1)}_2, and d^{(i−1)}_1, we get the words d^{(i−2)}_1, b^{(i−2)}_2, c^{(i−2)}_2, a^{(i−2)}_3, and d^{(i−2)}_2; by Proposition 5.1, the following inequalities result:
D^{(i−2)}_1 < B^{(i−2)}_2 < C^{(i−2)}_2 < A^{(i−2)}_3 < D^{(i−2)}_2.
And so on, after the i-th iteration we get:
D^{(0)}_{i−1} < B^{(0)}_i < C^{(0)}_i < A^{(0)}_{i+1} < D^{(0)}_i. Lemma 5.5. ∀i ≥ 0, |b_i| < |c_i| < |a_{i+1}| < |d_i| < |b_{i+1}|. Proof. • For i ≥ 1, the inequalities |b_i| < |c_i| < |a_{i+1}| < |d_i| < |b_{i+1}| follow from Lemma 5.4. • For i = 0, recall that |b_0| = l_0, |c_0| = m_0 − 1, |a_1| = 2l_0 + m_0, |d_0| = n_0 − 1 and |b_1| = l_1(m_0 + n_0) + m_0 + 2l_0. So |b_0| < |c_0| < |a_1| < |d_0| < |b_1|.
Lemma 5.6. The function s associated to the word u verifies:
∀n ∈ N,
s(n) = 1 if n = 0,
s(n) = 2 if n ∈ ]0, |b_0|] ∪ ⋃_{i≥0} ( ]|c_i|, |a_{i+1}|] ∪ ]|d_i|, |b_{i+1}|] ),
s(n) = 3 if n ∈ ⋃_{i≥0} ( ]|b_i|, |c_i|] ∪ ]|a_{i+1}|, |d_i|] ).
Proof. Let n ∈ N. We know that a i , b i , c i , and d i , i ≥ 0 are the only bispecial factors of u which are strong or weak. Hence, we have
s(n) = 1 + Σ_{w bispecial, |w| < n} m(w) = 1 + #{i ≥ 0 : |a_i| < n} + #{i ≥ 0 : |b_i| < n} − #{i ≥ 0 : |c_i| < n} − #{i ≥ 0 : |d_i| < n}.
Since for m ∈ ]0, |b_0|[ there is no strong or weak bispecial factor of u of length m, we have, for 0 < n ≤ |b_0|,
s(n) = 1 + Σ_{w bispecial, |w| ≤ n−1} m(w) = 1 + m(ε) = 2.
Suppose n > |b 0 |. Then, there exists i ∈ N such that n ∈ [|b i |, |b i+1 |[. Since the sequences |a i |, |b i |, |c i |, and |d i | are increasing we are in one of the following cases:
• n ∈ [|b_i|, |c_i|[: then s(n) = 1 + (i + 1) + (i + 1) − i − i = 3.
• n ∈ [|c_i|, |a_{i+1}|[: then s(n) = 1 + (i + 1) + (i + 1) − (i + 1) − i = 2.
• n ∈ [|a_{i+1}|, |d_i|[: then s(n) = 1 + (i + 2) + (i + 1) − (i + 1) − i = 3.
• n ∈ [|d_i|, |b_{i+1}|[: then s(n) = 1 + (i + 2) + (i + 1) − (i + 1) − (i + 1) = 2.
Theorem 5.2. The complexity function p of u verifies:
∀n ≥ 1, p(n) ≤ 3n + 1.
Proof. By Lemma 5.6, s(n) ≤ 3 for all n ≥ 0. So p(n) = p(0) + Σ_{m=0}^{n−1} s(m) ≤ p(0) + 3n = 3n + 1.
Proposition 5.3.
Let v, w, v′, w′ be four finite words such that v′ = σ̄_i(v) and w′ = σ̄_i(w). Then for all λ > 0 we have:
W > λ(V + (1, 1)^T) ⟹ W′ > λ(V′ + (1, 1)^T).
Proof. Assume that W > λ(V + (1, 1)^T). Since |v′|_0 = m_i(|v| + 1) and |v′|_1 = l_i|v|_0 + n_i|v|_1 + 2l_i, we have:
V′ = [[m_i, m_i], [l_i, n_i]] · (|v|_0, |v|_1)^T + (m_i, 2l_i)^T.
In the same way, we write W′ (it suffices to replace V with W in the previous formula). It follows that
W′ − λ(V′ + (1, 1)^T) = [[m_i, m_i], [l_i, n_i]] · (|w|_0 − λ|v|_0, |w|_1 − λ|v|_1)^T + (1 − λ)(m_i, 2l_i)^T − λ(1, 1)^T. Since W > λV + λ(1, 1)^T and (1 − λ)(m_i, 2l_i)^T > −λ(m_i, 2l_i)^T,
it follows that:
W′ − λ(V′ + (1, 1)^T) > λ [ [[m_i, m_i], [l_i, n_i]] · (1, 1)^T − (m_i + 1, 2l_i + 1)^T ] = λ (m_i − 1, n_i − l_i − 1)^T > (0, 0)^T.
This proposition allows to prove the following lemma:
Lemma 5.7. ∀i ≥ 0, B_{i+1} > l_{i+1}(D_i + (1, 1)^T).
Proof. Let us choose an integer i ≥ 1. Then we have b^{(i)}_1 = σ̄_i(b^{(i+1)}_0) = 1^{l_i} (0^{m_i} 1^{n_i})^{l_{i+1}} 0^{m_i} 1^{l_i} and d^{(i)}_0 = 1^{n_i−1}; the corresponding Parikh vectors are B^{(i)}_1 = (m_i(l_{i+1} + 1), n_i l_{i+1} + 2 l_i)^T and D^{(i)}_0 = (0, n_i − 1)^T. The following inequality then holds:
B^{(i)}_1 > l_{i+1}(D^{(i)}_0 + (1, 1)^T).
By regressive induction on j ≤ i, suppose that:
B^{(j)}_{i+1−j} > l_{i+1}(D^{(j)}_{i−j} + (1, 1)^T), where B^{(j)}_{i+1−j} and D^{(j)}_{i−j} denote the Parikh vectors of b^{(j)}_{i+1−j} and d^{(j)}_{i−j}. Then, by Proposition 5.3, B^{(j−1)}_{i+2−j} > l_{i+1}(D^{(j−1)}_{i−j+1} + (1, 1)^T), since B^{(j−1)}_{i+2−j} and D^{(j−1)}_{i−j+1} are respectively the Parikh vectors of b^{(j−1)}_{i+2−j} = σ̄_{j−1}(b^{(j)}_{i+1−j}) and d^{(j−1)}_{i−j+1} = σ̄_{j−1}(d^{(j)}_{i−j}). So, B^{(j)}_{i+1−j} > l_{i+1}(D^{(j)}_{i−j} + (1, 1)^T) for 0 ≤ j ≤ i.
Taking j = 0 in the inequality above yields the lemma.
Theorem 5.4. The complexity function p of u verifies lim inf p(n)/n = 2.
Proof. We have s(n) = 2 for |d_i| < n ≤ |b_{i+1}|. So p(|b_{i+1}|) = p(|d_i|) + 2(|b_{i+1}| − |d_i|).
By Lemma 5.6, we have p(n) ≤ 3n + 1 and we deduce that:
p(|b_{i+1}|) ≤ 2|b_{i+1}| + 1 + (1/l_{i+1})|b_{i+1}|, since B_{i+1} > l_{i+1}(D_i + (1, 1)^T) > l_{i+1} D_i. So p(|b_{i+1}|)/|b_{i+1}| ≤ 2 + 1/|b_{i+1}| + 1/l_{i+1} and lim_{i→∞} p(|b_{i+1}|)/|b_{i+1}| = 2.
Thus, lim inf p(n)/n = 2, since s(n) ≥ 2 (for all n ≥ 1) implies lim inf p(n)/n ≥ 2.
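The computation above can be checked numerically with the actual parameters ( * ). The Python sketch below is only an illustration and is not part of the paper; it uses the affine action of σ̄_i on Parikh vectors (proof of Proposition 5.1), the rank-0 bispecial factors (Lemma 5.2), the piecewise description of s(n) (Lemma 5.6, which also gives the starting value p(|b_0|) = 2|b_0|), and the ordering of Lemma 5.5, and watches p(|b_{i+1}|)/|b_{i+1}| approach 2.

def lmn(i):
    return 2**(2 * 2**i + 4), 2**(8 * 2**i), 2**(10 * 2**i)      # (l_i, m_i, n_i) from (*)

def bar_sigma(i, x):
    """Parikh vector of sigma-bar_i(w) given the Parikh vector x of w."""
    x0, x1 = x
    l, m, n = lmn(i)
    return (m * (x0 + x1) + m, l * x0 + n * x1 + 2 * l)

def bispecial_lengths(i):
    """Lengths |a_i|, |b_i|, |c_i|, |d_i| of the rank-i bispecial factors of u."""
    l, m, n = lmn(i)
    vecs = [(0, 0), (0, l), (m - 1, 0), (0, n - 1)]              # a, b, c, d of rank 0 in u^(i)
    for h in reversed(range(i)):
        vecs = [bar_sigma(h, v) for v in vecs]
    return [x0 + x1 for (x0, x1) in vecs]

p = 2 * bispecial_lengths(0)[1]                                  # p(|b_0|) = 2|b_0|
for i in range(4):
    a, b, c, d = bispecial_lengths(i)
    a1, b1 = bispecial_lengths(i + 1)[:2]
    assert b < c < a1 < d < b1                                   # Lemma 5.5
    p += 3*(c - b) + 2*(a1 - c) + 3*(d - a1) + 2*(b1 - d)        # Lemma 5.6, summed over ]|b_i|, |b_{i+1}|]
    print(i + 1, p / b1)                                         # p(|b_{i+1}|)/|b_{i+1}|, which tends to 2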
6 Proof of theorem 3.1 Now, with Notation 5.1 we are able to explain the proof of Theorem 3.1.
Proof. Let us show that for i ≥ 0, there exists N i such that any factor of u of length N i contains the prefix u i . Indeed, u does not contain 1 n 0 +1 .
• For i = 0 any factor of u of length N 0 = n 0 + 1 contains the prefix 0 = u 0 .
• For i ≥ 1, any factor of u (i) of length N i , it follows that any factor of u = u (0) of length N i contains the word u i . This completes the proof.
Definition 2.1. Let u be an infinite word on the alphabet A = {0, 1}. A factor v of u is said to be • a right special factor if v0 and v1 are both factors of u, and a left special factor if 0v and 1v are both factors of u.
Definition 2.3. Two bispecial factors v and w of an infinite word u on the alphabet {0, 1} are said to have the same type if they are both strong, both weak, or both ordinary. In other words, the bispecial factors v and w have the same type if m(v) = m(w). Definition 2.4 ([3], chap. 7, by S. Ferenczi and T. Monteil). Let u be an infinite word on an alphabet A.
Lemma 5.1 (Synchronization lemma). Let w be a long factor of u^{(h)}. Then there exist x, y ∈ A and v ∈ A* such that xvy is a factor of u^{(h+1)} and w = sσ_h(v)p, where s is a non-empty suffix of σ_h(x), and p is a non-empty prefix of σ_h(y). Moreover, the triple (s, v, p) is unique.
Lemma 5.2. 1. The short and strong bispecial factors of u^{(h)} are ε and 1^{l_h}. 2. The short and weak bispecial factors of u^{(h)} are 0^{m_h−1} and 1^{n_h−1}.
Lemma 5.3. Let w be a factor of u^{(h)}. Then the following assertions are equivalent: (1) w is a long bispecial factor of u^{(h)}. (2) There exists a bispecial factor v of u
u (i) . Thus, any factor of u (i−1) regressive induction on j, suppose that for j ≤ i − 1, there exists N (j) i−j such that any factor of u (j) of length N
Acknowledgments. The authors would like to thank CNRS DERCI DSCA for its support during this work.
[1] M. Boshernitzan. A unique ergodicity of minimal symbolic flows with minimal block growth. J. Analyse Math., 44 (1985), 77-96.
[2] V. Berthé. Fréquences des facteurs des suites sturmiennes. Theoret. Comput. Sci., 165 (1996), 295-309.
[3] CANT, Combinatorics, Automata and Number Theory. V. Berthé, M. Rigo (Eds), Encyclopedia of Mathematics and its Applications 135, Cambridge University Press, 2010.
[4] J. Cassaigne. Sequences with grouped factors. In Developments in Language Theory III (DLT'97), pp. 211-222, Aristotle University of Thessaloniki, 1998.
[5] J. Cassaigne. Complexité et facteurs spéciaux. Bull. Belg. Math. Soc., 4 (1997), 67-88.
[6] V. Cyr, B. Kra. Counting generic measures for a subshift of linear growth. J. Eur. Math. Soc. (JEMS), 21 (2019), 355-380.
[7] V. Cyr, B. Kra. Realizing ergodic properties in zero entropy subshifts. Isr. J. Math., 240(1) (2020), 119-148.
[8] M. Damron, J. Fickenscher. The number of ergodic measures for transitive subshifts under the regular bispecial condition. Ergodic Theory Dyn. Syst., 42(1) (2022), 86-140.
[9] N. Pytheas Fogg. Substitutions in Dynamics, Arithmetics and Combinatorics. Lecture Notes in Mathematics 1794, Springer-Verlag Berlin Heidelberg, 2002.
[10] M. Keane. Non-ergodic interval exchange transformations. Israel J. Math., 26 (1977), 188-196.
[11] T. Monteil. Illumination dans les billards polygonaux et dynamique symbolique. Ph.D. thesis, Institut de Mathématiques de Luminy, Université de la Méditerranée, 2005. Chapter 5.
case 907 F-13288 Marseille Cedex 9 France [email protected] Idrissa Kaboré UFR-Sciences Exactes et Appliquées Université Nazi Boni 01 BP 1091. Bobo-Dioulasso1Julien CASSAIGNE Institut de Mathématiques de Marseille 163 avenue de LuminyJulien CASSAIGNE Institut de Mathématiques de Marseille 163 avenue de Luminy, case 907 F-13288 Marseille Cedex 9 France [email protected] Idrissa Kaboré UFR-Sciences Exactes et Appliquées Université Nazi Boni 01 BP 1091 Bobo-Dioulasso 01
. Burkina Faso Ikaborei@yahoo, Burkina Faso [email protected]
| [] |
[
"Experimental study of quantum thermodynamics using optical vortices",
"Experimental study of quantum thermodynamics using optical vortices"
] | [
"R Medeiros De Araújo \nDepartamento de Física\nUniversidade Federal de Santa Catarina\nFlorianópolisSCBrazil\n",
"T Häffner \nDepartamento de Física\nUniversidade Federal de Santa Catarina\nFlorianópolisSCBrazil\n",
"R Bernardi \nDepartamento de Física\nUniversidade Federal de Santa Catarina\nFlorianópolisSCBrazil\n",
"D S Tasca \nInstituto de Física\nUniversidade Federal Fluminense\nNiteróiRJBrazil\n",
"M P J Lavery \nSchool of Engineering\nUniversity of Glasgow\nUK\n",
"M J Padgett \nSchool of Physics and Astronomy\nUniversity of Glasgow\nG12 8QQGlasgowUK\n",
"A Kanaan \nDepartamento de Física\nUniversidade Federal de Santa Catarina\nFlorianópolisSCBrazil\n",
"L C Céleri \nInstituto de Física\nUniversidade Federal de Goiás\nGoiâniaGOBrazil\n",
"P H Souto Ribeiro \nDepartamento de Física\nUniversidade Federal de Santa Catarina\nFlorianópolisSCBrazil\n"
] | [
"Departamento de Física\nUniversidade Federal de Santa Catarina\nFlorianópolisSCBrazil",
"Departamento de Física\nUniversidade Federal de Santa Catarina\nFlorianópolisSCBrazil",
"Departamento de Física\nUniversidade Federal de Santa Catarina\nFlorianópolisSCBrazil",
"Instituto de Física\nUniversidade Federal Fluminense\nNiteróiRJBrazil",
"School of Engineering\nUniversity of Glasgow\nUK",
"School of Physics and Astronomy\nUniversity of Glasgow\nG12 8QQGlasgowUK",
"Departamento de Física\nUniversidade Federal de Santa Catarina\nFlorianópolisSCBrazil",
"Instituto de Física\nUniversidade Federal de Goiás\nGoiâniaGOBrazil",
"Departamento de Física\nUniversidade Federal de Santa Catarina\nFlorianópolisSCBrazil"
] | [] | Non-equilibrium thermodynamics and quantum information theory are interrelated research fields witnessing an increasing theoretical and experimental interest. This is mainly due to the broadness of these theories, which found applications in many different fields of science, ranging from biology to the foundations of physics. Here, by employing the orbital angular momentum of light, we propose a new platform for studying non-equilibrium properties of high dimensional quantum systems. Specifically, we use Laguerre-Gaussian beams to emulate the energy eigenstates of a two-dimension quantum harmonic oscillator having angular momentum. These light beams are subjected to a process realized by a spatial light modulator and the corresponding work distribution is experimentally reconstructed employing a two-point measurement scheme. The Jarzynski fluctuation relation is then verified. We also suggest the realization of Maxwell's demon with this platform. | 10.1088/2399-6528/aab178 | [
"https://arxiv.org/pdf/1705.02990v3.pdf"
] | 3,726,906 | 1705.02990 | 361cf75172932fe8dceb85d1e816546d8d9f6dce |
Experimental study of quantum thermodynamics using optical vortices
R Medeiros De Araújo
Departamento de Física
Universidade Federal de Santa Catarina
FlorianópolisSCBrazil
T Häffner
Departamento de Física
Universidade Federal de Santa Catarina
FlorianópolisSCBrazil
R Bernardi
Departamento de Física
Universidade Federal de Santa Catarina
FlorianópolisSCBrazil
D S Tasca
Instituto de Física
Universidade Federal Fluminense
NiteróiRJBrazil
M P J Lavery
School of Engineering
University of Glasgow
UK
M J Padgett
School of Physics and Astronomy
University of Glasgow
G12 8QQGlasgowUK
A Kanaan
Departamento de Física
Universidade Federal de Santa Catarina
FlorianópolisSCBrazil
L C Céleri
Instituto de Física
Universidade Federal de Goiás
GoiâniaGOBrazil
P H Souto Ribeiro
Departamento de Física
Universidade Federal de Santa Catarina
FlorianópolisSCBrazil
Experimental study of quantum thermodynamics using optical vortices
PACS numbers:
Non-equilibrium thermodynamics and quantum information theory are interrelated research fields witnessing increasing theoretical and experimental interest. This is mainly due to the broadness of these theories, which have found applications in many different fields of science, ranging from biology to the foundations of physics. Here, by employing the orbital angular momentum of light, we propose a new platform for studying non-equilibrium properties of high-dimensional quantum systems. Specifically, we use Laguerre-Gaussian beams to emulate the energy eigenstates of a two-dimensional quantum harmonic oscillator having angular momentum. These light beams are subjected to a process realized by a spatial light modulator, and the corresponding work distribution is experimentally reconstructed employing a two-point measurement scheme. The Jarzynski fluctuation relation is then verified. We also suggest the realization of Maxwell's demon with this platform.
I. INTRODUCTION
The orbital angular momentum (OAM) of light is a property of the topology of the optical modes, and it is characterized by discrete numbers associated with the amount of orbital angular momentum per photon in the mode [1]. The natural family of optical modes with orbital angular momentum is the Laguerre-Gaussian (LG) modes, a set of solutions of the paraxial wave equation [2] that are described by their radial number p and azimuthal number ℓ. The study and application of these modes are relatively recent and have increased considerably in the last two decades [1,3,4].
Single photons populating modes with OAM are physical realizations of high-dimensional quantum states [5][6][7][8][9][10][11], leading to the possibility of encoding more than one bit of information per photon. Such photonic qudits can be explored in order to improve quantum communication schemes and quantum information processing [12][13][14][15][16][17][18][19]. Moreover, the transverse amplitude profiles of LG light modes are formally identical to the energy eigenstates of the two-dimensional quantum harmonic oscillator. Therefore, they stand as a platform for the emulation of these quantum systems in a variety of interesting problems. In the present work, we employ these light modes to experimentally study some thermodynamical aspects of a high-dimensional quantum system. A similar approach has been successfully used to study the quantum limits of a chaotic harmonic oscillator [20]. Non-equilibrium thermodynamics is fundamentally concerned with the characterization of the response of a system under external perturbations. The theory of the linear-response regime was developed in Refs. [21][22][23], based on earlier works such as Refs. [24][25][26]. The information about the complete nonlinear response is contained in the so-called fluctuation theorems, which have been proved for classical [27][28][29] and for quantum systems [30,31].
Fluctuation relations can be understood as a quantification of the probability of observing a violation of the second law of thermodynamics for small systems (when fluctuations come into play) and short time-scales. Considering the new trend in miniaturization, such fluctuations and time-scales are becoming more important for the development of new technological devices [32]. Therefore, the theoretical and experimental study of quantum fluctuation relations are of primary interest, both for fundamental issues and for understanding the limitations of implementing quantum information processing and communication devices.
The quantum versions of the classical fluctuation theorems are possible only due to the two-point measurement approach for defining work. Work performed on (or by) the system is defined as the difference between two energy measurements, one before and one after the considered process takes place. To be specific, let us consider an externally driven system S, whose time-dependent Hamiltonian is denoted by H_S(t), initially in the thermal state ρ_β, with β = 1/k_B T, where T is the temperature of the system and k_B is the Boltzmann constant. The scenario considered here can be divided into three steps: i) projective measurement in the eigenbasis of the initial Hamiltonian H_S(0); ii) unitary (driven) evolution for a time interval τ; iii) projective measurement in the eigenbasis of the final Hamiltonian H_S(τ). Defining the two-point measurement variable W_{mn} = ε_m − ε_n, where ε_m and ε_n are the eigenvalues of H_S(τ) and H_S(0), respectively, it is not difficult to show that this stochastic variable must obey the general fluctuation relation known as the Jarzynski equality [34,35]
⟨e^{−βW}⟩ ≡ ∫ dW P(W) e^{−βW} = e^{−β∆F},    (1)
where P(W) is the probability density distribution associated with the random variable W and ∆F = F_τ − F_0 is the variation of the free energy over the time interval τ during which the system is subjected to the process. It has been shown that Eq. (1) is also valid for unital processes [33], i.e., quantum maps that preserve the identity. Note that the final state is not necessarily a thermal state, since it is generated by a projective measurement followed by an evolution. However, what appears on the right-hand side of Eq. (1) is the equilibrium quantity F_τ, which concerns the state the system ends up in if it is allowed to thermalize with a reservoir at the same temperature as the initial one. By defining the entropy production as σ = β(W − ∆F) we can rewrite the fluctuation relation as ⟨e^{−σ}⟩ = 1.
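To make the two-point measurement statistics concrete, the following minimal Python sketch (an illustration with arbitrarily chosen toy eigenvalues and driving, not part of the original experiment) builds P(W) for a driven two-level system and checks that ⟨e^{−βW}⟩ = e^{−β∆F} to numerical precision.

import numpy as np

beta = 1.0
eps0 = np.array([0.0, 1.0])   # eigenvalues of H_S(0)   (toy values)
eps1 = np.array([0.3, 1.7])   # eigenvalues of H_S(tau) (toy values)
theta = 0.4                   # arbitrary driving angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # unitary evolution

p0 = np.exp(-beta * eps0); Z0 = p0.sum(); p0 /= Z0   # thermal occupations of H_S(0)
Z1 = np.exp(-beta * eps1).sum()
dF = -np.log(Z1 / Z0) / beta                         # free-energy difference

# Two-point measurement: P(W = eps1[m] - eps0[n]) = p0[n] * |<m|U|n>|^2
avg = 0.0
for n in range(2):
    for m in range(2):
        W = eps1[m] - eps0[n]
        avg += p0[n] * abs(U[m, n])**2 * np.exp(-beta * W)

print(avg, np.exp(-beta * dF))   # the two numbers agree (Jarzynski equality)

The check works for any unitary U because the columns of U have unit norm, which is precisely the unitality condition mentioned above.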
The experimental investigation of such relations is recent, especially in quantum systems. Regarding classical systems, we can mention the experiments reported in Refs. [36][37][38][39][40][41]. In the quantum regime, experiments tend to get trickier or more complicated due to the difficulty in performing energy projective measurements on arbitrary systems. Only recently, based on an alternative scheme that avoids such measurements [42,43], an experimental reconstruction of the work distribution associated with a process performed on a spin-1/2 system was reported [44]. Considering the projective measurements, the only experiment to date, as far as we know, was reported in Ref. [45], where the authors employed trapped ions in order to investigate the work statistics associated with a harmonic oscillator. Here we contribute to this line by employing the projective measurement scheme to reconstruct the probability distribution associated with a process performed on the two-dimensional harmonic oscillator with angular momentum using an optical setup, thus providing a new experimental platform for the investigation of thermodynamic processes in the quantum regime.
II. SIMULATING A QUANTUM SYSTEM WITH CLASSICAL LIGHT
It is possible to simulate a class of quantum systems using classical light and the analogy between the paraxial wave equation and the two-dimensional Schrödinger equation. This analogy has been explored experimentally to investigate the quantum limit of a chaotic harmonic oscillator [20] and to propose a study, similar to the one done here, in which the characteristic function of the work distribution could be measured [46]. OAM optical modes emulate the energy eigenstates of the harmonic oscillator in the sense that the transverse distribution of the electric field of LG beams has the same form as the energy eigenfunctions of the 2-D quantum harmonic oscillator. Moreover, under appropriate conditions, the propagation of the light beams is equivalent to the Hamiltonian evolution of the harmonic oscillator [20,46,47].
This type of simulation accounts for all oscillatory aspects of quantum systems, such as state superposition, coherence and decoherence. The intrinsic quantum properties of light itself do not come into play in this scenario, since we are exclusively interested in light's modal structure, rather than in its photonic content, which is usually explored by using detectors like avalanche photodiodes (for single photons) or low-reverse-bias photodiodes (for the continuous variables regime).
In the scheme we present here, we use the OAM modes to represent the wave functions of the two-dimensional quantum harmonic oscillator, for which the Hamiltonian and angular momentum operators H and L_z form a complete set of commuting observables. They are written in terms of the number operators for right (N_r) and left (N_l) circular quanta as
H = (N_r + N_l + 1) ℏω,    (2)
L_z = (N_r − N_l) ℏ,    (3)
and their eigenvalues are
Energy: ε_{ℓp} = (|ℓ| + 2p + 1) ℏω,    (4)
Angular momentum: λ_ℓ = ℓℏ,    (5)
Figure 1: Experimental setup (left): SLM1 generates an input OAM mode, which undergoes a process implemented by SLM2. Its output is analysed by a mode sorter. General idea (right): the mode sorter sorts the OAM components along the x-axis of the CCD camera. The image is then integrated along the y-axis.
where ℓ and p are the azimuthal and radial quantum numbers, respectively, which are analogous to the azimuthal and radial indices used to identify the elements of the LG basis of modes. Note that, for the subset of states having the quantum number p = 0, i.e., whenever either N_r or N_l has eigenvalue zero, the state energy
ε_ℓ = (|ℓ| + 1) ℏω    (6)
depends only on the azimuthal quantum number ℓ. Thus, in this case, projections in the OAM basis are equivalent to projections in the energy eigenbasis.
If we restrict ourselves to processes acting on the harmonic oscillator that only change ℓ and we project the system's final state onto an eigenstate of the angular momentum, the work done in one experiment run depends solely on the change in |ℓ|. Indeed, when the system goes from an initial ℓ to a final ℓ′, the work done is W_{ℓℓ′} = (|ℓ′| − |ℓ|) ℏω. The work probability distribution is then
P(W) = Σ_{ℓ,ℓ′} p_{ℓℓ′} δ(W − W_{ℓℓ′}),    (7)
where p_{ℓℓ′} = p_ℓ p_{ℓ′|ℓ} is the probability of observing the transition ℓ → ℓ′, with p_ℓ being the probability of having ℓ at the input and p_{ℓ′|ℓ} the probability of observing ℓ′ at the output given that the input is ℓ.
In the context of the Jarzynski equality, p_ℓ is found in the expression of the initial thermal state ρ_β = e^{−βH}/Z, where Z is the partition function. This state may be explicitly written as:
ρ_β = Σ_{ℓ=−∞}^{+∞} p_ℓ |ℓ⟩⟨ℓ|,  with  p_ℓ = e^{−βε_ℓ}/Z  and  Z = [e^{βℏω} tanh(βℏω/2)]^{−1}.    (8)
Note that, since the states |ℓ⟩ and |−ℓ⟩ have the same energy, their probabilities are the same: p_ℓ = p_{−ℓ}. In fact, every energy level has degeneracy 2, except for the ground state |ℓ = 0⟩.
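As a numerical illustration of Eq. (8) (a sketch only; the normalization is taken over the truncated set −7 ≤ ℓ ≤ 7 used later in the experiment, and βℏω is set to the value obtained from the fit reported in Sec. III), the thermal weights p_ℓ can be generated as follows.

import numpy as np

beta_hw = 0.67                      # beta * hbar * omega (value from the fit in Sec. III)
ells = np.arange(-7, 8)             # truncated OAM space, -7 <= ell <= 7
energies = np.abs(ells) + 1.0       # epsilon_ell in units of hbar*omega (p = 0 states)
weights = np.exp(-beta_hw * energies)
p_ell = weights / weights.sum()     # normalized Boltzmann weights p_ell

# Degeneracy check: p_ell equals p_{-ell}; the ground state ell = 0 is unique.
assert np.allclose(p_ell, p_ell[::-1])
print(dict(zip(ells.tolist(), np.round(p_ell, 4))))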
III. EXPERIMENTAL SETUP
The sketch of the experimental setup is shown in Fig. 1. The light from a He-Ne laser is sent through a beam expander consisting of two lenses in a confocal arrangement, with focal lengths f_1 = 50 mm and f_2 = 300 mm, resulting in an expansion factor of 6. The expanded beam is sent to the first Spatial Light Modulator (SLM1), where an OAM mode is prepared with the usual approach with a forked hologram [1]. SLM1 generates modes with OAM ℓℏ per photon and they are sent to a second spatial light modulator, SLM2, where another phase mask realizes some operation on them depending on the protocol. The resulting light beam is sent to a device called a mode sorter [48], which literally sorts the different OAM components of the beam along the horizontal axis of a CCD camera. The measurement scheme is calibrated by sending OAM modes one by one and measuring the intensity distribution at the output of the mode sorter with no phase modulation on SLM2 (flat phase mask). Typical calibration curves are shown in gray in Fig. 2, where each curve represents the intensity distribution at the output for a given value of ℓ, from −15 to +15 in this case. Step i of the two-point measurement protocol consists of preparing the thermal state described by Eq. (8) and performing a projective measurement in the initial Hamiltonian eigenbasis or, equivalently, in the OAM basis.
The preparation of the thermal state is made by sending a Gaussian mode with ℓ = 0 to the spatial light modulator SLM1, which applies masks that generate OAM states with ℓ ranging over −7 ≤ ℓ ≤ 7. Each mask is turned on for 3 s, according to a random choice of ℓ with weight p_ℓ. The resulting light beam is sent to SLM2, which acts just as a mirror in this case, and then to the mode sorter, which analyses the OAM components. The light intensity at the output of the mode sorter is measured with a CCD camera, and the images are analyzed as explained in detail in Appendix A. A typical result is shown in Fig. 3, where the final distribution is obtained from 300 runs of the experiment. The distribution obtained for the absolute value of OAM is normalized and fitted to the function
p(|ℓ|) = N e^{−βε_{|ℓ|}} = N e^{−β(|ℓ|+1)ℏω},    (9)
which represents the Boltzmann distribution, leaving N (the normalization factor) and βℏω as free parameters. For the results shown in Fig. 3, we obtained an excellent agreement with the fitted function, with βℏω = 0.67 ± 0.01. The parameter βℏω = ℏω/k_B T can be interpreted as the ratio between the ground state energy and the typical scale of thermal energy at temperature T. So, for a given system, the greater βℏω, the lower the temperature.
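The fit of the |ℓ| histogram to Eq. (9) can be reproduced with a few lines of SciPy (a sketch: the synthetic array below stands in for the measured histogram, which is not reproduced here, so the fit simply recovers the value used to generate it).

import numpy as np
from scipy.optimize import curve_fit

abs_ell = np.arange(0, 8)                    # |ell| bins, 0..7
beta_true = 0.67                             # value used to generate the stand-in data
p_meas = np.exp(-beta_true * (abs_ell + 1.0))
p_meas = p_meas / p_meas.sum()               # stand-in for the measured, normalized histogram

def boltzmann(abs_l, N, beta_hw):
    # Eq. (9): p(|l|) = N * exp(-beta*hbar*omega*(|l| + 1))
    return N * np.exp(-beta_hw * (abs_l + 1.0))

(N_fit, beta_hw_fit), _ = curve_fit(boltzmann, abs_ell, p_meas, p0=[1.0, 0.5])
print("fitted beta*hbar*omega =", beta_hw_fit)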
We show how to prepare a thermal state and perform the projective measurements in the energy eigenbasis in order to illustrate this procedure. However, in the second step of the protocol, we prepare each energy eigenstate separately and submit it to the process in order to obtain the transition probabilities. This is strictly equivalent to using the states resulting from the first measurement and means a considerable simplification of the setup.
Step ii consists of sending input modes prepared with SLM1 and having OAM ranging from ℓ = −7 to ℓ = +7 to SLM2, where a phase mask is applied, realizing the process whose work distribution will be measured. This mask couples the input mode to other OAM modes, thus inducing OAM transitions. The energy spectrum of the system is discrete and infinite. However, the thermal weight of states corresponding to higher energies can be made negligible by choosing sufficiently low temperatures, so that we can truncate the initial distribution of states, as we did.
Using the mode sorter, we are able to measure the final distribution of OAM modes and their corresponding weights. This device implements a projective measurement in the orbital angular momentum basis, which in our case is equivalent to a measurement in the energy eigenbasis under the assumption that the radial number p = 0. Observing the mode sorter output, we can compute the transition probabilities and, consequently, reconstruct the work distribution. Nonetheless, as the calibration figure shows, there is a considerable overlap between adjacent curves (adjacent orbital angular momenta). This appreciably reduces the resolution of the OAM sorter. Newer generations of mode sorters, as well as other strategies [49], minimise this technical inconvenience.
In the present proof-of-principle experiment, we overcome this issue using a process in SLM2 which generates superpositions of OAM modes that can be easily resolved by the mode sorter. Specifically, the process in our experiment implements the linear operation (L_{+5} + L_{−5})/√2, where we define
L_{±5} |ℓ⟩ = |ℓ ± 5⟩.    (10)
In this way, the overlap between the two components at the output becomes negligible. Typical measurement results are shown in Figs. 2(a) and 2(b). For each measured output, we performed a linear least squares regression in order to obtain the values of the orbital angular momenta and their respective weights (see Appendix A for details). The normalised set of all OAM weights for all outputs is exhibited in the matrix of Fig. 4(a). This is a density plot where the indices of the input (output) modes are the labels of the vertical (horizontal) axis. In other words, these are the transition probabilities for a typical run of the experiment. Fig. 4(b) shows the matrix for the ideal process.
IV. RESULTS AND DISCUSSION
In our experiment, we measured the conditional transition probabilities p_{ℓ′|ℓ} as shown in Fig. 4. As explained earlier, we truncated the OAM space and limited the input modes to |ℓ| ≤ 7. That is to say, we operate in a regime of low temperatures where the Boltzmann weights for |ℓ| > 7 can be neglected, i.e., βℏω ≳ 1, i.e., ℏω ≳ k_B T. Fig. 5 shows a plot of the quantity ⟨e^{−βW}⟩ as a function of βℏω. The inset displays the probability distribution of work computed from the measurement results for βℏω = 2. The probabilities on the vertical axis are obtained by summing up all values of p_{ℓℓ′} (given in Eq. 7) for which the corresponding transition results in a given value of work W. Regarding the Jarzynski equality, Eq. (1), the considered process gives ∆F = 0 as it does not change the Hamiltonian of the system. In other words, transitions are induced, but the energy levels do not change. Therefore, Eq. (1) becomes simply ⟨e^{−βW}⟩ = 1.
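The reduction from a transition matrix p_{ℓ′|ℓ} to the work distribution of Eq. (7) and to ⟨e^{−βW}⟩ amounts to the bookkeeping below (a sketch that uses the ideal process of Eq. (10) in place of the measured matrix; work is kept in units of ℏω).

import numpy as np
from collections import defaultdict

beta_hw = 2.0
ells_in = np.arange(-7, 8)

# Thermal input weights p_ell on the truncated space, as in Eq. (8).
p_in = np.exp(-beta_hw * (np.abs(ells_in) + 1.0))
p_in /= p_in.sum()

# Ideal process (L_{+5} + L_{-5})/sqrt(2): ell -> ell+5 or ell-5 with probability 1/2.
P_W = defaultdict(float)
for ell, p_l in zip(ells_in, p_in):
    for ell_out in (ell + 5, ell - 5):
        W = abs(ell_out) - abs(ell)            # work in units of hbar*omega
        P_W[W] += 0.5 * p_l

jarzynski = sum(p * np.exp(-beta_hw * W) for W, p in P_W.items())
print("<exp(-beta W)> =", jarzynski)           # close to 1, up to truncation effects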
The curve named theory represents ⟨e^{−βW}⟩ computed for an ideal (but truncated) process. The corresponding ideal transition probabilities are illustrated in Fig. 4(b). The curve named experiment represents the same quantity computed from the measured transition probabilities shown in Fig. 4(a). Within the range βℏω ≲ 1 (gray area), we obtain ⟨e^{−βW}⟩ < 1 even for the theory curve, due to the truncation of the input states to |ℓ| ≤ 7. However, for larger values of βℏω, we see that the theory curve is essentially constant and equal to 1, while the experiment curve is always below unity. We interpret this result as a consequence of entropy increase due to natural technical limitations present in a real-world measurement (which include classical fluctuations coming from laser pointing instability, mechanical vibrations of the setup and camera thermal noise) and experimental imperfections (such as misalignment, limited pixel resolution on the SLM and on the camera, and limited optical resolution of the mode sorter). The uncertainty band shown in Fig. 5 accounts only for the fluctuations detected upon several subsequent identical measurements (see Appendix B for details). For instance, for βℏω = 2, we have found ⟨e^{−βW}⟩ = 0.910 ± 0.046 (95%-confidence interval), clearly different from 1. We attribute the remaining difference to the experimental imperfections listed above, which are not captured by our error estimation procedure but are captured by Jarzynski's fluctuation relation.
V. MAXWELL'S DEMON
While Eq. (1) is valid for any unital process [33], in a general context including measurements and feedback, i.e., when Maxwell's demon comes into play, a new equality holds [50]:
⟨e^{−σ−I}⟩ = 1,    (11)
where I is the information acquired by the demon due to the measurement process. This implies a modification of the second law of thermodynamics as ⟨σ⟩ ≥ −⟨I⟩, highlighting the demon's main feature, which is the use of information to reduce entropy production. We propose an experimental scheme for realizing Maxwell's demon using OAM modes. Fig. 7 shows the sketch of the suggested scheme. A laser beam is sent to spatial light modulator SLM1, which produces a thermal state of OAM modes as discussed in Sec. III. The light beam prepared in the thermal state is then sent through mode sorter MS1. The modes with positive angular momentum will be deflected to the right, while those with negative angular momentum will drift to the left. These two groups of beams are separated and directed to mode sorters MS2 and MS3 working in reverse [51,52], converting them back to OAM modes. With this approach, it is possible to separate OAM modes according to the sign of ℓ. SLM2 is used to apply the operation L_{+5} to the modes with negative ℓ and SLM3 is used to apply L_{−5} to the modes with positive ℓ. They are finally sent to mode sorters MS4 and MS5 that perform the final measurement of the OAM using CCD cameras CCD1 and CCD2. The measurement and feedback, in this case, increase the probability of lowering the absolute value of the orbital angular momentum (|ℓ|) of the system, thus extracting work from the initial thermal state. As an example, let us start with ℓ = +3: the operation (L_{+5} + L_{−5})/√2, and subsequent measurement, results in either ℓ = −2 or ℓ = +8 with equal probabilities, leading to an average work of 2 ℏω. Now, if Maxwell's demon takes action, ℓ goes invariably from +3 to −2 and W = −ℏω < 0, which means that work is extracted from the system. In Fig. 7, He-Ne is a Helium-Neon laser, SLM is a spatial light modulator, MS is a mode sorter, and CCD is a charge-coupled device (camera); the arrows near the mode sorters indicate in which sense they are being used.
As the measurement of the OAM sign provides Maxwell's demon with one bit of information, I = ln 2, and Eq. (11) leads to ⟨e^{−σ}⟩ = 2 for all βℏω, which is Jarzynski's fluctuation relation with the demon's action for the ideal case. This scenario has been computed and plotted in Fig. 6 (curve named theory). Note that the values of ⟨e^{−σ}⟩ below 2 at high temperatures (shaded gray area on the left) are simply due to the truncation of the OAM space dimension.
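The demon's effect on the fluctuation relation can be checked with the same bookkeeping as before (a sketch of the ideal feedback; since ∆F = 0 here, σ = βW, and the ℓ = 0 component is arbitrarily routed to the L_{−5} branch).

import numpy as np

beta_hw = 2.0
ells = np.arange(-7, 8)
p_in = np.exp(-beta_hw * (np.abs(ells) + 1.0)); p_in /= p_in.sum()

avg_no_demon, avg_demon = 0.0, 0.0
for ell, p_l in zip(ells, p_in):
    # Without feedback: (L_{+5} + L_{-5})/sqrt(2), both branches with probability 1/2.
    for shift in (+5, -5):
        W = abs(ell + shift) - abs(ell)        # work in units of hbar*omega
        avg_no_demon += 0.5 * p_l * np.exp(-beta_hw * W)
    # With feedback: L_{+5} on negative ell, L_{-5} on non-negative ell (deterministic).
    shift = +5 if ell < 0 else -5
    W = abs(ell + shift) - abs(ell)
    avg_demon += p_l * np.exp(-beta_hw * W)

print(avg_no_demon, avg_demon)   # roughly 1 and 2 at this temperature (truncation aside)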
We would also like to have some insight into the effect of noise in the Maxwell's demon scenario. In order to do that, we have added the same noise that appears in Fig. 4 to the transition probabilities of the Maxwell's demon scheme. The result is the curve named noise in Fig. 6, which carries an uncertainty band calculated in the same way as for the experimental curve in Fig. 5. Notice that the random noise decreases the value of ⟨e^{−σ}⟩, meaning that the decrease in entropy production caused by Maxwell's demon would be affected by experimental noise.
The inset shows the probability distributions for the possible values of work in the case of βℏω = 2. Even though the value of ⟨e^{−σ}⟩ changes dramatically from 1 to 2 when the demon takes action, the work distribution and the average work do not change much: ⟨W⟩ = 5.0 ℏω without Maxwell's demon and ⟨W⟩ = 4.8 ℏω with it. This is due to the fact that, in this system, for low enough temperatures such as βℏω = 2, there is a large contribution from the state component with ℓ = 0 in the input thermal state. For an input with ℓ = 0, both L_{+5} and L_{−5} contribute positive work, which dominates the work distribution.
VI. CONCLUSION
In conclusion, we have experimentally investigated the quantum version of thermodynamic work and Jarzynski's fluctuation relation using the orbital angular momentum of light, a discrete degree of freedom with infinite dimension usually employed in the single-photon regime to realize a qudit, with applications in quantum communication and quantum information processing. Here, by exploring the analogy between the paraxial wave equation and the Schrödinger equation, we have used the OAM of light to simulate the eigenstates of the two-dimensional quantum harmonic oscillator and their evolution through a given process. We measured the work distribution associated with this process and verified Jarzynski's fluctuation relation experimentally. We have also proposed a scheme for implementing a Maxwell's demon measurement and feedback action. These results illustrate the usefulness of Laguerre-Gaussian beams as a practical platform to investigate aspects of the growing field of quantum thermodynamics in high-dimensional Hilbert spaces. Given the versatility of this platform, one can consider using it, for example, in the study of the role of multipartite entanglement in thermodynamic processes, as well as the role of the environment, i.e., non-unitary and non-unital processes.
VII. ACKNOWLEDGMENTS
We acknowledge financial support from the Brazilian funding agencies CNPq, CAPES, and the National Institute for Quantum Information (INCT-IQ). M.P.J.L. acknowledges support from EPSRC awards EP/N032853/1. M.J.P. acknowledges support from EPSRC QuantIC (EP/M01326X/1) and ERC TWISTS (Grant No. 192382).
VIII. APPENDIX
A. Data fitting
When a Laguerre-Gaussian mode passes through a mode sorter, its intensity profile (initially presented in a donut-like shape) becomes an elongated spot along the vertical direction on the camera. The integration of such an image along the vertical axis gives a curve (among the 31 calibration curves in Figs. 2(a) and 2(b)) showing the marginal distribution of intensity along the horizontal direction of the sorted LG mode. The right side of Fig. 1 outlines this idea.
The horizontal size of an image on the camera (∼ 80 pixels) sets the extension of the horizontal axis in Fig. 2, which shows collections of lists of 80 elements. Let us call the lists corresponding to the LG inputs x_j = {x_{j,k}}_{0≤k<80} and the lists at the output y_i = {y_{i,k}}_k, where −7 ≤ i ≤ 7 and −15 ≤ j ≤ 15 may be associated with ℓ and ℓ′, respectively. We model the process implemented by SLM2 as a linear operation represented by a 15 × 31 matrix A acting on the set of possible input modes X = {x_j}_j and leading to the set of its outcomes Y = {y_i}_i, i.e.
Y = AX.    (12)
In order to find the matrix that best fits our experimental data, we apply a linear least squares approach, numerically solving the minimization problem
min_A ‖Y − AX‖²    (13)
with the additional constraints that all elements of matrix A must be non-negative and that the sum of the elements in each row must equal 1. The result is a matrix similar to the ones shown in Fig. 4. The non-negativity is equivalent to the assumption that the overall phases of each OAM component of the output are the same or, at least, that the OAM components are far enough from each other so that they do not interfere and their relative phases do not play a significant role. The sum over each row equaling 1 stands for the unitarity of the process (no optical loss), which can be assumed without loss of generality.
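One simple way to solve the constrained problem of Eqs. (12)-(13) row by row is sketched below (an illustration only: X and Y stand for the stacked calibration and output curves, which are not reproduced here, and the solver choice is not necessarily the one used in the original analysis).

import numpy as np
from scipy.optimize import minimize

def fit_row(y_row, X):
    """Least-squares fit of one row of A: minimize ||y_row - a @ X||^2
    subject to a >= 0 and sum(a) = 1."""
    n = X.shape[0]                      # number of calibration curves (31)
    a0 = np.full(n, 1.0 / n)            # uniform starting point
    res = minimize(
        lambda a: np.sum((y_row - a @ X) ** 2),
        a0,
        method="SLSQP",
        bounds=[(0.0, 1.0)] * n,
        constraints=[{"type": "eq", "fun": lambda a: a.sum() - 1.0}],
    )
    return res.x

# X: 31 x 80 calibration curves, Y: 15 x 80 measured output curves (not included here).
# A = np.vstack([fit_row(y, X) for y in Y]) would then reproduce the 15 x 31 matrix A.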
B. Measurement uncertainty
A measurement consists of taking a picture, integrating it, and obtaining its marginal intensity distribution y_i = {y_{i,k}}_k. The goal here is to assess the uncertainty on each measured y_{i,k} and, from this, to calculate the uncertainty:
• on each element of the matrix of conditional transition probabilities of Fig. 4(a) and
• on the mean value ⟨e^{−βW}⟩ as a function of βℏω (uncertainty band in Fig. 5).
Let us call σ_{i,k} the uncertainty on y_{i,k}. In order to assess these σ's, we performed a series of ten identical measurements on the same transformed mode over a time window of a few minutes and obtained a set of intensity distributions fluctuating, for each k, around a mean value µ_k with a standard deviation σ_k. These standard deviations turn out to be dependent on µ_k and on k itself, but mostly on µ_k. We noticed, for instance, that the relative standard deviation (σ/µ) is always smaller than 10%, for any k. These measurements were used to model the typical error associated with a y_i measurement. This procedure led to a set of numbers σ_{i,k} used as input for our model, in which we assume each measured y_i is a realization of a random variable following the multivariate normal distribution with estimated mean values y_{i,k} and standard deviations σ_{i,k}. This procedure allows us to simulate sets of measurements, realizing Monte Carlo experiments.
Ten different experimental matrices Y were randomly generated in this manner, from each of which we numerically solved the minimization problem above to find a different probability matrix A. We could see from the set of matrices A that the relative uncertainty on each matrix element was never bigger than 2%.
Similarly (and finally), we performed 1000 Monte Carlo experiments in order to estimate the uncertainty on ⟨e^{−βW}⟩ for each βℏω ranging from 0.05 to 5. We observed that the random variable ⟨e^{−βW}⟩ nearly follows a normal distribution for all values of βℏω. For instance, for βℏω = 2, we have found ⟨e^{−βW}⟩ = 0.910 with a standard deviation σ = 0.022. From the 1.96σ rule, we established our 95%-confidence interval for ⟨e^{−2W/ℏω}⟩ to be 0.910 ± 0.046. By doing the same for all values of βℏω, we were able to plot the uncertainty band shown in Fig. 5.
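The Monte Carlo error propagation can be sketched as follows (a sketch only: the `mu` and `sigma` arrays are placeholders for the estimated per-pixel means and standard deviations, and the full fitting pipeline is reduced to a stub function).

import numpy as np

rng = np.random.default_rng(0)

# Placeholders for the per-pixel means and standard deviations estimated from repeated frames.
mu = np.zeros((15, 80))
sigma = np.full_like(mu, 0.05)

def analysis(y_set):
    # Stub for the actual pipeline: fit A from y_set, build P(W), return <exp(-beta W)>.
    return 1.0 + y_set.mean()

estimates = [analysis(rng.normal(mu, sigma)) for _ in range(1000)]
mean, std = np.mean(estimates), np.std(estimates)
print(f"{mean:.3f} +/- {1.96 * std:.3f}  (95% confidence, 1.96-sigma rule)")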
Figure 2: Intensity distributions at the output of the mode sorter. In gray, the calibration curves obtained by sending OAM modes ranging from ℓ = −15 to ℓ = +15, with no process applied (flat phase mask on SLM2). Colored curves: (a) the process (L_{+5} + L_{−5})/√2 (defined in Eq. 10) is applied by SLM2 onto ℓ = −7, splitting the input mode into two modes with ℓ = −12 and ℓ = −2; (b) input at ℓ = 3 split up into ℓ = −2 and ℓ = 8 by the same process. Each curve is obtained by integrating the output intensity profile over the vertical direction of the camera.
Figure 3: Normalized intensity distribution as a function of |ℓ|; βℏω = 0.67 ± 0.01.
Figure 4: Conditional transition probabilities p_{ℓ′|ℓ}. (a) Input-output matrix obtained from the experimental results for the process (L_{+5} + L_{−5})/√2 applied to input modes −7 ≤ ℓ ≤ 7. (b) Theoretical prediction for the same process.
Figure 5: (a) Fluctuation relation. Plot of ⟨e^{−βW}⟩ for the process (L_{+5} + L_{−5})/√2. The shaded gray area indicates the region where the effect of truncation of the input states is non-negligible. Curves labelled theory and experiment are obtained using an ideal process and the experimental results, respectively. The filled region between the dashed lines (experimental curves) represents the assessed measurement uncertainty, within a 95% confidence level (see Appendix B for details). (b) Work distribution. Experimentally reconstructed probability distribution for each possible value of work with βℏω = 2 (point indicated by the arrow in subfigure (a)).
Figure 6: (a) Fluctuation relation. Plot of ⟨e^{−βW}⟩ for the Maxwell's demon scheme, with L_{+5} applied to negative OAM modes and L_{−5} applied to positive OAM modes. The shaded gray area indicates the region where the effect of truncation of the input states is non-negligible. Curves labelled theory and noise are obtained using an ideal process and the results from a simulated noisy experiment, respectively. (b) Work distribution. Reconstructed probability distribution for each possible value of work with βℏω = 2 (point indicated by the arrow in subfigure (a)).
Figure 7: Sketch of the scheme.
Light with a twist in its tail. M Padgett, L Allen, Contemporary Physics. 41275M. Padgett and L. Allen. Light with a twist in its tail. Contemporary Physics 41, 275 (2000).
B E A Saleh, M C Teich, Fundamentals of Photonics. John Wiley and SonsB. E. A. Saleh and M. C. Teich, Fundamentals of Photonics (John Wiley and Sons, 1991).
Twisted photons. G Molina-Terriza, J P Torres, L Torner, Nature Physics. 3305G. Molina-Terriza, J. P. Torres and L. Torner. Twisted photons. Nature Physics 3, 305 (2007).
The angular momentum of light. L Allen, M J Padgett, M Babiker, Progess in Optics. 39291L. Allen, M. J. Padgett and M. Babiker. The angular momentum of light. Progess in Optics. 39, 291 (1999).
Experimental two-photon, three-dimensional entanglement for quantum communication. A Vaziri, G Weihs, A Zeilinger, Phys. Rev. Lett. 89240401A. Vaziri, G. Weihs and A. Zeilinger. Experimental two-photon, three-dimensional entanglement for quantum communica- tion. Phys. Rev. Lett., 89, 240401 (2002).
Tomography of the quantum state of photons entangled in high dimensions. M Agnew, J Leach, M Mclaren, F S Roux, R W Boyd, Phys. Rev. A. 8462101M. Agnew, J. Leach, M. McLaren, F. S. Roux and R. W. Boyd. Tomography of the quantum state of photons entangled in high dimensions. Phys. Rev. A, 84, 062101 (2011).
Experimental high-dimensional two-photon entanglement and violations of generalized Bell inequalities. A C Dada, J Leach, G S Buller, M J Padgett, E Andersson, Nature Physics. 7677A. C. Dada, J. Leach, G. S. Buller, M. J. Padgett and E. Andersson. Experimental high-dimensional two-photon entanglement and violations of generalized Bell inequalities. Nature Physics, 7, 677 (2011).
. V D Salakhutdinov, E R Eliel, W Löffler, Phys. Rev. Lett. 108173604V. D. Salakhutdinov, E. R. Eliel, and W. Löffler. Phys. Rev. Lett. 108, 173604 (2012)
Characterization of high-dimensional entangled systems via mutually unbiased measurements. D Giovannini, J Romero, J Leach, A Dudley, A Forbes, M J Padgett, Phys. Rev. Lett. 110143601D. Giovannini, J. Romero, J. Leach, A. Dudley, A. Forbes and M. J. Padgett. Characterization of high-dimensional entangled systems via mutually unbiased measurements. Phys. Rev. Lett. 110, 143601 (2013).
Generation and confirmation of a (100 × 100)-dimensional entangled quantum system. M Krenn, M Huber, R Fickler, R Lapkiewicz, S Ramelow, A Zeilinger, Proc. Natl. Acad. Sci. USA. Natl. Acad. Sci. USA1116243M. Krenn, M. Huber, R. Fickler, R. Lapkiewicz, S. Ramelow and A. Zeilinger. Generation and confirmation of a (100 × 100)- dimensional entangled quantum system. Proc. Natl. Acad. Sci. USA 111, 6243 (2014).
Twisted light transmission over 143 km. M Krenn, J Handsteiner, M Fink, R Fickler, R Ursin, M Malik, A Zeilinger, Proc. Natl. Acad. Sci. USA. 11313648M. Krenn, J. Handsteiner, M. Fink, R. Fickler, R. Ursin, M. Malik and A. Zeilinger. Twisted light transmission over 143 km. Proc. Natl. Acad. Sci. USA 113, 13648 (2016).
Management of the angular momentum of light: Preparation of photons in multidimensional vector states of angular momentum. G Molina-Terriza, J P Torres, L Torner, Phys. Rev. Lett. 8813601G. Molina-Terriza, J. P. Torres and L. Torner. Management of the angular momentum of light: Preparation of photons in multidimensional vector states of angular momentum. Phys. Rev. Lett. 88, 013601 (2001).
Quantum key distribution using multilevel encoding. M Bourennane, A Karlsson, G Björk, Phys. Rev. A. 6412306M. Bourennane, A. Karlsson and G. Björk. Quantum key distribution using multilevel encoding. Phys. Rev. A 64, 012306 (2001).
Weak randomness in device-independent quantum key distribution and the advantage of using high-dimensional entanglement. M Huber, M Pawłowski, Phys. Rev. A. 8832309M. Huber and M. Pawłowski. Weak randomness in device-independent quantum key distribution and the advantage of using high-dimensional entanglement. Phys. Rev. A 88, 032309 (2013).
Experimental quantum cryptography with qutrits. S Gröblacher, T Jennewein, A Vaziri, G Weihs, A Zeilinger, New J. Phys. 875S. Gröblacher, T. Jennewein, A. Vaziri, G. Weihs and A. Zeilinger. Experimental quantum cryptography with qutrits. New J. Phys. 8, 75 (2006).
Quantum key distribution with higher-order alphabets using spatially encoded qudits. S P Walborn, D S Lemelle, M P Almeida, P S Ribeiro, Phys. Rev. Lett. 9690501S. P. Walborn, D. S. Lemelle, M. P. Almeida and P. S. Ribeiro. Quantum key distribution with higher-order alphabets using spatially encoded qudits. Phys. Rev. Lett. 96, 090501 (2006).
Higher-dimensional orbital-angular-momentum-based quantum key distribution with mutually unbiased bases. M Mafu, A Dudley, S Goyal, D Giovannini, M Mclaren, M J Padgett, T Konrad, F Petruccione, N Lütkenhaus, A Forbes, Phys. Rev. A. 8832305M. Mafu, A. Dudley, S. Goyal, D. Giovannini, M. McLaren, M. J. Padgett, T. Konrad, F. Petruccione, N. Lütkenhaus and A. Forbes. Higher-dimensional orbital-angular-momentum-based quantum key distribution with mutually unbiased bases. Phys. Rev. A 88, 032305 (2013).
High-dimensional quantum cryptography with twisted light. M Mirhosseini, O S Magaña-Loaiza, M N O'sullivan, B Rodenburg, M Malik, M P J Lavery, M J Padgett, D J Gauthier, R W Boyd, New J. Phys. 733033M. Mirhosseini, O. S. Magaña-Loaiza, M. N. O'Sullivan, B. Rodenburg, M. Malik, M. P. J. Lavery, M. J. Padgett, D. J. Gauthier and R. W. Boyd. High-dimensional quantum cryptography with twisted light. New J. Phys. 7, 033033 (2015).
Entanglement of the orbital angular momentum states of photons. A Mair, A Vaziri, G Weihs, A Zeilinger, Nature. 412313A. Mair, A. Vaziri, G. Weihs and A. Zeilinger. Entanglement of the orbital angular momentum states of photons. Nature 412, 313 (2001).
. G B Lemos, R M Gomes, S P Walborn, P H Souto Ribeiro, F Toscano, Nat. Comm. 31211G. B. Lemos, R. M. Gomes, S. P. Walborn, P. H. Souto Ribeiro and F. Toscano. Nat. Comm. 3, 1211 (2012).
Irreversibility and generalized noise. H B Callen, T A Welton, Phys. Rev. 8334H. B. Callen and T. A. Welton. Irreversibility and generalized noise. Phys. Rev. 83, 34 (1951).
Markoff random processes and the statistical mechanics of time-dependent phenomena. M S Green, J. Chem. Phys. 201281M. S. Green. Markoff random processes and the statistical mechanics of time-dependent phenomena. J. Chem. Phys. 20, 1281 (1952).
Statistical mechanical theory of irreversible processes I. R Kubo, J. Phys. Soc. Jpn. 12570R. Kubo. Statistical mechanical theory of irreversible processes I. J. Phys. Soc. Jpn. 12, 570 (1957).
A Einstein, Investigations on the theory of Brownian movement. London, UK; MethuenA. Einstein, Investigations on the theory of Brownian movement (London, UK: Methuen, 1926).
Thermal agitation of electricity in conductors. J B Johnson, Phys. Rev. 3297J. B. Johnson. Thermal agitation of electricity in conductors. Phys. Rev. 32, 97 (1928).
Thermal agitation of electric charge in conductors. H Nyquist, Phys. Rev. 32110H. Nyquist. Thermal agitation of electric charge in conductors. Phys. Rev. 32, 110 (1928).
General theory of thermal fluctuations in nonlinear systems. G N Bochkov, Y E Kuzovlev, Sov. Phys. JETP. 45125G. N. Bochkov and Y. E. Kuzovlev. General theory of thermal fluctuations in nonlinear systems. Sov. Phys. JETP 45, 125 (1977).
Nonlinear fluctuation-dissipation relations and stochastic models in nonequilibrium thermodynamics I. Generalized fluctuation-dissipation theorem. G N Bochkov, Y E Kuzovlev, Physica A. 106443G. N. Bochkov and Y. E. Kuzovlev. Nonlinear fluctuation-dissipation relations and stochastic models in nonequilibrium thermodynamics I. Generalized fluctuation-dissipation theorem. Physica A 106, 443 (1981).
Nonequilibrium equality for free energy differences. C Jarzynski, Phys. Rev. Lett. 782690C. Jarzynski. Nonequilibrium equality for free energy differences. Phys. Rev. Lett. 78, 2690 (1997).
Jarzynski relations for quantum systems and some applications. H Tasaki, H. Tasaki. Jarzynski relations for quantum systems and some applications. http://arxiv.org/abs/cond-mat/0009244 (2000).
Quantum Bochkov-Kuzovlev work fluctuation theorems. M Campisi, P Talkner, P Hänggi, Phil. Trans. R. Soc. A. 369291M. Campisi, P. Talkner and P. Hänggi. Quantum Bochkov-Kuzovlev work fluctuation theorems. Phil. Trans. R. Soc. A 369, 291 (2011).
Artificial Brownian motors: Controlling transport on the nanoscale. P Hänggi, F Marchesoni, Rev. Mod. Phys. 81387P. Hänggi and F. Marchesoni. Artificial Brownian motors: Controlling transport on the nanoscale. Rev. Mod. Phys. 81, 387 (2009).
Jarzynski equality for quantum stochastic maps. Alexey E Rastegin, Karol Zyczkowski, Phys. Rev. E. 8912127Alexey E. Rastegin and Karol Zyczkowski, Jarzynski equality for quantum stochastic maps, Phys. Rev. E 89, 012127 (2014).
Nonequilibrium fluctuations, fluctuation theorems, and counting statistics in quantum systems. M Esposito, U Harbola, S Mukamel, Rev. Mod. Phys. 811665M. Esposito, U. Harbola and S. Mukamel. Nonequilibrium fluctuations, fluctuation theorems, and counting statistics in quantum systems. Rev. Mod. Phys. 81, 1665 (2009).
Colloquium: Quantum fluctuation relations: Foundations and applications. M Campisi, P Hänggi, P Talkner, Rev. Mod. Phys. 83771M. Campisi, P. Hänggi and P. Talkner. Colloquium: Quantum fluctuation relations: Foundations and applications. Rev. Mod. Phys. 83, 771 (2011).
Free energy reconstruction from nonequilibrium single-molecule pulling experiments. G Hummer, A Szabo, Proc. Natl. Acad. Sci. USA. 983658G. Hummer and A. Szabo. Free energy reconstruction from nonequilibrium single-molecule pulling experiments. Proc. Natl. Acad. Sci. USA 98, 3658 (2001).
Equilibrium information from nonequilibrium measurements in an experimental test of the Jarzynski equality. J Liphardt, S Dumont, S B Smith, I J Tinoco, C Bustamante, Science. 2961832J. Liphardt, S. Dumont, S. B. Smith, I. J. Tinoco and C. Bustamante. Equilibrium information from nonequilibrium measure- ments in an experimental test of the Jarzynski equality. Science 296, 1832 (2002).
Verification of the Crooks fluctuation theorem and recovery of RNA folding free energies. D Collin, F Ritort, C Jarzynski, S B Smith, I Tinoco, C Bustamante, Nature. 437231D. Collin, F. Ritort, C. Jarzynski, S. B. Smith, I. Tinoco and C. Bustamante. Verification of the Crooks fluctuation theorem and recovery of RNA folding free energies. Nature 437, 231 (2005).
Thermodynamics of a colloidal particle in a time-dependent nonharmonic potential. V Blickle, T Speck, L Helden, U Seifert, C Bechinger, Phys. Rev. Lett. 9670603V. Blickle, T. Speck, L. Helden, U. Seifert and C. Bechinger. Thermodynamics of a colloidal particle in a time-dependent nonharmonic potential. Phys. Rev. Lett. 96, 070603 (2006).
Experimental free energy surface reconstruction from single-molecule force spectroscopy using Jarzynski's equality. N C Harris, Y Song, C.-H Kiang, Phys. Rev. Lett. 9968101N. C. Harris, Y. Song and C.-H. Kiang. Experimental free energy surface reconstruction from single-molecule force spec- troscopy using Jarzynski's equality. Phys. Rev. Lett. 99, 068101 (2007).
Test of the Jarzynski and Crooks fluctuation relations in an electronic system. O.-P Saira, Y Yoon, T Tanttu, M Möttönen, D V Averin, J P Pekola, Phys. Rev. Lett. 109180601O.-P. Saira, Y. Yoon, T. Tanttu, M. Möttönen, D. V. Averin and J. P. Pekola. Test of the Jarzynski and Crooks fluctuation relations in an electronic system. Phys. Rev. Lett. 109, 180601 (2012).
Extracting quantum work statistics and fluctuation theorems by single qubit interferometry. R Dorner, S R Clark, L Heaney, R Fazio, J Goold, V Vedral, Phys. Rev. Lett. 110230601R. Dorner, S. R. Clark, L. Heaney, R. Fazio, J. Goold and V. Vedral. Extracting quantum work statistics and fluctuation theorems by single qubit interferometry. Phys. Rev. Lett. 110, 230601 (2013).
Measuring the characteristic function of the work distribution. L Mazzola, G De Chiara, M Paternostro, Phys. Rev. Lett. 110230602L. Mazzola, G. De Chiara and M. Paternostro. Measuring the characteristic function of the work distribution. Phys. Rev. Lett. 110, 230602 (2013).
Experimental reconstruction of work distribution and study of fluctuation relations in a closed quantum system. T B Batalhão, A M Souza, L Mazzola, R Auccaise, R S Sarthour, I S Oliveira, J Goold, G De Chiara, M Paternostro, R M Serra, Phys. Rev. Lett. 113140601T. B. Batalhão, A. M. Souza, L. Mazzola, R. Auccaise, R. S. Sarthour, I. S. Oliveira, J. Goold, G. De Chiara, M. Paternostro and R. M. Serra. Experimental reconstruction of work distribution and study of fluctuation relations in a closed quantum system. Phys. Rev. Lett. 113, 140601 (2014).
Experimental test of the quantum Jarzynski equality with a trapped-ion system. S An, J.-N Zhang, M Um, D Lv, Y Lu, J Zhang, Z.-Q Yin, H T Quan, K Kim, Nature Physics. 11193S. An, J.-N. Zhang, M. Um, D. Lv, Y. Lu, J. Zhang, Z.-Q. Yin, H. T. Quan and K. Kim. Experimental test of the quantum Jarzynski equality with a trapped-ion system. Nature Physics 11, 193 (2015).
. M A A Talarico, P B Monteiro, E C Mattei, E I Duzzioni, P H Souto Ribeiro, L C Céleri, Phys. Rev. A. 9442305M.A.A. Talarico, P.B. Monteiro, E.C. Mattei, E.I. Duzzioni, P.H. Souto Ribeiro, and L. C. Céleri. Phys. Rev. A 94, 042305 (2016).
Light Transmission Optics. D Marcuse, Van Nostrand Reinhold CompanyNew YorkD. Marcuse. Light Transmission Optics . (Van Nostrand Reinhold Company, New York, 1982).
. G C G Berkhout, M P J Lavery, J Courtial, M W Beijersbergen, M J Padgett, Phys. Rev. Lett. 105153601G. C. G. Berkhout, M. P. J. Lavery, J. Courtial, M. W. Beijersbergen and M. J. Padgett. Phys. Rev. Lett. 105, 153601 (2010).
Efficient separation of the orbital angular momentum eigenstates of light. See, M Mirhosseini, Z Malik, R W Shi, Boyd, Nat. Comm. 42781See for instance M. Mirhosseini, M. Malik, Z. Shi and R. W. Boyd. Efficient separation of the orbital angular momentum eigenstates of light. Nat. Comm. 4, 2781 (2013).
Generalized Jarzynski equality under nonequilibrium feedback control. T Sagawa, M Ueda, Phys. Rev. Lett. 10490602T. Sagawa and M. Ueda. Generalized Jarzynski equality under nonequilibrium feedback control. Phys. Rev. Lett. 104, 090602 (2010).
. Robert Fickler, Radek Lapkiewicz, Marcus Huber, P J Martin, Miles J Lavery, Anton Padgett, Zeilinger, Nat. Comm. 54502Robert Fickler, Radek Lapkiewicz, Marcus Huber, Martin P.J. Lavery, Miles J. Padgett, and Anton Zeilinger. Nat. Comm. 5, 4502 (2014).
. Hao Huang, Giovanni Milione, P J Martin, Guodong Lavery, Yongxiong Xie, Yinwen Ren, Cao, Willner. Sci. Rep. Nisar Ahmed, Thien An Nguyen, Daniel A. Nolan, Ming-Jun Li, Moshe Tur, Robert R. Alfano, and Alan E514931Hao Huang, Giovanni Milione, Martin P. J. Lavery, Guodong Xie, Yongxiong Ren, Yinwen Cao, Nisar Ahmed, Thien An Nguyen, Daniel A. Nolan, Ming-Jun Li, Moshe Tur, Robert R. Alfano, and Alan E. Willner. Sci. Rep. 5, 14931 (2015).
| [] |
[
"Resynthesis-based Attacks Against Logic Locking",
"Resynthesis-based Attacks Against Logic Locking"
] | [
"Felipe Almeida [email protected] ",
"Levent Aksoy [email protected] ",
"Quang-Linh Nguyen [email protected] ",
"Sophie Dupuis [email protected] ",
"Marie-Lise Flottes [email protected] ",
"Samuel Pagliarini [email protected] ",
"\nDepartment of Computer Systems\nTallinn University of Technology\nTallinnEstonia\n",
"\nUniversity of Montpellier\nMontpellierFrance\n"
] | [
"Department of Computer Systems\nTallinn University of Technology\nTallinnEstonia",
"University of Montpellier\nMontpellierFrance"
] | [] | Logic locking has been a promising solution to many hardware security threats, such as intellectual property infringement and overproduction. Due to the increased attention that threats have received, many efficient specialized attacks against logic locking have been introduced over the years. However, the ability of an adversary to manipulate a locked netlist prior to mounting an attack has not been investigated thoroughly. This paper introduces a resynthesis-based strategy that utilizes the strength of a commercial electronic design automation (EDA) tool to reveal the vulnerabilities of a locked circuit. To do so, in a pre-attack step, a locked netlist is resynthesized using different synthesis parameters in a systematic way, leading to a large number of functionally equivalent but structurally different locked circuits. Then, under the oracle-less threat model, where it is assumed that the adversary only possesses the locked circuit, not the original circuit to query, a prominent attack is applied to these generated netlists collectively, from which a large number of key bits are deciphered. Nevertheless, this paper also describes how the proposed oracle-less attack can be integrated with an oracle-guided attack. The feasibility of the proposed approach is demonstrated for several benchmarks, including remarkable results for breaking a recently proposed provably secure logic locking method and deciphering values of a large number of key bits of the CSAW'19 circuits with very high accuracy. | 10.1109/isqed57927.2023.10129403 | [
"https://export.arxiv.org/pdf/2301.04400v1.pdf"
] | 255,595,797 | 2301.04400 | 9ecbc9c709f9a00fe609891dba3ef10db119da89 |
Resynthesis-based Attacks Against Logic Locking
Felipe Almeida [email protected]
Levent Aksoy [email protected]
Quang-Linh Nguyen [email protected]
Sophie Dupuis [email protected]
Marie-Lise Flottes [email protected]
Samuel Pagliarini [email protected]
Department of Computer Systems
Tallinn University of Technology
TallinnEstonia
University of Montpellier
MontpellierFrance
Resynthesis-based Attacks Against Logic Locking
Abstract-Logic locking has been a promising solution to many hardware security threats, such as intellectual property infringement and overproduction. Due to the increased attention that threats have received, many efficient specialized attacks against logic locking have been introduced over the years. However, the ability of an adversary to manipulate a locked netlist prior to mounting an attack has not been investigated thoroughly. This paper introduces a resynthesis-based strategy that utilizes the strength of a commercial electronic design automation (EDA) tool to reveal the vulnerabilities of a locked circuit. To do so, in a pre-attack step, a locked netlist is resynthesized using different synthesis parameters in a systematic way, leading to a large number of functionally equivalent but structurally different locked circuits. Then, under the oracle-less threat model, where it is assumed that the adversary only possesses the locked circuit, not the original circuit to query, a prominent attack is applied to these generated netlists collectively, from which a large number of key bits are deciphered. Nevertheless, this paper also describes how the proposed oracle-less attack can be integrated with an oracle-guided attack. The feasibility of the proposed approach is demonstrated for several benchmarks, including remarkable results for breaking a recently proposed provably secure logic locking method and deciphering values of a large number of key bits of the CSAW'19 circuits with very high accuracy.
Index Terms-Logic locking, resynthesis, EDA tools, oracle-less and oracle-guided attacks.
I. INTRODUCTION
Due to the globalized integrated circuit (IC) supply chain, serious security threats, such as hardware Trojans, piracy, overbuilding, reverse engineering, and counterfeiting, have emerged [1]. Many defense techniques, such as watermarking [2], digital rights management [3], metering [4], and logic locking [5], have been introduced over the years to deal with these threats. Among those, logic locking stands out by being a well-established technique and by offering protection against a diverse array of adversaries [6]. Logic locking inserts additional logic driven by key bits so that the circuit behaves as expected only when the secret key is applied.
On the other hand, many efficient attacks have been introduced to overcome the defenses built by logic locking [7]. However, the impact of an electronic design automation (EDA) tool on the manipulation of the locked netlist before performing an attack has not been investigated thoroughly. In this work, we explore whether EDA tools can be used to make a locked circuit vulnerable to existing logic locking attacks. Thus, the main contributions of this work are three-fold: (i) we introduce a resynthesis procedure that is a pre-attack step, where functionally equivalent but structurally different locked circuits are generated by resynthesizing the original locked circuit using different optimization parameters and delay constraints in order to create structural vulnerabilities that can be exploited by existing attacks; (ii) we present an oracle-less (OL) resynthesis-based attack, which applies the prominent SCOPE attack [8] to these resynthesized circuits and gathers all its solutions to discover the secret key, as sketched below; (iii) we show that our OL attack can be combined with a traditional oracle-guided (OG) attack to further improve the number of correctly deciphered key bits. The last contribution is essential, since we consider circuits from the CSAW'19 contest - these circuits compound the use of two logic locking techniques at the same time. (This work has been partially conducted in the project "ICT programme", which was supported by the European Union through the European Social Fund. It was also partially supported by the European Union's Horizon 2020 research and innovation programme under grant agreement No 952252 (SAFEST).)
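The "gathering" step in contribution (ii) can be pictured as combining per-netlist key guesses into a single key. The Python sketch below uses a per-bit majority vote over the guesses returned for each resynthesized netlist; this merging rule is an illustrative assumption, not necessarily the exact rule used in the paper.

from collections import Counter

def merge_key_guesses(guess_lists):
    """Combine per-netlist key guesses (strings over '0', '1', 'x') into one key.
    Majority vote per bit position, ignoring undeciphered bits ('x')."""
    merged = []
    for bits in zip(*guess_lists):
        votes = Counter(b for b in bits if b in "01")
        merged.append(votes.most_common(1)[0][0] if votes else "x")
    return "".join(merged)

# Example: guesses from three resynthesized netlists, four key bits.
print(merge_key_guesses(["1x0x", "10xx", "x00x"]))   # -> "100x"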
The main finding of this work is that the use of many resynthesized locked circuits enables us to discover values of more key bits, and even the whole key, when compared to a single attack mounted on the original locked netlist.
The remainder of this paper is organized as follows: Section II presents the background concepts and related work. The resynthesis process and the proposed attacks are described in Section III. Experimental results are given in Section IV. Finally, Section V concludes the paper.
II. BACKGROUND
A. Logic Locking and Threat Models
The procedure of logic locking is applied at the gate level in the IC design flow, as shown in Fig. 1. Note that the layout of the locked circuit is sent to the foundry without revealing the secret key. After the locked IC is produced and delivered to the design house, the values of the secret key are stored in a tamper-proof memory, before the functional IC is sent to the market.
It is assumed that the gate-level netlist of the locked circuit can be obtained directly by an untrusted foundry or by reverse-engineering a functional IC obtained from the open market. An adversary can also use the functional IC programmed with the secret key as an oracle to apply inputs and observe outputs. Thus, in logic locking, there are generally two threat models: OL and OG. In the OL threat model, only the gate-level netlist of the locked circuit is available to the adversary. The adversary has both the netlist of the locked circuit and the functional IC in the OG threat model.
B. Related Work
After the introduction of random logic locking (RLL) using XOR/XNOR gates in [9], earlier work focused on different types of key gates, such as AND/OR, multiplexors, and look-up tables, taking into account the hardware complexity of the locked circuit [5]. However, the OG satisfiability (SAT)-based attack [10] overcame all the defenses existing at that time. Note that the SAT-based attack iteratively finds differentiating input patterns (DIPs) that rule out wrong keys. To thwart the SAT-based attack and its variants, circuits are locked using a point function that sets a limit on the number of wrong keys which a DIP can eliminate, forcing these attacks to explore an exponential number of queries [6], [11]- [14].
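The key-pruning idea behind the SAT-based attack can be illustrated without a SAT solver on a toy locked function (a sketch only: the netlists are replaced by plain Python functions, DIPs are found by brute force instead of a miter formulation, and the circuit and key below are invented for illustration).

from itertools import product

def locked(x, k):
    # Toy 3-input, 2-key-bit locked circuit (illustrative only).
    return (x[0] & x[1]) ^ k[0] ^ (x[2] & k[1])

SECRET = (1, 0)
oracle = lambda x: locked(x, SECRET)          # the functional IC used as an oracle

candidates = set(product([0, 1], repeat=2))   # all key guesses
for x in product([0, 1], repeat=3):
    outs = {k: locked(x, k) for k in candidates}
    if len(set(outs.values())) > 1:           # x distinguishes some keys: a DIP
        y = oracle(x)
        candidates = {k for k in candidates if outs[k] == y}

print(candidates)   # surviving keys are functionally equivalent to SECRET

Point-function defenses make each such DIP rule out only very few wrong keys, which is why the number of required queries grows exponentially.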
The SAT-resilient methods can be categorized into two groups: single-flip locking technique (SFLT) and double-flip locking technique (DFLT), as shown in Fig. 2. An SFLT has only one critical point, which is responsible for corrupting a protected output under a specific input pattern. In this category, SARLock [15] adds a comparator and a masking circuit connected with the original netlist in such a way that it generates a corruption on one input pattern. Anti-SAT [11] utilizes two complementary AND gate trees, whose output is merged with the original circuit. CASLock [12] is based on the same concept as Anti-SAT; however, it uses both AND and OR gates. SKG-Lock [14] uses decoy key bits and provides a tunable output corruption. Note that SFLTs are susceptible to removal attacks [16]-[18]. If an attacker can identify this single critical point, he/she can split the design into a recovered netlist (original) and the locking unit.
A DFLT has two critical points, one that connects the original netlist with a perturbation unit and another one that connects the output of the stripped circuit with the restore unit. Under this category, stripped functionality logic locking (SFLL) [6], [13] initially corrupts an output based on an input combination in the perturbation unit and then, corrects this output only when the secret key is applied in the restore unit. Note that a removal attack becomes inefficient for a DFLT, since the original circuit is mixed with the perturbation unit, even though it can easily identify the restore unit. However, there exist efficient structural attacks developed for DFLTs [19]- [22].
Alternative locking techniques have also been introduced. In [23], a technique, which has more than two critical points, called the multi-flip locking technique (MFLT), was proposed. However, it leads to a significant increase in area, power dissipation, and delay when compared to other techniques. Compound logic locking techniques were proposed to overcome the main drawback of a SAT-resilient technique, i.e., its low output corruptibility as can be observed in Fig. 2, by locking a design using both low and high output corruptibility techniques, such as SFLL and RLL, respectively [24]. Recently, efficient attacks have also been introduced against compound logic locking [25], [26].
Moreover, OL attacks explore patterns in the structure of a locked netlist using statistical analysis [8], [27], [28]. For example, the SCOPE attack [8] is an unsupervised constant propagation technique, which analyzes each key bit of the locked design for critical features that can reveal its correct value after the key bit is assigned logic 0 and logic 1 values. These critical features include area, power dissipation, delay, and many other circuit characteristics obtained by a synthesis tool. The features are analyzed using linear regression and machine learning based clustering.
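To make the constant-propagation idea concrete, the sketch below shows the skeleton of such an analysis for a single key bit. Here `synthesize_with_constant` is a hypothetical helper standing in for a full synthesis run, and the single-feature decision rule is a deliberately naive placeholder for SCOPE's regression- and clustering-based analysis; this is not SCOPE's actual implementation.

```python
# Schematic of the constant-propagation idea behind oracle-less attacks such as SCOPE.
# 'synthesize_with_constant' is a hypothetical helper: it ties one key input of the
# locked netlist to a constant, resynthesizes, and returns a feature dictionary
# (area, power, delay, ...).  It is NOT part of SCOPE.

def guess_key_bit(locked_netlist, key_bit, synthesize_with_constant):
    feats0 = synthesize_with_constant(locked_netlist, key_bit, 0)
    feats1 = synthesize_with_constant(locked_netlist, key_bit, 1)
    if feats0["area"] == feats1["area"]:
        return None                      # no useful structural difference: unknown ('X')
    # Placeholder decision rule based on a single feature; SCOPE instead analyzes many
    # features with linear regression and machine-learning-based clustering.
    return 0 if feats0["area"] < feats1["area"] else 1
```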
III. PROPOSED RESYNTHESIS-BASED ATTACK
This section describes our resynthesis-based attack in detail. We first introduce the pre-attack step, in which the locked circuit is resynthesized using different synthesis parameters, leading to a large number of structurally different netlists with the same functionality [29]. Then, we present the OL attack that utilizes these resynthesized netlists in order to find the secret key. Finally, in order to handle compound logic locking efficiently, we present a modified version in which our proposed OL attack cooperates with an OG attack.
A. The Pre-attack Step: Resynthesis of the Locked Netlist
The locked circuit is synthesized multiple times using a different script each time, where the synthesis parameters are explored in a systematic way. We use the following parameters to increase the number of resynthesized locked circuits:
Synthesis Effort: In a synthesis tool, logic optimizations can be applied with different efforts at different synthesis stages. This flexibility enables a designer to explore the trade-off between the quality of results and run time. The following efforts are considered at the given synthesis stage: low, medium, and high at generic transformations (syn_gen); low, medium, and high at mapping (syn_map); and low, medium, high, and extreme at optimization (syn_opt).
Delay Constraint: To meet performance targets, delay constraints are used to guide the synthesis tool. We initially resynthesize the locked circuit without a delay constraint and find the delay of its critical path, i.e., dcp. Then, in an interval between 0 and dcp, d − 1 points, which are computed as (dcp/d)i with 1 ≤ i ≤ d − 1, are set as delay constraints. Note that d is set to 5 in order to generate a large number of resynthesized circuits. Even though some delay constraints are impossible to meet, the synthesis tool always generates a netlist equivalent to the original one in terms of functionality.
Maximum Transition: The transition time of a net in a circuit is defined as the longest time required for its driving pin to change its logic value. We choose the maximum transition value to be 5%, 10%, and 15% of the delay constraint for all the nets in the locked circuit to explore different resynthesized circuits.
Key Constraints: To direct the synthesis tool to work intensively on the paths that contain the keyed logic, a delay constraint, which is impossible to be satisfied, can also be used. In this case, we force the delay between all key bits and all primary outputs to be 1 ps.
Thus, the combination of the parameters given above generates 3 × 3 × 4 × 5 × 3 × 2 = 1080 netlists. We eliminate the resynthesized circuits with identical characteristics and keep only the unique ones. Additionally, we prevent the use of XOR/XNOR gates, which can be problematic for the SCOPE attack, during technology mapping. Note that our resynthesis methodology aims to generate different versions of the locked circuit, making it more vulnerable to existing attacks. Thus, any existing attack, either OL or OG, may potentially benefit from this pre-attack strategy to discover the secret key.
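For illustration, the parameter grid can be enumerated as follows (a minimal Python sketch; the value names are taken from the description above, while the translation of each configuration into an actual Genus synthesis script is omitted).

```python
from itertools import product

syn_gen  = ["low", "medium", "high"]             # generic transformations
syn_map  = ["low", "medium", "high"]             # technology mapping
syn_opt  = ["low", "medium", "high", "extreme"]  # optimization
d = 5
delays   = [None] + [f"(dcp/{d})*{i}" for i in range(1, d)]   # unconstrained + d-1 points
max_tran = [0.05, 0.10, 0.15]                    # fraction of the delay constraint
key_con  = [False, True]                         # 1 ps key-to-output constraint or not

configs = list(product(syn_gen, syn_map, syn_opt, delays, max_tran, key_con))
assert len(configs) == 3 * 3 * 4 * 5 * 3 * 2 == 1080
```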
B. Attacks on the Resynthesized Netlists
Time-efficient attacks are chosen in order to handle a large number of resynthesized circuits. In our OL resynthesis-based attack, SCOPE [8] is used to predict the values of key bits. In its modified version developed for compound logic locking, a query attack is used to find the values of key bits in a deterministic way.
1) Proposed OL Attack: SCOPE is applied to each resynthesized locked circuit and a solution is found. Note that this solution may return a logic 0, 1, or an unknown value for a key bit. Then, the values of key bits deciphered for each netlist are merged into a single solution that represents the overall guess. To do so, for each key bit k_i with 1 ≤ i ≤ p, where p denotes the number of key bits, we initially count the number of solutions in which k_i is deciphered as logic 0 and as logic 1, denoted dk_i^0 and dk_i^1, respectively. Then, if dk_i^0 > dk_i^1 or dk_i^1 > dk_i^0, the value of k_i is determined to be 0 or 1, respectively. Otherwise, in the case of a tie, the value of k_i is decided to be unknown.
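A minimal sketch of this merging step is given below; `solutions` is assumed to hold one SCOPE result per resynthesized netlist, with entries 0, 1 or None for an unknown key bit.

```python
def merge_solutions(solutions, p):
    """Combine per-netlist key guesses into a single overall guess by majority vote."""
    merged = []
    for i in range(p):
        dk0 = sum(1 for s in solutions if s[i] == 0)
        dk1 = sum(1 for s in solutions if s[i] == 1)
        if dk0 > dk1:
            merged.append(0)
        elif dk1 > dk0:
            merged.append(1)
        else:
            merged.append(None)   # tie: key bit stays unknown
    return merged
```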
2) Proposed OG Attack: In order to handle a large number of resynthesized netlists efficiently, we introduce a SAT-based query attack, which can determine the actual values of individual key bits. Note that traditional SAT-based attacks rather attempt to find the whole secret key, which increases the computational effort significantly. In this attack, we initially find queries, i.e., values of inputs of the oracle circuit, using two techniques. The first technique uses the ATPG tool Atalanta [30] to find test patterns for the stuck-at-fault of each key bit on the locked circuit and stores the values of the related primary inputs as queries. The aim is to find input patterns that can propagate each key bit to a primary output, making it observable. The second technique finds queries randomly. The aim is to find input patterns that may make multiple key bits observable. In our experiments, we generate a total of 2p queries, where p denotes the number of key bits.
Then, we describe the locked circuit in a conjunctive normal form (CNF) formula C by expressing each gate in its CNF. Each query is applied to the oracle and the values of primary outputs are obtained. Then, the related input and output values are assigned to the associated nets in the locked circuit, the constant values of these nets are propagated, and the Boolean equations including key bits are derived in a CNF formula E. The SAT problem including the locked circuit in CNF, i.e., C, is augmented with these equations, i.e., C = C ∧ E. After all the queries are considered, the SAT problem C is solved using a SAT solver and the values of key bits are determined. Note that the locked circuit with the found values of key bits behaves exactly the same as the oracle under the given queries, but not under all possible input values. Hence, these key values are not guaranteed to be the values of the secret key.
However, the value found for a key bit can be proved if it is indeed equal to the actual value of the related bit in the secret key using the concept of proof by contradiction. To do so, for each key bit, the complement of its found value is added into C and the SAT solver is run. If there exists no solution to C, i.e., the SAT problem is unsatisfiable, the value of the related key bit is proven to be the one in the found solution.
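The sketch below illustrates this proof-by-contradiction step with an off-the-shelf SAT solver (here Glucose via the PySAT package, whereas the attack itself uses the incremental solver CaDiCaL); `clauses` is assumed to be the augmented CNF C = C ∧ E and `key_vars` a hypothetical mapping from key bits to their CNF variables.

```python
from pysat.solvers import Glucose3

def prove_key_bits(clauses, key_vars):
    solver = Glucose3(bootstrap_with=clauses)
    assert solver.solve(), "the augmented CNF must be satisfiable for the correct key"
    model = set(solver.get_model())
    proven = {}
    for bit, var in key_vars.items():
        found = 1 if var in model else 0
        complement = -var if found == 1 else var
        # If assuming the complement makes the formula unsatisfiable, the found
        # value is the only possibility, i.e. it is proven by contradiction.
        if not solver.solve(assumptions=[complement]):
            proven[bit] = found
    solver.delete()
    return proven
```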
As a simple example, consider the majority circuit in Fig. 3(a) and suppose that it is locked using XOR/XNOR gates as given in Fig. 3(b). Assume that a query is found as abc = 000 and thus, the value of its output f is obtained as 0 using the oracle. After propagating these values on the locked circuit as shown in Fig. 3(c), a Boolean equation k_0 ∨ ¬k_1 = 0, i.e., ¬k_0 ∧ k_1 in CNF, is obtained. In the SAT solution, the key bit values are found as k_0 k_1 = 01. Note that these are proven key values, since the SAT solver guarantees that there exists no solution to the SAT problem C extended by either k_0 = 1, i.e., k_0 in CNF, or k_1 = 0, i.e., ¬k_1 in CNF, due to a conflict with the found Boolean equation, i.e., ¬k_0 ∧ k_1 in CNF. The query attack is run on all the resynthesized circuits and the proven values of key bits in each netlist are combined into a single solution. Note that the query attack is developed in Perl and is equipped with the incremental SAT solver CaDiCaL [31].
Finally, the solution of the OG resynthesis-based attack is determined after merging the solution of the SCOPE attack over all resynthesized circuits into that of the query attack on all resynthesized circuits without changing the proven values of key bits.
IV. EXPERIMENTAL RESULTS
This section initially presents the results of the proposed OL resynthesis-based attack on the ISCAS'85 circuits [32] and then, those of the OG resynthesis-based attack on the CSAW'19 circuits [24] including compound logic locking.
A. Results on the ISCAS'85 Circuits
As the first experiment set, five ISCAS'85 circuits were considered. Table I presents their details. For our experiments, these circuits were locked by the Anti-SAT [11], CASlock [12], SFLL [6], and SKG-Lock [14] techniques. Note that while Anti-SAT and SFLL were taken from the NEOS tool [33], SKG-Lock was provided by its developer, and CASLock was implemented by ourselves. Table I also presents details of the locked circuits. Note that the number of keys, i.e., p, was determined based on the number of inputs and overhead of the locking technique, and circuit characteristics, i.e., the number of inputs, outputs, and gates, were taken from the gate-level netlist.
Observe from Table I that all logic locking techniques lead to circuits with similar gate counts, whereas the circuits locked by SFLL have a slightly larger number of gates. Besides, the overhead in the number of gates of the circuits locked by SFLL varies from 4.7% to 19.1% when compared to the original circuits.
In the following subsections, we present the results of the resynthesis process and OL resynthesis-based attack, analyze the impact of synthesis parameters on the performance of the resynthesis process and SCOPE attack, and introduce improvements to the run-time of the resynthesis process.
1) Resynthesis of the Locked ISCAS'85 Circuits: The resynthesis is performed by Cadence Genus with a commercial 65 nm standard cell library, and the whole process is automated in a Perl script. Table II presents the resynthesis results of locked circuits. In this table, unique denotes the number of unique locked netlists out of 1080 generated netlists and area, delay, and power stand respectively for the average values of total area in µm 2 , delay in the critical path in ps, and total power dissipation in µW on the unique locked netlists. Finally, time is the total run time of the resynthesis process. The resynthesized netlists were generated on a computing server with Intel Xeon processing units at 3.9 GHz and a total of 1 TB memory.
Observe from Table II that the number of unique netlists is less than half of the total number of generated netlists, i.e., 540, except the c3540 circuit locked by SKG-Lock. Note that Anti-SAT, CASLock, and SFLL lead to fewer unique netlists when compared to SKG-Lock, which is mainly because the logic added by these techniques is more compact than that added by SKG-Lock, which uses a chain of AND gates. We note that the synthesis tool consumes a large amount of time to fulfill a delay constraint that is impossible to meet, such as strict delay constraints and key constraints described in Section III-A. Hence, the run-time of the resynthesis process depends on the locked circuit and the logic locking technique, and more importantly, if there exists enough room for the synthesis tool to satisfy the constraints.
In order to illustrate the diversity of the resynthesized netlists, the c2670 circuit locked by SFLL is considered. Fig. 4 presents the area, delay, and power dissipation of each unique netlist, normalized by their average values given in Table II. Observe that resynthesis generates circuits that differ significantly from each other in terms of hardware complexity. The standard deviations of the area, delay, and power dissipation values over all these netlists are 1578, 235, and 4964, respectively. Note also that in this figure, the netlists after instance number 232 have a distinct profile, since they are generated using the key constraints described in Section III-A.
In order to illustrate the differences in the structure of generated netlists, the c2670 circuit locked by SKG-Lock is considered. Fig. 5 presents the graphs of two netlists resynthesized using the same synthesis parameters, except for the delay constraint. In this figure, red, green, and blue circles denote the inputs, key bits, and outputs, respectively; the gray triangles represent the gates. Observe that a small change in the delay constraint can lead to a structurally different netlist, where the difference between the number of gates and logic levels is 599 and 12, respectively.
2) Attacks on the Locked ISCAS'85 Circuits: Table III presents the results of the SCOPE attack on the original locked netlists and those of OL resynthesis-based attack on the unique locked netlists generated in the resynthesis process. In this table, cdk and dk stand respectively for the number of correctly deciphered key bits and the total number of deciphered key bits and time is the total time required for the attack. The attacks were also run on the same server used to resynthesize the locked netlists.
Observe from Table III that the SCOPE attack is not entirely successful on any of the original locked netlists. However, the use of resynthesized netlists enables us to decipher the values of a large number of key bits, and even the whole key, e.g., for the c2670 and c3540 circuits locked by SKG-Lock. Note that the SCOPE attack can decipher almost all of the key bits using the resynthesized netlists locked by the SKG-Lock technique. While the results on the netlists locked by SKG-Lock are all correct, the ones on the netlists locked by Anti-SAT, CASLock, and SFLL are only slightly better than a random guess. The run time of the SCOPE attack and our resynthesis-based attack depends mainly on the number of gates and keys in the locked design.
To find the SAT resiliency of resynthesized locked circuits, the SAT-based attack of [10] was run on 541 netlists of the c3540 circuit locked by SKG-Lock with a time limit of 2 days. This circuit was chosen since it has the smallest number of key bits with the smallest number of gates. Note that the SAT-based attack was not able to find the secret key of any resynthesized locked netlists. This experiment indicates that the resynthesis changes only the structure of the circuit as shown in Fig. 5, but maintains its SAT resiliency.
3) Redundant Synthesis Runs: Observe from Tables II and III that the total run-time of the proposed attack is dominated by the resynthesis process. However, it is possible to reduce the time required to resynthesize the locked netlist by removing redundant synthesis runs without sacrificing any unique netlists. For example, it is observed that the high value of the syn_gen parameter given in Section III-A can be removed from the parameter list, since every synthesis script using this value generates the same circuit as the corresponding script with the low or medium value. Thus, the number of generated circuits, i.e., 1080, reduces to 720.
4) Convergence on the Number of Deciphered Keys: It is also observed that the number of key bits deciphered by the SCOPE attack on all unique resynthesized netlists can actually be obtained using a small number of netlists. Fig. 6 presents the number of deciphered key bits along the unique resynthesized netlists of the c2670 circuit locked by SKG-Lock. Observe from this figure that although a large number of unique netlists increases the quality of the SCOPE attack, actually a small number of unique netlists, 147 in this case, is sufficient to achieve the same result as when all 521 unique netlists are considered. We note that a similar situation was also observed on circuits locked by other techniques.
5) Promising Resynthesized Netlists: Moreover, it is observed that the SCOPE attack is more successful on specific resynthesized netlists. To find a set of synthesis parameters that enables the SCOPE attack to decipher more key values, we initially define two categories of netlists based on the slack time of the design, i.e., the difference between the required and arrival times in the critical path, as follows: i) netlists with a slack value less than or equal to 0; ii) netlists with a slack value greater than 0. The slack value of a design indeed gives a rough idea of the effort put in by the synthesis tool; for the netlists in the first category, the synthesis tool works extremely hard to meet the delay constraint, trying many logic transformations and optimization techniques.
Then, the solutions of the SCOPE attack on all possible 1080 netlists are obtained and sorted based on the number of deciphered key bits in descending order. The top 10% of these sorted netlists are categorized based on their slack values. Fig. 7 presents the results of this experiment on the circuits locked by SKG-Lock. Observe that the netlists that enable the SCOPE attack to decipher more key values generally have a slack value less than or equal to 0. Thus, to generate such circuits, one can simply add strict delay constraints or key constraints as described in Section III-A. We note that a similar result was also observed on resynthesized netlists locked by other techniques.
6) Structural Analysis: In order to improve the performance of the resynthesis process, the logic cone to which the locking technique is applied can be extracted and resynthesized. Note that the output of this logic cone is a single primary output, while its inputs are primary inputs, but not necessarily all the primary inputs of the locked design. Thus, the run-time of the resynthesis process can be decreased, since the logic cone has a small number of inputs, outputs, and gates when compared to the whole locked circuit.
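A minimal sketch of such a cone extraction is shown below; `drivers` is a hypothetical netlist representation mapping each net to the nets feeding the gate that drives it, not the format of any particular tool.

```python
from collections import deque

def fanin_cone(drivers, protected_output):
    """Collect every net in the transitive fan-in of one protected primary output."""
    cone, queue = set(), deque([protected_output])
    while queue:
        net = queue.popleft()
        if net in cone:
            continue
        cone.add(net)
        for src in drivers.get(net, []):   # primary inputs have no drivers
            queue.append(src)
    return cone                            # the sub-circuit to be resynthesized
```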
Table IV presents details of the resynthesis process on entire locked circuits and on logic cones when the circuits locked by SFLL are used. Observe that the resynthesis process on a logic cone generates a smaller number of unique designs and takes significantly less time, without a significant loss in solution quality, when compared to the resynthesis process on the entire circuit. We note that similar results were also observed on circuits locked by other techniques.
B. Results on the CSAW'19 Circuits
As the second experiment set, we used the state-of-the-art locked circuits from the CSAW'19 contest [24]. Details of these circuits are given in Table V. Note that two logic locking techniques -RLL [9] and SFLL-rem [13] -are applied together to lock a circuit.
In the following two subsections, we present the results of the resynthesis process and the resynthesis-based attack.
1) Resynthesis of the Locked CSAW'19 Circuits: Table VI presents the resynthesis results of locked circuits. Observe that the number of unique resynthesized netlists is larger than half of the total number of generated netlists, i.e., 540. As the hardware complexity of designs increases, the run-time of the resynthesis process increases. We note that diverse netlists in terms of complexity are obtained, e.g., the standard deviation on area, delay, and power dissipation values of all the locked netlists of the small circuit is computed as 8526, 1029, and 20074, respectively.
2) Attacks on the Locked CSAW'19 Circuits: Table VII presents results of the attacks obtained, after they are applied to the original locked netlist, denoted as OLN, and all unique resynthesized netlists, denoted as URNs. In this table, prv stands for the number of proven values of key bits. Note that since the secret key is not publicly available, the cdk values are omitted for the SCOPE and resynthesis-based attacks.
Observe from Table VII that the original SCOPE attack could only decipher a small number of key bits, all of which belong to RLL, and that the query attack can prove the values of a large number of key bits, which again all belong to RLL, on the original locked circuits. Thus, the resynthesis-based attack could only decipher the RLL key bits on the original locked circuits. However, the use of resynthesized circuits makes the SCOPE attack decipher more key bits, including bits that belong to SFLL-rem, and makes the query attack prove the values of more key bits that belong to RLL. Thus, the resynthesis-based attack could decipher almost all the values of the secret key, proving almost all the values of the key bits of RLL. Note that all the unknown key bits belong to SFLL-rem. Observe also that the run-time of the attacks increases as the number of gates and key bits increases.
After the values of key bits of the CSAW'19 circuits were determined, they were sent to the contest organizers for evaluation. Table VIII presents the results of the resynthesis-based attack along with those of other techniques which participated in the contest.
Observe from Table VIII that our proposed attack can determine all the key bits of RLL correctly, even though there are unproven key bits in the medium and bonus circuits, as shown in Table VII. This observation implies that the guesses of the SCOPE attack on those key bits are actually correct. Moreover, the proposed technique deciphers more SFLL-rem key bits than any other OL technique, with high accuracy.
V. CONCLUSIONS
This work has shown that EDA tools can be used to generate variants of locked circuits that may be vulnerable to existing logic locking attacks and such circuits can be generated using a specific set of synthesis parameters. It was shown that the run-time of the proposed technique can be improved using a small number of resynthesized netlists without diminishing its solution quality. Experimental results clearly indicated that the use of many resynthesized circuits enables existing attacks to decipher values of a large number of key bits with high accuracy. Hence, the resynthesis of a locked circuit can be utilized as a pre-attack step for many existing attacks in order to improve their success rate. As future work, we plan to consider other synthesis parameters, such as fanout, capacitance limits, and wire loads, which enable synthesis tools to generate different circuits. Also, we aim to incorporate other commercial and open source EDA tools into the resynthesis process to generate different unique netlists.
Fig. 1. Conventional logic locking in the IC design flow (adapted from [...]).
Fig. 2. SAT-resilient logic locking methods: (a) SFLT; (b) DFLT.
Fig. 3. (a) Majority circuit; (b) Locked majority circuit; (c) Constant propagation on the locked majority circuit.
Fig. 4. Normalized complexity of resynthesized netlists of the c2670 circuit locked by SFLL: (a) area; (b) delay; (c) power.
Fig. 5. Graphs of resynthesized netlists generated using a difference in the delay constraint dc: (a) dc is 990 ps; (b) dc is 496 ps.
Fig. 6. Convergence on the number of deciphered keys over the number of resynthesized netlists in the SCOPE attack.
Fig. 7. Classification of resynthesized netlists based on their slack values on promising solutions of the SCOPE attack.
TABLE I
DETAILS OF THE ISCAS'85 CIRCUITS.

           Original Netlist              Locked Netlist (#gates)
Circuit    #in    #out    #gates    p    Anti-SAT    CASLock    SFLL    SKG-Lock
c2670      157    64      1193      64   1321        1320       1421    1401
c3540      50     22      1669      32   1733        1732       1783    1773
c5315      178    123     2307      64   2435        2434       2523    2514
c6288      32     32      2416      32   2480        2479       2531    2516
c7552      206    105     3512      64   3640        3639       3729    3713
TABLE II
RESULTS OF RESYNTHESIZED LOCKED ISCAS'85 CIRCUITS.

Technique   Details    c2670        c3540          c5315          c6288          c7552
Anti-SAT    #unique    480          537            464            498            439
            area       2357         2803           4112           7265           5387
            delay      504          818            663            2144           694
            power      5518         4934           4297           9403           7479
            time       17h14m51s    1d05h56m12s    1d09h56m22s    3d20h50m46s    1d16h01m13s
CASLock     #unique    473          449            488            410            479
            area       2359         3112           4173           7739           5337
            delay      513          874            650            2146           676
            power      5170         3304           3852           10693          6765
            time       15h29m56s    1d11h02m52s    1d06h52m54s    4d03h12m29s
TABLE III
RESULTS OF ATTACKS ON THE LOCKED ISCAS'85 CIRCUITS.

           Anti-SAT                        CASLock                         SFLL                            SKG-Lock
           SCOPE         Resynthesis       SCOPE         Resynthesis       SCOPE         Resynthesis       SCOPE         Resynthesis
Circuit    cdk/dk time   cdk/dk  time      cdk/dk time   cdk/dk  time      cdk/dk time   cdk/dk  time      cdk/dk time   cdk/dk  time
c2670      0/0    4s     37/64   34m18s    0/0    4s     35/64   33m47s    0/0    4s     34/64   37m32s    32/32  4s     64/64   44m37s
c3540      0/0    3s     17/32   21m27s    0/0    3s     17/32   18m12s    0/0    2s     19/32   21m29s    17/17  2s     32/32   24m30s
c5315      0/0    5s     38/64   42m34s    0/0    5s     30/64   43m54s    0/0    5s     33/64   46m23s    32/32  5s     62/62   52m06s
c6288      0/0    3s     18/32   29m08s    0/0    3s     16/32   27m18s    0/0    3s     16/31   33m19s    16/16  3s     31/31   34m24s
c7552      0/0    6s     38/64   45m31s    0/0    6s     47/64   49m13s    0/0    6s     38/63   52m26s    32/32  6s     61/61   56m45s
TABLE IV
RESULTS OF THE RESYNTHESIS PROCESS ON ENTIRE CIRCUIT AND LOGIC CONE.

           Entire Circuit                    Logic Cone
Circuit    #unique  time          cdk/dk     #unique  time         cdk/dk
c2670      468      13h13m23s     34/64      319      07h46m26s    34/64
c3540      484      1d47m51s      19/32      320      06h29m35s    16/32
c5315      477      21h57m14s     33/64      313      07h06m16s    32/64
c6288      523      2d22h15m7s    16/31      302      06h20m57s    19/32
c7552      504      22h40m29s     38/63      279      06h57m14s    38/63

TABLE V
DETAILS OF THE LOCKED CSAW'19 CIRCUITS.

                                       Number of keys
Circuit    #in     #out    #gates      RLL     SFLL-rem    Total
small      522     512     15995       40      40          80
medium     767     757     24008       60      60          120
large      1452    1445    36584       80      80          160
bonus      892     1746    23004       128     128         256
TABLE VI
RESULTS OF RESYNTHESIZED LOCKED CSAW'19 CIRCUITS.

Circuit    #unique    area     delay    power    time
small      557        18935    1631     23571    5d3h22m28s
medium     569        26080    1745     31284    6d12h24m16s
large      567        31348    1798     24610    5d21h42m10s
bonus      560        20643    1758     19090    4d14h44m29s
TABLE VII
RESULTS OF ATTACKS ON THE LOCKED CSAW'19 CIRCUITS.

                   SCOPE               Query                 Resynthesis
Circuit-Netlist    dk     time         prv    time           dk     time
small - OLN        19     20s          39     1m21s          40     1m41s
small - URNs       77     4h10m42s     40     1d10h4m37s     79     1d14h15m19s
medium - OLN       32     41s          58     6m37s          59     7m18s
medium - URNs      117    8h33m56s     58     3d19h12m13s    120    4d3h46m9s
large - OLN        30     1m7s         79     6m19s          79     7m26s
large - URNs       152    12h56m15s    80     3d2h52m11s     159    3d15h48m26s
bonus - OLN        64     1m46s        118    3m2s           120    4m48s
bonus - URNs       233    16h7m17s     125    1d20h29m22s    252    2d12h36m39s
TABLE VIII
RESULTS OF ATTACKS ON THE LOCKED CSAW'19 CIRCUITS.

                                                         small (40+40)       medium (60+60)      large (80+80)       bonus (128+128)
Approach                               Attack Scenario   RLL      SFLL-rem   RLL      SFLL-rem   RLL      SFLL-rem   RLL       SFLL-rem
Key sensitization [34]                 OG                40/40    -          60/60    -          80/80    -          -         -
Hamming distance-based attack [24]     OG                30/30    -          50/50    -          72/72    -          -         -
Automated analysis + SAT [24]          OG                11/18    -          31/50    -          10/34    -          -         -
Sub-circuit SAT [24]                   OG                17/17    -          29/29    -          -        -          -         -
Redundancy-based [27]                  OL                28/28    4/12       35/35    23/28      45/45    0/51       66/66     8/27
Bit-flipping attack [35]               OG                40/40    -          60/60    -          80/80    -          -         -
Topology guided attack [28]            OL                15/32    -          19/50    -          36/73    -          75/108    -
Resynthesis-based attack               OG                40/40    20/39      60/60    29/60      80/80    35/79      128/128   55/124
ACKNOWLEDGMENT
The authors thank Nimisha Limaye for evaluating the keys found by the proposed technique on the CSAW'19 benchmarks.
REFERENCES
[1] M. Rostami, F. Koushanfar, and R. Karri, "A Primer on Hardware Security: Models, Methods, and Metrics," Proceedings of the IEEE, vol. 102, no. 8, pp. 1283-1295, 2014.
[2] A. B. Kahng, J. Lach, W. H. Mangione-Smith, S. Mantik, I. L. Markov, M. Potkonjak, P. Tucker, H. Wang, and G. Wolfe, "Watermarking Techniques for Intellectual Property Protection," in DAC, 1998, pp. 776-781.
[3] Y. Alkabani, F. Koushanfar, and M. Potkonjak, "Remote Activation of ICs for Piracy Prevention and Digital Right Management," in ICCAD, 2007, pp. 674-677.
[4] F. Koushanfar, "Provably Secure Active IC Metering Techniques for Piracy Avoidance and Digital Rights Management," IEEE Transactions on Information Forensics and Security, vol. 7, no. 1, pp. 51-63, 2012.
[5] S. Dupuis and M.-L. Flottes, "Logic Locking: A Survey of Proposed Methods and Evaluation Metrics," J. Electron. Test., vol. 35, no. 3, pp. 273-291, 2019.
[6] M. Yasin, A. Sengupta, M. T. Nabeel, M. Ashraf, J. Rajendran, and O. Sinanoglu, "Provably-Secure Logic Locking: From Theory To Practice," in ACM CCS, 2017, pp. 1601-1618.
[7] K. Z. Azar, H. M. Kamali, H. Homayoun, and A. Sasan, "Threats on Logic Locking: A Decade Later," in GLSVLSI, 2019, pp. 471-476.
[8] A. Alaql, M. M. Rahman, and S. Bhunia, "SCOPE: Synthesis-Based Constant Propagation Attack on Logic Locking," IEEE TVLSI, vol. 29, no. 8, pp. 1529-1542, 2021.
[9] J. A. Roy, F. Koushanfar, and I. L. Markov, "EPIC: Ending Piracy of Integrated Circuits," in DATE, 2008, pp. 1069-1074.
[10] P. Subramanyan, S. Ray, and S. Malik, "Evaluating the Security of Logic Encryption Algorithms," in HOST, 2015, pp. 137-143.
[11] Y. Xie and A. Srivastava, "Anti-SAT: Mitigating SAT Attack on Logic Locking," IEEE TCAD, vol. 38, no. 2, pp. 199-207, 2019.
[12] B. Shakya, X. Xu, M. Tehranipoor, and D. Forte, "CAS-Lock: A Security-Corruptibility Trade-off Resilient Logic Locking Scheme," IACR Transactions on Cryptographic Hardware and Embedded Systems, vol. 2020, no. 1, pp. 175-202, 2019.
[13] A. Sengupta, M. Nabeel, N. Limaye, M. Ashraf, and O. Sinanoglu, "Truly Stripping Functionality for Logic Locking: A Fault-Based Perspective," IEEE TCAD, vol. 39, no. 12, pp. 4439-4452, 2020.
[14] Q.-L. Nguyen, M.-L. Flottes, S. Dupuis, and B. Rouzeyre, "On Preventing SAT Attack with Decoy Key-Inputs," in ISVLSI, 2021, pp. 114-119.
[15] M. Yasin, B. Mazumdar, J. Rajendran, and O. Sinanoglu, "SARLock: SAT Attack Resistant Logic Locking," in HOST, 2016, pp. 236-241.
[16] X. Xu, B. Shakya, M. M. Tehranipoor, and D. Forte, "Novel Bypass Attack and BDD-based Tradeoff Analysis Against All Known Logic Locking Attacks," in Cryptographic Hardware and Embedded Systems, 2017, pp. 189-210.
[17] M. Yasin, B. Mazumdar, O. Sinanoglu, and J. Rajendran, "Removal Attacks on Logic Locking and Camouflaging Techniques," IEEE Transactions on Emerging Topics in Computing, vol. 8, no. 2, pp. 517-532, 2020.
[18] A. Sengupta, N. Limaye, and O. Sinanoglu, "Breaking CAS-Lock and Its Variants by Exploiting Structural Traces," IACR Transactions on Cryptographic Hardware and Embedded Systems, vol. 2021, no. 3, pp. 418-440, 2021.
[19] D. Sirone and P. Subramanyan, "Functional Analysis Attacks on Logic Locking," in DATE, 2019, pp. 936-939.
[20] F. Yang, M. Tang, and O. Sinanoglu, "Stripped Functionality Logic Locking With Hamming Distance-Based Restore Unit (SFLL-hd) - Unlocked," IEEE Transactions on Information Forensics and Security, vol. 14, no. 10, pp. 2778-2786, 2019.
[21] Z. Han, M. Yasin, and J. Rajendran, "Does Logic Locking Work with EDA Tools?" in USENIX Security Symposium, 2021, pp. 1055-1072.
[22] N. Limaye, S. Patnaik, and O. Sinanoglu, "Valkyrie: Vulnerability Assessment Tool and Attack for Provably-Secure Logic Locking Techniques," IEEE Transactions on Information Forensics and Security, vol. 17, pp. 744-759, 2022.
[23] Y. Liu, M. Zuzak, Y. Xie, A. Chakraborty, and A. Srivastava, "Strong Anti-SAT: Secure and Effective Logic Locking," in ISQED, 2020, pp. 199-205.
[24] B. Tan et al., "Benchmarking at the Frontier of Hardware Security: Lessons from Logic Locking," 2020. [Online]. Available: https://arxiv.org/abs/2006.06806
[25] M. John, A. Hoda, R. Chouksey, and C. Karfa, "SAT Based Partial Attack on Compound Logic Locking," in Asian Hardware Oriented Security and Trust Symposium, 2020, pp. 1-6.
[26] N. Limaye, S. Patnaik, and O. Sinanoglu, "Fa-SAT: Fault-aided SAT-based Attack on Compound Logic Locking Techniques," in DATE, 2021, pp. 1166-1171.
[27] L. Li and A. Orailoglu, "Piercing Logic Locking Keys through Redundancy Identification," in DATE, 2019, pp. 540-545.
[28] Y. Zhang, P. Cui, Z. Zhou, and U. Guin, "TGA: An Oracle-Less and Topology-Guided Attack on Logic Locking," in ASHES, 2019, pp. 75-83.
[29] F. Almeida and L. Aksoy, "Resynthesis tool," https://github.com/Centre-for-Hardware-Security/resynthesis_attack, 2023.
[30] H. K. Lee and D. S. Ha, "On the Generation of Test Patterns for Combinational Circuits," Department of Electrical Engineering, Virginia Polytechnic Institute and State University, Tech. Rep. 12-93, 1993.
[31] A. Biere, K. Fazekas, M. Fleury, and M. Heisinger, "CaDiCaL, Kissat, Paracooba, Plingeling and Treengeling entering the SAT Competition 2020," in Proc. of SAT Competition 2020 - Solver and Benchmark Descriptions, ser. Department of Computer Science Report Series B, vol. B-2020-1, University of Helsinki, 2020, pp. 51-53.
[32] F. Brglez and H. Fujiwara, "A Neutral Netlist of 10 Combinational Benchmark Circuits and a Targeted Translator in FORTRAN," in ISCAS, 1985, pp. 663-698.
[33] K. Shamsi, "Netlist Encryption and Obfuscation Suite," 2021. [Online]. Available: https://bitbucket.org/kavehshm/neos/src/master/
[34] J. Rajendran, Y. Pino, O. Sinanoglu, and R. Karri, "Security Analysis of Logic Obfuscation," in DAC, 2012, pp. 83-89.
[35] Y. Shen, A. Rezaei, and H. Zhou, "SAT-based Bit-Flipping Attack on Logic Encryptions," in DATE, 2018, pp. 629-632.
arXiv: 1512.03566
DOI: 10.1088/1475-7516/2016/08/021
PDF: https://export.arxiv.org/pdf/1512.03566v2.pdf
Optimising the measurement of relativistic distortions in large-scale structure

Camille Bonvin [email protected]
Theory Division, CERN, 1211 Geneva, Switzerland
Département de Physique Théorique and Center for Astroparticle Physics (CAP), University of Geneva, 24 quai Ernest Ansermet, CH-1211 Geneva, Switzerland

Lam Hui [email protected]
Institute for Strings, Cosmology and Astroparticle Physics, Department of Physics, Columbia University, New York, NY 10027, U.S.A.

Enrique Gaztanaga
Institut de Ciències de l'Espai (IEEC-CSIC), F. Ciències, C5 2-par, 08193 Bellaterra (Barcelona), Spain
(Dated: March 6, 2022)
It has been shown recently that relativistic distortions generate a dipolar modulation in the two-point correlation function of galaxies. To measure this relativistic dipole it is necessary to cross-correlate different populations of galaxies with for example different luminosities or colours. In this paper, we construct an optimal estimator to measure the dipole with multiple populations. We show that this estimator increases the signal-to-noise of the dipole by up to 35 percent. Using 6 populations of galaxies, in a survey with halos and number densities similar to those of the millennium simulation, we forecast a cumulative signal-to-noise of 4.4. For the main galaxy sample of SDSS at low redshift z ≤ 0.2 our optimal estimator predicts a cumulative signal-to-noise of 2.4. Finally we forecast a cumulative signal-to-noise of 7.4 in the upcoming DESI survey. These forecasts indicate that with the appropriate choice of estimator the relativistic dipole should be detectable in current and future surveys.
I. INTRODUCTION
The two-point correlation function of galaxies, and its Fourier transform the power spectrum, have been measured with increasing precision over the last decades. These observables contain valuable information about the global properties of our universe: its initial conditions, the law of gravity governing its evolution and the amount and nature of dark energy. To a good approximation, the observed two-point correlation function of galaxies follows the twopoint correlation function of the underlying dark matter. The galaxy correlation function shares therefore the same statistical properties as the dark matter correlation function. In particular this implies that the galaxy correlation function is isotropic and that its shape depends only on the galaxies' separation. The relation between the galaxy and the dark matter correlation function is simply given via the square of the bias, which in the linear regime is usually assumed to be scale independent. In this context, measuring the galaxy correlation function allows to directly characterise the distribution of dark matter in our universe.
Since the 80's we know however that this description of the two-point function of galaxies is too simplistic since it does not account for the fact that our observations are made in redshift-space [1][2][3]. In redshift-space, the peculiar velocities of galaxies distort the two-point correlation function. As a consequence the correlation function is not isotropic anymore: it depends on the orientation of the pair of galaxies with respect to the observer's line-of-sight. One can show that in the distant-observer approximation, redshift distortions generate a quadrupole and an hexadecapole. Measurements of these multipoles have been performed in various galaxy surveys (see e.g. [4][5][6][7][8][9][10]). These measurements provide additional information on our universe since they are sensitive to the galaxies' peculiar velocities. In particular, combined measurements of the monopole and of the quadrupole allow to measure separately the bias and the growth rate of fluctuations 1 . The fact that we observe in redshift-space is therefore not a complication, but rather an interesting source of information.
In the past few years, it has been shown that redshift-space distortions are just one of the many distortions that affect the observed distribution of galaxies. The fractional over-density of galaxies ∆ is distorted by gravitational lensing [11][12][13][14][15][16][17][18][19], Doppler effects, gravitational redshift, Sachs-Wolfe effects, Shapiro time-delay and integrated Sachs-Wolfe [20][21][22][23]. These terms distort the coordinate system in which our observations are performed (i.e. redshift and incoming photon's direction) and they generate consequently additional fluctuations to ∆. These effects (apart from lensing and Doppler effects) have been called relativistic distortions, since they are suppressed by powers of H/k with respect to the standard contributions -namely density and redshift-space distortions-and they are therefore expected to become relevant at large scales, near the horizon. A natural question arises then: can we use similar techniques as those developed for redshift-space distortions to isolate the relativistic distortions from the standard terms?
In a recent paper [24] (see also [25][26][27]), we showed that some of the relativistic distortions have a remarkable property: they break the symmetry of the correlation function under the exchange of the two galaxies in the pair (this property has been identified previously in Fourier space, where the relativistic distortions generate an imaginary part of the power spectrum [28,29]). Obviously, to observe such a breaking of symmetry we need more than one population of galaxies. In [24], we showed that the cross-correlation function between a bright and a faint population of galaxies contains, in addition to the standard monopole, quadrupole and hexadecapole, a dipole and an octupole directly generated by the relativistic distortions. This suggests that we can isolate the contributions from relativistic distortions by fitting for a dipole and an octupole in the two-point function. Since these new multipoles are orthogonal to the monopole, quadrupole and hexadecapole, this method would allow us to get rid of the dominant standard terms and to target specifically the new relativistic terms. In essence this is very similar to the method used in [30,31] to measure gravitational redshift in clusters and to separate it from the dominant Doppler redshift.
In this paper, we calculate the detectability of the relativistic distortions in large-scale structure using this method. We construct an optimal estimator to isolate the dipole. We then calculate the signal-to-noise for the dipole in a multipopulation case and we show that our optimal estimator allows us to improve the measurement of the relativistic distortions by up to 35 percent. In a survey with halos and number densities similar to those of the millennium simulation we expect a cumulative signal-to-noise of 4.4. For the main galaxy sample of SDSS (DR5) at low redshift we can reach a cumulative signal-to-noise of 2.4. Finally we forecast a cumulative signal-to-noise of 7.4 in the upcoming Dark Energy Spectroscopic Instrument (DESI) survey. This demonstrates the feasibility of our method to current and future surveys. The advantage of this method is that it does not require to measure the correlation function at extremely large scales, of the size of the horizon. By fitting for a dipole we can indeed isolate the relativistic distortions from the standard terms at scales accessible by current surveys.
The remainder of the paper is organised as follows: in section II we construct a general estimator combining different populations of galaxies, which isolates the anti-symmetric part of the correlation function. In section III we derive the variance of this estimator. In section IV we find the kernel which minimises the variance. In section V we calculate the signal-to-noise of the dipole, and in section VI we apply our method to forecasts for the millennium simulation, the main galaxy sample of SDSS and the DESI survey.
II. THE TWO-POINT CORRELATION FUNCTION FOR MULTIPLE POPULATIONS OF GALAXIES
To measure anti-symmetric terms in the two-point correlation function we need more than one population of galaxies. If all galaxies are the same, we have indeed by construction that $\langle\Delta(\mathbf{x}, z)\Delta(\mathbf{x}', z')\rangle$ is symmetric under the exchange of the two galaxies in the pair. If however we split the galaxies into multiple populations with different characteristics, e.g. different luminosities, then the cross-correlation between two populations with respective luminosities $L$ and $L'$ can have an anti-symmetric part
\[
\langle\Delta_{L}(\mathbf{x}, z)\,\Delta_{L'}(\mathbf{x}', z')\rangle \neq \langle\Delta_{L}(\mathbf{x}', z')\,\Delta_{L'}(\mathbf{x}, z)\rangle\,. \qquad (1)
\]
The goal of this paper is to construct an estimator which isolates the anti-symmetric part of the correlation function. We start by splitting the galaxies depending on their luminosity. For each pixel $i$ in the sky, we then count how many galaxies we have at each luminosity (or in each bin of luminosity). Let us denote this number by $n_{L_i}(\mathbf{x}_i)$, where $L_i$ is the luminosity of the population under consideration and $\mathbf{x}_i$ is the position of the pixel 2 . The overdensity of galaxies in pixel $i$ with luminosity $L_i$ is then
\[
\delta n_{L_i}(\mathbf{x}_i) = n_{L_i}(\mathbf{x}_i) - dn_{L_i}\,, \qquad (2)
\]
where $dn_{L_i}$ denotes the mean number of galaxies per pixel, with luminosity $L_i$. Note that $dn_{L_i}$ depends on the size of the pixels. The most general estimator we can construct, combining all populations of galaxies, is then 3
\[
\xi = \sum_{ij}\sum_{L_iL_j} w_{\mathbf{x}_i\mathbf{x}_j L_iL_j}\, \delta n_{L_i}(\mathbf{x}_i)\,\delta n_{L_j}(\mathbf{x}_j)\,, \qquad (3)
\]
where the kernel $w_{\mathbf{x}_i\mathbf{x}_j L_iL_j}$ depends on the position of the pixels $i$ and $j$ and on the luminosities $L_i$ and $L_j$. This kernel must be symmetric under the exchange of $i$ and $j$, which just represents a relabelling of the pixels,
\[
w_{\mathbf{x}_i\mathbf{x}_j L_iL_j} = w_{\mathbf{x}_j\mathbf{x}_i L_jL_i}\,. \qquad (4)
\]
We want to construct a kernel which isolates the anti-symmetric part of the correlation function. The general expression for the overdensity of galaxies reads
\[
\delta n_{L_i}(\mathbf{x}_i) = dn_{L_i}\,\Delta_{L_i}(\mathbf{x}_i) = dn_{L_i}\bigg[ b_{L_i}\,\delta_i - \frac{1}{\mathcal H}\,\partial_r(\mathbf{V}\cdot\mathbf{n})_i + (5 s_{L_i}-2)\int_0^{r_i} dr\,\frac{r_i-r}{2\,r\,r_i}\,\Delta_\Omega(\Phi+\Psi) + \Delta^{\rm rel}_{L_i}(\mathbf{x}_i)\bigg]\,, \qquad (5)
\]
where $\Phi$ and $\Psi$ are the two metric potentials 4 , $\delta$ is the density contrast, $\mathbf{V}$ is the peculiar velocity, $\mathbf{n}$ is the observed direction, $\mathcal H$ is the conformal Hubble parameter, $r$ is the conformal distance to the source and $\Delta_\Omega$ denotes the angular Laplacian. The index $i$ denotes a quantity evaluated in pixel $i$, and $b_{L_i}$ and $s_{L_i}$ denote respectively the bias and the slope of the luminosity function of the galaxy population with luminosity $L_i$. The first term in eq. (5) is the density contribution and the second term is the contribution from redshift-space distortions. We call the sum of these two contributions the standard terms
\[
\delta n^{\rm stand}_{L_i}(\mathbf{x}_i) = dn_{L_i}\Big[ b_{L_i}\,\delta_i - \frac{1}{\mathcal H}\,\partial_r(\mathbf{V}\cdot\mathbf{n})_i \Big]\,. \qquad (6)
\]
The third term in eq. (5) denotes the so-called lensing magnification bias, which depends on the slope of the luminosity function $s_{L_i}$, and the last term encodes all the relativistic distortions. The relativistic distortions contain contributions with one gradient of the gravitational potentials and contributions directly proportional to the potentials. As shown in [24], the terms with one gradient of the potentials are those that generate an anti-symmetry in the correlation function 5 . They read
\[
\delta n^{\rm rel}_{L_i}(\mathbf{x}_i) = dn_{L_i}\bigg[ \frac{1}{\mathcal H}\,\partial_r\Psi_i + \frac{1}{\mathcal H}\,(\dot{\mathbf{V}}\cdot\mathbf{n})_i + \bigg(1 - \frac{\dot{\mathcal H}}{\mathcal H^2} - \frac{2}{r\mathcal H} + 5 s_{L_i}\Big(1-\frac{1}{r\mathcal H}\Big)\bigg)(\mathbf{V}\cdot\mathbf{n})_i \bigg]\,, \qquad (7)
\]
where the first term is the contribution from gravitational redshift and the other terms are Doppler contributions.
Here a dot denotes a derivative with respect to conformal time $\eta$. In theories of gravity where the Euler equation is valid, we can rewrite the gradient of the potential as a function of the velocity and we obtain
\[
\delta n^{\rm rel}_{L_i}(\mathbf{x}_i) = -\,dn_{L_i}\left(\frac{\dot{\mathcal H}}{\mathcal H^2} + \frac{2}{r\mathcal H}\right)(\mathbf{V}\cdot\mathbf{n})_i\,. \qquad (8)
\]
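For the reader's convenience, the step from (7) to (8) can be made explicit: using the conformal-time Euler equation for the matter velocity,
\[
\dot{\mathbf{V}} + \mathcal H\,\mathbf{V} + \boldsymbol{\nabla}\Psi = 0
\quad\Rightarrow\quad
\frac{1}{\mathcal H}\,\partial_r\Psi_i + \frac{1}{\mathcal H}\,(\dot{\mathbf{V}}\cdot\mathbf{n})_i = -(\mathbf{V}\cdot\mathbf{n})_i\,,
\]
so that, for $s_{L_i}=0$, the square bracket in (7) collapses to $-\big(\dot{\mathcal H}/\mathcal H^2 + 2/(r\mathcal H)\big)(\mathbf{V}\cdot\mathbf{n})_i$.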
Here and in the following we neglect for simplicity the contribution proportional to the slope of the luminosity function in eq. (7), i.e. we set $s_{L_i} = 0$. We also neglect the anti-symmetric contributions generated by evolution effects and wide-angle effects, assuming that all the anti-symmetric signal is due to the relativistic distortions in eq. (8). The effect of evolution has been calculated in [24] and shown to be small: less than 5% of the relativistic signal at redshift z = 0.25 and less than 9% at z = 0.5 (see blue dashed line of figure 11). The wide-angle effects have been calculated in a companion paper [32], where we show that with the appropriate choice of kernel they are of the same order of magnitude as the relativistic effects (albeit slightly smaller) and with an opposite sign, see figure 3 of [32] left panel. In the following we nevertheless disregard the wide-angle effects. We are indeed primarily interested in finding the kernel that will optimise the measurement of the relativistic dipole. As discussed in [24], once the dipole has been measured we can in principle easily separate the wide-angle contribution from the relativistic contribution by using measurements of the quadrupole. Another strategy would be to find the kernel that optimises the measurement of the relativistic dipole while minimising the contribution from the wide-angle dipole. We defer this more involved calculation to a future paper.

3 Note that in the estimator we do not divide $\delta n_{L_i}(\mathbf{x}_i)$ by $dn_{L_i}$, as is usually done for auto-correlations. The reason is that $dn_{L_i}$ depends on luminosity and dividing by this factor is similar to applying a weighting to the different populations. Such a factor should therefore be included in the general kernel $w_{\mathbf{x}_i\mathbf{x}_j L_iL_j}$.
4 We use here the following convention for the metric: $ds^2 = a^2\big[-(1+2\Psi)d\eta^2 + (1-2\Phi)\delta_{ij}dx^i dx^j\big]$, where $a$ is the scale factor and $\eta$ denotes conformal time.
5 Note that the lensing magnification bias also generates an anti-symmetry in the two-point function. However, as shown in [24] this contribution is always significantly smaller than the terms in eq. (7) and it can safely be neglected.

As shown in [24], in the case of two populations of galaxies (a bright and a faint population) the contribution (8) generates a dipole in the correlation function
\[
\langle \delta n^{\rm stand}_{\rm B}(\mathbf{x}_i)\,\delta n^{\rm rel}_{\rm F}(\mathbf{x}_j)\rangle + \langle \delta n^{\rm rel}_{\rm B}(\mathbf{x}_i)\,\delta n^{\rm stand}_{\rm F}(\mathbf{x}_j)\rangle
= dn_{\rm B}\, dn_{\rm F}\,\big(b_{\rm B}-b_{\rm F}\big)\left(\frac{\dot{\mathcal H}}{\mathcal H^2}+\frac{2}{r\mathcal H}\right)\frac{\mathcal H}{H_0}\,\frac{f}{2\pi^2}\int dk\,k H_0\, P(k,\bar z)\, j_1(k d_{ij})\cdot\cos\beta_{ij}\,, \qquad (9)
\]
where $P(k,\bar z)$ is the density power spectrum at the mean redshift of the survey:
\[
\langle\delta(\mathbf{k},\bar z)\,\delta(\mathbf{k}',\bar z)\rangle = (2\pi)^3 P(k,\bar z)\,\delta_D(\mathbf{k}+\mathbf{k}')\,, \qquad (10)
\]
$f$ is the growth rate, $d_{ij}$ is the pair separation and $\beta_{ij}$ denotes the orientation of the pair with respect to the line-of-sight 6 , as depicted in figure 1.
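As an illustration of how the amplitude of this dipole can be evaluated numerically, the short Python sketch below performs the k-integral in eq. (9) for a tabulated power spectrum. The arrays k and Pk are assumed to come from a Boltzmann code such as CAMB or CLASS (with k in h/Mpc and P in (Mpc/h)^3), and the pixel densities dn_B dn_F are left out; this is only a sketch, not part of the analysis pipeline used in the paper.

```python
import numpy as np
from scipy.special import spherical_jn

H0 = 1.0 / 2997.9   # H_0 in h/Mpc units (c = 1), consistent with k in h/Mpc

def dipole_amplitude(d, k, Pk, f, cH_over_H0, prefactor):
    """Evaluate prefactor * (H/H0) * f/(2 pi^2) * \int dk k H0 P(k, zbar) j_1(k d).

    'prefactor' stands for (b_B - b_F)(Hdot/H^2 + 2/(rH)) at the mean redshift,
    and 'cH_over_H0' for the ratio of the conformal Hubble rate to H_0."""
    integrand = k * H0 * Pk * spherical_jn(1, k * d)
    # simple trapezoidal rule over the tabulated k grid
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(k))
    return prefactor * cH_over_H0 * f / (2.0 * np.pi ** 2) * integral
```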
Eq. (9) is anti-symmetric under the exchange of the bright and faint population: $dn_{\rm B}\, dn_{\rm F}\,(b_{\rm B}-b_{\rm F}) = -\,dn_{\rm F}\, dn_{\rm B}\,(b_{\rm F}-b_{\rm B})$. It is also anti-symmetric under the exchange of the relative position of the pixels, i.e. when $\beta_{ij}$ goes into $\beta_{ij}+\pi$. To isolate this term, we therefore need a kernel $w$ which is anti-symmetric under the exchange of $L_i$ and $L_j$,
\[
w_{\mathbf{x}_i\mathbf{x}_j L_iL_j} = -\,w_{\mathbf{x}_i\mathbf{x}_j L_jL_i}\,, \qquad (11)
\]
as well as anti-symmetric under the exchange of $\mathbf{x}_i$ and $\mathbf{x}_j$,
\[
w_{\mathbf{x}_i\mathbf{x}_j L_iL_j} = -\,w_{\mathbf{x}_j\mathbf{x}_i L_iL_j}\,. \qquad (12)
\]
These properties of the kernel ensure that the standard density and redshift-space distortions do not contribute to the mean of the estimator. Inserting (6) into (3) we have indeed
\[
\langle \xi^{\rm stand}\rangle = \sum_{ij}\sum_{L_iL_j} w_{\mathbf{x}_i\mathbf{x}_j L_iL_j}\, dn_{L_i} dn_{L_j}\Big[ b_{L_i}b_{L_j}\langle\delta_i\,\delta_j\rangle - \frac{1}{\mathcal H}\,b_{L_i}\langle\delta_i\,\partial_r(\mathbf{V}\cdot\mathbf{n})_j\rangle - \frac{1}{\mathcal H}\,b_{L_j}\langle\delta_j\,\partial_r(\mathbf{V}\cdot\mathbf{n})_i\rangle + \frac{1}{\mathcal H^2}\langle\partial_r(\mathbf{V}\cdot\mathbf{n})_i\,\partial_r(\mathbf{V}\cdot\mathbf{n})_j\rangle\Big] = 0\,, \qquad (13)
\]
since $w_{\mathbf{x}_i\mathbf{x}_j L_iL_j}$ is anti-symmetric under the exchange of $L_i$ and $L_j$ whereas the bracket in (13) is symmetric (to show this we use that $\langle\delta_j\,\partial_r(\mathbf{V}\cdot\mathbf{n})_i\rangle = \langle\delta_i\,\partial_r(\mathbf{V}\cdot\mathbf{n})_j\rangle$). This argument also applies to all relativistic distortions in $\Delta^{\rm rel}_{L_i}$ that have no gradients of $\Phi$ and $\Psi$. On the other hand, the terms in eq. (8), which have only one gradient of the potential, survive in the mean
\[
\langle\xi\rangle = \langle\xi^{\rm rel}\rangle = -\left(\frac{\dot{\mathcal H}}{\mathcal H^2}+\frac{2}{r\mathcal H}\right)\sum_{ij}\sum_{L_iL_j} w_{\mathbf{x}_i\mathbf{x}_j L_iL_j}\, dn_{L_i} dn_{L_j}\Big[ b_{L_i}\langle\delta_i(\mathbf{V}\cdot\mathbf{n})_j\rangle + b_{L_j}\langle\delta_j(\mathbf{V}\cdot\mathbf{n})_i\rangle\Big]
= -\left(\frac{\dot{\mathcal H}}{\mathcal H^2}+\frac{2}{r\mathcal H}\right)\sum_{ij}\sum_{L_iL_j} w_{\mathbf{x}_i\mathbf{x}_j L_iL_j}\, dn_{L_i} dn_{L_j}\,\big(b_{L_i}-b_{L_j}\big)\langle\delta_i(\mathbf{V}\cdot\mathbf{n})_j\rangle\,, \qquad (14)
\]
where we have used that
\[
\langle\delta_i(\mathbf{V}\cdot\mathbf{n})_j\rangle = -\frac{\mathcal H}{H_0}\,\frac{f}{2\pi^2}\int dk\,k H_0\, P(k,\bar z)\, j_1(k d_{ij})\cdot\cos\beta_{ij} \qquad (15)
\]
\[
= \frac{\mathcal H}{H_0}\,\frac{f}{2\pi^2}\int dk\,k H_0\, P(k,\bar z)\, j_1(k d_{ji})\cos\beta_{ji} = -\langle\delta_j(\mathbf{V}\cdot\mathbf{n})_i\rangle\,.
\]
Since dn Li dn Lj b Li − b Lj is clearly anti-symmetric under the exchange of L i and L j , eq. (14) does not vanish under the summation over L i and L j . The generic kernel defined by eqs. (11) and (12) therefore allows us to isolate the relativistic terms from the dominant density and redshift-space distortions. A simple example for the kernel w is
$$w_{\mathbf{x}_i\mathbf{x}_jL_iL_j} = \frac{3}{8\pi}\Big[\theta(L_i-L_j) - \theta(L_j-L_i)\Big]\cos\beta_{ij}\,\delta_K(d_{ij}-d)\,, \quad (16)$$
where θ is the Heaviside function. In the case of two populations of galaxies (bright and faint), inserting kernel (16) into eq. (14) we obtain
$$\langle\xi\rangle = c_N\, dn_{\rm B}\, dn_{\rm F}\,(b_{\rm B}-b_{\rm F})\left(\frac{\dot H}{H^2}+\frac{2}{rH}\right)\frac{H}{H_0}\,\frac{f}{2\pi^2}\int dk\, kH_0\, P(k,\bar z)\, j_1(kd)\,, \quad (17)$$
where c N is a normalisation factor. We now want to find the kernel which maximises the signal-to-noise of the dipole in a generic survey with multiple populations of galaxies.
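To make the size of this signal concrete, the following Python sketch (not part of the original analysis) evaluates the radial integral $\frac{1}{2\pi^2}\int dk\,kH_0\,P(k,\bar z)\,j_1(kd)$ that sets the scale of the dipole in eq. (17). The power spectrum used here is a crude placeholder and the Hubble-rate value is an assumption; in practice one would tabulate $P(k,\bar z)$ with a Boltzmann code.

```python
# Illustrative sketch (not from the paper): numerically evaluate the radial integral
# entering the dipole of eq. (17),
#   I(d) = 1/(2 pi^2) * \int dk k H0 P(k, zbar) j_1(k d),
# assuming a tabulated matter power spectrum (k in h/Mpc, P in (Mpc/h)^3).
import numpy as np
from scipy.special import spherical_jn

H0 = 1.0 / 2997.9  # Hubble rate today in h/Mpc units (c = 1); an assumption

def dipole_radial_integral(d, k, Pk):
    """I(d) of eqs. (17)/(39) by trapezoidal integration on a k-grid."""
    integrand = k * H0 * Pk * spherical_jn(1, k * d)
    return np.trapz(integrand, k) / (2.0 * np.pi**2)

# Placeholder power spectrum; replace with CAMB/CLASS output in practice.
k = np.logspace(-4, 1, 2000)                       # h/Mpc
Pk = 2.0e4 * (k / 0.02) / (1.0 + (k / 0.02)**3)    # crude BBKS-like shape

for d in (20.0, 50.0, 100.0):                      # separations in Mpc/h
    print(d, dipole_radial_integral(d, k, Pk))
```

The same routine, multiplied by the prefactor of eq. (17), gives the expected dipole amplitude as a function of separation.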
III. VARIANCE
We start by calculating the variance of our estimator (3):
var(ξ) = ξ 2 − ξ 2 = ijLiLj abLaL b w xixj LiLj w xax b LaL b × δn Li (x i )δn Lj (x j )δn La (x a )δn L b (x b ) − δn Li (x i )δn Lj (x j ) δn La (x a )δn L b (x b ) =2 ijLiLj abLaL b w xixj LiLj w xax b LaL b δn Li (x i )δn La (x a ) δn Lj (x j )δn L b (x b ) ,(18)
where we have used Wick's theorem for Gaussian fields to expand the four-point correlation function in products of two-point functions, and we have used the symmetry property of the kernel, eq. (4).
A. Poisson noise
The first contribution to eq. (18) comes from Poisson noise. We have
δn Li (x i )δn La (x a ) = n Li (x i ) − dn Li n La (x a ) − dn La = n Li (x i )n La (x a ) − dn Li dn La .(19)
We have three different cases:
• $i \neq a$: $\langle n_{L_i}(\mathbf{x}_i)\,n_{L_a}(\mathbf{x}_a)\rangle = \langle n_{L_i}(\mathbf{x}_i)\rangle\langle n_{L_a}(\mathbf{x}_a)\rangle = dn_{L_i}\,dn_{L_a}$, so that $\langle\delta n_{L_i}(\mathbf{x}_i)\,\delta n_{L_a}(\mathbf{x}_a)\rangle = 0$.
• $i = a$ and $L_i = L_a$: $\langle n^2_{L_i}(\mathbf{x}_i)\rangle = dn_{L_i} + dn^2_{L_i}$, so that $\langle\delta n^2_{L_i}(\mathbf{x}_i)\rangle = dn_{L_i}$.
• $i = a$ and $L_a = L'_i \neq L_i$, i.e. in the same pixel $i$ we look at two different populations. The Poisson fluctuations of these different populations are uncorrelated, so that
$$\langle n_{L_i}(\mathbf{x}_i)\,n_{L'_i}(\mathbf{x}_i)\rangle = \langle n_{L_i}(\mathbf{x}_i)\rangle\langle n_{L'_i}(\mathbf{x}_i)\rangle = dn_{L_i}\,dn_{L'_i} \quad\text{and}\quad \langle\delta n_{L_i}(\mathbf{x}_i)\,\delta n_{L'_i}(\mathbf{x}_i)\rangle = 0\,. \quad (20)$$
The Poisson noise can therefore generally be written as
δn Li (x i )δn La (x a ) = dn Li δ ia δ LiLa .(21)
Inserting this in the variance we obtain
var P (ξ) = 2 ijLiLj (w xixj LiLj ) 2 dn Li dn Lj .(22)
Even if the kernel is anti-symmetric in L i ↔ L j , the Poisson term does not vanish because according to eq. (21) it is non-zero only when i = a and L i = L a . As a consequence, only the square of the kernel (which is symmetric in L i ↔ L j ) enters into eq. (22). Note that to derive (22) we have assumed that galaxies follow Poisson statistics. This is a simplifying assumption. Simulations have indeed shown that exclusion and non-linear clustering effects generate non-diagonal shot-noise contributions through correlations between galaxies of different luminosities and correlations between close pixels [33].
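As a quick numerical illustration of eq. (21) (ours, not from the paper), the sketch below draws Poisson counts for two hypothetical populations on a few pixels and checks that the shot-noise correlator equals $dn_{L_i}$ for the same pixel and population, and vanishes across pixels or populations. The mean counts are made up.

```python
# Monte Carlo check of eq. (21): for Poisson-sampled counts,
# <delta n_{L_i}(x_i) delta n_{L_a}(x_a)> = dn_{L_i} delta_{ia} delta_{L_i L_a}.
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_real = 4, 200000
dn = {"B": 3.0, "F": 7.0}                # assumed mean counts per pixel, two populations

counts = {L: rng.poisson(dn[L], size=(n_real, n_pix)) for L in dn}
delta = {L: counts[L] - dn[L] for L in dn}

print(np.mean(delta["B"][:, 0] * delta["B"][:, 0]))   # same pixel, same population: ~ 3.0
print(np.mean(delta["B"][:, 0] * delta["B"][:, 1]))   # different pixels: ~ 0
print(np.mean(delta["B"][:, 0] * delta["F"][:, 0]))   # different populations: ~ 0
```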
B. Cosmic variance
The second contribution to eq. (18) comes from the cosmic variance of the density and redshift distortions contributions (6) (which dominate over the cosmic variance of the relativistic terms)
var C (ξ) =2 ijab LiLj LaL b dn Li dn Lj dn La dn L b w xixj LiLj w xax b LaL b (23) × b Li b La δ i δ a − 1 H b Li δ i ∂ r (V · n) a − 1 H b La δ a ∂ r (V · n) i + 1 H 2 ∂ r (V · n) i ∂ r (V · n) a × b Lj b L b δ j δ b − 1 H b Lj δ j ∂ r (V · n) b − 1 H b L b δ b ∂ r (V · n) j + 1 H 2 ∂ r (V · n) j ∂ r (V · n) b .
Many of the products in the brackets vanish since they are symmetric under the exchange L i ↔ L j or L a ↔ L b , whereas the kernels w xixj LiLj and w xax b LaL b are anti-symmetric. The remaining terms read
var C (ξ) = ijab LiLj LaL b dn Li dn Lj dn La dn L b w xixj LiLj w xax b LaL b (b Li − b Lj )(b La − b L b ) (24) × 1 H 2 δ i δ a ∂ r (V · n) j ∂ r (V · n) b − δ i ∂ r (V · n) a δ j ∂ r (V · n) b .
As shown in appendix B 2, going to the continuous limit and using a change of variables we can show that the second line of eq. (24) is proportional to
$$\frac{1}{45} + \frac{2}{63}P_2(\mathbf{n}\cdot\hat{\mathbf{k}}) + \frac{2}{35}P_4(\mathbf{n}\cdot\hat{\mathbf{k}}) - \frac{1}{9}P_2^2(\mathbf{n}\cdot\hat{\mathbf{k}}) = 0\,, \quad (25)$$
where $P_\ell$ denotes the Legendre polynomial of degree $\ell$. This shows that the properties of the kernel allow us to get rid of the standard dominant terms (density and redshift-space distortions) not only in the signal but also in the variance. This cancellation greatly enhances the detectability of the relativistic dipole. Note that the cancellation of the cosmic variance for multiple populations of galaxies has already been discussed in detail in the case of the power spectrum [34,35].
C. Mixed term
The variance (18) also contains mixed contributions, where the cosmic variance contributes to one of the correlations and the Poisson noise to the other. These terms read
var CP (ξ) =4 ija LiLj La w xixj LiLj w xaxj LaLj dn Li dn Lj dn La (26) × b Li b La δ i δ a − 1 H b Li δ i ∂ r (V · n) a − 1 H b La δ a ∂ r (V · n) i + 1 H 2 ∂ r (V · n) i ∂ r (V · n) a .
Note that if galaxies do not follow Poisson statistics, this contribution will also be modified. The total variance is then simply the sum of eqs. (22) and (26).
IV. OPTIMISING THE KERNEL
We want to find the kernel $w_{\mathbf{x}_i\mathbf{x}_jL_iL_j}$ which minimises the variance under the constraint that our estimator is unbiased, i.e. $\langle\xi\rangle = \xi^{\rm true}$. This kernel must be symmetric under the simultaneous exchange of $(\mathbf{x}_i, L_i)$ and $(\mathbf{x}_j, L_j)$ (see eq. (4)), since we have used this property to derive the variance. We construct the Lagrangian
L = var(ξ) + λ 0 ξ − ξ true + ijLiLj λ ijLiLj w xixj LiLj − w xj xiLj Li ,(27)
where λ 0 and λ ijLiLj are Lagrange multipliers.
Minimising (27) with respect to $\lambda_0$ gives $\langle\xi\rangle = \xi^{\rm true}$. Minimising with respect to $\lambda_{cdL_c^*L_d^*}$, where $L_c^*$ and $L_d^*$ denote two specific values of the luminosity in pixels $c$ and $d$, gives $w_{\mathbf{x}_c\mathbf{x}_dL_c^*L_d^*} = w_{\mathbf{x}_d\mathbf{x}_cL_d^*L_c^*}$. Minimising with respect to $w_{\mathbf{x}_c\mathbf{x}_dL_c^*L_d^*}$ gives
$$\frac{\partial\,{\rm var}(\xi)}{\partial w_{\mathbf{x}_c\mathbf{x}_dL_c^*L_d^*}} + \lambda_0\frac{\partial\langle\xi\rangle}{\partial w_{\mathbf{x}_c\mathbf{x}_dL_c^*L_d^*}} + \lambda_{cdL_c^*L_d^*} - \lambda_{dcL_d^*L_c^*} = 0\,, \quad (28)$$
and minimising with respect to $w_{\mathbf{x}_d\mathbf{x}_cL_d^*L_c^*}$ gives
$$\frac{\partial\,{\rm var}(\xi)}{\partial w_{\mathbf{x}_d\mathbf{x}_cL_d^*L_c^*}} + \lambda_0\frac{\partial\langle\xi\rangle}{\partial w_{\mathbf{x}_d\mathbf{x}_cL_d^*L_c^*}} + \lambda_{dcL_d^*L_c^*} - \lambda_{cdL_c^*L_d^*} = 0\,. \quad (29)$$
Taking the sum of eqs. (28) and (29) we obtain
$$\frac{\partial\,{\rm var}(\xi)}{\partial w_{\mathbf{x}_c\mathbf{x}_dL_c^*L_d^*}} + \frac{\partial\,{\rm var}(\xi)}{\partial w_{\mathbf{x}_d\mathbf{x}_cL_d^*L_c^*}} + \lambda_0\left[\frac{\partial\langle\xi\rangle}{\partial w_{\mathbf{x}_c\mathbf{x}_dL_c^*L_d^*}} + \frac{\partial\langle\xi\rangle}{\partial w_{\mathbf{x}_d\mathbf{x}_cL_d^*L_c^*}}\right] = 0\,. \quad (30)$$
Inserting the expressions for the mean and the variance into eq. (30) and dividing by 8dn L * c dn L * d , we can rewrite eq. (30) in matrix notation as
w + wN + N T w = B ,(31)
where
$$w_{ij} \equiv w_{\mathbf{x}_i\mathbf{x}_jL_iL_j}\,,\qquad B_{ij} \equiv \frac{\lambda_0}{4}\left(\frac{\dot H}{H^2}+\frac{2}{rH}\right)(b_{L_i}-b_{L_j})\,\langle\delta_i(\mathbf{V}\cdot\mathbf{n})_j\rangle\,,\quad\text{and} \quad (32)$$
$$N_{ij} \equiv dn_{L_i}\left[b_{L_i}b_{L_j}\langle\delta_i\delta_j\rangle - \frac{1}{H}b_{L_i}\langle\delta_i\,\partial_r(\mathbf{V}\cdot\mathbf{n})_j\rangle - \frac{1}{H}b_{L_j}\langle\delta_j\,\partial_r(\mathbf{V}\cdot\mathbf{n})_i\rangle + \frac{1}{H^2}\langle\partial_r(\mathbf{V}\cdot\mathbf{n})_i\,\partial_r(\mathbf{V}\cdot\mathbf{n})_j\rangle\right]. \quad (33)$$
To solve eq. (31), we first add the term N T wN . Using the same steps as in appendix B 2 we can indeed show that this term is zero. With this term eq. (31) becomes
(1 1 + N T )w(1 1 + N ) = B ,(34)
and the solution is
w = (1 1 + N T ) −1 B(1 1 + N ) −1 .(35)
Eq. (35) is the kernel which maximises the signal-to-noise for the dipole. This kernel is relatively sophisticated since for each pair of pixels it involves a sum over all other pairs of pixels in the survey. Its role is to maximise the signal, while simultaneously minimising the noise due to both Poisson sampling and the product of Poisson sampling with cosmic variance. In [36], it has been shown that the measurement of the standard power spectrum can be improved by using bias-dependent weights. In addition [37] showed that Poisson noise can be significantly reduced by using an appropriate weighting. Our optimal kernel incorporates these effects in configuration space, for the measurement of the relativistic dipole. Kernel (35) can be simplified in the case where the Poisson noise dominates over the mixed term (this is usually the case at small separations, see appendix A). In this case, 1 1 dominates over N and we can expand eq. (35) in powers of N around 1 1. We obtain at lowest order
$$w_{\mathbf{x}_i\mathbf{x}_jL_iL_j} \simeq \frac{\lambda_0}{4}\left(\frac{\dot H}{H^2}+\frac{2}{rH}\right)(b_{L_i}-b_{L_j})\,\langle\delta_i(\mathbf{V}\cdot\mathbf{n})_j\rangle\,. \quad (36)$$
This kernel is quite intuitive: it shows that the measurement of the dipole can be improved by giving more weight to the cross-correlations between galaxies with very different biases, for which the signal is larger. It is however not optimal at large separation, where the mixed term dominates over the Poisson noise. In the following, we will use the simplified kernel (36) to explicitly calculate the signal-to-noise. As we will see, using this kernel already improves the signal-to-noise significantly. We defer to a forthcoming paper the study of the whole kernel (35), which will lead to further improvement, especially at large scales.
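The structure of eqs. (31)-(36) can be illustrated with a small linear-algebra sketch (ours, with made-up matrices): the optimal kernel is obtained by sandwiching the anti-symmetric source $B$ between the inverses of $1+N^T$ and $1+N$, and it reduces to $B$ itself when $N$ is small.

```python
# Toy sketch (illustrative only) of eqs. (31)-(36): build w = (1 + N^T)^{-1} B (1 + N)^{-1}
# of eq. (35) and compare with the lowest-order (Poisson-dominated) approximation w ~ B.
import numpy as np

rng = np.random.default_rng(1)
npix = 6
B = rng.normal(size=(npix, npix)); B = B - B.T        # anti-symmetric source, cf. eq. (32)
N = 0.05 * rng.normal(size=(npix, npix))              # "small" mixed-term matrix, cf. eq. (33)

I = np.eye(npix)
w = np.linalg.solve(I + N.T, B) @ np.linalg.inv(I + N)    # eq. (35)

# w solves (1 + N^T) w (1 + N) = B exactly; in the paper the extra N^T w N term vanishes
# for the physical kernel, which reduces this to eq. (31).
print(np.max(np.abs((I + N.T) @ w @ (I + N) - B)))    # ~ machine precision
print(np.max(np.abs(w - B)))                          # small when N is small (eq. 36 regime)
```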
V. SIGNAL-TO-NOISE
We calculate the signal-to-noise for the dipole, using the kernel in eq. (16) (kernel 1) and the optimised kernel in eq. (36) (kernel 2). The latter kernel is defined up to a normalisation constant, which we choose similarly to eq. (16). We can write
$$w_{\mathbf{x}_i\mathbf{x}_jL_iL_j} = \frac{3}{8\pi}\,g(d_{ij})\,h(L_i,L_j)\cos\beta_{ij}\,\delta_K(d_{ij}-d)\,, \quad (37)$$
where
$$g(d_{ij}) = \begin{cases} 1 & \text{for kernel 1}\\ \alpha(d_{ij}) & \text{for kernel 2}\end{cases} \qquad\text{and}\qquad h(L_i,L_j) = \begin{cases} \theta(L_i-L_j)-\theta(L_j-L_i) & \text{for kernel 1}\\ b_{L_i}-b_{L_j} & \text{for kernel 2}\end{cases} \quad (38)$$
with
$$\alpha(d_{ij}) = \frac{1}{2\pi^2}\left(\frac{\dot H}{H^2}+\frac{2}{rH}\right)\frac{H}{H_0}\,f\int dk\,kH_0\,P(k,\bar z)\,j_1(kd_{ij})\,. \quad (39)$$
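For concreteness, a schematic implementation of the estimator (3) with the separable kernel (37) could look as follows. This is illustrative only: the counts, biases and pixel size are made up, and the radial weight $g$ is set to 1 for brevity (a kernel-2 analysis would also weight each pair by $\alpha(d_{ij})$).

```python
# Schematic pair-weighted dipole estimator (not the authors' code) for gridded counts
# of a bright (B) and faint (F) population, using h(L_i, L_j) = b_{L_i} - b_{L_j}.
import numpy as np
from itertools import product

ell_p = 8.0                                   # pixel size in Mpc/h (an assumption)
nside = 8
pos = np.array(list(product(range(nside), repeat=3)), dtype=float) * ell_p
n_hat = np.array([0.0, 0.0, 1.0])             # distant-observer line of sight

rng = np.random.default_rng(2)
dn = {"B": 2.0, "F": 5.0}                     # made-up mean counts per pixel
delta_n = {L: rng.poisson(dn[L], nside**3) - dn[L] for L in dn}
bias = {"B": 2.0, "F": 1.2}                   # made-up biases

def xi_dipole(d_target, half_width):
    """Sum over pixel pairs of w * delta_n_{L_i} * delta_n_{L_j} in a separation bin."""
    total = 0.0
    for Li, Lj in (("B", "F"), ("F", "B")):   # h vanishes for equal populations
        h = bias[Li] - bias[Lj]
        for i in range(len(pos)):
            sep = pos - pos[i]
            dist = np.linalg.norm(sep, axis=1)
            sel = (np.abs(dist - d_target) < half_width) & (dist > 0)
            cosb = sep[sel] @ n_hat / dist[sel]
            total += 3/(8*np.pi) * h * np.sum(cosb * delta_n[Li][i] * delta_n[Lj][sel])
    return total

print(xi_dipole(24.0, half_width=ell_p/2))    # noise-only mock: fluctuates around 0
```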
The mean of the estimator is
ξ = 3 8π ijLiLj dn Li dn Lj (b Li − b Lj )g(d ij )h(L i , L j )α(d ij ) cos 2 β ij δ K (d ij − d) .(40)
As shown in appendix B, in the continuous limit this expression reduces to
$$\langle\xi\rangle = \frac{1}{2}N_{\rm tot}\,\ell_p\,d^2\,\bar N\,g(d)\,\alpha(d)\sum_{LL'}\bar n_L\bar n_{L'}\,h(L,L')(b_L-b_{L'})\,, \quad (41)$$
where $\ell_p$ denotes the side length of the cubic pixels, $N_{\rm tot}$ is the total number of galaxies in the survey, and $\bar N$ is the mean number density:
$$\bar N = \frac{1}{\ell_p^3}\sum_L dn_L\,, \quad (42)$$
and $\bar n_L$ is the fractional number of galaxies with luminosity $L$:
$$\bar n_L = \frac{dn_L}{\sum_{L'}dn_{L'}}\,. \quad (43)$$
In the case of two populations of galaxies we obtain for the kernel 1 and 2
$$\langle\xi\rangle_{K1} = N_{\rm tot}\,\ell_p\,d^2\,\bar N\,\alpha(d)\,\bar n_{\rm B}\bar n_{\rm F}\,(b_{\rm B}-b_{\rm F})\,, \quad (44)$$
$$\langle\xi\rangle_{K2} = N_{\rm tot}\,\ell_p\,d^2\,\bar N\,\alpha^2(d)\,\bar n_{\rm B}\bar n_{\rm F}\,(b_{\rm B}-b_{\rm F})^2\,. \quad (45)$$
The Poisson contribution to the variance is given by (see appendix B for more detail)
$${\rm var}_P(\xi) = 2\left(\frac{3}{8\pi}\right)^2\sum_{ijL_iL_j}dn_{L_i}dn_{L_j}\,h^2(L_i,L_j)\,g^2(d_{ij})\cos^2\beta_{ij}\,\delta_K(d_{ij}-d)\,\delta_K(d-d') = \frac{3}{8\pi}N_{\rm tot}\,\ell_p\,d^2\,\bar N\,g^2(d)\sum_{LL'}\bar n_L\bar n_{L'}\,h^2(L,L')\,\delta_D(d-d')\,. \quad (46)$$
The mixed term in the variance is more complicated to calculate since it contains a sum over three pixels. The derivation is presented in appendix B, where we show that in the continuous limit this term can be written as
$${\rm var}_{CP}(\xi) = \frac{9}{8\pi}N_{\rm tot}\,\bar N^2\,\ell_p^2\,d^2\,d'^2\,g(d)\,g(d')\Big[D_0\,\sigma_0(d,d') + D_2\,\sigma_2(d,d') + D_4\,\sigma_4(d,d')\Big]\,. \quad (47)$$
Here
$$\sigma_\ell(d,d') = -\frac{1}{2\pi^2}\int_{-1}^{1}d\mu\,\mu\int_{-1}^{1}d\nu\,\nu\int_0^{2\pi}d\varphi\int dk\,k^2P(k,\bar z)\,j_\ell(ks)\,P_\ell\!\left(\frac{d\mu+d'\nu}{s}\right),\qquad \ell = 0,2,4\,, \quad (48)$$
and
$$s = \sqrt{d^2 + d'^2 + 2dd'\Big[\mu\nu + \sqrt{(1-\mu^2)(1-\nu^2)}\sin\varphi\Big]}\,. \quad (49)$$
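The quantities $\sigma_\ell(d,d')$ are four-dimensional integrals; a brute-force quadrature such as the sketch below (ours, with a placeholder power spectrum) is enough to get their order of magnitude, although a production code would use a smarter scheme.

```python
# Brute-force quadrature sketch (illustrative only) for sigma_ell(d, d') of eq. (48),
# with s(mu, nu, phi) from eq. (49). A toy P(k) is used; accuracy is not the point here.
import numpy as np
from scipy.special import spherical_jn, eval_legendre

k = np.logspace(-4, 1, 800)
Pk = 2.0e4 * (k / 0.02) / (1.0 + (k / 0.02)**3)      # placeholder spectrum

mu, wmu = np.polynomial.legendre.leggauss(24)
nu, wnu = mu, wmu
phi = np.linspace(0.0, 2*np.pi, 25)[:-1]
dphi = 2*np.pi / len(phi)

def sigma_ell(ell, d, dp):
    total = 0.0
    for m, wm in zip(mu, wmu):
        for n, wn in zip(nu, wnu):
            for p in phi:
                s2 = d*d + dp*dp + 2*d*dp*(m*n + np.sqrt((1-m*m)*(1-n*n))*np.sin(p))
                s = np.sqrt(max(s2, 1e-12))
                radial = np.trapz(k*k * Pk * spherical_jn(ell, k*s), k)
                total += wm*wn*dphi * m * n * radial * eval_legendre(ell, (d*m + dp*n)/s)
    return -total / (2*np.pi**2)

print(sigma_ell(0, 30.0, 30.0))
```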
The coefficients $D_\ell$ are defined as
$$D_0 = \sum_{LL'L''}h(L,L')\,h(L'',L')\,\bar n_L\bar n_{L'}\bar n_{L''}\left[b_Lb_{L''} + (b_L+b_{L''})\frac{f}{3} + \frac{f^2}{5}\right], \quad (50)$$
$$D_2 = -\sum_{LL'L''}h(L,L')\,h(L'',L')\,\bar n_L\bar n_{L'}\bar n_{L''}\left[(b_L+b_{L''})\frac{2f}{3} + \frac{4f^2}{7}\right], \quad (51)$$
$$D_4 = \sum_{LL'L''}h(L,L')\,h(L'',L')\,\bar n_L\bar n_{L'}\bar n_{L''}\,\frac{8f^2}{35}\,. \quad (52)$$
Note that contrary to the Poisson contribution, the mixed term does not vanish for $d \neq d'$. It therefore introduces correlations of the dipole at different separations. Combining eqs. (41), (46) and (47), the signal-to-noise at fixed separation $d$ becomes
$$\frac{S}{N}(d) = \frac{\langle\xi\rangle(d)}{\sqrt{{\rm var}(\xi)(d)}} = \sqrt{\frac{2\pi N_{\rm tot}}{3}}\,\frac{A\,\alpha(d)}{\Big[B\,(\bar N d^2\ell_p)^{-1} + 3\big(D_0\sigma_0(d,d)+D_2\sigma_2(d,d)+D_4\sigma_4(d,d)\big)\Big]^{1/2}}\,, \quad (53)$$
where
$$A = \sum_{LL'}\bar n_L\bar n_{L'}\,h(L,L')(b_L-b_{L'})\,, \quad (54)\qquad\qquad B = \sum_{LL'}\bar n_L\bar n_{L'}\,h^2(L,L')\,. \quad (55)$$
These coefficients, as well as the $D_\ell$ defined in eqs. (50) to (52), depend on the biases and number densities of the different populations of galaxies that we are cross-correlating. In the case where we have only two populations of galaxies, the two kernels give exactly the same signal-to-noise, since the function $h(L,L')$ can be factorised out of eqs. (50)-(52), (54) and (55). If the noise is dominated by Poisson sampling, the signal-to-noise (53) becomes
$$\frac{S}{N} = \frac{A\,\alpha(d)}{\sqrt{6B}}\sqrt{N_{\rm pair}}\,, \quad (56)$$
where $N_{\rm pair}(d) = 4\pi N_{\rm tot}\bar N\ell_p d^2$ is the total number of pairs at fixed separation $d$. This number depends on the pixel size $\ell_p$.
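The population-dependent coefficients are straightforward to tabulate. The sketch below (ours, not from the paper) computes $A$, $B$ and the $D_\ell$ for kernel 2, using as input the six galaxy populations listed in table II (introduced in section VI B), and wraps eq. (56); the growth-rate value is an assumption.

```python
# Sketch of eqs. (50)-(55) and of the Poisson-limit signal-to-noise of eq. (56),
# with kernel 2: h(L, L') = b_L - b_{L'}.
import numpy as np

b  = np.array([2.16, 1.68, 1.44, 1.32, 1.08, 0.96])          # biases, table II
nf = np.array([0.046, 0.017, 0.017, 0.328, 0.164, 0.428])    # fractional numbers, table II
f  = 0.5                                                      # growth rate (assumed value)

bdiff = b[:, None] - b[None, :]
h = bdiff                                   # kernel 2; for kernel 1 use np.sign(bdiff)
A = np.einsum('i,j,ij,ij->', nf, nf, h, bdiff)                # eq. (54)
B = np.einsum('i,j,ij->', nf, nf, h**2)                       # eq. (55)

# D_ell of eqs. (50)-(52); indices i=L, j=L', k=L'' with weight h(L,L') h(L'',L')
w3 = np.einsum('ij,kj,i,j,k->ijk', h, h, nf, nf, nf)
bi, bk = b[:, None, None], b[None, None, :]
D0 = np.sum(w3 * (bi*bk + (bi + bk)*f/3 + f**2/5))
D2 = -np.sum(w3 * ((bi + bk)*2*f/3 + 4*f**2/7))
D4 = np.sum(w3 * 8*f**2/35)
print(A, B, D0, D2, D4)

def snr_poisson(alpha_d, N_pair):
    """Eq. (56): S/N = A alpha(d) sqrt(N_pair) / sqrt(6 B)."""
    return A * alpha_d * np.sqrt(N_pair) / np.sqrt(6.0 * B)
```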
On the other hand, if the noise is dominated by the mixed term, the signal-to-noise is
$$\frac{S}{N} = \frac{\sqrt{2\pi N_{\rm tot}}\;A\,\alpha(d)}{3\big[D_0\sigma_0(d,d)+D_2\sigma_2(d,d)+D_4\sigma_4(d,d)\big]^{1/2}}\,. \quad (57)$$
In this case, the signal-to-noise depends only on the total number of galaxies $N_{\rm tot}$ and is independent of the pixelisation. Eq. (53) represents the signal-to-noise at fixed separation $d$. To calculate the cumulative signal-to-noise over a range of separations, we need to account for the covariances between separations. We have
$$\left(\frac{S}{N}\right)^2_{\rm cum} = \sum_{ij}\langle\xi\rangle(d_i)\,{\rm var}(\xi)^{-1}(d_i,d_j)\,\langle\xi\rangle(d_j)\,, \quad (58)$$
where var(ξ) is given by the sum of eqs. (46) and (47).
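Evaluating eq. (58) amounts to a generalised chi-square contraction of the mean dipole with the inverse covariance over separation bins; a minimal sketch (ours, with toy inputs in place of eqs. (41), (46) and (47)) is given below.

```python
# Sketch of the cumulative signal-to-noise of eq. (58): the covariance is the Poisson
# term (diagonal, eq. 46) plus the mixed term (off-diagonal, eq. 47).
import numpy as np

d_bins = np.arange(8.0, 128.0, 8.0)            # separations in Mpc/h (assumed binning)

def cumulative_snr(xi_mean, var_poisson_diag, var_mixed):
    """xi_mean[i] = <xi>(d_i); var_mixed[i, j] = var_CP(d_i, d_j)."""
    cov = np.diag(var_poisson_diag) + var_mixed
    return np.sqrt(xi_mean @ np.linalg.solve(cov, xi_mean))

# toy numbers only, to show the contraction; real inputs come from eqs. (41), (46), (47)
rng = np.random.default_rng(3)
xi = 1.0 / d_bins**2
vp = 0.5 / d_bins**3
M = rng.normal(size=(len(d_bins), len(d_bins))) * 1e-4
vm = M @ M.T                                    # any positive semi-definite toy matrix
print(cumulative_snr(xi, vp, vm))
```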
VI. FORECASTS
We apply now our method to concrete examples. We start by validating the calculation of the errors using measurements from the BOSS survey. In [32] we present a measurement of the dipole for two populations of galaxies in the LOWz and CMASS samples, data release DR10 [38], using kernel 1. Here we use this measurement to compare the observed errors (obtained by Jackknife) with our predictions using eqs. (46) and (47). Since the number density of galaxies as well as the fractional number of bright and faint galaxies are evolving through the sample, we split them into thin redshift slices of ∆z ∼ 0.01 and we calculate the errors in each bin. The total error in the samples is then obtained by averaging the errors over the redshift slices 7 . The mean bias of the bright and faint populations have been measured using the monopole of the total sample and the monopole of the cross-correlation between the bright and faint populations [32]. In LOWz we found a mean bias for the bright population of b B = 2.30 and for the faint population of b F = 1.31. In the CMASS sample we found b B = 2.36 and b F = 1.46. In both samples we use cubic pixels with size p = 4 Mpc/h for separations 16 ≤ d ≤ 120 Mpc/h. The errors are shown in figure 2. Our prediction under-estimates the Jackknife errors by up to 30 percent both in the LOWz and in the CMASS sample. We have checked that the Jackknife errors for the monopole and the quadrupole agree well with the errors from the BOSS collaboration [38], obtained from simulations. This gives us confidence that the Jackknife errors on the dipole are reliable. The 30 percent difference between the Jackknife errors and our theoretical predictions could be due to inhomogeneous sampling in the BOSS data. These inhomogeneities are accounted for by weighting the data appropriately and are consequently captured by the Jackknife errors but they are not encoded in the theoretical predictions. We will investigate this in more detail in a future work.
A. Multi-population of halos in the millennium simulation
We now calculate the signal-to-noise for multiple populations of galaxies in three concrete examples. First we use the millennium-XXL simulation [39]. The millennium simulation does not contain relativistic distortions and it is therefore not expected to contain a dipole 8 . This simulation is however useful since it contains various populations of halos with different masses (and consequently biases) that we can use as a model to predict how the optimal kernel increases the signal-to-noise. We use measurements from [40], where halos in the millennium simulation have been separated into 6 mass bins. For each bin, the bias and number density have been measured. These measurements are summarised in table I. They were performed at z = 0. Here we use them to calculate the signal-to-noise for the dipole between z = 0 and z = 0.2, assuming that they are valid over this redshift range. We split the volume into two redshift bins of width ∆z = 0.1 and we calculate the signal-to-noise in each bin. Since the redshift bins are to a good approximation uncorrelated we obtain the total signal-to-noise by adding in quadrature the signal-to-noise from each bin. The volume of the box is (4.11 Gpc/h) 3 . In this box we count the number of halos we would see on our past light-cone in each of the redshift bins. The number density in the simulation isN = 2.7 × 10 −3 (h/Mpc) 3 leading to a total number of halos N tot = 2.9 × 10 5 in the lowest redshift bin and N tot = 1.86 × 10 6 in the highest redshift bin. We calculate the signal-to-noise in three different cases: the six populations of table I with kernel 1; the six populations of table I with kernel 2; and two populations, the first one (faint) corresponding to the first mass bin in table I, and the second one (bright) combining bins 2 to 6. The mean bias for the bright population is b B = 1.17 and the mean fractional number of galaxies isn B = 0.158. The signal-to-noise for the three cases is plotted in figure 3. Going from two populations of galaxies to six populations of galaxies increases the signal-to-noise by 7 percent. This is simply due to the fact that for six populations the number of possible cross-correlations is larger than for two populations and the noise is consequently smaller. Then using the optimal kernel provides a further improvement of 28 percent, leading to a total improvement of 35 percent. The cumulative signal-to-noise from 8 to 120 Mpc/h can be calculated from eq. (58). For two populations of galaxies we find a cumulative signal-to-noise of 3.3. With six populations and kernel 1 the signal-to-noise becomes 3.5, whereas the optimal kernel allows us to reach a cumulative signal-to-noise of 4.4. This indicates that with the optimal kernel a detection of the dipole in a survey with characteristics similar to those of the millennium simulation should be possible. Note that the naive cumulative signal-to-noise obtained by simply summing over separations without accounting for the covariance between bins is of 7.8 instead of 4.4 for the optimal kernel. This shows that the bins in separation are significantly correlated.
B. Multi-populations of galaxies in the main sample of SDSS DR5
As a second example we apply our method to the main sample of galaxies in the data release DR5 of SDSS. This sample has two advantages with respect to the BOSS LOWz and CMASS samples. First it is at lower redshift, where the dipole is larger. And second it contains a more diverse population of galaxies for which the biases are significantly different. In [42], Percival et al. split the main galaxy sample into nine bins of luminosity and measured the bias for each population. For our calculation of the signal-to-noise we group these nine populations into six populations. The mean number density of galaxies is N̄ = 4.3 × 10⁻³ (h/Mpc)³. As before we split the survey into two redshift bins of width ∆z = 0.1 and we calculate the signal-to-noise in each of the bins. In total we have 465'789 galaxies: 62'705 in the lowest redshift bin and 403'083 in the highest redshift bin. The values of the biases and fractional number densities for the different populations are extracted from [42,43] and are summarised in table II.
We calculate the signal-to-noise in three different cases: the six populations of table II with kernel 1; the six populations of table II with kernel 2; and two populations, the first one (bright) combining the bins 1 to 4 in table II, and the second one (faint) combining the bins 5 and 6. The mean bias for the bright population is b_B = 1.43 and the mean fractional number of galaxies is n̄_B = 0.408. The mean bias for the faint population is b_F = 0.99 and the mean fractional number of galaxies is n̄_F = 0.592. The signal-to-noise for the three cases is plotted in figure 4. The optimal kernel with six populations increases the signal-to-noise by 28 percent with respect to the case of two populations: we gain 14 percent by going from two populations to six populations with kernel 1, and using kernel 2 gives a further increase of 14 percent. The cumulative signal-to-noise from 8 to 120 Mpc/h is 1.8 in the case of two populations, 2.1 with six populations and kernel 1, and 2.4 with six populations and the optimal kernel.
C. The Dark Energy Spectroscopic Instrument
Finally we forecast the signal-to-noise for the future Dark Energy Spectroscopic Instrument (DESI) [44]. The Bright Galaxy DESI survey [45] will observe 10 million galaxies at low redshift z ≤ 0.3 over 14'000 square degrees. We split this sample into three redshift bins of size ∆z = 0.1 and calculate the signal-to-noise in each bin using table II, i.e. assuming that the DESI Bright Galaxy sample can be split in luminosity in a similar way as the SDSS sample. The signal-to-noise in shown in figure 5. With 6 populations and the optimal kernel we reach a signal-to-noise of 4.3 per bin between 15 and 30 Mpc/h. The improvement of the signal-to-noise with respect to SDSS is mainly due to the fact that DESI will observe a significantly larger number of galaxies, thanks to the fact that both the volume and the galaxy number density are larger in DESI than in SDSS. As shown in eq. (53) the signal-to-noise is indeed proportional to √ N tot . This increases the signal-to-noise by a factor 2.6 in the redshift bins 0 < z < 0.1 and 0.1 < z < 0.2. In addition DESI will observe 6.8 million galaxies at redshift 0.2 < z < 0.3 leading to a total improvement of a factor 3.1 with respect to SDSS. The cumulative signal-noise from 8 to 120 Mpc/h is of 5.8 in the case of two populations, 6.6 with six populations and kernel 1 and 7.4 with six populations and the optimal kernel. The optimal kernel increases therefore the signal-to-noise by 28 percent with respect to the two populations case. Note that these numbers may be reduced by ∼ 30 percent, if the errors in DESI are affected by inhomogeneous weights as is the case for BOSS (see figure 2). These forecasts seem consistent with the results of [29], who calculated the detectability of relativistic effects using the power spectrum of multiple populations of galaxies. For a full-sky survey at low redshift 0.1 < z < 0.3 they predict a 5-sigma detection of the imaginary part of the power spectrum if all halos down to bias of order 1 are used (corresponding to a minimum halo mass of 3 × 10 11 M h −1 ).
In addition to the Bright sample, DESI will observe emission line galaxies (ELG) and bright luminous red galaxies (LRG) over a wide range of redshift. We use the specifications of [44] (see table 3) to forecast the signal-to-noise from the two lowest redshift bins, i.e. from z = 0 to z = 0.4. We use the ELG's as faint sample with bias b F = 0.84, and the LRG's as bright sample with bias b B = 1.7. The cumulative signal-to-noise is of 4.6. Since ELG's and LRG's have very different biases, we can expect to split the sample into more populations following table I. According to figure 3 this could increase the signal-to-noise by up to 35 percent giving a cumulative signal-to-noise of 6.2. These forecasts show that a robust detection of the relativistic dipole should be possible in the near future.
VII. CONCLUSIONS
Relativistic distortions are an intrinsic part of our observations. They are rich in information since they are sensitive not only to the galaxies' peculiar velocities, but also to the geometry of the universe through the metric potentials Φ and Ψ. Measuring relativistic distortions would therefore open the way to new tests of the theory of gravity. These effects are however challenging to detect since they are suppressed by powers of H/k with respect to the standard contributions. To observe the impact of relativistic distortions on the monopole, quadrupole and hexadecapole, we need therefore to look at correlation functions at horizon scales k ∼ H.
In this paper we propose instead to isolate the relativistic distortions by fitting for a dipole in the cross-correlation function between multiple populations of galaxies. The advantage of using the dipole to measure relativistic distortions is twofold. First, this allows us to remove the contribution from the standard terms, which in the distant-observer approximation affect only the even multipoles. Second, the kernel used to isolate the dipole is anti-symmetric under the exchange of the galaxies' luminosity, which automatically suppresses the cosmic variance contribution to the error.
Combining multiple populations of galaxies, we construct an optimal estimator to maximise the signal-to-noise of the dipole. This estimator has a complicated form, which involves multiple summations over all pixels in the survey, as shown in eq. (35). In a forthcoming paper we will study how to implement this estimator efficiently in large-scale structure surveys. Here instead we restrict ourselves to the case where Poisson sampling dominates the error. In this case, the estimator takes a simple and intuitive form, which can readily be used. We find that with this simple estimator we increase the signal-to-noise of the dipole by up to 35 percent. This allows us to reach a detectable level for the dipole in the main SDSS sample of galaxies and in surveys with number densities and halos similar to those in the millennium simulation. In a forthcoming paper we will apply this method to the data release DR5 of SDSS and to the MICE simulation. This will require splitting the galaxies into multiple populations with different luminosities and combining these populations according to eq. (40). In the future we will also try to measure the dipole in the upcoming DESI survey, for which we predict a cumulative signal-to-noise of 7.4.
A detection of the relativistic dipole with this method would be very interesting since it would allow us to test the validity of Euler equation in a model independent way. According to eq. (7) the dipole is indeed sensitive to a combination of the gravitational potential Ψ and the peculiar velocity of galaxies. Combining a measurement of the dipole with a measurement of the quadrupole would therefore allow us to test the relation between Ψ and V , i.e. to test if the velocity of galaxies is governed only by the gravitational potential as predicted in General Relativity, or if a fifth force acts on the galaxies.
Acknowledgments: It is a pleasure to thank Alex Hall for interesting discussions and for his help in showing the cancellation of the cosmic variance. We also thank Rupert Croft, Ofer Lahav and Francesco Montanari for stimulating and useful discussions. We thank the referee for his useful comments, which helped us improve the manuscript.
The integral over $\mathbf{x}$ simply gives the volume of the survey $V$ (or the volume of the redshift bin over which we average). We obtain
$$\langle\xi\rangle = \frac{3}{8\pi}\frac{\ell_p V}{\ell_p^6}\sum_{LL'}dn_L\,dn_{L'}\,h(L,L')(b_L-b_{L'})\int_0^{2\pi}d\varphi\int_0^{\pi}d\beta\,\sin\beta\cos^2\beta\int_0^{\infty}ds\,s^2 g(s)\,\alpha(s)\,\delta_D(s-d) = \frac{1}{2}N_{\rm tot}\,\ell_p\,d^2\,\bar N\,g(d)\,\alpha(d)\sum_{LL'}\bar n_L\bar n_{L'}\,h(L,L')(b_L-b_{L'})\,, \quad (B4)$$
where
$$\bar N = \frac{1}{\ell_p^3}\sum_L dn_L \quad (B5)$$
is the mean number density in the survey,
$$\bar n_L = \frac{dn_L}{\sum_{L'}dn_{L'}} \quad (B6)$$
is the fractional number of galaxies with luminosity $L$, and $N_{\rm tot} = \bar N V$ is the total number of galaxies in the survey.
Variance
Poisson term
The Poisson contribution to the variance is
var P (ξ) = 2 3 8π 2 ijLiLj dn Li dn Lj h 2 (L i , L j )g 2 (d ij ) cos 2 β ij δ K (d ij − d)δ K (d − d ) .(B7)
In the continuous limit we obtain var P (ξ) = 2 3 8π
2 p 6 p LL dn L dn L h 2 (L, L ) d 3 x d 3 y g 2 (|x − y|) cos 2 β(x, y)δ D (|x − y| − d)δ D (d − d ) = 3 8π N tot p d 2N g 2 (d) LL n LnL h 2 (L, L )δ D (d − d ) .(B8)
We can easily show that many of the products in the brackets vanish since they are symmetric under the exchange L i ↔ L j or L a ↔ L b , whereas the kernels w xixj LiLj and w xax b LaL b are anti-symmetric. The remaining terms read
var C (ξ) = ijab LiLj LaL b dn Li dn Lj dn La dn L b w xixj LiLj w xax b LaL b (b Li − b Lj )(b La − b L b ) (B10) × 1 H 2 δ i δ a ∂ r (V · n) j ∂ r (V · n) b − δ i ∂ r (V · n) a δ j ∂ r (V · n) b .
Using that
δ i δ j = 1 (2π) 3 d 3 k e ik(xj −xi) P (k) ,(B11)− 1 H δ i ∂ r (V · n) j = f (2π) 3 d 3 k e ik(xj −xi) P (k) 1 3 + 2 3 P 2 (n ·k) ,(B12)
1
H 2 ∂ r (V · n) i ∂ r (V · n) j = f 2 (2π) 3 d 3 k e ik(xj −xi) P (k) 1 5 + 4 7 P 2 (n ·k) + 8 35 P 4 (n ·k) ,(B13)
where $P_\ell$ denotes the Legendre polynomial of degree $\ell$, and going to the continuous limit we find ${\rm var}_C(\xi) = \frac{4f^2}{(2\pi)^6}\times$
1 12 p LL L L dn L dn L dn L dn L (b L − b L )(b L − b L ) (B14) × d 3 x i d 3 x j d 3 x a d 3 x b w xixj LL w xax b L L d 3 k d 3 k e ik(xa−xi) e ik (x b −xj )
× P (k)P (k ) 1 45 + 2 63 P 2 (n ·k ) + 2 35 P 4 (n ·k ) − 1 9 P 2 (n ·k )P 2 (n ·k) .
We then do the following change of variables x i → y i = x j − x i and x a → y a = x b − x a . The exponentials become
e ik(xa−xi) e ik (x b −xj ) = e ik(yi−ya) e i(k+k )(x b −xj ) .(B15)
The kernel w xixj LiLj is a function of the separation between the pixels |x j − x i | = y i and the orientation of the pair with respect to the line-of-sight cos β ij = cos β yi , and similarly for w xax b LaL b . The integral over x j and x b become then trivial d 3 x j e −i(k+k )xj = (2π) 3 δ D (k + k ) and
d 3 x b = V ,(B16)
where $V$ is the volume of the survey. The Dirac-delta function enforces $\mathbf{k}' = -\mathbf{k}$, which implies that the square bracket in eq. (B14) exactly vanishes:
$$\frac{1}{45} + \frac{2}{63}P_2(\mathbf{n}\cdot\hat{\mathbf{k}}) + \frac{2}{35}P_4(\mathbf{n}\cdot\hat{\mathbf{k}}) - \frac{1}{9}P_2^2(\mathbf{n}\cdot\hat{\mathbf{k}}) = 0\,. \quad (B17)$$
This shows that the measurement of the relativistic dipole is not affected at all by the cosmic variance of the density and redshift-space distortions. This cancellation of the cosmic variance for multiple populations of galaxies has already been demonstrated for the case of the power spectrum [34,35].
Mixed term
The variance due to the product of the Poisson contribution and of the cosmic variance contribution is
$${\rm var}_{CP}(\xi) = 4\left(\frac{3}{8\pi}\right)^2\sum_{ija}g(d_{ij})\,g(d_{aj})\cos\beta_{ij}\cos\beta_{aj}\,\delta_K(d_{ij}-d)\,\delta_K(d_{aj}-d')\sum_{L_iL_jL_a}dn_{L_i}dn_{L_j}dn_{L_a}\,h(L_i,L_j)\,h(L_a,L_j)\Bigg\{\left[b_{L_i}b_{L_a}+(b_{L_i}+b_{L_a})\frac{f}{3}+\frac{f^2}{5}\right]C_0(d_{ia}) - \left[(b_{L_i}+b_{L_a})\frac{2f}{3}+\frac{4f^2}{7}\right]C_2(d_{ia})P_2(\cos\beta_{ia}) + \frac{8f^2}{35}C_4(d_{ia})P_4(\cos\beta_{ia})\Bigg\}\,, \quad (B18)$$
where
$$C_\ell(d_{ia}) = \frac{1}{2\pi^2}\int dk\,k^2P(k,\bar z)\,j_\ell(kd_{ia})\,,\qquad \ell = 0,2,4\,. \quad (B19)$$
Eq. (B18) contains a sum over three pixels, which becomes a triple integral in the continuous limit. To solve this triple integral, we fix $i$ at the origin and we fix $j$ in the plane $x_2$-$x_3$ as shown in figure 6. The direction of observation $\mathbf{n}$ is along the $x_3$ axis. Due to the symmetry of the problem we can then simply multiply the result by the volume of the survey $V$ (to account for the integral over $i$) and by $2\pi$ to account for the integral over $j$ around the $x_3$ axis. We obtain
$${\rm var}_{CP}(\xi) = -\frac{9}{8\pi}N_{\rm tot}\,\bar N^2\ell_p^2\,d^2d'^2\,g(d)\,g(d')\int_{-1}^{1}d\mu\,\mu\int_{-1}^{1}d\nu\,\nu\int_0^{2\pi}d\varphi\sum_{LL'L''}h(L,L')\,h(L'',L')\,\bar n_L\bar n_{L'}\bar n_{L''}\Bigg\{\left[b_Lb_{L''}+(b_L+b_{L''})\frac{f}{3}+\frac{f^2}{5}\right]C_0(d_{ia}) - \left[(b_L+b_{L''})\frac{2f}{3}+\frac{4f^2}{7}\right]C_2(d_{ia})P_2(\cos\beta_{ia}) + \frac{8f^2}{35}C_4(d_{ia})P_4(\cos\beta_{ia})\Bigg\}\,, \quad (B20)$$
where $\mu = \cos\rho$ and $\nu = \cos\gamma$. The distance $d_{ia}$ and the angle $\beta_{ia}$ are functions of $d$, $d'$, $\mu$, $\nu$ and $\varphi$. From figure 6 we have
$$d_{ia}\cos\alpha = d'\cos\gamma + d\cos\rho\,, \quad (B21)$$
$$d_{ia}^2\sin^2\alpha = d'^2\sin^2\gamma + d^2\sin^2\rho - 2dd'\sin\gamma\sin\rho\cos(\varphi+\pi/2)\,, \quad (B22)$$
leading to
$$d_{ia}^2 = d^2 + d'^2 + 2dd'\Big[\mu\nu + \sqrt{(1-\mu^2)(1-\nu^2)}\sin\varphi\Big]\,, \quad (B23)$$
$$\cos\beta_{ia} = \cos\alpha = \frac{d\mu + d'\nu}{d_{ia}}\,. \quad (B24)$$
Figure 1: Coordinate system in which the dipole is observed.

Figure 2: Errors in the measurements of the dipole (black dots) in the LOWz and CMASS samples. The red solid line represents the theoretical prediction from eqs. (46) and (47), calculated with pixel size ℓ_p = 4 Mpc/h.

Figure 3: Predicted signal-to-noise for the dipole between z = 0 and z = 0.2 in a survey with the same characteristics as the millennium simulation, plotted as a function of separation between galaxies. We use pixels of size 8 Mpc/h. The green dotted line corresponds to two populations with kernel 1 (same with kernel 2). The black dashed line corresponds to six populations with kernel 1 and the red solid line to six populations with kernel 2.

Figure 4: Predicted signal-to-noise for the dipole in the data release DR5 of SDSS, z ≤ 0.2, plotted as a function of separation between galaxies. We use pixels of size 8 Mpc/h. The green dotted line corresponds to two populations with kernel 1 (same with kernel 2). The black dashed line corresponds to six populations with kernel 1 and the red solid line to six populations with kernel 2.

Figure 5: Predicted signal-to-noise for the dipole in the DESI Bright Galaxy sample, z ≤ 0.3, plotted as a function of separation between galaxies. We use pixels of size 8 Mpc/h. The green dotted line corresponds to two populations with kernel 1 (same with kernel 2). The black dashed line corresponds to six populations with kernel 1 and the red solid line to six populations with kernel 2.

Figure 6: Configuration used to calculate the mixed term in the variance, eq. (B18). The direction of observation n is along the x3 axis.
Table I: Biases and fractional number densities for six populations of halos with different masses, measured in the millennium simulation at z = 0 [40].
Table II: Biases and fractional number densities for six populations of galaxies with different magnitude, measured in the data release DR5 of SDSS [42,43].

Mean magnitude   b_L    n̄_L
-22.5            2.16   0.046
-21.75           1.68   0.017
-21.25           1.44   0.017
-20.25           1.32   0.328
-19.25           1.08   0.164
-18.5            0.96   0.428
Footnote 1: More precisely, one can measure separately bσ8 and fσ8.
Footnote 2: Here and in the following we work in the distant-observer approximation and we neglect the evolution of our observables with redshift (see text after eq. (8)). All variables are therefore evaluated at the mean redshift of the survey z̄. For example n_{L_i}(x_i, z_i) ≃ n_{L_i}(x_i, z̄). For simplicity we drop the dependence on z̄ in the notation, when it is not needed.
Footnote 6: Note that since we work here in the distant-observer approximation, the lines-of-sight to the median, to the bright and to the faint galaxy are all parallel.
Footnote 7: Note that to compare the theoretical signal to the measurement, we must divide eq. (44) by N_tot ℓ_p d² N̄ n̄_B n̄_F. We therefore apply the same normalisation to eqs. (46) and (47).
Footnote 8: Note that some simulations, like for example the MICE simulation [41], do provide the distribution of galaxies on the light-cone. As a consequence, those simulations contain part of the relativistic dipole, namely all the terms proportional to the velocity in eq. (7).
Appendix A: Comparison of N and 1 in eq. (35)

In eq. (35) we have to compare the two terms $w_{ij}$ and $\sum_a w_{ia}N_{aj}$. For simplicity we fix the position of the pixels $i$ and $j$ on the $z$ axis and we look at a bright galaxy in pixel $i$ and a faint galaxy in pixel $j$. Neglecting the effect of redshift-space distortions, we need to compare $\frac{3}{8\pi}\cos\beta_{ij} = \frac{3}{8\pi}$ with the corresponding sum. In the continuous limit we obtain an integral $I_{ij}$ that can be calculated numerically for fixed values of $d_{ij}$. Choosing $\bar N = 2.8\times10^{-4}$, which is the number density in the LOWz survey, we find that at small separation, $d_{ij} = 1$ Mpc/h, $I_{ij} = 0.033$, and at large separation, $d_{ij} = 50$ Mpc/h, $I_{ij} = 0.74$. Comparing this with $3/(8\pi) = 0.12$, we see that at small separation $w_{ij}$ dominates over $\sum_a w_{ia}B_{aj}$, whereas at large separation it is the opposite. For surveys with larger number density, like the main galaxy sample of DR5, $\sum_a w_{ia}B_{aj}$ starts dominating over $w_{ij}$ at small separation already.

Appendix B: Explicit calculation of the mean and variance in the continuous limit

Mean

Inserting the kernel (37) into eq. (14) we find for the mean of the estimator an expression whose ingredients $dn_{L_i}$, $b_{L_i}$ and $\alpha$ depend on the redshift $z_i$. However we neglect here the evolution of these functions with redshift, meaning that we evaluate them at the mean redshift of the survey $\bar z$ (or the mean of the redshift slice). Since these functions evolve slowly with redshift and since in the distant-observer approximation we have $|z_i - z_j| \ll \bar z$, this is a good approximation. In this case, the sum over luminosities $L_i$ and $L_j$ is independent of the pixels and can be factorised. In the continuous limit, the sum over pixels becomes an integral $\ell_p^{-6}\int d^3x\,d^3y$, where $\ell_p$ denotes the size of the cubic pixels. With this we obtain $\langle\xi\rangle$ as an integral over $\mathbf{x}$ and $\mathbf{y}$. We can fix the position of $\mathbf{x}$ and integrate over all $\mathbf{y}$. Since the integrand does not depend on the position of $\mathbf{x}$ but only on the relative separation $|\mathbf{x}-\mathbf{y}|$ and on the orientation of the pair with respect to the observer $\beta(\mathbf{x},\mathbf{y})$, the integral over $\mathbf{x}$ simply gives the volume of the survey $V$, leading to eqs. (B4)-(B6).

Cosmic variance

The cosmic variance contribution is given by
CB acknowledges support by the Swiss National Science Foundation. LH is supported in part by the DOE and NASA. EG acknowledges support from projects AYA2012-39559 and Consolider-Ingenio CSD2007-00060 from the Spanish Ministerio de Ciencia e Innovacion (MICINN) and CosmoComp (PITN-GA-2009-238356) and LACEGAL (PIRSES-GA-2010-269264) from the European Commission.
[1] N. Kaiser, MNRAS 227, 1 (1987).
[2] P. B. Lilje and G. Efstathiou, MNRAS 236, 851 (1989).
[3] A. J. S. Hamilton, Astrophys. J. 385, L5 (1992).
[4] E. Hawkins et al., MNRAS 346, 78 (2003).
[5] I. Zehavi et al., Astrophys. J. 621, 22 (2005).
[6] L. Guzzo et al., Nature 451, 541 (2008).
[7] A. Cabre and E. Gaztanaga, MNRAS 393, 1183 (2009).
[8] Y.-S. Song, C. G. Sabiu, I. Kayo and R. C. Nichol, JCAP 05, 020 (2011).
[9] L. Samushia et al., MNRAS 439, 3504 (2014).
[10] C.-H. Chuang et al., arXiv:1312.4889 (2013).
[11] J. E. Gunn, Astrophys. J. 147, 61 (1967).
[12] E. L. Turner, J. P. Ostriker and J. R. Gott III, Astrophys. J. 284, 1 (1984).
[13] R. L. Webster, P. C. Hewett, M. E. Harding and G. A. Wegner, Nature 336, 358 (1988).
[14] W. Fugmann, Astron. and Astrophys. 204, 73 (1988).
[15] R. Narayan, Astrophys. J. Lett. 339, L53 (1989).
[16] P. Schneider, Astron. and Astrophys. 221, 221 (1989).
[17] T. J. Broadhurst, A. N. Taylor and J. A. Peacock, Astrophys. J. 438, 49 (1995).
[18] R. Moessner, B. Jain and J. V. Villumsen, MNRAS 294, 291 (1998).
[19] M. LoVerde, L. Hui and E. Gaztanaga, Phys. Rev. D 77, 023512 (2008); L. Hui, E. Gaztanaga and M. LoVerde, Phys. Rev. D 77, 063526 (2008).
[20] J. Yoo, A. L. Fitzpatrick and M. Zaldarriaga, Phys. Rev. D 80, 083514 (2009); J. Yoo, Phys. Rev. D 82, 083508 (2010).
[21] C. Bonvin and R. Durrer, Phys. Rev. D 84, 063505 (2011).
[22] A. Challinor and A. Lewis, Phys. Rev. D 84, 043516 (2011).
[23] D. Jeong, F. Schmidt and C. M. Hirata, Phys. Rev. D 85, 023504 (2012).
[24] C. Bonvin, L. Hui and E. Gaztanaga, Phys. Rev. D 89, 083535 (2014).
[25] R. A. C. Croft, MNRAS 434, 3008 (2013).
[26] A. Raccanelli, D. Bertacca, O. Doré and R. Maartens, JCAP 08, 022 (2014).
[27] C. Bonvin, Class. Quant. Grav. 31, 234002 (2014).
[28] P. McDonald, JCAP 11, 026 (2009).
[29] J. Yoo, N. Hamaus, U. Seljak and M. Zaldarriaga, Phys. Rev. D 86, 063514 (2012).
[30] R. Wojtak, S. H. Hansen and J. Hjorth, Nature 477, 567 (2011).
[31] I. Sadeh, L. L. Feng and O. Lahav, Phys. Rev. Lett. 114, 7 (2015).
[32] E. Gaztanaga, C. Bonvin and L. Hui, arXiv:1512.03918 (2015).
[33] N. Hamaus, U. Seljak, V. Desjacques, R. E. Smith and T. Baldauf, Phys. Rev. D 82, 043515 (2010); T. Baldauf, U. Seljak, R. E. Smith, N. Hamaus and V. Desjacques, Phys. Rev. D 88, 083507 (2013).
[34] U. Seljak, Phys. Rev. Lett. 102, 021302 (2009).
[35] P. McDonald and U. Seljak, JCAP 10, 007 (2009).
[36] W. J. Percival, L. Verde and J. A. Peacock, MNRAS 347, 645 (2004); R. E. Smith and L. Marian, MNRAS 457, 2968 (2016).
[37] U. Seljak, N. Hamaus and V. Desjacques, Phys. Rev. Lett. 103, 091303 (2009); N. Hamaus, U. Seljak and V. Desjacques, Phys. Rev. D 86, 103513 (2012).
[38] L. Anderson et al., MNRAS 441, 24 (2014).
[39] R. E. Angulo, V. Springel, S. D. M. White, A. Jenkins, C. M. Baugh and C. S. Frenk, MNRAS 426, 2046 (2012).
[40] E. Jennings, C. M. Baugh and D. Hatt, MNRAS 446, 793 (2015).
[41] P. Fosalba, M. Crocce, E. Gaztanaga and F. J. Castander, MNRAS 448, 2987 (2015); M. Crocce, F. J. Castander, E. Gaztanaga, P. Fosalba and J. Carretero, arXiv:1312.2013 (2013).
[42] W. Percival et al., Astrophys. J. 657, 645 (2007).
[43] G. Cresswell and W. Percival, MNRAS 392, 682 (2009).
[44] M. Levi et al., arXiv:1308.0847 (2013).
[45] R. N. Cahn et al., American Astronomical Society, AAS Meeting #225, #336.10 (2015), http://adsabs.harvard.edu/abs/2015AAS...22533610C
| [] |
[
"A WRAPPED FUKAYA CATEGORY OF KNOT COMPLEMENT",
"A WRAPPED FUKAYA CATEGORY OF KNOT COMPLEMENT"
] | [
"Youngjin Bae ",
"Seonhwa Kim ",
"Yong-Geun Oh "
] | [] | [] | This is the first of a series of two articles where we construct a version of wrapped Fukaya category WF (M \ K; Hg 0 ) of the cotangent bundle T * (M \ K) of the knot complement M \ K of a compact 3-manifold M , and do some calculation for the case of hyperbolic knots K ⊂ M . For the construction, we use the wrapping induced by the kinetic energy Hamiltonian Hg 0 associated to the cylindrical adjustment g 0 on M \K of a smooth metric g defined on M . We then consider the torus T = ∂N (K) as an object in this category and its wrapped Floer complex CW * (ν * T ; Hg 0 ) where N (K) is a tubular neighborhood of K ⊂ M . We prove that the quasi-equivalence class of the category and the quasi-isomorphism class of the A∞ algebra CW * (ν * T ; Hg 0 ) are independent of the choice of cylindrical adjustments of such metrics depending only on the isotopy class of the knot K in M .In a sequel [BKO], we give constructions of a wrapped Fukaya category WF (M \ K; H h ) for hyperbolic knot K and of A∞ algebra CW * (ν * T ; H h ) directly using the hyperbolic metric h on M \ K, and prove a formality result for the asymptotic boundary of (M \ K, h). | 10.1007/s00209-023-03285-8 | [
"https://arxiv.org/pdf/1901.02239v2.pdf"
] | 119,142,453 | 1901.02239 | f04d3dd9466066844411331e2d381367de9da625 |
A WRAPPED FUKAYA CATEGORY OF KNOT COMPLEMENT
Youngjin Bae
Seonhwa Kim
Yong-Geun Oh
A WRAPPED FUKAYA CATEGORY OF KNOT COMPLEMENT
This is the first of a series of two articles where we construct a version of wrapped Fukaya category WF (M \ K; Hg 0 ) of the cotangent bundle T * (M \ K) of the knot complement M \ K of a compact 3-manifold M , and do some calculation for the case of hyperbolic knots K ⊂ M . For the construction, we use the wrapping induced by the kinetic energy Hamiltonian Hg 0 associated to the cylindrical adjustment g 0 on M \K of a smooth metric g defined on M . We then consider the torus T = ∂N (K) as an object in this category and its wrapped Floer complex CW * (ν * T ; Hg 0 ) where N (K) is a tubular neighborhood of K ⊂ M . We prove that the quasi-equivalence class of the category and the quasi-isomorphism class of the A∞ algebra CW * (ν * T ; Hg 0 ) are independent of the choice of cylindrical adjustments of such metrics depending only on the isotopy class of the knot K in M .In a sequel [BKO], we give constructions of a wrapped Fukaya category WF (M \ K; H h ) for hyperbolic knot K and of A∞ algebra CW * (ν * T ; H h ) directly using the hyperbolic metric h on M \ K, and prove a formality result for the asymptotic boundary of (M \ K, h).
8. Homotopy of A ∞ functors 33 9. Independence of choice of metrics 42 10. Construction of Knot Floer algebra HW (∂ ∞ (M \ K)) 43
Part 2. C 0 -estimates for the moduli spaces 45 11. Horizontal C 0 estimates for m-components 46 12. Vertical C 0 estimates for m-components 47 13. C 0 estimates for moving Lagrangian boundary 49 Appendix A. Energy identity for Floer's continuation equation 53 Appendix B. Gradings and signs for the moduli spaces 54 References 59
Introduction
Idea of using the conormal lift of a knot (or a link) in R 3 or S 3 as a Legendrian submanifold in the unit cotangent bundle has been exploited by Ekholm-Etynre-Ng-Sullivan [EENS] in their construction of knot contact homology and proved that this analytic invariants recovers Ng's combinatorial invariants of the knot which is an isomorphism class of certain differential graded algebras [Ng]. On the other hand, Floer homology of conormal bundles of submanifolds of a compact smooth manifold in the full cotangent bundle were studied as a quantization of singular homology of the submanifold (see [Oh2], [KO]). Such a construction has been extended in the A ∞ level by Nadler-Zaslow [NZ], [N] and also studied by Abbondandolo-Portaluri-Schwarz [APS] in its relation to the singular homology of the space of cords of the submanifold.
The present article is the first of a series of two articles where we construct a version of wrapped Fukaya category of T * (M \ K) of the knot-complement M \ K, which is noncompact, as an invariant of knot K (or more generally of links) and do some computation of the invariant for the case of hyperbolic knot K ⊂ M by relating the (perturbed) pseudoholomorphic triangles in T * (M \ K) to the hyperbolic geodesic triangles of the base M \ K.
For our purpose of investigating the effect on the topology of M \K of the special metric behavior such as the existence of a hyperbolic metric in the complement M \ K, it is important to directly deal with the cotangent bundle of the full complement M \ K equipped with the wrapping induced by the kinetic energy Hamiltonian of a metric on M \ K. On the other hand, we would like to emphasize that the space T * (M \K) may not be convex in the sense of [EG] in general towards the direction of horizontal infinity, i.e., towards the direction of the knot K in M \ K. In particular, it may not carry a Liouville structure when equipped with the tautological Liouville one form. Because of these reasons, to carry out necessary analysis of the relevant perturbed Cauchy-Riemann equation, we need to impose certain tame behavior of the associated Hamiltonian and almost complex structure at infinity of M \ K. Because of noncompactness of M \ K, the resulting A ∞ category a priori depends on the Liptschitz-equivalence class of such metrics modulo conformal equivalence and requires a choice of such equivalence class in the construction.
In the present paper, we will consider the restricted class of Hamiltonians that asymptotically coincide with the kinetic energy Hamiltonian denoted by H g associated Riemannian metric g on the complement and the Sasakian almost complex structure J g associated to the metric g. We will need a suitable 'tameness' of the metric near the end of complement M \ K so that a uniform horizontal C 0 bound holds for the relevant Cauchy-Riemann equation with any given tuple (L 0 , . . . , L k ) of Lagrangian boundary conditions from the given collection of admissible Lagrangians. As long as such a C 0 bound is available, one can directly, construct a wrapped Fukaya category using such a metric. It turns out that such a horizontal bound can be proved in general only for the metric with suitable tame behavior such as those with cylindrical ends or with certain type of homogeneous behavior at the end of M \ K like a complete hyperbolic metric. We will consider the case of hyperbolic knot K ⊂ M in a sequel [BKO] to the present paper. It turns out that T * (M \ K) is convex at infinity in the sense that there is a J hpluri-subharmonic exhaustion function in a neighborhood at infinity of T * (M \ K) when K is a hyperbolic knot. We refer to [BKO] for the explanation of this latter property of the hyperbolic knot.
1.1. Construction of wrapped Fukaya category WF(M \ K). The main purpose of the present paper is to define a Fukaya-type category canonically associated to the knot complement M \K. To make our definition of wrapped Fukaya category of knot complement flexible enough, we consider a compact oriented Riemannian manifolds (M, g) without boundary. (We remark that the case of hyperbolic knot K ⊂ M does not belong to this case because the hyperbolic metric on M \K cannot be smoothly extended to the whole M .)
For this purpose, we take a tubular neighborhood N (K) ⊂ M of K and decompose
M \ K = N cpt ∪ (N (K) \ K)
(1.1) and equip a cylindrical metric on N (K) \ K ∼ = [0, ∞) × T K with T K = ∂N (K). We call such a metric a cylindrical adjustment of the given metric g on M , and denote by g 0 the cylindrical adjustment of g on N (K) \ K. An essential analytical reason why we take such an cylindrical adjustment of the metric and its associated kinetic energy Hamiltonian is because it enables us to obtain the horizontal C 0 -estimates for the relevant perturbed Cauchy-Riemann equation with various boundary conditions. To highlight importance and nontriviality of such C 0 -estimates, we collect all the proofs of relevant C 0 -estimates in Part 2. Denote the associated kinetic energy Hamiltonian of g 0 on T * (M \ K) by
H g0 = 1 2 |p| 2 g0 .
(1.
2)
The first main theorem is a construction the following A ∞ functor between two different choices of various data involved in the construction of WF(M \ K; H g0 ).
Theorem 1.1. Let a tubular neighborhood N (K) of K be given. For any two smooth metrics g, g' on M , denote by g 0 , g' 0 the associated cylindrical adjustments thereof as above. Then there exists a natural A ∞ quasi-equivalence Φ gg' : WF(M \ K; H g0 ) → WF(M \ K; H g' 0 ) for any pair g, g' such that Φ gg = id. Furthermore its quasi-isomorphism class depends only on the isotopy type of the knot K, independent of the choice of tubular neighborhoods and other data.
We denote by WF(M \ K)
any such A ∞ category WF(M \ K; H g0 ). Each A ∞ category WF(M \ K) will be constructed on T * (M \ K) using the kinetic energy Hamiltonian H g0 and the Sasakian almost complex structure J g0 on T * (M \ K) of g 0 on M \ K: In general a Sasakian almost complex structure J h associated to the metric h on a Riemannian manifold (N, h) is given by
J h (X) = X , J h (α) = −α (1.3)
under the splitting T (T * N ) T N ⊕ T * N via the Levi-Civita connection of h. Then the proof of Theorem 1.1 is relied on construction of various A ∞ operators, A ∞ functors and A ∞ homotopies in the current context of Floer theory on T * (M \ K). The general strategy of such construction is by now standard in Floer theory. (See, for example, [Se1].) In fact, our construction applies to any arbitrary tame orientable 3-manifold with boundary and similar computational result applies when the 3-manifold admits a complete hyperbolic metric of finite volume. Our construction is given in this generality. (See Theorem 9.3 for the precise statement.) Remark 1.2. In the proofs of Theorems 1.1 and 10.2, we need to construct various A ∞ functors and A ∞ homotopies between them which enter in the invariance proofs. We adopt the definitions of them given in [Lef] for the constructions of the A ∞ functor and the A ∞ homotopy directly using the continuations of either Hamiltonians or of Lagrangians or others. Construction of A ∞ functors appear in the literature in various circumstances, but we could not locate a literature containing geometric construction of an A ∞ homotopy in the sense of [Lef,Se1] for the case of geometric continuations such as Hamiltonian isotopies. (In [FOOO1,FOOO2,F], the notion of A ∞ homotopy is defined via a suspension model, the notion of pseudoisotopy, of the chain complex.) Because of these reasons and for the convenience of readers, we provide full details, in the categorical context, of the construction A ∞ homotopy associated to a continuation of Hamiltonians in Section 8. Our construction of A ∞ functor is the counterpart of the standard Floer continuation equation also applied to the higher m k maps with k ≥ 2. It turns out that actual constructions of the associated A ∞ homotopy as well as of A ∞ functors are rather subtle and require some thought on the correct moduli spaces that enter in definitions of A ∞ functors and homotopies associated to geometric continuations. (See Subsection 7.2 and Section 8 for the definitions of relevant moduli spaces.) We also call readers' attention to Savelyev's relevant construction in the context of ∞-category in [Sa].
1.2. Construction of Knot Floer algebra. For a concrete computation we do in [BKO], we focus on a particular object in this category WF(M \ K) canonically associated to the knot. For given tubular neighborhood N (K) of K, we consider the conormal L = ν * T, T = ∂N (K) and then the wrapped Fukaya algebra of the Lagrangian L in T * (M \ K). We remark that the wrapped Fukaya algebra of ν * T in T * M for a closed manifold M can be described by purely topological data arising from the base space, more specifically that of the space of paths attached to T in M . (See [APS] for some relevant result.) So it is important to consider L as an object for T * (M \ K) to get more interesting knot invariant.
On the other hand, since we restrict the class of our Hamiltonians to that of kinetic energy Hamiltonian H g0 associated to a Riemannian metric g 0 on M \ K, the pair (ν * T, H g0 ) is not a nondegenerate pair but a clean pair in that the set of Hamiltonian chords contains a continuum of constant chords valued at points of T ∼ = T 2 .
We denote by X(L; H g0 ) = X(L, L; H g0 ) the set of Hamiltonian chords of H g0 attached to a Lagrangian submanifold L in general. We have
X(L; H g0 ) = X 0 (L; H g0 ) X >0 (L; H g0 )
where the subindex of X in the right hand side denotes the length of the geodesic associated to the Hamiltonian chords of H g0 . We note that X 0 (L; H g0 ) ∼ = T 2 and the component is clean in the sense of Bott. We take
CW (ν * T, ν * T ; T * (M \ K); H g0 ) = C * (T ) ⊕ Z{X >0 (L; H g0 )}
where C * (T ) is a cochain complex of T , e.g., C * (T ) = Ω * (T ) the de Rham complex, and associate an A ∞ algebra following the construction from [FOOO1].

Theorem 1.3. The A ∞ algebra CW (ν * T, ν * T ; T * (M \ K); H g0 ) can be defined, and its isomorphism class does not depend on the various choices involved such as the tubular neighborhood N (K) and the metric g on M .
Remark 1.4. Due to the presence of Morse-Bott component of constant chords, there are two routes toward construction of wrapped Floer complex, which we denote by CW g (ν * T, T * (M \ K)) : One is to take the model
CW (ν * T, ν * T ; T * (M \ K); H g0 ) = C * (T ) ⊕ Z X >0 (L; H h )
where C * (T ) is a chain complex of T such as the singular chain complex as in [FOOO1] or the de Rham complex, and the other is to take
CW (ν * k T, ν * k T ; T * (M \ K); H g0 ) ∼ = Z Crit k ⊕ Z X >0 (L; H h ) where ν *
k T the fiberwise translation of ν * T by dk, suitably interpolated with ν * T away from the zero section, for a sufficiently C 2 -small compactly supported Morse function k : M \ K → R such that ν * T Image dk. We refer readers to [BKO,Section 2] for the detailed explanation on the latter model.
The isomorphism class of CW g (T, M \ K) independent of g then provides a knot-invariant of K in M for an arbitrary knot K. To emphasize the fact that we regard L = ν * T as an object in the cotangent bundle of a knot complement M \ K, not as one in the full cotangent bundle T * M , we denote the cohomology group of CW g (T, M \ K) as follows.
Definition 1.5 (Knot Floer algebra). We denote the cohomology of CW g (T, M \K) by HW (∂ ∞ (M \ K)) which carries a natural product arising from m 2 map. We call this Knot Floer algebra of K ⊂ M .
By letting the torus converge to the ideal boundary of M \ K, we may regard HW (∂ ∞ (M \ K)) as the wrapped Floer cohomology of the 'ideal boundary' of the hyperbolic manifolds M \ K, which is the origin of the notation ∂ ∞ (M \ K) we are adopting.
In a sequel [BKO], we introduce a reduced version of the A ∞ algebra, denoted by
CW d (∂ ∞ (M \ K))
, by considering the complex generated by non-constant Hamiltonian chords, and prove that for the case of hyperbolic knot K ⊂ M this algebra can be also directly calculated by considering a horo-torus T and the wrapped Floer complex CW (ν * T, T * (M \ K); H h ) of the hyperbolic metric h although h cannot be smoothly extended to the whole manifold M . We also prove a formality result of its A ∞ structure CW (ν * T, T * (M \ K); H h ) for any hyperbolic knot K. The following is the main result we prove in [BKO].
Theorem 1.6 (Theorem 1.6 [BKO]). Suppose K is a hyperbolic knot on M . Then we have an (algebra) isomorphism
HW d (∂ ∞ (M \ K)) ∼ = HW d (ν * T ; H h )
for all integer d ≥ 0. Furthermore the reduced cohomology HW d (∂ ∞ (M \ K)) = 0 for all d ≥ 1.
1.3. C 0 estimates. One crucial new ingredient in the proofs of the above theorems is to establish the horizontal C 0 estimates of solutions of the perturbed Cauchy-Riemann equation
(du − X_H ⊗ β)^{(0,1)}_J = 0    (1.4)
mainly for the kinetic energy Hamiltonian H_{g_0}(q, p) = ½|p|²_{g_0}. For this purpose, we have to require the one-form β to satisfy the 'co-closedness' d(β ∘ j) = 0 in addition to the usual requirement imposed in [AS]. Together with the well-known requirement of sub-closedness for the vertical C⁰ estimates [AS], we need to require β to satisfy
dβ ≤ 0, d(β ∘ j) = 0, i^*β = 0    (1.5)
for the inclusion map i : ∂Σ → Σ. This raises the question of whether such one-forms exist in a way compatible with the gluing process of the relevant moduli spaces. We explicitly construct such a β by pulling back a one-form from a slit domain. This choice of one-form is consistent under the gluing of moduli spaces.
Roughly speaking, these C⁰ estimates guarantee that, under the above preparation, whenever the test objects L_0, ..., L_k are all contained in W_i = T^*N_i, the images of the solutions of the Cauchy-Riemann equation are also contained in W_i, i.e., they do not approach the knot K. We refer to Part 2 for the proofs of the various C⁰ estimates needed for the construction. Furthermore, for the proof of independence of the A∞ category WF_g(M \ K) under the choice of smooth metric g, one needs to construct an A∞ functor between the categories associated to two different choices g, g'. Because the maximum principle applies only in the increasing direction of the associated Hamiltonians, from H_g to H_{g'}, we have to impose the monotonicity condition g ≥ g', or equivalently H_g ≤ H_{g'}.
1.4. Further perspectives. Putting the main results of the present paper in perspective, we may regard our category WF(M \ K) as a version of the partially wrapped Fukaya category on T^*M with the 'stop' given by
Λ ∞ K := ∂ ∞ (T * M )| K ⊂ ∂ ∞ (T * M ) for Λ K = T * M | K .
This is a 3-dimensional coisotropic submanifold of the asymptotic contact boundary
∂_∞(T^*M) = \bigcup_{q ∈ M} ∂_∞(T^*_q M)
of T^*M, which is of dimension 5. In this regard, our construction can be put into the framework of partially wrapped Fukaya categories. The way we avoid this stop is by attaching the cylindrical end to M \ N(K) along its boundary ∂(M \ N(K)) = ∂N(K) and considering the kinetic energy Hamiltonian of a cylindrical adjustment g_0 of a smooth metric g defined on M. See Remark 2.3 for the difference between the behaviors of the two Hamiltonian vector fields: the one put in the neighborhood of our coisotropic stop, and the one associated to the Liouville sectors introduced by Sylvan [Sy] and Ganatra-Pardon-Shende [GPS]. We note that the resulting wrapped Fukaya categories depend on the type of wrapping imposed near the knot K ⊂ M, as usual for general partially wrapped Fukaya categories depending on the wrapping around the stop. We would also like to compare our approach with that of Ekholm-Ng-Shende in the arXiv version of [ENS, §6.5]. They considered a wrapped Fukaya category on the Weinstein manifold, denoted by W_K, associated to K, which is obtained by attaching a punctured handle to DT^*R^3 along the unit conormal bundle of the knot and altering the Liouville vector field of T^*M along ν^*K. Our approach, directly working with T^*(M \ K), is well-adapted to the metric structures on the base manifold and will be important for our later purpose of studying hyperbolic knots in the sequel [BKO].
A similar construction of A ∞ algebra for the conormal ν * T of T as above can be carried out for the 'conormal' ν * N cpt of N cpt := N \ N (K) or the micro-support of the characteristic function χ N cpt , which is given by
ν^*N_{cpt} = o_{N_{cpt}} ∪ ν^*_+(∂N_{cpt})
where o N cpt is the zero section restricted to N cpt and T = ∂N cpt is equipped with boundary orientation of N cpt . Combined with the construction of the wrapped version of the natural restriction morphisms constructed in [Oh3], this construction would give rise to A ∞ morphisms ν * N i → ν * T and a natural A ∞ functor
ν^*N_1 → ν^*N_2
  ↓            ↓
ν^*T_1 → ν^*T_2        (1.6)
where N_i = M \ N_i(K) and T_i = ∂N_i for two tubular neighborhoods N_1(K) ⊂ N_2(K) of K, i = 1, 2.
Here L denotes the Yoneda image of a Lagrangian L in general. (Since we will not use this construction in this series, we leave further discussion elsewhere so as not to further lengthen the paper.) On the other hand, it is interesting to see that, given an exhaustion sequence N_1 ⊂ N_2 ⊂ ··· ⊂ N_i ⊂ ···, the union o_M ∪ ν^*K is a limit of the above-mentioned conormals ν^*N_i as i → ∞ in T^*(M \ K) in the Gromov-Hausdorff sense. Furthermore, the canonical smoothing of ν^*N_i given in [KO, Theorem 2.3] can be made to converge to the S¹-family Lagrangian surgery of o_M ∪ ν^*K, denoted by M_K in [AENV], [ENS], which consider the case M = S³. While the construction of the Lagrangian M_K requires some geometric restriction on K, such as being fibered (see [AENV, Lemma 6.12]), the construction of an exact Lagrangian smoothing of ν^*K can be done for an arbitrary knot K. It would be interesting to see what the ramifications of this observation will be.
1.5. Conventions. In the literature on symplectic geometry, Hamiltonian dynamics, contact geometry and the physics literature, there are various conventions used which are different from one another one way or the other. Because many things considered in the present paper such as the energy estimates, the C 0 estimates applying the maximum principle and construction of the Floer continuation map depend on the choice of various conventions, we highlight the essential components of our convention that affect their validity.
The major differences between different conventions in the literature lie in the choice of the following three definitions:
• Definition of the Hamiltonian vector field: On a symplectic manifold (P, ω), the Hamiltonian vector field associated to a function H is given by the formula ω(X_H, ·) = dH (resp. ω(X_H, ·) = −dH).
• Compatible almost complex structure: In both conventions, J is compatible with ω if the bilinear form ω(·, J·) is positive definite.
• Canonical symplectic form: On the cotangent bundle T^*N, the canonical symplectic form is given by ω_0 = dq ∧ dp (resp. dp ∧ dq).
In addition, we would like to take
\frac{∂u}{∂τ} + J\Big(\frac{∂u}{∂t} − X_H(u)\Big) = 0    (1.7)
as our basic perturbed Cauchy-Riemann equation on the strip. Since we work with the cohomological version of Floer complex, we would like to regard this equation as the positive gradient flow of an action functional A H as in [FOOO2]. This, under our convention laid out above, leads us to our choice of the action functional associated to Hamiltonian H on T * N given by
\mathcal{A}_H(γ) = -\int γ^*θ + \int_0^1 H(t, γ(t))\, dt,
which is the negative of the classical action functional. With this definition, Floer's continuation map is defined for the homotopy of Hamiltonian s → H s for which the inequality ∂H s ∂s ≥ 0 is satisfied, i.e., in the direction for which the Hamiltonian is increasing. (See Section 7 for the relevant discussion.)
For the kinetic energy Hamiltonian H = H g (x), we have
A H (γ c ) = −E g (c) (1.8)
where γ c is the Hamiltonian chord associated to the geodesic c and E g (c) is the energy of c with respect to the metric g.
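As a quick sanity check of (1.8) under the above conventions (a sketch, assuming the chord γ_c is the lift γ_c(t) = (c(t), ċ(t)^♭) of the geodesic c and taking the standard energy E_g(c) = \frac{1}{2}\int_0^1 |\dot c|^2_g\, dt):
\[
\mathcal{A}_H(γ_c) = -\int_0^1 θ(\dotγ_c)\,dt + \int_0^1 H(γ_c(t))\,dt
= -\int_0^1 |\dot c|^2_g\,dt + \int_0^1 \tfrac{1}{2}|\dot c|^2_g\,dt
= -\tfrac{1}{2}\int_0^1 |\dot c|^2_g\,dt = -E_g(c),
\]
using θ(\dotγ_c) = ⟨\dot c^♭, \dot c⟩ = |\dot c|^2_g and H(q, p) = \tfrac{1}{2}|p|^2_g.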
Acknowledgement: Y. Bae thanks Research Institute for Mathematical Sciences, Kyoto University for its warm hospitality. Y.-G. Oh thanks H. Tanaka for his interest in the present work and explanation of some relevance of Savelyev's work [Sa] to our construction of A ∞ homotopy associated to the homotopy of Floer continuation maps.
2. Geometric preliminaries
In this section, we consider the cotangent bundle W = T * N with the canonical symplectic form
ω_0 = \sum_{i=1}^{n} dq_i ∧ dp_i, which is nothing but ω_0 = −dθ,
where θ is the Liouville one-form θ = \sum_i p_i\, dq_i. (Our convention for the canonical symplectic form on the cotangent bundle is different from that of [Se2].) Then the radial vector field
Z = \sum_{i=1}^{n} p_i \frac{∂}{∂p_i}    (2.1)
satisfies
Z ⌟ ω_0 = −θ, \qquad \mathcal{L}_Z ω_0 = ω_0    (2.2)
In particular, its flow φ^t satisfies
(φ^t)^* ω_0 = e^t ω_0.    (2.3)
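For the reader's convenience, here is a one-line verification of (2.2) and (2.3) (a routine check with the conventions above):
\[
ι_Z ω_0 = \sum_i \big(dq_i(Z)\,dp_i − dp_i(Z)\,dq_i\big) = -\sum_i p_i\, dq_i = -θ,
\qquad
\mathcal{L}_Z ω_0 = d(ι_Z ω_0) + ι_Z(dω_0) = d(-θ) = ω_0,
\]
and integrating \frac{d}{dt}(φ^t)^*ω_0 = (φ^t)^*\mathcal{L}_Z ω_0 = (φ^t)^*ω_0 from t = 0 gives (φ^t)^*ω_0 = e^t ω_0.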
Therefore T^*N is convex at infinity in the sense of [EG]. Let us consider a Riemannian metric g_0 on N. We denote by
♯ : T^*N → TN, \qquad ♭ : TN → T^*N
the 'raising' and the 'lowering' operations associated to the metric g_0.
the 'raising' and the 'lowering' operations associated to the metric g 0 . Then we also equip T * N with the metric g 0 = g 0 ⊕ g 0 with respect to the splitting
T (T * N ) = T N ⊕ T * N
induced from the Levi-Civita connection of g 0 on N .
2.1. Tame manifolds and cylindrical adjustments. Let us consider a noncompact tame 3-manifold N, which means that there exists an exhaustion {N_i}, a sequence of compact manifolds N_i with \bigcup_i N_i = N, such that
N_1 ⊂ int(N_2) ⊂ N_2 ⊂ ··· ⊂ N_i ⊂ int(N_{i+1}) ⊂ ···    (2.4)
and each N_{i+1} \ int(N_i) is homeomorphic to ∂N_i × [0, 1]. A typical example of a tame manifold is a knot complement N = M \ K with an ambient closed 3-manifold M. In this case, we can take an exhaustion by simply choosing a sequence of nested tubular neighborhoods of a knot K.
Now we focus on the knot complement, i.e. ∂N cpt is a 2-dimensional torus T 2 , specially denoted by T i . Although most of statements in this article also work for arbitrary tame 3-manifolds in general, the results in the sequel [BKO] requires the torus boundary condition of N cpt in order to exploit hyperbolic geometry using complete hyperbolic metric of finite volume.
We can also consider a more general Riemannian metric g 0 of N possibly incomplete. For example, if we consider a knot complement N = M \ K, a natural choice of such a metric is a restriction of a smooth metric g = g M of a closed ambient manifold M .
Definition 2.1 (Cylindrical adjustment). We define the cylindrical adjustment g 0 of the metric g on M with respect to the exhaustion (4.1) by
g_0 = \begin{cases} g & \text{on } N_i \\ da^2 ⊕ g|_{∂N_i} & \text{on } N \setminus N_i' \end{cases}    (2.5)
for some i, suitably interpolated on N_i' \ N_i, which is fixed.
Here a is the coordinate on [0, +∞) for the following decomposition
N = N_{cpt} ∪_T (T × [0, +∞)).
We call the metric g 0 an asymptotically cylindrical adjustment of g on N , which are all complete Riemannian metrics of N . We denote N end = T × [0, +∞) in the above decomposition equipped with the cylindrical metric g| Ti ⊕ da 2 . We remark that the following property of asymptotically cylindrical adjustments g 0 of smooth metrics g on M restricted to M \ K will be important in Section 9 later.
Proposition 2.2. Suppose there is given an exhaustion (2.4). For any choice of two smooth Riemannian metrics g and g' defined on M, the corresponding cylindrical adjustments g_0 and g_0' are Lipschitz equivalent, i.e., there is a constant C = C(g, g') ≥ 1 such that
\frac{1}{C}\, g_0' ≤ g_0 ≤ C\, g_0' \quad \text{on } M \setminus K.
Proof. Let N(K) be a tubular neighborhood of K. By choosing a sufficiently large i_0, we may assume M \ N_{i_0} ⊂ N(K) and that both g_0, g_0' are cylindrical outside N_{i_0}. Using the normal exponential map of K, we can parameterize the tubular neighborhood N(K) of K by (r, θ, ϕ), where ϕ ∈ S¹ parameterizes the knot K and (r, θ) are polar coordinates of N_x K at x ∈ K for the metric g. Similarly we denote by (r', θ', ϕ') the coordinates associated to g'.
More explicitly, we take a normal frame {X, Y} along K. We denote by exp^{g,⊥} the normal exponential map along K of the metric g and by exp^{g',⊥} that of g'. Using this frame, we have an embedding ι_g : D²(ε_0) × S¹ → M defined by
ι_g(r, θ, ϕ) = exp^{g,⊥}_{x(ϕ)}\big(r(\cos θ\, X(x(ϕ)) + \sin θ\, Y(x(ϕ)))\big)
and ι_{g'} in a similar way. Then the composition map
ι_{g'}^{-1} ∘ ι_g : D²(ε_1) × S¹ → D²(ε_0) × S¹
is well-defined and smooth if we choose a smaller ε_1 = ε_1(g, g') whose size depends only on g, g'. In particular, we have
\| D(ι_{g'}^{-1} ∘ ι_g) − id \|_{C^1} < C = C(g, g')
on D²(ε_1) × S¹ for some constant C > 0. This in particular proves
\sup\Big\{ \frac{g'(v, v)}{g(v, v)} \,\Big|\, v ∈ T(M)|_{N(K)},\ |v|_g = 1 \Big\} < C' = C'(g, g').
We recall that the cylindrical adjustment g_0 of g is given by
g_0 \sim \begin{cases} e^{-2a}\, da^2 + e^{-2a}\, dθ^2 + dϕ^2 & \text{for } 0 ≤ r < ε_0 \\ ε_1^2\,(da^2 + dθ^2) + dϕ^2 & \text{for } a ≥ i \end{cases}
in these coordinates, with the coordinate change r = e^{-a}, ε_0 = e^{-i+1/2} and ε_1 = e^{-i}. Exactly the same formula holds for the cylindrical adjustment g_0' in the coordinates (a', θ', ϕ'), those with primes. In particular, all the metric coefficients appearing in the formulae are exactly the same for both adjustments. This proves
\sup\Big\{ \frac{g_0'(v, v)}{g_0(v, v)} \,\Big|\, v ∈ T(M \setminus K)|_{ι_g^{-1}(N(K)\setminus K)},\ |v|_{g_0} = 1 \Big\} < C' = C'(g, g')
for all i, if we choose the tubular neighborhood N(K) of K sufficiently small. This finishes the proof.
Remark 2.3.
(1) It may be worthwhile to examine the behavior of Hamiltonian vector field X Hg 0 (q, p) on M \ K as q approach to K. For this purpose, let (r, θ, ϕ) be the coordinate system on N (K). The metric g can be written as g = dr 2 + r 2 dϕ 2 + dθ 2 + o(r 2 ).
Define a cylindrical adjustment with respect to the coordinate (a, θ, ϕ) with r = e −a for a ∈ [0, ∞). Ignoring o(r 2 ), the associate Hamiltonian of the cylindrical adjustment (for r = 1) is given by
H_{g_0} = \frac{1}{2}\big(p_a^2 + p_ϕ^2 + p_θ^2\big) = \frac{1}{2}\big(r^2 p_r^2 + p_ϕ^2 + p_θ^2\big).
Therefore
X_{H_{g_0}} = p_a \frac{∂}{∂a} + p_ϕ \frac{∂}{∂ϕ} + p_θ \frac{∂}{∂θ} = -r p_r^2 \frac{∂}{∂p_r} + r^2 p_r \frac{∂}{∂r} + p_ϕ \frac{∂}{∂ϕ} + p_θ \frac{∂}{∂θ}.
We highlight the last two summands, which make the associated Hamiltonian flow rotate around the torus with higher and higher speed as |p_ϕ|² + |p_θ|² → ∞ near the knot K. This asymptotic behavior is different from that of the Hamiltonian vector field used in defining the partially wrapped Fukaya category whose stop is given by the Liouville sector T^*(∂N(K)) ⊂ T^*(M \ N(K)), for which the horizontal component is parallel to the radial vector field ∂/∂r.
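The coordinate change used above can be checked as follows (a short computation, using the canonical identification of momenta under r = e^{-a}):
\[
p_a\, da = p_r\, dr, \quad dr = -e^{-a}\,da = -r\,da \ \Longrightarrow\ p_a = -r\,p_r, \quad p_a^2 = r^2 p_r^2,
\]
so that H_{g_0} = \tfrac{1}{2}(p_a^2 + p_ϕ^2 + p_θ^2) = \tfrac{1}{2}(r^2 p_r^2 + p_ϕ^2 + p_θ^2), and with the convention ι_{X_H}ω_0 = dH, ω_0 = dq ∧ dp,
\[
\dot r = \frac{∂H_{g_0}}{∂p_r} = r^2 p_r, \qquad \dot p_r = -\frac{∂H_{g_0}}{∂r} = -r\, p_r^2,
\]
which recovers the first two summands of X_{H_{g_0}} in the (r, p_r) coordinates.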
(2) The above discussion and construction of wrapped Fukaya category can be extended to a general tame manifold N whose end may have more than one connected component, as long as we fix a Lipschitz-equivalence class of metrics on N . An example of such N is the complement M \ L where L is a link.
2.2. Kinetic energy Hamiltonian and almost complex structures. We now fix a complete Riemannian manifold with cylindrical end. We denote the resulting Riemannian manifold by (N, g).
Definition 2.4. We denote by H = H(T * N ) the space of smooth functions H :
T^*N → R that satisfy
H(q, p) = \frac{1}{2}|p|^2_{\tilde g} \quad \text{on } \{(q, p) : q ∈ N,\ |p|_{\tilde g} ≥ R\} ∪ T^*N_{end}
for a sufficiently large R > 0, where \tilde g is the dual metric of g. We call such a function on T^*N an admissible Hamiltonian with respect to the metric g. The Hamiltonian vector field X_H on T^*N is defined so as to satisfy
X_H ⌟ ω = dH
for a given H ∈ \mathcal{H}.
We next describe the set of adapted almost complex structures. From now on, we use g instead of g , when there is no danger of confusion. For each given R > 0, the level set
Y R := {|p| g = R} = H −1 (R 2 /2)
is a hypersurface of contact type in T * N , and the domain
W R := {|p| g ≤ R}
becomes a Liouville domain in the sense of [Se2]. For each R > 0, the inclusion i_R : Y_R → T^*N induces a contact form λ_R := −i_R^*θ by restriction. We denote the associated contact distribution by
ξ R = ker λ R ⊂ T Y R .
Note that the corresponding Reeb flow is nothing but a reparametrization of the g-geodesic flow on Y R .
If we denote by (r, y) the cylindrical coordinates given by r = |p|_g, we have
T^*N \setminus \{0_N\} ≅ ST^*N × R_+.
Including the zero section, we have decomposition
T * N ∼ = W 1 ∪ Y1 [0, ∞) × Y 1 .
To highlight the metric dependence, we use Y g , λ g , ξ g instead of Y 1 , λ 1 , ξ R , respectively. On T * N \ 0 N with metric g, we have the natural splitting
T(T^*N \setminus 0_N) ≅ R·\frac{∂}{∂r} ⊕ TY_g ≅ \mathrm{Span}\Big\{X_g, \frac{∂}{∂r}\Big\} ⊕ ξ_g ≅ R^2 ⊕ ξ_g,
where X_g is the vector field which generates the g-geodesic flow on Y_g. We recall that we have a canonical almost complex structure J_g on T^*N associated to a metric on N, defined by
J_g(X) = X^♭, \qquad J_g(α) = -α^♯    (2.6)
in terms of the splitting T(T^*N) = TN ⊕ T^*N with respect to the Levi-Civita connection of g. We call this J_g the Sasakian almost complex structure on T^*N.
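As a quick check that (2.6) indeed defines an almost complex structure compatible with ω_0 (a sketch; we use that, in the connection splitting, ω_0(X ⊕ α, Y ⊕ β) = ⟨β, X⟩ − ⟨α, Y⟩ for ω_0 = dq ∧ dp):
\[
J_g^2(X ⊕ α) = J_g\big((-α^♯) ⊕ X^♭\big) = \big(-(X^♭)^♯\big) ⊕ \big(-(α^♯)^♭\big) = -(X ⊕ α),
\]
\[
ω_0\big(X ⊕ α,\ J_g(X ⊕ α)\big) = ⟨X^♭, X⟩ - ⟨α, -α^♯⟩ = |X|^2_g + |α|^2_{\tilde g} > 0 \quad \text{for } X ⊕ α ≠ 0.
\]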
Definition 2.5 (Admissible almost complex structure). We say an almost complex structure J on T^*N is g-admissible if J = J_g on T^*N_{end} and on {(q, p) | |p|_g ≥ R}, where J_g is the Sasakian almost complex structure associated to g, for the metric decomposition N = N_{cpt} ∪ N_{end} with N_{end} ≅ [0, ∞) × T and T = ∂N_{cpt}.
Denote by J g the set of g-admissible almost complex structures.
All admissible almost complex structures satisfy the following important property which enters in the study of Floer theoretic construction on Liouville manifolds in general.
Definition 2.6. Let H g be the energy Hamiltonian on T * N associated metric g on the base N given above. An almost complex structure J on T * N is called contact type if it satisfies (−θ) • J = dH g .
Remark 2.7. Appearance of the negative sign here is because our convention of the canonical symplectic form on the cotangent bundle is ω 0 = −dθ. Compared with the convention of [AS], our choice of the one-form λ therein is λ = −θ.
3. A choice of one-form β on Σ
In Abouzaid-Seidel's construction of wrapped Floer cohomology given in [AS, Section 3.7], [A1], they start from a compact Liouville domain with contact type boundary and consider the perturbed Cauchy-Riemann equation of the type
(du − X H ⊗ β) (0,1) J = 0.
(3.1)
Because of the compactness assumption, their setting does not directly apply to our current cotangent bundle T * N where the tame base manifold N is noncompact: To perform analytical study of the relevant moduli spaces, the first step is to establish suitable C 0 -estimate.
3.1. Co-closed and sub-closed one-forms. We first recall Abouzaid-Seidel's construction of what they call a sub-closed one-form in the context of Liouville domain with compact contact-type boundary. For each given k ≥ 1, let us consider a Riemann surface (Σ, j) of genus zero with (k + 1)-ends. This is isomorphic to the closed unit disk D 2 minus k + 1 boundary points z = {z 0 , . . . , z k } in the counterclockwise way. Each end admits a holomorphic embedding
ε_0 : Z_+ := \{τ ≥ 0\} × [0, 1] → Σ; \qquad ε_i : Z_- := \{τ ≤ 0\} × [0, 1] → Σ, \ \text{for } i = 1, \ldots, k,
preserving the boundaries and satisfying \lim_{τ→±∞} ε_ℓ = z_ℓ for ℓ = 0, \ldots, k. We call the distinguished point at infinity z_0 a root. For a given weight w = \{w_0, \ldots, w_k\} satisfying
w 0 = w 1 + · · · + w k ,(3.2)
a total sub-closed one-form β ∈ Ω¹(Σ) is considered in [A2] satisfying
dβ ≤ 0; \qquad (ε_ℓ)^*β = w_ℓ\, dt.
This requirement is put to establish both (geometric) energy bound and (vertical) C 0 -bound. It turns out that in our case of T * (M \ K) where the base is noncompact, subclosedness of β is not enough to control the behavior of the Floer moduli space of solutions of (3.1): we need the following restriction
d(β • j) = 0 (3.3)
in addition to the sub-closedness. We refer readers to Lemma 11.1 in Subsection 11 for the reason why such a condition is needed.
3.2. Construction of one-forms β. For each given k ≥ 1 equipped with weight datum w = {w 0 , . . . , w k } satisfying the balancing condition (3.2), we will choose a one-form β on Σ that satisfies
dβ ≤ 0;
i^*β = 0 for the inclusion map i : ∂Σ → Σ;
dβ = 0 near ∂Σ;
d(β ∘ j) = 0;
(ε_j)^*β = w_j\, dt on a subset of Z_± where ±τ ≫ 0.    (3.4)
For this purpose, we first consider the slit domain representation of the conformal structure (Σ, j) whose explanation is in order. Consider domains
Z 1 = {τ + √ −1 t ∈ C | τ ∈ R, t ∈ [w 0 − w 1 , w 0 ]}; Z 2 = {τ + √ −1 t ∈ C | τ ∈ R, t ∈ [w 0 − w 1 − w 2 , w 0 − w 1 ]};
. . .
Z k = {τ + √ −1 t ∈ C | τ ∈ R, t ∈ [0, w k ]},
and its gluing along the inclusions of the following rays
R_ℓ = \{τ + \sqrt{-1}\, t ∈ C \mid τ ≥ s_ℓ,\ t = w_0 − (w_1 + \cdots + w_ℓ)\}; \qquad j_ℓ^- : R_ℓ → Z_ℓ, \quad j_ℓ^+ : R_ℓ → Z_{ℓ+1},
for some s_ℓ ∈ R, ℓ = 1, \ldots, k−1. In other words, for a collection s = \{s_1, \ldots, s_{k−1}\}, the glued domain becomes
Z_w(s) = \big(Z_1 ⊔ Z_2 ⊔ \cdots ⊔ Z_k\big)/\!\sim.
Here, for (ζ, ζ') ∈ Z_ℓ × Z_{ℓ+1}, ζ ∼ ζ' means that there exists r ∈ R_ℓ such that ζ = j_ℓ^-(r), ζ' = j_ℓ^+(r). We may regard Z_w(s) as the domain
Z_0 := \{τ + \sqrt{-1}\, t ∈ C \mid τ ∈ R,\ t ∈ [0, w_0]\} with (k−1)-slits S_ℓ = \{τ + \sqrt{-1}\, t ∈ C \mid τ ≤ s_ℓ,\ t = w_0 − (w_1 + \cdots + w_ℓ)\}, where ℓ = 1, \ldots, k−1.
Then there is a conformal mapping
ϕ : Σ = D 2 \ {z 0 , . . . , z k } → Z w (s)
with respect to s satisfying that
(1) Let ∂Σ_ℓ be the connected boundary component of D² \ {z_0, \ldots, z_k} between z_ℓ and z_{ℓ+1}, for ℓ ∈ Z_{k+1}. Then
ϕ(∂Σ_0) = w_0\sqrt{-1} + R; \qquad ϕ(∂Σ_ℓ) = S_ℓ \ \text{for } ℓ = 1, \ldots, k−1; \qquad ϕ(∂Σ_k) = R.
(2) A restriction ϕ|Σ :Σ →Z 0 is a conformal diffeomorphism.
(3) The following asymptotic conditions hold:
\lim_{τ→−∞} ϕ^{-1}\Big(\{τ\} × \big(w_0 − \sum_{j=1}^{ℓ} w_j,\ w_0 − \sum_{j=1}^{ℓ−1} w_j\big)\Big) = z_ℓ \quad \text{for } ℓ = 1, \ldots, k; \qquad \lim_{τ→+∞} ϕ^{-1}\big(\{τ\} × (0, w_0)\big) = z_0.
Let us denote such a slit domain by Z w (s) or simply Z if there is no confusion. Now we consider dt ∈ Ω 1 (Z) and we define the one-form β
(Figure: a slit domain Z_w(s) with weights w_0, w_1, w_2, w_3, w_4.)
Lemma 3.1. Define β = ϕ^* dt ∈ Ω¹(Σ).    (3.5)
Then β satisfies all the requirements in (3.4).
Proof. Observe that the holomorphic embedding provides a natural strip-like representation at each puncture. More precisely, we have
ϕ ∘ ε_0 : Z_+ → Z : (τ, t) ↦ (τ + K, w_0 t); \qquad ϕ ∘ ε_ℓ : Z_- → Z : (τ, t) ↦ \big(τ − K,\ w_ℓ t + w_0 − \sum_{j=1}^{ℓ} w_j\big) \ \text{for } ℓ = 1, \ldots, k,
for some K ∈ R_+ large enough. So it is direct to check that (ε_j)^*β = w_j\, dt and the other conditions in (3.4). In fact, this form satisfies the stronger condition of being closed rather than merely sub-closed. Furthermore, since ϕ is holomorphic, we compute
β ∘ j = (ϕ^* dt) ∘ j = dt ∘ dϕ ∘ j = dt ∘ j ∘ dϕ = −dτ ∘ dϕ = −d(τ ∘ ϕ),
which is obviously closed. Therefore we have proved d(β ∘ j) = 0. This finishes the proof.
Let us wrap up this section by recalling the gluing result of the slit domains.
Proposition 3.2. Let k 1 , k 2 be positive integers, and let u = (u 0 , . . . , u k1 ) and v = (v 0 , . . . , v k2 ) be weight datum satisfying the balancing condition and u 0 = v i for some i ≥ 1. Then there is a one-parameter gluing of Z(u) and Z(v) which become a slit domain of
w = u# i v := (v 0 , . . . , v i−1 , u 1 , . . . , u k1 , v i+1 , . . . , v k2 ).
Part 1. Construction of a wrapped Fukaya category for M \ K
In this part, we carry our construction of our wrapped Fukaya category of the knot complement M \ K.
The standard Liouville structure given by the vector field
Z_λ = p\,\frac{∂}{∂p} = \sum_{i=1}^{n} p_i \frac{∂}{∂p_i}, \qquad n = \dim N,
has noncompact contact type boundary ∂(DT^*_{≤1}N) for N = M \ K.
Because of this, construction of wrapped Floer cohomology for the sequence of conormal bundles of the type ν * T with closed submanifold T ⊂ N meets a new obstruction arising from the noncompactness of N : One must examine the horizontal C 0 bound away from the 'infinity' in N .
Remark 3.3. For the construction of a wrapped Fukaya category WF(T * N ), we do not need to restrict ourselves to the case of knot complement but can consider the general context of tame 3-manifolds M with several asymptotic boundaries. Since we will not use this general construction, we do not pursue it in this paper.
Admissible Lagrangians and associated wrapped Floer complexes
We first describe the set of admissible Lagrangian submanifolds in such a manifold N for the set of objects of WF(M \ K). Since the base manifold N is noncompact, we need to restrict the class of Lagrangian manifolds as follows:
Definition 4.1. A Lagrangian submanifold L ⊂ (W, ω 0 ) is called admissible if
(1) L is exact and embedded;
(2) Its image under the projection π : W → N is compact in M \ K.
(3) The relative first Chern class 2c 1 (W, L) vanishes on H 2 (W, L) and the second Stiefel-Whitney class w 2 (L) vanishes.
(4) Under the decomposition N ∼ = N cpt ∪ ([0, ∞) × T 2 ) and the associated cylindrical adjustment g 0 of g, there exists R > 0 such that L ∩ {|p| g0 ≥ R} = Λ R · [R, +∞) where Λ R = L ∩ {|p| g0 = R} given as in (2.5).
For the purpose of controlling the horizontal C 0 estimates of solutions of (3.1), we fix a compact exhaustion sequence
N 1 ⊂ N 2 ⊂ · · · ⊂ N i ⊂ · · · be of N = M \ K.
We also take another exhaustion N_1' ⊂ N_2' ⊂ ··· ⊂ N_i' ⊂ ··· such that
N_1 ⊂ N_1' ⊂ N_2 ⊂ N_2' ⊂ ··· ⊂ N_i ⊂ N_i' ⊂ ··· .
(4.1)
We mostly denote any such N_i by N_{cpt} when we do not need to specify the subindex. We let
W_1 ⊂ W_2 ⊂ ··· ⊂ W_i ⊂ ···, \qquad W_i = T^*N_i,
be the exhaustion T^*(M \ K) = \bigcup_{i=1}^{∞} W_i induced by (4.1). We have
T^*N = T^*\big(N_i ∪ ([0, ∞) × ∂N_i)\big) = W_i ∪ T^*N_i^{end}.
(4.2) (See Section B.1 for the implication of the condition (4).) For any two given admissible Lagrangian L 0 and L 1 , they are contained inside W i for some i. We consider a Hamiltonian H from H and define the set
X(H; L 0 , L 1 ) = {x : [0, 1] → W |ẋ(t) = X H (x(t)), x(0) ∈ L 0 , x(1) ∈ L 1 } (4.3)
of time-one Hamiltonian chords of H. It can be decomposed into
X(H; L_0, L_1) = \bigsqcup_i X_i(H; L_0, L_1),
where X_k(H; L_0, L_1) is the set of Reeb chords of degree k. By the bumpy metric theorem [Ab] (or rather its version with free boundary condition), we may assume that all Hamiltonian chords of the Hamiltonian H ∈ \mathcal{H}_g are nondegenerate by considering a generic metric g, as long as we consider a countable family of Lagrangian submanifolds.
Proposition 4.2. Consider a decomposition N = N cpt ∪ N end as above assume that N end is equipped with the cylindrical metric da 2 ⊕g| ∂N end . Denote W = T * N cpt and let T * N = W ∪ T * N end be the associated decomposition. Suppose L 0 , L 1 ⊂ W . Then for all x ∈ X(H; L 0 , L 1 ), we have Im x ⊂ W .
Proof. We recall that the geodesic underlying x satisfies the second order ODE ∇_t\dot x = 0. Since the metric on N_{end} is the product metric da² ⊕ g|_{∂N_{end}}, the geodesic equation decouples and the a-component of x in N_{end} satisfies d²a/dt² = 0. Applying the (easy) maximum principle to a, we see that a cannot attain an interior maximum at a point of N_{end}. Since x(0), x(1) ∈ W, this finishes the proof.
We take the action functional
\mathcal{A}(γ) = -\int_0^1 γ^*θ + \int_0^1 H(γ(t))\, dt + f_1(γ(1)) − f_0(γ(0))    (4.4)
on the path space Ω(L_0, L_1) = \{γ : [0, 1] → T^*N \mid γ(0) ∈ L_0,\ γ(1) ∈ L_1\}.
We alert the readers that this is the negative of the classical action functional.
We assign a Maslov index µ(x) to each chord x. See Section B.2. Consider the graded module
CW(L_1, L_0; H) = \bigoplus_i CW^i(L_1, L_0; H); \qquad CW^k(L_1, L_0; H) = \bigoplus_{μ(x) = k} Z⟨x⟩
for H ∈ H g and x ∈ X k (H; L 0 , L 1 ). We would like to define a boundary map m 1 on this module. Remark 4.3. Here we adopt the notation CW (L , L; H) for the complex associated to the geometric path space and to the set of Hamiltonian chords from L to L which we denote by Ω(L, L ) and X(H; L, L ) respectively, following the notational convention of [FOOO1,FOOO2] in that CW (L , L; H) is the cohomological complex, not the homological one.
We take a t-dependent family of almost complex structures {J t } t∈[0,1] contained in admissible almost complex structures defined in Definition 2.5. For each given pair x 0 , x 1 of Hamiltonian chords in CW (L 1 , L 0 ; H) for some τ 0 ∈ R + , we consider the moduli space M(x 0 ; x 1 ) which consist of maps u : R × [0, 1] → W satisfying the following perturbed J-holomorphic equation with the boundary and the asymptotic conditions given by
∂_τ u + J_t(∂_t u − X_H(u)) = 0,
u(R × \{1\}) ⊂ L_0, \quad u(R × \{0\}) ⊂ L_1,
u(−∞, t) = x_1(t), \quad u(+∞, t) = x_0(t).
(4.5)
Remark 4.4.
(1) In the point of view of [Oh1], we may fix the Hamiltonian H and J and perturb the boundary Lagrangians to achieve this kind of transversality result. This is the strategy that we adopt in the present paper and its sequel [BKO].
(2) Note that (4.5) is action increasing as τ → ∞ for the action functional (4.4). On the other hand we put the output at τ = +∞ and the input at τ = −∞. Combination of these two implies that our Floer complex is the cohomological version.
For a generic choice of almost complex structures, the moduli space
M(x 0 ; x 1 ) is a manifold of dimension µ(x 0 ) − µ(x 1 ).
It admits a free R-action as long as x_0 ≠ x_1. We write M(x_0; x_1) for the quotient space. Note that M(x_0; x_1) is a set of oriented points when μ(x_0) − μ(x_1) − 1 = 0.
Here the sign of the rigid solution is determined by the sign of the induced isomorphism
o(x 1 ) → o(x 0 ),
where o(x i ) is an orientation space associated to each Hamiltonian chord which is described in Section B.2. Then the differential m 1 :
CW^i(L_1, L_0) → CW^{i+1}(L_1, L_0) is defined by counting rigid solutions of M(x_0; x_1),
m^1(x_1) = \sum_{μ(x_0) = μ(x_1) + 1} (−1)^{μ(x_1)}\, \#M(x_0; x_1)\, x_0,
where # denotes the signed sum.
For readers' convenience, let us recall from [A2] a canonical isomorphism between the two wrapped Floer complexes before and after conformal fiberwise rescaling. We denote by ψ^λ the time-\log(λ) Liouville flow, which can be expressed by ψ^s(r, y) = (sr, y) with respect to the cylindrical coordinates discussed in Section 2.2. For simplicity of notation, when a pair (L_0, L_1) and the Hamiltonian H are given, we just write CW^*(ψ(L_1), ψ(L_0)) for any of CW^*\big(ψ(L_1), ψ(L_0); ω, \tfrac{1}{w^2} H ∘ ψ, ψ^*J_t\big) for ψ = ψ^w.
Lemma 4.5. Let g 0 be the metric on N as above. Denote by H the kinetic energy Hamiltonian on W of the metric g 0 on N . Let ψ : W → W be a conformally symplectic diffeomorphism satisfying ψ * ω = w 2 ω for some non-zero constant w. Then there is a canonical isomorphism CW (ψ) : CW * (L 1 , L 0 ) ∼ = CW * (ψ(L 1 ), ψ(L 0 )).
Floer data for A ∞ structure
Now consider a (k + 1)-tuple of admissible Lagrangian
L = (L 0 , . . . , L k ) inside W i ⊂ W . Pick a (k + 1)-tuple of Hamiltonian chords (x 0 ; x) with x = (x 1 , . . . , x k ) where x 0 ∈ X(H; L 0 , L k ); x i ∈ X(H; L i−1 , L i ) for i = 1, . . . , k. (5.1)
Recall that the Riemann surface (Σ, j) ∈ \mathcal{M}_{k+1} is a genus zero disk with (k + 1) boundary points z = \{z_0, \ldots, z_k\} removed, ordered counter-clockwise. Each end near z_ℓ is equipped with a holomorphic embedding
ε_0 : Z_+ → Σ, \qquad ε_i : Z_- → Σ \ \text{for } i = 1, \ldots, k    (5.2)
satisfying the boundary and the asymptotic conditions. We take these choices so that they become consistent over the universal family
S k+1 := {(s, Σ, j) | s ∈ Σ, (Σ, j) ∈ M k+1 } → M k+1
and its compactification S k+1 → M k+1 as in [AS]. Usage of universal family will enter in a more significant way when we consider construction of A ∞ homotopy later.
Definition 5.1 (Floer data for A ∞ map). A Floer datum D m = D m (Σ, j) on a stable disk (Σ, j) ∈ M k+1 is the following:
(1) Weights: A (k + 1)-tuple of positive real numbers w = (w 0 , . . . , w k ) which is assigned to the end points z satisfying
w 0 = w 1 + · · · + w k .
(2) One-form: β ∈ Ω¹(Σ) constructed in Section 3.2 such that (ε_j)^*β agrees with w_j\, dt.
(3) Hamiltonian: A map H : Σ → \mathcal{H}_i whose pull-back under ε_j uniformly converges to \tfrac{H}{(w_j)^2} ∘ ψ^{w_j} near each z_j for some H ∈ \mathcal{H}_i.
(4) Almost complex structure: A map J : Σ → \mathcal{J}(T^*N) whose pull-back under ε_j uniformly converges to (ψ^{w_j})^*J_t near each z_j for some J_t ∈ \mathcal{J}_i.
(5) Vertical moving boundary: A map η : ∂Σ → [1, +∞) which converges to w_j near each z_j.
Remark 5.2. The condition (3) automatically holds for any (fiberwise) globally quadratic Hamiltonian such as the kinetic energy Hamiltonian that we are using in the present paper. This is because of the equality
\frac{H}{w^2} ∘ ψ^w = H    (5.3)
for such a Hamiltonian. For consistency with [A1], we leave the condition as it is. On the other hand, to exploit this special situation, we extend the function η to the whole of Σ so that
η ∘ ε_i(∞_i, t) ≡ w_i. \quad \text{We then choose } J \text{ so that } (ψ^η)^*J = J_g.    (5.4)
Such a choice will be useful later in our study of C 0 estimates.
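For completeness, here is the short computation behind (5.3) (a minimal check, using the description ψ^s(r, y) = (sr, y), i.e., fiberwise scaling of the momentum, from Section 2.2):
\[
\Big(\frac{H}{w^2} ∘ ψ^w\Big)(q, p) = \frac{1}{w^2}\, H(q, wp) = \frac{1}{w^2}\cdot\frac{1}{2}|wp|^2_g = \frac{1}{2}|p|^2_g = H(q, p),
\]
which is exactly the statement that the kinetic energy Hamiltonian is fiberwise quadratic.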
Two Floer data (w^1, β^1, H^1, J^1, η^1) and (w^2, β^2, H^2, J^2, η^2) are conformally equivalent if there exists a constant C > 0 such that
w^1 = Cw^2, \quad β^1 = Cβ^2, \quad H^1 = \frac{H^2 ∘ ψ^C}{C^2}, \quad J^1 = (ψ^C)^*J^2, \quad η^1 = Cη^2.
Definition 5.3. A universal choice of Floer data D_m for the A∞ structure consists of D_m(Σ, j) for every (Σ, j) in \mathcal{M}_k for all k ≥ 2 satisfying the following:
(1) The data is smooth with respect to the moduli space M k .
(2) The data on ∂\mathcal{M}_k are conformally equivalent to those on the lower strata \mathcal{M}_{k_1} × \mathcal{M}_{k_2}, k_1 + k_2 = k + 2.
(3) The data are compatible with the gluing process to infinite order.
Let us consider a moduli space
\mathcal{M}^w_{(Σ,j)}(x_0; x) = \mathcal{M}(x_0; x; D_m(Σ, j))
which consists of maps u : Σ → T^*N satisfying a perturbed (j, J)-holomorphic equation with a vertical moving boundary condition and a shifted asymptotic condition:
(du − X_H ⊗ β)^{(0,1)}_J = 0,
u(z) ∈ ψ^{η(z)}(L_i) for z ∈ ∂Σ between z_i and z_{i+1}, where i ∈ Z_{k+1},
u ∘ ε_j(−∞, t) = ψ^{w_j} ∘ x_j(t) for j = 1, \ldots, k,
u ∘ ε_0(+∞, t) = ψ^{w_0} ∘ x_0(t).    (5.5)
Here precise meaning of the first equation is
(du(z) − X H(z) ⊗ β(z)) + J(z) • (du(z) − X H(z) ⊗ β(z)) • j = 0.
An immediate but important observation is that the pull-back of (5.6) under ε_j becomes the standard Floer equation
∂_τ u + J_t\Big(∂_t u − X_{\frac{H}{w_j} ∘ ψ^{w_j}}(u)\Big) = 0
on the strip-like ends, which can be obtained by applying ψ^{w_j} to (4.5). Now we consider a parameterized moduli space
\mathcal{M}^w(x_0; x) = \bigcup_{(Σ,j)∈\mathcal{M}_{k+1}} \mathcal{M}^w_{(Σ,j)}(x_0; x).
Then by the standard transversality argument we have
Lemma 5.4. For a generic choice of universal Floer data D m , the moduli space
\mathcal{M}(x_0; x) is a manifold of dimension μ(x_0) − \sum_{i=1}^{k} μ(x_i) − 2 + k.
Due to our construction of the one-forms β using the slit domain, consistency of the Floer data over all strata is automatic which is needed for the construction of A ∞ structures later.
Construction of A ∞ structure map
In this section, we would like to construct an A ∞ structure map
m k : CW * (L 1 , L 0 ; H) ⊗ · · · ⊗ CW * (L k , L k−1 ; H) → CW * (L k , L 0 ; H)[2 − k].
We recall that the module CW * (L, L ; H) is generated by time-one Hamiltonian chords of X H from L to L .
The starting point of compactification of the moduli space relevant to the A ∞ Floer structure map is to establish a uniform energy bound and C 0 estimates for the solutions of perturbed pseudo holomorphic equations associated to the corresponding moduli spaces.
6.1. Energy bound and C 0 estimates. The uniform energy bound for the solutions of perturbed pseudo holomorphic equations is necessary for the compactification of the corresponding moduli spaces.
Let L_i be an admissible Lagrangian and f_i : L_i → R its potential function, i.e., a function satisfying ι^*(−θ) = df_i. We first recall the definition of the action of x ∈ X(H; L_i, L_j):
\mathcal{A}(x) = -\int_0^1 x^*θ + \int_0^1 H(x(t))\, dt + f_j(x(1)) − f_i(x(0))    (6.1)
from (4.4). The energy E(u) is defined by
E(u) = \int_Σ \frac{1}{2}\,|du − X_H(u) ⊗ β|^2_J
for a general smooth map u. For any solution u : Σ → T^*N of (5.5), we have the following estimate:
E(u) = \int_Σ u^*ω − u^*dH ∧ β ≤ \int_Σ u^*ω − u^*dH ∧ β − \int_Σ u^*H \cdot dβ    (6.2)
= \int_Σ u^*ω − d(u^*H \cdot β),    (6.3)
where the inequality in (6.2) comes from H ≥ 0 and the sub-closedness of β. Stokes' theorem, together with the fixed Lagrangian boundary condition in (5.5) and β|_{∂Σ} = 0, implies that (6.3) becomes
\mathcal{A}(x_0) − \sum_{j=1}^{k} \mathcal{A}(x_j).
The moduli spaces M w (x 0 ; x) admit compactification with respect to the Floer datum D m = D m (Σ, j) on a stable disk (Σ, j) ∈ M k+1 stated in Definition 5.1.
Since our underlying manifold N is noncompact, the C⁰ estimate for the J-holomorphic maps has to be established before starting the process of compactification. This is where the 'co-closedness' of β enters in an essential way, as it is needed for an application of the maximum principle.
The following is the main proposition in this regard whose proof is postponed until Part 2.
Proposition 6.1 (Horizontal C⁰ bound). Let (Σ, j) be equipped with strip-like ends at each puncture z_i as before. Assume the one-form β is as in Definition 5.1. Let L = (L_0, \ldots, L_k) be a (k + 1)-tuple of admissible Lagrangians in W_ℓ ⊂ T^*N for some ℓ ∈ N, see Definition 4.1. Let x_j ∈ X(w_j H; L_{j−1}, L_j) for j = 1, \ldots, k and x_0 ∈ X(w_0 H; L_0, L_k). Then for any solution u of (5.5), we have Im u ⊂ W_ℓ.
(6.4)
We also need to establish the vertical bound. Such a vertical bound is established in [AS,section 7c] using an integrated form of the strong maximum principle. Here, partially because similar arguments are needed for the (horizontally) moving boundary condition, we will provide a more standard argument of using the pointwise strong maximum principle in Part 2. Both conditions dβ ≤ 0 and i * β = 0 will be used in a crucial way similarly as in Abouzaid-Seidel's proof
Proposition 6.2 (Vertical C 0 bound). Let x = (x 0 , . . . , x k ) be given as in (5.1). Then max z∈Σ |p(u(z))| ≤ ht(x; H, {L i })
for any solution u of (5.5).
Remark 6.3. In this remark, we summarize the standard arguments on how the compactness arguments and dimension counting enter in the construction of moduli operator such as A ∞ maps. For the practical purpose, we restrict ourselves to the cases of the moduli spaces M w (x 0 ; x) of expected dimension one and zero. Consider a sequence of pseudo-holomorphic maps {u ν } ν∈N with a sequence of Floer data {D ν } ν∈N defined on {(Σ ν , j ν )} ν∈N diverging to one end. Let D ∞ be the corresponding limit of the Floer datum which is defined over a broken stable disk
(Σ ∞ , j ∞ ) = (Σ I , j I ) * n+1 (Σ II , j II ).
Here *_{n+1} means that the 0-th puncture of Σ_I and the (i+1)-th puncture of Σ_{II} correspond to the breaking point. We already mentioned the C⁰ estimate for the moduli space \mathcal{M}^w(x_0; x). If the gradient of elements of \mathcal{M}^w(x_0; x) is not uniformly bounded, then we have a sphere or disk bubbling phenomenon. But these cannot happen since the symplectic manifold (T^*N, ω) and the Lagrangian submanifolds (L_i, ι^*θ) are exact. Then by the Arzelà-Ascoli theorem, there is a subsequence of {u_ν} which C^∞_{loc}-converges to u_∞ with Floer datum D_∞. The uniform energy estimate guarantees that the broken strips (or points) are mapped to Hamiltonian chords with a matching weight condition. So the boundary points correspond to broken pseudo-holomorphic maps and each component of the broken map is part of a zero-dimensional moduli space. Note also that the moduli space \mathcal{M}^w(x_0; x) of dimension zero has empty boundary, and hence is itself a finite set.
6.2. Definition of A ∞ structure map. Construction of the A ∞ structure map m = {m k } proceeds in two steps as in [A1]:
First we consider the moduli space M w (x 0 ; x) which consist of solutions u : Σ → T * N of (5.5) satisfying the asymptotic conditions
(ψ w 0 (x 0 ); ψ w 1 (x 1 ), . . . , ψ w k (x k )) at each ends z = (z 0 ; z 1 , . . . , z k ) of Σ, where ψ w 0 (x 0 ) ∈ CW * (ψ w 0 (L k ), ψ w 0 (L 0 )); ψ w j (x j ) ∈ CW * (ψ w j (L j ), ψ w j (L j−1 )) for j = 1, . . . , k for some Floer datum D = D m (Σ, j) given in Definition 5.1. Here w = (w 0 ; w 1 , . . . w k )
is the given tuple of weights of asymptotic conformal shifting of boundary Lagrangians.
The count of such elements directly defines a map
m k D : CW * (ψ w 1 (L 1 ), ψ w 1 (L 0 )) ⊗ · · · ⊗ CW * (ψ w k (L k ), ψ w k (L k−1 )) −→ CW * (ψ w 0 (L k ), ψ w 0 (L 0 ))[2 − k].
Then we pre-compose m^k_D with the tensor product of the vertical scaling maps
CW(ψ) : CW^*(L, L') ≅ CW^*(ψ(L), ψ(L'))
from Lemma 4.5 and post-compose with its inverse; i.e., the map m^k is defined by
m^k = (CW(ψ))^{-1} ∘ m^k_D ∘ (CW(ψ))^{⊗k}.    (6.5)
In conclusion, we have
m^k(x_1 ⊗ \cdots ⊗ x_k) = \sum_{x_0} (−1)^{†}\, \#\mathcal{M}^w(x_0; x)\, x_0,    (6.6)
where † = \sum_{i=1}^{k} i \cdot μ(x_i) and \# denotes the signed count of the zero-dimensional moduli space \mathcal{M}^w(x_0; x).
Remark 6.4. Note that the A∞ structure map {m^k}_{k∈N} is defined on the chain complex generated by time-one Hamiltonian chords of the originally given H, while the intermediate maps m^k_D are defined on the complex generated by time-one Hamiltonian chords of the weighted Hamiltonians w_i H. Now suppose that the moduli space \mathcal{M}^w(x_0; x) is one-dimensional. Then by the standard argument in Gromov-Floer compactification, the boundary strata
∂\mathcal{M}(x_0; x) consist of \bigcup_{x} \mathcal{M}^{w^1}(x_0; x^1) × \mathcal{M}^{w^2}(x; x^2).
Here, 0 ≤ n ≤ k − m and
x ∈ X(|w^2|H; L_{n+m}, L_n), \quad |w^2| = w_{n+1} + \cdots + w_{n+m};
x^1 = (x_1, \ldots, x_n, x, x_{n+m+1}, \ldots, x_k), \quad w^1 = w|_{x^1}; \qquad x^2 = (x_{n+1}, \ldots, x_{n+m}), \quad w^2 = w|_{x^2}.
We conclude the following relation. Here we follow the sign convention from [Se1], see also Section B.2.
Proposition 6.5. The maps {m^k}_{k∈N} define an A∞ structure, i.e., they satisfy
\sum_{m,n} (−1)^{‡}\, m^{k−m+1}(x_1, \ldots, x_n, m^m(x_{n+1}, \ldots, x_{n+m}), x_{n+m+1}, \ldots, x_k) = 0, \quad \text{where } ‡ = ‡_n = \sum_{i=1}^{n} μ(x_i) − n.
We denote the resulting category by WF(T * (M \ K); H g0 ).
Construction of A ∞ functor
In this section, we consider a pair of metrics g, g' on M with g ≥ g'.
We shall construct a (homotopy) directed system of A∞ functors
F_λ : WF(M \ K; H_{g_0}) → WF(M \ K; H_{g_0'})
for any monotone path λ : [0, 1] → C(N) with λ(0) = g, λ(1) = g'. By Definition 4.1 of the objects in the wrapped Fukaya category, there is a natural inclusion
for any monotone path λ : [0, 1] → C(N ) with λ(0) = g, λ(1) = g . By Definition 4.1 of the objects in the wrapped Fukaya category, there is a natural inclusion
Ob(WF(M \ K; H_{g_0})) → Ob(WF(M \ K; H_{g_0'})).
We take it as data of the maps
Ob(F_λ) : Ob(WF(M \ K; H_{g_0})) → Ob(WF(M \ K; H_{g_0'})).
Note that even though L_0, L_1 are objects of WF(M \ K; H_{g_0}) and hence of WF(M \ K; H_{g_0'}), the corresponding morphism spaces could be different. In fact, the Hamiltonian used for WF(M \ K; H_{g_0}) is H_{g_0} and that for WF(M \ K; H_{g_0'}) is H_{g_0'}, so the corresponding morphism spaces are CW^*(L_0, L_1; H_{g_0}) and CW^*(L_0, L_1; H_{g_0'}) respectively. We will then show that the quasi-equivalence class of the A∞ category WF(T^*(M \ K); H_{g_0}) does not depend on g. We denote the A∞ category by
WF g (M \ K) := WF(T * (M \ K); H g0 )
suppressing g from its notation.
More precisely, we will construct an A∞ functor
\mathfrak{f}_{g}^{g'} = \{f^k_λ\} \quad \text{with} \quad f^k_λ : CW^*(L_1, L_0; H_{g_0}) ⊗ \cdots ⊗ CW^*(L_k, L_{k−1}; H_{g_0}) → CW^*(L_k, L_0; H_{g_0'})
associated to a homotopy of metrics from g to g'. Such an A∞ functor will be constructed by an A∞ version of Floer's continuation map under the homotopy of the associated Hamiltonians from H_{g_0} to H_{g_0'}.
Remark 7.1. It turns out that this construction of an A∞ morphism or A∞ functor under the change of Hamiltonians, at least in the form given in the present paper, has not been given in the existing literature as far as we are aware. (See [Sa], however, for a construction which should be relevant to such a study.) It took us some effort and time to arrive at the right definition we are presenting here. At the end of the day, our definition is motivated by the construction given in [FOOO1, Section 4.6], which defines an A∞ morphism under a Hamiltonian isotopy in the context of singular chain complexes associated to a given Lagrangian submanifold L in the Morse-Bott context.
7.1. Moduli space of time-allocation stable curves. In this subsection, we recall the moduli space, denoted by \mathcal{N}_{k+1} in [FOOO2], of decorated stable curves of genus 0. This moduli space was used in the construction of A∞ homomorphisms therein, which we generalize for the construction of A∞ functors. (See also [AS] for the similar moduli space named the 'popsicle moduli space' for a more elaborate version thereof.)
Let M k+1 be the moduli space of (Σ; z) where Σ is a genus zero bordered Riemann surface and z = (z 0 , . . . , z k ) are the boundary punctured points, ordered anti-clockwise, such that (Σ; z) is stable. Let Σ = Σ i be the decomposition into the irreducible components. The stability implies that there is no sphere component, since we do not put any interior marked points.
We define a partial order on the set of irreducible components. Let {Σ α | α ∈ A} be the set of components of Σ and, by the definition of M k+1 , it admits a rooted tree structure, see Figure 7.1.21 in [FOOO2]. We assign a partial order ≺ on A with respect to the rooted tree structure as follows:
Definition 7.2. If every path joining Σ α1 to the rooted component Σ α0 , corresponding to z 0 , intersect with Σ α2 , then we write α 1 ≺ α 2 .
We recall the following definition of time-allocation.
Definition 7.3 (Definition 7.1.53 [FOOO2]). Let (Σ, ≺) be such a pair. We define the time allocation ρ : A → [0, 1] to (Σ, ≺) so that if α i ≺ α j then ρ(α i ) ≤ ρ(α j ). See Figure 7.1.19 [FOOO2].
Definition 7.4 (Definition 7.1.54 [FOOO2]). For k ≥ 2, we define N k+1 to be the set of pairs ((Σ; z), ρ) of (Σ; z) ∈ M k+1 and its time allocation ρ.
We equip N k+1 with the topology induced from that of M k+1 in an obvious way. We now describe the stratification of N k+1 . Following [FOOO2], we denote
N = (Σ, z, ρ) ∈ N k+1 .
We consider the union of all irreducible components Σ i with ρ i = ρ(i) ∈ (0, 1). Let us decompose and label the index set A of N ∈ N k+1 as follows:
A = ρ^{-1}(0) ⊔ ρ^{-1}((0, 1)) ⊔ ρ^{-1}(1);    (7.1)
ρ^{-1}(0) = \{m_1, \ldots, m_λ\}; \quad ρ^{-1}((0, 1)) = \{f_1, \ldots, f_j\}; \quad ρ^{-1}(1) = \{m'_1, \ldots, m'_i\}.
Note that ρ^{-1}((0, 1)) could be empty, and if ρ^{-1}(1) is non-empty then \bigcup_{α∈ρ^{-1}(1)} Σ_α is connected and contains z_0.
Definition 7.5. The combinatorial type Γ = Γ(N) of N is a ribbon graph with decorations as follows:
• Interior vertices V int of Γ correspond to the components set α ∈ A.
• Exterior vertices V ext of Γ correspond to the punctured points z.
• The matching condition of components determines interior edges of Γ.
• An exterior edge is assigned when a component contains z.
• The ribbon structure is determined by the cyclic order of the marked or singular points on the boundary of each component. • There is a decomposition
V_{int}(Γ) = V_f(Γ) ⊔ V_m(Γ) ⊔ V_{m'}(Γ)    (7.2)
with respect to (7.1). We denote by G_{k+1} the set of such combinatorial types of \mathcal{N}_{k+1}. Let \mathcal{N}_Γ = \{N ∈ \mathcal{N}_{k+1} : Γ(N) = Γ\}; then we have the decomposition
\mathcal{N}_{k+1} = \bigsqcup_{Γ∈G_{k+1}} \mathcal{N}_Γ.
Let Γ 0 be the graph that has only one interior vertex. For each k, there are three different types of them as in (7.2).
Lemma 7.6 (Lemma 7.1.55 [FOOO2]). For any Γ ∈ G_{k+1} with |V(Γ)| > 1,
\mathcal{N}_Γ is diffeomorphic to D^{|Γ|}, where |Γ| := k − 1 − |V(Γ) \setminus V_f(Γ)|.    (7.3)
Proposition 7.7 (Proposition 7.1.61 [FOOO2]). N k+1 has a structure of smooth manifold (with boundary or corners) that is compatible with the decomposition (according to the combinatorial types).
It was shown in the proof of this proposition in [FOOO2] that there exists a map
I : \mathcal{M}_{k+1} × [0, 1] → \mathcal{N}_{k+1}
that gives a homeomorphism onto the set of all N whose associated graph Γ has only one interior vertex and its time allocation ρ is constant.
We then quote the following basic structure theorem for \mathcal{N}_{k+1} from [FOOO2]. Theorem 7.8 (Theorem 7.1.51 [FOOO2]). For each k ∈ N, \mathcal{N}_{k+1} carries the structure of a cell complex which is diffeomorphic to the (k − 1)-dimensional disc D^{k−1} such that its boundary is decomposed into the union of cells described as follows:
(1) \mathcal{M}_{(i+1, \ldots, i+ℓ)} × \mathcal{N}_{(1, \ldots, i, *, i+ℓ+1, \ldots, k)}.
(2) \prod_{i=1}^{m} \mathcal{N}_{(ℓ_{i−1}+1, \ldots, ℓ_i)} × \mathcal{M}_{m+1}, where ℓ_0 = 0, ℓ_m = k, ℓ_i < ℓ_{i+1}.
In the above, \mathcal{M}_{(i_1, \ldots, i_k)} is \mathcal{M}_k with a new label (i_1, \ldots, i_k) on the k punctures. The same for \mathcal{N}.
We remark that Case (1) contains the case i = 1, 1 + = k. This is the case of M k+1 × N 1+1 ∼ = M k+1 . It corresponds to the case when time allocation ρ is 1 everywhere. Case (2) contains the case m = k. This is the case of N 1+1 × (M k+1 ) . It corresponds to the case when the time allocation ρ is 0 everywhere.
Similarly to the construction of A∞ homomorphisms given in Chapter 7 of [FOOO2], we will use \mathcal{N}_{k+1} for the construction of A∞ functors.
7.2. Definition of A∞ functor and energy estimates. Firstly, consider cylindrical adjustments g_0, g_0' on N = M \ K of Riemannian metrics defined on M, as introduced in Section 2.1. We assume g_0 ≥ g_0' and consider a homotopy λ : [0, 1] → C(N) of Riemannian metrics from g_0 to g_0' given by λ : r ↦ g(r) = g_0 + r(g_0' − g_0) (or by any other homotopy r ↦ g(r) connecting λ(0) = g_0 and λ(1) = g_0'). We denote by H_λ the time-dependent Hamiltonian associated to λ defined by H_λ(s, x) = H_{λ(s)}(x). We will also impose the additional monotonicity restriction
g_0 ≥ g_0', \quad \text{or equivalently} \quad H_{g_0} ≤ H_{g_0'},    (7.4)
and require λ to be monotone, which is needed, for example, for the energy estimates for the perturbed Cauchy-Riemann equation relevant to the definition of Floer's continuation map for the wrapped Fukaya category of the cotangent bundle in general. We refer to Remark 7.11 to see how this monotonicity condition enters in a crucial way.
Remark 7.9. For our purpose, we will obtain such an inequality by multiplying one of the metrics, say g', by a sufficiently large constant λ > 0. Because the base manifold M \ K is not compact, it is possible to achieve g_0 ≤ λ g_0' everywhere on M \ K only when g_0 and g_0' are Lipschitz-equivalent, i.e., when there is a constant C = C(g, g') > 0 such that \frac{1}{C} g_0 ≤ g_0' ≤ C g_0. This is precisely the reason why, in the present paper, we look at those metrics g, g' on M \ K that are smoothly extendable to the whole of M. We refer readers to Proposition 2.2 for the latter statement.
We will also use the associated family of admissible almost complex structures J_λ over [0, 1], with J_{g(r)} = \mathcal{J}(T^*N, g(r)), and the elongation function χ : R → [0, 1] satisfying
χ(τ) = \begin{cases} 0 & τ ≤ 0 \\ 1 & τ ≥ 1 \end{cases}, \qquad 0 ≤ χ' ≤ 2.
The morphism part of the A∞ functor F_λ consists of the following data of an A∞ homomorphism
f^k_λ : CW^*(L_1, L_0; H_{g_0}) ⊗ \cdots ⊗ CW^*(L_k, L_{k−1}; H_{g_0}) → CW^*(L_k, L_0; H_{g_0'})[1 − k].
In order to construct f^k we need to use another moduli space of perturbed J-holomorphic curves of genus zero. Here we consider the case where the components are perturbed J-holomorphic with respect to compatible almost complex structures and Hamiltonian functions which may vary from component to component. We basically follow the construction described in [FOOO1, Section 4.6] in the current wrapped context.
Note that each component Σ_α is an element of \mathcal{M}_{ℓ+1} for some ℓ ≥ 1, and two components have a matching asymptotic condition at at most one point. Since our underlying manifold is exact and the Lagrangian submanifolds are exact, there is no possibility of sphere bubbles or disk bubbles. Now we are ready to give the definition of the A∞ functor. We start by listing the data necessary for the construction. We would like to highlight that in item (3_ρ) below we put the time-dependent Hamiltonian H_λ, while we put the time-independent Hamiltonian in Definition 5.1.
Definition 7.10 (Floer data for A ∞ functor). A Floer datum D f = D f (Σ, j; ρ) for (Σ, j; ρ) ∈ N k+1 is defined by replacing the properties (3), (4) in Definition 5.1 as follows:
(1) Weights: A (k + 1)-tuple of non-negative integers w = (w 0 , . . . , w k ) which is assigned to the marked points z satisfying w 0 = w 1 + · · · + w k .
(2) One-forms: A collection of one-forms β_α ∈ Ω¹(Σ_α) constructed in Section 3.2 such that (ε_j)^*β_α agrees with w_j\, dt.
(3_ρ) Hamiltonians: A collection H_ρ of maps H_α : Σ_α → \mathcal{H}_{ρ(α)} assigned to each irreducible component α ∈ A whose pull-back under ε_{j_α} uniformly converges to \frac{H^{j_α}_λ}{(w_{j_α})^2} ∘ ψ^{w_{j_α}} near each z_{j_α}, for the path H^{j_α}_λ : [0, 1] → \mathcal{H}_{ρ(α)} dictated as described above.
(4_ρ) Almost complex structures: J_ρ which consists of maps J_α : Σ_α → \mathcal{J}_{ρ(α)} for each α ∈ A, whose pull-back under ε_{j_α} uniformly converges to (ψ^{w_{j_α}})^*J^{j_α} near each z_{j_α}, for the path
J^{j_α}_λ : [0, 1] → \mathcal{J}_{ρ(α)}
associated to λ.
(5) Vertical moving boundary: A map η : ∂Σ → [1, +∞) which converges to w_j near each z_j.
A universal choice of Floer data D_f for the A∞ functor consists of the collection of D_f for every (Σ, j; ρ) ∈ \bigsqcup_{k≥1} \mathcal{N}_{k+1} satisfying the corresponding conditions as in Definition 5.3 for \mathcal{N}_{k+1} and its boundary strata. Now we consider the system of H_ρ-perturbed J_ρ-holomorphic maps \{u_α : Σ_α → T^*N\}_{α∈A} for the data D_f(Σ_α, j_α; ρ) and a (k + 1)-tuple of Hamiltonian chords
x = (x_1, \ldots, x_k) ∈ X(H_{g_0}; L_0, L_1) × \cdots × X(H_{g_0}; L_{k−1}, L_k); \qquad y ∈ X(H_{g_0'}; L_0, L_k)
satisfying stability, shifted asymptotic conditions, and the perturbed J-holomorphic equation with respect to H s and J s for each parameter s as follows.
With this preparation, we describe the structure of the collection of maps \{u_α\}_{α∈A}. As a warm-up, we first consider the case k = 1, i.e., with one input and one output puncture, whose domain is unstable. When the domain is smooth, it is isomorphic to R × [0, 1], and we consider the non-autonomous equation
\frac{∂u}{∂τ} + J_{χ(τ)}\Big(\frac{∂u}{∂t} − X_{H_{χ(τ)}}(u)\Big) = 0, \qquad u(τ, 0) ∈ L, \quad u(τ, 1) ∈ L'.
(7.5)
Remark 7.11. This appearance of non-autonomous Cauchy-Riemann equation in our construction of A ∞ functor is the reason why we imposed the monotonicity hypothesis (7.4) for the energy estimates. We refer to [Oh5] for the energy formula for the non-autonomous equation:
\int \Big|\frac{∂u}{∂τ}\Big|^2_{J_{χ(τ)}}\, dt\, dτ = \mathcal{A}_{H^+}(z_+) − \mathcal{A}_{H^-}(z_-) − \int_{-∞}^{∞} χ'(τ) \int_0^1 \frac{∂H_s}{∂s}\Big|_{s=χ(τ)}(u(τ, t))\, dt\, dτ.    (7.6)
For readers' convenience, we give its derivation in Appendix. It follows that we have uniform energy bound for general Hamiltonians on non-compact manifolds provided the homotopy s → H s is a monotonically increasing homotopy.
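The following is a heuristic outline of where (7.6) comes from, using only the interpretation of (1.7) as the positive gradient flow of \mathcal{A}_H fixed in Section 1.5 (a sketch; the careful derivation is the one referred to above). Along a solution u of the elongated equation one computes
\[
\frac{d}{dτ}\,\mathcal{A}_{H_{χ(τ)}}(u(τ, \cdot)) = d\mathcal{A}_{H_{χ(τ)}}(∂_τ u) + χ'(τ)\int_0^1 \frac{∂H_s}{∂s}\Big|_{s=χ(τ)}(u(τ, t))\, dt = \Big|\frac{∂u}{∂τ}\Big|^2_{J_{χ(τ)}} + χ'(τ)\int_0^1 \frac{∂H_s}{∂s}\Big|_{s=χ(τ)}(u(τ, t))\, dt,
\]
and integrating over τ ∈ (−∞, ∞) gives (7.6); the monotonicity ∂H_s/∂s ≥ 0 then bounds the energy by \mathcal{A}_{H^+}(z_+) − \mathcal{A}_{H^-}(z_-).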
Denote by \mathcal{N}(x'; x) the moduli space of finite energy solutions with u(−∞) = x, u(+∞) = x'. We denote by \overline{\mathcal{N}}(x'; x) its compactification. An element of \overline{\mathcal{N}}(x'; x) is a linear chain of elements
u^-_1, \ldots, u^-_{k^-}, u_0, u^+_1, \ldots, u^+_{k^+}
such that u^-_i ∈ \mathcal{M}(x^-_{i−1}, x^-_i) with x^-_i ∈ X(H^-; L, L') for 0 ≤ i ≤ k^-, u^+_j ∈ \mathcal{M}(x^+_{j−1}, x^+_j) with x^+_j ∈ X(H^+; L, L') for 0 ≤ j ≤ k^+, and u_0 ∈ \mathcal{N}(x^-_{k^-}, x^+_0). We denote the concatenation of such a linear chain by
u = (u^-_1, \ldots, u^-_{k^-}, u_0, u^+_1, \ldots, u^+_{k^+}).    (7.7)
Then for a given ordered sequence 0 ≤ ρ_0 ≤ ρ_1 ≤ \ldots ≤ ρ_λ ≤ 1, we perform the above construction for each consecutive pair (H_{ρ_i}, H_{ρ_{i+1}}) iteratively to form a 'stair-case' chain which is a concatenation of the linear chains u_i over i = 0, \ldots, λ. We denote by \mathcal{N}(x'; x) the set of such stair-case chains. Now we turn to the case k ≥ 2. Let Σ_α be an irreducible component of Σ. Denote by \{z^α_0, \ldots, z^α_{k_α}\}, k_α ≥ 1, the special points of Σ_α, which form the union of the nodal points and the marked points of Σ lying on Σ_α. We denote by L^α_i the Lagrangian submanifold that we originally put around the vertex α in the chambers given by the dual graph of Σ in the beginning. Denote by Σ_{α_+} the component attached to z^α_0. Then we have α ≺ α_+ and so ρ(α) ≤ ρ(α_+).
We now describe the equation for u_α on the strip-like region at z^α_0. For given α, β, we consider the function homotopy s ↦ H^{(α,β)}_s defined by
H^{(α,β)}_s = (1 − s)H_{ρ(α)} + sH_{ρ(β)},
or any homotopy \{H_s\}_{s∈[0,1]} connecting H_{ρ(α)} and H_{ρ(β)}. In the strip-like region near z^α_0 on Σ_α, we consider the non-autonomous Cauchy-Riemann equation on (−∞, 0] × [0, 1]:
\frac{∂u}{∂τ} + J_{χ(τ^α_0 − τ)}\Big(\frac{∂u}{∂t} − X_{H^{(α, α_+)}_{χ(τ^α_0 − τ)}}(u)\Big) = 0
for some τ^α_0 ≤ −1. On the other hand, at the input punctures z^α_i, we denote by Σ_{α_-} the component attached to z^α_i at the nodal point z^{α_-}_0. Then we put the non-autonomous equation on [0, ∞) × [0, 1]:
\frac{∂u}{∂τ} + J_{χ(τ − τ^α_i)}\Big(\frac{∂u}{∂t} − X_{H^{(α_-, α)}_{χ(τ − τ^α_i)}}(u)\Big) = 0
for some τ^α_i ≥ 1. At each nodal point of Σ associated with α ≺ β, we insert a linear chain associated to H^- = H_{ρ(α)}, H^+ = H_{ρ(β)}, either of the type u^-_1, \ldots, u^-_{k^-}, u_0 on the strip-like regions of z^α_i, with u^-_i as above but with u_0 a map on [0, ∞) × [0, 1], or of the type u_0, u^+_1, \ldots, u^+_{k^+} on the strip-like regions of z^β_0, with u^+_j as above but with u_0 a map on (−∞, 0] × [0, 1]. We put the chain in at most one of the two regions.
In summary, we require the map u α on Σ α to satisfy
(1) (du α − X Hα ⊗ β α ) (0,1) Jα = 0 (7.8) for each α ∈ A.
Here we emphasize the fact that the Hamiltonian H α is autonomous away from the strip-like regions of each component. We do not exclude the case where the component is of the m-type, i.e., one that satisfies the autonomous equation of H ρ(α) .
(2) u_α ∘ ε_{j_α}(−∞, t) = ψ^{w_{j_α}} ∘ x_{j_α}(t), for the input punctures z_{j_α} of Σ_α.
(3) u_α ∘ ε_{0_α}(+∞, t) = ψ^{w_{0_α}} ∘ x_{0_α}(t), for the output puncture z_{0_α} of Σ_α.
(4) \big((Σ_α, z), (u_α)\big)_{α∈A} is stable.
To be able to construct the relevant compactified moduli space of solutions (7.8), we need the uniform energy bound and C 0 estimates both of which require the monotonicity condition (7.4). The proof of the following energy bound is given by combining the energy bound obtained in Section 6.1 and that of (7.6) in Remark 7.11. Lemma 7.12. For any finite energy solution of (7.8), we have the energy identity
E(u) ≤ A_{H^+}(x_0) − Σ_{j=1}^k A_{H^-}(x_j) − Σ_{α∈A} ∫_{−∞}^{∞} χ'(τ) ∫_0^1 ∂H^α_{λ_χ}/∂s|_{s=χ(τ)}(u(τ, t)) dt dτ    (7.9)
where the map λ_χ : R_± → [0, 1] is the elongated path defined by λ_χ(τ) = λ(χ(τ)). In particular, if λ is a monotonically decreasing path, i.e., if ∂H_λ/∂s ≥ 0, then we have
E(u) ≤ A_{H^+}(x_0) − Σ_{j=1}^k A_{H^-}(x_j).
Then we denote the space of such systems of maps satisfying the above conditions by
N^{k+1}_{(Σ,j;ρ)}(y; x) = N^{k+1}(y; x; D_f(Σ, j; ρ)).    (7.10)
Consider the parameterized moduli space
N^{k+1}(y; x) = ⋃_{(Σ,j;ρ)∈N^{k+1}} N^{k+1}_{(Σ,j;ρ)}(y; x).
We have a forgetful map
forget : N^{k+1}(y; x) → N^{k+1}
for k ≥ 2, which induces a decomposition
N^{k+1}(y; x) = ⋃_{Γ∈G^{k+1}} N_Γ(y; x), where N_Γ(y; x) = forget^{−1}(Γ).
Lemma 7.13. For a generic choice of universal Floer data D_f, the moduli space N_Γ(y; x) is a manifold of dimension
µ(y) − Σ_{i=1}^k µ(x_i) − |Γ|,
where |Γ| is given as in (7.3).
In particular, if V(Γ) = V_f(Γ), i.e., there is no m- or m'-component, then the dimension becomes
µ(y) − Σ_{i=1}^k µ(x_i) − 1 + k.
The matrix coefficient of the A_∞ homomorphism
f^k_λ : CW*(L_1, L_0; H_{g_0}) ⊗ ⋯ ⊗ CW*(L_k, L_{k−1}; H_{g_0}) → CW*(L_k, L_0; H_{g_0'})[1 − k]
is defined by
f^k_λ(x_1 ⊗ ⋯ ⊗ x_k) = Σ_y (−1)^♠ #N^{k+1}(y; x) · y,    (7.11)
where ♠ = Σ_{j=1}^k j·µ(x_j) + k and # is the signed sum as before. Note that #(N^{k+1}(y; x)) is zero unless
µ(y) = Σ_{i=1}^k µ(x_i) + 1 − k.
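As a consistency check (our remark, not from the text): the rigidity condition above says precisely that f^k_λ is an operation of degree 1 − k,
\[
\deg f^k_\lambda(x_1\otimes\cdots\otimes x_k) \;=\; \mu(y) \;=\; \sum_{i=1}^k \mu(x_i) \;+\; (1-k),
\]
which is consistent with the shift [1 − k] appearing in the target of (7.11).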
Since the equation (7.8) for each given ρ(α) involves only a fixed (not moving) boundary condition, the C^0-bound for (5.5) still applies to prove the following C^0-bounds.
Let (Σ, j) be equipped with strip-like ends ε_k : Z_± → Σ at each puncture z_k as before.
Proposition 7.14 (Vertical C 0 estimates for f-components). Let x = (x 0 , . . . , x k ) be a (k + 1)-tuple of Hamiltonian chords x j ∈ X(H g0 ; L j−1 , L j ). Then for any solution u of (7.8).
Proposition 7.15 (Horizontal C^0 estimates for f-components). Assume the one-form β satisfies (3.5). Suppose L_j ⊂ W^λ for all j = 0, ..., k, and let x_j ∈ X(H_{g_0}; L_{j−1}, L_j) for j = 1, ..., k and x_0 ∈ X(H_{g_0'}; L_0, L_k) be Hamiltonian chords. Then
Im u ⊂ W^λ    (7.12)
for any solution u of (7.8).
We now describe the structure of the boundary (codimension-one) strata of N^{k+1}(y; x). We first form the union
N^{k+1}(y; x) = ⋃_{(Σ,j;ρ)∈N^{k+1}} N^{k+1}_{(Σ,j;ρ)}(y; x).
Then consider the asymptotic evaluation maps
ev^−_ℓ : ⋃_{(y;x)} N^{k+1}(y; x) → X(H^{ρ(z_ℓ)}; L_{ℓ−1}, L_ℓ) × [0, 1] for ℓ = 1, ..., k,
ev^+_0 : ⋃_{(y;x)} N^{k+1}(y; x) → X(H^{ρ(z_0)}; L_0, L_k) × [0, 1],
given by
ev^−_ℓ({u_α}_{α∈A}) = (u ∘ ε_ℓ(−∞, · ), ρ(z_ℓ)) for ℓ = 1, ..., k,
ev^+_0({u_α}_{α∈A}) = (u ∘ ε_0(+∞, · ), ρ(z_0)),
where ρ(z_i) ∈ [0, 1] is the ρ-value of the component carrying the i-th end. We also consider the similar evaluation maps ev^−_ℓ : ⋃_{(y;x)} M^{k+1}(y; x) → X(H_{g_0}; L_{ℓ−1}, L_ℓ) for ℓ = 1, ..., k and ev^+_0 : ⋃_{(y;x)} M^{k+1}(y; x) → X(H_{g_0}; L_0, L_k).

Theorem 7.16. Let g, g' be a pair of smooth metrics on M satisfying g ≥ g', and let λ be a monotone homotopy between them with λ(0) = g, λ(1) = g'. Then the definition (7.11) is well-defined and the maps {f^k_λ} satisfy the A_∞ functor relation.
Proof. Let ((Σ^{(i)}, z^{(i)}), (u^{(i)}_α), (ρ^{(i)}_α))_{i∈N, α∈A} be a sequence of elements of N^{k+1}(y; x). Then one of the following occurs:
(1) One of the components Σ^{(i)}_α splits into two components as i → +∞.
(2) Two components Σ_{α_1} and Σ_{α_2} sharing one asymptotic matching condition satisfy lim ρ^{(i)}_{α_1} = lim ρ^{(i)}_{α_2} as i → +∞.
Both Type (1) and Type (2) above consist of the following fiber product
N^{k_1+1}(y; x^1) _{ev^+_i}×_{ev^−_0} N^{k_2+1}(*; x^2)
with k_1 + k_2 = k and with opposite signs, so they cancel each other when µ(y) = Σ_{i=1}^k µ(x_i) + 1 − k. Here, for given x = (x_1, x_2, ..., x_k),
x^1 = (x_1, ..., x_{i−1}, *, x_{i+k_2}, ..., x_{k_1+k_2}),
x^2 = (x_i, ..., x_{i+k_2−1}).
Note that the case (3), (4) occur at leaf components and the root component of {Σ α } α∈A , which correspond to the two cases of domain degenerations described in Theorem 7.8 respectively.
The case (3) contributes the following terms
Σ_{k=k_1+k_2−1} Σ_{n,k_2} (−1)^{‡+1} f^{k_1}_λ(x_1, ..., x_n, m^{k_2}(x_{n+1}, ..., x_{n+k_2}), x_{n+k_2+1}, ..., x_k),    (7.13)
where ‡ = ‡_n = Σ_{i=1}^n µ(x_i) − n, while the case (4) corresponds to
Σ_{k=k_1+⋯+k_ℓ} Σ_{ℓ,k_1,...,k_ℓ} m^ℓ(f^{k_1}_λ(x_1, ..., x_{k_1}), ..., f^{k_ℓ}_λ(x_{k−k_ℓ+1}, ..., x_k)).    (7.14)
This verifies that {f^k}_{k∈N} satisfies the A_∞ homomorphism relation, i.e., (7.13) equals (7.14). We refer to Chapter 7, Section 1 of [FOOO2] for the details of this proof in the case of A_∞ homomorphisms; the same argument applies to the case of A_∞ functors.
Homotopy of A ∞ functors
Now let λ_1, λ_2 ∈ C^∞([0, 1], C(N)) be two paths of admissible metrics connecting g_0 = λ_1(0) = λ_2(0) and g_0' = λ_1(1) = λ_2(1) with g_0 ≥ g_0'. Denote by F = {f^k_{λ_1}}_{k∈N} and F' = {f^k_{λ_2}}_{k∈N} the corresponding A_∞ functors from WF(M \ K) to WF(M \ K) constructed in the previous subsection. Now suppose we are given a path Γ ∈ C^∞([0, 1] × [0, 1], C(N)) of paths connecting λ_1 and λ_2. The main objective of this subsection is to construct an A_∞ homotopy, denoted by h = h_Γ, between F and F'.

8.1. Definitions of A_∞ homotopy and composition. To motivate our construction of the relevant decorated moduli space below, we first recall the definition of an A_∞ homotopy from [Lef], [Se1].

Definition 8.1 (A_∞ homotopy). Let (A, m_A), (B, m_B) be two A_∞ categories, and let F = {f^k}_{k∈N}, G = {g^k}_{k∈N} be two A_∞ functors from A to B. A homotopy H from F to G is a family of morphisms h^k : A^{⊗k} → B, k ≥ 1, of degree −k satisfying the equation
(f^d − g^d)(a_1, ..., a_d) = Σ_{m,n} (−1)^† h^{d−m+1}(a_1, ..., a_n, m^m_A(a_{n+1}, ..., a_{n+m}), a_{n+m+1}, ..., a_d)
+ Σ_{r,i} Σ_{s_1,...,s_r} (−1)^♣ m^r_B(f^{s_1}(a_1, ..., a_{s_1}), ..., f^{s_{i−1}}(..., a_{s_1+⋯+s_{i−1}}), h^{s_i}(a_{s_1+⋯+s_{i−1}+1}, ..., a_{s_1+⋯+s_i}), g^{s_{i+1}}(a_{s_1+⋯+s_i+1}, ...), ..., g^{s_r}(a_{d−s_r+1}, ..., a_d)),    (8.1)
where s_1 + ⋯ + s_r = d and † = Σ_{i=1}^n µ(a_i) − n, ♣ = Σ_{ℓ=1}^{s_1+⋯+s_{i−1}} µ(a_ℓ) − Σ_{ℓ=1}^{i−1} s_ℓ.
We will apply this definition of homotopy in the proof of consistency condition for the system
WF(M \ K; H g0 ), {m k } Φ λ −→ WF(M \ K; H g 0 ), {m k }
to define a homotopy: For a triple g ≥ g ≥ g of metrics on M we have functors F λ , F λ , and F λ . We consider the composition of A ∞ functors and construct a homotopy between F λ • F λ and F λ .
The composed A_∞ functor F_{λ'} ∘ F_λ consists of the following data of an A_∞ homomorphism:
(F_{λ'} ∘ F_λ)^d(x_1, ..., x_d) = Σ_r Σ_{s_1,...,s_r} f^r_{λ'}(f^{s_1}_λ(x_1, ..., x_{s_1}), ..., f^{s_r}_λ(x_{d−s_r+1}, ..., x_d)).
We consider two paths of cylindrical Riemannian metrics, the concatenated path λ' * λ : [0, 1] → C(N) and the direct path λ'' connecting g and g''. It easily follows from the contractibility of the space of cylindrical Riemannian metrics that we can construct a path (of paths) Γ : [0, 1] × [0, 1] → C(N) between λ' * λ and λ''. Here we use the coordinates (r, s) for [0, 1] × [0, 1], where s is the newly adopted one. Note that Γ(−, s) =: λ_s gives a homotopy between g_0 and g_0'' for each s ∈ [0, 1].
Now we go back to the construction of the A_∞ homotopy associated to a geometric homotopy. For each fixed s ∈ [0, 1], we have the corresponding spaces of admissible almost complex structures and of admissible Hamiltonians
J^r_s = J(T*N, Γ(r, s)), H^r_s = H(T*N, Γ(r, s)).    (8.2)
Let us recall from Definition 7.2 that (Σ; z; ρ) ∈ N^{k+1} admits a partial order ≺ on the set A of irreducible components α of Σ.
Remark 8.2.
(1) With the above preparation, it looks natural to consider the parameterized f-moduli space, which was called a timewise moduli space in [FOOO1, FOOO2],
N^{k+1}_{para}(y; x) = ⋃_{s∈[0,1]} {s} × N^{k+1}_{ρ(s)}(y; x),    (8.3)
and to define the homotopy map by counting #N^{k+1}_{para}(y; x) when the moduli space associated to (y; x) has virtual dimension 0. However, this definition will not lead to the relation required for the A_∞ homotopy described in Definition 8.1, because the one-dimensional components of the above-defined moduli space have boundary that is not consistent with the homotopy relation.
So we need to modify this moduli space so that the deformed homotopy map whose matrix coefficients are defined via counting the zero dimensional components of the modified moduli space. One requirement we impose in this modification is that the strata the domain of each element of which is irreducible are unchanged from above. We will modify those strata whose elements have nodal domains, i.e., not irreducible.
(2) We would like to mention that this kind of modification process already appeared in the definition of A ∞ map f where the time-order decoration ρ is added to the moduli space of bordered stable maps to incorporate the datum of geometric homotopy λ : r → g λ (r). We also recall the starting point of Fukaya-Oh-Ohta-Ono's deformation theory of Floer homology [FOOO1] in which modifying the definition of Floer boundary map to cure the anomaly ∂ 2 = 0. (3) A general categorial construction of A ∞ homotopy associated to Lagrangian correspondence is given as a natural transformation between two A ∞ functors using the quilted setting in [MWW]. It appears to us that to apply this general construction, we should lift a family of Hamiltonian to a Lagrangian cobordism arising as the suspension of the associated Lagrangian isotopy. We avoid using this general set-up and Lagrangian suspension but instead quickly provide a direct and down-to-earth construction as a variation of the construction given in [FOOO2,Section 4.6].
8.2. Timewise decorated time-allocation stable curves. We recall the evaluation map
ev_{[0,1]} : N^{k+1}_{para}(y; x) → [0, 1],
which naturally induces a map A → [0, 1]; α ↦ s(α), where s(α) is the time in [0, 1] at which the irreducible component ((Σ_α, z_α), u_α) of ((Σ, z), u) is attached. For our modification of the above-mentioned moduli space N^{k+1}_{w,para}(y; x), we assign a total order on the component set A of Σ which is determined by the rooted ribbon structure on (Σ, z) ∈ M^{k+1}. This order will play an important role in establishing the homotopy relation (8.1) between F_{λ''} and F_{λ'} ∘ F_λ.
(In diagram form: F_λ : WF(M \ K; H_{g_0}) → WF_{W_λ}(M \ K; H_{g_0'}), F_{λ'} : WF_{W_λ}(M \ K; H_{g_0'}) → WF(M \ K; H_{g_0''}), F_{λ''} : WF(M \ K; H_{g_0}) → WF(M \ K; H_{g_0''}), with the homotopy H relating F_{λ'} ∘ F_λ and F_{λ''}.)
Motivated by this diagram and to achieve the A ∞ homotopy relation given in Definition 8.1, we would like to attach the m-component only at ∂[0, 1] = {0, 1} in the way that the one with time-allocation ρ = 0 at s = 0 and ρ = 1 at s = 1. We note that the components with ρ(α) = 0, 1 are type m-components.
Recall from Definition 7.5 that Γ = Γ(N) is the rooted ribbon tree associated to N ∈ N k+1 or M k+1 . From now on, we canonically identify V ext (Γ) with the boundary punctures z = (z 0 , . . . , z k ) and V int (Γ) with the component A. Each edge of Γ carries the natural orientation which flows into the root z 0 .
Each z_j, j ≥ 1, determines a unique (minimal) edge path ℓ_j from z_0 to z_j respecting the orientation of Γ (in the opposite direction). For an interior vertex v of Γ, we say
v ∈ ℓ_j if v ∈ V_int(ℓ_j), where V_int(ℓ_j) = {v ∈ V_int(Γ) | v lies on ℓ_j}.
We have an obvious order on each V_int(ℓ_j) given by the edge distance from the root. Denote by v_α ∈ V_int(Γ) the interior vertex corresponding to α ∈ A. Now define
j_tm : A → {1, ..., k}; α ↦ min{j | v_α ∈ ℓ_j}.
Definition 8.3. Let Σ be a bordered stable curve of genus zero with k + 1 marked points and with combinatorial type T. Let α, β ∈ A. We say α ⪯ β if one of the following holds:
(1) j_tm(α) < j_tm(β);
(2) j_tm(α) = j_tm(β) and ρ(α) ≤ ρ(β).
It is easy to check that ⪯ defines a total order on A and depends only on the ribbon structure of Γ. (We refer to [Sa, p.7] for a more pictorial description of this order.) We denote the set of descendants of δ by Desc(δ) := {α ∈ A | α ⪯ δ, α ≠ δ}.
We now recall the universal family S k+1 → M k+1 . Each element v = (Σ, j; σ) ∈ S k+1 picks out a distinguished component δ(v) ∈ A that contains the point σ ∈ Σ.
Definition 8.4 (Universal parametrization). Let v ∈ S^{k+1} and let δ(v) be the associated component. We define an s-parameterized map s_v : [0, 1] × A → [0, 1] by
s_v(s)(α) = 1 if α ∈ Desc(δ(v)); s if α = δ(v); 0 otherwise.    (8.4)
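As an illustration of (8.4) (a hypothetical configuration, not taken from the text): suppose A = {α_1, α_2, α_3} with α_1 ⪯ α_2 ⪯ α_3 and δ(v) = α_2, so that Desc(δ(v)) = {α_1}. Then
\[
s_v(s)(\alpha_1) = 1,\qquad s_v(s)(\alpha_2) = s,\qquad s_v(s)(\alpha_3) = 0
\quad\text{for } s\in[0,1].
\]
Thus the components below the distinguished one are frozen at 1, those above it at 0, and only the distinguished component moves with s.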
Note that the map s v is determined by the ribbon graph Γ(v) and the distin-
guished component δ(v). Indeed, s v is defined over S k+1 /∼ where v ∼ v ⇐⇒ Γ(v) = Γ(v ), δ(v) = δ(v ).
We are ready to describe a parameter space L k+1 defined over S k+1 /∼ which will be used in the construction of A ∞ homotopy or 2-morphism.
Definition 8.5. For each v = [Σ, j; σ] ∈ S k+1 /∼ with k ≥ 2, we define L k+1 to be the set of quadruples (Σ, j; ρ v ; s v ),
where • ρ v is a time allocation in Definition 7.3 satisfying ρ(δ(v)) = 0 or 1, • s v is a s-parameterized map defined in Definition 8.4.
By extending Definition 7.5, let us denote the ribbon graph induced from L ∈ L^{k+1} by Γ(L). The ribbon graph Γ(L) carries the additional decoration of δ(v). Note that δ(v) ∈ V_f(Γ) by the condition on ρ_v in Definition 8.5. Let L_Γ = {L ∈ L^{k+1} | Γ(L) = Γ}; then we have the decomposition
L^{k+1} = ⋃_{Γ∈G^{k+1}} L_Γ,    (8.5)
where G^{k+1} is the set of combinatorial types of L^{k+1}. As analogues of Lemma 7.6 and Proposition 7.7 we have
Lemma 8.6. For any Γ ∈ G^{k+1}, L_Γ is diffeomorphic to D^{|Γ|+1}.
Proof. The space L Γ additionally has a datum of s-parameterized map s v compared to N Γ , so the lemma follows.
Proposition 8.7. The parameter space L k+1 has a structure of smooth manifold with boundaries and corners, which are compatible with the decomposition (8.5). Moreover, there is a projection map
L k+1 → [0, 1]; (Σ, j; ρ v ; s v ) → s v (s)(δ(v)) = s
which is a smooth submersion.
Proof. The map factors through the map L k+1 → M k+1 × [0, 1] defined by
(Σ, j; ρ v ; s v ) → (Σ, j; s v (s)(δ(v))) = (Σ, j; s),
so the submersion property follows.
Lemma 8.8. The boundary ∂L^{k+1} is decomposed into the union of the following three types of fiber products:
(1) M^{ℓ+1} # (N^{k_1+1} × ⋯ × L^{k_i+1} × ⋯ × N^{k_ℓ+1}), where Σ_{i=1}^{ℓ} k_i = k;
(2) L^{ℓ+1} # M^{k−ℓ+1}, where ℓ = 1, ..., k − 1;
(3) {0, 1} × N^{k+1}.
8.3. Construction of A ∞ homotopy. We first describe the situation we are in for the purpose of constructing an A ∞ functor.
Definition 8.9 (Floer data for A ∞ homotopy). A Floer datum D h (L) for L = (Σ, j; ρ; s) ∈ L k+1 is a quintuple
(w, {β α } α∈A , H L = {H α } α∈A , J L = {J α } α∈A , η)
defined by modifying a Floer datum for A ∞ functor in Definition 7.10 as follows:
(1) A (k + 1)-tuple of non-negative integers w = (w 0 , . . . , w k ) which is assigned to the marked points z satisfying
w 0 = w 1 + · · · + w k .
(2) A consistent choice of one-forms β α ∈ Ω 1 (Σ α ), for each α ∈ A, constructed in Section 3.2 satisfying j * β α agrees with w j dt. (3 ρ,s ) Hamiltonian perturbations H L which consist of maps
H α : Σ α → H ρ(α) sv(s,α)
for each α ∈ A, see (8.2). Its pull-back under the map jα uniformly converges to H (w jα ) 2 • ψ w jα near each z jα for some H ∈ H ρ(α) sv(s,α) . (4 ρ,s ) Almost complex structures J L which consist of maps
J α : Σ α → J ρ(α) sv(s,α)
for each α ∈ A, whose pull-back under jα uniformly converges to φ * w jα J near each z jα for some J ∈ J ρ(α) sv(s,α) , see also (8.2). (5) A map η : ∂Σ → [1, +∞) which converges to w j near each z j . A universal choice of Floer data D h for the A ∞ homotopy consists of the collection of D h for every (Σ, j; ρ; s) ∈ k≥1 L k+1 satisfying the corresponding conditions as in Definition 5.3 for L k+1 and its boundary strata.
Figure 3. An example of slit domains for the A_∞ homotopy

Here we would like to remind the readers that the current geometric circumstance is that of 2-morphisms. For
x = (x_1, x_2, ..., x_k) ∈ X(H_{g_0}; L_0, L_1) × ⋯ × X(H_{g_0}; L_{k−1}, L_k) and y ∈ X(H_λ; L_0, L_k),
we consider a system of maps {u_α : Σ_α → T*N}_{α∈A} with respect to D_h over L = (Σ, j; ρ; s) ∈ L^{k+1} satisfying the following:
(1-a) For α ∈ A_Σ except the distinguished component, u_α satisfies
(du_α − X_{H_α} ⊗ β_α)^{(0,1)}_{J_α} = 0 for H_α ∈ H^{ρ(α)}_* and J_α ∈ J^{ρ(α)}_*,
where * = 0, 1 depending on s(α).
(1-b) For the distinguished component δ ∈ A in Definition 8.4 and for each s ∈ [0, 1], we consider a map u_{δ(s)} satisfying
(du_{δ(s)} − X_{H_{δ(s)}} ⊗ β_δ)^{(0,1)}_{J_{(s,δ)}} = 0
and the conditions (2)-(6) for N^{k+1}_{(Σ,j;ρ)}(y; x) in (7.10).
For each s_0 ∈ [0, 1], let us denote the space of maps {u_α}_{α∈A\{δ}} ∪ {u_{δ(s_0)}} satisfying the above by
L^{k+1}_L(s_0)(y; x) = L^{k+1}(y; x; D_h(Σ, j; ρ; s; s_0)).
Now we consider parameterized moduli spaces given by
L^{k+1}_L(y; x) = ⋃_{s∈[0,1]} L^{k+1}_L(s)(y; x); L^{k+1}(s)(y; x) = ⋃_{L∈L^{k+1}} L^{k+1}_L(s)(y; x); L^{k+1}(y; x) = ⋃_{L∈L^{k+1}} L^{k+1}_L(y; x) = ⋃_{s∈[0,1]} L^{k+1}(s)(y; x).
Similarly as for N k+1 (y; x), we have a forgetful map forget : L k+1 (y; x) → L k+1
for k ≥ 2 and the evaluation map ev [0,1] : L k+1 (y; x) → [0, 1] defined by
{u α } α∈A\{δ} ∪ {u δ(s0) } → s 0 . (8.7)
Standard parameterized transversality then proves the following
Lemma 8.10. Suppose that L^{k+1}(s_0)(y; x) with s_0 = 0, 1 are transversal. Then, for a generic choice of Floer data D_h for the A_∞ homotopy with fixed ends at s = 0, 1, the moduli space L^{k+1}(y; x) is a manifold of dimension
µ(y) − Σ_{i=1}^k µ(x_i) + k.
Again, when µ(y) = Σ_{i=1}^k µ(x_i) − k, a map is defined by counting rigid solutions in L^{k+1}(y; x) with respect to the Floer data D_h in Definition 8.9:
h^k_D(ψ^{w_1}(x_1) ⊗ ψ^{w_2}(x_2) ⊗ ⋯ ⊗ ψ^{w_k}(x_k)) = Σ_y (−1)^† #L^{k+1}(y; x) ψ^{w_0}(y), where † = Σ_{i=1}^k i·µ(x_i).
By the same identification as used in the definition m k and f k , we will obtain a A ∞ homotopy
h k : CW * (L 1 , L 0 ; H g0 ) ⊗ · · · ⊗ CW * (L k , L k−1 ; H g0 ) → CW * (L k , L 0 ; H g 0 )[−k].
For the later purpose, we describe the structure of the zero dimensional component more carefully. Recall that the fiberwise dimension of L k+1 (y; x) for the current circumstance is −1 and that we assume the cases of s = 0, s = 1 are generic. Therefore there is no contribution therefrom. Therefore there are a finite number of 0 < s 1 < s 2 < · · · < s j < 1 such that
# L k+1 (y; x) = j i=1 # L k+1 (s)(y; x)| s=si .
Furthermore, generically the associated moduli space L k+1 (s)(y; x)| s=si is minimally degenerate, i.e., the cokernel of the associated linearized operator has dimension 1. (See [Lee] for the relevant parameterized gluing result.) Denote by Sing I ⊂ [0, 1] the set of these points.
In order to investigate the algebraic relation for the A_∞ homotopy maps {h^k}_{k∈N}, we need to look at the codimension-one strata of the moduli space L^{k+1}(y; x). In particular, we need to examine the structure of the boundary of the one-dimensional components, i.e., of those satisfying
µ(y) − Σ_{i=1}^k µ(x_i) + k = 1.    (8.8)
Theorem 8.11. The maps {h k } k∈N satisfies the homotopy relation (8.1).
Proof. We first recall the current geometric circumstance. We are given two Hamiltonian H 0 ∈ H( g i ) and H 1 ∈ H( g λ ), and two paths r → H * (r) ∈ H( g(r)) with * = 0, 1 connecting the two Hamiltonians, and then a homotopy s → H s interpolating the two paths with s ∈ [0, 1].
(In diagram form: the two functors F_λ(H_0(r)), F_λ(H_1(r)) : WF(M \ K; H_{g_0}) → WF(M \ K; H_{g_0'}) and the homotopy H(H_s(r)) between them.)
Let us consider a sequence
(s^{(i)}_δ, Γ(u^{(i)}), (Σ^{(i)}_α, ρ^{(i)}_α)_{α∈A})_{i∈N}    (8.9)
constructed from u ∈ L^{k+1}(y; x) satisfying (8.8), i.e., when the fiberwise dimension becomes zero. Here s_δ is the s-image of the distinguished component δ, see (8.7), and Γ(u) is a ribbon graph with decorations defined as in Definition 7.5. Generically, the change of Γ(u) occurs at a finite number of points s = s_0 in [0, 1], provided the corresponding moduli spaces are regular. We denote by
Sing II ⊂ [0, 1]
the set of these points. There are also a finite number of points s at which the fiber ev −1 [0,1] (s) contains an element that is minimally degenerate, i.e., the cokernel of the linearized operator has dimension 1. We denote by
Sing III ⊂ [0, 1].
Generically, the three subsets Sing I , Sing II , Sing III are pairwise disjoint. We denote the union by Sing([0, 1] s ).
The sequence (8.9) with fixed type of domain configuration Γ(Σ), see Definition 7.5. By choosing a subsequence, we may assume that the sequence s (i) δ is increasing.
Then we have the following possible scenarios:
(1) s^{(i)}_δ converges to s_0 with 0 < s_0 < 1, one of the points in Sing([0, 1]_s).
(2) lim_{i→+∞} s^{(i)}_δ = 0.
(3) lim_{i→+∞} s^{(i)}_δ = 1.
We examine the three cases more closely. The first case s (i) δ → s 0 is further divided into the following four different scenarios by Gromov-Floer compactness:
(1-i) One of the components Σ^{(i)}_α with 0 < ρ^{(i)}_α < 1 splits into two components at s = s_0 as i → +∞.
(1-ii) Two components Σ_{α_1} and Σ_{α_2} at s = s_0 sharing one asymptotic matching condition satisfy 0 < lim ρ^{(i)}_{α_1} = lim ρ^{(i)}_{α_2} < 1 when i → +∞.
(1-iii) lim_{i→+∞} ρ^{(i)}_α = 0.
(1-iv) lim_{i→+∞} ρ^{(i)}_α = 1.
Each of these cases is the analog to the corresponding scenario of the study of f-moduli space N k+1 (y; x) in the previous section. One difference is that the parameter s = s 0 is not transversal fiberwise but transversal as a parameterized problem. So among the two components α 1 , α 2 one is an m-component and the other is an h-component, i.e., a fiberwise f-component with minimal degeneracy. Then by the same argument as in N k+1 (y; x) applied to the parametric case, the case (1-i) and (1-ii) cancel out.
For convenience, we denote the associated structure maps by m_*, f_*, and h_*, respectively. A codimension-one phenomenon of type (1-iii) occurs when α is an index for an m-component. We note that this component α is generically regular and the underlying zero-dimensional h-component containing α is also regular as a parameterized moduli space. Since at ρ = 0, 1 the Floer datum is constant over s ∈ [0, 1], the same configuration must occur for all s near s_0. This contradicts the fact that s = s_0 is contained in the discrete set Sing([0, 1]_s), unless the m-bubble is attached at s = 0, 1. Then it contributes the terms
Σ_{m,n} (−1)^† h^{k−m+1}(x_1, ..., x_n, m^m_{g_0}(x_{n+1}, ..., x_{n+m}), x_{n+m+1}, ..., x_d),    (8.10)
where † = Σ_{i=1}^n µ(x_i) − n.
For the case (1-iv), it follows from (8.4) that a codimension-one stratum can be obtained when α is an index for the root component and is different from the distinguished component δ. Then it corresponds to
Σ_{r,i} Σ_{s_1,...,s_r} (−1)^♣ m^r_{g_0'}(f^{s_1}(x_1, ..., x_{s_1}), ..., f^{s_{i−1}}(..., x_{s_1+⋯+s_{i−1}}), h^{s_i}(x_{s_1+⋯+s_{i−1}+1}, ..., x_{s_1+⋯+s_i}), g^{s_{i+1}}(x_{s_1+⋯+s_i+1}, ...), ..., g^{s_r}(x_{d−s_r+1}, ..., x_d)),    (8.11)
where ♣ = Σ_{ℓ=1}^{s_1+⋯+s_{i−1}} µ(x_ℓ) − Σ_{ℓ=1}^{i−1} s_ℓ. Among the many terms from the cases (2) and (3), all of them cancel out except the following two:
(2') lim_{i→+∞} s^{(i)}_δ = 0, where δ is the minimum index with respect to ⪯.
(3') lim_{i→+∞} s^{(i)}_δ = 1, where δ is the maximum index with respect to ⪯,
because of the total order and the definition of the time-wise product s. These two cases give the terms
−f^d(x_1, ..., x_d), g^d(x_1, ..., x_d),    (8.12)
respectively. The algebraic relation coming from the combination of (8.10), (8.11), and (8.12) is nothing but the A ∞ homotopy relation (8.1).
This concludes the proof of the following theorem.
Theorem 8.12. Let M be a closed manifold and g, g be two metrics on M such that their cylindrical adjustments satisfy g 0 ≥ g 0 . Suppose λ 1 and λ 2 be homotopies connecting g 0 and g 0 and let Γ : [0, 1] 2 → C(N ) be a homotopy between λ 1 and λ 2 . Then there exists an A ∞ homotopy between F λ1 and F λ2 .
We denote the resulting A ∞ category by WF g (M \ K) := WF(T * N, H g0 ).
(8.13)
In the next section, we will prove the quasi-equivalence class of the A ∞ category WF(T * N ; H g0 ) is independent of the choice of metrics g.
Independence of choice of metrics
In this section, we consider the wrapped Fukaya categories WF(T * N, g 0 ) and WF(T * N, h 0 ) constructed on the knot complement M \ K for two metrics g, h of M .
Moreover, it follows from Proposition 2.2 that the two Riemannian metrics g_0 and h_0 are Lipschitz equivalent, i.e., there exists a constant C > 1 such that
(1/C) h_0 ≤ g_0 ≤ C h_0    (9.1)
on M \ K.
For the convenience of notation, we denote the kinetic energy Hamiltonian H g0 associated to g also by H(g 0 ) in this section.
Next we prove the following equivalence theorem.
Theorem 9.1. Let (g, h) be a pair of Lipschitz equivalent metrics on an orientable tame manifold N . Then two induced wrapped Fukaya categories WF g (T * N ) and WF h (T * N ) are quasi-equivalent.
Proof. By the Lipschitz equivalence (9.1), we have
H(g 0 ) ≤ H(h 0 /C); H(h 0 ) ≤ H(g 0 /C) (9.2)
recalling that the Hamiltonian is given by the dual metric. We have two A_∞ functors between the induced wrapped Fukaya categories
Φ : WF(T*N; H(h_0)) → WF(T*N; H(g_0/C)); Ψ : WF(T*N; H(g_0)) → WF(T*N; H(h_0/C)),    (9.3)
which are defined by the standard C^0-estimates for monotone homotopies. Note that the morphism is naturally defined from the smaller metric to the bigger one. Now consider the compositions of the functors
Ψ ∘ Φ : WF(T*N; H(h_0)) → WF(T*N; H(h_0/C²)); Φ ∘ Ψ : WF(T*N; H(g_0)) → WF(T*N; H(g_0/C²)).
These are homotopic to the natural isomorphisms induced by the rescaling of metrics
ρ C 2 : WF(T * N ; H(h 0 )) → WF(T * N, H(h 0 /C 2 )); η C 2 : WF(T * N ; H(g 0 )) → WF(T * N ; H(g 0 /C 2 )),
respectively. This proves that Φ and Ψ are quasi-equivalences.
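For the reader's convenience, here is how (9.2) unwinds from (9.1); this is only a sketch using the fact, recalled above, that the kinetic Hamiltonian is given by the dual metric:
\[
\tfrac{1}{C}\,h_0 \le g_0 \le C\,h_0
\;\Longrightarrow\;
\tfrac{1}{C}\,h_0^* \le g_0^* \le C\,h_0^*
\;\Longrightarrow\;
H(g_0) = \tfrac12 |p|^2_{g_0^*} \le \tfrac{C}{2}|p|^2_{h_0^*} = H(h_0/C),
\]
and symmetrically H(h_0) ≤ H(g_0/C), since rescaling a metric by 1/C rescales its dual metric by C.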
Remark 9.2. Another class of metrics we will study in [BKO] is a complete hyperbolic metric h on N = M \ K for hyperbolic knots K. In such a case, we can exploit hyperbolic geometry to directly construct another A ∞ category WF(ν * T ; H h ) without taking a cylindrical adjustment. Since h is not Lipschitz-equivalent to a cylindrical adjustment g 0 on M \ K of any smooth metric g on M , that category may not be quasi-equivalent to WF(M \ K) constructed in the present paper.
Wrap-up of the construction of wrapped Fukaya category WF(M \K).
In this section, we restrict ourselves to the case when N is a knot complement M \K in a closed oriented 3-manifold M , and wrap up the proofs of the main theorems stated in the introduction. We first note that one class of Lipschitz equivalent metrics on N considered in Theorem 9.1 is obtained by restricting any Riemannian metric g M to M . Note that these metrics on N is incomplete.
Theorem 9.3. Let M be a closed oriented 3-manifold equipped with a metric and let K ⊂ M be a knot. Then there exists an A_∞ category WF(M \ K) whose isomorphism class depends only on the isotopy type of K in M.
Proof. Let g be a Riemannian metric on M and restrict the metric to M \ K. By the remark above, the quasi-isomorphism type of A ∞ category WF(T * N, H g0 ) does not depend on the choice of the metric.
It remains to show the isotopy invariance of WF(T * N, H g0 ). More precisely, let K 0 , K 1 be isotopic to each other. Suppose φ t : M → M be an isotopy such that
K 1 = φ 1 (K 0 ). Denote K t = φ t (K 0 ).
We fix a metric g on M and consider the family of metric g t := φ t * g. We also fix a pair of precompact domains W 0 ⊂ W 0 ⊂ M \ K with smooth boundary and fix a cylindrical adjustment g of g outside W 0 so that g 0 = g| ∂W0 ⊕dr 2 on M \W 0 which is interpolated in W 0 \W 0 to g. Then we consider the isotopy of pairs (M \ K t , g t ). We denote W t = φ t (W 0 ) and W t = φ t (W 0 ). Then choose a smooth family of cylindrical adjustments g t for g t outside W t ⊂ M \ K t with g t = φ T * (g) on W t which are interpolated in between. Then we construct an A ∞ functor Φ : WF g (T * (M \ K 0 )) → WF φ 1 * g (T * (M \ K 0 )) which are given by L → φ 1 (L) objectwise and whose morphism is defined by the same construction performed in the previous construction of A ∞ functor. By considering the inverse isotopy, we also have Ψ : WF(T * (M \ K 1 ); H g0 ) → WF(T * (M \ K 0 ); H g 0 ).
Then we consider the isotopy which is a concatenation of φ t and its inverse isotopy which is homotopic to the constant isotopy. Then the same construction of the homotopy as the one in Section 7 can be applied to produce a A ∞ homotopy between Ψ • Φ and the identity functor.
This proves WF(T * (M \K 0 ), H g0 ) is quasi-isomorphic to WF(T * (M \K 0 ), H φ 1 * g ), which finishes the proof.
Construction of Knot Floer algebra HW (∂ ∞ (M \ K))
In this section, we give construction of Knot Floer algebra mentioned in Definition 1.5.
Denote by G g0 (T ) the energy of the shortest geodesic cord of T relative to the metric g 0 . Then the following lemma is a standard fact in Riemannian geometry since T is a compact smooth submanifold and g 0 is of bounded geometry.
Lemma 10.1. Denote by G_{g_0}(T) the infimum of the energy of non-constant geodesic cords. Then G_{g_0}(T) > 0 and all non-constant Hamiltonian chords γ of (ν*T; H_{g_0}) satisfy
A_{H_{g_0}}(γ) = −E(c_γ) ≤ −G_{g_0}(T).
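To orient the reader on the sign, here is a sketch of the identity in Lemma 10.1 under the action convention A_H(γ) = ∫_0^1 H(γ(t)) dt − ∫ γ*θ (the convention is our assumption; only the resulting identity is used in the text). For a Hamiltonian chord γ = (q, p) of the kinetic Hamiltonian H_{g_0} = ½|p|², which lifts a geodesic cord c_γ with p = g_0(q̇, ·),
\[
\int \gamma^*\theta = \int_0^1 p(\dot q)\,dt = \int_0^1 |p|^2_{g_0}\,dt = 2E(c_\gamma),
\qquad
\int_0^1 H_{g_0}(\gamma(t))\,dt = \tfrac12\int_0^1 |p|^2_{g_0}\,dt = E(c_\gamma),
\]
so that A_{H_{g_0}}(γ) = E(c_γ) − 2E(c_γ) = −E(c_γ).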
Now we perform the algebra version in the Morse-Bott settting of the construction given in the previous sections associates an A ∞ algebra
CW g (L, T * (M \ K)) := C * (T ) ⊕ Z{X(H g0 ; ν * T, ν * T )}
where C*(T) is chosen to be a cochain complex of T, e.g., the de Rham complex, similarly as in [FOOO1, FOOO2]. We note that for 0 < ε_0 < G_{g_0}(T) sufficiently small, we have
X^{≥−ε_0}(H_{g_0}; ν*T, ν*T) ≅ T² and X^{<−ε_0}(H_{g_0}; ν*T, ν*T)
is in one-to-one correspondence with the set of non-constant geodesic cords of (T, g_0), and so
CW^{<−ε_0}(L, T*(M \ K); H_{g_0}) = Z⟨X^{<−ε_0}(H_{g_0}; ν*T, ν*T)⟩.
It follows from the lemma that the associated complex (CW g (L, T * (M \ K)), m 1 ) has a subcomplex
(CW ≥− 0 (L, T * (M \ K); H g0 ), m 1 ) ∼ = (C * (T ), d)
where d is the differential on C * (T ). We denote the associated homology by
HW g (L, T * (M \ K)) := HW (L, T * (M \ K); H g0 ).
Not to further lengthen the paper and since the detailed construction is not explicitly used in the present paper, we omit the details of this Morse-Bott construction till [BKO] where we provide them and describe its cohomology in terms of the Morse cohomology model of C * (T ). With this mentioned, we will pretend in the discussion below that (ν * T, H g0 ) is a nondegenerate pair with ν * T implicitly replaced by a C 2 -small perturbation L thereof.
Let N (K), N (K) be a tubular neighborhood of K such that
N (K) ⊂ Int N (K).
We denote M \ N(K) = N^{cpt} and similarly for N'(K). We denote T = ∂N(K) and L = ν*T. We define the cylindrical adjustment g_0 of the metric g on M with respect to the exhaustion (4.1) by
g_0 = g_0 on N'^{cpt};  g_0 = da² ⊕ g_0|_{∂N^{cpt}} on N^{cpt} \ K,
suitably interpolated on N^{cpt} \ N'^{cpt} and fixed.

Theorem 10.2. The quasi-isomorphism class of the A_∞ algebra
(CW(ν*T, T*(M \ K); H_{g_0}), m), m = {m^k}_{0≤k<∞},
does not depend on the various choices involved, such as the tubular neighborhood N(K) and the metric g on M.

Proof. The metric independence can be proved in the same way as in the proof given at the categorical level.
Therefore we focus on the choice of tubular neighborhood. Let N 1 , N 2 be two different tubular neighborhoods of K and denote by T 1 , T 2 be the boundaries T 1 = ∂N 1 and T 2 = ∂N 2 . Choose any diffeomorphism φ : M → M such that φ(T ) = T and that it is isotopic to the identity fixing K. Then the symplectomorphism (dφ −1 ) * , which is fiberwise linear (over the map φ), induces a natural quasi-isomorphism
CW g (ν * T 1 , T * (M \ K)) ∼ = CW φ * g (ν * T 2 , T * (M \ K)).
On the other hand, the latter is quasi-isomorphic to CW g (ν * T 2 , T * (M \ K)) by the metric independence. Invariance under other changes can be proved similarly and so omitted.
Denote the resulting A_∞ algebra by
CW (ν * T, T * (M \ K)) := CW (ν * T, T * (M \ K); H g0 )
whose quasi-isomorphism class is independent of the metric g. Finally we prove the invariance thereof under the isotopy of K. The proof is almost the same as that of the categorical context given in the proof of Theorem 9.3. For readers' convenience, we duplicate the proof here with necessary changes made. Suppose K 0 , K 1 be isotopic to each other and let φ t : M → M be an isotopy such that K 1 = φ 1 (K 0 ) as before. Denote K t = φ t (K 0 ). We fix a metric g on M and consider the family of metric g t := φ t * g. We also fix a pair of precompact domains W 0 ⊂ W 0 ⊂ M \ K with smooth boundary and fix a cylindrical adjustment g of g outside W 0 so that g 0 = g| ∂W0 ⊕ dr 2 on M \ W 0 which is interpolated in W 0 \ W 0 to g. Then we consider the isotopy of pairs (M \ K t , g t ). We denote W t = φ t (W 0 ) and W t = φ t (W 0 ). Then choose a smooth family of cylindrical adjustments g t for g t outside W t ⊂ M \ K t with g t = φ t * (g) on W t which are interpolated in between. Then we construct an A ∞ map Φ : CW (ν * T 0 , T * (M \ K 0 ); H g0 ) → CW (ν * T 1 , T * (M \ K 1 ); H g 0 , g = φ t * (g) is defined by the same construction performed in the previous construction of A ∞ functor. By considering the inverse isotopy, we also have
Ψ : CW (ν * T 1 , T * (M \ K 1 ); H g 0 ) → CW (ν * T 0 , T * (M \ K 0 ); H g0 ).
Then we consider the isotopy which is a concatenation of φ t and its inverse isotopy which is homotopic to the constant isotopy. Then the same construction of the homotopy as the one in Section 7 can be applied to produce a A ∞ homotopy between Ψ • Φ and the identity map. This finishes the proof.
Definition 10.3 (Knot Floer algebra). We denote by
HW * (∂ ∞ (M \ K)) = ∞ d=0 HW d (∂ ∞ (M \ K))
the resulting (isomorphism class of the) graded group and call it the knot Floer algebra of K in M .
The same argument also proves that the isomorphism class of the algebra depends only on the isotopy class of the knot K.
Part 2. C 0 -estimates for the moduli spaces
The compactifications of the moduli spaces such as M_w(x_0; x) = M_w(x; H, J, η) (and also N_w(x_0; x)) are essential in the definition of the wrapped Floer cohomology and its algebraic properties, the A_∞ structure. The main purpose of the present section is to establish the C^0 estimates, Propositions 6.1, 6.2, 7.14, and 7.15, postponed in the previous sections.
For this purpose, we observe that thanks to the choice we made for J to satisfy (5.4) in Remark 5.2, u satisfies (5.5) if and only if the composition v(z) = ψ −1 η(z) (u(z)) satisfies the autonomous equation
(dv − X_H ⊗ β)^{(0,1)}_{J_g} = 0, v(z) ∈ L_i for z ∈ ∂Σ between z_i and z_{i+1}, where i ∈ Z_{k+1},
v ∘ ε_j(−∞, t) = x_j(t) for j = 1, ..., k, v ∘ ε_0(+∞, t) = x_0(t).    (10.1)
Therefore it is enough to prove the relevant C^0-estimates for the map v, which we will do in the rest of this part.
11. Horizontal C 0 estimates for m-components Proof of Proposition 6.1. Recall that on N end , we have the product metric g = da 2 ⊕ h. Because of this we have the Riemannian splittings
N end ∼ = [0, ∞) × T with T = ∂N end and T * N end ∼ = T * (∂N end ) ⊕ T * [0, ∞).
Furthermore the Sasakian almost complex structure has the splitting J g = i ⊕ J h where J h is the Sasakian almost complex structure on ∂N end and i is the standard complex structure on T * [0, ∞) ⊂ T * R ∼ = C.
Denote q = (a, q T ) ∈ [0, ∞) × T . Then we have the orthogonal decomposition p = (p a , p T ) and hence |p| 2 g = |p T | 2 h + |p a | 2 . Therefore the (a, p a )-components of X H and JX H are given by
π_{T*[0,∞)}(X_H(q, p)) = p_a ∂/∂a, π_{T*[0,∞)}(J X_H) = p_a ∂/∂p_a.    (11.1)
Let z = x + √−1 y be a complex coordinate of (Σ, j) such that β = dt away from the singular points of the minimal area metric. If we write β = β_x dx + β_y dy, then (5.6) is separable. Another straightforward calculation shows that the (a, p_a)-component of the equation becomes
∂a(v)/∂x − ∂p_a(v)/∂y − β_x p_a(v) = 0,
∂p_a(v)/∂x + ∂a(v)/∂y − β_y p_a(v) = 0.    (11.2)
We note
d(β • j) = − ∂β x ∂x + ∂β y ∂y dx ∧ dy.
In particular, if d(β • j) = 0, this vanishes.
For any given one-form β, a straightforward calculation using these identities leads to the following formula for the (classical) Laplacian
Δ(a(v)) = p_a(v)(∂β_x/∂x + ∂β_y/∂y) − β_x ∂(a(v))/∂y + β_y ∂(a(v))/∂x    (11.3)
for any solution v of (5.6).
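For completeness, the 'straightforward calculation' leading to (11.3) can be carried out as follows (our verification, with Δ = ∂²/∂x² + ∂²/∂y²). From (11.2) we have ∂_x a = ∂_y p_a + β_x p_a and ∂_y a = −∂_x p_a + β_y p_a, hence
\[
\Delta(a(v)) = \partial_x\big(\partial_y p_a + \beta_x p_a\big) + \partial_y\big(-\partial_x p_a + \beta_y p_a\big)
= p_a\Big(\frac{\partial\beta_x}{\partial x}+\frac{\partial\beta_y}{\partial y}\Big)
+ \beta_x\,\partial_x p_a + \beta_y\,\partial_y p_a,
\]
and substituting ∂_x p_a = −∂_y a + β_y p_a and ∂_y p_a = ∂_x a − β_x p_a back into the last two terms gives β_x ∂_x p_a + β_y ∂_y p_a = −β_x ∂_y a + β_y ∂_x a, which is exactly (11.3).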
Lemma 11.1. Then for any one-form β on Σ, a(v) satisfies
∆(a(v)) = −β x ∂(a(v)) ∂y + β y ∂(a(v)) ∂x (11.4)
for any solution u of (5.6).
Proof. This immediately follows from (11.3) since β satisfies 0 = d(β • j) = ( ∂βx ∂x + ∂βy ∂y )dx ∧ dy. Therefore we can apply the maximum principle for v. Now let L 0 , . . . , L k be Lagrangian submanifolds contained in Int N cpt . Then we have a(z) ≤ a 0 for a 0 > 0 for all z ∈ ∂Σ = D 2 \ {z 0 , . . . , z k }. In particular, the end points of x i are contained in Int N cpt . The maximum principle applied to the function t → a • x i (t) on [0, w i ] prevents the image of x i from entering in the cylindrical region N end .
This proves that the whole image of x i is also contained in W . This finishes the proof of the proposition by applying the maximum principle to v based on Lemma 11.1.
Remark 11.2. We comment that the conformal rescaling of u to v defined by v(z) = ψ −1 η(z) (u(z)) does not change the horizontal part, i.e., π • v = π • u.
12. Vertical C 0 estimates for m-components
We next examine the C 0 -bound in the fiber direction of T * N . Writing ρ = e s • v, a straightforward calculation (see [Se2,(3.20)]) derives
Δρ = |dv − β ⊗ X_H|² − ρH'(ρ) (dρ ∧ β)/(dx ∧ dy) − ρH'(ρ) (dβ)/(dx ∧ dy)    (12.1)
for any complex coordinates z = x + iy for (Σ, j). We note that from this equation, the (interior) maximum principle applies. When ∂Σ = ∅, we also need to examine applicability of strong maximum principle on the boundary ∂Σ. In [AS], Abouzaid-Seidel used certain integral estimates to control C 0 -bound instead of the strong maximum principle. Here we prefer to use the strong maximum principle and so provide the full details of this application of strong maximum principle, especially for the moving boundary case.
In this section we consider the case of fixed Lagrangian boundaries. For this purpose, the following C 0 -bound is an essential step in the case of noncompact Lagrangian such as the conormal bundles L i = ν * (∂N cpt ). For given x = (x 1 , . . . , x k ) where x j ∈ X(w j H; L j−1 , L j ) with j = 1, . . . , k and x 0 ∈ X(w j H; L 0 , L k ), we define
ht(x; H, {L i }) := max 0≤j≤k p • x j C 0 (12.2)
Proof of the following proposition is a consequence of the strong maximum principle based on the combination of the following (1) ρ = r • v = e s • v with r = |p| g satisfies (12.1), (2) the conormal bundle property of L i and (3) the special form of the Hamiltonian H = 1 2 r 2 , which is a radial function. (See [EHS], [Oh2] for a similar argument in a simpler context of unperturbed Jholomorphic equation.)
Proof of Proposition 6.2. Since the interior maximum principle is easier and totally standard for the type of equation (12.1), we focus on the boundary case.
Due to the asymptotic convergence condition and Σ is compact, the maximum of the function z → p(v(z)) is achieved. If it happens on one of ∞'s in the strip-like end, we are done.
So it remains to examine that case where the maximum is achieved at z 0 ∈ ∂Σ. We will apply a strong maximum principle to prove that the maximum cannot be achieved beyond the height of the asymptotic chords in the fiber direction of T * N . However to be able to apply the strong maximum principle, we should do some massaging the equation (12.1) into a more favorable form. Here the condition i * β = 0 and dβ = 0 near the boundary enters in a crucial way.
Choose a complex coordinate z = s + it on a neighborhood U of z 0 so that
z(∂Σ ∩ U ) ⊂ R ⊂ C.
First, the last term of (12.1) drops out since dβ = 0 near ∂Σ. For the second term, we note
dρ ∧ β = (∂ρ/∂s ds + ∂ρ/∂t dt) ∧ (β_s ds + β_t dt) = (∂ρ/∂s β_t − ∂ρ/∂t β_s) ds ∧ dt,
and so the second term becomes
−ρH'(ρ)(∂ρ/∂s β_t − ∂ρ/∂t β_s).
On the other hand, β_s = 0 since we imposed i*β = 0 in (3.5), and hence (12.1) is reduced to
Δρ = |dv − X_H(v) ⊗ β|² − ∂ρ/∂s β_t    (12.3)
on ∂Σ.
Since v(∂Σ) ⊂ ν*T_i and z ∈ z_i z_{i+1} ↦ v(z) defines a curve on L_i, the function s ↦ |p(v(s + 0√−1))| achieves a maximum at z_0 = s_0 + 0√−1. In particular we have
0 = ∂(r ∘ v)/∂s(s_0) = dr(∂v/∂s(z_0)).    (12.4)
But we note that ∂v/∂s(z_0) ∈ T_{v(z_0)}L_i ∩ ker dr_{v(z_0)}, and L_i ∩ ker dr_{v(z_0)} is a Legendrian subspace of T(r^{−1}(R_0)) with R_0 = |p(v(z_0))|. Therefore J ∂v/∂s(z_0) ∈ ξ_{v(z_0)} ⊂ ker dr and so dr(−J ∂v/∂s(z_0)) = 0. Substituting J ∂v/∂s(z_0) = ∂v/∂t(z_0) − β_t X_H(v(z_0)) into (12.4), we obtain dr(∂v/∂t(z_0)) = dr(β_t X_H(v(z_0))) = 0, which in turn implies ∂ρ/∂s(z_0) = 0 in (12.3) and so (Δρ)(z_0) ≥ 0.
This contradicts to the strong maximum principle, unless Im v ⊂ r −1 (R 0 ). The latter is possible only when r(x j ) ≡ R 0 for all j = 0, · · · , k for some R 0 ≥ 0. But if that holds, the proposition already holds and there is nothing to prove. This completes the proof of the proposition.
C 0 estimates for moving Lagrangian boundary
In this section, we first provide the direct proof of the uniform C 0 estimate for the moving boundary condition without decomposing the isotopy in two stages hoping that such an estimate may be useful in the future, even though this C 0 -estimate is not used in the present paper. Then we will briefly indicate how the proofs of easier propositions, Propositions 7.14 and 7.15 can be obtained from the scheme of the proof with minor modifications.
We consider a map u : Σ → T * N satisfying a perturbed Cauchy-Riemann equation
(du − X_H ⊗ β)^{(0,1)}_J = 0,
u ∘ ε_j(−∞, t) = ψ^{w_j} ∘ x_j(t), for j = 1, ..., k,
u ∘ ε_0(+∞, t) = ψ^{w_0} ∘ x_0(t),    (13.1)
where x_0 ∈ X(w_0H; L_0, X_1), x_d ∈ X(w_dH; L_1, X_d), and x_j ∈ X(w_jH; X_j, X_{j+1}) for j = 1, ..., d − 1, and with the moving boundary condition
u(z) ∈ ψ_{η(z)}(X_i), for z ∈ z_i z_{i+1} ⊂ ∂Σ with 1 ≤ i ≤ d − 1,
u(z) ∈ ψ_{η(z)}(L_0), for z ∈ z_0 z_1 ⊂ ∂Σ,
u(z) ∈ ψ_{η(z)}(L_1), for z ∈ z_d z_{d+1} ⊂ ∂Σ.    (13.2)
Proposition 13.1. Let L_i = ν*(a^{−1}(a_i)), L_j = ν*(a^{−1}(a_j)) with a_i < a_j, and let {x_j}_{0≤j≤k} be given as above. Define the constant
ht(H; L_i, L_{i+1}; {X_k}_{k=1}^d; {x_j}_{0≤j≤k}) = max_{0≤j≤k} ‖p ∘ x_j‖_{C^0}.
Then
max_{z∈Σ} |p(u(z))| ≤ ht(H; L_i, L_{i+1}; {X_k}_{k=1}^d; {x_j}_{0≤j≤k})
for any solution u of (5.5).
We consider the case of moving the Lagrangian L i = ν * T i to L i+1 = ν * T i+1 for a given exhaustion sequence (2.4) where T i = a −1 (i), T i+1 = a −1 (i + 1). The main point of this case is that the Lagrangian L i moves outward with respect to the a-direction in the cylindrical region [0, ∞) × 0, ∞).
Remark 13.2. It is very interesting to see that subclosedness of the one-form β also enters in our proof of the vertical C 0 -bound. See the proof of Proposition 13.1 below.
Proof. In order to produce a Hamiltonian flow which send L i to L i+1 , we start with a Hamiltonian function F : T * N → R whose restriction to T * N end agrees with the coordinate function p a : T * N end → R. Then we deduce
X F | T * N end = ∂ ∂a .
Since the metric g is cylindrical on N end , the induced Sasakian metric g on
T * N end ∼ = T * [0, +∞) ⊕ T * T 2
can be written as g = g a ⊕ g T 2 .
Let φ t F be a time t flow of X F on T * N then its restriction to T * N end becomes Φ t F ((a, p a ), (q T , p T )) = ((a + t, p a ), (q T , p T )). The local coordinate expression of the Lagrangian L i = ν * (b −1 (k i )), where k i < 0 see Section 2.1, is given by
{((−k i , p a ), (q T , 0)) : p a ∈ R, q T ∈ T 2 }.
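To see why this is the conormal in the cylindrical coordinates (a sketch; we are assuming that b^{−1}(k_i) corresponds to the level torus {a = −k_i} in N_end, consistently with the displayed expression):
\[
\nu^*\big(\{a=-k_i\}\big)
= \{(q,p)\in T^*N_{\mathrm{end}} : a(q) = -k_i,\ p|_{T_q\{a=-k_i\}} = 0\}
= \{((-k_i,p_a),(q_T,0)) : p_a\in\mathbb{R},\ q_T\in T^2\},
\]
since a covector along the torus {a = −k_i} ≅ T² annihilates its tangent space exactly when its T²-component p_T vanishes, while p_a is unconstrained.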
For s ∈ [0, 1], let us denote by
L_{i+s} := {((−k_i + s(k_i − k_{i+1}), p_a), (q_T, 0)) : p_a ∈ R, q_T ∈ T²}
the [0, 1]-parameterized family of Lagrangians interpolating L_i and L_{i+1}. By an easy observation we have
Lemma 13.3. The flow Φ^{s(k_i − k_{i+1})}_F of X_F sends L_i to L_{i+s} for s ∈ [0, 1].
For each given test Lagrangians X_1, ..., X_k, we consider a moduli space
N_w(x_0; x) = N_w(x_0; x; D_m) with N_w(x_0; x) := ⋃_{D∈F_{D_w}} N_w((x_0; x); D)
which consists of a map u : Σ → T * N satisfying a perturbed Cauchy-Riemann equation (5.5) with moving boundary condition (13.2).
Remark 13.4. We would like to alert readers that as our proof clearly shows in general construction of such a morphism is possible only in a certain direction that favors application of strong maximum principle.
For each given quadruple (H; L i , L i+1 , {x j } 0≤j≤k ), where x 0 ∈ X(w 0 H; L i+1 ) and x j ∈ X(w j H; L i ) for j = 1, . . . , k, we define a constant
ht(H; L i , L i+1 , {x j } 0≤j≤k ) = max 0≤j≤k p • x j C 0 .
Recall that our test Lagrangians X j are either compact or with cylindrical end if noncompact. Since the boundary condition is the fixed one on z j z j+1 , u(z j z j+1 ) ⊂ X j , for 1 ≤ j ≤ k − 1, the strong maximum principle for (10.1) applies thereto. It remains to check on the strip-like end near z 0 where one of the boundary involves moving boundary {L s } from L i to L i+1 . The only place where this is not clear is the part where χ (τ ) = 0, i.e., τ ∈ (−2, −1) ⊂ (−∞, 0] in the strip-like region.
In this part of the region, ∆ρ is given by the same formula as (12.1) but with the moving boundary condition. Recall the [0, 1]-parameterized Lagrangian
L i+s := {((−k i + s(k − k i+1 ), p a ), (q T , 0)) : p a ∈ R, q T ∈ T }.
As before, let z_0 := ε_0(τ_0, 0) or ε_0(τ_0, 1) be a point in ∂Σ ∩ ε_0(Z_−) where a maximum of the radial function r(z) = |p(v(z))|_g is achieved. Without loss of generality, we may assume that the point is z_0 = ε_0(τ_0, 1) with −2 < τ_0 < −1. Similarly as before, take a local neighborhood U of z_0 with coordinates
φ : U ∩ Z_− → H = {x + iy ∈ C : y ≥ 0}
such that φ(z_0) = x_0 + 0i for some x_0 and φ(U ∩ ∂Z_−) ⊂ R ⊂ H. We note that the outward normal of Z_− along t = 1 is ∂/∂t and that of H along ∂H = {y = 0} is −∂/∂y. Therefore for the holomorphic map φ, we have ∂τ/∂x < 0. This in turn implies
∂(a ∘ v)/∂x = (∂(a ∘ v)/∂τ)(∂τ/∂x) ≥ 0    (13.3)
along {y = 0}, since ∂(a ∘ v)/∂τ ≤ 0 on ∂Z_− by the direction of the moving boundary condition in (13.2).
We also have dr ∂v ∂x (z 0 ) = 0 and so 0 = θ J ∂v ∂x (z 0 ) for r 2 = |p| 2 = 2H g0 . Remark 13.5. This time ∂v ∂x (z 0 ) ∈ T v(z0) L ρ(x0) and hence the argument used in the later half of the fixed boundary case cannot be applied to the current situation. This is the precisely the reason why a direct construction of homotopy for the moving (noncompact) Lagrangian boundary has not been able to be made but other measures such as the cobordism approach [Oh4] or Nadler's approach [N] are taken. As we shall see below, we have found a way of overcoming this obstacle by a two different applications of strong maximum principle after combining all the given geometric circumstances and a novel sequence of logics.
The condition v ∘ ε_0(τ, 0) ∈ L_{i+ρ(τ)} at z_0 and Lemma 13.3 imply
(φ^{ρ(x)(k_i − k_{i+1})}_F)^{−1}(v ∘ ε_0(x, 0)) ∈ L_i    (13.4)
where ρ(x) = ρ(τ (x)). By differentiating the equation for x and recalling z 0 = 0 (x 0 , 0), we have obtained that
(∂τ/∂x) ρ'(τ(x))(k_i − k_{i+1}) X_{F̃}((φ^{ρ(x)(k_i−k_{i+1})}_F)^{−1}(v(z_0))) + d((φ^{ρ(x)(k_i−k_{i+1})}_F)^{−1})(∂v/∂x(z_0))
is a tangent vector of L_i at (φ^{ρ(x)(k_i−k_{i+1})}_F)^{−1}(v(z_0)). Here F̃ = −F ∘ φ^t_F is the inverse Hamiltonian, which generates the inverse flow (φ^t_F)^{−1}. This implies
−(∂τ/∂x) ρ'(τ(x))(k_i − k_{i+1}) X_F(v(z_0)) + ∂v/∂x(z_0) ∈ T_{v(z_0)}L_{ρ(x_0)}.
(See [Oh1] for the same kind of computations used for the Fredholm theory for the perturbation of boundary conditions.) Recall that θ ≡ 0 on any conormal bundle, and so on L_{ρ(x)}. Therefore we obtain
0 = θ(−(∂τ/∂x) ρ'(τ(x))(k_i − k_{i+1}) X_F(v(z_0)) + ∂v/∂x(z_0)).
By evaluating the equation in (13.1) against ∂/∂x, we get
∂v/∂x(z_0) + J(∂v/∂y(z_0) − β_t X_H(v(z_0))) = 0.
Therefore the equation θ ∘ J = −dH_g gives rise to
½ ∂|p ∘ v|²/∂y(z_0) = ∂(H ∘ v)/∂y(z_0) = −θ(J(∂v/∂y)) = θ(∂v/∂x(z_0))
= (∂τ/∂x) ρ'(τ(x))(k_i − k_{i+1}) θ(X_F(v(z_0)))
= (∂τ/∂x) ρ'(τ(x))(k_i − k_{i+1}) p_a(v(z_0)).    (13.5)
Lemma 13.6. We have p a (v(z 0 )) ≥ 0 and equality holds only when p a • v ≡ 0, i.e., when Image v is entirely contained in the zero section.
Proof. In order to estimate p_a(v(z_0)), by a direct computation from (11.2) we derive
Δ(p_a ∘ v) = (p_a ∘ v)(∂β_x/∂y − ∂β_y/∂x) + β_x ∂(p_a ∘ v)/∂y − β_y ∂(p_a ∘ v)/∂x,
where β = β_x dx + β_y dy near z_0. The sub-closedness assumption on β implies that ∂β_x/∂y − ∂β_y/∂x ≥ 0, and hence the (interior) maximum principle applies to the positive values of p_a(v) and the (interior) minimum principle to the negative values, respectively. Now we consider the function p_a ∘ v on ∂Σ. Recalling the property i*β = 0 on ∂Σ, which implies β_x = 0, the above equation for Δ(p_a ∘ v) becomes
Δ(p_a ∘ v) = (p_a ∘ v)(∂β_x/∂y − ∂β_y/∂x) − β_y ∂(p_a ∘ v)/∂x    (13.6)
on ∂Σ. The inequality (13.3) on ∂Z_− ∩ U implies
∂(p_a ∘ v)/∂y(z_0) ≥ 0
by the Cauchy-Riemann equation applied to the holomorphic function a + ip a of x + iy. If we denote f (z) = −p a • v(z), this inequality becomes
− ∂f ∂y (z 0 ) ≤ 0. (13.7)
We would like to show f (z 0 ) = −p a (v(z 0 )) ≤ 0. Suppose to the contrary f (z 0 ) ≥ 0, we obtain ∆f (z 0 ) ≥ 0 from (13.6) by combining ∂(pa•v) ∂x (z 0 ) = 0. Therefore since − ∂ ∂y is the outward normal to {y = 0}, (13.7) contradicts to the (strong) maximum principle. Therefore p a (v(z 0 )) ≥ 0 and equality holds only when p a • v ≡ 0. This finishes the proof.
Substituting this lemma back into (13.5), we obtained − ∂|p • v| 2 ∂y (z 0 ) ≤ 0 and equality holds only when p•v(z 0 ) = 0. This contradicts to the strong maximum principle applied to the radial function (r • v) 2 = |p • v| 2 by the same way as for the vertical C 0 estimates given in Section 12, unless the whole image of v is contained in the level set r −1 (0) = 0 N . But if the latter holds, all the Hamiltonian chords are constant chords and the solution v must be constant maps. This is impossible because of the asymptotic
condition v( − ((−∞, 0]) ⊂ ν * T i , i = 1, · · · , k and v( + ([0, ∞)) ⊂ ν * T i+1 , i = 0.
This finishes the proof of Proposition 13.1.
Now we briefly indicate the main points of the proofs of Propositions 7.14 and 7.15. For these cases the boundaries are fixed as in the proofs of Propositions 6.1 and 6.2; the difference is that the Hamiltonian involved is not τ-independent, due to the appearance of the elongation function χ in H^χ. We have only to consider the equation on the strip-like region. Writing ρ = e^s ∘ v, the calculation (see [Se2, (3.20)]) derives
Δρ = |dv − β ⊗ X_{H^χ_λ}|² − ρ(H^χ_λ)'(ρ) (dρ ∧ β)/(dτ ∧ dt) + χ'(τ) ∂H_s/∂s|_{s=χ(τ)}.    (13.8)
We note that from this equation both the maximum principle and the strong maximum principle apply in this non-autonomous case, because
χ'(τ) ∂H_s/∂s|_{s=χ(τ)} ≥ 0
by the monotonicity hypothesis on λ. We refer to the proof of Proposition 13.1 to see how the strong maximum principle applies.
Appendix A. Energy identity for Floer's continuation equation
In this appendix, we recall the energy identity for the continuation equation given in [Oh5] and give its derivation, which was originally given in [Oh2] in the cotangent bundle context.
Proposition A.1. Suppose that u is a finite energy solution of (7.5). Then
∫∫ |∂u/∂τ|²_{J_{χ(τ)}} dt dτ = A_{H^+}(z_+) − A_{H^−}(z_−) − ∫_{−∞}^{∞} χ'(τ) ∫_0^1 ∂H_s/∂s|_{s=χ(τ)}(u(τ, t)) dt dτ.    (A.1)
Proof. We derive
∫∫ |∂u/∂τ|²_{J_{χ(τ)}} dt dτ = ∫∫ ω(∂u/∂τ, J_{χ(τ)} ∂u/∂τ) dt dτ = ∫∫ ω(∂u/∂τ, ∂u/∂t − X_{H^{χ(τ)}}(u)) dt dτ = ∫ u*ω + ∫_{−∞}^{∞} ∫_0^1 dH^{χ(τ)}(∂u/∂τ) dt dτ.
The finite energy condition and the nondegeneracy of z_± imply that the first improper integral ∫ u*ω converges and equals
∫ u*ω = ∫ u*(−dθ) = −∫_0^1 (z_+)*θ + ∫_0^1 (z_−)*θ.    (A.2)
On the other hand, the second summand becomes
∫_{−∞}^{∞} ∫_0^1 dH^{χ(τ)}(∂u/∂τ) dt dτ = ∫_0^1 H^+(z_+(t)) dt − ∫_0^1 H^−(z_−(t)) dt − ∫_{−∞}^{∞} χ'(τ) ∫_0^1 ∂H_s/∂s|_{s=χ(τ)}(u(τ, t)) dt dτ.    (A.3)
Using the asymptotic condition u(±∞, t) = z_±(t) and adding (A.2) and (A.3), we have obtained (A.1).

Appendix B. Gradings and signs for the moduli spaces

B.1. Lagrangian branes. Let us choose a quadratic complex volume form η²_M on the symplectic manifold (W = T*(M \ K), ω_std) with respect to the almost complex structure J_g introduced in Definition 2.6. The associated (squared) phase map is α_M : Gr(TM) → S¹,
α_M(T L_x) = η_M(v_1 ∧ v_2 ∧ v_3)² / |η_M(v_1 ∧ v_2 ∧ v_3)|²,
where v 1 , v 2 , v 3 is a basis of T L x and Gr(T M ) is the Lagrangian Grassmannian. The vanishing condition of the relative first Chern class in Definition 4.1 implies that the Maslov class µ L ∈ H 1 (L) vanishes on H 1 (L). So we have a lifting α # L : L → R of α M | T L satisfying exp(2πiα # L (x)) = α M (T L x ), which we call a grading on L. For the orientation of various types of moduli spaces, we need to consider a relative spin structure of each Lagrangian L. Since we have H 2 (M ; Z 2 ) = 0, the situation is rather simple. The relative spin structure P # is consist of the choice of orientation of L and a Pin structure on T L, see [FOOO2,Section8], [Se1,Section11] for more general cases. Note that the vanishing condition of the second Stiefel-Whitney class w 2 (L) in Definition 4.1 implies the existence of the Pin structures. We call a triple (L, α # , P # ) a Lagrangian brane.
B.2. Dimensions and orientations of moduli spaces. Let (L 0# , L 1# ) be a pair of exact Lagrangian branes with a choice of Floer datum (H, J). Consider a Hamiltonian chord x ∈ X(H; L 0 , L 1 ) and use a linearization dφ t H of the Hamiltonian flow of H to transport the linear Lagrangian brane L 0#
x(0) = ((T L 0 ) x(0) , α 0# (x(0)), (P 0# ) x(0) ) to T M x(1) . Then the index and the orientation space are defined by comparing two linear Lagrangian branes L 0#
x(0) and L 1#
x(1) , see [Se1,11h] for the construction. Let us denote them by µ(x) = µ(H; L 0#
x(0) , L 1#
x(1) ) and o(x) = o(H; L 0# x(0) , L 1#
x(1) ), respectively.
We have a more direct description of the grading in the cotangent bundle setup from [Oh2]. Note that the sequence of Riemannian metrics {g 0 } i∈N on the knot complement M \ K give the canonical splittings of two Lagrangian subbundles
T x(t) M = H x(t) ⊕ V x(t) ,
where the vertical tangent bundle V x(t) is canonically isomorphic to T * π(x(t)) (M \K), and the horizontal subbundle H x(t) with respect to the Levi-Civita connection of g 0 is isomorphic to T π(x(t)) (M \ K) under T π : T M → T (M \ K).
We now consider a canonical class of symplectic trivialization
Φ : x * T M → [0, 1] × C 3 ,
satisfying Φ(H x(t) ) ≡ R 3 and Φ(V x(t) ) ≡ iR 3 for all t ∈ [0, 1]. Since [0, 1] is contractible, such a trivialization exists. For example, if x ∈ X(H g0 ; ν * T i , ν * T i ), then we have
Φ(T x(0) ν * T i ) = U Φ ⊕ U ⊥ Φ , Φ(T x(1) ν * T i ) = W Φ ⊕ W ⊥ Φ ,
where U Φ , W Φ are 2-dimensional subspaces and U ⊥ Φ , W ⊥ Φ are the corresponding annihilators. Then the index µ(x) is defined by following the definition of the Maslov index in [RS] (6) where B D is a Banach manifold of maps in W 1,p loc (Σ, M ) satisfying W 1,p -convergence on each strip-like ends, and E D → B D is a Banach vector bundle whose fiber at u is L p (Σ, Ω 0,1 Σ ⊗ u * T M ). Let Σ be a (d + 1)-pointed disk equipped with a fixed conformal structure, with strip-like ends, and with a brane structure on each boundary component, set say L 0# , . . . , L d# . Consider Floer data D in Definition 5.1 for x 0 ∈ X(H; L d , L 0 ) and x k ∈ X(H; L k , L k−1 ), k = 1, . . . , d. Here, D D,u is the linearized operator of (dv − X H ⊗ β) (0,1) J , where (H, J, β) comes from the data D, and ∨ denotes the dual vector space. B.2.1. Sign convention for A ∞ structure map. We briefly explain the sign convention adopted in Proposition 6.5. Let Σ 1 be a (d + 2 − m)-pointed disk equipped with Floer data D 1 and with brane structures L 1# := (L 0# , . . . , L n# , L n+m# , . . . , L d# ), and Σ 2 be an (m + 1)-pointed disk with D 2 and with branes L 2# := (L n# , . . . , L n+m# ). Now consider the gluing of (Σ 1 , D 1 , L 1# ) and (Σ 2 , D 2 , L 2# ) near the outgoing end of Σ 2 and the (n + 1)-th incoming end to obtain a (d + 1)-pointed disk Σ with Floer data D := D 1 # n+1 D 2 and with the combined brane data L 1# # n+1 L 2# = (L 0# , . . . , L d# ).
µ(B Φ (U Φ ⊕ U ⊥ Φ ), W Φ ⊕ W ⊥ Φ ), where B Φ : [0, 1] → Sp
For given regular points u 1 ∈ M(x 0 ; x 1 , . . . , x n ,x, x n+m+1 , . . . , x d ; D 1 ); u 2 ∈ M(x; x n+1 , . . . , x n+m ; D 2 ), the standard gluing procedure gives rise to a regular point u ∈ M(x 0 ; . . . , x d ; D).
This gluing process extends to a local diffeomorphism, let say φ, between neighborhood of (u 1 , u 2 ) to that of u. Then its linearization Dφ fits into the following commutative diagram: Here x 1 = (x 0 , . . . , x n ,x, x n+m+1 , . . . , x d );
x 2 = (x, x n+1 , . . . , x n+m ). and hence We then have dim (r,u) M D (x 0 ; x) = dim r M + µ(x 0 ) − µ(x 1 ) − · · · − µ(x d );
o(x 1 ) = o(x 0 ) ⊗ o(x 1 ) ∨ ⊗ · · · ⊗ o(x n ) ∨ ⊗ o(x) ∨ ⊗ o(x n+m+1 ) ∨ ⊗ · · · ⊗ o(x d ) ∨ ; o(x 2 ) = o(x) ⊗ o(x n+1 ) ∨ ⊗ · · · ⊗ o(x n+m ) ∨ .(Λ top M D (x 0 , x)) (r,u) ∼ = (Λ top T M) r ⊗ Λ top ker(D D(r),u ) ∼ = (Λ top T M) r ⊗ o(x 0 ) ⊗ o(x 1 ) ∨ ⊗ · · · ⊗ o(x d ) ∨ .
Now consider two Floer data D j parameterized over M j for j = 1, 2. The brane data and asymptotic data are given as before by ( L 1# , L 2# ) and (x 1 , x 2 ). Then the gluing induces a Floer data D parameterized over
M = R + × M 1 × M 2 ,
where R + plays a role of gluing parameter. Let us choose regular points (r 1 , u 1 ) ∈ M D 1 (x 0 , . . . , x n ,x, x n+m+1 , . . . , x d );
(r 2 , u 2 ) ∈ M D 2 (x, x n+1 , . . . , x n+m ), then for each gluing parameter ∈ R + we have a regular point ( , u) ∈ M D (x 0 , . . . , x d ).
Note that we obtain (Λ top T M) r ∼ = R ⊗ (Λ top T M 1 ) r 1 ⊗ (Λ top T M 2 ) r 2 , when is sufficiently large. By the same argument as above we have the following diagram:
(Λ top T M D (x 0 , . . . , x d )) (r,u) (
Λ top T M) r ⊗ o(x 0 ) ⊗ o(x 1 ) ∨ ⊗ · · · ⊗ o(x d ) ∨ (Λ top T M D 1 (x 1 )) (r 1 ,u 1 ) ⊗ (Λ top T M D 2 (x 2 )) (r 2 ,u 2 ) R ⊗ (Λ top T M 1 ) r 1 ⊗ o(x 1 ) ⊗ (Λ top T M 2 ) r 2 ⊗ o(x 2 ) ∼ = ∼ =
(Λ top Dφ) ((r 1 ,u 1 ),(r 2 ,u 2 )) (−1) *
Here o(x 1 ) and o(x 2 ) are the same as before. The sign in the right vertical arrow comes from the Koszul convention m = dim r 2 M 2 · index(D D 1 (r 1 ),u ) + k>n+m µ(x k ) · index(D D 2 (r 2 ),u 2 ) (B. Recall from (6.6) that the structure map m k already has the sign † = k i=1 i · µ(x i ). By considering all the above sign effects we have † u 1 + † u 2 + m + (r 1 ,r 2 ) ≡ ‡ u +
d i=1 (i + 1)µ(x i ), where ‡ = n i=1 µ(x i ) − n.
This explains the sign convention in Proposition 6.5.
B.2.2. Sign convention for A_∞ functors. For the sign convention in (7.13) and (7.14), we need to consider the previous argument with the parameter space N^{k+1} in Definition 7.4. The notations in this section are the same as in Section 7.
Firstly, let us consider a regular point (r, u) ∈ N d+1 (y 0 ; x 1 , . . . , x d ) (B.5) and its boundary strata (r 0 , u 0 ) ∈ N d−m+2 (y 0 ; x 1 , . . . , x n ,x, x n+m+1 , · · · , x d );
(r 1 , u 1 ) ∈ M m+1 (x; x n+1 , . . . , x n+m ) satisfying µ(y 0 ) = µ(x 1 ) + · · · + µ(x n ) + µ(x) + µ(x n+m+1 ) + · · · + µ(x d ) + d − m;
µ(x) = µ(x n+1 ) + · · · + µ(y n+m ) + m − 2.
As in the previous section we compare the sign difference between
(Λ top N d+1 ) r ⊗ o(y 0 ) ⊗ o(x 1 ) ∨ ⊗ · · · ⊗ o(x d ) ∨ and (Λ top N d−m+2 ) r 0 ⊗ o(y 0 ) ⊗ o(x 1 ) ∨ ⊗ · · · ⊗ o(x) ⊗ · · · ⊗ o(x d ) ∨ ⊗ (Λ top M m+1 ) r 1 ⊗ o(x) ⊗ o(x n+1 ) ∨ · · · o(x n+m ) ∨ .
The Koszul convention gives
f ≡ m(d − m) + m k>n+m µ(x k )
and a similar computation f + (r 0 ,r 1 ) + † u 1 + ♠ u 0 ≡ ( ‡ u + 1) + d i=1 (i + 1)µ(x i ) + d verifies (7.13). Now we consider another type of boundary strata of (B.5) consisting of (r 0 , u 0 ) ∈ M +1 (y 0 ; y 1 , . . . , y );
(r 1 , u 1 ) ∈ N s1+1 (y 1 ; x 1 , . . . , x s1 );
. . .
(r , u ) ∈ N s λ +1 (y ; x s1+···+s −1 +1 , . . . , x d );
satisfying degree conditions µ(y 0 ) = µ(y 1 ) + · · · + µ(y ) + r − 2;
µ(y 1 ) = µ(x 1 ) + · · · + µ(x s1 ) + s 1 − 1;
. . . µ(y ) = µ(x s1+···+s −1 +1 ) + · · · + µ(x d ) + s λ − 1 Then the sign difference between
(Λ top N d+1 ) r ⊗ o(y 0 ) ⊗ o(x 1 ) ∨ ⊗ · · · ⊗ o(x d ) ∨ and (Λ top M +1 ) r 0 ⊗ o(y 0 ) ⊗ o(y 1 ) ∨ ⊗ · · · ⊗ o(y ) ∨ ⊗ (Λ top N s1+1 ) r 1 ⊗ o(y 1 ) ⊗ o(x 1 ) ∨ ⊗ · · · ⊗ o(x s1 ) ∨ ⊗ · · · ⊗ (Λ top N s λ +1 ) r ⊗ o(y ) ⊗ o(x s1+···+s −1 +1 ) ∨ ⊗ · · · ⊗ o(x d ) ∨ .
Note that the Koszul sign convention induces
f ≡ (s 1 + · · · + s λ ) + 1≤i≤j≤ s i s j + −1 i=1 (s 1 + · · · + s i )µ(y i+1 ),
where s r = s r − 1. The sign convention for the parameter space configuration (r; r 0 , . . . , r ) is obtained by applying the sign rule in (B.4) as follows:
(r 0 ,...,r ) = d + d + 1≤i≤j≤ s i s j .
Then by a tedious computation, we have f + (r 0 ,...,r )
+ † u 0 + j=1 ♠ u j = d i=1 (i + 1)µ(x i ) + d.
This verifies the signs in (7.14). The sign for A ∞ homotopy relation requires basically the same computation, we omit it.
Theorem 1.3. Let N(K) be a tubular neighborhood of K and let T = ∂(N(K)). The A∞ algebra (CW(ν*T, T*(M \ K); H_{g_0}), m), m = {m_k}_{1≤k<∞}
Figure 1. An example of slit domain.
For a given λ = {g(r)}_{r∈[0,1]}, we consider the fiber bundles H_λ → [0, 1], H_{g(r)} = H(T*N, g(r)).
Figure 2. An example of slit domains for the A∞ functor.
x; H g (r) , {L i }) | i = 0, . . . , k}
NM
k+1 (y; x) → X(H ρ(z ) ; L −1 , L ) × [0, 1] for = 1, . . . , k ev 0 + :(y;x) N k+1 (y; x) → X(H ρ(z 0 ) ; L 0 , L k ) × [0, 1] given by ev − ({u α } α∈A ) = (u • (−∞, · ), ρ(z )) for = 1, . . . , k ev 0 + ({u α } α∈A ) = (u • 0 (+∞, · ), ρ(z 0 )) where ρ(z i ) ∈ [0, 1] isa ρ value of the component having the i-th end. We also consider similar evaluation maps from ev − : (y;x) M k+1 (y; x) → X(H g0 ; L −1 , L ) for = 1, . . . k+1 (y; x) → X(H g0 ; L 0 , L k ).
into two components as i → +∞.(2) Two component Σ α1 and Σ α2 sharing one asymptotic matching condition satisfy lim ρ
† h k−m+1 (a 1 , . . . , a n , m m A (a n+1 , . . . , a n+m ), a n+m+1 , . . . , a d )+ r,i s1,...,sr (−1) ♣ m r B f s1 (a 1 , . . . ,a s1 ), . . . , f si−1 (. . . , a s1+···+si−1 ), h si (a s1+···+si−1+1 , . . . , a s1+···+si ), g si+1 (a s1+···+si+1 , . . . ), . . . , g sr (a d−sr+1 , . . . , a d ) .(8.1)
. . . , x d ) = r s1,...,sr f r λ (f s1 λ (x 1 , . . . , x s1 ), . . . , f sr λ (x d−sr+1 , . . . , x d )) We consider two paths of cylindrical Riemannian metric between the concatenated path λ * λ : [0, 1] → C(N ) and the direct path λ connecting g and g . It easily follows from contractibility of the space of cylindrical Riemannian metrics we can construct a path (of paths) Γ : [0, 1] × [0, 1] → C(N ) between λ * λ and λ . Here we use the coordinate (r, s) for [0, 1] × [0, 1], where s is the newly adopted one. Note that Γ(−, s) =: λ s gives a homotopy between g 0 and g 0 for each s ∈ [0, 1]. Now we go back to the construction of A ∞ homotopy associated to a geometric homotopy. For each fixed s ∈ [0, 1], we have the corresponding spaces of admissible almost complex structures and of admissible Hamiltonians J r s = J (T * N, Γ(r, s)), H r s = H(T * N, Γ(r, s)). (8.2)
(1-b) For the distinguished component δ ∈ A in Definition 8.4 and for each s ∈ [0, 1], let us consider a map u δ(s) satisfying (du δ(s) − X H δ(s) ⊗ β δ ) s,δ) . and the conditions (2)-(6) for N k+1 (Σ,j;ρ) (y; x) in (7.10). For each s 0 ∈ [0, 1], let us denote the space of maps {u α } α∈A\{δ} ∪ {u δ(s0) } satisfying the above by L k+1
α
< 1 splits into two components at s = s 0 as i → +∞.
Φ:
WF(T * N ; H(h 0 )) → WF(T * N ; H(g 0 /C)); Ψ : WF(T * N ; H(g 0 )) → WF(T * N, H(h 0 /C)) (9.3) which are defined by the standard C 0 -estimates for the monotone homotopies. Note that the morphism is naturally defined from the smaller metric to the bigger metric. Now consider the composition of the functors Ψ • Φ : WF(T * N ; H(h 0 )) → WF(T * N ; H(g 0 /C 2 )); Φ • Ψ : WF(T * N ; H(g 0 )) → WF(T * N ; H(g 0 /C 2 )).
cpt and fixed. Theorem 10.2. The quasi-isomorphism class of the A∞ algebra (CW(ν*T, T*(M \ K); H_{g_0}), m), m = {m_k}_{0≤k<∞}, does not depend on the various choices involved, such as the tubular neighborhood N(K) and the metric g on M.
z0) ⊂ ker dr and so dr(−J ∂v ∂s (z 0 )) = 0.
-parameterized family of Lagrangians interpolating L i and L i+1 . By an easy observation we have Lemma 13.3. The flow Φ s(k−ki+1) F of X F sends L i to L i+s for s ∈ [0, 1].
u(±τ, t) = z ± (t) and adding (A.2), (A.3), we have obtained (A.1). Appendix B. Gradings and signs for the moduli spaces B.1. Lagrangian branes. Let us choose a quadratic complex volume form η 2 M on the symplectic manifold (W = T * (M \K), ω std ) with respect to the almost complex structure J g introduced in Definition 2.6. The associated (squared) phase map is α M : Gr(T M ) → S 1 , α M (T
D
is a symplectic path given by B Φ := Φ • T φ t Hg 0 • Φ −1 . Let Σ be a (d + 1)-pointed disk equipped with a fixed conformal structure, with strip-like ends, and with brane structures (L 0# , . . . , L d# ) on each boundary component. Consider Floer data D in Definition 5.1 for x 0 ∈ X(L d , L 0 ), x = (x 1 , . . . , x d ), x k ∈ X(L k , L k−1 ), k = 1, . . . , d. (B.1) Let u ∈ M(x 0 , x; D) be a regular point, then the linearization of (dv − X H ⊗ β) D,u : (T B D ) u → (E D ) u ,
Let u ∈ M(x 0 ; x; D) be a regular point, wherex = (x 1 , . . . , x d ), then we have dim u M(x 0 ; x; D) = µ(x 0 ) − µ(x 1 ) − · · · − µ(x d ); (Λ top T M(x 0 ; x; D)) u ∼ = Λ top ker(D D,u ) ⊗ Λ top Coker(D D,u ) ∼ = o(x 0 ) ⊗ o(x 1 ) ∨ ⊗ · · · ⊗ o(x d ) ∨ .
(
Λ top T M(x 0 , . . . , x d ; D)) u o(x 0 ) ⊗ o(x 1 ) ∨ ⊗ · · · ⊗ o(x d ) ∨ (Λ top T M(x 1 ; D 1 )) u 1 ⊗ (Λ top T M(x 2 ; D 2 )) u 2 o(x 1 ) ⊗ o(x 2 ) x k ) · dim M(x 2 ; D 2 ) u 2 .
Now we consider the M-parameterized Floer data used in Definition 5.3 D = D m = r∈M D(r), on (d + 1)-pointed disks, with branes structures (L 0# , . . . , L d# ) on the ends. Let us denote the corresponding parameterized moduli space by M D (x 0 ; x) := r∈M {r} × M(x 0 ; x; D(r)), where (x 0 ; x) are the same as in (B.1). For a regular point (r, u) of the parameterized moduli space, the extended linearized operator is given by D D(r),u : (T M) r × (T B D(r) ) u → (E D(r) ) u .
2) ≡ m(d − m − 1) + m k>n+m µ(x k ) (B.3) and the sign difference between parameter spaces M and M 1 × M 2 (r 1 ,r 2 ) = m(d − n) + m + n. (B.4)
It is convenient to consider a compact manifold with boundary N such that N is homeomorphic to Int N . Let us choose a neighborhood U = U (∂N ) of ∂N inside N , called an end of N . Then there is a homeomorphism φ : U → ∂N × (0, +∞], where φ(∂N ) = ∂N × {+∞} is called the asymptotic boundary of N .
[0, 1]. For a given pair of Hamiltonians H − , H + and homotopy {H s } s∈[0,1] with H 0 = H − , H 1 = H + , we consider the non-autonomous Cauchy-Riemann equation∂u
∂τ
functors F λ • F λ and F λ , respectively. It is fibered over [0, 1] By definition, each fiber of this map is compact. It is tempting to define the homotopy by setting its matrix coefficient to be3)
where ρ(s) is a homotopy between ρ(0), ρ(1) : A → [0, 1] which induces the
A ∞ ev [0,1] : N k+1
para (y; x) → [0, 1].
Youngjin Bae, Research Institute for Mathematical Sciences, Kyoto University, Kitashirakawa Oiwakecho, Sakyo Ward, Kyoto, Kyoto Prefecture, Japan 606-8317. E-mail address: [email protected]
Seonhwa Kim, Center for Geometry and Physics, Institute for Basic Sciences (IBS), Pohang, Korea. E-mail address: [email protected]
Yong-Geun Oh, Center for Geometry and Physics, Institute for Basic Sciences (IBS), Pohang, Korea & Department of Mathematics, POSTECH, Pohang, Korea. E-mail address: [email protected]
| [] |
[
"Model Driven Mutation Applied to Adaptative Systems Testing",
"Model Driven Mutation Applied to Adaptative Systems Testing",
"Model Driven Mutation Applied to Adaptative Systems Testing",
"Model Driven Mutation Applied to Adaptative Systems Testing"
] | [
"Alexandre Bartel [email protected] \nInterdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg\n",
"Benoit Baudry [email protected] \nINRIA Centre Rennes -Bretagne Atlantique Campus de Beaulieu\n35042RennesFrance\n",
"Freddy Munoz [email protected] \nINRIA Centre Rennes -Bretagne Atlantique Campus de Beaulieu\n35042RennesFrance\n",
"Jacques Klein [email protected] \nInterdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg\n",
"Tejeddine Mouelhi [email protected] \nInterdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg\n",
"Yves Le Traon [email protected] \nInterdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg\n",
"Alexandre Bartel [email protected] \nInterdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg\n",
"Benoit Baudry [email protected] \nINRIA Centre Rennes -Bretagne Atlantique Campus de Beaulieu\n35042RennesFrance\n",
"Freddy Munoz [email protected] \nINRIA Centre Rennes -Bretagne Atlantique Campus de Beaulieu\n35042RennesFrance\n",
"Jacques Klein [email protected] \nInterdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg\n",
"Tejeddine Mouelhi [email protected] \nInterdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg\n",
"Yves Le Traon [email protected] \nInterdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg\n"
] | [
"Interdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg",
"INRIA Centre Rennes -Bretagne Atlantique Campus de Beaulieu\n35042RennesFrance",
"INRIA Centre Rennes -Bretagne Atlantique Campus de Beaulieu\n35042RennesFrance",
"Interdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg",
"Interdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg",
"Interdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg",
"Interdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg",
"INRIA Centre Rennes -Bretagne Atlantique Campus de Beaulieu\n35042RennesFrance",
"INRIA Centre Rennes -Bretagne Atlantique Campus de Beaulieu\n35042RennesFrance",
"Interdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg",
"Interdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg",
"Interdisciplinary Center for Security, Reliability\nTrust University of Luxembourg\nL-1359Luxembourg-KirchbergLuxembourg"
] | [] | Dynamically Adaptive Systems modify their behavior and structure in response to changes in their surrounding environment and according to an adaptation logic. Critical systems increasingly incorporate dynamic adaptation capabilities; examples include disaster relief and space exploration systems. In this paper, we focus on mutation testing of the adaptation logic. We propose a fault model for adaptation logics that classifies faults into environmental completeness and adaptation correctness. Since there are several adaptation logic languages relying on the same underlying concepts, the fault model is expressed independently from specific adaptation languages. Taking benefit from model-driven engineering technology, we express these common concepts in a metamodel and define the operational semantics of mutation operators at this level. Mutation is applied on model elements and model transformations are used to propagate these changes to a given adaptation policy in the chosen formalism. Preliminary results on an adaptive web server highlight the difficulty of killing mutants for adaptive systems, and thus the difficulty of generating efficient tests. | 10.1109/icstw.2011.24 | [
"https://arxiv.org/pdf/1205.5783v1.pdf"
] | 10,749,913 | 1205.5783 | 3b741553c3409fad294f71d0b8f9413072f35e57 |
Model Driven Mutation Applied to Adaptative Systems Testing
Alexandre Bartel [email protected]
Interdisciplinary Center for Security, Reliability
Trust University of Luxembourg
L-1359Luxembourg-KirchbergLuxembourg
Benoit Baudry [email protected]
INRIA Centre Rennes -Bretagne Atlantique Campus de Beaulieu
35042RennesFrance
Freddy Munoz [email protected]
INRIA Centre Rennes -Bretagne Atlantique Campus de Beaulieu
35042RennesFrance
Jacques Klein [email protected]
Interdisciplinary Center for Security, Reliability
Trust University of Luxembourg
L-1359Luxembourg-KirchbergLuxembourg
Tejeddine Mouelhi [email protected]
Interdisciplinary Center for Security, Reliability
Trust University of Luxembourg
L-1359Luxembourg-KirchbergLuxembourg
Yves Le Traon [email protected]
Interdisciplinary Center for Security, Reliability
Trust University of Luxembourg
L-1359Luxembourg-KirchbergLuxembourg
Model Driven Mutation Applied to Adaptative Systems Testing
Index Terms-model driven engineering, MDE, mutation, testing, adaptative systems
Dynamically Adaptive Systems modify their behavior and structure in response to changes in their surrounding environment and according to an adaptation logic. Critical systems increasingly incorporate dynamic adaptation capabilities; examples include disaster relief and space exploration systems. In this paper, we focus on mutation testing of the adaptation logic. We propose a fault model for adaptation logics that classifies faults into environmental completeness and adaptation correctness. Since there are several adaptation logic languages relying on the same underlying concepts, the fault model is expressed independently from specific adaptation languages. Taking benefit from model-driven engineering technology, we express these common concepts in a metamodel and define the operational semantics of mutation operators at this level. Mutation is applied on model elements and model transformations are used to propagate these changes to a given adaptation policy in the chosen formalism. Preliminary results on an adaptive web server highlight the difficulty of killing mutants for adaptive systems, and thus the difficulty of generating efficient tests.
I. INTRODUCTION
Dynamically Adaptive Systems (DAS) must adapt themselves to ongoing circumstances and find a way to continue accomplishing their functionalities. DAS play an increasingly important role in society's infrastructures; the demand for DAS appears in application domains ranging from crisis management applications such as disaster management [8], space exploration [6], and transportation control, to entertainment and business applications. This demand is intensified by the mobile and nomadic nature of many of these domains. The IDC 1 analysts forecast a global increase in the number of mobile workers to more than 1.19 billion by 2013 [5].
DAS respond to environmental changes by modifying their internal configuration to continue meeting their functional and non-functional requirements. Designing a DAS involves specifying environmental fluctuations that have an impact on the system, as well as the related strategies for performing the structural changes. This is captured by an adaptation logic that expresses the actions to be adopted when the environment changes [4], [7], [9], [15]. More precisely, adaptation logics drive the adaptation process and compute the right system configuration that should be adopted given an environmental condition.
This paper focuses on the issue of testing whether an adaptation logic is correctly implemented. More specifically, we focus on mutation of adaptation logic, considering that test cases should be able to distinguish between the original adaptation logic and the mutated one. Mutation thus provides a qualification criterion for test cases.
We use a Model-driven engineering (MDE) process to model adaptation formalisms/languages as well as adaptation policies defined according to these formalisms. A metamodel captures all the necessary concepts for representing actionbased adaptation policies. From the metamodel, we derive mutation operators that can apply to several action-based adaptation formalisms.
We classify adaptation logic faults into two groups:
1) The possible environmental conditions the system will face, and 2) the complexity involved in producing a response to those conditions.
The first, environmental completeness (EC) faults embody faults due to gaps in the space covered by the adaptation logic, thereby missing adaptations for environmental changes. The second, adaptation correctness (AC) faults embody faults due to incorrect adaptations to environmental changes. Our hypothesis is that managing environmental changes involving a single property variation (simple) is easy, whereas managing several properties varying at the same time (complex) is error prone. We summarize the contributions of this paper as follows: 1) A metamodel of action-based adaptation logic, completed with model transformations from two different input formalisms. 2) A generic set of mutation operators for adaptation logics, as well as a specialization of this model to action-based adaptation logics. 3) A first proof of concept through an adaptive web server case study. It has to be noted that we do not deal with efficient test case generation in this paper; for the experiments we simply create test sequences randomly (sequences of events issued by the environment).
The remainder of this paper proceeds as follows. Section 2 provides a background on dynamically adaptive systems. Section 3 introduces model driven engineering techniques and explains how they can be used with testing adaptation logics. Section 4 describes the first mutation operators we used. Section 5 presents our first experiments. Section 6 presents the related work. Finally, we conclude and present our perspectives in section 7.
II. DYNAMICALLY ADAPTIVE SYSTEMS
Consider an adaptive web server, which processes file requests over the HTTP protocol. It answers these requests as fast as possible while optimizing the resources it consumes, e.g. memory, CPU time, etc. Additionally, it provides nonstop service, thus it needs to adapt its internal structure to respond to a changing working environment. This environment is characterized by the variable amount of requests over time.
A. Environment and configurations
Dynamically adaptive systems (DAS) encode the environment into an abstraction called context. Definition 1 (context). A context consists of an n-tuple of fields <p0, p1, . . . , pn>, where each field pi represents an environmental property. The type of each field is defined by the encoding chosen for the property it represents.
In our adaptive web server example, the environment is modeled as a context with two properties:
• p1: the number of requests per second (server load);
• p2: the percentage of requests (request density).
The latter corresponds to the number of requested files. The domain or type of each property has a lower and an upper bound. For instance, we associate the type integer with request density and server load, with a lower bound of 0 and an upper bound of 100 for both. The server load domain indicates that the minimum number of requests in one second is 0 (no request) and the maximum is 100. Analogously, request density indicates the number of requested files.
Definition 2. Specific environmental conditions at an instant t are drawn by an instance I of the context representing the environment. Such an instance is an n-tuple of values corresponding to the punctual value of a particular property.
The context instance <12, 3> designates a particular environmental condition with 12 requests per second requesting 3 different files. A sequence of context instances I 0 , I 1 , I 2 , . . . , I n ordered by their occurrence over time is called a context flow (CF). A context generates a space containing all the possible instances that can produce the combination of property values. The context of the adaptive web server generates a space containing all its possible context instances.
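As a concrete illustration of Definitions 1 and 2, the sketch below encodes the web server context, an instance such as <12, 3>, and a context flow as plain Python tuples. The property names, integer bounds and example values come from the text above; the constant and helper names are only illustrative choices of this sketch.

# Minimal sketch of a context, a context instance, and a context flow
# (Definitions 1 and 2) for the adaptive web server example: two integer
# properties, each bounded to [0, 100].

CONTEXT_PROPERTIES = ("server_load", "request_density")
BOUNDS = {"server_load": (0, 100), "request_density": (0, 100)}

def is_valid_instance(instance):
    # A context instance is an n-tuple of property values within bounds.
    if len(instance) != len(CONTEXT_PROPERTIES):
        return False
    return all(BOUNDS[p][0] <= v <= BOUNDS[p][1]
               for p, v in zip(CONTEXT_PROPERTIES, instance))

# The instance <12, 3>: 12 requests per second, 3 different requested files.
assert is_valid_instance((12, 3))

# A context flow is a time-ordered sequence of context instances.
context_flow = [(12, 3), (40, 3), (40, 17), (75, 17)]
assert all(is_valid_instance(i) for i in context_flow)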
B. Adaptation logic
Adaptation in DAS is driven by an adaptation logic (adaptation model) that uses a specific strategy to describe the configuration to adopt given a context change.
Definition 3. An adaptation logic defines a relation between contexts and system configurations. It receives a context instance (the current environmental condition), a context flow (the history of the environment), and the history of the system configurations, and gives the next configuration the system must adopt.
There exist several strategies to describe adaptation logics, a few examples are: action-based adaptation [9], where adaptations are triggered when a condition is satisfied; goal based adaptation [7], where adaptations are performed to reach a specific goal; and utility function based adaptation [15], where adaptations are calculated according to a cost function based on environmental conditions and variation point value.
An action-based strategy describes the adaptation logic of the adaptive web server [9]. In this case the adaptation logic is a set of rules (adaptation policies) that, whenever an event occurs (environmental change) evaluate if a set of conditions are satisfied, and if it is so, they perform a series of adaptation actions. Table I presents an excerpt of the adaptive web server adaptation logic. The first two rules manage the system cache. The first rule (lines 1-3) enables the cache mechanism when the request density is high or medium (line 1) and there is no cache (line 2). The second rule (lines 5-6) is analogous to the first. It reflects the fact that when the dispersion is high, adding a cache is not very useful. The remaining rules (lines 9-14) handle the variations of the server load property.
While Table I presents a textual action-based adaptation logic, the Diva framework 2 allows the adaptation policy to be expressed as a set of tables that are directly manipulated as model elements. Thus the connection with our MDE process is natural.
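To make the action-based strategy more tangible, the sketch below encodes the first cache rule described above (request density high or medium, no cache installed, enable the cache) as an event-condition-action triple in Python. The dictionary layout and names such as cache_handle_size are assumptions of this sketch, not the actual Fractal/Diva implementation.

# Illustrative event-condition-action (ECA) rule mirroring the first cache
# rule of Table I. The dictionaries stand in for the real component model;
# all field names are assumptions made for this sketch.

rule_enable_cache = {
    "event_property": "request_density",                  # bound context property
    "event_condition": lambda v: v in ("high", "medium"),
    "condition": lambda state: state["cache_handle_size"] == 0,
    "action_property": "addCache",
    "new_value": "high",
}

def fire(rule, changed_property, new_value, state):
    # Run one rule against a context change; return the adaptation, if any.
    if changed_property != rule["event_property"]:
        return None
    if not rule["event_condition"](new_value):
        return None
    if not rule["condition"](state):
        return None
    return (rule["action_property"], rule["new_value"])

state = {"cache_handle_size": 0}
print(fire(rule_enable_cache, "request_density", "high", state))
# prints ('addCache', 'high')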
III. MDE AND ADAPTATION LOGICS
This section will introduce the MDE concepts which are required to understand how we create and use mutation operators later in the paper.
A. Metamodeling, Kermeta and Sintaks
This section summarizes the intents of metamodelling and how the Kermeta environment fits in this modelling activity.
1) Metamodelling: Metamodelling is a technique used to build a metamodel that defines a modeling language for a particular domain. The metamodel defines the concepts and relationships that describe the domain. A metamodel is itself a model expressed with a modeling language called the metametamodel.
2) Kermeta: Kermeta [14] is a metamodelling environment developed at IRISA (Research Institute in Information Technology and Random Systems). This imperative and object-oriented language is used to provide an implementation of the operations defined in metamodels.
3) Sintaks: Sintaks [13] is a tool to define bridges between plain-text files and models.
B. Action-based adaptation logic metamodel
Figure 1 represents the metamodel we propose to capture the abstraction of action-based adaptation logics. An action-based adaptation logic always consists of a set of rules (RuleSet in the figure) called Event-Condition-Action, or ECA, rules. One ECA rule (Rule) features one event (Property), one condition (Condition) and one action (Action). An event is bound to a context property. When the bound context property changes and its new value matches the event condition (propertyCondition), the rule is executed. When the rule is executed, the condition (Condition->BoolExpression) has to be true for the rule's action to be performed. This condition usually refers to internal states of the adaptation system. The action consists of assigning a new value (newValue) to a property (actionProperty).
In short, a rule performs an Action if the bound Event property in the new context matches the Event condition and if the rule Condition is true. For instance, the first rule of the adaptation logic represented in Table I is bound to the property "requestdensity". The rule will be executed only after specific context changes in which the property becomes "high" or "medium". The action of assigning "high" to "addCache" is only performed if the internal variable cacheHandle.size equals zero.
The metamodel ecore file was created using EMF (Eclipse Modeling Framework) and GEE (Graphical Ecore Editor).
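For readers unfamiliar with Ecore, the following Python dataclasses give an in-memory analogue of the concepts of Figure 1 (RuleSet, Rule, Property, Condition, Action and their attributes). This is only an illustrative sketch of the metamodel's structure, not the EMF artifact itself; the types and defaults are assumptions.

# Illustrative in-memory analogue of the Figure 1 metamodel. The class and
# attribute names follow the concepts cited in the text; everything else
# (types, defaults) is an assumption of the sketch.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Property:
    name: str                                   # a context or configuration property

@dataclass
class Condition:
    bool_expression: Callable[[dict], bool]     # evaluated on internal state

@dataclass
class Action:
    action_property: Property
    new_value: str

@dataclass
class Rule:
    event_property: Property                    # the property the rule is bound to
    property_condition: Callable[[str], bool]   # matches the new property value
    condition: Condition
    action: Action

@dataclass
class RuleSet:
    rules: List[Rule] = field(default_factory=list)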
We will describe the process of mutant generation as well as the genericity of the metamodel in the following section. Figure 2 represents the mutant creation process. The process starts by selecting an adaptation policy (AP) expressed in an action-based language (L1 on the figure). The first step (1) is to transform the adaptation policy into a model conforming to the metamodel. The second step consists in applying the mutation operators to the policy. Mutation operators are generic and work on models, not plain-text files. Once the mutant models have been generated, they are transformed into plain-text files. The Sintaks tool was used to define a mapping between a rule set written in plain text and its model representation.
Since mutation operators are defined from the metamodel and work on models, they are independent of the action-based language used to write the adaptation policy: a bridge between textual files and models must be defined for each language. This is achieved by defining one bridge per language with Sintaks.
As a result, we obtain a set of plain-text mutants. The resulting plain-text mutants will be used to test the adaptation logic's test suites. We consider that test suites must be able to distinguish a correct adaptation logic from the incorrect ones.
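The round trip just described (policy text to model, model-level mutation, model back to text) can be summarized as in the sketch below. Here parse_policy and serialize_policy stand in for the per-language Sintaks bridges and apply_operator for a model-level mutation operator; all three are placeholders rather than the actual tooling.

# Sketch of the mutant-generation round trip of Figure 2.
# parse_policy / serialize_policy are placeholders for the per-language
# bridges; each operator is a callable applied to the policy model.

def generate_mutants(policy_text, parse_policy, serialize_policy, operators):
    # Return one mutated policy text per applicable operator instance.
    model = parse_policy(policy_text)            # text -> model (language-specific bridge)
    mutants = []
    for apply_operator in operators:
        mutant_model = apply_operator(model)     # mutation happens on the model
        if mutant_model is not None:
            mutants.append(serialize_policy(mutant_model))   # model -> text
    return mutants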
In the following section we introduce the first mutation operators.
IV. MUTATION OPERATORS FOR ADAPTATION LOGICS
Definition 3 introduces the concept of adaptation logic as the driver of the adaptation. Testing the realization of such driver means verifying whether the system is capable of adapting to environmental changes, and whether such adaptations proceed as expected. This section presents the challenges associated with testing adaptation logics, as well as a fault model for adaptation logics.
A. Testing challenges
Testing adaptation logics involves generating context instances, and evaluating the results of exposing the system to such context instances.
Figure 1. Metamodel of action-based adaptation logics.
Three steps compose the testing process:
1) Initially, testers synthesize a context flow from a series of context instances.
2) Then, they execute and expose the system to the generated context flow. Testers evaluate whether the configurations adopted by the system (configuration flow) when exposed to environmental changes are as expected. If not, the adaptation logic contains a fault.
3) The process may start again until a qualification criterion is reached. Note that (1) and (2) are not the object of the paper. Thus we generate test cases randomly. We rather focus on (3).
A test suite is a set of test cases. In this paper, a test case is defined as a context flow of a certain length L, where L represents the number of context instances in the flow. Given a flow f containing L context instances Ii, i ∈ (1, 2, ..., L), Ii and Ii+1 differ by one or more of their properties' values. For each Ii the adaptive system will generate one or more events corresponding to the properties that have changed. Those events are then handled by the adaptation logic (rules), which generates a new configuration for the system.
A test case is said to kill a mutant if the result (new configuration) generated by the mutant adaptation logic differs from the result given by the original adaptation logic.
This process enables us to detect:
• duplicate or useless rules (in this case the mutant is not killed);
• errors in the adaptation logic: either an event is not handled properly or an incorrect action is performed, leading to an incorrect new configuration.
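The kill criterion above, and the resulting mutation score used later in Section V, can be phrased as in the sketch below, where an adaptation logic is abstracted as a callable from a context flow to a configuration flow. This abstraction and the function names are assumptions of the sketch, not the instrumented set-up of Section V.

# Sketch of the kill criterion and of the mutation score. An adaptation
# logic is modeled as a callable: context flow -> configuration flow.

def is_killed(original_logic, mutant_logic, context_flow):
    # A test case (context flow) kills a mutant if the configuration flows differ.
    return original_logic(context_flow) != mutant_logic(context_flow)

def mutation_score(original_logic, mutant_logics, test_suite):
    # Fraction of mutants killed by at least one context flow of the suite.
    killed = sum(
        1 for mutant in mutant_logics
        if any(is_killed(original_logic, mutant, flow) for flow in test_suite)
    )
    return killed / len(mutant_logics)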
B. Fault model for adaptation logics
Managing the scenarios to which a system adapts is complex due to their large number and the difficulty to foresee the interactions between them.
In this section we introduce generic mutation operators for the adaptation logics metamodel. Those operators will mutate adaptation logic models conforming to the metamodel and thus are independent of any adaptation logic language.
1) Environmental completeness faults : Definition 1 defines a context as a tuple of fields representing environmental properties. The adaptation logic interprets these fields' values, and decides the system configuration that best fits the environmental conditions. It is possible, however, that the adaptation logic neglects some property values, or a complete property. We call faults of this type environmental completeness (EC) faults.
In the following, we describe three different types of EC faults represented as mutation operators.
1) ICP -Ignore Context Property
For a given property p, delete each rule that can be executed on p.
For instance, when ignoring property "requestdensity" the two last rules in table I (lines 9-14) are deleted.
2) ISV -Ignore Specific Context Property Value
For a given couple (property p, value v), delete each rule that can be executed when p equals v.
When ignoring value "high" for property "LOAD" one rule (lines 9-11) is deleted.
3) IMV -Ignore Multiple Context Property Values
For a given set of couples (property p i , value v i ), delete each rule that can be executed when any p i (i ∈ {1,2,...,N}) equals v i . (At least two rules with different properties are modified/deleted).
When ignoring value "high" for property "LOAD" and "low" for property "requestdensity", two rules (lines 5-7 and lines 9-11) are deleted.
2) Adaptation correctness faults: The observable behavior produced by the adaptation logic is the adaptation it produces when facing an environmental change. Sometimes such an adaptation does not change the system in the expected way. We call this kind of fault adaptation correctness (AC) faults, because they lead directly to incorrect adaptations. Notice that the observable behavior of EC faults is manifested in at least one of the following AC faults.
1) SRA -Swap Rule Action
The action values from two rules modifying the same property are swapped.
For instance "high" and "low" swap lines 11 and 14 in table I for property "addFileServer".
2) Modify Rule Condition Value
The condition value (always on the right part of the condition), for a condition which uses operator > or <, in a rule is decreased or increased, respectively.
For instance in table I line 10, the value "10" is increased to "100".
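The two adaptation-correctness operators can be sketched on the same rule layout: SRA exchanges the action values of two rules acting on the same property, and the condition-value operator shifts a numeric threshold. The threshold field below is an assumed representation of the right-hand side of a > or < condition.

# Sketch of the adaptation-correctness operators on the same rule layout.
import copy

def sra(rules, i, j):
    # Swap Rule Action: exchange the action values of two rules that modify
    # the same property.
    assert rules[i]["action_property"] == rules[j]["action_property"]
    mutated = copy.deepcopy(rules)
    mutated[i]["new_value"], mutated[j]["new_value"] = (
        mutated[j]["new_value"], mutated[i]["new_value"])
    return mutated

def modify_condition_value(rules, i, delta):
    # Modify Rule Condition Value: shift the numeric threshold of a rule whose
    # condition uses > or < (stored here in an assumed 'threshold' field).
    mutated = copy.deepcopy(rules)
    mutated[i]["threshold"] += delta
    return mutated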
V. EXPERIMENTS
In this section we present a preliminary proof of concept based on the adaptive web server system.
A. Test subject
To validate our hypothesis about the ability of AST to uncover faults in adaptation logics, we use the adaptive web server presented in Section 2 as a test subject. Figure 3 illustrates the architectural realization of the adaptation logic presented in Section 2. It is composed of a sensor component, which is aware of the environment and collects the data produced by environmental changes. It encodes the data into values representing the environmental properties of interest (context instance) and passes them to a reconfiguration engine. Finally, the reconfiguration engine loads the adaptation rules and matches the values against them. If an adaptation rule matches the values, then it requests the system implementation to reconfigure as described by the rule.
To inject context instances and collect reconfiguration data we have instrumented the adaptation logic. Figure 3 presents the instrumented architecture. We have modified the source code of the sensor component and replaced the environment sensing mechanism with an environment emulator. This emulator reads context flows from a text file and injects them into the system, provoking the instrumented sensor to respond identically to the non-instrumented one. We have also added a reconfiguration probe that records the reconfiguration requests produced by the reconfiguration engine.
B. Experiment set up, results and analysis
1) Experiment: We prepared and executed our experiment as described in Table II. We generated 30 test suites. Each of them contains 10 test cases (context flows). A flow is created by uniformly selecting a sequence of context instances among all the possible context instances.
2) Results and analysis: Table III presents the global mutation score (number of unique killed mutants).
What we notice is that about 30% of the mutants are not killed with random generation. Even if we take longer test cases, the results are similar. This first result shows that other techniques should be studied.
C. Threats to validity
There exists no perfect data, or perfectly trustable analysis results, and this study is not an exception. For this reason we identify the construction, internal and external threats to validity for this study.
Internal threats lie in the source and nature of the empirical data. We recognize that we have only studied a small adaptive system realizing the adaptation logic through action-based reasoning. The limited number of environmental properties and the size of the space represent a threat, since it is easy to achieve uniform coverage with few context instances.
External threats lie in the statistical significance of our study. We are aware that, since we study a single small adaptive system, it does not represent industrial trends. To make more general statements it is necessary to try the presented technique on larger systems. However, DAS are an emergent technology whose adoption is still under way.
VI. RELATED WORK
As far as we know there is no other work that uses mutation to measure the quality of adaptation logics' tests. However, a large number of researchers have addressed the validation and testing problem of adaptive systems. Zhang et al. [17] address the verification of dynamically adaptive systems through modular model checking. They model the adaptive system as a finite state machine in which states represent different system variants. Zhang and Cheng [16] introduce a model-based development process for adaptive software that uses Petri nets. Biyani and Kulkarni [3] use predicate detection for testing adaptive systems during adaptation. They extend existing algorithms based on global predicate evaluation [2] for testing distributed systems to the system during adaptation. Kulkarni and Biyani [11] introduce an approach using proof lattices to verify that all possible adaptation paths do not violate global constraints. Allen et al. [1] used the Wright ADL to integrate the specifications of both architectural and behavioral aspects of dynamically reconfigurable systems. Kramer and Magee [10] use property automata and labeled transition systems to specify and verify adaptive programs' properties. The main difference between these verification approaches and ours is the focus of attention. We are interested in verifying the adaptation driver through testing, and not the adaptation process itself. Furthermore, these approaches require computing all the system configurations and the transitions between them, which is sometimes not possible.
Lu et al. [12] study the testing of pervasive context-aware software. They propose a family of test adequacy criteria that measure the quality of test sets with respect to the context variability.
Since very different testing techniques exist, we hope that mutation will reveal itself as a good way to compare them.
VII. CONCLUSIONS AND PERSPECTIVES
The mutation operators presented in this paper are a first proposal to offer a qualification environment for comparing testing techniques applied to action-based adaptive systems. The use of MDE makes it possible to derive mutants for most action-based logics, thus providing a common framework for such test case qualification. The case study shows the feasibility of the approach and confirms that, for killing mutants, other testing techniques should be considered rather than random test generation. Due to the size of the case study and the number of environmental properties it contains, it is not possible to generalize to larger DAS. Future work will thus consist of completing the set of mutation operators and will exhibit experimental results on other case studies, comparing several test generation techniques. We plan to experiment with a much larger case study, which comprises several environmental properties and interactions. Furthermore, we plan to study and specialize our fault model for other adaptation logic technologies, such as goal-oriented ones.
Figure 2. Mutants generation process.
Figure 3. Instrumented architecture of the adaptive web server adaptation logic.
Table I. Excerpt of the adaptive web server adaptation logic.
Table II. Experiment set-up and execution.
Number of test suites: 30
Number of context flows per test suite: 10
Number of context instances per flow: 20
Number of mutants of the adaptation logic: 130
Total number of simulations: 39,000 (30 · 10 · 130)
Table III. Experiment results.
Test suite: random
Minimum mutation score: 91/130 ≈ 70%
Maximum mutation score: 96/130 ≈ 74%
Average mutation score: 93/130 ≈ 71%
IDC is an analyst company and a global provider of market intelligence, advisory services, and events for the information technology, telecommunications, and consumer technology markets.
[1] R. Allen, R. Douence, and D. Garlan. Specifying and analyzing dynamic software architectures, pages 21-37.
[2] O. Babaoglu and K. Marzullo. Consistent global states of distributed systems: fundamental concepts and mechanisms. Technical Report UBLCS-93-1, University of Bologna, Department of Computer Science, Jan. 1993.
[3] K. N. Biyani and S. S. Kulkarni. Testing dynamic adaptation in distributed systems. In H. Zhu, W. E. Wong, and A. M. Paradkar, editors, AST, pages 51-54. IEEE, 2007.
[4] F. Chauvel, O. Barais, I. Borne, and J.-M. Jézéquel. Composition of qualitative adaptation policies. In Automated Software Engineering Conference (ASE 2008), pages 455-458, 2008. Short paper.
[5] S. D. Drake, R. Boggs, and J. Jaffe. Worldwide mobile worker population 2009-2013 forecast, 2010.
[6] D. Dvorak, R. Rasmussen, G. Reeves, and A. Sacks. Software architecture themes in JPL's mission data system. In AIAA Space Technology Conference and Exposition, Albuquerque, NM, 1999.
[7] F. Eliassen, E. Gjørven, V. S. W. Eide, and J. A. Michaelsen. Evolving self-adaptive services using planning-based reflective middleware. In The 5th Annual Workshop on Adaptive and Reflective Middleware (ARM 2006), pages 1-6. ACM Press, 2006.
[8] D. Hughes, P. Greenwood, G. Blair, G. Coulson, P. Smith, and K. Beven. An intelligent and adaptable grid-based flood monitoring and warning system. In Proceedings of the UK eScience All Hands Meeting, pages 53-60, 2005.
[9] J. Keeney and V. Cahill. Chisel: a policy-driven, context-aware, dynamic adaptation framework. In Proceedings of the 4th IEEE International Workshop on Policies for Distributed Systems and Networks (Policy 2003), pages 3-14. IEEE, June 2003.
[10] J. Kramer and J. Magee. Analysing dynamic change in distributed software architectures. IEE Proceedings - Software, 145(5):146-154, 1998.
[11] S. S. Kulkarni and K. N. Biyani. Correctness of component-based adaptation. In I. Crnkovic, J. A. Stafford, H. W. Schmidt, and K. C. Wallnau, editors, CBSE, volume 3054 of Lecture Notes in Computer Science, pages 48-58. Springer, 2004.
[12] H. Lu, W. K. Chan, and T. H. Tse. Testing pervasive software in the presence of context inconsistency resolution services. In W. Schäfer, M. B. Dwyer, and V. Gruhn, editors, 30th International Conference on Software Engineering (ICSE 2008), Leipzig, Germany, May 10-18, 2008, pages 61-70. ACM, 2008.
[13] P.-A. Muller, F. Fleurey, F. Fondement, M. Hassenforder, R. Schneckenburger, S. Gérard, and J.-M. Jézéquel. Model-driven analysis and synthesis of concrete syntax. In Proceedings of MoDELS/UML 2006, Genova, Italy, Oct. 2006.
[14] P.-A. Muller, F. Fleurey, and J.-M. Jézéquel. Weaving executability into object-oriented meta-languages. In Proceedings of MODELS/UML 2005, volume 3713 of LNCS, pages 264-278, Montego Bay, Jamaica, Oct. 2005. Springer.
[15] W. E. Walsh, G. Tesauro, J. O. Kephart, and R. Das. Utility functions in autonomic systems, 2004.
[16] J. Zhang and B. H. C. Cheng. Model-based development of dynamically adaptive software. In L. J. Osterweil, H. D. Rombach, and M. L. Soffa, editors, ICSE, pages 371-380. ACM, 2006.
[17] J. Zhang, H. Goldsby, and B. H. C. Cheng. Modular verification of dynamically adaptive systems. In Proceedings of the 8th International Conference on Aspect-Oriented Software Development, AOSD 2009, Charlottesville, Virginia, USA, March 2-6, 2009, pages 161-172. ACM, 2009.
| [] |
[
"Radiation reaction at 3.5 post-Newtonian order in effective field theory",
"Radiation reaction at 3.5 post-Newtonian order in effective field theory"
] | [
"Chad R Galley ",
"Adam K Leibovich ",
"\nTheoretical Astrophysics\nJet Propulsion Laboratory\nCalifornia Institute of Technology\n91109PasadenaCaliforniaUSA\n",
"\nDepartment of Physics and Astronomy\nPittsburgh Particle physics Astrophysics and Cosmology Center (PITT PACC)\nCalifornia Institute of Technology\n91125PasadenaCaliforniaUSA\n",
"\nUniversity of Pittsburgh\n15260PittsburghPennsylvaniaUSA\n"
] | [
"Theoretical Astrophysics\nJet Propulsion Laboratory\nCalifornia Institute of Technology\n91109PasadenaCaliforniaUSA",
"Department of Physics and Astronomy\nPittsburgh Particle physics Astrophysics and Cosmology Center (PITT PACC)\nCalifornia Institute of Technology\n91125PasadenaCaliforniaUSA",
"University of Pittsburgh\n15260PittsburghPennsylvaniaUSA"
] | [] | We derive the radiation reaction forces on a compact binary inspiral through 3.5 order in the post-Newtonian expansion using the effective field theory approach. We utilize a recent formulation of Hamilton's variational principle that rigorously extends the usual Lagrangian and Hamiltonian formalisms to dissipative systems, including the inspiral of a compact binary from the emission of gravitational waves. We find agreement with previous results, which thus provides a non-trivial confirmation of the extended variational principle. The results from this work nearly complete the equations of motion for the generic inspiral of a compact binary with spinning constituents through 3.5 post-Newtonian order, as derived entirely with effective field theory, with only the spin-orbit corrections to the potential at 3.5 post-Newtonian remaining. | 10.1103/physrevd.86.044029 | null | 12,943,676 | 1205.3842 | 026880dc334a5192b01f894cbb56fcae84ae5558 |
Radiation reaction at 3.5 post-Newtonian order in effective field theory
17 May 2012
Chad R Galley
Adam K Leibovich
Theoretical Astrophysics
Jet Propulsion Laboratory
California Institute of Technology
91109PasadenaCaliforniaUSA
Department of Physics and Astronomy
Pittsburgh Particle physics Astrophysics and Cosmology Center (PITT PACC)
California Institute of Technology
91125PasadenaCaliforniaUSA
University of Pittsburgh
15260PittsburghPennsylvaniaUSA
Radiation reaction at 3.5 post-Newtonian order in effective field theory
17 May 2012 (Dated: May 1, 2014)
We derive the radiation reaction forces on a compact binary inspiral through 3.5 order in the post-Newtonian expansion using the effective field theory approach. We utilize a recent formulation of Hamilton's variational principle that rigorously extends the usual Lagrangian and Hamiltonian formalisms to dissipative systems, including the inspiral of a compact binary from the emission of gravitational waves. We find agreement with previous results, which thus provides a non-trivial confirmation of the extended variational principle. The results from this work nearly complete the equations of motion for the generic inspiral of a compact binary with spinning constituents through 3.5 post-Newtonian order, as derived entirely with effective field theory, with only the spin-orbit corrections to the potential at 3.5 post-Newtonian remaining.
I. INTRODUCTION
The advent of advanced ground-based gravitational wave (GW) interferometer detectors (i.e., advanced LIGO and advanced VIRGO) brings an increasing demand for more accurate waveform templates to be used for detecting gravitational waves and for extracting information about the parameters associated with a source, such as the masses, spins, distance, and sky location. Currently, a goal of the GW source-modeling community is to produce inspiral waveforms accurate through at least 3.5 post-Newtonian (PN) order. The PN expansion is a perturbation theory for the gravitational field and the binary's motion in the weak-field and slow-motion limits (see [1] for a review). The equations for the relative motion of the binary are known already through 3.5PN order, even when including the spin angular momenta of the binary's constituents (see, e.g., [1][2][3][4][5] and references therein). However, the gravitational wave flux and the waveform, especially, are not yet known to such a high order for spinning binary inspiral sources.
High-accuracy waveforms and source-modeling are also important for matching post-Newtonian inspiral waveforms to numerical simulations of binary mergers for purposes of parameter estimation (see e.g., [6]) and for accurately calibrating phenomenological models like Effective One Body [7] and hybrid models [8][9][10][11]. These phenomenological models may be used to construct relatively cheap template banks without having to run a prohibitively large number of expensive numerical simulations of binary mergers. This is especially true if combined with an efficient template bank compression and representation scheme such as provided by the Reduced Basis method [12].
The Effective Field Theory (EFT) approach [13] offers an efficient and algorithmic computational framework compared to traditional methods [1,14] and has rapidly made useful contributions towards the goal of 3.5PN-accurate inspiral waveforms. To date, this includes the calculation of the PN corrections to the binding potential, including spin angular momenta of the binary's component masses, through 3PN order [2,3,13,[15][16][17][18][19][20][21][22], the spin1-spin2 terms computed at 4PN [23], the leading order radiation reaction force at 2.5PN [24], the multipole moments needed for calculating the gravitational wave flux through 3PN [25], and the moments needed for calculating the waveform amplitude corrections through 2.5PN [26].
In this paper, we calculate the radiation reaction forces that appear at 3.5PN order (spin effects do not enter radiation reaction until 4PN). In doing so, we nearly complete the PN equations of motion for the generic inspiral of a compact binary with spinning constituents as computed entirely in the EFT framework. Only the spin-orbit correction to the potential at 3.5PN order is remaining to be computed in EFT. This is a rather remarkable achievement given that the EFT approach was introduced almost eight years ago [13].
Computing radiative effects in the EFT approach presents a unique challenge since the formalism makes heavy use of an action formulation of the binary system. More specifically, it is well-known that Lagrangians and Hamiltonians are not generally applicable to dissipative systems, which would make computing the PN radiation reaction forces in EFT very difficult. In [24], it was indicated how this might be overcome using a language and notation from quantum field theory but was not given a rigorous foundation within a purely classical mechanical context. Nevertheless, as a demonstration of the formalism, the 2.5PN radiation reaction force and the gravitational waveform from the binary's leading order quadrupole moment were computed [24] and shown to agree with existing results [27,28], thus lending credibility to the method. Recently, one of us (CRG) gave a rigorous extension of Hamilton's variational principle in [29] that yields a Lagrangian (and Hamiltonian) formulation that suitably and correctly describes generally dissipative systems. We use this formalism here, together with EFT, to compute the 3.5PN radiation reaction force and find agreement with previously published results [30,31]. This agreement lends non-trivial confirmation for the validity of the extended variational principle for dissipative systems described in [29] (see also the examples given in that reference). This paper is organized as follows. Section II gives an overview of the formalisms needed to compute the radiation reaction force at 3.5PN order in EFT. Specifically, Section II A reviews the EFT of compact binary inspirals and Section II B reviews the recently formulated extension of Hamilton's variational principle for dissipative systems [29]. Section III discusses the computation of the 3.5PN radiation reaction force in EFT where agreement is shown with previous results [30,31]. Section IV concludes the paper and Appendix A outlines in detail the EFT calculation of the leading order 2.5PN radiation reaction force of Burke and Thorne [27,28].
II. OVERVIEW OF EFFECTIVE FIELD THEORY AND DISSIPATIVE MECHANICS
We begin by reviewing the effective field theory of compact binary inspirals, focusing mainly on the radiation sector, and end the section by reviewing how dissipative (e.g., radiative) effects can be handled within the newly developed mechanics for dissipative systems [29].
A. Effective field theory of compact binary inspirals
The EFT approach, introduced by Goldberger and Rothstein in [13], separately treats the relevant scales of the binary by successively "integrating out" the smaller scales thereby yielding a hierarchy of EFTs that are related to each other through so-called matching calculations. The effects of short-distance physics in a large-distance effective theory are parameterized in a manner consistent with the symmetries (e.g., general coordinate invariance) as discussed in more detail below.
The slow inspiral of compact binaries has three relevant scales, from smallest to largest: the size of the compact objects (COs) R m , their orbital separation r, and the wavelength of the emitted gravitational waves λ. The first EFT describes the extended masses in the point particle approximation. To incorporate the finite size of the CO one appends to the point particle action all possible interaction terms that are consistent with general coordinate invariance and reparameterizations of the worldline. This leads to a worldline EFT, for one of the COs (e.g., spherical and non-spinning), that is described by the action [13]
$$S_{\rm CO} = -m\int d\tau + C_E\int d\tau\, E_{\alpha\beta}E^{\alpha\beta} + C_B\int d\tau\, B_{\alpha\beta}B^{\alpha\beta} + \cdots \tag{1}$$
where E αβ and B αβ are the electric and magnetic parts of the Weyl curvature tensor. The coefficients C E , C B , . . . are determined via a matching calculation, wherein a chosen quantity is calculated in the effective theory and in the long-distance limit of the "full" theory for the CO. One can show that these extra terms in the action contribute to the binding potentials (due to induced quadrupole moments) starting at 5PN for non-spinning COs. We can therefore ignore these terms for the work presented here. As one "zooms out" from R m to the orbital radius r, the system is described by General Relativity coupled to two COs that are each described by the worldline EFT in (1). At this scale, the particles interact with two kinds of gravitational perturbations. The first describes the nearly instantaneous potentials that bind the two particles in orbit (for further details see [13]). The second is the long-wavelength GWs emitted by the binary.
As one "zooms out" from r, the binary itself is described in the point particle approximation by a composite object [13,24,32]. This composite object radiates gravitational waves from its time-dependent multipole moments that can be calculated by matching onto the radiative moments of the binary at the orbital scale. The worldline EFT for this radiating object has an action given by [32]
$$S_{\rm rad}[x^\mu, h_{\mu\nu}] = -\int d\tau\, M(\tau) - \frac{1}{2}\int d\tau\, L_{ab}(\tau)\big(\Omega^{ab}_L + u_\mu\,\omega^{\mu ab}(\tau)\big) + \frac{1}{2}\sum_{n=0}^{\infty}\int d\tau\, c^{(I)}_n\, I^{ab a_1\cdots a_n}(\tau)\,\nabla_{a_1}\cdots\nabla_{a_n} E_{ab}(x^\mu) + \frac{1}{2}\sum_{n=0}^{\infty}\int d\tau\, c^{(J)}_n\, J^{ab a_1\cdots a_n}(\tau)\,\nabla_{a_1}\cdots\nabla_{a_n} B_{ab}(x^\mu) + \cdots \tag{2}$$
where $x^\mu(\tau)$ are the worldline coordinates ($\tau$ being its proper time), $I^{ab a_1\cdots a_n}(\tau)$ and $J^{ab a_1\cdots a_n}(\tau)$ are the symmetric, trace-free (STF) mass and current multipole moments, respectively, of the composite object [32,33], $\Omega^{ab}_L$ is the angular frequency of the body's rotation as measured in a locally flat Lorentz frame, and $\omega^{\mu ab}$ are the spin connection coefficients, which couple to the total angular momentum $L^{ab} = -L^{ba}$. Also, lower case roman letters take values in $\{1, 2, 3\}$. The mass of the body is taken to be generally time-dependent, $M(\tau)$, as the body may lose rest-mass energy, as measured by a distant observer, via gravitational wave emission. A local Lorentz frame is attached to the worldline such that $e^\mu_0(\tau) = u^\mu(\tau) = dx^\mu/d\tau$ and the $e^\mu_a(\tau)$ are space-like vectors that rotate with the body and thus account for its spin dynamics. The first several $c^{(I,J)}_n$ coefficients are conventionally taken to be
$$c^{(I)}_0 = 1, \qquad c^{(J)}_0 = -\frac{4}{3}, \qquad c^{(I)}_1 = \frac{1}{3}. \tag{3}$$
It is important to note that (2) describes any multipolar, extended body that has a size smaller than the wavelength of gravitational waves it emits. Therefore, (2) is a rather general and model-independent description of such a system. However, using the multipole moments computed in the PN expansion for compact binary inspirals [25,33] one may use (2) to study radiative effects, such as radiation reaction, in compact binary systems.
As with any field theory coupled to point particles, including the EFTs reviewed here, divergences will appear. However, there are an infinite number of parameters in the theory (e.g., $c^{(I)}_n$, $c^{(J)}_n$) so that divergences can always be absorbed into these coupling constants. Interestingly, these parameters can exhibit a classical renormalization group flow due to gravitational screening effects, which manifest as logarithmic divergences in the potentials. Unlike with traditional approaches for the PN expansion [1], divergences in the EFT approach have a natural place and interpretation in the context of renormalization group theory [13]. Spin angular momenta for the binary's component masses can be included as in [2].
B. Classical mechanics for dissipative systems
Determining the evolution of the compact binary in EFT is achieved by integrating out the gravitational perturbations from the action. In practice, this is accomplished by solving for the gravitational perturbations and substituting these solutions into the original action to yield an effective action for the worldlines of the component masses [13,34], which is efficiently carried out using Feynman diagrams [13]. This procedure is well-suited for conservative interactions, such as for computing PN corrections to the binding potential of a compact binary, but requires modification when studying dissipative systems, such as the inspiral of a compact binary due to the emission of gravitational radiation, the reason being that dissipative systems generally do not admit Lagrangian or Hamiltonian descriptions.
Recently, one of us (CRG) introduced a rigorous variational calculus for Hamilton's principle of stationary action that includes systems exhibiting dissipative effects and thus generalizes the usual action principle [29]. The derivation and the details will not be given here but instead refer the reader to [29]. However, we will review the relevant formalism needed specifically for computing the radiation reaction force in this paper.
The total system formed by the gravitational perturbations $h_{\mu\nu}$ and the worldlines of the compact bodies in a binary system, $\vec x_K(t)$ for $K = 1, 2$, is closed. Only when the gravitational perturbations are integrated out is the dynamics of the worldlines open. When integrating out the long wavelength gravitational perturbations at the level of the action one must be careful when applying Hamilton's principle of stationary action to the effective action since it is formulated by specifying boundary conditions in time, not initial conditions. This observation may seem innocuous, except that the resulting effective action for the worldlines describes conservative (i.e., non-dissipative) dynamics: the Green function for the gravitational perturbations that appears in the effective action is time-symmetric (as these are the ones satisfying boundary conditions according to Sturm-Liouville theory) and hence does not account for the dissipative effects of radiation reaction [24].
To overcome this problem, one formally doubles the degrees of freedom [29] so that h µν → (h µν1 , h µν2 ) and x K → ( x K1 , x K2 ) then constructs the following action,
$$S[\vec x_{K1}, \vec x_{K2}, h_{\mu\nu 1}, h_{\mu\nu 2}] = S[\vec x_{K1}, h_{\mu\nu 1}] - S[\vec x_{K2}, h_{\mu\nu 2}] \tag{4}$$
where each action on the right side consists of the Einstein-Hilbert action (here, gauge-fixed in the Lorenz gauge and expanded around flat spacetime) and the worldline EFT action (2). Integrating out the long wavelength gravitational perturbations using Feynman diagrams at the desired PN order (see [24] for the Feynman rules of the radiation EFT and how to construct the Feynman diagrams) gives the effective action for the open dynamics of the binary's inspiral. After all variations are performed one is free to identify the doubled variables with the physical ones so that, for example, x K1 , x K2 → x K . This limit will be called the physical limit.
It is convenient, but not necessary, to change variables from the (1, 2) variables to ± variables where
$$\vec x_{K+} = \frac{\vec x_{K1} + \vec x_{K2}}{2}, \tag{5}$$
$$\vec x_{K-} = \vec x_{K1} - \vec x_{K2}. \tag{6}$$
This change of variable is motivated by the physical limit since then x K− → 0 and x K+ → x K where x K is the physical worldline of the K th body. By computing the following variation [29],
$$0 = \frac{\delta S_{\rm eff}[\vec x_{1\pm}, \vec x_{2\pm}]}{\delta \vec x_{K-}(t)}\bigg|_{\substack{\vec x_{K-}=0 \\ \vec x_{K+}=\vec x_K}} \tag{7}$$
one obtains a set of worldline equations of motion that properly incorporates radiation reaction effects [24,29]. It is important to note that (7) receives non-zero contributions from terms in the effective action that are perturbatively linear in x K− or its time derivatives. Therefore, all terms of quadratic order or higher in any "−" variables can be dropped from the effective action for the purposes of deriving the worldline equations of motion. We will take advantage of this property in the course of the calculations below in Section III and Appendix A.
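As a concrete, self-contained illustration of the doubled-variable construction (4)-(7) (not an example from this paper; a damped harmonic oscillator in the spirit of the examples in [29]), the SymPy sketch below varies a toy nonconservative Lagrangian with respect to the "−" variable and then takes the physical limit, producing the expected dissipative equation of motion. All symbol names are hypothetical.

```python
import sympy as sp

t = sp.symbols('t')
m, k, gam = sp.symbols('m k gamma', positive=True)
x = sp.Function('x')(t)        # physical trajectory
xp = sp.Function('x_p')(t)     # doubled variable x_+
xm = sp.Function('x_m')(t)     # doubled variable x_-

# Toy nonconservative Lagrangian in the doubled variables (damped oscillator)
L = m*xp.diff(t)*xm.diff(t) - k*xp*xm - gam*xm*xp.diff(t)

# Vary with respect to x_- only (cf. Eq. (7)), then take the physical limit x_+ -> x
eom = L.diff(xm) - L.diff(xm.diff(t)).diff(t)
print(sp.simplify(eom.subs(xp, x)))
# -> -gamma*Derivative(x, t) - k*x - m*Derivative(x, (t, 2)), i.e. m x'' + gamma x' + k x = 0
```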
In the remaining sections, we compute the terms of the effective action at 3.5PN order by computing the corresponding Feynman diagrams in the extended action formalism of [29]. Once the effective action is derived in Section III, we then apply the variational principle in (7) to obtain the radiation reaction forces on the binary at 3.5PN order.
III. RADIATION REACTION THROUGH 3.5PN
In this section we lay out how to calculate the radiation reaction at 3.5PN in the EFT. However, the first nonconservative term in the acceleration enters at 2.5PN order and is due to radiation reaction. We include in Appendix A a detailed calculation of the radiation reaction at 2.5PN using the EFT. The 3.5PN terms can be derived similarly so we will often refer the reader to Appendix A for formulae.
The relative acceleration a i = a i 1 − a i 2 for a non-spinning two-body system can be expanded as
$$a^i = a^i_0 + \epsilon\, a^i_1 + \epsilon^2 a^i_2 + \epsilon^{2.5} a^i_{2.5} + \epsilon^3 a^i_3 + \epsilon^{3.5} a^i_{3.5} + \cdots \tag{8}$$
where $\epsilon = 1$ formally counts the post-Newtonian order. The motion through 2PN order is conservative, meaning that the motion can be characterized by the conserved total energy and angular momentum. Dissipative effects first arise at order 2.5PN due to gravitational radiation reaction. The 3PN terms are again conservative while the 3.5PN terms are the first post-Newtonian corrections to radiation reaction. The first term in (8) is just the Newtonian acceleration,
$$a^i_0 = -\frac{m}{r^2}\, n^i, \tag{9}$$
while the second term is the 1PN correction, derived from the Einstein-Infeld-Hoffmann (EIH) Lagrangian [35],
$$a^i_1 = -(3 + \eta)\frac{m}{r}a^i - m\eta\,\frac{\vec a\cdot\vec x}{r^2}\, n^i - \frac{1 - 3\eta}{2}v^2 a^i - (1 - 3\eta)(\vec v\cdot\vec a)\, v^i - \frac{m}{r^2}\bigg\{n^i\bigg[-\frac{m}{r} + \frac{3}{2}(1 + \eta)v^2 - \frac{3}{2}\eta\dot r^2\bigg] - v^i\dot r\,(3 + \eta)\bigg\} \tag{10}$$
$$= -\frac{m}{r^2}\bigg\{n^i\bigg[-2(2 + \eta)\frac{m}{r} + (1 + 3\eta)v^2 - \frac{3}{2}\eta\dot r^2\bigg] - 2(2 - \eta)\dot r\, v^i\bigg\}, \tag{11}$$
where $m$ is the total mass, $\eta = m_1 m_2/m^2$ is the symmetric mass ratio, $n^i = x^i/r$ with $x^i = x^i_1 - x^i_2$ the separation between the point masses, and $v^i = dx^i/dt$. The EIH Lagrangian was calculated using the EFT in [13,15]. To go from Eq. (10) to Eq. (11) it is necessary to order-reduce by substituting the $O(\epsilon^0)$ expression for the acceleration. By including the acceleration at $O(\epsilon^{2.5})$ when performing the order-reduction, we get a contribution to the 3.5PN acceleration. This will be discussed in more detail below. In the rest of this section we derive the radiation reaction through 3.5PN using the EFT.
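The order reduction from Eq. (10) to Eq. (11) can be checked symbolically. The short SymPy sketch below (an illustration added here, not part of the original paper) substitutes the Newtonian value $a^i \to a^i_0 = -(m/r^2)n^i$, so that $\vec a\cdot\vec x \to -m/r$ and $\vec v\cdot\vec a \to -(m/r^2)\dot r$, and verifies that the coefficients of $n^i$ and $v^i$ reproduce Eq. (11); the symbol `v2` stands for $v^2$ and `rdot` for $\dot r$.

```python
import sympy as sp

m, r, eta, v2, rdot = sp.symbols('m r eta v2 rdot', positive=True)

# Order-reduce Eq. (10): a^i -> -(m/r^2) n^i, so a.x -> -m/r and v.a -> -(m/r^2) rdot
a_dot_x = -m/r
v_dot_a = -m/r**2*rdot

# Coefficients of n^i and v^i in Eq. (10) after the substitution
coeff_n = (-(3 + eta)*m/r*(-m/r**2)                 # -(3+eta)(m/r) a^i
           - m*eta*a_dot_x/r**2                      # -m eta (a.x)/r^2 n^i
           - (1 - 3*eta)/2*v2*(-m/r**2)              # -(1-3eta)/2 v^2 a^i
           - m/r**2*(-m/r + sp.Rational(3, 2)*(1 + eta)*v2
                     - sp.Rational(3, 2)*eta*rdot**2))
coeff_v = (-(1 - 3*eta)*v_dot_a                      # -(1-3eta)(v.a) v^i
           - m/r**2*(-(3 + eta)*rdot))               # +(m/r^2)(3+eta) rdot v^i

# Coefficients of n^i and v^i in Eq. (11)
coeff_n_11 = -m/r**2*(-2*(2 + eta)*m/r + (1 + 3*eta)*v2
                      - sp.Rational(3, 2)*eta*rdot**2)
coeff_v_11 = -m/r**2*(-2*(2 - eta)*rdot)

print(sp.simplify(coeff_n - coeff_n_11))  # -> 0
print(sp.simplify(coeff_v - coeff_v_11))  # -> 0
```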
A. Feynman diagrams
To calculate the radiation reaction, we begin from the worldline action (2) and integrate out the electric and magnetic parts of the Weyl curvature tensor giving, to the order we require, the effective action
$$S^{\rm 3.5PN}_{\rm eff} = S_{\rm mq} + S_{\rm mo} + S_{\rm cq}, \tag{12}$$
where the terms on the right-hand side stand for mass quadrupole (mq), mass octupole (mo), and current quadrupole (cq), respectively. As discussed in Appendix A, at 2.5PN this is accomplished by calculating the diagram in Fig. 1. At 2.5PN, each vertex is given by the leading order mass quadrupole moment
$$I^{ij} = I^{ij}_0 + O(\epsilon) \qquad\text{where}\qquad I^{ij}_0 = \sum_K m_K\Big(x^i_K x^j_K - \frac{1}{3}\delta^{ij}\vec x^2_K\Big), \tag{13}$$
and where K labels the worldlines in the binary K = 1, 2 and the subscript on I ij denotes the (relative) PN order in ǫ. To calculate at 3.5PN, we need to include PN corrections to the mass quadrupole,
$$I^{ij} = I^{ij}_0 + \epsilon I^{ij}_1 + O(\epsilon^{1.5}) = \sum_K m_K\bigg[x^i_K x^j_K + \epsilon\bigg(\Big(\frac{3}{2}v^2_K - \sum_{L\neq K}\frac{Gm_L}{r}\Big)x^i_K x^j_K + \frac{11}{42}\frac{d^2}{dt^2}\big(\vec x^2_K\, x^i_K x^j_K\big) - \frac{4}{3}\frac{d}{dt}\big(\vec x_K\cdot\vec v_K\, x^i_K x^j_K\big)\bigg)\bigg]^{\rm TF} + O(\epsilon^{1.5}) \tag{14}$$
$$= \mu\, x^i x^j + \epsilon\mu\bigg[\Big(\frac{29}{42}(1 - 3\eta)v^2 - \frac{1}{7}(5 - 8\eta)\frac{m}{r}\Big)x^i x^j + \frac{1}{21}(1 - 3\eta)\big(11 r^2 v^i v^j - 12 r\dot r\, x^{(i}v^{j)}\big)\bigg]^{\rm TF} + O(\epsilon^{1.5}), \tag{15}$$
where TF denotes taking the trace-free part of the enclosed expressions and in the last line we have gone to center-ofmass coordinates and µ = ηm is the reduced mass. However, the diagram in Fig. 1 is the same whether one includes the 1PN corrections to the mass quadrupole or not, so we get the effective action (A21) derived in Appendix A, namely,
$$S_{\rm mq} = \frac{G}{5}\int dt\, I^{ij}_-(t)\, I^{(5)}_{ij+}(t). \tag{16}$$
where I ij − ≡ I ij 1 − I ij 2 and I ij + ≡ (I ij 1 + I ij 2 )/2, and I ij A for A = 1, 2 is the mass quadrupole moment formed by the A th worldline (A labels the doubled worldlines, not the components of the two-body system, which is indexed by K = 1, 2). By expanding in ǫ,
$$S_{\rm mq} = \frac{G}{5}\bigg[\epsilon^0\int dt\, I^{ij}_{0-}(t)\, I^{(5)}_{0ij+}(t) + \epsilon\int dt\, I^{ij}_{0-}(t)\, I^{(5)}_{1ij+}(t) + \epsilon\int dt\, I^{ij}_{1-}(t)\, I^{(5)}_{0ij+}(t) + O(\epsilon^{1.5})\bigg]. \tag{17}$$
The first term corresponds to the 2.5PN radiation reaction, calculated in Appendix A. The second term can be treated similar to the 2.5PN term because the same factor of I ij 0− will be functionally differentiated when applying Eq. (7). In the last term, however, we need to do a bit more work. It is easiest to use Eq. (14) and integrate by parts to move the derivatives in the last two pieces of the "−" mass quadrupole onto the "+" mass quadrupole in Eq. (17). Once that is done, it is straightforward to vary with respect to the "−" coordinates following Eq. (7). This will be done in Section III B.
There are two more diagrams that need to be calculated in order to get the effective action necessary to extract the 3.5PN radiation reaction force. First, there is a contribution from the mass octupole, as shown on the left-hand side of Fig. 2. Second, there is a contribution from the current quadrupole shown on the right-hand side of Fig. 2. These will be discussed in turn. Evaluating the octupole contribution we have
$$iS_{\rm mo} = \frac{1}{2}\frac{i^2}{(6m_{\rm pl})^2}\int dt\, dt'\, I^{ijk}_A(t)\,\big\langle E^A_{ij,k}(t)\, E^B_{lm,n}(t')\big\rangle\, I^{lmn}_B(t'), \tag{18}$$
where
$$E_{ij,k} = \frac{1}{2}\big(h_{00,ijk} + h_{ij,00k} - h_{j0,0ik} - h_{i0,0jk}\big). \tag{19}$$
Following the same procedure that is shown in Appendix A, we can write the integrand as
$$I^{ijk}_A(t)\,\big\langle E^A_{ij,k}(t)\, E^B_{lm,n}(t')\big\rangle\, I^{lmn}_B(t') = I^{ijk}_A(t)\, I_{ijkB}(t')\Big[\frac{5}{2}D^{(3)AB} + 5\sigma D^{(4)AB} + \sigma^2 D^{(5)AB}\Big], \tag{20}$$
where σ is the Synge world function defined in Eq. (A9) and the indices are now contracted with δ ij . The quantities D AB are given in Eq. (A7) and the numerical superscripts on D AB indicate the number of derivatives with respect to σ. Thus the contribution to the effective action is
$$iS_{\rm mo} = -\frac{1}{72 m^2_{\rm pl}}\int dt\, dt'\, I^{ijk}_A(t)\, I_{ijkB}(t')\Big[\frac{5}{2}D^{(3)AB} + 5\sigma D^{(4)AB} + \sigma^2 D^{(5)AB}\Big] \tag{21}$$
$$= \frac{8\pi i G}{9}\int dt\, dt'\, I^{ijk}_-(t)\, I_{ijk+}(t')\Big[\frac{5}{2}D^{(3)}_{\rm ret} + 5\sigma D^{(4)}_{\rm ret} + \sigma^2 D^{(5)}_{\rm ret}\Big]. \tag{22}$$
Changing variables from t ′ to s = t ′ − t, holding t fixed, we find using Eq. (A17)
$$S_{\rm mo} = \frac{8\pi G}{9}\int dt\, I^{ijk}_-(t)\int ds\,\bigg[-\frac{15}{4s^5}\frac{dD_{\rm ret}(s)}{ds} + \frac{15}{4s^4}\frac{d^2 D_{\rm ret}(s)}{ds^2} - \frac{5}{4s^3}\frac{d^3 D_{\rm ret}(s)}{ds^3} + \frac{1}{4s}\frac{d^5 D_{\rm ret}(s)}{ds^5}\bigg] I_{ijk+}(t + s). \tag{23}$$
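The change of variables from $\sigma$ to $s$ in the square brackets can be verified symbolically. The SymPy sketch below (added for illustration) assumes only $\sigma = s^2/2$ and $d/d\sigma = (1/s)\,d/ds$, and confirms that the $\sigma$-derivative combination of Eqs. (20)-(22) equals the $s$-derivative bracket of Eq. (23).

```python
import sympy as sp

s = sp.symbols('s', positive=True)
D = sp.Function('D')(s)
sigma = s**2/2

# d/dsigma = (1/s) d/ds when sigma = s**2/2
dsig = lambda expr: sp.diff(expr, s)/s
D3 = dsig(dsig(dsig(D)))
D4 = dsig(D3)
D5 = dsig(D4)

lhs = sp.Rational(5, 2)*D3 + 5*sigma*D4 + sigma**2*D5     # bracket of Eq. (22)
rhs = (-sp.Rational(15, 4)/s**5*sp.diff(D, s)             # bracket of Eq. (23)
       + sp.Rational(15, 4)/s**4*sp.diff(D, s, 2)
       - sp.Rational(5, 4)/s**3*sp.diff(D, s, 3)
       + sp.Rational(1, 4)/s*sp.diff(D, s, 5))

print(sp.simplify(lhs - rhs))   # -> 0
```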
Integrating by parts to put the derivatives on I ijk+ (t + s) and Laurent expanding to linear order in s, we find
$$S_{\rm mo} = \frac{8\pi G}{9}\int dt\, I^{ijk}_-(t)\int ds\, D_{\rm ret}(s)\bigg[\frac{15}{4s^6}I_{ijk+}(t) + \frac{15}{8s^4}I''_{ijk+}(t) - \frac{5}{32s^2}I^{(4)}_{ijk+}(t) - \frac{1}{64}I^{(6)}_{ijk+}(t) - \frac{s}{42}I^{(7)}_{ijk+}(t)\bigg]. \tag{24}$$
The first four terms are divergent and are removed by regularization as described in Appendix A. Keeping only the finite term (i.e., the last one) and using Eq. (A22), we obtain
$$S_{\rm mo} = -\frac{G}{189}\int dt\, I^{ijk}_-(t)\, I^{(7)}_{ijk+}(t). \tag{25}$$
Since this contribution begins at order 3.5PN we only need the leading order mass octupole moment
$$I^{ijk} = I^{ijk}_0 + O(\epsilon) \qquad\text{with}\qquad I^{ijk}_0 = \sum_K m_K\big[x^i_K x^j_K x^k_K\big]^{\rm STF} = -\mu\,\frac{\delta m}{m}\big[x^i x^j x^k\big]^{\rm STF}, \tag{26}$$
where δm = m 1 − m 2 is the mass difference and STF indicates taking the symmetric, trace-free part of the enclosed expression. Varying Eq. (25) using Eq. (7) gives the mass octupole contribution to the radiation reaction force at order 3.5PN, the result of which will be given in Section III B. The current quadrupole contribution follows similarly. Evaluating the diagram on the right-hand side of Fig. 2 we have
$$iS_{\rm cq} = \frac{1}{2}\Big(\frac{2}{3m_{\rm pl}}\Big)^2\int dt\, dt'\, J^{ij}_A(t)\,\big\langle B^A_{ij}(t)\, B^B_{kl}(t')\big\rangle\, J^{kl}_B(t'), \tag{27}$$
where B ij is the magnetic part of the Weyl tensor,
$$B_{ij} = \epsilon_{i\alpha\beta\lambda}\, C^{\alpha\beta}{}_{j\rho}\, u^\lambda u^\rho = \epsilon_{iab}\, R^{ab}{}_{j0} + O(v), \tag{28}$$
and the current quadrupole is given by
$$J^{ij} = \sum_K\Big[x^i_K L^j_K + \frac{3}{2}x^i_K S^j_K\Big]^{\rm STF}, \tag{29}$$
where L is orbital angular momentum and S is the spin angular momentum. The spin contribution enters at 4PN so we can neglect the second term in the current quadrupole above.
To linear order, the magnetic Weyl tensor is
$$B_{ij} = \epsilon^{ab}{}_{i}\big(h_{a0,bj} + h_{bj,a0}\big). \tag{30}$$
Following the same method as discussed in the Appendix, we can write the action as
$$iS_{\rm cq} = -\frac{2}{3m^2_{\rm pl}}\int dt\, dt'\, J^{ij}_A(t)\,\big[\sigma D''^{AB}(\sigma)\big]'\, J_{ijB}(t') \tag{31}$$
$$= \frac{128\pi i G}{3}\int dt\, dt'\, J^{ij}_-(t)\,\big[\sigma D''_{\rm ret}(\sigma)\big]'\, J_{ij+}(t'). \tag{32}$$
Changing variables from t ′ to s, integrating by parts, and Laurent expanding as before we obtain
$$S_{\rm cq} = -\frac{64\pi G}{3}\int dt\, J^{ij}_-(t)\int ds\, D_{\rm ret}(s)\bigg[-\frac{3}{s^4}J_{ij+}(t) - \frac{1}{2s^2}J^{(2)}_{ij+}(t) + \frac{3}{8}J^{(4)}_{ij+}(t) + \frac{4s}{15}J^{(5)}_{ij+}(t)\bigg]. \tag{33}$$
Again, the first three terms are divergent and are removed by regularization. Therefore, we obtain
$$S_{\rm cq} = -\frac{64G}{45}\int dt\, J^{ij}_-(t)\, J^{(5)}_{ij+}(t). \tag{34}$$
B. Radiation reaction force
We are now ready to vary the action, Eq. (12), to obtain the equations of motion. Since after varying we will set all the "−" variables to zero, we just need to vary with respect to the minus coordinates, using Eq. (7), or
$$F^i_K(t) = \bigg[\frac{\partial L_{\rm eff}}{\partial x_{Ki-}(t)} - \frac{d}{dt}\frac{\partial L_{\rm eff}}{\partial \dot x_{Ki-}(t)}\bigg]_{\substack{\vec x_{K-}=0 \\ \vec x_{K+}=\vec x_K}}, \tag{35}$$
where the effective Lagrangian is given by $S_{\rm eff} = \int dt\, L_{\rm eff}(t)$.
There are two contributions at order 3.5PN from the mass quadrupole piece, as discussed below Eq. (17). The easier one is when the lowest order mass quadrupole has the minus coordinates. In that case, we get
$$a^i_{\rm mq,1} = -\frac{2G}{5}\, x^j\, I^{(5)}_{1,ij}, \tag{36}$$
where x j is the relative coordinate. To get the other mass quadrupole piece from Eq. (17), first substitute the 1PN part of Eq. (14) into the action for the minus coordinates, integrate by parts to remove the explicit derivatives (so that there are no acceleration terms appearing) and then vary with respect to the minus coordinates. This gives
$$a^i_{\rm mq,2} = \frac{G}{105}(1 - 3\eta)\big(17x^i x^j - 11r^2\delta^{ij}\big)x^k I^{(7)}_{jk} + \frac{G}{15}(1 - 3\eta)\big(8x^i x^j v^k - 8(\vec x\cdot\vec v)\, x^k\delta^{ij} + 9v^i x^j x^k\big)I^{(6)}_{jk} + \frac{3G}{5}(1 - 3\eta)\Big(2v^i x^j v^k - \frac{m}{r^3}x^i x^j x^k - v^2 x^k\delta^{ij}\Big)I^{(5)}_{jk} + \frac{G}{5}\frac{m}{r}(1 - 2\eta)\Big(2x^k\delta^{ij} - \frac{1}{r^2}x^i x^j x^k\Big)I^{(5)}_{jk}. \tag{37}$$
The mass octupole contribution is, from Eq. (25),
$$a^i_{\rm mo} = -\frac{G}{63}\,\mu\,\frac{\delta m}{m}\, x^j x^k\, I^{(7)}_{ijk}, \tag{38}$$
while the current quadrupole piece is, from Eq. (34),
$$a^i_{\rm cq} = \frac{16G}{45}\frac{\delta m}{m}\Big[2x^j v^k\epsilon^{ikl}J^{(5)}_{jl} + x^k v^j\epsilon^{ikl}J^{(5)}_{jl} - x^j v^k\epsilon^{kjl}J^{(5)}_{il} + x^j x^k\epsilon^{ikl}J^{(6)}_{jl}\Big]. \tag{39}$$
There is one more place where we get a contribution to the 3.5PN radiation reaction. We must substitute the 2.5PN radiation reaction for the accelerations that appear on the right hand side of the 1PN acceleration in Eq. (10). Doing this gives
$$a^i_{\rm reduced} = \frac{2G}{5}\Big[(3 + \eta)\frac{m}{r}x^k\delta^{ij} + \frac{m}{r^3}\eta\, x^i x^j x^k + \frac{1}{2}(1 - 3\eta)v^2 x^k\delta^{ij} + (1 - 3\eta)v^i x^j v^k\Big]I^{(5)}_{jk}. \tag{40}$$
Combining all these terms, the 3.5PN acceleration is
$$a^i_{3.5} = a^i_{\rm mq,1} + a^i_{\rm mq,2} + a^i_{\rm mo} + a^i_{\rm cq} + a^i_{\rm reduced}$$
$$= -\frac{2G}{5}x^j I^{(5)}_{1,ij} + \frac{G}{105}(1 - 3\eta)\big(17x^i x^j - 11r^2\delta^{ij}\big)x^k I^{(7)}_{jk} + \frac{G}{15}(1 - 3\eta)\big(9v^i x^j + 8v^j x^i - 8(\vec x\cdot\vec v)\delta^{ij}\big)x^k I^{(6)}_{jk} + \frac{G}{5}\Big[8(1 - 3\eta)v^i v^j - (4 - 13\eta)\frac{m}{r^3}x^i x^j\Big]x^k I^{(5)}_{jk} - \frac{2G}{5}\Big[(1 - 3\eta)v^2 - (4 - \eta)\frac{m}{r}\Big]x^k I^{(5)}_{ik} - \frac{G}{63}\frac{\delta m}{m}x^j x^k I^{(7)}_{ijk} + \frac{16G}{45}\frac{\delta m}{m}\Big[2x^j v^k\epsilon^{ikl}J^{(5)}_{jl} + x^k v^j\epsilon^{ikl}J^{(5)}_{jl} - x^j v^k\epsilon^{kjl}J^{(5)}_{il} + x^j x^k\epsilon^{ikl}J^{(6)}_{jl}\Big], \tag{41}$$
where all of the moments are evaluated at lowest order except for $I^{(5)}_{1,ij}$, which is the 1PN contribution. This agrees with Eq. (3.15) in Ref. [31].
IV. CONCLUSION
In this paper we computed the radiation reaction forces at the 3.5PN order for a compact binary inspiral using the effective field theory approach. To accomplish this, we utilized a recently introduced extension of Hamilton's variational principle of stationary action [29] that properly incorporates dissipative effects (e.g., radiation reaction) within a Lagrangian or Hamiltonian formalism.
We find agreement between our 3.5PN radiation reaction forces and those of Iyer and Will [30,31]. We also derive the Burke-Thorne 2.5PN radiation reaction force [27,28], first demonstrated in the EFT approach in [24], in detail in an appendix. The agreement between our results and previous results gives a strong, non-trivial check that the extended action principle formalism for dissipative mechanics (briefly discussed in Section II B, its need emphasized in [24], and given a proper rigorous framework in [29]) is the correct formalism for describing dissipative effects in a dynamical system.
Combined with previous results from the EFT community, our work nearly completes the equations of motion for a spinning compact binary through 3.5PN order and derived entirely using the EFT approach. These previous works include: the 1PN [13,15], 2PN [16], and 3PN [17] non-spinning corrections to the Newtonian potential; the 1.5PN [2] and 2.5PN [18][19][20] spin-orbit corrections; the 2PN [2] and 3PN [2,3,21,22] spin-spin corrections; and the 2.5PN [24] and now 3.5PN [this paper] radiation reaction forces. The only contribution remaining to be computed through 3.5PN is from the 3.5PN spin-orbit correction to the potential.
This work also paves the way for higher order radiation reaction calculations in EFT, including the conservative contribution from the radiative back-scattering of gravitational waves off the total mass of the binary spacetime (i.e., the 4PN tail term [36][37][38][39]) and the 4PN contribution from the spin-orbit coupling of the current quadrupole [39,40].
V. ACKNOWLEDGMENTS
We thank Ira Rothstein for useful discussions. CRG was supported by an appointment to the NASA Postdoctoral Program at the Jet Propulsion Laboratory administered by Oak Ridge Associated Universities through a contract with NASA. AKL was supported in part by the National Science Foundation under Grant No. PHY-0854782. Copyright 2012. All rights reserved.
Appendix A
To calculate the radiation reaction, we begin by using the worldline action in Eq. (2) and "integrate out" the electric and magnetic parts of the Weyl curvature tensor. This is accomplished by calculating the diagram in Fig. (1), where at order 2.5PN the mass quadrupole moment is given in Eq. (13). In the dissipative mechanics formulation [29] of the EFT the contribution to the effective action from this diagram is
$$iS^{\rm 2.5PN}_{\rm eff} = \frac{1}{2}\frac{i^2}{(2m_{\rm pl})^2}\int dt\, dt'\, I^{ij}_A(t)\,\big\langle E^A_{ij}(t)\, E^B_{kl}(t')\big\rangle\, I^{kl}_B(t'). \tag{A1}$$
The Feynman rules that tell how to translate Fig. 1 into Eq. (A1) are given in [24]. Here A and B are indices that label the doubled variables (see Sec. II B), which will be called history indices, and the effective action is independent of the basis chosen for the fields. For our purposes, it is convenient to work with the basis in which the history indices are A, B = ±. The electric part of the Weyl tensor is given in terms of the metric perturbations to linear order by
$$E_{ij} = R_{\alpha i\beta j}\, u^\alpha u^\beta = \frac{1}{2}\big[\partial_i\partial_j h_{00}(t, \vec x) + \partial_0\partial_0 h_{ij}(t, \vec x) - \partial_0\partial_i h_{j0}(t, \vec x) - \partial_0\partial_j h_{i0}(t, \vec x)\big] + O(h^2). \tag{A2}$$
The two-point function in Eq. (A1) is then
$$\big\langle E^A_{ij}(t)\, E^B_{kl}(t')\big\rangle = \frac{1}{4}\big\langle[h^A_{00,ij}(t) + h^A_{ij,00}(t)][h^B_{00,kl}(t') + h^B_{kl,00}(t')]\big\rangle + \frac{1}{4}\big\langle[h^A_{i0,j0}(t) + h^A_{j0,i0}(t)][h^B_{k0,l0}(t') + h^B_{l0,k0}(t')]\big\rangle, \tag{A3}$$
where we have evaluated the radiation fields at the binary's center of mass (taken to be at the origin of our spatial coordinates) so that h αβ (t) ≡ h αβ (t, 0) and have used
$$\big\langle h_{i0}(t)\, h_{kl}(t')\big\rangle \propto P_{i0kl} = 0, \tag{A4}$$
with $P_{\alpha\beta\gamma\delta} = \frac{1}{2}\big(\eta_{\alpha\gamma}\eta_{\beta\delta} + \eta_{\alpha\delta}\eta_{\beta\gamma} - \eta_{\alpha\beta}\eta_{\gamma\delta}\big)$.
In terms of the field propagators, the two-point function is
$$\big\langle E^A_{ij}(t)\, E^B_{kl}(t')\big\rangle = \frac{1}{8}\big(\partial_i\partial_j\partial_{k'}\partial_{l'} + 2P_{ijkl}\,\partial^2_0\partial^2_{0'} - \eta_{ij}\,\partial^2_0\partial_{k'}\partial_{l'} - \eta_{kl}\,\partial_i\partial_j\partial^2_{0'}\big)D^{AB}(t - t', 0) + \frac{1}{8}\big(\eta_{ik}\,\partial_j\partial_0\partial_{l'}\partial_{0'} + \eta_{jl}\,\partial_i\partial_0\partial_{k'}\partial_{0'} + \eta_{il}\,\partial_j\partial_0\partial_{k'}\partial_{0'} + \eta_{jk}\,\partial_i\partial_0\partial_{l'}\partial_{0'}\big)D^{AB}(t - t', 0), \tag{A6}$$
where the prime on the spacetime index of a derivative means that it is a derivative with respect to x ′µ and the matrix of (scalar) propagators in the ± basis is
$$D^{AB} = \begin{pmatrix} 0 & -iD_{\rm adv} \\ -iD_{\rm ret} & 0 \end{pmatrix}. \tag{A7}$$
Since the two-point function is contracted with I ij A and I kl B , which are symmetric and trace-free, a number of these terms will drop, leaving
$$I^{ij}_A(t)\,\big\langle E^A_{ij}(t)\, E^B_{kl}(t')\big\rangle\, I^{kl}_B(t') = \frac{1}{8}I^{ij}_A(t)\big(\partial_i\partial_j\partial_{k'}\partial_{l'} + 2\eta_{ik}\eta_{jl}\,\partial^2_0\partial^2_{0'}\big)D^{AB}(t - t', 0)\, I^{kl}_B(t') + \frac{1}{2}I^{ij}_A(t)\,\eta_{ik}\,\partial_j\partial_0\partial_{l'}\partial_{0'}D^{AB}(t - t', 0)\, I^{kl}_B(t'). \tag{A8}$$
The propagators can be written in terms of Synge's world function
$$\sigma(x^\alpha, x'^\alpha) = \frac{1}{2}\eta_{\alpha\beta}(x^\alpha - x'^\alpha)(x^\beta - x'^\beta) = \frac{1}{2}(t - t')^2 - \frac{1}{2}(\vec x - \vec x')^2. \tag{A9}$$
Therefore, by use of the chain rule, we can simplify Eq. (A8) to
$$I^{ij}_A(t)\,\big\langle E^A_{ij}(t)\, E^B_{kl}(t')\big\rangle\, I^{kl}_B(t') = I^{ij}_A(t)\, I_{ijB}(t')\Big[\frac{3}{2}D''^{AB} + 2(t - t')^2 D^{(3)AB} + \frac{1}{4}(t - t')^4 D^{(4)AB}\Big] \tag{A10}$$
$$= I^{ij}_A(t)\, I_{ijB}(t')\Big[-\frac{1}{2}D''^{AB} + \big(\sigma^2 D''^{AB}\big)''\Big], \tag{A11}$$
where the derivatives on D AB are with respect to σ. Plugging this back into the effective action, Eq. (A1), we have
$$iS^{\rm 2.5PN}_{\rm eff} = -\frac{1}{8m^2_{\rm pl}}\int dt\, dt'\, I^{ij}_A(t)\, I_{ijB}(t')\Big[-\frac{1}{2}D''^{AB} + \big(\sigma^2 D''^{AB}\big)''\Big]. \tag{A12}$$
Summing over the history indices A, B = ± and using Eq. (A7) gives
$$iS^{\rm 2.5PN}_{\rm eff} = \frac{i}{8m^2_{\rm pl}}\int dt\, dt'\,\bigg\{I^{ij}_-(t)\, I_{ij+}(t')\Big[-\frac{1}{2}D''_{\rm ret} + \big(\sigma^2 D''_{\rm ret}\big)''\Big] + I^{ij}_+(t)\, I_{ij-}(t')\Big[-\frac{1}{2}D''_{\rm adv} + \big(\sigma^2 D''_{\rm adv}\big)''\Big]\bigg\} \tag{A13}$$
$$= 8\pi i G\int dt\, dt'\, I^{ij}_-(t)\, I_{ij+}(t')\Big[-\frac{1}{2}D''_{\rm ret}(\sigma) + \big(\sigma^2 D''_{\rm ret}(\sigma)\big)''\Big], \tag{A14}$$
where in the last line we have used $m^{-2}_{\rm pl} = 32\pi G$ and the identity between the retarded and advanced propagators
$$D_{\rm adv}(x, x') = D_{\rm ret}(x', x). \tag{A15}$$
The next step is to write the integral in the effective action in a quasi-local expansion about the point $t' = t$, which is when the retarded propagator has non-vanishing support. To do so, define the time difference as
$$s = t' - t, \tag{A16}$$
so that
$$\sigma = \frac{s^2}{2}. \tag{A17}$$
By changing variables to $t$ and $s$ and by integrating by parts to remove the derivatives on the propagators, we can write the effective action as
$$S^{\rm 2.5PN}_{\rm eff} = 8\pi G\int dt\, I^{ij}_-(t)\int ds\, D_{\rm ret}(s)\bigg[-\frac{3}{4}\frac{d}{ds}\frac{I_{ij+}(t + s)}{s^3} - \frac{3}{4}\frac{d^2}{ds^2}\frac{I_{ij+}(t + s)}{s^2} - \frac{1}{2}\frac{d^3}{ds^3}\frac{I_{ij+}(t + s)}{s} + \frac{1}{4}\frac{d^4 I_{ij+}(t + s)}{ds^4}\bigg]. \tag{A18}$$
The retarded propagator is proportional to
$$D_{\rm ret}(s, 0) \propto \frac{\delta(s)}{s}, \tag{A19}$$
so we Laurent expand the terms in the square brackets up to $O(s)$,
$$S^{\rm 2.5PN}_{\rm eff} = 8\pi G\int dt\, I^{ij}_-(t)\int ds\, D_{\rm ret}(s)\bigg[\frac{3}{s^4}I_{ij+}(t) + \frac{3}{8s^2}I''_{ij+}(t) + \frac{1}{32}I^{(4)}_{ij+}(t) + \frac{s}{10}I^{(5)}_{ij+}(t)\bigg]. \tag{A20}$$
The first three terms are power divergent and are thus zero in dimensional regularization. If we picked a different regularization scheme, these terms would end up renormalizing either the mass or other (possibly gauge-violating) terms in the action (see [41] for further discussion on this point). We thus can remove them and keep only the finite term (the last one),
$$S^{\rm 2.5PN}_{\rm eff} = \frac{4\pi G}{5}\int dt\, I^{ij}_-(t)\, I^{(5)}_{ij+}(t)\int ds\, s\, D_{\rm ret}(s) = \frac{G}{5}\int dt\, I^{ij}_-(t)\, I^{(5)}_{ij+}(t), \tag{A21}$$
where we have used
$$\int ds\, s\, D_{\rm ret}(s) = \frac{1}{4\pi}. \tag{A22}$$
To get the equations of motion once we have the above action, we vary in the usual way with respect to the worldline coordinates for each body. At the end of the calculation we set the history indices to be the same so that all the "−" variables are set to zero. Therefore, to get a non-zero result, we just need to vary with respect to the minus coordinate as in Eq. (7), or
$$F^i_K(t) = \bigg[\frac{\partial L^{\rm 2.5PN}_{\rm eff}}{\partial x_{Ki-}(t)} - \frac{d}{dt}\frac{\partial L^{\rm 2.5PN}_{\rm eff}}{\partial \dot x_{Ki-}(t)}\bigg]_{\substack{\vec x_{K-}=0 \\ \vec x_{K+}=\vec x_K}}, \tag{A23}$$
where the effective Lagrangian is given by $S^{\rm 2.5PN}_{\rm eff} = \int dt\, L^{\rm 2.5PN}_{\rm eff}(t)$. Using the leading PN mass quadrupole moment, Eq. (13), we get
$$F^i_K = \frac{2Gm_K}{5}\, x_{Kj}\,\frac{d^5 I^{ij}(t)}{dt^5}. \tag{A24}$$
We have been using the mostly-minus convention for the metric, so for the spatial components η ij = −δ ij . Switching now to contracting indices with δ ij we have
$$F^i_K = -\frac{2Gm_K}{5}\, x_{Kj}\,\frac{d^5 I^{ij}(t)}{dt^5}. \tag{A25}$$
We thus recover the Burke-Thorne force [27,28] using the EFT [24].
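As an illustrative cross-check (not taken from the paper), one can apply the force (A25) to a circular orbit and recover the standard quadrupole energy flux. The SymPy sketch below uses the center-of-mass reduction $\sum_K m_K x^j_K v^i_K = \mu\, x^j v^i$ and the leading-order quadrupole (13) written in the relative coordinate; together with Kepler's law $\omega^2 = Gm/r^3$, the printed result reproduces the familiar $-\tfrac{32}{5}\,G^4\mu^2 m^3/r^5$.

```python
import sympy as sp

t, G, mu, r, w = sp.symbols('t G mu r omega', positive=True)

x = sp.Matrix([r*sp.cos(w*t), r*sp.sin(w*t), 0])   # relative separation on a circular orbit
v = x.diff(t)

# Leading-order mass quadrupole (13) reduced to the relative coordinate
I = mu*(x*x.T - sp.Rational(1, 3)*x.dot(x)*sp.eye(3))
I5 = I.diff(t, 5)

# Power delivered by the 2.5PN force: P = sum_K F_K . v_K = -(2G/5) mu x_j v_i I5_ij
P = -sp.Rational(2, 5)*G*mu*sum(v[i]*x[j]*I5[i, j]
                                for i in range(3) for j in range(3))
print(sp.simplify(P))   # -> -32*G*mu**2*omega**6*r**4/5
```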
FIG. 1: The Feynman diagram giving the contribution to the radiation reaction from the mass quadrupole. A and B label worldlines that have been doubled in the extended variational principle.
FIG. 2: The Feynman diagram giving the contributions to the radiation reaction from the mass octupole (left) and the current quadrupole (right).
[1] Blanchet L 2006 Living Reviews in Relativity 9, http://www.livingreviews.org/lrr-2006-4
[2] Porto R A 2006 Phys. Rev. D73 104031 (Preprint gr-qc/0511061)
[3] Porto R A and Rothstein I Z 2006 Phys. Rev. Lett. 97 021101 (Preprint gr-qc/0604099)
[4] Porto R A and Rothstein I Z 2008 Phys. Rev. D 78 044013 (Preprint 0804.0260)
[5] Hartung J and Steinhoff J 2011 Annalen Phys. 523 783 (Preprint 1104.3079)
[6] MacDonald I, Nissanke S and Pfeiffer H P 2011 Class. Quant. Grav. 28 134002 (Preprint 1102.5128)
[7] Buonanno A and Damour T 1999 Phys. Rev. D59 084006 (Preprint gr-qc/9811091)
[8] Ajith P et al. 2007 Class. Quant. Grav. 24 S689-S700 (Preprint 0704.3764)
[9] Ajith P et al. 2008 Phys. Rev. D77 104017 (Preprint 0710.2335)
[10] Ajith P, Hannam M, Husa S, Chen Y, Bruegmann B et al. 2011 Phys. Rev. Lett. 106 241101 (Preprint 0909.2867)
[11] Santamaria L, Ohme F, Ajith P, Bruegmann B, Dorband N et al. 2010 Phys. Rev. D82 064016 (Preprint 1005.3306)
[12] Field S E, Galley C R, Herrmann F, Hesthaven J S, Ochsner E et al. 2011 Phys. Rev. Lett. 106 221102 (Preprint 1101.3765)
[13] Goldberger W and Rothstein I 2006 Phys. Rev. D73 104029 (Preprint hep-th/0409156)
[14] Futamase T and Itoh Y 2007 Living Reviews in Relativity 10, http://www.livingreviews.org/lrr-2007-2
[15] Kol B and Smolkin M 2008 Class. Quant. Grav. 25 145011 (Preprint 0712.4116)
[16] Gilmore J B and Ross A 2008 Phys. Rev. D78 124021 (Preprint 0810.1328)
[17] Foffa S and Sturani R 2011 Phys. Rev. D84 044031 (Preprint 1104.1122)
[18] Perrodin D L 2010 (Preprint 1005.0634)
[19] Porto R A 2010 Class. Quant. Grav. 27 205001 (Preprint 1005.5730)
[20] Levi M 2010 Phys. Rev. D82 104004 (Preprint 1006.4139)
[21] Porto R A and Rothstein I Z 2007 (Preprint 0712.2032)
[22] Porto R A and Rothstein I Z 2008 Phys. Rev. D 78 044012 (Preprint 0802.0720)
[23] Levi M 2012 Phys. Rev. D 85 064043 (Preprint 1107.4322)
[24] Galley C R and Tiglio M 2009 Phys. Rev. D79 124027 (Preprint 0903.1122)
[25] Porto R A, Ross A and Rothstein I Z 2011 JCAP 1103 009 (Preprint 1007.1312)
[26] Porto R A, Ross A and Rothstein I Z 2012 (Preprint 1203.2962)
[27] Burke W L and Thorne K S 1970 in Relativity, ed Carmeli M, Fickler S I and Witten L (Plenum, New York) pp 209-228
[28] Burke W L 1971 J. Math. Phys. 12 401
[29] Galley C R, The classical mechanics of dissipative systems (in preparation)
[30] Iyer B R and Will C 1993 Phys. Rev. Lett. 70 113-116
[31] Iyer B R and Will C 1995 Phys. Rev. D52 6882-6893
[32] Goldberger W D and Ross A 2010 Phys. Rev. D81 124015 (Preprint 0912.4254)
[33] Ross A 2012 (Preprint 1202.4750)
[34] Kol B and Smolkin M 2008 Phys. Rev. D77 064033 (Preprint 0712.2822)
[35] Einstein A, Infeld L and Hoffmann B 1938 Ann. Math. 39 65
[36] Blanchet L and Damour T 1988 Phys. Rev. D37(6) 1410-1435
[37] Blanchet L 1993 Phys. Rev. D47(10) 4392-4420
[38] Foffa S and Sturani R 2011 (Preprint 1111.5488)
[39] Galley C R and Leibovich A K (in preparation)
[40] Wang H, Steinhoff J, Zeng J and Schafer G 2011 Phys. Rev. D84 124005 (Preprint 1109.1182)
[41] Galley C R, Leibovich A K and Rothstein I Z 2010 Phys. Rev. Lett. 105 094802 (Preprint 1005.2617)
| [] |
[
"SERIES REVERSION FOR ELECTRICAL IMPEDANCE TOMOGRAPHY WITH MODELING ERRORS",
"SERIES REVERSION FOR ELECTRICAL IMPEDANCE TOMOGRAPHY WITH MODELING ERRORS",
"SERIES REVERSION FOR ELECTRICAL IMPEDANCE TOMOGRAPHY WITH MODELING ERRORS",
"SERIES REVERSION FOR ELECTRICAL IMPEDANCE TOMOGRAPHY WITH MODELING ERRORS"
] | [
"H Garde ",
"N Hyvönen ",
"T Kuutela ",
"H Garde ",
"N Hyvönen ",
"T Kuutela "
] | [] | [] | This work extends the results of Hyvönen, Math. Comp. 91:1925-1953] on series reversion for Calderón's problem to the case of realistic electrode measurements, with both the internal admittivity of the investigated body and the contact admittivity at the electrode-object interfaces treated as unknowns. The forward operator, sending the internal and contact admittivities to the linear electrode current-to-potential map, is first proven to be analytic. A reversion of the corresponding Taylor series yields a family of numerical methods of different orders for solving the inverse problem of electrical impedance tomography, with the possibility to employ different parametrizations for the unknown internal and boundary admittivities. The functionality and convergence of the methods is established only if the employed finite-dimensional parametrization of the unknowns allows the Fréchet derivative of the forward map to be injective, but we also heuristically extend the methods to more general settings by resorting to regularization motivated by Bayesian inversion. The performance of this regularized approach is tested via three-dimensional numerical examples based on simulated data. The effect of modeling errors related to electrode shapes and contact admittances is a focal point of the numerical studies. | 10.1088/1361-6420/acdab8 | [
"https://export.arxiv.org/pdf/2209.05543v3.pdf"
] | 258,823,249 | 2209.05543 | c12d177b3f3824711844f02fe7016b95c1a87466 |
SERIES REVERSION FOR ELECTRICAL IMPEDANCE TOMOGRAPHY WITH MODELING ERRORS
H Garde
N Hyvönen
T Kuutela
SERIES REVERSION FOR ELECTRICAL IMPEDANCE TOMOGRAPHY WITH MODELING ERRORS
electrical impedance tomography, smoothened complete electrode model, series reversion, Bayesian inversion, mismodeling, Levenberg-Marquardt algorithm. AMS subject classifications: 35R30, 35J25, 41A58, 47H14, 65N21
This work extends the results of [Hyvönen, Math. Comp. 91:1925-1953] on series reversion for Calderón's problem to the case of realistic electrode measurements, with both the internal admittivity of the investigated body and the contact admittivity at the electrode-object interfaces treated as unknowns. The forward operator, sending the internal and contact admittivities to the linear electrode current-to-potential map, is first proven to be analytic. A reversion of the corresponding Taylor series yields a family of numerical methods of different orders for solving the inverse problem of electrical impedance tomography, with the possibility to employ different parametrizations for the unknown internal and boundary admittivities. The functionality and convergence of the methods is established only if the employed finite-dimensional parametrization of the unknowns allows the Fréchet derivative of the forward map to be injective, but we also heuristically extend the methods to more general settings by resorting to regularization motivated by Bayesian inversion. The performance of this regularized approach is tested via three-dimensional numerical examples based on simulated data. The effect of modeling errors related to electrode shapes and contact admittances is a focal point of the numerical studies.
1. Introduction. Electrical impedance tomography (EIT) is an imaging modality for deducing information on the admittivity distribution inside a physical body based on current and potential measurements on the object boundary; for general information on the practice and theory of EIT, we refer to the review articles [5,10,33] and the references therein. Real-world EIT measurements are performed with a finite number of contact electrodes, and in addition to the internal admittivity, there are typically also other unknowns in the measurement setup, such as the contacts at the electrode-object interfaces and the precise positions of the electrodes. If one models the measurements with the so-called smoothened complete electrode model (SCEM) [23], a variant of the well-established standard complete electrode model (CEM) [11,32], it is possible to include both the strength and the positions of the electrode contacts as unknowns in the inverse problem of EIT [13].
The aim of this work is to combine the SCEM with the series reversion ideas of [18] to introduce a family of numerical one-step methods with increasing theoretical accuracy for simultaneously reconstructing the internal and electrode admittivities. To be more precise, [18] applied reversion to the Taylor series of a forward map, sending a perturbation in a known interior admittivity to a projected version of the current-to-voltage boundary map, in order to introduce methods of different asymptotic accuracies for reconstructing (only) the interior admittivity perturbation. Both the idealized continuum model (CM) of EIT and the SCEM were considered, but the contact admittivities were assumed to be known for the latter, i.e., they were not treated as additional unknowns in the inversion. Here, we generalize the ideas of [18] to include the contact admittivities as variables in the Taylor expansion and as unknowns in the subsequent series reversion. See [3] for closely related ideas.
Unlike in [18], we perform our analysis for general parametrizations of the internal and contact admittivities. As an example of such a parametrization, one may write the domain conductivity as σ = e κ and treat the log-conductivity κ as the unknown in the series reversion, which is the choice in our numerical studies. This leads to more complicated inversion formulas as the linear dependence of the involved sesquilinear forms on the unknowns is lost, but it may also have a positive effect on the reconstruction accuracy in some settings [24]. As in [18], a fundamental requirement for the theoretical convergence of the introduced family of methods is that the Fréchet derivative of the forward map is injective for the chosen parametrizations of the internal and contact admittivities; for the considered measurements with a finite number of electrodes, the injectivity is actually a sufficient condition for the convergence since it guarantees invertibility of the Fréchet derivative on its range that is necessarily finite-dimensional (cf. [18]). In fact, the only potentially unstable step in any of the introduced numerical methods is the requirement to operate with the inverse of the Fréchet derivative. Furthermore, the asymptotic computational complexity of any of the numerical schemes is the same as that of a straightforward linearization [18].
Our numerical examples concentrate on testing the series reversion approach in a realistic three-dimensional setting where the Fréchet derivative of the forward map is not forced to be (stably) invertible via employing sparse enough discretizations for the admittivities (cf. [1,2,21]). To perform the required inversion of the Fréchet derivative, we resort to certain Tikhonov-type regularization motivated by Bayesian inversion; since the assumption guaranteeing the functionality of the introduced family of methods is not met, there is no reason to expect that the theoretical convergence rates carry over to this regularized framework as such. According to our tests, the introduced second and third order methods demonstrate potential to outperform a single regularized linearization. However, at least with the chosen regularization that is not specifically designed for our recursively defined family of numerical methods, a Levenberg-Marquardt algorithm based on a couple of sequential linearizations leads on average to more accurate reconstructions than the novel methods. Recall, however, that the computational cost of several sequential linearizations is asymptotically higher than that of a single series reversion of any order, and according to our tests, the performance of sequential series reversions is approximately as good as that of a same number of linearizations. In addition to these comparisons, we computationally demonstrate that all considered numerical schemes are capable of coping to a certain extent with mismodeling of the electrode contacts if their strengths and locations are included as unknowns in the inversion in the spirit of the two-dimensional numerical tests of [13]. Other previously introduced methods for handling unknown strengths, shapes or positions of electrode contacts in EIT include, e.g., [6,7,12,14,15,22,29,30,31,35]. This text is organized as follows. Section 2 recalls the SCEM and introduces the Taylor series for its (complete) forward operator, paying particular attention to the complications caused by the SCEM allowing the contact admittivity to vanish on parts of the electrodes. The series reversion for general parametrizations of the internal and boundary admittivities is presented in Section 3. Sections 4 and 5 consider the implementation details and describe our threedimensional numerical experiments based on simulated data, respectively. Finally, the concluding remarks are listed in Section 6.
2. SCEM and the Taylor series for its forward map. This section first recalls the SCEM and then introduces a Taylor series representation for the associated forward operator. For more information on the SCEM and how it can be employed in accounting for uncertainty in the electrode positions when solving the inverse problem of EIT, we refer to [23] and [13], respectively.
2.1. Forward model and its unique solvability. The examined physical body is modeled by a bounded Lipschitz domain $\Omega \subset \mathbb{R}^n$, $n = 2$ or $3$, and its electric properties are characterized by an isotropic admittivity $\sigma: \Omega \to \mathbb{C}$ that belongs to
$$\mathcal{S} := \big\{\sigma \in L^\infty(\Omega) \mid \operatorname{ess\,inf}(\operatorname{Re}(\sigma)) > 0\big\}. \tag{2.1}$$
The measurements are performed by M ∈ N \ {1} electrodes {E m } M m=1 that are identified with the nonempty connected relatively open subsets of ∂Ω that they cover. We assume E m ∩ E l = ∅ for all m = l and denote E := ∪E m . To begin with, the electrode contacts are modeled by a surface admittivity ζ : ∂Ω → C that is assumed to satisfy
$$\mathcal{Z} := \big\{\zeta \in L^\infty(E) \mid \operatorname{Re}(\zeta) \geq 0 \text{ and } \operatorname{Re}(\zeta|_{E_m}) \not\equiv 0 \text{ for all } m = 1, \dots, M\big\}, \tag{2.2}$$
where the conditions on ζ hold almost everywhere on ∂Ω or E m . The sets Z and L ∞ (E) are interpreted as subsets of L ∞ (∂Ω) via zero continuation. A single EIT measurement corresponds to driving the net currents I m ∈ C, m = 1, . . . , M , through the corresponding electrodes and measuring the resulting constant electrode potentials U m ∈ C, m = 1, . . . , M . Obviously, any reasonable current pattern I = [I 1 , . . . , I M ] T belongs to the mean-free subspace
$$\mathbb{C}^M_\diamond := \Big\{J \in \mathbb{C}^M \ \Big|\ \sum_{m=1}^M J_m = 0\Big\}.$$
In the following analysis, the potential vector U = [U 1 , . . . , U M ] T ∈ C M is often identified with the piecewise constant function
$$U = \sum_{m=1}^{M} U_m\,\chi_m, \tag{2.3}$$
where χ m is the characteristic function of the mth electrode E m . According to the SCEM [23], the electromagnetic potential u inside Ω and the electrode potentials U ∈ L ∞ (E) ⊂ L ∞ (∂Ω) weakly satisfy
$$\nabla\cdot(\sigma\nabla u) = 0 \quad\text{in } \Omega, \qquad \nu\cdot\sigma\nabla u = \zeta(U - u) \quad\text{on } \partial\Omega, \qquad \int_{E_m}\nu\cdot\sigma\nabla u\, dS = I_m, \quad m = 1, \dots, M, \tag{2.4}$$
where ν is the exterior unit normal of ∂Ω. Set 1 = [1 . . . 1] T ∈ C M . The variational formulation of (2.4) is to find the unique (u, U ) in the space of equivalence classes
$$\mathcal{H} := \Big\{\{(v + c, V + c\mathbf{1}) \mid c \in \mathbb{C}\}\ \Big|\ (v, V) \in H^1(\Omega)\oplus\mathbb{C}^M\Big\}$$
such that
$$B_{\sigma,\zeta}\big((u, U), (v, V)\big) = I\cdot V \quad\text{for all } (v, V)\in\mathcal{H}, \tag{2.5}$$
with the sesquilinear form B σ,ζ : H × H → C defined by
$$B_{\sigma,\zeta}\big((w, W), (v, V)\big) = \int_\Omega \sigma\,\nabla w\cdot\nabla v\, dx + \int_{\partial\Omega}\zeta\,(W - w)(V - v)\, dS. \tag{2.6}$$
The space H is equipped with the standard quotient norm.
$$\|(v, V)\|_{\mathcal{H}} := \inf_{c\in\mathbb{C}}\big(\|v - c\|^2_{H^1(\Omega)} + |V - c\mathbf{1}|^2\big)^{1/2}, \tag{2.7}$$
where | · | denotes the Euclidean norm.
According to the material in [18,Section 4], for any (σ, ζ) ∈ L ∞ (Ω) × L ∞ (E) it holds
$$\big|B_{\sigma,\zeta}\big((w, W), (v, V)\big)\big| \leq C\big(\|\sigma\|_{L^\infty(\Omega)} + \|\zeta\|_{L^\infty(\partial\Omega)}\big)\,\|(w, W)\|_{\mathcal{H}}\,\|(v, V)\|_{\mathcal{H}}, \tag{2.8}$$
where the constant C > 0 is independent of (w, W ), (v, V ) ∈ H. This settles the continuity of the considered sesquilinear form. To tackle its coercivity in a manner that allows perturbing each contact admittivity in Z to any direction, we define Z to be the largest subset of L ∞ (E) for which the following condition holds for any (σ, ζ) ∈ S × Z :
$$\operatorname{Re} B_{\sigma,\zeta}\big((v, V), (v, V)\big) \geq c\,\|(v, V)\|^2_{\mathcal{H}}, \quad\text{with } c = c(\sigma, \zeta) > 0 \text{ independent of } (v, V)\in\mathcal{H}. \tag{2.9}$$
Moreover, for any such pair $(\sigma, \zeta)$ there exists an open neighborhood $N_{\sigma,\zeta}$ in $L^\infty(\Omega)\times L^\infty(E)$ such that (2.9) holds for all $(\varsigma, \xi)\in N_{\sigma,\zeta}$ with the same constant $c > 0$. If $(\sigma, \zeta)\in\mathcal{S}\times\mathcal{Z}$, then (2.5) has a unique solution that satisfies
$$\|(u, U)\|_{\mathcal{H}} \leq \frac{|I|}{c(\sigma, \zeta)}, \tag{2.10}$$
as can easily be deduced by resorting to (2.8), (2.9) and the Lax-Milgram lemma. The constant $c(\sigma, \zeta)$ in (2.10) is the one appearing in (2.9). In particular, (2.10) allows us to define a nonlinear "parameter to forward solution operator" map $N: \mathcal{S}\times\mathcal{Z} \to \mathcal{L}(\mathbb{C}^M, \mathcal{H})$ via $N(\sigma, \zeta)I = (u, U)$,
where (u, U ) is the solution to (2.5) for (σ, ζ) ∈ S × Z . Moreover, denoting by T ∈ L (H, C M ) the 'trace map' that picks the zero-mean representative for the second component of an element in H, we define the forward map of the SCEM, i.e., Λ : S × Z → L (C M ), through Λ(σ, ζ)I = T N (σ, ζ)I for (σ, ζ) ∈ S × Z and I ∈ C M .
Forward map and its Taylor series.
In this section, we extend the Taylor series presented in [18] to also include the contact admittivity as a variable. This extension is based essentially on the same arguments as the ones utilized in [18]. In addition to the standard parametrization for the internal and boundary admittivities, i.e., treating σ and ζ directly as the variables, we also consider more general (analytic) parametrizations. This is motivated by the fact that real world domain conductivities range from the order of 10 −20 S m −1 (plastics) to 10 8 S m −1 (metals), which means that it may be numerically advantageous to adopt, e.g., the log-admittivity κ = log σ as the to-be-reconstructed variable. However, on the downside, such an approach leads to somewhat more complicated structures for the associated Taylor series and reversion formulas due to the loss of linear dependence of the sesquilinear form (2.6) on the parameters of interest.
Standard parametrization.
A general interior-boundary admittivity pair is denoted by $\tau := (\sigma, \zeta) \in \mathcal{S}\times\mathcal{Z} =: \mathcal{K}^+$. Analogously, the space of total admittivity perturbations is denoted by $\mathcal{K} := L^\infty(\Omega)\times L^\infty(E)$ and equipped with a natural norm, i.e., $\|(\sigma, \zeta)\|_{\mathcal{K}} := \|\sigma\|_{L^\infty(\Omega)} + \|\zeta\|_{L^\infty(E)}$. We mildly abuse the notation by writing $B_\tau$ instead of $B_{\sigma,\zeta}$ for the sesquilinear form introduced in (2.6).
For a fixed τ ∈ K + , we define a linear operator P τ ∈ L (K, L (H)) via P τ (η)(y, Y ) = (w, W ), where (w, W ) ∈ H is the unique solution of
$$B_\tau\big((w, W), (v, V)\big) = -B_\eta\big((y, Y), (v, V)\big) \quad\text{for all } (v, V)\in\mathcal{H}. \tag{2.11}$$
The unique solvability of (2.11) can be deduced by combining (2.8) and (2.9) with the Lax-Milgram lemma, which also directly provides the estimates
$$\|P_\tau(\eta)\|_{\mathcal{L}(\mathcal{H})} \leq \frac{C\,\|\eta\|_{\mathcal{K}}}{c(\tau)}, \qquad \|P_\tau\|_{\mathcal{L}(\mathcal{K}, \mathcal{L}(\mathcal{H}))} \leq \frac{C}{c(\tau)}, \tag{2.12}$$
where C and c(τ ) are the constants appearing in (2.8) and (2.9), respectively. It is well known that P τ (η)(u, U ) = P τ (η)N (τ )I ∈ H, with (u, U ) being the solution to (2.5) for I ∈ C M and τ = (σ, ζ), is the Fréchet derivative of N (τ )I = (u, U ) with respect to τ in the direction η; see, e.g., [13,25]. In order to deduce higher order derivatives for N (τ ), we first write down a differentiability lemma for P τ , playing the same role as [18,Lemmas 3.1 & 5.1] in the case where only the domain admittivity is considered as a variable.
Lemma 2.1. The map $P_\tau \in \mathcal{L}(\mathcal{K}, \mathcal{L}(\mathcal{H}))$ is infinitely many times continuously Fréchet differentiable with respect to $\tau\in\mathcal{K}^+$. Its first derivative at $\tau$ in the direction $\eta$, i.e. $D_\tau P_\tau(\,\cdot\,; \eta) \in \mathcal{L}(\mathcal{K}, \mathcal{L}(\mathcal{H}))$, allows the representation
$$D_\tau P_\tau(\,\cdot\,; \eta) = P_\tau(\eta)\, P_\tau(\,\cdot\,). \tag{2.13}$$
Proof. The result follows through exactly the same line of reasoning as [18, Lemmas 3.1 & 5.1], bearing in mind that for any τ ∈ K + there exists a nonempty open neighborhood N τ ∈ K + such that the coercivity of the sesquilinear form B τ , characterized by (2.9), holds with the same constant everywhere in N τ ; see [13,Lemma 2.4].
Observe that (2.13) immediately provides means to deduce formulas for higher order derivatives for N and P τ via the product rule. However, as in [18,19] for the mere domain admittivity, we can actually do better. To this end, let p k be the collection of index permutations of length k, that is,
$$p_k = \big\{\alpha_1, \dots, \alpha_k \mid \alpha_i \in \{1, \dots, k\},\ \alpha_i \neq \alpha_j \text{ if } i \neq j\big\}.$$
Theorem 2.2. The mapping $N$ is infinitely many times continuously Fréchet differentiable. Its derivatives at $\tau\in\mathcal{K}^+$ are given by
$$D^k N(\tau; \eta_1, \dots, \eta_k) = \sum_{\alpha\in p_k} P_\tau(\eta_{\alpha_1})\cdots P_\tau(\eta_{\alpha_k})\, N(\tau), \qquad k\in\mathbb{N}, \tag{2.14}$$
with $\eta_1, \dots, \eta_k \in \mathcal{K}$. In particular, $N$ is analytic with the Taylor series
$$N(\tau + \eta) = \sum_{k=0}^\infty \frac{1}{k!} D^k N(\tau; \eta, \dots, \eta) = \sum_{k=0}^\infty P_\tau(\eta)^k N(\tau), \tag{2.15}$$
where τ ∈ K + and η ∈ K is small enough so that also τ + η ∈ K + .
Proof. The proof follows via exactly the same arguments as [18, Theorem 3.3 & Theorem 5.2], with B η taking the role of the employed continuous (and coercive for η = τ ) sesquilinear form as per the definition of our operator P τ . Remark 2.3. By the linearity and boundedness of the trace operator T , the results of Theorem 2.2 immediately extend for the forward map Λ = T N of the SCEM. In particular,
$$\Lambda(\tau + \eta) = \sum_{k=0}^\infty \frac{1}{k!} D^k\Lambda(\tau; \eta, \dots, \eta) = T\sum_{k=0}^\infty P_\tau(\eta)^k N(\tau) \tag{2.16}$$
for τ ∈ K + and small enough η ∈ K.
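In a discretized setting, where the sesquilinear form is represented by a system matrix depending linearly on the admittivity pair, (2.15)-(2.16) reduce to an ordinary Neumann series: $P_\tau(\eta)$ becomes $w \mapsto -A(\tau)^{-1}A(\eta)w$ and the partial sums converge to the perturbed solution. The NumPy sketch below is a finite-dimensional illustration with synthetic matrices, not the paper's implementation; all matrix names are hypothetical stand-ins for finite element assemblies.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
M = rng.standard_normal((n, n))
A_tau = M @ M.T + n*np.eye(n)      # SPD stand-in for the background system matrix A(tau)
E = rng.standard_normal((n, n))
A_eta = 0.1*(E + E.T)              # small symmetric stand-in for the perturbation A(eta)
b = rng.standard_normal(n)

exact = np.linalg.solve(A_tau + A_eta, b)          # N(tau + eta) b

# Discrete analogue of P_tau(eta): w -> -A(tau)^{-1} A(eta) w
P = lambda w: -np.linalg.solve(A_tau, A_eta @ w)

series = np.zeros(n)
term = np.linalg.solve(A_tau, b)                   # N(tau) b
for _ in range(30):
    series += term                                 # partial sum of (2.15)
    term = P(term)

print(np.linalg.norm(series - exact))              # ~ machine precision
```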
Remark 2.4. The abstract requirement that $\eta\in\mathcal{K}$ needs to have "small enough" norm for (2.15) and (2.16) to hold could be replaced by a more concrete condition if the real part of the second component in $\tau$, i.e. the contact admittivity, were required to be bounded away from zero almost everywhere on $E$. Namely, it would be sufficient to require that $\eta$ is so small that the real parts of both components of $\tau + \eta$ remain bounded away from zero on $\Omega$ and $E$, respectively. However, if the real part of the contact admittivity is allowed to vanish (or just to tend to zero) on $E$, one must allow the abstract condition on the size of (the second component of) an admissible perturbation $\eta$.
Using (2.12), we get via the same line of reasoning as in [18,Corollary 5.3] the following bounds for the above introduced derivatives,
D k N (τ ) L k (K,L (C M ,H)) ≤ k!C H c(τ ) k+1 , (2.17) D k Λ(τ ) L k (K,L (C M )) ≤ k!C 2 H c(τ ) k+1 , (2.18)
where c(τ ) is the τ -dependent coercivity constant from (2.9) and C H > 0 depends on the geometric setup.
General parametrization.
Let us then consider a general parametrization of the total admittivity, namely a mapping (2.19) τ : ι → τ (ι),
I → K + ,
where I is a nonempty open subset of a Banach space B. We define the parametrized solution and forward maps via
(2.20) N : ι → N (τ (ι)), I → L (C M , H), Λ : ι → Λ(τ (ι)), I → L (C M ),
respectively. Based on the material in the previous section, it is obvious that the regularity of the composite maps ι → N (ι) and ι → Λ(ι) is dictated by the properties of the parametrization ι → τ (ι).

Corollary 2.5. If the parametrization ι → τ (ι) is analytic, then so are the composite maps ι → N (ι) and ι → Λ(ι). In particular, they allow the Taylor series representations

N (ι + η) = Σ_{k=0}^∞ (1/k!) D^k N (ι; η, . . . , η),   Λ(ι + η) = Σ_{k=0}^∞ (1/k!) D^k Λ(ι; η, . . . , η),

for ι ∈ I and small enough η ∈ B.
Proof. The result follows from the chain rule and the basic properties of analytic maps on Banach spaces; see, e.g., [36].
Since we know the derivatives of τ → N (τ ) and τ → Λ(τ ), it would be possible to introduce explicit formulas for the kth derivatives of the parametrized maps ι → N (ι) and ι → Λ(ι) based on the first k derivatives of ι → τ (ι) and thereby to more explicitly characterize the Taylor series in Corollary 2.5; see, e.g., [17,Formula A]. However, as we do not need such general formulas for introducing our numerical schemes, but truncated versions of the aforementioned Taylor series are sufficient for our purposes, we settle with writing down explicit expressions for the first three Fréchet derivatives of ι → N (ι) and ι → Λ(ι) in what follows.
To retain readability of the series reversion formulas presented in Section 3, we use the following shorthand notation for derivatives of ι → τ (ι) in the argument of P τ (ι) ,
P τ (ι) D k τ (ι; η 1 , . . . , η k ) = P τ (k) (ι; η 1 , . . . , η k ), η 1 , . . . , η k ∈ B,
with an even more compact variant when η 1 = · · · = η k ,
P τ (ι) D k τ (ι; η, . . . , η) = P τ (k) (ι; η k ).
The nonlinear dependence on ι ∈ I is often not explicitly marked for the sake of brevity in what follows.
Lemma 2.6. The first three derivatives of ι → N (ι) allow the representations
(2.21)
D N (ι; η_1) = P τ (η_1) N (ι),
D^2 N (ι; η_1, η_2) = [ P τ (η_1) P τ (η_2) + P τ (η_2) P τ (η_1) + P τ (η_1, η_2) ] N (ι),
D^3 N (ι; η_1, η_2, η_3) = [ P τ (η_1) P τ (η_2) P τ (η_3) + P τ (η_1) P τ (η_3) P τ (η_2) + P τ (η_2) P τ (η_1) P τ (η_3)
    + P τ (η_2) P τ (η_3) P τ (η_1) + P τ (η_3) P τ (η_1) P τ (η_2) + P τ (η_3) P τ (η_2) P τ (η_1)
    + P τ (η_1) P τ (η_2, η_3) + P τ (η_2, η_3) P τ (η_1) + P τ (η_2) P τ (η_1, η_3) + P τ (η_1, η_3) P τ (η_2)
    + P τ (η_3) P τ (η_1, η_2) + P τ (η_1, η_2) P τ (η_3) + P τ (η_1, η_2, η_3) ] N (ι).
In particular, if η 1 = . . . = η k = η, the second and third directional derivatives become
(2.22) D^2 N (ι; η^2) = [ 2 P τ (η)^2 + P τ (η^2) ] N (ι),
(2.23) D^3 N (ι; η^3) = [ 6 P τ (η)^3 + 3 P τ (η^2) P τ (η) + 3 P τ (η) P τ (η^2) + P τ (η^3) ] N (ι).
The corresponding derivatives of the parametrized forward map ι → Λ(ι) can be obtained from the above formulas by operating with T from the left.
Proof. The results follow by combining (2.14) with the chain rule for Banach spaces.
Explicit bounds on the operator norms of the derivatives D k N (ι) ∈ L k (B, L (C M , H)) and D k Λ(ι) ∈ L k (B, L (C M )) could be straightforwardly deduced based on the general form for the differentiation formulas in Lemma 2.6, assumed bounds on the derivatives of the parametrization ι → τ (ι) and (2.12); cf. (2.17) and (2.18). However, we content ourselves with simply noting that for any k ∈ N and ω ∈ I, there exists a constant C k (ω) > 0 and an open neighborhood N ω of ω in I such that
(2.24) ‖D^k Λ(ι)‖_{L^k(B, L(C^M))} ≤ C_k(ω) for all ι ∈ N ω
if the mapping ι → τ (ι) is k times continuously differentiable; cf. (2.9) and (2.18).
Series reversion.
In this section we introduce the series reversion procedure for the SCEM with a general admittivity parametrization. The derivation closely follows the ideas in [18] with only minor modifications and extensions. In fact, with the trivial parametrization τ = id and I = K + , the deduced reversion formulas are exactly the same as in [18], although the ones presented in this work also implicitly account for the boundary admittivity. However, as the derivatives of the forward map corresponding to a general admittivity parametrization have terms that do not appear if D k τ = 0 for k ∈ N \ {1} (compare (2.21), (2.22) and (2.23) to [18, eq. (5.2)]), we end up with more terms in the series reversion formulas as well. For this reason, we content ourselves with third order series reversion even though the same approach could be straightforwardly, yet tediously, extended to arbitrarily high orders. What is more, Remark 3.2 comments on a case where the current-feeding and potential-measuring electrodes need not be the same, which is a case not explicitly covered in [18]. Because a translation is an analytic mapping, we may assume without loss of generality that the origin of the Banach space B acts as the initial guess for the to-be-reconstructed parameters and, in particular, that the origin belongs to the open parameter set I ⊂ B; cf. Section 4.1.
Let W ⊂ B be a subspace that defines the admissible perturbation directions in our parametrization for the interior and boundary admittivities. Throughout this section, the target admittivity pair is defined as (σ, ζ) = τ (υ) for a fixed υ ∈ I ∩ W and an analytic parametrization τ : I → K + . Furthermore, define F = D Λ(0; · ) and Y = F (W). We work under the following, arguably rather restrictive, main assumption.
Assumption 3.1. The Fréchet derivative F : B → L(C M ) is injective on W. 1

Since L(C M ) is finite-dimensional, the same must also apply to W by virtue of Assumption 3.1. In consequence, F ∈ L(W, Y) has a bounded inverse F −1 ∈ L(Y, W). Moreover, there obviously exists a bounded projection Q ∈ L(L(C M ), Y) onto Y due to the finite-dimensionality of the involved subspaces.
Let us then get into the actual business. The third order Taylor expansion for Λ around the origin can be arranged into the form
F υ = D Λ(0; υ) = Λ(υ) − Λ(0) − (1/2) D^2 Λ(0; υ^2) − (1/6) D^3 Λ(0; υ^3) − (1/6) ∫_0^1 (1 − s)^3 D^4 Λ(sυ; υ^4) ds,

assuming that υ ∈ W is small enough so that the whole line segment [0, υ] = {ι ∈ B | ι = tυ, t ∈ [0, 1]} lies in I. The remainder term is of order O(‖υ‖_B^4) due to (2.24). Operating with M := F^{-1} Q ∈ L(L(C^M), W) from the left yields

(3.1) υ = F^{-1} Q F υ = M(Λ(υ) − Λ(0)) − (1/2) M D^2 Λ(0; υ^2) − (1/6) M D^3 Λ(0; υ^3) + O(‖υ‖_B^4)
because by assumption υ ∈ W and Q is a projection onto Y.
At this point, it is worthwhile to stop for a moment and consider what we have actually derived.
• The residual term Λ(υ) − Λ(0) in (3.1) only contains values that are known. Namely, Λ(υ) is the available data and Λ(0) corresponds to an initial guess υ = 0.
• Since the second and third derivatives of Λ in (3.1) are of orders O(‖υ‖_B^2) and O(‖υ‖_B^3), respectively, by virtue of (2.24), the expansion (3.1) also provides first and second order approximations for υ.
1 Assumption 3.1 can be satisfied by choosing the number of electrodes M high enough compared to the dimension of a suitably constructed W, if the contact admittances are fixed; see [28,21] for more information. In general, since W must be finite-dimensional for the assumption to hold, one may try to numerically verify if F is injective.
• Since W and L(C M ) are finite-dimensional, the operator M can simply be implemented as a pseudoinverse of F : W → L(C M ) after introducing inner products for the involved spaces. Higher order approximations for υ can naturally be derived by applying the same technique to higher order Taylor expansions of Λ (cf. Corollary 2.5).
Our leading idea is to recursively insert (3.1) into its right-hand side, which results in an explicit formula that returns υ up to an error of order O( υ 4 B ). Let us start by making (3.1) more explicit by plugging in (2.22) and (2.23) at ι = 0 composed with T :
(3.2) υ = M(Λ(υ) − Λ(0)) − (1/2) M T [ 2 P τ (0; υ)^2 + P τ (0; υ^2) ] N (0)
        − (1/6) M T [ 6 P τ (0; υ)^3 + 3 P τ (0; υ^2) P τ (0; υ) + 3 P τ (0; υ) P τ (0; υ^2) + P τ (0; υ^3) ] N (0) + O(‖υ‖_B^4).
Here and in the following, M always operates on the entire composition of operators on its right. Take note that the bounded operators M, T and N (0) do not depend on υ, and thus they do not affect the asymptotic accuracy of the representation. In the following, the dependence on the base point ι = 0 is suppressed for the sake of brevity.
As the first order estimate, we immediately obtain
(3.3) η_1 := M(Λ(υ) − Λ(0)) = υ + O(‖υ‖_B^2),

which corresponds to a linearization of the original reconstruction problem. Replacing υ by η_1 in the second term on the right-hand side of (3.2) and employing boundedness of the involved linear operators, it straightforwardly follows that
(3.4) η_2 := −(1/2) M T [ 2 P τ (η_1)^2 + P τ (η_1^2) ] N = −(1/2) M T [ 2 P τ (υ)^2 + P τ (υ^2) ] N + O(‖υ‖_B^3).
Obviously, η_2 is altogether of order O(‖υ‖_B^2). Moreover, making the substitution (3.4) in (3.2) leads to the conclusion υ = η_1 + η_2 + O(‖υ‖_B^3). In order to derive the third order approximation, we start by reconsidering the second term on the right-hand side of (3.2). Replacing υ this time around with η_1 + η_2 + O(‖υ‖_B^3) leads to

(3.5) −(1/2) M T [ 2 P τ (υ)^2 + P τ (υ^2) ] N
        = −(1/2) M T [ 2 P τ (η_1)^2 + P τ (η_1^2) + 2 P τ (η_1) P τ (η_2) + 2 P τ (η_2) P τ (η_1) + P τ (η_1, η_2) + P τ (η_2, η_1) ] N + O(‖υ‖_B^4)
        = η_2 − M T [ P τ (η_1) P τ (η_2) + P τ (η_2) P τ (η_1) + P τ (η_1, η_2) ] N + O(‖υ‖_B^4),
where all higher order terms have been collected in O(‖υ‖_B^4). Let us then tackle the third term on the right-hand side of (3.2). As it is of third order in υ, inserting the first order estimate υ = η_1 + O(‖υ‖_B^2) is sufficient for our purposes:

(3.6) −(1/6) M T [ 6 P τ (υ)^3 + 3 P τ (υ^2) P τ (υ) + 3 P τ (υ) P τ (υ^2) + P τ (υ^3) ] N
        = −M T [ P τ (η_1)^3 + (1/2) P τ (η_1^2) P τ (η_1) + (1/2) P τ (η_1) P τ (η_1^2) + (1/6) P τ (η_1^3) ] N + O(‖υ‖_B^4).
Combining the observations in (3.5) and (3.6), it is natural to define
(3.7) η_3 = −M T [ P τ (η_1)^3 + (1/2) P τ (η_1^2) P τ (η_1) + (1/2) P τ (η_1) P τ (η_1^2) + (1/6) P τ (η_1^3)
        + P τ (η_1) P τ (η_2) + P τ (η_2) P τ (η_1) + P τ (η_1, η_2) ] N,

which is of order O(‖υ‖_B^3). Summarizing the information in (3.2)-(3.7), we have demonstrated under Assumption 3.1 that

(3.8) ‖υ − Σ_{k=1}^K η_k‖_B ≤ C_K ‖υ‖_B^{K+1},   K = 1, . . . , 3,
where C K > 0 is independent of small enough υ ∈ B. Take note that the definition of the components in the reconstruction is recursive: η 1 depends on the residual Λ(υ)− Λ(0), η 2 depends on η 1 , and η 3 depends on both η 1 and η 2 . Although the Taylor series representation for Λ is available only if τ : I → K + is analytic, it is easy to check via Taylor's theorem that the parametrization actually only needs to be K + 1 times continuously differentiable for the conclusion (3.8) to hold.
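The recursive structure of (3.3), (3.4) and (3.7) is summarized by the following schematic sketch for discretized surrogates of the involved operators; apply_M, T, N0 and the callable P are placeholders of our own choosing and merely indicate how the three correction terms could be assembled in practice.

```python
def third_order_reversion(apply_M, T, N0, P, residual):
    """Assembles eta_1, eta_2 and eta_3 according to (3.3), (3.4) and (3.7).

    apply_M  -- callable implementing the operator M on electrode matrices
    T, N0    -- matrix surrogates of the trace operator and of N(0)
    P        -- callable: P(eta_1, ..., eta_k) returns a surrogate of P_tau^{(k)}(0; eta_1, ..., eta_k)
    residual -- the data residual Lambda(upsilon) - Lambda(0)
    """
    eta1 = apply_M(residual)                                            # (3.3)
    P1, P11 = P(eta1), P(eta1, eta1)
    eta2 = -0.5 * apply_M(T @ (2.0 * P1 @ P1 + P11) @ N0)               # (3.4)
    P2, P12, P111 = P(eta2), P(eta1, eta2), P(eta1, eta1, eta1)
    eta3 = -apply_M(T @ (P1 @ P1 @ P1 + 0.5 * P11 @ P1 + 0.5 * P1 @ P11
                         + P111 / 6.0 + P1 @ P2 + P2 @ P1 + P12) @ N0)  # (3.7)
    return eta1, eta2, eta3
```

The Kth order reconstruction is then simply the sum of the first K returned terms, cf. (3.8).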
Remark 3.2. In the above derivation we tacitly assumed that the complete set of electrodes is available both for feeding currents and measuring voltages. However, this is not the case in, e.g., geophysical electrical resistivity tomography [16], where the injecting and measuring electrodes are separate. To overcome this limitation, let us define
C^M_in = { J ∈ C^M | J_m = 0 if E_m is not a current-feeding electrode } and
C^M_out = { J ∈ C^M | J_m = 0 if E_m is not a potential-measuring electrode }.

The orthogonal projections of C^M onto the subspaces C^M_in and C^M_out are denoted by P_in and P_out, respectively. The above analysis remains valid as such if one redefines F as

F = P_out D Λ(0; · ) P_in

and M : L(C^M) → W via M L = F^{-1} Q P_out L P_in.
With this modification, the projected relative data P out ( Λ(υ) − Λ(0))P in takes the role of the residual Λ(υ) − Λ(0). It is obvious that (a noisy version of ) the former is available when only using the current-feeding and potential-measuring electrodes in their respective roles.
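As a simple illustration of Remark 3.2, the following hedged sketch forms the projected relative data for given index sets of current-feeding and potential-measuring electrodes, assuming that the current-to-voltage maps are represented as M × M matrices in the Cartesian electrode coordinates; the function and argument names are our own.

```python
import numpy as np

def projected_relative_data(Lambda_data, Lambda_0, feeding, measuring):
    """P_out (Lambda(upsilon) - Lambda(0)) P_in of Remark 3.2.

    feeding, measuring -- boolean arrays of length M flagging the current-feeding
                          and potential-measuring electrodes, respectively
    """
    P_in = np.diag(np.asarray(feeding, dtype=float))     # projection onto C^M_in
    P_out = np.diag(np.asarray(measuring, dtype=float))  # projection onto C^M_out
    return P_out @ (Lambda_data - Lambda_0) @ P_in
```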
Numerical implementation.
In this section we expand on the initial two-dimensional numerical setup in [18, Section 7] by considering the more practical framework of the three-dimensional SCEM. In particular, we purposefully include some modeling errors in our setting so that their effect can be studied in the numerical experiments of Section 5, and we also employ such dense discretizations for the internal admittivity that stable inversion of the Fréchet derivative of the forward map is impossible without regularization. In the following, we only consider real-valued admittivities, i.e., conductivities.
The observations about the computational complexity of the CM based numerical implementation in [18,Section 7] remain essentially the same in our setting even though the underlying sesquilinear form is different, the reconstruction of the contact conductivities is included in the algorithm and general parametrizations of the unknowns are considered. As mentioned in the previous section, allowing arbitrary parametrizations for the conductivities introduces a number of extra terms in the reversion formulas, but the recursive computational structure of (3.3), (3.4) and (3.7) is in any case very similar to that of the respective terms F 1 , F 2 and F 3 in [18]. In consequence, the increase in the computational complexity is still a mere constant multiplier independent of the discretizations when the order of the inversion scheme is increased, albeit the constant is now larger due to the derivatives of the composite operator appearing in the reversion formulas for a general parametrization. As with the standard parametrization of the conductivity in [18], we still only have to compute directional derivatives instead of the complete tensor-valued higher order derivatives of the forward operator N .
Parametrization and discretization.
In all our experiments, the target domain Ω is the perturbed unit ball depicted in Figure 5.3. To enable avoiding an inverse crime between the measurement simulation and reconstruction stages, the surface of Ω is triangulated at two resolutions, and the resulting high (H) and low (L) resolution surface triangulations, T H and T L , are independently tetrahedralized into polygonal domains Ω H and Ω L , respectively. In particular, the electrode positions are not accounted for when forming the surface meshes, which causes a certain discrepancy in the electrode geometries between the two discretizations, as explained in more detail below. The denser FE mesh consists of 23 630 nodes and 95 951 tetrahedra, while the corresponding surface mesh has 15 584 nodes and 31 164 triangles. The sparser FE mesh has 4984 nodes and 18 597 tetrahedra, with the corresponding surface mesh composed of 3730 nodes and 7456 triangles.
The tetrahedralizations Ω H and Ω L are clustered into N H = 4000 and N L = 1000 polygonal connected subsets of roundish shape and approximately the same size, respectively, in order to introduce the subdomains for piecewise constant representations of the conductivity. These subdomains are denoted ω * ,i ⊂ Ω * , i = 1, . . . N * , where the subindex * stands for H or L. We represent the domain log-conductivity as
κ_* = µ_κ + Σ_{i=1}^{N_*} κ_i 1_{*,i},   κ = (κ_1, . . . , κ_{N_*}) ∈ R^{N_*},
where 1_{*,i} is the indicator function of ω_{*,i} and µ_κ ∈ R corresponds to information on the expected log-conductivity level in Ω, making κ = 0 a reasonable basis point for the reversion formulas of Section 3. The high and low resolution parametrizations for the domain conductivity are then defined via

(4.1) σ_* : κ → e^{κ_*} = e^{µ_κ} Σ_{i=1}^{N_*} e^{κ_i} 1_{*,i},   R^{N_*} → L^∞_+(Ω_*).
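As an illustration of (4.1), the following sketch evaluates the piecewise constant conductivity on a tetrahedral mesh, assuming that the clustering of the elements into the subdomains ω_{*,i} is stored as an integer array; both the array and the function name are our own illustrative choices rather than part of the actual solver.

```python
import numpy as np

def conductivity_from_kappa(kappa, element_to_subdomain, mu_kappa=-3.0):
    """Elementwise values of sigma_*(kappa) = exp(mu_kappa + kappa_i) on omega_{*,i}, cf. (4.1).

    kappa                -- parameter vector of length N_*
    element_to_subdomain -- integer array assigning each mesh element to its subdomain
    """
    return np.exp(mu_kappa + kappa[element_to_subdomain])
```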
The parameter vector κ ∈ R N * is identified with a shifted piecewise constant log-conductivity via writing κ = Σ_i κ_i 1 * ,i with respect to the appropriate partitioning.

The numerical experiments are performed with M = 24 electrodes. Their midpoints x m , m = 1, . . . , M , are distributed roughly uniformly over ∂Ω; cf. Figure 5.3. Assuming the triangles of T * are closed sets, the electrodes for the two levels of discretization are defined as the open polygonal surface patches

E * ,m = int ⋃ { t ∈ T * | c t ∈ B(x m , 0.3) },   m = 1, . . . , M,

where c t is the centroid of the triangle t and B(x, r) denotes the open ball of radius r centered at x. Because the surface triangulations T H and T L do not match, the corresponding electrodes do not exactly coincide either, i.e., E H,m ≠ E L,m for m = 1, . . . , M. That is, we do not assume perfect prior knowledge on the electrode positions in the experiments where the measurement simulation and reconstruction are performed on different meshes.

The contact conductivities are parametrized either as a function taking constant values on certain subsets of the electrodes and vanishing elsewhere or by a smooth function following the construction in [9]. The former is dubbed the CEM parametrization in reference to the standard approach of modeling the contact resistivity/conductivity as piecewise constant in the standard CEM, whereas the latter option is called the smooth parametrization since it is constructed in the spirit of the SCEM (cf. [23]).
The CEM parametrization is only used for simulating measurement data, not for computing reconstructions. We do not assume there is contact everywhere on the surface patches identified as the electrodes, 2 but the corresponding contact regions are defined as
e * ,m = int ⋃ { t ∈ T * | c t ∈ B(x m , 0.2) } ⊂ E * ,m ,   m = 1, . . . , M,
i.e., by the same formula as the associated electrodes but with a smaller radius. The actual CEM parametrization is
(4.2) ζ_{*,CEM} : θ → e^{µ_ζ} Σ_{m=1}^M e^{θ_m} χ̃_{*,m} / |e_{*,m}|,   R^M → L^∞(∂Ω_*),
where θ m corresponds to the shifted log-conductance on E * ,m , µ ζ ∈ R is the expected log-conductance, and χ̃ * ,m is the indicator function of e * ,m .

With the smooth parametrization, employed for both measurement simulation and reconstruction on both discretizations, we aim to also demonstrate the functionality of the contact-adapting model of [13] in a three-dimensional setup. That is, we do not only include the strength of the contact as a parameter in the smoothened model but also its relative location on the considered electrode. To this end, each electrode E * ,m is projected onto an approximate tangent plane, obtained via a least squares fit, at the corresponding midpoint x m . The coordinate system on the tangent plane is chosen so that the projection of the electrode just fits inside the square [−1, 1] 2 in the local coordinates. There is an obvious freedom in choosing the orientation of the local coordinates, but we do not go into the details of our choice as we do not expect the precise orientation to have any meaningful effect on the numerical results. This construction defines a bijective change of coordinates
φ * ,m : E * ,m → φ * ,m (E * ,m ) ⊂ [−1, 1] 2 , m = 1, . . . , M,
on each electrode and for both levels of discretization, assuming the electrodes are small enough. On [−1, 1] 2 , we define a smooth surface conductivity shape
(4.3) ψ_ξ(y) = exp( a − a / (1 − |y − ξ|^2 / R^2) ) if |y − ξ| < R, and ψ_ξ(y) = 0 otherwise,
where the midpoint ξ is considered an unknown in the reconstruction process, but the other two parameters are assigned fixed values R = 0.6 and a = 4. The former controls the width of the contact, whereas the latter fine-tunes its shape; in particular, the contact region on the parameter square is D(ξ, R), i.e., the disk of radius R centered at ξ. The smooth conductivity parametrization on the mth electrode is defined as
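For concreteness, here is a minimal sketch of the contact shape (4.3) on the parameter square, with the fixed values R = 0.6 and a = 4 mentioned above; the vectorized interface is our own choice and not dictated by the actual implementation.

```python
import numpy as np

def contact_shape(y, xi, R=0.6, a=4.0):
    """Evaluates psi_xi of (4.3): a smooth bump supported on the disk D(xi, R)."""
    y = np.atleast_2d(np.asarray(y, dtype=float))
    r2 = np.sum((y - np.asarray(xi, dtype=float)) ** 2, axis=-1) / R**2
    out = np.zeros(r2.shape)
    inside = r2 < 1.0
    out[inside] = np.exp(a - a / (1.0 - r2[inside]))  # equals 1 at y = xi, decays to 0 at the rim
    return out
```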
(4.4) (ζ_{*,smooth})_m : (ρ, ξ) → e^{ρ+µ_ζ} (ψ_ξ ∘ φ_{*,m}) / ∫_{E_{*,m}} ψ_ξ ∘ φ_{*,m} dS,
where e ρ+µ ζ is the net contact conductance on the electrode, i.e., the integral of the surface conductivity over the electrode, which makes ρ + µ ζ the log-conductance. All the strength and location parameter pairs (ρ, ξ) of individual electrodes are collected into a single parameter vector θ, and the corresponding domain of definition D smooth for the smooth parametrization is defined to be the subset of R 3M defined by the condition that φ −1 * ,m (D(ξ, R)) ⊂ E * ,m for m = 1, . . . , M , that is, the contact regions are required to be subsets of the respective electrodes. To summarize, we have arrived at the complete smooth parametrization:
(4.5) R^{3M} ⊃ D_smooth ∋ θ → ζ_{*,smooth}(θ) ∈ L^∞(∂Ω_*),
where ζ * ,smooth (θ) behaves as indicated by (4.4) on E * ,m , with (ρ, ξ) defined by the appropriate components of θ ∈ D smooth , and vanishes outside the electrodes. The complete parametrization is finally obtained by combining (4.1) with either (4.2) or (4.5) to form a mapping τ from the complete parameter vector ι = (κ, θ) ∈ I ⊂ R N to a pair of domain and boundary conductivities. Depending on the contact parametrization and the level of discretization, the number of degrees of freedom ranges from N = N L + M = 1024 to N = N H +3M = 4072. It is straightforward to check that for any admissible ι in the sense of the above definitions, the resulting image τ (ι) lies in a discretized version of K + . Moreover, the derivatives of the parametrization with respect to ι can be explicitly calculated in a straightforward but tedious manner. The various terms in the series reversion formulas can then be computed by combining these parameter derivatives with the appropriate numerical solutions of the variational problems (2.5) and (2.11), obtained by resorting to a custom FE solver built on top of scikitfem [20]. We employ a mixed FE method with standard piecewise linear elements for the domain potential and certain macro-elements for the electrode potentials; see [34] for a comparable scheme.
Regularization of the Fréchet derivative and Bayesian interpretation.
In the numerical computations, electrode current-to-voltage maps and their directional derivatives are represented as symmetric matrices with respect to a certain orthonormal basis of R M ; see, e.g., [24,Section 4]. In particular, such a representation for (noiseless) data, say, Ψ ∈ R (M −1)×(M −1) is defined by only M (M − 1)/2 = 276 real numbers when M = 24. Taking into account the well-known severe illposedness of the inverse problem of EIT and the number of degrees of freedom in our parametrizations for the domain and boundary conductivities, it is clear that operating with M in the series reversion formulas must be performed in some regularized manner. In what follows, we denote by vec :
R (M −1)×(M −1) → R (M −1) 2 the columnwise vectorization operator.
Let Γ noise ∈ R (M −1) 2 ×(M −1) 2 and Γ pr ∈ R N ×N be symmetric positive definite matrices. We define a regularized version for M appearing in the series reversion formulas via
(4.6) M R : Ψ → η R , R (M −1)×(M −1) → R N ,
where η R is the minimizer of a Tikhonov functional,
(4.7) η_R = argmin_{η ∈ R^N} { ‖vec(D Λ(0; η) − Ψ)‖^2_{Γ_noise^{-1}} + ‖η‖^2_{Γ_pr^{-1}} }.
Here, the standard notation ‖z‖^2_A = z^T A z is used. This choice of regularization is motivated by Bayesian inversion [26]. To be more precise, η R is the maximum a posteriori (MAP) or conditional mean estimate for η in the equation D Λ(0; η) = Ψ if the vectorized data vec(Ψ) is contaminated by additive zero-mean Gaussian noise with the covariance Γ noise and η is a priori normally distributed with a vanishing mean and the covariance Γ pr .
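In a matrix form, the minimizer of (4.7) is given by the standard Tikhonov normal equations. The following sketch computes it for a precomputed Jacobian of the vectorized forward map; the dense linear algebra and the variable names are our own simplifications and do not reflect the actual implementation.

```python
import numpy as np

def regularized_M(Psi, J, Gamma_noise, Gamma_pr):
    """M_R of (4.6): returns the minimizer eta_R of the Tikhonov functional (4.7).

    Psi         -- data matrix of shape (M-1, M-1)
    J           -- Jacobian of eta -> vec(D Lambda(0; eta)), shape ((M-1)**2, N)
    Gamma_noise -- noise covariance, shape ((M-1)**2, (M-1)**2)
    Gamma_pr    -- prior covariance, shape (N, N)
    """
    psi = Psi.flatten(order="F")                                    # columnwise vectorization 'vec'
    A = J.T @ np.linalg.solve(Gamma_noise, J) + np.linalg.inv(Gamma_pr)
    b = J.T @ np.linalg.solve(Gamma_noise, psi)
    return np.linalg.solve(A, b)                                    # eta_R
```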
When computing the first order estimate η 1 , replacing M by M R in (3.3) corresponds to solving the linearized reconstruction problem with the aforementioned assumptions on the additive measurement noise contaminating the data Λ(υ) and prior information on the parameter vector υ. However, due to the nonlinear and recursive nature of the formulas defining the higher order terms η 2 and η 3 in the reconstruction, there is no obvious reason why this same choice of regularization for M would also be well-advised in (3.4) and (3.7) from the Bayesian perspective. Be that as it may, we exclusively employ this same regularization for all occurrences of M in our numerical tests, although we acknowledge that a more careful study on the choice of regularization in connection to the series reversion technique would be in order.
Let us then introduce the prior models employed in (4.7). We resort to a standard Gaussian smoothness prior for the domain log-conductivity, with the covariance matrix defined elementwise as
(4.8) (Γ_κ)_{i,j} = γ_κ^2 exp( −|z_{*,i} − z_{*,j}|^2 / (2 λ_κ^2) ),   i, j = 1, . . . , N_*,
where z * ,i is the center of ω * ,i in the partitioning of Ω * . Moreover, γ 2 κ and λ κ are the pointwise variance and the so-called correlation distance, respectively, for the shifted log-conductivity in (4.1). The shifted electrode log-conductance parameters, appearing in both the CEM parametrization, i.e. θ in (4.2), and the smooth parametrization, i.e. ρ in (4.5), are assigned a diagonal covariance of the form Γ ρ = γ 2 ρ I ∈ R M ×M . The M contact location parameters ξ ∈ [−1, 1] 2 , only included in the smooth parametrization (4.5), are also equipped with a diagonal covariance Γ ξ = γ 2 ξ I ∈ R 2M ×2M . For the CEM parametrization, the total prior covariance is Γ prior = diag(Γ κ , Γ ρ ), whereas that for the smooth parametrization is Γ prior = diag(Γ κ , Γ ρ , Γ ξ ), assuming the log-conductance parameters appear before the location parameters in θ ∈ R 3M of (4.5). According to this covariance structure, the domain log-conductivity values are a priori correlated, but no other prior correlation between different parameters is assumed. Observe that it is natural to assume that the parameter vector has a vanishing mean (cf. (4.7)) as information on the expected values for the domain log-conductivity and the electrode log-conductances can be included as the shifts µ κ and µ ζ in (4.1), (4.2) and (4.5).
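The prior covariance described above can be assembled along the following lines; the block-diagonal composition at the end assumes the smooth contact parametrization and the parameter ordering stated in the text, and the function names are our own.

```python
import numpy as np
from scipy.linalg import block_diag

def domain_prior_covariance(centers, gamma_kappa, lambda_kappa):
    """Gaussian smoothness prior (4.8) for the domain log-conductivity.

    centers -- array of shape (N_*, 3) with the centers z_{*,i} of the subdomains
    """
    diff = centers[:, None, :] - centers[None, :, :]     # pairwise differences z_i - z_j
    dist2 = np.sum(diff ** 2, axis=-1)
    return gamma_kappa**2 * np.exp(-dist2 / (2.0 * lambda_kappa**2))

def total_prior_covariance(centers, gamma_kappa, lambda_kappa, gamma_rho, gamma_xi, M=24):
    """Gamma_pr = diag(Gamma_kappa, Gamma_rho, Gamma_xi) for the smooth contact model."""
    Gamma_kappa = domain_prior_covariance(centers, gamma_kappa, lambda_kappa)
    return block_diag(Gamma_kappa, gamma_rho**2 * np.eye(M), gamma_xi**2 * np.eye(2 * M))
```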
To describe the noise model, the assumed structure of the physical measurements must first be explained. Let e_1, . . . , e_M denote the standard Cartesian basis vectors of R^M. Although the available data is presented in (4.7) with respect to an orthogonal basis I^(1), . . . , I^(M−1) of the mean-free subspace R^M_⋄ ⊂ R^M, the original noisy measurements are simulated by using Î^(m) = e_m − e_{m+1} ∈ R^M, m = 1, . . . , M − 1, as the physical current patterns. The resulting potential vectors in Cartesian coordinates are then contaminated with additive Gaussian noise with a diagonal covariance. To allow a more precise explanation, we first define two auxiliary matrices B = [I^(1) . . . I^(M−1)] ∈ R^{M×(M−1)} and B̂ = [Î^(1) . . . Î^(M−1)] ∈ R^{M×(M−1)}. Take note that both B and B̂ are invertible as mappings from R^{M−1} onto R^M_⋄, which means that their pseudoinverses B^† and B̂^† satisfy B B^†|_{R^M_⋄} = B̂ B̂^†|_{R^M_⋄} = Id and B^† B = B̂^† B̂ = Id. Furthermore, B^† and B̂^† have the span of 1 := [1 . . . 1]^T ∈ R^M as their kernels. The noiseless physical measurements corresponding to a parameter υ ∈ I are defined as U^(m)(υ) = B Λ(υ) B^† Î^(m), m = 1, . . . , M − 1, and the corresponding noisy measurements as

(4.9) V^(m)(υ) = U^(m)(υ) + θ^(m) − (1/M) 1 1^T (U^(m)(υ) + θ^(m)),   m = 1, . . . , M − 1,

where θ^(m) ∈ R^M is a Gaussian random variable with the expected value 0 ∈ R^M and a diagonal covariance Γ^(m) with the diagonal elements

(4.10) Γ^(m)_{i,i} = δ_1^2 max_j |U_j^(m)(0)|^2 + δ_2^2 |U_i^(m)(0)|^2,   δ_1, δ_2 > 0.

In other words, the components of the noise are independent, and each of them is further a sum of two independent Gaussian random variables whose expected values are zero and the standard deviations are essentially proportional to the largest measured electrode potential and the corresponding electrode potential, respectively. The last term on the right-hand side of (4.9) ensures that the measured potential vector has zero mean, which simply corresponds to a normalization of the data recorded by a measurement device. By simultaneously considering all M − 1 equations in (4.9) in a matrix form, one arrives at

(4.11) Υ(υ) := B^† V(υ) B̂^† B = Λ(υ) + B^† Θ̂ B̂^† B,

where V(υ) = [V^(1)(υ) . . . V^(M−1)(υ)] ∈ R^{M×(M−1)} contains the noisy physical measurements and Θ̂ = [θ^(1) . . . θ^(M−1)] ∈ R^{M×(M−1)} all components of the additive noise. Because the elements of Θ := B^† Θ̂ B̂^† B ∈ R^{(M−1)×(M−1)} are linear combinations of independent Gaussian random variables whose expected values are zero, they are themselves Gaussians with vanishing expected values but with a nontrivial covariance structure that defines Γ_noise employed in (4.7).

Remark 4.1. In the following section, we compare the performance of the novel one-step reconstruction methods with sequential linearizations. To be more precise, the latter reconstructions are computed by the iteration

υ̂_{j+1} = υ̂_j + M_R(υ̂_j) (Υ(υ) − Λ(υ̂_j)),   j = 0, 1, 2,   υ̂_0 = 0,
where Υ(υ) is the available noisy data and M_R(υ̂_j) is defined in the same way as the standard M_R but with D Λ(0; η) replaced by D Λ(υ̂_j; η) in (4.7). This corresponds to a Levenberg-Marquardt type algorithm as the update η̂_{j+1} = υ̂_{j+1} − υ̂_j is the minimizer of the Tikhonov functional
‖vec( D Λ(υ̂_j; η) − (Υ(υ) − Λ(υ̂_j)) )‖^2_{Γ_noise^{-1}} + ‖η‖^2_{Γ_pr^{-1}}

with respect to η; cf. (4.7).

Remark 4.2. Our choice for the physical current patterns is motivated by the observation that injecting current patterns between two electrodes is a simpler task technologically than simultaneously maintaining nonzero currents on several electrodes. Although the current is always fed between electrodes with consecutive numbers, this does not actually imply physical proximity of these electrodes. In fact, our numbering for the electrodes does not essentially have any correlation with their positions on the object boundary.
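A schematic sketch of the sequential linearizations of Remark 4.1, with the forward map, its Jacobian and the regularized inversion passed in as callables; the interface is our own and merely mirrors the iteration formula above.

```python
import numpy as np

def sequential_linearizations(Upsilon, forward, jacobian, regularized_M, upsilon0, n_steps=3):
    """Levenberg-Marquardt type iteration of Remark 4.1.

    forward       -- callable: upsilon -> Lambda(upsilon)
    jacobian      -- callable: upsilon -> Jacobian of the vectorized forward map at upsilon
    regularized_M -- callable: (residual matrix, Jacobian) -> Tikhonov-regularized update, cf. (4.7)
    """
    upsilon = np.array(upsilon0, dtype=float)              # initial guess, e.g. the zero vector
    for _ in range(n_steps):
        residual = Upsilon - forward(upsilon)               # Upsilon(upsilon) - Lambda(upsilon_j)
        upsilon = upsilon + regularized_M(residual, jacobian(upsilon))
    return upsilon
```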
Numerical experiments.
In this section we present our numerical experiments. The first experiment statistically compares the performance of higher order series reversion with 1-3 sequential linearizations (cf. Remark 4.1). In the second experiment, we test if the higher orders of convergence verified in the simplistic CM based setup of [18,Section 7] can be detected in our setting with a high-dimensional unknown, modeling errors and noise. To conclude, we show some representative reconstructions for a couple of target conductivities.
We evaluate the performance of the reconstruction methods via two indicators, both of which have absolute and relative versions. For a given target parameter vector υ, the absolute and relative residuals
(5.1) Res(υ_i, υ) = ‖Λ(υ_i) − Υ(υ)‖_F,   Res_rel(υ_i, υ) = ‖Λ(υ_i) − Υ(υ)‖_F / ‖Λ(0) − Υ(υ)‖_F
measure the convergence of the simulated measurements corresponding to the reconstructed parameters toward the measured data. Here, υ i is the reconstructed parameter vector, defined for the series reversion scheme as
υ_i = Σ_{k=1}^i η_k,   i = 1, . . . , 3.
When considering sequential linearizations, one replaces υ i by υ̂ i in (5.1); see Remark 4.1. The measurement Υ(υ) is simulated as described in (4.11). The reference measurement Λ(0) corresponds to the initial guess, i.e., the expected values for the parameters. Take note that (5.1) utilizes the Frobenius norm, not the norm weighted by the inverse noise covariance as in (4.7). The second pair of indicators are the absolute and relative reconstruction errors in the domain log-conductivity:
(5.2) Err(υ_i, υ) = ‖κ_i − κ‖_{L^2_*(Ω)},   Err_rel(υ_i, υ) = ‖κ_i − κ‖_{L^2_*(Ω)} / ‖κ‖_{L^2(Ω_H)},
where κ i and κ are the parts of υ i and υ, respectively, defining the shifted log-conductivities of the reconstruction and the target. The definitions for sequential linearizations are analogous. If the discretizations of the log-conductivity are different in the data simulation and reconstruction steps, we resort to a nearest-neighbor projection between the corresponding meshes before computing the error in the L 2 (Ω H ) norm. This explains the special notation L 2 * (Ω) for the norm appearing in (5.2).
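The indicators (5.1) and (5.2) can be computed directly from the reconstructed quantities. The following sketch assumes that the target and reconstructed log-conductivities live on the same partition, with the subdomain volumes used as quadrature weights for the L^2 norms; this is our simplification of the nearest-neighbor projection step described above.

```python
import numpy as np

def performance_indicators(Lambda_rec, Lambda_0, Upsilon, kappa_rec, kappa_true, volumes):
    """Absolute and relative residuals (5.1) and domain errors (5.2)."""
    res = np.linalg.norm(Lambda_rec - Upsilon)                        # Frobenius norm
    res_rel = res / np.linalg.norm(Lambda_0 - Upsilon)
    err = np.sqrt(np.sum(volumes * (kappa_rec - kappa_true) ** 2))    # piecewise constant L^2 error
    err_rel = err / np.sqrt(np.sum(volumes * kappa_true ** 2))
    return res, res_rel, err, err_rel
```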
We do not attempt to quantitatively compare the reconstructed contact conductivities with the true ones since it is nontrivial to define an indicator for measuring the difference between two electrode conductivities that follow different parametrizations (cf. Remark 5.1 at the end of this section). Moreover, when simulating data with the smooth contact conductivity parametrization, the location parameters ξ in (4.5) are chosen to be zero. This means that the contact region is always approximately at the middle of the respective electrode when data is simulated; the flexibility to reconstruct the locations of the smooth contacts is simply considered as a tool for compensating for the discrepancy between the CEM and smooth parametrizations if the data is simulated by the former and, as always, the reconstruction is formed with the latter.
The free parameters appearing in (4.1), (4.2), (4.5), (4.7) and (4.10) are specified in Table 5.1 for six experimental cases C1-C6. The left-hand side of the table gives the parameter values, discretization level and type of contact parametrization for simulating the data; note that the covariance-defining parameters for the contact strengths and domain log-conductivity are needed for making random draws from the prior. The right-hand side gives the parameters employed when computing the reconstructions; note that the lower discretization level, i.e. Ω L , and the smooth contact parametrization are always used in the reconstruction step. The value of the background log-conductivity µ κ = −3.0 in Table 5.1 approximately corresponds to the conductivity of Finnish tap water (cf., e.g., [23]) if the true physical diameter of our domain of interest were two meters, whereas the expected value of the electrode log-conductances µ ζ = −3.0 corresponds to a net resistance of about 20 Ω at each electrode.

Table 5.1: Specifications of the six experimental cases. The lower level of discretization Ω L and the smooth contact model are used for reconstruction. The parameter values γ ξ = 0.02, µ κ = −3.0 and µ ζ = −3.0 are common for all experiments.

The cases C1-C2 correspond to an inverse crime because the simulation of data and reconstruction are performed using exactly the same discretization. In the other four cases C3-C6, the measurement data is simulated using the denser discretization and the CEM model for the electrode contacts, which introduces a considerable discrepancy in modeling compared to the coarser discretization and the smooth contact model used in the reconstruction step. The noise level and the informativeness of the prior also vary between the six cases; our expectation is that the higher order methods function more stably when the prior is more informative, i.e., the to-be-reconstructed parameters are expected to lie closer to the initial guess υ = 0. In order to achieve reasonable stability for the statistical experiments, the prior used for reconstruction is often slightly more restrictive than the one used for random draws in the simulation step, and the reconstruction methods work under an assumption of a considerably higher noise level than is actually employed in the simulation of data. That is, the regularization in (4.7) is chosen to be stronger than the available prior information would suggest, which is expected to compensate for the numerical and modeling errors not accounted for in the additive noise model in (4.10).
Experiment 1.
In the first experiment, the following steps are carried out for all six cases listed in Table 5.1: 1000 samples of target parameters are drawn from the prior and the corresponding measurements are simulated according to the specifications on the left-hand side of the table. Subsequently, five different reconstructions are computed for each simulated noisy measurement according to the specifications on the right-hand side of the table. The considered reconstruction methods are the regularized first (1), second (2) and third (3) order series reversions (see (3.3), (3.4), (3.7) and (4.7)) as well as the two-(1, 1) and three-step (1, 1, 1) sequential linearizations (see Remark 4.1). 3 In order to exclude potentially diverging cases from the statistical analysis, the 200 random draws of target parameters υ that resulted in the largest absolute residuals Λ(0) − Υ(υ) F are excluded before computing the mean values for the indicators Res rel and Err over the constructed sample of reconstructions. The resulting numbers for the six cases C1-C6 and the considered reconstruction methods are listed in Table 5.2. Moreover, Figure 5.1 visualizes the distributions of Res rel and Err rel over the full set of 1000 samples for the cases C1 and C5.
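The statistical procedure described above amounts to the following schematic loop; the callables and the way the indicators are returned are our own placeholders for the actual implementation.

```python
import numpy as np

def experiment_1(draw_target, simulate_data, methods, initial_residual,
                 n_samples=1000, n_excluded=200):
    """Sample means of the indicators over the 800 'easiest' random draws.

    draw_target      -- callable returning a target parameter vector drawn from the prior
    simulate_data    -- callable: target -> noisy measurement Upsilon
    methods          -- dict of callables: (Upsilon, target) -> (Res_rel, Err)
    initial_residual -- callable: Upsilon -> ||Lambda(0) - Upsilon||_F
    """
    targets = [draw_target() for _ in range(n_samples)]
    data = [simulate_data(t) for t in targets]
    # exclude the draws with the largest absolute residuals of the initial guess
    keep = np.argsort([initial_residual(d) for d in data])[: n_samples - n_excluded]
    return {name: np.mean([method(data[i], targets[i]) for i in keep], axis=0)
            for name, method in methods.items()}
```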
Let us first consider the results for the easier of the two 'inverse crime cases' C1, where the noise level is low and the priors for the domain log-conductivity and the electrode log-conductances are very informative. According to Table 5.2, both Res rel and Err decrease on average when the order of the series reversion method is increased, although the decrease is no longer substantial between the second and third order methods. The two- and three-step linearization schemes result in approximately as low domain error in the log-conductivity as the second and third order series reversions, but in considerably smaller values for the relative residual. There is not much performance difference between two and three sequential linearizations, suggesting that two linearization steps are essentially enough for convergence. These conclusions are confirmed by the left-hand images of Figure 5.1. Take note that case C1 is the only one where the higher order series reversions are able to equally compete with (two) sequential linearizations in light of either of the two performance indicators considered in Table 5.2.

The other 'inverse crime case' C2 considers a higher noise level and considerably less informative priors for the domain log-conductivity and the electrode log-conductances. This time, the mean values of Res rel are much larger for the higher order series reversion methods than for two or three steps of sequential linearizations, with the third order series reversion scheme actually increasing the residual on average compared to its second order counterpart. However, the difference in the performance of the series reversion and sequential linearizations is not as pronounced when measured by the quality of the log-conductivity reconstructions.

Table 5.2: Sample means of the performance indicators Res rel and Err over those 800, of a total of 1000, random draws from the prior that resulted in the smallest absolute residuals. The rows correspond to the different tests specified in Table 5.1. The columns correspond to different reconstruction methods: (1), (2) and (3) are the first, second and third order one-step series reversions, whereas (1, 1) and (1, 1, 1) stand for two and three steps of sequential linearizations (cf. Remark 4.1). The trivial method of resorting to the initial guess υ 0 = 0 is labeled (0) and E υ denotes the sample mean operation with respect to the drawn targets υ.
Case C3 alters C1 to another direction: the prior and noise model remain the same as for C1, but the denser discretization and the CEM parametrization for the contact conductivities are employed when simulating measurement data. This leads to considerable modeling errors. The behavior of Res rel is comparable to case C1 for both the series reversion and sequential linearizations, but the log-conductivity reconstructions are on average much worse, with the performance of the series reversion actually decreasing in this regard as a function of its order. It is worth mentioning that even the observed decrease in relative residuals for the series reversion methods is achieved only by including the contact locations ξ of the smooth parametrization (4.5) as unknowns in the inversion, that is, if ξ = 0 were fixed, Res rel would also increase on average.
Like C3, the remaining cases C4-C6 also consider settings where the data simulation is performed with the denser discretization and the CEM parametrization for the electrode contacts, while the reconstructions are computed using the sparser discretization and the smooth contact model. All three cases consider the less informative prior model for generation of data, and their differences are related to the noise level and whether a more conservative prior is assumed for the reconstruction step compared to how the random draws are performed. The conclusions for all three cases C4-C6 are essentially the same, i.e., the differences in parameter values between these cases do not seem to have any significant effect on the results. According to both indicators of Table 5.2, the performance of the series reversion improves as a function of its order, although the difference between the second and third order is significantly smaller than that between the first and second order when the relative residual is considered. Moreover, two sequential linearizations clearly outperform the higher order series reversion methods with respect to both Res rel and Err. For case C5, these conclusions are seconded by the right-hand images of Figure 5.1.

Although in all tests the higher order series reversion methods are clearly inferior to sequential linearizations, in none of the cases considered in Table 5.2 do the higher order series reversion schemes completely break down. Moreover, one could argue that their relative performance compared to the sequential linearizations remains approximately the same over all six cases C1-C6, with all methods suffering roughly equally from the modeling errors in cases C3-C6. This is maybe a bit surprising since EIT is known to be sensitive to modeling errors [4,8,27], and it would be intuitive to expect the higher order series reversions to further highlight this sensitivity due to their recursive nature. However, this observed robustness is probably partially due to the exclusion of the 200 'most difficult' parameter samples from the mean values listed in Table 5.2. Indeed, the higher order series reversion methods have a tendency to occasionally 'explode' as illustrated by the right ends of some yellow and red curves in Figure 5.1, indicating that in some cases a considerable proportion of the reconstructions fail to improve compared to the initial guess. This phenomenon can be considered to be analogous to the behavior of higher order polynomial interpolation or extrapolation for noisy data. In particular, this might suggest a decreased radius of convergence in the higher order methods compared to the first order method, but in this study we did not push this question further.
Remark 5.1. According to our experiments, the (shifted) log-conductance parameters in (4.2) and (4.5) do not seem to have the same effect on the electrode measurements. That is, even though plugging in the same value for µ ζ + θ m in (4.2) and µ ζ + ρ in (4.5) gives the same net integral of the surface conductivity over the considered electrode, it seems that the actual effect of the contact on the potential at the electrode is considerably different. This phenomenon emphasizes the modeling errors in cases C3-C6 as it makes the initial guess υ = 0 nonoptimal when data is simulated by resorting to the CEM parametrization but reconstructions are formed with the smooth parametrization. See [23, Section 4.3] for related observations.

Remark 5.2. Because the asymptotic computational complexity of any of our series reversion methods is the same as that of a single regularized linearization, it would be natural to also compare sequential linearizations with sequential series reversions defined in the obvious manner: one first computes a reconstruction by series reversion, say, υ i , then utilizes it as the basis point for the next series reversion, and so on. The reason for not considering, e.g., a method labeled as (2, 2) in our numerical tests is that for all considered test cases C1-C6, sequential series reversions perform statistically approximately as well as the corresponding number of sequential linearizations. That is, the performance of, e.g., the method (2, 2) is about as good as that of (1, 1). In fairness, it should also be mentioned that with our Python implementation, the considered FE discretizations, and the number of electrodes and unknown parameters, it is not yet apparent that the computational costs of two series reversions of different orders are essentially the same, even though they share the same asymptotic complexity. One would probably need to consider even denser FE discretizations to observe this phenomenon. In other words, building and Cholesky-factorizing the system matrix corresponding to the sesquilinear form (2.6) plus forming and inverting the Fréchet derivative do not clearly dominate the other steps required by a series reversion scheme in our tests (cf. [18]).
Experiment 2.
The second experiment mimics the simplistic convergence test of [18, Section 7.2] in our three-dimensional setting with a high-dimensional unknown and modeling errors. The following computations are performed for cases C1 and C5. A single target parameter vector, say, υ̃, is drawn from the prior distribution defined on the left-hand side of the appropriate row in Table 5.1, and the actual target is then formed via scaling as υ = s υ̃ for some s ∈ [0.2, 10]. The reconstructions by the considered five methods are computed according to the specifications of the considered case in Table 5.1, but with the standard deviations of the domain log-conductivity and the contact log-conductance, γ κ and γ ρ , scaled by the utilized s. 4 This procedure is repeated for multiple values of s ∈ [0.2, 10], and the absolute indicators Res and Err are computed for each of them. Note that the considered scaling range covers multiple orders of magnitude in terms of the domain conductivity and the contact conductances due to the employed logarithmic parametrizations.

Figure 5.2 illustrates the performance indicators Res and Err as functions of the scaling parameter s for both cases C1 and C5. In all subimages of Figure 5.2, the range of s is restricted to a subinterval of [0.2, 10] to exclude uninteresting settings where the reconstruction methods essentially fail. Comparing the results to [18, Example 7.2], we immediately observe a significant (relative) performance degradation with the higher order series reversions. The worse performance probably results from our experiment setup being significantly more complicated than in the earlier article.

Let us first digest the 'inverse crime case' C1. The top left image in Figure 5.2 indicates that the first order series reversion, or a single regularized linearization, demonstrates approximately quadratic convergence in Res up to a scaling factor of the order s = 0.5, below which the additive noise, which is proportional to the absolute measurement data (cf. (4.10)), presumably starts to dominate and prohibits further convergence. The other reconstruction methods exhibit higher orders of convergence in Res, but understandably they are also unable to converge beneath the noise level that is essentially independent of s. On the other hand, the bottom left image in Figure 5.2 shows that the higher order series reversion methods and two- and three-step linearizations clearly demonstrate initial higher orders of convergence in the domain log-conductivity reconstruction error Err, but all of them eventually settle with the same low order of convergence as a single regularized linearization.
The right-hand images in Figure 5.2 demonstrate the effect of the modeling errors induced by simulating the measurement data by the CEM parametrization for the contact conductivity but computing the reconstructions by resorting to the smooth contact model in case C5. Let us first consider the visualization of the absolute residuals Res on the top right in Figure 5.2. Due to the discrepancy between the two contact models considered in Remark 5.1, even the initial guess υ = 0 does not produce an accurate estimate for the measured data when s, and thus also the target υ = s υ̃, converges to zero. The higher order series reversions as well as the two- and, especially, three-step linearizations do a much better job in reducing the residual when s decreases, but a single linearization is not flexible enough to overcome the modeling error between the CEM and smooth models for the contact conductivity. All considered reconstruction methods dominate the initial guess υ = 0 when measured in the absolute domain log-conductivity error Err up to the level of about s = 0.5, below which the modeling errors seem to prohibit further convergence. Naturally, the initial guess is not affected by such modeling errors, but it steadily converges toward the target s υ̃ as s goes to zero. It is difficult to draw any reliable conclusions on the relative performance of the considered five reconstruction methods based on the right-hand images of Figure 5.2; the only thing that seems rather certain is that all of them become unreliable for scaling factors that are considerably larger than s = 1 in case C5.

Example reconstructions.

Figure 5.3 shows two example target log-conductivities as well as associated reconstructions by the series reversion methods. The reconstructions in Subfigure 5.3a correspond to case C5, that is, the target parameter vector υ is drawn and the reconstructions formed according to the specifications of case C5 in Table 5.1. The reconstructions in Subfigure 5.3b are also computed according to the specifications of case C5, but they correspond to a piecewise constant log-conductivity target consisting of three regions where the shifted log-conductivity in (4.1) differs from the expected value of κ = 0. The other parameters for this piecewise constant example are chosen as in case C5. The central horizontal cross-sections of all panels in Figure 5.3 are shown as two-dimensional images in Figure 5.4. An immediate, and perhaps disappointing, observation is that it is difficult to detect a considerable difference between the first order υ 1 = η 1 and the higher order, i.e. υ 2 = η 1 + η 2 and υ 3 = η 1 + η 2 + η 3 , reconstructions. Although the values of the standard performance indicators Res rel and Err listed in Table 5.3 do in fact demonstrate a slight improvement for both examples and error indicators as functions of the reversion order, the differences in the quality of the domain log-conductivity reconstructions are simply too small to be visually significant.

Table 5.3: The values of the performance indicators Res rel and Err for the series reversion reconstructions in Figure 5.3. The numbering of the methods is as in Table 5.2; cf. (3.3), (3.4) and (3.7).
Comparing the first, second and third order components of the reconstructions, i.e. η 1 , η 2 and η 3 , a damped oscillatory behavior is observed in their values. This is especially evident between η 2 and η 3 , for which the positive and negative areas seem to be essentially swapped. According to our experience, this same oscillatory phenomenon is often encountered with most target domains, not just the ones depicted in Figure 5.3.
6. Concluding remarks. The goal of this paper was to introduce the series reversion methods of [18] as reconstruction schemes for three-dimensional electrode-based EIT under (considerable) modeling and numerical errors that are unavoidable in many real-life settings. It was demonstrated that the series reversion methods can indeed be robustly implemented in a practical setting, and their performance is arguably better than that of a single regularized linearization. However, a reconstruction method based on sequential linearizations, namely a Levenberg-Marquardt scheme, performed on average better than the higher order one-step series reversion methods in our statistical tests. Applying series reversions in a sequential manner resulted in approximately as good performance as the same number of sequential linearizations; see Remark 5.2. According to our theoretical results as well as the numerical tests in [18], the series reversion methods demonstrate higher order convergence rates if the discretization of the unknowns is sparse enough to allow the Fréchet derivative of the forward map to be injective (and stably invertible). This condition was not met in our numerical studies, but the Fréchet derivative was inverted by resorting to a certain kind of Tikhonov regularization that was motivated by Bayesian inversion. An interesting topic for future studies is to consider how regularization should be applied to the Fréchet derivative of the forward map in order to retain (some) good qualities of the series reversion methods in settings where the injectivity of the Fréchet derivative cannot be guaranteed by considering a low enough number of unknown parameters.
Fig. 5.1: Distributions of the relative residual of the forward problem Res rel and the relative domain conductivity error of the reconstructions Err rel over a sample of 1000 target parameter vectors drawn from the prior distributions in cases C1 and C5. The reconstructions were computed for regularized first (1), second (2) and third (3) order series reversions, and for two- (1, 1) and three-step (1, 1, 1) sequential linearizations, with reconstruction and regularization parameters as defined for cases C1 and C5. For each method, the values of Res rel and Err rel are separately sorted in ascending order on the horizontal axis.
Fig. 5.2: Absolute residual Res (top) and domain error Err (bottom) as functions of a scaling factor s > 0 for a single random draw from the prior in cases C1 (left) and C5 (right). Both the random draw and the corresponding (pointwise) standard deviations γ κ and γ ρ are scaled by the considered s when computing the reconstructions, but the scaling factor does not affect the noise level that depends on the absolute measurements as indicated by (4.10).
Fig. 5.3: Two target log-conductivities and corresponding reconstructions by the series reversion methods. One vertical and three horizontal slices of the (shifted) target log-conductivities are depicted in the panels labeled by υ. The same panels also show the forward-facing electrodes as translucent yellow surface patches and the corresponding reconstructed contact regions as green patches. The other panels visualize the first, second and third order series reversion reconstructions of the log-conductivities as well as their individual components in the order indicated by the labeling. Top: An example target log-conductivity and the corresponding reconstructions in case C5 of Table 5.1. Bottom: A piecewise constant target log-conductivity with four partially overlapping, roughly rectangular, subdomains. The other parameters for the data simulation and reconstruction steps were chosen according to case C5.
Fig. 5.4: The central horizontal cross-sections of the twelve panels of Figure 5.3 presented as two-dimensional images. The ordering of the images is the same as in Figure 5.3.
0.4
0.8
1.6
3.2
6.4
10 0
10 1
10 2
C1
Residual
0.2
0.4
0.8
1.6
10 0
10 1
10 2
10 3
C5
0.2
0.4
0.8
1.6
3.2
6.4
0.05
0.2
0.8
3.2
C1
s 4
s 3
s 2
Domain error
0.2
0.4
0.8
1.6
1
2
4
8
C5
(1)
(2)
(3)
(1, 1)
(1, 1, 1)
Initial guess
In the terminology of[13], E * ,m could be called an extended electrode.
The first order series reversion and one-step sequential linearization are in fact the same method.
Recall that the noise level depends on the size of the absolute data (cf. (4.10)), so it does not go to zero as a function of s.
Acknowledgments. We are grateful to Tom Gustafsson for guidance in the use of scikit-fem. The computations for statistical experiments were performed using computer resources within the Aalto University School of Science "Science-IT" project.
[1] G. S. Alberti and M. Santacesaria, Calderón's inverse problem with a finite number of measurements, Forum Math. Sigma, 7 (2019), p. e35.
[2] G. S. Alberti and M. Santacesaria, Calderón's inverse problem with a finite number of measurements II: independent data, Appl. Anal., 101 (2022), pp. 3636-3654.
[3] S. Arridge, S. Moskow, and J. C. Schotland, Inverse Born series for the Calderón problem, Inverse Problems, 28 (2012), p. 035003.
[4] D. C. Barber and B. H. Brown, Errors in reconstruction of resistivity images using a linear reconstruction technique, Clin. Phys. Physiol. Meas., 9 (1988), pp. 101-104.
[5] L. Borcea, Electrical impedance tomography, Inverse Problems, 18 (2002), pp. R99-R136.
[6] G. Boverman, D. Isaacson, J. C. Newell, G. J. Saulnier, T.-J. Kao, B. C. Amm, X. Wang, D. M. Davenport, D. H. Chong, R. Sahni, and J. M. Ashe, Efficient simultaneous reconstruction of time-varying images and electrode contact impedances in electrical impedance tomography, IEEE Trans. Biomed. Eng., 64 (2017), pp. 795-806.
[7] G. Boverman, D. Isaacson, G. J. Saulnier, and J. C. Newell, Methods for compensating for variable electrode contact in EIT, IEEE Trans. Biomed. Eng., 56 (2009), pp. 2762-2772.
[8] W. Breckon and M. Pidcock, Data errors and reconstruction algorithms in electrical impedance tomography, Clin. Phys. Physiol. Meas., 9 (1988), pp. 105-109.
[9] V. Candiani, A. Hannukainen, and N. Hyvönen, Computational framework for applying electrical impedance tomography to head imaging, SIAM J. Sci. Comput., 41 (2019), pp. B1034-B1060.
[10] M. Cheney, D. Isaacson, and J. Newell, Electrical impedance tomography, SIAM Rev., 41 (1999), pp. 85-101.
[11] K.-S. Cheng, D. Isaacson, J. S. Newell, and D. G. Gisser, Electrode models for electric current computed tomography, IEEE Trans. Biomed. Eng., 36 (1989), pp. 918-924.
[12] J. Dardé, H. Hakula, N. Hyvönen, and S. Staboulis, Fine-tuning electrode information in electrical impedance tomography, Inverse Probl. Imag., 6 (2012), pp. 399-421.
[13] J. Dardé, N. Hyvönen, T. Kuutela, and T. Valkonen, Contact adapting electrode model for electrical impedance tomography, SIAM J. Appl. Math., 82 (2022), pp. 427-449.
[14] E. Demidenko, An analytic solution to the homogeneous EIT problem on the 2D disk and its application to estimation of electrode contact impedances, Physiol. Meas., 32 (2011), pp. 1453-1471.
[15] E. Demidenko, A. Borsic, Y. Wan, R. J. Halter, and A. Hartov, Statistical estimation of EIT electrode contact impedance using a magic Toeplitz matrix, IEEE Trans. Biomed. Eng., 58 (2011), pp. 2194-2201.
[16] J. D. Ducut, M. Alipio, P. J. Go, R. Concepcion II, R. R. Vicerra, A. Bandala, and E. Dadios, A review of electrical resistivity tomography applications in underground imaging and object detection, Displays, 73 (2022), p. 102208.
[17] L. E. Fraenkel, Formulae for high derivatives of composite functions, Math. Proc. Cambridge Philos. Soc., 83 (1978), pp. 159-165.
[18] H. Garde and N. Hyvönen, Series reversion in Calderón's problem, Math. Comp., 91 (2022), pp. 1925-1953.
[19] H. Garde, N. Hyvönen, and T. Kuutela, On regularity of the logarithmic forward map of electrical impedance tomography, SIAM J. Math. Anal., 52 (2020), pp. 197-220.
[20] T. Gustafsson and G. D. McBain, scikit-fem: A Python package for finite element assembly, Journal of Open Source Software, 5 (2020), p. 2369.
[21] B. Harrach, Uniqueness and Lipschitz stability in electrical impedance tomography with finitely many electrodes, Inverse Problems, 35 (2019), 024005.
[22] N. Hyvönen, V. Kaarnioja, L. Mustonen, and S. Staboulis, Polynomial collocation for handling an inaccurately known measurement configuration in electrical impedance tomography, SIAM J. Appl. Math., 77 (2017), pp. 202-223.
[23] N. Hyvönen and L. Mustonen, Smoothened complete electrode model, SIAM J. Appl. Math., 77 (2017), pp. 2250-2271.
[24] N. Hyvönen and L. Mustonen, Generalized linearization techniques in electrical impedance tomography, Numer. Math., 140 (2018), pp. 95-120.
[25] J. P. Kaipio, V. Kolehmainen, E. Somersalo, and M. Vauhkonen, Statistical inversion and Monte Carlo sampling methods in electrical impedance tomography, Inverse Problems, 16 (2000), pp. 1487-1522.
[26] J. P. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems, Springer-Verlag, 2004.
[27] V. Kolehmainen, M. Vauhkonen, P. A. Karjalainen, and J. P. Kaipio, Assessment of errors in static electrical impedance tomography with adjacent and trigonometric current patterns, Physiol. Meas., 18 (1997), pp. 289-303.
[28] A. Lechleiter and R. Rieder, Newton regularizations for impedance tomography: convergence by local injectivity, Inverse Problems, 24 (2008), p. 065009.
[29] A. Nissinen, L. M. Heikkinen, V. Kolehmainen, and J. P. Kaipio, Compensation of errors due to discretization, domain truncation and unknown contact impedances in electrical impedance tomography, Meas. Sci. Technol., 20 (2009), p. 105504.
[30] A. Nissinen, V. Kolehmainen, and J. P. Kaipio, Compensation of modelling errors due to unknown domain boundary in electrical impedance tomography, IEEE Trans. Med. Imag., 30 (2011), pp. 231-242.
[31] M. Soleimani, C. Gómez-Laberge, and A. Adler, Imaging of conductivity changes and electrode movement in EIT, Physiol. Meas., 27 (2006), pp. S103-S113.
[32] E. Somersalo, M. Cheney, and D. Isaacson, Existence and uniqueness for electrode models for electric current computed tomography, SIAM J. Appl. Math., 52 (1992), pp. 1023-1040.
[33] G. Uhlmann, Electrical impedance tomography and Calderón's problem, Inverse Problems, 25 (2009), p. 123011.
[34] M. Vauhkonen, Electrical impedance tomography with prior information, Kuopio University Publications C (Dissertation), vol. 62, 1997.
[35] T. Vilhunen, J. P. Kaipio, P. J. Vauhkonen, T. Savolainen, and M. Vauhkonen, Simultaneous reconstruction of electrode contact impedances and internal electrical properties: I. Theory, Meas. Sci. Technol., 13 (2002), pp. 1848-1854.
[36] E. F. Whittlesey, Analytic functions in Banach spaces, Proc. Amer. Math. Soc., 16 (1965), pp. 1077-1083.
| [] |
[
"Density biases and temperature relations for DESIRED HII regions",
"Density biases and temperature relations for DESIRED HII regions"
] | [
"J E Méndez-Delgado \nZentrum für Astronomie\nAstronomisches Rechen-Institut\nUniversität Heidelberg\nMönchhofstraße 12-14D-69120HeidelbergGermany\n",
"★ ",
"C Esteban \nInstituto de Astrofísica de Canarias (IAC)\nE-38205La LagunaSpain\n\nDepartamento de Astrofísica\nUniversidad de La Laguna\nE-38206La LagunaSpain\n",
"J García-Rojas \nInstituto de Astrofísica de Canarias (IAC)\nE-38205La LagunaSpain\n\nDepartamento de Astrofísica\nUniversidad de La Laguna\nE-38206La LagunaSpain\n",
"K Z Arellano-Córdova \nDepartment of Astronomy\nThe University of Texas at Austin\nStop C14002515, 78712Speedway, AustinTXUSA\n",
"K Kreckel \nZentrum für Astronomie\nAstronomisches Rechen-Institut\nUniversität Heidelberg\nMönchhofstraße 12-14D-69120HeidelbergGermany\n",
"V Gómez-Llanos \nInstituto de Astrofísica de Canarias (IAC)\nE-38205La LagunaSpain\n\nDepartamento de Astrofísica\nUniversidad de La Laguna\nE-38206La LagunaSpain\n",
"O V Egorov \nZentrum für Astronomie\nAstronomisches Rechen-Institut\nUniversität Heidelberg\nMönchhofstraße 12-14D-69120HeidelbergGermany\n",
"M Peimbert \nInstituto de Astronomía\nUniversidad Nacional Autónoma de México\nApdo. Postal 70-264 Ciudad UniversitariaMéxico\n",
"M Orte-García \nDepartamento de Astrofísica\nUniversidad de La Laguna\nE-38206La LagunaSpain\n"
] | [
"Zentrum für Astronomie\nAstronomisches Rechen-Institut\nUniversität Heidelberg\nMönchhofstraße 12-14D-69120HeidelbergGermany",
"Instituto de Astrofísica de Canarias (IAC)\nE-38205La LagunaSpain",
"Departamento de Astrofísica\nUniversidad de La Laguna\nE-38206La LagunaSpain",
"Instituto de Astrofísica de Canarias (IAC)\nE-38205La LagunaSpain",
"Departamento de Astrofísica\nUniversidad de La Laguna\nE-38206La LagunaSpain",
"Department of Astronomy\nThe University of Texas at Austin\nStop C14002515, 78712Speedway, AustinTXUSA",
"Zentrum für Astronomie\nAstronomisches Rechen-Institut\nUniversität Heidelberg\nMönchhofstraße 12-14D-69120HeidelbergGermany",
"Instituto de Astrofísica de Canarias (IAC)\nE-38205La LagunaSpain",
"Departamento de Astrofísica\nUniversidad de La Laguna\nE-38206La LagunaSpain",
"Zentrum für Astronomie\nAstronomisches Rechen-Institut\nUniversität Heidelberg\nMönchhofstraße 12-14D-69120HeidelbergGermany",
"Instituto de Astronomía\nUniversidad Nacional Autónoma de México\nApdo. Postal 70-264 Ciudad UniversitariaMéxico",
"Departamento de Astrofísica\nUniversidad de La Laguna\nE-38206La LagunaSpain"
] | [
"MNRAS"
] | We present a first study based on the analysis of the DEep Spectra of Ionized REgions Database (DESIRED). This is a compilation of 190 high signal-to-noise ratio optical spectra of H II regions and other photoionized nebulae, mostly observed with 8-10m telescopes and containing ∼29380 emission lines. We find that the electron density n_e of the objects is underestimated when [S II] λ6731/λ6716 and/or [O II] λ3726/λ3729 are the only density indicators available. This is produced by the non-linear density dependence of the indicators in the presence of density inhomogeneities. The average underestimate is ∼300 cm−3 in extragalactic H II regions, introducing systematic overestimates of T_e([O II]) and T_e([S II]) compared to T_e([N II]). The high sensitivity of [O II] λλ7319+20+30+31/λλ3726+29 and [S II] λλ4069+76/λλ6716+31 to density makes them more suitable for diagnosing the presence of high-density clumps. If T_e([N II]) is adopted, the density underestimate has a small impact on the ionic abundances derived from optical spectra, being limited to up to ∼0.1 dex when auroral [S II] and/or [O II] lines are used. However, these density effects are critical for the analysis of infrared fine-structure lines, such as those observed by the JWST in local star-forming regions, implying strong underestimates of the ionic abundances. We present temperature relations between T_e([O III]), T_e([Ar III]), T_e([S III]) and T_e([N II]) for the extragalactic H II regions. We confirm a non-linear dependence between T_e([O III]) and T_e([N II]) due to a more rapid increase of T_e([O III]) at lower metallicities. | 10.1093/mnras/stad1569 | [
"https://export.arxiv.org/pdf/2305.13136v1.pdf"
] | 258,832,973 | 2305.13136 | 204a8da2ff8fe747c9ac5203672fe729f7202138 |
Density biases and temperature relations for DESIRED HII regions
2020
J E Méndez-Delgado
Zentrum für Astronomie
Astronomisches Rechen-Institut
Universität Heidelberg
Mönchhofstraße 12-14D-69120HeidelbergGermany
★
C Esteban
Instituto de Astrofísica de Canarias (IAC)
E-38205La LagunaSpain
Departamento de Astrofísica
Universidad de La Laguna
E-38206La LagunaSpain
J García-Rojas
Instituto de Astrofísica de Canarias (IAC)
E-38205La LagunaSpain
Departamento de Astrofísica
Universidad de La Laguna
E-38206La LagunaSpain
K Z Arellano-Córdova
Department of Astronomy
The University of Texas at Austin
Stop C14002515, 78712Speedway, AustinTXUSA
K Kreckel
Zentrum für Astronomie
Astronomisches Rechen-Institut
Universität Heidelberg
Mönchhofstraße 12-14D-69120HeidelbergGermany
V Gómez-Llanos
Instituto de Astrofísica de Canarias (IAC)
E-38205La LagunaSpain
Departamento de Astrofísica
Universidad de La Laguna
E-38206La LagunaSpain
O V Egorov
Zentrum für Astronomie
Astronomisches Rechen-Institut
Universität Heidelberg
Mönchhofstraße 12-14D-69120HeidelbergGermany
M Peimbert
Instituto de Astronomía
Universidad Nacional Autónoma de México
Apdo. Postal 70-264 Ciudad UniversitariaMéxico
M Orte-García
Departamento de Astrofísica
Universidad de La Laguna
E-38206La LagunaSpain
Density biases and temperature relations for DESIRED HII regions
MNRAS
000, 2020. Accepted XXX. Received YYY; in original form ZZZ. Preprint 23 May 2023. Compiled using MNRAS LaTeX style file v3.0. Key words: ISM: abundances - ISM: H II regions - galaxies: abundances - ISM: evolution
We present a first study based on the analysis of the DEep Spectra of Ionized REgions Database (DESIRED). This is a compilation of 190 high signal-to-noise ratio optical spectra of H II regions and other photoionized nebulae, mostly observed with 8-10m telescopes and containing ∼29380 emission lines. We find that the electron density n_e of the objects is underestimated when [S II] λ6731/λ6716 and/or [O II] λ3726/λ3729 are the only density indicators available. This is produced by the non-linear density dependence of the indicators in the presence of density inhomogeneities. The average underestimate is ∼300 cm−3 in extragalactic H II regions, introducing systematic overestimates of T_e([O II]) and T_e([S II]) compared to T_e([N II]). The high sensitivity of [O II] λλ7319+20+30+31/λλ3726+29 and [S II] λλ4069+76/λλ6716+31 to density makes them more suitable for diagnosing the presence of high-density clumps. If T_e([N II]) is adopted, the density underestimate has a small impact on the ionic abundances derived from optical spectra, being limited to up to ∼0.1 dex when auroral [S II] and/or [O II] lines are used. However, these density effects are critical for the analysis of infrared fine-structure lines, such as those observed by the JWST in local star-forming regions, implying strong underestimates of the ionic abundances. We present temperature relations between T_e([O III]), T_e([Ar III]), T_e([S III]) and T_e([N II]) for the extragalactic H II regions. We confirm a non-linear dependence between T_e([O III]) and T_e([N II]) due to a more rapid increase of T_e([O III]) at lower metallicities.
INTRODUCTION
The determination of chemical abundances from emission line spectra of ionized nebulae is an essential tool for studying the chemical composition and evolution of the Universe, from the Milky Way to high-redshift galaxies. In ionized nebulae, the total abundance of heavy elements, the metallicity, is traced by the O/H abundance, as it comprises ∼ 55 per cent of the total metal content ). This information can be used to explore the nucleosynthesis of chemical elements and the galaxy formation and evolution. In fact, the mean metallicity of the galaxies and the shape of radial abundance gradients depend on their masses, the star formation history and the relative importance of the gas inflows/outflows across their discs (e.g. Tinsley 1980;Prantzos 2008;Matteucci 2014).
The chemical abundances of elements heavier than He can ★ E-mail: [email protected] be derived from bright collisionally excited lines (CELs) in the emission line spectra of ionized nebulae. In the optical range, the emissivity of CELs is exponentially dependent on the electron temperature, e , being a critical physical parameter for obtaining accurate abundance values. This is the basis of the so-called direct method for determining chemical abundances (e.g. Dinerstein 1990;Peimbert et al. 2017;Pérez-Montero 2017). Moreover, recently demonstrated the presence of temperature inhomogeneities within the highly ionized gas as theorized by Peimbert (1967). The existence of such spatial temperature variations introduces a systematic bias towards lower abundances that can reach errors as high as ∼ 0.5 dex in the O/H abundance ). On the other hand, the fine structure CELs in the infrared (IR) range that arise from atomic transitions of low energy levels (Δ E<< 1 eV) have a smaller temperaturedependence (Osterbrock & Ferland 2006). However, in these cases the electron density, e , is a fundamental parameter to accurately determine chemical abundances, as the critical densities of these low-energy levels are smaller than those involved in the emission of optical CELs (Osterbrock & Ferland 2006). With the advent of optical spectroscopic surveys using large Integral Field Units (IFU), data for myriads of H II regions in large samples of external spiral galaxies have become available (e.g. Sánchez et al. 2012;Bryant et al. 2015;Bundy et al. 2015;Emsellem et al. 2022). However, it is common that most of the spectra of extragalactic H II regions in these surveys are not deep 1 enough to detect the faint auroral CELs necessary to determine e or the even fainter recombination lines (RLs) of heavy-element ions. When the gas temperature is not available one has to rely on the so-called strong-line methods to estimate the gas-phase metallicity, which are based on calibrations of the O/H ratio -the proxy for metallicity when analyzing nebular spectra-built with observed intensity ratios of bright nebular CELs (e.g. Pagel et al. 1979;Pilyugin et al. 2010Pilyugin et al. , 2012Marino et al. 2013;Pilyugin & Grebel 2016) or on photoionization models (e.g. McGaugh 1991; Kewley & Dopita 2002;Kobulnicky & Kewley 2004;Tremonti et al. 2004). Comparing the different calibrations available in the literature, one can find very large differences between the O/H ratios for the same set of observations, differences that can amount to 0.2-0.7 dex (e.g. Kewley & Ellison 2008;López-Sánchez et al. 2012;Groves et al. 2023). From the available strong-line methods, only those of Peña-Guerrero et al. (2012b) take into account the presence of temperature inhomogeneities.
The large amount of data generated by big surveys that one can gather from the literature permit us to explore, constrain and minimize the effects of statistical errors in the estimate of metallicities of H II regions in a given galaxy or a group of similar galaxies (e.g Sánchez et al. 2015;Ho 2019;Kreckel et al. 2019;Metha et al. 2021). However, only detailed studies of deep spectra of H II regions allow us to adequately explore and constrain the effects of systematic errors in the determination of physical conditions and ionic and total abundances. On this matter, there are previous works dedicated to collect auroral CELs from the most commonly studied ions ( (Pilyugin et al. 2012;Croxall et al. 2016;Berg et al. 2020;Rogers et al. 2021Rogers et al. , 2022Zurita et al. 2021). However, with some notable exceptions where recombination lines were considered Guseva et al. 2011;Peña-Guerrero et al. 2012a;Valerdi et al. 2019;Skillman et al. 2020), most previous studies are limited to the CELs of few ions, which do not provide the complete picture of the physics of the ionized gas.
Since the beginning of this century, our group has gathered a large number of intermediate spectral resolution longslit or high spectral resolution echelle spectra for a large number of Galactic and extragalactic H II regions as well as Galactic planetary nebulae (PNe) and ring nebulae (RNe) around massive Wolf-Rayet and Of stars. This collection of data is what we call DESIRED (DEep Spectra of Ionized Regions Database, see Section 2 for references and a description of the data). The vast majority of the data have been obtained with large-aperture (8-10m) telescopes and the observations were designed to detect very faint emission lines. As a result of the remarkable signal-to-noise ratio of our collection of nebular spectra, each individual object counts with tens or even hundreds of emission lines, showing good measurements of all or some of these: (a) one or several faint e -sensitive auroral CELs, (b) several density indicators based on the intensity ratios of CELs, (c) RLs of one or some heavy-element ions and (d) sets of rare faint lines as those of [Fe II] and/or [Fe III] or fluorescence ones, useful for detailed studies on the internal physics of the ionized gas.
The DESIRED papers seek to analyze global properties of the ionized gas in unprecedented detail, detecting and describing phenomena that have -or might have-an impact on interpretations of large-scale studies based on solid observational evidence. The present work is dedicated to the study of physical conditions ( e , e ) of the ionized gas, including information about their internal structures and the temperature relations. The prescriptions, warnings and relations of this study are intended to consider different types of ionized regions and can be used both in studies of individual objects and in large-scale studies.
DESCRIPTION OF DESIRED
DESIRED comprises a set of 190 spectra, 72 of them correspond to 68 extragalactic H II regions, 56 spectra of 41 Galactic H II regions, 34 Galactic PNe, 21 spectra of 7 Galactic RNe as well as 6 spectra of 5 photoionized Herbig-Haro objects (HHs) and 1 protoplanetary disk (proplyd) of the Orion Nebula. References to the spectra are shown in Tables A1, A2, A3, A4 and A5. All the spectra have been observed by our group except those of the Galactic PNe IC 418, IC 2501, IC 4191 and NGC 7027 (Sharpee et al. 2003(Sharpee et al. , 2007. We decided to include these data in DESIRED as they show an analogous level of depth and quality as the rest of the objects included in Table A4 (see the comparative analysis performed by Rodríguez 2020 The remarkably high signal-to-noise ratio of the DESIRED spectra can be verified in any of the published reference articles. We can highlight fig. 1 of Esteban et al. (2014a) The observations have been taken from 2002 to date with the spectrographs and telescopes shown in Table A6 3 . The spectra were reduced and calibrated manually following a consistent procedure, using IRAF routines (Tody 1993), Python codes and some tasks from the ESO UVES pipeline (Ballester et al. 2000). The flux, wavelength and FWHM of the lines were measured manually using the IRAF task SPLOT, individually estimating the continuum.
Echelle spectra were not corrected from telluric emissions, since the slit does not usually cover sky areas. However, the high spectral resolution permits us to separate the doppler shifted nebular emissions from the sky contaminations. Sky-blended lines are BPT diagram of the DESIRED spectra. The dashed line represents the boundaries between star-forming regions (to the left and below the line) and regions with harder ionizing sources (generally associated to Active Galactic Nuclei) (Kauffmann et al. 2003).
identified and their use has been ruled out in this work. In most of the spectra the telluric absorption bands were not corrected. This potentially affects several wavelength ranges as 7600 − 7700Å, 9000−10000Å, where atmospheric O 2 and H 2 O bands are strong and dense (Stevenson 1994). UVES spectra may have optical reflections within the second dichroic of the blue arm ( 3750 − 4995Å). The wavelength position of these spurious "ghosts" can be determined directly from the echellograms as they cross the different observed orders. The use of these lines is also discarded along with those with detected individual spurious effects.
Intermediate spectral resolution spectra ( ∼ 3000-4000) were mostly taken with long-slit two-arms spectrographs. We verified the accuracy of the relative flux calibration between the bluest and reddest wavelength ranges. The sky emission was removed in the case of the smaller angular size nebulae (most of the Galactic H II regions observed with OSIRIS at the 10.4m GTC telescope, the extragalactic ones and PNe), this was not possible in the case of IC 5146 and M43, extended Galactic H II regions observed with ISIS at the 4.2m WHT telescope.
The spectra were corrected for interstellar extinctions and underlying stellar absorptions following the iterative process described by López-Sánchez et al. (2006), which is based on the results of Mazzarella & Boroson (1993) and the observed H I Balmer and Paschen decrements, when available. No corrections for underlying stellar absorptions were made to the He I lines. However, the Galactic objects did not require such corrections (Méndez-Delgado et al. 2020). The detailed procedure for each object is described in the reference articles.
In Fig. 1, we show a BPT diagram (Baldwin et al. 1981) of all DESIRED spectra distinguishing their corresponding types of nebulae. The dashed line indicates the separation between the H II regions and active galactic nuclei (AGNs) as defined by the empirical equation (1) of Kauffmann et al. (2003). All Galactic and extragalactic H II regions as well as photoionized HH objects and the proplyd are located in the zone of star forming regions. This is consistent with gas photoionized by O or early B type stars. PNe and RNe are present both in the star forming region zone and in the area usually associated with AGNs. RNe associated with Wolf-Rayet stars are located within the AGN zone whereas those associated to Of stars are together with the H II regions. This is due both to a harder ionizing spectrum from Wolf-Rayet stars and to a larger contribution from shocks, associated with stellar feedback . Most PNe are located well above the H II regions line (e.g. Kniazev et al. 2008), as expected from their harder ionizing sources. However, Abell 46, Abell 63 and Ou5 (Corradi et al. 2015) fall within the area of H II regions. This interesting result seems linked to the fact that these 3 regions have the largest abundance discrepancy factor (ADF) between the O 2+ /H + abundances derived with both RLs and CELs of the whole sample. This is in agreement with the scenario where these PNe contain metal-rich cold inclusions within the ionized gas, enhancing the emission of the H I RLs, as proposed by several authors (Corradi et al. 2015;García-Rojas et al. 2022).
The metallicity range, expressed by 12+log(O/H), determined from CELs and assuming no temperature fluctuations, covered by the sample objects goes from 7.72 and 8.70 in the case of H II regions (including both Galactic and extragalactic) and from 7.76 and 8.80 in the case of PNe. It should be noted that, due to the requirements of DESIRED observations (relatively bright spectra and high probability of detecting RLs of heavy element ions), the number of H II regions with 12+log(OH) below 8.0 is rather limited. A drawback that could be corrected with observations with the future very large aperture telescopes.
PHYSICAL CONDITIONS
The determination of the chemical composition of photoionized nebulae requires, as a first step, accurate calculations of e and e . DESIRED objects potentially comprise a wide range of densities from e ∼ 10 2 cm −3 for some extragalactic H II regions to e > 10 5 cm −3 for HHs and the photoevaporating proplyd 170-337 of the Orion Nebula. Therefore, it is possible to explore relations between several density diagnostics. and the transition probabilities and collision strengths given in Table A7. We use the getCrossTemDen task of PyNeb to simultaneously derive e and e , cross matching the aforementioned density diagnostics with the e -sensitive [N II] 5755/ 6584, [O III] 4363/ 5007, [Ar III] 5192/ 7135 and [S III] 6312/ 9069 line intensity ratios. Finally, we average the density values obtained with each cross-match to obtain a representative value of e for each tested density diagnostic. For the objects with reliable detections of density diagnostics but not of the aforementioned temperature diagnostics, we derive the density by assuming e = 10000 ± 1000 K. There are only 5 objects in this last case: three slit positions of M 43 (observed by Simón-Díaz et al. 2011), two H II regions of M 33 and another two of NGC 300 (observed by Toribio San Cipriano et al. 2016). The temperature dependence of the density diagnostics is negligible in these cases. All these objects show e < 1000 cm −3 . We analyze the e determinations in Section 5, defining a clear criteria to adopt its final representative value for each object. Finally, once e is fixed, e is calculated by using the getTemDen task of PyNeb.
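As a concrete illustration of this kind of workflow, the following minimal Python sketch shows how PyNeb's getCrossTemDen and getTemDen can be combined to obtain T_e and n_e from one temperature-sensitive and one density-sensitive line ratio. The line intensities, the particular pair of diagnostics and the variable names are illustrative assumptions only; they do not reproduce the actual DESIRED measurements or the averaging over all cross-matches described above.

    import pyneb as pn

    # Hypothetical de-reddened line intensities (relative to Hbeta = 100); purely illustrative values.
    obs = {"N2_5755": 1.2, "N2_6584": 95.0, "S2_6716": 18.0, "S2_6731": 14.0}

    diags = pn.Diagnostics()
    # Cross-match one Te-sensitive and one ne-sensitive ratio.
    Te, ne = diags.getCrossTemDen(
        "[NII] 5755/6584", "[SII] 6731/6716",
        obs["N2_5755"] / obs["N2_6584"],
        obs["S2_6731"] / obs["S2_6716"],
    )

    # Once a representative density has been adopted, getTemDen returns the temperature of a single diagnostic.
    N2 = pn.Atom("N", 2)
    Te_N2 = N2.getTemDen(obs["N2_5755"] / obs["N2_6584"], den=ne, wave1=5755, wave2=6584)
    print(Te, ne, Te_N2)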
The near infrared lines [S III] 9069, 9531 can be affected by the telluric absorption bands (Stevenson 1994;Noll et al. 2012), potentially introducing spurious results in e ([S III]) if there is no strict control over this issue. Usually, the most affected line is [S III] 9531, which lies in a wavelength zone more contaminated by telluric absorption bands, although this effect may vary depending on internal gas velocities, as in the Orion Nebula, where [S III] 9069 is usually the most contaminated one (Baldwin et al. 1991;Méndez-Delgado et al. 2021a). We have tried to have a strict control on the telluric absorptions, discarding the use of the affected lines, in order to avoid spurious e ([S III]) determinations. As a second check, in those objects where both lines were detected, we test the [S III] 9531/ 9069 line intensity ratio. Both lines arise from the same atomic 1 2 upper level, therefore their relative intensity must be equal to 2.47 (Froese Fischer et al. 2006), regardless of the physical conditions of the gas. We discard those objects where [S III] ( 9531)/ ( 9069) > 2.47 beyond the observational uncertainties, as it indicates a possible effect on [S III] 9069. However, since no telluric corrections of any kind were made, except in the Méndez-Delgado et al. (2021a,b, 2022) spectra, we cannot guarantee that all DESIRED spectra are free of telluric absorption effects on their [S III] 9069, 9531 lines.
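Because the theoretical [S III] λ9531/λ9069 ratio is fixed by atomic physics, this consistency test is easy to automate. The snippet below is a schematic version of the one-sided check described above (flagging only ratios that are too high, i.e. possible absorption of λ9069); the function name and the way the uncertainty is passed are assumptions made for illustration.

    THEORETICAL_S3_RATIO = 2.47  # fixed by the transition probabilities of the common 1D2 upper level

    def flag_9069_absorption(f9531: float, f9069: float, ratio_err: float) -> bool:
        """Return True if I(9531)/I(9069) exceeds 2.47 beyond the quoted uncertainty,
        which would point to telluric absorption affecting [S III] 9069."""
        return (f9531 / f9069) - THEORETICAL_S3_RATIO > ratio_err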
Although the [O II] 7319+20+30+31/ 3726+29 and/or [S II] 4069+76/ 6716+31 line ratios were measured in many objects, we prefer not using them in the determination of the final adopted e of each object. As we discuss in Section 6.1, those line ratios are very sensitive to e and the inferred e ([O II]) and e ([S II]) are affected by the presence of high-density clumps within the ionized nebulae, as will be discussed in Section 6.1.
PHOTOIONIZATION MODELS
To explore the theoretical temperature relations in the absence of temperature fluctuations ( 2 = 0), we select the photoionization models of giant H II regions from the Mexican Million Models database 4 , built for the BOND project (Vale Asari et al. 2016) using Cloudy v17.01 (Ferland et al. 2017). We adopt the same selection criteria as Amayo et al. (2021), which considers startburst ages lower than 6 Myr, ionization-bounded and density-bounded selected by a cut of 70 per cent of the H flux and a selection of realistic N/O, and O/H values (Vale Asari et al. 2016). We also adopt the same BPT-cut defined by Amayo et al. (2021) in their equation (3). Since we do not intend to study the temperature relations in PNe or RNe beyond analyzing their differences with the results of H II regions, we do not adopt any additional set of models.
THE DENSITY STRUCTURE OF IONIZED NEBULAE
Several line intensity ratios emitted from atomic levels close in energy are sensitive to e due to their different collisional excitation and deexcitation rates. As shown in left panel of Fig. 2, the density dependence of several optical and infrared line ratios is not linear and they have different ranges of validity. The [S II] 6731/ 6716 line intensity ratio is one of the most used density diagnostics in the literature due to its observational accessibility. Therefore, it will be used in this work as the main reference in the comparisons with other density diagnostics. In order to estimate the utility of a density diagnostic, it is convenient to study its sensitivity. We define this quantity as the variation of the line intensity ratio with e , being mathematically represented with the derivative of the diagnostic with respect to the density.
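The density sensitivity defined here can be evaluated directly from the PyNeb emissivities. The short sketch below, under the illustrative assumptions of a fixed T_e = 10^4 K and a logarithmic density grid, computes the [S II] λ6731/λ6716 ratio and its derivative with respect to log n_e; it is only meant to show the type of calculation behind such sensitivity curves, not to reproduce Fig. 2.

    import numpy as np
    import pyneb as pn

    S2 = pn.Atom("S", 2)
    Te = 1.0e4                           # K, assumed fixed for the curve
    ne_grid = np.logspace(1, 6, 200)     # cm^-3

    # Predicted [S II] 6731/6716 ratio along the density grid.
    ratio = np.squeeze(S2.getEmissivity(Te, ne_grid, wave=6731)
                       / S2.getEmissivity(Te, ne_grid, wave=6716))

    # Sensitivity: derivative of the line ratio with respect to log10(ne).
    sensitivity = np.gradient(ratio, np.log10(ne_grid))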
The sensitivity of the e -diagnostics and, in general, the relationship between the inferred physical conditions and the observed line intensity ratios depend on the atomic transition probabilities and collision strengths. Several studies have analyzed the behavior of these parameters with optical spectra (Stasińska et al. 2013 We minimize the presence of errors in the atomic data by considering the results of the aforementioned studies, avoiding the use of discrepant atomic data sets. However, the impact of potential errors cannot be completely neglected, since the available calculations are few in number in the case of some ions.
A comparison of the relative sensitivity of different density diagnostics with respect to the widely used [S II] 6731/ 6716 is shown in the right panel of Fig. 2 4069+76/ 6716+31 have a very high density-sensitivity over the entire range from 10 2 cm −3 < e < 10 6 cm −3 . These line intensity ratios will be discussed in detail in Section 6.1.
If the nebulae have homogeneous density, the different diagnostics should converge to the same value if they are in their density-sensitive range. However, the emissions of the different ions can come from different volumes of ionized gas and the nebulae may contain density inhomogeneities. In fact, the presence of high-density clumps has been revealed by high-resolution images in several nearby photoionized nebulae (see e. g. Borkowski et al. 1993;O'Dell & Wong 1996;O'Dell et al. 2002). Besides filamentary structures, jets of matter and gas flows due to photoionization are capable of compressing the gas, increasing the local density. Within the H II regions, ongoing star formation can give rise to HHs (Herbig 1950;Haro 1952) and proplyds (O'Dell et al. 1993), which are associated with clumps of ionized gas that can reach density values of up to ∼ 10 6 cm −3 (Henney & O'Dell 1999). Although the high-density inclusions may represent a small fraction of the gas volume, the different collisional deexcitation rates of the diagnostics can bias them towards higher or lower values depending on their particular density-sensitivity regime. Moreover, since the refractory elements such as Fe are mostly depleted into dust grains within the ionized environments, the [Fe III] 4702/ 4658 ratio may be more easily detected in shock-compressed higher-density areas where the dust destruction is taking place, such as HH objects (Méndez-Delgado et al. 2021a). 6731/ 6716 and [O II] 3726/ 3729 diagnostics show an excellent agreement for the whole sample. This is not surprising since both O + and S + ions coexist in the volume of low degree of ionization and both show essentially the same sensitivity (see the right panel of Fig. 2). In some PNe, due to the possible existence of cold clumps of high metallicity (Liu et al. 2000(Liu et al. , 2006García-Rojas et al. 2016Richer et al. 2022), we may expect an important contribution of recombination in the observed [O II] lines (Barlow et al. 2003;Wesson et al. 2018). This is especially important in the cases of Ou5 and Abell 46 (Corradi et al. 2015), the PNe with the largest ADF from the whole sample and the only ones with ADF>5, and where the density obtained from [O II] 3726/ 3729 is clearly higher than that obtained from [S II] 6731/ 6716. In the rest of the photoionized nebulae in the database, this phenomenon, if present, actually has a negligible impact on these density diagnostics. 4740/ 4711 reveals significant deviations from a 1:1 relation in those objects where the first diagnostic gives e < 1000 cm −3 . As expected from Fig 2, this is because the aforementioned diagnostics are at their low density limit, where the sensitivity is practically negligible. In the low density limit, the line intensity ratios should converge to constant values mainly fixed by the atomic collisional strengths. From the DESIRED data we obtain [Cl III] 5538/ 5518 = 0.74 ± 0.05, [Fe III] 4702/ 4658 = 0.26 ± 0.04 and [Ar IV] 4740/ 4711 = 0.79 ± 0.07, in consistency with the predictions of the selected atomic data (see Table A7), discarding significant errors in them.
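A toy two-phase calculation makes this bias explicit. In the sketch below all numbers are invented for illustration (an assumed T_e = 10^4 K, a diffuse component at 10^2 cm^-3 and a small clump at 10^5 cm^-3): the [S II] doublet fluxes of the two phases are co-added and the blended ratio is then inverted with a single-density diagnostic, which returns a value far below the density of the dense component and below the emission-weighted mean of the mixture.

    import numpy as np
    import pyneb as pn

    Te = 1.0e4                               # K (assumed)
    ne_phases = np.array([1.0e2, 1.0e5])     # cm^-3: diffuse gas plus a dense clump (assumed)
    ion_frac = np.array([0.95, 0.05])        # fraction of S+ ions in each phase (assumed)

    S2 = pn.Atom("S", 2)

    def blended_flux(wave):
        # The flux of each phase scales as (number of emitting ions) x ne x emissivity(Te, ne).
        em = np.array([np.squeeze(S2.getEmissivity(Te, ne, wave=wave)) for ne in ne_phases])
        return np.sum(ion_frac * ne_phases * em)

    ratio_blend = blended_flux(6731) / blended_flux(6716)
    ne_single = S2.getTemDen(ratio_blend, tem=Te, wave1=6731, wave2=6716)

    ne_weighted = np.average(ne_phases, weights=ion_frac * ne_phases)
    print(ne_single, ne_weighted)   # the single-diagnostic density is biased low with respect to the weighted mean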
[ [Cl III] and [Ar IV] density diagnostics show rather consistent trends, despite arising from the low, intermediate and very high ionization volumes. This shows that the different e -sensitivity range of the diagnostics dominates over the possible density stratification in the nebulae, except for few dispersed objects of the sample, as it is also shown in Fig. 4.
Considering the previous discussion and in agreement with Méndez-Delgado et al. (2023), we propose the following criteria to adopt a representative density for chemical abundance determinations using optical spectra:
(i) If n_e([S II]) < 100 cm−3, we adopt the low density limit (n_e < 100 cm−3).
( The resulting representative density values are shown in the bottom panel of Fig. 3. As discussed in Section 6.1, this criteria is far from perfect, but it is accurate enough to determine chemical abundances based on optical spectra. However, we discourage its use for the determination of feedback-related pressure terms or abundances based on infrared fine structure lines without further analysis.
In criterion (i) we consider the fact that all the density diagnostics analyzed in this work are insensitive at such low densities. If the average electron density is actually in this range of values, its impact is negligible in the determination of temperature and chemical abundances (Osterbrock & Ferland 2006). However, the consideration of another method is recommended for those who require precise determinations of the gas pressure (dependent on density) in low-density H II gestion, considering the radiative and collisional atomic transitions from Bautista et al. (2015), the [Fe II] 8617/ 9267 line intensity ratio should vary from a value ∼ 110 at e = 1 cm −3 to a value ∼ 54 at e = 100 cm −3 when assuming e = 10000 K. These lines arise from the Fe + lower quartet levels, and should not have significant fluorescence contributions (Baldwin et al. 1996;Verner et al. 2000). Méndez-Delgado et al. (2021b have checked the adequacy of this density diagnostic in higher density regions. However, this is highly dependent on the atomic data used (Mendoza et al. 2023 3726/ 3729) as single diagnostic is consistent with the adopted value in most of the denser nebulae within the error bars -given the high quality of the DESIRED spectra-the uncertainty of these diagnostics becomes larger as the density increases. As shown in the bottom panel of Fig. 3, a systematic underestimate of the median values of density is noticeable when e ([S II] 6731/ 6716) approaches values ∼ 10 4 cm −3 , concerning especially to PNe. It is difficult to establish whether this behaviour is linked to a density stratification in these objects, as some works suggest (see e. g. Rauber et al. 2014), or it is just a consequence of the different sensitivity of the compared diagnostics. Since this affects some HHs as well, this seems to indicate that the different sensitivity of the diagnostics dominates the observed trend. (Sharpee et al. 2007). In these objects, the electron density is so high that a large fraction of the emission in CELs is produced through the much weaker auroral lines instead of the nebular ones. Unfortunately, because of the large dust depletion and low ionization degree of proplyd
TEMPERATURE STRUCTURE
In this section, we analyze the temperature relations for the different ionization zones in extragalactic H II regions. Firstly, we will start by investigating the dependence of the low ionization temperature diagnostics e ([O II]), e ([S II]) and e ([N II]) on the electron density. Secondly, we will study the temperature relations obtained directly from the observations. In all figures of this section, we use the parameter defined by Pilyugin (2001): Based on the results of photoionization models (Campbell et al. 1986;Pilyugin et al. 2006 Hägele et al. 2006Hägele et al. , 2008Esteban et al. 2009;Bresolin et al. 2009;Berg et al. 2015 4069+76/ 6716+31 line intensity ratios, which can be affected by several observational effects. The first line intensity ratio is highly dependent on the reddening correction as well as the quality of the flux calibration of the spectrum given the wide wave- Fig. 1. It should be noticed the good agreement that exist when considering regions with e >1000 cm −3 (which leaves out most extragalactic H II regions and RNs, blue dots and magenta crosses, respectively), as they are in their optimal sensitivity range, regardless of the degree of ionization of the ion. 4069.62, 4069.88, 4072.15, 4075.86 lines, which can represent more than 10 per cent of the total flux in some nebulae.
In addition to the possible observational effects commented on above, some other physical phenomena have been invoked to explain the discrepancies between e ([O II]), e ([S II]) and e ([N II]), such as:
• Mismatch between the temperature of the volumes of O + , N + and S + .
• Recombination contribution to the CELs.
• Temperature fluctuations.
• Density variations.
The high quality of the DESIRED spectra permits us to minimize the effect of observational errors on e determinations and to explore other physical phenomena that may cause the discrepancies. We derive e ( [O II Table 1. We will focus on the global trends.
As it can be seen in Fig. 5
Are these temperatures different?
The differences between the temperatures determined from CELs of different ions are usually explained because they are representative of zones with different ionization conditions. This may result from differences in the ionization potentials, in the spectral distribution of the ionizing radiation and sometimes on the absorption edges on the ionizing radiation and on the presence of charge exchange and dielectronic recombination contributions (Stasińska 1980;Garnett 1992). Although there are small differences in the ionization energy ranges of S + , O + and N + and some other properties, photoionization models predict that this should not have relevant effects on the difference between e ([S II]), e ([O II]) and e ([N II]). The exceptions may be very high metallicity regions, where the internal temperature gradients can be very marked (Stasińska 2005). However, as can be inferred from Fig. 5, the differences in the top panels are higher for the regions of higher degree of ionization, which is most typical case at lower metallicities. Furthermore, although the coexisting volumes of S + and O + usually differ much more than those of N + and O + (e.g. see fig. 2
Recombination contributions?
Rubin (1986) pointed out the possibility of significant recombina-tion contributions to the atomic levels that produce optical CELs of N and O ions. At first order, one would expect recombinations to be more important in regions of higher metallicity (Stasińska 2005), where the temperature is lower. This is the opposite behavior of our observations shown in Fig. 5
. A complex question is to know in what proportion the recombinations affect the inferred e ( [S II]), e ([O II]) and e ([N II]) in the analyzed extragalactic H II regions, as recombination contributions can affect both the auroral and nebular [O II] lines whereas it is expected to only affect the auroral [N II]
line. To clarify this, we use the photoionization models described in Section 4. These models consider the recombination contributions in the [O II] and [N II] lines using the recombination coefficients calculated by Pequignot et al. (1991), Fang et al. (2011Fang et al. ( , 2013 and Storey et al. (2017).
In Fig. 6 can be tested by assuming that this ion has the same electronic configuration than [O II]. Therefore, as a first approximation, the recombination contribution to the 2 and 2 levels of [S II] would be similar to that of [O II] weighted by the S 2+ /O 2+ abundance ratio. In Fig. 7 we show that the potential recombinations under this case are negligible when e ( [S II]) > 7000 K, which is the case for all our data (see Fig. 5).
Considering Fig. 5 would be difficult to explain, given the predictions of Fig. 7. Therefore, recombination effects on [S II], [O II] and [N II] CELs do not seem to explain the observed differences in their corresponding temperatures.
Temperature inhomogeneities?
Peimbert (1967) (2023). Those authors find that although the effects of 2 are evident in the high-ionization volume of nebulae, they seem to be absent in the low-ionization one.
Density inhomogeneities
The [O II] 7319+20+30+31/ 3726+29 and [S II] 4069+76/ 6716+31 line intensity ratios are highly dependent on density as it is shown in Fig. 2. When e is fixed, the e -sensitivity of the aforementioned line intensity ratios is larger than that of [S II] (Peimbert 1971;Rubin 1989 As a conclusion, we propose that the presence of high-density inclusions within the volume observed in the spectra of extragalactic H II regions naturally explains the behavior seen in Fig. 5 7319+20+30+31/ 3726+29 is slightly higher, and this may explain the larger number of points below the 1:1 line. Fig. 3) are extremely relevant when using infrared fine structure CELs (Lamarche et al. 2022). In analogy to what happens with temperature inhomogeneities in the optical CELs, density inhomogeneities may introduce systematic bias in the chemical abundances derived from infrared CELs. If e is underestimated, ionic abundances are also underestimated.
In the case of the high-density nebulae of the DESIRED sample, where e ([S II] 6731/ 6716) > 1000 cm −3 , we find good consistency between the adopted density and the values derived from the [O II] and [S II] 2 P 0 / 2 D 0 line intensity ratios, as shown in Fig. 11. It should be noted that in these cases, the adopted densities are mainly weighted by the [Cl III], [Fe III] and [Ar IV] density diagnostics. This suggests that although high-density clumps (or density gradients) may be present in all ionized nebulae, the systematic effects on the derived properties can be reduced in those objects showing higher mean densities. In contrast, in low-density nebulae, the presence of high-density clumps can go unnoticed by using e ([S II] The different sensitivity of auroral and nebular [O II] lines to density cause the systematic difference between the O + abundances determined with the [O II] auroral and nebular lines; fact that has been described by several authors, especially in the case of PNe (Stasińska et al. 1998;Escudero et al. 2004;Rodríguez 2020). In the case of PNe, there may be other phenomena playing a role. In the upper panel of Fig. 12 Fig. 12 we show the same comparison but using [O II] 7319+20+30+31/ 3726+29 and [S II] 4069+76/ 6716+31 as density indicators. As we can see, with this approach the systematic difference is removed.
The DESIRED temperature relationships for extragalactic HII regions
The temperature can be stratified within ionized nebulae, which is reflected in differences between the representative values of different ionic species. The most common procedure to consider the temperature stratification when deriving chemical abundances is to adopt e ([O III]) and e ([N II]) for the high and low ionization volumes, respectively. Other temperature-sensitivity line ratios as e ([S III] 6312/ 9069) arises from zones of intermediate ionization (e.g. Berg et al. 2015). Fig. 13 shows the DESIRED temperature relationships derived from different diagnostics associated with different ionization volumes of the gas. In each plot, we include the best fit to the data, the predicted linear fit from the BOND models (see Section 4) and the model-derived relations of Garnett (1992). In Table 2 we present the DESIRED temperature relations (column 4) and the scatter and number of objects considered in each case (columns 5-6).
Upper left panel of Fig. 13 shows the e ([O III]) vs. e ( [N II]) relationship defined for the DESIRED extragalactic H II regions and the linear fit to the data. There is wealth of works devoted to study this relation in the literature (e.g., Campbell et al. 1986;Garnett 1992;Pagel et al. 1992;Pilyugin 2007;Esteban et al. 2009;Arellano-Córdova & Rodríguez 2020;Berg et al. 2020;Rogers et al. 2021Rogers et al. , 2022, finding a relatively high scatter. Arellano-Córdova & Rodríguez (2020) showed that part of the dispersion is due to the effects of metallicity and the degree of ionization and therefore related to nebular properties. With the DESIRED extragalactic H II regions, we minimize spurious scatter that can occur by using low signal-to-noise spectra and confirm a departure from a linear relationship. This departure becomes larger with the degree of ionization (and lower metallicities) and becomes noticeable when e ([O III]) > 10000 K. Such a deviation from a linear relationship has been reported by several authors previously (e.g. Pilyugin 2007; Arellano-Córdova & Rodríguez 2020). In Fig. 13 we also include a quadratic fit to the data, which is only valid within 7000 K < e ([O III]) < 16, 500 K. However, its shape at e ([O III]) > 13, 000 K is determined by the position of only two objects in the diagram, NGC 5408 (Esteban et al. 2014b) and NGC 2363 ). As shown in Table 2, the photoionization models described in Section 4 are not able to reproduce the curvature observed between e ([O III]) and e ([N II]) in the DE-SIRED extragalactic H II regions. This curvature is not reproduced either if the models are weighted with the methodology proposed by Amayo et al. (2021) in their equation (4), considering the observational sample compiled by Zurita et al. (2021) and Izotov et al. (2007).
Méndez Table 2 to estimate e ([N II]) and consequently 0 (O 2+ ), using equation (4) from . Fig. 13 Upper right panel of Fig. 13 shows the e ([S III]) vs. e ([O III]) relationship defined by the DESIRED spectra of extragalactic H II regions. The slope of the linear fit to the data is very similar to that obtained from model predictions of Garnett (1992) and Vale Asari et al. (2016). However, the dispersion around the fit is larger for the observational points with higher e and parameter values. Those points correspond mainly to spectra of H II regions of the Magellanic Clouds (Domínguez-Guzmán et al. 2022). As mentioned in Section 3, some of our estimates of e ([S III]) might not be completely free of telluric absorptions in the [S III] 9069, 9531 lines and this fact may enhance the derived e ([S III]). In the middle right panel of Fig. 13 we present the e ([S III]) vs. e ([N II]) relationship, which follows a linear relation with a remarkably small dispersion except for the spectra with lowest e values. Our linear fit has a steeper slope compared to the relations found by Berg et al. (2020) or Rogers et al. (2021). This might be due to the larger proportion of DESIRED spectra with e ([S III]) > 12000 compared to the samples of Berg et al. (2020) or Rogers et al. (2021), where the vast majority of objects are below that e value. The higher slope defined by the DESIRED spectra may be related to the fact that -as it was also relationshipse ( [S III]) tends to be higher than e ([N II]) in spectra with larger e values. As it has been said before, and following the results by , this indicates that e ([S III]) may also be affected by 2 . This possibility, however, requires a verification since the telluric absorptions in the [S III] 9069, 9531 lines act in the same direction as 2 .
DISCUSSION AND CONCLUSIONS
In this paper we present a first study based on DEep Spectra of Ionized REgions Database (DESIRED), a collection of high-quality deep optical spectra of ionized nebulae from the literature. The data were mostly obtained with 8-10m telescopes over more than 20 years by our research group and have been carefully reduced in an homogeneous way. DESIRED contains ∼ 29380 emission lines of 190 spectra of Galactic and extragalactic H II regions, PNe, RNe as well as photoionized HH objects and one proplyd of the Orion Nebula. The main aim of the study of the DESIRED sample as a whole is to draw attention to and quantify systematic effects that may bias the determination of physical conditions and chemical abundances of ionized gas in the Universe, as well as to better understand the physics of the formation of certain faint emission lines. The philosophy of DESIRED has been to prioritize the quality and depth of the spectra over their quantity in the design of the observations. However, due to the continuity of the project over the years, the number of objects has been increasing substantially, reaching a level comparable to that of a small survey, with the possibility of increasing in the future, especially with observations of low-metallicity (12+log(O/H) < 8.0) objects with very large aperture telescopes. Although formally this is the first paper based on the exploitation of DESIRED, it was also used by , who analyzed the systematic bias introduced by temperature fluctuations in the determination of ionized abundances in H II regions, a task impossible to perform with any other sample.
In this paper, we explore the density structure of the DE-SIRED objects as well as the ee relations for extragalactic H II regions. Regarding the density structure, we show that [Cl III] 5538/ 5518, [Fe III] 4658/ 4702 and [Ar IV] 4740/ 4711 are good density indicators when 10 3 cm −3 < e < 10 6 cm −3 , whereas [S II] 6731/ 6716, [O II] 3726/ 3729 are density sensitive when 10 2 cm −3 < e < 10 4 cm −3 . We find good consistency between diagnostics associated to different ionization volumes when the sensitivity ranges are similar. This implies that the sensitivity range of the diagnostics used is a more relevant parameter to obtain good density determinations than their selection attending to the ionization volume in which the abundance is determined. Based on these findings, in Section 5 we present simple and consistent criteria to derive the representative density for chemical abundance studies in the optical range.
We
ACKNOWLEDGEMENTS
We thank the referee, Grażyna Stasińska, for her careful revision of the manuscript and useful comments that have contributed to increase the quality of the paper. JEMD thank to A. Amayo
DATA AVAILABILITY
The original data are public and available in the references cited in Tables A1-A5. All our calculations are present in the files of the online material. The DESIRED files, although already public, can be shared upon reasonable request.
Table A6. Average spectral resolution and coverage of the sampled regions. The precise spectral resolutions are found in the reference articles shown in Tables A1-A5.
or fig. 7 of Méndez-Delgado et al. (2021a) in the case of the Orion Nebula; fig. 3 of Domínguez-Guzmán et al. (2022) for extragalactic H II regions in the Magellanic Clouds; fig. 3 of Esteban et al. (2016) for the RN NGC 6888 and fig. 4 of García-Rojas et al. (2018) for a group of PNe.
Figure 1. BPT diagram of the DESIRED spectra. The dashed line represents the boundary between star-forming regions (to the left and below the line) and regions with harder ionizing sources (generally associated with Active Galactic Nuclei) (Kauffmann et al. 2003).
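The demarcation curve in this diagram is the empirical star-forming boundary of Kauffmann et al. (2003). The following is a minimal sketch of how such a classification can be applied to emission-line ratios; the input fluxes are illustrative placeholders, not DESIRED measurements.

```python
# Sketch: classify a spectrum in the BPT plane using the empirical
# star-forming boundary of Kauffmann et al. (2003).
import numpy as np

def kauffmann_boundary(log_n2_ha):
    """Kauffmann et al. (2003) demarcation: log([O III]/Hbeta) as a function
    of log([N II]/Halpha), valid for log([N II]/Halpha) < 0.05."""
    return 0.61 / (log_n2_ha - 0.05) + 1.3

def is_star_forming(f_oiii_5007, f_hbeta, f_nii_6584, f_halpha):
    x = np.log10(f_nii_6584 / f_halpha)
    y = np.log10(f_oiii_5007 / f_hbeta)
    return (x < 0.05) and (y < kauffmann_boundary(x))

print(is_star_forming(4.2, 1.0, 0.3, 2.86))  # -> True for these test fluxes
```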
Fig. 3 compares ne([S II]) and the ne values obtained using the rest of the diagnostics for all the DESIRED nebulae. We use the P parameter, P = I([O III] 4959+5007) / [I([O III] 4959+5007) + I([O II] 3726+3729)] (Pilyugin 2001), as a proxy of the ionization degree of the gas.
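For reference, this proxy reduces to a simple ratio of summed dereddened intensities; the sketch below evaluates it for illustrative (non-DESIRED) line intensities.

```python
# Sketch: the ionization-degree proxy used throughout the figures,
# P = I([O III] 4959+5007) / (I([O III] 4959+5007) + I([O II] 3726+3729)),
# computed from illustrative dereddened line intensities.
def p_parameter(i_oiii_4959, i_oiii_5007, i_oii_3726, i_oii_3729):
    oiii = i_oiii_4959 + i_oiii_5007
    oii = i_oii_3726 + i_oii_3729
    return oiii / (oiii + oii)

print(p_parameter(1.4, 4.2, 1.0, 1.3))  # -> ~0.71, a moderately high ionization degree
```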
Figure 3. Comparison between the density derived from the [S II] 6731/6716 line intensity ratio and from the rest of the diagnostics, including the average density estimated from the adopted criteria (bottom panel). The solid line represents a 1:1 linear relation. Down arrows indicate upper limits when the value is at the low-density limit (ne < 100 cm^-3).
6.1 Te([O II]), Te([S II]), and Te([N II])
Figure 4. Comparison between the density derived from the [Cl III] 5538/5518 line intensity ratio and those from [Ar IV] 4740/4711 and [Fe III] 4702/4658. The symbol code is the same as in Fig. 3.
We derive Te([O II]), Te([S II]) and Te([N II]) adopting the density criteria mentioned in Section 5, actually criteria (i) or (ii) in most cases. The adoption of ne([S II] 6731/6716) or ne([O II] 3726/3729) is the standard procedure for the analysis of extragalactic H II regions and therefore our results can be directly compared with other works. Fig. 5 shows the comparison between Te([O II]), Te([S II]) and Te([N II]) derived with the standard procedure of adopting ne([S II] 6731/6716) and ne([O II] 3726/3729) as the representative electron density. The blue lines correspond to the best linear fits, which are presented in Table 1. Despite the quality of the data, there are a few outlier regions: NGC 5471, H 37 (Esteban et al. 2020), N 66A (Domínguez-Guzmán et al. 2022) and H II-2 (López-Sánchez et al. 2007), with very high values of Te([S II]), and NGC 2363 (Esteban et al. 2009), with an extremely high value of Te([O II]). These regions may have particular physical phenomena or some non-identified contamination in the auroral lines, such as those described previously. Although they are included in Fig. 5, they are not considered in the linear fits shown in Table 1.
Figure 5. Relations between Te([O II]), Te([S II]) and Te([N II]) derived by adopting ne([S II] 6731/6716) and ne([O II] 3726/3729) in the extragalactic H II regions of the sample. The color of the points represents the value of their P parameter (Pilyugin 2001, see text), which can be used as a proxy of the ionization degree of the nebulae. The blue solid line represents the linear fit to the data. The black solid line represents a 1:1 linear relation.
The first pair of ions shows better consistency between their respective Te values, as shown in the bottom panel of Fig. 5. This result suggests that the difference in the ionization structure alone does not explain the differences between Te([S II]), Te([O II]) and Te([N II]). It is sometimes argued in the literature that part of the optical [S II] emission can originate in the photodissociation region (PDR), where H and He are mostly neutral, and that ne([S II]) should therefore be discarded. This argument has also been found together with the adoption of ne([S II] 6731/6716) as a valid density estimator, even for the entire nebula (e.g. Esteban et al. 2020). It is clear that this cannot be an explanation of the differences between Te([S II]) and Te([N II]) since, although there may be some S+ in the volume where H and He are neutral, the emission of the [S II] lines requires numerous collisions with free electrons, which can only be supplied in sufficient quantities by the ionization of H and He (O'Dell et al. 2023). Therefore, the [S II] emission should arise from the ionized volume and the surrounding areas of the ionization front. Exceptions may appear when there are shocks in the ionization front.
Figure 6. Comparison of the impact of recombination contributions on Te([O II]) and Te([N II]). These predictions are based on the photoionization models described in Section 4. The color of the symbols represents the value of their P parameter (as in Fig. 5). The black solid line represents a 1:1 linear relation. In the case of recombination contributions, Te([O II]) > Te([N II]) for most cases.
Figure 7. Comparison of the derived Te([S II]) without recombination contributions with the hypothetical case of recombination contributions in proportion to those of [O II]. These predictions are based on the photoionization models described in Section 4. The color of the symbols represents the value of their P parameter (as in Fig. 5). The black solid line represents a 1:1 linear relation. The recombination contributions are negligible for Te([S II]) > 7000 K.
Given the above, if the difference between Te([O II]) and Te([N II]) were actually produced by recombinations, it would increase as a function of the intensity of the O II RLs. However, Fig. 8 demonstrates that Te([O II]) − Te([N II]) and the intensity of the O II V1 RLs (adopted from Méndez-Delgado et al.) do not correlate, discarding significant recombination contributions. Most importantly, if Te([O II]) were affected by recombinations, the close 1:1 relation shown by Te([O II]) and Te([S II]) in Fig. 5 would be difficult to explain.
Figure 8. Te([O II]) − Te([N II]) difference as a function of the intensity of the O II recombination multiplet V1. The color of the symbols represents the value of their P parameter (as in Fig. 5).
Peimbert (1967) introduced the formalism of internal temperature inhomogeneities in ionized nebulae, quantified by the root mean square temperature fluctuation parameter (t^2). In the presence of such fluctuations in the volume where S+, O+ and N+ coexist, we would expect Te([O II]) ≥ Te([N II]) ≥ Te([S II]), as a consequence of the different excitation energies of the atomic levels involved (see equation 15 of Peimbert 1967). However, this is not the case in the observed trends shown in Fig. 5, in agreement with the recent results by Méndez-Delgado et al. (2023).
Figure 10. Comparison between the average ne obtained from the [O II] 7319+20+30+31/3726+29 and [S II] 4069+76/6716+31 line intensity ratios and ne([S II] 6731/6716) for extragalactic H II regions. The color of the symbols represents the value of their P parameter (as in Fig. 5). The black solid line represents a 1:1 linear relation.
If the high-density gas is in the range 10^2 cm^-3 < ne < 10^4 cm^-3, or if the density is higher but occupies a small fraction of the total ionized volume, Te([N II]) may remain unaffected, in contrast to what happens with Te([O II]) and Te([S II]).
Figure 11. Comparison between the average ne obtained from the [O II] 7319+20+30+31/3726+29 and [S II] 4069+76/6716+31 line intensity ratios and the adopted density for high-density nebulae (ne([S II] 6731/6716) > 1000 cm^-3). The black solid line represents a 1:1 linear relation.
If we use the [O II] 7319+20+30+31/3726+29 and [S II] 4069+76/6716+31 line intensity ratios as density diagnostics instead of temperature ones, by cross-matching them with Te([N II]) (getCrossTemDen of PyNeb can be used), we obtain densities that are consistent with each other and systematically larger than ne([S II] 6731/6716) (or ne([O II] 3726/3729)), as shown in Fig. 10. ne([S II] 6731/6716) underestimates the density by ∼300 cm^-3 on average, even when ne([S II] 6731/6716) < 10^2 cm^-3. If Te([N II]) is adopted, this underestimate of ne has a small impact on the calculation of chemical abundances based on optical CELs, except when the [S II] and [O II] auroral lines are used. However, the underestimate of the density is relevant in the case of ionized gas pressure determinations and for the correct interpretation of the properties that depend on this quantity. We remark that the presence of high-density inclusions and the underestimate of the density by ne([S II] 6731/6716) and ne([O II] 3726/3729) (see
This biases ne([S II] 6731/6716) or ne([O II] 3726/3729), therefore affecting the reliability of further calculations involving these parameters. A possible solution would be the use of [O II] 7319+20+30+31/3726+29 and [S II] 4069+76/6716+31 as density indicators, together with [N II] 5755/6584 to determine the temperature. Another conclusion of the discussion carried out so far is that the use of Te([O II]) and Te([S II]) should be avoided when Te([N II]) is available.
Figure 12. Comparison between the O+ abundance derived with [O II] auroral and nebular lines. Upper panel: the physical conditions adopted are Te([N II]) and the average of ne([S II] 6731/6716) and ne([O II] 3726/3729). Bottom panel: the physical conditions adopted are Te([N II]) and the average of ne([S II] 4069+76/6716+31) and ne([O II] 7319+20+30+31/3726+29). The color of the symbols represents the value of their P parameter (as in Fig. 5). The black solid line represents a 1:1 linear relation. The relation between both quantities is tighter in the bottom panel.
Méndez-Delgado et al. (2023, see their fig. 2 and equation (4)) derive a tight linear relation between Te([N II]) and the average temperature of the high ionization volume, T0(O2+), a parameter that can be used to estimate the O/H ratio without the bias induced by temperature inhomogeneities, which does affect the abundances determined using Te([O III]). Nevertheless, Te([N II]) is usually very difficult to determine in faint low-metallicity H II regions and the only available temperature diagnostic is often Te([O III]). In such cases, it is possible to use the relations presented in Table 2.
Figure 13. Temperature relations of the DESIRED extragalactic H II regions. Top panels: Te([N II]) (left) and Te([S III]) (right) as a function of Te([O III]). Middle panels: the Te([O III])-Te([Ar III]) relation (left) and the Te([N II])-Te([S III]) relation (right). Bottom panels: the Te([N II])-Te([Ar III]) relation (left) and the Te([S III])-Te([Ar III]) relation (right). The solid blue line represents the linear fit to the data. The dashed and dotted lines indicate the model predictions of Garnett (1992) and the BOND models (Vale Asari et al. 2016), respectively. The red solid line in the upper left panel represents a second-degree polynomial fit.
We confirm a departure from a linear fit in the Te([O III]) vs. Te([N II]) relationship, which is more prominent in regions of lower metallicity. This is consistent with the presence of larger temperature inhomogeneities in the high ionization volume of these systems, as Méndez-Delgado et al. (2023) propose in a recent study. A similar departure from a linear fit also seems to be present in the Te([Ar III]) vs. Te([N II]) and Te([S III]) vs. Te([N II]) relationships of the DESIRED spectra of extragalactic H II regions.
Table 2. The DESIRED temperature relations for extragalactic H II regions. In each block, x denotes the independent variable; where listed, σ is given in K and N is the number of regions.

x = Te([O III]):
  Te([N II]) = −1.11 × 10^-5 x^2 + 1.06 x + 760
  Te([N II]) = −5.19 × 10^-5 x^2 + 1.71 x − 1680
  Te([S III]) = 0.95 x + 510                      (σ = 200)
  Te([S III]) = 1.06(±0.10) x − 640(±1070)        (σ = 640, N = 27)
  Te([Ar III]) = 0.95 x + 560                     (σ = 250)
  Te([Ar III]) = 0.78(±0.09) x + 2120(±1000)      (σ = 580, N = 17)

x = Te([S III]):
  Te([N II]) = 0.92 x + 1010                      (σ = 470)
  Te([N II]) = 0.43(±0.06) x + 5820(±500)         (σ = 330, N = 15)
  Te([Ar III]) = 0.99 x + 50                      (σ = 70)
  Te([Ar III]) = 0.58(±0.09) x + 4020(±930)       (σ = 380, N = 11)
  Te([O III]) = 1.04 x − 480                      (σ = 210)
  Te([O III]) = 0.95(±0.10) x + 600(±950)         (σ = 830, N = 27)

x = Te([N II]):
  Te([S III]) = 1.05 x − 750                      (σ = 510)
  Te([S III]) = 2.31(±0.24) x − 13470(±2570)      (σ = 920, N = 15)
  Te([Ar III]) = 1.05 x − 710                     (σ = 480)
  Te([Ar III]) = 1.57(±0.24) x − 6070(±2570)      (σ = 790, N = 17)
  Te([O III]) = 1.09 x − 1190                     (σ = 660)
  Te([O III]) = 1.55(±0.09) x − 5620(±860)        (σ = 710, N = 42)
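As a simple illustration of how the relations above can be applied, the sketch below evaluates both quadratic Te([N II])-Te([O III]) relations for a measured Te([O III]); which of the two sets of coefficients corresponds to the DESIRED fit (rather than to a model prediction) is not asserted here, and the input temperature is an illustrative value.

```python
# Evaluate the two quadratic Te([N II])-Te([O III]) relations quoted in
# Table 2 for an illustrative Te([O III]) measurement of 12000 K.
def te_nii_a(te_oiii):
    return -1.11e-5 * te_oiii**2 + 1.06 * te_oiii + 760.

def te_nii_b(te_oiii):
    return -5.19e-5 * te_oiii**2 + 1.71 * te_oiii - 1680.

te_oiii = 12000.  # K, illustrative value
print(te_nii_a(te_oiii), te_nii_b(te_oiii))  # ~11882 K and ~11366 K
```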
The database contains 29380 emission line detections, associated with 2486 transitions of 148 ionic species. Of that total number of detections, 8715 are forbidden lines, 18986 are permitted ones, and 1679 remain unidentified or have doubtful identifications. Of the detected permitted lines, 7836 are associated with metals. A total of 851 forbidden lines correspond to Te-sensitive auroral transitions ([O II] 7319+20+30+31, [S II] 4069+76, [N II] 5755, [S III] 6312, [Ar III] 5192 and [O III] 4363) that can be used for Te determinations.
To derive ne, we test the [S II] 6731/6716, [O II] 3726/3729, [Cl III] 5538/5518, [Fe III] 4658/4702 and [Ar IV] 4740/4711 line intensity ratios. To solve the statistical equilibrium equations, we use PyNeb 1.1.13 (Luridiana et al. 2015; Juan de Dios & Rodríguez 2017; Morisset et al. 2020; Juan de Dios & Rodríguez 2021; Mendoza et al. 2023). After detecting and discarding discrepant data sets, Morisset et al. (2020) and Mendoza et al. (2023) estimate uncertainties of ∼10 per cent in the radiative atomic rates for ions like [O II], [S II], [Fe III], [Cl III] and [Ar IV].
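As a minimal sketch (not the DESIRED pipeline itself), the following shows how PyNeb's Atom.getTemDen can be used to derive ne from the [S II] 6731/6716 ratio and Te from the [N II] 5755/6584 ratio; the input line ratios are illustrative values only.

```python
# Minimal PyNeb sketch: n_e from [S II] 6731/6716 and T_e from [N II] 5755/6584.
import pyneb as pn

S2 = pn.Atom('S', 2)
N2 = pn.Atom('N', 2)

# n_e from I([S II] 6731)/I(6716), assuming T_e = 10000 K
ne_SII = S2.getTemDen(int_ratio=1.05, tem=10000., wave1=6731, wave2=6716)

# T_e from I([N II] 5755)/I(6584), at the density just derived
te_NII = N2.getTemDen(int_ratio=0.012, den=ne_SII, wave1=5755, wave2=6584)

print(f"n_e([S II]) ~ {float(ne_SII):.0f} cm-3, T_e([N II]) ~ {float(te_NII):.0f} K")
```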
On the other hand, at fixed temperature, the [O II] 7319+20+30+31/3726+29 and [S II] 4069+76/6716+31 ratios remain sensitive to density up to much higher values. The first notable result is that [O II] 3726/3729 and [S II] 6731/6716 are equivalent diagnostics in terms of sensitivity, without significant differences. This figure also shows that [Cl III] 5538/5518, [Fe III] 4658/4702 and [Ar IV] 4740/4711 are not sensitive diagnostics when ne < 10^3 cm^-3. However, beyond this threshold, the aforementioned diagnostics become comparatively more and more sensitive, since [S II] 6731/6716 loses sensitivity. In contrast, [O III] 88 μm/51 μm shows higher sensitivity when ne < 10^3 cm^-3, but beyond this value its sensitivity decreases to a greater extent than that of [S II] 6731/6716. When ne ≈ 10^5.3 cm^-3, Δ(I(6716)/I(6731))/Δne ≈ 0, inducing an asymptote.
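A small numerical sketch of this behaviour, computing the [S II] ratio on a density grid with PyNeb emissivities (the grid, temperature and tolerance-free derivative are illustrative choices, not the authors' procedure):

```python
# Sketch: the [S II] I(6716)/I(6731) ratio versus density at T_e = 10000 K;
# its derivative with respect to log(n_e) becomes very small near
# log(n_e) ~ 5.3, the asymptote mentioned above.
import numpy as np
import pyneb as pn

S2 = pn.Atom('S', 2)
log_ne = np.linspace(0., 6., 301)
ne = 10.**log_ne
tem = np.full_like(ne, 1.0e4)

e6716 = S2.getEmissivity(tem=tem, den=ne, wave=6716, product=False)
e6731 = S2.getEmissivity(tem=tem, den=ne, wave=6731, product=False)
ratio = e6716 / e6731

sens = np.gradient(ratio, log_ne)        # change of the ratio per dex in n_e
i = np.argmin(np.abs(log_ne - 5.3))
print(f"I(6716)/I(6731) at log n_e = 5.3: {ratio[i]:.2f}; "
      f"d(ratio)/dlog n_e: {sens[i]:.3f}")
```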
The comparison of the [S II] 6731/6716 values and those of [Cl III] 5538/5518, [Fe III] 4702/4658 and [Ar IV] 4740/4711 is presented in Fig. 3. The [Cl III] 5538/5518, [Fe III] 4702/4658 and [Ar IV] 4740/4711 line ratios become good density indicators for ne > 10^3 cm^-3, showing higher sensitivity than [S II] 6731/6716 (see Fig. 2). For this range of ne, Fig. 3 shows a general offset between [S II] 6731/6716 and the rest of the aforementioned diagnostics. This is due to the combination of two phenomena: [S II] 6731/6716 is more sensitive to areas of lower density within the nebulae, while the rest of the indicators behave inversely. Furthermore, as density increases, for ne > 10^4 cm^-3, the accuracy of [S II] 6731/6716 decreases, amplifying the size of the error bars. It is noticeable that [Fe III],
(ii) If 100 cm^-3 < ne([S II]) < 1000 cm^-3, we adopt the average value of ne([S II]) and ne([O II]). (iii) If ne([S II]) > 1000 cm^-3, we take the average values of ne([S II]), ne([O II]), ne([Cl III]), ne([Fe III]) and ne([Ar IV]), when available. (iv) For the HH objects we adopt ne([Fe III]), while in the case of the proplyd 170-337 we adopt the reference value derived from the [S II] 4069/4076 line intensity ratio. A schematic implementation of these criteria is sketched below.
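The helper below is a hypothetical sketch (not part of the DESIRED code) applying criteria (ii)-(iv) as quoted above; criterion (i), which covers the low-density end, and the proplyd 170-337 special case are only stubbed or omitted here. All densities are in cm^-3, with None for unavailable diagnostics.

```python
# Hypothetical helper applying the density-adoption criteria (ii)-(iv).
import numpy as np

def adopt_density(ne_SII, ne_OII=None, ne_ClIII=None, ne_FeIII=None,
                  ne_ArIV=None, is_HH=False, ne_FeIII_maxlike=None):
    if is_HH:
        # (iv) photoionized HH objects: adopt the [Fe III]-based density
        return ne_FeIII_maxlike if ne_FeIII_maxlike is not None else ne_FeIII
    if ne_SII <= 100.:
        # (i) low-density regime: handled as described earlier in the text
        return np.nan
    if ne_SII < 1000.:
        # (ii) average of the [S II] and [O II] densities
        return np.mean([v for v in (ne_SII, ne_OII) if v is not None])
    # (iii) average of all available high-density diagnostics
    vals = [v for v in (ne_SII, ne_OII, ne_ClIII, ne_FeIII, ne_ArIV)
            if v is not None]
    return np.mean(vals)

print(adopt_density(450., ne_OII=600.))   # criterion (ii) -> 525.0
```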
regions, relevant to some phenomena such as stellar feedback (e.g. McLeod et al. 2020; Barnes et al. 2021).
Figure 2. Left panel: dependence of different line intensity ratios on the electron density, ne, considering Te = 10000 K and the atomic data from Table A7. The line intensity ratios have been normalized to the expected values at ne = 1 cm^-3. Right panel: comparison between the density sensitivity of the different line intensity ratios and that of [S II] 6716/6731, considering Te = 10000 K. The density sensitivity is defined as Δ(I1/I2)/Δne. When ne ≈ 10^5.3 cm^-3, Δ(I(6716)/I(6731))/Δne ≈ 0, inducing an asymptote.
(Figure 2 axes: log(ne) versus (I1/I2)/(I1/I2)_{ne=1} in the left panel and log[(ΔI1/I2/Δne)/(ΔI6716/I6731/Δne)] in the right panel; the ratios shown are [O II] 3729/3726, [O II] 3727+/7325+, [S II] 6716/6731, [S II] 6720+/4072+, [Cl III] 5518/5538, [Fe III] 4658/4701, [Ar IV] 4711/4740 and [O III] 88 μm/51 μm.)
Criterion (ii) is based on the fact that [Cl III] 5538/5518, [Fe III] 4702/4658 and [Ar IV] 4740/4711 are quite insensitive to densities smaller than 1000 cm^-3. In the presence of high-density inclusions within the nebulae, the densities adopted under this criterion are underestimated, as are those of criterion (i). This will be demonstrated in Section 6.1. The impact of such an underestimate is rather limited in optical studies, being constrained to up to ∼0.1 dex when using [O II] 7319+20+30+31 to estimate the O+/H+ abundance. However, this can introduce large systematic errors when using IR fine-structure CELs, where a density underestimate of ∼300 cm^-3 can affect Te determinations by several thousand Kelvin (see fig. 3 of Lamarche et al. 2022). Criterion (iii) allows us to obtain more precise values of the electron density. Although the use of ne([S II] 6731/6716) or ne([O II]
In this range of densities, Te([N II] 5755/6584) depends appreciably on ne. Therefore, having large error bars in ne gives rise to inaccurate values of Te([N II] 5755/6584) and, finally, of the ionic abundances. Criterion (iv) is applied to photoionized HHs because indicators based on [Fe III] lines are sensitive to very high densities, but also because the destruction of Fe-bearing dust particles by shocks enhances the emission of the [Fe III] lines. In these cases, we adopt the values obtained with a maximum-likelihood procedure using several [Fe III] lines. This method provides values fully consistent with ne([Fe III] 4702/4658). In Fig. 3, we can see that density determinations based on [Fe III] lines, although showing larger error bars, are marginally consistent with ne([S II] 6731/6716) in most of the cases, except for HH 514, the proplyd 170-337 (Méndez-Delgado et al. 2022) and NGC 7027
However, this is rarely satisfied observationally in extragalactic H II regions (Pérez-Montero & Díaz 2003; Kennicutt et al. 2003), and it is generally assumed that Te([O II]) ≈ Te([S II]) ≈ Te([N II]). This is also predicted by the BOND models (Sec. 4).
Te([O II]) and Te([S II]) are estimated from the [O II] 7319+20+30+31/3726+29 and [S II] 4069+76/6716+31 line intensity ratios, respectively. In the case of the latter line intensity ratio, the [S II] auroral lines can be blended with O II RLs, and there is a large wavelength separation between the nebular and auroral lines. Moreover, [O II] 7319+20+30+31 can be contaminated by telluric emissions.

Table 1. Linear fits between Te([O II]), Te([S II]) and Te([N II]). σ represents the standard deviation between the linear fit and the calculated temperature values; N is the number of regions considered.

Linear fit (K)                                        σ (K)   N
Te([O II]) = 1.60(±0.20) Te([N II]) − 4270(±1870)     1210    32
Te([N II]) = 0.62(±0.08) Te([O II]) + 2660(±840)       950    32
Te([S II]) = 1.57(±0.17) Te([N II]) − 4290(±1620)      900    30
Te([N II]) = 0.64(±0.07) Te([S II]) + 2740(±740)       730    30
Te([S II]) = 1.03(±0.11) Te([O II]) − 1050(±1180)     1280    39
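As a quick worked example of the fits quoted in Table 1 (the input temperature is illustrative, not a DESIRED measurement):

```python
# Worked example of a Table 1 fit: Te([N II]) = 0.62 Te([O II]) + 2660 K.
te_oii = 12000.                       # K, illustrative measurement
te_nii = 0.62 * te_oii + 2660.        # -> 10100 K
print(f"Te([N II]) ~ {te_nii:.0f} K (sigma ~ 950 K from Table 1)")
```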
Te([O II]) and Te([S II]) are higher than Te([N II]) for most values of this last parameter, in agreement with previous findings (Esteban et al. 2009; Bresolin et al. 2009; Rogers et al. 2021). It should be noted that Te([O II]) − Te([N II]) and Te([S II]) − Te([N II]) increase as a function of temperature, as shown by the fit parameters given in Table 1. On the other hand, Te([S II]) versus Te([O II]) follows an almost 1:1 relation, with a slight offset towards higher Te([O II]) values.
We show that, when the recombination contribution is relevant, the measured [O II] 7319+20+30+31/3726+29 line intensity ratio tends to be comparatively more enhanced than [N II] 5755/6548+84. This implies that, if there are recombination contributions (dielectronic plus radiative) to the [O II] and [N II] CELs, we would expect Te([O II]) > Te([N II]) in most cases. To date, there is no evidence of relevant recombination contributions to the [S II] CELs. Furthermore, there is also a lack of calculations of effective recombination coefficients for this ion. However, potential recombination contributions to the [S II] CELs
The [O II] 7319+20+30+31/3726+29 and [S II] 4069+76/6716+31 ratios are density sensitive over practically the entire range 10^2 cm^-3 < ne < 10^6 cm^-3, a wider interval than that covered by [S II] 6731/6716, [O II] 3726/3729, [Cl III] 5538/5518, [Fe III] 4702/4658 and [Ar IV] 4740/4711. If there are density inhomogeneities in the nebulae, these line ratios would give higher densities than those derived from [S II] 6731/6716 or [O II] 3726/3729. This behavior is illustrated in Fig. 9, where it can be seen that Te([N II]) is insensitive to density up to ∼10^4 cm^-3, two orders of magnitude beyond the case of Te([O II]) or Te([S II]). The presence of high-density clumps biases [S II] 6731/6716 and [O II] 3726/3729 towards lower values of ne, and this would impact the Te([N II]) determination to a smaller extent than Te([O II]) and Te([S II]).
Figure 9. Density dependence of the [O II] 7319+20+30+31/3726+29, [S II] 4069+76/6716+31 and [N II] 5755/6548+84 line intensity ratios, commonly used to infer Te([O II]), Te([S II]) and Te([N II]), respectively. We have assumed Te = 10000 K. The line intensity ratios have been normalized to the expected values at ne = 1 cm^-3.
In the upper panel of Fig. 12, we compare the O+ abundance derived from the [O II] auroral and nebular lines using Te([N II]) and the average of ne([S II] 6731/6716) and ne([O II] 3726/3729). In the figure, we can see that the O+/H+ ratio derived with the [O II] auroral lines is up to ∼0.1 dex higher, on average. In the bottom panel of Fig. 12, the physical conditions adopted are Te([N II]) and the average of ne([S II] 4069+76/6716+31) and ne([O II] 7319+20+30+31/3726+29), and the relation between both quantities is tighter.
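The following is an illustrative sketch (not the DESIRED pipeline) of such a comparison with PyNeb's getIonAbundance; the line intensities are placeholders, and the I(i,j) level-summing expressions assume PyNeb's standard level ordering for O+ (nebular lines from levels 2-3, auroral lines from levels 4-5).

```python
# Sketch: O+/H+ from the [O II] nebular (3726+3729) and auroral
# (7319+20+30+31) blends for the same T_e but two density choices,
# to see the effect of an underestimated n_e on the auroral-based abundance.
import pyneb as pn

O2 = pn.Atom('O', 2)
nebular = 'I(2,1)+I(3,1)'                 # [O II] 3729 + 3726
auroral = 'I(4,2)+I(4,3)+I(5,2)+I(5,3)'   # [O II] 7319+20+30+31

# Illustrative dereddened intensities relative to I(Hbeta) = 100
I_neb, I_aur, te = 150.0, 4.0, 9000.0

for ne in (150.0, 500.0):                 # low vs. higher adopted density (cm-3)
    ab_neb = O2.getIonAbundance(I_neb, tem=te, den=ne, to_eval=nebular, Hbeta=100.0)
    ab_aur = O2.getIonAbundance(I_aur, tem=te, den=ne, to_eval=auroral, Hbeta=100.0)
    print(f"n_e = {ne:.0f}: O+/H+ (nebular) = {float(ab_neb):.2e}, "
          f"(auroral) = {float(ab_aur):.2e}")
```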
Fig. 13 also includes temperature relations involving the uncommon Te([Ar III]), derived from the [Ar III] 5192/7135 intensity ratio. DESIRED contains the largest collection of Te([Ar III]) determinations for H II regions. Considering that the ionization conditions of Ar2+ are different from those of S2+ or O2+, we cannot strictly say that Te([Ar III]) is also representative of the same ionization volume where S2+ or O2+ lie. In the middle left panel of Fig. 13, we can see that Te([Ar III]) follows a rather linear relationship with Te([O III]) for the spectra of extragalactic H II regions. The lower left panel of Fig. 13 shows that the behavior of the Te([Ar III]) vs. Te([N II]) relationship has a certain similarity to the Te([O III]) vs. Te([N II]) one. The two objects with the highest Te show some deviation towards larger Te([Ar III]) values. This agrees with the results obtained by Méndez-Delgado et al. (2023), who find that Te([Ar III]) and Te([O III]) seem to be affected by t^2 in a similar way.
We demonstrate that ne([S II] 6731/6716) and ne([O II] 3726/3729) are biased towards lower densities in extragalactic H II regions due to the presence of density inhomogeneities and the non-linear sensitivity of these indicators. This is inferred from the behavior of the [O II] 7319+20+30+31/3726+29 and [S II] 4069+76/6716+31 intensity ratios, commonly used to compute Te([O II]) and Te([S II]), respectively. When Te([O II]) and Te([S II]), derived adopting ne([S II] 6731/6716) and ne([O II] 3726/3729), are compared with Te([N II] 5755/6584), they show systematic trends that cannot be explained by observational errors, mismatches between the ionization volumes, recombination contributions or temperature fluctuations, but are explained by the presence of an inhomogeneous density structure. The sensitivity of [O II] 7319+20+30+31/3726+29 and [S II] 4069+76/6716+31 to higher densities (10^2 cm^-3 < ne < 10^6 cm^-3) makes them better diagnostics than ne([S II] 6731/6716) or ne([O II] 3726/3729) when they are cross-correlated with Te([N II]), since they are sensitive to the presence of high-density clumps. In the analysis of extragalactic H II regions, the density underestimate of ne([S II] 6731/6716) or ne([O II] 3726/3729) is of ∼300 cm^-3 on average, even if the aforementioned diagnostics give values consistent with the low-density limit (< 100 cm^-3). The implications of this underestimate for the calculation of chemical abundances from optical spectra are rather small, being constrained to up to ∼0.1 dex when O+ abundances are estimated with the [O II] 7319+20+30+31 CELs. However, the density underestimate is critical for studies based on infrared fine-structure CELs. For instance, [O III] 88 μm decreases its emissivity by ∼40 per cent when ne changes from 200 cm^-3 to 500 cm^-3, implying an increase of the derived chemical abundances of ∼70 per cent. Density diagnostics in the infrared such as [O III] 88 μm/52 μm are likely to suffer a bias towards lower densities, even to a greater extent than ne([S II] 6731/6716) or ne([O II] 3726/3729), due to their different sensitivity ranges (see Fig. 2). Finally, we present the temperature relations for the DESIRED extragalactic H II regions considering the Te-sensitive [N II] 5755/6584, [O III] 4363/5007, [Ar III] 5192/7135 and [S III] 6312/9069 intensity ratios. The availability of such a number of different Te diagnostics permits us to calculate chemical abundances considering the stratification of temperature at different ionization volumes.
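The density dependence of the [O III] 88 μm emissivity mentioned above can be checked with a short PyNeb sketch; the temperature is an illustrative value, and the 88 μm line is assumed to be the 3P1-3P0 transition (lev_i = 2, lev_j = 1 in PyNeb's level indexing).

```python
# Sketch: density dependence of the [O III] 88 micron line emissivity.
import pyneb as pn

O3 = pn.Atom('O', 3)
em_200 = O3.getEmissivity(tem=8000., den=200., lev_i=2, lev_j=1)
em_500 = O3.getEmissivity(tem=8000., den=500., lev_i=2, lev_j=1)
print(f"emissivity ratio j(500)/j(200) = {float(em_500 / em_200):.2f}")
# A ratio below 1 reflects the collisional de-excitation of the 88 um line;
# the derived O2+/H+ scales inversely with the adopted emissivity.
```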
for her help regarding the handling of the BOND photoionization models. JEM-D, OE and KK gratefully acknowledge funding from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in the form of an Emmy Noether Research Group (grant number KR4598/2-1, PI Kreckel). CE and JG-R acknowledge support from the Agencia Estatal de Investigación del Ministerio de Ciencia e Innovación (AEI-MCINN) under grant Espectroscopía de campo integral de regiones H II locales. Modelos para el estudio de regiones H II extragalácticas with reference 10.13039/501100011033. JG-R acknowledges support from an Advanced Fellowship under the Severo Ochoa excellence program CEX2019-000920-S. JG-R and VG-LL acknowledge financial support from the Canarian Agency for Research, Innovation and Information Society (ACIISI), of the Canary Islands Government, and the European Regional Development Fund (ERDF), under grant with reference ProID2021010074. CE, JG-R and VG-LL acknowledge support under grant P/308614 financed by funds transferred from the Spanish Ministry of Science, Innovation and Universities, charged to the General State Budgets and with funds transferred from the General Budgets of the Autonomous Community of the Canary Islands by the MCIU.
Table A1. DESIRED Photoionized Herbig-Haro objects and proplyds.
Object            | Spectrograph | Telescope | Reference
HH202S            | UVES         | VLT       | Mesa-Delgado et al. (2009)
HH204             |              |           | Méndez-Delgado et al. (2021b)
HH514I (jet base) |              |           | Méndez-Delgado et al. (2022)
HH514II (knot)    |              |           |
HH529II           |              |           | Méndez-Delgado et al. (2021a)
HH529III          |              |           |
Proplyd 170-337   |              |           | Méndez-Delgado et al. (2022)
Table A2. DESIRED Galactic Ring Nebulae.
Object   | Zone                    | Ionizing star type | Spectrograph | Telescope      | Reference
G2.4+1.4 | A1                      | WO2                | OSIRIS       | GTC            | Esteban et al. (2016)
         | A2                      |                    | MagE         | Clay Telescope |
         | A3                      |                    | OSIRIS       | GTC            |
         | A4                      |                    |              |                |
         | A5                      |                    |              |                |
NGC 6888 | A1                      | WN6                | OSIRIS       | GTC            | Esteban et al. (2016)
         | A2                      |                    |              |                |
         | A3                      |                    |              |                |
         | A4                      |                    |              |                |
         | A5                      |                    |              |                |
         | A6                      |                    |              |                |
NGC 7635 | A1                      | O6.5f              | OSIRIS       | GTC            | Esteban et al. (2016)
         | A2                      |                    |              |                |
         | A3                      |                    |              |                |
         | A4                      |                    |              |                |
         | A5                      |                    |              |                |
         | A6                      |                    |              |                |
RCW 52   | 10:46:02.5 −58:38:05    | O8                 | MagE         | Clay Telescope | Esteban et al. (2016)
RCW 58   | 11:06:00.1 −65:32:06    | WN8                |              |                |
Sh 2-298 | 07:18:28.10 −13:17:19.7 | WN4                | UVES         | VLT            | Esteban et al. (2017)
Sh 2-308 | 06:53:02.2 −23:53:30    | WN4                | MagE         | Clay Telescope | Esteban et al. (2016)
Table A3. DESIRED Extragalactic H II regions.
Nebula                 | Galaxy          | Spectrograph | Telescope | Reference
He 2-10                | He 2-10         | UVES         | VLT       | Esteban et al. (2014b)
30 Doradus             | LMC             | UVES         | VLT       | Peimbert (2003)
IC 2111a               |                 |              |           | Domínguez-Guzmán et al. (2022)
N11Bb                  |                 |              |           |
N44Cb                  |                 |              |           |
NGC 1714a              |                 |              |           |
BA289                  | M31             | OSIRIS       | GTC       | Esteban et al. (2020)
BA310                  |                 |              |           |
BA371                  |                 |              |           |
BA374                  |                 |              |           |
BA379                  |                 |              |           |
K160                   |                 |              |           |
K703                   |                 |              |           |
K932b                  |                 | HIRES        | KECK      | Esteban et al. (2009)
B2011 b5               | M33             | UVES         | VLT       | Toribio San Cipriano et al. (2016)
B2011 b15              |                 |              |           |
BCLMP 29               |                 |              |           |
BCLMP 88w              |                 |              |           |
BCLMP 290              |                 |              |           |
BCLMP 626              |                 |              |           |
IC 131                 |                 |              |           |
IC 132                 |                 |              |           |
NGC 588                |                 |              |           |
LGC H II3              |                 |              |           |
LGC H II11             |                 |              |           |
NGC 595                |                 | HIRES        | KECK      | Esteban et al. (2009)
NGC 604                |                 |              |           |
Nucleus 13             | M83             | HIRES        | KECK      | Esteban et al. (2009)
H37                    | M101            | OSIRIS       | GTC       | Esteban et al. (2020)
H219                   |                 |              |           |
H681                   |                 |              |           |
H1146                  |                 |              |           |
H1216                  |                 |              |           |
H1118                  |                 |              |           |
NGC 5447               |                 |              |           |
NGC 5455               |                 |              |           |
NGC 5462               |                 |              |           |
NGC 5471               |                 |              |           |
SDH323                 |                 |              |           |
NGC 5447               |                 | ISIS         | WHT       | Esteban et al. (2009)
H1013g                 |                 | HIRES        | KECK      |
NGC 5461               |                 |              |           |
Mrk 1271               | Mrk 1271        | UVES         | VLT       | Esteban et al. (2014b)
R2                     | NGC 300         | UVES         | VLT       | Toribio San Cipriano et al. (2016)
R5                     |                 |              |           |
R14                    |                 |              |           |
R20                    |                 |              |           |
R23                    |                 |              |           |
R27                    |                 |              |           |
R76a                   |                 |              |           |
Zone C                 | NGC 1741        | HIRES        | KECK      | Esteban et al. (2009)
NGC 2363 (Mrk 71)      | NGC 2366        |              |           |
VS 24d                 | NGC 2403        | HIRES        | KECK      | Esteban et al. (2009)
VS 38d                 |                 |              |           |
VS 44d                 |                 |              |           |
NGC 3125               | NGC 3125        | UVES         | VLT       | Esteban et al. (2014b)
Region 70              | NGC 4395        | HIRES        | KECK      | Esteban et al. (2009)
Brightest H II region  | NGC 4861        |              |           |
H II-1                 | NGC 5253        | UVES         | VLT       | López-Sánchez et al. (2007)
H II-2                 |                 |              |           |
UV-1                   |                 |              |           |
UV-2                   |                 |              |           |
NGC 5408               | NGC 5408        | UVES         | VLT       | Esteban et al. (2014b)
NGC 6822               | NGC 6822        |              |           |
POX 4                  | POX 4           |              |           |
SDSS J1253-0312        | SDSS J1253-0312 |              |           |
N66Ab                  | SMC             | UVES         | VLT       | Domínguez-Guzmán et al. (2022)
N81b                   |                 |              |           |
N88Ab                  |                 |              |           |
N90b                   |                 |              |           |
Tol 1457-262           | Tol 1457-262    | UVES         | VLT       | Esteban et al. (2014b)
Tol 1924-416           | Tol 1924-416    |              |           |
With the concept of "deep spectrum" we mean a long-exposure time spectrum with a high signal-to-noise ratio where the main purpose is the detection of weak emission lines, such as auroral CELs or RLs.
In this context, permitted and forbidden transitions are considered independently. For instance, [O III] and O II are counted as different ionic species.
The spectra of Sharpee et al. (2003, 2007) were taken between 2001 and 2003.
https://sites.google.com/site/mexicanmillionmodels/
APPENDIX A
This paper has been typeset from a TEX/LATEX file prepared by the author.
. A Amayo, G Delgado-Inglada, G Stasińska, 10.1093/mnras/stab1467MNRAS. 5052361Amayo A., Delgado-Inglada G., Stasińska G., 2021, MNRAS, 505, 2361
The Messenger. I Appenzeller, 941Appenzeller I., et al., 1998, The Messenger, 94, 1
. K Z Arellano-Córdova, M Rodríguez, 10.1093/mnras/staa1759MNRAS. 497672Arellano-Córdova K. Z., Rodríguez M., 2020, MNRAS, 497, 672
. K Z Arellano-Córdova, C Esteban, J García-Rojas, J E Méndez-Delgado, 10.1093/mnras/staa3903MNRAS. 502225Arellano-Córdova K. Z., Esteban C., García-Rojas J., Méndez-Delgado J. E., 2021, MNRAS, 502, 225
. J A Baldwin, M M Phillips, R Terlevich, 10.1086/130766PASP. 935Baldwin J. A., Phillips M. M., Terlevich R., 1981, PASP, 93, 5
. J A Baldwin, G J Ferland, P G Martin, M R Corbin, S A Cota, B M Peterson, A Slettebak, 10.1086/170146ApJ. 374580Baldwin J. A., Ferland G. J., Martin P. G., Corbin M. R., Cota S. A., Peterson B. M., Slettebak A., 1991, ApJ, 374, 580
. J A Baldwin, 10.1086/310245ApJ. 468115Baldwin J. A., et al., 1996, ApJ, 468, L115
. P Ballester, A Modigliani, O Boitquin, S Cristiani, R Hanuschik, A Kaufer, S Wolf, The Messenger10131Ballester P., Modigliani A., Boitquin O., Cristiani S., Hanuschik R., Kaufer A., Wolf S., 2000, The Messenger, 101, 31
. M J Barlow, X W Liu, D Péquignot, P J Storey, Y G Tsamis, C Morisset, Kwok S., Dopita M., Sutherland R.209373Planetary Nebulae: Their Evolution and Role in the Universe. p.Barlow M. J., Liu X. W., . Péquignot D., Storey P. J., Tsamis Y. G., Morisset C., 2003, in Kwok S., Dopita M., Sutherland R., eds, Vol. 209, Planetary Nebulae: Their Evolution and Role in the Universe. p. 373
. A T Barnes, 10.1093/mnras/stab2958MNRAS. 5085362Barnes A. T., et al., 2021, MNRAS, 508, 5362
. M A Bautista, V Fivet, C Ballance, P Quinet, G Ferland, C Mendoza, T R Kallman, 10.1088/0004-637X/808/2/174ApJ. 808174Bautista M. A., Fivet V., Ballance C., Quinet P., Ferland G., Mendoza C., Kallman T. R., 2015, ApJ, 808, 174
. D A Berg, E D Skillman, K V Croxall, R W Pogge, J Moustakas, M Johnson-Groh, 10.1088/0004-637X/806/1/16ApJ. 80616Berg D. A., Skillman E. D., Croxall K. V., Pogge R. W., Moustakas J., Johnson-Groh M., 2015, ApJ, 806, 16
. D A Berg, R W Pogge, E D Skillman, K V Croxall, J Moustakas, N S J Rogers, J Sun, 10.3847/1538-4357/ab7eabApJ. 89396Berg D. A., Pogge R. W., Skillman E. D., Croxall K. V., Moustakas J., Rogers N. S. J., Sun J., 2020, ApJ, 893, 96
Instrument Design and Performance for Optical/Infrared Ground-based Telescopes. R Bernstein, S A Shectman, S M Gunnels, S Mochnacki, A E Athey, 10.1117/12.461502Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. Iye M., Moorwood A. F. M.4841Bernstein R., Shectman S. A., Gunnels S. M., Mochnacki S., Athey A. E., 2003, in Iye M., Moorwood A. F. M., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 4841, In- strument Design and Performance for Optical/Infrared Ground-based Telescopes. pp 1694-1704, doi:10.1117/12.461502
. K J Borkowski, J P Harrington, Z Tsvetanov, R E S Clegg, 10.1086/187029ApJ. 41547Borkowski K. J., Harrington J. P., Tsvetanov Z., Clegg R. E. S., 1993, ApJ, 415, L47
. F Bresolin, W Gieren, R.-P Kudritzki, G Pietrzyński, M A Urbaneja, G Carraro, 10.1088/0004-637X/700/1/309ApJ. 700309Bresolin F., Gieren W., Kudritzki R.-P., Pietrzyński G., Urbaneja M. A., Carraro G., 2009, ApJ, 700, 309
. J J Bryant, 10.1093/mnras/stu2635MNRAS. 4472857Bryant J. J., et al., 2015, MNRAS, 447, 2857
. K Bundy, 10.1088/0004-637X/798/1/7ApJ. 7987Bundy K., et al., 2015, ApJ, 798, 7
. K Butler, C J Zeippen, A&A. 208337Butler K., Zeippen C. J., 1989, A&A, 208, 337
. A Campbell, R Terlevich, J Melnick, 10.1093/mnras/223.4.811MNRAS. 223811Campbell A., Terlevich R., Melnick J., 1986, MNRAS, 223, 811
Optical and IR Telescope Instrumentation and Detectors. J Cepa, 10.1117/12.395520Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. Iye M., Moorwood A. F.4008Cepa J., et al., 2000, in Iye M., Moorwood A. F., eds, Society of Photo- Optical Instrumentation Engineers (SPIE) Conference Series Vol. 4008, Optical and IR Telescope Instrumentation and Detectors. pp 623-631, doi:10.1117/12.395520
Instrument Design and Performance for Optical/Infrared Ground-based Telescopes. J Cepa, 10.1117/12.460913Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. Iye M., Moorwood A. F. M.4841Cepa J., et al., 2003, in Iye M., Moorwood A. F. M., eds, Society of Photo- Optical Instrumentation Engineers (SPIE) Conference Series Vol. 4841, Instrument Design and Performance for Optical/Infrared Ground-based Telescopes. pp 1739-1749, doi:10.1117/12.460913
. R L M Corradi, J García-Rojas, D Jones, P Rodríguez-Gil, 10.1088/0004-637X/803/2/99ApJ. 80399Corradi R. L. M., García-Rojas J., Jones D., Rodríguez-Gil P., 2015, ApJ, 803, 99
. K V Croxall, R W Pogge, D A Berg, E D Skillman, J Moustakas, 10.3847/0004-637X/830/1/4ApJ. 8304Croxall K. V., Pogge R. W., Berg D. A., Skillman E. D., Moustakas J., 2016, ApJ, 830, 4
Discoveries and Research Prospects from 8-to 10-Meter-Class Telescopes. S D'odorico, S Cristiani, H Dekker, V Hill, A Kaufer, T Kim, F Primas, 10.1117/12.390133Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. Bergeron J.4005D'Odorico S., Cristiani S., Dekker H., Hill V., Kaufer A., Kim T., Primas F., 2000, in Bergeron J., ed., Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 4005, Discoveries and Re- search Prospects from 8-to 10-Meter-Class Telescopes. pp 121-130, doi:10.1117/12.390133
. G Delgado-Inglada, A Mesa-Delgado, J García-Rojas, M Rodríguez, C Esteban, 10.1093/mnras/stv2961MNRAS. 4563855Delgado-Inglada G., Mesa-Delgado A., García-Rojas J., Rodríguez M., Es- teban C., 2016, MNRAS, 456, 3855
H L Dinerstein, 10.1007/978-94-009-0595-5_10Astrophysics and Space Science Library. Thronson Harley A. J., Shull J. M.161The Interstellar Medium in GalaxiesDinerstein H. L., 1990, in Thronson Harley A. J., Shull J. M., eds, Astro- physics and Space Science Library Vol. 161, The Interstellar Medium in Galaxies. pp 257-285, doi:10.1007/978-94-009-0595-5_10
. G Domínguez-Guzmán, M Rodríguez, J García-Rojas, C Esteban, L Toribio San Cipriano, 10.1093/mnras/stac2974MNRAS. 5174497Domínguez-Guzmán G., Rodríguez M., García-Rojas J., Esteban C., Toribio San Cipriano L., 2022, MNRAS, 517, 4497
. E Emsellem, 10.1051/0004-6361/202141727A&A. 659191Emsellem E., et al., 2022, A&A, 659, A191
. A V Escudero, R D D Costa, W J Maciel, 10.1051/0004-6361:20031625A&A. 414211Escudero A. V., Costa R. D. D., Maciel W. J., 2004, A&A, 414, 211
. J N Espíritu, A Peimbert, 10.1093/mnras/stab2746MNRAS. 5082668Espíritu J. N., Peimbert A., 2021, MNRAS, 508, 2668
. C Esteban, J García-Rojas, 10.1093/mnras/sty1168MNRAS. 4782315Esteban C., García-Rojas J., 2018, MNRAS, 478, 2315
. C Esteban, M Peimbert, J García-Rojas, M T Ruiz, A Peimbert, M Rodríguez, 10.1111/j.1365-2966.2004.08313.xMNRAS. 355229Esteban C., Peimbert M., García-Rojas J., Ruiz M. T., Peimbert A., Ro- dríguez M., 2004, MNRAS, 355, 229
. C Esteban, F Bresolin, M Peimbert, J García-Rojas, A Peimbert, A Mesa-Delgado, 10.1088/0004-637X/700/1/654ApJ. 700654Esteban C., Bresolin F., Peimbert M., García-Rojas J., Peimbert A., Mesa- Delgado A., 2009, ApJ, 700, 654
. C Esteban, L Carigi, M V F Copetti, J García-Rojas, A Mesa-Delgado, H O Castañeda, D Péquignot, 10.1093/mnras/stt730MNRAS. 433382Esteban C., Carigi L., Copetti M. V. F., García-Rojas J., Mesa-Delgado A., Castañeda H. O., Péquignot D., 2013, MNRAS, 433, 382
. C Esteban, J García-Rojas, A Mesa-Delgado, L Toribio San Cipriano, 10.1002/asna.201312002Astronomische Nachrichten. 33573Esteban C., García-Rojas J., Mesa-Delgado A., Toribio San Cipriano L., 2014a, Astronomische Nachrichten, 335, 73
. C Esteban, J García-Rojas, L Carigi, M Peimbert, F Bresolin, A R López-Sánchez, A Mesa-Delgado, 10.1093/mnras/stu1177MNRAS. 443624Esteban C., García-Rojas J., Carigi L., Peimbert M., Bresolin F., López- Sánchez A. R., Mesa-Delgado A., 2014b, MNRAS, 443, 624
. C Esteban, A Mesa-Delgado, C Morisset, J García-Rojas, 10.1093/mnras/stw1243MNRAS. 4604038Esteban C., Mesa-Delgado A., Morisset C., García-Rojas J., 2016, MNRAS, 460, 4038
. C Esteban, X Fang, J García-Rojas, L Toribio San Cipriano, 10.1093/mnras/stx1624471987MN-RASEsteban C., Fang X., García-Rojas J., Toribio San Cipriano L., 2017, MN- RAS, 471, 987
. C Esteban, F Bresolin, J García-Rojas, L Toribio San Cipriano, 10.1093/mnras/stz3134MNRAS. 4912137Esteban C., Bresolin F., García-Rojas J., Toribio San Cipriano L., 2020, MNRAS, 491, 2137
. X Fang, P J Storey, X W Liu, 10.1051/0004-6361/201116511A&A. 53018Fang X., Storey P. J., Liu X. W., 2011, A&A, 530, A18
. X Fang, P J Storey, X W Liu, 10.1051/0004-6361/201116511eA&A. 5502Fang X., Storey P. J., Liu X. W., 2013, A&A, 550, C2
. G J Ferland, Rev. Mex. Astron. Astrofis. 53385Ferland G. J., et al., 2017, Rev. Mex. Astron. Astrofis., 53, 385
. S Fritzsche, B Fricke, D Geschke, A Heitmann, J E Sienkiewicz, 10.1086/307328ApJ. 518994Fritzsche S., Fricke B., Geschke D., Heitmann A., Sienkiewicz J. E., 1999, ApJ, 518, 994
Atomic Data and Nuclear Data Tables. Froese Fischer, C Tachiev, G , 10.1016/j.adt.2004.02.001871Froese Fischer C., Tachiev G., 2004, Atomic Data and Nuclear Data Tables, 87, 1
Atomic Data and Nuclear Data Tables. Froese Fischer, C Tachiev, G Irimia, A , 10.1016/j.adt.2006.03.00192607Froese Fischer C., Tachiev G., Irimia A., 2006, Atomic Data and Nuclear Data Tables, 92, 607
. M E Galavis, C Mendoza, C J Zeippen, A&AS. 111347Galavis M. E., Mendoza C., Zeippen C. J., 1995, A&AS, 111, 347
. J García-Rojas, C Esteban, M Peimbert, M Rodríguez, M T Ruiz, A Peimbert, 10.1086/421909ApJS. 153501García-Rojas J., Esteban C., Peimbert M., Rodríguez M., Ruiz M. T., Peim- bert A., 2004, ApJS, 153, 501
. J García-Rojas, C Esteban, A Peimbert, M Peimbert, M Rodríguez, M T Ruiz, 10.1111/j.1365-2966.2005.09302.xMNRAS. 362301García-Rojas J., Esteban C., Peimbert A., Peimbert M., Rodríguez M., Ruiz M. T., 2005, MNRAS, 362, 301
. J García-Rojas, C Esteban, M Peimbert, M T Costado, M Rodríguez, A Peimbert, M T Ruiz, 10.1111/j.1365-2966.2006.10105.xMNRAS. 368253García-Rojas J., Esteban C., Peimbert M., Costado M. T., Rodríguez M., Peimbert A., Ruiz M. T., 2006, MNRAS, 368, 253
. J García-Rojas, C Esteban, A Peimbert, M Rodríguez, M Peimbert, M T Ruiz, Rev. Mex. Astron. Astrofis. 433García-Rojas J., Esteban C., Peimbert A., Rodríguez M., Peimbert M., Ruiz M. T., 2007, Rev. Mex. Astron. Astrofis., 43, 3
. J García-Rojas, M Peña, C Morisset, A Mesa-Delgado, M T Ruiz, 10.1051/0004-6361/201118217A&A. 53854García-Rojas J., Peña M., Morisset C., Mesa-Delgado A., Ruiz M. T., 2012, A&A, 538, A54
. J García-Rojas, S Simón-Díaz, C Esteban, 10.1051/0004-6361/201424660A&A. 57193García-Rojas J., Simón-Díaz S., Esteban C., 2014, A&A, 571, A93
. J García-Rojas, S Madonna, V Luridiana, N C Sterling, C Morisset, G Delgado-Inglada, L Toribio San Cipriano, 10.1093/mnras/stv1415MNRAS. 4522606García-Rojas J., Madonna S., Luridiana V., Sterling N. C., Morisset C., Delgado-Inglada G., Toribio San Cipriano L., 2015, MNRAS, 452, 2606
. J García-Rojas, R L M Corradi, H Monteiro, D Jones, P Rodríguez-Gil, A Cabrera-Lavers, 10.3847/2041-8205/824/2/L27ApJ. 82427García-Rojas J., Corradi R. L. M., Monteiro H., Jones D., Rodríguez-Gil P., Cabrera-Lavers A., 2016, ApJ, 824, L27
. J García-Rojas, G Delgado-Inglada, D A García-Hernández, F Dell'agli, M Lugaro, A I Karakas, M Rodríguez, 10.1093/mnras/stx2519MNRAS. 4734476García-Rojas J., Delgado-Inglada G., García-Hernández D. A., Dell'Agli F., Lugaro M., Karakas A. I., Rodríguez M., 2018, MNRAS, 473, 4476
. J García-Rojas, C Morisset, D Jones, R Wesson, H M J Boffin, H Monteiro, R L M Corradi, P Rodríguez-Gil, 10.1093/mnras/stab3523MNRAS. 5105444García-Rojas J., Morisset C., Jones D., Wesson R., Boffin H. M. J., Monteiro H., Corradi R. L. M., Rodríguez-Gil P., 2022, MNRAS, 510, 5444
. D R Garnett, 10.1086/116146AJ. 1031330Garnett D. R., 1992, AJ, 103, 1330
. M F R Grieve, C A Ramsbottom, C E Hudson, F P Keenan, 10.1088/0004-637X/780/1/110ApJ. 780110Grieve M. F. R., Ramsbottom C. A., Hudson C. E., Keenan F. P., 2014, ApJ, 780, 110
. B Groves, 10.1093/mnras/stad114MNRAS. 5204902Groves B., et al., 2023, MNRAS, 520, 4902
. N G Guseva, Y I Izotov, G Stasińska, K J Fricke, C Henkel, P Papaderos, 10.1051/0004-6361/201016291A&A. 529149Guseva N. G., Izotov Y. I., Stasińska G., Fricke K. J., Henkel C., Papaderos P., 2011, A&A, 529, A149
. G F Hägele, E Pérez-Montero, Á I Díaz, E Terlevich, R Terlevich, 10.1111/j.1365-2966.2006.10856.xMNRAS. 372293Hägele G. F., Pérez-Montero E., Díaz Á. I., Terlevich E., Terlevich R., 2006, MNRAS, 372, 293
. G F Hägele, Á I Díaz, E Terlevich, R Terlevich, E Pérez-Montero, M V Cardaci, 10.1111/j.1365-2966.2007.12527.xMNRAS. 383209Hägele G. F., Díaz Á. I., Terlevich E., Terlevich R., Pérez-Montero E., Cardaci M. V., 2008, MNRAS, 383, 209
. G Haro, 10.1086/145576ApJ. 115572Haro G., 1952, ApJ, 115, 572
. W J Henney, C R O'dell, 10.1086/301087AJ. 1182350Henney W. J., O'Dell C. R., 1999, AJ, 118, 2350
. G H Herbig, 10.1086/145232ApJ. 11111Herbig G. H., 1950, ApJ, 111, 11
. I T Ho, 10.1093/mnras/stz649MNRAS. 4853569Ho I. T., 2019, MNRAS, 485, 3569
. A Irimia, Froese Fischer, C , 10.1238/Physica.Regular.071a00172Phys. Scr. 71172Irimia A., Froese Fischer C., 2005, Phys. Scr., 71, 172
. Y I Izotov, T X Thuan, G Stasińska, 10.1086/513601ApJ. 66215Izotov Y. I., Thuan T. X., Stasińska G., 2007, ApJ, 662, 15
. D Jones, R Wesson, J García-Rojas, R L M Corradi, H M J Boffin, 10.1093/mnras/stv2519MNRAS. 4553263Jones D., Wesson R., García-Rojas J., Corradi R. L. M., Boffin H. M. J., 2016, MNRAS, 455, 3263
P R Jorden, 10.1117/12.19163Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. Crawford D. L.1235Instrumentation in Astronomy VIIJorden P. R., 1990, in Crawford D. L., ed., Society of Photo-Optical Instru- mentation Engineers (SPIE) Conference Series Vol. 1235, Instrumenta- tion in Astronomy VII. pp 790-798, doi:10.1117/12.19163
. L Juan De Dios, M Rodríguez, 10.1093/mnras/stx916MNRAS. 4691036Juan de Dios L., Rodríguez M., 2017, MNRAS, 469, 1036
. L Juan De Dios, M Rodríguez, 10.1093/mnras/stab2488MNRAS. 5075331Juan de Dios L., Rodríguez M., 2021, MNRAS, 507, 5331
. G Kauffmann, 10.1111/j.1365-2966.2003.07154.xMNRAS. 3461055Kauffmann G., et al., 2003, MNRAS, 346, 1055
. V Kaufman, J Sugar, 10.1063/1.555775Journal of Physical and Chemical Reference Data. 15321Kaufman V., Sugar J., 1986, Journal of Physical and Chemical Reference Data, 15, 321
. Kennicutt Robert, C J Bresolin, F Garnett, D R , 10.1086/375398ApJ. 591801Kennicutt Robert C. J., Bresolin F., Garnett D. R., 2003, ApJ, 591, 801
. L J Kewley, M A Dopita, 10.1086/341326ApJS. 14235Kewley L. J., Dopita M. A., 2002, ApJS, 142, 35
. L J Kewley, S L Ellison, 10.1086/587500ApJ. 6811183Kewley L. J., Ellison S. L., 2008, ApJ, 681, 1183
. R Kisielius, P J Storey, G J Ferland, F P Keenan, 10.1111/j.1365-2966.2009.14989.xMNRAS. 397903Kisielius R., Storey P. J., Ferland G. J., Keenan F. P., 2009, MNRAS, 397, 903
. A Y Kniazev, S A Pustilnik, D B Zucker, 10.1111/j.1365-2966.2007.12540.xMNRAS. 3841045Kniazev A. Y., Pustilnik S. A., Zucker D. B., 2008, MNRAS, 384, 1045
. H A Kobulnicky, L J Kewley, 10.1086/425299ApJ. 617240Kobulnicky H. A., Kewley L. J., 2004, ApJ, 617, 240
. K Kreckel, 10.3847/1538-4357/ab5115ApJ. 88780Kreckel K., et al., 2019, ApJ, 887, 80
. C Lamarche, 10.3847/1538-4357/ac3b4fApJ. 925194Lamarche C., et al., 2022, ApJ, 925, 194
. E M Levesque, L J Kewley, K L Larson, 10.1088/0004-6256/139/2/712AJ. 139712Levesque E. M., Kewley L. J., Larson K. L., 2010, AJ, 139, 712
. X W Liu, P J Storey, M J Barlow, I J Danziger, M Cohen, M Bryce, 10.1046/j.1365-8711.2000.03167.xMNRAS. 312585Liu X. W., Storey P. J., Barlow M. J., Danziger I. J., Cohen M., Bryce M., 2000, MNRAS, 312, 585
. X W Liu, M J Barlow, Y Zhang, R J Bastin, P J Storey, 10.1111/j.1365-2966.2006.10283.xMNRAS. 3681959Liu X. W., Barlow M. J., Zhang Y., Bastin R. J., Storey P. J., 2006, MNRAS, 368, 1959
. Á R López-Sánchez, C Esteban, J García-Rojas, 10.1051/0004-6361:20053119A&A. 449997López-Sánchez Á. R., Esteban C., García-Rojas J., 2006, A&A, 449, 997
. Á R López-Sánchez, C Esteban, J García-Rojas, M Peimbert, M Rodríguez, 10.1086/510112ApJ. 656168López-Sánchez Á. R., Esteban C., García-Rojas J., Peimbert M., Rodríguez M., 2007, ApJ, 656, 168
. Á R López-Sánchez, M A Dopita, L J Kewley, H J Zahid, D C Nicholls, J Scharwächter, 10.1111/j.1365-2966.2012.21145.xMNRAS. 4262630López-Sánchez Á. R., Dopita M. A., Kewley L. J., Zahid H. J., Nicholls D. C., Scharwächter J., 2012, MNRAS, 426, 2630
. V Luridiana, C Morisset, R A Shaw, 10.1051/0004-6361/201323152A&A. 57342Luridiana V., Morisset C., Shaw R. A., 2015, A&A, 573, A42
. Madonna S García-Rojas, J Sterling, N C Delgado-Inglada, G Mesa-Delgado, A Luridiana, V Roederer, I U Mashburn, A L , 10.1093/mnras/stx15854711341MN-RASMadonna S., García-Rojas J., Sterling N. C., Delgado-Inglada G., Mesa- Delgado A., Luridiana V., Roederer I. U., Mashburn A. L., 2017, MN- RAS, 471, 1341
. R A Marino, 10.1051/0004-6361/201321956A&A. 559114Marino R. A., et al., 2013, A&A, 559, A114
Ground-based and Airborne Instrumentation for Astronomy II. J L Marshall, 10.1117/12.789972arXiv:0807.3774Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series. McLean I. S., Casali M. M.7014701454Marshall J. L., et al., 2008, in McLean I. S., Casali M. M., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 7014, Ground-based and Airborne Instrumentation for Astronomy II. p. 701454 (arXiv:0807.3774), doi:10.1117/12.789972
F Matteucci, 10.1007/978-3-642-41720-7_2arXiv:0804.1492Saas-Fee Advanced Course. Bland-Hawthorn J., Freeman K., Matteucci F.Saas-Fee Advanced Course37145Matteucci F., 2014, in Bland-Hawthorn J., Freeman K., Matteucci F., eds, Saas-Fee Advanced Course Vol. 37, Saas-Fee Advanced Course. p. 145 (arXiv:0804.1492), doi:10.1007/978-3-642-41720-7_2
. J M Mazzarella, T A Boroson, 10.1086/191754ApJS. 8527Mazzarella J. M., Boroson T. A., 1993, ApJS, 85, 27
. S S Mcgaugh, 10.1086/170569ApJ. 380140McGaugh S. S., 1991, ApJ, 380, 140
. A F Mcleod, 10.3847/1538-4357/ab6d63ApJ. 89125McLeod A. F., et al., 2020, ApJ, 891, 25
. J E Méndez-Delgado, C Esteban, J García-Rojas, K Z Arellano-Córdova, M Valerdi, 10.1093/mnras/staa1705MNRAS. 4962726Méndez-Delgado J. E., Esteban C., García-Rojas J., Arellano-Córdova K. Z., Valerdi M., 2020, MNRAS, 496, 2726
. J E Méndez-Delgado, C Esteban, J García-Rojas, W J Henney, A Mesa-Delgado, K Z Arellano-Córdova, 10.1093/mnras/stab068MNRAS. 5021703Méndez-Delgado J. E., Esteban C., García-Rojas J., Henney W. J., Mesa- Delgado A., Arellano-Córdova K. Z., 2021a, MNRAS, 502, 1703
. J E Méndez-Delgado, W J Henney, C Esteban, J García-Rojas, A Mesa-Delgado, K Z Arellano-Córdova, 10.3847/1538-4357/ac0cf5ApJ. 91827Méndez-Delgado J. E., Henney W. J., Esteban C., García-Rojas J., Mesa- Delgado A., Arellano-Córdova K. Z., 2021b, ApJ, 918, 27
. J E Méndez-Delgado, C Esteban, J García-Rojas, W J Henney, 10.1093/mnras/stac1300MNRAS. 514744Méndez-Delgado J. E., Esteban C., García-Rojas J., Henney W. J., 2022, MNRAS, 514, 744
. J E Méndez-Delgado, C Esteban, J García-Rojas, K Kreckel, M Peimbert, 10.1038/s41586-023-05956-2Nature. 2023Méndez-Delgado J. E., Esteban C., García-Rojas J., Kreckel K., Peimbert M., 2023, Nature
C Mendoza, IAU Symposium. Aller L. H.103Planetary NebulaeMendoza C., 1983, in Aller L. H., ed., IAU Symposium Vol. 103, Planetary Nebulae. pp 143-172
. C Mendoza, C J Zeippen, 10.1093/mnras/198.1.127MNRAS. 198127Mendoza C., Zeippen C. J., 1982, MNRAS, 198, 127
. C Mendoza, J E Méndez-Delgado, M Bautista, J García-Rojas, C Morisset, 10.3390/atoms11040063Atoms. 1163Mendoza C., Méndez-Delgado J. E., Bautista M., García-Rojas J., Morisset C., 2023, Atoms, 11, 63
. A Mesa-Delgado, C Esteban, J García-Rojas, V Luridiana, M Bautista, M Rodríguez, L López-Martín, M Peimbert, 10.1111/j.1365-2966.2009.14554.xMNRAS. 395855Mesa-Delgado A., Esteban C., García-Rojas J., Luridiana V., Bautista M., Rodríguez M., López-Martín L., Peimbert M., 2009, MNRAS, 395, 855
. B Metha, M Trenti, T Chu, 10.1093/mnras/stab2554MNRAS. 508489Metha B., Trenti M., Chu T., 2021, MNRAS, 508, 489
. C Morisset, G Delgado-Inglada, N Flores-Fajardo, Rev. Mex. Astron. Astrofis. 51103Morisset C., Delgado-Inglada G., Flores-Fajardo N., 2015, Rev. Mex. Astron. Astrofis., 51, 103
. C Morisset, V Luridiana, J García-Rojas, V Gómez-Llanos, M Bautista, Claudio Mendoza, 10.3390/atoms8040066Atoms. 866Morisset C., Luridiana V., García-Rojas J., Gómez-Llanos V., Bautista M., Mendoza Claudio 2020, Atoms, 8, 66
. S Noll, W Kausch, M Barden, A M Jones, C Szyszka, S Kimeswenger, J Vinther, 10.1051/0004-6361/201219040A&A. 54392Noll S., Kausch W., Barden M., Jones A. M., Szyszka C., Kimeswenger S., Vinther J., 2012, A&A, 543, A92
. C R O'dell, K Wong, 10.1086/117832AJ. 111846O'Dell C. R., Wong K., 1996, AJ, 111, 846
. C R O'dell, Z Wen, X Hu, 10.1086/172786ApJ. 410696O'Dell C. R., Wen Z., Hu X., 1993, ApJ, 410, 696
. C R O'dell, B Balick, A R Hajian, W J Henney, A Burkert, 10.1086/340726AJ. 1233329O'Dell C. R., Balick B., Hajian A. R., Henney W. J., Burkert A., 2002, AJ, 123, 3329
. C R O'dell, G J Ferland, J E Méndez-Delgado, 10.3847/1538-3881/ac9f44AJ. 16521O'Dell C. R., Ferland G. J., Méndez-Delgado J. E., 2023, AJ, 165, 21
Astrophysics of gaseous nebulae and active galactic nuclei. D E Osterbrock, G J Ferland, Osterbrock D. E., Ferland G. J., 2006, Astrophysics of gaseous nebulae and active galactic nuclei
. B E J Pagel, M G Edmunds, D E Blackwell, M S Chun, G Smith, 10.1093/mnras/189.1.95MNRAS. 18995Pagel B. E. J., Edmunds M. G., Blackwell D. E., Chun M. S., Smith G., 1979, MNRAS, 189, 95
. B E J Pagel, E A Simonson, R J Terlevich, M G Edmunds, 10.1093/mnras/255.2.325MNRAS. 255325Pagel B. E. J., Simonson E. A., Terlevich R. J., Edmunds M. G., 1992, MNRAS, 255, 325
. M A Peña-Guerrero, A Peimbert, M Peimbert, M T Ruiz, 10.1088/0004-637X/746/2/115ApJ. 746115Peña-Guerrero M. A., Peimbert A., Peimbert M., Ruiz M. T., 2012a, ApJ, 746, 115
. M A Peña-Guerrero, A Peimbert, M Peimbert, 10.1088/2041-8205/756/1/L14ApJ. 75614Peña-Guerrero M. A., Peimbert A., Peimbert M., 2012b, ApJ, 756, L14
. M Peimbert, 10.1086/149385ApJ. 150825Peimbert M., 1967, ApJ, 150, 825
Boletin de los Observatorios Tonantzintla y Tacubaya. M Peimbert, 629Peimbert M., 1971, Boletin de los Observatorios Tonantzintla y Tacubaya, 6, 29
. A Peimbert, 10.1086/345793ApJ. 584735Peimbert A., 2003, ApJ, 584, 735
. A Peimbert, M Peimbert, M T Ruiz, 10.1086/444557ApJ. 6341056Peimbert A., Peimbert M., Ruiz M. T., 2005, ApJ, 634, 1056
. M Peimbert, V Luridiana, A Peimbert, 10.1086/520571ApJ. 666636Peimbert M., Luridiana V., Peimbert A., 2007, ApJ, 666, 636
. M Peimbert, A Peimbert, G Delgado-Inglada, 10.1088/1538-3873/aa72c3PASP. 12982001Peimbert M., Peimbert A., Delgado-Inglada G., 2017, PASP, 129, 082001
. D Pequignot, P Petitjean, C Boisson, A&A. 251680Pequignot D., Petitjean P., Boisson C., 1991, A&A, 251, 680
. E Pérez-Montero, 10.1088/1538-3873/aa5abbPASP. 12943001Pérez-Montero E., 2017, PASP, 129, 043001
. E Pérez-Montero, A I Díaz, 10.1046/j.1365-2966.2003.07064.xMNRAS. 346105Pérez-Montero E., Díaz A. I., 2003, MNRAS, 346, 105
. L S Pilyugin, 10.1051/0004-6361:20010079A&A. 369594Pilyugin L. S., 2001, A&A, 369, 594
. L S Pilyugin, 10.1111/j.1365-2966.2006.11333.xMNRAS. 375685Pilyugin L. S., 2007, MNRAS, 375, 685
Pilyugin L. S., Grebel E. K., 2016, MNRAS, 457, 3678, doi:10.1093/mnras/stw238
Pilyugin L. S., Vílchez J. M., Thuan T. X., 2006, MNRAS, 370, 1928, doi:10.1111/j.1365-2966.2006.10618.x
Pilyugin L. S., Vílchez J. M., Thuan T. X., 2010, ApJ, 720, 1738, doi:10.1088/0004-637X/720/2/1738
Pilyugin L. S., Grebel E. K., Mattsson L., 2012, MNRAS, 424, 2316, doi:10.1111/j.1365-2966.2012.21398.x
Prantzos N., 2008, in Charbonnel C., Zahn J. P., eds, EAS Publications Series Vol. 32, EAS Publications Series. pp 311-356 (arXiv:0709.0833), doi:10.1051/eas:0832009
Quinet P., 1996, A&AS, 116, 573
Ramsbottom C. A., Bell K. L., 1997, Atomic Data and Nuclear Data Tables, 66, 65, doi:10.1006/adnd.1997.0741
Rauber A. B., Copetti M. V. F., Krabbe A. C., 2014, A&A, 563, A42, doi:10.1051/0004-6361/201323363
Richer M. G., Arrieta A., Arias L., Castañeda Carlos L., Torres-Peimbert S., López J. A., Galindo A., 2022, arXiv e-prints, p. arXiv:2210.05085
Rodríguez M., 2020, MNRAS, 495, 1016, doi:10.1093/mnras/staa1286
Rogers N. S. J., Skillman E. D., Pogge R. W., Berg D. A., Moustakas J., Croxall K. V., Sun J., 2021, ApJ, 915, 21, doi:10.3847/1538-4357/abf8b9
Rogers N. S. J., Skillman E. D., Pogge R. W., Berg D. A., Croxall K. V., Bartlett J., Arellano-Córdova K. Z., Moustakas J., 2022, ApJ, 939, 44, doi:10.3847/1538-4357/ac947d
Rubin R. H., 1986, ApJ, 309, 334, doi:10.1086/164606
Rubin R. H., 1989, ApJS, 69, 897, doi:10.1086/191330
Sánchez S. F., et al., 2012, A&A, 538, A8, doi:10.1051/0004-6361/201117353
Sánchez S. F., et al., 2015, A&A, 573, A105, doi:10.1051/0004-6361/201424950
Sharpee B., Williams R., Baldwin J. A., van Hoof P. A. M., 2003, ApJS, 149, 157, doi:10.1086/378321
Sharpee B., Zhang Y., Williams R., Pellegrini E., Cavagnolo K., Baldwin J. A., Phillips M., Liu X.-W., 2007, ApJ, 659, 1265, doi:10.1086/511665
Simón-Díaz S., García-Rojas J., Esteban C., Stasińska G., López-Sánchez A. R., Morisset C., 2011, A&A, 530, A57, doi:10.1051/0004-6361/201116608
Skillman E. D., Berg D. A., Pogge R. W., Moustakas J., Rogers N. S. J., Croxall K. V., 2020, ApJ, 894, 138, doi:10.3847/1538-4357/ab86ae
Sowicka P., Jones D., Corradi R. L. M., Wesson R., García-Rojas J., Santander-García M., Boffin H. M. J., Rodríguez-Gil P., 2017, MNRAS, 471, 3529, doi:10.1093/mnras/stx1697
Stasińska G., 1980, A&A, 85, 359
Stasińska G., 2005, A&A, 434, 507, doi:10.1051/0004-6361:20042216
Stasińska G., Richer M. G., McCall M. L., 1998, A&A, 336, 667
Stasińska G., Peña M., Bresolin F., Tsamis Y. G., 2013, A&A, 552, A12, doi:10.1051/0004-6361/201220345
Stevenson C. C., 1994, MNRAS, 267, 904, doi:10.1093/mnras/267.4.904
Storey P. J., Zeippen C. J., 2000, MNRAS, 312, 813, doi:10.1046/j.1365-8711.2000.03184.x
Storey P. J., Sochi T., Badnell N. R., 2014, MNRAS, 441, 3028, doi:10.1093/mnras/stu777
Storey P. J., Sochi T., Bastin R., 2017, MNRAS, 470, 379, doi:10.1093/mnras/stx1189
Tayal S. S., 2011, ApJS, 195, 12, doi:10.1088/0067-0049/195/2/12
Tayal S. S., Zatsarinny O., 2010, ApJS, 188, 32, doi:10.1088/0067-0049/188/1/32
Tinsley B. M., 1980, Fundamentals Cosmic Phys., 5, 287, doi:10.48550/arXiv.2203.02041
Tody D., 1993, in Hanisch R. J., Brissenden R. J. V., Barnes J., eds, Astronomical Society of the Pacific Conference Series Vol. 52, Astronomical Data Analysis Software and Systems II. p. 173
Toribio San Cipriano L., García-Rojas J., Esteban C., Bresolin F., Peimbert M., 2016, MNRAS, 458, 1866, doi:10.1093/mnras/stw397
Tremonti C. A., et al., 2004, ApJ, 613, 898, doi:10.1086/423264
Vale Asari N., Stasińska G., Morisset C., Cid Fernandes R., 2016, MNRAS, 460, 1739, doi:10.1093/mnras/stw971
Valerdi M., Peimbert A., Peimbert M., Sixtos A., 2019, ApJ, 876, 98, doi:10.3847/1538-4357/ab14e4
Verner E. M., Verner D. A., Baldwin J. A., Ferland G. J., Martin P. G., 2000, ApJ, 543, 831, doi:10.1086/317159
Vogt S. S., et al., 1994, in Crawford D. L., Craine E. R., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 2198, Instrumentation in Astronomy VIII. p. 362, doi:10.1117/12.176725
Wesson R., Jones D., García-Rojas J., Boffin H. M. J., Corradi R. L. M., 2018, MNRAS, 480, 4589, doi:10.1093/mnras/sty1871
Wiese W. L., Fuhr J. R., Deters T. M., 1996, Journal of Physical and Chemical Reference Data, Monograph 7, 403
York D. G., Jenkins E. B., Zucchino P., Lowrance J. L., Long D., Songaila A., 1981, in Geary J. C., Latham D. W., eds, Society of Photo-Optical Instrumentation Engineers (SPIE) Conference Series Vol. 290, p. 202, doi:10.1117/12.965861
Zhang H., 1996, A&AS, 119, 523
Zurita A., Florido E., Bresolin F., Pérez-Montero E., Pérez I., 2021, MNRAS, 500, 2359, doi:10.1093/mnras/staa2246
[Table A5. DESIRED Galactic H II regions — columns: Nebula, Zone, Spectrograph, Telescope, Reference. The tabulated regions include IC 5146 (93.8" from BD+46 3474, aperture A2; ISIS/WHT; García-Rojas et al. 2014), M16 (48" N and 40" W of BD-13 4930), M17 (300" S and 72" E of the center of BD-16 4819), M20 (17" N and 10" E of HD 164492), M43 (apertures 51.55" [A4], 68.05" [A5], and 84.55" [A6] from HD 37061; ISIS/WHT; Simón-Díaz et al. 2011), NGC 2579 (5" N of DENIS J082054.8-361258), NGC 3576 (24" N and 65" W of HD 97499), NGC 3603 (12" N and 116" E of HD 306201), the Orion Nebula and Orion Bar (cuts at 29.4", 75.5", and 154.3" from 1 Ori C; UVES/VLT), Sh 2-82 (19:30:22.80 +18:15:14.4), Sh 2-83 (19:24:52.55 +20:47:24.4), Sh 2-88B (19:46:46.51 −25:12:39.4), Sh 2-90 (19:49:09.33 +26:51:41.4), Sh 2-93 (19:54:58.17 +27:12:45.3), Sh 2-100 (20:02:00.69 +33:29:23.9), Sh 2-152 (22:58:40.80 +58:46:53.8), Sh 2-175 (00:27:47.93 64:42:00.3), Sh 2-209 (04:11:25.69 +51:14:33.8) (OSIRIS/GTC), and Sh 2-311 (126" S of HD 64315; UVES/VLT; García-Rojas et al. 2005). The reference column draws on García-Rojas et al. (2004, 2005, 2006, 2007, 2014), Esteban et al. (2004, 2013, 2017), Esteban & García-Rojas (2018), Mesa-Delgado et al. (2009), Delgado-Inglada et al. (2016), Méndez-Delgado et al. (2021a, 2021b, 2022), and Arellano-Córdova et al. (2021).]
| [] |
[
"A Self-Supervised Learning-based Approach to Clustering Multivariate Time- Series Data with Missing Values (SLAC-Time): An Application to TBI Phenotyping",
"A Self-Supervised Learning-based Approach to Clustering Multivariate Time- Series Data with Missing Values (SLAC-Time): An Application to TBI Phenotyping"
] | [
"Hamid Ghaderi \nDepartment of Systems and Industrial Engineering\nUniversity of Arizona\nTucsonAZUSA\n",
"Brandon Foreman \nCollege of Medicine\nUniversity of Cincinnati\nCincinnatiOHUSA\n",
"Amin Nayebi \nDepartment of Systems and Industrial Engineering\nUniversity of Arizona\nTucsonAZUSA\n",
"Sindhu Tipirneni \nDepartment of Computer Science\nVirginia Tech\nArlingtonVAUSA\n",
"Chandan K Reddy \nDepartment of Computer Science\nVirginia Tech\nArlingtonVAUSA\n",
"Vignesh Subbian \nDepartment of Systems and Industrial Engineering\nUniversity of Arizona\nTucsonAZUSA\n\nDepartment of Biomedical Engineering\nUniversity of Arizona\nTucsonAZUSA\n"
] | [
"Department of Systems and Industrial Engineering\nUniversity of Arizona\nTucsonAZUSA",
"College of Medicine\nUniversity of Cincinnati\nCincinnatiOHUSA",
"Department of Systems and Industrial Engineering\nUniversity of Arizona\nTucsonAZUSA",
"Department of Computer Science\nVirginia Tech\nArlingtonVAUSA",
"Department of Computer Science\nVirginia Tech\nArlingtonVAUSA",
"Department of Systems and Industrial Engineering\nUniversity of Arizona\nTucsonAZUSA",
"Department of Biomedical Engineering\nUniversity of Arizona\nTucsonAZUSA"
] | [] | Self-supervised learning approaches provide a promising direction for clustering multivariate time-series data. However, real-world time-series data often include missing values, and the existing approaches require imputing missing values before clustering, which may cause extensive computations and noise and result in invalid interpretations. To address these challenges, we present a Self-supervised Learning-based Approach to Clustering multivariate Time-series data with missing values (SLAC-Time). SLAC-Time is a Transformer-based clustering method that uses time-series forecasting as a proxy task for leveraging unlabeled data and learning more robust time-series representations. This method jointly learns the neural network parameters and the cluster assignments of the learned representations. It iteratively clusters the learned representations with the K-means method and then utilizes the subsequent cluster assignments as pseudo-labels to update the model parameters. To evaluate our proposed approach, we applied it to clustering and phenotyping Traumatic Brain Injury (TBI) patients in the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) study. Clinical data associated with TBI patients are often measured over time and represented as timeseries variables characterized by missing values and irregular time intervals. Our experiments * Corresponding author: [email protected] demonstrate that SLAC-Time outperforms the baseline K-means clustering algorithm in terms of silhouette coefficient, Calinski Harabasz index, Dunn index, and Davies Bouldin index. We identified three TBI phenotypes that are distinct from one another in terms of clinically significant variables as well as clinical outcomes, including the Extended Glasgow Outcome Scale (GOSE) score, Intensive Care Unit (ICU) length of stay, and mortality rate. The experiments show that the TBI phenotypes identified by SLAC-Time can be potentially used for developing targeted clinical trials and therapeutic strategies. | 10.1016/j.jbi.2023.104401 | [
"https://export.arxiv.org/pdf/2302.13457v2.pdf"
] | 258,847,479 | 2302.13457 | cada6709d8b9468aa5bd49a425e4f0a4c145872e |
A Self-Supervised Learning-based Approach to Clustering Multivariate Time- Series Data with Missing Values (SLAC-Time): An Application to TBI Phenotyping
Hamid Ghaderi
Department of Systems and Industrial Engineering
University of Arizona
TucsonAZUSA
Brandon Foreman
College of Medicine
University of Cincinnati
CincinnatiOHUSA
Amin Nayebi
Department of Systems and Industrial Engineering
University of Arizona
TucsonAZUSA
Sindhu Tipirneni
Department of Computer Science
Virginia Tech
ArlingtonVAUSA
Chandan K Reddy
Department of Computer Science
Virginia Tech
ArlingtonVAUSA
Vignesh Subbian
Department of Systems and Industrial Engineering
University of Arizona
TucsonAZUSA
Department of Biomedical Engineering
University of Arizona
TucsonAZUSA
A Self-Supervised Learning-based Approach to Clustering Multivariate Time- Series Data with Missing Values (SLAC-Time): An Application to TBI Phenotyping
Self-supervised learning; Clustering; Transformer; Multivariate time-series data; Traumatic brain injury
Self-supervised learning approaches provide a promising direction for clustering multivariate time-series data. However, real-world time-series data often include missing values, and the existing approaches require imputing missing values before clustering, which may cause extensive computations and noise and result in invalid interpretations. To address these challenges, we present a Self-supervised Learning-based Approach to Clustering multivariate Time-series data with missing values (SLAC-Time). SLAC-Time is a Transformer-based clustering method that uses time-series forecasting as a proxy task for leveraging unlabeled data and learning more robust time-series representations. This method jointly learns the neural network parameters and the cluster assignments of the learned representations. It iteratively clusters the learned representations with the K-means method and then utilizes the subsequent cluster assignments as pseudo-labels to update the model parameters. To evaluate our proposed approach, we applied it to clustering and phenotyping Traumatic Brain Injury (TBI) patients in the Transforming Research and Clinical Knowledge in Traumatic Brain Injury (TRACK-TBI) study. Clinical data associated with TBI patients are often measured over time and represented as timeseries variables characterized by missing values and irregular time intervals. Our experiments * Corresponding author: [email protected] demonstrate that SLAC-Time outperforms the baseline K-means clustering algorithm in terms of silhouette coefficient, Calinski Harabasz index, Dunn index, and Davies Bouldin index. We identified three TBI phenotypes that are distinct from one another in terms of clinically significant variables as well as clinical outcomes, including the Extended Glasgow Outcome Scale (GOSE) score, Intensive Care Unit (ICU) length of stay, and mortality rate. The experiments show that the TBI phenotypes identified by SLAC-Time can be potentially used for developing targeted clinical trials and therapeutic strategies.
Introduction
Multivariate time-series data are frequently observed in many healthcare domains where each patient is represented by a set of clinical measurements recorded over time and present important information spanning the whole course of a patient's care. Clustering approaches are commonly used to extract valuable information and patterns from multivariate time-series data [1]. Such clustering approaches can be broadly divided into two categories: raw data-based approaches and representation-based approaches [2]. Raw data-based approaches perform the clustering on raw input data using well-designed similarity measures that can address the specificities of the temporal dimension, including shifted or stretched patterns (e.g., [3][4][5]).
However, since all time points are included, raw data-based clustering approaches are highly susceptible to noise and outliers [2]. Representation-based clustering approaches, on the other hand, employ clustering methods on the representations learned from input time series, which mitigates the effects of noise and outliers in raw input data [2]. Representation learning techniques aim to eliminate the time dimension while preserving the relationship between nearby data points, or they aim to make the comparison more accurate by aligning time-series data with each other [6]. Deep learning architectures have strong representation-learning capabilities, making them useful for state-of-the-art supervised and unsupervised methods to learn the representations of time series for different downstream tasks [7]. Non-linear mappings can be learned using deep learning, allowing time-series data to be transformed into representations that are more suited for clustering [7]. However, deep learning models need to be trained via supervised learning that requires a large, annotated dataset [8]. Building such a dataset would be too costly or often not feasible in most clinical applications. Considering this, self-supervised learning is becoming more common for clustering multivariate time-series data [8], where a deep learning model is trained on an unlabeled time-series dataset by performing a proxy task, and then the learned representations are applied to the clustering task.
Self-supervision and missingness: Existing self-supervised learning-based methods for clustering multivariate time-series data are only effective in scenarios with no missing values, while multivariate time-series data are rarely complete due to a variety of reasons. There are three ways to address the missing values in multivariate time-series data: (1) omitting entire samples including missing data, (2) filling in the missing data using data imputation or interpolation methods, and (3) aggregating the irregularly sampled data into discrete time periods. Omitting the samples with missing data and performing the analysis just on the available observations is a straightforward method, but it does not perform well when the rate of missingness is high and/or when the samples are insufficient [6]. Data imputation is another solution that involves substituting new values for the ones that are missing. However, imputing missing values in multivariate time-series data without having domain knowledge about each time-series variable can lead to bias and invalid conclusions. Interpolation techniques are straightforward and commonly used in real-world settings to address missing values. These techniques, meanwhile, may not be able to capture complicated patterns of multivariate timeseries data since they do not consider correlated variables [6]. Additionally, when time-series data are sparser, interpolation methods often degrade by adding unwanted noise and additional complexity to the model. Another issue associated with multivariate time-series data is that they may include different time-series variables measured at irregular time intervals. Aggregating measurements into discrete time periods is a typical strategy for addressing irregular time intervals, but this method results in loss of granular information [9]. To address these challenges, we propose a novel Self-supervised Learning-based Approach to Clustering multivariate Time-series data with missing values (SLAC-Time) that does not rely on any data imputation or aggregation methods. We evaluate the proposed approach by applying it to the problem of clustering time-series data collected from acute traumatic brain injury (TBI) patients. TBI patients exhibit considerable variability in their clinical presentation, making it challenging to identify effective interventions [10,11]. However, by leveraging an advanced clustering technique, TBI patients can be stratified into distinct phenotypic groups with greater precision and reliability that would allow for targeted interventions or clinical studies [10]. Figure 1 shows a high-level overview of our work that addresses this problem.
Motivated by the shortcomings of the existing state-of-the-art clustering approaches, in this work, we make the following contributions:
• We propose a novel self-supervised learning-based clustering approach called SLAC-Time for clustering multivariate time-series data with missing values without using any data imputation or aggregation methods.
• We perform time-series forecasting as a proxy task for learning more robust representations of unlabeled multivariate time-series data.
• We demonstrate the ability of SLAC-Time in identifying reliable TBI patient phenotypes and their distinct baseline feature profiles using TBI clinicians' domain knowledge and different cluster validation methods.
The rest of this paper is organized as follows. In section 2, we review relevant work in TBI clustering and deep learning-based clustering of multivariate time-series data. Section 3 presents the problem formulation and provides a detailed description of SLAC-Time. Section 4 presents the implementation of SLAC-Time to cluster TBI patients, along with the validation of identified phenotypes using different internal and external validation methods. Finally, Section 5 concludes our work and suggests future directions for research.
Related work
In this section, we review existing methods for clustering TBI patients as well as state-of-theart self-supervised learning-based approaches to clustering multivariate time-series data.
Clustering TBI patients
Existing methods for clustering TBI patients are limited to using non-temporal features with a need for imputing missing values. Folweiler et al. [10] implemented a wrapper framework consisting of two stages. In the first stage, generalized low-rank models were used for selecting significant TBI variables. Then, in the second stage, the selected variables were used for clustering TBI patients into distinct phenotypes by a partitional clustering method. Multivariate imputation by chained equations (MICE) was used with the random forest method to impute missing values.
Si et al. [12] used a sparse hierarchical clustering method for subgrouping patients with mild TBI while using the MICE method to impute the missing values of the features. Yeboah et al. [13] proposed a framework for an explainable ensemble clustering model, including K-means, spectral, Gaussian, mixture, and agglomerative clustering methods to identify TBI phenotypes.
They excluded patients with missing data among key features and only included those records with less than 1% missing values and used imputation by mean. As one of the first efforts to cluster TBI patients without imputing missing values of the features, Akerlund et al. [14] developed an unsupervised learning method based on probabilistic graph models of TBI patients' early clinical and laboratory data. Despite clustering without data imputation methods, their cluster analysis approach does not incorporate time-series features that are commonly measured in the Intensive Care Unit (ICU).
Self-supervised learning-based approaches for clustering multivariate time-series data
Self-supervised learning-based approaches often include two stages for clustering multivariate time-series data: (1) learning feature vectors or representations of multivariate time-series data and (2) clustering the learned representations. These methods first convert the input multivariate time-series data into low-dimensional representations; then, the clustering techniques are applied to the learned representations. Tavakoli et al. [15] proposed a two-stage autoencoder-based approach to cluster time-series data with no labels and features. In the first stage of their proposed approach, descriptive metadata were captured as features.
Subsequently, K-means clustering method was applied to the extracted features to identify their cluster labels which are then utilized as the labels of input time series. Although this approach clusters time-series data based on their known and hidden non-linear features, it does not handle missingness in time-series data. In another work, Ma et al. [16] proposed a self-supervised timeseries clustering network (STCN), a clustering approach that simultaneously optimizes representation learning and clustering. In the representation learning module of this approach, a recurrent neural network (RNN) performed a one-step time-series prediction, and the parameters of the output layer were regarded as model-based representations. Then, these representations were supplied into a self-supervised learning module in which a linear classifier obtains pseudo-labels initialized by K-means. Furthermore, spectral analysis was performed to limit comparable representations to have the same pseudo-labels and match the predicted labels with pseudo-labels. Due to the lack of a strategy for discovering the correlation between the time-series variables, STCN can only be used for clustering univariate time-series data. Moreover, this method does not address missingness and irregular time intervals in time-series data, making it inappropriate for clustering clinical data that usually include multivariate time-series data with missing values. To address issues with clustering clinical data, Jong et al. [17] proposed a variational deep embedding with recurrence (VaDER) approach based on an extended Gaussian mixture variational autoencoder for clustering clinical multivariate time-series data with missing values. However, to handle missing values, they integrated a data imputation scheme into model training, which can result in unnecessary computations and noise.
Real-world time-series data often include missing values which can be an issue, especially in clinical data analysis [18]. Our literature review shows that related clustering approaches learn the representation of time series either when there is no missing value or when the missing values are imputed beforehand. To the best of our knowledge, there is no self-supervised learning-based clustering approach that handles missing values in multivariate time-series data without resorting to imputation methods.
Methods
Preliminaries
Representation-based clustering approaches necessitate the use of an effective representation learning technique that best suits the type of input data. SLAC-Time leverages a self-supervised Transformer model for time series (STraTS) to learn the representations of multivariate time-series data. It maps input multivariate time-series data with missing values into a fixed-dimensional vector space without resorting to data imputation or aggregation methods [9]. Therefore, unlike traditional methods, which treat each multivariate time series as a matrix with specific dimensions, our approach treats each multivariate time series as a set of observation triplets, avoiding the need for data imputation or aggregation. The Transformer-based architecture of STraTS uses self-attention to go from one token to another in a single step, allowing for parallel processing of observation triplets. Observation triplets are embedded using a novel Continuous Value Embedding (CVE) method, eliminating the necessity of binning continuous values before embedding them. By doing so, the fine-grained information that is lost when time is discretized is preserved. We denote the STraTS mapping by f_θ, with θ being the model's parameters, and use the term "representation" to refer to the vector that results from applying this mapping to a multivariate time series. Feature embeddings are produced by a simple lookup table, while value embeddings and time embeddings are obtained by one-to-many Feed-Forward Networks (FFNs) as follows [9]:
e_i^v = FFN^v(v_i)    (1)
e_i^t = FFN^t(t_i)    (2)
Both FFNs consist of one input neuron, d output neurons, a single hidden layer containing ⌊√d⌋ neurons, and a tanh(·) activation function. These networks can be represented as
FFN(x) = U^⊤ tanh(Wx + b)    (3)
where the dimensions of the weight parameters {W, b, U} are determined by the sizes of the hidden and output layers within the FFN. The initial embeddings {e_1, …, e_n} ∈ ℝ^d are processed by a Transformer [19] consisting of M blocks. Each block contains a Multi-Head Attention (MHA) layer with h attention heads and an FFN with one hidden layer. Each block's output serves as the input for the subsequent block, and the final block's output yields the contextual triplet embeddings {c_1, …, c_n} ∈ ℝ^d. Then, a self-attention layer is utilized to compute the time-series embedding e^T ∈ ℝ^d [9]. We also obtain the embedding of the non-temporal variables, e^D, by passing the non-temporal vector d_k through an FFN [9].
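To make the triplet-embedding step concrete, the following is a minimal Keras sketch of the one-to-many FFN used for Continuous Value Embedding; the class name `CVE`, the vocabulary size of the feature lookup table, and the helper `triplet_embeddings` are illustrative assumptions rather than the exact STraTS implementation.

```python
import math
import tensorflow as tf
from tensorflow.keras import layers

d = 32  # embedding dimension (Table 1)

class CVE(layers.Layer):
    """One-to-many FFN embedding a scalar (a value or a time stamp) into R^d:
    one input neuron, floor(sqrt(d)) tanh hidden units, d output neurons."""
    def __init__(self, d, **kwargs):
        super().__init__(**kwargs)
        self.hidden = layers.Dense(int(math.sqrt(d)), activation="tanh")
        self.out = layers.Dense(d, use_bias=False)

    def call(self, x):
        # x: (batch, n_triplets, 1) scalar inputs
        return self.out(self.hidden(x))

value_cve = CVE(d)
time_cve = CVE(d)
feature_lookup = layers.Embedding(input_dim=110, output_dim=d)  # vocabulary size is an assumption

def triplet_embeddings(times, variables, values):
    """e_i = e_i^f + e_i^v + e_i^t for every observation triplet in the batch."""
    return (feature_lookup(variables)
            + value_cve(values[..., None])
            + time_cve(times[..., None]))
```

The lookup table plays the role of a word-embedding matrix for the variable names, while the two CVE networks handle the continuous time and value inputs without any binning.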
SLAC-Time architecture
The architecture of SLAC-Time is illustrated in Figure 2. SLAC-Time defines its input as a set of observation triplets that pass through STraTS model to generate the representation. Considering an unlabeled input dataset, SLAC-Time generates pseudo-labels to fine-tune the self-supervised model and facilitate performing the target task. In the target task, pseudo-labels serve as target classes for unlabeled data as though they were actual labels. SLAC-Time contains three modules, including (1) self-supervision, (2) pseudo-label extraction, and (3) classification module. This approach alternates between the pseudo-label extraction and classification modules to update cluster assignments and increase the quality of clusters. The following is a detailed description of SLAC-Time architecture.
Self-supervision module
In the first step of SLAC-Time, we pre-train STraTS by performing time-series forecasting as a self-supervision task to learn the representation of the unlabeled multivariate time-series data.
To do so, we use a larger dataset with N′ ≥ N samples given by D′ = {(d_k, T_k, m_k, z_k)}_{k=1}^{N′}, where m_k ∈ {0,1}^{|ℱ|} is the forecast mask, indicating which of the |ℱ| time-series variables were observed in the forecast window, and z_k ∈ ℝ^{|ℱ|} includes the associated values of these variables. We need to mask out the unobserved forecasts in the loss function because they cannot be used for training the model.
The time-series data in the forecast task dataset are created by considering various observation windows in the time series. Figure 3 depicts how we construct inputs and outputs for the forecast task. The time-series forecasting output is obtained by passing the concatenation of non-temporal and time-series embeddings through the following layer [9]:
z̃ = W_o [e^T ; e^D] + b_o ∈ ℝ^{|ℱ|}    (4)
where W_o and b_o are the output-layer weights and bias, and e^T and e^D represent the time-series embedding and the non-temporal embedding, respectively.
To address missing values in the forecast outputs, we use a masked Mean Squared Error (MSE) loss for training the forecasting model. The self-supervision loss is defined as
ℒ = (1 / |D′|) Σ_{k=1}^{N′} Σ_{j=1}^{|ℱ|} m_{kj} (z̃_{kj} − z_{kj})²    (5)
where m_{kj} = 0 if the ground-truth forecast z_{kj} is not available for the j-th variable of the k-th sample, and m_{kj} = 1 otherwise [9].
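A minimal sketch of the masked MSE of Eq. (5) as a Keras-compatible loss follows; packing the target values and the observation mask into `y_true` is our own convention here, not necessarily the authors' implementation.

```python
import tensorflow as tf

def make_forecast_loss(num_vars):
    """Masked MSE: y_true packs the ground-truth values and the 0/1 forecast
    mask along the last axis, so unobserved variables contribute no error."""
    def loss(y_true, y_pred):
        values = y_true[..., :num_vars]
        mask = y_true[..., num_vars:]
        sq_err = tf.square(y_pred - values) * mask
        # average over samples, sum over the observed variables, as in Eq. (5)
        return tf.reduce_sum(sq_err) / tf.cast(tf.shape(y_pred)[0], tf.float32)
    return loss
```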
Pseudo-label extraction module
After pre-training the STraTS model on the forecast task, the model's last layer, which is specific to the forecast task, is removed, and the resulting model is used to compute the non-temporal and time-series embeddings of the samples. The concatenation of the non-temporal embedding and the time-series embedding of each sample is defined as its representation [9]. Then, K-means clustering is performed on the learned representations. K-means takes the learned representations as input and divides them into k subgroups using its geometric criterion [1]. It simultaneously learns a d × k centroid matrix C and the cluster assignment y_n of each subject as follows [20]:
min_{C ∈ ℝ^{d×k}} (1/N) Σ_{n=1}^{N} min_{y_n ∈ {0,1}^k} ‖ f_θ((d_n, T_n)) − C y_n ‖²₂   such that   y_n^⊤ 1_k = 1    (6)
Optimizing this problem results in a set of optimal cluster assignments (y_n^*)_{n ≤ N} that are considered the pseudo-labels of the subjects. That is, each subject (d_n, T_n) is associated with a pseudo-label y_n ∈ {0,1}^k, representing the membership of the subject in one of the k possible predefined classes.
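A sketch of this pseudo-label extraction step with scikit-learn; `representations` is assumed to be the N × d array of concatenated time-series and non-temporal embeddings produced by the pre-trained encoder.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_pseudo_labels(representations, k, seed=0):
    """Solve Eq. (6) approximately with K-means and return the cluster
    assignments, which SLAC-Time uses as pseudo-labels."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed)
    pseudo_labels = km.fit_predict(np.asarray(representations))
    return pseudo_labels, km.cluster_centers_
```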
Classification module
The extracted pseudo-labels are leveraged to supervise the training of a classifier g_W that predicts the correct labels on top of the representations f_θ((d_n, T_n)) obtained from the STraTS model. The classifier's parameters W and the STraTS parameters θ are then learned simultaneously from the following optimization problem:
min_{θ, W} (1/N) Σ_{n=1}^{N} ℓ( g_W( f_θ((d_n, T_n)) ), y_n )    (7)
where ℓ is the negative log-softmax (multinomial logistic) loss and y_n is the pseudo-label of the n-th subject.
SLAC-Time is an iterative procedure that alternates between two steps: (1) clustering the representations with the K-means algorithm and using the resulting cluster assignments as pseudo-labels, and (2) updating the classifier's and the STraTS model's parameters to predict the correct label for each subject by minimizing the loss function.
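The alternation between the two modules can be sketched as below. `encoder` stands for the pre-trained STraTS representation model, `extract_pseudo_labels` is the helper sketched above, and attaching a fresh softmax head in each iteration is an assumption consistent with, but not necessarily identical to, the authors' implementation.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def add_softmax_head(encoder, k):
    """Attach a k-way softmax classifier g_W on top of the shared encoder f_theta."""
    probs = layers.Dense(k, activation="softmax")(encoder.output)
    return Model(encoder.inputs, probs)

def slac_time(encoder, inputs, k, n_iterations=500):
    """Alternate K-means pseudo-labelling (Eq. 6) and classifier training (Eq. 7)."""
    pseudo_labels = None
    for _ in range(n_iterations):
        reps = encoder.predict(inputs)                      # current representations
        pseudo_labels, _ = extract_pseudo_labels(reps, k)   # step (1): pseudo-labels
        clf = add_softmax_head(encoder, k)                  # step (2): joint update of theta and W
        clf.compile(optimizer=tf.keras.optimizers.Adam(5e-4),
                    loss="sparse_categorical_crossentropy")
        clf.fit(inputs, pseudo_labels, batch_size=8, epochs=200,
                validation_split=0.2,
                callbacks=[tf.keras.callbacks.EarlyStopping(patience=10,
                                                            restore_best_weights=True)])
    return pseudo_labels
```

Because the head is trained with cross-entropy on pseudo-labels, gradients flow back into the shared encoder, so each clustering round operates on progressively refined representations.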
Experimental results
Dataset
Our experiments were based on data obtained from the Transforming Research and Clinical
Knowledge in Traumatic Brain Injury (TRACK-TBI) dataset [21]. This dataset, which was collected
Implementation details
We implemented SLAC-Time using Keras with a TensorFlow backend. Table 1 lists the hyperparameters used in our experiments. The proxy-task and target-task models are trained using a batch size of 8 and the Adam optimizer. Training for the proxy task is stopped when the validation loss does not decrease for ten epochs. Training for the target task is performed for 500 iterations, each of which comprises 200 epochs. Training in each iteration is also stopped when the validation loss does not decrease for ten epochs. The experiments were carried out on an NVIDIA Tesla P100 GPU, which took around four days.
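As a sketch of this training configuration (Adam with the learning rate from Table 1, batch size 8, and early stopping after ten epochs without validation improvement); the model, data tuples, and loss function are passed in as parameters, and the masked-MSE loss sketched earlier can be supplied as `loss_fn`.

```python
import tensorflow as tf

def train_proxy_task(forecast_model, train_data, val_data, loss_fn):
    """Compile and fit the forecasting model with the settings of Table 1:
    Adam (lr = 5e-4), batch size 8, early stopping after 10 stagnant epochs."""
    forecast_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=5e-4),
                           loss=loss_fn)
    stopper = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=10,
                                               restore_best_weights=True)
    forecast_model.fit(*train_data, validation_data=val_data,
                       batch_size=8, epochs=1000,   # epoch cap; early stopping ends training
                       callbacks=[stopper])
    return forecast_model
```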
Optimal number of clusters
We evaluated how the quality of the clusters obtained by SLAC-Time is affected by the number of clusters, k. We performed the same proxy and downstream tasks using the same
The Silhouette coefficient for a sample is defined as s = (b − a) / max(a, b) (8), where a stands for the average distance between a sample and the rest of the samples in its cluster, and b is the average distance between the sample and the samples in the closest other cluster. The Silhouette coefficient for the set of samples used in the clustering problem is given as the mean of the Silhouette coefficients of the individual samples. This score ranges from −1 for incorrect clustering to +1 for dense clustering. We used the Euclidean distance between embeddings as the distance metric. The choice of Euclidean distance is consistent with the distance metric used in the K-means clustering algorithm, which we employed for clustering the learned time-series representations in SLAC-Time.
The Calinski Harabasz (CH) index is the ratio of the between-cluster dispersion to the within-cluster dispersion over all clusters, defined as follows:
CH = [ Σ_{k=1}^{m} n_k ‖c_k − c‖² / (m − 1) ] / [ Σ_{k=1}^{m} Σ_{i=1}^{n_k} ‖x_i − c_k‖² / (N − m) ]    (9)
Here, n_k and c_k are the number of samples in and the centroid of the k-th cluster, respectively, c denotes the global centroid, N is the total number of samples, and m is the number of clusters.
The Dunn index is the ratio of the shortest distance between samples from different clusters to the longest intra-cluster distance. It varies from 0 to infinity, with a higher value indicating higher-quality clusters. The Dunn index of a clustering with m clusters is

Dunn = min_{1 ≤ i < j ≤ m} δ(C_i, C_j) / max_{1 ≤ k ≤ m} Δ_k    (10)

The Davies Bouldin index (DB) represents the average similarity between clusters, where similarity is a metric that relates cluster distance to cluster size. This index is defined as
DB = (1/m) Σ_{i=1}^{m} max_{j ≠ i} R_{ij}    (11)
where R_{ij} is the similarity between clusters i and j, calculated as follows.
R_{ij} = (s_i + s_j) / d_{ij}    (12)
where s_i and s_j are the intra-cluster dispersions of clusters i and j, respectively, and d_{ij} represents the distance between the centroids of the two clusters.
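The four internal validation indices can be computed as sketched below; scikit-learn provides the Silhouette, Calinski Harabasz, and Davies Bouldin scores directly, while the Dunn index is implemented here from its definition in Eq. (10).

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.metrics import (silhouette_score, calinski_harabasz_score,
                             davies_bouldin_score)

def dunn_index(X, labels):
    """Smallest inter-cluster distance divided by the largest cluster diameter."""
    clusters = [X[labels == c] for c in np.unique(labels)]
    min_between = min(cdist(a, b).min()
                      for i, a in enumerate(clusters) for b in clusters[i + 1:])
    max_diameter = max(cdist(c, c).max() for c in clusters)
    return min_between / max_diameter

def evaluate_clustering(X, labels):
    """Compute the four indices used to compare cluster quality (Tables 2 and 3)."""
    return {"silhouette": silhouette_score(X, labels, metric="euclidean"),
            "calinski_harabasz": calinski_harabasz_score(X, labels),
            "davies_bouldin": davies_bouldin_score(X, labels),
            "dunn": dunn_index(X, labels)}
```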
We observed that k=3 results in the best clustering performance across all four clustering evaluation metrics ( Table 2), suggesting three possible TBI phenotypes.
Number of cluster reassignments between iterations
By updating the parameters of the model in each iteration, a record may be assigned to a new cluster. Accordingly, the cluster assignments may change over iterations. We measure the information shared between the clusters at iteration t-1 and t using the Normalized Mutual Information (NMI) defined as follows [24]:
NMI(A; B) = I(A; B) / √( H(A) H(B) )    (13)
where A and B denote the cluster assignments at iterations t−1 and t, I is the mutual information, and H denotes the entropy. If the cluster assignments at iterations t−1 and t are perfectly dissimilar, the NMI equals 0; if they are perfectly the same, the NMI equals 1. We measure the NMI between the clusters at iterations t−1 and t to determine the actual stability of SLAC-Time. Figure 4 demonstrates the NMI trend during 500 training iterations. As can be seen in Figure 4, the value of NMI increases significantly after about 200 iterations, indicating a decrease in cluster reassignments and an increase in the stability of the clusters. NMI, however, stays below 1, meaning that several TBI patients are frequently reassigned between iterations.
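A sketch of this stability check between consecutive iterations; the geometric averaging option matches the √(H(A)H(B)) normalization in Eq. (13).

```python
from sklearn.metrics import normalized_mutual_info_score

def cluster_stability(labels_prev, labels_curr):
    """NMI between the cluster assignments at iterations t-1 and t (Eq. 13):
    0 for completely dissimilar assignments, 1 for identical assignments."""
    return normalized_mutual_info_score(labels_prev, labels_curr,
                                        average_method="geometric")
```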
Comparison of SLAC-Time and K-means clustering algorithm
To demonstrate the effectiveness of SLAC-Time over the common clustering methods, we compared the clustering performance metrics of SLAC-Time with those of K-means for clustering multivariate time-series data in the TRACK-TBI dataset. Since there are no ground-truth labels regarding TBI phenotypes, we use different intrinsic clustering evaluation measures to quantify the clustering performance.
K-means is a common method for clustering both non-temporal and time-series data. In SLAC-Time, we use the K-means method to extract pseudo-labels of multivariate time-series data by clustering the learned representations. In order to cluster multivariate time-series data with missing values using K-means, it is necessary to handle missing values beforehand by imputing or interpolating them. This is because K-means requires complete data points for each variable to calculate distances and determine cluster membership. To enable a fair comparison between SLAC-Time and K-means, we handled missing values in the data using iterative imputation for non-temporal variables and linear interpolation for time-series variables. This allowed us to create complete data points for both types of variables and ensure an unbiased evaluation of the performance of both clustering algorithms. The clustering evaluation metrics show that SLAC-Time outperforms K-means ( Table 3), suggesting that clustering multivariate time-series data based on the learned representations rather than raw data can be more effective.
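A sketch of the baseline preprocessing described above, assuming the non-temporal variables sit in a pandas DataFrame and each patient's time series has been resampled onto a common time grid so it can be flattened for K-means; the variable names are illustrative.

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.cluster import KMeans

def kmeans_baseline(static_df, ts_frames, k=3, seed=0):
    """Impute non-temporal variables, linearly interpolate the time series,
    flatten everything, and cluster with plain K-means."""
    static = IterativeImputer(random_state=seed).fit_transform(static_df)
    filled = [f.interpolate(method="linear", limit_direction="both").ffill().bfill()
              for f in ts_frames]                       # one DataFrame per patient
    ts_flat = np.stack([f.to_numpy().ravel() for f in filled])
    features = np.hstack([static, ts_flat])
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(features)
```

This contrast makes the comparison concrete: the baseline must manufacture complete matrices before clustering, whereas SLAC-Time clusters representations learned directly from the incomplete observation triplets.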
Characteristics of the TBI phenotypes
To discover the distinct characteristics of the TBI phenotypes, we examine the variables that have been shown to be explanatory in the prediction of the 6-month GOSE score [25,26], and those that are significant based on TBI clinicians' domain knowledge. These variables include age, sex, overall GCS score, GCS motor score, GCS eye score, pupil reactivity, hypoxia, hypotension, intubation, glucose, hemoglobin, white blood cell (WBC) count, hematocrit, international normalized ratio (INR), and activated partial thromboplastin time (aPTT). For comparisons between the phenotypes, non-temporal variables are represented by mean and standard deviation or by counts and percentages (Table 4). We also report the phenotype-specific averages of time-series variables with 95% confidence intervals and the correlations between them (Figures 6-13).
We univariately analyze the important non-temporal and time-series variables and test whether there is a significant difference between the phenotypes using the Kruskal-Wallis test. The differences between phenotypes were considered significant when corresponding p-values are <0.05.
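A sketch of this univariate comparison: for each variable, the Kruskal-Wallis H-test is applied to its per-phenotype samples (missing values are dropped), and p < 0.05 is flagged as significant.

```python
import numpy as np
from scipy.stats import kruskal

def kruskal_by_phenotype(values, phenotype_labels):
    """Kruskal-Wallis test of one clinical variable across the phenotypes."""
    values = np.asarray(values, dtype=float)
    labels = np.asarray(phenotype_labels)
    groups = [values[(labels == c) & ~np.isnan(values)] for c in np.unique(labels)]
    stat, p_value = kruskal(*groups)
    return stat, p_value, p_value < 0.05
```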
Using k=3 (the optimal number of clusters) as the number of TBI phenotypes, we performed SLAC-Time for clustering 2160 TBI patients in the TRACK-TBI dataset, which resulted in three TBI phenotypes α, β, and γ that include 693, 586, and 881 TBI patients, respectively. Each TBI phenotype had a distinct baseline feature profile linked to its outcome endpoints. Even though phenotype γ, with an average GOSE score of 6.5 ± 1.5, seems to overlap with phenotype α, these two phenotypes are significantly different in terms of both GOSE score (p < 0.05) and ICU length of stay (p < 0.05).
Furthermore, the higher mortality rate of phenotype γ (3.5%) compared to that of phenotype α (0.6%) emphasizes the higher severity of phenotype γ compared to phenotype α. respectively, meaning that the TBI patients in phenotype had a more severe injury than those in phenotype , and are more likely to need intensive care after hospital admission.
Comparison of Level of Consciousness and Pupil Reactivity across TBI Phenotypes:
Higher severity of phenotype compared to phenotypes and is evident in their GCS motor scores, GCS eye scores, and pupil reactivity (Figures 6-8). More than 40% of the TBI patients in phenotype had no motor response once they are admitted to the ICU, while about 4% and 10% of TBI patients in phenotype and had no motor response upon ICU admission, respectively ( Figure 6). Likewise, about 60% of the TBI patients in phenotype have no eye-opening response once they are admitted to the ICU, while only 8% and 20% of the TBI patients in phenotypes and have no eye-opening response upon ICU admission, respectively (Figure 7). For all three phenotypes, the percentage of TBI patients with no motor response and no eye-opening response sharply decreases on the first day of ICU stay. The highest GCS motor score (score 6:
obey commands) and the highest GCS eye score (score 4: response spontaneously) were significantly different among the phenotypes (p < 0.001). Phenotype had the lowest percentage of TBI patients with GCS motor score equal to 6. Only about 24% of the TBI patients in phenotype had the highest GCS motor and eye scores once they are admitted to the ICU. Although the percentages of TBI patients in phenotype with the highest GCS scores increases over time, they remain much lower than those in phenotypes and during the ICU stay. Furthermore, phenotypes and differ significantly in all the aforementioned GCS motor and GCS eye score categories (p < 0.001). Phenotype had the lowest percentages of TBI patients with no GCS motor or no GCS eye response. On the other hand, it had the highest percentages of TBI patients with the best GCS motor and eye scores. Pupil reactivity is among the variables with the highest contribution to the long-term recovery outcome of TBI patients [25,26]. There are significant differences between the pupil reactivity of TBI phenotypes in both ED and the ICU (p < 0.001). Phenotype and phenotype differ significantly in terms of pupil reactivity (p < 0.001). The percentages of TBI patients with no pupil reactivity or with only one reactive pupil are less in phenotype compared to phenotype .
During ED visits, phenotype had the highest percentage of TBI patients with two reactive pupils (87%) ( Table 4). On the other hand, during ICU stay, the percentage of phenotype patients with two reactive pupils is slightly less than that of phenotype (Figure 8). This difference can be due to the missing values in the time-series variable associated with pupil reactivity. TBI patients in phenotype had the worst pupil reactivity among the TBI patients in both ED and ICU. Phenotype had the highest percentage of TBI patients with no pupil reactivity and the lowest percentage of TBI patients, both of whose pupils react.
Comparison of Clinical Variables across TBI Phenotypes:
The analysis of phenotype-specific averages of clinical variables reveals significant differences among the three TBI phenotypes.
Phenotype β exhibited the highest rates of hypoxia and hypotension compared to the other two phenotypes (p < 0.005). Additionally, the rates of hypoxia and hypotension in phenotype α were significantly higher than those in phenotype γ (p < 0.005) ( Table 4).
In Figure 9, we present the average hematocrit, hemoglobin, and WBC values for each TBI phenotype during the first 120 hours of ICU stay. Phenotype β had the lowest hemoglobin and hematocrit values as well as the highest WBC count, indicating more severe blood loss and injury compared to the other two phenotypes. This finding is consistent with the lowest GCS score in the ED (8.5 ± 5.1) and the worst GCS motor response, GCS eye response, and pupil reactivity trajectories during the five-day ICU stay (Figures 6-8). In addition, phenotype had the highest intubation rate. About 60% of the TBI patients in phenotype are intubated once they are admitted to the ICU, while phenotype had the lowest intubation rate and only 10% of the TBI patients in this phenotype were intubated upon ICU admission (Figure 10). Likewise, phenotype and phenotype had the highest and lowest levels of glucose levels during ICU stay, accordingly (Figure 11). The INR and aPTT tests are used to measure how quickly blood clots in different pathways [27]. As can be seen in Figure 12, phenotype had the highest levels of aPTT and INR among the TBI phenotypes, suggesting that phenotype includes patients suffering from bleeding disorders due to their TBI. On the other hand, the relatively normal coagulation measurements of phenotypes and align with their low bleeding rates. phenotypes α and γ. The plot shows that phenotype β has a substantially larger dispersion than phenotypes α and γ, aligning with our previous findings that indicate worse clinical outcomes for phenotype β, such as the lowest GOSE scores, highest mortality rates, and longest ICU stays. The PCA plot reveals that patients in phenotypes α and γ are located near each other, with phenotype γ exhibiting greater dispersion than phenotype α. This difference in dispersion highlights the varying levels of heterogeneity within each group, suggesting that the broader range of underlying pathophysiological features in phenotype γ could contribute to its worse clinical outcomes compared to phenotype α.
Comparison of Correlations between Important Variables across TBI Phenotypes:
We found that glucose, hematocrit, hemoglobin, and WBC values are negatively correlated with the best GCS motor response (obey commands), best GCS eye response (response spontaneously), and best pupil reactivity (both pupils react) across all three phenotypes. Conversely, these variables are positively correlated with the worst GCS motor response (no response), worst GCS eye response (no response), and worst pupil reactivity (neither pupil reacts) across all three phenotypes (see Figure 14). In other words, a decrease in these four time-series variables during a TBI patient's ICU stay may indicate the recovery of their impaired consciousness. Figure 14. Correlation analysis of time-series features within each TBI phenotype
Discussion
We applied SLAC-time to the clustering of TBI patients based on non-temporal and timeseries clinical variables during the first five days of ICU stay. We identified three TBI phenotypes (, , and ) that are distinct from one another in terms of clinical variables and outcome endpoints. Phenotype had the worst clinical outcomes. Phenotype had better clinical outcomes than those in phenotype , although phenotype had less favorable trends than phenotype in some clinical variables such as WBC, hematocrit, and hemoglobin as well as hypoxia, and hypotension rates.
WBCs circulate through the bloodstream and tissues to respond to the injury and protect the body from infection after trauma [28]. Considering this, the more severe the injury, the higher the WBC counts [29]. Hematocrit is the percentage of red blood cells in the blood, and hemoglobin enables red blood cells to carry oxygen and CO2 throughout the body. After trauma, low hemoglobin or hematocrit levels indicate that the patient is losing red blood cells because of acute bleeding. Comparing the values of WBC, hemoglobin, and hematocrit in TBI phenotypes demonstrates that phenotype had the highest blood loss among the TBI phenotypes.
Furthermore, phenotype suffers from more blood loss compared to phenotype . The amount and intensity of blood loss in TBI phenotypes can also be seen in their rates of hypoxia and hypotension. Hypoxia is the absence of oxygen in the body tissues, and it can be caused due to a low number of blood cells from severe blood loss. Hypotension is defined as having a blood pressure of less than 90/60 mm/Hg i.e., low blood pressure. Low blood volume due to severe blood loss can cause low blood pressure. The rates of hypoxia and hypotension substantiate that phenotype had the highest and phenotype had the lowest blood loss.
Phenotype , despite having more blood loss compared to phenotype , had better recovery outcomes. This might be because the patients in phenotype are much younger than phenotype . This supports the cluster analysis because it is consistent with the assumption that younger age yields better TBI outcomes, especially for more severe cases. Gender might be another reason why phenotype has better outcomes compared to phenotype . Phenotype with a higher percentage of females result in a better outcome. This aligns with clinical research that suggests better outcomes and recovery of females compared to males after injury [30,31]. The glucose levels might also contribute to the slower recovery of phenotype compared to phenotype . Over time, a high glucose level, indicative of a high blood sugar level, might harm the body's organs and result in potential long-term effects by damaging small and big blood vessels [32]. Clinical studies shows that high blood sugar levels harm the brain by raising intracranial pressure, and it contributes to poorer outcomes after injury [33]. An uncontrolled glucose level may impede or delay the recovery of TBI patients. The higher glucose levels of phenotype might explain why the patients in this phenotype recover more slowly than those of phenotype . All three phenotypes had their highest level of glucose once they are admitted to the ICU, which is associated with disturbed cerebrovascular pressure reactivity following TBI [33]. to consider factors such as the underlying distribution of the data, the presence of noise, and the computational complexity of the algorithm when selecting the most appropriate clustering technique for their specific application.
Limitations
There are several limitations to this study. First, we applied SLAC-Time to a TBI-specific dataset with longitudinal outcomes. However, we may need to evaluate SLAC-Time using a broader clinical multivariate time-series dataset. Second, our study was primarily focused on TBI patients within the TRACK-TBI dataset, and we acknowledge that one limitation is our inability to externally validate the derived phenotypes. This constraint mainly stems from the scarcity of alternative datasets that possess the requisite temporal data and clinical outcomes necessary for effective validation. In addition, challenges in accessing and obtaining approvals for using such datasets further contribute to this limitation. Third, our study was inevitably limited by the available clinical variables. More insights into phenotypical differences can be derived if additional clinical variables such as specific CT results, neurological symptoms, and genetic profiles of the TBI patients had been used in clustering. Finally, SLAC-Time is specifically designed to handle missing values in multivariate time-series data, but not in non-temporal data. In cases where non-temporal data contains missing values, one approach is to represent non-temporal variables as triplets, with a default time value. However, this method may affect the model's performance [9]. Therefore, it may be necessary to use data imputation methods to fill in the missing values in non-temporal data.
Conclusion
We proposed a self-supervised learning-based approach to cluster multivariate time-series data with missing values without resorting to data imputation and aggregation methods. We used time-series forecasting as a proxy task to learn the representation of unlabeled multivariate time-series data. SLAC-Time iteratively clusters the representations with K-means and updates its parameters by predicting cluster labels as pseudo-labels. SLAC-Time outperforms K-means in all clustering evaluation metrics, suggesting that using learned representations rather than raw data for clustering multivariate time-series data might mitigate the negative influence of noises in raw data on clustering performance. SLAC-Time needs limited to no domain knowledge about input data, making it an excellent choice for clustering multivariate time-series data in fields where data annotations are rare. The performance of SLAC-Time for clustering TBI patients demonstrates its applicability to sparse and irregularly sampled multivariate time-series data. We successfully derived TBI phenotypes (, , and ) from the TRACK-TBI dataset, revealing significant differences between their outcomes. The identification of these distinct TBI phenotypes has significant implications for designing clinical trials and developing treatment strategies tailored to the specific physiological characteristics of TBI patients. By considering the unique features and needs of each TBI phenotype, researchers and clinicians can develop more targeted interventions with the potential to improve outcomes and reduce the likelihood of ineffective or harmful interventions. Additionally, a nuanced understanding of TBI phenotypes can inform the development of new diagnostic tools and treatment approaches designed to address the underlying mechanisms of each subtype. We recommend that future research focus on external validation by examining cohorts from other TBI datasets to ensure the generalizability of the phenotypes. Additionally, exploring alternative self-supervision tasks for the STraTS model and developing a framework for interpreting self-supervised learning-based clustering of multivariate time-series data are recommended avenues for further research.
Figure 1. A high-level overview of the problem of identifying TBI phenotypes. TBI: Traumatic Brain Injury; GOSE: Glasgow Outcome Scale-Extended; ICU: Intensive Care Unit
Given a training set of N unlabeled samples represented by D = {(d_k, T_k)}_{k=1}^{N}, where the k-th sample includes a non-temporal feature vector d_k and a multivariate time series T_k, we determine the optimal parameters θ* such that the mapping f_{θ*} yields general-purpose representations. To do so, drawing from the STraTS model, we represent each multivariate time series as a set of observation triplets, where each triplet is of the form (t, f, v), with t ∈ ℝ_{≥0} the time of measurement, f the name of the variable, and v ∈ ℝ the value of the variable. A multivariate time series of length n is thus defined as a set of n observation triplets, T = {(t_i, f_i, v_i)}_{i=1}^{n}. To obtain the representation of the samples, we use the STraTS model from our prior work [9]. Given an input multivariate time series T = {(t_i, f_i, v_i)}_{i=1}^{n}, the initial embedding of the i-th triplet, e_i ∈ ℝ^d, is calculated by adding three embeddings: (1) the feature embedding e_i^f ∈ ℝ^d, (2) the value embedding e_i^v ∈ ℝ^d, and (3) the time embedding e_i^t ∈ ℝ^d [9]. In other words, e_i = e_i^f + e_i^v + e_i^t ∈ ℝ^d. Feature embeddings, like word embeddings, are produced using a simple lookup table, while value embeddings and time embeddings come from the one-to-many FFNs of Eqs. (1)-(3).
Figure 2. An overview of the proposed SLAC-Time clustering approach
Figure 3. An illustration of input and output in the forecasting task
hyperparameters while changing k according to the TBI clinicians' domain knowledge about the possible number of TBI phenotypes. We utilized various intrinsic clustering evaluation metrics, including the Silhouette coefficient, Calinski Harabasz index, Dunn index, and Davies Bouldin index, computed on the data embeddings, to measure the quality of the clusters.
where δ(C_i, C_j) is the inter-cluster distance between clusters C_i and C_j, with 1 ≤ i < j ≤ m, and m stands for the total number of clusters. Also, Δ_k represents the maximum distance between observations in cluster k, where 1 ≤ k ≤ m.
Figure 4. NMI trend during 500 training iterations
Figure 5. Boxplot illustrating GOSE scores and ICU lengths of stay for TBI patients across different phenotypes
Figure 6. Percentage of TBI patients with the lowest and highest GCS motor score during the first 120 h of ICU stay
Figure 7. Percentage of TBI patients with the lowest and highest GCS eye score during the first 120 h of ICU stay
Figure 8. Percentage of TBI patients in each category of pupil reactivity during the first 120 h of ICU stay
Figure 9. Mean hematocrit, hemoglobin, and WBC levels for TBI patients in each phenotype during the first 120 h of ICU stay
Figure 10. Percentage of TBI patients intubated in each phenotype during the first 120 h of ICU stay
Figure 11. Mean glucose levels for TBI patients in each phenotype during the first 120 h of ICU stay
Figure 12. Mean aPTT and INR levels for TBI patients in each phenotype during the first 120 h of ICU stay
Comparison of Heterogeneity and Dispersion across TBI Phenotypes: Figure 13 presents a Principal Component Analysis (PCA) plot of TBI patients, emphasizing the high heterogeneity of TBI patients, as there is no clear distinction between them in the PCA plot, particularly between
Figure 13. PCA plot of TBI patients in each phenotype based on two principal components
The TRACK-TBI dataset, collected from 18 academic Level I trauma hospitals across the United States, includes detailed clinical data on 2996 TBI patients with different severity levels. The data used in this study include 110 variables: 59 non-temporal variables (demographics and measurements recorded once at the time of the emergency department (ED) visit) and 51 time-series variables collected during the first five days of the TBI patients' hospital or ICU stay. We included three outcome variables, namely the Glasgow Outcome Scale-Extended (GOSE) score, ICU length of stay, and mortality rate, to evaluate the validity of the identified TBI phenotypes. GOSE is a measure of functional outcome that assesses TBI patients in eight categories: (1) dead, (2) vegetative state, (3) lower severe disability, (4) upper severe disability, (5) lower moderate disability, (6) upper moderate disability, (7) lower good recovery, and (8) upper good recovery [22,23]. The ICU length of stay was defined as the duration of time that a patient spent in the ICU after admission, and the mortality rate was the proportion of patients who died within six months of the injury. We excluded the TBI patients with no GOSE score available; with this criterion, 2160 TBI patients met the inclusion criteria. Non-temporal variables were not available for all patients, so we performed iterative imputation to fill in the missing values in the non-temporal variables. Both time-series and non-temporal variables were normalized to have zero mean and unit variance.

Proxy task: We perform time-series forecasting as a proxy task to pre-train the model and learn the initial representation of the multivariate time-series data. To do so, we define the set of observation windows as {24, 48, 72, 96, 118} hours and the prediction window as the 2-hour time period that comes just after the observation window. Note that we only include the records with at least one time-series measurement in both the observation and prediction windows. The data for time-series forecasting are divided into training and validation datasets with a ratio of 80 to 20.

Target task: The target task of SLAC-Time in this study is to subgroup TBI patients considering all the variables in the TRACK-TBI dataset that meet the inclusion criteria. These TBI patients are divided into training and validation datasets with a ratio of 80 to 20.
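To make the preprocessing concrete, the following is a minimal Python sketch (not the authors' code) of the iterative imputation, zero-mean/unit-variance normalization, and 80/20 split described above; the synthetic array stands in for the non-temporal variables, and all names here are hypothetical.

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the non-temporal variables (2160 patients x 10 features here).
X = rng.normal(size=(2160, 10))
X[rng.random(X.shape) < 0.1] = np.nan  # inject ~10% missing values

# Iterative imputation of the missing non-temporal values.
X_imputed = IterativeImputer(random_state=0).fit_transform(X)

# Zero-mean, unit-variance normalization.
X_scaled = StandardScaler().fit_transform(X_imputed)

# 80/20 split into training and validation sets.
X_train, X_val = train_test_split(X_scaled, test_size=0.2, random_state=0)
print(X_train.shape, X_val.shape)  # (1728, 10) (432, 10)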
Table 1. Hyperparameters used in the experiment

Hyperparameter                          Value
M (number of blocks in Transformer)     2
d (number of output neurons in FFNs)    32
h (number of attention heads in MHA)    4
Dropout                                 0.2
Learning rate                           0.0005
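As a rough illustration of what these settings correspond to (a sketch only, not the authors' architecture; the input shape and the mapping of d to the feed-forward width are assumptions), a PyTorch encoder with M = 2 blocks, model dimension 32, h = 4 attention heads, dropout 0.2, and learning rate 0.0005 could be instantiated as follows.

import torch
from torch import nn

# Two Transformer encoder blocks with 4-head attention, 32-dimensional FFN outputs, dropout 0.2.
layer = nn.TransformerEncoderLayer(d_model=32, nhead=4, dim_feedforward=32,
                                   dropout=0.2, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=0.0005)

x = torch.randn(8, 120, 32)   # (batch, time steps, embedded features) -- dummy input
z = encoder(x)
print(z.shape)                # torch.Size([8, 120, 32])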
Table 2. SLAC-Time cluster quality for different numbers of clusters

Number of clusters   Silhouette coefficient   Dunn index   Davies-Bouldin index   Calinski-Harabasz index
3                    0.06                     0.09         1.9                    90.3
4                    0.05                     0.08         5.3                    54.3
5                    0.03                     0.06         7.8                    34.6
Table 3. Comparison of SLAC-Time and the K-means clustering method based on different clustering quality metrics

Clustering method   Silhouette coefficient   Dunn index   Davies-Bouldin index   Calinski-Harabasz index
K-means             0.07                     0.11         4.31                   131.66
SLAC-Time           0.13                     0.15         1.8                    196.31
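For readers who wish to reproduce this kind of comparison, the sketch below (not the authors' code) computes the four quality metrics reported in Tables 2 and 3 for a K-means labeling of a synthetic embedding; the Dunn index is computed by hand because scikit-learn does not provide it.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import (silhouette_score, davies_bouldin_score,
                             calinski_harabasz_score, pairwise_distances)

def dunn_index(X, labels):
    # Minimum inter-cluster distance divided by maximum intra-cluster diameter.
    D = pairwise_distances(X)
    clusters = np.unique(labels)
    inter = min(D[np.ix_(labels == a, labels == b)].min()
                for a in clusters for b in clusters if a < b)
    intra = max(D[np.ix_(labels == c, labels == c)].max() for c in clusters)
    return inter / intra

# Synthetic stand-in for the learned patient representations.
X, _ = make_blobs(n_samples=500, centers=3, n_features=8, random_state=0)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print("Silhouette:        ", silhouette_score(X, labels))
print("Dunn index:        ", dunn_index(X, labels))
print("Davies-Bouldin:    ", davies_bouldin_score(X, labels))
print("Calinski-Harabasz: ", calinski_harabasz_score(X, labels))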
Table 4. Key demographics and non-temporal clinical features of TBI patients included in the clustering analysis
Feature                               Phenotype     Phenotype     Phenotype
Total subjects, n                     693           586           881
Age
  Age, mean ± SD                      28 ± 14       41 ± 18       48 ± 18
  Age ≤ 30, n (%)                     471 (68%)     223 (38%)     182 (21%)
  30 < Age ≤ 45, n (%)                139 (20%)     123 (21%)     217 (24%)
  45 < Age ≤ 60, n (%)                69 (10%)      142 (24%)     245 (28%)
  Age > 60, n (%)                     14 (2%)       98 (17%)      237 (27%)
Sex
  Male, n (%)                         336 (53%)     426 (72%)     720 (81%)
  Female, n (%)                       357 (47%)     160 (28%)     161 (19%)
Clinical variables
  ED Glucose, mean ± SD               122 ± 32      158 ± 67      138 ± 58
  ED Hemoglobin, mean ± SD            13.7 ± 1.6    13.5 ± 1.8    14.2 ± 1.6
  ED INR, mean ± SD                   1.06 ± 0.09   1.22 ± 0.86   1.07 ± 0.17
  Hypoxia, n (%)                      19 (2.7%)     37 (6.3%)     13 (1.5%)
  Hypotension, n (%)                  17 (2.5%)     51 (8.7%)     14 (1.6%)
GCS score
  ED GCS score, mean ± SD             14.1 ± 2.1    8.5 ± 5.1     13.9 ± 2.7
ED GCS motor score
  1 (no response), n (%)              4 (0.6%)      16 (2.7%)     3 (0.3%)
  2 (extension), n (%)                5 (0.7%)      12 (2%)       4 (0.5%)
  3 (flexion abnormal), n (%)         21 (3%)       46 (8%)       8 (1%)
  4 (flexion withdrawal), n (%)       18 (2.6%)     85 (14.5%)    27 (3%)
  5 (localizes to pain), n (%)        581 (84%)     193 (33%)     783 (89%)
  6 (obeys commands), n (%)           2 (0.3%)      33 (5.6%)     5 (0.6%)
ED Pupil reactivity
  Both pupils react, n (%)            603 (87%)     349 (60%)     730 (83%)
  Only one pupil reacts, n (%)        0 (0%)        22 (3.8%)     8 (1%)
  Neither of the pupils reacts, n (%) 1 (0.14%)     96 (16.38%)   3 (0.34%)
Comparison of Outcomes across TBI Phenotypes: Evaluating the outcome variables of the TBI patients in each phenotype showed that the phenotypes significantly differ from one another, with one phenotype having the best and another the worst values across all three outcomes (Figure 5). The phenotype with the best outcomes had the best GOSE score (6.9 ± 1) among TBI phenotypes: 68% of its TBI patients had a GOSE score of 7 or 8, while 27% and 47% of the patients in the other two phenotypes had a GOSE score of 7 or 8, respectively. This phenotype is also characterized by the lowest mortality rate (0.6%) and the shortest ICU length of stay (4 days) compared to the other two phenotypes. The phenotype with the lowest GOSE score (5.1 ± 2.2) had the worst recovery compared to the other two phenotypes: 15% of its TBI patients had a GOSE score of 1 or 2, while only 1% and 4% of the TBI patients in the other two phenotypes have such GOSE scores, respectively. Besides having the lowest GOSE score, this phenotype is characterized by the highest mortality rate (14%) and the longest ICU stay (21 days).
Comparison of Sex and Age across TBI Phenotypes: One phenotype, with an average age of 28 ± 14, was the youngest group (p < 0.005): 68% of its TBI patients were 30 years old or younger, and only 2% were older than 60. On the other hand, another phenotype was the oldest group (48 ± 18): 21% of its TBI patients were younger than 30, and 27% were older than 60. Most of the TBI patients were male (Table 4), and there is a significant difference between TBI phenotypes in terms of their gender (p < 0.005). The youngest phenotype had the highest percentage of females (47%) among all phenotypes, whereas only 19% of TBI patients in the oldest phenotype were female.

Comparison of ICU Admission Rate across TBI Phenotypes: Of the TBI patients in the TRACK-TBI dataset, less than 1% are discharged from the ED, while the vast majority require admission to hospitals for ongoing treatment and management following ED care. Those admitted to hospitals are transferred to ICUs if they suffer from a serious injury and need special care. About 90% of TBI patients in one phenotype were transferred to ICUs, which suggests that the patients in this phenotype primarily had severe and life-threatening injuries requiring intensive care. Also, 30% and 48% of TBI patients in the other two phenotypes were transferred to ICUs, respectively.
The potential applicability of SLAC-Time extends beyond the realm of TBI to other clinical clustering tasks in areas such as infectious diseases, cardiology, oncology, and chronic disease management. For instance, it can enable the clustering of patients with infectious diseases such as COVID-19 based on symptom progression and treatment responses, helping clinicians identify patient subgroups and tailor personalized interventions. SLAC-Time is also well-suited for clustering time-series data in other domains such as finance, energy management, and environmental monitoring applications. In each of these domains, time-series data often presents unique challenges, including complexity, non-stationarity, and missing values, similar to those observed in TBI datasets. By successfully addressing these challenges in the context of TBI data, we demonstrate the potential of SLAC-Time to be readily adapted and employed in these diverse fields. The self-supervised learning and the ability to handle missing values without imputation or aggregation make it a valuable tool for a broad range of applications where accurate and efficient clustering of time-series data is essential.

Finally, we chose to use the K-means method for clustering time-series representations within SLAC-Time for its simplicity, efficiency, and widespread use in the field of time-series analysis. However, SLAC-Time is a flexible framework that can accommodate a variety of clustering algorithms; a sketch of this flexibility follows below. The choice of clustering algorithm should depend on the specific characteristics of the dataset and the problem domain. For instance, density-based clustering algorithms such as DBSCAN may be more appropriate when the data exhibits clusters of varying densities or when noise is present. By incorporating alternative clustering techniques, SLAC-Time can be further customized to address a broader range of problems. It is essential for researchers
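As a sketch of that flexibility (not the authors' implementation), the clustering step applied to the learned representations can be swapped between K-means and a density-based method such as DBSCAN; the embedding below is synthetic, and the eps/min_samples values are placeholders.

import numpy as np
from sklearn.cluster import KMeans, DBSCAN
from sklearn.datasets import make_blobs

# Synthetic stand-in for the time-series representations learned by the encoder.
Z, _ = make_blobs(n_samples=600, centers=3, n_features=16, random_state=1)

# Partition-based clustering, as used in SLAC-Time.
kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(Z)

# Density-based alternative; eps and min_samples would need tuning on real data.
dbscan_labels = DBSCAN(eps=3.0, min_samples=10).fit_predict(Z)

print("K-means clusters:", np.unique(kmeans_labels))
print("DBSCAN clusters (-1 = noise):", np.unique(dbscan_labels))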
Acknowledgments

This material is based upon work supported by the National Science Foundation under grants
[1] S. Aghabozorgi, A. Seyed Shirkhorshidi, T. Ying Wah, Time-series clustering - A decade review, Inf Syst. 53 (2015) 16-38. https://doi.org/10.1016/J.IS.2015.04.007.
[2] Q. Ma, J. Zheng, S. Li, G.W. Cottrell, Learning representations for time series clustering, Adv Neural Inf Process Syst. 32 (2019).
[3] J. Yang, J. Leskovec, Patterns of temporal variation in online media, Proceedings of the 4th ACM International Conference on Web Search and Data Mining, WSDM 2011 (2011) 177-186. https://doi.org/10.1145/1935826.1935863.
[4] J. Paparrizos, L. Gravano, k-Shape: Efficient and accurate clustering of time series, Proceedings of the ACM SIGMOD International Conference on Management of Data (2015) 1855-1870. https://doi.org/10.1145/2723372.2737793.
[5] F. Petitjean, A. Ketterlin, P. Gançarski, A global averaging method for dynamic time warping, with applications to clustering, Pattern Recognit. 44 (2011) 678-693. https://doi.org/10.1016/J.PATCOG.2010.09.013.
[6] Z. Che, S. Purushotham, K. Cho, D. Sontag, Y. Liu, Recurrent neural networks for multivariate time series with missing values, Sci Rep. 8 (2018) 1-12. https://doi.org/10.1038/s41598-018-24271-9.
[7] E. Aljalbout, V. Golkov, Y. Siddiqui, M. Strobel, D. Cremers, Clustering with deep learning: Taxonomy and new methods, arXiv (2018) 1-12.
[8] R. Krishnan, P. Rajpurkar, E.J. Topol, Self-supervised learning in medicine and healthcare, Nature Biomedical Engineering 6 (2022) 1346-1352. https://doi.org/10.1038/s41551-022-00914-1.
[9] S. Tipirneni, C.K. Reddy, Self-supervised transformer for sparse and irregularly sampled multivariate clinical time-series, ACM Trans Knowl Discov Data. 16 (2022).
[10] K.A. Folweiler, D.K. Sandsmark, R. Diaz-Arrastia, A.S. Cohen, A.J. Masino, Unsupervised machine learning reveals novel traumatic brain injury patient phenotypes with distinct acute injury profiles and long-term outcomes, J Neurotrauma. 37 (2020) 1431-1444. https://doi.org/10.1089/neu.2019.6705.
[11] K.E. Saatman, A. Duhaime, R. Bullock, A.I. Maas, A. Valadka, G.T. Manley, D. Brody, C. Contant, P. Dash, R. Diaz-Arrastia, S. Fertig, A. Gean, C. Goodman, W. Gordon, R. Hayes, R. Hicks, J. Langloi, A. Marmarou, D. Moore, G. Murray, D. Okonkwo, L. Papa, L. Phillips, N. Plesnila, C. Robertson, C. Robertson, J. Sahuquillo, R. Silbergleit, E. Steyerberg, N. Stocchetti, E. Teasdale, G. Teasdale, N. Temkin, H. Thompson, K. Tong, L. Wilson, D. Wright, Classification of traumatic brain injury for targeted therapies (2008). https://doi.org/10.1089/neu.2008.0586.
[12] B. Si, G. Dumkrieger, T. Wu, R. Zafonte, A.B. Valadka, D.O. Okonkwo, G.T. Manley, L. Wang, D.W. Dodick, T.J. Schwedt, J. Li, Sub-classifying patients with mild traumatic brain injury: A clustering approach based on baseline clinical characteristics and 90-day and 180-day outcomes, PLoS One. 13 (2018). https://doi.org/10.1371/journal.pone.0198741.
[13] D. Yeboah, L. Steinmeister, D.B. Hier, B. Hadi, D.C. Wunsch, G.R. Olbricht, T. Obafemi-Ajayi, An explainable and statistically validated ensemble clustering model applied to the identification of traumatic brain injury subgroups, IEEE Access. 8 (2020) 180690-180705. https://doi.org/10.1109/ACCESS.2020.3027453.
[14] C.A.I. Åkerlund, A. Holst, N. Stocchetti, E.W. Steyerberg, D.K. Menon, A. Ercole, D.W. Nelson, Clustering identifies endotypes of traumatic brain injury in an intensive care cohort: a CENTER-TBI study, Crit Care. 26 (2022) 228. https://doi.org/10.1186/s13054-022-04079-w.
[15] N. Tavakoli, S. Siami-Namini, M. Adl Khanghah, F. Mirza Soltani, A. Siami Namin, An autoencoder-based deep learning approach for clustering time series data, SN Appl Sci. 2 (2020) 1-25. https://doi.org/10.1007/s42452-020-2584-8.
[16] Q. Ma, S. Li, W. Zhuang, S. Li, J. Wang, D. Zeng, Self-supervised time series clustering with model-based dynamics, IEEE Trans Neural Netw Learn Syst. (2020) 1-14. https://doi.org/10.1109/tnnls.2020.3016291.
[17] J. de Jong, M.A. Emon, P. Wu, R. Karki, M. Sood, P. Godard, A. Ahmad, H. Vrooman, M. Hofmann-Apitius, H. Fröhlich, Deep learning for clustering of multivariate clinical patient trajectories with missing values, Gigascience. 8 (2019) 1-14. https://doi.org/10.1093/gigascience/giz134.
[18] C. Sun, S. Hong, M. Song, H. Li, A review of deep learning methods for irregularly sampled medical time series data (2020). http://arxiv.org/abs/2010.12493 (accessed October 8, 2022).
[19] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A.N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need (n.d.).
[20] M. Caron, P. Bojanowski, A. Joulin, M. Douze, Deep clustering for unsupervised learning of visual features (n.d.).
[21] Transforming Research and Clinical Knowledge in TBI | TRACK-TBI (n.d.). https://tracktbi.ucsf.edu/transforming-research-and-clinical-knowledge-tbi (accessed October 8, 2022).
[22] E. Bagiella, T.A. Novack, B. Ansel, R. Diaz-Arrastia, S. Dikmen, T. Hart, N. Temkin, Measuring outcome in traumatic brain injury treatment trials: Recommendations from the Traumatic Brain Injury Clinical Trials Network (n.d.). https://doi.org/10.1097/HTR.0b013e3181d27fe3.
[23] S.D. Yeatts, R.H. Martin, W. Meurer, R. Silbergleit, G.L. Rockswold, W.G. Barsan, F.K. Korley, D.W. Wright, B.J. Gajewski, Sliding scoring of the Glasgow Outcome Scale-Extended as primary outcome in traumatic brain injury trials, J Neurotrauma. 37 (2020) 2674-2679. https://doi.org/10.1089/neu.2019.6969.
[24] S. Emmons, S. Kobourov, M. Gallant, K. Börner, Analysis of network clustering algorithms and cluster quality metrics at scale, PLoS One. 11 (2016) e0159161. https://doi.org/10.1371/journal.pone.0159161.
[25] N. Farzaneh, C.A. Williamson, J. Gryak, K. Najarian, A hierarchical expert-guided machine learning framework for clinical decision support systems: an application to traumatic brain injury prognostication, NPJ Digit Med. 4 (2021). https://doi.org/10.1038/s41746-021-00445-0.
[26] E.W. Steyerberg, N. Mushkudiani, P. Perel, I. Butcher, J. Lu, G.S. McHugh, G.D. Murray, A. Marmarou, I. Roberts, J.D.F. Habbema, A.I.R. Maas, Predicting outcome after traumatic brain injury: Development and international validation of prognostic scores based on admission characteristics, PLoS Med. 5 (2008) e165. https://doi.org/10.1371/JOURNAL.PMED.0050165.
[27] G.A. Alexiou, G. Lianos, G. Fotakopoulos, E. Michos, D. Pachatouridis, S. Voulgaris, Admission glucose and coagulopathy occurrence in patients with traumatic brain injury, Brain Inj. 28 (2014) 438-441. https://doi.org/10.3109/02699052.2014.888769.
[28] A. Osuka, H. Ogura, M. Ueyama, T. Shimazu, J.A. Lederer, Immune response to traumatic injury: harmony and discordance of immune system homeostasis (2014). https://doi.org/10.1002/ams2.17.
[29] R. Hartl, M.B. Medary, M. Ruge, K.E. Arfors, J. Ghajar, Early white blood cell dynamics after traumatic brain injury: Effects on the cerebral microcirculation, 1997.
[30] J.J. Ratcliff, A.I. Greenspan, F.C. Goldstein, A.Y. Stringer, T. Bushnik, F.M. Hammond, T.A. Novack, J. Whyte, D.W. Wright, Gender and traumatic brain injury: Do the sexes fare differently?, Brain Inj. (n.d.). https://doi.org/10.1080/02699050701633072.
[31] A. Munivenkatappa, A. Agrawal, D.P. Shukla, D. Kumaraswamy, B.I. Devi, Traumatic brain injury: Does gender influence outcomes?, Int J Crit Illn Inj Sci. 6 (2016) 70. https://doi.org/10.4103/2229-5151.183024.
[32] D. Porte, M.W. Schwartz, Diabetes complications: Why is glucose potentially toxic?, Science 272 (1996) 699-700. https://doi.org/10.1126/SCIENCE.272.5262.699.
[33] J. Donnelly, M. Czosnyka, N. Sudhan, G.V. Varsos, N. Nasr, I. Jalloh, X. Liu, C. Dias, M.S. Sekhon, K.L.H. Carpenter, D.K. Menon, P.J. Hutchinson, P. Smielewski, Increased blood glucose is related to disturbed cerebrovascular pressure reactivity after traumatic brain injury, Neurocrit Care. 22 (2015) 20-25. https://doi.org/10.1007/S12028-014-0042-4.
| [] |
[
"TRANSVERSALS TO COLORFUL INTERSECTING CONVEX SETS",
"TRANSVERSALS TO COLORFUL INTERSECTING CONVEX SETS"
] | [
"Cuauhtemoc Gomez-Navarro ",
"Edgardo Roldán-Pensado "
] | [] | [] | Let K be a compact convex set in R 2 and let F 1 , F 2 , F 3 be finite families of translates of K such that A ∩ B ̸ = ∅ for every A ∈ F i and B ∈ F j with i ̸ = j. A conjecture by Dol'nikov is that, under these conditions, there is always some j ∈ {1, 2, 3} such that F j can be pierced by 3 points. In this paper we prove a stronger version of this conjecture when K is a body of constant width or when it is close in Banach-Mazur distance to a disk. We also show that the conjecture is true with 8 piercing points instead of 3. Along the way we prove more general statements both in the plane and in higher dimensions.A related result was given by Martínez-Sandoval, Roldán-Pensado and Rubin. They showed that if F 1 , . . . , F d are finite families of convex sets in R d such that for every choice of sets C 1 ∈ F 1 , . . . , C d ∈ F d the intersection d i=1 C i is non-empty, then either there exists j ∈ {1, 2, . . . , n} such that F j can be pierced by few points or n i=1 F i can be crossed by few lines. We give optimal values for the number of piercing points and crossing lines needed when d = 2 and also consider the problem restricted to special families of convex sets. | null | [
"https://export.arxiv.org/pdf/2305.16760v1.pdf"
] | 258,947,035 | 2305.16760 | 0a6826e156a6e8ce5ec82bec4c6dd660e3319852 |
TRANSVERSALS TO COLORFUL INTERSECTING CONVEX SETS
Cuauhtemoc Gomez-Navarro
Edgardo Roldán-Pensado
Let K be a compact convex set in R 2 and let F 1 , F 2 , F 3 be finite families of translates of K such that A ∩ B ≠ ∅ for every A ∈ F i and B ∈ F j with i ≠ j. A conjecture by Dol'nikov is that, under these conditions, there is always some j ∈ {1, 2, 3} such that F j can be pierced by 3 points. In this paper we prove a stronger version of this conjecture when K is a body of constant width or when it is close in Banach-Mazur distance to a disk. We also show that the conjecture is true with 8 piercing points instead of 3. Along the way we prove more general statements both in the plane and in higher dimensions. A related result was given by Martínez-Sandoval, Roldán-Pensado and Rubin. They showed that if F 1 , . . . , F d are finite families of convex sets in R d such that for every choice of sets C 1 ∈ F 1 , . . . , C d ∈ F d the intersection C 1 ∩ · · · ∩ C d is non-empty, then either there exists j ∈ {1, 2, . . . , d} such that F j can be pierced by few points or F 1 ∪ · · · ∪ F d can be crossed by few lines. We give optimal values for the number of piercing points and crossing lines needed when d = 2 and also consider the problem restricted to special families of convex sets.
Introduction
In this paper we study problems with hypothesis similar to that of the Colorful Helly theorem, we are interested in finding transversals to the families involved.
Let F be a family of sets in R d , a set T ⊂ R d is a transversal to the family F if every set C ∈ F intersects the set T . Additionally, if T is a k-flat (k-dimensional affine subspace of R d ) and is transversal to the family F, we say that T is a k-flat transversal. For example, a line transversal is a line that intersects every member of F and a hyperplane transversal is a hyperplane that intersects every member of F.
We say that the family F can be pierced by n points if there exist n points p 1 , . . . , p n ∈ R d such that every set C ∈ F intersects at least one of the points p 1 , . . . , p n ; in other words, P = {p 1 , . . . , p n } is a transversal to the family F. Similarly, we say that the family F can be crossed by n lines if there exist n lines l 1 , . . . , l n ∈ R d such that every set C ∈ F intersects at least one of the lines l 1 , . . . , l n ; in other words, l 1 ∪ · · · ∪ l n is a transversal to the family F.
We begin by giving a short list of theorems involving transversals including Helly-type theorems and Colorful theorems before presenting our main results in Section 2.
1.1. Helly-type theorems. Helly's theorem [Hel23] is probably one of the most famous theorems in discrete and convex geometry. It states that if a finite family of convex sets in R d satisfies that every d + 1 or fewer of them have non-empty intersection, then the whole family has non-empty intersection. The number d + 1 cannot be improved in Helly's theorem. However, it is possible to strengthen the hypothesis to obtain stronger results. For instance, Grünbaum proved the following theorem for families of homothetic copies of a convex set. Theorem 1.1 ( [Grü59]). For any integer d ≥ 1 there exists an integer c = c(d) such that the following holds. If F is a finite family of homothetic copies of a convex set in R d and any two members of F have non-empty intersection, then F can be pierced by c points.
The case of circles in the plane was solved in [Dan86], the case of triangles in the plane was solved in [DJ11], and a bound for the general planar case was given in [KNPS06]. Bounds for the general case were given in [KNPS06], [NT10] and [DJ11].
Theorem 1.2 ( [Dan86]). Let F be a finite family of circles in R 2 such that the intersection of every 2 sets in F is non-empty. Then F can be pierced by 4 points. Furthermore, there are examples where 3 points are not enough.
Theorem 1.3 ([DJ11]). Let F be a finite family of homothetic copies of a triangle in R 2 such that the intersection of every 2 sets in F is non-empty. Then F can be pierced by 3 points. Furthermore, there are examples where 2 points are not enough.
Theorem 1.4 ( [KNPS06]). Let F be a finite family of homothetic copies of a convex set in R 2 such that the intersection of every 2 sets in F is non-empty. Then F can be pierced by 16 points.
Karasev proved the following theorem for families of translates of a compact convex set in the plane.
Theorem 1.5 ([Kar00]). Let K be a compact convex set in R 2 . Let F be a finite family of translates of K such that the intersection of every 2 sets in F is non-empty. Then F can be pierced by 3 points. Furthermore, there are examples where 2 points are not enough.
In fact, Karasev [Kar08] also proved similar results in higher dimensions. We include another similar result concerning only axis-parallel boxes where the conclusion is strengthened.
Theorem 1.6 ([HD66, FDFK93]). Let F be a finite family of axis-parallel boxes in R d such that the intersection of every 2 sets in F is non-empty. Then ⋂ F ≠ ∅.
1.2. Colorful theorems. In 1973 Lovász proved a generalization of Helly's theorem called the Colorful Helly theorem.
Theorem 1.7 ([Bá82]). If we have d + 1 finite families F 1 , . . . , F d+1 of convex sets in R d such that for every choice of sets C 1 ∈ F 1 , . . . , C d+1 ∈ F d+1 the intersection C 1 ∩ · · · ∩ C d+1 is non-empty, then there exists i ∈ {1, . . . , d + 1} such that the family F i has non-empty intersection.
According to [JCMS15], Dol'nikov wondered whether there is a colorful version of Theorem 1.5. Then Dol'nikov proposed the following conjecture.
Conjecture 1.8 ([JCMS15]). Let K be a compact convex set in R 2 . Let F 1 , F 2 , F 3 be finite families of translates of K. Suppose that A ∩ B ≠ ∅ for every A ∈ F i and B ∈ F j with i ≠ j. Then there exists j ∈ {1, 2, 3} such that F j can be pierced by 3 points.
Jerónimo-Castro, Magazinov and Soberón [JCMS15] proved Conjecture 1.8 when K is centrally symmetric or a triangle. In fact, in the case where K is a circle they proved a stronger statement: if n ≥ 2 and F 1 , . . . , F n are finite families of circles with the same radius in the plane such that A ∩ B ≠ ∅ for every A ∈ F i and B ∈ F j (with i ≠ j), then there exists j ∈ {1, 2, . . . , n} such that ⋃ i≠j F i can be pierced by 3 points. They also made the following conjecture.
Conjecture 1.9 ([JCMS15]). Let K be a compact convex set in R 2 . Let F 1 , . . . , F n be finite families of translates of K, with n ≥ 2. Suppose that A ∩ B ≠ ∅ for every A ∈ F i and B ∈ F j with i ≠ j. Then there exists j ∈ {1, 2, . . . , n} such that ⋃ i≠j F i can be pierced by 3 points.
In 2020, Martínez-Sandoval, Roldán-Pensado and Rubin sought stronger conclusions for the Colorful Helly theorem [MSRPR20]. In some sense they were looking for a Very Colorful Helly theorem, similar to how the Colorful Carathéodory theorem has a Very Colorful version [HPT08, ABB + 09].
Theorem 1.10 ([MSRPR20]). For every d ≥ 2 there exist numbers f (d) and g(d) with the following property: Let F 1 , . . . , F d be finite families of convex sets in R d such that for every choice of sets C 1 ∈ F 1 , . . . , C d ∈ F d the intersection C 1 ∩ · · · ∩ C d is non-empty. Then one of the following statements holds:
1. there is a family F j , for j ∈ {1, . . . , d}, that can be pierced by f (d) points, or
2. the family F 1 ∪ · · · ∪ F d can be crossed by g(d) lines.
In particular, they showed that the 2-dimensional case of this theorem holds with f (2) = 1 and g(2) = 4. One may ask for the optimal pairs (f (d), g(d)) in this theorem; for example, does Theorem 1.10 hold with f (d) = 1 and large enough g(d)?
Main results
We prove Conjecture 1.8 when K is of constant width and when K is close to a circle in Banach-Mazur distance. For an arbitrary convex body, Conjecture 1.8 holds with 8 piercing points instead of 3. Additionally, we show that Conjecture 1.9 holds with 9 piercing points and that, when K has constant width, Conjecture 1.9 holds with 4 piercing points. Furthermore, we prove similar results in arbitrary dimension.
It should be noted that properties we are studying are invariant under linear transformations, so if any of the following theorems is true for a convex body K then it is also true for the image of K under a linear isomorphism.
Theorem 2.1. Let K be a convex body in R 2 . Let F 1 and F 2 be finite families of translates of K such that A ∩ B ≠ ∅ for every A ∈ F 1 and B ∈ F 2 . If K is of constant width or if K has Banach-Mazur distance at most 1.1178 to the disk, then either F 1 or F 2 can be pierced by 3 points.
The proof of this theorem, included in Section 6, can be extended to give an upper bound for the number of piercing points needed for any K. For each K a certain pentagon is constructed, the bound is dependent on the maximum number of translates of K needed to cover every rotation of this pentagon. As corollary of this proof, using the fact that any convex body is at Banach-Mazur distance at most 2 from the unit disk, we obtain the following.
Corollary 2.2. Let K be a compact convex set in R 2 . Let F 1 and F 2 be finite families of translates of K. Suppose that A ∩ B ≠ ∅ for every A ∈ F i and B ∈ F j with i ≠ j. Then there exists j ∈ {1, 2} such that F j can be pierced by 8 points.
In particular, this implies a weaker form of Dol'nikov's conjecture (Conjecture 1.8) where 8 points are needed to pierce one of the three families.
Theorem 2.3. Let K be a convex body in R d ; then there exists a number f K (d) with the following property. If F 1 , . . . , F n are finite families of translates of K with n ≥ 2 such that A ∩ B ≠ ∅ for every A ∈ F i and B ∈ F j with i ≠ j, then there exists j ∈ {1, 2, . . . , n} such that ⋃ i≠j F i can be pierced by f K (d) points. Moreover, in the planar case we have the following:
(a) we may choose f K (2) = 9 for any K, and
(b) we may choose f K (2) = 4 if K has constant width.
The proof of this theorem is included in Section 5 below.
Using the KKM theorem [KKM29], we show that Theorem 1.10 holds with f (2) = 1 and g(2) = 2. In [MSRPR20] it is shown that if g(d) is a number for which Theorem 1.10 holds then g(d) ≥ ⌈(d + 1)/2⌉.
Therefore the values f (2) = 1 and g(2) = 2 cannot be improved. We actually prove a more general result in R 2 .
Theorem 2.4. Let F 1 , . . . , F n be finite families of convex sets in R 2 with n ≥ 2. Suppose that A ∩ B ≠ ∅ for every A ∈ F i and B ∈ F j with i ≠ j. Then one of the following statements holds:
1. there exists j ∈ {1, 2, . . . , n} such that ⋃ i≠j F i can be pierced by 1 point, or
2. the family F 1 ∪ · · · ∪ F n can be crossed by 2 lines.
The proof of this result is included in Section 3 and first appeared in the master's thesis of the first author [GN22] in 2022. Even though the 2-dimensional case of Theorem 1.10 does not hold with g(2) = 1, it does for some special families of convex sets.
Theorem 2.5. Let K be a subfamily of the family of convex sets in R d ; then there exist numbers f K (d) and g K (d) with the following property. If F 1 , . . . , F n are finite families of sets in K with n ≥ 2 such that A ∩ B ≠ ∅ for every A ∈ F i and B ∈ F j with i ≠ j, then one of the following statements holds:
1. there exists j ∈ {1, 2, . . . , n} such that ⋃ i≠j F i can be pierced by f K (d) points, or
2. the family F 1 ∪ · · · ∪ F n can be crossed by g K (d) hyperplanes.
Moreover, for d = 2 and specific families K we have the following:
(a) if K is the family of translates of a given compact convex body, we may choose f K (2) = 3 and g K (2) = 1,
(b) if K is the family of homothetic copies of a given compact convex body, we may choose f K (2) = 16 and g K (2) = 1,
(c) if K is the family of homothetic copies of a triangle, we may choose f K (2) = 3 and g K (2) = 1, and
(d) if K is the family of circles, we may choose f K (2) = 4 and g K (2) = 1.
For general d we have the following:
(e) if K is the family of axis-parallel boxes, we may choose f K (d) = 1 and g K (d) = 1, and
(f) if K is the family of homothetic copies of a given compact convex body, we may choose g K (d) = 1.
Note that in this theorem, statement 2 is about crossing hyperplanes while in Theorem 1.10 it is about crossing lines. The proof is included in Section 4.
Proof of Theorem 2.4
We rely on the KKM theorem [KKM29] in order to prove Theorem 2.4. Let
∆ n = {(x 1 , . . . , x n+1 ) ∈ R n+1 : x i ≥ 0 for all i, x 1 + · · · + x n+1 = 1}
denote the n-dimensional simplex in R n+1 that is the convex hull of {e 1 , . . . , e n+1 }, the standard orthonormal basis of R n+1 .
Theorem 3.1 (KKM theorem [KKM29]). Let {O 1 , O 2 , . . . , O n+1 } be an open (or closed) cover of ∆ n such that e i ∈ O i for each i ∈ {1, 2, . . . , n + 1}, and conv{e i : i ∈ I} ⊂ ⋃ i∈I O i for each I ⊂ {1, 2, . . . , n + 1}. Then O 1 ∩ O 2 ∩ · · · ∩ O n+1 ≠ ∅.
Figure 1. Illustration for the proof of Theorem 2.4: the points f 1 (x), . . . , f 4 (x) on the unit circle, with f 4 (x) = (1, 0), and the regions R 1 x , . . . , R 4 x .
In order to prove Theorem 2.4 we follow ideas similar to the ones used in [MZ21].
Proof of Theorem 2.4. We can assume, without loss of generality, that the sets in the finite families are compact (see [GN22, Chapter 1]). Hence, we may scale the plane such that every set in F 1 ∪ · · · ∪ F n is contained in the unit disk. Let f (t) be a parametrization of the unit circle defined by
f (t) = (cos(2πt), sin(2πt)).
To each point x = (x 1 , . . . , x 4 ) ∈ ∆ 3 we associate 4 points on the unit circle given by
f i (x) = f (x 1 + · · · + x i ), for 1 ≤ i ≤ 4. Let l 1 (x) = l 3 (x) = [f 1 (x), f 3 (x)] and l 2 (x) = l 4 (x) = [f 2 (x), f 4 (x)]. For i = 1, . . . , 4 let R i x be the interior of the region bounded by l i−1 (x), l i (x) and the arc on the unit circle connecting f i−1 (x) and f i (x), where i − 1 is taken modulo 4 (see Figure 1).
Notice that f 4 (x) = (1, 0) for each x ∈ ∆ 3 . Also, the points f 1 (x), f 2 (x), f 3 (x), f 4 (x) are always in counter-clockwise order so the lines l 1 (x) = [f 1 (x), f 3 (x)] and l 2 (x) = [f 2 (x), f 4 (x)] intersect.
If there is some x ∈ ∆ 3 for which l 1 (x) ∪ l 2 (x) is a transversal to the family F 1 ∪ · · · ∪ F n , the second statement of the theorem holds and we are done. Otherwise, since the sets in F 1 ∪ · · · ∪ F n are convex, we may assume that for every x ∈ ∆ 3 there is a set C ∈ F 1 ∪ · · · ∪ F n contained in one of the four open regions R i x . For i = 1, . . . , 4, let O i be the set of points x ∈ ∆ 3 such that R i x contains a set C ∈ F 1 ∪ · · · ∪ F n . Since the sets C ∈ F 1 ∪ · · · ∪ F n are compact, O i is open. Notice that ∆ 3 = O 1 ∪ · · · ∪ O 4 , since for every x ∈ ∆ 3 there is a set C ∈ F 1 ∪ · · · ∪ F n contained in one of the four open regions R i x . Observe that if x ∈ conv{e i : i ∈ I} for some I ⊂ {1, . . . , 4}, then R j x = ∅ for j ∉ I, so x ∈ ⋃ i∈I O i . Therefore {O 1 , . . . , O 4 } is an open cover that satisfies the hypothesis of the KKM theorem (Theorem 3.1). Consequently there is a point y = (y 1 , . . . , y 4 ) ∈ ∆ 3 such that y ∈ O 1 ∩ · · · ∩ O 4 . In other words, each one of the open regions R i y contains a set C i ∈ F 1 ∪ · · · ∪ F n . Since the sets C 1 , . . . , C 4 are pairwise disjoint and sets from different families must intersect, the sets C 1 , . . . , C 4 are in the same family F j for some j ∈ {1, . . . , n}. Let p be the intersection point of the lines l 1 (y) = l 3 (y) and l 2 (y) = l 4 (y). Take any set B ∈ ⋃ i≠j F i . Since B ∩ C i ≠ ∅ for every i ∈ {1, . . . , 4} and B is convex, B intersects the line segments [p, f 4 (y)] and [p, f 2 (y)]. Therefore p ∈ B, and the first statement of the theorem holds (see Figure 2). □
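To make the construction above concrete, the following is a small numerical sketch (not part of the paper) that, for a given x ∈ ∆ 3 , computes the four points f i (x) and the crossing point p of the chords l 1 (x) and l 2 (x); the sample value of x is arbitrary.

import numpy as np

def f(t):
    return np.array([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])

def chords_and_crossing(x):
    # x is a point of the 3-simplex: four nonnegative entries summing to 1.
    cum = np.cumsum(x)                 # partial sums x_1, x_1 + x_2, ...
    pts = [f(c) for c in cum]          # f_1(x), ..., f_4(x) on the unit circle
    p1, p2, p3, p4 = pts
    # Solve p1 + s*(p3 - p1) = p2 + t*(p4 - p2) for the crossing of the two chords.
    A = np.column_stack([p3 - p1, -(p4 - p2)])
    s, t = np.linalg.solve(A, p2 - p1)
    return pts, p1 + s * (p3 - p1)

x = np.array([0.1, 0.3, 0.4, 0.2])
pts, p = chords_and_crossing(x)
print(np.round(pts, 3))    # the four points, with f_4(x) = (1, 0)
print(np.round(p, 3))      # intersection of l_1(x) and l_2(x)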
Proof of Theorem 2.5
Here we prove that for some special families of convex sets, the 2-dimensional case of Theorem 1.10 holds with g(2) = 1. We also show similar results in higher dimensions.
Proof of Theorem 2.5. To show that the numbers f K (d) and g K (d) exist we use Lemma 2.1 from [MSRPR20]. This lemma states the following: If A and B are finite families of convex sets in R d such that A ∩ B ≠ ∅ for every A ∈ A and B ∈ B, then either ⋂ A ≠ ∅ or B can be crossed by d hyperplanes. Even though there are only two families involved in this lemma, it can be used to prove the following.
Lemma 4.1. Let F 1 , . . . , F n be finite families of convex sets in R d with n ≥ 2 such that A ∩ B ≠ ∅ for every A ∈ F i and B ∈ F j with i ≠ j. Then one of the following statements holds:
1. there exists j ∈ {1, 2, . . . , n} such that ⋃ i≠j F i can be pierced by 1 point, or
2. the family F 1 ∪ · · · ∪ F n can be crossed by dn hyperplanes.
Proof. Assume that for every j ∈ {1, 2, . . . , n} the family ⋃ i≠j F i cannot be pierced by 1 point. For every j ∈ {1, 2, . . . , n} we take A = ⋃ i≠j F i and B = F j . By Lemma 2.1 from [MSRPR20], the family B = F j can be crossed by d hyperplanes. Thus, F 1 ∪ · · · ∪ F n can be crossed by dn hyperplanes. □
This lemma shows that we may always choose f K (d) = 1 and g K (d) = nd. To prove the rest of the statements, we use the following auxiliary result, which is also useful later.
Lemma 4.2. Let F 1 , . . . , F n be finite families of convex sets in R d with n ≥ 2 such that A ∩ B ≠ ∅ for every A ∈ F i and B ∈ F j with i ≠ j. Then one of the following statements holds:
1. there exists j ∈ {1, 2, . . . , n} such that every two sets in ⋃ i≠j F i intersect, or
2. the family F 1 ∪ · · · ∪ F n has a hyperplane transversal.
Proof. As usual, we may assume without loss of generality that the sets in F 1 ∪ · · · ∪ F n are compact. For every direction u ∈ S d−1 , let ℓ u be the line through the origin with direction u. By projecting the sets in F 1 ∪ · · · ∪ F n to the line ℓ u we obtain a finite family of intervals in ℓ u . If for some u ∈ S d−1 the intervals in ℓ u have a common point p, then the hyperplane through p orthogonal to u is transversal to the family F 1 ∪ · · · ∪ F n , and we are done. Otherwise, for every u ∈ S d−1 there are two disjoint intervals in ℓ u . Hence, for every u ∈ S d−1 , there are two sets A u , B u in the family F 1 ∪ · · · ∪ F n that are separated by a hyperplane orthogonal to u. Since A u ∩ B u = ∅, A u and B u must be in the same family F i for some i ∈ {1, . . . , n}.
We color the sphere S d−1 as follows. If there are two sets A u , B u ∈ F i that are separated by a hyperplane orthogonal to u, we color u ∈ S d−1 with color i. Let O i be the subset of S d−1 with color i. Since the convex sets are compact, the sets O i are open. Notice that the sets O 1 , . . . , O n cover S d−1 . We consider two cases. First suppose that at least two sets from O 1 , . . . , O n are non-empty. Since the sets O 1 , . . . , O n are open, there are two indices i, j ∈ {1, . . . , n} with i ≠ j such that O i and O j intersect. Let u ∈ O i ∩ O j ; then there are hyperplanes H and G orthogonal to u, and sets A, B ∈ F i and C, D ∈ F j , such that H strictly separates A from B, and G strictly separates C from D. This implies that one of the sets in {A, B} is strictly separated from one of the sets in {C, D} (see Figure 3), which contradicts the hypothesis of the lemma.
If there are two sets C, D ∈ i̸ =j F i such that C ∩ D = ∅, then C and D can be separated by a hyperplane H. Hence C and D do not have a hyperplane transversal parallel to H, a contradiction. Therefore every two sets in i̸ =j F i intersect. □
We are now ready to prove parts (a) to (f) of Theorem 2.5. In each of these cases, if the family F 1 ∪ · · · ∪ F n has a hyperplane transversal, we are done. Otherwise, by Lemma 4.2, there exists j ∈ {1, . . . , n} such that every two sets in ⋃ i≠j F i intersect. If this last condition implies that ⋃ i≠j F i can be pierced by f K (d) points, we are done; in parts (a), (b), (c), (d), (e) and (f) this follows from Theorems 1.5, 1.4, 1.3, 1.2, 1.6 and 1.1, respectively. We also note that for part (f), the work done in [KNPS06], [NT10] and [DJ11] implies that we may choose f K (d) = O(d d ).
□
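The projection argument in the proof of Lemma 4.2 can also be checked numerically; the following is a rough sketch (not part of the paper) that projects a few convex polygons onto a direction u and tests whether the projected intervals share a common point, which in the plane is exactly when there is a line transversal orthogonal to u. The example sets and the direction are arbitrary.

import numpy as np

def project(points, u):
    # Projection of a convex polygon (given by its vertices) onto direction u.
    d = points @ u
    return d.min(), d.max()

def has_orthogonal_transversal(polygons, u):
    # The sets have a line transversal orthogonal to u exactly when their
    # projections onto the line spanned by u have a common point.
    intervals = [project(P, u) for P in polygons]
    return max(a for a, b in intervals) <= min(b for a, b in intervals)

# Three toy convex sets (triangles) given by their vertex arrays.
polygons = [np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0]]),
            np.array([[1.0, 1.0], [3.0, 1.0], [1.0, 3.0]]),
            np.array([[1.8, -1.0], [3.5, -1.0], [1.8, 0.5]])]

u = np.array([1.0, 0.0])
print(has_orthogonal_transversal(polygons, u))   # True: the x-projections overlap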
Proof of Theorem 2.3
In [MSRPR20] it is shown that if instead of using d + 1 finite families in the Colorful Helly theorem we use only d, then it may be false that one of the families can be pierced by few points. In this section we prove that if the families are translates of a convex body, one of the families can indeed be pierced by few points.
Proof of Theorem 2.3. By part (f) of Theorem 2.5, either there is j ∈ {1, . . . , n} such that ⋃ i≠j F i can be pierced by f K (d) points, or the family F 1 ∪ · · · ∪ F n has a hyperplane transversal. In the first case we are done, so assume that F 1 ∪ · · · ∪ F n has a hyperplane transversal in the direction v 1 ∈ S d−1 .
For every u there is either a hyperplane orthogonal to u transversal to n i=1 F i or there are two sets in some family F k separated by a hyperplane H u orthogonal to u. In the second case H u is transversal to the family i̸ =k F i . In any case, for every direction u, there is a hyperplane transversal to at least n − 1 of the families F 1 , . . . , F n .
We color u ∈ S d−1 with color i if there is a hyperplane orthogonal to u and transversal to the family F i . Then every point in S d−1 is colored with at least n − 1 colors. Moreover, if a vector v is colored with all n colors, there is a hyperplane orthogonal to v transversal to the family
n i=1 F i . For instance, v 1 is colored with all n colors.
For every i ∈ {1, . . . , n}, let F i be the set of all the points in the sphere S d−1 with color i. Since the sets in the families are compact, then the sets F i are closed.
Lemma 5.1. If the sphere S d−1 is covered by n closed families F 1 , . . . , F n such that every u ∈ S d−1 belongs to at least n − 1 families and there exists v 1 ∈ F 1 ∩ · · · ∩ F n , then there is an orthonormal basis {v 1 , . . . , v d } of R d and an index j ∈ {1, . . . , n} such that v 1 , . . . , v d ∈ ⋂ i≠j F i .
Proof of Lemma 5.1. We proceed by induction on d. If d = 2, we extend v 1 to an orthonormal basis {v 1 , v 2 } of R 2 . By hypothesis, there exists j ∈ {1, . . . , n} such that v 2 ∈ i̸ =j F i . Thus, v 1 , v 2 are both in i̸ =j F i .
For d > 2 we consider the (d − 2)-dimensional sphere S ⊂ S d−1 orthogonal to v 1 . There are two cases. First suppose that there exists j ∈ {1, . . . , n} such that all the points in S are in i̸ =j F i . Then we extend v 1 to an orthonormal basis {v 1 , . . . , v d } of R d . Since {v 2 , . . . , v d } ⊂ S, then v 1 , . . . , v d are in i̸ =j F i and we are done. Otherwise, suppose that there are j, k ∈ {1, . . . , n} with j ̸ = k such that S contains points from both i̸ =j F i and i̸ =k F i . Since S is connected and the sets F i are closed, there is a point v 2 ∈ S ∩ n i=1 F i . By using the induction hypothesis on the (d − 1)-dimensional sphere S, we can complete {v 1 , v 2 } to an orthonormal basis {v 1 , .
. . , v d } of R d such that v 1 , . . . , v d ∈ i̸ =j F i for some j ∈ {1, . . . , n}. □
This lemma gives us an orthonormal basis {v 1 , . . . , v d } of R d and j ∈ {1, . . . , n} such that the family i̸ =j F i has hyperplane transversals H 1 , . . . , H d orthogonal to v 1 , . . . , v d , respectively.
Without loss of generality we may assume that K is compact and has non-empty interior. Then by John's ellipsoid theorem [Joh48] there is an ellipsoid E ⊂ K with center c such that E ⊂ K ⊂ c + d(E − c). After applying a linear transformation, we may assume that E is a unit ball. Let G j be the family of balls of radius d that contain some set from ⋃ i≠j F i and let H j be the family of balls of radius 1 that are contained in some set from ⋃ i≠j F i . Since every set in ⋃ i≠j F i intersects the hyperplanes H 1 , . . . , H d , it follows that for every direction v i , the centers of the balls in G j are contained in the strip bounded by two hyperplanes parallel to H i . The width of these strips is 2d. Therefore the centers of the balls in G j are contained in a d-dimensional cube with side 2d. The d-dimensional cube with side 2d can be covered by O(d log(d)(2d) d ) unit balls [Rog63] (see also [VG05]). This implies that every ball in H j intersects at least one of the centers of these unit balls. Therefore, we may choose f K (d) = O(d log(d)(2d) d ) points.
To prove part (a) of Theorem 2.3 this bound can be made precise. We need to cover a square of side 4 with unit circles. This can easily be done with 9 circles as shown on the left side of Figure 4. This implies that the family ⋃ i≠j F i can be pierced by 9 points.
For part (b), if K ⊂ R 2 is of constant width then, after applying a suitable linear transformation, we may assume that K contains a unit circle and is contained in a circle of radius (1 + √3)/2 (see e.g. [MMO19, Theorems 3.4.1 and 3.4.2]). We then continue as before and we are left with a square of side 1 + √3 which we need to cover with unit circles. This may be accomplished with 4 circles as shown on the right side of Figure 4. This implies that the family ⋃ i≠j F i can be pierced by 4 points. □
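The two coverings used at the end of this proof can be verified by a short computation (a sketch, not part of the paper): splitting a square of side s into a k × k grid of subsquares, every point lies within half a subsquare diagonal of the nearest subsquare center, so k² unit circles suffice whenever s√2/(2k) ≤ 1.

import math

def grid_cover_radius(side, k):
    # Maximum distance from a point of the square to the nearest center of a
    # k x k grid of subsquare centers (= half the diagonal of a subsquare).
    return (side / k) * math.sqrt(2) / 2

# Square of side 4 covered by 3 x 3 = 9 unit circles.
print(grid_cover_radius(4, 3))                 # ~0.943 <= 1
# Square of side 1 + sqrt(3) covered by 2 x 2 = 4 unit circles.
print(grid_cover_radius(1 + math.sqrt(3), 2))  # ~0.966 <= 1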
To prove Theorem 2.3 we used Theorem 2.5, and to prove Theorem 2.5 we used the monochromatic theorems. However, the monochromatic theorems are not necessary to prove Theorem 2.3. Indeed, we recall that in the proof of Theorem 2.5 we applied Lemma 4.2 and used the monochromatic theorems in the case when there is j ∈ {1, . . . , n} such that every two sets in ⋃ i≠j F i intersect. In the proof of Theorem 2.3 we can avoid the monochromatic theorems, since the fact that every two sets in ⋃ i≠j F i intersect is equivalent to the family ⋃ i≠j F i having hyperplane transversals in every direction (in the plane, line transversals), and we can conclude in the same way as in the other case of the proof of Theorem 2.3. Thus, to prove the bounds given in Theorem 2.3 we only need our Lemma 4.2.
Figure 5. The region bounded by the lines parallel to l 1 , l 2 , l 3 is contained in one of the two pentagons.
Proof of Theorem 2.1
In this section we prove that Dol'nikov's conjecture (Conjecture 1.8) holds if K is of constant width or if K has Banach-Mazur distance at most 1.1178 to the disk. Our results are slightly stronger than what Dol'nikov's conjecture would imply, since we use only 2 colors instead of 3.
Proof of Theorem 2.1. We proceed in a way similar to the proof of Theorem 2.3. If F 1 or F 2 can be pierced by 3 points, we are done. Otherwise, by part (a) of Theorem 2.5, F 1 ∪ F 2 has a line transversal l. Let u 1 ∈ S 1 be a direction orthogonal to l.
A consequence of the 1-dimensional colorful Helly theorem is that in every direction there is a line transversal to either F 1 or F 2 . We color the sphere S 1 so that u ∈ S 1 has color i whenever there is a line transversal to the family F i orthogonal to u.
Let u 2 , u 3 , u 4 ∈ S 1 be unit vectors such that, for each i ∈ {1, 2, 3}, the counter-clockwise angle between u i and u i+1 is π/4. By the pigeon-hole principle, among u 2 , u 3 and u 4 there are two vectors with a common color j ∈ {1, 2}, so there are 3 vectors among u 1 , u 2 , u 3 and u 4 with color j. We may assume that these 3 vectors are v 1 , v 2 , v 3 and satisfy that v 1 and v 2 are orthogonal and the counter-clockwise angle between v 1 and v 3 is π/4. Let l 1 , l 2 and l 3 be lines transversal to F j orthogonal to v 1 , v 2 and v 3 , respectively.
To prove the first part of Theorem 2.1, suppose that K has constant width 1. Then every translate of K is contained in a square of side 1. We denote by G j the family of squares of side 1 that contain some set from F j . Since every set in F j intersects the lines l 1 , l 2 , l 3 , it follows that for every direction v i , the centers of the squares in G j are contained in the strip bounded by two lines parallel to l i . The width of these strips is 1. Assume that v 1 = (1, 0); then the centers of the squares in G j are always contained in one of the two pentagons from Figure 5.
We claim that each of the two pentagons can be covered by 3 translates of −K. This would imply that the family F j can be pierced by 3 points. By symmetry we only have to show that one of the two pentagons from Figure 5 can be covered by 3 translates of −K. In order to do this, we label and divide the pentagon as in Figure 6. Here I is the midpoint of the segment CD. The points F , G and H are points inside the pentagon ABCDE such that F is in the segment AE, G is in the segment AB and AGHF is a square of side 5/8. Below we prove that the square AGHF and the pentagons GBCIH and F HIDE can each be covered by one translate of −K. By using a result of Chakerian [Cha69], we only need to prove that every congruent copy of the square AGHF and every congruent copy of the pentagon GBCIH (or F HIDE) can be covered by a Reuleaux triangle of width 1. Figure 7 shows how every rotation of the square AGHF of side 5/8 can be covered by a Reuleaux triangle of width 1. Figure 8 shows how every rotation of the pentagon F HIDE can be covered by a Reuleaux triangle of width 1.
For the second part of Theorem 2.1, where the Banach-Mazur distance between K and a disk is at most 1.1178, we may assume (after a linear transformation) that K contains a circle of radius r = 1/2.2356 ≈ 0.4473 and is contained in a circle of radius 1/2. Let G j be the family of circles of radius 1/2 that contain some set from F j and let H j be the family of circles of radius r that are contained in some set from F j . Since every set in F j intersects the lines l 1 , l 2 , l 3 , it follows that for every direction v i , the centers of the circles in G j are contained in the strip bounded by two lines parallel to l i . The width of these strips is 1. Assume, without loss of generality, that v 1 = (1, 0); then the centers of the circles in G j are contained in one of the two pentagons from Figure 5. Suppose that the centers of the circles in G j are contained in the pentagon ABCDE shown in Figure 6. This pentagon can be covered with 3 circles of radius at most r as shown in Figure 9. Here I is the midpoint of the segment CD, and M and N are the points in the segments AE and AB such that AM = AN and M N = 2r. From here a simple calculation gives that the circumradii of the triangles M IE, AN M and N BI are all at most r. This implies that the family F j can be pierced by 3 points. □
Conclusions and future work
In Theorem 2.4 we gave the best numbers for the 2-dimensional case of Theorem 1.10. An interesting question is what the best numbers in this theorem are for any given dimension. In particular, does Theorem 1.10 hold with f (d) = 1 and g(d) = ⌈(d + 1)/2⌉? Theorem 2.4 shows that this is true for d = 2.
Another question is how much can the numbers in Theorems 2.3 and 2.5 be improved. We do not believe that they are optimal.
On the other hand, we use Lemma 4.2 to connect colorful theorems with monochromatic ones. For example, we used it to prove that Theorems 1.5, 1.1, 1.2 and 1.6 imply parts (a), (b), (c) and (d) of Theorem 2.5, respectively. Perhaps similar techniques could be useful in order to prove the following colorful conjecture of Theorem 1.2.
Conjecture 7.1. Let F 1 , F 2 be finite families of circles in R 2 . Suppose that A ∩ B ≠ ∅ for every A ∈ F 1 and B ∈ F 2 . Then either F 1 or F 2 can be pierced by 4 points.
Acknowledgments
This work was supported by UNAM-PAPIIT IN111923.
Figure 2. If the blue sets are in the family F j , then every set in the family ⋃ i≠j F i contains the point p.
Figure 3. In this example A ∩ D = ∅.
Figure 4. The square of side 4 can be covered with 9 unit circles; a square of side 1 + √3 can be covered with 4 unit circles.
Figure 6. We divide the pentagon ABCDE into a square of side 5/8 and two pentagons.
Figure 7. Every rotation of the square of side 5/8 can be covered by a Reuleaux triangle of width 1.
Figure 8. Every rotation of the pentagon can be covered by a Reuleaux triangle of width 1. Indeed, in (a) we rotate the pentagon until the red pentagon, then in (b) we rotate the red pentagon with center in the red vertex until the blue pentagon, then we rotate the blue pentagon with center in the blue vertex until the green pentagon. Finally, in (c) we translate the green pentagon to the red pentagon, and this red pentagon is a pentagon in (a) rotated by 2π/3.
Figure 9. The pentagon can be covered with 3 circles of radius at most 0.4474.
(C. Gomez-Navarro) Facultad de Ciencias, UNAM, Ciudad de Mexico, Mexico. Email address: [email protected]
(E. Roldán-Pensado) Centro de Ciencias Matemáticas, UNAM Campus Morelia, Morelia, Mexico. Email address: [email protected]
| [] |
[
"Horocyclic and geodesic orbits on geometrically infinite surfaces with variable negative curvature",
"Horocyclic and geodesic orbits on geometrically infinite surfaces with variable negative curvature"
] | [
"Victoria García "
] | [] | [] | Here we study the behaviour of the horocyclic orbit of a vector on the unit tangent bundle of a geometrically infinite surface with variable negative curvature, when the corresponding geodesic ray is almost minimizing and the injectivity radius is finite. | null | [
"https://export.arxiv.org/pdf/2305.16538v1.pdf"
] | 258,947,549 | 2305.16538 | bf88731dfefff8ce1f006caabb54432a3eef80a1 |
Horocyclic and geodesic orbits on geometrically infinite surfaces with variable negative curvature
Victoria García
Horocyclic and geodesic orbits on geometrically infinite surfaces with variable negative curvature
Here we study the behaviour of the horocyclic orbit of a vector on the unit tangent bundle of a geometrically infinite surface with variable negative curvature, when the corresponding geodesic ray is almost minimizing and the injectivity radius is finite.
Introduction
Let M be an orientable geometrically infinite surface with a complete Riemannian metric of negative curvature, and let M̃ be its universal cover. Let us suppose that Γ is the fundamental group of M. We can see it as a subgroup of the group of orientation-preserving isometries of M̃, that is, Γ < Isom⁺(M̃). We can then write M = Γ\M̃.
If T¹M and T¹M̃ are the unit tangent bundles of M and M̃ respectively, then we can also write:
T¹M = Γ\T¹M̃.
If the curvature of M̃ has an upper bound −κ², with κ > 0, then the geodesic flow on T¹M̃, denoted by g_R, is an Anosov flow (see the appendix of [2]), and this flow descends to T¹M. The strong stable manifolds of the geodesic flow define a foliation (as we shall see in Section 2), which gives rise to the stable horocycle flow, denoted by h_R, and it also descends to T¹M.
A work of Hedlund shows that if the surface M has constant negative curvature and is compact, then the horocycle flow is minimal on the unit tangent bundle. This means that the only closed nonempty invariant set for the horocycle flow is T¹M ([9]). Later, F. Dal'Bo generalized this result to compact surfaces of variable negative curvature ([5]). She also proved that in case the fundamental group is finitely generated, all the horocycles on the non-wandering set are either dense or closed, motivating the interest in studying geometrically infinite surfaces.
S. Matsumoto studied a family of geometrically infinite surfaces of constant curvature, for which he proved that the horocycle flow on the unit tangent bundle does not admit minimal sets ([10]). Also Alcalde, Dal'Bo, Martínez and Verjovsky ([1]) studied this family of surfaces, which appear as leaves of foliations, and proved the same result in this context. More recently, A. Bellis studied the links between geodesic and horocycle orbits for some geometrically infinite surfaces of constant negative curvature ([3]). In particular, his result implies Matsumoto's result.
Our aim was to determine whether or not the results of these last works are still valid if we drop the hypothesis of constant curvature, and to provide arguments that do not depend on specific computations which are only valid in constant negative curvature.
Let us introduce the following definitions.
Definition 1.1. Let p be a point on a surface M. The injectivity radius of p is defined as the infimum of the distances d(p̃, γ(p̃)) over the nontrivial elements γ ∈ Γ, where p̃ is any lift of p to the universal cover M̃ of M.
Definition 1.2. If v ∈ T¹M, the geodesic ray v[0, ∞) is the projection of the future geodesic orbit {g_t(v) : t ∈ [0, ∞)} of v on M. Also, v(t) will be the projection on M of g_t(v) ∈ T¹M.
Definition 1.3. A geodesic ray v[0, ∞) on M is said to be almost minimizing if there is a positive real number c such that d(v(t), v(0)) ≥ t − c for all t ≥ 0.
Definition 1.4. Let v[0, ∞) be a geodesic ray; then we define its injectivity radius as
Inj(v[0, ∞)) := lim inf_{t→∞} Inj(v(t)).
We prove the following theorem:
Theorem 1.5. Let M be an orientable geometrically infinite surface with a complete Riemannian metric of negative curvature. Let v ∈ T¹M be such that v[0, ∞) is an almost minimizing geodesic ray with finite injectivity radius a, and such that h_R(v) is not closed. Then there is a sequence of times τ_n going to ∞ such that g_{τ_n}(v) lies in the closure of h_R(v) for all n. And even more, the set I = {t ∈ R : g_t(v) does not lie in the closure of h_R(v)} only contains intervals of bounded length.
This is the result that was proved by Bellis ( [3]) for surfaces of constant negative curvature. Our proof takes some ideas of Bellis's proof, but introduces a different approach in some parts.
The following result generalizes the one proved by Matsumoto in the context of constant negative curvature.
Definition 1.6. Let M be a noncompact Riemannian surface with variable negative curvature, and let Γ be its fundamental group. We say that M is tight if Γ is purely hyperbolic, and M can be written as
M := M_1 ∪ M_2 ∪ ...
where M_n ⊂ M_{n+1} for all n, each M_n is a compact, not necessarily connected submanifold of M with boundary ∂M_n, and the boundary components are closed geodesics whose lengths are bounded by some uniform constant A ∈ R.
I would like to thank my advisors M. Martínez and R. Potrie for their invaluable help while writing this article. Time spent in conversations with F. Dal'Bo was very helpful for understanding horocycle flows, especially in geometrically infinite and variable curvature contexts. I would also like to thank S. Burniol for a very useful hint to prove Propositions 3.4 and 3.5, and for a careful reading of this text.
Preliminaries
If u ∈ T 1M , we denote by g R (u) the geodesic passing through u in T 1M , and by u(R) the projection of this geodesic onM . So that u(t) will denote the projection of g t (u) onM . We denote by h R (u) the horocycle passing through u.
We will also denote by π the projection from T 1 M to M , and byπ the projection from T 1M to T 1 M .
Boundary at Infinity
The boundary at infinity is the set of endpoints of all the geodesic rays. Here we give a formal definition.
Definition 2.1. We are going to say that the geodesics directed by two vectorsṽ and v ′ ∈ T 1M have the same endpoint if sup t>0 d(π(g t (ṽ)),π(g t (ṽ ′ ))) < ∞.
When this holds, we write:ṽ ∼ * ṽ ′ .
Definition 2.2. The boundary at infinity ofM will be the set defined by
∂ ∞M := T 1M / ∼ * .
For anyṽ ∈ T 1M , we denote byṽ(∞) its equivalence class by the relation ∼ * . Given two different points ξ and η of ∂ ∞M , we will denote by (ξ, η) the geodesic onM joining them.
The action of Isom + (M ) onM can be naturally extended toM ∪ ∂ ∞M . An isometry of Isom + (M ) is said to be hyperbolic if it has exactly two fixed points and both of them lie on ∂ ∞M .
parabolic if it has a unique fixed point and it lies on ∂ ∞M .
elliptic if it has at least one fixed point inM .
Every isometry Isom + (M ) is either hyperbolic, elliptic or parabolic (see [4]).
The Busemann function and horocycles
The Busemann function is one of the main tools we need to describe horocycles and their properties. In this section we show how to construct this function.
Definition 2.3. For x, y, z ∈M we can define b :M ×M ×M −→ R b(x, y, z) = d(x, y) − d(x, z), x, y, z ∈M .
As d is a continuous function, b is also continuous.
Remark 2.4. Some properties of b that can be deduced from the definition are:
1. b(x, y, y) = 0 for all x, y ∈ M̃;
2. |b(x, y, z) − b(x, y, z′)| ≤ d(z, z′) for all x, y, z, z′ ∈ M̃;
3. b(x, y, z) = b(x, y, z′) + b(x, z′, z) for all x, y, z, z′ ∈ M̃.
See chapter II.1 of [2] for a proof.
Let {x_n} be a sequence on M̃ such that x_n → ξ ∈ ∂_∞M̃. Then b_y(x_n) converges in C(M̃) to some function B_ξ(y, ·). This will be the Busemann function at ξ, based at y. Explicitly we have
B_ξ(y, z) := lim_{x_n→ξ} [d(x_n, y) − d(x_n, z)],   (1)
where x n is a sequence onM that goes to ξ. This definition is independent of the choice of the sequence x n (see chapter II.1 of [2]). For all ξ ∈ ∂ ∞M , we have B y ξ :M −→ R, where B y ξ (z) = B ξ (y, z) for all z ∈M , is a continuous function, and then the level set (B y ξ ) −1 (t) is a regular curve (meaning that it admits a C 1 arclength parametrization) for all t ∈ R (see chapter IV.3 of [2]). Let us denote by H y (ξ, t) the level set (B y ξ ) −1 (t), and for each p ∈ H y (ξ, t) consider the only vector v p ξ ∈ T 1 pM such thatṽ p ξ (∞) = ξ. Then the set
H y (ξ, t) := {ṽ p ξ : p ∈ H y (ξ, t)}
is the horocycle throughṽ p ξ in T 1M , for all p ∈ H y (ξ, t). We have that π(Ĥ y (ξ, t)) = H y (ξ, t), andĤ y (ξ, t) ⊂ T 1M is the strong stable set ofṽ p ξ for the geodesic flow, which can also be parametrized by arclength. Now, the horocycle flow h s (v), pushes a vector v along its strong stable manifold, through an arc of length s.
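For orientation, here is a standard worked example (added for illustration only; it uses the constant curvature −1 model H = {z ∈ C : Im z > 0} rather than the variable-curvature surfaces considered in this paper). Take the boundary point ξ = ∞, the base point y = i, and the sequence x_n = i e^{t_n} with t_n → ∞ in definition (1):

```latex
% Illustration (not from this paper): the Busemann function in the upper half-plane
% based at i for xi = infinity, computed from definition (1) with x_n = i e^{t}.
\[
  B_\infty(i,z)
  \;=\; \lim_{t\to\infty}\bigl( d(ie^{t}, i) - d(ie^{t}, z) \bigr)
  \;=\; \lim_{t\to\infty}\bigl( t - \bigl( t - \log \operatorname{Im} z + o(1) \bigr) \bigr)
  \;=\; \log \operatorname{Im} z .
\]
% Its level sets, the horocycles based at infinity, are the horizontal lines
% Im z = const, and the horocycle flow slides the upward unit vectors along them.
```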
Given two elements u and v in T 1M , let z u = u(0) and z v = v(0) be their respective base points inM . Let us suppose that there are t, s ∈ R such that g t
(u) = h s (v) or, in other words, there is t ∈ R such that g t (h R (u)) = h R (v). In this case u(∞) = v(∞). Let us suppose that u(∞) = v(∞) = ξ ∈ ∂ ∞M .
Then the Busemann function centered in ξ evaluated at (z u , z v ) happens to be the real number t mentioned above. We denote it by
B ξ (z u , z v ) = t. Remark 2.6. If u ∈ T 1M is such that u(0) = o and u(∞) = ξ, then B ξ (o, z) = lim t→∞ [d(o, u(t)) − d(z, u(t))] = lim t→∞ t − d(z, u(t)).
Limit set and classification of limit points
The limit set is a special subset of the boundary at infinity. We classify limit points, and show their links with the behaviour of horocyclic orbits.
Definition 2.7. The limit set L(Γ) of the group Γ, is the set of accumulation points of an orbit Γz, for some z ∈M . This is well defined because all orbits have the same accumulation points (see chapter 1.4 of [4]).
One has L(Γ) ⊂ ∂ ∞M . Otherwise, a sub-sequence of the orbit would remain on a compact region ofM , contradicting the fact that Γ acts discontinuously onM .
In the limit set, we can distinguish two different kinds of points:
Definition 2.8. A limit point ξ ∈ ∂ ∞M is said to be horocyclic if given any z ∈M , and t ∈ R, there is γ ∈ Γ such that B ξ (o, γ(z)) > t. Otherwise, we say ξ is a nonhorocyclic limit point. (See figure 1) The sets of the form
{z ∈ M̃ : B_ξ(o, z) > t}, t ∈ R,
are called horodisks based at ξ. So, in other words, a limit point ξ is horocyclic if each horodisk based at ξ intersects the orbit Γz, for all z ∈ M̃.
Remark 2.9. Given a point ξ ∈ ∂_∞M̃, if there is a sequence {γ_n} ⊂ Γ such that B_ξ(o, γ_n^{-1}(o)) → ∞ as n → ∞ for any o ∈ M̃, then ξ is a horocyclic limit point. This is because if B_ξ(o, γ_n^{-1}(o)) → ∞, any horodisk {z ∈ M̃ : B_ξ(o, z) > t} will contain an element of the Γ-orbit of o.
We denote by Λ Γ the image byπ of the set {ṽ ∈ T 1M :ṽ(∞) ∈ L(Γ)}.
Proposition 2.10. If ξ ∈ L(Γ) is a horocyclic limit point, then for all ṽ ∈ T¹M̃ such that ṽ(∞) = ξ, we have that π̃(h_R(ṽ)) is dense in Λ_Γ.
The proof of this proposition can be found in [5], as Proposition B.
Given v ∈ T 1 M (or T 1M ), we denote by v[0, ∞) the geodesic ray g t (v) t∈[0,∞) .
Almost minimizing geodesic rays
The following proposition relates the behaviour of a geodesic ray with its endpoint. First, we recall the following definition:
Definition 2.11. A geodesic ray v[0, ∞) on M is said to be almost minimizing if there is a positive real number c such that d(v(t), v(0)) ≥ t − c for all t ≥ 0. (See Figure 2.)
Proposition 2.12. Let ξ ∈ L(Γ) and ṽ ∈ T¹M̃ such that ṽ(∞) = ξ. Then the projected geodesic ray v[0, ∞) over M is almost minimizing if and only if ξ is a nonhorocyclic limit point.
Proof. Take a reference point o and suppose without loss of generality that ṽ ∈ T¹_o M̃. Let us suppose first that ξ = ṽ(∞) is a nonhorocyclic limit point. Then there is a horodisk H based at ξ that does not contain any point of the Γ-orbit of o. Let us take H = {z ∈ M̃ : B_ξ(o, z) > k}, with k > 0. Then for all γ ∈ Γ, one has B_ξ(o, γ(o)) ≤ k. And because of Remark 2.6, this means that
lim t→∞ [d(o,ṽ(t)) − d(γ(o),ṽ(t))] = lim t→∞ [t − d(γ(o),ṽ(t))] ≤ k.
As this limit is non-decreasing, we have for any t > 0:
t − d(γ(o), ṽ(t)) ≤ k,
and then d(γ(o), ṽ(t)) ≥ t − k.
As this happens for any γ ∈ Γ, down on the surface M , this implies that
d(v(0), v(t)) ≥ t − k,
which means that v[0, ∞) is an almost minimizing geodesic ray. All these implications can be reversed to prove that an almost minimizing geodesic ray is projected from a geodesic ending at a nonhorocyclic limit point.
Also a proof of the proposition above can be found in [10].
Links between geodesic and horocyclic orbits
In this section, M will be an orientable geometrically infinite surface with a complete Riemannian metric of negative curvature, M̃ its universal cover and Γ its fundamental group.
As we mentioned before, the geodesic flow on T 1 M is an Anosov flow (see [8]), and the stable manifolds, which are contracted by this flow (see ch. IV of [2]), have the level sets of the Busemann functions as their projections to M .
The strong stable manifold of the geodesic flow is defined as follows:
Definition 2.13. Consider the geodesic flow g t : T 1 M −→ T 1 M , and take v ∈ T 1 M . Then the strong stable manifold of v will be the set
W^s(v) := {u ∈ T¹M : d(g_t(v), g_t(u)) → 0 as t → ∞}.
The following proposition gives a criterion we are going to use in the proof of Theorem 1.5.
Proposition 2.14. For u ∈ T¹M, if there are sequences u_n ∈ T¹M and r_n ∈ R such that u_n → u in T¹M, r_n → r_0 ∈ R, and d(g_{t+r_n}(u_n), g_t(u)) → 0 as t → ∞, then g_{r_0}(u) lies in the closure of h_R(u).
Proof. As d(g t+rn (u n ), g t (u)) − −− → t→∞ 0, this means that g rn (u n ) is in the strong stable manifold of u for all n. Then, there is a sequence {s n } n∈N such that g rn (u n ) = h sn (u). Then u n = g −rn h sn (u).
By hypothesis we have u_n → u as n → ∞, and then g_{−r_n} h_{s_n}(u) → u as n → ∞. It follows that h_{s_n}(u) → g_{r_n}(u) in T¹M. This implies that d(h_{s_n}(u), g_{r_n}(u)) → 0 as n → ∞, and as r_n → r_0, the family {g_{r_n}} is equicontinuous, and we finally have
h_{s_n}(u) → g_{r_0}(u) as n → ∞,
where h_{s_n}(u) is a sequence on h_R(u), and so g_{r_0}(u) lies in the closure of h_R(u), as we wanted to see.
Geometric properties of horocycles
In this section, we prove some geometric properties of horocycles and the Busemann function, which are going to be useful tools in the proof of Theorem 1.5.
First, we introduce some additional notation. If γ ∈ Γ is a hyperbolic isometry and (γ⁻, γ⁺) is the axis of γ, where γ⁻, γ⁺ ∈ ∂_∞M̃ are its fixed points, we will denote by C_γ(p) the curve passing through p whose points are at a constant distance from (γ⁻, γ⁺). In general, if c : R → M̃ is a geodesic, then C_c(p) will be the curve passing through p whose points are at a constant distance from c(R).
For any regular connected curve C , and p, q ∈ C , we will write [p, q] C to denote the arc contained in C joining p and q.
Finally, for ξ ∈ ∂ ∞M and p ∈M , the horocycle based at ξ passing through p will be denoted by H ξ (p).
Remark 3.1. For any hyperbolic isometry γ ∈ Γ and any p ∈M , the curve C γ (p) is a regular curve. In fact, one can construct charts of C γ (p) from the charts of the axis of γ, which is a geodesic.
The tools for proving the following proposition can be found in Ch. 9 of [7].
(Statement 2 of Proposition 3.4.) Up to changing the orientation of γ, there is an increasing function δ : R⁺ → R⁺ such that if q ∈ [p, γ(p)]_{C_γ(p)} and d(p, q) > ϵ, then |B_ζ(q, p)| > δ(ϵ). In particular, |B_ζ(p, γ(p))| > δ(ϵ).
Proof. First, let us give a parametrization a(t) of the curve C c (p) such that for each t ∈ R we have that d(a(t), c(t)) is constant and equal to l = d(p, c(R)), and p = a(0). Let b t : [0, l] −→M the geodesic joining a(t) and c(t) (see, figure below). Now we writė b t (s) = ∂bt ∂s (s), with s ∈ [0, l]. Alsoȧ andċ will refer to ∂a ∂t and ∂c ∂t respectively. We have then that ⟨ḃ t (0),ȧ(t)⟩ = ⟨ḃ t (l),ċ(t)⟩ = 0. This holds since the curves parametrized by a(t) and c(t) are closed submanifolds ofM , and in view of Corollary 3.3, the distance between them is assumed by a geodesic perpendicular to both c(R) and a(R).
If we look at the Busemann function B p ζ (a(t)) along the curve a(t), where B p ζ (z) := B ζ (p, z) for all z ∈M , we see that its derivative vanishes if and only if ⟨∇B p ζ (a(t)),ȧ(t)⟩ = 0, as the directional derivative of a function is zero if and only if the gradient of the function is orthogonal to the direction of the derivative. We are going to show that this derivative vanishes at most for one value of t. If we show this, then C c (p) can only meet a level set of B p ζ at most two times, as we want to prove.
Suppose then that there is t 1 such that ⟨∇B p ζ (a(t 1 )),ȧ(t 1 )⟩ = 0. Then, ∇B p ζ (a(t 1 )) = Kḃ t 1 (0) for some K ∈ R, asḃ t 1 (0) ⊥ȧ(t 1 ). Then, the geodesic ray directed byḃ t 1 (0) (or −ḃ t 1 (0)) has the same endpoint as ∇B p ζ (a(t 1 )), which is ζ. Suppose now that there is an other t 2 for which ⟨∇B p ζ (a(t 2 )),ȧ(t 2 )⟩ = 0, then we also have ∇B p ζ (a(t 2 )) =Kḃ t 2 (0), for someK ∈ R, and we can assume as well that geodesic ray directed byḃ t 2 (0) has endpoint ζ. Then, the geodesic triangle with vertices ζ, c(t 1 ) and c(t 2 ) would have two right angles, and an angle equal to 0, contradicting Gauss-Bonnet theorem: we have that the integral of the curvature of the surface on the interior region of the triangle, equals π minus the sum of the interior angles of the triangle (see chapter 7 of [11] for a proof). As our surfaces has negative curvature, this integral should be strictly negative, so the interior angles of the triangle cannot have sum equal to π. Then, the derivative of B p ζ (a(t)) can only vanish for one value of t, as we wanted to see. Now we are going to prove the second statement. As the Busemann function and a(t) are continuous, B p ζ (a(t)) is continuous as well. In one hand we have B p ζ (a(0)) = 0, and in the other hand C c (p) only meets at most in one other point the set of level 0 of B p ζ . Then, choosing the appropriate orientation for a(t), we have that |B p ζ (a(t))| is an increasing function of t. As |B p ζ (a(t))| is a continuous function of t, for every ϵ > 0 there is δ(ϵ) > 0 such that if d(p, a(t)) > δ(ϵ) then |B p ζ (a(t))| > ϵ, and as |B p ζ (a(t))| is increasing, then δ is also increasing. 2. If η is the fixed point of a parabolic isometry γ then, and z ∈ H η (q) ∩ H ζ (p), then up to changing the orientation of γ, we have that for every ϵ > 0 there is a δ(ϵ), with δ(ϵ) being an increasing function of ϵ, such that if x ∈ [z, γ(z)] Hη(q) and d(z, x) > ϵ, then |B ζ (x, z)| > δ(ϵ). In particular |B ζ (z, γ(z))| > δ(ϵ).
Proof. As in proposition 3.4, to show the first statement we are going to see that the derivative of B p ζ along the curve H η (q) vanishes at most in one point, where B p ζ (z) = B ζ (p, z) for all z ∈M . And then, H η (q) can meet a level set of B p ζ in at most two points. We first give an arc length parametrization a(t) to the curve H η (q), such that q = a(0). The derivative of B p ζ (z) on the direction of a(t) vanishes on a point a(t 0 ) if and only if ⟨∇B p ζ (a(t 0 )),ȧ(t 0 )⟩ = 0. Then ∇B p ζ (a(t 0 )) is normal to the curve H η (q), and then there is k ∈ R such that ∇B p ζ (a(t 0 )) = k∇B q η (a(t 0 )), since the gradient of the Busemann function B q η is perpendicular to its level set. Then, the geodesic directed by the vector ∇B p ζ (a(t 0 )) has endpoints ζ and η, since the gradient of the Busemann function B q η is parallel to the geodesic joining q and η (see proposition 3.2 of [2]). If there is an other point a(t 1 ) such that ⟨∇B p ζ (a(t 1 )),ȧ(t 1 )⟩ = 0, then we would also have that the geodesic directed by the vector ∇B p ζ (a(t 1 )) is the geodesic joining η and ζ. As the geodesic joining two points is unique, this geodesic would be meeting H η (q) in two points: a(t 0 ) and a(t 1 ). But a geodesic ending at η only can meet a level set of B q η once, as B q η is increasing (or decreasing) along geodesics having η as one of its endpoints 1 . Then, a(t 0 ) = a(t 1 ), as we wanted.
The proof of the second statement is analogous to the second statement of proposition 3.4, using that, up to changing orientation of γ, the derivative of B p ζ has constant sign along the arc of the curve H η (q) containing z and γ(z).
Proof of Theorem 1.5
We are now going to prove Theorem 1.5 in several steps. First, we remind the statement of the theorem:
Let M be an orientable geometrically infinite surface with a complete Riemannian metric of negative curvature. Let v ∈ T¹M be such that v[0, ∞) is an almost minimizing geodesic ray with finite injectivity radius a, and such that h_R(v) is not closed. Then there is a sequence of times τ_n going to ∞ such that g_{τ_n}(v) lies in the closure of h_R(v) for all n. Moreover, the set I = {t ∈ R : g_t(v) does not lie in the closure of h_R(v)} only contains intervals of bounded length.
Let v ∈ T¹M be as in the hypothesis of Theorem 1.5, that is, v[0, ∞) is an almost minimizing geodesic ray with finite injectivity radius a, and h_R(v) is not a closed horocycle. Let ṽ be a lift of v to T¹M̃. We will call ξ the point ṽ(∞) ∈ ∂_∞M̃.
Lemma 4.1. There is a sequence {ṽ_n}_{n∈N} ⊂ T¹M̃ such that
1. ṽ_n(0) = γ_n(ṽ(0)) for some γ_n ∈ Γ.
Let us see now that B ξ (γ n (ṽ(0)),ṽ(0)) ∈ [b, B]. Up to taking some positive power of γ n , we can assume that the length of α n is between a + ϵ and 2(a + ϵ). Because of the construction of v n , the length of α n is the difference of lengths between v[0, T ] and β T n . Aŝ β T n is a geodesic with the same initial and endpoints as β T n and in the same homotopy class, it is shorter than β T n . Then, the difference of length betweenβ T n and v[0, T ] is bounded from above by the length of α n . As T goes to ∞ this bound still holds, then the length of α n is also an upper bound for B ξ (γ n (ṽ(0)),ṽ(0)). Now, we are going to see that there is a lower bound for B ξ (γ n (ṽ(0)),ṽ(0)). There are then two cases:
Case 1: γ n is an hyperbolic isometry. Consider the arc joiningṽ(0) and γ n (ṽ(0)), contained on the curve C γ (ṽ(0)), the curve of points at a constant distance from the axis of γ n . Consider the function δ(ϵ) given by Proposition 3.4, associated with the Busemann function Bṽ (ṽ(0)),ṽ(0)) > δ −1 (a + ϵ), because of statement 2 of Proposition 3.4, we have that B ξ (γ n (ṽ(0)),ṽ(0)) > a + ϵ for all n. And also B ξ (γ n (ṽ(0)),ṽ(0)) < length(α n ) < 2(a + ϵ). If d(γ n (ṽ(0)),ṽ(0)) < δ −1 (a + ϵ) = C, we can take a positive integer k n such that d(γ kn n (ṽ(0)),ṽ(0)) ≤ k n d(ṽ(0), γ n (ṽ(0))). And choosing the right k n we can make that d(γ kn n (ṽ(0)),ṽ(0)) ∈ [C, 2C], as we are taking an integral multiple of a number which is smaller than C, and also for every n, we have that d(ṽ(0), γ d n (ṽ(0))) −−−→ d→∞ ∞. Then B ξ (γ kn n (ṽ(0)),ṽ(0)) ∈ [δ(C), δ(2C)]. Finally substituting γ n by γ kn n , we have what we wanted. Case 2: γ n is a parabolic isometry. Suppose γ n has fixed point η. Consider the arc joining v(0) and γ n (ṽ(0)). Bothṽ(0) and γ n (ṽ(0)) are in the same projected horocycle based at η, meaning that B η (γ n (ṽ(0)),ṽ(0)) = 0. Now, with an analogous reasoning to the hyperbolic case,we can choose k n such that d(γ kn n (ṽ(0)),ṽ(0)) ∈ [C, 2C], where C = δ −1 (a + ϵ) and δ is as in the second statement of Proposition 3.5. Substituting γ n by γ kn n we get B ξ (γ n (ṽ(0)),ṽ(0)) ∈ [b, B], for all n ∈ N, as we wanted to see.
} n∈N ⊂ R, with r n −−−→ n→∞ r 0 ∈ R such that d(g t+rn (v n ), g t (v)) − −− → t→∞ 0.
Proof of lemma 4.2. As b < B ξ (ṽ n (0),ṽ(0)) < B, if we define r n := B ξ (ṽ n (0),ṽ(0)), as it is a bounded sequence, replacing if it is necessary r n by a subsequence, we can assume r n −→ r 0 ∈ [b, B]. By definition of the Busemann function, we also have:
d(g t (v), g t+rn (v n )) − −− → t→∞ 0.
Proof of Theorem 1.5. It follows as a direct corollary of Lemma 4.1, Lemma 4.2 and Proposition 2.14 that there is r_0, with 0 < r_0 < ∞, such that g_{r_0}(v) lies in the closure of h_R(v). Now g_{r_0}(g_{r_0}(v)) belongs to the image under g_{r_0} of the closure of h_R(v), which equals the closure of h_R(g_{r_0}(v)) and is contained in the closure of h_R(v), because the closure of an orbit is invariant by the horocycle flow. Applying the same argument, defining τ_n := n r_0, we can conclude that g_{τ_n}(v) lies in the closure of h_R(v) for all n. As r_0 ∈ [b, B], in every set of length B there is at least one of these τ_n. This completes the proof.
Remark 4.3. From the proof of Lemma 4.1 we have that there is t 0 ∈ [b, B] such that g t 0 (v) ∈ h R (v), were B = δ(2δ −1 (a + ϵ)). As δ measures the variation of the Busemann function along a curve which is not a geodesic (along which the biggest variation occurs), we have that B < 2(a + ϵ). Then, even when δ depends on γ, it is uniformly bounded from above. Also, as we can take ϵ as small as we want, in every interval of length 2a there is at least one t such that g t (v) ∈ h R (v).
Corollary 4.4. In case a = 0, one has that g_{R⁺}(v) is contained in the closure of h_R(v).
Proof. It suffices to observe that the functions δ in both propositions 3.4 and 3.5 satisfy δ(0) = 0, and apply Theorem 1.5.
Applications to tight surfaces
as we wanted.
Definition 5.4. Given a metric space Y and a flow {φ t } t∈R , we say that X ⊂ Y is a minimal set for the flow, if it is closed, invariant by φ t and minimal with respect to the inclusion.
Proof of Corollary 1.7. Supose X ⊂ T 1 M is a minimal set for the horocycle flow. Consider a vector v ∈ X andṽ ∈ T 1M a lifted of v. As X is a minimal set, h R (v) must be dense in X, otherwise its closure would be a proper invariant subset of X, and X would not be minimal. In the other hand, X can not be T 1 M , because in that case every orbit should be dense, but that can't happen since the limit set has both horocyclic and nonhorocyclic limit points, and horocycles based on nonhorocyclic limit points are not dense. So X is a proper subset of T 1 M . This implies thatṽ(∞) must be a nonhorocyclic limit point, since the closure of its projected orbit is X ̸ = T 1 M . Then, v[0, ∞) is an almost minimizing geodesic ray, and by theorem 1.5 and proposition 5.3, we know that there is a t 0 such that g t 0 (v) ∈ h R (v). Then, g t 0 (X) ∩ X ̸ = ∅ and then g t 0 (X) = X. Then for all n ∈ N we have g nt 0 (X) = X.
Let us see that it actually implies thatṽ(∞) is horocyclic: consider an horocycle B based atṽ(∞). As we know, we can write B = {z ∈M : Bṽ (∞) (ṽ(0), z) > k}, for some k ∈ R. As v(nt 0 ) ∈ X = h R (v), and because of proposition 2.14, there is a sequence {γ m } m∈N ⊂ Γ such that Bṽ (∞) (ṽ(0), γ −1 mṽ (0)) − −−− → m→∞ nt 0 . Then, choosing a big n we have nt 0 > k, and then for a big m, γ −1 mṽ (0) ∈ B. So we can find an element of the Γ-orbit ofṽ(0) on any horocycle based atṽ(∞), and this means thatṽ(∞) is an horocyclic limit point, which is absurd as we already showed that it must be nonhorocyclic.
Corollary 1.7. If M is a tight surface, there are no minimal sets for the horocycle flow on T¹M.
Definition 2.5. Let {x_n}_{n∈N} be a sequence in M̃. Given ξ ∈ ∂_∞M̃, consider a fixed point o ∈ M̃ and the geodesic ray u[0, ∞) with u(0) = o and u(∞) = ξ. Consider also the geodesic rays u_n[0, ∞) such that u_n(0) = o and u_n(t_{j_n}) = x_n for some t_{j_n} ∈ [0, ∞), and define ξ_n := u_n(∞). Then, we say that x_n → ξ if ξ_n → ξ in ∂_∞M̃.
Given x, y ∈ M̃, the map b_y(x) : M̃ → R, defined as b_y(x)(z) := b(x, y, z), z ∈ M̃, is a continuous function on M̃. Let us consider C(M̃), the space of continuous functions on M̃ with the topology of uniform convergence on bounded sets.
Figure 1: Here, ξ_1 is a horocyclic limit point. The points, representing some elements of the Γ-orbit of a point, reach all the horodisks based at ξ_1. The point ξ_2, on the other hand, would be a nonhorocyclic limit point, as no element of the orbit reaches the smaller horodisk.
Figure 2: The projected geodesic ray starts at v(0) and at time t passes through the point v(t). The distance between these two points is less than t − c. The blue dotted line represents the minimizing geodesic joining v(0) and v(t), which obviously has length t.
Proposition 3.2. The distance from a point p ∈ M̃ to any closed submanifold Ñ of M̃ is attained by a minimizing geodesic which is orthogonal to Ñ.
Corollary 3.3. The distance between two closed submanifolds Ñ and W̃ of M̃ is attained by a geodesic orthogonal to both Ñ and W̃.
Proposition 3.4. Consider ζ ∈ ∂_∞M̃, a point p ∈ M̃, and a geodesic c : R → M̃. Then: 1. C_c(p) ∩ H_ζ(p) contains at most two points.
Proposition 3.5. Consider ζ, η ∈ ∂_∞M̃ with ζ ≠ η, and a point p ∈ M̃. Then we have: 1. The set H_η(q) ∩ H_ζ(p) contains at most two points.
Lemma 4.1 (continuation of the statement): 3. v_n → v in T¹M, where v_n are the projected vectors of ṽ_n on T¹M; 4. B_ξ(ṽ_n(0), ṽ(0)) ∈ [b, B] for some 0 < b < B < ∞, and for all n ∈ N.
Figure 3: Geodesics α_n, v[0, ∞) and v_n[0, ∞) on the surface M.
Figure 4: Geodesics ṽ[0, ∞) and ṽ_n[0, ∞) on the surface M̃.
(Fragment of the proof of Lemma 4.1:) ... (z) = B_ξ(ṽ(0), z) for all z ∈ M̃. Now we have two possibilities: if d(γ_n ...
Figure 5: Here we see β^T_n (in yellow) and β^T_m (in orange), with m > n, both ending at time T. The dark green geodesic ray would be the lift of v_n[0, ∞), and the one in light green would be the lift of v_m[0, ∞).
Lemma 4.2. For a sequence {ṽ_n}_{n∈N} ⊂ T¹M̃ satisfying points 1 to 4 of Lemma 4.1, there is a sequence {r_n}_{n∈N} ⊂ R, with r_n → r_0 ∈ R, such that d(g_{t+r_n}(v_n), g_t(v)) → 0 as t → ∞.
¹ This is also a consequence of the Gauss-Bonnet theorem, as a triangle could not have an angle equal to π.
Remark 5.1. The fundamental groups of tight surfaces are infinitely generated and of the first kind. The latter means that, if M̃ is the universal cover of M, then L(Γ) = ∂_∞M̃.
Remark 5.2. In view of Proposition 2.10 and Remark 5.1, for a tight surface M, if ṽ ∈ T¹M̃ is such that ṽ(∞) is a horocyclic limit point, then π̃(h_R(ṽ)) is dense in Λ_Γ = T¹M.
Proof. Consider the geodesics δ_n of Definition 1.6. As v[0, ∞) is almost minimizing, it intersects an infinite number of these geodesics. Replacing δ_n by a subsequence, we can assume that v[0, ∞) intersects δ_n for all n. Let t_n be the times such that v(t_n) ∈ δ_n.
Let δ̃_n be a lift of δ_n on M̃, and ṽ(t_n) a lift of v(t_n). Consider η_n ∈ Γ the hyperbolic isometry fixing δ̃_n. By hypothesis, we have that the lengths of δ_n are bounded by a constant A, so d(ṽ(t_n), η_n(ṽ(t_n))) ≤ A. This means that Inj(v(t_n)) ≤ A for all n, and then lim inf_{n→∞} Inj(v(t_n)) ≤ A. It follows that lim inf_{t→∞} Inj(v(t)) ≤ A. Then we have Inj(v[0, ∞)) ≤ A < ∞,
[1] F. Alcalde, F. Dal'Bo, M. Martínez and A. Verjovsky, Minimality of the horocycle flow on laminations by hyperbolic surfaces with non-trivial topology, arXiv:1412.3259v3 (2016).
[2] W. Ballmann, Lectures on Spaces of Nonpositive Curvature, Birkhäuser (1995).
[3] A. Bellis, On the links between horocyclic and geodesic orbits on geometrically infinite surfaces, Journal de l'École polytechnique - Mathématiques 5 (2018), pp. 443-454.
[4] T. Bedford, M. Keane and C. Series, Ergodic Theory, Symbolic Dynamics and Hyperbolic Spaces, Oxford University Press (1991).
[5] F. Dal'Bo, Topologie du feuilletage fortement stable, Annales de l'Institut Fourier 50, no. 3 (2000), pp. 981-993.
[6] F. Dal'Bo, Geodesic and Horocyclic Trajectories, Springer (2011).
[7] M. do Carmo, Riemannian Geometry, Birkhäuser, Boston (1992).
[8] B. Hasselblatt and A. Katok, Introduction to the Modern Theory of Dynamical Systems, Cambridge University Press (1995).
[9] G. A. Hedlund, Fuchsian groups and transitive horocycles, Duke Math. J. 2 (1936), no. 3, 530-542. MR 1545946.
[10] S. Matsumoto, Horocycle flows without minimal sets (2014).
[11] I. M. Singer and J. A. Thorpe, Lecture notes on elementary topology and geometry, Springer (2015).
| [] |
[
"A NOVEL FRAMEWORK EXTENDING CAUSE-EFFECT INFERENCE METHODS TO MULTIVARIATE CAUSAL DISCOVERY",
"A NOVEL FRAMEWORK EXTENDING CAUSE-EFFECT INFERENCE METHODS TO MULTIVARIATE CAUSAL DISCOVERY"
] | [
"Hongyi Chen \nTilburg University\nthe Netherlands\n",
"Maurits Kaptein \nTilburg University\nthe Netherlands\n"
] | [
"Tilburg University\nthe Netherlands",
"Tilburg University\nthe Netherlands"
] | [] | We focus on the extension of bivariate causal learning methods into multivariate problem settings in a systematic manner via a novel framework. It is purposive to augment the scale to which bivariate causal discovery approaches can be applied since contrast to traditional causal discovery methods, bivariate methods render estimation in the form of a causal Directed Acyclic Graph (DAG) instead of its complete partial directed acyclic graphs (CPDAGs). To tackle the problem, an auxiliary framework is proposed in this work so that together with any bivariate causal inference method, one could identify and estimate causal structure over variables more than two from observational data. In particular, we propose a local graphical structure in causal graph that is identifiable by a given bivariate method, which could be iteratively exploited to discover the whole causal structure under certain assumptions. We show both theoretically and experimentally that the proposed framework can achieve sound results in causal learning problems. | null | [
"https://export.arxiv.org/pdf/2305.16904v1.pdf"
] | 258,947,588 | 2305.16904 | c493f4d106349c106894f41513ca50ccf11ef171 |
A NOVEL FRAMEWORK EXTENDING CAUSE-EFFECT INFERENCE METHODS TO MULTIVARIATE CAUSAL DISCOVERY
May 29, 2023
Hongyi Chen
Tilburg University
the Netherlands
Maurits Kaptein
Tilburg University
the Netherlands
A NOVEL FRAMEWORK EXTENDING CAUSE-EFFECT INFERENCE METHODS TO MULTIVARIATE CAUSAL DISCOVERY
May 29, 2023
We focus on the extension of bivariate causal learning methods into multivariate problem settings in a systematic manner via a novel framework. It is purposive to augment the scale to which bivariate causal discovery approaches can be applied since contrast to traditional causal discovery methods, bivariate methods render estimation in the form of a causal Directed Acyclic Graph (DAG) instead of its complete partial directed acyclic graphs (CPDAGs). To tackle the problem, an auxiliary framework is proposed in this work so that together with any bivariate causal inference method, one could identify and estimate causal structure over variables more than two from observational data. In particular, we propose a local graphical structure in causal graph that is identifiable by a given bivariate method, which could be iteratively exploited to discover the whole causal structure under certain assumptions. We show both theoretically and experimentally that the proposed framework can achieve sound results in causal learning problems.
Introduction
Inferring causal relationships between variables from observational data has drawn great research interest, particularly in fields such as deep learning, medicine, and marketing, where the explanatory reasoning of predictive models has come under increasing attention. The need to infer from a causal model is paramount for answering interventional or counterfactual questions, as inference from a non-causal model can lead to erroneous results. Doing so enables the estimated causal relations to resemble the underlying true causal mechanism as closely as possible. Over the last few decades, numerous ideas and works have been discussed and published in this area, with the paper by [1] providing a comprehensive overview.
The school of thought built upon probabilistic graphical models, also known as structural equation models [2,3], has produced a range of algorithms and methods, derived from different statistical methodologies, for causal structure learning in interdisciplinary research. One group of causal discovery algorithms, which employ the Markov property encoded by conditional independence, is theoretically sound. However, these algorithms are limited to identifying only the Markov equivalence class of the true causal graph instead of the true causal graph itself, unless additional assumptions such as non-linearity or additive noise are imposed. Another group of methods relies on the asymmetry induced by causal relations in the innate complexity discrepancy between variables. However, the majority of works in this group have focused on bivariate cases, whereas multivariate data, a more general and realistic setting, has not been adequately addressed. Given the research gap between these two groups of approaches, we aim to provide insight into how to generalize any bivariate method into a new one that can be applied uniformly and systematically to multivariate data.
It may appear to be a simple task to extend bivariate methods to multivariate data, but in fact the opposite is true. Some researchers have proposed that any edge in a complex network can be inferred by simply examining its two vertices independently [4], while others advocate conditioning on all remaining variables during inference [5]. The main issue with the former approach is that its justification relies on the algorithmic Markov condition [6], which may no longer hold when the two endpoints are confounded by other variables in the network. On the other hand, although the latter approach is theoretically sound, it becomes impractical and unreliable when the number of conditioned variables increases, leading to multiple testing problems.
In light of the issues discussed earlier, we propose a new auxiliary framework for extending bivariate methods to multivariate causal inference. This framework allows for a systematic exploration of various causal discovery methods that focus on two variables, and can be expanded to include multivariate data under certain conditions. A key aspect of this framework is the use of a novel graphical criterion to determine the causal orders of all the variables. Specifically, we utilize a graphical criterion that compares the marginalized graph over an edge with the true causal graph to sequentially obtain the causal orders of each variable. We provide a theoretical analysis of our framework, including a general set of assumptions (admissibility rules) that ensure the correctness of individual bivariate methods, along with provable consistency results and considerations of computational complexity.
Indeed, our auxiliary framework provides a myriad of benefits. Not only does it expand the range of applications for bivariate methods, but it also generates a family of new methods that can be useful in exploring a variety of inquiries, such as discovering confounders, causal relationships, and the orientation of edges in a partially directed graph.
The paper offers the following important contributions:
• Firstly, it proposes a systematic approach for extending any bivariate causal inference method to the multivariate context, which is achieved through an auxiliary framework based on a newly proposed graphical criterion.
• Secondly, the framework yields a new family of causal discovery algorithms that can be used in conjunction with traditional causal discovery approaches and bivariate methods.
• Finally, the theoretical soundness of the framework has been demonstrated in cases where infinite data points are presented, and its computational feasibility has been demonstrated in practical applications.
Background & Preliminaries
We start with a brief introduction to the topics related to our research. In terms of notation, we denote random variables and random vectors with capital letters and bold upper-case letters, respectively, while their corresponding assigned values are represented by lowercase letters.
Graph terminology
A graph is an ordered pair G = (V, E) consisting of a vertex set V and an edge set E. In the context of graphical modelling, vertices represent random variables and edges encode probabilistic and causal relations between the vertices associated with them. For convenience, we use the terms vertices and variables interchangeably.
An (un)directed graph is a graph which contains only (un)directed edges; otherwise, it is called a mixed or partially directed graph. In particular, a directed graph without directed cycles is known as a Directed Acyclic Graph (DAG), which can be transformed into an undirected graph by removing all edge directions, referred to as the skeleton of the DAG. Two vertices are adjacent if they are linked by an edge. A directed path is a sequence of distinct vertices which are successively adjacent by edges of the same direction. If there is a directed path from vertex X to Y, then X is an ancestor of Y while Y is a descendant of X. Moreover, if such a directed path is a single edge, we call X a parent of Y and Y a child of X. The sets of parents, children, ancestors and descendants of a vertex X in a graph G are denoted by PA(G, X), CH(G, X), AN(G, X) and DE(G, X), respectively.
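As a small illustration of this terminology (a sketch added here; the example DAG and variable names are ours, not from the paper), these sets can be computed directly with a standard graph library:

```python
# Sketch (not from the paper): DAG terminology on a toy example using networkx.
import networkx as nx

# Example DAG: X1 -> X2 -> X4 and X1 -> X3 -> X4.
G = nx.DiGraph([("X1", "X2"), ("X2", "X4"), ("X1", "X3"), ("X3", "X4")])
assert nx.is_directed_acyclic_graph(G)

x = "X4"
parents = set(G.predecessors(x))       # PA(G, X4) = {X2, X3}
children = set(G.successors(x))        # CH(G, X4) = {}
ancestors = nx.ancestors(G, x)         # AN(G, X4) = {X1, X2, X3}
descendants = nx.descendants(G, x)     # DE(G, X4) = {}
skeleton = G.to_undirected()           # skeleton of the DAG

print(parents, children, ancestors, descendants, sorted(skeleton.edges()))
```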
A graphical criterion designated as d-separation [7,2] specifies the conditional independence relationships of a DAG comprehensively. If the joint distribution of X, P(X), contains all the conditional independence relationships encoded by a DAG G, the distribution is said to be Markovian to G. On the other hand, a distribution P(X) is said to be faithful to a graph G if every conditional independence relation of P(X) is entailed by that in G [3]. If a distribution is both Markovian and faithful with respect to a DAG, we call the DAG a perfect map of the distribution.
A group of DAGs encoding the same d-separation statements forms a Markov equivalence class that can be uniquely represented by a Completed Partially Directed Acyclic Graph (CPDAG) [8,9,10]. A CPDAG contains directed edges that have the same orientation for every DAG in the equivalence class and undirected edges that have reversible orientations in the equivalence class. Every member of the same CPDAG fits the data equally well and is thus statistically equivalent, making the members indistinguishable to causal discovery methodologies based on independence testing or data fitting.
Structural Equation Models
A structural equation model (SEM) determines the marginal distribution of each variable in the random vector X = (X_1, ..., X_n) corresponding to its DAG G by structural equations of the following form [11]:
X_i = f_i(PA(G, X_i), ε_i), i = 1, ..., n,
where {ε_i}_{i=1,...,n} are mutually independent random noises and {f_i}_{i=1,...,n} are real functions.
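For concreteness, the following sketch (ours, not from the paper; the functional forms and coefficients are arbitrary choices) samples data from a small SEM with DAG X1 → X2 → X3 by drawing independent noises and evaluating each structural equation on the values of its parents:

```python
# Sketch (not from the paper): sampling from a 3-variable SEM X1 -> X2 -> X3.
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Mutually independent noise terms eps_i.
e1, e2, e3 = rng.normal(size=(3, n))

# Structural equations X_i = f_i(PA(G, X_i), eps_i); the f_i below are arbitrary.
x1 = e1
x2 = np.tanh(2.0 * x1) + 0.3 * e2
x3 = x2 ** 2 + 0.3 * e3

data = np.column_stack([x1, x2, x3])
print(data.shape)  # (5000, 3); the joint density factorizes as p(x1) p(x2|x1) p(x3|x2)
```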
If a random vector X_1, ..., X_n is generated according to an SEM, we can factorize the density of the joint distribution as [7]:
p(x_1, ..., x_n) = ∏_{i=1}^{n} p(x_i | PA(G, x_i)).
It is clear that such a distribution is Markov to the DAG G.
We now define an important concept called Pearl's do-intervention [2]. When we perform a do-intervention on a variable X_i, we change the generating mechanism of X_i and rewrite the SEM of X by updating the corresponding equation of X_i. This results in a new post-intervention distribution for X. In particular, if the do-intervention fixes X_i to a fixed point in the support of X_i, the joint density of X follows the truncated factorization:
p(x_1, ..., x_n | do(X_i = x_i)) = ∏_{j ≠ i} p(x_j | PA(G, x_j)) if X_i = x_i, and 0 otherwise.
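The following sketch (ours; the coefficients are arbitrary) contrasts conditioning with a do-intervention in a confounded triple Z → X, Z → Y, X → Y: intervening on X replaces its structural equation by a fixed value, which breaks the dependence on Z, so the interventional mean of Y differs from the conditional one.

```python
# Sketch (not from the paper): do-intervention vs. conditioning under confounding.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

def sample(do_x=None):
    z = rng.normal(size=n)                                        # confounder Z
    x = 0.8 * z + rng.normal(size=n) if do_x is None else np.full(n, do_x)
    y = 1.0 * x + 1.5 * z + rng.normal(size=n)
    return x, y

# Observational: E[Y | X ~ 1] picks up the confounding path through Z.
x_obs, y_obs = sample()
cond_mean = y_obs[np.abs(x_obs - 1.0) < 0.05].mean()

# Interventional: E[Y | do(X = 1)] only reflects the direct effect of X on Y.
_, y_do = sample(do_x=1.0)
do_mean = y_do.mean()

print(round(cond_mean, 2), round(do_mean, 2))  # roughly 1.73 vs 1.00
```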
Prior work on causal discovery
Popular CPDAG learning methods fall into three categories: constraint-based methods, e.g. the PC algorithm [3], which exploit conditional independence constraints embedded in the joint distribution of the variables; score-based methods, e.g. Greedy Equivalence Search [9], which employ a score function to search for optimal structures; and hybrids of the two, e.g. the Max-Min Hill-Climbing (MMHC) algorithm [12]. By imposing extra constraints on the SEM, the true DAG can be identified. Examples of this approach are Linear, Non-Gaussian, Acyclic causal Models (LiNGAM) [13,14]; the additive noise model (ANM) [15,16,17]; and the post-nonlinear model (PNL) [18].
On the other hand, a separate branch of inference methods is built upon postulates such as the algorithmic Markov condition [6,19], stated in terms of Kolmogorov complexity, and the independence of input and mechanism. Even though these postulates are not yet verifiable, they offer philosophical building blocks, allowing us to interpret the asymmetry between cause and effect in mathematical terms. Therefore, non-parametric algorithms following this principle, such as CURE [20], IGCI [19], CGNN [4] and KCDC [21], have been shown to be able to orient cause-effect pairs to various extents. Though some have sketched intuitive ideas [4,21,5] for applying these procedures to more than two variables, we believe it currently remains an open question how to efficiently extend them to multivariate cases: it is exactly this question we aim to address in this paper.
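As one concrete example of the kind of bivariate cause-effect method discussed above (a minimal sketch in the spirit of the additive-noise approaches cited earlier, not the exact procedure of any cited paper), one can regress each variable on the other and prefer the direction whose residuals look independent of the putative cause; here a correlation of the residuals and their squares with the regressor serves as a crude proxy for an independence test:

```python
# Sketch (not from the paper): a crude bivariate ANM-style direction score.
import numpy as np

def dependence_score(cause, effect, degree=3):
    """Fit a polynomial regression effect ~ f(cause) and measure how strongly the
    residuals still depend on the cause (smaller = more additive-noise consistent)."""
    coeffs = np.polyfit(cause, effect, degree)
    residuals = effect - np.polyval(coeffs, cause)
    c = (cause - cause.mean()) / cause.std()
    r = (residuals - residuals.mean()) / residuals.std()
    # Dependence proxies: correlation with residuals and with squared residuals.
    return abs(np.mean(c * r)) + abs(np.mean(c * (r ** 2 - 1)))

def infer_direction(x, y):
    return "x->y" if dependence_score(x, y) < dependence_score(y, x) else "y->x"

rng = np.random.default_rng(2)
x = rng.uniform(0.0, 2.0, size=5000)
y = x ** 3 + rng.normal(scale=1.0, size=5000)   # ground truth: x -> y
print(infer_direction(x, y))                     # expected: "x->y" on this example
```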
Extending bivariate methods to multivariate causal learning
In this section, we begin by examining the idea of isolating a pair of variables for causal discovery in a multivariate setting. While this approach may seem intuitive, we find that it is not always reliable and is subject to specific conditions, as stated in Theorem 1. Theorem 1 is essential, as it lays the foundation for replacing this flawed approach with the reliable one presented in this study for extending bivariate methods to multivariate data sets.
Problem setting
In this study, we examine a multivariate data scenario where a collection of p random variables, denoted by X := X 1 , ..., X p , are determined by a structural causal model (SCM) [2]. This model characterises both their data-generating process and overall causal connections. Our work assumes that causal sufficiency and faithfulness hold. In other words, we assume that no unobserved confounding variables exist, and the conditional independencies in the distribution of X align perfectly with d-separation relationships in the Directed Acyclic Graph (DAG), due to the satisfaction of both the Markov property and faithfulness.
Under the aforementioned settings, our focus is to develop an approach for inferring the complete or partial directed acyclic graph (DAG) of random variables X, using two key components: a dataset of X and a bivariate method that can discern cause from effect between any two variables. Additionally, we are also interested in answering other related questions, such as identifying any further assumptions required for extending this approach; and determining the direction of undirected edges in the true CPDAG.
Causality from Marginalisation
Analogous to the way a graph is composed of many edges, the causal structure of a multivariate data set is constructed from the causal relations between each pair of variables, which can be detected by a bivariate method. However, it is generally incorrect to use a bivariate method on just one pair of variables to infer their causal relationship while disregarding the other variables. This is because such a procedure would effectively coincide with learning the causal direction with hidden, unmeasured variables that could be common causes of the pair, and this has been shown to potentially lead to incorrect causal conclusions, as in the well-known example of Simpson's Paradox. To establish the identifiability of a pair of variables using a bivariate method, we derive a sufficient condition that must be satisfied: the causal reconstruction based solely on the marginal distribution of the variables must correspond to the true causal relationship.
As suggested by the authors in [17], we provide a precise mathematical definition for the term "true causal graph" that we have been informally using, as outlined in Definition 1 according to [17].
Definition 1. Assume that the random variables X = {X_1, ..., X_p} have D(X) as their joint distribution, and that the distributions D_{do(X_j := Ñ_j, j∈J)}(X) are known for all J ⊆ {1, ..., p}, where the Ñ_j are random distributions. Then the DAG G is a true causal graph of X if conditions (i) and (ii) are satisfied:
(i) the joint distribution D(X) is Markovian to G;
(ii) for all J ⊆ {1, ..., p} and Ñ_j, j ∈ J, the distribution D_{do(X_j := Ñ_j, j∈J)}(X) is identical to P_{G; do(X_j := Ñ_j, j∈J)}.
Two graphical features pertaining to causality are introduced below.
Definition 2. Given random variables X = {X_1, ..., X_p} whose joint distribution is generated by an SCM with DAG G as the causal graph, if P(X_j | do(X_i)) = P(X_j | X_i), then we have:
(i) the DAG X_i → X_j, i, j ∈ {1, ..., p}, is said to be a valid marginalisation;
(ii) X_i and X_j are unconfounded, and an edge between them is called an unconfounded edge if they are adjacent.
Lemma 1 details the relationship between the two previously defined features and how they pave the way for a graphical criterion that connects the causal structure of the marginal distribution of two vertices to that of the true causal graph. Theorem 1 demonstrates that being unconfounded is sufficient as such a criterion. All theoretical results are accompanied by proofs in the appendix.
Lemma 1. For random variables X = {X_1, ..., X_p} with causal graph G:
(i) any unconfounded edge of G is a valid marginalisation;
(ii) the DAG of two vertices, X_i → X_j, i, j ∈ {1, ..., p}, is the true causal graph of the marginal distribution D({X_i, X_j}) if X_i → X_j is a valid marginalisation.
Theorem 1. Suppose that an edge between X_i and X_j, i, j ∈ {1, ..., p}, in the causal graph G over random variables X = {X_1, ..., X_p} is unconfounded. Then the causal direction of the marginal distribution of X_i and X_j coincides with the direction of the edge between X_i and X_j in the true causal graph G.
Admissibility Rule
As numerous causal discovery methods incorporate a priori assumptions concerning the structural equation assignments, restricting the range of distribution families the variables can adopt, we endorse this practice and prescribe regulations to preserve the validity of applying bivariate methods to variables in multivariate data. These regulations, referred to as "Admissibility Rules," are presented below and elaborated on in Section 4. Only when these regulations are adhered to can causal approaches be considered reliable.
Admissibility Rule 1. For a non-confounding edge between X_i, X_j, i, j ∈ {1, ..., p} in X, it is only admissible to apply a bivariate method to D(X_i, X_j) if the marginal distribution fulfills the functional premises required by the method.
Deviation from this admissibility condition can result in untenable inference. For example, if a non-Gaussianity method is applied to linear Gaussian variables without complying with the admissibility condition, it can yield false outcomes. In the following section, we investigate the compliance of Admissibility Rule 1 in the bivariate methods that we have reviewed in Section 2.3. Our findings demonstrate that there is a general consensus among these methods in adhering to the Admissibility Rule.
1. For methods specifying particular choices of functional classes in the structural equations, such as LiNGAM, derivatives of ANM [22,13,14], and PNL [18], it is easy to see that LiNGAM is admissible. For the rest, the marginal distribution of two variables generated by an ANM (PNL) cannot, in general, be expressed as a bivariate ANM (PNL). A special case where it is possible is when the variables are sampled according to a particular form of ANM called a Causal Additive Model (CAM):
$$X_j = \sum_{k \in \mathrm{PA}(X_j)} f_{jk}(X_k) + \varepsilon_j,$$
where j ∈ {1, ..., p}, ε_j is Gaussian and the f_jk are three times differentiable nonlinear functions.
2. Methods in the other group, which are nonparametric and make no assumptions on the functional classes of the structural equations, automatically satisfy Admissibility Rule 1.
Auxiliary Framework
Building upon the theoretical findings established in Theorem 1, which establish the fundamental basis of the type of causal information that can be obtained through bivariate methods in multivariate scenarios, we introduce an auxiliary framework for causal structure discovery, to be used in combination with any bivariate method.
In terms of structure, our proposed framework follows a top-down strategy in causal ordering [23], starting with the smallest causal ordering (root nodes) and working up to the largest (leaf nodes). The identification of root nodes is not only important in itself but is also applied repeatedly and inductively to infer further down the topological hierarchy. The framework is comprised of two main algorithms, the first of which determines the root nodes of the true causal graph, while the second enables comprehensive causal structure discovery. In most cases, both algorithms are able to unearth more causal information than CPDAG-oriented methods.
Furthermore, we conduct a theoretical analysis of the framework schemes to demonstrate that they are both consistent and computationally feasible. For the sake of clarity, we assume the following conditions are met throughout this section unless otherwise indicated.
A1 The true causal graph G is faithful to the data distribution D(X).
A2 Causal sufficiency in X.
A3 (Conditional) Independence tests on the data sample are accurate.
A4 For bivariate data generated according to settings in Section 3.1, causal inference determined by the bivariate method is correct.
A1 and A2 are commonly accepted assumptions in the field of causal discovery. In contrast, A3 and A4 are technical requirements that govern the population setting and the reliability of the statistical methods used. These conditions are crucial to establish the consistency of our framework.
Root nodes determination
Algorithm 1 outlines a procedure to identify the root nodes of the causal DAG G and their associated edge directions.
The algorithm requires a sample of data X and the skeleton of G, as well as a given bivariate method. Initially, the algorithm traverses all undirected edges, estimating their causal direction with the bivariate method, and then constructs a directed graph G ′ . Since not all root nodes of G ′ correspond to those in G, the algorithm then applies independence tests and the bivariate method to identify any statistical dependence between any two root nodes in G ′ , filtering out any false-positive root nodes. The remaining root nodes in G ′ comprise the root nodes of G. Theorem 2 proves that this algorithm yields all and only the root nodes of G.
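A minimal R sketch of this root-search loop is given below; it is an illustration rather than the paper's implementation. It assumes the skeleton is supplied as an adjacency list nbrs, and that the user provides a bivariate routine bm_is_parent(data, k, j), returning TRUE when the method orients X_k → X_j (for instance an ANM-HSIC-type rule), together with a dependence test dep_test(data, k, j) returning TRUE when X_k and X_j are judged dependent; all three names are placeholders, not functions of an existing package.

# Algorithm 1 (sketch): return the indices of the estimated root nodes
find_roots <- function(data, nbrs, bm_is_parent, dep_test) {
  p <- ncol(data)
  # Step 1: candidates are nodes oriented by BM as the parent of all their neighbours
  cand <- Filter(function(k) all(vapply(nbrs[[k]],
                                        function(j) bm_is_parent(data, k, j),
                                        logical(1))),
                 seq_len(p))
  # Step 2: filter false positives: a candidate that depends on another candidate
  # and is inferred by BM to be its child cannot be a root
  is_root <- vapply(cand, function(k) {
    !any(vapply(setdiff(cand, k),
                function(j) dep_test(data, k, j) && bm_is_parent(data, j, k),
                logical(1)))
  }, logical(1))
  cand[is_root]
}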
In the following propositions, we summarise the theoretical results pertaining to the causal interpretation of applying a bivariate method to two variables within X.
Proposition 1. Under A1 to A4 and Admissibility Rule 1,
(a) a bivariate method might fail to accurately identify the causal direction of an edge if it is confounded;
(b) a bivariate method is able to infer the correct causal direction of an edge if it is unconfounded.
Proposition 2. Under A1 to A4 and Admissibility Rule 1, the causal direction of any edge connected to a root variable is correctly inferred by a bivariate method.
Algorithm 1 (root node determination)
1: INPUT: Skeleton G_s of a DAG of n variables X, a bivariate method BM and a sample data set of X
2: S := {1, ..., n}, R := {}
3: for k in S do
4:   apply BM to X_k and each of its neighbours in G_s
5:   if X_k remains the parent of all directed edges inferred by BM then
6:     for j in R do
7:       perform an independence test between X_k and X_j
8:       if X_k ̸⊥⊥ X_j then
9:         apply BM to X_k, X_j
10:        if X_j is the parent then
11:          discard X_k (break from the current loop)
12:        end if
13:      end if
14:    end for
15:    if X_k was not discarded then
16:      R := R ∪ {k}
17:    end if
18:  end if
19: end for
20: OUTPUT: X_i where i ∈ R are the root nodes, and their edges' directions are oriented accordingly.
An essential theorem regarding the accuracy of Algorithm 1 can be obtained once Propositions 1 and 2 have clarified the role of the given bivariate method.
Theorem 2. Under A1 to A4 and Admissibility Rule 1, the output of Algorithm 1 contains all and only root nodes of the true causal graph over X. Remark 1. Algorithm 1 not only facilitates the discovery of causal structures that are often obscured by Markov equivalence classes, but also embodies a concept that encompasses a mechanism for revealing causal relationships beyond root nodes, as illustrated in the latter part of our framework.
Comprehensive Causal Discovery
The second objective of our framework is to furnish a comprehensive algorithm that learns the entire causal graph using a bivariate method. Prior to delving into the specific steps outlined in Algorithm 2, we first introduce an additional guideline that the bivariate method must adhere to, as it is necessary for it to be applicable to conditional distributions. Admissibility Rule 2. For an edge between adjacent X i , X j , i, j ∈ {1, ..., p} in X that is confounded by a disjoint set of variables Y in X, it is only eligible to apply a bivariate method to infer the direction of the conditional distribution D(X i , X j |Y) if the conditional distribution is identifiable by the method.
To ensure the correctness of our approach, we also need to verify whether bivariate methods satisfy Admissibility Rule 2 in general. We have observed that the methods discussed in Section 2.3, with the help of appropriate regression techniques, adhere to Admissibility Rule 2, as long as they follow Admissibility Rule 1, except for the IGCI method as discussed by [6]. Therefore, most of the bivariate methods can be utilised for this purpose. Algorithm 2 presents the pseudocode for a comprehensive causal graph discovery process in an iterative and inductive manner, following the same workflow as Algorithm 1. Unlike Algorithm 1 which terminates after root node retrieval, Algorithm 2 marks and designates root nodes as a batch of nodes with the same causal ordering before identifying the next batch of nodes in the subgraph. The marking process proceeds iteratively, excluding nodes that have already been marked until all nodes have been marked with a causal ordering. The different causal orderings for the batches defines the edge directions between nodes in different batches. It is worth noting that there are no edges between nodes within the same batch. As a result of this process, Algorithm 2 assigns a causal ordering to each node and orients all edges, thus specifying a DAG for the causal graph of X.
Proposition 3 demonstrates the consistency of Algorithm 2, indicating that it identifies the true underlying DAG under regular assumptions.
Proposition 3. Under A1 to A4 and Admissibility Rule 2, Algorithm 2 infers the true causal DAG.
Algorithm 2 (comprehensive causal discovery)
1: INPUT: Skeleton G_s of a DAG of n variables X, a bivariate method BM and a sample data set of X
2: S_0 := {1, ..., n}, S_1 := {}, R := {}
3: for X_k, k ∈ S_0, without any undirected edges do
4:   S_0 := S_0 \ {k}, S_1 := S_1 ∪ {k}
5: end for
6: repeat
7:   for k in S_0 do
8:     apply BM to X_k and each of its neighbours over undirected edges in G_s, conditioned on a subset of X_{S_1} which blocks all non-directed paths in G_s between X_k and that neighbour
9:     if X_k remains the parent of all directed edges inferred by BM then
10:      for j in R do
11:        perform a conditional independence test between X_k and X_j given a subset of X_{S_1}, denoted S_b, which blocks all non-directed paths between them in G_s
12:        if X_k ̸⊥⊥ X_j | S_b then
13:          apply BM to D(X_j, X_k | S_b)
14:          if X_j is the parent then
15:            discard X_k (break from the current loop)
16:          end if
17:        end if
18:      end for
19:      if X_k was not discarded then
20:        R := R ∪ {k}
21:      end if
22:    end if
23:  end for
24:  orient undirected edges linked to X_i, i ∈ R, as directed edges pointing out of X_i and update them in G_s
25:  S_0 := S_0 \ R, S_1 := S_1 ∪ R, R := {}
26: until S_0 = {}
27: OUTPUT: a fully directed graph, i.e., a DAG
Computational Complexity
To assess the computational cost of the framework, we have to consider the number of independence tests and the frequency with which the bivariate method is employed. The complexity of learning the CPDAG usually has an upper bound of O(p a ), where a is a constant equal to the size of the largest neighborhood. In the worst-case scenario, Algorithm 2 performs O(p 3 ) iterations of the bivariate method, while O(p 2 ) iterations are required if the graph is sparse. Though the independence tests and the bivariate method may scale with the sample size, the overall computational complexity of Algorithm 2 is polynomial in the number of variables p.
Simulation Study
Although the main contribution of this paper is a conceptual extension of a category of methods to a broader setup, it is important to demonstrate the practical application of the general framework using a particular bivariate method and, more crucially, to validate our approach with empirical studies. To this end, we assess the efficacy of an implementation of the auxiliary framework on synthetic data generated from a benchmark SCM as outlined in [17].
In particular, we focus on triple-variable data sampled from a full DAG as the ground truth on their causal relations (see Figure 1(a)). Following the standard scheme suggested in [17], we generate X = {X_1, X_2, X_3} from a linear additive noise model with non-Gaussian error terms of the form
$$X_1 = \varepsilon_1, \qquad X_2 = \beta_{21} X_1 + \varepsilon_2, \qquad X_3 = \beta_{31} X_1 + \beta_{32} X_2 + \varepsilon_3,$$
in which the β_{jk} are uniformly sampled from [0.1, 2], while the ε_i, i ∈ {1, 2, 3}, are independent noise terms with distributions of the form ψ_i · sgn(N_i) · |N_i|^{α_i}.
Figure 1(a): Underlying DAG representing the causal structure of the variables X_1, X_2, X_3 in the case study.
We conduct 100 distinct experiments, each involving the random generation of parameter sets as described above. For each experiment, we draw a sample of 100 tuples (X_1, X_2, X_3). Our empirical analysis not only assesses the accuracy of our proposed framework in inferring causal structures from the observed data, but also compares its performance to that of existing state-of-the-art methods, including two brute-force based methods [17,24], regression followed by independence test (RESIT) [17], and a pairwise ANM based method [22] with a procedure where the method is applied to edges independently, as suggested in [4].
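For concreteness, one draw from this benchmark can be simulated as in the following R sketch. The exact noise law of the benchmark involves scaling constants and exponents (ψ_i, α_i) whose values are not reproduced here; the sketch fixes ψ_i = 1 and draws α_i uniformly on a range chosen for illustration only.

set.seed(42)
n <- 100

# non-Gaussian noise of the form psi * sign(N) * |N|^alpha with N standard normal
rnoise <- function(n, psi = 1, alpha = runif(1, 0.5, 2)) {
  N <- rnorm(n)
  psi * sign(N) * abs(N)^alpha
}

beta21 <- runif(1, 0.1, 2)
beta31 <- runif(1, 0.1, 2)
beta32 <- runif(1, 0.1, 2)

X1 <- rnoise(n)
X2 <- beta21 * X1 + rnoise(n)
X3 <- beta31 * X1 + beta32 * X2 + rnoise(n)
data <- data.frame(X1, X2, X3)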
In order to reduce ambiguity arising from the multitude of regression methods and independence measures that could potentially be used by all the methods in our study, we have opted to use the Hilbert Schmidt independence criterion (HSIC) [25] and linear regression as standard tools. Our intention is that this choice will increase the comprehensibility of the simulation results, enabling us to attribute the performance to the methods in comparison themselves rather than extraneous factors. Furthermore, all methods are provided with the causal graph skeleton.
Remark 2. Brute-force methods can be quite effective when working with small graphs because they are enumerative methods, examining the entire set of possible directed acyclic graphs (DAGs) one by one. However, there is no consensus on how to evaluate all the candidate DAGs. In this study, we have selected two metrics for comparison, referred to as BF1 and BF2, of which the former is suggested by [17] and the latter by ourselves, to select the best DAG as the output:
$$\hat G = \arg\min_G \sum_{i=1}^{p} \hat C(\mathrm{res}^G_i, \mathrm{res}^G_{-i}) + \lambda\,|\mathrm{edges}| \quad\text{or}\quad \hat G = \arg\min_G \hat C(\mathrm{res}^G_1, \ldots, \mathrm{res}^G_p) + \lambda\,|\mathrm{edges}|,$$
where Ĉ is an independence measure, such as the HSIC used in this paper, res_i denotes the residual of X_i regressed on its parents, and res_{-i} denotes all such residuals except that of X_i. The regularization parameter λ is often tuned accordingly, but it is irrelevant to our work. As we can see from the empirical results later on, the choice of metric plays a significant role in the performance of brute-force methods.
Algorithm 3 (see Appendix B) gives a sketch of implementing the auxiliary framework with a HSIC-based bivariate method known as ANM-HSIC [22], where the independence measure, the same as the score function Ĉ, is defined as the value of the empirical HSIC estimator itself between two data samples u, v: Ĉ(u, v) = HSIC_{φ_{l(u)}, φ_{l(v)}}(u, v), where φ_l is a kernel with parameters l that are estimated from the data. We name this instantiation AF-HSIC. Remark 3. One important point to consider is that while the auxiliary framework can extend bivariate methods to multivariate settings, it does not enhance the accuracy of the chosen bivariate method, and mistakes made by the bivariate method may be propagated during the iterations of the auxiliary framework, much like the independence testing of the PC algorithm [3]. When dealing with dense graphs, it is recommended to take precautions such as verifying the acyclicity of the inferred DAG. Nonetheless, our simulated experiments in sparse graph scenarios did not encounter such robustness issues.
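As an illustration of the ANM-HSIC decision rule described above, the following base-R sketch regresses each variable on the other, computes a Gaussian-kernel HSIC statistic between the putative cause and the corresponding residual, and returns the direction with the smaller dependence. This is a minimal sketch, not the authors' implementation: the bandwidth is set by a median heuristic (our assumption; the paper estimates the kernel parameters from the data).

# Gaussian kernel matrix with bandwidth sigma (median heuristic if not given)
gauss_kernel <- function(x, sigma = NULL) {
  d2 <- as.matrix(dist(x))^2
  if (is.null(sigma)) sigma <- sqrt(0.5 * median(d2[d2 > 0]))
  exp(-d2 / (2 * sigma^2))
}

# empirical (biased) HSIC statistic between two samples u and v
hsic <- function(u, v) {
  n <- length(u)
  K <- gauss_kernel(u); L <- gauss_kernel(v)
  H <- diag(n) - matrix(1 / n, n, n)          # centering matrix
  sum(diag(K %*% H %*% L %*% H)) / n^2
}

# ANM-HSIC decision: prefer the direction whose residual is least dependent on the cause
anm_hsic_direction <- function(x, y) {
  res_y_on_x <- resid(lm(y ~ x))
  res_x_on_y <- resid(lm(x ~ y))
  c_xy <- hsic(x, res_y_on_x)                 # score for the model x -> y
  c_yx <- hsic(y, res_x_on_y)                 # score for the model y -> x
  if (c_xy < c_yx) "x -> y" else "y -> x"
}

# example on simulated data where the true direction is x -> y
set.seed(1)
x <- rnorm(100)
y <- 0.8 * x + 0.3 * rt(100, df = 3)          # non-Gaussian noise
anm_hsic_direction(x, y)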
Table 1: Mean and standard deviation of SHD between the estimated DAG and the true one over 100 simulated experiments.
Method   AF-HSIC       BF1           BF2           RESIT         BM-HSIC
SHD      0.08 ± 0.27   0.32 ± 0.63   0.08 ± 0.31   0.15 ± 0.36   0.31 ± 0.46
Figure 1(b): Bar plot of SHD between the estimated DAG and the ground-truth DAG.
Table 1 and the bar plot in Figure 1(b) present a summary of our experiment results. We assessed the accuracy of identifying the true DAG and the Structural Hamming distance (SHD) [12] of the estimated DAG from various methods to the true causal graph. The SHD is defined as the number of edges in disagreement between the true DAG and the estimated DAG. Our proposed method (AF-HSIC) and the brute-force method using the measure we proposed (BF2) achieved the highest accuracy of 92%, with only one incorrect edge in the remaining cases. In contrast, applying the pairwise method to edges separately produced particularly poor results in the simulation, presumably due to confounding effects in the true DAG. Overall, our AF-HSIC outperforms all other methods, including the brute-force method, in linear non-Gaussian settings, while remaining computationally feasible.
Additionally, we have applied our framework with KCDC [21] to simulation settings of k variables, where k ∈ {5, 10}, under more complicated noise structures such as multiplicative or periodic noise. The results again show the soundness and strong performance of our framework. Details are given in the supplementary material.
Real-world data
To assess the performance on real-world data sets, we apply an instantiation of our framework, AF-HSIC accompanied by smoothing splines as the regression method, to the protein network problem [26], labelled the Anti-CD3/CD28 dataset, with 853 observational data points. A consensus network in the form of a DAG is provided by [26], which is regarded as the true causal structure of the dataset for comparison in this paper. Given the same background knowledge, namely the underlying skeleton of the causal graph, our method identifies 16 out of 18 edges accurately from the data, thus performing better than other methods such as GES, CAM and CGNN according to the results reported in [4]. Table 2 shows the details, part of which is excerpted from [4].
Table 2: Results of different methods for the orientation of the protein network given the true skeleton, in terms of SHD with the true causal graph.
Method   AF-HSIC   GES    CAM   CGNN
SHD      2         12.1   8.5   4.3
Conclusion
In this paper, we have introduced an auxiliary framework aimed at extending bivariate causal discovery methods to multivariate cases, resulting in a novel family of causal inference methods that unify both constraint-based methods and cause-effect inferences. Furthermore, we have demonstrated the framework's desirable theoretical properties in terms of consistency and computational costs under regular assumptions. We have also provided an instantiation of this framework over triple variables and evaluated it on diverse synthetic datasets against several state-of-the-art causal discovery methods, as well as on real-world data, where our algorithm outperformed other methods. The experiments presented in this paper not only support the theoretical analysis but also highlight the auxiliary framework's potential as a powerful tool for observational data aimed at causal discovery.
A Proofs
A.1 Proof of Lemma 1
Proof. (i) Wlog, let X i → X j be an unconfounded edge in G. By Definition 2(ii), we have P(X j |do(X i )) = P(X j |X i ), which directly shows X i → X j is a valid marginalization from Definition 2(i).
(ii) To demonstrate that X i → X j is the true causal graph, we need to show that the conditions in Definition 1 are satisfied. Part (i) of Definition 1 is trivial. To verify that Definition 1(ii) holds, we need
$$D_{do(X_j := \tilde N_j,\, j \in J)}(X) = P_{G;\, do(X_j := \tilde N_j,\, j \in J)}, \qquad (1)$$
where J is any subset of {i, j}.
Clearly, for subsets J = / 0 or {i, j}, we observe that both sides of (1) equal to the marginal distribution of {X i , X j } that has not been intervened or totally intervened respectively.
When J = { j}, the intervention only occurs on the child node independently of its parent node. Therefore, both sides of (1) can be written in the factorized product of two independent variables with known distributions, which shows the equality.
For J = {i}, as X i → X j is a valid marginalization, RHS of (1) = P(X j |do(X i )) * P(do(X i )) using Definition 2(i).
While based on truncated factorization [27], LHS of (1)= P(X j |do(X i )) * P(do(X i )). Therefore, (1) holds when J = {i}.
By enumeration of all subsets of {i, j}, we have shown (1) stands. Therefore, X i → X j satisfies Definition 1 so that it is the true causal graph over D({X i , X j }).
A.2 Proof of Theorem 1
Proof. Suppose that wlog the edge between X i , X j in G is X i → X j . From Lemma 1(i) and 1(ii), X i → X j is a valid marginalization and thus is the true causal graph over D({X i , X j }). Hence, two causal directions are the same.
A.3 Proof of Proposition 1
Proof. (a) Counterexamples can be inferred from the simulation study in Section 5.
(b) From Theorem 1, we know that the causal direction between two unconfounded nodes X i , X j from their marginal distribution is the same as the edge orientation in the true causal graph G. Consequently, inferring the direction between X i , X j in G is equivalent to inferring it from the marginal distribution of X i , X j , which can be accurately obtained by applying the BM to them. Therefore, the procedure described above produces the correct direction between X i , X j in G.
A.4 Proof of Proposition 2
Proof. Wlog, let X i be any root node in G. We will first show that any edge originating from X i is unconfounded.
Suppose we have an edge X i → X j originating from a root node. Since the empty set ∅ does not contain any descendants of X i , nor are there any backdoor paths into X i in G, we have ∅ as a valid adjustment set for (X i , X j ) by Pearl's backdoor criterion [27]: P(X j |do(X i )) = P(X j |X i ). Hence X i → X j is unconfounded by Definition 2. Combined with Proposition 1(b), which states that any unconfounded edge can be correctly discovered by the BM, Proposition 2 is proved.
A.5 Proof of Theorem 2
Proof. We observe that, due to Proposition 2, all root nodes can be identified via Algorithm 1, which means that no root node is missing from the output of Algorithm 1.
As for the proof of exclusiveness, suppose a node X n in the output of Algorithm 1 is not a root node. Then there is a root node, denoted X r , such that X n ̸⊥⊥ X r , which indicates that X n is a descendant of X r , i.e. there is a directed path from X r towards X n . Hence, X r → X n is a valid marginalization since X r is a root node. If we apply a bivariate method to {X n , X r }, the correct direction would be inferred by Proposition 2, which is exactly a step described in Algorithm 1. Therefore, we have reached a contradiction: X n is in the output although it has been identified as a descendant of another node, a case forbidden by Algorithm 1.
A.6 Proof of Proposition 3
Proof. Under the assumptions stated in the premise, Algorithm 2 iteratively identifies all the 'root nodes' of the causal hierarchy, excluding the nodes uncovered in previous iterations. Since at each iteration Algorithm 2 employs Algorithm 1, which correctly identifies all root nodes among the remaining variables, Algorithm 2 accurately identifies the 'root nodes' of each loop. Hence, after all nodes have been identified, the true full causal graph has been inferred.
B Algorithm for simulation of three-variable causal inference
Algorithm 3
INPUT: Data sampled from the tuple {X_1, X_2, X_3}; score function Ĉ based on HSIC, measuring independence between two samples.
for i, j ∈ {1, 2, 3}, i > j (and k the remaining index) do
  Compute the residuals of regressing X_i on X_j, ê(X_i ∼ X_j), as well as of regressing X_j on X_i, ê(X_j ∼ X_i).
  if Ĉ(X_i, ê(X_j ∼ X_i)) < Ĉ(X_j, ê(X_i ∼ X_j)) and Ĉ(X_i, ê(X_k ∼ X_i)) < Ĉ(X_k, ê(X_i ∼ X_k)) then
    Orient the edges between X_i, X_j and X_i, X_k as X_i → X_j and X_i → X_k respectively.
    Compute the residuals of regressing X_k on X_j, X_i, ê(X_k ∼ X_j + X_i), as well as of regressing X_j on X_k, X_i, ê(X_j ∼ X_k + X_i).
    if Ĉ(ê(X_k ∼ X_j + X_i), ê(X_j ∼ X_i)) < Ĉ(ê(X_j ∼ X_k + X_i), ê(X_k ∼ X_i)) then
      Orient the edge between X_j, X_k as X_k → X_j
    else
      Orient the edge between X_j, X_k as X_j → X_k
    end if
  end if
end for
OUTPUT: Estimated DAG for {X_1, X_2, X_3}
C Additional simulation results on multivariate data
We randomly generate 100 DAGs of k = {5, 10} vertices with maximum degree of 3 respectively, some of which could be identical. For each DAG generated, we simulate data set of 100 points according to the following data generating mechanism:
$$X_i = \sum_{j=1}^{n_i} f_{m_j}\!\left(\mathrm{Pa}(X_i)_j\right) + \varepsilon_i,$$
where m_j ∈ {1, 2, 3}, j ∈ {1, ..., n_i}, n_i is the degree of X_i, the ε_i are iid N(0, 1), and f_1(x) = x^3 + x, f_2(x) = log(x + 10) + x^6, f_3(x) = sin(10x) + exp(3x). The index m_j is chosen at random for each j.
From the true CPDAG, we use AF-KCDC, in which we apply our algorithm with KCDC [21], to obtain the estimated DAG from the synthetic data sets. AF-KCDC achieves 100% estimation accuracy in the depicted empirical studies over 100 DAGs for both k = 5 and k = 10.
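A compact R sketch of this generating mechanism is given below, for a DAG supplied as an adjacency matrix A with A[j, i] = 1 when X_j is a parent of X_i and columns assumed to be in topological order; these conventions, and the toy chain at the end, are illustrative assumptions rather than the paper's exact setup.

f_list <- list(function(x) x^3 + x,
               function(x) log(x + 10) + x^6,
               function(x) sin(10 * x) + exp(3 * x))

simulate_dag_data <- function(A, n = 100) {
  p <- ncol(A)
  X <- matrix(0, n, p)
  for (i in seq_len(p)) {                 # columns assumed topologically ordered
    X[, i] <- rnorm(n)                    # epsilon_i ~ N(0, 1)
    for (j in which(A[, i] == 1)) {       # add f_{m_j}(parent) for each parent
      m_j <- sample(1:3, 1)
      X[, i] <- X[, i] + f_list[[m_j]](X[, j])
    }
  }
  X                                       # values can explode for the heavier nonlinearities
}

# toy example: X1 -> X2 -> X3
A <- matrix(0, 3, 3); A[1, 2] <- 1; A[2, 3] <- 1
head(simulate_dag_data(A))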
References
[1] C. Glymour, K. Zhang, and P. Spirtes. Review of causal discovery methods based on graphical models. Frontiers in Genetics, 10:524, 2019.
[2] J. Pearl. Causality. Cambridge University Press, 2009.
[3] P. Spirtes, C. N. Glymour, R. Scheines, D. Heckerman, C. Meek, G. Cooper, and T. Richardson. Causation, Prediction, and Search. MIT Press, 2000.
[4] O. Goudet, D. Kalainathan, P. Caillou, I. Guyon, D. Lopez-Paz, and M. Sebag. Learning functional causal models with generative neural networks. In Explainable and Interpretable Models in Computer Vision and Machine Learning, pages 39-80. Springer, 2018.
[5] D. Lopez-Paz, K. Muandet, B. Schölkopf, and I. Tolstikhin. Towards a learning theory of cause-effect inference. In International Conference on Machine Learning, pages 1452-1461, 2015.
[6] D. Janzing and B. Schölkopf. Causal inference using the algorithmic Markov condition. IEEE Transactions on Information Theory, 56(10):5168-5194, 2010.
[7] S. L. Lauritzen. Graphical Models, volume 17. Clarendon Press, 1996.
[8] S. A. Andersson, D. Madigan, and M. D. Perlman. A characterization of Markov equivalence classes for acyclic digraphs. The Annals of Statistics, 25(2):505-541, 1997.
[9] D. M. Chickering. Learning equivalence classes of Bayesian-network structures. Journal of Machine Learning Research, 2:445-498, 2002.
[10] J. Pearl. Causal inference in statistics: An overview. Statistics Surveys, 3:96-146, 2009.
[11] K. A. Bollen. Structural Equations with Latent Variables, volume 210. John Wiley & Sons, 2014.
[12] I. Tsamardinos, L. E. Brown, and C. F. Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31-78, 2006.
[13] S. Shimizu, P. O. Hoyer, A. Hyvärinen, and A. Kerminen. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7:2003-2030, 2006.
[14] S. Shimizu, A. Hyvärinen, Y. Kawahara, and T. Washio. A direct method for estimating a causal ordering in a linear non-Gaussian acyclic model. In Proc. 25th Conference on Uncertainty in Artificial Intelligence (UAI 2009), 2009.
[15] P. Hoyer, D. Janzing, J. M. Mooij, J. Peters, and B. Schölkopf. Nonlinear causal discovery with additive noise models. Advances in Neural Information Processing Systems, 21:689-696, 2008.
[16] J. Mooij, D. Janzing, J. Peters, and B. Schölkopf. Regression by dependence minimization and its application to causal inference in additive noise models. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 745-752, 2009.
[17] J. Peters, J. M. Mooij, D. Janzing, and B. Schölkopf. Causal discovery with continuous additive noise models. The Journal of Machine Learning Research, 15(1):2009-2053, 2014.
[18] K. Zhang and A. Hyvärinen. On the identifiability of the post-nonlinear causal model. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 2009.
[19] D. Janzing, J. Mooij, K. Zhang, J. Lemeire, J. Zscheischler, P. Daniušis, B. Steudel, and B. Schölkopf. Information-geometric approach to inferring causal directions. Artificial Intelligence, 182:1-31, 2012.
[20] E. Sgouritsa, D. Janzing, P. Hennig, and B. Schölkopf. Inference of cause and effect with unsupervised inverse regression. In Artificial Intelligence and Statistics, pages 847-855, 2015.
[21] J. Mitrovic, D. Sejdinovic, and Y. W. Teh. Causal inference via kernel deviance measures. In Advances in Neural Information Processing Systems, pages 6986-6994, 2018.
[22] J. M. Mooij, J. Peters, D. Janzing, J. Zscheischler, and B. Schölkopf. Distinguishing cause from effect using observational data: methods and benchmarks. The Journal of Machine Learning Research, 17(1):1103-1204, 2016.
[23] J. de Kleer and J. S. Brown. Theories of causal ordering. Artificial Intelligence, 29(1):33-61, 1986.
[24] N. Pfister, P. Bühlmann, B. Schölkopf, and J. Peters. Kernel-based tests for joint independence. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 80(1):5-31, 2018.
[25] A. Gretton, R. Herbrich, A. Smola, O. Bousquet, and B. Schölkopf. Kernel methods for measuring independence. Journal of Machine Learning Research, 6:2075-2129, 2005.
[26] K. Sachs, O. Perez, D. Pe'er, D. A. Lauffenburger, and G. P. Nolan. Causal protein-signaling networks derived from multiparameter single-cell data. Science, 308(5721):523-529, 2005.
[27] J. Pearl. Belief networks revisited. Artificial Intelligence in Perspective, pages 49-56, 1994.
Asymptotic tail properties of Poisson mixture distributions
Samuel Valiquette, Gwladys Toulemonde, Jean Peyhardi, Éric Marchand and Frédéric Mortier
CIRAD, UPR Forêts et Sociétés, Univ Montpellier, F-34398 Montpellier, France; IMAG, Université de Montpellier, CNRS, 34090 Montpellier, France; LEMON, Inria, 34095 Montpellier, France; Département de mathématiques, Université de Sherbrooke, Sherbrooke J1K 2R1, Canada; Environmental Justice Program, Georgetown University, Washington, D.C., United States of America.
Correspondence: Samuel Valiquette, Département de mathématiques, Université de Sherbrooke, Sherbrooke J1K 2R1, Canada ([email protected]). Preprint: arXiv:2305.17095.
Keywords: count data; extreme value theory; goodness-of-fit; peak-over-threshold; Poisson mixtures.
Count data are omnipresent in many applied fields, often with overdispersion. With mixtures of Poisson distributions representing an elegant and appealing modelling strategy, we focus here on how the tail behaviour of the mixing distribution is related to the tail of the resulting Poisson mixture. We define five sets of mixing distributions and we identify for each case whenever the Poisson mixture is in, close to or far from a domain of attraction of maxima. We also characterize how the Poisson mixture behaves similarly to a standard Poisson distribution when the mixing distribution has a finite support. Finally, we study, both analytically and numerically, how goodness-of-fit can be assessed with the inspection of tail behaviour.
This paper is organized as follows. Section 2 presents the extreme value theory in the Poisson mixture context and different families of mixing distributions. Using these set of distributions, we identify when the Poisson mixture is in, close to, or far from a domain of attraction. Moreover, we demonstrate that Poisson mixtures satisfying the latter case behave similarly to a standard Poisson distribution. In Section 3, we inspect how those three situations can affect the goodness-of-fit when it comes to adjusting count data with a Poisson mixture. Moreover, we explore how one can identify which type of mixing distribution can be adequate by using the generalized Pareto distribution on the excesses. We also study how the closeness to the Gumbel domain of attraction has an impact on identifying such a mixing distribution. Finally, we provide an example where the maxima of a Poisson mixture alternates between two values.
POISSON MIXTURE TAIL BEHAVIOUR
In this section, we present notations and the family of mixing distributions that is studied in this paper. Moreover, preliminary results in extreme value theory are presented and we describe maximum domain of attraction restrictions for discrete distributions. Following this, we elaborate on mixing distributions that allow the Poisson mixture to be either in or near a domain of attraction, or to drastically fail to belong in one. Finally, for a Poisson mixture with a finite mixing distribution, we will prove that the asymptotic behaviour of its probability mass function behaves similarly to that of a Poisson distribution.
Theoretical foundations
In the following, for Poisson mixtures X | λ with λ random, F, F̄ and f will denote respectively the cumulative distribution function (cdf), the survival function, and the probability density function (pdf) of the mixing λ. Similarly, F_M, F̄_M and P_M will denote respectively the cdf, the survival function, and the probability mass function (pmf) of the resulting Poisson mixture X. Moreover, in this paper, we restrict the mixing distributions on λ to those with support equal to (0, x_0) for x_0 ∈ R_+ ∪ {∞}. Finally, we require the notion of a slowly varying function C(x) on R_+, defined by the property that, for every t ∈ R_+, C(tx) ∼ C(x), where g(x) ∼ h(x) means that $\lim_{x\to\infty} g(x)/h(x) = 1$ for functions g and h.
The tail behaviour of the Poisson mixture can be studied using extreme value theory. Such a statistical approach analyzes how the maxima of F_M stabilize asymptotically. For a general distribution G, the theory says that G belongs to a domain of attraction if there exist two normalizing sequences $a_n > 0$ and $b_n$ such that $G^n(a_n x + b_n)$ converges to a non-degenerate distribution when n tends to infinity (Resnick 1987). Such a non-degenerate distribution can only be the generalized extreme value distribution given by
$$\lim_{n\to\infty} G^n(a_n x + b_n) = \begin{cases} \exp\!\left(-(1+\gamma x)^{-1/\gamma}\right) & \text{for } 1+\gamma x > 0 \text{ with } \gamma \neq 0; \\ \exp\!\left(-e^{-x}\right) & \text{for } x \in \mathbb{R} \text{ with } \gamma = 0. \end{cases} \qquad (1)$$
The three possible domains of attraction are named Weibull, Gumbel and Fréchet for γ < 0, γ = 0 and γ > 0 respectively, and will be denoted by D − , D 0 and D + . Accordingly, we will write G ∈ D where D is one of the three domains. Necessary and sufficient conditions for G to be in a domain of attraction have been established by Gnedenko (1943). While most common continuous distributions can be associated to a domain of attraction, this is not always the case for discrete random variables. Indeed, a necessary condition for a discrete distribution G to be in a domain of attraction is the long-tailed property (Anderson 1970) defined by
$$\bar G(n+1) \sim \bar G(n). \qquad (2)$$
Well-known discrete distributions, such as the Poisson, geometric and negative binomial, do not satisfy the above property. However, Anderson (1970) and Shimura (2012) showed that if a discrete distribution verifies
$$\bar G(n+1) \sim L\,\bar G(n), \qquad (3)$$
for L ∈ (0, 1), then G is, in a sense, "close" to the Gumbel domain. More precisely, Shimura (2012) showed that property (3) implies that G is the discretization of a unique continuous distribution belonging to D 0 . On the other hand, Anderson (1970) showed that there exist a sequence $b_n$ and α > 0 such that
$$\limsup_{n\to\infty} G^n(x + b_n) \le \exp\!\left(-e^{-\alpha x}\right), \qquad \liminf_{n\to\infty} G^n(x + b_n) \ge \exp\!\left(-e^{-\alpha(x-1)}\right),$$
if and only if $\bar G(n+1) \sim e^{-\alpha}\,\bar G(n)$. Therefore, the supremum and infimum limits of $G^n(x + b_n)$ are bounded by two Gumbel distributions under property (3). However, if a discrete distribution G is such that
$$\lim_{n\to\infty} \frac{\bar G(n+1)}{\bar G(n)} = 0, \qquad (4)$$
then no sequence $b_n$ can be found such that the supremum and infimum limits of $G^n(x + b_n)$ are bounded by two different Gumbel distributions.
For this case, Anderson (1970) showed that for $Y_i \overset{iid}{\sim} G$, there exists a sequence of integers $I_n$ such that
$$\lim_{n\to\infty} P\!\left(\max_{1\le i\le n} Y_i = I_n \text{ or } I_n + 1\right) = 1 \qquad (5)$$
if and only if (4) is satisfied. Therefore, the maximum of such a discrete distribution oscillates between two integers asymptotically.
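These three regimes are easy to inspect numerically. The short R sketch below evaluates the survival ratio Ḡ(n+1)/Ḡ(n) for a Poisson, a geometric and a negative binomial distribution: the Poisson ratio tends to 0 (property (4)), while the geometric and negative binomial ratios tend to a constant L ∈ (0, 1) (property (3)). The parameter values are chosen purely for illustration.

surv_ratio <- function(surv, n) surv(n + 1) / surv(n)

n <- c(5, 10, 20, 40)

# Poisson(5): ratio tends to 0, i.e. property (4)
surv_ratio(function(k) ppois(k, lambda = 5, lower.tail = FALSE), n)

# Geometric(p = 0.3): ratio equals 1 - p = 0.7, i.e. property (3)
surv_ratio(function(k) pgeom(k, prob = 0.3, lower.tail = FALSE), n)

# Negative binomial(size = 2, prob = 0.4): ratio tends to 1 - prob = 0.6
surv_ratio(function(k) pnbinom(k, size = 2, prob = 0.4, lower.tail = FALSE), n)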
Poisson mixtures categories
Since Poisson mixture distributions are discrete distributions, they are constrained to the long-tailed property (2) in order to have a domain of attraction. Otherwise, they may be close to the Gumbel domain or with a maximum alternating between two integers. Since a Poisson mixture is uniquely identifiable by the distribution on λ (Feller 1943), it follows that its tail behaviour depends on the latter. Therefore, we seek to identify what conditions on the distribution of λ allow the Poisson mixture distributions to satisfy either equation (2), (3) or (4). In the following, we will establish that Poisson mixtures with F in D + or D − will satisfy equations (2) and (4) respectively, but for mixing distributions in D 0 , the Poisson mixture may satisfy either one of the three limits depending on their behaviour. We require the following definitions and notations.
Definition 1. A distribution F has an exponential tail if for all k ∈ R, there is a β > 0 such that, as x → ∞,
$$\bar F(x+k) \sim e^{-\beta k}\,\bar F(x). \qquad (6)$$
Definition 2. A distribution F satisfies the Gumbel hazard condition if its density f has a negative derivative for all x in some left neighborhood of ∞, $\lim_{x\to\infty} \frac{d}{dx}\,\frac{1-F(x)}{f(x)} = 0$ (the 3rd Von Mises condition), and $\lim_{x\to\infty} x^{\delta}\,\frac{f(x)}{1-F(x)} = 0$ for some δ ≥ 1/2.
Using Definitions 1 and 2, we focus on three distinct subsets of D 0 . Firstly, distributions satisfying one of these definitions are in the Gumbel domain of attraction, see Shimura (2012) and Resnick (1987). Secondly, some distributions with finite tail are in D 0 , but do not belong to D − (e.g. Gnedenko (1943)). Based on these three cases, let D E 0 , D H 0 and D F 0 denote respectively the classes of F ∈ D 0 satisfying Definition 1, Definition 2, and with finite tail. These subsets of D 0 are disjoint by the following Proposition.
Proposition 1. The sets D E 0 , D H 0 and D F 0 are disjoint.
Proof. Since D E 0 and D H 0 represent distributions with an infinite tail, they are both disjoint from D F 0 . To establish that D E 0 and D H 0 are disjoint, it is sufficient to show that the condition on the hazard rate function in Definition 2 is not satisfied for the former case. By abuse of notation, we will denote in this proof by C any slowly varying function. As noticed in Cline (1986), F has an exponential tail if and only if F (ln x) = C(x)x −β for some β > 0. By the monotone density theorem presented in Theorem 1.7.2. in Bingham, C.M., and Teugels (1987), we then have f (x) = C(e x )e −βx .
Therefore, the density f still has an exponential tail. Moreover, its limit given by Definition 1 converges uniformly on (ln b, ∞) for every b > 0 (Resnick 1987). Then, by differentiating f with respect to k,
$$\lim_{x\to\infty} \frac{f'(x+k)}{f(x)} = -\beta\,e^{-\beta k},$$
and fixing k = 0, we conclude that f ′ (x) ∼ −βf (x). Using this property, one has for all δ > 0 that
$$\lim_{x\to\infty} x^{\delta}\,\frac{f(x)}{\bar F(x)} = \beta \lim_{x\to\infty} x^{\delta} = \infty,$$
showing that the Gumbel hazard condition (Definition 2) is not satisfied.
Although these subsets are disjoint, they do not form a partition of D 0 . Indeed, the Weibull distribution with cdf
$F(x) = 1 - e^{-(x/\beta)^{\alpha}}$ is neither in D H 0 nor D E 0 when α ∉ (0, 1/2) ∪ {1}.
This distribution belongs to a broader subset of D 0 named Weibull tail, which intersects with D H 0 and D E 0 ; see Gardes and Girard (2013) for more details. We now discriminate between properties (2), (3) and (4) with respect to the domain of attraction of λ.
Theorem 1. Let F M be a Poisson mixture with λ distributed according to a cdf F supported on (0, x 0 ) with x 0 ∈ R + ∪ {∞}. Then for any integer k ≥ 1,
$$\lim_{n\to\infty} \frac{\bar F_M(n+k)}{\bar F_M(n)} = \begin{cases} 1 & \text{if } F \in D_+ \cup D_0^H, \\ (1+\beta)^{-k} & \text{if } F \in D_0^E, \\ 0 & \text{if } F \in D_- \cup D_0^F, \end{cases}$$
where β > 0 is given by Definition 1 for D E 0 .
Proof. (A) $\lim_{n\to\infty} \bar F_M(n+k)/\bar F_M(n) = 1$: The result for D H 0 is directly established by Perline (1998). For F ∈ D + , a necessary and sufficient condition is that $\bar F(x) = C(x)\,x^{-\alpha}$ with α > 0 (Gnedenko 1943). In fact, C(x) must be locally bounded since $\bar F$ is bounded. As presented in Karlis and Xekalaki (2005), the survival function of the mixture is given by
$$\bar F_M(x) = \int_0^{\infty} \frac{\lambda^{\lfloor x \rfloor} e^{-\lambda}}{\lfloor x \rfloor !}\,(1 - F(\lambda))\,d\lambda = \frac{\Gamma(\lfloor x \rfloor - \alpha + 1)}{\Gamma(\lfloor x \rfloor + 1)} \int_0^{\infty} \underbrace{\frac{\lambda^{\lfloor x \rfloor - \alpha}\, e^{-\lambda}}{\Gamma(\lfloor x \rfloor - \alpha + 1)}}_{g(x,\lambda)}\, C(\lambda)\,d\lambda \qquad \text{for } x \text{ such that } \lfloor x \rfloor - \alpha > 0.$$
By the definition of the Gamma function, $\int_0^{\infty} g(x,\lambda)\,d\lambda = 1$, and then for $0 \le a < b \le \infty$, $\phi \in (-1, 1)$, and Stirling's formula, we have
$$\int_a^b \lambda^{\phi}\, g(x,\lambda)\,d\lambda \le \int_0^{\infty} \lambda^{\phi}\, g(x,\lambda)\,d\lambda = \frac{\Gamma(\lfloor x \rfloor - \alpha + \phi + 1)}{\Gamma(\lfloor x \rfloor - \alpha + 1)} \sim \lfloor x \rfloor^{\phi}.$$
By Theorem 4.1.4 in Bingham et al. (1987), we can conclude that F M is such that
$$\bar F_M(x) \sim C(\lfloor x \rfloor)\, \frac{\Gamma(\lfloor x \rfloor - \alpha + 1)}{\Gamma(\lfloor x \rfloor + 1)} \sim C(\lfloor x \rfloor)\, \lfloor x \rfloor^{-\alpha}.$$
Furthermore, since ⌊x⌋ ∼ x, C(⌊x⌋) ∼ C(x) using the Karamata representation of C (Resnick 1987).
Therefore $F_M \in D_+$ and $\bar F_M(n+k) \sim \bar F_M(n)$.
(B) $\lim_{n\to\infty} \bar F_M(n+k)/\bar F_M(n) = (1+\beta)^{-k}$: Since F has an exponential tail, then $\bar F(x) = C(e^x)\,e^{-\beta x}$ for some β > 0. Using a similar argument as in Theorem 4.1.4 in Bingham et al. (1987), we can prove that
$$\bar F_M(n) \sim \frac{C(e^n)}{(1+\beta)^{n+1}}.$$
Therefore,
$$\lim_{n\to\infty} \frac{\bar F_M(n+k)}{\bar F_M(n)} = (1+\beta)^{-k} \lim_{n\to\infty} \frac{C(e^{n+k})}{C(e^n)} = (1+\beta)^{-k}.$$
(C) $\lim_{n\to\infty} \bar F_M(n+k)/\bar F_M(n) = 0$: Because $\bar F_M(n) = \int_0^{x_0} \frac{\lambda^{n} e^{-\lambda}}{n!}\,(1 - F(\lambda))\,d\lambda$, the result follows since
$$\frac{\bar F_M(n+k)}{\bar F_M(n)} = \frac{1}{\prod_{i=1}^{k}(n+i)}\; \frac{\int_0^{x_0} \lambda^{n+k} e^{-\lambda}\,(1 - F(\lambda))\,d\lambda}{\int_0^{x_0} \lambda^{n} e^{-\lambda}\,(1 - F(\lambda))\,d\lambda} \le \frac{x_0^{k}}{\prod_{i=1}^{k}(n+i)} \to 0 \quad \text{when } n \to \infty.$$
Theorem 1 establishes that if F ∈ D + , then F M ∈ D + , which improves upon the result of Perline (1998), who adds the 1st Von Mises condition (Resnick 1987) to prove a similar result. By relaxing this condition, we have proved that any mixing distribution in D + allows the Poisson mixture to remain in this domain of attraction. Analogously, Shimura (2012) showed that any discretization of a continuous distribution in D + preserves the domain of attraction. Considering the Poisson mixture as a discretization operator, we obtain another example where the Fréchet domain of attraction is preserved. A broad set of mixing distributions in D + can be found, for example the Fréchet, folded-Cauchy, Beta type II, inverse-Gamma, or the Gamma/Beta type II mixture (Irwin 1968). Unfortunately, examples are scarce for distributions in D H 0 . Indeed, the asymptotic behaviour of the hazard rate function in Definition 2 is quite restrictive. Examples include the lognormal, the Benktander type I and II (Kleiber & Kotz 2003), and the Weibull distributions, with further restrictions on the parameters for the latter two cases. This type of distribution does not encompass cases like the Gamma, even though this mixing distribution belongs to D 0 , because it does not satisfy the additional condition on the hazard rate function. The class D E 0 allows us to describe such mixing distributions. It includes a broad class of elements, among others the Gamma, Gamma/Gompertz, exponential, exponential logarithmic, inverse-Gaussian and generalized inverse-Gaussian distributions. As previously mentioned, these distributions are in the Gumbel domain of attraction, but, by Theorem 1, the resulting Poisson mixtures do not belong to any domain of attraction.
However, we can quantify how close such Poisson mixtures are to the Gumbel domain of attraction. Indeed, if β → 0 then $\bar F_M(n+1)/\bar F_M(n) \to 1$, i.e. it approaches a long-tailed distribution. Finally, when F has a finite tail, i.e. F ∈ D − ∪ D F 0 , the Poisson mixture cannot be close to any domain of attraction by Theorem 1.
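The three limits in Theorem 1 can also be checked numerically by writing the mixture survival function as an expectation of Poisson tail probabilities. The R sketch below does this with integrate(): a Gamma(2, 1) mixing density (exponential tail with β = 1) gives a negative binomial mixture whose ratio tends to (1 + β)^{-1} = 0.5, a lognormal(0, 1) mixing (Gumbel hazard condition) gives a ratio that slowly approaches 1, and a Uniform(0, 5) mixing (finite tail) gives a ratio that goes to 0. Parameter values are illustrative only.

# survival function of the Poisson mixture: \bar F_M(n) = E[ P(Poisson(lambda) > n) ]
surv_mix <- function(n, dmix, upper = Inf) {
  sapply(n, function(k)
    integrate(function(l) ppois(k, l, lower.tail = FALSE) * dmix(l),
              lower = 0, upper = upper)$value)
}

ratio <- function(n, dmix, upper = Inf)
  surv_mix(n + 1, dmix, upper) / surv_mix(n, dmix, upper)

n <- c(5, 10, 20)

# Gamma(shape = 2, rate = 1) mixing: limit (1 + 1)^{-1} = 0.5
ratio(n, function(l) dgamma(l, shape = 2, rate = 1))
# closed form for comparison: the mixture is negative binomial
pnbinom(n + 1, size = 2, prob = 1/2, lower.tail = FALSE) /
  pnbinom(n,     size = 2, prob = 1/2, lower.tail = FALSE)

# lognormal(0, 1) mixing: ratio slowly approaches 1
ratio(n, function(l) dlnorm(l, 0, 1))

# Uniform(0, 5) mixing: ratio tends to 0
ratio(n, function(l) dunif(l, 0, 5), upper = 5)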
2.3 Asymptotic behaviour for F ∈ D −
To shed light on why the last limit in Theorem 1 is null, we complete this section by studying the asymptotic behaviour of the pmf P M when F is in D − . Willmot (1990) studied such a behaviour when the Poisson mixture has a mixing distribution with a particular exponential tail. This result is presented in the following Proposition.
Proposition 2 (Willmot (1990)). Let F M be a Poisson mixture with λ distributed according to a distribution F such that its density is
$$f(x) \sim C(x)\,x^{\alpha}\,e^{-\beta x},$$
where C is a locally bounded and slowly varying function on R + , for some α ∈ R and β > 0. Then the pmf P M is such that
$$P_M(n) \sim C(n)\,n^{\alpha}\,(1+\beta)^{-(n+\alpha+1)}.$$
Proposition 2 indicates that when the density f behaves similarly to a Gamma distribution, then the pmf P M behaves like a negative binomial pmf multiplied by a regular varying function. As previously mentioned, the negative binomial is an example of a distribution where equation (3) is satisfied. This provides additional clarification on why the limit associated with an exponential tail in Theorem 1 converges to a value between 0 and 1. In the following Theorem, a similar conclusion is presented when F ∈ D − .
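For a Gamma mixing density this can be verified directly, since the mixture is then negative binomial; with a Gamma(a, β) density one has C(x) = β^a/Γ(a) and α = a − 1 in the notation of Proposition 2. The short R check below compares the exact pmf with the asymptotic form; the parameter values are arbitrary.

a <- 2; beta <- 1.5
n <- c(10, 20, 40, 80)

exact  <- dnbinom(n, size = a, prob = beta / (1 + beta))            # Poisson-Gamma pmf
approx <- (beta^a / gamma(a)) * n^(a - 1) * (1 + beta)^(-(n + a))   # Proposition 2

exact / approx   # tends to 1 as n grows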
Theorem 2. Let F M be a Poisson mixture with λ distributed according to a distribution F ∈ D − . Then there exists an α > 0 such that
$$\bar F_M(n) \sim \Gamma(\alpha+1)\,C(n)\,n^{-\alpha}\,\frac{x_0^{\,n+1}}{(n+1)!}\,e^{-x_0}.$$
Proof. Using the integral representation of $\bar F_M$,
$$\bar F_M(n) = \int_0^{x_0} \frac{\lambda^{n} e^{-\lambda}}{n!}\,(1 - F(\lambda))\,d\lambda = \frac{x_0^{\,n+1}}{n!} \int_0^{\infty} \frac{\lambda^{n}}{(\lambda+1)^{n+2}}\, e^{-\frac{x_0 \lambda}{\lambda+1}} \left(1 - F\!\left(\frac{x_0 \lambda}{\lambda+1}\right)\right) d\lambda,$$
where the transformation $\lambda \to \frac{\lambda}{x_0 - \lambda}$ has been applied. By adapting the necessary and sufficient condition for the Weibull domain of attraction (Gnedenko 1943), which is F ∈ D − if and only if $x_0 < \infty$ and $1 - F\!\left(\frac{x_0 x}{x+1}\right) = C(x)\,x^{-\alpha}$ for C a locally bounded and slowly varying function and α > 0, we obtain
$$\bar F_M(n) = \frac{x_0^{\,n+1}}{n!} \int_0^{\infty} \frac{\lambda^{n-\alpha}}{(\lambda+1)^{n+2}}\, C(\lambda)\, e^{-\frac{x_0 \lambda}{\lambda+1}}\, d\lambda,$$
and using the fact that the Beta function is such that
$$B(a, b) = \int_0^{\infty} \frac{t^{a-1}}{(t+1)^{a+b}}\,dt,$$
a similar argument as in Theorem 1 provides that
$$\bar F_M(n) \sim \frac{x_0^{\,n+1}}{n!}\, B(n-\alpha+1, \alpha+1)\, C(n)\, e^{-\frac{x_0 n}{n+1}} \sim \frac{x_0^{\,n+1} e^{-x_0}}{n!}\, C(n)\, \frac{\Gamma(n-\alpha+1)\,\Gamma(\alpha+1)}{\Gamma(n+2)} \sim \Gamma(\alpha+1)\, C(n)\, n^{-\alpha}\, \frac{x_0^{\,n+1} e^{-x_0}}{(n+1)!}.$$
Using the asymptotic behaviour in Theorem 2, a similar result can be established for P M .
Corollary 1. Let F M be a Poisson mixture with λ distributed according to a cdf F ∈ D − . Then the pmf P M is such that
$$P_M(n) \sim \Gamma(\alpha+1)\,C(n)\,n^{-\alpha}\,\frac{x_0^{\,n}}{n!}\,e^{-x_0}.$$
Proof. Since $P_M(n) = \bar F_M(n-1) - \bar F_M(n)$, then
$$\lim_{n\to\infty} \frac{P_M(n)}{\Gamma(\alpha+1)\,C(n)\,n^{-\alpha}\,\frac{x_0^{\,n}}{n!}\,e^{-x_0}} = \lim_{n\to\infty} \frac{C(n-1)\,(n-1)^{-\alpha}}{C(n)\,n^{-\alpha}} - \lim_{n\to\infty} \frac{x_0}{n+1} = 1.$$
This result provides a new perspective on why the limit in Theorem 1 converges to 0 for a mixing distribution with a finite support. Indeed, as previously mentioned, the Poisson distribution is an example such that the limit (4) is satisfied. From Theorem 2 and Corollary 1, F M and P M behave like a Poisson distribution with mean x 0 multiplied by a regular varying function. Intuitively, the mixing distribution does not put weight everywhere on R + , so the tail of F M cannot satisfy equation (2).
NUMERICAL STUDY
This section illustrates the practical implications of the theoretical results previously obtained. In particular, we highlight how the mixing distribution impacts the adjustment, how the statistical evaluation of tail distributions of count data may help to select a mixing distribution, and how the maxima of Poisson mixtures with finite mixing distribution behave asymptotically.
Impact of mixing distribution choice on goodness of fit
To illustrate how the tail behaviour of λ affects the model adjustment, we simulated 100 samples of different Poisson mixtures with size n = 250 using the (i) Fréchet(α, β), (ii) lognormal(µ, σ), (iii) Gamma(α, β), and (iv) Uniform(0, x 0 ) distributions on λ with densities
(i) $f(x) = \frac{\alpha}{x}\left(\frac{x}{\beta}\right)^{-\alpha} e^{-\left(\frac{x}{\beta}\right)^{-\alpha}}$, α > 0, β > 0; (ii) $f(x) = \frac{1}{x\sigma\sqrt{2\pi}}\, e^{-\frac{(\ln x - \mu)^2}{2\sigma^2}}$, μ ∈ R, σ > 0; (iii) $f(x) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, x^{\alpha-1} e^{-\beta x}$, α > 0, β > 0; (iv) $f(x) = \frac{1}{x_0}\,\mathbf{1}_{(0,x_0)}(x)$
, each one being representative of four of the five types of mixing distributions we encountered. Respectively, they are representative of elements in D + , D H 0 , D E 0 , and D − . Moreover, the parameter γ from equation (1) associated with (i) and (iv) is respectively γ = 1/α and γ = −1, while γ = 0 for (ii) and (iii). For each sample, the Poisson mixture is fitted with the same four distributions and the best model is kept using a Bayesian framework. This is done using the language R (R Core Team 2021) and the rstan package (Stan Development Team 2020) to estimate the hyperparameters by MCMC. The best model is then selected using the highest posterior model probability. Those probabilities are approximated using the bridge sampling computational technique (Meng & Wong 1996) and the dedicated R package Bridgesampling (Gronau, Singmann, & Wagenmakers 2020). All results are based on the following priors: a Gamma(1, 1) distribution for positive parameters and a Normal(0, 1) for real parameters. Moreover, we simulated for each sample four MCMCs with 10,000 iterations each in order to ensure reasonable convergence for parameter estimation and for the posterior model probabilities. Results are presented in Table 1.
The Poisson-Fréchet mixtures stood out the most since their tail is heavier than that of any other of the distributions. The only competing model seems to be the Poisson-lognormal, which has a heavier tail than an exponential-type distribution, but lighter than the Fréchet. The variance also influences which model is selected. Indeed, for example, the lognormal(0,1) has a smaller variance compared to the lognormal(1,1). In the former mixture, the Gamma seems able to compete against the lognormal, which is not the case for the latter. Interestingly, the Fréchet mixing distribution is selected sparingly for lognormal data even when the variance gets larger. This fact remains true for the rest of the table since the Fréchet distribution has a much heavier tail. By Theorem 1, we know that the Gamma distribution can get close to the Gumbel domain of attraction.
From Table 1, we see that the lognormal is a significant competitor for both simulations, which reflects the closeness to D 0 . However, when the rate parameter is equal to 2, the mean and variance decrease and the uniform becomes another chosen option. This can be explained by the fact that $\bar F_M(n+1)/\bar F_M(n)$ is closer to 0 when n grows to infinity. Finally, since the uniform has a finite tail, only the Gamma can compete and, again, the larger the variance the less the Gamma is selected. Based on each case, we see a diagonal effect from the heavier tail to the finite tail.
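The mixtures used in this section are straightforward to simulate by first drawing λ and then drawing the count. A minimal R sketch follows; the Fréchet draw uses inversion of its cdf, the helper name rfrechet_simple is ours, and the parameter values correspond to one illustrative configuration rather than the full grid of the study.

set.seed(123)
n <- 250

rfrechet_simple <- function(n, alpha, beta) beta * (-log(runif(n)))^(-1 / alpha)  # inverse cdf

lambda_frechet <- rfrechet_simple(n, alpha = 2, beta = 1)
lambda_lognorm <- rlnorm(n, meanlog = 0, sdlog = 1)
lambda_gamma   <- rgamma(n, shape = 2, rate = 1)
lambda_unif    <- runif(n, 0, 5)

x_frechet <- rpois(n, lambda_frechet)   # Poisson-Frechet sample
x_lognorm <- rpois(n, lambda_lognorm)   # Poisson-lognormal sample
x_gamma   <- rpois(n, lambda_gamma)     # negative binomial sample
x_unif    <- rpois(n, lambda_unif)      # Poisson-uniform sample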
Identifying the domain of attraction
In order to identify the domain of attraction a random variable belongs to, one can use the peaks-over-threshold (POT) method (Coles 2001).
This technique involves the distribution of the excesses defined by Y − u|Y > u, for a suitable choice of u. Pickands (1975), Balkema and de Haan (1974) showed that Y belongs to a domain of attraction if and only if the distribution of the excesses converges weakly to a generalized Pareto distribution (GPD) as u tends to the right endpoint of the distribution of Y . In such cases, the corresponding cdf is given by
H_{γ,σ}(y) = 1 − (1 + γ y/σ)^{−1/γ}  if γ ≠ 0,   and   H_{γ,σ}(y) = 1 − exp(−y/σ)  if γ = 0,    (7)
with support R_+ if γ ≥ 0, or [0; −σ/γ] if γ < 0, where γ ∈ R and σ > 0 are respectively shape and scale parameters. Moreover, the γ parameter is the same as in equation (1). Therefore, fitting a GPD to the excesses of a sample can inform us about the domain of attraction to which the underlying distribution belongs. Better yet, excesses of count data can inform us whether or not a Poisson mixture distribution belongs to a known domain of attraction and, if so, which one. Therefore, analyzing the discrete excesses can indicate what type of mixing distribution generates the Poisson mixture. Indeed, by Theorem 1, if the discrete excesses belong to a domain of attraction, then the mixing distribution F should be in D_+ ∪ D_0^H. Otherwise, F should have either an exponential or a finite tail.
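As a small worked illustration of equation (7), the following Python sketch evaluates the GPD cdf and cross-checks it against scipy's genpareto, whose shape parameter plays the role of γ here.

import numpy as np
from scipy import stats

def gpd_cdf(y, gamma, sigma):
    # Generalized Pareto cdf H_{gamma,sigma}(y) of equation (7), for y >= 0.
    y = np.asarray(y, dtype=float)
    if gamma == 0.0:
        return 1.0 - np.exp(-y / sigma)
    z = 1.0 + gamma * y / sigma
    return np.where(z > 0.0, 1.0 - z ** (-1.0 / gamma), 1.0)   # cdf equals 1 beyond the upper endpoint when gamma < 0

y = np.array([0.5, 1.0, 2.0])
print(gpd_cdf(y, gamma=0.5, sigma=1.0))
print(stats.genpareto.cdf(y, c=0.5, scale=1.0))   # same values: scipy's shape c corresponds to gamma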
From a practical point of view, the study of discrete excesses may justify a choice of model. For example, one may hesitate between fitting a Poisson-lognormal or a negative binomial to their count data. In order to study how useful the discrete excesses can be, various Poisson mixtures have been simulated. Here, we fixed the sample size to n = 1000, took the threshold u to be the 95th or 97.5th empirical quantile, and simulated 1000 samples for each mixing distribution. For each sample, the discrete excesses are extracted, and the evd R package (Stephenson 2002) is used to estimate the GPD parameters by maximum likelihood. Based on these estimates, the modified Anderson-Darling goodness-of-fit test is applied. Finally, for the samples for which the GPD appears to be adequate, we test H_0: γ = 0 versus H_1: γ ≠ 0. To do so, we fit these two models, evaluate the corresponding log-likelihoods L_1 and L_0, and conclude with the deviance statistic D = 2(L_1 − L_0), which approximately follows a χ²_1 distribution under suitable conditions (Coles 2001). Results are presented in Table 2.
Table 2. Average number of excesses, rejection rate for the GPD and non-rejection rate of H_0: γ = 0 for the simulations with n = 1000 and u equal to the 95th or 97.5th empirical quantile.
Firstly, we notice that even though the Fréchet and lognormal distributions are in D_+ and D_0^H respectively, the Fréchet(2,1) and lognormal(0,1) cases lead to a high rejection rate for the 95th quantile threshold. However, when both cases are simulated with a threshold u equal to the 97.5th quantile, the rate of GPD rejection diminishes. Therefore, it seems that the threshold choice has a great impact. Moreover, when u is the 97.5th quantile, the estimate of γ is not significantly different from 0 for 79% of the lognormal(0,1) samples. However, the estimate is also not significantly different from 0 for 61.5% of the Fréchet samples. Secondly, as noted by Hitz, Davis, and Samorodnitsky (2017), the discrete excesses need a certain amount of variability in order for the GPD to fit smoothly. Since the lognormal(1,1) has a greater variance and the Fréchet(1,1) does not have a finite expectation, this explains why these cases are well fitted by the GPD. Finally, both the Gamma and uniform cases have GPD rejection rates as expected. Interestingly, the uniform distribution is rejected at a lower rate than the Gamma. Again, this can be explained by the greater variance of the uniform simulations compared with the Gamma ones.
Also, the Gamma(2,1) leads to a lower rate of rejection than the Gamma(2,2), which is reasonable since the former is closer to D_0 than the latter by Theorem 1. Indeed, if the limit (1 + β)^{−1} in Theorem 1 approaches 0, the GPD rejection rate for the Poisson mixtures should increase. Conversely, the rejection rate should decrease when (1 + β)^{−1} approaches 1. To further analyze this, we simulated Poisson mixtures with a Gamma(2, β) mixing density and let the parameter β vary from 0.1 to 8, the quantity (1 + β)^{−1} thus varying between 1/9 and 10/11. For each value of β, we simulate 500 samples of size n = 1000 from the Poisson mixture, fix the threshold u at the 95th empirical quantile, and calculate the proportion of samples for which the GPD is rejected with type I error α = 0.05. Results are presented in Figure 1. We can see that the proportion indeed decreases as (1 + β)^{−1} moves towards 1. Between 0 and 0.5, the rejection proportion oscillates between 0.5 and 1. This can be explained by the fact that the number of discrete excesses also oscillates as β increases, which affects the power of the test.
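The deviance-test step described above can be sketched in a few lines. This is a minimal Python sketch assuming a Poisson-lognormal sample; the study itself used the evd package in R and additionally applied the modified Anderson-Darling goodness-of-fit test, which is omitted here.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated count data: one Poisson-lognormal(1,1) mixture sample of size n = 1000.
lam = rng.lognormal(1.0, 1.0, 1000)
x = rng.poisson(lam)

u = np.quantile(x, 0.95)            # threshold at the 95th empirical quantile
exc = x[x > u] - u                  # discrete excesses

# Maximum-likelihood fits of the GPD (gamma free) and of the exponential model (gamma = 0).
gamma_hat, _, sigma_hat = stats.genpareto.fit(exc, floc=0)
L1 = stats.genpareto.logpdf(exc, gamma_hat, loc=0, scale=sigma_hat).sum()
_, scale0 = stats.expon.fit(exc, floc=0)
L0 = stats.expon.logpdf(exc, loc=0, scale=scale0).sum()

# Deviance test of H0: gamma = 0 against H1: gamma != 0.
D = 2.0 * (L1 - L0)
p_value = stats.chi2.sf(D, df=1)
print(f"number of excesses = {exc.size}, gamma_hat = {gamma_hat:.3f}, deviance = {D:.3f}, p-value = {p_value:.3f}")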
To adjust for the problems related to the discreteness of the excesses, it would be interesting to transform them into continuous variables. As demonstrated by Shimura (2012), a Poisson mixture with F ∈ D_0^H is a random variable that originates from a unique continuous distribution in D_0 that has been discretized. If one can identify such a continuous distribution associated with the discrete excesses when the GPD is rejected, then it would be reasonable to use an exponential-tail mixing distribution. A jittering technique consisting of adding random noise to the data has been proposed in different discrete contexts (Coeurjolly & Trépanier 2020; Nagler 2018). A plausible approach would be to apply jittering before the GPD test in order to adequately identify the type of mixing distribution associated with the discrete excesses.
Figure 1. Proportion of Gamma(2, β) Poisson mixture samples (size n = 1000) for which the GPD has been rejected (α = 0.05) for the excesses (u = 95th empirical quantile), as a function of (1 + β)^{−1}.
Maxima for Poisson mixtures with finite tail mixing distribution
By Theorem 1, if F has bounded support (0, x_0), then the Poisson mixture is short tailed, i.e. F_M(n + 1)/F_M(n) → 0 as n → ∞. Therefore, according to Anderson (1970), there exists a sequence of integers I_n such that equation (5) is satisfied. Moreover, by Corollary 1, the pmf P_M asymptotically behaves like a Poisson distribution and, as mentioned, the Poisson is the primary example where its maximum oscillates between two integers. Kimber (1983) and Briggs, Song, and Prellberg (2009) study how the sequence I_n can be approximated for the Poisson distribution and showed that it grows slowly as n → ∞. Since P_M behaves like the Poisson when F is in D_−, the sequence I_n should also grow slowly. To visualise this behaviour, we simulated Poisson mixtures with λ ∼ x_0 Beta(α, β). We fixed α = 2, x_0 = 5, and for n ∈ {10, 10², 10³, 10⁴} we simulated 10,000 samples of F_M of size n and recorded the maximum of each sample. With these maxima, we calculated the empirical probabilities, and repeated the procedure for β ∈ {1/4, 1/2, 1, 2}. Figure 2 reports the empirical and theoretical pmfs of the maxima for the simulations and for n Poisson variables with mean x_0, respectively. Interestingly, the greater β becomes, the more slowly the sequence I_n increases. Indeed, when β = 1/4, the probability distribution of the maxima looks similar to that of a Poisson(x_0), while for β = 2 the distribution for the Poisson mixture shifts drastically to the left. This can be explained using Corollary 1. Indeed, we can show that P_M here is such that
P_M(n) ∼ (Γ(α + β)/Γ(α)) n^{−β} x_0^n e^{−x_0}/n!,
and when β approaches 0, only the pmf of the Poisson(x_0) remains. From another point of view, the density of the x_0 Beta(α, β) approaches a Dirac mass at x_0, so the Poisson mixture approaches a simple Poisson distribution.
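The simulation behind Figure 2 is easy to reproduce. The following Python sketch uses fewer replicates than the 10,000 of the study to keep memory modest, and summarizes each setting by the modal value of the maxima rather than by the full empirical pmf.

import numpy as np

rng = np.random.default_rng(2)
alpha, x0, reps = 2.0, 5.0, 500     # the study uses 10,000 replicates; fewer are used here to keep memory modest

for beta in (0.25, 0.5, 1.0, 2.0):
    for n in (10, 100, 1000, 10000):
        lam = x0 * rng.beta(alpha, beta, size=(reps, n))
        mix_max = rng.poisson(lam).max(axis=1)                 # maxima of the Poisson mixture samples
        poi_max = rng.poisson(x0, size=(reps, n)).max(axis=1)  # maxima of plain Poisson(x0) samples
        print(f"beta = {beta:4}, n = {n:5d}: modal mixture maximum = {np.bincount(mix_max).argmax()}, "
              f"modal Poisson maximum = {np.bincount(poi_max).argmax()}")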
CONCLUSION AND PERSPECTIVES
Overdispersed count data are commonly observed in many applied fields and Poisson mixtures are appealing to model such data. However, the choice of the appropriate mixing distribution is a difficult task relying mainly on empirical approaches related to the modeler's subjectivity or on intensive computational techniques combined with goodness-of-fit tests or information criteria. In this paper, we showed that such a choice should respect the relation between the tail behaviour of λ and the discrete data. Indeed, if a distribution F is in the Fréchet domain of attraction or satisfies the Gumbel hazard condition given by Definition 2, then the discrete data should be in the same domain of attraction. Otherwise, an exponential or finite tail should be chosen. Moreover, Theorem 1 established that Poisson mixtures with F ∈ D_0 need to be separated into three subsets: D_0^E, D_0^H and D_0^F. Both subsets D_0^E and D_0^H have distributions belonging to a larger subset named Weibull tail (Gardes & Girard 2013). It would be interesting to generalize Theorem 1 to this family of mixing distributions.
To identify whether the data distribution comes from a domain of attraction or not, we have studied the discrete excesses and their fit by the GPD. Some difficulties occurred due to the discrete nature of the data. Solutions that could be explored are the use of techniques like jittering, or the use of discrete analogues of the GPD such as the discrete generalized Pareto or the generalized Zipf distribution presented in Hitz et al. (2017). These approaches should help identify whether λ has an exponential tail or not. However, one could also think about testing whether λ has bounded support. Based on Theorem 2 and Corollary 1, a Poisson mixture with a finite-tail mixing distribution should behave similarly to a Poisson with mean x_0. Testing whether F has a finite tail or not based on these results is a promising avenue.
In the field of extreme value theory, our Theorem 2 and the result of Willmot (1990) in Proposition 2 may provide an approach to finding normalizing sequences such that the Poisson mixture belongs to a domain of attraction. Indeed, Anderson, Coles, and Hüsler (1997) showed that if the Poisson's mean λ depends on the sample size and increases with a certain rate, then it is possible to find normalizing sequences an and bn such that the distribution is in the Gumbel domain of attraction. If λ does not depend on the sample size, then no such sequence can be found.
A similar result has been proved by Nadarajah and Mitov (2002) for the negative binomial when α is fixed and β approaches 0. Since Theorem 2 and Proposition 2 showed that Poisson mixtures with finite or exponential tail mixing distribution resemble the Poisson or the negative binomial respectively, one could exploit these asymptotic properties to generalize the results of Anderson et al. (1997) and Nadarajah and Mitov (2002) with various Poisson mixtures like the Poisson-inverse-Gaussian or Poisson-Beta. Similarly, generalizing the results of Kimber (1983) and Briggs et al. (2009) concerning the sequence In for the maxima of Poisson random variables should also be explored.
Figure 2. Maximum distributions of the Poisson mixture with λ ∼ x_0 Beta(2, β) (black) and of the Poisson(x_0) (red), with x_0 = 5, β ∈ {1/4, 1/2, 1, 2} and n ∈ {10, 10², 10³, 10⁴}.
condition (3). The geometric and negative binomial distributions are two such examples. Finally, if the discrete distribution is a Poisson, or more generally such that lim_{n→∞} G(n + 1)
Table 1. Selected model frequencies for each Poisson mixture simulation, with the highest frequency in bold.

Mixing class   Mixing distribution   Fréchet   Lognormal   Gamma   Uniform
D_+            Fréchet(1,1)              89          11       0         0
D_+            Fréchet(2,1)              80          18       2         0
D_0^H          Lognormal(1,1)             5          89       6         0
D_0^H          Lognormal(0,1)             9          69      23         0
D_0^E          Gamma(2,1)                 1          22      73         4
D_0^E          Gamma(2,2)                 1          23      54        22
D_−            Uniform(0,10)              0           0      26        74
D_−            Uniform(0,5)               0           1      38        61
ACKNOWLEDGMENTS
Anders, S., & Huber, W. (2010). Differential expression analysis for sequence count data. Genome Biology, 11.
Anderson, C. W. (1970). Extreme value theory for a class of discrete distributions with applications to some stochastic processes. Journal of Applied Probability, 7(1), 99-113.
Anderson, C. W., Coles, S. G., & Hüsler, J. (1997). Maxima of Poisson-like variables and related triangular arrays. The Annals of Applied Probability, 7(4), 953-971.
Balkema, A. A., & de Haan, L. (1974). Residual life time at great age. The Annals of Probability, 2(5), 792-804.
Bartoszewicz, B. (2005). Modelling the claim count with Poisson regression and negative binomial regression. In Innovations in classification, data science, and information systems (pp. 103-110). Springer Berlin Heidelberg.
Bingham, N. H., Goldie, C. M., & Teugels, J. L. (1987). Regular variation. Cambridge University Press.
Briggs, K. M., Song, L., & Prellberg, T. (2009). A note on the distribution of the maximum of a set of Poisson random variables. arXiv preprint arXiv:0903.4373.
Bulmer, M. G. (1974). On fitting the Poisson lognormal distribution to species-abundance data. Biometrics, 30, 101.
Cline, D. B. H. (1986). Convolution tails, product tails and domains of attraction. Probability Theory and Related Fields, 72, 529-557.
Coeurjolly, J. F., & Trépanier, J. R. (2020). The median of a jittered Poisson distribution. Metrika, 83, 837-851.
Coles, S. (2001). An introduction to statistical modeling of extreme values. Springer.
Feller, W. (1943). On a general class of "contagious" distributions. The Annals of Mathematical Statistics, 14(4), 389-400.
Gardes, L., & Girard, S. (2013). Estimation de quantiles extrêmes pour les lois à queue de type Weibull: une synthèse bibliographique. Journal de la Société Française de Statistique, 154(2), 98-118.
Gnedenko, B. V. (1943). Sur la distribution limite du terme maximum d'une série aléatoire. Annals of Mathematics, 44, 423.
Greenwood, M., & Yule, G. (1920). An inquiry into the nature of frequency distributions representative of multiple happenings with particular reference to the occurrence of multiple attacks of disease or of repeated accidents. Journal of the Royal Statistical Society, 83, 255-279.
Gronau, Q., Singmann, H., & Wagenmakers, E. (2020). bridgesampling: An R package for estimating normalizing constants. Journal of Statistical Software, 92(10), 1-29.
Hitz, A., Davis, R., & Samorodnitsky, G. (2017). Discrete extremes. arXiv:1707.05033.
Irwin, J. O. (1968). The generalized Waring distribution applied to accident theory. Journal of the Royal Statistical Society, Series A (General), 131(2), 205-225.
Karlis, D., & Xekalaki, E. (2005). Mixed Poisson distributions. International Statistical Review, 73(1), 35-58.
Kimber, A. (1983). A note on Poisson maxima. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete, 63(4), 551-552.
Kleiber, C., & Kotz, S. (2003). Statistical size distributions in economics and actuarial sciences. Wiley.
Lambert, D. (1992). Zero-inflated Poisson regression, with an application to defects in manufacturing. Technometrics, 34(1), 1-14.
Meng, X. L., & Wong, W. H. (1996). Simulating ratios of normalizing constants via a simple identity: A theoretical exploration. Statistica Sinica, 6(4), 831-860.
Nadarajah, S., & Mitov, K. (2002). Asymptotics of maxima of discrete random variables. Extremes, 5(3), 287.
Nagler, T. (2018). A generic approach to nonparametric function estimation with mixed data. Statistics and Probability Letters, 137, 326-330.
Perline, R. (1998). Mixed Poisson distributions tail equivalent to their mixing distributions. Statistics and Probability Letters, 38(3), 229-233.
Pickands, J. (1975). Statistical inference using extreme order statistics. The Annals of Statistics, 3(1), 119-131.
R Core Team. (2021). R: A language and environment for statistical computing [Computer software manual]. Vienna, Austria. Retrieved from https://www.R-project.org/
Resnick, S. I. (1987). Extreme values, regular variation and point processes. Springer.
Shimura, T. (2012). Discretization of distributions in the maximum domain of attraction. Extremes, 15(3), 299-317.
Stan Development Team. (2020). RStan: the R interface to Stan. R package version 2.21.2. Retrieved from http://mc-stan.org/
Stephenson, A. G. (2002). evd: Extreme value distributions. R News, 2(2). Retrieved from https://CRAN.R-project.org/doc/Rnews/
Wenger, S. J., & Freeman, M. C. (2008). Estimating species occurrence, abundance, and detection probability using zero-inflated distributions. Ecology, 89(10), 2953-2959.
Willmot, G. E. (1990). Asymptotic tail behaviour of Poisson mixtures with applications. Advances in Applied Probability, 22(1), 147-159.
| [] |
[
"Testing the Randall-Sundrum Model at a High Energy e − e − Collider",
"Testing the Randall-Sundrum Model at a High Energy e − e − Collider"
] | [
"Dilip Kumar Ghosh [email protected] ",
"Sreerup Raychaudhuri [email protected] \nDepartment of Physics\nIndian Institute of Technology\n208, 016KanpurIndia\n",
"\nDepartment of Theoretical Physics\nTheory Division\nTata Institute of Fundamental Research\nHomi Bhabha Road400 005MumbaiIndia\n",
"\nCERN\n1211Geneva 23CHSwitzerland\n"
] | [
"Department of Physics\nIndian Institute of Technology\n208, 016KanpurIndia",
"Department of Theoretical Physics\nTheory Division\nTata Institute of Fundamental Research\nHomi Bhabha Road400 005MumbaiIndia",
"CERN\n1211Geneva 23CHSwitzerland"
] | [] | We study the process e − e − → e − e − at a high energy e − e − collider including the effect of graviton exchanges in the warped gravity model of Randall and Sundrum. Discovery limits for gravitons are established and the effects of polarization are discussed. | 10.1016/s0370-2693(00)01203-x | [
"https://export.arxiv.org/pdf/hep-ph/0007354v1.pdf"
] | 15,288,820 | hep-ph/0007354 | 84c83582a38c2bb7eff5900aac09cad73a3277b1 |
Testing the Randall-Sundrum Model at a High Energy e − e − Collider
Jul 2000 July 2000
Dilip Kumar Ghosh [email protected]
Sreerup Raychaudhuri [email protected]
Department of Physics
Indian Institute of Technology
208, 016KanpurIndia
Department of Theoretical Physics
Theory Division
Tata Institute of Fundamental Research
Homi Bhabha Road400 005MumbaiIndia
CERN
1211Geneva 23CHSwitzerland
Testing the Randall-Sundrum Model at a High Energy e − e − Collider
Jul 2000 July 2000arXiv:hep-ph/0007354v1 31 TIFR/TH/00-38 IITK/PHY/2000/13
We study the process e − e − → e − e − at a high energy e − e − collider including the effect of graviton exchanges in the warped gravity model of Randall and Sundrum. Discovery limits for gravitons are established and the effects of polarization are discussed.
A great deal of recent interest centres around the physics possibilities of a high energy linear collider with e± beams [1]. Such a machine can be run in e+e− or e−e− collision modes. The principal scattering processes at these machines are, respectively, Bhabha scattering e+e− → e+e− and Møller scattering e−e− → e−e−.
One of the useful features of Møller scattering e−e− → e−e− at a high energy e−e− collider is that it can receive only a limited number of contributions from physics options [2] which go beyond the Standard Model (SM). Among the interesting beyond-Standard-Model (BSM) options are the exchange of multiple gravitons in models with low-scale quantum gravity. The exchange of multiple gravitons, in the t-channel as well as the u-channel, can affect the process e−e− → e−e− in two ways:
• by changing (increasing or decreasing) the total cross-section from the SM value -this being the usual effect of BSM physics;
• by changing the kinematic distributions of the final state electrons -this being the effect of exchanging particles with higher spin.
Two different scenarios of low-scale quantum gravity have attracted a great deal of recent attention. In one of these, due to Arkani-Hamed, Dimopoulos and Dvali (ADD) [3], one envisages a spacetime with 4+d dimensions, where the extra d dimensions are compactified with radii R_c as large as a millimetre. In the ADD scenario, in four dimensions there is a tower of massive Kaluza-Klein modes of the graviton, whose masses are so densely spaced (by as little as 10^{−13} GeV) as to form a quasi-continuum. Though each graviton mode couples to electrons with the feeble strength of Newtonian gravity, the collective effect of all the gravitons contributes to interactions of almost electroweak strength [4]. Effects of multiple exchange of gravitons in Møller scattering, within the ADD scenario, have been studied in Ref. [5].
The other popular scenario of low-scale quantum gravity is that due to Randall and Sundrum [6], who write a non-factorizable spacetime metric
ds² = e^{−K R_c φ} η_{µν} dx^µ dx^ν + R_c² dφ²    (1)
involving one extra dimension φ compactified with a radius R c , which is assumed to be marginally greater than the Planck length 10 −33 cm, and an extra mass scale K, which is related to the Planck scale M 3 . Such a 'warped' geometry is motivated by compactifying the extra dimension on a S 1 /Z 2 orbifold, with two D-branes at the orbifold fixed points, viz., one at φ = 0 ('Planck brane' or 'invisible brane'), and one at φ = π ('TeV brane' or 'visible brane').
The interesting physical consequence of this geometry is that any mass scale on the Planck brane gets scaled by the 'warp factor' e −πKRc on the TeV brane. It now requires KR c ≃ 12 -which is hardly unnatural -to obtain the hierarchy between the Planck scale and the electroweak scale. There still remains a minor problem: that of stabilizing the radius R c (which is marginally smaller than the Planck scale) against quantum fluctuations. A simple extension of the RS construction involving an extra bulk scalar field has been proposed [7] to stabilize R c and this predicts light radion excitations with possible collider signatures [8]. Alternatively, supersymmetry on the branes can also act as a stabilizing effect [9]. Models with SM gauge bosons and fermions in the bulk have also been considered [10], but will not be discussed in this work. The mass spectrum and couplings of the graviton in the RS model have been worked out, in Refs. [11,12]. We do not describe the details of this calculation, but refer the reader to the original literature. It suffices here to note the following points.
1. There is a tower of massive Kaluza-Klein modes of the graviton, with masses
M_n = x_n K e^{−πKR_c} ≡ x_n m_0    (2)
where m_0 = K e^{−πKR_c} sets the scale of graviton masses and is essentially a free parameter of the theory. The x_n are the zeros of the Bessel function J_1(x) of order unity.
2. The massless Kaluza-Klein mode couples to matter with gravitational strength; consequently its effects can be ignored for all practical purposes.
3. Couplings of the massive Kaluza-Klein modes are gravitational, scaled by the warp factor e πKRc and are consequently of electroweak strength.
Feynman rules (to the lowest order) for these modes have been worked out in Refs. [13] and [14] in the context of ADD-like scenarios. Each graviton couples to matter with strength κ = √(16πG_N). All that we need to do to get the corresponding Feynman rules in the RS model is to multiply the coupling constant κ by the warp factor e^{πKR_c} wherever necessary. It is convenient to write
κ e^{πKR_c} = √(32π) c_0/m_0    (3)
where κ = √(16πG_N), using Eqn. (2) and introducing another undetermined parameter c_0 ≡ K/M_P^{(4)}. Thus (c_0, m_0) may conveniently be taken as the free parameters of the theory (see the footnote below). Though c_0 and m_0 are not precisely known, one can make estimates of their magnitude. The RS construction requires K to be at least an order of magnitude less than M_P^{(4)}, which means that the range of interest for c_0 is about 0.01 to 0.1 (the lower value being determined by naturalness considerations). m_0, which is of electroweak scale, may be considered in the range of a few tens of GeV to a few TeV. Eq. (2) tells us that the first massive graviton lies at M_1 = x_1 m_0 ≃ 3.83 m_0. Since no graviton resonances have been seen at LEP-2, running at energies around 200 GeV, it is clear that we should expect m_0 > 50 GeV at least.
In this letter, we examine the effects of multiple graviton exchange in Møller scattering in the RS scenario. We focus on the possibility of observing an excess in e−e− events over the SM prediction, and comment on possible refinements using the kinematic distributions of the final-state electrons. As earlier calculations [12] have shown, in the case when c_0 is large, the resonance structure in Bhabha scattering is lost and there is not much difference, qualitatively speaking, between Bhabha and Møller scattering in the RS model. In other words, Møller scattering is as good a probe of this model as Bhabha scattering in this case. It is on this option that our interest is focussed.
The calculation of the Feynman amplitude involves, for the diagrams with graviton exchange, a sum over graviton propagators of the form
Σ_n 1/(t − M_n²) ≡ −(1/m_0²) Λ(√(−t)/m_0)    (4)
and a similar sum with t ↔ u. Using the properties of the zeros of Bessel functions, the function Λ(x_t) can be written, to a very good approximation, as [15]

Λ(x_t) = (1/(π x_t)) Im ψ(1.2331 + i x_t/π) + 0.32586/(220.345 + 29.6898 x_t² + x_t⁴)    (5)
where ψ(z) is the well-known digamma function. The variation of Λ(x t ) with x t is illustrated in Figure 1. It is immediately obvious that the effective coupling of the gravitons varies according to the scattering angle, except in the case when √ −t ≪ m 0 , i.e. x t → 0. This is a feature quite different from that observed in the related ADD model, where it is possible to take a limit in which a similarly-defined λ(x t ) is either constant or a slowly-varying function. This is also a feature which can potentially change the angular distribution of the final-state electrons.
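To make the quantities above concrete, the following Python sketch evaluates Λ(x_t) both from the approximation (5), using mpmath for the digamma function at complex argument, and from a direct truncation of the sum in eq. (4) over the zeros x_n of J_1 obtained from scipy. The two evaluations should agree closely; the truncation length is an arbitrary choice.

import numpy as np
from scipy.special import jn_zeros
from mpmath import psi, mpc

def Lambda_closed(x_t):
    # Approximation (5): Im psi(1.2331 + i x_t/pi)/(pi x_t) + 0.32586/(220.345 + 29.6898 x_t^2 + x_t^4).
    z = mpc(1.2331, x_t / np.pi)
    return float(psi(0, z).imag) / (np.pi * x_t) + 0.32586 / (220.345 + 29.6898 * x_t**2 + x_t**4)

def Lambda_sum(x_t, n_modes=2000):
    # Direct truncation of eq. (4): with spacelike t = -x_t^2 m_0^2 and M_n = x_n m_0,
    # the combination -m_0^2 * sum_n 1/(t - M_n^2) becomes sum_n 1/(x_t^2 + x_n^2).
    xn = jn_zeros(1, n_modes)       # zeros of the Bessel function J_1
    return float(np.sum(1.0 / (x_t**2 + xn**2)))

for x in (0.5, 1.0, 3.0, 10.0):
    print(f"x_t = {x:5.1f}:  eq. (5) gives {Lambda_closed(x):.5f},  truncated sum gives {Lambda_sum(x):.5f}")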
There are six Feynman diagrams corresponding to Møller scattering
e−(p_1, λ_1) + e−(p_2, λ_2) → e−(p_3, λ_3) + e−(p_4, λ_4)    (6)
including the Standard Model as well as graviton-exchange diagrams. Evaluation of these, using the Feynman rules for the RS model, and summing over the final-state helicities λ_3, λ_4, is straightforward and leads to a squared matrix element |M(λ_1, λ_2)|², whose explicit form is not given here in the interests of brevity. If the initial-state electrons have a left-handed longitudinal polarization P, the differential cross-section is obtained from the helicity-weighted combination

(1 − P)² |M(+, +)|² + (1 + P)² |M(−, −)|² + (1 − P²) [ |M(+, −)|² + |M(−, +)|² ],

assuming that both the beams are identically polarized. The importance of the polarization factor P is considerable, since it can be used, among other things, to enhance or decrease the SM contribution to the cross-section. In fact, polarization studies form an important part of the physics program at a linear collider [16].
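As an illustration of how the polarization enters, the sketch below combines the four squared helicity amplitudes with the helicity-fraction weights implied by a common left-handed polarization P of both beams. The overall factor 1/4 is our own normalization convention for the helicity average and is not taken from the text; the relative weights are the ones shown above.

def polarized_msq(M2, P):
    # M2: squared helicity amplitudes keyed by ('+','+'), ('+','-'), ('-','+'), ('-','-').
    # P: common left-handed longitudinal polarization of both electron beams, so that the
    # helicity fractions are f(-) = (1 + P)/2 and f(+) = (1 - P)/2; the 1/4 is (1/2)^2 from these fractions.
    return 0.25 * ((1 - P)**2 * M2[('+', '+')]
                   + (1 + P)**2 * M2[('-', '-')]
                   + (1 - P**2) * (M2[('+', '-')] + M2[('-', '+')]))

# With all four squared amplitudes equal, the result is independent of P (the weights sum to one).
amps = {('+', '+'): 1.0, ('+', '-'): 1.0, ('-', '+'): 1.0, ('-', '-'): 1.0}
print(polarized_msq(amps, P=0.0), polarized_msq(amps, P=0.8))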
In order to make a numerical estimate of the cross-section, we have incorporated the calculated cross-section into a Monte Carlo event generator, by means of which we calculate the cross-section for e − e − → e − e − subject to the following kinematic cuts.
• The scattering angle of the electron(s) should not lie within 10 0 of the beam pipe.
• The transverse momentum of the electron(s) should not be less than 10 GeV.
These 'acceptance' cuts are more-or-less basic ones for any process at a high-energy collider with electron and/or positrons. Though further selection cuts will become appropriate when a more detailed analysis is done, it suffices for our analysis, which is no more than a preliminary study, to take the above cuts. We then calculate the cross-section in the SM and in the RS Model (including interference effects) for a fixed polarization P and given input parameters c 0 and m 0 of the RS Model. Our results are given in Fig. 2. In Fig. 2(a), we present the total cross-section for the unpolarized case P = 0 as a function of machine energy for three different values of the RS mass scale m 0 = 150, 250 and 500 GeV. The dashed line represents the SM prediction and this exhibits the expected falling-off with machine energy. For large values of the graviton mass m 0 , this behaviour is preserved, since the graviton contribution is very small anyway. However, when the graviton mass is smaller, the cross-sections show a marked increase with energy, which reflects the well-known behaviour of gravity. Obviously, at energies of 3-4 TeV, the gravitational contribution is huge if the mass scale m 0 is small; however, a discernible difference exists even when m 0 = 500 GeV. Thus, we can expect larger effects -or, conversely, stronger bounds -on the RS Model as the machine energy increases. In Fig. 2(b), we present the variation of the cross-section with the polarization P , at a 1 TeV machine, for the parameter set marked in the inset box. The solid curves correspond to the SM and the RS model predictions, for a fixed set of parameters (c 0 , m 0 ), while the dashed line represents the difference between the two. It is obvious that there is a modest advantage to be gained from polarizing the beams, and there is little difference between the cases when the beam is dominantly left-or righthanded. This is also expected, since graviton exchanges are non-chiral; in fact, the small difference arises from the interference between diagrams with graviton and Zexchange.
In order to estimate the discovery reach of a linear collider, we adopt the following strategy. Discovery limits will be reached if the total experimental cross-section agrees -within the experimental precision -with the SM. Any excess or deficit must be attributed to BSM physics. Thus, for a given energy √ s, a given polarization P and a fixed set of parameters (c 0 , m 0 ), we calculate the total cross-section in the RS model. A corresponding calculation of the SM cross-section, multiplied by the luminosity, would lead to a predicted number of events. We then estimate the errors assuming that the statistical errors are Gaussian and that there are no systematic errors. While this certainly makes our estimates of the discovery limits over-optimistic, we can argue that electron detection efficiencies are generally high enough to allow us to make a reasonable estimate in this approximation. In any case, before more detailed studies of the detector design and systematic effects are undertaken, any estimate of systematic errors must be pure guesswork. We choose, therefore, to neglect such effects. Finally the search reach of the collider is given in terms of 3σ discovery limits. While a 500 GeV or a 1 TeV collider will almost certainly be built, there has been much interest in having a collider which probes the high energy frontier [17]. In particular, it is possible that the CLIC machine at CERN will be able to achieve a centre-of-mass energy as high as 3 TeV. Moreover, the possibility of a muon collider operating at a centre-of-mass energy of 3-4 TeV has also received serious consideration. For these machines, luminosities as high as 10 3 fb −1 per year have also been considered. Gravitational effects in e − e − → e − e − are, of course, identical to those in µ − µ − → µ − µ − . In view of these possibilities, we have explored the discovery reach of a 3 TeV machine for the RS model. Our results are exhibited in Fig. 4. It may be seen that this can easily probe m 0 as high as 1 TeV, which corresponds to gravitons of mass nearly 4 TeV or more. It is worth noting that if graviton masses are pushed up to 5 TeV or more, then, given that the scale K must be roughly an order of magnitude smaller than M (4) P , it follows that the warp factor e −πKRc must be somewhat larger than is possible now. This would either push up the Higgs boson mass to unacceptable values, or require some mechanism to have a smaller mass scale origin for the Higgs boson mass on the Planck brane. This would be a somewhat uncomfortable situation for the RS model, since the original simplicity -and therefore elegance -will be lost.
Finally we comment on the possibility of observing/constraining graviton effects using the angular distribution of the final state electrons. Since this form of BSM physics involves exchange of spin-2 particles, rather than spin-1 particles, as in the SM, one can, in principle, expect a rather different angular distribution for the electrons in the final state. In order to test this prediction, we have made a χ 2 -analysis of the electron angular distribution in the cases when there is graviton exchange and when there is no graviton exchange. It turns out that the difference in the distributions is rather small and confined to the central region. We find that one cannot get better discovery limits by considering the angular distributions than those which can be obtained by simply considering the total cross-section. If indeed an excess or deficit over the SM prediction is found, angular distributions might then become useful in determining the type of BSM physics responsible, e.g. in distinguishing between spin-1 and spin-0 exchanges. However, this would require high statistics and fine resolutions. Accordingly, in this preliminary study, we do not pursue the question of angular distributions any further.
In conclusion, therefore, an e−e− collider would be a useful laboratory to look for graviton exchange mechanisms, since there are very few competing BSM processes. We find that a simple study of the total cross-section for e−e− → e−e−, subject to some minimal acceptance cuts, leads to a prediction of rather optimistic discovery limits. It is more useful to consider the total cross-section than the angular distribution, which is rather similar to that in the SM. Polarization of the beams can improve the search reach by a few percent, irrespective of whether the beams are left- or right-polarized. At a high energy collider, running at 3 or 4 TeV, the search limits can be taken as far as graviton masses of 5 TeV or more, which is more-or-less the frontier as far as the simplest version of the RS Model is concerned.
Figure 1: Illustrating the variation of the effective coupling Λ of RS graviton towers to electrons as a function of x_t = √(−t)/m_0.
Figure 2: Variation of the cross-section for Møller scattering with (a) machine energy and (b) polarization of the electron beams. In (a) the solid curves correspond to the RS Model predictions for m_0 = 150, 250 and 500 GeV, while the dashed line represents the SM contribution. In (b), the solid lines represent the SM and RS model contributions, while the dashed line represents their difference. Other parameters are marked (in the boxes).
Figure 3: Discovery limits as a function of the integrated luminosity for Randall-Sundrum graviton modes at an e−e− collider at centre-of-mass energies of 500 GeV and 1 TeV. Solid (dotted) lines correspond to unpolarized (80% left-polarized) electron beams. The value of c_0 is written alongside the relevant curve. The ordinate is labelled as a function of the mass scale m_0 on the left and the mass of the lightest resonance M_1 on the right.
Fig. 3 shows the search reach for the RS model at linear colliders running at 500 GeV and 1 TeV respectively, as a function of the integrated luminosity, for three different values of the coupling constant c_0 (marked along the curves). It may be seen that a linear collider could easily probe m_0 up to at least 300 GeV, which corresponds to a lightest graviton mass of around 1.3 TeV, if 500 pb⁻¹ of data are collected. A slight improvement is possible with polarized beams, as the dotted lines show. If the energy of the collider is increased to 1 TeV, the reach goes up almost by a factor of 2. It may, then, be possible to discover or exclude graviton resonances of mass 2.2 TeV or more.
Figure 4: Discovery limits as a function of the integrated luminosity for Randall-Sundrum graviton modes at an e−e− collider at a centre-of-mass energy of 3 TeV. The labelling is the same as in Fig. 2.
Footnote: Though we differ from the exact choice of parameters in Ref. [12], a translation is easily made using the formulae c_0 = (1/√(8π)) K/M_P and m_0 = Λ_π K/M_P. It follows that c_0 is roughly an order of magnitude less than K/M_P and m_0 can be one or two orders of magnitude smaller than Λ_π.
Acknowledgements: The authors would like to acknowledge Prasanta Das and Saswati Sarkar for useful discussions, and the Theory Division, CERN for hospitality while this work was being done. DKG would also like to thank F. Boudjema and LAPP, Annecy for hospitality.
[1] E. Accomando et al., Phys. Rept. 299, 1 (1998); H. Murayama and M.E. Peskin, Ann. Rev. Nucl. Part. Sci. 46, 533 (1996), hep-ex/9606003.
[2] See, for example, C.A. Heusch, Nucl. Instrum. Meth. A355, 75 (1995), and references therein.
[3] N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. B429, 263 (1998); I. Antoniadis, N. Arkani-Hamed, S. Dimopoulos and G. Dvali, Phys. Lett. B436, 27 (1998).
[4] For reviews, see, for example, T.G. Rizzo, in the proceedings of the 2nd International Conference: Physics Beyond the Standard Model: Beyond the Desert 99: Accelerator, Nonaccelerator and Space Approaches, Ringberg Castle, Tegernsee, Germany (June 1999), hep-ph/9910255; S. Raychaudhuri, talk delivered at WHEPP-6, Chennai, India (January 2000), to appear in the proceedings.
[5] T.G. Rizzo, Phys. Rev. D59, 115010 (1999).
[6] L. Randall and R. Sundrum, Phys. Rev. Lett. 83, 3370 (1999), and ibid. 83, 4690 (1999).
[7] W.D. Goldberger and M.B. Wise, Phys. Rev. Lett. 83, 4922 (1999).
[8] C. Csaki et al., Santa Cruz preprint SCIPP-99-49 (1999), hep-ph/9911406; W.D. Goldberger and M.B. Wise, Phys. Lett. B475, 275 (2000); U. Mahanta and S. Rakshit, Phys. Lett. B480, 176 (2000); U. Mahanta and A. Datta, Phys. Lett. B483, 196 (2000); G.F. Giudice, R. Rattazzi and J.D. Wells, CERN preprint CERN-TH-2000-051 (Feb 2000), hep-ph/0002178.
[9] R. Altendorfer, J. Bagger and D. Nemeschansky, CIT-USC preprint CIT-USC-00-015 (March 2000), hep-th/0003117.
[10] H. Davoudiasl, J.A. Hewett and T.G. Rizzo, Phys. Lett. B473, 43 (2000), and SLAC preprint SLAC-PUB-8436 (Jun 2000), hep-ph/0006041.
[11] W.D. Goldberger and M.B. Wise, Phys. Rev. D60, 107505 (1999).
[12] H. Davoudiasl, J.A. Hewett and T.G. Rizzo, Phys. Rev. Lett. 84, 2080 (2000).
[13] T. Han, J.D. Lykken and R.-J. Zhang, Phys. Rev. D59, 105006 (1999).
[14] G.F. Giudice, R. Rattazzi and J.D. Wells, Nucl. Phys. B544, 3 (1998).
[15] P. Das, S. Raychaudhuri and S. Sarkar, IIT Kanpur preprint IITK-HEP-00-01 (Feb 2000), hep-ph/0002079, to appear in JHEP.
[16] G. Alexander and I. Cohen, Tel Aviv Univ. preprint TAUP-2628-2000 (June 2000), hep-ex/0006007; Yu.I. Arestov and S.B. Nurushev, in Protvino 1992: Physics at VLEPP, vol. 2, p. 51; A.A. Babich, A.A. Pankov and N. Paver, Phys. Lett. B452, 355 (1999).
[17] J. Ellis, E. Keil and G. Rolandi, CERN preprint CERN-TH-98-33 (1998); D.B. Cline, Nucl. Instrum. Meth. A350, 24 (1994).
| [] |
[
"TOPOLOGICAL QUANTUM CODES FROM SELF-COMPLEMENTARY SELF-DUAL GRAPHS",
"TOPOLOGICAL QUANTUM CODES FROM SELF-COMPLEMENTARY SELF-DUAL GRAPHS"
] | [
"Avaz Naghipour [email protected] ",
"Mohammad Ali Jafarizadeh [email protected] ",
"\nDepartment of Computer Engineering\nDepartment of Applied Mathematics\nFaculty of Mathematical Sciences\nUniversity College of Nabi Akram\n1283 Rah Ahan StreetTabrizNoIran\n",
"\nDepartment of Theoretical Physics and Astrophysics\nFaculty of Physics\nUniversity of Tabriz\n29 Bahman BoulevardTabrizIran\n",
"\nSEDAGHAT SHAHMORAD Department of Applied Mathematics\nFaculty of Mathematical Sciences\nUniversity of Tabriz\n29 Bahman BoulevardTabrizIran\n",
"\nUniversity of Tabriz\n29 Bahman BoulevardTabrizIran\n"
] | [
"Department of Computer Engineering\nDepartment of Applied Mathematics\nFaculty of Mathematical Sciences\nUniversity College of Nabi Akram\n1283 Rah Ahan StreetTabrizNoIran",
"Department of Theoretical Physics and Astrophysics\nFaculty of Physics\nUniversity of Tabriz\n29 Bahman BoulevardTabrizIran",
"SEDAGHAT SHAHMORAD Department of Applied Mathematics\nFaculty of Mathematical Sciences\nUniversity of Tabriz\n29 Bahman BoulevardTabrizIran",
"University of Tabriz\n29 Bahman BoulevardTabrizIran"
] | [] | In this paper we present two new classes of binary quantum codes with minimum distance of at least three, by self-complementary self-dual orientable embeddings of "voltage graphs" and "Paley graphs in the Galois field GF (p r )", where p ∈ P and r ∈ Z + . The parameters of two new classes of quantum codes are [[(2k ′ + 2)(8k ′ + 7), 2(8k ′2 + 7k ′ ), d min ]] and [[(2k ′ + 2)(8k ′ + 9), 2(8k ′2 + 9k ′ + 1), d min ]] respectively, where d min ≥ 3. For these quantum codes, the code rate approaches 1 as k ′ goes to infinity. | 10.1007/s11128-015-1115-9 | [
"https://arxiv.org/pdf/1503.05710v1.pdf"
] | 17,449,798 | 1503.05710 | bd0154421bad6b2622d321e3c2fedcb0d66439b3 |
TOPOLOGICAL QUANTUM CODES FROM SELF-COMPLEMENTARY SELF-DUAL GRAPHS
19 Mar 2015 19 March 2015
Avaz Naghipour [email protected]
Mohammad Ali Jafarizadeh [email protected]
Department of Computer Engineering
Department of Applied Mathematics
Faculty of Mathematical Sciences
University College of Nabi Akram
1283 Rah Ahan StreetTabrizNoIran
Department of Theoretical Physics and Astrophysics
Faculty of Physics
University of Tabriz
29 Bahman BoulevardTabrizIran
SEDAGHAT SHAHMORAD Department of Applied Mathematics
Faculty of Mathematical Sciences
University of Tabriz
29 Bahman BoulevardTabrizIran
University of Tabriz
29 Bahman BoulevardTabrizIran
TOPOLOGICAL QUANTUM CODES FROM SELF-COMPLEMENTARY SELF-DUAL GRAPHS
19 Mar 2015 19 March 2015quantum codesembeddingself-complementaryself-dualvoltage graphPaley graph
In this paper we present two new classes of binary quantum codes with minimum distance of at least three, by self-complementary self-dual orientable embeddings of "voltage graphs" and "Paley graphs in the Galois field GF (p r )", where p ∈ P and r ∈ Z + . The parameters of two new classes of quantum codes are [[(2k ′ + 2)(8k ′ + 7), 2(8k ′2 + 7k ′ ), d min ]] and [[(2k ′ + 2)(8k ′ + 9), 2(8k ′2 + 9k ′ + 1), d min ]] respectively, where d min ≥ 3. For these quantum codes, the code rate approaches 1 as k ′ goes to infinity.
Introduction
Quantum error-correcting codes (QEC) play an important role in the theory of quantum information and computation. A main difficulty in realizing quantum computation is the decoherence of quantum bits due to the interaction between the system and the surrounding environment. QEC provide an efficient way to overcome decoherence. The first quantum code [[9, 1, 3]] was discovered by Shor [1]. Calderbank et al. [2] have introduced a systematic way for constructing QEC from classical error-correcting codes. The problem of constructing toric quantum codes has motivated considerable interest in the literature. This problem was generalized within the context of surface codes [8] and color codes [3]. The most popular toric code is the one first proposed by Kitaev [5]. This code is defined on a square lattice of size m × m on the torus. The parameters of this class of codes are [[n, k, d]] = [[2m², 2, m]]. In a similar way, the authors in [7] have introduced a construction of topological quantum codes in the projective plane RP². They showed that Shor's original 9-qubit repetition code is one of these codes which can be constructed in a planar domain.
Leslie in [6] proposed a new type of sparse CSS quantum error-correcting codes based on the homology of hypermaps defined on an m × m square lattice. The parameters of these hypermap-homology codes are [[(3/2)m², 2, m]]. These codes are more efficient than Kitaev's toric codes. This seemed to suggest that good quantum codes may be constructed by using hypergraphs. But there are other surface codes with better parameters than the [[2m², 2, m]] toric code. There exist surface codes with parameters [[m² + 1, 2, m]], called homological quantum codes. These codes were introduced by Bombin and Martin-Delgado [8].
The authors in [9] presented a new class of toric quantum codes with parameters [[m², 2, m]], where m = 2(l + 1), l ≥ 1. Sarvepalli [10] studied the relation between surface codes and hypermap-homology quantum codes. He showed that a canonical hypermap code is identical to a surface code, while a noncanonical hypermap code can be transformed into a surface code by CNOT gates alone. Li et al. [17] gave a large number of good binary quantum codes of minimum distances five and six by Steane's construction. In [18], good binary quantum stabilizer codes are obtained via graphs of Abelian and non-Abelian group schemes. In [19], Qian presented a new method of constructing quantum codes from cyclic codes over the finite ring F_2 + vF_2.
Our aim in this work is to present two new classes of binary quantum codes with parameters [[(2k′ + 2)(8k′ + 7), 2(8k′² + 7k′), d_min]] and [[(2k′ + 2)(8k′ + 9), 2(8k′² + 9k′ + 1), d_min]] respectively, based on results of Hill on self-complementary self-dual graphs [13]. Binary quantum codes are defined by a pair (H_X, H_Z) of Z_2-matrices with H_X H_Z^T = 0. These codes have parameters [[n, k, d_min]], where k logical qubits are encoded into n physical qubits with minimum distance d_min. A code of minimum distance d_min can correct all errors on up to ⌊(d_min − 1)/2⌋ qubits. The code rate for these quantum codes of length n = (2k′ + 2)(8k′ + 7) and n = (2k′ + 2)(8k′ + 9) is k/n = 2(8k′² + 7k′)/((2k′ + 2)(8k′ + 7)) and k/n = 2(8k′² + 9k′ + 1)/((2k′ + 2)(8k′ + 9)), respectively, and this rate approaches 1 as k′ goes to infinity. The paper is organized as follows. The definitions of simplices, chain complexes and homology groups are recalled in Section 2. In Section 3 we briefly present voltage graphs and their derived graphs. In Section 4, we give a brief outline of self-complementary self-dual graphs. Section 5 is devoted to presenting new classes of binary quantum codes using self-complementary self-dual orientable embeddings of voltage graphs and Paley graphs. The paper ends with a brief conclusion.
Homological algebra
In this section, we review some fundamental notions of homology spaces. For more detailed information about homology spaces, refer to [4], [12].
Simplices. Let m, n ∈ N, m ≥ n. Let moreover the set of points {υ_0, υ_1, ..., υ_n} of R^m be geometrically independent. An n-simplex ∆ is a subset of R^m given by

∆ = { x ∈ R^m | x = Σ_{i=0}^{n} t_i υ_i ; 0 ≤ t_i ≤ 1 ; Σ_{i=0}^{n} t_i = 1 }.    (2.1)
Chain complexes. Let K be a simplicial complex and p a dimension. A p-chain is a formal sum of p-simplices in K. The standard notation for this is
c = Σ_i n_i σ_i, where n_i ∈ Z and σ_i is a p-simplex in K. Let C_p(K) be the set of all p-chains on K. The boundary homomorphism ∂_p : C_p(K) → C_{p−1}(K) is defined on a k-simplex σ = [υ_0, υ_1, ..., υ_k] by

∂_k(σ) = Σ_{j=0}^{k} (−1)^j [υ_0, υ_1, ..., υ_{j−1}, υ_{j+1}, ..., υ_k].    (2.2)
The chain complex is the sequence of chain groups connected by boundary homomorphisms,
··· --∂_{p+2}--> C_{p+1} --∂_{p+1}--> C_p --∂_p--> C_{p−1} --∂_{p−1}--> ···    (2.3)
Cycles and boundaries. We are interested in two subgroups of C p (K), cycle and boundary groups. The p-th cycle group is the kernel of ∂ p : C p (K) −→ C p−1 (K), and denoted as Z p = Z p (K). The p-th boundary group is the image of ∂ p+1 : C p+1 (K) −→ C p (K), and denoted as B p = B p (K).
Definition 2.1 (Homology group, Betti number). The p-th homology group H_p is the p-th cycle group modulo the p-th boundary group, H_p = Z_p/B_p. The p-th Betti number is the rank (i.e. the number of generators) of this group, β_p = rank H_p. So the first homology group H_1 is given as

H_1 = Z_1/B_1.    (2.4)
From algebraic topology, we can see that the group H_1 only depends, up to isomorphism, on the topology of the surface [4]. In fact

H_1 ≃ Z_2^{2g},    (2.5)

where g is the genus of the surface, i.e. the number of "holes" or "handles". We then have

|H_1| = 2^{2g}.    (2.6)
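As a small illustration of these notions, the sketch below computes the first Betti number from the boundary matrices of a cell complex, working over Z_2 (the chain groups above are stated over Z, but the mod-2 version is the one relevant to the codes of Section 5). The toy input is the standard one-vertex, two-edge, one-face cell structure of the torus, for which β_1 = 2 = 2g.

import numpy as np

def rank_gf2(M):
    # Rank of a binary matrix over GF(2) by Gaussian elimination.
    M = np.array(M, dtype=np.uint8) % 2
    rank, rows, cols = 0, M.shape[0], M.shape[1]
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if M[r, c]), None)
        if pivot is None:
            continue
        M[[rank, pivot]] = M[[pivot, rank]]
        for r in range(rows):
            if r != rank and M[r, c]:
                M[r] ^= M[rank]
        rank += 1
    return rank

def betti_1(d1, d2):
    # First Betti number over Z_2: dim ker d1 - rank d2, with d1: C_1 -> C_0 and d2: C_2 -> C_1.
    n_edges = d1.shape[1]
    return (n_edges - rank_gf2(d1)) - rank_gf2(d2)

# Torus with one vertex, two edges a, b and one face glued along a b a^-1 b^-1:
# both edge boundaries vanish, and the face boundary a + b + a + b is 0 mod 2.
d1 = np.array([[0, 0]])
d2 = np.array([[0], [0]])
print(betti_1(d1, d2))   # -> 2, consistent with H_1 = Z_2^{2g} and g = 1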
Voltage graphs and their derived graphs
Let G = (V, E) be a multigraph for which every edge has been assigned a direction, and let Γ be a finite group. A voltage assignment of G in Γ is a function α : E → Γ that labels the arcs of G with elements of Γ. The triple (G, Γ, α) is called an (ordinary) voltage graph. The derived graph (lift, or covering) G′ = (V′, E′) (also denoted G^α) is defined as follows:
i) V′ = V × Γ;
ii) if e = (a, b) ∈ E, where a, b ∈ V, and α(e) = v for some v ∈ Γ, then (e, u) = e_u = (a_u, b_{uv}) ∈ E′, where a_u, b_{uv} ∈ V′, for all u ∈ Γ.
Definition 3.1. Let Z_2^t denote the t-fold direct product Z_2 × Z_2 × ··· × Z_2. A binary vector has even weight if it has an even number of 1's and has odd weight otherwise. An ε-vector is a vector in Z_2^{t−1} with even weight. A σ-vector is a vector in Z_2^{t−1} with odd weight. We label the ε-vectors so that ε_1 < ε_2 < ··· < ε_{2^{t−2}}. Similarly, label the σ-vectors so that σ_1 < σ_2 < ··· < σ_{2^{t−2}}.
Definition 3.2. A 1ε-vector is a vector in Z_2^t where the first entry is a one and the remainder of the vector is an ε-vector. The 1σ-, 0ε- and 0σ-vectors can be defined in a similar fashion. A 1ε-edge is an edge with a 1ε-vector as a voltage assignment. The 1σ-, 0ε- and 0σ-edges can be defined in a similar fashion. For example, when t = 3, we have the following table:
0ε_1 = 000 = 0    0σ_1 = 001 = 1
0ε_2 = 011 = 3    0σ_2 = 010 = 2
1ε_1 = 100 = 4    1σ_1 = 101 = 5
1ε_2 = 111 = 7    1σ_2 = 110 = 6
Definition 3.3. A link is an edge which is incident with 2 different vertices. A loop is an edge which has two incidences with the same vertex. A half edge is an edge together with one of its incident vertices.
Definition 3.4. Let t ≥ 3. Let H_t be a voltage graph defined as follows over the group (Z_2^t, ⊕): H_t has two vertices, u and v. There are 2^{t−1} links between u and v, with voltage assignments 0σ_1, ..., 0σ_{2^{t−2}} and 1ε_1, ..., 1ε_{2^{t−2}} (equivalently, all possible vectors in Z_2^t with odd weight). There are 2^{t−1} half edges about v, with voltage assignments 0σ_1, ..., 0σ_{2^{t−2}} and 1σ_1, ..., 1σ_{2^{t−2}}. Similarly, there are 2^{t−1} − 1 half edges about u, with voltage assignments 0ε_2, 0ε_3, ..., 0ε_{2^{t−2}} and 1ε_1, ..., 1ε_{2^{t−2}}.
Self-complementary self-dual graphs
Theorem 4.1. If G is a self-complementary graph on m vertices, then m ≡ 0 or 1 (mod 4).
Proof. See [14].
Theorem 4.2. If G is a self-complementary self-dual graph on m vertices with a self-dual embedding on an orientable surface of genus g, then m ≡ 0 or 1 (mod 8). In particular, if m = 8 + 8k ′ , then g = 8(k ′2 ) + 7k ′ , and if m = 9 + 8k ′ , then g = 8(k ′2 ) + 9k ′ + 1.
Proof. See [13].
Quantum codes from graphs on surfaces
The idea of constructing CSS (Calderbank-Shor-Steane) codes from graphs embedded on surfaces has been discussed in a number of papers; see e.g. [11] for detailed descriptions. Let X be a compact, connected, oriented surface (i.e. a 2-manifold) with genus g. A tiling of X is defined to be a cellular embedding of an undirected (simple) graph G = (V, E) in the surface. This embedding defines a set of faces F. Each face is described by the set of edges on its boundary. This tiling of the surface is denoted M = (V, E, F). The dual graph of G is the graph G* = (V*, E*) such that:
i) One vertex of G * inside each face of G,
ii) For each edge e of G there is an edge e * of G * between the two vertices of G * corresponding to the two faces of G adjacent to e.
It can easily be seen that there is a bijection between the edges of G and the edges of G*.
There is an interesting relationship between the number of elements of a lattice embedded in a surface and its genus. The Euler characteristic of X is defined as its number of vertices (|V |) minus its number of edges (|E|) plus its number of faces (|F |), i.e.,
χ = |V | − |E| + |F |. (5.1)
For closed orientable surfaces we have
χ = 2(1 − g). (5.2)
The surface code associated with a tiling M = (V, E, F) is the CSS code defined by the matrices H_X and H_Z, where H_X ∈ M_{|V|,|E|}(Z_2) is the vertex-edge incidence matrix of the tiling and H_Z ∈ M_{|F|,|E|}(Z_2) is the face-edge incidence matrix of the tiling. Therefore, from (X, G) a CSS code with parameters [[n, k, d]] is constructed, where n is the number of edges of G, k = 2g (by (2.6)), and d is the length of the shortest non-boundary cycle in G or G*. In this work, the minimum distance of the quantum codes is obtained from a parity check matrix H (or generator matrix). For detailed information on how to compute the minimum distance, we refer the reader to [15].
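As an illustration of this construction (a standard example of my own, not taken from the text), the following Python sketch builds H_X and H_Z for the 2 × 2 square tiling of the torus (the smallest toric code, genus g = 1, so χ = 0), checks the CSS condition H_X H_Z^T = 0 over Z_2, and verifies k = n − rank(H_X) − rank(H_Z) = 2g together with Eqs. (5.1)-(5.2). The lattice labelling and the GF(2) rank routine are my own choices.

```python
import numpy as np
from itertools import product

L = 2                                            # 2 x 2 torus: |V| = 4, |E| = 8, |F| = 4
verts = {v: a for a, v in enumerate(product(range(L), repeat=2))}
edges = [(d, i, j) for d in "hv" for i in range(L) for j in range(L)]
eidx = {e: a for a, e in enumerate(edges)}
nV, nE, nF = len(verts), len(edges), L * L

HX = np.zeros((nV, nE), dtype=int)               # vertex-edge incidence matrix
HZ = np.zeros((nF, nE), dtype=int)               # face-edge incidence matrix
for (d, i, j), a in eidx.items():
    u, w = (i, j), ((i, (j + 1) % L) if d == "h" else ((i + 1) % L, j))
    HX[verts[u], a] = HX[verts[w], a] = 1
for i in range(L):
    for j in range(L):                           # plaquette (i, j) is bounded by four edges
        for e in (("h", i, j), ("h", (i + 1) % L, j), ("v", i, j), ("v", i, (j + 1) % L)):
            HZ[i * L + j, eidx[e]] = 1

def rank_gf2(M):
    M, r = M.copy() % 2, 0
    for c in range(M.shape[1]):
        rows = [i for i in range(r, M.shape[0]) if M[i, c]]
        if not rows:
            continue
        M[[r, rows[0]]] = M[[rows[0], r]]
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]
        r += 1
    return r

chi = nV - nE + nF                               # Euler characteristic, Eq. (5.1)
g = 1 - chi // 2                                 # Eq. (5.2): chi = 2(1 - g)
assert not np.any(HX @ HZ.T % 2)                 # CSS condition over Z_2
k = nE - rank_gf2(HX) - rank_gf2(HZ)             # number of logical qubits
print(f"chi = {chi}, genus = {g}, n = {nE}, k = {k} (= 2g = {2 * g})")
```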
New class of [[(2k′ + 2)(8k′ + 7), 2(8k′² + 7k′), d_min]] binary quantum codes from embeddings of voltage graphs
Our aim in this subsection is to construct a new class of binary quantum codes using self-complementary self-dual orientable embeddings of voltage graphs. Let G_t be the lift of the voltage graph H_t defined over the group (Z_2^t, ⊕). Since |V(G_t)| = |V(H_t)| × |Z_2^t| = 2 × 2^t, for t = 3 we have m = |V(G_t)| = 2 × 2^3 = 2^4 ≡ 0 (mod 8). On the other hand, since by the Theorems in Section 4, |E(G)| = m(m − 1)/4 and m = 8 + 8 × 1, we obtain |E(G)| = 60 and g = 15. From Definition 3.4 we obtain the following adjacency matrix for t = 3:

A = [ IXX + XII + XXX            XII + IXI + IIX + XXX ]
    [ XII + IXI + IIX + XXX      IIX + IXI + XIX + XXI ],

where I is the 2 × 2 identity matrix and X is the Pauli matrix X. We will sometimes use a notation in which the tensor signs are omitted; for example, IXX is shorthand for I ⊗ X ⊗ X. After finding the vertex-edge incidence matrix H_X using the above adjacency matrix, and the face-edge incidence matrix H_Z by Gaussian elimination and the standard form of the parity check matrix in [15], one easily sees that H_X H_Z^T = 0 and d_min = 3. Therefore, the code with parameters [[60, 30, 3]] is constructed.
In general, the adjacency matrix A = (a_ij), of size 2^{t+1} × 2^{t+1}, of the derived voltage graph of Definition 3.4 is

A = [ B   C ]
    [ C   D ],

where

B = (1/2)(I + X) ⊗ { (I + X)^{⊗(t−1)} + (I − X)^{⊗(t−1)} } − I^{⊗t};
C = (1/2) { (I + X)^{⊗t} − (I − X)^{⊗t} };
D = (1/2)(I + X) ⊗ { (I + X)^{⊗(t−1)} − (I − X)^{⊗(t−1)} }.

Here M^{⊗s} denotes the s-fold tensor product M ⊗ · · · ⊗ M.
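The block formula can be implemented directly. The sketch below (mine, using numpy) builds A for t = 3 exactly as printed and cross-checks it against the explicit Pauli-string blocks quoted in Section 5.1; the helper names are my own. The edge count A.sum()//2 reproduces |E(G_3)| = 60.

```python
import numpy as np
from functools import reduce

I = np.eye(2, dtype=int)
X = np.array([[0, 1], [1, 0]], dtype=int)

def tp(*ms):
    """Kronecker (tensor) product of 2 x 2 matrices, left to right."""
    return reduce(np.kron, ms)

def tpow(M, s):
    return tp(*([M] * s))

def derived_adjacency(t):
    B = tp(I + X, tpow(I + X, t - 1) + tpow(I - X, t - 1)) // 2 - tpow(I, t)
    C = (tpow(I + X, t) - tpow(I - X, t)) // 2
    D = tp(I + X, tpow(I + X, t - 1) - tpow(I - X, t - 1)) // 2
    return np.block([[B, C], [C, D]]), B, C, D

A, B, C, D = derived_adjacency(3)

# Cross-check against the explicit t = 3 Pauli-string blocks of Section 5.1.
assert np.array_equal(B, tp(I, X, X) + tp(X, I, I) + tp(X, X, X))
assert np.array_equal(C, tp(X, I, I) + tp(I, X, I) + tp(I, I, X) + tp(X, X, X))
assert np.array_equal(D, tp(I, I, X) + tp(I, X, I) + tp(X, I, X) + tp(X, X, I))
assert np.array_equal(A, A.T)                       # the derived graph is undirected
print("vertices:", A.shape[0], " |E(G_3)| =", A.sum() // 2)   # 16 vertices, 60 edges
```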
By constructing the matrix H_X from the above adjacency matrix A = (a_ij) of size 2^{t+1} × 2^{t+1}, and the matrix H_Z by Gaussian elimination and the standard form of the parity check matrix, a code minimum distance of at least three is obtained.
After determining d_min, the Theorems in Section 4 yield the class of codes with parameters [[(2k′ + 2)(8k′ + 7), 2(8k′² + 7k′), d_min]], k′ ≥ 1.
New class of [[(2k′ + 2)(8k′ + 9), 2(8k′² + 9k′ + 1), d_min]] binary quantum codes from embeddings of Paley graphs
The construction of this class is based on self-complementary self-dual orientable embeddings of Paley graphs over the Galois field GF(p^r), where p ∈ P and r ∈ Z^+.

Definition 5.2.1. Let G be a group and S a subset of G\{id}. We say that a graph X is a Cayley graph with connection set S, written X = Cay(G, S), if i) V(X) = G, and ii) E(X) = {{g, sg} | g ∈ G, s ∈ S}.

Definition 5.2.2. Let m = p^r ≡ 1 (mod 8), p ∈ P and r ∈ Z^+. A Paley graph is a Cayley graph P_m = Cay(X_m, ∆_m), where X_m = Z_p × Z_p × · · · × Z_p is the additive group of the Galois field GF(p^r) and ∆_m = {1, x², x⁴, ..., x^{m−3}} for a primitive element x of GF(p^r).

Let G = (V, E) be a self-complementary self-dual graph on m vertices. From Theorem 4.1 we know that |E(G)| = m(m − 1)/4. Also, from Theorem 4.2 and Definition 5.2.2, for a self-dual embedding on an orientable surface of genus g, we know that if m = 9 + 8k′ ≡ 1 (mod 8), then g = 8k′² + 9k′ + 1. Therefore, |E(G)| = (9 + 8k′)(8 + 8k′)/4 = (9 + 8k′)(2 + 2k′). In this self-dual embedding on an orientable surface the code minimum distance is at least three. Thus the code parameters are: minimum distance d_min ≥ 3; code length n = |E(G)| = (9 + 8k′)(2 + 2k′); and k = 2g = 2(8k′² + 9k′ + 1). Consequently, the class of codes with parameters [[(2k′ + 2)(8k′ + 9), 2(8k′² + 9k′ + 1), d_min]], k′ ≥ 0, is obtained.
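As a quick numerical illustration (my own addition), the sketch below tabulates m, the genus g, and the parameters n, k for the two families of Sections 5.1 and 5.2, checking them against Theorems 4.1 and 4.2 and printing the rate k/n.

```python
def voltage_family(kp):          # Section 5.1 family, k' >= 1
    m, g = 8 + 8 * kp, 8 * kp**2 + 7 * kp
    return m, g, (2 * kp + 2) * (8 * kp + 7), 2 * (8 * kp**2 + 7 * kp)

def paley_family(kp):            # Section 5.2 family, k' >= 0
    m, g = 9 + 8 * kp, 8 * kp**2 + 9 * kp + 1
    return m, g, (2 * kp + 2) * (8 * kp + 9), 2 * (8 * kp**2 + 9 * kp + 1)

for name, fam, start in (("voltage", voltage_family, 1), ("Paley", paley_family, 0)):
    for kp in range(start, start + 4):
        m, g, n, k = fam(kp)
        assert n == m * (m - 1) // 4 and k == 2 * g       # Theorems 4.1 and 4.2
        print(f"{name:7s} k'={kp}: m={m:3d}, g={g:4d}, [[{n}, {k}]], k/n = {k / n:.3f}")
```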
Example 5.2.1. Let m = 3² = 9 ≡ 1 (mod 8). Then P_9 = Cay(X_9, ∆_9), where X_9 = Z_3 × Z_3 is the additive group of the Galois field GF(3²) and ∆_9 = {1, x², x⁴, x⁶} for a primitive element x of GF(3²). In fact, ∆_9 is the set of all nonzero squares in GF(3²). Let p(x) ∈ Z_3[x] be an irreducible polynomial of degree 2. Then the elements of Z_3[x]/⟨p(x)⟩ are polynomials of degree 1 or less, and there are 3² = 9 such polynomials. So, in terms of representatives, the elements of GF(9) are {ax + b | a, b ∈ Z_3}. We denote these as:

g_0 = 0x + 0    g_3 = 1x + 0    g_6 = 2x + 0
g_1 = 0x + 1    g_4 = 1x + 1    g_7 = 2x + 1
g_2 = 0x + 2    g_5 = 1x + 2    g_8 = 2x + 2
Based on results of Conrad on finite fields [16], the monic irreducible quadratics in Z_3[x] are x² + 1, x² + x + 2 and x² + 2x + 2. Let p(x) = x² + x + 2. Then g_3 = x is a generator of the nonzero elements of the field Z_3[x]/⟨x² + x + 2⟩:
g_3^1 = x = g_3
g_3^2 = x² = −x − 2 = 2x + 1 = g_7
g_3^3 = x(2x + 1) = 2x² + x = 2(−x − 2) + x = −x − 1 = 2x + 2 = g_8
g_3^4 = x(2x + 2) = 2x² + 2x = 2(−x − 2) + 2x = −4 = 2 = g_2
g_3^5 = x(2) = 2x = g_6
g_3^6 = x(2x) = 2x² = 2(−x − 2) = −2x − 4 = x + 2 = g_5
g_3^7 = x(x + 2) = x² + 2x = −x − 2 + 2x = x − 2 = x + 1 = g_4
g_3^8 = x(x + 1) = x² + x = −x − 2 + x = −2 = 1 = g_1

By the Definitions in Subsection 5.2, we obtain the following adjacency matrix for GF(9):
A = [the 9 × 9 adjacency matrix of P_9 = Cay(X_9, ∆_9); two elements g_i and g_j are adjacent when g_i − g_j ∈ ∆_9]
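Since the explicit 9 × 9 matrix is not reproduced here, the following Python sketch (my own, not part of the paper) rebuilds it: it implements GF(9) as Z_3[x]/⟨x² + x + 2⟩, forms ∆_9 as the set of nonzero squares, assembles the Cayley-graph adjacency matrix, and checks that P_9 is 4-regular with 18 edges and self-complementary (multiplication by the non-square x maps the graph onto its complement).

```python
import numpy as np

p = 3
# Elements a*x + b of GF(9) = Z_3[x]/<x^2 + x + 2> stored as pairs (a, b); x^2 reduces to 2x + 1.
elems = [(a, b) for a in range(p) for b in range(p)]      # g_{3a+b} in the example's labelling
idx = {e: i for i, e in enumerate(elems)}

def sub(u, v):
    return ((u[0] - v[0]) % p, (u[1] - v[1]) % p)

def mul(u, v):
    a1, b1 = u
    a2, b2 = v
    hi, mid, lo = a1 * a2, a1 * b2 + a2 * b1, b1 * b2     # hi*x^2 + mid*x + lo
    return ((mid + 2 * hi) % p, (lo + hi) % p)            # substitute x^2 -> 2x + 1

x = (1, 0)                                                # g_3, a primitive element
delta, g = set(), (0, 1)                                  # Delta_9 = {1, x^2, x^4, x^6}
for _ in range(4):
    delta.add(g)
    g = mul(g, mul(x, x))

A = np.zeros((9, 9), dtype=int)
for u in elems:
    for v in elems:
        if u != v and sub(u, v) in delta:
            A[idx[u], idx[v]] = 1

assert np.array_equal(A, A.T)                             # Delta_9 is closed under negation
assert all(A.sum(axis=1) == 4) and A.sum() // 2 == 18     # 4-regular, |E| = m(m-1)/4 = 18

# Self-complementarity: multiplication by the non-square x maps P_9 onto its complement.
perm = [idx[mul(x, e)] for e in elems]
assert np.array_equal(A[np.ix_(perm, perm)], 1 - A - np.eye(9, dtype=int))
print("P_9 reconstructed: 4-regular, 18 edges, self-complementary.")
```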
After finding the matrices H_X and H_Z using the Theorems in Section 4, the code with parameters [[18, 2, 3]] is obtained. Note that the matrix H_Z is obtained by Gaussian elimination and the standard form of the parity check matrix in [15].
Conclusion
We have presented two new classes of binary quantum codes constructed from self-complementary self-dual orientable embeddings of voltage graphs and Paley graphs. These codes are superior to the quantum codes presented in other references. We point out that the classes [[(2k′ + 2)(8k′ + 7), 2(8k′² + 7k′), d_min(≥ 3)]] and [[(2k′ + 2)(8k′ + 9), 2(8k′² + 9k′ + 1), d_min(≥ 3)]] of quantum codes achieve the best ratio k/n.
[1] P. W. Shor, Scheme for reducing decoherence in quantum memory, Phys. Rev. A, 2 (1995) 2493-2496.
[2] A. Calderbank, E. Rains, P. Shor, and N. Sloane, Quantum error correction via codes over GF(4), IEEE Trans. Inform. Theory, 44 (1998) 1369-1387.
[3] H. Bombin and M. A. Martin-Delgado, Topological quantum distillation, Phys. Rev. Lett., 97 (2006) Article Id. 180501.
[4] M. Nakahara, Geometry, Topology and Physics, Second Edition, IOP Publishing Ltd, UK (2003).
[5] A. Yu. Kitaev, Fault-tolerant quantum computation by anyons, Annals of Physics, 303 (2003) 2-30.
[6] M. Leslie, Hypermap-homology quantum codes, Int. J. Quantum Inform., 12 (2014) p. 1430001.
[7] M. H. Freedman and D. A. Meyer, Projective plane and planar quantum codes, Found. Comput. Math., 1 (2001) 325-332.
[8] H. Bombin and M. A. Martin-Delgado, Homological error correction: Classical and quantum codes, J. Math. Phys., 48 (2007) Article Id. 052105.
[9] C. D. de Albuquerque, R. P. Junior, and E. B. da Silva, On toric quantum codes, Int. J. Pure and Applied Math., 50 (2009) 221-226.
[10] P. Sarvepalli, Relation between surface codes and hypermap-homology quantum codes, arXiv: quant-ph/1312.6344v2 (2014).
[11] G. Zemor, On Cayley graphs, surface codes, and the limits of homological coding for quantum error correction, in Coding and Cryptology, Second International Workshop IWCC 2009, LNCS 5557, Springer (2009) 259-273.
[12] H. Edelsbrunner and J. Harer, Computational topology, Duke University (2008).
[13] A. B. Hill, Self-dual graphs, M.S. Thesis, Waterloo University (2002).
[14] A. T. White, Graphs, Groups, and Surfaces, North-Holland, Amsterdam (1984).
[15] M. A. Nielsen and I. L. Chuang, Quantum computation and quantum information, Cambridge University Press, Cambridge (2000).
[16] K. Conrad, Finite fields, Connecticut University (2013).
[17] R. Li and X. Li, Binary construction of quantum codes of minimum distances five and six, Discrete Math., 308 (2008) 1603-1611.
[18] A. Naghipour, M. A. Jafarizadeh, and S. Shahmorad, Quantum stabilizer codes from Abelian and non-Abelian groups association schemes, arXiv: quant-ph/1407.6228v2 (2014).
[19] J. Qian, Quantum codes from cyclic codes over finite ring F_2 + vF_2, Journal of Information and Computational Science, 10:6 (2013) 1715-1722.
| [] |
[
"Dynamical exchange-correlation potential formalism for spin-1 2 Heisenberg and Hubbard chains: the antiferromagnetic/half-filled case",
"Dynamical exchange-correlation potential formalism for spin-1 2 Heisenberg and Hubbard chains: the antiferromagnetic/half-filled case"
] | [
"Zhen Zhao \nDivision of Mathematical Physics and ETSF\nLund University\nPO Box 118221 00LundSweden\n",
"Claudio Verdozzi \nDivision of Mathematical Physics and ETSF\nLund University\nPO Box 118221 00LundSweden\n",
"Ferdi Aryasetiawan \nDivision of Mathematical Physics and ETSF\nLund University\nPO Box 118221 00LundSweden\n"
] | [
"Division of Mathematical Physics and ETSF\nLund University\nPO Box 118221 00LundSweden",
"Division of Mathematical Physics and ETSF\nLund University\nPO Box 118221 00LundSweden",
"Division of Mathematical Physics and ETSF\nLund University\nPO Box 118221 00LundSweden"
] | [] | The exchange-correlation potential formalism previously introduced and applied to the onedimensional Hubbard model has been extended to spin systems and applied to the case of the one-dimensional antiferromagnetic spin− 1 2 Heisenberg model. Within the spin exchange-correlation potential formulation, a new sum rule for spin-systems is derived. The exchange-correlation potential for the Heisenberg model is extrapolated from exact diagonalization results of small antiferromagnetic Heisenberg clusters. This procedure is also employed to revisit and computationally improve the previous investigation of the exchange-correlation potential of the half-filled Hubbard model, which was based on the exchange-correlation potential of the dimer. Numerical comparisons with exact benchmark calculations for both the Heisenberg and the Hubbard models indicate that, starting from the exchange-correlation potential of a finite cluster, the extrapolation procedure yields a one-particle spectral function with favorable accuracy at a relatively low computational cost. In addition, a comparison between the ground state energies for the one-dimensional Hubbard and Heisenberg models displays how the well known similarity in behavior of the two models at large interactions manifests within the exchange-correlation potential formalism. arXiv:2305.16879v1 [cond-mat.str-el] 26 May 2023 | null | [
"https://export.arxiv.org/pdf/2305.16879v1.pdf"
] | 258,947,757 | 2305.16879 | 3f233068e8dc9a9400231a664d54fb53de7fa62c |
Dynamical exchange-correlation potential formalism for spin-1 2 Heisenberg and Hubbard chains: the antiferromagnetic/half-filled case
(Dated: May 29, 2023)
Zhen Zhao
Division of Mathematical Physics and ETSF
Lund University
PO Box 118221 00LundSweden
Claudio Verdozzi
Division of Mathematical Physics and ETSF
Lund University
PO Box 118221 00LundSweden
Ferdi Aryasetiawan
Division of Mathematical Physics and ETSF
Lund University
PO Box 118221 00LundSweden
The exchange-correlation potential formalism previously introduced and applied to the one-dimensional Hubbard model has been extended to spin systems and applied to the case of the one-dimensional antiferromagnetic spin-1/2 Heisenberg model. Within the spin exchange-correlation potential formulation, a new sum rule for spin systems is derived. The exchange-correlation potential for the Heisenberg model is extrapolated from exact diagonalization results of small antiferromagnetic Heisenberg clusters. This procedure is also employed to revisit and computationally improve the previous investigation of the exchange-correlation potential of the half-filled Hubbard model, which was based on the exchange-correlation potential of the dimer. Numerical comparisons with exact benchmark calculations for both the Heisenberg and the Hubbard models indicate that, starting from the exchange-correlation potential of a finite cluster, the extrapolation procedure yields a one-particle spectral function with favorable accuracy at a relatively low computational cost. In addition, a comparison between the ground state energies for the one-dimensional Hubbard and Heisenberg models displays how the well known similarity in behavior of the two models at large interactions manifests within the exchange-correlation potential formalism.
I. INTRODUCTION
Lattice models, in spite of their apparent simplicity, can be very valuable to reveal important features in low dimensional and highly correlated quantum systems. This certainly is the case of two highly paradigmatic models of condensed matter physics, namely the Hubbard [1] and spin- 1 2 quantum Heisenberg models [2]. For several decades, these two models have been a testground for new theoretical and computational methods [3][4][5]. Notably, they have been used to describe phenomena such as the Mott transition [6], high T c superconductivity [7], quantum spin liquids [8], and quantum entanglement [9,10]. Furthermore, via suitable parameterization from first-principles ground-state calculations, they have also been used to describe the dynamical behavior of real materials, which is experimentally measurable via, e.g., neutron scattering and angle-resolved photoemission spectroscopy. This model approach is very useful when first-principles descriptions are too complicated to perform. (see e.g. [11][12][13][14]).
There are a number of approaches of increasing sophistication being continuously developed to solve the Hubbard and Heisenberg models [15][16][17][18][19][20][21][22]. Exact analytical solutions remain scarce. In one dimension (1D), both models are integrable and exactly solvable via the Bethe ansatz [23,24]. Yet, exact analytic treatments for higher-dimensional or even extended 1D systems (e.g., with next-nearest-neighbor coupling) are in general not available. As it happens, already in 1D not all quantities of interest can be accessed: the Bethe ansatz provides information about the energy dispersion [25,26] but not, for example, the spectral weight, one of the more interesting quantities to consider when studying dynamical correlations, which are usually directly connected to experimental results. On the numerical side, several approaches can be suitably employed for both models, such as exact diagonalization (ED) [27], Quantum Monte Carlo (QMC) [28][29][30], and the Density Matrix Renormalization Group (DMRG) [31][32][33], to name a few. ED gives exact and complete information about the system, but is restricted to small systems, and is thus unable to capture thermodynamic-limit features. DMRG and QMC are applicable to fairly large systems and with high accuracy in 1D [34][35][36], but for higher dimensions the computational cost increases rapidly [37][38][39].
Density Functional Theory (DFT) [40][41][42][43][44], a standard methodology for the first-principles treatment of materials, has also been used to study the two models [45], via direct adaptation and application to the lattice case [46][47][48][49][50], to calculate the model parameters from first principles (e.g., the Hubbard U [51-53] and the Heisenberg J [54]), but also to use model results as input to realistic calculations [55]. Although formally exact, DFT in practice requires approximations for the exchange-correlation energy [56].
The local-density approximation (LDA) and its extension to local-spin-density approximation (LSDA) are widely used in DFT [43,57,58]. L(S)DA successfully describes many materials, but does not perform well in strongly correlated systems, and much effort has been devoted to improving it. With focus on model lattice systems, one way is to use the exact Bethe ansatz solution of the Hubbard model to approximate the correlation energy of an inhomogeneous lattice system [59]. A similar employment of DFT has also been considered for the Heisenberg model [60]. What is noteworthy about these L(S)DA approaches when applied to the Hubbard and Heisenberg models is that the exchange-correlation term has information about the lattice structure and dimensionality of the system. From a different perspective, a formalism based on the dynamical exchange-correlation potential (Vxc) was recently introduced [61]. The formalism is not limited by system size, system dimensionality, or type and range of the interaction, and it is thus useful to describe electronic and magnetic structures in general situations. A main feature of the dynamical Vxc formulation is that the coupling between the dynamical Vxc and the Green function occurs as a direct product in space and time. In contrast, the self-energy, which is traditionally used to calculate the Green function, acts on the Green function as a convolution in space and time.
As a first application of the framework, the lattice oneparticle Green function of the infinite 1D Hubbard chain was determined [61,62] using an extrapolation scheme, starting from the dynamical Vxc of the Hubbard dimer as input. In spite of the simplicity of the approximation used and the low computational load, the scheme provides estimates of the band gap and spectral function in favorable agreement with the results obtained from the Bethe ansatz and the Dynamical Density Matrix Renormalization Group (DDMRG) [63]. One general conclusion from this investigation is that the Vxc formalism provides a simple picture of the one-electron spectrum: for a given momentum, a time-independent term in Vxc together with the kinetic energy term determine the main peak of the spectral function, while a time-dependent term in the form of an exponential couples the Green functions with different momenta and generates incoherent structures or satellite peaks. The energy variable appearing in the exponent can be understood as the main bosonic excitations of the system.
More recently, as a step towards the study of realistic systems, the Vxc of the homogeneous electron gas was calculated within the random-phase approximation [64] with the long-term aim of constructing the Vxc as a universal functional of the ground-state density within the local-density approximation.
II. THIS WORK, AND PLAN OF THE PAPER
In this work, the Vxc framework is extended to spin systems, more specifically to the 1D Heisenberg model. The Vxc-based equation of motion and the sum rule for the spin exchange-correlation hole are derived. Furthermore, the extrapolation scheme employed in the previous work for the 1D Hubbard chain is adopted. The essential idea of the extrapolation scheme is to start from the Vxc of a finite cluster (kernel), which can be calculated accurately using an exact diagonalization method or other methods such as the density-matrix renormalization group. By a suitable extrapolation, this is then used to determine the Green function of the corresponding lattice model. The spin Vxc framework within the extrapolation scheme is applied to calculate the spectral functions of the 1D spin− 1 2 antiferromagnetic (AFM) Heisenberg model in the thermodynamic limit, starting from the spin Vxc of small clusters.
In addition, the 1D Hubbard chain is revisited. In the previous work, the Hubbard dimer was the kernel, which was used to calculate the Green function of the 1D Hubbard chain. In this work, in order to improve the quality of the starting Vxc, the cluster size is enlarged so that additional information arising from interactions beyond nearest-neighbor is captured. The improved Vxc is then used to calculate the Green function of the halffilled 1D Hubbard chain.
To summarize, the main outcomes of the present work are: (i) derivation of the Vxc-based equation of motion and the sum rule of the spin exchange-correlation hole for the 1D Heisenberg model, which can be readily generalized to other spin systems; (ii) calculations of the spinon Green function for the 1D AFM Heisenberg lattice by extrapolating from a finite-cluster spinon Vxc; (iii) improved treatment of the Vxc of the half-filled 1D Hubbard lattice from the previous work by using as kernel a Vxc from a finite cluster; (iv) illustration on how in the Vxc formalism the well known large-U limit (where results from the Hubbard model match those from the AFM Heisenberg one) is recovered.
The plan of the paper is as follows: in Section III, we review briefly the general Vxc formalism. Then, in section III A and III B we extend and apply the approach to the 1D AFM Heisenberg model. Specifically, in section III C and III D, we derive an analytic expression for the spinon Vxc for a four-site chain, and compute the lattice dynamical structure factor by extrapolating the finite cluster Vxc to the infinite case. In section IV, we revisit the 1D Hubbard model and compute the exact Vxc of a finite cluster larger than the dimer, with which we improve previous results in the infinite chain limit. In Section V we discuss Vxc from a comparative perspective, addressing the ground-state energy for both the 1D AFM Heisenberg model and the half-filled 1D Hubbard model in the large U limit. Finally, in Section VI we provide some conclusive remarks and an outlook.
III. GENERAL FORMALISM AND APPLICATION TO THE HEISENBERG CHAIN
For a system with a one-body term and two-body interactions, the Hamiltonian reads

Ĥ = ∫ dr ψ̂†(r) h₀(r) ψ̂(r) + (1/2) ∫ dr₁ dr₂ ψ̂†(r₁) ψ̂†(r₂) v(r₁, r₂) ψ̂(r₂) ψ̂(r₁),   (1)
whereψ(r) is the fermionic field operator and r = (r, σ) is a combined space and spin variable. The time-ordered Green function is defined in the Heisenberg picture as
iG(1, 2) := ⟨Tψ(1)ψ † (2)⟩,(2)
where the argument numbers label the space-time points, 1 := (r₁, t₁), ⟨.⟩ denotes the zero-temperature ground-state expectation value, and T is the time-ordering symbol. The equation of motion in the Vxc formalism is given by [61]

[ i∂_{t₁} − h(r₁) − V^xc(1, 2) ] G(1, 2) = δ(1 − 2),   (3)
where the single-particle term
h(r) = h 0 (r) + V H (r)(4)
contains the Hartree potential
V H (r) = dr ′ v(r, r ′ )⟨ψ † (r ′ )ψ(r ′ )⟩(5)
The Vxc reproduces the interaction term containing a special case of the two-particle Green function , i.e.,
V xc (1, 2)iG(1, 2) = d3v(1, 3)⟨Tψ † (3)ψ(3)ψ(1)ψ † (2)⟩ −V H (1)iG(1, 2),(6)
For fermion field operators and in the presence of Coulomb interactions, the bare exchange part of Vxc can be obtained by considering the lowest order of the first term on the RHS of Eq. (6),
V^x(1, 2) iG(1, 2) = −∫ d3 v(1 − 3) G(1, 3) G(3, 2).   (7)
A. Spin-spin interactions
For systems with spin-spin interactions, an observable of central interest is the spin dynamical structure factor, whose longitudinal and transverse terms are
S zz (k, ω) = 1 N pq dt⟨Ŝ z p (t)Ŝ z q (0)⟩e iωt e −ik(p−q)(8)
and
S +− (k, ω) = 1 N pq dt⟨Ŝ + p (t)Ŝ − q (0)⟩e iωt e −ik(p−q) ,(9)
whereŜ z,+,− p (t) are the spin field operators in the Heisenberg picture.
For the Hubbard model, the spin dynamical structure factor can be obtained by solving a two-particle Green function
G (2) ppqq (t) := ⟨T ĉ † p↑ (t)ĉ p↓ (t) ĉ † q↓ĉ q↑ ⟩,(10)
but the equation of motion of the two-particle Green function contains the three-particle Green function, and is thus generally difficult to solve. Simplification is however recovered for large repulsion, where charge transfer becomes less likely and spin correlations can be obtained by studying the AFM Heisenberg model. It is thus of fundamental and practical interest to discuss the Vxc formalism directly for the Heisenberg model. The isotropic 1D Heisenberg Hamiltonian with nearest-neighbour (NN) exchange coupling is given by

Ĥ_Heis = −J Σ_p [ (1/2)(Ŝ⁺_p Ŝ⁻_{p+1} + h.c.) + Ŝ^z_p Ŝ^z_{p+1} ].   (11)
where for convenience we use an even total number of sites before taking the thermodynamic limit. We define the Green function with spin field operators
iG pq (t) = θ(t)⟨Ŝ + p (t)Ŝ − q (0)⟩ + θ(−t)⟨Ŝ − q (0)Ŝ + p (t)⟩,(12)
in which the Heisenberg J is the analog of the twoparticle interaction in Eq. (1). From the Heisenberg equation of motion for the spin field operators, the equation of motion of the Green function reads
i∂ t G pq (t) + iF pq (t) = 2δ pq δ(t)⟨Ŝ z p ⟩(13)
where the interaction term is
F pq (t) = − l J pl [⟨p, l; q⟩ − ⟨l, p; q⟩].(14)
Here,
⟨l, p; q⟩ := ⟨TŜ z l (t + )Ŝ + p (t)Ŝ − q (0)⟩,(15)
and J pl = J(δ l,p+1 + δ l,p−1 ) for the 1D NN exchange coupling. One can define the spin exchange-correlation potential analogous to the charge case as follows:
V xc pp,qq (t)iG pq (t) := F pq (t) − V H p iG pq (t) − l V F pl iG lq (t),(16)
where the last two terms on the right-hand side , V H and V F , are the analog of the Hartree and exchange potentials, respectively:
V H p (t) := − l J pl ⟨Ŝ z l ⟩,(17)V F pl (t) := J pl ⟨Ŝ z p ⟩.(18)
Consequently, a spin correlator g lpq (t) can be defined such that
⟨l, p; q⟩ = iG pq (t)g lpq (t)⟨Ŝ z l ⟩,(19)
while the spin exchange-correlation hole ρ xc is defined as
ρ xc lpq (t)iG pq (t) = −⟨l, p; q⟩ + ⟨Ŝ z l ⟩iG pq (t). (20)
Denoting the total z-component of the spin by S^z = Σ_l ⟨Ŝ^z_l⟩, and observing that

Σ_l ⟨l, p; q⟩ = [ θ(−t) + S^z ] iG_pq(t),   (21)

we can obtain a sum rule for general spin interactions:

Σ_l ρ^xc_lpq(t) = −Σ_l [ g_lpq(t) − 1 ] ⟨Ŝ^z_l⟩ = −θ(−t).   (22)
The detailed derivation is provided in Appendix A.
In this paper, we consider only the case of AFM coupling, i.e. J < 0 so that S z = 0. For a translationally invariant system, the Hartree and Fock terms (Eq. (17), (18)) vanish, and thus the two-spinon Vxc is then
V xc pp,qq (t)iG pq (t) = −J δ=±1 ⟨p, p + δ; q⟩ − ⟨p + δ, p; q⟩ ,
with the corresponding exchange term given by
F x pq (t) := V x pp,qq (t)iG pq (t) = J G p+1,p (0 − )G pq (t) + G p−1,p (0 − )G pq (t) − G p,p+1 (0 − )G p+1,q (t) − G p,p−1 (0 + )G p−1,q (t) .(23)
B. The infinite chain
We specialize now the description to the case of the homogeneous infinite Heisenberg chain, where ⟨S z p ⟩ ≡ s is site-independent due to translational symmetry. It is convenient to move to the momentum domain, with the Green function and V xc defined via the Fourier transform as
G(k, t) = 1 N pq G pq (t)e −ik(p−q) (24) V xc (k, t) = 1 N 2 pq V xc pp,qq (t)e −ik(p−q) ,(25)
and where the equation of motion for the Green function becomes
i∂ k G(k, t) − k ′ V xc (k − k ′ , t)G(k ′ , t) = 2sδ(t). (26)
In the momentum representation, the exchange term becomes
F x (k, t) = 4J N G(k, t) sin k 2 k ′ G(k ′ , 0 − ) sin(k ′ − k 2 ) ≈ iJλ| sin k|G(k, t),(27)
where we have neglected the k ′ -dependence in the weight represented by G(k ′ , 0 − ), performed the sum over k ′ and subsumed all the constants into λ. To proceed further, the dynamical part of Vxc is separated from the static
exchange term V s (k) = F x (k, t)/iG(k, t), i.e. k ′ V xc (k − k ′ , t)G(k ′ , t) = V s (k)G(k, t) +Z sp (k, t)G(k, t),(28)
to finally arrive at the solution to the equation of motion Eq. (26):
G(k, t) = G(k, 0 + )e −iV s t e −i t 0 Z sp (k,t ′ )dt .(29)
In this expression, the (k-dependent) static exchange term V s determines the main peak of the spectral function, and the dynamical correlation term Z sp (k, t) produces the satellite structure. To attain an explicit solution, it is expedient to solve for a reference Green function by keeping only the static V s term in the equation of motion. This simplified solution contains the lower boundary of the two-spinon energy dispersion [15]
G lb (k, ω) = 1 ω − (−J)λ| sin k| ,(30)
and permits to determine the constant λ in Eq. (27) from the analytic form of the two-spinon spectrum.
C. A four-site spin chain
It is useful to start our discussion of Vxc in finite spinclusters by considering a four-site chain. This is the minimal cluster (with even number of sites) in which Vxc is nonzero. Furthermore, it is easy to obtain a compact analytical solution, that illustrates qualitatively several features present also in larger clusters (in which our solution is numerical in character). To illustrate the features of the four-site Vxc, we choose one of its diagonal elements as a representative case, namely
V^xc_{11,11}(t > 0) = −J × { [(xy + x)(xy + x + 2y)/a₊²] f₁ + (x² + x) f₂ + [(xy − 3x)(xy − 3x + 2y − 4)/a₋²] f₃ } / { [(xy + x + 2y)/a₊]² f₁ + x² f₂ + [(xy − 3x + 2y − 4)/a₋]² f₃ }   (31)
where x = 1 + √ 3, y = 1 + √ 2, a ± = 8 ± 4 √ 2, and f i=1,2,3 are time oscillation factors determined by the difference between the spin excitation energies and the ground state energy. The full details and the explicit forms are given in Appendix B, together with other elements of Vxc. It is useful at this point to move from site orbitals {φ a } to bonding-like ones {ϕ µ }. In analogy to what is done with a Bloch basis in a Hubbard lattice, we set ϕ µ = U µa φ a , φ a = U aµ ϕ µ , in which µ = A, B, C, D, a = 1, 2, 3, 4 and the U matrix is
U = (1/2) [  1  1  1  1
             1 −1  1 −1
             1  1 −1 −1
             1 −1 −1  1 ].   (32)
For the Green functions, the transformation reads
G_{µν} = Σ_{ab} U_{µa} G_{ab} U*_{bν} and G_{ab} = Σ_{µν} U*_{aµ} G_{µν} U_{νb}. One can define

V^xc_{µα,βν}(t) := Σ_{mn} U_{µm} U*_{mα} V^xc_{mm,nn}(t) U_{βn} U*_{nν},   (33)
such that the equation of motion is now
i∂ t G µν (t) − αβ V xc µα,βν (t)G αβ (t) = s µν δ(t),(34)
where s µν = 2 pq U µp ⟨S z p ⟩δ pq U * qν . Comparing the equation of motion for the diagonal terms G µµ ,
[i∂ t − V xc µµ,µµ ]G µµ (t) − γ̸ =µ V xc µγ,γµ G γγ − γ̸ =δ V xc µγ,δµ (t)G γδ (t) = s µµ δ(t)(35)
to the infinite-chain equation of motion Eq. (26), we note the following: i) G µµ maps to G(k); ii) the contribution from fully off-diagonal terms V xc µν,δµ should be negligible; iii) V xc µγ,γµ , which maps to V (k), depends only on the difference of µ, γ; iv) the weights of the higher excitation term f 3 are relatively small.
According to i)-iv), and ignoring the high-energy excitation contributions from f₃, one thus arrives at an approximate expression for the matrix elements of V^xc_{µγ,γµ}:

V^xc_{BB,BB}(t > 0) ≈ −Jα,    V^xc_{BC,CB}(t > 0) ≈ −Jβ exp[iJt/√2],   (36)

whereas V^xc_{BD,DB}(t > 0) ≈ 0 and V^xc_{BA,AB}(t > 0) ≈ 0, in which

α := (xy + x)/(xy + x + 2y) = (2x + 2)/(xy + x + 2),   (37)

β := (1/4) [ a₊/(xy + x + 2y) + a₊/(xy + x + 2) ]² (x² + x − αx²),   (38)

and, as in (31), a₊ = 8 + 4√2. The analytic spinon Vxc in the bonding basis and its main-excitation approximation are shown in Fig. 1. Ignoring the high-excitation factor f₃ reduces the fine-structure details in Vxc. Consequently, V^xc_{BB,BB} simplifies to a constant, whereas V^xc_{BC,CB} oscillates with a single frequency and a constant magnitude, and all other components are negligible.
D. Infinite chain from cluster extrapolation
In Fig. 2, we show ReZ sp (k, t) as obtained from the cluster Vxc discussed in the previous section.
It can be seen that ReZ sp (k, t) oscillates in time around a momentum-dependent term, a behavior that can be understood as due to a single quasiparticle-like main excitation. We therefore propose the following ansatz for Z sp in the infinite-chain case:
Z sp (k, t) = A(k)e −iω sp (k)t + B(k),(39)
where the amplitude A, the spinon excitation energy ω sp , and the shift term B all increase as k increases from 0 to π. The Green function is given by inserting the ansatz into Eq. (29):
G sp (k, t > 0) = G sp (k, 0 + )e −i[V s (k)+B(k)]t e A(k) ω sp (k) (e −iω sp (k)t −1) ,(40)
where the static potential is V s (k) = −Jπ| sin k|/2. Expanding the last term on the RHS of Eq.(40) to first order in e −iω sp (k)t , one gets an approximate Green function
G sp (1) (k, t > 0) = G sp (k, 0 + )e −i[V s (k)+B(k)]t × 1 + A(k) ω sp (k) (e −iω sp (k)t − 1) ,(41)
which in the frequency domain becomes
G^sp_(1)(k, ω) = G^sp(k, 0⁺) { [1 − A(k)/ω^sp(k)] / (ω − [V^s(k) + B(k)]) + [A(k)/ω^sp(k)] / (ω − [V^s(k) + B(k) + ω^sp(k)]) }.   (42)
From Eq. (42), it can be seen that the main peak position of the dynamical structure factor is given by V s + B.
The spinon excitation energy ω sp transfers weight from the main peak to higher-energy region resulting in satellite peaks at V s + B + ω sp . The relative weight between the main peak and the satellite is determined by the amplitude term A and the spinon energy ω sp . Specifically at k = π, the finite cluster solution gives nonzero B, which opens a spin gap that does not exist for the spin− 1 2 lattice. We attribute this to finite-size effects, and thus we adjust B to a smaller value in our extrapolation.
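The extrapolation just described is easy to prototype. The Python sketch below is my own illustration: the functions B(k), the ratio A(k)/ω^sp(k) and the weight G(k, 0⁺) are smooth placeholders rather than the actual twelve-site ED data, V^s(k) = −Jπ|sin k|/2 is the static exchange term quoted above, and ω^sp(k) is given a two-spinon-like form. It evaluates the first-order Green function of Eq. (42) and returns the spectral weight with a small broadening η.

```python
import numpy as np

J, eta = -1.0, 0.1
k = np.linspace(0.0, np.pi, 61)
w = np.linspace(0.0, 4.0, 801)[None, :]

Vs = -J * np.pi / 2 * np.abs(np.sin(k))                        # static exchange V^s(k)
wsp = -J * np.pi * (np.sin(k / 2) - 0.5 * np.abs(np.sin(k)))   # spinon energy, two-spinon-like form
Bk = 0.2 * (1 - np.cos(k)) / 2                                 # placeholder shift B(k)
frac = 0.4 * (1 - np.cos(k)) / 2                               # placeholder A(k)/w_sp(k), grows with k
G0 = np.sin(k / 2)                                             # placeholder weight G(k, 0+)

G1 = ((1 - frac)[:, None] / (w - (Vs + Bk)[:, None] - 1j * eta)
      + frac[:, None] / (w - (Vs + Bk + wsp)[:, None] - 1j * eta))
S = G0[:, None] * np.abs(G1.imag) / np.pi                      # broadened spectral weight, Eq. (42)

i = np.argmin(np.abs(k - np.pi / 2))
print("main peak at k = pi/2 near w =", w[0, S[i].argmax()],
      " expected V^s + B =", Vs[i] + Bk[i])
```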
Based on our discussion so far, we now present the lattice case obtained by extrapolating the cluster Vxc. With Z sp obtained via ED from a twelve-site cluster, we estimate the parameters A, B and G(k, 0 + ) by linear interpolation. The spinon excitation energy is estimated by fitting the cluster ω sp to the two-spinon spectrum boundary,
ω^sp → (−J) π [ sin(k/2) − (1/2)|sin k| ].   (43)
The longitudinal and transverse spin dynamical structure factors are then calculated from the spinon Green function. Since for a spin isotropic system S zz and S +− differ by a constant factor, we only calculate the spectral function of the Green function (Eq. (40)), as shown in Fig. 3 (interestingly, approximating the term exp A ω sp exp(−iω sp t) − 1 by 1 + A ω sp exp(−iω sp t) − 1 gives no marked changes of the properties of G sp (k, ω)). A notable aspect in the behavior on the spin dynamical structure factor is that both the peak locations and the relative weights are close to the inelastic neutron scattering data from 1D compound KCuF 3 [21]. Coming to more specific features, S(k, ω) is very small (i.e., close to zero) at small k, while, for a generic k, most of its spectral weight is concentrated around the main peak and the satellite peak. As k → π, the relative weight between the main peak and the satellite peak increases and the spectrum with broadening factor 0.1 is gapless.
While providing a good description of the dynamical structure factor for the 1D AFM Heisenberg model, the present implementation of the spin Vxc approach is also subject to some limitations. This can be seen by, e.g., comparing the dynamical structure factor from the Vxc approach with the two-spinon lower and upper boundaries (dashed line in Fig. 3). It is apparent that the main peak frequency ω = V s (k) + B is still slightly overestimated. To reduce the finite size effects due to a parameter B(π) originating from a twelve-site cluster, we set B(π) to be the same as the broadening factor, i.e. about 0.2 (see Fig. 2). However, the actual Bethe ansatz value of B(π) should be zero. The overall point is that, to obtain a more accurate dynamical structure factor, and to avoid the finite size effects inherent in the extrapolation from a small cluster, more powerful external methods need to be employed (e.g., the algebraic Bethe ansatz).
These considerations might reveal weaknesses of the extrapolation procedure. However, it must also be clearly stressed that this implementation of the Vxc approach captures most of the qualitative features of the 1D AFM Heisenberg model with a very low computational load and this central attractive feature of the method is expected to also apply in more challenging situations, e.g. in higher dimensions, where rigorous references like the Bethe ansatz are not available.
IV. IMPROVING THE TREATMENT OF THE 1D-HUBBARD LATTICE
Encouraged by the 1D AFM Heisenberg chain results obtained with a Vxc extrapolated from clusters, we now revisit the case of the 1D Hubbard Hamiltonian,
Ĥ_Hub = −∆ Σ_{p,σ} [ ĉ†_{p,σ} ĉ_{p+1,σ} + h.c. ] + U Σ_p n̂_{p↑} n̂_{p↓},   (44)
using also in this case a Vxc obtained from small (Hubbard) clusters. In Eq. (44), p = 1, 2, · · · , N are the site labels (with N → ∞ eventually), σ =↑, ↓ is the spin label, ∆ is the hopping energy and U > 0 is the local repulsion.
In the site basis, the spin-up channel Green function is

G_pq(t) = −iθ(t) ⟨ĉ_{p↑}(t) ĉ†_{q↑}(0)⟩ + iθ(−t) ⟨ĉ†_{q↑}(0) ĉ_{p↑}(t)⟩,   (45)

and the Vxc reads

V^xc_{pp,qq}(t) iG_pq(t) = U ⟨T ĉ†_{p↓}(t) ĉ_{p↓}(t) ĉ_{p↑}(t) ĉ†_{q↑}(0)⟩ − U ρ_{p↓} iG_pq(t),   (46)

where ρ_{p↓} is the spin-down particle density at site p. The exchange part of Vxc fulfils

V^x_{pp,qq}(t) iG_pq(t) = −U G_pp(0⁻) G_pq(t),   (47)
where G pp (0 − ) = i⟨ĉ † p↑ĉ p↑ ⟩ = iρ p↑ ; thus, the exchange part of Vxc of the Hubbard model is static and cancels the Hartree potential at half-filling, in contrast to the Heisenberg model in which the exchange part depends on the momentum. In general, the exchange part is timedependent [64].
Written in the momentum domain, the equation of motion for the Hubbard lattice takes the form
[ i∂_t − ε_k ] G(k, t) − Σ_{k′} V^xc(k − k′, t) G(k′, t) = δ(t),   (48)
where ε k = −2∆ cos k is the kinetic energy. Eq. (48) shows that the interaction term, expressed as the direct product of Vxc and Green function in space-time domain, is a convolution in the momentum domain. It has been shown [62] that the main peak position of the electron (hole) spectral functions can be described with V xc (k = 0), together with the kinetic energy, while V xc (k = π) plays an important role in determining the satellite peaks. One can also write the interaction term as a direct product in momentum domain,
k ′ V xc (k − k ′ , t)G(k ′ , t) = V xc (0, t)G(k, t) + Y (k, t)G(k, t),(49)
which gives explicitly the solution for the Green function:
G(k, t > 0) = G(k, 0 + )e −iε k t e −i t 0 dt ′ V xc (0,t ′ ) × e −i t 0 dt ′ Y (k,t ′ ) , (50a) G(k, t < 0) = G(k, 0 − )e −iε k t e i 0 t dt ′ V xc (0,t ′ ) × e i 0 t dt ′ Y (k,t ′ ) . (50b)
One can then use an N -site cluster with twisted boundary conditions [65] to parameterize G(k, 0 ± ) and thus the generalized Vxc in the momentum domain becomes
Z el (k, t) := V xc (0, t) + Y (k, t).(51)
A. Extrapolation from finite clusters
The Vxc of clusters with 6 and 8 sites and with periodic boundary condition was computed using ED. The Hubbard U was chosen to be 7.74 with ∆ = 1, to allow for comparisons with previous work and the DDMRG results from the literature. In contrast to the dimer case, the cluster Vxc exhibits multiple sharp peaks as a function of time t. Time snapshots of Vxc as a function of k are shown in Fig. 4. For t ≃ 0, we have that V xc (k, t) ≈ V xc (π − k, t), but such behavior is unseen during the time evolution. The particle-hole symmetry leads to V xc (k, −t) = −V xc (k, t), and the increase of cluster size from N = 6 to 8 does not change qualitatively the characteristics of V xc as a function of k.
The dynamical properties of Vxc can be better illustrated through Z el , a generalisation of Vxc in the momentum basis defined in Eq. (51). Due to degeneracy, Z el (−k, t) = Z el (k, t) and, because of particle-hole symmetry, Z el (k, −t) = −Z el (π − k, t). To improve the simulation of Z el , we use a cluster with twisted boundary condition, that provides larger k-point sampling. The real part of Z el (k, t) with twisted boundary condition is shown in Fig. 5: for small k, it oscillates weakly in time (with small amplitude and long period). However, where the bandgap opens (k → π 2 ), the oscillation of ReZ el is more evident. For k → π, ReZ el exhibits sharp peaks at certain times. The peaks can be both positive and negative: mathematically, this means that some of the zeros of the Green function are located where the interaction term (Eq. (46)) has nonzero finite (positive or negative) values. These spiky structures cannot be fitted into a weighted sum of several (but finite in number) oscillations, indicating that a model beyond the single-energy quasiparticle picture is necessary.
Provided with the numerically exact Vxc for N = 6, 8 clusters, we reconsider the approximate scheme proposed in the previous work based on the Hubbard dimer (N = 2) [62]. The dimer admits two k-points (k = 0, π), with the corresponding approximate values for Vxc given by
V^xc(k = 0, t) ≈ αU/2,   (52a)
V^xc(k = π, t) ≈ (αU/2)(1 − α²) e^{−i2∆t}.   (52b)
Here, the constant α depends only on U ∆ (the explicit dependence relation is shown in Appendix C together with a summary of the properties of the Vxc obtained from the Hubbard dimer) and 2∆ in the exponential represents the main excitation energy. In what follows, we use Eqs. (52a) and (52b) to compute the hole part of the spectral function, with the particle part obtainable via the particle-hole symmetry A e (k, ω) = A h (π − k, −ω). When |k| ≤ π 2 , the hole part of the Green function given by the dimer model is
G^h(k, ω) = [ 1 / (ω − ω^h_k − iη) ] [ 1 − V^xc(ω) ],   (53)

V^xc(ω) = (1/N) Σ^{occ}_{k′} V^xc(π, 0) / ( ω − [ε_{k′} − V^xc(0) − 2∆] − iη ),   (54)
where η is a broadening factor. The spectral function of G h has a main peak at ω h k , determined by V xc (k = 0) and by the kinetic energy: ω h k = ε k −V xc (0). The term V xc (ω) gives rise to a continuous satellite region. Its relative weight to the main peak is V xc (π, 0), and its lower/upper boundaries are given by the minimum/maximum occupied state kinetic energy
ω^{h,lower}_k = ε_0 − V^xc(0) − 2∆,   (55a)
ω^{h,upper}_k = ε_{π/2} − V^xc(0) − 2∆.   (55b)
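To make the dimer-based picture concrete, here is a small Python sketch (my own illustration). It uses the dimer expression α = (1 − κ)/(1 + κ) with κ = (1/4)[√((U/∆)² + 16) − U/∆] quoted in Appendix C, and then evaluates Eqs. (52)-(55) for the hole spectral function A^h(k, ω) = (1/π)|Im G^h(k, ω)| at occupied momenta |k| ≤ π/2. The broadening η, the k′-grid, and the factor 1/2 in the occupied-state average are choices of mine.

```python
import numpy as np

U, Delta, eta = 7.74, 1.0, 0.1
kappa = 0.25 * (np.sqrt((U / Delta) ** 2 + 16) - U / Delta)
alpha = (1 - kappa) / (1 + kappa)                 # dimer expression from Appendix C

Vxc0 = alpha * U / 2                              # Eq. (52a)
Vxc_pi = alpha * U / 2 * (1 - alpha**2)           # prefactor V^xc(pi, 0) entering Eq. (54)

w = np.linspace(-10, 2, 1201)
k_occ = np.linspace(-np.pi / 2, np.pi / 2, 200, endpoint=False)   # occupied momenta, half filling
eps = lambda q: -2 * Delta * np.cos(q)

def A_hole(k):
    """Eqs. (53)-(55); A = (1/pi)|Im G| for the time-ordered Green function."""
    # (1/N) sum over occupied k': occupied states cover half the Brillouin zone,
    # hence the factor 1/2 in front of the average (a normalization choice on my part).
    vbar = 0.5 * np.mean(Vxc_pi / (w - (eps(k_occ)[:, None] - Vxc0 - 2 * Delta) - 1j * eta), axis=0)
    Gh = (1 - vbar) / (w - (eps(k) - Vxc0) - 1j * eta)
    return np.abs(Gh.imag) / np.pi

for k in (0.0, np.pi / 4, np.pi / 2):
    A = A_hole(k)
    print(f"k = {k:4.2f}: main peak at w = {w[A.argmax()]:6.2f}, "
          f"expected eps_k - Vxc(0) = {eps(k) - Vxc0:6.2f}")
```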
The dimer model [62] managed to capture the main structure of the hole spectra of the Hubbard lattice, but can be improved in several aspects: The main peak position given by the model is just the kinetic energy ε k = −2∆ cos k plus a constant determined by U , while the true k−dependence of ω h should be more complicated; the upper and lower boundaries of the satellite part given by the model are independent of k, which is also an oversimplification. Rewriting Eq. (50b) in the spirit of a factorization into a main-peak and a satellite term,
G(k, t < 0) = G(k, 0 − )e −i(ε k +Z h,main k )t × e i 0 t dt ′ Z h,sat (k,t ′ ) ,(56)
where Z h,main k + Z h,sat (k, t) = Z el (k, t) for t < 0, one can see that: i) A momentum-dependent static term, Z h,main k , which is not present in the dimer model, together with ε k , determines the main peak; ii) the dispersion of ω h,lower and ω h,upper can be explained by the satellite term Z el,sat (k, t ′ ). Compared with Fig. 5, Z h,main k is seen to be the time-independent part around which Z el (k, t) oscillates; and Z h,sat (k, t) represents a series of excitation energies. The spike-like ReZ(k, t) for k → 0, t < 0 is a consequence of multiple excitation energies and large satellite peaks, while the less oscillatory ReZ(k, t) for k → π, t < 0 explains the lack of strong satellites of the hole spectral functions A h (k → π, ω).
Taking advantage of the physical picture given by the dimer model, we include the correction to the occupied k values by adding a set of momentum-dependent parameters, l 1,2,3 , such that i) α → αl 1 (k), ii) the main excitation determining the satellite boundaries (Eq. (55a), (55b)) becomes 2∆ → 2∆l 2 (k), and the effective kinetic energy in the summation of Eq. (53) becomes ε k ′ → −2∆ cos k ′ l 3 (k). The parameterized dispersion relations of the key frequencies are
ω^h_k = −2∆ cos k − (αU/2) l₁(k),   (57a)
ω^{h,lower}_k = −2∆[ l₃(k) + l₂(k) ] − (αU/2) l₁(k),   (57b)
ω^{h,upper}_k = −2∆ l₂(k) − (αU/2) l₁(k).   (57c)
Thus, the hole part bandwidth for a given momentum, the satellite width, and the band gap respectively are
ω^h_k − ω^{h,lower}_k = 2∆[ l₂(k) + l₃(k) − cos k ],   (58a)
ω^{h,upper}_k − ω^{h,lower}_k = 2∆ l₃(k),   (58b)
E_g = αU l₁(π/2).   (58c)
This means that the main peak location, the bandwidth, and the satellite region width from cluster calculations can be used to determine the parameters l 1,2,3 , which are then used to calculate the lattice spectral functions A(k, ω) for k < π 2 . For k > π 2 , where the dimer model gives zero weight for the hole part spectrum, the cluster results show that the corresponding Vxc can be approximated with a single-energy excitation,
Z el (k > π 2 , t < 0) ≈ A k e −iω el k t + B k ,(59)
where the parameters A, B and ω^el_k are estimated from the cluster result, similarly to the treatment of the spinon Vxc (Eq. (39)). Combining the l_{1,2,3}-involved occupied region and the A, B, ω^el-involved unoccupied region, the hole part spectral function can now be calculated for the whole Brillouin zone. The spectral functions for selected k values are shown in Fig. 6.
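The inversion of Eqs. (57)-(58) for l₁, l₂, l₃ is a one-line exercise per parameter. The short sketch below is mine; the numerical inputs stand in for measured cluster peak positions at one k value and are purely illustrative, not the paper's data.

```python
import numpy as np

U, Delta = 7.74, 1.0
kappa = 0.25 * (np.sqrt((U / Delta) ** 2 + 16) - U / Delta)
alpha = (1 - kappa) / (1 + kappa)

def fit_l123(k, w_main, w_sat_lower, w_sat_upper):
    """Invert Eqs. (57a), (57c) and (58b) for l1(k), l2(k), l3(k) from measured peak positions."""
    l1 = -(w_main + 2 * Delta * np.cos(k)) / (alpha * U / 2)     # from Eq. (57a)
    l2 = -(w_sat_upper + alpha * U / 2 * l1) / (2 * Delta)       # from Eq. (57c)
    l3 = (w_sat_upper - w_sat_lower) / (2 * Delta)               # from Eq. (58b)
    return l1, l2, l3

# Purely illustrative numbers standing in for cluster peak positions at k = pi/4.
l1, l2, l3 = fit_l123(np.pi / 4, w_main=-3.9, w_sat_lower=-7.0, w_sat_upper=-5.2)
print(f"l1 = {l1:.2f}, l2 = {l2:.2f}, l3 = {l3:.2f}")
```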
Compared with the dimer model, the cluster Vxc-based parametrization improves the agreement with DDMRG in several aspects. Specifically, i) the missing weights for unoccupied k points appear when using a cluster Vxc as input. ii) The main peak positions (and thus the bandgap value as well) are more accurate. In fact, the bandgap value from the dimer model, αU, shows a discrepancy with the exact Bethe ansatz value at small U, due to the lack of long-range screening effects. Using a cluster Vxc, however, removes the disagreement.
FIG. 6. Momentum-resolved hole part spectral function A h (k, ω) for U = 7.74, ∆ = 1. For k < π 2 , the parameters l1,2,3 are determined using the peak locations of the eight-site twisted boundary condition cluster spectrum. For k > π 2 , Z el of the eight-site twisted boundary condition cluster is used via Eq. (59) to calculate A h . Top (middle) panel: k-points chosen to compare with DDMRG results, without (with) renormalized weight. Bottom panel: the satellite structure is approximated with two peaks at the satellite region boundaries, in order to get clearer dispersion branches. The k values are π 64 × 0, 1, 2, · · · , 64. The locations of the spinon branch (0 < k < π/2, −3.5 < ω < −2), the holon branches, and the lower boundary of the holon-spinon continuum (π/2 < k < π, ω < −6) are close to the DDMRG result [63]. For the spinon branch, we have ω(k = 0) = −3.25, which differs from the DDMRG result (≈ −3) because the finite cluster gives in general larger band gap. In all calculations, we set the broadening parameter η = 0.1.
iii) Both the boundaries and the relative weight of the satellite structure are better described by the cluster Vxc and its momentum-basis generalization Z^el; iv) the total weight of the hole/electron part cannot be renormalized within the dimer model, because the non-interacting Green function used in the dimer model can only fix the total spectral weight: ∫ dω A^h(k, ω) = θ(k_F − k). With a cluster Vxc, using ⟨ĉ†_k ĉ_k⟩, we can rescale the total spectral weight for each k value.
Yet, the main peaks ω^h in Fig. 6 are in general lower than those from DDMRG. This can be understood as due to the band gap narrowing upon increasing the number of sites (the eight-site cluster we used leads to an overestimation of the gap and thus of the main peak position).
We conclude our discussion of the Hubbard chain, by considering its spectral functions in real space that we obtain starting from those in the momentum domain:
A(r, ω) = (1/2π) ∫ dk A(k, ω) e^{ikr},   (60)
where r = 0, 1, 2, · · · is in units of lattice parameter. A(r, ω) describes the correlation strength between two space points separated by r, at a given energy ω. The local case A(r = 0, ω) corresponds to the density of states. Results for A(r, ω) with a eight-site kernel are shown in Fig. 7, whilst those from a six-site kernel with different U and r are reported in the appendix. The cluster Vxc result for A(r = 0, ω) shows better agreement with DDMRG than the dimer model. Also, the NN spectral weight at positive energy is predominantly negative, and for r ≥ 2, A(r, ω) exhibits nodal structures. Concerning the role of electronic correlations, spatial spectral functions with different U values become qualitatively alike at large repulsion (U > 4), but the band gap value keeps increasing with U . Finally, spectral functions calculated with eight-site and six-site kernels are qualitatively similar (see the appendix for the six-site case), with similarities in the overall shape and in the number of nodes. However, the estimated value of the band gap improves on increasing the cluster size.
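As a small companion to Eq. (60) (again a sketch of my own), the real-space spectral function can be obtained from any momentum-resolved A(k, ω) sampled on a uniform k-grid by a discretized version of the Fourier integral; the toy single-band A(k, ω) below is only a placeholder for the Vxc-based spectra.

```python
import numpy as np

Nk, eta = 64, 0.1
k = (np.arange(Nk) + 0.5) * np.pi / Nk                 # uniform sampling of (0, pi)
w = np.linspace(-8, 2, 801)

# Placeholder A(k, w): one Lorentzian band centred at eps_k - 2.5 (illustration only).
Akw = eta / np.pi / ((w[None, :] + 2 * np.cos(k)[:, None] + 2.5) ** 2 + eta**2)

def A_r(r):
    # Discretized Eq. (60); A(k, w) = A(-k, w) is assumed, so it reduces to a cosine transform.
    return np.cos(k * r) @ Akw / Nk

for r in range(3):
    print(f"r = {r}: integrated spectral weight = {np.trapz(A_r(r), w):+.3f}")
```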
V. VXC FROM HUBBARD AND HEISENBERG MODELS: A COMPARATIVE DISCUSSION
It is well known that the 1D spin− 1 2 AFM Heisenberg model becomes equivalent to the 1D half-filled Hubbard model in the large U regime [67,68]. After having discussed Vxc in the two models separately, it can be useful to look at both models together using as perspective the behavior of Vxc in such limit. Meanwhile, Z el and Z sp do not show a direct asymptotic behavior Z el U →∞ = Z sp , because they are coupled to the single-particle Green function (Eq. (45)) and single spin-flipping Green function (Eq. (12)) respectively. For the Hubbard model, the term corresponding to Z sp is coupled to the twoparticle Green function ⟨T ĉ † p↑ (t)ĉ p↓ (t) ĉ † q↓ĉ q↑ ⟩. Equation of motion of higher order Green function needs to be solved for the Hubbard model to calculate the 'higher order Vxc' that is comparable with the spinon Vxc under large repulsion. This means the Vxc formalism for Heisenberg model, having a similar sum rule (Eq. (22)), reduces the difficulty in deriving the equation of motion and improves the interpretability via the quasiparticle picture.
Instead of solving the higher order Green function, we consider a more modest task of comparing the lattice ground state energies for the two models. In the large U limit [68],
lim_{U→∞} E^Hub_0 / N = (1/U) ( 4 E^Heis_0 / N − 1 ),   (61)
where E^Hub_0 is the ground state energy of an N-site Hubbard ring with ∆ = 1, and E^Heis_0 is the ground state energy of an N-site AFM Heisenberg ring with J = −1. Both energies can be calculated from the Green function via

E^Heis_0 / N = (3/2) ⟨Ŝ⁺₁(t = 0⁺) Ŝ⁻₂⟩,   (62)

and

E^Hub_0 / N = −[ 2⟨ĉ†_{1↑} ĉ_{2↑}(t = 0⁻)⟩ − i∂_t ⟨ĉ†_{1↑} ĉ_{1↑}(0⁻)⟩ ].   (63)

In the frequency domain,

E^Heis_0 / N = (3i/4π) ∫ G^sp(r = 1, ω) dω,   (64)

E^Hub_0 / N = (i/2π) ∫ [ 2G^el(r = 1, ω) − ω G^el(r = 0, ω) ] dω.   (65)
To perform a comparison, we compute the ground state energy of the Hubbard lattice in two ways: i) by directly using the electron Vxc at different U values, and ii) by calculating E Heis 0 for a J = −1 Heisenberg lattice with the spinon Vxc, to be then used in the effective E Hub 0 of Eq. (61). The differences between the results from these two prescriptions and the exact Bethe ansatz solution are shown in Fig. 8. The E 0 results from ED for a six-site ring are also shown as a reference.
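Eq. (61) can also be checked directly by exact diagonalization. The sketch below is my own; it uses four-site rings rather than the six-site rings of the paper's ED reference (to keep the dense matrices tiny) and compares E^Hub_0/N with (1/U)(4E^Heis_0/N − 1): the two values approach each other as U grows.

```python
import numpy as np

N = 4  # ring size; the paper's ED reference uses six sites, four keeps this sketch tiny

# --- AFM Heisenberg ring, Eq. (11) with J = -1: H = sum_p [ (S+_p S-_{p+1} + h.c.)/2 + Sz_p Sz_{p+1} ]
sp = np.array([[0., 1.], [0., 0.]]); sm = sp.T; sz = np.diag([0.5, -0.5]); id2 = np.eye(2)

def site_op(op, p):
    mats = [id2] * N
    mats[p] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

H_heis = sum(0.5 * (site_op(sp, p) @ site_op(sm, (p + 1) % N)
                    + site_op(sm, p) @ site_op(sp, (p + 1) % N))
             + site_op(sz, p) @ site_op(sz, (p + 1) % N) for p in range(N))
E_heis = np.linalg.eigvalsh(H_heis)[0] / N          # -0.5 for a four-site ring

# --- Half-filled Hubbard ring, Eq. (44) with Delta = 1; fermionic modes 0..N-1 = up, N..2N-1 = down
def apply_cdag_c(state, i, j):
    """Return (new_state, sign) for c_i^dag c_j |state>, or None if the result vanishes."""
    if not (state >> j) & 1:
        return None
    sign = (-1) ** bin(state & ((1 << j) - 1)).count("1")
    state ^= 1 << j
    if (state >> i) & 1:
        return None
    sign *= (-1) ** bin(state & ((1 << i) - 1)).count("1")
    return state | (1 << i), sign

def hubbard_E0_per_site(U, Delta=1.0):
    mask = (1 << N) - 1
    basis = [s for s in range(1 << (2 * N))
             if bin(s & mask).count("1") == N // 2 and bin(s >> N).count("1") == N // 2]
    index = {s: a for a, s in enumerate(basis)}
    H = np.zeros((len(basis), len(basis)))
    for s in basis:
        a = index[s]
        H[a, a] = U * bin(s & mask & (s >> N)).count("1")           # double occupancies
        for p in range(N):
            q = (p + 1) % N
            for off in (0, N):                                      # spin-up / spin-down
                for i, j in ((p + off, q + off), (q + off, p + off)):
                    hop = apply_cdag_c(s, i, j)
                    if hop is not None:
                        s2, sgn = hop
                        H[index[s2], a] += -Delta * sgn
    return np.linalg.eigvalsh(H)[0] / N

for U in (10.0, 20.0, 40.0):
    print(f"U = {U:5.1f}:  E_Hub/N = {hubbard_E0_per_site(U):+.5f}   "
          f"(1/U)(4 E_Heis/N - 1) = {(4 * E_heis - 1) / U:+.5f}")
```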
For U < 10, the repulsion strength is not large enough for Eq. (61) to be valid, leading to a discrepancy between the total energies for the two lattice models. However, in such region, E Hub,Vxc 0 (red dots) is already close to the exact Bethe ansatz value, and the difference gets smaller on increasing U . For U > 30, the ED results for the two models converge, meaning that the large repulsion limit is reached. The Vxc-based energies E 0 for the two models also converge to the exact Bethe ansatz value.
However, the effective Vxc-based Heisenberg result is rather accurate, with absolute error less than 10 −4 : this can be understood as a result of i) using the two-spinon upper and lower boundaries in the extrapolation, and ii) adjusting the B parameter from the cluster within the zero spin gap picture. In contrast, the Vxc-based Hubbard result is extrapolated without a good reference, and is more affected by the finite size effects. Thus, the difference with the Bethe ansatz result is larger.
As an overall remark, the comparative analysis of this Section shows the versatility of the Vxc approach across different lattice models, with results that are consistent with trends and benchmarks from other methods.
VI. CONCLUSION AND OUTLOOK
We have presented a novel exchange-correlation potential (Vxc) formalism for the one-dimensional antiferromagnetic Heisenberg model, and derived a new general sum rule for spin systems. Our spin formulation is a tailored extension of a previously introduced general framework for many-body systems that includes both charge and spin degrees of freedom. Together with the new formulation, we have also devised a procedure to obtain, from a Vxc extracted from small finite clusters, an extrapolation to the thermodynamic limit. This procedure to access Vxc, originally devised for spin systems, has also permitted us to revisit and improve the treatment of the half-filled one-dimensional Hubbard model, a system already considered in earlier work within the Vxc approach. For both the 1D AFM Heisenberg model and the 1D Hubbard model, the static exchange term of Vxc was derived and shown to exhibit model-distinctive properties. For the 1D AFM Heisenberg model, the static exchange term corresponds to the lower boundary of the two-spinon spectrum. For the Hubbard model, the local U leads to a constant V^x, which cancels the Hartree potential.
For both models, the spectral functions calculated within the Vxc approach show favourable agreement with DDMRG and with experimental results. Furthermore, a single-energy quasiparticle picture can be used to explain the dynamics of the spinon Vxc for the 1D AFM Heisenberg model and the unoccupied/occupied part of the hole/electron Vxc for the 1D Hubbard model. Finally, we showed how the Vxc formalism captures the equivalence of the two models in the large U limit, by a comparative analysis via the lattice ground state energies.
In conclusion, our results indicate that the Vxc formalism provides an alternative way of calculating the single-particle Green function which is (computationally) cost-beneficial but also physically well defined. Looking forward, we plan to apply such a dimensionality- and interaction-insensitive scheme to models of increasing complication and higher dimensionality. At the same time, we intend to explore ways to devise Vxc approximations with the goal of improving over the local-(spin)density approximation of density functional theory, in a progression toward a first-principles implementation for real materials.
FIG. 1 .
1Real part of Vxc of four-site spin-1 2 AFM Heisenberg chain, in the unit of |J|. Top panel: exact. Bottom panel: results when the high-excitation contribution is ignored (see Eqs. (36)-(38) and related discussion).
FIG. 2 .
2Real part of Z sp from a spin-1 2 AFM Heisenberg ring. Top (bottom) panel: results for a ring with 8 (12) sites.
FIG. 3 .
3Dynamic structure factor of 1D spin-1 2 AFM Heisenberg lattice calculated with Vxc, with broadening 0.1. Top panel: weight factor G(k, t = 0) considered as unit. Bottom panel: weight renormalised with cluster G(k, 0 + ). The blue dashed curves are the boundaries for two-spinon processes.
FIG. 4 .
4V xc (k) of finite Hubbard ring, U = 7.74, ∆ = 1, in the unit of U . Top panel: real part. Bottom panel: imaginary part. V (k, t) = V (−k, t) and V (k, t) = −V (−k, −t).
FIG. 5 .
5Real part of Z el (k, t) of finite Hubbard chain with twisted boundary condition, U = 7.74, ∆ = 1, in the unit of U . Left and middle panel: with shorter time scale and fewer k-points. N = 8, 6, respectively. Right panel: N = 6, longer time and more k-points, peaks out of the color scale not shown. Z el (−k, t) = Z el (k, t) and Z el (k, −t) = −Z el (π − k, t). For a discussion of the negative peak in the middle panel, see the main text.
FIG. 7. Spatial spectral functions, calculated with the eight-site Vxc as kernel (for the six-site case, see the Appendix) for U = 7.74, ∆ = 1 and broadening η = 0.1. 64 k-points are used to approximate the k-integral, according to the Chadi-Cohen method [66].
Vxc-based results for i) the 1D Hubbard model and ii) the 1D AFM Heisenberg model, and the ED results for iii) a six-site Hubbard cluster and iv) a six-site Heisenberg cluster. For both models, Vxc is extrapolated from a six-site kernel.
V xc pp,qq (t) iG pq (t) = −J Σ_{δ=±1} [ G p,p+δ (0−) G p+δ,q (t) − G p+δ,p (0−) G pq (t) ].  (A20)
The independent elements of V xc in the orbital basis can be calculated from V xc pq (t); the constant factors x, y and a± are defined in the main text, and the corresponding terms in the 'bonding-antibonding' basis (V xc BB,BB, etc.) follow from them.
A. gratefully acknowledges financial support from the Knut and Alice Wallenberg Foundation (KAW 2017.0061) and the Swedish Research Council (Vetenskapsrådet, VR 2021-04498). C.V. gratefully acknowledges financial support from the Swedish Research Council (VR 2017-03945 and VR 2022-04486).
Appendix A: Sum rule and exchange term of the Heisenberg chain. Starting from the equation of motion of the Heisenberg model and its interaction term, the correlator g lpq (t) and the exchange-correlation hole are defined to fulfill the corresponding relations for t > 0 and for t < 0, i.e. Eqs. (A7) and (A8), which can be written in a compact form. The correlator therefore fulfills a relation from which the sum rule can be retrieved. The exchange term of the spinon Vxc can be derived from the variational method, so that the interaction term, and hence, according to the definition of Vxc, the equation of motion, can be rewritten accordingly.
To compute the Green function for positive time, where |Ψ⟩ is the ground state, one needs a complete set of eigenstates |n⟩ that give nonzero weight elements ⟨n|Ŝ−q|Ψ⟩. For an even number of sites and AFM coupling, the total z-spin of |Ψ⟩ is zero, which means that the states |n⟩ are in the S z = −1 sector. Labeling the eigenenergy corresponding to state |n⟩ by E−n, the Green function can be written as Eq. (B4).
Appendix C: The dependence of the α parameter on U/∆ in the Hubbard dimer model. The equations in this subsection are rewritten from the Hubbard dimer work [62]. With a two-site open-ended chain, the half-filled Hubbard Hamiltonian, Eq. (44), can be solved analytically, giving the analytic bonding (k = 0) and anti-bonding (k = π) Vxc, with α = (1 − κ)/(1 + κ) and κ = (1/4)[√((U/∆)² + 16) − U/∆]. After neglecting the higher-excitation term e^{−i4∆t} in Eq. (C1), the approximated dimer Vxc in the main text (Eq. (52)) is obtained.
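A minimal numerical sketch of the dimer parameters quoted above, assuming the closed-form expressions for κ and α of Appendix C; the value U = 7.74, ∆ = 1 is the one used for the Hubbard-ring figures, and the function name is illustrative.

```python
import numpy as np

def dimer_kappa_alpha(U, Delta):
    """Hubbard-dimer parameters of Appendix C:
    kappa = (1/4) * (sqrt((U/Delta)**2 + 16) - U/Delta)
    alpha = (1 - kappa) / (1 + kappa)
    """
    r = U / Delta
    kappa = 0.25 * (np.sqrt(r ** 2 + 16.0) - r)
    alpha = (1.0 - kappa) / (1.0 + kappa)
    return kappa, alpha

kappa, alpha = dimer_kappa_alpha(7.74, 1.0)
print(f"kappa = {kappa:.4f}, alpha = {alpha:.4f}")
# Limits: U/Delta -> 0 gives kappa -> 1 (alpha -> 0); U/Delta -> infinity gives kappa -> 0 (alpha -> 1).
```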
Electron correlations in narrow energy bands. J Hubbard, Proc. R. Soc. A. 276237J. Hubbard, Electron correlations in narrow energy bands, Proc. R. Soc. A 276, 237 (1963).
Zur Theorie des Ferromagnetismus. W Heisenberg, 10.1007/BF01328601Z. Phys. 49619W. Heisenberg, Zur Theorie des Ferromagnetismus, Z. Phys. 49, 619 (1928).
Quantum Physics in One Dimension. T Giamarchi, ternational Series of Monographs on Physics. Clarendon PressT. Giamarchi, Quantum Physics in One Dimension, In- ternational Series of Monographs on Physics (Clarendon Press, 2004).
F H L Essler, H Frahm, F Göhmann, A Klümper, V E Korepin, 10.1017/CBO9780511534843The One-Dimensional Hubbard Model. Cambridge University PressF. H. L. Essler, H. Frahm, F. Göhmann, A. Klümper, and V. E. Korepin, The One-Dimensional Hubbard Model (Cambridge University Press, 2005).
D. P. Arovas, E. Berg, S. A. Kivelson, and S. Raghu, The Hubbard Model, Annu. Rev. Condens. Matter Phys. 13, 239 (2022).
Metal-insulator transitions. M Imada, A Fujimori, Y Tokura, 10.1103/RevModPhys.70.1039Rev. Mod. Phys. 701039M. Imada, A. Fujimori, and Y. Tokura, Metal-insulator transitions, Rev. Mod. Phys. 70, 1039 (1998).
From quantum matter to high-temperature superconductivity in copper oxides. B Keimer, S A Kivelson, M R Norman, S Uchida, J Zaanen, 10.1038/nature14165Nature. 518179B. Keimer, S. A. Kivelson, M. R. Norman, S. Uchida, and J. Zaanen, From quantum matter to high-temperature superconductivity in copper oxides, Nature 518, 179 (2015).
Quantum spin liquids: a review. L Savary, L Balents, 10.1088/0034-4885/80/1/016502Reports on Progress in Physics. 8016502L. Savary and L. Balents, Quantum spin liquids: a re- view, Reports on Progress in Physics 80, 016502 (2016).
Quantum entanglement. R Horodecki, P Horodecki, M Horodecki, K Horodecki, 10.1103/RevModPhys.81.865Rev. Mod. Phys. 81865R. Horodecki, P. Horodecki, M. Horodecki, and K. Horodecki, Quantum entanglement, Rev. Mod. Phys. 81, 865 (2009).
Witnessing entanglement in quantum magnets using neutron scattering. A Scheie, P Laurell, A M Samarakoon, B Lake, S E Nagler, G E Granroth, S Okamoto, G Alvarez, D A Tennant, 10.1103/PhysRevB.103.224434Phys. Rev. B. 103224434A. Scheie, P. Laurell, A. M. Samarakoon, B. Lake, S. E. Nagler, G. E. Granroth, S. Okamoto, G. Alvarez, and D. A. Tennant, Witnessing entanglement in quantum magnets using neutron scattering, Phys. Rev. B 103, 224434 (2021).
Magnetic Susceptibility of Ideal Spin 1/2 Heisenberg Antiferromagnetic Chain Systems, Sr2CuO3 and SrCuO2. N Motoyama, H Eisaki, S Uchida, 10.1103/PhysRevLett.76.3212Phys. Rev. Lett. 763212N. Motoyama, H. Eisaki, and S. Uchida, Magnetic Sus- ceptibility of Ideal Spin 1/2 Heisenberg Antiferromag- netic Chain Systems, Sr2CuO3 and SrCuO2, Phys. Rev. Lett. 76, 3212 (1996).
Unbound spinons in the S=1/2 antiferromagnetic chain KCuF3. D A Tennant, T G Perring, R A Cowley, S E Nagler, 10.1103/PhysRevLett.70.4003Phys. Rev. Lett. 704003D. A. Tennant, T. G. Perring, R. A. Cowley, and S. E. Nagler, Unbound spinons in the S=1/2 antiferromagnetic chain KCuF3, Phys. Rev. Lett. 70, 4003 (1993).
Fractional spinon excitations in the quantum Heisenberg antiferromagnetic chain. M Mourgal, M Enderle, A Klöpperpieper, J.-S Caux, A Stunault, H M Rønnow, 10.1038/nphys2652Nature Physics. 9435M. Mourgal, M. Enderle, A. Klöpperpieper, J.-S. Caux, A. Stunault, and H. M. Rønnow, Fractional spinon ex- citations in the quantum Heisenberg antiferromagnetic chain, Nature Physics 9, 435 (2013).
Quantum wake dynamics in Heisenberg antiferromagnetic chains. A Scheie, P Laurell, B Lake, S E Nagler, M B Stone, J.-S Caux, D A Tennant, 10.1038/s41467-022-33571-8Nature Commun. 135796A. Scheie, P. Laurell, B. Lake, S. E. Nagler, M. B. Stone, J.-S. Caux, and D. A. Tennant, Quantum wake dynam- ics in Heisenberg antiferromagnetic chains, Nature Com- mun. 13, 5796 (2022).
Spin-Wave Spectrum of the Antiferromagnetic Linear Chain. J Cloizeaux, J J Pearson, 10.1103/PhysRev.128.2131Phys. Rev. 1282131J. des Cloizeaux and J. J. Pearson, Spin-Wave Spectrum of the Antiferromagnetic Linear Chain, Phys. Rev. 128, 2131 (1962).
One-Dimensional Chain of Anisotropic Spin-Spin Interactions. II. Properties of the Ground-State Energy Per Lattice Site for an Infinite System. C N Yang, C P Yang, 10.1103/PhysRev.150.327Phys. Rev. 150327C. N. Yang and C. P. Yang, One-Dimensional Chain of Anisotropic Spin-Spin Interactions. II. Properties of the Ground-State Energy Per Lattice Site for an Infinite Sys- tem, Phys. Rev. 150, 327 (1966).
General Relation of Correlation Exponents and Spectral Properties of One-Dimensional Fermi Systems: Application to the Anisotropic S = 1 2 Heisenberg Chain. F D M Haldane, 10.1103/PhysRevLett.45.1358Phys. Rev. Lett. 451358F. D. M. Haldane, General Relation of Correlation Expo- nents and Spectral Properties of One-Dimensional Fermi Systems: Application to the Anisotropic S = 1 2 Heisen- berg Chain, Phys. Rev. Lett. 45, 1358 (1980).
What is the spin of a spin wave?. L Faddeev, L Takhtajan, 10.1016/0375-9601(81)90335-2Phys. Lett. 85375L. Faddeev and L. Takhtajan, What is the spin of a spin wave?, Phys. Lett. 85A, 375 (1981).
Quantum spin dynamics of the antiferromagnetic linear chain in zero and nonzero magnetic field. G Müller, H Thomas, H Beck, J C Bonner, 10.1103/PhysRevB.24.1429Phys. Rev. B. 241429G. Müller, H. Thomas, H. Beck, and J. C. Bonner, Quan- tum spin dynamics of the antiferromagnetic linear chain in zero and nonzero magnetic field, Phys. Rev. B 24, 1429 (1981).
Complete solution of the one-dimensional Hubbard model. F H L Essler, V E Korepin, K Schoutens, 10.1103/PhysRevLett.67.3848Phys. Rev. Lett. 673848F. H. L. Essler, V. E. Korepin, and K. Schoutens, Com- plete solution of the one-dimensional Hubbard model, Phys. Rev. Lett. 67, 3848 (1991).
Multispinon Continua at Zero and Finite Temperature in a Near-Ideal Heisenberg Chain. B Lake, D A Tennant, J.-S Caux, T Barthel, U Schollwöck, S E Nagler, C D Frost, 10.1103/PhysRevLett.111.137205Phys. Rev. Lett. 111137205B. Lake, D. A. Tennant, J.-S. Caux, T. Barthel, U. Schollwöck, S. E. Nagler, and C. D. Frost, Multispinon Continua at Zero and Finite Temperature in a Near-Ideal Heisenberg Chain, Phys. Rev. Lett. 111, 137205 (2013).
Spectral functions of strongly correlated extended systems via an exact quantum embedding. G H Booth, G , K.-L Chan, 10.1103/PhysRevB.91.155107Phys. Rev. B. 91155107G. H. Booth and G. K.-L. Chan, Spectral functions of strongly correlated extended systems via an exact quan- tum embedding, Phys. Rev. B 91, 155107 (2015).
Zur Theorie der Metalle. H Bethe, 10.1007/BF01341708Z. Phys. 71205H. Bethe, Zur Theorie der Metalle, Z. Phys. 71, 205 (1931).
Exact Integrability of the One-Dimensional Hubbard Model. B S Shastry, 10.1103/PhysRevLett.56.2453Phys. Rev. Lett. 562453B. S. Shastry, Exact Integrability of the One-Dimensional Hubbard Model, Phys. Rev. Lett. 56, 2453 (1986).
Absence of Mott Transition in an Exact Solution of the Short-Range, One-Band Model in One Dimension. E H Lieb, F Y Wu, 10.1103/PhysRevLett.20.1445Phys. Rev. Lett. 201445E. H. Lieb and F. Y. Wu, Absence of Mott Transition in an Exact Solution of the Short-Range, One-Band Model in One Dimension, Phys. Rev. Lett. 20, 1445 (1968).
Excitation spectrum in the onedimensional Hubbard model. A Ovchinnikov, Sov. Phys. JETP. 301160A. Ovchinnikov, Excitation spectrum in the one- dimensional Hubbard model, Sov. Phys. JETP 30, 1160 (1970).
Spin and Charge Dynamics of the t − J Model. T Tohyama, P Horsch, S Maekawa, 10.1103/PhysRevLett.74.980Phys. Rev. Lett. 74980T. Tohyama, P. Horsch, and S. Maekawa, Spin and Charge Dynamics of the t − J Model, Phys. Rev. Lett. 74, 980 (1995).
Monte Carlo simulation of quantum spin systems. I. M Suzuki, S Miyashita, A Kuroda, Prog. Theor. Phys. 581377M. Suzuki, S. Miyashita, and A. Kuroda, Monte Carlo simulation of quantum spin systems. I, Prog. Theor. Phys. 58, 1377 (1977).
Quantum-statistical Monte Carlo method for Heisenberg spins. J W Lyklema, Phys. Rev. Lett. 4988J. W. Lyklema, Quantum-statistical Monte Carlo method for Heisenberg spins, Phys. Rev. Lett. 49, 88 (1982).
A generalization of Handscomb's quantum Monte Carlo scheme-application to the 1D Hubbard model. A W Sandvik, 10.1088/0305-4470/25/13/017Journal of Physics A: Mathematical and General. 253667A. W. Sandvik, A generalization of Handscomb's quan- tum Monte Carlo scheme-application to the 1D Hubbard model, Journal of Physics A: Mathematical and General 25, 3667 (1992).
Density matrix formulation for quantum renormalization groups. S R White, Phys. Rev. Lett. 692863S. R. White, Density matrix formulation for quantum renormalization groups, Phys. Rev. Lett. 69, 2863 (1992).
Density-matrix algorithms for quantum renormalization groups. S R White, Phys. Rev. B. 4810345S. R. White, Density-matrix algorithms for quantum renormalization groups, Phys. Rev. B 48, 10345 (1993).
Dynamical density-matrix renormalization-group method. E Jeckelmann, 10.1103/PhysRevB.66.045114Phys. Rev. B. 6645114E. Jeckelmann, Dynamical density-matrix renormalization-group method, Phys. Rev. B 66, 045114 (2002).
Spectral Properties of the One-Dimensional Hubbard Model. R Preuss, A Muramatsu, W Der Linden, P Dieterich, F F Assaad, W Hanke, 10.1103/PhysRevLett.73.732Phys. Rev. Lett. 73732R. Preuss, A. Muramatsu, W. von der Linden, P. Di- eterich, F. F. Assaad, and W. Hanke, Spectral Proper- ties of the One-Dimensional Hubbard Model, Phys. Rev. Lett. 73, 732 (1994).
Charge dynamics in half-filled Hubbard chains with finite on-site interaction. R G Pereira, K Penc, S R White, P D Sacramento, J M P Carmelo, 10.1103/PhysRevB.85.165132Phys. Rev. B. 85165132R. G. Pereira, K. Penc, S. R. White, P. D. Sacramento, and J. M. P. Carmelo, Charge dynamics in half-filled Hubbard chains with finite on-site interaction, Phys. Rev. B 85, 165132 (2012).
Charge dynamics of correlated electrons: Variational description with inclusion of composite fermions. K Ido, M Imada, T Misawa, 10.1103/PhysRevB.101.075124Phys. Rev. B. 10175124K. Ido, M. Imada, and T. Misawa, Charge dynamics of correlated electrons: Variational description with inclu- sion of composite fermions, Phys. Rev. B 101, 075124 (2020).
Quantum monte carlo study of the two-dimensional fermion hubbard model. C N Varney, C.-R Lee, Z J Bai, S Chiesa, M Jarrell, R T Scalettar, 10.1103/PhysRevB.80.075116Phys. Rev. B. 8075116C. N. Varney, C.-R. Lee, Z. J. Bai, S. Chiesa, M. Jarrell, and R. T. Scalettar, Quantum monte carlo study of the two-dimensional fermion hubbard model, Phys. Rev. B 80, 075116 (2009).
Spectral function of the twodimensional Hubbard model: A density matrix renormalization group plus cluster perturbation theory study. C Yang, A E Feiguin, 10.1103/PhysRevB.93.081107Phys. Rev. B. 9381107C. Yang and A. E. Feiguin, Spectral function of the two- dimensional Hubbard model: A density matrix renor- malization group plus cluster perturbation theory study, Phys. Rev. B 93, 081107(R) (2016).
Amelioration for the Sign Problem: An Adiabatic Quantum Monte Carlo Algorithm. M.-S Vaezi, A.-R Negari, A Moharramipour, A Vaezi, 10.1103/PhysRevLett.127.217003Phys. Rev. Lett. 127217003M.-S. Vaezi, A.-R. Negari, A. Moharramipour, and A. Vaezi, Amelioration for the Sign Problem: An Adia- batic Quantum Monte Carlo Algorithm, Phys. Rev. Lett. 127, 217003 (2021).
Inhomogeneous Electron Gas. P Hohenberg, W Kohn, 10.1103/PhysRev.136.B864Phys. Rev. 136864P. Hohenberg and W. Kohn, Inhomogeneous Electron Gas, Phys. Rev. 136, B864 (1964).
Self-Consistent Equations Including Exchange and Correlation Effects. W Kohn, L J Sham, 10.1103/PhysRev.140.A1133Phys. Rev. 1401133W. Kohn and L. J. Sham, Self-Consistent Equations In- cluding Exchange and Correlation Effects, Phys. Rev. 140, A1133 (1965).
Density-Functional Theory for Time-Dependent Systems. E Runge, E K U Gross, 10.1103/PhysRevLett.52.997Phys. Rev. Lett. 52997E. Runge and E. K. U. Gross, Density-Functional Theory for Time-Dependent Systems, Phys. Rev. Lett. 52, 997 (1984).
The density functional formalism, its applications and prospects. R O Jones, O Gunnarsson, 10.1103/RevModPhys.61.689Rev. Mod. Phys. 61689R. O. Jones and O. Gunnarsson, The density functional formalism, its applications and prospects, Rev. Mod. Phys. 61, 689 (1989).
Electronic excitations: density-functional versus many-body Green'sfunction approaches. G Onida, L Reining, A Rubio, 10.1103/RevModPhys.74.601Rev. Mod. Phys. 74601G. Onida, L. Reining, and A. Rubio, Electronic exci- tations: density-functional versus many-body Green's- function approaches, Rev. Mod. Phys. 74, 601 (2002).
K. Capelle and V. L. Campo, Density functionals and model Hamiltonians: Pillars of many-particle physics, Phys. Rep. 528, 91 (2013).
Interaction-energy functional for lattice density functional theory: Applications to one-, two-, and three-dimensional Hubbard models. R López-Sandoval, G M Pastor, 10.1103/PhysRevB.69.085101Phys. Rev. B. 6985101R. López-Sandoval and G. M. Pastor, Interaction-energy functional for lattice density functional theory: Applica- tions to one-, two-, and three-dimensional Hubbard mod- els, Phys. Rev. B 69, 085101 (2004).
Time-Dependent Density-Functional Theory and Strongly Correlated Systems: Insight from Numerical Studies. C Verdozzi, 10.1103/PhysRevLett.101.166401Phys. Rev. Lett. 101166401C. Verdozzi, Time-Dependent Density-Functional The- ory and Strongly Correlated Systems: Insight from Nu- merical Studies, Phys. Rev. Lett. 101, 166401 (2008).
Time-Dependent Density-Functional Theory Meets Dynamical Mean-Field Theory: Real-Time Dynamics for the 3D Hubbard Model. D Karlsson, A Privitera, C Verdozzi, 10.1103/PhysRevLett.106.116401Phys. Rev. Lett. 106116401D. Karlsson, A. Privitera, and C. Verdozzi, Time- Dependent Density-Functional Theory Meets Dynami- cal Mean-Field Theory: Real-Time Dynamics for the 3D Hubbard Model, Phys. Rev. Lett. 106, 116401 (2011).
The Hubbard dimer: a density functional case study of a many-body problem. D J Carrascal, J Ferrer, J C Smith, K Burke, 10.1088/0953-8984/27/39/393001Journal of Physics: Condensed Matter. 27393001D. J. Carrascal, J. Ferrer, J. C. Smith, and K. Burke, The Hubbard dimer: a density functional case study of a many-body problem, Journal of Physics: Condensed Matter 27, 393001 (2015).
M. Qin, T. Schäfer, S. Andergassen, P. Corboz, and E. Gull, The Hubbard Model: A Computational Perspective, Annu. Rev. Condens. Matter Phys. 13, 275 (2022).
Ground States of Constrained Systems: Application to Cerium Impurities. P H Dederichs, S Blügel, R Zeller, H Akai, 10.1103/PhysRevLett.53.2512Phys. Rev. Lett. 532512P. H. Dederichs, S. Blügel, R. Zeller, and H. Akai, Ground States of Constrained Systems: Application to Cerium Impurities, Phys. Rev. Lett. 53, 2512 (1984).
Density-functional calculation of the parameters in the Anderson model: Application to Mn in CdTe. O Gunnarsson, O K Andersen, O Jepsen, J Zaanen, 10.1103/PhysRevB.39.1708Phys. Rev. B. 391708O. Gunnarsson, O. K. Andersen, O. Jepsen, and J. Zaa- nen, Density-functional calculation of the parameters in the Anderson model: Application to Mn in CdTe, Phys. Rev. B 39, 1708 (1989).
Reformulation of the LDA+U method for a local-orbital basis. W E Pickett, S C Erwin, E C Ethridge, 10.1103/PhysRevB.58.1201Phys. Rev. B. 581201W. E. Pickett, S. C. Erwin, and E. C. Ethridge, Refor- mulation of the LDA+U method for a local-orbital basis, Phys. Rev. B 58, 1201 (1998).
Accurate magnetic exchange couplings in transition-metal complexes from constrained density-functional theory. I Rudra, Q Wu, T Van Voorhis, https:/aip.scitation.org/doi/10.1063/1.2145878J. Chem. Phys. 12424103I. Rudra, Q. Wu, and T. Van Voorhis, Accurate magnetic exchange couplings in transition-metal complexes from constrained density-functional theory, J. Chem. Phys. 124, 024103 (2006).
Band theory and Mott insulators: Hubbard U instead of Stoner I. V I Anisimov, J Zaanen, O K Andersen, 10.1103/PhysRevB.44.943Phys. Rev. B. 44943V. I. Anisimov, J. Zaanen, and O. K. Andersen, Band theory and Mott insulators: Hubbard U instead of Stoner I, Phys. Rev. B 44, 943 (1991).
A. J. Cohen, P. Mori-Sánchez, and W. Yang, Insights into Current Limitations of Density Functional Theory, Science 321, 792 (2008).
First-principles calculations of the electronic structure and spectra of strongly correlated systems: the LDA+ U method. V I Anisimov, F Aryasetiawan, A I Lichtenstein, 10.1088/0953-8984/9/4/002Journal of Physics: Condensed Matter. 9767V. I. Anisimov, F. Aryasetiawan, and A. I. Lichtenstein, First-principles calculations of the electronic structure and spectra of strongly correlated systems: the LDA+ U method, Journal of Physics: Condensed Matter 9, 767 (1997).
W. Kohn, Nobel Lecture: Electronic structure of matter - wave functions and density functionals, Rev. Mod. Phys. 71, 1253 (1999).
Density Functionals Not Based on the Electron Gas: Local-Density Approximation for a Luttinger Liquid. N A Lima, M F Silva, L N Oliveira, K Capelle, 10.1103/PhysRevLett.90.146402Phys. Rev. Lett. 90146402N. A. Lima, M. F. Silva, L. N. Oliveira, and K. Capelle, Density Functionals Not Based on the Electron Gas: Local-Density Approximation for a Luttinger Liquid, Phys. Rev. Lett. 90, 146402 (2003).
K. Capelle and V. L. Líbero, Spin-density functional theory: Some open problems and application to inhomogeneous Heisenberg models, International Journal of Quantum Chemistry 105, 679 (2005).
Time-dependent exchange-correlation potential in lieu of self-energy. F Aryasetiawan, 10.1103/PhysRevB.105.075106Phys. Rev. B. 10575106F. Aryasetiawan, Time-dependent exchange-correlation potential in lieu of self-energy, Phys. Rev. B 105, 075106 (2022).
Spectral functions of the half-filled one-dimensional Hubbard chain within the exchange-correlation potential formalism. F Aryasetiawan, T Sjöstrand, 10.1103/PhysRevB.106.045123Phys. Rev. B. 10645123F. Aryasetiawan and T. Sjöstrand, Spectral functions of the half-filled one-dimensional Hubbard chain within the exchange-correlation potential formalism, Phys. Rev. B 106, 045123 (2022).
Spin and charge dynamics of the one-dimensional extended Hubbard model. H Benthien, E Jeckelmann, 10.1103/PhysRevB.75.205128Phys. Rev. B. 75205128H. Benthien and E. Jeckelmann, Spin and charge dy- namics of the one-dimensional extended Hubbard model, Phys. Rev. B 75, 205128 (2007).
K. Karlsson and F. Aryasetiawan, Time-dependent exchange-correlation hole and potential of the electron gas, arXiv:2301.05590 (2023).
Twisted boundary conditions and effective mass in Heisenberg-Ising and Hubbard rings. B S Shastry, B Sutherland, 10.1103/PhysRevLett.65.243Phys. Rev. Lett. 65243B. S. Shastry and B. Sutherland, Twisted boundary con- ditions and effective mass in Heisenberg-Ising and Hub- bard rings, Phys. Rev. Lett. 65, 243 (1990).
Special Points in the Brillouin Zone. D J Chadi, M L Cohen, 10.1103/PhysRevB.8.5747Phys. Rev. B. 85747D. J. Chadi and M. L. Cohen, Special Points in the Bril- louin Zone, Phys. Rev. B 8, 5747 (1973).
P. Fazekas, Lecture Notes on Electron Correlation and Magnetism (World Scientific, 1999).
W. Nolting and A. Ramakanth, Quantum Theory of Magnetism (Springer Science & Business Media, 2009).
| [] |
[
"Highlights On applicability of von Karman's momentum theory in predicting the water entry load of V-shaped structures with varying initial velocity On applicability of von Karman's momentum theory in predicting the water entry load of V-shaped structures with varying initial velocity",
"Highlights On applicability of von Karman's momentum theory in predicting the water entry load of V-shaped structures with varying initial velocity On applicability of von Karman's momentum theory in predicting the water entry load of V-shaped structures with varying initial velocity"
] | [
"Yujin Lu ",
"Alessandro Del Buono ",
"Tianhang Xiao ",
"Alessandro Iafrati ",
"Shuanghou Deng ",
"Jinfa Xu ",
"Yujin Lu \nNanjing University of Aeronautics and Astronautics\nYudao Street 29210016NanjingJiangsuPeople's Republic of China\n\nNational Research Council-Institute of Marine Engineering (CNR-INM)\nVia di Vallerano 13900128RomaLazioItaly\n",
"Alessandro Del Buono \nNational Research Council-Institute of Marine Engineering (CNR-INM)\nVia di Vallerano 13900128RomaLazioItaly\n",
"Tianhang Xiao \nNanjing University of Aeronautics and Astronautics\nYudao Street 29210016NanjingJiangsuPeople's Republic of China\n",
"Alessandro Iafrati \nNational Research Council-Institute of Marine Engineering (CNR-INM)\nVia di Vallerano 13900128RomaLazioItaly\n",
"Shuanghou Deng \nNanjing University of Aeronautics and Astronautics\nYudao Street 29210016NanjingJiangsuPeople's Republic of China\n",
"Jinfa Xu \nNanjing University of Aeronautics and Astronautics\nYudao Street 29210016NanjingJiangsuPeople's Republic of China\n",
"Orcid ",
"Yujin Lu ",
"Alessandro Del Buono "
] | [
"Nanjing University of Aeronautics and Astronautics\nYudao Street 29210016NanjingJiangsuPeople's Republic of China",
"National Research Council-Institute of Marine Engineering (CNR-INM)\nVia di Vallerano 13900128RomaLazioItaly",
"National Research Council-Institute of Marine Engineering (CNR-INM)\nVia di Vallerano 13900128RomaLazioItaly",
"Nanjing University of Aeronautics and Astronautics\nYudao Street 29210016NanjingJiangsuPeople's Republic of China",
"National Research Council-Institute of Marine Engineering (CNR-INM)\nVia di Vallerano 13900128RomaLazioItaly",
"Nanjing University of Aeronautics and Astronautics\nYudao Street 29210016NanjingJiangsuPeople's Republic of China",
"Nanjing University of Aeronautics and Astronautics\nYudao Street 29210016NanjingJiangsuPeople's Republic of China"
] | [] | The maximal acceleration is proportional to the square of the initial velocity for the V-shaped body • The theoretical ratio of the corresponding velocity to the initial velocity is valid for large impact velocity • Gravity effect should be considered with slow impact speed • A coupled relation among max , * and * is found arXiv:2207.10413v2 [physics.flu-dyn] 25 Aug 2022 © 2022. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/ Nomenclature velocity angle, • w volume fraction of water deadrise angle, • angular velocity of the object, rad/s tensor of the moments of inertia, kg⋅m 2 resultant moment acting on the object, N⋅m the ratio of the corresponding velocity to the initial velocity the heel angle, • , 0 velocity and the initial velocity, m/s resultant displacement, m non-dimensional maximal acceleration , non-dimensional acceleration in -and -direction intercept pressure coefficient * hd , * hs maximal vertical hydrodynamic and hydrostatic force, N slope length of the cabin and the fuselage, m mass, kg added added mass, kg , w the volume of the cell and the volume of water in the cell, m 3 width, m ℎ shifted coordinate in x-axis, m penetration depth, m aero aerodynamic w, a water and air 1. Introduction 1 Amphibious aircraft is a special flight vehicle that is capable of taking off and landing both on water and 2 conventional runways (Qiu and Song, 2013). The amphibious aircrafts have drawn considerable attentions by the 3 nations with maritime supremacy due to their potential military and civilian applications. In the flight operational 4 envelope of amphibious aircraft, landing on water is regarded as the most dangerous phase where the hydrodynamic 5 impact load significantly influences the occupants survivability and structural integrity (Hughes et al., 2013). In terms 6 of the design and analysis of water entry load, full scale tests are regarded as the most straightforward and reliable way. 7 Investigating the hydrodynamics of the water landing of an amphibious aircraft with full scale test are highly expensive 8 and time demanding and, may be challenged by a low repeatability level. In order to derive reliable estimates of the 9 hydrodynamic loads acting on the aircraft during water landing, another practicable way is to perform scaled-model 10 experiments in water basins. As an example, experimental studies on the water entry problems have been conducted 11 at NACA Langley Memorial Aeronautical Laboratory, resulting in extensive and valuable archived test data and 12 recommendations in industrial applications (Benson and Bidwell, 1945). The study provides interesting information 13 about the effects on performances of design parameters such as deadrise angle, depth of step, configuration of hull 14 body, hydrofoils, etc. In general, hydrodynamics of water impacting can be demonstrated commendably by scaled 15 model water tank tests. In the case of seaplanes, both hydrodynamics and aerodynamic aspects play the same key 16 roles in the dynamic behavior and there is however a difficulty in achieving the correct scaling for the air and water 17 domains (Duan et al., 2019). 
Froude ( ) scaling guarantees the correct reproduction of the ratio between the inertia 18 and gravity force in the water domain but it do not allow to preserve the Reynolds ( ) similarity and thus the correct 19 scaling of the viscous effects which are important in both water and, especially, in air for the aerodynamic lift and drag 20 (Terziev et al., 2022; Iafrati and Grizzi, 2019). Depending on the full-scale speed, other phenomena like cavitation 21 and ventilation might be also relevant in the water domain that would not be properly reproduced in scaled model tests 22 based on Froude similarity only (Iafrati and Grizzi, 2019). 23 As an alternative to expensive experimental campaign, the recent developed computational approaches allow to 24 simulate the hydro-and aero-dynamics and kinematic motion of amphibious aircraft in full scale. Different phases 25 during the whole process, such as takeoff/landing, skiing, and other serious situations were investigated recently by 26 numerical simulation. For the takeoff process, (Qiu and Song, 2013) proposed a decoupled algorithm to investigate 27 65 Wang et al., 2021a; Sheng et al., 2022). However, it is worth noting that only fitting functions of force and acceleration 66 were discussed in the previous studies, whereas the detailed theoretical basis with related relationships have not been 67 derived yet.68The present study is dedicated to numerical simulations of a two-dimensional symmetric wedge and a three-69 dimensional cabin section in free fall water entry in order to investigate and build up parametric relations, based 70 on the transformation of the von Karman's momentum theory, that can provide the maximal vertical acceleration and 71 the corresponding vertical velocity, penetration depth and time. Particular attention is paid at the effects of horizontal 72 velocity, and three-dimensional flow. The relations are then used to predict the load acting on amphibious aircraft during 73 the water landing. The present work is organized as follows. Section 2 presents the methodology for the theoretical 74 and numerical approaches, and describes the models and the computational setup; the main results are reported and 75 discussed in Sec. 3; final conclusions are drawn in Sec. 4. | 10.1016/j.oceaneng.2022.112249 | [
"https://export.arxiv.org/pdf/2207.10413v2.pdf"
] | 250,917,628 | 2207.10413 | 64239eebec0efc88119f111dfaeadc2620e45065 |
On applicability of von Karman's momentum theory in predicting the water entry load of V-shaped structures with varying initial velocity
25 Aug 2022
Yujin Lu (Nanjing University of Aeronautics and Astronautics, Yudao Street 29, 210016 Nanjing, Jiangsu, People's Republic of China; National Research Council-Institute of Marine Engineering (CNR-INM), Via di Vallerano 139, 00128 Roma, Lazio, Italy)
Alessandro Del Buono (National Research Council-Institute of Marine Engineering (CNR-INM), Via di Vallerano 139, 00128 Roma, Lazio, Italy)
Tianhang Xiao (Nanjing University of Aeronautics and Astronautics, Yudao Street 29, 210016 Nanjing, Jiangsu, People's Republic of China)
Alessandro Iafrati (National Research Council-Institute of Marine Engineering (CNR-INM), Via di Vallerano 139, 00128 Roma, Lazio, Italy)
Shuanghou Deng (Nanjing University of Aeronautics and Astronautics, Yudao Street 29, 210016 Nanjing, Jiangsu, People's Republic of China)
Jinfa Xu (Nanjing University of Aeronautics and Astronautics, Yudao Street 29, 210016 Nanjing, Jiangsu, People's Republic of China)
Preprint submitted to Elsevier
Keywords: water landing; amphibious aircraft; momentum theory; acceleration; linear dependence
Highlights:
• The maximal acceleration is proportional to the square of the initial velocity for the V-shaped body.
• The theoretical ratio of the corresponding velocity to the initial velocity is valid for large impact velocities.
• The gravity effect should be considered at slow impact speeds.
• A coupled relation among max , * and * is found.
arXiv:2207.10413v2 [physics.flu-dyn] 25 Aug 2022. © 2022. This manuscript version is made available under the CC-BY-NC-ND 4.0 license https://creativecommons.org/licenses/by-nc-nd/4.0/
Nomenclature: velocity angle (deg); volume fraction of water; deadrise angle (deg); angular velocity of the object (rad/s); tensor of the moments of inertia (kg⋅m²); resultant moment acting on the object (N⋅m); ratio of the corresponding velocity to the initial velocity; heel angle (deg); velocity and initial velocity (m/s); resultant displacement (m); non-dimensional maximal acceleration; non-dimensional accelerations in the two coordinate directions; intercept; pressure coefficient; maximal vertical hydrodynamic and hydrostatic forces (N); slope length of the cabin and the fuselage (m); mass (kg); added mass (kg); volume of the cell and volume of water in the cell (m³); width (m); shifted coordinate along the x-axis (m); penetration depth (m); subscripts: aero, aerodynamic; w, a, water and air.
1. Introduction
Amphibious aircraft are special flight vehicles capable of taking off and landing both on water and on conventional runways (Qiu and Song, 2013). They have drawn considerable attention from nations with maritime supremacy due to their potential military and civilian applications. Within the flight operational envelope of an amphibious aircraft, landing on water is regarded as the most dangerous phase, since the hydrodynamic impact load significantly influences occupant survivability and structural integrity (Hughes et al., 2013). For the design and analysis of the water entry load, full-scale tests are regarded as the most straightforward and reliable way; however, investigating the hydrodynamics of the water landing of an amphibious aircraft with full-scale tests is highly expensive and time demanding, and may be challenged by a low level of repeatability. Another practicable way to derive reliable estimates of the hydrodynamic loads acting on the aircraft during water landing is to perform scaled-model experiments in water basins. As an example, experimental studies on water entry problems have been conducted at the NACA Langley Memorial Aeronautical Laboratory, resulting in extensive and valuable archived test data and recommendations for industrial applications (Benson and Bidwell, 1945). That study provides interesting information about the effects of design parameters such as deadrise angle, depth of step, configuration of the hull body, hydrofoils, etc. In general, the hydrodynamics of water impact can be demonstrated commendably by scaled-model water tank tests. In the case of seaplanes, however, hydrodynamic and aerodynamic aspects play equally key roles in the dynamic behavior, and it is difficult to achieve the correct scaling for the air and water domains (Duan et al., 2019).
Froude scaling guarantees the correct reproduction of the ratio between the inertia and gravity forces in the water domain, but it does not allow the Reynolds similarity to be preserved, and thus the correct scaling of the viscous effects, which are important in both water and, especially, in air for the aerodynamic lift and drag (Terziev et al., 2022; Iafrati and Grizzi, 2019). Depending on the full-scale speed, other phenomena like cavitation and ventilation might also be relevant in the water domain and would not be properly reproduced in scaled model tests based on Froude similarity only (Iafrati and Grizzi, 2019). As an alternative to expensive experimental campaigns, recently developed computational approaches allow the hydro- and aero-dynamics and the kinematic motion of amphibious aircraft to be simulated at full scale. Different phases of the whole process, such as takeoff/landing, skiing, and other serious situations, have been investigated recently by numerical simulation. For the takeoff process, Qiu and Song (2013) proposed a decoupled algorithm to investigate the kinematic characteristics; fitting-based descriptions of the loads have also been reported (Wang et al., 2021a; Sheng et al., 2022). However, it is worth noting that only fitting functions of force and acceleration were discussed in the previous studies, whereas the detailed theoretical basis with the related relationships has not been derived yet. The present study is dedicated to numerical simulations of a two-dimensional symmetric wedge and a three-dimensional cabin section in free fall water entry, in order to investigate and build up parametric relations, based on the transformation of von Karman's momentum theory, that can provide the maximal vertical acceleration and the corresponding vertical velocity, penetration depth and time. Particular attention is paid to the effects of horizontal velocity and of three-dimensional flow. The relations are then used to predict the load acting on an amphibious aircraft during water landing. The present work is organized as follows. Section 2 presents the methodology for the theoretical and numerical approaches and describes the models and the computational setup; the main results are reported and discussed in Sec. 3; final conclusions are drawn in Sec. 4.
The water landing of an amphibious aircraft is a complicated problem that can lead to an uncomfortable ride and to structural damage due to large vertical accelerations and the consequent dynamic responses. The problem herein is investigated by solving the unsteady incompressible Reynolds-averaged Navier-Stokes equations with a standard two-equation turbulence closure model. The theoretical solutions established by von Karman's momentum theory are also employed. In order to validate the relationships between the initial vertical velocity and the peak value of the vertical acceleration, free fall test cases of a 2D symmetric wedge in oblique entry and of a 3D cabin section in vertical entry are presented first. The other parameters at which the maximum acceleration occurs, such as time, penetration depth and velocity, are also evaluated. Hence, the quantitative relations are applied to the water landing event of an amphibious aircraft. Detailed results in terms of free surface shape and pressure distribution are provided to show the slamming effects. The results show that a linear dependence of the maximal acceleration on the square of the initial vertical velocity can be derived for the two-dimensional wedge, the three-dimensional cabin section and the seaplane with a V-shaped hull. Moreover, the ratio between the corresponding velocity and the initial vertical velocity tends to a constant threshold value, 5/6, derived from the theoretical solution, when increasing the initial vertical velocity in all three cases. In earlier studies of free-fall water entry, relations for the maximum force on the wedge and the corresponding time in terms of the initial entering velocity have been directly expressed by fitting formulas for Froude numbers greater than 2 (Gong et al., 2009). In the work of (Abraham et al., 2014), the drag coefficient of a sphere impacting the water surface was found to be independent of some of the investigated quantities, such as the sphere velocity, surface tension, flow regime (laminar or turbulent) and Reynolds number. Hence, algebraic expressions of the drag coefficient versus the dimensionless depth have been established by two fitted polynomials.
Effects of parametric variation, such as the impact velocity, radius, and mass of the sphere, on the impact force and the acceleration have also been analyzed by (Yu et al., 2019). The peak value of the non-dimensional impact force has been found to be independent of the velocity and the radius, whereas it depends on the mass of the sphere. In parallel, simplified expressions for the maximal force and acceleration have been obtained by fitting the relations between the peak value of the non-dimensional force and the non-dimensional mass.
Von Karman's theoretical method and transformation
Pioneering research on the water entry problem was conducted by von Karman (von Karman, 1929), based on the momentum theorem and the added mass, for the prediction of the hydrodynamic load during the water entry of a V-shaped body penetrating into the water. By applying momentum conservation between the beginning of the impact and the generic time t, it is obtained:
m v_0 = (m + m_added) ⋅ v(t)    (1)
where m is the mass of the wedge per unit length, v_0 is the initial vertical impact velocity, and v(t) is the instantaneous velocity during the impact. In equation (1), m_added is the added mass, which is computed by using the flat-plate approximation (see Fig. 1). It is assumed that the added mass is equal to the mass of a half disk of water of radius c(t), which results in m_added = ρ π c²(t)/2 (Mei et al., 1999). In such an approximation the effect of the water pile-up is ignored.
With such an assumption, the velocity of the body can be retrieved as in Eq. (2) below. Based on what is provided in Appendix A and differentiating Eq. (2), it is possible to analytically derive the instantaneous acceleration, Eq. (3) (Panciroli et al., 2013):
v(ξ) = m v_0 / (m + m_added) = v_0 / (1 + ρ π ξ²(t) / (2 m tan²β)) = 2 m tan²β v_0 / (2 m tan²β + ρ π ξ²(t))    (2)
a(ξ) = (ρ π ξ(t) / (m v_0 tan²β)) ⋅ v³(ξ)    (3)
which takes a peak of magnitude:
a* = v_0² (5/6)³ (1/tanβ) √(2 π ρ / (5 m))    (4)
when the corresponding penetration depth and velocity are:
ξ* = tanβ √(2 m / (5 π ρ)),   v* = (5/6) v_0    (5)
and the corresponding time is:
t* = (16/15) (tanβ / v_0) √(2 m / (5 π ρ))    (6)
Note that the superscript * indicates the values the different quantities take when the acceleration reaches its peak. It is interesting to notice that a*, v* and t* are proportional to v_0², v_0 and v_0⁻¹, respectively, implying that the initial vertical velocity governs those parameters, with the exception of ξ*, which is independent of v_0.
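The scalings above can be checked with a minimal sketch of Eqs. (2)-(6), assuming the reconstructed notation used here (v_0 initial vertical velocity, β deadrise angle, m mass per unit length, ρ water density); the numerical values are illustrative and not taken from the paper's test matrix.

```python
import numpy as np

def von_karman_peak(v0, beta_deg, m, rho=1000.0):
    """Peak acceleration and the corresponding depth, velocity and time
    for a 2D wedge, following the momentum (added-mass) model above."""
    beta = np.radians(beta_deg)
    k = rho * np.pi / (2.0 * m * np.tan(beta) ** 2)   # v(xi) = v0 / (1 + k xi^2), Eq. (2)
    xi_star = 1.0 / np.sqrt(5.0 * k)                  # depth at peak acceleration, Eq. (5)
    v_star = 5.0 / 6.0 * v0                           # corresponding velocity, Eq. (5)
    a_star = v0 ** 2 * (5.0 / 6.0) ** 3 / np.tan(beta) * np.sqrt(2.0 * np.pi * rho / (5.0 * m))  # Eq. (4)
    t_star = (xi_star + k * xi_star ** 3 / 3.0) / v0  # t(xi) = (xi + k xi^3/3)/v0, i.e. Eq. (6)
    return a_star, xi_star, v_star, t_star

# Illustrative case: beta = 37 deg, m = 100 kg/m, two entry speeds
for v0 in (2.0, 4.0):
    a, xi, v, t = von_karman_peak(v0, 37.0, 100.0)
    print(f"v0={v0}: a*={a:.2f} m/s^2, xi*={xi:.3f} m, v*={v:.2f} m/s, t*={t*1e3:.1f} ms")
# Doubling v0 quadruples a*, halves t*, leaves xi* unchanged and keeps v*/v0 = 5/6.
```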
Numerical method
In order to numerically simulate the problem, the commercial package Star-CCM+ is utilized herein as the two-phase flow solver. The High-Resolution Interface Capturing (HRIC) scheme is adopted for volume fraction transport. The convection terms, as well as the diffusion terms, are discretized using second-order upwind and second-order central methods, respectively. The unsteady terms are discretized in the time domain by applying a second-order implicit scheme.
The Volume of Fluid (VOF) scheme, originally proposed by Hirt and Nichols (Hirt and Nichols, 1981), is used in the present computational scheme to capture the water-air interface by introducing a variable α_w, called the volume fraction of water in the computational cell, which varies between 0 (air) and 1 (water) and is defined as:
α_w = V_w / V,    (7)
where V_w is the volume of water in the cell and V is the volume of the cell. The volume fraction of the air in a cell can be computed as:
α_a = 1 − α_w.    (8)
The effective value φ_m of any physical property φ (such as density, viscosity, etc.) of the mixture of water and air in the transport equations is determined by:
φ_m = φ_w α_w + φ_a (1 − α_w).    (9)
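A toy illustration of Eqs. (7)-(9), with the symbols introduced above; the density values are generic water/air figures, not settings prescribed by the paper.

```python
def mixture_property(phi_water, phi_air, alpha_w):
    """Cell-averaged property per Eq. (9): phi_m = phi_w*alpha_w + phi_a*(1 - alpha_w)."""
    return phi_water * alpha_w + phi_air * (1.0 - alpha_w)

# Density of a cell that is 30% water by volume (rho_w = 1000, rho_a = 1.2 kg/m^3)
alpha_w = 0.3                      # Eq. (7): V_water / V_cell
alpha_a = 1.0 - alpha_w            # Eq. (8)
print(mixture_property(1000.0, 1.2, alpha_w))   # ~300.8 kg/m^3
```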
To accurately capture the dynamic behavior as well as the load characteristics of the water landing process, the motion of the body in response to the fluid forces and moments acting on its surface is determined via a six degree-of-freedom (6DOF) model. The 6DOF model solves the equations for the rotation and translation of the center of mass of the object. The equation for the translation in the global inertial coordinate system is formulated as:
m ⋅ dv/dt = F,    (10)
and the rotation of the object is solved in the body local coordinate system by:
I ⋅ dω/dt + ω × (I ⋅ ω) = M.    (11)
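For illustration, a hedged sketch of one explicit time step of the rigid-body equations (10)-(11); this is not Star-CCM+'s internal 6DOF solver, and all numbers, frames and names are assumptions made for the example.

```python
import numpy as np

def sixdof_step(v, omega, F, M, m, I, dt):
    """One explicit update of Eqs. (10)-(11).
    v and F are expressed in the global frame; omega, M and the inertia tensor I in the body frame."""
    v_new = v + dt * F / m                                       # m dv/dt = F
    domega = np.linalg.solve(I, M - np.cross(omega, I @ omega))  # I domega/dt + omega x (I omega) = M
    omega_new = omega + dt * domega
    return v_new, omega_new

# Illustrative values (not from the paper): a 1000 kg body under gravity and a pitching moment
m, I = 1000.0, np.diag([800.0, 1200.0, 1500.0])
v, omega = np.array([50.0, 0.0, -3.0]), np.zeros(3)
F, M = np.array([0.0, 0.0, -9.81 * m]), np.array([0.0, 2.0e3, 0.0])
v, omega = sixdof_step(v, omega, F, M, m, I, dt=1e-3)
print(v, omega)
```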
Subsequently, a dynamic mesh strategy (Xiao et al., 2021a), which moves the entire mesh rigidly along with the object at each time step according to the solution of the 6DOF model, is employed to deal with the relative motion between the fluid and the rigid body within a single grid domain. As neither mesh distortion nor mesh reconstruction occurs, the high quality of the initial mesh remains unchanged during the whole simulation and, thus, the solution accuracy of both the flow field and the water-air interface capturing is not degraded for such unsteady problems with large relative motion. It should be mentioned that the water surface level is kept stationary regardless of the translation or rotation of the mesh. To achieve this goal, the water volume fraction α_w first needs to be prescribed on the boundary condition, with its value in each grid cell assigned according to the cell's global inertial coordinates.
Specifically, the volume fraction is one for the cells located below the interface and zero for the cells above. The same treatment of the pressure function on the boundary condition should also be defined as a part of the initial condition of the fluid field. For the air field, the pressure is assumed constant at the beginning, while the water pressure varies gradually with depth in the water domain.
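A minimal sketch of the initialization described above, on a toy one-dimensional column of cells; the free-surface height, the atmospheric reference pressure and the function name are assumptions for the example, not quantities given in the paper.

```python
import numpy as np

RHO_W, G, P_ATM = 1000.0, 9.81, 101325.0

def initialize_column(z_centroids, z_surface=0.0):
    """Initial water fraction and pressure for cells at heights z (calm free surface at z_surface).
    Cells below the surface are water (alpha_w = 1) with hydrostatic pressure; cells above are air."""
    alpha_w = np.where(z_centroids < z_surface, 1.0, 0.0)
    depth = np.clip(z_surface - z_centroids, 0.0, None)
    pressure = P_ATM + RHO_W * G * depth      # constant in air, increasing with depth in water
    return alpha_w, pressure

z = np.linspace(-2.0, 2.0, 9)                 # cell centroids straddling the free surface
alpha_w, p = initialize_column(z)
print(alpha_w)
print(p)
```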
Models and computational setup
The theory governing the vertical water entry of wedges, expressed by equations (4), (5) and (6), is assessed first: the free fall of the wedge has been studied by systematically varying the velocity angle, i.e., by combining vertical and horizontal motions.
The wedge has a width of 0.2 m and a deadrise angle β = 37°, and it impacts with the symmetry axis oriented vertically, as seen in Fig. 2. The same configuration is simulated numerically. Besides, in order to carry out a two-dimensional numerical simulation, only one cell is set in the y-direction (spanwise direction), with a cell size of 0.002 m.
The front and back boundary conditions are defined as symmetry. Fig. 3 shows the details of the mesh topology. As a second step, a cabin section, that is, a part of the seaplane, is investigated numerically to examine the three-dimensional effects. Eventually, the water landing of the V-shaped hull of an amphibious aircraft is studied to check the capability of the theoretical relations (Eqs. (4), (5) and (6)) to deal with complex problems and to verify to which extent they are reliable for engineering applications. A conventional configuration of the fuselage of an amphibious aircraft is shown in Fig. 6.
The bottom of hull is divided into two parts, forebody and afterbody, by the step, making it easier to take off on water.
The computational domain was created as a cuboid with size of 6 × 2 × 5 in length, width and height, respectively. Note that the wing and the tail wing are taken into consideration. The computed accelerations, where the subscripts w and a denote the fluid forces induced by water and air respectively, are depicted in Fig. 9a, along with several pink crosses marking the maximum value max. The data indicate that the increase in causes a significant reduction of , due to the corresponding reduction in 0. Note that positive values of denote upward acceleration.
In particular, as 0 drops below a certain value, experiences a smooth trend in proximity to zero. From Eq. (4), the maximal acceleration is expected to be proportional to the square of the initial vertical velocity; however, there is an intercept value of for the numerical results, which is presumably due to gravity.
On the other hand, the data shown in Fig. 10 and Table 1 display a significant contribution of the vertical component. In Fig. 11, the other four correlated variables are reported, viz. the time t*, the penetration depth ξ*, the velocity v* and the velocity ratio, defined as v*/v_0, for the four cases introduced earlier. In Eq. (6), a linear relation between t* and the reciprocal of the initial vertical velocity v_0⁻¹ was established, which is similar to the solution in Fig. 11a, although a small difference appears among the three cases. As listed in Table 1, the error of the numerical values with respect to the theoretical estimate, varying from 44.67% to -6.09%, shows an obvious decrease with the growth of v_0. In fact, when reducing v_0, the corresponding initial vertical velocity becomes smaller and, consequently, gravity effects increase, causing larger differences with respect to the theoretical formulation, which is derived without considering gravity. In Fig. 11b, the values display a reduction of ξ* when increasing v_0: the greater v_0 is, the closer ξ* is to an asymptotic line slightly different from the theoretical result, implying that the theoretical solution is valid only when certain conditions on v_0 are met. Fig. 12 shows the water surface deformation around the wedge at t* for the different cases with a cyan region, where it can be clearly noted that the displacements of the apex remain almost the same, despite different water jet zones forming at the two sides.
In the bottom-right picture, the spray seems to detach from the body and fall down; this is a consequence of gravity. Moreover, as shown in all the contours, the maximum acceleration max occurs before the wedge is completely submerged.
Moving to the relationship between v* and v_0, shown in Fig. 11c and Table 1, a slight difference between the simulations and the theory can be observed, the error being below 5%. Furthermore, as shown in Fig. 11d, the trend of the velocity ratio is similar to the one obtained for v*, and a gray shaded region can be found where v* is 5/6 times v_0, in agreement with the theoretical estimate. In other words, the value 5/6 relating v* and v_0 is only recovered when v_0 is greater than 1.85 m/s, which is smaller than the limit of 2.95 m/s found for *. As can be seen in Fig. 9a, the acceleration experiences two phases, downwards and then upwards, before reaching its maximum, which requires that the initial vertical velocity be higher than a threshold value. Furthermore, the formula for the added mass, which is usually adopted for vertical water entry, is found to be valid for the vertical direction of the oblique entry as well. In addition to the analysis of the effect of the initial vertical velocity on the load characteristics, the oblique water entry cases of Fig. 15 are considered. It is interesting to note that the data fit well with a linear function, as shown in Fig. 18. The results are shown for five distinct cases. It is worth noticing that v* varies linearly with v_0, as v* = 0.81375 v_0 + 0.01059. The slope is numerically lower than the analytical one provided by Eq. (5), as shown in Fig. 18b. Nonetheless, the error of the slope compared to the theoretical estimate is -2.35%, with a root mean squared error as shown in Fig. 18a, except for the relationship between the corresponding time t* and the initial horizontal velocity, for which a constant trend is observed in Fig. 18c.
Effect of horizontal velocity
A cabin section in 3D
The above results prove that it is possible to evaluate the load characteristics with the help of the linear relations. It can be seen that the distance between the peak points becomes narrower, and the difference is less pronounced as v_0 grows.
Fig. 20a demonstrates that max is still a linear function of v_0², fitted by max = 0.1734 v_0² − 0.1983, where the slope is slightly lower than the theoretical one, with a -7.57% error, as listed in Table 2. Fig. 20b shows the results of at three ; the value of becomes larger as v_0 rises, denoting more significant three-dimensional effects.
The relations for the other dynamic parameters of the 3D cabin section are shown in Fig. 21. As can be seen, a significant parameter characterizing the impact is the corresponding velocity v*, shown in Fig. 21c, which displays a linear relation with v_0. Specifically, as seen in Fig. 21d, the value of the velocity ratio approaches the theoretical line only for v_0 greater than 4.5 m/s, whereas large differences are observed for smaller initial impact velocities.
Subsequently, the instantaneous Froude number (Hulin et al., 2022), Fr* = v* / √(g ξ*), is introduced here to describe the combined relation between velocity and penetration depth when the maximum value of the acceleration is reached.
As can be seen in Fig. 22, Fr* is found to be proportional to the initial vertical velocity v_0, both for the 2D wedge and for the 3D cabin section. The proportionality can also be derived from Eq. (5), where ξ* is independent of v_0 and v* is a linear function of v_0. Being a_max a linear function of v_0², the relationship between the maximum acceleration and the instantaneous Froude number Fr* can be easily established through Eqs. (4) and (5), as follows:
ā_max / (Fr*)² = a_max ⋅ ξ* / (v*)² = 1/3  ⟶  ā_max = (1/3) (Fr*)²    (12)
The detailed results for the 2D wedge and the 3D cabin section are fitted and summarized in Table 3. It can be seen that the numerical relations agree well with the theoretical prediction, although in the 3D case the value of the slope displays an obvious deviation associated with the three-dimensional effect. Moreover, Eq. (12) can also be written as:
a_max ⋅ ξ* / (v*)² = 1/3  ⟶  a_max = (v*)² / (3 ξ*),    (13)
providing a strong coupled relation among a_max, v* and ξ*, instead of three separate expressions (see Eqs. (4) and (5)).
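The coupled relation can be verified numerically against the closed-form wedge solution; the sketch below reuses the reconstructed formulas of Eqs. (4)-(5) with illustrative inputs (the 1/3 constant is exact within the momentum model, which is what the three printed columns confirm).

```python
import numpy as np

g, rho, m, beta = 9.81, 1000.0, 100.0, np.radians(37.0)  # illustrative wedge, not the paper's test matrix
k = rho * np.pi / (2.0 * m * np.tan(beta) ** 2)

for v0 in (2.0, 4.0, 6.0):
    xi_star = 1.0 / np.sqrt(5.0 * k)                       # Eq. (5)
    v_star = 5.0 / 6.0 * v0                                # Eq. (5)
    a_max = v0 ** 2 * (5.0 / 6.0) ** 3 / np.tan(beta) * np.sqrt(2.0 * np.pi * rho / (5.0 * m))  # Eq. (4)
    Fr_star = v_star / np.sqrt(g * xi_star)
    # Eq. (13): a_max = v*^2 / (3 xi*);  Eq. (12): a_max = g * Fr*^2 / 3
    print(v0, a_max, v_star ** 2 / (3.0 * xi_star), g * Fr_star ** 2 / 3.0)
```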
Effects associated with the suction and the double-stepped planing phenomenon are not accounted for in the theoretical model (see Fig. 27). Nevertheless, the linearity of max versus v_0² is still found, although a small deviation appears when v_0 is below 1 m/s in the fixed-pitch case. The results of max with fixed pitch are above those with free pitch; for instance, the high-pressure region presented in Fig. 25c is larger than that in Fig. 25i.
Owing to the considerable change in the estimate of max versus v_0², it is necessary to check the effectiveness of Eqs. (5) and (6). Fig. 28 shows the variation of the other four gauged factors with the initial impacting velocity v_0, evaluated when the acceleration reaches its peak. As can be seen in Fig. 28a, the larger the vertical velocity is, the shorter the corresponding time interval is. The results for the maximal draught max reached by the hull are also depicted in Fig. 28b. Of course, max is above ξ*, meaning that while the acceleration attains its maximum, the aircraft continues to move downwards. A monotonically increasing trend of max with v_0 is observed for the case of fixed pitch, while a valley occurs in the free-pitch motion.
Turning to the behavior of the corresponding velocity v*, it is interesting to see that the two cases share quite a similar evolution of v* with v_0, as presented in Figs. 28c and 28d. Specifically, there is a turning point where v_0 equals 1.5 m/s, after which the trends are similar. On the left side of the turning point, the results with fixed pitch are lower than those with free pitch. The blue dashed rectangle indicates the range in which v* is above v_0, in other words, the velocity ratio is greater than 1 (see Fig. 28d),
Summary of the fitted relations for the different cases (√ = relation holds, × = relation does not hold):
max − v_0² : linear; √, √, √, √
t* − v_0⁻¹ : linear; √, √, ×, ×
ξ* − v_0 : constant; ×, ×, ×, constant, ×
v* − v_0 : linear; √, √, ×, ×
velocity ratio − v_0 : constant, 5/6; ×, ×, ×, ×
Conclusion
In the present study, the load characteristics of three models, namely a 2D symmetric wedge, a 3D cabin section and an amphibious aircraft with a V-shaped hull entering the water in free fall, have been investigated. 1) When impacting on the water surface, the maximum vertical acceleration increases with the initial vertical velocity, and it is found herein that the value of the maximal vertical acceleration is proportional to the square of the initial vertical impacting velocity. For oblique entry, the effect of the horizontal velocity on the acceleration has also been investigated, and it is observed that the maximum horizontal acceleration is a linear function of the initial horizontal velocity, rather than of its square.
2) Another significant parameter, that these three models share the same trend, is the ratio of the corresponding 367 velocity to the initial velocity, . Following the theoretical formulation, the value should be constant, 5/6, while 368 the numerical results approach it in the case of large initial vertical velocity. It indicates that a threshold value of 369 initial vertical velocity needs to be emphasized to make the theoretical result available. In other words, gravity can be 370 neglected with larger velocities, however, with slow impact speeds, gravity should be considered in the model.
The minus sign indicates that the direction of the acceleration is opposite to the direction of the velocity. Besides, we define the positive value of the acceleration as upwards, while the velocity and the penetration depth are positive downwards. The acceleration reaches its peak value when its time derivative vanishes, that is,
Yujin Lu a,b, Alessandro Del Buono b,*, Tianhang Xiao a,**, Alessandro Iafrati b, Shuanghou Deng a and Jinfa Xu a
a Nanjing University of Aeronautics and Astronautics, Yudao Street 29, Nanjing, 210016, Jiangsu, People's Republic of China
b National Research Council-Institute of Marine Engineering (CNR-INM), Via di Vallerano 139, Roma, 00128, Lazio, Italy
of the hull body were computed separately. The whole process was divided into a number of small time steps, and the forces were calculated at each time step. Duan et al. (2019) evaluated the porpoising motion, an unstable oscillation phenomenon that threatens the flying safety of amphibious aircraft, by using a two-phase flow solver in OpenFOAM. Both the slipstream caused by the propeller and external forces, viz. thrust and elevator forces, were taken into consideration as well. The results highlighted the important role played by the hydrodynamic force on the heaving and pitching oscillations, while the aerodynamic forces have a rather marginal effect. Similar to the water landing scenarios of amphibian aircraft, ditching events of conventional aircraft show the same fluid dynamics phenomena and have been widely studied numerically. The effects of initial pitching angle and velocity (Xiao et al., 2021b; Guo et al., 2013; Qu et al., 2016; Zheng et al., 2021), fluid-structure interaction (Hughes et al., 2013; Siemann et al., 2017; Yang et al., 2020), wave conditions (Woodgate et al., 2019; Xiao et al., 2021a) and various numerical strategies (Bisagni and Pigazzini, 2017; Siemann and Langrand, 2017; Xiao et al., 2017) on the kinematic characteristics and fluid dynamics phenomena have attracted most of the attention. The vertical acceleration, and its peak value in particular, is even more relevant than other kinematic characteristics, as it may be responsible for possible comfort and safety problems for crew members, besides, of course, the effects in terms of structural integrity of the fuselage once it strikes the water (Neuberg and Drimer, 2017).

The ditching event is usually divided into four phases: approach, impact, landing, and flotation (Siemann et al., 2017). The impact phase is the most important one in terms of complex fluid-structure interaction. Von Karman (von Karman, 1929) first proposed an analytical estimation method based on a wedge-shaped water impact and introduced the method to settle the impact loads on seaplanes. Subsequently, a number of studies related to water impact have been carried out based on theoretical, computational or experimental approaches (Wagner, 1932; Zhao and Faltinsen, 1993; Scolan and Korobkin, 2001; Korobkin, 2004; Korobkin and Scolan, 2006; Wu and Sun, 2014; Breton et al., 2020; Zekri et al., 2021). It has been shown that, in the case of free fall, the structure experiences a rapid change of vertical acceleration and velocity, which is similar to what happens in the impact phase of a water landing (Wang et al., 2015). Several studies have focused on the relationship between the maximum acceleration and the initial parameters in free-fall water entry. Among these studies, Gong et al. (2009) simulated a series of cases with various initial entering velocities of the wedge through a Smoothed Particle Hydrodynamics (SPH) model, and relations for the maximum force on the wedge and the corresponding time in terms of the initial entering velocity of the wedge have
2019) have also been mentioned by other researchers' work (Iafrati and Grizzi, 2019; Iafrati, 2016; Wen et al.
Figure 1: Von Karman's momentum approach.
be noticed that we define the positive direction of acceleration upwards, while the vertical velocity and penetration depth are positive downwards. Moreover, according to (Panciroli et al., 2013) and (Iafrati et al., 2000), the corresponding time * can be expressed as:
phase flow solver. In the present study the unsteady incompressible Reynolds-averaged Navier-Stokes equations with 89 a standard − two-equation turbulence model are solved by the finite volume method. The Semi-Implicit Pressure 90 Linked Equations (SIMPLE) algorithm is employed to achieve an implicit coupling between pressure and velocity, 91 and the gradient is reconstructed with the Green-Gauss Node Based method. The modified High Resolution Interface 92
121 Figure 2 :
1212the grid density with two zoom-in views in the − plane. The length of the square boundary is 10 times the width 118 of the wedge. The computational domain is discretized with structured quadrilateral grids and the minimum size of 119 mesh is 0.0005 m. The right hand and bottom sides were set as velocity inlet, when the boundary condition of pressure 120 outlet was specified on the top and the left sides (seeFig. 3). Sketch of the wedge at the onset of the entry along with relevant geometric and dynamic parameters.
Figure 3: Grid topology and density of the wedge.
130 Figure 4 :
1304quantitative relations, referring to Eq. (4), (5) and (6), since the 3D effects affect the slamming force during water123 impact (Wang et al., 2021b). The geometry parameters of the cabin section are shown in Fig. 4 with length =1.61 m, 124 width =3.27 m, deadrise =30 • and mass =600 kg. The test condition represent that of the experiments in (Chen 125 et al., 2022), where the section is manually lifted to the desired height and released for freely fall. In the simulation, as 126 depicted inFig. 5, the cabin is initially released near the water surface with different initial impact velocity to study the effect of velocity on the acceleration.Fig. 5also shows the boundary conditions and the initial relative pressure field 128 on the left side boundary. A dashed red cuboid was created surrounding the cabin with refined meshes to capture the 129 water surface more accurately. Sketch of the cabin section along with relevant geometric parameters.
Figure 5: Boundary conditions and the initial flow fields of the cabin section.
135 Fig. 7 )
1357, and is regarded large enough for the present study. The whole domain was discretized with Cartesian cells and136 prismatic boundary layer grids surrounding the model and moving rigidly without deforming. Three tiers for refining 137 meshes were assigned to the entire domain as follows: tier 3 for the accurate description of the hydrodynamics about 138 the hull; tier 2 and tier 1 fan-shaped regions to enable the large range of pitch motion. The cell height in these tiers 139 is 0.005 , 0.01 and 0.015 , respectively. The total number of grid cells in the whole domain is almost 12 million.
1. 2D symmetric wedge143 First, the accuracy and efficiency of the numerical method have been validated for a symmetric wedge. In the 144 simulation, at =0.001 s, the wedge is dropped freely against calm water from a small distance at 0.002m, entering 145 the free surface with an initial resultant velocity 0 = 2.75 m/s and velocity angle = 20 • (seeFig. 2).Fig. 8shows146 the comparison between the numerical results of the present study and experimental data (Russo et al., 2018) in terms 147 of the normalized resultant displacement and acceleration̈ . It can be seen, the results are in good agreement with
Figure 6: V-shaped hull configuration features of amphibious aircraft.
Figure 7 :Figure 8 :
78Computational domain and boundary conditions of the amphibious aircraft. experiments, aside from a little discrepancy occurs at the early stage of the acceleration. Theoretically, at the beginning 149 the acceleration should be close to − , like numerical results show, whereas in the experimental data the acceleration 150 is immediately positive, probably due to measurement problems in the initial phases (Russo et al., 2018). Also, a good 151 comparison with another CFD numerical result (Yang and Xu, 2018) can be observed in Fig. 8. Overall, numerical 152 results exhibit a satisfactory agreement with the experimental data. Comparison among the present study, experimental data and numerical results on the oblique water entry of a wedge: (a) normalized resultant displacement; (b) normalized resultant acceleration.3.1.1. Effect of vertical velocity154 Next, in order to better understand the effect of the variation of the vertical velocity, several simulations have been 155 performed for constant 0 and varying from 10 • to 50 • , which corresponds to a reduction of the vertical velocity 156 component. The time histories of dimensionless acceleration in -direction , defined as = (
161 '
161smooth entry' (Vincent et al., 2018). The data shown in Fig. 9b indicate that max is a linear function of 2 0 , thus 162 supporting the relationship formulated in the Eq. (4), except for the offset. Furthermore, other series of simulations 163 have been conducted by varying the value of 0 , including the case of zero horizontal velocity. Fig. 10 shows that all 164 the data are aligned on the same straight line, thus confirming the validity of the relationship in the Eq. (4). Note that 165 in the case of 0 =0.342 m/s, varies from 5 • to 50 • . As highlighted in Table. 1, a linear relation between max and 166 2 0 exists, and only minor deviations can be observed in the slope compared with the theoretical estimate, derived 167
of the velocity to the linear relation, independently of the value of v_0.
Figure 9: Variation of dimensionless acceleration with different velocity angles and fixed horizontal velocity component for oblique water entry: (a) versus time; (b) versus initial vertical velocity.
Figure 10: Effect of the horizontal velocity on the relation between a_max and v_0^2 for oblique water entry.
should be constant in theory as it depends on and only (see Eq. (5)). The difference with respect to the theoretical 180 line depends on the pile-up effect which is not taken into account in Von Karman's momentum conservation and affects 181 the evaluation of the hydrodynamic behaviour(Mei et al., 1999; Iafrati et al., 2000). Furthermore, the gray shaded area 182 shows the range at which * is close to the constant value and the lowest value of 0 is almost 2.95 m/s in this model, 183
Figure 11 :
11the wedge, with a deadrise angle =37 • , undergoes a free fall motion, gravity plays a dominant role at the very early196 stage, leading to an accelerating period and an increase in the vertical velocity. Subsequently, with the increase of 197 hydrodynamic force, the downward acceleration diminishes and gradually turns upwards. Thus, it can be concluded 198 fromFig. 9a) that, for a given mass of the impacting body, the smaller is the initial vertical velocity, the longer is the199 accelerating time. Moreover, four distinctive points exceeding 1.0 are noticeable in Fig. 11d, meaning that the vertical 200 velocity of the body is larger than initial vertical velocity. Overall, it indicates that the accelerating phase not only 201 lasts longer, but the effect of the accelerating phase become more dominant than the decelerating phase, as the initial 202 vertical velocity decreases.203It is worth noting that the momentum theorem (Eq. (1)) was obtained without gravity (Mei et al., 1999), whereas the 204 gravitational field has been added into the numerical simulations. Nevertheless, following the investigation discussed 205 above, the formulas (4), (5) and (6) derived from Eq. (1) are still available when the initial vertical velocity becomes 206 larger. In other words, gravity can be neglected with larger velocities, and it has been highlighted in(Zekri et al., 2021).207 Whereas, with slow impact speeds, the gravity should be considered in the model (Bertram, 2012), as confirmed by 208 the discrepancies occurred at the range of low velocities (seeFig. 11b and 11d). Nonetheless, gravity seems to have209 no effects on the linear relation between max and 2 0 , except for the offset. The maximal vertical hydrodynamic force 210 during impact is then introduced herein, defined as * hd = ⋅ ( max ⋅ + ) − * hs , where * hs is the hydrostatic 211 force approximately calculated by Archimedean principle. Results shown in Fig. 13 indicate that the linear relation Effect of initial vertical velocity on variable dynamic parameters for the oblique water entry of a wedge: (a) * ; (b) * ; (c) * ; (d) . still holds which is consistent with (Zekri et al., 2021; Bertram, 2012), who found that 'even when gravity is formally 213 of the same order of magnitude as the fluid inertia, the effect of gravity on the hydrodynamic loads is still small and 214 can be approximately neglected'. 215 Based on the good collapse of the data from different initial horizontal velocity, it is believed that the initial 216 vertical velocity plays a dominant role on the kinematic characteristics during wedge water entry with the given shape 217 parameters, indicating that the effect of initial horizontal velocity on the relations can be ignored. For the analytical 218 solutions based on Eq. (4), (5) and Eq. (6) to be valid, there is a supplementary condition to the momentum theory 219
entry of a wedge it is also significant to investigate the role played by the initial horizontal velocity. By assuming 0225 constant and changing to vary 0 , similar to what done in the previous section, Fig. 14 presents the time histories of 226 and exerted on the wedge at various velocity angle , using the fixed vertical velocity component 0 = 2.943 m/s, 227 derived from the previous case of 0 = 1.071 m/s and = 20 • . As it can be seen, the value of exhibits an obvious 228 decreasing trend when reducing upon water impact, whereas no changes are observed in , significantly differing 229 from the situations of varying initial vertical velocity. Therefore, the data of max are extracted and compared with 230 three different functions of 0 as illustrated in
Figure 12: Free surface deformation around the wedge at t* with different v_0 and velocity angles.
Figure 13 :Figure 14 :
1314Hydrodynamic forces versus the square of the initial vertical velocity for the oblique water entry of a wedge with different initial velocity. although the function is established between max and 0 , instead of 2 0 , which is remarkably different from cases of 232 varying 0 . The pressure contour plots around the wedge with variable , when max is achieved, are depicted in the 233 upper side ofFig. 16andFig. 17, where the pressure coefficient is defined as = ( of 0 is referring to the initial horizontal velocity in the case of = 40 • . It can be seen that a higher-pressure 235 region occurs at the right-hand side of the wedge, whereas a zone with negative pressure is observed on the left, leading 236 to the variation of . It is therein evidenced that the pressure field varies significantly in the range ∈ [10 • , 40 • ], when 237 reaches the peak value. The comparison betweenFig. 16andFig. 17indicates that the water jets originate from238 the pressure peak, and the low-pressure zone is close to the apex which is consistent with (Riccardi and Iafrati, 2004; 239 Judge et al., 2004). Furthermore, flow separation could be expected at the apex which can also lead to cavitation or 240 ventilation due to horizontal-vertical impact velocity (Judge et al., 2004), provided that fluid dynamic solution method 241 is able to model cavitation and ventilation phenomena. 242 In order to achieve a better comprehension of the effect of 0 on the impact dynamics, the value of the horizontal 243 velocity component * , the ratio of velocity and time * at which reaches its maximal value are provided Time histories of dimensionless acceleration in both -and -directions with different inclined angles using the fixed vertical velocity component 0 = 2.943 m/s: (a) ; (b) .
Figure 15: Variation of maximum acceleration versus initial horizontal velocity.
Figure 16: Pressure distribution and water volume fraction for different velocity angles when the acceleration in the x-direction reaches its maximum.
248 Figure 17 :
24817error (RMSE) on is 0.0122, thus indicating the theory about * − 0 derived from vertical entry can be used. Moving Pressure coefficient along the normalized x-axis for different velocity angle when reaches its maximum.to Fig. 18c, the trend is quite different from the linear function displayed in Eq. (6) and Fig. 11a for the case of 2D 249 wedge with various vertical velocities. The numerical values of * are almost constant for both cases of 0 and −1 0 , 250 and the standard deviation of these data is 1.11 × 10 −4 . The curves presented in Fig. 18d demonstrate that * is 251 proportional to 0 , expressed as * = 0.00693 0 , and the results of * oscillate slightly around 0.0221 associated 252 with 2.56 × 10 −4 in . The above result is confirmed by the lower part of Fig. 16, where no substantial differences 253 for vertical displacement are observed. In general, linear functions can be found on max − 0 and * − 0 (see 254 Fig. 15 and
proposed in Eq. (4), (5) and Eq. (6), with large initial vertical velocity. This section presents the results of computational 259 simulations of the vertical free fall of a cabin section (seeFig. 4), entering the free surface with various initial vertical 260 velocity 0 . Eleven cases with a series of 0 from 0.5 m/s to 6 m/s are simulated.Fig. 19ashows the evolution of 261 acting on the cabin during the water entry. It is worth noting that the results have been filtered with a cutoff frequency 262 62.5 Hz. At the beginning of the impacting, the overall acceleration is negative indicating that gravity dominates and 263 leads to an increase in the vertical velocity, while the hydrodynamic force only plays an auxiliary role at the onset of 264 entry. As the body penetrates into the water, turns positive and reaches its peak value subsequently, which means 265 the hydrodynamic force is dominant. Obviously, is linked with the initial impact velocity. The smaller 0 is, the 266 smoother the trend of will be, until a point where the peak disappear. Such a behaviour can be also observed in267 Fig. 19b, where the pressure coefficient = ( − 0 )∕(0.5 2 0 )) is computed along the wetted part of the body at 268 0.5 with 0 chosen as 6 m/s. As the initial impact velocity increases, the overall values of become higher for 269 selected five cases with different initial vertical velocity, as shown in Fig. 19b, where three extreme values can be 270 observed. One extreme value is at =0, the apex of the body, so-called stagnation point, where the flow velocity is 271 almost equal to zero, and the other two extreme values, marked with '+' in Fig. 19b, are inside the grey region. It272
distinctive cross-sections, viz., 0.1 , 0.2 and 0.5 , for different values of 0 , where the difference is caused by 277 the three dimensional effects and introduces a difference between the numerical and the theoretical solution. The data 278
Figure 18 :Figure 19 :Figure 20 :
181920the corresponding time * is a linear function of −1 0 in Fig. 21a, although the numerical estimate of the parameter 281 is 64.41% different from the theoretical prediction, as also observed in the case of oblique entry of a symmetric 282 wedge. Looking into the penetration depth * , there is a slight difference between the numerical results and the Effect of initial horizontal velocity on variable dynamic parameters: (a) * ; (b) ; (c) * ; (d) * and * . In the case of a 3D cabin section: (a) Time histories of dimensionless acceleration with different initial vertical velocity 0 ; (b) pressure coefficient at 0.5 with different 0 . theoretical prediction, however, a new asymptotic line, lying below the theoretical one, appears and all data approach 284 it asymptotically when increasing 0 . It means that the maximum acceleration of the 3D cabin section occurs at a 285 smaller depth due to the three-dimensional effects on slamming load (Wang et al., 2021b) and pile-up effects. Another 286 In the case of a 3D cabin section: (a) Variation of max versus 0 and 2 0 ; (b) pressure coefficient at three distinctive cross-setions for different 0 .
300 3Figure 21 :
30021.3. V-shaped hull on amphibious aircraft301 Herein, the quantitative relations discussed above, (Eq. (4), (5) and(6)), are employed to examine the effect of 302 initial vertical velocity 0 on the load characteristics for the water landing of the V-shaped hull on amphibious aircraft Effect of the initial vertical velocity on the different parameters for the case of a 3D cabin section: (a) * ; (b) * ; (c) * ; (d) .
Figure 22: The instantaneous Froude number as a linear function of the initial velocity: (a) 2D wedge; (b) 3D cabin section.
308Figure 23 :Figure 24 :
2324Results, shown inFig. 23, indicate that decrease when reducing 0 in both conditions. It is worth noting that 309 as 0 decreases below 1.5 m/s, the overall trend and the amplitudes of in each condition are quite similar, aside 310 from the time lags. Differently from the conventional impact problem, the amphibian has aerodynamic devices, such 311 as wings and tail wings, which introduce additional force components affecting the aircraft dynamics.Fig. 24shows 312 the parameter aero , which is the ratio between aerodynamic force to fluid force in the vertical direction derived for the 313 different cases, when reaches the highest amplitude during the landing motion. It is shown that the parameter aero 314 is always below 40% and diminishes when increasing 0 , thus indicating that the hydrodynamic force acting on the 315 fuselage becomes larger as 0 grows, as expected.316 Fig. 25 illustrates the pressure distribution at the bottom of the aircraft when reaches its peak. The main fuselage 317 portion striking with the free surface is the region over the forebody near the step. Note that the pressure coefficient 318 displayed in the graph is defined as = ( − 0 )∕(0.5 2 0 ), where 0 is neglected being 0 = 37 m/s much greater 319 than 0 . The pressure peaks occur at the chine flare, after which the hydrodynamic decreases with the formation of 320 a triangle-shaped region of positive pressure near the step. Correspondingly, negative pressure areas occur behind the 321 step and the stern of the fuselage. The occurrence of negative pressures at the back of the fuselage is a consequence of 322 the longitudinal curvature and it can be easily explained by exploiting a 2D+t concept in which the local cross section 323 undergoes a water exit phase (Del Buono et al., 2021). The data also indicate that the high-pressure regions become 324 smaller in size and reduce in magnitude when decreasing 0 , which is coherent with the overall downtrend on the 325 evolutions of revealed in Fig. 23. Comparison of fixed and free pitching condition on dimensionless acceleration in -direction with different initial vertical velocity for the amphibious aircraft: (a) fixed pitch; (b) free pitch. In order to achieve a better comprehension of the effect of the impact velocity on accelerations, the maximal values 327 of are drawn as a function of the square of vertical velocity 2 0 in Fig. 26, although it is difficult to derive the slope 328 from theoretical estimate Eq. (4). In the presence of a high horizontal speed, the pressure doesn't depend much on 329 the vertical velocity but rather on the horizontal velocity, pitch angle and pitch dynamics. Furthermore, there are the Ratio of the aerodynamic to the fluid force as a function of 0 .
Figure 25 :
25Pressure distribution at the bottom of the aircraft for different 0 at * .
is, implying that the load distribution in time is smoother for lower 0 . Moreover, there is no proportionality between 338 * − −1 0 in both cases of fixed and free pitch. Moving to penetration depth *Fig. 28b, a quite different evolution 339 emerges between the fixed and free pitch conditions. In the fixed pitch condition small variations about the mean value 340 occur, whereas, in the free pitch condition the depth * grows as 0 increases gradually and approaching an asymptotic 341 value. It is worth noticing that there is an inverse trend compared to the cases of wedge and cabin section. The depth, 342 * , exhibits much smaller variations when an attitude control mode (fixed pitch) is exerted on the aircraft.
Figure 26: Effect of the pitch motion, fixed and free, on the relation between a_max and v_0^2.
Figure 27: Water volume fraction at the bottom of the aircraft for different v_0 at t*.
Figure 28 :
28meaning that gravity plays a significant role when 0 is smaller than a certain value as mentioned on Sec. 3.1.1. It 351 can be seen that the relation of * − 0 is not linear, quite different from the theoretical trend. Whereas, inFig. 28d, it 352 is worth noting that all data approach the theoretical estimate, 5/6, which means is still valid to some extent. Thus,353 the relations derived from Eq. (4) and Eq. (5) are partly useful for the tendency prediction on max and through a 354 simple analysis. Effect of initial vertical velocity on variable dynamic parameters for the case of the amphibious aircraft: (a) * ; (b) * ; (c) * ; (d) .
3) For the relationship between the penetration depth and the initial vertical velocity, the simulated results approach an asymptotic line (different from the theoretical estimate) with increasing velocity in the 2D wedge case and the cabin section case. The difference between the numerical asymptotic line and the theoretical estimate is mainly caused by the water pile-up effect. For the 3D cabin section, the three-dimensional water flow in the spanwise direction could also be responsible for the difference. Considering the complicated geometry of the hull, it is hard to determine the theoretical estimate in that case. The numerical results with fixed pitch present a constant trend, whereas a constant value is not reached in the case of free pitch.

4) Looking into the other two linear relations, t* − v_0^{-1} and v* − v_0, shown in Table 4, they can be established for the wedge and the cabin section in agreement with the theoretical results, while they do not hold for the hull. Besides, in the case of the 2D wedge and the 3D cabin section, the instantaneous Froude number Fr* is used to describe the combined relation between the velocity and the penetration depth when the maximum value of the acceleration is reached. Due to the relationships of Fr* − v_0 and a_max − v_0^2, the maximum acceleration a_max is one third of the square of the instantaneous Froude number Fr*. Moreover, a strong coupled relation among a_max, v* and h* is found, a_max = (v*)^2 / (3 g h*).

A. Appendix

In order to obtain the instantaneous acceleration shown in Eq. (3), the time derivative of the instantaneous velocity can be analytically computed as:
Substituting Eq. (A1) into the second term of this last equation, Eq. (A3) can be simplified; the expression of the instantaneous velocity is also given by Eq. (2). Then, using Eqs. (2) and (A5), we obtain the corresponding penetration depth h*. Substituting Eq. (A7) into Eq. (A5), the corresponding velocity v* can be expressed accordingly, and by combining Eqs. (A1), (A7) and (A8), the maximal acceleration in the positive direction can be obtained.

Yujin Lu: Conceptualization, Methodology, Software, Investigation, Data Curation, Visualization, Writing - Original Draft.

Neuberg, O., Drimer, N., 2017. Fatigue limit state design of fast boats. Marine Structures 55, 17-36. doi:10.1016/j.marstruc.2017.05.002.
is here validated for the case of oblique entry of a symmetric wedge first, mainly focusing on the vertical load characteristic. The oblique water entry has been chosen as the motion of the body resembles that of amphibious aircraft during landing and allows the effect of varying both the vertical and horizontal velocity components to be studied. In (Russo et al., 2018), the oblique impact
Table 1
Comparison between theoretical estimate and numerical results for the inclined water entry of a wedge

Case | a_max | err, % | | t*, s | err, % | | v*/v_0 | err, % |
Theoretical value | 1.2807 | - | - | 0.0197 | - | - | 0.8333 | - | -
Horizontal velocity 0.342 m/s | 1.3588 | 6.09 | -0.0509 | 0.0285 | 44.67 | -0.0046 | 0.8010 | -3.87 | 0.1121
Horizontal velocity 1.071 m/s | 1.4069 | 9.85 | -0.1211 | 0.0228 | 15.73 | -0.0014 | 0.8367 | 0.41 | 0.0142
Horizontal velocity 1.710 m/s | 1.3948 | 8.91 | 0.0364 | 0.0185 | -6.09 | - | 0.8308 | -0.30 | 0.0306
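As a quick consistency check of the err, % columns in Tables 1 and 2, the relative deviation of each numerical value from its theoretical counterpart can be recomputed directly. The snippet below is an illustrative sketch with our own variable names, not code from the paper.

```python
# Recompute the "err, %" entries of Table 1 as (numerical - theoretical) / theoretical * 100.
theoretical = {"a_max": 1.2807, "t_star": 0.0197, "ratio": 0.8333}
numerical_row = {"a_max": 1.3588, "t_star": 0.0285, "ratio": 0.8010}  # row for horizontal velocity 0.342 m/s

for key, theo in theoretical.items():
    err = (numerical_row[key] - theo) / theo * 100.0
    print(f"{key}: err = {err:+.2f}%")
# Prints roughly +6.10%, +44.67%, -3.88%, in line with the tabulated 6.09, 44.67 and -3.87.
```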
Table 2
Comparison between theoretical estimate and numerical results for the cabin section

Case | a_max | err, % | | t*, s | err, % | | v*/v_0 | err, % |
Theoretical value | 0.1876 | - | - | 0.1343 | - | - | 0.8333 | - | -
Present study | 0.1734 | -7.57 | -0.1983 | 0.2208 | 64.41 | -0.0215 | 0.8135 | -2.38 | 0.2793
Table 3
Function of a_max and Fr* derived from theoretical estimate and numerical results for a 2D wedge and a 3D cabin section

Case | Expression | Error of the slope, %
Theoretical estimation | a_max = (1/3) · (Fr*)^2 | -
2D wedge | a_max = 0.32142 · (Fr*)^2 + 0.009169 | -3.57
3D cabin section | a_max = 0.24417 · (Fr*)^2 − 0.05474 | -26.75
landing event in the previous study (Lu et al., 2021). Both the fixed and free pitch conditions have been simulated in the present study. Note that the wing components are taken into consideration in the present study.
Table 4
Summary of theoretical quantitative relations compared with simulated results among three cases: simulated trend (black solid line); theoretical estimate (red dashed line); simulated asymptotic value (magenta dashed line)

terms | theoretical relations | fixed pitch | free pitch
Riccardi, G., Iafrati, A., 2004. Water impact of an asymmetric floating wedge. Journal of Engineering Mathematics 49, 19-39.
Qiu, L., Song, W., 2013. Efficient decoupled hydrodynamic and aerodynamic analysis of amphibious aircraft water takeoff process. Journal of Aircraft 50, 1369-1379. doi:10.2514/1.C031846.
Qu, Q., Liu, C., Liu, P., Guo, B., Agarwal, R.K., 2016. Numerical simulation of water-landing performance of a regional aircraft. Journal of Aircraft 53, 1680-1689. doi:10.2514/1.C033686.
| [] |
[
"Further solutions of critical ABF RSOS models",
"Further solutions of critical ABF RSOS models"
] | [
"Yu-Kui Zhou \nMathematics Department\nDepartment of Mathematics\nUniversity of Melbourne\n3052ParkvilleVictoriaAustralia\n\nThe Australian National University Canberra\nACT 0200Australia\n"
] | [
"Mathematics Department\nDepartment of Mathematics\nUniversity of Melbourne\n3052ParkvilleVictoriaAustralia",
"The Australian National University Canberra\nACT 0200Australia"
] | [] | The restricted SOS model of Andrews, Baxter and Forrester has been studied. The finite size corrections to the eigenvalue spectra of the transfer matrix of the model with a more general crossing parameter have been calculated. Therefore the conformal weights and the central charges of the non-unitary or unitary minimal conformal field have been extracted from the finite size corrections.hep-th/9504107 revised version 1 | 10.1088/0305-4470/28/15/014 | [
"https://export.arxiv.org/pdf/hep-th/9504107v2.pdf"
] | 17,240,127 | hep-th/9504107 | b08fca44f0fc49c5dc8bdc6cd3b0d0e1a2568c33 |
Further solutions of critical ABF RSOS models
Yu-Kui Zhou
Mathematics Department
Department of Mathematics
University of Melbourne
3052ParkvilleVictoriaAustralia
The Australian National University Canberra
ACT 0200Australia
Further solutions of critical ABF RSOS models
arXiv:hep-th/9504107v2 15 Jun 1995 27/01/95 March 28, 2022
The restricted SOS model of Andrews, Baxter and Forrester has been studied. The finite size corrections to the eigenvalue spectra of the transfer matrix of the model with a more general crossing parameter have been calculated. Therefore the conformal weights and the central charges of the non-unitary or unitary minimal conformal field have been extracted from the finite size corrections.hep-th/9504107 revised version 1
Introduction
The ABF restricted solid-on-solid (RSOS) model was introduced by Andrews, Baxter and Forrester (ABF) in 1984 [1]. It is well known that the model provides realizations of unitary minimal conformal field theories [2,3,4]. This has been further confirmed by studying the finite-size corrections to the ground state energy [5]-[11] (see also [12]-[22] for related works). Among these works, much effort has been focused on the ABF model corresponding to the unitary minimal conformal field theories. By contrast, the finite-size corrections to the transfer matrix of the ABF model corresponding to non-unitary minimal conformal field theories have received no attention.
The local height probabilities of the ABF model with a crossing parameter λ = kπ/h, where two relatively prime positive integers satisfy k < h, have been calculated in [23]. In this paper, with the same motivation, we repeat the consideration of the finite-size correction calculation of the ABF model with λ = kπ/h. In general the model will no longer be physical as there will be some negative face weights. Nevertheless, the non-unitary minimal conformal field theories [4] can be realized as the critical continuum of the ABF RSOS model with the crossing parameter λ = kπ/h. The model is therefore of independent interest for this feature.
In [20] an analytic method has been presented to find the finite-size corrections involving the central charges for the six-vertex model with a twisted boundary condition. The method has been successfully applied to the other models (see [27] for example). In these works only the central charges have been obtained. In fact the central charge and conformal weights together could appear in the finite-size corrections to the eigenvalue spectra of the transfer matrix. In this paper, following the calculation presented in [20], we find the finite-size corrections to the eigenvalue spectra of transfer matrices of the critical ABF RSOS model. From the corrections both the central charges and the conformal weights of non-unitary minimal conformal field theories are extracted. This generalizes the method presented in [20] to find the conformal weights of the ABF SOS model.
We first review the ABF RSOS model and the Bethe ansatz solutions of transfer matrices briefly in section 1.1. In section 2 we find an integral nonlinear equation and express the finite-size corrections in terms of the solution of the nonlinear equation. Then the effective central charges including the conformal weights are extracted from the finite-size corrections. A brief discussion is presented in the final section.
Models and Bethe ansatz solutions
The ABF RSOS model can be given by Baxter's SOS model, which was introduced in order to solve the eight-vertex model with the R-matrix
R(u) = \begin{pmatrix} a(u) & 0 & 0 & d(u) \\ 0 & b(u) & c(u) & 0 \\ 0 & c(u) & b(u) & 0 \\ d(u) & 0 & 0 & a(u) \end{pmatrix}    (1.1)

where

a(u) = Θ(λ)Θ(u)H(u + λ) ,   b(u) = Θ(λ)H(u)Θ(u + λ) ,   c(u) = H(λ)Θ(u)Θ(u + λ) ,   d(u) = H(λ)H(u)H(u + λ) .    (1.2)
The R-matrix satisfies the Yang-Baxter equation [24,25]
R 12 (u)R 13 (u + v)R 23 (v) = R 23 (v)R 13 (u + v)R 12 (u) . (1.3)
Baxter has shown in [25] that the eight-vertex model can be transferred into the SOS model, which is defined by the following face weights
W(ℓ ± 1, ℓ ± 2, ℓ ± 1, ℓ | u) = h(u + λ) / h(λ) ,
W(ℓ ∓ 1, ℓ, ℓ ± 1, ℓ | u) = h(ξ + ℓλ ± λ) h(u) / [ h(ξ + ℓλ) h(λ) ] ,    (1.4)
W(ℓ ± 1, ℓ, ℓ ± 1, ℓ | u) = h(ξ + ℓλ ∓ u) / h(ξ + ℓλ)
where the height ℓ ∈ Z Z and ξ is an independent parameter. The crossing parameter is λ and the spectral parameter is u. The function h(u) is given by
h(u) = Θ(0)H(u)Θ(u) .
(1.5)
The face weights satisfy the following Yang-Baxter equation
g W (a, b, g, f |u)W (f, g, d, e|v)W (g, b, c, d|v−u) = g W (f, a, g, e|v−u)W (a, b, c, g|v)W (g, c, d, e|u) (1.6)
for any integers a, b, c, d, e, f. Therefore the SOS model is an integrable system. Suppose that l and m are allowed spin configurations of two consecutive rows of an N (even) column lattice with periodic boundary conditions l N +1 = l 1 , m N +1 = m 1 . The elements of the rowto-row transfer matrix T of the SOS model are defined by
l|T(u)|m = N j=1 W (l j , l j+1 , m j+1 , m j |u) . (1.7)
We recall the eigenvalues of the transfer matrix T given in [25] (also see [26] for algebraic Bethe ansatz),
T(u) = e^{isλ} h^N(u + λ/2) q(u − λ)/q(u) + e^{−isλ} h^N(u − λ/2) q(u + λ)/q(u)    (1.8)

where q(u) is defined by

q(u) = ∏_{j=1}^{N/2} h(u − u_j) .    (1.9)
These parameters u 1 , u 2 , · · · , u N/2 are determined by the Bethe ansatz equations,
p(u j ) = −1 , j = 1, 2, · · · , N/2 (1.10)
where the function is given by
p(u) := e^{−2isλ} h^N(u − λ/2) q(u + λ) / [ h^N(u + λ/2) q(u − λ) ] .    (1.11)
The ABF RSOS model is specialized by setting
Finite-size corrections
We consider the corresponding critical ABF RSOS model, which can be obtained by taking the zero elliptic nome p = 0. The elliptic function h(u) reduces to the trigonometric function
h(u) = sin(u) (2.1)
if p → 0. The eigenvalues (1.8) and the Bethe ansatz equations (1.10) are still correct for the critical RSOS model if the function h(u) is replaced with (2.1). Let us introduce the new spectral variable v = iu. It is very helpful to notice that the eigenvalue spectra (1.8) and the Bethe ansatz equations (1.10) are the same as those of the transfer matrix of the six vertex model with a twisted boundary condition [20]. They therefore can be treated similarly. The functions have to be restricted in some analyticity domain since all functions are iπ-periodic. It has been shown in [20] that the following functions are analytic and non-zero (anz)
h(v) anz in 0 < ℑm (v) < π q(v) anz in −π < ℑm (v) < 0 T (v) anz in −λ/2 ≤ ℑm (v) ≤ λ/2 , (2.2)
and, respectively, the functions q and p satisfy
q(v) = q(v) and p(v) = 1/p(v) . (2.3)
Because of iπ-periodic functions of face weights we can take k < h/2. Note that (2.2) has restricted the model to stay on the critical line of regime III/IV .
Nonlinear integral equation
Following [20], let us introduce new functions
α(x) := 1/p(x − iλ/2) = [tanh(πx/2λ)]^N a(x) ,    A(x) := 1 + α(x)    (2.4)
The variable x may be regarded as real. 4 The method presented in [20] is to derive a set of relations about functions a and q and these relations lead to a nonlinear integral equation, in turn, the nonlinear integral equation ensures that the finite-size corrections to the eigenvalue spectra of the transfer matrix can be solved through dilogarithmic functions. The involved functions are anz in the strips (2.2) and are exponentials in asymptotic behaviour. The second logarithmic derivatives of the functions can be Fourier transformed,
f (k) = 1 2π ∞ −∞ [ln f (x)] ′′ e −ikx dx [ln f (x)] ′′ = ∞ −∞ f (k) e ikx dk (2.5)
where the integration path in x-plane has to lie in the analyticity strip and the real part of the variable of integration goes from −∞ to ∞. By Cauchy's theorem all other details of the path are irrelevant for f (k). We now derive a set of relations about functions a and q. Applying the Fourier transform to the definition (2.4) of a(x)
a(x) = e 2isλ coth πx 2λ N h N (x)q(x − 3iλ/2) h N (x − iλ + iπ)q(x + iλ/2 − iπ) (2.6)
where all arguments of the functions q and h have been reduced to the analyticity strips (2.2) because of the πi-periodicity, then yields
a(k) = − Nk 1 + e −λk + Nk(1 − e (λ−π)k ) 1 − e −πk + e 3 2 λk − e (π− 1 2 λ)k q(k) . (2.7)
To solve a and q we need another relation, which is introduced by an auxiliary function
h a (v) := 1 + p(v) q(v) . (2.8) It is anz in the strip −λ/2 < ℑm (v) ≤ λ/2.
To apply Cauchy's theorem to the Fourier transform of the second logarithmic derivative of h a we rewrite h a (v) in the following two different forms such that the arguments of q stands in the analyticity strip (2.2)
h a (x − iλ/2) = coth πx 2λ N A(x) q(x − iλ/2) a(x) h a (x + iλ/2) = A(x) q(x + iλ/2 − iπ)
.
(2.9)
Applying the Fourier transform to (2.9), it follows that e λk/2 h a (k) = − Nk 1 + e −λk − e λk/2 q(k) + A(k) − a(k) e −λk/2 h a (k) = A(k) − e (π−λ/2)k q(k) .
Then they are equated yielding q(k) e (π+λ/2)k − e λk/2 = Nk 1 + e −λk + a(k) + e λk A(k) − A(k) .
(2.10)
The equations (2.7) and (2.10) together determine the functions a(k) and q(k),
a(k) = sinh( 1 2 πk − λk) 2 cosh( 1 2 λk) sinh 1 2 (πk − λk) A(k) − e (λ−ǫ)k A(k) q(k) = Nke −πk/2 4 sinh( 1 2 πk) cosh( 1 2 λk) − e −(π+λ)k/2 4 cosh( 1 2 λk) sinh 1 2 (πk − λk) A(k) − e λk A(k) (2.11)
where an infinitesimal positive ǫ has been introduced for the imaginary part of the argument of x. Transforming back to the variable x
[ln a(x)] ′′ = ∞ −∞ K(y)[ln A] ′′ (x − y) − K(y + iǫ − iλ)[ln A] ′′ (x − y) dy (2.12)
where the kernel function
K(x) := 1 2π ∞ −∞ sinh( 1 2 π − λ)k 2 cosh( 1 2 λk) sinh 1 2 (π − λ)k e ikx dk (2.13) satisfies K(x) = K(−x) , K(x) = K(−x) .
(2.14)
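Reading the flattened formula (2.13) as K(x) = (1/2π) ∫ sinh((π/2 − λ)k) / [2 cosh(λk/2) sinh((π − λ)k/2)] e^{ikx} dk, the kernel can be evaluated numerically; since the integrand is even in k and decays like exp(−λk), straightforward quadrature on a finite grid suffices. The snippet below is an illustrative sketch under that reading, with our own variable names and λ = π/5 chosen purely as an example.

```python
import numpy as np

def kernel_K(x, lam, k_max=200.0, n=200001):
    """Evaluate the kernel K(x) of Eq. (2.13) by trapezoidal quadrature.

    The integrand is even in k, so the integral is folded onto k >= 0 with a
    cosine; the tail beyond k_max is negligible because of the exp(-lam*k) decay.
    """
    k = np.linspace(1e-8, k_max, n)   # start slightly above 0 to avoid the removable 0/0
    f = np.sinh((np.pi / 2 - lam) * k) / (
        2.0 * np.cosh(lam * k / 2.0) * np.sinh((np.pi - lam) * k / 2.0)
    )
    g = f * np.cos(k * x)
    return np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(k)) / np.pi

lam = np.pi / 5.0   # example crossing parameter
print([round(kernel_K(x, lam), 6) for x in (0.0, 1.0, 2.0)])
```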
The equation (2.12) is derived based on the essential anz property of the Bethe ansatz solution (1.8). Low-lying excitations have the same bulk behavior as the ground state. The only difference has been shown in [9] to lie in the fact that the eigenvalue functions now possess a finite number of zeros in the analyticity strip, which were free of zeros in the ground state. However, it is always possible to take an anz area in the analyticity strip where Cauchy's theorem can be applied [11]. Therefore the equation (2.12) still works for the excited states if we change the integration path in the anz area. Integrating (2.12) twice we obtain a nonlinear integral equation
ln a(x) = ∞ −∞ K(y) ln A(x − y) − K(y + iǫ − iλ) ln A(x − y) dy +C + Dx (2.15)
where the integral constant D = 0 because all terms remain finite for x → ∞ and another integral constant C heavily dependent on the branch choice of ln a(x),
C = ln a(∞) − ∞ −∞ K(y)dy ln A(∞) − ln A(∞) = ln(ω 2 e 2isλ ) − 1 2 π − λ π − λ ln(1 + ω 2 e 2isλ ) − ln(1 + ω −2 e −2isλ ) = iπφ π − λ (2.16)
where the phase factor φ has been introduced by
φ = sλ − i ln ω , ω 2 = 1 . (2.17)
Here we have taken a more general choice of branches for ln a(x), or ln a(∞) = 2 ln(ωe^{isλ}). The case ω = 1 has been studied in [20], which corresponds to the ground state. For the excited states we have chosen the other branches with ω ≠ 1, or take
ω = e i(t−s)π (2.18)
with the integers s, t. From the definition (2.4) it follows that a goes to a e −4isλ under the change s → h − s. So the same symmetry should be imposed on the equation (2.15), or the phase factor φ must go to −φ under this change. It follows that the phase factor φ will go to −φ if changing s → h − s and t → h − k − t. Similar to the exponent s, suppose that t is positive. According to s = 1, 2, · · · , h − 1 we therefore take t = 1, 2, · · · , h − k − 1. Recalling the definition (2.4) we arrived at the nonlinear integral equation for α
ln α(x) = N ln tanh πx 2λ + iπφ π − λ + ∞ −∞ K(y) ln A(x − y) − K(y + iǫ − iλ) ln A(x − y) dy . (2.19)
This equation is exact for all finite system size and for both the ground state and the excited states.
Scaling limits
To obtain the finite-size corrections to the eigenvalue spectra of the transfer matrix we observe the following scaling behaviour
lim N →∞ tanh π 2λ ± λ π (x + ln N) N = exp −2e −x (2.20)
in thermodynamic limit N → ∞. The function α scale similarly,
a ± (x) := lim N →∞ α ± λ π (x + ln N) , la ± (x) := ln a ± (x) (2.21) A ± (x) := lim N →∞ A ± λ π (x + ln N) = 1 + a ± (x) , lA ± (x) := ln A ± (x) . (2.22)
In the scaling limit regimes the nonlinear integral equation (2.19) becomes
la ± (x) = −2e −x + ∞ −∞ K 1 (x − y)lA ± (y)du − ∞ −∞ K 2 (x − y)lA ± (y)du + iπφ π − λ la ± (x) = −2e −x + ∞ −∞ K 1 (x − y)lA ± (y)du − ∞ −∞ K 2 (x − y)lA ± (y)du − iπφ π − λ (2.23)
where K 1,2 (x) are defined by
K 1 (x) := λ π K λ π x K 2 (x) := λ π K λ π x ± i(ǫ − λ) . (2.24)
Let us now turn to the eigenvalues T given by (1.8). Its finite-size corrections can be derived from
T (x − iλ/2) = h N (x − iλ) q(x + iλ/2 − iπ) q(x − iλ/2) A(x)e −siλ . (2.25)
Applying the Fourier transform to the ratio of the q-functions and taking (2.11) into account we have ln
q(x + iλ/2 − iπ) q(x − iλ/2) = −N ∞ −∞ sinh 1 2 (π − λ)k 2k sinh 1 2 (πk) cosh 1 2 (λk) e ikx dy + i 2λ ∞ −∞ ln A(x − y) sinh π λ (y − iǫ) dy + i 2λ ∞ −∞ ln A(x − y) sinh π λ (y + iǫ) dy + f c (2.26)
where f c is an integration constant. Therefore the finite-size corrections to the eigenvalue can be expressed as
ln T (x − iλ/2) = f c + N ln h(x − iλ) − N ∞ −∞ sinh 1 2 (π − λ)k sinh(xk) 2k sinh 1 2 (πk) cosh 1 2 (λk) dk + i λ ∞ −∞ ℜe ln A(y) sin π λ (x − y + iǫ) dy + o 1 N .
(2.27)
The scaling limit of the corrections can be done by splitting the integral into two parts, then inserting the variable of integration y by ± λ π (y + ln N) and using the scaling functions (2.22), we obtain
ln T (x − iλ/2) = −Nf (x − iλ/2) − 2i πN e π λ x ∞ −∞ ℜe lA + (y)e −y dy + 2i πN e − π λ x ∞ −∞ ℜe lA − (y)e −y dy + o 1 N = −Nf (x − iλ/2) − πi 6N sinh πx λ 24 π 2 ∞ −∞ ℜe lA ± (y)e −y dy + o 1 N (2.28)
where the bulk behavior is entirely expressed by the first term and second term is the finitesize corrections. The integration constant f c is chosen so that f (x − iλ/2) is exactly the bulk energy, which can be derived from the inversion relation of the face weights [7,23].
Here we are only interested in the finite-size correction terms which include the conformal spectra.
Conformal spectra
The conformal spectra can be extracted from the finite-size corrections of the transfer matrix. The integral in the finite-size correction term in (2.28) can be calculated by considering the expression
∞ −∞ [la ± (x)] ′ lA ± (x) − la ± (x)[lA ± (x)] ′ dx + ∞ −∞ [la ± (x)] ′ lA ± (x) − la ± (x)[lA ± (x)] ′ dx = 2 ∞ −∞ e −x lA ± (x) + [lA ± (x)] ′ dx +2 ∞ −∞ e −x lA ± (x) + [lA ± (x)] ′ dx − πiφ π − λ ∞ −∞ [lA ± (x)] ′ − [lA ± (x)] ′ dx . (2.29)
The right hand side is derived by using the nonlinear integral equation (2.23). The left hand side can be calculated after changing the variable x to a and a and using the dilogarithmic function
L(x) = ∫_0^x [ ln(1 + y)/y − ln(y)/(1 + y) ] dy .    (2.30)
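As a purely numerical illustration of this function (a sketch for checking purposes, not part of the original derivation), the integral in (2.30) can be evaluated with standard quadrature; one finds L(1) = π²/6 and, for any positive z, L(z) + L(1/z) = π²/3, the identity invoked below.

```python
import numpy as np
from scipy.integrate import quad

def L(x):
    """Function of Eq. (2.30): L(x) = int_0^x [ln(1+y)/y - ln(y)/(1+y)] dy."""
    integrand = lambda y: np.log1p(y) / y - np.log(y) / (1.0 + y)
    value, _ = quad(integrand, 0.0, x)
    return value

print(L(1.0), np.pi ** 2 / 6)            # both close to 1.6449...
z = 0.37                                  # any positive argument
print(L(z) + L(1.0 / z), np.pi ** 2 / 3)  # checks L(z) + L(1/z) = pi^2 / 3
```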
Then we are able to derive
(24/π²) ∫_{−∞}^{∞} e^{−x} ℜe lA_±(x) dx = (3/π²) [ L(ω² e^{2isλ}) + L(ω^{−2} e^{−2isλ}) − 2πφ²/(π − λ) ]    (2.31)
where the asymptotics of a ± (∞) = ωe isλ 2 , a ± (∞) = ωe isλ −2 and a ± (−∞) = a ± (−∞) = 0 have been read off from (2.23). Finally using the well known identity
L(z) + L(1/z) = π²/3    (2.32)
the finite-size corrections in (2.28) are given by the explicit expression
ln T(x − iλ/2) = −N f(x − iλ/2) − (πi/6N) (c − 24∆) sinh(πx/λ) + o(1/N)    (2.33)

or, changing the variable x to v = iu = x − iλ/2,

ln T(v) = −N f(v) + (πi/6N) (c − 24∆) cosh(πv/λ) + o(1/N) ,    (2.34)

where the central charge is

c = 1 − 6λ² / (π(π − λ))    (2.35)

and the conformal weights are

∆ = (φ² − λ²) / (4π(π − λ)) .    (2.36)
For the ground state, s = t = 1 yields ∆ = 0. The choice of 1 < s ≤ h − 1 and 1 < t ≤ h − k − 1 gives the excited states. Remarkably, inserting λ given by (1.12) into the conformal spectra we have the central charges and the conformal weights of the primary fields for Virasoro minimal models
c = 1 − 6k²/(h(h − k))   and   ∆ = ( [ht − (h − k)s]² − k² ) / ( 4h(h − k) )    (2.37)
k < h; s = 1, 2, · · · , h − 1; t = 1, 2, · · · , h − k − 1 for k ≥ 1. The unitary minimal models are given by taking k = 1.
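For concreteness, the formulas in (2.37) can be tabulated numerically. The short script below is an illustrative sketch (not part of the original paper); it lists c and the set of conformal weights for the unitary case k = 1, h = 4, reproducing the Ising values c = 1/2 and ∆ ∈ {0, 1/16, 1/2}, and for a non-unitary example k = 2, h = 5, where c = −3/5.

```python
from fractions import Fraction

def central_charge(k, h):
    """Central charge of Eq. (2.37): c = 1 - 6 k^2 / (h (h - k))."""
    return 1 - Fraction(6 * k * k, h * (h - k))

def conformal_weight(k, h, s, t):
    """Conformal weight of Eq. (2.37)."""
    return Fraction((h * t - (h - k) * s) ** 2 - k * k, 4 * h * (h - k))

for k, h in [(1, 4), (2, 5)]:
    weights = {conformal_weight(k, h, s, t)
               for s in range(1, h) for t in range(1, h - k)}
    print(f"k={k}, h={h}: c = {central_charge(k, h)}, weights = {sorted(weights)}")
```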
Discussion
In this paper we have obtained the conformal spectra of the non-unitary minimal conformal field theories from the finite-size corrections to the eigenvalue spectra of the transfer matrix of the critical ABF model on the regime III/IV critical line with the crossing parameter (1.12).
The method given in [20] is only for calculating the central charges for the six-vertex model with a twisted boundary condition. In this paper it has been generalized to calculate both the central charges and the conformal weights. Other methods, for example, the thermodynamic Bethe ansatz (TBA) analysis (see [8], [28]- [35]), exist for calculating the conformal spectra. The TBA relies heavily on the string hypothesis, while our method crucially depends on the anz property instead. However, it is an interesting problem to generalize the TBA method for calculating the conformal weights of the ABF SOS model. There is another method for calculating the finite-size corrections of transfer matrices. This has been shown by solving the fusion hierarchies of the ABF model. Unfortunately it is only for k = 1 [11] (also see [21]). It has not yet known how to find finite-size corrections of the transfer matrix of the ABF model for k > 1.
Note Added
After this work was submitted I was informed by Murray Batchelor about reference [36], where the authors generalize the method of [20] to calculate the conformal spectrum of the six-vertex model with twisted boundary conditions. However, the finite-size corrections to the transfer matrix of the ABF SOS model with the more general crossing parameter (1.12), which is an important class of integrable lattice models corresponding to realizations of non-unitarity minimal conformal field theories, was not considered there. I am grateful to Murray Batchelor for drawing my attention to [36].
and h are relatively prime integers (h > k > 0) and s = 1, 2, · · · , h − 1. With this condition (1.12) the face weights still satisfy the Yang-Baxter equation. The row-to-row transfer matrix T (u) forms the commuting family [ T (u) , T (v) ] = 0 . (1.13) Therefore the model is integrable. The Bethe ansatz solutions (1.8) and (1.10) with the restriction (1.12) are the eigenvalues and the Bethe ansatz equations of the transfer matrix of the RSOS model [7, 8].
Sometimes it is convenient to work with values of x in the upper half plane close to the real axis for avoiding singularities which might otherwise occur.
AcknowledgementsThis research has been supported by the Australian Research Council. The author also thanks P. A. Pearce and Ole Warnaar for discussions.
[1] G. E. Andrews, R. J. Baxter and P. J. Forrester, J. Stat. Phys. 35 (1984) 193.
[2] A. A. Belavin, A. M. Polyakov and A. B. Zamolodchikov, Nucl. Phys. B241 (1984) 333.
[3] D. A. Huse, Phys. Rev. B30 (1984) 3908.
[4] D. Friedan, Z. Qiu and S. Shenker, Phys. Rev. Lett. 52 (1984) 1575.
[5] I. Affleck, Phys. Rev. Lett. 56 (1986) 746.
[6] A. N. Kirillov and N. Yu. Reshetikhin, J. Phys. A20 (1987) 1565; 1587.
[7] R. J. Baxter, J. Stat. Phys. 28 (1982) 1.
[8] V. V. Bazhanov and N. Yu. Reshetikhin, Int. J. Mod. Phys. B4 (1989) 115.
[9] A. Klümper and P. A. Pearce, J. Stat. Phys. 64 (1991) 13.
[10] A. Kuniba and T. Nakanishi, "Spectra in Conformal Field Theories from the Rogers Dilogarithm", preprint (1992).
[11] A. Klümper and P. A. Pearce, Physica A 183 (1992) 304.
[12] H. J. de Vega and F. Woynarovich, Nucl. Phys. B251 (1985) 439.
[13] F. Woynarovich, Phys. Rev. Lett. 59 (1987) 259.
[14] G. von Gehlen and V. Rittenberg, J. Phys. A20 (1987) 227.
[15] H. J. de Vega and M. Karowski, Nucl. Phys. B285 (1987) 619.
[16] C. J. Hamer, G. R. W. Quispel and M. T. Batchelor, J. Phys. A20 (1987) 5677.
[17] M. Karowski, Nucl. Phys. B300 (1988) 473.
[18] F. C. Alcaraz, M. N. Barber and M. T. Batchelor, Phys. Rev. Lett. 58 (1987) 771.
[19] A. Cappelli, C. Itzykson and J.-B. Zuber, Nucl. Phys. B280 (1987) 445; Commun. Math. Phys. 113 (1987) 1.
[20] A. Klümper, M. T. Batchelor and P. A. Pearce, J. Phys. A24 (1991) 3111.
[21] Y. K. Zhou and P. A. Pearce, Nucl. Phys. B (1995), in press.
[22] Y. K. Zhou, Nucl. Phys. B (1995), in press.
[23] P. J. Forrester and R. J. Baxter, J. Stat. Phys. 38 (1985) 435.
[24] C. N. Yang, Phys. Rev. Lett. 19 (1967) 1312.
[25] R. J. Baxter, Ann. Phys. 70 (1972) 193.
[26] L. A. Takhtadzhan and L. D. Faddeev, Russian Math. Surveys 34:5 (1979) 11.
[27] S. O. Warnaar, M. T. Batchelor and B. Nienhuis, J. Phys. A25 (1992) 3077.
[28] C. N. Yang and C. P. Yang, J. Math. Phys. 10 (1969) 1115.
[29] M. Takahashi, Prog. Theor. Phys. 46 (1971) 401.
[30] M. Takahashi and M. Suzuki, Prog. Theor. Phys. 48 (1972) 2187.
[31] Al. B. Zamolodchikov, Phys. Lett. B253 (1991) 391; Nucl. Phys. B358 (1991) 497.
[32] T. R. Klassen and E. Melzer, Nucl. Phys. B338 (1990) 485.
[33] T. R. Klassen and E. Melzer, Nucl. Phys. B350 (1991) 635.
[34] M. J. Martins, Phys. Rev. Lett. 67 (1991) 419.
[35] A. Kuniba, Nucl. Phys. B389 (1993) 209.
[36] A. Klümper, T. Wehner and J. Zittartz, J. Phys. A26 (1993) 2815.
| [] |
[
"Heavy-Tailed Regularization of Weight Matrices in Deep Neural Networks",
"Heavy-Tailed Regularization of Weight Matrices in Deep Neural Networks"
] | [
"Xuanzhe Xiao \nSouthern University of Science and Technology\n518055ShenzhenP.R. China\n",
"Zeng Li \nSouthern University of Science and Technology\n518055ShenzhenP.R. China\n",
"⋆1 ",
"Chuanlong Xie [email protected] \nBeijing Normal University\n519087ZhuhaiP.R. China\n",
"Fengwei Zhou [email protected] \nHuawei Noah's Ark Lab\nHong KongP.R. China\n"
] | [
"Southern University of Science and Technology\n518055ShenzhenP.R. China",
"Southern University of Science and Technology\n518055ShenzhenP.R. China",
"Beijing Normal University\n519087ZhuhaiP.R. China",
"Huawei Noah's Ark Lab\nHong KongP.R. China"
] | [] | Unraveling the reasons behind the remarkable success and exceptional generalization capabilities of deep neural networks presents a formidable challenge. Recent insights from random matrix theory, specifically those concerning the spectral analysis of weight matrices in deep neural networks, offer valuable clues to address this issue. A key finding indicates that the generalization performance of a neural network is associated with the degree of heavy tails in the spectrum of its weight matrices. To capitalize on this discovery, we introduce a novel regularization technique, termed Heavy-Tailed Regularization, which explicitly promotes a more heavy-tailed spectrum in the weight matrix through regularization. Firstly, we employ the Weighted Alpha and Stable Rank as penalty terms, both of which are differentiable, enabling the direct calculation of their gradients. To circumvent over-regularization, we introduce two variations of the penalty function. Then, adopting a Bayesian statistics perspective and leveraging knowledge from random matrices, we develop two novel heavy-tailed regularization methods, utilizing Power-law distribution and Fréchet distribution as priors for the global spectrum and maximum eigenvalues, respectively. We empirically show that heavytailed regularization outperforms conventional regularization techniques in terms of generalization performance. | null | [
"https://export.arxiv.org/pdf/2304.02911v2.pdf"
] | 257,984,990 | 2304.02911 | 3635a5d6fa917743571d4967dda79f2fdf03cee9 |
Heavy-Tailed Regularization of Weight Matrices in Deep Neural Networks
7 Apr 2023
Xuanzhe Xiao
Southern University of Science and Technology
518055ShenzhenP.R. China
Zeng Li
Southern University of Science and Technology
518055ShenzhenP.R. China
⋆1
Chuanlong Xie [email protected]
Beijing Normal University
519087ZhuhaiP.R. China
Fengwei Zhou [email protected]
Huawei Noah's Ark Lab
Hong KongP.R. China
Heavy-Tailed Regularization of Weight Matrices in Deep Neural Networks
7 Apr 2023
Heavy-Tailed Regularization · Deep Neural Network · Random Matrix Theory
Unraveling the reasons behind the remarkable success and exceptional generalization capabilities of deep neural networks presents a formidable challenge. Recent insights from random matrix theory, specifically those concerning the spectral analysis of weight matrices in deep neural networks, offer valuable clues to address this issue. A key finding indicates that the generalization performance of a neural network is associated with the degree of heavy tails in the spectrum of its weight matrices. To capitalize on this discovery, we introduce a novel regularization technique, termed Heavy-Tailed Regularization, which explicitly promotes a more heavy-tailed spectrum in the weight matrix through regularization. Firstly, we employ the Weighted Alpha and Stable Rank as penalty terms, both of which are differentiable, enabling the direct calculation of their gradients. To circumvent over-regularization, we introduce two variations of the penalty function. Then, adopting a Bayesian statistics perspective and leveraging knowledge from random matrices, we develop two novel heavy-tailed regularization methods, utilizing Power-law distribution and Fréchet distribution as priors for the global spectrum and maximum eigenvalues, respectively. We empirically show that heavytailed regularization outperforms conventional regularization techniques in terms of generalization performance.
Introduction
Deep neural networks (DNN) have shown remarkable performance in recent years, achieving unprecedented success in various fields such as computer vision, natural language processing, and recommendation systems [6,10,11,25]. However, there is still a lack of clear understanding of how neural networks generalize.
Efforts to construct a generalization framework for DNNs have incorporated various mathematical tools from conventional learning theory [3,4,5,24]. Nevertheless, the majority of these approaches have been found to exhibit certain limitations. For example, the VC dimension and Rademacher complexity have been deemed inadequate in offering a satisfactory explanation for the generalization performance of DNNs [26]. The uniform-convergence-based generalization bounds may fail to elucidate generalization in deep learning due to their vacuous generalization guarantee [20].
One might consider that the weight matrices of a DNN serve as a representation of its generalization capabilities for the following reasons: from a theoretical standpoint, the parameters contained within the weight matrices are intricately connected to the model's output space, input data, optimization algorithm, etc. Additionally, in practical scenarios, access to trained models often comes with limited information regarding training and testing data, which can be attributed to the highly compartmentalized nature of the industry. Recently, Martin and Mahoney [16] introduced a perspective grounded in random matrix theory (RMT) to elucidate the generalization behavior of deep neural networks. They studied the empirical spectral distribution (ESD) of weight matrices in deep neural networks and observed a 5+1 phase of regularization: throughout the training process, the ESDs of the weight matrices initially conform well to the Marchenko-Pastur (MP) law, gradually deviate from it, and ultimately approach a Heavy-Tailed (HT) distribution [14,16]. This regularization phenomenon is referred to as Implicit Self-Regularization. Furthermore, this theory suggests that large, well-trained DNN architectures should exhibit Heavy-Tailed Self-Regularization, meaning that the spectra of their weight matrices can be well fitted by a heavy-tailed distribution. Building on Martin and Mahoney's work, Meng and Yao [19] discovered that the complexity of the classification problem could influence the weight matrix spectra of DNNs. These theories offer a novel perspective for exploring the generalization of DNNs.
In addition to the aforementioned studies, several works have advocated the positive impact of heavy tails of weight matrices on the generalization of neural networks from the perspective of stochastic gradient descent (SGD). Zhou et al. [27] pointed out that the time required for both SGD and Adam to escape sharp minima is negatively related to the heavy-tailedness of gradient noise. They further explained that the superior generalization of SGD compared to Adam in deep learning is due to Adam's gradient calculation being smoothed by the exponential moving average, resulting in lighter gradient noise tails compared to SGD. Hodgkinson et al. [12] presented a similar finding, demonstrating that, within a stochastic optimization problem, multiplicative noise and heavy-tailed stationary behavior enhance the capacity for basin hopping during the exploratory phase of learning, in contrast to additive noise and light-tailed stationary behavior. Simsekli et al. [22] approximated the trajectories of SGD using a Feller process and derived a generalization bound controlled by the Hausdorff dimension, which is associated with the tail properties of the process. Their results suggest that processes with heavier tails should achieve better generalization. Barsbey et al. [2] argued that the heavy-tailed behavior present in the weight matrices of a neural network contributes to network compressibility, thereby enhancing the network's generalization capabilities. Taken together, these results suggest that the heavy tail of weight matrices is a fundamental factor for the improved generalization of DNNs under SGD.
An intuitive notion arising from these theories is that the presence of heavy tails of weight matrices during DNN training is crucial for achieving favorable generalization performance. However, previous studies provide a limited understanding of how to enhance the heavy-tailed behavior in neural networks. In this study, our focus lies in regularizing DNNs to facilitate more rapid and pronounced heavy-tailed behavior. To this end, we introduce an explicit regularization technique called Heavy-Tailed Regularization. We empirically demonstrate that models trained with heavy-tailed regularization display superior generalization performance compared to those trained with conventional methods.
Contribution of this paper.
1. We propose a regularization framework termed Heavy-Tailed Regularization. This proposal is motivated by prior research, which has shown that the heavy-tailed behavior of weight matrices in neural networks can improve their generalization capabilities.
2. We develop four distinct heavy-tailed regularization methods: (a) Weighted Alpha regularization, (b) Stable Rank regularization, (c) Power-law Prior, and (d) Fréchet Prior. The first two are inspired by existing complexity measures for neural networks, while the latter two are informed by insights from random matrix theory (RMT) and Bayesian statistics.
3. We compare with conventional methods on widely used datasets including KMNIST and CIFAR10. Numerical experiments show that the heavy-tailed regularization approaches are efficient and outperform competing conventional regularization methods.
2 Heavy-Tailed Regularization
Definition
Consider a DNN f_W : X → Y with L layers and with weight matrices of its fully connected layers W = {W_1, W_2, ..., W_L}, and a data sample set S = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} with sample size N. Denote by l(f(x), y) the loss of example (x, y) ∈ X × Y under model f_W.
The optimization problem of the DNN can be viewed as a problem of minimizing the empirical risk with a penalty term:
\min_{\mathbf{W}} \mathcal{L}(x, y) = \frac{1}{N} \sum_{i=1}^{N} l(f(x_i), y_i) + \lambda \sum_{l=1}^{L} p_l(W_l), \qquad (1)
where λ is a tuning parameter and p_l(·) is a penalty function on the weight matrices.
Here, we propose a class of regularization methods called Heavy-Tailed Regularization, which refers to regularization methods that are conducive to making the model's weight matrices more heavy-tailed. To achieve this goal, p_l(·) should be a complexity measure of the model that reflects the degree of heavy-tailed behavior, and it should decrease as the tails get heavier.
To describe the degree of heavy tails in the spectra of the weight matrices, it is critical to estimate the tail index α. In statistics, estimating the tail index is a tricky issue. Denote the data points {x_i, 1 ≤ i ≤ n} and assume the data come from a heavy-tailed distribution with density function p(x) ∼ c x^{-α}, i.e., its p.d.f. is comparable with the power law x^{-α} as x → ∞. A general method to estimate the tail index α is the Hill estimator (HE), which can be used in general power-law settings. If the data are sorted in increasing order, the HE can be written as:

\hat{\alpha} = 1 + \frac{k}{\sum_{i=1}^{k} \ln \frac{x_{n-i+1}}{x_{n-k}}}, \qquad (2)

where k is a tuning parameter that trades off the bias and variance of the estimator. In this study, we use the HE with k = n/2 for tail index estimation.
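For concreteness, a minimal NumPy sketch of this estimator is given below; it assumes the input is a one-dimensional array of positive values (e.g., the eigenvalues of W^T W) and uses the default k = n/2 mentioned above. It is only an illustration, not the authors' released code.

import numpy as np

def hill_estimator(x, k=None):
    # Hill estimator of the power-law tail index alpha, Eq. (2).
    # x: 1-D array of positive values (e.g., eigenvalues of W^T W); k defaults to n // 2.
    x = np.sort(np.asarray(x, dtype=float))   # ascending order
    n = x.size
    if k is None:
        k = n // 2
    # log-ratios of the k largest order statistics to x_{n-k}
    log_ratios = np.log(x[n - k:] / x[n - k - 1])
    return 1.0 + k / np.sum(log_ratios)

# Example: eigenvalues of a random Gaussian weight matrix
W = np.random.randn(256, 256) / np.sqrt(256)
evals = np.linalg.svd(W, compute_uv=False) ** 2
print(hill_estimator(evals))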
Weighted Alpha Regularization
Motivated by Martin and Mahoney's work [15,17,18], the Weighted Alpha (also called AlphaHat) is used in our regularization approach. In their theory, there is a strong linear correlation between the test accuracy and the Weighted Alpha of models. The Weighted Alpha is defined as:

\mathrm{Weighted\ Alpha}(\mathbf{W}) = \sum_{l=1}^{L} \alpha_l \log \lambda_{\max,l}, \qquad (3)

where α_l is the tail index of all the positive eigenvalues of S_l = W_l^T W_l, and λ_{max,l} is the maximum eigenvalue of S_l. Martin and Mahoney only discussed the behavior of Weighted Alpha across different architectures of large-scale, pretrained, state-of-the-art models, whereas we are interested in how this metric changes during the training of a DNN. We conducted some experiments and obtained evidence that Weighted Alpha is negatively correlated with test accuracy. Thus, the penalty function p_l(·) can be written as

p_l(W_l) = \alpha_l \cdot \log \lambda_{\max,l}. \qquad (4)
In fact, we do not need to penalize the weighted alpha throughout the training process. Our goal is to impose a heavy-tailed perturbation in the stochastic optimization, which only requires us to activate the regularization in the early stages or intermittently. Otherwise, it will be over-regularized. On the other hand, for practical reasons, we can terminate the regularization at some point to avoid high computational costs. Therefore, we provide two additional variants of the penalty function as follows:
1. Decay Weighted Alpha:
p_l(W_l) = d(\lfloor e/m \rfloor) \cdot \alpha_l \cdot \log \lambda_{\max,l}, \qquad (5)
where e is the current epoch number, m is the frequency of decay, and d(·) is a decreasing function called the decay function. The decay function is called power decay when d(x) = x^{-k} \mathbb{I}\{x^{-k} > t\} and exponential decay when d(x) = \exp(-kx) \mathbb{I}\{\exp(-kx) > t\}, for hyperparameters k and t. Adopting this penalty function means that the regularization is active only in the early epochs and becomes weaker as training proceeds.
2. Lower Threshold Weighted Alpha:
p_l(W_l) = \alpha_l \cdot \log \lambda_{\max,l} \cdot \mathbb{I}\Big\{\sum_{l=1}^{L} \alpha_l \cdot \log \lambda_{\max,l} \geq t\Big\}, \qquad (6)
where t is a hyperparameter and I{·} is the indicator function. Adopting this penalty function means that the regularization is active only when the Weighted Alpha is above the predetermined lower threshold t, i.e., only while the model falls short of the degree of heavy-tailedness we expect. A code sketch of this penalty and its variants follows.
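The sketch below illustrates how the penalty in Eq. (4) and its two variants might be implemented in PyTorch; it is not the authors' implementation. It reuses the hill_estimator sketch above, the decay and threshold hyperparameters (m, k, t) are illustrative placeholders, and shifting the decay argument by one to avoid a division by zero at epoch 0 is our own assumption.

import torch

def weighted_alpha_penalty(W):
    # Eq. (4): alpha_l * log(lambda_max) for a single 2-D weight matrix W.
    evals = torch.linalg.svdvals(W) ** 2                          # eigenvalues of W^T W
    alpha = float(hill_estimator(evals.detach().cpu().numpy()))   # tail index, treated as a constant
    return alpha * torch.log(evals.max())

def decay_weighted_alpha_penalty(W, epoch, m=10, k=2.0, t=1e-3):
    # Eq. (5) with power decay d(x) = x^{-k} I{x^{-k} > t}; epoch//m is shifted by 1 (assumption).
    d = float(epoch // m + 1) ** (-k)
    return (d if d > t else 0.0) * weighted_alpha_penalty(W)

def threshold_weighted_alpha_penalty(weights, t=50.0):
    # Eq. (6): penalize only while the total Weighted Alpha is still above the lower threshold t.
    per_layer = [weighted_alpha_penalty(W) for W in weights]
    total = float(sum(p.detach() for p in per_layer))
    return sum(per_layer) if total >= t else torch.zeros((), device=weights[0].device)

In training, β times one of these penalties would simply be added to the cross-entropy loss before back-propagation.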
Stable Rank Regularization
Stable rank is a classical metric in deep learning, which is defined as
\mathrm{stable}(W_l) = \frac{\|W_l\|_F^2}{\|W_l\|_2^2}. \qquad (7)
It has been verified that the stable rank decreases during the training of DNNs [16]. Several recent studies [4,21] also show that the generalization error can be upper bounded by O\big(\prod_i \|W_i\|_2^2 \sum_i \mathrm{stable}(W_i)\big), which implies that a smaller \sum_i \mathrm{stable}(W_i) leads to a smaller generalization error. In light of this, the penalty function for stable rank regularization can be written as p_l(W_l) = \mathrm{stable}(W_l), and thus the optimization problem becomes
\min_{\mathbf{W}} \mathcal{L}(x, y) = \sum_{i=1}^{N} l(f(x_i), y_i) + \lambda \sum_{l=1}^{L} \frac{\|W_l\|_F^2}{\|W_l\|_2^2}. \qquad (8)
Note that ||W||_F^2 is the sum of the squared singular values of W and ||W||_2 is the maximum singular value of W. Recall from random matrix theory that when a matrix is heavy-tailed, its maximum eigenvalue lies far outside the bulk of the spectrum. Combined with Martin's 5+1 phase transition theory [16], a smaller stable rank of the weight matrix therefore corresponds to stronger heavy-tailed self-regularization.
Similar to the weighted alpha regularization, in order to avoid over-regularization, we can also add decay and a threshold to the stable rank regularization as follows:
1. Decay Stable Rank:
p_l(W_l) = d(\lfloor e/m \rfloor) \cdot \frac{\|W_l\|_F^2}{\|W_l\|_2^2}. \qquad (9)
2. Lower Threshold Stable Rank:
p_l(W_l) = \frac{\|W_l\|_F^2}{\|W_l\|_2^2} \cdot \mathbb{I}\Big\{\sum_{l=1}^{L} \frac{\|W_l\|_F^2}{\|W_l\|_2^2} \geq t\Big\}. \qquad (10)
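A corresponding PyTorch sketch of the stable-rank penalty in Eqs. (7)-(10) is given below; the spectral norm is computed exactly here for clarity, whereas a practical implementation might approximate it with power iteration, and the threshold value t is a placeholder.

import torch

def stable_rank(W):
    # Eq. (7): squared Frobenius norm over squared spectral norm.
    return (W ** 2).sum() / torch.linalg.matrix_norm(W, ord=2) ** 2

def threshold_stable_rank_penalty(weights, t=15.0):
    # Eq. (10): penalize only while the summed stable rank is still at least t.
    per_layer = [stable_rank(W) for W in weights]
    total = float(sum(p.detach() for p in per_layer))
    return sum(per_layer) if total >= t else torch.zeros((), device=weights[0].device)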
Heavy Tailed Regularization from A Bayesian Perspective
Here, we propose two heavy-tailed regularization methods from a Bayesian perspective. Let us view the deep neural network as a probabilistic model P(y | x, W), where x ∈ X = R^p is the input and y ∈ Y is the output probability assigned by the neural network. W = {W_1, ..., W_L} is the set of weight matrices of the neural network. Given a training sample set S = {(x_1, y_1), (x_2, y_2), ..., (x_N, y_N)} with sample size N, a common method for estimating the weights W is maximum likelihood estimation (MLE):
\mathbf{W}_{\mathrm{MLE}} = \arg\max_{\mathbf{W}} \sum_{i=1}^{N} \log P(y_i | x_i, \mathbf{W}). \qquad (11)
Specifically, for a multi-classification task, the probabilistic model is usually a multinomial distribution, and then the MLE can be written as
\mathbf{W}_{\mathrm{MLE}} = \arg\max_{\mathbf{W}} \sum_{i=1}^{N} y_i \log f_{\mathbf{W}}(x_i). \qquad (12)
From a Bayesian perspective, if we want to introduce heavy-tailed regularization, we can place a heavy-tailed prior on the weights and then find the maximum a posteriori (MAP) estimate rather than the MLE:
\mathbf{W}_{\mathrm{MAP}} = \arg\max_{\mathbf{W}} \big( \log P(y | x, \mathbf{W}) + \log P(\mathbf{W}) \big). \qquad (13)
Thus, it is important to choose a reasonable prior P(W) for the weights that can make the weights more heavy-tailed. Recall from random matrix theory for heavy-tailed matrices: if a random matrix is heavy-tailed, its limiting spectral distribution (LSD) follows a power law [7,8,9] and its largest eigenvalue follows a Fréchet distribution [1,23]. Therefore, it is natural to set the prior to a power-law or Fréchet distribution when introducing prior knowledge of the global spectrum or the maximum eigenvalue, respectively. We now introduce the heavy-tailed prior on the model. First, we consider the prior on the global spectra of the weight matrices. When the weight matrices are heavy-tailed, the LSD is a power law, so the prior distribution can be set as
P(\mathbf{W}) = \prod_{l=1}^{L} P(W_l) \propto \prod_{l=1}^{L} \prod_{j=1}^{K_l} \lambda_{l,j}^{-\alpha_l}, \qquad (14)
where α_l is the power-law tail index of the squared singular values of the weight matrix W_l in the l-th layer, and λ_{l,j} is the j-th squared singular value of W_l. K_l is the number of singular values of W_l that are considered to come from a heavy-tailed distribution; it is a hyperparameter, and we choose K_l as half the size of W_l in our study. Substituting this into (13), we have the following optimization problem:
\mathbf{W}_{\mathrm{MAP}} = \arg\max_{\mathbf{W}} \sum_{i=1}^{N} y_i \log f_{\mathbf{W}}(x_i) - \sum_{l=1}^{L} \sum_{j=1}^{K_l} \alpha_l \log \lambda_{l,j}. \qquad (15)
Second, we consider the prior on the maximum squared singular value of the weight matrices. When the weight matrices are heavy-tailed, the distribution of the maximum squared singular value is a Fréchet distribution, so the prior distribution can be set as
P(\mathbf{W}) = \prod_{l=1}^{L} P(W_l) = \prod_{l=1}^{L} \exp\big(-\lambda_{\max,l}^{-\alpha_l}\big), \qquad (16)
where α_l is the tail index of W_l and λ_{max,l} is its maximum squared singular value. Similarly, substituting this into (13), we have the following optimization problem:
\mathbf{W}_{\mathrm{MAP}} = \arg\max_{\mathbf{W}} \sum_{i=1}^{N} y_i \log f_{\mathbf{W}}(x_i) - \sum_{l=1}^{L} \lambda_{\max,l}^{-\alpha_l}. \qquad (17)
So far we have derived two forms of the MAP, but two problems remain: how to determine the hyperparameters α = {α_l, 1 ≤ l ≤ L}, and how to solve the maximization problem. In empirical Bayes, the hyperparameters are determined by maximizing the marginal likelihood of the data, that is,
\hat{\alpha} = \arg\max_{\alpha} \log \int P(y, \mathbf{W} | x, \alpha) \, d\mathbf{W}. \qquad (18)
This is infeasible in general since the integral is intractable. According to Mandt et al. [13], SGD can be seen as a variational expectation maximization (VEM) method for Bayesian inference. The maximization problems in (15) and (17) are equivalent to the following minimization problems:
\min_{\mathbf{W}} \mathcal{L}(x, y) = \frac{1}{N} \sum_{i=1}^{N} l(f(x_i), y_i) + \sum_{l=1}^{L} \sum_{j=1}^{K_l} \alpha_l \log \lambda_{l,j}, \qquad (19)

\min_{\mathbf{W}} \mathcal{L}(x, y) = \frac{1}{N} \sum_{i=1}^{N} l(f(x_i), y_i) + \sum_{l=1}^{L} \lambda_{\max,l}^{-\alpha_l}, \qquad (20)
where l(f(x), y) = −y log f(x) is the cross-entropy loss. The hyperparameters α can be optimized when SGD is viewed as a type of VEM algorithm. Instead of the MLE, the Hill estimator is a better choice for estimating the hyperparameters α. When a tuning parameter μ is added, (19) and (20) can be modified as:
\min_{\mathbf{W}} \mathcal{L}(x, y) = \frac{1}{N} \sum_{i=1}^{N} l(f(x_i), y_i) + \mu \sum_{l=1}^{L} \sum_{j=1}^{K_l} \hat{\alpha}_l \log \lambda_{l,j}, \qquad (21)

\min_{\mathbf{W}} \mathcal{L}(x, y) = \frac{1}{N} \sum_{i=1}^{N} l(f(x_i), y_i) + \mu \sum_{l=1}^{L} \lambda_{\max,l}^{-\hat{\alpha}_l}, \qquad (22)
where \hat{\alpha}_l is the Hill estimator for the l-th layer weight matrix. Note that (21) and (22) are special cases of (1) with p_l(W_l) = \hat{\alpha}_l \sum_j \log \lambda_{l,j} and p_l(W_l) = \lambda_{\max,l}^{-\hat{\alpha}_l}, respectively. Since these penalty terms are similar to the Weighted Alpha, the two regularizers can be considered variants of Weighted Alpha. According to the priors used, we call (21) Heavy-Tailed Regularization under a Power-law Prior and (22) Heavy-Tailed Regularization under a Fréchet Prior.
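The two Bayesian penalties might be sketched as follows; again this is only an illustration, reusing the hill_estimator sketch from Section 2.1, and taking the K_l largest eigenvalues as the heavy tail entering Eq. (21) is our own assumption.

import torch

def power_law_prior_penalty(W):
    # Penalty in Eq. (21): alpha_hat_l * sum_j log(lambda_{l,j}) over K_l eigenvalues.
    evals = torch.sort(torch.linalg.svdvals(W) ** 2).values
    K = evals.numel() // 2                                        # K_l = half the layer size, as in the text
    alpha = float(hill_estimator(evals.detach().cpu().numpy()))   # Hill estimate, treated as a constant
    return alpha * torch.log(evals[-K:]).sum()

def frechet_prior_penalty(W):
    # Penalty in Eq. (22): lambda_max^{-alpha_hat_l}, induced by the Frechet prior.
    evals = torch.linalg.svdvals(W) ** 2
    alpha = float(hill_estimator(evals.detach().cpu().numpy()))
    return evals.max() ** (-alpha)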
Experiment
In this section, we experimentally demonstrate the performance of Heavy-Tailed Regularization on multi-class classification tasks. To verify the effectiveness of heavy-tailed regularization, we employed the Weighted Alpha Regularization and Stable Rank Regularization with their variants, and the heavy-tailed regularization under the Power-law and Fréchet spectral priors. Throughout, the tail index is replaced by its Hill estimator with k = n/2, where n is the size of the corresponding weight matrix. In our experiments, we compared the heavy-tailed regularization approach against three baseline training methods — the vanilla model, weight decay, and spectral norm regularization (enumerated below). All experiments are based on mini-batch SGD with learning rate decay.
In our experiments, we used the following four settings on the model and dataset:
FC3
First, we train the neural network with three hidden layers on the KMNIST and CIFAR10 datasets for 200 epochs. The KMNIST dataset is adapted from the Kuzushiji dataset and is a drop-in replacement for the MNIST dataset. The image size of the KMNIST dataset is 28×28. The CIFAR10 dataset consists of 60000 color images of size 32×32. The CIFAR10 dataset is more complex than the KMNIST dataset, so it is more difficult to classify correctly. Because the image sizes of the two datasets differ, we use two three-layer neural networks of different sizes, one for each dataset. The results are shown in Figures 1 and 2 and Table 1. The heavy-tailed regularizations all show better accuracy than the vanilla model on both the KMNIST and CIFAR10 datasets. The Fréchet prior achieves the best test accuracy on the KMNIST dataset, and the stable rank with a lower threshold of t = 15 achieves the best test accuracy on the CIFAR10 dataset.
LeNet5
Second, we train LeNet5 on the CIFAR10 dataset for 200 epochs. LeNet5 is a classical convolutional neural network (CNN) architecture. The results are shown in Figure 3 and Table 2. The heavy-tailed regularizations again all show better accuracy than the vanilla model on CIFAR10. As shown in the table, the stable rank with β = 0.1 achieves the best test accuracy.
ResNet18
Third, we train ResNet18 on the CIFAR10 dataset for 200 epochs. ResNet is a CNN architecture that greatly advanced the state of the art in various computer vision tasks. In this experiment, we add one linear layer of size 512×128 before the linear layer of the original ResNet18 architecture. The results are shown in Figure 4 and Table 3. As shown in the table, the stable rank with β = 5×10^-4 achieves the best test accuracy.
Baseline training methods compared (a brief sketch of these baseline penalties follows):
1. Vanilla problem (Base): We considered the original model without any explicit regularization.
2. Weight Decay: We considered the most commonly used explicit regularization in (1), where p_l(W) = \frac{1}{2}\|W\|_F^2.
3. Spectral Norm Regularization: We considered another explicit regularization method defined on the spectrum, whose penalty function p_l(W) penalizes the spectral norm of W.
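For completeness, the two regularized baselines might be implemented as the following per-layer penalties; the squared-spectral-norm form used for the third baseline is our assumption, since it is the usual choice for spectral norm regularization.

import torch

def weight_decay_penalty(W):
    # Baseline 2: (1/2) * ||W||_F^2.
    return 0.5 * (W ** 2).sum()

def spectral_norm_penalty(W):
    # Baseline 3 (assumed form): (1/2) * ||W||_2^2, the squared largest singular value.
    return 0.5 * torch.linalg.matrix_norm(W, ord=2) ** 2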
Fig. 4: ResNet18 on CIFAR10.
For the KMNIST dataset, we use the network with layer sizes n = [784, 128, 128, 128, 10]; for the CIFAR10 dataset, we use the network with layer sizes n = [3072, 512, 256, 256, 10].
Table 1: The average (± standard error) of test accuracy of FC3 with different regularization methods on the KMNIST and CIFAR10 datasets.

Network | Dataset | Method          | β         | Test accuracy
FC3     | KMNIST  | base            | -         | 89.19±0.020
FC3     | KMNIST  | weight decay    | 5.00×10^-4 | 89.44±0.037
FC3     | KMNIST  | spectral norm   | 0.0001    | 89.27±0.013
FC3     | KMNIST  | weighted alpha¹ | 5.00×10^-5 | 89.60±0.011
FC3     | KMNIST  | stable rank²    | 1.00×10^-4 | 89.48±0.008
FC3     | KMNIST  | Power-law prior | 5.00×10^-4 | 89.58±0.175
FC3     | KMNIST  | Fréchet prior   | 2.00×10^-5 | 89.64±0.173
FC3     | CIFAR10 | base            | -         | 54.97±0.039
FC3     | CIFAR10 | weight decay    | 5.00×10^-4 | 55.56±0.092
FC3     | CIFAR10 | spectral norm   | 0.0001    | 55.27±0.003
FC3     | CIFAR10 | weighted alpha¹ | 1.00×10^-4 | 55.72±0.053
FC3     | CIFAR10 | stable rank²    | 1.00×10^-4 | 55.82±0.041
FC3     | CIFAR10 | Power-law prior | 5.00×10^-5 | 55.44±0.038
FC3     | CIFAR10 | Fréchet prior   | 5.00×10^-5 | 55.45±0.029
Table 2: The average (± standard error) of test accuracy of LeNet5 with different regularization methods on the CIFAR10 dataset.

Network | Dataset | Method          | β         | Test accuracy
LeNet5  | CIFAR10 | base            | -         | 72.42±0.213
LeNet5  | CIFAR10 | weight decay    | 5.00×10^-4 | 72.62±0.277
LeNet5  | CIFAR10 | spectral norm   | 0.0001    | 71.98±0.275
LeNet5  | CIFAR10 | weighted alpha¹ | 0.004     | 72.61±0.300
LeNet5  | CIFAR10 | stable rank     | 0.1       | 73.63±0.193
LeNet5  | CIFAR10 | Power-law prior | 7.00×10^-4 | 72.61±1.061
LeNet5  | CIFAR10 | Fréchet prior   | 5.00×10^-5 | 72.58±0.270
Table 3: The average (± standard error) of test accuracy of ResNet18 with different regularization methods on the CIFAR10 dataset.

Network  | Dataset | Method          | β         | Test accuracy
ResNet18 | CIFAR10 | base            | -         | 92.65±0.066
ResNet18 | CIFAR10 | weight decay    | 5.00×10^-4 | 93.15±0.087
ResNet18 | CIFAR10 | spectral norm   | 0.0001    | 92.78±0.069
ResNet18 | CIFAR10 | weighted alpha  | 5.00×10^-5 | 93.04±0.045
ResNet18 | CIFAR10 | stable rank     | 5.00×10^-4 | 93.19±0.049
ResNet18 | CIFAR10 | Power-law prior | 1.00×10^-4 | 92.85±0.111
ResNet18 | CIFAR10 | Fréchet prior   | 3.00×10^-5 | 93.07±0.086
¹ We use power decay weighted alpha with k = 2. ² We use lower threshold stable rank with t = 1.
References
1. Auffinger, A., Ben Arous, G., Péché, S.: Poisson convergence for the largest eigenvalues of heavy tailed random matrices. Annales de l'IHP Probabilités et statistiques 45, 589-610 (2009)
2. Barsbey, M., Sefidgaran, M., Erdogdu, M.A., Richard, G., Simsekli, U.: Heavy tails in SGD and compressibility of overparametrized neural networks. Advances in Neural Information Processing Systems 34, 29364-29378 (2021)
3. Bartlett, P., Maiorov, V., Meir, R.: Almost linear VC dimension bounds for piecewise polynomial networks. Advances in Neural Information Processing Systems 11 (1998)
4. Bartlett, P.L., Foster, D.J., Telgarsky, M.J.: Spectrally-normalized margin bounds for neural networks. Advances in Neural Information Processing Systems 30 (2017)
5. Bartlett, P.L., Mendelson, S.: Rademacher and Gaussian complexities: Risk bounds and structural results. Journal of Machine Learning Research 3(Nov), 463-482 (2002)
6. Chen, Q., Zhao, H., Li, W., Huang, P., Ou, W.: Behavior sequence transformer for e-commerce recommendation in Alibaba. In: Proceedings of the 1st International Workshop on Deep Learning Practice for High-Dimensional Sparse Data (2019). https://doi.org/10.1145/3326937.3341261
7. Davis, R.A., Heiny, J., Mikosch, T., Xie, X.: Extreme value analysis for the sample autocovariance matrices of heavy-tailed multivariate time series. Extremes 19(3), 517-547 (2016)
8. Davis, R.A., Mikosch, T., Pfaffel, O.: Asymptotic theory for the sample covariance matrix of a heavy-tailed multivariate time series. Stochastic Processes and their Applications 126(3), 767-799 (2016)
9. Davis, R.A., Pfaffel, O., Stelzer, R.: Limit theory for the largest eigenvalues of sample covariance matrices with heavy-tails. Stochastic Processes and their Applications 124(1), 18-50 (2014)
10. Galassi, A., Lippi, M., Torroni, P.: Attention in natural language processing. IEEE Transactions on Neural Networks and Learning Systems 32(10), 4291-4308 (2020). https://doi.org/10.1109/tnnls.2020.3019893
11. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770-778 (2016)
12. Hodgkinson, L., Mahoney, M.: Multiplicative noise and heavy tails in stochastic optimization. In: International Conference on Machine Learning, pp. 4262-4274. PMLR (2021)
13. Mandt, S., Hoffman, M.D., Blei, D.M.: Stochastic gradient descent as approximate Bayesian inference. arXiv preprint arXiv:1704.04289 (2017)
14. Martin, C.H., Mahoney, M.W.: Traditional and heavy-tailed self regularization in neural network models. arXiv preprint arXiv:1901.08276 (2019)
15. Martin, C.H., Mahoney, M.W.: Heavy-tailed universality predicts trends in test accuracies for very large pre-trained deep neural networks. In: Proceedings of the 2020 SIAM International Conference on Data Mining, pp. 505-513. SIAM (2020)
16. Martin, C.H., Mahoney, M.W.: Implicit self-regularization in deep neural networks: Evidence from random matrix theory and implications for learning. Journal of Machine Learning Research 22(165), 1-73 (2021)
17. Martin, C.H., Mahoney, M.W.: Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics. arXiv preprint arXiv:2106.00734 (2021)
18. Martin, C.H., Peng, T.S., Mahoney, M.W.: Predicting trends in the quality of state-of-the-art neural networks without access to training or testing data. Nature Communications 12(1), 1-13 (2021)
19. Meng, X., Yao, J.: Impact of classification difficulty on the weight matrices spectra in deep learning and application to early-stopping. arXiv preprint arXiv:2111.13331 (2021)
20. Nagarajan, V., Kolter, J.Z.: Uniform convergence may be unable to explain generalization in deep learning. Advances in Neural Information Processing Systems 32 (2019)
21. Neyshabur, B., Bhojanapalli, S., Srebro, N.: A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks. arXiv preprint arXiv:1707.09564 (2017)
22. Simsekli, U., Sener, O., Deligiannidis, G., Erdogdu, M.A.: Hausdorff dimension, heavy tails, and generalization in neural networks. Advances in Neural Information Processing Systems 33, 5138-5151 (2020)
23. Soshnikov, A.: Poisson statistics for the largest eigenvalues of Wigner random matrices with heavy tails. Electronic Communications in Probability 9, 82-91 (2004)
24. Vapnik, V., Levin, E., Le Cun, Y.: Measuring the VC-dimension of a learning machine. Neural Computation 6(5), 851-876 (1994)
25. Vaswani, A., Shazeer, N.M., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. arXiv abs/1706.03762 (2017)
26. Zhang, C., Bengio, S., Hardt, M., Recht, B., Vinyals, O.: Understanding deep learning (still) requires rethinking generalization. Communications of the ACM 64(3), 107-115 (2021)
27. Zhou, P., Feng, J., Ma, C., Xiong, C., Hoi, S.C.H., et al.: Towards theoretically understanding why SGD generalizes better than Adam in deep learning. Advances in Neural Information Processing Systems 33, 21285-21296 (2020)
| [] |
[
"Semi-and Weakly-supervised Human Pose Estimation",
"Semi-and Weakly-supervised Human Pose Estimation"
] | [
"Norimichi Ukita \nToyota Technological Institute\nJapan\n",
"Yusuke Uematsu \nNara Institute of Science and Technology\nJapan\n"
] | [
"Toyota Technological Institute\nJapan",
"Nara Institute of Science and Technology\nJapan"
] | [] | For human pose estimation in still images, this paper proposes three semi-and weakly-supervised learning schemes. While recent advances of convolutional neural networks improve human pose estimation using supervised training data, our focus is to explore the semi-and weakly-supervised schemes. Our proposed schemes initially learn conventional model(s) for pose estimation from a small amount of standard training images with human pose annotations. For the first semi-supervised learning scheme, this conventional pose model detects candidate poses in training images with no human annotation. From these candidate poses, only true-positives are selected by a classifier using a pose feature representing the configuration of all body parts. The accuracies of these candidate pose estimation and true-positive pose selection are improved by action labels provided to these images in our second and third learning schemes, which are semi-and weakly-supervised learning. While the first and second learning schemes select only poses that are similar to those in the supervised training data, the third scheme selects more true-positive poses that are significantly different from any supervised poses. This pose selection is achieved by pose clustering using outlier pose detection with Dirichlet process mixtures and the Bayes factor. The proposed schemes are validated with large-scale human pose datasets. | 10.1016/j.cviu.2018.02.003 | [
"https://arxiv.org/pdf/1906.01399v1.pdf"
] | 49,487,157 | 1906.01399 | 8e06d88ffb9e1c57e7988e63da205bbf56b38128 |
Semi-and Weakly-supervised Human Pose Estimation
Norimichi Ukita
Toyota Technological Institute
Japan
Yusuke Uematsu
Nara Institute of Science and Technology
Japan
Semi-and Weakly-supervised Human Pose Estimation
Human pose estimation · Semi-supervised learning · Weakly-supervised learning · Pose clustering
For human pose estimation in still images, this paper proposes three semi-and weakly-supervised learning schemes. While recent advances of convolutional neural networks improve human pose estimation using supervised training data, our focus is to explore the semi-and weakly-supervised schemes. Our proposed schemes initially learn conventional model(s) for pose estimation from a small amount of standard training images with human pose annotations. For the first semi-supervised learning scheme, this conventional pose model detects candidate poses in training images with no human annotation. From these candidate poses, only true-positives are selected by a classifier using a pose feature representing the configuration of all body parts. The accuracies of these candidate pose estimation and true-positive pose selection are improved by action labels provided to these images in our second and third learning schemes, which are semi-and weakly-supervised learning. While the first and second learning schemes select only poses that are similar to those in the supervised training data, the third scheme selects more true-positive poses that are significantly different from any supervised poses. This pose selection is achieved by pose clustering using outlier pose detection with Dirichlet process mixtures and the Bayes factor. The proposed schemes are validated with large-scale human pose datasets.
Introduction
Human pose estimation is useful in various applications including context-based image retrieval. The amount of available training data has a huge impact on human pose estimation, as it does on various other recognition problems (e.g., general object recognition [1] and face recognition [2]). Although the scale of datasets for human pose estimation has been increasing (e.g., 305 images in the Image Parse dataset in 2006 [3], 2K images in the LSP dataset [4] in 2010, and around 40K human poses observed in 25K images in the MPII human pose dataset [5] in 2014), it is difficult to develop a huge dataset for human pose estimation, in contrast to object recognition (e.g., over 1,430K images in ILSVRC2012-2014 [6]). This is because human pose annotation is much more complicated than the weak label and window annotations used for object recognition.
To increase the number of training images with less annotation cost, semi-and weakly-supervised learning schemes are applicable. Semi-supervised learning allows us to automatically provide annotations for a large amount of data based on a small amount of annotated data. In weakly-supervised learning, only simple annotations are required in training data and are utilized to acquire full annotations in learning.
We apply semi-and weakly-supervised learning to human pose estimation, as illustrated in Figure 1. In our method with all functions proposed in this paper, fully-annotated images, each of which has a pose annotation (i.e., a skeleton) and an action label, are used to acquire initial pose models for each action (e.g., "Baseball" and "Tennis" in Figure 1). These action-specific pose models are used to estimate candidate human poses in each action-annotated image. If a candidate pose is considered true-positive, the given pose with its image is used for re-learning the corresponding action-specific pose model.
The key contributions of this work are threefold:
• True-positive poses are selected from candidate poses based on a pose feature representing the configuration of all body parts. This is in contrast to a pose estimation step in which only the pairwise configuration of neighboring/nearby parts is evaluated for efficiency.
• The action label of each training image is utilized for weakly-supervised learning. Because the variation of human poses in each action is smaller, pose estimation in each action works better than that in arbitrary poses.
• A large number of candidate poses are clustered by Dirichlet process mixtures for selecting true-positive poses based on the Bayes factor.
Related Work
Figure 1: Overview of the proposed method. Images each of which has a pose annotation and an action label (i.e., Figure 1 (a)) are used to acquire initial action-specific pose models (i.e., (b)). The a-th action-specific pose model is used for estimating human poses in training images with the label of the a-th action (i.e., (c)). Each estimated pose (i.e., (d)) is evaluated whether or not it is true-positive. This evaluation is achieved by a pose feature representing the configuration of all body parts (i.e., (e)). True-positive poses are selected also by outlier detection using Dirichlet process mixtures (i.e., (f)). These true-positive poses are employed as pose annotations (i.e., (g)) for re-learning the action-specific pose models. After iterative re-learning, all training images with pose annotations (i.e., (a) and (g)) are used for learning a final pose model (i.e., (h), a pose estimation model that is not action-specific). While this paper proposes three learning schemes, described in Sections 4, 5, and 6, this figure illustrates the third one, which contains all functions proposed in Sections 4, 5, and 6.

A number of methods for human pose estimation employed (1) deformable part models (e.g., pictorial structure models [7]) for globally optimizing an articulated human body and (2) discriminative learning for optimizing the parameters of the models [8]. In general, part connectivity in a deformable part model
is defined by image-independent quadratic functions for efficient optimization via distance transform. Image-dependent functions (e.g., [9,10]) disable the distance transform but improve pose estimation accuracy. In [11], on the other hand, image-dependent but quadratic functions enable the distance transform for representing the relative positions between neighboring parts. While the global optimality of the PSM is attractive, its ability to represent complex relations among parts and the expressive power of hand-crafted appearance features are limited compared to deep neural networks. Recently, deep convolutional neural networks (DCNNs) have improved human pose estimation as well as other computer vision tasks. While DCNNs are applicable to the PSM framework in order to represent the appearance of parts, as proposed in [11], DCNN-based models can also model the distribution of body parts. For example, a DCNN can directly estimate the joint locations [12]. In [13], multi-resolution DCNNs are trained jointly with a Markov random field. The localization accuracy of this method [13] is improved by coarse and fine networks in [14]. Recent approaches explore sequential structured estimation to iteratively improve the joint locations [15,16,17,18]. One of these methods, convolutional pose machines [18], is extended to real-time pose estimation of multiple people [19] and hands [20]. Ensemble modeling can also be applied to DCNNs for human pose estimation [21]. Pose estimation using DCNNs is also extended to a variety of scenarios such as personalized pose estimation in videos [22] and 3D pose estimation with multiple views [23]. As well as DCNNs accepting image patches, DNNs using multi-modal features are applicable to human pose estimation; multi-modal features extracted from an estimated pose (e.g., relative positions between body parts) are fed into a DNN for refining the estimated pose [24].
While the aforementioned advances improve pose estimation demonstrably, all of them require human pose annotations (i.e., skeletons annotated on an image) for supervised learning. The complexity of time-consuming pose annotation work leads to annotation errors in crowd sourcing, as described in [25]. To reduce time-consuming annotation in supervised learning, semi-and weakly-supervised learning are widely used.
Semi-supervised learning allows us to utilize a huge number of non-annotated images for various recognition problems (e.g., human action recognition [26], human re-identification [27], and face and gait recognition [28]). In general, semi-supervised learning annotates the images automatically by employing several cues in/with the images; for example, temporal consistency in tracking [29], clustering [30], multimodal keywords [31], and domain adaptation [32].
For human pose estimation also, several semi-supervised learning methods have been proposed. However, these methods are designed for more limited, simpler problems. For example, in [33,34], 3D pose models representing a limited variation of human pose sequences (e.g., only walking sequences) are trained by semi-supervised learning; in [33] and [34], GMM-based clustering and manifold regularization are employed for learning unlabeled data, respectively. For semi-supervised learning, not only a small number of annotated images but also a huge amount of synthetic images (e.g., CG images with automatic pose annotations) are useful with transductive learning [35].
In weakly-supervised learning, only part of the full annotations is given manually; in particular, only labels that are easy to annotate are provided. For human activities, full annotations may include the pose, region, and attributes (e.g., ID, action class) of each person. Since it is easier to provide the attributes than the pose and region, such attributes are often given as weak annotations. For example, only an action label is given to each training sequence, while the regions of a person (i.e., windows enclosing a human body) in frames are found automatically in [36]. Instead of a manually given action label, scripts are employed as weak annotations in order to find correct action labels of several clips in video sequences in [37]; action clips are temporally localized. Not only in videos but also in still images, weak annotations can provide highly contextual information. In [38], given an action label, a human window is spatially localized with an object used for this action.
Whereas pose estimation using only action labels is more difficult than human window localization described above, it has been demonstrated that the action-specific property of a human pose is useful for pose estimation (e.g. latent modeling of dynamics [40,41], switching dynamical models in videos [42], efficient particle distribution in multiple pose models in videos [43,44], and pose model selection in still images [45]).
Human Pose Estimation Model
This section introduces two base models for human pose estimation, deformable part models and DCNN-based heatmap models.
Deformable Part Models
A deformable part model is an efficient model for articulated pose estimation [7,8,11]. A tree-based model is defined by a set of nodes, V, and a set of links, E, each of which connects two nodes. Each node corresponds to a body part and has pose parameters (e.g., 2D image coordinates, orientation, and scale), which localize the respective parts. The pose parameters are optimized by maximizing the given score function consisting of a unary term, S_i(p_i), and a pairwise term, P_{i,j}(p_i, p_j), as follows:
f_\beta(I, P) = \sum_{i \in V} S_i(p_i) + \sum_{(i,j) \in E} P_{i,j}(p_i, p_j), \qquad (1)
where p_i and P denote the set of pose parameters of the i-th part and the set of p_i of all parts (i.e., P = (p_1, ..., p_{N(V)})^T, where N(V) denotes the number of nodes), respectively. S_i(p_i) is a similarity score of the i-th part at p_i. P_{i,j}(p_i, p_j) is a spring-based function that takes a greater value if the relative configuration of the pairwise parts, p_i and p_j, is probable.
In the discriminative training methodology proposed in [8], the parameters of the functions S_i(p_i) and P_{i,j}(p_i, p_j) are trained with pose-annotated positive and negative training images.
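To make Eq. (1) concrete, the following sketch scores one candidate pose under a tree-structured model with quadratic spring terms; the unary scorers, edge list, and spring parameters are illustrative stand-ins, not the trained model of [8].

import numpy as np

def dpm_score(pose, unary_scores, edges, springs):
    # Eq. (1): sum of unary appearance scores plus pairwise spring terms.
    # pose: (N, 2) array of part locations p_i.
    # unary_scores: list of callables, unary_scores[i](p_i) -> S_i(p_i).
    # edges: list of (i, j) pairs in the tree; springs[(i, j)] = (preferred_offset, weight).
    score = sum(unary_scores[i](pose[i]) for i in range(len(pose)))
    for i, j in edges:
        offset, w = springs[(i, j)]
        d = pose[j] - pose[i] - offset       # deviation from the preferred relative placement
        score -= w * float(d @ d)            # quadratic spring penalty realizes P_{i,j}
    return score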
DCNN-based Heatmap Models
Unlike deformable part models, recent DCNN-based human pose estimation methods (e.g., [46,47,48,49,50,18,51]) acquire the position of each body joint from its corresponding heatmap. The heatmap of each joint is outputted from a DCNN as shown in Figure 2. The position with the maximum likelihood in each heatmap is considered to be the joint position.
General DCNNs for human pose estimation consist of convolution, activation, and pooling layers. In order to capture local and spatially-contextual (e.g., kinematically-plausible) evidence for joint localization, smaller and wider convolutional filters are used, respectively. Further context is represented by sequential/iterative feedback of DCNN responses; see [18,51,17] for example. Figure 3 shows an example of heatmaps generated by iterative inference stages in [18], which was employed as a base model in our experiments.
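A small sketch of how joint locations are read off such heatmaps, and how multiple candidate peaks can be collected with a loose threshold (used later in Section 4); the threshold value is illustrative.

import numpy as np
from scipy.ndimage import maximum_filter

def joints_from_heatmaps(heatmaps):
    # heatmaps: (J, H, W) array; returns (J, 2) array of (x, y) peak locations and their scores.
    J, H, W = heatmaps.shape
    flat = heatmaps.reshape(J, -1)
    ys, xs = np.unravel_index(flat.argmax(axis=1), (H, W))
    return np.stack([xs, ys], axis=1), flat.max(axis=1)

def candidate_peaks(heatmap, thresh=0.1):
    # Local maxima above a loose threshold, used to form multiple candidate poses.
    local_max = (heatmap == maximum_filter(heatmap, size=3)) & (heatmap > thresh)
    ys, xs = np.nonzero(local_max)
    return np.stack([xs, ys], axis=1)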
Semi-supervised Pose Model Learning by Correct-pose selection using Full-body Pose Features
Our semi-supervised pose model learning uses two training sets. The first set consists of images each of which has its human pose annotation and action label. Each image in the second set has no annotation. The first and second sets are called the fully-supervised (FS) and unsupervised (US) sets, respectively. An initial pose model is learned from the FS set. This pose model is then used for the pose estimation of images in the US set. Note that a sole model is used in this section, unlike the action-specific models of the complete scheme illustrated in Figure 1. All estimated poses must be classified into true-positives and false-positives to use only true-positives for re-learning the pose model.
Examples of false-positives are shown in Figure 4. Among various poses, some (e.g., (a), (b), and (c)) are evidently far from plausible human poses; e.g., the left and right limbs overlap unnaturally in (c). Such atypical poses are obtained by the human pose models described in Section 3, because such a model optimizes only more or less local regions of the full body. Even if the relative locations of neighboring parts are plausible, the configuration of all parts might be implausible.
On the other hand, it is computationally possible to evaluate how plausible each optimized configuration of the parts is after the pose estimation process. In our semi-supervised learning, therefore, multiple poses are obtained from each training image in the US set by conventional pose estimation method(s), and each of them is evaluated as to whether it is plausible as the full-body configuration of a human body. With a DPM, multiple candidate poses are obtained with a loose threshold for score (1). With a DCNN-based model, all combinations of local maxima above loose thresholds in the heatmaps are regarded as candidate poses.
These candidate poses are then evaluated by a linear SVM to detect true-positive poses. This correct-pose-selection SVM (CPS-SVM) is trained with the following two types of samples:
Positive: Images and pose annotations in the FS set (Figure 5 (a)) are used as positive samples. To synthesize more samples, in each supervised training image, the end points of all limbs in the pose annotation are shifted randomly (Figure 5 (b)) within a predefined threshold of the PCP evaluation criterion [54,53].
Negative: Human pose estimation is applied to background images ( Figure 5 (c)) with no human region. Detected false-positives ( Figure 5 (d)) are used as negative samples.
From each pose in an image in the US set, the following two features are extracted and concatenated to be a pose representation (PR) feature for the CPS-SVM:
Figure 3: Heatmaps generated in iterative inference stages (1st to 6th) in a DCNN-based heatmap model [18]. The iterative process resolves confusions between similar body regions due to local image features and obtains a strong peak in the latter stages.

Configuration feature: A PR feature should represent the configuration of all body parts to differentiate between different human poses. Such features have been proposed for action recognition [55,56]. In the proposed method, the relational pose feature [55], modified for 2D x-y image coordinates, is used. The 2D relational pose feature [56] consists of three components: distances between all pairs of keypoints, orientations of the vectors connecting two keypoints, and inner angles between two vectors connecting all triples of keypoints. Given the 14 full-body keypoints in our experiments, the numbers of these three components are C(14,2) = 91, C(14,2) = 91, and 3·C(14,3) = 1092, respectively. In total, the relational pose feature is a 1274-D vector (a sketch of this feature computation is given after this list).
Appearance feature: HOG features [57] are extracted from the windows of all parts and used for a PR feature.
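The configuration part of the PR feature described above might be computed as follows, assuming 14 keypoints given as (x, y) coordinates; this is a sketch of the 2D relational pose feature, not the authors' exact code.

import numpy as np
from itertools import combinations

def relational_pose_feature(kp):
    # kp: (14, 2) array of keypoints -> 1274-D vector (91 distances + 91 orientations + 1092 angles).
    dists, orients, angles = [], [], []
    for i, j in combinations(range(len(kp)), 2):               # C(14, 2) = 91 pairs
        v = kp[j] - kp[i]
        dists.append(np.linalg.norm(v))
        orients.append(np.arctan2(v[1], v[0]))
    for i, j, k in combinations(range(len(kp)), 3):             # C(14, 3) = 364 triples
        for a, b, c in ((i, j, k), (j, i, k), (i, k, j)):        # the three inner angles per triple
            u, w = kp[a] - kp[b], kp[c] - kp[b]
            cos = np.dot(u, w) / (np.linalg.norm(u) * np.linalg.norm(w) + 1e-8)
            angles.append(np.arccos(np.clip(cos, -1.0, 1.0)))
    return np.concatenate([dists, orients, angles])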
The closest prior work to the CPS-SVM is presented in [58], which is designed for performance evaluation. This pose evaluation method uses a marginal probability distribution for each part as well as image and geometric features extracted from a window enclosing the upper body. Rather than such features, the configuration feature (i.e., relative positions between body parts [55,56]) is discriminative between different actions and is therefore adopted in our PR feature; action-specific modeling is described in Section 5.
Instead of the CPS-SVM, a DNN is used in [24] for selecting true-positive poses. Whereas DNNs are potentially powerful and actually outperform SVM-based methods in recent pose estimation papers (e.g., [12,15,59,13,14]), they in general require a large amount of training data to avoid overfitting. Since (1) our semi-supervised learning problem is assumed to have fewer supervised training data and (2) the PR feature is high-dimensional, the proposed method employs the SVM instead of DNNs.
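As an illustration of the CPS-SVM itself, a linear SVM can be trained on PR features of the positive and negative samples described above and then used to filter candidate poses; scikit-learn's LinearSVC is used here as a convenient stand-in for the linear SVM, and the regularization constant C is a placeholder.

import numpy as np
from sklearn.svm import LinearSVC

def train_cps_svm(pos_features, neg_features, C=1.0):
    # Train the correct-pose-selection SVM on PR features of positive/negative samples.
    X = np.vstack([pos_features, neg_features])
    y = np.concatenate([np.ones(len(pos_features)), np.zeros(len(neg_features))])
    return LinearSVC(C=C).fit(X, y)

def select_true_positives(cps_svm, candidate_features, candidate_poses):
    # Keep only the candidate poses classified as correct (label 1).
    keep = cps_svm.predict(np.asarray(candidate_features)) == 1
    return [pose for pose, k in zip(candidate_poses, keep) if k]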
Detected true-positive human poses in the US set are then used for re-learning a pose model with the FS set.
The aforementioned pose estimation, pose evaluation, and pose model re-learning phases can be repeated until no true-positive pose is newly detected from the US set. In the first iteration, the pose estimation and evaluation phases are respectively executed with the pose model and the CPS-SVM that are trained by only the fully-supervised data. These pose model and CPS-SVM are updated in the pose model re-learning phase and are used in the second or later iterations. All other settings are the same in all iterations. However, these phases were repeated only twice to avoid overfitting in the experiments shown in this paper.
This semi-supervised learning allows us to only re-learn human poses similar to those in the FS set. In this sense, this learning scheme is based on the smoothness assumption for semi-supervised learning [60].
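The overall loop of this section can be summarized schematically as below; train_pose_model, estimate_candidates, extract_pr_feature, and make_cps_training_set are placeholders for the components described above and are passed in as callables, and train_cps_svm refers to the sketch given earlier.

def semi_supervised_relearning(fs_images, fs_poses, us_images,
                               train_pose_model, estimate_candidates,
                               extract_pr_feature, make_cps_training_set,
                               n_iters=2):
    # Iteratively grow the training set with CPS-SVM-verified poses from the US set.
    model = train_pose_model(fs_images, fs_poses)                       # initial supervised model
    cps = train_cps_svm(*make_cps_training_set(fs_images, fs_poses))
    extra_images, extra_poses = [], []
    for _ in range(n_iters):                                            # repeated twice in our experiments
        for img in us_images:
            for pose in estimate_candidates(model, img):                # loose-threshold candidates
                if cps.predict([extract_pr_feature(img, pose)])[0] == 1:
                    extra_images.append(img)
                    extra_poses.append(pose)
        model = train_pose_model(fs_images + extra_images, fs_poses + extra_poses)
        cps = train_cps_svm(*make_cps_training_set(fs_images + extra_images,
                                                   fs_poses + extra_poses))
    return model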
Semi-and Weakly-supervised Pose Model Learning with Action-specific Pose Models
In this section, semi-supervised learning, proposed in Section 4, is extended with weakly-supervised learning. Each image in the weakly-supervised (WS) set is annotated with its action label. This WS set is used for our weakly-supervised learning instead of the US set.
The CPS-SVM proposed in the previous section is designed under the assumption that the observed configurations of body parts are limited. Figure 6 shows pose variations in athletics, badminton, and gymnastics, which are included in the LSP dataset [4]. It can be seen that the pose variation depends on the action. Based on this assumption, the proposed semi-and weakly-supervised learning generates action-specific pose models and CPS-SVMs. Since action-specific models are useful under the assumption of pose clusters depending on the action, the cluster assumption [60] is utilized in this learning scheme. Initially, a general pose model is learned using all training images in the FS set. This initial model is then optimized into each action-specific model by using only its respective images in the FS set. For this optimization, in a DPM pose estimation model, the general model is used as an initial model in order to re-optimize the parameters of the two functions S_i(p_i) and P_{i,j}(p_i, p_j) in Eq. (1) by using only the training images of each action. In a DCNN-based model, the general model is fine-tuned using the training images of each action.
The a-th action-specific pose model is used to estimate candidate poses in images with the a-th action label in the WS set. Each estimated pose is evaluated as to whether or not it is correct by the a-th action-specific CPS-SVM. If the estimated pose is considered correct, this pose and its respective image are used for iterative re-learning of the a-th action-specific pose model with the FS set, as with the semi-supervised learning described in Section 4. After the iterative re-learning scheme finishes, a pose model is learned from all actions' images used in this re-learning (i.e., all images in the FS set and the WS images in which correct human poses are selected).
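A schematic sketch of deriving the action-specific models in the DCNN case is given below; fine_tune is a placeholder for the actual training loop.

import copy

def build_action_specific_models(general_model, fs_by_action, fine_tune):
    # fs_by_action: dict mapping action label -> (images, pose_annotations) from the FS set.
    models = {}
    for action, (images, poses) in fs_by_action.items():
        models[action] = fine_tune(copy.deepcopy(general_model), images, poses)
    return models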
Semi-and Weakly-supervised Pose Model Learning with Outlier Detection by Clustering based on Dirichlet Process Mixtures
A key disadvantage of the learning schemes described in Sections 4 and 5 is that the CPS-SVM allows us to only extract human poses similar to those included in the FS set. In other words, it is difficult to re-learn poses whose 2D configurations of body parts are plausible but quite different from those in the FS set. The method proposed in this section extracts more true-positive poses based on the assumption that similar true-positives compose cluster(s) in the PR feature space of each action.
Let N^{(P)} and N^{(C)} denote the numbers of all candidate poses and of their clusters, respectively. While N^{(C)} is unknown, clustering with Dirichlet process mixtures [61], expressed in Eq. (2), or other non-parametric Bayesian clustering can estimate N^{(C)} and simultaneously assign the PR features of the candidate poses to the clusters.
G \mid \{\gamma, G_0\} \sim \mathrm{DP}(\gamma, G_0), \quad \theta_i \mid G \sim G, \quad x_i \mid \theta_i \sim p(x \mid \theta_i), \qquad (2)
where DP(γ, G_0) denotes a Dirichlet process with scaling factor γ and base distribution G_0. Clustering with Dirichlet process mixtures tends to produce clusters with only a few features [62,63]. These small clusters are regarded as outliers containing false-positive candidate poses and must be removed in our re-learning scheme. This outlier detection is achieved by [64], which evaluates the Bayes factor between an original set of clusters and its reduced set generated by merging small (outlier) clusters with other clusters. This method [64] is superior to similar methods because a Bayesian inference mechanism allows us to find small outlier clusters more robustly than simple thresholding (e.g., [65]).
This outlier detection [64] is based on modified Dirichlet process mixtures, presented in Eq. (3). In Eq. (3), the parameter set θ = {θ_1, ..., θ_{N^{(P)}}} in Eq. (2) is decomposed into two parameters, φ and z; φ = {φ_1, ..., φ_{N^{(C)}}} is the set of N^{(C)} unique values in θ, and z = {z_1, ..., z_{N^{(P)}}} is the set of N^{(P)} cluster membership variables such that z_j = k if and only if θ_j = φ_k. Note that the number of unique values, N^{(C)}, is equal to the number of pose clusters defined above. If θ_i = θ_j, features x_i and x_j are in the same cluster; X = {x_1, ..., x_{N^{(P)}}} is the set of all PR features of the candidate poses in this paper.
p(z) \propto \prod_{k=1}^{N^{(C)}} \alpha\,\Gamma(N_k^{(P)}), \qquad \phi_k \sim G_0(\phi_k), \qquad x_i \mid z_i = k, \phi_k \sim p(x_i \mid \phi_k), \qquad (3)
where p(z) is a prior mass function obtained by the Polya urn scheme [61], and Γ(N_k^{(P)}) is the gamma function evaluated at the number of PR features in the k-th cluster (denoted by N_k^{(P)}). If z_i = z_j, x_i and x_j are in the same cluster. This model is a type of product partition model [66].
For the outlier detection, first of all, an initial partition, z_I, is obtained by clustering with Dirichlet process mixtures [67]. Let M_I be the union of all partitions formed by any sequence of merge operations on clusters in z_I. For practical use, M_I is produced from z_I by merging only small clusters having a few PR features.
The basic criterion of [64] for outlier detection from z_I is the cost of the model complexity of each partition in M_I. Outliers can be detected by evaluating the evidence favoring a complex model over a simpler model with no or fewer outliers. The Bayes factor, which is used in model selection problems, allows us to evaluate this criterion (e.g., [68]). Given the PR features, the plausibilities of the two models z_I and z_m ∈ M_I are evaluated by the following Bayes factor K_{I,m}:
K_{I,m} = \frac{p(X \mid z_I)}{p(X \mid z_m)}.
A lower bound of K_{I,m} supporting z_I rather than z_m is obtained under the posterior condition and the prior mass function p(z) \propto \prod_{k=1}^{N^{(C)}} \alpha\,\Gamma(N_k^{(P)}) in Eq. (3):

p(z_I \mid X) > p(z_m \mid X)
\;\Leftrightarrow\; p(z_I)\,p(X \mid z_I) > p(z_m)\,p(X \mid z_m)
\;\Leftrightarrow\; \frac{p(X \mid z_I)}{p(X \mid z_m)} > \frac{1}{\alpha^{\nu}} \cdot \frac{\prod_{k=1}^{N_m^{(C)}} \Gamma(N_{m,k}^{(P)})}{\prod_{k=1}^{N_I^{(C)}} \Gamma(N_{I,k}^{(P)})}, \qquad (4)
where ν = N_I^{(C)} − N_m^{(C)} is the number of clusters merged to arrive at z_m, and N_{m,k}^{(P)} is the number of PR features in the k-th cluster of the m-th partition. In the proposed method, the numbers of PR features, N_{I,k}^{(P)} and N_{m,k}^{(P)}, in inequality (4) are weighted by the pose detection score (1) as follows:
\frac{p(X \mid z_I)}{p(X \mid z_m)} > \frac{1}{\alpha^{\nu}} \cdot \frac{\prod_{k=1}^{N_m^{(C)}} \Gamma\bigl(\sum_{f=1}^{N_{m,k}^{(P)}} T_{m,k,f}\bigr)}{\prod_{k=1}^{N_I^{(C)}} \Gamma\bigl(\sum_{f=1}^{N_{I,k}^{(P)}} T_{I,k,f}\bigr)}, \qquad (5)
where T_{m,k,f} denotes the normalized score of the f-th pose in the k-th cluster of the m-th partition; the scores are normalized linearly so that all of them are distributed between 0 and 1.
In inequality (4), the left-hand side is the Bayes factor K_{I,m}, which is computed for all possible partition pairs (i.e., z_I and z_m ∈ M_I) by the method of Basu and Chib [69] in our proposed method. The lower bound of K_{I,m} is defined with the parameter α given in Eq. (3). To determine α, the scale provided by Kass and Raftery [70] gives us an intuitive interpretation. Given α, the merged small clusters in M_I are detected as outliers only if z_I satisfies inequality (4) for all z_m ∈ M_I.
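A minimal sketch of the test in inequality (5) is given below. It assumes that the log marginal likelihoods log p(X|z_I) and log p(X|z_m) have already been estimated (e.g., with the Basu-Chib method [69], which is not reproduced here), and that the per-cluster normalized scores T are available as arrays; all names are illustrative.

```python
import numpy as np
from scipy.special import gammaln

def keep_initial_partition(log_px_zI, log_px_zm, scores_I, scores_m, alpha):
    """Return True if inequality (5) favors z_I over the reduced partition z_m.

    scores_I / scores_m: lists of per-cluster arrays of normalized scores T in [0, 1].
    alpha: the prior parameter in Eq. (3).
    """
    nu = len(scores_I) - len(scores_m)            # number of merged clusters
    log_rhs = (-nu * np.log(alpha)
               + sum(gammaln(np.sum(t)) for t in scores_m)
               - sum(gammaln(np.sum(t)) for t in scores_I))
    log_bayes_factor = log_px_zI - log_px_zm      # log K_{I,m}
    return log_bayes_factor > log_rhs

# The merged small clusters are treated as outliers only if this test passes
# for every reduced partition z_m in M_I.
```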
In the proposed method, re-learning using the CPS-SVM is first repeated to update the pose models. The updated pose models are then used for re-learning with clustering and outlier detection. This re-learning is repeated until no new training image emerges for re-learning. Note that all images in the WS set are used in the process of pose estimation and outlier detection in all iterations.
Experimental Results
Experimental Setting
The proposed method was evaluated with the publicly available LSP, LSP extended [4], and MPII human pose [5] datasets. Images in the LSP dataset were collected from Flickr using eight action labels (i.e., text tags associated with each image), namely athletics, badminton, baseball, gymnastics, parkour, soccer, tennis, and volleyball. However, each image is associated not with a single tag but with a number of tags, including erroneous ones. On the other hand, an action label is given to each image in the MPII dataset, but the action labels are more fine-grained (e.g., serve, smash, and receive in tennis) than in the LSP. These incomplete and uneven annotations make it difficult to automatically give one semantically valid action label to each image in the datasets.
For our experiments, therefore, action labels shared among the datasets were defined as follows: athletics, badminton, baseball, gymnastics 2, soccer, tennis, volleyball, and general. Whilst one of these eight labels was manually associated with each of the 2000 images (1000 training images and 1000 test images) in the LSP dataset and the 10000 images in the LSP extended dataset, several fine-grained action classes in the MPII dataset were merged into one of these eight classes 3. Table 1 shows the number of training and test data of the eight action classes in each dataset.
For pose estimation, we used the methods proposed in [11] and [18], because [11] and [18] are among the state-of-the-art methods using the PSM and the DCNN, respectively, as shown in Table 2. Each of the two methods [11,18] obtained candidate poses by loose thresholding, as described in Section 4. More specifically, in [18], the candidate poses were generated from joint positions (i.e., local maxima in heatmaps) that were extracted not only in the final output but also in all iterative inference stages (e.g., Figure 3). Table 2 shows quantitative evaluation results. Note that all methods except our proposed methods (i.e., (a) - (f), (j), (k), (o), (p), and (t) - (v)) are based on supervised learning. The parameters of our proposed methods (i.e., the threshold of the PCP criterion for positive sample selection in the CPS-SVM and α of the product partition model) were set as follows: the PCP threshold was set to 0.7, which was determined empirically, and α = 1/3, which was selected based on the scale of Kass and Raftery [70] so that false-positives are avoided as much as possible even at the expense of some true-positives. For all of our proposed methods (i.e., (g) - (i), (l) - (n), (q) - (s), and (w) - (y)), the FS set consisted of 500 images in the LSP. The remaining 500 images in the LSP were used as the WS set. Additional WS sets collected from the LSP extended and the MPII were used in "(q) - (s)" and "(w) - (y)". While "(q) - (s)" used only the LSP extended, both the LSP extended and the MPII were used in "(w) - (y)".
Quantitative Comparison
Note that it is impossible to compare the proposed methods (i.e., semi- and weakly-supervised learning) with supervised learning methods on a completely fair basis. Even if the same set of images is used, the amount of annotations is different between semi/weakly-supervised and fully-supervised learning schemes. In general, the upper limit of the expected accuracy of the proposed method is the result of its baseline, if the training data used in the baseline is split into the FS and WS sets for the proposed method. As shown in Table 2, Baseline-2 [18] is superior to Baseline-1 [11] in most joints. The same tendency is observed also between our proposed methods using Baseline-1 and Baseline-2. It can also be seen that, in all training datasets, the complete version of the proposed method (i.e., Ours-weakC) is the best among the three variants of the proposed methods. In what follows, only Ours-weakC-2 is compared with related work.

2 The MPII has no action label related to "parkour", and human poses in "parkour" are similar to those in "gymnastics". So "parkour" is merged into "gymnastics" in the LSP in our experiments.

3 The activity IDs of the training data extracted from the MPII dataset are "61, 126, 156, 160, 241, 280, 307, 549, 640, 653, 913, 914, 983", "643, 806", "348, 353, 522, 585, 736", "328, 927", "334, 608, 931", "130, 336, 439, 536, 538, 934", "30, 196, 321, 674, 936, 975" for athletics, badminton, soccer, baseball, gymnastics, soccer, tennis, and volleyball, respectively.

Table 1: The number of human poses in each action class is shown. In our method, the training set of the LSP is divided into 500 images for fully-supervised training and 500 images for weakly-supervised training, which are Train (FS) and Train (WS), respectively. Note that the action classes of the test data in the LSP are shown just for information, while the action labels are not used for testing. In the MPII dataset, action classes for testing images are publicly unavailable.
Comparison between (k) Baseline-2 [18] and (n) Ours-weakC-2 is fair in terms of the amount of training images, although the pose annotations of 500 of those images were not used in the proposed method. The proposed method is comparable with the baseline in all joints. Furthermore, (n) Ours-weakC-2 outperforms (j) Baseline-2 (HALF). This is a case where the amount of annotated data is equal between the baseline and the proposed method, while the proposed method also uses extra WS data (i.e., the remaining 500 training images in the LSP).
In the experiments with the two other training sets (i.e., "LSP+LSPext" and "LSP+LSPext+MPII"), only the WS set increases while the FS set is unchanged from the experiments with the LSP dataset. As expected, in these two experiments, the difference in performance between the baseline and the proposed methods gets larger than in the experiments with the LSP. However, we can see a performance improvement in the proposed methods as the WS set increases: (n) 73.6 % < (s) 79.5 % < (y) 81.4 % in the mean accuracy.
Detailed Analysis
Effectiveness in Each Action. As discussed in Section 7.2, the mean performance gain in our semi- and weakly-supervised learning is smaller than that in supervised learning: 81.4 − 68.7 = 12.7 in Ours-weakC-2 vs. 90.5 − 68.7 = 21.8 in Baseline-2 on average. Here we investigate the performance gain in each action class rather than on average. Table 3 shows the PCK-2.0 score of each action class on the test set of the LSP dataset. We focus on the performance gains normalized by the number of training human poses of each action class (i.e., the values within brackets); the number of human poses in each action class is shown in Table 1. The gap of the normalized performance gains between the baseline [18] and the proposed method (i.e., Ours-weakC-2 Gain [(j) → (y)]) is smaller in the "gym+parkour" and "general" classes than in the other classes. The performance gains in the other classes are better because their human poses are action-dependent and easy to model, while the "gym+parkour" and "general" classes include a large variety of human poses; see Figure 6 to visually confirm the pose variations of "athletics" and "gymnastics".
Even the best action-specific gain in Ours-weakC-2 (i.e., 17.2 = 86.4 − 69.2 in "soccer") is less than the mean gain of Baseline-2 (i.e., 21.8 = 90.5 − 68.7). However, in contrast to the mean gain of Ours-weakC-2 (i.e., 12.7 = 81.4 − 68.7), the best action-specific gain is closer to the mean gain of Baseline-2. In addition, the best action-specific gain normalized by the number of training data (i.e., 0.078 = 17.2/220 in "soccer") is reasonable compared with the normalized mean gain in Baseline-2 (i.e., 0.00057 = 21.8/(500 + 10000 + 27945), where the numbers of training human poses in the LSP, LSPext, and MPII are 500, 10000, and 27945, respectively).
The results above validate that our proposed method works better in actions where a limited variety of human poses is observed.
Effectiveness of re-learning. The effectiveness of re-learning depends on the number of true-positives selected from the WS set. For our method using [11] with the 500 FS and 500 WS images in the LSP dataset, Figure 7 shows (1) the rate of images in which true-positive pose(s) are included in the candidate poses (indicated by "Detected TP") and (2) the rate of images in which true-positive pose(s) are correctly selected from the candidate poses (indicated by "Selected TP"). In this evaluation, a candidate pose is considered true-positive if all of its parts satisfy the PCP criterion [54,53]. Note that the results shown in Figure 7 were measured after the iterative learning ended.
Table 2: Quantitative comparison using test data in the LSP dataset and the strict PCK-0.2 metric [71]. We used the person-centric annotations given in [25]. Ours-semi (g, l, q, and w), Ours-weak (h, m, r, and x), and Ours-weakC (i, n, s, and y) correspond to our semi-supervised learning (Section 4), semi- and weakly-supervised learning (Section 5), and semi- and weakly-supervised learning with outlier detection (Section 6), respectively. Our methods are implemented based on two different baselines, Chen & Yuille [11] (Baseline-1 in the table) and Wei et al. [18] (Baseline-2 in the table). If the proposed method is implemented with Baseline-1/2, it is called Ours-semi-1/2, Ours-weak-1/2, and Ours-weakC-1/2. Each result is obtained on a different training dataset specified at the top of each set: LSP, LSP+LSPext, and LSP+LSPext+MPII. For all of our proposed methods, the FS set consisted of only 500 images in the LSP and all remaining images were used as the WS set. For reference, the two baselines are also evaluated with only 500 images in the LSP; (e) and (j). For a fair comparison in terms of the amount of the FS set, (e) and (j) should be compared with our proposed methods. In each training set, the best scores among supervised learning methods and among methods that used only 500 images for the FS set are colored red and blue, respectively, in each column.

Table 3: PCK-2.0 score of each action class on the test data of the LSP dataset. The scores of the baseline [18] and our method are shown; namely, (j) Baseline-2 (HALF), (v) Baseline-2 with LSP+LSPext+MPII, and (y) Ours-weakC-2 with LSP+LSPext+MPII in Table 2. Gain [(α) → (β)] = (S_β / S_α) × 100, where S_α and S_β denote the scores of (α) and (β), {α, β} ∈ {j, v, y}. Since the numbers of training images are inequivalent among action classes (see Table 1), Gain [(j) → (y)], which is the performance gain of Ours-weakC-2, is linearly normalized by the number of training human poses of each action and shown within brackets.

In Figure 7, it can be seen that only a few poses were selected in gymnastics even in "Ours-weakC". This is natural because (1) the distribution of possible poses in gymnastics is wide relative to the number of its training images and (2) the overlaps between two sparse distributions (i.e., poses observed in the FS and WS sets of gymnastics) may be small. In other actions, on the other hand, the number of selected poses could be increased by "Ours-weak" and "Ours-weakC" in contrast to "Ours-semi". The difference between the two rates (i.e., "Detected TP" and "Selected TP") represents the number of true poses that can be selected correctly from a set of candidate poses. This is essentially equivalent to the precision of the pose selection methods. In addition to precision, recall is also crucial because true poses should be selected as frequently as possible:
\mathrm{Precision} = \frac{\mathrm{Nmb}(ATP \cap STP)}{\mathrm{Nmb}(STP)}, \qquad (6)

\mathrm{Recall} = \frac{\mathrm{Nmb}(ATP \cap STP)}{\mathrm{Nmb}(CP \cap ATP)}, \qquad (7)
where ATP, STP, and CP denote, respectively, the set of all true poses in the WS set (i.e., one per image in the WS set), the set of candidate poses selected as true ones by the pose selection methods, and the set of all candidate poses detected from the WS set. Nmb(P) is a function that counts the number of poses included in a pose set P. These precision and recall rates are shown in Figure 8. From the figure, it can be seen that the precision rates are significantly high in almost all cases; that is, most selected poses are true-positive. Compared with the precision rates, the recall rates are lower; that is, many false-negatives are not used for re-learning. This means that the proposed pose selection approaches and parameters were conservative, so that only reliable poses are selected and used for re-learning.

Figure 9: Convergence in iterative re-learning steps on the LSP dataset. The vertical and horizontal axes indicate the PCK-0.2 score and the number of iterations, respectively. The 0-th iteration is executed only with the supervised training data.

Figure 9 shows the convergence histories of (g) Ours-semi-1, (h) Ours-weak-1, and (i) Ours-weakC-1, which are listed in Table 2. After a big improvement in the first iteration, the second iteration can also improve the score, but the improvement is saturated there. In the worst case, the score decreased, as shown in the third or later iterations of Ours-weakC-1. This is caused by overfitting and false-positive samples:
• Overfitting occurs when only similar samples are detected by the CPS-SVM and used for model re-learning. Iterative sample detection using newly-detected similar samples possibly leads to detecting only similar samples.
• If false-positives are detected by the CPS-SVM, the iterative detections using those false-positives may lead to detecting more false-positives.
Following the above results, iterations are repeated only twice in our experiments as described in Section 7.1.
Effect of Data Augmentation for CPS-SVM. The effect of the PCP threshold parameter was examined with the test data of the LSP (Figure 10). For simplicity, only the results of the complete learning scheme with the training data of "LSP" (i.e., (i) Ours-weakC-1 in Table 2) are shown. Compared with the growth of the detected and selected true-positive poses (indicated by the blue and red lines, respectively, in the figure) as the threshold increases, the number of false-positives (indicated by the green line) increases significantly. This may cause the decrease in the accuracy of pose estimation (indicated by the purple bars) when the threshold is 0.8 or above.
Distributions of Detected True-positives. For validating the effect of the semi- and weakly-supervised learning scheme for selecting true-positives, Figure 11 visualizes the distribution of PR features in Ours-weakC-1. While all true-positives selected by the CPS-SVM (indicated by green) are close to poses in the FS set (indicated by blue), several true-positives selected by clustering (indicated by red) are far from the FS set, as expected.

More Unsupervised Data. For our proposed scheme, unsupervised (US) data for semi-supervised learning is much easier to collect than weakly-supervised (WS) data. Here, the performance gain with a larger US set is investigated. For the US set, the COCO 2016 keypoint challenge dataset [75] was used, while no pose annotations in this dataset were used for our experiments. In total, over 126K human poses from the COCO were added to the US set.
The results are shown in Table 4. Compared with the huge size of the US set, the performance gain is limited (i.e., 78.3 − 76.3 = 2.0). The performance might be almost saturated because only human poses that are similar to those included in the FS set can be detected and used for model re-learning in our semi-supervised learning. For further investigation, weakly-supervised learning using such a huge training set is an interesting future research direction, although it requires action labels in the WS set.
Qualitative Results. Several pose estimation results are shown in Figures 12 and 13. In both figures, Baseline-2 [18] and Ours-weakC-2 are trained on half of the LSP and on LSP+LSPext+MPII, respectively; namely, the former and the latter correspond to (j) and (y), respectively. In Figure 12, the results of all keypoints are improved and localized successfully by our method. In Figure 13, on the other hand, one or more keypoints are mislocalized by our method.
From the results in Figure 13, we can find several limitations of our proposed method. In (1), (2), and (3), a pitching motion is observed. While a large amount of training data for this kind of motion is included in the "baseball" class, body poses in this class are diverse (e.g., pitching, batting, running, fielding), which makes it difficult to model the pose variation in this class. This difficulty can possibly be suppressed by more fine-grained action grouping. While (4) and (5) are "parkour" and "general", respectively, these poses are not similar to any training samples in their respective action classes. In (6), overlapping people make pose estimation difficult. For this problem, a base algorithm should be designed for multi-person pose estimation (e.g., [74,19]).

Figure 12: Improvement cases by our proposed method, Ours-weakC-2. The compared results are (j) Baseline-2 [18] (HALF) and (y) Ours-weakC-2 (LSP+LSPext+MPII).
More Quantitative Comparison. Pose estimation accuracy was also evaluated with the test data of the MPII dataset (Table 5). For comparison, our semi- and weakly-supervised learning scheme with clustering using the training images of "LSP+LSPext+MPII", which is equal to (y) Ours-weakC-2 in Table 2, is evaluated because it is the best among all of our proposed schemes. Only half of the training images in the MPII were used for the FS set in our method. While our method used a smaller amount of human pose annotations (i.e., 13.7K, 29K, and 40K annotations in our method, the MPII, and "LSP+LSPext+MPII", respectively), the effectiveness of semi- and weakly-supervised learning is validated in the comparison between our method and the base method [18] using only half of the training data in the MPII (i.e., 83.4% vs. 75.9% in the mean, which are underlined in Table 5).
Concluding Remarks
We proposed semi- and weakly-supervised learning schemes for human pose estimation. While semi- and weakly-supervised learning schemes are widely used for object localization and recognition tasks, this paper demonstrated that such schemes are also applicable to human pose estimation in still images. The proposed schemes extract correct poses from training images with no human pose annotations based on (1) pose discrimination on the basis of the configuration of all body parts, (2) action-specific models, and (3) clustering and outlier detection using Dirichlet process mixtures. These three functionalities allow the proposed semi- and weakly-supervised learning scheme to outperform its baselines using the same amount of human pose annotations.
Future work includes candidate pose synthesis and true-positive pose selection using generative adversarial nets [77], which can synthesize realistic data from training data. Since candidate pose synthesis and true-positive pose selection play important roles in our proposed method, further improvement of these schemes should be explored.
Experiments with more training data are also important. Such experiments would investigate and reveal the properties of the proposed schemes; for example, (1) the relationship between the scale of the WS set and the estimation performance and (2) the positive/negative effects of true-positives/false-positives. While our weakly-supervised learning scheme needs an action label in each training image, unsupervised learning is more attractive for increasing the amount of training images. Automatic action labeling/recognition in training images would allow us to extend our weakly-supervised learning to unsupervised learning.
Figure 2: Common process flow of pose estimation using DCNN-based heatmap models.

Figure 4: False-positive poses estimated by the tree-based model [52]. Green, yellow, pink, skyblue, red, and blue indicate the head, torso, right arm, left arm, right leg, and left leg, respectively. While only a few parts are incorrectly localized in (d) and (e), the full body is far from plausible human poses in (a), (b), and (c).

Figure 5: How to make positive and negative samples for the correct-pose-selection SVM (CPS-SVM).

Figure 6: Pose variations based on different actions.

Figure 7: Quantitative evaluation of candidate pose estimation and true-positive pose selection in the LSP dataset. Detected TP is incremented if a set of candidate poses includes a true-positive in each image. Selected TP is incremented if a true-positive is selected by the CPS-SVM in each image. The vertical axis indicates the rate of images with detected/selected TP to all training images in the WS set.

Figure 8: Quantitative evaluation of precision and recall rates for true-positive pose selection in the LSP dataset.

Figure 10: Effects of the PCP threshold parameter, whose value is indicated on the horizontal axis. The left-hand vertical axis indicates the rates of detected and selected true-positives and the accuracy of pose estimation, while the right-hand one indicates the rate of false-positives included in the selected poses.

Figure 11: Distribution of PR features of training images (in the "soccer" class of the LSP). Blue, green, red, and skyblue points indicate annotated poses in the FS set, true-positives selected by the CPS-SVM, true-positives selected by clustering, and false-negatives, respectively. Note that the distance between PR features in this 2D space, given by PCA, is not identical to that in the original PR feature space; even if two poses are closer/farther in this figure, they may be farther/closer in the original PR feature space.

Figure 13: Failure cases by our proposed method, Ours-weakC-2. The action class of each example is as follows. (1), (2), and (3): Baseball. (4): Parkour. (5): General. (6): Soccer.
Table 4: Quantitative results of our semi-supervised training scheme using more unsupervised data obtained from the COCO 2016 keypoint challenge dataset. This scheme is evaluated with the test data of the LSP dataset and the strict PCK-0.2 metric [71]. The results of (v) Baseline-2 [18] and (w) Ours-semi-2 using LSP+LSPext+MPII are also shown for reference.

Method                  Head  Shoulder  Elbow  Wrist  Hip   Knee  Ankle  Mean
LSP+LSPext+MPII
(v) Baseline-2 [18]     97.8  92.5      87.0   83.9   91.5  90.8  89.9   90.5
(w) Ours-semi-2         92.4  80.8      70.3   65.7   82.5  73.3  68.8   76.3
LSP+LSPext+MPII+COCO
(z) Ours-semi-2         95.5  84.1      71.8   65.9   85.9  74.2  70.6   78.3
Table 5: Quantitative comparison using test data in the MPII dataset evaluated by PCKh-0.5 [5]. Our proposed method (i.e., Ours-weakC-2) used 9040 images in the MPII (i.e., half of the entire images) for the FS set and the other images in the "LSP+LSPext+MPII" dataset for the WS set. On the other hand, all images and annotations in the MPII and "LSP+LSPext+MPII" were used for training in [74,50,76,49,46] (shown in the upper rows of the table) and [73,18] (shown in the lower rows), respectively. For reference, the results of the baseline [18] that used only half of the entire images in the MPII (i.e., Baseline-2 (HALF) in the table) are shown. The best scores among supervised learning methods and among methods that used only 9040 images for the FS set are colored red and blue, respectively.

Method                          Head  Shoulder  Elbow  Wrist  Hip   Knee  Ankle  Mean
MPII
Insafutdinov et al. [74]        97.4  92.7      87.5   84.4   91.5  89.9  87.2   90.1
Lifshitz et al. [50]            97.8  93.3      85.7   80.4   85.3  76.6  70.2   85.0
Gkioxary et al. [76]            96.2  93.1      86.7   82.1   85.2  81.4  74.1   86.1
Bulat and Tzimiropoulos [49]    97.9  95.1      89.9   85.3   89.4  85.7  81.7   89.7
Newell et al. [46]              98.2  96.3      91.2   87.1   90.1  87.4  83.6   90.9
Baseline-2 [18] (HALF)          94.8  87.7      76.2   66.4   75.2  64.7  60.0   75.9
LSP+LSPext+MPII
Pishchulin et al. [73]          94.1  90.2      83.4   77.3   82.6  75.7  68.6   82.4
Baseline-2 [18]                 97.8  95.0      88.7   84.0   88.4  82.8  79.4   88.5
Ours-weakC-2                    96.9  92.9      84.6   78.6   84.6  75.8  70.9   83.4
While our proposed method shifts only limbs to accept subtle mismatches between an estimated pose and image cues, a greater variety of positive samples can be synthesized by image deformation according to the shifted limbs [53].
A Krizhevsky, I Sutskever, G E Hinton, Imagenet classification with deep convolutional neural networks, in: NIPS. A. Krizhevsky, I. Sutskever, G. E. Hinton, Imagenet classification with deep convolutional neural networks, in: NIPS, 2012.
Deepface: Closing the gap to human-level performance in face verification. Y Taigman, M Yang, M Ranzato, L Wolf, CVPRY. Taigman, M. Yang, M. Ranzato, L. Wolf, Deepface: Closing the gap to human-level performance in face verification, in: CVPR, 2014.
Learning to parse images of articulated bodies. D Ramanan, NIPSD. Ramanan, Learning to parse images of articulated bodies, in: NIPS, 2006.
Clustered pose and nonlinear appearance models for human pose estimation. S Johnson, M Everingham, BMVCS. Johnson, M. Everingham, Clustered pose and nonlinear appearance models for human pose estimation, in: BMVC, 2010.
M Andriluka, L Pishchulin, P V Gehler, B Schiele, 2d human pose estimation: New benchmark and state of the art analysis. CVPRM. Andriluka, L. Pishchulin, P. V. Gehler, B. Schiele, 2d human pose estimation: New benchmark and state of the art analysis, in: CVPR, 2014.
O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, L. Fei-Fei, ImageNet Large Scale Visual Recognition Challenge (2014). arXiv:1409.0575.
Pictorial structures for object recognition. P F Felzenszwalb, D P Huttenlocher, International Journal of Computer Vision. 611P. F. Felzenszwalb, D. P. Huttenlocher, Pictorial structures for object recognition, International Journal of Computer Vision 61 (1) (2005) 55- 79.
Object detection with discriminatively trained part-based models. P F Felzenszwalb, R B Girshick, D A Mcallester, D Ramanan, IEEE Trans. Pattern Anal. Mach. Intell. 329P. F. Felzenszwalb, R. B. Girshick, D. A. McAllester, D. Ramanan, Object detection with discriminatively trained part-based models, IEEE Trans. Pattern Anal. Mach. Intell. 32 (9) (2010) 1627-1645.
Cascaded models for articulated pose estimation. B Sapp, A Toshev, B Taskar, ECCVB. Sapp, A. Toshev, B. Taskar, Cascaded models for articulated pose esti- mation, in: ECCV, 2010.
Articulated pose estimation with parts connectivity using discriminative local oriented contours. N Ukita, CVPRN. Ukita, Articulated pose estimation with parts connectivity using dis- criminative local oriented contours, in: CVPR, 2012.
Articulated pose estimation by a graphical model with image dependent pairwise relations. X Chen, A L Yuille, NIPSX. Chen, A. L. Yuille, Articulated pose estimation by a graphical model with image dependent pairwise relations, in: NIPS, 2014.
A Toshev, C Szegedy, Deeppose: Human pose estimation via deep neural networks. CVPRA. Toshev, C. Szegedy, Deeppose: Human pose estimation via deep neu- ral networks, in: CVPR, 2014.
Joint training of a convolutional network and a graphical model for human pose estimation. J J Tompson, A Jain, Y Lecun, C Bregler, NIPSJ. J. Tompson, A. Jain, Y. LeCun, C. Bregler, Joint training of a convo- lutional network and a graphical model for human pose estimation, in: NIPS, 2014.
J Tompson, R Goroshin, A Jain, Y Lecun, C Bregler, Efficient object localization using convolutional networks. CVPRJ. Tompson, R. Goroshin, A. Jain, Y. LeCun, C. Bregler, Efficient object localization using convolutional networks, in: CVPR, 2015.
Pose machines: Articulated pose estimation via inference machines. V Ramakrishna, D Munoz, M Hebert, J A Bagnell, Y Sheikh, ECCVV. Ramakrishna, D. Munoz, M. Hebert, J. A. Bagnell, Y. Sheikh, Pose machines: Articulated pose estimation via inference machines, in: ECCV, 2014.
S. Singh, D. Hoiem, D. A. Forsyth, Learning a sequential search for landmarks, in: CVPR, 2015.
Human pose estimation with iterative error feedback. J Carreira, P Agrawal, K Fragkiadaki, J Malik, CVPRJ. Carreira, P. Agrawal, K. Fragkiadaki, J. Malik, Human pose estimation with iterative error feedback, in: CVPR, 2016.
S.-E Wei, V Ramakrishna, T Kanade, Y Sheikh, Convolutional pose machines. CVPRS.-E. Wei, V. Ramakrishna, T. Kanade, Y. Sheikh, Convolutional pose machines, in: CVPR, 2016.
Realtime multi-person 2d pose estimation using part affinity fields. Z Cao, T Simon, S.-E Wei, Y Sheikh, CVPRZ. Cao, T. Simon, S.-E. Wei, Y. Sheikh, Realtime multi-person 2d pose estimation using part affinity fields, in: CVPR, 2017.
Hand keypoint detection in single images using multiview bootstrapping. T Simon, H Joo, I Matthews, Y Sheikh, CVPRT. Simon, H. Joo, I. Matthews, Y. Sheikh, Hand keypoint detection in single images using multiview bootstrapping, in: CVPR, 2017.
Y. Kawana, N. Ukita, J. Huang, M. Yang, Ensemble convolutional neural networks for pose estimation, Computer Vision and Image Understanding 169 (2018) 62-74. doi:10.1016/j.cviu.2017.12.005.
Personalizing human video pose estimation. J Charles, T Pfister, D R Magee, D C Hogg, A Zisserman, CVPRJ. Charles, T. Pfister, D. R. Magee, D. C. Hogg, A. Zisserman, Personal- izing human video pose estimation, in: CVPR, 2016.
Harvesting multiple views for marker-less 3D human pose annotations. G Pavlakos, X Zhou, K G Derpanis, K Daniilidis, CVPRG. Pavlakos, X. Zhou, K. G. Derpanis, K. Daniilidis, Harvesting multiple views for marker-less 3D human pose annotations, in: CVPR, 2017.
Multi-source deep learning for human pose estimation. W Ouyang, X Chu, X Wang, CVPRW. Ouyang, X. Chu, X. Wang, Multi-source deep learning for human pose estimation, in: CVPR, 2014.
Learning effective human pose estimation from inaccurate annotation. S Johnson, M Everingham, CVPRS. Johnson, M. Everingham, Learning effective human pose estimation from inaccurate annotation, in: CVPR, 2011.
A multigraph representation for improved unsupervised/semi-supervised learning of human actions. S Jones, L Shao, CVPRS. Jones, L. Shao, A multigraph representation for improved unsupervised/semi-supervised learning of human actions, in: CVPR, 2014.
Semi-supervised coupled dictionary learning for person re-identification. X Liu, M Song, D Tao, X Zhou, C Chen, J Bu, CVPRX. Liu, M. Song, D. Tao, X. Zhou, C. Chen, J. Bu, Semi-supervised cou- pled dictionary learning for person re-identification, in: CVPR, 2014.
Patch distribution compatible semisupervised dimension reduction for face and human gait recognition. Y Huang, D Xu, F Nie, IEEE Trans. Circuits Syst. Video Techn. 223Y. Huang, D. Xu, F. Nie, Patch distribution compatible semisupervised dimension reduction for face and human gait recognition, IEEE Trans. Circuits Syst. Video Techn. 22 (3) (2012) 479-488.
Treat samples differently: Object tracking with semi-supervised online covboost. G Li, L Qin, Q Huang, J Pang, S Jiang, ICCVG. Li, L. Qin, Q. Huang, J. Pang, S. Jiang, Treat samples differently: Object tracking with semi-supervised online covboost, in: ICCV, 2011.
Semi-supervised spectral clustering for image set classification. A Mahmood, A S Mian, R A Owens, CVPRA. Mahmood, A. S. Mian, R. A. Owens, Semi-supervised spectral clus- tering for image set classification, in: CVPR, 2014.
Multimodal semi-supervised learning for image classification. M Guillaumin, J J Verbeek, C Schmid, CVPRM. Guillaumin, J. J. Verbeek, C. Schmid, Multimodal semi-supervised learning for image classification, in: CVPR, 2010.
Online domain adaptation of a pre-trained cascade of classifiers. V Jain, E G Learned-Miller, CVPRV. Jain, E. G. Learned-Miller, Online domain adaptation of a pre-trained cascade of classifiers, in: CVPR, 2011.
Semi-supervised learning of joint density models for human pose estimation. R Navaratnam, A W Fitzgibbon, R Cipolla, BMVCR. Navaratnam, A. W. Fitzgibbon, R. Cipolla, Semi-supervised learning of joint density models for human pose estimation, in: BMVC, 2006.
Semi-supervised hierarchical models for 3d human pose reconstruction. A Kanaujia, C Sminchisescu, D N Metaxas, CVPRA. Kanaujia, C. Sminchisescu, D. N. Metaxas, Semi-supervised hierar- chical models for 3d human pose reconstruction, in: CVPR, 2007.
Real-time articulated hand pose estimation using semi-supervised transductive regression forests. D Tang, T Yu, T Kim, ICCVD. Tang, T. Yu, T. Kim, Real-time articulated hand pose estimation using semi-supervised transductive regression forests, in: ICCV, 2013.
Similarity constrained latent support vector machine: An application to weakly supervised action classification. N Shapovalova, A Vahdat, K Cannons, T Lan, G Mori, ECCVN. Shapovalova, A. Vahdat, K. Cannons, T. Lan, G. Mori, Similarity con- strained latent support vector machine: An application to weakly super- vised action classification, in: ECCV, 2012.
Automatic annotation of human actions in video. O Duchenne, I Laptev, J Sivic, F Bach, J Ponce, ICCVO. Duchenne, I. Laptev, J. Sivic, F. Bach, J. Ponce, Automatic annotation of human actions in video, in: ICCV, 2009.
Weakly supervised learning of interactions between humans and objects. A Prest, C Schmid, V Ferrari, IEEE Trans. Pattern Anal. Mach. Intell. 343A. Prest, C. Schmid, V. Ferrari, Weakly supervised learning of interac- tions between humans and objects, IEEE Trans. Pattern Anal. Mach. In- tell. 34 (3) (2012) 601-614.
Posebits for monocular human pose estimation. G Pons-Moll, D J Fleet, B Rosenhahn, CVPRG. Pons-Moll, D. J. Fleet, B. Rosenhahn, Posebits for monocular human pose estimation, in: CVPR, 2014.
Complex volume and pose tracking with probabilistic dynamical models and visual hull constraints. N Ukita, M Hirai, M Kidode, ICCVN. Ukita, M. Hirai, M. Kidode, Complex volume and pose tracking with probabilistic dynamical models and visual hull constraints, in: ICCV, 2009.
Gaussian process motion graph models for smooth transitions among multiple actions. N Ukita, T Kanade, CVIU. 1164N. Ukita, T. Kanade, Gaussian process motion graph models for smooth transitions among multiple actions, CVIU 116 (4) (2012) 500-509.
Switching gaussian process dynamic models for simultaneous composite motion tracking and recognition. J Chen, M Kim, Y Wang, Q Ji, CVPRJ. Chen, M. Kim, Y. Wang, Q. Ji, Switching gaussian process dynamic models for simultaneous composite motion tracking and recognition, in: CVPR, 2009.
Gool, 2d action recognition serves 3d human pose estimation. J Gall, A Yao, L J , ECCVJ. Gall, A. Yao, L. J. V. Gool, 2d action recognition serves 3d human pose estimation, in: ECCV, 2010.
Simultaneous particle tracking in multi-action motion models with synthesized paths. N Ukita, Image Vision Comput. 316-7N. Ukita, Simultaneous particle tracking in multi-action motion models with synthesized paths, Image Vision Comput. 31 (6-7) (2013) 448-459.
Iterative action and pose recognition using global-and-pose features and action-specific models. N Ukita, Workshop on Understanding Human Activities: Context and Interactions. N. Ukita, Iterative action and pose recognition using global-and-pose fea- tures and action-specific models, in: Workshop on Understanding Human Activities: Context and Interactions, 2013.
Stacked hourglass networks for human pose estimation. A Newell, K Yang, J Deng, ECCVA. Newell, K. Yang, J. Deng, Stacked hourglass networks for human pose estimation, in: ECCV, 2016.
End-to-end learning of deformable mixture of parts and deep convolutional neural networks for human pose estimation. W Yang, W Ouyang, H Li, X Wang, CVPRW. Yang, W. Ouyang, H. Li, X. Wang, End-to-end learning of deformable mixture of parts and deep convolutional neural networks for human pose estimation, in: CVPR, 2016.
Structured feature learning for pose estimation. X Chu, W Ouyang, H Li, X Wang, CVPRX. Chu, W. Ouyang, H. Li, X. Wang, Structured feature learning for pose estimation, in: CVPR, 2016.
Human pose estimation via convolutional part heatmap regression. A Bulat, G Tzimiropoulos, ECCVA. Bulat, G. Tzimiropoulos, Human pose estimation via convolutional part heatmap regression, in: ECCV, 2016.
Human pose estimation using deep consensus voting. I Lifshitz, E Fetaya, S Ullman, ECCVI. Lifshitz, E. Fetaya, S. Ullman, Human pose estimation using deep con- sensus voting, in: ECCV, 2016.
Flowing convnets for human pose estimation in videos. T Pfister, J Charles, A Zisserman, ICCVT. Pfister, J. Charles, A. Zisserman, Flowing convnets for human pose estimation in videos, in: ICCV, 2015.
Articulated pose estimation with flexible mixturesof-parts. Y Yang, D Ramanan, CVPRY. Yang, D. Ramanan, Articulated pose estimation with flexible mixtures- of-parts, in: CVPR, 2011.
Articulated people detection and pose estimation: Reshaping the future. L Pishchulin, A Jain, M Andriluka, T Thormählen, B Schiele, CVPRL. Pishchulin, A. Jain, M. Andriluka, T. Thormählen, B. Schiele, Artic- ulated people detection and pose estimation: Reshaping the future, in: CVPR, 2012.
Progressive search space reduction for human pose estimation. V Ferrari, M J Marín-Jiménez, A Zisserman, CVPRV. Ferrari, M. J. Marín-Jiménez, A. Zisserman, Progressive search space reduction for human pose estimation, in: CVPR, 2008.
Coupled action recognition and pose estimation from multiple views. A Yao, J Gall, L J V Gool, International Journal of Computer Vision. 1001A. Yao, J. Gall, L. J. V. Gool, Coupled action recognition and pose es- timation from multiple views, International Journal of Computer Vision 100 (1) (2012) 16-37.
Towards understanding action recognition. H Jhuang, J Gall, S Zuffi, C Schmid, M J Black, ICCVH. Jhuang, J. Gall, S. Zuffi, C. Schmid, M. J. Black, Towards understand- ing action recognition, in: ICCV, 2013.
Histograms of oriented gradients for human detection. N Dalal, B Triggs, CVPRN. Dalal, B. Triggs, Histograms of oriented gradients for human detec- tion, in: CVPR, 2005.
Has my algorithm succeeded? an evaluator for human pose estimators. N Jammalamadaka, A Zisserman, M Eichner, V Ferrari, C V Jawahar, ECCVN. Jammalamadaka, A. Zisserman, M. Eichner, V. Ferrari, C. V. Jawahar, Has my algorithm succeeded? an evaluator for human pose estimators, in: ECCV, 2012.
Combining local appearance and holistic view: Dual-source deep neural networks for human pose estimation. X Fan, K Zheng, Y Lin, S Wang, CVPRX. Fan, K. Zheng, Y. Lin, S. Wang, Combining local appearance and holistic view: Dual-source deep neural networks for human pose estima- tion, in: CVPR, 2015.
O Chapelle, B Schölkopf, A Zien, Semi-Supervised Learning. Cambridge, MA, USAMIT PressO. Chapelle, B. Schölkopf, A. Zien, Semi-Supervised Learning, MIT Press, Cambridge, MA, USA, 2006.
C. E. Antoniak, Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems, Annals of Statistics 2 (6) (1974) 1152-1174.
A simple example of dirichlet process mixture inconsistency for the number of components. J W Miller, M T Harrison, NIPSJ. W. Miller, M. T. Harrison, A simple example of dirichlet process mix- ture inconsistency for the number of components, in: NIPS, 2013.
An alternative prior process for nonparametric bayesian clustering. H M Wallach, S Jensen, L H Dicker, K A Heller, AISTATSH. M. Wallach, S. Jensen, L. H. Dicker, K. A. Heller, An alternative prior process for nonparametric bayesian clustering, in: AISTATS, 2010.
Bayesian outlier detection with dirichlet process mixtures. M S Shotwell, E H Slate, Bayesian Analysis. 64M. S. Shotwell, E. H. Slate, Bayesian outlier detection with dirichlet pro- cess mixtures, Bayesian Analysis 6 (4) (2011) 1-22.
Bayesian density estimation and inference using mixtures. M D Escobar, M West, Journal of the American Statistical Association. 90430M. D. Escobar, M. West, Bayesian density estimation and inference using mixtures, Journal of the American Statistical Association 90 (430) (1995) 577-588.
J. A. Hartigan, Partition models, Communications in Statistics, Theory and Methods 19 (9) (1990) 2745-2756.
D Aldous, Exchangeability and related topics. Springer-VerlagÉcole d'Été St Flour. lecture Notes in Math. 1117D. Aldous, Exchangeability and related topics, in:École d'Été St Flour 1983, Springer-Verlag, 1985, pp. 1-198, lecture Notes in Math. 1117.
Bayesian measures of surprise for outlier detection. M J Bayarri, J Morales, Journal of Statistical Planning and Inference. 1111-2M. J. Bayarri, J. Morales, Bayesian measures of surprise for outlier detec- tion, Journal of Statistical Planning and Inference 111 (1-2) (2003) 3-22.
Marginal likelihood and bayes factors for dirichlet process mixture models. S Basu, S Chib, Journal of the American Statistical Association. 98S. Basu, S. Chib, Marginal likelihood and bayes factors for dirichlet pro- cess mixture models, Journal of the American Statistical Association 98 (2003) 224-235.
Bayes factors. R E Kass, A E Raftery, Journal of the American Statistical Association. 90R. E. Kass, A. E. Raftery, Bayes factors, Journal of the American Statis- tical Association 90 (1995) 773-795.
Articulated human detection with flexible mixtures of parts. Y Yang, D Ramanan, IEEE Trans. Pattern Anal. Mach. Intell. 3512Y. Yang, D. Ramanan, Articulated human detection with flexible mixtures of parts, IEEE Trans. Pattern Anal. Mach. Intell. 35 (12) (2013) 2878- 2890.
Deep deformation network for object landmark localization. X Yu, F Zhou, M Chandraker, ECCVX. Yu, F. Zhou, M. Chandraker, Deep deformation network for object landmark localization, in: ECCV, 2016.
Deepcut: Joint subset partition and labeling for multi person pose estimation. L Pishchulin, E Insafutdinov, S Tang, B Andres, M Andriluka, P Gehler, B Schiele, CVPRL. Pishchulin, E. Insafutdinov, S. Tang, B. Andres, M. Andriluka, P. Gehler, B. Schiele, Deepcut: Joint subset partition and labeling for multi person pose estimation, in: CVPR, 2016.
Deepercut: A deeper, stronger, and faster multi-person pose estimation model. E Insafutdinov, L Pishchulin, B Andres, M Andriluka, B Schiele, ECCVE. Insafutdinov, L. Pishchulin, B. Andres, M. Andriluka, B. Schiele, Deepercut: A deeper, stronger, and faster multi-person pose estimation model, in: ECCV, 2016.
T Lin, M Maire, S J Belongie, J Hays, P Perona, D Ramanan, P Dollár, C L Zitnick, Microsoft COCO: common objects in context. ECCVT. Lin, M. Maire, S. J. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, C. L. Zitnick, Microsoft COCO: common objects in context, in: ECCV, 2014.
G Gkioxari, A Toshev, N Jaitly, Chained predictions using convolutional neural networks. ECCVG. Gkioxari, A. Toshev, N. Jaitly, Chained predictions using convolu- tional neural networks, in: ECCV, 2016.
I J Goodfellow, J Pouget-Abadie, M Mirza, B Xu, D Warde-Farley, S Ozair, A C Courville, Y Bengio, Generative adversarial nets. NIPSI. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. C. Courville, Y. Bengio, Generative adversarial nets, in: NIPS, 2014.
| [] |
[
"ANNEALED IMPORTANCE SAMPLING FOR ISING MODELS WITH MIXED BOUNDARY CONDITIONS",
"ANNEALED IMPORTANCE SAMPLING FOR ISING MODELS WITH MIXED BOUNDARY CONDITIONS"
] | [
"Lexing Ying "
] | [] | [] | This note introduces a method for sampling Ising models with mixed boundary conditions. As an application of annealed importance sampling and the Swendsen-Wang algorithm, the method adopts a sequence of intermediate distributions that keeps the temperature fixed but turns on the boundary condition gradually. The numerical results show that the variance of the sample weights is relatively small.2010 Mathematics Subject Classification. 82B20,82B80. | 10.4208/jcm.2211-m2022-0172 | [
"https://arxiv.org/pdf/2205.08665v1.pdf"
] | 248,862,885 | 2205.08665 | 285db4b16d6496860234f50075aec43868284f6d |
ANNEALED IMPORTANCE SAMPLING FOR ISING MODELS WITH MIXED BOUNDARY CONDITIONS
Lexing Ying
ANNEALED IMPORTANCE SAMPLING FOR ISING MODELS WITH MIXED BOUNDARY CONDITIONS
This note introduces a method for sampling Ising models with mixed boundary conditions. As an application of annealed importance sampling and the Swendsen-Wang algorithm, the method adopts a sequence of intermediate distributions that keeps the temperature fixed but turns on the boundary condition gradually. The numerical results show that the variance of the sample weights is relatively small. 2010 Mathematics Subject Classification. 82B20, 82B80.
Introduction
This note is concerned with the Monte Carlo sampling of Ising models with mixed boundary conditions. Consider a graph G = (V, E) with the vertex set V and the edge set E. Assume that V = I ∪ B, where I is the subset of interior vertices and B the subset of boundary vertices. Throughout the note, we use i, j to denote the vertices in I and b for the vertices in B. In addition, ij ∈ E denotes an edge between two interior vertices i and j, while ib ∈ E denotes an edge between an interior vertex i and a boundary vertex b. The boundary condition is specified by f = (f_b)_{b\in B} with f_b = ±1. The Ising model with the boundary condition f is the probability distribution p_V(·) over the configurations s = (s_i)_{i\in I} of the interior vertex set I:

(1)   p_V(s) \sim \exp\Bigl(\beta \sum_{ij\in E} s_i s_j + \beta \sum_{ib\in E} s_i f_b\Bigr).
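For concreteness, a minimal sketch of the unnormalized log-probability in (1) on an n-by-n square lattice of interior vertices is given below. It assumes the boundary values have been folded into an external field h_i = sum over ib in E of f_b on the interior vertices, a reformulation made precise in Section 2; the function and array layout are illustrative only.

```python
import numpy as np

def ising_log_weight(s, h, beta):
    """Log of the unnormalized probability exp(beta*sum_ij s_i s_j + beta*sum_i h_i s_i).

    s: (n, n) array of +/-1 spins on the interior vertices.
    h: (n, n) array with h_i = sum of the boundary values adjacent to vertex i.
    """
    # nearest-neighbor interactions inside the lattice (no wrap-around)
    interior = np.sum(s[:, :-1] * s[:, 1:]) + np.sum(s[:-1, :] * s[1:, :])
    return beta * interior + beta * np.sum(h * s)
```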
A key feature of these Ising models is that, for certain mixed boundary conditions, below the critical temperature the Gibbs distribution exhibits different profiles on the macroscopic scale. Figure 1 showcases two such examples. On the left, the square Ising lattice has the +1 condition on the vertical sides but the −1 condition on the horizontal sides. The two dominant profiles are a −1 cluster linking the two horizontal sides and a +1 cluster linking the two vertical sides. On the right, the Ising lattice supported on a disk has the +1 condition on two disjoint arcs and the −1 condition on the other two. The two dominant profiles are shown in Figure 1(b). Notice that in each case, the two dominant profiles have comparable probability. Hence it is important for any sampling algorithm to visit both profiles frequently.
One of the most well-known methods for sampling Ising models is the Swendsen-Wang algorithm [9], which will be briefly reviewed in Section 2. For Ising models with free boundary condition, for example, the Swendsen-Wang algorithm exhibits rapid mixing for all temperatures. However, for the mixed boundary conditions shown in Figure 1, the Swendsen-Wang algorithm experiences slow convergence below the critical temperature, i.e., for T < T_c or, equivalently, β > β_c in terms of the inverse temperature. The reason is that, for such a boundary condition, the energy barrier between the two dominant profiles is much higher than the energies of these profiles. In other words, the Swendsen-Wang algorithm needs to break a macroscopic number of edges between aligned adjacent spins in order to transition from one dominant profile to the other. However, breaking so many edges simultaneously is an event with exponentially small probability when the mixed boundary condition is specified.

Annealed importance sampling is a method by Radford Neal [8], designed for sampling distributions with multiple modes. The main idea is to (1) introduce an easy-to-sample simple distribution, (2) design a sequence of temperature-dependent intermediate distributions, and (3) generate sample paths that connect the simple initial distribution and the hard target distribution. Annealed importance sampling has been widely applied in Bayesian statistics and data assimilation for sampling and estimating partition functions.
In this note, we address this problem by combining the Swendsen-Wang algorithm with annealed importance sampling. The main novelty of our approach is that, instead of adjusting the temperature, we freeze the temperature and adjust the mixed boundary condition.
Related works. In [1,2], Alexander and Yoshida studied the spectral gap of 2D Ising models with mixed boundary conditions. In [10], the double flip move is introduced for models with mixed boundary conditions that enjoy exact or approximate symmetry. When combined with the Swendsen-Wang algorithm, it can significantly accelerate the mixing of these Ising models below the critical temperature. However, it only applies to problems with exact or approximate symmetries, not to more general settings.
Recently in [5], Gheissari and Lubetzky studied the effect of the boundary condition for the 2D Potts models at the critical temperature. In [3], Chatterjee and Diaconis showed that most of the deterministic jumps can accelerate the Markov chain mixing when the equilibrium distribution is uniform.
Contents. The rest of the note is organized as follows. Sections 2 and 3 review the Swendsen-Wang algorithm and annealed importance sampling, respectively. Section 4 describes the algorithm and provides some numerical examples. Section 5 discusses some future directions.
Swendsen-Wang algorithm
In this section, we briefly review the Swendsen-Wang algorithm. First, notice that
p_V(s) \sim \exp\Bigl(\beta \sum_{ij\in E} s_i s_j + \beta \sum_{ib\in E} f_b s_i\Bigr) = \exp\Bigl(\beta \sum_{ij\in E} s_i s_j + \beta \sum_{i\in I} \Bigl(\sum_{ib\in E} f_b\Bigr) s_i\Bigr).

Therefore, if we interpret h_i = \sum_{ib\in E} f_b as an external field, one can view the mixed boundary condition problem as a special case of the model with external field h = (h_i)_{i\in I}:

(2)   p_V(s) \sim \exp\Bigl(\beta \sum_{ij\in E} s_i s_j + \beta \sum_{i\in I} h_i s_i\Bigr).
This viewpoint simplifies the presentation of the algorithm, and the Swendsen-Wang algorithm is summarized below in this setting. The Swendsen-Wang algorithm is a Markov chain Monte Carlo method for sampling p_V(s). In each iteration, it generates a new configuration based on the current configuration s with two substeps:
(1) Generate an edge configuration w = (w_{ij})_{ij\in E}. If the spin values s_i and s_j are different, set w_{ij} = 0. If s_i and s_j are the same, w_{ij} is sampled from the Bernoulli distribution Ber(1 − e^{−2β}), i.e., it equals 1 with probability 1 − e^{−2β} and 0 with probability e^{−2β}.
(2) Regard all edges ij ∈ E with w_{ij} = 1 as linked and compute the connected components. For each connected component γ, define h_γ = \sum_{i\in γ} h_i. Set the spins (t_i)_{i\in γ} of the new configuration t to +1 with probability e^{βh_γ}/(e^{βh_γ} + e^{−βh_γ}) and to −1 with probability e^{−βh_γ}/(e^{βh_γ} + e^{−βh_γ}).
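A minimal sketch of one such update for the field formulation (2) on an n-by-n square lattice of interior vertices is given below. It is an illustrative implementation (NumPy plus SciPy's connected-component routine), not the code used for the experiments in Section 4; the boundary enters only through the field h.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def swendsen_wang_step(s, h, beta, rng):
    """One Swendsen-Wang update of an (n, n) array of +/-1 spins with field h."""
    n = s.shape[0]
    idx = np.arange(n * n).reshape(n, n)
    p_bond = 1.0 - np.exp(-2.0 * beta)
    rows, cols = [], []
    # substep 1: open each edge between aligned neighbors with prob 1 - exp(-2*beta)
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        aligned = s.ravel()[a.ravel()] == s.ravel()[b.ravel()]
        open_edge = aligned & (rng.random(a.size) < p_bond)
        rows.append(a.ravel()[open_edge])
        cols.append(b.ravel()[open_edge])
    rows, cols = np.concatenate(rows), np.concatenate(cols)
    adj = coo_matrix((np.ones(rows.size), (rows, cols)), shape=(n * n, n * n))
    # substep 2: flip each cluster according to its total field h_gamma
    n_comp, labels = connected_components(adj, directed=False)
    h_gamma = np.bincount(labels, weights=h.ravel(), minlength=n_comp)
    p_plus = 1.0 / (1.0 + np.exp(-2.0 * beta * h_gamma))   # e^{bh}/(e^{bh}+e^{-bh})
    new_cluster_spin = np.where(rng.random(n_comp) < p_plus, 1, -1)
    return new_cluster_spin[labels].reshape(n, n)
```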
Associated with (2), two other probability distributions are important for analyzing the Swendsen-Wang algorithm [4]. The first one is the joint vertex-edge distribution
(3)   p_{VE}(s, w) \sim \prod_{ij\in E} \Bigl[(1 − e^{−2β})\,\delta_{s_i = s_j}\,\delta_{w_{ij} = 1} + e^{−2β}\,\delta_{w_{ij} = 0}\Bigr] \cdot e^{\beta \sum_{i\in I} h_i s_i}.
The second one is the edge distribution
(4)   p_E(w) \sim \prod_{ij:\, w_{ij}=1} (1 − e^{−2β}) \prod_{ij:\, w_{ij}=0} e^{−2β} \cdot \prod_{\gamma\in C_w} \bigl(e^{−βh_γ} + e^{βh_γ}\bigr),

where C_w is the set of the connected components induced by w.
Summing p_{VE}(s, w) over s or w gives the following two identities:

(5)   \sum_{s} p_{VE}(s, w) = p_E(w), \qquad \sum_{w} p_{VE}(s, w) = p_V(s).
(see, for example, [4]). A direct consequence of (5) is that the Swendsen-Wang algorithm can be viewed as a data augmentation method [6]: the first substep samples the edge configuration w conditioned on the spin configuration s, while the second substep samples a new spin configuration conditioned on the edge configuration w. The equalities in (5) also imply that the Swendsen-Wang algorithm satisfies detailed balance. To see this, let us fix two spin configurations s and t and consider the transition between them. Since such a transition in the Swendsen-Wang move happens via an edge configuration w, it is sufficient to show
p V (s)P w (s, t) = p V (t)P w (t, s)
for any compatible edge configuration w, where P w (s, t) is the transition probability from s to t via w. Since the transition probabilities from w to the spin configurations s and t are proportional to e β i h i s i and e β i h i t i respectively, it reduces to showing
(6) p V (s)P (s, w)e β i h i t i = p V (t)P (t, w)e β i h i s i ,
where P (s, w) is the probability of obtaining the edge configuration w from s. Using (2), this is equivalent to
(7) $e^{\beta \sum_{ij\in E} s_i s_j + \beta \sum_{i\in I} h_i s_i}\, P(s, w)\, e^{\beta \sum_{i\in I} h_i t_i} = e^{\beta \sum_{ij\in E} t_i t_j + \beta \sum_{i\in I} h_i t_i}\, P(t, w)\, e^{\beta \sum_{i\in I} h_i s_i}.$
The next observation is that
(8) $e^{\beta \sum_{ij\in E} s_i s_j}\, P(s, w) = e^{\beta \sum_{ij\in E} t_i t_j}\, P(t, w),$
i.e., this quantity is independent of the spin configuration, as explained below. First, if an edge $ij \in E$ has configuration $w_{ij} = 1$, then $s_i = s_j$. Second, if $ij \in E$ has configuration $w_{ij} = 0$, then $s_i$ and $s_j$ can either be the same or different. In the former case, the contribution to $e^{\beta \sum_{ij\in E} s_i s_j} P(s, w)$ from $ij$ is $e^{2\beta} \cdot e^{-2\beta} = 1$ up to a uniform normalization constant. In the latter case, the contribution is also $1 \cdot 1 = 1$ up to the same uniform constant. After canceling the two sides of (8) in (7), proving (6) is equivalent to $e^{\beta \sum_{i\in I} h_i s_i} \cdot e^{\beta \sum_{i\in I} h_i t_i} = e^{\beta \sum_{i\in I} h_i t_i} \cdot e^{\beta \sum_{i\in I} h_i s_i}$, which is trivial. The Swendsen-Wang algorithm unfortunately does not encourage transitions between the dominant profiles shown, for example, in Figure 1. With these mixed boundary conditions, such a transition requires breaking a macroscopic number of edges between aligned adjacent spins, which has an exponentially small probability.
Annealed importance sampling
Given a target distribution p(s) that is hard to sample directly, annealed importance sampling (AIS), proposed by Neal in [8], introduces a sequence of distributions p 0 (·), . . . , p L (·) ≡ p(·),
where $p_0(\cdot)$ is easy to sample and each $p_l(\cdot)$ is associated with a Markov chain $T_l(s, t)$ satisfying detailed balance, i.e., (9) $p_l(s) T_l(s, t) = p_l(t) T_l(t, s)$.
The detailed balance condition can be relaxed, though it simplifies the description. Given this sequence of intermediate distributions, AIS proceeds as follows.
(1) Sample a configuration s 1/2 from p 0 (·).
(2) For $l = 1, \ldots, L - 1$, take one step (or a few steps) of $T_l(\cdot, \cdot)$ (the kernel associated with the distribution $p_l(\cdot)$) from $s_{l-1/2}$. Let $s_{l+1/2}$ be the resulting configuration.
(3) Set $s := s_{L-1/2}$. (4) Set the weight $w := \dfrac{p_1(s_{1/2})}{p_0(s_{1/2})} \cdots \dfrac{p_L(s_{L-1/2})}{p_{L-1}(s_{L-1/2})}$.
The claim is that the configuration s with weight w samples the target distribution p L (·). To see this, consider the path (s 1/2 , . . . , s L−1/2 ). This path is generated with probability p 0 (s 1/2 )T 1 (s 1/2 , s 3/2 ) . . . T L−1 (s L−3/2 , s L−1/2 ).
Multiplying this with w and using the detailed balance (9) of T l gives
$p_0(s_{1/2})\, T_1(s_{1/2}, s_{3/2}) \cdots T_{L-1}(s_{L-3/2}, s_{L-1/2}) \cdot \dfrac{p_1(s_{1/2})}{p_0(s_{1/2})} \cdots \dfrac{p_L(s_{L-1/2})}{p_{L-1}(s_{L-1/2})} = p_L(s_{L-1/2})\, T_{L-1}(s_{L-1/2}, s_{L-3/2}) \cdots T_1(s_{3/2}, s_{1/2}),$
which is the probability of going backward, i.e., starting from a sample s L−1/2 of p L (·) = p(·).
Taking the marginal of the last slot $s_{L-1/2}$ proves that $s := s_{L-1/2}$ with weight $w$ samples the distribution $p_L(\cdot) = p(\cdot)$.
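The procedure above can be packaged as a short, model-agnostic routine. The sketch below is a generic rendering of AIS, not code from [8]: it assumes unnormalized log densities and a user-supplied kernel for $T_l$, and it accumulates the weight in log space for numerical stability.

```python
import numpy as np

def annealed_importance_sampling(sample_p0, transition, log_p, L, rng):
    """Returns one sample s := s_{L-1/2} and its log importance weight log(w).

    sample_p0(rng)        -- draws s_{1/2} from the easy initial distribution p_0
    transition(l, s, rng) -- one (or a few) steps of a kernel T_l leaving p_l invariant
    log_p(l, s)           -- unnormalized log p_l(s), for l = 0, ..., L
    """
    s = sample_p0(rng)
    log_w = log_p(1, s) - log_p(0, s)           # factor p_1(s_{1/2}) / p_0(s_{1/2})
    for l in range(1, L):
        s = transition(l, s, rng)               # s_{l+1/2}
        log_w += log_p(l + 1, s) - log_p(l, s)  # factor p_{l+1}(s_{l+1/2}) / p_l(s_{l+1/2})
    return s, log_w
```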
Algorithm and results
Our proposal is to combine AIS with the Swendsen-Wang algorithm for sampling Ising models with mixed boundary conditions. The key ingredients are:
• Set the initial p 0 (·) to be
$p_0(s) \sim \exp\Big(\beta \sum_{ij\in E} s_i s_j\Big).$
This initial distribution has no external field and hence can be sampled efficiently with the Swendsen-Wang algorithm.
• Choose a monotone sequence of (θ l ) 0≤l≤L with θ 0 = 0 and θ L = 1 and set at level l
$p_l(s) \sim \exp\Big(\beta \sum_{ij\in E} s_i s_j + \beta \sum_{i\in I} (\theta_l h_i) s_i\Big),$
the distribution with external field $\theta_l h$. The Markov transition matrix $T_l(\cdot, \cdot)$ is implemented with the Swendsen-Wang algorithm associated with $p_l(\cdot)$. As proven in Section 2, $T_l(\cdot, \cdot)$ satisfies detailed balance.
Below we demonstrate the performance of the proposed method with several examples. In each example, $K = 500$ samples $(s^{(k)}, w^{(k)})_{1\le k\le K}$ are generated. For each sample $s^{(k)}$, the initial choice $s^{(k)}_{1/2}$ is obtained by running 100 iterations of the Swendsen-Wang algorithm at $p_0(\cdot)$. In our implementation, the monotone sequence $(\theta_l)_{0\le l\le L}$ is chosen to be an equally spaced sequence with $L = 400$. Although the equally-spaced sequence is not necessarily the ideal choice in terms of variance minimization, it seems to work reasonably well for the examples studied here.
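The level-$l$ ingredients just described can be assembled as follows. This sketch assumes the swendsen_wang_step function from the Section 2 sketch is in scope; the helper name and interface are illustrative.

```python
import numpy as np

def make_level_ingredients(edges, h, beta, L):
    """theta-schedule, unnormalized log p_l, and Swendsen-Wang kernel T_l for each level l."""
    theta = np.linspace(0.0, 1.0, L + 1)          # equally spaced, theta_0 = 0, theta_L = 1

    def log_p(l, s):                              # log p_l(s) up to an additive constant
        return beta * np.sum(s[edges[:, 0]] * s[edges[:, 1]]) + beta * theta[l] * np.dot(h, s)

    def transition(l, s, rng):                    # one Swendsen-Wang update targeting p_l
        return swendsen_wang_step(s, edges, theta[l] * h, beta, rng)

    return log_p, transition
```

Passing log_p and transition to the generic AIS routine of Section 3 reproduces the proposed sampler.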
In order to monitor the variance of the algorithm, we record the weight history at each level l:
$w^{(k)}_l = \dfrac{p_1(s^{(k)}_{1/2})}{p_0(s^{(k)}_{1/2})} \cdots \dfrac{p_l(s^{(k)}_{l-1/2})}{p_{l-1}(s^{(k)}_{l-1/2})}.$
Following [8], we report the variance of the logarithm of the normalized weights, $\mathrm{Var}[\{\log \bar{w}^{(k)}_l\}]$, as a function of the level $l = 1, \ldots, L$. The sample efficiency, a quantity between 0 and 1, is measured as $(1 + \mathrm{Var}[\{\bar{w}^{(k)}_L\}])^{-1}$ at level $L$.
Example 1. The Ising model is a square lattice, as shown in Figure 2(a). The mixed boundary condition is $+1$ at the two vertical sides and $-1$ at the two horizontal sides. The experiments are performed for the problem size $n_1 = n_2 = 40$ at the inverse temperature $\beta = 0.5$. Figure 2(b) plots the variance of the logarithm of the normalized weights, $\mathrm{Var}[\{\log \bar{w}^{(k)}_l\}]$, as a function of the level $l$.
Example 2. The Ising lattice is again a square, as shown in Figure 3(a). The mixed boundary condition is $+1$ in the first and third quadrants but $-1$ in the second and fourth quadrants. The experiments are performed for the problem size $n_1 = n_2 = 40$ at the inverse temperature $\beta = 0.5$. Figure 3(b) plots $\mathrm{Var}[\{\log \bar{w}^{(k)}_l\}]$ as a function of the level $l$.
Example 3. The Ising model is a random quasi-uniform triangular lattice supported on the unit disk, as shown in Figure 4(a). The lattice does not have rotation and reflection symmetry due to the random triangulation. The mixed boundary condition is $+1$ in the first and third quadrants but $-1$ in the second and fourth quadrants. The experiments are performed with a finer triangulation with mesh size $h = 0.05$ at the inverse temperature $\beta = 0.3$. Figure 4(b) plots $\mathrm{Var}[\{\log \bar{w}^{(k)}_l\}]$ as a function of the level $l$.
Example 4. The Ising model is again a random quasi-uniform triangular lattice supported on the unit disk, as shown in Figure 5(a). The mixed boundary condition is $+1$ on the two arcs with angles in $[0, \pi/3]$ and $[\pi, 5\pi/3]$ but $-1$ on the remaining two arcs. The experiments are performed with a finer triangulation with mesh size $h = 0.05$ at the inverse temperature $\beta = 0.3$. Figure 5(b) plots $\mathrm{Var}[\{\log \bar{w}^{(k)}_l\}]$ as a function of the level $l$.
Across these examples the variance of the logarithm of the normalized weights remains relatively small at all levels; the sample efficiencies $(1 + \mathrm{Var}[\{\bar{w}^{(k)}_L\}])^{-1}$ reported for these experiments are 0.09, 0.55, and 0.67, which translate to roughly 4470, 730, and 600 Swendsen-Wang iterations per effective sample, respectively.
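For completeness, the two diagnostics above can be computed from the stored log-weight histories as in the sketch below. The mean-one normalization used to define the normalized weights is our assumption about the convention; the $(K, L)$ array layout is likewise illustrative.

```python
import numpy as np

def weight_diagnostics(log_w):
    """log_w: (K, L) array of per-sample log weight histories log w^{(k)}_l."""
    # Normalize so the weights average to one at each level, working in log space.
    shift = log_w.max(axis=0)
    log_mean = np.log(np.mean(np.exp(log_w - shift), axis=0)) + shift
    log_wbar = log_w - log_mean
    var_log = np.var(log_wbar, axis=0)                          # Var[{log wbar^{(k)}_l}] per level
    efficiency = 1.0 / (1.0 + np.var(np.exp(log_wbar[:, -1])))  # (1 + Var[{wbar^{(k)}_L}])^{-1}
    return var_log, efficiency
```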
Discussions
This note introduces a method for sampling Ising models with mixed boundary conditions. As an application of annealed importance sampling with the Swendsen-Wang algorithm, the method adopts a sequence of intermediate distributions that fixes the temperature but turns on the boundary condition gradually. The numerical results show that the variance of the sample weights remains relatively small. There are many unanswered questions. First, the sequence $(\theta_l)_{0\le l\le L}$ that controls the intermediate distributions is empirically specified to be equally spaced. Two immediate questions are (1) what the optimal $(\theta_l)_{0\le l\le L}$ sequence is and (2) whether there is an efficient algorithm for computing it.
Second, historically, annealed importance sampling was introduced following the work on tempered transitions [7]. We have also implemented the current idea within the tempered-transition framework. However, the preliminary results show that it is less effective than annealed importance sampling. A more thorough study is needed in this direction.
Finally, annealed importance sampling (AIS) is a rather general framework. For a specific application, the key to efficiency is the choice of the distribution $p_0(\cdot)$: it should be easy to sample, while at the same time sufficiently close to the target distribution $p(\cdot)$. However, since the target distribution is hard to sample, these two objectives often compete with each other. There are many other hard-to-sample models in statistical mechanics. A potential direction of research is to apply AIS with an appropriate initial $p_0(\cdot)$ to these models.
Figure 1. Ising models with mixed boundary conditions. (a) a square model and (b) a model supported on a disk. In each case, a mixed boundary condition is specified and the model exhibits two dominant profiles on the macroscopic scale.
Figure 2. (a) The lattice along with the external field. (b) The variance of the logarithm of the normalized weights as a function of level l.
Figure 3. (a) The lattice along with the external field. (b) The variance of the logarithm of the normalized weights as a function of level l.
Figure 4. (a) The lattice along with the external field. (b) The variance of the logarithm of the normalized weights as a function of level l.
Figure 5. (a) The lattice along with the external field. (b) The variance of the logarithm of the normalized weights as a function of level l.
[1] Kenneth S. Alexander, The spectral gap of the 2-D stochastic Ising model with nearly single-spin boundary conditions, Journal of Statistical Physics 104 (2001), no. 1, 59-87.
[2] Kenneth S. Alexander and Nobuo Yoshida, The spectral gap of the 2-D stochastic Ising model with mixed boundary conditions, Journal of Statistical Physics 104 (2001), no. 1, 89-109.
[3] Sourav Chatterjee and Persi Diaconis, Speeding up Markov chains with deterministic jumps, Probability Theory and Related Fields 178 (2020), no. 3, 1193-1214.
[4] Robert G. Edwards and Alan D. Sokal, Generalization of the Fortuin-Kasteleyn-Swendsen-Wang representation and Monte Carlo algorithm, Physical Review D 38 (1988), no. 6, 2009.
[5] Reza Gheissari and Eyal Lubetzky, The effect of boundary conditions on mixing of 2D Potts models at discontinuous phase transitions, Electronic Journal of Probability 23 (2018), 1-30.
[6] Jun S. Liu, Monte Carlo strategies in scientific computing, Vol. 10, Springer, 2001.
[7] Radford M. Neal, Sampling from multimodal distributions using tempered transitions, Statistics and Computing 6 (1996), no. 4, 353-366.
[8] Radford M. Neal, Annealed importance sampling, Statistics and Computing 11 (2001), no. 2, 125-139.
[9] Robert H. Swendsen and Jian-Sheng Wang, Nonuniversal critical dynamics in Monte Carlo simulations, Physical Review Letters 58 (1987), no. 2, 86.
[10] Lexing Ying, Double Flip Acceleration for Ising Models with Mixed Boundary Conditions, Preprint (2022).
(Lexing Ying) Department of Mathematics, Stanford University, Stanford, CA 94305. Email address: [email protected]
| [] |
[
"arXiv:hep-ph/0009090v1 7 Sep 2000 Formation of quarkonium states at RHIC",
"arXiv:hep-ph/0009090v1 7 Sep 2000 Formation of quarkonium states at RHIC"
] | [
"R L Thews \nDepartment of Physics\nUniversity of Arizona\n85721TucsonAZUSA\n",
"M Schroedter \nDepartment of Physics\nUniversity of Arizona\n85721TucsonAZUSA\n",
"J Rafelski \nDepartment of Physics\nUniversity of Arizona\n85721TucsonAZUSA\n"
] | [
"Department of Physics\nUniversity of Arizona\n85721TucsonAZUSA",
"Department of Physics\nUniversity of Arizona\n85721TucsonAZUSA",
"Department of Physics\nUniversity of Arizona\n85721TucsonAZUSA"
] | [] | At RHIC the cross section for cc production will be large enough such that approximately 10 pairs will be produced in each central collision. If a region of deconfined quarks and gluons is subsequently formed, one would expect that the mobility of the charm quarks will enable them to form J/ψ through "offdiagonal" combinations, involving a quark and an antiquark which were originally produced in separate incoherent interactions. We present model estimates of this effect, which indicate that the signal for deconfinement at RHIC may possibly be J/ψ enhancement rather than suppression. | 10.1088/0954-3899/27/3/359 | [
"https://export.arxiv.org/pdf/hep-ph/0009090v1.pdf"
] | 119,356,799 | hep-ph/0009090 | e8b5a9a547cb72cf68bd186abf1d53f34185a804 |
arXiv:hep-ph/0009090v1 7 Sep 2000
Formation of quarkonium states at RHIC
R. L. Thews, M. Schroedter, and J. Rafelski
Department of Physics, University of Arizona, Tucson, AZ 85721, USA
At RHIC the cross section for $c\bar{c}$ production will be large enough such that approximately 10 pairs will be produced in each central collision. If a region of deconfined quarks and gluons is subsequently formed, one would expect that the mobility of the charm quarks will enable them to form J/ψ through "off-diagonal" combinations, involving a quark and an antiquark which were originally produced in separate incoherent interactions. We present model estimates of this effect, which indicate that the signal for deconfinement at RHIC may possibly be J/ψ enhancement rather than suppression.
Introduction
A decrease in the number of observed J/ψ in heavy ion collisions due to the screening of the color confining potential was proposed many years ago [1] as a signature of a deconfined phase. It is argued that as the system cools and the deconfined phase disappears, these heavy quarks will most likely form a final hadronic state with one of the much more numerous light quarks. The result will be a decreased population of J/ψ relative to those formed initially in the heavy ion collision.
Here we study a scenario which can only be realized at RHIC (and LHC) energies, where the average number of initially-produced heavy quark pairs $\bar{N}_0$ is substantially above unity in each central collision. Then one can amplify the probability of J/ψ formation by a factor which is proportional to $\bar{N}_0^2$, if and only if a space-time region of deconfined quarks and gluons is present. Realization of this result will depend on the efficiency of this new formation mechanism during the deconfinement period. We have developed a simple model to estimate the magnitude of this effect, and examined the sensitivity of results to various input parameters and assumptions.
Suppression Factor
For expected conditions at RHIC, almost all of the directly-produced J/ψ will be dissociated even in peripheral collisions. To include the effects of our new formation mechanism, we parameterize the final J/ψ number in each event as follows:
Of the N 0 charm quark pairs initially produced in a central heavy ion collision, let N 1 be the number of those pairs which form J/ψ states in the normal confining vacuum potential. At hadronization, the final number N J/ψ will contain a small fraction ǫ of the initial number N 1 . The majority of N J/ψ will be formed by this new mechanism which we expect to be quadratic in the remaining N 0 − N 1 heavy quark pairs, with a proportionality parameter β. (We include in the new mechanism both formation and suppression effects, since they occur simultaneously in the deconfined region.) The final population is then
$N_{J/\psi} = \epsilon N_1 + \beta (N_0 - N_1)^2. \qquad (1)$
For a given number $N_0$ of initially-produced heavy quark pairs, we then average over the distribution of $N_1$, introducing the probability $x$ that a given heavy quark pair was in a bound state before the deconfined phase was formed. (This factor includes the effect of interactions with target and projectile nucleons.) We finally average over the distribution of $N_0$, using a Poisson distribution with average value $\bar{N}_0$, to obtain the expected final population $\langle N_{J/\psi} \rangle$ per collision,
$\langle N_{J/\psi} \rangle = x \bar{N}_0\, (\epsilon + \beta(1 - x)) + \bar{N}_0(\bar{N}_0 + 1)\, \beta (1 - x)^2. \qquad (2)$
The bound state "suppression" factor S J/ψ is just the ratio of this average population to the average initially-produced bound state population per collision, xN 0 .
$S_{J/\psi} = \epsilon + \beta(1 - x) + \beta\, \frac{(1 - x)^2}{x}\, (\bar{N}_0 + 1) \qquad (3)$
Without the new production mechanism, $\beta = 0$ and the suppression factor is $S_{J/\psi} = \epsilon < 1$. (Even the fitted parameter $\epsilon$ contains some effects of the new mechanism, since formation can reoccur subsequent to the dissociation of an initial J/ψ. Here we use it as an upper limit with which to compare the complete result.) However, it is possible that for sufficiently large values of $\beta$ and $\bar{N}_0$ this factor could actually exceed unity, i.e. one would predict an enhancement in the heavy quarkonium production rates to be the signature of deconfinement! We thus proceed to estimate expected $\beta$-values for J/ψ production at RHIC.
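Before estimating $\beta$, a quick Monte Carlo check of the averaging behind Equation 2 may be helpful. The sketch draws $N_0$ from a Poisson distribution of mean $\bar{N}_0$ and, assuming $N_1$ is binomial with success probability $x$ given $N_0$ (our reading of the averaging described above), compares the sample mean of Equation 1 with the closed form of Equation 2; all numerical values are placeholders.

```python
import numpy as np

def mean_final_jpsi(eps, beta, x, n0_bar, n_events=200_000, seed=0):
    """Monte Carlo average of Eq. (1) versus the closed form of Eq. (2)."""
    rng = np.random.default_rng(seed)
    n0 = rng.poisson(n0_bar, size=n_events)         # initial charm pairs per event
    n1 = rng.binomial(n0, x)                        # pairs bound before deconfinement
    mc = np.mean(eps * n1 + beta * (n0 - n1) ** 2)
    closed = x * n0_bar * (eps + beta * (1 - x)) + n0_bar * (n0_bar + 1) * beta * (1 - x) ** 2
    return mc, closed

print(mean_final_jpsi(eps=0.05, beta=2.3e-3, x=0.006, n0_bar=10))  # the two numbers agree
```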
Model for J/ψ Formation
This model is adapted from our previous work on the formation of B c mesons [2]. Initial results for the J/ψ application are found in Reference [3]. For simplicity, we assume the deconfined phase is an ideal gas of free gluons and light quarks. Any J/ψ in this medium will be subject to dissociation via collisions with gluons. (This is the dynamic counterpart of the plasma screening scenario, in which the color-confinement force is screened away in the hot dense plasma [4].) The primary formation mechanism is just the reverse of the dissociation reaction, in which a free charm quark and antiquark are captured in the J/ψ bound state, emitting a color octet gluon. Thus it is unavoidable for this model of quarkonium suppression that a corresponding mechanism for quarkonium production must be present. The competition between the rates of these reactions integrated over the lifetime of the QGP then determines the final J/ψ population. Note that in this scenario it is impossible to separate the formation process from the dissociation (suppression) process. Both processes occur simultaneously, in contrast to the situation in which the formation only occurs at the initial times before the QGP is present.
The time evolution of the J/ψ population is then given by
$\frac{dN_{J/\psi}}{d\tau} = \lambda_F\, N_c\, \rho_{\bar{c}} - \lambda_D\, N_{J/\psi}\, \rho_g, \qquad (4)$
where $\tau$ is the proper time in a comoving volume cell and $\rho_i$ denotes the number density $[L^{-3}]$ of species $i$. The reactivity $\lambda$ $[L^3/\mathrm{time}]$ is the reaction rate $\langle \sigma v_{\rm rel} \rangle$ averaged over the momentum distribution of the initial participants, i.e. $c$ and $\bar{c}$ for $\lambda_F$, and J/ψ and $g$ for $\lambda_D$. The gluon density is determined by the equilibrium value in the QGP at each temperature. Exact charm conservation is enforced throughout the calculation. The initial volume at $\tau = \tau_0$ is allowed to undergo longitudinal expansion $V(\tau) = V_0\, \tau/\tau_0$. The expansion is taken to be isentropic, $V T^3 = \mathrm{constant}$, which then provides a generic temperature-time profile. For simplicity, we assume the transverse spatial distributions are uniform, and use a thermal equilibrium momentum distribution for both gluons and charm quarks. (This last simplification requires large energy loss mechanisms for the charm quarks in the deconfined medium, which is indicated by several recent studies [5]). With these inputs and assumptions, the solution of Equation 4 is precisely that anticipated in Equation 1, with
$\epsilon(\tau_f) = e^{-\int_{\tau_0}^{\tau_f} \lambda_D\, \rho_g\, d\tau}, \qquad (5)$
where τ f is the hadronization time determined by the initial temperature (T 0 is a variable parameter) and final temperature (T f = 150 MeV ends the deconfining phase), and
$\beta(\tau_f) = \epsilon(\tau_f) \times \int_{\tau_0}^{\tau_f} \lambda_F\, [V(\tau)\, \epsilon(\tau)]^{-1}\, d\tau. \qquad (6)$
For our quantitative estimates, we utilize a cross section for the dissociation of J/ψ by collisions with gluons based on the operator product expansion [4,6]; combined with detailed balance factors, the same cross section yields the primary formation rate for the capture of a charm and anticharm quark into the J/ψ.
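A crude numerical rendering of Equations 4-6 is sketched below for orientation. It assumes Bjorken-like longitudinal expansion with $V T^3$ constant, an ideal massless-gluon density for $\rho_g$, and natural units; the reactivities $\lambda_F(T)$ and $\lambda_D(T)$ are user-supplied placeholders rather than the operator-product-expansion rates used in the text.

```python
import numpy as np

def qgp_jpsi_kinetics(T0, tau0, V0, lam_F, lam_D, n_ccbar, Tf=0.150, steps=20_000):
    """Explicit Euler integration of dN_Jpsi/dtau = lam_F*N_c*rho_cbar - lam_D*N_Jpsi*rho_g."""
    zeta3 = 1.2020569
    tau_f = tau0 * (T0 / Tf) ** 3                  # V*T^3 = const with V ~ tau gives tau*T^3 = const
    tau = np.linspace(tau0, tau_f, steps)
    dtau = tau[1] - tau[0]
    n_jpsi, n_c = 0.0, float(n_ccbar)              # all charm initially unbound
    for t in tau:
        T = T0 * (tau0 / t) ** (1.0 / 3.0)
        V = V0 * t / tau0
        rho_g = (16.0 * zeta3 / np.pi ** 2) * T ** 3   # equilibrium gluon density (assumption)
        dN = (lam_F(T) * n_c * (n_c / V) - lam_D(T) * n_jpsi * rho_g) * dtau
        n_jpsi += dN
        n_c -= dN                                  # exact charm conservation
    return n_jpsi
```

The same loop can accumulate the integrals of Equations 5 and 6 along the way to extract $\epsilon(\tau_f)$ and $\beta(\tau_f)$ directly.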
Results
In Figure 1 we show the time development of the J/ψ (solid line) along with the separate formation and dissociation rates (dotted lines, arbitrary units).
This calculation maintained exact charm conservation, so that the solutions followed the evolution of both bound and free charm quarks. One sees the expected decrease of the formation rate due to the volume expansion, and the decrease of the gluon dissociation rate due to the decrease in gluon density with temperature.
Some typical calculated values of the J/ψ final population are shown in Figure 2. The parameter values for thermalization time $\tau_0 = 0.5$ fm, initial volume $V_0 = \pi R^2 \tau_0$ with $R = 6$ fm, and a range of initial temperature $300\ \mathrm{MeV} < T_0 < 500\ \mathrm{MeV}$, are all compatible with expectations for a central collision at RHIC. The quadratic fits of Equation 1 are superimposed, verifying our expectations that the decrease in initial unbound charm is a small effect. (These fits also contain a small linear term for the cases in which $N_1$ is nonzero, which accounts for the increase of the unbound charm population when dissociation occurs.) The fitted $\epsilon$ values decrease quite rapidly with increasing $T_0$ as expected, and give reasonable upper limits for the suppression factor of directly-produced J/ψ in central collisions at RHIC due to gluon dissociation in a deconfined phase. The corresponding $\beta$ values are relatively insensitive to $T_0$, remaining in the range $(2.0$-$2.6) \times 10^{-3}$. These fitted parameters must be supplemented by values of $x$ and $\bar{N}_0$ to determine the "suppression" factor from Equation 3 for the new mechanism. We use $\bar{N}_0 = 10$ from a pQCD estimate [7]. An order of magnitude estimate of $10^{-2}$ for $x$, from fitted values of a color evaporation model [8], is reduced by the suppression due to interactions with target and beam nucleons. For central collisions we use 0.6 for this factor, which results from the extrapolation of the observed nuclear effects for p-A and smaller A-B central interactions.
With these parameters fixed, we predict from Equation 3 an enhancement factor for J/ψ production of 3.6 < S J/ψ < 5.4, for initial temperatures between 300 and 500 MeV. The suppression of initially-produced J/ψ alone ranges from factors of 10 to 100, so that the enhancement prediction involves a huge increase (factors of approximately one to two orders of magnitude) in the final population of J/ψ at RHIC.
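For orientation, plugging rough readings of these parameters into Equation 3 reproduces an enhancement of the quoted magnitude; the exact 3.6-5.4 range in the text follows from the fitted $(\epsilon, \beta)$ pairs at each $T_0$, which are not reproduced here, so the values below are illustrative only.

```python
def suppression_factor(eps, beta, x, n0_bar):
    """Eq. (3): S = eps + beta*(1-x) + beta*(1-x)**2/x * (N0_bar + 1)."""
    return eps + beta * (1 - x) + beta * (1 - x) ** 2 / x * (n0_bar + 1)

# Illustrative values: beta in (2.0-2.6)e-3, x = 0.01*0.6, N0_bar = 10, eps ~ 0.01-0.1.
for beta, eps in [(2.0e-3, 0.01), (2.6e-3, 0.1)]:
    print(suppression_factor(eps, beta, x=0.006, n0_bar=10))   # roughly 3.6 and 4.8
```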
Centrality Dependence
We also predict how this new effect will vary with the centrality of the collision, which has been a key feature of deconfinement signatures analyzed at CERN SPS energies [9]. The ǫ and β parameters are recalculated, using appropriate variation of initial conditions with impact parameter b. From nuclear geometry and the total non-diffractive nucleon-nucleon cross section at RHIC energies, one can estimate the total number of participant nucleons N P (b) and the corresponding density per unit transverse area n P (b, s) [10]. The former quantity has been shown to be directly proportional to the total transverse energy produced in a heavy ion collision [11]. The latter quantity is used, along with the Bjorken-model estimate of initial energy density [12], to provide an estimate of how the initial temperature of the deconfined region varies with impact parameter. We also use the ratio of these quantities to define an initial transverse area within which deconfinement is possible, thus completing the initial conditions needed to calculate the J/ψ production and suppression. The average initial charm numberN 0 varies with impact parameter in proportion to the nuclear overlap integral T AA (b). The impact-parameter dependence of the fraction x is determined by the average path length encountered by initial J/ψ as they pass through the remaining nucleons, L(b) [13]. All of these b-dependent effects are normalized to the previous values used for calculations at b = 0. It is revealing to express these results in terms of the ratio of final J/ψ to initially-produced charm pairs. In Figure 3, the solid symbols are the full results predicted with the inclusion of our new production mechanism at RHIC. The centrality dependence is represented by the total participant number N P (b). For comparison we also show predictions without the new mechanism, when only dissociation by gluons is included (λ F = 0). It is evident not only that the new mechanism dominates the J/ψ production in the deconfined medium at all impact parameters, but also that an increase rather than a decrease is predicted for central collisions. These features should be distinguishable in the upcoming RHIC experiments.
Model Dependence
In our model of a deconfined region, we have used the vacuum values for masses and binding energy of J/ψ, and assumed that the effects of deconfinement are completely included by the dissociation via gluon collisions. For a complementary viewpoint, we have also employed a deconfinement model in which the J/ψ is completely dissociated when temperatures exceed some critical screening value T s . Below that temperature, the new formation mechanism will still be able to operate, and we use the same cross sections and kinematics. We find that for T s = 280 MeV, the final J/ψ population is approximately unchanged, while decreasing T s to 180 MeV could reduce the J/ψ production by factors of 2 or 3. These results are shown in Figure 4. We have also checked the sensitivity of these results to several other assumptions and parameters. Among these are: (a) Change in initial charm production due to gluon shadowing; (b) Alternative cross sections with different magnitudes and threshold behaviors; (c) Transverse expansion of the QGP; (d) Non-chemical equilibrium for gluons; (e)Non-thermal momentum distributions for charm quarks. The effects of varying these assumptions produce both positive and negative changes in the final J/ψ populations. The largest effect could be a decrease by a factor of 2 or 3 if one uses the initial pQCD momentum distributions for the charm quarks. Taken together, however, it is unlikely that a conspiracy of these effects would qualitatively change the predicted enhancement effects of this deconfinement scenario.
Summary
In summary, we predict that at RHIC energies the J/ψ production rate will provide a more interesting signal for deconfinement than has been previously realized. Consideration of multiple heavy quark production made possible by higher collision energy effectively adds another dimension to the parameter space within which one searches for patterns of quarkonium behavior in a deconfined medium. It will be possible to experimentally "tune" the number of initial heavy quark pairs by sweeping through either centrality or energy. One can then search for a J/ψ production behavior which is predicted to be nonlinear in total charm. In our simplified kinetic model of J/ψ formation in a free gas of quarks and gluons, the new production mechanism predicts an enhancement rather than a suppression. These features should provide a signal at RHIC which will be difficult to imitate with conventional hadronic processes. The extension of this scenario to LHC energies will involve hundreds of initiallyproduced charm quark pairs and multiple bottom pairs. We expect the effects of this new production mechanism to be striking.
Figure 1. Time dependence of J/ψ formation including new mechanism.
Figure 2. Calculated J/ψ formation in deconfined matter at several initial temperatures at RHIC, as a function of initial charm production.
Figure 3. Ratio of final J/ψ to initial charm as a function of centrality, due to nuclear absorption only (solid line), after final state suppression by a QGP (dashed lines), and with inclusion of the new formation mechanism in the deconfined medium (solid symbols).
Figure 4. Comparison of gluon dissociation and screening scenarios for J/ψ formation including new mechanism (see text for details).
Acknowledgments. This work was supported by a grant from the U.S. Department of Energy, DE-FG03-95ER40937.
[1] Matsui T and Satz H 1986 Phys. Lett. B178 416
[2] Schroedter M, Thews R L and Rafelski J 2000 Phys. Rev. C62 024905
[3] Thews R L, Schroedter M and Rafelski J 2000 Preprint hep-ph/0007323
[4] Kharzeev D and Satz H 1994 Phys. Lett. B334 155
[5] For a review, see Baier R, Schiff D and Zakharov B 2000 Preprint hep-ph/0002198
[6] Peskin M E 1979 Nucl. Phys. B156 365; Bhanot G and Peskin M E 1979 Nucl. Phys. B156 391
[7] McGaughey P L, Quack E, Ruuskanen P V, Vogt R and Wang X-N 1995 Int. J. Mod. Phys. A10 2999
[8] Gavai R, Kharzeev D, Satz H, Schuler G, Sridhar K and Vogt R 1995 Int. J. Mod. Phys. A10 3043
[9] NA50 Collaboration, Abreu M C et al. 2000 Phys. Lett. B477 28
[10] Bialas A, Bleszynski M and Czyz W 1976 Nucl. Phys. B111 461
[11] Margetis S et al. 1995 Nucl. Phys. A590 355c
[12] Bjorken J D 1983 Phys. Rev. D27 140
[13] Gerschel C and Hüfner J 1992 Z. Phys. C47 171
| [] |
[
"A many-body singlet prepared by a central spin qubit",
"A many-body singlet prepared by a central spin qubit"
] | [
"Leon Zaporski \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n\nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK\n",
"Stijn R De Wit \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n\nMESA+ Institute for Nanotechnology\nUniversity of Twente\nThe Netherlands\n\nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK\n\nMESA+ Institute for Nanotechnology\nUniversity of Twente\nThe Netherlands\n",
"Takuya Isogawa \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n\nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK\n",
"Martin Hayhurst Appel \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n\nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK\n",
"Claire Le Gall \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n\nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK\n",
"Mete Atatüre \nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom\n\nCavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK\n",
"Dorian A Gangloff [email protected]. \nDepartment of Engineering Science\nUniversity of Oxford\nParks RoadOX1 3PJOxford, DemlerUnited Kingdom, New J\n\nDepartment of Engineering Science\nUniversity of Oxford\nParks RoadOX1 3PJOxford\n"
] | [
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"MESA+ Institute for Nanotechnology\nUniversity of Twente\nThe Netherlands",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK",
"MESA+ Institute for Nanotechnology\nUniversity of Twente\nThe Netherlands",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUnited Kingdom",
"Cavendish Laboratory\nUniversity of Cambridge\nJJ Thomson AvenueCB3 0HECambridgeUK",
"Department of Engineering Science\nUniversity of Oxford\nParks RoadOX1 3PJOxford, DemlerUnited Kingdom, New J",
"Department of Engineering Science\nUniversity of Oxford\nParks RoadOX1 3PJOxford"
] | [] | Controllable quantum many-body systems are platforms for fundamental investigations into the nature of entanglement and promise to deliver computational speed-up for a broad class of algorithms and simulations. In particular, engineering entanglement within a dense spin ensemble can turn it into a robust quantum memory or a computational platform. Recent experimental progress in dense central spin systems motivates the design of algorithms that use a central-spin qubit as a convenient proxy for the ensemble. Here we propose a protocol that uses a central spin to initialize two dense spin ensembles into a pure anti-polarized state and from there creates a many-body entangled statea singlet -from the combined ensemble. We quantify the protocol performance for multiple material platforms and show that it can be implemented even in the presence of realistic levels of decoherence. Our protocol introduces an algorithmic approach to preparation of a known many-body state and to entanglement engineering in a dense spin ensemble, which can be extended towards a broad class of collective quantum states. | null | [
"https://export.arxiv.org/pdf/2301.10258v1.pdf"
] | 256,231,272 | 2301.10258 | 5ca51ea381358d38dfc1e375c5ba0577df218525 |
A many-body singlet prepared by a central spin qubit
Leon Zaporski, Stijn R. de Wit, Takuya Isogawa, Martin Hayhurst Appel, Claire Le Gall, Mete Atatüre, and Dorian A. Gangloff
Cavendish Laboratory, University of Cambridge, JJ Thomson Avenue, Cambridge CB3 0HE, United Kingdom
MESA+ Institute for Nanotechnology, University of Twente, The Netherlands
Department of Engineering Science, University of Oxford, Parks Road, Oxford OX1 3PJ, United Kingdom
* These authors contributed equally. † Correspondence to: [email protected]
Controllable quantum many-body systems are platforms for fundamental investigations into the nature of entanglement and promise to deliver computational speed-up for a broad class of algorithms and simulations. In particular, engineering entanglement within a dense spin ensemble can turn it into a robust quantum memory or a computational platform. Recent experimental progress in dense central spin systems motivates the design of algorithms that use a central-spin qubit as a convenient proxy for the ensemble. Here we propose a protocol that uses a central spin to initialize two dense spin ensembles into a pure anti-polarized state and from there creates a many-body entangled statea singlet -from the combined ensemble. We quantify the protocol performance for multiple material platforms and show that it can be implemented even in the presence of realistic levels of decoherence. Our protocol introduces an algorithmic approach to preparation of a known many-body state and to entanglement engineering in a dense spin ensemble, which can be extended towards a broad class of collective quantum states.
INTRODUCTION
Controlling quantum properties in many-particle systems, whether for technological advantage or foundational studies, can be reduced to controlling the relative participation and phase of the system's eigenstates. In most cases, this leads to entanglement amongst the system's particles [1]. The initialization of a quantum system to a pure state is a necessary starting point from which to engineer entanglement [2]. Initialization through traditional cooling techniques brings a quantum system in contact with a bath whose temperature is below that of the system's characteristic energy scale, bringing the target system to its ground state [3]. An equivalent picture exists for driven systems, where directionality within an energy hierarchy of dressed states can bring the system to an effective ground state of the dressed system [4]. The latter approach is more versatile as it allows the design of tailored ground states. Crucially, it also requires access to the degree of freedom one intends to cool. Versions of this approach appear in driven-dissipative state preparations of photonic systems [5,6], diamond color centers [7][8][9], epitaxial quantum dots [10,11], and trapped atoms in an optical resonator [12].
Central-spin systems, typically consisting of a single electronic spin coupled to an ensemble of nuclear spins [13], have been a particularly potent testing ground for the active approaches to state preparation [14,15]. Firstly, this is by necessity: nuclear-spin energy scales make ground state preparation well beyond the reach of modern refrigeration techniques. Secondly, the one-to-all coupling of the central spin qubit to an ensemble of spins yields a convenient proxy for the ensemble spins [16]. In the few-spin regime of diamond color centers or rare earth ions, dynamic nuclear polarization can initialize a set of proximal spins to a high-purity polarized state [17]. This is then the starting point for qubit storage in an ensemble excitation [18], two-body singlet engineering [19], or spin-by-spin quantum computation [20].
In the limit of dense ensembles, the constituent spins are indistinguishable when interacting with the central spin. Entanglement is then readily generated and measured by controlling the central spin's dynamics [16,21,22]. Collective states of the ensemble become natural targets of a driven purification technique [15]. In addition, an effective all-to-all coupling mediated by the central spin [23] leads to a highly-correlated behavior of the ensemble which, in principle, can be harnessed for state engineering. Despite the absence of individual spin control, the dense systems of interest [14,24,25] operate in a truly many-body regime of 10 4 to 10 6 interacting ensemble spins -far exceeding the number of physical qubits present in near-term quantum simulators [26,27].
A many-body singlet is a superposition of ensemble spins with zero angular momentum. The singlet state is a hyperfine vacuum and renders the bath invisible to the central spin, protecting the latter from interacting with its environment [7,28,29] - a decoherence-free subspace for the central spin qubit. Owing to destructive interference between pairs of ensemble spins, the ensemble is also protected from noise whose length scale is larger than the system size. This makes it a particularly useful and readily detectable first target state to prepare in a central-spin system. Further, many-body singlet preparation offers insights into the evolution of entanglement under slower intra-bath interactions, as well as anomalous spin diffusion at longer time scales [30].
To this date, state engineering efforts in dense central spin systems remain confined to tuning the mean field degrees of freedom, such as ensemble polarization and its fluctuations [15,21,31,32], and classical correlations amongst ensemble spins [22]. A degree of purification has been achieved via polarization of 80% through optical techniques [33] and nearly 50% via central-spin control [22] -such approaches proving to be generally challenging due to a dependence of the central spin transition frequency on the polarization of the bath. Meanwhile stabilizing the ensemble polarization via the central spin [15] can approach the quantum limit of polarization stability. Despite these efforts, the resulting nuclear states remain highly mixed, featuring little inter-particle phase coherence. A degree of quantum correlation among spins was observed when probing a partially polarized ensemble via the central spin [22], which suggested the possibility of purification via reduced total angular momentum states, so-called dark states [34]. Theoretical proposals to engineer the state of dense central spin systems have focused on dissipative phase transitions [4], or quantum memory [10,34] and spin squeezing schemes [35] relying on fully polarized initial states. A protocol for direct control over the ensemble's inter-particle phase, allowing for purification and entanglement engineering at low total ensemble polarization, is still missing.
In this work, we use a simple and realistic form of symmetry breaking to gain control over an ensemble's inter-particle phase. Leveraging this control, we propose a three-stage state engineering protocol that utilizes the effective interaction between two distinct spin species, naturally present in real physical systems [14,24,25], and proceeds via control of the central spin exclusively. The first stage of the protocol locks the total polarization of the system to zero [15]. The second stage initializes the two spin species to an anti-polarized state with near-unit purity. The third stage involves a sequence of unitary gates that drives the system into a many-body singlet via phase engineering. Having in mind a near-term experimental realization, we quantify the protocol robustness as a function of model parameters and identify candidate physical platforms in which it could be successfully implemented.
MAIN
Symmetry and the total angular momentum representation
Central spin systems feature high-dimensional Hilbert spaces. In the dense limit, the central spin's interaction with the ensemble does not distinguish individual spins. This leads to collective symmetries and corresponding constants of motion, which we focus on here.
In this simplest scenario of a perfectly homogeneous spin bath (see Fig. 1a), a general system Hamiltonian is unchanged under the re-ordering of ensemble-spin indices. Such invariance is the highest symmetry that a spin bath can exhibit, and it results in the emergence of a constant of motion that dictates the rate of collective dynamics, as in Dicke superradiance [36]. Within the bath of spin-1/2 particles, this constant is identical to the magnitude of the total angular momentum, I, directly related to the eigenvalue of:
$\Big(\sum_{k=1}^{N} \mathbf{i}_k\Big)^2 \equiv \mathbf{I}^2, \qquad (1)$
where i k is the single spin operator of the k-th spin in the ensemble, and N is the total number of ensemble spins. Ensembles of higher-spin particles, such as spin-3/2, can be described by equivalent, albeit more numerous, constants of motion. Under this symmetry, coupling to the central spin cannot change the magnitude of the total angular momentum, only the polarization, and coherent control over the ensemble is thereby limited. In particular, efforts to prepare a many-body singlet (I = 0) are futile and the ensemble dynamics are governed by thermally (with β ∼ 0) occupied states of I, dominated by the highest degeneracy states near I ∼ √ N . We propose making use of the simplest form of reduced symmetry to gain control: breaking the system into two distinguishable but equally abundant ensembles (see Fig. 1b), which we call species. The groups of spins of the same species are characterized by their individual total angular momenta: I 1 and I 2 . Their magnitudes, I 1 and I 2 , become the new constants of motion. In the discussions to follow, we assign them the value N/2, which is the most likely value for a fully mixed initial state. Our results hold for all (I 1 , I 2 ) values in a ∼ N/2 vicinity [37], thus capturing all experimentally relevant dynamics. We also note that this easily extends to more than two species.
This situation of two distinguishable spin species makes it possible to alter the magnitude of the total angular momentum, I = |I 1 +I 2 |, and therefore to significantly reduce it (see Fig. 1c). When controlled via a directional pumping process (i.e. cooling), the spin ensembles can be initialized into a pure collective anti-polarized state [38]:
$|I^z_1 = -M,\ I^z_2 = M\rangle, \qquad (2)$
with $M = \min(I_1, I_2)$. Such a state contains no coherence between the species, and represents a classical limit to the total angular momentum reduction, featuring a noise of $\langle \mathbf{I}^2 \rangle_{\rm cl} = (I_1 + I_2)(|I_1 - I_2| + 1) \sim \sqrt{N}$; a factor $\sqrt{N}$ lower than that of a thermal state.
FIG. 1. State engineering in the central spin system. a, Perfectly homogeneous central spin system, symmetry-protected from changing its total angular momentum. b, Symmetry-broken central spin system, consisting of spin species $\mathbf{I}_1$ and $\mathbf{I}_2$, no longer protected from changing its total angular momentum. c, Reduction of the total angular momentum magnitude via the state purification protocol, resulting in a narrowed steady-state distribution (blue-shaded curve), peaked around $I = N^{1/4}$. d, Effective Jaynes-Cummings ladder of states $|I^z_1 = -M + n,\ I^z_2 = M - n\rangle$, with anti-correlated $I^z_1$ and $I^z_2$, where $M = \min(I_1, I_2)$. The thick and faint yellow arrows illustrate the dominant, and $2\Delta\omega$-detuned three-body interactions, respectively. The curvy orange arrows represent the central spin resets. The pale blue arrow displays a net direction of the phase space flow within this effective sideband-cooling process. e, Stages of anti-polarized state preparation. The red and blue arrows within the generalized Bloch spheres denote the total angular momenta of the two species, $\mathbf{I}_1$ and $\mathbf{I}_2$. Stage one (left panel): locking of the total polarization, $I^z = I^z_1 + I^z_2$, to zero. Stage two: activation of the three-body interaction (middle panel), equivalent to the back-action sensing, followed by central spin reset (right panel).
Creating entanglement via a controlled phase between the two species can lower the magnitude of the total angular momentum further, down to $I = |I_1 - I_2|$. In particular, for $I_1 = I_2$, the quantum limit is reached after the preparation of a many-body singlet:
expressed uniquely in the |I z 1 = −M + n, I z 2 = M − nbasis using Clebsh-Gordan coefficients. The many-body singlet is characterized also by a full noise suppression:
I 2 qu = 0.
System Hamiltonian: control via the central spin
We take the general Hamiltonian for a dense central spin system in an external magnetic field [13], and split the spin ensemble into two spin species i = 1, 2:
H = ω c S z + i=1,2 ω i I z i + i=1,2 a i S · I i .(4)
We consider the regime of high magnetic fields, as defined by a dominant Zeeman interaction ∝ ω c of the central spin, S. The second term captures the internal energy structure ω i of the spin ensembles. The last term represents a Heisenberg-type hyperfine coupling (∝ a i < ω i ) between the central spin and the bath spins. We consider a symmetry breaking between the two spin ensembles ω 1 = ω 2 , which in real systems can take myriad forms. Without loss of generality [39], we assume a 1 = a 2 ≡ a and ω 1 > ω 2 . In this high field regime, the central spin quantization axis is pinned to the z-direction, and the hyperfine interaction reduces to [40,41]:
a i=1,2 S · I i = aS z (I z 1 + I z 2 ) − a 2 4ω c (I z 1 + I z 2 ) + i=1,2 a 2 4ω c S z (I + i I − i + I − i I + i ) + a 2 2ω c S z (I + 1 I − 2 + I − 1 I + 2 ) + O[(a/ω c ) 2 ].(5)
The leading and only first-order hyperfine term is the collinear interaction (used in total polarization locking). The second term renormalizes the nuclear Zeeman interaction of the ensemble by a negligible amount of −a 2 /4ω c ω i . The third and fourth terms contain the effective two-body and the three-body interactions, respectively. The former will remain off-resonance during our protocol, while the latter will be critical in cooling and correlating the two spin ensembles.
Exclusively to stabilize the ensemble polarization [21], we also consider that, apart from the Zeeman interaction (∝ ω i ), the ensemble spins are subject to a small eigenstate-mixing interaction (∝ ν i ω i ) [31], which results in an effective non-collinear term:
H nc = i=1,2 aν i 2ω i S z I x i + O[(ν i /ω i ) 2 ].(6)
Finally, we consider a control of the entire system exclusively via the central spin, represented by a resonant drive of the central spin with strength Ω ω c , and work in a rotating frame of reference, under Rotating Wave Approximation [37].
Preparation of a pure anti-polarized state
Prior to the protocol execution, the spin ensemble is found in a fully-mixed state characterized by a density matrix ρ ∝ 1. The first two stages of our protocol initialize the ensemble to a pure collective state -an antipolarized state of the two spin species (Eq. 2) -which enables the generation of a many-body singlet of the whole ensemble in the third stage (last section of this article).
Stage 1: polarization locking -The initial state exhibits maximal uncertainty of the total polarization of the bath ∆ 2 I z ∼ √ N , where I z = I z 1 + I z 2 . The first stage of the protocol reduces this uncertainty to zero using a previously developed technique [15]. In brief terms, the collinear hyperfine interaction (the first term in Eq. 5) allows the central spin to sense the polarization-deviation from the I z = 0 lock-point which induces a Larmor precession around the z-axis. Subsequently, this deviation is corrected by a resonantly activated (Ω = (ω 1 +ω 2 )/2 -c.f. Ref. [42]) non-collinear interaction (using Eq. 6) which translates the acquired phase into a change in the total ensemble polarization. Finally, the state of the central spin is reset and this stage is repeated until the ensemble reaches the limit of I z = 0 and ∆ 2 I z = 0. From this point on, the non-collinear interaction remains offresonant.
Stage 2: full purification -At the end of the first stage, the ensemble's locked zero-polarization state is a mixture of states |I z 1 = −M + n, I z 2 = M − n where n = 0, 1, .., 2M , for which the remaining uncertainty lies in the polarization of individual species, I z 1 and I z 2 . The second stage of the protocol removes this uncertainty to produce a pure collective state. Doing so relies on the driven resonant activation Ω = ∆ω ≡ ω 1 − ω 2 of the threebody interaction (fourth term of Eq. 5). For the central spin initialized in state |↓ x , this interaction activates the transition:
|I z 1 = −M + n, I z 2 = M − n |↓ x ←→ |I z 1 = −M + n − 1, I z 2 = M − n + 1 |↑ x .(7)
This can be visualized in an effective Jaynes-Cummings ladder of states ( Fig. 1d) parameterized by the principal quantum number n corresponding to the |I z 1 = −M + n, I z 2 = M − n state. Equation 7 then represents a single quantum n ↔ n − 1 transition down the ladder. Combined with a central spin reset |↑ x → |↓ x applied every half-period of the three-body interaction, repeating the transition and reset forms a directional pumping process -equivalent to sideband cooling in harmonic systems [43] -towards the ladder's ground state n = 0. This ground state consists of the fully antipolarized ensemble state |I z 1 = −M, I z 2 = M , which is a pure collective state of the ensemble.
Maintaining the directionality of the pumping process relies on a detuning of 2∆ω between the selected transition |I z 1 , I z 2 |↓ x ↔ |I z 1 − 1, I z 2 + 1 |↑ x and the unwanted transition |I z 1 , I z 2 |↓ x ↔ |I z 1 + 1, I z 2 − 1 |↑ x (thick and faint yellow arrows in Fig. 1d, respectively), which drive the system towards the opposite ends of the ladder. This is sustained as long as the three-body interaction strength, N a 2 /2ω c , remains smaller than ∆ω. We characterize the directionality with their ratio, which ultimately sets the purification limit:
κ = ω c ∆ω/(N a 2 ).(8)
We summarize this two-stage cooling process in Fig. 1e, where the initial locked state has two spin ensembles with arbitrary but opposite orientations. Semi-classically, the second stage (three-body interaction) is equivalent to the central spin sensing the Larmor precession beatnote between the two spin ensembles (recall ω 1 = ω 2 ) in the hyperfine back-action, which acts on the central spin akin to a classical driving field. The timing of the central spin reset favors a particular phase of this beatnote, and thus prepares the two ensembles in a specific orientation. , as a function of the protocol time. ρ0 denotes the state of the bath at the beginning of the second stage of the protocol. The x-axis is shared across the panels.
Ideal system dynamics
We verify the convergence of the second stage of the protocol towards the ground state by calculating the full quantum evolution of the system numerically. We treat the ideal case, for which the three-body interaction is fully coherent and the central spin reset is instantaneous. We repeat this second stage as many times as necessary to reach steady state. For convergence towards the steady state under reasonable computational resources, we choose to work with N = 32 bath spins and focus on the I 1 = I 2 = 32/2 manifold [37]. We take the first stage of the protocol to be fully capable of confining the ensemble's dynamics to the I z = 0 subspace [15], spanned by {|I z 1 = −M + n, I z 2 = M − n , n = 0, 1, 2, .., 2M }, to which we accordingly restrict our simulation. At the beginning of the simulation, we set the ensemble's den-sity operator, ρ, to an equal statistical mixture of the subspace basis states. Figure 2a shows the central spin's evolution under the activated three-body interaction and confirms the coherent oscillation in the {|↓ x , |↑ x }-basis, with a half-period of τ 0 ∼ 2πωc a 2 (I1×I2) -inversely proportional to the fastest three-body interaction rate [37]. The state of the central spin is instantaneously reset every τ 0 , and during the consecutive iterations the magnitude of the acquired |↑ x -population drops down to zero as the ensemble approaches its ground state. Semi-classically, this corresponds to a drop in the magnitude of the noise I 2 sensed by the central spin (Fig. 2a). We verify this independently as shown in Fig. 2b, where I 2 saturates to its classical ∼ √ N limit as steady state is reached. Figure 2c shows the polarizations of each of the two species, I z 1 and I z 2 as a function of iteration time. We see that they saturate to maximal and opposite values I z 1 = + min(I 1 , I 2 ) and I z 2 = − min(I 1 , I 2 ). Importantly, at the longest iteration times their uncertainties approach zero, as expected for a pure anti-polarized state.
We quantify the quality of state preparation following our protocol using the bath state impurity:
= 1 − Tr ρ 2 ,(9)
where ρ is the density operator of the bath. For a pure state this measure is known to reach zero. In our simulation, we selected a directionality parameter of κ = 5 (Eq. 8) as an example case. Figure 2d shows that for this value of κ the impurity reaches a steady-state value of ≈ 10 −4 , which represents a negligible initialization error. After settling, the system is trapped in an oscillatory limit-cycle, resulting from the competition of
|↓ₓ⟩|I^z₁, I^z₂⟩ ↔ |↑ₓ⟩|I^z₁ − 1, I^z₂ + 1⟩ and |↓ₓ⟩|I^z₁, I^z₂⟩ ↔ |↑ₓ⟩|I^z₁ + 1, I^z₂ − 1⟩ transitions.
This error can be made arbitrarily small by increasing κ, which we address in the following section.
Dependence of protocol performance on system parameters
In the absence of dephasing, there remains a fundamental trade-off between impurity and the convergence time to steady-state T c . This is because reducing the three-body interaction -increasing κ -ensures a smaller contribution from off-resonant processes, thus reducing the initialization error, but slows the dynamics to steadystate. For κ → ∞ one could expect bringing the impurity arbitrarily close to zero, but this comes with a prohibitively long convergence time, T c .
To visualize the achievable steady-state impurities, and related convergence times, we run a complete simulation of our pulsed protocol for a range of values of κ and for N = 8 and N = 128. Figure 3a confirms the clear trend of purity improvement with an increase in κ, where substantial degrees of purification are achieved past κ ≳ 1. Figure 3b shows the inverse N-dependence of the half-period of the collective three-body exchange, τ₀ ∼ 4πω_c/(N a²), and the number of stage iterations required to reach convergence, proportional to N. The convergence time is a simple product of these two quantities and thus combines to
T_c ∼ 16πω_c/a² ,    (10)
indicating that the convergence time is dependent purely on the three-body interaction strength, and not the system size.
Verifying the above trends in the large-N limit becomes computationally prohibitive. As an efficient way to extend our results into this regime, we turn to a steady-state solver of a quantum master equation in which the coherent three-body exchange proceeds continuously and simultaneously with a central spin reset whose rate is Γ_op = 2π/τ₀. Figure 3c displays the steady state impurity for N up to 10,000 and for the same range of values of κ as in Fig. 3a. This confirms effective purification for κ ≳ 1 for large N. The solid black curve in Fig. 3c is the prediction from a simple rate equation [37], i.e. N → ∞, which overlaps with the steady-state model in the large-N limit. Both models feature a ∼ κ⁻² roll-off, as obtained analytically from a first-order expansion of the impurity in the rate equation model (see [37]). As shown in Fig. 3d, eigenvalue analysis of the rate equation allows us to calculate the convergence time, T_c, agreeing with the size-independent T_c ∼ 16πω_c/a² behavior observed in the pulsed model (Fig. 3b).
Comparing the pulsed (Fig. 3a) and continuous (Fig. 3c) protocol performance, we note that maintaining the temporal separation between the central spin reset and activation of the three-body interaction with the ensemble allows reaching lower steady-state impurities. This property, illustrated by comparing experiments done in the continuous [21] and pulsed [15] regimes, can be straight-forwardly explained: continuously measuring the central spin reduces its ability to sense the ensemble noise, and in turn limits the achievable purity.
Resilience to system imperfections
Real physical systems deviate from the idealized behavior considered so far, as they are affected by central spin and ensemble inhomogeneous dephasing, as can occur if the ensemble spins have a spread of Larmor precession frequencies √⟨∆²ω_i⟩ > 0 [15,16,37]. Each stage of our purification protocol relies on the exchange dynamics between the central spin and the ensemble. Working in this strong coupling regime renders the protocol equally sensitive to both the central spin and ensemble dephasing mechanisms [37], proceeding at rates Γ_c and Γ_b, respectively. We thus use the total dephasing rate Γ_d = Γ_c + Γ_b as the relevant parameter and explore the robustness of our protocol against such system imperfection. Using the rate equation approach [37], we calculate the steady-state impurity as a function of the dephasing rate normalized by the three-body interaction rate, Γ_d τ₀/(2π), as shown in Fig. 4a (black curve). A significant alteration of the protocol performance is observed when the three-body interaction time τ₀ becomes longer than the typical dephasing time, i.e. Γ_d τ₀/(2π) ≳ 1. Nonetheless, for large values of the directionality parameter κ (here, κ = 10), the engineered steady state impurity remains lower than unity by orders of magnitude. In this low-impurity regime (and for N → ∞), this is quantitatively captured by a simple expression [37]:
≈ 2 / {1 + 64κ² [1 + 2Γ_d τ₀/(2π)]⁻²} .    (11)
The rate of the activated three-body exchange is also affected by dephasing, which results in an increased protocol convergence time, T c (blue line in Fig. 4a). As expected from a time-energy uncertainty principle, dephasing also comes with a broadening of the three-body resonance Ω = ∆ω, as seen in Fig. 4b.
Benchmarking candidate material platforms
Throughout this work, we have identified the two platform-agnostic control parameters determining the protocol performance: the directionality parameter, κ, and the normalized total dephasing rate, Γ d τ 0 /(2π). We now evaluate these parameters for candidate physical systems and quantify the corresponding bounds on steadystate bath impurities, , as well as the protocol convergence times, T c , as shown in Fig. 5.
We restrict our analysis to the physical platforms that naturally realize dense central spin systems (see the Hamiltonian of Eq. 4), and in which the rudimentary protocol ingredients, like the control and reset of the central spin state, have been demonstrated previously. Our nonexhaustive selection of the candidate systems includes Gate-Defined Quantum Dots (QDs), Lattice matched GaAs-AlGaAs QDs, Stranski-Krastanow InGaAs QDs, and Rare Earth Ions (REI). The system-specific parameters used in the calculations are tabulated in the supplementary materials [37].
For QD systems the two spin species that break ensemble symmetry are simply two nuclear-spin species with different gyromagnetic ratios: gallium and arsenic. Calculations for QD systems have taken into account the spin-orbit magnetic-field B dependence of the central-spin lifetime, ∝ B³ [44] and ∝ B⁵ [45] for the Gate Defined QDs and the epitaxial QDs (both InGaAs and GaAs-AlGaAs), respectively. Beyond magnetic fields of 10 T, this spin-orbit effect becomes the most performance-limiting factor and caps the impurity-convergence time trade-off. Spectacular purification to < 10⁻⁴ can be achieved for GaAs and Gate Defined QDs, due to the high values of the directionality parameter, κ, and the low total dephasing rates, Γ_d τ₀/(2π), within the typical range of externally applied magnetic fields, B (see Fig. 5a). Gate-Defined QDs feature ∼100-fold larger directionality parameters, κ, than the GaAs QDs at the same externally applied magnetic field, for the same (unnormalized) total dephasing rates, Γ_d [14,24]. This generally leads to higher degrees of purification, but significantly longer convergence times for Gate Defined QDs (see Fig. 5b).
InGaAs QDs feature a larger effective nuclear dephasing rate, Γ d , due to nuclear spectral inhomogeneity arising from strain-induced quadrupolar broadening [25]. In their case, three-body interactions could still lead to a weak purification, as consistent with the observation of enhanced spin-wave modes at low ensemble polarization [22].
We now turn our attention to REI systems -specifically to 171 Yb 3+ : YVO 4 , used recently to demonstrate quantum-state transfer between a 171 Yb electronic central spin and the second shell of the nearest 51 V nuclei [18]. This system belongs to the regime of small but dense central-spin systems and offers little tunability with an external magnetic field as all nuclear spins are of the same species. However, the nuclear-spin shells surrounding the central spin can be distinguished via quadrupolar shifts, allowing a set of values for the effective ∆ω [37]. The required central-spin mediated three-body interaction arises as a second-order magnetic dipole-dipole coupling between two of the first, second, and higher shells of 51 V nuclei interfaced with the 171 Yb. The measured nuclear coherence time is three times shorter than the interaction time τ 0 , but this still allows for generating high-purity anti-polarized states (see Fig. 5).
Preparation of a many-body singlet
The singlet state |I = 0 is a superposition of all |I z 1 = −M + n, I z 2 = M − n eigenstates with a ∝ (−1) n phase on each state, as shown in Eq. 3. Having established a pure anti-polarized state of the two subensembles (i.e. n = 0) following the first two stages of the protocol, the third and final stage will prepare the singlet state by weaving an alternating phase into the Jaynes-Cummings ladder, as shown in Fig. 6a. We stress that the many-body singlet state is not an eigenstate of our system Hamiltonian; however, it refocuses every ∆t = 2π/∆ω following the protocol termination [37].
We consider only the ideal execution of this final stage involving unitary gates, free of any dephasing. We work in a frame co-rotating with ΩSₓ + ∆ω(I^z₁ − I^z₂)/2. For the sake of clarity, we outline the ideal protocol steps assuming that the effective Jaynes-Cummings ladder has 2M + 1 = 2^K rungs, corresponding to a total number of ensemble spins N ∼ (2^K − 1)²/2; the generalization to an arbitrary N is straightforward. In the first instance, we take a simplified scenario in which the three-body interaction is independent of the |I^z₁ = −M + n, I^z₂ = M − n⟩-state; we will later take the dependence on n into account.
We apply the following sequence of unitary operations, starting from the state
|ψ₀⟩ = |↓ₓ⟩ |I^z₁ = −M, I^z₂ = M⟩ ,    (12)
(i) a central spin (π/2) z -gate, giving state
(1/√2) (|↓ₓ⟩ − i|↑ₓ⟩) |I^z₁ = −M, I^z₂ = M⟩ ,    (13)
as illustrated in the second level diagram from the left in Fig. 6a;
(ii) a three-body-interaction π-gate, U (τ ) = U π , where τ is the interaction time, and which up to a global phase leaves the system in the state
|ψ₁⟩ = (1/√2) |↓ₓ⟩ (|I^z₁ = −M, I^z₂ = M⟩ − |I^z₁ = −M + 1, I^z₂ = M − 1⟩) ,    (14)
as shown in the third level diagram from the left in Fig. 6a. Together, this pair of gates (π/2)_z and U_π injects entanglement into the spin ensemble via the central spin. We combine them into a composite gate S₁, as shown in Fig. 6a. Application of this gate doubles the overlap with the singlet state, |⟨ψ₁|(|↓ₓ⟩|I = 0⟩)|², from the initial state's, |⟨ψ₀|(|↓ₓ⟩|I = 0⟩)|². To increase this overlap further, we apply an extended composite gate, S₂, which contains two steps: (i) entanglement injection using S₁ (the fourth and fifth level diagrams from the left in Fig. 6a) and (ii) phase redistribution onto the |I^z₁ = −M + n, I^z₂ = M − n⟩-states with n = 0, 1, 2, 3 using a central spin (π)_z-gate and a three-body-interaction π-gate (the two right-most level diagrams in Fig. 6a). We generalize this sequence to a composite gate S_K, for which the phase redistribution is applied 2^(K−1) − 1 times (gray box in Fig. 6b). Each application of a phase redistribution sub-sequence brings the next highest rung on the ladder into the ensemble superposition state. As a result, the overlap of system state |ψ_j⟩ with the many-body singlet state, |⟨ψ_j|(|↓ₓ⟩|I = 0⟩)|², doubles at each step of a sequence of composite gates S_j, for j = 1, 2, .., K.
The modular structure of this algorithm is advantageous in terms of minimizing the impact of the central spin dephasing on the ensemble state preparation. It is sufficient for the central spin to stay coherent over a given S_j gate duration, after which it can be safely reinitialized. We note that the phase redistribution is the most operation-costly sub-sequence, as it involves 2^j − 2 gates within each S_j gate. Nevertheless, the protocol complexity remains linear with the number of states along the ladder, 2^K, as the total number of gates in a complete sequence is 2(2^K − 1).
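As a quick illustration of this counting, the following sketch (not the authors' code; the gate labels are purely schematic) builds the composite-gate sequence S₁, ..., S_K described above and verifies that the total number of gates is 2(2^K − 1).

```python
# Schematic construction of the composite gates S_j:
# entanglement injection [(pi/2)_z, U_pi], then 2^(j-1) - 1 phase-redistribution
# blocks [(pi)_z, U_pi]. Labels only - this is not a quantum simulation.
def S(j):
    seq = ["(pi/2)_z", "U_pi"]                      # entanglement injection (the S_1 pair)
    seq += ["(pi)_z", "U_pi"] * (2**(j - 1) - 1)    # phase redistribution, repeated 2^(j-1) - 1 times
    return seq

K = 4
full_sequence = [gate for j in range(1, K + 1) for gate in S(j)]
assert len(full_sequence) == 2 * (2**K - 1)         # total gate count quoted in the text
print(len(full_sequence))                           # 30 gates for K = 4
```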
We demonstrate the performance of this algorithm in Fig. 6c, using the trace distance from the engineered state to the singlet state as a function of the total number of gates (gray squares). For K = 4 (N = 112), we can visualize the steady progression from |ψ₀⟩ towards the singlet state |↓ₓ⟩|I = 0⟩ as the algorithm steps through the sequence of composite gates S₁ to S₄, culminating in an exact preparation where the final trace distance reaches zero. The classical limit ⟨I²⟩_cl ∼ √N is overcome as soon as the sequence begins to inject entanglement into the ensemble, and reaches the quantum limit of ⟨I²⟩_qu = 0 at its termination. Interestingly, this happens at the expense of raising the uncertainty in I^z₁ and I^z₂ towards their thermal values ∼ √N, akin to squeezing. We now turn to the more realistic description of the spin ensemble, for which the three-body-interaction π-gate time depends on the state |I^z₁ = −M + n, I^z₂ = M − n⟩. This interaction time will take on values from ∼2M (for n = 0, 2M) to ∼M(M + 1) (for n = M) across the ladder [37]. While this behavior complicates the algorithm implementation, the core structure used to inject entanglement and redistribute phase remains in place. With the right choice of the three-body interaction times, τ, and the central spin z-gate phases, φ, it is possible to reach the singlet state. To do so we treat the sequence parameters {τᵢ, φᵢ} variationally for each gate to minimize the (Hilbert-Schmidt) trace distance to the singlet state from
|ψ⟩ = ∏^{←}_{i=0} U(τᵢ) · (φᵢ)_z |↓ₓ⟩ |I^z₁ = −M, I^z₂ = M⟩ .    (15)
Formally, this involves an optimization over a ∼2^K-dimensional space of parameters, for which we employ a gradient-descent algorithm [37,46]. The resulting trace distances for the sequences of increasing length are presented in Fig. 6c as the colored circles (red-to-blue gradient is consistent with the color-coding used in ref.
[37]). Strikingly, our algorithm arrives within a trace distance of a few % from the singlet state. A process optimization in larger systems, aided by machine learning algorithms, could prove to be a viable route for creating arbitrary state superpositions [47].
CONCLUSIONS AND OUTLOOKS
In this work, we have proposed a protocol that can initialize a spin ensemble into a pure anti-polarized state and steer it towards a many-body entangled singlet state -exclusively by using a single central-spin qubit. To do so, we have made use of the three-body interaction naturally present in dense spin ensembles and shown that it can be harnessed by breaking the ensemble into two spin species. We have suggested several platforms where this algorithm would be realizable, and where significant purity can be achieved even in the presence of dephasing. We note that in these systems, breaking the spin ensemble can take multiple forms including, but not limited to, nuclear-spin species with different gyromagnetic ratios [25] or high-spin species which split into two effective qubit ensembles under the influence of electric-field gradients (e.g. from strain) [48].
From the perspective of an electron spin qubit hosted in a material with non-zero nuclear spins, a singlet state of its surrounding spin ensemble would dramatically boost its coherence -both homogeneous and inhomogeneous noise sources [25] would be quenched altogether. From the perspective of leveraging this spin ensemble as a quantum memory resource [34], initialization to a pure collective state is sufficient to run an algorithm with unit fidelity, and the availability of two anti-polarized species could even be extended to a two-mode register. The state-engineering recipes that we have established could be extended to more elaborate computational [49] and error-correcting [20] algorithms. Fundamentally, tracking a many-body state in the presence of tuneable interactions can reveal the entanglement dynamics in and out of the central-spin system, opening an experimental window onto quantum information scrambling and area laws for entanglement entropy. example, measuring the transition rate asymmetry 2 of the Ω = ω 1 activated processes:
|↓ₓ⟩|I^z₁, I^z₂⟩ → |↑ₓ⟩|I^z₁ − 1, I^z₂⟩ and |↑ₓ⟩|I^z₁, I^z₂⟩ → |↓ₓ⟩|I^z₁ + 1, I^z₂⟩
(2) and the transition rate asymmetry of the Ω = ω 2 activated processes:
|↓ₓ⟩|I^z₁, I^z₂⟩ → |↑ₓ⟩|I^z₁, I^z₂ − 1⟩ and |↑ₓ⟩|I^z₁, I^z₂⟩ → |↓ₓ⟩|I^z₁, I^z₂ + 1⟩    (3)
constrains both I 1 and I 2 , which can be used to conditionally discard the post-protocol state, or to adjust the gate parameters in the third stage of the protocol.
II. FULL SIMULATION OF QUANTUM DYNAMICS
To simulate the quantum dynamics of the composite system in a reduced Hilbert space (see the manuscript) we use a master equation and steady state solvers from the QuTiP 3,4 library in Python.
The exchange between the central spin and the ensemble is modelled by the following Hamiltonian, written in a frame rotating with a δ-detuned laser drive, following a Rotating Wave Approximation:
H(t) = Ω(t)Sₓ + δS_z + Σ_{i=1,2} (ω_i − a²/(4ω_c)) I^z_i + Σ_{i=1,2} a S_z I^z_i + Σ_{i,j=1,2} (a²/(4ω_c)) S_z (I⁺_i I⁻_j + I⁻_i I⁺_j) + Σ_{i=1,2} (a ν_i/(2ω_i)) S_z I^x_i .    (4)
The individual terms are defined in the manuscript. Within the first stage of the protocol, setting Ω(t) = (ω₁ + ω₂)/2 activates the non-collinear (∝ S_z I^x_i) interaction. Within the second and third stages of the protocol, Ω(t) = ∆ω activates the three-body interaction (∝ S_z (I⁺₁ I⁻₂ + I⁻₁ I⁺₂)). At all times δ = 0. The instantaneous central spin resets within the second stage of our protocol were effectuated by the following operation on the composite density operator:
ρ → |↓ₓ⟩⟨↓ₓ| ⊗ Tr_c ρ ,    (5)
where Tr_c stands for the partial tracing with respect to the central spin degrees of freedom. The steady state solver was applied to a continuous approximation of the protocol in which the central spin was reset continuously at an optical pumping rate Γ_op = 2π/τ₀, where τ₀ corresponds to the time between subsequent instantaneous central spin resets in the exact protocol. The collapse operator used in modelling that process was √Γ_op |↓ₓ⟩⟨↑ₓ|. We verify the protocol robustness in an I₁ ≠ I₂ manifold by performing an identical simulation to that from the manuscript section 'Ideal system dynamics'. Both κ and τ₀ are fixed to the same values, and the only difference lies in the choice of I₁ = 32/2 − 1 and I₂ = 32/2 + 1. The results are illustrated in Fig. 2, and organized in a one-to-one correspondence with Fig. 2 of the manuscript. As displayed in Fig. 2a, the ⟨Sₓ⟩ population (that is, the only direct observable during the protocol) saturates to −1/2, like in Fig. 2a of the manuscript. The polarizations ⟨I^z₁⟩ and ⟨I^z₂⟩ are driven towards the maximal possible opposite values, that is ± min(I₁, I₂) (see Fig. 2c), as anticipated. Dynamics feature an equal amount of purification after settling to the limit cycle (see Fig. 2d), and the only significant difference comes in the degree of ⟨I²⟩ reduction. Still, the protocol reaches the anticipated classical limit ⟨I²⟩_cl = (I₁ + I₂)(|I₁ − I₂| + 1).
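The instantaneous reset of Eq. 5 amounts to tracing out the central spin and re-attaching it in |↓ₓ⟩. A minimal QuTiP sketch of this operation is given below (not the authors' code); the bath dimension and the input state are arbitrary placeholders rather than the ensemble state used in the simulations.

```python
import qutip as qt

d_bath = 9                                          # placeholder bath (ladder) dimension
down_x = (qt.basis(2, 0) - qt.basis(2, 1)).unit()   # |down_x> written in the S_z basis
rho = qt.tensor(qt.rand_dm(2), qt.rand_dm(d_bath))  # arbitrary composite state: central spin (x) bath

rho_bath = rho.ptrace(1)                            # Tr_c rho: keep only the bath (subsystem 1)
rho_reset = qt.tensor(down_x * down_x.dag(), rho_bath)   # Eq. 5: |down_x><down_x| (x) Tr_c rho

print(abs(rho_reset.tr() - 1.0) < 1e-10)            # the reset map is trace preserving
# In the continuous approximation, the same physics enters as a Lindblad collapse
# operator; with the standard convention this carries the square root of the rate,
# i.e. sqrt(Gamma_op) * |down_x><up_x| passed to qutip.steadystate or qutip.mesolve.
```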
B. Equal impact of the ensemble and the central spin dephasing on the protocol's performance
We studied the effects of dephasing of the central spin and the ensemble on the protocol's performance. The former process was modelled using a √Γ_c Sₓ collapse operator, and the latter made use of two distinct collapse operators: √(Γ_b/2) I^z₁ and √(Γ_b/2) I^z₂. Fig. 3 illustrates the steady state impurity calculated in the continuous protocol approximation for a range of dephasing rates Γ_c and Γ_b. It is evident that both processes have a quantitatively identical impact on the state preparation, which motivates introducing a total dephasing rate, Γ_d = Γ_b + Γ_c, as a figure of merit in quantifying the resilience to the system's imperfections.
III. RATE EQUATION MODEL
A. Evolution of populations and the scattering rates

A good quantitative understanding of the protocol dynamics can be gained from a simple rate equation model. Within this model, the dynamics is again restricted to the I^z = 0 ladder of states, as a result of perfect total polarization locking. For convenience, we label the n-th ladder state, |I^z₁ = −(M − n), I^z₂ = +(M − n)⟩, with a principal quantum number: |n⟩. The model assumes that at coarse-grained timescales, the coherences between different states across the ladder vanish, or in other words:
Tr_c ρ = Σ_n p_n |n⟩⟨n| .    (6)
The evolution of the population of the |n -state, p n , is then captured by the following equation:
ṗ_n = −(r⁻_n + r⁺_n) p_n + (r⁺_{n−1} p_{n−1} + r⁻_{n+1} p_{n+1}) ,    (7)
where r^±_n stands for the rate of population flow from the |n⟩-state to the |n ± 1⟩-state. The ∝ p_n term in the equation describes the 'out-flow' processes proceeding at a total rate r⁻_n + r⁺_n, as illustrated in Fig. 4a. The first of the two processes involves a coherent exchange |↓ₓ⟩|n⟩ → |↑ₓ⟩|n − 1⟩ driven by a three-body interaction, and detuned by ∆⁻ = Ω − ∆ω, followed by the central spin reset after time τ₀ -here approximated as proceeding concomitantly with the exchange, at the optical pumping rate Γ_op = 2π/τ₀. The second process consists of the same two steps, except that the dynamics proceeds between the |n⟩ and |n + 1⟩ states, for which the three-body interaction detuning is ∆⁺ = Ω + ∆ω. The rates of the out-flow processes, r^±_n, are then proportional to the populations of the excited states |↑ₓ⟩|n ± 1⟩, where the constants of proportionality are given by the central spin reset rate, Γ_op. This yields:
r^±_n = Γ_op ⟨ |↑ₓ, n ± 1⟩⟨↑ₓ, n ± 1| ⟩    (8)
We approximate the expectation value from Eq. 8 by a steady-state excited state population of a two level system constituted by |↓ x , n and |↑ x , n ± 1 states; it follows that:
r^±_n = (Γ_op/2) · [(α^±_n)²/(Γ_op Γ)] / [1 + (α^±_n)²/(Γ_op Γ) + (∆^±/Γ)²] ,    (9)
where Γ = Γ_op/2 + Γ_d incorporates the sum of phenomenological ensemble and central spin dephasing rates, Γ_d. The effective drive strength, α^±_n, corresponds to the three-body interaction rate, given by:
α⁺_n = (a²/(4ω_c)) √((2I − n)(n + 1)) ,  α⁻_n = (a²/(4ω_c)) √((2I − n + 1) n) .    (10)
Its dependence on I and n is a result of collective enhancement, dependent on the total angular momenta of the sub-ensembles (I₁ = I₂ = I ∼ N/2). The second term in Eq. 7 captures the effect of the 'in-flow' processes. The |n⟩-state can be populated by a ∆⁺-detuned process originating from the |n − 1⟩-state, or a ∆⁻-detuned process originating from the |n + 1⟩-state, as shown in Fig. 4b. The relevant scattering rates are found using Eq. 9.
B. Steady state solution
The system reaches steady state when ṗ_n = 0 for all n. This generates the following recursive relation between the |n⟩-state populations:
p₁ = (r⁺₀/r⁻₁) p₀ ,  p_{n+2} = [(r⁻_{n+1} + r⁺_{n+1})/r⁻_{n+2}] p_{n+1} − [r⁺_n/r⁻_{n+2}] p_n ,  n = 0, 1, .., 2I − 2 .    (11)
This fully determines the steady state populations, up to a normalization factor, which can be constrained using Σ_n p_n = 1. The computational complexity of solving for the steady state populations is O(I); this is to be compared with the O(I⁶) complexity of solving for the steady state of the quantum master equation with dissipation.
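A minimal numerical sketch of this recursion is given below (not the authors' code; all parameter values are illustrative placeholders). It builds the rates of Eqs. 9-10, iterates Eq. 11, and evaluates the impurity 1 − Σ_n p_n² defined in the following subsection, which for the chosen κ can be compared against the thermodynamic-limit estimate of Eq. 14.

```python
import numpy as np

# --- illustrative placeholder parameters (arbitrary units) ---
a, omega_c, N = 1.0, 100.0, 200          # hyperfine constant, central-spin splitting, bath size
I = N // 2                               # I1 = I2 = I ~ N/2, following the text
kappa = 10.0                             # directionality parameter
d_omega = kappa * N * a**2 / omega_c     # from kappa = omega_c * d_omega / (N a^2)
Omega = d_omega                          # drive on the three-body resonance
Gamma_op = N * a**2 / (2 * omega_c)      # optical pumping rate at the optimal tau_0
Gamma_d = 0.0                            # total dephasing rate (ideal case)
Gamma = Gamma_op / 2 + Gamma_d           # linewidth entering Eq. 9

n = np.arange(2 * I + 1)                 # ladder index n = 0 .. 2I
alpha_p = a**2 / (4 * omega_c) * np.sqrt((2 * I - n) * (n + 1))   # Eq. 10, |n> -> |n+1>
alpha_m = a**2 / (4 * omega_c) * np.sqrt((2 * I - n + 1) * n)     # Eq. 10, |n> -> |n-1>

def rate(alpha, detuning):
    """Scattering rate of Eq. 9."""
    s = alpha**2 / (Gamma_op * Gamma)
    return 0.5 * Gamma_op * s / (1 + s + (detuning / Gamma)**2)

r_plus = rate(alpha_p, Omega + d_omega)   # detuning Delta+ = Omega + d_omega
r_minus = rate(alpha_m, Omega - d_omega)  # detuning Delta- = Omega - d_omega

p = np.zeros(2 * I + 1)
p[0] = 1.0
p[1] = r_plus[0] / r_minus[1] * p[0]      # Eq. 11, first relation
for m in range(2 * I - 1):                # Eq. 11, m = 0 .. 2I-2
    p[m + 2] = ((r_minus[m + 1] + r_plus[m + 1]) / r_minus[m + 2]) * p[m + 1] \
               - (r_plus[m] / r_minus[m + 2]) * p[m]
p /= p.sum()                              # normalization, sum_n p_n = 1
# (populations far down the ladder are below machine precision and dominated by
#  round-off, which does not affect the impurity computed below)

impurity = 1.0 - np.sum(p**2)
estimate = 2 / (1 + 64 * kappa**2 * (1 + 2 * Gamma_d / Gamma_op)**-2)   # Eq. 14 form
print(impurity, estimate)                 # both ~ 3e-4 for kappa = 10
```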
C. Scaling of impurity, , with directionality parameter, κ, and dephasing Γ d τ0/(2π)
The preparation performance can be characterized by the impurity of the reduced density operator, ρ b = Tr c ρ. The rate equation model approximates this impurity by:
= 1 − Tr ρ²_b = 1 − Σ_n p²_n .    (12)
In the case of perfect state preparation we have p₀ = 1, and the impurity equals zero. The preparation performance starts to drop when the |1⟩-state acquires a finite population, leading to an initial impurity increase (calculated at Ω = ∆ω):
≈ 2 r⁺₀/r⁻₁ = 2 [1 + 4(∆ω)²/(Γ_op Γ) + α²/(Γ_op Γ)]⁻¹ ,    (13)
where we introduced α ≡ α⁺₀ = α⁻₁. Since α² ∝ N, and Γ_op Γ ∝ N² (as Γ_op = N a²/(2ω_c), for the optimal value of τ₀), in the thermodynamic limit this expression simplifies further:
≈ 2 / [1 + 64κ² (1 + 2Γ_d/Γ_op)⁻²] ,    (14)
where κ:
κ = ω_c ∆ω / (N a²) ,    (15)
as defined in the manuscript. The requirement for the cooling to proceed optimally is therefore κ ≫ 1 and Γ_d/Γ_op ≪ 1.
D. Convergence Time, Tc
The rate equation formalism allows us to estimate the convergence time of the protocol, T_c, from a spectral analysis of the matrix Λ that generates the rate equation as:
ṗ = Λ p    (16)
where p is a vector of |n -state populations. We notice that due to the contractive nature of dynamics, the real parts of the Λ-matrix eigenvalues are all non-positive, and can be arranged according to:
0 = λ₀ > Re(λ₁) > Re(λ₂) > ...    (17)
We then identify T_c = 2π/|Re(λ₁)|, as |Re(λ₁)| is the slowest convergence rate in the model.
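The sketch below (again illustrative placeholder parameters, not the authors' code) builds the generator Λ for a small ladder from the rates of Eq. 9 and extracts T_c = 2π/|Re(λ₁)| from its spectrum, to be compared with the size-independent estimate T_c ∼ 16πω_c/a².

```python
import numpy as np

a, omega_c, N = 1.0, 100.0, 40            # small ladder so the diagonalization is cheap
I = N // 2
kappa = 10.0
d_omega = kappa * N * a**2 / omega_c
Omega = d_omega
Gamma_op = N * a**2 / (2 * omega_c)
Gamma = Gamma_op / 2                      # no dephasing in this example

n = np.arange(2 * I + 1)
alpha_p = a**2 / (4 * omega_c) * np.sqrt((2 * I - n) * (n + 1))
alpha_m = a**2 / (4 * omega_c) * np.sqrt((2 * I - n + 1) * n)
rate = lambda al, det: 0.5 * Gamma_op * (al**2 / (Gamma_op * Gamma)) / (
    1 + al**2 / (Gamma_op * Gamma) + (det / Gamma)**2)
r_plus, r_minus = rate(alpha_p, Omega + d_omega), rate(alpha_m, Omega - d_omega)

# Generator of dp/dt = Lambda p (Eq. 16): losses on the diagonal, gains off-diagonal.
Lam = (np.diag(-(r_plus + r_minus))
       + np.diag(r_plus[:-1], k=-1)       # |n> -> |n+1> feeds the row below
       + np.diag(r_minus[1:], k=+1))      # |n> -> |n-1> feeds the row above

ev = np.sort(np.linalg.eigvals(Lam).real)[::-1]   # 0 = lambda_0 > Re(lambda_1) > ...
T_c = 2 * np.pi / abs(ev[1])                      # slowest nonzero relaxation mode
print(T_c, 16 * np.pi * omega_c / a**2)           # same order of magnitude
```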
V. OPTIMIZING PREPARATION OF MANY-BODY SINGLET
A. Central spin gate

We introduce the following notation for a single qubit gate:
(φ)_j ≡ exp{−iφσ_j/2},  j = x, y, z    (18)
where σ x , σ y , and σ z are 2 × 2 Pauli matrices.
B. Three-body interaction gate
Upon matching the Ω = ∆ω resonance condition, the evolution of the system in the frame rotating with (1/2)∆ω(I^z₁ − I^z₂) + ΩSₓ is dictated by the following Hamiltonian:
H_exc = (a²/(4ω_c)) (|↑ₓ⟩⟨↓ₓ| I⁻ + |↓ₓ⟩⟨↑ₓ| I⁺) ,    (19)
where I^± = I^±₁ I^∓₂ are the non-linear ladder operators for the effective Jaynes-Cummings ladder of |n⟩-states, whose action is captured by:
I⁺ = Σ_{n=0}^{2I−1} √((2I − n)(n + 1)) |n + 1⟩⟨n| ≡ Σ_{n=0}^{2I−1} e_n |n + 1⟩⟨n| ,    (20)
and I − = (I + ) † . Evolution of the system in that frame over time τ , generated by H exc , realises the following gate:
exp{−iτ H_exc} = 1 + |↑ₓ⟩⟨↑ₓ| ⊗ Σ_{n=0}^{2I−1} [cos(a² e_n τ/(4ω_c)) − 1] |n⟩⟨n| + |↓ₓ⟩⟨↓ₓ| ⊗ Σ_{n=0}^{2I−1} [cos(a² e_n τ/(4ω_c)) − 1] |n + 1⟩⟨n + 1| − |↑ₓ⟩⟨↓ₓ| ⊗ Σ_{n=0}^{2I−1} i sin(a² e_n τ/(4ω_c)) |n⟩⟨n + 1| − |↓ₓ⟩⟨↑ₓ| ⊗ Σ_{n=0}^{2I−1} i sin(a² e_n τ/(4ω_c)) |n + 1⟩⟨n| .    (21)
In particular, it is readily seen that the exact π-gate time, τ π , is dependent on n since the enhancement factors e n vary across the ladder of states. In contrast, for the simplified system with e n = const, the exchange interaction π-gate would be:
exp{−iτ_π H⁰_exc} = |↓ₓ⟩⟨↓ₓ| ⊗ |0⟩⟨0| + |↑ₓ⟩⟨↑ₓ| ⊗ |2I⟩⟨2I| − i |↑ₓ⟩⟨↓ₓ| ⊗ Σ_{n=0}^{2I−1} |n⟩⟨n + 1| − i |↓ₓ⟩⟨↑ₓ| ⊗ Σ_{n=0}^{2I−1} |n + 1⟩⟨n| .    (22)
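To make the action of these gates concrete, the following QuTiP sketch (illustrative, with placeholder parameters; not the authors' code) builds H_exc of Eq. 19 on a small ladder and checks that evolving for the n-dependent π-gate time swaps |↓ₓ⟩|n + 1⟩ into |↑ₓ⟩|n⟩, as read off from Eq. 21.

```python
import numpy as np
import qutip as qt

I_tot = 4                                  # small example ladder: 2I + 1 = 9 states
dim = 2 * I_tot + 1
a, omega_c = 1.0, 10.0                     # placeholder couplings
g = a**2 / (4 * omega_c)                   # coupling strength a^2/(4 omega_c)

e = np.sqrt([(2 * I_tot - m) * (m + 1) for m in range(dim - 1)])   # enhancement factors e_n (Eq. 20)
I_plus = qt.Qobj(np.diag(e, k=-1))         # <n+1| I+ |n> = e_n
I_minus = I_plus.dag()

down, up = qt.basis(2, 0), qt.basis(2, 1)  # stand-ins for |down_x>, |up_x>
H_exc = g * (qt.tensor(up * down.dag(), I_minus) + qt.tensor(down * up.dag(), I_plus))  # Eq. 19

n = 0                                      # target rung
tau_pi = np.pi / (2 * g * e[n])            # n-dependent pi-gate time (cf. Eq. 27)
U = (-1j * tau_pi * H_exc).expm()          # three-body-interaction gate (Eq. 21)

psi = qt.tensor(down, qt.basis(dim, n + 1))             # start in |down_x>|n+1>
target = qt.tensor(up, qt.basis(dim, n))                # expect |up_x>|n> up to a phase
print(abs((target.dag() * (U * psi)).full()[0, 0])**2)  # transfer probability ~ 1
```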
FIG. 5. Optimizing gate times and phases for singlet preparation. a, Hilbert-Schmidt distance (i.e. our cost function, C) during simplified (squares) and optimized (circles) protocols of varied length. The color-coding is consistent with Fig. 6c of the manuscript, which only concerns the protocols' endpoints. b, Evolution of the trace distance in the considered protocols. c, Evolution of the expectation value of total angular momentum squared in the considered protocols. d, Optimal Z-gate phases in protocols of varied length. The beige circles correspond to the initial condition for the gradient descent optimization. e, Optimal exchange gate times in protocols of varied length. The beige circles correspond to the initial condition for the gradient descent optimization. 'cpl' in the label stands for the coupling strength, a²/(4ω_c).
C. Optimizing the gates via gradient descent
In order to tailor the sequence parameters in a real system, we minimize the appropriately constructed cost function, which takes the following quantum state:
|ψ⟩ = ∏^{←}_{j=0} exp{−iτ_j H_exc} exp{−iφ_j σ_z/2} |↓ₓ⟩ ⊗ |n = 0⟩ ,    (23)
as an input (a method originally devised in Ref. 12 ). The arrow over the product sign represents direction of stacking the consecutive terms with j = 0, 1, 2, .. in the product. The minimization is done with respect to the variational parameters {τ j , φ j } j=0,1,2,.. , which correspond directly to the three-body interaction activation times (τ j ), and phases of central spin gates (φ j ). To reach the minimum of the cost function we apply the RMSprop gradient descent algorithm in the space of {τ j , φ j } j=0,1,2,.. .
Choice of the cost function
We choose a Hilbert-Schmidt distance between the singlet state, χ = |I = 0 I = 0|, and the reduced density operator of the ensemble, ρ b = Tr c (|ψ ψ|), as our cost function:
C(|ψ⟩) = Tr[(ρ_b − χ)†(ρ_b − χ)] .    (24)
The main advantage of working with this cost function is the ease of calculating its gradient analytically, which speeds up the optimization procedure. To see it, we first act with the differential operator ∂_{v_i} on the |ψ⟩-state from Eq. 23, and find:
∂_{v_i} |ψ⟩ = [∏^{←}_{j>i} e^{−i v_j X_j}] (−i X_i) e^{−i v_i X_i} [∏^{←}_{j<i} e^{−i v_j X_j}] |↓ₓ⟩ ⊗ |n = 0⟩ ,    (25)
for variational parameters v j ∈ {τ j , φ j } and operators X j ∈ {H exc , σ z /2}. We then notice that extending this result to the cost function from Eq. 24 amounts to a simple application of a chain rule.
Hyperparameters and convergence of an RMSprop algorithm
The update rule for a given variational parameter, v, in our implementation of the RMSprop algorithm is:
v_{i+1} = v_i − [ζ / (ξ + √(E_i[(∂_v C)²]))] × [∂_v C]_i ,   E_i[(∂_v C)²] = β E_{i−1}[(∂_v C)²] + (1 − β) [∂_v C]²_i    (26)
where ∂_v C is the partial derivative of the cost function, C, with respect to the variational parameter, v. The hyperparameters used throughout the optimization routines were ζ = 0.015, ξ = 10⁻⁸, and β = 0.85. For all the sequences of lengths shorter than the maximal considered, convergence was reached in less than 1000 optimization epochs. For the longest considered sequence, the optimization was terminated after 7000 epochs; however, the cost function could likely be decreased further. The difficulty of this task most likely comes from the presence of a barren plateau in the cost function landscape -i.e. a region around the minimum where the gradients of C vanish exponentially fast 13.
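A minimal sketch of this update rule is given below (not the authors' code): it applies Eq. 26 with the hyperparameters quoted above to a toy two-parameter cost function standing in for C(|ψ⟩). The square root in the denominator follows the usual RMSprop convention and is an assumption on our part.

```python
import numpy as np

zeta, xi, beta = 0.015, 1e-8, 0.85          # hyperparameters quoted in the text

def cost(v):
    # placeholder cost landscape standing in for C(|psi>) of Eq. 24
    return (v[0] - 1.0)**2 + 0.5 * (v[1] + 2.0)**2

def grad(v):
    return np.array([2.0 * (v[0] - 1.0), (v[1] + 2.0)])

v = np.zeros(2)                             # stand-ins for the variational parameters {tau_j, phi_j}
E = np.zeros(2)                             # running average E[(dC/dv)^2]
for epoch in range(1000):
    g = grad(v)
    E = beta * E + (1 - beta) * g**2        # second line of Eq. 26
    v = v - zeta / (xi + np.sqrt(E)) * g    # first line of Eq. 26 (sqrt assumed)
print(v, cost(v))                           # parameters settle near the minimum (1, -2)
```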
The evolutions of the cost function, C, the trace distance, and the expectation value of total angular momentum squared during the optimal protocols of varied length, are plotted in the Fig. 5a, Fig. 5b, Fig. 5c, respectively (circles). Their simplified system's counterparts are plotted alongside, for reference (squares).
Initial condition for gradient descent
To speed up the convergence of the gradient descent optimization, we start the procedure with a physically-motivated initial guess for the variational parameters.
The initial z-gate phases are chosen to be identical to those in the optimal protocol for the idealized system (see the main text, and the beige circles in Fig. 5d). The initial activation times for the three-body interaction are chosen as:
τ_j = π / (2 e_j × a²/(4ω_c))    (27)
where e_j stands for the enhancement factor from Eq. 20 -see the beige circles in Fig. 5e. The intuition behind this choice is that it realizes the ideal amplitude transfer for each consecutive |n⟩-state brought into the superposition.
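For concreteness, a few lines computing these initial-guess times for a K = 4 ladder (illustrative coupling values; not the authors' code):

```python
import numpy as np

a, omega_c = 1.0, 10.0                       # placeholder couplings
cpl = a**2 / (4 * omega_c)                   # 'cpl', the coupling strength a^2/(4 omega_c)
two_I = 15                                   # 2I + 1 = 16 = 2^4 ladder states (K = 4)
e = np.sqrt([(two_I - m) * (m + 1) for m in range(two_I)])   # enhancement factors (Eq. 20)
tau_guess = np.pi / (2 * e * cpl)            # Eq. 27 initial-guess activation times
print(tau_guess[:3])
```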
D. Singlet auto-refocusing in a non-rotating frame
The singlet state prepared in the frame rotating with (1/2)∆ω(I^z₁ − I^z₂) + ΩSₓ will coincide with a singlet state in a non-rotating frame at times t = 2πk/∆ω for k = 0, 1, 2, 3, ..; indeed, in a non-rotating frame the evolution of the prepared state (up to a global phase) is given by:
|ψ(t)⟩ = |↓ₓ⟩ ⊗ Σ_{n=0}^{2I} [(−1)ⁿ/√(2I + 1)] exp{i t n ∆ω} |n⟩ .    (28)
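A short numerical check of this refocusing (illustrative; not the authors' code): build the ensemble part of the state of Eq. 28 and evaluate its overlap with the (−1)ⁿ singlet superposition at a few times.

```python
import numpy as np

two_I = 8                                     # 2I + 1 = 9 ladder states (placeholder)
d_omega = 1.0                                 # Delta omega (arbitrary units)
n = np.arange(two_I + 1)
singlet = (-1.0)**n / np.sqrt(two_I + 1)      # ensemble part of |down_x>|I = 0>

def psi(t):
    # ensemble part of Eq. 28 at time t
    return (-1.0)**n / np.sqrt(two_I + 1) * np.exp(1j * t * n * d_omega)

for t in (0.0, np.pi / d_omega, 2 * np.pi / d_omega):
    print(t, abs(np.vdot(singlet, psi(t)))**2)   # overlap returns to 1 at t = 2*pi*k/Delta omega
```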
FIG. 2. Ideal system's dynamics for N = 32, and I₁ = I₂ = 32/2. a, The expectation value of Sₓ as a function of the protocol time. Inset, Sₓ during a single iteration of sensing and instantaneous reset (c.f. the middle and the rightmost panels of Fig. 1e). b, Magnitude (solid orange line) and the uncertainty (shaded area) of ⟨I²⟩ as a function of the protocol time, normalized by ⟨I²⟩_cl = (I₁ + I₂)(|I₁ − I₂| + 1). Inset, Suppression of the total I (orange arrow) down to the classical limit. c, The normalized expectation values of I^z₁ and I^z₂ (solid blue and red lines, respectively) and their uncertainties (corresponding shaded areas) as functions of the protocol time. The normalization involves dividing the y-axis values by min(I₁, I₂). Inset, I₁ and I₂ dynamics towards a pure anti-polarized state. d, The bath impurity as a function of the protocol time. ρ₀ denotes the state of the bath at the beginning of the second stage of the protocol. The x-axis is shared across the panels.
FIG. 3. Dependence on system parameters. a, Steady state impurity as a function of the directionality parameter, κ = ω_c∆ω/(N a²). b, Normalized three-body interaction half-periods (i.e. the pulse durations), τ₀ (black squares), and the numbers of stage iterations (blue squares) required to reach convergence in the pulsed protocol, as a function of N. The number of iterations was calculated as three times the 1/e-drop time in the impurity's exponent to the settled value. The dashed lines illustrate the asymptotic behavior, N → ∞, of both quantities. c, The κ dependence of the impurity in the continuous protocol. The solid black line displays a corresponding solution from the rate equation model, which correctly captures the high-N limit. d, Prediction of the convergence results in Fig. 3b using the rate equation model.
FIG. 4. Resilience to dephasing and imperfections. a, Impurity (black curve) and convergence time (blue curve) of the protocol as a function of the normalized total dephasing rate, Γ_d τ₀/(2π), calculated at the Ω = ∆ω resonance. b, Impurity of the steady state as a function of the deviation from the Ω = ∆ω resonance, in the presence of an increasing amount of dephasing. κ = 10 for all the curves in the plots.
FIG. 5. Benchmarking candidate material platforms. a, Impurity as a function of the platform-agnostic parameters, κ and the normalized total dephasing Γ_d τ₀/(2π). White dashed lines show the contours of constant impurities increasing exponentially from 10⁻¹ to 10⁻¹¹ by a factor 10. Solid violet lines correspond to the typical operating regimes for selected candidate platforms. b, Summary of the achievable steady-state impurities and convergence times for the candidate platforms.
FIG. 6. Reaching a quantum limit. a, Action of the unitary gates (black arrows) along the effective Jaynes-Cummings ladder of |I^z₁ = −M + n, I^z₂ = M − n⟩-states, where M = min(I₁, I₂). The illustrated sequence of six gates prepares a many-body singlet for K = 2. The inset displays a relative phase color-coding applied throughout the panel. The global phase is factored out, leaving the lowest energy state with a zero reference phase. b, Concatenation of the composite S_j gates turns the anti-polarized state into a many-body singlet. The gray box contains a quantum circuit corresponding to the S_K gate, for an arbitrary integer K. c, Trace distance from a singlet state as a function of the number of singlet-preparing gates used within the simplified (gray squares) and the real (circles) systems with K = 4 (i.e. N = 112), following the exact and variational-searched protocols, respectively. The red-to-blue gradient indicates that the structure of the optimal protocol (discussed in Ref. [37]) varies with the number of used gates.
FIG. 2. Ideal system's dynamics for N = 32, and I₁ = 32/2 − 1, I₂ = 32/2 + 1. Panels a-d are in a one-to-one correspondence to those of Fig. 2 of the manuscript.

A. Dynamics in a I₁ ≠ I₂ manifold
FIG. 3. Symmetry in the effect of ensemble and central spin dephasing on the protocol's performance. The figure displays the steady state impurity as a function of relative ensemble and central spin dephasing rates, Γ_b/Γ_op and Γ_c/Γ_op, respectively. Γ_op stands for the optical pumping rate in the continuous quantum model. The simulation was run for κ = 10 and N = 200 (i.e. I₁ = I₂ = 200/2).
FIG. 4. Rate equation model of the protocol. a, Scattering processes that depopulate the n-th state across the ladder. Straight arrows denote activated three-body interactions, whereas the gray bars show the energy gaps that suppress each of the processes. Wavy arrows illustrate the central spin reset. b, Scattering processes that populate the n-th state across the ladder. Labeling is consistent with that of the panel a.
IV. TABLES OF PHYSICAL PARAMETERS FOR CANDIDATE SYSTEMS

TABLE I. Model parameters used to benchmark the protocol's performance for GaAs-AlGaAs QDs 5-8.
Total hyperfine interaction, A: 2π × 11 GHz
Zeeman frequency difference, ∆ω/B: 2π × 5.76 MHz/T
Central spin splitting, ω_c/B: 2π × 1.3 GHz/T
Effective number of nuclei, N: 10^5
Dephasing rate (nuclear), Γ_d: 2π × 10 kHz
Magnetic field ranges, B: 1 T - 10 T

TABLE II. Model parameters used to benchmark the protocol's performance for InGaAs-GaAs QDs 8,9.
Total hyperfine interaction, A: 2π × 11 GHz
Zeeman frequency difference, ∆ω/B: 2π × 5.76 MHz/T
Central spin splitting, ω_c/B: 2π × 6 GHz/T
Effective number of nuclei, N: 10^5
Dephasing rate (nuclear), Γ_d: 2π × 10 MHz
Magnetic field ranges, B: 100 mT - 10 T

TABLE III. Model parameters used to benchmark the protocol's performance for Gate Defined GaAs QDs 7,8,10.
Total hyperfine interaction, A: 2π × 11 GHz
Zeeman frequency difference, ∆ω/B: 2π × 5.76 MHz/T
Central spin splitting, ω_c/B: 2π × 8 GHz/T
Effective number of nuclei, N: 10^6
Dephasing rate (nuclear), Γ_d: 2π × 10 kHz
Magnetic field ranges, B: 100 mT - 10 T

TABLE IV. Model parameters used to benchmark the protocol's performance for a 171Yb3+:YVO4 (REI) system 11. The three-body interaction in this system is a central spin mediated second-order dipole-dipole, and it couples the nuclei in the second shell (subscript i) with those in the outer (subscript j) shells with an effective strength ∼ a_i a_j/ω_c. The ∆ω corresponds to the difference in quadrupolar frequencies of the nuclei in the second shell (i.e. a frozen core) and the outer shells.
Three-body interaction strength, a_i a_j/ω_c: 2π × 0.1 kHz
Quadrupolar frequency difference, ∆ω: 2π × 10-100 kHz
Number of nuclei: 4
Dephasing rate, Γ_d: 2π × 1.25 kHz
Acknowledgements: We acknowledge support from the US Office of Naval Research Global (N62909-19-1-2115) and the EU H2020 FET-Open project QLUSTER (862035). D.A.G. acknowledges a Royal Society University Research Fellowship, and C.LG. a Dorothy Hodgkin Royal Society Fellowship. L.Z. acknowledges support from the EPSRC DTP. We also thank S. Economou and E. Barnes for fruitful discussions.

Within the manuscript we make a statement about the dynamics in the I₁ = I₂ = N/2 manifold being representative of the complete ensemble dynamics. This follows from the (I₁, I₂)-manifold degeneracy, captured by the probability distribution p(I₁, I₂) ∝ I₁(I₁ + 1) × I₂(I₂ + 1) × e^{−2(I² ...)}, for an ensemble initially at infinite temperature 1. As shown in Fig. 1, this distribution features a peak at N/2 of width ∝ N/2, and an exponentially-suppressed high-I_i tail. Therefore, when randomly sampling the I₁, I₂-distribution in the experiment, a majority of the experimental runs will satisfy I₁, I₂ ∼ N/2. A typical deviation from I₁ = I₂ = N/2 features I₁ ≠ I₂. In sec. II A, we study the protocol performance in the I₁ ≠ I₂ case, and compare it to the I₁ = I₂ case (studied in the manuscript) to find the same degree of state purification, and identical behaviour of the central spin throughout the protocol. However, the robustness of the optimized unitary gate sequence in the third stage of the protocol is expected to be more sensitive to the sampled values of I₁ and I₂. This effect could be mitigated following the generalization of our variational optimization procedure to a larger Hilbert space. In a proof-of-concept experiment, it could also be bypassed with a measurement post-selection or measurement-based feed-forward, provided the possibility of a single-shot read-out of the central spin state.
. E Takou, E Barnes, S E Economou, Phys. Rev. X. 1311004E. Takou, E. Barnes, and S. E. Economou, Phys. Rev. X 13, 011004 (2023).
. D P Divincenzo, Fortschritte der Phys. 48771D. P. DiVincenzo, Fortschritte der Phys. 48, 771 (2000).
. D J Wineland, Rev. Mod. Phys. 851103D. J. Wineland, Rev. Mod. Phys. 85, 1103 (2013).
. E M Kessler, G Giedke, A Imamoglu, S F Yelin, M D Lukin, J I Cirac, Phys. Rev. A. 8612116E. M. Kessler, G. Giedke, A. Imamoglu, S. F. Yelin, M. D. Lukin, and J. I. Cirac, Phys. Rev. A 86, 012116 (2012).
. C.-E Bardyn, A Imamoglu, Phys. Rev. Lett. 109253606C.-E. Bardyn and A. Imamoglu, Phys. Rev. Lett. 109, 253606 (2012).
. J Marino, Y E Shchadilova, M Schleier-Smith, E A Demler, New J. Phys. 2113009J. Marino, Y. E. Shchadilova, M. Schleier-Smith, and E. A. Demler, New J. Phys. 21, 013009 (2019).
. Q Chen, I Schwarz, M B Plenio, Phys. Rev. B. 95224105Q. Chen, I. Schwarz, and M. B. Plenio, Phys. Rev. B 95, 224105 (2017).
. J N Greiner, D B R Dasari, J Wrachtrup, Sci. Rep. 7529J. N. Greiner, D. B. R. Dasari, and J. Wrachtrup, Sci. Rep. 7, 529 (2017).
. T N Ikeda, M Sato, 10.1126/sci-adv.abb4019Sci. Adv. 6T. N. Ikeda and M. Sato, Sci. Adv. 6 (2020), 10.1126/sci- adv.abb4019.
. M Issler, E M Kessler, G Giedke, S Yelin, I Cirac, M D Lukin, A Imamoglu, arXiv:1008.3507Phys. Rev. Lett. 105267202M. Issler, E. M. Kessler, G. Giedke, S. Yelin, I. Cirac, M. D. Lukin, and A. Imamoglu, Phys. Rev. Lett. 105, 267202 (2010), arXiv:1008.3507.
. G Éthier-Majcher, D Gangloff, R Stockill, E Clarke, M Hugues, C Le Gall, M Atatüre, arXiv:1706.07749Phys. Rev. Lett. 119130503G.Éthier-Majcher, D. Gangloff, R. Stockill, E. Clarke, M. Hugues, C. Le Gall, and M. Atatüre, Phys. Rev. Lett. 119, 130503 (2017), arXiv:1706.07749.
. B Zhu, J Marino, N Y Yao, M D Lukin, E A Demler, New J. Phys. 2173028B. Zhu, J. Marino, N. Y. Yao, M. D. Lukin, and E. A. Demler, New J. Phys. 21, 073028 (2019).
. B Urbaszek, X Marie, T Amand, O Krebs, P Voisin, P Maletinsky, A Högele, A Imamoglu, Rev. Mod. Phys. 8579B. Urbaszek, X. Marie, T. Amand, O. Krebs, P. Voisin, P. Maletinsky, A. Högele, and A. Imamoglu, Rev. Mod. Phys. 85, 79 (2013).
. H Bluhm, S Foletti, I Neder, M Rudner, D Mahalu, V Umansky, A Yacoby, Nature Physics. 7109H. Bluhm, S. Foletti, I. Neder, M. Rudner, D. Mahalu, V. Umansky, and A. Yacoby, Nature Physics 7, 109 (2011).
. D M Jackson, U Haeusler, L Zaporski, J H Bodey, N Shofer, E Clarke, M Hugues, M Atatüre, C Le Gall, D A Gangloff, Phys. Rev. X. 1231014D. M. Jackson, U. Haeusler, L. Zaporski, J. H. Bodey, N. Shofer, E. Clarke, M. Hugues, M. Atatüre, C. Le Gall, and D. A. Gangloff, Phys. Rev. X 12, 031014 (2022).
. D M Jackson, D A Gangloff, J H Bodey, L Zaporski, C Bachorz, E Clarke, M Hugues, C Le Gall, M Atatüre, Nature Physics. 17585D. M. Jackson, D. A. Gangloff, J. H. Bodey, L. Zaporski, C. Bachorz, E. Clarke, M. Hugues, C. Le Gall, and M. Atatüre, Nature Physics 17, 585 (2021).
. I Schwartz, J Scheuer, B Tratzmiller, S Müller, Q Chen, I Dhand, Z.-Y Wang, C Müller, B Naydenov, F Jelezko, M B Plenio, Sci. Adv. 48978I. Schwartz, J. Scheuer, B. Tratzmiller, S. Müller, Q. Chen, I. Dhand, Z.-Y. Wang, C. Müller, B. Nayde- nov, F. Jelezko, and M. B. Plenio, Sci. Adv. 4, eaat8978 (2018).
. A Ruskuc, C.-J Wu, J Rochman, J Choi, A Faraon, Nature. 602408A. Ruskuc, C.-J. Wu, J. Rochman, J. Choi, and A. Faraon, Nature 602, 408 (2022).
. H P Bartling, M H Abobeih, B Pingault, M J Degen, S J H Loenen, C E Bradley, J Randall, M Markham, D J Twitchen, T H Taminiau, Phys. Rev. X. 1211048H. P. Bartling, M. H. Abobeih, B. Pingault, M. J. Degen, S. J. H. Loenen, C. E. Bradley, J. Randall, M. Markham, D. J. Twitchen, and T. H. Taminiau, Phys. Rev. X 12, 011048 (2022).
. M H Abobeih, Y Wang, J Randall, S J H Loenen, C E Bradley, M Markham, D J Twitchen, B M Terhal, T H Taminiau, Nature. 606884M. H. Abobeih, Y. Wang, J. Randall, S. J. H. Loenen, C. E. Bradley, M. Markham, D. J. Twitchen, B. M. Ter- hal, and T. H. Taminiau, Nature 606, 884 (2022).
. D A Gangloff, G Majcher, C Lang, E V Denning, J H Bodey, D M Jackson, E Clarke, M Hugues, C L Gall, M Atatüre, 10.1126/science.aaw2906Science. 364D. A. Gangloff, G.Éthier Majcher, C. Lang, E. V. Den- ning, J. H. Bodey, D. M. Jackson, E. Clarke, M. Hugues, C. L. Gall, and M. Atatüre, Science 364, 62 (2019), https://www.science.org/doi/pdf/10.1126/science.aaw2906.
. D A Gangloff, L Zaporski, J H Bodey, C Bachorz, D M Jackson, G Éthier-Majcher, C Lang, E Clarke, M Hugues, C Le Gall, M Atatüre, Nature Physics. 171247D. A. Gangloff, L. Zaporski, J. H. Bodey, C. Bachorz, D. M. Jackson, G.Éthier-Majcher, C. Lang, E. Clarke, M. Hugues, C. Le Gall, and M. Atatüre, Nature Physics 17, 1247 (2021).
. G Wüst, M Munsch, F Maier, A V Kuhlmann, A Ludwig, A D Wieck, D Loss, M Poggio, R J Warburton, Nat. Nanotechnol. 11885G. Wüst, M. Munsch, F. Maier, A. V. Kuhlmann, A. Ludwig, A. D. Wieck, D. Loss, M. Poggio, and R. J. Warburton, Nat. Nanotechnol. 11, 885 (2016).
Ideal refocusing of an optically active spin qubit under strong hyperfine interactions. L Zaporski, N Shofer, J H Bodey, S Manna, G Gillard, D M Jackson, M H Appel, C Schimpf, S C Silva, J Jarman, G Delamare, G Park, U Haeusler, E A Chekhovich, A Rastelli, D A Gangloff, M Atatüre, C L Gall, L. Zaporski, N. Shofer, J. H. Bodey, S. Manna, G. Gillard, D. M. Jackson, M. H. Appel, C. Schimpf, S. C. da Silva, J. Jarman, G. Delamare, G. Park, U. Haeusler, E. A. Chekhovich, A. Rastelli, D. A. Gan- gloff, M. Atatüre, and C. L. Gall, "Ideal refocusing of an optically active spin qubit under strong hyperfine in- teractions," (2022).
. R Stockill, C Le Gall, C Matthiesen, L Huthmacher, E Clarke, M Hugues, M Atatüre, Nature Communications. 712745R. Stockill, C. Le Gall, C. Matthiesen, L. Huthmacher, E. Clarke, M. Hugues, and M. Atatüre, Nature Commu- nications 7, 12745 (2016).
. T H Taminiau, J Cramer, T Van Der Sar, V V Dobrovitski, R Hanson, Nature Nanotechnology. 9171T. H. Taminiau, J. Cramer, T. van der Sar, V. V. Do- brovitski, and R. Hanson, Nature Nanotechnology 9, 171 (2014).
. A Browaeys, T Lahaye, Nat. Phys. 16132A. Browaeys and T. Lahaye, Nat. Phys. 16, 132 (2020).
. A Reiserer, N Kalb, M S Blok, K J M Van Bemmelen, T H Taminiau, R Hanson, D J Twitchen, M Markham, Phys. Rev. X. 621040A. Reiserer, N. Kalb, M. S. Blok, K. J. M. van Bemme- len, T. H. Taminiau, R. Hanson, D. J. Twitchen, and M. Markham, Phys. Rev. X 6, 021040 (2016).
. S J Devience, R L Walsworth, M S Rosen, Phys. Rev. Lett. 111173002S. J. DeVience, R. L. Walsworth, and M. S. Rosen, Phys. Rev. Lett. 111, 173002 (2013).
. C Zu, F Machado, B Ye, S Choi, B Kobrin, T Mittiga, S Hsieh, P Bhattacharyya, M Markham, D Twitchen, A Jarmola, D Budker, C R Laumann, J E Moore, N Y Yao, Nature. 59745C. Zu, F. Machado, B. Ye, S. Choi, B. Kobrin, T. Mittiga, S. Hsieh, P. Bhattacharyya, M. Markham, D. Twitchen, A. Jarmola, D. Budker, C. R. Laumann, J. E. Moore, and N. Y. Yao, Nature 597, 45 (2021).
. A Högele, M Kroner, C Latta, M Claassen, I Carusotto, C Bulutay, A Imamoglu, Phys. Rev. Lett. 108197403A. Högele, M. Kroner, C. Latta, M. Claassen, I. Caru- sotto, C. Bulutay, and A. Imamoglu, Phys. Rev. Lett. 108, 197403 (2012).
. W Yang, L J Sham, Phys. Rev. B. 88235304W. Yang and L. J. Sham, Phys. Rev. B 88, 235304 (2013).
. E A Chekhovich, A Ulhaq, E Zallo, F Ding, O G Schmidt, M S Skolnick, Nature Materials. 16982E. A. Chekhovich, A. Ulhaq, E. Zallo, F. Ding, O. G. Schmidt, and M. S. Skolnick, Nature Materials 16, 982 (2017).
. J M Taylor, A Imamoglu, M D Lukin, arXiv:0308459Phys. Rev. Lett. 91246802cond-matJ. M. Taylor, A. Imamoglu, and M. D. Lukin, Phys. Rev. Lett. 91, 246802 (2003), arXiv:0308459 [cond-mat].
. M S Rudner, L M K Vandersypen, V Vuletić, L S Levitov, Phys. Rev. Lett. 107206806M. S. Rudner, L. M. K. Vandersypen, V. Vuletić, and L. S. Levitov, Phys. Rev. Lett. 107, 206806 (2011).
. R H Dicke, Phys. Rev. 9399R. H. Dicke, Phys. Rev. 93, 99 (1954).
. H Sun, P Xu, H Pu, W Zhang, Phys. Rev. A. 9563624H. Sun, P. Xu, H. Pu, and W. Zhang, Phys. Rev. A 95, 063624 (2017).
. J M Taylor, A Imamoglu, M D Lukin, Phys. Rev. Lett. 91246802J. M. Taylor, A. Imamoglu, and M. D. Lukin, Phys. Rev. Lett. 91, 246802 (2003).
. W A Coish, J Fischer, D Loss, Phys. Rev. B. 77125329W. A. Coish, J. Fischer, and D. Loss, Phys. Rev. B 77, 125329 (2008).
. L Cywiński, W M Witzel, S. Das Sarma, Phys. Rev. B. 79245314L. Cywiński, W. M. Witzel, and S. Das Sarma, Phys. Rev. B 79, 245314 (2009).
. A Henstra, P Dirksen, J Schmidt, W Wenckebach, Journal of Magnetic Resonance. 77389A. Henstra, P. Dirksen, J. Schmidt, and W. Wenckebach, Journal of Magnetic Resonance (1969) 77, 389 (1988).
. J Hu, A Urvoy, Z Vendeiro, V Crépel, W Chen, V Vuletić, 10.1126/science.aan5614Science. 3581078J. Hu, A. Urvoy, Z. Vendeiro, V. Crépel, W. Chen, and V. Vuletić, Science 358, 1078 (2017), https://www.science.org/doi/pdf/10.1126/science.aan5614.
. S Amasha, K Maclean, I P Radu, D M Zumbühl, M A Kastner, M P Hanson, A C Gossard, Phys. Rev. Lett. 10046803S. Amasha, K. MacLean, I. P. Radu, D. M. Zumbühl, M. A. Kastner, M. P. Hanson, and A. C. Gossard, Phys. Rev. Lett. 100, 046803 (2008).
. C.-Y Lu, Y Zhao, A N Vamivakas, C Matthiesen, S Fält, A Badolato, M Atatüre, Phys. Rev. B. 8135332C.-Y. Lu, Y. Zhao, A. N. Vamivakas, C. Matthiesen, S. Fält, A. Badolato, and M. Atatüre, Phys. Rev. B 81, 035332 (2010).
. N Khaneja, T Reiss, C Kehlet, T Schulte-Herbrüggen, S J Glaser, Journal of Magnetic Resonance. 172296N. Khaneja, T. Reiss, C. Kehlet, T. Schulte-Herbrüggen, and S. J. Glaser, Journal of Magnetic Resonance 172, 296 (2005).
. M Cerezo, A Sone, T Volkoff, L Cincio, P J Coles, Nature Communications. 121791M. Cerezo, A. Sone, T. Volkoff, L. Cincio, and P. J. Coles, Nature Communications 12, 1791 (2021).
. E A Chekhovich, S F C Da Silva, A Rastelli, Nat. Nanotechnol. 15999E. A. Chekhovich, S. F. C. da Silva, and A. Rastelli, Nat. Nanotechnol. 15, 999 (2020).
. G Anikeeva, O Marković, V Borish, J A Hines, S V Rajagopal, E S Cooper, A Periwal, A Safavi-Naeini, E J Davis, M Schleier-Smith, arXiv:2009.05549PRX Quantum. 220319G. Anikeeva, O. Marković, V. Borish, J. A. Hines, S. V. Rajagopal, E. S. Cooper, A. Periwal, A. Safavi-Naeini, E. J. Davis, and M. Schleier-Smith, PRX Quantum 2, 020319 (2021), arXiv:2009.05549.
. D M Jackson, U Haeusler, L Zaporski, J H Bodey, N Shofer, E Clarke, M Hugues, M Atatüre, C Le Gall, D A Gangloff, Phys. Rev. X. 1231014D. M. Jackson, U. Haeusler, L. Zaporski, J. H. Bodey, N. Shofer, E. Clarke, M. Hugues, M. Atatüre, C. Le Gall, and D. A. Gangloff, Phys. Rev. X 12, 031014 (2022).
. D A Gangloff, L Zaporski, J H Bodey, C Bachorz, D M Jackson, G Éthier-Majcher, C Lang, E Clarke, M Hugues, C Le Gall, M Atatüre, Nature Physics. 171247D. A. Gangloff, L. Zaporski, J. H. Bodey, C. Bachorz, D. M. Jackson, G.Éthier-Majcher, C. Lang, E. Clarke, M. Hugues, C. Le Gall, and M. Atatüre, Nature Physics 17, 1247 (2021).
. J Johansson, P Nation, F Nori, Computer Physics Communications. 1831760J. Johansson, P. Nation, and F. Nori, Computer Physics Communications 183, 1760 (2012).
. J Johansson, P Nation, F Nori, Computer Physics Communications. 1841234J. Johansson, P. Nation, and F. Nori, Computer Physics Communications 184, 1234 (2013).
Ideal refocusing of an optically active spin qubit under strong hyperfine interactions. L Zaporski, N Shofer, J H Bodey, S Manna, G Gillard, D M Jackson, M H Appel, C Schimpf, S C Silva, J Jarman, G Delamare, G Park, U Haeusler, E A Chekhovich, A Rastelli, D A Gangloff, M Atatüre, C L Gall, L. Zaporski, N. Shofer, J. H. Bodey, S. Manna, G. Gillard, D. M. Jackson, M. H. Appel, C. Schimpf, S. C. da Silva, J. Jarman, G. Delamare, G. Park, U. Haeusler, E. A. Chekhovich, A. Rastelli, D. A. Gangloff, M. Atatüre, and C. L. Gall, "Ideal refocusing of an optically active spin qubit under strong hyperfine interactions," (2022).
. L Zhai, M C Löbl, G N Nguyen, J Ritzmann, A Javadi, C Spinnler, A D Wieck, A Ludwig, R J Warburton, Nature Communications. 114745L. Zhai, M. C. Löbl, G. N. Nguyen, J. Ritzmann, A. Javadi, C. Spinnler, A. D. Wieck, A. Ludwig, and R. J. Warburton, Nature Communications 11, 4745 (2020).
. E A Chekhovich, I M Griffiths, M S Skolnick, H Huang, S F C Silva, X Yuan, A Rastelli, Phys. Rev. B. 97235311E. A. Chekhovich, I. M. Griffiths, M. S. Skolnick, H. Huang, S. F. C. da Silva, X. Yuan, and A. Rastelli, Phys. Rev. B 97, 235311 (2018).
. F K Malinowski, F Martins, L Cywiński, M S Rudner, P D Nissen, S Fallahi, G C Gardner, M J Manfra, C M Marcus, F Kuemmeth, Phys. Rev. Lett. 118177702F. K. Malinowski, F. Martins, L. Cywiński, M. S. Rudner, P. D. Nissen, S. Fallahi, G. C. Gardner, M. J. Manfra, C. M. Marcus, and F. Kuemmeth, Phys. Rev. Lett. 118, 177702 (2017).
. R Stockill, C Le Gall, C Matthiesen, L Huthmacher, E Clarke, M Hugues, M Atatüre, Nature Communications. 712745R. Stockill, C. Le Gall, C. Matthiesen, L. Huthmacher, E. Clarke, M. Hugues, and M. Atatüre, Nature Communi- cations 7, 12745 (2016).
. H Bluhm, S Foletti, I Neder, M Rudner, D Mahalu, V Umansky, A Yacoby, Nature Physics. 7109H. Bluhm, S. Foletti, I. Neder, M. Rudner, D. Mahalu, V. Umansky, and A. Yacoby, Nature Physics 7, 109 (2011).
. A Ruskuc, C.-J Wu, J Rochman, J Choi, A Faraon, Nature. 602408A. Ruskuc, C.-J. Wu, J. Rochman, J. Choi, and A. Faraon, Nature 602, 408 (2022).
. N Khaneja, T Reiss, C Kehlet, T Schulte-Herbrüggen, S J Glaser, Journal of Magnetic Resonance. 172296N. Khaneja, T. Reiss, C. Kehlet, T. Schulte-Herbrüggen, and S. J. Glaser, Journal of Magnetic Resonance 172, 296 (2005).
. M Cerezo, A Sone, T Volkoff, L Cincio, P J Coles, Nature Communications. 121791M. Cerezo, A. Sone, T. Volkoff, L. Cincio, and P. J. Coles, Nature Communications 12, 1791 (2021).
| [] |
[
"Absence of spontaneous time-reversal symmetry breaking and ferromagnetism in superconducting NiBi3 single crystal",
"Absence of spontaneous time-reversal symmetry breaking and ferromagnetism in superconducting NiBi3 single crystal"
] | [
"Jingyuan Wang \nDepartment of Physics and Astronomy\nUniversity of California\n92697IrvineCaliforniaUSA\n",
"Camron Farhang \nDepartment of Physics and Astronomy\nUniversity of California\n92697IrvineCaliforniaUSA\n",
"Di Yue \nDepartment of Physics\nFudan University\n200433ShanghaiChina\n",
"Xiaofeng Jin \nDepartment of Physics\nFudan University\n200433ShanghaiChina\n",
"Xiangde Zhu \nAnhui Province Key Laboratory of Condensed Matter Physics at Extreme Conditions\nHigh Magnetic Field Laboratory\nChinese Academy of Sciences\n230031Hefei, AnhuiChina\n",
"Jing Xia \nDepartment of Physics and Astronomy\nUniversity of California\n92697IrvineCaliforniaUSA\n"
] | [
"Department of Physics and Astronomy\nUniversity of California\n92697IrvineCaliforniaUSA",
"Department of Physics and Astronomy\nUniversity of California\n92697IrvineCaliforniaUSA",
"Department of Physics\nFudan University\n200433ShanghaiChina",
"Department of Physics\nFudan University\n200433ShanghaiChina",
"Anhui Province Key Laboratory of Condensed Matter Physics at Extreme Conditions\nHigh Magnetic Field Laboratory\nChinese Academy of Sciences\n230031Hefei, AnhuiChina",
"Department of Physics and Astronomy\nUniversity of California\n92697IrvineCaliforniaUSA"
] | [] | Recent experiments have pointed to a chiral p-wave-like superconductivity in epitaxial Bi/Ni bilayers that are spontaneously time-reversal symmetry breaking (TRSB), making it a promising platform for exploring physics useful for topologically protected quantum computing. Quite intriguingly, evidence has emerged that in non-epitaxial Bi/Ni bilayers, superconductivity arises due to the formation of NiBi3, which has been reported to host coexisting ferromagnetic and superconducting orders at the surface. We perform high resolution surface magneto-optic Kerr effect (SMOKE) measurements using a Sagnac interferometer on single crystal NiBi3 and find no sign of any spontaneous Kerr signal except for contributions from trapped vortices. This strongly indicates the absence of TRSB in NiBi3, whether due to TRSB in the superconducting state or any coexisting ferromagnetism, and we conclude that the superconductivity found in non-epitaxial Bi/Ni is distinctively different from that in epitaxial Bi/Ni. | 10.1103/physrevb.107.024415 | [
"https://export.arxiv.org/pdf/2208.10645v2.pdf"
] | 251,741,320 | 2208.10645 | dd9994f1d947634ccf077ddc2ece99023f721bc3 |
Absence of spontaneous time-reversal symmetry breaking and ferromagnetism in superconducting NiBi3 single crystal
Jingyuan Wang
Department of Physics and Astronomy
University of California
92697IrvineCaliforniaUSA
Camron Farhang
Department of Physics and Astronomy
University of California
92697IrvineCaliforniaUSA
Di Yue
Department of Physics
Fudan University
200433ShanghaiChina
Xiaofeng Jin
Department of Physics
Fudan University
200433ShanghaiChina
Xiangde Zhu
Anhui Province Key Laboratory of Condensed Matter Physics at Extreme Conditions
High Magnetic Field Laboratory
Chinese Academy of Sciences
230031Hefei, AnhuiChina
Jing Xia
Department of Physics and Astronomy
University of California
92697IrvineCaliforniaUSA
Absence of spontaneous time-reversal symmetry breaking and ferromagnetism in superconducting NiBi3 single crystal
Recent experiments have pointed to a chiral p-wave-like superconductivity in epitaxial Bi/Ni bilayers that are spontaneously time-reversal symmetry breaking (TRSB), making it a promising platform for exploring physics useful for topologically protected quantum computing. Quite intriguingly, evidence has emerged that in non-epitaxial Bi/Ni bilayers, superconductivity arises due to the formation of NiBi3, which has been reported to host coexisting ferromagnetic and superconducting orders at the surface. We perform high resolution surface magneto-optic Kerr effect (SMOKE) measurements using a Sagnac interferometer on single crystal NiBi3 and find no sign of any spontaneous Kerr signal except for contributions from trapped vortices. This strongly indicates the absence of TRSB in NiBi3, whether due to TRSB in the superconducting state or any coexisting ferromagnetism, and we conclude that the superconductivity found in non-epitaxial Bi/Ni is distinctively different from that in epitaxial Bi/Ni.
The quest to build a reliable quantum computer has stimulated intense research into quantum phases with quasiparticles that obey non-Abelian exchange rules and can be used for topologically protected quantum computing [1]. Such quasiparticles would exist as Majorana bound states in the vortex cores of a chiral p-wave superconductor [1,2], which is an electronic analog to the A-phase of superfluid 3He [3] and breaks time-reversal symmetry (TRS). In the prototypical chiral p-wave superconductor Sr2RuO4 (Tc ≈ 1.5 K), although TRS breaking (TRSB) has been confirmed by muon spin relaxation (µSR) [4], surface magneto-optic Kerr effect (SMOKE) measurements using a Sagnac interferometer [5], and µSR under strain [6], the p-wave aspect has been challenged by the recent nuclear magnetic resonance (NMR) evidence [7] for an even-parity superconducting order parameter. In addition, a magnetic competing order has been identified in close proximity by µSR under strain [6] and by elastocaloric effects [8], making the picture of Sr2RuO4 rather complicated.
Superconducting epitaxial Bi/Ni bilayers provide a promising alternative candidate for chiral p-wave superconductivity. It was initially found in tunneling measurements that Bi layers deposited on Ni layers become superconducting with Tc ≈ 4 K [9], and there are coexisting superconducting and ferromagnetic gaps when tunneling from the Ni side [10]. More recently, in high quality Bi/Ni bilayers grown by molecular beam epitaxy (MBE), superconducting quantum interference device (SQUID) measurements [11] show evidence for chiral superconductivity and the formation of chiral domains. SMOKE measurements using a Sagnac interferometer [12] conducted on the Bi side reveal spontaneous TRSB in the superconducting state, where the chirality can be trained by a small magnetic field (~100). Assuming that superconductivity exists only in the top Bi surface away from Ni, we have proposed a dxy ± i dx²−y² superconducting order parameter, which is the lowest angular momentum state allowed by this surface symmetry [12]. This hypothetical restriction was soon corrected by a time-domain terahertz (THz) spectroscopy experiment [13] that identified a nodeless superconductivity extending over the entire Bi/Ni bilayer. Their data also rule out odd-frequency pairing [14], which is natural for a superconductor-ferromagnet interface. These experimental findings collectively point to chiral p-wave superconductivity in strongly spin-orbit coupled epitaxial Bi/Ni bilayers [15], whose properties can in principle be engineered by the growth parameters (thickness, strain, doping) to optimize the conditions for hosting Majorana particles.
Real materials are complex. A radically different picture has emerged in Bi/Ni bilayers fabricated using other methods, highlighting the role of the intermetallic compound NiBi3. NiBi3 impurities were first detected in thermally evaporated Bi/Ni bilayers by X-ray diffraction (XRD) [16] and were proposed as the source of the observed superconductivity. Later studies on pulsed-laser deposited (PLD) [17] and sputter-deposited [18] Bi/Ni bilayers show the absence of superconductivity in as-grown samples without NiBi3 impurities. By changing the deposition temperature [17], or by weeks of annealing [18], these samples develop superconductivity coincident with the formation of NiBi3. As a known type-II s-wave superconductor with Tc ≈ 4 K [19,20], NiBi3 should be TRS-invariant, but there are reports of coexisting ferromagnetism and superconductivity in NiBi3. Extrinsic ferromagnetism was found in flux-grown NiBi3 crystals due to amorphous Ni impurities [21]. Intrinsic magnetic orders were proposed at the surface due to modifications of surface electronic band structures [22]: SQUID magnetometry has identified ferromagnetism in NiBi3 nano-strains (200 nm) with high surface fraction [22]; electron spin resonance (ESR) has detected no ferromagnetism but found surface-induced magnetic fluctuations in single crystal NiBi3 [23].
Although these reports of magnetic orders in NiBi3 differ quantitatively from the TRSB observed in epitaxial Bi/Ni bilayers by Sagnac interferometry [12], and the coexistence of ferromagnetism and superconductivity often leads to odd-frequency pairing [14] that is inconsistent with THz time-domain spectroscopy data [13], it is sometimes argued that the observed unconventional superconductivity in epitaxial Bi/Ni bilayers may come from superconducting NiBi3 impurities that have surface-induced ferromagnetism. Does NiBi3 break TRS? Is it ferromagnetic near the surface? Above all, do epitaxial and non-epitaxial Bi/Ni bilayers host identical or distinct superconducting states? These fundamental questions can be addressed by performing a definitive determination of the TRS and magnetic properties of single crystal NiBi3, especially near the surface. SMOKE [24,25] measurements performed by a zero-area loop fiber optic Sagnac interferometer [26] are ideally suited for such a definitive test of TRSB and ferromagnetism near the surface of NiBi3. Probing the sample surface with an optical penetration depth that is typically a few nanometers for conductors [24,25], SMOKE has proven to be a powerful probe for surface magnetization. Primarily for detecting the even smaller Kerr signals that arise in unconventional superconductors, we have introduced a zero-area loop [26] fiber optic Sagnac interferometer that measures directly the non-reciprocal phase difference φnr = 2θK between counter-propagating circularly polarized light beams, where θK is the Kerr rotation. This approach fundamentally rejects polarization rotations due to non-TRSB effects such as linear and circular dichroism [28]. This design has pushed the Kerr resolution from the microradian (µrad) [24,25] to the ten nanoradian (nrad) level [5], allowing us to identify TRSB in various unconventional superconductors such as Sr2RuO4 [5] and Bi/Ni bilayers [12]. Scanning imaging capability with micrometer-scale spatial resolution has allowed us to discover ferromagnetism in 2D van der Waals layers [29] and to control magnetism in 2D structures [30]. We use a scanning Sagnac microscope operating at 1550 nm wavelength as illustrated in Fig. 1(a). The interferometer itself is located at room temperature. The piezo scanner [31] is mounted inside a cryostat with 1.8 K base temperature and 9 T magnetic field capability. A polarization maintaining fiber delivers light of orthogonal linear polarizations into the high vacuum sample space inside the cryostat, and a cryogenic quarter wave (λ/4) plate converts these light beams into circular polarizations of opposite chiralities that will interact with the sample surface and detect TRSB. Fig. 1(b) shows a 16-hour measurement on a silver mirror demonstrating 10 nrad Kerr resolution that is limited by long-term drifts in optics and electronics. Needle-shaped NiBi3 single crystals were grown using the self-flux method with the b axis along the longest dimension (Fig. 1(a)) as determined by x-ray diffractometry [23]. The typical size of such a single crystal is ~3 × 0.2 × 0.2 mm³. Fig. 2(a) shows the measured resistivity of the NiBi3 sample near the superconducting transition, with the excitation current flowing along the b axis. Tc = 4.05 K is determined as the middle point of the resistivity drop, and is in good agreement with the result in ref [23] on the same batch of crystals. The specific heat (Cp) is shown in Fig. 2(b), with Cp/T vs. T² plotted near Tc in the inset. A prominent kink at ~4 K
indicates a sudden change in the fermionic contributions to Cp, and confirms the superconducting transition. We note that anomalies in Cp around 2.2 K have been reported [21] in NiBi3 due to amorphous Ni impurities, but we observe no such anomaly in our Cp data, attesting to the high quality of the crystals used in this study. SMOKE measurements are performed on two lateral surfaces of the crystal, dubbed surface 1 and surface 2, which are perpendicular to the a and c axes, as shown in Fig. 1(a). Due to the softness of the crystal, the surfaces of as-grown crystals are curved. It is necessary to perform low-temperature scanning imaging to locate optically flat regions for SMOKE measurements. Fig. 1(b) and (c) are images of the reflected light power (P0) from surface 1 and surface 2 respectively, and optically flat regions marked by black boxes are chosen for SMOKE measurements with P0 ~ 5.
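As a quick illustration of the conversion used throughout this paper (the specific numbers below are only an example), the measured non-reciprocal phase translates into the quoted Kerr rotation by a factor of two:
\[
\theta_K=\frac{\phi_{nr}}{2},\qquad \phi_{nr}=40~\mathrm{nrad}\;\Longrightarrow\;\theta_K=20~\mathrm{nrad}.
\]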
To test possible spontaneous TRSB in the superconducting state, we perform SMOKE measurements at fixed locations on surfaces 1 and 2 during zero-magnetic-field (ZF) warmups. Kerr signals θK of such ZF warmups after zero-field cooling are presented as green curves in Fig. 3(a)(b), showing no sign of TRSB with an uncertainty of 20 nrad across Tc. As is typical of spontaneous TRSB, the sign and size of θK at zero magnetic field normally vary as a function of location and temperature. Therefore, a small training field Htraining is often applied and then removed to align the chiral domains in SMOKE measurements of unconventional superconductors, such as in the studies of Sr2RuO4 [5], UPt3 [32], and UTe2 [33], to name a few. It is noted that in all these examples, Htraining is chosen to be smaller than the lower critical field Hc1 to avoid introducing vortices that can be trapped at pinning sites even after the removal of the training fields. Trainings with Htraining > Hc1 could result in non-zero θK during ZF warmups due to contributions from trapped vortices, such as those found in YBa2Cu3O6+x with a 4 T training field [34]. We pick Htraining = ±0.01 T for NiBi3, which is smaller than the measured value [20] of Hc1 = 0.015 T. Kerr signals θK during ZF warmups after ±0.01 T trainings are plotted as red and blue curves in Fig. 3(a)(b) for surfaces 1 and 2 respectively: no spontaneous θK is observed across Tc with an uncertainty of 20 nrad.
In comparison, in epitaxial Bi/Ni bilayers of 20 nm thickness [12], we have detected θK ~ 120 nrad onsetting abruptly at Tc = 4.1 K [12]. We can therefore conclude that there is no sign of spontaneous TRSB in the superconducting state of single crystal NiBi3. Furthermore, it was found that in sputtered Bi/Ni bilayers the NiBi3 impurity phase has a preferred orientation of (203) [18]. This translates to a crystalline surface parallel to the b axis, which corresponds to either surface 1 or surface 2 measured here. Therefore, we can rule out TRSB superconductivity in sputtered and PLD Bi/Ni bilayers, where NiBi3 is responsible for superconductivity [17,18]. Now we turn to tests of possible ferromagnetism in NiBi3 that could be induced by either surface effects [22] or extrinsic Ni impurities [21]. As explained earlier, the heat capacity Cp (Fig. 2(b)) in our samples indicates a much lower impurity level compared to those used in ref [21], and unlike bulk SQUID magnetometry, the Sagnac probe samples an optical volume of only ~0.1 µm³, making it much less susceptible to Ni impurities.
We first perform magnetic hysteresis measurements with magnetic fields up to ±1 T, which is similar to the conditions in ref [22]. These are shown in Fig. 4(a) for T = 1.8 K < Tc (blue) and T = 10 K > Tc (yellow). The Kerr signals are extremely linear in the magnetic field H. They are dominated by the background Faraday effect contribution from the low-temperature objective lens, which is proportional to H. The higher noise level ΔθK comes from the fluctuations in the above lens contribution induced by magnetic field noise. ΔθK ~ 5 µrad at high magnetic fields can be seen in the inset of Fig. 4(a) for θK taken between H = −1 T and −0.9 T. Unlike in refs [21,22], we observe no sign of any ferromagnetic hysteresis with 5 µrad uncertainty. It is worth noting that using the same instrument we have measured θK ~ 130 µrad in 2 nm of Ni [12], and θK ~ 500 µrad in 4 nm of SrRuO3 [30]. Therefore, this is already a strong constraint on any ferromagnetism in NiBi3.
For an even more stringent test of ferromagnetism, we measure the remanent Kerr signal by reducing the 1 T magnetic field back to zero at T = 1.8 K, as shown in the sequence I-II-III in Fig. 4(a). NiBi3 is a type-II superconductor with a lower critical field Hc1 = 0.015 T [20] and an upper critical field Hc2 = 0.35 T [20]. As illustrated in the cartoon in Fig. 4(a), when Hc1 < H < Hc2, vortices penetrate the superconducting sample. Their contributions to θK are linear in the magnetic field but are overwhelmed in the hysteresis measurements (Fig. 4(a)) by the much larger Faraday effect of the objective lens. After the magnetic field is removed (step III), a small fraction of vortices can be trapped at pinning sites, and they will contribute to θK during subsequent ZF warmups. The trapped vortices' contribution to θK decreases exponentially as the temperature is raised towards Tc. The remanent Kerr signals during ZF warmups after ±1 T trainings are plotted in Fig. 4(b) for surface 1 and in Fig. 4(c) for surface 2 respectively. There are clear remanent Kerr signals of θK ~ ±200 nrad onsetting sharply at Tc due to trapped vortices. However, we observe no sign of any ferromagnetism with 20 nrad uncertainty, unless its Curie temperature coincides precisely with Tc, which is highly unlikely. We note that the 20 nrad uncertainty is four orders of magnitude smaller than the measured θK values in 2 nm of Ni [12] or 4 nm of SrRuO3 [30], strongly indicating that ferromagnetism is absent in NiBi3. Therefore, the reported ferromagnetism in nano-strains [22] of NiBi3 is not due to the surface of NiBi3, but must originate from other sources that are likely irrelevant to Bi/Ni bilayers.
In summary, we have provided strong error bounds of 20 nrad
for any spontaneous Kerr signals in single crystal NiBi3, strongly indicating the absence of TRSB in NiBi3, whether due to the superconducting state or any coexisting ferromagnetism. We can therefore conclude that the superconducting phases in epitaxial and non-epitaxial Bi/Ni bilayers are distinctively different. In non-epitaxial Bi/Ni, superconductivity originates from the formation of an impurity NiBi3 phase [17,18], which doesn't host coexisting ferromagnetic order or TRSB superconductivity. In contrast, the epitaxial Bi/Ni samples such as those grown by MBE host a superconducting state that is most likely to be of chiral p-wave based on existing experimental evidence [12,13,35]. The latter can be a promising platform for hosting Majorana particles useful for topologically protected quantum computing. And it is important to refine the growth process [17] to enable epitaxial growth, especially for non-MBE growth methods, to stabilize and optimize the chiral p-wave state for exploring Majorana physics for robust quantum computing applications.
Experiments at UC Irvine were supported by NSF award DMR-1807817, and in part by the Gordon and Betty Moore Foundation through Grant GBMF10276 to J.X. The work at Hefei was supported by the Youth Innovation Promotion Association of CAS (Grant No. 2021117).
FIG. 1 .
Sagnac interferometer and NiBi3 crystal. (a) Schematics of a scanning Sagnac microscope at 1550 nm wavelength (top), NiBi3 crystal (left) and 16-hour Sagnac drift test on a silver mirror showing 10 nrad Kerr resolution (right). (b), (c) Reflected optical power (P0) map at 1.8 K on surfaces 1 and 2, with black boxes marking optically flat regions for measurements.
FIG. 2 .
Resistivity and specific heat. (a) Resistivity ρ(T) of NiBi3, where Tc ~ 4 K is determined as the middle point of the resistivity drop. (b) Specific heat (Cp) with a kink at Tc. The inset shows Cp/T vs. T² near Tc.
FIG. 3 .
Absence of TRSB in the superconducting state. Kerr signals measured on (a) surface 1 and (b) surface 2 during zero-field (ZF) warmups, after zero-field cooldown or after ±0.01 T field "trainings", showing no TRSB.
FIG. 4 .
Trapped vortices and absence of ferromagnetism. (a) Illustration of trapped vortices after removal of a magnetic field H > Hc1 (top); Kerr signals during 1 T magnetic field hysteresis on Surface 2 at 1.8 K and 10 K (bottom). (b) and (c) Kerr signals measured during zero-field warmups after removing the ±1 T field on surface 1 and surface 2 respectively, showing θK ~ ±200 nrad onsetting at Tc due to trapped vortices. There is no sign of any ferromagnetism.
*Email: [email protected]
Non-Abelian Anyons and Topological Quantum Computation. C Nayak, S H Simon, A Stern, M Freedman, S. Das Sarma, Rev Mod Phys. 801083C. Nayak, S. H. Simon, A. Stern, M. Freedman, and S. Das Sarma, Non-Abelian Anyons and Topological Quantum Computation, Rev Mod Phys 80, 1083 (2008).
A P Mackenzie, Y Maeno, The Superconductivity of Sr2RuO4 and the Physics of Spin-Triplet Pairing. 75657A. P. Mackenzie and Y. Maeno, The Superconductivity of Sr2RuO4 and the Physics of Spin-Triplet Pairing, Reviews of Modern Physics 75, 657 (2003).
A Theoretical Description of the New Phases of Liquid He3. A J Leggett, Reviews of Modern Physics. 47331A. J. Leggett, A Theoretical Description of the New Phases of Liquid He3, Reviews of Modern Physics 47, 331 (1975).
. G M Luke, Y Fudamoto, K M Kojima, M I Larkin, J Merrin, B Nachumi, Y J Uemura, Y , G. M. Luke, Y. Fudamoto, K. M. Kojima, M. I. Larkin, J. Merrin, B. Nachumi, Y. J. Uemura, Y.
Z Q Maeno, Y Mao, H Mori, M Nakamura, Sigrist, Time-Reversal Symmetry-Breaking Superconductivity in Sr2RuO4. 394558Maeno, Z. Q. Mao, Y. Mori, H. Nakamura, and M. Sigrist, Time-Reversal Symmetry-Breaking Superconductivity in Sr2RuO4, Nature 394, 558 (1998).
High Resolution Polar Kerr Effect Measurements of Sr2RuO4: Evidence for Broken Time-Reversal Symmetry in the Superconducting State. J Xia, Y Maeno, P T Beyersdorf, M M Fejer, A Kapitulnik, Phys Rev Lett. 97167002J. Xia, Y. Maeno, P. T. Beyersdorf, M. M. Fejer, and A. Kapitulnik, High Resolution Polar Kerr Effect Measurements of Sr2RuO4: Evidence for Broken Time-Reversal Symmetry in the Superconducting State, Phys Rev Lett 97, 167002 (2006).
V Grinenko, S Ghosh, R Sarkar, J.-C Orain, A Nikitin, M Elender, D Das, Z Guguchia, F Brückner, M E Barber, J Park, N Kikugawa, D A Sokolov, J S Bobowski, T Miyoshi, Y Maeno, A P Mackenzie, H Luetkens, C W Hicks, H.-H Klauss, Split Superconducting and Time-Reversal Symmetry-Breaking Transitions in Sr2RuO4 under Stress. 17748V. Grinenko, S. Ghosh, R. Sarkar, J.-C. Orain, A. Nikitin, M. Elender, D. Das, Z. Guguchia, F. Brückner, M. E. Barber, J. Park, N. Kikugawa, D. A. Sokolov, J. S. Bobowski, T. Miyoshi, Y. Maeno, A. P. Mackenzie, H. Luetkens, C. W. Hicks, and H.-H. Klauss, Split Superconducting and Time-Reversal Symmetry-Breaking Transitions in Sr2RuO4 under Stress, Nat. Phys. 17, 748 (2021).
. A Pustogow, Y Luo, A Chronister, Y.-S Su, D A Sokolov, F Jerzembeck, A P Mackenzie, C W , A. Pustogow, Y. Luo, A. Chronister, Y.-S. Su, D. A. Sokolov, F. Jerzembeck, A. P. Mackenzie, C. W.
Constraints on the Superconducting Order Parameter in Sr2RuO4 from Oxygen-17 Nuclear Magnetic Resonance. N Hicks, S Kikugawa, E D Raghu, S E Bauer, Brown, Nature. 57472Hicks, N. Kikugawa, S. Raghu, E. D. Bauer, and S. E. Brown, Constraints on the Superconducting Order Parameter in Sr2RuO4 from Oxygen-17 Nuclear Magnetic Resonance, Nature 574, 72 (2019).
Elastocaloric Determination of the Phase Diagram of Sr2RuO4. Y.-S Li, M Garst, J Schmalian, S Ghosh, N Kikugawa, D A Sokolov, C W Hicks, F Jerzembeck, M S Ikeda, Z Hu, B J Ramshaw, A W Rost, M Nicklas, A P Mackenzie, Nature. 607276Y.-S. Li, M. Garst, J. Schmalian, S. Ghosh, N. Kikugawa, D. A. Sokolov, C. W. Hicks, F. Jerzembeck, M. S. Ikeda, Z. Hu, B. J. Ramshaw, A. W. Rost, M. Nicklas, and A. P. Mackenzie, Elastocaloric Determination of the Phase Diagram of Sr2RuO4, Nature 607, 276 (2022).
Superconducting Phases of Bi and Ga Induced by Deposition on a Ni Sublayer. J S Moodera, R Meservey, Phys. Rev. B. 42179J. S. Moodera and R. Meservey, Superconducting Phases of Bi and Ga Induced by Deposition on a Ni Sublayer, Phys. Rev. B 42, 179 (1990).
Coexistence of Ferromagnetism and Superconductivity in Ni/Bi Bilayers. P Leclair, J S Moodera, J Philip, D Heiman, Phys Rev Lett. 9437006P. LeClair, J. S. Moodera, J. Philip, and D. Heiman, Coexistence of Ferromagnetism and Superconductivity in Ni/Bi Bilayers, Phys Rev Lett 94, 037006 (2005).
Anomalous Magnetic Moments as Evidence of Chiral Superconductivity in a Bi/Ni Bilayer. J Wang, X Gong, G Yang, Z Lyu, Y Pang, G Liu, Z Ji, J Fan, X Jing, C Yang, F Qu, X Jin, L Lu, Phys. Rev. B. 9654519J. Wang, X. Gong, G. Yang, Z. Lyu, Y. Pang, G. Liu, Z. Ji, J. Fan, X. Jing, C. Yang, F. Qu, X. Jin, and L. Lu, Anomalous Magnetic Moments as Evidence of Chiral Superconductivity in a Bi/Ni Bilayer, Phys. Rev. B 96, 054519 (2017).
X Gong, M Kargarian, A Stern, D Yue, H Zhou, X Jin, V M Galitski, V M Yakovenko, J Xia, Time-Reversal Symmetry-Breaking Superconductivity in Epitaxial Bismuth/Nickel Bilayers. 31602579X. Gong, M. Kargarian, A. Stern, D. Yue, H. Zhou, X. Jin, V. M. Galitski, V. M. Yakovenko, and J. Xia, Time-Reversal Symmetry-Breaking Superconductivity in Epitaxial Bismuth/Nickel Bilayers, Sci Adv 3, e1602579 (2017).
P Chauhan, F Mahmood, D Yue, P.-C Xu, X Jin, N P Armitage, Nodeless Bulk Superconductivity in the Time-Reversal Symmetry Breaking Bi/Ni Bilayer System. 12217002P. Chauhan, F. Mahmood, D. Yue, P.-C. Xu, X. Jin, and N. P. Armitage, Nodeless Bulk Superconductivity in the Time-Reversal Symmetry Breaking Bi/Ni Bilayer System, Physical Review Letters 122, 017002 (2019).
Odd Triplet Superconductivity and Related Phenomena in Superconductor-Ferromagnet Structures. F S Bergeret, A F Volkov, K B Efetov, Rev. Mod. Phys. 771321F. S. Bergeret, A. F. Volkov, and K. B. Efetov, Odd Triplet Superconductivity and Related Phenomena in Superconductor-Ferromagnet Structures, Rev. Mod. Phys. 77, 1321 (2005).
Superconductivity in a Bi/Ni Bilayer. S.-P Chao, Phys. Rev. B. 9964504S.-P. Chao, Superconductivity in a Bi/Ni Bilayer, Phys. Rev. B 99, 064504 (2019).
V Siva, K Senapati, B Satpati, S Prusty, D K Avasthi, D Kanjilal, P K Sahoo, Spontaneous Formation of Superconducting NiBi3 Phase in Ni-Bi Bilayer Films. 11783902V. Siva, K. Senapati, B. Satpati, S. Prusty, D. K. Avasthi, D. Kanjilal, and P. K. Sahoo, Spontaneous Formation of Superconducting NiBi3 Phase in Ni-Bi Bilayer Films, Journal of Applied Physics 117, 083902 (2015).
Superconductivity in Bi/Ni Bilayer System: Clear Role of Superconducting Phases Found at Bi/Ni Interface. L Y Liu, Y T Xing, I L C Merino, H Micklitz, D F Franceschini, E Baggio-Saitovitch, D C Bell, I G Solórzano, Physical Review Materials. 214601L. Y. Liu, Y. T. Xing, I. L. C. Merino, H. Micklitz, D. F. Franceschini, E. Baggio-Saitovitch, D. C. Bell, and I. G. Solórzano, Superconductivity in Bi/Ni Bilayer System: Clear Role of Superconducting Phases Found at Bi/Ni Interface, Physical Review Materials 2, 014601 (2018).
Origin of Superconductivity at Nickel-Bismuth Interfaces. M Vaughan, N Satchell, M Ali, C J Kinane, G B G Stenning, S Langridge, G Burnell, Phys. Rev. Research. 213270M. Vaughan, N. Satchell, M. Ali, C. J. Kinane, G. B. G. Stenning, S. Langridge, and G. Burnell, Origin of Superconductivity at Nickel-Bismuth Interfaces, Phys. Rev. Research 2, 013270 (2020).
G J Zhao, X X Gong, P C Xu, B C Li, Z Y Huang, X F Jin, X D Zhu, T Y Chen, Singlet Superconductivity in a Single-Crystal NiBi3 Superconductor. 31125005G. J. Zhao, X. X. Gong, P. C. Xu, B. C. Li, Z. Y. Huang, X. F. Jin, X. D. Zhu, and T. Y. Chen, Singlet Superconductivity in a Single-Crystal NiBi3 Superconductor, Supercond Sci Tech 31, 125005 (2018).
J Kumar, A Kumar, A Vajpayee, B Gahtori, D Sharma, P K Ahluwalia, S Auluck, V P S Awana, Physical Property and Electronic Structure Characterization of Bulk Superconducting Bi3Ni. 2485002J. Kumar, A. Kumar, A. Vajpayee, B. Gahtori, D. Sharma, P. K. Ahluwalia, S. Auluck, and V. P. S. Awana, Physical Property and Electronic Structure Characterization of Bulk Superconducting Bi3Ni, Supercond Sci Tech 24, 085002 (2011).
B Silva, R F Luccas, N M Nemes, J Hanko, M R Osorio, P Kulkarni, F Mompean, M García-Hernández, M A Ramos, S Vieira, H Suderow, Superconductivity and Magnetism on Flux-Grown Single Crystals of NiBi3. 88184508B. Silva, R. F. Luccas, N. M. Nemes, J. Hanko, M. R. Osorio, P. Kulkarni, F. Mompean, M. García- Hernández, M. A. Ramos, S. Vieira, and H. Suderow, Superconductivity and Magnetism on Flux-Grown Single Crystals of NiBi3, Physical Review B 88, 184508 (n.d.).
Structure-Induced Coexistence of Ferromagnetic and Superconducting States of Single-Phase Bi3Ni Seen via Magnetization and Resistance Measurements. T Herrmannsdörfer, R Skrotzki, J Wosnitza, D Köhler, R Boldt, M Ruck, Physical Review B. 83140501T. Herrmannsdörfer, R. Skrotzki, J. Wosnitza, D. Köhler, R. Boldt, and M. Ruck, Structure-Induced Coexistence of Ferromagnetic and Superconducting States of Single-Phase Bi3Ni Seen via Magnetization and Resistance Measurements, Physical Review B 83, 140501 (2011).
Surface-Induced Magnetic Fluctuations in a Single-Crystal NiBi3superconductor. X Zhu, H Lei, C Petrovic, Y Zhang, Physical Review B. 8624527X. Zhu, H. Lei, C. Petrovic, and Y. Zhang, Surface- Induced Magnetic Fluctuations in a Single-Crystal NiBi3superconductor, Physical Review B 86, 024527 (2012).
Surface Magneto-Optic Kerr Effect (SMOKE). Z Qiu, Journal of Magnetism and Magnetic Materials. 200664Z. Qiu, Surface Magneto-Optic Kerr Effect (SMOKE), Journal of Magnetism and Magnetic Materials 200, 664 (1999).
. S D Bader, Smoke, Journal of Magnetism and Magnetic Materials. 100440S. D. Bader, Smoke, Journal of Magnetism and Magnetic Materials 100, 440 (1991).
Modified Sagnac Interferometer for High-Sensitivity Magneto-Optic Measurements at Cryogenic Temperatures. J Xia, P T Beyersdorf, M M Fejer, A Kapitulnik, Applied Physics Letters. 89J. Xia, P. T. Beyersdorf, M. M. Fejer, and A. Kapitulnik, Modified Sagnac Interferometer for High- Sensitivity Magneto-Optic Measurements at Cryogenic Temperatures, Applied Physics Letters 89, (2006).
On the Proof of the Reality of the Luminiferous. G Sagnac, Comptes Rendus. G. Sagnac, On the Proof of the Reality of the Luminiferous..., Comptes Rendus (1913).
Polar Kerr Effect as Probe for Time-Reversal Symmetry Breaking in Unconventional Superconductors. A Kapitulnik, J Xia, E Schemm, A Palevski, New Journal of Physics. 11A. Kapitulnik, J. Xia, E. Schemm, and A. Palevski, Polar Kerr Effect as Probe for Time-Reversal Symmetry Breaking in Unconventional Superconductors, New Journal of Physics 11, (2009).
C Gong, L Li, Z Li, H Ji, A Stern, Y Xia, T Cao, W Bao, C Wang, Y Wang, Z Q Qiu, R J Cava, S G Louie, J Xia, X Zhang, Discovery of Intrinsic Ferromagnetism in Two-Dimensional van Der Waals Crystals. 546265C. Gong, L. Li, Z. Li, H. Ji, A. Stern, Y. Xia, T. Cao, W. Bao, C. Wang, Y. Wang, Z. Q. Qiu, R. J. Cava, S. G. Louie, J. Xia, and X. Zhang, Discovery of Intrinsic Ferromagnetism in Two-Dimensional van Der Waals Crystals, Nature 546, 265 (2017).
Localized Control of Curie Temperature in Perovskite Oxide Film by Capping-Layer-Induced Octahedral Distortion. S Thomas, B Kuiper, J Hu, J Smit, Z Liao, Z Zhong, G Rijnders, A Vailionis, R Wu, G Koster, J Xia, Phys. Rev. Lett. 119177203S. Thomas, B. Kuiper, J. Hu, J. Smit, Z. Liao, Z. Zhong, G. Rijnders, A. Vailionis, R. Wu, G. Koster, and J. Xia, Localized Control of Curie Temperature in Perovskite Oxide Film by Capping-Layer-Induced Octahedral Distortion, Phys. Rev. Lett. 119, 177203 (2017).
Compact Large-range Cryogenic Scanner. J Siegel, J Witt, N Venturi, S Field, Review of Scientific Instruments. 662520J. Siegel, J. Witt, N. Venturi, and S. Field, Compact Large-range Cryogenic Scanner, Review of Scientific Instruments 66, 2520 (1995).
E R Schemm, W J Gannon, C M Wishne, W P Halperin, A Kapitulnik, Observation of Broken Time-Reversal Symmetry in the Heavy-Fermion Superconductor UPt3. 345190E. R. Schemm, W. J. Gannon, C. M. Wishne, W. P. Halperin, and A. Kapitulnik, Observation of Broken Time-Reversal Symmetry in the Heavy-Fermion Superconductor UPt3, Science 345, 190 (2014).
Multicomponent Superconducting Order Parameter in UTe2. I M Hayes, D S Wei, T Metz, J Zhang, Y S Eo, S Ran, S R Saha, J Collini, N P Butch, D F Agterberg, A Kapitulnik, J Paglione, Science. 373797I. M. Hayes, D. S. Wei, T. Metz, J. Zhang, Y. S. Eo, S. Ran, S. R. Saha, J. Collini, N. P. Butch, D. F. Agterberg, A. Kapitulnik, and J. Paglione, Multicomponent Superconducting Order Parameter in UTe2, Science 373, 797 (2021).
Polar Kerr-Effect Measurements of the High-Temperature YBa2Cu3O6+x Superconductor: Evidence for Broken Symmetry near the Pseudogap Temperature. J Xia, E Schemm, G Deutscher, S A Kivelson, D A Bonn, W N Hardy, R Liang, W Siemons, G Koster, M M Fejer, A Kapitulnik, Physical Review Letters. 100J. Xia, E. Schemm, G. Deutscher, S. A. Kivelson, D. A. Bonn, W. N. Hardy, R. Liang, W. Siemons, G. Koster, M. M. Fejer, and A. Kapitulnik, Polar Kerr- Effect Measurements of the High-Temperature YBa2Cu3O6+x Superconductor: Evidence for Broken Symmetry near the Pseudogap Temperature, Physical Review Letters 100, (2008).
| [] |
[] | [
"\nXUANYU PAN\n\n"
] | [
"XUANYU PAN\n"
] | [] | Let X d be a smooth hypersurface of degree d in P n C . Suppose that the Fano variety F(X d ) of lines of X d is smooth. We prove that the Griffiths group Griff 1 (F(X d )) of F(X d ) is trivial if the hypersurface X d is of 2-Fano type. As a result, we give a positive answer to a question of Professor Voisin about the first Griffiths groups of Fano varieties in some cases. Base on this result, we prove that CH 2 (X d ) = Z for a smooth 3-Fano hypersurface X d ⊆ P n C with smooth Fano variety of lines. | 10.1007/s00208-016-1476-0 | [
"https://arxiv.org/pdf/1512.01721v3.pdf"
] | 119,641,944 | 1512.01721 | d2dae05ac3375c2787dbbca03c07113bfffb1b45 |
12 Oct 2016
XUANYU PAN
2-CYCLES ON HIGHER FANO HYPERSURFACES
Let X d be a smooth hypersurface of degree d in P n C . Suppose that the Fano variety F(X d ) of lines of X d is smooth. We prove that the Griffiths group Griff 1 (F(X d )) of F(X d ) is trivial if the hypersurface X d is of 2-Fano type. As a result, we give a positive answer to a question of Professor Voisin about the first Griffiths groups of Fano varieties in some cases. Base on this result, we prove that CH 2 (X d ) = Z for a smooth 3-Fano hypersurface X d ⊆ P n C with smooth Fano variety of lines.
Introduction
A fundamental question about cycles on a smooth projective variety X over complex numbers is to determine the Griffiths groups of X. Let us recall the definition of the first Griffiths group Griff_1(X) = CH_1(X)_hom / CH_1(X)_alg.
Professor Voisin asks the following question:
Question 1.1. Is the Griffiths group Griff 1 (X) of a Fano variety X (or more generally, a rationally connected variety) over complex numbers trivial?
In general, to answer this question is very difficult. However, in the case of dimension at most three, Bloch and Srinivas give a positive answer, see [BS83]. Recently, Tian and Zong give a positive answer for Fano complete intersections in a projective space, see [TZ14]. Another fundamental question about cycles is to determine the Chow groups of a variety. The geometry of Chow groups is very delicate. For instance, Mumford proves that the Chow group of zero cycles CH 0 (S) of a K3 surface S is infinite-dimensional, see [Mum68]. For a Fano variety X over complex numbers, János Kollár, Yoichi Miyaoka, and Shigefumi Mori prove that CH 0 (X) = Z since X is rationally connected, see [KMM92]. In the paper [dJS07], de Jong and Starr introduce a concept of higher Fano varieties. Typical examples of higher Fano varieties are low degree hypersurfaces. In the paper [TZ14], Tian and Zong prove that
CH 1 (X d ) = Z
for a smooth hypersurface X_d of 2-Fano type, see [TZ14] for details. In this paper, we use recent techniques from rational curves on algebraic varieties due to de Jong and Starr, and Harris and Roth (cf. [dJS06], [HRS04] and [HS05]) to give a positive answer to Question 1.1 in the case of Fano varieties of lines. As a result, we prove that CH_2(X_d) = Z for a general smooth 3-Fano hypersurface X_d over complex numbers. More precisely, we have Theorem 1.2. We know that the assumption of the smoothness of F(X_d) always holds if d = 3 or X_d is a general hypersurface.
Let us briefly describe the structure of this paper. In Section 2 and 3, we use the Tsen-Lang Theorem to show that CH 2 (X d ) can be generated by ruled surfaces and the torsion part of CH 2 (X d ) is annihilated by d if X d is of 3-Fano. In Section 4, we use the fibration structure of F(X d ) and the homotopy fiber sequence to prove the second homology group of F(X d ) is torsion-free and generated by the class of lines under some hypothesis.
In Section 5, 6 and 7, we systematically use the techniques of rational curves such as bend-and-break, smoothing curves and the geometry of quadrics to show the connectedness of the moduli spaces of rational curves on F(X d ). As a result, we show Theorem 1.2.
Acknowledgments. The author would like to thank Professor Jason Starr for his considerate explanation of his thesis to the author. The author thanks Professor Luc Illusie and Professor Burt Totaro for their interest in this project. The author also thanks Professor Matt Kerr for giving a course on algebraic cycles at Washington University in St. Louis; part of this paper was inspired during that course. Finally, the author is grateful to his truly great friend Dr. Zhiyu Tian.
Rationally Equivalent to Ruled Surfaces
In this section, we suppose that X d is a hypersurface in P n C of degree d. Lemma 2.1. Suppose that E is a vector bundle of rank two on an algebraic scheme M . Let σ 0 and σ 1 be two sections of π : P(E) → M . Then, we have
σ 0 − σ 1 = π * ([N ]) ∈ CH 1 (P(E))
where N is a cycle of codimension one on M .
Proof. By [Ful98], we have
(2.1.1) CH_{l−1}(M) ⊕ CH_l(M) ≅ CH_l(P(E)), (Z_0, Z_1) ↦ π^*(Z_0) + h ∩ π^*(Z_1), where Z_0 ∈ CH_{l−1}(M), Z_1 ∈ CH_l(M) and h = c_1(O_{P(E)}(1)). If l = dim(M), then the sections [σ_i(M)] ∈ CH_1(P(E)) have the same second coordinate h ∩ π^*([M]) by the identification (2.1.1). In other words, [σ_i(M)] = (−, h ∩ π^*([M])). Therefore, we have σ_0 − σ_1 = π^*([N]) for some [N] ∈ CH_{l−1}(M).
Let M 0,2 (X d , P 1 ∨P 1 ) be the Kontsevich space parametrizing the reducible conics on X d with one marked point on each component. Denote M 0,2 (X d , P 1 ∨ P 1 ) by M 2 . It is clear that we have the forgetful map π : M 2 → M 0,1 (X d , P 1 ∨ P 1 ) := M 1 forgetting the second marked point.
There is a commutative diagram of evaluation maps
(ev_1, ev_2) : M_2 → X_d × X_d, π : M_2 → M_1 = M_{0,1}(X_d, P^1 ∨ P^1), ev : M_1 → X_d, π_1 : X_d × X_d → X_d, together with a section σ_1 : M_1 → M_2 of π,
where σ 1 is the universal section of π and π 1 is the projection onto the first factor.
Let M 1,p be the fiber ev −1 (p) of ev over p ∈ X d . So we have maps
(2.1.2) f = ev_2 : C = M_2|_{M_{1,p}} → X_d, together with two sections σ_1, σ_2 : M_{1,p} → C,
such that Im(ev 2 • σ 1 ) = {p} and σ 2 is the section induced by the singular locus of π. More precisely, suppose that s ∈ M 1,p parametrizes a reducible conic L 1 ∪ L 2 with the wedge point q = L 1 ∩ L 2 , then σ 2 (s) = q. Therefore, the fiber f −1 (q) of f over q is the fiber (ev 1 , ev 2 ) −1 (p, q). In other words, the fiber f −1 (q) parametrizes the reducible conics (on X d ) passing through p and q, so f −1 (q) is a complete intersection, see [dJS06] or [dJHS11, Page 82(2)]. Moreover, the general fiber of f is a complete intersection in P n of type
(1, 1, 2, 2, 3, 3 . . . , d − 1, d − 1, d).
Suppose that S is a surface passing through a general point of X d . It follows that the preimage f −1 (Spec(C(S)) of the generic point of S is a complete intersection in P n C(S) of type (1, 1, 2, 2, 3, 3 . . . , d − 1, d − 1, d). In particular, if the square sum of these degrees
2(∑_{i=1}^{d−1} i^2) + d^2 = d(2d^2 + 1)/3
is less than n + 1, then, by the Tsen-Lang Theorem [Lan52], there exists a surface S′ ⊆ C such that f|_{S′} is generically one-to-one onto S. In other words, we have
f * ([S ′ ]) = [S] in CH 2 (X d ).
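For the reader's convenience, here is a short verification of the closed form above (an added remark; it only uses the standard identity for the sum of squares):
\[
2\sum_{i=1}^{d-1} i^{2} + d^{2}
= \frac{(d-1)d(2d-1)}{3} + d^{2}
= \frac{d\,(2d^{2}-3d+1+3d)}{3}
= \frac{d(2d^{2}+1)}{3}.
\]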
Here, we do not need to worry about the stacky issue since S ′ passes through a non-stacky point, see [Vis89, Section 5 and 6] for the details. Therefore, we have the following diagram induced by the diagram (2.1.2)
the maps g : C′ → C, f : C → X_d, π : C → M_{1,p}, π|_{S′} : S′ → M_{1,p}, together with sections σ_1, σ_2, ∆ : S′ → C′, where ∆ is induced by the diagonal map of S′. It is clear that f ∘ g ∘ ∆ = f|_{S′} and Im(f ∘ g ∘ σ_1) = {p}.
By Lemma 2.1, the following cycles are rationally equivalent
[∆(S ′ )] ∼ rat [σ 2 (S ′ )] ∼ rat [σ 1 (S ′ )]
in CH 2 (C ′ ) mod π * CH 1 (S ′ ). In particular,
[S] = [f ∘ g ∘ ∆(S′)] ∼_rat [f ∘ g ∘ σ_1(S′)] = [p] = 0 in CH_2(X_d) mod Im((f ∘ g)_* ∘ π^* : CH_1(S′) → CH_2(X_d))
. Therefore, we have
[S] ∈ Im ((f • g) * • π * : CH 1 (S ′ ) → CH 2 (X d )) .
We conclude that [S] is the formal sum of some ruled surfaces whose fibers are lines.
In particular, the Abel-Jacobi map AJ is surjective
(2.1.3) AJ : CH 1 (F(X d )) → CH 2 (X d )
where F(X d ) is the Fano variety of lines of X d . In summary, we show the following proposition.
Proposition 2.2. Every surface S passing through a general point in X d is rationally equivalent to the formal sum of ruled surfaces if
(2.2.1) d(2d^2 + 1)/3 ≤ n.
Let C be a projective curve. Suppose that g is a morphism g : C → F(X d ). We consider the following incidence subvariety I
(2.2.2) I = {(l, P) | l ⊆ P}, with h : I → F(X_d) × F_2(X_d) and the projections π_1 : F(X_d) × F_2(X_d) → F(X_d), π_2 : F(X_d) × F_2(X_d) → F_2(X_d),
where l is a line in X d , P is a 2-plane in X d and F 2 (X d ) is the Fano variety of 2-planes in X d . We claim that the pull-back h| C of h via g has a section s.
g^{-1}I → I over g : C → F(X_d), with h|_C : g^{-1}I → C the pullback of h and s : C → g^{-1}I a section of h|_C.
We show the claim as follows. Recall that the space F_p of lines in X_d through a point p ∈ X_d is defined by equations of P^{n−1} of type (1, 2, 3, 4, . . . , d − 1, d), see [CoS09, Lemma 2.1]. Therefore, the space Fano_q(F_p) of lines in F_p through a point q ∈ F_p is defined by equations in P^{n−2} of type (complete intersection of equations of degree)
(2.2.3)
(1, 1, 2); (1, 1, 2, 3); (1, 1, 2, 3, 4); . . . ; (1, 1, 2, 3, . . . d).
Let l be a line in F p corresponding to a plane P 2 = Span(l,
p) in X d . Suppose that x is a point of F(X d ) parametrizing a line L x ⊆ X d . The fiber h −1 (x) is the space of 2-planes in X d containing the line L x .
Let Q be a point on L x . Suppose that F Q is the subspace of F(X d ) parametrizing the lines through Q. Denote by Fano x (F Q ) the space of lines in F Q parametrizing lines through x ∈ F Q ⊆ F(X d ). So the space Fano x (F Q ) is defined by the equations in P n of type as (2.2.3). Note that
h −1 (x) = Fano x (F Q )
and the sum of (2.2.3)
(2.2.4) (d − 1) + ∑_{i=2}^{d} i(i + 1)/2 = d − 2 + d(d + 1)(2d + 1)/12 + d(d + 1)/4
is less than n − 1 if we assume the inequality (2.2.1). So the map h| C has a section s by the Tsen-Lang Theorem. We have proved the claim. In particular, it produces a map as follows
Pr 2 •s : C → I → F 2 (X d ).
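As a sanity check of (2.2.4) (an added computation), use the identity ∑_{i=1}^{d} i(i+1)/2 = d(d+1)(d+2)/6:
\[
(d-1)+\sum_{i=2}^{d}\frac{i(i+1)}{2}
=(d-1)+\frac{d(d+1)(d+2)}{6}-1
= d-2+\frac{d(d+1)(2d+1)}{12}+\frac{d(d+1)}{4},
\]
since \frac{d(d+1)(2d+1)}{12}+\frac{d(d+1)}{4}=\frac{d(d+1)(2d+4)}{12}=\frac{d(d+1)(d+2)}{6}. Moreover, with n as in (2.2.1),
\[
\Big(\frac{d(2d^{2}+1)}{3}-1\Big)-\Big(d-2+\frac{d(d+1)(d+2)}{6}\Big)=\frac{(d-1)(d^{2}-2)}{2}>0
\]
for d ≥ 2, so the sum is indeed less than n − 1.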
Therefore, we show the following lemma:
Lemma 2.3. Suppose that the morphism g gives rise to a family L of lines on X d . Then the morphism Pr 2 •s gives rise to a family P of 2-planes in X d which contains L.
L → P → X_d (the second map being H), with both families L and P over C.
The Torsion part of the Second Chow group
In this section, we always assume that (3.0.1) d(2d^2 + 1)/3 ≤ n and d ≥ 3.
Lemma 3.1. With the notations as in Section 2, we have
d(H_*([L])) = ∑_i ±[P_i] + [V] · c_1(O_{X_d}(d)) in CH_2(X_d),
where {P_i} are 2-planes in X_d and V is H_*([P]) ∈ CH_3(X_d).
Proof. It is clear that [V] · c_1(O_{X_d}(d)) = H_*([P] · H^*(c_1(O_{X_d}(d)))) in CH_3(|V| ∩ |X_d|)
, see [Ful98]. By Lemma 2.3, for the fiber P y of P/C over the point y ∈ C, we have that
P 1 = L y ⊆ P y = P 2 and [H * ([X d ])]| Py = d[L y ].
Therefore, we conclude that
d · [L] − H^*(c_1(O_{X_d}(d))) = ∑ (± fibers of P/C) = ∑_{i=1}^{n} ±[P_i]
where {P i } are 2-planes in X d . It follows from the projection formula that
d · H_*([L]) = ∑_{i=1}^{n} ±H_*([P_i]) + c_1(O_{X_d}(d)) · H_*([P]) = ∑_i ±[P_i] + [V] · c_1(O_{X_d}(d)).
By the moving lemma, every 2-cycle of X d is rationally equivalent to the formal sum of some surfaces passing through general points of X d . By Proposition 2.2 and Lemma 3.1, we conclude the following corollary.
Corollary 3.2. Under the hypothesis as above, we have
d[S] = ∑_{i=0}^{n} ±[P_i] + [V] · c_1(O_{X_d}(d)) in CH_2(X_d),
where S ∈ CH_2(X_d), [V] ∈ CH_3(X_d) and {P_i} are 2-planes in X_d.
Proof. Let us consider the incidence I in (2.2.2). By the calculation (2.2.4), the fibers of h : I → F(X_d) are zero loci of equations of Fano type. We claim that these fibers are smooth complete intersections for X_d general; therefore, the general fibers of h are Fano varieties, hence rationally connected for X_d general.
In fact, the incidence I is just the flag variety Fl(P 1 , P 2 ; X d ) of X d . Therefore, we can use the classical incidence method to prove I is irreducible, smooth and of expected dimension for X d general. Namely, consider the following incidence, it is clear that Fl(P 1 , P 2 ; X d ) is the fiber of Pr −1 2 ([X d ]).
I = {(P^1 ⊆ P^2, X_d) | P^1 ⊆ P^2 ⊆ X_d}, with the two projections Pr_1 : I → Fl(P^1, P^2; P^n) and Pr_2 : I → P(H^0(P^n, O_{P^n}(d))).
We leave an easy exercise to the reader to finish the proof of the claim. Note that F(X d ) is rationally connected for X d general (in fact it is Fano, see [Kol96, Chapter V, Exercise 4.7]) and the general fibers of h : I → F(X d ) are rationally connected. It follows from the Graber-Harris-Starr Theorem [GHS03] or the Tsen-Lang Theorem that I is rationally connected for X d general. It is clear that the projection
π 2 : I → F 2 (X d )
is surjective. It implies that F 2 (X d ) is rationally connected for X d general. By the specialization argument, we conclude that F 2 (X d ) is rationally chain connected for every smooth hypersurface X d . In particular, all the 2-planes of X d are rationally equivalent.
Proposition 3.4. With the hypothesis as above, we have
d · [CH 2 (X d )] tor = 0.
Proof. Let a be an element of [CH 2 (X d )] tor . By Corollary 3.2, we have
(3.4.1) d · a = ∑_i a_i [(P^2)_i] + V · [X_d], where a_i ∈ Z, V ∈ CH_3(P^n) and {(P^2)_i} are 2-planes in X_d. Let j be the inclusion X_d → P^n. It is clear that j_*(d · a)
is a torsion element in CH 2 (P n ) = Z. In particular, we have
0 = j_*(d · a) = (∑_i a_i)[P^2] + c[P^3] · [X_d] = (∑_i a_i)[P^2] + c · d · [P^2], where V = c[P^3] ∈ CH_3(P^n). Therefore, it follows that ∑_i a_i + c · d = 0.
By Lemma 3.3, the equality (3.4.1) becomes
(3.4.2) d · a = c([P 3 ] · [X d ] − d · [P 2 ])
where P 2 is a 2-plane in X d . It is clear that, under the hypothesis (3.0.1), the hypersurface X d contains a projective space P 3 by [Wal08, Theorem 1.6]. In particular, the equality (3.4.2) is
d · a = c(d[P 2 ] − d[P 2 ]) = 0 in CH 2 (P 3 ∩ X d ) = CH 2 (P 3 )
. We have proved the proposition.
The Homology of Fano Variety of Lines
. If X − (Z ∪ H) is a local complete intersection, then the homomorphism π i ((X − Z) ∩ H) → π i (X − Z) is an isomorphism for all i < dim(X) − 1. Proposition 4.2. Suppose that X d is a general hypersurface of degree d in P n and d(d + 1) ≤ n − 2. Then, we have H 2 (F(X d ), Z) = Z and it is generated by the class [l] of lines in F(X d ) with respect to the Plücker embedding. Proof. Let P be P H om C (V * , H 0 (P 1 , O P 1 (1))) where X d ⊆ P(V ) = P n is defined by a homogeneous polynomial F of degree d. So P parameterizes (n + 1)−tuples [u 0 , u 1 , . . . , u n ]
of homogeneous polynomials of degree e on P 1 . Let D ⊆ P be the closed subvariety parametrizing the tuples [u 0 , . . . , u n ] that have a common zero in P 1 . Suppose that e = 1. The subvariety D parameterizes the tuples [u 0 , . . . , u n ] that Span(u 0 , . . . , u n ) in H 0 (P 1 , O P 1 (1)) is one dimensional, i.e., every pair (u i , u j ) satisfies a scalar linear relation. In particular, we have (4.2.1) dim(D) = n + 1 and dim(P) = 2n + 1.
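For convenience, here is the dimension count behind (4.2.1) (an added remark). Since H^0(P^1, O_{P^1}(1)) ≅ C^2 and V ≅ C^{n+1}, we have
\[
\mathbb{P}=\mathbb{P}\big(\mathrm{Hom}_{\mathbb C}(V^{*},H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^1}(1)))\big)\cong\mathbb{P}^{2(n+1)-1}=\mathbb{P}^{2n+1},
\]
and, for a fixed common zero t ∈ P^1, the tuples [u_0, . . . , u_n] vanishing at t form a P^n (each u_i lies in the one-dimensional space of sections vanishing at t); letting t vary over P^1 gives dim(D) ≤ 1 + n = n + 1, with equality since a general such tuple has a unique common zero.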
Let P X d be the closed subset of P parameterizing [u 0 , . . . , u n ] such that
F (u 0 , . . . , u n ) ≡ 0.
Then the subvariety P X d is defined by d + 1 homogeneous polynomials of degree d in P, i.e., it is of type as follows
(4.2.2) (d, . . . , d) (d + 1 times).
On the other hand, it is clear that the space Mor 1 (P 1 , X d ) parametrizing the morphisms whose images are lines is P X d −D. In particular, the expected dimension of
Mor 1 (P 1 , X d ) is dim(P) − (d + 1) = 2n + 1 − (d + 1) = 2n − d.
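The count of d + 1 defining equations in (4.2.2) can be seen as follows (an added remark): for e = 1, the composition F(u_0, . . . , u_n) is a binary form of degree d on P^1, and requiring it to vanish identically imposes one condition per coefficient, namely
\[
\dim_{\mathbb C} H^{0}(\mathbb{P}^{1},\mathcal{O}_{\mathbb{P}^1}(d)) = d+1
\]
conditions, each of degree d in the coordinates of P; this matches (4.2.2) and the expected dimension (2n + 1) − (d + 1) = 2n − d.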
The space Mor_1(P^1, X_d) has the expected dimension if and only if P_{X_d} is a complete intersection. It is clear that there is a topological fibration PGL_2(C) → Mor_1(P^1, X_d) → F(X_d) = Mor_1(P^1, X_d)/PGL_2(C). In particular, it follows from [Kol96, Chapter V, 4.3.2] that dim(Mor_1(P^1, X_d)) = dim(F(X_d)) + dim PGL_2(C) = 2n − d − 3 + 3 = 2n − d.
Therefore, the subscheme P X d of P is a complete intersection in P of type (4.2.2).
Notice that we have (4.2.1). By using general n + 2 hyperplanes in P to intersect
with P X d − D, we conclude that P X d − D contains a line L ⊆ P if d(d + 1) ≤ n − 2, see [Kol96,
Ex.4.10.5]. By Proposition 4.1 and the homotopy fiber sequence, we have a short exact sequence
0 → π_2(Mor_1(P^1, X_d)) → π_2(F(X_d)) → π_1(PGL_2(C)) = Z/2Z → 0
where we use the fact π 2 (PGL 2 (C)) = 0 and
π 1 (Mor 1 (P 1 , X d )) = π 1 (P X d − D) = π 1 (P X d ) = {1}.
Note that F(X d ) is simply connected. By the Hurewicz theorem, the above exact short sequence is
0 → H_2(P_{X_d} − D, Z) = Z⟨[L]⟩ → H_2(F(X_d), Z) → Z/2Z → 0, where [L] is the homology class of a line L in P_{X_d} − D. From this exact sequence, it is clear that H_2(F(X_d)) = Z or Z ⊕ Z/2Z. We exclude the second case. In fact, if H_2(F(X_d)) = Z ⊕ Z/2Z, then the composition ϕ : Z⟨[L]⟩ = H_2(Mor_1(P^1, X_d), Z) → H_2(F(X_d)) → H_2(P^N, Z) = Z, where the second map is i_*, is an isomorphism, where i : F(X_d) → P^N is induced by the Plücker embedding. Therefore, we have a family f : L × P^1 → X_d of lines parametrized by L such that the image f_*([L × P^1]) = f_*([P^1 × P^1]) of f is homologous to a 2-plane in X_d.
It is absurd since the self-intersection of any line bundle on P 1 × P 1 is even and the intersection number of two general hyperplanes with a 2-plane is one. We have proved the proposition.
Dual Graphs and Specializations
We recall some facts about the Kontsevich space M 0,m (X, b) which parametrizes stable maps of genus zero and of degree b into X. For the details, we refer to [BM96] and [HRS04]. 5.1. Notations. We recall the combinatorial data for a stable map, namely, its dual graph. All the graphs in our paper are finite trees, i.e., they have finitely many vertices and do not have any loops. A graph consists of the following data:
(1) there is a non-negative integer d v for each vertex v, which we call the degree of the vertex, and (2) there is a list L = {p 1 , . . . , p k } of vertices which we call the marked points.
The points in the list may not be distinct, the number of the point p in this list is called the multiplicity of the point. Let us recall how to associate to a stable map its dual graph. For a stable map of genus zero f : C → X, the dual graph G(f ) of f is a graph with data as follows:
• the vertices of the graph G(f) correspond one-to-one to the irreducible components of C, e.g. the vertex v ∈ v(G) corresponds to the irreducible component C_v of C,
• there is an edge between two vertices for every intersection point between the two corresponding components of C,
• there is a non-negative integer d_v which is equal to the degree of f_*([C_v]) (it may be zero if the map f collapses the component),
• a marked point on the component C_i ≅ P^1 contributes a point to the list L.
We call a graph a good tree if all the numbers {d_v} are one. The stable map f is a good tree if its dual graph G(f) is a good tree, i.e., the map f does not contract any component and its image is the union of lines.
Definition 5.1. Let G be a graph as above. A subgraph W of G is a graph satisfying:
• the vertices v(W) ⊆ v(G),
• the edges e(W) ⊆ e(G),
• the marked vertices L(W) ⊆ L(G), counted with multiplicity,
• the degree d_v of v ∈ W is equal to the degree with respect to G.
The degree deg(G) of the graph G is ∑_{v∈v(G)} d_v. In this case, we call G_1 a deformation of G and G a specialization of G_1 (without mentioning H). Suppose that G is a good tree. If G_1 is obtained by contracting a subgraph K in G and the total degree of K is two, then we call G_1 a conic deformation of G. Two good trees G_2 and G_3 are said to be connected by a conic deformation if there is some dual graph K that is a conic deformation of G_2 and G_3. We would like to provide a typical example to illustrate it.
Example 5.1. Consider the following configurations (the underlying chain of degree one rational curve is the same and the unique marked point lies on the i-th component in Γ i ):
Γ_1 : M_{0,2}(X, 1) ×_X M_{0,2}(X, 1) ×_X · · · ×_X M_{0,1}(X, 1) (b factors)
Γ_2 : M_{0,1}(X, 1) ×_X M_{0,3}(X, 1) ×_X · · · ×_X M_{0,1}(X, 1) (b factors)
. . .
Γ_b : M_{0,1}(X, 1) ×_X M_{0,2}(X, 1) ×_X · · · ×_X M_{0,2}(X, 1) (b factors)
where the marked point of the configuration Γ i is on the i-th component of the domain of the stable map. We can use a conic deformation to connect Γ i and Γ i+1 . We only give this deformation for i = 1 (the general case is similar). In fact, the conic deformation connecting Γ 1 and Γ 2 is given by the dual graph of the following configuration:
(5.1.1) M_{0,2}(X, 2) ×_X M_{0,2}(X, 1) ×_X · · · ×_X M_{0,1}(X, 1) (b − 1 factors).
Keeping the notations as before, we have a space M 0,m (X, G) which parametrizes the stable maps whose dual graphs are G or specializations of G. We have evaluation maps
(5.1.2) ev : M 0,1 (X, b) → X and ev G : M 0,1 (X, G) → X.
Lemma 5.2. Let G 1 and G 2 be two dual graphs of good trees of total degree e, i.e., each graph consists of e vertices such that each vertex is of degree 1. Assume that G 1 and G 2 have k marked points. Then we can connect G 1 to G 2 by a finite series of conic deformations.
See Example 5.1. We can connect Γ_i and Γ_j by finitely many configurations similar to (5.1.1).
Proof. Let G 0 be the dual graph such that |v(G 0 )| = e and d v = 1 for each v ∈ v(G 0 ). Suppose that G 0 has a central vertex v 0 to which every other vertex is connected and all the marked points are v 0 . To prove the lemma, it suffices to show that all the good trees can be deformed to G 0 by series of conic deformations. Suppose that G is a good tree and v ′ is a vertex of G with the most edges. If v ′ is not connected to every vertex, then there is some vertex w connected to both v ′ and other vertex u.
We contract the subgraph K(⊆ G) of degree two which consists of {w, v ′ }. This new graph G ′ is a contraction of a graph G ′′ where G ′′ is the union of G ′ and one point v 1 such that v 1 connects to v K ∈ v(G ′ ). In particular, we get a conic deformation connecting G ′′ and G. From G to G ′′ , we increase the number of the edges connected to v ′ since u is connected to v ′ in G ′′ . Similarly, for any marked point which is not v ′ , there is a conic deformation that increases the number
#({p ∈ L |p = v ′ }), see 5.1 (2).
Repeat this process. We can connect G to G 0 by series of conic deformations. We have proved the lemma.
Lemma 5.3. Let G be a dual graph with d v = 0 or 1 for all v ∈ v(G). Then G is a specialization of a good tree G ′ . Moreover, we have
deg(G) = ∑_{v∈G} d_v = ∑_{v′∈G′} d_{v′} = deg(G′).
Proof. For a vertex v ∈ v(G) with d_v = 0, we can pick some vertex w which is connected to v. Then we contract the subgraph that consists of {v, w}. Repeating this process, we have proved the lemma. 5.3. The connectedness of evaluation fibers for the unions of lines and conics. In the following, we assume that H_2(X, Z) = Z and that the evaluation fibers of the evaluation maps (5.3.1) M_{0,1}(X, 1) → X and M_{0,1}(X, 2) → X are connected and nonempty. We will verify these assumptions for X = F(X_d) in Section 7, see Lemma 7.2 and Lemma 7.4. It is obvious that we have the following lemma. and ev^{-1}_{K_i}(p) ⊆ ev^{-1}_G(p) (in the fiber ev^{-1}(p)) for p ∈ X. Proof. Assume that k ≥ 2 (i.e., the graph G has at least two vertices). Since d_v is positive (i.e., a stable map of type G does not contract any component), a stable map f of type G is reducible and it is the union of two stable maps of types G_1 and G_2 such that the marked point is on G_1. In other words, we have the following diagram
M_{0,2}(X, G_1) ×_X M_{0,1}(X, G_2) ↠ M_{0,1}(X, G), with ev_1 : M_{0,2}(X, G_1) ×_X M_{0,1}(X, G_2) → X and ev_G : M_{0,1}(X, G) → X, where v(G_i) = k_i, deg(G_i) = b_i and k_1 + k_2 = k, b_1 + b_2 = b.
Let êv_1 and êv_2 be the two natural evaluation maps of M_{0,2}(X, G_1), (êv_1, êv_2) : M_{0,2}(X, G_1) → X × X.
Note the assumption (5.3.1). By the induction on k, we conclude that the fiber ev_1^{-1}(p) = êv_1^{-1}(p) ×_X M_{0,1}(X, G_2) is connected, from the fact that the fiber êv_1^{-1}(p) is connected and Lemma 5.4. In fact, the following diagram commutes and the fibers of the morphism F (it is the forgetful map forgetting the second marked point) are connected. Therefore, it implies that êv_1^{-1}(p) is connected since ev_{G_1}^{-1}(p) is connected by the induction on k_1 = v(G_1):
êv_1^{-1}(p) → M_{0,2}(X, G_1) → X (the second map being êv_1), with F : M_{0,2}(X, G_1) → M_{0,1}(X, G_1) and ev_{G_1}^{-1}(p) → M_{0,1}(X, G_1) → X (the second map being ev_{G_1}).
Proposition 5.6. Let Γ 1 and Γ 2 be two possible dual graphs of the unions of lines in X of total degree b with one marked point. Then the evaluation fibers
ev −1 Γ1 (p) and ev −1 Γ2 (p) are in a common connected component of ev −1 b (p) where ev b is the evaluation map ev b : M 0,1 (X, b) → X.
Proof. By Lemma 5.3, we know Γ i is a specialization of a good treeΓ i of degree b, in particular, we have M 0,1 (X, Γ i ) ⊆ M 0,1 (X,Γ i ).
Therefore, we can assume that Γ i is a good tree for i = 1, 2. By Lemma 5.2, we have the following series of conic deformations to connect Γ 1 and Γ 2
K_0, K_1, . . . , K_{k+1}, together with good trees Γ̃_0, Γ̃_1, . . . , Γ̃_{k+1}, such that each K_i is a conic deformation of both Γ̃_i and Γ̃_{i+1},
where Γ̃_0 = Γ_1 and Γ̃_{k+1} = Γ_2. Applying Lemma 5.5, we know the fibers ev_{Γ̃_i}^{-1}(p) and ev_{Γ̃_{i+1}}^{-1}(p) are in a common connected component of ev_b^{-1}(p). We have proved the proposition.
We provide an example to illustrate this proposition.
Example 5.7. With the notations as in Example 5.1, we have the evaluation maps as follows
Ev Γi : M 0,1 (X, Γ i ) → X.
We use a conic deformation to connect Ev −1 Γi (p) and Ev −1
Γi+1 (p) in ev −1 b (p) to show that the general fiber Ev −1 Γi (p) over a general point p ∈ X is contained in a connected component of ev −1 b (p)
. We explain it for i = 1, for arbitrary i, it is similar. By the hypothesis (5.3.1)
and Lemma 5.4, we can prove the evaluation map
Ev : M_{0,2}(X, 2) ×_X M_{0,2}(X, 1) ×_X · · · ×_X M_{0,1}(X, 1) (b − 1 factors) → X
has connected fibers as above, where the first factor parametrizes the conics with two marked points. It is clear that the fiber Ev^{-1}(p) over the point p ∈ X contains the fibers Ev_{Γ_1}^{-1}(p) and Ev_{Γ_2}^{-1}(p). Hence, we use this conic deformation to connect Ev_{Γ_1}^{-1}(p) and Ev_{Γ_2}^{-1}(p).
Special Loci, Geometry of Spaces of Quadrics
In this section, we always assume, unless otherwise noted, that (6.0.1) n ≥ d(d + 1)/2 + d and d ≥ 3.
Proposition 6.1. The stack M 0,0 (F(X d ), 1) is smooth and irreducible for X d general. Moreover, we have
dim(M_{0,0}(F(X_d), 1)) = 3n − d(d + 1)/2 − d − 5 for X_d general.
Proof. It is easy to prove this proposition by considering the smooth incidence subvariety I as follows:
I = {(l, X_d) | l ⊆ F(X_d)}, with the two projections pr_1 : I → M_{0,0}(G(2, n + 1), 1) and pr_2 : I → P(H^0(P^n, O_{P^n}(d)))
where l is a line in G(2, n+1). The line l sweeps out a 2-plane P^2 in P^n, and the fiber of pr_1 over [l] is the projective space parametrizing hypersurfaces of degree d containing this 2-plane. Therefore, we conclude that I is smooth since M_{0,0}(G(2, n + 1), 1) is smooth, see [FP97]. On the other hand, we can follow the same method as the proof of [Kol96, Theorem 4.3] to show that the codimension of the singular locus of pr_2 is at least 2. Hence, as in [Kol96, Theorem 4.3], we can conclude that M_{0,0}(F(X_d), 1) is smooth and connected of the expected dimension for X_d general. We leave the details to the reader.
6.1. Special Loci. We have a natural map
M 0,0 (F(X d ), 1) × M 0,0 (P 1 , 2) → M 0,0 (F(X d ), 2) (f, g) → f • g
whose image parametrizes double lines in F(X_d). Denote this image by L_1. We call the locus L_1 special. Similarly, we have a closed substack L of M_{0,0}(G(2, n+1), 2) parametrizing double lines in G(2, n+1). It is clear that the dimension of the special locus L_1 is
dim(M_{0,0}(F(X_d), 1)) + dim(M_{0,0}(P^1, 2)) = 3n − d(d + 1)/2 − d.
Moreover, it follows from Proposition 6.1 that the special locus L 1 is irreducible for X d general. Similarly, we know L is also irreducible since M 0,0 (G(2, n + 1), 1) is irreducible, see [KP01]. We have the following incidence correspondence: (6.1.1)
$$I = \{(C, X_d) \mid C \subseteq F(X_d)\}, \qquad pr_1 : I \to M_{0,0}(G(2, n+1), 2), \qquad pr_2 : I \to \mathbb{P}(H^0(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(d))),$$
where C is a conic in G(2, n + 1) and P(H 0 (P n , O P n (d))) is the space parametrizing hypersurfaces of degree d in P n . It is clear that pr 2 is surjective if any general hypersurface of degree d contains a 2-plane.
6.2. Geometry of Conics.
Lemma 6.2. The special locus L 1 is not an irreducible component of
M 0,0 (F(X d ), 2)
for X d general.
Proof. We know L 1 is irreducible by Proposition 6.1. It is clear that a general point of L 1 parametrizes a conic which is a double cover of a free line passing through a general point of F(X d ). Recall that the space Y of lines in X d through a point p is defined by the equations in P n of type (6.2.1) (1, 1, 2, 3, . . . , d − 1, d).
For each point y ∈ Y , the space Y has a line through y if the sum of these degrees (6.2.1) is at most n − 1, see [Kol96, Chapter V Exercise 4.10.5 ]. It follows that F(X d ) is covered by lines. Therefore, for a general line P 1 in F(X d ), it is free and we have
$$T_{F(X_d)}|_{\mathbb{P}^1} = \mathcal{O}_{\mathbb{P}^1}(a_1) \oplus \mathcal{O}_{\mathbb{P}^1}(a_2) \oplus \cdots \oplus \mathcal{O}_{\mathbb{P}^1}(a_{2n-d-3}),$$
where $T_{F(X_d)}$ is the tangent sheaf of $F(X_d)$, $a_i \ge 0$ and $a_i \ge a_{i+1}$.
We claim that a 3 ≥ 1. In fact, if a 3 = a 4 = . . . = a 2n−d−3 = 0, then we have
$$a_1 + a_2 = \sum_{i=1}^{2n-d-3} a_i = \deg(T_{F(X_d)}) = n + 1 - \binom{d+1}{2} \ge 4.$$
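Here the last inequality is where the standing assumption (6.0.1) enters; spelled out,
$$n + 1 - \binom{d+1}{2} \ge \Bigl(\binom{d+1}{2} + d\Bigr) + 1 - \binom{d+1}{2} = d + 1 \ge 4 \qquad (d \ge 3).$$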
The space $Mor_1(\mathbb{P}^1, F(X_d); \{0, 1\})$ of maps fixing $0$ and $1$ is a disjoint union of copies of $\mathbb{C}^*$. On the other hand, the dimension of the tangent space of $Mor_1(\mathbb{P}^1, F(X_d); \{0, 1\})$ is
$$h^0(\mathbb{P}^1, T_{F(X_d)}(-2)|_{\mathbb{P}^1}) = h^0\bigl(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(a_1 - 2) \oplus \cdots \oplus \mathcal{O}_{\mathbb{P}^1}(a_{2n-d-3} - 2)\bigr) \ge 2,$$
which is a contradiction. So we prove our claim. In particular, by [Kol96, Chapter II, Theorem 3.14.3], a general deformation of a double cover of a general (free) line is a smooth conic in $F(X_d)$. This implies that $L_1$ is not an irreducible component of $M_{0,0}(F(X_d), 2)$.
We apply a similar method to prove the following lemma.
Lemma 6.3. The incidence correspondence I (6.1.1) is irreducible.
Proof. Denote by $U$ the open substack $M_{0,0}(G(2, n+1), 2) - L$. By the main theorem of [KP01], we know $M_{0,0}(G(2, n+1), 2)$ is irreducible. Hence, the stack $U$ is irreducible. On the other hand, every point $c \in U$ sweeps out a quadric $Q_c \subseteq \mathbb{P}^n$ (it may be singular), and the fiber of $pr_1$ over $c$ is the projective space parametrizing hypersurfaces containing $Q_c$. This projective space is a subspace of $\mathbb{P}(H^0(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(d)))$ of codimension $(d+1)^2$; in fact, it is given by
$$(6.3.1)\qquad \mathbb{P}\Bigl(\ker\bigl(H^0(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(d)) \twoheadrightarrow H^0(Q_c, \mathcal{O}_{Q_c}(d))\bigr)\Bigr).
In particular, the open substack pr −1 1 (U ) of I is irreducible. On the other hand, a point u ∈ L parametrizing a double line in F(X d ) sweeps out a 2-plane in P n . As above, the fiber of pr 1 over u is the projective space as follows:
$$\mathbb{P}\Bigl(\ker\bigl(H^0(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(d)) \twoheadrightarrow H^0(\mathbb{P}^2, \mathcal{O}_{\mathbb{P}^2}(d))\bigr)\Bigr).$$
In particular, the closed substack pr −1 1 (L) is irreducible. We claim that pr −1 1 (L) is not an irreducible component of I. The proof is similar to the proof of Lemma 6.2. Since a general hypersurface of degree d in P n contains a 2-plane, a general point u in pr −1 1 (L) maps to a general point of P H 0 (P n , O P n (d)) via pr 2 . Denote pr 2 (u) by [X d ]. On the other hand,
L 1 = pr −1 1 (L) ∩ pr −1 2 ([X d ])
. By the argument as in the proof of Lemma 6.2, we can deform the double line in F(X d ) associated to u to a smooth conic in F(X d ). In particular, the preimage pr −1 1 (L) is not an irreducible component of I.
In summary, we prove that I is irreducible.
Lemma 6.4. The space M 0,0 (F(X d ), 2) − L 1 is smooth (as a scheme) for X d general.
Proof. We first show the stack $I - pr_1^{-1}(L)$ is smooth. In fact, the stack $I - pr_1^{-1}(L)$ is the preimage of the smooth stack $M_{0,0}(G(2, n+1), 2) - L$ (see [FP97]) via $pr_1$. The fiber of $pr_1$ over a point $[C] \in M_{0,0}(G(2, n+1), 2) - L$ is just the projective subspace parametrizing hypersurfaces of degree $d$ containing the quadric $Q_C \subseteq \mathbb{P}^n$, where $Q_C$ is swept out by the lines parametrized by $C$, see (6.3.1). This implies that the stack $I - pr_1^{-1}(L)$ is smooth. It is clear that $M_{0,0}(F(X_d), 2) - L_1 = (I - pr_1^{-1}(L)) \cap pr_2^{-1}([X_d])$. Since the space $I - pr_1^{-1}(L)$ is smooth, we conclude the lemma by the generic smoothness theorem applied to $pr_2$.
6.3. Geometry of Quadrics.
Lemma 6.5. The space M 0,0 (F(X d ), 2) is connected if
$$n \ge \binom{d+1}{2} + d - 1 \quad \text{and} \quad d \ge 3.$$
Proof. We use a degeneration argument. We can assume that $X_d$ is general. By deformation theory, we know every irreducible component of $M_{0,0}(F(X_d), 2)$ has dimension at least the expected dimension
$$-K_{F(X_d)} \cdot 2[\mathrm{line}] + \dim(F(X_d)) - 3 = 2\Bigl(n + 1 - \binom{d+1}{2}\Bigr) + 2n - 3 - d - 3 \ge 4n - \binom{d+1}{2} - d - 5,$$
where $-K_{F(X_d)} = \mathcal{O}_{F(X_d)}\bigl(n + 1 - \binom{d+1}{2}\bigr)$
and dim(F(X d )) = 2n − 3 − d (see [Kol96, Chapter V Theorem 4.3 and Exercise 4.7]). By the assumption, we conclude that every irreducible component of
$M_{0,0}(F(X_d), 2)$ has dimension at least $3n - 6$. We apply [Eri15, Proposition 11.6] to conclude that every component contains a point parametrizing a conic $C$ in $F(X_d)$ such that the associated quadric $Q_C$ in $\mathbb{P}^n$ is singular. Therefore, such a conic $C$ lies in a subspace $Y$ of $F(X_d)$ parametrizing the lines through a point. Recall that $Y$ is defined by equations in $\mathbb{P}^n$ of type
(1, 1, 2, 3, . . . , d − 1, d).
If $1 + 1 + 2 + 3 + \cdots + (d-1) + d + \binom{d+1}{2} \le \frac{3n-2}{2}$ (i.e., $\frac{(d+1)^2 + 4}{3} \le n$), then the subspace $M_{0,0}(Y, 2)$ of $M_{0,0}(F(X_d), 2)$ is connected by [Zon14, Theorem 1.3] and intersects the special locus $L_1$ by the fact that $Y$ contains a line, see [Kol96, Chapter V, Exercise 4.10.5]. This implies the connectedness of $M_{0,0}(F(X_d), 2)$.
Lemma 6.6. Every irreducible component of the space $M_{0,0}(F(X_d), 2)$ is of dimension $D = \dim M_{0,0}(G(2, n+1), 2) - (d+1)^2$. The space $M_{0,0}(F(X_d), 2)$ is a local complete intersection in $M_{0,0}(G(2, n+1), 2)$; therefore, it is $(D-1)$-connected, see [Laz04, Chapter 3, 3.3.C].
Proof. We claim $M_{0,0}(F(X_d), 2)$ is the zero locus of a section of a vector bundle on $M_{0,0}(G(2, n+1), 2)$. In fact, writing $G = G(2, n+1)$, we have the universal diagrams
$$\mathbb{L} = \mathbb{P}(S) \xrightarrow{\;f\;} \mathbb{P}^n, \quad \pi : \mathbb{P}(S) \to G(2, n+1), \qquad \text{and} \qquad g : \mathcal{C} \to G(2, n+1), \quad \bar{\pi} : \mathcal{C} \to M_{0,0}(G, 2),$$
where S is the universal subbundle over G(2, n + 1). Suppose that the section s ∈ H 0 (P n , O P n (d)) defines the hypersurface X d ⊆ P n . We have
$$\pi_* f^* \mathcal{O}_{\mathbb{P}^n}(d) = \pi_* \mathcal{O}_{\mathbb{P}(S)}(d) = \mathrm{Sym}^d(S^\vee)$$
and a section $s_1 = \pi_* f^*(s) \in H^0(G(2, n+1), \mathrm{Sym}^d(S^\vee))$. The zero locus of $s_1$ defines the Fano variety $F(X_d)$ of lines in $X_d$. We claim that $\bar{\pi}_* g^* \mathrm{Sym}^d(S^\vee) = E$ is a vector bundle on $M_{0,0}(G(2, n+1), 2)$. Moreover, it has a section $s_2 = \bar{\pi}_* g^*(s_1)$. In fact, by the base change theorem, it suffices to prove that the cohomology group $H^1(C, h^*(\mathrm{Sym}^d S^\vee))$ is zero for any conic $h : C \to G = G(2, n+1)$. We divide it into two cases:
• If $h : \mathbb{P}^1 \to G$ is a smooth conic or a double line, then $h^*(\mathrm{Sym}^d(S^\vee))$ is semi-ample since there is a quotient $\mathcal{O}_G^{\oplus n+1} \to S^\vee$. In particular, we have $H^1(C, h^*(\mathrm{Sym}^d(S^\vee))) = 0$.
• If the image of $h : C = \mathbb{P}^1 \vee \mathbb{P}^1 \to G$ is a reducible conic or a double line, then, as above, we conclude $H^1(C, h^*(\mathrm{Sym}^d S^\vee)) = 0$ by using the short exact sequence
$$0 \to \mathcal{O}_{\mathbb{P}^1}(-1) \to \mathcal{O}_C \to \mathcal{O}_{\mathbb{P}^1} \to 0.$$
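Spelling out this step (an added remark, using only the decomposition of $\mathrm{Sym}^d S^\vee$ on a line given below): tensoring the sequence with $h^*(\mathrm{Sym}^d S^\vee)$ and taking cohomology gives
$$H^1\bigl(\mathbb{P}^1, (h^*\mathrm{Sym}^d S^\vee)|_{\mathbb{P}^1}(-1)\bigr) \to H^1(C, h^*\mathrm{Sym}^d S^\vee) \to H^1\bigl(\mathbb{P}^1, (h^*\mathrm{Sym}^d S^\vee)|_{\mathbb{P}^1}\bigr),$$
and both outer groups vanish because on each component the pullback is a direct sum of line bundles of non-negative degree.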
It is clear that the zero locus of s 2 defines M 0,0 (F(X d ), 2). As in the proof of Lemma 6.4, we know that
$pr_2^{-1}([X_d]) = M_{0,0}(F(X_d), 2)$, cf. (6.1.1). Note that $I$ is irreducible by Lemma 6.3 and $L_1$ is not an irreducible component of $M_{0,0}(F(X_d), 2)$ for $X_d$ general by Lemma 6.2. The dimension of each irreducible component of $M_{0,0}(F(X_d), 2)$ for $X_d$ general is
$$\dim(I) - \dim(\mathbb{P} H^0(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(d))) = \dim(M_{0,0}(G(2, n+1), 2)) + \dim(pr_1^{-1}([C])) - \dim(\mathbb{P} H^0(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(d)))$$
$$= \dim(M_{0,0}(G(2, n+1), 2)) - h^0(Q_C, \mathcal{O}_{Q_C}(d)) = \dim(M_{0,0}(G(2, n+1), 2)) - (d+1)^2,$$
where $[C]$ is a general point of $M_{0,0}(G(2, n+1), 2)$. Hence, the fiber $pr_1^{-1}([C])$, cf. (6.1.1), is the following projective space by (6.3.1)
$$\mathbb{P}\Bigl(\ker\bigl(H^0(\mathbb{P}^n, \mathcal{O}_{\mathbb{P}^n}(d)) \to H^0(Q_C, \mathcal{O}_{Q_C}(d))\bigr)\Bigr).$$
This shows the first statement of the lemma.
To prove the space M 0,0 (F(X d ), 2) is a local complete intersection in M 0,0 (G(2, n + 1), 2), it suffices to verify the rank of E is (d + 1) 2 since M 0,0 (F(X d ), 2) is the zero locus of a section of E. It is clear that the rank of E is h 0 (C, q * (Sym d S ∨ )) for any conic q : C → G. So we can consider the case when q : C → G is a double line. Since ϕ * (S ∨ ) = O P 1 (1) ⊕ O P 1 for any line ϕ : P 1 → G, we conclude that
$$\varphi^* \mathrm{Sym}^d S^\vee = \mathrm{Sym}^d(\mathcal{O}_{\mathbb{P}^1}(1) \oplus \mathcal{O}_{\mathbb{P}^1}) = \bigoplus_{s=0}^{d} \mathcal{O}_{\mathbb{P}^1}^{\otimes s} \otimes \mathcal{O}_{\mathbb{P}^1}(1)^{\otimes(d-s)} = \bigoplus_{s=0}^{d} \mathcal{O}_{\mathbb{P}^1}(s).$$
For the double line $q$ induced by $\varphi$, i.e., $q : \mathbb{P}^1 \xrightarrow{\ \psi,\ 2:1\ } \mathbb{P}^1 \xrightarrow{\ \varphi\ } G$, we have
$$h^0(\mathbb{P}^1, q^* \mathrm{Sym}^d S^\vee) = h^0(\mathbb{P}^1, \psi^*(\varphi^* \mathrm{Sym}^d S^\vee)) = \sum_{s=0}^{d} h^0(\mathbb{P}^1, \mathcal{O}_{\mathbb{P}^1}(2s)) = \sum_{s=0}^{d} (2s+1) = (d+1)^2.$$
We have proved the lemma.
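For orientation, using the standard facts $\dim G(2, n+1) = 2(n-1)$ and $-K_{G(2,n+1)} = \mathcal{O}(n+1)$ (not stated explicitly in the text), the quantity $D$ of Lemma 6.6 can be written out:
$$D = \dim M_{0,0}(G(2, n+1), 2) - (d+1)^2 = \bigl[2(n+1) + 2(n-1) - 3\bigr] - (d+1)^2 = 4n - 3 - (d+1)^2 .$$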
Corollary 6.7. The space M 0,0 (F(X d ), 2) − L 1 is connected.
Proof. It follows from Lemma 6.6 and the fact that the codimension of L 1 in
M 0,0 (F(X d ), 2)
is at least 2.
Proposition 6.8. The space
M 0,m (F(X d ), 2)
is irreducible for X d general.
Proof. Note that the generic fibers of
M 0,m (F(X d ), 2) → M 0,m−1 (F(X d ), 2)
are irreducible curves. It suffices to prove the case m = 0 by the induction on m. It follows from Lemma 6.4 and Corollary 6.7 that the space M 0,0 (F(X d ), 2) − L 1 is irreducible for X d general. Since L 1 is not an irreducible component of M 0,0 (F(X d ), 2) for X d general by Lemma 6.2, we conclude that M 0,0 (F(X d ), 2) is irreducible for X d general.
Bend and Break, Curves on Fano Variety of Lines
In this section, we assume that X d is a smooth hypersurface of degree d in P n .
Theorem 7.1. Suppose that one of the following assumptions holds:
• $d \ge 4$ and $n \ge 3\binom{d+1}{2} - d - 4$,
• d = 3 and n ≥ 14. If the Fano variety F(X d ) is smooth, then we have
$$\mathrm{CH}_1(F(X_d))_{\mathrm{hom}} = \mathrm{CH}_1(F(X_d))_{\mathrm{alg}},$$
i.e., Griff 1 (F(X d )) = 0. In particular, CH 1 (F(X d )) hom is divisible.
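Recall that the Griffiths group here is the quotient
$$\mathrm{Griff}_1(F(X_d)) = \mathrm{CH}_1(F(X_d))_{\mathrm{hom}} \,/\, \mathrm{CH}_1(F(X_d))_{\mathrm{alg}},$$
so the displayed equality is precisely this vanishing.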
Proof. Note that $F(X_d)$ is Fano under the assumption of the theorem. It follows from [TZ14, Theorem 1.3] that the first Chow group $\mathrm{CH}_1(F(X_d))$ is generated by rational curves. Suppose that the 1-cycle $C$ is a rational curve. It is homologous to $b \cdot [\mathrm{line}]$, see Proposition 4.2. The cycle $C$ is algebraically equivalent to a sum of lines in $F(X_d)$ since $M_{0,0}(F(X_d), b)$ is connected by Proposition 7.5. We have proved the theorem.
In the rest of this section, we will show Proposition 7.5. In the following, we assume that the hypersurface X d is general and one of the assumptions of Theorem 7.1 holds. Proof. According to [Ols07], we have the Stein factorization as follows
$$X \xrightarrow{\ g\ } W \xrightarrow{\ h\ } Y, \qquad f = h \circ g,$$
where $W$ is an integral scheme, the morphism $h$ is finite and the fibers of $g$ are connected. Since $X$ is irreducible, the preimage $X_U$ of an open subset $U$ of $Y$ is irreducible (in particular, it is connected). So we can shrink $Y$ to an open subset $U$ so that $h$ over $U$ is étale. The fibers $h^{-1}(u) = \{u_1, \ldots, u_s\}$ of $h$ over $u \in U$ correspond to the connected components of $f^{-1}(u)$. There is a unique marked point $P$ in $h^{-1}(u)$ associated to the connected component of $f^{-1}(u)$ containing $Z_u$. Since $X_U$ is connected, the fundamental group $\pi_1(U)$ acts on the fibers $h^{-1}(u)$ transitively and fixes the marked point $P$. It implies that the morphism $h$ is one-to-one. In particular, the general fibers of $f$ are connected. Therefore, all the fibers of $f$ are connected. Proof. Recall that $F(X_d)$ is covered by lines, see Lemma 7.2 or the proof of Lemma 6.2. For any point $u \in F(X_d)$, there is a line $l \subseteq F(X_d)$ passing through $u$. A double cover from $\mathbb{P}^1$ to $l$ gives rise to a point $[f] \in M_{0,1}(F(X_d), 2)$ such that $ev([f]) = u$. In particular, the fibers of $ev$ are non-empty. By Proposition 6.8, we can apply Lemma 7.3 to $X = M_{0,1}(F(X_d), 2)$, $Y = F(X_d)$ and $f = ev$. We take $Z = \mathrm{Im}\bigl(M_{0,2}(F(X_d), 1) \times_{F(X_d)} M_{0,1}(F(X_d), 1) \to M_{0,1}(F(X_d), 2)\bigr)$.
It follows from Lemma 5.4 and Lemma 7.2 that the fibers of ev| Z : Z → F(X d ) are connected. We have proved the lemma. Assume that f (0) = p and f (∞) = q, we consider the morphism space Mor(P 1 , F(X d ); 0 → p, ∞ → q). We claim that the fibers of ev 1 : M 0,2 (Y, b 1 ) × Y M 0,1 (Y, b 2 ) → Y are connected and intersect the locus of unions of lines. In fact, as at the beginning of the proof, for every point p ∈ Y , there are unions of lines passing through p.
By [Kol96, Chapter II], its expected dimension at the point [f ] is
$$h^0(\mathbb{P}^1, T_{F(X_d)}(-2)) - h^1(\mathbb{P}^1, T_{F(X_d)}(-2)) = -K_{F(X_d)} \cdot [f(\mathbb{P}^1)] - \dim(F(X_d)) = b\Bigl(n + 1 - \binom{d+1}{2}\Bigr) - (2n - 3 - d)$$
In particular, we only need to prove that the fibers of $ev_1$ are connected. Note that we have the following diagram
$$(7.5.1)\qquad \hat{ev}_1^{-1}(p) \subseteq M_{0,2}(Y, b_1) \xrightarrow{\;F\;} M_{0,1}(Y, b_1) \xrightarrow{\;ev_{b_1}\;} Y, \qquad ev_{b_1}^{-1}(p) \subseteq M_{0,1}(Y, b_1),$$
where $F$ is the forgetful functor forgetting the second marked point and $\hat{ev}_1 : M_{0,2}(Y, b_1) \to Y$ denotes the evaluation map. By the induction on $b$, we know $ev_{b_1}^{-1}(p)$ is connected; therefore, the evaluation fiber $\hat{ev}_1^{-1}(p)$ in the diagram (7.5.1) is connected. Since the fiber of $ev_1$ over $p$ is $ev_1^{-1}(p) = \hat{ev}_1^{-1}(p) \times_Y M_{0,1}(Y, b_2)$, we show the claim by Lemma 5.4.
In summary, we conclude that every connected component of the fiber ev −1 b (p) parameterizes some stable maps with one marked point whose images are unions of lines. These stable maps are in a common connected component of ev −1 b (p) by Proposition 5.6 (X = Y ). We have proved the proposition.
Theorem 1.2. Let $X_d$ be a hypersurface of degree $d \ge 3$ in $\mathbb{P}^n_{\mathbb{C}}$. (1) Suppose that the Fano variety $F(X_d)$ of lines is smooth. If $d \ge 4$ and $n \ge 3\binom{d+1}{2} - d - 4$, or $d = 3$ and $n \ge 14$, then $\mathrm{Griff}_1(F(X_d)) = 0$. (2) If the Fano variety $F(X_d)$ of lines is smooth and $\frac{d(2d^2+1)}{3} \le n$, then $\mathrm{CH}_2(X_d) = \mathbb{Z}$.
Lemma 3.3. Under the hypothesis as above, all the 2-planes of $X_d$ are rationally equivalent.
5.2. Deformation and Specialization. Definition 5.2. Let $G$ be a dual graph and $H$ be a non-empty subgraph. We call $G_1$ a contraction of $G$ along $H$ if
• the set of the vertices of $G_1$ is the disjoint union of the vertices that are not in $H$ and a single point $\{v_H\}$, i.e., $v(G_1) = (v(G) - v(H)) \amalg \{v_H\}$,
• the edges $e(G_1)$ are those edges of $G$ connecting two vertices that are not in $H$, together with edges from $v \in S \subseteq v(G) - v(H)$ to $v_H$, where the subset $S \subseteq v(G) - v(H)$ consists of the vertices in $G$ connecting with a vertex in $H$,
• the marked vertices $L(G_1)$ consist of the marked vertices in $v(G) - v(H)$ and the vertex $v_H$ with multiplicity $|L(H)|$,
• the degree $d_{v_H}$ of $v_H$ is $\sum_{v \in v(H)} d_v$ and the degrees of the other vertices are their degrees in $G$.
Lemma 5.4. Let $Z$ and $Y$ be Deligne-Mumford stacks over $\mathbb{C}$. The fiber product $Z \times_S Y$ of two proper morphisms $Z \to S$ and $g : Y \to S$ over $S$ is connected if $Z$ is connected and the fibers of $g$ are connected, where $S$ is a $\mathbb{C}$-scheme. Recall the evaluation map $ev_G : M_{0,1}(X, G) \to X$ (5.1.2) for a graph $G$. Lemma 5.5. Let $G$ be a dual graph with one marked point and $d_v = 1$ or $2$ for all $v \in v(G)$. Then the fibers of $ev_G$ are connected. Suppose that $G$ has two conic specializations $K_1$ and $K_2$; then $\deg(G) = \deg(K_1) = \deg(K_2)$.
Lemma 7.2. The fibers of the evaluation map $ev : M_{0,1}(F(X_d), 1) \to F(X_d)$ are connected and nonempty. Proof. The fiber $ev^{-1}([l])$ over $[l] \in F(X_d)$ parametrizes 2-planes in $X_d$ containing the line $l$. By the same argument as in Section 2 (after Proposition 2.2), we know the fiber $ev^{-1}([l])$ is the intersection of ample divisors in a projective space and of positive dimension; therefore, it is connected and nonempty.
Lemma 7.3. Suppose that the map $f : X \to Y$ is proper from an irreducible reduced Deligne-Mumford stack $X$ to an irreducible variety $Y$ over the complex numbers $\mathbb{C}$. Let $Z$ be a closed Deligne-Mumford substack of $X$. If the fibers of $f|_Z : Z \to Y$ are nonempty and connected, then the fibers of $f$ are connected.
Lemma 7.4. The fibers of the evaluation map $ev : M_{0,1}(F(X_d), 2) \to F(X_d)$ are connected and nonempty.
Proposition 7.5. With the same assumption as Theorem 7.1, for a general hypersurface $X_d$, the fibers of the evaluation map $ev : M_{0,1}(F(X_d), b) \to F(X_d)$ are connected for any $b \in \mathbb{N}^+$ and intersect the locus of the unions of lines. In particular, the space $M_{0,0}(F(X_d), b)$ is connected for any smooth $F(X_d)$. Proof. Denote the curve class $b[l] \in H_2(F(X_d), \mathbb{Z}) = \mathbb{Z}\langle [l] \rangle$ by $b$, see Proposition 4.2. Since the map $ev_1 : M_{0,1}(F(X_d), 1) \to F(X_d)$ is surjective by Lemma 7.2, there are unions of lines passing through $p$ for every point $p \in F(X_d)$. In particular, if the evaluation fibers of $ev$ are connected, then the fibers intersect the locus of unions of lines. To show the proposition, it suffices to either prove
• the fiber of $ev$ is connected, or prove
• every connected component of the evaluation fiber intersects the locus of unions of lines, and the locus of unions of lines in the evaluation fiber is in a common connected component of the evaluation fiber.
We show the statement by induction on $b$. For $b = 1$ and $2$, it is Lemma 7.2 and Lemma 7.4. Let us assume $b \ge 3$. Suppose that $f$ is a map $f : \mathbb{P}^1 \to F(X_d)$ and $f_*([\mathbb{P}^1]) = b = b[l] \in H_2(F(X_d), \mathbb{Z})$.
$\ge 2$, where the last equality follows from [Kol96, Chapter V Theorem 4.3 and Exercise 4.7]. In particular, by the bend-and-break [Kol96, Chapter II], every irreducible component of $ev$ intersects the boundary $B \subseteq M_{0,1}(F(X_d), b)$ for $b \ge 3$. Denote $F(X_d)$ by $Y$. Recall that there are natural gluing maps for $(b_1, b_2)$:
$$M_{0,2}(Y, b_1) \times_Y M_{0,1}(Y, b_2) \to M_{0,1}(Y, b).$$
Theorem 7.6. Suppose that $X_d$ is a hypersurface of degree $d$ in $\mathbb{P}^n$ with smooth Fano variety of lines. If the equality
Proof. By the main theorem of the paper [ELV97], it suffices to prove that
We claim that $\mathrm{CH}_2(X_d)_{\mathrm{tor}}$ is divisible. In fact, we have that
where $\mathrm{CH}_2(X_d)_{\mathrm{hom}}$ is the subgroup of $\mathrm{CH}_2(X_d)$ generated by 2-cycles that are homologous to zero. To prove $\mathrm{CH}_2(X_d)_{\mathrm{tor}}$ is divisible, it suffices to show $\mathrm{CH}_2(X_d)_{\mathrm{hom}}$ is divisible. Let us consider the following commutative diagram.
By Proposition 2.2, the map $AJ$ (2.1.3) is surjective. By Proposition 4.2, the map $AJ_{\mathrm{top}}$ is an isomorphism. Therefore, the map $AJ_{\mathrm{hom}}$ is surjective. Note that $\mathrm{CH}_1(F(X_d))_{\mathrm{alg}}$ is divisible. It implies that $\mathrm{CH}_2(X_d)_{\mathrm{hom}}$ is divisible since $\mathrm{CH}_1(F(X_d))_{\mathrm{hom}} = \mathrm{CH}_1(F(X_d))_{\mathrm{alg}}$ by Theorem 7.1. It follows from Proposition 3.4 and the claim that $\mathrm{CH}_2(X_d)_{\mathrm{tor}}$ is divisible that $\mathrm{CH}_2(X_d)_{\mathrm{tor}} = 0$. Hence, we prove the theorem.
Positivity in algebraic geometry. I Classical setting: line bundles and linear series Ergebnisse der Mathematik und ihrer Grenzgebiete. Robert Lazarsfeld, Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in MathematicsLazarsfeld, Robert. Positivity in algebraic geometry. I Classical setting: line bundles and linear series Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics [Results in Mathematics and Related Areas. 3rd Series. A Series of Modern Surveys in Mathematics]
Rational curves on smooth cubic hypersurfaces. Izzet Coskun, Jason Starr, Int. Math. Res. Not. IMRN. 24Coskun, Izzet and Starr, Jason. Rational curves on smooth cubic hypersurfaces. Int. Math. Res. Not. IMRN (24)4626-4641, 2009
Rational connectedness and boundedness of Fano manifolds. J Kollár, Y Miyaoka, S Mori, J. Differential Geom. 36Kollár, J., Miyaoka, Y., and Mori, S. Rational connectedness and boundedness of Fano manifolds. J. Differential Geom. 36, 3 (1992), 765-779.
On quasi algebraic closure. S Lang, Ann. of Math. 2Lang, S. On quasi algebraic closure. Ann. of Math. (2) 55 (1952), 373-390.
Rational equivalence of 0-cycles on surfaces. D Mumford, J. Math. Kyoto Univ. 9Mumford, D. Rational equivalence of 0-cycles on surfaces. J. Math. Kyoto Univ. 9 (1968), 195-204.
Semi-regularity and deRham cohomology. S Bloch, Invent. Math. 17Bloch, S. Semi-regularity and deRham cohomology. Invent. Math. 17 (1972), 51-66.
p-adic deformation of algebraic cycle classes. S Bloch, H Esnault, M Kerz, Invent. Math. 195Bloch, S., Esnault, H., and Kerz, M. p-adic deformation of algebraic cycle classes. Invent. Math. 195, 3 (2014), 673-722.
Stacks of stable maps and Gromov-Witten invariants. K Behrend, Yu Manin, Duke Math. J. 851K. Behrend and Yu. Manin. Stacks of stable maps and Gromov-Witten invariants. Duke Math. J., 85(1):1-60, 1996.
Remarks on correspondences and algebraic cycles. S Bloch, V Srinivas, Amer. J. Math. 1055S. Bloch and V. Srinivas. Remarks on correspondences and algebraic cycles. Amer. J. Math., 105(5):1235-1253, 1983.
Low degree complete intersections are rationally simply connected. A J De Jong, Jason Michael Starr, PreprintA. J. de Jong and Jason Michael Starr. Low degree complete intersections are rationally simply connected. Preprint, 2006.
Families of rationally simply connected varieties over surfaces and torsors for semisimple groups. A J De Jong, Xuhua He, Jason Michael Starr, Publ. Math. Inst. Hauteś Etudes Sci. 114A. J. de Jong, Xuhua He, and Jason Michael Starr. Families of rationally simply con- nected varieties over surfaces and torsors for semisimple groups. Publ. Math. Inst. Hauteś Etudes Sci., (114):1-85, 2011.
Higher Fano manifolds and rational surfaces. A J De Jong, Jason Starr, Duke Math. J. 1391A. J. de Jong and Jason Starr. Higher Fano manifolds and rational surfaces. Duke Math. J., 139(1):173-183, 2007.
Angelo Vistoli, Intersection theory on algebraic stacks and on their moduli spaces. Inventiones mathematicae. 97Angelo Vistoli. Intersection theory on algebraic stacks and on their moduli spaces. Inven- tiones mathematicae, October 1989, Volume 97, Issue 3, pp 613-670.
Chow groups of projective varieties of very small degree. Hélène Esnault, Marc Levine, Eckart Viehweg, Duke Math. J. 871Hélène Esnault, Marc Levine, and Eckart Viehweg. Chow groups of projective varieties of very small degree. Duke Math. J., 87(1):29-58, 1997.
Notes on stable maps and quantum cohomology. W Fulton, R Pandharipande, Algebraic geometry-Santa Cruz. Providence, RIAmer. Math. Soc62W. Fulton and R. Pandharipande. Notes on stable maps and quantum cohomology. In Algebraic geometry-Santa Cruz 1995, volume 62 of Proc. Sympos. Pure Math., pages 45- 96. Amer. Math. Soc., Providence, RI, 1997.
Rational curves on hypersurfaces. Eric Riedl, HarvardD ThesisEric Riedl. Rational curves on hypersurfaces. In ph.D Thesis, Harvard. 2015.
Fano varieties of Low-degree smooth Hypersurfaces and Unirationality. Alex Waldron, HarvardBachelor ThesisAlex Waldron. Fano varieties of Low-degree smooth Hypersurfaces and Unirationality. In Bachelor Thesis, Harvard. 2008.
William Fulton. Intersection theory, volume 2 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics. Springer-Verlag, Berlin, second edition, 1998.
Mark Goresky and Robert MacPherson. Stratified Morse theory, volume 14 of Ergebnisse der Mathematik und ihrer Grenzgebiete (3) [Results in Mathematics and Related Areas (3)]. Springer-Verlag, Berlin, 1988.
Lefschetz theorems for singular varieties. A Helmut, Hamm, Singularities, Part 1. Arcata, Calif; Providence, RIAmer. Math. Soc40Helmut A. Hamm. Lefschetz theorems for singular varieties. In Singularities, Part 1 (Arcata, Calif., 1981), volume 40 of Proc. Sympos. Pure Math., pages 547-557. Amer. Math. Soc., Providence, RI, 1983.
Rational curves on hypersurfaces of low degree. Joe Harris, Mike Roth, Jason Starr, J. Reine Angew. Math. 571Joe Harris, Mike Roth, and Jason Starr. Rational curves on hypersurfaces of low degree. J. Reine Angew. Math., 571:73-106, 2004.
Rational curves on hypersurfaces of low degree. Joe Harris, Jason Starr, II. Compos. Math. 1411Joe Harris and Jason Starr. Rational curves on hypersurfaces of low degree. II. Compos. Math., 141(1):35-92, 2005.
Families of rationally connected varieties. T Graber, J Harris, J Starr, J. Amer. Math. Soc. 16electronicGraber, T., Harris, J., and Starr, J. Families of rationally connected varieties. J. Amer. Math. Soc. 16, 1 (2003), 57-67 (electronic).
János Kollár. Rational curves on algebraic varieties, volume 32 of Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge. A Series of Modern Surveys in Mathematics. Springer-Verlag, Berlin, 1996.
The connectedness of the moduli space of maps to homogeneous spaces. B Kim, R Pandharipande, Symplectic geometry and mirror symmetry. Seoul; River Edge, NJB. Kim and R. Pandharipande. The connectedness of the moduli space of maps to homo- geneous spaces. In Symplectic geometry and mirror symmetry (Seoul, 2000), pages 187-201. World Sci. Publ., River Edge, NJ, 2001.
Sheaves on Artin stacks. Martin Olsson, J. Reine Angew. Math. 603Martin Olsson. Sheaves on Artin stacks. J. Reine Angew. Math., 603:55-112, 2007.
One-cycles on rationally connected varieties. Zhiyu Tian, Hong R Zong, Compos. Math. 1503Zhiyu Tian and Hong R. Zong. One-cycles on rationally connected varieties. Compos. Math., 150(3):396-408, 2014.
On the space of conics on complete intersections. R Hong, Zong, Commun. Math. Stat. 21Hong R. Zong. On the space of conics on complete intersections. Commun. Math. Stat., 2(1):33-45, 2014.
Department of Mathematics, Washington University in St. Louis, St. Louis, MO 63130. E-mail address: [email protected]
| [] |
[
"A NEW CONCEPT FOR SAFEGUARDING AND LABELING OF LONG-TERM STORED WASTE AND ITS PLACE IN THE SCOPE OF EXISTING TAGGING TECHNIQUES",
"A NEW CONCEPT FOR SAFEGUARDING AND LABELING OF LONG-TERM STORED WASTE AND ITS PLACE IN THE SCOPE OF EXISTING TAGGING TECHNIQUES"
] | [
"Dina Chernikova \nDepartment of Applied Physics\nChalmers University of Technology\nNuclear EngineeringSE-412 96GothenburgSweden\n",
"Kåre Axell \nDepartment of Applied Physics\nChalmers University of Technology\nNuclear EngineeringSE-412 96GothenburgSweden\n\nSwedish Radiation Safety Authority\nSE-171 16StockholmSweden\n"
] | [
"Department of Applied Physics\nChalmers University of Technology\nNuclear EngineeringSE-412 96GothenburgSweden",
"Department of Applied Physics\nChalmers University of Technology\nNuclear EngineeringSE-412 96GothenburgSweden",
"Swedish Radiation Safety Authority\nSE-171 16StockholmSweden"
] | [
"Workshop -Scanning the Horizon: Novel Techniques and Methods for Safeguards, International Atomic Energy Agency At IAEA's Headquarters in"
] | The idea of a novel labeling method is suggested for a new way of long-term security identification, inventory tracking, prevention of falsification and theft of waste casks, copper canisters, spent fuel containers, mercury containers, waste packages and other items. The suggested concept is based on the use of a unique combination of radioisotopes with different predictable half life. As an option for applying the radioisotope tag to spent fuel safeguarding it is suggested to use a mixture of α-emitting isotopes, such as 241 Am etc., with materials that easily undergo α-induced reactions with emission of specific γ-lines. Thus, the existing problem of the disposing of smoke detectors or other devices [1] which contain radioisotopes can be addressed, indirectly solving an existing waste problem. The results of the first pilot experiments with two general designs of storage canisters, namely a steel container which corresponds to the one which is commonly used for long-term storing of mercury in Europe and USA and a copper canister, the one which is in applications for nuclear repositories, are presented. As one of the options for a new labeling method it is proposed to use a multidimensional bar code symbology and tungsten plate with ultrasound techniques. It is shown that the new radioisotope label offers several advantages in the scope of existing tagging techniques (overview is given) and can be implemented even with low activity sources. | null | [
"https://export.arxiv.org/pdf/1402.2173v1.pdf"
] | 119,285,624 | 1402.2173 | c9fc3ef440df93b212e05333e864ac04fe99e7db |
A NEW CONCEPT FOR SAFEGUARDING AND LABELING OF LONG-TERM STORED WASTE AND ITS PLACE IN THE SCOPE OF EXISTING TAGGING TECHNIQUES
2014
Dina Chernikova
Department of Applied Physics
Chalmers University of Technology
Nuclear EngineeringSE-412 96GothenburgSweden
Kåre Axell
Department of Applied Physics
Chalmers University of Technology
Nuclear EngineeringSE-412 96GothenburgSweden
Swedish Radiation Safety Authority
SE-171 16StockholmSweden
A NEW CONCEPT FOR SAFEGUARDING AND LABELING OF LONG-TERM STORED WASTE AND ITS PLACE IN THE SCOPE OF EXISTING TAGGING TECHNIQUES
Workshop -Scanning the Horizon: Novel Techniques and Methods for Safeguards, International Atomic Energy Agency At IAEA's Headquarters in
Vienna, Austria, 2014. Keywords: long-term stored waste, tag, label, isotopes, multidimensional barcode
The idea of a novel labeling method is suggested for a new way of long-term security identification, inventory tracking, prevention of falsification and theft of waste casks, copper canisters, spent fuel containers, mercury containers, waste packages and other items. The suggested concept is based on the use of a unique combination of radioisotopes with different predictable half life. As an option for applying the radioisotope tag to spent fuel safeguarding it is suggested to use a mixture of α-emitting isotopes, such as 241 Am etc., with materials that easily undergo α-induced reactions with emission of specific γ-lines. Thus, the existing problem of the disposing of smoke detectors or other devices [1] which contain radioisotopes can be addressed, indirectly solving an existing waste problem. The results of the first pilot experiments with two general designs of storage canisters, namely a steel container which corresponds to the one which is commonly used for long-term storing of mercury in Europe and USA and a copper canister, the one which is in applications for nuclear repositories, are presented. As one of the options for a new labeling method it is proposed to use a multidimensional bar code symbology and tungsten plate with ultrasound techniques. It is shown that the new radioisotope label offers several advantages in the scope of existing tagging techniques (overview is given) and can be implemented even with low activity sources.
Introduction
"…mercury, alpha waste, high level waste (HLW), etc." -all these notions are related to the category of so-called long-term stored waste that are aimed at the disposition in geological repository [2]. There are number of classifications that exist for long-term stored waste. Generally, waste is separated into two groups: not radioactive, e.g. mercury waste, and radioactive that is coming from various parts of the nuclear fuel cycle, medical, industrial and research activities. However, there are uniting factors for all of them -long term management issues. One of these issues is related to sealing and containment verification technologies that can meet all needs for maintaining continuity of knowledge of waste in containers [3]. Various countries address this in different ways. For example, in US 10 CFR 60.135 regulations for HLW package design a unique identification is considered as one of the specific acceptance criteria:
"…(4) Unique identification. A label or other means of identification shall be provided for each waste package. The identification shall not impair the integrity of the waste package and shall be applied in such a way that the information shall be legible at least to the end of the period of retrievability. Each waste package identification shall be consistent with the waste package's permanent written records" [4].
Similar criteria, i.e. "Qualitative acceptance criteria for radioactive wastes to be disposed of in deep geological formations" are discussed in Nirex Report (United Kingdom, UK):
"…Unique identification. Criteria. Each waste package for emplacement in a repository should be marked with a unique identification. Additional criteria.
Records should be kept at different locations, nationally and internationally.
Records should include information on location, chemical and physical properties of the waste; repository design and the information used for final safety assessment' [5]. Thus, there is a noticeable trend towards the implementation of a unique labeling of spent fuel waste canisters which are aimed at the emplacement in a repository.
Setting up requirements for the ideal tagging system
While choosing a particular type of tag it is necessary to consider a number of important parameters. There were a few attempts to systematize criteria for the selection of a specific tag, for example, based on: purpose of tag, type of the container, robustness, reliability, ease of application, effectiveness, interface with other safeguards and security elements, cost etc. [6].
However, in connection with a long-term (hundreds of years) stored item, such as nuclear waste, spent fuel or mercury containers, one can consolidate these requirements into five main points ("intuitive requirements"), i.e. the ideal tag must provide:
1. Environmental safety (avoid corrosion effects of e.g. copper canisters). Thus, an ideal identification tag meets all the challenges of the international initiative on a holistic Safety, Security and Safeguards ('3S') concept. Therefore, hereafter we will consider the suitability of the currently existing technologies and new approach to these "intuitive requirements".
Overview of existing methods
The conventional tagging techniques include etching characters, affixing identification plates, welding, etc. However, when considering an application for long-term storage of waste canisters they have a number of gaps in the factors of environmental safety, security and long operation time. Other disadvantages of the traditional labeling technology are described in [7]. Modern labeling techniques ( Figure 1) may partly solve these problems and be useful for meeting the goals of a unique labeling system compatible with the record keeping of the storage or repository. Among the modern labeling techniques are radio frequency tagging systems, electronic tags, ultrasonic systems [8] and RPT (Reflective Particle Tags) [9], SANAs (SERS-Active Nanoparticle Aggregates; SERS: Surface Enhanced Raman Scattering) [10], etc. The main disadvantages of these techniques are analyzed in [11]. Hereafter we only give a short overview of them in the light of the previously defined "intuitive requirements".
Radio frequency systems (RF) consist of a memory chip, an antenna, and a transmitter/receiver system and therefore overcome problems related to printing or etching characters on the side of the container. RF devices can be active, passive and semi-passive, as shown in Figure 2. Active tags contain a small internal power source to communicate, store and process large amounts of information in the chip. A power source is usually a lithium battery lasting less than 5 years. This makes them unsuitable for use in long-term storages. Passive tags have no battery. In order to provide power and data to the chip, they use the current in the loop antenna which is induced by the interrogating RF signal. Thus, they receive power from the reader's antenna. The main problems encountered with both active and passive RF devices is related to interference of the metallization layer with the RF signal, locating methods and low transmission range. Ultrasonic tagging is based on the assumption of the uniqueness of the welding area of the cask. Thus, it assumes that in the process of ultrasonic scanning one can obtain a unique fingerprint for each stored container. However, it is difficult to explicitly evaluate performance of UIT in terms of environmental safety, long operating time and security. According to results of tests performed in [12] UIT methods suffer from problems with unknown long-term signature stability and material sensitivity, problems with repeatability of the signature and influence by the human factor.
Reflective Particle Tags (RPT) have been proposed by Sandia National Laboratories (SNL) in 1992. The tag represents the transparent adhesive matrix with encapsulated reflective particles. Although this system would be good enough to provide the identification for non-nuclear long-stored waste it will be difficult to apply it to casks containing radioactive material due to the difficulties connected to the reader system (a number of lights which induce the reflection in the tag) and presence of gamma background outside the cask walls. The characteristics of the reflective particle tag regarding long operating times, a large and unique tag memory and security can not be evaluated explicitly due to the present research stage of the technology. Results of tests performed in [12] indicate that the main drawbacks of RPT are related to the image degradation, inconsistent calibrations, occasional reader head instability and false rejection rate caused by corrosion. SANAs technology has unique strengths suited to a number of applications. However, due to the similar nature to RPT, it can suffer from the same problems as RPT techniques.
Accordingly, there is a recognized need for a labeling system which lasts at least from ten to a few hundred years (time factor), at the same time enables fully unique identification of the canister contents in a manner consistent with the permanent records of the storage or repository (information factor), has a high level of security, i.e. a low risk of falsification or error (security factor), and makes it possible to avoid the corrosion effects on canisters induced by the traditional tagging methods (environmental factor).
Proposed approach and its application to the nuclear waste containers
The main idea of the proposed method consists of using a unique combination of radioisotopes with different, predictable lifetimes and a long operating time, wherein the unique combination of radioisotopes comprises a mixture of two or more radioisotopes [13]. Radioisotopes have unique inherent properties, such as a long half-life (hundreds of years), different penetration properties and energy characteristics (lines in the spectrum emitted by the radioisotopes). These properties make them extremely attractive for use in tagging of long-term stored items, since they automatically provide environmental safety (the radioisotopic tag can be placed inside the canister), a non-contact reader system and a long operating time. The combination of radioisotopes should be chosen independently for each cask/waste container.
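As a rough illustration of this idea (a sketch added here with made-up activities and with half-lives only loosely modelled on real isotopes; it is not part of the original proposal), the predicted signature of a two-isotope tag at any later inspection time follows from simple exponential decay:

import math

# Hypothetical two-isotope tag: initial activities (kBq) and half-lives (years).
# The numbers below are illustrative placeholders only.
TAG = {
    "isotope_A": {"activity_kBq": 40.0, "half_life_yr": 432.0},  # long-lived, Am-241-like
    "isotope_B": {"activity_kBq": 25.0, "half_life_yr": 30.0},   # shorter-lived, Cs-137-like
}

def activity(initial_kBq, half_life_yr, t_yr):
    """A(t) = A0 * exp(-ln(2) * t / T_half)."""
    return initial_kBq * math.exp(-math.log(2.0) * t_yr / half_life_yr)

def signature(tag, t_yr):
    """Predicted activity of every isotope in the tag after t_yr years."""
    return {name: activity(p["activity_kBq"], p["half_life_yr"], t_yr)
            for name, p in tag.items()}

for t in (0, 50, 100, 300):
    sig = signature(TAG, t)
    ratio = sig["isotope_A"] / sig["isotope_B"]
    print(t, {k: round(v, 2) for k, v in sig.items()}, "A/B ratio:", round(ratio, 2))

Because each isotope decays at its own known rate, the measured set of activities (or line intensities) can be compared with the recorded combination at any later verification, which is exactly the property the proposed label relies on.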
The majority of the background gamma rays in spent fuel originate from activation and fission products, e.g. 137 Cs (662 keV (0.9) γ-line), 134 Cs (569 keV (0.15), 605 keV (0.98), 796 keV (0.85), 802 keV (0.09), 1039 keV (0.01), 1168 keV (0.02) and 1365 keV (0.03) γ-lines), 144 Pr (697 keV (0.0148), 1489 keV (0.003) and 2185 keV (0.008) γ-lines), 154 Eu (723 keV (0.19), 873 keV (0.12), 996 keV (0.1), 1005 keV (0.17), 1275 keV (0.36) and 1595 keV (0.03) γ-lines) and 106 Ru (512 keV (0.21), 622 keV (0.1), 1051 keV (0.02), 1128 keV (0.004) and 1357 keV (0.006) γ-lines). For a fuel cooled for a short period of time (less than four years), the high energy gamma lines, e.g. the 2185 keV gamma line from 144 Pr, will be possible to measure. However, when the fuel is sent to an encapsulation plant after a number of years of cooling, 137m Ba, the daughter nuclide of 137 Cs, will be the main gamma emitter. Thus, if the radioisotope tag has high energy signatures, there will be no problem with the radiation background coming from the fuel. The simplest version of the conventional radioisotope tag may just include specific radioisotopes which emit γ-rays with energies higher than 1 MeV. However, access to these isotopes can be restricted or their cost might be rather high. Therefore, we suggest using the following version of the radioisotope tag based on α-emitting isotopes such as 241 Am, for example, in a mixture with one of the materials described in [14], as shown in Figure 3. This version of the tag will serve the needs of long-term tagging of nuclear waste, and it can also solve the existing problem of disposing of smoke detectors or other devices (surge voltage protection devices, electronic valves etc. [1]) which nowadays contain radioisotopes such as 241 Am. It should be mentioned that according to the Report of the EU commission [1], as of the balance sheet date of year 2001, Ireland manufactured two million ionization chamber smoke detectors per year (the activity of each detector is 33.3-37 kBq), while, for example, Sweden imported 700 000 of them. Thus, the price of the radioisotope tag based on this type of waste will be partly covered by the costs of waste disposal. At the same time this method will open the possibility of recycling nuclear waste of this type.
One of the attractive options for the realization of the radioisotope tag is the implementation of a multidimensional bar code symbology, for example in the way shown in Figures 3-4. This version of the tag is suitable for a situation where the information about the item is not known in advance and should be encoded in the unique radioisotope tag directly at the encapsulation plant. This realization of the tag includes two components: a radioisotope plate prepared by authorities at a facility licensed for this work and a foil which is printed at the encapsulation plant. This concept is appealing in that, while printing the tag, a specific color could be used. As an example, the base of the tag can be made of the α-emitting isotope 241 Am. Afterwards, the foil which contains the bar code could be printed with colors based on materials such as 9 Be, 23 Na, 19 F, 10;11 B, 30 P, 7;6 Li etc. The printed foil must be placed in close contact with the α-emitting base of the tag. These materials have a high cross-section for α-induced reactions, such as (α,n), (α,p) etc. Thus, the bar code might be read by detecting α-induced gamma rays. The energy of the gamma rays depends on the material which is chosen for printing the tag.
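A minimal sketch of how such a read-out could be organized in software is given below; it only illustrates the decoding logic, and the gamma-line energies in the lookup table are placeholder values, not nuclear data taken from the paper:

# Sketch: decoding a radioisotope "bar code" from measured gamma peaks.
# The energies below are PLACEHOLDERS for illustration; a real reader would
# use evaluated nuclear data for the chosen (alpha,n)/(alpha,p) reactions.
PLACEHOLDER_LINES_keV = {
    "Be-9": 4438.0,
    "Na-23": 1636.0,
    "F-19": 1275.0,
    "B-10": 3089.0,
}

def material_from_peak(peak_keV, tolerance_keV=5.0):
    """Return the tag material whose reference line matches the measured peak."""
    for material, line in PLACEHOLDER_LINES_keV.items():
        if abs(peak_keV - line) <= tolerance_keV:
            return material
    return None

def decode_barcode(measured_peaks_keV):
    """Map measured peak energies to the sequence of 'colors' (materials)."""
    return [material_from_peak(p) for p in measured_peaks_keV]

print(decode_barcode([4439.2, 1274.1, 3090.5]))  # -> ['Be-9', 'F-19', 'B-10']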
Conclusions
We have described a new concept of a long-term security identification tags/labels which is based on the use of unique combinations of radioisotopes. In the case of application of this concept to spent fuel safeguarding it is suggested to use a mixture of α-emitting isotopes, such as 241 Am with materials that easily undergo α-induced reactions with emission of specific γ-lines. Thus, if the radioisotope tag will have a high energy signature, there will be no problem with radiation background coming from the fuel. Moreover, this version of the radioisotope tag allows to solve the existing problem of the disposing of smoke detectors or other devices [1] which contain radioisotopes, such as 241 Am, thus, indirectly providing a recycling of nuclear waste. As an economical advantage, it should be mentioned that the price of the radioisotope tag based on this type of waste will be partly covered by the costs of the waste disposing. As an attractive option for a new labeling method we proposed the possibility to realize a multidimensional bar code symbology. The new radioisotope label offers several advantages, as compared to the currently used tagging methods. It provides environmental safety, non-contact reader system, long operating time, large and unique tag memory, security technique against falsification of data, errors/multiple verification, recycling option for ionization chamber smoke detectors and other devices. Further details can be found in [14].
Figure 1. Existing tagging.
Figure 3. Tag (versions 1 and 2).
Acknowledgement. This talk was supported by the Knut och Alice Wallenbergs Stiftelse "Jubileumsanslaget" 2013 C 2013/109, Sweden.
A Review of Consumer Products Containing Radioactive Substances in the European Union. Final Report of the Study Contract for the European Commission, the European Commission. A Review of Consumer Products Containing Radioactive Substances in the European Union. Final Report of the Study Contract for the European Commission, the European Commission, 2007.
Radiation, People and the Environment. A D Wrixon, I Barraclough, M J Clark, the IAEA Division of Public Information. J. Ford54A.D. Wrixon, I. Barraclough, M.J. Clark, Radiation, People and the Environment, in: J. Ford (Ed.), the IAEA Division of Public Information, IAEA, 2004, p.54.
IAEA Department of Safeguards, Development and Implementation Support Programme for Nuclear Verification 2012-2013, D&IS Programme 2012-2013, International Atomic Energy Agency, IAEA, 2013, pp. 122-130.
CFR 60.135 Criteria for the waste package and its components. US Nuclear Regulatory Commission, NRCUSAGovernment Printing OfficeCFR 60.135 Criteria for the waste package and its components, in: US Nuclear Reg- ulatory Commission, NRC (Ed.), Government Printing Office, USA, 2013.
Nirex Limited, Initial Consideration of Waste Acceptance Criteria for the Long-term Management of Certain UK Radioactive Wastes and Potential Wastes, UK, 2004.
Arms Control and Nonproliferation Technologies, Tags and seals for controlling nuclear materials, Department of Energy/Office of Intelligence and National Security, DOE/AN/ACNT-93A, 1993.
A Labeling of the Spent Fuel Waste Package, the 3-rd International High Level Radioactive Waste Management Conference. W G Culbreth, A K Chagari, W.G. Culbreth, A.K. Chagari, A Labeling of the Spent Fuel Waste Package, the 3-rd International High Level Radioactive Waste Management Conference, 1992.
Intrinsic fingerprints inspection for identification of dry fuel storage casks. D Demyanuk, M Kroening, A Lider, D Chumak, D Sednev, Unpublished results, to appear in ESARDA BulletinD. Demyanuk, M. Kroening, A. Lider, D. Chumak, D. Sednev, Intrinsic fingerprints inspection for identification of dry fuel storage casks, Unpublished results, to appear in ESARDA Bulletin, (2013).
J C Bennett, D M Day, S A Mitchell, Summary of the CSRI Workshop on Combinatorial Algebraic Topology (CAT): Software, Applications and Algorithms. SANDIA REPORTJ.C. Bennett, D.M. Day, S.A. Mitchell, Summary of the CSRI Workshop on Combi- natorial Algebraic Topology (CAT): Software, Applications and Algorithms, in: SANDIA REPORT (Ed.), SAND2009-7777, 2009.
L O Brown, S K Doorn, P B Merkle, SERS-active nanoparticles as a barcoding technology for tags and seals, INMM 51st Annual Meeting. Baltimore, MD, USAL.O. Brown, S.K. Doorn, P.B. Merkle, SERS-active nanoparticles as a barcoding technology for tags and seals, INMM 51st Annual Meeting, Baltimore, MD, USA, 2010.
Review of Advanced Techniques for Waste Canister Labeling, Progress report on the DOE waste package project at University of Nevada. W G Culbreth, B G Bhagi, A Kanjerla, W.G. Culbreth, B.G. Bhagi, A. Kanjerla, Review of Advanced Techniques for Waste Canister Labeling, Progress report on the DOE waste package project at University of Nevada, 1993.
Tagging RDT&E Volume 1-Technology Assessments and Development Reports. B J Hill, 87119Albuquerque International Albuquerque, NMB.J. Hill, e. al., Tagging RDT&E Volume 1-Technology Assessments and Devel- opment Reports, Albuquerque International Albuquerque, NM 87119, 1994.
D. Chernikova, K. Axell, A method of radioisotope labeling of waste (Patent pending), 1330036-3, April, 2013.
Dina Chernikova, Kåre Axell: A unique radioisotopic label as a new concept for safeguarding and tagging of long-term stored items and waste. Unpublished results, http://arxiv.org/pdf/1312.1985v1.pdf, 12/2013.
| [] |
[
"Quantum Entanglement of Locally Excited States in Maxwell Theory",
"Quantum Entanglement of Locally Excited States in Maxwell Theory"
] | [
"Masahiro Nozaki \nKadanoff Center for Theoretical Physics\nUniversity of Chicago\n60637ChicagoIllinoisUSA\n",
"Naoki Watamura \nDepartment of Physics\nNagoya University\n464-8602NagoyaJapan\n"
] | [
"Kadanoff Center for Theoretical Physics\nUniversity of Chicago\n60637ChicagoIllinoisUSA",
"Department of Physics\nNagoya University\n464-8602NagoyaJapan"
] | [] | In 4 dimensional Maxwell gauge theory, we study the changes of (Renyi) entanglement entropy which are defined by subtracting the entropy for the ground state from the one for the locally excited states generated by acting with the gauge invariant local operators on the state. The changes for the operators which we consider in this paper | 10.1007/jhep12(2016)069 | null | 119,187,756 | 1606.07076 | cf6821561400ceb0c3ce9eed7e4e64f5bc569e7e |
Quantum Entanglement of Locally Excited States in Maxwell Theory
2 Jul 2016
Masahiro Nozaki
Kadanoff Center for Theoretical Physics
University of Chicago
60637ChicagoIllinoisUSA
Naoki Watamura
Department of Physics
Nagoya University
464-8602NagoyaJapan
Quantum Entanglement of Locally Excited States in Maxwell Theory
2 Jul 2016
In 4 dimensional Maxwell gauge theory, we study the changes of (Renyi) entanglement entropy which are defined by subtracting the entropy for the ground state from the one for the locally excited states generated by acting with the gauge invariant local operators on the state. The changes for the operators which we consider in this paper
Introduction and Summary
Quantum entanglement significantly distinguishes quantum states from classical states. It can characterize conformal field theories [1,2,3] and topological phases [4,5,6]. In Gauge/Gravity correspondence [7,8,9], the structure of quantum entanglement in quantum field theories (QFTs) living on the boundary is expected to be related to the gravity in the bulk [10,11]. There are a lot of works done to reveal how the structure of quantum entanglement on the boundary corresponds to the geometry in the bulk [12,13,14,15,16,17,18]. Therefore it is important to uncover the fundamental features which quantum entanglement possesses. (Rényi) entanglement entropy is one of the useful quantities to investigate them.
However the definition of (Rényi) entanglement entropy in gauge theories has subtleties [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34]. In gauge theories, physical states have to be gauge invariant. They obey constraints which guarantee their gauge invariance. These constraints make it difficult to divide the Hilbert space into subsystems A and B because the physical degrees of freedom in A depend on the freedom in B due to the constraints. Their boundary is ∂A. Then the definition of (Rényi) entanglement entropy needs a precise method of dividing the Hilbert space and defining the reduced density matrix $\rho_A$, which is given by tracing out the degrees of freedom in B,
$$\rho_A = \mathrm{tr}_B\, \rho. \qquad (1.1)$$
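For reference, the Rényi entanglement entropies used throughout are the standard ones built from this reduced density matrix,
$$S^{(n)}_A = \frac{1}{1-n} \log \mathrm{tr}_A\, \rho_A^{\,n}, \qquad S_A = \lim_{n \to 1} S^{(n)}_A = -\mathrm{tr}_A\, \rho_A \log \rho_A .$$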
On the other hand, the entropy in QFTs depends on a UV cutoff (ultraviolet cutoff) δ because by definition it has a UV divergence. It is given by a series expansion in conformal field theories. The physical degrees of freedom around ∂A have a significant effect on the terms which depend on δ. The method of dividing the Hilbert space is expected to affect the degrees of freedom around ∂A in a direct fashion. In the present paper, we study the changes of (Rényi) entanglement entropy $\Delta S^{(n)}_A$, which is defined by subtracting the entropy for the ground state from the one for the locally excited state, which is defined by acting with a local operator on the ground state. Here we assume that the operator is located far from ∂A. We will explain it more in the next section. As in [35,36,37,38,39,40,41], these changes do not possess the UV divergence. More precisely, they measure how the local operator changes the structure of quantum entanglement. Therefore they are expected to avoid the subtleties which (Rényi) entanglement entropy has.
In this paper we study $\Delta S^{(n)}_A$ in 4d Maxwell gauge theory, which is a free CFT [42]. The previous works [37,38,39] show the time evolution of $\Delta S^{(n)}_A$ can be interpreted in terms of relativistic propagation of entangled quasi-particles which are created by local operators. In the free theories, the late-time value of $\Delta S^{(n)}_A$ is given by a constant, which depends on the operators. It comes from the quantum entanglement between quasi-particles. As in [39], the late-time entanglement structure depends on the kind of quasi-particles. The authors in [40] show that in specific 2d CFTs, it is related to the quantum dimension of the operator which acts on the ground state. The authors in [41,43] have shown that in holographic theories the late-time value of $\Delta S^{(n)}_A$ logarithmically increases, similarly to the behavior of entanglement entropy for local quenches [44,45]. $\Delta S^{(n)}_A$ in the finite temperature system was investigated by the authors in [46]. There are many works done to study how the fundamental properties of $\Delta S^{(n)}_A$ depend on theories and on the quasi-particles created by the local operator. Then we study how the structure of quantum entanglement is changed by gauge invariant local operators such as electric and magnetic fields. In particular, we study how the late-time structure of quantum entanglement depends on them. More precisely, we study how quasi-particles have an effect on the structure. We also study whether $\Delta S^{(n)}_A$ for gauge invariant locally excited states reflects electric-magnetic duality.
Summary
Here we briefly summarize our results in this work. We study how gauge invariant local operators change the structure of quantum entanglement by measuring the time evolution of $\Delta S^{(n)}_A$ for various such operators. We also study whether $\Delta S^{(n)}_A$ is invariant under the electric-magnetic duality transformation.
Electric-Magnetic Duality
As will be explained later, ∆S^{(n)}_A reflects electric-magnetic duality: the entropy for B_i is equal to that for E_i, the electric field along the same direction as the magnetic field. ∆S^{(n)}_A for the electric and magnetic fields along the direction perpendicular to the entangling surface increases more slowly than for fields along directions parallel to the surface. However, there is no difference between the effect of the electromagnetic field and that of a scalar field on the entanglement structure at late time. 1
Composite Operators
If a composite operator such as B² acts on the ground state, it leads to the same late-time structure of quantum entanglement as a specific scalar operator. The late-time value of ∆S^{(n)}_A for it can then be interpreted in terms of quasi-particles created by that scalar operator. However, ∆S^{(n)}_A for some specific operators (e.g., E_2B_3) constructed of both electric and magnetic fields can be interpreted in terms of electromagnetic, rather than scalar, quasi-particles, as explained in section 4. Here B_3 (E_3) and B_2 (E_2) are the magnetic (electric) fields along directions parallel to ∂A, as we will explain later.
Late-time Algebra
We interpret the late-time values of ∆S^{(n)}_A in terms of electromagnetic quasi-particles created by the electromagnetic field, and derive a late-time algebra which they obey. There are commutation relations between particles of the same kind of field. As we will mention later, there are also additional relations between E_2 (E_3) and B_3 (B_2), which are parallel to the entangling surface. These relations make the effect of electromagnetic fields on the late-time structure of quantum entanglement different from that of scalar fields.
Organization
This paper is organized as follows. In section 2, we explain locally excited states and how to compute ∆S^{(n)}_A with the replica trick. We study the time evolution and late-time value of ∆S^{(n)}_A for various gauge-invariant local operators in section 3. We interpret the late-time value of ∆S^{(n)}_A in terms of entangled quasi-particles in section 4, and study how they affect the late-time structure of quantum entanglement. We finish with the conclusion and future problems; the details of the propagators are included in the appendices.
How to compute Excesses of (Rényi) Entanglement Entropy
By measuring the excess of (Rényi) entanglement entropy ∆S^{(n)}_A, we study how local gauge-invariant operators change the structure of quantum entanglement in the 4d Maxwell gauge theory:
S = −(1/4) ∫ d^4x F_{µν} F_{ρσ} g^{µρ} g^{νσ},  (2.1)
where F_{µν} = ∂_µ A_ν − ∂_ν A_µ and g_{µν} = diag(−1, 1, 1, 1).
In this section, we explain the definition of locally excited states and how to compute ∆S^{(n)}_A with the replica method.
The Definition of Locally Excited States
The locally excited state is defined by acting with a gauge invariant local operator O such as F µν on the ground state:
|Ψ⟩ = N O(−t, −l, x) |0⟩. (2.2)
where N is a normalization constant and |0⟩ is the gauge-invariant ground state. As in Figure 1, O is located at t = −t, x_1 = −l and x = (x_2, x_3).
Subsystem
As in the previous works [37,38,39,40], the subsystem A is defined by (t = 0, x_1 ≥ 0) as in Figure 1. In free theories ∆S^{(n)}_A approaches a constant, which comes from the quantum entanglement between entangled quasi-particles. In this paper we would like to study how this constant depends on gauge-invariant operators; therefore the region in Figure 1 is chosen as A.
Excesses of (Rényi) Entanglement Entropy
Here we explain the definition of ∆S^{(n)}_A in more detail. The (Rényi) entanglement entropy for the ground state is a static quantity, which does not depend on time. We define the excess of (Rényi) entanglement entropy ∆S^{(n)}_A by subtracting the entropy for the ground state from that for the locally excited state,

∆S^{(n)}_A = S^{(n),EX}_A − S^{(n),G}_A,  (2.3)

where S^{(n),EX}_A and S^{(n),G}_A are the (Rényi) entanglement entropies for the excited state in (2.2) and the ground state |0⟩, respectively. In the sense that ∆S^{(n)}_A does not depend on δ, it is a "renormalized" (Rényi) entanglement entropy.
The Replica Trick
We would like to study the time evolution of ∆S^{(n)}_A in 4d Minkowski spacetime. However, we do not study the change of the entanglement structure directly in Minkowski spacetime. Instead, we compute ∆S^{(n)}_A in Euclidean space by the replica trick and then perform an analytic continuation, which we will explain later, to obtain the real-time evolution of ∆S^{(n)}_A. As in [37,38,39,40], the density matrix in Euclidean space is given by
ρ = Ñ² O(τ_e, −l, x) |0⟩⟨0| O^†(τ_l, −l, x),  (2.4)
where τ is Euclidean time. By introducing polar coordinates, (τ_{l,e}, −l) is mapped to (r_{1,2}, θ_{1,2}) as in Figure 2. In the replica trick, the (Rényi) entanglement entropies for (2.4) and for the ground state are respectively given by 2
S^{(n),EX}_A = \frac{1}{1-n} \log \left[ \frac{\int \mathcal{D}\Phi\; O^\dagger(r_1,\theta^n_1)\, O(r_2,\theta^n_2) \cdots O^\dagger(r_1,\theta^1_1)\, O(r_2,\theta^1_2)\; e^{-S_n[\Phi]}}{\left( \int \mathcal{D}\Phi\; O^\dagger(r_1,\theta^1_1)\, O(r_2,\theta^1_2)\; e^{-S_1[\Phi]} \right)^n} \right],
S^{(n),G}_A = \frac{1}{1-n} \log \left[ \frac{\int \mathcal{D}\Phi\; e^{-S_n[\Phi]}}{\left( \int \mathcal{D}\Phi\; e^{-S_1[\Phi]} \right)^n} \right],  (2.5)
where θ^k_{1,2} = θ_{1,2} + 2(k − 1)π. The actions S_n and S_1 are defined on the n-sheeted geometry Σ_n (see Figure 3) and the flat space Σ_1, respectively. By substituting the (Rényi) entanglement entropies in (2.5) into (2.3), ∆S^{(n)}_A is given by

∆S^{(n)}_A = \frac{1}{1-n} \log \left[ \frac{\langle O^\dagger(r_1,\theta^n_1)\, O(r_2,\theta^n_2) \cdots O^\dagger(r_1,\theta^1_1)\, O(r_2,\theta^1_2) \rangle_{\Sigma_n}}{\left( \langle O^\dagger(r_1,\theta^1_1)\, O(r_2,\theta^1_2) \rangle_{\Sigma_1} \right)^n} \right].  (2.6)
We only need to compute propagators on Σ_n in order to compute ∆S^{(n)}_A in free field theories. The two-point function of the gauge field A_a is defined by −⟨A_a(r, θ, x) A_b(r′, θ′, x′)⟩ = G_{ab}(r, r′, θ − θ′, x − x′).
If we choose a specific gauge 3 , their Green's functions obey the same equation of motion as that of the 4d free massless scalar field theory,
\left( \partial_r^2 + \frac{1}{r}\partial_r + \frac{1}{r^2}\partial_\theta^2 + \partial_{\mathbf{x}}^2 \right) G^a_{\ b}(r, r', \theta-\theta', \mathbf{x}-\mathbf{x}') = -\frac{\delta^a_{\ b}\, \delta(r-r')\, \delta(\theta-\theta')\, \delta^2(\mathbf{x}-\mathbf{x}')}{r},  (2.7)
where a ∈ {τ, x_1, x_2, x_3} 4 .
The solution of the equation is given by
G_{ab}(r, r', \theta-\theta', \mathbf{x}-\mathbf{x}') = \frac{\delta_{ab}\, \sinh\frac{t_0}{n}}{8 n \pi^2 r r' \sinh t_0 \left( \cosh\frac{t_0}{n} - \cos\frac{\theta-\theta'}{n} \right)},  (2.8)
where t 0 is defined by
\cosh t_0 = \frac{r^2 + r'^2 + (x_2 - x_2')^2 + (x_3 - x_3')^2}{2 r r'}.  (2.9)
(2.8) has been obtained by the authors in [37,38,39,41,48,49].
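As a quick consistency check (not part of the original derivation), one can verify symbolically that for n = 1 the propagator (2.8)–(2.9) reduces to the standard flat-space result 1/(4π²|x − x′|²), with |x − x′| the Euclidean distance written in the polar coordinates (r, θ, x_2, x_3). A minimal sketch in Python/SymPy:

```python
import sympy as sp

r, rp, th, thp, x2, x2p, x3, x3p = sp.symbols("r rp theta thetap x2 x2p x3 x3p", positive=True)
n = 1  # n = 1 corresponds to the flat space Sigma_1

# Eq. (2.9): cosh(t0) in terms of the polar coordinates
cosh_t0 = (r**2 + rp**2 + (x2 - x2p)**2 + (x3 - x3p)**2) / (2 * r * rp)
t0 = sp.acosh(cosh_t0)

# Eq. (2.8) with delta_ab stripped off
G = sp.sinh(t0 / n) / (
    8 * n * sp.pi**2 * r * rp * sp.sinh(t0)
    * (sp.cosh(t0 / n) - sp.cos((th - thp) / n)))

# Flat-space 4d massless propagator 1/(4 pi^2 d^2), d = Euclidean distance
d2 = (r**2 + rp**2 - 2 * r * rp * sp.cos(th - thp)
      + (x2 - x2p)**2 + (x3 - x3p)**2)
G_flat = 1 / (4 * sp.pi**2 * d2)

print(sp.simplify(G - G_flat))  # expected output: 0
```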
Analytic Continuation
After computing the Green's functions on Σ_n in Euclidean space, we perform the following analytic continuation,

A_τ = iA_t,  ∂_τ = i∂_t,  τ_l = ε − it,  τ_e = −ε − it,  (2.10)

where ε is a smearing parameter introduced to keep the norm of the excited state finite. The analytically continued Green's functions depend on ε. We are interested in the behavior of ∆S^{(n)}_A in the limit ε → 0; the leading behavior of the Green's functions (∼ O(1/ε⁴)) is summarized in Appendix A.
Excesses of (Rényi) Entanglement Entropy
In this section, we study the time evolution and the late-time value of ∆S^{(n)}_A for the following classes of local operators. (i) Only one electric or magnetic field, O = E_i or B_i, acts on the ground state. The time evolution of ∆S^{(n)}_A depends on which operator acts on the ground state. If the electromagnetic fields are changed by F_{µν} → F̃_{µν} = (1/2) ε_{µνρσ} F^{ρσ}, ∆S^{(n)}_A does not change; here ε_{µνρσ} is an antisymmetric tensor. The late-time value of ∆S^{(n)}_A does not depend on the operator and can be interpreted in terms of the quasi-particle created by a scalar operator φ.
(ii) Composite operators constructed of only electric or only magnetic fields, such as E² and B², act on the ground state. ∆S^{(n)}_A for E² is equal to that for B², so ∆S^{(n)}_A for these operators is invariant under the electric-magnetic duality transformation; there is no difference between their effects on the (Rényi) entanglement entropy. Their late-time values can be interpreted in terms of quasi-particles created by the operator Σ_{a=1}^{3} (φ^a)² constructed of massless free scalar fields, where a labels the kinds of fields. (iii) Local operators constructed of both electric and magnetic fields, such as E_1² + B_1², B_3E_2 and B · E, act on the ground state. The late-time value of ∆S^{(n)}_A shows that there is a significant difference between the effect of E_1 (B_1) and E_{2,3} (B_{2,3}) on the late-time entanglement structure. Here E_1 (B_1) is the electric (magnetic) field along the direction perpendicular to the entangling surface, while E_{2,3} (B_{2,3}) are the electric (magnetic) fields along directions parallel to the entangling surface. As will be explained in the next section, the difference comes from the commutation relations between the electromagnetic quasi-particles created by E_2 (E_3) and those created by B_3 (B_2).
3.1 O = E i or B i
Here the locally excited states are defined by acting with only E_i or B_i on the ground state. ∆S^{(n)}_A is given by (2.6) in the replica method with Euclidean signature. After performing the analytic continuation in (2.10) and taking the limit ε → 0, the time evolution is as follows. ∆S^{(n)}_A vanishes before t = l (> 0), but after t = l it increases. The details of the time evolution are summarized in Table 1. In the late-time limit (0 < l ≪ t), it is given by

∆S^{(n)}_A ∼ log 2.  (3.1)

This late-time value is the same as that for φ in a free massless scalar field theory in any spacetime dimension. It can be interpreted as the (Rényi) entanglement entropy of a maximally entangled state in a 2-qubit system. Therefore it does not show any difference between the effect of electromagnetic fields and that of a free scalar field on the late-time structure of quantum entanglement. However, the time evolution of ∆S^{(n)}_A does depend on the local operator which acts on the ground state, as shown in Figure 4. Even at t ∼ l, the time evolution of ∆S^{(n)}_A depends on which operator acts on the ground state. If O = B_{2,3} or E_{2,3} acts on the ground state, ∆S^{(n≥2)}_A at t ∼ l is given by
∆S^{(n)}_A ∼ \frac{n}{n-1} \frac{3(t-l)}{4l} + \cdots,  (3.2)
where · · · denotes contributions of higher order, O(((t−l)/l)²). ∆S^{(n≥2)}_A for O = E_1 or B_1 at t ∼ l is given by

∆S^{(n)}_A ∼ \frac{n}{n-1} \frac{3(t-l)^2}{4l^2} + \cdots.  (3.3)
Here · · · denotes contributions of higher order, O(((t−l)/l)³). This time evolution shows that quasi-particles created by E_{2,3} (B_{2,3}) enter the region A faster than those created by E_1 (B_1). This behavior seems natural since particles created by E_1 (B_1) do not propagate along the direction parallel to x_1. ∆S^{(n)}_A in Table 1 shows that these entropies are invariant under the transformation

F_{µν} → (1/2) ε_{µνρσ} F^{ρσ},  (3.4)

where ε_{µνρσ} is an antisymmetric tensor. Under the transformation in (3.4), the local operator E_i (B_i) changes to −B_i (E_i). Therefore this duality maps a locally excited state to a different one.
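As a sanity check (not taken from the original text), the late-time limit (3.1) and the small-(t − l) behaviors (3.2)–(3.3) can be reproduced directly from the closed-form expressions of Table 1 with a few lines of SymPy; only the expressions appearing in Table 1 are used below.

```python
import sympy as sp

n, t, l = sp.symbols("n t l", positive=True)

# Table 1, O = E_1 or B_1: the two terms inside the logarithm
p1 = -(l - 2*t)*(l + t)**2 / (4*t**3)
q1 = (t - l)**2*(l + 2*t) / (4*t**3)
S_E1 = sp.log(p1**n + q1**n) / (1 - n)

# Table 1, O = E_{2,3} or B_{2,3}
p2 = (-l**3 - 3*l*t**2 + 4*t**3) / (8*t**3)
q2 = (l**3 + 3*l*t**2 + 4*t**3) / (8*t**3)
S_E23 = sp.log(p2**n + q2**n) / (1 - n)

# Late-time limit (0 < l << t): both approach log 2, eq. (3.1), here checked for n = 2
print(sp.limit(S_E1.subs(n, 2), t, sp.oo))    # log(2)
print(sp.limit(S_E23.subs(n, 2), t, sp.oo))   # log(2)

# Behaviour near t = l for n = 2: leading terms of eqs. (3.3) and (3.2)
eps = sp.symbols("eps", positive=True)
print(sp.series(S_E1.subs([(n, 2), (t, l + eps)]), eps, 0, 3))   # ~ 3*eps**2/(2*l**2)
print(sp.series(S_E23.subs([(n, 2), (t, l + eps)]), eps, 0, 2))  # ~ 3*eps/(2*l)
```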
Composite Operators Constructed of Only Electric or Magnetic Fields
The excited states considered here are generated by acting with the following operators: (a) E_iE_j or B_iB_j, (b) E² or B². We study the time evolution and the late-time value of ∆S^{(n)}_A for them.
Table 1: ∆S^{(n)}_A for O in the region 0 < l ≤ t.
  O           | ∆S^{(n)}_A
  E_1 or B_1  | (1/(1−n)) log[ (−(l+t)²(l−2t)/(4t³))^n + ((t−l)²(l+2t)/(4t³))^n ]
  E_{2,3}     | (1/(1−n)) log[ ((−l³−3lt²+4t³)/(8t³))^n + ((l³+3lt²+4t³)/(8t³))^n ]

3.2.1 O = E_iE_j or B_iB_j
Here we study ∆S^{(n)}_A for the excited states generated by acting with E_iE_j or B_iB_j on the ground state. When i = j, the late-time value of ∆S^{(n)}_A is the same as that of ∆S^{(n)}_A for φ² in the massless free scalar field theory, as in [37,38]. Therefore the late-time value of ∆S^{(n)}_A (the (Rényi) entanglement entropy of the operator) can be interpreted in terms of entangled quasi-particles created by φ². When i ≠ j, the late-time value of ∆S^{(n)}_A can be interpreted as the maximum (Rényi) entanglement entropy for ρ_A = (1/4) diag(1, 1, 1, 1). It is the same as ∆S^{(n)}_A for the excited state given by acting with the operator φ^a φ^b on the ground state, where a, b denote the kinds of scalar fields with a ≠ b; they are two different massless free scalar fields. The time evolution of ∆S^{(n)}_A for these operators is summarized in Table 2; the table also shows that ∆S^{(n)}_A for E_iE_j is equal to that for B_iB_j.
Table 2: ∆S^{(n)}_A = (1/(1−n)) log[(N_1 + N_2 + N_3)/D_1] for O in the region 0 < l < t.
  O                                                 | D_1            | N_1             | N_2             | N_3
  E_1² or B_1²                                      | (2 (1/16)²)^n  | (2 f_1²)^n      | (2 f_2²)^n      | 2^{2n} (f_1 f_2)^n
  E_{2,3}² or B_{2,3}²                              | (2 (1/16)²)^n  | (2 f_3²)^n      | (2 f_4²)^n      | 2^{2n} (f_3 f_4)^n
  E_1E_{2,3}, E_1B_{2,3}, B_1E_{2,3} or B_1B_{2,3}  | (1/16)^{2n}    | (f_1)^n (f_3)^n | (f_2)^n (f_4)^n | (f_2)^n (f_3)^n + (f_1)^n (f_4)^n
  E_2E_3 or B_2B_3                                  | (1/16)^{2n}    | (f_3)^{2n}      | (f_4)^{2n}      | 2 (f_3)^n (f_4)^n
  E_2B_2 or E_3B_3                                  | (1/16)^{2n}    | (f_3)^{2n}      | (f_4)^{2n}      | 2 (f_3)^n (f_4)^n
  E_1B_1                                            | (1/16)^{2n}    | (f_1)^{2n}      | (f_2)^{2n}      | 2 (f_1)^n (f_2)^n
Here f_1 = −(l−2t)(l+t)²/(64t³), f_2 = (l+2t)(l−t)²/(64t³), f_3 = (l³+3lt²+4t³)/(128t³), f_4 = (−l³−3lt²+4t³)/(128t³).

3.2.2 O = E² or B²
In order to study whether E_1 acts on the late-time structure of quantum entanglement differently from E_{2,3} 5 , we study the late-time value of ∆S^{(n)}_A for the locally excited state

|Ψ⟩ = N E²(−t, −l, x) |0⟩.  (3.7)
Before studying its late-time value, we comment on its time evolution. Before t = l, ∆S^{(n)}_A for the state in (3.7) vanishes, and after t = l it increases. Its time evolution is summarized in Table 3. After t = l, as in Table 3, ∆S^{(n)}_A is given by

∆S^{(n)}_A = \frac{1}{1-n} \log \frac{N_1 + N_2 + P_1 + P_2 + P_3}{D},  (3.8)
where D, N_i and P_i are defined in Table 3. If we take the late-time limit (0 < l ≪ t), the ratios of P_i and N_i to D reduce to constants [50],

(N_1/D)^{1/n} = (N_2/D)^{1/n} = 1/4,  (P_1/D)^{1/n} = (P_2/D)^{1/n} = (P_3/D)^{1/n} = 1/6,  (3.9)

where we ignore the higher-order contribution O(l/t). Remarkably, the sum of these ratios is 1. Therefore, if an effective reduced density matrix ρ^e_A is defined by

∆S^{(n)}_A = \frac{1}{1-n} \log [\mathrm{tr}_A (ρ^e_A)^n],

the late-time value of ∆S^{(n)}_A can be interpreted in terms of quasi-particles created by (φ_1)² + (φ_2)² + (φ_3)², which is constructed of three kinds of free scalar fields. Therefore, there is no difference between the effect of E_1 and that of E_{2,3} on the late-time structure of quantum entanglement. As in Table 2, ∆S^{(n)}_A for E² is equal to that for B²; therefore they are invariant under the electric-magnetic duality.
Table 3: ∆S^{(n)}_A = (1/(1−n)) log[(N_1 + N_2 + P_1 + P_2 + P_3)/D_1] for O in the region 0 < l < t.
  O                     | D_1                                  | N_1                        | N_2                        | P_1                        | P_2                        | P_3
  B² or E²              | (2·3·(1/16)²)^n                      | (2f_1² + 2·2f_3²)^n        | (2f_2² + 2·2f_4²)^n        | 2^{2n} f_1^n f_2^n         | 2^{2n} f_3^n f_4^n         | 2^{2n} f_3^n f_4^n
  B_1² + E_1²           | (2·2·(1/16)²)^n                      | (2·2f_1²)^n                | (2·2f_2²)^n                | 2^{2n} f_1^n f_2^n         | 2^{2n} f_1^n f_2^n         | 0
  E_2B_3 or E_3B_2      | (2·2·(1/(4·8))²)^n                   | (2(g_1)² + 2(g_3)²)^n      | (2(g_2)² + 2(g_4)²)^n      | 2^{2n} (g_2)^n (g_3)^n     | 2^{2n} (g_1)^n (g_4)^n     | 0
  F_{µν}F^{µν} or B·E   | (2·2·(1/16)² + 2·4²·(1/(4·8))²)^n    | (2·2f_1² + 2·4² g_1 g_3)^n | (2·2f_2² + 2·4² g_2 g_4)^n | 2·2^{2n} (f_1)^n (f_2)^n   | 2·4^{2n} (g_1)^n (g_2)^n   | 2·4^{2n} (g_3)^n (g_4)^n
  B_2E_3 − B_3E_2       | (4·2·(1/(4·8))²)^n                   | (2·2g_3² + 2·2g_1²)^n      | (2·2g_4² + 2·2g_2²)^n      | 2·2^{2n} (g_3)^n (g_2)^n   | 2·2^{2n} (g_1)^n (g_4)^n   | 0
Here g_1 = (l+t)³/(4·64t³), g_2 = (t−l)³/(4·64t³), g_3 = (l+t)(l²−4lt+7t²)/(4·64t³), g_4 = (t−l)(l²+4lt+7t²)/(4·64t³), and the f_i are as in Table 2.
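As a cross-check (not part of the original text), the late-time ratios in (3.9) follow directly from the "B² or E²" row of Table 3; a minimal SymPy verification, here for n = 2:

```python
import sympy as sp

n, t, l = sp.symbols("n t l", positive=True)

# Propagator combinations f_i appearing in Tables 2 and 3
f1 = -(l - 2*t)*(l + t)**2 / (64*t**3)
f2 = (l + 2*t)*(l - t)**2 / (64*t**3)
f3 = (l**3 + 3*l*t**2 + 4*t**3) / (128*t**3)
f4 = (-l**3 - 3*l*t**2 + 4*t**3) / (128*t**3)

# "B^2 or E^2" row of Table 3
D  = (2*3*sp.Rational(1, 16)**2)**n
N1 = (2*f1**2 + 4*f3**2)**n
P1 = 2**(2*n) * f1**n * f2**n

# Late-time limit (l/t -> 0) of the ratios in eq. (3.9), evaluated at n = 2
print(sp.limit((N1/D).subs(n, 2), t, sp.oo))   # expected (1/4)**2 = 1/16
print(sp.limit((P1/D).subs(n, 2), t, sp.oo))   # expected (1/6)**2 = 1/36
```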
Composite Operators Constructed of Both Electric and Magnetic Fields
In the previous two subsections we studied how the entanglement structure changes at late time if either electric or magnetic fields act on the ground state. However, we have not yet uncovered how it changes at late time when both of them act on the ground state. Here we study ∆S^{(n)}_A for (a) E_1² + B_1², (b) E_iB_j, and (c) F_{µν}F^{µν} and B · E, which show that E_i and B_i act on the late-time structure of quantum entanglement differently from scalar fields such as φ^a.
E_1² + B_1²
Here, in order to study whether there are differences between the effects of electric and magnetic fields on the late-time structure of quantum entanglement, we study ∆S^{(n)}_A for the following excited state:

|Ψ⟩ = N (E_1² + B_1²)(−t, −l, x) |0⟩.  (3.13)
Before investigating the late-time value of ∆S^{(n)}_A, let us study its time evolution. ∆S^{(n)}_A vanishes before t = l. After t = l, its time evolution is summarized in Table 3. If we take the late-time limit t → ∞, the late-time value of ∆S^{(n)}_A reduces to the (Rényi) entanglement entropy whose effective reduced density matrix is given by

ρ^e_A = (1/4) diag(1, 1, 1, 1).  (3.14)

Its entropies are given by

∆S^{(n)}_A = ∆S_A = ∆S^{(∞)}_A = log 4.  (3.15)
This shows that there is no difference between the effects of electric and magnetic fields on the late-time structure.
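For completeness, the following short step (not spelled out in the text) shows that (3.15) follows from (3.14) for every n:

∆S^{(n)}_A = \frac{1}{1-n} \log \mathrm{tr}_A (ρ^e_A)^n = \frac{1}{1-n} \log \left[ 4 \cdot \left(\tfrac{1}{4}\right)^n \right] = \frac{(1-n)\log 4}{1-n} = \log 4,

independently of n, which also gives the n → 1 and n → ∞ values quoted in (3.15).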
E i B j
Here let us find out how the operators constructed of both electric and magnetic fields, E_iB_j, affect the late-time structure of quantum entanglement. The late-time values of ∆S^{(n)}_A for E_iB_j, except for E_2B_3 and E_3B_2, are the same as (3.6). Their time evolution is summarized in Table 2.
On the other hand, after t = l the time evolution of ∆S^{(n)}_A for E_2B_3 or E_3B_2 is summarized in Table 3, from which we can see that it respects the electric-magnetic duality. The late-time value of ∆S^{(n)}_A is given by the (Rényi) entanglement entropy whose effective reduced density matrix is

ρ^e_A = (1/64) diag(25, 7, 7, 25).  (3.16)

It shows how different the effect of E_1 (B_1) is from that of E_{2,3} (B_{2,3}) on the structure. This value cannot be interpreted in terms of quasi-particles created by scalar fields such as φ^aφ^b. As we will explain later, in the entangled quasi-particle interpretation there is a commutation relation between the quasi-particle created by E_2 (B_2) and that created by B_3 (E_3).
B · E, F_{µν}F^{µν} and B_2E_3 − B_3E_2
We finally study ∆S^{(n)}_A for more complicated operators: B · E, F_{µν}F^{µν} and B_2E_3 − B_3E_2. Before t = l, ∆S^{(n)}_A for them vanishes, but after t = l it increases. The details are summarized in Table 3 6 . It shows that ∆S^{(n)}_A for B · E is the same as that for F_{µν}F^{µν}. The effective reduced density matrices are given by

ρ^e_A = (1/192) diag(30, 30, 16, 16, 49, 49, 1, 1)  for O = B · E (F_{µν}F^{µν}),

ρ^e_A = (1/128) diag(50, 50, 7, 7, 7, 7)  for O = B_2E_3 − B_3E_2.

As we will explain in the next section, they can be reproduced by using a late-time algebra which the electromagnetic quasi-particles obey.
A Late-time Algebra
We now interpret the late-time value of ∆S^{(n)}_A in terms of quasi-particles. More precisely, we interpret the effective reduced density matrix in (3.10) in terms of quasi-particles. The effective reduced density matrix for the excited state generated by a composite operator O(−t, −l, x) is defined by

∆S^{(n)}_A = \frac{1}{1-n} \log [\mathrm{tr}_A (ρ^e_A)^n] = \frac{1}{1-n} \log \mathrm{tr}_A \left[ \left( Ñ^2\, O |0⟩⟨0| O^† \right)^n \right],  (4.1)
where Ñ is a normalization constant. The operator O is assumed to be constructed of electric and magnetic fields 7 . As in [37,38,39,41], these fields can be decomposed into left-moving and right-moving electromagnetic quasi-particles as follows,

E_i = E^{L†}_i + E^{R†}_i + E^L_i + E^R_i,  B_i = B^{L†}_i + B^{R†}_i + B^L_i + B^R_i,  (4.2)
where, since we take x_1 ≥ 0 as A in this paper, left-moving and right-moving quasi-particles correspond to particles included in B and A at late time, respectively. The ground state for them is defined by

E^{L,R}_i |0⟩_{L,R} = B^{L,R}_i |0⟩_{L,R} = 0,  |0⟩ = |0⟩_L ⊗ |0⟩_R.  (4.3)
The late-time algebra which the quasi-particles obey is given by

[E^{L,R}_i, E^{L,R†}_j] = Cδ_{ij},  [B^{L,R}_i, B^{L,R†}_j] = Cδ_{ij},  (4.4)

which is obtained so that the results of the replica trick are reproduced. Here C is a real number 8 . In the gauge theory, in addition to (4.2), we need the following commutation relations between different kinds of particles:

[E^{L,R}_3, B^{L,R†}_2] = X_{R,L},  [E^{L,R}_2, B^{L,R†}_3] = Y_{R,L},  (4.5)

where X_{R,L} and Y_{R,L} are given by

X_R = −X_L = Y_L = −Y_R,  X²_{R,L} = Y²_{R,L} = (9/16) C².  (4.6)
Here X_{R,L} and Y_{R,L} are real numbers 9 . The commutation relations between electric (magnetic) quasi-particles are determined so that the effective density matrices computed from (4.4) are consistent with (3.11). The relation for the quasi-particles created by E_1 should be the same as that for B_1, so that the effective density matrix in (4.1) reproduces ∆S^{(n)}_A for the matrix in (3.14). The relation between quasi-particles generated by E_2 (E_3) and those generated by B_3 (B_2) reproduces the matrix in (3.16). We also checked that ∆S^{(n)}_A for O = B · E (F_{µν}F^{µν}) and B_3E_2 − B_2E_3 is reproduced by using the commutation relations in (4.4) and (4.5).

7 Here ρ^e_A is not the same as the reduced density matrix for the locally excited state: it is the one for an "effective" state Ñ O|0⟩, which is different from the "original" locally excited state. 8 A redefinition of the quasi-particles can absorb the constant C. 9 We find a correspondence between propagators and commutation relations: the commutators can be defined by the late-time limit of the propagators. We will discuss the details of this correspondence in [50]. When we use this correspondence, X_L = −(3/4)C.
The relations in (4.5) show that the effect of fields along the direction perpendicular to ∂A on the late-time structure is significantly different from that of fields along directions parallel to ∂A. They also make the effect of electromagnetic fields on the late-time structure of quantum entanglement different from that of free scalar fields.
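To illustrate how the late-time algebra works, the following NumPy sketch builds one concrete finite-dimensional realization of (4.2)–(4.6) and checks that the state E_2B_3|0⟩ reproduces the spectrum of the effective reduced density matrix (3.16). The representation of E and B in terms of two independent truncated oscillators per chirality sector is our own choice and is not taken from the paper; only the commutation relations (4.4)–(4.6) are used as input.

```python
import numpy as np

def ladder(nmax):
    """Truncated bosonic annihilation operator on a Fock space of dimension nmax+1."""
    return np.diag(np.sqrt(np.arange(1, nmax + 1)), k=1)

nmax = 3          # occupation cutoff per oscillator (enough for a 2-particle state)
C, Y = 1.0, 0.75  # [E, E†] = [B, B†] = C and Y² = (9/16) C², as in (4.4) and (4.6)

a = ladder(nmax)
idm = np.eye(nmax + 1)

def modes(sign):
    """E- and B-type annihilation operators of one chirality sector, built from two
    independent oscillators so that [E,E†]=[B,B†]=C and [E,B†]=sign*Y."""
    A = np.kron(a, idm)
    Bosc = np.kron(idm, a)
    E = np.sqrt(C) * A
    B = (sign * Y / np.sqrt(C)) * A + np.sqrt(C - Y**2 / C) * Bosc
    return E, B

EL, BL = modes(-1.0)   # left movers:  Y_L = -Y_R, eq. (4.6)
ER, BR = modes(+1.0)   # right movers

dim = (nmax + 1) ** 2  # dimension of one chirality sector
I = np.eye(dim)
# Field operators on H_L ⊗ H_R, following the decomposition (4.2)
E2 = np.kron(EL + EL.T, I) + np.kron(I, ER + ER.T)
B3 = np.kron(BL + BL.T, I) + np.kron(I, BR + BR.T)

vac = np.zeros(dim**2); vac[0] = 1.0
psi = E2 @ B3 @ vac
psi /= np.linalg.norm(psi)

rho = np.outer(psi, psi)
# Partial trace over the left-moving sector (the part contained in B at late time)
rho_R = np.trace(rho.reshape(dim, dim, dim, dim), axis1=0, axis2=2)
evals = np.sort(np.linalg.eigvalsh(rho_R))[::-1]
print(np.round(evals[:4] * 64, 6))   # expected: [25. 25. 7. 7.], matching (3.16)
```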
Conclusion and Future Problems
We studied how gauge-invariant operators such as E_i, B_i and the composite operators constructed of them change the structure of quantum entanglement, by studying ∆S^{(n)}_A for locally excited states created by gauge-invariant local operators. ∆S^{(n)}_A, which we studied in this paper, reflects the electric-magnetic duality: it is invariant under the transformation E_i → −B_i and B_i → E_i, where E and B are the electric and magnetic fields, respectively.

If only E_i or B_i acts on the ground state, then without taking the late-time limit the time evolution of ∆S^{(n)}_A depends on which operator acts on the ground state. Due to the duality, ∆S^{(n)}_A for E_i is equal to that for B_i. Around t = l, ∆S^{(n)}_A for E_1 (B_1) increases more slowly than that for E_{2,3} (B_{2,3}). However, these operators cannot show the difference between the effects of electromagnetic fields and those of scalar fields on the late-time structure, because the late-time values of ∆S^{(n)}_A for them can be interpreted in terms of quasi-particles created by scalar fields.

On the other hand, the late-time values of ∆S^{(n)}_A for specific operators constructed of both electric and magnetic fields cannot be interpreted in terms of quasi-particles of scalar fields. They show that there are differences between the effects of electromagnetic fields and those of scalar fields on the late-time structure of quantum entanglement. If their late-time values are interpreted in terms of the electromagnetic quasi-particles in (4.2), there are commutation relations between E_2 (E_3) and B_3 (B_2), which make the effect of the electromagnetic field significantly different from that of scalar fields on the late-time structure. The effect of E_1 and B_1 on the late-time structure is different from that of E_{2,3} and B_{2,3}.
Future Problems
We finish with comments on a few future problems:
• In this paper we only consider the 4d Maxwell gauge theory, which has conformal symmetry. The D (≠ 4) dimensional Maxwell gauge theory is not a CFT. Therefore it is interesting to generalize the analysis in the 4d Maxwell theory to theories in general dimensions.
• We expect that the structure of the late-time algebra depends on the spacetime dimension D. Then it is also interesting to study it in general dimensions.
A Green Functions
The relation between E i , B i and field strengths which are defined in Euclidean space is given by
E i = −iF τ i , B 1 = −F 23 , B 2 = F 13 , B 3 = −F 12 . (A.1)
The analytically continued Green's functions are defined by
⟨E_1(θ)E_1(θ′)⟩ = F_{E1E1}(θ − θ′), ⟨E_2(θ)E_2(θ′)⟩ = ⟨E_3(θ)E_3(θ′)⟩ = F_{E2E2}(θ − θ′), ⟨B_1(θ)B_1(θ′)⟩ = F_{B1B1}(θ − θ′), ⟨B_2(θ)B_2(θ′)⟩ = ⟨B_3(θ)B_3(θ′)⟩ = F_{B2B2}(θ − θ′), ⟨E_2(θ)B_3(θ′)⟩ = F_{E2B3}(θ − θ′), ⟨B_3(θ)E_2(θ′)⟩ = F_{B3E2}(θ − θ′), ⟨E_3(θ)B_2(θ′)⟩ = F_{E3B2}(θ − θ′), ⟨B_2(θ)E_3(θ′)⟩ = F_{B2E3}(θ − θ′).  (A.2)
If the limit ε → 0 is taken, their leading terms for n = 1 are given by

F_{E1E1}(θ_1 − θ_2) ∼ 1/(16π²ε⁴), F_{E2E2}(θ_1 − θ_2) ∼ 1/(16π²ε⁴), F_{B1B1}(θ_1 − θ_2) ∼ 1/(16π²ε⁴), F_{B2B2}(θ_1 − θ_2) ∼ 1/(16π²ε⁴).  (A.3)
Those for arbitrary n in the region 0 < t < l are given by (A.3).
Those for arbitrary n in the region 0 < l ≤ t are given by
F_{E1E1}(θ_1 − θ_2) = F_{E1E1}(θ_2 − θ_1) ∼ −(l − 2t)(l + t)²/(64π²t³ε⁴),
F_{E2E2}(θ_1 − θ_2) = F_{E2E2}(θ_2 − θ_1) ∼ (l³ + 3lt² + 4t³)/(128π²t³ε⁴),
F_{B1B1}(θ_1 − θ_2) = F_{B1B1}(θ_2 − θ_1) ∼ −(l − 2t)(l + t)²/(64π²t³ε⁴),
F_{B2B2}(θ_1 − θ_2) = F_{B2B2}(θ_2 − θ_1) ∼ (l³ + 3lt² + 4t³)/(128π²t³ε⁴),
F_{E2B3}(θ_1 − θ_2) = F_{E2B3}(θ_2 − θ_1) ∼ 3(t − l)(l + t)/(128π²t²ε⁴),
F_{B3E2}(θ_1 − θ_2) = F_{B3E2}(θ_2 − θ_1) ∼ 3(t − l)(l + t)/(128π²t²ε⁴),
F_{E3B2}(θ_1 − θ_2) = F_{E3B2}(θ_2 − θ_1) ∼ 3(l − t)(l + t)/(128π²t²ε⁴),
F_{B2E3}(θ_1 − θ_2) = F_{B2E3}(θ_2 − θ_1) ∼ 3(l − t)(l + t)/(128π²t²ε⁴),
F_{E1E1}(θ_1 − θ_2 + 2π) = F_{E1E1}(θ_2 − θ_1 − 2π) = F_{E1E1}(θ_1 − θ_2 − 2(n − 1)π) = F_{E1E1}(θ_2 − θ_1 + 2(n − 1)π) ∼ (l − t)²(l + 2t)/(64π²t³ε⁴),
F_{E2E2}(θ_1 − θ_2 + 2π) = F_{E2E2}(θ_2 − θ_1 − 2π) = F_{E2E2}(θ_1 − θ_2 − 2(n − 1)π) = F_{E2E2}(θ_2 − θ_1 + 2(n − 1)π) ∼ −(l³ + 3lt² − 4t³)/(128π²t³ε⁴),
F_{B1B1}(θ_1 − θ_2 + 2π) = F_{B1B1}(θ_2 − θ_1 − 2π) = F_{B1B1}(θ_1 − θ_2 − 2(n − 1)π) = F_{B1B1}(θ_2 − θ_1 + 2(n − 1)π) ∼ (l − t)²(l + 2t)/(64π²t³ε⁴),
F_{B2B2}(θ_1 − θ_2 + 2π) = F_{B2B2}(θ_2 − θ_1 − 2π) = F_{B2B2}(θ_1 − θ_2 − 2(n − 1)π) = F_{B2B2}(θ_2 − θ_1 + 2(n − 1)π) ∼ −(l³ + 3lt² − 4t³)/(128π²t³ε⁴),
F_{E2B3}(θ_1 − θ_2 + 2π) = F_{E2B3}(θ_2 − θ_1 − 2π) = F_{E2B3}(θ_1 − θ_2 − 2(n − 1)π) = F_{E2B3}(θ_2 − θ_1 + 2(n − 1)π) ∼ 3(l − t)(l + t)/(128π²t²ε⁴),
F_{B3E2}(θ_1 − θ_2 + 2π) = F_{B3E2}(θ_2 − θ_1 − 2π) = F_{B3E2}(θ_1 − θ_2 − 2(n − 1)π) = F_{B3E2}(θ_2 − θ_1 + 2(n − 1)π) ∼ 3(l − t)(l + t)/(128π²t²ε⁴),
F_{E3B2}(θ_1 − θ_2 + 2π) = F_{E3B2}(θ_2 − θ_1 − 2π) = F_{E3B2}(θ_1 − θ_2 − 2(n − 1)π) = F_{E3B2}(θ_2 − θ_1 + 2(n − 1)π) ∼ 3(t − l)(l + t)/(128π²t²ε⁴),
F_{B2E3}(θ_1 − θ_2 + 2π) = F_{B2E3}(θ_2 − θ_1 − 2π) = F_{B2E3}(θ_1 − θ_2 − 2(n − 1)π) = F_{B2E3}(θ_2 − θ_1 + 2(n − 1)π) ∼ 3(t − l)(l + t)/(128π²t²ε⁴).
(A.4)
Figure 1: The location of the local gauge-invariant operator in Minkowski spacetime.
Figure 2: The location of the local gauge-invariant operator in Euclidean space.
Figure 3: A picture of the n-sheeted geometry Σ_n.
Figure 4: The time evolution of ∆S^{(2)}_A for E_1 (B_1) and E_{2,3} (B_{2,3}). The horizontal and vertical axes correspond to time t and ∆S^{(2)}_A, respectively. The red and blue lines correspond to ∆S^{(2)}_A for E_1 (B_1) and E_{2,3} (B_{2,3}), respectively.
1 When only one component of the electric or magnetic field acts on the ground state; we do not consider linear combinations of them in this paper.
2 The details of this computation are explained in [37,38,39,40].
3 The chosen gauge corresponds to the Feynman gauge in Minkowski spacetime.
4 G^a_b ≡ η^{ac} G_{cb}, where η^{ac} = diag(1, 1, 1, 1).
5 The effect of E_1 can be different from that of E_{2,3} on the structure since we choose (t = 0, x_1 ≥ 0) as the subsystem A.
6 ∆S^{(n)}_A is computed using the Green's functions in Appendix B.
Acknowledgments
MN thanks Tadashi Takayanagi for useful discussions and comments on this paper. MN and NW thank Pawel Caputa, Tokiro Numasawa, Shunji Matsuura and Akinori Tanaka for useful comments on this work.
The contributions of the other propagators are much smaller than those in (A.4).
B Other Bases
We introduce new bases and compute their Green's functions in the regions 0 < t < l and 0 < l ≤ t. The Green's functions ⟨O_i O_{j≠i}⟩ vanish.
Geometric and Renormalized Entropy in Conformal Field Theory. C Holzhey, F Larsen, F Wilczek, hep-th/9403108Nucl. Phys. B. 424443C. Holzhey, F. Larsen, and F. Wilczek, "Geometric and Renormalized Entropy in Con- formal Field Theory," Nucl. Phys. B 424, 443 (1994) [hep-th/9403108]
Entanglement in quantum critical phenomena. G Vidal, J I Latorre, E Rico, A Kitaev, quant-ph/0211074Phys. Rev. Lett. 90227902G. Vidal, J. I. Latorre, E. Rico, and A. Kitaev, "Entanglement in quantum critical phenomena," Phys. Rev. Lett. 90, 227902 (2003) [quant-ph/0211074]
Ground state entanglement in quantum spin chains. J I Latorre, E Rico, G Vidal, quant-ph/0304098Quant. Inf. and Comp. 448J. I. Latorre, E. Rico, and G. Vidal, "Ground state entanglement in quantum spin chains," Quant. Inf. and Comp. 4, 048 (2004) [quant-ph/0304098]
Entanglement entropy and quantum field theory. P Calabrese, J Cardy, hep-th/0405152J. Stat. Mech. 06002P. Calabrese and J. Cardy, "Entanglement entropy and quantum field theory," J. Stat. Mech. P06002 (2004) [hep-th/0405152]
Entanglement spectrum in one-dimensional systems. P Calabrese, A Lefevre, arXiv:0806.3059Phys. Rev A. 7832329cond-mat.str-elP. Calabrese, and A. Lefevre "Entanglement spectrum in one-dimensional systems," Phys. Rev A 78, 032329 (2008) [arXiv:0806.3059 [cond-mat.str-el]].
Topological entanglement entropy. A Kitaev, J Preskill, hep-th/0510092Phys. Rev. Lett. 96110404A. Kitaev and J. Preskill, "Topological entanglement entropy," Phys. Rev. Lett. 96, 110404 (2006) [hep-th/0510092].
Detecting topological order in a ground state wave function. M Levin, X.-G Wen, arXiv:cond-mat/0510613Phys. Rev. Lett. 96110405cond-mat.str-elM. Levin, X.-G. Wen "Detecting topological order in a ground state wave function" Phys. Rev. Lett., 96, 110405 (2006) arXiv:cond-mat/0510613 [cond-mat.str-el]
The Large N limit of superconformal field theories and supergravity. J M Maldacena, hep-th/9711200Adv. Theor. Math. Phys. 38231Int. J. Theor. Phys.J. M. Maldacena, "The Large N limit of superconformal field theories and supergrav- ity," Int. J. Theor. Phys. 38, 1113 (1999) [Adv. Theor. Math. Phys. 2, 231 (1998)] [hep-th/9711200].
Anti-de Sitter space and holography. E Witten, hep-th/9802150Adv. Theor. Math. Phys. 2253E. Witten, "Anti-de Sitter space and holography," Adv. Theor. Math. Phys. 2, 253 (1998) [hep-th/9802150].
Gauge theory correlators from noncritical string theory. S S Gubser, I R Klebanov, A M Polyakov, hep-th/9802109Phys. Lett. B. 428105S. S. Gubser, I. R. Klebanov and A. M. Polyakov, "Gauge theory correlators from non- critical string theory," Phys. Lett. B 428, 105 (1998) [hep-th/9802109].
Building up spacetime with quantum entanglement. M Van Raamsdonk, arXiv:1005.3035Int. J. Mod. Phys. D. 422429Gen. Rel. Grav.. hep-thM. Van Raamsdonk, "Building up spacetime with quantum entanglement," Gen. Rel. Grav. 42, 2323 (2010) [Int. J. Mod. Phys. D 19, 2429 (2010)] [arXiv:1005.3035 [hep-th]];
M Van Raamsdonk, arXiv:0907.2939Comments on quantum gravity and entanglement. hep-thM. Van Raamsdonk, "Comments on quantum gravity and entanglement," arXiv:0907.2939 [hep-th].
Aspects of Holographic Entanglement Entropy. S Ryu, T Takayanagi, hep-th/0605073JHEP. 060845S. Ryu and T. Takayanagi, "Aspects of Holographic Entanglement Entropy," JHEP 0608, 045 (2006) [hep-th/0605073];
Holographic derivation of entanglement entropy from AdS/CFT. S Ryu, T Takayanagi, hep-th/0603001Phys. Rev. Lett. 96181602S. Ryu and T. Takayanagi, "Holographic deriva- tion of entanglement entropy from AdS/CFT," Phys. Rev. Lett. 96, 181602 (2006) [hep-th/0603001].
Entanglement Renormalization and Holography. B Swingle, arXiv:0905.1317Phys. Rev. D. 8665007cond-mat.str-elB. Swingle, "Entanglement Renormalization and Holography," Phys. Rev. D 86, 065007 (2012) [arXiv:0905.1317 [cond-mat.str-el]].
Constructing holographic spacetimes using entanglement renormalization. B Swingle, arXiv:1209.3304hep-thB. Swingle, "Constructing holographic spacetimes using entanglement renormalization," arXiv:1209.3304 [hep-th].
Holographic Geometry of Entanglement Renormalization in Quantum Field Theories. M Nozaki, S Ryu, T Takayanagi, arXiv:1208.3469JHEP. 1210193hepthM. Nozaki, S. Ryu and T. Takayanagi, "Holographic Geometry of Entanglement Renor- malization in Quantum Field Theories," JHEP 1210, 193 (2012) [arXiv:1208.3469 [hep- th]];
Gravitation from Entanglement in Holographic CFTs. T Faulkner, M Guica, T Hartman, R C Myers, M Van Raamsdonk, arXiv:1312.7856JHEP. 140351hep-thT. Faulkner, M. Guica, T. Hartman, R. C. Myers and M. Van Raamsdonk, "Gravitation from Entanglement in Holographic CFTs," JHEP 1403 (2014) 051 [arXiv:1312.7856 [hep-th]];
Gravitational dynamics from entanglement 'thermodynamics. N Lashkari, M B Mcdermott, M Van Raamsdonk, 10.1007/JHEP04(2014)195arXiv:1308.3716JHEP. 1404195hep-thN. Lashkari, M. B. McDermott and M. Van Raamsdonk, "Gravitational dynamics from entanglement 'thermodynamics'," JHEP 1404, 195 (2014) doi:10.1007/JHEP04(2014)195 [arXiv:1308.3716 [hep-th]].
Bulk Locality and Quantum Error Correction in AdS/CFT. A Almheiri, X Dong, D Harlow, arXiv:1411.7041JHEP. 1504163hep-thA. Almheiri, X. Dong and D. Harlow, "Bulk Locality and Quantum Error Correction in AdS/CFT," JHEP 1504, 163 (2015) [arXiv:1411.7041 [hep-th]];
Bulk Locality and Quantum Error Correction in AdS/CFT. A Almheiri, X Dong, D Harlow, arXiv:1411.7041JHEP. 1504163hep-thA. Almheiri, X. Dong and D. Harlow, "Bulk Locality and Quantum Error Correction in AdS/CFT," JHEP 1504, 163 (2015) [arXiv:1411.7041 [hep-th]];
Bulk Reconstruction in the Entanglement Wedge in AdS/CFT. X Dong, D Harlow, A C Wall, arXiv:1601.05416hep-thX. Dong, D. Harlow and A. C. Wall, "Bulk Reconstruction in the Entanglement Wedge in AdS/CFT," arXiv:1601.05416 [hep-th];
Surface/State Correspondence as a Generalized Holography. M Miyaji, T Takayanagi, arXiv:1503.03542PTEP. 20157hep-thM. Miyaji and T. Takayanagi, "Surface/State Correspondence as a Generalized Hologra- phy," PTEP 2015, no. 7, 073B03 (2015) [arXiv:1503.03542 [hep-th]];
Continuous Multiscale Entanglement Renormalization Ansatz as Holographic Surface-State Correspondence. M Miyaji, T Numasawa, N Shiba, T Takayanagi, K Watanabe, 10.1103/PhysRevLett.115.171602arXiv:1506.01353Phys. Rev. Lett. 11517171602hepthM. Miyaji, T. Numa- sawa, N. Shiba, T. Takayanagi and K. Watanabe, "Continuous Multiscale Entanglement Renormalization Ansatz as Holographic Surface-State Correspondence," Phys. Rev. Lett. 115, no. 17, 171602 (2015) doi:10.1103/PhysRevLett.115.171602 [arXiv:1506.01353 [hep- th]];
Boundary States as Holographic Duals of Trivial Spacetimes. M Miyaji, S Ryu, T Takayanagi, X Wen, 10.1007/JHEP05(2015)152arXiv:1412.6226JHEP. 1505152hep-thM. Miyaji, S. Ryu, T. Takayanagi and X. Wen, "Boundary States as Holographic Duals of Trivial Spacetimes," JHEP 1505, 152 (2015) doi:10.1007/JHEP05(2015)152 [arXiv:1412.6226 [hep-th]]
Bulk Locality and Boundary Creating Operators. Y Nakayama, H Ooguri, arXiv:1507.04130JHEP. 1510114hep-thY. Nakayama and H. Ooguri, "Bulk Locality and Boundary Creating Operators," JHEP 1510, 114 (2015) [arXiv:1507.04130 [hep-th]].
Black hole entropy and entropy of entanglement. D N Kabat, hep-th/9503016Nucl. Phys. B. 453281D. N. Kabat, "Black hole entropy and entropy of entanglement," Nucl. Phys. B 453, 281 (1995) [hep-th/9503016].
Entanglement and Thermal Entropy of Gauge Fields. C Eling, Y Oz, S Theisen, arXiv:1308.4964JHEP. 131119hep-thC. Eling, Y. Oz and S. Theisen, "Entanglement and Thermal Entropy of Gauge Fields," JHEP 1311, 019 (2013) [arXiv:1308.4964 [hep-th]].
Do gauge fields really contribute negatively to black hole entropy?. W Donnelly, A C Wall, arXiv:1206.5831Phys. Rev. D. 8664042hep-thW. Donnelly and A. C. Wall, "Do gauge fields really contribute negatively to black hole entropy?," Phys. Rev. D 86, 064042 (2012) [arXiv:1206.5831 [hep-th]].
Entanglement entropy of electromagnetic edge modes. W Donnelly, A C Wall, arXiv:1412.1895Phys. Rev. Lett. 11411111603hep-thW. Donnelly and A. C. Wall, "Entanglement entropy of electromagnetic edge modes," Phys. Rev. Lett. 114, no. 11, 111603 (2015) [arXiv:1412.1895 [hep-th]].
Physics at the entangling surface. K Ohmori, Y Tachikawa, arXiv:1406.4167J. Stat. Mech. 15044010hep-thK. Ohmori and Y. Tachikawa, "Physics at the entangling surface," J. Stat. Mech. 1504, P04010 (2015) [arXiv:1406.4167 [hep-th]].
D Radicevic, arXiv:1404.1391Notes on Entanglement in Abelian Gauge Theories. hep-thD. Radicevic, "Notes on Entanglement in Abelian Gauge Theories," arXiv:1404.1391 [hep-th].
D Radicevic, arXiv:1509.08478Entanglement in Weakly Coupled Lattice Gauge Theories. hep-thD. Radicevic, "Entanglement in Weakly Coupled Lattice Gauge Theories," arXiv:1509.08478 [hep-th].
Remarks on entanglement entropy for gauge fields. H Casini, M Huerta, J A , arXiv:1312.1183Phys. Rev. D. 89885012hep-thH. Casini, M. Huerta and J. A. Rosabal, "Remarks on entanglement entropy for gauge fields," Phys. Rev. D 89, no. 8, 085012 (2014) [arXiv:1312.1183 [hep-th]].
Entanglement entropy for a Maxwell field: Numerical calculation on a two dimensional lattice. H Casini, M Huerta, arXiv:1406.2991Phys. Rev. D. 9010105013hep-thH. Casini and M. Huerta, "Entanglement entropy for a Maxwell field: Numerical calcula- tion on a two dimensional lattice," Phys. Rev. D 90, no. 10, 105013 (2014) [arXiv:1406.2991 [hep-th]].
On the definition of entanglement entropy in lattice gauge theories. S Aoki, T Iritani, M Nozaki, T Numasawa, N Shiba, H Tasaki, arXiv:1502.04267JHEP. 1506187hep-thS. Aoki, T. Iritani, M. Nozaki, T. Numasawa, N. Shiba and H. Tasaki, "On the definition of entanglement entropy in lattice gauge theories," JHEP 1506, 187 (2015) [arXiv:1502.04267 [hep-th]].
Decomposition of entanglement entropy in lattice gauge theory. W Donnelly, arXiv:1109.0036Phys. Rev. D. 8585004hep-thW. Donnelly, "Decomposition of entanglement entropy in lattice gauge theory," Phys. Rev. D 85, 085004 (2012) [arXiv:1109.0036 [hep-th]].
Entanglement entropy and nonabelian gauge symmetry. W Donnelly, arXiv:1406.7304Class. Quant. Grav. 3121214003hep-thW. Donnelly, "Entanglement entropy and nonabelian gauge symmetry," Class. Quant. Grav. 31, no. 21, 214003 (2014) [arXiv:1406.7304 [hep-th]].
On The Entanglement Entropy For Gauge Theories. S Ghosh, R M Soni, S P Trivedi, arXiv:1501.02593JHEP. 150969hep-thS. Ghosh, R. M. Soni and S. P. Trivedi, "On The Entanglement Entropy For Gauge Theories," JHEP 1509, 069 (2015) [arXiv:1501.02593 [hep-th]].
Aspects of Entanglement Entropy for Gauge Theories. R M Soni, S P Trivedi, arXiv:1510.07455JHEP. 1601136hep-thR. M. Soni and S. P. Trivedi, "Aspects of Entanglement Entropy for Gauge Theories," JHEP 1601, 136 (2016) [arXiv:1510.07455 [hep-th]].
Entanglement with Centers. C T Ma, arXiv:1511.02671JHEP. 160170hep-thC. T. Ma, "Entanglement with Centers," JHEP 1601 (2016) 070 [arXiv:1511.02671 [hep-th]].
Central Charge and Entangled Gauge Fields. K W Huang, arXiv:1412.2730Phys. Rev. D. 92225010hep-thK. W. Huang, "Central Charge and Entangled Gauge Fields," Phys. Rev. D 92 (2015) no.2, 025010 [arXiv:1412.2730 [hep-th]].
Entanglement of low-energy excitations in Conformal Field Theory. F C Alcaraz, M I Berganza, G Sierra, arXiv:1101.2881Phys. Rev. Lett. 106201601condmat.stat-mechF. C. Alcaraz, M. I. Berganza and G. Sierra, "Entanglement of low-energy excitations in Conformal Field Theory," Phys. Rev. Lett. 106, 201601 (2011) [arXiv:1101.2881 [cond- mat.stat-mech]].
Entanglement of excited states in critical spin chians. M I Berganza, F C Alcaraz, G Sierra, arXiv:1109.5673J. Stat. Mech. 12011016cond-mat.stat-mechM. I. Berganza, F. C. Alcaraz and G. Sierra, "Entanglement of excited states in critical spin chians," J. Stat. Mech. 1201, P01016 (2012) [arXiv:1109.5673 [cond-mat.stat-mech]].
Quantum Entanglement of Local Operators in Conformal Field Theories. M Nozaki, T Numasawa, T Takayanagi, arXiv:1401.0539Phys. Rev. Lett. 112111602hep-thM. Nozaki, T. Numasawa and T. Takayanagi, "Quantum Entanglement of Local Oper- ators in Conformal Field Theories," Phys. Rev. Lett. 112, 111602 (2014) [arXiv:1401.0539 [hep-th]].
Notes on Quantum Entanglement of Local Operators. M Nozaki, arXiv:1405.5875JHEP. 1410147hep-thM. Nozaki, "Notes on Quantum Entanglement of Local Operators," JHEP 1410, 147 (2014) [arXiv:1405.5875 [hep-th]].
Quantum Entanglement of Fermionic Local Operators. M Nozaki, T Numasawa, S Matsuura, arXiv:1507.04352hep-thM. Nozaki, T. Numasawa and S. Matsuura, "Quantum Entanglement of Fermionic Local Operators," arXiv:1507.04352 [hep-th].
Quantum dimension as entanglement entropy in two dimensional conformal field theories. S He, T Numasawa, T Takayanagi, K Watanabe, arXiv:1403.0702Phys. Rev. D. 90441701hep-thS. He, T. Numasawa, T. Takayanagi and K. Watanabe, "Quantum dimension as en- tanglement entropy in two dimensional conformal field theories," Phys. Rev. D 90, no. 4, 041701 (2014) [arXiv:1403.0702 [hep-th]].
Entanglement of local operators in large-N conformal field theories. P Caputa, M Nozaki, T Takayanagi, arXiv:1405.5946PTEP. 2014hep-thP. Caputa, M. Nozaki and T. Takayanagi, "Entanglement of local operators in large-N conformal field theories," PTEP 2014, 093B06 (2014) [arXiv:1405.5946 [hep-th]].
Scale invariance vs conformal invariance. Y Nakayama, arXiv:1302.0884Phys. Rept. 569hep-thY. Nakayama, "Scale invariance vs conformal invariance," Phys. Rept. 569, 1 (2015) [arXiv:1302.0884 [hep-th]].
Holographic Entanglement Entropy from 2d CFT: Heavy States and Local Quenches. C T Asplund, A Bernamonti, F Galli, T Hartman, arXiv:1410.1392JHEP. 1502171hep-thC. T. Asplund, A. Bernamonti, F. Galli and T. Hartman, "Holographic Entanglement Entropy from 2d CFT: Heavy States and Local Quenches," JHEP 1502, 171 (2015) [arXiv:1410.1392 [hep-th]].
Entanglement and correlation functions following a local quench: a conformal field theory. P Calabrese, J L Cardy, arXiv:0708.3750J. Stat. Mech. 071010004P. Calabrese and J. L. Cardy, "Entanglement and correlation functions following a local quench: a conformal field theory J. Stat. Mech. 0710 P10004, arXiv:0708.3750.
Holographic Local Quenches and Entanglement Density. M Nozaki, T Numasawa, T Takayanagi, arXiv:1302.5703JHEP. 130580hep-thM. Nozaki, T. Numasawa and T. Takayanagi, "Holographic Local Quenches and Entan- glement Density," JHEP 1305, 080 (2013) [arXiv:1302.5703 [hep-th]].
Quantum Entanglement of Localized Excited States at Finite Temperature. P Caputa, J Simn, A Tikonas, T Takayanagi, arXiv:1410.2287JHEP. 1501102hep-thP. Caputa, J. Simn, A. tikonas and T. Takayanagi, "Quantum Entanglement of Localized Excited States at Finite Temperature," JHEP 1501, 102 (2015) [arXiv:1410.2287 [hep-th]].
Localized Excitations from Localized Unitary Operators. A Sivaramakrishnan, arXiv:1604.00965hep-thA. Sivaramakrishnan, "Localized Excitations from Localized Unitary Operators," arXiv:1604.00965 [hep-th];
Entanglement Entropy for Descendent Local Operators in 2D CFTs. B Chen, W Z Guo, S He, J Q Wu, arXiv:1507.01157JHEP. 1510173hep-thB. Chen, W. Z. Guo, S. He and J. q. Wu, "Entangle- ment Entropy for Descendent Local Operators in 2D CFTs," JHEP 1510, 173 (2015) [arXiv:1507.01157 [hep-th]];
Entanglement constant for conformal families. P Caputa, A Veliz-Osorio, arXiv:1507.00582Phys. Rev. D. 92665010hep-thP. Caputa and A. Veliz-Osorio, "Entanglement constant for conformal families," Phys. Rev. D 92 (2015) no.6, 065010 [arXiv:1507.00582 [hep-th]];
Rnyi entropy of locally excited states with thermal and boundary effect in 2D CFTs. W Z Guo, S He, arXiv:1501.00757JHEP. 150499hep-thW. Z. Guo and S. He, "Rnyi entropy of locally excited states with thermal and boundary effect in 2D CFTs," JHEP 1504, 099 (2015) [arXiv:1501.00757 [hep-th]];
P Caputa, T Numasawa, A Veliz-Osorio, arXiv:1602.06542Scrambling without chaos in RCFT. hep-thP. Caputa, T. Nu- masawa and A. Veliz-Osorio, "Scrambling without chaos in RCFT," arXiv:1602.06542 [hep-th];
M M Sheikh-Jabbari, H Yavartanoo, arXiv:1605.00341Excitation Entanglement Entropy in 2d Conformal Field Theories. hep-thM. M. Sheikh-Jabbari and H. Yavartanoo, "Excitation Entanglement Entropy in 2d Conformal Field Theories," arXiv:1605.00341 [hep-th].
Entanglement entropy in the O(N) model. Max A Metlitski, Carlos A Fuertes, Subir Sachdev, hep-th/0904.4477Phys.Rev.B. 80115122Metlitski, Max A. Fuertes, Carlos A. and Sachdev, Subir "Entanglement entropy in the O(N) model" Phys.Rev.B.80,115122(2009) [hep-th/0904.4477].
Scalar Green's functions in an Euclidean space with a conical-type line singularity. M E X Guimaraes, B Linet, Commun. Math. Phys. 165297M. E. X. Guimaraes and B. Linet, "Scalar Green's functions in an Euclidean space with a conical-type line singularity," Commun. Math. Phys. 165, 297 (1994).
We are preparing a paper on the more precise relation between the effective reduced density matrix and diagrams.
| [] |
[
"Unbiased Gradient Estimation with Balanced Assignments for Mixtures of Experts",
"Unbiased Gradient Estimation with Balanced Assignments for Mixtures of Experts"
] | [
"Wouter Kool \nUniversity of Amsterdam\n\n",
"Chris J Maddison \nUniversity of Amsterdam\n\n",
"Deepmind Andriy \nUniversity of Amsterdam\n\n",
"Mnih Deepmind \nUniversity of Amsterdam\n\n"
] | [
"University of Amsterdam\n",
"University of Amsterdam\n",
"University of Amsterdam\n",
"University of Amsterdam\n"
] | [] | Training large-scale mixture of experts models efficiently on modern hardware requires assigning datapoints in a batch to different experts, each with a limited capacity. Recently proposed assignment procedures lack a probabilistic interpretation and use biased estimators for training. As an alternative, we propose two unbiased estimators based on principled stochastic assignment procedures: one that skips datapoints which exceed expert capacity, and one that samples perfectly balanced assignments using an extension of the Gumbel-Matching distribution[29]. Both estimators are unbiased, as they correct for the used sampling procedure. On a toy experiment, we find the 'skip'-estimator is more effective than the balanced sampling one, and both are more robust in solving the task than biased alternatives. * Corresponding author: [email protected]. Work done on an internship at DeepMind. | null | [
"https://arxiv.org/pdf/2109.11817v2.pdf"
] | 237,635,334 | 2109.11817 | 6b95a0c36683025ff38c66128b817d3640a7e03a |
Unbiased Gradient Estimation with Balanced Assignments for Mixtures of Experts
Wouter Kool
University of Amsterdam
Chris J Maddison
University of Amsterdam
Deepmind Andriy
University of Amsterdam
Mnih Deepmind
University of Amsterdam
Unbiased Gradient Estimation with Balanced Assignments for Mixtures of Experts
Training large-scale mixture of experts models efficiently on modern hardware requires assigning datapoints in a batch to different experts, each with a limited capacity. Recently proposed assignment procedures lack a probabilistic interpretation and use biased estimators for training. As an alternative, we propose two unbiased estimators based on principled stochastic assignment procedures: one that skips datapoints which exceed expert capacity, and one that samples perfectly balanced assignments using an extension of the Gumbel-Matching distribution[29]. Both estimators are unbiased, as they correct for the used sampling procedure. On a toy experiment, we find the 'skip'-estimator is more effective than the balanced sampling one, and both are more robust in solving the task than biased alternatives. * Corresponding author: [email protected]. Work done on an internship at DeepMind.
Introduction
A mixture of experts (MoE) model can be used to implement conditional computation by processing different datapoints by different expert modules. This enables increasing the MoE models' representational capacity by adding more experts, without increasing the amount of computation per datapoint, which remains the same as for a single expert. Recently, this idea has been combined with deep neural networks, where each layer can be a separate MoE model, resulting in large-scale MoE's that yield state-of-the-art performance in various tasks [38,25,8,26,35,45]. Training an MoE model involves training a routing network to assign datapoints to experts, and training individual experts to perform well on the datapoints assigned to them. In practice, for computational efficiency, we train on minibatches of datapoints, and as each expert has a limited capacity, we either have to skip datapoints exceeding the capacity of the experts they are assigned to, or balance the assignments of datapoints to different experts [8,26]. The effect of skipping datapoints or balancing their assignments is typically not accounted for when training the routing network. Recent work has also challenged this approach by showing that in some cases better performance can be achieved using a fixed hashing-based routing strategy [36], which suggests that there is room for improvement in training the routing network. As a step towards this goal, we present two principled methods for optimizing MoE's under limited expert capacity. Specifically, we propose two sampling procedures and corresponding unbiased estimators in this paper: a simple one based on skipping datapoints that exceed expert capacity, and a more advanced one based on balanced datapoint assignment using an extension of the Gumbel-Matching distribution [29]. Whereas these sampling procedures ensure that each sample respects the expert capacity, we also propose to use the Sinkhorn algorithm to balance the assignment in expectation before sampling, and we connect this procedure to the Gumbel-Matching distribution, which has such Sinkhorn balancing built-in.
In this paper, we consider a single-layer MoE model, but this can be easily generalized. Formally, the problem we consider is to predict a label y for a datapoint x using an MoE model consisting of k individual experts p θ (y|x, z) indexed by z and selected using the routing network p θ (z|x). At test time, we select the most probable expert z * = arg max z p θ (z|x), while for training we optimize a smoothed objective obtained by taking the expectation over the routing decisions. The resulting objective, ELBO, is a variational lower bound on the marginal log-likelihood:
\log p_θ(y|x, z^*) ≈ \underbrace{E_{z∼p_θ(z|x)}[\log p_θ(y|x, z)]}_{\text{ELBO}} ≤ \log E_{z∼p_θ(z|x)}[p_θ(y|x, z)] = \log p_θ(y|x).  (1)
We optimize this objective using minibatches of n datapoints, while respecting the expert capacity by assigning at most c = n/k datapoints to each expert (for simplicity, we assume no slack capacity). We evaluate our estimators on a toy experiment, where we find that the simple 'skip'-estimator is more effective than the one based on balanced sampling using the Gumbel-Matching distribution. This is a surprising result, as we designed balanced sampling (with importance weights) as a better alternative to wastefully skipping data, but it seems that the benefit of using all data is outweighed by the added variance due to the importance weights. We do however find that both estimators, which are based on REINFORCE [11,44], are more robust than the biased alternatives using differentiable gating.
Unbiased Estimation using Balanced Assignment
For simplicity, we consider the problem of optimizing a general function f(x, z) using gradient descent on minibatches of datapoints x = (x_1, ..., x_n) and expert assignments z = (z_1, ..., z_n).
To simplify notation, we omit the dependence of f (x, z) on y and parameters θ (this can be added easily). By combining importance sampling with REINFORCE [11, 44], we can sample from any joint proposal distribution q(z|x) with marginals q(z i |x) to estimate the gradient for a minibatch x:
∇E_{z∼p_θ(z|x)}\left[\frac{1}{n}\sum_i f(x_i, z_i)\right] = E_{z∼q(z|x)}\left[\frac{1}{n}\sum_i \frac{p_θ(z_i|x_i)}{q(z_i|x)}\, ∇\log p_θ(z_i|x_i)\,(f(x_i, z_i) − b)\right].  (2)

Here b is a baseline which can reduce the estimator variance (see Appendix B). Taking q(z|x) = ∏_i p_θ(z_i|x_i) recovers the standard 'on-policy' REINFORCE estimator.
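A minimal NumPy sketch of the importance-weighted REINFORCE estimator in (2), with a generic reward f and router logits; the function and variable names below are ours, not taken from the paper, and the per-datapoint proposal marginals are assumed to be given:

```python
import numpy as np

def iw_reinforce_grad(logits, q_probs, z, f_vals, baseline=0.0):
    """Estimate the gradient of E_{z~p}[mean_i f(x_i, z_i)] w.r.t. the router logits, as in eq. (2).

    logits:  (n, k) router logits, p_theta(z_i=j|x_i) = softmax(logits)_ij
    q_probs: (n, k) per-datapoint proposal marginals q(z_i=j|x)
    z:       (n,) sampled expert indices drawn from q
    f_vals:  (n,) f(x_i, z_i) evaluated at the sampled assignment
    """
    n, k = logits.shape
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)

    idx = np.arange(n)
    w = p[idx, z] / q_probs[idx, z]        # importance weights p/q
    grad_logp = -p                         # d log p_theta(z_i|x_i) / d logits = one_hot(z_i) - p_i
    grad_logp[idx, z] += 1.0
    coef = (w * (f_vals - baseline) / n)[:, None]
    return coef * grad_logp                # (n, k) gradient contribution per datapoint
```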
Skipping Datapoints as the Simple Solution
Our 'skip'-estimator respects expert capacity by sampling expert assignments independently and randomly subsampling the datapoints assigned to experts for which capacity is exceeded. If we assume a random order of the datapoints (or shuffle them first), simply skipping the last assignments per expert is equivalent to uniform subsampling. Let z be the vector of expert assignments, sampled independently from a proposal distribution q(z|x) = ∏_i q(z_i|x_i). Let n_j = Σ_i 1_{z_i=j} be the number of datapoints assigned to expert j (before subsampling) and c = n/k be the expert capacity. Let δ = (δ_1, ..., δ_n) with δ_i ∈ {0, 1} represent which datapoints are kept after per-expert subsampling. Correcting for the fact that the probability of datapoint i being kept after subsampling is min{n_{z_i}, c}/n_{z_i}, we obtain (see Appendix C):
∇E_{z∼p_θ(z|x)}\left[\frac{1}{n}\sum_i f(x_i, z_i)\right] = E_{z∼q(z|x)} E_{δ|z}\left[\frac{1}{n}\sum_i δ_i \frac{n_{z_i}}{\min\{n_{z_i}, c\}} \cdot \frac{∇p_θ(z_i|x_i)}{q(z_i|x)}\, f(x_i, z_i)\right].  (3)

Here we have omitted the baseline b and used ∇p_θ(z_i|x_i) = p_θ(z_i|x_i)∇\log p_θ(z_i|x_i) for brevity. Note that, while we skip datapoints in the gradient estimate (3), we can still propagate them to subsequent layers, e.g. by using skip connections or no-token-left-behind routing [8]. In particular, we can also apply (3) to multilayer MoE's, where we can skip different datapoints in different layers.
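The subsampling and reweighting step of (3) can be sketched as follows (a minimal illustration, with names of our own choosing):

```python
import numpy as np

def skip_subsample(z, n_experts, capacity, rng):
    """Keep at most `capacity` datapoints per expert (uniformly at random) and return
    the keep-mask delta and the reweighting factors n_j / min(n_j, c) of eq. (3)."""
    n = len(z)
    delta = np.zeros(n, dtype=bool)
    weights = np.zeros(n)
    for j in range(n_experts):
        members = np.flatnonzero(z == j)
        n_j = len(members)
        if n_j == 0:
            continue
        kept = rng.permutation(members)[:capacity]
        delta[kept] = True
        weights[kept] = n_j / min(n_j, capacity)   # corrects for the dropped datapoints
    return delta, weights

# usage sketch: 8 datapoints, 2 experts, capacity n/k = 4
rng = np.random.default_rng(0)
z = rng.integers(0, 2, size=8)                     # independent proposal samples
delta, w = skip_subsample(z, n_experts=2, capacity=4, rng=rng)
```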
Balanced Assignment using Gumbel-Matching
As an alternative to skipping, we use the n × k Gumbel-Matching distribution, a strict generalization of the (n × n) Gumbel-Matching distribution [29], to sample perfectly balanced assignments. We derive it by using the Gumbel-Max trick [28,14] to view sampling of independent expert assignments as an optimization problem, and adding constraints to this problem to respect the expert capacity. Let z ij = 1 {zi=j} be the one-hot representation of z i , a ij the unnormalized log-probability (logit) of assigning datapoint i to expert j, and g ij ∼ Gumbel(0) i.i.d. standard Gumbel variables. Then z has the Gumbel-Matching distribution if it is the solution to the following optimization problem:
\max_z \sum_{ij} z_{ij}(a_{ij}/τ + g_{ij})  \quad \text{s.t.} \quad \sum_j z_{ij} = 1\ ∀i, \quad \sum_i z_{ij} ≤ c\ ∀j, \quad z_{ij} ∈ \{0, 1\}\ ∀i, j,  (4)

where τ is a temperature parameter and c = n/k is the expert capacity. If we remove the balancing/capacity constraint Σ_i z_{ij} ≤ c, the solution decomposes over i and is given by z_i = arg max_j (a_{ij}/τ + g_{ij}), which is equivalent to z_i ∼ Categorical(exp(a_{ij}/τ)/Σ_j exp(a_{ij}/τ)) as a result of the Gumbel-Max trick. Thus, adding the constraint can be seen as a natural way of enforcing a balanced assignment to the otherwise independent sampling procedure. Generalizing the result from [29], the n × k Gumbel-Matching approximates the Gibbs distribution over n × k assignments z with potentials given by Σ_{ij} z_{ij} a_{ij} and temperature τ (see Appendix D.1):
p(z) ∝ \exp\left(\frac{1}{τ}\sum_{ij} z_{ij} a_{ij}\right).  (5)

For τ → 0 we obtain the deterministic assignment used in BASE layers [26]. We propose to solve the n × k assignment problem using a special cycle cancelling algorithm [20] (see Appendix D.2), which for k ≪ n is more efficient than O(n³) assignment using the Hungarian Algorithm [23].
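For illustration, a balanced sample from the n × k Gumbel-Matching distribution (4) can also be drawn with an off-the-shelf assignment solver by replicating each expert column c times; the sketch below uses SciPy's Hungarian-algorithm routine rather than the cycle-cancelling solver proposed in the paper, so it solves the same problem (4), just less efficiently:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def sample_gumbel_matching(logits, capacity, tau=1.0, rng=None):
    """Sample a balanced assignment z from the n-by-k Gumbel-Matching distribution, eq. (4)."""
    rng = np.random.default_rng() if rng is None else rng
    n, k = logits.shape
    gumbel = -np.log(-np.log(rng.random((n, k))))     # standard Gumbel(0) noise
    scores = logits / tau + gumbel
    expanded = np.repeat(scores, capacity, axis=1)    # each expert column replicated `capacity` times
    rows, cols = linear_sum_assignment(expanded, maximize=True)
    z = cols[np.argsort(rows)] // capacity            # map replicated column back to its expert
    return z

# usage sketch: 8 datapoints, 2 experts, capacity n/k = 4
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, 2))
z = sample_gumbel_matching(logits, capacity=4, rng=rng)
print(np.bincount(z, minlength=2))   # perfectly balanced: [4 4]
```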
The marginals q_θ(z_i|x) of the Gumbel-Matching distribution are intractable [29], but we can compute the conditionals q_θ(z_i|x, G_{−i}), conditioned on the noise G_{−i} = (g_1, ..., g_{i−1}, g_{i+1}, ..., g_n) of the other datapoints. These can be computed efficiently (see Appendix D.4) and used as stochastic approximations to the marginals, which still yield unbiased gradients when used in (2). Formally, let z = GM(\log p(·|x), G) be the solution of the Gumbel-Matching problem with noise G. We can then use the following estimator (see Appendix D.5):

∇E_{z∼p_θ(z|x)}\left[\frac{1}{n}\sum_i f(x_i, z_i)\right] = E_G\left[\frac{1}{n}\sum_i \frac{∇p_θ(z_i|x_i)}{q_θ(z_i|x, G_{−i})}\, f(x_i, z_i)\,\Big|_{z=GM(\log p(·|x), G)}\right].  (6)
Bonus: Balancing Expectations using the Sinkhorn Algorithm
The Sinkhorn algorithm is a direct extension of the softmax function, which solves a soft (entropy-regularized) version of the assignment problem [6,30,29]. As a result, it can be seen as approximating the marginals for the Gibbs distribution (5) [10, 29] (for n = k, but this can be generalized to k < n). Empirically, we find that the Sinkhorn algorithm (as a 'soft' matching algorithm) also closely approximates the marginals of the Gumbel-Matching distribution (itself an approximation of (5)), at least for n ≥ 4. As such, we propose to heuristically use it with (2) as an alternative to (6).
The Sinkhorn algorithm yields a balanced row-stochastic matrix with probabilities/expectations $\bar{p}_{ij}$ that can be seen as a balanced approximation of the unbalanced probabilities $p_{ij}$. Given such a balanced matrix, there exist many distributions over balanced assignments that have the (per datapoint) marginals equal to $\bar{p}_{ij}$, which follows from generalizing the Birkhoff theorem [4,43]. When using such a distribution with stochastic gradient descent, we would like to minimize the dependence between samples for different datapoints, to reduce the variance of the gradient estimates. This can be achieved by maximizing the entropy of the joint distribution over expert assignments with the given marginals $\bar{p}_{ij}$. In Appendix D.6, we show that this maximum-entropy distribution has the form (5), again motivating the Gumbel-Matching distribution as a principled approximation.
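The generalized Sinkhorn normalization described above can be sketched in a few lines. This is an illustrative implementation only (a fixed number of iterations, no convergence tolerance or log-space arithmetic), not necessarily the exact procedure used in the experiments.

```python
import numpy as np

def sinkhorn_balance(p, num_iters=50):
    """Iteratively rescale a row-stochastic (n x k) probability matrix so that
    rows sum to 1 and columns sum to n/k, as described above."""
    p = np.asarray(p, dtype=float).copy()
    n, k = p.shape
    col_target = n / k
    for _ in range(num_iters):
        p *= col_target / p.sum(axis=0, keepdims=True)   # normalize columns to n/k
        p /= p.sum(axis=1, keepdims=True)                # normalize rows to 1
    return p

# Example: an unbalanced router distribution over 2 experts for 6 datapoints.
p = np.array([[0.9, 0.1]] * 6)
p_bal = sinkhorn_balance(p)
print(p_bal.sum(axis=0))   # approximately [3., 3.]
print(p_bal.sum(axis=1))   # exactly [1., 1., 1., 1., 1., 1.]
```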
Experiment
We evaluate the proposed estimators on a toy task of modelling a discontinuous function
$$y = \mathbb{1}_{x < 0.5}\,(0.8x - 0.2) + \mathbb{1}_{x \ge 0.5}\,(-2.0x + 2.0).$$
We sample a dataset of 100 datapoints $x \sim \mathrm{Uniform}(-1, 1)$ and add noise $\epsilon \sim \mathrm{Normal}(0, 0.1^2)$, see Figure 2a. We model this dataset using a mixture of two linear experts and a Bernoulli router distribution with probability given by the sigmoid of a linear function. This model is able to solve the task perfectly, but the training methods need to take into account the fact that the solution involves an imbalanced partitioning of the dataset between the experts. We train the model to minimize the mean squared error (MSE) under sampling of the experts, which corresponds to the objective (1) where $p_\theta(y|x, z)$ is Gaussian with a fixed variance. We consider the task solved successfully if the MSE < 0.02 after 10K steps of training using Adam [18] with the learning rate of 0.1 (found to work best overall using a grid search).
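For concreteness, here is a minimal sketch of the toy dataset described above (the exact seed and data handling in the experiments may differ); the model itself is a sigmoid router over two linear experts, as stated above.

```python
import numpy as np

def make_toy_dataset(num_points=100, noise_std=0.1, seed=0):
    """Generate the discontinuous toy regression task described above:
    y = 0.8x - 0.2 for x < 0.5 and y = -2.0x + 2.0 for x >= 0.5,
    with x ~ Uniform(-1, 1) and additive Gaussian noise of std 0.1."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=num_points)
    y = np.where(x < 0.5, 0.8 * x - 0.2, -2.0 * x + 2.0)
    y += rng.normal(0.0, noise_std, size=num_points)
    return x, y

x, y = make_toy_dataset()
```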
We compare the following sampling strategies combined with REINFORCE: Sample Skip IW is the skipping estimator with the importance-weighting correction (Equation (3)), whereas Sample Skip is the biased alternative that simply averages the gradient over the remaining datapoints. Gumbel-Matching IW uses the conditional distributions for the importance weights (Equation (6)), whereas Gumbel-Matching SH is the (biased) version that uses the 'Sinkhorn marginals' (see Section 2.3) with Equation (2), and Gumbel-Matching is the biased estimator that does not correct for balancing using importance weights. Lastly, to quantify the reduction in performance due to the expert capacity constraints, we also include the results for Sample, the ideal baseline that does not respect expert capacity. We run each experiment with 10 seeds, for a range of sampling temperatures τ, taking into account their effect on the proposal distribution in the importance weights for all estimators. To reduce variance, we include an exponential moving average baseline [41] with decay 0.99.

Figure 2b plots the final training MSE for different temperatures τ of the sampling (proposal) distribution, where we observe that Sample Skip IW is the only estimator that matches the imbalanced (unconstrained) Sample estimator, both solving the task for τ ≥ 1. The skipping estimator thus provides a simple and effective way to deal with limited expert capacity, but it is important to upweight the remaining samples for experts which have skipped datapoints, as Sample Skip, which does not use this weighting, does not achieve the same performance. Contrary to our expectations, the Gumbel-Matching-based estimators turned out to be less effective, because of the increased variance due to the importance weights. Investigating the issue, we found that a datapoint $x$ can have high probability $p_\theta(z|x)$ for an expert $z$ according to the router, but a low probability under the proposal $q(z|x)$ of actually being assigned to the expert $z$, due to the balancing constraint. While the latter probability is small, occasionally the datapoint will get assigned to $z$, resulting in a large importance weight $p_\theta(z|x)/q(z|x)$. This effect can be mitigated by increasing the temperature of the proposal distribution, making it more uniform and avoiding large importance weights, which explains the good results for large τ values for all estimators except Gumbel-Matching. We also experiment with all estimators with Sinkhorn balancing (Section 2.3) before sampling, which only works for high temperatures (see Appendix E).
Biased training using differentiable gating
Large-scale MoEs used in practice [38,25,8,26,35,45] do not use REINFORCE, but instead multiply the output of an expert by the router probability $p_\theta(z|x)$, which we refer to as differentiable gating. This way, the router becomes more coupled with the experts and gets a gradient signal directly from the objective. Different strategies for injecting noise to encourage exploration have been proposed, e.g. perturbing router inputs or (log-)probabilities with multiplicative or additive (Gaussian) noise [38,8]. Empirically, we find that we get similar results by perturbing log-probabilities with (scaled) Gumbel noise, which, since $\arg\max_j (a_{ij} + \tau \cdot g_{ij}) = \arg\max_j (a_{ij}/\tau + g_{ij})$, has the advantage of being interpretable as sampling from a categorical distribution with the temperature τ (see Section 2.2). We find training using differentiable gating succeeds only if we additionally include a load balancing loss [8] with a weight of 0.01 or 0.03, and use balanced sampling with importance weights in a low temperature regime, as can be seen in Figure 2c. With differentiable gating, we may wonder if we need importance weights at all, since existing methods do not use importance weights to correct for the sampling temperature (noise scale) τ. In Appendix E we show, however, that with differentiable gating we fail to train the model when not using importance weights.
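As an illustration of the differentiable gating scheme described above (a sketch, not any specific library's implementation), the forward pass below multiplies the selected expert's output by the router probability and perturbs the logits with scaled Gumbel noise during training; here `experts` is assumed to be a Python list of PyTorch modules, and the per-sample loop is kept for clarity rather than efficiency.

```python
import torch
import torch.nn.functional as F

def gated_moe_forward(x, router_logits, experts, tau=1.0, train=True):
    """Differentiable gating sketch: output_i = p_theta(z_i|x_i) * expert_{z_i}(x_i),
    with exploration via Gumbel noise scaled by tau added to the logits."""
    if train:
        u = torch.rand_like(router_logits).clamp_min(1e-20)
        g = -torch.log(-torch.log(u))                    # standard Gumbel noise
        noisy_logits = router_logits + tau * g           # same argmax as logits/tau + g
    else:
        noisy_logits = router_logits
    z = noisy_logits.argmax(dim=-1)                      # hard expert choice per datapoint
    probs = F.softmax(router_logits, dim=-1)             # un-noised router probabilities
    gate = probs.gather(-1, z.unsqueeze(-1))             # p_theta(z_i | x_i), shape (n, 1)
    outputs = torch.stack([experts[j](x[i]) for i, j in enumerate(z.tolist())])
    return gate * outputs                                # gradient reaches the router via the gate
```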
Discussion
In this paper we proposed several new estimators for training MoE models with a limited computational capacity per expert. We expected balanced sampling, with importance weighting to correct for the assignment of datapoints to low-probability experts, to perform at least as well as skipping, which effectively uses a weight of 0 for the skipped datapoints. We found this not to be the case in practice, with the added variance from the importance weights eliminating the benefit from the increased expert capacity utilization due to balanced sampling. Fortunately, the skipping estimator turned out to be a simple and effective alternative. We hope this work will be useful for training MoE models in practice.
[10] Amir Globerson and Tommi Jaakkola. Approximate inference using conditional entropy decompositions. In Artificial Intelligence and Statistics, pages 131-138. PMLR, 2007.
[11] Peter W Glynn. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75-84, 1990.
[12] Alex Graves. Adaptive computation time for recurrent neural networks. arXiv preprint arXiv:1603.08983, 2016.
[13] Aditya Grover, Eric Wang, Aaron Zweig, and Stefano Ermon. Stochastic optimization of sorting networks via continuous relaxations. In International Conference on Learning Representations, 2019.
A Related work
Mixtures of experts [7,38] have a long history as a method for conditional computation [3,2,12] where different experts are used for different datapoints. Typically, the router or gating network that assigns datapoints to experts is learned jointly with the experts themselves. Mixtures of experts and the more general routing or modular networks [37,19,34] sometimes treat expert assignments as latent variables and use EM or variational methods [19] for training them. The benefit of optimizing a single-sample bound (ELBO) rather than the marginal likelihood is that, in addition to being more tractable, it can help to avoid stochasticity [33] in datapoint assignments to experts.
Whereas the conditional distribution over experts given a datapoint should ideally have low entropy, the marginal distribution over experts should be balanced for efficient training and use of model capacity. In some cases this is explicitly encouraged by using a load balancing loss [2,8]. Other methods algorithmically balance the assignment [26] or use a fixed distribution [36]. Many recent large scale MoE models use heuristics to train the router module (see Section 3.1) but have yielded state-of-the-art performance in different domains [38,25,8,26,35,45].
Sampling and optimization of balanced assignments is closely related to the sampling and optimization of permutations (n × n assignments) [24, 1,27,30,29,13,32], which relate to estimation of the matrix permanent [16,17,22]. In a different setting, the problem can also be seen as random fair assignment [5], with the difference being that in random fair assignment one typically is not concerned about dependence between the individual assignments.
B Derivation of (off-policy) REINFORCE
First, we derive (off-policy) REINFORCE for a single datapoint x:
$$\nabla E_{z\sim p_\theta(z|x)}[f(x, z)] = \nabla \sum_z p_\theta(z|x) f(x, z) = \sum_z \nabla p_\theta(z|x) f(x, z) = \sum_z q(z|x)\,\frac{\nabla p_\theta(z|x)}{q(z|x)}\, f(x, z)$$
$$= E_{z\sim q(z|x)}\Big[\frac{\nabla p_\theta(z|x)}{q(z|x)}\, f(x, z)\Big] = E_{z\sim q(z|x)}\Big[\frac{p_\theta(z|x)}{q(z|x)}\,\nabla \log p_\theta(z|x)\, f(x, z)\Big].$$
Now consider a minibatch $x = (x_1, ..., x_n)$ and let $p_\theta(z|x) = \prod_i p_\theta(z_i|x_i)$ be the distribution that samples expert assignments independently. Let $q(z_i|x) = \sum_{z_{-i}} q(z|x)$ be the marginal of any joint proposal distribution $q(z|x)$, which we can use to estimate the minibatch gradient:
$$\nabla E_{z\sim p_\theta(z|x)}\Big[\tfrac{1}{n}\sum_i f(x_i, z_i)\Big] = \tfrac{1}{n}\sum_i \nabla E_{z_i\sim p_\theta(z_i|x_i)}[f(x_i, z_i)] = \tfrac{1}{n}\sum_i E_{z_i\sim q(z_i|x)}\Big[\frac{\nabla p_\theta(z_i|x_i)}{q(z_i|x)}\, f(x_i, z_i)\Big]$$
$$= \tfrac{1}{n}\sum_i E_{z\sim q(z|x)}\Big[\frac{\nabla p_\theta(z_i|x_i)}{q(z_i|x)}\, f(x_i, z_i)\Big] = E_{z\sim q(z|x)}\Big[\tfrac{1}{n}\sum_i \frac{\nabla p_\theta(z_i|x_i)}{q(z_i|x)}\, f(x_i, z_i)\Big] = E_{z\sim q(z|x)}\Big[\tfrac{1}{n}\sum_i \frac{p_\theta(z_i|x_i)}{q(z_i|x)}\,\nabla \log p_\theta(z_i|x_i)\, f(x_i, z_i)\Big].$$
Since $\nabla E_{z\sim p_\theta(z|x)}[b] = 0$, we can subtract any constant baseline $b$ from $f(x, z)$, resulting in (2).
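A minimal sketch of this minibatch estimator in code is given below (an illustration, not the paper's implementation): it assumes the per-datapoint router logits, the experts sampled from the proposal, their proposal probabilities, and the per-datapoint objective values are given, and returns the gradient with respect to the logits, from which backpropagation into the router would proceed.

```python
import numpy as np

def reinforce_grad_logits(logits, z, f_vals, q_probs, baseline=0.0):
    """Off-policy REINFORCE estimate of the gradient of (1/n) sum_i f(x_i, z_i)
    with respect to the per-datapoint router logits a_ij.
    logits: (n, k) router logits; z: (n,) experts sampled from the proposal;
    f_vals: (n,) per-datapoint objective values; q_probs: (n,) proposal
    probabilities q(z_i|x) of the sampled experts."""
    n, k = logits.shape
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                # p_theta(z_i = j | x_i)
    w = p[np.arange(n), z] / q_probs                 # importance weights
    grad_log_p = -p                                  # d log p_theta(z_i|x_i)/d a_i = onehot(z_i) - p_i
    grad_log_p[np.arange(n), z] += 1.0
    return (w * (f_vals - baseline))[:, None] * grad_log_p / n   # shape (n, k)
```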
C Unbiasedness of the skipping estimator
First, consider a function $h(x_i, z_i)$. Let $z_{ij} = \mathbb{1}_{\{z_i = j\}}$ be the one-hot representation of $z_i$ and let $h_{ij} = h(x_i, z_i)|_{z_i = j}$. Let $n_j = \sum_i z_{ij}$ be the number of datapoints assigned to expert $j$ (before subsampling). Now let $\delta_i \in \{0, 1\}$ represent which datapoints are kept after we, for each expert $j$, uniformly subsample $\min\{n_j, c\}$ datapoints, where $c = n/k$ is the expert capacity. If $z_{ij} = 1$ (before subsampling), then the probability that datapoint $i$ remains after subsampling is
$\frac{\min\{n_j, c\}}{n_j}$, so we have $E_{\delta|z}[\delta_i]\,\frac{n_j}{\min\{n_j, c\}}\, z_{ij} = z_{ij}$. Therefore
$$E_{z\sim p_\theta(z|x)}\Big[\tfrac{1}{n}\sum_i h(x_i, z_i)\Big] = E_{z\sim p_\theta(z|x)}\Big[\tfrac{1}{n}\sum_i \sum_j z_{ij} h_{ij}\Big] = E_{z\sim p_\theta(z|x)}\Big[\tfrac{1}{n}\sum_i \sum_j E_{\delta|z}[\delta_i]\,\frac{n_j}{\min\{n_j, c\}}\, z_{ij} h_{ij}\Big]$$
$$= E_{z\sim p_\theta(z|x)}\Big[E_{\delta|z}\Big[\tfrac{1}{n}\sum_i \delta_i\,\frac{n_{z_i}}{\min\{n_{z_i}, c\}}\, h(x_i, z_i)\Big]\Big].$$
Now substituting $h(x_i, z_i) = \frac{\nabla p_\theta(z_i|x_i)}{q(z_i|x)} f(x_i, z_i)$ and combining with Equation (2) results in (3).
D The Gumbel-Matching distribution

D.1 Approximation to the Gibbs distribution
Here we essentially reproduce the argument from [29] for the n × k Gumbel-Matching distribution. By sampling i.i.d. Gumbel noise $g_z$ for every assignment $z$, we can sample from (5) by maximizing $\frac{1}{\tau}\sum_{ij} z_{ij} a_{ij} + g_z$ subject to the constraints given by (4). Comparing this to the objective for the Gumbel-Matching problem (4):
$$\sum_{ij} z_{ij}(a_{ij}/\tau + g_{ij}) = \frac{1}{\tau}\sum_{ij} z_{ij} a_{ij} + \sum_{ij} z_{ij} g_{ij},$$
we observe how the Gumbel-Matching distribution approximates (5) through the use of rank-one perturbations [31,15,42] $\sum_{ij} z_{ij} g_{ij}$ instead of $g_z$.
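This approximation can be checked numerically on a tiny problem by enumerating all balanced assignments. The sketch below is illustrative only (brute force, so feasible only for very small n and k): it compares the exact Gibbs distribution (5) with the empirical distribution of the brute-force Gumbel-Matching solution.

```python
import itertools
import numpy as np

def balanced_assignments(n, k):
    """All z in {0,...,k-1}^n with exactly n/k datapoints per expert."""
    c = n // k
    for z in itertools.product(range(k), repeat=n):
        if all(z.count(j) == c for j in range(k)):
            yield z

def gibbs_probs(a, tau=1.0):
    """Exact Gibbs distribution (5) over balanced assignments for small n, k."""
    n, k = a.shape
    zs = list(balanced_assignments(n, k))
    scores = np.array([sum(a[i, z[i]] for i in range(n)) / tau for z in zs])
    probs = np.exp(scores - scores.max())
    return zs, probs / probs.sum()

def gumbel_matching_freqs(a, tau=1.0, num_samples=20000, rng=None):
    """Empirical distribution of the (brute-force) Gumbel-Matching sampler."""
    rng = np.random.default_rng(0) if rng is None else rng
    n, k = a.shape
    zs = list(balanced_assignments(n, k))
    counts = np.zeros(len(zs))
    for _ in range(num_samples):
        g = rng.gumbel(size=(n, k))
        vals = [sum(a[i, z[i]] / tau + g[i, z[i]] for i in range(n)) for z in zs]
        counts[int(np.argmax(vals))] += 1
    return zs, counts / num_samples

a = np.random.randn(4, 2)
_, p_exact = gibbs_probs(a)
_, p_gm = gumbel_matching_freqs(a)
print(np.round(p_exact, 3), np.round(p_gm, 3))   # close but not identical, as argued above
```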
D.2 Solving n × k matching using cycle cancelling with Floyd-Warshall
The n × k assignment problem can be modelled as a minimum cost flow problem which can be solved using cycle cancelling [20] as follows:
• Find an initial (heuristic) feasible assignment z. We use the auction algorithm used in [26] with $\epsilon = 1.0$ such that it finds a good (but suboptimal) solution quickly.
• For every combination of experts $j, j'$, find the lowest cost $d_{jj'}$ to move a datapoint from expert $j$ to $j'$. Let $s_{ij} = a_{ij}/\tau + g_{ij}$ be the score for assigning datapoint $i$ to $j$, such that moving datapoint $i$ from $j$ to $j'$ we lose $s_{ij}$ but gain $s_{ij'}$, incurring a net 'cost' of $s_{ij} - s_{ij'}$. Therefore, the minimum cost to move any of the currently assigned datapoints from $j$ to $j'$ is $d_{jj'} = \min_{i: z_{ij}=1} (s_{ij} - s_{ij'})$.
• Use the Floyd-Warshall algorithm² [9] to find all indirect shortest paths in the fully connected graph with $k$ nodes (one per expert) and distance from $j$ to $j'$ given by $d_{jj'}$. Stop as soon as a negative cycle is found (distance from $j$ to $j$ smaller than 0).
• If no negative cycle exists, the assignment is optimal, stop.
• For each edge $(j, j')$ in the negative cycle,³ move the datapoint $i$ that minimizes $s_{ij} - s_{ij'}$ from $j$ to $j'$. This will improve the assignment by incurring a negative total cost.
• Repeat until no negative cycle exists.
Depending on the initial assignment, only a small number of $O(k^3)$ improvements are needed, and we find in practice that for $k \ll n$ this algorithm is much faster than the $O(n^3)$ Hungarian [23] algorithm.
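A sketch of the two core ingredients, the move-cost matrix $d_{jj'}$ and negative-cycle detection with Floyd-Warshall, is given below. The auction initialization and the improvement step along the detected cycle are omitted, so this is an illustration rather than the full solver.

```python
import numpy as np

def move_cost_matrix(scores, z):
    """d[j, j2] = min over datapoints i currently assigned to expert j of
    (s_ij - s_ij2): the cheapest way to move one datapoint from j to j2."""
    n, k = scores.shape
    d = np.full((k, k), np.inf)
    for j in range(k):
        idx = np.flatnonzero(z == j)
        if len(idx):
            d[j, :] = (scores[idx, j][:, None] - scores[idx, :]).min(axis=0)
    np.fill_diagonal(d, 0.0)
    return d

def has_negative_cycle(d):
    """Floyd-Warshall on the k-node expert graph; a negative diagonal entry of
    the all-pairs shortest-path matrix means the assignment can be improved."""
    dist = d.copy()
    k = dist.shape[0]
    for m in range(k):
        dist = np.minimum(dist, dist[:, m:m + 1] + dist[m:m + 1, :])
        if np.any(np.diag(dist) < 0):
            return True, dist
    return False, dist
```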
D.3 Computing conditionally optimal assignments
If the assignment is optimal, we denote the entries of the all-pairs shortest path matrix resulting from the Floyd-Warshall algorithm by $d^*_{jj'}$ (see Appendix D.2). We can use this to efficiently obtain conditionally optimal assignments, conditioning on $z_{ij} = 1$ for all $i, j$, as follows. If we condition on $z_{ij} = 1$, we may move datapoint $i$ from the globally optimal assignment $j^*$ to a suboptimal assignment $j$, incurring a cost $s_{ij^*} - s_{ij}$, and move another datapoint from $j$ to $j^*$ for a cost of $d^*_{jj^*}$, indirectly via the path found by the Floyd-Warshall algorithm. As such, the total cost of enforcing $z_{ij} = 1$ is $s_{ij^*} - s_{ij} + d^*_{jj^*}$. We denote the value of the globally optimal assignment by $v^*$. Subtracting the cost for enforcing $z_{ij} = 1$, we find that the value $v^*|_{z_{ij}=1}$ of the conditionally optimal assignment is given by
$$v^*|_{z_{ij}=1} = v^* - (s_{ij^*} - s_{ij} + d^*_{jj^*}) = v^* - s_{ij^*} + s_{ij} - d^*_{jj^*}. \tag{7}$$
D.4 Computation of the conditionals
The dependence of the Gumbel-Matching distribution on $x$ is only through the logits $A = (a_{ij})$, which allows us to slightly simplify notation in the rest of this section. Let $v^*(A, G)$ be the value of the optimal assignment for the Gumbel-Matching problem with logits $A = (a_{ij})$ and noise $G = (g_{ij})$, and let $v^*|_{z_{ij}=1}(A, G)$ be the value of the conditionally optimal assignment with the additional constraint that $z_{ij} = 1$ (see Appendix D.3). Assuming a capacity $c = n/k$, then given that $z_{ij} = 1$, the problem reduces to assigning the remaining $n - 1$ datapoints to the remaining $(k-1)\cdot\frac{n}{k} + (\frac{n}{k} - 1) = n - 1$ 'slots' ($n/k$ for experts $j' \neq j$ and $n/k - 1$ for expert $j$). As this reduced problem does not depend on datapoint $i$, we denote with $A_{-i}$ and $G_{-i}$ the logits and Gumbels with row $i$ removed, and we let $v^*_{-ij}(A_{-i}, G_{-i})$ be the value of the optimal assignment of the reduced problem. From the principle of optimality it follows that
$$v^*|_{z_{ij}=1}(A, G) = v^*_{-ij}(A_{-i}, G_{-i}) + a_{ij}/\tau + g_{ij}. \tag{8}$$
We can use this to compute the desired conditionals. In a slight abuse of notation⁴ we write
$$q(z_{ij}|A, G_{-i}) = P(z_{ij} = 1 \mid A, G_{-i}) = P\Big(v^*|_{z_{ij}=1}(A, G) > \max_{j' \neq j} v^*|_{z_{ij'}=1}(A, G) \,\Big|\, A, G_{-i}\Big)$$
$$= P\Big(v^*_{-ij}(A_{-i}, G_{-i}) + a_{ij}/\tau + g_{ij} > \max_{j' \neq j}\big(v^*_{-ij'}(A_{-i}, G_{-i}) + a_{ij'}/\tau + g_{ij'}\big) \,\Big|\, A, G_{-i}\Big)$$
$$= \frac{\exp\big(v^*_{-ij}(A_{-i}, G_{-i}) + a_{ij}/\tau\big)}{\sum_{j'} \exp\big(v^*_{-ij'}(A_{-i}, G_{-i}) + a_{ij'}/\tau\big)} = \frac{\exp\big(v^*|_{z_{ij}=1}(A, G) - g_{ij}\big)}{\sum_{j'} \exp\big(v^*|_{z_{ij'}=1}(A, G) - g_{ij'}\big)}, \tag{9}$$
where we have used the Gumbel-Max trick. Although $g_{ij}$ appears in (9), $q(z_{ij}|A, G_{-i})$ does not depend on $g_i$ as this value cancels against $g_{ij}$ in (8). The values $v^*|_{z_{ij}=1}(A, G)$ can be computed efficiently using Equation (7) with the method described in Appendix D.3.
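Putting (7) and (9) together, the conditionals can be computed for all datapoints at once from the perturbed scores, the optimal assignment and the all-pairs shortest-path matrix. The following is an illustrative vectorized sketch (names and shapes are our own, not the paper's code).

```python
import numpy as np

def gumbel_matching_conditionals(s, g, z_star, v_star, d_star):
    """Compute q(z_i = j | A, G_{-i}) for all i, j as in (9), using the
    conditionally optimal values of (7).
    s: (n, k) perturbed scores a_ij / tau + g_ij; g: (n, k) Gumbel noise;
    z_star: (n,) optimal assignment; v_star: its total score;
    d_star: (k, k) all-pairs shortest paths from Floyd-Warshall."""
    n, k = s.shape
    s_opt = s[np.arange(n), z_star][:, None]        # s_{i j*}
    # v*|_{z_ij = 1} = v* - s_{i j*} + s_{ij} - d*_{j j*}   (Equation (7))
    v_cond = v_star - s_opt + s - d_star[:, z_star].T
    logits = v_cond - g                             # g_ij cancels, see (8)-(9)
    logits -= logits.max(axis=1, keepdims=True)
    q = np.exp(logits)
    return q / q.sum(axis=1, keepdims=True)
```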
D.5 Unbiasedness of the Gumbel-Matching estimator
Let $z = \mathrm{GM}(\log p(\cdot|x), G)$ be the solution for the Gumbel-Matching problem with noise $G$. The idea behind the Gumbel-Matching estimator is that we can derive an unbiased estimate of the gradient for each datapoint as follows. First sample the Gumbel noise $G_{-i}$ for all datapoints except $i$, and compute the conditional distribution $q_\theta(z_i|x, G_{-i})$ (see Appendix D.4, where $A = \log p(\cdot|x)$ is the matrix with log-probabilities and $z_{ij} = 1 \Leftrightarrow z_i = j$). We can then use this conditional distribution over $z_i$ as a proposal distribution in (2), which we can then reparameterize in terms of the Gumbel noise $g_i$ for the $i$-th datapoint. Finally, we can use this estimator for all datapoints $i$, where we may reuse the same Gumbel noise and use their average as the estimate:
$$\nabla E_{z\sim p_\theta(z|x)}\Big[\tfrac{1}{n}\sum_i f(x_i, z_i)\Big] = \tfrac{1}{n}\sum_i E_{z_i\sim p_\theta(z_i|x_i)}[\nabla \log p_\theta(z_i|x_i)\, f(x_i, z_i)]$$
$$= \tfrac{1}{n}\sum_i E_{G_{-i}}\Big[E_{z_i\sim q_\theta(z_i|x, G_{-i})}\Big[\frac{p_\theta(z_i|x_i)}{q_\theta(z_i|x, G_{-i})}\,\nabla \log p_\theta(z_i|x_i)\, f(x_i, z_i)\Big]\Big]$$
$$= \tfrac{1}{n}\sum_i E_{G_{-i}}\Big[E_{g_i}\Big[\frac{p_\theta(z_i|x_i)}{q_\theta(z_i|x, G_{-i})}\,\nabla \log p_\theta(z_i|x_i)\, f(x_i, z_i)\,\Big|_{z_i=\mathrm{GM}(\log p(\cdot|x),G)_i}\Big]\Big]$$
$$= E_G\Big[\tfrac{1}{n}\sum_i \frac{p_\theta(z_i|x_i)}{q_\theta(z_i|x, G_{-i})}\,\nabla \log p_\theta(z_i|x_i)\, f(x_i, z_i)\,\Big|_{z=\mathrm{GM}(\log p(\cdot|x),G)}\Big].$$
E Experiment

Figure 3: Results using REINFORCE (with no balance loss) and differentiable gating (with 0.01 balance loss weight) loss functions; both also shown in a version that applies Sinkhorn normalization before sampling, as well as a (biased) version that does not apply importance weights.

Figure 3 presents results for both REINFORCE (top) and differentiable gating (bottom), both with (middle) and without (left) the Sinkhorn normalization before sampling. When using Sinkhorn normalization before sampling, we take it into account when computing the importance weights. With REINFORCE, not using Sinkhorn normalization works better, as using Sinkhorn normalization requires a high sampling temperature to work well, i.e. close to uniform proposal samples.
With differentiable gating (and load balancing loss with weight 0.01), we observe the opposite: the results are better with Sinkhorn normalization than without Sinkhorn normalization (Section 3.1), but, in contrast to REINFORCE, require a very low sampling temperature to succeed, so close to deterministic training.
Existing methods often do not use importance weights to take into account the noise scale or temperature [38,8], so we experiment with this as well by completely dropping importance weights for all estimators (right column in Figure 3). We find this only works for REINFORCE, when sampling with a low temperature but not when we use the Gumbel-Matching estimators.
In all cases, we found REINFORCE succeeds both with and without a load balancing loss, whereas differentiable gating requires a load balancing loss with a weight of 0.01 to succeed at all. If the load balance loss is too high (0.1) all methods fail to model the unbalanced data.
Figure 1: Overview of the methods that generate balanced and unbalanced samples and expectations.

Figure 2: Toy experiment data and results using REINFORCE (without balance loss) and training using gating (with balance loss weight 0.01). We report the final mean squared error (MSE) when training with different temperatures/noise scales. The black dashed line indicates the success threshold of 0.02.
[14] Emil Julius Gumbel. Statistical theory of extreme values and some practical applications: a series of lectures, volume 33. US Government Printing Office, 1954.
[15] Tamir Hazan, Subhransu Maji, and Tommi Jaakkola. On sampling from the Gibbs distribution with random maximum a-posteriori perturbations. Advances in Neural Information Processing Systems, 26:1268-1276, 2013.
[16] Mark Huber. Exact sampling from perfect matchings of dense regular bipartite graphs. Algorithmica, 44(3):183-193, 2006.
[17] Mark Jerrum, Alistair Sinclair, and Eric Vigoda. A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries. Journal of the ACM (JACM), 51(4):671-697, 2004.
[18] Diederik P Kingma and Jimmy Ba. Adam: A method for stochastic optimization. 2015.
[19] Louis Kirsch, Julius Kunze, and David Barber. Modular networks: learning to decompose neural computation. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, pages 2414-2423, 2018.
[20] Morton Klein. A primal method for minimal cost flows with applications to the assignment and transportation problems. Management Science, 14(3):205-220, 1967.
[21] Philip A Knight. The Sinkhorn-Knopp algorithm: convergence and applications. SIAM Journal on Matrix Analysis and Applications, 30(1):261-275, 2008.
[22] Jonathan Kuck, Tri Dao, Hamid Rezatofighi, Ashish Sabharwal, and Stefano Ermon. Approximating the permanent by sampling from adaptive partitions. Advances in Neural Information Processing Systems, 2019.
[23] Harold W Kuhn. The Hungarian method for the assignment problem. Naval Research Logistics Quarterly, 2(1-2):83-97, 1955.
[24] Quoc Le and Alexander Smola. Direct optimization of ranking measures. arXiv preprint arXiv:0704.3359, 2007.
[25] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. GShard: Scaling giant models with conditional computation and automatic sharding. In International Conference on Learning Representations, 2021.
[26] Mike Lewis, Shruti Bhosale, Tim Dettmers, Naman Goyal, and Luke Zettlemoyer. BASE layers: Simplifying training of large, sparse models. International Conference on Machine Learning, 2021.
[27] Ke Li, Kevin Swersky, and Richard Zemel. Efficient feature learning using perturb-and-map. 2013.
[28] Chris J Maddison, Daniel Tarlow, and Tom Minka. A* sampling. Advances in Neural Information Processing Systems, 27:3086-3094, 2014.

² We use the parallel version: https://en.wikipedia.org/wiki/Parallel_all-pairs_shortest_path_algorithm#Floyd_algorithm which runs in $O(k^3)$ and $k$ sequential steps.
³ Can be reconstructed by using Floyd-Warshall with path reconstruction: https://en.wikipedia.org/wiki/Floyd-Warshall_algorithm#Pseudocode_[11].
⁴ $q(z_{ij})$ corresponds to $q(z_i)$ where $z_i = j$ but assumes a one-hot representation.
Acknowledgments and Disclosure of Funding

We would like to thank Michalis Titsias, Jörg Bornschein, Matthias Bauer and Yee Whye Teh for helpful comments, discussions and support.

D.6 Maximum-entropy distribution over balanced assignments

For simplicity, we assume n = k, but this can be easily generalized. Assume that we have a balanced (square) matrix (see Section 2.3) $P = (p_{ij})$ with probabilities $p_{ij} \ge 0$ such that $\sum_i p_{ij} = \sum_j p_{ij} = 1$, i.e. the matrix $P$ is doubly stochastic. The Birkhoff decomposition [4,43] decomposes such a matrix as a convex combination over permutation matrices $Z = (z_{ij})$, for which $z_{ij} \in \{0,1\}$ and $\sum_i z_{ij} = \sum_j z_{ij} = 1$. Such permutation matrices represent n × n matchings $z$ in one-hot encoding (i.e. $z_{ij} = 1 \Leftrightarrow z_i = j$), which is why we will use $z$ to denote them. A Birkhoff decomposition $\alpha_z > 0$, $\sum_z \alpha_z = 1$ thus represents a joint probability distribution over matchings $z$ (represented as permutation matrices) with marginal distributions (per datapoint) given by $P$:
$$\sum_z \alpha_z z_{ij} = p_{ij}. \tag{10}$$
Many such decompositions/distributions exist and can be found using the Birkhoff algorithm [4], but these will yield sparse $\alpha_z$ and thus have low entropy and high dependence between the marginal distributions for different $i$. We aim to minimize this dependence between the marginal distributions, which is achieved by maximizing the entropy $-\sum_z \alpha_z \log \alpha_z$ subject to the constraint (10) and $\sum_z \alpha_z = 1$, which has the Lagrangian
$$-\sum_z \alpha_z \log \alpha_z + \sum_{ij} \lambda_{ij}\Big(\sum_z \alpha_z z_{ij} - p_{ij}\Big) + \eta\Big(\sum_z \alpha_z - 1\Big).$$
This has first order conditions $-\log \alpha_z - 1 + \sum_{ij} \lambda_{ij} z_{ij} + \eta = 0$. If we let $u_{ij} = \exp(\lambda_{ij})$ and $w = \exp(\eta - 1)$, and convert from the 'one-hot' representation $z_{ij}$ to the 'indexing' representation $z_i$ (i.e. $\sum_{ij} \lambda_{ij} z_{ij} = \sum_i \lambda_{i,z_i}$), we find the solution
$$\alpha_z = \exp\Big(\sum_{ij} \lambda_{ij} z_{ij} + \eta - 1\Big) = w \prod_i u_{i,z_i}, \qquad \text{so that} \qquad p_{ij} = \sum_z \alpha_z z_{ij} = w\, u_{ij}\, \mathrm{Perm}(U_{-ij}).$$
Here $\mathrm{Perm}(U)$ is the permanent of the matrix $U = (u_{ij})$, and $U_{-ij}$ is the matrix $U$ with row $i$ and column $j$ removed. Since $w = 1/\mathrm{Perm}(U)$ we find
$$p_{ij} = \frac{u_{ij}\,\mathrm{Perm}(U_{-ij})}{\mathrm{Perm}(U)},$$
which can, in theory, be solved using a (very expensive) fixed point iteration scheme. Empirically we found that the solution takes the form $u_{ij} \approx p_{ij}^{1/\tau}$ for some τ, such that the probability of an assignment $z$ is given by
$$p(z) = \frac{\prod_i p_{i,z_i}^{1/\tau}}{\mathrm{Perm}\big(P^{(1/\tau)}\big)}. \tag{11}$$
Here $P^{(1/\tau)}$ is the Hadamard power, which raises the entries $p_{ij}$ to the power $1/\tau$ element-wise. By generalizing permanents (sums over permutations) to non-square matrices as sums over balanced assignment matrices, the same result can be derived for n ≠ k. Using $a_{ij} = \log p_{ij}$ and converting to the 'one-hot' representation $z_{ij}$, we find that (11) is equal to (5).

If we do not use the approximation $u_{ij} \approx p_{ij}^{1/\tau}$, then it still holds that the maximum entropy distribution has the form (5), but we should let τ = 1 and $a_{ij} = \log u_{ij}$, so in this case the $a_{ij}$ are not the marginal log-probabilities $\log p_{ij}$ we aim to sample from.
Ranking via sinkhorn propagation. Ryan Prescott , Adams , Richard S Zemel, arXiv:1106.1925arXiv preprintRyan Prescott Adams and Richard S Zemel. Ranking via sinkhorn propagation. arXiv preprint arXiv:1106.1925, 2011.
Emmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, Doina Precup, arXiv:1511.06297Conditional computation in neural networks for faster models. arXiv preprintEmmanuel Bengio, Pierre-Luc Bacon, Joelle Pineau, and Doina Precup. Conditional computa- tion in neural networks for faster models. arXiv preprint arXiv:1511.06297, 2015.
Estimating or propagating gradients through stochastic neurons for conditional computation. Yoshua Bengio, Nicholas Léonard, Aaron Courville, arXiv:1308.3432arXiv preprintYoshua Bengio, Nicholas Léonard, and Aaron Courville. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432, 2013.
Tres observaciones sobre el algebra lineal. Garrett Birkhoff, Univ. Nac. Tucuman, Ser. A. 5Garrett Birkhoff. Tres observaciones sobre el algebra lineal. Univ. Nac. Tucuman, Ser. A, 5:147-154, 1946.
Designing random allocation mechanisms: Theory and applications. Eric Budish, Yeon-Koo Che, Fuhito Kojima, Paul Milgrom, American economic review. 1032Eric Budish, Yeon-Koo Che, Fuhito Kojima, and Paul Milgrom. Designing random allocation mechanisms: Theory and applications. American economic review, 103(2):585-623, 2013.
Sinkhorn distances: Lightspeed computation of optimal transport. Marco Cuturi, Advances in neural information processing systems. 26Marco Cuturi. Sinkhorn distances: Lightspeed computation of optimal transport. Advances in neural information processing systems, 26:2292-2300, 2013.
Learning factored representations in a deep mixture of experts. David Eigen, Marc'aurelio Ranzato, Ilya Sutskever, arXiv:1312.4314arXiv preprintDavid Eigen, Marc'Aurelio Ranzato, and Ilya Sutskever. Learning factored representations in a deep mixture of experts. arXiv preprint arXiv:1312.4314, 2013.
Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. William Fedus, Barret Zoph, Noam Shazeer, arXiv:2101.03961arXiv preprintWilliam Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. arXiv preprint arXiv:2101.03961, 2021.
Algorithm 97: shortest path. W Robert, Floyd, Communications of the ACM. 56345Robert W Floyd. Algorithm 97: shortest path. Communications of the ACM, 5(6):345, 1962.
Learning latent permutations with gumbel-sinkhorn networks. Gonzalo Mena, David Belanger, Scott Linderman, Jasper Snoek, International Conference on Learning Representations. Gonzalo Mena, David Belanger, Scott Linderman, and Jasper Snoek. Learning latent permuta- tions with gumbel-sinkhorn networks. International Conference on Learning Representations, 2018.
Sinkhorn networks: Using optimal transport techniques to learn permutations. Gonzalo Mena, David Belanger, Gonzalo Munoz, Jasper Snoek, NIPS Workshop in Optimal Transport and Machine Learning. Gonzalo Mena, David Belanger, Gonzalo Munoz, and Jasper Snoek. Sinkhorn networks: Using optimal transport techniques to learn permutations. In NIPS Workshop in Optimal Transport and Machine Learning, 2017.
Perturb-and-map random fields: Using discrete optimization to learn and sample from energy models. George Papandreou, Alan L Yuille, 2011 International Conference on Computer Vision. IEEEGeorge Papandreou and Alan L Yuille. Perturb-and-map random fields: Using discrete optimiza- tion to learn and sample from energy models. In 2011 International Conference on Computer Vision, pages 193-200. IEEE, 2011.
Sinkhorn autoencoders. Giorgio Patrini, Rianne Van Den, Patrick Berg, Marcello Forre, Samarth Carioni, Max Bhargav, Tim Welling, Frank Genewein, Nielsen, Uncertainty in Artificial Intelligence. Giorgio Patrini, Rianne van den Berg, Patrick Forre, Marcello Carioni, Samarth Bhargav, Max Welling, Tim Genewein, and Frank Nielsen. Sinkhorn autoencoders. In Uncertainty in Artificial Intelligence, 2019.
Techniques for learning binary stochastic feedforward neural networks. Pekka Raiko, Mathias Berglund, Guillaume Alain, Laurent Dinh, International Conference on Learning Representations. Pekka Raiko, Mathias Berglund, Guillaume Alain, and Laurent Dinh. Techniques for learning binary stochastic feedforward neural networks. In International Conference on Learning Representations, pages 1-10, 2015.
Diversity and depth in per-example routing models. Prajit Ramachandran, V Quoc, Le, International Conference on Learning Representations. Prajit Ramachandran and Quoc V Le. Diversity and depth in per-example routing models. In International Conference on Learning Representations, 2019.
André Susano Pinto, Daniel Keysers, and Neil Houlsby. Scaling vision with sparse mixture of experts. Carlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, arXiv:2106.05974arXiv preprintCarlos Riquelme, Joan Puigcerver, Basil Mustafa, Maxim Neumann, Rodolphe Jenatton, An- dré Susano Pinto, Daniel Keysers, and Neil Houlsby. Scaling vision with sparse mixture of experts. arXiv preprint arXiv:2106.05974, 2021.
Hash layers for large sparse models. Stephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, arXiv:2106.04426arXiv preprintStephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, and Jason Weston. Hash layers for large sparse models. arXiv preprint arXiv:2106.04426, 2021.
Routing networks and the challenges of modular and compositional computation. Clemens Rosenbaum, Ignacio Cases, Matthew Riemer, Tim Klinger, arXiv:1904.12774arXiv preprintClemens Rosenbaum, Ignacio Cases, Matthew Riemer, and Tim Klinger. Routing networks and the challenges of modular and compositional computation. arXiv preprint arXiv:1904.12774, 2019.
Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean, Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. 2017.
A relationship between arbitrary positive matrices and doubly stochastic matrices. The annals of mathematical statistics. Richard Sinkhorn, 35Richard Sinkhorn. A relationship between arbitrary positive matrices and doubly stochastic matrices. The annals of mathematical statistics, 35(2):876-879, 1964.
Concerning nonnegative matrices and doubly stochastic matrices. Richard Sinkhorn, Paul Knopp, Pacific Journal of Mathematics. 212Richard Sinkhorn and Paul Knopp. Concerning nonnegative matrices and doubly stochastic matrices. Pacific Journal of Mathematics, 21(2):343-348, 1967.
Reinforcement learning: An introduction. S Richard, Andrew G Sutton, Barto, MIT pressRichard S Sutton and Andrew G Barto. Reinforcement learning: An introduction. MIT press, 2018.
On some properties of the low-dimensional gumbel perturbations in the perturb-and-map model. M Jakub, Tomczak, Statistics & Probability Letters. 115Jakub M Tomczak. On some properties of the low-dimensional gumbel perturbations in the perturb-and-map model. Statistics & Probability Letters, 115:8-15, 2016.
A certain zero-sum two-person game equivalent to the optimal assignment problem, contributions to the theory of games. John Von Neumann, Ann. Math. Studies. 2281953John Von Neumann. A certain zero-sum two-person game equivalent to the optimal assignment problem, contributions to the theory of games, vol. 2. Ann. Math. Studies, (28), 1953.
Simple statistical gradient-following algorithms for connectionist reinforcement learning. J Ronald, Williams, Machine learning. 83Ronald J Williams. Simple statistical gradient-following algorithms for connectionist reinforce- ment learning. Machine learning, 8(3):229-256, 1992.
Exploring sparse expert models and beyond. An Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, arXiv:2105.15082arXiv preprintAn Yang, Junyang Lin, Rui Men, Chang Zhou, Le Jiang, Xianyan Jia, Ang Wang, Jie Zhang, Jiamang Wang, Yong Li, et al. Exploring sparse expert models and beyond. arXiv preprint arXiv:2105.15082, 2021.
| [] |
[
"Response to \"Comment on 'Ultradense protium p(0) and deuterium D(0) and their relation to ordinary Rydberg matter: a review'[Physica Scripta 94 (2019)075005]\"",
"Response to \"Comment on 'Ultradense protium p(0) and deuterium D(0) and their relation to ordinary Rydberg matter: a review'[Physica Scripta 94 (2019)075005]\""
] | [
"Leif Holmlid [email protected] \nDepartment of Chemistry and Molecular Biology\nUniversity of Gothenburg\n412 96GöteborgSweden\n"
] | [
"Department of Chemistry and Molecular Biology\nUniversity of Gothenburg\n412 96GöteborgSweden"
] | [] | In this answer to the Comment by Hansen and Engelen it is shown, that if there is any violation of the baryon number conservation law in H(0) nuclear reactions, it is not at all of the form that the authors believe. Their belief is disproved by cited well-known scientific results from other groups. It is further shown that quantum mechanics in H(0) molecules is different than these authors believe, not formulated in kinetic energy terms but defined by angular momentum quantization. Repetition of experiments is required, not pondering by non-specialists. | null | [
"https://export.arxiv.org/pdf/2303.08775v1.pdf"
] | 257,532,319 | 2303.08775 | d9e23fea463a373c9991a03a60981b619d26340c |
Response to "Comment on 'Ultradense protium p(0) and deuterium D(0) and their relation to ordinary Rydberg matter: a review'[Physica Scripta 94 (2019)075005]"
Leif Holmlid [email protected]
Department of Chemistry and Molecular Biology
University of Gothenburg
412 96GöteborgSweden
Response to "Comment on 'Ultradense protium p(0) and deuterium D(0) and their relation to ordinary Rydberg matter: a review'[Physica Scripta 94 (2019)075005]"
In this answer to the Comment by Hansen and Engelen it is shown, that if there is any violation of the baryon number conservation law in H(0) nuclear reactions, it is not at all of the form that the authors believe. Their belief is disproved by cited well-known scientific results from other groups. It is further shown that quantum mechanics in H(0) molecules is different than these authors believe, not formulated in kinetic energy terms but defined by angular momentum quantization. Repetition of experiments is required, not pondering by non-specialists.
Introduction
The quantum material ultradense hydrogen H(0) has been studied experimentally with several methods since 2008 and there are now 65 publications in refereed scientific journals on H(0). Skepticism in some physics circles has existed but there have been no publications which have disproved any of the experimental or theoretical results. Despite this, non-scientific activities have been started like campaigns on the web, letters to my university and my department with allegations about lacking radiation protection, and letters to fifteen scientific journals to retract my published papers (none has been retracted). I have as a scientist survived several skepticism periods, first concerning Rydberg states at surfaces, then Rydberg matter, then H(0) and now baryon annihilation.
After around 50 publications from me in each of these fields, the skepticism has faltered,
but not yet for baryon annihilation due to its important relation to energy production. I believe in scientific method and have always improved my work to counter the skepticism. I have often been told that my experiments are so simple that anyone can do them. Why are they then not repeated and published with their positive or negative outcome? Why do I not receive questions or other legitimate communications, if problems exist with the repetition, instead of letters to my university? I think the reason is just ignorance, but that opens for non-scientific activities like the ones mentioned above.
So now I answer this new comment by H+E as I will call them here to save space. I will answer scientifically and will not react further to their nice words like "egregious statements and inferences".
From their list of keywords it seems that their comment includes something on hydrogen phase diagrams but this is not the case. So, I answer with the same structure as H+E have used.
Baryon number conservation
"Besides the claim of 'cold fusion' of deuterons in ultra-dense deuterium…." I do not claim cold fusion but muon-catalyzed fusion which is a well-known process since Another error in Eq. (1) is that the reaction was suggested to be p + p. As published in 2021 the correct reaction is p + anti-p. It took numerous measurements of meson energies before this conclusion was reached.
3. Molecular structure of H2

Why this part of the comment is included is difficult to understand, since the review from 2019 is not concerned with covalently bonded hydrogen molecules H2. H+E do not mention any quantum numbers, molecular orbitals or angular momenta as important for this molecule but only its energy terms. Most of their discussion is not worth arguing about since a correct discussion can be found in many textbooks. I will just highlight a few of their statements:
"In their Fig. 1 the authors provide their understanding of how this very short bond length can come about. Their argument is that the molecule has six Coulomb interactions…."
The reason we included Fig. 1 was that we received comments that H(0) could not exist due to the strong repulsions at short distance. Fig. 1 has nothing to do with the nature of H(0) otherwise which is completely given by angular momentum quantization. H+E apparently misunderstand this.
"Somehow there must be a repulsive force at work."
Of course, the repulsive terms are all included in the discussion of Fig. 1.
"By the argument of HZG, there is therefore nothing to prevent this atom from collapsing to a structure where the electron is located on top of the nuclear charge." How this can be deduced from the review cannot be understood. The process H+E imply to be impossible indeed exists and is called beta capture. It seems that H+E think that the electron in an s orbital circles around the nucleus. They have probably not understood that the most likely location for an electron with l = 0 is in the nucleus. I have used this point as a test of student's understanding of quantum mechanics for a long time. The main factor which governs the electron motion in an atom is its quantized angular momentum which H+E does not mention at all. Maybe this is the reason why they do not accept H(0), which is defined by its quantized angular momenta as expressed very clearly in the review.
"With a total of four elementary particles in the molecule(two protons and two electrons), each of spin 1/2, it is impossible for any pairs of thosespins, and in particular the spins of the electrons, not to be aligned. This will render theirpostulated cancellation of the
Coulomb interaction inoperative."
It is apparent that the authors do not believe in pairing of electrons with different spins, up and down as it is often called. They should consider the basic knowledge that an orbital has the same shape whether there are one or two electrons in it. Thus the cancellation which we observe in passing (it is not necessary for our argument) exists and is well known. There exist many good textbooks on quantum mechanics where the elements of molecular theory can be found.
"employing a standard, potassium doped, iron oxide catalyst"
The catalyst is a crucial part of the experiments, and to lightly call it a standard catalyst shows great ignorance. It cannot be bought, so what makes it a standard catalyst?
"The postulated bond length of a few picometers, in particular, is calculated from the observation of flight times in mass spectra."
This sentence contains numerous errors. The H-H bond length is not postulated but measured. The calculation is based on the kinetic energy release from Coulomb explosions in neutral time-of-flight spectra, not in mass spectra. Please observe this point very carefully: Coulomb explosions of the molecules give the kinetic energies of the fragments. The time-of-flights of the neutral fragments are measured and this gives the kinetic energy of the fragments, which is several hundred eV.
"No attempt has been documented of any attempt to rule out any other explanation, for example the obvious suggestion that the spectra are due to charging up of the sample."
Such data are indeed given in the publications, but are not observed by H+E. How charging up of a metallic grounded sample could give nanosecond time-of-flight spectra requires new physics. The spectra are reproducible; otherwise they would not have been studied or published. The sample is pure metal. Changes in its applied voltage shift the TOF-MS spectra correctly. This proves conclusively that there is no charging effect in the ion mass spectra. No charging effect is of course possible in the neutral time-of-flight spectra.
"The peaks which, noted in passing, have extremely poor resolution, …"
The resolution is determined by the kinetic energy release in the H(0) molecules and cannot be influenced by experimental parameters. The resolution is certainly good enough for determining the bond distances with considerable precision and also to derive the molecular shapes.
Experiments, measurements
"One is that this production rate corresponds to an energy outputof close to 240 kW with an input of 5 W laser light."
The measurement cited is correct but the value of 240 kW looks unfamiliar. The experiment has been repeated many times over a period of several years and published a few times.An energy gain of 1000 is normal, not 50 000 as H+E state.
"Another is that it is made withoutany reference to radiation protection measures that should have been taken. This type ofintensities will cause serious damage to living biological matter in the surroundings and evento the experimental equipment used."
Conclusions
The comment by H+E could have touched upon some aspect of the review paper that summarized the results from close to 50 published papers. H+E do not have any argument against the review as such. They mainly discuss points that have been published in several other papers. So how that can be the content of a comment on the review paper is difficult to understand.
It is notable that H+E do not remark on that part of the review which concerns the rotational spectroscopy measurements, where a new summary with a complete table of the results was included. These measurements show very clearly picometer distances in p(0), D(0) and pD(0) for spin quantum numbers s = 2, 3 and 4. The uncertainty in the distances is down to a few femtometers. The published bond distance in state s = 2 is 2.245 ± 0.003 pm. This proves beyond any doubt that H(0) exists and has pm sized interatomic distances. These results may of course be difficult to understand for physicists with no experience in molecular rotational spectroscopy.
1. L. Holmlid, A. Kotarba and P. Stelmachowski. Production of ultra-dense hydrogen H(0): A novel nuclear fuel. International Journal of Hydrogen Energy, 2021. https://doi.org/10.1016/j.ijhydene.2021.02.221
2. E. Klempt, C. Batty and J.-M. Richard. The antinucleon-nucleon interaction at low energy: annihilation dynamics. Physics Reports 413 (2005) 197. https://doi.org/10.1016/j.physrep.2005.03.002
3. L. Holmlid. Energy production by laser-induced annihilation in ultradense hydrogen H(0). International Journal of Hydrogen Energy, 2021. https://doi.org/10.1016/j.ijhydene.2021.01.212
4. L. Holmlid and S. Olafsson. Laser-induced annihilation: relativistic particles from ultra-dense hydrogen H(0). High Energy Density Physics 40 (2021) 100942. https://doi.org/10.1016/j.hedp.2021.100942
| [] |
[
"Linking Alternative Fuel Vehicles Adoption with Socioeconomic Status and Air Quality Index",
"Linking Alternative Fuel Vehicles Adoption with Socioeconomic Status and Air Quality Index"
] | [
"Anuradha Singh \nEnvironmental Science and Management PhD Program\n\n",
"Jyoti Yadav \nDepartment of Computer Science 3. Clean Energy and Sustainability Analytics Center (CESAC)\nMontclair State University\n07043MontclairNJUSA\n",
"Sarahana Shrestha \nEnvironmental Science and Management PhD Program\n\n",
"Aparna S Varde \nEnvironmental Science and Management PhD Program\n\n\nDepartment of Computer Science 3. Clean Energy and Sustainability Analytics Center (CESAC)\nMontclair State University\n07043MontclairNJUSA\n"
] | [
"Environmental Science and Management PhD Program\n",
"Department of Computer Science 3. Clean Energy and Sustainability Analytics Center (CESAC)\nMontclair State University\n07043MontclairNJUSA",
"Environmental Science and Management PhD Program\n",
"Environmental Science and Management PhD Program\n",
"Department of Computer Science 3. Clean Energy and Sustainability Analytics Center (CESAC)\nMontclair State University\n07043MontclairNJUSA"
] | [] | Understanding adoption of new technology by its consumers helps us deal with the challenges it faces in any new market. Moreover, the impact it creates on society in terms of the environment, health, and justice is significantly based on the extent of adoption. Alternative fuel vehicles (AFVs) is such an area that faces challenges in terms of consumers' diverse social status and resistance to change. In order to achieve a cleaner transportation sector, we need to address these challenges. In this paper, we conduct a study via machine learning techniques to correlate the adoption of AFVs across various regions, their socioeconomic ranking, and their impact on the air quality index (AQI); and furthermore to predict the AQI as per the region's AFV adoption. This is an empirical study with predictive modeling based on a regional panel data analysis where we use real US census data, air quality data, and data on the number of AFVs purchased per region. Research in this area can help to promote appropriate policies for AFV adoption in the future with due justice to different population groups. This work exemplifies a modest utilization of AI techniques to enhance social good. More specifically, it makes a considerable impact on energy, climate, transportation, and environmental sustainability. | 10.48550/arxiv.2303.08286 | [
"https://export.arxiv.org/pdf/2303.08286v1.pdf"
] | 257,532,418 | 2303.08286 | f796a0606840453c4c7e811cac57b05906b408b0 |
Linking Alternative Fuel Vehicles Adoption with Socioeconomic Status and Air Quality Index
Anuradha Singh
Environmental Science and Management PhD Program
Jyoti Yadav
Department of Computer Science 3. Clean Energy and Sustainability Analytics Center (CESAC)
Montclair State University
07043MontclairNJUSA
Sarahana Shrestha
Environmental Science and Management PhD Program
Aparna S Varde
Environmental Science and Management PhD Program
Department of Computer Science 3. Clean Energy and Sustainability Analytics Center (CESAC)
Montclair State University
07043MontclairNJUSA
Linking Alternative Fuel Vehicles Adoption with Socioeconomic Status and Air Quality Index
Understanding adoption of new technology by its consumers helps us deal with the challenges it faces in any new market. Moreover, the impact it creates on society in terms of the environment, health, and justice is significantly based on the extent of adoption. Alternative fuel vehicles (AFVs) is such an area that faces challenges in terms of consumers' diverse social status and resistance to change. In order to achieve a cleaner transportation sector, we need to address these challenges. In this paper, we conduct a study via machine learning techniques to correlate the adoption of AFVs across various regions, their socioeconomic ranking, and their impact on the air quality index (AQI); and furthermore to predict the AQI as per the region's AFV adoption. This is an empirical study with predictive modeling based on a regional panel data analysis where we use real US census data, air quality data, and data on the number of AFVs purchased per region. Research in this area can help to promote appropriate policies for AFV adoption in the future with due justice to different population groups. This work exemplifies a modest utilization of AI techniques to enhance social good. More specifically, it makes a considerable impact on energy, climate, transportation, and environmental sustainability.
Introduction
Transportation is the largest source of greenhouse gas emissions in the United States, as mentioned by the Environmental Protection Agency in the literature (EPA 2022). Consequently, both federal and state governments are acting to combat climate change impacts in the country (Bipartisan Infrastructure Law 2021). Analogous to many other states, New Jersey is propagating the use of alternative fuel vehicles (AFVs), including electric vehicles (EVs), to achieve the state's greenhouse gas (GHG) reduction targets and meet its clean energy goals (NJ GWRA, 2020). As of December 2021, there are only 64,307 electric vehicles registered in NJ, hence it is going to be an uphill task to achieve these targets (NJDEP). Innovative methods need to be adopted for decarbonization of transportation (Milovanoff 2020). Previous studies have highlighted the importance of effective policies and socioeconomic factors for AFV adoption (Xue et al. 2021). A few studies on the Air Quality Index (AQI) and AFV sales have been conducted (Guo et al. 2020). This motivates our research constituting the pilot study in this paper. We define our objectives as exploring AI-based solutions with the following goals.
1. Analyze the link between regional AFV growth and socioeconomic ranking.
2. Estimate the correlation between AFV adoption and ambient air quality per region.
3. Build a model to predict air quality using data on AFV adoption and socioeconomic census data.
Our study is aimed at supporting faster adoption of AFVs and assisting the policymakers' decision-making.
Understanding the link between social ranking and AFV adoption can guide us to address different needs of the population as well as policy selection for a specific transportation mode. Further, the use of AFVs lowers GHG emissions locally and has a direct and immediate impact on the ambient air quality. Our research in this paper on predictive modeling for air quality based on the AFV adoption and socioeconomic census data for specific regions is novel and can guide policymakers to better decisions. The inclusion of five air pollutants (SO2, O3, NO2, PM2.5, and CO) also makes it novel, as previous research has mostly focused on PM2.5. Our present study is focused on the counties in NJ, based upon which our further work shall involve adding more states to make our model more robust. It is also pertinent to note that air quality is not just affected by transportation but by various other factors such as industrial markets/processes and energy generation methods used for a given region. Other challenges in this initiative relate to the limited availability of data for analysis, because the use of AFVs has not reached its full potential. All the results are based on the assumption that there are no big changes in the concerned factors to have any additional impact on air quality apart from increased AFV usage. Incorporating all the different sources of pollution and their data collection is a related challenge. Furthermore, AFV adoption is hampered by the lack of adequate consumer awareness as well as the high cost of AFVs compared to conventional vehicles fueled by gasoline and diesel. The question that most vehicle users would ask is: Why should I pay more? The answer to that question lies in the fundamental theme of social good. It is important to convey to the masses that their additional costs will yield significantly higher benefits with respect to cleaner air, better environmental conditions, and consequently a much healthier life (which will also save medical costs). This is precisely where our pilot study in this area aims to make a modest contribution. We investigate the role of AI techniques in order to draw correlations between AFV usage and AQI, and predict AQI based on AFV and related factors, so as to highlight the AFV impact on the masses. We hope this work contributes its 2 cents to AI for social good.
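As a purely illustrative sketch of goal 3 (the file path, column names and model choice below are hypothetical placeholders, not the study's actual features or pipeline), a county-level panel of AFV counts and census indicators could be fed to a standard regressor to predict the AQI:

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Hypothetical columns; real features would come from the US census,
# state AFV registration counts, and EPA air quality records per county/year.
df = pd.read_csv("nj_county_panel.csv")          # placeholder path
features = ["afv_count", "median_income", "population_density", "pct_bachelors"]
X, y = df[features], df["median_aqi"]

model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean())
```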
Related Work
For equitable and efficient transportation, policymakers need to identify areas for improvement and actions that can be taken to improve the current system, e.g., encouraging EV access and affordability through an understanding of the social structure (Fleming 2018). A study of the relationship between the market share of electric vehicles, the presence of government incentives, and other influential socioeconomic factors in the US concluded that electricity prices were negatively associated with EV use, while urban roads and government incentives were positively correlated with states' electric vehicle market share (Soltani-Sobh et al. 2017).
EV studies on equity aspects have focused on disproportional EV adoption and the cost of failing to provide equal access, ranging from disparities in local pollution (Holland et al. 2019; Ju et al. 2020), to unfair distribution of public subsidies (Borenstein and Davis 2016), and disparate changes in neighborhood desirability (Henderson 2020; Rice et al. 2020). Demographic variables such as income, gender, age, education, and household size have previously been analyzed (Sovacool et al. 2018; Soltani-Sobh et al. 2017; Gallagher and Muehlegger 2011; Langbroek et al. 2016). Research on the adoption of EVs in India included a SWOC (strengths, weaknesses, opportunities, and challenges) analysis and concluded that more support from the central government is needed both for research and for businesses (Singh et al. 2021). The proportion of cleaner transport options in the overall market also determines the extent of their impact: one related study in China found that new energy vehicles cannot yet be considered an efficient measure to mitigate air pollution since their numbers are not yet significant, and that a focus on cleaner energy production methods is needed instead (Su et al. 2021). Research in Barcelona and Madrid reported similar results, where a 40% EV introduction was needed to improve air quality, especially NO2 and CO; the sources of power generation for the region and other emission sources played important roles in the resulting air quality improvements (Soret et al. 2014). Similar inferences were drawn in research using the Community Multi-scale Air Quality model and the Weather Research and Forecasting model in Taiwan (Li et al. 2016), and in another study using a Lagrangian dispersion model for pollution prediction in a scenario analysis of EV introduction on a highway in Milan, Italy (Ferrero et al. 2016). Various techniques in data mining and machine learning have been adapted for predicting environmental parameters, e.g., air quality vis-à-vis health, energy demand in the residential sector (Prasad et al. 2021), and decision support for green data centers (Pawlish et al. 2012). Such techniques, along with NLP methods, are deployed to assess human interactions and public sentiments on matters pertaining to air quality, public policy, and related aspects (McNamee et al. 2022; Du et al. 2020; Field et al. 2022; Du et al. 2016; Kommu et al. 2022). Interesting multidisciplinary approaches are proposed in such works, with opportunities to analyze the diverse data available and utilize it in urban planning for social good.
Other related research (Das et al. 2018; Puri et al. 2018; Babicheva et al. 2016; Radakovic et al. 2022; Eisner et al. 2011; Varde et al. 2004; Zhang et al. 2022) is noteworthy. Yet, the goals of our study have not been achieved in earlier work in a comprehensive manner, namely the joint analysis of AFV adoption, socioeconomic census, and air quality data; this constitutes the novelty of our study.
Hypothesis, Data and Methods
In this study, we examine a simple hypothesis: regions with higher AFV adoption will have better air quality and lower pollution levels. In line with this hypothesis, we aim to find understandable and explainable correlations between AQI and AFV adoption per region, which we can use for predictive modeling of AQI based on future increases in AFV numbers. This helps to gain more insight into the relationships between alternative fuel vehicles and the corresponding air quality, at present and in the future. Through this study, we hope to make positive impacts on the theme of AI for social good, via its relevance to clean air, a sustainable environment, and enhanced transportation, all of which imply good health for the masses.
The regions of focus in this work are various counties in NJ. The datasets used in this study are: NJ Alternate Fuel Vehicle historical county-based report with the types of alternative fuel vehicles sold; socioeconomic census data for NJ counties on population count, education, unemployment, poverty rate, and median household income, as well as air quality data from monitoring stations in NJ. These datasets are for the years 2016-2022. The datasets are correlated with the AFV used in each county, based on the type of the AFV. The number of vehicles is used to calculate the AFV ranking for each county. In terms of air quality data, we focus on 5 major pollutants: SO2, O3, NO2, PM2.5, and CO. For the purpose of this study, the AFVs considered are: Battery Electric Vehicles (BEV), Plug-in Hybrid Electric Vehicles (PHEV), Neighborhood Electric Vehicles (NEV), and Hybrid Electric Vehicles (HEV). Their definitions are annexed to this paper in the Appendix. Data preprocessing conducted in this study is illustrated in Figure 1. The NJ Electric Vehicles Registrations dataset is used to obtain county-wise summarized AFV data which includes: semi-annual AFV adoption rate, PEV, non-PEV, and Public Road Mileage and Vehicle Miles Traveled (VMT). In order to prepare the final dataset we compile concatenated AFV data along with socioeconomic data comprising population, poverty rate, and household income. Additionally, we preprocess AQI data by implementing PCA (Principal Components Analysis) on SO2, O3, PM2.5, and CO for dimensionality reduction. Thereafter, we build a predictive model using linear regression to predict AQI using covariates (AFV and socioeconomic data).
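As an illustration of the preprocessing outlined above, a minimal pandas sketch might look as follows; the file names and column names are hypothetical placeholders rather than the actual datasets used in this study.

```python
# Illustrative preprocessing sketch (not the authors' code); file and column names are hypothetical.
import pandas as pd

afv = pd.read_csv("nj_ev_registrations.csv")   # hypothetical AFV registration file
se = pd.read_csv("nj_county_census.csv")       # hypothetical socioeconomic census file
aqi = pd.read_csv("nj_county_aqi.csv")         # hypothetical monitored pollutant data

# County-wise, semi-annual AFV counts by vehicle type
afv_summary = (afv.groupby(["county", "half_year", "vehicle_type"])
                  .size()
                  .unstack(fill_value=0)
                  .reset_index())

# Merge AFV counts with socioeconomic variables and pollutant readings per county and period
data = (afv_summary.merge(se, on=["county", "half_year"], how="inner")
                   .merge(aqi, on=["county", "half_year"], how="inner"))
print(data.head())
```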
We use the Pearson Correlation Coefficient to relate the major variables in the socioeconomic data (population, poverty rate, household income, education level) with the extent of AFV adoption. Dimensionality reduction is performed on the raw AQI data. Further, the relationship between the AQI scores and these variables is defined by Equation 1.
Y_i = ∑_{j=1}^{n} β_j X_{ij} + ε    (1)
Here i indexes the county, Y_i denotes the AQI of county i, each X_{ij} denotes a covariate value and each β_j its slope coefficient, and ε is the error term associated with the equation. We apply this to the poverty level, median household income, population of the county, and vehicle count in this pilot study. While building our predictive model for AQI, dimensionality reduction is used to avoid overfitting and multi-collinearity. In order to enhance the extraction and visualization of relevant data, we perform PCA on the five aforementioned pollutants to calculate the AQI for each county. The target variable for prediction is the AQI score. Linear regression is conducted to predict the AQI via the AFV count and the socioeconomic (SE) factors, including median household income, poverty rate, and population. Regression is thereby deployed to capture the relationships between the dependent variable, the AQI score, and the independent variables from the AFV and socioeconomic data, in order to predict future values of the target, i.e., the AQI. The reason for choosing linear regression in our study is that it provides more understandable and explainable mappings between the input and the output, as opposed to typical deep learning models based on neural networks, which are black-box approaches often lacking explainability. In this work it is crucial to understand how certain factors lead to a given output (and hence make decisions), e.g., how median household income can affect AFV usage and thus AQI.
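A minimal sketch of this modeling pipeline is given below, assuming the merged county-level table `data` from the preprocessing sketch above; the covariate column names are hypothetical, and the composite AQI score is taken here as the first principal component of the standardized pollutant readings.

```python
# Sketch of the PCA + linear regression pipeline described above (column names are hypothetical).
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

pollutants = ["SO2", "O3", "NO2", "PM2.5", "CO"]
covariates = ["afv_count", "median_income", "poverty_rate", "population"]

# Composite AQI score: first principal component of the standardized pollutant readings
scaled = StandardScaler().fit_transform(data[pollutants])
data["aqi_score"] = PCA(n_components=1).fit_transform(scaled).ravel()

# Linear regression of the AQI score on AFV and socioeconomic covariates
model = LinearRegression().fit(data[covariates], data["aqi_score"])
print(dict(zip(covariates, model.coef_)), model.intercept_)
```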
Algorithm 1: Predictive model for AQI via SE and AFV
Input: D = [σ, α], where σ: SE and AFV variables, α: actual AQI
Parameters: weights ω, learning rate, maximum number of iterations η, error threshold τ
Output: predicted AQI scores α`
1: Let i = 1, Δ = 100
2: while (i <= η) or (Δ < τ) do
3:    α` = inner product of σ and ω
4:    Δ = α` − α
5:    γ = 2 · dot(σᵀ, Δ)
6:    ω -= (learning rate) · γ
7: end while
8: return α`
Algorithm 1 provides the pseudocode for building our AQI predictive model based on learning from the existing SE, AFV, and AQI data; a small illustrative implementation is sketched below, and the experiments are discussed next.
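For illustration, the update loop of Algorithm 1 can be written as the following NumPy sketch; the learning rate value and the stopping rule below are assumptions, not the exact settings used in our experiments.

```python
# NumPy sketch of the gradient-descent loop in Algorithm 1 (illustrative settings only).
import numpy as np

def fit_aqi(sigma, alpha, lr=1e-3, max_iter=10_000, tol=1e-6):
    sigma = np.asarray(sigma, dtype=float)   # SE and AFV covariate matrix
    alpha = np.asarray(alpha, dtype=float)   # observed AQI scores
    omega = np.zeros(sigma.shape[1])         # weights
    for _ in range(max_iter):
        pred = sigma @ omega                 # predicted AQI: inner product of sigma and omega
        delta = pred - alpha                 # residual
        grad = 2 * sigma.T @ delta / len(alpha)   # gradient of the mean squared error
        omega -= lr * grad                   # gradient step with an assumed learning rate
        if np.linalg.norm(delta) < tol:      # error threshold reached
            break
    return omega

# Predicted AQI scores for new covariates: new_sigma @ fit_aqi(sigma, alpha)
```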
Experimental Results and Discussion
Experiments are conducted using Python's scikit-learn library. In the results shown here, 80% of the data is used to train the predictive model and 20% of the data is used for testing. Figure 2 plots the semiannual AFV growth by vehicle type. We observe an increase in all types of AFVs across all counties from 2016 to 2020, with the largest increase of 15% in the number of Battery Electric Vehicles (BEV) from 2016 to 2022. The number of Plug-in Hybrid Electric Vehicles (PHEV) increases by 5.8%, Neighborhood Electric Vehicles (NEV) by 3.8%, and Hybrid Electric Vehicles (HEV) by 1.8%. In total, there are more HEVs, followed by BEVs, PHEVs, and NEVs. The stronger growth in BEVs is plausibly explained by advancements in technology as well as federal and state subsidies and grants that encourage consumers to purchase them. We observe that there is a direct correlation between SE ranking and AFV ranking. Table 1 presents a synopsis of these rankings. However, there are some deviations as well. Bergen County has the highest number of AFVs and is also ranked the highest in terms of socioeconomic ranking, followed by Middlesex County and Essex County, which show similar patterns. Salem is the county with the lowest number of AFVs and it also has the lowest socioeconomic ranking. As socioeconomic status increases, the number of AFVs in the county also increases. Outliers observed here are NJ counties such as Hudson, Monmouth, and Cumberland, where this trend is not found. According to the Pearson Correlation Coefficient, the vehicle count, population count, education level, and median household income are significantly correlated with the AFV adoption rate. This further shows that socioeconomic ranking, which accounts for higher median household income, education levels, and population count of the county, is directly correlated with AFV ranking. The highlighted rows in Table 1 show counties with almost the same socioeconomic and AFV ranking. Figure 3 illustrates the Pearson Correlation Coefficients linking socioeconomic census data with AFV ranking. The variables included are vehicle count, education percentage, population count, population change, percent of population in poverty, the lower and upper bound (giving a 90% confidence interval for the poverty percentage), unemployment rate, and median household income. The education percentage is highly linked to median household income, and vehicle count is linked with population count. The Pearson correlation analysis also shows that socioeconomic factors are positively correlated with AQI. Figure 4 portrays the Pearson Correlation Coefficients linking AQI with socioeconomic factors; the variables observed are the year, population, unemployment, income, poverty rate, vehicle count, and AQI scores for the counties. We observe that AQI scores are linked positively with the income levels and negatively with the population, unemployment, and poverty in the counties.
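A short sketch of the correlation analysis behind Figures 3 and 4, again assuming the hypothetical merged table `data` introduced above, is shown below.

```python
# Pearson correlation matrix over the main variables (column names are hypothetical).
cols = ["vehicle_count", "education_pct", "population", "poverty_rate",
        "unemployment_rate", "median_income", "aqi_score"]
corr = data[cols].corr(method="pearson")
print(corr.round(2))
```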
In our predictive modeling using linear regression, the R2 score is 0.69 and the Mean Squared Error (MSE) is 0.003 over test data. This implies that AQI predictions can occur with high accuracy in order to estimate air quality based on the AFV adoption and socioeconomic factors. Figure 5 plots the results of data tested for three NJ counties, i.e., Atlantic, Camden, and Mercer, over six years; these values are listed in Table 2 in the Appendix for a more detailed display. As we can see, the predicted AQI values are in line with the actual AQI for these counties for the years 2016, 2017, 2018, and 2019; however, for the years 2020 and 2021 the predicted values are somewhat higher than the actual monitored AQI values, which could be due to the impacts of the pandemic and decreased transportation activity overall. This brings our attention to incorporating more variables in future work that shall reflect such changes in transportation systems and their usage. On the whole, observing the values for these three counties, our predictive model seems convincing and can be further enhanced to study the impact that AFV adoption can have on the air quality of any region. Based on this entire study, we find that there is lower pollution and better air quality in the counties with a higher number of AFVs and with better socioeconomic ranking. This confirms our initial hypothesis that AQI scores and AFV counts are positively correlated, and furthermore helps establish the correlation between the socioeconomic ranking and AFVs. Our predictions of AQI are accurate based on the ground truth obtained from existing data, and hence can be used for further estimations of air quality as per AFV usage. Policy decisions for regions that have bad air quality can be undertaken only when the policymakers understand the impact such decisions can have for the given areas. With varied options of AFVs and varied socioeconomic strata, our work can take a step towards bringing social equity by helping to choose the best AFVs and public transit options as needed, based on regional needs, the respective AQI, and related factors such as socioeconomic indicators. Since planning for clean transportation involves consideration of future years, our model to predict the AQI score for areas with an estimate of AFV adoption can help governments reach their GHG targets along with social good locally, thus contributing to it initially on a small scale, and helping to enhance AFV awareness on a larger scale in the future.
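The 80/20 evaluation reported above can be sketched as follows; the split seed and column names are illustrative assumptions.

```python
# Sketch of the 80/20 train/test evaluation (R^2 and MSE on held-out data).
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

covariates = ["afv_count", "median_income", "poverty_rate", "population"]
X_train, X_test, y_train, y_test = train_test_split(
    data[covariates], data["aqi_score"], test_size=0.2, random_state=0)

model = LinearRegression().fit(X_train, y_train)
pred = model.predict(X_test)
print(r2_score(y_test, pred), mean_squared_error(y_test, pred))
```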
Conclusions and Roadmap
In the race to mitigate the negative effects of climate change, governments and people need to move towards clean energy and clean transportation methods. Clean technologies such as EVs and other AFVs are backbones of future sustainable living; their adoption, however, needs government support for both businesses and consumers, owing to their higher prices as well as limited infrastructure availability. Hence, issues of social equity and social good come to the forefront, and government support should not be limited to higher-income consumers alone, considering that low-income consumers often live in areas with poor air quality. As found in our research study, counties with more AFVs have better air quality and vice versa; moreover, those are the counties with high socioeconomic ranking. Our research on predicting the AQI for counties based on their socioeconomic status and AFV adoption is an initial effort to support policymakers' decisions oriented toward social good and the achievement of sustainable development goals. This is a pilot study focusing on the transportation sector and assuming that no big changes in the other sectors affect ambient air quality in NJ. The lack of uniformly monitored air quality data adds to the challenges in this study. Future research in this realm involves adding more data from other states and covering the USA. This study focuses on the use of AFVs, which has not yet reached its full potential in NJ and the US. More intensive studies can be conducted on air quality as AFV adoption rises and more data become available for analysis.
Based on our pilot study, here are a few important takeaways with the scope for future work.
• Policymakers can offer programs in areas with high socioeconomic ranking to encourage more AFV adoption, as these communities can "afford to pay more" for the social good of reducing AQI to improve the environment and be healthier.
• Areas with lower socioeconomic status need more attention because social equity is an important aspect of social good; incentives such as tax cuts and pricing of AFVs proportional to income are needed to proliferate the use of AFVs and make them more accessible to low-income areas.
• More datasets on AFVs and AQI must be stored with open access to facilitate further AI-based studies; software should be built for actual AQI prediction based on AFV adoption, to provide at-a-glance information and encourage more AFV usage.
• Mobile applications (apps) can be developed that link AFVs and AQI, and predict daily AQI levels to make the masses more aware of air quality and environmental sustainability.
• Further studies with explainable AI models and black-box models should be performed on AFV, SE, and AQI data to explore the relative merits of the models and use the results for recommendations.
• Models can be introduced such that one of them takes only AFV adoption into account, while another uses socioeconomic data as well, so as to compare results from a "social good" angle.
In sum, we highlight the fact that AI plays various roles in promoting social good. In our modest study here, machine learning based analyses deploying linear regression and the Pearson Correlation Coefficient shed more light on the linkage between AFV adoption, AQI, and socioeconomic factors, opening the door to further research and motivating the enactment of policies to promote more AFV usage. Our paper thus contributes its 2 cents to AI for social good.
Figure 1: Data Processing Outline
Figure 2: Semiannual AFV growth by type (NJ 2016-2022)
Figure 3: Pearson Correlation Coefficients of socioeconomic census data with AFV ranking
Figure 4: Pearson Correlation Coefficients of AQI and various socioeconomic factors
Figure 5: Predictive model for AQI scores indicating a close match between predicted and actual values
Table 1: AFV rank & socioeconomic rank in NJ Counties

SE Rank | County Name | Population | Total AFV | AFV Rank
1 | Bergen | 953819 | 28304 | 1
2 | Middlesex | 860807 | 25674 | 2
6 | Monmouth | 645354 | 19800 | 3
3 | Essex | 854917 | 19138 | 4
10 | Morris | 510981 | 16782 | 5
12 | Mercer | 385898 | 16063 | 6
8 | Camden | 523771 | 13398 | 7
11 | Burlington | 464269 | 12862 | 8
13 | Somerset | 345647 | 12722 | 9
5 | Ocean | 648998 | 12530 | 10
7 | Union | 572114 | 11995 | 11
4 | Hudson | 702463 | 11193 | 12
9 | Passaic | 518117 | 8755 | 13
14 | Gloucester | 304477 | 6331 | 14
15 | Atlantic | 274966 | 6236 | 15
18 | Hunterdon | 129924 | 5362 | 16
17 | Sussex | 145543 | 3625 | 17
20 | Cape May | 95661 | 2888 | 18
19 | Warren | 110731 | 2814 | 19
16 | Cumberland | 153627 | 2302 | 20
21 | Salem | 65046 | 1148 | 21
Acknowledgments

Appendix
Empty vehicle redistribution with time windows in autonomous taxi systems. T Babicheva, M Cebecauer, D Barth, W Burghout, L Kloul, ACM Transactions on Data Science. 21Babicheva, T., Cebecauer, M. Barth, D. Burghout, W., and Kloul, L. 2016. Empty vehicle redistribution with time windows in autonomous taxi systems. ACM Transactions on Data Science 2(1):1-22.
The distributional effects of US clean energy tax credits. Tax Policy and the Economy. S Borenstein, L W Davis, 10.1086/68559730Borenstein, S. and Davis, L. W. 2016. The distributional effects of US clean energy tax credits. Tax Policy and the Economy, 30(1), 191-234. https://doi.org/10.1086/685597
Bipartisan Infrastructure Law (BIL), enacted as the Infrastructure Investment and Jobs Act. Bipartisan Infrastructure Law (BIL), enacted as the Infrastructure Investment and Jobs Act, Pub. L. 117-58 (Nov. 15, 2021).
Minimizing latency in online ride and delivery services. A Das, S Gollapudi, A Kim, D Panigrahi, C Swamy, The Web Conference WWW. Das, A., Gollapudi, S., Kim, A., Panigrahi, D., Swamy, C. 2018. Minimizing latency in online ride and delivery services, The Web Conference WWW, pp. 379-388.
Air quality assessment from social media and structured data: Pollutants and health impacts in urban planning. X Du, O Emebo, A Varde, N Tandon, S N Chowdhury, G Weikum, IEEE International Conference on Data Engineering (ICDE) workshops. Du, X.; Emebo, O.; Varde, A.; Tandon, N.; Chowdhury, S.N.; and Weikum, G. 2016. Air quality assessment from social media and structured data: Pollutants and health impacts in urban planning. IEEE International Conference on Data Engineering (ICDE) workshops: 54-59
. 10.1109/ICDEW.2016.7495616https://doi.org/10.1109/ICDEW.2016.7495616
Public opinion matters: Mining social media text for environmental management. X Du, M Kowalski, A S Varde, G De Melo, R W Taylor, 10.1145/3352683.3352688ACM SIGWEBDu, X.; Kowalski, M.; Varde, A.S.; de Melo, G.; and Taylor, R.W. 2020. Public opinion matters: Mining social media text for environmental management. ACM SIGWEB, (Autumn): 1-15. https://doi.org/10.1145/3352683.3352688
Optimal route planning for electric vehicles in large networks. J Eisner, S Funke, S Storandt, 25 th AAAI Conference on Artificial Intelligence. Eisner, J., Funke, S., and Storandt, S. 2011. Optimal route planning for electric vehicles in large networks. In 25 th AAAI Conference on Artificial Intelligence, pp. 1108-1113.
Impact of the electric vehicles on the air pollution from a highway. F Enrico, S Alessandrini, A Balanzino, 10.1016/j.apenergy.2016.01.098Applied energy. 169Enrico, F.; Alessandrini, S.; and Balanzino, A. 2016. Impact of the electric vehicles on the air pollution from a highway. Applied energy 169 (2016): 450-459. https://doi.org/10.1016/j.apenergy.2016.01.098
EPA 430-R-22-003Environmental Protection Agency (EPA). 2022. Inventory of U.S. Greenhouse Gas Emissions and Sinks. U.S. Environmental Protection AgencyEnvironmental Protection Agency (EPA). 2022. Inventory of U.S. Greenhouse Gas Emissions and Sinks: 1990-2020. U.S. Environmental Protection Agency, EPA 430-R-22-003. https://www.epa.gov/ghgemissions/draft-inventory-us- greenhouse-gas-emissionsand-sinks-1990-2020.
Sentiment Analysis and Topic Modeling for Public Perceptions of Air Travel: COVID Issues and Policy Amendments. A Field, A Varde, P Lal, Language Resources and Evaluation Conference (LREC) workshop LEGAL. Field, A; Varde, A; and Lal, P; 2022. Sentiment Analysis and Topic Modeling for Public Perceptions of Air Travel: COVID Issues and Policy Amendments. Language Resources and Evaluation Conference (LREC) workshop LEGAL, pages 2-8, European Language Resources Association. https://aclanthology.org/2022.legal-1.2/
Social Equity Considerations in the New Age of Transportation: Electric, Automated, and Shared Mobility. K L Fleming, Journal of Science Policy & Governance. 131Fleming, K. L. 2018. Social Equity Considerations in the New Age of Transportation: Electric, Automated, and Shared Mobility. Journal of Science Policy & Governance 13(1).
Giving green to get green? Incentives and consumer adoption of hybrid vehicle technology. K S Gallagher, E Muehlegger, 10.1016/j.jeem.2010.05.004Journal of Environmental Economics and Management. 611Gallagher, K. S. and Muehlegger, E. 2011. Giving green to get green? Incentives and consumer adoption of hybrid vehicle technology. Journal of Environmental Economics and Management 61(1): 1-15. https://doi.org/10.1016/j.jeem.2010.05.004
Does air pollution stimulate electric vehicle sales? Empirical evidence from twenty major cities in China. J Guo, X Zhang, F Gu, H Zhang, Y Fan, 10.1016/j.jclepro.2019.119372Journal of Cleaner Production. 249Guo, J.; Zhang, X.; Gu, F.; Zhang, H.; and Fan, Y. 2020. Does air pollution stimulate electric vehicle sales? Empirical evidence from twenty major cities in China. Journal of Cleaner Production 249(119372). https://doi.org/10.1016/j.jclepro.2019.119372
EVs Are Not the Answer: A Mobility Justice Critique of Electric Vehicle Transitions. J Henderson, 10.1080/24694452.2020.1744422Annals of the American Association of Geographers. 1106Henderson, J. 2020. EVs Are Not the Answer: A Mobility Justice Critique of Electric Vehicle Transitions. Annals of the American Association of Geographers, 110(6):1993-2010. https://doi.org/10.1080/24694452.2020.1744422
Distributional Effects of Air Pollution from Electric Vehicle Adoption. S P Holland, E T Mansur, N Z Muller, A J Yates, 10.1086/701188Journal of the Association of Environmental and Resource Economists. 6S1Holland, S. P.; Mansur, E. T.; Muller, N. Z.; and Yates, A. J. 2019. Distributional Effects of Air Pollution from Electric Vehicle Adoption. Journal of the Association of Environmental and Resource Economists, 6(S1): S65-S94. https://doi.org/10.1086/701188
An equity analysis of clean vehicle rebate programs in California. Y Ju, L J Cushing, R Morello-Frosch, 10.1007/s10584-020-02836-wClimatic Change. 162Ju, Y.; Cushing, L. J.; and Morello-Frosch, R. 2020. An equity analysis of clean vehicle rebate programs in California. Climatic Change, 162:2087-2105 https://doi.org/10.1007/s10584-020- 02836-w
HiSAT: Hierarchical Framework for Sentiment Analysis on Twitter Data. A Kommu, S Patel, S Derosa, J Wang, A S Varde, Intelligent Systems and Applications (IntelliSysKommu, A.; Patel, S.; Derosa, S.; Wang, J.; Varde, A.S. 2022. HiSAT: Hierarchical Framework for Sentiment Analysis on Twitter Data. Intelligent Systems and Applications (IntelliSys)
. 10.1007/978-3-031-16072-1_28Lecture Notes in Networks and Systems. 542SpringerLecture Notes in Networks and Systems, vol 542. Springer, https://doi.org/10.1007/978-3-031-16072-1_28
The effect of policy incentives on electric vehicle adoption. J H Langbroek, J P Franklin, Y O Susilo, 10.1016/j.enpol.2016.03.050Energy Policy. 94Langbroek, J. H.; Franklin, J. P.; and Susilo, Y. O. 2016. The effect of policy incentives on electric vehicle adoption. Energy Policy 94:94-103. https://doi.org/10.1016/j.enpol.2016.03.050
Potential impacts of electric vehicles on air quality in Taiwan. N Li, J P Chen, I C Tsai, H Qingyang, Chi, Y Lin, T M Fu, 10.1016/j.scitotenv.2016.05.105Science of the total environment. 566Li, N.; Chen, J.P.; Tsai, I.C.; Qingyang H.; Chi, S; Lin, Y.C; and Fu, T.M. 2016. Potential impacts of electric vehicles on air quality in Taiwan." Science of the total environment 566: 919- 928. http://dx.doi.org/10.1016/j.scitotenv.2016.05.105
Electrification of light-duty vehicle fleet alone will not meet mitigation targets. B Mcnamee, A S Varde, S ; A Razniewski, I D Posen, H L Maclean, 10.1038/s41558-020-00921-7Correlating Facts and Social Media Trends on Environmental Quantities Leveraging Commonsense Reasoning and Human Sentiments. Language Resources and Evaluation Conference (LREC) workshop SALLD. 10National Climate ChangeMcNamee B.; Varde, A.S.; and Razniewski, S. 2022. Correlating Facts and Social Media Trends on Environmental Quantities Leveraging Commonsense Reasoning and Human Sentiments. Language Resources and Evaluation Conference (LREC) workshop SALLD, pages 25-30, European Language Resources Association. https://aclanthology.org/2022.salld-1.5.pdf Milovanoff, A.; Posen, I.D.; and MacLean, H.L. 2020. Electrification of light-duty vehicle fleet alone will not meet mitigation targets. National Climate Change 10: 1102-1107. https://doi.org/10.1038/s41558-020-00921-7
New Jersey's Global Warming Response Act (GWRA) 80x50 Report, Evaluating Our Progress and Identifying Pathways To Reduce Emissions 80% By 2050. Nj Gwra 2020, NJ Department of Environment Protection. Drive GreenNJ GWRA 2020, New Jersey's Global Warming Response Act (GWRA) 80x50 Report, Evaluating Our Progress and Identifying Pathways To Reduce Emissions 80% By 2050. https://www.nj.gov/dep/climatechange/docs/nj-gwra-80x50- report-2020.pdf NJDEP 2022, NJ Department of Environment Protection. Drive Green. https://nj.gov/dep/drivegreen/dg-electric-vehicles- basics.html.
Analyzing utilization rates in data centers for optimizing energy management. M Pawlish, A Varde, S Robila, 10.1109/IGCC.2012.6322248IEEE International Green Computing Conference (IGCC). Pawlish, M.; Varde, A.; and Robila, S. 2012. Analyzing utilization rates in data centers for optimizing energy management. IEEE International Green Computing Conference (IGCC) (pp. 1-6). DOI: 10.1109/IGCC.2012.6322248
. A Prasad, A S Varde, R Gottimukkala, C Alo, P Lal, Prasad, A.; Varde, A.S.; Gottimukkala, R.; Alo, C.; and Lal, P.
Analyzing Land Use Change and Climate Data to Forecast Energy Demand for a Smart Environment. 10.1109/IRSEC53969.2021.9741210IEEE International Renewable and Sustainable Energy Conference (IRSEC). Analyzing Land Use Change and Climate Data to Forecast Energy Demand for a Smart Environment. IEEE International Renewable and Sustainable Energy Conference (IRSEC), pp. 1-6. DOI: 10.1109/IRSEC53969.2021.9741210
Mapping ordinances and tweets using smart city characteristics to aid opinion mining. M Puri, X Du, A Varde, G De Melo, 10.1145/3184558.3191632The Web Conference WWW (Companion. Puri, M.; Du, X.; Varde, A.; and de Melo, G. 2018. Mapping ordinances and tweets using smart city characteristics to aid opinion mining. The Web Conference WWW (Companion Vol.), pp. 1721-1728. https://doi.org/10.1145/3184558.3191632
Enriching Smart Cities by Optimizing Electric Vehicle Ride-Sharing through Game Theory. D Radakovic, A Singh, A Varde, IEEE International Conference on Tools with Artificial Intelligence (ICTAI). Radakovic, D. Singh A., and Varde A. 2022. Enriching Smart Cities by Optimizing Electric Vehicle Ride-Sharing through Game Theory, IEEE International Conference on Tools with Artificial Intelligence (ICTAI).
Contradictions of the Climate-Friendly City: New Perspectives on Eco-Gentrification and Housing Justice. J L Rice, D A Cohen, J Long, J R Jurjevich, 10.1111/1468-2427.12740International Journal of Urban and Regional Research. 441Rice, J. L.; Cohen, D. A.; Long, J.; and Jurjevich, J. R. 2020. Contradictions of the Climate-Friendly City: New Perspectives on Eco-Gentrification and Housing Justice. International Journal of Urban and Regional Research, 44(1): 145-165. https://doi.org/10.1111/1468-2427.12740
Analysis of electric vehicle trends, development and policies in India. V Singh, V Singh, S Vaibhav, 10.1016/j.cstp.2021.06.0062213-624XCase Studies on Transport Policy. 93Singh, V.; Singh, V.; and Vaibhav, S. 2021. Analysis of electric vehicle trends, development and policies in India. Case Studies on Transport Policy 9(3):1180-1197. ISSN 2213-624X. https://doi.org/10.1016/j.cstp.2021.06.006.
Analysis of the Electric Vehicles Adoption over the United States. A Soltani-Sobh, K Heaslip, A Stevanovic, R Bosworth, D Radivojevic, 10.1016/j.trpro.2017.03.027Transportation Research Procedia. 22Soltani-Sobh, A.; Heaslip, K.; Stevanovic, A.; Bosworth, R.; and Radivojevic, D. 2017. Analysis of the Electric Vehicles Adoption over the United States. Transportation Research Procedia. 22: 203-212. https://doi.org/10.1016/j.trpro.2017.03.027
The potential impacts of electric vehicles on air quality in the urban areas of Barcelona and Madrid (Spain). A Soret, M Guevara, J M Baldasano, 10.1016/j.atmosenv.2014.09.048Atmospheric environment. 99Soret, A.; Guevara M.; and Baldasano J. M. 2014. The potential impacts of electric vehicles on air quality in the urban areas of Barcelona and Madrid (Spain). Atmospheric environment 99: 51- 63. https://doi.org/10.1016/j.atmosenv.2014.09.048
The demographics of decarbonizing transport: The influence of gender, education, occupation, age, and household size on electric mobility preferences in the Nordic region. B K Sovacool, J Kester, L Noel, G Z De Rubens, Global Environmental Change. 52Sovacool, B. K.; Kester, J.; Noel, L.; and de Rubens, G. Z. 2018. The demographics of decarbonizing transport: The influence of gender, education, occupation, age, and household size on electric mobility preferences in the Nordic region. Global Environmental Change 52:86-100.
. 10.1016/j.gloenvcha.2018.06.008https://doi.org/10.1016/j.gloenvcha.2018.06.008
Can new energy vehicles help to achieve carbon neutrality targets. C W Su, X Yuan, R Tao, M Umar, Journal of Environmental Management. 297Su, C. W.; Yuan, X.; Tao, R.; and Umar, M. 2021. Can new energy vehicles help to achieve carbon neutrality targets? Journal of Environmental Management 297(113348).
. 10.1016/j.jenvman.2021.113348https://doi.org/10.1016/j.jenvman.2021.113348
Prediction Tool on Fine Particle Pollutants and Air Quality for Environmental Engineering. A S Varde, A Pandey, X Du, 10.1007/s42979-022-01068-23Springer Nature Computer ScienceVarde, A.S.; Pandey, A.; and Du, X. 2022. Prediction Tool on Fine Particle Pollutants and Air Quality for Environmental Engineering. Springer Nature Computer Science, 3, 184. https://doi.org/10.1007/s42979-022-01068-2
Apriori algorithm and game-of-life for predictive analysis in materials science. A Varde, M Takahashi, E Rundensteiner, M O Ward, M Maniruzzaman, R D SissonJr, 10.3233/KES-2004-8405International Journal of Knowledge-based and Intelligent Engineering Systems (KES). 84Varde, A.; Takahashi, M.; Rundensteiner, E.A; Ward, M. O.; Maniruzzaman, M.; and Sisson Jr, R. D. 2004. Apriori algorithm and game-of-life for predictive analysis in materials science. International Journal of Knowledge-based and Intelligent Engineering Systems (KES), 8(4), 213-228. https://doi.org/10.3233/KES-2004-8405
Impact of Incentive Policies and Other Socio-Economic Factors on Electric Vehicle Market Share: A Panel Data Analysis from the 20 Countries. C Xue, H Zhou, Q Wu, X Wu, X Xu, 10.3390/su13052928Sustainability. 1352928Xue, C.; Zhou, H.; Wu, Q.; Wu, X.; and Xu, X. 2021. Impact of Incentive Policies and Other Socio-Economic Factors on Electric Vehicle Market Share: A Panel Data Analysis from the 20 Countries. Sustainability 13(5):2928. https://doi.org/10.3390/su13052928.
A Constraint-based Routing and Charging Methodology for Battery Electric Vehicles with Deep Reinforcement Learning. Y Zhang, M Chen Li, Y Chiang, Y Hua, 10.1109/TSG.2022.3214680IEEE Transactions on Smart Grid. Zhang, Y.; Chen Li, M.; Chiang Y; and Hua, Y. 2022. A Constraint-based Routing and Charging Methodology for Battery Electric Vehicles with Deep Reinforcement Learning, IEEE Transactions on Smart Grid, https://doi:10.1109/TSG.2022.3214680
Acronyms and Definitions in AFV Terminology
• AFV (Alternative Fuel Vehicle): Vehicle powered by fuels other than Gasoline and/or Diesel exclusively
• HEV (Hybrid Electric Vehicle): typically non-plug-in Hybrid Electric Vehicles. Examples: Toyota Prius and many others
• PHEV (Plug-in Hybrid Electric Vehicle): typically CARB Transitional Zero Emission Vehicle. Examples: Chevy Volt, Ford C-Max Energi, BMW i3 with range extender
• BEV (Battery Electric Vehicle): Examples: Tesla (all models), BMW i3, Nissan Leaf, Chevy Bolt, Honda Clarity
• PEV (Plug-in Electric Vehicles): includes both Battery Electric (BEV) and Plug-in Hybrid Vehicles (PHEV), as detailed above
• NEV (Neighborhood Electric Vehicle): otherwise known as Low Speed Vehicles; essentially street-legal golf carts limited to 25 MPH; a type of battery electric vehicle. Examples: ParCar, Columbia, Vantage, GEM
• NG (Natural Gas): typically CNG, though may include LNG and propane vehicles (we are unable to differentiate from available data). Examples: Honda Civic, Ford Econoline
| [] |
[
"A BISECTION METHOD TO SOLVE THE ELVIS PROBLEM WITH CONVEX BOUNDED VELOCITY SETS",
"A BISECTION METHOD TO SOLVE THE ELVIS PROBLEM WITH CONVEX BOUNDED VELOCITY SETS"
] | [
"Clinten A Graham [email protected] \nDEPARTMENT OF MATHEMATICS\nLOUISIANA STATE UNIVERSITY\nBATON ROUGE\n70803LAUSA\n\nJOHNS HOPKINS UNIVERSITY APPLIED PHYSICS LABORATORY, LAUREL\nMDUSA\n",
"Frederic Marazzato [email protected] \nDEPARTMENT OF MATHEMATICS\nLOUISIANA STATE UNIVERSITY\nBATON ROUGE\n70803LAUSA\n",
"ANDPeter R Wolenski \nDEPARTMENT OF MATHEMATICS\nLOUISIANA STATE UNIVERSITY\nBATON ROUGE\n70803LAUSA\n"
] | [
"DEPARTMENT OF MATHEMATICS\nLOUISIANA STATE UNIVERSITY\nBATON ROUGE\n70803LAUSA",
"JOHNS HOPKINS UNIVERSITY APPLIED PHYSICS LABORATORY, LAUREL\nMDUSA",
"DEPARTMENT OF MATHEMATICS\nLOUISIANA STATE UNIVERSITY\nBATON ROUGE\n70803LAUSA",
"DEPARTMENT OF MATHEMATICS\nLOUISIANA STATE UNIVERSITY\nBATON ROUGE\n70803LAUSA"
] | [] | The Elvis problem has been studied in[2], which proves existence of solutions. However, their computation in the non-smooth case remains unsolved. A bisection method is proposed to solve the Elvis problem in two space dimensions for general convex bounded velocity sets. The convergence rate is proved to be linear. Finally, numerical tests are performed on smooth and non-smooth velocity sets demonstrating the robustness of the algorithm. | 10.48550/arxiv.2303.08281 | [
"https://export.arxiv.org/pdf/2303.08281v1.pdf"
] | 257,532,692 | 2303.08281 | e573810c3a1a057a84467e537dd04154576d185b |
A BISECTION METHOD TO SOLVE THE ELVIS PROBLEM WITH CONVEX BOUNDED VELOCITY SETS
Clinten A Graham [email protected]
DEPARTMENT OF MATHEMATICS
LOUISIANA STATE UNIVERSITY
BATON ROUGE
70803LAUSA
JOHNS HOPKINS UNIVERSITY APPLIED PHYSICS LABORATORY, LAUREL
MDUSA
Frederic Marazzato [email protected]
DEPARTMENT OF MATHEMATICS
LOUISIANA STATE UNIVERSITY
BATON ROUGE
70803LAUSA
Peter R Wolenski
DEPARTMENT OF MATHEMATICS
LOUISIANA STATE UNIVERSITY
BATON ROUGE
70803LAUSA
A BISECTION METHOD TO SOLVE THE ELVIS PROBLEM WITH CONVEX BOUNDED VELOCITY SETS
The Elvis problem has been studied in[2], which proves existence of solutions. However, their computation in the non-smooth case remains unsolved. A bisection method is proposed to solve the Elvis problem in two space dimensions for general convex bounded velocity sets. The convergence rate is proved to be linear. Finally, numerical tests are performed on smooth and non-smooth velocity sets demonstrating the robustness of the algorithm.
INTRODUCTION
Suppose M_0, M_1 ⊆ R² are, respectively, the lower and upper half-spaces in R², and Σ = cl(M_0) ∩ cl(M_1) is the x-axis. Fix x_0 ∈ M_0 and x_1 ∈ M_1. Suppose speed parameters r_i > 0 are also fixed, associated with the domain M_i (i = 0, 1). The setting is illustrated in Figure 1. The classical Snell's Law provides a necessary condition for a trajectory to traverse from x_0 to x_1 in least time while using maximal speed r_i while in M_i. The condition says
sin(θ_0)/r_0 = sin(θ_1)/r_1 ,    (1.1)
where the θ_i's are the angles of incidence. However, (1.1) does not easily identify the point y ∈ Σ through which the optimal path passes. Rather, one usually sets this up as an elementary calculus problem: minimize |y − x_0|/r_0 + |x_1 − y|/r_1 over y := (y, 0) ∈ Σ.
This note provides an algorithm to find the optimal point y directly from (1.1) but in a much more general situation that cannot in general be reduced to elementary calculus and instead relies on Convex Analysis.
ANALYSIS OF THE PROBLEM
We stay in R 2 , but similar results hold in any dimension n. The centered balls as the (isotropic) velocity sets above are replaced by so-called Elvis velocity sets (which could be anisotropic) whose class is denoted by C 0 . A set F ∈ C 0 is by definition nonempty, closed, convex, bounded, and contains 0 in its interior. The least time a trajectory can go from 0 to some v ∈ R 2 using velocities from F is recorded by the gauge function
γ_F(v) := inf{ t > 0 : (1/t) v ∈ F } = inf{ t > 0 : v ∈ tF } ,
which is a finite-valued, positively homogeneous, convex function, see [1]. The generalized Elvis problem is
inf { γ_{F_0}(y − x_0) + γ_{F_1}(x_1 − y) : y ∈ Σ }.    (P)
This is a convex optimization problem. Optimality conditions are contained in Theorem 1, which is proved in [2].
Theorem 1. A necessary and sufficient condition for y ∈ R^n to solve (P) is the existence of ζ_0, ζ_1 ∈ R^n satisfying
ζ_0 ∈ ∂γ_{F_0}(y − x_0),    (2.1)
−ζ_1 ∈ ∂γ_{F_1}(x_1 − y), and    (2.2)
ζ_0 + ζ_1 ∈ −N_Σ(y).    (2.3)
where N Σ (y) is the normal cone of Σ at y, which in this case is always the y-axis.
Remark 2.1. Note that solutions y ∈ R n may not be unique if the velocity sets F 0 and F 1 are not strictly convex.
We call (2.3) the generalized Snell's Law, from which the classical version (1.1) can be derived as follows. Let us denote the polar of F ∈ C_0 by F°, which is defined by F° := {ζ ∈ R^n : ⟨ζ, v⟩ ≤ 1 for all v ∈ F}; it also belongs to C_0. It is shown in [2] that (2.1) and (2.2) imply that the optimal velocities are
v_0 := (y − x_0)/γ_{F_0}(y − x_0) ∈ F_0 ,   v_1 := (x_1 − y)/γ_{F_1}(x_1 − y) ∈ F_1 ,
and that
ζ_0 ∈ N_{F_0}(v_0) ,   −ζ_1 ∈ N_{F_1}(v_1) ,   and   γ_{F_0°}(ζ_0) = 1 = γ_{F_1°}(−ζ_1).    (2.4)
If F_0 = r_0 B and F_1 = r_1 B, then F_0° = (1/r_0)B and F_1° = (1/r_1)B.
The last statement in (2.4) then says
ζ_0 = (1/r_0) (sin(θ_0), cos(θ_0))   and   −ζ_1 = (1/r_1) (sin(θ_1), cos(θ_1))
for some angles θ_0, θ_1. The condition (2.3) says that the x-component of ζ_0 + ζ_1 is 0, which is (1.1). The first condition in (2.4) implies that u ↦ ⟨ζ_0, u⟩ is maximized over u ∈ F_0 at v_0, and the second condition in (2.4) implies that u ↦ ⟨−ζ_1, u⟩ is maximized over u ∈ F_1 at v_1. The angles therefore have the same geometric meaning as in Figure 1. However, the optimal y remains unknown.
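For readers who wish to experiment, the following Python sketch evaluates the gauge of a velocity set described by half-space constraints F = {u : Au ≤ b} with 0 in its interior (so b > 0 componentwise), which covers, e.g., square velocity sets such as those used later in Section 4.2. It is an illustration only, not part of the paper's Mathematica implementation.

```python
# Minimal sketch: gauge of F = {u : A u <= b} with b > 0 componentwise.
# For such F, v lies in tF iff A v <= t b, so gamma_F(v) = max(0, max_i (a_i . v)/b_i).
import numpy as np

def gauge(A, b, v):
    """Gauge gamma_F(v) of F = {u : A u <= b}, assuming b > 0 componentwise."""
    return max(0.0, float(np.max(A @ v / b)))

# Unit square velocity set: |u_1| <= 1, |u_2| <= 1
A = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
b = np.ones(4)
print(gauge(A, b, np.array([0.5, 2.0])))   # -> 2.0: it takes time 2 to reach (0.5, 2)
```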
BISECTION METHOD
We provide an algorithm to approximate solutions of P. For z = (x, y) ∈ R², let Π_Σ(z) = x be the x-component of the orthogonal projection onto Σ. It is assumed hereafter that Π_Σ(x_0) < Π_Σ(x_1). The goal of the algorithm is to compute the optimal y = (y, 0) that solves P. As the problem is not very regular, the existence of second-order gradients is not assured. Therefore, the gradient-free bisection method is chosen.
3.1. The algorithm. Let ε > 0 be a tolerance parameter. Algorithm 1 is used to approximate a solution of P .
Algorithm 1 Bisection
(1) Set l_0 := Π_Σ(x_0), r_0 := Π_Σ(x_1), and y_0 := (1/2)(l_0 + r_0).
(2) For k ∈ N*, let y^k = (y_k, 0) and
    v_0^k := y^k − x_0 and γ_0^k := γ_{F_0}(v_0^k),
    v_1^k := x_1 − y^k and γ_1^k := γ_{F_1}(v_1^k).
(3) Calculate (or choose any) ζ_0^k, ζ_1^k so that
    ζ_0^k ∈ N_{F_0}(v_0^k / γ_0^k) with γ_{F_0°}(ζ_0^k) = 1,
    −ζ_1^k ∈ N_{F_1}(v_1^k / γ_1^k) with γ_{F_1°}(−ζ_1^k) = 1.
(4) Set δ = Π_Σ(ζ_0^k + ζ_1^k).
    (a) If |δ| ≤ ε, then the process terminates and y^k is the solution.
    (b) If δ < −ε, then set l_{k+1} := y_k, y_{k+1} := (1/2)(y_k + r_k), and r_{k+1} := r_k, and start over at step 2.
    (c) If δ > ε, then set r_{k+1} := y_k, y_{k+1} := (1/2)(l_k + y_k), and l_{k+1} := l_k, and start over at step 2.

3.2. Convergence proof. Let d_k := r_k − l_k > 0. Thus, d_0 = Π_Σ(x_1) − Π_Σ(x_0). By construction, one has d_{k+1} = d_k/2. Therefore,
|y_k − y| ≤ d_k = d_0/2^k .
Thus, y_k → y when k → ∞ with a linear convergence rate.
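The following Python sketch (not the authors' Mathematica code) implements Algorithm 1 for the classical isotropic case F_i = r_i B, where γ_{F_i}(v) = |v|/r_i and the vectors in step (3) have the closed form ζ_0 = v_0/(r_0 |v_0|) and −ζ_1 = v_1/(r_1 |v_1|).

```python
# Bisection sketch for the Elvis problem with circular velocity sets F_i = r_i * (unit ball).
import numpy as np

def elvis_bisection(x0, x1, r0, r1, eps=1e-12, max_iter=200):
    x0, x1 = np.asarray(x0, float), np.asarray(x1, float)
    l, r = x0[0], x1[0]                      # assumes Pi_Sigma(x0) < Pi_Sigma(x1)
    y = 0.5 * (l + r)
    for _ in range(max_iter):
        yk = np.array([y, 0.0])
        v0, v1 = yk - x0, x1 - yk
        zeta0 = v0 / (r0 * np.linalg.norm(v0))
        zeta1 = -v1 / (r1 * np.linalg.norm(v1))
        delta = zeta0[0] + zeta1[0]          # x-component of zeta_0 + zeta_1
        if abs(delta) <= eps:
            break
        if delta < 0:                        # crossing point lies to the right of y
            l, y = y, 0.5 * (y + r)
        else:                                # crossing point lies to the left of y
            r, y = y, 0.5 * (l + y)
    return y

# Equal speeds give the straight-line crossing at the midpoint abscissa:
print(elvis_bisection((-1.0, -1.0), (1.0, 1.0), 1.0, 1.0))   # ~ 0.0
```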
NUMERICAL RESULTS
We first verify that the convergence rate is linear and then provide an example with non-strictly convex and non-smooth velocity sets.
4.1. Elliptic velocity sets. We take ε to be the machine error. The test case consists in having x_0 := (−1, −1) and x_1 := (1, 1). The velocity sets are the following ellipses: F_0 := (cos(t), sin(t)/2) and F_1 := (2 cos(t), sin(t)), for t ∈ [0, 2π].
The green curve in Figure 2 represents Π Σ (N F 0 − N F 1 ). The reference solution is computed as the abscissa of the unique point of the green curve in Figure 2 which has a zero ordinate. The reference solution is thus computed as
y ref := −0.401.
The algorithm finds the result up to ε, as shown in the convergence plot of the error in Figure 3. A linear regression gives a variance estimated at 5.2 · 10^−3 for a line of equation log(|y_k − y_ref|) = −0.88 − 0.0011k, thus confirming the linear convergence of the algorithm.
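The slope extraction used for this kind of convergence check can be sketched as follows; here the theoretical bisection bound d_k = d_0/2^k stands in for the measured errors |y_k − y_ref|.

```python
# Fit a line to log10(error) versus iteration k and read off the slope.
import numpy as np

d0 = 2.0                          # initial bracket width Pi(x1) - Pi(x0)
k = np.arange(1, 40)
errs = d0 / 2.0**k                # replace with the measured |y_k - y_ref| values
slope, intercept = np.polyfit(k, np.log10(errs), 1)
print(slope, intercept)           # slope ~ -log10(2) for this synthetic bound, i.e. a linear rate
```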
Polyhedral velocity sets.
We take ε to be the machine error. The test case consists in having x_0 := (0, −1) and x_1 considered in several regions. The velocity sets are the squares plotted in Figure 4, both containing the origin. These velocity sets were constructed to give rise to three regions of solutions, as can be seen in the raycast propagation in Figure 5. When Π_Σ(x_1) < −1 there is no unique solution. For x_1 = (−2, 1) the algorithm halts in two steps at y_ref = −1.5, the first solution it reaches in this region. When −1 ≤ Π_Σ(x_1) ≤ 1 it can be shown that the optimal crossing point is Π_Σ(x_1). In this case the algorithm continuously bisects until it reaches within ε of the optimal solution. Figure 6 shows convergence when x_1 = (0.5, 1) and
y_ref = 0.5.
CONCLUSION
We have presented a bisection algorithm to solve the Elvis problem for general convex velocity sets and proved that it converges at a linear rate. We then showed, in the case of elliptical velocity sets, that the proven convergence rate is optimal. Finally, we applied the algorithm to non-smooth velocity sets.
Future work includes generalizing this algorithm to n space dimensions. Also, a more efficient algorithm could be designed along the lines of the Newton-Raphson method to obtain quadratic convergence. However, that will require the use of second-order optimality conditions, which are not common in convex analysis.
CODE AVAILABILITY
The algorithm has been implemented in Mathematica and is available at https://github.com/clintg105/Elvis-Trajectory-Optimization.git.
FIGURE 1. The angles of incidence and Snell's Law
Remark 3.1. Note that if Π_Σ(x_0) > Π_Σ(x_1), then tests (b) and (c) in step (4) of the algorithm are reversed.
FIGURE 2. Elliptic velocity sets: Reference solution.
FIGURE 3. Elliptic velocity sets: Convergence plot.
FIGURE 4. Polyhedral velocity sets: F_0, F_1 (left, right)
FIGURE 6. Polyhedral velocity sets: Convergence plot
R T Rockafellar, Convex analysis. Princeton university pressR. T. Rockafellar. Convex analysis. Princeton university press, 2015.
The generalized Elvis problem: Solving minimal time problems in anisotropic mediums. P R Wolenski, 2021 60th IEEE Conference on Decision and Control (CDC). IEEEP. R. Wolenski. The generalized Elvis problem: Solving minimal time problems in anisotropic mediums. In 2021 60th IEEE Conference on Decision and Control (CDC), pages 4552-4557. IEEE, 2021.
| [] |
[
"Exploring the τ polarization in B → Xτν along different axes",
"Exploring the τ polarization in B → Xτν along different axes"
] | [
"Florian U Bernlochner \nPhysikalisches Institut\nRheinischen Friedrich-Wilhelms-Universität Bonn\n53115BonnGermany\n",
"Zoltan Ligeti \nErnest Orlando Lawrence Berkeley National Laboratory\nUniversity of California\n94720BerkeleyCAUSA\n\nBerkeley Center for Theoretical Physics\nDepartment of Physics\nUniversity of California\n94720BerkeleyCAUSA\n",
"Michele Papucci \nBurke Institute for Theoretical Physics\nCalifornia Institute of Technology\n91125PasadenaCAUSA\n",
"Dean J Robinson \nErnest Orlando Lawrence Berkeley National Laboratory\nUniversity of California\n94720BerkeleyCAUSA\n\nBerkeley Center for Theoretical Physics\nDepartment of Physics\nUniversity of California\n94720BerkeleyCAUSA\n"
] | [
"Physikalisches Institut\nRheinischen Friedrich-Wilhelms-Universität Bonn\n53115BonnGermany",
"Ernest Orlando Lawrence Berkeley National Laboratory\nUniversity of California\n94720BerkeleyCAUSA",
"Berkeley Center for Theoretical Physics\nDepartment of Physics\nUniversity of California\n94720BerkeleyCAUSA",
"Burke Institute for Theoretical Physics\nCalifornia Institute of Technology\n91125PasadenaCAUSA",
"Ernest Orlando Lawrence Berkeley National Laboratory\nUniversity of California\n94720BerkeleyCAUSA",
"Berkeley Center for Theoretical Physics\nDepartment of Physics\nUniversity of California\n94720BerkeleyCAUSA"
] | [] | The τ polarization in semileptonic B decays provides probes of new physics complementary to decay rate distributions of the three-body final state. Prior calculations for inclusive decays used a definition for the polarization axis that is different from the choice used in calculations (and the only measurement) for exclusive channels. To compare inclusive and exclusive predictions, we calculate the τ polarization in inclusive B → Xτν using the same choice as in the exclusive decays, and construct a sum rule relating the inclusive τ polarization to a weighted sum of exclusive decay polarizations. We use this relation, experimental data, and theoretical predictions for the decays to the lightest charm or up-type hadrons to make predictions for excited channels. | 10.1103/physrevd.107.096008 | [
"https://export.arxiv.org/pdf/2302.04764v2.pdf"
] | 256,697,642 | 2302.04764 | 03cd726178fc23de4e7a31686bc761fc133fbc2f |
Exploring the τ polarization in B → Xτν along different axes
Florian U Bernlochner
Physikalisches Institut
Rheinischen Friedrich-Wilhelms-Universität Bonn
53115BonnGermany
Zoltan Ligeti
Ernest Orlando Lawrence Berkeley National Laboratory
University of California
94720BerkeleyCAUSA
Berkeley Center for Theoretical Physics
Department of Physics
University of California
94720BerkeleyCAUSA
Michele Papucci
Burke Institute for Theoretical Physics
California Institute of Technology
91125PasadenaCAUSA
Dean J Robinson
Ernest Orlando Lawrence Berkeley National Laboratory
University of California
94720BerkeleyCAUSA
Berkeley Center for Theoretical Physics
Department of Physics
University of California
94720BerkeleyCAUSA
Exploring the τ polarization in B → Xτν along different axes
The τ polarization in semileptonic B decays provides probes of new physics complementary to decay rate distributions of the three-body final state. Prior calculations for inclusive decays used a definition for the polarization axis that is different from the choice used in calculations (and the only measurement) for exclusive channels. To compare inclusive and exclusive predictions, we calculate the τ polarization in inclusive B → Xτν using the same choice as in the exclusive decays, and construct a sum rule relating the inclusive τ polarization to a weighted sum of exclusive decay polarizations. We use this relation, experimental data, and theoretical predictions for the decays to the lightest charm or up-type hadrons to make predictions for excited channels.
I. INTRODUCTION
Semileptonic B decays to τ leptons have received immense attention over the last decade because of tensions between BaBar, Belle, and LHCb measurements of ratios sensitive to lepton flavor universality (LFU) violation and the standard model (SM) expectations [1]. For b → cτν decays, the subsequent decay of the τ within the detector allows measurement of the τ polarization fraction, P τ = [Γ(s τ = +) − Γ(s τ = −)]/Γ, where s τ is the τ spin projection along a given polarization axis and Γ is the total rate. The τ polarization fraction (hereafter just 'polarization') depends on the hadronic final state, and is sensitive to beyond SM contributions, providing a probe of new physics complementary to the branching ratios or differential distributions of the three-body final state (treating the τ as stable).
The definition of the polarization depends on the choice of the polarization axis for s τ . It has been conventional to define the τ polarization in inclusive B → Xτν decays, P τ (X), by choosing the polarization axis to be the direction of the τ momentum in the B rest frame, p τ /| p τ | [2][3][4][5][6]. This is equivalent to choosing − p B /| p B | in the τ rest frame, and we therefore call this the p B polarization axis (PA-B) convention. Figure 1 illustrates this choice for a generic B → Xτν decay (for X any hadronic system). By contrast, prior exclusive calculations choose the polarization axis to be p τ /| p τ | in the dilepton rest frame [7][8][9][10]. In this frame with this choice, the τ spin basis (anti)aligns with the neutrino helicity basis, leading to the simplification that in the SM the s τ = + amplitude is exclusively proportional to the τ mass, m τ ; i.e., the s τ = − amplitude contains no m τ -dependent terms. This polarization axis choice is equivalent to − pν/| pν| in the τ rest frame (see also Fig. 1), and we therefore call this the pν polarization axis (PA-ν) convention.
The only polarization measurement to date was performed in B → D*τν, using single prong τ → πν and τ → ρν decays, and using the PA-ν convention to define the polarization, P_τ(D*) = −0.38 ± 0.51 +0.21/−0.16 [11].
As shown in Figure 1, one could also define polarizations projecting the τ spin on the p X direction, or the direction transverse to the plane spanned by p B , p X , and pν. A nonzero τ polarization in this transverse direction, x, violates CP [12][13][14][15][16][17]. It therefore vanishes in the SM, but could be generated by new physics. In order to compare the prediction for inclusive P τ (X) with the (weighted sum of) predictions for exclusive channels, it is necessary to derive predictions for the τ polarization in B → Xτν in the PA-ν convention: this is the purpose of this work. We further show that one may construct a sum rule, which, when combined with experimental data for exclusive decays to the lightest charmed mesons in the final state, may be used to make predictions for the (average) τ polarization in excited channels.
II. THE INCLUSIVE CALCULATION
In the PA-B convention, following the notation of Ref. [3], one decomposes the partial decay rates for τ spin projection "up" (s τ = +) or "down" (s τ = −) as
Γ(B → X τ(s_τ = ±) ν) = Γ/2 ± Γ̃ .    (1)
The τ polarization in the PA-B convention is then P_τ(X) = 2Γ̃/Γ. For the PA-ν convention, in order to distinguish from Γ̃ in Eq. (1), we write instead
Γ(B → X τ(s_τ = ±) ν) = Γ/2 ± Γ̂ .    (2)
Then the polarization fraction becomes P_τ(X) = 2Γ̂/Γ. (We emphasize that s_τ = ± has different meanings in Eqs. (1) and (2), as introduced above.) We define the kinematic variables
q̂² = q²/m_b² ,    y = 2E_τ/m_b ,    x = 2E_ν/m_b ,    (3)
where E τ and E ν are the energies of the respective particles in the B rest frame. We also define the mass ratios
ρ = m_j²/m_b² ,    ρ_τ = m_τ²/m_b² ,    (4)
where j = c , u. Performing the OPE [18][19][20][21], we find (for notations, see Ref. [22]),
(1/Γ₀) d³Γ̂/(dq̂² dy dx) = 6 Θ( x − 2(q̂² − ρ_τ)/(y + √(y² − 4ρ_τ)) ) Θ( 2(q̂² − ρ_τ)/(y − √(y² − 4ρ_τ)) − x )
× { (q̂² − ρ_τ)(−2W_1 + W_2 − yW_3 + ρ_τ W_4) − x [ yW_2 − (q̂² + ρ_τ)W_3 − 2ρ_τ W_5 ] + [2x²ρ_τ/(q̂² − ρ_τ)] W_2 } .    (5)
Here Γ₀ = (|V_jb|² G_F² m_b⁵)/(192π³).
Integrating over x and q̂² gives,
1 Γ 0 d Γ dy = y 2 − 4ρ τ 3x 2 0 y 2 − 2y (1 + 3ρ τ ) + 4ρ τ (2 + ρ τ ) + x 3 0 3y (1 + ρ τ ) − y 2 − 8ρ τ + 12x 2 0 (1 + ρ τ − y) 2 X + λ 2 x 0 m 2 b (1 + ρ τ − y) 12y(3 + 17ρ τ + 5ρ 2 τ ) − 30y 2 (1 + 3ρ τ ) + 15y 3 − 48ρ τ (4 + ρ τ ) + 3x 0 − 2y(12 + 58ρ τ + 25ρ 2 τ ) + y 2 (17 + 45ρ τ ) − 5y 3 + 2ρ τ (55 + 21ρ τ + 10ρ 2 τ ) + 5x 2 0 2y(3 + 7ρ τ ) − 4y 2 (1 + ρ τ ) + y 3 − 4ρ τ (5 − ρ τ ) + 12(1 + ρ τ − y) 2 (1 + 5ρ τ − 5ρ) X + λ 1 3m 2 b (1 + ρ τ − y) 2 − 24ρ τ (1 + 3ρ τ )y − 12ρ τ (1 + ρ τ )y 2 + 6(1 + 3ρ τ )y 3 − 3y 4 + 48ρ 2 τ (2 + ρ τ ) + 6x 0 − 2ρ τ y(1 − 8ρ τ − ρ 2 τ ) − y 2 (5 − 2ρ τ + 5ρ 2 τ ) + y 3 (3 + ρ τ ) − y 4 + 16ρ τ (1 − 2ρ τ ) + 3x 2 0 − 4ρ τ y(18 + 29ρ τ + 7ρ 2 τ ) + y 2 (15 + 52ρ τ + 43ρ 2 τ ) − 8y 3 (1 + 2ρ τ ) + 2y 4 − 2ρ τ (7 − 70ρ τ − 9ρ 2 τ − 4ρ 3 τ ) + 2x 3 0 40ρ τ y(1 + ρ τ ) − 2y 2 (5 + 11ρ τ + 5ρ 2 τ ) + 5y 3 (1 + ρ τ ) − y 4 + 2ρ τ (5 − 38ρ τ + 5ρ 2 τ ) + 12(1 + ρ τ − y) 2 2(1 − ρ) 2 − 3y(1 + ρ τ − ρ) + 2ρ τ (4 + ρ τ − 2ρ) X ,(6)where x 0 = 1 − ρ/(1 + ρ τ − y) as in Ref.X = ρ τ y 2 − 4ρ τ ln y − 2ρ τ + y 2 − 4ρ τ y − 2ρ τ − y 2 − 4ρ τ .(7)
For completeness, we also derive the q̂² dependence of the τ polarization (dΓ/dq̂² is given in Ref. [23]),
(1/Γ₀) dΓ̂/dq̂² = [1 + (λ₁ + 15λ₂)/(2m_b²)] √((1 + ρ − q̂²)² − 4ρ) [(q̂² − ρ_τ)²/q̂⁶] [ 2q̂⁶ − q̂²(1 + ρ + q̂²)(1 + ρ + ρ_τ) + 2ρ_τ(1 − ρ)² + 4ρq̂² ]
+ [6λ₂/m_b²] [(q̂² − ρ_τ)² / (q̂⁶ √((1 + ρ − q̂²)² − 4ρ))] [ 2q̂⁶ρ − q̂⁴((1 − ρ)² + ρ_τ(3 + ρ)) + (q̂² − 2ρ_τ)(1 − ρ)³ + q̂²ρ_τ(1 − ρ)(5 + ρ) ] .    (8)
Integrating Eq. (6) over y or Eq. (8) over q̂²,
Γ Γ 0 = − 1 + λ 1 + 3λ 2 2m 2 b 1 6 √ λ 3(1 + ρ)(1 + ρ 2 − 8ρ) + 47ρ τ (1 − ρ + ρ 2 ) − 5ρ τ ρ + 11(1 + ρ)ρ 2 τ − ρ 3 τ + 4ρ 2 3 + ρ τ (3ρ τ + 2ρ − 6) ln f j − 4ρ τ (1 − ρ) 2(1 − ρ) 2 + 3ρ τ (1 + ρ) ln f τ (9) + λ 2 m 2 b √ λ 3(1 − ρ) 3 + ρ τ (1 − ρ)(47ρ − 5) + ρ 2 τ (1 − 11ρ) + ρ 3 τ − 24ρ τ ρ 2(1 − ρ) 2 − ρ τ (2 − 3ρ) ln(f j f τ ) , where f j,τ = x j,τ + x 2 j,τ − 1, x j = (1 + ρ − ρ τ )/(2 √ ρ), x τ = (1 + ρ τ − ρ)/(2 √ ρ τ ), and λ = 1 − 2(ρ + ρ τ ) + (ρ − ρ τ ) 2 = 4ρ(x 2 j − 1) = 4ρ τ (x 2 τ − 1)
The fact that $\lambda_1$ enters $d\widehat\Gamma/d\hat q^2$ and $\widehat\Gamma$ in Eqs. (8) and (9) as $1 + \lambda_1/(2m_b^2)$ follows from reparametrization invariance [24]. ($\widetilde\Gamma$, however, does not have such a structure [3].) The terms proportional to $\lambda_1$ can be obtained by "averaging" over the residual motion of the $b$ quark in the $B$ meson (i.e., writing $p_b = m_b v + k$ and averaging over $k$), which leaves $q$ unaffected [21]. Therefore, $\vec s_\tau \cdot \vec p_\nu/|\vec p_\nu|$ (in the τ rest frame) is also unchanged, resulting in the $1 + \lambda_1/(2m_b^2)$ structure. At the same time, $\vec s_\tau \cdot \vec p_B/|\vec p_B|$, which defines $\widetilde\Gamma$, is altered, and hence $\widetilde\Gamma$ does not have the simple $1 + \lambda_1/(2m_b^2)$ structure. The limit of vanishing final-state quark mass, $B \to X_u\tau\bar\nu$, has additional interesting features, in that the $b$-quark distribution function in the $B$ meson plays an enhanced role compared to that in $B \to X_u\ell\bar\nu$ [6]. This arises due to the combination of the facts that (i) the $b \to u$ semileptonic decay rate at maximal $E_\tau$ does not vanish at the free-quark decay level and (ii) the phase space is restricted because of the τ mass.
The m c → 0 limit of Eq. (6) generates singular distributions (i.e., terms containing δ(1 + ρ τ − y) and its derivatives),
$$\frac{1}{\Gamma_0}\frac{d\widehat\Gamma_u}{dy} = \bigg\{\sqrt{y^2-4\rho_\tau}\,\big[{-y}(3-2y) + \rho_\tau(16-15y) + 12\rho_\tau^2\big] + 12(1+\rho_\tau-y)^2 X + \frac{\lambda_1}{3m_b^2}\Big[{-5y^2} + \rho_\tau(74-24y) + 24\rho_\tau^2 + 12\big(2 - 3y + \rho_\tau(8-3y) + 2\rho_\tau^2\big)X\Big] + \frac{\lambda_2}{m_b^2}\Big[{-y}(6+5y) + \rho_\tau(38-30y) + 60\rho_\tau^2 + 12(1+5\rho_\tau)(1+\rho_\tau-y)\,X\Big]\bigg\}\,\theta(1+\rho_\tau-y) + \Big[\frac{\lambda_2}{2m_b^2}(11-5\rho_\tau) + \frac{\lambda_1}{6m_b^2}(1-11\rho_\tau)\Big](1-\rho_\tau)^3\,\delta(1+\rho_\tau-y) + \frac{\lambda_1}{6m_b^2}(1-\rho_\tau)^5\,\delta'(1+\rho_\tau-y). \qquad (10)$$
For completeness, the $\hat q^2$ distribution of the τ polarization is ($d\Gamma_u/d\hat q^2$ and $d\widetilde\Gamma_u/d\hat q^2$ are given in Ref. [6]),
$$\frac{1}{\Gamma_0}\frac{d\widehat\Gamma_u}{d\hat q^2} = -\Big(1 + \frac{\lambda_1 + 3\lambda_2}{2m_b^2}\Big)\frac{(1-\hat q^2)^2}{\hat q^6}\,(\hat q^2-\rho_\tau)^2\big[\hat q^2(1+2\hat q^2) - \rho_\tau(2+\hat q^2)\big] + \frac{6\lambda_2}{m_b^2}\,(\hat q^2-\rho_\tau)^2\,(3 - 2\hat q^2 + \rho_\tau). \qquad (11)$$
Integrating over y, or taking the m c → 0 limit of Eq. (9) gives,
$$\frac{\widehat\Gamma_u}{\Gamma_0} = -\Big(1 + \frac{\lambda_1 + 3\lambda_2}{2m_b^2}\Big)\Big(\frac12 + \frac{22\rho_\tau}{3} - 6\rho_\tau^2 - 2\rho_\tau^3 + \frac{\rho_\tau^4}{6} + 2\rho_\tau(2+3\rho_\tau)\ln\rho_\tau\Big) + \frac{\lambda_2}{m_b^2}(1-\rho_\tau)^3(3+\rho_\tau). \qquad (12)$$
This limit is smooth, unlike the $m_c \to 0$ limit of Eq. (6). For $\rho_\tau = 0$, these results satisfy $-2\widehat\Gamma = \Gamma$, i.e., $P_\tau(X) = -1$, independent of the final-state quark mass. This occurs because in the SM the leptons produced by the charged-current electroweak interaction are purely left handed in the massless limit.
Since the $s_\tau = +$ amplitude is exclusively proportional to the lepton mass, $d\widehat\Gamma/d\hat q^2$ in Eqs. (8) and (11) obeys
$$\frac{2}{(\hat q^2-\rho_\tau)^2}\,\frac{d\widehat\Gamma}{d\hat q^2} = -\,\frac{1}{(\hat q^2-\rho_\tau)^2}\,\frac{d\Gamma}{d\hat q^2}\bigg|_{\rho_\tau \to -\rho_\tau}. \qquad (13)$$
This relation holds in the SM to all orders.
In addition, angular momentum conservation in $B \to X_u\tau\bar\nu$ implies that the τ polarization is fully left handed at maximal $E_\tau$. The power-suppressed terms that enter at order $\Lambda_{\rm QCD}^2/m_b^2$ also account for the nonperturbative shift of the $E_\tau$ endpoint from the parton level to the hadron level. As a result, the physical rate at maximal $E_\tau$ vanishes, although it is nonzero at the endpoint at the parton level. It was argued in Ref. [6] that only the most singular terms among the nonperturbative corrections need to satisfy $-2\,d\widetilde\Gamma_u = d\Gamma_u$. Correspondingly, Eq. (10) shows that the $\lambda_1\delta(1+\rho_\tau-y)$ term changes between the two conventions of the τ polarization fraction, $2\widetilde\Gamma/\Gamma$ and $2\widehat\Gamma/\Gamma$. However, the most singular $\lambda_1\delta'(1+\rho_\tau-y)$ and $\lambda_2\delta(1+\rho_\tau-y)$ terms are identical in $d\widetilde\Gamma_u/dy$ and $d\widehat\Gamma_u/dy$, and these terms are equal to $-1/2$ times the corresponding terms in $d\Gamma_u/dy$ [6].
The $O(\alpha_s)$ perturbative corrections are known for the differential rate and the τ polarization in the PA-B convention [5, 25], but they have not been computed for the τ polarization defined in the PA-ν convention. We have not calculated the $O(\alpha_s)$ perturbative corrections to $\widehat\Gamma$. However, based on the results for $\Gamma$ and $\widetilde\Gamma$, we expect such $O(\alpha_s)$ corrections to modify the polarization, $2\widehat\Gamma/\Gamma$, below the percent level (except very near the endpoints of the kinematic distributions).
We do not study in this paper endpoint regions of differential distributions of the τ polarization fraction. We expect, similar to the differential rates, that at fixed order in the operator product expansion (OPE) reliable predictions cannot be made very near maximal q 2 or E τ . Near maximal E τ these effects are related to the b-quark distribution function in the B meson (sometimes called the shape function). The OPE also breaks down near maximal q 2 [23,26,27] because the expansion parameter related to the energy release becomes small. The upper limits of q 2 only differ at second order, by O(Λ 2 QCD ), between the lowest order in the OPE, (m b − m c ) 2 , and the endpoint at the hadron level, (m B − m D ) 2 . The lepton energy endpoint, however, is shifted at first order, by O(Λ QCD ).
III. NUMERICAL RESULTS AND IMPLICATIONS
In the PA-B polarization axis convention, $P_\tau(X_c) \approx -0.71$ [3] and $P_\tau(X_u) \approx -0.77$ [6] for $B \to X_c\tau\bar\nu$ and $B \to X_u\tau\bar\nu$ decays, respectively. Using $m_b = 4.7$ GeV, $m_c = 1.3$ GeV, $m_\tau = 1.777$ GeV, and expanding to linear order in $\lambda_{1,2}$, we find in the PA-ν convention
$$P_\tau(X_c) = 2\widehat\Gamma/\Gamma = -0.30 + 0.44\,\lambda_2 \approx -0.24\,, \qquad (14)$$
$$P_\tau(X_u) = 2\widehat\Gamma_u/\Gamma_u = -0.40 + 0.33\,\lambda_2 \approx -0.36\,. \qquad (15)$$
Note that $\lambda_1$ drops out at this order, as it enters both $\Gamma$ and $\widehat\Gamma$ as $1 + \lambda_1/(2m_b^2)$. Using $\lambda_2 = 0.12\ {\rm GeV}^2$, the corresponding second-order terms alter the polarization by nearly 18% and 10% in $B \to X_c\tau\bar\nu$ and $X_u\tau\bar\nu$, respectively, compared to the lowest order contributions. The reason is that the reduced phase space (due to $m_\tau$) enhances the importance of the $\lambda_2$ terms, and $P_\tau(X_c)$ and $P_\tau(X_u)$ have somewhat small values at lowest order. (Similar reasons led the authors of Ref.
[5] to consider the O(α s ) corrections relative to 1 − P τ , which is an O(1) quantity everywhere in phase space, rather than P τ itself.) Hence, these seemingly large corrections do not indicate that the OPE breaks down, and we estimate higher-order corrections to be smaller, impacting the results in Eqs. (14) and (15) at or below the 0.02 level.
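These shifts are easy to reproduce; the following minimal Python check uses only the linearized coefficients of Eqs. (14)-(15) and $\lambda_2 = 0.12\ {\rm GeV}^2$ quoted above (our own script, shown purely as a cross-check of the arithmetic):

```python
# Cross-check of the lambda_2 shifts quoted for Eqs. (14) and (15).
lam2 = 0.12  # GeV^2

P_Xc = -0.30 + 0.44 * lam2   # Eq. (14): ~ -0.247, quoted as -0.24
P_Xu = -0.40 + 0.33 * lam2   # Eq. (15): ~ -0.360, quoted as -0.36

shift_c = 0.44 * lam2 / 0.30  # relative to the lowest-order value 0.30
shift_u = 0.33 * lam2 / 0.40  # relative to the lowest-order value 0.40

print(f"P_tau(Xc) ~ {P_Xc:.3f}, P_tau(Xu) ~ {P_Xu:.3f}")
print(f"relative lambda_2 shifts: {shift_c:.1%}, {shift_u:.1%}")  # ~18%, ~10%
```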
In a recent fit of the form factors to $B \to D^{(*)}\ell\bar\nu$ data ($\ell = e, \mu$), Ref. [28] obtained
$$P_\tau(D) = 0.323 \pm 0.003\,, \qquad P_\tau(D^*) = -0.494 \pm 0.005\,, \qquad (16)$$
with a correlation of $\rho = 0.189$. From the fit results of Refs. [29, 30] we predict for the four $D^{**}$ states:
$$P_\tau(D_0^*) = 0.10 \pm 0.02\,, \quad P_\tau(D_1^*) = -0.10 \pm 0.02\,, \quad P_\tau(D_1) = -0.22 \pm 0.04\,, \quad P_\tau(D_2^*) = -0.33 \pm 0.04\,. \qquad (17)$$
The inclusive polarization can be written as a weighted sum over exclusive polarization fractions, yielding a sum rule
$$P_\tau(X_c) = \frac{\sum_{H_c} B(B \to H_c\,\tau\bar\nu)\,P_\tau(H_c)}{B(B \to X_c\,\tau\bar\nu)}\,. \qquad (18)$$
The semitauonic branching fractions to $D^{(*)}$ and $D^{**}$ have not been precisely measured. Therefore, we combine branching ratio measurements for the light-lepton semileptonic modes with SM predictions for the LFU ratios $R(H) = B(B \to H\tau\bar\nu)/B(B \to H\ell\bar\nu)$ to predict the semitauonic branching ratios. For the exclusive modes, we use predictions from the same fits as in Eqs. (16) and (17); hence, within each heavy quark spin symmetry doublet, the two $R(H)$ and two $P_\tau(H)$ predictions are correlated. These inputs and the predictions for the semitauonic branching ratios are shown in Table I. (For the inclusive prediction, using the different evaluations $R(X_c) = 0.221 \pm 0.004$ [31] and/or $B(B \to X_c\ell\bar\nu) = (10.48 \pm 0.13)\%$ [32] results in slightly different predictions: $B(B \to X_c\tau\bar\nu) = (2.34 \pm 0.06)\%$ [23, 32], $(2.32 \pm 0.06)\%$ [31, 32], or $(2.35 \pm 0.06)\%$ [1, 31].) The resulting contribution of the six lightest charm mesons in Eqs. (16)-(17) to the inclusive polarization fraction is
$$P_\tau(D^{(*,**)}) = \frac{\sum_{D, D^*, D^{**}} B(B \to H_c\,\tau\bar\nu)\,P_\tau(H_c)}{B(B \to X_c\,\tau\bar\nu)} = -0.190 \pm 0.007\,. \qquad (19)$$
Assuming that the remaining charm states that saturate the inclusive $B \to X_c\tau\bar\nu$ width all yield τ leptons with maximal (minimal) polarization, $P_\tau = +1\ (-1)$, results in an upper (lower) bound for $P_\tau(X_c)$. One finds
$$P_\tau^{\rm min}(X_c) = -0.31 \pm 0.03\,, \qquad P_\tau^{\rm max}(X_c) = -0.07 \pm 0.03\,. \qquad (20)$$
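The central values in Eqs. (19) and (20) follow directly from Table I and Eqs. (16)-(17); the short Python sketch below reproduces that arithmetic (our own script, using central values only and ignoring uncertainties and correlations):

```python
# Central-value check of Eqs. (19) and (20) from Table I and Eqs. (16)-(17).
# Branching ratios B(B -> H_c tau nu) in percent, paired with P_tau(H_c).
channels = {
    "D":    (0.65,   0.323),
    "D*":   (1.30,  -0.494),
    "D*_0": (0.032,  0.10),
    "D_1":  (0.010, -0.22),
    "D*_1": (0.064, -0.10),
    "D*_2": (0.021, -0.33),
}
B_inclusive = 2.37  # B(B -> X_c tau nu) in percent

weighted = sum(br * pol for br, pol in channels.values())
B_six = sum(br for br, _ in channels.values())

P_six = weighted / B_inclusive                              # Eq. (19): ~ -0.19
P_max = (weighted + (B_inclusive - B_six)) / B_inclusive    # Eq. (20): ~ -0.07
P_min = (weighted - (B_inclusive - B_six)) / B_inclusive    # Eq. (20): ~ -0.31

print(P_six, P_min, P_max)
```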
This is consistent with the prediction in Eq. (14). Turning the sum rule in Eq. (18) around, we can use the inclusive polarization prediction in Eq. (14) to predict the branching-ratio-weighted average polarization of higher excited charm states,
$$P_\tau\big(X_c^{{\rm non-}D^{(*,**)}}\big) = -0.41^{+0.07}_{-0.09}\,. \qquad (21)$$
Figure 2 summarizes our predictions for $P_\tau$ in inclusive and exclusive decays in the PA-ν convention.

Next, we consider the analog of the sum rule in Eq. (18) for $P_\tau(X_u)$. Predictions for the τ polarization and LFU ratios in exclusive charmless semitauonic decays to the lightest hadrons are available for $B \to \pi\tau\bar\nu$ [33], $\rho\tau\bar\nu$ and $\omega\tau\bar\nu$ [34]. Using the latest BCL form factor parametrization from a combined fit to lattice QCD predictions plus BaBar and Belle data [35], one finds
$$P_\tau(\pi) = -0.270 \pm 0.028\,, \qquad R(\pi) = 0.653 \pm 0.015\,. \qquad (22)$$
(If instead one used the combined fit from Ref. [36], one would find $P_\tau(\pi) = -0.296 \pm 0.029$ and $R(\pi) = 0.640 \pm 0.016$.) A combined fit of averaged spectra from Belle and BaBar plus light-cone sum rule calculations yields [34]
$$P_\tau(\rho) = -0.543 \pm 0.025\,, \qquad R(\rho) = 0.532 \pm 0.011\,,$$
$$P_\tau(\omega) = -0.545 \pm 0.029\,, \qquad R(\omega) = 0.534 \pm 0.018\,. \qquad (23)$$
Using in addition the prediction $R(X_u) = 0.337$ [6, 37] (no uncertainty is quoted), we may derive bounds analogous to Eq. (20). We find
$$P_\tau^{\rm min}(X_u^+) = -0.72 \pm 0.04\,, \qquad P_\tau^{\rm max}(X_u^+) = 0.28 \pm 0.10\,, \qquad (24)$$
which clearly satisfies Eq. (15).¹ Here we used $B(B^0 \to X_u^+\ell\bar\nu) = (1.51 \pm 0.19)\times 10^{-3}$, $B(B^0 \to \pi^+\ell\bar\nu) = (1.50 \pm 0.06)\times 10^{-4}$, and $B(B^0 \to \rho^+\ell\bar\nu) = (2.94 \pm 0.21)\times 10^{-4}$ [38]. The average polarization for higher excited light hadrons that would saturate Eq. (15) is $P_\tau\big(X_u^{{\rm non-}\pi^+,\rho^+}\big) = -0.29^{+0.03}_{-0.02}$.
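The charmless bounds can be checked with the same arithmetic, using $R(X_u) = 0.337$, the $B^0$ branching ratios quoted above, and the exclusive predictions in Eqs. (22)-(23); the sketch below uses central values only (our own illustrative script):

```python
# Central-value check of Eq. (24) for B0 -> X_u^+ tau nu.
B_Xu_lnu  = 1.51e-3            # B(B0 -> X_u^+ l nu)
B_pi_lnu  = 1.50e-4            # B(B0 -> pi+ l nu)
B_rho_lnu = 2.94e-4            # B(B0 -> rho+ l nu)

R_Xu, R_pi, R_rho = 0.337, 0.653, 0.532
P_pi, P_rho = -0.270, -0.543

B_Xu_taunu = R_Xu * B_Xu_lnu
known   = R_pi * B_pi_lnu * P_pi + R_rho * B_rho_lnu * P_rho
B_known = R_pi * B_pi_lnu + R_rho * B_rho_lnu
B_rest  = B_Xu_taunu - B_known

P_max = (known + B_rest) / B_Xu_taunu   # ~ +0.28
P_min = (known - B_rest) / B_Xu_taunu   # ~ -0.72
print(P_min, P_max)
```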
IV. SUMMARY
We calculated the SM prediction for the τ polarization in inclusive semileptonic $B \to X\tau\bar\nu$ decay, choosing the PA-ν polarization axis convention to define $P_\tau$, in which the τ spin corresponds to the helicity in the $\tau\bar\nu$ rest frame. We derived differential distributions that may aid future measurements, and the total polarization is given in Eqs. (14) and (15). These predictions were not previously available, and therefore comparisons between the polarization fractions in inclusive and exclusive decays could not be made. The sum rule in Eq. (18) relates the τ polarization fraction in inclusive decay to a branching-ratio-weighted sum over exclusive modes. We explored what is known about the SM predictions for the six lightest charm mesons ($D$, $D^*$, and $D^{**}$), which allowed us to make predictions for the average τ polarization in the remaining final states that saturate the inclusive decay. The similar analysis for charmless semileptonic B decays is less constraining at present, but could prove useful with large data sets expected in the future.
FIG. 1. The $B \to X\tau\bar\nu$ decay in the τ rest frame. The three-momenta of the $B$, $X$, and $\bar\nu$ lie in the yz plane. Physical choices for the polarization axes are: (i) $p_{\bar\nu}$, used in most exclusive decays; (ii) $p_B$, used in past inclusive decay calculations; (iii) the transverse direction, x, along which a nonzero polarization would violate CP; and (iv) the direction $p_X$, which leaves a much-needed void in the literature.

FIG. 2. Predictions for $P_\tau$ in the PA-ν convention. The red point shows $P_\tau(X_c)$ in inclusive decay in Eq. (14). The gray error bar and the shaded band show the allowed range derived in Eq. (20). The black error bars show predictions for the average of the six lightest states in Eq. (19) and for $D^{(*)}$ in Eq. (16). The orange error bars show predictions for $P_\tau(D^{**})$ in Eq. (17). The blue error bar shows the predicted average polarization of the non-$D^{(*,**)}$ states in Eq. (21).
¹ One may instead obtain a lower bound for the semitauonic channels by assuming
$$\frac{B(B \to H_u\tau\bar\nu)}{B(B \to X_u\tau\bar\nu)} > \frac{B(B \to H_u\mu\bar\nu)}{B(B \to X_u\mu\bar\nu)}\,,$$
motivated by the intuition that the reduction of the phase space due to the τ mass should enhance the fraction of the inclusive decay going into the lightest exclusive hadronic final states. This results in the looser bound $P_\tau^{\rm min}(X_u^+) = -0.84 \pm 0.02$.
TABLE I. Isospin-averaged branching ratio measurements for light-lepton ($\ell = e, \mu$) semileptonic B decays to the six lightest charmed mesons, predictions for the corresponding SM LFU ratios, and the semitauonic branching fractions.

  H_c        B(B → H_c ℓν̄) [%]    R(H_c)               B(B → H_c τν̄) [%]
  D          …  [1]               0.288 ± 0.04 [28]    0.65 ± 0.02
  D*         5.22 ± 0.11 [1]      0.249 ± 0.03 [28]    1.30 ± 0.03
  D*_0       0.44 ± 0.08 [30]     0.08 ± 0.03 [29]     0.032 ± 0.017
  D_1        0.20 ± 0.05 [30]     0.05 ± 0.02 [29]     0.010 ± 0.006
  D*_1       0.67 ± 0.05 [30]     0.10 ± 0.02 [29]     0.064 ± 0.008
  D*_2       0.30 ± 0.04 [30]     0.07 ± 0.01 [29]     0.021 ± 0.004
  D^(*,**)   —                    —                    2.08 ± 0.04
  X_c        10.65 ± 0.16 [1]     0.223 ± 0.005 [23]   2.37 ± 0.06
ACKNOWLEDGMENTS

We thank Aneesh Manohar
[1] Y. Amhis et al. (HFLAV Collaboration), (2022), arXiv:2206.07501 [hep-ex].
[2] J. Kalinowski, Phys. Lett. B 245, 201 (1990).
[3] A. F. Falk, Z. Ligeti, M. Neubert, and Y. Nir, Phys. Lett. B 326, 145 (1994), arXiv:hep-ph/9401226.
[4] Y. Grossman and Z. Ligeti, Phys. Lett. B 332, 373 (1994), arXiv:hep-ph/9403376.
[5] M. Jezabek and P. Urban, Nucl. Phys. B 525, 350 (1998), arXiv:hep-ph/9712440.
[6] Z. Ligeti, M. Luke, and F. J. Tackmann, Phys. Rev. D 105, 073009 (2022), arXiv:2112.07685 [hep-ph].
[7] M. Tanaka, Z. Phys. C 67, 321 (1995), arXiv:hep-ph/9411405.
[8] M. Tanaka and R. Watanabe, Phys. Rev. D 82, 034027 (2010), arXiv:1005.4306 [hep-ph].
[9] A. Datta, M. Duraisamy, and D. Ghosh, Phys. Rev. D 86, 034027 (2012), arXiv:1206.3760 [hep-ph].
[10] M. Tanaka and R. Watanabe, Phys. Rev. D 87, 034028 (2013), arXiv:1212.1878 [hep-ph].
[11] S. Hirose et al. (Belle Collaboration), Phys. Rev. D 97, 012004 (2018), arXiv:1709.00129 [hep-ex].
[12] D. Atwood, G. Eilam, and A. Soni, Phys. Rev. Lett. 71, 492 (1993), arXiv:hep-ph/9303268.
[13] Y. Grossman and Z. Ligeti, Phys. Lett. B 347, 399 (1995), arXiv:hep-ph/9409418.
[14] D. S. Hwang, (2015), arXiv:1504.06933 [hep-ph].
[15] M. A. Ivanov, J. G. Körner, and C.-T. Tran, Phys. Rev. D 95, 036021 (2017), arXiv:1701.02937 [hep-ph].
[16] N. Penalva, E. Hernández, and J. Nieves, JHEP 06, 118 (2021), arXiv:2103.01857 [hep-ph].
[17] N. Penalva, E. Hernández, and J. Nieves, JHEP 10, 122 (2021), arXiv:2107.13406 [hep-ph].
[18] J. Chay, H. Georgi, and B. Grinstein, Phys. Lett. B 247, 399 (1990).
[19] I. I. Y. Bigi, M. A. Shifman, N. G. Uraltsev, and A. I. Vainshtein, Phys. Rev. Lett. 71, 496 (1993), arXiv:hep-ph/9304225.
[20] B. Blok, L. Koyrakh, M. A. Shifman, and A. I. Vainshtein, Phys. Rev. D 49, 3356 (1994), [Erratum: Phys. Rev. D 50, 3572 (1994)], arXiv:hep-ph/9307247.
[21] A. V. Manohar and M. B. Wise, Phys. Rev. D 49, 1310 (1994), arXiv:hep-ph/9308246.
[22] A. V. Manohar and M. B. Wise, Heavy Quark Physics, Camb. Monogr. Part. Phys., Nucl. Phys., Cosmol. (Cambridge University Press, 2000).
[23] Z. Ligeti and F. J. Tackmann, Phys. Rev. D 90, 034021 (2014), arXiv:1406.7013 [hep-ph].
[24] M. E. Luke and A. V. Manohar, Phys. Lett. B 286, 348 (1992), arXiv:hep-ph/9205228.
[25] M. Jezabek and L. Motyka, Nucl. Phys. B 501, 207 (1997), arXiv:hep-ph/9701358.
[26] C. W. Bauer, Z. Ligeti, and M. E. Luke, Phys. Lett. B 479, 395 (2000), arXiv:hep-ph/0002161.
[27] M. Neubert, JHEP 07, 022 (2000), arXiv:hep-ph/0006068.
[28] F. U. Bernlochner, Z. Ligeti, M. Papucci, M. T. Prim, D. J. Robinson, and C. Xiong, Phys. Rev. D 106, 096015 (2022), arXiv:2206.11281 [hep-ph].
[29] F. U. Bernlochner, Z. Ligeti, and D. J. Robinson, Phys. Rev. D 97, 075011 (2018), arXiv:1711.03110 [hep-ph].
[30] F. U. Bernlochner and Z. Ligeti, Phys. Rev. D 95, 014022 (2017), arXiv:1606.09300 [hep-ph].
[31] M. Rahimi and K. K. Vos, JHEP 11, 007 (2022), arXiv:2207.03432 [hep-ph].
[32] F. Bernlochner, M. Fael, K. Olschewsky, E. Persson, R. van Tonder, K. K. Vos, and M. Welsch, JHEP 10, 068 (2022), arXiv:2205.10274 [hep-ph].
[33] F. U. Bernlochner, Phys. Rev. D 92, 115019 (2015), arXiv:1509.06938 [hep-ph].
[34] F. U. Bernlochner, M. T. Prim, and D. J. Robinson, Phys. Rev. D 104, 034032 (2021), arXiv:2104.05739 [hep-ph].
[35] Y. Aoki et al. (Flavour Lattice Averaging Group (FLAG)), Eur. Phys. J. C 82, 869 (2022), arXiv:2111.09849 [hep-lat].
[36] J. A. Bailey et al. (Fermilab Lattice, MILC), Phys. Rev. D 92, 014024 (2015), arXiv:1503.07839 [hep-lat].
[37] A. H. Hoang, Z. Ligeti, and A. V. Manohar, Phys. Rev. D 59, 074017 (1999), arXiv:hep-ph/9811239.
[38] R. L. Workman et al. (Particle Data Group), PTEP 2022, 083C01 (2022).
LIGHTSECAGG: A LIGHTWEIGHT AND VERSATILE DESIGN FOR SECURE AGGREGATION IN FEDERATED LEARNING
Jinhyun So
Chaoyang He
Chien-Sheng Yang
Songze Li
Qian Yu
Ramy E Ali
Basak Guler
Salman Avestimehr
Secure model aggregation is a key component of federated learning (FL) that aims at protecting the privacy of each user's individual model while allowing for their global aggregation. It can be applied to any aggregationbased FL approach for training a global or personalized model. Model aggregation needs to also be resilient against likely user dropouts in FL systems, making its design substantially more complex. State-of-the-art secure aggregation protocols rely on secret sharing of the random-seeds used for mask generations at the users to enable the reconstruction and cancellation of those belonging to the dropped users. The complexity of such approaches, however, grows substantially with the number of dropped users. We propose a new approach, named LightSecAgg, to overcome this bottleneck by changing the design from "random-seed reconstruction of the dropped users" to "one-shot aggregate-mask reconstruction of the active users via mask encoding/decoding". We show that LightSecAgg achieves the same privacy and dropout-resiliency guarantees as the state-of-the-art protocols while significantly reducing the overhead for resiliency against dropped users. We also demonstrate that, unlike existing schemes, LightSecAgg can be applied to secure aggregation in the asynchronous FL setting. Furthermore, we provide a modular system design and optimized on-device parallelization for scalable implementation, by enabling computational overlapping between model training and on-device encoding, as well as improving the speed of concurrent receiving and sending of chunked masks. We evaluate LightSecAgg via extensive experiments for training diverse models (logistic regression, shallow CNNs, MobileNetV3, and EfficientNet-B0) on various datasets (MNIST, FEMNIST, CIFAR-10, GLD-23K) in a realistic FL system with large number of users and demonstrate that LightSecAgg significantly reduces the total training time.
INTRODUCTION
Federated learning (FL) has emerged as a promising approach to enable distributed training over a large number of users while protecting the privacy of each user (McMahan et al., 2017; 2021; Wang et al., 2021). The key idea of FL is to keep users' data on their devices and instead train local models at each user. The locally trained models are then aggregated via a server to update a global model, which is then pushed back to users. Due to model inversion attacks (e.g., (Geiping et al., 2020; Wang et al., 2019; Zhu & Han, 2020)), a critical consideration in FL design is to also ensure that the server does not learn the locally trained model of each user during model aggregation. Furthermore, model aggregation should be robust against likely user dropouts (due to poor connectivity, low battery, unavailability, etc.) in FL systems. As such, there have been a series of works that aim at developing secure aggregation protocols for FL that protect the privacy of each user's individual model while allowing their global aggregation amidst possible user dropouts (Bonawitz et al., 2017; Kadhe et al., 2020; So et al., 2021d).

(This paper is accepted to the 5th MLSys Conference, Santa Clara, CA, USA, 2022.)
The state-of-the-art secure aggregation protocols essentially rely on two main principles: (1) pairwise random-seed agreement between users to generate masks that hide users' models while having an additive structure that allows their cancellation when added at the server and (2) secret sharing of the random-seeds to enable the reconstruction and cancellation of masks belonging to dropped users. The main drawback of such approaches is that the number of mask reconstructions at the server substantially grows as more users are dropped, causing a major computational bottleneck. For instance, the execution time of the SecAgg protocol proposed in (Bonawitz et al., 2017) is observed to be significantly limited by mask reconstructions at the server (Bonawitz et al., 2019b). SecAgg+ (Bell et al., 2020), an improved version of SecAgg, reduces the overhead at the server by replacing the complete communication graph of SecAgg with a sparse random graph, such that secret sharing is only needed within a subset of users rather than all user pairs. However, the number of mask reconstructions in SecAgg+ still increases as more users drop, which eventually limits the scalability of FL systems. There have also been several other approaches, such as (So et al., 2021d; Kadhe et al., 2020), to alleviate this bottleneck; however, they either increase round/communication complexity or compromise the dropout and privacy guarantees.

Figure 1. [...] (2) Masking model: each user masks its model by random masks, and uploads its masked model to the server. (3) Reconstructing aggregate-mask: the surviving users upload the aggregate of encoded masks to reconstruct the desired aggregate-mask. The server recovers the aggregate-model by canceling out the reconstructed aggregate-mask.
Contributions. We propose a new perspective for secure model aggregation in FL by turning the design focus from "pairwise random-seed reconstruction of the dropped users" to "one-shot aggregate-mask reconstruction of the surviving users". Using this viewpoint, we develop a new protocol named LightSecAgg that provides the same level of privacy and dropout-resiliency guarantees as the state-ofthe-art while substantially reducing the aggregation (hence runtime) complexity. As illustrated in Figure 1, the main idea of LightSecAgg is that each user protects its local model using a locally generated random mask. This mask is then encoded and shared to other users in such a way that the aggregate-mask of any sufficiently large set of surviving users can be directly reconstructed at the server. In sharp contrast to prior schemes, in this approach the server only needs to reconstruct one mask in the recovery phase, independent of the number of dropped users.
Moreover, we provide a modular federated training system design and optimize on-device parallelization to improve the efficiency when secure aggregation and model training interact at the edge devices. This enables computational overlapping between model training and on-device encoding, as well as improving the speed of concurrent receiving and sending of chunked masks. To the best of our knowledge, this provides the first open-sourced and secure aggregationenabled FL system that is built on the modern deep learning framework (PyTorch) and neural architecture (e.g., ResNet) with system and security co-design. We further propose system-level optimization methods to improve the run-time. In particular, we design a federated training system and take advantage of the fact that the generation of random masks is independent of the computation of the local model, hence each user can parallelize these two operations via a multi-thread processing, which is beneficial to all evaluated secure aggregation protocols in reducing the total running time.
In addition to the synchronous FL setting, where all users train local models based on the same global model and the server performs a synchronized aggregation at each round, we also demonstrate that LightSecAgg enables secure aggregation when no synchrony between users' local updates are imposed. This is unlike prior secure aggregation protocols, such as SecAgg and SecAgg+, that are not compatible with asynchronous FL. To the best of our knowledge, in the asynchronous FL setting, this is the first work to protect the privacy of the individual updates without relying on differential privacy (Truex et al., 2020) or trusted execution environments (TEEs) (Nguyen et al., 2021).
We run extensive experiments to empirically demonstrate the performance of LightSecAgg in a real-world crossdevice FL setting with up to 200 users and compare it with two state-of-the-art protocols SecAgg and SecAgg+. To provide a comprehensive coverage of realistic FL settings, we train various machine learning models including logistic regression, convolutional neural network (CNN) (McMahan et al., 2017), MobileNetV3 (Howard et al., 2019), and EfficientNet-B0 (Tan & Le, 2019), for image classification over datasets of different image sizes: low resolution images (FEMNIST (Caldas et al., 2018), CIFAR-10 (Krizhevsky et al., 2009)), and high resolution images (Google Landmarks Dataset 23k (Weyand et al., 2020)). The empirical results show that LightSecAgg provides significant speedup for all considered FL training tasks, achieving a performance gain of 8.5×-12.7× over SecAgg and 2.9×-4.4× over SecAgg+, in realistic bandwidth settings at the users. Hence, compared to baselines, LightSecAgg can even survive and speedup the training of large deep neural network models on high resolution image datasets. Breakdowns of the total running time further confirm that the primary gain lies in the complexity reduction at the server provided by LightSecAgg, especially when the number of users are large.
Related works. Beyond the secure aggregation protocols proposed in (Bonawitz et al., 2017;Bell et al., 2020), there have been also other works that aim towards making secure aggregation more efficient. TurboAgg (So et al., 2021d) utilizes a circular communication topology to reduce the communication overhead, but it incurs an additional round complexity and provides a weaker privacy guarantee than SecAgg as it guarantees model privacy in the average sense rather than in the worst-case scenario. FastSecAgg (Kadhe et al., 2020) reduces per-user overhead by using the Fast Fourier Transform multi-secret sharing, but it provides lower privacy and dropout guarantees compared to the other state-of-the-art protocols. The idea of one-shot reconstruction of the aggregate-mask was also employed in (Zhao & Sun, 2021), where the aggregated masks corresponding to each user dropout pattern was prepared by a trusted third party, encoded and distributed to users prior to model aggregation. The major advantages of LightSecAgg over the scheme in (Zhao & Sun, 2021) are 1) not requiring a trusted third party; and 2) requiring significantly less randomness generation and a much smaller storage cost at each user. Furthermore, there is also a lack of system-level performance evaluations of (Zhao & Sun, 2021) in FL experiments. Finally, we emphasize that our LightSecAgg protocol can be applied to any aggregationbased FL approach (e.g., FedNova (Wang et al., 2020), FedProx (Li et al., 2018, FedOpt (Asad et al., 2020)), personalized FL frameworks (T. Dinh et al., 2020;Li et al., 2020;Fallah et al., 2020;Mushtaq et al., 2021;He et al., 2021e), communication-efficient FL (Shlezinger et al., 2020Reisizadeh et al., 2020;Elkordy & Avestimehr, 2020), and asynchronous FL, and their applications in computer vision (He et al., 2021d;2020a;b), natural language processing (Lin et al., 2021;He et al., 2021c), data mining (He et al., 2021a;Ezzeldin et al., 2021;Liang et al., 2021;He et al., 2021b;2020c), and Internet of things (IoTs) (Zhang et al., 2021a;b).
PROBLEM SETTING
FL is a distributed training framework for machine learning, where the goal is to learn a global model x with dimension d using data held at edge devices. This can be represented by minimizing a global objective function F :
$$F(x) = \sum_{i=1}^{N} p_i F_i(x),$$
where $N$ is the total number of users, $F_i$ is the local objective function of user $i$, and $p_i \geq 0$ is a weight parameter assigned to user $i$ to specify the relative impact of each user such that $\sum_{i=1}^{N} p_i = 1$.¹ Training in FL is performed through an iterative process, where the users interact through a server to update the global model. At each iteration, the server shares the current global model, denoted by $x(t)$, with the edge users. Each user $i$ creates a local update, $x_i(t)$. The local models are sent to the server and then aggregated by the server. Using the aggregated models, the server updates the global model $x(t+1)$ for the next iteration. In FL, some users may potentially drop from the learning procedure for various reasons such as having unreliable communication connections. The goal of the server is to obtain the sum of the surviving users' local models. This update equation is given by
$$x(t+1) = \frac{1}{|\mathcal{U}(t)|}\sum_{i\in\mathcal{U}(t)} x_i(t),$$
where U(t) denotes the set of surviving users at iteration t. Then, the server pushes the updated global model x(t + 1) to the edge users.
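As a minimal illustration of this aggregation step (a sketch with our own variable names, before any masking or security is added), the server-side update over the surviving users could look like:

```python
import numpy as np

def fedavg_update(local_models, surviving_ids):
    """Average the local models of the surviving users U(t)."""
    updates = [local_models[i] for i in surviving_ids]
    return np.mean(updates, axis=0)

# toy example: N = 4 users, model dimension d = 3, user 2 dropped
local_models = {i: np.random.randn(3) for i in range(4)}
x_next = fedavg_update(local_models, surviving_ids=[0, 1, 3])
```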
Local models carry extensive information about the users' datasets, and in fact their private data can be reconstructed from the local models by using a model inversion attack (Geiping et al., 2020;Wang et al., 2019;Zhu & Han, 2020). To address this privacy leakage from local models, secure aggregation has been introduced in (Bonawitz et al., 2017). A secure aggregation protocol enables the computation of the aggregated global model while ensuring that the server (and other users) learn no information about the local models beyond their aggregated model. In particular, the goal is to securely recover the aggregate of the local models y = i∈U x i , where the iteration index t is omitted for simplicity. Since secure aggregation protocols build on cryptographic primitives that require all operations to be carried out over a finite field, we assume that the elements of x i and y are from a finite field F q for some field size q. We require a secure aggregation protocol for FL to have the following key features.
• Threat model and privacy guarantee. We consider a threat model where the users and the server are honest but curious. We assume that up to T (out of N ) users can collude with each other as well as with the server to infer the local models of other users. The secure aggregation protocol has to guarantee that nothing can be learned beyond the aggregate-model, even if up to T users cooperate with each other. We consider privacy leakage in the strong information-theoretic sense. This requires that for every subset of users T ⊆ [N ] of size at most T , we must have
mutual information $I\big(\{x_i\}_{i\in[N]};\, \mathbf{Y} \mid \textstyle\sum_{i\in\mathcal{U}} x_i,\, \mathcal{Z}_{\mathcal{T}}\big) = 0$,
where Y is the collection of information at the server, and Z T is the collection of information at the users in T .
• Dropout-resiliency guarantee. In the FL setting, it is common for users to be dropped or delayed at any time during protocol execution for various reasons, e.g., delayed/interrupted processing, poor wireless channel conditions, low battery, etc. We assume that there are at most D dropped users during the execution of protocol, i.e., there are at least N − D surviving users after potential dropouts. The protocol must guarantee that the server can correctly recover the aggregated models of the surviving users, even if up to D users drop.
• Applicability to asynchronous FL. Synchronizing all users for training at each round of FL can be slow and costly, especially when the number of users are large. Asynchronous FL handles this challenge by incorporating the updates of the users in asynchronous fashion (Xie et al., 2019;van Dijk et al., 2020;Chai et al., 2020;Chen et al., 2020). This asynchrony, however, creates a mismatch of staleness among the users, which causes the incompatibility of the existing secure aggregation protocols (such as (Bonawitz et al., 2017;Bell et al., 2020)). More specifically, since it is not known a priori which local models will be aggregated together, the current secure aggregation protocols that are based on pairwise random masking among the users fail to work. We aim at designing a versatile secure aggregation protocol that is applicable to both synchronous and asynchronous FL.
Goal. We aim to design an efficient and scalable secure aggregation protocol that simultaneously achieves strong privacy and dropout-resiliency guarantees, scaling linearly with the number of users $N$, e.g., simultaneously achieves privacy guarantee $T = \frac{N}{2}$ and dropout-resiliency guarantee $D = \frac{N}{2} - 1$. Moreover, the protocol should be compatible with both synchronous and asynchronous FL.
OVERVIEW OF BASELINE PROTOCOLS: SE CAG G AND SE CAG G+
We first review the state-of-the-art secure aggregation protocols SecAgg (Bonawitz et al., 2017) and SecAgg+ (Bell et al., 2020) as our baselines. Essentially, SecAgg and SecAgg+ require each user to mask its local model using random keys before aggregation. In SecAgg, the privacy of the individual models is protected by pairwise random masking. Through a key agreement (e.g., Diffie-Hellman (Diffie & Hellman, 1976)), each pair of users i, j ∈ [N ] agree on a pairwise random seed a i,j = Key.Agree(sk i , pk j ) = Key.Agree(sk j , pk i ) where sk i and pk i are the private and public keys of user i, respectively. In addition, user i creates a private random seed b i to prevent the privacy breaches that may occur if user i is only delayed rather than dropped, in which case the pairwise masks alone are not sufficient for privacy protection. User i ∈ [N ] then masks
its model $x_i$ as $\tilde{x}_i = x_i + \mathrm{PRG}(b_i) + \sum_{j: i<j} \mathrm{PRG}(a_{i,j}) - \sum_{j: i>j} \mathrm{PRG}(a_{j,i})$,
where PRG is a pseudo random generator, and sends it to the server. Finally, user i secret shares its private seed b i as well as private key sk i with the other users via Shamir's secret sharing (Yao, 1982). From the subset of users who survived the previous stage, the server collects either the shares of the private key belonging to a dropped user, or the shares of the private seed belonging to a surviving user (but not both). Using the collected shares, the server reconstructs the private seed of each surviving user, and the pairwise seeds of each dropped user. The server then computes the aggregated model as follows
$$\sum_{i\in\mathcal{U}} x_i = \sum_{i\in\mathcal{U}} \big(\tilde{x}_i - \mathrm{PRG}(b_i)\big) + \sum_{i\in\mathcal{D}} \Big( \sum_{j: i<j} \mathrm{PRG}(a_{i,j}) - \sum_{j: i>j} \mathrm{PRG}(a_{j,i}) \Big), \qquad (1)$$
where U and D represent the set of surviving and dropped users, respectively. SecAgg protects model privacy against T colluding users and is robust to D user dropouts as long as N − D > T .
We now illustrate SecAgg through a simple example. Consider a secure aggregation problem in FL, where there are N = 3 users with T = 1 privacy guarantee and dropoutresiliency guarantee D = 1. Each user i ∈ {1, 2, 3} holds a local model x i ∈ F d q where d is the model size and q is the size of the finite field. As shown in Figure 2, SecAgg is composed of the following three phases.
Offline pairwise agreement. User 1 and user 2 agree on pairwise random seed a 1,2 . User 1 and user 3 agree on pairwise random seed a 1,3 . User 2 and user 3 agree on pairwise random seed a 2,3 . In addition, user i ∈ {1, 2, 3} creates a private random seed b i . Then, user i secret shares b i and its private key sk i with the other users via Shamir's secret sharing. In this example, a 2 out of 3 secret sharing is used to tolerate 1 curious user.
Masking and uploading of local models. To provide the privacy of each individual model, user i ∈ {1, 2, 3} masks its model x i as follows:
$$\tilde{x}_1 = x_1 + n_1 + z_{1,2} + z_{1,3}\,, \qquad \tilde{x}_2 = x_2 + n_2 + z_{2,3} - z_{1,2}\,, \qquad \tilde{x}_3 = x_3 + n_3 - z_{1,3} - z_{2,3}\,,$$
where $n_i = \mathrm{PRG}(b_i)$ and $z_{i,j} = \mathrm{PRG}(a_{i,j})$ are the random masks generated by a pseudo random generator. Then user $i \in \{1,2,3\}$ sends its masked local model $\tilde{x}_i$ to the server.
Aggregate-model recovery. Suppose that user 1 drops in the previous phase. The goal of the server is to compute the aggregate of models $x_2 + x_3$. Note that
$$x_2 + x_3 = \tilde{x}_2 + \tilde{x}_3 + (z_{1,2} + z_{1,3} - n_2 - n_3). \qquad (2)$$
Figure 2. An illustration of SecAgg in the example of 3 users is depicted. The users first agree on pairwise random seeds, and secret share their private random seeds and private keys. The local models are protected by the pairwise random masking. Suppose that user 1 drops. To recover the aggregate-mask, the server first reconstructs the private random seeds of the surviving users and the private key of user 1 by collecting the secret shares for each of them. Then, the server recovers $z_{1,2}$, $z_{1,3}$, $n_2$ and $n_3$, which incurs the computational cost of $4d$ at the server.
Hence, the server needs to reconstruct masks n 2 , n 3 , z 1,2 , z 1,3 to recover x 2 + x 3 . To do so, the server has to collect two shares for each of b 2 , b 3 , sk 1 , and then compute the aggregate model by (2). Since the complexity of evaluating a PRG scales linearly with its size, the computational cost of the server for mask reconstruction is 4d.
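The arithmetic of this example is easy to verify numerically; the sketch below mimics the masking and recovery with random integers standing in for the PRG outputs (our own toy code, not the SecAgg implementation):

```python
import numpy as np

q, d = 2**31 - 1, 4                      # toy prime field and model size
rng = np.random.default_rng(0)
x = {i: rng.integers(0, q, d) for i in (1, 2, 3)}          # local models
n = {i: rng.integers(0, q, d) for i in (1, 2, 3)}          # stands in for PRG(b_i)
z = {(i, j): rng.integers(0, q, d)                          # stands in for PRG(a_{i,j})
     for (i, j) in [(1, 2), (1, 3), (2, 3)]}

x_masked = {
    1: (x[1] + n[1] + z[1, 2] + z[1, 3]) % q,
    2: (x[2] + n[2] + z[2, 3] - z[1, 2]) % q,
    3: (x[3] + n[3] - z[1, 3] - z[2, 3]) % q,
}

# user 1 drops: the server reconstructs n2, n3, z_{1,2}, z_{1,3} from secret shares
recovered = (x_masked[2] + x_masked[3] + z[1, 2] + z[1, 3] - n[2] - n[3]) % q
assert np.array_equal(recovered, (x[2] + x[3]) % q)
```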
We note that SecAgg requires the server to compute a PRG function on each of the reconstructed seeds to recover the aggregated masks, which incurs the overhead of O(N 2 ) (see more details in Section 5) and dominates the overall execution time of the protocol (Bonawitz et al., 2017;2019b). SecAgg+ reduces the overhead of mask reconstructions from O(N 2 ) to O(N log N ) by replacing the complete communication graph of SecAgg with a sparse random graph of degree O(log N ) to reduce both communication and computational loads. Reconstructing pairwise random masks in SecAgg and SecAgg+ poses a major bottleneck in scaling to a large number of users.
Remark 1. (Incompatibility of SecAgg and SecAgg+ with Asynchronous FL). It is important to note that SecAgg and SecAgg+ cannot be applied to asynchronous FL as the cancellation of the pairwise random masks based on the key agreement protocol is not guaranteed. This is because the users do not know a priori which local models will be aggregated together, hence the masks cannot be designed to cancel out in these protocols. We explain this in more detail in Appendix F.2. It is also worth noting that a recently proposed protocol known as FedBuff (Nguyen et al., 2021) enables secure aggregation in asynchronous FL through a trusted execution environment (TEE)-enabled buffer, where the server stores the local models that it receives in this private buffer. The reliance of FedBuff on TEEs, however, limits the buffer size in this approach as TEEs have limited memory. It would also limit its application to FL settings where TEEs are available. . An illustration of LightSecAgg in the example of 3 users is depicted. Each user first generates a single mask. Each mask of a user is encoded and shared to other users. Each user's local model is protected by its generated mask. Suppose that user 1 drops during the execution of protocol. The server directly recovers the aggregate-mask in one shot. In this example, LightSecAgg reduces the computational cost at the server from 4d to d.
LIGHTSECAGG PROTOCOL
Before providing a general description of LightSecAgg, we first illustrate its key ideas through the previous 3-user example in the synchronous setting. As shown in Figure 3, LightSecAgg has the following three phases.
Offline encoding and sharing of local masks. User $i \in \{1,2,3\}$ randomly picks $z_i$ and $n_i$ from $\mathbb{F}_q^d$. User $i \in \{1,2,3\}$ creates the masked versions of $z_i$ as
$$\tilde{z}_{1,1} = -z_1 + n_1\,, \quad \tilde{z}_{1,2} = 2z_1 + n_1\,, \quad \tilde{z}_{1,3} = z_1 + n_1\,;$$
$$\tilde{z}_{2,1} = -z_2 + n_2\,, \quad \tilde{z}_{2,2} = 2z_2 + n_2\,, \quad \tilde{z}_{2,3} = z_2 + n_2\,;$$
$$\tilde{z}_{3,1} = -z_3 + n_3\,, \quad \tilde{z}_{3,2} = 2z_3 + n_3\,, \quad \tilde{z}_{3,3} = z_3 + n_3\,;$$
and user $i \in \{1,2,3\}$ sends $\tilde{z}_{i,j}$ to each user $j \in \{1,2,3\}$. Thus, user $i \in \{1,2,3\}$ receives $\tilde{z}_{j,i}$ for $j \in \{1,2,3\}$. In this case, this procedure provides robustness against 1 dropped user and privacy against 1 curious user.
Masking and uploading of local models. To make each individual model private, each user i ∈ {1, 2, 3} masks its local model as follows:
$$\tilde{x}_1 = x_1 + z_1\,, \qquad \tilde{x}_2 = x_2 + z_2\,, \qquad \tilde{x}_3 = x_3 + z_3\,, \qquad (3)$$
and sends its masked model to the server.
One-shot aggregate-model recovery. Suppose that user 1 drops in the previous phase. To recover the aggregate of models $x_2 + x_3$, the server only needs to know the aggregated masks $z_2 + z_3$. To recover $z_2 + z_3$, the surviving user 2 and user 3 send $\tilde{z}_{2,2} + \tilde{z}_{3,2} = 2(z_2+z_3) + n_2 + n_3$ and $\tilde{z}_{2,3} + \tilde{z}_{3,3} = (z_2+z_3) + n_2 + n_3$ to the server, respectively. After receiving the messages from user 2 and user 3, the server can directly recover the aggregated masks via a one-shot computation as follows:
$$z_2 + z_3 = \tilde{z}_{2,2} + \tilde{z}_{3,2} - (\tilde{z}_{2,3} + \tilde{z}_{3,3}). \qquad (4)$$
Then, the server recovers the aggregate-model $x_2 + x_3$ by subtracting $z_2 + z_3$ from $\tilde{x}_2 + \tilde{x}_3$. As opposed to SecAgg, which has to reconstruct the random seeds of the dropped users, LightSecAgg enables the server to reconstruct the desired aggregate of masks via a one-shot recovery. Compared with SecAgg, LightSecAgg reduces the server's computational cost from $4d$ to $d$ in this simple example.
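The same kind of toy check applies to this LightSecAgg example; the sketch below verifies the one-shot recovery of Eq. (4) with random integers (our own illustrative code; the offline exchange of the encoded masks is not simulated):

```python
import numpy as np

q, d = 2**31 - 1, 4
rng = np.random.default_rng(1)
x = {i: rng.integers(0, q, d) for i in (1, 2, 3)}   # local models
z = {i: rng.integers(0, q, d) for i in (1, 2, 3)}   # local masks
n = {i: rng.integers(0, q, d) for i in (1, 2, 3)}   # local randomness

# encoded masks z~_{i,j} held by user j, with coefficients (-1, 2, 1) as above
enc = {(i, j): (c * z[i] + n[i]) % q
       for i in (1, 2, 3) for j, c in zip((1, 2, 3), (-1, 2, 1))}

x_masked = {i: (x[i] + z[i]) % q for i in (1, 2, 3)}

# user 1 drops; users 2 and 3 send their aggregated encoded masks
msg_from_2 = (enc[2, 2] + enc[3, 2]) % q    # 2(z2 + z3) + n2 + n3
msg_from_3 = (enc[2, 3] + enc[3, 3]) % q    #  (z2 + z3) + n2 + n3

agg_mask = (msg_from_2 - msg_from_3) % q                 # Eq. (4)
recovered = (x_masked[2] + x_masked[3] - agg_mask) % q
assert np.array_equal(recovered, (x[2] + x[3]) % q)
```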
General Description of LightSecAgg for Synchronous FL
We formally present LightSecAgg, whose idea is to encode the locally generated random masks in a way that the server can recover the aggregate of masks from the encoded masks via a one-shot computation with a cost that does not scale with $N$. LightSecAgg has three design parameters:
(1) 0 ≤ T ≤ N − 1 representing the privacy guarantee;
(2) 0 ≤ D ≤ N − 1 representing the dropout-resiliency guarantee;
(3) 1 ≤ U ≤ N representing the targeted number of surviving users. In particular, parameters T , D, and U are selected such that N − D ≥ U > T ≥ 0.
LightSecAgg is composed of three main phases. First, each user partitions its local random mask into $U - T$ pieces and creates encoded masks via a Maximum Distance Separable (MDS) code (Roth & Lempel, 1989; Yu et al., 2019; Tang et al., 2021; So et al., 2021c) to provide robustness against $D$ dropped users and privacy against $T$ colluding users. Each user sends one of the encoded masks to one of the other users for the purpose of one-shot recovery. Second, each user uploads its masked local model to the server. Third, the server reconstructs the aggregated masks of the surviving users to recover their aggregate of models. Each surviving user sends the aggregated encoded masks to the server. After receiving $U$ aggregated encoded masks from the surviving users, the server recovers the aggregate-mask and the desired aggregate-model. The pseudo code of LightSecAgg is provided in Appendix A. We now describe each of these phases in detail.
Offline encoding and sharing of local masks. User $i \in [N]$ picks $z_i$ uniformly at random from $\mathbb{F}_q^d$ and partitions it into $U - T$ sub-masks $[z_i]_k \in \mathbb{F}_q^{\frac{d}{U-T}}$, $k \in [U-T]$. With the randomly picked $[n_i]_k \in \mathbb{F}_q^{\frac{d}{U-T}}$ for $k \in \{U-T+1, \ldots, U\}$, user $i \in [N]$ encodes the sub-masks $[z_i]_k$'s as
$$[\tilde{z}_i]_j = \big([z_i]_1, \ldots, [z_i]_{U-T}, [n_i]_{U-T+1}, \ldots, [n_i]_U\big) \cdot W_j\,, \qquad (5)$$
where $W_j$ is the $j$'th column of a $T$-private MDS matrix $W \in \mathbb{F}_q^{U\times N}$.
In particular, we say an MDS matrix² is $T$-private iff the submatrix consisting of its $\{U-T+1, \ldots, U\}$-th rows is also MDS. A $T$-private MDS matrix guarantees that $I\big(z_i; \{[\tilde{z}_i]_j\}_{j\in\mathcal{T}}\big) = 0$, for any $i \in [N]$ and any $\mathcal{T} \subseteq [N]$ of size $T$, if the $[n_i]_k$'s are jointly uniformly random. We can always find $T$-private MDS matrices for any $U$, $N$, and $T$ (e.g., (Shamir, 1979; Yu et al., 2019; Roth & Lempel, 1989)). Each user $i \in [N]$ sends $[\tilde{z}_i]_j$ to user $j \in [N]\setminus\{i\}$. At the end of the offline encoding and sharing of local masks, each user $i \in [N]$ has $[\tilde{z}_j]_i$ from $j \in [N]$.³

² A matrix $W \in \mathbb{F}_q^{U\times N}$ ($U < N$) is an MDS matrix if any $U \times U$ sub-matrix of $W$ is non-singular.
Masking and uploading of local models. To protect the local models, each user $i$ masks its local model as $\tilde{x}_i = x_i + z_i$, and sends it to the server. Since some users may drop in this phase, the server identifies the set of surviving users, denoted by $\mathcal{U}_1 \subseteq [N]$. The server intends to recover $\sum_{i\in\mathcal{U}_1} x_i$. We note that before masking the model, each user quantizes the local model to convert it from the domain of real numbers to the finite field (Appendix F.3.2).
One-shot aggregate-model recovery. After identifying the surviving users in the previous phase, user $j \in \mathcal{U}_1$ is notified to send its aggregated encoded sub-masks $\sum_{i\in\mathcal{U}_1} [\tilde{z}_i]_j$ to the server for the purpose of one-shot recovery. We note that each $\sum_{i\in\mathcal{U}_1} [\tilde{z}_i]_j$ is an encoded version of $\sum_{i\in\mathcal{U}_1} [z_i]_k$ for $k \in [U-T]$ using the MDS matrix $W$ (see more details in Appendix B). Thus, the server is able to recover $\sum_{i\in\mathcal{U}_1} [z_i]_k$ for $k \in [U-T]$ via MDS decoding after receiving a set of any $U$ messages from the participating users. The server obtains the aggregated masks $\sum_{i\in\mathcal{U}_1} z_i$ by concatenating the $\sum_{i\in\mathcal{U}_1} [z_i]_k$'s. Lastly, the server recovers the desired aggregate of models for the set of participating users $\mathcal{U}_1$ by subtracting $\sum_{i\in\mathcal{U}_1} z_i$ from $\sum_{i\in\mathcal{U}_1} \tilde{x}_i$.
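To make the encoding and one-shot decoding steps concrete, the following is a minimal toy sketch (our own illustrative code, not the authors' implementation): it uses a Vandermonde matrix with distinct nonzero evaluation points over a small prime field as the MDS matrix $W$, and sympy for the modular matrix inverse; the parameters, names, and field size are arbitrary choices for the example.

```python
import numpy as np
from sympy import Matrix

q = 2**13 - 1                         # toy prime field size
N, T, D = 6, 2, 1                     # users, privacy, and dropout parameters
U = N - D                             # messages needed for recovery (U > T)
seg = 3                               # sub-mask length
d = (U - T) * seg                     # model length (divisible by U - T)

rng = np.random.default_rng(0)
W = np.array([[pow(a, r, q) for a in range(1, N + 1)] for r in range(U)])  # U x N

x = rng.integers(0, q, (N, d))        # quantized local models
z = rng.integers(0, q, (N, d))        # local masks z_i
n = rng.integers(0, q, (N, T, seg))   # local randomness [n_i]_k

def encode(i):
    # [z~_i]_j = ([z_i]_1, ..., [z_i]_{U-T}, [n_i]_{U-T+1}, ..., [n_i]_U) . W_j
    parts = np.vstack([z[i].reshape(U - T, seg), n[i]])   # U x seg
    return (W.T @ parts) % q                              # row j is sent to user j

encoded = np.stack([encode(i) for i in range(N)])         # encoded[i, j] = [z~_i]_j
x_masked = (x + z) % q                                    # uploaded masked models

# Suppose user 0 drops; the remaining users form U_1 and U of them respond.
U1 = [1, 2, 3, 4, 5]
responders = U1[:U]
msgs = {j: encoded[U1, j].sum(axis=0) % q for j in responders}  # sum_i [z~_i]_j

# One-shot recovery: invert the U x U submatrix of W on the responders' columns.
Wsub = Matrix([[int(W[r, j]) for r in range(U)] for j in responders])
M = Matrix([[int(v) for v in msgs[j]] for j in responders])
P = (Wsub.inv_mod(q) * M).applyfunc(lambda v: v % q)      # rows 0..U-T-1: sum_i [z_i]_k
agg_mask = np.array([[int(v) for v in P.row(k)] for k in range(U - T)]).reshape(-1)

agg_model = (x_masked[U1].sum(axis=0) - agg_mask) % q
assert np.array_equal(agg_model, x[U1].sum(axis=0) % q)
```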
Remark 2. Note that it is not necessary to have a stable communication link between every pair of users in LightSecAgg. Specifically, given the design parameter U , LightSecAgg only requires at least U surviving users at any time during the execution. That is, even if up to N −U users drop or get delayed due to unstable communication links, the server can still reconstruct the aggregate-mask.
Remark 3. We note that LightSecAgg directly applies for secure aggregation of weighted local models. The sharing of the masking keys among the clients does not require the knowledge of the weight coefficients. For example, LightSecAgg can work for the case in which all users do not have equal-sized datasets. Suppose that user $i$ holds a dataset with a number of samples $s_i$. Rather than directly masking the local model $x_i$, user $i$ first computes $x'_i = s_i x_i$. Then, user $i$ uploads $x'_i + z_i$ to the server. Through the LightSecAgg protocol, the server can recover $\sum_{i\in\mathcal{U}} x'_i = \sum_{i\in\mathcal{U}} s_i x_i$ securely. By dividing by $\sum_{i\in\mathcal{U}} s_i$, the server can obtain the desired aggregate of weighted models $\sum_{i\in\mathcal{U}} p_i x_i$, where $p_i = s_i / \sum_{j\in\mathcal{U}} s_j$.

³ All users communicate through secure (private and authenticated) channels, i.e., the server would only receive the encrypted version of the $[\tilde{z}_i]_j$'s. Such secure communication is also used in prior works on secure aggregation, e.g., SecAgg, SecAgg+.
Extension to Asynchronous FL
We now describe how LightSecAgg can be applied to asynchronous FL. We consider the asynchronous FL setting with bounded staleness as considered in (Nguyen et al., 2021), where the updates of the users are not synchronized and the staleness of each user is bounded by τ max . In this setting, the server stores the models that it receives in a buffer of size K and updates the global model once the buffer is full. More generally, LightSecAgg may apply to any asynchronous FL setting where a group of local models are aggregated at each round. That is, the group size does not need to be fixed in all rounds. While the baselines are not compatible with this setting, LightSecAgg can be applied by encoding the local masks in a way that the server can recover the aggregate of masks from the encoded masks via a one-shot computation, even though the masks are generated in different training rounds. Specifically, the users share the encoded masks with the timestamp to figure out which encoded masks should be aggregated for the reconstruction of the aggregate of masks. As the users aggregate the encoded masks after the server stores the local updates in the buffer, the users can aggregate the encoded masks according to the timestamp of the stored updates. Due to the commutative property of MDS coding and addition, the server can reconstruct the aggregate of masks even though the masks are generated in different training rounds. We postpone the detailed description of the LightSecAgg protocol for the asynchronous setting to Appendix F.
THEORETICAL ANALYSIS
Theoretical Guarantees
We now state our main result for the theoretical guarantees of the LightSecAgg protocol. Theorem 1. Consider a secure aggregation problem in federated learning with N users. Then, the proposed LightSecAgg protocol can simultaneously achieve (1) privacy guarantee against up to any T colluding users, and (2) dropout-resiliency guarantee against up to any D dropped users, for any pair of privacy guarantee T and dropout-resiliency guarantee D such that T + D < N .
The proof of Theorem 1, which is applicable to both synchronous and asynchronous FL settings, is presented in Appendix B. Remark 4. Theorem 1 provides a trade-off between privacy and dropout-resiliency guarantees, i.e., LightSecAgg can increase the privacy guarantee by reducing the dropoutresiliency guarantee and vice versa. As SecAgg (Bonawitz et al., 2017), LightSecAgg achieves the worst-case dropout-resiliency guarantee. That is, for any privacy guarantee T and the number of dropped users D < N − T , LightSecAgg ensures that any set of dropped users of size D in secure aggregation can be tolerated. Differently, SecAgg+ (Bell et al., 2020), FastSecAgg (Kadhe et al., 2020, and TurboAgg (So et al., 2021d) relax the worstcase constraint to random dropouts and provide a probabilistic dropout-resiliency guarantee, i.e., the desired aggregatemodel can be correctly recovered with high probability.
Remark 5. From the training convergence perspective, LightSecAgg only adds a quantization step to the local model updates of the users. The impact of this model quantization on FL convergence is well studied in synchronous FL (Reisizadeh et al., 2020; Elkordy & Avestimehr, 2020). In asynchronous FL, however, we need to analyze the convergence of LightSecAgg. We provide this analysis in the smooth and non-convex setting in Appendix F.4.
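Since the masks live in $\mathbb{F}_q$, the real-valued local update must first be mapped into the finite field; the following is a minimal sketch of one such stochastic quantizer and its inverse (our own illustrative choice of scaling and rounding, not the exact scheme of Appendix F.3.2):

```python
import numpy as np

def quantize(x, q, scale=2**10, rng=np.random.default_rng()):
    """Map real-valued x into F_q: scale, stochastically round, and
    represent negative values as q - |value|."""
    v = x * scale
    low = np.floor(v)
    v_int = (low + (rng.random(x.shape) < (v - low))).astype(np.int64)
    return np.mod(v_int, q)          # negative entries wrap to the top of the field

def dequantize(y, q, scale=2**10):
    """Inverse map, interpreting values above q//2 as negatives."""
    y = y.astype(np.int64)
    y = np.where(y > q // 2, y - q, y)
    return y / scale

q = 2**31 - 1
x = np.random.randn(5)
x_hat = dequantize(quantize(x, q), q)   # close to x up to ~1/scale resolution
```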
Complexity Analysis of LightSecAgg
We measure the storage cost, communication load, and computational load of LightSecAgg in units of elements or operations in F q for a single training round. Recall that U is a design parameter chosen such that N − D ≥ U > T .
Offline storage cost. Each user i independently generates a random mask z i of length d. Additionally, each user i stores a coded mask
[z j ] i of size d U −T , for j ∈ [N ]
. Hence, the total offline storage cost at each user is (1 + N U −T )d. Offline communication and computation loads. For each iteration of secure aggregation, before the local model is computed, each user prepares offline coded random masks and distributes them to the other users. Specifically, each user encodes U local data segments with each of size d U −T into N coded segments and distributes each of them to one of N users. Hence, the offline computational and communication load of LightSecAgg at each user is O( dN log N U −T ) and O( dN U −T ), respectively. Communication load during aggregation. While each user uploads a masked model of length d, in the phase of aggregate-model recovery, no matter how many users drop, each surviving user in U 1 sends a coded mask of size d U −T . The server is guaranteed to recover the aggregate-model of the surviving users in U 1 after receiving messages from any U users. The total required communication load at the server in the phase of mask recovery is therefore U U −T d. Computation load during aggregation. The major computational bottleneck of LightSecAgg is the decoding process to recover j∈U1 z j at the server. This involves decoding a U -dimensional MDS code from U coded symbols, which can be performed with O(U log U ) operations on elements in F q , hence a total computational load of U log U U −T d. We compare the communication and computational complexities of LightSecAgg with baseline protocols. In particular, we consider the case where secure aggregation
Table 1 (entries; columns: SecAgg / SecAgg+ / LightSecAgg):
offline comm. (U): O(sN) / O(s log N) / O(d)
offline comp. (U): O(dN + sN^2) / O(d log N + s log^2 N) / O(d log N)
online comm. (U): O(d + sN) / O(d + s log N) / O(d)
online comm. (S): O(dN + sN^2) / O(dN + sN log N) / O(dN)
online comp. (U): O(d) / O(d) / O(d)
reconstruction (S): O(dN^2) / O(dN log N) / O(d log N)
In particular, we consider the case where secure aggregation protocols aim at providing privacy guarantee T = N/2 and dropout-resiliency guarantee D = pN simultaneously for some 0 ≤ p < 1/2. As shown in Table 1, by choosing U = (1 − p)N, LightSecAgg significantly improves the computational efficiency at the server during aggregation. SecAgg and SecAgg+ incur total computational loads of O(dN^2) and O(dN log N) at the server, respectively, while the server complexity of LightSecAgg remains nearly constant with respect to N. This is expected to substantially reduce the overall aggregation time for a large number of users, which is bottlenecked by the server's computation in SecAgg (Bonawitz et al., 2017; 2019b). More detailed discussions, as well as a comparison with another recently proposed secure aggregation protocol (Zhao & Sun, 2021) that achieves a similar server complexity to LightSecAgg, are carried out in Appendix C.
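As a rough illustration of the last row of Table 1 (reconstruction at the server), the sketch below evaluates only the nominal asymptotic expressions, with all constant factors ignored and an illustrative model size d; it is not a measurement and is not taken from the paper's experiments.

```python
import math

d = 10**6                                   # illustrative model size
print(f"{'N':>5} {'SecAgg':>12} {'SecAgg+':>12} {'LightSecAgg':>12}")
for N in (100, 200, 500, 1000):
    secagg = d * N**2                       # O(d N^2) reconstruction at the server
    secagg_plus = d * N * math.log2(N)      # O(d N log N)
    light = d * math.log2(N)                # O(d log N), with U = O(N)
    print(f"{N:>5} {secagg:12.2e} {secagg_plus:12.2e} {light:12.2e}")
```

The growing gap between the first two columns and the last one is the source of the server-side speedup reported in Section 7.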
SYSTEM DESIGN AND OPTIMIZATION
Apart from the theoretical design and analysis, we have further built an FL training system to reduce the overhead of secure model aggregation and to enable realistic evaluation of LightSecAgg in cross-device FL.
The software architecture is shown in Figure 4. In order to keep the software architecture lightweight and maintainable, we do not over-design and only modularize the system as the foundation layer and the algorithm layer.
The foundation layer (blocks below the dashed line) contains the communicator and the training engine. The communicator supports multiple communication protocols (PyTorch RPC (trp, 2021) and gRPC (grp, 2021)) while providing a unified communication interface to the algorithm layer. In the training engine, in addition to standard PyTorch for GPU, we also compile ARM-based PyTorch for embedded edge devices (e.g., Raspberry Pi).
In the algorithm layer, Client Manager calls Trainer in the foundation layer to perform on-device training. Client Manager also integrates Client Encoder to complete the secure aggregation protocol, which is supported by security primitive APIs. In Server Manager, Secure Aggregator maintains the cache for masked models, and once the cache is full, it starts reconstruction based on the aggregated masks uploaded by clients. The server then synchronizes the updated global model to the clients for the next round of training. In Figure 4, we mark the 7 sequential steps in a single FL round as circled numbers to clearly show the interplay between federated training and the secure aggregation protocol.

Figure 4. Overview of the system design.
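A minimal sketch of the server-side cache logic described above is given below. The class and method names are hypothetical (they are not the released implementation), and `decode_aggregate_mask` stands in for the one-shot MDS decoding step.

```python
class SecureAggregatorCache:
    """Caches masked models and triggers one-shot mask reconstruction when ready."""

    def __init__(self, num_expected, num_needed_shares, decode_aggregate_mask):
        self.masked_models = {}             # user_id -> masked local model
        self.mask_shares = {}               # user_id -> aggregated encoded mask
        self.num_expected = num_expected    # surviving users expected this round
        self.U = num_needed_shares          # any U shares suffice for decoding
        self.decode_aggregate_mask = decode_aggregate_mask

    def add_masked_model(self, user_id, masked_model):
        self.masked_models[user_id] = masked_model

    def add_mask_share(self, user_id, share):
        self.mask_shares[user_id] = share
        if len(self.mask_shares) >= self.U:
            return self._reconstruct()
        return None

    def _reconstruct(self):
        # Decode sum_{j in U1} z_j from any U aggregated encoded masks,
        # then subtract it from the sum of the cached masked models.
        z_sum = self.decode_aggregate_mask(self.mask_shares)
        masked_sum = sum(self.masked_models.values())
        return masked_sum - z_sum           # aggregate model of the surviving users
```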
This software architecture has two special designs that further reduce the computational and communication overhead of the secure aggregation protocol.

Parallelization of offline phase and model training. We note that for all considered protocols (LightSecAgg, SecAgg, and SecAgg+), the communication and computation time to generate and exchange the random masks in the offline phase can be overlapped with model training. Hence, in our design, we reduce the offline computation and communication overhead by allowing each user to train the model and carry out the offline phase simultaneously by running two parallel processes (multi-threading performs relatively worse due to the Python GIL, Global Interpreter Lock), shown in purple and red in Figure 4. We also demonstrate the timing diagram of the overlapped implementation in a single FL training round in Figure 5, and we analyze its impact on the overall acceleration in Section 7.2.

Optimized federated training system and communication APIs via tensor-aware RPC (Remote Procedure Call). As the yellow blocks in Figure 4 show, we specially design the sending and receiving queues to accelerate the scenario in which a device has to be a sender and a receiver simultaneously. As such, the offline phase of LightSecAgg can be further accelerated by parallelizing the transmission and reception of [z_i]_j; this design can also speed up the offline pairwise agreement in SecAgg and SecAgg+. Moreover, we choose PyTorch RPC (trp, 2021) as the communication backend rather than gRPC (grp, 2021) and MPI (mpi) because its tensor-aware communication API reduces latency in scenarios where the communicator is launched frequently, i.e., each client in the offline mask-exchanging phase needs to distribute N coded segments to N users.

Table 2. Summary of the four implemented machine learning tasks and the performance gain of LightSecAgg with respect to SecAgg and SecAgg+. All learning tasks are for image classification. MNIST, FEMNIST, and CIFAR-10 are low-resolution datasets, while images in GLD-23K are high resolution, which costs much longer training time; LR and CNN are shallow models, whereas MobileNetV3 and EfficientNet-B0 are much larger models tailored for efficient edge training and inference.
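The process-level parallelization of the offline phase with local training described above can be sketched as follows. This is not the authors' code: `run_local_training` and `run_offline_mask_exchange` are hypothetical placeholders whose bodies would contain the corresponding steps.

```python
import multiprocessing as mp

def run_local_training(result_queue):
    # ... E local SGD epochs on the device's data would run here ...
    result_queue.put(("local_update", None))

def run_offline_mask_exchange(result_queue):
    # ... generate z_i, encode it into N coded segments with the MDS matrix,
    #     send [z_i]_j to user j and collect [z_j]_i from the other users ...
    result_queue.put(("encoded_masks", None))

if __name__ == "__main__":
    queue = mp.Queue()
    workers = [mp.Process(target=run_local_training, args=(queue,)),
               mp.Process(target=run_offline_mask_exchange, args=(queue,))]
    for w in workers:
        w.start()
    results = dict(queue.get() for _ in workers)   # both must finish before masking/upload
    for w in workers:
        w.join()
```

Two processes (rather than threads) are used because CPU-bound encoding would otherwise contend with training under the Python GIL.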
With the above design, we can deploy LightSecAgg in both embedded IoT devices and AWS EC2 instances. AWS EC2 instances can also represent a realistic cross-device setting because, in our experiments, we use AWS EC2 m3.medium instances, which are CPU-based and have the same hardware configuration as modern smartphones such as iOS and Android devices. Furthermore, we package our system as a Docker image to simplify the system deployment to hundreds of edge devices.
EXPERIMENTAL RESULTS
Setup
Dataset and models. To provide a comprehensive coverage of realistic FL settings, we train four models over computer vision datasets of different sizes, summarized in Table 2.
The hyper-parameter settings are provided in Appendix D.
Dropout rate. To model the dropped users, we randomly select pN users, where p is the dropout rate. We consider the worst-case scenario (Bonawitz et al., 2017), in which the selected pN users artificially drop after uploading their masked models. All three protocols provide privacy guarantee T = N/2 and resiliency for three different dropout rates, p = 0.1, p = 0.3, and p = 0.5, which are realistic values according to industrial observations of real FL systems (Bonawitz et al., 2019a): even when carefully selecting devices that are expected to stay online during the training period, the dropout rate can be as high as 10%, and when considering intermittently connected devices, only up to 10K devices can participate simultaneously out of 10M daily active devices (1:1000).
Number of users and communication bandwidth. In our experiments, we train with up to N = 200 users. The measured real bandwidth is 320 Mbps. We also consider two other bandwidth settings, 4G (LTE-A) and 5G cellular networks, as discussed later.
Baselines. We analyze and compare the performance of LightSecAgg with two baseline schemes: SecAgg and SecAgg+, described in Section 3. While there are also other secure aggregation protocols (e.g., TurboAgg (So et al., 2021d) and FastSecAgg (Kadhe et al., 2020)), we use SecAgg and SecAgg+ as our baselines since the other schemes weaken the privacy guarantees, as discussed in the related-works part of Section 1.
Overall Evaluation and Performance Analysis
For the performance analysis, we measure the total running time of a single round of global iteration, which includes model training and secure aggregation with each protocol, while gradually increasing the number of users N for different user dropouts. Our results from training CNN (McMahan et al., 2017) on the FEMNIST dataset (Caldas et al., 2018) are demonstrated in Figure 6. The performance gain of LightSecAgg with respect to SecAgg and SecAgg+ when training the other models is also provided in Table 2, and more detailed experimental results are provided in Appendix D. We make the following key observations.

Impact of dropout rate: the total running time of SecAgg and SecAgg+ increases monotonically with the dropout rate. This is because their total running time is dominated by the mask recovery at the server, which increases quadratically with the number of users.

Non-overlapped vs. overlapped: In the non-overlapped implementation, LightSecAgg provides a speedup of up to 11.3× and 3.7× over SecAgg and SecAgg+, respectively, by significantly reducing the server's execution time; in the overlapped implementation, LightSecAgg provides a further speedup of up to 12.7× and 4.1× over SecAgg and SecAgg+, respectively. This is because LightSecAgg requires more communication and a higher computational cost in the offline phase than the baseline protocols, and the overlapped implementation helps to mitigate this extra cost.

Impact of model size: LightSecAgg provides a significant speedup of the aggregate-model recovery phase at the server over the baseline protocols for all considered model sizes. When training EfficientNet-B0 on the GLD-23K dataset, the most training-intensive task, LightSecAgg provides the smallest speedup. This is because training time is dominant in this task, and training takes almost the same time in LightSecAgg and the baseline protocols.
Aggregation-only: When comparing the aggregation time only, the speedup remains the same across model sizes, as shown in Table 2. We note that speeding up the aggregation phase by itself is still very important because the local training and aggregation phases do not necessarily happen one immediately after the other. For example, local training may be done sporadically and opportunistically throughout the day (whenever resources are available), while global aggregation may be postponed to a later time when a large fraction of the users have finished local training and are available for aggregation (e.g., 2 am). The breakdown of the running time in Table 4 further confirms that the primary gain lies in the complexity reduction at the server provided by LightSecAgg, especially for a large number of users.

Impact of U: LightSecAgg incurs the smallest running time for p = 0.3, which is almost identical to the case p = 0.1. Recall that LightSecAgg can select the design parameter U between T = 0.5N and N − D = (1 − p)N. Within this range, increasing U reduces the size of the symbol to be decoded, but it also increases the complexity of decoding each symbol. The experimental results suggest that the optimal choices for the cases p = 0.1 and p = 0.3 are both U = 0.7N, which leads to a faster execution than for p = 0.5, where U can only be chosen as U = 0.5N + 1.
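The two opposing effects of U can be sketched as below. The snippet only evaluates the nominal per-round terms from Section 5.2 (symbol size d/(U − T) and per-symbol decoding cost proportional to U log U) for a few candidate values of U; it ignores constants and communication overheads, so the empirically optimal U = 0.7N reported above need not be the minimizer of this simplified expression.

```python
import math

N, d = 200, 10**6
T = N // 2                                   # privacy guarantee T = 0.5N
for p in (0.1, 0.3):                         # dropout rates with a non-trivial range for U
    D = int(p * N)
    feasible = range(T + 1, N - D + 1)       # admissible U: N - D >= U > T
    for U in (T + 10, int(0.7 * N), N - D):
        if U in feasible:
            symbol_size = d / (U - T)        # size of each symbol to be decoded
            per_symbol = U * math.log2(U)    # per-symbol MDS decoding cost (up to constants)
            print(f"p={p}, U={U}: symbol={symbol_size:.0f}, "
                  f"decode/symbol~{per_symbol:.0f}, total~{symbol_size * per_symbol:.2e}")
```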
Convergence Performance in Asynchronous FL

As described in Remark 1, SecAgg and SecAgg+ are not applicable to asynchronous FL, and hence we cannot compare the total running time of LightSecAgg with these baseline secure aggregation protocols. As such, in our experiments here we instead focus on the convergence performance of LightSecAgg compared to FedBuff (Nguyen et al., 2021) to investigate the impact of asynchrony and quantization on performance. In Figure 7, we demonstrate that LightSecAgg has almost the same performance as FedBuff on the CIFAR-10 dataset, while LightSecAgg additionally includes quantization noise to protect the privacy of the individual local updates of the users. The details of the experiment setting and additional experiments for asynchronous FL are provided in Appendix F.5.
CONCLUSION AND FUTURE WORKS
This paper proposed LightSecAgg, a new approach for secure aggregation in synchronous and asynchronous FL. Compared with the state-of-the-art protocols, LightSecAgg reduces the overhead of model aggregation in FL by leveraging one-shot aggregate-mask reconstruction of the surviving users, while providing the same privacy and dropout-resiliency guarantees. Via extensive empirical results in a realistic FL framework, it is also shown that LightSecAgg can provide substantial speedup over baseline protocols for training diverse machine learning models. While we focused on privacy in this work (under the honest-but-curious threat model), an interesting future research direction is to combine LightSecAgg with state-of-the-art Byzantine-robust aggregation protocols (e.g., He et al., 2020d; So et al., 2021b; Elkordy et al., 2021; Karimireddy et al., 2021) to also mitigate Byzantine users while ensuring privacy.

Lin, B. Y., He, C., Zeng, Z., Wang, H., Huang, Y., Soltanolkotabi, M., Ren, X., and Avestimehr, S. Fednlp: A research platform for federated learning in natural language processing. arXiv preprint arXiv:2104.08815, 2021.

Algorithm 1 (continued): 22: uploads aggregated encoded masks \sum_{j \in U_1} [z_j]_i to the server; 23: end for; 24: [Server] collects U messages of aggregated encoded masks \sum_{j \in U_1} [z_j]_i from users i ∈ U_1; 25: // recovers the aggregated mask; 26: \sum_{i \in U_1} z_i ← obtained by decoding the received U messages; 27: // recovers the aggregate model for the surviving users; 28: \sum_{i \in U_1} x_i ← \sum_{i \in U_1} \tilde{x}_i − \sum_{i \in U_1} z_i.
McMahan
B PROOF OF THEOREM 1
We prove the dropout-resiliency guarantee and the privacy guarantee for a single FL training round. As all randomness is independently generated across each round, one can extend the dropout-resiliency guarantee and the privacy guarantee for all training rounds for both synchronous and asynchronous FL setting. For simplicity, round index t is omitted in this proof.
For any pair of privacy guarantee T and dropout-resiliency guarantee D such that T + D < N, we select an arbitrary U such that N − D ≥ U > T. In the following, we show that LightSecAgg with the chosen design parameters T, D, and U can simultaneously achieve (1) privacy guarantee against up to any T colluding users, and (2) dropout-resiliency guarantee against up to any D dropped users. We denote the concatenation of \{[n_i]_k\}_{k \in \{U−T+1, \ldots, U\}} by n_i for i ∈ [N].
(Dropout-resiliency guarantee) We now focus on the phase of one-shot aggregate-model recovery. Since each user encodes its sub-masks with the same MDS matrix W, each \sum_{i \in U_1} [z_i]_j is an encoded version of \sum_{i \in U_1} [z_i]_k for k ∈ [U − T] and \sum_{i \in U_1} [n_i]_k for k ∈ {U − T + 1, \ldots, U} as follows:

\sum_{i \in U_1} [z_i]_j = \Big( \sum_{i \in U_1} [z_i]_1, \ldots, \sum_{i \in U_1} [z_i]_{U-T}, \sum_{i \in U_1} [n_i]_{U-T+1}, \ldots, \sum_{i \in U_1} [n_i]_{U} \Big) \cdot W_j,    (6)
where W_j is the j-th column of W.
Since N − D ≥ U, there are at least U surviving users after the dropouts. Thus, the server is able to recover \sum_{i \in U_1} [z_i]_k for k ∈ [U − T] via MDS decoding after receiving any U messages from the surviving users. Recall that the [z_i]_k's are sub-masks of z_i, so the server can successfully recover \sum_{i \in U_1} z_i. Lastly, the server recovers the aggregate model for the set of surviving users U_1 by

\sum_{i \in U_1} x_i = \sum_{i \in U_1} \tilde{x}_i - \sum_{i \in U_1} z_i = \sum_{i \in U_1} (x_i + z_i) - \sum_{i \in U_1} z_i.
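The argument above can be checked numerically. The sketch below is illustrative only: it uses a small prime field and a plain Vandermonde matrix as a stand-in for the T-private MDS matrix W of (5) (the paper's exact construction is not reproduced), and toy values of N, T, D, U, d. It verifies that, with one user dropped, the server recovers \sum_{i \in U_1} z_i from any U aggregated encoded masks in one shot.

```python
import numpy as np

q = 8191                          # small prime field for illustration (experiments use q = 2**32 - 5)
N, T, D, U, d = 6, 2, 2, 4, 8     # T + D < N and N - D >= U > T; d divisible by U - T here
rng = np.random.default_rng(0)

# Stand-in for the MDS matrix of (5): a U x N Vandermonde matrix over F_q (distinct evaluation points).
alphas = np.arange(1, N + 1)
W = np.array([[pow(int(a), r, q) for a in alphas] for r in range(U)], dtype=np.int64)

def inv_mod(M, q):
    """Gauss-Jordan inverse of a square integer matrix over the prime field F_q."""
    n = M.shape[0]
    A = np.concatenate([M % q, np.eye(n, dtype=np.int64)], axis=1)
    for c in range(n):
        piv = c + int(np.nonzero(A[c:, c])[0][0])      # pick a row with a nonzero pivot
        A[[c, piv]] = A[[piv, c]]
        A[c] = (A[c] * pow(int(A[c, c]), -1, q)) % q   # normalize the pivot row
        for r in range(n):
            if r != c:
                A[r] = (A[r] - A[r, c] * A[c]) % q     # eliminate column c elsewhere
    return A[:, n:]

# Each user i: partition z_i into U - T sub-masks, append T uniform noise masks, and encode.
z = rng.integers(0, q, size=(N, d))
payload = np.concatenate([z.reshape(N, U - T, d // (U - T)),
                          rng.integers(0, q, size=(N, T, d // (U - T)))], axis=1)
encoded = np.einsum('kj,iks->ijs', W, payload) % q     # encoded[i, j] plays the role of [z_i]_j

# User 0 drops; each surviving user i sends sum_{j in U1} [z_j]_i, and the server uses any U of them.
U1 = list(range(1, N))
agg_shares = encoded[U1].sum(axis=0) % q
idx = U1[:U]
decoded = inv_mod(W[:, idx].T, q) @ agg_shares[idx] % q
recovered = decoded[:U - T].reshape(-1)                # = sum_{i in U1} z_i (mod q)
assert np.array_equal(recovered, z[U1].sum(axis=0) % q)
```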
(Privacy guarantee) We first present Lemma 1, whose proof is provided in Appendix E.
Lemma 1. For any T ⊆ [N] of size T and any U_1 ⊆ [N] with |U_1| ≥ U such that U > T, if the random masks [n_i]_k are jointly uniformly random, we have
I\big( \{z_i\}_{i \in [N] \setminus T} ; \{z_i\}_{i \in T}, \{[z_j]_i\}_{j \in [N], i \in T} \big) = 0.    (7)
We consider the worst-case scenario in which all the messages sent by the users are received by the server during the execution of LightSecAgg, i.e., the users identified as dropped are merely delayed. Thus, the server receives x_i + z_i from each user i ∈ [N] and \sum_{j \in U_1} [z_j]_i from each user i ∈ U_1. We now show that LightSecAgg provides privacy guarantee T, i.e., for an arbitrary set of colluding users T of size T, the following holds:

I\Big( \{x_i\}_{i \in [N]} ; \{x_i + z_i\}_{i \in [N]}, \big\{ \sum_{j \in U_1} [z_j]_i \big\}_{i \in U_1} \,\Big|\, \sum_{i \in U_1} x_i, \{x_i\}_{i \in T}, \{z_i\}_{i \in T}, \{[z_j]_i\}_{j \in [N], i \in T} \Big) = 0.    (8)
We prove it as follows:
I\Big( \{x_i\}_{i\in[N]} ; \{x_i + z_i\}_{i\in[N]}, \big\{\sum_{j\in U_1}[z_j]_i\big\}_{i\in U_1} \,\Big|\, \sum_{i\in U_1} x_i, \{x_i\}_{i\in T}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big)    (9)

= H\Big( \{x_i + z_i\}_{i\in[N]}, \big\{\sum_{j\in U_1}[z_j]_i\big\}_{i\in U_1} \,\Big|\, \sum_{i\in U_1} x_i, \{x_i\}_{i\in T}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) - H\Big( \{x_i + z_i\}_{i\in[N]}, \big\{\sum_{j\in U_1}[z_j]_i\big\}_{i\in U_1} \,\Big|\, \{x_i\}_{i\in[N]}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big)    (10)

= H\Big( \{x_i + z_i\}_{i\in[N]}, \sum_{i\in U_1} z_i, \sum_{i\in U_1} n_i \,\Big|\, \sum_{i\in U_1} x_i, \{x_i\}_{i\in T}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) - H\Big( \{z_i\}_{i\in[N]}, \sum_{i\in U_1} z_i, \sum_{i\in U_1} n_i \,\Big|\, \{x_i\}_{i\in[N]}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big)    (11)

= H\Big( \{x_i + z_i\}_{i\in[N]\setminus T}, \sum_{i\in U_1} z_i, \sum_{i\in U_1} n_i \,\Big|\, \sum_{i\in U_1} x_i, \{x_i\}_{i\in T}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) - H\Big( \{z_i\}_{i\in[N]}, \sum_{i\in U_1} z_i, \sum_{i\in U_1} n_i \,\Big|\, \{x_i\}_{i\in[N]}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big)    (12)

= H\Big( \{x_i + z_i\}_{i\in[N]\setminus T} \,\Big|\, \sum_{i\in U_1} x_i, \{x_i\}_{i\in T}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) + H\Big( \sum_{i\in U_1} z_i, \sum_{i\in U_1} n_i \,\Big|\, \{x_i + z_i\}_{i\in[N]\setminus T}, \sum_{i\in U_1} x_i, \{x_i\}_{i\in T}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) - H\Big( \{z_i\}_{i\in[N]} \,\Big|\, \{x_i\}_{i\in[N]}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) - H\Big( \sum_{i\in U_1} z_i, \sum_{i\in U_1} n_i \,\Big|\, \{z_i\}_{i\in[N]}, \{x_i\}_{i\in[N]}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big)    (13)

= H\Big( \{x_i + z_i\}_{i\in[N]\setminus T} \,\Big|\, \sum_{i\in U_1} x_i, \{x_i\}_{i\in T}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) + H\Big( \sum_{i\in U_1} n_i \,\Big|\, \{x_i + z_i\}_{i\in[N]\setminus T}, \sum_{i\in U_1} x_i, \{x_i\}_{i\in T}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) - H\Big( \{z_i\}_{i\in[N]\setminus T} \,\Big|\, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) - H\Big( \sum_{i\in U_1} n_i \,\Big|\, \{z_i\}_{i\in[N]}, \{[z_j]_i\}_{j\in[N], i\in T} \Big)    (14)

= H\Big( \{x_i + z_i\}_{i\in[N]\setminus T} \,\Big|\, \sum_{i\in U_1} x_i, \{x_i\}_{i\in T}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) + H\Big( \sum_{i\in U_1} n_i \,\Big|\, \{x_i + z_i\}_{i\in[N]\setminus T}, \sum_{i\in U_1} x_i, \{x_i\}_{i\in T}, \{z_i\}_{i\in T}, \{[z_j]_i\}_{j\in[N], i\in T} \Big) - H\big( \{z_i\}_{i\in[N]\setminus T} \big) - H\Big( \sum_{i\in U_1} n_i \,\Big|\, \{z_i\}_{i\in[N]}, \{[z_j]_i\}_{j\in[N], i\in T} \Big)    (15)

= 0,    (16)
where (11) follows from the fact that \{\sum_{j \in U_1} [z_j]_i\}_{i \in U_1} is invertible to \sum_{i \in U_1} z_i and \sum_{i \in U_1} n_i. Equation (12) holds since \{x_i + z_i\}_{i \in T} is a deterministic function of \{z_i\}_{i \in T} and \{x_i\}_{i \in T}. Equation (13) follows from the chain rule. In equation (14), the second term follows from the fact that \sum_{i \in U_1} z_i is a deterministic function of \{x_i + z_i\}_{i \in [N] \setminus T}, \sum_{i \in U_1} x_i, \{x_i\}_{i \in T}, and \{z_i\}_{i \in T}; the third term follows from the independence of the x_i's and z_i's; and the last term follows from the fact that \sum_{i \in U_1} z_i is a deterministic function of \{z_i\}_{i \in [N]}, together with the independence of the n_i's and x_i's. In equation (15), the third term follows from Lemma 1. Equation (16) follows from: 1) \sum_{i \in U_1} n_i is a function of \{x_i + z_i\}_{i \in [N] \setminus T}, \sum_{i \in U_1} x_i, \{x_i\}_{i \in T}, \{z_i\}_{i \in T}, and \{[z_j]_i\}_{j \in [N], i \in T}; 2) \sum_{i \in U_1} n_i is a function of \{z_i\}_{i \in U_1} and \{[z_j]_i\}_{j \in U_1, i \in T}; and 3) z_i is uniformly distributed and hence has the maximum entropy in F_q^d; combined with the non-negativity of mutual information.
C DISCUSSION
As shown in Table 5, compared with the SecAgg protocol (Bonawitz et al., 2017), LightSecAgg significantly improves the computational efficiency at the server during aggregation. SecAgg requires the server to retrieve T + 1 secret shares of a secret key for each of the N users, and to compute a single PRG function if the user survives, or N − 1 PRG functions to recover N − 1 pairwise masks if the user drops, yielding a total computational load of O(N^2 d) at the server. In contrast, as analyzed in Section 5.2, for U = O(N), LightSecAgg incurs an almost constant O(d log N) computational load at the server. This admits a scalable design and is expected to achieve a much faster end-to-end execution for a large number of users, given that the overall execution time is dominated by the server's computation in SecAgg (Bonawitz et al., 2017; 2019b). SecAgg has a smaller storage overhead than LightSecAgg since it stores secret shares of keys of small size (e.g., as small as an integer) and the model size d is much larger than the number of users N in typical FL scenarios; this also allows SecAgg to have a smaller communication load in the phase of aggregate-model recovery. Finally, we note that another advantage of LightSecAgg over SecAgg is its reduced dependence on cryptographic primitives such as a public key infrastructure and a key agreement mechanism, which further simplifies the implementation of the protocol. SecAgg+ (Bell et al., 2020) improves both the communication and computational loads of SecAgg by considering a sparse random graph of degree O(log N), reducing the complexity by a factor of O(N/log N). However, SecAgg+ still incurs an O(dN log N) computational load at the server, which is much larger than the O(d log N) computational load at the server in LightSecAgg when U = O(N).
Compared with a recently proposed secure aggregation protocol in (Zhao & Sun, 2021), LightSecAgg achieves similar complexities in communication and computation during the aggregation process. The main advantage of LightSecAgg over the scheme in (Zhao & Sun, 2021) lies in how the randomness is generated and stored offline at the users, and in the resulting reduced storage cost. In the scheme of (Zhao & Sun, 2021), all randomness is generated at an external trusted party, and for each subset U_1 of size |U_1| ≥ U the trusted party needs to generate T random symbols in F_q^{d/(U−T)}, which amounts to a total amount of randomness that increases exponentially with N. In sharp contrast, LightSecAgg does not require a trusted third party, and each user locally generates a set of T random symbols.
Table 5 (entries; columns: SecAgg / SecAgg+ / LightSecAgg):
Offline storage per user: O(d + Ns) / O(d + s log N) / O(d + (N/(U−T))d)
Offline communication per user: O(sN) / O(s log N) / O(dN/(U−T))
Offline computation per user: O(dN + sN^2) / O(d log N + s log^2 N) / O(dN log N/(U−T))
Online communication per user: O(d + sN) / O(d + s log N) / O(d + d/(U−T))
Online communication at server: O(dN + sN^2) / O(dN + sN log N) / O(dN + dU/(U−T))
Online computation per user: O(d) / O(d) / O(d + dU/(U−T))
Decoding complexity at server: O(sN^2) / O(sN log^2 N) / O(dU log U/(U−T))
PRG complexity at server: O(dN^2) / O(dN log N) / −
Table 6 (entries; columns: protocol in (Zhao & Sun, 2021) / LightSecAgg):
Total amount of randomness needed: N(U − T) + T \sum_{u=U}^{N} \binom{N}{u}  /  NU
Offline storage per user: (U − T) + \frac{1}{N} \sum_{u=U}^{N} \binom{N}{u} u  /  (U − T) + N
It significantly improves the practicality of LightSecAgg in maintaining model security, and further reduces the total amount of needed randomness to scale linearly with N. Consequently, the local offline storage of each user in LightSecAgg scales linearly with N, as opposed to scaling exponentially as in (Zhao & Sun, 2021). We compare the amount of generated randomness and the offline storage cost between the scheme in (Zhao & Sun, 2021) and LightSecAgg in Table 6.
D EXPERIMENTAL DETAILS
In this section, we provide experimental details of Section 7; the additional training results are reported below (see Figures 8-10).

E PROOF OF LEMMA 1

We show that for an arbitrary set of colluding users T of size T, we have
I\big( \{z_i\}_{i \in [N] \setminus T} ; \{z_i\}_{i \in T}, \{[z_j]_i\}_{j \in [N], i \in T} \big) = 0.    (17)

The T-private MDS matrix used in LightSecAgg guarantees I(z_i ; \{[z_i]_j\}_{j \in T}) = 0. Thus,

I\big( \{z_i\}_{i \in [N] \setminus T} ; \{z_i\}_{i \in T}, \{[z_j]_i\}_{j \in [N], i \in T} \big)    (18)
= H\big( \{z_i\}_{i \in T}, \{[z_j]_i\}_{j \in [N], i \in T} \big) - H\big( \{z_i\}_{i \in T}, \{[z_j]_i\}_{j \in [N], i \in T} \,\big|\, \{z_i\}_{i \in [N] \setminus T} \big)    (19)
= H\big( \{z_i\}_{i \in T}, \{[z_j]_i\}_{j \in [N], i \in T} \big) - H\big( \{z_i\}_{i \in T} \,\big|\, \{z_i\}_{i \in [N] \setminus T} \big) - H\big( \{[z_j]_i\}_{j \in [N], i \in T} \,\big|\, \{z_i\}_{i \in [N]} \big)    (20)
= H\big( \{z_i\}_{i \in T}, \{[z_j]_i\}_{j \in [N], i \in T} \big) - H\big( \{z_i\}_{i \in T} \big) - H\big( \{[z_j]_i\}_{j \in [N], i \in T} \big)    (21)
= 0,    (22)
where equation (20) follows from the chain rule, equation (21) follows from the independence of the z_i's and from I(z_i ; \{[z_i]_j\}_{j \in T}) = 0, and equation (22) follows from the fact that the joint entropy is less than or equal to the sum of the individual entropies, combined with the non-negativity of mutual information.
F APPLICATION OF LIGHTSECAGG TO ASYNCHRONOUS FL
In this Appendix, we provide a brief overview of asynchronous FL in Appendix F.1. Then, we illustrate the incompatibility of the conventional secure aggregation protocols, SecAgg and SecAgg+, with the asynchronous FL in Appendix F.2. Later on, in Appendix F.3, we demonstrate how LightSecAgg can be applied to the asynchronous FL setting to protect the privacy of individual updates.
F.1 General Description of Asynchronous FL
We consider the general asynchronous FL setting where the updates of the users are not synchronized while the goal is the same as synchronous FL, to collaboratively learn a global model x ∈ R d , using the local datasets of N users without sharing them. This problem is formulated as minimizing a global loss function as follows
\min_{x \in \mathbb{R}^d} F(x) = \sum_{i=1}^{N} p_i F_i(x),    (23)
where F_i is the local loss function of user i ∈ [N] and the p_i ≥ 0 are weight parameters that indicate the relative impact of the users, selected such that \sum_{i=1}^{N} p_i = 1. This problem is solved iteratively in asynchronous FL. At round t, each user locally trains the model by carrying out E ≥ 1 local SGD steps. When the local update is done, user i sends the difference between the downloaded global model and the updated local model to the server. The local update of user i sent to the server at round t is given by

\Delta_i^{(t; t_i)} = x^{(t_i)} - x_i^{(E; t_i)},    (24)

where t_i is the latest round index at which the global model was downloaded by user i and t is the round index at which the local update is sent to the server; hence, the staleness of user i is given by \tau_i = t - t_i. Here, x_i^{(E; t_i)} is the local model after E local SGD steps, and the local model at user i is updated as

x_i^{(e; t_i)} = x_i^{(e-1; t_i)} - \eta_l \, g_i\big( x_i^{(e-1; t_i)} ; \xi_i \big)    (25)

for e = 1, \ldots, E, where x_i^{(0; t_i)} = x^{(t_i)} and \eta_l denotes the learning rate of the local updates. Here, g_i(x; \xi_i) denotes the stochastic gradient with respect to the random sampling \xi_i on user i, and we assume E_{\xi_i}[g_i(x; \xi_i)] = \nabla F_i(x) for all x \in \mathbb{R}^d, where F_i is the local loss function of user i defined in (23). When the server receives \Delta_i^{(t; t_i)}, the global model at the server is updated as
x^{(t+1)} = x^{(t)} - \eta_g \, \frac{ \sum_{i \in S^{(t)}} s(t - t_i) \, \Delta_i^{(t; t_i)} }{ \sum_{i \in S^{(t)}} s(t - t_i) },    (26)

where S^{(t)} is the index set of the users whose local updates are sent to the server at round t and \eta_g is the learning rate of the global updates. The function s(\tau) compensates for the staleness; it satisfies s(0) = 1 and decreases monotonically as \tau increases. Many functions satisfy these two properties, and we consider the polynomial function s_\alpha(\tau) = (\tau + 1)^{-\alpha}, as it shows similar or better performance than the other choices, e.g., the Hinge or Constant staleness functions (Xie et al., 2019).
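A small sketch of the update rule (26) over the reals (i.e., before any quantization or masking) with the polynomial staleness function is given below; the variable names are illustrative and not from the paper's code.

```python
import numpy as np

def s_alpha(tau, alpha=1.0):
    """Polynomial staleness-compensation function s_alpha(tau) = (tau + 1)**(-alpha)."""
    return (tau + 1.0) ** (-alpha)

def global_update(x_t, buffered, t, eta_g=1.0, alpha=1.0):
    """One step of eq. (26); `buffered` holds (t_i, delta_i) pairs for the users in S^(t)."""
    w = np.array([s_alpha(t - t_i, alpha) for t_i, _ in buffered])
    deltas = np.stack([delta for _, delta in buffered])
    return x_t - eta_g * (w[:, None] * deltas).sum(axis=0) / w.sum()

# toy usage: three buffered updates with staleness 0, 2, and 5 at round t = 5
x = np.zeros(4)
buffered = [(5, np.ones(4)), (3, 2 * np.ones(4)), (0, 4 * np.ones(4))]
x_next = global_update(x, buffered, t=5)
```

Staler updates receive smaller weights, which is exactly the compensation that must later be reproduced over the finite field in the secure version of the protocol.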
As discussed in Section 4.2, we focus on extending LightSecAgg to the buffered asynchronous FL setting of FedBuff (Nguyen et al., 2021), where the server stores the local updates in a buffer of size K and updates the global model once the buffer is full. This is a special case of the general asynchronous FL setting described above, where |S^{(t)}| = K for all t. In principle, the same approach for generalizing LightSecAgg can be used in other asynchronous FL settings where |S^{(t)}| changes over time; however, since the convergence of FL in those settings is not yet understood, we do not consider them in this paper.

F.2 Incompatibility of SecAgg and SecAgg+ with Asynchronous FL

As described in Section 3, SecAgg (Bonawitz et al., 2017) and SecAgg+ (Bell et al., 2020) are designed for synchronous FL. At round t, each pair of users i, j ∈ [N] agree on a pairwise random seed a_{i,j}^{(t)} and generate a random vector by running a PRG based on the random seed a_{i,j}^{(t)} to mask the local update. This additive structure has the unique property that the pairwise random vectors cancel out when the server aggregates the masked models, because user i (< j) adds PRG(a_{i,j}^{(t)}) while user j subtracts it. In asynchronous FL, however, the cancellation of the pairwise random masks based on the key agreement protocol is not guaranteed, due to the mismatch in staleness between the users. Specifically, at round t, user i ∈ S^{(t)} sends the masked model y_i^{(t; t_i)} to the server, given by
y_i^{(t; t_i)} = \Delta_i^{(t; t_i)} + \mathrm{PRG}\big( b_i^{(t_i)} \big) + \sum_{j : i < j} \mathrm{PRG}\big( a_{i,j}^{(t_i)} \big) - \sum_{j : i > j} \mathrm{PRG}\big( a_{j,i}^{(t_i)} \big),    (27)

where \Delta_i^{(t; t_i)} is the local update defined in (24). When t_i \neq t_j, the pairwise random vectors in y_i^{(t; t_i)} and y_j^{(t; t_j)} do not cancel out, since a_{i,j}^{(t_i)} \neq a_{i,j}^{(t_j)}. We note that the staleness of each user is not known a priori, hence each pair of users cannot use the same pairwise random seed.
F.3 Asynchronous LightSecAgg
We now demonstrate how LightSecAgg can be applied to the asynchronous FL setting where the server stores each local update in a buffer of size K and updates the global model by aggregating the stored updates when the buffer is full. Our key intuition is to encode the local masks in a way that the server can recover the aggregate of masks from the encoded masks via a one-shot computation even though the masks are generated in different training rounds. The asynchronous LightSecAgg protocol also consists of three phases with three design parameters D, T, U which are defined in the same way as the synchronous LightSecAgg.
Synchronous and asynchronous LightSecAgg have two key differences: (1) In asynchronous FL, the users share the encoded masks with the time stamp in the first phase to figure out which encoded masks should be aggregated for the reconstruction of aggregate of masks in the third phase. Due to the commutative property of coding and addition, the server can reconstruct the aggregate of masks even though the masks are generated in different training rounds; (2) In asynchronous FL, the server compensates the staleness of the local updates. This is challenging as this compensation should be carried out over the masked model in the finite field to provide the privacy guarantee while the conventional compensation functions have real numbers as outputs (Xie et al., 2019;Nguyen et al., 2021).
We now describe the three phases in detail.
F.3.1 Offline Encoding and Sharing of Local Masks
User i generates z_i^{(t_i)} uniformly at random from the finite field F_q^d, where t_i is the global round index at which user i downloads the global model from the server. The mask z_i^{(t_i)} is partitioned into U − T sub-masks, denoted by [z_i^{(t_i)}]_1, \ldots, [z_i^{(t_i)}]_{U-T}, where U denotes the targeted number of surviving users and N − D ≥ U ≥ T. User i also selects another T random masks, denoted by [n_i^{(t_i)}]_{U-T+1}, \ldots, [n_i^{(t_i)}]_{U}. These U partitions are then encoded with the MDS (Vandermonde) matrix W defined in (5), and the encoded masks are shared with the other users together with the time stamp t_i.
To compensate the staleness of the local updates over the finite field, we implement the quantized staleness function defined in (34) with c_g = 2^6, which has the same performance in mitigating the staleness as the original staleness function carried out over the domain of real numbers.

Performance with various quantization levels. To investigate the impact of the quantization, we measure the performance for various values of the quantization parameter c_l on the MNIST and CIFAR-10 datasets in Figure 12. We observe that c_l = 2^{16} has the best performance, while both smaller and larger values of c_l perform poorly. This is because the value of c_l trades off two sources of quantization noise: 1) the rounding error from the stochastic rounding function defined in (29), and 2) the wrap-around error when modulo operations are carried out in the finite field. When c_l is small, the rounding error dominates, while the wrap-around error dominates when c_l is large. To find a proper value of c_l, one can utilize the auto-tuning algorithm proposed in (Bonawitz et al., 2019c).
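The two error sources can be illustrated with the following simplified sketch of unbiased stochastic rounding and a two's-complement-style embedding into F_q. It is a stand-in for (29)-(31) of Appendix F.3.2 rather than the exact composition used there, and the helper names are ours.

```python
import numpy as np

q = 2**32 - 5                      # field size used in the experiments

def stochastic_round(x, c, rng):
    """Unbiased stochastic rounding of x onto the grid (1/c) * Z."""
    scaled = c * x
    low = np.floor(scaled)
    go_up = rng.random(x.shape) < (scaled - low)
    return (low + go_up) / c

def to_field(x, c_l, rng):
    """Scale by c_l, round stochastically, and embed negatives as q + v."""
    v = np.rint(c_l * stochastic_round(x, c_l, rng)).astype(np.int64)
    return np.mod(v, q)

def from_field(y, c_l):
    """Undo the embedding after aggregation, assuming no wrap-around occurred."""
    signed = np.where(y > q // 2, y.astype(np.int64) - q, y.astype(np.int64))
    return signed / c_l

rng = np.random.default_rng(0)
x = np.array([-0.31, 0.02, 1.75])
roundtrip = from_field(to_field(x, c_l=2**16, rng=rng), c_l=2**16)
# Small c_l -> coarse grid, larger rounding error; very large c_l -> aggregated magnitudes
# can exceed q/2 and wrap around: the trade-off discussed in the paragraph above.
```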
Figure 1. Illustration of our proposed LightSecAgg protocol. (1) Sharing encoded mask: users encode and share their generated local masks.
Figure 3. An illustration of LightSecAgg in the example of 3 users. Each user first generates a single mask. Each mask of a user is encoded and shared with the other users. Each user's local model is protected by its generated mask. Suppose that user 1 drops during the execution of the protocol. The server directly recovers the aggregate mask in one shot. In this example, LightSecAgg reduces the computational cost at the server from 4d to d.
Figure 5. The timing diagram of the overlapped implementation in LightSecAgg and SecAgg+ (Bell et al., 2020) for a single FL training round to train MobileNetV3 (Howard et al., 2019) on the CIFAR-100 dataset (Krizhevsky et al., 2009). SecAgg (Bonawitz et al., 2017) is not included as it takes much longer than the other two protocols.
Figure 6. Total running time of LightSecAgg versus the state-of-the-art protocols (SecAgg and SecAgg+) to train CNN (McMahan et al., 2017) on the FEMNIST dataset (Caldas et al., 2018), as the number of users increases, for various dropout rates.
Figure 7. Accuracy of asynchronous LightSecAgg and FedBuff on the CIFAR-10 dataset (Krizhevsky et al., 2009) with two strategies for mitigating the staleness: a constant function s(τ) = 1, named Constant; and a polynomial function s_α(τ) = (1 + τ)^{−α}, named Poly, with α = 1. The accuracy is reasonable since we use a variant of LeNet-5 (Xie et al., 2019).
, B., Moore, E., Ramage, D., Hampson, S., and y Arcas, B. A. Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273-1282. PMLR, 2017. McMahan, H. B. et al. Advances and open problems in federated learning. Foundations and Trends® in Machine Learning, 14(1), 2021.Minovski, D., Ogren, N., Ahlund, C., and Mitra, K. Throughput prediction using machine learning in lte and 5g networks. IEEE Transactions on Mobile Computing, 2021.Mushtaq, E., He, C., Ding, J., and Avestimehr, S.Spider: Searching personalized neural architecture for federated learning. arXiv preprint arXiv:2112.13939, 2021. Nguyen, J., Malik, K., Zhan, H., Yousefpour, A., Rabbat, M., Esmaeili, M. M., and Huba, D. Federated learning with buffered asynchronous aggregation. arXiv preprint arXiv:2106.06639, 2021. Reddi, S., Charles, Z., Zaheer, M., Garrett, Z., Rush, K., Konečnỳ, J., Kumar, S., and McMahan, H. B. Adaptive federated optimization. arXiv preprint arXiv:2003.00295, 2020. Reisizadeh, A., Mokhtari, A., Hassani, H., Jadbabaie, A., and Pedarsani, R. Fedpaq: A communication-efficient federated learning method with periodic averaging and quantization. In International Conference on Artificial Intelligence and Statistics, pp. 2021-2031. PMLR, 2020. Roth, R. M. and Lempel, A. On mds codes via cauchy matrices. IEEE transactions on information theory, 35 (6):1314-1319, 1989. Scheuner, J. and Leitner, P. A cloud benchmark suite combining micro and applications benchmarks. In Companion of the 2018 ACM/SPEC International Conference on Performance Engineering, pp. 161-166, 2018. Shamir, A. How to share a secret. Communications of the ACM, 22(11):612-613, 1979. Shlezinger, N., Chen, M., Eldar, Y. C., Poor, H. V., and Cui, S. Uveqfed: Universal vector quantization for federated learning. IEEE Transactions on Signal Processing, 69: 500-514, 2020. So, J., Ali, R. E., Guler, B., Jiao, J., and Avestimehr, S. Securing secure aggregation: Mitigating multi-round privacy leakage in federated learning. arXiv preprint arXiv:2106.03328, 2021a. So, J., Güler, B., and Avestimehr, A. S. Byzantine-resilient secure federated learning. IEEE Journal on Selected Areas in Communications, 39(7):2168-2181, 2021b. So, J., Güler, B., and Avestimehr, A. S. Codedprivateml: A fast and privacy-preserving framework for distributed machine learning. IEEE Journal on Selected Areas in Information Theory, 2(1):441-451, 2021c. So, J., Güler, B., and Avestimehr, A. S. Turbo-aggregate: Breaking the quadratic aggregation barrier in secure federated learning. IEEE Journal on Selected Areas in Information Theory, 2(1):479-489, 2021d. T. Dinh, C., Tran, N., and Nguyen, J. Personalized federated learning with moreau envelopes. In Larochelle, H., Ranzato, M., Hadsell, R., Balcan, M. F., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 21394-21405. Curran Associates, Inc., 2020. URL https://proceedings. neurips.cc/paper/2020/file/ f4f1f13c8289ac1b1ee0ff176b56fc60-Paper. pdf. Tan, M. and Le, Q. Efficientnet: Rethinking model scaling for convolutional neural networks. In International Conference on Machine Learning, pp. 6105-6114. PMLR, 2019. Tang, T., Ali, R. E., Hashemi, H., Gangwani, T., Avestimehr, S., and Annavaram, M. Verifiable coded computing: Towards fast, secure and private distributed machine learning. arXiv preprint arXiv:2107.12958, 2021. Truex, S., Liu, L., Chow, K.-H., Gursoy, M. E., and Wei, W. Ldp-fed: Federated learning with local differential privacy. 
In Proceedings of the Third ACM International Workshop on Edge Systems, Analytics and Networking, pp. 61-66, 2020. van Dijk, M., Nguyen, N. V., Nguyen, T. N., Nguyen, L. M., Tran-Dinh, Q., and Nguyen, P. H. Asynchronous federated learning with reduced number of rounds and with differential privacy from less aggregated gaussian noise. arXiv preprint arXiv:2007.09208, 2020. Wang, J., Liu, Q., Liang, H., Joshi, G., and Poor, H. V. Tackling the objective inconsistency problem in heterogeneous federated optimization. arXiv preprint arXiv:2007.07481, 2020. Wang, J., Charles, Z., Xu, Z., Joshi, G., McMahan, H. B., Al-Shedivat, M., Andrew, G., Avestimehr, S., Daly, K., Data, D., et al. A field guide to federated optimization. arXiv preprint arXiv:2107.06917, 2021. Wang, Z., Song, M., Zhang, Z., Song, Y., Wang, Q., and Qi, H. Beyond inferring class representatives: User-level privacy leakage from federated learning. In IEEE INFO-COM 2019-IEEE Conference on Computer Communications, pp. 2512-2520. IEEE, 2019. Weyand, T., Araujo, A., Cao, B., and Sim, J. Google landmarks dataset v2-a large-scale benchmark for instancelevel recognition and retrieval. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 2575-2584, 2020. Xie, C., Koyejo, S., and Gupta, I. Asynchronous federated optimization. arXiv preprint arXiv:1903.03934, 2019. Yao, A. C. Protocols for secure computations. In 23rd annual symposium on foundations of computer science (sfcs 1982), pp. 160-164. IEEE, 1982. Yu, Q., Li, S., Raviv, N., Kalan, S. M. M., Soltanolkotabi, M., and Avestimehr, S. A. Lagrange coded computing: Optimal design for resiliency, security, and privacy. In The 22nd International Conference on Artificial Intelligence and Statistics, pp. 1215-1225. PMLR, 2019. Zhang, T., He, C., Ma, T., Gao, L., Ma, M., and Avestimehr, A. S. Federated learning for internet of things. Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems, 2021a. APPENDIX A PSEUDO CODE OF LI G H TSE CAG G Algorithm 1 The LightSecAgg protocol Input: T (privacy guarantee), D (dropout-resiliency guarantee), U (target number of surviving users) 1: Server Executes: 2: // phase: offline encoding and sharing of local masks 3: for each user i = 1, 2, . . . , N in parallel do i ] 1 , . . . , [z i ] U −T ← obtained by partitioning z i to U − T pieces 6: [n i ] U −T +1 , . . . , [n i ] U ← randomly picks from F i ] j } j∈[N ] ← obtained by encoding [z i ] k 's and [n i ] k 's using (mask [z i ] j to user j ∈ [N ]\{i} 9:receives encoded mask [z j ] i from user j ∈ [N ]\{i} 10: end for 11: // phase: masking and uploading of local models 12: for each user i = 1, 2, . . . , N in parallel do 13: // user i obtains x i after the local update 14:x i ← x i + z i // masks the local model15: uploads masked modelx i to the server 16: end for 17: identifies set of surviving users U 1 ⊆ [N ] 18: gathers masked modelsx i from user i ∈ U 1 19: // phase: one-shot aggregate-model recovery 20: for each user i ∈ U 1 in parallel do21: computes aggregated encoded masks j∈U1 [z j ] i 22:
Aside from the results of training CNN (McMahan et al., 2017) on the FEMNIST dataset (Caldas et al., 2018) shown in Figure 6, we also demonstrate the total running time of LightSecAgg versus the two baseline protocols SecAgg (Bonawitz et al., 2017) and SecAgg+ (Bell et al., 2020) to train logistic regression on the MNIST dataset (LeCun et al., 1998), MobileNetV3 (Howard et al., 2019) on the CIFAR-10 dataset (Krizhevsky et al., 2009), and EfficientNet-B0 (Tan & Le, 2019) on the GLD23k dataset (Weyand et al., 2020) in Figure 8, Figure 9, and Figure 10, respectively. For all considered FL training tasks, each user locally trains its model with E = 5 local epochs before masking and uploading its model. We observe that LightSecAgg provides significant speedup in the running time over SecAgg and SecAgg+ for all considered FL training tasks.

Figure 8. Total running time of LightSecAgg versus the state-of-the-art protocols (SecAgg (Bonawitz et al., 2017) and SecAgg+ (Bell et al., 2020)) to train logistic regression on MNIST (LeCun et al., 1998) with an increasing number of users, for various dropout rates.

Figure 9. Total running time of LightSecAgg versus the state-of-the-art protocols (SecAgg (Bonawitz et al., 2017) and SecAgg+ (Bell et al., 2020)) to train MobileNetV3 (Howard et al., 2019) on CIFAR-10 (Krizhevsky et al., 2009) with an increasing number of users, for various dropout rates.

(a) Non-overlapped. (b) Overlapped.

Figure 10. Total running time of LightSecAgg versus the state-of-the-art protocols (SecAgg (Bonawitz et al., 2017) and SecAgg+ (Bell et al., 2020)) to train EfficientNet-B0 (Tan & Le, 2019) on GLD23k (Weyand et al., 2020) with an increasing number of users, for various dropout rates.
Figure 11. Accuracy of asynchronous LightSecAgg and FedBuff with two strategies for the weighting function to mitigate the staleness: a constant function s(τ) = 1 (no compensation), named Constant; and a polynomial function s_α(τ) = (1 + τ)^{−α}, named Poly, with α = 1. (a) MNIST dataset. (b) CIFAR-10 dataset.
Figure 12. Accuracy of asynchronous LightSecAgg and FedBuff for various values of the quantization parameter c_l = 2^{c_bit}.
Table 1. Complexity comparison between SecAgg, SecAgg+, and LightSecAgg (columns in that order; the entries are listed in Section 5.2). Here N is the total number of users, d is the model size, and s is the length of the secret keys used as seeds for the PRG (s ≪ d). In the table, U stands for user and S stands for server.
Table 3. Performance gain in different bandwidth settings.
Protocol: 4G (98 Mbps) / 320 Mbps / 5G (802 Mbps)
SecAgg: 8.5× / 12.7× / 13.5×
SecAgg+: 2.9× / 4.1× / 4.4×
Impact of bandwidth: We have also analyzed the impact of the communication bandwidth at the users. In addition to the default bandwidth setting used in this section, we have considered two other edge scenarios, 4G (LTE-A) and 5G cellular networks, using realistic bandwidth settings of 98 and 802 Mbps, respectively (Minovski et al., 2021; Scheuner & Leitner, 2018). The results are reported in Table 3 for a single FL round to train CNN over FEMNIST.
Table 4. Breakdown of the running time (sec) of LightSecAgg and the state-of-the-art protocols (SecAgg (Bonawitz et al., 2017) and SecAgg+ (Bell et al., 2020)) to train CNN (McMahan et al., 2017) on the FEMNIST dataset (Caldas et al., 2018) with N = 200 users, for dropout rates p = 10%, 30%, 50%.

Protocol / Phase: Non-overlapped (p = 10% / 30% / 50%) | Overlapped (p = 10% / 30% / 50%)
LightSecAgg - Offline: 69.3 / 69.0 / 191.2 | 75.1 / 74.9 / 196.9
LightSecAgg - Training: 22.8 / 22.8 / 22.8
LightSecAgg - Uploading: 12.4 / 12.2 / 21.6 | 12.6 / 12.0 / 21.4
LightSecAgg - Recovery: 40.9 / 40.7 / 64.5 | 40.7 / 41.0 / 64.9
LightSecAgg - Total: 145.4 / 144.7 / 300.1 | 123.4 / 127.3 / 283.2
SecAgg - Offline: 95.6 / 98.6 / 102.6 | 101.2 / 102.3 / 101.3
SecAgg - Training: 22.8 / 22.8 / 22.8
SecAgg - Uploading: 10.7 / 10.9 / 11.0 | 10.9 / 10.8 / 11.2
SecAgg - Recovery: 911.4 / 1499.2 / 2087.0 | 911.2 / 1501.3 / 2086.8
SecAgg - Total: 1047.5 / 1631.5 / 2216.4 | 1030.3 / 1614.4 / 2198.9
SecAgg+ - Offline: 67.9 / 68.1 / 69.2 | 73.9 / 73.8 / 74.2
SecAgg+ - Training: 22.8 / 22.8 / 22.8
SecAgg+ - Uploading: 10.7 / 10.8 / 10.7 | 10.7 / 10.8 / 10.9
SecAgg+ - Recovery: 379.1 / 436.7 / 495.5 | 378.9 / 436.7 / 497.3
SecAgg+ - Total: 470.5 / 538.4 / 608.2 | 463.6 / 521.3 / 582.4
7.3 Performance Breakdown

To further investigate the primary gain of LightSecAgg, we provide the breakdown of the total running time for training CNN (McMahan et al., 2017) on the FEMNIST dataset (Caldas et al., 2018) in Table 4.
Hadsell, R., Balcan, M. F., and Lin, H. (eds.), Advances in Neural Information Processing Systems, volume 33, pp. 16937-16947. Curran Associates,
gRPC: A high performance, open source universal RPC
framework. https://grpc.io/, 2021.
Pytorch rpc: Distributed deep learning built on tensor-
optimized remote procedure calls.
https://
pytorch.org/docs/stable/rpc.html, 2021.
Alistarh, D., Grubic, D., Li, J., Tomioka, R., and Vojnovic,
M. Qsgd: Communication-efficient sgd via gradient quan-
tization and encoding. In Advances in Neural Information
Processing Systems, pp. 1709-1720, 2017.
Asad, M., Moustafa, A., and Ito, T. Fedopt: Towards com-
munication efficiency and privacy preservation in feder-
ated learning. Applied Sciences, 10(8):2864, 2020.
Bell, J. H., Bonawitz, K. A., Gascón, A., Lepoint, T., and
Raykova, M. Secure single-server aggregation with (poly)
logarithmic overhead. In Proceedings of the 2020 ACM
SIGSAC Conference on Computer and Communications
Security, pp. 1253-1269, 2020.
Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A.,
McMahan, H. B., Patel, S., Ramage, D., Segal, A.,
and Seth, K. Practical secure aggregation for privacy-
preserving machine learning. In proceedings of the 2017
ACM SIGSAC Conference on Computer and Communica-
tions Security, pp. 1175-1191, 2017.
Bonawitz, K., Eichner, H., Grieskamp, W., Huba, D., Inger-
man, A., Ivanov, V., Kiddon, C., Konečnỳ, J., Mazzocchi,
S., McMahan, H. B., et al. Towards federated learning at
scale: System design. arXiv preprint arXiv:1902.01046,
2019a.
Bonawitz, K., Eichner, H., Grieskamp, W., Huba, D.,
Ingerman, A., Ivanov, V., Kiddon, C., Konečný, J.,
Mazzocchi, S., McMahan, B., Van Overveldt, T., Petrou,
D., Ramage, D., and Roselander, J. Towards federated
learning at scale: System design. In Proceedings of
Machine Learning and Systems, volume 1, pp. 374-
388, 2019b.
URL https://proceedings.
mlsys.org/paper/2019/file/
bd686fd640be98efaae0091fa301e613-Paper.
pdf.
Bonawitz, K., Salehi, F., Konečnỳ, J., McMahan, B.,
and Gruteser, M. Federated learning with autotuned
communication-efficient secure aggregation. In 2019
53rd Asilomar Conference on Signals, Systems, and Com-
puters, pp. 1222-1226. IEEE, 2019c.
Caldas, S., Duddu, S. M. K., Wu, P., Li, T., Konečnỳ, J.,
McMahan, H. B., Smith, V., and Talwalkar, A. Leaf:
A benchmark for federated settings. arXiv preprint
arXiv:1812.01097, 2018.
Chai, Z., Chen, Y., Zhao, L., Cheng, Y., and Rangwala,
H. FedAt: A communication-efficient federated learning
method with asynchronous tiers under non-iid data. arXiv
preprint arXiv:2010.05958, 2020.
Chen, Y., Ning, Y., Slawski, M., and Rangwala, H. Asyn-
chronous online federated learning for edge devices with
non-iid data. In 2020 IEEE International Conference on
Big Data (Big Data), pp. 15-24. IEEE, 2020.
Diffie, W. and Hellman, M. New directions in cryptography.
IEEE transactions on Information Theory, 22(6):644-
654, 1976.
Elkordy, A. R. and Avestimehr, A. S. Secure aggregation
with heterogeneous quantization in federated learning.
arXiv preprint arXiv:2009.14388, 2020.
Elkordy, A. R., Prakash, S., and Avestimehr, A. S. Basil:
A fast and byzantine-resilient approach for decentralized
training. arXiv preprint arXiv:2109.07706, 2021.
Ezzeldin, Y. H., Yan, S., He, C., Ferrara, E., and Aves-
timehr, S. Fairfed: Enabling group fairness in federated
learning. ICML 2021 -International Workshop on Feder-
ated Learning for User Privacy and Data Confidentiality,
2021.
Fallah, A., Mokhtari, A., and Ozdaglar, A. Personalized
federated learning with theoretical guarantees: A model-
agnostic meta-learning approach. Advances in Neural
Information Processing Systems, 33, 2020.
Geiping, J., Bauermeister, H., Dröge, H., and Moeller,
M. Inverting gradients -how easy is it to break privacy
in federated learning?
In Larochelle, H., Ranzato,
M., Inc., 2020.
URL https://proceedings.
neurips.cc/paper/2020/file/
c4ede56bbd98819ae6112b20ac6bf145-Paper.
pdf.
He, C., Annavaram, M., and Avestimehr, S. Group knowl-
edge transfer: Federated learning of large cnns at the
edge. NeurIPS 2020 (Advances in Neural Information-
Processing Systems 2020), 2020a.
He, C., Annavaram, M., and Avestimehr, S. Fednas:
Federated deep learning via neural architecture search.
CVPR 2020 Workshop on Neural Architecture Search and
Beyond for Representation Learning, pp. arXiv-2004,
2020b.
He, C., Tan, C., Tang, H., Qiu, S., and Liu, J. Central server
free federated learning over single-sided trust social net-
works. NeurIPS 2020 (Advances in Neural Information
Processing Systems 2020) -Federated Learning Work-
shop, 2020c.
He, C., Balasubramanian, K., Ceyani, E., Yang, C., Xie,
H., Sun, L., He, L., Yang, L., Yu, P. S., Rong, Y., et al.
Fedgraphnn: A federated learning system and bench-
mark for graph neural networks. DPML@ICLR 2021 and
GNNSys@MLSys 2021, 2021a.
He, C., Ceyani, E., Balasubramanian, K., Annavaram, M.,
and Avestimehr, S. Spreadgnn: Serverless multi-task fed-
erated learning for graph neural networks. International
Workshop on Federated Learning for User Privacy and
Data Confidentiality in Conjunction with ICML 2021 (FL-
ICML'21) and Deep Learning on Graphs: Method and
Applications with KDD 2021 (DLG-KDD'21), 2021b.
He, C., Li, S., Soltanolkotabi, M., and Avestimehr, S.
Pipetransformer: Automated elastic pipelining for dis-
tributed training of large-scale models. In Interna-
tional Conference on Machine Learning, pp. 4150-4159.
PMLR, 2021c.
He, C., Shah, A. D., Tang, Z., Sivashunmugam, D.
F. N., Bhogaraju, K., Shimpi, M., Shen, L., Chu, X.,
Soltanolkotabi, M., and Avestimehr, S. Fedcv: A fed-
erated learning framework for diverse computer vision
tasks. arXiv preprint arXiv:2111.11066, 2021d.
He, C., Yang, Z., Mushtaq, E., Lee, S., Soltanolkotabi,
M., and Avestimehr, S. Ssfl: Tackling label deficiency
in federated learning via personalized self-supervision.
arXiv preprint arXiv:2110.02470, 2021e.
He, L., Karimireddy, S. P., and Jaggi, M. Secure
byzantine-robust machine learning. arXiv preprint
arXiv:2006.04747, 2020d.
Howard, A., Sandler, M., Chu, G., Chen, L.-C., Chen, B.,
Tan, M., Wang, W., Zhu, Y., Pang, R., Vasudevan, V.,
et al. Searching for mobilenetv3. In Proceedings of the
IEEE/CVF International Conference on Computer Vision,
pp. 1314-1324, 2019.
Kadhe, S., Rajaraman, N., Koyluoglu, O. O., and Ram-
chandran, K. Fastsecagg: Scalable secure aggregation
for privacy-preserving federated learning. arXiv preprint
arXiv:2009.11248, 2020.
Karimireddy, S. P., He, L., and Jaggi, M. Learning from
history for byzantine robust optimization. In Interna-
tional Conference on Machine Learning, pp. 5311-5319.
PMLR, 2021.
Krizhevsky, A., Hinton, G., et al. Learning multiple layers
of features from tiny images. 2009.
LeCun, Y., Bottou, L., Bengio, Y., and Haffner, P. Gradient-
based learning applied to document recognition. Proceed-
ings of the IEEE, 86(11):2278-2324, 1998.
Li, T., Sahu, A. K., Zaheer, M., Sanjabi, M., Talwalkar, A.,
and Smith, V. Federated optimization in heterogeneous
networks. arXiv preprint arXiv:1812.06127, 2018.
Li, T., Hu, S., Beirami, A., and Smith, V. Ditto: Fair and
robust federated learning through personalization. arXiv:
2012.04221, 2020.
Li, X., Huang, K., Yang, W., Wang, S., and Zhang, Z. On the
convergence of fedavg on non-iid data. In International
Conference on Learning Representations, 2019.
Liang, J., Li, S., Jiang, W., Cao, B., and He, C. Omnilytics:
A blockchain-based secure data market for decentralized
machine learning. ICML 2021 -International Workshop
on Federated Learning for User Privacy and Data Confi-
dentiality, 2021.
Table 5. Complexity comparison between SecAgg (Bonawitz et al., 2017), SecAgg+ (Bell et al., 2020), and LightSecAgg. Here N is the total number of users. The parameters d and s respectively represent the model size and the length of the secret keys used as seeds for the PRG, where s ≪ d. LightSecAgg and SecAgg provide worst-case privacy guarantee T and dropout-resiliency guarantee D for any T and D as long as T + D < N. SecAgg+ provides probabilistic privacy guarantee T and dropout-resiliency guarantee D. LightSecAgg selects three design parameters T, D, and U such that T < U ≤ N − D.
Table 6. Comparison of storage cost (in the number of symbols in F_q^{d/(U−T)}) between the protocol in (Zhao & Sun, 2021) and LightSecAgg.
For simplicity, we assume that all users have equal-sized datasets, i.e., p_i = 1/N for all i ∈ [N].
where W j is the Vandermonde matrix defined in(5)Each user i trains the local model as in(24)and(25). User i quantizes its local update ∆ (t;ti) i from the domain of real numbers to the finite field F q as masking and MDS encoding are carried out in the finite field to provide information-theoretic privacy. The field size q is assumed to be large enough to avoid any wrap-around during secure aggregation.The quantization is a challenging task as it should be performed in a way to ensure the convergence of the global model. Moreover, the quantization should allow the representation of negative integers in the finite field, and enable computations to be carried out in the quantized domain. Therefore, we cannot utilize well-known gradient quantization techniques such as in(Alistarh et al., 2017), which represents the sign of a negative number separately from its magnitude. LightSecAgg addresses this challenge with a simple stochastic quantization strategy combined with the two's complement representation as described subsequently. For any positive integer c ≥ 1, we first define a stochastic rounding function aswhere x is the largest integer less than or equal to x, and this rounding function is unbiased, i.e.,The parameter c is a design parameter to determine the number of quantization levels. The variance of Q c (x) decreases as the value of c increases. We then define the quantized updatewhere the function Q c from(29)is carried out element-wise, and c l is a positive integer parameter to determine the quantization level of the local updates. The mapping function φ : R → F q is defined to represent a negative integer in the finite field by using the two's complement representation,To protect the privacy of the local updates, user i masks the quantized update ∆and sends the pair of ∆ (t;ti) i , t i to the server. The local round index t i is used in two cases: (1) when the server identifies the staleness of each local update and compensates it, and (2) when the users aggregate the encoded masks for one-shot recovery, which will be explained in Section F.3.3.F.3.3 One-shot Aggregate-update Recovery and Global Model UpdateThe server stores ∆ (t;ti) i in the buffer, and when the buffer of size K is full, the server aggregates the K masked local updates. In this phase, the server intends to recoverwhere ∆ (t;ti) i is the local update in the real domain defined in (24), S (t) ( S (t) = K) is the index set of users whose local updates are stored in the buffer and aggregated by the server at round t, and s(τ ) is the staleness function defined in (26). To where (45) follows from the triangle inequality and (46) follows form (44). Now, the update equation of asynchronous LightSecAgg is equivalent to the update equation of FedBuff except that LightSecAgg has an additional random source, stochastic quantization Q c l , which also satisfies the unbiasedness and bounded variance. One can show the convergence rate of asynchronous LightSecAgg presented in Theorem 2 by exchanging E ξ and variance-bound σ 2 l in (Nguyen et al., 2021) with E Qc l ,ξ and variance-bound σ 2 c l = d 4c l 2 + σ 2 l , respectively.Remark 6. Theorem 2 shows that convergence rates of asynchronous LightSecAgg and FedBuff (see Corollary 1 in(Nguyen et al., 2021)) are the same except for the increased variance of the local updates due to the quantization noise in LightSecAgg. 
The amount of the increased variance d 4c l 2 in σ 2 c l = d 4c l 2 + σ 2 l is negligible for large c l , which will be demonstrated in our experiments in Appendix F.5.F.5 Experiments for Asynchronous LightSecAggAs described in the previous sections, there is no prior secure aggregation protocol applicable to asynchronous FL, and hence we cannot compare the the total running time of LightSecAgg with other baseline protocols, such as SecAgg and SecAgg+ that were considered in synchronous FL. As such, in our experiments here we instead focus on convergence performance of LightSecAgg compared to the buffered asynchronous FL scheme to highlight the impact of asynchrony and quantization in performance. We measure the performance in terms of the model accuracy evaluated over the test samples with respect to the global round index t.Datasets and network architectures. We consider an image classification task on theMNIST (LeCun et al., 1998)and CIFAR-10 datasets(Krizhevsky et al., 2009). For the MNIST dataset, we trainLeNet (LeCun et al., 1998). For the CIFAR-10 dataset, we train the convolutional neural network (CNN) used in(Xie et al., 2019). These network architectures are sufficient for our needs as our goal is to evaluate various schemes, and not to achieve the best accuracy.Setup. We consider a buffered asynchronous FL setting with N = 100 users and a single server having the buffer of size K = 10. For the IID data distribution, the training samples are shuffled and partitioned into N = 100 users. For asynchronous training, we assume the staleness of each user is uniformly distributed over [0, 10], i.e., τ max = 10, as used in(Xie et al., 2019). We set the field size q = 2 32 − 5, which is the largest prime within 32 bits.Implementations. We implement two schemes, FedBuff and LightSecAgg. The key difference between the two schemes is that in LightSecAgg, the local updates are quantized and converted into the finite field to provide privacy of the individual local updates while all operations are carried out over the domain of real numbers in FedBuff. For both schemes, to compensate the staleness of the local updates, we employ the two strategies for the weighting function: a constant function s(τ ) = 1 and a polynomial function s α (τ ) = (1 + τ ) −α .Empirical results. InFigure 11(a) and 11(b), we demonstrate that LightSecAgg has almost the same performance as FedBuff on both MNIST and CIFAR-10 datasets, while LightSecAgg includes quantization noise to protect the privacy of individual local updates of users. This is because the quantization noise in LightSecAgg is negligible. To compensate the staleness of the local updates over the finite field in LightSecAgg, we implement the quantized staleness function
| [] |
[] | [
"Alexander Rashkovskii "
] | [] | [] | A type σ(u, ϕ) of a plurisubharmonic function u relative to a maximal plurisubharmonic weight ϕ with isolated singularity at ζ is defined as lim inf u(x)/ϕ(x) as x → ζ. We study properties of the relative types as functionals u → σ(u, ϕ); it is shown that they give a general form for upper semicontinuous, positive homogeneous and tropically additive functionals on plurisubharmonic singularities. We consider some extremal problems whose solutions are Green-like functions that give best possible bounds on u, given the values of its types relative to some of (or all) weights ϕ; in certain cases they coincide with known variants of pluricomplex Green functions. An analyticity theorem is proved for the upperlevel sets for the types with respect to exponentially Hölder continuous weights, which leads to a result on propagation of plurisubharmonic singularities. | 10.1155/imrn/2006/76283 | [
"https://export.arxiv.org/pdf/math/0509454v4.pdf"
] | 12,870,240 | math/0509454 | 819390ff2acd4ac09a25e5c43fb04aa53a85ad96 |
Relative types and extremal problems for plurisubharmonic functions
Alexander Rashkovskii
arXiv:math/0509454v4 [math.CV], 5 Mar 2007. Subject classification: 32U05, 32U25, 32U35.
A type σ(u, ϕ) of a plurisubharmonic function u relative to a maximal plurisubharmonic weight ϕ with isolated singularity at ζ is defined as lim inf u(x)/ϕ(x) as x → ζ. We study properties of the relative types as functionals u → σ(u, ϕ); it is shown that they give a general form for upper semicontinuous, positive homogeneous and tropically additive functionals on plurisubharmonic singularities. We consider some extremal problems whose solutions are Green-like functions that give best possible bounds on u, given the values of its types relative to some of (or all) weights ϕ; in certain cases they coincide with known variants of pluricomplex Green functions. An analyticity theorem is proved for the upperlevel sets for the types with respect to exponentially Hölder continuous weights, which leads to a result on propagation of plurisubharmonic singularities.
Introduction
If a holomorphic mapping F vanishes at a point ζ, then the asymptotic behaviour of |F | near ζ completely determines such fundamental characteristics of F at ζ as the multiplicity of the zero or the integrability index. On the other hand, in most cases the values of such characteristics can just give certain bounds on the asymptotics of F rather than recover it completely.
The transformation F → log |F | puts this into the context of pluripotential theory, which leads to a question of characteristics of singularities of plurisubharmonic functions and their relations to the asymptotic behaviour of the functions. Since our considerations are local, we assume the functions to be defined on domains of C n , n > 1.
Let u be a plurisubharmonic function near a point ζ of C n , such that u(ζ) = −∞. The value ν u (ζ) of the Lelong number of u at ζ gives some information on the asymptotic behaviour near ζ: u(x) ≤ ν u (ζ) log |x − ζ| + O (1). A more detailed information can be obtained by means of its directional Lelong numbers ν u (ζ, a), a ∈ R n + , due to Kiselman: u(x) ≤ ν u (ζ, a) max k a −1 k log |x k − ζ k | + O(1). In addition, these characteristics of singularity are well suited for the tropical structure of the cone of plurisubharmonic functions, namely ν u+v = ν u + ν v (tropical multiplicativity) and ν max{u,v} = min{ν u , ν v } (tropical additivity). These properties play an important role, for example, in investigation of valuations on germs of holomorphic functions [6]. Note that the tropical operations u ⊕ v := max{u, v} and u ⊗ v := u + v, when applied to plurisubharmonic singularities, can be viewed as Maslov's dequantization of usual addition and multiplication of holomorphic functions.
A general notion of Lelong numbers ν(u, ϕ) with respect to plurisubharmonic weights ϕ was introduced and studied by Demailly [3], [5]. Due to their flexibility, the Lelong-Demailly numbers have become a powerful tool in pluripotential theory and its applications. They still are tropically multiplicative, however tropical additivity is no longer true for ν(u, ϕ) with arbitrary plurisubharmonic weights ϕ, even if they are maximal outside ϕ −1 (−∞). In addition, the value ν(u, ϕ) gives little information on the asymptotics of u near ϕ −1 (−∞).
The good properties of the classical and directional Lelong numbers result from the fact that they can be evaluated by means of the suprema of u over the corresponding domains (the balls and polydiscs, respectively). This makes it reasonable to study the asymptotics of the suprema of u over the corresponding domains {ϕ(x) < t} for a maximal weight ϕ with an isolated singularity at ζ and consider the value σ(u, ϕ) = lim inf u(x)/ϕ(x) as x → ζ, the type of u relative to ϕ. The relative type is thus an alternative generalization of the notion of Lelong number.
Unlike the Lelong-Demailly numbers, the relative types need not be tropically multiplicative, however they are tropically additive. Moreover, they are the only "reasonable" tropically additive functionals on plurisubharmonic singularities (for a precise statement, see Theorem 4.3).
Maximality of ϕ gives the bound u ≤ σ(u, ϕ)ϕ + O(1) near the pole of ϕ. We are then interested in best possible bounds on u, given the values of its types relative to some of (or all) the weights ϕ with fixed ϕ −1 (−∞). Tropical additivity of the relative types makes them a perfect tool for dealing with upper envelopes of families of plurisubharmonic functions, constructing thus extremal plurisubharmonic functions with prescribed singularities. In certain cases these Green-like functions coincide with known variants of pluricomplex Green functions. In particular, this gives a new representation of the Green functions with divisorial singularities (Theorems 6.6 and 6.7). We study relations between such extremal functions; one of the relations implies a complete characterization of holomorphic mappings f with isolated zero at ζ of the multiplicity equal to the Newton number of f at ζ (Corollary 6.5).
We also prove that the upperlevel sets for the types relative to exponentially Hölder continuous weights are analytic varieties (an analogue to the Siu theorem). As an application, we obtain a result on propagation of plurisubharmonic singularities (Corollary 7.3) that results in a new representation of the Green functions with singularities along complex spaces (Corollary 7.5).
The paper is organized as follows. Section 2 recalls basic facts on Lelong numbers and Green functions. In Section 3 we present the definition and elementary properties of the relative types. A representation theorem for tropically additive functionals on plurisubharmonic singularities is proved in Section 4. In Sections 5 and 6 we consider extremal problems for plurisubharmonic functions with given singularities. An analyticity theorem for the upperlevel sets and its applications are presented in Section 7.
Preliminaries
Lelong numbers
The Lelong number ν T (ζ) of a closed positive current T of bidimension (p, p) at a point ζ ∈ C n is the residual mass of T ∧ (dd c log | · −ζ|) p at ζ:
\nu_T(\zeta) = \lim_{r\to 0} \int_{|x-\zeta|<r} T \wedge (dd^c \log|x-\zeta|)^p;   (2.1)

here d = \partial + \bar\partial, d^c = (\partial - \bar\partial)/2\pi i.
The Lelong number ν u (ζ) of a plurisubharmonic function u is just the Lelong number of the current dd c u. It can also be calculated as
\nu_u(\zeta) = \lim_{r\to-\infty} r^{-1} \int_{S_1} u(\zeta + x e^r)\, dS_1(x),   (2.2)
where dS 1 is the normalized Lebesgue measure on the unit sphere S 1 , as well as
\nu_u(\zeta) = \lim_{r\to-\infty} r^{-1}\sup\{u(x) : |x-\zeta|<e^r\} = \liminf_{z\to\zeta} \frac{u(z)}{\log|z-\zeta|},   (2.3)

see [7]. Since the function sup{u(x) : |x − ζ| < e^r} is convex in r, representation (2.3) implies the bound u(x) ≤ ν_u(ζ) log |x − ζ| + O(1) near ζ.
Lelong numbers are independent of the choice of coordinates. Siu's theorem states that the set {ζ : ν T (ζ) ≥ c} is analytic for any c > 0.
Directional Lelong numbers
A more detailed information on the behaviour of u near ζ can be obtained by means of the directional Lelong numbers due to Kiselman [8]: given a = (a_1, . . . , a_n) ∈ R^n_+,

\nu_u(\zeta,a) = \lim_{r\to-\infty} r^{-1}\sup\{u(x) : |x_k-\zeta_k|<e^{ra_k},\ 1\le k\le n\},   (2.4)

or equivalently, in terms of the mean values of u over the distinguished boundaries of the polydiscs, similarly to (2.2). Namely,

u(x) \le \nu_u(\zeta,a)\,\varphi_{a,\zeta}(x) + O(1) \quad\text{with}\quad \varphi_{a,\zeta}(x)=\max_k a_k^{-1}\log|x_k-\zeta_k|.   (2.5)
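For instance, for u(z) = log |z_1^2 z_2| = 2 log|z_1| + log|z_2| and any a ∈ R^2_+, definition (2.4) gives

\nu_u(0,a) = \lim_{r\to-\infty} r^{-1}\big(2ra_1 + ra_2\big) = 2a_1 + a_2,

so the bound (2.5) reads log |z_1^2 z_2| ≤ (2a_1 + a_2)\,φ_{a,0}(z) + O(1).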
Analyticity of the upperlevel sets {ζ : ν u (ζ, a) ≥ c} was established in [8], [9]. Directional Lelong numbers give rise to the notion of local indicators of plurisubharmonic functions [13]. Given a plurisubharmonic function u, its (local) indicator at a point ζ is a plurisubharmonic function Ψ u,ζ in the unit polydisc D n such that for any y ∈ D n with y 1 · . . . · y n = 0,
Ψ u,ζ (y) = −ν u (ζ, a), a = −(log |y 1 |, . . . , log |y n |) ∈ R n + . (2.6)
It is the largest nonpositive plurisubharmonic function in D n whose directional Lelong numbers at 0 coincide with those of u at ζ, see the details in [13], [14].
Lelong-Demailly numbers
A general notion of Lelong numbers with respect to plurisubharmonic weights was introduced and studied by J.-P. Demailly [3], [5]. Let T be a closed positive current of bidimension (p, p) on a domain Ω ⊂ C n , and let ϕ be a continuous plurisubharmonic function Ω → [−∞, ∞), semiexhaustive on the support of T , that is,
B^ϕ_R ∩ supp T ⋐ Ω for some real R, where B^ϕ_R := {x : ϕ(x) < R}, and let S^ϕ_{−∞} := ϕ^{−1}(−∞) ≠ ∅. The value

\nu(T,\varphi) = \lim_{r\to-\infty} \int_{B^\varphi_r} T \wedge (dd^c\varphi)^p = T \wedge (dd^c\varphi)^p\,(S^\varphi_{-\infty})   (2.7)
is called the generalized Lelong number, or the Lelong-Demailly number, of T with respect to the weight ϕ. When ϕ(x) = log |x − ζ|, this is just the classical Lelong number of T at ζ. For a plurisubharmonic function u, we use the notation ν(u, ϕ) = ν(dd c u, ϕ). The generalized Lelong numbers have the following semicontinuity property.
Theorem 2.1 ([5], Prop. 3.11)
If T k → T and ϕ is semiexhaustive on a closed set containing the supports of all T k , then lim sup k→∞ ν(T k , ϕ) ≤ ν(T, ϕ).
The following comparison theorems describe variation of the Lelong-Demailly numbers with respect to the weights and currents.
Theorem 2.2 ([5], Th. 5.1) Let T be a closed positive current of bidimension (p, p), and let ϕ, ψ be two weights semiexhaustive on supp T such that lim sup ψ(x)/ϕ(x) = l < ∞ as ϕ(x) → −∞. Then ν(T, ψ) ≤ l p ν(T, ϕ).
Theorem 2.3 ([5]) Let u and v be plurisubharmonic functions such that (dd^c u)^q ∧ T and (dd^c v)^q ∧ T are well defined near S^ϕ_{−∞} and u = −∞ on supp T ∩ S^ϕ_{−∞}, where T is a closed positive current of bidimension (p, p), p ≥ q. If lim sup v(x)/u(x) = l < ∞ as ϕ(x) → −∞, then

\nu((dd^c v)^q \wedge T, \varphi) \le l^q\, \nu((dd^c u)^q \wedge T, \varphi).
The directional Lelong numbers can be expressed in terms of the Lelong-Demailly numbers with respect to the directional weights φ a,ζ (2.5):
ν u (ζ, a) = a 1 . . . a n ν(u, φ a,ζ ).
(2.8)
Siu's theorem on analyticity of upperlevel sets was extended in [3] to generalized Lelong numbers with respect to weights ϕ ζ (x) = ϕ(x, ζ) that are exponentially Hölder continuous with respect to ζ.
Green functions
Let D be a hyperconvex domain in C n and let P SH − (D) denote the class of all negative plurisubharmonic functions in D.
The pluricomplex Green function G ζ,D of D with logarithmic pole at ζ ∈ D (introduced by Lempert, Zahariuta, Klimek) is the upper envelope of the class F ζ,D of u ∈ P SH − (D) such that u(x) ≤ log |x − ζ| + O(1) near ζ. The class F ζ,D can be also described as the collection of u ∈ P SH − (D) such that ν u (ζ) ≥ 1. The function satisfies G ζ,D (x) = log |x − ζ| + O(1) near ζ and (dd c G ζ,D ) n = δ ζ .
A more general construction was presented by Zahariuta [19], [20]. Given a continuous plurisubharmonic function ϕ in a neighbourhood of ζ ∈ D such that ϕ −1 (−∞) = ζ and (dd c ϕ) n = 0 outside ζ, let
G ϕ,D (x) = sup{u(x) : u ∈ P SH − (D), u ≤ ϕ + O(1) near ζ}.
(2.9)
Then G ϕ,D is maximal in D \ {ζ} and G ϕ,D = ϕ + O(1) near ζ. We will refer to this function as the Green-Zahariuta function with respect to the singularity ϕ.
A Green function with prescribed values of all directional Lelong numbers at ζ (the Green function with respect to an indicator Ψ) was introduced in [13] as
G Ψ,ζ,D (x) = sup{u(x) : u ∈ P SH − (D), ν u (ζ, a) ≥ ν Ψ (0, a) ∀a ∈ R n + }, (2.10)
where Ψ is a negative plurisubharmonic function in the unit polydisc D n such that Ψ(z 1 , . . . , z n ) = Ψ(|z 1 |, . . . , |z n |) = c −1 Ψ(|z 1 | c , . . . , |z n | c ), c > 0, z ∈ D n ; (2.11) such a function Ψ coincides with its own indicator (2.6), so ν Ψ (0, a) = −Ψ(e −a 1 , . . . , e −an ).
The above Green functions were also considered for several isolated poles. In the case of non-isolated singularities, a variant of Green functions was introduced by Lárusson-Sigurdsson [11] by means of the class of negative plurisubharmonic functions u satisfying ν u (x) ≥ α(x), where α is an arbitrary nonnegative function on D. In [12] it was specified to the case when α(x) = ν A (x) is the Lelong number of a divisor A; then
F A,D = {u ∈ P SH − (D) : ν u (x) ≥ ν A (x) ∀x ∈ D},(2.12)
and
G A,D (x) = sup{u(x) : u ∈ F A,D } (2.13)
is the Green function for the divisor A. It was shown that if A is the divisor of a bounded holomorphic function f in D, then G_{A,D} = log |f| + O(1) near points of |A| = f^{−1}(0). This was used in [16], [17] for consideration of Green functions with arbitrary analytic singularities. Given a closed complex subspace A on D, let I_A = I_{A,D} = (I_{A,x})_{x∈D} be the associated coherent sheaf of ideals in the sheaf O_D = (O_x)_{x∈D} of germs of holomorphic functions on D, and let |A| = {x ∈ D : I_{A,x} ≠ O_x}. A Green function G_{A,D} for the complex space A on D was constructed as

G_{A,D}(x) = \sup\{u(x) : u \in PSH^-(D),\ u \le \log|f| + O(1)\ \text{locally near}\ |A|\},   (2.14)

where f = (f_1, . . . , f_p) and f_1, . . . , f_p are local generators of I_A. The function G_{A,D} is plurisubharmonic and satisfies

G_{A,D} \le \log|f| + O(1)   (2.15)

locally near points of |A|; if I_A has bounded global generators, then G_{A,D} = log |f| + O(1).
Definition and elementary properties of relative types
Given ζ ∈ C n , let P SH ζ stand for the collection of all (germs of) plurisubharmonic functions u ≡ −∞ in a neighbourhood of ζ. Let ϕ ∈ P SH ζ be locally bounded outside ζ, ϕ(ζ) = −∞, and maximal in a punctured neighbourhood of ζ: (dd c ϕ) n = 0 on B ϕ R \ {ζ} for some R > −∞; we recall that B ϕ R = {z : ϕ(z) < R}. The collection of all such maximal weights (centered at ζ) will be denoted by M W ζ . If we want to specify that (dd c ϕ) n = 0 on ω \ {ζ}, we will write ϕ ∈ M W ζ (ω).
Given a function u ∈ P SH ζ , its singularity at ζ can be compared to that of ϕ ∈ M W ζ ; the value
\sigma(u,\varphi) = \liminf_{z\to\zeta} \frac{u(z)}{\varphi(z)}   (3.1)
will be called the ϕ-type, or the relative type of u with respect to ϕ. Both the Lelong-Demailly numbers and the relative types are generalizations of the classical notion of Lelong number, however they use different points of view on the Lelong number: while the Lelong-Demailly numbers correspond to (2.1) (and (2.2) in the case of functions), the relative types are based on (2.3). As we will see, the two generalizations have much in common, however some features are quite different.
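Two simple cases may help fix the definition. For ϕ(x) = log |x − ζ|, comparison with (2.3) shows that σ(u, ϕ) = ν_u(ζ), the classical Lelong number. For the directional weight φ_{a,0} from (2.5) and u = log |z_1|, a direct computation gives

\sigma(\log|z_1|, \varphi_{a,0}) = \liminf_{z\to 0} \frac{\log|z_1|}{\max_k a_k^{-1}\log|z_k|} = a_1,

since max_k a_k^{-1} log|z_k| ≥ a_1^{-1} log|z_1| (both sides being negative) bounds the ratio from below by a_1, while the value a_1 is attained along |z_k| = |z_1|^N, k ≥ 2, with N large; for a_1 = 1 this agrees with Example 5.3 (4) below.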
Note that σ(u + h, ϕ) = σ(u, ϕ) if h is pluriharmonic (and hence bounded) near ζ, so σ(u, ϕ) is actually a function of dd c u.
The following properties are direct consequences of the definition of the relative type.
Proposition 3.1 Let ϕ, ψ ∈ MW_ζ and u, u_j ∈ PSH_ζ. Then
(i) σ(cu, ϕ) = c σ(u, ϕ) for all c > 0;
(ii) σ(max{u_1, u_2}, ϕ) = min_j σ(u_j, ϕ);
(iii) σ(u_1 + u_2, ϕ) ≥ σ(u_1, ϕ) + σ(u_2, ϕ);
(iv) σ(u, ψ) ≥ σ(u, ϕ) σ(ϕ, ψ); in particular, if there exists lim_{z→ζ} ϕ(z)/ψ(z) = l < ∞, then σ(u, ϕ) = l σ(u, ψ);
(v) if lim inf_{z→ζ} u_1(z)/u_2(z) = l < ∞, then σ(u_1, ϕ) ≥ l σ(u_2, ϕ);
(vi) if u = log Σ_{j=1}^m e^{u_j}, then σ(u, ϕ) = σ(max_j u_j, ϕ) = min_j σ(u_j, ϕ).
Proof. Properties (i) -(v) are direct consequences of the definition of the relative type, and (vi) follows from (ii) and (iv) together. Note that (iv) makes sense because σ(u, ϕ) < ∞ for any u ∈ P SH ζ and ϕ ∈ M W ζ ; this follows, for example, from an alternative description of σ(u, ϕ) given by (3.3)-(3.5) below.
Remarks 3.2 (1) The relative types need not be tropically multiplicative functionals on P SH ζ . Take ϕ = max {3 log |z 1 |, 3 log |z 2 |, log |z 1 z 2 |} ∈ M W 0 and u j = log |z j |, j = 1, 2, then σ(u j , ϕ) = 1/3, while σ(u 1 + u 2 , ϕ) = 1.
(2) Properties (iv) and (v) are analogues to Comparison Theorems 2.2 and 2.3.
(3) For holomorphic functions f j and positive numbers p j , j = 1, . . . , m, property (vi) gives the relation
\sigma\Big(\log\sum_j |f_j|^{p_j}, \varphi\Big) = \sigma\big(\max_j p_j\log|f_j|, \varphi\big) = \min_j p_j\,\sigma(\log|f_j|, \varphi).   (3.2)
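As a simple instance of (3.2), take f_1(z) = z_1, f_2(z) = z_2^2, p_1 = p_2 = 1 and ϕ(x) = log |x|, so that the types on the right-hand side are the Lelong numbers at 0:

\sigma\big(\log(|z_1| + |z_2|^2), \log|x|\big) = \min\{\nu_{\log|z_1|}(0),\ \nu_{2\log|z_2|}(0)\} = \min\{1, 2\} = 1.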
Given a weight ϕ ∈ M W ζ and a function u ∈ P SH ζ , consider the growth function
Λ(u, ϕ, r) = sup {u(z) : z ∈ B ϕ r }. (3.3) Proposition 3.3 (see also [2], Corollary 6.6) Let ϕ ∈ M W ζ (B ϕ R ) such that B ϕ R is bounded, and u ∈ P SH(B ϕ R ). Then the function Λ(u, ϕ, r) is convex in r ∈ (−∞, R). Proof. Given −∞ < r 1 < r 2 < R and 0 < ǫ < R − r 2 , let u ǫ = u * χ ǫ be a standard regularization (smoothing) of u. Take c ≥ 0 and d ∈ R such that cr j + d = Λ(u ǫ , ϕ, r j ), j = 1, 2. Since ϕ ≥ r on ∂B ϕ r , we have cϕ + d ≥ u ǫ on ∂(B ϕ r 2 \ B ϕ r 1 ) and thus on the set B ϕ r 2 \ B ϕ r 1 because of the maximality of the function cϕ + d on B ϕ R \ {0}.
In other words, Λ(u ǫ , ϕ, r) ≤ cr + d on (r 1 , r 2 ), which means that Λ(u ǫ , ϕ, r) is convex on (r 1 , r 2 ). Since Λ(u ǫ , ϕ, r) → Λ(u, ϕ, r) as ǫ → 0, this implies the assertion.
Since Λ(u, ϕ, r) is increasing and convex, the ratio
g(u,\varphi,r,r_0) := \frac{\Lambda(u,\varphi,r) - \Lambda(u,\varphi,r_0)}{r - r_0}, \qquad r < r_0 < R,   (3.4)
is increasing in r ∈ (−∞, r 0 ) and, therefore, has a limit as r → −∞; it is easy to see that the limit equals σ(u, ϕ). We have, in particular,
σ(u, ϕ) ≤ g(u, ϕ, r, r 0 ), r < r 0 , (3.5)
which implies the following basic bound.
Proposition 3.4 Let ϕ ∈ M W ζ (B ϕ R ), u ∈ P SH − (B ϕ r 0 ), r 0 < R. Then u ≤ σ(u, ϕ)(ϕ − r 0 ) in B ϕ r 0 .
In particular, every function u ∈ P SH ζ has the bound
u(z) ≤ σ(u, ϕ) ϕ(z) + O(1), z → ζ. (3.6)
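In the setting of Remarks 3.2 (1), bound (3.6) can be checked by hand: there σ(log|z_1|, ϕ) = 1/3 for ϕ = max{3 log|z_1|, 3 log|z_2|, log|z_1 z_2|}, and indeed

\log|z_1| \le \tfrac13\,\varphi(z) = \max\{\log|z_1|,\ \log|z_2|,\ \tfrac13\log|z_1z_2|\},

so in this case (3.6) holds with O(1) = 0.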
Next statement is an analogue to Theorem 2.1.
Proposition 3.5 Let u j , u ∈ P SH − (Ω), ϕ ∈ M W ζ , ζ ∈ Ω. If u j → u in L 1 loc (Ω), then σ(u, ϕ) ≥ lim sup σ(u j , ϕ).
Proof. Take any r 0 such that Λ(u, ϕ, r 0 ) < 0, then for any ǫ > 0 and r < r 0 there exists j 0 such that Λ(u j , ϕ, r) ≥ Λ(u, ϕ, r) − ǫ for all j > j 0 . Using (3.5) we get
σ(u j , ϕ) ≤ g(u j , ϕ, r, r 0 ) ≤ Λ(u, ϕ, r) − ǫ r − r 0 , (3.7)
which implies the assertion.
Let us compare the values of relative types with some known characteristics of plurisubharmonic singularities. Denote
ν_ϕ := ν_ϕ(ζ),   (3.8)
the Lelong number of ϕ ∈ MW_ζ at ζ;

τ_ϕ := (dd^c ϕ)^n(ζ),   (3.9)
the residual Monge-Ampère mass of ϕ at ζ;

\alpha_\varphi := \limsup_{z\to\zeta} \frac{\varphi(z)}{\log|z-\zeta|}.   (3.10)
By Theorem 2.2, ν n ϕ ≤ τ ϕ ≤ α n ϕ and, by Proposition 3.4,
α ϕ log |z − ζ| + O(1) ≤ ϕ(z) ≤ ν ϕ log |z − ζ| + O(1), z → ζ. (3.11)
If ϕ has analytic singularity, that is,
ϕ = log |f | + O(1) near ζ, where f = (f 1 , .
. . , f n ) is a holomorphic map with isolated zero at ζ, then ν ϕ equals the minimum of the multiplicities of f k at ζ, τ ϕ is the multiplicity of f at ζ, and α ϕ = γ f , the Lojasiewicz exponent of f at ζ, i.e., the infimum of γ > 0 such that |f (z)| ≥ |z − ζ| γ near ζ. Therefore, ν ϕ > 0 and α ϕ < ∞ in this case.
In the general situation, since ϕ is locally bounded and maximal on B ϕ r \{ζ}, the condition
ϕ(ζ) = −∞ implies 0 < τ ϕ < ∞.
We do not know if ν ϕ > 0 for every weight ϕ ∈ M W ζ . It is actually equivalent to the famous problem of existence of a plurisubharmonic function which is locally bounded outside ζ and has zero Lelong number and positive Monge-Ampère mass at ζ (see a discussion in [18] or the remark after Proposition 6.1). Furthermore, (3.11) implies α ϕ > 0, however we do not know if the " Lojasiewicz exponent" α ϕ is finite for every maximal weight ϕ. It is worth noting that, by Proposition 3.1
(v), σ(log |f |, ϕ) ≤ γ f α −1 ϕ for any holomorphic map f with isolated zero at ζ, where γ f is the Lojasiewicz exponent of f , so σ(log |f |, ϕ) = 0 if α ϕ = ∞.
By Theorem 2.3, the condition α ϕ < ∞ implies τ ϕ ≤ α n−1 ϕ ν ϕ and so, ν ϕ > 0 for such a weight ϕ. In other words, denote
SM W ζ = {ϕ ∈ M W ζ : ν ϕ > 0} (3.12)
(the weights with "strong" singularity) and
LM W ζ = {ϕ ∈ M W ζ : α ϕ < ∞} (3.13)
(the weights with finite Lojasiewicz exponent), then
LM W ζ ⊂ SM W ζ ⊂ M W ζ ,(3.14)
and it is unclear if the inclusions are strict.
Proposition 3.6 The type σ(u, ϕ) of u ∈ P SH ζ with respect to ϕ ∈ SM W ζ is related to the Lelong number ν u (ζ) of u at ζ as α −1 ϕ ν u (ζ) ≤ σ(u, ϕ) ≤ ν −1 ϕ ν u (ζ). (3.15)
If, in addition, ϕ is continuous, then
σ(u, ϕ) ≤ τ −1 ϕ ν(u, ϕ), (3.16)
where ν(u, ϕ) is the Lelong-Demailly number of u with respect to ϕ.

Proof. Bounds (3.15) follow from (3.11) by Theorem 2.2, and relation (3.16) follows from Proposition 3.4 by Theorem 2.3.

Remark 3.7 Due to (2.4) and (2.8), there is always an equality in (3.16) if ϕ = φ_{a,ζ}, a directional weight (2.5). On the other hand, for general weights ϕ the inequality can be strict. For example, let u_1, u_2 and ϕ be as in Remarks 3.2 (1), and let u = max{2u_1, u_2}. Then σ(u, ϕ) = 1/3, ν(u, ϕ) = 3 and τ_ϕ = 6, so the right-hand side of (3.16) equals 1/2 > 1/3.
Representation theorem
It was shown in Section 3 that relative types σ(u, ϕ) are positive homogeneous, tropically additive (in the sense σ(max u k , ϕ) = min σ(u k , ϕ)) and upper semicontinuous functionals on P SH ζ that preserve ordering of the singularities, i.e.,
u ≤ v + O(1) implies σ(u, ϕ) ≥ σ(v, ϕ).
Here we show that any such functional on P SH ζ can be represented as a relative type, provided it does not vanish on a function that is locally bounded outside ζ.
Lemma 4.1 Let D be a bounded hyperconvex neighbourhood of a point ζ, and let a function σ :
PSH^−(D) → [0, ∞] be such that σ(u) < ∞ if u ≢ −∞, and
(i) σ(cu) = c σ(u) for all c > 0;
(ii) if u_1 ≤ u_2 + O(1) near ζ, then σ(u_1) ≥ σ(u_2);
(iii) σ(max_k u_k) = min_k σ(u_k), k = 1, 2;
(iv) if u_j → u in L^1_loc, then lim sup σ(u_j) ≤ σ(u);
(v) σ(w_0) > 0 for at least one w_0 ∈ PSH^−(D) ∩ L^∞_loc(D \ ζ).
Then there exists a unique function
ϕ ∈ M W ζ (D), ϕ(z) → 0 as z → ∂D, such that σ(u) = σ(u, ϕ) ∀u ∈ P SH − (D). (4.1) If, in addition, (v) is true with w 0 (z) = log |z − ζ| + O(1), then ϕ ∈ LM W ζ . Proof. Denote ϕ(z) = sup {u(z) : u ∈ M}, where M = {u ∈ P SH − (D) : σ(u) ≥ 1}.
By the Choquet lemma, there exists a sequence u j ∈ M increasing to a function v such that v * = ϕ * ∈ P SH − (D). Properties (iii) and (iv) imply v * ∈ M, so v * ≤ ϕ. Therefore, ϕ = ϕ * ∈ M. Evidently, σ(ϕ) = 1.
If
v ∈ P SH − (D) satisfies v ≤ ϕ outside ω ⋐ D \ {ζ}, then v ∈ M and so, v ≤ ϕ in D. Therefore, the function ϕ is maximal on D \ {ζ}. Furthermore, ϕ ∈ L ∞ loc (D \ {ζ}) because ϕ ≥ w 0 /σ(w 0 ). It is not hard to see that ϕ(ζ) = −∞. Indeed, assuming ϕ(ζ) = A > −∞, the maximality of ϕ on {ϕ(z) < A} gives ϕ ≥ A everywhere, which contradicts σ(ϕ) > 0 in view of (ii). So, ϕ ∈ M W ζ (D).
Standard arguments involving a negative exhaustion function of D show that ϕ(z) → 0 as z → ∂D.
By Proposition 3.4, u ≤ σ(u, ϕ)ϕ + O(1) and thus, by (i) and (ii), σ(u) ≥ σ(u, ϕ) for every u ∈ P SH − (D). This gives, in particular, σ(u, ϕ) = 0 if σ(u) = 0. Let σ(u) > 0, then u/σ(u) ∈ M, so u ≤ σ(u)ϕ and consequently, σ(u, ϕ) ≥ σ(u). This proves (4.1).
If ψ is another weight from M W ζ (D) with zero boundary values on ∂D, representing the functional σ, then ψ ≤ ϕ. On the other hand, the relation σ(ϕ, ψ) = 1 implies that for any ǫ ∈ (0, 1) we have ϕ ≤ (1 − ǫ)ψ + ǫ on a neighbourhood of ζ and near ∂D and thus, by the maximality of ψ on D \ {ζ}, everywhere in D.
Finally, the last assertion follows from the relation ϕ ≥ w 0 /σ(w 0 ) ∈ LM W ζ .
Remarks 4.2 (1) Note that for the functional σ(u) = σ(u, φ) with a continuous weight φ ∈ M W ζ , the function ϕ constructed in the proof of Lemma 4.1 is just the Green-Zahariuta function for the (continuous) singularity φ (2.9). We will keep this name for the case of Green functions with respect to arbitrary singularities φ ∈ M W ζ ,
G φ,D (z) = sup{u(z) ∈ P SH − (D) : u ≤ φ + O(1) near ζ}. (4.2) We have thus G φ,D ∈ M W ζ (D), G φ,D = φ + O(1) because σ(G φ,D , φ) = 1, and G φ,D (z) → 0 as z → ∂D if D is hyperconvex.
(2) Let ≼ be a natural partial ordering on MW_ζ: φ ≼ ψ if σ(u, φ) ≤ σ(u, ψ) for any u ∈ PSH_ζ. It is easy to see that φ ≼ ψ ⇔ σ(φ, ψ) ≥ 1 ⇔ G_{φ,D} ≤ G_{ψ,D} for some (and, consequently, for any) hyperconvex neighbourhood D of ζ.
The following representation theorem is an easy consequence of Lemma 4.1.

Theorem 4.3 Let a function σ : PSH_ζ → [0, ∞) satisfy conditions (i)-(v) of Lemma 4.1 for u, u_k ∈ PSH_ζ and D a bounded hyperconvex neighbourhood of ζ. Then there exists a weight ϕ ∈ MW_ζ such that σ(u) = σ(u, ϕ) for every u ∈ PSH_ζ. The representation is essentially unique: if two weights ϕ and ψ represent σ, then ϕ = ψ + O(1) near ζ. If, in addition, (v) is true with w_0(z) = log |z − ζ|, then ϕ ∈ LMW_ζ.

Proof. We may assume σ(w_0) = 1. Let ϕ ∈ MW_ζ be the function constructed in Lemma 4.1. Exactly as in the proof of the lemma, we get σ(u) ≥ σ(u, ϕ) for every u ∈ PSH_ζ. To prove the reverse inequality, take any u ∈ PSH_ζ. The function v_0 = max{u, σ(u)w_0} can be extended from a neighbourhood of ζ to a plurisubharmonic function v on a neighbourhood of D; by (iii),
σ(u) = σ(v 0 ) = σ(v − sup D v) = σ(v − sup D v, ϕ) = σ(v 0 , ϕ) ≤ σ(u, ϕ), so σ(u) = σ(u, ϕ).
If ψ ∈ M W ζ is another weight representing the functional σ, then σ(ϕ, ψ) = σ(ψ, ϕ) = 1 and ψ = ϕ + O(1) by Proposition 3.4.
Remark 4.4
Recall that a valuation on the local ring R ζ of germs of analytic functions f at ζ is a nonconstant function µ : R ζ → [0, +∞] such that
µ(f 1 f 2 ) = µ(f 1 ) + µ(f 2 ), µ(f 1 + f 2 ) ≥ min {µ(f 1 ), µ(f 2 )}, µ(1) = 0; (4.3)
a valuation µ is centered if µ(f ) > 0 for every f from the maximal ideal m ζ , and normalized if min {µ(f ) : f ∈ m ζ } = 1. Every weight ϕ ∈ M W ζ generates a functional σ ϕ on R ζ , σ ϕ (f ) = σ(log |f |, ϕ), with the properties
σ ϕ (f 1 f 2 ) ≥ σ ϕ (f 1 ) + σ ϕ (f 2 ), σ ϕ (f 1 + f 2 ) ≥ min {σ ϕ (f 1 ), σ ϕ (f 2 )}, σ ϕ (1) = 0. (4.4)
Such a functional is thus a valuation if the weight ϕ satisfies the additional condition
σ(u + v, ϕ) = σ(u, ϕ) + σ(v, ϕ) (4.5)
for any u, v ∈ P SH ζ -in other words, if u → σ(u, ϕ) is tropically linear (both additive and multiplicative); σ ϕ is centered if and only if ϕ ∈ LM W ζ , and normalized iff α ϕ = 1.
The weights φ a,ζ (2.5) satisfy (4.5), and the corresponding functionals σ φ a,ζ are monomial valuations on R ζ ; they are normalized, provided min k a k = 1. It was shown in [6] that an important class of valuations in C 2 (quasimonomial valuations) can be realized as σ ϕ with certain weights ϕ ∈ LM W ζ satisfying (4.5) and α ϕ = 1, and all other normalized valuations in C 2 can be realized as limits of increasing sequences of the quasimonomial ones. We believe the relative types with respect to weights satisfying (4.5) can be used in investigation of valuations in higher dimensions.
Greenifications
In this section we consider some extremal problems for plurisubharmonic functions with singularities determined by a given plurisubharmonic function u. Solutions to these problems resemble various Green functions mentioned in Section 2 (and in some cases just coincide with them), and we will call them greenifications of the function u. This reflects the point of view on Green functions as largest negative plurisubharmonic functions with given singularities; different types of the Green functions arise from different ways of measuring the singularities (or different portions of information on the singularities used). Note that the tropical additivity makes relative types an adequate tool in constructing extremal plurisubharmonic functions as upper envelopes.
Type-greenifications
Let a bounded domain D contain a point ζ and let u ∈ P SH ζ . Given a collection P of weights φ ∈ M W ζ , denote
M P u = M P u,ζ,D = {v(x) : v ∈ P SH − (D), σ(v, φ) ≥ σ(u, φ) ∀φ ∈ P } (5.1)
and define the function
h P u (x) = h P u,ζ,D (x) = sup {v(x) : v ∈ M P u,ζ,D }. (5.2)
We will write simply M u,ζ,D and h u,ζ,D if P = M W ζ . The function h P u,ζ,D will be called the type-greenification of u with respect to the collection P , and h u,ζ,D will be called just the type-greenification of u at ζ.
Consideration of the functions h P u with P = M W ζ can be useful in situations where the only information on the singularity of u available is the values of σ(u, ϕ) for certain selected weights ϕ. One more reason is that for some collections P , the functions h P u are quite easy to compute and, at the same time, they can give a reasonably good information on the asymptotic behaviour of u (see Examples 2 and 3 after Proposition 5.2). Note that
h P u ≥ h Q u ≥ h u if P ⊂ Q ⊂ M W ζ .
Proposition 5.1 Let u ∈ P SH(D) be bounded above in D ∋ ζ, and P ⊆ M W ζ . Then (i) h P u,ζ,D ∈ P SH − (D);
(ii) u ≤ h P u,ζ,D + sup D u;
(iii) σ(u, φ) = σ(h P u,ζ,D , φ) for any weight φ ∈ P ;
(iv) h P u,ζ,D is maximal on D \ {ζ}; (v) if D has a strong plurisubharmonic barrier at a point z ∈ ∂D (i.e., if there exists v ∈ P SH(D) such that lim x→z v(x) = 0 and sup D\U v < 0 for every neighbourhood U of z) and if u is bounded below near z, then h P u,ζ,D (x) → 0 as x → z. Proof. Let w denote the right hand side of (5.2), then its upper regularization w * is plurisubharmonic in D. By the Choquet lemma, there exists a sequence v j ∈ M P u such that w * = (sup j v j ) * . Denote w k = sup j≤k v j . By Proposition 3.1(ii), w k ∈ M P u . Since the functions w k converge weakly to w * , Proposition 3.5 implies then σ(w * , φ) ≥ σ(w k , φ) ≥ σ(u, φ) for any weight φ ∈ P and so, w * ∈ M P u , which proves (i) and gives, at the same time, the inequality σ(u, φ) ≤ σ(h P u , φ). The reverse inequality, even for arbitrary weights φ ∈ M W ζ , follows from the (evident) assertion (ii) and completes the proof of (iii).
If a function v ∈ P SH(D) satisfies v ≤ h P u,ζ,D on D \ ω for some open set ω ⋐ D \ ζ, then max {v, h P u,ζ,D } ∈ M P u and therefore v ≤ h P u,ζ,D on ω, which proves (iv). Finally, to prove (v), take a neighbourhood U of z ∈ ∂D such that u > t > −∞ on U ∩ D and choose c > 0 such that v < t/c on D \ U , so u > cv on ∂U ∩ D. Let
w(x) = \begin{cases} \max\{u(x), cv(x)\}, & x \in D \cap U,\\ u(x), & x \in D \setminus U,\end{cases}   (5.3)

then w ∈ M^P_{u,ζ,D} and lim_{x→z} w(x) = 0.

If v ∈ PSH^−(D), then the relation σ(v, ϕ) ≥ σ(u, ϕ) is equivalent to v ≤ σ(u, ϕ)G_{ϕ,D}. Therefore, these Green-like functions h^P_{u,ζ,D} can be described by means of the Green-Zahariuta functions G_{ϕ,D} (4.2) as follows.

Proposition 5.2 The function h^P_{u,ζ,D}, P ⊂ MW_ζ, is the largest plurisubharmonic minorant of the family {σ(u, ϕ)G_{ϕ,D} : ϕ ∈ P}. In particular, if ϕ ∈ MW_ζ, then h_{ϕ,ζ,D} = h^ϕ_{ϕ,ζ,D} = G_{ϕ,D}.

Examples 5.3 (1) If P consists of a single weight ϕ, then h^P_{u,ζ,D} = σ(u, ϕ) G_{ϕ,D}. In particular, if ϕ(x) = log |x − ζ|, then h^P_{u,ζ,D} = ν_u(ζ) G_{ζ,D}.

(2) Let A be a finite subset of R^n_+ and let P be the collection of the weights φ_a = φ_{a,0} (2.5) with a ∈ A. According to Proposition 5.2 and (2.4), the function h^P_{u,0,D} is the largest plurisubharmonic minorant of the family {ν_u(0, a)G_{φ_a,D} : a ∈ A}. Using methods from [13] and [14], it can then be shown that the minorant is the Green-Zahariuta function for the singularity ϕ_{u,A}(x) = ψ_{u,A}(log |x_1|, . . . , log |x_n|), where

\psi_{u,A}(t) = \sup\{\langle b, t\rangle : b \in H_{u,A}\}, \quad t \in \mathbb{R}^n_-, \qquad H_{u,A} = \bigcap_{a \in A}\Big\{b \in \mathbb{R}^n_+ : \sum_k b_k a_k \ge \nu_u(0,a)\Big\}.   (5.4)

In particular, if the only information on u is that it is locally bounded on D \ {0} and the values ν_u(0, a) for a ∈ A, then its residual Monge-Ampère mass at 0 can be estimated as

(dd^c u)^n(0) \ge (dd^c h^P_{u,0,D})^n(0) = (dd^c \varphi_{u,A})^n(0) = n!\,\mathrm{Vol}(H_{u,A}).   (5.5)

(3) If P consists of all the directional weights φ_{a,ζ} (2.5), then h^P_{u,ζ,D} = G_{Ψ,ζ,D}, the Green function (2.10) with respect to the indicator Ψ = Ψ_{u,ζ} (2.6) of the function u at ζ.

(4) Let u = log |z_1| in the unit polydisc D^n and let P be the collection of the weights φ_{a,0} (2.5) with a_1 = 1. Then the type of u with respect to any φ_{a,0} ∈ P equals 1 and thus, v ≤ φ_{a,0} for every v ∈ M_{u,0,D^n} and any such direction a. Therefore h^P_{u,0,D^n} = u. (For a more general statement, see Theorem 6.6.)

Remarks 5.4 (1) As follows from Example 4, the functions h^P_{u,ζ,D} need not be locally bounded outside ζ.

(2) The same example shows that the condition of strong plurisubharmonic barrier cannot be replaced by hyperconvexity in general. However this can be done if u ∈ L^∞_loc(Ω \ {ζ}), see Proposition 5.6.
Complete greenifications
Another natural extremal function determined by the singularity of u can be defined as follows. Let u ∈ P SH(Ω) be such that u(ζ) = −∞ for some ζ ∈ Ω. Given a domain D ⊂ Ω, ζ ∈ D, consider the class
F u,ζ,D = {v ∈ P SH − (D) : v(z) ≤ u(z) + O(1), z → ζ}, (5.6)
then the upper regularization of its upper envelope is a plurisubharmonic function in D; we will call it the complete greenification of u at ζ and denote by g u,ζ,D :
g u,ζ,D (x) = lim sup y→x sup{v(y) : v ∈ F u,ζ,D }. (5.7)
If ϕ ∈ M W ζ , then g ϕ,ζ,D = G ϕ,D , the Green-Zahariuta function (4.2).
It follows from the definition that if u is bounded above on D, then
u ≤ g u,ζ,D + sup D u. (5.8)
It is easy to see that the function g u,ζ,D need not belong to the class F u,ζ,D (take, for example, u = −| log |z|| 1/2 , then g u,ζ,D ≡ 0).
Proposition 5.5
If a plurisubharmonic function u is bounded above on D ∋ ζ, then (i) g u,ζ,D is maximal on any open ω ⊂ D such that g u,ζ,D ∈ L ∞ loc (ω);
(ii) ν(u, ϕ) = ν(g u,ζ,D , ϕ) for any continuous weight ϕ with ϕ −1 (−∞) = ζ;
(iii) σ(u, ϕ) = σ(g u,ζ,D , ϕ) for any ϕ ∈ M W ζ ;
(iv) if D has a strong plurisubharmonic barrier at a point z ∈ ∂D and if u is bounded below near z, then lim x→z g u,ζ,D (x) = 0.
Proof. Take a sequence of pseudoconvex domains D j such that D j+1 ⋐ D j ⋐ D, ∩ j D j = {ζ}, and let
u j (x) = sup{v(x) : v ∈ P SH − (D), v ≤ u − sup D u in D j }, x ∈ D. (5.9)
Since its upper regularization u * j belongs to P SH − (D) and coincides with u − sup D u in D j , the function u j = u * j ∈ F u,ζ,D and is maximal on D \ D j . When j → ∞, the functions u j increase to a function v such that v * ∈ P SH − (D) and is maximal where it is locally bounded. Evidently, g u,ζ,D ≥ v * .
By the Choquet lemma, there exists a sequence w k ∈ F u,ζ,D that increases to w such that w * = g u,ζ,D . Take any ǫ > 0, then for each k there exists j = j(k) such that w k ≤ (1 − ǫ)u j on D j . Therefore w k ≤ (1 − ǫ)u j ≤ (1 − ǫ)g u,ζ,D in D, which gives g u,ζ,D ≤ (1 − ǫ)v * for all ǫ > 0 and thus, g u,ζ,D = v * , and (i) follows now from the maximality of v * .
To prove (ii), consider again the functions u_j (5.9), then ν(u_j, ϕ) = ν(u, ϕ) for any ϕ. By Theorem 2.1, ν(g_{u,ζ,D}, ϕ) ≥ lim sup_j ν(u_j, ϕ) = ν(u, ϕ); the reverse inequality follows from (5.8) by Theorem 2.3. Similar arguments (but now using Propositions 3.5 and 3.1 (v) instead of Theorems 2.1 and 2.3) prove (iii). Finally, (iv) can be proved exactly as assertion (v) of Proposition 5.1.

More can be said if u is locally bounded outside ζ. Note that then it can be extended (from a neighbourhood of ζ) to a plurisubharmonic function in the whole space, and none of its greenifications at ζ depend on the choice of the extension.
Proposition 5.6 If D is a bounded hyperconvex domain, u ∈ P SH(D) ∩ L ∞ loc (D \ ζ), then lim x→z g u,ζ,D (x) = 0, z ∈ ∂D, (dd c u) n (ζ) = (dd c g u,ζ,D ) n (ζ). (5.11)
Proof. The first statement follows exactly as in the case of the pluricomplex Green function with logarithmic singularity.
To prove (5.11), observe first that relation (5.8) implies (dd c u) n (ζ) ≥ (dd c g u,ζ,D ) n (ζ). On the other hand, the functions u j (5.9) belong to the Cegrell class F [1] and increase a.e. to g u,ζ,D . By Theorem 5.4 of [1], (dd c u j ) n → (dd c g u,ζ,D ) n . Therefore, (dd c g u,ζ,D ) n (ζ) ≥ lim sup j→∞ (dd c u j ) n (ζ) = (dd c u) n (ζ), which completes the proof.
Remark 5.7
In spite of the relations in Proposition 5.5 and (5.11), some important information on the singularity can be lost when passing to the function g u,ζ,D . For example, if we take u(z) = max{log |z 1 |, −| log |z 2 || 1 2 }, then g u,0,D = 0 for any D ⊂ C 2 containing 0, while ν(u, log |z 2 |) = 1 (note that the function log |z 2 | is semiexhaustive on the support of dd c u).
Greenifications and Green functions
Now we turn to relations between the extremal functions considered above. We will write M S u,ζ,D and h S u,ζ,D if P = SM W ζ (3.12), and M L u,ζ,D and h L u,ζ,D if P = LM W ζ (3.13); in view of (3.14), h L u,ζ,D ≥ h S u,ζ,D .
According to Proposition 5.5, we have σ(g u,ζ,D , ϕ) = σ(u, ϕ) for all ϕ ∈ M W ζ , so
g u,ζ,D ≤ h u,ζ,D ≤ h P u,ζ,D (6.1)
for any u ∈ P SH(D) and P ⊂ M W ζ . By Proposition 5.2, the condition g u,ζ,D ≡ 0 implies h P u,ζ,D ≡ 0 for every P ⊆ M W ζ . So let us assume g u,ζ,D ≡ 0.
When ϕ ∈ M W ζ , the function g ϕ,ζ,D is the Green-Zahariuta function for the singularity ϕ in D; by Proposition 5.2, the same is true for h ϕ,ζ,D , so g ϕ,ζ,D = h ϕ,ζ,D . More generally, we have the following simple Proposition 6.1 Let u ∈ P SH ζ be locally bounded outside ζ, then g u,ζ,D = h u,ζ,D . If, in addition, ν u (ζ) > 0, then g u,ζ,D = h S u,ζ,D .
Proof. The equalities result from Proposition 5.2 and (5.8) because g u,ζ,D belongs to M W ζ and, in case of ν u (ζ) > 0, to SM W ζ .
Remark 6.2
We do not know if g u,ζ,D = h S u,ζ,D when ν u (ζ) = 0. The condition ν u (ζ) = 0 implies, by (3.15), h S u,ζ,D ≡ 0. As follows from Theorem 6.4 below, for functions u locally bounded outside ζ the relation g u,ζ,D = h S u,ζ,D is thus equivalent to (dd c u) n (ζ) = 0, and we are facing the problem of existence of plurisubharmonic functions with zero Lelong number and positive Monge-Ampère mass. It can be reformulated as follows: is it true that g u,ζ,D ≡ 0 if u is locally bounded outside ζ and ν u (ζ) = 0? Equivalently: is it true that SM W ζ = M W ζ ?
To study the situation with the type-greenifications with respect to arbitrary subsets P of M W ζ , we need the following result on "incommensurability" of Green functions. Lemma 6.3 Let D be a bounded hyperconvex domain and let v, w ∈ P SH(D) ∩ L ∞ loc (D \ ζ) be two solutions of the Dirichlet problem (dd c u) n = τ δ ζ , u| ∂D = 0 with some τ > 0. If v ≥ w in D, then v ≡ w.
Proof. We use an idea from the proof of Theorem 3.3 in [21]. Choose R > 0 such that ρ(x) = |x| 2 − R 2 < 0 in D. Given ǫ > 0, consider the function u ǫ = max{v + ǫρ, w}. Since
u_ǫ = w near ∂D, we have

\int_D (dd^c u_\epsilon)^n = \int_D (dd^c w)^n = \tau.   (6.2)

On the other hand, u_ǫ ≤ v and thus, by Theorem 2.3,

(dd^c u_\epsilon)^n(\zeta) \ge (dd^c v)^n(\zeta) = \tau,   (6.3)

which, together with (6.2), implies (dd^c u_ǫ)^n = 0 on D \ {ζ}. The functions v + ǫρ and w are locally bounded outside ζ, so

(dd^c u_\epsilon)^n \ge \chi_1\,(dd^c(v+\epsilon\rho))^n + \chi_2\,(dd^c w)^n   (6.4)

on D \ {ζ} ([4], Proposition 11.9), where χ_1 and χ_2 are the characteristic functions of the sets E_1 = {w ≤ v + ǫρ} \ {ζ} and E_2 = {w > v + ǫρ} \ {ζ}, respectively. Therefore,

\epsilon^n \chi_1\,(dd^c\rho)^n \le \chi_1\,(dd^c(v+\epsilon\rho))^n = 0.   (6.5)

Hence, the set {w < v} has zero Lebesgue measure, which proves the claim.

Theorem 6.4 Let D be bounded and hyperconvex, u ∈ PSH(D) ∩ L^∞_loc(D \ ζ), and let P ⊆ MW_ζ. Then g_{u,ζ,D} = h^P_{u,ζ,D} if and only if

(dd^c u)^n(\zeta) = (dd^c h^P_{u,\zeta,D})^n(\zeta).   (6.6)

Proof. If h^P_{u,ζ,D} = g_{u,ζ,D}, then (6.6) follows from Proposition 5.6. The reverse implication follows from Lemma 6.3 (by Proposition 5.6 and (6.1), the functions v = h^P_{u,ζ,D} and w = g_{u,ζ,D} satisfy the conditions of the lemma).
This can be applied to evaluation of the multiplicity of an equidimensional holomorphic mapping by means of its Newton polyhedron. Let ζ be an isolated zero of an equidimensional holomorphic mapping f . Denote by Γ + (f, ζ) the Newton polyhedron of f at ζ, i.e., the convex hull of the set E ζ + R n + , where E ζ ⊂ Z n + is the collection of the exponents in the Taylor expansions of the components of f about ζ, and let N ζ denote the Newton number of f at ζ, i.e., N ζ = n! Vol(R n + \ Γ + (f, ζ)). Kouchnirenko's theorem [10] states that the multiplicity m ζ of f at ζ can be estimated from below by the Newton number,
m ζ ≥ N ζ ,(6.7)
with an equality under certain non-degeneracy conditions. An application of Theorem 6.4 gives a necessary and sufficient condition for the equality to hold. As was shown in [14] and [15], N_ζ = (dd^c Ψ_{u,ζ})^n(0), where Ψ_{u,ζ} is the indicator (2.6) of the plurisubharmonic function u = log |f| at ζ. Let P consist of all the directional weights φ_{a,ζ}, a ∈ R^n_+, and let D be a ball around ζ. Then (see Example 3 after Proposition 5.2) the function h^P_{u,ζ,D} coincides with the Green function (2.10) with respect to the indicator Ψ_{u,ζ}, which in turn equals the Green-Zahariuta function G_{ϕ,D} (2.9) for the singularity ϕ(x) = Ψ_{u,ζ}(x − ζ). Therefore h^P_{u,ζ,D} = g_{ϕ,ζ,D} and

N_\zeta = (dd^c h^P_{u,\zeta,D})^n(\zeta).   (6.8)

Since m_ζ = (dd^c u)^n(ζ), the equality m_ζ = N_ζ is equivalent to (6.6) and thus, by Theorem 6.4, to g_{u,ζ,D} = g_{ϕ,ζ,D}. Finally, as u, ϕ ∈ MW_ζ, we have u = g_{u,ζ,D} + O(1) and ϕ = g_{ϕ,ζ,D} + O(1) near ζ, which gives u = ϕ + O(1). We have just proved the following

Corollary 6.5 The multiplicity of an isolated zero ζ of an equidimensional holomorphic mapping f equals its Newton number at ζ if and only if log |f(x)| = Ψ(x − ζ) + O(1) as x → ζ, where Ψ = Ψ_{log |f|,ζ} is the indicator (2.6) of the function log |f| at ζ.

The situation with non-isolated singularities looks more complicated. Observe, for example, that h^P_{u,ζ,D} is maximal on the whole D \ {ζ}, however we do not know if the same is true for g_{u,ζ,D}.
We can handle the situation in the case of analytic singularities. In this section, we prove the equality g_{u,ζ,D} = h^L_{u,ζ,D} for u = log |f|, where f : D → C is a holomorphic function; we recall that h^L_{u,ζ,D} is the type-greenification with respect to the class LMW_ζ (3.13). In this case, the greenifications coincide with the Green function G_{A,D} (2.13) in the sense of Lárusson-Sigurdsson. Note that the function G_{A,D} is defined as the upper envelope of functions u with ν_u(a) ≥ ν_A(a) for all a ∈ |A|. It turns out that one can consider only one point (or finitely many ones) from |A|, but then an infinite set of weights should be used. The case of mappings f : D → C^p, p > 1, will be considered in Section 7.

Theorem 6.6 Let u = log |f|, where f is a holomorphic function on Ω. Given ζ ∈ f^{−1}(0), let f = ab with a = a_1^{m_1} · · · a_k^{m_k} such that a_j are irreducible factors of f vanishing at ζ and b(ζ) ≠ 0. Then for any hyperconvex domain D ⋐ Ω that contains ζ,

h^L_{u,\zeta,D} = g_{u,\zeta,D} = G_{A_\zeta,D},   (6.9)

where G_{A_ζ,D} is the Green function (2.13) for the divisor A_ζ of the function a. Moreover, there exists a sequence P of continuous weights ϕ_j ∈ LMW_ζ such that the Green-Zahariuta functions G_{ϕ_j,D} decrease to h^P_{u,ζ,D} = G_{A_ζ,D}.
Proof. Let F A ζ ,D be the class defined by (2.12)
for A = A ζ , then F A ζ ,D ⊂ F u,ζ,D ⊂ M L u,ζ,D , so G A ζ ,D ≤ g u,ζ,D ≤ h L u,ζ,D . (6.10)
Choose a sequence of domains D j such that D j+1 ⋐ D j ⋐ D, ∩ j D j = {ζ}, and f a −1 does not vanish in D 1 , then the functions u j defined by (5.9) satisfy
u j ≤ log |a| + C j in D j . (6.11)
By Siu's theorem, the set {x ∈ D : ν(u j , x) ≥ m s } is analytic; by (6.11), it contains the support |A s | of the divisor A s of a s . Therefore, ν(u j , x) ≥ ν(log |a|, x) for all x ∈ |A ζ |. Since u j converge to g u,ζ,D , this implies ν(g u,ζ,D , x) ≥ ν(log |a|, x) and thus, g u,ζ,D ∈ F A ζ ,D . This proves the second equality in (6.9). Let f 2 , . . . , f n be holomorphic functions in a neighbourhood ω ⋐ D of ζ such that ζ is the only point of the zero set of the mapping (f 1 , . . . , f n ) in ω and Ω 0 = {z ∈ ω :
|f k | < 1, 1 ≤ k ≤ n} ⋐ ω, where f 1 = 2a(sup ω |a|) −1 . Denote ϕ j := sup{log |f 1 |, j log |f k | : 2 ≤ k ≤ n}, j ∈ Z + .
(6.12)
We have ϕ j ∈ LM W ζ (Ω 0 ) and ϕ j = 0 on ∂Ω 0 , so ϕ j = G ϕ j ,Ω 0 , the Green-Zahariuta function for the singularity ϕ j in Ω 0 . Since h L u,ζ,D < 0 in Ω 0 and σ(h L u,ζ,D , ϕ j ) = σ(u, ϕ j ) = 1, we have h L u,ζ,D ≤ ϕ j in Ω 0 for each j. Therefore h L u,ζ,D ≤ log |f 1 | = lim j→∞ ϕ j in Ω 0 . This implies h L u,ζ,D ∈ F u,ζ,D and thus, h L u,ζ,D = g u,ζ,D . Finally, the Green-Zahariuta functions G ϕ j ,D dominate h L u,ζ,D and decrease to some function v ∈ P SH − (D). Since σ(v, ϕ k ) ≥ lim sup j→∞ σ(ϕ j , ϕ k ) = 1, we get v ≤ h L u,ζ,D , which completes the proof.
One can also consider the greenifications with respect to arbitrary finite sets Z ⊂ D,
h P u,Z,D (x) = sup {v(x) : v ∈ M P (ζ)
u,ζ,D , ζ ∈ Z} (6.13) and g u,Z,D (x) = lim sup y→x sup{v(y) : v ∈ F u,ζ,D , ζ ∈ Z}. (6.14)
They have properties similar to those of the functions h^P_{u,ζ,D} and g_{u,ζ,D}, stated in Propositions 5.1-5.6 (with obvious modifications). In particular, if ϕ ∈ PSH(D) is such that ϕ^{−1}(−∞) = Z and the restriction of ϕ to a neighbourhood of ζ belongs to MW_ζ for each ζ ∈ Z, then h_{ϕ,Z,D} = g_{ϕ,Z,D} = G_{ϕ,D}, the Green-Zahariuta function with the singularities defined by ϕ. They are also related to the Green functions (2.13) as follows (cf. Theorem 6.6).

Theorem 6.7 Let A be the divisor of a holomorphic function f in Ω and let u = log |f|. If a finite subset Z of a hyperconvex domain D ⋐ Ω is such that each irreducible component of |A| ∩ D contains at least one point of Z, then h^L_{u,Z,D} = g_{u,Z,D} = G_{A,D}.
Analyticity theorem
We let Ω be a pseudoconvex domain in C n , and let R : Ω → (−∞, ∞] be a lower semicontinuous function on Ω. We consider a continuous plurisubharmonic function ϕ : Ω×Ω → [−∞, ∞) such that:
(i) ϕ(x, ζ) < R(ζ) on Ω × Ω;
(ii) {x : ϕ(x, ζ) = −∞} = {ζ};
(iii) for any ζ ∈ Ω and r < R(ζ) there exists a neighbourhood U of ζ such that the set {(x, y) : ϕ(x, y) < r, y ∈ U } ⋐ Ω × Ω;
(iv) (dd c ϕ) n = 0 on {ϕ(x, ζ) > −∞};
(v) e ϕ(x,ζ) is Hölder continuous in ζ:
∃β > 0 : |e ϕ(x,ζ) − e ϕ(x,y) | ≤ |ζ − y| β , x, y, ζ ∈ Ω. (7.1)
It then follows that ϕ ζ (x) := ϕ(x, ζ) ∈ SM W ζ (3.12). Similarly to (3.3) and (3.1) we introduce the function Λ(u, ϕ ζ , r) and the relative type σ(u, ϕ ζ ). By Theorem 6.8 of [2], Λ(u, ϕ ζ , r) is plurisubharmonic on each connected component of the set {ζ : R(ζ) > r}.
As was shown (in a more general setting) by Demailly [3], the sets {ζ : ν(u, ϕ ζ ) ≥ c} are analytic for all c > 0. By an adaptation of Kiselman's and Demailly's proofs of Siu's theorem, we prove its analogue for the relative types. Denote
S c (u, ϕ, Ω) = {ζ ∈ Ω : σ(u, ϕ ζ ) ≥ c}, c > 0. (7.2) Theorem 7.1
Let Ω be a pseudoconvex domain in C n , a function ϕ(x, ζ) satisfy the above conditions (i)-(v), and u ∈ P SH(Ω). Then S c (u, ϕ, Ω) is an analytic subset of Ω for each c > 0.
Proof. By Theorem 6.11 of [2], the function U (ζ, ξ) = Λ(u, ϕ ζ , Re ξ) is plurisubharmonic in {(ζ, ξ) ∈ Ω × C : Re ξ < R(ζ)}. Fix a pseudoconvex domain D ⋐ Ω and denote R 0 = inf {R(ζ) : ζ ∈ D} > −∞. Given a > 0, the function U (ζ, ξ)−a Re ξ is then plurisubharmonic and independent of Im ξ, so by Kiselman's minimum principle [7], the function U a (ζ) = inf{Λ(u, ϕ ζ , r) − ar : r < R 0 } (7.3)
is plurisubharmonic in D.
Let ζ ∈ D. If a > σ(u, ϕ ζ ), then Λ(u, ϕ ζ , r) > ar for all r ≤ r 0 < min {R 0 , 0}. If r 0 < r < R 0 , then Λ(u, ϕ ζ , r) − ar > Λ(u, ϕ y , r 0 ) − aR 0 . Therefore U a (ζ) > −∞. Now let a < σ(u, ϕ ζ ). Hölder continuity (7.1) implies Λ(u, ϕ y , r) ≤ Λ(u, ϕ ζ , log(e r + |y − ζ| β )) ≤ σ(u, ϕ ζ ) log(e r + |y − ζ| β ) + C (7.4) in a neighbourhood of ζ. Denote r y = β log |y − ζ|, then U a (y) ≤ Λ(u, ϕ y , r y ) − ar y ≤ (σ(u, ϕ ζ ) − a)β log |y − ζ| + C 1 (7.5) near ζ.
Finally, let Z a,b (a, b > 0) be the set of points ζ ∈ D such that exp(−b −1 U a ) is not integrable in a neighbourhood of ζ. As follows from the Hörmander-Bombieri-Skoda theorem, the sets Z a,b are analytic.
If ζ ∉ S_c(u, ϕ, Ω), then the function b^{−1}U_a with σ(u, ϕ_ζ) < a < c is finite at ζ and so, by Skoda's theorem, ζ ∉ Z_{a,b} for all b > 0. If ζ ∈ S_c(u, ϕ, Ω), a < c, and b < (c − a)β(2n)^{−1}, then (7.5) implies ζ ∈ Z_{a,b}.
Remark 7.2
The result can be reformulated in the following way: under the conditions of Theorem 7.1, the set S(u, ϕ, Ω) = {ζ ∈ Ω : u(x) ≤ ϕ(x, ζ) + O(1) as x → ζ} is analytic. As was noticed by the referee, condition (iv) is actually necessary. Take, for example, the function ϕ(x, ζ) = max{log |x 1 − ζ 1 | + log |(x 1 − ζ 1 )x 2 |, log |x 2 − ζ 2 |} in C 2 × C 2 ; it has all the properties except for (iv), while the set S(log |x 1 |, ϕ, C 2 ) = {(0, ζ 2 ) : ζ 2 = 0} is not analytic.
As an application, we present the following result on propagation of plurisubharmonic singularities. We will say that a closed complex space A is a locally complete intersection if |A| is of pure codimension p and the associated ideal sheaf I_A is locally generated by p holomorphic functions.

Corollary 7.3 Let a closed complex space A be locally complete intersection on a domain D ⊂ C^n, and let ω be an open subset of D that intersects each irreducible component of |A|. If a function u ∈ PSH(D) satisfies u ≤ log |f| + O(1) locally in ω, where f_1, . . . , f_p are local generators of I_A, then it satisfies this relation near every point of |A|.

Proof. Denote by Z_l, l = 1, 2, . . ., the irreducible components of |A|, and

Z^*_l = (\mathrm{Reg}\, Z_l) \setminus \bigcup_{k\ne l} Z_k.   (7.6)

Every point z ∈ Z^*_l has a neighbourhood V and coordinates x = (x′, x″) ∈ C^p × C^{n−p}, centered at z, such that V ∩ |A| = V ∩ Z^*_l = V ∩ {x′ = 0} and f_1, . . . , f_p are global generators of I_A on V. Let U ⋐ D be a pseudoconvex neighbourhood of z such that U − U ⊂ V. For any ζ ∈ U and N > 0, the function

\varphi_N(x, \zeta) = \max\{\log|f(x - (\zeta', 0))|,\ N\log|x'' - \zeta''|\}   (7.7)
satisfies the conditions of Theorem 7.1 with Ω = U . Therefore, S 1 (u, ϕ N , U ) is an analytic subset of U . Note that S 1 (log |f |, ϕ N , U ) = U ∩ |A|. Let {U j } be a denumerable covering of Z * l by such neighbourhoods, and let {ϕ j,N } be the corresponding weights. We may assume u ≤ log |f | + O(1) near a point ζ 0 ∈ U 1 ∩ ω ∩ |A|, so σ(u, ϕ 1,N ζ ) ≥ σ(log |f |, ϕ 1,N ζ ) = 1 for every ζ ∈ U 1 ∩ ω ∩ |A| and thus for every ζ ∈ U 1 ∩ |A|. Take any ζ ∈ U 1 ∩ Z * l and choose constants a, b > 0 such that
V ζ = {x ∈ U 1 : a|f (x)| < 1, b|x ′′ − ζ ′′ | < 1} ⋐ U 1 . (7.8)
Assuming u ≤ C in U 1 , the relation σ(u, ϕ j,N ζ ) ≥ 1 implies u ≤ g j,N + C in V ζ , where g j,N (x) = max {log |af (x)|, N log b|x ′′ − ζ ′′ |} (7.9)
is the Green-Zahariuta function in V ζ for the singularity ϕ j,N ζ . Observe that g j,N decrease to log |af | as N → ∞, so u ≤ log |f | + C 1 near ζ. Therefore we have extended the hypothesis of the theorem from ω to ω ∪ U 1 . By repeating the argument, we get u ≤ log |f | + O(1) near every point of Z * l (since it is connected) and so, of Reg |A|. Finally, the bound near irregular points of |A| can be deduced by using Thie's theorem and the equation (dd c log |f |) p = 0 outside |A| (see, for example, the proof of Lemma 4.2 in [17]).
Remarks 7.4 (1) In particular, if F is a holomorphic function in D whose restriction to ω belongs to the integral closureĪ A,ω of the ideal sheaf I A,ω on ω, then F ∈Ī A,D . This follows from Corollary 7.3 because F ∈Ī A if and only if log |F | ≤ log |f | + O(1).
(2) Corollary 7.3 fails when the complete intersection assumption is removed. Take, for example, f = (z 2 1 , z 1 z 2 ) in C 2 , then log |z 1 | ≤ log |f (z)| + O(1) near every point (0, ξ) with ξ = 0, but not near the origin.
A consequence of Corollary 7.3 is the following analogue to Theorems 6.6 and 6.7 for the Green functions (2.14) with singularities along complex spaces.

Corollary 7.5 Let a closed complex space B be locally complete intersection on a pseudoconvex domain Ω ⊂ C^n, and let A be the restriction of B to a hyperconvex domain D ⋐ Ω. Further, let F_1, . . . , F_m be global sections of I_B generating I_A, and u = log |F|. If a finite subset Z of D is such that each irreducible component of |A| contains at least one point of Z, then

h^L_{u,Z,D} = g_{u,Z,D} = G_{A,D},   (7.10)

where the functions h^L_{u,Z,D} and g_{u,Z,D} are defined by (6.13) and (6.14), L denotes the collection of maximal weights with finite Lojasiewicz exponents (3.10) at the points of Z, and G_{A,D} is the Green function (2.14) for the space A.
Proof. We use the same arguments as in the proof of Theorems 6.6 and 6.7, the only difference being referring to Corollary 7.3 instead of Siu's theorem.
By (2.15), we have G A,D ≤ g u,Z,D . Let Z = {ζ 1 , . . . , ζ s }. For k = 1, . . . , s take a sequence of pseudoconvex domains D k,j such that D k,j+1 ⋐ D k,j ⋐ D, ∩ j D k,j = {ζ k }, and denote D j = ∪ k D k,j . As in the proof of Proposition 5.5, the functions u j defined by (5.9) with such a choice of D j are plurisubharmonic in D, the sequence increases to g u,Z,D a.e. in D, and u j ≤ log |F | + C j near each ζ k . Since log |F | = log |f | + O(1), where f 1 , . . . , f p are local generators of I A , p = codim |A|, this gives, by Corollary 7.3, the relations u j ≤ log |f | + O(1) locally near |A| and thus, g u,Z,D ≤ G A,D . This proves the second equality in (7.10).
The rest is proved as in Theorem 6.6.
Acknowledgement. The author is grateful to the referee for valuable remarks and suggestions that have helped much in improving the presentation.
Tek/Nat, University of Stavanger, 4036 Stavanger, Norway. E-mail: [email protected]
| [] |
[
"About the measure of the bare cosmological constant",
"About the measure of the bare cosmological constant"
] | [
"Massimo Cerdonio [email protected] \nINFN Section\nUniversity of Padua\nvia Marzolo 8I-35131PadovaItaly\n"
] | [
"INFN Section\nUniversity of Padua\nvia Marzolo 8I-35131PadovaItaly"
] | [] | I try to revive, and possibly reconcile, a debate started a few years ago, about the relative roles of a bare cosmological constant and of a vacuum energy, by taking the attitude to try to get the most from the physics now available as established. I notice that the bare cosmological constant of the Einstein equations, which is there ever since GR emerged, is actually constrained (if not measured) indirectly combining the effective cosmological constant observed now, as given by CDM Precision Cosmology, with the cumulative vacuum contribution of the particles of the Standard Model, SM. This comes out when the vacuum energy is regularized, as given by many Authors, still within well established Quantum Field Theory, QFT, but without violating Lorentz invariance. The fine tuning, implied by the compensation to a small positive value of the two large contributions, could be seen as offered by Nature, which provides one more fundamental constant, the bare Lambda. The possibility is then discussed of constraining (measuring) directly such a bare cosmological constant by the features of primordial gravitational wave signals coming from epoch's precedent to the creation of particles. I comment on possibilities that would be lethal: the discovery of Beyond SM particles, and if the vacuum does not gravitate. This last issue is often raised, and I discuss the current situation about. Finally a hint is briefly discussed for a possible "bare Lambda inflation" process. | 10.1007/s10701-019-00285-9 | [
"https://export.arxiv.org/pdf/1807.08468v4.pdf"
] | 119,516,866 | 1807.08468 | 3591e073c71fa1c39944e63963c3a997f7c5b91e |
About the measure of the bare cosmological constant
March 14 th 2019
Massimo Cerdonio [email protected]
INFN Section
University of Padua
via Marzolo 8I-35131PadovaItaly
About the measure of the bare cosmological constant
March 14th 2019. Keywords: cosmological constant, relativistic aspects of cosmology, vacuum energy, primordial gravitational waves, inflation.
I try to revive, and possibly reconcile, a debate started a few years ago, about the relative roles of a bare cosmological constant and of a vacuum energy, by taking the attitude to try to get the most from the physics now available as established. I notice that the bare cosmological constant of the Einstein equations, which is there ever since GR emerged, is actually constrained (if not measured) indirectly combining the effective cosmological constant observed now, as given by CDM Precision Cosmology, with the cumulative vacuum contribution of the particles of the Standard Model, SM. This comes out when the vacuum energy is regularized, as given by many Authors, still within well established Quantum Field Theory, QFT, but without violating Lorentz invariance. The fine tuning, implied by the compensation to a small positive value of the two large contributions, could be seen as offered by Nature, which provides one more fundamental constant, the bare Lambda. The possibility is then discussed of constraining (measuring) directly such a bare cosmological constant by the features of primordial gravitational wave signals coming from epoch's precedent to the creation of particles. I comment on possibilities that would be lethal: the discovery of Beyond SM particles, and if the vacuum does not gravitate. This last issue is often raised, and I discuss the current situation about. Finally a hint is briefly discussed for a possible "bare Lambda inflation" process.
Introduction
Some time ago an interesting debate took place about what to invoke in order to explain the speeding up of the expansion of the Universe. On one side Bianchi and Rovelli [1] invoked the role of a bare Einstein cosmological constant Λ, which, being the other constant allowed in Einstein GR besides the gravitational constant G, is by no reason to be set to zero. Nature may well be offering the value needed to explain the observed cosmic acceleration. Dadhich [2] remarked that very general guiding principles require having a Λ in the Einstein equations, as a true constant of the space-time structure. Both points of view would see its value given by the accelerating expansion of the Universe as observed in ΛCDM Precision Cosmology. On the opposing side Kolb [3] emphasized how mysterious the smallness of such a cosmological constant needed by the ΛCDM model is, with respect to that coming from the vacuum energy of particle fields.
I would like to revive the debate with an until now unnoticed argument, by taking the attitude of trying to get the most from the physics now available. Possibly I reconcile the views summarized above: a bare Λ is there in fact, but it is large, not the small one observed now, and the large value predicted by SM + QFT can be accommodated in the picture, without asking for revisions of the theory.
Constraining indirectly the bare lambda
On one side, if we look at the observational situation now [4], the Λ needed by ΛCDM is actually a Λ_eff, which may well come out as a combination of an Einstein bare cosmological constant Λ_bare with a Λ_v, coming from the vacuum of the fields of the now existing particles,
Λ_eff = Λ_bare + Λ_v,    (1)
and Λ_eff may be small, while Λ_bare and Λ_v need not be. On the other side, also Λ_v can be seen to come from observations, when its value can be calculated with QFT for the observed SM particles. Therefore I remark that, as QFT and the SM are well established physics [5] just as GR, if SM + QFT could provide a full calculation of Λ_v, then Λ_v should be considered as measured, and thus also Λ_bare would come out to be measured, at least indirectly, from eq. (1). In the spirit of squeezing out the most from established physics, I avoid recourse to modifications/extensions of theories such as GR and QFT, to modifications/extensions of models such as the SM, and to violations of principles such as Lorentz invariance [6] and the Equivalence Principle. A full calculation of Λ_v is available only at O(1), and so only constraints can be considered at the moment.
To evaluate Λ_v within the SM sector, I use the coincident results of the various Lorentz-invariant methods of regularization of the energy density of the vacuum of SM particle fields introduced respectively in refs [7,8]. The motivation for these regularization procedures was amply discussed and expanded in ref [9]. The issue is that, using the more common method with an ultraviolet cutoff at the Planck scale, one violates Lorentz invariance and gets the wrong equation of state for the vacuum. By contrast [9], "...the zero-point energy...can be made perfectly finite" when one uses the regularization proposed and discussed in [7][8][9].
To get the total SM contribution to Λ_v, I recall, for convenience of the reader, the calculations of refs [8,9], and so I use for the present vacuum energy density ρ_v contributed by particles the relation
ρ_v = (c/ħ)^3 Σ_j n_j (m_j^4 / 64π^2) ln(m_j/μ)^2,  with  Λ_v = (8πG/c^2) ρ_v,    (2)
where n_j are the degrees of freedom, m_j is the mass of the j-th particle, G is the gravitational constant, c is the velocity of light, and ħ is the Planck constant (SI units). Notice that eq. (2) is demonstrated in [9] to be valid just the same also in curved space-time. The value of the renormalization scale μ is taken ~ 3 × 10^-25 GeV. As the leading term giving the ultraviolet cutoff at the Planck scale for renormalization has been discarded as unphysical, the renormalization scale μ is now to be sought at energies below the Planck scale. The value chosen may appear somewhat arbitrary, but in fact the result is quite insensitive, over more than 30 orders of magnitude, to the value of μ for μ below ~ 10^5 GeV; see Fig. 5 in [9]. The particles taken into account are bosons (with positive sign), namely the Higgs, Z and W±, and fermions (with negative sign), namely quarks and leptons. The result is an overall negative ρ_SM ~ -2 × 10^8 GeV^4, which in SI corresponds to a negative Λ_SM ~ -4 × 10^3 m^-2. Photons and neutrinos, having zero and very small mass respectively, do not contribute.
One must add to Λ_SM the contributions from the EW and QCD phase transitions. Such contributions are model dependent, but of order O(1). Taking the values preferred in [9], all in all the total vacuum contribution from the SM piles up to give a total value Λ_v = -6 × 10^3 m^-2.
Thus the bare Einstein cosmological constant Λ_bare can be evaluated from eq. (1) with Λ_v above and using the observed Λ_eff. As Λ_eff = +10^-52 m^-2, just slightly positive, is much smaller in absolute value than Λ_v, it is seen that Λ_bare comes out to be practically equal to Λ_bare = -Λ_v = +6 × 10^3 m^-2. This value should be correct at O(1), and should be seen as a constraint from the SM and QFT on the value of the bare cosmological constant.
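To make the structure of the sum in eq. (2) concrete, here is a minimal numerical sketch in natural units. The masses and signed degrees of freedom below are illustrative values chosen by analogy with the SM spectrum, not the precise inputs of refs [8,9], and the lighter fermions are omitted; only the overall negative sign and the rough order of magnitude, dominated by the top quark term, are meant to be read off.

```python
# Minimal sketch of the regularized SM vacuum energy sum of eq. (2), in natural units (GeV^4).
# Masses (GeV) and signed degrees of freedom are illustrative; fermions carry a negative sign.
import math

mu = 3e-25  # renormalization scale mu in GeV, the value quoted in the text

particles = [          # (name, signed degrees of freedom, mass in GeV) -- assumed values
    ("Higgs", +1, 125.1),
    ("Z", +3, 91.2),
    ("W+-", +6, 80.4),
    ("top", -12, 172.8),
    ("bottom", -12, 4.18),
]

rho_v = sum(n * m**4 / (64 * math.pi**2) * math.log((m / mu) ** 2)
            for _, n, m in particles)

print(f"rho_v ~ {rho_v:.2e} GeV^4")  # negative overall, dominated by the top quark
```

The full bookkeeping of refs [8,9] (all SM species plus the phase-transition terms) is what produces the values quoted in the text; the sketch only illustrates how the signed sum works.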
Prospects for direct measurements of the bare lambda.
The constraint discussed above looks however indirect. One may wonder if it would ever be possible to have a direct constraint/measurement. It has been recently considered, see [10] and refs therein, how a non-zero cosmological constant, no matter how small, can affect gravitational waves (GWs). At the moment only post de Sitter/Newtonian calculations are available, but efforts for a full GR treatment are announced. Then, should we have available in the future on one side such calculations for a large Λ and on the other side observations of primordial GWs, generated before the vacuum contributions would be in place to balance the bare Λ, it would be possible to get the bare Λ in a direct manner, exclusively from the GR sector. There are proposed sources that can lead to cosmological backgrounds of gravitational waves coming from epochs back to inflation and before. A few of them could be within the reach of near-future gravitational wave detectors such as LISA and the LIGO/VIRGO/KAGRA ground-based observatory; see the review [11]. However, the only explicit calculation available of the effect on GWs of a positive Λ in a de Sitter background concerns periodic GWs [12]. For the observed Λ_eff, the calculated alterations in periodic GWs, with respect to a Λ_eff identically zero, would fail the LIGO/VIRGO/KAGRA and LISA detection levels by more than 20 orders of magnitude. As the bare Λ considered above would be some 55 orders of magnitude larger than the observed Λ_eff, one would expect that quite large alterations should show up in primordial GWs, but of course, on one side in these conditions the approximations in [12] break down, and on the other side no extension to a stochastic background is available. So, while as of now a complete framework is not available, still the prospects for the future are encouraging, because on one side theory and calculation may develop definite predictions and on the other side GW detectors may reach adequate sensitivities.
Discussion.
The logic of Sec. 2 is crucially based on accepting the results of the regularization methods of refs [7][8][9]. Usually renormalization procedures, to take care consistently of infinities, connect to physical measures within the sector of relevance. In the case here the connection to physics, to proceed with the regularization, is somewhat less direct. As summarized above, it concerns avoiding the violation of Lorentz invariance, a violation which is strongly excluded by a wealth of current experiments/observations. I searched the literature to find comments/criticisms/rebuttals about this issue, and found increasing consensus. Dadhich remarked [2] that we would have to wait for quantum gravity, but meanwhile the important point is that the Λ coming from that would have no relation with the Planck length. In ref [13], where Lorentz invariance is considered for different purposes, but still the issue of the connection with the cosmological constant is discussed at length, it is remarked that "...imposing Lorentz invariance has given us a rather definite finite cut-off estimate for the cosmological constant". More recently [14] this result has been used (in a different context) as well known.
It is commonly accepted that the vacuum energy gravitates with minimal coupling in the Einstein equations and that the Casimir effects offer experimental evidence of that; see for instance [9] and refs therein. Such a notion has been questioned recently. On one hand Dadhich [2] proposes that such a vacuum energy should not gravitate through a stress tensor, but rather through an enlargement of the framework. On the other hand, for Casimir effects, Nikolic contends [15] that the Casimir force cannot originate from the vacuum energy of the electromagnetic (EM) field. Cerdonio and Rovelli [16] demonstrated, with a simple gedanken experiment, that the action of the EM vacuum in a Casimir cavity is inextricably connected to the (massive) presence of matter in the plates, and that it gives just a (regular, negative) binding energy, nothing to do with the "free" vacuum called in for cosmology. Similar remarks, after different arguments, can be found in [17]. Also, the idea itself of the role of zero-point energies has been contrasted, in favor of relativistic quantum forces within charges in the matter of the plates [18]. In the lack of a final clarification of the issue, I warn here that such a "semi-classical gravity" hypothesis is a crucial assumption for my considerations.
One may feel that the fine tuning which appears, as Λ_eff = +10^-52 m^-2 while Λ_bare and -Λ_v are much larger, would be embarrassing. In fact the point made here, that the vacuum energy estimated from the Standard Model (many orders of magnitude larger than the observed cosmological constant) may be compensated by a bare cosmological constant, has been made often in the cosmological literature, but it has always been taken as an unwanted fine-tuning to be dismissed, as, for instance, in refs [7][8][9].
At variance with the above attitude, I consider alternatively that a deceivingly simple notion is on the table. As Λ_eff comes from observations, and Λ_v also comes from measured quantities through well-established physical theories, the logical conclusion is rather that Λ_bare is actually at least heavily constrained, using the observations of current precision cosmology and the measurements coming from realms different from cosmology.
A bare Lambda inflation ?
Finally, the above considerations invite an obvious speculation, that actually the positive and large Λ_bare may have started the inflation process, an inflation without inflatons. SM physics is well understood and tested up to the temperature of the EW transition [5]. Above this temperature the SM particles are massless, and thus do not contribute to the renormalized vacuum energy, according to eq. (2). It is a common view that above the GUT scale the Universe would be filled with radiation at that temperature, somewhat below the Planck scale T_P ~ 10^19 GeV. Constraints on the initial thermal radiation have been considered in [19]. Therefore, if a quasi-de Sitter expansion were initiated by Λ_bare, the Universe would cool down until reaching the EW transition temperature. The particle vacuum contributions would then start to cumulate to give a Λ_v. Such a Λ_v would ultimately compensate Λ_bare, in sort of a graceful exit from a "bare Lambda inflation". If I take the temperature of the Universe T_i at the start of the process somewhat below the Planck temperature, say T_i ~ 10^17 GeV, and as final temperature T_f that of the completion of particle creation, say indicatively T_f ~ 1 MeV when neutrinos decoupled, and I use a ratio of scale factors a_f/a_i ~ T_i/T_f as for radiation, then the number of e-folds would be N = ln(a_f/a_i) ~ 46. Such an N is close to the values N ~ 50-60 preferred by Planck [20] for a generic inflation process. Of course this may be only a numerical coincidence, but the matter may warrant further attention, as the scenario would be pretty rigid, and thus could be more credible in a Bayesian sense than any inflaton model. An elaboration of this hint is however beyond the scope of this paper.
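As a quick check of the e-fold count quoted above, a two-line evaluation under the stated assumption a_f/a_i ~ T_i/T_f:

```python
import math

T_i = 1e17   # GeV, assumed initial temperature somewhat below the Planck scale
T_f = 1e-3   # GeV, roughly the temperature at neutrino decoupling (~1 MeV)

N = math.log(T_i / T_f)  # e-folds, assuming a_f / a_i ~ T_i / T_f as for radiation
print(round(N, 1))       # ~46
```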
6. Concluding remarks
In view of the above discussion, it looks to me that the fine tuning is rather offered by Nature. Therefore it is just an observational fact: a set of measurements made now actually gives the value which is built, eternal and unchanging, into the Einstein equations of GR. Such a Nature-given fine tuning appears at the same level as the fine tuning of the fundamental constants, to account for which anthropic reasonings have been put forward. Then the result of my considerations should be seen as observational evidence about a primordial bare Λ, and thus to be taken into account in modelling the early Universe. My considerations may give a new slant to the Cosmological Constant Problem(s).
Acknowledgements. I am grateful to my wife Annamaria for bearing with me during the preparation of this paper and to Naresh Dadhich for correspondence on the matter. I thank Alessandro Bettini and Gianni Carugno for lively discussions. I am thankful to Philippe Jetzer for a discussion and for helpful suggestions. I am much indebted to Stefano Liberati for a critical reading of the manuscript, with comments I took in due consideration for the present version.
Bianchi E. and Rovelli C.: Why all these prejudices against a constant? arXiv:1002.3966v3 [astro-ph.CO], 11 Apr 2010.
Dadhich N.: On the enigmatic Λ, a true constant of spacetime. arXiv:1006.1552v2, 21 Feb 2011; see also, by the same author, arXiv:1105.3396 and arXiv:1609.02138.
Bianchi E., Rovelli C. and Kolb R.: Cosmology forum: Is dark energy really a mystery? Nature 466, 321-322 (2010).
By "now" I mean the cosmic time when the creation of all the SM particles had been completed and the contribution of a Λ_eff to the expansion of the Universe is found constant within the precision of current observations.
Baumann D.: TASI Lectures on Inflation. arXiv:0907.5424v2 (2012). I am using the term well established physics to summarize what has been remarked therein (Table I) about the history of the Universe: "...from 10^-10 seconds [corresponding to the Electroweak Unification at an energy of 1 TeV] to today the history of the universe is based on well understood and experimentally tested laws of particle physics, nuclear and atomic physics and gravity".
However, Lorentz-violating theories such as Hořava gravity are amenable to renormalization and still compatible with observations; see for instance Wang A.: Hořava Gravity at a Lifshitz Point: A Progress Report. Int. J. Mod. Phys. D 26, 1730014 (2017).
I thank S. Liberati for pointing this to me.
Akhmedov E. Kh.: Vacuum energy and relativistic invariance. arXiv:hep-th/0204048v2 (2002).
Kosma J. F. and Prokopec T.: The Cosmological Constant and Lorentz Invariance of the Vacuum State. arXiv:1105.6296v1 [gr-qc], 31 May 2011.
Martin J.: Everything You Always Wanted To Know About The Cosmological Constant Problem (But Were Afraid To Ask), in Understanding the Dark Universe, Comptes Rendus Physique 13.6-7, 566-665 (2012).
Ashtekar A.: Implications of a positive cosmological constant for general relativity. Rep. Prog. Phys. 80, 102901 (2017).
Caprini C. and Figueroa D. G.: Cosmological Backgrounds of Gravitational Waves. arXiv:1801.04268 [astro-ph.CO], 5 Feb 2018.
Näf J., Jetzer P. and Sereno M.: On gravitational waves in spacetimes with a nonvanishing cosmological constant. Phys. Rev. D 79, 024014 (2009).
Visser M.: Lorentz Invariance and the Zero-Point Stress-Energy Tensor. Particles 1, 138 (2018).
Lombriser L.: On the cosmological constant problem. arXiv:1901.08588v1 [gr-qc], 23 Jan 2019.
Nikolić H.: Proof that Casimir force does not originate from vacuum energy. Physics Letters B 761, 197 (2016).
Cerdonio M. and Rovelli C.: Casimir effects are not an experimental demonstration that free vacuum gravitates: connections to the Cosmological Constant Problem. J. Mod. Phys. D 24, 1544020 (2015) [special issue publishing a selection of Essays awarded with Honorable Mention by the Gravity Research Foundation 2015 Awards].
Cerdonio M. and Rovelli C.: Casimir cavities do not fly. arXiv:1406.1105v3.
Mostepanenko V. M. and Klimchitskaya G. L.: Whether an enormously large energy density of the quantum vacuum is catastrophic. arXiv:1903.04261v1 [physics.gen-ph], 2 Mar 2019.
Jaffe R. L.: Casimir effect and the quantum vacuum. Phys. Rev. D 72, 021301 (2005).
Herrera R., Pavon D. and Saavedra J.: Constraints on the radiation temperature before inflation. arXiv:1801.06155v1 [gr-qc], 18 Jan 2018.
Planck 2015 results: A&A 594, A13 (2016).
| [] |
[
"Conditional Independence for Pretext Task Selection in Self-Supervised Speech Representation Learning",
"Conditional Independence for Pretext Task Selection in Self-Supervised Speech Representation Learning"
] | [
"Salah Zaiem [email protected] \nLTCI\nTélécom Paris\nInstitut Polytechnique de Paris\nPalaiseauFrance\n\nAvignon Université, LIA\nAvignonFrance\n",
"Titouan Parcollet \nAvignon Université, LIA\nAvignonFrance\n",
"Slim Essid \nLTCI\nTélécom Paris\nInstitut Polytechnique de Paris\nPalaiseauFrance\n"
] | [
"LTCI\nTélécom Paris\nInstitut Polytechnique de Paris\nPalaiseauFrance",
"Avignon Université, LIA\nAvignonFrance",
"Avignon Université, LIA\nAvignonFrance",
"LTCI\nTélécom Paris\nInstitut Polytechnique de Paris\nPalaiseauFrance"
] | [] | Through solving pretext tasks, self-supervised learning (SSL) leverages unlabeled data to extract useful latent representations replacing traditional input features in the downstream task. A common pretext task consists in pretraining a SSL model on pseudo-labels derived from the original signal. This technique is particularly relevant for speech data where various meaningful signal processing features may serve as pseudolabels. However, the process of selecting pseudo-labels, for speech or other types of data, remains mostly unexplored and currently relies on observing the results on the final downstream task. Nevertheless, this methodology is not sustainable at scale due to substantial computational (hence carbon) costs. Thus, this paper introduces a practical and theoretical framework to select relevant pseudo-labels with respect to a given downstream task. More precisely, we propose a functional estimator of the pseudo-label utility grounded in the conditional independence theory, which does not require any training. The experiments conducted on speaker recognition and automatic speech recognition validate our estimator, showing a significant correlation between the performance observed on the downstream task and the utility estimates obtained with our approach, facilitating the prospection of relevant pseudo-labels for selfsupervised speech representation learning. | 10.21437/interspeech.2021-1027 | [
"https://arxiv.org/pdf/2104.07388v2.pdf"
] | 233,241,182 | 2104.07388 | daa21e15a425d24dcbe318fd191bba74049ed4dd |
Conditional Independence for Pretext Task Selection in Self-Supervised Speech Representation Learning
Salah Zaiem [email protected]
LTCI
Télécom Paris
Institut Polytechnique de Paris
PalaiseauFrance
Avignon Université, LIA
AvignonFrance
Titouan Parcollet
Avignon Université, LIA
AvignonFrance
Slim Essid
LTCI
Télécom Paris
Institut Polytechnique de Paris
PalaiseauFrance
Conditional Independence for Pretext Task Selection in Self-Supervised Speech Representation Learning
Index Terms: Self-Supervised Learning, Speech Representation Learning
Through solving pretext tasks, self-supervised learning (SSL) leverages unlabeled data to extract useful latent representations replacing traditional input features in the downstream task. A common pretext task consists in pretraining a SSL model on pseudo-labels derived from the original signal. This technique is particularly relevant for speech data where various meaningful signal processing features may serve as pseudolabels. However, the process of selecting pseudo-labels, for speech or other types of data, remains mostly unexplored and currently relies on observing the results on the final downstream task. Nevertheless, this methodology is not sustainable at scale due to substantial computational (hence carbon) costs. Thus, this paper introduces a practical and theoretical framework to select relevant pseudo-labels with respect to a given downstream task. More precisely, we propose a functional estimator of the pseudo-label utility grounded in the conditional independence theory, which does not require any training. The experiments conducted on speaker recognition and automatic speech recognition validate our estimator, showing a significant correlation between the performance observed on the downstream task and the utility estimates obtained with our approach, facilitating the prospection of relevant pseudo-labels for selfsupervised speech representation learning.
Introduction
Self-supervised learning (SSL) methods usually solve pretext tasks to learn useful representations, taking advantage of the available unlabeled data, whether it is text, images [1] or audio samples [2], for better performance on downstream tasks. Thus, this approach helps improving the results obtained on the considered task without relying on costly and sometimes imprecise manual annotations.
For instance, SSL models have recently been proposed to benefit from large amounts of unlabeled speech data, leading to state-of-the-art results in various speech processing tasks such as automatic speech recognition (ASR) or speech enhancement [3]. Various paradigms have thus been introduced including: predictive coding [4,5,6,7], pseudo-label learning [8,9], autoencoding techniques [10,11], generative modelling [12] or contrastive learning [13,14].
Pretext tasks may be defined through a choice of pretext labels, hereafter referred to as pseudo-labels. The automatic generation of pseudo-labels is a common technique to conceive SSL models in many application domains such as computer vision [15], music processing [16] and speech processing [8,17]. In the latter scenario, examples of pseudo-labels include, but are not limited to, pitch estimators, energy-based features, voicing state... As a matter of fact, decades of research in signal processing offer a wide range of potential features to be considered as pseudo-labels.
However, the process of selecting the most relevant signal features among the ones present in the speech processing literature is still essentially driven by intuition or empirical validation. Empirical assessment implies a heavy computational load due to a large number of required pretraining and fine-tuning steps. This results in a substantial carbon footprint and may lead to intractability issues. In this work, we aim to provide a clear procedure for a theoretically motivated and efficient pseudolabel selection. This is achieved by introducing a function that estimates the utility of considering a given pseudo-label.
Despite a few recent works on the theory of contrastive learning [18,19,20,21], the literature on the theoretical foundations of pseudo-label-based SSL remains extremely scarce. Lee et al. [20] proposed a novel approach building a link between the downstream-task performance and the conditional independence (CI) between a pseudo-label and the training samples given the downstream labels. However, their experiments are not related to speech and are restricted to pseudo-labels with an enforced strict conditional independence, which is not the case for traditional speech features. On the other hand, numerous pseudo-labels have been empirically tested to generate useful latent speech representations [8]. Pascual et al. [8] introduced a novel SSL method for speech referred to as PASE, alongside a thorough empirical ablation study on the considered pseudo-labels highlighting the most influential ones. A similar study has been done on music data for instrument recognition by Hung et al. [16]. Nevertheless, neither work provides a prior quantitative motivation to justify the pseudo-label selection, which was thus potentially performed with grid or random searches. In short, and to the best of our knowledge, explaining and motivating the selection process of pseudo-labels remain open research questions for SSL on speech data. Therefore, the main contributions of our work are threefold:
1. Propose a method to compute an estimate of the conditional independence between the pretext task and the downstream speech samples given the downstream label.
2. Show that this estimate predicts well the utility of a given pseudo-label for a given downstream task, as it correlates highly with the downstream performance on two tasks: ASR (TIMIT) and speaker recognition (VoxCeleb).
3. Release the code base developed with SpeechBrain [22] for replication and to encourage further investigations. 1 The conducted experiments demonstrate that the proposed method allows a more intelligent, i.e. better informed, pretext task selection in self-supervised learning settings.
Conditional Independence Estimation
This section details the computation of the conditional independence estimate that we propose as a candidate measure of pseudo-label utility. First, we motivate this choice with a precise description of the theoretical background. Then, we describe the computation steps. Let X, Y and Z be, respectively, the downstream data points, the downstream labels and the pseudo-labels which we decide to learn to predict. Let also C be the set of possible downstream classes. As an example, if we consider speaker recognition as a downstream task, X would be the speech samples, Y the speaker IDs, C the set of unique speaker IDs, and Z a generated signal feature, such as the fundamental frequency. Let X = (x_i)_{i∈{0,...,M}}, with M being the cardinal of X. Each x_i is a speech sample, represented as a Mel band spectrogram. Every sample x_i has a corresponding downstream label y_i and an automatically generated pseudo-label z_i. In the considered cases, y_i is always discrete, whether it is the speaker ID for speaker recognition or the phone for ASR. To every x_i corresponds one value z_i, which is the mean of the framewise pseudo-label values.
As stated above, Lee et al. [20] linked the utility of a pseudo-label Z to the conditional independence between Z and X given Y. In other terms, given the labels Y, we want to quantify how much we can possibly predict the pseudo-labels Z without knowing much about X. In this work, the authors demonstrated that, under certain assumptions, the downstream classifier error is bounded by a function of the downstream training set size and a measure of the conditional dependence. More precisely, the main theorem shows that the bounding function decreases linearly with the downstream-task dataset size (M) and quadratically with the conditional independence, thus making conditional independence a potentially good estimator of pseudo-label utility. The principal issue with conditional independence is the difficulty of computing good estimates of this quantity on realistic data. For our measure, we choose to rely on a kernel-based independence criterion: the Hilbert-Schmidt Independence Criterion (HSIC) [23]. HSIC has already been proven successful for textual data in testing statistical dependence between translated sentences [23]. Our choice is motivated by the fact that kernel-based techniques facilitate handling multivariate and complex data, as the estimation then boils down to the computation of a similarity measure between speech samples.
Here are the steps to compute our CI estimate of a pseudo-label Z for a downstream task (X, Y), inspired by [23], with further details below:
1. Regroup the samples X by the downstream classes C.
2. Embed the speech samples X into fixed-size representations.
3. Compute, for every downstream class c ∈ C, the kernel matrices K_c and L_c containing the similarity measures for the speech samples and the pseudo-labels, respectively.
4. Compute the independence test for every split group using K_c and L_c, and aggregate the estimations.
We start by splitting the speech samples according to the downstream classes. To obtain the similarity matrices, the second step aims to compute fixed-size embeddings for the speech samples. We wanted to avoid any training for this phase, so we chose the Gaussian downsampling method [24] detailed hereafter. After the Mel spectrogram extraction, a speech sample becomes a sequence of L input feature vectors of dimension D. The goal is, for varying L, to obtain fixed-size embeddings of size N × D, with N a fixed hyper-parameter for all the samples. To do so, the sequence is divided into N parts. In each part, we compute a Gaussian average of the input frames around the center of the considered part, with the standard deviation σ_gd being another hyper-parameter. This leads, for any sample, to an N × D tensor without any training procedure.
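A minimal sketch of this Gaussian downsampling step is given below. It assumes a sample is already an L × D matrix of Mel-spectrogram frames; the function name and the exact windowing details (Gaussian weights over all frames, normalized per segment) are illustrative rather than taken verbatim from [24].

```python
import numpy as np

def gaussian_downsample(frames: np.ndarray, n_parts: int = 20, sigma: float = 0.07) -> np.ndarray:
    """Collapse an (L, D) frame sequence into a fixed (n_parts, D) embedding.

    Each output row is a Gaussian-weighted average of the frames, centered on the
    middle of one of the n_parts segments; sigma is expressed on a 0-1 time axis.
    """
    L, D = frames.shape
    positions = (np.arange(L) + 0.5) / L              # frame positions in [0, 1]
    centers = (np.arange(n_parts) + 0.5) / n_parts    # segment centers in [0, 1]
    weights = np.exp(-0.5 * ((positions[None, :] - centers[:, None]) / sigma) ** 2)
    weights /= weights.sum(axis=1, keepdims=True)     # normalize each Gaussian window
    return weights @ frames                           # (n_parts, D), no training involved
```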
Therefore, for two speech samples x_i and x_j, holding two pseudo-label values z_i and z_j, the coefficients of our kernel similarity matrices are:
K_ij = K(x_i, x_j) = cos(GD(x_i), GD(x_j)),   L_ij = RBF(z_i, z_j),    (1)
with GD(.) the Gaussian downsampling function, cos(., .) the cosine similarity, and RBF(., .) the Radial Basis Function kernel defined as:
cos(x, x') = trace(x^T x') / (||x|| · ||x'||),   RBF(x, x') = exp(-||x - x'||^2 / (2σ^2)),    (2)
with σ being the width of the RBF kernel and trace(.) being the sum of elements on the main diagonal.
For each group of samples sharing the same downstream class c ∈ C, we compute the matrices K_c and L_c. K_c and L_c correspond to the definitions above, but restricted to the points with c as a downstream label. For each downstream class c, and as in [23], the HSIC value is:
HSIC_c(X, Z) = (1 / n_c^2) trace(K_c H_c L_c H_c),    (3)
with H_c = I_{n_c} - (1/n_c) 1_{n_c} 1_{n_c}^T, n_c being the number of points with downstream label c and 1_{n_c} a vector of ones of size n_c × 1.
The HSIC value is used to characterise the independence of two variables. This value corresponds to the Hilbert norm of their cross-covariance matrix. Intuitively, the HSIC value is high if samples similar in K are similar in L. Therefore, the lower this value is, the more independent the two arguments of HSIC are. We enforce the condition on Y by splitting by groups of points sharing the same downstream label.
The final value for a given pseudo-label and a downstream task is a weighted mean taking into account the number of samples per downstream class. So, with M being the total number of points and n_c being the number of points having c as their downstream label:
HSIC(X, Z | Y) = (1 / M) Σ_{c∈C} HSIC_c(X, Z) × n_c.    (4)
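For concreteness, steps 3 and 4, i.e. eqs. (1)-(4), can be sketched as follows with NumPy; the function and variable names are placeholders, and the embeddings are assumed to be the flattened fixed-size representations produced by the downsampling step above.

```python
import numpy as np

def class_hsic(emb: np.ndarray, z: np.ndarray, sigma: float = 0.05) -> float:
    """HSIC_c of eq. (3) for one downstream class.

    emb: (n_c, N*D) flattened fixed-size embeddings; z: (n_c,) pseudo-label values.
    """
    n_c = emb.shape[0]
    norm = np.linalg.norm(emb, axis=1, keepdims=True)
    K = (emb @ emb.T) / (norm @ norm.T)                          # cosine-similarity kernel, eq. (1)
    L = np.exp(-np.subtract.outer(z, z) ** 2 / (2 * sigma**2))   # RBF kernel on pseudo-labels
    H = np.eye(n_c) - np.ones((n_c, n_c)) / n_c                  # centering matrix H_c
    return np.trace(K @ H @ L @ H) / n_c**2

def conditional_hsic(embeddings: np.ndarray, pseudo_labels: np.ndarray, labels) -> float:
    """Weighted aggregation of eq. (4) over downstream classes."""
    labels = np.asarray(labels)
    total = 0.0
    for c in np.unique(labels):
        idx = labels == c
        total += idx.sum() * class_hsic(embeddings[idx], pseudo_labels[idx])
    return total / len(labels)
```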
Datasets and Experimental Setup
This section details the experiments validating the CI measure described above. The estimator is evaluated on two speech tasks that involve different aspects of the audio signal: automatic speech recognition (TIMIT) and speaker recognition (VoxCeleb). Thus, three different datasets are used in this work, one per downstream task considered and a common one for the self-supervised pretraining (Common Voice). CI is computed on both tasks for the list of pseudo-labels, mainly related to prosody and aggregates of spectral descriptors, given in Table 1. These features are extracted using the OpenSmile library [25]. They have been chosen among the features described in the feature selection literature for various speech tasks.
Figure 1: Illustration of the entire training pipeline including estimation, SSL and the downstream parts. The three steps are depicted: 1. estimate the pseudo-label utility; 2. SSL training with the candidate pseudo-label; 3. Train on the downstream task with the pretrained SSL model. The candidate pseudo-label is selected among various candidates based on its conditional independence score.
Datasets
The train set of the English Common Voice dataset (version 6.1) [29] is used for SSL pretraining (900 hours). Common Voice is a collection of speech utterances from worldwide users recording themselves from their own devices. Hence, the closeness to natural settings makes it a suitable choice for self-supervised learning. We remove from Common Voice the sentences lasting more than 10 seconds, as they often contain long silence parts due to open microphones. VoxCeleb1 [30] is used for the speaker recognition task. The training set contains 148, 642 utterances from 1251 different speakers. To compute the conditional independence estimates, we restricted ourselves, for tractability issues, to the utterances of 50 different speakers (the detailed list is given in the released repository 2 ).
TIMIT [31] is considered for the ASR task. It is composed of a standard 462-speakers training set, a 50-speakers development set and a core test set of 192 sentences for a total of 5 hours of clean speech. For the CI estimation, and to get discrete labels to split on, we cut the sentences at the phone level, using the official transcripts.
Self-supervised training
Based on the conclusions of previous work [9,14], apart from the pseudo-label to be tested, our self-supervised model learns to reconstruct the input Mel spectrograms and to compute 40-dimensional MFCC feature vectors. These targets are kept to avoid an information loss that would heavily harm downstream performance. Inspired by the PASE model [9,8], the model consists of an encoder followed by small predictors limited in capacity. Our pretraining model takes as input the speech samples as 80-band Mel spectrograms. The frame size is 25 ms and the hop size 10 ms. The encoder outputs the same number of frames, each corresponding to a 256-dimensional feature embedding. These new embeddings are the ones that will be subsequently extracted for the downstream-task retraining. The new features are then fed to the reconstruction workers and to the pseudo-label prediction. To facilitate the learning, pseudo-labels are predicted at the frame level. Predictions are made on top of the encoder with a single linear layer with a PReLU [32] activation. The final loss is the sum of every predictor's loss: MSE loss for the reconstructions, and l1 loss for the considered pseudo-label. The encoder is composed of three distinct parts: a VGG-like feature extractor, a bidirectional LSTM, and a two-layered dense neural network with LeakyReLU activations. The AdaDelta optimizer is used to update the weights, with 1 as a starting learning rate, ρ = 0.8 and ε = 10^-8. For every pseudo-label, the network is trained for 10 epochs. For the CI estimator, as in the work presenting the Gaussian downsampling method [24], we fix N = 20 and σ_gd = 0.07. After a few trials aiming to get well-spread similarity measures, we fixed the RBF kernel width to σ = 0.05. All the architecture details and hyperparameters can be found in the repository 2.
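The total pretraining objective described above, the sum of the two reconstruction losses and the l1 pseudo-label loss, can be sketched as follows in PyTorch; the worker modules and tensor shapes are illustrative assumptions, not the released implementation.

```python
import torch.nn as nn

mse = nn.MSELoss()
l1 = nn.L1Loss()

def ssl_loss(encoder, mel, mfcc_target, pseudo_target, workers):
    """Sum of worker losses on top of a shared encoder (assumed shapes: batch x frames x dim)."""
    feats = encoder(mel)                                        # (B, T, 256) frame-level embeddings
    loss = mse(workers["mel"](feats), mel)                      # Mel-spectrogram reconstruction
    loss = loss + mse(workers["mfcc"](feats), mfcc_target)      # 40-dim MFCC regression
    loss = loss + l1(workers["pseudo"](feats), pseudo_target)   # frame-level pseudo-label, l1 loss
    return loss
```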
Downstream Training
After extracting the Mel spectrograms from the downstream training data, these are fed to the frozen SSL pretrained encoder to get the self-supervised features. For the ASR retraining, we considered a speech recognition model based on CTC and attention from the SpeechBrain [22] library. The encoder is similar to the self-supervised training one. It is combined with a location-aware attentive recurrent (LiGRU) decoder [33] jointly trained with the CTC loss [34]. The model is trained for 50 epochs on the official train, dev, and test TIMIT sets. Performance is reported in terms of Phone Error Rate (PER). For VoxCeleb, we trained an XVector model [35] for 10 epochs with the frozen SSL features as input. The training recipe follows the one released within SpeechBrain [22]. The extracted speaker embeddings are tested on the enrol and test splits using PLDA [36] as a similarity metric. Performance is reported in terms of Equal Error Rate (EER).
Table 2: EER/PER values when learning to predict multiple pseudo-labels jointly. "Best" corresponds to the selection of the pseudo-labels with low CI estimator, "Worst" to the high ones. EER is shown for VoxCeleb experiments and PER for TIMIT. The middle column shows the selected pseudo-labels in the experiment.
We chose not to use any data augmentation or added noise during the training to avoid possible interference in our analysis. As a little variance was observed when changing the random seeds used for the TIMIT runs (σ = 0.20), the results presented are the mean of three different runs from three different seeds. Figure 2 summarizes the results of the experiment for all the considered pseudo-labels, reporting the CI estimates and the downstream performance for each of the two tasks. It shows the evolution of the conditional independence estimator and the PER and EER, respectively on TIMIT and VoxCeleb. Despite a little bump on the loudness pretraining, the two curves seem to follow the same trajectories.
Results
We are looking for a monotonic relationship between CI estimates and the downstream error. Two classic assessors of monotonicity are considered: Spearman correlation and Kendall tau. While Pearson correlation measures the linear correlation between the values, Spearman correlation is a Pearson correlation on the ranks of the values. Kendall τ considers all the pairs of pseudo-labels, and checks whether their order in the CI estimate is the same as for the error rate (i.e. the pair is concordant). The more concordant pairs there are, the higher Kendall τ is.
Spearman correlations reach 0.48 for speaker recognition and a high 0.93 on TIMIT for ASR, while Kendall τ is respectively 0.41 and 0.81 for the two tasks. The correlations between CI and the downstream error are logically positive: the lower the CI estimate, the more independent the pseudo-label is from the speech samples given the label, and the lower the downstream error, confirming the theoretical insights of [20]. Finally, to test the influence of the downsampling method on our estimate, we compute the HSIC values based on vectors downsampled with SVCCA [37]. This led to minor differences, with a mean relative difference of 1.5% on the final CI estimates, which hints at the robustness of our method to downsampling variations.
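For completeness, both rank correlations can be recomputed directly from per-pseudo-label CI estimates and error rates, e.g. with SciPy; the arrays below are placeholders, not the values reported in Figure 2.

```python
from scipy.stats import spearmanr, kendalltau

# Placeholder arrays: one CI estimate and one downstream error rate per candidate pseudo-label.
ci_estimates = [0.1, 0.4, 0.8, 0.2, 0.9, 0.7, 0.3]
error_rates = [16.4, 17.5, 18.4, 16.8, 17.9, 17.5, 17.0]

rho, _ = spearmanr(ci_estimates, error_rates)
tau, _ = kendalltau(ci_estimates, error_rates)
print(f"Spearman rho = {rho:.2f}, Kendall tau = {tau:.2f}")
```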
Combining Pseudo-labels
Finding the best combination of pseudo-labels certainly involves more than individual estimates, as questions of shared information may come into play. Nevertheless, we wanted to test our estimator with pseudo-labels regrouped in a naive way. In a second experiment, for each task, two self-supervised models are trained to predict two different groups of pseudo-labels: one learns to predict jointly the ones with the best CI estimator scores, and one learns the worst pseudo-labels according to our estimator. The same experimental setup is kept with one slight change: in this experiment, one of the objectives was to push the results further, so the encoder parameters were not frozen but were updated during the retraining with an SGD optimizer.
Pseudo-labels selected and results are described in Table 2. The third column shows EER for the VoxCeleb (VC) experiments, and PER for the TIMIT (TIM) ones. As expected, the results obtained with the best pseudo-labels are better than the ones obtained with the worst. Besides that, the results obtained with the unfrozen encoder are better than those with frozen features. This is probably due to the big distributional shift between the pretraining dataset (Common Voice) and the downstream ones. Unfreezing the encoder parameters may allow the encoder to adapt to the new points' distribution.
Conclusion
In this work, we introduce an estimator of the utility of a given pretext task as a function of the downstream task, to better explain and motivate the selection of pretext tasks in self-supervised learning settings. The estimator evaluates the conditional independence between the pretext label and the speech samples given the downstream labels, using HSIC as the independence criterion. The conducted experiments validate the proposed utility estimator on two tasks: ASR and speaker recognition. This opens a range of possibilities for finding and selecting new pretext tasks in self-supervised learning for speech or other types of data.
Acknowledgements
We want to thank Zoltan Szabo for the discussions we had on conditional independence and thank as well all the SpeechBrain library contributors. This work is partly funded by L'Agence de l'innovation de défense.
2 https://github.com/salah-zaiem/Pseudo-Label-Selection
Table 1: Candidate speech pseudo-labels and descriptions.
Feature - Description
Loudness - Intensity & approx. loudness
F0 - Fundamental Frequency
Voicing - Voicing Decision
Alpha Ratio [26] - Ratio of spectrum intensity above/below 1000 Hz
Zero Crossing Rate - Zero crossing number per frame
RastaSpec L1Norm - L1 Norm of Rasta Spectrum [27]
log HNR [28] - log of Harmonicity to Noise Ratio
Figure 2: Left: Phone Error Rate and CI estimate values on TIMIT for every considered pseudo-label. Right: Equal Error Rate and CI estimate values on VoxCeleb for every considered pseudo-label. Error rates appear on the left y axis. We can observe the monotonic relation between the estimator and the downstream errors, particularly for TIMIT.
C. Doersch, A. Gupta, and A. A. Efros, "Unsupervised visual representation learning by context prediction," 2016.
R. Arandjelovic and A. Zisserman, "Objects that sound," in Proceedings of the European Conference on Computer Vision (ECCV), September 2018.
Y.-C. Wang, S. Venkataramani, and P. Smaragdis, "Self-supervised learning for speech enhancement," 2020.
A. Baevski, H. Zhou, A. Mohamed, and M. Auli, "wav2vec 2.0: A framework for self-supervised learning of speech representations," 2020.
A. T. Liu, S.-w. Yang, P.-H. Chi, P.-c. Hsu, and H.-y. Lee, "Mockingjay: Unsupervised speech representation learning with deep bidirectional transformer encoders," ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), May 2020.
X. Song, G. Wang, Z. Wu, Y. Huang, D. Su, D. Yu, and H. Meng, "Speech-xlnet: Unsupervised acoustic model pretraining for self-attention networks," 2020.
Y. Zhang, J. Qin, D. S. Park, W. Han, C.-C. Chiu, R. Pang, Q. V. Le, and Y. Wu, "Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition," Oct 2020.
S. Pascual, M. Ravanelli, J. Serrà, A. Bonafonte, and Y. Bengio, "Learning problem-agnostic speech representations from multiple self-supervised tasks," 2019.
M. Ravanelli, J. Zhong, S. Pascual, P. Swietojanski, J. Monteiro, J. Trmal, and Y. Bengio, "Multi-task self-supervised learning for robust speech recognition," 2020.
D. Renshaw, H. Kamper, A. Jansen, and S. Goldwater, "A comparison of neural network methods for unsupervised representation learning on the zero resource speech challenge," in INTERSPEECH, 2015.
R. Algayres, M. S. Zaiem, B. Sagot, and E. Dupoux, "Evaluating the reliability of acoustic speech embeddings," in INTERSPEECH 2020 - Annual Conference of the International Speech Communication Association, Shanghai / Virtual, China, Oct. 2020.
S. Khurana, A. Laurent, W.-N. Hsu, J. Chorowski, A. Lancucki, R. Marxer, and J. Glass, "A convolutional deep markov model for unsupervised speech representation learning," 2020.
A. Saeed, D. Grangier, and N. Zeghidour, "Contrastive Learning of General-Purpose Audio Representations," Oct 2020.
D. Jiang, W. Li, M. Cao, R. Zhang, W. Zou, K. Han, and X. Li, "Speech simclr: Combining contrastive and reconstruction objective for self-supervised speech representation learning," 2020.
M. Noroozi and P. Favaro, "Unsupervised learning of visual representations by solving jigsaw puzzles," 2017.
Y.-N. Hung, Y.-A. Chen, and Y.-H. Yang, "Multitask learning for frame-level instrument recognition," 2019.
A. Shukla, S. Petridis, and M. Pantic, "Learning speech representations from raw audio by joint audiovisual self-supervision," Jul 2020.
T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A simple framework for contrastive learning of visual representations," 2020.
A. v. d. Oord, Y. Li, and O. Vinyals, "Representation learning with contrastive predictive coding," arXiv preprint arXiv:1807.03748, 2018.
J. D. Lee, Q. Lei, N. Saunshi, and J. Zhuo, "Predicting what you already know helps: Provable self-supervised learning," 2020.
S. Arora, H. Khandeparkar, M. Khodak, O. Plevrakis, and N. Saunshi, "A Theoretical Analysis of Contrastive Unsupervised Representation Learning," 36th International Conference on Machine Learning, ICML 2019, vol. 2019-June, pp. 9904-9923, Feb 2019.
M. Ravanelli, T. Parcollet, P. Plantinga, A. Rouhe, S. Cornell, L. Lugosch, C. Subakan, N. Dawalatabad, A. Heba, J. Zhong, J.-C. Chou, S.-L. Yeh, S.-W. Fu, C.-F. Liao, E. Rastorgueva, F. Grondin, W. Aris, H. Na, Y. Gao, R. D. Mori, and Y. Bengio, "Speechbrain: A general-purpose speech toolkit," 2021.
A. Gretton, K. Fukumizu, C. H. Teo, L. Song, B. Schölkopf, and A. Smola, "A kernel statistical test of independence," Jan 2007.
N. Holzenberger, M. Du, J. Karadayi, R. Riad, and E. Dupoux, "Learning Word Embeddings: Unsupervised Methods for Fixed-size Representations of Variable-length Speech Segments," in Interspeech 2018, ser. Proceedings of Interspeech 2018. Hyderabad, India: ISCA, Sep. 2018.
F. Eyben, M. Wöllmer, and B. Schuller, "opensmile - the munich versatile and fast open-source audio feature extractor," Jan 2010, pp. 1459-1462.
J. Sundberg and M. Nordenberg, "Effects of vocal loudness variation on spectrum balance as reflected by the alpha measure of long-term-average spectra of speech," The Journal of the Acoustical Society of America, vol. 120, pp. 453-457, Aug 2006.
H. Hermansky, N. Morgan, A. Bayya, and P. Kohn, "Rasta-plp speech analysis technique," vol. 1, Apr 1992, pp. 121-124.
P. Murphy and O. Akande, "Cepstrum-Based Harmonics-to-Noise Ratio Measurement in Voiced Speech," in Nonlinear Speech Modeling and Applications, G. Chollet, A. Esposito, M. Faundez-Zanuy, and M. Marinaro, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2005, pp. 199-218.
R. Ardila, M. Branson, K. Davis, M. Henretty, M. Kohler, J. Meyer, R. Morais, L. Saunders, F. M. Tyers, and G. Weber, "Common voice: A massively-multilingual speech corpus," 2020.
Voxceleb: A largescale speaker identification dataset. A Nagrani, J S Chung, A Zisserman, A. Nagrani, J. S. Chung, and A. Zisserman, "Voxceleb: A large- scale speaker identification dataset," Interspeech 2017, Aug 2017.
Timit acoustic-phonetic continuous speech corpus. J Garofolo, L Lamel, W Fisher, J Fiscus, D Pallett, N Dahlgren, V Zue, 111992Linguistic Data ConsortiumJ. Garofolo, L. Lamel, W. Fisher, J. Fiscus, D. Pallett, N. Dahlgren, and V. Zue, "Timit acoustic-phonetic continuous speech corpus," Linguistic Data Consortium, 11 1992.
Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. K He, X Zhang, S Ren, J Sun, K. He, X. Zhang, S. Ren, and J. Sun, "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification," 2015.
Light gated recurrent units for speech recognition. M Ravanelli, P Brakel, M Omologo, Y Bengio, IEEE Transactions on Emerging Topics in Computational Intelligence. 22M. Ravanelli, P. Brakel, M. Omologo, and Y. Bengio, "Light gated recurrent units for speech recognition," IEEE Transactions on Emerging Topics in Computational Intelligence, vol. 2, no. 2, p. 92-102, Apr 2018.
Joint ctc-attention based end-to-end speech recognition using multi-task learning. S Kim, T Hori, S Watanabe, 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEES. Kim, T. Hori, and S. Watanabe, "Joint ctc-attention based end-to-end speech recognition using multi-task learning," in 2017 IEEE international conference on acoustics, speech and signal processing (ICASSP). IEEE, 2017, pp. 4835-4839.
X-vectors: Robust dnn embeddings for speaker recognition. D Snyder, D Garcia-Romero, G Sell, D Povey, S Khudanpur, 2018 IEEE International Conference on Acoustics, Speech and Signal Processing. D. Snyder, D. Garcia-Romero, G. Sell, D. Povey, and S. Khudan- pur, "X-vectors: Robust dnn embeddings for speaker recognition," in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018, pp. 5329-5333.
Probabilistic Linear Discriminant Analysis. S Ioffe, Computer Vision -ECCV. A. Leonardis, H. Bischof, and A. PinzBerlin, Heidelberg; Berlin HeidelbergSpringerS. Ioffe, "Probabilistic Linear Discriminant Analysis," in Com- puter Vision -ECCV 2006, A. Leonardis, H. Bischof, and A. Pinz, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 531-542.
Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability. M Raghu, J Gilmer, J Yosinski, J Sohl-Dickstein, M. Raghu, J. Gilmer, J. Yosinski, and J. Sohl-Dickstein, "Svcca: Singular vector canonical correlation analysis for deep learning dynamics and interpretability," 2017.
| [
"https://github.com/salah-zaiem/Pseudo-Label-Selection",
"https://github.com/salah-zaiem/Pseudo-Label-Selection"
] |
[] | [
"Xiang-Nan Zhou \nCollege of Physics and Information Engineering\nShanxi Normal University\n041004LinfenPeople's Republic of China\n",
"Yun-Zhi Du \nInstitute of Theoretical Physics\nDatong University\n037009DatongPeople's Republic of China\n",
"Hao Yu \nInstitute of Theoretical Physics\nLanzhou University\n730000LanzhouPeople's Republic of China\n",
"Yu-Xiao Liu \nInstitute of Theoretical Physics\nLanzhou University\n730000LanzhouPeople's Republic of China\n"
] | [
"College of Physics and Information Engineering\nShanxi Normal University\n041004LinfenPeople's Republic of China",
"Institute of Theoretical Physics\nDatong University\n037009DatongPeople's Republic of China",
"Institute of Theoretical Physics\nLanzhou University\n730000LanzhouPeople's Republic of China",
"Institute of Theoretical Physics\nLanzhou University\n730000LanzhouPeople's Republic of China"
] | [] | In this paper, we consider the localization of a five-dimensional gravitino field on f (R) thick branes. We get the coupled chiral equations of the Kaluza-Klein (KK) modes of gravitino by choosing the gauge condition Ψz = 0. It is found that the chiral equations of the gravitino KK modes are almost the same as the ones of the Dirac fermion. However, their chiralities are precisely opposite. The chiral KK modes of gravitino could be localized on some kinds of f (R) thick branes if a coupling term is introduced. We investigate the localization of gravitino on three kinds of f (R) thick branes through a Yukawa-like coupling term with background scalar fields. It has been shown that all the KK modes of gravitino can not be localized on the pure geometric f (R) thick branes by adding a five-dimensional gravitino mass term. However, for the f (R) thick branes generated by one or two background scalar fields, only the left-or right-handed zero mode could be localized on the branes and the massive KK resonant modes are the same for both left-and right-handed gravitinos, in spite of their opposite chiralities. All these results are consistent with that of the five-dimensional Dirac fermion except their chiralities, which may be an important sign to distinguish the gravitino field and the Dirac fermion field. a [email protected] b | 10.1007/s11433-018-9246-2 | [
"https://arxiv.org/pdf/1703.10805v2.pdf"
] | 118,857,875 | 1703.10805 | 3730daff6c9ba7512d4af18451058b96da636d4b |
31 Mar 2017
Xiang-Nan Zhou
College of Physics and Information Engineering
Shanxi Normal University
041004LinfenPeople's Republic of China
Yun-Zhi Du
Institute of Theoretical Physics
Datong University
037009DatongPeople's Republic of China
Hao Yu
Institute of Theoretical Physics
Lanzhou University
730000LanzhouPeople's Republic of China
Yu-Xiao Liu
Institute of Theoretical Physics
Lanzhou University
730000LanzhouPeople's Republic of China
Localization of Gravitino Field on f(R) Thick Branes
In this paper, we consider the localization of a five-dimensional gravitino field on f (R) thick branes. We get the coupled chiral equations of the Kaluza-Klein (KK) modes of gravitino by choosing the gauge condition Ψz = 0. It is found that the chiral equations of the gravitino KK modes are almost the same as the ones of the Dirac fermion. However, their chiralities are precisely opposite. The chiral KK modes of gravitino could be localized on some kinds of f (R) thick branes if a coupling term is introduced. We investigate the localization of gravitino on three kinds of f (R) thick branes through a Yukawa-like coupling term with background scalar fields. It has been shown that all the KK modes of gravitino can not be localized on the pure geometric f (R) thick branes by adding a five-dimensional gravitino mass term. However, for the f (R) thick branes generated by one or two background scalar fields, only the left-or right-handed zero mode could be localized on the branes and the massive KK resonant modes are the same for both left-and right-handed gravitinos, in spite of their opposite chiralities. All these results are consistent with that of the five-dimensional Dirac fermion except their chiralities, which may be an important sign to distinguish the gravitino field and the Dirac fermion field. a [email protected] b
I. INTRODUCTION
The extra dimensional theory has attracted more and more attention even though the visible world is a four-dimensional spacetime [1][2][3][4][5][6][7][8][9][10][11][12][13]. Some classical physical problems, including the gauge hierarchy problem (the huge difference between the Planck scale and the weak scale) [4,5,14,15] and the cosmological problem [8,12,13,16,17], could be solved by utilizing extra dimensions. In the 1920s, the Kaluza-Klein (KK) theory was proposed to unify Einstein's gravity and electromagnetism by introducing a compact extra spatial dimension of Planck size [18,19]. Several decades later, Akama, Rubakov, and Shaposhnikov proposed the idea of a domain wall braneworld with an infinite extra dimension in five-dimensional flat spacetime [1,2]. In 1998, Antoniadis, Arkani-Hamed, and collaborators introduced the famous model with large extra dimensions that attempts to solve the hierarchy problem [3,4]. One year later, Randall and Sundrum (RS) suggested that an extra dimension with warped geometry could be finite or infinite, corresponding to the RSI [5] or RSII [6] thin braneworld model. In both braneworld scenarios, our visible four-dimensional world is a brane without thickness along the extra dimension, and the matter fields of the Standard Model (SM) are confined on the brane while only gravity propagates in the five-dimensional bulk spacetime. Subsequently, more realistic thick branes generated dynamically by matter fields or pure gravity were introduced [20][21][22][23][24][25][26][27][28][29]. In these models, there exists a non-vanishing distribution of energy density along the extra dimension.
The braneworld scenario with warped infinite extra dimensions requires a natural physical mechanism to trap the matter fields on the branes in order not to conflict with current experiments. Thus, it is significant to investigate the localization of the matter fields on various kinds of branes [30][31][32][33][34][35][36][37][38][39][40][41][42][43][44][45][46]. In order to rebuild the SM on the branes, the zero modes of these matter fields (the four-dimensional massless particles) should be localized on the branes. At the same time, the localization of massive KK modes is crucial, since it provides a way to explore extra dimensions. For example, we may observe some physical effects of these KK particles interacting with the SM particles at the Large Hadron Collider (LHC) [47][48][49][50]. In some braneworld models, there are no bounded massive KK modes for some matter fields, while there may be some resonant KK modes quasi-localized on the branes. These massive resonant KK modes may stay on the branes for a long time and interact with other particles, which could provide us with opportunities to find the massive resonant KK modes and probe the existence of extra dimensions [39,42,45,51,52,[54][55][56].
The gravitino is the gauge fermion supersymmetric partner of the graviton in supersymmetry theory. It has been suggested as a candidate for dark matter in cosmology [58][59][60][61][62]. It is a fermion of spin 3/2 and obeys the Rarita-Schwinger equation. The mass of a light gravitino is usually considered to be around 1 eV [58], but there are still some challenges to determining its mass [61]. Its mass has been widely investigated in models of hot and cold dark matter [58,60], and the possibility of finding light gravitinos at the LHC was discussed in Ref. [63]. The behavior of the gravitino around a black hole also attracts attention [64][65][66][67]. Besides, the gravitino is a kind of matter field beyond the SM with many special properties that the SM matter fields do not possess. Therefore, the localization of a five-dimensional gravitino field on a brane is very interesting and gives us a new perspective for investigating the gravitino. Compared with the matter fields of the SM, such as the scalar and fermion fields, the works on the gravitino are few and not comprehensive [37,46,[68][69][70][71][72][73]. The zero mode of a five-dimensional free gravitino can be localized on an RS-like brane only when a bulk mass term is introduced [69]. In a D-dimensional spacetime with D ≥ 5, the zero mode of the gravitino with a coupling term can be localized on the brane, and the localization property is similar to that of the Dirac fermion [37,71]. In addition, the behavior of the gravitino KK modes with coupling terms was investigated in Ref. [72]. Recently, the localization and mass spectrum of the gravitino KK modes on two kinds of thin branes (the RS branes and the scalar-tensor branes) were investigated in Ref. [46]. It should be noticed that most of these investigations focused on RS-like thin branes.
In this paper, we pay attention to the localization of a five-dimensional gravitino field on f(R) thick branes. Although general relativity is a very successful theory, its non-renormalizability motivates the investigation of modified gravity theories, particularly gravity theories including higher-order curvature terms [74]. f(R) gravity is a kind of modified gravity whose Lagrangian is a function of the scalar curvature R. It always contains higher-order curvature invariants, which could make the theory renormalizable [74]. Furthermore, f(R) gravity could be used to explain dark energy or dark matter and to answer astrophysical and cosmological riddles. Therefore, it has been studied widely in cosmology and braneworld physics [25,29,[74][75][76][77][78][79][80][81][82][83][84][85][86]. In Ref. [29], the authors investigated various kinds of f(R)-branes and gave their general solutions. All of these solutions are also appropriate for the general relativity braneworlds, i.e., f(R) = R.
In this paper, we investigate the localization of a five-dimensional gravitino field on the f(R) thick branes whose solutions were given in Ref. [29]. The conclusions on the localization of the gravitino have a certain universality, because they also apply to the general relativity braneworlds. We believe this will give us some interesting results related to the structure of thick branes that thin branes do not have. Our work is organized as follows. In Sec. II, we consider the localization of a five-dimensional free massless gravitino field on a thick brane. We introduce the gauge condition Ψ_z = 0 and get the Schrödinger-like equations of the gravitino KK modes. Then we focus on the localization of a five-dimensional gravitino field with a coupling term on a thick brane in Sec. III.
Three kinds of f (R) thick branes are considered and the massive KK resonances are studied. Finally, discussion and conclusion are given in Sec. IV.
II. LOCALIZATION OF FREE GRAVITINO FIELD ON THICK BRANES
Firstly, we consider the localization of a free massless gravitino field on a thick brane in a five-dimensional spacetime. As usual, the five-dimensional line element is assumed to be

ds^2 = g_{MN} dx^M dx^N = e^{2A(y)} ĝ_{μν}(x) dx^μ dx^ν + dy^2.    (1)

Here, M and N denote the curved five-dimensional spacetime indices, ĝ_{μν} is the metric on the brane, and the warp factor e^{2A(y)} is a function only of the extra dimension y. For convenience, the coordinate transformation

dz = e^{-A(y)} dy    (2)

can be performed to bring the metric (1) into the conformally flat form

ds^2 = e^{2A(z)} (ĝ_{μν} dx^μ dx^ν + dz^2).    (3)
The action of a free massless gravitino field Ψ in five-dimensional spacetime is given by [37,46,71]

S_{3/2} = ∫ d^5x √(-g) Ψ̄_M Γ^{[M} Γ^N Γ^{R]} D_N Ψ_R,    (4)

and the corresponding equations of motion read

Γ^{[M} Γ^N Γ^{R]} D_N Ψ_R = 0.    (5)

The Dirac gamma matrices Γ^M in the curved five-dimensional spacetime satisfy Γ^M = e^M{}_{\bar M} Γ^{\bar M}, where Γ^{\bar M} are the gamma matrices in flat five-dimensional spacetime with {Γ^{\bar M}, Γ^{\bar N}} = 2η^{\bar M \bar N}, and \bar M, \bar N represent the five-dimensional local Lorentz indices. The vielbein satisfies g_{MN} = e_M{}^{\bar M} e_N{}^{\bar N} η_{\bar M \bar N} and is fixed by the metric (3); using the relations e_{M\bar M} = g_{MN} e^N{}_{\bar M} and e^{M\bar M} = g^{MN} e_N{}^{\bar M}, we can get the explicit form of the gamma matrices.
Thus Γ^M = e^{-A} (ê^μ{}_{\bar μ} γ^{\bar μ}, γ^5) = e^{-A} (γ^μ, γ^5), where γ^μ = ê^μ{}_{\bar μ} γ^{\bar μ}, and γ^{\bar μ}, γ^5 are the flat gamma matrices in the four-dimensional Dirac representation. In this paper, we choose the following representation for the four-dimensional flat gamma matrices (written row by row):

γ^0 = [[0, -iI], [-iI, 0]],  γ^i = [[0, iσ^i], [-iσ^i, 0]],  γ^5 = [[I, 0], [0, -I]].    (8)

Here I is the two-by-two unit matrix and σ^i are the Pauli matrices. In this paper, we only consider flat thick branes, i.e., ĝ_{μν} = η_{μν}, so we have ê^μ{}_{\bar μ} = δ^μ{}_{\bar μ} and γ^μ = γ^{\bar μ}. In addition, the covariant derivative of the gravitino field is defined by

D_N Ψ_R = ∂_N Ψ_R - Γ^M_{NR} Ψ_M + ω_N Ψ_R,    (9)
where the spin connection ω_N is defined by ω_N = (1/4) ω_N^{\bar N \bar L} Γ_{\bar N} Γ_{\bar L}, and ω_N^{\bar N \bar L} is given by

ω_N^{\bar N \bar L} = (1/2) e^{M\bar N} (∂_N e^{\bar L}_M - ∂_M e^{\bar L}_N) - (1/2) e^{M\bar L} (∂_N e^{\bar N}_M - ∂_M e^{\bar N}_N) - (1/2) e^{M\bar N} e^{P\bar L} (∂_M e_{P\bar R} - ∂_P e_{M\bar R}) e^{\bar R}_N.    (10)

Thus we get the non-vanishing components of ω_N:

ω_μ = (1/2) (∂_z A) γ_μ γ^5 + ω̂_μ.    (11)

Note that the four-dimensional spin connection ω̂_μ on a flat brane vanishes. The non-vanishing components of D_N Ψ_R are

D_μ Ψ_ν = ∂_μ Ψ_ν - Γ^M_{μν} Ψ_M + ω_μ Ψ_ν = D̂_μ Ψ_ν + (∂_z A) ĝ_{μν} Ψ_z + (1/2)(∂_z A) γ_μ γ^5 Ψ_ν,    (12)
D_μ Ψ_z = ∂_μ Ψ_z - Γ^M_{μz} Ψ_M + ω_μ Ψ_z = ∂_μ Ψ_z - (∂_z A) Ψ_μ + (1/2)(∂_z A) γ_μ γ^5 Ψ_z + ω̂_μ Ψ_z,    (13)
D_z Ψ_μ = ∂_z Ψ_μ - Γ^M_{zμ} Ψ_M + ω_z Ψ_μ = ∂_z Ψ_μ - (∂_z A) Ψ_μ,    (14)
D_z Ψ_z = ∂_z Ψ_z - Γ^M_{zz} Ψ_M + ω_z Ψ_z = ∂_z Ψ_z - (∂_z A) Ψ_z.    (15)
Equation (5) includes five equations because M runs over all five spacetime indices. There are two kinds of equations: M = 5 and M = μ. For the first case, M = 5, the equation of motion reads

Γ^{[5} Γ^N Γ^{R]} D_N Ψ_R = Γ^{[5} Γ^μ Γ^{ν]} D_μ Ψ_ν = ([Γ^μ, Γ^ν] - g^{μν}) Γ^5 [ D̂_μ Ψ_ν + (∂_z A) ĝ_{μν} Ψ_z + (1/2)(∂_z A) γ_μ γ^5 Ψ_ν ] = 0.    (16)
In this paper, for convenience, we choose the gauge condition Ψ_z = 0, with which we introduce the KK decomposition

Ψ_μ = Σ_n ψ^{(n)}_μ(x) ξ_n(z),    (17)

where ψ^{(n)}_μ(x) is a four-dimensional gravitino field. Then Eq. (16) is reduced to

([γ^μ, γ^ν] - ĝ^{μν}) γ^5 [ D̂_μ ψ^{(n)}_ν + (1/2)(∂_z A) γ_μ γ^5 ψ^{(n)}_ν ] = 0.    (18)
For a four-dimensional massive gravitino field ψ_μ, the following four equations should be satisfied [59]:

γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ_ν - m_{3/2} [γ^λ, γ^μ] ψ_μ = 0,    (19a)
γ^μ ψ_μ = 0,    (19b)
D̂^μ ψ_μ = 0,    (19c)
(γ^μ D̂_μ + m_{3/2}) ψ_ν = 0.    (19d)
Here, m_{3/2} is the mass of the four-dimensional gravitino field ψ_μ. Thus the left-hand side of Eq. (18) always vanishes for a four-dimensional gravitino field ψ^{(n)}_μ satisfying the above equations (19). On the other hand, when we choose the gauge condition Ψ_z = 0, the part Γ^{[5} Γ^N Γ^{R]} D_N Ψ_R in the five-dimensional gravitino action (4) gives no contribution, so Eq. (16) can be ignored. Then we focus on the case M = μ, for which the equations of motion are

Γ^{[λ} Γ^N Γ^{L]} D_N Ψ_L = Γ^{[λ} Γ^μ Γ^{ν]} D_μ Ψ_ν + Γ^{[λ} Γ^ν Γ^{5]} D_ν Ψ_z + Γ^{[λ} Γ^5 Γ^{ν]} D_z Ψ_ν
 = e^{-3A} γ^{[λ} γ^μ γ^{ν]} D̂_μ Ψ_ν - e^{-3A} [γ^λ, γ^ν] γ^5 (∂_z A + ∂_z) Ψ_ν = 0,    (20)

where we have used the gauge condition Ψ_z = 0. When we introduce the decomposition (17) and consider the zero mode, which corresponds to the four-dimensional massless gravitino satisfying γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(0)}_ν = 0, we get the equation of motion for the extra-dimensional configuration ξ_0(z):

γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(0)}_ν(x) ξ_0(z) - [γ^λ, γ^ν] γ^5 ψ^{(0)}_ν(x) (∂_z A + ∂_z) ξ_0(z) = -(∂_z A + ∂_z) ξ_0(z) = 0.    (21)
Obviously, the solution is

ξ_0(z) = C e^{-A(z)},    (22)

where C is a normalization constant. Substituting the zero mode ξ_0(z) into the gravitino action (4) yields

S^{(0)}_{3/2} = I_0 ∫ d^4x √(-ĝ) ψ̄^{(0)}_λ γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(0)}_ν(x),    (23)

where I_0 ≡ ∫ dz e^{2A} ξ_0^2(z) = C^2 ∫ dz = C^2 ∫ e^{-A(y)} dy. In order to localize the spin-3/2 gravitino on a brane, the integral I_0 must be finite. So if we consider an RS-type brane model, the zero mode of a five-dimensional free massless gravitino can be localized on the brane only for a finite extra dimension.
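As a quick numerical illustration of this statement (our own sketch, not part of the original derivation), one can truncate the integral I_0 at a finite half-width L and watch it grow without bound for a representative warp factor; here we borrow A(y) = -n ln cosh(ky) from Sec. III A below, and the function name I0_truncated is ours.

```python
# Illustrative sketch (ours): for A(y) = -n*ln(cosh(k*y)) the integrand of I0,
# exp(-A(y)) = cosh(k*y)**n, grows without bound, so I0 diverges on an
# infinite extra dimension.
import numpy as np

def I0_truncated(L, n=1, k=1.0, num=20001):
    """Integrate exp(-A(y)) = cosh(k*y)**n over [-L, L] with the trapezoid rule."""
    y = np.linspace(-L, L, num)
    return np.trapz(np.cosh(k * y) ** n, y)

for L in (5.0, 10.0, 20.0):
    print(L, I0_truncated(L))   # grows roughly like exp(n*k*L): no finite limit
```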
For the massive modes, we need to introduce the following chiral decomposition:

Ψ_μ(x, z) = Σ_n [ ψ^{(n)}_{Lμ}(x) ξ_{Ln}(z) + ψ^{(n)}_{Rμ}(x) ξ_{Rn}(z) ] = Σ_n [ (0, ψ̃^{(n)}_{Lμ} ξ_{Ln})^T + (ψ̃^{(n)}_{Rμ} ξ_{Rn}, 0)^T ],    (24)

where ψ̃^{(n)}_{Lμ} and ψ̃^{(n)}_{Rμ} are both two-component spinors. The effect of P_{L,R} = (1/2)(I ∓ γ^5) on the gravitino field Ψ_M is to single out the left- and right-handed parts, respectively, which is equivalent to the relations

γ^5 ψ^{(n)}_{Lμ} = -ψ^{(n)}_{Lμ},  γ^5 ψ^{(n)}_{Rμ} = ψ^{(n)}_{Rμ}.    (25)
Thus, substituting the chiral decomposition (24) into Eq. (20), we have

0 = γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(n)}_{Lν} ξ_{Ln} + γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(n)}_{Rν} ξ_{Rn} + [γ^λ, γ^ν](∂_z A) ψ^{(n)}_{Lν} ξ_{Ln} - [γ^λ, γ^ν](∂_z A) ψ^{(n)}_{Rν} ξ_{Rn} + [γ^λ, γ^ν] ψ^{(n)}_{Lν} ∂_z ξ_{Ln} - [γ^λ, γ^ν] ψ^{(n)}_{Rν} ∂_z ξ_{Rn}.    (26)

Since the product of three gamma matrices is off-diagonal and the product of two gamma matrices is diagonal, two equations can be obtained from the above equation:

γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(n)}_{Lν} ξ_{Ln} - [γ^λ, γ^ν](∂_z A) ψ^{(n)}_{Rν} ξ_{Rn} - [γ^λ, γ^ν] ψ^{(n)}_{Rν} ∂_z ξ_{Rn} = 0,    (27a)
γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(n)}_{Rν} ξ_{Rn} + [γ^λ, γ^ν](∂_z A) ψ^{(n)}_{Lν} ξ_{Ln} + [γ^λ, γ^ν] ψ^{(n)}_{Lν} ∂_z ξ_{Ln} = 0.    (27b)

Through the method of separation of variables and defining a parameter m_n, we have

γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(n)}_{Lν} / ( [γ^λ, γ^α] ψ^{(n)}_{Rα} ) = ( (∂_z A) ξ_{Rn} + ∂_z ξ_{Rn} ) / ξ_{Ln} = m_n,    (28a)
γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(n)}_{Rν} / ( [γ^λ, γ^α] ψ^{(n)}_{Lα} ) = -( (∂_z A) ξ_{Ln} + ∂_z ξ_{Ln} ) / ξ_{Rn} = m_n,    (28b)

i.e.,

γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(n)}_{Lν} = m_n [γ^λ, γ^α] ψ^{(n)}_{Rα},  γ^{[λ} γ^μ γ^{ν]} D̂_μ ψ^{(n)}_{Rν} = m_n [γ^λ, γ^α] ψ^{(n)}_{Lα},    (29)
(∂_z + ∂_z A) ξ_{Rn} = m_n ξ_{Ln},  (∂_z + ∂_z A) ξ_{Ln} = -m_n ξ_{Rn}.    (30)
Equations (29) are the equations satisfied by the four-dimensional chiral gravitino fields, and Eqs. (30) are the coupled equations satisfied by the KK modes ξ_{Ln} and ξ_{Rn}. Performing the field transformations ξ_{Rn}(z) = χ^R_n(z) e^{-A} and ξ_{Ln}(z) = χ^L_n(z) e^{-A}, we obtain the equations for the left- and right-handed KK modes of the gravitino:

∂_z^2 χ^L_n(z) = -m_n^2 χ^L_n(z),    (31a)
∂_z^2 χ^R_n(z) = -m_n^2 χ^R_n(z).    (31b)

When the following orthonormality conditions are imposed,

∫ χ^L_m(z) χ^R_n(z) dz = δ_{RL} δ_{mn},    (32)

the effective action of the four-dimensional massless and massive gravitinos is obtained:

S^m_{3/2} = Σ_n ∫ d^4x [ ψ̄^{(n)}_{Lλ}(x) γ^{[λ} γ^μ γ^{ν]} ∂_μ ψ^{(n)}_{Lν}(x) - m_n ψ̄^{(n)}_{Lλ}(x) [γ^λ, γ^μ] ψ^{(n)}_{Rμ}(x) + ψ̄^{(n)}_{Rλ}(x) γ^{[λ} γ^μ γ^{ν]} ∂_μ ψ^{(n)}_{Rν}(x) - m_n ψ̄^{(n)}_{Rλ}(x) [γ^λ, γ^μ] ψ^{(n)}_{Lμ}(x) ]
       = Σ_n ∫ d^4x [ ψ̄^{(n)}_λ(x) γ^{[λ} γ^μ γ^{ν]} ∂_μ ψ^{(n)}_ν(x) - m_n ψ̄^{(n)}_λ(x) [γ^λ, γ^μ] ψ^{(n)}_μ(x) ].    (33)

However, the solutions of Eqs. (31a) and (31b) are simply plane waves, which are not normalizable. Thus the four-dimensional massive gravitinos cannot be localized. This conclusion is the same as for the Dirac fermion.
III. LOCALIZATION OF GRAVITINO FIELD WITH COUPLING TERM ON THICK BRANES
As pointed out in the previous section, the massive KK modes of a five-dimensional free massless gravitino field cannot be localized on RS-type thick branes. Therefore, it is necessary to introduce a coupling term, as in the case of the Dirac field. In the thin brane scenario [46], one usually introduces an additional mass term associated with the warp factor of the thin brane. In the scenario of a thick brane generated by one or more background scalar fields, we can introduce a coupling term between the background scalar fields and the gravitino field. We consider the simplest coupling, i.e., a Yukawa-like coupling, for which the action of a five-dimensional gravitino field is

S_{3/2} = ∫ d^5x √(-g) ( Ψ̄_M Γ^{[M} Γ^N Γ^{R]} D_N Ψ_R - η F(φ) Ψ̄_M [Γ^M, Γ^N] Ψ_N ).    (34)
Here, F(φ) is a function of the background scalar field φ and η is the coupling constant. The equations of motion derived from the above action are

Γ^{[M} Γ^N Γ^{R]} D_N Ψ_R - η F(φ) [Γ^M, Γ^N] Ψ_N = 0.    (35)
By using the gauge condition Ψ_z = 0 and introducing the chiral decomposition

Ψ_μ(x, z) = Σ_n e^{-A(z)} [ ψ^{(n)}_{Lμ}(x) χ^L_n(z) + ψ^{(n)}_{Rμ}(x) χ^R_n(z) ],    (36)

we obtain the following first-order coupled equations:

(∂_z - η e^A F(φ)) χ^L_n(z) = -m_n χ^R_n(z),    (37a)
(∂_z + η e^A F(φ)) χ^R_n(z) = m_n χ^L_n(z).    (37b)

From the above equations (37), the left- and right-handed KK modes of the gravitino field satisfy the following Schrödinger-like equations:

(-∂_z^2 + V_L(z)) χ^L_n(z) = m_n^2 χ^L_n(z),    (38a)
(-∂_z^2 + V_R(z)) χ^R_n(z) = m_n^2 χ^R_n(z),    (38b)

where the effective potentials are given by

V_L(z) = (η e^A F(φ))^2 + η ∂_z(e^A F(φ)),    (39a)
V_R(z) = (η e^A F(φ))^2 - η ∂_z(e^A F(φ)).    (39b)
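The following minimal sketch (ours, not the authors' numerical scheme; the function name kk_spectrum and the Dirichlet box boundaries are our choices) illustrates one standard way to treat Eqs. (38)-(39) once A(z) and F(φ(z)) are known on a uniform grid: assemble the potentials and diagonalize the discretized operator -d^2/dz^2 + V.

```python
# Sketch (ours): finite-difference estimate of the lowest KK eigenvalues of Eqs. (38).
import numpy as np
from scipy.linalg import eigh_tridiagonal

def kk_spectrum(z, A, F, eta, n_modes=5):
    """Lowest m_n^2 for data A(z), F(phi(z)) sampled on a uniform grid z."""
    h = z[1] - z[0]
    W = eta * np.exp(A) * F                      # the combination eta * e^A * F(phi)
    dW = np.gradient(W, z)
    spectra = {}
    for name, V in (("L", W**2 + dW), ("R", W**2 - dW)):   # Eq. (39a), (39b)
        diag = 2.0 / h**2 + V[1:-1]              # Dirichlet boundaries at the box ends
        off = -np.ones(len(diag) - 1) / h**2
        spectra[name] = eigh_tridiagonal(diag, off, eigvals_only=True)[:n_modes]
    return spectra
```

The box must be chosen large enough that the bound or quasi-bound modes of interest decay well before its edges.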
For a five-dimensional free gravitino, we have obtained the effective action (33) of the four-dimensional left- and right-handed gravitinos. It is interesting that the forms of the equations (38a) and (38b) for the left- and right-handed KK gravitinos are the same as those for the KK modes of a Dirac field; the only difference is their chiralities. For a given background solution of a thick brane, if the function F(φ) and the coupling parameter η are the same, it seems that the mass spectrum of the KK gravitinos will be the same as that of the Dirac field. Here, we should note the difference of chiralities, which will give an interesting result. Next we first review some kinds of f(R) thick branes [29,86], and then investigate the localization of the five-dimensional gravitino on these branes and give their KK mass spectra.
In the five-dimensional spacetime, the action of a general f(R) thick brane model reads [29]

S = ∫ d^5x √(-g) [ (1/(2κ_5^2)) f(R) + L(φ^i, X^i) ],    (40)

where κ_5^2 ≡ 8πG_5 is the five-dimensional gravitational constant (set to one for convenience), f(R) is a function of the scalar curvature R, and L(φ^i, X^i) is the Lagrangian density of the background scalar fields φ^i with the kinetic terms X^i = -(1/2) g^{MN} ∂_M φ^i ∂_N φ^i.
It is predictable that the spectra of the KK modes of the gravitino field on these f(R) thick branes will be almost the same as those of the Dirac field except for their chiralities. These results could provide an important reference for future experiments on extra dimensions and the gravitino.
A. Localization of gravitino field on the pure geometric f(R) thick branes without background scalar field

Firstly, we focus on the localization of the gravitino field on the pure geometric f(R) thick branes. In Ref. [86], the authors investigated the pure geometric f(R) thick branes, for which the Lagrangian density L(φ^i, X^i) of the background scalar fields vanishes. For the flat pure geometric f(R) thick branes, the background metric is given by (1) with ĝ_{μν} = η_{μν}. The solution of the warp factor A(y) is [86]

A(y) = -n ln(cosh(ky)),    (41)

where k is a positive real parameter related to the curvature of the five-dimensional spacetime and n is a positive integer. The solutions of the function f(R) for n = 1 and n = 20 are respectively [86]

f(R) = (1/7)(6k^2 + R) cosh(α(w(R))) - (2/7) k^2 √(480 - 36R/k^2 - 3R^2/k^4) sinh(α(w(R))),  (n = 1)    (42)
f(R) = -(377600/7803) k^2 + (4196/2601) R - (83/(41616 k^2)) R^2 + (13/(39951360 k^4)) R^3,  (n = 20)    (43)

where α(w) = 2√3 arctan(tanh(w/2)) and w(R) = ± arcsech[ √(20n^2 + R/k^2) / √(8n + 20n^2) ]. For arbitrary n, the function f(R) has no unified expression, and it is hard to get an analytical y(z) from the following relation z(y), calculated from the solution (41):

z(y) = - cosh^{n+1}(ky) sinh(ky) ₂F₁(1/2, (n+1)/2, (n+3)/2, cosh^2(ky)) / [ (n+1) k √(-sinh^2(ky)) ].    (44)
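Since only the differential relation dz = e^{-A(y)} dy is needed, the coordinate change can also be carried out by simple quadrature when the closed form (44) is inconvenient. The short sketch below is ours and assumes SciPy's cumulative_trapezoid is available; the function name z_of_y is our choice.

```python
# Sketch (ours): z(y) by numerical quadrature for the warp factor (41).
import numpy as np
from scipy.integrate import cumulative_trapezoid

def z_of_y(y, n=1, k=1.0):
    """z(y) = int_0^y exp(-A) dybar with A = -n ln cosh(k ybar) and z(0) = 0."""
    return cumulative_trapezoid(np.cosh(k * y) ** n, y, initial=0.0)

y = np.linspace(0.0, 5.0, 2001)
z = z_of_y(y)            # monotonically increasing, so y(z) follows by interpolation
```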
Since there is no background scalar field in the pure geometric brane model, we may take ηF to be the five-dimensional mass M of the gravitino field. Then the effective potentials V_L and V_R can be expressed in terms of the extra dimension y as

V_L(z(y)) = sech^{2n}(ky) ( M^2 - nkM tanh(ky) ),    (45)
V_R(z(y)) = sech^{2n}(ky) ( M^2 + nkM tanh(ky) ).    (46)
It is easy to see that both potentials are asymmetric, and their asymptotic behaviors are

V_L(0) = M^2,  V_L(±∞) = e^{2A(±∞)} (M^2 ∓ Mkn) = 0,    (47)
V_R(0) = M^2,  V_R(±∞) = e^{2A(±∞)} (M^2 ± Mkn) = 0,    (48)

which indicates that there is no bound massive KK mode. The solutions for the left- and right-handed zero modes of the gravitino field are χ^{L,R}_0 ∝ e^{±My}. It is clear that both zero modes are not normalizable, and hence they cannot be localized on the pure geometric f(R) thick branes.
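For concreteness, the sketch below (ours; parameter values chosen only for illustration) evaluates Eqs. (45)-(46) on a grid and shows that both potentials equal M^2 at the origin and vanish far from the brane, in agreement with Eqs. (47)-(48).

```python
# Sketch (ours): the pure geometric potentials (45)-(46) for an illustrative M.
import numpy as np

def V_pure_geometric(y, M, n=1, k=1.0):
    sech = 1.0 / np.cosh(k * y)
    VL = sech ** (2 * n) * (M**2 - n * k * M * np.tanh(k * y))
    VR = sech ** (2 * n) * (M**2 + n * k * M * np.tanh(k * y))
    return VL, VR

y = np.linspace(-10.0, 10.0, 2001)
VL, VR = V_pure_geometric(y, M=1.0)
print(VL[0], VL[len(y) // 2], VL[-1])   # ~0 far from the brane, M^2 at the origin
```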
B. Localization of gravitino field on the f(R) thick branes with L = X - V(φ)

Now let us consider the thick f(R) branes generated by one background scalar field. For the Lagrangian density L = X - V(φ) = -(1/2) ∂_M φ ∂^M φ - V(φ), the solution in this model with the Sine-Gordon potential is given by [29]

f(R) = R̃ + α { (24b^2 + 2R̃ + 2bR̃)/(2 + 5b) [ P^{b/2}_{K₋}(Ξ) - β Q^{b/2}_{K₋}(Ξ) ] - 4(b^2 - 2bK₊) Ξ [ P^{b/2}_{K₊}(Ξ) - Ξ P^{b/2}_{K₋}(Ξ) + β Ξ ( Q^{b/2}_{K₋}(Ξ) - Q^{b/2}_{K₊}(Ξ) ) ] } Θ^{b/2},    (49a)
V(φ) = (3bk^2/8) [ (1 - 4b) + (1 + 4b) cos( √(8/(3b)) φ ) ],    (49b)
φ(y) = √(6b) arctan( tanh(ky/2) ),    (49c)
A(y) = -b ln( cosh(ky) ),    (49d)

where b and k are positive parameters related to the thickness of the brane, α is an arbitrary constant, R̃ ≡ R/k^2, K_± ≡ (1/2)√((b - 14)b + 1) ± 1/2, Ξ = √(1 - Θ^2), Θ ≡ √(20b^2 + R̃) / (2√(2b + 5b^2)), P and Q are the first and second kinds of Legendre functions, and β = P^{b/2}_{K₊}(0)/Q^{b/2}_{K₊}(0). Note that the solution (49b)-(49d) is also appropriate for the case f(R) = R, so the following results also apply to the general relativity thick brane. As shown in the above subsection, it is very difficult to obtain an analytical y(z); therefore, in the following we solve the equations numerically. The effective potentials V_L and V_R in the physical coordinate y become

V_L(z(y)) = (η e^A F(φ))^2 + η e^{2A} ∂_y F(φ) + η (∂_y A) e^{2A} F(φ),    (50a)
V_R(z(y)) = V_L(z(y))|_{η → -η}.    (50b)
It is obvious that for different forms of F (φ), the potentials V L and V R have different expressions, which determine the mass spectra of the KK modes. In this paper, we would like to consider one kind of Yukawa coupling, i.e., F (φ) = φ α with positive integer α. For a kink configuration of the scalar φ, since V L and V R are demanded to be symmetrical with respect to extra dimension y, α should be odd. Next we consider two cases: the simplest case F (φ) = φ and the case for α > 1.
1. Case I: F(φ) = φ

For the case of F(φ) = φ, the effective potentials (50) read

V_L(y) = (1/2) cosh^{-1-2b}(ky) [ 12 b η^2 arctan^2(tanh(ky/2)) cosh(ky) + η √(6b) k ( 1 - 2b arctan(tanh(ky/2)) sinh(ky) ) ],    (51a)
V_R(y) = V_L(y)|_{η → -η},    (51b)

which are symmetric. The values of the potentials at the origin and at infinity are given by

V_R(0) = -ηk √(3b/2) = -V_L(0),    (52)
V_R(±∞) = 0 = V_L(±∞).    (53)
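As a cross-check of Eqs. (51)-(52), the following sketch (ours; the parameter values b = k = 1 and η = 2 are chosen only for illustration) evaluates V_{L,R}(y) on a grid and compares V_R(0) with -ηk√(3b/2).

```python
# Sketch (ours): numerical evaluation of Eq. (51) and check of Eq. (52).
import numpy as np

def V_case1(y, sign=+1, b=1.0, k=1.0, eta=2.0):
    """Eq. (51): sign = +1 gives V_L, sign = -1 gives V_R (i.e. eta -> -eta)."""
    t = np.arctan(np.tanh(k * y / 2.0))
    bracket = 12.0 * b * eta**2 * t**2 * np.cosh(k * y) \
              + sign * eta * np.sqrt(6.0 * b) * k * (1.0 - 2.0 * b * t * np.sinh(k * y))
    return 0.5 * np.cosh(k * y) ** (-1.0 - 2.0 * b) * bracket

y = np.linspace(-10.0, 10.0, 4001)
VL, VR = V_case1(y, +1), V_case1(y, -1)
print(VR[2000], -2.0 * 1.0 * np.sqrt(3.0 * 1.0 / 2.0))   # V_R(0) vs -eta*k*sqrt(3b/2), both ~ -2.449
```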
It is clear that both potentials have the same asymptotic behaviors as y → ±∞, while their values at y = 0 are opposite. Thus only the left-or right-handed gravitino zero mode (four-dimensional massless left-or right-handed gravitino) could be localized on the f (R) thick brane. The shapes of the potentials (51) are shown in Fig. 1, from which it can be seen that for any positive b, k and η, V R (z(y)) is a volcano type of potential and there may exist a localized zero mode and a continuous gapless spectrum of massive KK modes. Furthermore, the depth of the potential V R increases with values of the parameters η, b and k. By solving Eq. (38b) with the potential (51b), the zero mode of the right-handed gravitino becomes
χ^R_0(z) ∝ exp( -η ∫_0^z e^{A(z̄)} F(φ) dz̄ ) = exp( -η ∫_0^y φ(ȳ) dȳ ) = exp( -η ∫_0^y √(6b) arctan(tanh(kȳ/2)) dȳ ),    (54)

and its normalization condition

∫_{-∞}^{∞} (χ^R_0(z))^2 dz = ∫_{-∞}^{∞} (χ^R_0(y))^2 e^{-A(y)} dy ∝ ∫_{-∞}^{∞} exp( -A(y) - 2η ∫_0^y φ(ȳ) dȳ ) dy
 = ∫_{-∞}^{∞} exp( b ln(cosh(ky)) - 2η ∫_0^y √(6b) arctan(tanh(kȳ/2)) dȳ ) dy < ∞    (55)

is equivalent to

∫_0^{∞} exp( kby - (πη/2) √(6b) y ) dy < ∞,    (56)

since -A(y) → kby and arctan(tanh(ky/2)) → π/4 as y → ∞. The above normalization condition (56) requires

η > η_0 ≡ (k/π) √(2b/3).    (57)
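A hedged numerical illustration (ours) of the condition (57): truncating the norm integral (55) at a finite cutoff L shows saturation for η > η_0 and unbounded growth for η < η_0. The function name truncated_norm and the cutoff values are our choices.

```python
# Sketch (ours): truncated version of the integral (55) for the zero mode (54).
import numpy as np
from scipy.integrate import cumulative_trapezoid

def truncated_norm(eta, L, b=1.0, k=1.0, num=40001):
    y = np.linspace(0.0, L, num)
    phi = np.sqrt(6.0 * b) * np.arctan(np.tanh(k * y / 2.0))
    phi_int = cumulative_trapezoid(phi, y, initial=0.0)      # int_0^y phi(ybar) dybar
    integrand = np.exp(b * np.log(np.cosh(k * y)) - 2.0 * eta * phi_int)
    return 2.0 * np.trapz(integrand, y)

eta0 = (1.0 / np.pi) * np.sqrt(2.0 / 3.0)   # Eq. (57) for b = k = 1
for eta in (0.5 * eta0, 2.0 * eta0):
    print(round(eta, 3), truncated_norm(eta, 30.0), truncated_norm(eta, 60.0))
# below eta_0 the truncated integral keeps growing with L; above eta_0 it saturates.
```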
Thus, if the coupling constant is strong enough (η > η 0 ), the right-handed zero mode can be localized on the brane. It is not difficult to check that the left-handed zero mode can not be localized on the brane under the condition (57). On the other hand, the potential V L (z(y)) for positive η is always positive and vanishes far away from the brane. This type of potential cannot trap any bound state, and hence there is no left-handed gravitino zero mode. The structure of the potential V L is determined by the parameters k, b and η. For given k and b, the potential V L has a barrier for a small η. When η increases, there will be a quasi-potential well and the depth of the well will increase with the value of η. However, for given η and k (or b), the height of the potential V L increases with b (or k) and the quasi-potential well changes into a barrier as the growth of b (or k). The behavior of V L around the point y = 0 is similar to that of the function y 4 and there will be three extreme points if a quasi-potential well exists around the point y = 0. Doing third-order Taylor series expansion of ∂ y V L near the point y = 0, we will get
∂_y V_L = (1/2) k^2 η [ 6bη - √(6b) k (1 + 4b) ] y + (1/12) k^4 η [ √(6b) k (1 + 2b)(5 + 18b) - 24bη (1 + 3b) ] y^3 + O(y^5).    (58)

For k = 1 and b > 1/(2√3), the above function has three roots, and there is a quasi-potential well when η > (1/6) √((6 + 48b + 96b^2)/b) (which equals 2.04124 when b = 1).
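As a quick arithmetic check (ours) of the quoted threshold value:

```python
# Check (ours): (1/6)*sqrt((6 + 48b + 96b^2)/b) at b = 1 gives ~2.04124.
import numpy as np
b = 1.0
print((1.0 / 6.0) * np.sqrt((6.0 + 48.0 * b + 96.0 * b**2) / b))   # 2.041241...
```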
For the case that there is a quasi-potential well for V L , we could find resonance states of the gravitino, which are the massive four-dimensional gravitinos with finite lifetimes on the brane. To investigate the gravitino resonant modes, we give the definition of the relative probability by following Ref. [39]:
P_{L,R}(m^2) = ∫_{-z_b}^{z_b} |χ_{L,R}(z)|^2 dz / ∫_{-z_max}^{z_max} |χ_{L,R}(z)|^2 dz,    (59)

where 2z_b is approximately the width of the brane and z_max = 10 z_b. The left- and right-handed wave functions χ_{L,R}(z) are the solutions of Eqs. (38). The above definition can be understood by noting that |χ_{L,R}(z)|^2 is the probability density [39,51]. There exists a resonant mode with mass m_n if the relative probability P(m^2) has a peak around m = m_n. These peaks should have a full width at half maximum, and the number of such peaks equals the number of resonant modes. In order to solve Eqs. (38), we need two additional types of initial conditions:

χ^{L,R}_{even}(0) = 1,  ∂_z χ^{L,R}_{even}(0) = 0;    (60a)
χ^{L,R}_{odd}(0) = 0,  ∂_z χ^{L,R}_{odd}(0) = 1,    (60b)
where χ^{L,R}_{even} and χ^{L,R}_{odd} correspond to the even- and odd-parity modes of χ^{L,R}(z), respectively. Our results are shown in Figs. 2 and 3 and Tab. I. It is obvious that the mass spectra of the left- and right-handed gravitino resonant modes are almost the same, while their parities are opposite. The first resonant mode of the left-handed gravitino is even, and its shape around z = 0 looks like a ground state. On the other hand, the first resonant mode of the right-handed gravitino is odd, and it looks like a first excited state. These results are reasonable because the effective potentials V_L and V_R are supersymmetric partners, which give the same spectra of resonant modes. In fact, fermion resonances on branes have similar properties, because Eqs. (38) for the KK modes of a gravitino are almost the same as those of a fermion. However, there is a difference between them, which is explained as follows. For a five-dimensional Dirac fermion field with a coupling term, if we use the representation of the gamma matrices (8) and the parity relation (25), the equations of motion of the left- and right-handed fermion KK modes f_{L,R} are given by
(-∂_z^2 + V_L(z)) f_L = m^2 f_L,    (61a)
(-∂_z^2 + V_R(z)) f_R = m^2 f_R,    (61b)

with the effective potentials

V_L(z) = η^2 e^{2A} F^2(φ) - η e^A ∂_z F(φ) - η e^A (∂_z A) F(φ),    (62a)
V_R(z) = η^2 e^{2A} F^2(φ) + η e^A ∂_z F(φ) + η e^A (∂_z A) F(φ).    (62b)
It is obvious that the Schrödinger-like equation of the left-handed gravitino KK modes (38a) is that of the right-handed fermion KK modes (61b), and the Schrödinger-like equation of the right-handed gravitino KK modes (38b) is that of the left-handed fermion KK modes (61a). Therefore, for a five-dimensional Dirac fermion, only the zero mode of the left-handed fermion can be localized on the f(R) brane with the coupling F(φ) = φ, and the first resonant mode of the right-handed fermion is even. This difference between the fermion and gravitino KK modes comes from the difference of their field equations. For a five-dimensional Dirac fermion field with the Yukawa coupling, the field equation reads

[ γ^μ ∂_μ + γ^5 (∂_z + 2∂_z A) - η e^A F(φ) ] Ψ = 0.    (63)
It should be noticed that the sign in front of γ^5 is plus, while for a bulk gravitino Eq. (20) tells us that the sign in front of γ^5 is minus, which leads to the swap of the above results. This difference is very meaningful, and it could be a signature of the distinction between the Dirac fermion and gravitino fields. In addition, the number of resonant modes of the gravitino field increases with the coupling constant η but decreases with the parameter b. The relative probability P decreases when the mass of the resonant mode approaches the maximum of the potentials. Furthermore, the resonant modes become closer and closer as m^2 approaches the maximum of the potentials. These results are consistent with those of the Dirac fermion.
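The resonance search described by Eqs. (59)-(60) can be sketched as follows (our illustration, not the authors' code): integrate the Schrödinger-like equation outward from z = 0 with even or odd initial data for each trial m^2 and record the relative probability; peaks of P(m^2) then signal resonances. The function name, tolerances, and grid sizes are our choices, and the potential V(z) must be supplied, e.g. from Eq. (51).

```python
# Sketch (ours): relative probability P(m^2) of Eq. (59) for a given potential V(z).
import numpy as np
from scipy.integrate import solve_ivp

def relative_probability(V, m2, zb, zmax, parity="even", num=4000):
    """Eq. (59) for a callable V(z) and the initial data (60) of the chosen parity."""
    y0 = [1.0, 0.0] if parity == "even" else [0.0, 1.0]      # Eq. (60)
    rhs = lambda z, u: [u[1], (V(z) - m2) * u[0]]
    sol = solve_ivp(rhs, (0.0, zmax), y0, dense_output=True,
                    rtol=1e-8, atol=1e-10, max_step=zmax / num)
    z = np.linspace(0.0, zmax, num)
    chi2 = sol.sol(z)[0] ** 2        # |chi|^2 is symmetric by parity, so z >= 0 suffices
    return np.trapz(chi2[z <= zb], z[z <= zb]) / np.trapz(chi2, z)

# scanning m^2 and locating the peaks of relative_probability(...) reproduces the method.
```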
2. Case II: F(φ) = φ^α with α > 1
TABLE I. The eigenvalue m^2, mass m, and the relative probability of the left- and right-handed gravitino resonances with odd-parity and even-parity solutions for the coupling F(φ) = φ. In all tables of this paper, C and P stand for chirality and parity, and L and R mean left- and right-handed, respectively. The parameter k is set to k = 1.

Next, we consider a natural generalization of the Yukawa coupling, F(φ) = φ^α with α = 3, 5, 7, .... Note that φ^α becomes a double kink for α ≥ 3, since the scalar field φ is a kink. For this case, the effective potentials (50) become

V_L(y) = (1/2) 6^{α/2} b^{α/2} k η arctan^{α-1}(tanh(ky/2)) sech^{2b+1}(ky) [ α - 2b arctan(tanh(ky/2)) sinh(ky) ] + 6^α η^2 ( √b arctan(tanh(ky/2)) )^{2α} sech^{2b}(ky),    (64a)
V_R(y) = V_L(y)|_{η → -η}.    (64b)

It is obvious that both potentials are symmetric, vanish at y = 0 and as y → ±∞, and are depicted in Fig. 4 for different values of b and α. There always exists a quasi-potential well for the left-handed potential V_L and a double potential well for the right-handed one. These wells become deeper and deeper as the parameters b, η, and α increase, which means that there are more and more resonances. Since the coupling function φ^α tends to a constant as y → ±∞, the zero mode of the right-handed gravitino,

χ^R_0 ∝ exp( -η ∫_0^z e^{A(z̄)} φ^α dz̄ ) = exp( -η ∫_0^y φ^α dȳ ),    (65)

is asymptotically equivalent to exp( -η ((π/4)√(6b))^α |y| ), since φ^α → ±((π/4)√(6b))^α as y → ±∞. It is not difficult to check that the normalization condition can be satisfied for any positive coupling constant η. Thus, the right-handed zero mode can be localized on the brane for any positive coupling constant η, while at the same time the left-handed one cannot.
As for the massive modes, we consider the resonance states. As what we have done in the previous subsection, we solve the Schrödinger equations (38) numerically by using the two types of initial conditions (60). The mass spectrum of the resonances is shown in Tab. II. It is clear that in this table the masses of the resonant modes of the leftand right-handed gravitinos are still almost same, while their parities are opposite. The number of the resonances increases with the increases of the parameters b, α, and η. These resonances are closer to each other as m 2 increasing, which is the same as the conclusion of the case of α = 1. In the previous subsection, the f (R)-branes are generated by a single canonical scalar field. In this subsection, we will analysis the localization of a bulk gravitino in the Bloch-f (R) brane model, where the Lagrangian density of the scalar fields is given by
L = − 1 2 ∂ M φ∂ M φ − 1 2 ∂ M ξ∂ M ξ − V (φ, ξ).(66)
The scalar fields φ and ξ interact through the scalar potential V(φ, ξ). In the following, we consider the solution given in Ref. [29]:

φ(y) = v tanh(2dvy),    (67a)
ξ(y) = v √((b - 2d)/d) sech(2dvy),    (67b)
A(y) = (v^2/(9d)) [ (b - 3d) tanh^2(2dvy) - 2b ln cosh(2dvy) ],    (67c)

where b > 2d > 0, and the scalar potential is

V(φ, ξ) = (1/2) [ (b v^2 - b φ^2 - d ξ^2)^2 + 4 d^2 φ^2 ξ^2 ] - (4/3) ( b φ v^2 - (1/3) b φ^3 - d φ ξ^2 )^2.    (68)
For certain values of the parameters v and b, the function f(R) has an analytical expression. For example, when v = 3/2 and b = 3d, we have

f(R) = R + (2γ/7) [ √(3 (R - 48d^2)(R + 120d^2)) sin Y(R) + 2 (R + 36d^2) cos Y(R) ],    (69)

where γ is a parameter and Y(R) = √3 ln[ ( √(R - 48d^2) + √(R + 120d^2) ) / (2√42 d) ].
Next, we investigate the localization of a bulk gravitino with the coupling function F (φ) = φ p ξ q with p = 1, 3, 5, · · · and q any integer. Such coupling was also used to localize the Dirac fermion in Refs. [51][52][53].
1. Case I: F(φ) = φ^p ξ^q with q > 0

Firstly, we consider the case of F(φ) = φ^p ξ^q with q > 0. For convenience we let q = 1. The simplest choice is the Yukawa coupling between the two scalar fields and the gravitino, i.e., -η φξ Ψ̄_M [Γ^M, Γ^N] Ψ_N. We also assume, without loss of generality, that the coupling constant η is positive. The asymptotic behaviors of the potentials (50) in this case are similar to those in the last subsection. As z (or y) → ∞, both potentials V_L and V_R vanish, and their values are opposite at z = 0:

V_L(0) = -V_R(0) = 2ηv^3 √((b - 2d) d),    (70)
which shows that there is a potential well around z = 0 for V_R. Thus, it seems that the left-handed zero mode of the gravitino cannot be localized on the brane, while the right-handed zero mode can. However, when substituting the solution of the right-handed zero mode,

χ^R_0 ∝ exp( -η ∫_0^z dz̄ e^{A(z̄)} φ(z̄) ξ(z̄) ) = exp( -η ∫_0^y dȳ φ(ȳ) ξ(ȳ) ) = exp( (ηv/(2d)) √((b - 2d)/d) sech(2dvȳ) |_0^y ) ∝ exp( (ηv/(2d)) √((b - 2d)/d) sech(2dvy) ),    (71)

into the normalization condition (32), we find that the integral

∫_{-∞}^{∞} (χ^R_0(z))^2 dz = ∫_{-∞}^{∞} (χ^R_0(y))^2 e^{-A(y)} dy ∝ ∫_{-∞}^{∞} exp( -A(y) - 2η ∫_0^y φ(ȳ) ξ(ȳ) dȳ ) dy
 = C^2 ∫_{-∞}^{∞} cosh^{2v^2 b/(9d)}(2dvy) exp( (ηv/d) √((b - 2d)/d) sech(2dvy) - (v^2/(9d)) (b - 3d) tanh^2(2dvy) ) dy    (72)
is divergent, which means that the right-handed zero mode cannot be confined on the brane. Although the potential of the right-handed gravitino is of volcano type, the zero mode still does not exist on the brane. In fact, for any q > 0 and p = 1, 3, 5, ..., the right-handed zero mode tends to a constant as y → ∞, since F(φ) = φ^p ξ^q ∝ tanh^p(2dvy) sech^q(2dvy) → 0. Obviously, this kind of zero mode cannot satisfy the normalization condition (32). Thus, for any q > 0, there exists no bounded zero mode of the gravitino on the brane (the left-handed zero mode can also not be localized). Since there is no localized zero mode on the brane, we turn to the case of q < 0.
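A numerical illustration (ours) of this divergence: truncating the integral (72) at a finite cutoff and increasing the cutoff shows no sign of saturation. The parameter values v = d = 1, b = 3, η = 1 and the function name are our choices, made only for illustration.

```python
# Sketch (ours): truncated version of the integral (72) for F(phi) = phi*xi.
import numpy as np
from scipy.integrate import cumulative_trapezoid

def truncated_norm_q1(eta, L, v=1.0, d=1.0, b=3.0, num=40001):
    y = np.linspace(0.0, L, num)
    A = (v**2 / (9.0 * d)) * ((b - 3.0 * d) * np.tanh(2 * d * v * y) ** 2
                              - 2.0 * b * np.log(np.cosh(2 * d * v * y)))
    phi = v * np.tanh(2 * d * v * y)
    xi = v * np.sqrt((b - 2.0 * d) / d) / np.cosh(2 * d * v * y)
    F_int = cumulative_trapezoid(phi * xi, y, initial=0.0)
    integrand = np.exp(-A - 2.0 * eta * F_int)
    return 2.0 * np.trapz(integrand, y)

for L in (5.0, 10.0, 20.0):
    print(L, truncated_norm_q1(eta=1.0, L=L))   # keeps growing with L: the integral diverges
```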
2. Case II: F(φ) = φ^p ξ^q with q < 0 (or q = -1)

We let q = -1 represent the case of q < 0 for convenience. The potentials (50) in this case are displayed in Fig. 5. Both potentials V_L and V_R have infinite wells. For the simplest case p = 1, both potentials vanish as z (or y) → ∞, and their values at z = 0 are opposite, V_L(0) = -V_R(0). The left-handed zero mode still cannot be localized on the brane, since it is divergent as z → ∞, while the right-handed one,

χ^R_0 ∝ exp( -η ∫_0^y dȳ φ(ȳ) ξ^{-1}(ȳ) ) = exp( -(η/(2v√((b - 2d) d))) cosh(2dvȳ) |_0^y ) ∝ exp( -(η/(2v√((b - 2d) d))) cosh(2dvy) ),    (73)

vanishes as y → ∞ for any η > 0. It is not difficult to check that for any η > 0 this right-handed zero mode can be localized on the brane, and for any q < 0 and p = 1, 3, 5, ... the right-handed zero mode is localized. For the other cases, p ≥ 3, both potentials vanish at z = 0, V_L(0) = V_R(0) = 0; the left-handed potential V_L is always non-negative, while V_R has a double well. Therefore, only the right-handed zero mode can be localized on the brane. There are infinitely many bounded massive KK modes in this case, because both effective potentials are infinite potential wells. Some of our results are shown in Tab. III. It is obvious that the mass spectra of the left- and right-handed gravitino massive bounded KK modes are almost the same, while their parities are opposite, as shown in the previous section. When p = 1, the mass of the first bounded state of the left-handed gravitino (or the mass of the first excited state of the right-handed one) increases with the value of η, because the minimum of the left-handed potential V_L increases with η. On the other hand, the relative width of the effective potentials decreases with the value of η and increases with the value of m^2. Thus, the gaps between the bounded states extend with the growth of η and become narrower and narrower as m^2 increases. When p ≥ 3, the mass of the first bounded state of the left-handed gravitino still increases with η, even though the minimum of the left-handed potential V_L is always zero. The other conclusions are the same as in the case of p = 1.
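As an illustration (ours, not the authors' numerical scheme), the whole q = -1 analysis can be set up end to end: build the background (67), map y to z by quadrature, assemble the potentials (39) for F = φ^p ξ^{-1}, and diagonalize. The grid sizes, the cubic interpolation, and the cutoff ymax are our choices; the resulting low-lying eigenvalues can be compared with Tab. III.

```python
# Sketch (ours): bounded KK spectrum for F = phi^p * xi^(-1) on the Bloch brane (67).
import numpy as np
from scipy.integrate import cumulative_trapezoid
from scipy.interpolate import interp1d
from scipy.linalg import eigh_tridiagonal

def bound_spectrum(eta=1.0, p=1, v=1.0, d=1.0, b=3.0, ymax=3.0, nz=6000, n_modes=4):
    y = np.linspace(-ymax, ymax, 20001)
    A = (v**2 / (9.0 * d)) * ((b - 3.0 * d) * np.tanh(2 * d * v * y) ** 2
                              - 2.0 * b * np.log(np.cosh(2 * d * v * y)))
    phi = v * np.tanh(2 * d * v * y)
    xi = v * np.sqrt((b - 2.0 * d) / d) / np.cosh(2 * d * v * y)
    F = phi ** p / xi
    z = cumulative_trapezoid(np.exp(-A), y, initial=0.0)
    z -= z[len(z) // 2]                      # place z = 0 at the center of the brane
    W = eta * np.exp(A) * F                  # the combination eta * e^A * F
    zu = np.linspace(z[0], z[-1], nz)
    Wu = interp1d(z, W, kind="cubic")(zu)
    dW = np.gradient(Wu, zu)
    h = zu[1] - zu[0]
    spectra = {}
    for name, V in (("L", Wu**2 + dW), ("R", Wu**2 - dW)):   # Eq. (39)
        vals = eigh_tridiagonal(2.0 / h**2 + V[1:-1],
                                -np.ones(nz - 3) / h**2, eigvals_only=True)
        spectra[name] = vals[:n_modes]
    return spectra

print(bound_spectrum())   # the lowest right-handed eigenvalue should sit near zero (the zero mode)
```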
IV. DISCUSSION AND CONCLUSION
In this manuscript, we investigated the localization and resonant modes of a five-dimensional gravitino field on the f(R) thick branes and gave the Schrödinger-like equations for the gravitino KK modes with the gauge condition Ψ_z = 0. As in the case of a five-dimensional free massless Dirac fermion field, the zero mode of a free massless five-dimensional gravitino field can be localized on a brane only for a compact extra dimension, and its massive KK modes cannot be localized. Therefore, we introduced the coupling term -ηF(φ) Ψ̄_M [Γ^M, Γ^N] Ψ_N to investigate the localization of the gravitino on three kinds of f(R) thick branes. The relative probability method has been applied to study the resonances of the gravitino on these f(R) thick branes. It has been shown that the localization and KK spectra of the five-dimensional gravitino field with the Yukawa-like coupling term -ηF(φ) Ψ̄_M [Γ^M, Γ^N] Ψ_N are very similar to those of the Dirac fermion, but their chiralities are opposite. This difference may serve as a signature to distinguish a five-dimensional Dirac fermion field from a gravitino field.
Firstly, we considered the localization of the gravitino on the pure geometric f(R) thick branes, for which the Lagrangian density L(φ^i, X^i) of the background scalar fields vanishes. With the addition of the five-dimensional mass term ηF(φ) = M, we found that in this system the KK modes of the gravitino, both the zero mode and the massive ones, cannot be localized on the pure geometric f(R) thick branes.
Then the f(R) thick branes generated by a single canonical background scalar field φ were considered. We introduced the Yukawa-like coupling function F(φ) = φ^α with α = 1, 3, 5, 7, ... to study the localization of the gravitino field in this f(R) thick brane model. There are two types of coupling functions F(φ), i.e., α = 1 and α ≥ 3. For the case of α = 1, a localized left- or right-handed zero mode can exist on the brane when the coupling parameter η satisfies η > (k/π)√(2b/3). Furthermore, for k = 1 and b > 1/(2√3), we can obtain massive resonances of the gravitino on the brane under the condition η > (1/6)√((6 + 48b + 96b^2)/b). The results indicate that the left- and right-handed gravitinos have almost the same resonant spectra, while their parities are opposite. With the relation (25), the first resonance of the left-handed gravitino is even and that of the right-handed gravitino is odd. Only the right-handed zero mode of the gravitino is confined on the brane. These results also apply to the other cases in this paper, while they are just opposite to those of the Dirac fermion. For a five-dimensional Dirac fermion field, only the left-handed zero mode can be localized on the f(R) thick branes, and the first resonance of the left-handed Dirac fermion is odd. The different results for the gravitino field and the Dirac fermion field come from the different sign in front of γ^5 in their dynamical equations, which may be a signature to distinguish the Dirac fermion field from the gravitino field when they have the same coupling function F and parameter η. In addition, the number of KK resonant modes for the gravitino in this braneworld system increases with the coupling parameter η while it decreases with the model parameter b. For the other case (α ≥ 3), there are no bounded zero modes for both the left- and right-handed gravitinos, and the number of KK resonant modes increases with the growth of the parameters b, α, and the coupling parameter η.
Finally, we focused on the Bloch-f(R) branes, which are generated by two interacting real scalar fields. The coupling function F(φ) = φ^p ξ^q with p = 1, 3, 5, ... and q any integer was considered in this model. For the case of q > 0, there exist no bounded zero modes. For the case of q < 0, the right-handed zero mode can be localized on the brane for any η > 0, and there exist infinitely many bounded massive KK modes for both the left- and right-handed gravitinos, because both effective potentials are infinite potential wells. The gaps between the bounded states extend with the growth of η and become narrower and narrower as m^2 increases.
There are still some open issues. As we showed in this paper, the spectra of the KK modes of a bulk gravitino are almost the same as those of a bulk Dirac fermion except for the chiralities. Thus, all the results on the localization of the Dirac fermion on branes can be carried over to the gravitino by interchanging the chiralities. However, for some kinds of branes, localized KK modes of the Dirac fermion were found by introducing a new kind of coupling term [43,56]. It is not clear whether this kind of coupling term applies to the gravitino, and this will be our future work. In addition, we only consider Minkowski branes in this paper. The localization of the gravitino on dS/AdS branes is also interesting.
FIG. 1. Potentials V_L(z) and V_R(z) for the left- and right-handed gravitinos on the f(R) thick branes with F(φ) = φ. Here, k = 1 and the coupling constant η is set to 2.0 (blue thin trace), 3.0 (green thick trace), and 4.0 (red dashed trace).

FIG. 2. The probabilities P_{L,R} (as functions of m^2) for finding massive resonant KK modes of the left- and right-handed gravitinos with mass m^2 on the thick brane for the coupling F(φ) = φ. Solid lines and dashed lines are plotted for the even-parity and odd-parity massive gravitinos, respectively. The parameters are set to b = 1, k = 1, η = 10, and z_max = 20.

FIG. 3. The shapes of the massive KK resonant modes of the left-handed (upper) and right-handed (lower) gravitinos for the coupling F(φ) = φ with different m^2. Here, the parameters are set to k = 1, b = 1, η = 10, and z_max = 20.

TABLE II. The eigenvalue m^2 and mass m of the left- and right-handed gravitinos with odd-parity and even-parity solutions for the coupling F(φ) = φ^α. The parameters are set to k = 1, η = 1, and b = 1.

FIG. 4. Potentials V_L(z) and V_R(z) for the left- and right-handed gravitinos on the f(R) thick branes with F(φ) = φ^α. Here, k = 1, η = 1, and b is set to 1.0 (blue thin trace), 1.5 (green thick trace), and 2.0 (red dashed trace).

FIG. 5. Potentials V_L(z) and V_R(z) for the left- and right-handed gravitino KK modes on the f(R) thick branes with F(φ) = φ^p ξ^{-1}. Here, v = d = 1, b = 3, and the coupling constant η is set to 1.0 (blue thin trace), 1.5 (green thick trace), and 2.0 (red dashed trace).
TABLE III. The eigenvalue m_n^2 and mass m_n of the left- and right-handed gravitino bounded KK modes for the coupling F(φ) = φ^p ξ^{-1}. The parameters are set to v = d = 1 and b = 3.

p = 1, η = 1:
  C = L: (even) m_n^2 = 2.4489, m_n = 1.5649; (odd) 3.6790, 1.9181; (even) 4.8069, 2.1925; (odd) 5.7380, 2.3954; ...
  C = R: (even) 0, 0; (odd) 2.4490, 1.5649; (even) 3.6790, 1.9181; (odd) 4.8070, 2.1925; (even) 5.7381, 2.3954; ...
p = 1, η = 2:
  C = L: (even) 5.8846, 2.4258; (odd) 9.3857, 3.0636; (even) 12.2642, 3.5020; (odd) 14.7436, 3.8397; ...
  C = R: (even) 0, 0; (odd) 5.8849, 2.4259; (even) 9.3860, 3.0637; (odd) 12.2640, 3.5020; (even) 14.7437, 3.8398; ...
p = 3, η = 1:
  C = L: (even) 1.8861, 1.3734; (odd) 3.5178, 1.8756; (even) 4.5436, 2.1316; (odd) 5.6248, 2.3717; ...
  C = R: (even) 0, 0; (odd) 1.8860, 1.3733; (even) 3.5177, 1.8756; (odd) 4.5436, 2.1316; (even) 5.6248, 2.3717; ...
p = 3, η = 2:
  C = L: (even) 3.6985, 1.9232; (odd) 8.1171, 2.8491; (even) 10.9629, 3.3110; (odd) 13.8026, 3.7152; ...
  C = R: (even) 0, 0; (odd) 3.6981, 1.9230; (even) 8.1170, 2.8490; (odd) 10.9624, 3.3110; (even) 13.8025, 3.7152; ...
The phantom menaced: constraints on low-energy effective ghosts

James M. Cline, Sangyong Jeon, and Guy D. Moore
Physics Department, McGill University, 3600 University Street, Montréal, Québec H3A 2T8, Canada

arXiv:hep-ph/0311312v4, 28 Apr 2004; McGill 03-25 (Dated: November, 2003); doi:10.1103/physrevd.70.043543
It has been suggested that a scalar field with negative kinetic energy, or "ghost," could be the source of the observed late-time cosmological acceleration. Naively, such theories should be ruled out by the catastrophic quantum instability of the vacuum. We derive phenomenological bounds on the Lorentz-violating ultraviolet cutoff Λ which must apply to low-energy effective theories of ghosts, in order to keep the instability at unobservable levels. Assuming only that ghosts interact at least gravitationally, we show that Λ ≲ 3 MeV for consistency with the cosmic gamma ray background. We also show that theories of ghosts with a Lorentz-conserving cutoff are completely excluded.

PACS numbers: 98.80.Cq, 98.70.Vc

The present accelerated expansion of the universe seems to be an experimental fact, now that data from distant type Ia supernovae [1] have been corroborated by those from the cosmic microwave background [2]. Although the simplest explanation is a cosmological constant Λ of order (10^{-3} eV)^4, this tiny energy scale is so far below the expected "natural" size for a cosmological constant that alternative explanations have been vigorously pursued. A common approach has been to assume that the true value of Λ is zero, due to an unknown mechanism, and to propose new physics which would explain why the present-day vacuum energy differs from zero by the small observed amount.
The most popular idea has been quintessence, in which the universe is gradually approaching the zero of the vacuum energy by the slow rolling of an extremely weakly coupled scalar field. More recently, some less conventional alternatives have been considered, including "phantom matter," which is essentially quintessence with a wrong-sign kinetic term [3]. These models are motivated by the supernova data, which suggest that the dark energy equation of state violates the weak energy condition by having p < −ρ [4].
A serious problem with phantom matter, which is overlooked in the literature that attempts to apply it to cosmology, is that such theories are not quantum mechanically viable, either because they violate conservation of probability, or they have unboundedly negative energy density and lead to the absence of a stable vacuum state. Whether a ghost carries negative norm and positive energy, or vice versa, is a choice which is made during the quantization procedure. This choice exists because the iǫ prescription for defining the propagator near its poles is not unique, and not specified by the Lagrangian itself. The momentum space propagator for a ghost can have either of the two forms
\[ \frac{-i}{p^2 - m^2 + i\epsilon} \quad \text{or} \quad \frac{-i}{p^2 - m^2 - i\epsilon} \tag{1} \]
In the first form in (1), the imaginary part of the propagator has the opposite sign relative to that of a positive norm particle. This will cause the optical theorem to be violated, leading to a nonunitary theory. That is, this choice gives a theory with no probabilistic interpretation. It is therefore unphysical and should be dismissed.
On the other hand, if the second form in (1) is chosen, unitarity is maintained. The price to be paid is that the poles in the propagator are shifted in such a way that particles with negative energy are the ones which propagate forward in time, so ghosts possess negative energy. This means, for instance, that a two-body scattering process involving nonghosts and ghosts can result in an increase in the magnitude of the energies of the particles. To illustrate this, suppose that the ghost is massive so that we can consider it to be initially at rest. If the initial energy of a photon is E i and it gravitationally scatters from the ghost at angle θ, then its final energy is
\[ E_f = \frac{E_i\, m}{m - E_i(1 - \cos\theta)} > E_i \tag{2} \]
in contrast to the nonghost case where photons can only lose energy in such scatterings. In fact, there exist initial energies E i = m/(1 − cos θ) such that the final energy is divergent. The final energy of the ghost is correspondingly large and negative.
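As a concrete illustration of this divergence, the short sketch below evaluates Eq. (2) for a ghost at rest; the mass, scattering angle, and photon energies are arbitrary, assumed values chosen only to show the blow-up as E_i approaches m/(1 − cos θ).

```python
import math

def final_energy(E_i, m, theta):
    """Photon energy after gravitational scattering off a ghost of mass m at rest, Eq. (2).
    Diverges as E_i approaches m / (1 - cos(theta))."""
    denom = m - E_i * (1.0 - math.cos(theta))
    return math.inf if denom <= 0 else E_i * m / denom

# Hypothetical numbers: ghost mass m = 1 (arbitrary units), backscattering theta = pi.
m, theta = 1.0, math.pi
E_div = m / (1.0 - math.cos(theta))   # divergent initial energy, here 0.5
for frac in (0.1, 0.5, 0.9, 0.99):
    E_i = frac * E_div
    print(f"E_i = {E_i:.4f}  ->  E_f = {final_energy(E_i, m, theta):.4f}")
```

The amplification grows without bound as the initial energy approaches the resonant value, which is the behavior described in the text.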
To avoid this kind of problem, one should consider theories where the interactions between ghosts and normal matter are as weak as possible. However, we must allow the ghosts to interact gravitationally, since it is their gravitational interactions which are needed for them to have any cosmological consequences, and this is already enough. Gravitational interactions allow the process of figure 1, in which a ghost pair and photon pair are spontaneously created from the vacuum. The phase space integral is divergent, indicating a catastrophic instability. The divergent nature of the instability can only be avoided if we impose a Lorentz noninvariant momentum space cutoff on the final state phase space (more on this later). Setting such a cutoff at the scale Λ, the creation rate is, on dimensional grounds,
\[ \Gamma_{0\to 2\gamma 2\phi} \sim \frac{\Lambda^8}{M_p^4}. \tag{3} \]
We have neglected Bose enhancement from final state occupancy, an assumption we will verify a posteriori.
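For orientation, the estimate of Eq. (3) can be converted to laboratory units; in the sketch below the cutoff value Λ = 3 MeV, the Planck mass M_p ≈ 1.22 × 10^{19} GeV, and the age of the universe t_0 ≈ 4.3 × 10^{17} s are assumed inputs of this illustration, and all O(1) factors are dropped.

```python
# Order-of-magnitude conversion of Eq. (3) to cgs units; Lambda and t0 are assumed values.
hbar = 6.582e-22      # MeV s
hbarc = 1.973e-11     # MeV cm
M_p = 1.22e22         # Planck mass in MeV
Lam = 3.0             # illustrative cutoff in MeV
t0 = 4.3e17           # age of the universe in s

Gamma_natural = Lam**8 / M_p**4                  # MeV^4 in natural units
Gamma_cgs = Gamma_natural / (hbarc**3 * hbar)    # events per cm^3 per second
print(f"Gamma ~ {Gamma_cgs:.1e} cm^-3 s^-1")
print(f"accumulated over t0: ~ {Gamma_cgs * t0:.1e} events per cm^3")
```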
Notice that this pathology exists independently of the question of classical stability of the ghost-gravity system, which has been considered in [5]. The existence of this process is model independent-it requires only the existence of a wrong sign, canonical kinetic term and gravitational interaction for the ghosts. An implicit excuse for even considering phantom matter at the classical level is perhaps the idea that, at long distances, the scalar field theory is merely an effective one, whose ultraviolet completion is well defined and respects unitarity. In this way, it might be possible to have a physical value for the cutoff in (3) which was small enough so that the rate of decay of the vacuum is slow on cosmological time scales. In this letter, we estimate just how low a cutoff Λ is required for consistency with observational constraints. This question was previously considered in [6]; but we reach somewhat different conclusions, as we discuss below.
To find the density of photons which are spontaneously produced, we evolve the phase space density of ghosts and photons in an expanding universe,
\[ \frac{d}{dt}\left(a^3 n\right) = a^3 \Gamma, \tag{4} \]
where a(t) is the scale factor and Γ = Γ_{0→2γ2φ}. The solution is
\[ n(t) = \frac{\Gamma\, t}{3p+1}, \quad a(t) \sim t^p; \qquad n(t) = \frac{\Gamma}{3H}\left(1 - e^{-3Ht}\right), \quad a(t) \sim e^{Ht}. \tag{5} \]
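As a quick consistency check (assuming the initial condition n(0) = 0, which is implicit above), Eq. (4) integrates directly in the exponential case,
\[ a^3 n = \Gamma \int_0^t a^3(t')\, dt' \propto \frac{\Gamma}{3H}\left(e^{3Ht} - 1\right) \;\Longrightarrow\; n(t) = \frac{\Gamma}{3H}\left(1 - e^{-3Ht}\right), \]
while for a(t) ∼ t^p one finds a^3 n ∝ Γ t^{3p+1}/(3p+1), i.e. n(t) = Γ t/(3p+1), reproducing Eq. (5).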
That is, the current number density is approximately given by the production rate per spacetime volume, Γ, times the age of the universe. Most of the photon pairs have been produced since redshift z = 1, both because there has been more time since z = 1 than before then, and because the density produced earlier was diluted by the expansion of the universe. This also means that their energy spectrum is not very different from the energy spectrum produced today; the spectrum peaks at E ∼ Λ. Therefore we find,
\[ \frac{dn}{dE} \sim \Lambda^7 M_p^{-4}\, t_0 \quad \text{for } E \lesssim \Lambda. \tag{6} \]
This spectrum of photons with energy near Λ is constrained by observations of the diffuse gamma ray background. EGRET [7] has measured the differential photon flux to be
\[ \frac{dF}{dE} = 7.3 \times 10^{-9} \left(\frac{E}{E_0}\right)^{-2.1} \left(\mathrm{cm^2\, s\, sr\, MeV}\right)^{-1}, \tag{7} \]
where E_0 = 451 MeV. Demanding that (6) not exceed (7) gives the upper limit
\[ \Lambda \lesssim 3\ \mathrm{MeV}. \tag{8} \]
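The arithmetic behind this bound can be reproduced at the order-of-magnitude level. The sketch below equates Eq. (6), converted to cm^{-3} MeV^{-1}, with the EGRET spectrum of Eq. (7) converted to a photon density per unit energy (a factor 4π/c); the Planck mass and the age of the universe are assumed inputs, and O(1) prefactors are ignored, so only the rough scale of the resulting cutoff is meaningful.

```python
import math
from scipy.optimize import brentq

# Conversion constants and assumed cosmological inputs (not quoted in the text above).
hbar = 6.582e-22        # MeV s
hbarc = 1.973e-11       # MeV cm
c = 2.998e10            # cm / s
M_p = 1.22e22           # Planck mass in MeV
t0 = 4.3e17 / hbar      # age of the universe, converted to MeV^-1
E0 = 451.0              # MeV, reference energy in Eq. (7)

def produced_density(E):
    """Eq. (6) evaluated at E ~ Lambda, converted from natural units to cm^-3 MeV^-1."""
    return (E**7 * t0 / M_p**4) / hbarc**3

def observed_density(E):
    """EGRET flux of Eq. (7) converted to a photon density per unit energy: (4*pi/c) dF/dE."""
    return 4.0 * math.pi / c * 7.3e-9 * (E / E0) ** (-2.1)

# The cutoff bound is roughly where the produced spectrum would exceed the observed one.
Lambda_MeV = brentq(lambda E: produced_density(E) - observed_density(E), 0.1, 100.0)
print(f"Lambda bound ~ {Lambda_MeV:.1f} MeV")   # a few MeV, in line with Eq. (8)
```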
Since the observed gamma ray spectrum involves a mean particle occupancy which is orders of magnitude less than 1, neglect of Bose stimulation was entirely justified. We emphasize that this bound depends only upon the ghost having at least a minimal coupling to gravity. The possible presence of other couplings can only strengthen the result. Nor does it depend on whether the ghost has a potential, so long as its mass is less than Λ. In models of phantom cosmology, the mass is taken to be of order the present Hubble scale, 10 −33 eV, so this is not restrictive.
The process 0 → 2γ2φ is not the only allowed one; we can consider also the production of neutrinos and of e⁺e⁻ pairs. The neutrinos are hard to observe, so no good constraint arises there. The e⁺e⁻ constraint may be more fruitful, but the existence of galactic and Earth magnetic fields makes it somewhat more difficult to relate an incident e⁺ flux to the extragalactic density. However, a sufficiently dense intergalactic e⁺e⁻ plasma would lead to excessive rescattering of the cosmic microwave sky. This leads to the constraint Λ ≲ 40 MeV, still weaker than (8).
Let us compare our bound (8) to those which were obtained in ref. [6]. There it was argued that one can constrain Λ < 10^{-3} eV by considering the process φ → gφφ, where g is a graviton. This bound is incorrect, however. First of all, it arose by considering the "decay" rate of a ghost at rest, and insisting that the corresponding lifetime be longer than the Hubble time to prevent the exponential runaway generation of ghosts. But the produced ghosts typically carry energies ∼ Λ, so their decay rates are strongly time dilated. Demanding only that the time dilated decay rate be less than 1/t_0 gives Λ < 50 MeV. Second, the constraint arose by considering an interaction L_eff = φ g^{µν} ∂_µφ ∂_νφ. Not only is this interaction model dependent; it is actually an artifact of a noncanonical normalization of the φ field kinetic term, and can be removed by a field redefinition. Therefore, the proposed decay mechanism does not actually work. Diagrammatically, this is because the total amplitude is zero, as shown in figure 2. The naive contribution from the contact term is canceled by the three other diagrams, built from the φ η^{µν} ∂_µφ ∂_νφ vertex contained in the interaction, and the g^{µν} ∂_µφ ∂_νφ vertex from the standard kinetic term. Thus the rate for φ → gφφ is zero. Ref. [6] then proceeds to obtain the bound Λ < 100 MeV by requiring the decay φ → 2g 3φ to be slower than the present Hubble rate. But this bound is model dependent; it requires higher derivative Lagrangian terms. It also suffers from the same error of using at-rest decay rates, so the quoted bound is orders of magnitude too stringent.
Figure 2. Vanishing of the amplitude for a ghost to decay into graviton and two ghosts.
Remarks. The stringent limit we have obtained from the diffuse gamma ray background, Λ ≲ 3 MeV, implies that any theory of low-energy effective ghosts must originate from new physics far below the TeV scale. Therefore we cannot invoke string theory, for example, as a plausible source for effective ghosts. Instead, we must imagine that they come from a low-energy sector that is completely hidden from the standard model, except for gravitational couplings. This makes ghosts look even more unlikely, in our view.
Another troubling feature is that, in order to pose this problem at all, we were forced to assume that Lorentz symmetry is broken. By taking the phase space for production of two photons plus two ghosts to be cut off at some momentum Λ, we have singled out a preferred frame, namely the rest frame of the cosmic microwave background radiation. Obviously there exist other frames where k > Λ even if k < Λ in the CMB rest frame. Such a cutoff might arise if, for example, the ghost dispersion relation had the form ω = −√(k² − k⁴/Λ²), which would result from the Lorentz-violating Lagrangian −(1/2)(∂φ)² + (1/2)Λ^{−2}(∇²φ)².
The Lorentz-violating cutoff is necessary because, if we try to impose a Lorentz invariant cutoff, for instance on the virtuality of the off-shell graviton, then there is still a divergent integral over the boost, with respect to the microwave background frame, of the rest frame of the (timelike) graviton. If we demand Lorentz invariance, but want to be maximally conservative, then we could argue that a process with a formation time longer than the age of the universe should not be considered. This places a bound on the boost between graviton and microwave frames, of γ < t_0 √s, where s is the Mandelstam variable (the 4-momentum squared of the off-shell graviton). Imposing in addition the Lorentz invariant bound s < Λ² on the graviton propagator, the production rate becomes finite. Denoting the 4-momentum of the virtual graviton as k, the production rate is
\[ \Gamma \sim \int d^4k\; \theta(\Lambda^2 - k^2)\, \theta(t_0 - k_0/k^2)\, \frac{k^4}{M_p^4} \sim \frac{\Lambda^{10}\, t_0^2}{M_p^4}, \tag{9} \]
so that the number density is ∼ Λ^{10} t_0^3 M_p^{−4}. The typical energy of a produced photon is k_0 ∼ Λ² t_0; even for a cutoff Λ of order milli-electron volts, the energy is ∼ 10^{18} GeV. The dominant mechanism by which gamma rays of such an energy scatter on the way to the Earth is γγ → 4e, with the second γ a microwave background photon; the free path is about 120 megaparsecs [8], leading to about a twenty-fold reduction in the flux. Arriving at the Earth, such a gamma ray would produce an air shower more energetic than any that have ever been seen. Using current bounds on the flux of such cosmic rays, less than 1 event per km² per century [9], leads to a constraint of Λ ≲ 1 meV (milli-electron volt). Gravity would receive order 1 modifications at a length scale > 0.2 millimeters, in contradiction with experiment [10]. Hence, we conclude that even under very conservative assumptions, ghosts within a Lorentz invariant framework are experimentally excluded.
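A quick numerical check of the k_0 ∼ Λ² t_0 scaling (Python; the age of the universe t_0 ≈ 4.3 × 10^{17} s is an assumed input and O(1) factors are ignored):

```python
# Typical photon energy k0 ~ Lambda^2 * t0 for the Lorentz-invariant case; t0 is an assumed input.
hbar = 6.582e-22              # MeV s
t0 = 4.3e17 / hbar            # age of the universe in MeV^-1
Lam = 1.0e-9                  # Lambda = 1 meV expressed in MeV

k0 = Lam**2 * t0              # MeV
print(f"k0 ~ {k0:.1e} MeV ~ {k0 * 1e-3:.1e} GeV")  # of order 10^18 GeV
```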
The requirement of Lorentz violation is worrisome, because it is inconsistent with general covariance. General covariance is the framework for general relativity, and it provides the gauge principle which guarantees the masslessness of the graviton. There are also very severe constraints on Lorentz violation within ordinary particle physics [11]; and Lorentz violation in another sector tends to be communicated to ordinary particle physics via graviton loops [12]. It is also troubling that, to our knowledge, no consistent construction of a low energy theory with ghosts from a ghost-free fundamental theory exists.
These considerations incline us toward the view that ghosts should not be feared, not because they are harmless, but because it is very unlikely that they exist. The inconveniences of a small cosmological constant seem much more bearable than those brought on by ghosts.
We thank Nima Arkani-Hamed, Ramy Brustein, Daniel Chung, Anne M. Green, Justin Khoury, Riccardo Rattazzi and Mark Trodden for useful remarks. JC acknowledges the Kavli Institute for Theoretical Physics and CERN theory group for their hospitality while this work was ongoing. We are supported in part by the Natural Sciences and Engineering Research Council of Canada and by le Fonds Nature et Technologies of Québec. S.J. also thanks RIKEN-BNL Center and U.S. Department of Energy [DE-AC02-98CH10886] for providing facilities essential for the completion of this work.
Figure 1. Graviton-mediated decay of vacuum into two ghosts and two photons.
A. G. Riess et al. [Supernova Search Team Collaboration], "Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant," Astron. J. 116, 1009 (1998) [arXiv:astro-ph/9805201].
S. Perlmutter et al. [Supernova Cosmology Project Collaboration], "Measurements of Omega and Lambda from 42 High-Redshift Supernovae," Astrophys. J. 517, 565 (1999) [arXiv:astro-ph/9812133].
C. L. Bennett et al., "First Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Preliminary Maps and Basic Results," Astrophys. J. Suppl. 148, 1 (2003) [arXiv:astro-ph/0302207].
R. R. Caldwell, "A Phantom Menace?," Phys. Lett. B 545, 23 (2002) [arXiv:astro-ph/9908168];
A. E. Schulz and M. J. White, "The tensor to scalar ratio of phantom dark energy models," Phys. Rev. D 64, 043514 (2001) [arXiv:astro-ph/0104112];
J. g. Hao and X. z. Li, Phys. Rev. D 67, 107303 (2003) [arXiv:gr-qc/0302100];
G. W. Gibbons, "Phantom matter and the cosmological constant," arXiv:hep-th/0302199; X. z. Li and J. g. Hao, "O(N) phantom, a way to implement w < −1," arXiv:hep-th/0303093; S. Nojiri and S. D. Odintsov, "Quantum deSitter cosmology and phantom matter," Phys. Lett. B 562, 147 (2003) [arXiv:hep-th/0303117];
S. Nojiri and S. D. Odintsov, "deSitter brane universe induced by phantom and quantum effects," Phys. Lett. B 565, 1 (2003) [arXiv:hep-th/0304131]; "Effective equation of state and energy conditions in phantom / tachyon inflationary cosmology perturbed by quantum effects," Phys. Lett. B 571, 1 (2003) [arXiv:hep-th/0306212];
P. Singh, M. Sami and N. Dadhich, "Cosmological dynamics of phantom field," Phys. Rev. D 68, 023522 (2003) [arXiv:hep-th/0305110];
J. g. Hao and X. z. Li, "Phantom with Born-Infield type Lagrangian," Phys. Rev. D 68, 043501 (2003) [arXiv:hep-th/0305207]; "Constructing dark energy models with late time de Sitter attractor," Phys. Rev. D 68, 083514 (2003) [arXiv:hep-th/0306033];
M. P. Dabrowski, T. Stachowiak and M. Szydlowski, "Phantom cosmologies," arXiv:hep-th/0307128;
D. j. Liu and X. z. Li, "Born-Infeld-type phantom on the brane world," Phys. Rev. D 68, 067301 (2003) [arXiv:hep-th/0307239];
L. P. Chimento and R. Lazkoz, "On the link between phantom and standard cosmologies," arXiv:gr-qc/0307111; E. Elizalde and J. Q. H., "Phantom and quantum matter in an anti-de Sitter universe," arXiv:gr-qc/0310128; H. Stefancic, "Generalized phantom energy," arXiv:astro-ph/0310904; V. B. Johri, "Phantom Cosmologies," arXiv:astro-ph/0311293.
S. Hannestad and E. Mortsell, "Probing the dark side: Constraints on the dark energy equation of state from CMB, large scale structure and Type Ia supernovae," Phys. Rev. D 66, 063508 (2002) [arXiv:astro-ph/0205096];
A. Melchiorri, L. Mersini, C. J. Odman and M. Trodden, "The State of the Dark Energy Equation of State," Phys. Rev. D 68, 043509 (2003) [arXiv:astro-ph/0211522];
J. A. S. Lima, J. V. Cunha and J. S. Alcaniz, "Constraining the dark energy with galaxy clusters X-ray data," Phys. Rev. D 68, 023510 (2003) [arXiv:astro-ph/0303388].
J. L. Crooks, J. O. Dunn, P. H. Frampton, H. R. Norton and T. Takahashi, Astropart. Phys. 20, 361 (2003) [arXiv:astro-ph/0305495].
R. R. Caldwell, M. Kamionkowski and N. N. Weinberg, "Phantom Energy and Cosmic Doomsday," Phys. Rev. Lett. 91, 071301 (2003) [arXiv:astro-ph/0302506];
P. F. Gonzalez-Diaz, "You need not be afraid of phantom energy," Phys. Rev. D 68, 021303 (2003) [arXiv:astro-ph/0305559];
J. G. Hao and X. z. Li, "Phantom cosmic doomsday: a tale of two attractors," arXiv:astro-ph/0309746.
S. M. Carroll, M. Hoffman and M. Trodden, "Can the dark energy equation-of-state parameter w be less than −1?," Phys. Rev. D 68, 023509 (2003) [arXiv:astro-ph/0301273].
P. Sreekumar et al., "EGRET observations of the extragalactic gamma ray emission," Astrophys. J. 494, 523 (1998) [arXiv:astro-ph/9709257].
R. J. Protheroe and P. L. Biermann, "A new estimate of the extragalactic radio background and implications for ultra-high-energy gamma ray propagation," Astropart. Phys. 6, 45 (1996) [Erratum-ibid. 7, 181 (1997)] [arXiv:astro-ph/9605119].
D. J. Bird et al. [HIRES Collaboration], "Evidence For Correlated Changes In The Spectrum And Composition Of Cosmic Rays At Extremely High-Energies," Phys. Rev. Lett. 71, 3401 (1993); "The Cosmic Ray Energy Spectrum Observed By The Fly's Eye," Astrophys. J. 424, 491 (1994).
C. D. Hoyle, U. Schmidt, B. R. Heckel, E. G. Adelberger, J. H. Gundlach, D. J. Kapner and H. E. Swanson, "Submillimeter tests of the gravitational inverse-square law: A search for 'large' extra dimensions," Phys. Rev. Lett. 86, 1418 (2001) [arXiv:hep-ph/0011014].
S. R. Coleman and S. L. Glashow, "High-energy tests of Lorentz invariance," Phys. Rev. D 59, 116008 (1999) [arXiv:hep-ph/9812418];
V. A. Kostelecky and C. D. Lane, "Constraints on Lorentz violation from clock-comparison experiments," Phys. Rev. D 60, 116010 (1999) [arXiv:hep-ph/9908504].
C. P. Burgess, J. Cline, E. Filotas, J. Matias and G. D. Moore, "Loop-generated bounds on changes to the graviton dispersion relation," JHEP 0203, 043 (2002) [arXiv:hep-ph/0201082].
Fluids with competing interactions: I. Decoding the structure factor to detect and characterize self-limited clustering

Jonathan A. Bollinger and Thomas M. Truskett a)
McKetta Department of Chemical Engineering, University of Texas at Austin, Austin, Texas 78712, USA
a) Electronic mail: [email protected]

(Dated: 1 April 2018)
arXiv:1605.04191 [cond-mat.soft]; doi:10.1063/1.4960338
Keywords: Cluster phases, self-assembly, structure factor, SALR fluids
We use liquid state theory and computer simulations to gain insights into the shape of the structure factor for fluids of particles interacting via a combination of short-range attractions and long-range repulsions. Such systems can reversibly morph between homogeneous phases and states comprising compact self-limiting clusters. We first highlight trends with respect to the presence and location of the intermediate-range order (IRO) pre-peak in the structure factor, which is commonly associated with clustering, for wide ranges of the tunable parameters that control interparticle interactions (e.g., Debye screening length). Next, for approximately 100 different cluster phases at various conditions (where aggregates range in size from six to sixty monomers), we quantitatively relate the shape of the structure factor to physical characteristics including intercluster distance and cluster size. We also test two previously postulated criteria for identifying the emergence of clustered phases that are based on IRO peak-height and -width, respectively. We find that the criterion based on peak-width, which encodes the IRO thermal correlation length, is more robust across a wide range of conditions and interaction strengths but nonetheless approximate. Ultimately, we recommend a hybrid heuristic drawing on both pre-peak height and width for positively identifying the emergence of clustered states.
I. INTRODUCTION
Competing interactions between particles or molecules that manifest at distinct lengthscales can generate hierarchical structure in soft matter systems 1 . For contexts as diverse as microemulsions 2 , block-copolymers 3,4 , graphene oxides 5 , and confined fluid mixtures [6][7][8] , this type of constituent frustration drives (often abrupt) transformations between homogeneous states and morphologies exhibiting micro-to mesoscopic density fluctuations. Such modulated density fluctuations are typically classified as "intermediate-range order" (IRO) because, for this class of morphologies, the structure factor S (k) displays a characteristic pre-peak at a low but nonzero wavenumber [9][10][11][12][13][14][15][16][17][18][19] . In turn, the emergence of IRO can greatly impact the mechanical, optical, electronic, etc. properties of such systems, and the ability to detect, characterize, and ultimately engineer the emergence of IRO structure can facilitate new material processing methods 20-22 . This publication concentrates on an IRO morphology of increasing fundamental and technological interest: the equilibrium cluster phase.
Such a phase comprises self-terminating, finite-sized clusters composed of solute monomers (i.e., primary particles); the clusters themselves are ideally dense, amorphous, and relatively monodisperse in terms of their size 23 . They coexist with a continuous (interstitial) low-density population of monomers; thus, reversible transformations between homogeneous phases (where monomers are well-dispersed) and cluster phases can be viewed as microscopic analogues of macroscopic liquidgas separation.
Self-limiting cluster phases have been studied via theory, computer simulations, and experiments of various idealized 17,18,24-34 or archetypal colloidal suspensions (e.g., polystyrene spheres) 35-38 and more complex constituent monomers like proteins 16,22,39-43, organic-inorganic complexes 44, etc. The generic clustering behavior is attributed to a common physical paradigm: aggregates form due to a competition between short-range attractions that drive monomer association and long-range repulsions that collectively build up to attenuate growth. The former can be realized in colloidal suspensions via, e.g., the introduction of crowder molecules (e.g., non-interacting polymers) that induce depletion attractions, while the latter are attributable to (typically weakly-screened) electrostatic interactions between the ionic double-layers of nearby monomers due to their surface charges 25,36,45.
Despite the attention directed at colloidal suspensions that form cluster phases, there remain basic knowledge gaps regarding their behavior and characterization, particularly in terms of how the shape of the structure factor S (k) relates to real-space morphology. To wit, while characteristic clusters must be reflected by the existence of an IRO pre-peak in S (k), it has also been recognized that suspensions can exhibit IRO pre-peaks without having formed monodisperse multi-particle aggregates [16][17][18] . In other words, it is difficult even to positively detect cluster phases versus either effectively homogeneous phases (exhibiting some other form of IRO) or, alternatively, percolated gel phases. Meanwhile, it remains unclear which morphological lengthscale(s) (e.g., cluster size, intercluster spacing) the wavenumber (position) of the IRO pre-peak captures, or whether it is sensitive to conditions like bulk monomer density 17,25,40,46,47 .
Being able to describe cluster morphologies by decoding S(k) would be conceptually powerful because it would allow one to obtain knowledge about multi-body structure based on pair correlations alone; it is also of practical interest because in situ measurements of pair correlations are feasible for a wide range of soft matter systems and lengthscales, including nanoscopic primary particles and aggregates. In this vein, our goal here is to use integral equation theory and computer simulations to unambiguously and simultaneously characterize S(k) profiles and corresponding suspension morphologies for a canonical pairwise interaction model that generates clusters, with a particular emphasis on surveying wide ranges of conditions that might be accessed through experimentally tunable parameters, including monomer packing fraction φ, monomer surface charge Z, suspension (Debye) screening length κ^{−1}/d, and short-range attraction strength βε.
Based on our analysis of these model fluids, we first systematically expand upon previous findings [16][17][18] to demonstrate the poor correlation between the emergence of the IRO pre-peak in S (k) and the onset (or even energetic favorability) of self-limited clustering. We next demonstrate that the pre-peak position is dependent upon both cluster size in terms of number of monomers and average monomer density, and that it directly quantifies the average real-space intercluster separation. We then test two criteria based on S (k) that have been postulated to pinpoint the onset of clustering (and thus positively detect cluster morphologies), which are based on the IRO pre-peak height 17,18 and width 32 , respectively. We find that the criterion based on the pre-peak width, which encodes the IRO thermal correlation length, is a more robust (albeit still only approximate) predictor of the onset of clustering. Finally, we note that beyond this work, our accompanying publication focuses on describing self-limited cluster phases with free energy models adapted from classical nucleation theory.
II. METHODS
A. Model interactions
We focus on one of the simplest colloidal models 25 known to generate equilibrium cluster phases: a pair potential that combines a short-range attraction (SA) with a long-range repulsion (LR). The so-called SALR potential can be expressed
$\beta u^{SALR}_{i,j}(x_{i,j}) = \beta u^{SA}_{i,j}(x_{i,j}) + \beta u^{LR}_{i,j}(x_{i,j})$   (1)
where β = (k B T ) −1 (k B is Boltzmann's constant and T is temperature); x = r/d is the non-dimensionalized interparticle separation; d is the characteristic particle diameter. Note that we generalize the pair potential to account for multicomponent (here, size-polydisperse) suspensions where two interacting particles are of types i and j, respectively. When conducting simulations (see Section II C), we follow previous work and simulate three-component mixtures that approximate suspensions with 10% size polydispersity; this favors the formation of amorphous fluid clusters, rather than the microcrystalline (often elongated) aggregates that result from monodisperse monomers 32,48 . In this context, the generalized interparticle distance in Eqn. 1 is defined
$x_{i,j} \equiv x - \frac{1}{2}(i + j)(\Delta_d/d)$,
where i (or j) = −1, 0, 1 corresponds to small, medium, and large particles, respectively, and ∆ d /d is a perturbation to particle diameter. Specifically, we study mixtures comprised of 20% small, 60% medium (characteristic size d), and 20% large particles with ∆ d = 0.158d.
Short-range attractions can be realized in colloidal suspensions via the introduction of depletant molecules with exclusion volumes smaller than that of the primary particles. These depletion attractions are represented via a generalized (100-50) Lennard-Jones interaction
$\beta u^{SA}_{i,j}(x_{i,j}) = 4\left[\beta\varepsilon + (1 - 2\delta_{i,j})\beta\Delta_\varepsilon\right]\left(x_{i,j}^{-100} - x_{i,j}^{-50}\right)$   (2)
where the lengthscale of the attractive well is approximately 0.10d. Here, βε is the baseline attraction strength between monomers and ∆ ε = 0.25k B T is an energetic perturbation that biases against demixing. Long-ranged repulsions can be attributed to screened electrostatic interactions between the charge sites located on the surfaces of monomer particles. Ignoring long-range multi-body interactions 49,50 and microscopic mechanisms of ion dissociation [51][52][53][54] , one can approximate this effect via the electrostatic portion of the Derjaguin-Landau-Verwey-Overbeek (DLVO) potential 45,55,56
$\beta u^{LR}_{i,j}(x_{i,j}) = \beta A_{MAX}\,\frac{\exp\left\{-(x_{i,j} - 1)/(\kappa^{-1}/d)\right\}}{x_{i,j}}$   (3)
with
$\beta A_{MAX} = \frac{Z^2(\lambda_B/d)}{\left[1 + 0.5/(\kappa^{-1}/d)\right]^2}$   (4)
where βA MAX is the maximum electrostatic barrier between particles at contact, κ −1 /d is the Debye-Hückel screening length, Z is the total surface charge per monomer (assumed evenly distributed), and λ B /d is the Bjerrum length of the solvent. With respect to experimental realization, recall that not all of these quantities are independent, as $\kappa^{-1}/d = \sqrt{\epsilon_0 \epsilon_R k_B T/(2 d^2 N_A e^2 I)}$ and $\lambda_B/d = e^2/(4\pi \epsilon_0 \epsilon_R d k_B T)$, where $\epsilon_0$ is the vacuum permittivity, $\epsilon_R$ is the relative permittivity, N A is Avogadro's number, e is the elementary charge, and I is the ionic strength of the suspending solvent. Experimentally tunable parameters are essentially Z, $\epsilon_R$, and I (and, practically, even some of these may be interdependent). In our analysis, we choose to fix the relative Bjerrum length at λ B /d = 0.014 (corresponding to, e.g., d = 50 nm monomers suspended in room temperature water with λ B = 0.7 nm), which means electrostatic effects are set via Z and κ −1 /d. (Choosing a different reference λ B /d renormalizes the Z values under consideration; see the companion paper.)
To examine model behavior at a given monomer packing fraction φ = (π/6)ρd 3 (where ρd 3 is number density), we set various combinations of Z and κ −1 /d and then independently vary the depletion attraction strength βε. This treatment mimics how the short- and long-range aspects of constituent interactions are approximately orthogonal for colloidal suspensions, and is worth noting because it is in contrast to some studies where attractions and repulsions are simultaneously scaled via changing T 18,25,27 . Finally, note that throughout the remainder of the publication, we notate βu SALR i, j (x i, j ) as βu(r) for aesthetic simplicity (unless otherwise indicated).
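For concreteness, Eqns. 1-4 can be assembled into a single routine as in the minimal Python sketch below. This is not the authors' production code: the function name, the monodisperse simplification (∆ d = ∆ ε = 0), and the default λ B /d = 0.014 are choices of this sketch.

```python
import numpy as np

def beta_u_salr(x, beta_eps, Z, kappa_inv, lambda_B=0.014):
    """Reduced monodisperse SALR pair potential beta*u(x), x = r/d,
    combining Eqns. 1-4 with Delta_d = Delta_eps = 0."""
    # short-range (100-50) generalized Lennard-Jones depletion attraction (Eqn. 2)
    beta_u_sa = 4.0 * beta_eps * (x**-100 - x**-50)
    # screened electrostatic (DLVO-type) repulsion (Eqns. 3-4)
    beta_A_max = Z**2 * lambda_B / (1.0 + 0.5 / kappa_inv)**2
    beta_u_lr = beta_A_max * np.exp(-(x - 1.0) / kappa_inv) / x
    return beta_u_sa + beta_u_lr

# example: Z = 8, kappa^{-1}/d = 2.0, beta*eps = 5.0
x = np.linspace(0.95, 8.0, 2000)
u = beta_u_salr(x, beta_eps=5.0, Z=8.0, kappa_inv=2.0)
print("minimum beta*u ~", u.min(), "at x ~", x[np.argmin(u)])
```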
B. Integral equation theory
We execute integral equation theory (IET) calculations to efficiently predict S (k) across wide ranges of the parameter space (βε, Z, κ −1 /d) underlying the pair interactions βu(r). In brief, IET partitions the total correlation function h(r) = g(r) − 1 (where g(r) is the radial distribution function) into pair and multibody contributions by introducing the direct correlation function c(r) in the context of the Ornstein-Zernike (OZ) relation:
$h(r) = c(r) + \rho \int c(r')\,h(|\mathbf{r} - \mathbf{r}'|)\,d\mathbf{r}'$   (5)
In order to use Eqn. 5, we require an accompanying closure expression that relates βu(r), g(r) and c(r). Because our systems have potentials resembling Coulombic interactions, we follow our previous work 32 and employ the optimized random phase approximation (ORPA) 57,58 . The ORPA formulation we use treats the direct correlation function as c(r) ≈ exp {−βu(r)} − 1 + c 0 (r), where the first two terms constitute a large-r perturbation to the c 0 (r) of an underlying reference system. We use the Mayer function to capture effects outside the core because it provides improved results when deep and narrow attraction wells are included in the pair potential 58 . Meanwhile, c 0 (r) = 0 for r > d, while at short range it is optimized to enforce h(r) = −1 for r ≤ d (i.e., to exactly incorporate effects of a reference hard-sphere fluid). Note that in performing these calculations, we do not explicitly enforce thermodynamic self-consistency, a refinement that has been shown to provide very strong quantitative agreement between analytical and simulation results for complex fluids 14,15,59 . As discussed in Section III, we are mainly interested in using IET to capture general trends in pair structural behavior over wide ranges in model parameter space; for these purposes, our approximate approach is practical and reasonably reflects simulation results 32 .
In practice, we conduct our IET calculations using the single-component monodisperse pair potential (i.e., ∆ d /d = β∆ ε = 0), where we fix Z and κ −1 /d and then systematically increase βε after beginning at vanishing attraction strength. Upon numerical solution at a given βε, S (k) is obtained via the relation S (k) = 1 + (ρd 3 ) ĥ(k), where ĥ(k) = FT[h(r)] and FT is a Fourier transform.
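The last step above, converting a tabulated h(r) into S(k), reduces to the Fourier transform of a radially symmetric function. A simple quadrature sketch is given below; the function name and gridding are ours, and it assumes h(r) is sampled on a uniform grid, has effectively decayed by the end of the tabulated range, and that kd > 0.

```python
import numpy as np

def structure_factor_from_h(r, h, rho_d3, kd):
    """S(k) = 1 + (rho*d^3)*h_hat(k) with all lengths in units of d.
    For a radial function, h_hat(k) = (4*pi/k) * integral dr r*sin(k*r)*h(r)."""
    kd = np.atleast_1d(np.asarray(kd, dtype=float))
    dr = r[1] - r[0]                                   # uniform grid spacing assumed
    integrand = r[None, :] * np.sin(kd[:, None] * r[None, :]) * h[None, :]
    h_hat = 4.0 * np.pi / kd * integrand.sum(axis=1) * dr   # rectangle-rule quadrature
    return 1.0 + rho_d3 * h_hat
```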
C. Molecular dynamics simulations
We perform three-dimensional (3D) MD simulations of the ternary SALR mixtures in the NVT ensemble with periodic boundary conditions using LAMMPS 60 . We use an integration time-step of dt = 0.001 $\sqrt{d^2 m/(k_B T)}$ (taking the mass m = 1), and fix temperature via a Nosé-Hoover thermostat with time-constant τ = 2000dt. The pair potential for a given Z and κ −1 /d is cut off such that the interaction strength at distance x c i, j (note explicit use of the mixture notation) is βu i, j (x c i, j ) ≤ 2×10 −3 and the force magnitude is simultaneously −d[βu i, j (x c i, j )]/dx i, j ≤ 1×10 −3 . We examine bulk monomer packing fractions φ = 0.015, 0.030, 0.060, and 0.120 using systems of N box = 1920, 2960, 6800, and 6800 particles, respectively. Starting from randomized initial configurations, we allow systems at φ = 0.015, 0.030, 0.060, and 0.120 to equilibrate for 3×10 7 , 1×10 7 , 3×10 6 , and 2×10 6 steps, respectively. (Lower packing fractions require relatively more equilibration time given less frequent monomer-monomer collisions.) We have confirmed that these equilibration times are sufficient by (1) checking that energies have converged and (2) visualizing the trajectories to check that clusters undergo frequent intracluster rearrangements and intercluster exchanges (i.e., that individual particles ergodically sample the simulation space). Regarding the latter, we indeed find that by employing the lightly polydisperse mixture that we developed and used previously 32,48 , we avoid the formation of highly-arrested microcrystalline phases typical of monodisperse models.
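The cutoff prescription above (pair energy below 2×10 −3 k B T and reduced force magnitude below 10 −3 ) can be located numerically; a small sketch, assuming a callable reduced pair potential such as the beta_u_salr helper sketched in Section II A, is:

```python
import numpy as np

def find_cutoff(beta_u, x_max=30.0, dx=1e-3, u_tol=2e-3, f_tol=1e-3):
    """Smallest x_c beyond which |beta*u(x)| <= u_tol and the reduced force
    magnitude |d(beta*u)/dx| <= f_tol everywhere out to x_max.

    beta_u : callable returning the reduced pair potential beta*u(x)."""
    x = np.arange(1.0, x_max, dx)
    u = beta_u(x)
    f = -np.gradient(u, dx)
    ok = (np.abs(u) <= u_tol) & (np.abs(f) <= f_tol)
    ok_tail = np.logical_and.accumulate(ok[::-1])[::-1]   # condition holds from here outward
    return x[np.argmax(ok_tail)] if ok_tail.any() else None

# e.g., find_cutoff(lambda x: beta_u_salr(x, 5.0, 8.0, 2.0)) using the earlier sketch
```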
To characterize pair correlations, we calculate the structure factor S (k) via numerical Fourier Transform inversion of the radial distribution function g(r). To characterize multibody structure, we calculate cluster-size distributions (CSDs), which quantify the probability p(N) of observing aggregates comprising N particles. Following previous studies 18,25,30,32 , two monomers are considered part of the same aggregate if they are located within the range of the attractive well (i.e., are direct neighbors) and/or they are both direct neighbors with at least one common particle (i.e., are connected via some percolating pathway).
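A sketch of the cluster bookkeeping just described (direct neighbors within the attractive-well range, plus transitive connectivity) is given below. The brute-force neighbor search, the function name, and the specific r_cut value (roughly d plus the ~0.10d well width of Section II A) are assumptions of this sketch.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import connected_components

def cluster_sizes(pos, box_length, r_cut=1.1):
    """Sizes of all aggregates (monomers count as size 1) obtained by linking
    particles whose minimum-image separation is below r_cut and taking
    connected components of the resulting neighbor graph."""
    n = len(pos)
    pairs_i, pairs_j = [], []
    # O(N^2) neighbor search with the minimum-image convention; adequate as a
    # sketch for the few-thousand-particle systems described above
    for i in range(n - 1):
        dr = pos[i + 1:] - pos[i]
        dr -= box_length * np.round(dr / box_length)
        close = np.where((dr**2).sum(axis=1) < r_cut**2)[0]
        pairs_i.extend([i] * len(close))
        pairs_j.extend((close + i + 1).tolist())
    adj = coo_matrix((np.ones(len(pairs_i)), (pairs_i, pairs_j)), shape=(n, n))
    _, labels = connected_components(adj, directed=False)
    return np.bincount(labels)
```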
For consistency across many packing fractions and cluster sizes, we consider a phase clustered with characteristic aggregate size N * based on the following criteria: (1) the p(N) distribution exhibits a visibly-apparent local maximum (mode) at some 1 < N * ≪ N box , where the corresponding local minimum between N = 1 and N * is notated as N min ; and (2) at least 80% of the particles in the system participate in aggregates of size N ≥ N min . Thus, in this framework, the onset of clustering occurs when $0.80 = \sum_{N=N_{min}}^{N_{box}} p(N)$, where p(N) is appropriately normalized. In turn, we identify the critical attraction strengths βε * best meeting this condition by examining CSDs of simulations performed in increments of ∆ε = 0.05k B T . All of the combinations of Z, κ −1 /d, and φ analyzed via simulations (where cluster phases could be found) are listed in Table I by their respective βε * values.
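The 80% bookkeeping can then be applied to the pooled cluster sizes. In the sketch below, p(N) is taken to be particle-weighted (the fraction of particles residing in clusters of size N), which is one reasonable reading of "appropriately normalized"; the simple mode/minimum search stands in for the visual inspection of the CSD described above.

```python
import numpy as np

def clustering_onset_metrics(sizes):
    """Particle-weighted CSD p(N), its mode N*, the local minimum N_min, and
    the particle fraction residing in clusters of size >= N_min.

    sizes : cluster sizes pooled over configurations (monomers have size 1);
            assumes multi-particle clusters are present."""
    sizes = np.asarray(sizes)
    counts = np.bincount(sizes)                          # number of clusters of each size N
    p = np.arange(len(counts)) * counts / sizes.sum()    # particle-weighted p(N); sums to 1
    N_star = 2 + int(np.argmax(p[2:]))                   # mode at some N > 1
    N_min = 2 + int(np.argmin(p[2:N_star])) if N_star > 2 else 2
    fraction = p[N_min:].sum()
    return p, N_star, N_min, fraction

# the state is deemed clustered (at this beta*eps) when the mode is well
# defined and fraction >= 0.80
```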
To characterize the lengthscales and shapes of the N *-sized clusters, we calculate the radius of gyration R G /d and the relative shape anisotropy κ 2 . We first calculate the gyration tensor S, whose elements are $S_{mn} \equiv N^{*-2}\sum_{i<j}(r^i_m - r^j_m)(r^i_n - r^j_n)$, where $r^i_m$ is the position of the i-th particle participating in the cluster in the m-th Cartesian coordinate (x, y, or z). The radius of gyration is then given by R G /d = (Tr S) 1/2 = (λ 1 + λ 2 + λ 3 ) 1/2 , where λ 1 , λ 2 , and λ 3 are the eigenvalues of S in order of magnitude λ 1 ≥ λ 2 ≥ λ 3 . The well-established relative shape anisotropy 61 is calculated via κ 2 = 1 − 3(λ 1 λ 2 + λ 2 λ 3 + λ 3 λ 1 )/(R G /d) 4 , which is bounded between 0 and 1: κ 2 = 0 corresponds to points (particle centers) that are symmetrically distributed and κ 2 = 1 corresponds to points arranged linearly.
To slightly smooth over instantaneous cluster distortions (e.g., when the outer edge is distended due to an imminent particle exchange), measurements of R G /d and κ 2 are derived from S tensors collected over blocks of 10 individual clusters (where particle positions are renormalized relative to the respective centers of mass of the clusters); in turn, average and error values are based on 500 of these measurements.
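The gyration-tensor quantities defined above translate directly into a few lines of code; a minimal sketch for a single cluster (with positions already made whole across the periodic boundaries and referenced to the cluster center of mass, per the averaging procedure above) is:

```python
import numpy as np

def gyration_metrics(pos):
    """Radius of gyration R_G/d and relative shape anisotropy kappa^2 for one
    cluster, pos being the (N*, 3) monomer positions in units of d."""
    centered = pos - pos.mean(axis=0)
    # covariance form of the gyration tensor; equivalent to the pairwise sum
    # S_mn = N*^{-2} sum_{i<j} (r^i_m - r^j_m)(r^i_n - r^j_n)
    S = centered.T @ centered / len(pos)
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]           # lambda_1 >= lambda_2 >= lambda_3
    rg = np.sqrt(lam.sum())                              # R_G/d = (Tr S)^(1/2)
    kappa2 = 1.0 - 3.0 * (lam[0]*lam[1] + lam[1]*lam[2] + lam[2]*lam[0]) / lam.sum()**2
    return rg, kappa2
```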
III. RESULTS & DISCUSSION
A. IRO pre-peak formation, clustering, and macroscopic phase separation
We begin our discussion by considering the existence of a low-wavenumber pre-peak in the structure factor S (k), which emerges at a position k pre d lower than that of the primary peak associated with monomer-monomer packing effects located at k prim d ≈ 2π (i.e., a real-space lengthscale of d). A pre-peak position of k pre d = 0 is associated with suspensions dominated by short-range attractions, where such a pre-peak corresponds to (infinitely) long-ranged, densified regions and diverges in magnitude at the onset of macroscopic liquid-gas phase separation 58 . On the other hand, phases composed of self-terminating microscopic clusters must exhibit an intermediate-range order (IRO) peak at some wavenumber 0 < k IRO d < k prim d due to their modulated structure; however, as discussed above, it is tentatively understood that not every state exhibiting an IRO peak is actually comprised of characteristically-sized clusters [16][17][18] .
In Fig. 1, we build upon these basic guidelines by examining an SALR system where we fix charge Z and packing fraction φ while varying attraction strength βε and screening length κ −1 /d over wide ranges. This allows us to: (1) systematically map out how the existence of the S (k) pre-peak and its position relate to some of the tunable parameters controlling interparticle interactions and phase behavior; and (2) consider how the parameter space where IRO pre-peaks exist compares to the parameter space where clusters emerge. In Fig. 1(a), we make the mapping tractable by using IET calculations with the approximate ORPA closure (see Methods) that can efficiently survey parameter space; to address the latter comparison, we plot the line of critical attraction strength βε * observed in MD simulations (where we can directly characterize multi-body structure), which corresponds to the onset of clustering at a given κ −1 /d. Meanwhile, in Figs. 1(b-d), we show selected series of S (k) profiles obtained from IET and simulations to illustrate the pre-peak shapes that correspond to the positions in Fig. 1(a). Note that here we are using an approximate closure and making comparisons between monodisperse IET calculations and lightly polydisperse MD simulations; thus, while we cannot expect perfect agreement between the methods, we do observe qualitative agreement in terms of the evolution of S (k) even in regions where S (k) is changing rapidly (as exemplified in Figs. 1(b-d) and elsewhere 32 ). Nonetheless, we restrict our comments below to general trends that should not be sensitive to these types of methodological choices.
Focusing on Fig. 1(a), it is apparent that for any given repulsive interaction, it is only above a sufficiently strong attraction βε that a pre-peak of any position forms. As might be anticipated, a k pre d = 0 pre-peak forms in the limit of small screening lengths, while at sufficiently large screening lengths (κ −1 /d ≥ 1.0), one observes an IRO pre-peak at k IRO d > 0 that grows in from higher to lower k-values with increasing attractions. Moving left-to-right in the direction of increasing screening length, the transition between k pre d = 0 and k IRO d > 0 (where the zero-wavenumber convexity switches from negative to positive) is termed a Lifshitz point, which is a common feature of fluids with generic SALR interactions 9,10,19 . Generally speaking, to reach this transition, repulsions must not only exist but must also be sufficiently competitive relative to attractions to favor modulated phases (minimum threshold repulsion strengths are known analytically for some temperature-controlled systems 10 ). In the parameter space here, this condition means that given a surface charge Z, one requires a minimum κ −1 /d to generate repulsions that can collectively stabilize aggregates once attractions start to pull monomers together.
From Fig. 1(a), one can also readily appreciate that the presence of an IRO pre-peak is a poor indicator of: (1) whether a particular state is composed of clusters; and (2) whether the charge-charge repulsions are even strong enough to favor persistent modulated structure. The first point has been postulated previously [16][17][18] , and here is bolstered by the considerable discrepancy between the region of parameter space where an IRO pre-peak is observed and the region where formation of clusters occurs (i.e., at and above the locus of βε * ). To wit, there is an energy differential of ∆ ≥ 2k B T between the emergence of the IRO pre-peak and the emergence of clusters over many screening lengths.
Meanwhile, one can also observe a second transition in the peak behavior of Fig. 1(a) within the screening length range 0.3 ≤ κ −1 /d ≤ 1.0: moving in the direction of increasing attraction strength, an IRO pre-peak initially develops, but subsequently shifts to k pre d = 0 while the system bypasses the formation of a cluster phase. Crossing this type of (reverse) Lifshitz boundary is readily attributable to the physical setup we consider, where attraction strength is "decoupled" from repulsions; after all, one should arguably be able to ramp up attractions to such high strengths that macrophase separation is favorable given even relatively strong repulsions. (Alternatively, our previous work illustrates this switch for one case of extremely weak repulsions 32 .) This shift from k IRO d > 0 to k pre d = 0 is exemplified in Fig. 1(b), which can be contrasted with Fig. 1(c), which shows an S (k) series at larger κ −1 /d where the IRO pre-peak persists and grows once it emerges. (These behaviors are rounded out by panel Fig. 1(d), which gives a representative series of a system shifting from a k pre d = 0 to k IRO d > 0 pre-peak.) Taking these two observations together, one must keep in mind that IRO pre-peak existence can not only considerably precede cluster formation, but can be very misleading at intermediate screening lengths where existence does not even universally signal that increasing attraction strength will result in formation of stable clusters.
To demonstrate that the qualitative trends of pre-peak existence and position shown in Fig. 1 are relatively generic, we show in Fig. 2 a representative series of pre-peak landscapes for various charges Z (at fixed φ), and a comparison between landscapes for different φ (at fixed Z). Despite the varying conditions, we generally find: (1) that given sufficient integrated repulsions, the formation of an IRO pre-peak precedes cluster formation by a differential in attraction strength upwards of ∆ = 2 to 3k B T ; (2) that there exist intermediate ranges of κ −1 /d where IRO pre-peaks shift to k pre d = 0 prior to clustering; and (3) that formation of finite-sized aggregates is very unlikely for screening lengths κ −1 /d ≤ 0.60, though we cannot definitively rule out the possibility.
Indeed, the primary differences across these various conditions are systematic shifts in the critical attraction strength βε * to form clusters. The locus of βε * shifts to higher values as surface charge Z increases due to the need to overcome greater charge-charge repulsions. In contrast, for fixed Z and κ −1 /d, the critical attraction strength βε * decreases by between approximately 0.3 and 1.0k B T when φ is doubled (this trend applies for 0.015 ≤ φ ≤ 0.12) because this reduces the effective energetic barrier for bringing particles from the reference pair distance L/d ≈ (ρ M d 3 ) −1/3 of the homogeneous dispersion to the contact distance L/d ≈ 1 in aggregates. As a final point, we note that for a given charge Z, the range in κ −1 /d over which the dense phase moves from an infinite scale (i.e., macroscopic liquid-gas separation) at small κ −1 /d to an asymptotic modulated structure (given sufficient charge Z) at large κ −1 /d is quite narrow. Moving horizontally at, e.g., βε * , across any of the landscapes of Figs. 1 and 2, the pre-peak moves from k pre d = 0 at κ −1 /d ≤ 0.5 to an approximately constant k IRO d > 0 for κ −1 /d ≥ 3.0. Thus, one effectively reaches the Coulombic limit in terms of the repulsion influence for screening lengths κ −1 /d approaching only a few monomer diameters.
B. Cluster morphologies in simulations
To forge connections between the IRO pre-peak in S (k) and the real-space morphologies observed in SALR systems, we analyze 3D configurations of approximately 100 different clustered phases generated via MD simulations, where we can obtain S (k) while simultaneously measuring the number-size N * and real-space lengthscales associated with the aggregates. We consider cluster phases formed for wide ranges of φ, Z, κ −1 /d, where, for the sake of consistency, we specifically concern ourselves with states at the onset of clustering where aggregates of a preferred size have emerged. These states are defined by critical attraction strengths βε * , and all of the state points that are analyzed in the following sections are listed in Table I by their respective βε * values. As demonstrated in Figs. 3 and 4, we examine phases comprising clusters in the size range 6 ≤ N * ≤ 60 that are compact and spherically symmetric on average, making these states promising for S (k) interpretation because they are relatively simple (idealized) in terms of their morphologies. We first consider Figs. 3(a) and 3(b), where we show that plotting the radius of gyration R G /d versus cluster size N * follows the relation
$R_G/d = \alpha(\phi)\,N^{*\,(1/d_f)}\quad\text{with}\quad d_f = 3$   (6)
where α(φ) is a φ-dependent prefactor on the order of 1/2 (hereafter notated α) and d f is the fractal dimension of the aggregates. The fractal dimension d f = 3 signifies that the clusters are compact objects, in contrast with aggregates that are more highly-branched and/or elongated, which would tend to exhibit d f < 3. Likewise, the magnitudes of the α prefactors underline that these aggregates have high internal packing fractions, though we do see a modest positive correlation between R G /d and φ given fixed N * . This indicates that clusters are slightly less dense given closer intercluster proximity, which can be attributed to more frequent monomer exchanges that tend to instantaneously (but, on average, isotropically) enlarge the clusters compared to their "isolated" structure at very low packing fractions, e.g., φ = 0.015. Meanwhile, measurements of the relative shape anisotropy κ 2 , which are shown in Fig. 3(c), demonstrate that these cluster objects are highly symmetric even down to small sizes N * . Here, we calculate the long-established parameter κ 2 , where κ 2 = 0 corresponds to points (particles) that are symmetrically distributed and κ 2 = 1 corresponds to points arranged linearly 61 . Calculated based on the monomer positions within the clusters, we find κ 2 ≤ 0.05 for all cluster sizes and packing fractions, which indicates symmetric arrangements of particles and complements the R G /d-based findings above that mainly imply compactness. Specifically, we observe κ 2 ≈ 0.01 (very high symmetry) for the most isolated clusters at φ = 0.015, and a slight positive correlation between κ 2 and φ that implies aggregate symmetry is somewhat sensitive to the increasing frequency of (near-)collisions and monomer-exchanges, which tend to generate outlying particles and instantaneously distorted states that positively contribute to κ 2 .
As illustrated in Fig. 4, visualizations of the cluster phases complement the findings above: the aggregates formed in these systems are highly-compact and roughly spherical on average; furthermore, based on these attributes and the size-scaling of the aggregates, we estimate the typical internal packing fraction of the clusters is φ int ≈ 0.40. To wit, we observe good mixing of the polydisperse monomers, which frustrates intracluster crystallization and promotes intra- and intercluster diffusion. One can also appreciate the preferred sphericity of the clusters, though this can be instantaneously violated as clusters collide, merge, or exchange monomers. Given that the clusters are spherical, we can estimate the internal packing fraction using the expression φ int = N * V mon /V cl (N * ), where V mon = (4/3)π(d/2) 3 and V cl = (4/3)π(R cl ) 3 are the volumes of the monomer and cluster, respectively (here we assume monodisperse monomers). We then estimate the N *-dependent cluster radius as R cl /d = R G /d + 0.5, where the latter coefficient is added because R G /d is based on particle centers. Using the relation R G /d ≈ 0.5N * 1/3 gives 0.30 ≤ φ int ≤ 0.50 over the range 6 ≤ N * ≤ 60, with φ int ≥ 0.35 for the majority of sizes. This is comparable with dense simple fluids.
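The φ int estimate above amounts to a short calculation; a sketch under the stated assumptions (R G /d ≈ 0.5N * 1/3 and the +0.5 shift from particle centers to the cluster surface) is:

```python
def phi_internal(n_star, alpha=0.5):
    """Internal packing fraction phi_int = N* (1/2)^3 / (R_cl/d)^3, assuming
    R_G/d = alpha*N*^(1/3) and R_cl/d = R_G/d + 0.5 (monodisperse monomers)."""
    r_cl = alpha * n_star ** (1.0 / 3.0) + 0.5
    return n_star * 0.125 / r_cl ** 3

# gives roughly 0.3 at N* = 6 up to roughly 0.5 at N* = 60, consistent with
# the 0.30 <= phi_int <= 0.50 range quoted above
```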
Finally, in line with the observations of Godfrin et al. 18 , we find that the emergent aggregates universally exhibit average intracluster coordination numbers (i.e., numbers of nearest-neighbors) of z c ≥ 2.4, which is the well-established minimum coordination number corresponding to rigid percolation 63 . Predictably, z c grows with respect to cluster size, where the scaling relationship between these two quantities is important for understanding the thermodynamics of cluster formation. We refer the reader to the accompanying publication for a more extensive discussion.
C. Interpreting the IRO pre-peak position
Based on our collection of simulated cluster morphologies, we first address what physical characteristic(s) of these morphologies that the IRO pre-peak position in S (k) captures. This is important because while the real-space lengthscale 2π/(k IRO d) captured by the inverse pre-peak position is generally thought to encode the real-space cluster diameter (or perhaps intercluster center of mass separation), there has been limited information available allowing for an unambiguous determination of what lengthscale(s) k IRO d truly captures. As such, there is not yet consensus about whether the pre-peak position should exhibit a systematic dependence upon bulk monomer density 14,17,25,40,46,47 . In other words, if similarly sized clusters are found at two densities, should pre-peak position be the same?
Focusing on Fig. 5, we find that the real-space lengthscale 2π/(k IRO d) is equivalent to the average center-to-center intercluster distance L C-C /d. A direct comparison between the two quantities is presented in Fig. 5(a), which demonstrates excellent quantitative agreement, and Fig. 5(b) makes it clear that the pre-peak lengthscale is correspondingly a function of both cluster size N * and bulk monomer density ρd 3 . To understand why this is so, let us consider the number density of clusters ρ C d 3 = n C /(L box /d) 3 , where n C = N box /N * is the number of clusters in the simulation assuming perfect size-uniformity and L box is the simulation box length. We can then write ρ C d 3 = N box /[N * (L box /d) 3 ] = (ρd 3 )/N * , where the second equality is simply due to the definition of the bulk monomer density ρd 3 = N box /(L box /d) 3 . Since, in the crudest sense, the average intercluster distance L C-C /d ≈ (ρ C d 3 ) −1/3 , we thus have:
$2\pi/(k_{IRO}d) = L_{C\text{-}C}/d \equiv \left[\frac{N^*}{\rho d^3}\right]^{1/3}$   (7)
As is evident from Fig. 5(a), there is excellent collapse in the data along Eqn. 7 for all of the cluster phases tested.
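In practice, Eqn. 7 is most useful in inverted form, i.e., backing a cluster size out of a measured pre-peak position and a known monomer density; a one-line sketch (with hypothetical input values in the usage comment) is:

```python
import math

def n_star_from_prepeak(k_iro_d, rho_d3):
    """Invert Eqn. 7: N* = (rho*d^3) * (2*pi / (k_IRO*d))**3."""
    return rho_d3 * (2.0 * math.pi / k_iro_d) ** 3

# e.g., a pre-peak at k_IRO*d = 1.0 in a phi = 0.03 suspension
# (rho*d^3 = 6*phi/pi ~ 0.0573) implies N* ~ 14
```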
This analysis assumes nothing about the shape and/or compactness of the clusters (only that they are distinguishable and of number-size N * ), which has two implications: one can readily obtain cluster size N * given knowledge of k IRO d and ρd 3 ; however, to obtain a real-space cluster diameter, one must independently possess an empirical relation between N * and cluster diameter (or, e.g., R G /d). Of course, given our systems exhibit the size-scaling of Eqn. 6, we demonstrate in Fig. 5(c) that this type of conversion from pre-peak position to cluster radius is quantitative. Finally, though this model for pre-peak position assumes little about the nature of the aggregates, we cannot rule out that the strength of the quantitative match between 2π/(k IRO d) and L C-C /d may diminish for less-idealized morphologies that are not primarily composed of highly-packed spherical clusters.
D. Detecting the onset of clustering based on S (k)
As already discussed, the existence of an IRO pre-peak is necessary but not sufficient evidence for positively identifying a clustered phase. In this section, we draw on our results from simulations to directly test two criteria postulated to detect the transformation between homogeneous and clustered phases: one based on the IRO pre-peak height (i.e., magnitude) and one based on the IRO pre-peak width.
1. IRO pre-peak height
We begin by revisiting previous reports 17,18 that the onset of clustering occurs as the pre-peak height (magnitude) reaches the threshold value S (k IRO d) ≈ 2.7. In brief, this is an adaptation of the empirical Hansen-Verlet freezing rule developed for simple fluids 64 , which states that the height of the first peak in the structure factor approaches S (k) ≈ 2.85 at the fluid-solid transition (i.e., along the melting line). In this way, the S (k IRO d) ≈ 2.7 clustering criterion is conceptually like considering cluster formation as a microcrystallization event, i.e., a frustrated analog of the bulk freezing transition. However, this criterion for identifying clustering has only been tested for a limited scope of repulsion strengths and lengthscales, generally in schemes (unlike the protocol here) where attraction and repulsion strengths have been simultaneously rescaled by modulating T .
In Fig. 6, we plot the magnitudes of the IRO pre-peaks in S (k) measured from simulations at the onset of clustering for our ≈ 100 different systems, where we observe that for the majority of cases tested, the peak-height considerably exceeds (by up to an order of magnitude) the S (k IRO d) ≈ 2.7 threshold. In essence, the criterion does not generally pinpoint the emergence of aggregates with a characteristic size because many dispersed states (and/or states exhibiting generic amorphous IRO) at a given Z and κ −1 /d exhibit IRO pre-peaks with heights of S (k IRO d) ≥ 2.7 well before attractions are actually strong enough to stabilize clusters. Thus, one might instead posit that the condition S (k IRO d) ≥ 2.7 is a necessary but not sufficient criterion for positively identifying clustered phases.
Broadly speaking, the criterion acts only as a minimum threshold because the pre-peak height is highly coupled to the kd → 0 limit of S (k), which is proportional to the system compressibility χ T 58 . To wit, the states where the S (k IRO d) values most exceed the S (k IRO d) ≈ 2.7 limit at βε * are those governed by relatively weak repulsions (correlated with larger N * in Fig. 6) and lower φ, both of which contribute to high χ T . Thus, an IRO pre-peak height can reach large values even as the pre-peak signature itself may be rather weak (i.e., flat, especially away from the clustering locus), simply due to the leading influence of the high-magnitude low-k limit. This type of coupling between the pre-peak and zero-wavenumber limit is evident even at "moderate" packing fractions like φ = 0.060, as shown in Figs. 6(b) and (c): relatively low-strength repulsions combined with the increasing attractions generating heterogeneity drive the compressibility to high values (e.g., greater than 1), with the pre-peak emerging and sharpening at correspondingly large magnitudes.
More conceptually, it should perhaps be unsurprising that the Hansen-Verlet freezing rule is a poor fit for these systems. In essence, the rule was developed based on suspensions undergoing solidification due to packing effects; however, clustering in an SALR system is driven not by a competition between configurational free volumes, but by a competition between attractions and repulsions. In turn, while describing cluster formation as "microcrystallization" seems fitting (especially for highly monodisperse monomers that form clusters with crystal motifs), it is a transformation more akin to a frustrated liquid-gas separation.
2. IRO pre-peak width
We now move on to test a recently proposed framework 32 for identifying the onset of clustering based on the IRO pre-peak width, which encodes the thermal correlation length ξ T /d. Conceptually, the thermal correlation length quantifies the real-space persistence of structural correlations and is most frequently considered in the context of fluids undergoing macrophase liquid-gas separation (i.e., unstable droplet formation). In this context, ξ T /d constitutes a prefactor in the well-established 58 second-order inverse expansion of S (k) about the corresponding pre-peak at k pre d = 0:
$S(kd)\Big|_{k_{pre}d=0} \approx \frac{S(0)}{1 + (\xi_T/d)^2 (kd)^2}$   (8)
and one can identify the liquid-gas transition based on the divergence of ξ T /d → ∞, which signifies formation of "infinitely" persistent dense regions. For clustering systems dominated by frustrated interactions, one can analogously consider the ξ T /d encoded in the IRO pre-peak, which quantifies the persistence of the modulated dense structure in the fluid characterized by the finite lengthscale 2π/(k IRO d). Here, the inverse expansion about the pre-peak can be written:
$S(kd)\Big|_{k_{IRO}d>0} \approx \frac{S(k_{IRO}d)}{1 + (\xi_T/d)^2 (k - k_{IRO})^2 d^2}$   (9)
which can be readily rearranged to give:
$\frac{1}{S(kd)}\bigg|_{k_{IRO}d>0} \approx \frac{1}{S(k_{IRO}d)} + \frac{(\xi_T/d)^2}{S(k_{IRO}d)}\,(k - k_{IRO})^2 d^2$   (10)
This rearranged expression makes it clear that the combined prefactor (ξ T /d) 2 /S (k IRO d) is equivalent to the second-order coefficient in a Taylor series expansion of S −1 (kd). This equivalence provides a highly practical expression for calculating the IRO thermal correlation length
$\xi_T/d = \left[\frac{1}{2\,S(k_{IRO}d)}\left|\frac{d^2 S(kd)}{d(kd)^2}\right|_{k_{IRO}d>0}\right]^{1/2}$   (11)
where one must simply (1) record the pre-peak magnitude and (2) perform a polynomial fit about the pre-peak position k IRO d to obtain the second derivative.
In line with other systems that undergo frustrated microstructural transformations 7 , the peak-width clustering criterion posits that cluster formation should be characterized not by a true divergence in the IRO ξ T /d, but instead when the IRO ξ T /d first exceeds the only competing (characteristic) lengthscale in the system: the screening length of the repulsions κ −1 /d. In other words, the onset of clustering should occur when the IRO thermal correlation length reaches the Debye screening length, i.e.,
$\xi_T/d \approx \kappa^{-1}/d$   (12)
The remainder of this section aims to provide greater physical intuition for this criterion and to demonstrate how it performs versus simulations.
To get a better physical sense for this comparison between thermal correlation length and Debye length, consider Fig. 7(a), where we plot selected transforms of the total correlation function h(r) and the interparticle potential βu(r) that highlight how the constants ξ T /d and κ −1 /d reflect the characteristic exponential decays (negative slopes) of the pair structural correlations and repulsive barrier, respectively. Here, while repulsions are obviously defined by the exponential decay in Eqn. 3, it is also worth recalling that pair correlations have the form 58
$\lim_{r/d\to\infty} h(r) \propto (r/d)^{-1}\exp[-r/\xi_T]\cos[r k_{IRO} - \theta]$   (13)
where the cosine term captures the modulated nature of the IRO structure (it is not normally included for, e.g., simple fluids). By examining the profiles in Fig. 7 calculated for conditions (βε = 6.0) exceeding the Eqn. 12 condition, we can readily glean the features of h(r) that characterize cluster phases in the IRO ξ T /d framework: oscillations (humps) in transformed h(r) that asymptotically decay more slowly than the potential βu(r) (Fig. 7(a)), where these tell-tale oscillations mirror long-range oscillatory structure in h(r) that occurs on the lengthscale 2π/(k IRO d) (Fig. 7(b)) and sets the pre-peak in S (k) (Fig. 7(c)). In contrast, for a dispersed phase (here, βε = 1.5), one observes h(r) (transformed or not) decay quickly to zero and display no characteristic oscillations at any intercluster lengthscale. Comparing these cases, it is clear that by searching for sufficiently strong IRO thermal correlation lengths ξ T /d, we are looking for states that exhibit persistent coordination shell structure in h(r) at a "cluster-sized" scale. This is intuitive given a clustered phase ideally comprises intermediate-scale densified regions exhibiting disordered fluid structure in themselves.
Finally, we consider Fig. 8, where we directly test the ξ T /d ≈ κ −1 /d criterion by examining the S (k) profiles from our ≈ 100 simulated systems at the onset of clustering (i.e., at βε * ) and plotting the ξ T /d values extracted from the IRO pre-peaks versus the κ −1 /d values defining the respective repulsive interactions. We obtain the ξ T /d values via Eqn. 11, where we measure the pre-peak position and magnitude and then calculate the second derivative of S (k) based on a third-order polynomial curve centered at k IRO d and fitted over a ∆(kd) ≈ 0.20 range. To give a sense for the uncertainty in ξ T /d, note that we plot error bars corresponding to the standard deviation in ξ T /d values across the S (k) pre-peaks exhibited at attraction strengths βε = βε * and βε = βε * ± 0.05.
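A sketch of this extraction (locate the pre-peak, fit a local cubic, differentiate twice, and apply Eqn. 11) is given below. The pre-peak search window is an assumption of the sketch, chosen to exclude both the kd → 0 rise and the primary peak near kd ≈ 2π; the uncertainty estimate described above is omitted.

```python
import numpy as np

def xi_thermal(kd, S, search=(0.2, 3.0), half_window=0.10, order=3):
    """Thermal correlation length xi_T/d from the IRO pre-peak of a tabulated
    S(k), per Eqn. 11.  Returns (xi_T/d, k_IRO*d)."""
    sel = (kd > search[0]) & (kd < search[1])
    k_iro = kd[sel][np.argmax(S[sel])]
    s_iro = S[sel].max()
    fit = np.abs(kd - k_iro) <= half_window              # Delta(kd) ~ 0.20 total
    # cubic in (kd - k_IRO d); twice its quadratic coefficient is d^2S/d(kd)^2
    coeffs = np.polyfit(kd[fit] - k_iro, S[fit], order)
    d2S = 2.0 * coeffs[order - 2]
    return np.sqrt(abs(d2S) / (2.0 * s_iro)), k_iro
```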
So how does the pre-peak width criterion perform? Fig. 8 demonstrates that the emergence of clusters occurs when the IRO ξ T /d ≈ κ −1 /d for a wide variety of φ, Z, and κ −1 /d conditions, provided the interactions are governed by sufficiently large screening lengths (κ −1 /d ≥ 2.0). At smaller screening lengths, we clearly observe a systematic breakdown of the criterion shown by the empirical dashed line. In retrospect, this is somewhat unsurprising given that IRO pre-peaks manifesting equally diminutive correlation lengths would be very weak (flat), i.e., would not reflect persistent intercluster coordination shells. In turn, thinking about larger screening lengths beyond those tested (κ −1 /d > 4.0), we would note that the critical IRO ξ T /d likely exhibits weak dependence on κ −1 /d because these systems effectively approach the Coulombic limit for κ −1 /d ≥ 3.0 (see Figs. 1 and 2 and accompanying publication). Indeed, given the spread in the data, there is already little discernible difference between the critical IRO ξ T /d values recorded from the simulation sweeps at κ −1 /d = 3.0 and 4.0.
Taken altogether, we propose as a general guideline that to detect the onset of clustering, one search for the conditions at which the IRO thermal correlation length is within the range 2.0 ≤ ξ T /d ≤ 3.0 and where (given the discussion above) the pre-peak height simultaneously exceeds S (k IRO d) ≥ 2.7. This two-fold criterion is advantageous because it does not depend on the screening length κ −1 /d and, while this rule is necessarily inexact, it is nonetheless more empirically robust with respect to conditions (φ, Z, κ −1 /d), particularly over the intermediate screening lengths (one to three monomer diameters) common to clustering studies. We would also point out that this hybrid rule should serve as a lower bound with respect to βε for the appearance of clusters: above the critical βε * , we have generally observed a bandwidth in attraction strength of ∆ε ≈ 1.5k B T before clusters start to form arrested percolated networks that are tentatively classified as thermoreversible gels 32 .
In closing this discussion, we do note that the original pre-peak width criterion, which requires knowledge of κ −1 /d, can be applied based solely on knowledge of S (k) because one can not only extract the IRO ξ T /d, but also an estimate for κ −1 /d. (This is an alternative approach to estimating κ −1 /d based on Z, $\epsilon_R$, I, etc.) Here, one can recall 58 that the direct correlation function c(r) is generally understood to scale at long range as lim r/d→∞ c(r) ≈ −βu(r). Given that ĉ(k) = (ρd 3 ) −1 − [(ρd 3 )S (k)] −1 and c(r) = FT −1 [ĉ(k)], one can: (1) measure S (k); (2) convert it to ĉ(k); and (3) readily obtain c(r). This provides an approximate βu(r) profile, which can be plotted (as in Fig. 7) to deduce κ −1 /d from its slope at long distance. Thus, in principle, one can quantify the characteristic lengthscale of monomer-monomer repulsions in situ at arbitrary density.
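A sketch of this in-situ estimate is given below, assuming S(k) is available on a sufficiently broad and dense uniform kd grid; the r-grid and the tail-fitting window are choices of the sketch, and in practice one should verify that the fitted tail of −c(r) is dominated by the repulsion rather than by the IRO oscillations.

```python
import numpy as np

def estimate_kappa_inv(kd, S, rho_d3, r_fit=(3.0, 6.0)):
    """Estimate kappa^{-1}/d from S(k) alone: build c_hat(k), invert the radial
    Fourier transform to c(r), and fit the long-range decay using c(r) ~ -beta*u(r)."""
    c_hat = 1.0 / rho_d3 - 1.0 / (rho_d3 * S)            # from S(k) = [1 - rho*c_hat(k)]^{-1}
    r = np.linspace(1.0, 10.0, 2000)
    dk = kd[1] - kd[0]
    # inverse 3D transform of a radial function:
    # c(r) = (1/(2*pi^2*r)) * integral dk k*sin(k*r)*c_hat(k)
    integrand = kd[None, :] * np.sin(kd[None, :] * r[:, None]) * c_hat[None, :]
    c_r = integrand.sum(axis=1) * dk / (2.0 * np.pi ** 2 * r)
    # per Eqn. 3, -c(r)*r ~ exp[-(r - 1)/(kappa^{-1}/d)] at long range, so the
    # slope of ln[-c(r)*r] versus r gives -(kappa^{-1}/d)^{-1}
    sel = (r > r_fit[0]) & (r < r_fit[1]) & (-c_r * r > 0)
    slope = np.polyfit(r[sel], np.log(-c_r[sel] * r[sel]), 1)[0]
    return -1.0 / slope
```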
IV. CONCLUSIONS
We have tested how the existence, position, and shape of the IRO pre-peak in the structure factor S (k) can be interpreted for colloidal fluids that reversibly form self-limiting aggregate clusters due to isotropic competing SALR interactions between monomers. A major goal was to survey a wide array of parameter space spanning both monomer packing fraction (0.015 ≤ φ ≤ 0.120) and the variables controlling monomer-monomer interactions (including attraction strength βε, surface charge Z, and screening length κ −1 /d). The bulk of our findings draw upon results from MD simulations of approximately 100 different phases located along the locus of cluster formation, which exhibited relatively idealized morphologies comprising compact spherical clusters.
First, both IET calculations and MD simulations systematically corroborate the previous observations [16][17][18] that the existence of an IRO pre-peak in S (k) is a poor predictor of whether a phase is clustered. Notably, we observe that for many intermediate screening lengths (e.g., 0.3 < κ −1 /d < 1.0), IRO pre-peaks can form at wavenumbers k IRO d > 0 as βε increases, but subsequently shift to k pre d = 0, which corresponds to macroscopic lengthscales, before any microscopic cluster phases can form. Thus, IRO pre-peak formation does not even guarantee that a particular set of conditions (φ, Z, κ −1 /d) favors self-limited aggregation at any βε.
Provided a phase is clustered, we find that the position (wavenumber) of the IRO pre-peak k IRO d directly encodes the average real-space intercluster distance, where 2π/(k IRO d) = [N * /(ρd 3 )] 1/3 . This dependence on ρd 3 means that for fixed cluster size N * , k IRO d will show a systematic rightward shift with increasing φ. We add a note of caution that one cannot directly derive a real-space cluster diameter from S (k); to obtain a cluster diameter, one must possess an independent relation that can convert between N * and real-space lengthscale.
We next tested a previously-proposed criterion for detecting the onset of clustering based on the height (magnitude) of the IRO pre-peak, which states that the onset of clustering occurs when S (k IRO d) ≈ 2.7. Over our wide survey of states, we instead find that the pre-peak height at the onset of clustering frequently exceeds (by up to an order of magnitude) the S (k IRO d) ≈ 2.7 threshold because of the coupling between the shape of the IRO pre-peak and the kd → 0 limit of S (k), which is proportional to the system compressibility and is highly sensitive to both φ and the strength and lengthscale of interparticle repulsions. Thus, the condition S (k IRO d) ≥ 2.7 appears to be a minimum threshold for clustering, i.e., it is a necessary but not sufficient test for positively identifying clustered phases.
We then revisited an alternative criterion for detecting cluster formation based on the IRO pre-peak width, which encodes the thermal correlation length ξ T /d, where the criterion states that the onset of clustering occurs when ξ T /d ≈ κ −1 /d. We observe that this rule performs well for many different combinations of φ and Z provided that the screening length is in the range 2.0 ≤ κ −1 /d ≤ 4.0. However, the criterion breaks down at smaller κ −1 /d because clustered phases, which are characterized by intermediate-range coordination shells of aggregates, must correspondingly exhibit relatively large "threshold" IRO ξ T /d values.
Because both the pre-peak height and width criteria are only approximate across wide ranges of monomer interactions and packing fractions, we propose a hybrid heuristic for detecting the emergence of cluster phases based on S (k): search for the conditions where (1) the pre-peak height exceeds S (k IRO d) ≥ 2.7 and (2) the IRO thermal correlation length encoded in the pre-peak width simultaneously reaches the range 2.0 ≤ ξ T /d ≤ 3.0. The combination of these attributes should ensure that there is both a very strong signature of IRO but also slowly-decaying modulated pair correlations corresponding to well-developed coordination-shell pair structure between clusters. And though inexact, this rule does not require knowledge of κ −1 /d and should be reasonably robust to varying conditions and interparticle interactions.
In closing, we remark that beyond the connections considered here between pair correlations and clustering, there remain deep questions about whether one can alternatively identify conditions that favor clustering in SALR fluids based simply on the phase behavior of fluids with equivalent attractions but no repulsions, which exhibit macrophase separation. Indeed, previous work 18 has pointed to strong (predictive) overlap between the onset of clustering and underlying purely-attractive binodal boundaries in systems where temperature is the controlling parameter; meanwhile, our related work on systems where attraction strength is the controlling parameter points to correspondence at least in the limit of very weak repulsions 32 . A fruitful area of inquiry here would be to understand how closely one can map between the temperature-and attraction-strength-based frameworks, which would lend fundamental insights into when and how repulsions drive otherwise macrophase-separating systems to form equilibrium microphase morphologies.
V. ACKNOWLEDGMENTS
This work was partially supported by the National Science Foundation (1247945) and the Welch Foundation (F-1696). We acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources.
FIG. 1. (color online) (a) Pre-peak position k pre d in the structure factor S (k) as a function of attraction strength βε and screening length κ −1 /d for packing fraction φ = 0.03 and charge Z = 8.0, obtained from integral equation theory (IET). Color portions show conditions for which there is an IRO pre-peak at small but finite k IRO d > 0. Filled and unfilled circles delineate transitions between different peak behaviors in IET results. Squares denote critical attraction strengths βε * at the onset of clustering obtained from MD simulations. Note that the locus of IRO pre-peak emergence in simulations (not shown) overlaps with the filled circles from IET. (b,c,d) Structure factors obtained from IET (lines) and simulations (circles) for φ = 0.030 and Z = 8.0, where in (b) and (c) the results are for constant κ −1 /d values and βε = 1.5, 3.0, 4.0, 4.5, 5.0, and 6.0 (bottom to top). In (d), βε is constant with κ −1 /d = 0.1, 0.5, 0.8, 1.0, 2.0, 5.0 (top to bottom). Note that simulation results are not shown for every combination of βε and κ −1 /d. In all panels, IET results are based on monodisperse systems while simulation results are based on lightly polydisperse mixtures (see text).
FIG. 2. Pre-peak position k pre d in the structure factor S (k) as a function of attraction strength βε and screening length κ −1 /d obtained from IET for (a) packing fraction φ = 0.03, charge Z = 4.0; (b) φ = 0.03, Z = 8.0; (c) φ = 0.03, Z = 12.0; and (d) φ = 0.06, Z = 12.0. Filled and unfilled circles delineate transitions between different pre-peak behaviors in IET results. Squares denote critical attraction strengths βε * at the onset of clustering obtained from MD simulations. Note that the loci of IRO pre-peak emergence in simulations (not shown) overlap with the filled circles from IET. In all panels, IET results are based on monodisperse systems while simulation results are based on lightly polydisperse mixtures (see text).
FIG. 3. (a) Cluster radius of gyration R G /d versus characteristic cluster size N * , both measured from MD simulations at the onset of clustering (i.e., at critical attraction strengths βε * ). Blue, yellow, orange, and red symbols correspond to data from simulations at packing fractions φ = 0.015, 0.030, 0.060, and 0.120, respectively. Symbol types correspond to constant charge Z as listed in Table I (note that we test various screening lengths κ −1 /d at each Z). Lines are empirical fits of the form R G /d = αN * 1/3 , where α is a dimensionless prefactor corresponding to α = 0.45, 0.49, 0.53, and 0.60 for φ = 0.015, 0.030, 0.060, and 0.120, respectively. (b) Same data from (a), but rescaled to highlight the characteristic exponent m in the expression R G /d = αN * m , which corresponds to m = 1/d f with d f being the fractal dimension of the aggregates. Black line corresponds to R G /(αd) = N * 1/3 , with dark (light) purple regions denoting 10% (20%) deviation from this relation. (c) Relative shape anisotropy κ 2 of clusters measured from simulations at selected state points from (a), where state points were chosen to roughly span the range of observed equilibrium cluster sizes N * .
FIG. 4. Configuration snapshots from simulations of phases at the onset of clustering (i.e., at critical attraction strengths βε * ). The snapshots are at packing fraction φ = 0.060 and chosen to roughly span the range of observed equilibrium cluster sizes N * . Repulsions are defined by (a) charge Z = 15.0 and screening length κ −1 /d = 2.0; (b) Z = 10.0 and κ −1 /d = 1.5; (c) Z = 6.0 and κ −1 /d = 2.0; and (d) Z = 4.0 and κ −1 /d = 3.0. Blue, yellow, and orange shadings correspond to small, medium, and large particles in the polydisperse mixtures (see Methods). Visualizations were produced using VMD 62 .
FIG. 5. (a) Average intercluster center-to-center distance L C-C /d ≡ [N * /(ρd 3 )] 1/3 (see text), where ρd 3 is the bulk monomer density, versus inverse IRO pre-peak wavenumber (i.e., real-space distance) 2π/(k IRO d), both measured in MD simulations. (b) Cluster size N * versus IRO pre-peak lengthscale 2π/(k IRO d). (c) Cluster radius of gyration R G /d versus inverse IRO pre-peak wavenumber shifted by α and ρd 3 (combining Eqns. 6 and 7). In (a) and (c), thick lines denote 1:1 correspondence between x- and y-axes, with dark (light) purple regions denoting 10% (20%) deviation from this relation. In all panels, symbol types correspond to constant charge Z as listed in Table I (note that we test various screening lengths κ −1 /d at each Z).
FIG. 6. (a) IRO pre-peak height S (k IRO d) at onset of clustering (at βε * ) versus cluster size N * , both measured in MD simulations. Thick line denotes previously-proposed criterion 17,18 postulating that the emergence of clusters occurs as S (k IRO d) ≈ 2.7. Dark (light) purple regions denote 10% (20%) deviation from this relation. Color lines are guides to the eye for results from (top to bottom) φ = 0.015, 0.030, 0.060, and 0.120. Symbol types correspond to constant charge Z as listed in Table I (note that we test various screening lengths κ −1 /d at each Z). (b) Cluster size distributions p(N) and (c) structure factors calculated from MD simulations for packing fraction φ = 0.060, charge Z = 4.0, screening length κ −1 /d = 2.0, and attraction strengths βε = 3.50, 4.00, 4.30 and 4.55 (top to bottom in (b); bottom to top in (c)). The critical attraction strength is βε * = 4.55. In (b), we note the local minimum N min and maximum N * in p(N) that characterize the onset of clustering (see Methods). The dashed line in (c) marks S (k IRO d) = 2.7.
FIG. 7. (a) Log-positive transforms of the total correlation function (TCF) h(r) = g(r) − 1 and pair potential βu(r) for φ = 0.030, Z = 8.0, and κ −1 /d = 2.0, where solid lines correspond to the TCF transform of h(r) at βε = 1.5 (blue, lower) and 6.0 (red, upper), and the dashed line corresponds to βu(r) (note: h(r) profiles are obtained from IET). The two types of profiles are plotted to highlight their asymptotic decays at large r/d, with characteristic slopes m TCF and m REP , respectively. Note that the thermal correlation length ξ T /d ≈ 3.1 for βε = 6.0, which exhibits strong IRO. (b) Untransformed h(r) profiles for the same states as in (a), scaled to highlight long-range oscillations at βε = 6.0. (c) Structure factors obtained from IET at φ = 0.030, Z = 8.0, and κ −1 /d = 2.0, where βε = 1.5, 4.0, 5.0, and 6.0 from bottom to top. Here, the highlighted IRO wavenumber at βε = 6.0 is k IRO d = 1.02.
FIG. 8. IRO thermal correlation lengths ξ T /d extracted from S (k) profiles at onset of clustering (at βε * ) in MD simulations versus screening length κ −1 /d. Thick line denotes previously postulated criterion 32 for identifying onset of clustering (ξ T /d ≈ κ −1 /d), where dark (light) purple regions denote 10% (20%) deviation from this relation. Dotted line at shorter κ −1 /d corresponds to an empirical guideline with form ξ T /d = 1.0 + 0.5(κ −1 /d). Note that at a given κ −1 /d, symbols corresponding to different φ are slightly shifted horizontally to improve aesthetic clarity. Symbol types correspond to constant charge Z as listed in Table I (note that we test various screening lengths κ −1 /d at each Z).
TABLE I. Critical attraction strengths βε * determined from MD simulations at various φ as a function of surface charge Z and screening length κ −1 /d. Conditions with listed βε * values are those used for our analysis and discussion. Symbols below the Z values correspond to those used in Figs. 2-7 (symbols are kept constant for various κ −1 /d). Note that maximum repulsion strengths βA MAX (see Eqn. 4) are calculated based on a reference relative Bjerrum length of λ B /d = 0.014.
Z
3
4
6
8
10
12
15
1 M. Seul and D. Andelman, "Domain shapes and patterns: The phenomenology of modulated phases," Science 267, 476-483 (1995).
2 D. Langevin, "Microemulsions," Acc. Chem. Res. 21, 255-260 (1988).
3 E. Helfand, "Block copolymers, polymer-polymer interfaces, and the theory of inhomogeneous polymers," Acc. Chem. Res. 8, 295-299 (1975).
4 I. W. Hamley, "Introduction to block copolymers," in Developments in Block Copolymer Science and Technology (John Wiley & Sons, Ltd, 2004) pp. 1-29.
5 X. Zhang, H. Sun, and S. Yang, "Self-limiting assembly of two-dimensional domains from graphene oxide at the air/water interface," J. Phys. Chem. C 116, 19018-19024 (2012).
6 M. Cynthia Goh, W. Goldburg, and C. Knobler, "Phase separation of a binary liquid mixture in a porous medium," Phys. Rev. Lett. 58, 1008-1011 (1987).
7 S. Schemmel, D. Akcakayiran, G. Rother, A. Brulet, B. Farago, T. Hellweg, and G. H. Findenegg, "Phase separation of a binary liquid system in controlled-pore glass," MRS Proceedings 790, 1-6 (2003).
8 R. B. Jadrich and K. S. Schweizer, "Directing colloidal assembly and a metal-insulator transition using a quench-disordered porous rod template," Phys. Rev. Lett. 113, 208302 (2014).
9 R. P. Sear and W. M. Gelbart, "Microphase separation versus the vapor-liquid transition in systems of spherical particles," J. Chem. Phys. 110, 4582-4588 (1999).
10 D. Pini, G. Jialin, A. Parola, and L. Reatto, "Enhanced density fluctuations in fluid systems with competing interactions," Chem. Phys. Lett. 327, 209-215 (2000).
11 J. Wu, Y. Liu, W.-R. Chen, J. Cao, and S.-H. Chen, "Structural arrest transitions in fluids described by two yukawa potentials," Phys. Rev. E 70, 050401 (2004).
12 Y. Liu, W.-R. Chen, and S.-H. Chen, "Cluster formation in two-yukawa fluids," J. Chem. Phys. 122, 044507 (2005).
13 M. Broccio, D. Costa, Y. Liu, and S.-H. Chen, "The structural properties of a two-yukawa fluid: Simulation and analytical results," J. Chem. Phys. 124, 084501 (2006).
14 J.-M. Bomont, J.-L. Bretonnet, and D. Costa, "Temperature study of cluster formation in two-yukawa fluids," J. Chem. Phys. 132, 184508 (2010).
15 J. M. Kim, R. Castañeda-Priego, Y. Liu, and N. J. Wagner, "On the importance of thermodynamic self-consistency for calculating clusterlike pair correlations in hard-core double yukawa fluids," J. Chem. Phys. 134, 064904 (2011).
16 Y. Liu, L. Porcar, J. Chen, W.-R. Chen, P. Falus, A. Faraone, E. Fratini, K. Hong, and P. Baglioni, "Lysozyme protein solution with an intermediate range order structure," J. Phys. Chem. B 115, 7238-7247 (2011).
17 P. D. Godfrin, R. Castañeda-Priego, Y. Liu, and N. J. Wagner, "Intermediate range order and structure in colloidal dispersions with competing interactions," J. Chem. Phys. 139, 154904 (2013).
18 P. D. Godfrin, N. E. Valadez-Perez, R. Castañeda-Priego, N. J. Wagner, and Y. Liu, "Generalized phase behavior of cluster formation in colloidal dispersions with competing interactions," Soft Matter 10, 5061-5071 (2014).
19 G. Cigala, D. Costa, J.-M. Bomont, and C. Caccamo, "Aggregate formation in a model fluid with microscopic piecewise-continuous competing interactions," Molecular Physics 113, 2583-2592 (2015).
20 D. J. Herr, "Directed block copolymer self-assembly for nanoelectronics fabrication," J. Mater. Res. 26, 122-139 (2011).
21 C. M. Bates, T. Seshimo, M. J. Maher, W. J. Durand, J. D. Cushen, L. M. Dean, G. Blachut, C. J. Ellison, and C. G. Willson, "Polarity-switching top coats enable orientation of sub-10-nm block copolymer domains," Science 338, 775-779 (2012).
22 K. P. Johnston, J. A. Maynard, T. M. Truskett, A. U. Borwankar, M. A. Miller, B. K. Wilson, A. K. Dinin, T. A. Khan, and K. J. Kaczorowski, "Concentrated dispersions of equilibrium protein nanoclusters that reversibly dissociate into active monomers," ACS Nano 6, 1357-1369 (2012).
23 Note that clusters are differentiated from aggregates such as micelles because the characteristic size of the former need not be set by the monomer size.
Anomalously large equilibrium clusters of colloids. J Groenewold, W K Kegel, 10.1021/jp011646wJ. Phys. Chem. B. 105J. Groenewold and W. K. Kegel, "Anomalously large equilibrium clusters of colloids," J. Phys. Chem. B 105, 11702-11709 (2001), http://dx.doi.org/10.1021/jp011646w.
Equilibrium cluster phases and low-density arrested disordered states: The role of shortrange attraction and long-range repulsion. F Sciortino, S Mossa, E Zaccarelli, P Tartaglia, Phys. Rev. Lett. 9355701F. Sciortino, S. Mossa, E. Zaccarelli, and P. Tartaglia, "Equilibrium clus- ter phases and low-density arrested disordered states: The role of short- range attraction and long-range repulsion," Phys. Rev. Lett. 93, 055701 (2004).
Phase behavior of a fluid with competing attractive and repulsive interactions. A J Archer, N B Wilding, Phys. Rev. E. 7631501A. J. Archer and N. B. Wilding, "Phase behavior of a fluid with competing attractive and repulsive interactions," Phys. Rev. E 76, 031501 (2007).
Colloidal systems with competing interactions: from an arrested repulsive cluster phase to a gel. J C F Toledano, F Sciortino, E Zaccarelli, Soft Matter. 5J. C. F. Toledano, F. Sciortino, and E. Zaccarelli, "Colloidal systems with competing interactions: from an arrested repulsive cluster phase to a gel," Soft Matter 5, 2390-2398 (2009).
Cluster formation and bulk phase behavior of colloidal dispersions. T Jiang, J Wu, Phys. Rev. E. 8021401T. Jiang and J. Wu, "Cluster formation and bulk phase behavior of col- loidal dispersions," Phys. Rev. E 80, 021401 (2009).
Communication: Thermodynamic signatures of cluster formation in fluids with competing interactions. J.-M Bomont, J.-L Bretonnet, D Costa, J.-P Hansen, J. Chem. Phys. 13711101J.-M. Bomont, J.-L. Bretonnet, D. Costa, and J.-P. Hansen, "Commu- nication: Thermodynamic signatures of cluster formation in fluids with competing interactions," J. Chem. Phys. 137, 011101 (2012).
Equilibrium and non-equilibrium cluster phases in colloids with competing interactions. E Mani, W Lechner, W K Kegel, P G Bolhuis, Soft Matter. 10E. Mani, W. Lechner, W. K. Kegel, and P. G. Bolhuis, "Equilibrium and non-equilibrium cluster phases in colloids with competing interactions," Soft Matter 10, 4479-4486 (2014).
Cluster formation in fluids with competing short-range and long-range interactions. M B Sweatman, R Fartaria, L Lue, J. Chem. Phys. 140124508M. B. Sweatman, R. Fartaria, and L. Lue, "Cluster formation in fluids with competing short-range and long-range interactions," J. Chem. Phys. 140, 124508 (2014).
Origin and detection of microstructural clustering in fluids with spatial-range competitive interactions. R B Jadrich, J A Bollinger, K P Johnston, T M Truskett, Phys. Rev. E. 9142312R. B. Jadrich, J. A. Bollinger, K. P. Johnston, and T. M. Truskett, "Ori- gin and detection of microstructural clustering in fluids with spatial-range competitive interactions," Phys. Rev. E 91, 042312 (2015).
. T D Nguyen, B A Schultz, N A Kotov, S , T. D. Nguyen, B. A. Schultz, N. A. Kotov, and S. C.
Generic, phenomenological, on-the-fly renormalized repulsion model for self-limited organization of terminal supraparticle assemblies. Glotzer, Proc. Natl. Acad. Sci. U. S. A. 112Glotzer, "Generic, phenomenological, on-the-fly renormalized repul- sion model for self-limited organization of terminal supraparticle as- semblies," Proc. Natl. Acad. Sci. U. S. A. 112, E3161-E3168 (2015), http://www.pnas.org/content/112/25/E3161.full.pdf.
Recent advances in the theory and simulation of model colloidal microphase formers. Y Zhuang, P Charbonneau, 10.1021/acs.jpcb.6b05471J. Phys. Chem. B (Just Accepted). Y. Zhuang and P. Charbonneau, "Recent advances in the theory and sim- ulation of model colloidal microphase formers," J. Phys. Chem. B (Just Accepted) (2016), http://dx.doi.org/10.1021/acs.jpcb.6b05471.
Dynamical arrest in attractive colloids: The effect of long-range repulsion. A I Campbell, V J Anderson, J S Van Duijneveldt, P Bartlett, Phys. Rev. Lett. 94208301A. I. Campbell, V. J. Anderson, J. S. van Duijneveldt, and P. Bartlett, "Dy- namical arrest in attractive colloids: The effect of long-range repulsion," Phys. Rev. Lett. 94, 208301 (2005).
Structural and dynamical features of multiple metastable glassy states in a colloidal system with competing interactions. C L Klix, C P Royall, H Tanaka, Phys. Rev. Lett. 104165702C. L. Klix, C. P. Royall, and H. Tanaka, "Structural and dynamical fea- tures of multiple metastable glassy states in a colloidal system with com- peting interactions," Phys. Rev. Lett. 104, 165702 (2010).
Non-equilibrium cluster states in colloids with competing interactions. T H Zhang, J Klok, R Hans Tromp, J Groenewold, W K Kegel, Soft Matter. 8T. H. Zhang, J. Klok, R. Hans Tromp, J. Groenewold, and W. K. Kegel, "Non-equilibrium cluster states in colloids with competing interactions," Soft Matter 8, 667-672 (2012).
Self-assembly of self-limiting monodisperse supraparticles from polydisperse nanoparticles. Y Xia, T D Nguyen, M Yang, B Lee, A Santos, P Podsiadlo, Z Tang, S C Glotzer, N A Kotov, Nat. Nanotechnol. 7Y. Xia, T. D. Nguyen, M. Yang, B. Lee, A. Santos, P. Podsiadlo, Z. Tang, S. C. Glotzer, and N. A. Kotov, "Self-assembly of self-limiting monodis- perse supraparticles from polydisperse nanoparticles," Nat. Nanotechnol. 7, 479-479 (2012).
A colloidal model system with an interaction tunable from hard sphere to soft and dipolar. A Yethiraj, A Van Blaaderen, Nature. 421A. Yethiraj and A. van Blaaderen, "A colloidal model system with an interaction tunable from hard sphere to soft and dipolar," Nature 421, 513- 517 (2003).
Equilibrium cluster formation in concentrated protein solutions and colloids. A Stradner, H Sedgwick, F Cardinaux, W C K Poon, S U Egelhaaf, P Schurtenberger, Nature. 432A. Stradner, H. Sedgwick, F. Cardinaux, W. C. K. Poon, S. U. Egel- haaf, and P. Schurtenberger, "Equilibrium cluster formation in concen- trated protein solutions and colloids," Nature 432, 492-495 (2004).
Formation of the dynamic clusters in concentrated lysozyme protein solutions. L Porcar, P Falus, W.-R Chen, A Faraone, E Fratini, K Hong, P Baglioni, Y Liu, 10.1021/jz900127cJ. Phys. Chem. Lett. 1L. Porcar, P. Falus, W.-R. Chen, A. Faraone, E. Fratini, K. Hong, P. Baglioni, and Y. Liu, "Formation of the dynamic clusters in con- centrated lysozyme protein solutions," J. Phys. Chem. Lett. 1, 126-129 (2010), http://dx.doi.org/10.1021/jz900127c.
Observation of small cluster formation in concentrated monoclonal antibody solutions and its implications to solution viscosity. E J Yearley, P D Godfrin, T Perevozchikova, H Zhang, P Falus, L Porcar, M Nagao, J E Curtis, P Gawande, R Taing, I E Zarraga, N J Wagner, Y Liu, Biophys. J. 106E. J. Yearley, P. D. Godfrin, T. Perevozchikova, H. Zhang, P. Falus, L. Por- car, M. Nagao, J. E. Curtis, P. Gawande, R. Taing, I. E. Zarraga, N. J. Wag- ner, and Y. Liu, "Observation of small cluster formation in concentrated monoclonal antibody solutions and its implications to solution viscosity," Biophys. J. 106, 1763 -1770 (2014).
Effect of hierarchical cluster formation on the viscosity of concentrated monoclonal antibody formulations studied by neutron scattering. P D Godfrin, I E Zarraga, J Zarzar, L Porcar, P Falus, N J Wagner, Y Liu, 10.1021/acs.jpcb.5b07260pMID: 26707135J. Phys. Chem. B. 120P. D. Godfrin, I. E. Zarraga, J. Zarzar, L. Porcar, P. Falus, N. J. Wag- ner, and Y. Liu, "Effect of hierarchical cluster formation on the viscos- ity of concentrated monoclonal antibody formulations studied by neutron scattering," J. Phys. Chem. B 120, 278-291 (2016), pMID: 26707135, http://dx.doi.org/10.1021/acs.jpcb.5b07260.
Terminal supraparticle assemblies from similarly charged protein molecules and nanoparticles. J I Park, T D Nguyen, G De Queirós, J H Silveira, S Bahng, G Srivastava, K Zhao, P Sun, S C Zhang, N A Glotzer, Kotov, Nat. Commun. 5J. I. Park, T. D. Nguyen, G. de Queirós Silveira, J. H. Bahng, S. Srivas- tava, G. Zhao, K. Sun, P. Zhang, S. C. Glotzer, and N. A. Kotov, "Ter- minal supraparticle assemblies from similarly charged protein molecules and nanoparticles," Nat. Commun. 5 (2014).
J N Israelachvili, Intermolecular and Surface Forces. New York, NY, USAAcademic PressJ. N. Israelachvili, Intermolecular and Surface Forces (Academic Press, New York, NY, USA, 2011).
Absence of equilibrium cluster phase in concentrated lysozyme solutions. A Shukla, E Mylonas, E Di Cola, S Finet, P Timmins, T Narayanan, D I Svergun, Proc. Natl. Acad. Sci. U. S. A. 105A. Shukla, E. Mylonas, E. Di Cola, S. Finet, P. Timmins, T. Narayanan, and D. I. Svergun, "Absence of equilibrium cluster phase in concentrated lysozyme solutions," Proc. Natl. Acad. Sci. U. S. A. 105, 5075-5080 (2008), http://www.pnas.org/content/105/13/5075.full.pdf.
Do equilibrium clusters exist in concentrated lysozyme solutions?. A Stradner, F Cardinaux, S U Egelhaaf, P Schurtenberger, Proc. Natl. Acad. Sci. U. S. A. 105A. Stradner, F. Cardinaux, S. U. Egelhaaf, and P. Schurten- berger, "Do equilibrium clusters exist in concentrated lysozyme solutions?" Proc. Natl. Acad. Sci. U. S. A. 105, E75 (2008), http://www.pnas.org/content/105/44/E75.full.pdf.
Equilibrium cluster fluids: pair interactions via inverse design. R B Jadrich, J A Bollinger, B A Lindquist, T M Truskett, Soft Matter. 11R. B. Jadrich, J. A. Bollinger, B. A. Lindquist, and T. M. Truskett, "Equi- librium cluster fluids: pair interactions via inverse design," Soft Matter 11, 9342-9354 (2015).
Interactions and aggregation of charged nanoparticles in uncharged polymer solutions. G Pandav, V Pryamitsyn, V Ganesan, 10.1021/acs.langmuir.5b02885pMID: 26535914Langmuir. 31G. Pandav, V. Pryamitsyn, and V. Ganesan, "Interactions and aggregation of charged nanoparticles in uncharged polymer so- lutions," Langmuir 31, 12328-12338 (2015), pMID: 26535914, http://dx.doi.org/10.1021/acs.langmuir.5b02885.
Multibody interactions, phase behavior, and clustering in nanoparticlepolyelectrolyte mixtures. G Pandav, V Pryamitsyn, J Errington, V Ganesan, 10.1021/acs.jpcb.5b07905pMID: 26473468J. Phys. Chem. B. 119G. Pandav, V. Pryamitsyn, J. Errington, and V. Ganesan, "Multibody in- teractions, phase behavior, and clustering in nanoparticlepolyelectrolyte mixtures," J. Phys. Chem. B 119, 14536-14550 (2015), pMID: 26473468, http://dx.doi.org/10.1021/acs.jpcb.5b07905.
Counterion binding in polyelectrolyte theory. G S Manning, 10.1021/ar50144a004Acc. Chem. Res. 12G. S. Manning, "Counterion binding in polyelectrolyte theory," Acc. Chem. Res. 12, 443-449 (1979), http://dx.doi.org/10.1021/ar50144a004.
Charge renormalization, osmotic pressure, and bulk modulus of colloidal crystals: Theory. S Alexander, P M Chaikin, P Grant, G J Morales, P Pincus, D Hone, J. Chem. Phys. 80S. Alexander, P. M. Chaikin, P. Grant, G. J. Morales, P. Pincus, and D. Hone, "Charge renormalization, osmotic pressure, and bulk modulus of colloidal crystals: Theory," J. Chem. Phys. 80, 5776-5781 (1984).
Counterion condensation in micellar and colloidal solutions. G V Ramanathan, J. Chem. Phys. 88G. V. Ramanathan, "Counterion condensation in micellar and colloidal solutions," J. Chem. Phys. 88, 3887-3892 (1988).
Counterion condensation on spheres in the salt-free limit. D A J Gillespie, J E Hallett, O Elujoba, A F Hamzah, R M Richardson, P Bartlett, Soft Matter. 10D. A. J. Gillespie, J. E. Hallett, O. Elujoba, A. F. Che Hamzah, R. M. Richardson, and P. Bartlett, "Counterion condensation on spheres in the salt-free limit," Soft Matter 10, 566-577 (2014).
Theory of the stability of strongly charged lyophobic sols and of the adhesion of strongly charged particles in solution of electrolytes. B V Derjaguin, L Landau, Acta Physicochim. URSS. 14B. V. Derjaguin and L. Landau, "Theory of the stability of strongly charged lyophobic sols and of the adhesion of strongly charged particles in solution of electrolytes," Acta Physicochim. URSS 14, 633-662 (1941).
E J Verwey, J T G Overbeek, Theory of the Stability Lyophobic Colloids. New York, NY, USAElsevierE. J. Verwey and J. T. G. Overbeek, Theory of the Stability Lyophobic Colloids (Elsevier, New York, NY, USA, 1948).
Roles of repulsive and attractive forces in liquids: The optimized random phase approximation. H C Andersen, D Chandler, J D Weeks, J. Chem. Phys. 56H. C. Andersen, D. Chandler, and J. D. Weeks, "Roles of repulsive and attractive forces in liquids: The optimized random phase approximation," J. Chem. Phys. 56, 3812-3823 (1972).
J.-P Hansen, I R Mcdonald, Theory of Simple Liquids. New York, NY, USAAcademic Press3rd ed.J.-P. Hansen and I. R. McDonald, Theory of Simple Liquids, 3rd ed. (Aca- demic Press, New York, NY, USA, 2006).
Thermodynamic selfconsistency criterion in the mixed integral equation theory of liquid structure. J Bergenholtz, N J Wagner, B D'aguanno, Phys. Rev. E. 53J. Bergenholtz, N. J. Wagner, and B. D'Aguanno, "Thermodynamic self- consistency criterion in the mixed integral equation theory of liquid struc- ture," Phys. Rev. E 53, 2968-2971 (1996).
Fast parallel algorithms for short-range molecular dynamics. S Plimpton, J. Comput. Phys. 117S. Plimpton, "Fast parallel algorithms for short-range molecular dynam- ics," J. Comput. Phys. 117, 1-19 (1995).
Shape of unperturbed linear polymers: polypropylene. D N Theodorou, U W Suter, 10.1021/ma00148a028Macromolecules. 18D. N. Theodorou and U. W. Suter, "Shape of unperturbed linear polymers: polypropylene," Macromolecules 18, 1206-1214 (1985), http://dx.doi.org/10.1021/ma00148a028.
VMD -Visual Molecular Dynamics. W Humphrey, A Dalke, K Schulten, J. Molec. Graphics. 14W. Humphrey, A. Dalke, and K. Schulten, "VMD -Visual Molecular Dynamics," J. Molec. Graphics 14, 33-38 (1996).
Elastic properties of glasses. H He, M F Thorpe, Phys. Rev. Lett. 54H. He and M. F. Thorpe, "Elastic properties of glasses," Phys. Rev. Lett. 54, 2107-2110 (1985).
Phase transitions of the Lennard-Jones system. J P Hansen, L Verlet, Phys. Rev. 184J. P. Hansen and L. Verlet, "Phase transitions of the Lennard-Jones sys- tem," Phys. Rev. 184, 151-161 (1969).
CHANDRA OBSERVATIONS OF NGC 7212: LARGE-SCALE EXTENDED HARD X-RAY EMISSION
DOI: 10.3847/1538-4357/ab76c8 | arXiv:2003.02271 | https://arxiv.org/pdf/2003.02271v1.pdf
Draft version March 6, 2020. Typeset using LaTeX twocolumn style in AASTeX61.

Mackenzie L. Jones (1), G. Fabbiano (1), Martin Elvis (1), A. Paggi (2), M. Karovska (1), W. P. Maksym (1), A. Siemiginowska (1), and J. Raymond (1)

(1) Center for Astrophysics | Harvard & Smithsonian, 60 Garden St, Cambridge, MA 02138, USA
(2) INAF-Osservatorio Astrofisico di Torino, Via Osservatorio 20, 10025 Pino Torinese, Italy

Keywords: galaxies: active; X-rays: galaxies

ABSTRACT
Recent observations of nearby Compton thick (CT) active galactic nuclei (AGNs) with Chandra have resolved hard (> 3 keV) X-ray emission extending out from the central supermassive black hole to kiloparsec scales, challenging the long-held belief that the characteristic hard X-ray continuum and fluorescent Fe K lines originate in the inner ∼parsec due to the excitation of obscuring material. In this paper we present the results of the most recent Chandra ACIS-S observations of NGC 7212, a CT AGN in a compact group of interacting galaxies, with a total effective exposure of ∼150 ks. We find ∼20% of the observed emission is found outside of the central kiloparsec, with ∼17% associated with the soft X-rays, and ∼3% with hard X-ray continuum and Fe K line. This emission is extended both along the ionization cone and in the cross-cone direction up to ∼3.8 kpc scales. The spectrum of NGC 7212 is best represented by a mixture of thermal and photoionization models that indicate the presence of complex gas interactions. These observations are consistent with what is observed in other CT AGN (e.g., ESO 428−G014, NGC 1068), providing further evidence that this may be a common phenomenon. High-resolution observations of extended CT AGN provide an especially valuable environment for understanding how AGN feedback impacts host galaxies on galactic scales.
1. INTRODUCTION
At the center of essentially every massive galaxy is a supermassive black hole (SMBH). These SMBHs emit enormous amounts of energy as active galactic nuclei (AGNs) powered by accretion onto the black hole (see Kormendy & Ho 2013;Padovani et al. 2017 for a review). In the AGN unified model energy is reflected, transmitted, and absorbed as it propagates out from the central nucleus, leaving traces of the AGN geometry on the observed multiwavelength emission (e.g., Lawrence & Elvis 1982;Antonucci 1993;Urry & Padovani 1995;Netzer 2015).
Until recently, it was believed that the characteristic hard X-ray continuum and fluorescent Fe K lines typical of an AGN could only originate from the excitation of obscuring material in the inner parsecs. In this classical picture, the central SMBH and accretion disk are closely surrounded by an optically thick, molecular torus-like structure. This torus acts as an efficient screen, such that radiation propagates along the opening angle as an ionization cone via direct transmission and reflection off of the obscuring material, while being completely attenuated in the cross-cone plane. Recent observations, however, have uncovered the presence of hard X-ray emission on ∼kiloparsec scales in the direction of the ionization cone (e.g., Circinus, Arévalo et al. 2014; NGC 1068, Bauer et al. 2015; ESO 428−G014, Fabbiano et al. 2017) and cross-cone (e.g., ESO 428−G014, Fabbiano et al. 2018a,b, 2019) in nearby Compton thick (CT) AGN. The high column densities (log N_H > 23 cm^−2) of these CT AGN uniquely allow for these types of investigations, as the obscuration depletes the X-ray emission of the central point-like source, revealing the extended material.
In this paper, we present the results of an investigation into the presence of extended hard X-ray emission in NGC 7212, a nearby (z = 0.0266; D_lum ∼ 115 Mpc) Seyfert 2 galaxy with a heavily obscured AGN (log M_BH = 7.54; log L/L_Edd = −1.55; Hernández-García et al. 2015) located in a compact group of three interacting galaxies (e.g., Muñoz et al. 2007). NGC 7212 is part of a sample of nearby CT AGN in normal Seyfert 2 galaxies (as classified by their optical emission line ratios) with bright [O III] λ5007 cores and no history of nuclear starbursts (Levenson et al. 2006). Other CT AGN in this sample have already been mentioned as exhibiting extended X-ray emission on ∼kiloparsec scales (e.g., NGC 1068, Bauer et al. 2015; ESO 428−G014, Fabbiano et al. 2017, 2018a,b, 2019), but NGC 7212 is the most distant of all of these nearby sources and thus is a valuable addition to this recent work in establishing the ubiquity of extended hard X-ray emission.
Previous observations of NGC 7212 have found kiloparsec-scale, diffuse, extended optical narrow line emission (ENLR; e.g., Wasilewski 1981; Falcke et al. 1998; Schmitt et al. 2003; Cracco et al. 2011; Congiu et al. 2017), and polarized optical broad line emission (e.g., Tran 1995a,b; Veilleux et al. 1997). It has a compact double radio source (extent 0.7″) with moderate radio power aligned with the elongated narrow line emission (e.g., P.A. ∼ −7°, Falcke et al. 1998; Drake et al. 2003). In the X-rays, NGC 7212 has previously been established as nonvariable, with a complex X-ray spectrum exhibiting the characteristic features of a CT AGN (e.g., Risaliti et al. 2000; Guainazzi et al. 2005; Bianchi et al. 2006; Levenson et al. 2006; Singh et al. 2011; Severgnini et al. 2012; Hernández-García et al. 2015; Marchesi et al. 2018).
To this extensive multiwavelength coverage, we have added deep X-ray observations of NGC 7212 for a cumulative Chandra exposure of 149.87 ks, in order to piece together a detailed picture of the morphological and spectral properties of this CT AGN. We describe these observations and the data reduction in Section 2, and report on the spatial and spectral properties of the nuclear and extended emission in Sections 3 and 4, respectively. In Section 5 we discuss the results of the spectral analysis and the implications of an extended hard X-ray component. Our findings and conclusions are summarized in Section 6.
2. OBSERVATIONS AND ANALYSIS
We obtained three Chandra ACIS-S observations of NGC 7212 (ObsIDs: 20372, 21668, 21672;P.I. Fabbiano) and combined these observations with an additional archival Chandra ACIS-S observation of this galaxy (ObsID: 4078; P.I. Kraemer) to generate a dataset with a cumulative effective exposure time of 147.82 ks (Table 1). These observations were then reprocessed and analyzed using CIAO 4.11 and CALDB 4.8.2 to enable subpixel analysis (Tsunemi et al. 2001;Wang et al. 2011). Each individual observation was inspected for high background flares (≥3σ) and all were deemed acceptable.
All four observations were exposure corrected and merged, following the CIAO merge threads (http://cxc.harvard.edu/ciao/threads/combine/ and http://cxc.harvard.edu/ciao/threads/merge_all/), using ObsID 20372 as the reference frame, centered at (J2000) RA = 22:07:02.03 (331°45′30.44″), decl. = +10:14:01.27 (10°14′1.27″). We first visually inspected each observation and determined that manually shifting to the reference image would allow for the best alignment. The final shifts (in native ACIS pixels) are listed in Table 1. The full-band (0.3-7.0 keV) merged image of NGC 7212 and its companion interacting galaxies is shown in Figure 1 (right). Contours corresponding to this merged 0.3-7.0 keV image are also shown overlaid on a g-band Pan-STARRs deep-stack image in Figure 1 (left). We limited our analysis to the 0.3-7.0 keV band, despite typically reliable Chandra coverage up to 8.0 keV, due to significant noise.
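For readers wishing to reproduce this step, the following is a minimal sketch of the merge using the CIAO merge_obs script called from Python; the event-file paths, output root, and effective band energy are hypothetical choices, and the manual astrometric shifts listed in Table 1 (e.g., applied with wcs_update) are not shown.

    import subprocess

    # Hypothetical reprocessed event files for the four ObsIDs.
    evt2 = ",".join([
        "4078/repro/acisf04078_repro_evt2.fits",
        "20372/repro/acisf20372_repro_evt2.fits",
        "21668/repro/acisf21668_repro_evt2.fits",
        "21672/repro/acisf21672_repro_evt2.fits",
    ])

    # merge_obs reprojects the events to a common tangent point and builds
    # exposure-corrected merged images, here binned to 1/8 of an ACIS pixel.
    # The 2.3 keV effective energy for the exposure map is an assumed value.
    subprocess.run(
        ["merge_obs", evt2, "merged/ngc7212",
         "bands=0.3:7.0:2.3", "binsize=0.125"],
        check=True,
    )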
3. SPATIAL ANALYSIS

The resolution achieved by Chandra is unmatched in the X-rays and provides a unique opportunity to study the detailed morphological characteristics of NGC 7212 on subarcsecond scales (1″ = 556 pc). Using the CIAO image analysis tools available in SAOImage ds9, we investigated the spatial characteristics of NGC 7212 by slicing the X-ray emission into six energy bins and generating corresponding images and radial profiles, following the methodology in Fabbiano et al. (2018a).
The images were built from 1/8 subpixel data and adaptively smoothed using dmimgadapt in the ds9 CIAO package, for a 0.5−15 pixel scale with 5 counts under the kernel for 30 iterations (Figure 2). The smoothing parameters were selected to optimize the details of the extended diffuse emission. Each energy-sliced image reveals a bright nucleus with fainter diffuse large-scale emission. Focusing on the top two panels of Figure 2, there is obvious ∼kiloparsec-scale extended emission in the soft energy bands (< 3 keV). Likewise, at higher energies, focusing in particular on the 6.1−6.5 keV regime where we expect strong Fe K fluorescence (bottom right panel), extended diffuse emission is present, albeit on smaller scales than in the soft X-rays.
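A sketch of the corresponding dmimgadapt call is given below; the file names are placeholders, and mapping the quoted "30 iterations" onto the numrad parameter (the number of kernel scales tried) is our assumption rather than a documented equivalence.

    import subprocess

    subprocess.run(
        ["dmimgadapt",
         "infile=merged/ngc7212_0.3-7.0_thresh.img",   # placeholder input image
         "outfile=ngc7212_0.3-7.0_adapt.img",          # placeholder output image
         "function=gaussian",                          # gaussian smoothing kernel
         "minrad=0.5", "maxrad=15",                    # 0.5-15 pixel kernel scales
         "numrad=30", "radscale=log",                  # 30 scales (assumed mapping)
         "counts=5"],                                  # ~5 counts under the kernel
        check=True,
    )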
From the 1/8 subpixel data that was used to generate the smoothed images, we extracted radial surface brightness profiles to quantify the significance of the extended emission. Based on an azimuthal projection of the surface brightness radiating outward from the central nucleus, we slice our data into two cone regions opening outward from the central nucleus in the south−north (cone) and west−east (cross-cone) directions (Figure 1; right). Interestingly, NGC 7212 does not exhibit a strong azimuthal dependence, unlike what has been found for many other extended hard X-ray sources (e.g., NGC 1068, Bauer et al. 2015; ESO 428−G014, Fabbiano et al. 2018a). Thus we define our cone angles by opposing 90° wedges centered around the cardinal points (e.g., the south−north cone is defined by 90° angles that are ±45° around the north and south axes).
The cone opening angles that we assume in this work are a conservative estimate.

Figure 1. Left: X-ray contours (black) corresponding to the merged 0.3−7.0 keV Chandra ACIS image of the three-system interacting galaxy group, including the spiral galaxy NGC 7212 (southwest source), overlaid on a g-band Pan-STARRs deep stack. Right: merged 0.3−7.0 keV Chandra ACIS image of NGC 7212 with applied adaptive gaussian smoothing (dmimgadapt; 0.5−15 pixel scales, 5 counts under kernel, 30 iterations) on image pixel = 1/8 ACIS pixel. The image contours are logarithmic, with colors corresponding to the number of counts per image pixel. The nuclear 1″ (0.556 kpc) circular region and annular 8″ (4.448 kpc) region split into four quadrants, used in the spatial analysis of NGC 7212, are shown in white.

Concentric annuli were generated out to 8.0″ (4.448 kpc) for each energy band in SAOImage ds9, starting with a width of 1.0″ (0.556 kpc) and increasing as necessary in the outer regions to maintain a minimum of 10 counts. These extracted surface brightness profiles were then background subtracted and compared to a set of Chandra point-source functions (PSFs) generated with ChaRT and MARX 5.4.0, following the CIAO PSF simulation threads, for the given centroid positions and energy bands (Figure 3). The PSF models were normalized to the source counts in the central 1.0″ circular region. Within the 8.0″ radius and excluding the nuclear region (inner 1.0″ circle), the full energy band (0.3−7.0 keV) contains 570.2 ± 23.9 net excess counts above the Chandra PSF in the south−north cone and 241.8 ± 15.6 net excess counts in the west−east cross-cone (Figure 3). We further explore these excess counts as a function of energy in each cone region (Table 2). Of note, the high energy band where we would expect to see Fe K fluorescence (6.1−6.5 keV) contains a significant excess of 15.2 ± 3.9 total counts. The radial profiles for the south−north and west−east regions as a function of energy are shown in Figures 4 and 5, respectively. In both the north and east quadrants, the surface brightness falls below the PSF in the inner ∼1.0″ (0.556 kpc). We do not expect this to be caused by pileup, as we estimate using PIMMS less than 1% pileup for this source for both the individual and merged observations. Rather, this feature may be a consequence of CT obscuration or even strong nuclear absorption by a dust lane, as suggested by optical observations along the northeast direction.

Figure 3. Background-subtracted radial profiles of NGC 7212 for the full energy band (0.3−7.0 keV), compared to the Chandra PSF renormalized to the 1.0″ (0.556 kpc) nuclear region, for the (left) south to north and (right) west to east regions. Each bin contains a minimum of 10 counts and is shown with 1σ errors. A dashed horizontal line indicates the level of background emission; points below this line are valid data, since the background has already been subtracted from these profiles. We find excess emission outside of the central nucleus (∼1.0″, 0.556 kpc) in each of the given regions.
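The excess-count bookkeeping described here reduces to a renormalization and a subtraction. The sketch below uses placeholder numbers rather than the measured profiles, purely to make the arithmetic explicit.

    import numpy as np

    # Placeholder values; the real inputs are the background-subtracted
    # radial profiles and the ChaRT/MARX-simulated PSF in the same bins.
    src_annuli = np.array([120.0, 80.0, 55.0, 40.0, 30.0])   # net counts, 1"-8" annuli
    psf_annuli = np.array([60.0, 25.0, 12.0, 6.0, 3.0])      # simulated PSF counts
    src_core, psf_core = 650.0, 700.0                        # counts inside the central 1.0"

    psf_scaled = psf_annuli * (src_core / psf_core)   # renormalize the PSF to the core
    excess = src_annuli - psf_scaled                  # counts above the PSF wings
    excess_err = np.sqrt(src_annuli + psf_scaled)     # rough Poisson error per bin

    total_excess = excess.sum()
    total_err = np.sqrt((excess_err ** 2).sum())
    print(f"net excess counts: {total_excess:.1f} +/- {total_err:.1f}")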
Outside of this feature, we find extended emission on kiloparsec scales (up to 8.0 ∼ 4.5 kpc) in the cone and cross-cone directions, although less significant in the cross-cone region. Analyzing the surface brightness on larger azimuthal scales outside of the 8.0 circular region becomes challenging due to possible contamination from the interacting group members, especially in the northern cone.
Following the methodology of Fabbiano et al. (2018a), we compare the extent of the diffuse emission in each energy bin by calculating the FWHM of the radial profiles in log space. This essentially normalizes the brightness in each energy band, minimizing the bias in the measured extent caused by higher signal-to-noise ratios. This is especially true at low energies where there are significantly more counts. We fit the radial profiles using a spline approximation, or for energies requiring wider bins (and therefore fewer points) we use a gaussian+polynomial curve. The errors (corresponding to 1σ) are derived from a Monte Carlo error analysis driven by the uncertainty associated with the adaptive binning.
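A minimal sketch of such a width measurement is given below, assuming a one-sided radial profile that peaks in the innermost bin (so the half-width is simply doubled); the profile values and the 10% per-bin uncertainties are placeholders, and the spline-based approach here illustrates the idea rather than the exact fitting functions used in the analysis.

    import numpy as np
    from scipy.interpolate import InterpolatedUnivariateSpline

    def width_at_fraction(r, sb, frac=0.5):
        """Full width at `frac` of the peak, measured on log10(surface brightness)."""
        logsb = np.log10(sb)
        level = logsb.max() + np.log10(frac)              # half (or 1%) of peak, in log space
        spline = InterpolatedUnivariateSpline(r, logsb - level, k=3)
        roots = spline.roots()                            # radius where the profile crosses the level
        return 2.0 * roots.min() if len(roots) else np.nan

    # Placeholder profile (radius in arcsec, surface brightness in counts/arcsec^2).
    r = np.array([0.5, 1.5, 2.5, 3.5, 4.5, 5.5, 6.5, 7.5])
    sb = np.array([200.0, 90.0, 35.0, 14.0, 6.0, 3.0, 1.5, 0.8])

    fwhm = width_at_fraction(r, sb, 0.5)

    # Simple Monte Carlo on assumed 10% per-bin uncertainties.
    rng = np.random.default_rng(0)
    trials = [width_at_fraction(r, rng.normal(sb, 0.1 * sb)) for _ in range(500)]
    print(fwhm, np.nanstd(trials))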
We find that the FWHM decreases slightly with increasing energy in the cone direction (as in Fabbiano et al. 2018a), but there is no significant trend in the FWHM with energy in the cross-cone direction (Figure 6; filled circles). The average width is consistent between the cone (red) and cross-cone (blue) directions, 2.10″ ± 0.23″ (∼1.17 kpc) and 2.15″ ± 0.75″ (∼1.20 kpc), respectively.
To better understand the extent of the surface brightness, we also calculate the width at 1% of the peak emission in each energy bin (Figure 6; open circles). Compared to the FWHM calculations, we find a larger discrepancy between the cone and cross-cone directions. Below ∼4 keV, the average width in the cone direction is 6.50″ ± 0.47″ (∼3.66 kpc), compared to 4.41″ ± 0.59″ (∼2.45 kpc) in the cross-cone direction. Above ∼4 keV, the 1% width drops to 3.83″ ± 0.12″ (∼2.13 kpc) between 5.0 and 7.0 keV, and becomes consistent with the extent in the cross-cone direction. Similar trends in the surface brightness extent (as a function of energy) have been observed for ESO 428−G014 (Fabbiano et al. 2018a), for which extended emission in both the cone and cross-cone directions is observed, including that the cross-cone extent of ESO 428−G014 drops at lower energies compared to the cone direction.

Figure 4. Background-subtracted radial profiles of NGC 7212 for the indicated energy bands, compared to the Chandra PSF renormalized to the 1.0″ (0.556 kpc) nuclear region, for the south−north region. Each bin contains a minimum of 10 counts and is shown with 1σ errors. A dashed horizontal line indicates the level of background emission; points below this line are valid data, since the background has already been subtracted from these profiles.

Figure 6. Emission extent as a function of energy, calculated from the radial profiles in Figures 4 and 5, for both the south−north and west−east regions. Radial profiles with fewer than four points were excluded from this analysis. Filled circles: full width at half maximum surface brightness. Empty circles: full width at 1% of the peak surface brightness. 1σ errors are shown.
4. SPECTRAL ANALYSIS

To characterize the extended X-ray emission, we extracted the spectrum of NGC 7212 for three regions centered on the peak counts at ( ). We binned the spectra to have a minimum of 20 counts bin^−1 in the 8.0″ circular region and 1.5″ nuclear region, and a minimum of 10 counts bin^−1 in the 1.5″−8.0″ annulus region, and fit them to models using Sherpa in the 0.3−7.0 keV energy band. The 0.3−7.0 keV spectrum extracted from the 8.0″ circular region is shown in Figure 8. NGC 7212 has a complex soft excess, strong Fe Kα emission, and clear, distinct emission lines between 2 and 6 keV.
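A sketch of this spectral setup in Sherpa's Python interface is shown below. The PHA file names are hypothetical (the spectra and responses would come from a tool such as the CIAO specextract script), background subtraction assumes a background spectrum is associated with each PHA file, and the fit statistic named here is an assumed choice, not necessarily the one used in the analysis.

    from sherpa.astro.ui import *

    load_pha(1, "ngc7212_r8_src.pi")       # 8.0" circular region (placeholder name)
    load_pha(2, "ngc7212_r1p5_src.pi")     # 1.5" nuclear region
    load_pha(3, "ngc7212_annulus_src.pi")  # 1.5"-8.0" annulus

    for did, counts in [(1, 20), (2, 20), (3, 10)]:
        subtract(did)               # subtract the associated background spectrum
        group_counts(did, counts)   # 20 (or 10) counts per bin, as in the text
        notice_id(did, 0.3, 7.0)    # restrict the fit to 0.3-7.0 keV

    set_stat("chi2gehrels")         # assumed statistic choice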
4.1. PEXRAV + Emission Line Models
We first fit the hard continuum in each region using a simple reflection PEXRAV model (foldE = 300; rel_refl = 100; abund = Fe_abund = 1; cosIncl = 0.45) with the photon index frozen at Γ = 1.9, plus Galactic absorption. To this simple model we then systematically add unresolved emission lines, allowing the energy and amplitude to vary, while the redshift is kept frozen. We use a combination of fit statistics, significance of the emission line fluxes, and visual inspection of the residuals to justify the addition of another line. The best-fit emission line models are listed in Table 3.
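The sketch below shows how such a continuum-plus-line model can be assembled in Sherpa (built with the XSPEC models) using pexrav and a redshifted gaussian; the Galactic column value is a placeholder, and the block illustrates the model structure rather than reproducing the exact fit performed here.

    from sherpa.astro.ui import *

    gal = create_model_component("xsphabs", "gal")     # Galactic absorption
    cont = create_model_component("xspexrav", "cont")  # reflection continuum
    fek = create_model_component("xszgauss", "fek")    # Fe K-alpha line

    set_source(1, gal * (cont + fek))

    gal.nH = 0.05            # 1e22 cm^-2; placeholder Galactic column
    cont.PhoIndex = 1.9      # photon index fixed at Gamma = 1.9
    cont.foldE = 300.0
    cont.rel_refl = 100.0
    cont.abund = 1.0
    cont.Fe_abund = 1.0
    cont.cosIncl = 0.45
    cont.redshift = 0.0266
    freeze(gal.nH, cont.PhoIndex, cont.foldE, cont.rel_refl,
           cont.abund, cont.Fe_abund, cont.cosIncl, cont.redshift)

    fek.LineE = 6.4          # rest-frame energy; energy and norm left free
    fek.redshift = 0.0266
    freeze(fek.redshift)

    fit(1)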
For each region of interest, we find blended emission lines below ∼1.5 keV and distinct emission lines above ∼1.5 keV, with the most significant lines at ∼1.8 keV (Si XIII) and at ∼6.4 keV, where we expect to see the Fe Kα fluorescent line. There is a degree of uncertainty in the measured energies and amplitudes of the low-energy blended lines, but they are consistent with lines (e.g., O VII, Ne IX, Mg XI) observed in other AGN spectra (e.g., Koss et al. 2015; Fabbiano et al. 2018a; Maksym et al. 2019) and identified in the NIST Atomic Spectra Database. The 0.3−7.0 keV spectrum extracted from the 8.0″ circular region is shown in Figure 8 with the best-fit PEXRAV and line models. In the hard X-rays (3.0−7.0 keV) and for all regions, we find the characteristic Fe Kα emission line. The Fe Kα emission is dominated by the neutral emission at ∼6.4 keV. The fit benefited from the addition of a broad, weak emission line surrounding the neutral Fe line, potentially caused by neutral Fe wings or the presence of blended Fe XXV emission.

Figure 8. NGC 7212 spectrum extracted from an 8.0″ (4.448 kpc) circular region centered on the nucleus, with the best-fit PEXRAV (Γ = 1.9) + n-gaussian lines + galactic absorption model (top) and best-fit residuals (bottom). Fit information may be found in Table 3.
We also find strong, significant emission lines in the hard X-rays around 2.9, 3.6, and 5.2 keV (redshift corrected for the distance of NGC 7212). The presence of these lines is not expected for a typical CT AGN (e.g., Koss et al. 2015;Maksym et al. 2019) and presents an interesting challenge for line classification. Our best identifications for the ∼2.9 and ∼3.6 keV line are species of calcium fluorescence lines, or varieties of argon (Ar XVII, argon Kα fluorescence lines). The ∼5.2 keV emission line that appears to be confined to the inner nuclear region has not yet been identified. It is possible that we are observing the effects of cosmic spallation of the obscuring material such that vanadium Kα emission is enhanced (e.g., Skibo 1997;Turner & Miller 2010;Gallo et al. 2019), similar to observations of M51 (Xu et al. 2016). It is unlikely that this observed emission is due to the ACIS background, which is fairly well understood at these energies 12 .
4.2. Physical Models
Beyond understanding what emission lines are found in NGC 7212, it is possible to investigate the physical origin of the X-ray emission using more complex photoionization and thermal spectral models. We start building these physical models with the best-fit continuum + Fe Kα emission line spectral model for each region, as described in Section 4.1. To this we add a photoionization and/or thermal model, one component at a time, testing the fit statistics after each addition and estimating the improvement using the F -test (Tables 4, 6, 7; described in subsequent sections). Because the F - test has been shown to be unreliable in some cases (e.g., Protassov et al. 2002), we place more emphasis on the fit statistics and observed residuals. We first used purely photoionization and purely thermal models before attempting a mixture of the two. In situations where only small improvements to the quality of the fit were made by the addition of another component, we examined the residuals of the best fits to identify features that indicate the significance of the improvement (as χ 2 is not sensitive to correlated residuals) to justify incorporating additional complexities.
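The F-test comparison between nested models can be performed directly in Sherpa; the statistics and degrees of freedom below are placeholders, not the values reported in Tables 4-7.

    from sherpa.astro.ui import calc_ftest

    # Simpler model (stat0, dof0) versus the model with one extra component
    # (stat1, dof1); placeholder numbers purely for illustration.
    dof0, stat0 = 150, 210.0
    dof1, stat1 = 147, 180.0

    p_value = calc_ftest(dof0, stat0, dof1, stat1)
    print(f"F-test probability of a chance improvement: {p_value:.3g}")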
4.2.1. N-component Photoionization Models
We start with a photoionization model, since one has already been successfully used to describe a lower resolution spectrum of NGC 7212 (Bianchi et al. 2006). The photoionization model that we select is a CLOUDY (Ferland et al. 1998) model of reflection off of a plane-parallel slab at an inclination of 45 degrees. The intensity and spectral shape of the ionizing radiation are set by the ionization parameter (U) and column density (N_H), respectively. Beginning with a single CLOUDY component, we add up to two additional photoionization components. In all three regions, we find a significant improvement increasing from a one- to a two-component CLOUDY spectral model, and a worse fit adding a third photoionization component (Table 4). The best-fit, two-component CLOUDY model parameters are listed in Table 5, and the spectral fit for the 8.0″ circular region is shown in Figure 9 (top left). As shown, this two-component photoionization model cannot fully represent the observed soft X-ray emission or the distinct emission lines between 3 and 6 keV.
4.2.2. N-component Thermal Models
The thermal components in this investigation are drawn from a solar abundance APEC model (Foster et al. 2012) with varying temperature (keV) and normalization parameters. We started with a single thermal component and added up to two additional APEC components to our model, although a single component thermal model is not favored (Hernández-García et al. 2015). Compared to the pure photoionization models, the fit statistics are worse for the purely thermal components (Table 4). Furthermore, when comparing the best-fit, two-component CLOUDY model with the bestfit, two-component APEC model residuals, we do not see any improvements in describing the soft X-ray emission. The best-fit, two-component APEC model parameters are listed in Table 5 and the spectral fit for the 8.0 circular region is shown in Figure 9 (top; right). Of our two thermal components, the components with kT ∼ 0.8 keV in the 8.0 circular and 1.5 nuclear region are consistent with the APEC model in Koss et al. (2016) (kT ∼ 0.8 keV). The addition of a third component significantly improved the fit in the 1.5 − 8.0 annulus region.
4.2.3. Mixed Photoionization and Thermal Models
Since fits using individual photoionization and thermal models failed to adequately represent the complex observed emission, we fit the spectrum with a variety of mixed model combinations, up to three each. The fit statistics and F -test results for the selected model mixtures in each of the three regions are shown in Table 6.
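A sketch of how a "2 photoionization + 1 thermal" mixture might be assembled on top of the continuum model is shown below. It assumes the CLOUDY reflection grids have been exported as XSPEC-style table models; all file and component names are hypothetical.

    from sherpa.astro.ui import *

    # Continuum + Fe K line components (illustrative names, as in Section 4.1).
    gal = create_model_component("xsphabs", "gal")
    cont = create_model_component("xspexrav", "cont")
    fek = create_model_component("xszgauss", "fek")

    # Two CLOUDY reflection grids loaded as table models (hypothetical file).
    load_table_model("phot1", "cloudy_refl_grid.fits")
    load_table_model("phot2", "cloudy_refl_grid.fits")
    phot1 = get_model_component("phot1")
    phot2 = get_model_component("phot2")

    # One APEC thermal component at solar abundance.
    therm = create_model_component("xsapec", "therm")
    therm.kT = 0.9            # keV, starting value near the best fits in Table 8
    therm.Abundanc = 1.0
    freeze(therm.Abundanc)

    set_source(1, gal * (phot1 + phot2 + therm + cont + fek))
    fit(1)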
The best model combinations are region dependent. In the 1.5″ nuclear region, the 2 + 1 (two-photoionization, single-thermal) model offered the best fit. Similarly, this fit worked well in the 8.0″ circular region. For the 8.0″ circular region, including an additional thermal component (2 + 2) improved the spectral fit slightly. Adding additional thermal components (1 + 2) improved the spectral fit for the 1.5″ nuclear region compared to the single photoionization and thermal model, both in the observed residuals and the F-test calculation.
The best-fit parameters for our mixed models are listed in Table 8, and the 2 + 1 and 2 + 2 spectral fits for the 8.0″ circular region are shown in Figure 9 at the bottom right and left, respectively. In the extended emission region (1.5″−8.0″ annulus), all of the mixed model spectral fits have χ²ν ∼ 1 and thus are acceptable solutions. In each region, however, none of the mixed models are able to completely fit the distinct emission lines between 3 and 6 keV, in which we see notable residual features for every mixed model combination.

5. DISCUSSION

Prior to recent deep Chandra imaging studies (e.g., Fabbiano et al. 2017; Maksym et al. 2017), the characteristic hard X-ray emission of AGN was assumed to be limited to the central few parsecs of the nucleus. The long-held assumption was that the origin of this emission is the reflection of energetic photons from the corona off obscuring material close to the nucleus (e.g., a torus with luminosity-dependent distance of 0.1−1 pc; Netzer 2015). This picture is in line with the "standard" AGN model (Urry & Padovani 1995), in which the geometrical orientation of an AGN with respect to the observer produces its observed multiwavelength properties. The presence of kiloparsec-scale extended emission raises questions about the nature, origin, and locations of the obscuring material. High-resolution imaging observations of these sources allow us to better understand the limitations of the standard model and provide clues as to how the AGN interacts with and impacts its host galaxy through feedback processes. This work has closely examined NGC 7212, a CT AGN with extended X-ray emission, to better understand the spectral and spatial characteristics of this AGN class and provide constraints on the connection between AGN and their host galaxies. A discussion of our results is presented in the following subsections.
5.1. Luminosities
From our spectral fits we have calculated the 0.3−7.0 keV luminosity for each region of interest (8.0″ circular region, 1.5″ nuclear region, and a 1.5″−8.0″ annulus; Figure 1). We find that the 8.0″ circular region, containing the majority of the galactic emission, has a luminosity of L_0.3−7.0 = 7.36 × 10^41 erg s^−1. Breaking this down further, we find the inner 1.5″ nuclear region, which contains the majority of the CT AGN emission, has a luminosity of L_0.3−7.0 = 6.07 × 10^41 erg s^−1. For comparison with published results, we also calculate the 2−10 keV luminosity in the 8.0″ region.

Table 7 (caption). Summary of reduced χ²ν (ν) and F-test P for the best-fit N-photoionization + N-thermal models, for the 8.0″ circular region, the 1.5″ nuclear region, and the 1.5″−8.0″ annulus.
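As a cross-check on the numbers quoted above, the flux-luminosity conversion is straightforward for the adopted distance of ∼115 Mpc; the flux value in the second half of the sketch is a placeholder, not a measured quantity.

    import numpy as np
    from astropy import units as u

    d_lum = 115.0 * u.Mpc
    L_obs = 7.36e41 * u.erg / u.s                      # 0.3-7.0 keV, 8.0" region

    flux = (L_obs / (4.0 * np.pi * d_lum**2)).to(u.erg / u.s / u.cm**2)
    print(flux)   # implied observed flux, roughly 5e-13 erg s^-1 cm^-2

    # The inverse, for a hypothetical measured flux F:
    F = 4.8e-13 * u.erg / u.s / u.cm**2
    L = (4.0 * np.pi * d_lum**2 * F).to(u.erg / u.s)
    print(L)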
5.2. Spectral Emission Lines
The depth of our observation of NGC 7212 allows us to analyze three separate regions (8.0 circular region, 1.5 nuclear region, and a 1.5 −8.0 annulus) with enough statistical significance that we can compare the properties of the nuclear region to the region of extended emission. Each region is fit using a reflected power-law continuum with Γ = 1.9, consistent with the work of, e.g., Koss et al. (2016) and Marchesi et al. (2018), and additional gaussian emission lines, including a strong Fe Kα component. Not surprisingly, the normalization of the power-law component in the circumnuclear region is almost an order of magnitude larger than for the diffuse, extended emission in the outer region.
While we assume a photon index for the continuum in our emission line models of Γ = 1.9, we did compare the best-fit index for the nuclear and extended regions in the hard X-rays. We find that the extended region has a steeper photon index compared to that of the nuclear region (2.56 ± 0.06 compared to 2.10 ± 0.02), similar to what is found in ESO 428−G014 .
At energies below ∼1.2 keV, the predicted emission lines are complex and blended, but are consistent with what is observed in other nearby galaxies (e.g., Koss et al. 2015;Fabbiano et al. 2018a;Maksym et al. 2019). Similarly, the presence of strong Mg XI (1.331, 1.352 keV) and Si VIII (1.839, 1.865 keV) are typical for CT AGN.
In the hard X-rays, the most significant emission line that we see can be attributed to the Fe Kα emission line near 6.4 keV. This line is believed to originate from within the AGN torus region due to the excitation of the obscuring material. Thus it comes as no surprise that this emission line is strong in the nuclear region (inner 1.5 ∼0.8 kpc) where the CT AGN is located. However, we also find significant Fe Kα emission in the annulus in excess of the Chandra PSF model and extending to ∼3.7 kpc scales at ∼ 20% of the relative strength of the nuclear region.
In the simplest picture, where AGN are classified as a function of line-of-sight orientation (e.g., Urry & Padovani 1995), extended Fe Kα X-ray emission is unexpected for a CT AGN, since the obscuring material (e.g., torus) is expected to act as a screen in the inner 0.1−10 parsec, limiting this emission to the nuclear region (e.g., Netzer 2015). However, recent observations of nearby CT AGN find significant Fe Kα emission outside of the central nucleus (e.g., Circinus: extended ∼100 pc, Marinucci et al. 2013; Arévalo et al. 2014; NGC 1068: 30% of the Fe Kα emission observed at >140 pc, Bauer et al. 2015; ESO 428−G014: extended ∼3.7 kpc, Fabbiano et al. 2017, 2018a).
The hard energy band of NGC 7212 is more complex than expected for a CT AGN, especially in the 1.5 nuclear region, where the AGN emission dominates. We discuss the presence of strong, significant lines between ∼ 3 and 6 keV and their tentative identifications below.
5.2.1. Unique Features
We find three emission features not, to our knowledge, previously reported in AGN X-ray spectra. There is significant line emission near 2.9 keV in the outer annuli (1.5−8.0 ; Table 3) comparable in strength to the Fe Kα line that may be an argon fluorescence line (E lab = 2.958 keV). In order to achieve this kind of relative abundance in a photoionized plasma, the annulus region would need to contain a steeper ionizing spectrum compared to the nuclear region (which is consistent with our observations of the relative photon index; see Section 5.2). However, since the AGN spectrum is filtered through the torus, it is challenging to know for certain what shape illuminates the ambient gas. This line may also be present in the central nuclear region, but only at 1.9σ significance and so is unlikely to originate from the CT AGN directly.
In all three regions, we find a significant emission line near 3.6 keV (Table 3). At this energy, possible identifications include Ar XVII (E lab = 3.688 keV) or a calcium Kα fluorescence line (E lab = 3.691 keV). At both 2.9 and 3.6 keV we cannot make anything more than tentative identifications since these lines are not characteristic of AGN. Furthermore, we do not currently have a good physical explanation for only finding neutral calcium.
We also find a distinct emission line near 5.17 keV that is limited to the central 1.5 nuclear region. We currently have no definitive identification for the emission line at 5.17 keV, despite a thorough search of literature and the NIST Atomic Spectra Database 14 . It is unlikely to be an artifact of the ACIS-S background, which is fairly well known at this energy 15 . Likewise, it is unlikely to be due to contamination from a nearby or background galaxy, as its closest neighbor is located 9.8 kpc away (e.g., Koss et al. 2016), and the probability of overlap with a background galaxy (e.g., located at z = 0.24 where Fe Kα could explain this observed emission line) is low. vanadium He-like emission at 5.18 keV from cosmic spallation (energetic particles bombarding optically thick material leading to the formation of elements) could explain this observed emission (Gallo et al. 2019). However, vanadium Kα is typically weak compared to other spallation lines that we do not see (e.g., titanium, chromium; Gallo et al. 2019). The importance of spallation in AGN is debatable (e.g., Skibo 1997;Turner & Miller 2010;Gallo et al. 2019), as is the origin of the energetic particles (e.g., accretion disks, disk winds; Turner & Miller 2010). The development of high-resolution spectroscopic instruments, e.g, calorimeters, is increasingly important for detecting spallation and identifying emission at unusual energies, especially in the hard X-rays.
5.3. Physical Models
As discussed in Section 4.2, we find that a single photoionization or thermal component does not fully represent the soft X-ray emission from NGC 7212 and a more complex mixture of the two is needed.
For the 1.5 nuclear region (1.5 = 834 pc), the best fit to the spectrum is a combination of the two photoionization models and a single thermal model with kT = 0.96 +0.09 −0.11 keV, and the two ionization parameters log U 1 = −1.50 +0.25 −0.47 and log U 2 = 1.25 +0.09 −0.19 . In comparison, the two-component thermal model finds kT = 0.81 ± 0.05 keV, and kT = 8.62 +20.2 −2.9 keV. The two-component photoionization model finds log U = −1.99 +0.5 −1.01 , and log U = 1.30 +0.21 −0.14 ( Table 5). Since many of the fit statistics in this region are very similar, we can conservatively say that the best spectral fit is given by, at minimum, one thermal and one photoionization component (but two photoionization models are preferred).
In the 1.5 − 8.0 annulus region, the best-fit physical model is a combination of a single photoionization model with log U = −0.75 +0.22 −0.26 , and two thermal models with kT = 0.86 +0.09 −0.04 keV, and kT = 6.84 +7.92 −2.45 keV ( Table 8). The three-component thermal model finds kT = 0.85 +0.05 −0.08 keV, kT = 6.85 +18.7 −2.35 keV, and kT ∼64 keV, which is essentially pure bremsstrahlung. The two-component photoionization model finds log U = −1.00 +0.03 −0.24 , and log U = 1.42 +0.18 −0.10 . Similar to the 1.5 nuclear region, a three-component model is preferred in the annulus with, at minimum, one photoionization and one thermal model, plus one additional thermal or photoionization model.
Looking at the circumnuclear region (8.0″ circular region), we find that the best fit is the combination of two-photoionization and two-thermal models, although it is also well fit by a two-photoionization, one-thermal mixture. The two-two mixture is preferred, since the residuals of the best fit are visually less correlated (Figure 9).
This two-two model has the parameters kT = 0.89 (+0.07/−0.06) keV, kT = 5.44 (+.../−3.22) keV, log U = −1.23 (+0.23/−0.26), and log U = 1.25 (+0.09/−0.22). These are consistent with the best-fit thermal and ionization parameters in both the nuclear and annular regions. Furthermore, the thermal component fit in each region, kT ∼ 0.89 keV, is consistent with previous observations of NGC 7212 (e.g., Koss et al. 2016; kT ∼ 0.8 keV). The densities that we derive in these mixed models are also consistent with typical densities in the interstellar medium (Tables 5, 8).
While NGC 7212 requires more than a single model component, the best-fit model mixture in each region was less complex than seen in other sources with observed extended X-ray emission (e.g., ESO 428−G014, Fabbiano et al. 2018a). It is unclear whether this preference for a "simplified" model is due to statistics, as NGC 7212 has significantly fewer counts than ESO 428−G014 (1280 ± 36 counts compared to 6983 ± 84 counts in their respective 8.0 circular regions), or due to confusion, as we are averaging over a bigger physical region in NGC 7212 (1.5 = 834 pc) compared to ESO 428−G014 (1.0 = 113 pc). The presence of dust lanes and disturbed irregular shape (compared to the more disky shape of ESO 428−G014) observed in NGC 7212 may also play a role (e.g., Muñoz et al. 2007).
Since our spectral models are consistent across all three regions of interest, we can begin to trace the full physical picture of NGC 7212. The low-temperature (∼0.8 keV) thermal emission in the inner 1.5″ nuclear and 1.5″−8.0″ annulus regions may correspond to shocks with velocities of v ∼ 850 km s^−1 (assuming T_shock = 3µv²_shock/16k, where µ is the mean molecular mass of a fully ionized gas and k is the Boltzmann constant; e.g., Wang et al. 2014; Fabbiano et al. 2018a).
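A worked check of this relation, assuming µ = 0.6 m_p for a fully ionized gas of roughly solar composition (an assumed value), reproduces the temperatures quoted in this section.

    import numpy as np
    from astropy import units as u
    from astropy.constants import m_p

    mu = 0.6 * m_p   # assumed mean particle mass for a fully ionized solar-mix gas

    def kT_shock(v):
        """Post-shock temperature kT = 3 mu v^2 / 16, returned in keV."""
        return (3.0 * mu * v**2 / 16.0).to(u.keV)

    print(kT_shock(850 * u.km / u.s))    # ~0.85 keV: the cooler thermal component
    print(kT_shock(2400 * u.km / u.s))   # ~6.8 keV: the hotter annular component

    # Inverting for the shock velocity implied by kT ~ 0.8 keV:
    v = np.sqrt(16.0 * 0.8 * u.keV / (3.0 * mu)).to(u.km / u.s)
    print(v)                             # ~830 km/s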
In the annular region (1.5 − 8.0 ), the best-fit mixed model prefers a combination of a higher temperature and lower temperature thermal model. This low temperature thermal component is consistent with what is observed in the nuclear region, suggesting similar energetic origins. The higher temperature component (∼6.8 keV) is isolated to the annular region and corresponds to velocities near v ∼ 2400 km s −1 , on order of what is expected from [O III] velocities in the inner nuclear region (e.g., Kraemer & Crenshaw 2000;Kraemer et al. 2008). Shocks at these velocities have been observed on extended scales in CT AGN (e.g., Fischer et al. 2013) and have been associated with starburst-driven winds (e.g., NGC 6240, Wang et al. 2014).
Optical ground-based observations and emission line diagnostics (e.g., BPT diagrams; Baldwin et al. 1981) from Congiu et al. (2017) (see also Contini et al. 2012) suggest that the ionization mechanism for NGC 7212 is a combination of photoionization and shocks, which is qualitatively consistent with our results. Recently, Terao et al. (2016) suggested that NGC 7212 is not likely affected by fast shocks based on near-IR observations. In our 1.5″ − 8.0″ annular region, however, we find a thermal component that may be associated with fast shocks. This is not necessarily inconsistent with the Terao et al. (2016) results, since nonradiative, fast shocks would not be observed in the optical/IR. The photoionization parameters from our best fits also provide constraints on the presence of highly ionized outflows, such as warm absorbers (WAs), in NGC 7212. WAs typically have velocities in the thousands of km s −1 and column densities around N_H = 10^20−10^21 cm −2 (e.g., Arav et al. 2013; Fischer et al. 2013). These outflows typically originate from the central continuum in the inner 100 parsec (e.g., Krongold et al. 2007; Arav et al. 2015); however, Arav et al. (2018) find that 12% of quasar outflows are found at distances larger than 1 kiloparsec. In many cases, AGN with WAs exhibit bi-conical outflows reflected in their photoionization parameters (e.g., Andrade-Velázquez et al. 2010; Fabbiano et al. 2018a).
In the inner nuclear region of NGC 7212, where outflows are typically located, the ionization parameters we find (log U_1 = −1.50 and log U_2 = 1.25) are similar to what has been observed in CT AGN before. However, the column densities in this region are too high, and the velocities derived are too low, compared to what is typically seen in WAs. In comparison, the 1.5″ − 8.0″ annular region contains a thermal component with high (∼2400 km s −1) velocities, and column densities that are more consistent with WAs. While we cannot definitively confirm the presence of WAs in NGC 7212, the extended region is a possible host to these high velocity, highly ionized outflows.
Radial Profiles and FWHM
As described in Section 3, we see extended emission in the soft and hard X-rays in both the cone (south−north) and cross-cone (west−east) direction. It is possible to measure the extent of this X-ray emission out to kiloparsec scales by extracting and fitting the radial profiles of the emission for different energy bands. Looking at the extended region annulus, we find significant extended emission compared to the Chandra PSF for each energy bin: for example, in the 6.1 − 6.5 keV band where we expect to see Fe Kα, we find an excess of 15.2 ± 3.9 counts.
The extended region (1.0 − 8.0 annulus) contributes to 20.5% of the total observed counts, where 17.1% of this emission is in the soft X-rays. Breaking this down further into the cone and cross-cone regions, we find 14.4% of the total observed counts originate from the annulus in the south−north cone. The south−north cone, in addition to containing the majority of the extended X-ray emission, also encompasses the optical ionization cone and ENLR (Figure 10; Schmitt et al. 2003; see also Cracco et al. 2011;Congiu et al. 2017) located at position angle, P.A. = 170°. Other works have also uncovered extended emission in the optical and IR that is consistent along this cone axis (e.g., Hernández-García et al. 2015;Asmus et al. 2016;Müller-Sánchez et al. 2018).
We find a trend in the extended emission with energy in both the cone and cross-cone directions at 1% of the surface brightness such that the emission is more extended at soft X-rays. For the south−north cone, in particular, the soft X-rays are significantly extended on kiloparsec scales (average width ∼3.7 kpc).
Our observations also show strong extended emission in the cross-cone direction where we expect significant obscuration from the torus in the "standard" model. The origin of the diffuse emission is likely ionizing radiation from the active nucleus, e.g., from the corona, propagating to large scales via interactions with the interstellar medium (ISM). Georgantopoulos & Akylas (2019) fit a torus model to NuSTAR observations of NGC 7212 using MYTorus and found the best-fit parameters N_H = 1.1 × 10^24 cm −2 (consistent with Marchesi et al. 2018), E_cut > 56, kT = 41 keV.
The presence of a clumpy torus in NGC 7212 could explain this excess emission, such that transmission occurs along the plane of the obscuring material. If we assume a torus geometry for the absorber with an opening angle of 90° in the south−north cone, we can estimate the transmission in the cross-cone direction. With these simple assumptions, we calculate that the volume of the cross-cone region is ∼2.5× the volume of the cone. From Table 2, we find that the cross-cone region contains ∼37% of the excess counts (over the Chandra PSF) of the south−north cone for energies > 1.5 keV. Thus, the transmission in the cross-cone direction is ∼15% of the cone direction. For comparison, this is higher than the cross-cone transmission estimated for ESO 428−G014 (∼10%), but could be due to the weaker azimuthal dependence of NGC 7212, and may be explained by inclination effects.
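The ∼15% figure follows directly from the two numbers quoted in the text (count ratio divided by volume ratio); the short check below is only a restatement of that arithmetic, not an independent measurement.

```python
# Back-of-the-envelope check of the ~15% cross-cone transmission estimate.
counts_ratio = 0.37   # cross-cone excess counts / cone excess counts (>1.5 keV, Table 2)
volume_ratio = 2.5    # cross-cone volume / cone volume (90 deg opening angle assumption)
print(counts_ratio / volume_ratio)   # ~0.15, i.e. ~15% of the cone transmission
```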
Alternatively, this extended emission could be related to the compact double radio source observed in NGC 7212 (e.g., Tran 1995a,b;Falcke et al. 1998) that can be attributed to the presence of a jet. Recent relativistic hydrodynamical simulations of jets propagating through molecular disks (e.g., IC 5063; Mukherjee et al. 2018) have modeled the presence of warm and hot emission caused by jet−cold disk interactions. In these models, regions with gas temperatures ∼10 7 K may be found surrounding both cooler gas in the nucleus as well as expelled on large scales via filamentary winds. This is in line with our best fit spectral models that fit a thermal component with kT ∼ 0.9 keV (∼10 7 K) in both the 1.5 nuclear and 1.5 − 8.0 annular region.
The excess we observe is not oriented solely along the radio source axis (P.A. = −7°; Falcke et al. 1998). However, the presence of hot gas in the cross-cone region may be explained by recent simulations that predict the presence of a hot cocoon with gas temperatures ∼10^8−9 K surrounding the nucleus due to jet-ISM interactions. We find a thermal component in the 1.5″ − 8.0″ annular region with kT ∼ 7.0 keV (∼10^8 K) that is not found in the 1.5″ nuclear region, which may indicate the presence of one of these hot cocoons. Further evidence supporting jet-ISM interactions in NGC 7212 is reported by Congiu et al. (2017). However, we cannot rule out supernova heating in the 1.5″ nuclear region or the 1.5″ − 8.0″ annular region. The thermal component (kT ∼ 0.9 keV) in the 1.5″ nuclear region corresponds to a cooling time of ∼10^6 yr. Given an energy content of ∼10^56 erg, the supernova rate required to support this thermal energy is ∼0.1 yr −1. In the 1.5″ − 8.0″ annular region, assuming the thermal component kT ∼ 6.84 keV is the dominant source of heating, the cooling time is ∼10^8 years. For an energy content of ∼10^57 erg, the heating could be accounted for with a supernova rate of ∼0.01 yr −1.
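The quoted supernova rates follow from dividing the thermal energy content by the cooling time and by the energy injected per supernova. The sketch below assumes the canonical ∼10^51 erg of kinetic energy per supernova, a standard value not stated in the paper.

```python
# Sanity check of the quoted supernova rates.
E_SN = 1e51   # erg injected per supernova (assumed canonical value)

def sn_rate_per_yr(E_thermal_erg, cooling_time_yr):
    """Supernova rate needed to replace the thermal energy once per cooling time."""
    return E_thermal_erg / (cooling_time_yr * E_SN)

print(sn_rate_per_yr(1e56, 1e6))   # ~0.1 /yr  (1.5" nuclear region, kT ~ 0.9 keV)
print(sn_rate_per_yr(1e57, 1e8))   # ~0.01 /yr (1.5"-8.0" annulus, kT ~ 7 keV)
```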
SUMMARY AND CONCLUSIONS
We have analyzed the spectral properties and spatial extent of the kiloparsec-scale diffuse X-ray emission in NGC 7212 using 149.87 ks of combined Chandra observations.
1. We find that the extended diffuse emission region (1.5″ − 8.0″, ∼0.8 − 4.5 kpc annulus) accounts for more than 20% of the total emission from 0.3 to 7.0 keV. We further break this down into the soft (0.3 − 3.0 keV) and hard (3.0 − 7.0 keV) X-rays, where we find contributions to the total observed emission of ∼17% and ∼3%, respectively. The energy bin surrounding the Fe Kα emission line and hard X-ray continuum (6.0 − 7.0 keV) supplies ∼2% of the total observed emission.
2. Breaking the observed emission into discrete energy bands and cone/cross-cone regions, we find significant, up to 3.6 kpc, extended emission at 1% of the surface brightness. This extended emission is strongly associated with soft X-rays in the south−north cone direction (accounting for more than 14% of the total counts), and is consistent with the observed ENLR region (e.g., Cracco et al. 2011;Congiu et al. 2017) and compact double radio source (e.g., Tran 1995a,b;Falcke et al. 1998).
In the soft X-rays, this ∼3.7 kpc extended emission is similar in extent to the ∼3.4 kpc extended soft X-ray emission observed in ESO 428−G014 (Fabbiano et al. 2018a), and is the largest extent reported to date in the literature. In the hard X-rays, we observe emission at 1% of the total surface brightness up to 2.7 kpc (similar to the 2.8 kpc extent in ESO 428−G014).
3. The detected emission along the cross-cone direction raises doubts about the standard AGN model in which the hard X-rays originate from the excitation of a uniform obscuring torus. The presence of hard X-rays on observed kiloparsec scales (e.g., for 5.0 − 7.0 keV the extent at 1% surface brightness is ∼2.13 kpc) could imply an interior clumpy torus structure (e.g., Nenkova et al. 2008) that allows for the transmission of radiation on kiloparsec scales. In the event of a clumpy torus, we estimate that transmission in the cross-cone direction is 15% of the cone direction. This is higher than the cross-cone transmission of ∼10% calculated for ESO 428−G014 (Fabbiano et al. 2018a).
4. We extract the spectrum for three different regions: an 8.0 (4.448 kpc) circular region, 1.5 (0.834 kpc) nuclear region, and 1.5 −8.0 annulus. For each region we fit a reflection model (PEXRAV) plus gaussian emission lines, incrementally adding lines until we obtain a good representation of the spectrum. For each region, the emission line model contains complex, blended emission lines in the soft X-rays below 1.2 keV likely made of O VII, Ne IX, and a variety of Fe lines. Strong individual lines are also found above 1.2 keV, including Mg XI, Si VIII, S Kα, and Fe Kα near ∼6.4 keV. The presence of these emission lines is consistent with lines found in other AGN (e.g., Koss et al. 2015;Fabbiano et al. 2018a;Maksym et al. 2019).
5. In our spectral line fits, we also discover three significant emission lines in the hard X-rays that are not typically observed in CT AGN. The emission lines at ∼2.9 keV and ∼3.6 keV are tentatively identified as species of argon and/or calcium. The emission at ∼5.2 keV is puzzling and not identified.
6. We also fit the spectra extracted for our three regions utilizing a combination of both photoionization and thermal physical models. In the inner 1.5″ nuclear region, we find the spectrum is best fit by a minimum of one photoionization and one thermal model component (although two photoionization model components are preferred). Similarly, the 1.0″ − 8.0″ annulus region is best fit by at least one of each model component, but a second thermal component is preferred. Combining these two regions, we find the best-fit spectral model for the 8.0″ circular region is given as a combination of two-photoionization and two-thermal models with model parameters that mirror the individual subregions. The derived parameters in all three regions are consistent with typical ISM densities.
7. We find the ionization parameters in each region of interest are consistent with those found in highly ionized outflows (WAs). However, the typical velocities and column densities of WAs are more in line with the parameters derived in the 1.0 − 8.0 annulus region, rather than near the central source.
8. We find that the best-fit thermal spectral components for NGC 7212 may be equally well explained by shocks, jet-ISM interactions, and supernova heating.
(a) In the inner 1.5 nuclear region and 1.0 − 8.0 annular region, the temperature at kT ∼ 0.8 keV is consistent with ∼850 km s −1 shocks. The additional thermal component in the 1.0 − 8.0 annulus region (kT = 6.84 keV) corresponds to shocks near ∼2400 km s −1 , and is consistent with previous observations of CT AGN (Fischer et al. 2013).
(b) The warm thermal component (kT ∼ 0.8 keV) can likewise be explained by jet-ISM interactions in which warm gas is found surrounding cool gas in the nucleus and expelled on large scales via filament winds. Similarly, simulations of jet-ISM interactions predict a hot cocoon around the nuclear region that is consistent with the hot thermal component we observe in the annular region of NGC 7212 (e.g., Mukherjee et al. 2018).
(c) We are unable to rule out supernova heating as the origin of this thermal component in both the 1.5″ nuclear region and 1.0″ − 8.0″ annular region. For the 1.5″ nuclear region where kT ∼ 0.8 keV (cooling time ∼10^6 years; E_th ∼10^56 erg), the heating could be accomplished with ∼0.1 supernova per year. Assuming the hot thermal component (kT ∼ 7 keV; cooling time ∼10^8 years; E_th ∼10^57 erg) dominates in the 1.0″ − 8.0″ annular region, the supernova rate drops to ∼0.01 yr −1.
9. Compared to other CT AGN with extended X-ray emission, the most comparable of which is ESO 428−G014 (Fabbiano et al. 2018a), we find that NGC 7212 requires a less complex multicomponent spectral model. These differences may be purely due to statistics (our observations of NGC 7212 have combined ∼1300 counts, compared to ∼7000 counts for ESO 428−G014), or confusion due to spatial scale (for NGC 7212, 1 = 556 pc, compared to 1 = 113 pc for ESO 428−G014).
This work demonstrates the advantages of deep Chandra observations for recovering statistically significant spatial and spectral information about an exciting class of CT AGN with observed diffuse hard X-ray emission on kiloparsec scales. High-resolution observations of extended emission sources, such as NGC 7212, can recover important information about the AGN and surrounding ISM, which can be used to test the AGN standard model by analyzing the morphology of the torus, and provide new insights into gas dynamics and AGN feedback mechanisms. As we plan for the next generation of great observatories (e.g., Lynx ), an emphasis on high spatial resolution and sensitive instruments will play a crucial role in the future of AGN studies.
We thank E. Bulbul and A. Foster for their assistance in identifying challenging X-ray emission line features, and P. Plucinsky for a useful discussion about the Chandra ACIS background. We also thank the referee for constructive comments and suggestions that improved this paper. This work makes use of data from the Chandra data archive, and the NASA-IPAC Extragalactic Database (NED). The analysis makes use of CIAO and Sherpa, developed by the Chandra X-ray Center; SAOImage ds9; XSPEC, developed by HEASARC at NASA-GSFC; and the Astrophysics Data System (ADS). This work was supported by the Chandra Guest Observer program, grant no. GO8-19074X (PI: Fabbiano).
Figure 1. Left: X-ray contours (black) corresponding to the merged 0.3 − 7.0 keV Chandra ACIS image of the three-system interacting galaxy group, including the spiral galaxy NGC 7212 (southwest source), overlaid on a g-band Pan-STARRs deepstack. Right: merged 0.3 − 7.0 keV Chandra ACIS image of NGC 7212 with applied adaptive gaussian smoothing (dmimgadapt; 0.5 − 15 pixel scales, 5 counts under kernel, 30 iterations) on image pixel = 1/8 ACIS pixel. The image contours are logarithmic with colors corresponding to number of counts per image pixel. The nuclear 1″ (0.556 kpc) circular region, and annular 8″ (4.448 kpc) region split into four quadrants, used in the spatial analysis of NGC 7212 are shown in white.
Figure 2. Adaptively smoothed images of NGC 7212 in the indicated energy bands on image pixel = 1/8 ACIS pixel (CIAO adaptive smoothing; 0.5 − 15 pixel scales, 5 counts under kernel, 30 iterations). The image contours are logarithmic with colors corresponding to the number of counts per image pixel. The black cross marks the center of the nucleus.
Figure 3. Background subtracted radial profiles of NGC 7212 for the full energy band (0.3 − 7.0 keV) compared to the Chandra PSF which has been renormalized to the 1.0″ (0.556 kpc) nuclear region for the (left) south to north and (right) west to east regions. Each bin contains a minimum of 10 counts and is shown with 1σ errors. We include a dashed horizontal line to indicate the level of background emission and note that points below this line are valid data since the background has already been subtracted from these profiles. We find excess emission observed outside of the central nucleus (∼1.0″, 0.556 kpc) in each of the given regions.
Figure 4. Similar to Figure 3, for the indicated energy bands in the south−north region.
Figure 5. Same as in Figure 4. Background subtracted radial profiles of NGC 7212 for the indicated energy bands compared to the Chandra PSF for the west−east region.
The spectra were extracted in three regions centered at (J2000) RA = 22:07:02.03 (331°45′30.44″), decl. = +10:14:01.27 (10°14′1.27″): (1) 8.0″ = 4.448 kpc circular region, (2) 1.5″ = 0.834 kpc nuclear region; and (3) 1.5″ − 8.0″ annulus (Figure 7). The background was extracted from a surrounding, off-center, source-free region at (J2000) RA = 22:07:03.54 (331°45′53.00″), decl. = +10:13:42.78 (10°13′42.78″) (Figure 7).
Figure 7. Spectral extraction regions overlaid on the merged 0.3 − 7.0 keV Chandra ACIS image of NGC 7212 (Figure 1; right). The spectra of NGC 7212 are extracted in three regions centered at (J2000) RA = 22:07:02.03 (331°45′30.44″), decl. = +10:14:01.27 (10°14′1.27″): (1) 8.0″ = 4.448 kpc circular region; (2) 1.5″ = 0.834 kpc nuclear region; and (3) 1.5″ − 8.0″ annulus. The background region is centered on (J2000) RA = 22:07:03.54 (331°45′53.00″), decl. = +10:13:42.78 (10°13′42.78″) with a radius of 10″.
Each spectrum was fit with a reflection model (PEXRAV) with power-law photon index Γ = 1.9 (based on the NuSTAR/XMM best fits, Marchesi et al. 2018; and consistent with, e.g., Risaliti et al. 2000; Levenson et al. 2006; Hernández-García et al. 2015; Ricci et al. 2015; Koss et al. 2016), plus a gaussian emission line constrained to an energy range surrounding the Fe Kα 6.4 keV line, with constant galactic absorption (4.5 × 10^20 cm −2; Levenson et al. 2006). This simple power-law plus line model is consistent with the model components utilized in previous spectral fits of NGC 7212 (e.g., ASCA, Risaliti et al. 2000; XMM, Guainazzi et al. 2005; Hernández-García et al. 2015; CXC, Levenson et al. 2006; Hernández-García et al. 2015; NuSTAR, Koss et al. 2016; Marchesi et al. 2018), for which the median photon index is Γ ∼ 1.9 and the median equivalent width of the Fe Kα line is EW ∼ 0.8 keV.
Figure 8. NGC 7212 spectrum extracted from an 8.0″ (4.448 kpc) circular region centered on the nucleus with best-fit PEXRAV (Γ = 1.9) + n-gaussian lines + galactic absorption model (top) and best-fit residuals (bottom). Fit information may be found in Table 3.
a Includes galactic absorption of 4.5 × 10^20 cm −2 (Levenson et al. 2006).
b Energies from NIST (physics.nist.gov); Koss et al. 2015; Maksym et al. 2019.
c Lines blended in ACIS-S spectrum. These are tentative identifications.
d Lines in 3-6 keV are rarely observed in AGN. These are tentative line identifications.
e Lines at this energy do not fit into our current understanding of AGN emission.
Figure 9. Top: NGC 7212 spectrum extracted from an 8.0″ (4.448 kpc) circular region centered on the nucleus fitted with a nuclear PEXRAV component plus (left) two-component photoionization and (right) two-component thermal model. Bottom: the same as above but with a mixture of component models: nuclear PEXRAV component plus a (left) single photoionization plus thermal component model, (right) two-component photoionization plus thermal component model.
5. DISCUSSION
Recently, deep, high-resolution observations of CT AGN have uncovered extended hard X-ray emission on ∼kpc scales, a challenge to the long-held belief that hard
Note. U is the ionization parameter of each component, N_H is the column density (cm −2), kT is the temperature (keV), and EM is the normalization of the APEC model (cm −5); EM = (10^−14 / (4π[D_A(1+z)]²)) ∫ n_e n_H dV, where D_A is the angular diameter distance to the source (cm) and n_e, n_H are the electron and Hydrogen densities (cm −3).
2016; log L 2−10 = 42.8, Müller-Sánchez et al. 2018; log L 2−10 = 43.21, Marchesi et al. 2018).
Note. U is the ionization parameter of each component, NH is the column density (cm −2 ), kT is the temperature (keV), and EM is the normalization of the APEC model (cm −5 ).
Table 1. Observation Log

ObsID   Instrument   Texp (ks)   PI         Date           δx (px)   δy (px)
4078    ACIS-S       19.90       Kraemer    2012 Nov 15    -1.196    0.466
20372   ACIS-S       49.42       Fabbiano   2018 Aug 09    −         −
21668   ACIS-S       1.38        Fabbiano   2018 Aug 13    0.318     0.026
21672   ACIS-S       27.21       Fabbiano   2018 Sep 12    -1.709    -0.386
Table 2. Excess counts over the Chandra PSF in the cone and cross-cone wedges (excluding the central 1.5″ (0.834 kpc) nuclear region) for select energy bands.

Energy (keV)   8″ circular, Counts (Err)   S−N, Counts (Err)   W−E, Counts (Err)
0.3-1.5        442.3 (21.0)                293.8 (17.1)        148.5 (12.2)
1.5-3.0        238.3 (15.4)                177.3 (13.3)        61.0 (7.8)
3.0-4.0        62.0 (7.9)                  44.3 (6.7)          17.7 (4.2)
4.0-5.0        31.7 (5.6)                  25.3 (5.0)          6.4 (2.5)
5.0-6.0        23.7 (4.9)                  13.3 (3.7)          10.4 (3.2)
6.1-6.5        15.2 (3.9)                  11.0 (3.3)          4.2 (2.0)
0.3-7.0        812.0 (28.5)                570.2 (23.9)        241.8 (15.6)
Table 3 .
3Spectral Fitting with PEXRAV + Individual Emission Lines aCounts
Norm. PEXRAV
χ 2
ν (ν)
Region
(error)
(ph cm −2 s −1 )
Continuum + Lines
8.0 circular region
1289 (36)
1.8 × 10 −5
0.71 (155)
1.5 nuclear region
625 (25)
1.5 × 10 −5
0.67 (218)
1.5 −8.0 annulus
664 (26)
1.5 × 10 −6
0.80 (86)
Region
Energy (keV)
Flux (10 −6 ph cm −2 s −1 )
Significance (σ)
Identification (E lab keV) b
8.0 circular region
0.46 +0.11
−0.42
42.0 +44.1
−14.9
< 3
N VII Lyα c (0.500)
O VII c (0.569)
1.5 nuclear region
0.49 +0.01
−0.02
6.7 ± 2.6
< 3
1.5 −8.0 annulus
0.45 +0.02
−0.03
10.5 +6.7
−3.8
< 3
−
−
−
Fe XVII c (0.826)
0.75 +0.04
−0.05
11.3 +3.1
−2.3
3.7
0.70 +0.05
−0.11
8.1 +1.4
−1.8
4.5
0.91 +0.01
−0.02
8.7 +1.4
−1.2
6.2
Ne IX (0.905)
Fe XIX (0.917, 0.922)
0.92 +0.02
−0.02
3.1 +1.0
−0.8
3.1
0.93 ± 0.03
2.0 ± 0.6
3.3
1.20 ± 0.02
4.7 ± 0.7
6.7
Fe XX c (1.241)
Fe XXIV c (1.129, 1.168)
1.14 +0.03
−0.04
1.8 ± 0.6
3.0
1.20 ± 0.03
1.7 +0.4
−0.3
4.3
1.48 ± 0.03
0.4 ± 0.2
< 3
Mg XI (1.331, 1.352)
Mg XII (1.472, 1.745)
1.33 ± 0.02
1.5 ± 0.4
3.8
1.47 ± 0.03, 1.70 ± 0.02
0.41 +0.15
−0.14 , 0.52 ± 0.14
3.7
1.80 ± 0.01
2.3 ± 0.3
7.7
Si XIII (1.839, 1.865)
1.81 +0.01
−0.02
1.5 ± 0.3
5.0
1.94 ± 0.04
0.57 +0.18
−0.16
3.2
2.02 ± 0.02
0.5 ± 0.2
< 3
Si XIV (2.005)
2.01 ± 0.03
0.34 +0.76
−0.16
< 3
−
−
−
2.37 ± 0.03
2.2 +0.5
−0.4
4.4
S Kα (2.308)
S XV (2.430)
2.40 +0.01
−0.08
0.95 +0.66
−0.22
< 3
2.32 +0.02
−0.04
0.68 +0.23
−0.20
3.0
2.98 ± 0.03
0.5 ± 0.2
< 3
Ar Kα d (2.958)
2.97 ± 0.03
0.30 ± 0.16
< 3
2.80 ± 0.05
0.63 ± 0.20
3.2
3.68 ± 0.02
1.0 ± 0.2
5.0
Ar XVII d (3.688)
Ca Kα d (3.691)
3.69 +0.02
−0.01
0.78 ± 0.18
4.3
3.59 ± 0.05
0.56 +0.17
−0.16
3.3
5.16 +0.02
−0.05
0.9 +0.4
−0.3
< 3
− e
5.17 +0.05
−0.03
1.1 ± 0.3
3.7
−
−
−
6.37 ± 0.01
5.3 ± 0.7
7.6
Fe Kα (6.442)
6.36 ± 0.01
5.0 ± 0.5
10.0
6.40 +0.05
−0.07
1.0 ± 0.3
3.3
6.49 +0.08
−0.06
9.1 ± 1.2
7.6
Fe Kα wing c
Fe XXV c (6.70)
6.64 +0.03
−0.27
8.0 +1.7
−2.4
Table 4. Reduced χ², dof, and F-Test P for Photoionization and Thermal Models

                          Photoionization                Thermal
                          χ²ν (ν)       F-Test P         χ²ν (ν)       F-Test P
(8.0″ circular region)
1−model                   1.20 (257)    ...              1.47 (258)    ...
2−model                   1.14 (254)    3.6 × 10^−3      1.18 (256)    8.7 × 10^−13
3−model                   1.15 (251)    −                1.19 (255)    −
(1.5″ nuclear region)
1−model                   1.28 (217)    ...              1.34 (123)    ...
2−model                   1.23 (214)    4.4 × 10^−2      1.21 (121)    8.1 × 10^−7
3−model                   1.25 (211)    −                1.22 (119)    −
(1.5″−8.0″ annulus)
1−model                   1.08 (105)    ...              0.94 (140)    ...
2−model                   0.88 (102)    1.5 × 10^−4      0.96 (138)    −
3−model                   0.90 (100)    −                0.77 (136)    2.9 × 10^−7

Note. n−model: number of photoionization or thermal components used in the model.
Table 5 .
5Best-fit Parameters with Two-component ModelsTwo−Photoionization
Two−Thermal
Table 6. Reduced χ², dof, and F-Test P for Mixed Photoionization and Thermal Models

                        8.0″ circular region         1.5″ nuclear region          1.5″−8.0″ annulus
N-Photo + N-Thermal     χ²ν (ν)      F-Test P        χ²ν (ν)       F-Test P       χ²ν (ν)      F-Test P
1 + 1                   1.18 (255)   ...             1.23 (215)    ...            0.77 (137)   ...
1 + 2                   1.12 (253)   2.4 × 10^−3     1.23 (213)    −              0.70 (135)   2.1 × 10^−3
1 + 3                   1.09 (251)   1.7 × 10^−2     1.24 (211)    −              0.72 (133)   −
2 + 1                   1.08 (252)   8.5 × 10^−5     1.197 (212)   0.18           0.72 (134)   2.7 × 10^−2
2 + 2                   1.07 (250)   0.27            1.199 (210)   −              0.73 (132)   −
2 + 3                   1.08 (248)   −               1.24 (208)    −              0.74 (130)   0.054
3 + 1                   1.08 (249)   0.98            1.23 (209)    −              0.75 (131)   −
3 + 2                   1.08 (247)   −               1.26 (207)    −              0.82 (129)   −
3 + 3                   1.11 (245)   −               1.27 (205)    −              0.75 (127)   −
Table 8 .
8Best-fit Parameters with Mixed Models
Figure 10. Optical contours (black) corresponding to HST HRC F330W observations (Schmitt et al. 2003) overlaid on the nucleus of the merged 0.3 − 7.0 keV Chandra ACIS image of NGC 7212 (adaptively smoothed on image pixel = 1/8 ACIS pixel; 0.5−15 pixel scales, 5 counts under kernel, 30 iterations). The image contours are logarithmic with colors corresponding to the number of counts per image pixel. Also indicated is the orientation of the ENLR (P.A. = 170°, white). The optical contours are logarithmic with X-ray colors corresponding to number of counts per image pixel. The scale bar corresponds to 2″ = 1.112 kpc.
http://ds9.si.edu 5 http://cxc.harvard.edu/ciao/gallery/smooth.html
PIMMS v4.10; http://cxc.harvard.edu/toolkit/pimms.jsp
http://physics.nist.gov
http://cxc.harvard.edu/cal/Acis
http://mytorus.com/
. M Andrade-Velázquez, Y Krongold, M Elvis, The Astrophysical Journal. 711888Andrade-Velázquez, M., Krongold, Y., Elvis, M., et al. 2010, The Astrophysical Journal, 711, 888
. R Antonucci, ARA&A. 31473Antonucci, R. 1993, ARA&A, 31, 473
. N Arav, B Borguet, C Chamberlain, D Edmonds, C Danforth, Monthly Notices of the Royal Astronomical Society. 4363286Arav, N., Borguet, B., Chamberlain, C., Edmonds, D., & Danforth, C. 2013, Monthly Notices of the Royal Astronomical Society, 436, 3286
. N Arav, G Liu, X Xu, The Astrophysical Journal. 85760Arav, N., Liu, G., Xu, X., et al. 2018, The Astrophysical Journal, 857, 60
. N Arav, C Chamberlain, G A Kriss, Astronomy and Astrophysics. 57737Arav, N., Chamberlain, C., Kriss, G. A., et al. 2015, Astronomy and Astrophysics, 577, A37
. P Arévalo, F E Bauer, S Puccetti, ApJ. 79181Arévalo, P., Bauer, F. E., Puccetti, S., et al. 2014, ApJ, 791, 81
. D Asmus, S F Hönig, P Gandhi, ApJ. 822109Asmus, D., Hönig, S. F., & Gandhi, P. 2016, ApJ, 822, 109
. J A Baldwin, M M Phillips, R Terlevich, PASP. 935Baldwin, J. A., Phillips, M. M., & Terlevich, R. 1981, PASP, 93, 5
. F E Bauer, P Arévalo, D J Walton, ApJ. 812116Bauer, F. E., Arévalo, P., Walton, D. J., et al. 2015, ApJ, 812, 116
. S Bianchi, M Guainazzi, M Chiaberge, A&A. 448499Bianchi, S., Guainazzi, M., & Chiaberge, M. 2006, A&A, 448, 499
. E Congiu, M Contini, S Ciroi, MNRAS. 471562Congiu, E., Contini, M., Ciroi, S., et al. 2017, MNRAS, 471, 562
. M Contini, V Cracco, S Ciroi, G La Mura, A&A. 54572Contini, M., Cracco, V., Ciroi, S., & La Mura, G. 2012, A&A, 545, A72
. V Cracco, S Ciroi, F Di Mille, MNRAS. 4182630Cracco, V., Ciroi, S., di Mille, F., et al. 2011, MNRAS, 418, 2630
. C L Drake, P J Mcgregor, M A Dopita, W J M Van Breugel, AJ. 1262237Drake, C. L., McGregor, P. J., Dopita, M. A., & van Breugel, W. J. M. 2003, AJ, 126, 2237
. G Fabbiano, M Elvis, A Paggi, ApJL. 8424Fabbiano, G., Elvis, M., Paggi, A., et al. 2017, ApJL, 842, L4
. G Fabbiano, A Paggi, M Karovska, ApJ. 85583ApJFabbiano, G., Paggi, A., Karovska, M., et al. 2018a, ApJ, 855, 131 -. 2018b, ApJ, 865, 83
. G Fabbiano, A Siemiginowska, A Paggi, ApJ. 87069Fabbiano, G., Siemiginowska, A., Paggi, A., et al. 2019, ApJ, 870, 69
. H Falcke, A S Wilson, C Simpson, ApJ. 502199Falcke, H., Wilson, A. S., & Simpson, C. 1998, ApJ, 502, 199
. G J Ferland, K T Korista, D A Verner, PASP. 110761Ferland, G. J., Korista, K. T., Verner, D. A., et al. 1998, PASP, 110, 761
. T C Fischer, D M Crenshaw, S B Kraemer, H R Schmitt, The Astrophysical Journal Supplement Series. 2091Fischer, T. C., Crenshaw, D. M., Kraemer, S. B., & Schmitt, H. R. 2013, The Astrophysical Journal Supplement Series, 209, 1
. A R Foster, L Ji, R K Smith, N S Brickhouse, ApJ. 756128Foster, A. R., Ji, L., Smith, R. K., & Brickhouse, N. S. 2012, ApJ, 756, 128
. L C Gallo, J S Randhawa, S G H Waddell, MNRAS. 4843036Gallo, L. C., Randhawa, J. S., Waddell, S. G. H., et al. 2019, MNRAS, 484, 3036
. I Georgantopoulos, A Akylas, A&A. 62128Georgantopoulos, I., & Akylas, A. 2019, A&A, 621, A28
. M Guainazzi, G Matt, G C Perola, A&A. 444119Guainazzi, M., Matt, G., & Perola, G. C. 2005, A&A, 444, 119
. L Hernández-García, J Masegosa, O González-Martín, I Márquez, A&A. 57990Hernández-García, L., Masegosa, J., González-Martín, O., & Márquez, I. 2015, A&A, 579, A90
. J Kormendy, L C Ho, ARA&A. 51511Kormendy, J., & Ho, L. C. 2013, ARA&A, 51, 511
. M J Koss, C Romero-Cañizales, L Baronchelli, ApJ. 807149Koss, M. J., Romero-Cañizales, C., Baronchelli, L., et al. 2015, ApJ, 807, 149
. M J Koss, R Assef, M Baloković, ApJ. 82585Koss, M. J., Assef, R., Baloković, M., et al. 2016, ApJ, 825, 85
. S B Kraemer, D M Crenshaw, ApJ. 532256Kraemer, S. B., & Crenshaw, D. M. 2000, ApJ, 532, 256
. S B Kraemer, H R Schmitt, D M Crenshaw, ApJ. 6791128Kraemer, S. B., Schmitt, H. R., & Crenshaw, D. M. 2008, ApJ, 679, 1128
. Y Krongold, F Nicastro, M Elvis, The Astrophysical Journal. 6591022Krongold, Y., Nicastro, F., Elvis, M., et al. 2007, The Astrophysical Journal, 659, 1022
. A Lawrence, M Elvis, ApJ. 256410Lawrence, A., & Elvis, M. 1982, ApJ, 256, 410
. N A Levenson, T M Heckman, J H Krolik, K A Weaver, P T &życki, ApJ. 648111Levenson, N. A., Heckman, T. M., Krolik, J. H., Weaver, K. A., &Życki, P. T. 2006, ApJ, 648, 111
. W P Maksym, G Fabbiano, M Elvis, ApJ. 84494ApJMaksym, W. P., Fabbiano, G., Elvis, M., et al. 2017, ApJ, 844, 69 -. 2019, ApJ, 872, 94
. S Marchesi, M Ajello, L Marcotulli, ApJ. 85449Marchesi, S., Ajello, M., Marcotulli, L., et al. 2018, ApJ, 854, 49
. A Marinucci, G Miniutti, S Bianchi, G Matt, G Risaliti, MNRAS. 4362500Marinucci, A., Miniutti, G., Bianchi, S., Matt, G., & Risaliti, G. 2013, MNRAS, 436, 2500
. D J Muñoz, D Mardones, G Garay, ApJ. 668906Muñoz, D. J., Mardones, D., Garay, G., et al. 2007, ApJ, 668, 906
. D Mukherjee, A Y Wagner, G V Bicknell, MNRAS. 47680Mukherjee, D., Wagner, A. Y., Bicknell, G. V., et al. 2018, MNRAS, 476, 80
. F Müller-Sánchez, E K S Hicks, M Malkan, ApJ. 85848Müller-Sánchez, F., Hicks, E. K. S., Malkan, M., et al. 2018, ApJ, 858, 48
. M Nenkova, M M Sirocky, R Nikutta, Ž Ivezić, M Elitzur, The Astrophysical Journal. 685160Nenkova, M., Sirocky, M. M., Nikutta, R., Ivezić,Ž., & Elitzur, M. 2008, The Astrophysical Journal, 685, 160
. H Netzer, ARA&A. 53365Netzer, H. 2015, ARA&A, 53, 365
. P Padovani, D M Alexander, R J Assef, A&A Rv. 252Padovani, P., Alexander, D. M., Assef, R. J., et al. 2017, A&A Rv, 25, 2
. R Protassov, D A Van Dyk, A Connors, V L Kashyap, A Siemiginowska, The Astrophysical Journal. 571545Protassov, R., van Dyk, D. A., Connors, A., Kashyap, V. L., & Siemiginowska, A. 2002, The Astrophysical Journal, 571, 545
. C Ricci, Y Ueda, M J Koss, ApJL. 81513Ricci, C., Ueda, Y., Koss, M. J., et al. 2015, ApJL, 815, L13
. G Risaliti, R Gilli, R Maiolino, M Salvati, A&A. 35713Risaliti, G., Gilli, R., Maiolino, R., & Salvati, M. 2000, A&A, 357, 13
. H R Schmitt, J L Donley, R R J Antonucci, J B Hutchings, A L Kinney, ApJS. 148327Schmitt, H. R., Donley, J. L., Antonucci, R. R. J., Hutchings, J. B., & Kinney, A. L. 2003, ApJS, 148, 327
. P Severgnini, A Caccianiga, R Della Ceca, A&A. 54246Severgnini, P., Caccianiga, A., & Della Ceca, R. 2012, A&A, 542, A46
. V Singh, P Shastri, G Risaliti, A&A. 53284Singh, V., Shastri, P., & Risaliti, G. 2011, A&A, 532, A84
. J G Skibo, ApJ. 478522Skibo, J. G. 1997, ApJ, 478, 522
. K Terao, T Nagao, T Hashimoto, ApJ. 833190Terao, K., Nagao, T., Hashimoto, T., et al. 2016, ApJ, 833, 190
. H D Tran, ApJ. 440578ApJTran, H. D. 1995a, ApJ, 440, 565 -. 1995b, ApJ, 440, 578
. H Tsunemi, K Mori, E Miyata, ApJ. 554496Tsunemi, H., Mori, K., Miyata, E., et al. 2001, ApJ, 554, 496
. T J Turner, L Miller, ApJ. 7091230Turner, T. J., & Miller, L. 2010, ApJ, 709, 1230
. C M Urry, P Padovani, PASP. 107803Urry, C. M., & Padovani, P. 1995, PASP, 107, 803
. S Veilleux, R W Goodrich, G J Hill, ApJ. 477631Veilleux, S., Goodrich, R. W., & Hill, G. J. 1997, ApJ, 477, 631
. J Wang, G Fabbiano, G Risaliti, ApJ. 72975Wang, J., Fabbiano, G., Risaliti, G., et al. 2011, ApJ, 729, 75
. J Wang, E Nardini, G Fabbiano, The Astrophysical Journal. 78155Wang, J., Nardini, E., Fabbiano, G., et al. 2014, The Astrophysical Journal, 781, 55
. A J Wasilewski, PASP. 93560Wasilewski, A. J. 1981, PASP, 93, 560
. W Xu, Z Liu, L Gou, J Liu, MNRAS. 45526Xu, W., Liu, Z., Gou, L., & Liu, J. 2016, MNRAS, 455, L26
| [] |
[
"Time-Reversed Dissipation Induces Duality Between Minimizing Gradient Norm and Function Value",
"Time-Reversed Dissipation Induces Duality Between Minimizing Gradient Norm and Function Value"
] | [
"Jaeyeon Kim \nMIT EECS\nMIT EECS\nSeoul National University\nSeoul National University\n\n",
"Asuman Ozdaglar [email protected] \nMIT EECS\nMIT EECS\nSeoul National University\nSeoul National University\n\n",
"Chanwoo Park \nMIT EECS\nMIT EECS\nSeoul National University\nSeoul National University\n\n",
"Ernest K Ryu [email protected] \nMIT EECS\nMIT EECS\nSeoul National University\nSeoul National University\n\n"
] | [
"MIT EECS\nMIT EECS\nSeoul National University\nSeoul National University\n",
"MIT EECS\nMIT EECS\nSeoul National University\nSeoul National University\n",
"MIT EECS\nMIT EECS\nSeoul National University\nSeoul National University\n",
"MIT EECS\nMIT EECS\nSeoul National University\nSeoul National University\n"
] | [] | In convex optimization, first-order optimization methods efficiently minimizing function values have been a central subject study since Nesterov's seminal work of 1983. Recently, however, Kim and Fessler's OGM-G and Lee et al.'s FISTA-G have been presented as alternatives that efficiently minimize the gradient magnitude instead. In this paper, we present H-duality, which represents a surprising one-to-one correspondence between methods efficiently minimizing function values and methods efficiently minimizing gradient magnitude. In continuous-time formulations, H-duality corresponds to reversing the time dependence of the dissipation/friction term. To the best of our knowledge, H-duality is different from Lagrange/Fenchel duality and is distinct from any previously known duality or symmetry relations. Using H-duality, we obtain a clearer understanding of the symmetry between Nesterov's method and OGM-G, derive a new class of methods efficiently reducing gradient magnitudes of smooth convex functions, and find a new composite minimization method that is simpler and faster than FISTA-G.Preprint. Under review. | null | [
"https://export.arxiv.org/pdf/2305.06628v2.pdf"
] | 258,615,315 | 2305.06628 | fb35e837464f40f218f00fea7a750479af3befe7 |
Time-Reversed Dissipation Induces Duality Between Minimizing Gradient Norm and Function Value
15 May 2023
Jaeyeon Kim
MIT EECS
MIT EECS
Seoul National University
Seoul National University
Asuman Ozdaglar [email protected]
MIT EECS
MIT EECS
Seoul National University
Seoul National University
Chanwoo Park
MIT EECS
MIT EECS
Seoul National University
Seoul National University
Ernest K Ryu [email protected]
MIT EECS
MIT EECS
Seoul National University
Seoul National University
Time-Reversed Dissipation Induces Duality Between Minimizing Gradient Norm and Function Value
15 May 2023
In convex optimization, first-order optimization methods efficiently minimizing function values have been a central subject study since Nesterov's seminal work of 1983. Recently, however, Kim and Fessler's OGM-G and Lee et al.'s FISTA-G have been presented as alternatives that efficiently minimize the gradient magnitude instead. In this paper, we present H-duality, which represents a surprising one-to-one correspondence between methods efficiently minimizing function values and methods efficiently minimizing gradient magnitude. In continuous-time formulations, H-duality corresponds to reversing the time dependence of the dissipation/friction term. To the best of our knowledge, H-duality is different from Lagrange/Fenchel duality and is distinct from any previously known duality or symmetry relations. Using H-duality, we obtain a clearer understanding of the symmetry between Nesterov's method and OGM-G, derive a new class of methods efficiently reducing gradient magnitudes of smooth convex functions, and find a new composite minimization method that is simpler and faster than FISTA-G.Preprint. Under review.
Introduction
Since Nesterov's seminal work of 1983 [35], accelerated first-order optimization methods that efficiently reduce function values have been central to the theory and practice of large-scale optimization and machine learning. In 2012, however, Nesterov initiated the study of first-order methods that efficiently reduce gradient magnitudes of convex functions [39]. In convex optimization, making the function value exactly optimal is equivalent to making the gradient exactly zero, but reducing the function-value suboptimality below a threshold is not equivalent to reducing the gradient magnitude below a threshold. This line of research showed that accelerated methods for reducing function values, such as Nesterov's FGM [35], the more modern OGM [24], and the accelerated composite optimization method FISTA [10] are not optimal for reducing gradient magnitude, and new optimal alternatives, such as OGM-G [27] and FISTA-G [29], were presented.
These new accelerated methods for reducing gradient magnitudes are understood far less than those for minimizing function values. However, an interesting observation of symmetry, described in Section 2, was made between these two types of methods, and it was conjectured that this symmetry might be a key to understanding the acceleration mechanism for efficiently reducing gradient magnitude.
Contribution. We present a surprising one-to-one correspondence between methods efficiently minimizing function values and methods efficiently minimizing gradient magnitude. We call this correspondence H-duality and formally establish a duality theory in both discrete-and continuous-time dynamics. Using H-duality, we obtain a clearer understanding of the symmetry between FGM/OGM and OGM-G, derive a new class of methods efficiently reducing gradient magnitudes, and find a new composite minimization method that is simpler and faster than FISTA-G, the prior state-of-the-art in efficiently reducing gradient magnitude in the composite minimization setup.
Preliminaries and Notation
Given f : R^d → R, write f⋆ = inf_{x∈R^d} f(x) ∈ (−∞, ∞) for the minimum value and x⋆ ∈ argmin_{x∈R^d} f(x) for a minimizer, if one exists. Throughout this paper, we assume f⋆ ≠ −∞, but we do not always assume a minimizer x⋆ exists. Given a differentiable f : R^d → R and a pre-specified value of L > 0, we define the notation
[x, y] := f(y) − f(x) + ⟨∇f(y), x − y⟩
⟦x, y⟧ := f(y) − f(x) + ⟨∇f(y), x − y⟩ + (1/(2L)) ‖∇f(x) − ∇f(y)‖²
⟦x, ⋆⟧ := f⋆ − f(x) + (1/(2L)) ‖∇f(x)‖²
for x, y ∈ R^d. A differentiable function f : R^d → R is convex if the convexity inequality [x, y] ≤ 0 holds for all x, y ∈ R^d. For L > 0, a function f : R^d → R is L-smooth convex if it is differentiable and the cocoercivity inequality ⟦x, y⟧ ≤ 0 holds for all x, y ∈ R^d [40]. If f has a minimizer x⋆, then ⟦x, ⋆⟧ = ⟦x, x⋆⟧, but the notation ⟦x, ⋆⟧ is well defined even when a minimizer x⋆ does not exist. If f is L-smooth convex, then ⟦x, ⋆⟧ ≤ 0 holds for all x ∈ R^d [40].
Throughout this paper, we consider the duality between the following two problems.
(P1) Efficiently reduce f(x_N) − f⋆ assuming x⋆ exists and ‖x_0 − x⋆‖ ≤ R.
(P2) Efficiently reduce (1/(2L)) ‖∇f(y_N)‖² assuming f⋆ > −∞ and f(y_0) − f⋆ ≤ R.
Here, R ∈ (0, ∞) is a parameter, x_0 and y_0 denote initial points of methods for (P1) and (P2), and x_N and y_N denote outputs of methods for (P1) and (P2).
Finally, the standard gradient descent (GD) with stepsize h is
x_{i+1} = x_i − (h/L) ∇f(x_i),   i = 0, 1, …    (GD)
Prior works
Classically, the goal of optimization methods is to reduce the function value efficiently. In the smooth convex setup, Nesterov's fast gradient method (FGM) [35] achieves an accelerated O(1/k 2 )rate, and the optimized gradient method (OGM) [24] improves this rate by a factor of 2, which is, in fact, exactly optimal [17].
On the other hand, Nesterov initiated the study of methods for reducing the gradient magnitude of convex functions [39] as such methods help us understand non-convex optimization better and design faster non-convex machine learning methods. For smooth convex functions, (GD) achieves a O((f (x 0 ) − f ⋆ ) /N )-rate on the squared gradient magnitude [32, Proposition 3.3.1], while (OGM-G) achieves an accelerated O((f (x 0 ) − f ⋆ ) /N 2 )-rate [27], which matches a lower bound and is therefore optimal [33,34]. Interestingly, (OGM) and (OGM-G) exhibit an interesting hint of symmetry, as we detail in Section 2, and the goal of this work is to derive a more general duality principle from this observation.
In the composite optimization setup, the iterative shrinkage-thresholding algorithm (ISTA) [12,43,15,13] achieves a O(‖x_0 − x⋆‖²/N)-rate on function-value suboptimality, while the fast iterative shrinkage-thresholding algorithm (FISTA) [10] achieves an accelerated O(‖x_0 − x⋆‖²/N²)-rate. On the squared gradient mapping norm, FISTA-G achieves a O((F(x_0) − F⋆)/N²)-rate [29], which is optimal [33,34]. Analysis of an accelerated method often uses the estimate sequence technique [36,7,37,8,38,30] or a Lyapunov analysis [35,10,48,9,51,1,4,5,6,41]. In this work, we focus on the Lyapunov analysis technique, as it is simpler and more amenable to a continuous-time view.
The notion of duality is fundamental in many branches of mathematics, including optimization. Lagrange duality [44,45,11], Wolfe duality [55,14,47,31], and Fenchel–Rockafellar duality [19,44] are related (arguably equivalent) notions that consider a pairing of primal and dual optimization problems. The recent gauge duality [20,21,2,56] and radial duality [23,22] are alternative notions of duality for optimization problems. Attouch–Théra duality [3,46] generalizes Fenchel–Rockafellar duality to the setup of monotone inclusion problems. In this work, we present H-duality, which is a notion of duality for optimization algorithms, and it is, to the best of our knowledge, distinct from any previously known duality or symmetry relations.
H-duality
In this section, we will introduce H-duality, state the main H-duality theorem, and provide applications. Let N ≥ 1 be a pre-specified iteration count. Let {h k,i } 0≤i<k≤N be an array of (scalar) stepsizes and identify it with a lower triangular matrix H ∈ R N ×N via H k+1,i+1 = h k+1,i if 0 ≤ i ≤ k ≤ N − 1 and H k,i = 0 otherwise. An N -step Fixed Step First Order Method (FSFOM) with H is
x_{k+1} = x_k − (1/L) Σ_{i=0}^{k} h_{k+1,i} ∇f(x_i),   ∀ k = 0, …, N − 1    (1)
for any initial point x_0 ∈ R^d and differentiable f. For H ∈ R^{N×N}, define its anti-transpose H^A ∈ R^{N×N} by (H^A)_{i,j} = H_{N+1−j, N+1−i} for i, j = 1, …, N. We refer to the FSFOM with H^A as the H-dual of the FSFOM with H.
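The sketch below illustrates the FSFOM iteration (1) and the anti-transpose operation numerically. The helper names (`fsfom`, `anti_transpose`) and the toy objective are illustrative, not from the paper.

```python
import numpy as np

def fsfom(H, grad_f, x0, L):
    """Run x_{k+1} = x_k - (1/L) * sum_{i<=k} H[k, i] * grad_f(x_i) for N steps."""
    N = H.shape[0]
    xs, grads = [np.array(x0, dtype=float)], []
    for k in range(N):
        grads.append(grad_f(xs[k]))
        xs.append(xs[k] - (1.0 / L) * sum(H[k, i] * grads[i] for i in range(k + 1)))
    return xs

def anti_transpose(H):
    """(H^A)_{i,j} = H_{N+1-j, N+1-i}: flip H across its anti-diagonal."""
    return H[::-1, ::-1].T

# Gradient descent with stepsize h corresponds to H = h * I and is self-dual.
H_gd = 0.5 * np.eye(4)
assert np.allclose(anti_transpose(H_gd), H_gd)

grad = lambda x: 2.0 * x                     # gradient of f(x) = ||x||^2, so L = 2
xs = fsfom(H_gd, grad, np.ones(3), L=2.0)    # reproduces 4 steps of gradient descent
```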
Symmetry between OGM and OGM-G
Let f be an L-smooth convex function. Define the notation z^+ = z − (1/L)∇f(z) for z ∈ R^d. The accelerated methods OGM [18,24] and OGM-G [27] are
x_{k+1} = x_k^+ + ((θ_k − 1)/θ_{k+1}) (x_k^+ − x_{k−1}^+) + (θ_k/θ_{k+1}) (x_k^+ − x_k)    (OGM)
y_{k+1} = y_k^+ + ((θ_{N−k} − 1)(2θ_{N−k−1} − 1))/(θ_{N−k}(2θ_{N−k} − 1)) (y_k^+ − y_{k−1}^+) + ((2θ_{N−k−1} − 1)/(2θ_{N−k} − 1)) (y_k^+ − y_k)    (OGM-G)
for k = 0, …, N − 1, where {θ_i}_{i=0}^N is defined by θ_0 = 1, θ_{i+1}² − θ_{i+1} = θ_i² for 0 ≤ i ≤ N − 2, and θ_N² − θ_N = 2θ_{N−1}². (OGM) and (OGM-G) are two representative accelerated methods for the setups (P1) and (P2), respectively. As a surface-level symmetry, the methods both access the {θ_i}_{i=0}^N sequence, but (OGM-G) does so in a reversed ordering [27]. There turns out to be a deeper-level symmetry: (OGM) and (OGM-G) are H-duals of each other, i.e., H^A_OGM = H_OGM-G. The proof structures of (OGM) and (OGM-G) also exhibit symmetry. We can analyze (OGM) with the Lyapunov function
U_k = (L/2) ‖x_0 − x⋆‖² + Σ_{i=0}^{k−1} u_i ⟦x_i, x_{i+1}⟧ + Σ_{i=0}^{k} (u_i − u_{i−1}) ⟦x⋆, x_i⟧    (2)
for −1 ≤ k ≤ N with {u_i}_{i=0}^N = (2θ_0², …, 2θ_{N−1}², θ_N²) and u_{−1} = 0. Since ⟦·, ·⟧ ≤ 0 and {u_i}_{i=0}^N is a positive monotonically increasing sequence, {U_k}_{k=−1}^N is dissipative, i.e., U_N ≤ U_{N−1} ≤ ⋯ ≤ U_0 ≤ U_{−1}. So
θ_N² (f(x_N) − f⋆) ≤ θ_N² (f(x_N) − f⋆) + (L/2) ‖x⋆ − x_0 + z‖² (•)= U_N ≤ U_{−1} = (L/2) ‖x_0 − x⋆‖²,
where z = Σ_{i=0}^{N} ((u_i − u_{i−1})/L) ∇f(x_i). The justification of (•) is the main technical challenge of this analysis, and it is provided in Appendix B.2. Dividing both sides by θ_N², we conclude the rate
f(x_N) − f⋆ ≤ (1/θ_N²) (L/2) ‖x_0 − x⋆‖².
Likewise, we can analyze (OGM-G) with the Lyapunov function
V_k = v_0 (f(y_0) − f⋆ + ⟦y_N, ⋆⟧) + Σ_{i=0}^{k−1} v_{i+1} ⟦y_i, y_{i+1}⟧ + Σ_{i=0}^{k−1} (v_{i+1} − v_i) ⟦y_N, y_i⟧    (3)
for 0 ≤ k ≤ N with {v_i}_{i=0}^N = (1/θ_N², 1/(2θ_{N−1}²), …, 1/(2θ_0²)). Similarly, {V_k}_{k=0}^N is dissipative, so
(1/(2L)) ‖∇f(y_N)‖² (•)= V_N ≤ V_0 = (1/θ_N²)(f(y_0) − f⋆) + (1/θ_N²) ⟦y_N, ⋆⟧ ≤ (1/θ_N²)(f(y_0) − f⋆).
Again, the justification of (•) is the main technical challenge of this analysis, and it is provided in Appendix B.2. The crucial observations are (i) u_i = 1/v_{N−i} for 0 ≤ i ≤ N and (ii) the convergence rates share the identical factor 1/θ_N² = 1/u_N = v_0. Interestingly, a similar symmetry relation holds between the method pairs [(OBL-F♭), (OBL-G♭)] [42] and [(GD), (GD)], which we discuss later in Section 2.4.
H-duality theorem
The symmetry observed in Section 2.1 is, in fact, not a coincidence. Suppose we have N-step FSFOMs with H and H^A. We denote their iterates as {x_i}_{i=0}^N and {y_i}_{i=0}^N. For the FSFOM with H, define {U_k}_{k=−1}^N with the general form (2) with u_{−1} = 0. If 0 = u_{−1} ≤ u_0 ≤ u_1 ≤ ⋯ ≤ u_N, then {U_k}_{k=−1}^N is monotonically nonincreasing (dissipative). Assume we can show
u_N (f(x_N) − f⋆) ≤ U_N    (∀ x_0, x⋆, ∇f(x_0), …, ∇f(x_N) ∈ R^d).    (C1)
To clarify, since {x_i}_{i=0}^N lies within span{x_0, ∇f(x_0), …, ∇f(x_N)}, U_N depends on (x_0, x⋆, {∇f(x_i)}_{i=0}^N, {u_i}_{i=0}^N, H). If (C1) holds, the FSFOM with H exhibits the convergence rate
u_N (f(x_N) − f⋆) ≤ U_N ≤ ⋯ ≤ U_{−1} = (L/2) ‖x_0 − x⋆‖².    (4)
For the FSFOM with H^A, define {V_k}_{k=0}^N with the general form (3). If 0 ≤ v_0 ≤ v_1 ≤ ⋯ ≤ v_N, then {V_k}_{k=0}^N is monotonically nonincreasing (dissipative). Assume we can show
(1/(2L)) ‖∇f(y_N)‖² ≤ V_N    (∀ y_0, ∇f(y_0), …, ∇f(y_N) ∈ R^d, f⋆ ∈ R).    (C2)
To clarify, since {y_i}_{i=0}^N lies within span{y_0, ∇f(y_0), …, ∇f(y_N)}, V_N depends on (y_0, {∇f(y_i)}_{i=0}^N, f⋆, {v_i}_{i=0}^N, H^A). If (C2) holds, the FSFOM with H^A exhibits the convergence rate
(1/(2L)) ‖∇f(y_N)‖² ≤ V_N ≤ ⋯ ≤ V_0 = v_0 (f(y_0) − f⋆) + v_0 ⟦y_N, ⋆⟧ ≤ v_0 (f(y_0) − f⋆).    (5)
We now state our main H-duality theorem, which establishes a correspondence between the two types of bounds for the FSFOMs induced by H and H A .
Theorem 1. Consider sequences of positive real numbers {u_i}_{i=0}^N and {v_i}_{i=0}^N related through v_i = 1/u_{N−i} for i = 0, …, N. Let H ∈ R^{N×N} be lower triangular. Then,
(C1) is satisfied with {u_i}_{i=0}^N and H  ⇔  (C2) is satisfied with {v_i}_{i=0}^N and H^A.
Theorem 1 provides a sufficient condition that ensures an FSFOM with H with a convergence guarantee on (f (x N ) − f ⋆ ) can be H-dualized to obtain an FSFOM with H A with a convergence guarantee on ∇f (y N ) 2 . To the best of our knowledge, this is the first result establishing a symmetrical relationship between (P1) and (P2). Section 2.3 provides a proof outline of Theorem 1.
Proof outline of Theorem 1
Define
U := U_N − u_N (f(x_N) − f⋆) − (L/2) ‖x⋆ − x_0 + (1/L) Σ_{i=0}^N (u_i − u_{i−1}) ∇f(x_i)‖²,
V := V_N − (1/(2L)) ‖∇f(y_N)‖².
Expanding U and V reveals that all function value terms are eliminated and only quadratic terms of {∇f(x_i)}_{i=0}^N and {∇f(y_i)}_{i=0}^N remain. Now, (C1) and (C2) are equivalent to the conditions
U ≥ 0  (∀ ∇f(x_0), …, ∇f(x_N) ∈ R^d),    V ≥ 0  (∀ ∇f(y_0), …, ∇f(y_N) ∈ R^d),
respectively.
Next, define g_x = [∇f(x_0) | ∇f(x_1) | … | ∇f(x_N)] ∈ R^{d×(N+1)} and g_y = [∇f(y_0) | ∇f(y_1) | … | ∇f(y_N)] ∈ R^{d×(N+1)}. We show that there are S(H, u) and T(H^A, v) ∈ S^{N+1} such that
U = Tr(g_x S(H, u) g_x^⊺),    V = Tr(g_y T(H^A, v) g_y^⊺).
Next, we find an explicit invertible matrix M(u) ∈ R^{(N+1)×(N+1)} such that S(H, u) = M(u)^⊺ T(H^A, v) M(u). Therefore, Tr(g_x S(H, u) g_x^⊺) = Tr(g_y T(H^A, v) g_y^⊺) with g_y = g_x M(u)^⊺, and we conclude the proof.
This technique of considering the quadratic forms of Lyapunov functions as a trace of matrices is inspired by the ideas from the Performance Estimation Problem (PEP) literature [18,53]. The full proof is given in Appendix A.
Verifying conditions for H-duality theorem
In this section, we illustrate how conditions (C1) and (C2) can be verified in concrete examples.
Example 1. For (OGM) and (OGM-G), the choice
{u_i}_{i=0}^N = (2θ_0², …, 2θ_{N−1}², θ_N²),    {v_i}_{i=0}^N = (1/θ_N², 1/(2θ_{N−1}²), …, 1/(2θ_0²))
leads to U = 0, V = 0.
Therefore, (C1) and (C2) hold.
Example 2. Again, define z^+ = z − (1/L)∇f(z) for z ∈ R^d. Consider the FSFOMs [42]
x_{k+1} = x_k^+ + (k/(k+3)) (x_k^+ − x_{k−1}^+) + (k/(k+3)) (x_k^+ − x_k),    k = 0, …, N − 2,
x_N = x_{N−1}^+ + ((N−1)/(2(γ+1))) (x_{N−1}^+ − x_{N−2}^+) + ((N−1)/(2(γ+1))) (x_{N−1}^+ − x_{N−1})    (OBL-F♭)
and
y_1 = y_0^+ + ((N−1)/(2(γ+1))) (y_0^+ − y_{−1}^+) + ((N−1)/(2(γ+1))) (y_0^+ − y_0),
y_{k+1} = y_k^+ + ((N−k−1)/(N−k+2)) (y_k^+ − y_{k−1}^+) + ((N−k−1)/(N−k+2)) (y_k^+ − y_k),    k = 1, …, N − 1,    (OBL-G♭)
where y_{−1}^+ = y_0, x_{−1}^+ = x_0, and γ = N(N+1)/2. It turns out that (OBL-F♭) and (OBL-G♭) are H-duals of each other. The choice
{u_i}_{i=0}^N = (1·2/2, …, N(N+1)/2, γ² + γ),    {v_i}_{i=0}^N = (1/(γ² + γ), 2/(N(N+1)), …, 2/(1·2))
leads to
U = Σ_{i=0}^N ((u_i − u_{i−1})/(2L)) ‖∇f(x_i)‖²,    V = (v_0/(2L)) ‖∇f(y_N)‖² + Σ_{i=0}^{N−1} ((v_{i+1} − v_i)/(2L)) ‖∇f(y_i) − ∇f(y_N)‖²,
where u_{−1} = 0. Since U and V are expressed as sums of squares, (C1) and (C2) hold.
Example 3. Interestingly, (GD) is a self-dual FSFOM in the H-dual sense. For the case h = 1, the choice
{u_i}_{i=0}^N = (…, (2N+1)(i+1)/(2N−i), …, 2N+1),    {v_i}_{i=0}^N = (1/(2N+1), …, (N+i)/((2N+1)(N−i+1)), …)
leads to
U = Σ_{0≤i,j≤N} (s_{ij}/L) ⟨∇f(x_i), ∇f(x_j)⟩,    V = Σ_{0≤i,j≤N} (t_{ij}/L) ⟨∇f(y_i), ∇f(y_j)⟩
for some {s_{ij}} and {t_{ij}} stated precisely in Appendix B.2. V ≥ 0 can be established by showing that the {t_{ij}} forms a diagonally dominant and hence positive semidefinite matrix [27]. U ≥ 0 can be established with a more elaborate argument [18], but that is not necessary; V ≥ 0 implies (C2), and, by Theorem 1, this implies (C1).
Applications of the H-duality theorem
Consider a sequence of positive real numbers {t_i}_{i=0}^N satisfying t_i² ≤ 2T_i = 2 Σ_{j=0}^i t_j for 0 ≤ i ≤ N − 1 and t_N² ≤ T_N = Σ_{j=0}^N t_j.
Consider the family of FSFOMs
x_{k+1} = x_k^+ + ((T_k − t_k) t_{k+1})/(t_k T_{k+1}) (x_k^+ − x_{k−1}^+) + ((t_k² − T_k) t_{k+1})/(t_k T_{k+1}) (x_k^+ − x_k)    (6)
for k = 0, 1, …, N − 1, where x_{−1}^+ = x_0. This family coincides with the GOGM of [26], and it exhibits the rate [26, Theorem 5]
f(x_N) − f⋆ ≤ (1/T_N) (L/2) ‖x_0 − x⋆‖²,
which can be established from (2) with u_i = T_i for 0 ≤ i ≤ N.
Corollary 1. The H-dual of (6) is
y_{k+1} = y_k^+ + (T_{N−k−1}(t_{N−k−1} − 1))/(T_{N−k}(t_{N−k} − 1)) (y_k^+ − y_{k−1}^+) + ((t_{N−k}² − T_{N−k})(t_{N−k−1} − 1))/(T_{N−k}(t_{N−k} − 1)) (y_k^+ − y_k)
for k = 0, …, N − 1, where y_{−1}^+ = y_0, and it exhibits the rate
(1/(2L)) ‖∇f(y_N)‖² ≤ (1/T_N) (f(y_0) − f⋆).
Proof outline. By Theorem 1, (C2) holds with v_i = 1/T_{N−i} for 0 ≤ i ≤ N. We then use (5).
When T_i = t_i² for 0 ≤ i ≤ N, the FSFOM (6) reduces to Nesterov's FGM [35] and its H-dual is, to the best of our knowledge, a new method without a name. If t_i² = 2T_i for 0 ≤ i ≤ N − 1 and t_N² = T_N, (6) reduces to (OGM) and its H-dual is (OGM-G). If t_i = i + 1 for 0 ≤ i ≤ N − 1 and t_N = N(N + 1)/2, (6) reduces to (OBL-F♭) and its H-dual is (OBL-G♭).
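The sketch below instantiates (6) with T_i = t_i² (the FGM case) and its H-dual from Corollary 1, and checks both stated rates on a random convex quadratic. The test problem and seed are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 20, 10
M = rng.standard_normal((d, d)); A = M.T @ M
L = np.linalg.eigvalsh(A).max()
f, grad = (lambda x: 0.5 * x @ A @ x), (lambda x: A @ x)   # f_star = 0 at x_star = 0

t = np.ones(N + 1)
for i in range(N):
    t[i + 1] = (1 + np.sqrt(1 + 4 * t[i] ** 2)) / 2
T = t ** 2                                                  # T_i = t_i^2 recovers FGM

def gogm(x0):                        # method (6): reduces f(x_N) - f_star
    x, xp_prev = x0.copy(), x0.copy()
    for k in range(N):
        xp = x - grad(x) / L
        c1 = (T[k] - t[k]) * t[k + 1] / (t[k] * T[k + 1])
        c2 = (t[k] ** 2 - T[k]) * t[k + 1] / (t[k] * T[k + 1])
        x, xp_prev = xp + c1 * (xp - xp_prev) + c2 * (xp - x), xp
    return x

def gogm_h_dual(y0):                 # Corollary 1: reduces ||grad f(y_N)||^2
    y, yp_prev = y0.copy(), y0.copy()
    for k in range(N):
        yp = y - grad(y) / L
        c1 = T[N - k - 1] * (t[N - k - 1] - 1) / (T[N - k] * (t[N - k] - 1))
        c2 = (t[N - k] ** 2 - T[N - k]) * (t[N - k - 1] - 1) / (T[N - k] * (t[N - k] - 1))
        y, yp_prev = yp + c1 * (yp - yp_prev) + c2 * (yp - y), yp
    return y

x0 = rng.standard_normal(d)
print(f(gogm(x0)) <= L * np.dot(x0, x0) / (2 * T[N]))                    # rate of (6)
print(np.sum(grad(gogm_h_dual(x0)) ** 2) / (2 * L) <= f(x0) / T[N])      # rate of its H-dual
```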
Gradient magnitude rate of (GD). For gradient descent (GD) with stepsize h, the H matrix is the identity matrix scaled by h, and the H-dual is (GD) itself, i.e., (GD) is self-dual. For 0 < h ≤ 1, the rate
f(x_N) − f⋆ ≤ (1/(2Nh + 1)) (L/2) ‖x_0 − x⋆‖²,
originally due to [18], can be established from (2) with {u_i}_{i=0}^N = (…, (2Nh+1)(i+1)/(2N−i), …, 2Nh+1). Applying Theorem 1 leads to the following.
Corollary 2. Consider (GD) with 0 < h ≤ 1 applied to an L-smooth convex f. For N ≥ 1,
(1/(2L)) ‖∇f(x_N)‖² ≤ min{ (f(x_0) − f⋆)/(2Nh + 1),  L ‖x_0 − x⋆‖² / (2(2⌊N/2⌋h + 1)(2⌈N/2⌉h + 1)) }.
To the best of our knowledge, Corollary 2 is the tightest rate on gradient magnitude for (GD) for the general step size 0 < h < 1, and it matches [51, Theorem 3] for h = 1.
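A quick numerical check of Corollary 2 for a conservative stepsize h < 1 is sketched below; the quadratic instance is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(2)
d, N, h = 20, 15, 0.7
M = rng.standard_normal((d, d)); A = M.T @ M
L = np.linalg.eigvalsh(A).max()
f, grad = (lambda x: 0.5 * x @ A @ x), (lambda x: A @ x)   # f_star = 0, x_star = 0

x0 = rng.standard_normal(d)
x = x0.copy()
for _ in range(N):
    x = x - (h / L) * grad(x)                               # (GD) with stepsize h

floor_half, ceil_half = N // 2, (N + 1) // 2
bound = min(f(x0) / (2 * N * h + 1),
            L * np.dot(x0, x0) / (2 * (2 * floor_half * h + 1) * (2 * ceil_half * h + 1)))
print(np.sum(grad(x) ** 2) / (2 * L) <= bound)              # Corollary 2 holds
```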
Resolving conjectures of A ⋆ -optimality of (OGM-G) and (OBL-F ♭ ). The prior work of [42] defines the notion of A ⋆ -optimality, a certain restricted sense of optimality of FSFOMs, and shows that (OGM) and (OBL-F ♭ ) are A ⋆ -optimal under a certain set of relaxed inequalities. On the other hand, A ⋆ -optimality of (OGM-G) and (OBL-G ♭ ) are presented as conjectures. Combining Theorem 1 and the A ⋆ -optimality of (OGM) and (OBL-F ♭ ) resolves these conjectures; (OGM-G) and (OBL-G ♭ ) are A ⋆ -optimal.
H-duality in continuous time
We now establish a continuous-time analog of the H-duality theorem. As the continuous-time result and, especially, its proof is much simpler than its discrete-time counterpart, the results of this section serve as a vehicle to convey the key ideas more clearly.
Let T > 0 be a pre-specified terminal time. Let H(t, s) be an appropriately integrable real-valued kernel with domain {(t, s) | 0 < s < t < T}. We define a Continuous-time Fixed Step First Order Method (C-FSFOM) with H as
X(0) = x_0,    Ẋ(t) = −∫_0^t H(t, s) ∇f(X(s)) ds,    ∀ t ∈ (0, T)    (7)
for any initial point x_0 ∈ R^d and differentiable f. Note, the Euler discretization of C-FSFOMs (7) corresponds to FSFOMs (1). The notion of C-FSFOMs has been considered previously in [28]. The H-dual of the C-FSFOM with kernel H is the C-FSFOM with the anti-transposed kernel H^A(t, s) = H(T − s, T − t). When H(t, s) = e^{γ(s)−γ(t)} for some function γ(·), the C-FSFOMs with H and its H-dual have the form
Ẍ(t) + γ′(t) Ẋ(t) + ∇f(X(t)) = 0    (C-FSFOM with H(t, s) = e^{γ(s)−γ(t)})
Ÿ(t) + γ′(T − t) Ẏ(t) + ∇f(Y(t)) = 0    (C-FSFOM with H^A(t, s))
Interestingly, friction terms with γ ′ have time-reversed dependence between the H-duals, and this is why we refer to this phenomenon as time-reversed dissipation.
Continuous-time H-duality theorem
For the C-FSFOM with H, define the energy function
U(t) = (1/2) ‖X(0) − x⋆‖² + ∫_0^t u′(s) [x⋆, X(s)] ds    (8)
for t ∈ [0, T] with differentiable u : (0, T) → R. If u′(·) ≥ 0, then {U(t)}_{t∈[0,T]} is dissipative. Assume we can show
u(T) (f(X(T)) − f⋆) ≤ U(T)    (∀ X(0), x⋆, {∇f(X(s))}_{s∈[0,T]} ∈ R^d).    (C3)
Then, the C-FSFOM with H exhibits the convergence rate
u(T) (f(X(T)) − f⋆) ≤ U(T) ≤ U(0) = (1/2) ‖X(0) − x⋆‖².
For the C-FSFOM with H^A, define the energy function
V(t) = v(0) (f(Y(0)) − f(Y(T))) + ∫_0^t v′(s) [Y(T), Y(s)] ds    (9)
for t ∈ [0, T] with differentiable v : (0, T) → R. If v′(·) ≥ 0, then {V(t)}_{t∈[0,T]} is dissipative. Assume we can show
(1/2) ‖∇f(Y(T))‖² ≤ V(T)    (∀ Y(0), {∇f(Y(s))}_{s∈[0,T]} ∈ R^d).    (C4)
Then, the C-FSFOM with H^A exhibits the convergence rate
(1/2) ‖∇f(Y(T))‖² ≤ V(T) ≤ V(0) = v(0) (f(Y(0)) − f(Y(T))) ≤ v(0) (f(Y(0)) − f⋆).
Theorem 2 (informal). Consider differentiable functions u, v : (0, T) → R related through v(t) = 1/u(T − t) for t ∈ (0, T). Then, (C3) is satisfied with u ⇔ (C4) is satisfied with v.
The formal statement of Theorem 2 and its proof are given in Appendix C.2. Loosely speaking, we can consider Theorem 2 as the limit of Theorem 1 with N → ∞.
Verifying conditions for H-duality theorem
As an illustrative example, consider the case H(t, s) = s^r/t^r for r ≥ 3, which corresponds to an ODE studied in the prior work [48, 49]. For the C-FSFOM with H, the choice u(t) = t²/(2(r−1)) for the dissipative energy function {U(t)}_{t=0}^T of (8) expresses U(T) − u(T)(f(X(T)) − f⋆) as a sum/integral of squares.
For the C-FSFOM with H^A, the choice v(t) = 1/u(T−t) = 2(r−1)/(T−t)² for the dissipative energy function {V(t)}_{t=0}^T of (9) leads to
V(T) − (1/2) ‖∇f(Y(T))‖² = (2(r−1)(r−3) ‖Y(0) − Y(T)‖²)/T⁴ + ∫_0^T (2(r−1)(r−3) ‖(T−s)Ẏ(s) + 2(Y(s) − Y(T))‖²)/(T−s)⁵ ds.
Since the right-hand sides are expressed as sums/integrals of squares, they are nonnegative, so (C3) and (C4) hold. (By Theorem 2, verifying (C3) implies (C4) and vice versa.) The detailed calculations are provided in Appendix C.1.
Applications of continuous-time H-duality theorem
The C-FSFOM (7) with
$$H(t,s) = \frac{Cp^2\, s^{2p-1}}{t^{p+1}} \qquad \text{recovers} \qquad \ddot{X}(t) + \frac{p+1}{t}\dot{X}(t) + Cp^2 t^{p-2}\nabla f(X(t)) = 0,$$
an ODE considered in [54]. The rate $f(X(T)) - f_\star \le \frac{1}{2CT^p}\|X(0) - x_\star\|^2$ can be established from (8) with $u(t) = Ct^p$. The C-FSFOM with $H^A$ can be expressed as the ODE
$$\ddot{Y}(t) + \frac{2p-1}{T-t}\dot{Y}(t) + Cp^2(T-t)^{p-2}\nabla f(Y(t)) = 0. \tag{10}$$
By Theorem 2, using (9) with $v(t) = \frac{1}{C(T-t)^p}$ leads to the rate
$$\frac{1}{2}\|\nabla f(Y(T))\|^2 \le \frac{1}{CT^p}\left(f(Y(0)) - f_\star\right).$$
Note that the continuous-time models of (OGM) and (OGM-G), considered in [49], are special cases of this setup with p = 2 and C = 1/2. The detailed derivation and well-definedness of the ODE are presented in Appendix C.3.
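The rate just stated can be checked numerically. The sketch below integrates ODE (10) on a toy quadratic with a crude Euler scheme; the particular $p$, $C$, $T$, starting point, and the cutoff near $t = T$ are assumptions of this sketch only.

```python
import numpy as np

# Sketch: numerically check (1/2)||grad f(Y(T))||^2 <= (1/(C T^p)) (f(Y(0)) - f_star)
# for ODE (10):  Y'' + ((2p-1)/(T-t)) Y' + C p^2 (T-t)^{p-2} grad f(Y) = 0.
p, C, T, dt = 3.0, 0.5, 10.0, 1e-4
f      = lambda z: 0.5 * z @ z            # L-smooth convex toy objective, f_star = 0
grad_f = lambda z: z

y, v, t = np.array([3.0, -2.0]), np.zeros(2), 0.0
y0 = y.copy()
while t < T - 1e-3:                       # stop just before the t = T singularity
    a = -(2 * p - 1) / (T - t) * v - C * p**2 * (T - t)**(p - 2) * grad_f(y)
    y, v, t = y + dt * v, v + dt * a, t + dt

lhs = 0.5 * grad_f(y) @ grad_f(y)
rhs = (f(y0) - 0.0) / (C * T**p)
print(f"lhs = {lhs:.3e}  <=  rhs = {rhs:.3e}")
```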
New method efficiently reducing gradient mapping norm: (SFG)
In this section, we introduce a novel algorithm obtained using the insights of Theorem 1. Consider minimizing $F(x) := f(x) + g(x)$, where $f\colon \mathbb{R}^d \to \mathbb{R}$ is L-smooth convex with $0 < L < \infty$ and $g\colon \mathbb{R}^d \to \mathbb{R}\cup\{\infty\}$ is a closed convex proper function. Write $F_\star = \inf_{x\in\mathbb{R}^d} F(x)$ for the minimum value. For $\alpha > 0$, define the α-proximal gradient step as
$$y^{\oplus,\alpha} = \operatorname*{argmin}_{z\in\mathbb{R}^d}\left(f(y) + \langle\nabla f(y), z - y\rangle + g(z) + \frac{\alpha L}{2}\|z - y\|^2\right) = \mathrm{Prox}_{\frac{g}{\alpha L}}\!\left(y - \frac{1}{\alpha L}\nabla f(y)\right).$$
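For concreteness, the following sketch evaluates $y^{\oplus,\alpha}$ when $g$ is the $\ell_1$ penalty $\lambda\|\cdot\|_1$, in which case the proximal operator is soft-thresholding; the specific quadratic $f$, $\lambda$, and $\alpha$ are illustrative assumptions of this sketch.

```python
import numpy as np

# Sketch of the alpha-proximal gradient step y^{+,alpha} for g(z) = lam * ||z||_1,
# where Prox_{g/(alpha L)} is soft-thresholding with threshold lam / (alpha L).
L, alpha, lam = 2.0, 4.0, 0.1
A = np.array([[2.0, 0.0], [0.0, 1.0]])    # f(y) = 0.5 y^T A y, L-smooth with L = 2
grad_f = lambda y: A @ y

def prox_step(y):
    step = y - grad_f(y) / (alpha * L)    # gradient step with step size 1/(alpha L)
    thr = lam / (alpha * L)
    return np.sign(step) * np.maximum(np.abs(step) - thr, 0.0)   # soft-thresholding

y = np.array([1.0, -0.5])
print("y^{+,alpha} =", prox_step(y))
```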
Consider FSFOMs defined by a lower triangular matrix H = {h k,i } 0≤i<k≤N as follows:
$$x_{k+1} = x_k - \sum_{i=0}^{k}\alpha h_{k+1,i}\left(x_i - x_i^{\oplus,\alpha}\right), \qquad \forall\, k = 0,\dots,N-1.$$
When g = 0, this reduces to (1). FISTA [10], FISTA-G [29] and GFPGM [25] are instances of this FSFOM with α = 1. In this section, we present a new method for efficiently reducing the gradient mapping norm. This method is faster than the prior state-of-the-art FISTA-G [29] by a constant factor of 5.28 while having substantially simpler coefficients.
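A generic driver for such composite FSFOMs is easy to write down. The sketch below takes a lower-triangular coefficient array and a `prox_step` routine (for instance the soft-thresholding sketch above); the array layout `H[k+1, i] = h_{k+1,i}` with an unused first row is an indexing convention of this sketch, not of the paper.

```python
import numpy as np

# Sketch of a composite FSFOM driven by a lower-triangular matrix H:
#   x_{k+1} = x_k - sum_{i<=k} alpha * H[k+1, i] * (x_i - x_i^{+,alpha}).
# `prox_step` is assumed to return x^{+,alpha} for the problem at hand.
def run_fsfom(H, x0, prox_step, alpha, N):
    xs = [np.asarray(x0, dtype=float)]
    deltas = []                                   # stores x_i - x_i^{+,alpha}
    for k in range(N):
        deltas.append(xs[k] - prox_step(xs[k]))
        x_next = xs[k] - alpha * sum(H[k + 1, i] * deltas[i] for i in range(k + 1))
        xs.append(x_next)
    return xs

# When g = 0, alpha * (x - prox_step(x)) = (1/L) grad f(x), so this recovers (1).
```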
Theorem 3. Consider the method
y k+1 = y ⊕,4 k + (N −k+1)(2N −2k−1) (N −k+3)(2N −2k+1) y ⊕,4 k − y ⊕,4 k−1 + (4N −4k−1)(2N −2k−1) 6(N −k+3)(2N −2k+1) y ⊕,4 k − y k y N = y ⊕,4 N −1 + 3 10 y ⊕,4 N −1 − y ⊕,4 N −2 + 3 40 y ⊕,4 N −1 − y N −1 (SFG) for k = 0, . . . , N − 2, where y ⊕,4 −1 = y 0 . This method exhibits the rate min v∈∂F (y ⊕,4 N ) v 2 ≤ 25L 2 y N − y ⊕,4 N 2 ≤ 50L (N + 2)(N + 3) (F (y 0 ) − F ⋆ ) .
We call this method Super FISTA-G (SFG), and in Appendix D.3, we present a more general parameterized family (SFG-family). To derive (SFG-family), we start with the parameterized family GFPGM [25], which exhibits an accelerated rate on function values, and express it as FSFOMs with H. We then obtain the FSFOMs with $H^A + C$, where C is a lower triangular matrix satisfying certain constraints. We find that the appropriate H-dual for the composite setup is given by this $H^A + C$, rather than $H^A$. We provide the proof of Theorem 3 in Appendix D.2.
(SFG) is an instance of (SFG-family) with simple rational coefficients. Among the family, the optimal choice has complicated coefficients, but its rate has a leading coefficient of 46, which is slightly smaller than the 50 of (SFG). We provide the details in Appendix D.4.
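A direct transcription of (SFG) into Python might look as follows; `prox4` is assumed to compute $y^{\oplus,4}$ for the problem at hand (for instance via the soft-thresholding sketch above with $\alpha = 4$), and the loop simply applies the coefficients stated in Theorem 3.

```python
# Sketch of (SFG); prox4(y) is assumed to return y^{+,4}, the alpha = 4 proximal gradient step.
def sfg(y0, prox4, N):
    y, p_prev = y0, y0                 # p_prev plays the role of y_{-1}^{+,4} = y_0
    for k in range(N - 1):             # k = 0, ..., N-2
        p = prox4(y)
        c1 = (N - k + 1) * (2 * N - 2 * k - 1) / ((N - k + 3) * (2 * N - 2 * k + 1))
        c2 = (4 * N - 4 * k - 1) * (2 * N - 2 * k - 1) / (6 * (N - k + 3) * (2 * N - 2 * k + 1))
        y, p_prev = p + c1 * (p - p_prev) + c2 * (p - y), p
    p_last = prox4(y)                  # y_{N-1}^{+,4}
    return p_last + 3 / 10 * (p_last - p_prev) + 3 / 40 * (p_last - y)   # y_N
```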
Conclusion
In this work, we defined the notion of H-duality and formally established that the H-dual of an optimization method designed to efficiently reduce function values is another method that efficiently reduces gradient magnitude.
For optimization algorithms, the notion of equivalence, whether informal or formal [57], is intuitive and standard. For optimization problems, the notion of equivalence is also standard, but the beauty of convex optimization is arguably derived from the elegant duality of optimization problems. In fact, there are many notions of duality for spaces, problems, operators, functions, sets, etc. However, the notion of duality for algorithms is something we, the authors, are unfamiliar with in the context of optimization, applied mathematics, and computer science. In our view, the significance of this work is establishing the first instance of a duality of algorithms.
The idea that an optimization algorithm is an abstract mathematical object that we can take the dual of opens the door to many interesting questions. In particular, exploring for what type of algorithms the H-dual or a similar notion of duality makes sense is an interesting direction for future work.
A Proof of Theorem 1
Reformulate (C1) and (C2) into U and V . In this paragraph, we will show
(C1) ⇔ U ≥ 0, ∀ ∇f (x 0 ), . . . , ∇f (x N ) ∈ R d and (C2) ⇔ V ≥ 0, ∀ ∇f (y 0 ), . . . , ∇f (y N ) ∈ R d .
Recall the definition of U and V.
U : = U N − u N (f (x N ) − f ⋆ ) − L 2 x ⋆ − x 0 + 1 L N i=0 (u i − u i−1 )∇f (x i ) 2 (11) V : = V N − 1 2 ∇f (y N ) 2 .(12)First we calculate U N − u N (f (x N ) − f ⋆ ). U N − u N (f (x N ) − f ⋆ ) = L 2 x 0 − x ⋆ 2 + N i=0 (u i − u i−1 ) f (x i ) − f ⋆ + ∇f (x i ), x ⋆ − x i + 1 2L ∇f (x i ) 2 + N −1 i=0 u i f (x i+1 ) − f (x i ) + ∇f (x i+1 ), x i − x i+1 + 1 2L ∇f (x i ) − ∇f (x i+1 ) 2 − u N (f (x N ) − f ⋆ ) (•) = L 2 x 0 − x ⋆ 2 + N i=0 (u i − u i−1 ) ∇f (x i ), x ⋆ − x i + 1 2L ∇f (x i ) 2 + N −1 i=0 u i ∇f (x i+1 ), x i − x i+1 + 1 2L ∇f (x i ) − ∇f (x i+1 ) 2 = L 2 x 0 − x ⋆ + N i=0 u i − u i−1 L ∇f (x i ) 2 − 1 2L N i=0 (u i − u i−1 )∇f (x i ) 2 + N i=0 (u i − u i−1 ) ∇f (x i ), x 0 − x i + 1 2L ∇f (x i ) 2 + N −1 i=0 u i ∇f (x i+1 ), x i − x i+1 + 1 2L ∇f (x i ) − ∇f (x i+1 ) 2 .
Note that all function value terms are deleted at •. Therefore,
U = − 1 2L N i=0 (u i − u i−1 )∇f (x i ) 2 + N i=0 (u i − u i−1 ) ∇f (x i ), x 0 − x i + 1 2L ∇f (x i ) 2 + N −1 i=0 u i ∇f (x i+1 ), x i − x i+1 + 1 2L ∇f (x i ) − ∇f (x i+1 ) 2(13)
and
U N − u N (f (x N ) − f ⋆ ) =U + L 2 x 0 − x ⋆ + N i=0 u i − u i−1 L ∇f (x i ) 2 .
Since
(x 0 − x i ), (x i − x i+1 ) ∈ span {∇f (x 0 ), . . . , ∇f (x N )}, the value of U is independent with x 0 , x ⋆ . Thus the only term that depends on x 0 and x ⋆ is L 2 x 0 − x ⋆ + N i=0 ui−ui−1 L ∇f (x i ) 2 . Next, since x 0 , x ⋆ can have any value, we can take x 0 − x ⋆ = ui−ui−1 L ∇f (x i ). Thus it gives the fact that (C1) is equivalent to U ≥ 0, ∀ ∇f (x 0 ), . . . , ∇f (x N ) ∈ R d . Now we calculate V N − 1 2 ∇f (y N ) 2 . V N − 1 2 ∇f (y N ) 2 =v 0 f ⋆ − f (y N ) + 1 2L ∇f (y N ) 2 + v 0 (f (y 0 ) − f ⋆ ) + N −1 i=0 v i+1 f (y i+1 ) − f (y i ) + ∇f (y i+1 ), y i − y i+1 + 1 2L ∇f (y i ) − ∇f (y i+1 ) 2 + N −1 i=0 (v i+1 − v i ) f (y i ) − f (y N ) + ∇f (y i ), y N − y i + 1 2L ∇f (y i ) − ∇f (y N ) 2 − 1 2L ∇f (y N ) 2 (•) = v 0 2L ∇f (y N ) 2 + N −1 i=0 v i+1 ∇f (y i+1 ), y i − y i+1 + 1 2L ∇f (y i ) − ∇f (y i+1 ) 2 + N −1 i=0 (v i+1 − v i ) ∇f (y i ), y N − y i + 1 2L ∇f (y i ) − ∇f (y N ) 2 − 1 2L ∇f (y N ) 2 .
Note that all function values are deleted at (•). By the calculation result,
V = v 0 2L ∇f (y N ) 2 + N −1 i=0 v i+1 ∇f (y i+1 ), y i − y i+1 + 1 2L ∇f (y i ) − ∇f (y i+1 ) 2 + N −1 i=0 (v i+1 − v i ) ∇f (y i ), y N − y i + 1 2L ∇f (y i ) − ∇f (y N ) 2 − 1 2L ∇f (y N ) 2 .(14)(C2) is equivalent to V ≥ 0, ∀ ∇f (y 0 ), . . . , ∇f (y N ) ∈ R d . To establish Theorem 1, demonstrating U ≥ 0, ∀ ∇f (x 0 ), . . . , ∇f (x N ) ∈ R d ⇔ V ≥ 0, ∀ ∇f (y 0 ), . . . , ∇f (y N ) ∈ R d(15)
would suffice.
Transforming U and V into a trace. Define
$$g_x := \left[\nabla f(x_0)\,|\,\nabla f(x_1)\,|\,\dots\,|\,\nabla f(x_N)\right] \in \mathbb{R}^{d\times(N+1)}, \qquad g_y := \left[\nabla f(y_0)\,|\,\nabla f(y_1)\,|\,\dots\,|\,\nabla f(y_N)\right] \in \mathbb{R}^{d\times(N+1)}.$$
In this paragraph, we convert the U of (11) and the V of (12) into the trace of symmetric matrices. The key idea is:
For each a, b term where a, b ∈ span{∇f (x 0 ), . . . , ∇f (x N )}, we can write a = g x a, b = g x b for some a, b ∈ R (N +1)×1 . Then a, b = g x a, g x b = b ⊺ g ⊺ x g x a = Tr (b ⊺ g ⊺ x g x a) = Tr (ab ⊺ g ⊺ x g x ) = Tr (g x ab ⊺ g ⊺ x ) .(16)
Also note that $(x_0 - x_i), (x_i - x_{i+1}) \in \mathrm{span}\{\nabla f(x_0), \dots, \nabla f(x_N)\}$ and $(y_i - y_N), (y_i - y_{i+1}) \in \mathrm{span}\{\nabla f(y_0), \dots, \nabla f(y_N)\}$. By using this technique, we observe that there exist $S(H, u)$ and $T(H^A, v)$ that satisfy
$$(11) = \mathrm{Tr}\left(g_x\, S(H,u)\, g_x^\intercal\right), \qquad (12) = \mathrm{Tr}\left(g_y\, T(H^A,v)\, g_y^\intercal\right).$$
From now, we specifically calculate S(H, u) and
T (H A , v). Denote {e i } N i=0 ∈ R (N +1)×1
as a unit vector which (i + 1)-th component is 1, e −1 = e N +1 = 0 and define H as
H = 0 0 H 0 ∈ R (N +1)×(N +1) .
Then, by the definition of g x and H, we have
g x e i = ∇f (x i ) 0 ≤ i ≤ N, 1 L g x H ⊺ e 0 = 0,(17)1 L g x H ⊺ e i+1 = 1 L i j=0 h i,j g x e j = 1 L i j=0 h i,j ∇f (x j ) = x i − x i+1 0 ≤ i ≤ N − 1.(18)
Therefore, we can express (11) with H, {u i } N i=0 , g x and {e i } N i=0 using (17) and (18) as
(11) = − 1 2L N i=0 (u i − u i−1 )∇f (x i ) 2 + N i=0 (u i − u i−1 ) ∇f (x i ), x 0 − x i + 1 2L ∇f (x i ) 2 + N −1 i=0 u i ∇f (x i+1 ), x i − x i+1 + 1 2L ∇f (x i ) − ∇f (x i+1 ) 2 = − 1 2L N i=0 (u i − u i−1 )g x e i 2 + N i=0 (u i − u i−1 ) g x e i , 1 L g x H ⊺ (e 0 + · · · + e i ) + 1 2L g x e i 2 + N −1 i=0 u i g x e i+1 , 1 L g x H ⊺ e i+1 + 1 2L g x (e i − e i+1 ) 2 .
Using (16)
induces (11) = Tr (g x S(H, u)g ⊺ x ) where S(H, u) = − 1 2L N i=0 (u i − u i−1 )e i N i=0 (u i − u i−1 )e i ⊺ + 1 2L H ⊺ N i=0 u i (e 0 + · · · + e i )(e i − e i+1 ) ⊺ + 1 2L N i=0 u i (e i − e i+1 )(e 0 + · · · + e i ) ⊺ H + 1 2L N i=0 u i ((e i − e i+1 )e ⊺ i + e i (e i − e i+1 ) ⊺ ) − u N e N e ⊺ N .(19)
Similarly, we calculate (12). Define H A as a anti-transpose matrix of H:
H A = 0 0 H A 0 ∈ R (N +1)×(N +1) .
Then, by the definition of g y and H A , we have
g y e i = ∇f (y i ) 0 ≤ i ≤ N, 1 L g y H A ⊺ e 0 = 0,(20)1 L g y H A ⊺ e i+1 = 1 L i j=0 h i,j g y e j = 1 L i j=0 h i,j ∇f (y j ) = y i − y i+1 0 ≤ i ≤ N − 1.(21)
Therefore, we can express (12) with (20) and (21) as (12)
H A , {v i } N i=0 , g and {e i } N i=0 using= v 0 − 1 2L ∇f (y N ) 2 + N −1 i=0 v i+1 ∇f (y i+1 ), y i − y i+1 + 1 2L ∇f (y i ) − ∇f (y i+1 ) 2 + N −1 i=0 (v i+1 − v i ) ∇f (y i ), y N − y i + 1 2L ∇f (y i ) − ∇f (y N ) 2 = v 0 − 1 2L g y e N 2 + N −1 i=0 v i+1 g y e i+1 , 1 L g y H A ⊺ e i+1 + 1 2L g y (e i − e i+1 ) 2 + N −1 i=0 (v i+1 − v i ) − g y e i , 1 L g y H A ⊺ (e i+1 + · · · + e N ) + 1 2L g y (e i − e N ) 2 .
We can write (12) = Tr
g y T (H A , v)g ⊺ y where T (H A , v) = 1 2L N i=0 v i ((e i−1 − e i )(e i−1 − e N ) ⊺ + (e i−1 − e N )(e i−1 − e i ) ⊺ ) − v 0 2L e 0 e ⊺ 0 − 1 2L e N e ⊺ N + 1 2L N i=0 v i H A ⊺ (e i + · · · + e N )(e i − e i−1 ) ⊺ + (e i − e i−1 )(e i + · · · + e N ) ⊺ H A = 1 2L N i=0 1 u N −i ((e i−1 − e i )(e i−1 − e N ) ⊺ + (e i−1 − e N )(e i−1 − e i ) ⊺ ) − 1 2u N L e 0 e ⊺ 0 − 1 2L e N e ⊺ N + 1 2L N i=0 1 u N −i H A ⊺ (e i + · · · + e N )(e i − e i−1 ) ⊺ + (e i − e i−1 )(e i + · · · + e N ) ⊺ H A .(22)
Finding auxiliary matrix M(u) that gives the relation between S(H, u) and T (H A , v). We can show that there exists an invertible M (u) ∈ R (N +1)×(N +1) such that
S(H, u) = M(u) ⊺ T (H A , v)M(u).
(23) If we assume the above equation,
Tr (g x S(H, u)g ⊺ x ) = Tr g y T (H A , v)g ⊺ y (24) with g y = g x M(u) ⊺ . Since M(u) is invertible, {g|g ∈ R d×(N +1) } = {gM(u) ⊺ |g ∈ R d×(N +1) }. (25) Also, note that U ≥ 0 ∀ ∇f (x 0 ), . . . , ∇f (x N ) ⇔ Tr (g x S(H, u)g ⊺ x ) ≥ 0 ∀g x ∈ R d×(N +1) and V ≥ 0 ∀ ∇f (y 0 ), . . . , ∇f (y N ) ⇔ Tr g y T (H A , u)g ⊺ y ≥ 0 ∀g y ∈ R d×(N +1)
. By combining (24) and (25), we obtain
Tr (g x S(H, u)g ⊺ x ) ≥ 0 ∀g x ∈ R d×(N +1) ⇔ Tr g y S(H, u)g ⊺ y ≥ 0 ∀g x ∈ R d×(N +1) , g y = g x M(u) ⊺ ⇔ Tr g y S(H, u)g ⊺ y ≥ 0 ∀g y ∈ R d×(N +1) .
To sum up, we obtain (15)
U ≥ 0 ∀ ∇f (x 0 ), . . . , ∇f (x N ) ⇔ V ≥ 0 ∀ ∇f (y 0 ), . . . , ∇f (y N ) ,
which concludes the proof.
Explicit form of M(u) and justification of (23) .
Explicit form of M(u) is M = 0 · · · 0 0 u N 0 · · · 0 u N −1 u N − u N −1 0 · · · u N −2 u N −1 − u N −2 u N − u N −1 . . . . . . . . . . . . . . . u 0 · · · u N −2 − u N −3 u N −1 − u N −2 u N − u N −1 ∈ R (N +1)×(N +1) . (26) Now, we express M(u) = 0≤i,j≤N m ij (u)e i e ⊺ j , S(H, u) = 0≤i,j≤N s ij e i e ⊺ j , and T (H A , v) = 0≤i,j≤N t ij e i e ⊺ j . Calculating M ⊺ (u)T (H A , v)M(u) gives M ⊺ (u)T (H A , v)M(u) = i,j m ij (u)e j e ⊺ i i,j t ij e i e ⊺ j i,j m ij (u)e i e ⊺ j = i,j t ij k m ik (u)e k l m jl (u)e l ⊺ : = i,j t ij f i (u)f j (u) ⊺ . Thus it is enough to show that i,j s ij e i e ⊺ j and i,j t ij f i (u)f j (u) ⊺ are the same under the basis transformation f i (u) = k m ik (u)e k . From here, we briefly write f i instead f i (u), and f −1 = 0. Note that u i (e i − e i+1 ) = (f N −i − f N −i−1 ), 0 ≤ i ≤ N by definition of M(u). Therefore, we have 1 L H ⊺ N i=0 u i (e 0 + · · · + e i )(e i − e i+1 ) ⊺ = 1 L H ⊺ N i=0 (e 0 + · · · + e i )(f N −i − f N −i−1 ) ⊺ = 1 L H ⊺ N i=0 e i f ⊺ N −i .
Therefore, we can rewrite (19) as follows:
S(H, u) = − 1 2L f N f ⊺ N + 1 2L H ⊺ N i=0 e i f ⊺ N −i + 1 2L N i=0 f N −i e ⊺ i H + 1 2L N i=0 (f N −i − f N −i−1 )e ⊺ i + e i (f N −i f N −i−1 ) ⊺ − u N 2L e N e ⊺ N = − 1 2L f N f ⊺ N A1 + 1 2L i,j h i,j e j f ⊺ N −i + 1 2L i,j h i,j f N −i e ⊺ j B1 + 1 2L N i=0 (f N −i − f N −i−1 )e ⊺ i + e i (f N −i − f N −i−1 ) ⊺ C1 − u N 2L e N e ⊺ N D1 .(27)
Similarly, by using
e N −i − e N −i+1 = 1 u N −i (f i − f i−1 ) = v i (f i − f i−1 ) ,
we can rewrite (22) as follows:
M ⊺ (u)T (H A , v)M(u) = 1 2L N i=0 (e N −i+1 − e N −i )(f i−1 − f N ) ⊺ + (f i−1 − f N )(e N −i+1 − e N −i ) ⊺ − 1 2u N L f 0 f ⊺ 0 − 1 2L f N f ⊺ N + 1 2L H A ⊺ N i=0 f i e ⊺ N −i + 1 2L N i=0 e N −i f ⊺ i H A = − 1 2u N L f 0 f ⊺ 0 D2 + 1 2L i,j h N −i,N −j f j e ⊺ N −i + 1 2 i,j h N −j,N −i e N −i f ⊺ j B2 + 1 2L N i=0 (e N −i+1 − e N −i )f ⊺ i−1 + f i−1 (e N −i+1 − e N −i ) ⊺ − 1 2L (e 0 f ⊺ N + f N e ⊺ 0 ) C2 − 1 2L f N f ⊺ N A2 .(28)
For the final step, we compare (28) and (27) term-by-term, by showing X 1 = X 2 for X = A, B, C, D.
• $A_1 = A_2$ comes directly.
• $B_1 = B_2$ comes from changing the summation indices $i \to N-i$ and $j \to N-j$.
• $C_1 = C_2$ comes from the expansion of the summation.
• $D_1 = D_2$ comes from $f_0 = u_N e_N$.
Therefore,
S(H, u) = M ⊺ (u)T (H A , v)M(u),
which concludes the proof.
Remark 1. We can interpret M as a basis transformation, where
$$u_N e_N = f_0, \qquad u_i(e_i - e_{i+1}) = f_{N-i} - f_{N-i-1}, \quad i = 0, 1, \dots, N-1. \tag{29}$$
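The basis-change matrix M(u) can be assembled directly from the recursion (29); the small Python sketch below does so row by row (the test vector u at the end is an arbitrary example of this sketch).

```python
import numpy as np

# Sketch constructing M(u) of (26) from (29): f_0 = u_N e_N and
# f_{N-i} = f_{N-i-1} + u_i (e_i - e_{i+1}), where f_j is the j-th row of M(u).
def build_M(u):
    N = len(u) - 1
    M = np.zeros((N + 1, N + 1))
    M[0, N] = u[N]                              # f_0 = u_N e_N
    for i in range(N - 1, -1, -1):              # i = N-1, ..., 0 builds rows 1, ..., N
        j = N - i
        M[j] = M[j - 1].copy()
        M[j, i] += u[i]
        M[j, i + 1] -= u[i]
    return M

print(build_M(np.array([1.0, 3.0, 6.0])))       # small N = 2 example
```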
Thus H OGM can be calculated as
$$H_{\text{OGM}}(k+1, i+1) = \begin{cases} 0 & i > k \\ 1 + \dfrac{2\theta_k - 1}{\theta_{k+1}} & i = k \\ \left(\prod_{l=i+1}^{k}\dfrac{\theta_l - 1}{\theta_{l+1}}\right)\dfrac{2\theta_i - 1}{\theta_{i+1}} & i < k. \end{cases}$$
The recursive formula [27] for (OGM-G) is as follows:
$$h_{k+1,i} = \begin{cases} \dfrac{\theta_{N-i-1}-1}{\theta_{N-i}}\, h_{k+1,i+1} & i = 0,\dots,k-2 \\ \dfrac{\theta_{N-k}-1}{\theta_{N-k+1}}\left(h_{k+1,k}-1\right) & i = k-1 \\ 1 + \dfrac{2\theta_{N-k-1}-1}{\theta_{N-k}} & i = k. \end{cases} \tag{31}$$
If $k > i$,
$$h_{k+1,k-1} = \frac{\theta_{N-k}-1}{\theta_{N-k+1}}\cdot\frac{2\theta_{N-k-1}-1}{\theta_{N-k}}, \qquad h_{k+1,i} = \frac{\theta_{N-i-1}-1}{\theta_{N-i}}\,h_{k+1,i+1} = \cdots = \left(\prod_{l=N-k+1}^{N-i-1}\frac{\theta_l-1}{\theta_{l+1}}\right)h_{k+1,k-1} = \left(\prod_{l=N-k}^{N-i-1}\frac{\theta_l-1}{\theta_{l+1}}\right)\frac{2\theta_{N-k-1}-1}{\theta_{N-k}}.$$
Thus H OGM-G can be calculated as
$$H_{\text{OGM-G}}(k+1, i+1) = \begin{cases} 0 & i > k \\ 1 + \dfrac{2\theta_{N-k-1}-1}{\theta_{N-k}} & i = k \\ \left(\prod_{l=N-k}^{N-i-1}\dfrac{\theta_l-1}{\theta_{l+1}}\right)\dfrac{2\theta_{N-k-1}-1}{\theta_{N-k}} & i < k, \end{cases}$$
which gives $H_{\text{OGM-G}} = H_{\text{OGM}}^A$.
Gradient Descent. For (GD), $H(i+1, k+1) = h_{i+1,k} = h\,\delta_{i+1,k+1}$, where $\delta_{i,j}$ is the Kronecker delta. Therefore, $H_{\text{GD}} = H_{\text{GD}}^A$.
OBL-F ♭ and OBL-G ♭ . Recall $\gamma = \frac{N(N+1)}{2}$.
. We obtain the recursive formula of the H matrix of (OBL-F ♭ ).
h k+1,i = k k+3 h k,i k = 0, . . . , N − 2, i = 0, . . . , k − 2 1 + 2k k+3 k = 0, . . . , N − 2, i = k k k+3 (h k,k−1 − 1) k = 1, . . . , N − 2, i = k − 1 1 + N −1 γ+1 k = N − 1, i = N − 1 N −1 2(γ+1) (h N −1,N −2 − 1) k = N − 1, i = N − 2 N −1 2(γ+1) h N −1,i k = N − 1, i = 0, . . . , N − 3 .(32)
By using the above formula, we obtain
H OBL-F ♭ (k + 1, i + 1) = 1 + 2k k+3 k = 0, . . . , N − 2, i = k 2i(i+1)(i+2) (k+1)(k+2)(k+3) k = 0, . . . , N − 2, i = 0, . . . , k − 1 1 + N −1 γ+1 k = N − 1, i = N − 1 i(i+1)(i+2) (γ+1)N (N +1) k = N − 1, i = 0, . . . , N − 2 .
Similarly, we achieve the following recursive formula of the H matrix of (OBL-G ♭ ).
h k+1,i = N −k−1 N −k+2 h k,i k = 1, . . . , N − 1, i = 0, . . . , k − 2 1 + 2(N −k−1) N −k+2 k = 1, . . . , N − 1, i = k N −k−1 N −k+2 (h k,k−1 − 1) k = 1, . . . , N − 1, i = k − 1 1 + N −1 γ+1 k = 0, i = 0 .(33)
By using the above recursive formula, we obtain
H OBL-G ♭ (k + 1, i + 1) = 1 + N −1 γ+1 k = 0, i = 0 (N −k−1)(N −k)(N −k+1) (γ+1)N (N +1) k = 1, . . . , N − 1, i = 0 1 + 2(N −k−1) N −k+2 k = 1, . . . , N − 1, i = k 2(N −k−1)(N −k)(N −k+1) (N −i)(N −i+1)(N −i+2) k = 1, . . . , N − 1, i = 1, . . . , k − 1 . Thus H OBL-F ♭ = H A OBL-G ♭ .
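The anti-transpose relations derived in this subsection can also be verified numerically. The sketch below builds $H_{\text{OGM}}$ and $H_{\text{OGM-G}}$ from the explicit formulas above and checks that one is the anti-transpose of the other; the θ recursion used here is the standard OGM one and is an assumption of this sketch (it is not restated in this appendix).

```python
import numpy as np

# Anti-transpose: (H^A)[i, j] = H[N-1-j, N-1-i] (0-based), i.e., transpose across the anti-diagonal.
def anti_transpose(H):
    return H[::-1, ::-1].T

def thetas(N):                                   # standard OGM theta sequence (assumption of this sketch)
    th = [1.0]
    for k in range(N):
        th.append((1 + np.sqrt(1 + (8 if k == N - 1 else 4) * th[-1] ** 2)) / 2)
    return th

def H_ogm(N):
    th, H = thetas(N), np.zeros((N, N))
    for k in range(N):
        H[k, k] = 1 + (2 * th[k] - 1) / th[k + 1]
        for i in range(k):
            prod = np.prod([(th[l] - 1) / th[l + 1] for l in range(i + 1, k + 1)])
            H[k, i] = prod * (2 * th[i] - 1) / th[i + 1]
    return H

def H_ogm_g(N):
    th, H = thetas(N), np.zeros((N, N))
    for k in range(N):
        H[k, k] = 1 + (2 * th[N - k - 1] - 1) / th[N - k]
        for i in range(k):
            prod = np.prod([(th[l] - 1) / th[l + 1] for l in range(N - k, N - i)])
            H[k, i] = prod * (2 * th[N - k - 1] - 1) / th[N - k]
    return H

N = 6
print(np.allclose(H_ogm_g(N), anti_transpose(H_ogm(N))))   # expected: True
```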
B.2 Calculation of energy functions
Calculation of U and V with H matrix In this paragraph, we calculate U and V. Recall (13) and (14). (11). We have
First, we put x k+1 − x k = − 1 L k i=0 h k+1,i ∇f (x i ) toU = − 1 2L N i=0 (u i − u i−1 )∇f (x i ) 2 + N i=0 (u i − u i−1 ) ∇f (x i ), x 0 − x i + 1 2L ∇f (x i ) 2 + N −1 i=0 u i ∇f (x i+1 ), x i − x i+1 + 1 2L ∇f (x i ) − ∇f (x i+1 ) 2 = − 1 2L N i=0 (u i − u i−1 )∇f (x i ) 2 + N i=0 u i − u i−1 L ∇f (x i ), i−1 l=0 l j=0 h l+1,j ∇f (x j ) + 1 2 ∇f (x i ) 2 + N −1 i=0 u i L ∇f (x i+1 ), i j=0 h i+1,j ∇f (x j ) + 1 2 ∇f (x i ) − ∇f (x i+1 ) 2 .
By arranging, we obtain
U = 0≤j≤i≤N s i,j L ∇f (x i ), ∇f (x j ) where s i,j = − 1 2 (u N − u N −1 ) 2 + 1 2 u N j = i, i = N − 1 2 (u i − u i−1 ) 2 + u i j = i, i = 0, . . . , N − 1 u i h i,i−1 − u i−1 − (u i − u i−1 )(u i−1 − u i−2 ) j = i − 1 (u i − u i−1 ) i−1 l=j h l+1,j + u i−1 h i,j − (u i − u i−1 )(u j − u j−1 ) j = 0, . . . , i − 2.
Recall that we defined (12). We have
u −1 = 0. Next, put y k+1 − y k = − 1 L k i=0 h k+1,i ∇f (y i ) toV = V N − 1 2L ∇f (y N ) 2 = v 0 2L ∇f (y N ) 2 + N −1 i=0 v i+1 ∇f (y i+1 ), y i − y i+1 + 1 2L ∇f (y i ) − ∇f (y i+1 ) 2 + N −1 i=0 (v i+1 − v i ) ∇f (y i ), y N − y i + 1 2L ∇f (y i ) − ∇f (y N ) 2 − 1 2L ∇f (y N ) 2 = v 0 − 1 2L ∇f (y N ) 2 + N −1 i=0 v i+1 ∇f (y i+1 ), 1 L i j=0 h i+1,j ∇f (y j ) + 1 2L ∇f (y i ) − ∇f (y i+1 ) 2 + N −1 i=0 (v i+1 − v i ) ∇f (y i ), − 1 L N −1 l=i l j=0 h l+1,j ∇f (y j ) + 1 2L ∇f (y i ) − ∇f (y N ) 2 .
By arranging, we obtain
V = 0≤j≤i≤N t i,j L ∇f (y i ), ∇f (y j ) where t i,j = v1 2 + v1−v0 2 − (v 1 − v 0 ) N −1 l=0 h l+1,0 i = 0, j = i vi+1+vi 2 + vi+1−vi 2 − (v i+1 − v i ) N −1 l=i h l+1,i i = 1, . . . , N − 1, j = i v0−1 2 + vN 2 + N −1 i=0 vi+1−vi 2 i = N, j = i v i h i,i−1 − v i − (v i+1 − v i ) N −1 l=i h l+1,i−1 − (v i − v i−1 ) N −1 l=i h l+1,i i = 1, . . . , N − 1, j = i − 1 v N h N,N −1 − v N − (v N − v N −1 ) i = N, j = i − 1 v i h i,j − (v i+1 − v i ) N −1 l=i h l+1,j − (v j+1 − v j ) N −1 l=i h l+1,i i = 2, . . . , N − 1, j = 0, . . . , i − 2 v N h N,j − (v j+1 − v j ) i = N, j = 0, . . . , N − 2 .(35)u i − u i−1 = 2θ 2 i − 2θ 2 i−1 = 2θ i 0 ≤ i ≤ N − 1, u N − u N −1 = θ 2 N − 2θ 2 N −1 = θ N .
Therefore, we have
s i,i = − 1 2 (u N − u N −1 ) 2 + 1 2 u N = −θ N + θ N = 0 j = i, i = N − 1 2 (u i − u i−1 ) 2 + u i = −θ i + θ i = 0 j = i, i = 0, . . . , N − 1.
Now we claim that s ij = 0 when j = i. In the case that i = j + 1, we have
s i,i−1 = u i h i,i−1 − u i−1 − (u i − u i−1 )(u i−1 − u i−2 ) = u i 2θ i−1 − 1 θ i + 1 − u i−1 − (u i − u i−1 )(u i−1 − u i−2 ) = 2θ 2 i 2θi−1−1 θi + 1 − 2θ 2 i−1 − 4θ i θ i−1 0 ≤ i ≤ N − 1 θ 2 N 2θN−1−1 θN + 1 − 2θ 2 N −1 − 2θ N −1 θ N i = N = 0. We show s ij = 0 for j = i with induction on i, i.e., proving s i,j = (u i − u i−1 ) i−1 l=j h l+1,j + u i−1 h i,j − (u i − u i−1 )(u j − u j−1 ) = 0, j = 0, . . . , i − 2 .(36)
First we prove (36) for i = j + 2.
(u j+2 − u j+1 ) (h j+1,j + h j+2,j ) + u j+1 h j+2,j − (u j+2 − u j+1 )(u j − u j−1 ) =(u j+2 − u j+1 )h j+1,j + u j+2 h j+2,j − (u j+2 − u j+1 )(u j − u j−1 ) = 2θ j+2 h j+1,j + 2θ 2 j+2 h j+2,j − 4θ j+2 θ j 0 ≤ j ≤ N − 3 θ N h N −1,N −2 + θ 2 N h N,N −2 − 2θ N θ N −2 j = N − 2 =0. Next, assume (36) for i = i 0 . When i = i 0 + 1, (u i0+1 − u i0 ) i0 l=j h l+1,j + u i0 h i0+1,j − (u i0+1 − u i0 )(u j − u j−1 ) =(u i0+1 − u i0 ) i0−1 l=j h l+1,j + h i0+1,j + u i0 h i0+1,j − (u i0+1 − u i0 )(u j − u j−1 ) =(u i0+1 − u i0 ) (u i0 − u i0−1 )(u j − u j−1 ) − u i0−1 h i0,j u i0 − u i0−1 + h i0+1,j + u i0 h i0+1,j − (u i0+1 − u i0 )(u j − u j−1 ) =u i0+1 h i0+1,j − u i0−1 (u i0+1 − u i0 ) u i0 − u i0−1 h i0,j = 2θ 2 i0+1 h i0+1,j − 4θ 2 i 0 −1 θi 0 +1 2θi 0 h i0,j 0 ≤ i 0 ≤ N − 2 θ 2 N h i0+1,j − 2θ 2 N −2 θN 2θN−1 h i0,j i 0 = N − 1 =0
where the second equality comes from the induction hypothesis, and the third equality comes from (30). In sum, we proved s ij = 0 for every i and j, which implies U = 0.
Next, we claim that $t_{ij} = 0$ for all $i, j$. We first derive an explicit formula for $H_{\text{OGM-G}}(k+1, i+1)$. When $k > i$,
H OGM-G (k + 1, i + 1) = θ N −k − 1 θ N −k+1 θ N −k+1 − 1 θ N −k+2 · · · θ N −i−1 − 1 θ N −i 2θ N −k−1 − 1 θ N −k = θ 2 N −k − θ N −k θ N −k θ N −k+1 θ 2 N −k+1 − θ N −k+1 θ N −k+1 θ N −k+2 · · · θ 2 N −i−1 − θ N −i−1 θ N −i−1 θ N −i 2θ N −k−1 − 1 θ N −k = θ 2 N −k−1 θ N −k θ N −k+1 θ 2 N −k θ N −k+1 θ N −k+2 · · · θ 2 N −i−2 θ N −i−1 θ N −i 2θ N −k−1 − 1 θ N −k = θ 2 N −k−1 (2θ N −k−1 − 1) θ 2 N −i−1 θ N −i .
To calculate {t i,j }, it is enough to deal with the sum N −1 l=i h l+1,j , which can be expressed as
N −1 l=i h l+1,j = θN +1 2 i = 0, j = i θ N −i i = 1, . . . , N − 1, j = i θ 4 N −i−1 θN−jθ 2 N −j−1 i = 1, . . . , N − 1, j = 0, . . . , i − 1 .(37)
By inserting (37) into (35), we obtain $[t_{ij} = 0,\ \forall\, i, j]$, which implies $\mathcal{V} = 0$.
First we calculate {s ij } for (OBL-F ♭ ). Recall u i = (i+1)(i+2) 2 for 0 ≤ i ≤ N − 1 and u N = γ 2 + γ where γ = N (N + 1)/2. When j = i, s i,i = − 1 2 (u N − u N −1 ) 2 + 1 2 u N i = N − 1 2 (u i − u i−1 ) 2 + u i 0 ≤ i ≤ N − 1 = γ 2 = uN −uN−1 2 i = N − 1 2 (i + 1) 2 + (i+1)(i+2) 2 = ui−ui−1 2 0 ≤ i ≤ N − 1 .
Now we claim that s ij = 0 when j = i. In the case j = i − 1, we have
s i,i−1 = u i h i,i−1 − u i−1 − (u i − u i−1 )(u i−1 − u i−2 ) = (i+1)(i+2) 2 h i,i−1 − i(i+1) 2 − (i + 1)i 0 ≤ i ≤ N − 1 γ 2 + γ h N,N −1 − N (N +1) 2 − γN i = N = 0.
We show s ij = 0 for j = i with induction on i, i.e., proving
s i,j = (u i − u i−1 ) i−1 l=j h l+1,j + u i−1 h i,j − (u i − u i−1 )(u j − u j−1 ) = 0 j = 0, . . . , i − 2 .(38)(38) holds when i = j + 2 since (u j+2 − u j+1 ) (h j+1,j + h j+2,j ) + u j+1 h j+2,j − (u j+2 − u j+1 )(u j − u j−1 ) =(u j+2 − u j+1 )h j+1,j + u j+2 h j+2,j − (u j+2 − u j+1 )(u j − u j−1 ) = (j + 3)h j+1,j + (j+3)(j+4) 2 h j+2,j − (j + 3)(j + 1) 0 ≤ j ≤ N − 3 γh N −1,N −2 + γ 2 + γ h N,N −2 − γ j = N − 2 =0. Assume (38) for i = i 0 . For i = i 0 + 1, (u i0+1 − u i0 ) i0 l=j h l+1,j + u i0 h i0+1,j − (u i0+1 − u i0 )(u j − u j−1 ) =(u i0+1 − u i0 ) i0−1 l=j h l+1,j + h i0+1,j + u i0 h i0+1,j − (u i0+1 − u i0 )(u j − u j−1 ) =(u i0+1 − u i0 ) (u i0 − u i0−1 )(u j − u j−1 ) − u i0−1 h i0,j u i0 − u i0−1 + h i0+1,j + u i0 h i0+1,j − (u i0+1 − u i0 )(u j − u j−1 ) =u i0+1 h i0+1,j − u i0−1 (u i0+1 − u i0 ) u i0 − u i0−1 h i0,j = (i0+2)(i0+3) 2 h i0+1,j − i0(i0+1)(i0+2) 2(i0+1) h i0,j 0 ≤ i 0 ≤ N − 2 γ 2 + γ h N,j − (N −1)N γ 2N h N −1,j i 0 = N − 1 =0.
Next, we calculate {t ij } for (OBL-G ♭ ). We need to deal with the sum N −1 l=k h l+1,i , which can be expressed as (35), we obtain
N −1 l=i h l+1,j = 1 + (N +2)(N −1) 4(γ+1) i = 0, j = 0 (N −i+2)(N −i+1)(N −i)(N −i−1) 4(γ+1)N (N +1) i = 1, . . . , N − 1, j = 0 (N −i+2)(N −i+1)(N −i)(N −i−1) 2(N −j)(N −j+1)(N −j+2) i = j + 1, . . . , N − 1, j = 1, . . . , N − 1 1 + N −i−1 2 i = j, j = 1, . . . , N − 1 . By combining v 0 = 1 γ 2 +γ , v i = 1 (N −i+1)(N −i+2) for 1 ≤ i ≤ N andt ij = 1 i = N, j = N 1 2N (N +1) − v0 2 i = 0, j = 1 v 0 − 1 N (N +1) i = N, j = 0 1 (N −i)(N −i+1)(N −i+2) i = 1, . . . , N − 1, j = i − 2 (N −i)(N −i+1)(N −i+2) i = N, j = 1, . . . , N − 1 0 otherwise = vN 2 i = N, j = N vi+1−vi 2 i = 0, . . . , N − 1, j = i −v i+1 + v i i = N, j = 0, . . . , N − 1 0 otherwise Therefore, V = 0≤j≤i≤N t ij L ∇f (y i ), ∇f (y j ) = v 0 2L ∇f (y N ) 2 + N −1 i=0 v i+1 − v i 2L ∇f (y i ) − ∇f (y N ) 2 .
B.2.3 Calculation of energy function of GD
$$t_{ij} = \begin{cases} \frac{1}{2}v_0 & i = j,\ i = 0 \\ v_i & i = j,\ 1 \le i \le N-1 \\ v_N - \frac{1}{2} & i = j,\ i = N \\ \frac{1}{2}\left(v_{\min(i,j)} - v_{\min(i,j)+1}\right) & i \ne j. \end{cases}$$
We can verify that the matrix $\{t_{ij}\}_{0\le i,j\le N}$ is diagonally dominant: $t_{ii} \ge \left|\sum_{j\ne i} t_{ij}\right|$. Therefore, $\sum_{0\le i,j\le N}\frac{t_{ij}}{L}\langle\nabla f(y_i), \nabla f(y_j)\rangle \ge 0$ for any $\{\nabla f(y_i)\}_{i=0}^{N}$
. This proof is essentially the same as the proof in [27], but we repeat it here with our notation for the sake of completeness.
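The diagonal-dominance claim can be checked numerically. The sketch below builds $\{t_{ij}\}$ from the formula above with the $v_i$ of (44) (using $v_0 = 1/(2Nh+1)$ as the special first entry; the particular N and h are arbitrary test values and assumptions of this sketch) and prints the smallest row margin.

```python
import numpy as np

# Sketch: check diagonal dominance of {t_ij} for (GD), with v_0 = 1/(2Nh+1)
# and v_i = (N+i)/((2Nh+1)(N-i+1)) for i >= 1.
N, h = 10, 0.7
v = np.array([1.0 / (2 * N * h + 1)] +
             [(N + i) / ((2 * N * h + 1) * (N - i + 1)) for i in range(1, N + 1)])

T = np.zeros((N + 1, N + 1))
for i in range(N + 1):
    for j in range(N + 1):
        if i == j:
            T[i, j] = 0.5 * v[0] if i == 0 else (v[i] if i < N else v[N] - 0.5)
        else:
            m = min(i, j)
            T[i, j] = 0.5 * (v[m] - v[m + 1])

margins = [T[i, i] - (np.abs(T[i]).sum() - abs(T[i, i])) for i in range(N + 1)]
print("min row margin:", min(margins))   # the text asserts this is nonnegative (up to rounding)
```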
$$\left[\text{(GD) with } 0 < h \le 1 \text{ and } \{u_i\}_{i=0}^{N} = \left(\dots, \frac{(2Nh+1)(i+1)}{2N-i}, \dots, 2Nh+1\right) \text{ satisfies (C1)}\right] \tag{39}$$
Note that (39) gives
$$(2Nh+1)\left(f(x_N) - f_\star\right) \le U_N \le U_{-1} = \frac{L}{2}\|x_0 - x_\star\|^2. \tag{40}$$
Later in the proof of Corollary 2, we will utilize the equation (39). The result (39) is proved in [18, Theorem 3.1], and we give the proof outline here.
In order to demonstrate (39), we will directly expand the expression
U N − u N (f (x N ) − f ⋆ ), instead of employing U as a intermediary step. Define {u ′ i } N i=0 : = . . . , (2N +1)(i+1) 2N −i , . . . , 2N + 1 . Then U N − u N (f (x N ) − f ⋆ ) = 1 L Tr (g ⊺ gS) where g = [∇f (x 0 )|. . . |∇f (x N )|L(x 0 − x ⋆ )] ∈ R d×(N +2
) and S ∈ S N +2 is given by
S = S ′ λ λ 1 2 , λ = [u 0 |u 1 − u 0 |. . . |u N − u N −1 ] ⊺ , S ′ = 2N h+1 2N +1 (hS 0 + (1 − h)S 1 ), S 0 = N −1 i=0 u ′ i 2 e i+1 e ⊺ i + e i e ⊺ i+1 + (e i − e i+1 )(e i − e i+1 ) ⊺ + N i=0 u ′ i − u ′ i−1 2 (e i (e 0 + · · · + e i−1 ) ⊺ + (e 0 + · · · + e i−1 )e ⊺ i + e i e ⊺ i ) S 1 = N −1 i=0 u ′ i 2 (e i − e i+1 )(e i − e i+1 ) ⊺ + N i=0 u ′ i − u ′ i−1 2 e i e ⊺ i . Now we will show S 0 to obtain U N − u N (f (x N ) − f ⋆ ) ≥ 0, ∀ g , which is (C1)
. By using Sylvester's Criterion, S 0 ≻ 0 follows. S 1 ≻ 0 follows from the fact that S 1 expressed by the sum of positive semi-definite matrices zz ⊺ . Since the convex sum of two positive semi-definite matrices is also positive semi-definite, S ′ = hS 0 + (1 − h)S 1 ≻ 0.
Next, we argue that det S = 0. Indeed, take τ = (1, . . . , −(2N h + 1)) ⊺ to show Sτ = 0, which gives det S = 0. Note that the determinant of S can also be expressed by
det(S) = 1 2 − λ ⊺ (S ′ ) −1 λ det(S ′ ).(41)
We have shown that S ′ ≻ 0, (41) implies 1
B.3 Omitted proof in Section 2.5
B.3.1 Omitted calculation of Corollary 1
Here, we will give the general formulation of the H-dual of the FSFOM
$$x_{k+1} = x_k + \beta_k\left(x_k^+ - x_{k-1}^+\right) + \gamma_k\left(x_k^+ - x_k\right), \qquad k = 0,\dots,N-1. \tag{42}$$
Proposition 1. The H-dual of (42) is
$$y_{k+1} = y_k + \beta_k'\left(y_k^+ - y_{k-1}^+\right) + \gamma_k'\left(y_k^+ - y_k\right), \qquad k = 0,\dots,N-1 \tag{43}$$
where
$$\beta_k' = \frac{\beta_{N-k}\left(\beta_{N-1-k} + \gamma_{N-1-k}\right)}{\beta_{N-k} + \gamma_{N-k}}, \qquad \gamma_k' = \frac{\gamma_{N-k}\left(\beta_{N-1-k} + \gamma_{N-1-k}\right)}{\beta_{N-k} + \gamma_{N-k}}$$
for $k = 0,\dots,N-1$, where $(\beta_N, \gamma_N)$ is any pair with $\beta_N + \gamma_N \ne 0$.³
Proof. The H matrix $\{h_{k,i}\}_{0\le i<k\le N}$ satisfies
$$h_{k+1,i} = \begin{cases} 1 + \beta_k + \gamma_k & i = k,\ k = 0,\dots,N-1 \\ \beta_k\left(h_{k,i} - 1\right) & i = k-1,\ k = 1,\dots,N-1 \\ \beta_k h_{k,i} & i = 0,\dots,k-2,\ k = 2,\dots,N-1. \end{cases}$$
Therefore, $h_{k+1,i} = \left(\prod_{j=i+1}^{k}\beta_j\right)\left(\beta_i + \gamma_i + \delta_{k,i}\right)$
where δ k,i is a Kronecker Delta function. Similarly, H matrix of (43) {g k,i } 0≤i<k≤N satisfies
g k+1,i = k j=i+1 β ′ j (β ′ i + γ ′ i + δ k,i ) = k j=i+1 β N −j (β N −1−j + γ N −1−j ) β N −j + γ N −j (β N −1−i + γ N −1−i + δ k,i ) = k j=i+1 β N −j (β N −k−1 + γ N −i−1 + δ N −k−1,N −i−1 ) . Thus g k+1,i = h N −i,N −1−k .
Now we derive the H-dual of (6) by applying Proposition 1. Note that
$$\beta_k = \frac{(T_k - t_k)\,t_{k+1}}{t_k T_{k+1}}, \qquad \gamma_k = \frac{(t_k^2 - T_k)\,t_{k+1}}{t_k T_{k+1}}, \qquad k = 0,\dots,N-1.$$
Next, define $\beta_N$ and $\gamma_N$ in the same form as $\{\beta_i, \gamma_i\}_{i=0}^{N-1}$ with any $t_{N+1} > 0$. Note that
$$\beta_k + \gamma_k = \frac{t_{k+1}(t_k^2 - t_k)}{t_k T_{k+1}} = \frac{t_{k+1}(t_k - 1)}{T_{k+1}}.$$
³ Here, note that the FSFOM (43) is independent of the choice of $\beta_N, \gamma_N$ since $y_1 = y_0 + (\beta_0' + \gamma_0')\left(y_0^+ - y_0\right) = y_0 + (\beta_{N-1} + \gamma_{N-1})\left(y_0^+ - y_0\right)$.
By applying the formula in Proposition 1, we obtain
$$\beta_k' = \frac{(T_{N-k} - t_{N-k})\cdot\frac{t_{N-k}(t_{N-k-1}-1)}{T_{N-k}}}{t_{N-k}^2 - t_{N-k}} = \frac{(T_{N-k} - t_{N-k})(t_{N-k-1}-1)}{T_{N-k}(t_{N-k}-1)} = \frac{T_{N-k-1}(t_{N-k-1}-1)}{T_{N-k}(t_{N-k}-1)}$$
and
$$\gamma_k' = \frac{(t_{N-k}^2 - T_{N-k})\cdot\frac{t_{N-k}(t_{N-k-1}-1)}{T_{N-k}}}{t_{N-k}^2 - t_{N-k}} = \frac{(t_{N-k}^2 - T_{N-k})(t_{N-k-1}-1)}{T_{N-k}(t_{N-k}-1)}.$$
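The coefficient map of Proposition 1 is straightforward to implement; the sketch below returns the H-dual coefficients $(\beta'_k, \gamma'_k)$ from a given momentum parameterization, with the free pair $(\beta_N, \gamma_N)$ supplied by the caller.

```python
# Sketch of Proposition 1's coefficient map: (beta_k, gamma_k) of FSFOM (42) -> (beta'_k, gamma'_k)
# of its H-dual (43). beta_N, gamma_N may be chosen freely as long as beta_N + gamma_N != 0.
def h_dual_coeffs(beta, gamma, beta_N, gamma_N):
    N = len(beta)                       # beta, gamma hold indices 0, ..., N-1
    b, g = list(beta) + [beta_N], list(gamma) + [gamma_N]
    beta_d, gamma_d = [], []
    for k in range(N):
        denom = b[N - k] + g[N - k]
        beta_d.append(b[N - k] * (b[N - 1 - k] + g[N - 1 - k]) / denom)
        gamma_d.append(g[N - k] * (b[N - 1 - k] + g[N - 1 - k]) / denom)
    return beta_d, gamma_d
```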
B.3.2 Proof of Corollary 2
First, recall (39) when 0 < h ≤ 1:
$$\left[\text{(GD) and } \{u_i\}_{i=0}^{N} = \left(\dots, \frac{(2Nh+1)(i+1)}{2N-i}, \dots, 2Nh+1\right) \text{ satisfies (C1)}\right].$$
Additionally, observe that the H matrix of (GD) is diag (h, . . . , h), which gives the fact that the H-dual of (GD) is itself.
Next, use Theorem 1 to obtain
$$\left[\text{(GD) with } 0 < h \le 1 \text{ and } \{v_i\}_{i=0}^{N} = \left(\frac{1}{2Nh+1}, \dots, \frac{N+i}{(2Nh+1)(N-i+1)}, \dots\right) \text{ satisfies (C2)}\right]. \tag{44}$$
By using the same argument with (5), (44) gives
$$\frac{1}{2L}\|\nabla f(y_N)\|^2 \le V_N \le V_0 = \frac{1}{2Nh+1}\left(f(y_0) - f_\star\right) + \frac{1}{2Nh+1}[y_N, \star] \le \frac{1}{2Nh+1}\left(f(y_0) - f_\star\right). \tag{45}$$
In addition, we can achieve a convergence rate of the gradient norm under the initial condition on $\|x_0 - x_\star\|^2$:
$$\frac{1}{2L}\|\nabla f(x_{2N})\|^2 \le \frac{1}{2Nh+1}\left(f(x_N) - f_\star\right) \le \frac{1}{(2Nh+1)^2}\cdot\frac{L}{2}\|x_0 - x_\star\|^2$$
and
$$\frac{1}{2L}\|\nabla f(x_{2N+1})\|^2 \le \frac{1}{2(N+1)h+1}\left(f(x_N) - f_\star\right) \le \frac{1}{(2(N+1)h+1)(2Nh+1)}\cdot\frac{L}{2}\|x_0 - x_\star\|^2.$$
The first inequality comes from (45) and the second inequality comes from (40).
B.3.3 Proof of A ⋆ -optimality of (OGM-G) and (OBL-G ♭ )
Definition of A ⋆ -optimal FSFOM. For a given set of inequalities $\mathcal{L}$, an A ⋆ -optimal FSFOM with respect to $[\mathcal{L}, (\text{P1})]$ is defined as an FSFOM whose H matrix is the solution of the following minimax problem:
minimize H∈R N ×N L maximize f f (x N ) − f ⋆ subject.to. [x 0 , . . . , x N are generated by FSFOM with the matrix H] [∀ l ∈ L, f satisfies l] x 0 − x ⋆ 2 ≤ R 2(46)
Similarly, A ⋆ -optimal FSFOM with respect to [L, (P 2)] is specified with its H matrix, which is the solution of the following minimax problem:
minimize H∈R N ×N L maximize f 1 2L ∇f (y N ) 2
subject.to. [y 0 , . . . , y N are generated by FSFOM with the matrix H]
[∀ l ∈ L, f satisfies l] f (y 0 ) − f ⋆ ≤ 1 2 LR 2(47)
Here $\mathbb{R}^{N\times N}_{L}$ is the set of lower triangular matrices. Next, we denote the inner maximization problems of (46) and (47) as $P_1(\mathcal{L}, H, R)$ and $P_2(\mathcal{L}, H, R)$, respectively. For a more rigorous definition of A ⋆ -optimal FSFOM, refer to [42].
Remark.
We discuss the minimax problems (46) and (47) and their interpretation. Specifically, the maximization problem in these minimax problems can be thought of as calculating the worst-case performance for a fixed FSFOM with H. In other words, the minimization problem in (46) and (47) can be interpreted as determining the H that optimizes this worst-case performance.
Prior works. The prior works for A ⋆ -optimality are summarized as follows. Consider the following sets of inequalities.
L F ={ x i , x i+1 } N −1 i=0 ∪ { x ⋆ , x i } N i=0 = f (x i ) ≥ f (x i+1 ) + ∇f (x i+1 ), x i − x i+1 + 1 2L ∇f (x i ) − ∇f (x i+1 ) 2 N −1 i=0 f ⋆ ≥ f (x i ) + ∇f (x i ), x ⋆ − x i + 1 2L ∇f (x i ) 2 N i=0 , L G ={ y i , y i+1 } N −1 i=0 ∪ { y N , y i } N −1 i=0 ∪ { y N , ⋆ } = f (y i ) ≥ f (y i+1 ) + ∇f (y i+1 ), y i − y i+1 + 1 2L ∇f (y i ) − ∇f (y i+1 ) 2 N −1 i=0 f (y N ) ≥ f (y i ) + ∇f (y i ), y N − y i + 1 2L ∇f (y i ) − ∇f (y N ) 2 N −1 i=0 f (y N ) ≥ f ⋆ + 1 2L ∇f (y N ) 2 , L G ′ ={ y i , y i+1 } N −1 i=0 ∪ y N , y i − 1 2L ∇f (y i ) − ∇f (y N ) 2 N −1 i=0 ∪ y N , ⋆ − 1 2L ∇f (y N ) 2 = f (y i ) ≥ f (y i+1 ) + ∇f (y i+1 ), y i − y i+1 + 1 2L ∇f (y i ) − ∇f (y i+1 ) 2 N −1 i=0 f (y N ) ≥ f (y i ) + ∇f (y i ), y N − y i N −1 i=0 f (y N ) ≥ f ⋆ , L F ′ ={ x i , x i+1 } N −1 i=0 ∪ x ⋆ , x i − 1 2L ∇f (x i ) 2 N i=0 = f (x i ) ≥ f (x i+1 ) + ∇f (x i+1 ), x i − x i+1 + 1 2L ∇f (x i ) − ∇f (x i+1 ) 2 N −1 i=0 f ⋆ ≥ f (x i ) + ∇f (x i ), x ⋆ − x i N i=0
, and L exact ={ x i , x j } (i,j)∈{⋆,0,1,...,N } 2 .
(OGM) is A ⋆ -optimal with respect to [L F , (P1)] [24]. Furthermore, (OGM) is also A ⋆ -optimal under [L exact , (P1)] which implies that (OGM) is the exact optimal FSFOM with respect to (P1). In addition, the A ⋆ -optimality of (OGM-G) with respect to [L G , (P2)] was presented as a conjecture in [27], (OBL-F ♭ ) is A ⋆ -optimal with respect to [L F ′ , (P 1)] [42,Theorem 4], and the A ⋆ -optimality of (OBL-G ♭ ) with respect to [L G ′ , (P2)] was presented as a conjecture [42,Conjecture 8]. 4 In the remaining parts, we give the proof of the following two theorems.
Theorem 4. (OGM-G) is A ⋆ -optimal with respect to $[\mathcal{L}_G, (\text{P2})]$.
Theorem 5. (OBL-G ♭ ) is A ⋆ -optimal with respect to $[\mathcal{L}_{G'}, (\text{P2})]$.
Proof of A ⋆ -optimality of (OGM-G) We provide an alternative formulation of P 2 (L G , H, R) by using the methodology called PEP [18,50] maximize By using the definition of F, G, {e i } N i=0 and {y i } N i=0 , P 2 (L G , H, R) can be converted into
f 1 2L ∇f (y N ) 2 subject to f (y i+1 ) − f (y i ) + ∇f (y i+1 ), y i − y i+1 + 1 2L ∇f (y i ) − ∇f (y i+1 ) 2 ≤ 0, i = 0, . . . , N − 1 f (y i ) − f (y N ) + ∇f (y i ), y N − y i + 1 2L ∇f (y i ) − ∇f (y N ) 2 ≤ 0, i = 0, . . . , N − 1 f ⋆ + 1 2L ∇f (y N ) 2 ≤ f (y N ) f (y 0 ) − f ⋆ ≤ 1 2 LR 2 y i+1 = y i − 1 L i j=0 h i+1,j ∇f (y j ), i = 0, . . . , N − 1 .(48)Next, define F, G, {e i } N i=0 and {x i } N i=0 as G := ∇f (y 0 ), ∇f (y 0 ) ∇f (y 0 ), ∇f (y 1 ) · · · ∇f (y 0 ), ∇f (y N ) . . . . . . . . . ∇f (y N ), ∇f (y N ) ∇f (y 0 ), ∇f (y 1 ) · · · ∇f (y N ), ∇f (y N ) ∈ S N +1 , F := f (y 0 ) − f ⋆ f (y 1 ) − f ⋆ . . . f (y N ) − f ⋆ R (N +1)×1 , e i ∈ R Nminimize F,G 0 − 1 2L Tr (Ge N e ⊺ N ) subject to F(e i+1 − e i ) ⊺ + Tr (GA i ) ≤ 0, i = 0, . . . , N − 1 F(e i − e N ) ⊺ + Tr (GB i ) ≤ 0, i = 0, . . . , N − 1 − Fe ⊺ N + 1 2 Tr (Ge N e ⊺ N ) ≤ 0 Fe ⊺ 0 − 1 2 LR 2 ≤ 0(49)
where
A i := 1 2 e i+1 (y i − y i+1 ) ⊺ + 1 2 (y i − y i+1 )e ⊺ i+1 + 1 2L (e i − e i+1 ) (e i − e i+1 ) ⊺ B i := 1 2 e i (y N − y i ) ⊺ + 1 2 (y N − y i )e ⊺ i + 1 2L (e i − e N ) (e i − e N ) ⊺ .
Moreover, under the condition d ≥ N + 2, we can take the Cholesky factorization of G to recover the triplet {(y i , f (y i ), ∇f (y i ))} N i=0 . 5 Thus (48) and (49) are equivalent. The next step is calculating the Lagrangian of (49) and deriving the Lagrangian dual problem of it. In [52], they argued about the strong duality of (49).
Fact 1. Assume $h_{i+1,i} \ne 0$ for $0 \le i \le N-1$. Then strong duality holds between (48) and (49).
Denote the dual variables of each constraints as {δ
i } N −1 i=0 , {λ i } N −1 i=0
, δ N and τ . Then Lagrangian becomes
L(δ, λ, τ, F, G) = F · X ⊺ + Tr (G · T) − τ L 2 R 2 where X = N −1 i=0 δ i (e i+1 − e i ) + N −1 i=0 λ i (e i − e N ) − δ N e N + τ e 0 , T = N −1 i=0 δ i A i + N −1 i=0 λ i B i + δ N 2 e N e ⊺ N − 1 2 e N e ⊺ N .
If [X = 0 and T 0] is false, we can choose F and G that makes the value of Lagrangian to be −∞. Thus the convex dual problem of (49) is
maximize δ,λ,τ minimize F,G L(δ, λ, τ, F, G) = maximize δ,λ,τ − τ L 2 R 2 subject to X = 0, T 0 δ i ≥ 0, λ i ≥ 0, τ ≥ 0 .(50)
For the constraint X = 0,
λ i = δ i − δ i−1 for 1 ≤ i ≤ N − 1, τ = δ N and −δ 0 + λ 0 + δ N = 0. By substituting v i+1 = δ i for 0 ≤ i ≤ N − 1 and v 0 = δ N , (50) becomes maximize v0,...,vN minimize F,G L(δ, λ, τ, F, G) = maximize δ,λ,τ − v0L 2 R 2 subject to T 0 0 ≤ v 0 ≤ · · · ≤ v N .(51)
Therefore if strong duality holds,
P 2 (L G , H, R) becomes minimize v0,...,vN v 0 L 2 R 2 subject.to. T 0, 0 ≤ v 0 ≤ · · · ≤ v N .(52)
We can apply a similar argument for P 1 (L F , H, R). To begin with, define {f i } N i=−1 as a unit vector of length N + 2 which (i + 2)-component is 1.
Additionally, define {x i } N i=0 , {C i } N −1 i=0 and {D i } N i=0
as follows:
x 0 := f −1 , x i+1 := x i − 1 L i j=0
h i+1,j f j i = 0, . . . , N − 1,
C i := 1 2 f i+1 (x i − x i+1 ) ⊺ + 1 2 (x i − x i+1 )f ⊺ i+1 + 1 2L (f i − f i+1 ) (f i − f i+1 ) ⊺ , D i := − 1 2 f i x ⊺ i − 1 2 x i f ⊺ i + 1 2L f i f ⊺ i .(53)
Then, if the strong duality holds, the problem
maximize f f (x N ) − f ⋆ subject.to. [x 0 , . . . , x N are generated by FSFO with the matrix H] [∀l ∈ L F , f satisfies l] x 0 − x ⋆ 2 ≤ R 2(54)
is equivalent to maximize u0,...,uN
L 2u N R 2 subject.to. S 0, 0 ≤ u 0 ≤ · · · ≤ u N ,(55)
where
S = L 2 f −1 f ⊺ −1 + N −1 i=0 u i C i + N i=0 (u i − u i−1 )D i .
By using Schur's Complement,
S 0 ⇔ S ′ 0 where S ′ = N −1 i=0 u i C i + N i=0 (u i − u i−1 )D i − 1 2L N i=0 (u i − u i−1 )f i N i=0 (u i − u i−1 )f i ⊺ .
Hence (55) is equivalent to
maximize u0,...,uN L 2u N R 2 subject.to. S ′ 0, 0 ≤ u 0 ≤ · · · ≤ u N .(56)
Now we will prove the following proposition. Proof. To simplify the analysis, we can consider {f i } N i=0 as a length N + 1 unit vector, as all terms with f −1 can be eliminated by using S ′ instead of S. With this simplification, both S ′ and T belong to S N +1 .
Next, let 0 < u 0 ≤ u 1 ≤ · · · ≤ u N and v i = 1 uN−i for 0 ≤ i ≤ N , noting that 0 < v 0 ≤ · · · ≤ v N .
It is important to observe that $S'$ and $T$ can be expressed in terms of $S(H, u)$ and $T(H^A, v)$, respectively. Furthermore, in the proof of Theorem 1 (Appendix A), we proved that $S(H, u) = M(u)^\intercal T(H^A, v) M(u)$ for some invertible $M(u)$. Thus, $S(H, u) \succeq 0 \Leftrightarrow T(H^A, v) \succeq 0$.
Therefore, we obtain
$$(a_0, \dots, a_N) \in \{(u_0,\dots,u_N)\,|\, S' \succeq 0,\ 0 < u_0 \le \cdots \le u_N\} \tag{57}$$
if and only if
$$\left(\frac{1}{a_N}, \dots, \frac{1}{a_0}\right) \in \{(v_0,\dots,v_N)\,|\, T \succeq 0,\ 0 < v_0 \le \cdots \le v_N\}. \tag{58}$$
For the next step, we claim that the optimal values of (56) and
$$\underset{u_0,\dots,u_N}{\text{maximize}}\ \frac{L}{2u_N}R^2 \quad \text{subject to}\quad S' \succeq 0,\ 0 < u_0 \le \cdots \le u_N \tag{59}$$
are the same, i.e., it suffices to consider the case when all $u_i$ are positive. To prove this, assume there exist $0 = u_0 = \cdots = u_k$, $0 < u_{k+1} \le \cdots \le u_N$ and $H$ that satisfy $S' \succeq 0$. Next, observe that the $f_k f_k^\intercal$ component of $S'$ is $0$ but the $f_{k+1} f_k^\intercal$ component of $S'$ is $u_{k+1}h_{k+1,k} \ne 0$, which makes $S' \succeq 0$ impossible.
When the optimal value of (52) is 0, it implies that $\{(v_0,\dots,v_N)\,|\,T \succeq 0,\ 0 < v_0 \le \cdots \le v_N\}$ is an empty set. Therefore, $\{(u_0,\dots,u_N)\,|\,S' \succeq 0,\ 0 < u_0 \le \cdots \le u_N\}$ is also an empty set. Since (59) and (56) have the same optimal value, the optimal value of (56) is 0.
If the optimal value of (52) is positive, the optimal values of (56) and (52) are the same since (57) and (58) subject.to. S 0, 0 ≤ u 0 ≤ · · · ≤ u N .
(61)
Applying Proposition 2 and using the fact that $H_{\text{OGM-G}} = H_{\text{OGM}}^A$ provides that $H_{\text{OGM-G}}$ is the solution of
$$\underset{H,\ h_{i+1,i}\ne 0}{\text{maximize}}\ \ \underset{v_0,\dots,v_N}{\text{minimize}}\ \frac{v_0 L}{2}R^2 \quad \text{subject to}\quad T \succeq 0,\ 0 \le v_0 \le \cdots \le v_N. \tag{62}$$
Finally, if
(47) = maximize H,hi+1,i =0 P 1 (L F , H, R)(63)
holds, the optimal solution of (47) is $H_{\text{OGM-G}}$, which proves the A ⋆ -optimality of (OGM-G) with respect to $\mathcal{L}_G$ and (P2). The proofs of (60) and (63) use a continuity argument in H; please refer to [42, Claim 4].
Remark on the proof of A ⋆ -optimality of (OGM-G). We proved that (OGM-G) is A ⋆ -optimal when only a subset of the cocoercivity inequalities is used. Therefore, it remains open whether (OGM-G) is optimal among gradient-norm minimization methods for L-smooth convex functions.
Proof of A ⋆ -optimality of (OBL-G ♭ ). We provide the proof that the H matrix of (OBL-G ♭ ) is the solution of
$$\underset{H\in\mathbb{R}^{N\times N}_{L}}{\text{minimize}}\ P_2(\mathcal{L}_{G'}, H, R). \tag{64}$$
To begin with, recall the A ⋆ -optimality of (OBL-F ♭ ): (OBL-F ♭ ) is A ⋆ -optimal with respect to $[\mathcal{L}_{F'}, (\text{P1})]$, i.e., the H matrix of (OBL-F ♭ ) is the solution of
$$\underset{H\in\mathbb{R}^{N\times N}_{L}}{\text{minimize}}\ P_1(\mathcal{L}_{F'}, H, R). \tag{65}$$
To prove the A ⋆ -optimality of (OBL-G ♭ ), we use the A ⋆ -optimality of (OBL-F ♭ ).
Under the assumption of strong duality, we could change P 1 (L F' , H, R) into the following SDP:
maximize u0,...,uN L 2u N R 2 subject.to. S 1 0, 0 ≤ u 0 ≤ · · · ≤ u N (66) where S 1 = L 2 f −1 f ⊺ −1 + N −1 i=0 u i C i + N i=0 (u i − u i−1 ) D i − 1 2L f i f ⊺ i .
Here we used the same notation with (53).
Each 1 2L f i f ⊺ i term is subtracted from the original S since we consider the inequality x i , x ⋆ − 1 2L ∇f (x i ) 2 instead of x i , x ⋆ for L F ′ . Moreover, (66) is equivalent to maximize u0,...,uN L 2u N R 2 subject.to. S ′ 1 0, 0 ≤ u 0 ≤ · · · ≤ u N (67) where S ′ 1 = N −1 i=0 u i C i + N i=0 (u i − u i−1 ) D i − 1 2L f i f ⊺ i − 1 2L N i=0 (u i − u i−1 )f i N i=0 (u i − u i−1 )f i ⊺ .
Similarly, under the assumption of strong duality, P 2 (L G' , H, R) is equivalent to
minimize v0,...,vN v 0 L 2 R 2 subject.to. T 1 0, 0 ≤ v 0 ≤ · · · ≤ v N(68)
where
T 1 = N −1 i=0 v i+1 A i + N −1 i=0 (v i+1 − v i ) B i − 1 2L (e i − e N )(e i − e N ) ⊺ − 1 2 e N e ⊺ N .
Now we will prove the following proposition. Proof. The proof structure is the same as that of Proposition 2. First, consider $\{f_i\}_{i=0}^{N}$ as vectors of length $N+1$, which gives $S_1', T_1 \in \mathbb{S}^{N+1}$. Furthermore, in the proof of Appendix A, we proved that $S_1' = M(u)^\intercal T_1 M(u)$. Therefore, $S_1' \succeq 0 \Leftrightarrow T_1 \succeq 0$.
The other steps are the same as the proof of Proposition 2.
Proof of Theorem 5. H OGM is the solution of (46) since (OGM) is A ⋆ -optimal with respect to L F and (P1). Additionally, if
(46) = maximize H,hi+1,i =0 P 1 (L F , H, R)(69)
holds, H OGM is the solution to the following problem due to the strong duality.
maximize H,hi+1,i =0 maximize u0,...,uN L 2u N R 2 subject.to. S 0, 0 ≤ u 0 ≤ · · · ≤ u N .(70)v 0 L 2 R 2 subject.to. T 0, 0 ≤ v 0 ≤ · · · ≤ v N .(71)
Finally, if
(47) = maximize H,hi+1,i =0 P 1 (L F , H, R)(72)
holds, the optimal solution of (47) is $H_{\text{OGM-G}}$, which proves the A ⋆ -optimality of (OGM-G) with respect to $\mathcal{L}_G$ and (P2). The proofs of (69) and (72) use a continuity argument in H; please refer to [42, Claim 4].
C Omitted parts in Section 3
C.1 Omitted calculations in Section 3.1
Strictly speaking, (7) is not a differential equation but rather a diffeo-integral equation. However, if H is separable, i.e., when H(t, s) = e β(s)−γ(t) for some β, γ : [0, T ) → R, then (7) can be reformulated as an ODE.
Assume the process (7) is well-defined. We can alternatively write (7) as
$$X(0) = x_0, \qquad \dot{X}(t) = -e^{-\gamma(t)}\int_0^t e^{\beta(s)}\nabla f(X(s))\,ds. \tag{73}$$
By multiplying each side by $e^{\gamma(t)}$ and differentiating, we obtain $\ddot{X}(t) + \dot\gamma(t)\dot{X}(t) + e^{\beta(t)-\gamma(t)}\nabla f(X(t)) = 0$.
The H-dual of (73) is
$$Y(0) = x_0, \qquad \dot{Y}(t) = -e^{\beta(T-t)}\int_0^t e^{-\gamma(T-s)}\nabla f(Y(s))\,ds.$$
Under well-definedness, multiplying each side by $e^{-\beta(T-t)}$ and differentiating, we obtain
$$\ddot{Y}(t) + \dot\beta(T-t)\dot{Y}(t) + e^{\beta(T-t)-\gamma(T-t)}\nabla f(Y(t)) = 0.$$
When $\beta(s) = r\log s$ and $\gamma(t) = r\log t$, the two ODEs become
$$\ddot{X}(t) + \frac{r}{t}\dot{X}(t) + \nabla f(X(t)) = 0 \tag{74}$$
and
$$\ddot{Y}(t) + \frac{r}{T-t}\dot{Y}(t) + \nabla f(Y(t)) = 0, \tag{75}$$
which are introduced in Section 3.2.
For the calculations of the energy functions of (74) and (75), refer to [49, Section 3.1, Section 4.2]. They proved that
(r − 1) X(0) − x ⋆ 2 =T 2 (f (X(T )) − x ⋆ ) + 1 2 TẊ(T ) + 2(X(T ) − x ⋆ ) 2 + (r − 3) X(T ) − x ⋆ 2 + T 0 (r − 3)s Ẋ (s) 2 ds − T 0 2s[X(s), x ⋆ ]ds
holds for (74) and
1
T 2 (f (Y (0)) − f (Y (T ))) + −r + 3 T 4 Y (0) − Y (T ) 2 = 1 4(r − 1) ∇f (Y (T )) 2 + T 0 r − 3 (T − s) 5 (T − s)Ẏ (s) + 2(Y (s) − Y (T )) 2 ds − T 0 2 (T − s) 3 [Y (s), Y (T )]ds
holds for (75). After arranging terms, we obtain the results in Section 3.2. 6
C.2 Proof of Theorem 2
To begin with, we give a formal version of Theorem 2. Consider the following two conditions.
$$u(T)\left(f(X(T)) - f_\star\right) \le \mathcal{U}(T), \qquad (\forall\, X(0),\, x_\star,\, \{\nabla f(X(s))\}_{s\in[0,T]} \in \mathcal{A}_1). \tag{C3$'$}$$
$$\frac{1}{2}\|\nabla f(Y(T))\|^2 \le \mathcal{V}(T), \qquad (\forall\, Y(0),\, \{\nabla f(Y(s))\}_{s\in[0,T]} \in \mathcal{A}_2). \tag{C4$'$}$$
where $\mathcal{A}_1$ and $\mathcal{A}_2$ are families of vectors for which Fubini's Theorem can be applied; they will be defined later in this section.
Theorem 6 (Formal version of Theorem 2). Assume the C-FSFOMs (7) with $H$ and $H^A$ are well-defined in the sense that solutions to the diffeo-integral equations exist. Consider differentiable functions $u, v\colon (0,T) \to \mathbb{R}$ such that $v(t) = \frac{1}{u(T-t)}$ for $t \in [0,T]$ and
(i) $\lim_{s\to 0} u(s) = 0$,
(ii) $\lim_{s\to T^-} v(s)\left(f(Y(s)) - f(Y(T)) + \langle\nabla f(Y(T)), Y(T) - Y(s)\rangle\right) = 0$,
(iii) f is L-smooth and convex.
Then the following holds.
[(C3′) is satisfied with $u(\cdot)$ and $H$] $\Leftrightarrow$ [(C4′) is satisfied with $v(\cdot)$ and $H^A$].
⁶ In [49], they considered the ODE whose gradient term is $2\nabla f(Y(t))$ instead of $\nabla f(Y(t))$.
Calculations of energy functions via transformation. First of all, we calculate $\mathcal{U}(T) - u(T)\left(f(X(T)) - f_\star\right)$.
U(T ) − u(T ) (f (X(T )) − f ⋆ ) = 1 2 X(0) − x ⋆ 2 + T 0 u ′ (s) (f (X(s)) − f ⋆ + ∇f (X(s)), x ⋆ − X(s) ) ds − u(T ) (f (X(T )) − f ⋆ ) ds = 1 2 X(0) − x ⋆ 2 + T 0 u ′ (s) ∇f (X(s)), x ⋆ − X(s) ds − T 0 u(s) ∇f (X(s)),Ẋ(s) ds = 1 2 X(0) − x ⋆ 2 + T 0 u ′ (s) ∇f (X(s)), x ⋆ − X(0) ds + T 0 u ′ (s) ∇f (X(s)), X(0) − X(s) ds − T 0 u(s) ∇f (X(s)),Ẋ(s) ds = 1 2 X(0) − x ⋆ − T 0 u ′ (s)∇f (X(s))ds 2 − 1 2 T 0 u ′ (s)∇f (X(s))ds 2 + T 0 u ′ (s) ∇f (X(s)), X(0) − X(s) ds − T 0 u(s) ∇f (X(s)),Ẋ(s) ds.
We used integration by parts and $u(0) := \lim_{s\to 0} u(s) = 0$. Since $X(0) - x_\star$ can have any value, (C3) is equivalent to
− 1 2 T 0 u ′ (s)∇f (X(s))ds 2 + T 0 u ′ (s) ∇f (X(s)), X(0) − X(s) ds − T 0 u(s) ∇f (X(s)),Ẋ(s) ds ≥ 0(76)
and
{g s ∈ R d } s∈[0,T ] → {f s ∈ R d } s∈[0,T ] . f s : = − 1 u(T ) g 0 + 1 u(s) (g s − g 0 ) − T s u ′ (b) u(b) 2 (g b − g 0 ) db, s ∈ [0, T ].(78)
One can show that the above two transformations are inverse to each other. Next, we calculate $\mathcal{U}(T)$. Define $f_s := \nabla f(X(s))$. Under the transformation (77), we can find a simple expression of (76), given in (79); we use Fubini's Theorem at (•). Next we calculate $\mathcal{V}(T)$.
(76) = − 1 2 g 0 2 − T 0 u ′ (s) f s , X(s) − X(0) ds − T 0 u(s) f s ,Ẋ(s) ds = − 1 2 g 0 2 − T 0 u ′ (s) f s , s 0Ẋ (a)da ds + T 0 u(s) f s , s 0Ẋ (a)da ds = − 1 2 g 0 2 − T 0 u ′ (s) f s , s 0Ẋ (a)da ds + T 0 s 0 H(s, a) u(s)f s , f a dads (•) = − 1 2 g 0 2 − T 0 Ẋ (a),V(T ) = 1 u(T ) (f (Y (0)) − f (Y (T ))) + T 0 d ds 1 u(T − s) (f (Y (s)) − f (Y (T )) + ∇f (Y (s)), Y (T ) − Y (s) ) ds = 1 u(T ) (f (Y (0)) − f (Y (T ))) + T 0 u ′ (T − s) u(T − s) 2 Y (T ) − Y (s), ∇f (Y (s)) − ∇f (Y (T )) ds + T 0 d ds 1 u(T − s) (f (Y (s)) − f (Y (T )) + ∇f (Y (T )), Y (T ) − Y (s) ) ds (•) = 1 u(T ) ∇f (Y (T )), Y (0) − Y (T ) + T 0 u ′ (T − s) u(T − s) 2 Y (T ) − Y (s), ∇f (Y (s)) − ∇f (Y (T )) ds − T 0 1 u(T − s) Ẏ (s), ∇f (Y (s)) − ∇f (Y (T )) ds.
(•) comes from integration by parts and the assumption $\lim_{s\to T^-} v(s)\left(f(Y(s)) - f(Y(T)) + \langle\nabla f(Y(T)), Y(T) - Y(s)\rangle\right) = 0$.
To clarify, u ′ (T − s) = u ′ (z)| z=T −s . Now briefly write g s : = ∇f (Y (T − s)). Then
V(T ) − 1 2 ∇f (Y (T )) 2 = − 1 2 g 0 2 − 1 u(T ) T 0 g 0 ,Ẏ (s) ds + T 0 u ′ (T − s) u(T − s) 2 Y (T ) − Y (s), g T −s − g 0 ds − T 0 1 u(T − s) Ẏ (s), g T −s − g 0 ds = − 1 2 g 0 2 + T 0 u ′ (T − s) u(T − s) 2 T sẎ (a)da, g T −s − g 0 ds − 1 u(T ) T 0 g 0 ,Ẏ (s) ds − T 0 1 u(T − s) Ẏ (s), g T −s − g 0 ds (•) = − 1 2 g 0 2 + T 0 − 1 u(T ) g 0 − 1 u(T − s) (g T −s − g 0 ) + s 0 u ′ (T − b) u(T − b) 2 (g T −b − g 0 ) db,Ẏ (s) ds.(80)
We use Fubini's Theorem at (•). Finally, using (78), we obtain
V(T ) − 1 2 ∇f (Y (T )) 2 = − 1 2 g 0 2 − T 0 f T −s ,Ẏ (s) ds = − 1 2 g 0 2 + T 0 f T −s , s 0 H A (s, a)g T −a da ds = − 1 2 g 0 2 + T 0 f T −s , s 0 H(T − a, T − s)g T −a da ds = − 1 2 g 0 2 + T 0 s 0 H(s, a) f a , g s dads.
(81)
Proof of Theorem 2. Define $\mathcal{A}_1$ and $\mathcal{A}_2$ as follows.
Regularity of (10) at $t = T$. To begin with, note that ODE (10) can be expressed as
$$\dot{W}(t) = -\frac{2p-1}{T-t}W(t) - Cp^2(T-t)^{p-2}\nabla f(Y(t)), \qquad \dot{Y}(t) = W(t)$$
for $t \in (0,T)$. Since the right-hand sides are Lipschitz continuous with respect to W and Y on any closed interval $[0, s] \subseteq [0,T)$, a solution $(Y, W)$ satisfying the above ODE with initial condition $(Y(0), W(0)) = (y_0, 0)$ uniquely exists. Next, we prove the regularity of ODE (10) at the terminal time T in the following order; the proof structure is based on the regularity proof in [49]:
(i) $\sup_{t\in[0,T)}\|\dot{Y}(t)\|$ is bounded,
(ii) $Y(t)$ can be continuously extended to $T$,
(iii) $\lim_{t\to T^-}\|\dot{Y}(t)\| = 0$,
(iv) $\lim_{t\to T^-}\frac{\dot{Y}(t)}{(T-t)^{p-1}} = Cp\,\nabla f(Y(T))$.
Step (i): sup
t∈[0,T ) Ẏ (t) is bounded. ODE (10) is equivalent to 1 (T − s) p−2Ÿ (s) + 2p − 1 (T − s) p−1Ẏ (s) + Cp 2 ∇f (Y (s)) = 0.(82)
By multiplyingẎ (s) and integrating from 0 to t, we obtain
t 0 1 (T − s) p−2 Ÿ (s),Ẏ (s) ds + t 0 2p − 1 (T − s) p−1 Ẏ (s) 2 ds + Cp 2 t 0 Ẏ (s), ∇f (Y (s)) ds = 0
and integration by parts gives us
1 2(T − t) p−2 Ẏ (t) 2 − 1 2T p−2 Ẏ (0) 2 + t 0 2p − 1 (T − s) p−1 Ẏ (s) 2 ds + Cp 2 (f (Y (t)) − f (Y (0))) = 0. Define Ψ(t) : [0, T ) → R as Ψ(t) = 1 2(T − t) p−2 Ẏ (t) 2 + Cp 2 (f (Y (t)) − f (Y (0))) . SinceΨ (t) = − 2p − 1 (T − s) p−1 Ẏ (s) 2 ds, Ψ(t)
is a nonincreasing function. Thus
Ẏ (t) 2 = 2(T − t) p−2 Ψ(t) + Cp 2 (f (Y (0)) − f (Y (t))) ≤ 2T p−2 Ψ(0) + Cp 2 (f (Y (0)) − x ⋆ ) ,(83)
and the right hand side of (83) is constant, which implies M := sup
t∈[0,T ) Ẏ (t) < ∞.
Step (ii): Y (t) can be continuously extended to T . We can prove Y (t) is uniformly continuous due to the following analysis.
Y (t + δ) − Y (δ) = t+δ tẎ (s)ds ≤ t+δ t Ẏ (s) ds ≤ δM.
Since a uniformly continuous function g : D → R d can be extended continuously to D, Y : [0, T ) → R d can be extended to [0, T ].
Step ( f (Y (t)) also exists. Moreover, Ψ(t) is non-increasing and
Ψ(t) = 1 2(T − t) p−2 Ẏ (t) 2 + Cp 2 (f (Y (t)) − f (Y (0))) ≥ Cp 2 (x ⋆ − f (Y (0))) ,Ψ(t) = 1 2T p Ẏ (0) 2 − t 0 3 (T − s) p−1 Ẏ (s) 2 ds
is bounded below. Thus the value of integration is finite.
Step (iv): lim
t→T −Ẏ (t) (T −t) p−1 = Cp∇f (Y (T )). The (7) form of process Y iṡ Y (t) = − t 0 Cp 2 (T − t) 2p−1 (T − s) p+1 ∇f (Y (s))ds. By dividing (T − t) p−1 each side, we obtaiṅ Y (t) (T − t) p−1 = −(T − t) p t 0 Cp 2 (T − s) p+1 ∇f (Y (s))ds.
By the result of C.3, we can apply L'Hopital's rule, which gives
lim t→T −Ẏ (t) (T − t) p−1 = − lim t→T − t 0 Cp 2 (T −s) p+1 ∇f (Y (s))ds (T − t) −p = Cp∇f (Y (T )). Also, since ∇f is L-l=Lipshitz, ∇f (Y (t)) − ∇f (Y (T )) ≤ L Y (t) − Y (T ) . Therefore, lim t→T − ∇f (Y (t)) − ∇f (Y (T )) (T − t) β = 0(84)
for any β < p.
Applying Theorem 2 to (10). For the case $p = 2$, refer to [49]. Now we consider the case $p > 2$, with $u(t) = Ct^p$ and $v(t) = \frac{1}{C(T-t)^p}$. To verify condition (ii), we compute $\lim_{s\to T^-}$
v(s) (f (Y (s)) − f (Y (T )) + ∇f (Y (T )), Y (T ) − Y (s) ) = 1 C lim s→T − f (Y (s)) − f (Y (T )) + ∇f (Y (T )), Y (T ) − Y (s) (T − s) p (•) = 1 C lim s→T − Ẏ (s), ∇f (Y (s)) − ∇f (Y (T )) −p(T − s) p−1 (•) = − 1 pC lim s→T − Ẏ (s) (T − s) p−1 , ∇f (Y (s)) − ∇f (Y (T ))
=0.
(•) uses L'Hospital's rule and (•) uses the limit results at the previous paragraph. To prove
1 2 ∇f (Y (T )) 2 ≤ 1 CT p (f (Y (0)) − f (Y (T ))) , we carefully check that for any L-smooth convex function f , {∇f (Y (s))} s∈[0,T ] ∈ A 2 .
Verification of (80). Fubini's Theorem is used for 0≤s,a≤T
1 s≤a u ′ (T − s) u(T − s) 2 Ẏ (a), (∇f (Y (s)) − ∇f (Y (T ))) = 0≤s,a≤T 1 s≤aẎ (a) (∇f (Y (s)) − ∇f (Y (T ))) p C(T − s) p+1 .
The above function is continuous in 0 ≤ s, a ≤ T due to lim
t→T −Ẏ (t) (T −t) p−1 = Cp∇f (Y (T )) and (84) with β = 2 < p.
Verification of (79). First,
lim t→T − Y (t) − Y (T ) (T − t) p = lim t→T −Ẏ (t) −p(T − t) p−1 = −C∇f (Y (T ))
holds from (iv). Thus sup b∈(0,T ]
Y (T −b)−Y (T ) b p < ∞. Now we will show lim a→+0 a 1+ǫ f a = 0, ∀ ǫ > 0.
Observe
f a = 1 CT p g 0 + 1 CT p (g a − g 0 ) − T a p Cb p+1 (g b − g a ) db and a 1+ǫ f a = a 1+ǫ CT p g 0 + a 1+ǫ CT p (g a − g 0 ) − T a pa 1+ǫ Cb p+1 (g b − g a ) db ≤ a 1+ǫ CT p g 0 + a 1+ǫ CT p (g a − g 0 ) + T a pa 1+ǫ Cb p+1 g b − g 0 db + T a pa 1+ǫ Cb p+1 g a − g 0 db ≤ a 1+ǫ CT p g 0 + a 1+ǫ CT p (g a − g 0 ) + T a Lpa 1+ǫ Cb p+1 Y (T − b) − Y (T ) db + T a Lpa 1+ǫ Cb p+1 Y (T − a) − Y (T ) db ≤ a 1+ǫ CT p g 0 + a 1+ǫ CT p (g a − g 0 ) + T a Lpa ǫ Y (T − b) − Y (T ) b p db + La 1+ǫ C 1 a p − 1 T p Y (T − a) − Y (T ) .
By using the boundness of
Y (T −b)−Y (T ) b p
, we obtain the desired result.
Next,
a 2ǫ−p+2Ẋ (a) = a 0 a ǫ−p+2 H(a, b)f b db ≤ a 0 Cp 2 b 2p−1 a 2p−1−2ǫ f b db = a 0 Cp 2 b 2p−2−ǫ a 2p−1−2ǫ b 1+ǫ f b db = sup b∈[0,a] b 1+ǫ f b Cp 2 2p − 1 − ǫ a ǫ ,
which gives lim a→+0 a 2ǫ−p+2Ẋ (a) = 0, ∀ 0 < ǫ < 2p − 1.
For the final step, recall that Fubini's Theorem is used for 0≤a,s≤T 1 a≤s s p−1 f s ,Ẋ(a) dads.
To prove that the above function is continuous, take any 0 < ǫ < p−2 2 . Then lim 0≤a≤s,s→+0
s p−1 f s ,Ẋ(a) ≤ lim 0≤a≤s,s→+0 s −2ǫ+2p−3 a −2ǫ+p−2 Ẋ (a), f s = lim s→+0 s −2ǫ+2p−3 f s · lim a→+0 Ẋ (a) a −2ǫ+p−2 = 0,
which gives the desired result.
D Omitted calculation in Section 4
D.1 Preliminaries
General proximal step and composite FSFOM. For $\alpha > 0$, we newly define a prox-grad step with step size $\frac{1}{\alpha L}$ as follows:
$$y^{\oplus,\alpha} := \operatorname*{argmin}_{z\in\mathbb{R}^d}\left(f(y) + \langle\nabla f(y), z - y\rangle + g(z) + \frac{\alpha L}{2}\|z - y\|^2\right). \tag{85}$$
First, we will provide a generalized version of the prox-grad inequality; the prox-grad inequality is originally used to prove the convergence of FISTA, as below [10]:
$$F(y^{\oplus,1}) - L\langle y^{\oplus,1} - y, x - y^{\oplus,1}\rangle \le F(x) + \frac{L}{2}\|y^{\oplus,1} - y\|^2.$$
Proposition 4 (Generalized version of the prox-grad inequality). For every $x, y \in \mathbb{R}^d$, the following holds:
$$[x, y]_\alpha := F(y^{\oplus,\alpha}) - F(x^{\oplus,\alpha}) - L\alpha\langle y^{\oplus,\alpha} - y, x^{\oplus,\alpha} - y^{\oplus,\alpha}\rangle - \frac{L}{2}\|y^{\oplus,\alpha} - y\|^2 \le 0.$$
Proof. The optimality condition of (85) provides
$$L\alpha\left(y^{\oplus,\alpha} - y\right) + \nabla f(y) + u = 0, \qquad u \in \partial g(y^{\oplus,\alpha}).$$
In addition, L-smoothness of f and convexity inequality of f and g give
$$F(y^{\oplus,\alpha}) \le f(y) + \langle\nabla f(y), y^{\oplus,\alpha} - y\rangle + \frac{L}{2}\|y^{\oplus,\alpha} - y\|^2 + g(y^{\oplus,\alpha}),$$
$$g(y^{\oplus,\alpha}) + \langle u, x^{\oplus,\alpha} - y^{\oplus,\alpha}\rangle = g(y^{\oplus,\alpha}) + \langle -L\alpha(y^{\oplus,\alpha} - y) - \nabla f(y), x^{\oplus,\alpha} - y^{\oplus,\alpha}\rangle \le g(x^{\oplus,\alpha}),$$
$$f(y) + \langle\nabla f(y), x^{\oplus,\alpha} - y\rangle \le f(x^{\oplus,\alpha}).$$
Summing the above inequalities provides the generalized version of prox-grad inequality.
Two problem settings for the composite optimization problem. For the composite optimization problem, we consider the following two problems.
(P1′) Efficiently reduce $F(x_N^{\oplus,\alpha}) - F_\star$ assuming $x_\star$ exists and $\|x_0 - x_\star\| \le R$.
(P2′) Efficiently reduce $\min_{v\in\partial F(y_N^{\oplus,\alpha})}\|v\|^2$ assuming $F_\star > -\infty$ and $F(y_0) - F_\star \le R$.
Note that when g = 0, (P1 ′ ) and (P2 ′ ) collapse to (P1) and (P2), respectively.
Parameterized FSFOM that reduces the composite function value: GFPGM [25]. First define $x_\star := \operatorname{argmin}_{x\in\mathbb{R}^d} F(x)$. We recall the composite function minimization FSFOMs with $x_i^{\oplus,\alpha}$, given by a lower triangular matrix $\{h_{k,i}\}_{0\le i<k\le N}$, as follows:
$$x_{k+1} = x_k - \sum_{i=0}^{k}\alpha h_{k+1,i}\left(x_i - x_i^{\oplus,\alpha}\right), \qquad \forall\, k = 0,\dots,N-1. \tag{87}$$
In the case $g = 0$, (87) collapses to (1) since $\alpha\left(z - z^{\oplus,\alpha}\right) = \left(z - z^+\right) = \frac{1}{L}\nabla f(z)$. The iterations of GFPGM are defined as
$$x_{k+1} = x_k^{\oplus,1} + \frac{(T_k - t_k)t_{k+1}}{t_k T_{k+1}}\left(x_k^{\oplus,1} - x_{k-1}^{\oplus,1}\right) + \frac{(t_k^2 - T_k)t_{k+1}}{t_k T_{k+1}}\left(x_k^{\oplus,1} - x_k\right), \qquad k = 0,\dots,N-1 \tag{88}$$
where $x_{-1}^{\oplus,1} = x_0$ and $t_i > 0$, $T_i = \sum_{j=0}^{i} t_j \le t_i^2$ for $0 \le i \le N$.
(88) reduces the composite function value as follows:
$$F(x_N^{\oplus,1}) - F_\star \le \frac{1}{T_N}\cdot\frac{1}{2}\|x_0 - x_\star\|^2. \tag{89}$$
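The GFPGM iteration (88) is simple to run once a sequence $\{t_i\}$ with $T_i \le t_i^2$ is fixed. The sketch below does so; `prox1` is assumed to return $x^{\oplus,1}$ for the problem at hand, and the suggested choice of $t_i$ in the comment is one admissible option (an assumption of this sketch).

```python
import numpy as np

# Sketch of the GFPGM iteration (88). `prox1` returns x^{+,1} (alpha = 1 proximal gradient step).
# t is a user-supplied array of length N+1 with t_i > 0 and T_i = sum_{j<=i} t_j <= t_i^2.
# One admissible choice: t_0 = 1, t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2, which gives T_i = t_i^2.
def gfpgm(x0, prox1, t, N):
    T = np.cumsum(t)
    x = np.asarray(x0, dtype=float)
    p_prev = x.copy()                   # x_{-1}^{+,1} = x_0
    for k in range(N):
        p = prox1(x)
        beta  = (T[k] - t[k]) * t[k + 1] / (t[k] * T[k + 1])
        gamma = (t[k] ** 2 - T[k]) * t[k + 1] / (t[k] * T[k + 1])
        x, p_prev = p + beta * (p - p_prev) + gamma * (p - x), p
    return prox1(x)                     # x_N^{+,1}, the point for which (89) holds
```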
We can reconstruct the convergence proof of [25, Theorem 3.3] via our energy function scheme, which is defined as
U k = L 2 x 0 − x ⋆ 2 + k−1 i=0 u i x i , x i+1 1 + k i=0 (u i − u i−1 ) x ⋆ , x i 1(90)
for k = −1, . . . , N . The convergence rate result (89) is proved by using that {U k ] N k=−1 is dissipative, and
U N − T N F (x ⊕,1 N ) − F ⋆ = L 2 x 0 − x ⋆ + N i=0 (u i − u i−1 )(x i − x ⊕,1 i ) 2 + N i=0 L(T i − t 2 i ) 2 x ⊕,1 i − x i 2 ≥ 0.(91)
Here, we can make a crucial observation.
The formulation of (90) is the same as that of (2), with all $[\cdot,\cdot]$ terms changed into $[\cdot,\cdot]_1$ and $u_i = T_i$.
Moreover, we can express the recursive formula of the H matrix of (88) as
$$h_{k+1,i} = \begin{cases} 1 + \dfrac{(t_k - 1)t_{k+1}}{T_{k+1}} & i = k \\ \dfrac{(T_k - t_k)t_{k+1}}{t_k T_{k+1}}\left(h_{k,k-1} - 1\right) & i = k-1 \\ \dfrac{(T_k - t_k)t_{k+1}}{t_k T_{k+1}}\,h_{k,i} & i = 0,\dots,k-2. \end{cases} \tag{92}$$
As we used a parameterized family in the convex-function case (see Section 2.5), we will use these convergence results to construct a method for minimizing $\min_{v\in\partial F(y_N^{\oplus,\alpha})}\|v\|^2$.
Relationship between minimizing $\frac{\alpha^2 L}{2}\|y_N^{\oplus,\alpha} - y_N\|^2$ and minimizing $\min_{v\in\partial F(y_N^{\oplus,\alpha})}\|v\|^2$. Note that $L\alpha\left(y_N^{\oplus,\alpha} - y_N\right) + \nabla f(y_N) + u = 0$ for some $u \in \partial g(y_N^{\oplus,\alpha})$, which gives
$$\min_{v\in\partial F(y_N^{\oplus,\alpha})}\|v\|^2 \overset{(\bullet)}{\le} \left(\|\nabla f(y_N) - \nabla f(y_N^{\oplus,\alpha})\| + L\alpha\|y_N - y_N^{\oplus,\alpha}\|\right)^2 \overset{(\circ)}{\le} L^2(\alpha+1)^2\|y_N - y_N^{\oplus,\alpha}\|^2. \tag{93}$$
(•) is the triangle inequality and (◦) comes from the L-smoothness of f.
D.2 Proof of Theorem 3
In this section, we give the proof outline of Theorem 3 and discuss the construction of matrix C.
Proof outline of Theorem 3. We begin by proposing a parameterized family that reduces $\frac{1}{2L}\|y_N^{\oplus,\alpha} - y_N\|^2$ under a fixed value of $\alpha > 0$. To construct this family, take $\{t_i\}_{i=0}^{N}$, $T_i = \sum_{j=0}^{i} t_j$ satisfying (96). We define the H matrix of the family as $\frac{1}{\alpha}H_0 + \frac{1}{\alpha^2}C^A$,
where $H_0$ has the same formulation as the H matrix of GFPGM (88) and C follows the recursive formula (97). We refer to this family as (SFG-family).
We can prove that (SFG-family) exhibits the convergence rate
$$\frac{\alpha L}{2}\|y_N^{\oplus,\alpha} - y_N\|^2 \le \frac{1}{T_N}\left(F(y_0) - F_\star\right), \tag{94}$$
which is motivated by the observation (⋄). In parallel with (⋄), we consider the energy function
$$V_k = \frac{F(y_0) - F(y_N^{\oplus,\alpha}) + [y_{-1}, y_0]_\alpha}{T_N} + \sum_{i=0}^{k-1}\frac{1}{T_{N-i-1}}[y_i, y_{i+1}]_\alpha + \sum_{i=0}^{k-1}\left(\frac{1}{T_{N-i-1}} - \frac{1}{T_{N-i}}\right)[y_N, y_i]_\alpha$$
for $k = 0,\dots,N$, in which $[y_N, \star]$ is changed into $[y_{-1}, y_0]_\alpha$, all other $[\cdot,\cdot]$ terms are changed into $[\cdot,\cdot]_\alpha$, and $v_i = \frac{1}{T_{N-i}}$. (94) follows from the fact that $\{V_i\}_{i=0}^{N}$ is dissipative and
$$\frac{\alpha L}{2}\|y_N^{\oplus,\alpha} - y_N\|^2 \le V_N.$$
Detailed justification is given in Appendix D.3.
For the next step, we show that (SFG) is an instance of (SFG-family), under the choice of
$$\alpha = 4, \qquad T_i = \frac{(i+2)(i+3)}{4}, \qquad 0 \le i \le N.$$
The detailed derivation is given in Appendix D.4.
Finally, combining the convergence result
$$2L\|y_N^{\oplus,4} - y_N\|^2 \le V_N \le V_0 \le \frac{4}{(N+2)(N+3)}\left(F(y_0) - F_\star\right)$$
and (93) gives
$$\min_{v\in\partial F(y_N^{\oplus,4})}\|v\|^2 \le 25L^2\|y_N^{\oplus,4} - y_N\|^2 \le \frac{50L}{(N+2)(N+3)}\left(F(y_0) - F_\star\right).$$
Construction of matrix C. For an FSFOM (87) with matrix H 1 , we have V N − αL 2 y ⊕,α N − y N 2 = 1 2Lα 2 Tr (g ⊺ gT ⊕ ) with
T ⊕ : = α 2 N i=0 1 T N −i (H ⊺ 1 (e i + · · · + e N )(e i − e i−1 ) ⊺ + (e i − e i−1 )(e i + · · · + e N ) ⊺ H 1 ) A3 + α N i=0 1 T N −i ((e i−1 − e i )(e i−1 − e N ) ⊺ + (e i−1 − e N )(e i−1 − e i ) ⊺ ) − 1 T N e 0 e ⊺ 0 − e N e ⊺ N B3 + α T N e 0 e ⊺ 0 − N −1 i=0 1 T N −1−i e i e ⊺ i − 1 T 0 e N e ⊺ N
where e −1 = 0, e i is a unit vector which (i + 1) − th component is 1 and R N +1 and H 1 : = 0 0 H 1 0 . If we take
H 1 that makes T ⊕ 0, V N ≥ αL 2 y ⊕,α N − y N 2 follows. Next we observe 1 α 2 A 3 + 1 α B 3 = T (H 1 , v) for v i = 1 TN−i
which is defined as (22). By expanding (91) and defining g ′ :
= L x 0 − x ⊕,α 0 |. . . |x N − x ⊕,α N , U N − T N F (x ⊕,1 N ) − F ⋆ − L 2 x 0 − x ⋆ + N i=0 (u i − u i−1 )(x i − x ⊕,1 i ) 2 = Tr g ′ (g ′ ) ⊺ S ⊕ follows where S ⊕ : = S(H 0 , u) − N −1 i=0 (T i − t 2 i )e i e ⊺ i .
Here, u i = T i , and S(H 0 , u) is given by the formulation (19). Using (91), we obtain
S(H 0 , u) = N −1 i=0 (2T i − t 2 i )e i e ⊺ i + (T N − t 2 N )e N e ⊺ N .
Now recall matrix M(u) (26) and the result of Theorem 1:
S(H 0 , u) = M(u) ⊺ T (H A 0 , v)M(u).(95)
By substituting H 1 = 1 α H A 0 + X and using (95), the result is
M(u) ⊺ T ⊕ M(u) =α N −1 i=0 (2T i − t 2 i )e i e ⊺ i + (T N − t 2 N )e N e ⊺ N + α 2 M(u) ⊺ N i=0 1 T N −i (X ⊺ (e i + · · · + e N )(e i − e i−1 ) ⊺ + (e i − e i−1 )(e i + · · · + e N ) ⊺ X ) M(u) A4 + M(u) ⊺ α T N e 0 e ⊺ 0 − N −1 i=0 1 T N −1−i e i e ⊺ i − 1 T 0 e N e ⊺ N M(u) C3
where X : = 0 0 X 0 .
Next, consider A 4 as a function of the lower triangular matrix X . The key observation is that if we choose X appropriately, all non-diagonal terms of C 3 can be eliminated. With this choice of X , we have
T ⊕ = α N i=0 (2T i − t 2 i )e i e ⊺ i − T 0 e 0 e ⊺ 0 − N i=1 T 2 i T i−1 + t 2 i i−2 j=0 1 T j e i e ⊺ i .
Note that T ⊕ contains only diagonal terms. Therefore, T ⊕ 0 is equivalent to all coefficients of e i e ⊺ i being nonnegative, which can be formulated using t i and T i .
Remark. In the proof of Theorem 1 ( B 2 and B 1 at (27) and (28)), we have shown that
A 4 = α 2 N i=0 X A ⊺ T i (e i − e i+1 )(e 0 + · · · + e i ) ⊺ + T i (e i − e i+1 )(e 0 + · · · + e i ) ⊺ X A .
Observe that $(e_i - e_{i+1})(e_0 + \cdots + e_i)^\intercal$ is a lower triangular matrix and $\mathcal{X}^A$ is a strictly lower triangular matrix, i.e., all of its diagonal components are 0. Thus it is worth noting that $A_4$ cannot induce any diagonal term $e_i e_i^\intercal$, which justifies choosing the matrix $C$ as the optimal one.

D.3 Parameterized family reduces the gradient mapping norm

FSFOM that reduces $\frac{1}{2L}\|y_N^{\oplus,\alpha} - y_N\|^2$ in the composite minimization. We propose a parameterized family that reduces $\frac{1}{2L}\|y_N^{\oplus,\alpha} - y_N\|^2$.
For fixed $\alpha > 0$, take $\{t_i\}_{i=0}^{N}$ and $T_i = \sum_{j=0}^{i} t_j$ satisfying the following conditions:
\alpha(2T_0 - t_0^2) \ge T_0, \qquad \alpha(2T_k - t_k^2) \ge \frac{T_k^2}{T_{k-1}} + t_k^2\Big(\frac{1}{T_0} + \sum_{i=0}^{k-2}\frac{1}{T_i}\Big), \quad k = 1, \dots, N, (96)
where we define $\sum_{i=0}^{k-2}\frac{1}{T_i} = 0$ for the case $k = 1$. We define a lower triangular matrix $C = \{c_{k,i}\}_{0\le i<k\le N}$ as
c_{k+1,i} = \begin{cases} \frac{t_1}{T_1} & i = 0,\; k = 0 \\ \frac{t_{k+1}}{T_{k+1}}\Big(\frac{t_k}{T_0} + t_k\sum_{j=0}^{k-2}\frac{1}{T_j} + \frac{T_k}{T_{k-1}}\Big) & i = k,\; k = 1, \dots, N-1 \\ \frac{t_{k+1}(T_k - t_k)}{t_k T_{k+1}}\, c_{k,i} & i = 0, \dots, k-1,\; k = 1, \dots, N-1. \end{cases} (97)
Matrix C has a crucial role in the correspondence between composite function value minimization and composite function gradient norm minimization, with $H \to \frac{1}{\alpha}H + \frac{1}{\alpha^2}C^A$.
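The following Python sketch builds $C$ from (97) and checks condition (96). It is an illustration of ours, with the last branch of (97) read as applying for all $k = 1,\dots,N-1$ and $i = 0,\dots,k-1$, and with the example sequence taken from the Appendix D.4 choice $\alpha = 4$, $T_i = (i+2)(i+3)/4$.

import numpy as np

def sfg_C(t, alpha):
    """Builds C = {c_{k,i}} from (97) and checks condition (96).

    Returns (C, ok), where ok is True iff (96) holds for this (t, alpha).
    Indexing assumption: c_{k,i} with 0 <= i < k <= N is stored in an
    (N+1) x (N+1) array; all other entries are zero.
    """
    t = np.asarray(t, dtype=float)
    N = len(t) - 1
    T = np.cumsum(t)

    ok = alpha * (2 * T[0] - t[0] ** 2) >= T[0]
    for k in range(1, N + 1):
        rhs = T[k] ** 2 / T[k - 1] + t[k] ** 2 * (1 / T[0] + sum(1 / T[i] for i in range(k - 1)))
        ok = ok and (alpha * (2 * T[k] - t[k] ** 2) >= rhs - 1e-12)

    C = np.zeros((N + 1, N + 1))
    C[1, 0] = t[1] / T[1]
    for k in range(1, N):
        C[k + 1, k] = t[k + 1] / T[k + 1] * (
            t[k] / T[0] + t[k] * sum(1 / T[j] for j in range(k - 1)) + T[k] / T[k - 1])
        for i in range(k):
            C[k + 1, i] = t[k + 1] * (T[k] - t[k]) / (t[k] * T[k + 1]) * C[k, i]
    return C, ok

# The choice of Appendix D.4: alpha = 4, T_i = (i+2)(i+3)/4.
N = 6
T = [(i + 2) * (i + 3) / 4 for i in range(N + 1)]
t = [T[0]] + [T[i] - T[i - 1] for i in range(1, N + 1)]  # t_0 = 3/2, t_i = (i+2)/2
C, ok = sfg_C(t, alpha=4)
print("condition (96) satisfied:", ok)

Combined with the $H_0$ recursion sketched after (92), this yields the family's matrix $\frac{1}{\alpha}H_0 + \frac{1}{\alpha^2}C^A$, where the anti-transpose is the reflection across the anti-diagonal.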
Proposition 5 (SFG family). Assume that we choose $H_0$ as the H matrix of (88) and define $C$ as in (97). Consider an FSFOM (87) whose H matrix is $\frac{1}{\alpha}H_0 + \frac{1}{\alpha^2}C^A$. Such an FSFOM can be expressed as
y k+1 = y ⊕,α k + β ′ k y ⊕,α k − y ⊕,α k−1 + γ ′ k y ⊕,α k − y k , k = 0, . . . , N − 1, β ′ k = β N −k (β N −1−k + γ N −1−k ) β N −k + γ N −k , γ ′ k = γ N −k (β N −1−k + γ N −1−k ) β N −k + γ N −k , β k = t k+1 (T k − t k ) t k T k+1 , γ k = t k+1 (t 2 k − T k ) t k T k+1 + 1 α c k+1,k .
(SFG-family) and y ⊕,α −1 = y 0 . Then it exhibits the convergence rate
αL 2 y ⊕ N − y N 2 ≤ 1 T N (F (y 0 ) − F ⋆ ) .
Proof of Proposition 5. To start, we give the claim about matrix C.
Claim 1. C : = 0 0 C 0 satisfies the following equality.
2 N i=0 T i (e i − e i+1 )(e 0 + · · · + e i ) ⊺ C = N i=1 1 T i−1 f N −i f ⊺ N −i + 1 T 0 f N f ⊺ N − T 0 e 0 e ⊺ 0 − N i=1 T 2 i T i−1 + t 2 i i−2 j=0 1 T j e i e ⊺ i(98)
where $\{e_i\}_{i=0}^{N}$ is any basis of $\mathbb{R}^{N+1}$, $e_{N+1}$ and $f_{-1}$ are zero vectors, and $\{f_i\}_{i=0}^{N}$ is another basis of $\mathbb{R}^{N+1}$, defined by $T_i(e_i - e_{i+1}) = f_{N-i} - f_{N-i-1}$, $i = 0, 1, \dots, N$.
The proof of Claim 1 will be provided after the proof of Proposition 5. To begin with, for the FSFOM with matrix 1 α H 0 + 1 α 2 C A , define the energy function
V k = F (y 0 ) − F (y ⊕,α N ) + y −1 , y 0 α T N + k−1 i=0 1 T N −i−1 y i , y i+1 α + k−1 i=0 1 T N −i−1 − 1 T N −i y N , y i α ,
for k = 0, . . . , N . {V k } N k=0 is dissipative since ·, · α ≤ 0. This energy function is inspired by (3). Note that if we can prove
αL 2 y ⊕,α N − y N 2 ≤ V N ,(99)
then
αL 2 y ⊕,α N − y N 2 ≤ V N ≤ V 0 ≤ 1 T N F (y 0 ) − F (y ⊕,α N ) ≤ 1 T N (F (y 0 ) − F ⋆ ) .
Thus, it is enough to show (99). Defining g i : = Lα(y i − y ⊕,α i ) for 0 ≤ i ≤ N gives y i , y j α = F (y ⊕,α j ) − F (y ⊕,α i ) + g j , y i − y j + 1 La g j , g j − g i − 1 2Lα 2 g j 2 , y −1 , y 0 α = F (y ⊕,α 0 ) − F (y 0 ) + 1 L 2α − 1 2α 2 g 0 2 .
Plugging the above equalities provides
2Lα 2 V N − αL 2 y ⊕,α N − y N 2 = 2Lα 2 N −1 i=0 1 T N −i−1 g i+1 , y i − y i+1 + N −1 i=0 1 T N −i−1 − 1 T N −i g i , y N − y i + 2α − 1 T N g 0 2 − α g N 2 + N −1 i=0 1 T N −i−1 2α g i+1 , g i+1 − g i − g i+1 2 + N −1 i=0 1 T N −i−1 − 1 T N −i 2α g i , g i − g N − g i 2 = 2Lα 2 N −1 i=0 1 T N −i−1 g i+1 , y i − y i+1 + N −1 i=0 1 T N −i−1 − 1 T N −i g i , y N − y i + N −1 i=0 α T N −i−1 g i+1 − g i 2 + N −1 i=0 α T N −i−1 − α T N −i g i − g N 2 − a g N 2 + α T N g N 2 + N −1 i=0 1 T N −i−1 −α g i 2 + (a − 1) g i+1 2 + N −1 i=0 1 T N −i−1 − 1 T N −i −α g N 2 + (α − 1) g i 2 + 2α − 1 T N g 0 2 − α T N g N 2 = 2Lα 2 N −1 i=0 1 T N −i−1 g i+1 , y i − y i+1 + N −1 i=0 1 T N −i−1 − 1 T N −i g i , y N − y i + N −1 i=0 α T N −i−1 g i+1 − g i 2 + N −1 i=0 α T N −i−1 − α T N −i g i − g N 2 − α g N 2 + α T N g N 2 + α T N g 0 2 − N −1 i=0 1 T N −1−i g i 2 − 1 T 0 g N 2 .
Next, define g : = [g 0 |g 1 . . . |g N ] and
H 1 : = 0 0 1 α H 0 + 1 α 2 C A 0 , H 0 : = 0 0 H 0 0 .
By the same procedure as the proof of Theorem 1, we obtain 2Lα 2 V N − αL
2 y ⊕,α N − y N 2 = Tr (g ⊺ gT ⊕ ) where T ⊕ =α 2 N i=0 1 T N −i (H ⊺ 1 (f i + · · · + f N )(f i − f i−1 ) ⊺ + (f i − f i−1 )(f i + · · · + f N ) ⊺ H 1 ) + α N i=0 1 T N −i ((f i−1 − f i )(f i−1 − f N ) ⊺ + (f i−1 − f N )(f i−1 − f i ) ⊺ ) − 1 T N f 0 f ⊺ 0 − f N f ⊺ N + α T N f 0 f ⊺ 0 − N −1 i=0 1 T N −1−i f i f ⊺ i − 1 T 0 f N f ⊺ N
And consider the vector e i ∈ R (N +1)×1 which is same with (29).
In the proof of Theorem 1 (27), (28) , we have shown that
1 2L N i=0 1 T N −i (H ⊺ 1 (f i + · · · + f N )(f i − f i−1 ) ⊺ + (f i − f i−1 )(f i + · · · + f N ) ⊺ H 1 ) = 1 2L H ⊺ 0 α + C ⊺ α 2 N i=0 T i (e 0 + · · · + e i )(e i − e i+1 ) ⊺ + 1 2L N i=0 T i (e i − e i+1 )(e 0 + · · · + e i ) ⊺ H 0 α + C α 2 and 1 2L N i=0 1 T N −i ((f i−1 − f i )(f i−1 − f N ) ⊺ + (f i−1 − f N )(f i−1 − f i ) ⊺ ) − 1 2T N L f 0 f ⊺ 0 − 1 2L f N f ⊺ N = − 1 2L N i=0 (T i − T i−1 )e i N i=0 (T i − T i−1 )e i ⊺ + 1 2L N i=0 T i ((e i − e i+1 )e ⊺ i + e i (e i − e i+1 ) ⊺ ) − T N e N e ⊺ N
under the above transformation. Therefore, T ⊕ can be expressed as
T ⊕ = α 2 H ⊺ 0 α + C ⊺ α 2 N i=0 T i (e 0 + · · · + e i )(e i − e i+1 ) ⊺ + N i=0 T i (e i − e i+1 )(e 0 + · · · + e i ) ⊺ H 0 α + C α 2 + α − N i=0 (T i − T i−1 )e i N i=0 (T i − T i−1 )e i ⊺ + N i=0 T i ((e i − e i+1 )e ⊺ i + e i (e i − e i+1 ) ⊺ ) − T N e N e ⊺ N + α T N f 0 f ⊺ 0 − N −1 i=0 1 T N −1−i f i f ⊺ i − 1 T 0 f N f ⊺ N = αA + B + α T N f 0 f ⊺ 0 where A = H ⊺ 0 N i=0
T i (e 0 + · · · + e i )(e i − e i+1 ) ⊺ + T i (e i − e i+1 )(e 0 + · · · + e i ) ⊺ C
− N −1 i=0 1 T N −1−i f i f ⊺ i − 1 T 0 f N f ⊺ N .
To calculate A, expand the energy function (91). Then we obtain
A = N −1 i=0 (2T i − t 2 i )e i e ⊺ i + (T N − t 2 N )e N e ⊺ N .
Moreover, by Claim 1,
B = −T 0 e 0 e ⊺ 0 − N i=1 T 2 i T i−1 + t 2 i i−2 j=0 1 T j e i e ⊺ i .
Combining above results and T N e N = f 0 , we achieve
T ⊕ = α N i=0 (2T i − t 2 i )e i e ⊺ i − T 0 e 0 e ⊺ 0 − N i=1 T 2 i T i−1 + t 2 i i−2 j=0 1 T j e i e ⊺
i and the condition (96) makes each coefficient of e i e ⊺ i nonnegative, which gives T 0. To achieve the iteration formula (SFG-family), consider the FSFOM (87) with 1 α H 0 + 1 α 2 C, which can be expressed as
x k+1 = x ⊕,α k + β k x ⊕,α k − x ⊕,α k−1 + γ k x ⊕,α k − x k , k = 0, . . . , N − 1, β k = t k+1 (T k − t k ) t k T k+1 , γ k = t k+1 (t 2 k − T k ) t k T k+1 + 1 α c k+1,k .
For the last step, we observe that Proposition 1 can be applied with all $z^{+}$ terms changed into $z^{\oplus,\alpha}$. Proposition 1 then gives the H-dual.
Proof of Claim 1. The left hand side of (98) is
2 N i=0 T i (e i − e i+1 )(e 0 + · · · + e i ) ⊺ C =2 N i=0 T i (e i − e i+1 )(e 0 + · · · + e i ) ⊺ k>i c k,i e k e ⊺ i =2 N i=0 (f N −i − f N −i+1 )(e 0 + · · · + e i ) ⊺ k>i c k,i e k e ⊺ i =2 N i=0 f N −i e ⊺ i k>i c k,i e k e ⊺ i =2 N i=0 N k=i+1 c k,i f N −k e ⊺ i
Now we calculate the right-hand side of (98). By plugging
f N −i = T i e i + t i+1 e i+1 + · · · + t N e N , N i=1 1 T i−1 f N −i f ⊺ N −i + 1 T 0 f N f ⊺ N − T 0 e 0 e ⊺ 0 − N i=1 T 2 i T i−1 + t 2 i i−2 j=0 1 T j e i e ⊺ i = 2 N −1 i=0 a i e ⊺ i
where a 0 = t 1 e 1 + · · · + t N e N ,
a i = t i T 0 + i−2 j=0 t i T j + T i T i−1
(t i+1 e i+1 + · · · + t N e N ) , i = 1, . . . , N − 1. (c k,i T k + c k−1,i t k + · · · + c i+1,i t k ) e k . (k+3)(k+4) 4 k + 2 2 × 2 3 + k + 2 2 4 2 × 3 + · · · + 4 (k + 1)(k + 2) + 1 = 2 k + 4 k + 2 3 + k + 2 2 2k k + 2 + 1 = 2(4k + 5) 3(k + 4) .
To sum up, the matrix {g k,i } 0≤i<k≤N : = 1 4 H 0 + 1 16 C satisfies the following recursive formula. Fastest method among the SFG family via a numerical choice of a In this section, we give simple form of SFG, when all inequality conditions in (96) holds as equalities. {g k,i } 0≤i<k≤N becomes
g 1,0 = 1 α 1 + (t 0 − 1)t 1 T 1 + 1 α 2 t 1 T 1
Given a kernel H(t, s), analogously define its anti-transpose as H A (t, s) = H(T −s, T −t). We call [C-FSFOM with H A ] the H-dual of [C-FSFOM with H]. In the special case H(t, s) = e γ(s)−γ(t)
−t) for t ∈ [0, T ]. Assume certain regularity conditions (specified in Appendix C.2). Then, [(C3) is satisfied with u(·) and H] ⇔ (C4) is satisfied with v(·) and H A .
U
(T ) − u(T ) (f (X(T )) − f⋆) = TẊ(T )+2(X(T )−x⋆) 2 +2(r−3) X(T )
Now we calculate {s ij } and {t ij } for [(OGM), (OGM-G)] [(OBL-F ♭ ), (OBL-G ♭ )] and [(GD), (GD)]. B.2.1 Calculation of energy function of (OGM) and (OGM-G) We will show s ij = 0 and t ij = 0 for all i, j. By the definition of {u i } N i=−1 and {θ i } N i=−1 ,
of energy function of (OBL-F ♭ ) and (OBL-G ♭ )
Next, we prove that (GD) with h = 1 and{u i } N i=0 = . . . , (2N +1)(i+1) 2N −i ,. . . , 2N + 1 satisfies (C1), by showing more general statement: (GD) and {u i } N i=0 = . . . ,
+1 is a unit vector which (i + 1)-th component is 1, and y 0 := 0, y i+1 := y i − 1 L i j=0 h i+1,j e j i = 0, . . . , N − 1.
Proposition 2. Consider a matrix $H = \{h_{i,j}\}_{0\le j<i\le N}$ and $H^A$. If $h_{i+1,i} \neq 0$ for $0 \le i \le N-1$ and the solution of an infeasible maximization problem is treated as 0, then the optimal values of $P_1(\mathcal{L}_F, H, R)$ and $P_2(\mathcal{L}_G, H^A, R)$ are the same.

Proposition 3. Consider a matrix $H = \{h_{k,i}\}_{0\le i<k\le N}$ and $H^A$. If $h_{i+1,i} \neq 0$ for $0 \le i \le N-1$ and the solution of an infeasible maximization problem is treated as 0, then the optimal values of $P_1(\mathcal{L}_{F'}, H, R)$ and $P_2(\mathcal{L}_{G'}, H^A, R)$ are the same.
Applying Proposition 2 and using the fact $H_{\text{OGM-G}} = H_{\text{OGM}}^{A}$ provides that $H_{\text{OGM-G}}$ is the solution of $\operatorname*{maximize}_{H,\ h_{i+1,i}\neq 0}\ \operatorname*{minimize}_{v_0,\dots,v_N}$
holds for any {∇f (X(s))} s∈[0,T ] . Now define a transformation {f s ∈ R d } s∈[0,T ] → {g s ∈ R d } s∈[0,T ] as g s : = u(s)f s + T s u ′ (z)f z dz, s ∈ [0, T ]
H
(s, a) f a , g s dads.
A 1 =H
1{{f s } s∈[0,T ] |Analysis in the previous paragraph holds}, A 2 = {{g s } s∈[0,T ] |Analysis in the previous paragraph holds}. Now we prove Theorem 2. We have shown that (C3 ′ ) is equivalent to (76) ≥ 0 for all {∇f (X(s))} s∈[0,T ] ∈ A 1 . By definition of A 1 , it is equivalent to (s, a) f a , g s dads ≥ 0 for any {g s } s∈[0,T ] ∈ A 2 . Moreover, by (81), it is also equivalent to {g s } s∈[0,T ] ∈ A 2 , which is (C4 ′ ). C.3 Omitted parts in Section 3.3
= 0 .
0We first prove the limit lim t→T − Ẏ (t) = 0 exists. From C.3, we know lim t→T − Y (t) exists and by continuity of f , lim t→T −
1 C
1(T −t) p . First, we verify conditions (i) and (ii) in Theorem 6. (i) holds since u(0) = 0, and (ii) holds since lim s→T −
Define g : = Lα y 0 − y ⊕,α 0 |. . . |y N − y ⊕,α N .
TT i (e 0 +
0i (e i − e i+1 )(e 0 + · · · + e i ) i ((e i − e i+1 )e ⊺ i + e i (e i − e i+1 ) ⊺ ) − T N e N e · · · + e i )(e i − e i+1 ) ⊺ + N i=0
1 .
1Now we claim that a i = N k=i+1 c k,i f N −k for i = 0, . . . , N − Note k,i (T k e k + t k+1 e k+1 + · · · + t N e N ) = N k=i+1
Therefore, the FSFOM with {g k,i } 0≤i<k≤N can be expressed asx 1 = x k , k = 1, . . . , N − 1 where x ⊕,4 −1 = x 0 .To obtain the H-dual, we apply Proposition 1.y k+1 = y ⊕,4 k + (N − k + 1)(2N − 2k − 1) (N − k + 3)(2N − 2k + 1) k = 0, .. . , N − 2 and y ⊕,4 −1 = y 0 .
j+1,N −i+1 for i, j = 1, . . . , N . We call [FSFOM with H A ] the H-dual of [FSFOM with H].
A family of gradient reduction methods. Parameterized families of accelerated FSFOMs for reducing function values have been presented throughout the extensive prior literature. Such families generalize Nesterov's method and elucidate the essential algorithmic component that enables acceleration. For reducing gradient magnitude, however, there are only four accelerated FSFOMs(OGM-G), (OBL-G ♭ ), and M-OGM-G [58], and [16, Lemma 2.6]. Here, we construct a simple
parameterized family of accelerated FSFOMs for reducing gradient magnitude by H-dualizing an
accelerated FSFOM family for reducing function values.
Let {t i } N
i=0 and {T i } N
i=0 be sequences positive real numbers satisfying t 2
i
We calculate {t ij } first. Recall that {v i } N i=0 =1
2N +1 , . . . ,
N +i
(2N +1)(N −i+1) , . . . and h i+1,k = δ i,k to (35), and
making {t ij } symmetric gives us
OGM is the solution to the following problem due to the strong duality.are
equivalent.
Proof of Theorem 4. H OGM is the solution of (46) since (OGM) is A ⋆ -optimal with respect to L F and (P1). Addition-
ally, if
(46) = maximize
H,hi+1,i =0
P 1 (L F , H, R)
(60)
holds, H maximize
H,hi+1,i =0
maximize
u0,...,uN
L
2u N
R 2
Throughout this paper, we use the convention of denoting iterates of a given "primal" FSFOM as x k while denoting the iterates of the H-dual FSFOM as y k .
In this paper, we avoid analytical and measure-theoretic details and focus on convergence (rather than existence) results of the continuous-time dynamics.
Remark 2.To clarify, the quantifier [∀ ∇f (x 0 ), . . . , ∇f (x N )] in (C1) means ∇f (x 0 ), . . . , ∇f (x N ) can be any arbitrary vectors in R d . This is different from [∇f (x 0 ), . . . , ∇f (x N ) be gradient of some f : R d → R]. The same is true for (C2).
− λ ⊺ (S ′ ) −1 λ = 0, which is the Schur complement of the matrix S. By a well-known lemma on the Schur complement, we conclude S 0.
Originally, the inequality set suggested that (OBL-G ♭ ) would A ⋆ -optimal is which (f (yN ) ≥ f⋆) is replaced withf (yN ) ≥ f⋆ + 1 2L ∇f (yN ) 2 in L G ′ .
Cholesky factorization is unique if G > 0 but it may be not unique if G 0. In this case, we choose one representation of Cholesky factorization.
In fact, when $\alpha = 1$, the FSFOM becomes FISTA-G [29].
AcknowledgmentsWe thank Soonwon Choi, G. Bruno De Luca, and Eva Silverstein for sharing their insights as physicists. We thank Jaewook J. Suh for reviewing the manuscript and providing valuable feedback.Coefficients of e i+1 are coincides sinceNow assume t j t j+1 = c j,i T j + c j−1,i t j + · · · + c i+1,i t j c j+1,i T j+1 + c j,i t j+1 + · · · + c i+1,i t j+1 .By multiplying the above equation recursively, we obtainTo prove (100), expand and obtainAbove equation holds due to the recursive formula of C (97).D.4 Instances of SFG familyDerivation of (SFG) Here, we will show that (SFG) is the instance of SFG family, under the choice ofFirst, t i = i+2 2 for i ≥ 1 and t 0 = 3 2 . Also (96) holds sincePlugging above values into(92)and(97), we obtainand48Other terms come directly, andand for k > 0,where T −1 = 2a−1 α 2 . By using Proposition 1, we obtain H-dual.Since all equality holds at (96), the above FSFOM achieves the fastest rate among the SFG family under fixed α.7Now we optimize a to achieve the fastest convergence rate. Combine (93) and the result of Proposition 5 to obtainTo achieve the tightest convergence guarantee, we solve the following optimization problem under the fixed N .Denote the solution of the above optimization problem as R(α, N ). Since R(α, N ) depends on a and cannot achieve a closed-form solution, we numerically choose a and observe an asymptotic behavior of R(α, N ). By choosing α = 3.8, asymptotic rate of R(3.8, N ) is about 46 N 2 .
From Nesterov's estimate sequence to Riemannian acceleration. K Ahn, S Sra, COLTK. Ahn and S. Sra. From Nesterov's estimate sequence to Riemannian acceleration. COLT, 2020.
Foundations of gauge and perspective duality. A Y Aravkin, J V Burke, D Drusvyatskiy, M P Friedlander, K J Macphee, SIAM Journal on Optimization. 283A. Y. Aravkin, J. V. Burke, D. Drusvyatskiy, M. P. Friedlander, and K. J. MacPhee. Foundations of gauge and perspective duality. SIAM Journal on Optimization, 28(3):2406-2434, 2018.
A general duality principle for the sum of two operators. H Attouch, M Théra, Journal of Convex Analysis. 3H. Attouch and M. Théra. A general duality principle for the sum of two operators. Journal of Convex Analysis, 3:1-24, 01 1996.
Optimal rate of convergence of an ODE associated to the fast gradient descent schemes for b > 0. HAL Archives Ouvertes. J.-F Aujol, C , J.-F. Aujol and C. Dossal. Optimal rate of convergence of an ODE associated to the fast gradient descent schemes for b > 0. HAL Archives Ouvertes, 2017.
Rates of convergence of perturbed FISTAbased algorithms. HAL Archives Ouvertes. J.-F Aujol, C Dossal, G Fort, É Moulines, J.-F. Aujol, C. Dossal, G. Fort, and É. Moulines. Rates of convergence of perturbed FISTA- based algorithms. HAL Archives Ouvertes, 2019.
Optimal convergence rates for Nesterov acceleration. J.-F Aujol, C Dossal, A Rondepierre, SIAM Journal on Optimization. 294J.-F. Aujol, C. Dossal, and A. Rondepierre. Optimal convergence rates for Nesterov accelera- tion. SIAM Journal on Optimization, 29(4):3131-3153, 2019.
Interior gradient and proximal methods for convex and conic optimization. A Auslender, M Teboulle, SIAM Journal on Optimization. 163A. Auslender and M. Teboulle. Interior gradient and proximal methods for convex and conic optimization. SIAM Journal on Optimization, 16(3):697-725, 2006.
Estimate sequence methods: extensions and approximations. M Baes, M. Baes. Estimate sequence methods: extensions and approximations. 2009.
Potential-function proofs for gradient methods. N Bansal, A Gupta, Theory of Computing. 154N. Bansal and A. Gupta. Potential-function proofs for gradient methods. Theory of Computing, 15(4):1-32, 2019.
A fast iterative shrinkage-thresholding algorithm for linear inverse problems. A Beck, M Teboulle, SIAM Journal on Imaging Sciences. 21A. Beck and M. Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
S Boyd, L Vandenberghe, Convex optimization. Cambridge university pressS. Boyd and L. Vandenberghe. Convex optimization. Cambridge university press, 2004.
On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. R E Bruck, Journal of Mathematical Analysis and Applications. 611R. E. Bruck. On the weak convergence of an ergodic iteration for the solution of variational inequalities for monotone operators in Hilbert space. Journal of Mathematical Analysis and Applications, 61(1):159-164, 1977.
Signal recovery by proximal forward-backward splitting. P L Combettes, V R Wajs, Multiscale Modeling and Simulation. 44P. L. Combettes and V. R. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling and Simulation, 4(4):1168-1200, 2005.
Symmetric dual nonlinear programs. G B Dantzig, E Eisenberg, R W Cottle, Pacific Journal of Mathematics. 153G. B. Dantzig, E. Eisenberg, and R. W. Cottle. Symmetric dual nonlinear programs. Pacific Journal of Mathematics, 15(3):809-812, 1965.
An iterative thresholding algorithm for linear inverse problems with a sparsity constraint. I Daubechies, M Defrise, C De Mol, Communications on Pure and Applied Mathematics. 5711I. Daubechies, M. Defrise, and C. De Mol. An iterative thresholding algorithm for linear in- verse problems with a sparsity constraint. Communications on Pure and Applied Mathematics, 57(11):1413-1457, 2004.
Potential function-based framework for making the gradients small in convex and min-max optimization. J Diakonikolas, P Wang, SIAM Journal on Optimization. 323J. Diakonikolas and P. Wang. Potential function-based framework for making the gradients small in convex and min-max optimization. SIAM Journal on Optimization, 32(3):1668-1697, 2021.
The exact information-based complexity of smooth convex minimization. Y Drori, Journal of Complexity. 39Y. Drori. The exact information-based complexity of smooth convex minimization. Journal of Complexity, 39:1-16, 2017.
Performance of first-order methods for smooth convex minimization: a novel approach. Y Drori, M Teboulle, Mathematical Programming. 1451Y. Drori and M. Teboulle. Performance of first-order methods for smooth convex minimization: a novel approach. Mathematical Programming, 145(1):451-482, 2014.
On conjugate convex functions. W Fenchel, Canadian Journal of Mathematics. 11W. Fenchel. On conjugate convex functions. Canadian Journal of Mathematics, 1(1):73-77, 1949.
Dual gauge programs, with applications to quadratic programming and the minimum-norm problem. R M Freund, Mathematical Programming. 381R. M. Freund. Dual gauge programs, with applications to quadratic programming and the minimum-norm problem. Mathematical Programming, 38(1):47-67, 1987.
Gauge optimization and duality. M P Friedlander, I Macê, T K Pong, SIAM Journal on Optimization. 244M. P. Friedlander, I. Macê do, and T. K. Pong. Gauge optimization and duality. SIAM Journal on Optimization, 24(4):1999-2022, 2014.
B Grimmer, arXiv:2104.11185Radial duality part II: Applications and algorithms. B. Grimmer. Radial duality part II: Applications and algorithms. arXiv:2104.11185, 2021.
Radial duality part I: Foundations. B Grimmer, arXiv:2104.11179B. Grimmer. Radial duality part I: Foundations. arXiv:2104.11179, 2022.
Optimized first-order methods for smooth convex minimization. D Kim, J A Fessler, Mathematical Programming. 1591D. Kim and J. A. Fessler. Optimized first-order methods for smooth convex minimization. Mathematical Programming, 159(1):81-107, 2016.
Another look at the fast iterative shrinkage/thresholding algorithm (FISTA). D Kim, J A Fessler, SIAM Journal on Optimization. 281D. Kim and J. A. Fessler. Another look at the fast iterative shrinkage/thresholding algorithm (FISTA). SIAM Journal on Optimization, 28(1):223-250, 2018.
Generalizing the optimized gradient method for smooth convex minimization. D Kim, J A Fessler, SIAM Journal on Optimization. 282D. Kim and J. A. Fessler. Generalizing the optimized gradient method for smooth convex minimization. SIAM Journal on Optimization, 28(2):1920-1950, 2018.
Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions. D Kim, J A Fessler, Journal of Optimization Theory and Applications. 1881D. Kim and J. A. Fessler. Optimizing the efficiency of first-order methods for decreasing the gradient of smooth convex functions. Journal of Optimization Theory and Applications, 188(1):192-219, 2021.
Unifying nesterov's accelerated gradient methods for convex and strongly convex objective functions: From continuous-time dynamics to discrete-time algorithms. J Kim, I Yang, arXiv:2301.03576J. Kim and I. Yang. Unifying nesterov's accelerated gradient methods for convex and strongly convex objective functions: From continuous-time dynamics to discrete-time algo- rithms. arXiv:2301.03576, 2023.
A geometric structure of acceleration and its role in making gradients small fast. J Lee, C Park, E K Ryu, NeurIPS. J. Lee, C. Park, and E. K. Ryu. A geometric structure of acceleration and its role in making gradients small fast. NeurIPS, 2021.
Revisit of estimate sequence for accelerated gradient methods. B Li, M Coutiño, G B Giannakis, ICASSP. B. Li, M. Coutiño, and G. B. Giannakis. Revisit of estimate sequence for accelerated gradient methods. ICASSP, 2020.
Minmax and duality in nonlinear programming. O L Mangasarian, J Ponstein, Journal of Mathematical Analysis and Applications. 11O. L. Mangasarian and J. Ponstein. Minmax and duality in nonlinear programming. Journal of Mathematical Analysis and Applications, 11:504-518, 1965.
Optimization II: Numerical methods for nonlinear continuous optimization. A S Nemirovski, A. S. Nemirovski. Optimization II: Numerical methods for nonlinear continuous optimization., 1999.
On optimality of Krylov's information when solving linear operator equations. A S Nemirovsky, Journal of Complexity. 72A. S. Nemirovsky. On optimality of Krylov's information when solving linear operator equa- tions. Journal of Complexity, 7(2):121-130, 1991.
Information-based complexity of linear operator equations. A S Nemirovsky, Journal of Complexity. 82A. S. Nemirovsky. Information-based complexity of linear operator equations. Journal of Complexity, 8(2):153-175, 1992.
A method for unconstrained convex minimization problem with the rate of convergence O(1/k 2 ). Y Nesterov, Proceedings of the USSR Academy of Sciences. the USSR Academy of Sciences269Y. Nesterov. A method for unconstrained convex minimization problem with the rate of con- vergence O(1/k 2 ). Proceedings of the USSR Academy of Sciences, 269:543-547, 1983.
Smooth minimization of non-smooth functions. Y Nesterov, Mathematical Programming. 1031Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.
Accelerating the cubic regularization of Newton's method on convex problems. Y Nesterov, Mathematical Programming. 1121Y. Nesterov. Accelerating the cubic regularization of Newton's method on convex problems. Mathematical Programming, 112(1):159-181, 2008.
Efficiency of coordinate descent methods on huge-scale optimization problems. Y Nesterov, SIAM Journal on Optimization. 222Y. Nesterov. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.
How to make the gradients small. Y Nesterov, Optima. Mathematical Optimization Society Newsletter. 88Y. Nesterov. How to make the gradients small. Optima. Mathematical Optimization Society Newsletter, (88):10-11, 2012.
Lectures on Convex Optimization. Y Nesterov, Springer2nd editionY. Nesterov. Lectures on Convex Optimization. Springer, 2nd edition, 2018.
Factor-√ 2 acceleration of accelerated gradient methods. C Park, J Park, E K Ryu, C. Park, J. Park, and E. K. Ryu. Factor- √ 2 acceleration of accelerated gradient methods.
. Applied Mathematics and Optimization. Applied Mathematics and Optimization, 2023.
Optimal first-order algorithms as a function of inequalities. C Park, E K Ryu, arXiv:2110.11035C. Park and E. K. Ryu. Optimal first-order algorithms as a function of inequalities. arXiv:2110.11035, 2021.
Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. G B Passty, Journal of Mathematical Analysis and Applications. 722G. B. Passty. Ergodic convergence to a zero of the sum of monotone operators in Hilbert space. Journal of Mathematical Analysis and Applications, 72(2):383-390, 1979.
R T Rockafellar, Convex Analysis. Princeton university pressR. T. Rockafellar. Convex Analysis. Princeton university press, 1970.
Conjugate Duality and Optimization. R T Rockafellar, CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics. R. T. Rockafellar. Conjugate Duality and Optimization. CBMS-NSF Regional Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics, 1974.
Large-Scale Convex Optimization. E K Ryu, W Yin, Cambrige University PressE. K. Ryu and W. Yin. Large-Scale Convex Optimization. Cambrige University Press, 2022.
Duality in nonlinear programming and the minimax theorem. J Stoer, Numerische Mathematik. 51J. Stoer. Duality in nonlinear programming and the minimax theorem. Numerische Mathematik, 5(1):371-379, 1963.
A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. W Su, S Boyd, E Candes, NeurIPSW. Su, S. Boyd, and E. Candes. A differential equation for modeling Nesterov's accelerated gradient method: Theory and insights. NeurIPS, 2014.
Continuous-time analysis of accelerated gradient methods via conservation laws in dilated coordinate systems. J J Suh, G Roh, E K Ryu, 2022J. J. Suh, G. Roh, and E. K. Ryu. Continuous-time analysis of accelerated gradient methods via conservation laws in dilated coordinate systems. ICML, 2022.
Convex interpolation and performance estimation of first-order methods for convex optimization. A B Taylor, A. B. Taylor. Convex interpolation and performance estimation of first-order methods for convex optimization. 2017.
Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions. A B Taylor, F Bach, COLTA. B. Taylor and F. Bach. Stochastic first-order methods: non-asymptotic and computer-aided analyses via potential functions. COLT, 2019.
Exact worst-case performance of first-order methods for composite convex optimization. A B Taylor, J M Hendrickx, F Glineur, SIAM Journal on Optimization. 273A. B. Taylor, J. M. Hendrickx, and F. Glineur. Exact worst-case performance of first-order methods for composite convex optimization. SIAM Journal on Optimization, 27(3):1283-1313, 2017.
Smooth strongly convex interpolation and exact worst-case performance of first-order methods. A B Taylor, J M Hendrickx, F Glineur, Mathematical Programming. A. B. Taylor, J. M. Hendrickx, and F. Glineur. Smooth strongly convex interpolation and exact worst-case performance of first-order methods. Mathematical Programming, 161(1-2):307- 345, 2017.
A variational perspective on accelerated methods in optimization. A Wibisono, A C Wilson, M I Jordan, Proceedings of the National Academy of Sciences of the United States of America. the National Academy of Sciences of the United States of America113A. Wibisono, A. C. Wilson, and M. I. Jordan. A variational perspective on accelerated methods in optimization. Proceedings of the National Academy of Sciences of the United States of America, 113(47):E7351--E7358, 2016.
A duality theorem for non-linear programming. P Wolfe, Quarterly of applied mathematics. 193P. Wolfe. A duality theorem for non-linear programming. Quarterly of applied mathematics, 19(3):239-244, 1961.
Duality of optimization problems with gauge functions. Optimization. S Yamanaka, N Yamashita, S. Yamanaka and N. Yamashita. Duality of optimization problems with gauge functions. Op- timization, pages 1-31, 2022.
S Zhao, L Lessard, M Udell, arXiv:2105.04684An automatic system to detect equivalence between iterative algorithms. S. Zhao, L. Lessard, and M. Udell. An automatic system to detect equivalence between iterative algorithms. arXiv:2105.04684, 2021.
Practical schemes for finding near-stationary points of convex finite-sums. AISTATS. K Zhou, L Tian, A M C So, J Cheng, K. Zhou, L. Tian, A. M.-C. So, and J. Cheng. Practical schemes for finding near-stationary points of convex finite-sums. AISTATS, 2022.
On the Advantages of Asynchrony in the Unsourced MAC
Alexander Fengler
Alejandro Lancho
Krishna Narayanan
Yury Polyanskiy
Index Terms-Multiple-Access, Low-density parity check (LDPC), Unsourced, massive machine-type communication
In this work we demonstrate how a lack of synchronization can in fact be advantageous in the problem of random access. Specifically, we consider a multiple-access problem over a frame-asynchronous 2-user binary-input adder channel in the unsourced setup (2-UBAC). Previous work has shown that under perfect synchronization the per-user rates achievable with linear codes over the 2-UBAC are limited by 0.5 bit per channel use (compared to the capacity of 0.75). In this paper, we first demonstrate that arbitrary small (even single-bit) shift between the user's frames enables (random) linear codes to attain full capacity of 0.75 bit/user. Furthermore, we derive density evolution equations for irregular LDPC codes, and prove (via concentration arguments) that they correctly track the asymptotic bit-error rate of a BP decoder. Optimizing the degree distributions we construct LDPC codes achieving per-user rates of 0.73 bit per channel use.
I. INTRODUCTION
A recent line of work, termed unsourced random access (URA or UMAC), exploits the idea of same-codebook communication [1]. This approach allows to separate the different messages in a multiple-access channel (MAC) based purely on the structure of the codebook, i.e., the set of allowed messages. It was shown that good unsourced code designs can approach the capacity of the additive white Gaussian noise (AWGN) adder channel without the need for coordination [1], [2]. While many unsourced code constructions have been proposed [2]- [8], most of them lack analytic understanding and it is not well understood what properties make a good unsourced codebook. Furthermore, many proposed schemes have a high decoding complexity. Recent works [9], [10] have constructed LDPC codes specifically for two-user communication on the unsourced binary input adder channel (UBAC). It was found that linear codes in general suffer a rate loss in the UBAC and cannot achieve sum rates higher than 1 bit/channel use, which is still far from the sum-rate capacity of 1.5 bits/channel use.
Another concern for the practical applicability of unsourced codes is the assumption of perfect synchronization, present in many works. In low-power low-cost transmitters perfect synchronization is hard to achieve. Classic results [11] show that frame-asynchrony does not change the capacity of a discrete MAC, as long as the allowed delay is smaller than the blocklength. Recent solutions for uncoordinated multipleaccess schemes that can deal with asynchronism were proposed in [12], [13]. Both of these works present schemes specifically for orthogonal frequency-division multiplexing (OFDM) modulation with timing offsets within the cyclic prefix. Such timing offsets can be efficiently handled in the frequency domain. Nonetheless, OFDM is not necessarily the best choice for the mMTC scenario since it requires a high level of frequency synchronization, which is hard to achieve with low-cost transmitters.
In this work, we first show that random linear codes achieve the BAC capacity of 1.5 bits/ch. use as soon as a frame delay of at least one symbol is introduced. As such, it enables same-codebook communication with linear codes and linear decoding complexity that does not suffer from the rate 1 bottleneck, which limits unsourced linear codes in the frame-synchronous case. Although the channel model is idealistic, it is also quite general and does not rely on any specific modulation method. Further, we design LDPC codes with linear decoding complexity for the two-user frameasynchronous UBAC. We find codes that achieve sum-rates of 1.46 bits/ch. use. The decoding can be done by two copies of a conventional single-user belief propagation (BP) decoder that periodically exchange information. We also show that our design works if the delay is a random integer with a maximum value that scales at most sub-linearly with the blocklength.
Randomized LDPC code designs for the two-user multipleaccess channel with AWGN have been presented in [14], [15]. For the code construction presented in [15] it is crucial that the two code ensembles are optimized independently, resulting in two different ensembles. If one check node (CN) distribution is fixed, the CN distribution of the other user can be optimized by a linear program. In [14], one common code ensemble is designed, but the two users pick a different random code from the same ensemble. In addition, to obtain a linear optimization program, the codes in [14] are constrained such that variable nodes (VNs) that are connected through the MAC have the same degree. Such a constraint would be hard to enforce in a model with random delay. In contrast, in this work we design one LDPC ensemble from which one code is chosen at random and used by both users. The design of the ensemble relies on alternating optimization of CN and VN degree distributions. Surprisingly, we find that degree one VNs do not result in error floors, in contrast to LDPC codes for the single-user binaryerasure channel (BEC). A particular difficulty in proving the density evolution (DE) in the joint graph is that the channel transition probabilities for one user depends on the transmitted codeword of the other user. Since the codewords come from the same codebook the channel outputs may be correlated. To that end we employ the symmetrization technique of coset ensembles, cf. [16], although an additional subtlety in our case is that we need to show that both users can use the same coset. Thus, our design strictly adheres to the unsourced paradigm where both users use a common codebook. The symmetrization allows us to prove that DE describes the asymptotic biterror rate (BER) and, furthermore, that it is independent of the transmitted codewords. This implies that we can assume that both users transmit the all-zero codeword plus a dither when analyzing the error probability. We provide a full proof that the asymptotic error probability is described by the DE and give an analysis of the probability of short-length stopping sets, which result in an error floor. The error floor analysis shows that we can expurgate short-length stopping sets created by the MAC nodes as long as the fraction of degree one VNs is below a certain threshold. Numerical simulations confirm that DE accurately predicts the error probability for large blocklengths. We use the DE to construct codes that approach the capacity of the two-user BAC. Our work shows that frame-asynchrony can be exploited to design efficient linear unsourced codes.
To summarize, our main intellectual contributions in this paper are:
• A random coding argument that shows that linear codes can achieve the full BAC capacity with a single symbol delay.
• The derivation of the DE equations under the samecodebook constraint and sub-linear frame delays.
• A rigorous proof that the BER of a random code from the ensemble will concentrate around the DE.
• The design of a codebook that enables two-user communication at rates close to the Shannon limit.
These findings imply that a non-zero frame delay enables two users to use the same LDPC encoder while still achieving rates close to the two-user BAC capacity. In addition, decoding can be done with linear complexity and a simplified decoder architecture that consists of two connected copies of the same single-user BP decoder.
II. CHANNEL MODEL
We study the frame-asynchronous noiseless BAC:
y i = c 1,i + c 2,i−τ(1)
where τ ∈ [0 : τ_max] and c_{u,i} ∈ {+1, −1} for u ∈ {1, 2}, i ∈ [1 : n], and c_{u,i} = 0 for i < 1 or i > n. More specifically, each user transmits a binary-phase-shift keying (BPSK) modulated version of a binary codeword, c_u = 2m_u − 1 with m_u ∈ {0, 1}^n. We will analyze the case where τ is random and uniformly distributed. Furthermore, we will study the asymptotic behavior of code constructions when τ_max ∈ o(n), i.e., τ_max/n → 0 as n → ∞. This setting is also known as mild asynchrony in information theory [17]. Both users transmit a uniform i.i.d. sequence of nR bits, b_1, b_2, by picking the respective binary codewords m_1, m_2 independently and uniformly at random from a common codebook over the binary field, C ∈ F_2^{n×2^{nR}}, where n denotes the blocklength and 0 < R < 1 the per-user rate. The decoder outputs a list of two messages g(y), and the per-user error probability is defined as P_e = (1/2)(P(b_1 ∉ g(y)) + P(b_2 ∉ g(y))).
Since the model includes no noise, the channel model reduces to an erasure channel where a received symbol can be considered as erased if (c 1,i , c 2,i−τ ) ∈ {(+1, −1), (−1, +1)}.
Remark 1: The coding construction in this paper also works for the synchronous model if users employ a randomly chosen cyclic shift of their codeword before transmission. However, in this case some mechanism needs to be added that allows to recover the shift of each user, e.g., adding a preamble to each codeword. For the model (1) this is not necessary since τ can be found easily from amplitude information in y.
Remark 2: The BAC model can also be used to model on-off keying modulation. In that case, there is some ambiguity since there is no dedicated idle symbol. Nonetheless, it is still possible to detect the start of a frame by introducing a preamble.
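As a quick illustration of the model (1) and of the erasure interpretation above, the following Python sketch generates the channel output for two BPSK codewords under a frame delay τ. It is a sketch of ours; the random bit strings and the parameter values are placeholders, not an actual code from this paper.

import numpy as np

rng = np.random.default_rng(0)

def ubac_output(c1, c2, tau):
    """Frame-asynchronous binary adder channel (1): y_i = c1_i + c2_{i-tau},
    with symbols outside [1, n] treated as 0."""
    n = len(c1)
    y = np.zeros(n + tau)
    y[:n] += c1
    y[tau:tau + n] += c2
    return y

n, tau = 12, 2
m1, m2 = rng.integers(0, 2, n), rng.integers(0, 2, n)   # binary codewords
c1, c2 = 2 * m1 - 1, 2 * m2 - 1                         # BPSK symbols
y = ubac_output(c1, c2, tau)
# A position is "erased" exactly when y_i = 0, i.e. when the two users
# transmit opposite symbols in the overlap region.
print(y)
print("erased positions:", np.flatnonzero(y == 0))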
III. RANDOM LINEAR CODES
We give the following result, which shows that random linear codes can achieve the two-user BAC capacity if a frame delay of just one symbol is introduced.
Theorem 1: There exist linear (n, k) codes for the two-user frame-asynchronous UBAC with τ = 1 and
P_e ≤ \frac{n-1}{2}\, 2^{\,n(2R-1.5)} + o_n(1). (2)
Proof: The proof is given in Appendix A. Theorem 1 shows that random linear codes can achieve a vanishing error probability if R < 0.75 − δ for any δ > 0. It can be shown for both parity check and generator ensembles. We briefly describe the intuition behind the proof for parity check ensembles and why τ > 0 is strictly necessary to get rates larger than 0.5. The idea is to treat the channel as erasure channel, as described in Section II. The erased symbols can, in principle, be recovered by solving the parity check equations Hm 1 = 0 and Hm 2 = 0. A key property of the BAC is that on the erased set the codewords from the two user have opposed bits, i.e. c 1,i = −c 2,i−τ . This gives a second collection of parity equations for each codeword. For τ = 0 the additional parity check equations would be linearly dependent, and provide no new information. In that case, since the size of the erased set is around n/2, the parity check matrix needs to have n/2 + δ linearly independent rows for correct recovery, resulting in R < 1/2. In contrast, for τ = 1 we show in Appendix A that the collection of parity check equations arising from c 1,i = −c 2,i−τ for i ∈ E is linearly independent from the set of equations given by Hm 1 = Hm 2 = 0 with high probability. Therefore n/4 + δ linearly independent equations for each user, resulting in a total of n/2 + 2δ linearly independent equations for each codeword, will be enough to ensure correct decoding, allowing for R < 3/4. In the following we will construct LDPC codes that approach this limit with linear decoding complexity.
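The rank argument sketched above is easy to probe numerically. The following Python sketch (ours, not the authors' code) stacks the two sets of parity-check constraints into H̃ = [H 0; 0 H], restricts it to the erased columns, and reports the rank over F_2; random bit strings stand in for actual codewords, since only the erasure pattern and the randomness of H matter for this check.

import numpy as np

def gf2_rank(A):
    """Rank of a binary matrix over F_2 via Gaussian elimination."""
    A = (A.copy() % 2).astype(np.int8)
    rank = 0
    for c in range(A.shape[1]):
        piv = np.flatnonzero(A[rank:, c])
        if piv.size == 0:
            continue
        p = piv[0] + rank
        A[[rank, p]] = A[[p, rank]]
        rows = np.flatnonzero(A[:, c])
        rows = rows[rows != rank]
        A[rows] ^= A[rank]
        rank += 1
        if rank == A.shape[0]:
            break
    return rank

rng = np.random.default_rng(0)
n, R, tau = 200, 0.7, 1
n_pc = int(np.ceil(n * (1 - R)))                  # n - k parity checks
H = rng.integers(0, 2, (n_pc, n), dtype=np.int8)

# erasure pattern: overlap positions where the two users' bits disagree
m1, m2 = rng.integers(0, 2, n), rng.integers(0, 2, n)
E = [i for i in range(tau, n) if m1[i] != m2[i - tau]]

# stacked matrix H~ = [H 0_tau; 0_tau H], restricted to the erased columns
Ht = np.zeros((2 * n_pc, n + tau), dtype=np.int8)
Ht[:n_pc, :n] = H
Ht[n_pc:, tau:] = H
print(f"|E| = {len(E)}, rank over F2 of H~_E = {gf2_rank(Ht[:, E])} "
      f"(full column rank <=> unique recovery of the erased bits)")

For R = 0.7 the stacked system has roughly 0.6n equations for about 0.5n erased unknowns, so the restricted matrix is full column rank with high probability, in line with the R < 3/4 threshold discussed above.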
IV. LDPC CODE DESIGN
A. LDPC Code Ensembles

LDPC codes are defined by a bipartite graph where the transmitted bits are represented by VNs, which are subject to local parity checks, represented by CNs. We study random codes that are drawn uniformly at random from a given ensemble, defined by the degree distributions of VNs and CNs. Specifically, a random graph code from the ensemble is created by first assigning degrees to VNs and CNs proportional to some degree distributions. Then the emanating stubs (half-edges) of VNs and CNs are connected through a uniform random permutation (multi-edges are not explicitly forbidden). Finally, the VNs are also permuted uniformly at random. We would like to emphasize that it is important for our construction that the ensemble definition includes a random permutation of the VNs. For memoryless single-user channels this is usually not necessary since the error probability is invariant under permutation of VNs, and some works do not mention it for this reason, e.g., [18]. However, in the multiple-access case, correlations between VN degrees of neighboring nodes may introduce unwanted correlations in the joint graph.
Let L_i denote the fraction of nodes with degree i, λ_i the fraction of edges that connect to degree-i VNs, and ρ_i the fraction of edges that connect to degree-i CNs. We also define the corresponding power series L(x) := Σ_i L_i x^i, λ(x) := Σ_i λ_i x^{i−1}, and ρ(x) := Σ_i ρ_i x^{i−1}, and we denote the corresponding ensemble as LDPC(λ, ρ).
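A minimal sketch of the sampling procedure just described (degree assignment, a uniform matching of the half-edges, and a random permutation of the VNs) is given below. It is an illustration of ours; in particular, the rounding of degree fractions to integer node counts is a simplification, and the toy degree distributions are not the ones of this paper.

import numpy as np

def sample_ldpc_graph(n, L, rho, rng):
    """Samples a Tanner graph from LDPC(lambda, rho).

    n   : number of variable nodes
    L   : dict degree -> node fraction L_i (VN perspective)
    rho : dict degree -> edge fraction rho_i (CN perspective)
    Returns (vn_of_edge, cn_of_edge): for each edge, its VN and CN index.
    """
    # assign VN degrees proportional to L (simplified rounding)
    vn_deg = np.concatenate([[d] * int(round(f * n)) for d, f in L.items()])
    vn_deg = rng.permutation(vn_deg)[:n]           # random VN permutation
    E = int(vn_deg.sum())                          # number of edge sockets
    # assign CN degrees so that the edge fractions match rho
    cn_deg = np.concatenate(
        [[d] * int(round(f * E / d)) for d, f in rho.items()])
    vn_of_edge = np.repeat(np.arange(n), vn_deg)
    cn_of_edge = np.repeat(np.arange(len(cn_deg)), cn_deg)[:E]
    return vn_of_edge, rng.permutation(cn_of_edge)  # uniform socket matching

rng = np.random.default_rng(1)
vn_e, cn_e = sample_ldpc_graph(1000, {2: 0.5, 3: 0.5}, {6: 1.0}, rng)
print(len(vn_e), "edges,", cn_e.max() + 1, "check nodes")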
B. Message Passing Decoding
We study the bit-error probability under BP decoding on the joint graph. The values of the VNs (v_{1,i}, v_{2,i}) are initialized with their known values if y_i ≠ 0 and with the erased symbol if y_i = 0. BP decoding on the joint graph can be realized by running two conventional single-user BP decoders on (y_1, ..., y_n) and (y_{1+τ}, ..., y_{n+τ}), respectively, and exchanging information between them on (y_{1+τ}, ..., y_n). The information exchange is particularly simple for the BAC since c_{1,i} fully defines c_{2,i−τ} given y_i. We denote the function nodes that enforce the channel constraint (1) as MAC nodes. An example of a joint graph is depicted in Fig. 1, where triangles depict MAC nodes, squares are CNs, and circles are VNs. The single-user decoder can be run for multiple iterations before information exchange. Nonetheless, in this paper we only study the case where each iteration of the single-user decoders is followed by a message exchange through the MAC nodes. This decoder has O(n) complexity.
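The decoder described above can be pictured with the following peeling-style sketch of ours for the noiseless channel: each user runs erasure decoding on its own copy of the graph, and after every iteration the MAC nodes copy newly resolved values to the other user (with the bit flipped on erased positions). It assumes the code is given as a list of check-node neighborhoods and operates directly on hard bits, which is equivalent to BP on this erasure-like channel; it is not the authors' implementation.

import numpy as np

def peel(checks, m):
    """One pass of single-user erasure peeling.  checks: list of VN-index
    lists, one per CN.  m: bits in {0,1}, or -1 for an erased position."""
    progress = False
    for vs in checks:
        unknown = [v for v in vs if m[v] < 0]
        if len(unknown) == 1:
            known = [int(m[v]) for v in vs if m[v] >= 0]
            m[unknown[0]] = int(np.bitwise_xor.reduce(known)) if known else 0
            progress = True
    return progress

def joint_decode(checks, y, tau, n, max_iter=100):
    """Peeling-style joint decoder for the noiseless 2-UBAC; both users use
    the same parity-check structure.  Returns (m1, m2) with -1 marking bits
    that remain erased."""
    m1, m2 = np.full(n, -1), np.full(n, -1)
    for i in range(n + tau):                      # channel initialization
        if y[i] == 0:
            continue                              # erased overlap position
        bit = int(y[i] > 0)
        if i < n:
            m1[i] = bit
        if 0 <= i - tau < n:
            m2[i - tau] = bit
    for _ in range(max_iter):
        progress = peel(checks, m1) | peel(checks, m2)
        for i in range(tau, n):                   # MAC-node exchange
            if y[i] == 0:                         # here m1[i] = 1 - m2[i - tau]
                if m1[i] >= 0 and m2[i - tau] < 0:
                    m2[i - tau] = 1 - m1[i]
                    progress = True
                elif m2[i - tau] >= 0 and m1[i] < 0:
                    m1[i] = 1 - m2[i - tau]
                    progress = True
        if not progress:
            break
    return m1, m2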
C. Coset Codes
To simplify the analysis, we consider the ensemble of cosets of LDPC codes, where each code in this ensemble is specified by a graph G and a 'dither' vector d̄ ∈ {0, 1}^n with its BPSK representation d ∈ {±1}^n. The ensemble is then specified by a degree distribution pair (λ(x), ρ(x)) and the dither vector. We consider the ensemble generated by randomly choosing VN and CN degrees according to the distribution pair λ(x), ρ(x), followed by a random permutation between the left sockets and right sockets, and by choosing d̄ uniformly from {0, 1}^n. Let C_{G,d̄} denote the coset code corresponding to a given G and d̄. Let G and H denote the generator matrix and parity check matrix of the LDPC code, respectively, with a given G and d̄ = 0. Then, m̄ ∈ C_{G,d̄} if and only if Hm̄ = Hd̄.
At the encoders, the bit sequences b 1 and b 2 are encoded into codewords m 1 and m 2 , respectively, according to
m u = Gb u +d, u ∈ {1, 2}.(3)
Note that both users share the same dither d̄. Since the BPSK mapping is one-to-one, we can also express the addition of the dither as multiplication of c_1, c_2 with d, resulting in the channel output
y i = c 1,i d i + c 2,i−τ d i−τ .
Since d is chosen as part of the code design, it is known at the receiver and its effect can be easily incorporated into the message passing rules. The analysis in Section V will show that a randomly chosen dither will be good for any code and all codeword combinations with probability approaching 1 as n → ∞.
Remark 3: Note that the constructed LDPC codes are not strictly linear but affine. Nonetheless, they can be encoded with a linear encoder followed by a common offset. Besides, numerical results suggest that the error probabilities stay unchanged when no dithering is used. As such, the dither is mainly used as an analytic tool here.
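For completeness, the coset encoding rule (3) is shown in the short sketch below. The generator matrix here is a placeholder random binary matrix rather than an actual LDPC generator, so this only illustrates the shared-dither mechanics.

import numpy as np

rng = np.random.default_rng(1)

def coset_encode(G, b, dither):
    """Coset encoding (3): m = G b + d-bar over F_2; both users reuse the
    same generator matrix and the same dither."""
    return (G @ b + dither) % 2

k, n = 4, 8                                  # toy sizes (placeholders)
G = rng.integers(0, 2, (n, k))               # placeholder generator matrix
dither = rng.integers(0, 2, n)
b1, b2 = rng.integers(0, 2, k), rng.integers(0, 2, k)
m1, m2 = coset_encode(G, b1, dither), coset_encode(G, b2, dither)
c1, c2 = 2 * m1 - 1, 2 * m2 - 1              # BPSK symbols as in Section II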
V. DENSITY EVOLUTION ANALYSIS
We next track the fraction of erased edges through the iterations averaged over the code and dither ensemble as n → ∞. Let x l be the probability that a message from a variable node to a check node is erased, y l the probability that a message from a check node to a variable node is erased, w l the probability that a message from a variable node to a MAC node is erased, and z l the probability that a message from a MAC node to a variable node is erased. The subscript l refers to the l-th iteration. The passed messages are visualized in Fig. 2. Assuming that the depth l neighborhood of each node is a tree, we can derive a recursion for the evolution of the above parameters as follows. Begin with initial conditions
y_0 = 1, x_0 = 1, z_0 = 1/2,
x_{l+1} = z_l λ(y_l), (4)
y_{l+1} = 1 − ρ(1 − x_{l+1}), (5)
w_{l+1} = L(y_{l+1}), (6)
z_{l+1} = (1/2) w_{l+1}. (7)
These equations are obtained by following the basic message passing rules. An edge from a degree i VN to a CN is erased if all incoming edges are erased. The VN has a total of i−1 incoming edges from other CNs which are independently erased with probability y l and one incoming edge from a MAC node which is erased with probability z l , resulting in an erasure probability z l y i−1 l . Averaging over all VN degrees gives the expression for x l+1 . The other equations are derived similarly. The factor 1/2 in z l+1 arises since the value of each MAC node is independently erased with probability 1/2. Note that this is only true because of the symmetrization by the dither.
By performing some standard substitutions, we end up with the following scalar recursion:
x l+1 = 1 2 L (1 − ρ (1 − x l )) λ (1 − ρ (1 − x l )) .(8)
Likewise, we can obtain the following recursion on y l :
y l+1 = 1 − ρ 1 − 1 2 L(y l )λ(y l ) .(9)
The probability that a bit remains erased at the end of iteration l + 1 is given by
p l+1 = z l L(y l+1 ),(10)
where (p l ) l=1,2,... is a deterministic sequence of numbers. Our main theorem below shows that the BER of a randomly chosen code with a random dither sequence after l decoding iterations concentrates tightly around p l . Let
P b (d, c, l) := P b (c, G, n, l, d, τ ) = 1 2n 2n i=1 E[1{v l i = }|G, d](11)
be the BER (fraction of erased VNs) at blocklength n after l iterations for a given code G ∈ LDPC(λ, ρ) and codeword pair
c = (c_1, c_2). Also let P̄_b(d, l) = \frac{1}{|\mathcal{C}|^2}\sum_{c} P_b(d, c, l)
denote the average BER. Then the following holds:
Theorem 2: As n → ∞, for any τ ∈ [1 : τ max ]
P_{G,d}\big(|P̄_b(d, l) − p_l| > λ\big) → 0 (12)
for any λ > 0. Proof: The proof is given in Appendix B.
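Before turning to the optimization, here is a short Python sketch (ours) of the DE recursion (4)-(7) and of the erased-bit probability (10). The regular (3,6) ensemble used as the example is only a toy input, not one of the optimized distributions of this paper; for it the recursion drives the erased fraction to zero, consistent with its per-user rate of 0.5 being well below the limit.

def density_evolution(L, lam, rho, max_iter=500, tol=1e-12):
    """Runs the DE recursion (4)-(7) and records p_{l+1} = z_l * L(y_{l+1})
    from (10).  L, lam, rho: dicts mapping degree -> coefficient of
    L(x), lambda(x), rho(x)."""
    Lx = lambda v: sum(c * v ** d for d, c in L.items())
    lamx = lambda v: sum(c * v ** (d - 1) for d, c in lam.items())
    rhox = lambda v: sum(c * v ** (d - 1) for d, c in rho.items())
    y, z = 1.0, 0.5
    ps = []
    for _ in range(max_iter):
        x = z * lamx(y)                # (4)
        y = 1 - rhox(1 - x)            # (5)
        ps.append(z * Lx(y))           # (10): p_{l+1} = z_l * L(y_{l+1})
        z = 0.5 * Lx(y)                # (6)-(7): z_{l+1} = L(y_{l+1}) / 2
        if ps[-1] < tol:
            break
    return ps

# toy example: the regular (3,6) ensemble, rate 1/2 per user (not Table I)
p = density_evolution(L={3: 1.0}, lam={3: 1.0}, rho={6: 1.0})
print(f"{len(p)} iterations, final erased-bit probability ~ {p[-1]:.2e}")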
VI. OPTIMIZATION
We can use the DE equations to optimize the degree distributions. Specifically, define
f_ρ(y) = y − 1 + \sum_{i=2}^{r_{max}} ρ_i \Big(1 − \tfrac{1}{2} L(y)λ(y)\Big)^{i−1} (13)
where r max is the maximal CN degree. For fixed λ, (13) is linear in ρ i and gives rise to the linear program:
\min_{\rho}\ \sum_i \frac{ρ_i}{i} \quad \text{s.t.}\quad ρ_i ≥ 0;\ \ \sum_i ρ_i = 1;\ \ f_ρ(y) > δ\ \ ∀ y ∈ (0, 1) (14)
where δ ≥ 0 is a slack variable. For fixed ρ, (8) results in an optimization problem with linear objective and quadratic constraints. Details on the quadratic program are given in Appendix C. Unfortunately, it can be shown that the constraints are not positive semidefinite. Therefore, the problem is not convex in general and a solver is not guaranteed to converge to the optimal solution. Nonetheless, we find that general purpose quadratic solvers lead to good results and we are able to empirically find degree distributions that achieve rates close to the BAC capacity by alternating optimization of ρ and λ. To find distributions which can be decoded in a reasonable amount of iterations and are robust to finite length fluctuations we follow [19, Sec. VII] and set the slack variable to δ = c/ √ n. The parameter c is set empirically. Higher c will result in lower rates but less required decoding iterations.
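A minimal sketch of the linear program (14) for the check-node side is given below, assuming the constraint f_ρ(y) ≥ δ is enforced on a finite grid of y-values (a standard discretization not spelled out in the text) and using scipy.optimize.linprog as a generic LP solver. The toy λ and L are placeholders, not the distributions of Table I.

import numpy as np
from scipy.optimize import linprog

def optimize_rho(L, lam, r_max, delta, grid=200):
    """Solves the linear program (14) for fixed lambda (and L) on a grid of
    y-values.  Returns {i: rho_i} or None if infeasible."""
    Lx = lambda y: sum(c * y ** d for d, c in L.items())
    lamx = lambda y: sum(c * y ** (d - 1) for d, c in lam.items())
    degs = np.arange(2, r_max + 1)
    ys = np.linspace(1e-3, 1 - 1e-3, grid)
    # f_rho(y) >= delta  <=>  -sum_i rho_i * a_i(y) <= y - 1 - delta
    A_ub = np.array([[-(1 - 0.5 * Lx(y) * lamx(y)) ** (i - 1) for i in degs]
                     for y in ys])
    b_ub = ys - 1 - delta
    res = linprog(c=1.0 / degs,                    # minimize sum_i rho_i / i
                  A_ub=A_ub, b_ub=b_ub,
                  A_eq=np.ones((1, len(degs))), b_eq=[1.0],
                  bounds=[(0, 1)] * len(degs))
    return dict(zip(degs.tolist(), res.x)) if res.success else None

# toy run with a fixed VN side (placeholder distributions)
rho = optimize_rho(L={2: 0.5, 3: 0.5}, lam={2: 0.4, 3: 0.6},
                   r_max=10, delta=0.01)
print(rho)

Minimizing Σ_i ρ_i/i maximizes the design rate for the fixed λ, so the solver pushes mass toward large check degrees until the DE constraints (13) become active.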
A. Error-Floor Analysis
In single-user LDPC ensemble constructions, degree one VNs are usually avoided because they prevent the BER (and the BLER) from going to zero. Indeed, when two degree one VNs connect to the same CN, they create a low-weight stopping set that cannot be recovered, even by an ML decoder. However, for the two-user frame-asynchronous case, under certain circumstances, the presence of degree one VNs does not prevent the BLER from going to zero as n → ∞. As we shall see, this implies that we can increase the rates in the finite-blocklength regime without introducing error floors by introducing a small fraction of degree one VNs.
In the joint graph, degree one VNs can be recovered through the MAC nodes, even if they connect to the same CN. In the following theorem we provide a bound on the probability that a randomly chosen graph with a fraction L_1 of degree one VNs has a 4K-sized stopping set consisting of just degree one VNs. The case K = 1 is depicted in Fig. 3.
Theorem 3: The probability that a random code from the ensemble LDPC(λ, ρ) results in a joint graph that has no stopping sets of size ≤ 4K created by just degree one VNs for all τ ∈ [1 : τ_max] can be bounded from below by
1 − τ max 2 K k=1 L 2 1 1 − R k 1 2k − O K n 2(15)
Proof: See Appendix D. The above theorem also implies the following result on the BLER.
Theorem 4: If L 1 is sufficiently small compared to τ max such that (15) is strictly larger than zero, there exists a constant fraction of codes in the ensemble with a vanishing BLER.
Proof: See Appendix D.
Theorem 4 shows that error floors can be avoided by resampling the code until one is found whose joint graph contains no stopping sets of size ≤ 4K for the desired range τ_max. This requires that τ_max is small compared to $(1-R)/L_1^2$.¹ Note that even if 4K-sized stopping sets of degree one VNs exist, they only result in bit-errors if all VNs in the set are erased (i.e., the users transmit different symbols), which happens with probability $2^{-4K}$. Therefore, it may not be necessary to expurgate these sets for large K, depending on the desired BLERs. Besides, fixed-length stopping sets result in a number of bit-errors which does not scale with n. As such, they could also be corrected by adding an outer code with rate approaching 1 as n → ∞. See also the discussion in [20].

VII. NUMERICAL RESULTS

Table I shows some degree distributions obtained using the optimization procedure given in Section VI. The slack variable δ was adjusted empirically to find codes that work with small blocklength and a reasonable number of required iterations. The erasure probability for Code 2 in Table I predicted from DE is shown in Fig. 4 together with some random decoding realizations with blocklength n = 5 · 10^4. The empirical block error rate (BLER) of the codes in Table I is shown in Fig. 5 for a fixed delay τ = 1. For the code construction we choose a random sample from the permutation ensemble and check whether it contains 4K-stopping sets up to K = 3. If it does, we sample again. The number of required samples is typically less than 10 for Code 2 and between zero and two for Codes 1 and 3. We can see in Fig. 5 that the resulting codes do not show an error floor.

The case with random delay τ ∈ [1 : τ_max] is explored in Fig. 6. We choose τ_max = 100 for Code 1 and τ_max = 500 for Codes 2 and 3. The reason for choosing a smaller τ_max for Code 1 is that for n < 1000 a delay of several hundred symbols is a significant fraction of the blocklength, in which case the number of symbols where both codewords collide is rather small and hence the BER is small, too. This effect also explains the non-monotonic behavior of the BER for Code 2. Note that both BLER and BER are limited by 1/τ_max because τ = 0 will always result in a block error. As expected from the analysis in Section VI-A, the codes exhibit an error floor due to short-length stopping sets caused by degree one VNs, and therefore the corresponding BLERs do not vanish. We can observe in the simulations that for large enough n, block errors are caused almost exclusively by 4 remaining bit-errors for Codes 1 and 3, while Code 2 also occasionally exhibits 8 or 12 remaining bit-errors. Thus, a high-rate outer code would be sufficient to resolve the remaining bit-errors in this case. For example, a BCH code with minimum distance 8 or 24, respectively, would suffice.

APPENDIX A PROOF OF THEOREM 1

Let $H_E$ denote the sub-matrix of $H$ with column indices in $E$ and, analogously, $m_{u,E}$ the restriction of $m_u$ to the set $E$. Note that on $\bar{E} = [n] \setminus E$ the entries of $m_1$ are known and, similarly, on $\bar{E} - \tau$ the entries of $m_2$ are known. Therefore, we can compute the two syndromes:
$$s_1 = H_{\bar{E}}\, m_{1,\bar{E}}, \qquad s_2 = H_{\bar{E}-\tau}\, m_{2,\bar{E}-\tau} \qquad (16)$$
and m 1 satisfies the two constraints:
$$H_E\, m_{1,E} = s_1, \qquad H_{E-\tau}\, m_{1,E} = H_{E-\tau}(\mathbf{1} - m_{2,E-\tau}) = H_{E-\tau}\mathbf{1} - s_2 =: \tilde{s}_2 \qquad (17)$$
We can define $\tilde{H} = [H\;\; 0_\tau;\; 0_\tau\;\; H] \in \mathbb{F}_2^{2(n-k)\times(n+\tau)}$. With this, (17) can be written as $\tilde{H}_E\, m_{1,E} = [s_1; \tilde{s}_2]$. These equations can be solved if $\operatorname{rank}(\tilde{H}_E) = |E|$.
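The recovery step above amounts to solving a linear system over $\mathbb{F}_2$ restricted to the erased positions. The following sketch is our own illustration (not the decoder used in the paper): it builds the stacked matrix $\tilde{H}$ for a given parity check matrix and shift, and solves $\tilde{H}_E m_{1,E} = [s_1; \tilde{s}_2]$ by Gaussian elimination over GF(2) under the assumption that the sub-matrix has full column rank; all function names are ours.

```python
import numpy as np

def stacked_parity_matrix(H, tau):
    """H_tilde = [H 0_tau; 0_tau H] of size 2(n-k) x (n+tau), as defined above."""
    r, n = H.shape
    top = np.hstack([H, np.zeros((r, tau), dtype=int)])
    bot = np.hstack([np.zeros((r, tau), dtype=int), H])
    return np.vstack([top, bot])

def gf2_solve(A, b):
    """Solve A x = b over GF(2) by Gaussian elimination.
    Returns x if A has full column rank and the system is consistent, else None."""
    A, b = A.copy() % 2, b.copy() % 2
    rows, cols = A.shape
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if A[i, c]), None)
        if pivot is None:
            return None                    # rank(A) < number of unknowns
        A[[r, pivot]], b[[r, pivot]] = A[[pivot, r]], b[[pivot, r]]
        for i in range(rows):
            if i != r and A[i, c]:
                A[i] ^= A[r]               # eliminate column c from row i
                b[i] ^= b[r]
        r += 1
    if np.any(b[r:]):
        return None                        # inconsistent system
    return b[:cols]                        # reduced system is the identity on the top rows
```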
Now let the entries of the parity check matrix $H$ be Bernoulli(1/2) i.i.d. and define $r = n - k$. For some arbitrary erasure set $E \subset [\tau : n]$ of size $d$ we compute the probability $P_d$ that a sub-matrix $\tilde{H}_E$ of $\tilde{H}$ of size $2r \times d$ has rank $d$. Note that this probability is well defined since it depends only on the size of $E$ but not on the actual set. The complications in the proof, compared to standard techniques, arise from the fact that $\tilde{H}$ may contain the same vectors in the top and bottom half, in which case we cannot assume anymore that they are independent. We can bound $P_d$ as follows:
$$P_d \geq \prod_{k=1}^{d}\left(1 - \frac{2^{k+1}}{2^{2r}}\right) \qquad (18)$$
To get the bound we compute the smaller probability $\tilde{P}_d$ that $\tilde{H}_E$ has rank $d$ and the following condition is fulfilled: i) none of the top-half vectors of $\tilde{H}_E$ lie in the column span of the bottom half $H_{E-\tau}$. We compute $\tilde{P}_d$ recursively by adding the indices in $E$ in increasing order. Let $E_k$ denote the subset of $E$ containing only the first $k$ indices. Assume the columns of $\tilde{H}_{E_{k-1}}$ are linearly independent and condition i) is satisfied. If one column $\tilde{h}_i := [h_i; h_{i-\tau}]$ is added, the resulting set will be linearly dependent if $I_1 := \{\tilde{h}_i \in \operatorname{span}(\tilde{H}_{E_{k-1}})\}$ happens. In addition, condition i) will be broken if $I_2 := \{h_i \in \operatorname{span}(H_{E_{k-1}-\tau})\}$ happens. $I_1$ can be further decomposed into the two disjoint events $I_{1,1} = I_1 \cap \{i-\tau \in E_{k-1}\}$ and $I_{1,2} = I_1 \cap \{i-\tau \notin E_{k-1}\}$. The conditional probability of $I_{1,1}$ is zero due to the assumption that condition i) is fulfilled. On the other hand, if $i-\tau \notin E_{k-1}$, then $h_{i-\tau}$ is independent of $\tilde{H}_{E_{k-1}}$, and the probability that neither $I_{1,2}$ nor $I_2$ happens is at least $1 - (2^{k-1} + 2^k)/2^{2r}$, since there are $2^{k-1}$ binary vectors in the span of $H_{E_{k-1}}$ and $2^k$ vectors in the span of $[H_{E_{k-1}-\tau}, h_{i-\tau}]$. So we can bound $\tilde{P}_d$ as
$$\tilde{P}_d \geq \left(1 - \frac{2^{d+1}}{2^{2r}}\right)\tilde{P}_{d-1} \qquad (19)$$
which proves (18). For a fixed parity check matrix H, the probability of decoding the first codeword wrong, averaged over all codeword pairs, is given by
$$P_{e,H} = 1 - \sum_{d=1}^{n-\tau} P(|E| = d)\, \mathbb{1}(\operatorname{rank}(\tilde{H}_E) = d) \qquad (20)$$
Let $n' := n - \tau$ and $d_{\max} = n'/2 + \delta n'$. We can write $|E| = \sum_{i} X_i$ where we define $X_i := \mathbb{1}(c_{1,i} = -c_{2,i-\tau})$. It holds that $E[X_i] = 1/2$, $\operatorname{Var}[X_i] = 1/4$,² and the $X_i$ are pairwise independent³. Therefore $\operatorname{Var}(|E|) = n'/4$ and Chebyshev's inequality shows that for any $\delta > 0$
$$P\left(\left||E| - \frac{n'}{2}\right| > \delta n'\right) \leq \frac{1}{4\delta^2 n'}. \qquad (21)$$
Since $\mathbb{1}(\operatorname{rank}(\tilde{H}_E) \geq d)$ is non-increasing in $d$, we have
$$P_{e,H} \leq 1 - \mathbb{1}(\operatorname{rank}(\tilde{H}_E) = d_{\max})\, P(|E| \leq d_{\max}) \leq 1 - \mathbb{1}(\operatorname{rank}(\tilde{H}_E) = d_{\max}) + o_n(1). \qquad (22)$$
Note that the channel is noiseless. Therefore, if one codeword is correctly recovered, the second one can be obtained by subtracting the first. Also, it is irrelevant which codeword we attempt to decode since both users share the same $\tilde{H}_E$. For simplicity we assume that $d_{\max} = (n - \tau + \delta)/2$ is an integer. Averaging (22) over the code ensemble we get
$$P_e \leq 1 - P_{(n-\tau+\delta)/2} + o_n(1) \leq \frac{n - \tau + \delta}{2}\, 2^{\,n(1/2 - 2(1-R))} + o_n(1) \qquad (23)$$
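For intuition, the bounds (18) and (23) are easy to evaluate numerically. The snippet below is our own illustration with arbitrary example parameters (not values used in the paper); it computes the product lower bound on $P_d$ and the right-hand side of (23), ignoring the $o_n(1)$ terms.

```python
def rank_prob_lower_bound(d, r):
    """Lower bound (18) on P(the 2r x d sub-matrix has rank d)."""
    p = 1.0
    for k in range(1, d + 1):
        p *= max(0.0, 1.0 - 2.0 ** (k + 1 - 2 * r))
    return p

def error_prob_upper_bound(n, R, tau, delta):
    """Right-hand side of (23): ((n - tau + delta)/2) * 2^{n(1/2 - 2(1-R))}."""
    return ((n - tau + delta) / 2.0) * 2.0 ** (n * (0.5 - 2.0 * (1.0 - R)))

if __name__ == "__main__":
    n, R, tau, delta = 1000, 0.6, 1, 11      # example parameters (ours)
    r = round(n * (1 - R))
    d = (n - tau + delta) // 2
    print("P_d lower bound:", rank_prob_lower_bound(d, r))
    print("P_e upper bound:", error_prob_upper_bound(n, R, tau, delta))
```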
An alternative proof for the slightly different linear code ensemble of iid random generator matrices $G \in \mathbb{F}_2^{k \times n}$ can be sketched as follows. Let $(x^{\to})_i = x_{i-\tau}$ for $i > \tau$ denote the shift of a vector by some fixed $\tau$, and let $u_1, u_2 \in \mathbb{F}_2^k$ be the transmitted bit sequences. Then the channel output reads as
$$y = (u_1 G) + (u_2 G)^{\to} \qquad (24)$$
W.l.o.g. we can choose u 1 = e 1 and u 2 = e 2 . For arbitrary u 1 , u 2 we can find a basis which has u 1 , u 2 as first two basis vectors and work in the new basis. Since the distribution of G is invariant under basis change this does not affect the error probability. Conditioned on the two rows g 1 , g 2 the error probability is given by
$$P_{e,G} = P\Big(\bigcup_{v_1, v_2}\big\{(v_1 G) + (v_2 G)^{\to} = g_1 + g_2^{\to}\big\}\ \Big|\ g_1, g_2\Big) \qquad (25)$$
We partition the space of possible sequences v 1 , v 2 into two sets A and A c such that sequences in A are zero in the first two positions. Within A, v 1 G, v 2 G are independent of g 1 , g 2 . Furthermore, we distinguish two cases. First, that v 1 and v 2 both contain at least one unique bit. In this case we can treat them as independent vectors and bound the error probability, averaged over G \ [g 1 , g 2 ] as
$$P_e \leq 2^{2(k-2)}\, P(b_1 + b_2 = 0)^{|Y_0|}\, P(b_1 + b_2 = 1)^{|Y_1|}\, P(b_1 + b_2 = 2)^{|Y_2|}$$
In the second case, $v_2$ is of the form $v_2 = v_1 \oplus u$ for some vector $u$ that is independent of $v_1$. We get probabilities of the form $P(b_1 + (b_1 \oplus b_2) = j)$ for $j \in \{0, 1, 2\}$.
This leaves only 15 possible cases, most of which are trivial or can be reduced by symmetry to one of the following 4 non-trivial cases:
• Case 1: v 1 = g 1 ⊕ u, v 2 = g 2 ⊕ u
We explicitly write down the equations that need to be satisfied for an error to occur (wlog for τ = 1):
$$(g_{1,i} \oplus u_i) + (g_{2,i-1} \oplus u_{i-1}) = g_{1,i} + g_{2,i-1} \qquad (30)$$
This can only be satisfied if $u_i = u_{i-1}$, which happens with probability 1/2. Therefore we can bound the error probability as
$$P_e \leq 2^{k-2}\, 2^{-n} + o_n(1) = 2^{n(R-1)} + o_n(1) \qquad (31)$$
• Case 2: v 1 = g 1 ⊕ g 2 ⊕ u, v 2 = u
In this case there are some channel values that cannot be replicated. E.g., if $g_{1,i} = g_{2,i} = 0$ and $g_{2,i-1} = 1$, then $y_i = 1$, but $g_{1,i} \oplus g_{2,i} = 0$, so neither value of $u_i$ can replicate the channel output. Therefore $P_e = 0 + o_n(1)$.
• Case 3: $v_1 = g_1 \oplus g_2 \oplus u$, $v_2 = g_1 \oplus u$. Similar to Case 2, giving $P_e = 0 + o_n(1)$.
• Case 4: $v_1 = g_1 \oplus u$, $v_2 = u$. If $y_i = 1$, both $(u_i, u_{i-1}) = (0, 1)$ and $(u_i, u_{i-1}) = (1, 0)$ result in the correct channel output for both $g_{1,i} = 0$ and $g_{1,i} = 1$.
APPENDIX B PROOF OF THEOREM 2
The outline of the proof is as follows:
Lemma 1 shows that $P_b$ with fixed dither concentrates around the dither average. Corollary 1 shows that $P_b$ for a fixed codeword pair concentrates around the average over all codeword pairs. Lemma 2 establishes that the dither average is independent of the transmitted codewords. Lemma 3 states that the computation tree for each VN is with high probability tree-like for a fixed depth $l$ as $n \to \infty$. Finally, we argue that $P_b$ for any fixed random graph concentrates around the ensemble average, which concludes the proof of Theorem 2.
The proof will make repeated use of the Azuma-Hoeffding inequality [21], [22] applied to so-called Doob martingales, which are conditional expectations of the form
$$Y_i = E[f(X_1, \ldots, X_n) \mid X_1 = x_1, \ldots, X_i = x_i] \qquad (33)$$
for some function $f$ and a (not necessarily iid) sequence of RVs $(X_i)_{i=1,\ldots,n}$. It holds that $Y_0 = E[f]$ and $Y_n = f(x_1, \ldots, x_n)$.
Theorem 5 (Azuma-Hoeffding for Doob martingales): Suppose that $|Y_k - Y_{k-1}| \leq d_k$ for a sequence $(d_k)_{k=1,\ldots,n}$ of non-negative reals. Then for $\lambda > 0$ it holds that
$$P(|Y_n - Y_0| > \lambda) \leq 2\exp\left(-\frac{\lambda^2}{2\sum_{k=1}^{n} d_k^2}\right) \qquad (34)$$
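As a quick sanity check on how (34) is applied, the following small simulation (ours, not part of the paper) compares the empirical tail of the normalized sum of bounded, zero-mean increments with the corresponding Azuma-Hoeffding bound.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, lam = 1000, 5000, 0.08

# Doob martingale for f(X) = mean(X) with X_i uniform on {-1, +1}:
# Y_n - Y_0 = (1/n) * sum_i X_i and |Y_k - Y_{k-1}| <= 1/n =: d_k.
X = rng.choice([-1.0, 1.0], size=(trials, n))
deviation = np.abs(X.mean(axis=1))

empirical = np.mean(deviation > lam)
azuma = 2.0 * np.exp(-lam**2 / (2.0 * n * (1.0 / n) ** 2))   # = 2 exp(-lam^2 n / 2)

print(f"empirical P(|Y_n - Y_0| > {lam}): {empirical:.4g}")
print(f"Azuma-Hoeffding bound (34)     : {azuma:.4g}")
```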
The next lemma shows that for two fixed transmitted codewords any randomly chosen dither sequence, with high probability, will result in a bit-error rate that is close to the bit-error rate averaged over all dither sequences.
Lemma 1:
$$P(|P_b(d, c_1, c_2) - E[P_b(d, c_1, c_2)]| > \lambda) \leq \exp(-C\lambda n) \qquad (35)$$
for some constant C > 0 and any λ > 0.
Proof: Define the Doob martingale Y i = E[P b (d)|d 1 , ..., d i ]. Since any dither value d i affects at most two VNs and all VNs included in their depth l computation graphs, the number of affected VNs is upper bounded by a constant that does not scale with n. This constant can be bounded by the maximal VN and CN degrees in the graph as we will show later as part of the proof of Lemma 3. Therefore Y i has bounded increments and the concentration inequality (35) follows from (34). In fact, also the stronger statement holds, that a randomly chosen dither can be used for all codeword pairs (c 1 , c 2 ).
Corollary 1:
$$P\left(\left|\frac{1}{|\mathcal{C}|^2}\sum_{c_1, c_2} P_b(d, c_1, c_2) - E[P_b(d)]\right| > \lambda\right) \leq \exp(-C'\lambda n) \qquad (36)$$
for some constant $C' > 0$ and any $\lambda > 0$.
Proof: First, it holds that
$$P_{c_1,c_2}(|P_b(d) - P_b(d, c_1, c_2)| > \lambda) \leq \exp(-C''\lambda n) \qquad (37)$$
for some constant $C'' > 0$, any $\lambda > 0$, and any fixed $d$. This can be shown by applying the Azuma-Hoeffding inequality to the martingale that reveals $c_1$ and $c_2$ component by component. Since each component affects at most a finite number of VNs in the depth $l$ neighborhood, the martingale has bounded increments. Furthermore,
$$P_d(|P_b(d) - E[P_b(d)]| > \lambda) = P_{c_1,c_2,d}(|P_b(d) - P_b(d, c_1, c_2) + P_b(d, c_1, c_2) - E[P_b(d)]| > \lambda) \leq P_{c_1,c_2,d}\!\left(|P_b(d) - P_b(d, c_1, c_2)| > \tfrac{\lambda}{2}\right) + P_{c_1,c_2,d}\!\left(|P_b(d, c_1, c_2) - E[P_b(d)]| > \tfrac{\lambda}{2}\right) \leq 2\exp(-C'\lambda n) \qquad (38)$$
where the last inequality follows by applying (35) and (37), integrating, and setting $C' = \min\{C, C''\}/2$. The next lemma will show that the channel output, when averaged over the distribution of the dither, is iid and does not depend on the transmitted codewords $c_1, c_2$. Therefore, when evaluating $E[P_b(d)]$, we can assume that both users transmit the all-ones codeword. Dependencies may occur only if $d_i$ is shared in multiple channel outputs. Note that only $y_i$ and $y_{i+\tau}$ include $d_i$. We can compute
$$p(y_i = 0, y_{i+\tau} = 0) = p(y_{i+\tau} = 0 \mid y_i = 0)\, p(y_i = 0)$$
$$= \tfrac{1}{2}\, p(d_{i+\tau} c_{1,i+\tau} + d_i c_{2,i} = 0 \mid d_i c_{1,i} + d_{i-\tau} c_{2,i-\tau} = 0) = \tfrac{1}{2}\, p(d_{i+\tau} c_{1,i+\tau} = d_{i-\tau} c_{2,i-\tau}) = \tfrac{1}{4} \qquad (40)$$
where the last equality follows because $d_{i+\tau}$ and $d_{i-\tau}$ are independent. An arbitrary set $S$ can be handled by using (40) repeatedly.
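Equation (40) and the claim of Lemma 2 can also be checked empirically. The sketch below (ours; the code symbols are arbitrary placeholders) draws iid ±1 dither values, forms the two channel outputs that share $d_i$, and estimates the marginal and joint erasure probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 500000

# fixed (arbitrary) antipodal code symbols at positions i-tau, i, i+tau for both users
c1_i, c1_ip = 1, -1
c2_im, c2_i = -1, 1

# only three dither values matter for the pair (y_i, y_{i+tau})
d_im = rng.choice([-1, 1], size=trials)   # d_{i-tau}
d_i  = rng.choice([-1, 1], size=trials)   # d_i
d_ip = rng.choice([-1, 1], size=trials)   # d_{i+tau}

y_i  = d_i * c1_i + d_im * c2_im          # channel output at position i with dither
y_ip = d_ip * c1_ip + d_i * c2_i          # channel output at position i+tau

print("P(y_i = 0)                ≈", np.mean(y_i == 0), "(Lemma 2: 1/2)")
print("P(y_i = 0, y_{i+tau} = 0) ≈", np.mean((y_i == 0) & (y_ip == 0)), "(eq. (40): 1/4)")
```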
Fig. 7: Computation graph. T denotes the basic LDPC computation tree with one VN connected to its adjacent CNs, which in turn connect to their adjacent VNs.
Next, we show that for a random code from LDPC(λ, ρ) the depth $l$ computation graph rooted at a VN is a tree with high probability. Fig. 7 depicts the structure of the computation graph of an arbitrary VN of user 1. The root node is connected to one MAC node (triangle) and a variable number of check nodes, which in turn connect to other variable nodes, which connect to other check nodes. After this point, at each additional iteration the structure of parity checks followed by MAC nodes is recursively repeated $l$ times. The number of leaves of each element, denoted by T, is an iid random variable whose distribution can be calculated from the left and right degree distributions. Here, we only need an upper bound on the number of leaves, which is given by $n_{\max} = l_{\max} r_{\max} + 1$, where $l_{\max}$ and $r_{\max}$ are the maximal VN and CN degrees. We next show that for a randomly chosen code from the ensemble LDPC(λ, ρ) the probability that the nodes in a computation graph with root VN $i$, $i = 1, \ldots, 2n$, contain only distinct VNs, and therefore form a tree, can be bounded as follows.
Let us represent the VNs as two vectors $v_1, v_2 \in \{0,1\}^{n+\tau}$ with zero padding, i.e., $v_{1,j} = 0$ for $j \in [n+1 : n+\tau]$ and $v_{2,j} = 0$ for $j \in [0, \tau]$. Let $V_u$ denote the set of VNs for user $u$, $u \in \{1, 2\}$. Due to the same-codebook constraint, the neighborhood of $v_{1,i}$ is the same as the neighborhood of $v_{2,i+\tau}$. The neighborhood of some fixed VN, without loss of generality in $V_1$, at depth $t$ can be recursively expressed as follows. Let $N^{2t}(\tau)$ denote the neighborhood of root VN $i$ (we drop the index $i$ for readability) at depth $t$ in the joint graph with offset $\tau$. We also drop the dependence on $\tau$ when immaterial. We split the neighborhood as $N^{2t} = N_1^{2t} \cup N_2^{2t}$, where $N_1^{2t}, N_2^{2t}$ denote the neighbors in $V_1$ and $V_2$, respectively. Let $N_1^0 = i$ be the root. Define the shifted sets $N \pm \tau := \{i : i = j \pm \tau,\ j \in N\}$. We can describe the evolution of $N_u^{2t}$, $u \in \{1, 2\}$, with increasing depth as follows.
1) $N_1^{2t+1} = N_1^{2t} \cup V_{t,1}$, where $V_{t,1}$ is the set of nodes in $V_1$ that connect to $N_1^{2t}$ through a CN.
2) $N_2^{2t+1} = N_1^{2t+1} - \tau$. The right hand side (rhs) is the set of nodes in $V_2$ that connect to $N_1^{2t+1}$ through MAC nodes.
3) $N_2^{2t+2} = N_2^{2t+1} \cup V_{t,2}$, where $V_{t,2}$ is the set of nodes in $V_2$ that connect to $N_2^{2t+1}$ through a CN.
4) $N_1^{2t+2} = N_2^{2t+2} + \tau$. The rhs is the set of nodes in $V_1$ that connect to $N_2^{2t+2}$ through MAC nodes.
Note that for a random code from the ensemble LDPC(λ, ρ) the sets $V_{t,u}$ are random.
Lemma 3: $P(N^{2T}(\tau)$ is not a tree for some $\tau \in [1 : \tau_{\max}]) \leq \gamma/n$, where $\gamma$ depends on $T$, λ, ρ, and $\tau_{\max}$ but not on $n$.
Proof: The proof follows the structure of [20], [23]. Assume that the computation graph at iteration $t$, $t < T$, is a tree.⁴ We need to compute the probability that any of the four steps in the construction of the neighborhood of a VN introduces a cycle. Note that only the sets $V_{t,u}$ are random since the MAC connections are fixed. No cycle is introduced if $V_{t,1} \cap N_1^{2t} = V_{t,2} \cap N_2^{2t+1} = \emptyset$.
In addition, since we need to take the same-codebook constraint into account, we also require that
$$V_{t,u} \cap \{N_u^{2t} + \tau\} = \emptyset \quad \forall \tau \in [1 : \tau_{\max}] \qquad (41)$$
for $u \in \{1, 2\}$. Note that (41) is necessary because otherwise there would be a $\tau$ for which $N_2^{2t+1}$ contains a node which is a mirrored copy of a node in $N_1^{2t+1}$. This implies that the edges connected to it are fixed and cannot be considered random iid anymore. Even though the event (41) does not necessarily result in a cycle, we treat it as such to get an upper bound on the probability that a computation graph is cycle-free. This increases the number of VNs that result in a cycle by a factor $(1 + \tau_{\max})$ in each iteration compared to the case without MAC connections. Intuitively it is clear that this does not change the basic proof idea of [20], since the size of a neighborhood after $t$ iterations still does not scale with $n$. Nonetheless, we give a formal proof for completeness. (Footnote 4: The first iteration is special as it has one more connection than the others, as depicted in Fig. 7. It is apparent that this does not change the proof.) Let $c_u^T$ and $v_u^T = |N_u^{2T}|$ denote the number of CNs and VNs in the computation graph of user $u$ after $T$ iterations in $V_u$. Then, at iteration $t+1$, the number of newly added CNs is at most
$$c_u^{t+1} - c_u^t \leq v_u^t\, l_{\max} \qquad (42)$$
and the number of newly added VNs is at most
$$v_u^{t+1} - v_u^t \leq c_u^{t+1}\, r_{\max}. \qquad (43)$$
Both of these quantities can be upper-bounded independently of the index $u \in \{1, 2\}$, so we drop it. Furthermore, $v^T \geq v^t$ and $c^T \geq c^t$ for $T \geq t$. Conditioned on the event that $N^{2(T-1)}(\tau)$ is a tree for all $\tau \in [1 : \tau_{\max}]$, we have
$$P(N^{2T}(\tau) \text{ is a tree } \forall\tau \mid N^{2(T-1)}(\tau) \text{ is a tree } \forall\tau) \geq \left(1 - \frac{(1+\tau_{\max})c^T}{m}\right)^{(1+\tau_{\max})(c^T - c^{T-1})} \cdot \left(1 - \frac{(1+\tau_{\max})v^T}{n}\right)^{(1+\tau_{\max})(v^T - v^{T-1})} \qquad (44)$$
So we obtain recursively that
$$P(N^{2T} \text{ is a tree } \forall\tau) \geq \prod_{t=1}^{T} P(N^{2t} \text{ is a tree } \forall\tau \mid N^{2(t-1)} \text{ is a tree } \forall\tau)$$
$$\geq \left(1 - \frac{(1+\tau_{\max})c^T}{m}\right)^{(1+\tau_{\max})c^T} \cdot \left(1 - \frac{(1+\tau_{\max})v^T}{n}\right)^{(1+\tau_{\max})v^T} \geq 1 - \frac{((1+\tau_{\max})v^T)^2 + \frac{((1+\tau_{\max})c^T)^2}{1-R}}{n} \qquad (45)$$
and therefore
$$P(N_i^{2T} \text{ is not a tree}) \leq \frac{((1+\tau_{\max})v^T)^2 + \frac{((1+\tau_{\max})c^T)^2}{1-R}}{n} \qquad (46)$$
We conclude the proof by giving bounds on $c^T$ and $v^T$:
$$c^T \leq l_{\max}\sum_{t=1}^{T-1} v^t \leq l_{\max}(T-1)\,v^{T-1} \qquad (47)$$
$$v^T \leq r_{\max} T\, c^T \leq l_{\max} r_{\max} T^2\, v^{T-1} \qquad (48)$$
which gives
$$v^T \leq (l_{\max} r_{\max} T^2)^T \qquad (49)$$
$$c^T \leq l_{\max}(T-1)(l_{\max} r_{\max} T^2)^{T-1}. \qquad (50)$$
We conclude the proof by noting that both upper bounds on c T and v T are independent of n.
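To see how (46) behaves, the crude recursions (42)-(43) and the resulting bound can be evaluated numerically; the sketch below is our own illustration with arbitrary degree bounds. Since the constants grow very quickly in $T$, the bound is non-trivial only for very large $n$, which is all the proof needs.

```python
def tree_failure_bound(T, l_max, r_max, tau_max, R, n):
    """Upper bound (46) on P(depth-2T computation graph is not a tree),
    with c^T and v^T obtained from the recursions (42)-(43)."""
    v, c = 1, 0                    # start from the root VN
    for _ in range(T):
        c = c + v * l_max          # (42), accumulated over one iteration
        v = v + c * r_max          # (43)
    return (((1 + tau_max) * v) ** 2 + ((1 + tau_max) * c) ** 2 / (1 - R)) / n

if __name__ == "__main__":
    for n in (10**6, 10**8, 10**10):
        b = tree_failure_bound(T=2, l_max=3, r_max=6, tau_max=2, R=0.5, n=n)
        print(f"n = {n:.0e}: bound = {b:.3g}")
```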
To conclude the proof of Theorem 2 it remains to show that $E_d[P_b(d)]$ converges to the ensemble average over $G \in \text{LDPC}(\lambda, \rho)$ as $n \to \infty$. We omit a full proof and give only an outline since it follows, almost without modifications, the proof in [20]. By Lemma 2 we can assume that the channel output is iid in the computation of $E_{G,d}[P_b(d)]$. By Lemma 3 we can reduce the computation of $E_{G,d}[P_b(d)]$ to $E_G[\mathbb{1}(v_i^l \text{ is erased}) \mid N_i^l \text{ is a tree}]$, where $v_i^l$, $i = 1, \ldots, 2n$, denotes the value of the $i$-th VN after $l$ iterations. The convergence of the edge erasure probabilities to the ensemble average can be shown by constructing an edge exposure martingale. In our case each revealed edge affects both users' graphs, so the number of edges affected in the depth $l$ neighborhood doubles. It is apparent that the martingale still has bounded increments as the number of edges in the depth $l$ neighborhood of a given edge does not scale with $n$. Together with Lemma 1 and Corollary 1 this concludes the proof.
APPENDIX C DETAILS ON DEGREE OPTIMIZATION
The optimization of λ for fixed ρ can be expressed in standard form as follows.
$$g_\lambda(x) = x - \tfrac{1}{2}\, L(z(x))\, \lambda(z(x)) = x - \frac{1}{2\left(\sum_i \frac{\lambda_i}{i}\right)}\, \lambda^T H_x \lambda \qquad (51)$$
where $H_{x,ij} = \frac{z(x)^{i+j-1}}{i}$ and $z(x) = 1 - \rho(1 - x)$. We get the optimization problem:
$$\max_{\kappa,\lambda}\ \kappa \quad \text{s.t.} \quad \sum_i \frac{\lambda_i}{i} - \kappa = 0;\quad \lambda_i \geq 0;\quad \sum_i \lambda_i = 1;\quad \lambda^T H_x \lambda - 2\kappa(x - \delta) < 0 \ \ \forall x \in (0, 1) \qquad (52)$$
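In practice the constraint in (52) is checked on a finite grid of $x$ values. The sketch below (ours) evaluates $g_\lambda(x)$ from (51) for a given pair of degree distributions and verifies that it is positive on a grid; the example distributions are hypothetical and are not the codes of Table I, and the handling of the slack $\delta$ is simplified.

```python
import numpy as np

def edge_poly(coeffs, x):
    """Edge-perspective polynomial sum_i coeffs[i] * x**(i-1), coeffs given as {degree: fraction}."""
    return sum(c * x ** (i - 1) for i, c in coeffs.items())

def g_lambda(x, lam, rho):
    """g_lambda(x) = x - 0.5 * L(z(x)) * lambda(z(x)) with z(x) = 1 - rho(1 - x),
    where L_i is proportional to lambda_i / i (node perspective), as in (51)."""
    z = 1.0 - edge_poly(rho, 1.0 - x)
    norm = sum(li / i for i, li in lam.items())
    Lz = sum((li / i) / norm * z ** i for i, li in lam.items())
    return x - 0.5 * Lz * edge_poly(lam, z)

if __name__ == "__main__":
    lam = {2: 0.4, 3: 0.3, 8: 0.3}   # hypothetical variable-node edge distribution
    rho = {6: 1.0}                   # hypothetical check-node edge distribution
    xs = np.linspace(1e-3, 1.0, 1000)
    print("g_lambda(x) > 0 on the grid:", bool(all(g_lambda(x, lam, rho) > 0 for x in xs)))
```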
APPENDIX D ERROR FLOOR ANALYSIS
Throughout this section we use the term 4K stopping set (4K-SS) to denote stopping sets of size 4K consisting of just degree one VNs.
Theorem 6: The probability that a random code from the ensemble LDPC(λ, ρ) results in a joint graph that has no 4-SS for all τ ∈ [1 : τ_max] can be bounded as
$$P(G(\tau) \text{ has no 4-SS } \forall \tau \in [1 : \tau_{\max}]) \geq 1 - \frac{\tau_{\max} L_1^4}{2(1-R)^2} \qquad (53)$$
Proof: There are $\binom{L_1 n}{2} \leq n^2 L_1^2 / 2$ pairs of degree one VNs. Let $n_c = (1 - R)n$ denote the number of CNs. The probability that a pair of VNs is connected to the same CN is $1/n_c$. Also let $\tilde{p}$ denote the probability that a 4-stopping set containing a given pair of VNs $(v_1, v_2)$ appears in the joint graph with fixed $\tau$. It is given by $\tilde{p} = L_1^2/n_c$, i.e., the probability that the nodes connected to $(v_1, v_2)$ through MAC nodes are both of degree one and connect to the same CN. The degrees and edges of all $\tau_{\max}$ VNs to the right of $(v_1, v_2)$ are independent, and therefore the probability that at least one of the joint graphs with shift $\tau$ contains a 4-SS is given by $1 - (1 - \tilde{p})^{\tau_{\max}}$. Let $N_4$ denote the number of 4-SSs and $I_p$ the event that a 4-SS goes through pair $(v_1, v_2)$. Then the expected number of 4-SSs is given by
$$E[N_4] = E\left[\sum_{p=1}^{\binom{L_1 n}{2}} I_p\right] \leq \frac{L_1^2 n_c^2}{2(1-R)^2}\cdot\frac{1}{n_c}\left(1 - \left(1 - \frac{L_1^2}{n_c}\right)^{\tau_{\max}}\right) \leq \frac{\tau_{\max} L_1^4}{2(1-R)^2} \qquad (54)$$
The last inequality follows because $(1-x)^\tau \geq 1 - \tau x$. If the expected number of 4-SSs is smaller than 1, there must be graphs in the ensemble that result in zero 4-SSs. Furthermore, for any non-negative random variable $N$ it holds that $P(N = 0) \geq 1 - E[N]$.

Proof of Thm. 3: Let $k \leq K$. There are $\binom{L_1 n}{2k} \leq n^{2k} L_1^{2k}/(2k)!$ 2k-tuples of degree one VNs. Let $n_c = (1-R)n$ denote the number of CNs. For each 2k-tuple there are $(2k-1)!!$ ways to partition it into pairs, where $(2k-1)!! = (2k-1)(2k-3)\cdots 1$ denotes the double factorial. The probability that each pair is connected to the same CN is $n_c^{-k}$. Let $\tilde{p}_{2k}$ denote the probability that a 4k-SS goes through a given 2k-tuple in the joint graph with fixed $\tau$. Note that the degrees and edges of the neighbor sequence are not independent if the original tuple of VNs contains a consecutive sequence of length at least three. We show later that their contribution to the expected number of 4k-SSs is at most of order $O(1/n^2)$ and results in the correction term in (15). For now we consider only 2k-tuples which do not contain consecutive sequences. For those, the degrees and edges of the neighbor sequence are independent of the original tuple. A 4k-SS is created if the 2k-tuple connected by MAC nodes consists of only degree one VNs which connect to $k$ CNs in a configuration that does not result in shorter SSs. With respect to random permutations of VNs and edges this happens with probability $\tilde{p}_{2k}$.
Here we have trivially lower-bounded by zero the number of configurations that result in SSs smaller than 2k. The probability that at least one of the joint graphs with shift $\tau$ contains a 4k-SS is given by $1 - (1 - \tilde{p}_{2k})^{\tau_{\max}}$. Let $N_{4k}$ denote the number of 4k-SSs and $I_{p,2k}$ the event that a 4k-SS goes through the 2k-tuple $p$. Then the expected number of 4k-SSs is given by
$$E[N_{4k}] = E\left[\sum_{p=1}^{\binom{L_1 n}{2k}} I_{p,2k}\right] \leq \frac{(2k-1)!!\, L_1^{2k}\, n_c^{2k}}{(2k)!\,(1-R)^{2k}}\cdot\frac{1}{n_c^k}\left(1 - (1-\tilde{p}_{2k})^{\tau_{\max}}\right) + O\!\left(\frac{1}{n^3}\right) \leq \tau_{\max}\left(\frac{L_1^2}{1-R}\right)^{k} \frac{((2k-1)!!)^2}{(2k)!} + O\!\left(\frac{1}{n^3}\right) \leq \tau_{\max}\left(\frac{L_1^2}{1-R}\right)^{k} \frac{1}{2k} + O\!\left(\frac{1}{n^2}\right) \qquad (56)$$
The second inequality follows because $(1-x)^\tau \geq 1 - \tau x$. If $E[N_{\leq 4K}]$ is smaller than 1, there must be graphs in the ensemble that result in zero SSs of size smaller than 4K, because for any non-negative random variable $N$ it holds that $P(N = 0) \geq 1 - E[N]$.
It remains to show that the contribution of 2k-tuples that contain consecutive sequences is of order $O(1/n^2)$. The number of length $2l+1$ sequences is at most linear in $n$, while it reduces the probability that the neighbor sequence connects to $k$ CNs by at most a factor of $n_c^l$. Therefore, the expected number of 4k-SSs that go through at least $2l+1$ consecutive VNs can be bounded loosely by an $O(n/n^{2k-l})$ term. Since $l \leq k-1$, the term is maximized for $l = k-1$ and $k = 2$, giving the desired result.
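The expurgation bound of Theorem 3 is straightforward to evaluate. The snippet below (ours, with arbitrary parameter values) computes the right-hand side of (15), ignoring the $O(K/n^2)$ correction; negative values simply mean the bound is trivial for that choice of $L_1$, $R$, $\tau_{\max}$, $K$.

```python
def no_small_ss_prob_lower_bound(L1, R, tau_max, K):
    """Lower bound (15) on P(no stopping sets of size <= 4K made of degree-one VNs),
    without the O(K/n^2) correction term."""
    s = sum((L1 ** 2 / (1.0 - R)) ** k / (2.0 * k) for k in range(1, K + 1))
    return 1.0 - tau_max * s

if __name__ == "__main__":
    for L1 in (0.01, 0.02, 0.05):
        b = no_small_ss_prob_lower_bound(L1=L1, R=0.5, tau_max=500, K=3)
        print(f"L1 = {L1:.2f}: bound = {b:.4f}")
```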
Proof of Thm. 4: The proof follows by noting that the probability of having stopping sets with VNs of degree larger than one connected to the same set of CNs will go to zero as $n \to \infty$. Indeed, the smallest possible stopping set containing degree two VNs is the one where two degree one VNs connect to the same CN, and two degree two VNs connect to the same pair of CNs. Their expected number can be upper-bounded by $\binom{L_2 n}{2} L_1^2 / n_c^3 = O(1/n)$ since $n_c$ scales with $n$. Any larger stopping set containing degree two, or higher, VNs will have an even smaller expected number. Thus, as $n \to \infty$, we can have only stopping sets involving degree one VNs, which implies that expurgating the randomly generated graphs that contain these stopping sets guarantees a vanishing BLER as $n$ grows.
Fig. 1: Factor graph for a UBAC with τ = 1. Triangles denote MAC nodes, squares are CNs, circles are VNs.
Fig. 2: Fraction of erased messages between VNs, CNs, and MAC nodes.
Fig. 3: Stopping set of size 4 in a joint graph for τ = 1.
Fig. 4: Erased fraction of VNs as a function of the number of iterations for Code 2. The black thick line represents the erasure probability from DE. The thin lines are sample paths for n = 5 · 10^4.
Fig. 5: BLER as a function of n for τ = 1.
Fig. 6: BER as a function of n for random τ ∈ [0, τ_max], with τ_max = 100 for Code 1 and τ_max = 500 for Code 2.

We start by reformulating the decoding problem on the frame-asynchronous UBAC in terms of the parity check matrix $H \in \mathbb{F}_2^{(n-k)\times n}$ of a linear code. Let $m_1, m_2$ be two codewords, i.e., $Hm_1 = Hm_2 = 0$, and let both codewords be transmitted through the BAC (1) for some fixed τ. Let $E = \{i : y_i = 0\}$ and denote the shifted set by $E - \tau := \{i : y_{i+\tau} = 0\}$. For $i \in E$ we have $m_{1,i} = 1 - m_{2,i-\tau}$.
where $b_1, b_2$ are independent Bernoulli(1/2) distributed bits and $Y_l = \{i : y_i = l\}$. Averaging over $g_1$,
This leads to (we skip the intermediate steps) the corresponding bound. It remains to estimate the error probabilities in $A^c$. First note that whenever $v_1, v_2$ both contain at least one unique bit, they can be treated as independent and we get the same bound as (27). Also, if only one of them contains a random vector, e.g., $v_1 = g_1$, $v_2 = g_2 \oplus u$, then there is at most one value of $u_i$ for each $i$ which replicates the channel values, resulting in $P_e \leq 2^{k-2}\, 2^{-n} + o_n(1) = 2^{n(R-1)} + o_n(1)$.
Lemma 2: For any two transmitted codewords $c_1, c_2$ and any set $S \subset [\tau+1 : n]$, each symbol in the channel output is erased independently with probability 1/2. Since $d_i, d_j$ are independent if $i \neq j$, we have $p(y_i = 0) = p(d_i c_{1,i} + d_{i-\tau} c_{2,i-\tau} = 0) = \tfrac{1}{2}$, since $d_i c_{1,i}$ and $d_{i-\tau} c_{2,i-\tau}$ are independent and uniform over $\{-1, 1\}$.
TABLE I: Degree distributions for three codes at different rates.
a tree for all $\tau \in [1 : \tau_{\max}]$, going one step deeper will result in no cycles if the edges from the new VNs, of which there are $v^T - v^{T-1}$, meet two conditions: first, they connect to distinct, not yet visited CNs; and second, the resulting new CNs connect to distinct VNs that are neither in the set of $c^{T-1}$ already visited VNs nor in the same set shifted by some $\tau$. Both copies of the graph follow the same rules and have distinct sets of VNs and CNs, so we can bound them in the same way. The resulting probability is
The expected number of SSs up to length 4K is $E[N_{\leq 4K}] = \sum_{k=1}^{K} E[N_{4k}]$.
1) This can be the case in applications where synchronization is possible but a small shift τ is deliberately introduced at the transmitters.
2) This holds for all typical codes, that is, those which have marginal bit distributions close to Bernoulli(1/2). It can be shown easily that all but an exponentially small fraction of random parity check codes are typical, so we can restrict parity check matrices to be typical.
3) Again, this is true for all but an exponentially small fraction of parity check matrices. This can be seen by bringing H into systematic form. Then the first k data bits are clearly independent, and two of the n − k parity check bits are dependent if and only if they are sums of the exact same set of bits.
[1] Y. Polyanskiy, "A perspective on massive random-access," in 2017 IEEE International Symposium on Information Theory (ISIT), Jun. 2017, pp. 2523-2527. DOI: 10.1109/ISIT.2017.8006984.
[2] A. Fengler, P. Jung, and G. Caire, "SPARCs for unsourced random access," IEEE Trans. Inf. Theory, vol. 67, no. 10, pp. 6894-6915, Oct. 2021. DOI: 10.1109/TIT.2021.3081189.
[3] A. K. Pradhan, V. K. Amalladinne, K. R. Narayanan, and J. Chamberland, "Polar coding and random spreading for unsourced multiple access," in ICC 2020 - 2020 IEEE Int. Conf. Commun. (ICC), Jun. 2020, pp. 1-6. DOI: 10.1109/ICC40277.2020.9148687.
[4] E. Marshakov, G. Balitskiy, K. Andreev, and A. Frolov, "A polar code based unsourced random access for the Gaussian MAC," in 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall), Sep. 2019, pp. 1-5. DOI: 10.1109/VTCFall.2019.8891583.
[5] S. S. Kowshik, K. Andreev, A. Frolov, and Y. Polyanskiy, "Energy efficient coded random access for the wireless uplink," IEEE Trans. Commun., pp. 1-1, 2020. DOI: 10.1109/TCOMM.2020.3000635.
[6] V. K. Amalladinne, J.-F. Chamberland, and K. R. Narayanan, "A coded compressed sensing scheme for unsourced multiple access," IEEE Trans. Inf. Theory, 2020. DOI: 10.1109/TIT.2020.3012948.
[7] D. Truhachev, M. Bashir, A. Karami, and E. Nassaji, "Low-complexity coding and spreading for unsourced random access," IEEE Commun. Lett., vol. 25, no. 3, pp. 774-778, Mar. 2021. DOI: 10.1109/LCOMM.2020.3039436.
[8] A. Fengler, O. Musa, P. Jung, and G. Caire, "Pilot-based unsourced random access with a massive MIMO receiver, interference cancellation, and power control," IEEE J. Sel. Areas Commun., pp. 1-1, 2022. DOI: 10.1109/JSAC.2022.3144748.
[9] G. Liva and Y. Polyanskiy, "On coding techniques for unsourced multiple-access," in 2021 55th Asilomar Conf. Signals Syst. Comput., Oct. 2021, pp. 1507-1514. DOI: 10.1109/IEEECONF53345.2021.9723359.
[10] A. Fengler, G. Liva, and Y. Polyanskiy, "Sparse graph codes for the 2-user unsourced MAC," in 2022 56th Asilomar Conf. Signals Syst. Comput., Nov. 2022.
[11] T. Cover, R. McEliece, and E. Posner, "Asynchronous multiple-access channel capacity," IEEE Trans. Inf. Theory, vol. 27, no. 4, pp. 409-413, Jul. 1981. DOI: 10.1109/TIT.1981.1056382.
[12] A. Decurninge, P. Ferrand, and M. Guillaud, "Massive random access with tensor-based modulation in the presence of timing offsets," in GLOBECOM 2022 - 2022 IEEE Glob. Commun. Conf., Dec. 2022, pp. 1061-1066. DOI: 10.1109/GLOBECOM48099.2022.10001729.
[13] X. Chen, L. Liu, D. Guo, and G. W. Wornell, "Asynchronous massive access and neighbor discovery using OFDMA," IEEE Trans. Inf. Theory, pp. 1-1, 2022. DOI: 10.1109/TIT.2022.3224951.
[14] A. Roumy and D. Declercq, "Characterization and optimization of LDPC codes for the 2-user Gaussian multiple access channel," EURASIP J. Wireless Commun. Netw., vol. 2007, no. 1, p. 074890, Dec. 2007. DOI: 10.1155/2007/74890.
[15] A. Balatsoukas-Stimming, S. Rini, and J. Kliewer, "LDPC coded multiuser shaping for the Gaussian multiple access channel," in 2019 IEEE Int. Symp. Inf. Theory (ISIT), Jul. 2019, pp. 2609-2613. DOI: 10.1109/ISIT.2019.8849785.
[16] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.
[17] A. El Gamal and Y.-H. Kim, Network Information Theory. Cambridge University Press, Dec. 2011.
[18] T. Richardson and R. Urbanke, Modern Coding Theory. Cambridge; New York: Cambridge University Press, Mar. 2008.
[19] A. Shokrollahi, "Raptor codes," IEEE Trans. Inf. Theory, vol. 52, no. 6, pp. 2551-2567, Jun. 2006. DOI: 10.1109/TIT.2006.874390.
[20] T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inf. Theory, vol. 47, no. 2, pp. 599-618, Feb. 2001. DOI: 10.1109/18.910577.
[21] W. Hoeffding, "Probability inequalities for sums of bounded random variables," J. Am. Stat. Assoc., vol. 58, no. 301, pp. 13-30, 1963. DOI: 10.2307/2282952.
[22] K. Azuma, "Weighted sums of certain dependent random variables," Tohoku Math. J. (2), vol. 19, no. 3, Jan. 1967. DOI: 10.2748/tmj/1178243286.
[23] M. Luby, M. Mitzenmacher, A. Shokrollahi, and D. Spielman, "Analysis of low density codes and improved designs using irregular graphs," in Proc. Thirtieth Annu. ACM Symp. Theory Comput. (STOC '98), Dallas, Texas, USA: ACM Press, 1998, pp. 249-258. DOI: 10.1145/276698.276756.
| [] |
[
"WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia",
"WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia"
] | [
"Kenichiro Ando [email protected] \nHitotsubashi University\n\n",
"Riken Aip \nHitotsubashi University\n\n",
"Satoshi Sekine \nHitotsubashi University\n\n",
"Riken Aip \nHitotsubashi University\n\n",
"Mamoru Komachi \nHitotsubashi University\n\n"
] | [
"Hitotsubashi University\n",
"Hitotsubashi University\n",
"Hitotsubashi University\n",
"Hitotsubashi University\n",
"Hitotsubashi University\n"
] | [] | Wikipedia can be edited by anyone and thus contains various quality sentences. Therefore, Wikipedia includes some poor-quality edits, which are often marked up by other editors. While editors' reviews enhance the credibility of Wikipedia, it is hard to check all edited text. Assisting in this process is very important, but a large and comprehensive dataset for studying it does not currently exist. Here, we propose WikiSQE, the first large-scale dataset for sentence quality estimation in Wikipedia. Each sentence is extracted from the entire revision history of Wikipedia, and the target quality labels were carefully investigated and selected. WikiSQE has about 3.4 M sentences with 153 quality labels. In the experiment with automatic classification using competitive machine learning models, sentences that had problems with citation, syntax/semantics, or propositions were found to be more difficult to detect. In addition, we conducted automated essay scoring experiments to evaluate the generalizability of the dataset. We show that the models trained on WikiSQE perform better than the vanilla model, indicating its potential usefulness in other domains. WikiSQE is expected to be a valuable resource for other tasks in NLP. | 10.48550/arxiv.2305.05928 | [
"https://export.arxiv.org/pdf/2305.05928v1.pdf"
] | 258,588,444 | 2305.05928 | 748a5f07e58c2b206acca19d0e2f912b6f2d64c0 |
WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia
Kenichiro Ando [email protected]
Hitotsubashi University
Riken Aip
Hitotsubashi University
Satoshi Sekine
Hitotsubashi University
Riken Aip
Hitotsubashi University
Mamoru Komachi
Hitotsubashi University
WikiSQE: A Large-Scale Dataset for Sentence Quality Estimation in Wikipedia
Wikipedia can be edited by anyone and thus contains various quality sentences. Therefore, Wikipedia includes some poor-quality edits, which are often marked up by other editors. While editors' reviews enhance the credibility of Wikipedia, it is hard to check all edited text. Assisting in this process is very important, but a large and comprehensive dataset for studying it does not currently exist. Here, we propose WikiSQE, the first large-scale dataset for sentence quality estimation in Wikipedia. Each sentence is extracted from the entire revision history of Wikipedia, and the target quality labels were carefully investigated and selected. WikiSQE has about 3.4 M sentences with 153 quality labels. In the experiment with automatic classification using competitive machine learning models, sentences that had problems with citation, syntax/semantics, or propositions were found to be more difficult to detect. In addition, we conducted automated essay scoring experiments to evaluate the generalizability of the dataset. We show that the models trained on WikiSQE perform better than the vanilla model, indicating its potential usefulness in other domains. WikiSQE is expected to be a valuable resource for other tasks in NLP.
Introduction
Wikipedia is a huge online encyclopedia that is famously editable by any user. It contains various topics and continues to improve its quality through repeated editing by users. However, the quality of Wikipedia has long been a matter of dispute (Giles, 2005;Britannica, 2006;Editorial, 2006;Chesney, 2006) and is a very important issue for natural language processing (NLP). Due to Wikipedia texts being widely used in NLP datasets and being a major source (Rajpurkar et al., 2016;Thorne et al., 2018;Koupaee and Wang, 2018;Bañón et al., 2020), the impact of Wikipedia quality on NLP is significant.
In fact, some poor edits exist, caused by abuse and other factors, and they are often corrected by other editors. However, checking and revising all poor edits is unrealistic, and there have been several previous attempts to support this process by machine. A typical one is Wikipedia's Bot¹, which automatically corrects errant markup formats, substitutes deprecated features, and, in particular, reverts vandalistic edits. However, bots are targeted at naive and superficial errors, while the remaining, more multidimensional and in-depth evaluations are marked up by humans. Other work includes attempts to automatically detect specific labels given by editors (Ganter and Strube, 2009; Redi et al., 2019; Bertsch and Bethard, 2021). The former is unable to make fine-grained evaluations of each sentence, while the latter focuses only on one particular poor-quality aspect.
Hence, we built WikiSQE, a dataset for estimating the quality of sentences in various fine-grained aspects. WikiSQE enables the evaluation of quality aspects that could not be assessed before, including grammatical errors, semantic weirdness, the need for additional information, and many others. Sentences were acquired from the entire revision history of English Wikipedia. Each sentence is assigned one of Wikipedia's inline template labels² by Wikipedia editors as its quality label. We carefully selected the target labels manually and filtered out noisy sentences, resulting in a total of 153 quality labels and about 3.4 M sentences. The 153 quality labels were further organized into five categories that we have defined.
The experiments with automatic classification using competitive machine learning models found that statements requiring additional information are classified with high accuracy, but that sentences with citation, syntactic, or semantic problems are more difficult to detect. Besides, the editors' quality-checking process is itself a form of professional annotation. They have experience in verifying many kinds of expressions and have left the results of their sentence quality assessments in Wikipedia's labels. These are expected to be valuable resources for other tasks in NLP. The dataset and codes used in this work are publicly available.
Related Work
Previous studies on quality estimation in Wikipedia have mainly focused on the article-level (Mola-Velasco, 2011;Bykau et al., 2015;Wong et al., 2021;Asthana et al., 2021). They are aimed at estimating the quality of revisions and articles.
On the other hand, quality estimation studies focusing on the sentence level also exist. The quality labels for detection were created either by manual annotation of small-scale sentence sets (Herzig et al., 2011; Hube and Fetahu, 2019) or by using Wikipedia's inline templates. For the latter, inline templates include "Citation needed" (Redi et al., 2019), which indicates that a sentence needs citations; "Puffery" and "Peacock" (Bertsch and Bethard, 2021), which indicate that the sentence contains exaggerated expressions; and "Weasel words" (Ganter and Strube, 2009), which indicates that the sentence contains ambiguous wording. A detailed comparison of our study with previous studies is given in Table 2. Previous studies cover a subset of Wikipedia quality labels and do not include various labels. Regarding the availability of data, two datasets are not publicly available. In comparison, our dataset contains a large number of labels and sentences and is available for public access. Sentence quality estimation tasks in other domains include grammatical error correction (Ng et al., 2014), linguistic acceptability (Warstadt et al., 2019), automated essay scoring³, etc. These are related to our sentence quality estimation task on Wikipedia. Indeed, our dataset contains labels that point out grammatical errors or semantically weird expressions (see Section 3.5). Therefore, we probed the applicability of WikiSQE to other domains (see Section 5).
Sentence Quality Estimation Dataset
Source Text
Wikipedia is written in a markup language called Wiki markup, and the officially provided dump file of Wikipedia 4 is also written in Wiki markup. To extract sentences for our dataset, we need to convert this dump file to HTML. The most accurate parser is the official MediaWiki 5 , which is also computationally expensive. This study targets the entire edit history of the English version, which requires especially large computational resources and time. Fast-processing third-party HTML conversion tools can be used, but WikiMedia's frequent version upgrades can cause many such tools to fail. For this reason, we use the full edit history data that have already been converted to HTML using MediaWiki in the previous study (Mitrevski et al., 2020) 6 . This data contains the entire edit history of all articles on English Wikipedia before 1 March 2019.
Quality Label
The target quality labels for constructing our dataset are those contained in Wikipedia's inline cleanup template⁷, 344 templates included in all. This collection of inline templates is a set of Wiki markups for editors to point out the poor quality of sentences in Wikipedia. This set includes some templates that are not related to sentence quality, such as those used for talk pages and user pages. Therefore, we carefully hand-picked 147 templates from 344. Next, these inline templates need to be converted to HTML to match the source text. Therefore, we converted these inline templates to HTML using the Wikipedia sandbox, thus obtaining an initial list of quality labels. However, there are still some problems regarding the coverage of our quality label list. The first problem is the temporal differences in the MediaWiki parser. The HTML of the source text was generated by MediaWiki as of the year 2019, thus there is a difference from the HTML output by the current MediaWiki. This means that HTML labels acquired by the current sandbox may not be present in the source text. To address this problem, we manually checked high-frequency inline templates in the source text and added new ones to our quality label list. As a result, six new labels were added, yielding a total of 153 quality labels. Some of them are shown in Table 1. All obtained quality labels and their descriptions are available on the web page of WikiSQE. Second, there are temporal differences in the Wikipedia inline cleanup template. Our Wikipedia inline cleanup template used for the quality labels is from the year 2022, which differs from the 2019 version existing in the source text. Hence, we need to obtain the inline templates as of the year 2019. We acquired the past quality labels by recursively getting the already deprecated inline template pages that redirect to each inline template page. As a result, 1,319 past inline templates linked to 153 quality labels were acquired.
Quality Category
For the analysis, we categorized the 153 quality labels into five types that were further abstracted by their characteristics (Table 1, Table 4). The five categories we defined and their descriptions are as follows.
Citation category contains 59 labels related to the citation and is the most sentence-rich category. They include mentions that some citation is needed, the quality of the reference, problems with the format of the citation, and problems with the sentence cannot be reconstructed from the reference. The majority of the label is "Citation needed", which indicates that the sentence needs some citation, and it accounts for 69% of the labels across the entire dataset.
Syntactic or semantic revision is the category that indicates that grammatical or semantic improvement is needed, to which 26 labels belong. "Clarification needed" is the most common label indicating that the unclear and difficult-to-understand meaning of the text should be clarified. There is also a label "Check quotation syntax", which indicates that the format of the quotation is incorrect.
Information addition is a category that points out the need for some additional information in the sentence. The most common label is "Who?", which indicates that the name of a specific person or organization is not specified. Similarly, there are many labels that require other entities, such as locations or times, to be specified.
Disputed claim is a category indicating that there is no problem with the form of the sentence, but that there is a problem with its proposition. The most common label is Dubious, which indicates that information is unnatural and of dubious truth from the editors. This category also includes statements that are not neutral, have some kind of bias, or are not suitable for an encyclopedia.
Finally, Other is a set of labels that do not belong to any other category. The most common label is "Disambiguation needed", which is marked up when the Wikilink in the sentence is linked to a disambiguation page and needs to be improved. The next most common label is "Sic". This label contains sentences that are faithful to references but are generally grammatically incorrect.
These categories are further classified as Wikipedia-dependent and Wikipedia-independent. The three categories of Syntactic or semantic revision, Information addition, and Disputed claim have characteristics that can be generalized to common sentence quality estimation. Therefore, they are classified as Common label and handled separately in later experiments. The common label contains 78 different labels, which is about half of all labels.
Sentence Extraction and Filtering
To extract sentences from the source text, we first split sentences using pySBD (Sadvilkar and Neumann, 2020) and then extracted sentences whose label was included in the quality label list obtained in Section 3.2. However, the extracted sentences are noisy, as they often contain items such as section titles and non-sentences. Therefore, we filtered out extremely short sentences of less than 10 words, sentences with Wiki markup, and sentences having lower-case initial letters. Each sentence was stripped of citation markers and quality labels, and duplicates were removed, resulting in a final set of 3,417,909 sentences. For later experiments, all quality labels are removed from the sentences here, but since the position of the quality labels is very important information, our public dataset also includes a version in which the sentences retain the quality labels.
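A minimal sketch of this extraction-and-filtering step follows, assuming the revision text has already been reduced to plain text in which each quality label is marked by a placeholder token; pySBD is used for sentence splitting as in the paper, while the marker format, the label set, and all function names here are our own simplification rather than the actual pipeline.

```python
import re
import pysbd

QUALITY_LABELS = {"Citation needed", "Clarification needed", "Who?", "When?", "Dubious"}
# hypothetical marker format: "[[LABEL:Citation needed]]" inserted where the template appeared
LABEL_RE = re.compile(r"\[\[LABEL:([^\]]+)\]\]")

seg = pysbd.Segmenter(language="en", clean=False)

def extract_labeled_sentences(text):
    samples = []
    for sent in seg.segment(text):
        labels = {m.group(1) for m in LABEL_RE.finditer(sent)} & QUALITY_LABELS
        clean = LABEL_RE.sub("", sent).strip()
        # filtering as described: >= 10 words, no residual wiki markup, capitalized start
        if (labels
                and len(clean.split()) >= 10
                and "{{" not in clean and "[[" not in clean
                and clean[:1].isupper()):
            samples.append({"sentence": clean, "labels": sorted(labels)})
    return samples
```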
Analysis of WikiSQE
Examples of sentences included in WikiSQE are shown in Table 3. We found that labels are added to sentences, words, and clauses. For example, "Citation needed", "Clarification needed", "Dubious", and "Neutrality disputed" are, by their characteristics, frequently added to specific spans or clauses in sentences. On the other hand, "Who?", "When?", and "Sic" are often added to words. This indicates that each label needs a different scope when assessing quality; the former cannot be judged without considering the meaning of the entire sentence, while the latter can be judged by considering the words alone. However, note that "Who?", "When?", etc. can be used to ambiguously request additional information about the whole sentence, so it is not an easy task. In the sentence example, "Citation needed" refers to the date when the term kendo was created and points out that it needs to be supplemented by external documentation. This is a typical example of "Citation needed", which is frequently used to enhance credibility by requesting supporting documentation when a specific date, time, or place is mentioned. In the example of "Clarification needed", it seems that the editor requests the specific process by which a road becomes County Road 23. This is a point that cuts deeply into the semantics of the sentence, and "Clarification needed" contains many examples that would be difficult to label without linguistic proficiency. However, these points are very important in making the article more readable. In "Who?" and "When?", the editor requests additional new information for the phrases referring to an ambiguous person and time, respectively. These two examples are typical and important regarding the credibility gained by clarifying the person and time. This is especially important when the ambiguous phrase is crucial to the credibility of the sentence. In the example of "Sic", the title of a song is labeled as an official name, albeit grammatically incorrect. "Sic" is most commonly assigned to such proper nouns, and quotation marks are often used to state that the notation is correct. In "Dubious", the editors take issue with the claim that English is classified as a Romance language. It is very difficult to judge such claims, as they require a high level of intelligent work. It is necessary to evaluate them against one's own common sense and knowledge, a process closely related to fact-checking. In "Neutrality disputed", the editors label statements that seem to overstate the positive impact of street basketball. Since Wikipedia is an encyclopedia, biased statements are not appropriate. However, editors often disagree about which expressions are biased. Therefore, discussions are usually held on the talk page.
Table 4 shows the perplexity of each category calculated using GPT-2 (Radford et al., 2019). Each perplexity reflects the characteristics of the category well, with "Syntactic or semantic revision" having a high perplexity because it contains many sentences with usage that is grammatically and semantically unusual. "Other" has particularly high perplexity because it contains many sentences marked with sic. "Disputed claim" has low perplexity; although it is propositionally unusual, it may be harder to detect because its weirdness does not appear on the surface. "Information addition" requires additional information, but the included sentences themselves have no problem, and thus have low perplexity.
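The perplexity figures in Table 4 can be reproduced in spirit with a standard language-model scoring loop. The following is a generic sketch using Hugging Face's GPT-2 (our illustration; the paper does not specify its exact scoring setup, and the example sentence is ours).

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

@torch.no_grad()
def sentence_perplexity(sentence: str) -> float:
    enc = tokenizer(sentence, return_tensors="pt")
    # cross-entropy of the sentence under GPT-2; exp(loss) is the perplexity
    loss = model(**enc, labels=enc["input_ids"]).loss
    return float(torch.exp(loss))

print(sentence_perplexity("The term kendo first appeared in the early 20th century."))
```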
Experiment
We performed experiments to automatically detect problematic sentences in Wikipedia using WikiSQE.
Dataset
To ensure enough size for the development and test data, we performed the experiments by each quality category and by the top 20 most frequent labels.
For the experiments, we use the sentences included in WikiSQE as positive examples; for the negative examples, we newly extract unlabeled sentences using the same steps as in Section 3.4 and randomly sample them to the same size as the positive examples. For the development and test data, we randomly extract 500 positive and 500 negative examples each and concatenate them to make 1,000 sentences. The remaining data were used as training data.
"Citation needed" contains a significantly larger number of sentences than the other labels, so we downsampled it to 200,000 sentences in order to evaluate the categories as fairly as possible. In addition, we prepare an "All" category that includes all quality labels. It is not just a weighted average, but an independent dataset in which the sentences of all categories are concatenated and shuffled.
Setup
The models used for detection are the competitive large-scale pre-trained models DeBERTaV3 (He et al., 2021), BERT (Devlin et al., 2019), and RoBERTa (Liu et al., 2019). DeBERTa and RoBERTa both used base models, and BERT used the base-uncased model. The maximum number of training epochs is 20, and the model that records the highest F1 in the development set is used as the best model to predict the test set. The learning rate is determined by searching among 1e-6, 5e-6, 1e-5, and 5e-5. The maximum input sequence length is 256 and the batch size is 64. In all setups, we report the average F1 values of the experiments with three different seeds.
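A condensed sketch of this binary detection setup is given below. It is our own illustration: the CSV file names and column names are placeholders, the hyperparameters mirror those listed above, and the exact training code used in the paper may differ.

```python
import numpy as np
from datasets import load_dataset
from sklearn.metrics import f1_score
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

MODEL = "microsoft/deberta-v3-base"   # or "bert-base-uncased", "roberta-base"
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# hypothetical CSV files with columns "sentence" and "label" (1 = has a quality issue)
data = load_dataset("csv", data_files={"train": "train.csv", "validation": "dev.csv"})
data = data.map(lambda b: tok(b["sentence"], truncation=True, max_length=256), batched=True)

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds)}

args = TrainingArguments(
    output_dir="wikisqe-detector",
    num_train_epochs=20,
    learning_rate=1e-5,                 # searched over {1e-6, 5e-6, 1e-5, 5e-5} in the paper
    per_device_train_batch_size=64,
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="f1",
)
trainer = Trainer(model=model, args=args, train_dataset=data["train"],
                  eval_dataset=data["validation"], tokenizer=tok,
                  compute_metrics=compute_metrics)
trainer.train()
```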
Results
The experimental results of the automatic detection for the categories are shown in Table 5. The overall F1 value was 70-85%, but detection of citation problems, sentences requiring syntactic or semantic revisions, and sentences with propositional problems was relatively difficult. The citation problem seems to be difficult using Wikipedia text alone, since the model must consider the contents of the references. Sentences that require syntactic or semantic revisions and sentences with propositional problems are also challenging because they require capturing higher-level semantic aspects. Sentences that require the addition of some information were detected with high accuracy because the expressions to which the information is added are distinctive (e.g., "When?" is used for temporal expressions).
Comparisons among the models showed that BERT had the best overall score, but there were no significant differences in performance. The "All" category provided the lowest performance for any of the models. On the other hand, "Common label" had a higher performance; considering this difference, the low performance of "All" may be the result of training on cross-category data that ignores the clustering of categories. "Common label" clusters similar sentences better than "All" and thus seems to have learned well.
The results of the automatic detection experiment for individual quality labels are shown in Table 6. It can be observed that the F1 score changes a lot depending on the label. Those with F1 scores above 90 are "Sic", "Pronunciation?", "Attribution needed", and "Needs update", which are relatively easy to detect. The reasons are that "Sic" is characterized by grammatical errors and "Pronunciation?" is characterized by frequent sentences containing other languages. "Attribution needed" is a label used to request attribution when quoting someone's statement, etc., and is unique. "Needs update" is used mainly in numerical values of sports articles and is easy to detect. Thus, even for quality labels in the same category, detection performance differs greatly depending on the characteristics of the label.
Generalizability to Automated Essay Scoring Task
We probe whether the quality labels in WikiSQE are useful for similar tasks in other domains. For this purpose, the automated essay scoring task is performed using the Automated Student Assessment Prize (ASAP) dataset.
Table 5: Experimental results of automatic detection for each quality category. Bolded text indicates the best F1-scored model. "Common label" is a set of "Syntactic or semantic revision", "Information addition", and "Disputed claim".
Table 6: Results of automatic detection of the top 20 quality labels in frequency using DeBERTa. Each highlight represents the quality category of Citation, Syntactic or semantic revision, Information addition, Disputed claim, or Other.
Dataset
The automated essay scoring task has been studied to assist with essay exams that aim to assess students' writing skills. Manually scoring student essays is very time-consuming, so automated scoring is important. ASAP is the most well-known data set and contains eight essay prompts in different genres. Each essay is scored by teachers, and the model aims to automatically predict their scores. The difficulty with this dataset is that each prompt has a different range of scores from 0-3 to 0-60. In addition, the quadratic weighted kappa is commonly used as a measurement for evaluation, and this study follows it.
Preparation of the dataset basically follows prior research (Taghipour and Ng, 2016;Dong et al., 2017;Yang et al., 2020;Kumar et al., 2022). We split the validation and test sets from the original training data, since the test data of ASAP is not publicly available. We use the dataset provided by Taghipour and Ng (2016) with a 60/20/20 split for train, validation, and test set.
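Quadratic weighted kappa can be computed directly with scikit-learn; the small sketch below (ours, with made-up example scores) illustrates the metric for a single prompt.

```python
from sklearn.metrics import cohen_kappa_score

gold = [8, 9, 7, 10, 6, 9]          # example integer essay scores (ours)
pred = [8, 8, 7, 10, 7, 9]          # system scores rescaled to the prompt's range
qwk = cohen_kappa_score(gold, pred, weights="quadratic")
print(f"QWK = {qwk:.4f}")
```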
Models
The baseline model for this experiment was DeBERTa. We prepare several different versions of DeBERTa with additional training on WikiSQE. The additional training was performed as linear regression with all layers of DeBERTa as trainable parameters, mapping the positive label to 1.0 and the negative label to 0.0 in the dataset created in Section 4.1. The reason for adopting linear regression is that it performed better on ASAP than solving a classification task. DeBERTa_Cit was additionally trained with "Citation" category data from WikiSQE. Similarly, DeBERTa_Syn, DeBERTa_Inf, DeBERTa_Dis, DeBERTa_Oth, DeBERTa_Com, and DeBERTa_All were additionally trained on the "Syntactic or semantic revision", "Information addition", "Disputed claim", "Other", "Common label", and "All" categories, respectively. Prior studies developed more complex models, but we chose the simpler DeBERTa because we were eager to confirm the effect of the additional training on WikiSQE itself.
Setup
We use the DeBERTaV3 base as DeBERTa. The maximum number of training epochs is 20, and the model that records the highest quadratic weighted kappa on the development set is used as the best model to predict the test set. The learning rate is determined by searching among 1e-6, 5e-6, 1e-5, and 5e-5. The maximum input sequence length is 256 and the batch size is 64. Following prior studies, all gold standard scores in ASAP are normalized to the range [0, 1]. The system-generated normalized scores are rescaled to the original prompt-specific scale to calculate a quadratic weighted kappa score.
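A small sketch of this evaluation step is shown below: normalized predictions are rescaled back to the prompt-specific score range and compared with quadratic weighted kappa. The score range and toy values are illustrative only.

```python
# Rescale [0, 1] predictions to the prompt's integer scale and compute QWK.
import numpy as np
from sklearn.metrics import cohen_kappa_score

def rescale(normalized, low, high):
    """Map normalized scores in [0, 1] back to the prompt-specific integer scale."""
    return np.rint(np.asarray(normalized) * (high - low) + low).astype(int)

gold_norm = [0.0, 1.0, 2/3, 1/3]        # gold scores already normalized to [0, 1]
pred_norm = [0.05, 0.90, 0.70, 0.20]    # system outputs in [0, 1]

low, high = 0, 3                        # e.g. a prompt scored on a 0-3 scale
gold = rescale(gold_norm, low, high)
pred = rescale(pred_norm, low, high)

qwk = cohen_kappa_score(gold, pred, weights="quadratic")
print(f"quadratic weighted kappa: {qwk:.3f}")
```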
Result
The experimental results are shown in Table 7. All models trained on WikiSQE outperformed the original DeBERTa, which suggests that WikiSQE adapts to other domains. In particular, the model trained with the "Common label" category performed best, improving performance by 13% over the original model. This result supports our hypothesis that "Common label" captures general concepts. The model trained on all labels also performed well, while the models using a single category, except those using "Citation", performed poorly. This differs from the per-category classification experiment, but it appears that being exposed to diverse aspects of sentence quality helped the model's predictions.
Conclusion and Future Work
In this study, we constructed a sentence quality estimation dataset for Wikipedia. We obtained a total of 153 quality labels using Wikipedia's inline templates and applied them to the entire edit history of English Wikipedia, resulting in 3,417,955 sentences. Sentences were labeled with a variety of quality aspects, which we further classified into five upper categories. In automatic detection experiments on the coarse-grained categories, we found that the model was able to detect poor-quality sentences with an F1 score of 70-85%. We also found that the "Citation" category, which is difficult to detect from Wikipedia articles alone, and the "Syntactic or semantic revision" and "Disputed claim" categories, which require a high level of semantic interpretation from the model, are relatively difficult to detect. In the automatic detection experiments on the fine-grained labels, we found that detection performance varied greatly depending on the characteristics of each label. In the automated essay scoring experiments probing the usefulness of the dataset for similar tasks in other domains, we found that training the model on WikiSQE improved its performance. This result suggests the generalizability of WikiSQE. In addition, "Common label" was found to improve prediction performance substantially, suggesting that the quality labels in "Common label" are Wikipedia-independent. Regarding generalizability, we experimented with the automated essay scoring task, but WikiSQE could also be applied to other sentence quality evaluation tasks, such as grammatical error correction. In particular, we leave for future work the application of a zero-shot transfer learning approach using WikiSQE to other tasks.
Our limitations are mainly twofold regarding test data. First, we could not create test data with human evaluations. Since our aim is to automatically generate sentences appropriate for Wikipedia, we need highly experienced English Wikipedia editors who are well-versed in the usage of all inline templates. As it was difficult to find several such experts, this study could not create gold test data.
Second, we did not evaluate detection performance under the realistic distribution on Wikipedia at testing time. In this paper, we constructed the same amount of positive and negative examples, but in reality positive examples are far rarer than negative ones. This means that the model must find only a few problematic sentences among a huge number of sentences, and naively experimenting in this setting would not capture the characteristics of WikiSQE. For this reason, we did not experiment with a realistic distribution. This issue could be mitigated with a pipeline strategy such as pre-filtering.
Ethical Considerations
This dataset may contain offensive, biased, or discriminatory sentences due to the high volume of user edits. However, since the purpose of this study is to evaluate sentences comprehensively, including such a perspective, no filtering was performed.
Table 1: Examples of quality labels in Wikipedia that are classified into five categories. Four interesting labels are selected from the top five most frequent labels in each category, with the total number of sentences and their description.

Citation
  Citation needed: 2,373,911. Citation is required to verify the content.
  Dead link: 84,101. External link is broken.
  Not in citation given: 35,278. Failed to verify the statement's content from the source.
  Original research?: 69,449. Cited source is not verified by a third party.
Syntactic or semantic revision
  Clarification needed: 138,739. Statement is difficult to understand.
  Vague: 13,373. Contains vague words or statements.
  Check quotation syntax: 1,272. Quotation syntax does not match the guidelines.
  Weasel words: 1,055. Contains weasel words.
Information addition
  Who?: 91,924. Contains claims that do not identify individuals.
  When?: 72,920. Time period is too vague or ambiguous.
  Pronunciation?: 31,517. Audio or text needs pronunciation.
  Which?: 23,387. References to organizations or other things are vague.
Disputed claim
  Dubious: 45,920. Sourced statement, but one that seems dubious or unlikely.
  Neutrality disputed: 8,465. Statement seems to be biased.
  Relevant?: 3,494. Uncertain whether a statement is relevant to the article, or encyclopedic.
  Disputed: 2,433. Statement whose truth or factuality is in dispute by editors.
Other
  Disambiguation needed: 107,953. Contains a wikilink which should be linked to a specific page.
  Sic: 50,658. Textual error in the statement is copied exactly from the source.
  Needs update: 19,550. Statement needs to be updated based on recent events.
  Specify: 7,069. Sourced statement, but not sure of alignment with sources.
Total: 3,417,955

In an automated essay scoring experiment to test the generalizability of the dataset, the pre-trained model, additionally trained on WikiSQE, outperformed the vanilla one. It indicates the usefulness of WikiSQE for other domains besides Wikipedia.
Similar tasks for sentence quality estimation (inline template, # sents, available?):
  Citation needed: 36,140, No
  Weasel words: 500, No
  Peacock, Puffery: 284, Yes
Table 2: Comparison with previous studies. "Available?" indicates the public availability of the data.
Table 3: Examples of quality labels in Wikipedia and their sentences.
  Citation needed: According to Japanese records, the term kendo is coined in Japan on August 1, 1919. [citation needed]
  Dead link: Player profile at LFChistory.net [dead link]
  Clarification needed: It was later given to the county, and has a possibility of becoming County Road 23. [clarification needed]
  Vague: Pisces is perhaps [vague] the first hit rock or pop album to feature the Moog.
  Who?: However, many analysts [who?] are finding that as Google grows, the company is becoming more "corporate".
  When?: Over the last thirty years, [when?] a debate has been ongoing whether a tiny number of Ukrainians settled in Canada before 1891.
  Disambiguation needed: Attala County, Mississippi: Attala is named for Attala [disambiguation needed], a fictional Native American heroine.
  Sic: He also notably sung 'Digital Surviver [sic]', theme of Akiyama Ryo from Digimon Tamers.
  Dubious: Some academic linguists believe the modern English Language is half-Romance influenced (the evident Norman French influences), thus can be classified a Romance language. [dubious]
  Neutrality disputed: Streetball is a very popular game worldwide, and a fun way for young people to keep out of trouble and avoid problems such as juvenile crime and drugs. [neutrality disputed]
Table 4: Statistics of the five sentence quality categories. # Labels denotes the number of labels, # Tokens the average number of words per sentence, and Perplexity the average perplexity of sentences.
  Citation: count 2,687,535; # labels 59; avg. # tokens 26.83; perplexity 90.48
  Syntactic or semantic revision: count 160,350; # labels 26; avg. # tokens 27.01; perplexity 110.53
  Information addition: count 310,853; # labels 32; avg. # tokens 27.82; perplexity 79.37
  Disputed claim: count 70,202; # labels 20; avg. # tokens 28.30; perplexity 82.22
  Other: count 188,969; # labels 16; avg. # tokens 34.01; perplexity 110.92
Table 7: Results of the automated essay scoring experiment. P1-P8 represent the essay prompts, and each score is a quadratic weighted kappa. Bolded text indicates the model that performed best on each prompt.
  Models (P1, P2, P3, P4, P5, P6, P7, P8 | Average):
  DeBERTa:     .444 .429 .302 .703 .696 .572 .607 .388 | .518
  DeBERTa Cit: .430 .441 .550 .606 .729 .656 .676 .577 | .583
  DeBERTa Syn: .424 .449 .363 .692 .678 .559 .603 .382 | .519
  DeBERTa Inf: .422 .448 .354 .709 .685 .566 .602 .388 | .522
  DeBERTa Dis: .448 .431 .296 .700 .700 .568 .620 .429 | .524
  DeBERTa Oth: .443 .429 .347 .710 .682 .576 .605 .395 | .523
  DeBERTa Com: .434 .449 .552 .610 .778 .625 .670 .579 | .587
  DeBERTa All: .429 .443 .552 .592 .708 .656 .671 .576 | .578
https://www.kaggle.com/c/asap-aes
4 https://dumps.wikimedia.org/enwiki/
5 https://www.mediawiki.org/wiki/MediaWiki
6 Creative Commons Attribution 3.0 Unported
7 https://en.wikipedia.org/wiki/Category:Inline_cleanup_templates
Sumit Asthana, Sabrina Tobar Thommel, Aaron Lee Halfaker, and Nikola Banovic. 2021. Automatically labeling low quality content on Wikipedia by leveraging patterns in editing behaviors. Proceedings of the ACM on Human-Computer Interaction, 5(CSCW2).
Marta Bañón, Pinzhen Chen, Barry Haddow, Kenneth Heafield, Hieu Hoang, Miquel Esplà-Gomis, Mikel L. Forcada, Amir Kamran, Faheem Kirefu, Philipp Koehn, Sergio Ortiz Rojas, Leopoldo Pla Sempere, Gema Ramírez-Sánchez, Elsa Sarrías, Marek Strelec, Brian Thompson, William Waites, Dion Wiggins, and Jaume Zaragoza. 2020. ParaCrawl: Web-scale acquisition of parallel corpora. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4555-4567.
Amanda Bertsch and Steven Bethard. 2021. Detection of puffery on the English Wikipedia. In Proceedings of the Seventh Workshop on Noisy User-generated Text, pages 329-333.
Encyclopaedia Britannica. 2006. Fatally flawed: Refuting the recent study on encyclopedic accuracy by the journal Nature. Chicago, Estados Unidos: Encyclopaedia Britannica.
Siarhei Bykau, Flip Korn, Divesh Srivastava, and Yannis Velegrakis. 2015. Fine-grained controversy detection in Wikipedia. In 2015 IEEE 31st International Conference on Data Engineering, pages 1573-1584.
Thomas Chesney. 2006. An empirical examination of Wikipedia's credibility. First Monday, 11(11).
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171-4186.
Fei Dong, Yue Zhang, and Jie Yang. 2017. Attention-based recurrent convolutional neural network for automatic essay scoring. In Proceedings of the 21st Conference on Computational Natural Language Learning (CoNLL 2017), pages 153-162.
Editorial. 2006. Britannica attacks. Nature, 440:582.
Viola Ganter and Michael Strube. 2009. Finding hedges by chasing weasels: Hedge detection using Wikipedia tags and shallow linguistic features. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers, pages 173-176, Suntec, Singapore.
Jim Giles. 2005. Internet encyclopaedias go head to head. Nature, 438:900-901.
Pengcheng He, Jianfeng Gao, and Weizhu Chen. 2021. DeBERTaV3: Improving DeBERTa using ELECTRA-style pre-training with gradient-disentangled embedding sharing. arXiv.
Livnat Herzig, Alex Nunes, and Batia Snir. 2011. An annotation scheme for automated bias detection in Wikipedia. In Proceedings of the 5th Linguistic Annotation Workshop, pages 47-55.
Christoph Hube and Besnik Fetahu. 2019. Neural based statement classification for biased language. In Proceedings of the Twelfth ACM International Conference on Web Search and Data Mining, pages 195-203.
Mahnaz Koupaee and William Yang Wang. 2018. WikiHow: A large scale text summarization dataset. arXiv preprint.
Rahul Kumar, Sandeep Mathias, Sriparna Saha, and Pushpak Bhattacharyya. 2022. Many hands make light work: Using essay traits to automatically score essays. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1485-1495.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv.
Blagoj Mitrevski, Tiziano Piccardi, and Robert West. 2020. WikiHist.html: English Wikipedia's full revision history in HTML format. In Proceedings of the International AAAI Conference on Web and Social Media, volume 14, pages 878-884.
Santiago M. Mola-Velasco. 2011. Wikipedia vandalism detection. In Proceedings of the 28th International Conference on World Wide Web Companion, pages 391-396.
Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1-14.
Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners.
Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2383-2392.
Miriam Redi, Besnik Fetahu, Jonathan Morgan, and Dario Taraborelli. 2019. Citation needed: A taxonomy and algorithmic assessment of Wikipedia's verifiability. In Proceedings of the 28th International Conference on World Wide Web Companion, pages 1567-1578.
Nipun Sadvilkar and Mark Neumann. 2020. PySBD: Pragmatic sentence boundary disambiguation. In Proceedings of Second Workshop for NLP Open Source Software, pages 110-114, Online.
Kaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1882-1891.
James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809-819.
Alex Warstadt, Amanpreet Singh, and Samuel R. Bowman. 2019. Neural network acceptability judgments. Transactions of the Association for Computational Linguistics, 7:625-641.
KayYen Wong, Miriam Redi, and Diego Saez-Trumper. 2021. Wiki-reliability: A large scale dataset for content reliability on Wikipedia. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 2437-2442.
Ruosong Yang, Jiannong Cao, Zhiyuan Wen, Youzheng Wu, and Xiaodong He. 2020. Enhancing automated essay scoring performance via fine-tuning pre-trained language models with combination of regression and ranking. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1560-1569.
| [] |
[
"Discrete sticky couplings of functional autoregressive processes",
"Discrete sticky couplings of functional autoregressive processes"
] | [
"Alain Durmus \nUniversité Paris-Saclay\nENS Paris-Saclay\nCNRS\nCentre BorelliF-91190Gif-sur-YvetteFrance\n",
"Andreas Eberle \nInstitute for Applied Mathematics\nUniversity of Bonn\nGermany\n",
"Aurélien Enfroy \ndépartement CITI\nSamovar, Télécom SudParis\nTIPIC\nInstitut Polytechnique de Paris\nPalaiseau\n",
"Arnaud Guillin \nLaboratoire de Mathématiques Blaise Pascal\nUniversité Clermont-Auvergne\nFrance\n",
"Pierre Monmarché \nLJLL -Sorbonne Université\nFrance\n"
] | [
"Université Paris-Saclay\nENS Paris-Saclay\nCNRS\nCentre BorelliF-91190Gif-sur-YvetteFrance",
"Institute for Applied Mathematics\nUniversity of Bonn\nGermany",
"département CITI\nSamovar, Télécom SudParis\nTIPIC\nInstitut Polytechnique de Paris\nPalaiseau",
"Laboratoire de Mathématiques Blaise Pascal\nUniversité Clermont-Auvergne\nFrance",
"LJLL -Sorbonne Université\nFrance"
] | [] | In this paper, we provide bounds in Wasserstein and total variation distances between the distributions of the successive iterates of two functional autoregressive processes with isotropic Gaussian noise of the formwhere ρ is an appropriate weighted Wasserstein distance or a V -distance, uniformly in the parameter γ, and on ρ(π γ ,π γ ), where π γ andπ γ are the respective stationary measures of the two processes. The class of considered processes encompasses the Euler-Maruyama discretization of Langevin diffusions and its variants. The bounds we derive are of order γ as γ → 0. To obtain our results, we rely on the construction of a discrete sticky Markov chain (W (γ) k ) k∈N which bounds the distance between an appropriate coupling of the two processes. We then establish stability and quantitative convergence results for this process uniformly on γ. In addition, we show that it converges in distribution to the continuous sticky process studied in[20,18]. Finally, we apply our result to Bayesian inference of ODE parameters and numerically illustrate them on two particular problems. 1 | null | [
"https://arxiv.org/pdf/2104.06771v1.pdf"
] | 233,231,573 | 2104.06771 | 87fa0508ef52e34e1da2913a3ca5644b4a383c03 |
Discrete sticky couplings of functional autoregressive processes
April 15, 2021
Alain Durmus
Université Paris-Saclay
ENS Paris-Saclay
CNRS
Centre BorelliF-91190Gif-sur-YvetteFrance
Andreas Eberle
Institute for Applied Mathematics
University of Bonn
Germany
Aurélien Enfroy
département CITI
Samovar, Télécom SudParis
TIPIC
Institut Polytechnique de Paris
Palaiseau
Arnaud Guillin
Laboratoire de Mathématiques Blaise Pascal
Université Clermont-Auvergne
France
Pierre Monmarché
LJLL -Sorbonne Université
France
In this paper, we provide bounds in Wasserstein and total variation distances between the distributions of the successive iterates of two functional autoregressive processes with isotropic Gaussian noise of the formwhere ρ is an appropriate weighted Wasserstein distance or a V -distance, uniformly in the parameter γ, and on ρ(π γ ,π γ ), where π γ andπ γ are the respective stationary measures of the two processes. The class of considered processes encompasses the Euler-Maruyama discretization of Langevin diffusions and its variants. The bounds we derive are of order γ as γ → 0. To obtain our results, we rely on the construction of a discrete sticky Markov chain (W (γ) k ) k∈N which bounds the distance between an appropriate coupling of the two processes. We then establish stability and quantitative convergence results for this process uniformly on γ. In addition, we show that it converges in distribution to the continuous sticky process studied in[20,18]. Finally, we apply our result to Bayesian inference of ODE parameters and numerically illustrate them on two particular problems. 1
Introduction
We are interested in this paper in Markov chains (Y k ) k∈N starting from y ∈ R d and defined by recursions of the form
Y_{k+1} = T_γ(Y_k) + σ√γ Z_{k+1} ,    (1)
where σ > 0, γ ∈ (0,γ], for someγ > 0, {T γ : γ ∈ (0,γ]} is a family of continuous functions from R d to R d and (Z k ) k 1 is a sequence of i.i.d. d-dimensional standard Gaussian random variables. Note that the Euler-Maruyama discretization of overdamped Langevin diffusions or of general Komolgorov processes and its variants belong to this class of processes and in that setting γ corresponds to the discretization step size. Indeed, the Euler scheme consists in taking for any γ ∈ (0,γ], T γ (y) = y + γb(y) for some b : R d → R d . When b = −∇U for some potential U , these methods are now popular Markov Chain Monte Carlo algorithms to sample from the target density x → e −U (x) / R d e −U (y) dy. However, in some applications, explictly computing ∇U is not an option and further numerical methods must be implemented which come with additional bias since only approximations of ∇U can be used in (1). In this paper, we precisely study this additional source of error. In particular, based on a chain defined by (1), we consider a second Markov chain (Ỹ k ) k∈N defined by the recursioñ
Ỹ_{k+1} = T̃_γ(Ỹ_k) + σ√γ Z̃_{k+1} ,    (2)
where {T γ : γ ∈ (0,γ]} is a family of functions from R d to R d such that for any γ,T γ is an approximation of T γ in a sense specified below, and (Z k ) k 1 is a sequence of i.i.d. ddimensional standard Gaussian random variables potentially correlated with (Z k ) k 1 . We will enforce below conditions that ensure that both (Y k ) k∈N and (Ỹ k ) k 1 are geometrically ergodic, and denote by π γ andπ γ their invariant probability measures respectively. If for any γ > 0,T γ is close in some sense to T γ , the overall process (Ỹ k ) k∈N can be seen as a perturbed version of (Y k ) k∈N , andπ γ is expected to be close to π γ . The main goal of this paper is to establish quantitative bounds on the Wasserstein and total variation distance between the finite-time laws of the two processes and between their equilibria. The study of perturbation of Markov processes has been the subject of many existing works; see e.g. [33,28,21,32,26] and the references therein. However, it turns out that these existing results do not apply as such. We pay particular attention to the dependency of these estimates on γ. Indeed, in the case of the Euler scheme of a continuous-time diffusion, π γ and the law of Y t/γ for some t > 0 converge to the invariant measure and law at time t of the continuous-time process, and similarly for the perturbed chain. Hence, as γ → 0, our estimates should not degenerate, but rather yield quantitative estimates for the continuous time process. More precisely, the present paper is the discrete-time counterpart of the study conducted by [18] in the continuous-time case, and as γ vanishes we recover estimates that are consistent with those of [18].
As in [18], our results are based on the construction of a suitable coupling of the processes, i.e. a simultaneous construction of a pair (Y k ,Ỹ k ) k∈N of non-independent chains that marginally follow (1) and (2) respectively and are designed to get and stay close to each other. We use the maximal reflection coupling for Gaussian laws, namely at each step the two chains are coupled to merge with maximal probability and, otherwise, we use a reflection (see Section 2.2 below). Estimates on the laws of the chains then follow from the study of ( Y k −Ỹ k ) k∈N , which is itself based on the analysis of a Markov chain (W k ) k∈N on [0, +∞) that is such that, by design of the coupling, Y k −Ỹ k k W k for all k ∈ N. Thus, the question of establishing bounds between the laws of two d-dimensional Markov chains is reduced to the study of a single one-dimensional chain. Besides, together with the Markov property, the auxiliary chain has some nice features. At first, it is stochastically monotonous, i.e. if (W k ) k∈N is a Markov chain associated to the same Markov kernel as (W k ) k∈N and such that W 0 W 0 , then for any k ∈ N, W k is stochastically dominated by W k , i.e. for any t 0, P(W k t) P(W k t). Secondly, (W k ) k∈N has an atom at 0.
The main results and main steps of this study are the following. First, we prove that (W k ) k∈N admits a unique invariant measure and that, independently of γ, the moments and mass on (0, +∞) of this equilibrium are small when the difference between T γ andT γ is small. Secondly, we establish the geometric convergence of the chain towards its equilibrium, at an explicit rate (stable as γ → 0). Finally, we prove that, as γ → 0, the chain (W k ) k∈N converges in law to the continuous-time sticky diffusion that played the same role in [18]. This last part is not necessary to get estimates on the finite-time and equilibrium laws of (1) and (2) for a given γ > 0, but it sheds some new light on the limit sticky process which, in [18], is constructed as the limit of continuous-time diffusions with diffusion coefficients that vanish at zero, rather than discrete-time chains. In some sense, (W k ) k∈N can be seen as a discretization scheme for the sticky process, see also [2] on this topic.
Besides the obvious continuous/discrete time difference between [18] and the present work, let us emphasize a few other distinctions. First, in [18], the one-dimensional sticky process has an explicit invariant measure. This is not the case in our framework, which makes the derivation of the bounds on the moments of the equilibrium a bit more involved. Secondly, in [18], although it is proven that the mass at zero and the first moment of the law of the sticky diffusion converge to their value at equilibrium (which is sufficient to get estimates on the laws of the two initial d-dimensional processes), the question of long-time convergence is not addressed for the sticky diffusion, whereas our long-time convergence results for (W k ) k 0 together with its convergence as γ → +∞ furnish an explicit convergence rate for the sticky diffusion. The proof of the stability of the mass at zero and of the first moment in [18] relies on a concave modification of the distance (such as used e.g. in [16]), which is contracted by the chain before it hits zero. This method does not apply to, say, the second moment of the process. As a consequence, the results of [18] only concern the total variation and W 1 Wasserstein distances, while we consider a broader class of distances.
Finally, our theoretical results are illustrated through numerical experiments. In particular, we study the influence of the discretization scheme generally needed to perform Bayesian inference for parameters of Ordinary Differetial Equations (ODEs).
Notation and convention
We denote by B(R d ), the Borel σ-field of R d endowed with the Euclidean distance and by ϕ σ 2 the density of the one-dimensional Gaussian distribution with zero-mean and variance σ 2 > 0. In the case σ = 1, we simply denote this density by ϕ. ∆ R d stands for the subset {(x, x) ∈ R 2d : x ∈ R d } of R d and for any A ⊂ R d , A c for its complement. Let µ and ν be two σ-finite measures on (R d , B(R d )). If ν is absolutely continuous with respect to µ, we write ν µ. We say that ν and µ are equivalent if and only if ν µ and µ ν. We denote by · and · the floor and ceiling function respectively. For d, n ∈ N * , M d,n (R) stands for the set of d × n real matrices. We denote by C k (U, A) the set of k times continuously differentiable functions from an open set U ⊂ R m to A ⊂ R p . We use the convention p k=n = 0 and p k=n = 1 for n < p, n, p ∈ N, and a/0 = +∞ for a > 0.
Sticky reflection coupling
Main result
The Markov kernels R_γ associated with (Y_k)_{k∈N} defined in (1) are given for any y ∈ R^d, A ∈ B(R^d) by
R_γ(y, A) = (2πσ²γ)^{-d/2} ∫_{R^d} 1_A(y') exp(−‖y' − T_γ(y)‖²/(2σ²γ)) dy' .
Note that R̃_γ associated with (Ỹ_k)_{k∈N} is given by the same expression upon replacing T_γ by T̃_γ. We consider the following assumption on the family {T_γ : γ ∈ (0,γ̄]}. This condition will ensure that R_γ is geometrically ergodic (see Proposition 1) and it will be important to derive our main results regarding the distance of R_γ^k and R̃_γ^k, for k ∈ N.
H1. There exist R_1, L ≥ 0 and m > 0 such that for any γ ∈ (0,γ̄], there exists a non-decreasing function τ_γ : [0, +∞) → [0, +∞) satisfying τ_γ(0) = 0, ‖T_γ(x) − T_γ(x̃)‖ ≤ τ_γ(‖x − x̃‖) for any x, x̃ ∈ R^d, and
sup_{r∈(0,+∞)} {τ_γ(r)/r} ≤ 1 + γL ,    sup_{r∈(R_1,+∞)} {τ_γ(r)/r} ≤ 1 − γm .    (3)
In addition, sup_{γ∈(0,γ̄]} ‖T_γ(0)‖ < +∞.
Note that the condition that for any γ ∈ (0,γ], τ γ is non-decreasing can be omitted upon replacing in our study τ γ by the affine majorant
τ̃_γ : r ↦ (1 + Lγ) r if r ∈ [0, R_1] , and τ̃_γ : r ↦ (1 + Lγ) R_1 + (1 − mγ)(r − R_1) otherwise .    (4)
Indeed, by definition and (3), for any r ∈ [0, +∞), τ_γ(r) ≤ τ̃_γ(r), therefore for any x, x̃ ∈ R^d, ‖T_γ(x) − T_γ(x̃)‖ ≤ τ̃_γ(‖x − x̃‖). In addition, an easy computation leads to, setting R_2 = 2R_1(L + m)/m,
sup_{r∈(0,+∞)} {τ̃_γ(r)/r} ≤ 1 + γL ,    sup_{r∈(R_2,+∞)} {τ̃_γ(r)/r} ≤ 1 − γm/2 .
Then, τ̃_γ satisfies H1 and is non-decreasing. Note that H1 implies that for any r ∈ [0, +∞) and γ ∈ (0,γ̄], τ_γ(r) ≤ (1 + γL)r, therefore T_γ is (1 + γL)-Lipschitz. The second condition in (3) ensures that for any γ ∈ (0,γ̄], T_γ is a contraction at large distances, i.e. for any x, x̃ ∈ R^d, ‖T_γ(x) − T_γ(x̃)‖ ≤ (1 − γm)‖x − x̃‖ if ‖x − x̃‖ ≥ R_1.
The assumption H1 holds for the Euler scheme applied to diffusions with scalar covariance matrices, i.e. (1) with T γ (x) = x + γb(x) and a drift function b :
R^d → R^d, if, for some L_b, m_b, R_b > 0, b is L_b-Lipschitz continuous and satisfies ⟨x − y, b(x) − b(y)⟩ ≤ −m_b ‖x − y‖², for all x, y ∈ R^d with ‖x − y‖ ≥ R_b. Indeed, this implies that for any x, y ∈ R^d, ‖T_γ(x) − T_γ(y)‖ ≤ (1 + L_b γ)‖x − y‖ and, provided γ ∈ (0, m_b/L_b²] and ‖x − y‖ ≥ R_b, ‖T_γ(x) − T_γ(y)‖² ≤ (1 − m_b γ)‖x − y‖². Therefore, it suffices to consider τ_γ defined by (4) with L = L_b, m = m_b/2 and R_1 = R_b.
Our results will be stated in terms of Wasserstein distances and V-norms, whose definitions are the following. Consider a measurable cost function c : R^{2d} → [0, ∞). Then the associated Wasserstein distance W_c is given for two probability measures µ, ν on R^d by
W_c(ν, µ) = inf_{π∈Π(ν,µ)} ∫_{R^{2d}} c(x, y) π(dx, dy) ,
where Π(ν, µ) is the set of transference plans or couplings between ν and µ, namely the set of probability measures on R^{2d} whose first and second d-dimensional marginals are ν and µ respectively. In the particular case where c(x, y) = 1_{∆^c_{R^d}}(x, y), W_c is simply the total variation distance ‖·‖_TV. For V : R^d → [1, +∞), the choice c(x, y) = 1_{∆^c_{R^d}}(x, y){V(x) + V(y)} yields the V-norm (see [11, Theorem 19.1.7]), i.e. W_c(ν, µ) = ‖ν − µ‖_V. Finally, for c(x, y) = ‖x − y‖^p with p ∈ [1, +∞), W_c is the p-th power of the usual Wasserstein distance of order p.
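The following is a small numerical illustration of these distances in dimension d = 1, estimated from samples: scipy provides an empirical W_1, and total variation is approximated on a histogram grid. The Gaussian parameters are illustrative.

```python
# Empirical W1 and a histogram estimate of total variation between two 1-D samples.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=100_000)    # sample from N(0, 1)
y = rng.normal(0.5, 1.0, size=100_000)    # sample from N(0.5, 1)

print("W1 estimate:", wasserstein_distance(x, y))        # exact value is 0.5

bins = np.linspace(-6.0, 6.5, 200)
p, _ = np.histogram(x, bins=bins, density=True)
q, _ = np.histogram(y, bins=bins, density=True)
tv = 0.5 * np.sum(np.abs(p - q)) * (bins[1] - bins[0])    # TV = sup_A |mu(A) - nu(A)|
print("TV estimate:", tv)                                  # exact value is 2*Phi(0.25) - 1 ~ 0.197
```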
Following the same line as the proof of [9, Theorem 15], we can show that H1 implies that the Markov kernel R_γ is V_c-uniformly geometrically ergodic, where for any c > 0 and x ∈ R^d, V_c(x) = exp(c‖x‖²), with a convergence rate that scales linearly with the step size γ.
Proposition 1. Assume H1. Then, setting γ̄_1 = γ̄ ∧ {1/m}, for any γ ∈ (0,γ̄_1], R_γ admits a unique stationary distribution π_γ. In addition, there exist c > 0, ρ ∈ [0, 1) and C ≥ 0 such that for any x ∈ R^d, k ∈ N and γ ∈ (0,γ̄_1],
‖δ_x R_γ^k − π_γ‖_{V_c} ≤ C ρ^{kγ} V_c(x) .
Proof. This result is a simple consequence of [9,Corollary 11]. For completeness, its proof is included in Section 5.1.1.
Note that this result can be made quantitative, and other convergence results in total variation and Wasserstein distance of order p ∈ [1, +∞) can also be established following the same lines as the proof of [9,Corollary 14]. However, these results are out of the scope of the present paper and would be simple adaptations of those in [9] or [17].
We now consider an assumption which quantifies the perturbation associated withT γ relatively to T γ , for γ ∈ (0,γ].
H2. There exists c_∞ > 0 such that sup_{x∈R^d} ‖T_γ(x) − T̃_γ(x)‖ ≤ γ c_∞ for all γ ∈ (0,γ̄].
Example 2.
The assumption H2 holds for the Euler scheme applied to diffusions with scalar covariance matrices, i.e. (1) and (2) with
T_γ(x) = x + γ b(x)  and  T̃_γ(x) = x + γ b̃(x) ,    (5)
under the condition that
sup_{x∈R^d} ‖b(x) − b̃(x)‖ ≤ c_∞ .
This setting is exactly the one we introduced to motivate our study. In particular, in the case where b = −∇U for some potential U ,b may correspond to a numerical approximation of this gradient.
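As a concrete illustration of this setting, the sketch below simulates the two recursions (1)-(2) with Euler maps T_γ(x) = x + γ b(x) and a bounded drift perturbation. The quadratic potential, the perturbation, and all numerical constants are toy choices made for illustration only; the two chains are driven here by the same noise, which is one admissible choice of correlated noises.

```python
# Minimal simulation of the chains (1)-(2) in the setting of Example 2.
import numpy as np

rng = np.random.default_rng(1)
d, gamma, sigma, n_steps = 2, 0.01, 1.0, 10_000
c_inf = 0.1

def b(x):                                  # exact drift: -grad U for U(x) = |x|^2 / 2
    return -x

delta = np.zeros(d); delta[0] = c_inf      # bounded perturbation, ||b - b_tilde|| = c_inf
def b_tilde(x):
    return b(x) + delta

y, y_tilde = np.zeros(d), np.zeros(d)
for _ in range(n_steps):
    z = rng.standard_normal(d)             # shared noise (a synchronous coupling of the noises)
    y       = y       + gamma * b(y)             + sigma * np.sqrt(gamma) * z
    y_tilde = y_tilde + gamma * b_tilde(y_tilde) + sigma * np.sqrt(gamma) * z

print("final gap between the two chains:", np.linalg.norm(y - y_tilde))
```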
Note that compared to T_γ, γ ∈ (0,γ̄], we do not assume any smoothness condition on T̃_γ. More precisely, we do not assume that T̃_γ satisfies H1. Regarding the ergodicity properties of R̃_γ associated with T̃_γ, γ ∈ (0,γ̄], we have the following result.
Proposition 3. Assume H1 and H2 and set γ̄_1 = γ̄ ∧ {1/m}. Then, for any γ ∈ (0,γ̄_1], R̃_γ admits a unique stationary distribution π̃_γ. In addition, there exists c > 0 such that for any γ ∈ (0,γ̄_1], there exist ρ_γ ∈ [0, 1) and C_γ ≥ 0 such that for any x ∈ R^d and k ∈ N, ‖δ_x R̃_γ^k − π̃_γ‖_{V_c} ≤ C_γ ρ_γ^k V_c(x), where V_c(x) = exp(c‖x‖²).
Proof. The proof is postponed to Section 5.1.2.
Similarly to Proposition 1 with respect to R γ , Proposition 3 implies thatR γ is V c -uniformly geometrically ergodic. However in contrast to Proposition 1, the dependency of the rate of convergence with respect to the step size γ is not explicit anymore since the results and the method employed in [9] or [17] cannot be applied anymore.
Note that Proposition 1 and Proposition 3 imply that R γ andR γ converge to π γ andπ γ respectively in total variation and Wasserstein metric of any order p ∈ [1, +∞).
Based on the two assumptions above, we can now state one of our main results (Theorem 4). Our goal is to quantify the distance between the laws of the iterates of the two chains (Y_k)_{k∈N} and (Ỹ_k)_{k∈N}, in particular starting from the same initial point x ∈ R^d or at equilibrium. Indeed, remark that, in view of Propositions 1 and 3, letting k → +∞ in the next statement yields quantitative bounds on W_c(π_γ, π̃_γ) for any γ ∈ (0,γ̄ ∧ {1/m}]. Then, there exist some explicit constants C, c ≥ 0, ρ ∈ [0, 1) such that for any k ∈ N, γ ∈ (0,γ̄] and x, x̃ ∈ R^d,
W_c(δ_x R_γ^k, δ_x̃ R̃_γ^k) ≤ C ρ^{γk} V(‖x − x̃‖) + c c_∞ ,
where c(x, x̃) = c̃(‖x − x̃‖).
Remark 5.
It is also possible to treat the case of cost functions c of the form c(x, y) = c̃(‖x − y‖)(V(x) + V(y)), with c̃ as in Theorem 4 and V a positive function, simply by using Hölder's inequality. Indeed, for p, q > 1 with 1/p + 1/q = 1, we can bound
W_c(ν, µ) ≤ W_{c_p}(ν, µ)^{1/p} [(ν(V^q))^{1/q} + (µ(V^q))^{1/q}]  with  c_p(x, y) = c̃^p(‖x − y‖) .
The rest of this section is devoted to the proof of Theorem 4. In particular, we define in the following the main object of this paper.
The discrete sticky kernel
We define a Markovian coupling of the two chains (Y k ) k∈N and (Ỹ k ) k∈N defined in (1) and (2) by using, at each step, the maximal reflection coupling of the two Gaussian proposals, which is optimal for the total variation distance (i.e that maximizes the probability of coalescence). Let (U k ) k 1 be a sequence of i.i.d. uniform random variables on [0, 1] independent of (Z k ) k 1 which we recall is a sequence of i.i.d. d-dimensional standard Gaussian random variables. We define the discrete sticky Markov coupling K γ of R γ andR γ as the Markov kernel associated with the Markov chain on R 2d given for k ∈ N by
X_{k+1} = T_γ(X_k) + (σ²γ)^{1/2} Z_{k+1} ,
X̃_{k+1} = X_{k+1} B_{k+1} + (1 − B_{k+1}) F_γ(X_k, X̃_k, Z_{k+1}) ,    (6)
where
B_{k+1} = 1_{[0,+∞)}(p_γ(X_k, X̃_k, Z_{k+1}) − U_{k+1})  and  F_γ(x, x̃, z) = T̃_γ(x̃) + (σ²γ)^{1/2} (Id − 2 e(x, x̃) e(x, x̃)^T) z ,
E(x, x̃) = T̃_γ(x̃) − T_γ(x) ,    e(x, x̃) = E(x, x̃)/‖E(x, x̃)‖ if E(x, x̃) ≠ 0, and e(x, x̃) = e_0 otherwise ,    (7)
p_γ(x, x̃, z) = 1 ∧ [ϕ_{σ²γ}(‖E(x, x̃)‖ − (σ²γ)^{1/2} ⟨e(x, x̃), z⟩) / ϕ_{σ²γ}((σ²γ)^{1/2} ⟨e(x, x̃), z⟩)] ,
where e_0 ∈ R^d is an arbitrary unit vector, i.e. ‖e_0‖ = 1, and ϕ_{σ²γ} is the density of the one-dimensional Gaussian distribution with mean 0 and variance σ²γ. In other words, K_γ is given for any γ ∈ (0,γ̄], (x, x̃) ∈ R^{2d} and A ∈ B(R^{2d}) by
K_γ((x, x̃), A) = ∫_{R^d} 1_A(T_γ(x) + (σ²γ)^{1/2} z, T_γ(x) + (σ²γ)^{1/2} z) p_γ(x, x̃, z) e^{−‖z‖²/2}/(2π)^{d/2} dz + ∫_{R^d} 1_A(T_γ(x) + (σ²γ)^{1/2} z, F_γ(x, x̃, z)) (1 − p_γ(x, x̃, z)) e^{−‖z‖²/2}/(2π)^{d/2} dz .
In words, from the initial conditions (x,x), this coupling works as follows: first, a Gaussian variable Z k+1 is drawn for the fluctuations of X k+1 . Then,X k+1 is made equal to X k+1 with probability p(x,x, Z k+1 ) and, otherwise, the random variableZ k+1 determines the fluctuations ofX k+1 with respect to its averageT γ (x) is given by the orthogonal reflection of Z k+1 in the [17] or [9]. The starting point of our analysis is the next result, which will enable to compare the coupling difference process X k+1 −X k+1 with a Markov chain on [0, +∞). Define (G k ) k 1 for any k 1 by
directionT γ (x) − T γ (x). It is well known that for any (x,x) ∈ R d , K γ ((x,x), A × R d ) = R γ (x, A) and K γ ((x,x), R d × A) =R γ (x, A),G k = e(X k−1 ,X k−1 ), Z k ,(8)
where e is given by (7). For any a 0, g ∈ R, u ∈ [0, 1] and γ ∈ (0,γ] define
H γ (a, g, u) = 1 [0,+∞) (u − p σ 2 γ (a, g)) a − 2(σ 2 γ) 1 /2 g ,(9)
where
p σ 2 γ (a, g) = 1 ∧ ϕ σ 2 γ a − (σ 2 γ) 1 /2 g ϕ σ 2 γ (σ 2 γ) 1 /2 g .(10)
Proposition 6. Assume H 1 and H 2 hold. Then for any γ ∈ (0,γ], k ∈ N, almost surely, we have
X k+1 −X k+1 G γ ( X k −X k , G k+1 , U k+1 ) ,(11)
where (X k ,X k ) k∈N are defined by (6), and for any w ∈ [0, +∞), g ∈ R and u ∈ [0, 1],
G γ (w, g, u) = H γ (τ γ (w) + γc ∞ , g, u) .
In addition, for any g ∈ R d and u ∈ [0, 1], w → G γ (w, g, u) is non-decreasing.
Proof. The proof is postponed to Section 5.1.3.
Consider now the stochastic process (W k ) k∈N starting from X 0 −X 0 and defined by induction on k as follows,
W_{k+1} = G_γ(W_k, G_{k+1}, U_{k+1}) = τ_γ(W_k) + γ c_∞ − 2σ√γ G_{k+1}  if U_{k+1} ≥ p_{σ²γ}(τ_γ(W_k) + γ c_∞, G_{k+1}), and W_{k+1} = 0 otherwise.    (12)
By definition (8) and (7), an easy induction implies that (G k ) k 1 and (U k ) k 1 are independent, (G k ) k 1 are i.i.d. standard Gaussian random variables and (U k ) k 1 are i.i.d. uniform random variables on [0, 1]. Therefore, (W k ) k∈N is a Markov chain with Markov kernel Q γ defined for w ∈ [0, +∞) and A ∈ B([0, +∞)) by
Q_γ(w, A) = δ_0(A) ∫_R p_{σ²γ}(τ_γ(w) + γ c_∞, g) ϕ(g) dg + ∫_R 1_A(τ_γ(w) + γ c_∞ − 2σγ^{1/2} g) {1 − p_{σ²γ}(τ_γ(w) + γ c_∞, g)} ϕ(g) dg ,    (13)
where ϕ is the density of the standard Gaussian distribution on R. By Proposition 6, we have almost surely for any k ∈ N,
X k −X k W k .(14)
Another consequence of Proposition 6 is that Q γ is stochastically monotonous (see e.g. [24] or [31]), more precisely if (W k ) k∈N and (W k ) k∈N are two chains given by (12) with the same variables (G k , U k ) k∈N with W 0 W 0 , then almost surely W k W k for all k ∈ N. This nice property will be used several times in the analysis of this chain.
The main consequence of (14) is the following result.
W_c(δ_x R_γ^k, δ_x̃ R̃_γ^k) ≤ ∫_{R^{2d}} c(y, ỹ) K_γ^k((x, x̃), d(y, ỹ)) ≤ ∫_0^{+∞} c̃(w) Q_γ^k(‖x − x̃‖, dw) .
Proof. Let k ∈ N. By (14) and sincec is non-decreasing, we get almost surelyc( X k −X k ) c(W k ). Taking the expectation concludes the proof.
From Corollary 7, the question to get bounds on W c (δ x R k γ , δxR k γ ) boils down to the study of the Markov kernel Q γ on [0, +∞), which is the main part of our work.
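To make the construction of this section concrete, the sketch below implements one transition of the reflection-maximal coupling (6)-(7) together with the dominating scalar chain of (12)-(13), driving both with the same Gaussian and uniform variables, and checks the domination (14) along one trajectory. The maps T, T̃, the choice of τ_γ, and all numerical constants are illustrative stand-ins for the Euler setting of Example 2.

```python
# One transition of the coupling K_gamma and of the dominating chain W_k, with shared randomness.
import numpy as np

rng = np.random.default_rng(2)
gamma, sigma, c_inf, L = 0.01, 1.0, 0.1, 1.0
s = sigma * np.sqrt(gamma)                        # std of one Gaussian increment

T = lambda x: x + gamma * (-x)                    # T_gamma for the toy drift b(x) = -x
delta = np.array([c_inf, 0.0])                    # constant perturbation with norm c_inf
T_tilde = lambda x: T(x) + gamma * delta          # so sup_x ||T(x) - T_tilde(x)|| = gamma * c_inf
tau = lambda w: (1.0 + gamma * L) * w             # a crude but valid upper bound tau_gamma under H1

def coupled_step(x, x_tilde, w):
    """One step of (X, X_tilde) under K_gamma and of the dominating chain W,
    driven by the same Gaussian vector and uniform variable, as in Proposition 6."""
    z = rng.standard_normal(x.shape)
    u = rng.uniform()
    x_next = T(x) + s * z
    E = T_tilde(x_tilde) - T(x)
    norm_E = np.linalg.norm(E)
    e = E / norm_E if norm_E > 0 else np.eye(len(x))[0]
    g = e @ z                                      # this is the scalar G_{k+1} of (8)
    # acceptance probabilities of the maximal coupling, written in closed form
    p_pair = min(1.0, np.exp((norm_E * (2.0 * s * g - norm_E)) / (2.0 * s ** 2)))
    a = tau(w) + gamma * c_inf                     # dominates ||E|| since tau is non-decreasing
    p_w = min(1.0, np.exp((a * (2.0 * s * g - a)) / (2.0 * s ** 2)))
    x_tilde_next = x_next if u <= p_pair else T_tilde(x_tilde) + s * (z - 2.0 * g * e)
    w_next = 0.0 if u <= p_w else a - 2.0 * s * g
    return x_next, x_tilde_next, w_next

x, x_t = np.ones(2), -np.ones(2)
w = float(np.linalg.norm(x - x_t))
for _ in range(500):
    x, x_t, w = coupled_step(x, x_t, w)
    assert np.linalg.norm(x - x_t) <= w + 1e-9     # pathwise domination (14)
print("final ||X - X_tilde|| =", np.linalg.norm(x - x_t), " W =", w)
```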
Analysis of the auxiliary Markov chain
We start with a Lyapunov/drift result.
Proposition 8. Assume H1 and H2 hold. Then for any w ≥ 0,
Q_γ V_1^*(w) ≤ (1 − γm) V_1^*(w) 1_{(R_1,+∞)}(w) + (1 + γL) V_1^*(w) 1_{(0,R_1]}(w) + γ c_∞ ,
where Q_γ is defined by (13) and for any w ∈ R, V_1^*(w) = |w|.
Proof. The proof is postponed to Section 5.1.4.
Proposition 8 implies in particular that for any w ∈ R,
Q_γ V_1^*(w) ≤ (1 − γm) V_1^*(w) 1_{(R_1,+∞)}(w) + γ[(L + m)R_1 + c_∞] .
Then, a straightforward induction shows that for any k ∈ N,
Q_γ^k V_1^*(w) ≤ (1 − γm)^k V_1^*(w) + [(L + m)R_1 + c_∞]/m ,
and therefore by Corollary 7, taking c̃(t) = t,
W_1(δ_x R_γ^k, δ_x̃ R̃_γ^k) ≤ (1 − γm)^k ‖x − x̃‖ + [(L + m)R_1 + c_∞]/m .    (15)
However, this result is not sharp as k → +∞. Indeed, in the case c_∞ = 0, R_γ = R̃_γ and by Proposition 1, it holds that W_1(δ_x R_γ^k, δ_x̃ R̃_γ^k) → 0 as k → +∞, while the right-hand side of (15) converges to (L + m)R_1/m ≠ 0. In particular, that is why adapting existing results, such as the one established in [32], is not an option here. We need to refine our results in order to fill this gap. To this end, we need to analyze more precisely the long-time behavior of Q_γ. A first step is to show that it is ergodic.
W_c(δ_x R_γ^k, δ_x̃ R̃_γ^k) ≤ ∫_0^{+∞} c̃(w) {Q_γ^k(‖x − x̃‖, ·) − µ_γ}(dw) + µ_γ(c̃) ,    (16)
where µ_γ is the stationary distribution of Q_γ given by (13).
In particular, if x = x̃, W_c(δ_x R_γ^k, δ_x R̃_γ^k) ≤ µ_γ(c̃).
Proof. The proof of (16) is a consequence of Proposition 9 and Corollary 7. The last statement follows from the fact that Q γ is stochastically monotonous. Indeed, by Proposition 6, for any w,w ∈ [0, +∞), w w, and a ∈ [0, +∞), Q γ (w, [0, a]) Q γ (w, [0, a]). Therefore, for any a ∈ [0, +∞), w → Q γ (w, [0, a]) is non-increasing on [0, +∞) and for any non-increasing bounded function f , Q γ f (w) Q γ f (w) for any w,w ∈ [0, +∞), w w. As a result, a straightforward induction shows that for any k ∈ N, w,w ∈ [0, +∞), w w, and a ∈ [0, +∞),
Q k γ (w, [0, a]) Q k γ (w, [0, a]). Then, we obtain Q k γ (0, [0, a]) +∞ 0 µ(dw)Q k γ (w, [0, a]) = µ γ ([0, a])
. Sincec is non-decreasing on [0, +∞), we get Q γc (0) µ γ (c), which combined with (16) completes the proof.
Corollary 10 then naturally brings us to derive moment bounds for the stationary distribution µ_γ, γ ∈ (0,γ̄], and quantitative convergence bounds for Q_γ to µ_γ. Our next results address these two problems.
Theorem 11. Assume H1 and H2 hold. For any δ̄ ∈ (0, L^{-1} ∧ (σe^{-1}/c_∞)²] and γ ∈ (0,γ̄],
∫_{[0,+∞)} w µ_γ(dw) ≤ c_∞ c_1 ,    µ_γ((0, +∞)) ≤ c_∞ c_2 ,    (17)
where µ γ is the stationary distribution of Q γ given by (13), and, considering ζ given below in (55),
c 1 = η 1 R 1 (1 + L/m) + 1/m , c 2 = e (δ+γ)L (c 1 (1 +γL)/δ 1/2 + [δ +γ] 1/2 )/( √ 2πσ) + 2ζ[δ +γ] 1/2 e 3(δ+γ)L /σ 3 , η 1 = [δ +γ] 1/2 ζe 3(δ+γ)L σ 3 + e (δ+γ)L 2 √ 2πσ Φ − (1 +γL)R 1 + (δ +γ)c ∞ 2δ 1/2 σe −(δ+γ)L .
Proof. The proof is postponed to Section 5.1.6.
Theorem 12.
Assume H1 and H2 hold. For any a > 0 and γ ∈ (0,γ],
∫_0^{+∞} 1_{(0,+∞)}(w) exp(aw) dµ_γ(w) ≤ c_∞ c_3 ,
where c 3 is explicitly given in the proof and µ γ is the stationary distribution of Q γ given by (13).
Proof. The proof is postponed to Section 5.1.7.
We now specify the convergence of Q γ to µ γ for any γ ∈ (0,γ].
Theorem 13. Assume H1 and H2 hold. There exist explicit constants ρ ∈ [0, 1) and C ≥ 0 such that for any γ ∈ (0,γ̄], w ≥ 0 and k ∈ N,
‖δ_w Q_γ^k − µ_γ‖_V ≤ C ρ^{γk} V(w) ,
where V (w) = 1 + |w| or V (w) = exp(a |w|), for a > 0.
Proof. The proof is postponed to Section 5.1.8.
Combining the results of Corollary 10, Theorem 11, Theorem 12 and Theorem 13 allows to address the main questions raised in this section and prove Theorem 4.
Discussion on the bounds provided by Theorem 11
In this paragraph, we discuss how the constants c 1 , c 2 given in Theorem 11 behaves with respect to the parameters R 1 , L, m in the limit c ∞ → 0 andγ → 0. For ease of presentation, we also only consider the case σ = 1.
(1) First consider the case R_1 = 0. As m → 0, c_1, c_2 are of order m^{-1} and 1/[m δ̄^{1/2}] + δ̄^{1/2} respectively, for δ̄ ∈ (0, L^{-1} ∧ (σe^{-1}/c_∞)²].
Since L can be taken arbitrarily small (as R 1 = 0), choosingδ = m −1 , we obtain that c 2 is of order m −1/2 . Note that the dependency of c 1 , c 2 with respect to m is sharp; see Example 14 below.
(2) We now consider the case R_1 ≥ 1, L = 0. Note that in this case δ̄ can be chosen arbitrarily in (0, 1). Then, for some universal constants C_1, C_2, C_3,
η_1 ≤ C_1 δ̄^{1/2}/Φ{C_2 R_1/δ̄^{1/2} + C_3 c_∞ δ̄^{1/2}}. Therefore, taking δ̄ = m^{-1} ∨ R_1², we get that for some universal constants D_1, D_2, E ≥ 0, c_1 ≤ D_1[(R_1 ∨ m^{-1/2}) + m^{-1}], c_2 ≤ E (m^{-1/2} ∨ R_1).
Note that the bound of c 2 with respect to R 1 and m is consistent with the results obtained in [18] (see [18,Lemma 1]) for the stationary distributions of continuous sticky processes. Note that it is shown in [18,Example 2] that this bound is sharp with respect to R 1 and m.
(3) In the case R_1 ∧ L ≥ 1, taking δ̄ = L^{-1} since we are in the regime c_∞ → 0, we get that, up to a logarithmic term and using γ̄ ≤ L^{-1}, c_1, c_2 are smaller than C exp[e^4 (R_1 L^{1/2} + c_∞)²] for some universal constant C ≥ 0. The estimate for c_2 is also consistent with [18, Lemma 1], which holds for stationary distributions of continuous sticky processes.
Example 14. Consider the particular example of two auto-regressive processes for which
T γ (y) = (1 − γ)y andT γ (y) = (1 − γ)y + γ a for γ ∈ 0, −c 1 c ∞ ∼ a and c 2 c ∞ ∼ Ca/ 1/2 , as → 0, for some universal constant C 0.
On the other hand, an easy computation (see e.g. [12]) shows that the stationary distributions π γ and π γ provided by Proposition 1 and Proposition 3 are N(0, −1 (2 − γ γ) −1 ) and N(a, −1 (2 − γ γ) −1 ) respectively. Therefore, we get W 1 (π γ ,π γ ) = a and π γ −π γ TV ∼ Ca/ 1/2 as → 0.
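The closed-form comparison underlying Example 14 can be checked numerically: for two Gaussian laws with a common standard deviation and means 0 and a, W_1 equals a and the total variation distance is 2Φ(a/(2·scale)) − 1. The sketch below verifies this identity by numerical integration; the variance chosen is illustrative (a large variance mimics the weak-contraction regime).

```python
# Numerical check: W1 and TV between N(0, scale^2) and N(a, scale^2).
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

a, scale = 0.3, 5.0

w1 = a                                                  # W1 between the two Gaussians is |a|
tv_exact = 2.0 * norm.cdf(a / (2.0 * scale)) - 1.0      # closed form for equal variances
tv_quad, _ = quad(lambda x: 0.5 * abs(norm.pdf(x, 0.0, scale) - norm.pdf(x, a, scale)),
                  -np.inf, np.inf)

print(f"W1 = {w1},  TV (closed form) = {tv_exact:.5f},  TV (numerical) = {tv_quad:.5f}")
```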
Continuous-time limit
In the case where T_γ and T̃_γ are specified by (5), then under appropriate conditions on b and b̃, it can be shown, see e.g. [9, Proposition 25], that for any T ≥ 0 and x ∈ R^d,
lim_{m→+∞} { ‖δ_x R_{T/m}^m − δ_x P_T‖_V + ‖δ_x R̃_{T/m}^m − δ_x P̃_T‖_V } = 0 ,    (18)
for some measurable function V : R^d → [1, +∞), where (P_t)_{t≥0} and (P̃_t)_{t≥0} are the Markov semigroups corresponding to (1) and (2). Then, this naturally implies convergence in total variation and also in the Wasserstein distance of order p if inf_{x∈R^d} {V(x)/‖x‖^p} > 0.
As a consequence, the results of Section 2 immediately transfer to the continuous-time processes. More precisely, let c : R^{2d} → [0, +∞) be of the form c(x, y) = c̃(‖x − y‖) for some non-decreasing function c̃ : [0, +∞) → [0, +∞) with c̃(0) = 0. If (18) holds and sup_{x,y∈R^d} {c(x, y)/{V(x) + V(y)}} < +∞, we get by the triangle inequality that for any x, x̃ ∈ R^d and T > 0, W_c(δ_x P_T, δ_x̃ P̃_T) ≤ lim inf_{m→+∞} W_c(δ_x R_{T/m}^m, δ_x̃ R̃_{T/m}^m). Then, the results of Section 2 can be applied, implying, if H1 and H2 hold, that for any x, x̃ ∈ R^d there exist C_1, C_2 ≥ 0 such that for any T ≥ 0,
W_c(δ_x P_T, δ_x̃ P̃_T) ≤ C_1 ρ^T + C_2 c_∞ .
We therefore generalize the result provided in [18] which is specific to the total variation distance. We do not give a specific statement for this result which is mainly technical and is not the main subject of this paper. Instead, the goal of this section is to study the continuous-time limit of the coupling (6) (and not only of its marginals) toward some continuous-time sticky diffusion.
More precisely, let (γ_n)_{n∈N} be a sequence of step sizes such that lim_{n→+∞} γ_n = 0 and w_0 ≥ 0. Then, consider the sequence of Markov chains {(W_k^{(n)})_{k∈N} : n ∈ N}, i.e. the sequence of continuous processes defined for any n ∈ N, t ∈ (0, +∞) by
W_t^{(n)} = W^{(n)}_{⌊t/γ_n⌋} + {W^{(n)}_{⌈t/γ_n⌉} − W^{(n)}_{⌊t/γ_n⌋}} (t/γ_n − ⌊t/γ_n⌋) .    (19)
Note that for any k ∈ N and h ∈ [0, γ_n], W^{(n)}_{kγ_n+h} = W^{(n)}_k + (h/γ_n){W^{(n)}_{k+1} − W^{(n)}_k}.
We denote by W = C([0, +∞), R) endowed with the uniform topology on compact sets, W its corresponding σ-field and (W t ) t 0 the canonical process defined for any t ∈ (0, +∞) and
ω ∈ W by W t (ω) = ω t . Denote by (W t ) t 0 the filtration associated with (W t ) t 0 . Note that {(W (n) t ) t∈(0,+∞) : n ∈ N} is a sequence of W-valued random variables.
The main result of this section concerns the convergence in distribution of this sequence.
We consider the following assumption on the function τ γ .
A 1. There exists a function κ : [0, +∞) → [0, +∞) such that for any γ ∈ (0,γ], τ γ (w) = w + γκ(w) and κ(0) = 0. In addition, κ is L κ -Lipschitz: for any w 1 , w 2 ∈ (0, +∞), |κ(w 1 ) − κ(w 2 )| L κ |w 1 − w 2 |.
This is not a restrictive condition since, under H1, up to a possible modification of τ γ , it is always possible to ensure A1.
Under A1, we consider a sticky process [34,35,18], which solves the stochastic differential equation
dW_t = {κ(W_t) + c_∞} dt + 2σ 1_{(0,+∞)}(W_t) dB_t ,    (20)
where (B t ) t 0 is a one-dimensional Brownian motion. Note that for any initial distribution µ 0 on (R, B(R d )), (20) admits a unique weak solution by [18,Lemma 6].
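As an illustration of the limit result stated next (Theorem 15), an approximate path of the sticky SDE (20) can be produced by running the scalar chain (12) with a small step size and interpolating as in (19). In the sketch below, κ, c_∞, σ and the horizon are toy choices; κ takes negative values, which corresponds under A1 to a contractive τ_γ.

```python
# Approximate path of the sticky SDE (20) via the chain (12) and interpolation (19).
import numpy as np

rng = np.random.default_rng(3)
gamma, sigma, c_inf, horizon = 1e-3, 1.0, 0.2, 5.0
s = sigma * np.sqrt(gamma)
kappa = lambda w: -0.5 * w                    # Lipschitz drift with kappa(0) = 0

n = int(horizon / gamma)
w = np.empty(n + 1)
w[0] = 1.0
for k in range(n):
    a = w[k] + gamma * (kappa(w[k]) + c_inf)  # tau_gamma(w) + gamma*c_inf under A1
    g = rng.standard_normal()
    p = min(1.0, np.exp(a * (2.0 * s * g - a) / (2.0 * s ** 2)))   # p_{sigma^2 gamma}(a, g)
    w[k + 1] = 0.0 if rng.uniform() <= p else a - 2.0 * s * g      # stick at 0 or reflect

def path(t):
    """Piecewise-linear interpolation (19) of the chain at time t in [0, horizon]."""
    u = t / gamma
    k = min(int(u), n - 1)
    return w[k] + (u - k) * (w[k + 1] - w[k])

print("W_t at t = 1.0 (one realization):", path(1.0))
print("fraction of grid times spent exactly at 0:", np.mean(w == 0.0))
```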
The main result of this section is the following.
Theorem 15. Assume A 1. Then, the sequence {(W (n) t ) t 0 : n ∈ N} defined by (19) converges in distribution to the solution (W t ) t 0 of the SDE (20).
The proof of this theorem follows the usual strategy employed to show convergence of a sequence of continuous processes to a Markov process. A first step is to show that under A1, {(W_t^{(n)})_{t≥0} : n ∈ N} is uniformly bounded in L^q for some q ≥ 2, on [0, T] for any T ≥ 0.
Proposition 16. Assume A1. Then for any T ≥ 0, there exists C_T ≥ 0 such that sup_{n∈N} E[sup_{t∈[0,T]} {W_t^{(n)}}^4] ≤ C_T, where (W_t^{(n)})_{t≥0} is defined by (19).
Proof. The proof is postponed to Section 5.2.1.
Then, we are able to obtain the tightness of the sequence of stochastic processes
{(W (n) t ) t 0 : n ∈ N}. Proposition 17. Assume A1. Then, {(W (n) t ) t 0 : n ∈ N} is tight in W.
Proof. The proof is postponed to Section 5.2.2.
Denote for any n ∈ N, µ n the distribution of (W t ) t 0 : n ∈ N} is tight again and since (20) admits a unique weak solution, the proof of Theorem 15 will be completed. To establish this result, we use the characterization of solutions of SDEs through martingale problems. More precisely by [6,Theorem 1.27], the distribution µ on W of (W t ) t 0 , solution of (20), is the unique solution to the martingale problem associated with µ 0 , the drift function w → κ(w) + c ∞ and the variance function 2σ1 (0,+∞) , i.e. it is the unique probability measure satisfying on the filtered probability space
(W, W, (W_t)_{t≥0}, µ): (a) the distribution of W_0 is µ_0; (b) the processes (M_t)_{t≥0}, (N_t)_{t≥0} defined for any t ≥ 0 by
M_t = W_t − W_0 − ∫_0^t {c_∞ + κ(W_u)} du ,    N_t = M_t² − 4σ² ∫_0^t 1_{(0,+∞)}(W_u) du ,    (21)
are (W t ) t 0 -local martingales.
In other words, this amounts to showing that (M_t)_{t≥0} is a (W_t)_{t≥0}-local martingale and, by [30, Theorem 1.8], to identifying its quadratic variation (⟨M⟩_t)_{t≥0} as the process (4σ^2 ∫_0^t 1_{(0,+∞)}(W_u) du)_{t≥0}.
Therefore, Theorem 15 is a direct consequence of the following result.
Theorem 18. Assume A 1. Let µ ∞ be a limit point of (µ n ) n∈N . Then, the two processes
(M t ) t 0 and (N t ) t 0 defined by (21) are (W t ) t 0 -martingales on (W, W, (W t ) t 0 , µ ∞ ).
Proof. The proof is postponed to Section 5.2.4.
Consider the differential operators A, Ã defined for any ψ ∈ C^2(R) by
Aψ(w) = {κ(w) + c_∞} ψ'(w) + 2 · 1_{(0,+∞)}(w) σ^2 ψ''(w) ,    (22)
Ãψ(w) = {κ(w) + c_∞} ψ'(w) + 2σ^2 ψ''(w) ,
where κ is arbitrarily extended to R. Note that A is the extended generator associated with (20). A crucial step in the proof of Theorem 18 is the following.
Proposition 19. Let ϕ ∈ C^3(R) satisfy
sup_{w∈R} { |ϕ(w)|/(1 + w^2) + |ϕ'(w)|/(1 + |w|) + |ϕ''(w)| + |ϕ^{(3)}(w)| } < +∞ .    (23)
Then, for any N ∈ N, (t_1, . . . , t_N, s, t) ∈ [0, +∞)^{N+2} with 0 ≤ t_1 ≤ · · · ≤ t_N ≤ s < t, and any ψ : [0, +∞)^N → R positive, continuous and bounded, it holds that
lim_{n→+∞} E[ {ϕ(W_t^{(n)}) − ϕ(W_s^{(n)}) − ∫_s^t Aϕ(W_u^{(n)}) du} ψ(W_{t_1}^{(n)}, . . . , W_{t_N}^{(n)}) ] = 0 .    (24)
In addition, if ϕ''(w) ≥ 0 for any w ∈ R, it holds that
lim sup_{n→+∞} E[ {ϕ(W_t^{(n)}) − ϕ(W_s^{(n)}) − ∫_s^t Ãϕ(W_u^{(n)}) du} ψ(W_{t_1}^{(n)}, . . . , W_{t_N}^{(n)}) ] ≤ 0 .    (25)
Proof. The proof is postponed to Section 5.2.3.
Note that while (24) is in general sufficient to conclude on the convergence of the sequence of processes {(W_t^{(n)})_{t≥0} : n ∈ N} (see e.g. [19]), in our setting it is not enough to complete the proof of Theorem 15, since the diffusion coefficient associated with A is discontinuous. To circumvent this issue, we adapt to our sequence {(W_t^{(n)})_{t≥0} : n ∈ N} the same strategy employed in [29, Proposition 6].
An application in Bayesian statistics: parameter estimation in an ODE
Setting and verifying the assumptions
Consider an ordinary differential equation (ODE) on R^n of the form
ẋ_θ(t) = f_θ(x_θ(t), t) ,   x_θ(0) = x_0 ∈ R^n ,    (26)
where {f_θ : θ ∈ R^d} is a family of functions from R^n × [0, +∞) to R^n parametrized by some parameter θ ∈ R^d. Throughout this section, x_0 ∈ R^n is assumed to be fixed and we consider the following assumption.
AO1. For all θ ∈ R d there exists a unique solution of (26) defined for all positive times, which we denote by (x θ (t)) t 0 . In addition, the functions (θ,
x, t) ∈ R d × R n × [0, +∞) → f θ (x, t) and (θ, t) ∈ R d × [0, +∞) → x θ (t) are continuously differentiable.
In fact, the continuous differentiability of (θ, t) ↦ x_θ(t) is a consequence of that of (θ, x, t) ↦ f_θ(x, t); see e.g. [36, Theorem 4.D].
To fix ideas, throughout this section, we will repeatedly discuss the following case of a logistic equation.
Example 20. For r ∈ C^1(R, R_+), set f_θ(x) = x(1 − r(θ)x) for any θ, x ∈ R, so that (26) reads
ẋ_θ(t) = x_θ(t)(1 − r(θ)x_θ(t)) ,   x_θ(0) = x_0 ,
with x_0 ≥ 0. In this example, AO1 holds and, r and x_0 being nonnegative, for all θ ∈ R the solution of (26) is such that x_θ(t) ∈ [0, e^t x_0] for all t ≥ 0. Indeed, x ↦ x(1 − r(θ)x) is locally Lipschitz continuous, which yields existence and uniqueness of a maximal solution. Since 0 is always an equilibrium, solutions stay nonnegative, from which ẋ_θ(t) ≤ x_θ(t) for all t ≥ 0, implying that x_θ(t) ≤ e^t x_0 for all t ≥ 0. This also implies non-explosion, hence the solution is defined on [0, +∞).
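As a quick numerical illustration of the bound x_θ(t) ∈ [0, e^t x_0], one can integrate the logistic equation with a standard solver and check the inequality on a grid; the rational choice of r below anticipates Example 22, and the constants a_1, a_2, x_0, θ are arbitrary.

```python
import numpy as np
from scipy.integrate import solve_ivp

a1, a2 = 2.0, 1.0
r = lambda theta: a1 * theta**2 / (theta**2 + a2)   # smooth bounded r (as in Example 22)

def logistic_rhs(t, x, theta):
    return x * (1.0 - r(theta) * x)                  # f_theta(x) = x (1 - r(theta) x)

theta, x0, T = 0.7, 1.5, 5.0
sol = solve_ivp(logistic_rhs, (0.0, T), [x0], args=(theta,), dense_output=True, rtol=1e-8)

ts = np.linspace(0.0, T, 200)
xs = sol.sol(ts)[0]
assert np.all(xs >= -1e-8) and np.all(xs <= np.exp(ts) * x0 + 1e-6)
print("max of x_theta(t) - e^t * x0 over [0, T]:", np.max(xs - np.exp(ts) * x0))
```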
We consider the problem of estimating θ based on some observation of a trajectory of the ODE. More precisely, for T > 0, N ∈ N * , (t 1 , . . . , t N ) ∈ R N , 0 < t 1 < · · · < t N = T , the statistical model corresponding to the observation y = (y i ) i∈{1,...,N } ∈ (R n ) N is given by
y i = x θ (t i ) + ε i ,(27)
for θ ∈ R^d, where (ε_i)_{i∈{1,...,N}} are independent and identically distributed random variables on R^n with some known positive density ϕ_ε with respect to the Lebesgue measure. Given a prior distribution with positive density π_0 on R^d, the posterior distribution for this model admits a positive density π with respect to the Lebesgue measure, characterized (up to an additive constant) by the potential U given by
− log π(θ) = U(θ) = − ln π_0(θ) − ∑_{i=1}^N ln ϕ_ε(y_i − x_θ(t_i)) .
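In code, U can be assembled from any routine returning the trajectory values x_θ(t_i); the Python sketch below assumes Gaussian observation noise and a Gaussian prior purely for illustration (the paper only requires positive densities ϕ_ε and π_0), and the function and argument names are placeholders.

```python
import numpy as np

def potential_U(theta, y, t_obs, solve_x, sigma_eps=0.5, sigma_prior=0.5):
    """U(theta) = -log pi_0(theta) - sum_i log phi_eps(y_i - x_theta(t_i)),
    up to additive constants, for a Gaussian prior and Gaussian noise (assumed here).

    solve_x(theta, t_obs) -> array of shape (N, n) with x_theta(t_i) in row i.
    """
    x_traj = solve_x(theta, t_obs)                        # x_theta(t_i), i = 1..N
    resid = y - x_traj
    neg_log_lik = 0.5 * np.sum(resid**2) / sigma_eps**2   # -sum_i log phi_eps(y_i - x_theta(t_i))
    neg_log_prior = 0.5 * np.sum(np.asarray(theta)**2) / sigma_prior**2
    return neg_log_prior + neg_log_lik
```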
We consider the following assumption on π_0 and ϕ_ε, setting U_0 = − log(π_0).
AO2. The functions π_0 and ϕ_ε are twice continuously differentiable and there exist m_U > 0, L_U, R_U ≥ 0 such that ∇U_0 is L_U-Lipschitz continuous and, for any θ, θ̃ ∈ R^d with ‖θ − θ̃‖ ≥ R_U, ⟨θ − θ̃, ∇U_0(θ) − ∇U_0(θ̃)⟩ ≥ m_U ‖θ − θ̃‖^2.
In practice, expectations with respect to the posterior distribution can be approximated by ergodic means of the Unadjusted Langevin Algorithm (ULA), namely the Markov chain
X k+1 = X k − γ∇U (X k ) + 2γZ k+1 ,(28)
where γ > 0 and (Z k ) k∈N are independent and identically standard Gaussian variables. The long-time convergence of this algorithm and the numerical bias on the invariant measure due to the time discretization are well understood, see e.g. [7,14,15,8,10] and references therein. However, in the present case, it is not possible to sample this Markov chain, as the exact computation of
∇U (θ) = −∇ θ ln π 0 (θ) + N i=1 ∇ θ x θ (t i )∇ x ln ϕ ε (y i − x θ (t i )) ,(29)
is not possible in most cases because of the term involving x θ and ∇ θ x θ . Here ∇ θ and ∇ x denote the gradient operator with respect to θ and x respectively. Therefore, only approximations of these two functions can be used in place of (x θ (t i ), ∇ θ x θ (t i )) i∈ 0,N , which leads to an additional discretization bias. Our results based on the sticky coupling yields a quantitative bound on this error (with respect to the ideal ULA above). Let us detail this statement.
First, remark that t ↦ z_θ(t) = (x_θ(t), ∇_θ x_θ(t)) solves
ż_θ(t) = F_θ(z_θ(t), t) ,   z_θ(0) = z_0 = (x_0, 0)    (30)
on R^n × M_{d,n}(R), where for any x ∈ R^n, A ∈ M_{d,n}(R), θ ∈ R^d and t ≥ 0,
F_θ((x, A), t) = (f_θ(x, t), ∇_θ f_θ(x, t) + A ∇_x f_θ(x, t)) .    (31)
Provided f_θ, ∇_θ f_θ and ∇_x f_θ are computable, in practice this ODE can be approximated by standard numerical schemes. For instance, a basic explicit Euler discretization with time-step h > 0 is given by
z̃_θ^h(0) = z_0 ,   z̃_θ^h((k + 1)h) = z̃_θ^h(kh) + h F_θ(z̃_θ^h(kh), kh)  for all k ∈ N ,
and  z̃_θ^h(t) = z̃_θ^h(kh) + (t − kh) F_θ(z̃_θ^h(kh), kh) ,   t ∈ [kh, (k + 1)h) .    (32)
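A minimal implementation of the explicit Euler scheme (32) for the augmented system (30)-(31) could look as follows; the augmented vector field F_theta is supplied by the user (it must return the pair (f_θ(x,t), ∇_θ f_θ(x,t) + A ∇_x f_θ(x,t)) packed into a flat array), so the routine itself is model-agnostic. This is only a sketch under those assumptions.

```python
import numpy as np

def euler_augmented(F_theta, z0, h, T):
    """Explicit Euler discretization (32) of z' = F_theta(z, t), z(0) = z0.

    F_theta(z, t) -> dz/dt, where z = (x, A) is stored as a flat numpy array.
    Returns the grid times and the values z_h(kh), k = 0..K.
    """
    K = int(np.ceil(T / h))
    zs = np.empty((K + 1, len(z0)))
    zs[0] = z0
    for k in range(K):
        zs[k + 1] = zs[k] + h * F_theta(zs[k], k * h)
    return h * np.arange(K + 1), zs

def eval_linear(ts_grid, zs, t):
    """Evaluate the piecewise-linear extension in (32) at an arbitrary time t."""
    h = ts_grid[1] - ts_grid[0]
    k = min(int(np.floor(t / h)), len(ts_grid) - 2)
    return zs[k] + (t - ts_grid[k]) / h * (zs[k + 1] - zs[k])
```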
To establish the consistency of this approximation (with some uniformity in θ), we consider the following condition.
AO 3.
There exist L_F, L'_F, C_F, δ > 0 and a compact set K ⊂ R^n × M_{d,n}(R) such that the following holds. For all t ∈ [0, T] and θ ∈ R^d, the ball centered at z_θ(t) with radius δ is included in K. Moreover, for all z, z̃ ∈ K, t, s ∈ [0, T] and θ, θ̃ ∈ R^d, ‖F_θ(z, t)‖ ≤ C_F and
‖F_θ(z, t) − F_θ̃(z̃, s)‖ ≤ L_F ‖θ − θ̃‖ + L'_F (‖z − z̃‖ + |t − s|) .    (33)
Proposition 21. Assume AO1 and AO3 hold. Then, there exist h̄, C > 0 such that, for any θ ∈ R^d, h ∈ (0, h̄] and t ∈ [0, T], z̃_θ^h(t) ∈ K and ∑_{i=1}^N ‖z_θ(t_i) − z̃_θ^h(t_i)‖ ≤ Ch, where z_θ solves (30) and z̃_θ^h is given by (32).
Proof. The proof is postponed to Section 5.3.
Example 22 (Continuation of Example 20). Let us check for instance that AO3 is satisfied for Example 20 provided that r ∈ C^2(R, [0, +∞)) and, for some L_r, L_{r'}, L_{r''} > 0,
r, r' and r'' are uniformly bounded respectively by L_r, L_{r'} and L_{r''} .    (34)
We may consider for example r : θ ↦ a_1θ^2/(θ^2 + a_2) for a_1, a_2 ∈ (0, +∞). Recall that f_θ(x) = x(1 − r(θ)x) and x_θ(t) ∈ [0, e^t x_0] for all θ ∈ R and all t ∈ [0, T], so that for any t ≥ 0 and θ ∈ R,
|∂_θ f_θ(x_θ(t))| = |r'(θ) x_θ(t)^2| ≤ L_{r'} e^{2t} x_0^2 .
Notice that 1/r(θ) is an equilibrium of the equation, so that it cannot be crossed by other solutions. Hence, on the one hand, if 1 ≤ r(θ)x_0 then x_θ is non-increasing (in particular x_θ(t) ≤ x_0 for all t ≥ 0) while, on the other hand, if 1 ≥ r(θ)x_0, then 1 ≥ r(θ)x_θ(t) for all t ≥ 0. In both cases, we get that for all t ≥ 0,
|∂_x f_θ(x_θ(t))| = |1 − 2r(θ)x_θ(t)| ≤ 1 + 2L_r x_0 .
Combining the two previous bounds,
|∂_θ x_θ(t)| ≤ ∫_0^t { L_{r'} e^{2s} x_0^2 + (1 + 2L_r x_0) |∂_θ x_θ(s)| } ds ,
and thus by Grönwall's inequality,
|∂_θ x_θ(t)| ≤ M_t = L_{r'} x_0^2 t e^{(3+2L_r x_0)t} ,
for all t ∈ [0, T] and θ ∈ R. Then, for any δ > 0, AO3 is satisfied with K = [−δ, e^T x_0 + δ] × [−M_T − δ, M_T + δ]. Indeed, K is compact and, by (31), for any x, A, θ ∈ R and t ≥ 0,
F_θ((x, A), t) = ( x(1 − r(θ)x) , −r'(θ)x^2 + A(1 − 2r(θ)x) ) ,
so that (33) easily follows from the condition (34).
Since other schemes may be used, in particular higher-order ones, we consider more generally in the following a solver Ψ h : R d → (R n × M d,n (R)) N for h > 0 satisfying the condition:
AO4. There exist h̄, C_Ψ, α > 0 such that, for any θ ∈ R^d and h ∈ (0, h̄],
∑_{i=1}^N ‖z_θ(t_i) − Ψ_i^h(θ)‖ ≤ C_Ψ h^α ,
where z_θ is a solution of (30) and Ψ_i^h : R^d → R^n × M_{d,n}(R) is the i-th component of Ψ^h.
When AO3 and AO4 are both satisfied, without loss of generality, we assume furthermore thath is sufficiently small so that C Ψh α δ. This implies that Ψ h i (θ) ∈ K for all θ ∈ R d , i ∈ {1, . . . , N } and h ∈ (0,h].
Writing Ψ_i^h(θ) = (x̃_θ^h(t_i), G̃_θ^h(t_i)), we can consider, for any θ ∈ R^d,
b̃_h(θ) = −∇_θ ln π_0(θ) + ∑_{i=1}^N G̃_θ^h(t_i) · ∇_x ln ϕ_ε(y_i − x̃_θ^h(t_i)) ,    (35)
as an approximation of ∇U given by (29). Remark that, now, in contrast to b(θ) = ∇U(θ), it is possible in practice to evaluate b̃_h(θ) for θ ∈ R^d, provided ∇_x ln ϕ_ε and ∇_θ ln π_0 can be evaluated. We now assess the error due to the use of b̃_h in place of the exact gradient in (28) by verifying that the assumptions of Section 2 are satisfied. For γ, h > 0 and θ ∈ R^d, denote
T_γ(θ) = θ − γ∇U(θ) ,   T̃_{γ,h}(θ) = θ − γ b̃_h(θ) .    (36)
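Putting the pieces together, one step of ULA with the approximate drift b̃_h of (35)-(36) can be sketched as follows; the routines grad_log_prior, grad_log_noise and the solver psi_h (returning the pairs (x̃_θ^h(t_i), G̃_θ^h(t_i))) are assumed to be supplied by the user, and the names are placeholders.

```python
import numpy as np

def b_h(theta, y, psi_h, grad_log_prior, grad_log_noise):
    """Approximate drift (35): -grad log pi_0(theta) + sum_i G_i @ grad_x log phi_eps(y_i - x_i)."""
    drift = -grad_log_prior(theta)
    for (x_i, G_i), y_i in zip(psi_h(theta), y):
        drift = drift + G_i @ grad_log_noise(y_i - x_i)   # G_i has shape (d, n)
    return drift

def ula_step(theta, gamma, y, psi_h, grad_log_prior, grad_log_noise, rng):
    """One step of (28) with grad U replaced by the computable b_h, i.e. T_{gamma,h} plus Gaussian noise."""
    noise = np.sqrt(2.0 * gamma) * rng.standard_normal(theta.shape)
    return theta - gamma * b_h(theta, y, psi_h, grad_log_prior, grad_log_noise) + noise
```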
When AO2 and AO3 are both satisfied, there exist C_s, L_s > 0 such that for all i ∈ {1, . . . , N}, the function s_i on the compact set K, given for any x ∈ R^n and A ∈ M_{d,n}(R) by
s i (x, A) = A∇ x ln ϕ ε (y i − x)(37)
is bounded by C s and L s -Lipschitz continuous on R n × M d,n (R).
Proposition 23.
Under AO1, AO2, AO3 and AO4, for any h ∈ (0, h̄), the functions T_γ and T̃_{γ,h} given by (36) satisfy H1 and H2 for any γ̄ > 0, with
c_∞ = C_Ψ L_s h^α ,   R_1 = (2N C_s/m_U) ∨ R_U ,   m = m_U/2 ,   L = L_U + L_s L_F ∑_{i=1}^N t_i e^{L'_F t_i} .
Proof. The proof is postponed to Section 5.3.
Under the conditions of Proposition 23 and using the results of Section 2, we get that the Markov chains (X_k)_{k∈N} and (X̃_k)_{k∈N} associated with T_γ and T̃_{γ,h} given by (36) have unique invariant measures π_γ and π̃_{γ,h}, and that there exist γ̄, h̄, C > 0 such that for all γ ∈ (0, γ̄] and h ∈ (0, h̄],
‖π_γ − π̃_{γ,h}‖_TV ≤ Ch .    (38)
Example 24 (Continuation of Example 20). As a conclusion, consider the logistic case of Example 20 with the Euler scheme (32), assuming that r ∈ C 2 (R, [0, +∞)) satisfies (34).
Then, by Example 20 and Example 22, AO1, AO3 and AO4 hold. Assuming moreover that π 0 and ϕ ε are Gaussian, then AO2 also holds and we obtain (38).
Numerical results
We illustrate our findings on two particular ODEs. First, we consider the ODE associated with the Van der Pol oscillator corresponding to the second order ODE:
x θ (t) − θ(1 − x θ (t) 2 )ẋ θ (t) + x θ (t) = 0 ,(39)
where θ ∈ R is the parameter to infer. It corresponds to (26) with f_θ(x_1, x_2) = (x_2, θ(1 − x_1^2)x_2 − x_1).
We generate synthetic data by solving (39) using the fourth-order Runge-Kutta method for T = 10 and θ = 1. We then select (x_θ(t_i))_{i=1}^{25} for (t_i)_{i=1}^{25} uniformly chosen in [0, T]. The observations y = (y_i)_{i=1}^{25} are obtained from (x_θ(t_i))_{i=1}^{25} by adding i.i.d. zero-mean Gaussian noise with variance 0.5. We consider the corresponding statistical model (27) where (ε_i)_{i=1}^{25} are i.i.d. zero-mean Gaussian random variables with variance 0.5. As prior π_0, we take the zero-mean Gaussian distribution with variance 0.5. We then use ULA with γ = 10^{-2}, for which the gradient is estimated using the Euler method with the time steps h ∈ {0.05, 0.01, 0.001}. Figure 1a represents the histograms corresponding to the different Markov chains after 10^5 iterations with a burn-in of 10^4 steps. Gaussian kernel density approximations of these histograms are estimated and used as proxies for the densities of the invariant distributions π̃_{γ,h} of the Markov chain (X̃_k)_{k∈N} associated with T̃_{γ,h} given by (36). To obtain a proxy for the density of π_γ, the stationary distribution of (X_k)_{k∈N} associated with T_γ, we use the same procedure but with the Euler method with h = 0.0001. We then estimate the total variation distance between π_γ and π̃_{γ,h} for h ∈ {0.005, 0.004, 0.003, 0.0025, 0.0015, 0.001, 0.00075, 0.0005} using numerical integration. The corresponding results over 10 replications are reported in Figure 1b. We observe that the total variation distance decreases linearly with h, which supports our findings.
For our second experiment, we consider the Lotka-Volterra model describing the evolution of the populations of two interacting biological species, denoted by t ↦ x_θ(t) = (u_θ(t), v_θ(t)). The dynamics of these two populations are assumed to be governed by the system of equations
u̇_θ(t) = (α − β v_θ(t)) u_θ(t) ,   v̇_θ(t) = (−γ + δ u_θ(t)) v_θ(t) ,    (40)
where θ = (α, β, γ, δ) is the parameter to infer. We follow the methodology presented in [25, Chapter 16]. For this experiment, we consider a different statistical model from the previous one and generate synthetic data y = (y_i)_{i=1}^{50} accordingly, associated with observation times (t_i)_{i=1}^{50} uniformly spaced in [0, T] for T = 10 and the true parameter θ_0 = (0.6, 0.025, 0.8, 0.025). More precisely, for any i ∈ {1, . . . , 50}, y_i = (u_{y_i}, v_{y_i}) with u_{y_i} = u_θ(t_i) e^{ε_{u,i}}, v_{y_i} = v_θ(t_i) e^{ε_{v,i}}, and (ε_{u,j}, ε_{v,j})_{j=1}^{50} are i.i.d. zero-mean Gaussian random vectors with covariance matrix I_2. The prior π_0 is set to be the Gaussian distribution on R^4 with means (1, 0.05, 1, 0.05) and standard deviations (0.5, 0.05, 0.5, 0.05). The posterior distribution is then given by π(θ|y) ∝ exp(−U(θ)), where
U(θ) = − log π_0(θ) + ∑_{i=1}^{50} [ (log u_{y_i} − log u_θ(t_i))^2 + (log v_{y_i} − log v_θ(t_i))^2 ] / (2ς^2) .
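For this model, the vector field f_θ and the augmented field F_θ of (31) are explicit; the Python sketch below (with the ordering θ = (α, β, γ, δ) as above, and γ renamed gamma_ to avoid a clash with the step size) can be combined with an explicit Euler scheme such as (32). The packing of z = (x, A) into a flat array is an implementation choice, not something prescribed in the paper.

```python
import numpy as np

def F_theta_lv(z, t, theta):
    """Augmented vector field (31) for the Lotka-Volterra model (40).

    z packs (x, A) with x = (u, v) and A = d x_theta / d theta of shape (4, 2).
    theta = (alpha, beta, gamma_, delta); returns dz/dt with the same packing.
    """
    alpha, beta, gamma_, delta = theta
    u, v = z[:2]
    A = z[2:].reshape(4, 2)
    f = np.array([(alpha - beta * v) * u, (-gamma_ + delta * u) * v])
    # (grad_theta f)_{j,i} = d f_i / d theta_j
    grad_theta_f = np.array([[u, 0.0], [-u * v, 0.0], [0.0, -v], [0.0, u * v]])
    # (grad_x f)_{k,i} = d f_i / d x_k
    grad_x_f = np.array([[alpha - beta * v, delta * v], [-beta * u, -gamma_ + delta * u]])
    dA = grad_theta_f + A @ grad_x_f
    return np.concatenate([f, dA.ravel()])

# Usage (hypothetical initial condition): x(0) = (1, 1), A(0) = 0.
# z0 = np.concatenate([np.array([1.0, 1.0]), np.zeros(8)])
# F = lambda z, t: F_theta_lv(z, t, np.array([0.6, 0.025, 0.8, 0.025]))
```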
We then use ULA with γ = 5 × 10 −5 , for which the gradient is estimated using the Euler method. We focus here on the second component of the chain. The results for the other components are similar. Figure 2a represents the histograms for the second component corresponding to the different Markov chains after 10 7 iterations with a burn-in of 10 3 steps. Gaussian kernel density approximation of these histograms are estimated and used as proxy for the marginal density of the invariant distributions π γ,h of the Markov chain (X k ) k∈N associated toT γ,h given by (36). To obtain a proxy for the marginal density of π γ , the stationary distribution of (X k ) k∈N associated to T γ , we use the same procedure but using the Euler method with h = 0.0001. We then estimate the total variation between π γ and π γ,h for h ∈ {k × 10 −2 : k ∈ {1, . . . , 10}} using numerical integration. The corresponding results over 10 replications are reported in Figure 2b. We can observe that the total variation distance still linearly decreases with h.
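The total variation estimates reported in Figures 1b and 2b can be reproduced, in spirit, by fitting a Gaussian kernel density estimate to each chain and integrating the absolute difference of the two fitted densities numerically. The sketch below follows that recipe for one-dimensional marginals; the bandwidth (scipy's default) and the integration grid are arbitrary choices, not those of the paper.

```python
import numpy as np
from scipy.stats import gaussian_kde

def tv_estimate(samples_ref, samples_h, n_grid=2000):
    """Estimate ||pi_ref - pi_h||_TV = 0.5 * int |p_ref - p_h| from two 1-D samples,
    using Gaussian kernel density proxies and a Riemann sum on a common grid."""
    kde_ref = gaussian_kde(samples_ref)
    kde_h = gaussian_kde(samples_h)
    lo = min(samples_ref.min(), samples_h.min())
    hi = max(samples_ref.max(), samples_h.max())
    grid = np.linspace(lo, hi, n_grid)
    dx = grid[1] - grid[0]
    return 0.5 * np.sum(np.abs(kde_ref(grid) - kde_h(grid))) * dx

# Usage: two Gaussian samples whose means differ slightly.
rng = np.random.default_rng(1)
print(tv_estimate(rng.normal(0.0, 1.0, 10**5), rng.normal(0.05, 1.0, 10**5)))
```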
Postponed proofs
Postponed proofs of Section 2
Proof of Proposition 1
Recall that under H1, we have for any γ ∈ (0,γ],
sup x∈R d , x =0 { T γ (x) − T γ (0) / x } (1 + γL) , sup x∈R d , x R 1 { T γ (x) − T γ (0) / x } (1 − γm) .(41)
Define for any
x ∈ R d V c,γ (x) = exp(c x − T γ (0) 2 ), for c = m/(16σ 2 ) .(42)
Note that since sup γ∈(0,γ] T γ (0) < +∞ by H1, V c,γ goes to +∞ at infinity and
lim x →+∞ { sup γ∈(0,γ] V c,γ /V c }(x) = 1 = lim x →+∞ { inf γ∈(0,γ] V c,γ /V c }(x) = 1 .
Therefore, by [9,Corollary 11], it is sufficient to show that there existγ 1 ∈ (0,γ], λ ∈ (0, 1), A 0 such that for any γ ∈ (0,γ 1 ] and x ∈ R d ,
R γ V c,γ (x) λ γ V c,γ (x) + γA .(43)
We show that it holds with
γ 1 =γ ∧ {1/m} , λ = exp(−cmM 2 /4) , M = R 1 ∨ (8dσ 2 /m) 1 /2 , A = λγ exp(cM 2 +γ{B 1 M 2 + B 2 − log(λ)}){B 1 M 2 + B 2 − log(λ)} , B 2 = 2dcσ 2 , B 1 = 4C 1 , C 1 = C 2 ∨ C 2 2γ , C 2 = (2L) ∨ (Lγ) ∨ (8cσ 2 ) .(44)
Define for any x ∈ R d , T γ (x) = T γ (x) − T γ (0). Note that for any γ ∈ (0,γ 1 ], 2cσ 2 γ 1 by definition of c (42) andγ 1 . Let x ∈ R d and γ ∈ (0,γ 1 ]. Then, we obtain using that
R e az+bz 2 −z 2 /2 dz = (2π(1 − 2b) −1 ) d/2 e a 2 /(2(1−2b)) for any a ∈ R and b ∈ [0, 1/2), R γ V c,γ (x) = (2π) −d/2 R d exp(c T γ (x) + (σ 2 γ) 1 /2 z 2 − z 2 /2)dz = (1 − 2cσ 2 γ) −d/2 exp{c(1 − 2cσ 2 γ) −1 T γ (x) 2 } .
We now distinguish the case x M and x < M . In the first case, we get by (41)
R γ V c,γ (x) (1 − 2cγσ 2 ) −d/2 exp c(1 − mγ)(1 + 8cσ 2 γ) x 2 exp c(1 − mγ/4) x 2 + 2dcσ 2 γ − mcγ x 2 /4 λ γ V c,γ (x) ,(45)
where we used for the penultimate inequality that
R γ V c,γ (x) (1 − 2cγσ 2 ) −d/2 exp c(1 + Lγ) 2 (1 + 8cσ 2 γ) x 2 exp c(1 + γB 1 ) x 2 + γB 2 .
Using that e t − 1 te t for t 0, we obtain that
R γ V c,γ (x) = λ γ V c,γ (x) + λ γ V c,γ (x){exp cγB 1 x 2 + γB 2 − γ log(λ) − 1} λ γ V c,γ (x) + γλγV c,γ (x){B 1 x 2 + B 2 − log(λ)} exp γ(B 1 x 2 + B 2 − log(λ) ,
which combined with (45) completes the proof of (43).
Proof of Proposition 3
First note that under H1 and H2, for all x ∈ R d , since
T γ (x) − T γ (0) γc ∞ + T γ (x) − T γ (0)
and from (41),
sup x∈R d , x R 1 T γ (x) − T γ (0) /{(1 − γm) x + γc ∞ } 1 .(46)
Define for any
x ∈ R d V c,γ (x) = exp(c x − T γ (0) 2 ), for c = m/(16σ 2 ) .(47)R γ V c λ γ V c + A γ .(48)
Define for any x ∈ R d , T γ (x) =T γ (x) − T γ (0). Note that for any γ ∈ (0,γ 1 ], 2cσ 2 γ 1 by definition of c (47) andγ 1 . Let x ∈ R d and γ ∈ (0,γ 1 ]. Then, we obtain using that R e az+bz 2 −z 2 /2 dz = (2π(1 − 2b) −1 ) d/2 e a 2 /(2(1−2b)) for any a ∈ R and b ∈ [0, 1/2),
R γ V c,γ (x) = (2π) −d/2 R d exp(c T γ (x) + (σ 2 γ) 1 /2 z 2 − z 2 /2)dz = (1 − 2cσ 2 γ) −d/2 exp{c(1 − 2cσ 2 γ) −1 T γ (x) 2 } .
If x M , we get by (46), (1 − 2cσ 2 γ) −1 8cσ 2 γ and (1 − mγ) 2 1 − mγ since 2cσ 2 γ 1/2 and γ 1/m, by (47) and definition ofγ 1 ,
R γ V c,γ (x) (1 − 2cγσ 2 ) −d/2 exp c(1 + 8cσ 2 γ){(1 − mγ) x + c ∞ } 2 (1 − 2cγσ 2 ) −d/2 exp c{(1 − mγ/2) x + (1 + 8cσ 2 γ)c ∞ } 2 .
Therefore, we get lim inf x →+∞ [R γ V c,γ (x)/V c,γ (x)] = 0 which completes the proof of (48).
Proof of Proposition 6
The proof is based on this technical lemma.
Lemma 25. For any
g ∈ R, u ∈ [0, 1] and γ ∈ (0,γ], a, b ∈ [0, +∞), a b, H γ (a, g, u) H γ (b, g, u)(49)
Proof. Let u ∈ [0, 1], g ∈ R and a, b ∈ [0, +∞), a b. We first prove that for any c ∈ R + such that c − 2(σ 2 γ) 1
/2 g < 0, H γ (c, g, u) = 0 ,(50)
which implies that for any c ∈ R + , H γ (c, g, u) 0. We need to consider the following two cases.
(a) If c − (σ 2 γ) 1 /2 g < 0. Then, using −(σ 2 γ) 1 /2 g c − (σ 2 γ) 1 /2 g, ϕ σ 2 γ ((σ 2 γ) 1 /2 g) = ϕ σ 2 γ (−(σ 2 γ) 1 /2 g), and t → ϕ σ 2 γ (t) is decreasing on [0, +∞), we get p γ (c, g) = 1. Therefore (50) holds.
(b) If 0 c − (σ 2 γ) 1 /2 g (σ 2 γ) 1 /2 g, we obtain similarly to the first case that p γ (c, g) = 1 and therefore (50) holds.
We now show (49). It is straightforward by (50) if 0 > a−2(σ 2 γ) 1 /2 g. If 0 a−2(σ 2 γ) 1 /2 g. By using t → ϕ σ 2 γ (t) is decreasing on [0, +∞), we obtain
p σ 2 γ (a, g) p σ 2 γ (b, g) .(51)
Then, (49) follows from (9) and (51).
Proof of Proposition 6. Let k ∈ N . By H1, H2 and the triangle inequality, for any x,x ∈ R d ,
E(x,x) τ γ ( x −x ) + γc ∞ .(52)
By using (6), and (9) we have,
X k+1 −X k+1 = B k+1 − E(X k ,X k ) + 2(σ 2 γ) 1 /2 e(X k ,X k )e(X k ,X k ) T Z k+1 = B k+1 − E(X k ,X k ) e(X k ,X k ) + 2(σ 2 γ) 1 /2 G k+1 e(X k ,X k ) = B k+1 E(X k ,X k ) − 2(σ 2 γ) 1 /2 G k+1 = H γ ( E(X k ,X k ) , G k+1 , U k+1 ) .
This gives (11) when combined with (52). Finally, the last statement follows from Lemma 25 and H1 ensuring that τ γ is non-decreasing on [0, +∞).
Proof of Proposition 8
The proof is an easy consequence of the following technical lemma.
Lemma 26. Assume H1 and H2 hold. Then, for any w ∈ [0, +∞) and γ ∈ (0, γ̄], we have that Q_γ V_1^*(w) = τ_γ(w) + γc_∞.
Proof. For any w ∈ R, we have,
Q γ V * 1 (w) = R (1 − p σ 2 γ (w, g)) τ γ (w) + γc ∞ − 2(σ 2 γ) 1 /2 g ϕ(g)dg = R (τ γ (w) + γc ∞ − 2g) ϕ (σ 2 γ) 1 /2 (g) − ϕ (σ 2 γ) 1 /2 (g) ∧ ϕ (σ 2 γ) 1 /2 (τ γ (w) + γc ∞ − g) dg = (τγ (w)+γc∞)/2 −∞ (τ γ (w) + γc ∞ − 2g) ϕ (σ 2 γ) 1 /2 (g) − ϕ (σ 2 γ) 1 /2 (τ γ (w) + γc ∞ − g) dg .
By using change of variable g → a − g we have,
(τγ (w)+γc∞)/2 −∞ (τ γ (w) + γc ∞ − 2g) ϕ (σ 2 γ) 1 /2 (g) − ϕ (σ 2 γ) 1 /2 (τ γ (w) + γc ∞ − g) dg = 1 2 R (τ γ (w) + γc ∞ − 2g) ϕ (σ 2 γ) 1 /2 (g) − ϕ (σ 2 γ) 1 /2 (τ γ (w) + γc ∞ − g) dg = τ γ (w) + γc ∞ .
Proof of Proposition 8. By Lemma 26 and H1, for any w ∈ [0, +∞),
Q γ V * 1 (w) = τ γ (w) + γc ∞ (1 − γm)V * 1 (w)1 (R 1 ,∞) (w) + (1 + γL)V * 1 (w)1 (0,R 1 ] (w) + γc ∞ .
This completes the proof.
Proof of Proposition 9
We first establish that Q γ admits a unique invariant probability measure µ γ and is geometrically ergodic. To that end, we show that Q γ is (
Q γ (w, {0}) [−1,1] p σ 2 γ (τ γ (w) + γc ∞ , g)ϕ(g)dg η K ,(53)
where using H1
η K = inf (r,g)∈K×[−1,1] p σ 2 γ (τ γ (w) + γc ∞ , g) [−1,1] ϕ(g)dg inf (a,g)∈[0,M ]×[−1,1] p σ 2 γ (a, g) [−1,1] ϕ(g)dg ,
and M = (1 + γL) sup(K) + γc ∞ . Note that since (a, g) → p σ 2 γ (a, g) is a continuous positive function, and [0, M ] × [−1, 1] is compact, η K > 0. Therefore {0} is an accessible (1, δ 0 )-small set and Q γ is irreducible. In addition, Q γ (0, {0}) > 0 which implies that Q γ is strongly aperiodic. (b) Let now C be a compact set, we show that C is small. By (53), for A ∈ B([0, +∞)) and w ∈ [0, +∞),
Q 2 γ (w, A) R 1 {0} (w)Q γ (w, A)Q γ (w, dw) η C Q γ (0, A) .
Therefore C is a (2, Q γ (0, ·))-small set. (c) In addition by H1 and (13) we have, for any w ∈ [0, +∞),
[0,+∞) V (w)Q γ (w, dw) 1 + R (1 − p σ 2 γ (τ γ (w) + γc ∞ , z)) τ γ (w) + γc ∞ + 2(σ 2 γ) 1 /2 (g) − ϕ(g)dg (1 − γm)V (w) + γR 1 (m + L) + γc ∞ + 2(σ 2 γ) 1 /2 / √ 2π .
The proof of the first part of the proposition is complete. We now establish the second part. Let A ∈ B(R) such that (δ 0 + Leb)(A) = 0. Then 0 ∈ A and Leb(A) = 0 therefore for any w ∈ R,
Q γ (w, A) 1 √ 2π R 1 A τ γ (w) + γc ∞ − 2σγ 1/2 g dg = 0 .
It follows that µ γ (A) = µ γ Q γ (A) = 0 and µ γ (δ 0 + Leb). Since for any w ∈ [0, +∞), Q γ (w, {0}) > 0, δ {0} is an irreducibility measure, and by [11,Theorem 9.2.15], µ γ is a maximal irreducibility measure for Q γ , δ 0 µ γ implying that
µ γ ({0}) > 0.
In the case c ∞ = 0, by (13) and (10), for any A ∈ B(()[0, +∞)), Leb(A) > 0, Q γ (w, A) > 0 for any w ∈ [0, +∞) and therefore Leb is an irreducibility measure. Applying [11,Theorem 9.2.15] again, we get that δ 0 + Leb µ γ . This completes the proof since we have already shown that µ γ (δ 0 + Leb).
Proof of Theorem 11
Lemma 27. Assume H1 and H2 hold. For any w ∈ R,
Q_γ(w, {0}) = 2Φ( −(τ_γ(w) + γc_∞)/(2σ√γ) ) ,
where Q γ is defined by (13) and Φ is the cumulative distribution of the one-dimensional Gaussian distribution with mean 0 and variance 1.
Proof. Let w ∈ R. By (12) and the change of variable g → σ √ γg, we get
Q γ (r, {0}) = R 1 ∧ ϕ σ 2 γ τ γ (w) + γc ∞ − σ √ γg ϕ σ 2 γ σ √ γg ϕ(g)dg = R ϕ σ 2 γ (g) ∧ ϕ σ 2 γ (τ γ (w) + γc ∞ − g) dg = R ϕ σ 2 γ (g) ∧ ϕ σ 2 γ (g − τ γ (w) + γc ∞ ) dg = (τγ (w)+γc∞)/2 −∞ ϕ σ 2 γ (g − τ γ (w) + γc ∞ ) dg + +∞ (τγ (w)+γc∞)/2 ϕ σ 2 γ (g) dg = −(τγ (w)+γc∞)/2 −∞ ϕ σ 2 γ (g) dg + +∞ (τγ (w)+γc∞)/2 ϕ σ 2 γ (g) dg = 2Φ − τ γ (w) + γc ∞ 2σ √ γ ,
and the lemma follows.
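The closed form in Lemma 27 is easy to check numerically: the integral appearing in the first line of the proof can be evaluated by quadrature against the standard Gaussian density and compared with 2Φ(−(τ_γ(w)+γc_∞)/(2σ√γ)). In the Python sketch below, the choice of τ_γ (of the form w + γκ(w) as in A1, with an arbitrary κ) and the constants are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

sigma, gamma, c_inf = 1.0, 1e-2, 0.3
tau = lambda w: w + gamma * (-0.5 * w)        # tau_gamma(w) = w + gamma*kappa(w), kappa arbitrary

def phi_var(x, var):
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

def stick_prob(w):
    """Q_gamma(w, {0}) computed from the integral in the proof of Lemma 27."""
    a = tau(w) + gamma * c_inf
    var = sigma**2 * gamma
    integrand = lambda g: min(1.0, phi_var(a - sigma * np.sqrt(gamma) * g, var)
                                    / phi_var(sigma * np.sqrt(gamma) * g, var)) * norm.pdf(g)
    return quad(integrand, -10.0, 10.0)[0]

for w in [0.0, 0.05, 0.2]:
    closed_form = 2.0 * norm.cdf(-(tau(w) + gamma * c_inf) / (2.0 * sigma * np.sqrt(gamma)))
    print(w, stick_prob(w), closed_form)      # the two columns should agree
```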
Lemma 28.
Let σ 2 , γ > 0. For any t, a 0, we have
R 1 − 2Φ − t − 2σγ 1/2 g 2a p σ 2 γ (t, g)ϕ(g)dg = 0 ,
where p σ 2 γ is defined by (10), ϕ and Φ are the density and the cumulative distribution function of the one-dimensional Gaussian distribution with mean 0 and variance 1 respectively.
Proof. Using the changes of variable g → σγ 1/2 g, g → g − t and g → −g, we obtain
R 1 − 2Φ − t − 2σγ 1/2 g 2a p σ 2 γ (t, g)ϕ(g)dg = R 1 − 2Φ − t − 2σγ 1/2 g 2a 1 ∧ ϕ σ 2 γ a − σ √ γg ϕ σ 2 γ σ √ γg ϕ(g)dg = t/2 −∞ 1 − 2Φ − t − 2g 2a ϕ σ 2 γ (t − g) dg + +∞ t/2 1 − 2Φ − t − 2g 2a ϕ σ 2 γ (g) dg = −t/2 −∞ 1 − 2Φ − −t − 2g 2a ϕ σ 2 γ (g) dg + +∞ t/2 1 − 2Φ − t − 2g 2a ϕ σ 2 γ (g) dg = +∞ t/2 1 − 2Φ − −t + 2g 2a ϕ σ 2 γ (g) dg + +∞ t/2 1 − 2Φ − t − 2g 2a ϕ σ 2 γ (g) dg .
Using that s ∈ R, 1 − 2Φ (s) = − [1 − 2Φ (−s)] completes the proof.
Lemma 29.
Let σ 2 , γ > 0. For any t, s 0 and a > 0,
R 1 − 2Φ − t + s − 2σ √ γg 2a p σ 2 γ (t, g)ϕ(g)dg = 2P σ √ γG t/2, −s − t 2aG − 2σ √ γG s − t ,
where G,G are two independent one-dimensional standard Gaussian random variables, p σ 2 γ is defined by (10), ϕ and Φ are the density and the cumulative distribution function of the one-dimensional Gaussian distribution with mean 0 and variance 1 respectively.
Proof. Using the changes of variable g → σγ 1/2 g, g → g − t, we get
R 1 − 2Φ − t + s − 2σ √ γg 2a p σ 2 γ (t, g)ϕ(g)dg = −t/2 −∞ 1 − 2Φ − −t + s − 2g 2a ϕ σ 2 γ (g) dg + +∞ t/2 1 − 2Φ − t + s − 2g 2a ϕ σ 2 γ (g) dg = 2[P(σγ 1/2 G t/2) − A − B] (54) A = −t/2 −∞ Φ − −t + s − 2g 2a ϕ σ 2 γ (g) dg , B = +∞ t/2 Φ − t + s − 2g 2a ϕ σ 2 γ (g) dg .
In addition, we have since (−G, −G) has the same distribution than (G,G),
A = P σ √ γG − t 2 ,G − −t + s − 2σ √ γG 2a = P σ √ γG t 2 ,G −t + s + 2σ √ γG 2a B = P σ √ γG t 2 ,G − t + s − 2σ √ γG 2a
Therefore, we obtain
A + B = P (σ √ γG t/2) − P σ √ γG t/2, − t + s − 2σ √ γG 2a G −t + s + 2σ √ γG 2a = P (σ √ γG t/2) − P σ √ γG t/2, −s − t 2aG − 2σ √ γG s − t
Plugging this expression in (54) concludes the proof.
Lemma 30. For any a ∈ R, b ∈ [0, 1], it holds Φ (a + b) − Φ (a − b) 1 − 2Φ (−b) − a 2 b exp(−b 2 /2)/ √ 2π ,
where Φ are the density and the cumulative distribution function of the one-dimensional Gaussian distribution with mean 0 and variance 1.
Proof. Define ψ : R × [0, 1] → R for any a ∈ R, b ∈ [0, 1] by ψ(a, b) = Φ (a + b) − Φ (a − b) − 1 + 2Φ (−b) + a 2 b exp(−b 2 /2)/ √ 2π .
We show that ψ(a, b) 0 for any a ∈ R and b ∈ [0, 1]. Using that 1 − Φ(−t) = Φ(t) for any t ∈ R, we get that ψ(a, b) = ψ(−a, b) and therefore we only need to consider the case a 0 and b ∈ [0, 1]. In addition, for any b ∈ [0, 1], ψ(0, b) = 0 and thus, it is sufficient to establish that for any b ∈ [0, 1], a → ψ(a, b) is non-increasing on R − . For any a 0 and b ∈ (0, 1), we have using that sinh(t) = t 0 cosh(s)ds t cosh(t) and e −t 2 /2 cosh(t) 1 for any t ∈ [0, +∞),
√ 2π exp(b 2 /2) ∂ψ ∂a (a, b) = 2 exp(−a 2 /2) sinh(−ab) + 2ab < −2ab[exp(−a 2 b 2 /2) cosh(ab) − 1] 0 .
By continuity, it also holds for a ≤ 0 and b ∈ [0, 1], which concludes the proof.
Lemma 31. Assume H1 and H2 hold. For any w ∈ [0, +∞) and α, β ∈ [0, +∞) such that α/(2β) ≤ 1,
[0,+∞) 1 − 2Φ − τ γ (w) + α 2β Q γ (w, dw) 1 − 2Φ − τ γ (w) + γc ∞ + α/(1 + γL) 2 σ 2 γ + β/(1 + γL) 2 + ζ γα β 3 ,
where Φ is the density and the cumulative distribution function of the one-dimensional Gaussian distribution with mean 0 and variance 1,
ζ = (1 +γL) 2 σ 2 (2 √ 2π) −1 sup t 0 {t 2 Φ(−t)} + 1/8 .(55)
Proof. Let α, β 0 such that α/(2β) 1. By (13), we have
[0,+∞) 1 − 2Φ − τ γ (w) + α 2β Q γ (w, dw) = R 1 − 2Φ − τ γ (τ γ (w) + γc ∞ − 2σ √ γg) + α 2β (1 − p σ 2 γ (τ γ (w) + γc ∞ , g))ϕ(g)dg + 1 − 2Φ − τ γ (0) + α 2β R p σ 2 γ (τ γ (w) + γc ∞ , g)ϕ(g)dg = R 1 − 2Φ − τ γ (τ γ (w) + γc ∞ − 2σ √ γg) + α 2β (1 − p σ 2 γ (τ γ (w) + γc ∞ , g))ϕ(g)dg + 1 − 2Φ − τ γ (0) + α 2β 2Φ − τ γ (w) + γc ∞ 2σ √ γ . (56) By H1, t → 1 − 2Φ(−t) is increasing, we have setting ψ γ (w) = τ γ (w) + γc ∞ + α/(1 + γL), 1 − 2Φ − τ γ (τ γ (w) + γc ∞ − 2σ √ γg) + α 2β 1 − 2Φ − ψ γ (w) − 2σ √ γg 2β/(1 + γL) .
Using [15,Lemma 20], Lemma 28 and Lemma 29, we get
R 1 − 2Φ − τ γ (τ γ (w) + γc ∞ − 2σ √ γg) + α 2β (1 − p σ 2 γ (τ γ (w) + γc ∞ , g))ϕ(g)dg R 1 − 2Φ − ψ γ (w) − 2σ √ γg 2β/(1 + γL) (1 − p σ 2 γ (ψ γ (w), g))ϕ(g)dg − R 1 − 2Φ − ψ γ (w) − 2σ √ γg 2β/(1 + γL) p σ 2 γ (τ γ (w) + γc ∞ , g))ϕ(g)dg = 1 − 2Φ − ψ γ (w) 2 σ 2 γ + β/(1 + γL) 2 − 2P ({2σ √ γG τ γ (w) + γc ∞ } ∩ A) ,(57)
where
A = −τ γ (w) − γc ∞ − α 1 + γL 2βG 1 + γL − 2σ √ γG −τ γ (w) − γc ∞ + α 1 + γL ,
and G,G are two independent one-dimensional standard Gaussian random variables. Define θ γ : [0, +∞) × R → R for any w ∈ [0, +∞) and g ∈ R by θ γ (w, g) = (2β
) −1 (1 + γL)[−τ γ (w) − γc ∞ + σ √ γg].
Then, using that
A = θ γ (w, G) − α/(2β) G θ γ (w, G) + α/(2β) , we have P ({2σ √ γG τ γ (w) + γc ∞ } ∩ A) = R 1 [0,+∞) g − τ γ (w) + γc ∞ 2σ √ γ Φ θ γ (w, g) + α 2β − Φ θ γ (w, g) − α 2β ϕ(g)dg .
Since α/(2β) 1 by Lemma 30 we have, for any a ∈ R,
Φ a + α 2β − Φ a − α 2β 1 − 2Φ − α 2β − a 2 α 2 √ 2πβ e −α 2 /(8β 2 ) , which implies P ({2σ √ γG τ γ (w) + γc ∞ } ∩ A) Φ − τ γ (w) + γc ∞ 2σ √ γ 1 − 2Φ − α 2β − R 1 [0,+∞) g − τ γ (w) + γc ∞ 2σ √ γ θ 2 γ (g, w)α 2 √ 2πβ e −α 2 /(8β 2 ) ϕ(g)dg .
Therefore, we obtain using that E[1 [0,+∞) (G)
G 2 ] = 1/2, 1 − 2Φ − α 2β Φ − τ γ (w) + γc ∞ 2σ √ γ − P ({2σ √ γG τ γ (w) + γc ∞ } ∩ A) R 1 [0,+∞) g − τ γ (w) + γc ∞ 2σ √ γ θ 2 γ (g, w)α 2 √ 2πβ e −α 2 /(8β 2 ) ϕ(g)dg αγ β 3 (1 + γL) 2 σ 2 2 √ 2π e −α 2 /(8β 2 ) τ γ (w) + γc ∞ 2σ √ γ 2 Φ − τ γ (w) + γc ∞ 2σ √ γ − 1 √ 2π τ γ (w) + γc ∞ 2σ √ γ exp − τ γ (w) + γc ∞ 2σ √ γ 2 2 + 1/8 .
Combining this inequality and (57) in (56) concludes the proof.
Under H1 and H2, define (α k ) k 1 , (β k ) k 1 for any k 1 by
α k = γc ∞ k−1 i=0 (1 + γL) −i , β 2 k = γσ 2 k−1 i=0 (1 + γL) −2i .(58)
Lemma 32. Assume H1 and H2 hold. For any γ > 0 and k ≥ 1, we have
kγc_∞ e^{−kγL} ≤ α_k ≤ kγc_∞ ,   (kγ)^{1/2} σ e^{−kγL} ≤ β_k ≤ (kγ)^{1/2} σ ,
[(kγ)^{1/2} c_∞/σ] e^{−kγL} ≤ α_k/β_k ≤ [(kγ)^{1/2} c_∞/σ] e^{kγL} ,
[c_∞ γ^{1/2}/(σ^3 k^{1/2})] e^{−kγL} ≤ γα_k/β_k^3 ≤ [c_∞ γ^{1/2}/(σ^3 k^{1/2})] e^{3kγL} ,    (59)
γ ∑_{i=1}^{k−1} {α_i/β_i^3} ≤ [2c_∞ (kγ)^{1/2}/σ^3] e^{3kγL} ,    (60)
where (α k ) k 1 , (β k ) k 1 are defined in (58).
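As an aside, the elementary bounds on α_k and β_k in (59) are easy to verify numerically from the definition (58); a short Python sanity check with arbitrary positive constants:

```python
import numpy as np

gamma, L, c_inf, sigma = 1e-2, 2.0, 0.5, 1.3   # arbitrary positive constants
for k in [1, 5, 50, 500]:
    i = np.arange(k)
    alpha_k = gamma * c_inf * np.sum((1.0 + gamma * L) ** (-i))
    beta_k = np.sqrt(gamma * sigma**2 * np.sum((1.0 + gamma * L) ** (-2 * i)))
    assert k * gamma * c_inf * np.exp(-k * gamma * L) <= alpha_k <= k * gamma * c_inf
    assert np.sqrt(k * gamma) * sigma * np.exp(-k * gamma * L) <= beta_k <= np.sqrt(k * gamma) * sigma
print("bounds (59) on alpha_k and beta_k hold for the tested values")
```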
Proof. Let k 1. Using for any i ∈ N, e −iγL (1 + γL) −i 1, we have
kγc ∞ e −kγL γc ∞ k−1 i=0 (1 + γL) −i kγc ∞ .(61)
In the same way, using for any i ∈ N, e −2iγL (1 + γL) −2i 1, we obtain
kγσ 2 e −2kγL γσ 2 k−1 i=0 (1 + γL) −2i kγσ 2 .(62)
Combining (61) and (62) completes the proof of the first four inequalities. Then, (60) is a simple consequence of (59) and a comparison test.
Lemma 33. Assume H1 and H2 hold. Let δ̄ ∈ (0, L^{−1} ∧ (σe^{−1}/c_∞)^2], with the convention 1/0 = +∞. For any γ ∈ (0, γ̄], n ∈ {0, . . . , n_γ}, n_γ = ⌊δ̄/γ⌋, and w ∈ [0, +∞), it holds
∫_{[0,+∞)} 1_{(0,+∞)}(w̃) Q_γ^{n+1}(w, dw̃) ≤ 1 − 2Φ( −(τ_γ(w) + α_{n+1})/(2β_{n+1}) ) + ζ ∑_{k=1}^{n} γα_k/β_k^3 .    (63)
where (α k ) k 1 , (β k ) k 1 and ζ are defined in (58) and (55) respectively and Φ is the cumulative distribution of the one-dimensional Gaussian distribution with mean 0 and variance 1.
Proof. Let γ ∈ (0,γ], w ∈ [0, +∞). Note that for any n ∈ {0, . . . , n γ }, by (58) and Lemma 32,
α n /(2β n ) 1 .(64)
Then, by Lemma 27, (63) holds for n = 0. Assume it holds for n ∈ {0, . . . , n γ − 1}. Then, we get [0,+∞)
1 (0,+∞) (w)Q n+1 γ (w, dw) [0,+∞) 1 − 2Φ − τ γ (w) + α n 2β n Q γ (w,w) + ζ n−1 k=1 γα k β 3 k .
The proof is then concluded by a straightforward induction using Lemma 31 and (64).
Theorem 34. Assume H1 and H2
hold. Let R 0. For any γ ∈ (0,γ] andδ ∈ 0, {L −1 ∧ (σe −1 /c ∞ ) 2 } , µ γ ((0, R)) η R c ∞ , where η R = [δ +γ] 1/2 ζe 3(δ+γ)L σ 3 + e (δ+γ)L 2 √ 2πσ Φ − (1 +γL)R + (δ +γ)c ∞ 2δ 1/2 σe −(δ+γ)L .(65)
Proof. Letδ ∈ 0, {L −1 ∧ (σe −1 /c ∞ ) 2 } and γ ∈ (0,γ]. Set n γ = δ /γ . Note thatδ γ(n γ + 1) δ +γ. By Lemma 33, Proposition 9, integrating (63) with respect to µ γ and using that τ γ (0) = 0, Φ(−t) 1/2 for any t 0, gives
µ γ ((0, +∞)) R 1 − 2Φ − τ γ (w) + α nγ +1 2β nγ +1 dµ γ (w) + ζ nγ k=1 γα k β 3 k (66) 1 − 2Φ − α nγ +1 2β nγ +1 + ζ nγ k=1 γα k β 3 k + 2 (0,+∞) Φ − α nγ +1 2β nγ +1 − Φ − τ γ (w) + α nγ +1 2β nγ +1 dµ γ (w) 1 − 2Φ − α nγ +1 2β nγ +1 + µ γ ((0, +∞)) + ζ nγ k=1 γα k β 3 k − 2 (0,R 1 ) Φ − τ γ (w) + α nγ +1 2β nγ +1 dµ γ (w) .
Rearranging the terms yields
2 (0,R 1 ) Φ − τ γ (w) + α nγ +1 2β nγ +1 dµ γ (w) 1 − 2Φ − α nγ +1 2β nγ +1 + ζ nγ k=1 γα k β 3 k .(67)
In addition by H1 using Lemma 32 and t → Φ(−t) is decreasing on R, we have
(0,R 1 ) Φ − τ γ (w) + α nγ +1 2β nγ +1 dµ γ (w) Φ − (1 + γL)R 1 + α nγ +1 2β nγ +1 µ γ ((0, R 1 )) Φ − (1 +γL)R 1 + (δ +γ)c ∞ 2δ 1/2 σe −(δ+γ)L µ γ ((0, R 1 )) ,(68)
Using that t → 1 − 2Φ(−t) is 2/π-Lipschitz and combining (67), (68) and Lemma 32 we get that
Φ − (1 +γL)R 1 + (δ +γ)c ∞ 2δ 1/2 σe −(δ+γ)L µ γ ((0, R 1 )) α nγ +1 /(2 √ 2πβ nγ +1 ) + ζγ nγ k=1 {α k /β 3 k } c ∞ [δ +γ] 1/2 ζe 3(δ+γ)L σ 3 + e (δ+γ)L 2 √ 2πσ ,
which implies that µ γ ((0, R 1 )) ηc ∞ and completes the proof.
Proof of Theorem 11. Letδ ∈ 0, {L −1 ∧ (σe −1 /c ∞ ) 2 } and γ ∈ (0,γ]. By Proposition 8 and using µ γ is invariant for Q γ , we obtain
R w dµ γ (w) (1 − γm) [R 1 ,+∞) w dµ γ (w) + (1 + γL) (0,R 1 ) w dµ γ (w) + γc ∞
Then, rearranging the terms in this inequality yields
[R 1 ,+∞) w dµ γ (w) R 1 µ γ ((0, R 1 ))L/m + c ∞ /m [0,+∞) w dµ γ (w) R 1 µ γ ((0, R 1 ))(1 + L/m) + c ∞ /m ,
which, combined with Theorem 34 applied to R ← R 1 , concludes the proof of the first inequality in (17). Finally, by (66), using that t → 1 − 2Φ(−t) is 2/π-Lipschitz, we have
µ γ ((0, +∞)) ( √ 2πβ nγ +1 ) −1 [0,+∞) {(1 +γL)w + α nγ +1 }dµ γ (w) + ζγ nγ k=1 {α k /β 3 k } (c ∞ c 1 (1 +γL) + α nγ +1 )/( √ 2πβ nγ +1 ) + ζγ nγ k=1 {α k /β 3 k } .
This finishes the proof using n γ = δ /γ ,δ γ(n γ + 1) δ +γ and Lemma 32.
Proof of Theorem 12
For any a > 0, define V * a : R → [0, +∞) for any w ∈ R by V * a (w) = 1 (0,+∞) (w) exp(aw) .
(69)
Proposition 35. Assume H1 and H2 hold. Let a > 0. Then, for any w ∈ [0, +∞) and γ ∈ (0, γ̄],
Q γ V * a (w)/V * a (w) λ γ a 1 [Ra,+∞) (w) + exp{γa(Lw + 2σ 2 a + c ∞ )}1 [0,Ra) (w) ,(70)
where
R a = 1 ∨ R 1 ∨ [(4aσ 2 + 2c ∞ )/m] , λ a = exp(−amR a /2) .(71)
Proof. Set for any w ∈ R, τ ∞ γ = τ γ (w) + γc ∞ . By definition (13) and using the change of variable g → σγ 1 /2 g twice, we have
Q γ V * (w) = R exp(a{τ ∞ γ (w) − 2σγ 1 /2 g}){1 − p σ 2 γ τ ∞ γ (w), g }ϕ(g)dg = τ ∞ γ (w)/2 −∞ exp(a{τ ∞ γ (w) − 2g}){ϕ σ 2 γ (g) − ϕ σ 2 γ (τ γ (w) − g)} . (2π) − 1 /2 σγ 1 /2 τ ∞ γ (w)/2 −∞ exp(a{τ ∞ γ (w) − 2σγ 1/2 g} − g 2 /2)dg = exp(aτ ∞ γ (w) + 2a 2 σ 2 γ)(2π) − 1 /2 σγ 1 /2 τ ∞ γ (w)/2 −∞ exp(−{g + 2aσγ 1 /2 } 2 /2)dg exp(aτ ∞ γ (w) + 2a 2 σ 2 γ)Φ(σγ 1 /2 τ ∞ γ (w)/2 + 2aσγ 1 /2 ) e aτ ∞ γ (w)+2a 2 σ 2 γ .
This concludes the proof of (70) for w ∈ [0, R a ) using H1. In the case w ∈ [R a , +∞), we only need to use that by H1 and definition of R a (71), where B a = exp{a(LR a + 2σ 2 a + c ∞ )}, λ a , R a are defined in (71) and η R in (65). By Proposition 35 and since V * (0) = 0 and µ γ is invariant for Q γ , we have
aτ ∞ γ (w) + 2a 2 σ 2 γ a(1 − mγ)w + ac ∞ + 2a 2 σ 2 γ aw − aγmR a /2.(0,+∞) V * a (w)dµ γ (w) λ γ a [Ra,+∞) V * a (w)dµ γ (w) + B γ a (0,Ra) V * a (w)dµ γ (w) ,
setting B a = exp{a(LR a + 2σ 2 a + c ∞ )}. Rearranging terms yields
(0,+∞) V * a (w)dµ γ (w) {B γ a − λ γ a }/{1 − λ γ a } (0,Ra) V * (w)dµ γ (w) {B γ a λ −γ a − 1}/{λ −γ a − 1} exp(aR a )c ∞ η Ra ,
where we have used Theorem 34 applied to R ← R a in the last inequality. The proof is then completed upon using that for any t 0, t e t − 1 te t .
Proof of Theorem 13
Lemma 36. Assume H1 and H2 hold. For any w ∈ [0, +∞) and α, β ∈ [0, +∞), β > 0,
(0,+∞) 1 − 2Φ − τ γ (w) + α 2β Q γ (w, dw) 1 − 2Φ − τ γ (w) + γc ∞ + α/(1 + γL) 2 σ 2 γ + β/(1 + γL) 2 ,
where Φ are the density and the cumulative distribution function of the one-dimensional Gaussian distribution with mean 0 and variance 1.
Proof. Let α, β 0, β > 0. By (13), we have
(0,+∞) 1 − 2Φ − τ γ (w) + α 2β Q γ (w, dw) = R 1 − 2Φ − τ γ (τ γ (w) + γc ∞ − 2σ √ γg) + α 2β (1 − p σ 2 γ (τ γ (w) + γc ∞ , g))ϕ(g)dg .
By
H1, t → 1 − 2Φ(−t) is increasing, we have setting ψ γ (w) = τ γ (w) + γc ∞ + α/(1 + γL), 1 − 2Φ − τ γ (τ γ (w) + γc ∞ − 2σ √ γg) + α 2β 1 − 2Φ − ψ γ (w) − 2σ √ γg 2β/(1 + γL) .
Using [15,Lemma 20], Lemma 28 and Lemma 29, we get
R 1 − 2Φ − τ γ (τ γ (w) + γc ∞ − 2σ √ γg) + α 2β (1 − p σ 2 γ (τ γ (w) + γc ∞ , g))ϕ(g)dg R 1 − 2Φ − ψ γ (w) − 2σ √ γg 2β/(1 + γL) (1 − p σ 2 γ (ψ γ (w), g))ϕ(g)dg − R 1 − 2Φ − ψ γ (w) − 2σ √ γg 2β/(1 + γL) p σ 2 γ (τ γ (w) + γc ∞ , g))ϕ(g)dg 1 − 2Φ − ψ γ (w) 2 σ 2 γ + β/(1 + γL) 2 ,
which completes the proof.
We consider in what follows that (W k ) k∈N is the canonical process on ([0, +∞) N , B([0, +∞)) ⊗N ) and for any w ∈ [0, +∞), P w and E w correspond to the probability and expectation respectively, associated with Q γ and the initial condition δ w on this space.
Lemma 37. Assume H1 and H2 hold. For any k ∈ N and w ∈ (0, +∞), P w min i∈{0,...,k+1}
W i > 0 1 − 2Φ − τ γ (w) + α k+1 2β k+1 ,
where α k+1 , β k+1 are given in (58).
Proof. The proof is by induction on k ∈ N. The proof for k = 0 follows from Lemma 27. Assume that the result holds for k − 1 ∈ N and for any w ∈ (0, +∞). Then, by the Markov property and the assumption hypothesis, for any w ∈ (0, +∞), P w min i∈{0,...,k}
W i > 0 = E w 1 (0,+∞) (W 1 )P W 1 min i∈{0,...,k−1} W i > 0 E x 1 (0,+∞) (W 1 ) 1 − 2Φ − τ γ (W 1 ) + α k 2β k .
The proof is then completed upon using Lemma 36.
Lemma 38. Assume H1 and H2 hold. Then, for any k ∈ N, w,w ∈ [0, +∞),
δ w Q k+1 γ − δwQ k+1 γ TV 1 − 2Φ − τ γ (w ∨w) + α k+1 2β k+1 ,
where α k+1 , β k+1 are given in (58).
Proof. We consider again (G k ) k 1 and (U k ) k 1 two independent sequences of i.i.d. standard Gaussian and [0, 1]-uniform random variables respectively. Define the Markov chains (W k ) k∈N and (W k ) k∈N starting from w ∈ [0, +∞) andw ∈ [0, +∞) respectively, for any k ∈ N,
W k+1 = G γ (W k , G k+1 , U k+1 ) andW k+1 = G γ (W k , G k+1 , U k+1 )
. Note that the case w =w is trivial so we only consider the converse and assume that w <w,w > 0. Then, we obtain by Proposition 6 that almost surely W k W k for any k ∈ N, which implies that
δ w Q k+1 γ − δwQ k+1 γ TV P W k+1 =W k+1 P min i∈{0,...,k+1}W i > 0 .
Indeed, if min i∈{0,...,k+1}Wi = 0, then there exists i ∈ {0, . . . , k + 1},W i = 0 which implies sinceW i W i 0 thatW i = W i and thereforeW k+1 = W k+1 by definition of the two processes. The proof is then completed by Lemma 37.
Proposition 39. Assume H1 and H2 hold. Let t > 0. Then, for any w,w ∈ [0, +∞),
δ w Q t 0 /γ γ − δwQ t 0 /γ γ TV 1 − 2Φ −L 1 /2 w ∨w + t 0 c ∞ 2σ{1 − e −2L(t 0 +γ) } 1 /2 .
Proof. Note that by (58) for any k ∈ N, α k+1 kγc ∞ and β 2 k+1 = (σ 2 (1 + γL)/L){1 − (1 + γL) −2k } (σ 2 (1 + γL)/L){1 − e −2kγL } using that (1 + t) e t for any t ∈ R. The proof is then completed using Lemma 38, H1 and the previous bounds for k ← t 0 /γ . H1 and H2 hold. Let a > 0. Then, for any w ∈ [0, +∞) and γ ∈ (0,γ],
Lemma 40. Assume
Q γ V * 1 (w) (1 − γm)V * 1 (w) + γ{L + m}R 1 + γc ∞ ,(72)Q γ V * a (w) λ γ a V * a (w) + γα a 1 [0,R 1 ] (w) ,(73)
where Q γ is defined by (13), for any w ∈ R, V * 1 (w) = |w|, V * a by (69), λ a , R a are defined by (71) and
α a = a exp{aR a +γa(LR a + 2σ 2 a + c ∞ )}[LR a + 2σ 2 a + c ∞ ] .(74)
Proof. (72) is a simple consequence of Proposition 8. Note that for w ∈ R, w R a , (73) holds by Proposition 35. In the case w < R a , using Proposition 35 and e t − 1 te t for any t 0, we obtain
Q γ V * (w) λ γ a V * (w) + V * (w)[λ −γ a exp{γa(Lw + 2σ 2 a + c ∞ )} − 1] λ γ a V * (w) + aV * (w) exp{γa(Lw + 2σ 2 a + c ∞ )}[Lw + 2σ 2 a + c ∞ ]
, which completes the proof.
Define V 1 , V a : R → [0, +∞) for any w ∈ R by V 1 (w) = 1 + Λ 1 |w| , V a (w) = 1 + Λ a 1 (0,+∞) (w) exp(aw) .(75)
Proposition 41. Assume H1 and H2 hold. Let a > 0 and t 0 > 0. Then, for any w ∈ [0, +∞) and γ ∈ (0,γ],
Q t 0 /γ γ V 1 (w) λ 1 V * 1 (w) + β 1 , Q t 0 /γ γ V a (w) λ a V * a (w) + β a ,
where Q γ is defined by (13), V 1 , V a by (75), λ a by (71), α a by (74) and
λ 1 = e −m , β 1 = (t 0 +γ)[{L + m}R 1 + c ∞ ] + 1 , β a = (t 0 +γ)α a + 1 .
Proof of Theorem 13. By Lemma 40 and an easy induction, and using that 1 − t e −t for any t 0, we have for any k ∈ N,
Q k γ V * 1 (w) λ kγ 1 V * 1 (w) + kγ[{L + m}R 1 c ∞ ] , Q k γ V * a (w) λ kγ a V * a (w) + kγα a .
Using that Q k γ 1 ≡ 1 for any k ∈ N completes the proof.
Proof of Theorem 13. Let t 0 > 0. We only show the result for V = V 1 . The result for V = V a , a > 0 is similar upon replacing λ 1 and β 1 by λ a and β a given in Proposition 41 respectively. Define
δ 1 = 4β 1 /(1 − λ 1 ) − 1 and M 1 = sup{w ∈ [0, +∞) : V 1 (w) δ 1 } which is well defined since lim w→+∞ V 1 (w) = +∞. Define in addition, ε 1 = 2Φ −L 1 /2 M 1 + t 0 c ∞ 2σ{1 − e −2L(t 0 +γ) } 1 /2 < 1 .
Then, {V 1 δ 1 } is a (1, ε)-Doeblin set for Q γ and λ 1 + 2β 1 /(1 + δ 1 ) < 1. Therefore, [11,Theorem 19.4.1] implies that for any k ∈ N,
δ w Q t 0 /γ k γ − µ γ V Cρ k {V 1 (w) + µ γ (V 1 )} ,
where a bound on µ γ (V 1 ) is provided by Theorem 11 and log(ρ) = log(1 − ε 1 ) log(λ 1 )/{log(1 − ε 1 ) + log(λ 1 − log(β 1 )}
λ 1 = λ 1 + 2β 1 /(1 + δ 1 ) ,β 1 = λ 1 β 1 + δ 1 C = {λ 1 + 1}/[1 +β 1 /{(1 − ε 1 )(1 −λ 1 )}] .
Postponed proofs of Section 3
Proof of Proposition 16
The proof is an easy consequence of Lemma 44 below and the definition of (W
E[|W 1 − τ γ (W 0 ) − γc ∞ | q ] (4σ 2 γ) q/2 m q + 2 sup u 0 [u q Φ(−u)] ,
where W 1 is defined by (12), m q is the q-th moment of the standard Gaussian distribution and Φ is its cumulative distribution function.
Proof. Let w 0 ∈ [0, +∞) and γ ∈ (0,γ]. By definition (13) and (10), we have settingτ ∞ γ (w 0 ) = {τ γ (w 0 ) + γc ∞ }/(2 σ 2 γ),
R + |w 1 − τ γ (w 0 ) − γc ∞ | q Q γ (w 0 , dw 1 ) = (4σ 2 γ) q/2 τ ∞ γ (w 0 ) −∞ |g| q ϕ(g)dg − (4σ 2 γ) q/2 +∞ τ ∞ γ (w 0 ) |g| q ϕ(g)dg + R |τ γ (w 0 ) + γc ∞ | q ϕ(2τ ∞ γ (w 0 ) − g) ∧ ϕ(g)dg (4σ 2 γ) q/2 m q + 2 p+1 σ p γ p/2 [τ ∞ γ (w 0 )] q Φ{−τ ∞ γ (w 0 )} ,
which completes the proof. (12).
Proof. By A 1, (12) and (13) we have that for any w 0 ∈ [0, +∞) and γ ∈ (0,γ], setting
W 0 = w 0 and κ ∞ (w 0 ) = κ(w 0 ) + c ∞ , R + w 4 1 Q γ (w 0 , dw 1 ) = E[W 4 1 ] E[(w 0 + γκ ∞ (w 0 ) − 2 σ 2 γG 1 ) 4 ] = {w 0 + γκ ∞ (w 0 )} 4 + 6σ 2 γ{w 0 + γκ ∞ (w 0 )} 2 + 3σ 4 γ 2 .
By A1, for any ∈ {2, 4}, we have that for any w 0 ∈ [0, +∞), γ ∈ (0,γ],
{w 0 + γκ ∞ (w 0 )} w 0 + 2 −1 γ(1 + L κ ) [|w 0 | + c ∞ ] .
Therefore, we obtain that there exists some constant C 1 , C 2 0, such that for any w 0 ∈ [0, +∞), γ ∈ (0,γ],
R + w 4 1 Q γ (w 0 , dw 1 ) w 4 0 + C 1 γ{1 + w 2 0 + w 4 0 } (1 + γC 2 )w 4 0 + γC 2 .
By an easy induction, we get then that for any w 0 ∈ [0, +∞), γ ∈ (0,γ] and k ∈ N,
R + w 4 1 Q k γ (w 0 , dw 1 ) (1 + C 2 γ) k w 4 0 + C 2 γ k−1 i=0 (1 + C 2 γ) i e kγC 2 [w 4 0 + C 2 ] ,
which completes the proof by the Markov property. (12).
Proof. Assume that E W 4 0 < +∞, otherwise the results holds. Denote by (F k ) k∈N the filtration associated with (W k ) k∈N . We consider the following decomposition for any ∈ N,
W − W 0 = A + B , A = −1 i=0 ∆M i , B = −1 i=0 H i , where using that E[W i+1 |F i ] = τ ∞ γ (W i )∆M i = W i+1 − E[W i+1 |F i ] = W i+1 − τ ∞ γ (W i ) , H i = τ ∞ γ (W i ) − W i .(76)
Then, using Young's inequality, we get for any γ ∈ (0,γ] and k ∈ N,
We now bound the two last terms in the right hand side of this equation. First, by A1 and Young's inequality, we get for any γ ∈ (0,γ] and k ∈ N,
E[max ∈{0,...,k} B 4 ] k 3 k−1 i=0 H 4 i 2 3 (kγ) 4 (1 + L κ ) 4 {max i∈{0,...,k−1} E[W 4 i ] + c 4 ∞ } . (78)
In addition, by definition (76), (∆M i ) i∈N are (F i ) i∈N -martingale increments. It follows by Burkholder inequality [5, Theorem 3.2] and Young's inequality that there exists C 4 0 satisfying for any k ∈ N and γ ∈ (0,γ],
E max ∈{0,...,k} A 4 C 4 E[{ k−1 i=0 ∆M 2 i } 2 ] C 4 k k−1 i=0 E[∆M 4 i ] .
Therefore by Lemma 42, we get that
E max ∈{0,...,k} A 4 C 4 E[{ k−1 i=0 ∆M 2 i } 2 ] C 4 (4σ 2 kγ) 2 m 4 + 2 sup u 0 [u 4 Φ(−u)] ,
where m 4 is the fourth moment of the standard Gaussian distribution. Combining this result with (78) and using Lemma 43 in (77) concludes the proof.
Proof of Proposition 17
To show this result, we use the Komolgorov criteria [22,Corollary 14.9]: for any T 0, there exist C T 0 such that for any n ∈ N and s, t ∈ [0, +∞), s t,
E W (n) t − W (n) s 4 C T (t − s) 2 .
Note that denoting k
E W (n) t − W (n) s 4 (t − s)γ −1 n {W k (n) 1 +1 − W k (n) 1 } 4 if k (n) 1 < k (n) 2 3 3 [{W k (n) 2 +1 − W k (n) 1 } 4 + {W k (n) 2 − W k (n) 1 } 4 + {W k (n) 1 − W k (n) 1 −1 } 4 ] otherwise .
Lemma 44, Lemma 43 and the Markov property complete the proof.
Proof of Proposition 19
We preface the proof by the following technical lemma.
Lemma 45. Assume A1. Then, for any q ∈ [1, +∞), we have (a) for any γ ∈ (0,γ] and w 0 ∈ [0, +∞),
− 2(4σ 2 γ) q/2 +∞ τ ∞ γ (w 0 ) |g| q ϕ(g)dg (79) R + |w 1 − τ γ (w 0 ) − γc ∞ | q Q γ (w 0 , dw 1 ) − (4σ 2 γ) q/2 m q 0 ,
whereτ ∞ γ (w 0 ) = {τ γ (w 0 ) + γc ∞ }/(2 σ 2 γ), Q γ is defined by (13), m q is the q-th moment of the standard Gaussian distribution. and ϕ is its probability density function;
(b) for any γ ∈ (0,γ],
R + |w 1 − γc ∞ | q Q γ (0, dw 1 ) 3(γc ∞ ) q .
Proof. (a) Let w 0 ∈ [0, +∞) and γ ∈ (0,γ]. By definition (13) and (10), we have settinḡ
τ ∞ γ (w 0 ) = {τ γ (w 0 ) + γc ∞ }/(2 σ 2 γ), R + |w 1 − τ γ (w 0 ) − γc ∞ | q Q γ (w 0 , dw 1 ) = (4σ 2 γ) q/2 τ ∞ γ (w 0 ) −∞ |g| q ϕ(g)dg − (4σ 2 γ) q/2 +∞ τ ∞ γ (w 0 ) |g| q ϕ(g)dg + R |τ γ (w 0 ) + γc ∞ | q ϕ(2τ ∞ γ (w 0 ) − g) ∧ ϕ(g)dg .(80)
Therefore, we obtain that
R + |w 1 − τ γ (w 0 ) − γc ∞ | q Q γ (w 0 , dw 1 ) − (4σ 2 γ) q/2 m q = −2(4σ 2 γ) q/2 +∞ τ ∞ γ (w 0 ) |g| q ϕ(g)dg + 2(4σ 2 γ) q/2 [{τ γ (w 0 )} q Φ(−τ γ (w 0 ))] . Using that +∞ τ ∞ γ (w 0 ) |g| q ϕ(g)dg {τ ∞ γ (w 0 )} q Φ(−τ ∞ γ (w 0 )
) completes the proof.
(b) By (80) and since τ γ (0) = 0 andτ ∞ γ (0) = γc ∞ /(2 σ 2 γ),
R + |w 1 − τ γ (0) − γc ∞ | q Q γ (0, dw 1 ) 2(4σ 2 γ) q/2 γc∞ 2 √ σ 2 γ 0 |g| ϕ(g)dg + (γc ∞ ) q .
Using that (4σ 2 γ) q/2 γc∞/(2 √ σ 2 γ) 0 |g| ϕ(g)dg (γc ∞ ) q completes the proof.
Proof of Proposition 19. Proof of (24). Let ϕ ∈ C ∞ (R d ) satisfying (23), N ∈ N, (t 1 , . . . , t N , s, t) ∈ [0, +∞) N +2 , 0 t 1 · · · t N s < t, ψ : R N + → R, continuous and bounded. Note that we only need to show that
lim n→+∞ E E ϕ(W (n) t ) − ϕ(W (n) s ) − t s Aϕ(W (n) u )du G (n) s ,
setting for any n ∈ N, u ∈ [0, +∞), G
E ϕ(W (n) t ) − ϕ(W (n) s ) − t s Aϕ(W (n) u )du G (n) s = E A (n) 1 + A (n) 2 + A (n) 3 G (n) s (81) A (n) 1 = ϕ(W (n) t ) − ϕ(W (n) k (n) 1 ) − {ϕ(W (n) s ) − ϕ(W (n) k (n) 2 )} A (n) 2 = − t s Aϕ(W (n) u )du + γ n k (n) 1 −1 k=k (n) 2 Aϕ(W (n) k ) A (n) 3 = ϕ(W (n) k (n) 1 ) − ϕ(W (n) k (n) 2 ) − γ n k (n) 1 −1 k=k (n) 2 Aϕ(W (n) k ) .
We deal with these three terms separately. First since ϕ satisfies (23), by the fundamental theorem of calculus, there exists C 0 such that for any w 0 , w 1 ∈ R, |ϕ(w 1 ) − ϕ(w 0 )| C max(|w 0 | , |w 1 |) |w 0 − w 1 |. By (19), we get that there exists C 0 such that for any n ∈ N, almost surely,
|A (n) 1 | Cγ n {max i∈{k (n) 1 ,k (n) 1 +1,k (n) 2 ,k (n) 2 +1} |W (n) i | 2 + 1} .
This implies by Lemma 42 and the Lebesgue convergence theorem that
lim n→+∞ E[|A (n) 1 |] = 0 .(82)
Regarding A (n) 2 , we consider the decomposition,
A (n) 2 = A (n) 2,1 + A (n) 2,2 , A (n) 2,1 = k (n) 2 γn s Aϕ(W (n) u )du + k (n) 1 γn t Aϕ(W (n) u )du A (n) 2,2 = − k (n) 1 −1 k=k (n) 2 (k+1)γn kγn {Aϕ(W (n) u ) − Aϕ(W (n) k )}du .
Since ϕ satisfies (23), by Lemma 42 and the Lebesgue dominated convergence theorem, we get that lim n→+∞ E[|A
Aϕ(W (n) u ) − Aϕ(W (n) k ) B u,k + 21 A c k σ 2 sup R |ϕ | B (n) u,k = (|κ(W (n) k )| + c ∞ )|ϕ (W (n) u ) − ϕ (W (n) k )| + |ϕ (W (n) u )||κ(W (n) u ) − κ(W (n) k )| + 21 A k σ 2 |ϕ (W (n) u ) − ϕ (W (n) k )| , where A k = {W (n) k = 0, W (n) k+1 = 0} ∪ {W (n) k+1 = 0, W (n) k = 0}
. Note that using that ϕ and ϕ are Lipchitz and supw ∈[0,+∞) |ϕ |(w)/(1 + |w|) < +∞ by (23), A1 and (19), we get that there exists C 0 such that for any n ∈ N, k ∈ {k (n) 2 , . . . , k (n) 1 − 1}, u ∈ (kγ n , (k + 1)γ n ),
|B (n) u,k | Cγ n {|W (n) k | 2 + |W (n) k+1 | 2 + 1} ,E 1 A c k = 0 .(83)
Note that using that by definition, (W (n) k ) k∈N is a Markov chain with Markov kernel Q γn (13), the Markov property implies that for any n ∈ N and k ∈ {k
(n) 2 , . . . , k (n) 1 − 1}, E 1 A c k P W (n) k = 0, W (n) k+1 = 0 + P W (n) k = 0, W (n) k+1 = 0 (84) = 1−2Φ[−c ∞ √ γ n /(2σ)] + 2E 1 R * + (W (n) k )Φ[−τ ∞ γn (W (n) k )/{2(σ 2 γ n ) 1 /2 }] ,
where τ ∞ γn (w) = τ γn (w)+γc ∞ , for any w ∈ [0, +∞). Since 1−2Φ(−u) u for any u ∈ [0, +∞), we get that there exists C 0 such that for any n ∈ N, γ n k (n) 1 −1 k=k
Regarding the second term in (84), consider the sequence of measurable functions defined for any n ∈ N, ω ∈ Ω, k ∈ N by
f n (ω, k) = γ n 1 {k (n) 2 ,...,k (n) 1 −1} (k)1 R * + (W (n) k )Φ[−τ ∞ γ (W (n) k )/{2(σ 2 γ n ) 1 /2 }] ,
on the measure space (Ω × N, F ⊗ 2 N , P ⊗ ν c ), where 2 N is the power set of N and ν c is the counting measure on N. Note that P ⊗ ν c almost everywhere, lim n→+∞ f n (ω, k) = 0 and in addition, k∈N Ωf n (ω, k)dP(ω) (t − s) + γ n . Therefore by the Lebesgue dominated convergence theorem, we obtain that lim n→+∞ k∈N Ωf n (ω, k)dP(ω) = 0 which implies by definition, k . Using that ϕ is three times continuously differentiable, we get by Taylor's theorem with Lagrange reminder, that for any n ∈ N, k ∈ {k Therefore, we obtain that lim n→+∞ k∈N Ωf n (ω, k)dP(ω) = 0 by the Lebesgue dominated convergence theorem, which implies by (90) Proof of (25). The proof follows exactly the same lines as (24) but we use that the only different and non negligeable term is A Using (79) in Lemma 45, the assumption that ϕ (w) 0 for any w ∈ R, and the Markov property, we get that for any n ∈ N, E[A (n) 4,2 | G s ] 0, which concludes the proof.
Proof of Theorem 18
Proposition 46. Assume A 1. Let µ ∞ be a limit point of (µ n ) n∈N . Then, µ ∞ -almost everywhere, inf t∈[0,+∞) W t 0.
Proof. Without loss of generality, we assume that (µ n ) n∈N converges to µ ∞ . Since ω → inf t∈[0,+∞) ω t is continuous, F = {ω ∈ W : inf t∈[0,+∞) ω t 0} is closed. Therefore, by the Portmanteau theorem [23,Theorem 13.16], we obtain that µ ∞ (F) lim sup n→+∞ µ n (F) = 1.
Proof of Theorem 18. Recall that we denote by (µ n ) n∈N the sequence of distribution on W associated with {(W (n) t ) t 0 : n ∈ N}. Let µ ∞ be a limit point of this sequence for the convergence in distribution. Without loss of generality, we assume that (µ n ) n∈N converges in distribution to µ ∞ . Note that by Proposition 16, for any continuous function F : W → R such that |F | (ω) C T {1 + sup t∈[0,T ] |ω t | δc } for δ c ∈ [0, 4), T, C T 0, then F is uniformly integrable for (µ n ) n∈N and therefore (see e.g. [
F dµ n = W F dµ ∞ .(92)
We divide then the proof into two parts. First part: we first show that under µ ∞ , (M t ) t 0 is a (W t ) t 0 -martingale. Considering (ω t 1 , . . . , ω t N ) , and applying to ϕ 1 (w) = w for any w ∈ R, since Aϕ 1 is continuous under A1, for any N ∈ N, (t 1 , . . . , t N , s, t) ∈ [0, +∞) N +2 , 0 t 1 · · · t N s < t, ψ : R N + → R, continuous and bounded,
E µ ∞ [(M t − M s ) ψ(W t 1 , . . . , W t N )] = 0 ,
where E µ ∞ [·] is the expectation under µ ∞ on (W, W). We obtain by the monotone class theorem and [30, Theorem 2.3, Chapter 0] that the first part of the result holds, i.e. (M t ) t 0 defined by (21) is a (W t ) t 0 -martingale on (W, W, (W t ) t 0 , µ ∞ ).
Second part: It remains to show that under µ ∞ , (N t ) t 0 is a (W t ) t 0 -martingale. We first establish setting ϕ 2 (w) = w 2 for w ∈ R, that
N t = ϕ 2 (W t ) − ϕ 2 (W 0 ) − t 0 Aϕ 2 (W u )du ,
is a (W t ) t 0 -submartingale, which easily implies that (N t ) t 0 is a (W t ) t 0 -submartingale. Let N ∈ N, (t 1 , . . . , t N , s, t) ∈ [0, +∞) N +2 , 0 t 1 · · · t N s < t, ψ : R N + → R, continuous, nonnegative and bounded. Then, consider F + 2 = F + 2,1 + F + 2,2 on W with :
F + 2,1 : ω → ϕ 2 (ω t ) − ϕ 2 (ω s ) − 2 t s ω u (κ(ω u ) + c ∞ )du ψ(ω t 1 , . . . , ω t N ) F + 2,2 : ω → −4σ 2 t s 1 R * + (ω u )du ψ(ω t 1 , . . . , ω t N )
Note that it is easy to check that F + 2,1 is continuous and F + 2,2 is bounded lower semi-continuous on W, i.e. for any (ω n ) n∈N converging to ω ∞ in W endowed with the uniform convergence on compact set, lim inf n→+∞ F + 2,2 (ω n ) F + 2,2 (ω ∞ ). Therefore, we obtain by the Portmanteau theorem [ Using the same arguments as before, we obtain that under µ ∞ , (Ñ t ) t 0 is a (W t ) t 0submartingale. Then, it is easy to verify that (N t ) t 0 is a (W t ) t 0 -submartingale. We complete then the proof by showing that (N t ) t 0 is also a (W t ) t 0 -supermartingale under µ ∞ . To do so, we need the following lemma.
Lemma 47. Assume A1. Then, for any limit point µ ∞ of (µ n ) n∈N , µ ∞ -almost everywhere, t → M t − 4σ 4 t is nonincreasing, where ( M t ) t 0 is the quadratic variation of (M t ) t 0 .
Proof. Let N ∈ N, (t 1 , . . . , t N , s, t) ∈ [0, +∞) N +2 , 0 t 1 · · · t N s < t, ψ : R N + → R, continuous, nonnegative and bounded. Consider now the continuous map
F − 2 : ω → ϕ 2 (ω t ) − ϕ 2 (ω 0 ) − t 0Ã
ϕ 2 (ω u )du ψ(ω t 1 , . . . , ω t N ) .
we get by a direct induction that, for all k ∈ {0, . . . , T /h },z h θ (kh) ∈ K and for all t ∈ (kh, (k + 1)h], t ∈ [0, T ],
f (t) 1 2 L F (1 + C F )h 2 (k + 1)e L F h(k+1)
Conclusion follows with C = 1 2
L F (1 + C F )N T e L F T .
Proof of Proposition 23. We have by AO2 and (37), T_γ(θ) = θ − γ∇U(θ) with
∇U (θ) = ∇U 0 (θ) + N i=1 s i (z θ (t i ))(95)
where z θ solves (30)
z θ (t) − zθ(t) L F θ −θ te L F t ∀t 0 .
In particular, by (95) and (37),
∇U (θ) − ∇U (θ) L U + L s L F N i=1 t i e L F t i θ −θ .
Moreover, similarly, we get if θ −θ R U ,
θ −θ, ∇U (θ) − ∇U (θ) m U θ −θ 2 − N C s θ −θ .
Combining the last two estimates yields H1. Finally, H2 follows using AO4, (35) and (37) from
∇U (θ) −b h (θ) L s N i=1 z θ (t i ) − Ψ h i (θ) C Ψ L s h α .
Theorem 4 .
4Assume H1 and H2 hold and let (c, V ) ∈ {(1 (0,+∞) , |·|), (|·| , |·|), (1 (0,+∞) exp(|·|), 1 (0,+∞) exp(|·|))} .
Corollary 7 .
7Assume H 1 and H 2 hold. Let c : R 2d → [0, +∞) of the form c(x, y) = c( x − y ) for some non-decreasing functionc : [0, +∞) → [0, +∞),c(0) = 0. For any x,x ∈ R d and k ∈ N,
Proposition 9 .
9Assume H1 and H2hold. For any γ ∈ (0, γ], Q γ admits a unique invariant probability measure µ γ and is geometrically ergodic. In addition, µ γ ({0}) > 0 and µ γ is absolutely continuous with respect to the measure δ 0 + Leb on ([0, +∞), B([0, +∞))). Finally, in the case c ∞ = 0, µ γ and δ 0 + Leb are equivalent.Proof. The proof is postponed to Section 5.1.5.
Corollary 10 .
10Assume H 1 and H 2 hold. Let c : R 2d → [0, +∞) of the form c(x, y) = c( x − y ) for some non-decreasing functionc : [0, +∞) → [0, +∞),c(0) = 0. For any x,x ∈ R d and k ∈ N,
1 and a, > 0. Then, on the one hand, H 1 and H 2 are satisfied with R 1 = 0, m = and c ∞ = a which lead to
Figure 1: Numerical illustrations for the Van der Pol oscillator (39); the right panel reports the estimation of ‖π_γ − π̃_{γ,h}‖_TV as a function of h.
Figure 2: Numerical illustrations for the Lotka-Volterra model (40); the right panel reports the estimation of ‖π_γ − π̃_{γ,h}‖_TV as a function of h.
, (1 − 2cσ 2 γ) −1 8cσ 2 γ and (1 − mγ)2 1 − mγ since 2cσ 2 γ 1/2 and γ 1/m, by definition of c andγ 1 (42)-(44),
− log(1 − t) 2t for t ∈ [0, 1/2]. For the case x M , by (41), (1 − 2cσ 2 γ) −1 8cσ 2 γ and (1 − mγ) 2 1 − mγ since 2cσ 2 γ 1/2 and γ 1/m, by (42)-(44), and using − log(1 − t) 2t for t ∈ [0, 1/2] again,
a) irreducible and aperiodic, (c) any compact set of [0, +∞) is small and (d) there exists λ > 0 and b 0 such that Q γ V (w) λV (w) + b for any w ∈ [0, +∞) with V (w) = w + 1. The proof then follows from [11, Corollary 14.1.6, Theorem 15.2.4]. (a) Let K be a compact set. Then for any w ∈ K we have
Proof of Theorem 12 .
12Letδ ∈ 0, {L −1 ∧ (σe −1 /c ∞ ) 2 } and γ ∈ (0,γ]. We show that (17) holds with c 3 = (B a /λ a )γa(LR a + 2σ 2 a + c ∞ )η Ra exp(aR a )/ |log(λ a )| ,
t
) t 0 . Before stating and proving Lemma 44, we need the following technical results.Lemma 42. Assume A1. Then, for any q ∈ [1, +∞), we have for any γ ∈ (0,γ],
t/γ n , we have by(19)
s/γ n and consider the following decomposition
1 |] = 0. In addition, we have by definition of A (22) that for any n ∈ N, k ∈ {k 1}, u ∈ (kγ n , (k + 1)γ n ),
2Φ[−c ∞ √ γ n /(2σ)] C((t − s) + γ n 2Φ[−c ∞ √ γ n /(2σ)]} = 0 .
k
1}, there exists u k ∈ [0, 1] satisfyingϕ(W (n) k+1 ) − ϕ(W (n) k ) − γ n Aϕ(W (n) k ) = ϕ (W } 2 + 6 −1 ϕ (3) (u k W (n) k+1 + (1 − u k )W (n) k ){∆W (n) k } 3 .
[(t − s) + γ n ]γ n c 2 ∞ , showing that lim n→+∞ E[|E[ A
2 − 4γ n σ 2 ] .
F 1
1:ω → ϕ 1 (ω t ) − ϕ 1 (ω s ) − t s Aϕ 1 (ω u )du ψ
and s i are defined in (37). For all θ,θ ∈ R d and t ∈ [0, T ], z θ (t) − zθ(t) = t 0 F θ (z θ (s), s) − Fθ (z θ (s), s) dsand thus, using (33), Grönwall's inequality implies that
). Bounds on the W cp distance can then be established as in Theorem 4, while bounds on expected values of V q , independent of γ, are classically obtained through Lyapunov arguments (see e.g. the proof of Proposition 1 in Section 5.1.1).
Lemma 44. Assume A 1. Then, there exists C 0 such that for any k ∈ N and γ ∈ (0,γ],E[max ∈{0,...,k} [W − W 0 ] 4 ] C(kγ) 2 e Ckγ {E[W 4 0 ] + 1}, where (W k ) k∈N is defined by
By Lemma 26 and the Markov property,
max_{ℓ∈{0,...,k}} [W_ℓ − W_0]⁴ ≤ 2³ { max_{ℓ∈{0,...,k}} A_ℓ⁴ + max_{ℓ∈{0,...,k}} B_ℓ⁴ } .
By [1, Lemma 5.1.7], [23, Theorem 13.16] and (92), Proposition 19-(24) applied with ϕ ← ϕ_2 implies that
0 = lim sup_{n→+∞} … ,   lim_{n→∞} ∫_W F⁺_{2,1} dµ_n = ∫_W F⁺_{2,1} dµ ,   and   ∫_W F⁺_{2,2} dµ ≤ lim inf_{n→+∞} ∫_W F⁺_{2,2} dµ_n .
Consequently,
∫_W F⁺_2 dµ ≤ lim inf_{n→+∞} ∫_W F⁺_2 dµ_n .
[9, Theorem 15] considers the case where T_γ comes from the Euler discretization scheme and has the form T_γ(x) = x + γ b(x).
Acknowledgments. The work of AG is funded in part by the Project EFI ANR-17-CE40-0030 of the French National Research Agency. AD acknowledges support of the Lagrange Mathematical and Computing Research Center. A.E. has been supported by the Hausdorff Center for Mathematics, funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under the Excellence Strategy of the Federal Government and the Länder - GZ 2047/1, Projekt-ID 390685813.
… (n)_2, …, k. It follows then, using the definition of A (22), (23), Lemma 44 and Lemma 42, that … Note that, by Lemma 26 and the Markov property, we have that for any n ∈ N and k ∈ {k_s, …}, … Then, by Lemma 45-(a), we have, using the Markov property, that … Note that, P ⊗ ν_c almost everywhere, lim_{n→+∞} f̄_n(ω, k) = 0 and, in addition, … Then, by (92), we get lim_{n→∞} ∫_W F⁻_2 dµ_n ≤ 0. Using that under µ_∞ (M_t)_{t≥0} is a (W_t)_{t≥0}-martingale, we get that (M²_t − 4σ²t)_{t≥0} is a (W_t)_{t≥0}-supermartingale. By the Doob-Meyer decomposition [22, Theorem 22.5], under µ_∞, there exists a unique nondecreasing, locally integrable and predictable process (C_t)_{t≥0} such that (M²_t − 4σ²t + C_t)_{t≥0} is a (W_t)_{t≥0}-martingale. In addition, under µ_∞, by [30, Theorem 1.8, Chapter IV], the quadratic variation (⟨M⟩_t)_{t≥0} of (M_t)_{t≥0} is a finite variation process satisfying … By Lemma 47, denoting by (⟨M⟩_t)_{t≥0} the quadratic variation of (M_t)_{t≥0}, see [30, Theorem 1.8, Chapter IV], µ_∞-almost everywhere, t ↦ ⟨M⟩_t − 4σ²t is nondecreasing, and therefore we get that for any s, t ∈ [0, +∞), s ≤ t, µ_∞-almost everywhere, … In addition, by the occupation times formula [30, Corollary 1.6, Chapter VI] applied twice and Proposition 46, µ_∞-almost everywhere, … Using this result and (93), we get that ⟨M⟩_t − ⟨M⟩_s ≤ 4σ² ∫_s^t 1_{R*₊}(W_u) du for any s, t ∈ [0, +∞), s ≤ t. Therefore, since (M²_t − ⟨M⟩_t)_{t≥0} is a (W_t)_{t≥0}-martingale under µ_∞, we conclude that (N_t)_{t≥0} is a (W_t)_{t≥0}-supermartingale, which completes the proof.
Postponed proofs of Section 4
Proof of Proposition 21. From AO3 we know that z_θ(t) ∈ K for all t ∈ [0, T]. In particular, using that for all 0 ≤ s ≤ t ≤ T, z_θ(t) − z_θ(s) = ∫_s^t F_θ(z_θ(u), u) du, we get that … Let k ∈ N be such that z̄^h_θ(kh) ∈ K (this is for instance the case for k = 0). Then, for t ∈ [kh, (k + 1)h], using by (32) that … and setting f(t) = ‖z_θ(t) − z̄^h_θ(t)‖, we get by (94) and AO3, for any h > 0 and t ∈ [kh, (k+1)h], … Assuming that h ≤ h̄, where h̄ is sufficiently small so that (1/2) L_F (1 + C_F) h T e^{L_F T} < δ, …
It follows from the definition (12), A1 and Young's inequality, letting τ → +∞, that …
[1] L. Ambrosio, N. Gigli, and G. Savaré. Gradient flows: in metric spaces and in the space of probability measures. Springer Science & Business Media, 2008.
[2] N. Bou-Rabee and M. C. Holmes-Cerfon. Sticky Brownian motion and its numerical solution. SIAM Review, 62(1):164-195, 2020.
[3] P. Billingsley. Convergence of probability measures. John Wiley & Sons, 1999.
[4] R. Bubley, M. Dyer, and M. Jerrum. An elementary analysis of a procedure for sampling points in a convex body. Random Structures Algorithms, 12(3):213-235, 1998.
[5] D. L. Burkholder. Distribution function inequalities for martingales. Ann. Probab., 1(1):19-42, 1973.
[6] A. S. Cherny and H.-J. Engelbert. Singular stochastic differential equations, volume 1858 of Lecture Notes in Mathematics. Springer-Verlag, Berlin, 2005.
[7] A. S. Dalalyan. Theoretical guarantees for approximate sampling from smooth and log-concave densities. J. R. Stat. Soc. Ser. B. Stat. Methodol., 79(3):651-676, 2017.
[8] A. S. Dalalyan and A. Karagulyan. User-friendly guarantees for the Langevin Monte Carlo with inaccurate gradient. Stochastic Processes and their Applications, 2019.
[9] V. De Bortoli and A. Durmus. Convergence of diffusions and their discretizations: from continuous to discrete processes and back. arXiv preprint arXiv:1904.09808, 2019.
[10] V. De Bortoli, A. Durmus, M. Pereyra, and A. F. Vidal. Efficient stochastic optimisation by unadjusted Langevin Monte Carlo. Application to maximum marginal likelihood and empirical Bayesian estimation. arXiv preprint arXiv:1906.12281, 2019.
[11] R. Douc, E. Moulines, P. Priouret, and P. Soulier. Markov Chains. Springer Series in Operations Research and Financial Engineering, 2018.
[12] A. Durmus, S. Majewski, and B. Miasojedow. Analysis of Langevin Monte Carlo via convex optimization. Journal of Machine Learning Research, 20(73):1-46, 2019.
[13] A. Durmus and E. Moulines. Supplement to "High-dimensional Bayesian inference via the unadjusted Langevin algorithm". Bernoulli.
[14] A. Durmus and É. Moulines. Nonasymptotic convergence analysis for the unadjusted Langevin algorithm. Ann. Appl. Probab., 27(3):1551-1587, 2017.
[15] A. Durmus and E. Moulines. High-dimensional Bayesian inference via the unadjusted Langevin algorithm. Bernoulli, 25(4A):2854-2882, 2019.
[16] A. Eberle. Reflection couplings and contraction rates for diffusions. Probab. Theory Related Fields, pages 1-36, 2015.
[17] A. Eberle and M. Majka. Quantitative contraction rates for Markov chains on general state spaces. Electronic Journal of Probability, 24, 2019.
[18] A. Eberle and R. Zimmer. Sticky couplings of multidimensional diffusions with different drifts. Annales de l'Institut Henri Poincaré, Probabilités et Statistiques, 55:2370-2394, 2019.
[19] S. N. Ethier and T. G. Kurtz. Markov processes: Characterization and convergence. Wiley Series in Probability and Mathematical Statistics. John Wiley & Sons Inc., New York, 1986.
[20] C. J. Howitt. Stochastic flows and sticky Brownian motion. PhD thesis, University of Warwick, 2007.
[21] J. E. Johndrow, J. C. Mattingly, S. Mukherjee, and D. Dunson. Approximations of Markov chains and high-dimensional Bayesian inference. arXiv preprint arXiv:1508.03387, 2015.
[22] O. Kallenberg. Foundations of modern probability. Probability and its Applications (New York). Springer-Verlag, New York, second edition, 2002.
[23] A. Klenke. Probability Theory: A Comprehensive Course. Universitext. Springer-Verlag London, second edition, 2014.
[24] R. B. Lund, S. P. Meyn, and R. L. Tweedie. Computable exponential convergence rates for stochastically ordered Markov processes. Ann. Appl. Probab., 6(1):218-237, 1996.
[25] R. McElreath. Statistical Rethinking: A Bayesian Course with Examples in R and STAN. Chapman & Hall/CRC Texts in Statistical Science. CRC Press, 2020.
[26] F. Medina-Aguayo, D. Rudolf, and N. Schweizer. Perturbation bounds for Monte Carlo within Metropolis via restricted approximations. Stochastic Processes and their Applications, 130(4):2200-2227, 2020.
[27] S. Meyn and R. L. Tweedie. Markov chains and stochastic stability. Cambridge University Press, Cambridge, second edition, 2009. With a prologue by Peter W. Glynn.
[28] A. Y. Mitrophanov. Sensitivity and convergence of uniformly ergodic Markov chains. Journal of Applied Probability, 42(4):1003-1014, 2005.
[29] M. Z. Racz and M. Shkolnikov. Multidimensional sticky Brownian motions as limits of exclusion processes. Ann. Appl. Probab., 25(3):1155-1188, 2015.
[30] D. Revuz and M. Yor. Continuous martingales and Brownian motion, volume 293 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, Berlin, second edition, 1994.
[31] G. O. Roberts and R. L. Tweedie. Rates of convergence of stochastically monotone and continuous time Markov models. Journal of Applied Probability, 37(2):359-373, 2000.
[32] D. Rudolf and N. Schweizer. Perturbation theory for Markov chains via Wasserstein distance. Bernoulli, 24(4A):2610-2639, 2018.
[33] T. Shardlow and A. M. Stuart. A perturbation theory for ergodic Markov chains and application to numerical approximations. SIAM Journal on Numerical Analysis, 37(4):1120-1137, 2000.
[34] S. Watanabe. On stochastic differential equations for multi-dimensional diffusion processes with boundary conditions. J. Math. Kyoto Univ., 11(1):169-180, 1971.
[35] S. Watanabe. On stochastic differential equations for multi-dimensional diffusion processes with boundary conditions II. J. Math. Kyoto Univ., 11(3):545-551, 1971.
[36] E. Zeidler. Nonlinear functional analysis and its applications I: Fixed-point theorems. Springer-Verlag, Berlin and Heidelberg, 1986.
| [] |
[
"Data-Driven Spectral Submanifold Reduction for Nonlinear Optimal Control of High-Dimensional Robots",
"Data-Driven Spectral Submanifold Reduction for Nonlinear Optimal Control of High-Dimensional Robots"
] | [
"John Irvin Alora ",
"Mattia Cenedese ",
"Edward Schmerling ",
"George Haller ",
"Marco Pavone "
] | [] | [] | Modeling and control of high-dimensional, nonlinear robotic systems remains a challenging task. While various model-and learning-based approaches have been proposed to address these challenges, they broadly lack generalizability to different control tasks and rarely preserve the structure of the dynamics. In this work, we propose a new, data-driven approach for extracting low-dimensional models from data using Spectral Submanifold Reduction (SSMR). In contrast to other datadriven methods which fit dynamical models to training trajectories, we identify the dynamics on generic, low-dimensional attractors embedded in the full phase space of the robotic system. This allows us to obtain computationally-tractable models for control which preserve the system's dominant dynamics and better track trajectories radically different from the training data. We demonstrate the superior performance and generalizability of SSMR in dynamic trajectory tracking tasks vis-á-vis the state of the art, including Koopman operatorbased approaches. | 10.48550/arxiv.2209.05712 | [
"https://export.arxiv.org/pdf/2209.05712v3.pdf"
] | 252,367,304 | 2209.05712 | ef1a0ed53ec181fee14093458819e19d8af6c58d |
Data-Driven Spectral Submanifold Reduction for Nonlinear Optimal Control of High-Dimensional Robots
John Irvin Alora
Mattia Cenedese
Edward Schmerling
George Haller
Marco Pavone
Data-Driven Spectral Submanifold Reduction for Nonlinear Optimal Control of High-Dimensional Robots
Modeling and control of high-dimensional, nonlinear robotic systems remains a challenging task. While various model-and learning-based approaches have been proposed to address these challenges, they broadly lack generalizability to different control tasks and rarely preserve the structure of the dynamics. In this work, we propose a new, data-driven approach for extracting low-dimensional models from data using Spectral Submanifold Reduction (SSMR). In contrast to other datadriven methods which fit dynamical models to training trajectories, we identify the dynamics on generic, low-dimensional attractors embedded in the full phase space of the robotic system. This allows us to obtain computationally-tractable models for control which preserve the system's dominant dynamics and better track trajectories radically different from the training data. We demonstrate the superior performance and generalizability of SSMR in dynamic trajectory tracking tasks vis-á-vis the state of the art, including Koopman operatorbased approaches.
I. INTRODUCTION
High-dimensional robotic systems promise to revolutionize the field of robotics due to the versatility brought forth by their large degrees of freedom (DOF). For example, continuum soft robots can exhibit embodied intelligence [1] in which they conform to surfaces and objects while maintaining a level of physical robustness unavailable to their more rigid counterparts. This level of compliance and elasticity makes them well-suited to operate in delicate, geometrically constrained environments, enabling them to play crucial roles in settings where safe human-robot interaction is paramount.
Unfortunately, these advantages pose significant practical challenges for the modeling and control of these robots. This is due to the inherent nonlinearities and high DOF required to accurately capture the structural deformations that realize these compliant behaviors. While several model- and learning-based approaches have been proposed in the literature to address some of these challenges, these methods suffer from their inability to produce models that are simultaneously accurate and low-dimensional. This accuracy-dimensionality tradeoff results in methods that sacrifice predictive accuracy and structure preservation for a drastic decrease in dimensionality, or vice versa.
Fig. 1. The SSM is a low-dimensional invariant manifold in the robot's phase space which exponentially attracts full state trajectories, causing them to synchronize with the persistent dynamics on the SSM. These structures can capture highly nonlinear behaviors far away from the fixed point and can be approximated arbitrarily well without increasing the dimension of the SSM.
Motivated by recent developments in Spectral Submanifold (SSM) theory [2] and its successful application to data-driven predictions of nonlinearizable phenomena [3], we propose a new data-driven Spectral Submanifold Reduction (SSMR) framework for learning low-dimensional, faithful dynamics of high-dimensional robots on SSMs. SSMs, as summarized in Figure 1, are low-dimensional, attracting invariant manifolds which capture highly nonlinear phenomena of high-dimensional systems. By learning the dynamics on these generic structures, we extract low-dimensional, control-oriented models that preserve the dominant physics of the system. This allows SSMR to overcome common drawbacks associated with data-driven approaches such as lack of generalizability, high data requirements, and sensitivity to noise. Statement of Contributions: (i) We present SSMR, the first data-driven approach for learning the dynamics of high-dimensional robots on SSMs for control. We extend recent work on SSMLearn [4] by providing the additional innovation of disambiguating the effect of control from the underlying dynamics on the SSM. This allows us to extract highly accurate models for control in an equation-free manner.
(ii) We extend previous work on SSM-based control [5] to general control tasks by implementing an SSMR optimal control scheme and validating it on simulations of a high-dimensional soft robot. We show that SSMR outperforms the state-of-the-art methods in both trajectory tracking performance and computational efficiency, highlighting a key feature of SSMR: it neither compromises accuracy nor computational tractability.
Related Work: While prevailing approaches for modeling high-dimensional robots involve the use of simplified assumptions that approximate the robot's behavior [6]-[8], these methods are only accurate for specific types of geometries and their low fidelity precludes their use in more challenging control tasks. To address this issue, a popular approach is to compress the governing equations of high-fidelity computational models using projection-based model reduction to produce low-dimensional control surrogates. While this approach has seen some success for the control of linearized systems [9]-[11], model validity deteriorates rapidly when far away from the linearization point. An attempt at capturing the nonlinearities of the high-dimensional system via piecewise-affine approximation was proposed in [12], while the work in [13] uses an energy-conserving mesh sampling and weighting scheme [14] to construct a reduced order model for inverse-kinematic control. A common limitation of these projection-based techniques is that the accuracy of the low-dimensional surrogate depends on the choice and size of the subspace, which can grow rapidly for only incremental improvement in accuracy. Additionally, since these approaches require knowledge of the governing equations, their application to real-world robots remains challenging. Indeed, the process of extracting accurate models from finite element code is an encumbering and code-intrusive process.
Much of the literature on learning-based approaches is focused on using neural networks (NN) for learning approximations of these high-dimensional dynamics from observed transitions. From black-box architectures using simple multilayer perceptrons [15] to grey-box architectures that aim to preserve physical invariants [16], [17], these approaches vary by the level of inductive bias they introduce. In many cases, the high dimensionality of the dynamics stems from the fact that the learning problem is posed in terms of high-dimensional observations (e.g., pixel images); the assumption is that there exist underlying low-dimensional latent state-space dynamics, to be learned, that explain the observations [18], [19]. This is similar to the setting of this work, where a core assumption is that the principal dynamics of, e.g., a continuum soft robot live on an underlying low-dimensional manifold. For such dissipative physical systems, critically, SSM theory yields insights on the manifold structure that we use to design the learning methodology.
Recently, the Koopman operator has attracted significant interest in the robotics community for data-driven learning of nonlinear dynamics. For example, finite-dimensional, data-driven approximations of (infinite-dimensional) Koopman operators were shown to outperform standard NN models for predicting soft robot dynamics in [20]. Since observed dynamics under the Koopman operator are linear, the approach lends itself to established control techniques such as model predictive control (MPC), as shown in [21]. Although this approach is conceptually appealing, most physical systems do not admit exact finite-dimensional, linear representations [22]. Similar to the projection-based methods, the Koopman approach suffers from the accuracy-dimensionality tradeoff since, in theory, more accurate Koopman models require an increased number of a priori chosen observable functions.
A common drawback with current data-driven approaches is that they typically result in models that rarely preserve all, if any, of the inherent structure of the dynamics (e.g., structural modes, passivity, etc.). This results in models that do not generalize well to control tasks which involve trajectories outside the training set. Many of these approaches are also data-intensive and sensitive to noise; this precludes their use in the design process, and achieving good control performance in closed-loop requires significant tuning of various hyperparameters. By explicitly targeting rigorous and generic structures in the high-dimensional robot's phase space, we are able to extract reduced-order models which overcome these issues.
Organization: We begin in Section II by detailing the class of high-dimensional systems we consider and posing the associated nonlinear optimal control problem. In Section III, we summarize relevant results from SSM theory and outline the data-driven procedure for learning control dynamics on SSMs. We then discuss our proposed control procedure in Section IV and present simulation results in Section V.
II. PROBLEM FORMULATION
A. High-Dimensional Optimal Control Problem
We consider control-affine mechanical systems with N ∈ N DOF. These systems encompass a wide range of robots such as manipulators, drones, and highly-articulated robots. In the continuum limit (i.e., N → ∞), these systems can also converge to the exact model of control-affine soft robots [23]. Such systems can be written in first-order form with state vector x_f(t) ∈ R^n (where the subscript f denotes the full state, as opposed to the reduced state x introduced in Section III-B) as
ẋ_f(t) = A x_f(t) + f_nl(x_f(t)) + ε B u(t),        (1)
where n = 2N, A ∈ R^{n×n} is assumed to be negative-definite (i.e., the origin is an asymptotically stable fixed point for u = 0), and B ∈ R^{n×m} represents the linear control matrix acting on the input u(t) ∈ R^m. The nonlinear term f_nl : R^n → R^n belongs to the class of analytic functions and satisfies f_nl(0) = 0, ∂f_nl(0)/∂x_f = 0, while the parameter 0 < ε ≪ 1 introduces our assumption that the magnitude of the control inputs should be moderate compared to the autonomous dynamics. In this work, we consider control tasks near the vicinity of the robot's equilibrium point (e.g., a highly-articulated manipulator arm conducting pick-and-place tasks in a constrained workspace). Thus, if the desired trajectories are reasonable, this assumption is typically satisfied. Derivation of Equation (1) from a second-order mechanical system can be found in Appendix A.
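To make the structure of Equation (1) concrete, the following minimal sketch assembles the linear part (A, B) of the first-order model from mass, damping, stiffness, and input matrices (M, C, K, H) of a second-order mechanical model, as outlined in Appendix A. The matrices, toy values, and function names below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def first_order_form(M, C, K, H):
    """Assemble the linear part (A, B) of Eq. (1) from a second-order model
    M q'' + C q' + K q + F_int(q, q') = eps * H u, using the state x_f = [q; q']."""
    N = M.shape[0]
    Minv = np.linalg.inv(M)          # for large FE models, factorize instead of inverting
    A = np.block([[np.zeros((N, N)), np.eye(N)],
                  [-Minv @ K,        -Minv @ C]])
    B = np.vstack([np.zeros((N, H.shape[1])), Minv @ H])
    return A, B

# Toy example: a 2-DOF mass-spring-damper with Rayleigh damping C = a*M + b*K.
M = np.diag([1.0, 2.0])
K = np.array([[3.0, -1.0], [-1.0, 2.0]])
C = 0.01 * M + 0.02 * K
H = np.array([[1.0], [0.0]])
A, B = first_order_form(M, C, K, H)
print(np.linalg.eigvals(A))  # eigenvalues have negative real part, i.e., the origin is stable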
We now pose the problem of controlling Equation (1) to follow arbitrary and dynamic trajectories in the vicinity of the origin. Consider the following continuous-time, optimal control problem (OCP) with quadratic cost and polytopic constraints in states and control:
minimize_{u(·)}   ‖Δz(t_f)‖²_{Q_f} + ∫_{t_0}^{t_f} ( ‖Δz(t)‖²_Q + ‖u(t)‖²_R ) dt
subject to   x_f(0) = g(z(0)),
             ẋ_f(t) = A x_f(t) + f_nl(x_f(t)) + ε B u(t),        (2)
             y(t) = h(x_f(t)),   z(t) = C y(t),
             u ∈ U,   z ∈ Z.
Here, Δz(t) = z(t) − z̄(t) is the tracking difference between the performance variable z(t) ∈ R^o and the desired trajectory z̄(t) ∈ R^o. The observed state is denoted as y(t) ∈ R^p and [t_0, t_f] represents the time horizon. Q, Q_f ∈ R^{o×o} are positive semi-definite matrices which represent the stage and terminal costs, respectively, over the performance variables, while R ∈ R^{m×m} is a positive-definite matrix representing the cost on controls. The functions g : R^o → R^n and h : R^n → R^p map the performance variable to the full state and the full state to the observed state, respectively, while C ∈ R^{o×p} is a selection matrix of states that we observe. Lastly, the constraint sets are defined as U := {u(t) ∈ R^m : M_u u(t) ≤ b_u} and Z := {z(t) ∈ R^o : M_z z(t) ≤ b_z}, with M_u ∈ R^{n_u×m} and M_z ∈ R^{n_z×o}, where n_u and n_z represent the number of constraints on the inputs and the observed states, respectively.
For high-dimensional dynamical systems, i.e., n ≫ 1, the dimensionality of Equation (1) becomes a bottleneck and it is intractable to solve the OCP (2) in an online fashion. Thus, we seek a low-dimensional approximation of Equation (1) that enables online control and allows us to approximate a solution to the OCP.
III. DATA-DRIVEN MODELING OF LOW-DIMENSIONAL DYNAMICS
In this section, we describe our data-driven SSMR procedure to construct controlled, predictive models of soft robots from data. Our approach entails learning low-dimensional models directly as the reduced dynamics on attracting, lowdimensional invariant manifolds that generically exist in dissipative physical systems.
A. SSMs in a Nutshell
We define a d-dimensional spectral subspace E as the direct sum of an arbitrary collection of eigenspaces of A,
E := E_1 ⊕ E_2 ⊕ ... ⊕ E_q,
where E_j denotes the real eigenspace corresponding to an eigenvalue λ_j of A. Let Λ_E be the set of eigenvalues related to E and Λ_out be that of eigenvalues not related to E. If min_{λ∈Λ_E} Re(λ) > max_{λ∈Λ_out} Re(λ), then E represents the slowest spectral subspace of order d. Intuitively, the slowest spectral subspace corresponds to the dominant modes representing the persisting dynamics of the robot and can be extracted via modal analysis or principal component analysis (PCA).
Let us first assume that ε = 0. For purely linear systems without external forcing (i.e., the linearization of Equation (1)), any trajectories that start in E will remain in E, by the Spectral Mapping Theorem. When nonlinearities are introduced, superposition is lost and the autonomous part of Equation (1) is no longer invariant on E. The autonomous SSM corresponding to E, W(E), is the smoothest d-dimensional manifold in the robot's phase space which nonlinearly extends the invariance of E, i.e., for the autonomous part of Equation (1),
x_aut(0) ∈ W(E)  =⇒  x_aut(t) ∈ W(E),  ∀ t ∈ R,        (3)
where ẋ_aut(t) = A x_aut(t) + f_nl(x_aut(t)) and x_aut ∈ R^n.
Low-dimensional slow SSMs corresponding to slow spectral subspaces are ideal candidates for model reduction, as nearby full system trajectories become exponentially attracted towards these manifolds and synchronize with the slow dynamics. Figure 1 gives a visual depiction of this property as well as the relationship between E and W(E). For a detailed definition of SSMs, see Appendix B.
For small ε, the SSM W(E) is still relevant for control. From a theoretical point of view, results on the existence of non-autonomous SSMs subject to quasi-periodic forcing were established in [2]. Since quasi-periodic signals over a finite time interval are dense in the space of continuous signals over the same interval, we can interpret the trajectory of the system under control input as lying approximately on a time-varying, invariant manifold that is ε-perturbed from W(E).
B. Reduced-order Models on SSMs
In general, since we seldom have access to the full state x_f, we must construct the SSM and the reduced dynamics of our system in the space of observed states, whose dimension must satisfy p ≥ 2d + 1 by either the Whitney or Takens embedding theorems [3]. In case y does not satisfy this condition, we use time-delay embeddings of y, whereby our new observed measurements include current and past measurements of y, in order to embed W(E) in a space with sufficient dimension.
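As an illustration of the time-delay embedding step, the short sketch below stacks each measurement with a few of its past samples; the number of delays and the variable names are illustrative assumptions, not values prescribed by the paper.

```python
import numpy as np

def delay_embed(Y, n_delays):
    """Stack each observation y_k with its n_delays past samples.
    Y: (p, T) array of observed states; returns (p*(n_delays+1), T-n_delays)."""
    p, T = Y.shape
    cols = [Y[:, n_delays - i : T - i] for i in range(n_delays + 1)]
    return np.vstack(cols)

# Example: a 3-dimensional measurement embedded with 2 delays -> 9-dimensional observable.
Y = np.random.randn(3, 100)
Y_emb = delay_embed(Y, n_delays=2)
print(Y_emb.shape)  # (9, 98)
```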
To describe the geometry of W(E), we seek a pair w(x), v(y) of smooth, invertible functions, where y = w(x) uniquely maps the reduced state on the SSM to the observed state and x = v(y) maps the observed state to the reduced coordinates, with x ∈ R^d the reduced state. By definition of the invariance and tangency properties of the SSM, the two maps that parameterize W(E) must satisfy the invertibility relations [2], y = (w ∘ v)(y) and x = (v ∘ w)(x), such that
x = v(y) := V^T y,    y = w(x) := W_0 x + W x^{2:M},        (4)
where x^{2:M} is the family of all monomials from order 2 to M, and M is the desired order of the Taylor series expansion for approximating the SSM. Also, the columns of V ∈ R^{p×d} span the spectral subspace E, and W_0, W represent coefficient matrices of the SSM parameterization. In addition, the reduced dynamics on W(E) are represented by
ẋ_aut = r_aut(x) := R_0 x + R x^{2:M},        (5)
where r_aut : R^d → R^d is the autonomous reduced dynamics on the SSM; R_0 and R represent the corresponding coefficient matrices. Since W(E) is locally a graph over the spectral subspace [3], we can identify the full state trajectory of System (1) on W(E) described by Equation (5). Figure 1 gives an intuitive depiction of this concept.
We seek to learn the SSM-reduced dynamics for control and construct mappings that describe the trajectory of our observed states on SSMs. Our three-step SSMR procedure involves: (1) collecting trajectories at or near the SSM, (2) learning the SSM geometry and the reduced dynamics in Equation (5), followed by (3) learning a linear control matrix that describes the effect of the controls in the reduced coordinates. Figure 2 summarizes the complete data-driven SSMR approach.
C. Learning Autonomous Dynamics on SSMs
To learn the geometry and reduced dynamics on the SSM, the training data should involve only trajectories that are near the SSM. Thus, we obtain training data snapshots by displacing the robot along various directions in its workspace and then collecting the observed state trajectory as it decays to its equilibrium position. In other words, we form an augmented matrix of (possibly time-delayed) decay datasets Y_raw = [Y_1, . . . , Y_ℓ], where ℓ is the number of decay experiments, as shown in Figure 2 (left). We remove initial transients converging to the SSM in our datasets Y_raw by truncating the first few states in the decay trajectories [4], forming the dataset Y.
To start, we first compute V by finding the dominant modes of Equation (1). To do this, we carry out principal component analysis on the trajectory dataset Y and pick the leading directions that capture a majority of the variance in the data. Indeed, for systems that do not feature strong nonlinearities, PCA is able to obtain a close estimate of the spectral subspace to which the SSM is tangent [25].
Once we project Y onto the reduced coordinates such that X = V^T Y, we can then learn the parameterization of W(E) (i.e., learn the map w) by finding W and W_0 via the polynomial regression
(W*_0, W*) = arg min_{W_0, W} ‖ Y − W_0 X − W X^{2:M} ‖².        (6)
In a similar fashion, we can compute the polynomial form of the reduced dynamics in Equation (5) by finding the coefficients R_0 and R via the regression
(R*_0, R*) = arg min_{R_0, R} ‖ Ẋ − R_0 X − R X^{2:M} ‖²,        (7)
The time derivative Ẋ in Equation (7) can be computed using standard finite difference schemes if the sampling time of X is much smaller than the Nyquist sampling time of the fastest mode in the SSM dynamics. Otherwise, we can also compute a discrete-time alternative to Equation (5) using a similar procedure through simple shifting operations on the dataset, as in [26]. The procedure outlined above is suitable for both numerical and experimental data. For the former, we can choose y(t) = x_f(t) and let the dataset Y consist of full state information during decay. We implement this procedure using a modified version of SSMLearn [3].
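The following sketch illustrates the two regressions (6)-(7) with plain least squares: a truncated SVD supplies the leading directions V, the snapshots are projected to reduced coordinates, and polynomial features are regressed against the observations and their finite-difference derivatives. The monomial construction, synthetic data, and use of numpy are illustrative assumptions and not the SSMLearn implementation.

```python
import numpy as np
from itertools import combinations_with_replacement

def monomials(X, order):
    """All monomials of the rows of X (d x T) from degree 2 up to `order`."""
    d, T = X.shape
    feats = []
    for deg in range(2, order + 1):
        for idx in combinations_with_replacement(range(d), deg):
            feats.append(np.prod(X[list(idx), :], axis=0))
    return np.array(feats)                      # (n_monomials, T)

def fit_ssm_and_dynamics(Y, d, order, dt):
    """Least-squares version of Eqs. (6)-(7): PCA for V, then polynomial fits."""
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    V = U[:, :d]                                 # leading directions span E
    X = V.T @ Y                                  # reduced coordinates
    Phi = np.vstack([X, monomials(X, order)])    # stacked features [x; x^{2:M}]
    W_all = Y @ np.linalg.pinv(Phi)              # Eq. (6): concatenated [W_0, W]
    Xdot = np.gradient(X, dt, axis=1)            # finite-difference time derivative
    R_all = Xdot @ np.linalg.pinv(Phi)           # Eq. (7): concatenated [R_0, R]
    return V, W_all, R_all

# Example with synthetic decay data (p = 9 observables, T = 500 samples).
Y = np.random.randn(9, 500) * np.exp(-0.01 * np.arange(500))
V, W_all, R_all = fit_ssm_and_dynamics(Y, d=6, order=3, dt=1e-3)
```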
D. Learning the Control Matrix
Keeping in mind our moderate control assumption, we assume ε = 1, without loss of generality, for the rest of the exposition. Once the reduced autonomous dynamics on W(E) is known, we seek to learn the contribution of control in the reduced coordinates. Our goal is to find the linear control matrix B_r ∈ R^{d×m} which best explains the difference between the controlled dynamics and our model of the autonomous dynamics. We explore the actuation space of the robot by randomly sampling a sequence of inputs, U, and recording the corresponding (possibly time-delayed) observed state trajectory Y_u, as depicted in Figure 2 (right) (https://github.com/StanfordASL/SSMR-for-control). We then project the observed states down to the reduced coordinates and form the reduced state matrix X_u = V^T Y_u. Additionally, we evaluate our model of the autonomous dynamics and form the matrix Ẋ_aut = r_aut(X_u). Learning the (continuous-time) control matrix from data amounts to solving the minimization problem
B*_r = arg min_{B_r} ‖ Ẋ_u − Ẋ_aut − B_r U ‖²,        (8)
where Ẋ_u is computed by finite differencing X_u. Our learned, low-dimensional control dynamics is thus
ẋ = r(x, u) := R_0 x + R x^{2:M} + B_r u.        (9)
In general, the introduction of control causes W(E) to lose its invariance. Intuitively, though, we expect that the trajectories will remain within a small neighborhood of W(E), since the effect of our control input is moderate compared to the system dynamics. Thus, we interpret this step as regressing a linear matrix that optimally translates the autonomous SSM under control inputs to be as close as possible to off-SSM trajectories.
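A minimal least-squares sketch of the regression (8) is given below; the data shapes, the stand-in for r_aut, and the use of finite differences are assumptions made for illustration only.

```python
import numpy as np

def fit_control_matrix(X_u, U, r_aut, dt):
    """Least-squares fit of B_r in Eq. (8).
    X_u: (d, T) reduced states under random inputs; U: (m, T) applied inputs;
    r_aut: callable evaluating the learned autonomous reduced dynamics column-wise."""
    Xdot_u = np.gradient(X_u, dt, axis=1)        # finite-difference derivative of the data
    Xdot_aut = r_aut(X_u)                        # autonomous prediction on the same states
    residual = Xdot_u - Xdot_aut                 # part of the derivative attributed to control
    B_r = residual @ np.linalg.pinv(U)           # minimizes ||residual - B_r U||^2
    return B_r

# Example with placeholder data and a linear stand-in for r_aut.
d, m, T, dt = 6, 4, 400, 0.01
X_u, U = np.random.randn(d, T), np.random.randn(m, T)
B_r = fit_control_matrix(X_u, U, r_aut=lambda X: -0.5 * X, dt=dt)
print(B_r.shape)  # (6, 4)
```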
IV. SSM-BASED NONLINEAR MPC
A. Reduced Order Optimal Control Problem
Learning the parameterization of W(E) enables us to learn the intrinsic physics of our system, leading to low-dimensional and accurate reduced models with d ≪ n. This allows us to approximate the OCP in (2) by posing an optimization problem with respect to the dynamics on the SSM as follows:
minimize_{u(·)}   ‖Δz(t_N)‖²_Q + Σ_{k=1}^{N−1} ( ‖Δz(t_k)‖²_Q + ‖u(t_k)‖²_R )
subject to   x(0) = V^T (y(0) − y_eq),
             ẋ(t) = r(x(t)) + B_r u(t),
             z(t) = C w(x(t)) + z_eq,        (10)
             z(t_k) ∈ Z,   u(t_k) ∈ U,
where z_eq ∈ R^o and y_eq ∈ R^p are the performance and observed states at equilibrium. To solve the approximate OCP (10) numerically, we discretize the continuous-time system and use Sequential Convex Programming (SCP) to transform (10) into a sequence of quadratic programs. If d is small enough, we can compute the solution to the resulting approximate OCP in real-time. We implement our SSMR-based controller on top of the open-source soft robot control library presented in [27]. See Appendix C for more details on the SCP setup.
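To make one SCP step concrete, the sketch below linearizes the learned reduced dynamics about a nominal trajectory with finite-difference Jacobians and solves a single quadratic program of the kind used inside the receding-horizon loop. It is a simplified stand-in, not the authors' Appendix C implementation: it uses explicit-Euler discretization, a linearized observation map w_lin, box input bounds instead of general polytopes, and cvxpy's default QP solver; all symbols are illustrative placeholders.

```python
import numpy as np
import cvxpy as cp

def discrete_dynamics(x, u, r_aut, B_r, dt):
    """One explicit-Euler step of the learned reduced model (9)."""
    return x + dt * (r_aut(x) + B_r @ u)

def linearize(f, x, u, eps=1e-6):
    """Finite-difference Jacobians A_k = df/dx, B_k = df/du and affine residual."""
    d, m = x.size, u.size
    A = np.zeros((d, d)); B = np.zeros((d, m))
    f0 = f(x, u)
    for i in range(d):
        dx = np.zeros(d); dx[i] = eps
        A[:, i] = (f(x + dx, u) - f0) / eps
    for j in range(m):
        du = np.zeros(m); du[j] = eps
        B[:, j] = (f(x, u + du) - f0) / eps
    return A, B, f0 - A @ x - B @ u

def solve_qp(x0, x_nom, u_nom, z_ref, w_lin, r_aut, B_r, dt, Q, R, u_max):
    """One linearized OCP: quadratic tracking cost, linearized dynamics, box inputs."""
    N, d = x_nom.shape[0] - 1, x0.size
    m = u_nom.shape[1]
    f = lambda x, u: discrete_dynamics(x, u, r_aut, B_r, dt)
    x = cp.Variable((N + 1, d)); u = cp.Variable((N, m))
    cost, cons = 0, [x[0] == x0]
    for k in range(N):
        A, B, c = linearize(f, x_nom[k], u_nom[k])
        cons += [x[k + 1] == A @ x[k] + B @ u[k] + c,
                 cp.abs(u[k]) <= u_max]
        z_k = w_lin @ x[k]                       # linearized performance map z ~ C*w(x)
        cost += cp.quad_form(z_k - z_ref[k], Q) + cp.quad_form(u[k], R)
    prob = cp.Problem(cp.Minimize(cost), cons)
    prob.solve()
    return x.value, u.value
```

In the receding-horizon setting, this QP would be re-solved after every control step, with the nominal trajectory warm-started from the previous solution.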
V. SIMULATION RESULTS
A. Simulation
We now compare our proposed SSMR method against the Trajectory Piecewise-Linear (TPWL) approach [27] and the Koopman operator-based control approach [21] in simulations of the elastomer "Diamond" robot (shown in Figure 2), as detailed in [27] and [28] (https://github.com/StanfordASL/soft-robot-control). We show that our approach outperforms these baselines while mitigating their drawbacks, namely lack of generalization, lack of robustness to noise, and computational intractability.
We carry out simulations using the finite-element based SOFA framework [30]; the Diamond robot mesh we used for simulation can be found in the SoftRobots plugin [31]. The parameters of our Diamond robot mirror those reported for a hardware replica in [28], where E = 175 MPa is the Young's modulus, ν = 0.45 is the Poisson ratio, and α = 2.5, β = 0.01 represent the usual parameters for Rayleigh damping, i.e., C = αM + βK. Additionally, N_r ≤ N represents the rollout horizon of the optimal solution u* to OCP (10), while the controller sampling time is t_c = N_r Δt, where Δt is the time-discretization of the dynamics.
In this work, we consider control tasks in which the end effector of the robot is made to follow various trajectories. Thus, the performance variable z = [x_ee, y_ee, z_ee] denotes the position of the top of the robot in its workspace. We also introduce additive Gaussian measurement noise to simulate real-world conditions. We consider three control tasks which include following (1) a figure eight in the x-y plane subject to constraints, (2) a circle in the y-z plane, and (3) the same circle but near resonance with the dominant mode of the system.
Since we are in a simulation environment, we collect full state information as training data, i.e., the i-th dataset is Y_i = [x_f,1, x_f,2, . . . , x_f,T_i]. We obtain this data by displacing the robot along 44 different points in its workspace and observing the decaying trajectory state transitions sampled at Δt = 1 ms. This is consistent with the highest frequency mode in the SSM, which has a period of roughly 330 ms. After conducting PCA on our training data, we found that the 3 leading configuration modes (6 modes in phase space) captured more than 95% of the variance in our dataset. Hence, we learn a cubic order, 6-D autonomous SSM parametrization described in (6) and its continuous-time, reduced dynamics (9) using the procedure in Section III. Lastly, we learn the control matrix by randomly sampling controls and then collecting the resulting state transitions sampled at 10 ms. Table I reports the mean-squared error tracking performance and average cumulative time to solve the QP for all trajectories at various controller parameters and time discretizations of Equation (9). To enable real-time control, we seek control parameters such that the QP solve time is at least an order of magnitude less than the controller sampling time. These results show that our SSMR-based MPC scheme outperforms the TPWL and the Koopman approach in tracking performance across all trajectories considered, for small enough time discretization. Thus, our approach exhibits superior generalizability to control tasks, as shown in the figures and tables. Due to the low dimensionality of our learned model, we can solve the SCP iterations quickly and the computational burden grows modestly as the MPC horizon increases. As shown in Table I, the solve times for our approach are magnitudes lower than for the TPWL and Koopman-based methods, giving us more freedom to choose the controller parameters to enable real-time control. The observed deterioration of performance as Δt increases in Table I is likely due to numerical errors introduced by coarser time-discretization of the dynamics, since we learn a continuous-time model of Equation (9).
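Before closing the loop, a natural sanity check is to roll out the learned reduced model in open loop and compare the predicted end-effector trajectory against held-out data. The sketch below does this with scipy's ODE integrator; the monomial builder, the argument names, and the placeholder learned quantities are assumptions for illustration.

```python
import numpy as np
from itertools import combinations_with_replacement
from scipy.integrate import solve_ivp

def monomials_2_3(x):
    """Quadratic and cubic monomials of x, matching the cubic-order expansion used here."""
    feats = []
    for deg in (2, 3):
        for idx in combinations_with_replacement(range(x.size), deg):
            feats.append(np.prod(x[list(idx)]))
    return np.array(feats)

def predict_performance(y0, u_fun, t_eval, V, R0, R, B_r, w_fun, C, z_eq, y_eq):
    """Open-loop rollout of the learned reduced model (9), mapped back to the
    performance variable via z = C w(x) + z_eq; a model-quality check before
    closing the loop.  All arguments stand in for learned quantities."""
    def rhs(t, x):
        return R0 @ x + R @ monomials_2_3(x) + B_r @ u_fun(t)
    x0 = V.T @ (y0 - y_eq)                       # project initial observation onto the SSM
    sol = solve_ivp(rhs, (t_eval[0], t_eval[-1]), x0, t_eval=t_eval)
    Z = np.array([C @ w_fun(x) + z_eq for x in sol.y.T])
    return sol.t, Z
```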
B. Discussion
Additionally, the SSMR approach offers several practical advantages over the alternatives. First, our SSM-based model exhibits good closed-loop performance at longer horizons and does not suffer from the numerical conditioning issues that plague the Koopman approach. We found that at horizons N ≥ 10, the Koopman QPs were no longer solvable, which is likely due to ill-conditioning of the Koopman matrices. It is well-known that approximation of the Koopman operator is numerically challenging when many observables are considered [32]. Since we explicitly reason about the dynamics of the system in the learning process, we find that SSMR yields radically low-dimensional and, thus, numerically well-behaved models.
Second, our approach involves only two parameters: the order of approximation and the dimension of the manifold. Of these two, the dimension of the SSM is a property of the system dynamics, which can be inferred via a frequency analysis of the available data. The polynomial order of the SSM approximation controls the accuracy and the trade-off between generalization and overfitting. The size of the Koopman model grows rapidly with the number of observed states, while the dimension of the projection basis for TPWL needs to be fairly large for acceptable closed-loop performance. In contrast, since off-manifold dynamics are sufficiently approximated by those on the SSM for closed-loop control, we can learn models of minimal size and tune the SSM order iteratively to increase model fidelity, as needed. This has considerable practical advantage over learning-based approaches, where it is well-known that closed-loop performance is highly sensitive to the choice of dictionary features, size, and regularization.
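A simple way to pick these two hyperparameters in practice is sketched below: the manifold dimension from the cumulative explained variance of the decay data, and the polynomial order from hold-out prediction error. The threshold, candidate orders, and placeholder fitting/error routines are illustrative assumptions, not a prescription from the paper.

```python
import numpy as np

def choose_dimension(Y, var_threshold=0.95):
    """Pick the SSM dimension as the smallest number of PCA directions
    explaining at least `var_threshold` of the variance in the decay data."""
    s = np.linalg.svd(Y, compute_uv=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    return int(np.searchsorted(cum, var_threshold) + 1)

def choose_order(fit_fn, error_fn, Y_train, Y_val, orders=(2, 3, 4, 5)):
    """Pick the polynomial order by hold-out prediction error; fit_fn and
    error_fn are placeholders for the regression and validation routines."""
    errors = {M: error_fn(fit_fn(Y_train, M), Y_val) for M in orders}
    return min(errors, key=errors.get), errors

# Example: dimension selection on synthetic decay data.
Y = np.random.randn(9, 500) * np.exp(-0.01 * np.arange(500))
print(choose_dimension(Y))
```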
VI. CONCLUSION AND FUTURE WORK
In this work, we proposed a new data-driven approach for constructing control-oriented, reduced models of soft robots on spectral submanifolds. Using our approach, we can construct faithful, predictive, low-dimensional models which can be effectively used for real-time optimal control. We demonstrated that our SSM-based MPC scheme outperforms the state of the art significantly in both tracking error and computation time. The success of SSMLearn [4] in the experimental domain hints at the prospects of our SSMR approach for application to real-world robots. Bolstered by promising results in a high-fidelity, finite-element simulation environment, we plan to validate the data-driven SSMR approach on our hardware platform detailed in [28].
While these results are promising, there are many open problems. For example, although the setting we consider involves tasks around an equilibrium point (e.g., manipulation tasks in a constrained workspace), many high-dimensional systems are not fixed to a point and can freely navigate their environment. Extending our SSMR framework to handle these settings would generalize our approach to a broader class of systems. Also, since most robotic systems have configuration-dependent actuation constraints, we plan to extend our approach to learning dynamics with state-affine control. Lastly, we plan to estimate errors arising from SSM approximation a-priori and derive error bounds for constraint-tightening control schemes in an MPC framework.
Fig. 2. Three-step procedure to learn control-oriented dynamics on the SSM. Step 1 depicts the data collection procedure whereby we displace the robot across various parts of its workspace and collect decaying trajectories. We then form our training data by truncating the dataset to approximate trajectories that are on or near the manifold. Step 2 computes the SSM parameterization and autonomous dynamics, while Step 3 regresses the control matrix which best explains how the autonomous SSM is translated under the influence of control. The "Diamond" soft robot is shown in its various displaced configurations on the far left.
Fig. 3. Simulation results of tracking performance for tasks (1), (2), and (3) from left to right, respectively, with horizon length of N = 3. The parameters used were tuned for each method to yield the best, real-time performance across all tasks. The TPWL trajectory is shown in green, the Koopman trajectory in orange, and the SSMR in blue. The dotted black line represents the reference trajectory while the red lines represent constraints. The quasi-static circle (task 2) MSEs (in mm²) are 3.35 (TPWL), 0.91 (Koopman), and 0.53 (SSMR). The near-resonance circle (task 3) MSEs are 21.75 (TPWL), 133.6 (Koopman), and 1.87 (SSMR).
Fig. 4. Time-series simulation results of tracking performance for the quasi-static Figure Eight (task 1). The controller parameters for each approach are set similarly to those reported in Figure 3. The MSEs (in mm²) for the TPWL, Koopman, and SSMR approaches are 0.22, 0.38, and 0.13, respectively.
Figures 3 and 4 depict simulation results for trajectories (1), (2), and (3) for controller parameters chosen to maximize performance while enabling real-time control.
TABLE I. The table on the left shows mean squared error (in mm²) for all considered trajectories, while the right shows average cumulative QP solve times (in milliseconds) for the SCP algorithm. The Koopman model consists of polynomials up to order 2 over z = [x_ee, y_ee, z_ee] with a single time delay (n_koop = 66). We train separate, discrete-time Koopman models corresponding to the various time-discretizations. The TPWL model parameters are set similarly to the ones reported in [28] (n_TPWL = 42). We learn a single, continuous-time SSM model of cubic order and, although it is low-dimensional (n_SSM = 6), it outperforms the other approaches in both tracking performance (at low enough time-discretization) and solve time. The QP is solved using Gurobi [29] on a 1.6 GHz Intel Core i5 processor with 8 GB of RAM.
Figure Eight | Circle | Near-Resonance Circle
Mean squared error (mm²):
196  |  Koopman EDMD: 1.540  1.286  0.515  0.679
10  20  50  100
0.481  0.342  1.353  4.480
3.996  3.033  3.197  3.216
2.985  1.806  0.914  2.060
0.466  0.348  1.287  4.561
3.350  3.278  3.265  3.254
2.919  1.632  0.895  2.119
10  20  50
0.893  1.861  32.87
4.077  3.789  4.472
1.864  6.707  135.3
0.810  1.816  33.43
3.698  3.900  4.585
1.818  7.149  133.6
Average Solve Times (ms):
10  20  50  100
0.85  0.97  0.97  0.92
25.31  26.19  27.75  31.32
6.08  6.10  5.95  5.92
1.55  1.49  1.62  1.26
52.20  51.23  55.51  58.52
15.81  16.18  18.01  19.65
J.A. is supported by the Secretary of the Air Force STEM Ph.D. Fellowship. This work was supported by the NASA University Leadership Initiative (grant #80NSSC20M0163) and KACST; this article solely reflects the opinions and conclusions of its authors and not any Air Force, NASA, nor KACST entity.
Acknowledgements. The authors thank Elisabeth Alora and Matteo Zallio for generating the instructive figures, Florian Mahlknecht for thoughtful discussions and help with initial implementations, and Spencer M. Richards for his careful review of the manuscript.
APPENDIX
A. High-Dimensional Mechanical System
We consider robots modeled as mechanical systems with the second-order form
M q̈(t) + C q̇(t) + K q(t) + F_int(q(t), q̇(t)) = ε H u(t),        (11)
where q(t) ∈ R^N is the vector of generalized coordinates, M ∈ R^{N×N} is the mass matrix, C ∈ R^{N×N} represents the damping matrix, K ∈ R^{N×N} is the stiffness matrix, and F_int(q, q̇) ∈ R^N represents the internal nonlinear forces. Also, u(t) ∈ R^m is the vector of inputs and H ∈ R^{N×m} represents the linear mapping of actuation forces from their point of application to the configuration space. Equation (11) can be written in the first-order form of Equation (1).
B. Spectral Submanifolds
Recent results in nonlinear dynamics establish the existence of unique, smoothest invariant structures in the phase space of Equation (1) [2]. SSMs are nonlinear continuations of the spectral subspaces of the linearization of Equation (1). The SSM corresponding to E in the autonomous part of Equation (1) is defined as follows.
Definition 1: An autonomous SSM W(E), corresponding to a spectral subspace E of the operator A, is an invariant manifold of the autonomous part of the nonlinear system (1) such that
1) W(E) is tangent to E at the origin and has the same dimension as E,
2) W(E) is strictly smoother than any other invariant manifold satisfying condition 1 above.
A slow SSM is associated with a non-resonant spectral subspace containing the slowest decaying eigenvectors of the linearized system. SSMs as described in Definition 1 turn out to exist as long as the spectrum of A|_E, Λ_E, has no low-order resonance relationship with any eigenvalue in the outer spectrum Λ_out (see [2], [3] for details).
C. Convex Formulation for Real-Time Control
Convex optimization allows us to leverage fast, iterative algorithms with polynomial-time complexity [33] to efficiently and reliably approximate a solution to (10). We use sequential convex programming (SCP) to transform the nonlinear equality constraints into a sequence of linear equality constraints. The key idea is to iteratively re-linearize the dynamics around a nominal trajectory and solve a convex approximation near this trajectory until convergence to a local optimum of the continuous-time OCP (10) is observed.
To be precise, suppose we have some nominal trajectory of states and controls (x^j, u^j) = ({x^j_k}_{k=1}^N, {u^j_k}_{k=1}^{N−1}) at the j-th iteration about which we linearize the nonlinear constraints. We can then define the resulting linearized OCP, (LOCP)_{j+1}, as a quadratic program over u_{1:N−1} in which the discretized dynamics and observation map are replaced by their first-order Taylor expansions about (x^j, u^j). The terms A^j_k ∈ R^{d×d} and H^j_k ∈ R^{o×d} represent Jacobians of the dynamics and the observation map, respectively, while d^j_k ∈ R^d and c^j_k ∈ R^o are the accompanying residuals of these expansions, where r_d represents the time-discretized function of r. A sequence of (LOCP) problems is solved until convergence, i.e., ‖x^{j+1} − x^j‖_2 < ε_tol for arbitrarily small ε_tol.
Solving each (LOCP) requires it to be feasible, which is always possible with the introduction of virtual dynamics, as detailed in [34]. Additionally, since linearization provides a good approximation to the nonlinear dynamics only in a small neighborhood around the nominal trajectory, we use trust regions to ensure smooth convergence.
We implement a modified version of [35] to solve the formulated SCP and treat the constraints on the performance variables as soft constraints. We solve the finite-horizon problem in a receding-horizon fashion, where each receding-horizon subproblem involves solving the (LOCP). This gives an optimal reduced-order model trajectory (x*, u*) = ({x*_k}_{k=1}^N, {u*_k}_{k=1}^{N−1}) approximating the solution to OCP (2) over an arbitrarily long, finite horizon.
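The outer SCP iteration and the receding-horizon execution described above can be sketched generically as below; the linearized-OCP solver is injected as a callable (for instance, the single-QP sketch given earlier), and the convergence tolerance, trust-region handling, and function names are illustrative assumptions rather than the library's implementation.

```python
import numpy as np

def scp(solve_locp, x0, x_nom, u_nom, max_iters=10, tol=1e-4, tr_radius=1.0):
    """Outer SCP loop around an injected linearized-OCP solver.
    solve_locp(x0, x_nom, u_nom, tr_radius) -> (x_traj, u_traj) solves one QP
    linearized about (x_nom, u_nom) with a trust-region constraint of radius tr_radius."""
    for _ in range(max_iters):
        x_new, u_new = solve_locp(x0, x_nom, u_nom, tr_radius)
        step = np.max(np.abs(x_new - x_nom))
        x_nom, u_nom = x_new, u_new
        if step < tol:                 # converged to a local solution of (10)
            break
    return x_nom, u_nom

def mpc_loop(scp_plan, plant_step, x0, n_steps, n_roll):
    """Receding-horizon execution: re-plan with SCP, apply the first n_roll inputs."""
    x, applied = x0, []
    for _ in range(n_steps):
        x_traj, u_traj = scp_plan(x)
        for u in u_traj[:n_roll]:
            x = plant_step(x, u)       # advance the (simulated or real) robot
            applied.append(u)
    return np.array(applied)
```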
[1] G. Mengaldo, F. Renda, S. L. Brunton, M. Bächer, M. Calisti, C. Duriez, G. S. Chirikjian, and C. Laschi, "A concise guide to modelling the physics of embodied intelligence in soft robotics," Nature Reviews Physics, pp. 1-16, 2022.
[2] G. Haller and S. Ponsioen, "Nonlinear normal modes and spectral submanifolds: Existence, uniqueness and use in model reduction," Nonlinear Dynamics, vol. 86, no. 3, pp. 1493-1534, 2016.
[3] M. Cenedese, J. Axås, B. Bäuerlein, K. Avila, and G. Haller, "Data-driven modeling and prediction of non-linearizable dynamics via spectral submanifolds," Nature Communications, vol. 13, no. 1, p. 872, 2022.
[4] M. Cenedese, J. Axås, H. Yang, M. Eriten, and G. Haller, "Data-driven nonlinear model reduction to spectral submanifolds in mechanical systems," Philosophical Transactions of the Royal Society A, vol. 380, no. 2229, p. 20210194, 2022.
[5] F. Mahlknecht, J. I. Alora, S. Jain, E. Schmerling, R. Bonalli, G. Haller, and M. Pavone, "Using spectral submanifolds for nonlinear periodic control," arXiv preprint arXiv:2209.06573, 2022.
[6] G. S. Chirikjian, "Hyper-redundant manipulator dynamics: A continuum approximation," Advanced Robotics, vol. 9, no. 3, pp. 217-243, 1994.
[7] D. C. Rucker, R. J. Webster III, G. S. Chirikjian, and N. J. Cowan, "Equilibrium conformations of concentric-tube continuum robots," The International Journal of Robotics Research, vol. 29, no. 10, pp. 1263-1280, 2010.
[8] J. Jung, R. S. Penning, N. J. Ferrier, and M. R. Zinn, "A modeling approach for continuum robotic manipulators: Effects of nonlinear internal device friction," in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, IEEE, 2011, pp. 5139-5146.
[9] M. Thieffry, A. Kruszewski, C. Duriez, and T.-M. Guerra, "Control design for soft robots based on reduced-order model," IEEE Robotics and Automation Letters, vol. 4, no. 1, pp. 25-32, 2018.
[10] R. K. Katzschmann, M. Thieffry, O. Goury, A. Kruszewski, T.-M. Guerra, C. Duriez, and D. Rus, "Dynamically closed-loop controlled soft robotic arm using a reduced order finite element model with state observer," in 2019 2nd IEEE International Conference on Soft Robotics (RoboSoft), IEEE, 2019, pp. 717-724.
[11] C. Della Santina, R. K. Katzschmann, A. Bicchi, and D. Rus, "Model-based dynamic feedback control of a planar soft robot: Trajectory tracking and interaction with the environment," The International Journal of Robotics Research, vol. 39, no. 4, pp. 490-513, 2020.
[12] S. Tonkens, J. Lorenzetti, and M. Pavone, "Soft robot optimal control via reduced order finite element models," in Proc. IEEE Conf. on Robotics and Automation, 2021.
[13] O. Goury and C. Duriez, "Fast, generic, and reliable control and simulation of soft robots using model order reduction," IEEE Transactions on Robotics, vol. 34, no. 6, pp. 1565-1576, 2018.
[14] C. Farhat, T. Chapman, and P. Avery, "Structure-preserving, stability, and accuracy properties of the energy-conserving sampling and weighting method for the hyper reduction of nonlinear finite element dynamic models," International Journal for Numerical Methods in Engineering, vol. 102, no. 5, pp. 1077-1110, 2015.
[15] T. G. Thuruthel, E. Falotico, F. Renda, and C. Laschi, "Model-based reinforcement learning for closed-loop dynamic control of soft robotic manipulators," IEEE Transactions on Robotics, vol. 35, no. 1, pp. 124-134, 2018.
[16] S. Greydanus, M. Dzamba, and J. Yosinski, "Hamiltonian neural networks," Advances in Neural Information Processing Systems, vol. 32, 2019.
[17] M. Cranmer, S. Greydanus, S. Hoyer, P. Battaglia, D. Spergel, and S. Ho, "Lagrangian neural networks," arXiv preprint arXiv:2003.04630, 2020.
[18] B. Ichter and M. Pavone, "Robot motion planning in learned latent spaces," IEEE Robotics and Automation Letters, vol. 4, no. 3, pp. 2407-2414, 2019.
[19] W. D. Fries, X. He, and Y. Choi, "LaSDI: Parametric latent space dynamics identification," Computer Methods in Applied Mechanics and Engineering, vol. 399, p. 115436, 2022.
[20] D. Bruder, C. D. Remy, and R. Vasudevan, "Nonlinear system identification of soft robot dynamics using Koopman operator theory," in 2019 International Conference on Robotics and Automation (ICRA), IEEE, 2019, pp. 6244-6250.
[21] D. Bruder, B. Gillespie, C. D. Remy, and R. Vasudevan, "Modeling and control of soft robots using the Koopman operator and model predictive control," arXiv preprint arXiv:1902.02827, 2019.
[22] S. L. Brunton, B. W. Brunton, J. L. Proctor, and J. N. Kutz, "Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control," PLoS ONE, vol. 11, no. 2, e0150171, 2016.
[23] C. Della Santina, C. Duriez, and D. Rus, "Model based control of soft robots: A survey of the state of the art and open challenges," arXiv preprint arXiv:2110.01358, 2021.
[24] S. Jain and G. Haller, "How to compute invariant manifolds and their reduced dynamics in high-dimensional finite element models," Nonlinear Dynamics, 2021.
[25] J. Axås, M. Cenedese, and G. Haller, "Fast data-driven model reduction for nonlinear dynamical systems," arXiv preprint arXiv:2204.14169, 2022.
Dynamic mode decomposition with control. J L Proctor, S L Brunton, J N Kutz, SIAM Journal on Applied Dynamical Systems. 151J. L. Proctor, S. L. Brunton, and J. N. Kutz, "Dynamic mode decomposition with control," SIAM Journal on Applied Dynamical Systems, vol. 15, no. 1, pp. 142- 161, 2016.
Soft robot optimal control via reduced order finite element models. S Tonkens, J Lorenzetti, M Pavone, arXiv:2011.02092arXiv preprintS. Tonkens, J. Lorenzetti, and M. Pavone, "Soft robot optimal control via reduced order finite element mod- els," arXiv preprint arXiv:2011.02092, 2021.
Reduced order model predictive control of high-dimensional systems. J Lorenzetti, Dept. of Aeronautics and Astronautics. Stanford UniversityPh.D. dissertationJ. Lorenzetti, "Reduced order model predictive con- trol of high-dimensional systems," Ph.D. dissertation, Stanford University, Dept. of Aeronautics and Astro- nautics, Stanford, California, Aug. 2021.
Gurobi Optimization, LLC, Gurobi Optimizer Reference Manual. Gurobi Optimization, LLC, Gurobi Optimizer Refer- ence Manual, 2022.
Sofaan open source framework for medical simulation. J Allard, S Cotin, F Faure, P.-J Bensoussan, F Poyer, C Duriez, H Delingette, L Grisoni, MMVR 15-Medicine Meets Virtual Reality. IOP Press125J. Allard, S. Cotin, F. Faure, P.-J. Bensoussan, F. Poyer, C. Duriez, H. Delingette, and L. Grisoni, "Sofa- an open source framework for medical simulation," in MMVR 15-Medicine Meets Virtual Reality, IOP Press, vol. 125, 2007, pp. 13-18.
Software toolkit for modeling, simulation, and control of soft robots. E Coevoet, T Morales-Bieze, F Largilliere, Z Zhang, M Thieffry, M Sanz-Lopez, B Carrez, D Marchal, O Goury, J Dequidt, Advanced Robotics. 3122E. Coevoet, T. Morales-Bieze, F. Largilliere, Z. Zhang, M. Thieffry, M. Sanz-Lopez, B. Carrez, D. Marchal, O. Goury, J. Dequidt, et al., "Software toolkit for modeling, simulation, and control of soft robots," Advanced Robotics, vol. 31, no. 22, pp. 1208-1224, 2017.
System norm regularization methods for koopman operator approximation. S Dahdah, J R Forbes, Proceedings of the Royal Society A. 4782265S. Dahdah and J. R. Forbes, "System norm regular- ization methods for koopman operator approximation," Proceedings of the Royal Society A, vol. 478, no. 2265, p. 20 220 162, 2022.
Convex optimization for trajectory generation. D Malyuta, T P Reynolds, M Szmuk, T Lew, R Bonalli, M Pavone, B Acikmese, IEEE Control Systems Magazine. 2022In PressD. Malyuta, T. P. Reynolds, M. Szmuk, T. Lew, R. Bonalli, M. Pavone, and B. Acikmese, "Convex optimization for trajectory generation," IEEE Control Systems Magazine, 2022, In Press.
Successive convexification of non-convex optimal control problems and its convergence properties. Y Mao, M Szmuk, B Açıkmeşe, 2016 IEEE 55th Conference on Decision and Control (CDC). IEEEY. Mao, M. Szmuk, and B. Açıkmeşe, "Successive convexification of non-convex optimal control prob- lems and its convergence properties," in 2016 IEEE 55th Conference on Decision and Control (CDC), IEEE, 2016, pp. 3636-3641.
Trajectory optimization on manifolds: A theoretically-guaranteed embedded sequential convex programming approach. R Bonalli, A Bylard, A Cauligi, T Lew, M Pavone, Robotics: Science and Systems. R. Bonalli, A. Bylard, A. Cauligi, T. Lew, and M. Pavone, "Trajectory optimization on manifolds: A theoretically-guaranteed embedded sequential convex programming approach," in Robotics: Science and Systems, 2019.
| [
"https://github.com/StanfordASL/SSMR-for-control",
"https://github.com/StanfordASL/soft-robot-control"
] |
[] | [
"Z Faidon Brotzakis \nDepartment of Chemistry\nUniversity of Cambridge\nCB2 1EWCambridge\n\nUK. ‡ van 't Hoff Institute for Molecular Sciences\nUniversity of Amsterdam\nPO Box 941571090 GDAmsterdamThe Netherlands\n",
"Peter G Bolhuis "
] | [
"Department of Chemistry\nUniversity of Cambridge\nCB2 1EWCambridge",
"UK. ‡ van 't Hoff Institute for Molecular Sciences\nUniversity of Amsterdam\nPO Box 941571090 GDAmsterdamThe Netherlands"
] | [] | Transition path sampling (TPS) is a powerful technique for investigating rare transitions, especially when the mechanism is unknown and one does not have access to the reaction coordinate. Straightforward application of TPS does not directly provide the free energy landscape nor the kinetics, which motivated the development of path sampling extensions, such as transition interface sampling (TIS), and the reweighted paths ensemble (RPE), that are able to simultaneously access both kinetics and thermodynamics. However, performing TIS is more involved than TPS, and still requires (some) insight in the reaction to define interfaces. While packages that can efficiently compute path ensembles for TIS are now available, it would be useful to directly compute the free energy from a single TPS simulation. To achieve this, we developed an approximate method, denoted Virtual Interface Exchange, that makes use of the rejected pathways in a form of waste recycling. The method yields an approximate reweighted path ensemble that allows an immediate view of the free energy landscape from a single TPS, as well as enables a full committor analysis. | 10.1063/1.5119252 | [
"https://arxiv.org/pdf/1907.04453v1.pdf"
] | 195,874,014 | 1907.04453 | fd53a28fdf70d0f1cb0f9e7f5a4f7500cc22962a |
Z. Faidon Brotzakis
Department of Chemistry
University of Cambridge
CB2 1EW, Cambridge
UK. ‡ van 't Hoff Institute for Molecular Sciences
University of Amsterdam
PO Box 94157, 1090 GD Amsterdam, The Netherlands
Peter G. Bolhuis
arXiv:1907.04453v1 [physics.chem-ph] 9 Jul 2019
Approximating Free Energy and Committor Landscapes in Standard Transition Path Sampling using Virtual Interface Exchange
Transition path sampling (TPS) is a powerful technique for investigating rare transitions, especially when the mechanism is unknown and one does not have access to the reaction coordinate. Straightforward application of TPS does not directly provide the free energy landscape nor the kinetics, which motivated the development of path sampling extensions, such as transition interface sampling (TIS), and the reweighted paths ensemble (RPE), that are able to simultaneously access both kinetics and thermodynamics. However, performing TIS is more involved than TPS, and still requires (some) insight in the reaction to define interfaces. While packages that can efficiently compute path ensembles for TIS are now available, it would be useful to directly compute the free energy from a single TPS simulation. To achieve this, we developed an approximate method, denoted Virtual Interface Exchange, that makes use of the rejected pathways in a form of waste recycling. The method yields an approximate reweighted path ensemble that allows an immediate view of the free energy landscape from a single TPS, as well as enables a full committor analysis.
I. INTRODUCTION
Molecular simulation of rare event kinetics is challenging, due to the long time scales and high barriers involved [1,2]. In the past decades many methods have been invented to overcome this challenge, either via enhanced sampling in configuration space (see e.g. Refs. [3][4][5][6][7][8][9][10][11][12][13]), or via path-based methods, that enhance the sampling in trajectory space (see e.g. Refs. [14][15][16][17][18][19][20][21][22]). Belonging to the latter category, the Transition Path Sampling (TPS) method collects unbiased dynamical trajectories that connect two predefined stable states [23][24][25][26]. The result is a path ensemble that accurately represents the dynamics of the process of interest, and which can be scrutinised to extract low dimensional descriptions of the reaction coordinate that in turn can be used for determining free energy or kinetics [27,28]. Notably, projections of the path ensemble on relevant order parameters such as path densities lead to qualitative mechanistic insight. TPS has been successfully applied to complex systems, e.g. protein folding and conformational changes [29], binding and aggregation [30][31][32], chemical reactions [33], and nucleation phenomena [34,35], yielding valuable insight in the reaction coordinate and mechanism.
However, one thing that is not readily available from a standard TPS ensemble is the free energy profile or landscape. This is because the TPS ensemble is a constrained ensemble, which misses information on all the failed paths that did not make it over the barrier but still contribute significantly to the free energy. This missing information is not easy to correct for in standard TPS. Yet reliable knowledge of the free energy landscape in the barrier region obtained from TPS simulations would be a very valuable analysis tool. Moreover, the standard TPS setup does not provide the kinetic rate constants directly, and an additional transformation of the path ensemble is needed [24]. The TPS methodology suite has been greatly extended over the years. For instance, the transition interface sampling (TIS) version of TPS enables efficient computation of rate constants [36,37]. TIS also enables a reweighting of the path ensemble, giving access to the free energy landscapes and committor surfaces [38]. While there are now software packages that can compute the path ensembles in TIS, this requires a more involved setup compared to straightforward TPS [39][40][41]. Indeed, standard transition path sampling has been the entry point for most studies, and it is the first approach one should try, in particular when confronted with a complex transition for which no detailed mechanistic picture is available.
The purpose of this paper is to develop a way to approximate the reweighted path ensemble (RPE) from a single standard TPS run. This approximation is then sufficient to construct the free energy landscape in the barrier region. This approximation is realised by making use of the rejected paths in the TPS sampling, which give information on the free energy barrier. As this approach is making use of the rejected paths, it is a form of waste recycling, a method introduced by Frenkel for reusing rejected Monte Carlo moves [42]. In particular, we make use of the virtual replica exchange algorithm by Coluzza and Frenkel [43].
The method is roughly as follows. To compute the RPE we require the TIS ensembles for each interface. However, we only sample the full TPS ensemble. Now, we can interpret each shooting move as a virtual replica exchange move towards the TIS ensemble corresponding to the shooting point, followed by a constrained interface shot. We therefore call this methodology Virtual Interface Exchange TPS (VIE-TPS). Thus, each TPS shot gives an estimate for a particular TIS interface ensemble. From this we can estimate the RPE, and by carefully keeping track of the crossing probabilities we can reweight each (accepted and rejected) trajectory in the ensemble, thus giving the unbiased free energy landscape.
The remainder of the paper is as follows. In the theory section we first briefly recap the TPS and TIS notation. Then we describe the VIE-TPS algorithm. The results section illustrates the new method on a toy model, the AD system, and the FF dimer.
II. THEORY
A. Summary of the TPS, TIS and single replica ensemble
In this section we give a brief overview of the notation for the TPS and TIS ensembles. A trajectory is denoted as x ≡ {x_1, x_2, ..., x_L}, where each frame (or slice, or snapshot) x_i contains the positions and momenta of the entire system at time t = i∆t. Frames are thus separated by a time interval ∆t, yielding a trajectory of duration L∆t. Denoting π[x] as the distribution of paths given by the underlying dynamics (e.g. Langevin dynamics), and introducing two stable state sets A and B, the TPS ensemble is defined as

P_{AB}[x] = h_{AB}[x] π[x] / Z_{AB},    (1)
where h_{AB}[x] is the indicator function that is only unity if the path connects A with B, and Z_{AB} is the normalising partition function. In TIS an ordered sequence of interfaces λ_0, λ_1, ..., λ_n is introduced, parameterised by an order parameter λ. Denoting the set Λ_i = {x | λ(x) > λ_i}, one obtains a similar definition for TIS interface i:

P_{AΛ_i}[x] = ĥ_i^A[x] π[x] / Z_{AΛ_i},    (2)

with ĥ_i^A[x] now the indicator function that is only unity if the path leaves A, crosses λ_i, and then reaches either A or B. The crossing probability connected to the TIS ensemble is

P_A(λ|λ_i) = ∫Dx P_{AΛ_i}[x] θ(λ_max[x] − λ),    (3)
where ∫Dx indicates an integral over all paths, θ(x) is the Heaviside step function, and λ_max[x] returns the maximum value of λ along the path. Here we assumed that λ_i is steadily increasing with i. The shooting move is used to sample both the TIS and TPS ensembles:

p_acc[x^(o) → x^(n)] = ĥ_i[x^(n)] min(1, L^(o)/L^(n)),    (4)
Shooting moves in TIS can be accepted if the path crosses the interface i, but need an additional correction factor based on the path length. It is also possible to use the constrained interface move [44], in which the shooting point is chosen among the n_λ frames that are located at (or near) the interface λ (usually defined as some region around the interface). The acceptance criterion for such a constrained move on an interface j is also slightly different. In fact, it is determined by the number of frames n_{λ_j} one is allowed to choose from. The selection probability for a shooting frame is now p_sel(x_sp) = 1/n_{λ_j}, instead of 1/L. The acceptance criterion for a shot from the interface is thus

p_acc[x^(o) → x^(n)] = ĥ_j[x^(n)] min(1, p_sel(x_sp^(n))/p_sel(x_sp^(o))) = ĥ_j[x^(n)] min(1, n_{λ_j}[x^(o)]/n_{λ_j}[x^(n)])    (5)
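In code, the constrained-interface acceptance of Eq. 5 is a single Metropolis-style test. A minimal sketch is given below; the indicator ĥ_j, i.e. the requirement that the trial path actually starts in A and crosses interface j, is assumed to be checked separately by the caller, and the function name is purely illustrative.

import numpy as np

def accept_constrained_shot(n_old, n_new):
    # Eq. 5: n_old and n_new count the frames of the old and new path near interface j
    return np.random.rand() < min(1.0, n_old / n_new)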
In single replica TIS (SRTIS) the interface itself changes location, e.g. from λ_i to λ_j [45]. This interface move can be accepted with

p_acc(x; λ_i → λ_j) = ĥ_j^A[x] min(1, g(λ_i)/g(λ_j)),    (6)

where g(λ_j) is the correct density of paths for each interface. This density of paths (DOP) will not be equal for the different interfaces, but is high close to the stable states and low close to the transition state region. In fact, the correct DOP is proportional to the crossing probability, g(λ_i) ∝ P_A(λ_i). This can be seen as follows.
While an exchange to a lower interface is always possible, an exchange opportunity to a higher interface occurs with the naturally occurring probability for pathways at the higher interface, which, in fact, is the crossing probability. To obtain an equal population (for flat histogram sampling) the exchange acceptance should therefore be biased with the ratio of the crossing probabilities. As an exchange between two interfaces belonging to the same state is governed by the same crossing probability, the proportionality factor cancels. In single replica TIS sampling the shooting move and the interface exchange are done separately. It is possible to combine the shooting move and the interface exchange into a single move. This combined shooting and exchange move can be seen as choosing a random interface, moving the current interface to that position, and then performing a shooting move from a shooting point constrained to that interface. When we move to a new interface, the selection of that interface is usually done randomly with a uniform distribution, so the selection probability does not appear in the acceptance criterion of Eq. 6. However, we might use another selection criterion; in particular, we would like to use the standard uniform selection of the shooting point on a path to determine both the shooting point and the interface. When we select a frame from the path uniformly, p_sel^frame = 1/L, the chance to select a certain interface i is proportional to the number of frames n_{λ_i} that are close to that interface; in fact it is p_sel^(λ_i) = n_{λ_i}[x]/L. Yet we are using p_sel^frame = 1/L. To correct for this bias, we multiply the acceptance by the ratio of the selection probabilities. The acceptance probability is now

p_acc(x; λ_i → λ_j) = ĥ_j^A[x] min(1, p_sel^(λ_i) g(λ_i) / (p_sel^(λ_j) g(λ_j))) = ĥ_j^A[x] min(1, n^(λ_i)[x] g(λ_i) / (n^(λ_j)[x] g(λ_j))),    (7)

We can combine the single replica exchange move, Eq. 7, with the constrained shooting move, Eq. 5, yielding

P_acc(x^(o) → x^(n); λ_i → λ_j) = ĥ_j^A[x^(n)] min(1, n_{λ_i}[x^(o)] g(λ_i) / (n_{λ_j}[x^(n)] g(λ_j)))    (8)
where again g(λ) ∝ P_A(λ).
B. Interpreting TPS as SRTIS constrained shooting
Now we can apply this idea also to a straightforward TPS simulation, where the interface i is basically fixed at λ_i = λ_B. The acceptance ratio for a (virtual) single replica shooting move to a new interface j, by choosing a uniform frame on the path, would then be

p_acc(x^(o) → x^(n); λ_B → λ_j) = ĥ_j^A[x^(n)] min(1, n_{λ_B}[x^(o)] P_A(λ_B) / (n_{λ_j}[x^(n)] P_A(λ_j))) = ĥ_j^A[x^(n)] min(1, (1/n_{λ_j}[x^(n)]) P_A(λ_B)/P_A(λ_j)),    (9)
Here n_{λ_B}[x^(o)] = 1 because the old interface λ_B has only one crossing point. A major point to make is that the ratio of probabilities P_A(λ_B)/P_A(λ_j) in this acceptance ratio is a constant for fixed λ_j. The second remark is that for standard TPS the path can never be accepted, unless it also fulfils

p_acc[x^(o) → x^(n)] = h_{AB}[x^(n)] min(1, L^(o)/L^(n)),    (10)
Paths that do not fulfil this standard TPS condition will be rejected. However we can make use of the rejected paths by waste recycling [42].
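For completeness, the standard TPS acceptance of Eq. 10, which every trial path must additionally pass before it enters the transition path ensemble, can be sketched as follows (a minimal illustration; function and argument names are assumptions):

import numpy as np

def accept_tps(connects_AB, L_old, L_new):
    # Eq. 10: reject any path that does not connect A and B; otherwise correct for
    # the uniform shooting-point selection with the length ratio of old and new path
    return connects_AB and (np.random.rand() < min(1.0, L_old / L_new))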
C. Making use of Virtual Interface Exchange-TPS
Indeed, virtual Monte Carlo moves have been shown to greatly enhance the sampling of the density of states [42,46]. Coluzza and Frenkel [43] introduced a virtual replica exchange scheme in which a trial replica exchange move that is rejected can still be counted as part of the ensemble. When regular replica exchange is considered, this results in a probability P_j(q) for the configuration q in the jth replica, based on the exchange probability for replicas i and j:

P_j(q) = (1 − p_acc) δ(q_j − q) + p_acc δ(q_i − q),    (11)

where the first term accounts for non-exchange and recounts q_j, the second term, for the exchange, gives the contribution of q_i, and where p_acc is the acceptance probability for the exchange. Extending this to path space gives

P_j(x) = (1 − p_acc) δ(x_j − x) + p_acc δ(x_i − x).    (12)
Thus, if we have two paths x_i and x_j in two path ensembles i and j, respectively, then when virtually exchanging these paths between the ensembles, path ensemble j will have contributions from ensemble i as specified in this equation. For the single replica exchange shooting move in the TPS ensemble, the first term never contributes, since we are not sampling in the j replica but only in the TPS ensemble i. Hence the first delta function does not contribute, leading to

P_j(x) = p_acc = ĥ_j^A[x] min(1, (1/n_{λ_j}[x]) P_A(λ_B)/P_A(λ_j)) = (1/n_{λ_j}[x]) P_A(λ_B)/P_A(λ_j),    (13)
where the second line follows from the fact that the second argument of the min function is always smaller than unity if j < B, and we only consider paths that start in A. The crossing of interface j is guaranteed by the constrained interface shooting move. This probability becomes less and less likely for trial paths that are shot from an interface λ_j closer to A. The big point, again, is that the second factor is constant, not dependent on anything other than λ_j. So, we can take the weight of each path in the jth ensemble as 1/n_{λ_j}[x^(n)] ≡ f[x], times an unknown constant. This weight itself is proportional to the TIS path probability for interface j:

P_j(x) ∝ P_{AΛ_j}[x] ∝ f[x].    (14)
One can construct standard crossing probability histograms from the ensemble of all trial paths with the same interface λ_j, and hence the same weights, according to

P_A(λ|λ_j) = (1/N_j) Σ_x^{N_j} (1/n_{λ_j}[x]) θ(λ_max[x] − λ),    (15)

where N_j is the total number of trial paths for λ_j. The regular, and correct, way to construct these crossing probabilities would be to perform TIS on the interface λ_j. Since we aim to get the crossing probabilities of virtual interface exchange TPS and of TIS identical, the conclusion is that this is only possible if the distribution of shooting points is the same in both cases, and pathways decorrelate quickly. This puts some restrictions on the method: in particular, it is only correct for two-way shooting in the over-damped limit, and when λ is reasonably close to the reaction coordinate (RC).
Nevertheless, even when these conditions are in practice not fulfilled, the crossing probability can be used to approximate the RPE, and hence estimate the free energy surface, as well as the committor surface.
D. The VIE-TPS algorithm
The VIE-TPS algorithm is as follows for two-way shooting with uniform selection.
1. Choose a shooting point sp on the current path with uniform selection. Compute λ_sp. Assign the closest interface j, e.g. by binning.
2. Alter the momenta of the shooting point (e.g. by drawing anew from the Maxwell-Boltzmann distribution, or by a random isotropic change) and integrate forward and backward in time until stable state A or B is reached.
3. Identify the path type (AA, AB, BA or BB), and compute the number n_{λ_sp} of frames located at (in practice near) interface j.
4. For paths that start in A do the following:
(a) Assign the trial move to interface λ_sp^A, e.g. by binning.
(b) Identify the maximum λ on the entire path, λ_max, and update the crossing histogram for λ_sp^A by adding 1/n_{λ_sp} to each bin with λ_sp^A < λ < λ_max.
5. For paths that start in B do the following:
(a) Assign the trial move to interface λ_sp^B (e.g. by binning).
(b) Identify the minimum λ on the entire path, λ_min, and update the crossing histogram for λ_sp^B by adding 1/n_{λ_sp} to each bin with λ_min < λ < λ_sp^B.
6. Store the trial path for a posteriori evaluation of the RPE with the assigned path weight 1/n_{λ_sp}.
7. Accept trial paths according to the standard TPS criterion, Eq. 10: if the path does not connect A and B, reject the trial path, retaining the previous path. Otherwise accept the path according to the length criterion L^(o)/L^(n), and reject otherwise.
8. Accumulate transition path ensemble in the normal way.
9. Repeat from step (1) until finished.
Note that while in this algorithm we compute the weights on-the-fly during the TPS sampling, it is also possible to post-process a precomputed TPS ensemble, if all trial paths have been stored.
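To make the bookkeeping of steps 1-6 concrete, a minimal Python sketch of the histogram update for a single two-way trial shot is given below. It assumes that the λ values along the trial path and at the shooting point are already available, that the histogram bins coincide with the interface positions lambdas, and that frames within a tolerance tol of an interface count as lying at that interface; these names and simplifications are illustrative and not part of the published implementation.

import numpy as np

def update_crossing_histograms(lam_trial, lam_sp, starts_in_A, lambdas,
                               hist_A, hist_B, tol=0.05):
    # Assign the shooting point to the closest interface j (step 1, "binning").
    j = int(np.argmin(np.abs(lambdas - lam_sp)))
    # Count the frames of the trial path near interface j (step 3).
    n_sp = max(1, int(np.sum(np.abs(lam_trial - lambdas[j]) < tol)))
    w = 1.0 / n_sp                      # path weight f[x] = 1/n_{lambda_sp}
    if starts_in_A:                     # step 4: paths starting in A
        sel = (lambdas >= lambdas[j]) & (lambdas < lam_trial.max())
        hist_A[j, sel] += w
    else:                               # step 5: paths starting in B
        sel = (lambdas <= lambdas[j]) & (lambdas > lam_trial.min())
        hist_B[j, sel] += w
    return j, w                         # stored with the trial path (step 6)

The returned weight w is exactly the factor f[x] that reappears in the RPE construction below.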
E. Constructing the RPE
After the VIE-TPS sampling the RPE can be constructed from the crossing histograms obtained in steps 4b and 5b, using e.g. WHAM [47,48], or MBAR [49].
First, the total crossing probability histogram is constructed from the individual crossing histograms for all interfaces i = 1 ... n−1 by applying the WHAM (multiple histogram) method [47]:

P_A(λ|λ_1) = Σ_{i=1}^{n−1} w̄_i^A θ(λ_{i+1} − λ) θ(λ − λ_i) Σ_{j=1}^{i} P_A(λ|λ_j).    (16)
The weights w̄_i^A are given by

w̄_i^A = 1 / ( Σ_{j=1}^{i} 1/w_j^A ),    (17)

where w_j^A are the optimized WHAM weights for each interface histogram j.
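To illustrate how the per-interface histograms (e.g. as accumulated in the sketch above) enter Eqs. 16 and 17, the snippet below combines them into the total crossing probability evaluated at the interface positions. For brevity it uses simple histogram matching at successive interfaces, which corresponds to the zeroth-order starting point of the self-consistent WHAM iteration rather than the fully optimized weights; this simplification is an assumption of the sketch.

import numpy as np

def total_crossing_probability(hists):
    # hists[i, k] approximates P_A(lambda_k | lambda_i) for k >= i (Eq. 15).
    # Returns P_A(lambda_k | lambda_1) at every interface position.
    n = hists.shape[0]
    P = np.ones(n)
    for k in range(1, n):
        # conditional probability to reach interface k given a crossing of k-1
        ratio = hists[k - 1, k] / hists[k - 1, k - 1]
        P[k] = P[k - 1] * ratio
    return P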
The RPE is now constructed by reweighting each path (which already had a weight f[x]) with a factor depending on its λ_max [38]:

P[x] = c_A Σ_{j=1}^{n−1} P_{AΛ_j}[x] f[x] W_A[x] + c_B Σ_{j=1}^{n−1} P_{BΛ_j}[x] f[x] W_B[x],    (18)

Here W_A[x] = Σ_{i=1}^{n−1} w̄_i^A θ(λ_max[x] − λ_i) θ(λ_{i+1} − λ_max[x]) selects the correct interface weight for each path x based on its maximum λ value along the trajectory (minimum for BA paths in W_B[x]). The unknown constants c_A and c_B follow from matching the AB and BA histograms for overlapping interfaces [38]. This can be most easily done by setting c_A = C/P_A(λ_B|λ_A) and c_B = C/P_B(λ_A|λ_B), where C is a single (arbitrary) normalising constant.

F. Projection of the RPE

The free energy then follows from projecting the RPE on a selected set of order parameters q = {q_1, ..., q_m}, using all trial pathways obtained in step 6 of the algorithm, including the rejected ones:
F(q) = −k_B T ln ρ(q) + const,    (19)

where we can split the configurational density ρ(q) = ρ_A(q) + ρ_B(q) into two parts, one related to paths coming from A and one related to paths coming from B. For the N_A sampled trial paths that start in A (step 4b), ρ_A(q) becomes

ρ_A(q) = c_A Σ_x^{N_A} f[x] W_A[x] Σ_{k=0}^{L} δ(q(x_k) − q),    (20)

and for the N_B sampled trial paths that start in B (step 5b), ρ_B(q) becomes

ρ_B(q) = c_B Σ_x^{N_B} f[x] W_B[x] Σ_{k=0}^{L} δ(q(x_k) − q).    (21)

Here δ(z) = Π_{i=1}^{m} δ(z^(i)) is the Dirac delta function, used to project the configurations onto the m-dimensional collective variable space.
When the number of paths is reasonably small and all paths can be stored on disk, this can be done a posteriori. When the number of paths exceeds the storage capacity, one can save, instead of the entire path ensemble, only the histograms for paths ending at λ_max, which requires much less storage. This can be done efficiently inside the above algorithm by including a simple loop over the current trial path, determining the maximum (or minimum for paths starting in B), and histogramming the relevant order parameters for each frame in the path. Then, at the end of the simulation, these histograms are reweighted.
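A minimal a posteriori projection of Eqs. 19-21 onto a one-dimensional order parameter q could look as follows. It assumes each stored trial path is represented by a small record containing its q values, its weight f[x], its λ_max (or λ_min), and its starting state; the WHAM interface weights wA, wB and the matching constants cA, cB are taken as precomputed inputs. All names are illustrative.

import numpy as np

def project_free_energy(paths, q_edges, lambdas, wA, wB, cA, cB, kT=1.0):
    rho = np.zeros(len(q_edges) - 1)
    for p in paths:
        # W[x]: weight of the interface window containing lambda_max (or lambda_min)
        i = int(np.clip(np.searchsorted(lambdas, p['lam_extreme']) - 1, 0, len(wA) - 1))
        weight = (cA * wA[i] if p['from_A'] else cB * wB[i]) * p['f']
        h, _ = np.histogram(p['q'], bins=q_edges)
        rho += weight * h
    with np.errstate(divide='ignore'):
        return -kT * np.log(rho)            # Eq. 19, up to an additive constant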
Besides the free energy, we can project the averaged committor function p_B onto arbitrary surfaces by using the indicator function h_B(x_L):

p_B(q) = c_A Σ_x^{N_A} f[x] W_A[x] h_B(x_L)    (22)
       + c_B Σ_x^{N_B} f[x] W_B[x] h_B(x_L).    (23)

Because paths are microscopically reversible, the (averaged) committor function p_B(q) can also be defined as the ratio of the projected density ρ_B(q) of all paths that begin in B to the total density ρ(q) [28]:

p_B(q) = ρ_B(q) / (ρ_A(q) + ρ_B(q)).    (24)
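Eq. 24 then follows from the same histograms; a short sketch reusing the hypothetical path records introduced above:

import numpy as np

def project_committor(paths, q_edges, lambdas, wA, wB, cA, cB):
    rho_A = np.zeros(len(q_edges) - 1)
    rho_B = np.zeros(len(q_edges) - 1)
    for p in paths:
        i = int(np.clip(np.searchsorted(lambdas, p['lam_extreme']) - 1, 0, len(wA) - 1))
        h, _ = np.histogram(p['q'], bins=q_edges)
        if p['from_A']:
            rho_A += cA * wA[i] * p['f'] * h
        else:
            rho_B += cB * wB[i] * p['f'] * h
    total = rho_A + rho_B
    # Eq. 24: ratio of the B-density to the total density, NaN where unsampled
    return np.divide(rho_B, total, out=np.full(total.shape, np.nan), where=total > 0)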
The above algorithm is applicable for two-way shooting. For one-way shooting it is also possible to construct the crossing probability, but as the trial paths do not have their backward integration, we cannot assume the full paths to be correct, and hence we cannot construct the free energy directly using the above algorithm. However, we might still obtain the free energy by saving, for each trial path, for the interface λ_sp, the free energy histogram for values above the interface (below for paths that start in B) and then performing WHAM on these histograms. Note that this does not lead to the RPE, but only to the crossing histograms and the free energy as a function of λ.

Finally, as in regular TIS, the rate constant k_AB can be calculated as

k_AB = (⟨φ_{1,0}⟩ / ⟨h_A⟩) P_A(λ_n|λ_1),    (25)

where the first factor is the effective positive flux through the first interface and the second is the crossing probability of interface n for all trajectories that are shot from the first interface and reach state A in their backward integration. The first factor is easily accessible through MD and the second through TIS or, as shown in this study, through TPS using waste recycling of the rejected paths.
III. SIMULATION METHODS
We benchmark VIE-TPS in three different examples. We first give a proof of principle with a simple 2D potential. Then we show that the approach works for the standard biomolecular isomerisation of alanine dipeptide. Finally we investigate the dimerisation of solvated FF dipeptides. Below we describe the simulation details for each of these systems.
A. Toy model
Consider the 2D potential landscape

V[x, y] = 0.0177778 (0.0625 x^4 + y^4) − e^{−0.3x^2 − 0.01y^2} − 3 e^{−0.3(x−4)^2 − 0.01y^2} − 4 e^{−0.3(x+4)^2 − 0.01y^2} + 0.2 sin^2(5x).    (26)
The contour plot of this function is shown in Fig. 1. The asymmetric potential consists of two minima with different minimum potential values separated by a high barrier. An oscillatory potential in the x direction is added to make comparisons between different calculations clearer.
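For reference, a direct transcription of Eq. 26 in Python is given below, written under the assumption (suggested by the recovered layout and by the quoted barrier of roughly 10 kT at β = 3) that the prefactor 0.0177778 multiplies only the quartic terms.

import numpy as np

def toy_potential(x, y):
    # asymmetric double well (Eq. 26) with a corrugated barrier region
    quartic = 0.0177778 * (0.0625 * x**4 + y**4)
    wells = (-np.exp(-0.3 * x**2 - 0.01 * y**2)
             - 3.0 * np.exp(-0.3 * (x - 4.0)**2 - 0.01 * y**2)
             - 4.0 * np.exp(-0.3 * (x + 4.0)**2 - 0.01 * y**2))
    corrugation = 0.2 * np.sin(5.0 * x)**2
    return quartic + wells + corrugation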
We perform TPS simulations at β = 3. For this setting the barrier is about 10 kT. We use three different types of dynamics: Metropolis Monte Carlo dynamics [1], Langevin dynamics at high friction γ = 10, and Langevin dynamics at medium friction γ = 2.5. For the MC we use a maximum step size of … We perform TPS on this potential with an initial stable state A defined by x < −3.5 and a final stable state B defined by x > 3.5. During the TPS run the crossing probability and the RPE were constructed using the algorithm above. The RPE was then used to construct the free energy.
B. Alanine Dipeptide
We perform atomistic molecular dynamics simulations of Alanine Dipeptide (AD) using the Gromacs 4.5.4 engine [50], employing the AMBER96 [51] and TIP3P force fields [52]. The system is prepared as follows: First, the AD molecule is placed in a cubic box of 28 x 28 x 28Å followed by an energy minimisation. The system is thereafter solvated, energy minimized, shortly equilibrated for 1 ns, and finally subjected to a production run of 75 ns NPT simulation. NPT simulations are carried out at ambient conditions. Bonds are constrained using the Lincs algorithm, Van Der Waals interactions are cut off at 1.1 nm, and electrostatics are treated using the Particle Mesh Ewald method using a Fourier spacing of 0.12 nm and a cut-off of 1.1 nm for the short range electrostatics. The leap-frog algorithm is used to propagate the dynamics, and the neighbour list is updated every 10 fs, using a 1.1 nm cut-off and a 2 fs time step. The temperature and pressure are kept constant using the v-rescale thermostat [53] and Parrinello-Rahman [54] barostat, respectively.
We use TPS to sample transition paths connecting the α to the β state. The α state spans the volume −150° ≤ ψ ≤ −60° and −180° ≤ φ ≤ 0°, and in turn the β state spans the volume 150° ≤ ψ ≤ 180° and −180° ≤ φ ≤ 0°. Note that such state definitions are rather strict. The initial path is obtained from the 75 ns MD run. The two-way shooting, flexible-length TPS variant with randomized velocities is used. Frames are saved every 0.03 ps and the maximum allowed transition path length is 30 ps. The crossing probabilities were calculated along the ψ order parameter.
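These strict state definitions translate into a simple classifier on the backbone dihedrals; a sketch with angles in degrees follows (function and return labels are chosen here for illustration only):

def ad_state(phi, psi):
    # Return 'alpha', 'beta', or None for one (phi, psi) pair in degrees.
    if -180.0 <= phi <= 0.0:
        if -150.0 <= psi <= -60.0:
            return 'alpha'
        if 150.0 <= psi <= 180.0:
            return 'beta'
    return None   # frame lies outside both (strictly defined) stable states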
C. FF dimer
The details of the atomistic molecular dynamics simulation of the FF dimer are identical to the ones in [55]. We briefly outline it below. We perform atomistic molecular dynamics simulations of the FF dimer using the Gromacs 4.5.4 engine [50], employing the AMBER99SB-ILDN [56] and TIP3P force fields [52]. The FF segment is isolated from the KLVFFA sequence (residues 16-21) of the amyloid-beta peptide (PDB2Y29 [57]) and subsequently capped with neutral ACE and NME termini. The system is prepared as follows: First, two FF monomers are placed in a cubic box of 30 x 30 x 30Å followed by an energy minimization. The system is thereafter solvated, energy minimized, shortly equilibrated for 10 ns, and finally subjected to a production run of 200 ns NPT simulation. NPT simulations are carried out at ambient conditions. Bonds are constrained using the Lincs algorithm, Van Der Waals interactions are cut off at 1 nm, and electrostatics are treated using the Particle Mesh Ewald method using a Fourier spacing of 0.12 nm and a cut-off of 1 nm for the short range electrostatics. The leap-frog algorithm is used to propagate the dynamics, and the neighbour list is updated every 10 fs, using a 1 nm cut-off and a 2 fs time step. The temperature and pressure are kept constant using the v-rescale thermostat [53] and Parrinello-Rahman [54] barostat, respectively.
We use TPS to sample transition paths connecting the bound to unbound state. The bound state (B) spans the volume of minimum distance ≤ 0.22 nm, and in turn the unbound state (U ) the volume of minimum distance ≥ 1.1 nm. The initial path is obtained from the 200 ns MD run. The two-way shooting, with randomized velocities and flexible-length TPS variant is used. Frames are saved every 5 ps and the maximum allowed transition path length is 10 ns. The crossing probabilities were calculated along the minimum distance order parameter.
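The minimum-distance order parameter is simply the smallest inter-peptide atom-atom distance per frame. A plain-numpy sketch that ignores periodic boundary conditions is given below; the 0.22 nm and 1.1 nm bounds are those stated above, while all function and variable names are illustrative assumptions.

import numpy as np

def min_distance(coords1, coords2):
    # coords1: (N1, 3) and coords2: (N2, 3) atomic positions of the two peptides in nm
    d = np.linalg.norm(coords1[:, None, :] - coords2[None, :, :], axis=-1)
    return float(d.min())

def ff_state(dmin):
    if dmin <= 0.22:
        return 'bound'
    if dmin >= 1.1:
        return 'unbound'
    return None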
IV. RESULTS AND DISCUSSION
A. Toy model
For easier comparison we always compute the free energy as a 1D projection along the x-axis. The exact projection of Eq. 26 is given in Fig. 2 as a blue dashed line. The red curve is the negative logarithm of the probability to observe a configuration in the path ensemble, obtained from direct projection of the paths on the x-axis. This curve clearly shows that a naive projection of the TPS ensemble will not be remotely close to the true free energy.
In Fig. 3a we plot, for TPS with Metropolis Monte Carlo dynamics, the individual crossing probabilities for the forward transition AB, reweighted according to WHAM. The final histogram is also shown as a solid black curve. The lower panel shows the reweighted crossing probabilities for the forward and backward transitions, both using the correct relative weight. From this it is directly possible to construct the RPE, which can be used to compute the free energy profiles.
In Fig. 4 we show the free energy profile for each of the three different dynamics cases. Also shown are the individual forward and backward contributions to the free energy. Note that both for Metropolis dynamics and for the higher-friction Langevin dynamics the agreement with the true free energy is excellent. For the lower-friction case the comparison is slightly less favorable, but still very reasonable. The discrepancy is most likely caused by some memory in the dynamics. The comparison between all three types of dynamics is shown in Fig. 4c. Again, while there is some discrepancy at the barrier flanks, the agreement in the barrier region is excellent.
VIE-TPS assumes that the distribution of shooting points along the interfaces is identical, or at least close, to the correct distribution in the corresponding TIS ensemble. For diffusive dynamics this assumption is reasonable, because paths decorrelate fast and sample the (local) equilibrium distribution. For ballistic dynamics decorrelation is slower and the shooting point distribution from the reactive path ensemble is not necessarily identical to that of the TIS ensemble. In addition, other channels and dead ends along the interfaces that are not sampled in the reactive AB path ensemble will be present in the TIS ensemble, and contribute to the correct free energy projection. This will result in an overestimation of the free energy in the minima, something that we indeed observe.
Finally we show that the obtained RPE can reconstruct the free energy in arbitrary dimensions. Since we have only a 2D potential, this is by necessity a reconstruction of the original 2D potential from the 1D-based RPE. To make this more interesting we slightly adjusted the potential to

V[x, y] = 0.0177778 (0.0625 x^4 + y^4) − 3 e^{−0.3(x−4)^2 − 0.01y^2} − 3 e^{−0.3(x+4)^2 − 0.01y^2} + e^{−3(x+1)^2 − 0.1(y−2)^2} + e^{−3(x−1)^2 − 0.1(y+2)^2}.    (27)
This potential, shown in Fig. 5a, again has two minima, but now the barrier region is convoluted in the y-direction. The 1D projection clearly does not contain this information. Yet, by projection of the RPE from a single TPS simulation, the entire landscape is reconstructed. Note that this reconstruction is only possible due to the RPE; with standard histogramming of the free energy this information is lost.
Having access to the RPE and using Eq. 23, we project the committor along the x and y dimensions for the potential of Eq. 26. Remarkably, the committor isolines twist at the barrier, as suggested by the underlying potential, and hint towards a non-linear reaction coordinate. Indeed, it is possible to use these surfaces to conduct a reaction coordinate analysis [27].

B. Alanine Dipeptide

Alanine dipeptide in water exhibits a conformational transition between states α and β on the timescale of hundreds of ps [45]. Yet the equilibration in the basins is on the order of a few ps, thus making the transition a rare event. The short transition time compared to today's computational capacities has made alanine dipeptide a toy biomolecular model for benchmarking enhanced sampling methods against brute force MD. We first benchmark VIE-TPS against a long brute force MD run by projecting the free energy as a function of the ψ angle (see Fig. 7a). The agreement is good in the barrier region and within 0.5 kT in the region −50° ≤ φ ≤ 80°. We attribute the discrepancy in the free energy closer to state β to the memory trajectories have when 1) the dynamics is not diffusive enough, and 2) the length of the transition paths is short. For alanine dipeptide the average path length is small (≈ 5 ps). This is the reason that this method should be used with strict state definitions. This discrepancy will be reduced for longer and more realistic transition times (as also shown in the next example). VIE-TPS can be used to reweight and project the free energy surface (FES) as a function of any order parameter. By projecting the RPE along φ and ψ, we compare the VIE-TPS and MD estimates of the FES (see Fig. 7b,c). As in the 1D projection, the FES is best estimated in the barrier region. Strikingly, VIE-TPS is able to resolve two transition state regions well: a higher one at −80° ≤ ψ ≤ −60°, 0° ≤ φ ≤ 30°, and a lower one at −150° ≤ ψ ≤ −125°, 0° ≤ φ ≤ 30°, as was also found in Ref. [58]. Moreover, the statistics and representation of the barrier region are much finer in VIE-TPS than in MD, which samples that region exponentially more rarely. Finally, using VIE-TPS and Eq. 23, one can reconstruct the committor surface along any arbitrary order parameter. We plot the committor surface along ψ (see Fig. 7d) and find that the isocommittor surface of 0.5 is located at the barrier region discussed earlier. We note that the committor surface estimated in this way is much less error prone than calculating the committor directly through the shooting points.
VIE-TPS can be used to directly calculate transition rates from a single TPS simulation and short MD simulations in states A and B, using Eq. 15 and Eq. 25. For the forward rate k_αβ, selecting λ_0 at ψ = −60°, λ_1 at ψ = −50° and λ_n at ψ = 150°, the estimated flux factor is 1.34 ps^−1 and the crossing probability term is 0.039 (see Fig. 8), thus giving a rate of 0.052 ps^−1, which differs by less than a factor of two from the corresponding rate of 0.0298 ps^−1 coming from brute force MD. On the other hand, for the backward rate k_βα, selecting λ_0 at ψ = 150°, λ_1 at ψ = 140° and λ_n at ψ = −60°, the estimated flux factor is 0.66 ps^−1 and the crossing probability term is 0.01 (see Fig. 8), thus giving a rate of 0.009 ps^−1, which is only about a factor of two different from the corresponding rate of 0.004 ps^−1 coming from brute force MD. These results are in fairly good agreement with Refs. [40,45]. With these rates at hand, the free energy difference between the stable states α and β, estimated as ∆G_αβ = −ln(k_βα/k_αβ) (in units of kT), is 2.04 kT from MD and 1.72 kT from VIE-TPS. This way of estimating the free energy difference between the stable states gives more accurate results than the RPE free energy estimate (see Fig. 7). However, we stress once more that the VIE-TPS method gives only approximate results.
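As a quick numerical check of Eq. 25 and of the quoted free energy difference, using only the numbers reported above:

import math

k_forward_vie = 1.34 * 0.039              # Eq. 25: flux factor times crossing probability, about 0.052 ps^-1
dG_md = -math.log(0.004 / 0.0298)         # about 2.0 kT from the brute force MD rates
dG_vie = -math.log(0.009 / 0.052)         # about 1.75 kT from the VIE-TPS rates (quoted as 1.72 kT)
print(k_forward_vie, dG_md, dG_vie)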
C. FF dipeptide dimerization
In the final illustrative example we focus on the dimerization of two phenylalanine dipeptides, as in Ref. [55], shown in Fig. 9. The hydrophobicity of these peptides causes their dimerization, while entropy stabilizes the monomer state. The relaxation time in the basins is on the order of several ns; however, the transition time is on the order of ps, making dimerization a rare transition. We benchmark VIE-TPS by comparing against the MD estimate of the FES as a function of d_min, and find excellent agreement between the two (see Fig. 10a). VIE-TPS is able to capture the details of the FES at the first and second hydration shell minima (0.5 nm and 0.8 nm). As for alanine dipeptide, there is a 0.5 kT difference close to the unbound stable state (distances greater than 0.9 nm). Note that the FES estimate from the brute force MD increases again after the minimum at 0.8 nm due to the finite size of the system. In reality, the FES as a function of the minimum distance in the unbound region should be a plateau (as estimated by VIE-TPS).
By using the RPE information we can reweight the FES to a different order parameter, such as the solvent accessible surface (see Fig. 10b). The agreement between the two ways of calculating the FES is excellent. We attribute the better agreement for this system, compared to alanine dipeptide, to the longer transition paths (≈ 400 ps) and the clearly diffusive dynamics of this system.
V. CONCLUSION
In this paper we have presented a way to extract (an approximation of) the reweighted path ensemble from a single standard (two-state) TPS simulation employing the uniform two-way shooting algorithm. This has the great advantage that an estimate for the kinetics, the free energy, and the committor landscape can be given directly. We showed that the method approximates the RPE well in the barrier region, but is less accurate at the flanks towards the stable states, especially for dynamics with a large ballistic component. Nevertheless, we believe this will be very useful for deterministic dynamics in which stochasticity plays a role, as is the case in most complex biomolecular transitions.

We note that the RPE can be used for a reaction coordinate analysis, e.g. using the likelihood methods of Peters and Trout [59], or more advanced machine learning techniques. Finally, the free energies and committor surfaces can be used in conjunction with the Bayesian TPT formulas of Hummer [60] in order to calculate rate coefficients in an alternative way. We expect that this methodology will soon become part of the standard tools in packages such as OPS. Our method can easily be extended to multiple-state TPS.
FIG. 1. Plot of the 2D potential defined in Eq. 26.

FIG. 2. Free energy along the x-axis estimated by (a) direct integration of the potential (blue) and (b) the negative logarithm of the configurations generated from TPS (red).

FIG. 3. (a) Crossing probability for AB paths. The solid black line is the WHAM result; all other curves come from the histograms for λ_sp. (b) Crossing probabilities from WHAM for the forward and backward histograms, normalised to their final value P(λ*).

FIG. 4. Forward (black dotted), backward (red dotted), overall TPS (green) and reference (blue dashed) free energies for (a) Monte Carlo, (b) Langevin high friction, and (c) Langevin low friction dynamics. Panel (c) also shows the free energy profiles of the Monte Carlo, Langevin high friction, and Langevin low friction dynamics together.

FIG. 5. Free energy surface of the potential of Eq. 27 constructed by (a) analytical integration and (b) the virtual interface exchange TPS scheme.

FIG. 6. Committor surface for the potential in Eq. 26.

FIG. 7. Free energy surface of the α to β transition of AD as a function of (a) the ψ angle, where blue and orange depict the VIE-TPS and MD predictions, respectively, and of φ and ψ coming from (b) the VIE-TPS RPE and (c) MD. (d) Committor surface projected on φ and ψ.

FIG. 9. Snapshot of a configuration of the FF dipeptide dimer in solution, coming from an association/dissociation transition path.

FIG. 10. Free energy surface as a function of (a) the minimum distance (d_min) between the two peptides and (b) the solvent accessible surface (SAS). Blue and orange depict the VIE-TPS and MD predictions, respectively.
The authors thank Georgios Boulougouris and Bernd Ensing for carefully commenting on the manuscript. We acknowledge support from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO) for the use of supercomputer facilities. Z.F.B. would like to acknowledge the Federation of European Biochemical Societies (FEBS) for financial support (LTF).
[1] D. Frenkel and B. Smit, Understanding Molecular Simulation, 2nd ed. (Academic Press, Inc., Orlando, FL, USA, 2001).
[2] B. Peters, Reaction Rate Theory and Rare Events (Elsevier Science, Amsterdam, 2017).
[3] G. M. Torrie and J. P. Valleau, Chem. Phys. Lett. 28, 578 (1974).
[4] E. Carter, G. Ciccotti, J. T. Hynes, and R. Kapral, Chem. Phys. Lett. 156, 472 (1989).
[5] T. Huber, A. Torda, and W. van Gunsteren, J. Comput. Aided Mol. Des. 8, 695 (1994).
[6] H. Grubmüller, Phys. Rev. E 52, 2893 (1995).
[7] A. F. Voter, J. Chem. Phys. 106, 4665 (1997).
[8] A. Laio and M. Parrinello, Proc. Natl. Acad. Sci. USA 99, 12562 (2002).
[9] E. Darve and A. Pohorille, J. Chem. Phys. 115, 9169 (2001).
[10] Y. Sugita and Y. Okamoto, Chem. Phys. Lett. 314, 141 (1999).
[11] E. Marinari and G. Parisi, Europhys. Lett. 19, 451 (1992).
[12] L. Zheng, M. Chen, and W. Yang, Proc. Natl. Acad. Sci. U.S.A. 105, 20227 (2008).
[13] Y. Q. Gao, J. Chem. Phys. 128, 064105 (2008).
[14] R. Allen, D. Frenkel, and P. ten Wolde, J. Chem. Phys. 124, 024102 (2006).
[15] F. Cerou, A. Guyader, T. Lelievre, and D. Pommier, J. Chem. Phys. 134, xx (2011).
[16] A. K. Faradjian and R. Elber, J. Chem. Phys. 120, 10880 (2004).
[17] D. Moroni, P. G. Bolhuis, and T. S. van Erp, J. Chem. Phys. 120, 4055 (2004).
[18] M. Villen-Altamirano and J. Villen-Altamirano, Eur. Trans. Telecom. 13, 373 (2002).
[19] J. T. Berryman and T. Schilling, J. Chem. Phys. 133, 244101 (2010).
[20] A. Dickson, A. Warmflash, and A. R. Dinner, J. Chem. Phys. 131, 154104 (2009).
[21] G. Huber and S. Kim, Biophys. J. 70, 97 (1996).
[22] Y. Zhang and P. S. Cremer, Annu. Rev. Phys. Chem. 61, 63 (2010).
[23] C. Dellago, P. G. Bolhuis, F. S. Csajka, and D. Chandler, J. Chem. Phys. 108, 1964 (1998).
[24] P. G. Bolhuis, D. Chandler, C. Dellago, and P. L. Geissler, Annu. Rev. Phys. Chem. 53, 291 (2002).
[25] C. Dellago, P. G. Bolhuis, and P. L. Geissler, Adv. Chem. Phys. 123, 1 (2002).
[26] C. Dellago and P. G. Bolhuis, Adv. Polym. Sci. 221, 167 (2009).
[27] W. Lechner, J. Rogal, J. Juraszek, B. Ensing, and P. G. Bolhuis, J. Chem. Phys. 133, 174110 (2010).
[28] P. G. Bolhuis and W. Lechner, J. Stat. Phys. 145, 841 (2011).
[29] J. Vreede, J. Juraszek, and P. G. Bolhuis, Proc. Natl. Acad. Sci. U.S.A. 107, 2397 (2010).
[30] M. Schor, J. Vreede, and P. G. Bolhuis, Biophys. J. 103, 1296 (2012).
[31] Z. F. Brotzakis, M. Gehre, I. K. Voets, and P. G. Bolhuis, Phys. Chem. Chem. Phys. 19, 19032 (2017).
[32] Z. F. Brotzakis and P. G. Bolhuis, J. Phys. Chem. B (2019), doi:10.1021/acs.jpcb.8b10005.
[33] P. L. Geissler, Science 291, 2121 (2001).
[34] D. Moroni, P. R. ten Wolde, and P. G. Bolhuis, Phys. Rev. Lett. 94, 235703 (2005).
[35] W. Lechner, C. Dellago, and P. G. Bolhuis, Phys. Rev. Lett. 106, 085701 (2011).
[36] T. S. van Erp, D. Moroni, and P. G. Bolhuis, J. Chem. Phys. 118, 7762 (2003).
[37] R. Cabriolu, K. M. S. Refsnes, P. G. Bolhuis, and T. S. van Erp, J. Chem. Phys. 147, 152722 (2017).
[38] J. Rogal, W. Lechner, J. Juraszek, B. Ensing, and P. G. Bolhuis, J. Chem. Phys. 133, 174109 (2010).
[39] A. Lervik, E. Riccardi, and T. S. van Erp, J. Comput. Chem. 38, 2439 (2017).
[40] D. W. Swenson, J. H. Prinz, F. Noe, J. D. Chodera, and P. G. Bolhuis, J. Chem. Theory Comput. 15, 813 (2019).
[41] D. W. Swenson, J. H. Prinz, F. Noe, J. D. Chodera, and P. G. Bolhuis, J. Chem. Theory Comput. 15, 837 (2019).
[42] D. Frenkel, Proc. Natl. Acad. Sci. U.S.A. 101, 17571 (2004).
[43] I. Coluzza and D. Frenkel, ChemPhysChem 6, 1779 (2005).
[44] P. G. Bolhuis, J. Chem. Phys. 129 (2008), doi:10.1063/1.2976011.
[45] W. Du and P. G. Bolhuis, J. Chem. Phys. 139, 044105 (2013).
[46] G. C. Boulougouris and D. Frenkel, J. Chem. Theory Comput. 1, 389 (2005).
[47] A. M. Ferrenberg and R. H. Swendsen, Phys. Rev. Lett. 63, 1195 (1989).
[48] S. Kumar, J. M. Rosenberg, D. Bouzida, R. H. Swendsen, and P. A. Kollman, J. Comput. Chem. 13, 1011 (1992).
[49] M. R. Shirts and J. D. Chodera, J. Chem. Phys. 129, 124105 (2008).
[50] S. Pronk, S. Páll, R. Schulz, P. Larsson, P. Bjelkmar, R. Apostolov, M. R. Shirts, J. C. Smith, P. M. Kasson, D. van der Spoel, B. Hess, and E. Lindahl, Bioinformatics 29, 845 (2013).
[51] C. I. Bayly, K. M. Merz, D. M. Ferguson, W. D. Cornell, T. Fox, J. W. Caldwell, P. A. Kollman, P. Cieplak, I. R. Gould, and D. C. Spellmeyer, J. Am. Chem. Soc. 117, 5179 (1995).
[52] W. L. Jorgensen, J. Chandrasekhar, J. D. Madura, R. W. Impey, and M. L. Klein, J. Chem. Phys. 79, 926 (1983).
[53] G. Bussi, D. Donadio, and M. Parrinello, J. Chem. Phys. 126, 014101 (2007).
[54] M. Parrinello and A. Rahman, J. Appl. Phys. 52, 7182 (1981).
[55] Z. F. Brotzakis and P. G. Bolhuis, J. Chem. Phys. 145, 164112 (2016).
[56] K. Lindorff-Larsen, S. Piana, K. Palmo, P. Maragakis, J. L. Klepeis, R. O. Dror, and D. E. Shaw, Proteins 78, 1950 (2010).
[57] J. Colletier, Proc. Natl. Acad. Sci. U.S.A. 108, 16938 (2011).
[58] P. G. Bolhuis, C. Dellago, and D. Chandler, Proc. Natl. Acad. Sci. U.S.A. 97, 5877 (2000).
[59] B. Peters and B. L. Trout, J. Chem. Phys. 125, 054108 (2006).
[60] G. Hummer, J. Chem. Phys. 120, 516 (2004).
FIG. 8. Crossing probability from WHAM, as a function of the order parameter ψ, for paths coming from states α (blue) and β (green), respectively.
| [] |
[
"A Discriminative CNN Video Representation for Event Detection",
"A Discriminative CNN Video Representation for Event Detection"
] | [
"Zhongwen Xu \nITEE\nThe University of Queensland\nAustralia\n",
"Yi Yang [email protected]@cs.cmu.edu \nITEE\nThe University of Queensland\nAustralia\n",
"Alexander G Hauptmann \nSCS\nCarnegie Mellon University\nUSA\n"
] | [
"ITEE\nThe University of Queensland\nAustralia",
"ITEE\nThe University of Queensland\nAustralia",
"SCS\nCarnegie Mellon University\nUSA"
] | [] | In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkit. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6% to 36.8% for the TRECVID MEDTest 14 dataset and from 34.0% to 44.6% for the TRECVID MEDTest 13 dataset. | 10.1109/cvpr.2015.7298789 | [
"https://arxiv.org/pdf/1411.4006v1.pdf"
] | 8,471,433 | 1411.4006 | aa319f599504a444fef72d47a221566af83dc6e7 |
A Discriminative CNN Video Representation for Event Detection
Zhongwen Xu
ITEE
The University of Queensland
Australia
Yi Yang [email protected]@cs.cmu.edu
ITEE
The University of Queensland
Australia
Alexander G Hauptmann
SCS
Carnegie Mellon University
USA
A Discriminative CNN Video Representation for Event Detection
In this paper, we propose a discriminative video representation for event detection over a large scale video dataset when only limited hardware resources are available. The focus of this paper is to effectively leverage deep Convolutional Neural Networks (CNNs) to advance event detection, where only frame level static descriptors can be extracted by the existing CNN toolkit. This paper makes two contributions to the inference of CNN video representation. First, while average pooling and max pooling have long been the standard approaches to aggregating frame level static features, we show that performance can be significantly improved by taking advantage of an appropriate encoding method. Second, we propose using a set of latent concept descriptors as the frame descriptor, which enriches visual information while keeping it computationally affordable. The integration of the two contributions results in a new state-of-the-art performance in event detection over the largest video datasets. Compared to improved Dense Trajectories, which has been recognized as the best video representation for event detection, our new representation improves the Mean Average Precision (mAP) from 27.6% to 36.8% for the TRECVID MEDTest 14 dataset and from 34.0% to 44.6% for the TRECVID MEDTest 13 dataset.
Introduction and Related Work
Complex event detection [1,2], which targets the detection of such events as "renovating a home" in a large video collection crawled from Youtube, has recently attracted a lot of research attention in computer vision. Compared to concept analysis in videos, e.g., action recognition, event detection is more difficult primarily because an event is more complex and thus has greater intra-class variations. For example, a "marriage proposal" event may take place indoors or outdoors, and may consist of multiple concepts such as ring (object), kneeling down (action) and kissing (action).
Recent research efforts have shown that combining multiple features, including static appearance features [9,25,40], motion features [22,7,43,44,32] and acoustic features [27], yields good performance in event detection, as evidenced by the reports of the top ranked teams in the TRECVID Multimedia Event Detection (MED) competition [3,21,28,29] and research papers [30,39,45] that have tackled this problem. By utilizing additional data to assist complex event detection, researchers propose the use of "video attributes" derived from other sources to facilitate event detection [26], or to utilize related exemplars when the training exemplars are very few [46]. As we focus on improving video representation in this paper, this new method can be readily fed into those frameworks to further improve their performance.
Dense Trajectories and its enhanced version improved Dense Trajectories (IDT) [44] have dominated complex event detection in recent years due to their superior performance over other features such as the motion feature STIP [22] and the static appearance feature Dense SIFT [3]. Despite good performance, heavy computation costs greatly restrict the usage of the improved Dense Trajectories on a large scale. In the TRECVID MED competition 2014 [2], the National Institute of Standards and Technology (NIST) introduced a very large video collection, containing 200,000 videos of 8,000 hours in duration. Paralleling 1,000 cores, it takes about one week to extract the improved Dense Trajectories for the 200,000 videos in the TRECVID MEDEval 14 collection. Even after the spatial re-sizing and temporal down-sampling processing, it still takes 500 cores one week to extract the features [3]. As a result of the unaffordable computation cost, it would be extremely difficult for a relatively smaller research group with limited computational resources to process large scale MED datasets. It becomes important to propose an efficient representation for complex event detection with only affordable computational resources, e.g., a single machine, while at the same time attempting to achieve better performance.
One instinctive idea would be to utilize the deep learning approach, especially Convolutional Neural Networks (CNNs), given their overwhelming accuracy in image analysis and fast processing speed, which is achieved by leveraging the massive parallel processing power of GPUs [20]. However, it has been reported that the event detection performance of CNN based video representation is worse than the improved Dense Trajectories last year [21,3], as shown in Table 1. A few technical problems remain unsolved. Firstly, CNN requires a large amount of labeled video data to train good models from scratch. The large scale TRECVID MED datasets (i.e., MEDTest 13 [1] and MEDTest 14 [2]) only have 100 positive examples per event, with many null videos which are irrelevant. The number of labeled videos is smaller than that of the video collection for sports videos [19]. In addition, as indicated in [46], event videos are quite different from action videos, so it makes little sense to use the action dataset to train models for event detection.

Table 1. Performance comparison (mean Average Precision in percentage). Lan et al. [21] is the only attempt to apply CNN features in TRECVID MED 2013. CNN avg are our results from the average pooling representation of frame level CNN descriptors.

                            MEDTest 13   MEDTest 14
  IDT [44,3]                   34.0         27.6
  CNN in Lan et al. [21]       29.0         N.A.
  CNN avg                      32.7         24.8
Secondly, when dealing with a domain specific task with a small number of training data, fine-tuning [11] is an effective technique for adapting the ImageNet pre-trained models for new tasks. However, the video level event labels are rather coarse at the frame level, i.e., not all frames necessarily contain the semantic information of the event. If we use the coarse video level label for each frame, performance is barely improved; this was verified by our preliminary experiment. Lastly, given the frame level CNN descriptors, we need to generate a discriminative video level representation. Average pooling is the standard approach [31,3] for static local features, as well as for the CNN descriptors [21]. Table 1 shows the performance comparisons of the improved Dense Trajectories and CNN average pooling representation. We provide the performance of Lan et al. [21] for reference as well. We can see that the performance of CNN average pooling representation cannot get better than the hand-crafted feature improved Dense Trajectories, which is fairly different from the observations in other vision tasks [11,6,12].
The contributions of this paper are threefold. First, this is the first work to leverage the encoding techniques to generate video representation based on CNN descriptors. Second, we propose to use a set of latent concept descriptors as frame descriptors, which further diversifies the output with aggregation on multiple spatial locations at a deeper stage of the network. The approach forwards video frames only once through the deep CNNs for descriptor extraction.
With these two contributions, the proposed video CNN representation achieves more than 30% relative improvement over the state-of-the-art video representation on the large scale MED dataset, and this can be conducted on a single machine in two days with a GPU card installed. In addition, we propose to use Product Quantization [14] based on CNN video representation to speed up the execution (event search) time. According to our extensive experiments, we show that the approach significantly reduces the I/O cost, thereby making event prediction much faster while retaining almost the same level of precision.
Preliminaries
Unless otherwise specified, this work is based on the network architecture released by [36], i.e., the configuration with 16 weight layers in the VGG ILSVRC 2014 classification task winning solutions. The first 13 weight layers are convolutional layers, five of which are followed by a max-pooling layer. The last three weight layers are fully-connected layers. In the rest of this paper, we follow the notations in [6,11]: pool 5 refers to the activation of the last pooling layer, fc 6 and fc 7 refer to the activation of the first and second fully-connected layers, respectively. Though the structure in [36] is much deeper than the classic CNN structure in [20,6,11], the subscripts of the pool 5 , fc 6 and fc 7 notations still correspond if we regard the convolution layers between the max-pooling layers as a "compositional convolutional layer" [36]. We utilize the activations before Rectified Linear Units (i.e., fc 6 and fc 7 ) and after them (i.e., fc 6 relu and fc 7 relu), since we observe significant differences in performance between these two variants.
Video CNN Representation
We begin by extracting the frame level CNN descriptors using the Caffe toolkit [17] with the model shared by [36]. We then need to generate video level vector representations on top of the frame level CNN descriptors.
Average Pooling on CNN Descriptors
As described in state-of-the-art complex event detection systems [3,31], the standard way to achieve image-based video representation in which local descriptor extraction relies on individual frames alone, is as follows: (1) Obtain the descriptors for individual frames; (2) Apply normalization on frame descriptors; (3) Average pooling on frame descriptors to obtain the video representation, i.e.,
$x_{\mathrm{video}} = \frac{1}{N}\sum_{i=1}^{N} x_i$, where $x_i$ is the frame-level descriptor and $N$ is the total number of frames extracted from the video; (4) Re-normalization on video representation.
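As a concrete illustration of steps (1)-(4), the following minimal NumPy sketch computes an average-pooled video representation from per-frame CNN descriptors; the function names and the random stand-in data are illustrative and not taken from the paper's code.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # L2-normalize along the given axis.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def average_pool_video(frame_descriptors):
    """frame_descriptors: (N, D) array of per-frame CNN descriptors (e.g., fc7)."""
    x = l2_normalize(frame_descriptors)   # (2) normalization on frame descriptors
    video = x.mean(axis=0)                # (3) average pooling over the N frames
    return l2_normalize(video)            # (4) re-normalization of the video vector

# Usage with random stand-in data: 100 frames of 4,096-D descriptors.
frames = np.random.rand(100, 4096).astype(np.float32)
video_repr = average_pool_video(frames)   # shape (4096,)
```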
Max pooling on frames to generate video representation is an alternative method but it is not typical in event detection. We observe similar performance with average pooling, so we omit this method.
Video Pooling on CNN descriptors
Video pooling computes video representation over the whole video by pooling all the descriptors from all the frames in a video. The Fisher vector [34,35] and Vector of Locally Aggregated Descriptors (VLAD) [15,16] have been shown to have great advantages over Bag-of-Words (BoWs) [37] in local descriptor encoding methods. The Fisher vector and VLAD have been proposed for image classification and image retrieval to encode image local descriptors such as dense SIFT and Histogram of Oriented Gradients (HOG). Attempts have also been made to apply Fisher vector and VLAD on local motion descriptors such as Histogram of Optical Flow (HOF) and Motion Boundary Histogram (MBH) to capture the motion information in videos. To our knowledge, this is the first work on the video encoding of CNN descriptors and we broaden the encoding methods from local descriptors to CNN descriptors in video analysis.
Fisher Vector Encoding
In Fisher vector encoding [34,35], a Gaussian Mixture Model (GMM) with $K$ components can be denoted as $\Theta = \{(\mu_k, \Sigma_k, \pi_k),\ k = 1, 2, \ldots, K\}$, where $\mu_k$, $\Sigma_k$, $\pi_k$ are the mean, variance and prior parameters of the $k$-th component learned from the training CNN descriptors at the frame level, respectively. Given $X = (x_1, \ldots, x_N)$ of CNN descriptors extracted from a video, we have the mean and covariance deviation vectors for the $k$-th component as:

$$u_k = \frac{1}{N\sqrt{\pi_k}} \sum_{i=1}^{N} q_{ik}\,\frac{x_i - \mu_k}{\sigma_k}, \qquad v_k = \frac{1}{N\sqrt{2\pi_k}} \sum_{i=1}^{N} q_{ik}\left[\left(\frac{x_i - \mu_k}{\sigma_k}\right)^2 - 1\right], \qquad (1)$$

where $q_{ik}$ is the posterior probability. By concatenation of the $u_k$ and $v_k$ of all the $K$ components, we form the Fisher vector for the video with size $2DK$, where $D$ is the dimension of the CNN descriptor $x_i$ after PCA pre-processing. PCA pre-processing is necessary for a better fit on the diagonal covariance matrix assumption [35]. Power normalization, often the Signed Square Root (SSR) $z = \mathrm{sign}(z)\sqrt{|z|}$, and $\ell_2$ normalization are then applied to the Fisher vectors [34,35].
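For concreteness, the following NumPy sketch implements Eq. (1) given a diagonal GMM that is assumed to be already trained (means `mu` of shape (K, D), variances `sigma2` of shape (K, D), priors `pi` of shape (K,)); the posterior computation is written out explicitly, and the PCA pre-processing is assumed to have been applied to `x` beforehand.

```python
import numpy as np

def gmm_posteriors(x, mu, sigma2, pi):
    """Posterior q_{ik} of each diagonal-covariance GMM component for each descriptor.
    x: (N, D); mu, sigma2: (K, D); pi: (K,). Returns (N, K)."""
    logp = (np.log(pi)[None, :]
            - 0.5 * np.sum(np.log(2.0 * np.pi * sigma2), axis=1)[None, :]
            - 0.5 * (((x[:, None, :] - mu[None, :, :]) ** 2) / sigma2[None, :, :]).sum(-1))
    logp -= logp.max(axis=1, keepdims=True)          # for numerical stability
    q = np.exp(logp)
    return q / q.sum(axis=1, keepdims=True)

def fisher_vector(x, mu, sigma2, pi):
    """Fisher vector of Eq. (1): concatenation of u_k and v_k, then SSR and l2 normalization."""
    N = x.shape[0]
    q = gmm_posteriors(x, mu, sigma2, pi)             # (N, K)
    diff = (x[:, None, :] - mu[None, :, :]) / np.sqrt(sigma2)[None, :, :]   # (N, K, D)
    u = (q[:, :, None] * diff).sum(0) / (N * np.sqrt(pi)[:, None])          # (K, D)
    v = (q[:, :, None] * (diff ** 2 - 1.0)).sum(0) / (N * np.sqrt(2.0 * pi)[:, None])
    fv = np.concatenate([u.ravel(), v.ravel()])       # 2*D*K dimensions
    fv = np.sign(fv) * np.sqrt(np.abs(fv))            # power (SSR) normalization
    return fv / (np.linalg.norm(fv) + 1e-12)          # l2 normalization
```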
VLAD Encoding
VLAD encoding [15,16] can be regarded as a simplified version of Fisher vector encoding. With $K$ coarse centers $\{c_1, c_2, \ldots, c_K\}$ generated by K-means, we can obtain the difference vector with respect to center $c_k$ by:

$$u_k = \sum_{i:\,\mathrm{NN}(x_i) = c_k} (x_i - c_k), \qquad (2)$$

where $\mathrm{NN}(x_i)$ indicates $x_i$'s nearest neighbor among the $K$ coarse centers. The VLAD encoding vector with size $DK$ is obtained by concatenating $u_k$ over all the $K$ centers. Another variant of VLAD called VLAD-k, which extends the nearest center to the k-nearest centers, has shown good performance in action recognition [18,33]. Unless otherwise specified, we utilize VLAD-k with k = 5 by default. In addition to the power and $\ell_2$ normalization, we apply intra-normalization [4] to VLAD.
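The sketch below shows VLAD-k encoding with intra-normalization, SSR and ℓ2 normalization in plain NumPy; the K-means centers are assumed to be trained beforehand, and the double loop is kept for clarity rather than speed.

```python
import numpy as np

def vlad_k(x, centers, k=5):
    """x: (N, D) descriptors of one video; centers: (K, D) K-means centers."""
    K, D = centers.shape
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (N, K) squared distances
    knn = np.argsort(d2, axis=1)[:, :k]                          # k nearest centers per descriptor
    v = np.zeros((K, D), dtype=np.float64)
    for i in range(x.shape[0]):
        for c in knn[i]:
            v[c] += x[i] - centers[c]                            # accumulate residuals (Eq. (2))
    v /= (np.linalg.norm(v, axis=1, keepdims=True) + 1e-12)      # intra-normalization per center
    v = v.ravel()                                                # D*K-dimensional vector
    v = np.sign(v) * np.sqrt(np.abs(v))                          # power (SSR) normalization
    return v / (np.linalg.norm(v) + 1e-12)                       # global l2 normalization
```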
Quantitative Analysis
Given the above three approaches, we need to find out which one is the most appropriate for the CNN descriptors. To this end, we conduct an analytic experiment on the MEDTest 14 training set [2] to study the discriminative ability of three types of video representations, i.e., average pooling, video pooling with Fisher vector, and video pooling with VLAD on the CNN descriptors. Specifically, we calculate the cosine similarity within the positive exemplars among all the events (denoted as pos-pos), and the cosine similarity between positive exemplars and negative exemplars (denoted as pos-neg). The results are shown in Figure 1. With a good representation, the data points of positive and negative exemplars should be far away from each other, i.e., the cosine similarity of "pos-neg" should be close to zero. In addition, there should be a clear difference between the distributions of "pos-pos" and "pos-neg".

Figure 2. Illustration of the latent concept descriptors encoding procedure. We adopt M filters in the last convolutional layer as M latent concept classifiers. Before the last convolutional layer, M filters (e.g., a cuboid of size 3 × 3 × 512) produce the prediction outputs at every convolution location, followed by the max-pooling operations. Then, we get the responses of windows of different sizes and strides (in this example the output size is 2 × 2) for each latent concept. Color strength corresponds to the strength of response of each filter. Finally, we accumulate the responses for the M filters at the same location into the latent concept descriptors. Each dimension corresponds to one latent concept. After obtaining all latent concept descriptors of all frames, we then apply encoding methods to get the final video representation. This figure is best viewed in color.
Average pooling: In Figure 1, we observe that the "pos-neg" cosine similarity distribution is far from zero, which strongly indicates that a large portion of the positive and negative exemplar pairs are similar to each other. In addition, the intersection of areas under the two lines spans over a large range of [0.2, 0.8]. Both observations imply that average pooling may not be the best choice.
Fisher vector: Although the "pos-neg" similarity distribution is fairly close to zero, a large proportion of the "pos-pos" pairs also fall into the same range. No obvious difference between the distributions of "pos-pos" and "pos-neg" can be observed.
VLAD: The distribution of the "pos-neg" pairs is much closer to zero than average pooling while a relatively small proportion of the "pos-pos" similarity is close to the peak of the "pos-neg" similarity.
From the above analytic study, we can see that VLAD is the most fit for the CNN descriptors because the VLAD representation has the best discriminative ability, which is also consistent with the experimental results in Section 5.1.
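The analytic experiment itself only needs pairwise cosine similarities; the following hedged sketch reproduces the "pos-pos" versus "pos-neg" statistics for any video-level representation (the array shapes are assumptions, not the paper's exact data layout).

```python
import numpy as np

def cosine_similarity_matrix(A, B, eps=1e-12):
    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + eps)
    B = B / (np.linalg.norm(B, axis=1, keepdims=True) + eps)
    return A @ B.T

def pos_pos_and_pos_neg(pos, neg):
    """pos: (P, D) positive-exemplar representations; neg: (Q, D) negatives.
    Returns the two sets of similarities whose distributions are compared in Figure 1."""
    pp = cosine_similarity_matrix(pos, pos)
    pos_pos = pp[np.triu_indices_from(pp, k=1)]          # exclude self-similarity
    pos_neg = cosine_similarity_matrix(pos, neg).ravel()
    return pos_pos, pos_neg
```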
CNN Latent Concept Descriptors
Compared to the fully-connected layers, pool 5 contains spatial information. However, if we follow the standard way and flatten pool 5 into a vector, the feature dimension will be very high, which will induce heavy computational cost. Specifically, the feature dimension of pool 5 is a × a × M, where a is the size of filtered images of the last pooling layer and M is the number of convolutional filters in the last convolutional layer (in our case, a = 7 and M = 512).
In the VGG network [36], pool 5 features are vectors of 25,088-D while the fc 6 and fc 7 features have only 4096-D. As a result, researchers tend to ignore the general features extracted from pool 5 [6,12]. The problem is even more severe in the video pooling scheme because the frame descriptors with high dimensions would lead to instability problems [10].
Note that the convolutional filters can be regarded as generalized linear classifiers on the underlying data patches, and each convolutional filter corresponds to a latent concept [24]. We propose to formulate the general features from pool 5 as the vectors of latent concept descriptors, in which each dimension of the latent concept descriptors represents the response of the specific latent concept. Each filter in the last convolutional layer is independent from other filters. The response of the filter is the prediction of the linear classifier on the convolutional location for the corresponding latent concept. In that way, the pool 5 layer of size a × a × M can be converted into $a^2$ latent concept descriptors with M dimensions. Each latent concept descriptor represents the responses from the M filters for a specific pooling location. Once we obtain the latent concept descriptors for all the frames in a video, we then apply an encoding method to generate the video representation. In this case, each frame contains $a^2$ descriptors instead of one descriptor for the frame, as illustrated in Figure 2.
In [13], He et al. claim that the aggregation at a deeper layer is more compatible with the hierarchical information processing in our brains than cropping or wrapping the original inputs, and they propose to use a Spatial Pyramid Pooling (SPP) layer for object classification and detection, which not only achieves better performance but also relaxes the constraint that the input must be fixed-size. Different from [13], we do not train the network with the SPP layer from scratch, because it takes much longer time, especially for a very deep neural network. Instead, at the last pooling layer, we adopt multiple windows with different sizes and strides without retraining the CNNs. In that way, visual information is enriched while only marginal computation cost is added, as we forward frames through the networks only once to extract the latent concept descriptors.
After extracting the CNN latent concept descriptors for all spatial locations of each frame in a video, we then apply video pooling to all the latent concept descriptors of that video. As in [13], we apply four different CNN max-pooling operations and obtain (6 × 6), (3 × 3), (2 × 2) and (1 × 1) outputs for each independent convolutional filter, a total of 50 spatial locations for a single frame. The dimension of the latent concept descriptors (512-D) is much lower than that of the descriptors from the fully-connected layers (4,096-D), while the visual information is enriched via multiple spatial locations on the filtered images.
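A minimal sketch of the latent concept descriptor extraction is given below. It reshapes a pool5 activation map into a² M-dimensional descriptors and emulates the SPP-style multi-window max pooling on the last convolutional response with a simple grid partition; the exact window sizes and strides of [13] are not reproduced, so the partitioning logic should be read as an assumption.

```python
import numpy as np

def latent_concept_descriptors(pool5):
    """pool5: (a, a, M) activation of the last pooling layer for one frame.
    Returns (a*a, M): one M-dimensional latent concept descriptor per spatial location."""
    a, _, M = pool5.shape
    return pool5.reshape(a * a, M)

def spp_latent_concept_descriptors(conv5, levels=(6, 3, 2, 1)):
    """conv5: (h, w, M) response of the last convolutional layer for one frame.
    Max-pools it into an n x n grid for every pyramid level and stacks the resulting
    M-dimensional descriptors; levels (6, 3, 2, 1) give 36 + 9 + 4 + 1 = 50 locations."""
    h, w, M = conv5.shape
    descs = []
    for n in levels:
        hs = np.linspace(0, h, n + 1, dtype=int)
        ws = np.linspace(0, w, n + 1, dtype=int)
        for i in range(n):
            for j in range(n):
                cell = conv5[hs[i]:max(hs[i + 1], hs[i] + 1),
                             ws[j]:max(ws[j + 1], ws[j] + 1), :]
                descs.append(cell.max(axis=(0, 1)))   # max response of each filter (latent concept)
    return np.stack(descs)                            # (50, M) for the default levels
```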
Representation Compression
For the engineering aspect of a fast event search [2] on a large video collection, we can utilize techniques such as Product Quantization (PQ) [14] to compress the Fisher vector or VLAD representation. With PQ compression, the storage space in disk and memory can be reduced by more than an order of magnitude, while the performance remains almost the same. The basic idea of PQ is to decompose the representation into sub-vectors of equal length B, and then within each sub-vector, K-means is applied to generate $2^m$ centers as representative points. All the sub-vectors are approximated by the nearest center and encoded into the index of the nearest center. In this way, B float numbers in the original representation become an $m$-bit code; thus, the compression ratio is $\frac{B \times 32}{m}$. For example, if we take m = 8 and B = 4, we can achieve a 16 times reduction in storage space.
Targeting prediction on the compressed data instead of on the original features, we can decompose the learned linear classifier $w$ into sub-vectors of the same length B. With look-up tables that store the dot-products between the $2^m$ centers of each sub-vector and the corresponding sub-vector of $w$, the prediction for each video is reduced to $D/B$ look-up operations and $D/B - 1$ addition operations, assuming $D$ is the feature dimension [35], which greatly accelerates prediction on a large number of videos.
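The sketch below illustrates both halves of this idea in NumPy: training per-sub-vector codebooks with a plain Lloyd-style K-means, encoding a representation into m-bit codes, and scoring a compressed video with a linear classifier through per-sub-vector look-up tables. It is a didactic sketch (m ≤ 8 and at least 2^m training vectors are assumed), not an optimized PQ implementation.

```python
import numpy as np

def pq_train(X, B=4, m=8, iters=20, seed=0):
    """Train 2**m centers for every length-B sub-vector. X: (n, D) with D % B == 0 and n >= 2**m."""
    rng = np.random.default_rng(seed)
    n, D = X.shape
    codebooks = []
    for s in range(D // B):
        sub = X[:, s * B:(s + 1) * B]
        C = sub[rng.choice(n, 2 ** m, replace=False)].copy()
        for _ in range(iters):                                   # plain Lloyd iterations
            assign = ((sub[:, None, :] - C[None]) ** 2).sum(-1).argmin(1)
            for c in range(2 ** m):
                pts = sub[assign == c]
                if len(pts):
                    C[c] = pts.mean(0)
        codebooks.append(C)
    return codebooks

def pq_encode(x, codebooks, B=4):
    """x: (D,) -> one code (index of the nearest center) per sub-vector; fits in uint8 for m <= 8."""
    return np.array([((x[s * B:(s + 1) * B] - C) ** 2).sum(-1).argmin()
                     for s, C in enumerate(codebooks)], dtype=np.uint8)

def pq_linear_score(codes, w, codebooks, B=4):
    """Score one compressed video with a linear classifier w using look-up tables:
    D/B table look-ups and D/B - 1 additions per video."""
    luts = [C @ w[s * B:(s + 1) * B] for s, C in enumerate(codebooks)]   # each of shape (2**m,)
    return sum(luts[s][c] for s, c in enumerate(codes))
```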
Experiment Settings
Datasets
In our experiments, we utilize the largest event detection datasets with labels 1 , namely TRECVID MEDTest 13 [1] and TRECVID MEDTest 14 [2]. They have been introduced by NIST for all participants in the TRECVID competition and research community to conduct experiments on. For both datasets, there are 20 complex events respectively, but with 10 events overlapping. MEDTest 13 contains events E006-E015 and E021-E030, while MEDTest 14 has events E021-E040. Event names include "Birthday party", "Bike trick", etc. Refer to [1,2] for the complete list of event names. In the training section, there are approximately 100 positive exemplars per event, and all events share negative exemplars with about 5,000 videos. The testing section has approximately 23,000 search videos. The total duration of videos in each collection is about 1,240 hours.
Features for Comparisons
As reported in [3] and compared with the features from other top performers [29,28,21] in the TRECVID MED 2013 competition, we can see that the improved Dense Trajectories has superb advantages over the original Dense Trajectories (used by all other teams except [3]), and is even better than approaches that combine many low-level visual features [29,28,21]. Improved Dense Trajectories extracts local descriptors such as trajectory, HOG, HOF, and MBH, and Fisher vector is then applied to encode the local descriptors into video representation. Following [44,3], we first reduce the dimension of each descriptor by a factor of 2 and then utilize 256 components to generate the Fisher vectors. We evaluate four types of descriptor in improved Dense Trajectories, and report the results of the best combination of descriptors and the two individual descriptors that have the best performance (HOG and MBH).
In addition, we report the results of some popular features used in the TRECVID competition for reference, such as STIP [22], MoSIFT [7] and CSIFT [40], though their performance is far weaker than improved Dense Trajectories.
Evaluation Details
In all the experiments, we apply linear Support Vector Machine (SVM) with LIBSVM toolkit [5]. We conduct extensive experiments on two standard training conditions: in 100Ex, 100 positive exemplars are given in each event and in 10Ex, 10 positive exemplars are given. In the 100Ex condition, we utilize 5-fold cross-validation to choose the parameter of regularization coefficient C in linear SVM. In the 10Ex condition, we follow [21] and set C in linear SVM to 1.
We sample every five frames in the videos and follow the pre-processing of [20,6] on CNN descriptor extraction. We extract the features from the center crop only. CNN descriptors are extracted using Caffe [17] with the best publicly available model [36], and we utilize vlfeat [41] to generate Fisher vector and VLAD representation.
Mean Average Precision (mAP) for binary classification is applied to evaluate the performance of event detection according to the NIST standard [1,2].
Experiment Results
Results for Video Pooling of CNN descriptors
In this section, we show the experiments on video pooling of fc 6 , fc 6 relu, fc 7 and fc 7 relu. Before aggregation, we first apply PCA with whitening on the $\ell_2$-normalized CNN descriptors. Unlike local descriptors such as HOG, MBH, which have dimensions less than 200-D, the CNN descriptors have much higher dimensions (4,096-D). We conduct experiments with different reduced dimensions, i.e., 128, 256, 512 and 1,024, and utilize the reduced dimensions that best balance performance and storage cost in corresponding features, i.e., 512-D for fc 6 and fc 6 relu and 256-D for fc 7 and fc 7 relu. We utilize 256 components for Fisher vectors and 256 centers for VLAD as common choices in [35,15]. We will study the impact of parameters in Section 5.3. PCA projections, components in GMM for Fisher vectors, and centers in K-means for VLAD are learned from approximately 256,000 sampled frames in the training set.
Since we observe similar patterns in MEDTest 13 and MEDTest 14 under both 100Ex and 10Ex, we take MEDTest 14 100Ex as an example to compare with different representations, namely average pooling, video pooling with Fisher vectors and video pooling with VLAD. From Table 2, we can see that both video pooling with Fisher vectors and VLAD demonstrate great advantages over the average pooling representation. On the video pooling of CNN descriptors, Fisher vector encoding does not exhibit better performance than VLAD. Similar observations have been expressed in [10]. We suspect that the distribution of CNN descriptors is quite different from the local descriptors, e.g., HOG, HOF. We will study the theoretical reasons for the poorer performance of Fisher vector than VLAD on CNN video pooling in future research. We compare the performance of VLAD encoded CNN descriptors with state-of-the-art feature improved Dense Trajectories (IDT) and average pooling on CNN descriptors in Figure 3. We also illustrate the performance of the two strongest descriptors inside IDT (HOG and MBH). We can see very clearly that VLAD encoded CNN features significantly outperform IDT and average pooling on CNN descriptors over all settings. For more references, we provide the performance of a number of widely used features [28,29,21] on MEDTest 14 for comparison. MoSIFT [7] with Fisher vector achieves mAP 18.1% on 100Ex and 5.3% on 10Ex; STIP [22] with Fisher vector achieves mAP 15.0% on 100Ex and 7.1% on 10Ex; CSIFT [40] with Fisher vector achieves mAP 14.7% on 100Ex and 5.3% on 10Ex. Note that with VLAD encoded CNN descriptors, we can achieve better performance with 10Ex than the relatively poorer features such as MoSIFT, STIP, and CSIFT with 100Ex!
Results for CNN Latent Concept Descriptors with Spatial Pyramid Pooling
We evaluate the performance of latent concept descriptors (LCD) of both the original CNN structure and the structure with the Spatial Pyramid Pooling (SPP) layer plugged in to validate the effectiveness of SPP. Before encoding the latent concept descriptors, we first apply PCA with whitening. Dimension reduction is conducted from 512-D to a range of dimensions such as 32-D, 64-D, 128-D, and 256-D, and we find that 256-D is the best choice. We observe a similar pattern with video pooling of fc layers indicating that Fisher vector is inferior to VLAD on video pooling. We omit the results for Fisher vector due to limited space.
We show the performance of our proposed latent concept descriptors (LCD) in Table 3 and Table 4. In both 100Ex and 10Ex over the two datasets, we can see clear gaps over the pool 5 features with average pooling, which demonstrates the advantages of our proposed novel utilization of pool 5 . With the SPP layer, VLAD encoded LCD (LCD VLAD + SPP) increases the performance further over the original structure (LCD VLAD ). The aggregation at a deeper stage to generate multiple levels of spatial information via multiple CNN max-pooling operations demonstrates advantages over the original CNN structure while adding only minimal computation cost. The SPP layer enables a single forward pass through the network, compared to the multiple passes of applying a spatial pyramid on the original input images.
Analysis of the Impact of Parameters
We take VLAD encoded fc 7 features under MEDTest 14 100Ex as an example to see the impact of parameters in the video pooling process.
Dimensions of PCA: The original dimension of fc 7 is quite high compared to local descriptors. It is essential to investigate the impact of the PCA dimension in the pre-processing stage, since it is critical to achieve a better trade-off between performance and storage cost. Table 5 shows that at dimensions of 256-D or more, performance remains similar, whereas encoding in 128-D damages the performance significantly.

Table 5. Impact of dimensions of CNN descriptors after PCA, with fixed K = 256 in VLAD.
Number of Centers in Encoding:
We explore various numbers of centers K in VLAD, and the results are shown in Table 6. With the increase of K, we can see that the discriminative ability of the generated features improves. However when K = 512, the generated vector may be too sparse, which is somewhat detrimental to performance.
VLAD-k: We experiment with the traditional VLAD as well, i.e., with the nearest center only instead of the k-nearest centers; mAP drops from 33.2% to 32.0%.

Table 6. Impact of the number of centers (K) in VLAD, with fixed PCA dimension of 256-D.
Power Normalization: We remove the SSR postprocessing and test the features on the VLAD encoded fc 7 . mAP drops from 33.2% to 27.0%, from which we can see the significant effect of SSR post-processing.
Intra-normalization:
We turn off the intra-normalization; mAP drops from 33.2% to 30.6%.

Results for Product Quantization Compression

We conduct experiments on VLAD encoded fc 7 to see how the performance changes with Product Quantization (PQ) compression. From the results in Table 7, we can see that PQ with B = 4 maintains the performance and even improves it slightly. When B = 8, performance drops slightly. If we compress with B = 4, we can store the VLAD encoded fc 7 features in 3.1 GB for the MEDEval 14 collection, which contains 200,000 videos of 8,000 hours' duration. With further compression by a lossless technique such as Blosc² [8], we can store the features of the whole collection in less than 1 GB, which can be read from a normal SSD disk in a few seconds. Without PQ compression, the storage size of the features would be 48.8 GB, which severely compromises the execution time due to the I/O cost. Utilization of compression techniques largely saves the I/O cost in the prediction procedure, while preserving the performance.

Table 7. Performance change analysis for VLAD encoded fc 7 with PQ compression. B is the length of the sub-vectors in PQ and m = 8.
                       original   B = 4          B = 8
  mAP                  33.2       33.5 (↑ 0.3)   33.0 (↓ 0.2)
  space reduction      -          16×            32×
In our speed test on the MEDEval 14 collection using the compressed data but not the original features, we can finish the prediction on 200,000 videos in 4.1 seconds per event using 20 threads on an Intel Xeon E5-2690v2 @ 3.00 GHz.
Results for Fusing Multiple Layers Extracted from the Same Model
We investigate average late fusion [38] to fuse the prediction results from different layers with PQ compression, i.e., VLAD encoded LCD with SPP, fc 6 and fc 7 . From Table 8 we can see that the simple fusion pushes the performance further beyond the single layers on MEDTest 13 and MEDTest 14, and achieves significant advantages over improved Dense Trajectories (IDT). Our proposed method pushes the state-of-the-art performance much further, achieving more than 30% relative improvement on 100Ex and more than 65% relative improvement on 10Ex over both challenging datasets.

Table 8. Performance comparison of all settings; the last column shows the relative improvement of our proposed representation over IDT.

Figure 4 and Figure 5 show the per-event mAP comparison of the 100Ex setting on MEDTest 13 and MEDTest 14. We provide results for average pooling on CNN descriptors with late fusion of three layers as well, denoted as CNN avg . Our proposed representation beats two other strong baselines in 15 out of 20 events in MEDTest 13 and 14 out of 20 events in MEDTest 14, respectively.

² Blosc can reduce the storage space by a factor of 4.
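Average late fusion of the per-layer classifier outputs can be sketched in a few lines; the min-max rescaling of each layer's decision scores before averaging is an assumption made here for comparability of score ranges, not a detail stated in the paper.

```python
import numpy as np

def average_late_fusion(score_lists):
    """score_lists: list of (num_videos,) arrays of per-layer SVM decision scores,
    e.g., from VLAD-encoded LCD+SPP, fc6 and fc7. Returns the fused scores."""
    fused = []
    for s in score_lists:
        s = np.asarray(s, dtype=np.float64)
        s = (s - s.min()) / (s.max() - s.min() + 1e-12)   # assumed per-layer rescaling
        fused.append(s)
    return np.mean(fused, axis=0)
```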
Comparison to the state-of-the-art Systems
We compare the MEDTest 13³ results with the top performers in the TRECVID MED 2013 competition [3,29,21]. The AXES team does not show their performance on MEDTest 13 [3]. Natarajan et al. [29] report mAP 38.5% on 100Ex, 17.9% on 10Ex from their whole visual system of combining all their low-level visual features. Lan et al. [21] report 39.3% mAP on 100Ex of their whole system including non-visual features, while they conducted 10Ex on their internal dataset. Our results achieve 44.6% mAP on 100Ex and 29.8% mAP on 10Ex, which significantly outperforms the top performers in the competition who combine more than 10 kinds of features with sophisticated schemes. To show that our representation is complementary to features from other modalities, we perform average late fusion of our proposed representation with IDT and MFCC, and generate a lightweight system with static, motion and acoustic features, which achieves 48.6% mAP on 100Ex, and 32.2% mAP on 10Ex. When the reports for TRECVID MED 2014 are available, we will also compare the MEDTest 14 performance with the top performers.

³ In [3,29,21], teams report performance on MEDEval 13 as well, while MEDEval 13 is a different collection used in the competition, where only NIST can evaluate the performance.
Conclusion
TRECVID Multimedia Event Detection (MED) has suffered from huge computation costs in feature extraction and classification processes. Using a Convolutional Neural Network (CNN) representation seems to be a good solution, but generating video representation from CNN descriptors has different characteristics from image representation. We are the first to leverage encoding techniques to generate video representation from CNN descriptors. In addition, we propose latent concept descriptors to generate CNN descriptors more properly. For fast event search, we utilize Product Quantization to compress the video representation and predict on the compressed data. Extensive experiments on the two largest event detection collections under different training conditions demonstrate the advantages of our proposed representation. We have achieved promising performance which is superior to the state-of-the-art systems which combine more than 10 features. The proposed representation is extendible and the performance can be further improved by better CNN models and/or appropriate fine-tuning techniques.
$$\mathrm{Dist}_{\mathrm{RBF}}(X_i, X_j) = \frac{1}{2}\sum_{d=1}^{D} (x_{id} - x_{jd})^2 \qquad (5)$$
Though we show great performance from this simple idea of utilizing non-linear classifiers on CNN descriptors after average pooling, the non-linearity makes it hard to apply to large-scale event detection such as the MEDEval 14 collection with 200,000 videos. We provide this appendix for improving the performance of average pooling on CNN descriptors at the scale of MEDTest 13 and MEDTest 14.
B. Experiment Results for Non-linear Classifiers
We conduct the experiments with the same settings as the main paper but with kernelized SVM using the LIBSVM toolkit [5]. The additional parameter σ for the kernel function computation is chosen by 5-fold cross-validation in 100Ex but fixed to 1 in 10Ex. Here we show the performance of event detectors trained from average pooled CNN descriptors from key frames on MEDTest 13 (Table 9 for 100Ex and Table 10 for 10Ex) and MEDTest 14 (Table 11 for 100Ex and Table 12 for 10Ex). We attach the performance of improved Dense Trajectories on each setting for comparison as well. We can see that non-linear classifiers effectively boost the performance: they create a clear gap ahead of the hand-crafted features, while keeping the low-dimensionality advantages as well.
Looking into the details of the performance, we can see (1) Both RBF and exponential-χ 2 non-linear classifiers are significantly better than the linear classifier. Exponential-χ 2 kernel is observably better than RBF kernel in all the layers from all the settings. By applying exponential-χ 2 SVM, the performance of CNN features is on average 5% absolute mAP higher than the state-of-the-art features improved Dense Trajectories. (2) In linear space, the performance of fully-connected layers with ReLU neurons significantly outperforms the one without ReLU neurons (except the MEDTest 14 10Ex case), which is consistent with the common choice of layers in previous papers [11]; (3) pool 5 with much higher dimensions do not show advantages over fc layers. Thus, we suggest that when applying event detection on the average pooled CNN descriptors, we should use features extracted from fc 6 relu and fc 7 relu, and then apply exponential-χ 2 kernel classifier to achieve good performance.
Though non-linear classifiers can achieve much better performance than linear classifiers for complex event detection, they cannot satisfy the efficiency requirements of large-scale applications. For non-linear classifiers, when a new test exemplar comes, we have to calculate the kernel matrix between the test exemplar and all the training exemplars before the classifier can be applied. In the linear classifier case, we only need to conduct a dot-product operation between the learned classifier and the test exemplar, which is thousands of times faster in our complex event detection case.
To tackle this problem, people have come up with approaches that approximate the kernel matrix, such as explicit feature mapping [42]. In our experiments, the explicit feature mapping approximation leads to about a 2% absolute mAP drop with three times the dimension of the original features, which is consistent with the performance drop of the same approach in other papers [23]. The performance drop is mainly due to the fact that explicit feature mapping approximates the χ² kernel, while the χ² kernel has been shown to be inferior to exponential-χ² kernels in various vision tasks [23].
Figure 1. Probability distribution of the cosine similarity between positive-positive (blue and plain) and positive-negative (red and dashed) videos using fc7 features, for average pooling (top), encoding with VLAD using 256 centers (middle), and encoding with the Fisher vector using a 256-component GMM (bottom). As the range of probability of Fisher vectors is very different from average pooling and VLAD, we only use consistent axes for average pooling and VLAD. This figure is best viewed in color.
Figure 3. Performance comparisons on MEDTest 13 and MEDTest 14, both 100Ex and 10Ex. This figure is best viewed in color.
Figure 4. MEDTest 13 100Ex per-event performance comparison (in mAP percentage). This figure is best viewed in color.
Figure 5. MEDTest 14 100Ex per-event performance comparison (in mAP percentage). This figure is best viewed in color.
Table 3. Performance comparisons for pool 5 on MEDTest 13. LCD VLAD is VLAD encoded LCD from the original CNN structure, while LCD VLAD + SPP indicates VLAD encoded LCD with the SPP layer plugged in.
                       100Ex   10Ex
  Average pooling      31.2    18.8
  LCD VLAD             38.2    25.0
  LCD VLAD + SPP       40.3    25.6

Table 4. Performance comparisons for pool 5 on MEDTest 14. Notations are the same as Table 3.
                       100Ex   10Ex
  Average pooling      24.6    15.3
  LCD VLAD             33.9    22.8
  LCD VLAD + SPP       35.7    23.2
Table 9. Performance comparison (mAP, in percentage) with different kernels in MEDTest 13 100Ex; IDT is with 34.0 mAP.
            pool 5   fc6    fc6 relu   fc7    fc7 relu
  linear    31.2     27.8   32.7       26.4   32.0
  RBF       34.4     36.6   37.9       34.9   37.2
  exp χ²    36.1     38.3   38.9       38.1   39.2

Table 10. Performance comparison (mAP, in percentage) with different kernels in MEDTest 13 10Ex; IDT is with 18.0 mAP.
            pool 5   fc6    fc6 relu   fc7    fc7 relu
  linear    18.8     16.0   20.7       17.3   19.2
  RBF       20.7     20.2   24.0       20.7   22.7
  exp χ²    22.2     23.8   24.5       22.8   24.3

Table 11. Performance comparison (mAP, in percentage) with different kernels in MEDTest 14 100Ex; IDT is with 27.6 mAP.
            pool 5   fc6    fc6 relu   fc7    fc7 relu
  linear    24.6     19.9   24.8       18.8   23.8
  RBF       28.6     29.2   31.4       29.1   29.2
  exp χ²    30.8     30.8   32.2       30.8   32.1

Table 12. Performance comparison (mAP, in percentage) with different kernels in MEDTest 14 10Ex; IDT is with 13.9 mAP.
            pool 5   fc6    fc6 relu   fc7    fc7 relu
  linear    15.3     12.1   16.8       13.5   15.3
  RBF       18.1     16.3   20.3       16.6   18.8
  exp χ²    19.9     20.0   20.6       20.2   19.8
Labels for MEDEval 13 and MEDEval 14 are not publicly available.
Appendices

A. Non-linear Classifiers on CNN Descriptors

Noting the observation in [11] that the performance improvement from fine-tuning is much larger for fc 6 and fc 7 than for pool 5 , [11] suggests that most of the improvement of fine-tuning is gained from learning domain-specific non-linear classifiers on top of them. We take the spirit of the neural network fine-tuning technique, and show that in our experiments, non-linear classifiers such as exponential-χ² kernel SVM or RBF kernel SVM on top of the CNN descriptors can boost the video classification performance significantly over the standard linear approach [6,11]. The exponential-χ² kernel and RBF kernel have been investigated in various applications in the visual recognition area with hand-crafted features such as SIFT and HOG. In the deep learning era, the exponential-χ² kernel and RBF kernel can still show performance advantages over the linear kernel.

The kernel functions $K(X_i, X_j)$ of the exponential-χ² kernel and the RBF kernel between two data points $X_i$ and $X_j$ are both defined through a distance function $\mathrm{Dist}(X_i, X_j)$, where $X_i = \{x_{id}\}$ and $X_j = \{x_{jd}\}$ are the deep learning features extracted from intermediate layers in CNNs, e.g., pool 5 , fc 6 , fc 7 ; $A$ is the average distance (with the corresponding distance metric) between all the training features, and $\sigma$ is the parameter of the kernel function. By utilizing different distance metrics we can obtain different kernels. For the exponential-χ² kernel, $\mathrm{Dist}(X_i, X_j)$ is the χ² distance between $X_i$ and $X_j$, defined with $D$ as the dimension of the features $X_i$, $X_j$ and a small value to avoid "division by zero", while for the RBF kernel, $\mathrm{Dist}(X_i, X_j)$ is the Euclidean distance between $X_i$ and $X_j$, given by $\mathrm{Dist}_{\mathrm{RBF}}$ in Eq. (5).
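As a hedged illustration of these kernels, the sketch below builds Gram matrices for a precomputed-kernel SVM. The exact forms of the paper's Eqs. (3)-(4) are not fully recoverable from the extracted text, so the χ² distance (with a small ε in the denominator) and the exponential form K(X_i, X_j) = exp(−Dist(X_i, X_j)/(σA)) used here are assumptions that match the surrounding description rather than the authors' exact formulas.

```python
import numpy as np

def chi2_distance(A, B, eps=1e-10):
    # Assumed chi^2 distance: sum_d (a_d - b_d)^2 / (a_d + b_d + eps); expects non-negative features.
    num = (A[:, None, :] - B[None, :, :]) ** 2
    den = A[:, None, :] + B[None, :, :] + eps
    return (num / den).sum(-1)

def rbf_distance(A, B):
    # Dist_RBF of Eq. (5): half the squared Euclidean distance.
    return 0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)

def exp_kernel(A, B, dist_fn, sigma=1.0, mean_dist=None):
    """Assumed exponential kernel K(X_i, X_j) = exp(-Dist(X_i, X_j) / (sigma * A_bar)),
    where A_bar is the average pairwise distance over the training features.
    The resulting Gram matrix can be fed to a precomputed-kernel SVM (e.g., LIBSVM with -t 4)."""
    D = dist_fn(A, B)
    if mean_dist is None:
        mean_dist = dist_fn(A, A).mean()
    return np.exp(-D / (sigma * mean_dist + 1e-12)), mean_dist

# Usage (hypothetical arrays): the test Gram matrix reuses the training mean distance.
# K_train, A_bar = exp_kernel(X_train, X_train, chi2_distance, sigma=1.0)
# K_test, _ = exp_kernel(X_test, X_train, chi2_distance, sigma=1.0, mean_dist=A_bar)
```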
[1] TRECVID MED 13. http://www.nist.gov/itl/iad/mig/med13.cfm.
[2] TRECVID MED 14. http://www.nist.gov/itl/iad/mig/med14.cfm.
[3] R. Aly, R. Arandjelovic, K. Chatfield, M. Douze, B. Fernando, Z. Harchaoui, K. McGuinness, N. E. O'Connor, D. Oneata, O. M. Parkhi, et al. The AXES submissions at TrecVid 2013. 2013.
[4] R. Arandjelović and A. Zisserman. All about VLAD. In CVPR, 2013.
[5] C.-C. Chang and C.-J. Lin. LIBSVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27, 2011.
[6] K. Chatfield, K. Simonyan, A. Vedaldi, and A. Zisserman. Return of the devil in the details: Delving deep into convolutional nets. In BMVC, 2014.
[7] M.-Y. Chen and A. Hauptmann. MoSIFT: Recognizing human actions in surveillance videos. CMU TR, 2009.
[8] R. G. Cinbis, J. Verbeek, and C. Schmid. Segmentation driven object detection with Fisher vectors. In ICCV, 2013.
[9] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[10] M. Douze, J. Revaud, C. Schmid, and H. Jégou. Stable hyper-pooling and query expansion for event detection. In ICCV, 2013.
[11] R. Girshick, J. Donahue, T. Darrell, and J. Malik. Rich feature hierarchies for accurate object detection and semantic segmentation. In CVPR, 2014.
[12] Y. Gong, L. Wang, R. Guo, and S. Lazebnik. Multi-scale orderless pooling of deep convolutional activation features. In ECCV, 2014.
[13] K. He, X. Zhang, S. Ren, and J. Sun. Spatial pyramid pooling in deep convolutional networks for visual recognition. In ECCV, 2014.
[14] H. Jegou, M. Douze, and C. Schmid. Product quantization for nearest neighbor search. TPAMI, 33(1):117-128, 2011.
[15] H. Jégou, M. Douze, C. Schmid, and P. Pérez. Aggregating local descriptors into a compact image representation. In CVPR, 2010.
[16] H. Jégou, F. Perronnin, M. Douze, J. Sánchez, P. Pérez, and C. Schmid. Aggregating local image descriptors into compact codes. TPAMI, 34(9):1704-1716, 2012.
[17] Y. Jia. Caffe: An open source convolutional architecture for fast feature embedding. http://caffe.berkeleyvision.org, 2013.
[18] V. Kantorov and I. Laptev. Efficient feature extraction, encoding and classification for action recognition. In CVPR, 2014.
[19] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei. Large-scale video classification with convolutional neural networks. In CVPR, 2014.
[20] A. Krizhevsky, I. Sutskever, and G. E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, 2012.
[21] Z.-Z. Lan, L. Jiang, S.-I. Yu, et al. CMU-Informedia at TRECVID 2013 Multimedia Event Detection. In TRECVID 2013 Workshop, 2013.
[22] I. Laptev. On space-time interest points. IJCV, 64(2-3):107-123, 2005.
[23] F. Li, G. Lebanon, and C. Sminchisescu. Chebyshev approximations to the histogram χ2 kernel. In CVPR, 2012.
[24] M. Lin, Q. Chen, and S. Yan. Network in network. CoRR, abs/1312.4400, 2013.
[25] D. G. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91-110, 2004.
[26] Z. Ma, Y. Yang, Z. Xu, S. Yan, N. Sebe, and A. G. Hauptmann. Complex event detection via multi-source video attributes. In CVPR, 2013.
[27] F. Metze, S. Rawat, and Y. Wang. Improved audio features for large-scale multimedia event detection. In ICME, 2014.
[28] G. K. Myers, R. Nallapati, J. van Hout, et al. The 2013 SESAME Multimedia Event Detection and Recounting system. In TRECVID 2013 Workshop, 2013.
[29] P. Natarajan, S. Wu, F. Luisier, et al. BBN VISER TRECVID 2013 Multimedia Event Detection and Multimedia Event Recounting Systems. In TRECVID 2013 Workshop, 2013.
[30] P. Natarajan, S. Wu, S. Vitaladevuni, X. Zhuang, S. Tsakalidis, U. Park, and R. Prasad. Multimodal feature fusion for robust event detection in web videos. In CVPR, 2012.
[31] D. Oneata, M. Douze, J. Revaud, S. Jochen, D. Potapov, H. Wang, Z. Harchaoui, J. Verbeek, C. Schmid, R. Aly, et al. AXES at TRECVid 2012: KIS, INS, and MED. In TRECVID workshop, 2012.
[32] D. Oneata, J. Verbeek, and C. Schmid. Action and event recognition with Fisher vectors on a compact feature set. In ICCV, 2013.
[33] X. Peng, L. Wang, X. Wang, and Y. Qiao. Bag of visual words and fusion methods for action recognition: Comprehensive study and good practice. arXiv preprint arXiv:1405.4506, 2014.
[34] F. Perronnin, J. Sánchez, and T. Mensink. Improving the Fisher kernel for large-scale image classification. In ECCV, 2010.
[35] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek. Image classification with the Fisher vector: Theory and practice. IJCV, 105(3):222-245, 2013.
[36] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
[37] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In CVPR, 2003.
[38] C. G. Snoek, M. Worring, and A. W. Smeulders. Early versus late fusion in semantic video analysis. In MM. ACM, 2005.
[39] A. Tamrakar, S. Ali, Q. Yu, J. Liu, O. Javed, A. Divakaran, H. Cheng, and H. Sawhney. Evaluation of low-level features and their combinations for complex event detection in open source videos. In CVPR, 2012.
[40] K. E. van de Sande, T. Gevers, and C. G. Snoek. Evaluating color descriptors for object and scene recognition. TPAMI, 32(9):1582-1596, 2010.
[41] A. Vedaldi and B. Fulkerson. VLFeat: An open and portable library of computer vision algorithms. In MM. ACM, 2010.
[42] A. Vedaldi and A. Zisserman. Efficient additive kernels via explicit feature maps. TPAMI, 34(3):480-492, 2012.
[43] H. Wang, A. Klaser, C. Schmid, and C.-L. Liu. Action recognition by dense trajectories. In CVPR, 2011.
[44] H. Wang and C. Schmid. Action recognition with improved trajectories. In ICCV, 2013.
[45] Z. Xu, Y. Yang, I. Tsang, N. Sebe, and A. G. Hauptmann. Feature weighting via optimal thresholding for video analysis. In ICCV, 2013.
[46] Y. Yang, Z. Ma, Z. Xu, S. Yan, and A. G. Hauptmann. How related exemplars help complex event detection in web videos? In ICCV, 2013.
| [] |
[
"Rio: Order-Preserving and CPU-Efficient Remote Storage Access",
"Rio: Order-Preserving and CPU-Efficient Remote Storage Access"
] | [
"Xiaojian Liao \nDepartment of Computer Science and Technology\nCenter for Information Science and Technology (BNRist)\nTsinghua University Beijing National Research\n\n",
"Zhe Yang \nDepartment of Computer Science and Technology\nCenter for Information Science and Technology (BNRist)\nTsinghua University Beijing National Research\n\n",
"Jiwu Shu \nDepartment of Computer Science and Technology\nCenter for Information Science and Technology (BNRist)\nTsinghua University Beijing National Research\n\n"
] | [
"Department of Computer Science and Technology\nCenter for Information Science and Technology (BNRist)\nTsinghua University Beijing National Research\n",
"Department of Computer Science and Technology\nCenter for Information Science and Technology (BNRist)\nTsinghua University Beijing National Research\n",
"Department of Computer Science and Technology\nCenter for Information Science and Technology (BNRist)\nTsinghua University Beijing National Research\n"
] | [
"EuroSys '"
] | Modern NVMe SSDs and RDMA networks provide dramatically higher bandwidth and concurrency. Existing networked storage systems (e.g., NVMe over Fabrics) fail to fully exploit these new devices due to inefficient storage ordering guarantees. Severe synchronous execution for storage order in these systems stalls the CPU and I/O devices and lowers the CPU and I/O performance efficiency of the storage system.We present Rio, a new approach to the storage order of remote storage access. The key insight in Rio is that the layered design of the software stack, along with the concurrent and asynchronous network and storage devices, makes the storage stack conceptually similar to the CPU pipeline. Inspired by the CPU pipeline that executes out-of-order and commits in-order, Rio introduces the I/O pipeline that allows internal out-of-order and asynchronous execution for ordered write requests while offering intact external storage order to applications. Together with merging consecutive ordered requests, these design decisions make for write throughput and CPU efficiency close to that of orderless requests.We implement Rio in Linux NVMe over RDMA stack, and further build a file system named RioFS atop Rio. Evaluations show that Rio outperforms Linux NVMe over RDMA and a state-of-the-art storage stack named Horae by two orders of magnitude and 4.9× on average in terms of throughput of ordered write requests, respectively. RioFS increases the throughput of RocksDB by 1.9× and 1.5× on average, against Ext4 and HoraeFS, respectively. | 10.1145/3552326.3567495 | [
"https://export.arxiv.org/pdf/2210.08934v1.pdf"
] | 252,918,616 | 2210.08934 | 35b1590872f9ab89ab76d4cc16dafec1e748f5bd |
Rio: Order-Preserving and CPU-Efficient Remote Storage Access
May 8-12, 2023
Xiaojian Liao
Department of Computer Science and Technology
Center for Information Science and Technology (BNRist)
Tsinghua University Beijing National Research
Zhe Yang
Department of Computer Science and Technology
Center for Information Science and Technology (BNRist)
Tsinghua University Beijing National Research
Jiwu Shu
Department of Computer Science and Technology
Center for Information Science and Technology (BNRist)
Tsinghua University Beijing National Research
Rio: Order-Preserving and CPU-Efficient Remote Storage Access
EuroSys '23, May 8-12, 2023, Rome, Italy. https://doi.org/10.1145/3552326.3567495
* Jiwu Shu is the corresponding author.
ACM Reference Format: Xiaojian Liao, Zhe Yang, Jiwu Shu. 2023. Rio: Order-Preserving and CPU-Efficient Remote Storage Access. In Eighteenth European Conference on Computer Systems (EuroSys '23), May 8-12, 2023, Rome, Italy. ACM, New York, NY, USA, 15 pages. https://doi.org/10.1145/3552326.3567495
CCS Concepts: • Information systems → Information storage systems
Keywords: Storage Order, NVMe over Fabrics, Flash, File System, SSD
Modern NVMe SSDs and RDMA networks provide dramatically higher bandwidth and concurrency. Existing networked storage systems (e.g., NVMe over Fabrics) fail to fully exploit these new devices due to inefficient storage ordering guarantees. Severe synchronous execution for storage order in these systems stalls the CPU and I/O devices and lowers the CPU and I/O performance efficiency of the storage system. We present Rio, a new approach to the storage order of remote storage access. The key insight in Rio is that the layered design of the software stack, along with the concurrent and asynchronous network and storage devices, makes the storage stack conceptually similar to the CPU pipeline. Inspired by the CPU pipeline that executes out-of-order and commits in-order, Rio introduces the I/O pipeline that allows internal out-of-order and asynchronous execution for ordered write requests while offering intact external storage order to applications. Together with merging consecutive ordered requests, these design decisions make for write throughput and CPU efficiency close to that of orderless requests. We implement Rio in Linux NVMe over RDMA stack, and further build a file system named RioFS atop Rio. Evaluations show that Rio outperforms Linux NVMe over RDMA and a state-of-the-art storage stack named Horae by two orders of magnitude and 4.9× on average in terms of throughput of ordered write requests, respectively. RioFS increases the throughput of RocksDB by 1.9× and 1.5× on average, against Ext4 and HoraeFS, respectively.
Introduction
Remote storage access (i.e., accessing storage devices over the network) has become increasingly popular for modern cloud infrastructures and datacenters to share the enormous capacity and bandwidth of fast storage devices [23,41]. Unlike legacy HDDs and SATA SSDs whose maximum bandwidth is less than 750 MB/s due to the interface limit, a commodity NVMe SSD provides nearly 7 GB/s bandwidth and 1.5 million IOPS [17]. RDMA NIC speeds have reached 200 Gbps for fast data transfer among servers [32]. These changes also make CPU efficiency a dominant factor in storage and network systems [6,7,15,18,21,22,26,27,29,30]. This paper targets storage order, which is the fundamental building block of storage consistency, and which often prevents reliable storage systems from exploiting these high-performance hardware devices.
Storage order indicates a certain persistence order of data blocks to storage media. It is extensively used in storage consistency mechanisms (e.g., database transactions [37,39], soft updates [31] and file system journaling [4,20]) to ensure deterministic and correct disk states despite a system crash. For decades, traditional networked storage systems have used a rather expensive approach to ensure storage order. A following ordered write request can not be processed until preceding requests are completed and the associated data blocks are durable ( §2). This synchronous approach, however, leaves NICs and SSDs underutilized and the CPU in an idle state.
We first seek state-of-the-art approaches from the local storage stack to see if similar designs can be applied to networked storage to mitigate the performance overhead. Unfortunately, their approaches prevent the streamlined use of CPU, I/O devices, or both, thereby offering suboptimal CPU and I/O efficiency and making it difficult to scale to multiple servers. For example, Horae [28], a recently proposed order-preserving approach, introduces a dedicated control path for storage order. However, the control path is synchronous and executed before the data path, which wastes considerable CPU cycles and further lowers the I/O throughput ( §3).
We observe that the layered design (e.g., the network and storage drivers) of the software stack, along with the concurrent and asynchronous network and storage devices, makes the storage stack conceptually similar to the CPU pipeline. We thus introduce the I/O pipeline for ordered write requests ( §4). The I/O pipeline adopts the out-of-order and asynchronous execution from the CPU pipeline. It speculatively executes ordered write requests that target non-overlapping data blocks in parallel as if they were orderless requests. As NICs or SSDs can be easily saturated by orderless requests, this asynchronous approach fully exploits NICs and SSDs. In addition, asynchronous execution lets CPUs continuously push I/O requests to the hardware, without being idle or switched out, thereby making more efficient use of CPUs.
In the I/O pipeline, since each layer of the storage stack can process multiple ordered write requests from different cores, asynchronous execution brings uncertainties that may violate the original ordering semantics or even sacrifice data persistence. We introduce a series of techniques, including in-order submission and completion, and leverage the in-order delivery of the network protocol, to reduce temporary out-of-order execution, thus removing uncertainties and providing final intact storage order to applications. Even if a crash occurs, our proposed asynchronous crash recovery algorithm can quickly recover the system to an ordered state. The key enabler of these techniques is a special structure called the ordering attribute, which is an identity of each ordered write request and tracks neighboring ordered write requests. It is embedded in the original request and carried throughout the storage stack. Therefore, although execution is asynchronous, each ordered write request is able to collect the scattered ordering attributes and reconstruct the original storage order at any time using the aforementioned techniques.
We implement this design within Rio, an order-preserving networked storage stack, with a set of modifications of Linux NVMe over RDMA stack ( §5). We further develop a file system called RioFS that uses the ordered block device of Rio. We evaluate Rio and RioFS with two kinds of SSDs (i.e., flash and Optane SSDs), against Linux NVMe over RDMA stack and an NVMe-oF version of Horae [28] which is originally an order-preserving storage stack for local NVMe SSDs and is extended to support networked storage ( §6). We find that Rio and RioFS perform significantly better than their counterparts. The throughput and CPU efficiency of Rio even come close to the orderless write requests.
In summary, we make the following contributions:
• We conduct a study on existing methods for storage order and summarize three lessons for building a high-performance and order-preserving storage stack.
• We propose Rio, which achieves storage order, high performance and CPU efficiency simultaneously.
• We implement and evaluate Rio and RioFS, demonstrating significant performance and CPU efficiency improvements over state-of-the-art systems.
2 Background and Related Work
Remote Storage Access
The networked storage consists of two major components: the initiator and target. The target represents both the software and hardware of a remote device. It exposes local storage devices to remote initiators via the standard block storage protocol (e.g., NVMe [36]) over a range of network fabrics (e.g., RDMA, TCP). The initiator can thus access the remote block device as if using a local storage device. Recently, NVMe-oF (NVMe over Fabrics) was introduced as an alternative to the traditional iSCSI [1] owing to its low protocol overhead and high parallelism. A number of works [15,23,24,34] have studied and improved the orderless I/O path of networked storage. To preserve storage order, they still rely on traditional synchronous execution as in the Linux NVMe-oF. Our proposal for storage order is orthogonal to their designs. As our implementation is based on NVMe-oF, we present its details (Figure 1(a)). In NVMe-oF, the file system, block layer and NVMe SSD are almost the same as in the local NVMe over PCIe stack. The major difference lies in the initiator and target drivers that control how I/O commands and data blocks are transferred over the network. Specifically, if the network fabric is RDMA, I/O commands that describe the source and destination addresses of data blocks and completion responses are transferred via two-sided RDMA SEND operations. The data blocks are transferred by one-sided RDMA READ or WRITE operations. Note that one-sided operations bypass the target CPU, but two-sided operations require the target CPU to search and update RDMA queues.
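The division of labor between two-sided and one-sided verbs can be sketched with libibverbs as follows. This is an illustrative fragment, not the actual NVMe over RDMA driver code: in the real protocol the initiator posts the SEND carrying the command capsule while the target posts the RDMA READ that pulls the write data; queue-pair setup and memory registration are assumed to be done elsewhere.

```c
#include <infiniband/verbs.h>
#include <stdint.h>
#include <stddef.h>

/* Illustrative helper: post one command capsule (two-sided SEND) and one
 * data transfer (one-sided RDMA READ) on an already-connected QP.
 * cmd_mr/data_mr are pre-registered memory regions; raddr/rkey describe
 * the remote data buffer advertised by the peer. */
static int post_cmd_and_data(struct ibv_qp *qp,
                             struct ibv_mr *cmd_mr, size_t cmd_len,
                             struct ibv_mr *data_mr, size_t data_len,
                             uint64_t raddr, uint32_t rkey)
{
    struct ibv_sge cmd_sge = {
        .addr = (uintptr_t)cmd_mr->addr, .length = (uint32_t)cmd_len, .lkey = cmd_mr->lkey,
    };
    struct ibv_send_wr cmd_wr = {
        .wr_id = 1, .sg_list = &cmd_sge, .num_sge = 1,
        .opcode = IBV_WR_SEND,            /* two-sided: remote CPU consumes a RECV */
        .send_flags = IBV_SEND_SIGNALED,
    };

    struct ibv_sge data_sge = {
        .addr = (uintptr_t)data_mr->addr, .length = (uint32_t)data_len, .lkey = data_mr->lkey,
    };
    struct ibv_send_wr data_wr = {
        .wr_id = 2, .sg_list = &data_sge, .num_sge = 1,
        .opcode = IBV_WR_RDMA_READ,       /* one-sided: bypasses the remote CPU */
        .send_flags = IBV_SEND_SIGNALED,
    };
    data_wr.wr.rdma.remote_addr = raddr;
    data_wr.wr.rdma.rkey = rkey;

    struct ibv_send_wr *bad = NULL;
    if (ibv_post_send(qp, &cmd_wr, &bad))
        return -1;
    return ibv_post_send(qp, &data_wr, &bad);
}
```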
Storage Order
The storage order defines a certain order of data blocks to be persisted in the storage media. Traditional Linux I/O stacks such as NVMe-oF do not offer storage ordering guarantees. They use a synchronous execution approach in the file systems (e.g., synchronous transfer and FLUSH commands) or applications (e.g., fsync) to achieve storage order. The cost of the traditional approach is high, and recent studies [9,10,12,28,43,44] attempt to reduce the overhead of storage order for local SCSI and NVMe stacks. In this section, we introduce Linux NVMe-oF and Horae [28] on the NVMe stack, followed by a discussion of BarrierFS [43] and OptFS [12], which were designed on the older SCSI stack.
Each layer of NVMe-oF is orderless. For example, requests to different queues of the drivers can be executed out-of-order. The NVMe SSD may freely re-order requests due to massive internal data parallelism. Thus, to control storage order in NVMe-oF, the file system issues the next ordered write request only if the associated data blocks of the preceding request are durable. Specifically, for each ordered write request, the file system does not issue the next request until the request flows through the block layer, initiator and target drivers and reaches the NVMe SSD, and the data blocks are ensured to be durable by a FLUSH command on the SSD. The overhead of this approach, which we call synchronous execution, is severe [12,13,28,43] in local storage and becomes worse in networked storage ( §3).
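A minimal user-space illustration of this synchronous execution (not the kernel's code) is shown below: each ordered write is followed by fdatasync, which forces the request to complete and, for a device with a volatile write cache, triggers a FLUSH before the next ordered write is even submitted. The device path is only an example.

```c
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

/* Two ordered writes, made ordered the traditional way: the second write is
 * not submitted until the first one is durable, so the device (and, for
 * remote storage, the network) sits idle in between. */
int main(void)
{
    int fd = open("/dev/nvme0n1", O_WRONLY);    /* example device path */
    if (fd < 0) { perror("open"); return 1; }

    char a[4096] = {0}, b[4096] = {0};

    if (pwrite(fd, a, sizeof(a), 0) != (ssize_t)sizeof(a)) return 1;
    if (fdatasync(fd) != 0) return 1;           /* wait + FLUSH: ordering point */

    if (pwrite(fd, b, sizeof(b), 4096) != (ssize_t)sizeof(b)) return 1;
    if (fdatasync(fd) != 0) return 1;

    close(fd);
    return 0;
}
```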
Horae separates the storage ordering control from the request flow and provides a dedicated control path (Figure 1(b)). The control path is used to ensure storage order first, and thus ordered write requests can be processed asynchronously and concurrently. Specifically, in the control path, Horae stores, in the persistent memory region (PMR) [8,35] of the NVMe SSD, ordering metadata that retains enough information to recover from an untimely crash. The PMR is a region of general purpose read/write persistent memory of the SSD. It is byte-addressable and can be accessed directly by CPU load and store instructions (MMIO operations), thus making the control path fast. However, since control path operations are executed before the ordered write requests are serviced, the control path is synchronous and becomes a serialization bottleneck ( §3).
BarrierFS keeps each layer order-preserving. For example, the block layer schedules ordered write requests in a FIFO fashion. The SSD needs a barrier write command to understand the storage order and make data blocks durable in order. This approach enforces storage order too strictly, which makes it difficult to extend the idea to support modern multi-queue hardware (e.g., NVMe SSD and RDMA NIC) [28], multiple targets [44] and servers. For example, to agree on a specific order, requests from different cores contend on the single hardware queue, which limits the multicore scalability. SSDs are unable to communicate with each other, and thus keeping storage order among multiple targets is challenging. As we will show in Rio, intermediate storage order is not a necessity and can be relaxed.
OptFS introduces an optimistic approach that uses transaction checksums to detect ordering violations and perform further crash recovery. This approach requires more CPU cycles to calculate and validate checksums. Such an investment of CPU cycles is beneficial for HDDs as the speed gap between legacy storage devices and CPUs is so large. However, for modern NVMe SSDs, CPU cycles are no longer a negligible part [26,27,29]. A study [11] from the same authors reveals that OptFS does not perform well on SSDs since the CRC32 checksum computation is a significant part of the total run-time. Furthermore, the transaction checksum is restricted to systems that place data blocks of ordered write requests in pre-determined locations (e.g., journaling). Hence, it does not offer a lower-level ordered block device abstraction that can be randomly written and atop which many applications are built (e.g., BlueStore [3], KVell [27]).
Motivation
In this section, we quantify and analyze the overhead of storage ordering guarantees of remote storage access.
Motivation Experiments
We perform experiments on both ordered and orderless write requests with Linux NVMe over RDMA and Horae [28]. We extend Horae to the NVMe over RDMA stack (details in §6.1). Since we do not have barrier-enabled storage and can not control the behavior of the NIC, we are unable to evaluate BarrierFS. As OptFS is implemented in an old Linux kernel with no support for NVMe and RDMA, we can not compare against it experimentally. Other details of the testbed are described in §6.
The test launches up to 12 threads, and each performs the following workload on a private SSD area independently. Each thread issues an ordered write request that contains 2 contiguous 4 KB data blocks, and then performs another consecutive 4 KB ordered write request. We choose this workload as it simulates the write pattern of the metadata journaling widely used in storage systems. Specifically, the first 2 data blocks represent the journal description and metadata blocks, and the final 4 KB block is the commit record. Applications (e.g., MySQL) that require strong consistency and durability issue fsync to trigger the metadata journaling. Figure 2 plots the average throughput. The gap between the orderless, which does not guarantee storage order, and the other systems indicates the overhead of guaranteeing storage order.
We make two observations from Figure 2. First, orderless write requests, which are executed asynchronously, saturate the bandwidth of both the flash and Optane SSD with only a single thread. Second, Linux NVMe-oF (NVMe over RDMA) and Horae perform significantly worse than the orderless. Horae needs more than 8 CPU cores to fully drive existing SSDs. For storage arrays and newer and faster SSDs, it is expected to need even more computation resources. The results of Figure 2 therefore indicate that the cost of the storage ordering guarantees of existing approaches is high.
Analysis and Lessons
We examine the behaviors of the ordered I/O path and decompose the storage order overhead. The ordered I/O path consists of three main parts: CPU execution on the software, data transfer over network and PCIe, and device execution, in particular the hardware barrier instructions (e.g., FLUSH). We analyze each part at length and summarize three lessons.
Lesson 1: alleviating the overhead of storage barrier instructions. The ordered NVMe-oF suffers from the classic barrier instruction (i.e., FLUSH). On the flash SSD with a volatile write cache (Figure 2(a)), NVMe-oF issues a FLUSH command for each ordered request to ensure that preceding data blocks are durable. The FLUSH is a synchronous activity and flushes nearly all content, including data blocks and FTL mappings, from the device's volatile cache to persistent flash memory. Horae and the orderless remove the FLUSH. Comparing NVMe-oF with Horae, we observe that the FLUSH is quite expensive and thus lowers the throughput dramatically.
Lesson 2: making data transfer asynchronous. The Optane SSD in Figure 2(b) has a power loss protection technique (e.g., a non-volatile write cache). Hence, the overhead of the FLUSH is marginal in this kind of SSD. Here, the dominant factor is data transfer over the network and PCIe bus. The ordered NVMe-oF dispatches the next ordered write request only after the preceding request reaches the SSD. This synchronous approach, however, leaves the NIC and SSD underutilized.
Horae separates the storage order from the request flow, and thus makes the transfer of data blocks asynchronous. This approach allows more outstanding requests to be processed by NICs and SSDs. Nonetheless, the control path is executed synchronously before the data path. Comparing the orderless, which transfers all data asynchronously, with Horae (Figure 2), we find that the synchronous control path of Horae decreases the throughput significantly.
We dive into Horae's control path to understand the inefficiency. The control path is essentially a faster and byte-addressable I/O path. An ideal implementation of the control path in NVMe-oF is based on one-sided RDMA operations and PCIe peer-to-peer (P2P). Specifically, the NIC can issue PCIe transactions on the PMR by P2P, bypassing the target CPU and memory. The initiator driver first invokes an RDMA WRITE operation to dispatch the ordering metadata, and then issues an RDMA READ operation to ensure that the ordering metadata reaches PMR. The latency of this ideal control path is expected to be larger than 4 μs, the raw hardware latency of a persistent RDMA write [42]. Modern NVMe SSDs deliver a 4 KB I/O latency of sub-ten μs [16,26] and the latency tends to decrease with newer PCIe 4.0 and 5.0 SSDs [17]. As the latency of the SSD is comparable to that of the control path, the overhead of the synchronous control path is non-negligible. In summary, to fully exploit the fast hardware devices, all transfers over the network and PCIe, including any control information, should be asynchronous.
Lesson 3: reducing CPU cycles whenever possible. If the I/O stack alleviates the overhead of storage barrier instructions and makes data transfer asynchronous, the bottleneck will be shifted to the CPU overhead (i.e., CPU cycles spent on the I/O stack). Specifically, in ordered NVMe-oF and Horae, a large fraction of CPU cycles are consumed at the device drivers, e.g., two-sided RDMA SEND operations. Synchronous execution prevents the I/O stack from merging consecutive ordered write requests. The I/O stack generates an I/O command for each request, and each I/O command requires many CPU cycles on RDMA and NVMe queues. When ordered write requests become asynchronous like the orderless ones, they can be merged to reduce the CPU overhead. We elaborate on this with Figure 3.
Figure 3 presents the CPU overhead collected by the top command when we test the orderless NVMe-oF using a single thread and sequential writes. We choose this scenario as the throughput remains unchanged regardless of whether block merging is enabled or not. This ensures that the comparisons on the CPU overhead are relatively fair. The X-axis of Figure 3 shows the number of 4 KB data blocks that can be potentially merged. We control this number via the blk_start_plug and blk_finish_plug function calls in the code.
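The plugging mechanism mentioned above can be sketched as follows. This is kernel-style illustrative code, not the exact benchmark code; it assumes the bios are prepared elsewhere and shows how bios submitted inside a plug window may be merged by the block layer before they reach the driver.

```c
#include <linux/blkdev.h>
#include <linux/bio.h>

/* Sketch: submit `n` consecutive 4 KB bios inside one plug so that the
 * block layer may merge them into a single larger request. The bio setup
 * (bdev, sector, pages) is elided. */
static void submit_plugged_batch(struct bio **bios, int n)
{
	struct blk_plug plug;
	int i;

	blk_start_plug(&plug);          /* open the merge window */
	for (i = 0; i < n; i++)
		submit_bio(bios[i]);    /* held on the plug list, may be merged */
	blk_finish_plug(&plug);         /* flush merged request(s) to the driver */
}
```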
We find that merging substantially reduces the CPU cycles of both the initiator and target, atop both flash and Optane SSD. Merging decreases the number of requests and further NVMe-oF I/O commands, thereby reducing CPU cycles spent on the two-sided RDMA SEND operations. Although merging itself requires some investments in CPU cycles, it decreases CPU cycles to a greater extent, and thus the overall CPU overhead is improved. We conclude that the I/O stack should merge ordered write requests to reduce CPU overhead for fast drives.
Rio Design
Overview
Inspired by the studies in §3, we introduce Rio to preserve storage order while taking advantage of fast NICs and SSDs. Figure 4 shows the overview of Rio, a networked storage stack that spans one initiator server and multiple target servers. Each target server consists of one or more NVMe SSDs, and requires at least one NVMe SSD with PMR support or a small region (2 MB) of byte-addressable persistent memory (e.g., Intel Optane main memory). In the initiator server, we revise the entire storage stack, including the file system, block layer and device driver, to let ordering information flow through the stack asynchronously. Besides, a shim layer called the Rio sequencer is introduced between the file system and block device to wrap the original orderless block abstraction as an ordered one. To benefit from Rio, applications can invoke intact file system calls (e.g., write, fsync) on RioFS, which leverages Rio to accelerate file system journaling.
The key design of Rio is to control the order at the start and end of the lifetime of ordered write requests, while allowing some out-of-order execution in between. Specifically, when ordered write requests are initiated by the file system or applications ( 1 ), the Rio sequencer first generates a special ordering attribute, which is an identity of the ordered request and is used for reconstructing the storage order, and then dispatches the requests to the block layer asynchronously ( 2 ). When ordered write requests are finished and returned to the Rio sequencer, Rio completes the requests in order using the ordering attributes ( 9 ), to handle the temporary out-of-order execution. The file system and applications thus see the original ordered state. Then, the intermediate execution ( 3 , 4 , 6 and 8 ) becomes almost asynchronous and concurrent, enabling more requests to be processed by each layer simultaneously (lessons 1 and 2 from §3).
Rio's approach, compared to existing designs, allows more outstanding requests to NICs and SSDs, thereby taking full advantage of the abundant concurrency of modern fast NICs and NVMe SSDs. Rio also enables easy scaling to more individual target servers, as there are no ordering constraints on the data transfer of ordered write requests ( 4 , 6 ).
Figure 5. Ordering attributes. Each big rectangle represents an ordering attribute of each ordered write request.
The asynchronous execution makes the post-crash states of Rio more uncertain, thus making crash consistency guarantees more challenging. For example, after a server power outage, data blocks of the latter request are likely to be durable ahead of those of the former one, which violates the storage order. Rio addresses this issue with the persistent ordering attribute, which essentially logs the persistent state of data blocks of each ordered write request ( 5 , 7 ). By scanning persistent ordering attributes, Rio can speculate on possible post-crash states, and further recover data blocks to the latest and ordered state. Rio performs recovery I/Os in an asynchronous and concurrent fashion, thereby also fully utilizing the NICs and SSDs.
Making storage order asynchronous also brings opportunities. The major opportunity is that, unlike traditional designs, consecutive ordered write requests of Rio can be staged and merged to reduce CPU overhead. For two consecutive ordered write requests, the classic NVMe over RDMA generates at least two NVMe-oF commands, which require at least four RDMA SEND operations. With Rio's I/O scheduler optimized for networked storage, the number of NVMe-oF commands and associated operations is halved. This further reduces the CPU cycles consumed at the device drivers.
In this section, we first present the organization of the ordering attribute ( §4.2) and the way of using it to enforce storage order ( §4.3). We next describe the crash recovery algorithm ( §4.4) and show the I/O scheduling ( §4.5). We finally present the programming model ( §4.6) and RioFS ( §4.7), and prove the correctness ( §4.8), ending with a discussion of support for multiple initiator servers ( §4.9).
Ordering Attributes
Definition. The ordering attribute is an ordered write request's logical identity that describes the group it belongs to, the previous request it follows, and whether the associated data blocks are durable during asynchronous execution. As shown in Figure 5, ordering attributes essentially form two kinds of lists, one for global order and the other for per-target-server order. The global order is the storage order for the entire cluster that consists of multiple target servers. It is recorded by a sequence number (seq), as widely used in distributed environments. The per-server order is the storage order for each target server. It is achieved via the prev field, which points to the preceding request in the same target server. An ordered write flow may contain several requests that can be freely reordered with each other, e.g., the journal description and journaled metadata. Rio groups this kind of request (e.g., W1_1 and W1_2) by using the same sequence number for each request and a counter (num) in the final request to record the number of requests in the group. Rio guarantees storage order at the granularity of a group. By scanning and examining ordering attributes, the storage order can be built up and Rio can regulate and recover the system to a correct state.
Creation. The ordering attribute is generated by the Rio sequencer and embedded inside each request. The Rio sequencer uses the submission order from the file system as the storage order. Specifically, a request that comes first is assigned a lower sequence number. The Rio sequencer increases the sequence number and calculates the number of individual requests in a group when it encounters a special request that marks the end of a group of ordered write requests. The Rio sequencer retains the most recent sequence number for each target server, to fill the prev field. The persist field indicates whether the request is durable. Its initial value is 0 and it is used for recovery. The LBA field represents the logical block addresses of the request. Rio uses the split field to handle request splitting.
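Based on the fields named in this subsection (plus the ipu label introduced later in §4.4.2), one plausible in-memory layout of an ordering attribute is sketched below; the field widths, the nr_blocks field, and the packing are our assumptions rather than Rio's published definition.

```c
#include <stdint.h>

/* Hypothetical layout of one ordering attribute (field names follow §4.2). */
struct rio_ordering_attr {
    uint64_t seq;       /* global order: position in the stream */
    uint32_t num;       /* number of requests in the group (set in the final one) */
    uint64_t prev;      /* seq of the preceding request on the same target server */
    uint64_t lba;       /* starting logical block address of the request */
    uint32_t nr_blocks; /* request length in blocks (assumed; needed to locate data) */
    uint8_t  persist;   /* 0 = data blocks in flight, 1 = data blocks durable */
    uint8_t  split;     /* set on the pieces of a divided request */
    uint8_t  ipu;       /* in-place update label passed down by the file system */
} __attribute__((packed));
```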
Though conceptually simple, ordering attributes are powerful across the I/O stack. For example, they allow correct and parallel persistence and detection of out-of-order execution during recovery. The block layer and device drivers leverage ordering attributes to perform merging and splitting, thus reducing CPU overhead and increasing I/O concurrency.
Parallel and Correct Persistence
Simply letting ordered write requests become orderless all the time leads to incorrect persistence and makes it difficult for the I/O stack to recover. The key point here is that a consensus must be achieved on the persistence boundary between the software (target driver) and hardware (NVMe SSD), as the software and hardware have distinct contexts. We introduce two techniques to achieve such a consensus. First, the target driver submits the requests in per-server order (step 6 of Figure 4) to ensure correct persistence by the original durability interface ( §4.3.1). Second, since the NVMe SSD does not understand the ordering attribute, the ordering attribute needs to be documented so that it can be parsed by the software when recovery is performed ( §4.3.2).
In-Order Submission.
Rio keeps the multi-queue design from the RDMA and NVMe stack. An ordered write request can be distributed to any hardware queue and an RDMA NIC is likely to reorder requests among multiple queues (step 4 ). Assume W3 of Figure 5 arrives at the target server earlier than W1_2 and requires instant data persistence to flush the SSD, i.e., requests before W3 must be stored in persistent media rather than the SSD's volatile write buffer. If the target driver directly submits W3, the SSD only makes W1_1 and W3 durable by a FLUSH command, ignoring W1_2 and violating the original durability semantics.
To address this issue, the target driver of Rio submits ordered write requests to the SSD in the per-server order. In the aforementioned example, the target will not submit W3 until all W1_1 and W1_2 are dispatched to the SSD.
The in-order submission mechanism allows each target server to transfer data blocks and flush SSDs concurrently. Rio does not use the global order for submission, to avoid coordination among servers. If the global order were used, the target server that has W3 would have to wait for the target server that has W2. This not only incurs extra network traffic but also introduces synchronization overhead, lowering the concurrency. As we show later, Rio can recover the out-of-order persistence over multiple servers in case of a crash.
Maintaining the per-server submission order potentially introduces synchronization overhead, as a later request is blocked until its preceding requests reach the server. Fortunately, this overhead is negligible due to the massive concurrency of NICs. The concurrency of NICs is usually larger than that of the SSDs installed on the same server, and therefore the NIC can post storage I/Os to the target driver almost at the same time. We further use the in-order delivery property of RDMA to remove this overhead ( §4.5).
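The in-order submission described above can be pictured as a small reorder stage in the target driver, sketched below in simplified form (our rendering, not Rio's driver code; group counting and bounds checks are elided): a request is handed to the NVMe layer only after the request it follows has been submitted, and early arrivals are stashed until then.

```c
#include <stdint.h>
#include <stddef.h>

#define RIO_PENDING_MAX 256

struct ordered_req {
    uint64_t seq;   /* this request's sequence number    */
    uint64_t prev;  /* seq it must follow on this server */
    /* ... command and data descriptors elided ...       */
};

/* Per-target-server submission state. */
struct submit_ctx {
    uint64_t last_submitted;                       /* seq of the last request sent to the SSD */
    struct ordered_req *pending[RIO_PENDING_MAX];  /* stash for early arrivals                */
    size_t npending;
};

void nvme_submit(struct ordered_req *r);           /* assumed lower-level hook */

/* Called when a request arrives from the NIC, possibly out of order. */
void rio_target_arrival(struct submit_ctx *ctx, struct ordered_req *r)
{
    if (r->prev != ctx->last_submitted) {          /* predecessor not here yet: stash it */
        ctx->pending[ctx->npending++] = r;
        return;
    }
    nvme_submit(r);
    ctx->last_submitted = r->seq;

    /* Drain any stashed requests that are now eligible. */
    for (size_t i = 0; i < ctx->npending; ) {
        if (ctx->pending[i]->prev == ctx->last_submitted) {
            struct ordered_req *next = ctx->pending[i];
            ctx->pending[i] = ctx->pending[--ctx->npending];
            nvme_submit(next);
            ctx->last_submitted = next->seq;
            i = 0;                                 /* restart scan after each submission */
        } else {
            i++;
        }
    }
}
```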
Persistent Ordering Attributes.
Rio makes ordering attributes persistent so as to reconstruct per-server ordering lists for further recovery. The key idea is to record the persistence state of data blocks in the persist field of the associated ordering attribute. Before submitting an ordered request to the SSD, Rio persists the ordering attribute (step 5 ), which logs the storage order but indicates that the data blocks are still in-progress and thus non-persistent. When the data blocks become persistent, Rio sets the persist field to 1 (step 7 ). Specifically, for SSDs with power loss protection (PLP), e.g., a non-volatile write cache, Rio toggles the persist field when a completion response is reported via the interrupt handler, since data blocks become durable when they reach the SSD and the FLUSH command is usually ignored by the block layer. For SSDs without PLP, the persist field is set only after the request with a FLUSH command is completed. Only the persist field of the request that carries the FLUSH command is toggled, indicating that the data blocks of all preceding write requests have become durable.
Rio stores the persistent ordering attributes to the persistent memory region (PMR) of NVMe SSDs. In particular, Rio organizes the PMR as a circular log and employs two in-memory pointers, the head and tail pointers, to efficiently manage the circular log. Rio appends the newly arrived ordering attributes to the log tail by increasing the tail pointer. Once the completion response of the ordered write request is returned to the application (indicating that the storage order is satisfied), associated ordering attributes become invalid and Rio recycles space by moving the head pointer.
Figure 6. A recovery example. Other fields of the ordering attributes are omitted to facilitate the description.
In Rio, storing the small-sized ordering attributes by CPU-initiated MMIOs (usually less than 1 μs) is significantly faster than persisting data blocks (usually more than 10 μs). Moreover, each target server persists ordering attributes in parallel without any coordination. Hence, storing ordering attributes does not introduce much overhead to the overall I/O path.
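A minimal sketch of the PMR circular log follows, with an invented pmr_write_persistent helper standing in for the MMIO-write-plus-read sequence described in §5; the slot sizing and in-memory head/tail bookkeeping are assumptions.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

#define PMR_SLOTS 4096   /* example capacity; the paper uses a 2 MB PMR region */

struct rio_ordering_attr;                          /* as sketched in §4.2 */

/* Assumed helper: copy `len` bytes into the PMR MMIO window at `slot` and
 * make it durable (e.g., an MMIO write followed by a read-back). */
void pmr_write_persistent(uint32_t slot, const void *src, size_t len);

struct pmr_log {
    uint64_t head;   /* in-memory pointer: oldest live attribute */
    uint64_t tail;   /* in-memory pointer: next free slot        */
};

/* Append one attribute; returns false if the log is full (caller throttles). */
bool pmr_log_append(struct pmr_log *log, const struct rio_ordering_attr *oa,
                    size_t oa_len)
{
    if (log->tail - log->head >= PMR_SLOTS)
        return false;
    pmr_write_persistent((uint32_t)(log->tail % PMR_SLOTS), oa, oa_len);
    log->tail++;
    return true;
}

/* Called once the ordered write has been completed to the application:
 * its attribute is no longer needed for recovery, so reclaim the slot. */
void pmr_log_retire(struct pmr_log *log)
{
    log->head++;
}
```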
Persistent ordering attributes are used to rebuild per-server ordering lists in parallel. Specifically, each server validates each ordering attribute by checking the persist field. For SSDs with PLP, an ordering attribute is valid if and only if its own persist field and those of its preceding attributes are all set to 1. For SSDs without PLP, an ordering attribute is valid when the persist field of a later ordering attribute that belongs to a FLUSH command is set to 1. By scanning ordering attributes, the valid per-server storage order can be reconstructed. The remaining invalid ordering attributes are dropped, as the storage order among their ordered write requests is uncertain. With parallel persistence and validation of ordering attributes, Rio processes ordered write requests with high concurrency.
Crash Recovery and Consistency
The storage system must be able to recover to a consistent state in the face of a sudden server crash (e.g., a power outage). The main idea is to leverage the reconstructed per-server ordering lists from target servers (described in §4.3.2) to detect out-of-order execution, so as to perform crash recovery. In this section, we first present Rio's crash recovery and consistency on systems that update out-of-place (e.g., log structure), and then show how Rio handles in-place updates.
Out-of-Place Updates.
We use Figure 6 to elaborate on Rio's recovery. In this figure, we assume that ordered write requests of Rio always update out-of-place so that there always exists a copy of the old data blocks. As both the initiator and target servers can fail, we describe Rio's recovery strategy in these two types of crash scenarios.
Initiator recovery. After restarting and reconnecting to all target servers, the initiator server collects per-server ordering lists (1←3, 2←5) from each target server. Next, the initiator rebuilds a global ordering list (1←2←3) by merging the per-server ordering lists. For example, W5 is dropped since W4 is not durable. Then, the global ordering list is sent back to each target server to roll back. Data blocks that are not within the global ordering list (W4, W5, W6 and W7) are erased.
Target recovery. When a target server crashes, the initiator server tries to reconnect the target server. Once connected again, similar to the initiator recovery, the initiator server firstly rebuilds the global ordering list. The difference is that merging does not drop ordering attributes of alive target servers. Instead, the initiator server tries to repair the broken list by replaying non-persistent requests on failed targets. For example, assume target server 1 fails while server 2 is still alive. The initiator re-sends W4 to target 1 until a successful completion response is received. Replaying is idempotent and thus does not introduce inconsistency.
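The merge of per-server ordering lists into a global list can be sketched as follows (our simplified rendering, not Rio's code): each target reports the sequence numbers it holds durably, and the initiator keeps the longest prefix of the global sequence that is fully covered; requests beyond that prefix are erased on healthy targets or replayed on recovered ones, as described above.

```c
#include <stdint.h>
#include <stdbool.h>
#include <stddef.h>

/* One target's reconstructed per-server list, reported as the set of global
 * sequence numbers it holds durably (already validated via persist fields). */
struct server_report {
    const uint64_t *durable_seqs;
    size_t count;
};

static bool seq_is_durable(const struct server_report *reps, size_t nservers,
                           uint64_t seq)
{
    for (size_t s = 0; s < nservers; s++)
        for (size_t i = 0; i < reps[s].count; i++)
            if (reps[s].durable_seqs[i] == seq)
                return true;
    return false;
}

/* Returns the last sequence number of the valid global prefix (0 if none).
 * In the Figure 6 example, the servers report {1,3} and {2,5}; the result
 * is 3, so W4..W7 are rolled back. */
uint64_t rebuild_global_prefix(const struct server_report *reps,
                               size_t nservers, uint64_t max_seq)
{
    uint64_t last_good = 0;
    for (uint64_t seq = 1; seq <= max_seq; seq++) {
        if (!seq_is_durable(reps, nservers, seq))
            break;             /* the first hole ends the valid prefix */
        last_good = seq;
    }
    return last_good;
}
```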
Version consistency requires that metadata is consistent and the versions of data and metadata match each other. Most mechanisms that support version consistency (e.g., the data journaling of Ext4, the checkpointing of F2FS [25]) use the storage order to keep the versions of data and metadata the same and update data and metadata blocks out-of-place for crash recovery. Rio provides storage order and is thus capable of offering version consistency when the data and metadata blocks are updated out-of-place.
In-Place Updates (IPUs).
Crash-consistent storage systems atop commodity SSDs that do not have an atomic update interface usually update metadata out-of-place for system integrity. For user data, IPUs can be classified into two categories: normal IPUs that overwrite an existing file, and block reuse where data blocks of a file are re-assigned to another file. Rio is unaware of IPUs, and upper layer systems (e.g., file systems) need to explicitly label them. Rio distinguishes IPUs from out-of-place updates by a special field (ipu) in the ordering attribute and handles them differently.
The target recovery for IPUs is the same as for out-of-place updates. However, the initiator recovery is different: Rio does not perform roll-back but leaves the recovery strategy to upper layer systems (e.g., file systems). The file system can thus retrieve the global ordering list from Rio to achieve a certain level of consistency by customizing recovery logic.
This design of Rio leaves flexible consistency guarantees to upper layer systems, as handling IPUs is tricky. For example, data consistency (e.g., the ordered mode of Ext4) requires that metadata is consistent and data is persisted before metadata. Ext4 achieves data consistency by updating data in-place before writing metadata to the journal area. As the global ordering list provides the persistence order, a file system built atop Rio can also achieve data consistency by erasing the metadata blocks that are persisted before IPU data blocks during recovery.
For block reuse, upper layer systems can not directly use Rio since the newer data can not be durable before the data block is freed from the prior owner (otherwise the prior owner can see the data of the new owner). For example, a thread issues a request to change the ownership of a file, followed by issuing another request to write data to that file. If a crash happens when data blocks of the later request are durable while those of the former are not, the file system will fail to recover to a consistent state. If the file system erases the data blocks of the later requests, the data content of the prior owner is lost. If the file system leaves the data blocks of the later requests untouched, the prior owner can see the data content of the new owner, which results in security issues. Upper layer systems (e.g., file systems) need to fall back to data journaling or use the classic synchronous FLUSH for block reuse. For example, a file system can write the data blocks to the journal area to keep an old copy, so that the file system can roll back to the old copy during recovery if the metadata block that describes the file ownership is not durable. A file system can also issue a FLUSH command and wait for its completion to ensure that the ownership changes before the new data are written to the file.
Rio I/O Scheduler
Rio I/O scheduler is designed to exploit the asynchronous execution of ordered write requests and reduce the CPU overhead of the networked storage stack. Figure 7(a) presents the organization of the Rio I/O scheduler. We use the stream notion for multicore scalability. A stream represents a sequence of ordered write requests. Across streams, there are no ordering restrictions, i.e., each stream has an independent global order. The number of streams is configurable and by default equals the number of cores. Each CPU core can get arbitrary streams but has a dedicated stream in the common case, i.e., core 0 uses stream 0. The Rio I/O scheduler has three main design principles.
Principle 1: Rio uses dedicated software queues (ORDER queue) to schedule ordered write requests. Such separation of ordered requests from orderless ones reduces the latency of ordered write requests, which are usually on the critical path (e.g., fsync), and simplifies the scheduling algorithms.
Principle 2: Rio dispatches requests of a stream to the same NIC send queue, to exploit the in-order delivery property of the network protocol, thereby reducing the overhead of in-order submission in the target driver ( §4.3.1). For example, the block layer dispatches requests from stream 0 to NIC queue 0. As the reliable connected (RC) transport of RDMA preserves the delivery order of RDMA SEND operations for each queue, aligning the stream to a NIC queue reduces out-of-order deliveries over the network. Each socket of the TCP stack has a similar in-order delivery property. Thus, this principle can also be applied to TCP networks.
Principle 3: The merging and splitting of Rio may enhance (but must not sacrifice) the original ordering guarantees. For example, merging adjacent data blocks of continuous ordered write requests may remove the storage barrier. The merged request, however, should be atomic, since atomicity is a stronger consistency property than storage order. Whenever possible, Rio merges consecutive ordered write requests into a single large request. This reduces the number of NVMe-oF I/O commands and further the CPU cycles spent on the storage protocol (lesson 3 from §3.2). We use examples from Figure 8 to illustrate principle 3 at length. By default, Rio does not reorder requests in the ORDER queue. However, reordering is allowed in Rio for fairness and rate limiting, but this is beyond the scope of this paper.
Request merging. There are three requirements for request merging. First, merging is performed within a sole stream. Second, sequence numbers of the requests must be continuous in order not to sacrifice the original storage order. Third, logical block addresses (LBAs) of the requests must be non-overlapping and consecutive. Figure 8(a) shows three requests that meet these three requirements. The block layer merges them in the ORDER queue and compacts their ordering attributes into a single one. If merging spans multiple groups, the sequence number of the merged request becomes a range. As the three requests share a sole ordering attribute, they are considered as a whole during recovery and thus become atomic. In particular, the persist field will only be toggled if all three requests are durable. Otherwise, all three requests are discarded or replayed during recovery.
Request splitting. The block layer of Rio splits a request to meet the hardware limitation (e.g., the transfer size of a single request) and software configuration (e.g., the stripe size of a logical volume).
For example, the maximum transfer size of a single request of an Intel 905P SSD is 128 KB. An RDMA NIC has a similar constraint for a single work request.
Figure 9. RioFS atop Rio. RioFS uses the Rio sequencer to perform file system journaling.
Figure 8. Examples of request merging (a) and request splitting (b) with the associated ordering attributes.
Rio divides the larger ordered write request into smaller ones and tags the divided requests with a special split flag. During recovery, divided requests are merged back into the original request to validate the global order. For example, in Figure 8(b), W2 is divided and scattered to two servers. During crash recovery, the ordering attributes of W2_1 and W2_2 are sent back to the initiator to decide the global order. A merged request can not be split, and vice versa.
A process can be migrated from one core to another due to CPU scheduling, which leads to stream stealing (Figure 7(b)) and may complicate the I/O scheduler and make the stream notion difficult to use. To handle this case, Rio allows stream stealing and affiliates the stream to a NIC send queue, which always forwards requests of a stream to the same NIC queue regardless of the process's migration. Similar to orderless requests, the pending requests in the ORDER queue are flushed to the initiator driver before the process is migrated.
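The three merging requirements can be expressed as a small predicate over the ordering-attribute fields, sketched below; the stream and nr_blocks bookkeeping are our assumptions.

```c
#include <stdint.h>
#include <stdbool.h>

/* Minimal view of a queued ordered write for merge checking. */
struct queued_write {
    uint32_t stream;     /* Rio stream the request belongs to */
    uint64_t seq;        /* global order within the stream    */
    uint64_t lba;        /* first logical block address       */
    uint32_t nr_blocks;  /* length in blocks                  */
};

/* True if `b` may be merged onto the tail request `a` in the ORDER queue:
 * same stream, continuous sequence numbers (same group or the next one),
 * and LBAs that are contiguous and non-overlapping. */
static bool rio_can_merge(const struct queued_write *a,
                          const struct queued_write *b)
{
    if (a->stream != b->stream)
        return false;                        /* requirement 1: single stream  */
    if (b->seq != a->seq && b->seq != a->seq + 1)
        return false;                        /* requirement 2: continuous seq */
    return b->lba == a->lba + a->nr_blocks;  /* requirement 3: contiguous LBAs */
}
```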
Programming Model
Rio provides an ordered block device abstraction and asynchronous I/O interfaces to file systems and applications. Specifically, the rio_setup function specifies the number of streams, ideally set to the number of independent transactions allowed in the applications or file systems (e.g., the number of cores) to maximize concurrency. The rio_setup function also associates the networked storage devices (e.g., a sole SSD, a logical volume or RAID) with the streams. The rio_submit function encapsulates the original block I/O submission function (submit_bio). It requires a stream ID and a flag to explicitly delimit the end of a group. Rio treats the submission order from file systems (or applications) as the global order and automatically manages the per-server and global order for each stream. File systems and applications only need to manage streams and decide the submission order. The rio_submit function dispatches requests to the target with ordering guarantees. To guarantee durability, file systems and applications need to embed a FLUSH command in the final request and use rio_wait to poll for the completion of the final request. The users can continuously push multiple asynchronous and ordered requests to SSDs via rio_submit and use rio_wait for durability, thereby achieving high concurrency.
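Putting these interfaces together, a journaling-style ordered group followed by a durability point might look roughly like the sketch below. The prototypes are assumptions that only mirror the described semantics (stream ID, end-of-group flag, a FLUSH carried by the final request, and rio_wait polling for its completion); they are not the actual Rio headers.

```c
#include <stdbool.h>

struct bio;                        /* Linux block I/O descriptor */

/* Assumed prototypes mirroring the described interfaces. */
int  rio_setup(int nr_streams);    /* called once, e.g., at mount time */
void rio_submit(struct bio *bio, int stream, bool end_of_group);
int  rio_wait(int stream);         /* poll completion of the final request */

void journal_commit(struct bio *jd, struct bio *jm, struct bio *jc)
{
    int stream = 0;                         /* e.g., the current core's stream */

    /* Journal description and metadata blocks: same group, ordered relative
     * to the commit record, but no waiting in between. */
    rio_submit(jd, stream, false);
    rio_submit(jm, stream, false);

    /* The commit record ends the group; it carries a FLUSH (set on the bio,
     * elided here) so that rio_wait() implies durability of the whole group. */
    rio_submit(jc, stream, true);
    rio_wait(stream);
}
```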
A typical use case that relies heavily on the storage order is file system journaling (or write-ahead logging). File systems can associate each independent on-disk journal with a stream and use rio_submit to dispatch journaling I/Os. We show a detailed file system design atop Rio in §4.7. Applications that are built atop the block device (e.g., BlueStore [3]) can also use Rio to accelerate on-disk transactions. For example, applications can replace the asynchronous I/O interfaces (e.g., libaio [2]) with librio, which is a wrapper of the in-kernel interfaces such as rio_submit.
RioFS: A File System atop Rio
In this section, we introduce RioFS to alleviate the performance bottleneck from the file system and to evaluate unmodified applications atop Rio. We develop RioFS based on Ext4 and use two main techniques (Figure 9). First, RioFS replaces all synchronous transfer and FLUSH commands that are used for storage order with the stream interfaces (e.g., rio_submit). This parallelizes the ordered write requests of a single transaction. Second, RioFS employs a per-core journaling design from iJournaling [38] to increase the multicore scalability. Specifically, each core has a private journal area and performs journaling almost independently. When fsync is called, the thread dispatches only the corresponding file-level transaction to its dedicated stream. To handle journal conflicts (e.g., multiple transactions on an inode), RioFS compares the global transaction IDs (sub-transaction IDs) and applies the latest transaction during checkpoint or recovery, following iJournaling's design. The only difference between iJournaling and RioFS is the method of guaranteeing storage order. iJournaling uses synchronous transfer and FLUSH commands while RioFS uses the Rio sequencer.
RioFS uses Rio to handle normal IPUs, but falls back to the classic approach (i.e., synchronous FLUSH) to handle block reuse. Performing FLUSH for block reuse will not harm the average throughput unless the file system is nearly 100% full. In normal situations, RioFS first finds free data blocks that are not referenced by any file, so no FLUSH commands are needed.
Proof of Rio's and RioFS's Correctness
This section proves the correctness of Rio's approach to storage order. We refer to the data blocks of an ordered write request as D_n. The n represents the global order, i.e., the seq value. We refer to the associated ordering attribute of the request as OA_n; when its persist value is 0, i.e., the associated data blocks are not durable, we write the ordering attribute as OA'_n. We use the term ← to describe the "persist-before" relationship; D_{n-1} ← D_n means D_{n-1} must be durable prior to D_n. We thus have OA'_n ← D_n ← OA_n (steps 5, 6 and 7 of Figure 4 and §4.3). To prove the correctness of Rio, we only need to prove that the post-crash state of Rio is valid, as the in-order completion mechanism of Rio guarantees an ordered state during normal execution. Assume there are N ordered write requests. Then, there are N+1 valid post-crash states: ∅, D_1, D_1 ← D_2, ..., D_1 ← D_2 ← ... ← D_N. All states preserve prefix semantics. Other states, e.g., a state in which D_{n+1} is durable while D_n is not, are invalid.
We consider the basic case with no merging and splitting first. During crash recovery, Rio first scans ordering attributes from OA_1 to OA_N. Suppose it finds that the first non-durable ordering attribute is OA'_k. In other words, the preceding ordering attributes are OA_1, OA_2, ..., OA_{k-1}. As D_n ← OA_n, the data blocks of the former k-1 requests are durable, i.e., D_1 ← D_2 ← ... ← D_{k-1}. This is a valid state that obeys the storage order. Due to the asynchronous execution, data blocks of a request later than the kth can be durable. Suppose this request is the mth; we thus have D_m with m > k durable while D_k is not, which disobeys the storage order. As OA'_m ← D_m and OA'_m already records the locations of D_m, Rio performs the recovery algorithm to discard or replay D_k, ..., D_{m-1}. As a result, the post-crash state remains D_1 ← D_2 ← ... ← D_{k-1} by discarding, or changes to D_1 ← D_2 ← ... ← D_m by replaying ( §4.4). Both post-crash states are valid and therefore Rio preserves the storage order.
Recall that Rio can only merge data blocks of consecutive requests (principle 3 from §4.5). Assume Rio merges D_k, D_{k+1}, ..., D_m into D_{k..m}, where D_{k..m} indicates the data blocks from request k to request m. Thus, the associated ordering attributes are also merged into OA_{k..m}. During crash recovery, OA_{k..m} is considered as one sole ordering attribute. Then, the proof returns to the aforementioned basic case. The only difference is that the consistency guarantee among D_k to D_m is enhanced to atomicity. In particular, there are m-k+2 valid post-crash states without merging: ∅, D_k, D_k ← D_{k+1}, ..., D_k ← D_{k+1} ← ... ← D_m. With Rio's merging, the number of post-crash states is reduced to 2. The states are ∅ or D_k ← D_{k+1} ← ... ← D_m, representing the "nothing" or "all" states of the atomicity guarantee, respectively.
Recall that Rio merges the divided requests back to the original request when performing recovery. As a result, the proof also returns to the basic case if a request is split.
The correctness of RioFS depends on Rio and iJournaling. For storage order, RioFS replaces the FLUSH commands with Rio's ordering primitive to connect the file system (iJournaling) to the block layer. Since Rio guarantees storage order and iJournaling is correct, RioFS is correct.
Discussion
Rio assumes a sole initiator server accessing an array of SSDs. Distributed concurrency control over multiple initiator servers is orthogonal to this paper, and will not be the slower part that affects the overall performance, as it is performed mostly in memory, which is significantly faster than remote storage access (1M ops/s). For example, the throughput of allocating a sequence number approaches 100M ops/sec [19]. Rio's architecture (Figure 4) can be extended to support multiple initiator servers, by extending the Rio sequencer and RioFS to distributed services. We leave this for future work.
Rio Implementation
We implement Rio based on the NVMe over RDMA driver from Mellanox [33] and Linux kernel 4.18.20 on real hardware.
A critical issue is to pass ordering attributes across the I/O stack. To distinguish ordered requests from the orderless, Rio uses two special flags to represent normal and final ordered write requests. Rio sequencer uses the private field (bi_private) of the original block I/O data structure (bio) to store ordering attributes and original private information together. The block layer can therefore get ordering attributes from the bio struct and perform scheduling. The initiator driver passes ordering attributes using reserved fields of the NVMe-oF write command. The detailed command format is presented in Table 1. The target driver persists ordering attributes in PMR via persistent MMIO write (i.e., an MMIO read after an MMIO write). As commercial SSDs used in the experiments do not support the relatively new PMR feature (released in NVMe spec 1.4 circa June 2019), we use 2 MB in-SSD capacitor-backed DRAM to serve as the PMR region for commercial SSDs, which is the same as in Horae [28] and ccNVMe [30]. Specifically, Rio remaps the non-volatile DRAM of an OC-SSD by the PCIe Base Address Register technique, which is mature and widely used by SSDs and NICs (e.g., doorbells). The ordering attributes are written to PMR and data blocks are sent to commercial SSDs.
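One plausible way to piggyback the attribute on a bio, consistent with the bi_private technique described above (our sketch, not the actual Rio patch), is to interpose a small context that is unwound in the completion path:

```c
#include <linux/bio.h>
#include <linux/slab.h>
#include <linux/types.h>
#include <linux/errno.h>

/* Compact stand-in for the ordering attribute of §4.2. */
struct rio_attr {
	u64 seq, prev, lba;
	u32 num;
	u8  persist, split;
};

struct rio_bio_ctx {
	void         *orig_private;   /* the submitter's original bi_private  */
	bio_end_io_t *orig_end_io;    /* the submitter's original completion  */
	struct rio_attr attr;
};

static void rio_bio_end_io(struct bio *bio)
{
	struct rio_bio_ctx *ctx = bio->bi_private;

	/* ... in-order completion / persist-field handling would go here ... */

	bio->bi_private = ctx->orig_private;   /* restore the caller's context */
	bio->bi_end_io  = ctx->orig_end_io;
	kfree(ctx);
	bio->bi_end_io(bio);                   /* hand the bio back to its owner */
}

static int rio_attach_attr(struct bio *bio, const struct rio_attr *attr)
{
	struct rio_bio_ctx *ctx = kmalloc(sizeof(*ctx), GFP_NOIO);

	if (!ctx)
		return -ENOMEM;
	ctx->orig_private = bio->bi_private;
	ctx->orig_end_io  = bio->bi_end_io;
	ctx->attr         = *attr;
	bio->bi_private   = ctx;
	bio->bi_end_io    = rio_bio_end_io;
	return 0;
}
```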
Evaluation
In this section, we first describe the setup of the test environment ( §6.1). Next, we evaluate the performance of Rio atop both flash and Optane SSDs, as well as atop storage arrays both within a single target server and across multiple target servers ( §6.2). Then, we examine the performance of RioFS through microbenchmarks ( §6.3) and applications ( §6.4). We finally study the recovery overhead of Rio ( §6.5).
Experimental Setup
Hardware. We conduct all experiments in three physical servers. One is the initiator and the other two are target servers. Each server has 2 Intel Xeon Gold 5220 CPUs, and each CPU has 18 cores and runs at 2.20 GHz. We test three We also extend Horae to the same NVMe over RDMA stack. Specifically, the control path is built atop the initiator driver, and uses two-sided RDMA SEND operations to transfer the ordering metadata. When ordering metadata arrives at the target server, the target driver forwards it to PMR by a persistent MMIO write.
For file system and application performance, we compare RioFS against Ext4 and HoraeFS. To ensure the fairness of comparisons, we also adopt iJournaling's design in HoraeFS, the same as RioFS ( §4.7). All three file systems are based on the same codebase of Ext4 and the same OS, and use metadata journaling and 1 GB journal space in total. Both RioFS and HoraeFS allocate 24 streams during the evaluation, which are enough to drive all 4 SSDs on the targets in most cases. CPU efficiency. We use CPU efficiency to quantify the ability of storage systems to drive the I/O devices using a single unit of CPU cycles. Specifically, CPU efficiency is consistent with the write requests each CPU cycle can serve, i.e., throughput ÷ CPU utilization. The CPU utilization is collected by top command.
Block Device Performance
We examine the performance of block devices with different ways of storage ordering guarantees: Linux NVMe over RDMA, Horae and Rio. We also measure the performance of orderless write requests, to quantify how far we are from the optimal performance of ordered write requests. We conduct the experiments by varying the number of threads, the write size and the batch size of each group while keeping other parameters constant. We collect the throughput and CPU utilization. Figures 10, 11 and 12 illustrate the results across a variety of configurations. We get three major findings: (1) Rio achieves significantly higher I/O throughput with higher CPU efficiency than its peers; (2) the throughput and CPU efficiency of Rio come close to the orderless; (3) the Rio I/O scheduler greatly increases the CPU efficiency and further boosts the throughput. We next describe each figure in detail.
In the flash SSD (Figure 10(a)), Rio achieves two orders of magnitude higher throughput than Linux NVMe-oF and outperforms Horae by 2.8× on average. Rio offers higher throughput with 18.0× and 1.7× CPU efficiency in the initiator server, and 22.7× and 2.1× CPU efficiency in the target server on average compared to Linux and Horae, respectively. Rio prevails over its peers for two reasons. First, Rio removes the prohibitive FLUSH command, which is usually a device-wide operation of the SSDs that do not have power loss protection (PLP) (e.g., capacitors for data since the last FLUSH). The tested flash SSD does not have PLP, and whenever it receives a FLUSH command, it instantly drains off the data in the volatile buffer to the persistent flash memory, which neutralizes the multicore concurrency from the host and multi-chip concurrency from the flash when little data is buffered. Second, Rio makes the ordered I/O path mostly asynchronous and thus fully exploits the bandwidth of the NIC and SSD. We observe that the throughput of Horae is significantly lower than that of Rio when the number of threads is small, due to synchronous execution of the control path. The control path also decreases the CPU efficiency as the control path incurs additional network traffic (e.g., RDMA SEND) and therefore demands more CPU cycles. Rio reuses the functions of request merging from the orderless. By comparing the CPU efficiency of these two systems, we find the additional logic that the Rio I/O scheduler adds (e.g., comparing the ordering attributes) does not introduce much overhead to either the CPU or the I/O performance.
In the Optane SSD (Figure 10(b)), Rio exhibits 9.4× and 3.3× the throughput of Linux and Horae on average. Rio delivers 1.7× and 1.7× CPU efficiency in the initiator server, and 3.5× and 2.0× CPU efficiency in the target server compared to Linux and Horae, respectively. Here, the Optane SSD has PLP and thus the FLUSH does not influence the throughput significantly. Yet, the synchronous execution is still a dominant factor affecting the overall throughput and CPU efficiency. By dramatically reducing the proportion of synchronous execution, Rio shows throughput and CPU efficiency similar to the orderless.
We extend the experiments to multiple SSDs (Figure 10(c)) and two target servers (Figure 10(d)). The SSDs are organized as a single logical volume and the tested systems distribute 4 KB data blocks to individual physical SSDs in a round-robin fashion. Linux can not dispatch the following ordered write request to other SSDs until the previous one finishes. Horae can not dispatch ordered write requests to SSDs in parallel before the control path finishes. Unlike Linux and Horae, Rio can distribute ordered write requests to SSDs concurrently and hence shows high CPU efficiency and I/O throughput. As we add more SSDs, a single thread can not fully utilize the overall bandwidth, even for the orderless. In this case, CPU efficiency becomes a dominant factor that affects the I/O throughput. Rio fully drives the SSDs with 4 threads due to high CPU efficiency.
6.2.2 Performance with varying write sizes. Figure 11 presents the throughput and CPU efficiency with varying write sizes. Only one thread is launched to perform sequential and random ordered writes.
We get results similar to §6.2.1. Specifically, Rio outperforms Linux and Horae by up to two orders of magnitude and 6.1×, respectively, with a more efficient use of CPU cycles. The key takeaway here is that asynchronous execution is also vital for larger ordered write requests. Even for 64 KB writes, the throughput of Horae is half that of Rio, as more than 30% of the CPU cycles are consumed by the control path. This CPU inefficiency leads to the suboptimal throughput.
6.2.3 Performance with varying batch sizes. Figure 12 shows the throughput and CPU efficiency with varying batch sizes. Each batch contains a sequence of 4 KB sequential write requests that can be merged.
When the computation resources are limited (Figure 12(a)), merging substantially reduces the CPU cycles spent in the drivers. This further increases the overall throughput of Rio (see the comparison between Rio and Rio w/o merge). When the computation resources are sufficient (Figure 12(b)), the SSDs' bandwidth is almost saturated, so merging does not lead to significantly higher I/O throughput. Yet, Rio retains high CPU efficiency and reserves more CPU cycles for other applications that use Rio or RioFS. Horae also allows merging on the data paths, and merging likewise increases its CPU efficiency. However, owing to the synchronous control path, the increase in Horae's CPU efficiency is less significant than that of Rio and the orderless. Hence, the normalized CPU efficiency of Horae decreases as the batch size increases. This indicates that Rio's asynchronous execution at both NICs and SSDs plays an essential role in its high CPU efficiency.
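The benefit of batching comes from collapsing consecutive 4 KB writes into fewer, larger requests before they reach the driver and the NIC. A simplified merging pass over one batch, ignoring Rio's ordering attributes and any maximum request size, could look like the following sketch:

```python
# Sketch: merge contiguous (offset, length) write requests within one batch.
# Real block-layer merging also respects ordering attributes and size limits.
from typing import List, Tuple

def merge_batch(reqs: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    merged: List[Tuple[int, int]] = []
    for off, length in sorted(reqs):
        if merged and merged[-1][0] + merged[-1][1] == off:
            prev_off, prev_len = merged[-1]
            merged[-1] = (prev_off, prev_len + length)   # extend the previous request
        else:
            merged.append((off, length))
    return merged

if __name__ == "__main__":
    batch = [(i * 4096, 4096) for i in range(8)]         # 8 sequential 4 KB writes
    print(merge_batch(batch))                            # -> [(0, 32768)]
```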
File System Performance
We evaluate the file system performance with FIO [5]. The file system is mounted on the initiator and stores data on a remote Intel 905P Optane SSD. Up to 24 threads are launched, and each issues 4 KB append writes to a private file, each followed by an fsync on the file, which always triggers metadata journaling. Figure 13 plots the results of the fsync calls. We find that Rio saturates the bandwidth of the SSD with fewer CPU cores and achieves lower latency. Specifically, when the number of threads is 16, Rio successfully saturates the Optane SSD. The throughput increases by 3.0× and 1.2× in RioFS against Ext4 and HoraeFS, respectively. The average latency decreases by 67% and 18% in RioFS against Ext4 and HoraeFS. RioFS also makes fsync latency less variable: the 99th percentile latency in RioFS decreases by 50% and 20% against Ext4 and HoraeFS. The improvement in throughput and latency comes from the asynchronous execution of Rio. We explain this with Figure 14, which presents the internal procedure of an append write (i.e., a write followed by fsync) in the file system. It consists of processing three types of data blocks: user data (D), journaled data (JM) including file system metadata and the journal description block, and a journal commit record (JC). Both RioFS and HoraeFS overlap the CPU and I/O processing and let the underlying device process these data blocks concurrently. The difference lies in the way data blocks are dispatched to the lower block layer. HoraeFS leverages the control path to dispatch a group of ordered writes and thus experiences an extra delay. In particular, the latency of (CPU) dispatching JM and JC increases dramatically (see the table in Figure 14) due to the synchronous control path over the network. RioFS can dispatch the following data blocks immediately after they reach the ORDER queue in the block layer. This requires no extra network round trips and thus brings the performance improvement.
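The fsync workload above boils down to repeated 4 KB appends, each followed by an fsync that forces metadata journaling. FIO is what the evaluation actually uses; the short Python sketch below only reproduces the access pattern for readers who want to see it spelled out (the file path is a placeholder):

```python
# Sketch: 4 KB append writes, each followed by fsync, against a private file.
# This mirrors the FIO access pattern described above; it is not FIO itself.
import os, time

def append_fsync(path: str, num_ops: int = 1000, block: bytes = b"\0" * 4096) -> float:
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
    start = time.perf_counter()
    try:
        for _ in range(num_ops):
            os.write(fd, block)   # 4 KB append
            os.fsync(fd)          # forces the journal commit in the file system
    finally:
        os.close(fd)
    return num_ops / (time.perf_counter() - start)   # fsync'd appends per second

if __name__ == "__main__":
    print(f"{append_fsync('/tmp/rio_fsync_test.dat'):.0f} ops/s")
```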
Figure 15. Application performance.
Application Performance
We examine RioFS's performance with two applications: the I/O-intensive Varmail [40], and RocksDB [14], which is both CPU- and I/O-intensive. RioFS is mounted at the initiator server and stores its data on a remote Intel 905P Optane SSD.
Varmail. Varmail is a metadata- and fsync-intensive workload from Filebench [40]. We keep the default configuration and parameters of Varmail but vary the number of threads during the test. Figure 15(a) reports the results.
The throughput increases by 2.3× and 1.3× on average when RioFS serves the application I/Os, compared to Ext4 and HoraeFS. Varmail contains many persistent metadata operations, e.g., creat and unlink followed by fsync. RioFS provides a faster fsync call (details in §6.3) which persists these metadata blocks in an asynchronous fashion, without the serialized I/O operation of HoraeFS. Consequently, RioFS provides higher throughput.
RocksDB. RocksDB is a popular key-value store deployed in several production clusters [14]. We deploy RocksDB atop the tested file systems and measure the throughput of user requests. Here, we use db_bench, a benchmark tool from RocksDB, to evaluate the file system performance under the fillsync workload, which represents the random-write-dominant case. During the test, the benchmark launches up to 36 threads, and each issues 16-byte keys and 1024-byte values to a 20 GB dataset. Figure 15(b) shows the results.
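For reference, the fillsync experiment can be launched with RocksDB's db_bench roughly as sketched below. The flag names are standard db_bench options, but the concrete values and the mount point are assumptions inferred from the description above, so adjust them to your environment and db_bench version:

```python
# Sketch: launching db_bench with the fillsync workload described above.
# Flag values and paths are assumptions, not the paper's exact invocation.
import subprocess

cmd = [
    "./db_bench",
    "--benchmarks=fillsync",     # random writes, each followed by a sync
    "--threads=36",              # up to 36 client threads, as in the text
    "--key_size=16",
    "--value_size=1024",
    "--num=20000000",            # sized to approximate the 20 GB dataset
    "--db=/mnt/riofs/rocksdb",   # hypothetical RioFS mount point
]

if __name__ == "__main__":
    subprocess.run(cmd, check=True)
```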
RioFS increases the throughput of RocksDB by 1.9× and 1.5× on average compared to Ext4 and HoraeFS, respectively. The performance improvement comes from two aspects: the higher I/O utilization and the higher CPU efficiency of Rio, as shown in §6.2. RioFS makes ordered write requests asynchronous, thereby significantly increasing I/O concurrency and reducing the CPU cycles consumed on idly waiting for block I/Os. This in turn provides more CPU cycles for RocksDB, which also demands CPU cycles for in-memory indexing and compaction. In the case of 36 threads, we observe that RocksDB has 110% higher CPU utilization with RioFS than with HoraeFS. Further, RioFS packs the sequential write requests of a transaction into a larger batch (i.e., block merging), and thus reduces the CPU cycles spent on RDMA operations over the network.
As a result, RioFS shows better performance on the CPU- and I/O-intensive RocksDB.
Recovery Time
The recovery time of Rio and Horae depends on the number of in-progress ordered requests before a sudden system crash. The Linux block layer does not need to perform recovery, as it permits only one in-progress ordered write request. Here, we examine the worst case of Rio's and Horae's recovery. Specifically, the test launches 36 threads, and each issues 4 KB ordered write requests continuously without explicitly waiting for completion via fsync. Two target servers and 4 SSDs are used in the test. Another thread randomly injects an error into the target servers, which crashes the target driver and stops the storage service. The initiator server then starts recovery after it reconnects to the target servers. We repeat the tests 30 times and report the average results.
Rio takes around 55 ms to reconstruct the global order, most of which is spent on reading data from the PMR and transferring ordering attributes over the network. Horae takes less time (38 ms) to reload its ordering metadata, as the ordering metadata is smaller than the ordering attributes. Data recovery costs around 125 ms in Rio and 101 ms in Horae; this time is spent discarding the data blocks that disobey the storage order. Compared to Horae, Rio takes more time for data recovery because the number of out-of-order requests in Rio is higher than in Horae. Fortunately, discarding is performed asynchronously for each SSD and each server, so Rio can fully exploit the SSDs' bandwidth and save recovery time.
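Conceptually, data recovery keeps, for each stream, only the prefix of writes whose sequence numbers are contiguous and discards everything after the first gap, since a gap means an earlier ordered write never became durable. The sketch below illustrates that rule for a single stream; the real recovery path also merges ordering attributes across servers and issues the discards asynchronously per SSD:

```python
# Sketch: discard ordered writes that disobey the storage order within one stream.
# Each record carries the sequence range it covers, as in the ordering attributes.
from typing import List, Tuple

def durable_prefix(records: List[Tuple[int, int]]) -> List[Tuple[int, int]]:
    """Keep records while sequence numbers are contiguous; drop the rest."""
    kept: List[Tuple[int, int]] = []
    expected = 0
    for start, end in sorted(records):
        if start != expected:      # a gap: an earlier write never became durable
            break
        kept.append((start, end))
        expected = end + 1
    return kept

if __name__ == "__main__":
    # Sequences 0-3 and 8-9 are durable, 4-7 are missing -> 8-9 must be discarded.
    print(durable_prefix([(0, 3), (8, 9)]))   # -> [(0, 3)]
```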
Conclusion
This paper presents the design, implementation and evaluation of Rio, an order-preserving networked storage stack. By allowing ordered writes to be processed asynchronously and using a set of order-preserving techniques to enforce the persistence order, Rio successfully drives storage devices over multiple target servers while ensuring storage order. We conclude this work with two key observations. First, the I/O stack should exploit the asynchronous interfaces (i.e., multiple deep hardware queues and asynchronous DMA engines) of modern NICs and SSDs to take full advantage of their high bandwidth. Second, although block merging is expensive for the local I/O stack on ultra-low-latency SSDs, it is worth investing some CPU cycles in block merging to substantially reduce the control operations (e.g., RDMA SEND) over the network and further improve CPU and I/O efficiency.
Figure 1. Background of NVMe-oF and Horae. (a) Organization of NVMe-oF. (b) An NVMe-oF version of Horae.
Figure 2. Motivation experiments. NVMe-oF: NVMe over RDMA with ordering guarantees. orderless: NVMe over RDMA with no ordering guarantee.
Figure 3. Motivation for merging consecutive data blocks. Tested system: the orderless Linux NVMe over RDMA.
Figure 4. Rio architecture.
Figure 7. Rio I/O scheduler. (a) The organization of the Rio I/O scheduler. (b) The way Rio handles thread migration.
Figure 8. Request merging and splitting in Rio. The 'persist' field is initialized to 0 and is omitted for simplicity.
Figure 11. Performance with varying write sizes. CPU efficiency is normalized to the orderless.
Figure 12. Performance with varying batch sizes. CPU efficiency is normalized to the orderless.
Figure 13. File system performance. Figure 14. Latency breakdown.
[Figure residue: a caption fragment ("The file system uses Rio to order the two requests. If a crash ...") and diagram labels from the Rio architecture and I/O-scheduler figures: per-core ORDER queues, NIC send/receive queues, streams, the Rio sequencer, the block layer, and the initiator/target drivers, with in-order delivery marked as optional.]
Table 1. Rio NVMe-oF commands atop the 1.4 spec. Rio uses the reserved fields of the NVMe-oF I/O commands to transfer ordering attributes over the network.

Dword:bits | NVMe-oF    | Rio NVMe-oF
00:10-13   | reserved   | Rio op code, e.g., submit
02:00-31   | reserved   | start sequence (seq)
03:00-31   | reserved   | end sequence (seq)
04:00-31   | metadata*  | previous group (prev)
05:00-15   | metadata*  | number of requests (num)
05:16-31   | metadata*  | stream ID
12:16-19   | reserved   | special flags, e.g., boundary
* The metadata field of NVMe-oF is reserved.
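To illustrate how the attributes in Table 1 ride on the existing command format, the snippet below packs a start/end sequence, the previous-group pointer, the request count, and the stream ID into the corresponding 32-bit command Dwords. This mimics only the field placement from the table; it is not the full NVMe-oF command encoding, and the helper is hypothetical:

```python
# Sketch: packing ordering attributes into 32-bit command Dwords (cf. Table 1).
# This mirrors the field placement only, not a complete NVMe-oF command.
import struct

def pack_ordering_attributes(start_seq: int, end_seq: int, prev_group: int,
                             num_reqs: int, stream_id: int) -> dict:
    dwords = {
        2: start_seq & 0xFFFFFFFF,                              # Dword 02: start sequence
        3: end_seq & 0xFFFFFFFF,                                # Dword 03: end sequence
        4: prev_group & 0xFFFFFFFF,                             # Dword 04: previous group
        5: (num_reqs & 0xFFFF) | ((stream_id & 0xFFFF) << 16),  # Dword 05: num | stream ID
    }
    return {k: struct.pack("<I", v) for k, v in dwords.items()}

if __name__ == "__main__":
    for dw, raw in pack_ordering_attributes(100, 107, 99, 8, 3).items():
        print(f"Dword {dw:02d}: {raw.hex()}")
```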
Acknowledgements
We sincerely thank our shepherd Xiaosong Ma and the anonymous reviewers for their valuable feedback. This work is funded by the National Natural Science Foundation of China (Grant Nos. 62022051 and 61832011).
Internet Small Computer Systems Interface (iSCSI). 2004. https://www.ietf.org/rfc/rfc3720.txt.
An async IO implementation for Linux. 2022. https://elixir.bootlin.com/linux/v4.18.20/source/fs/aio.c.
Abutalib Aghayev, Sage Weil, Michael Kuchnik, Mark Nelson, Gregory R. Ganger, and George Amvrosiadis. 2019. File Systems Unfit as Distributed Storage Backends: Lessons from 10 Years of Ceph Evolution. In Proceedings of the 27th ACM Symposium on Operating Systems Principles (SOSP '19). 353-369. https://doi.org/10.1145/3341301.3359656
Remzi H. Arpaci-Dusseau and Andrea C. Arpaci-Dusseau. 2022. Crash Consistency: FSCK and Journaling. http://pages.cs.wisc.edu/~remzi/Classes/537/Spring2011/Book/file-journaling.pdf.
Jens Axboe. 2017. fio -- Flexible I/O tester. https://fio.readthedocs.io/en/latest/fio_doc.html.
Srivatsa S. Bhat, Rasha Eqbal, Austin T. Clements, M. Frans Kaashoek, and Nickolai Zeldovich. 2017. Scaling a File System to Many Cores Using an Operation Log. In Proceedings of the 26th Symposium on Operating Systems Principles (SOSP '17). 69-86. https://doi.org/10.1145/3132747.3132779
Matias Bjørling, Jens Axboe, David Nellans, and Philippe Bonnet. 2013. Linux Block IO: Introducing Multi-Queue SSD Access on Multi-Core Systems. In Proceedings of the 6th International Systems and Storage Conference (SYSTOR '13). https://doi.org/10.1145/2485732.2485740
Chander Chadha. 2017. NVMe SSD with Persistent Memory Region. https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2017/20170810_FM31_Chanda.pdf.
Yun-Sheng Chang, Yao Hsiao, Tzu-Chi Lin, Che-Wei Tsao, Chun-Feng Wu, Yuan-Hao Chang, Hsiang-Shang Ko, and Yu-Fang Chen. 2020. Determinizing Crash Behavior with a Verified Snapshot-Consistent Flash Translation Layer. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). 81-97. https://www.usenix.org/conference/osdi20/presentation/chang
Yun-Sheng Chang and Ren-Shuo Liu. 2019. OPTR: Order-Preserving Translation and Recovery Design for SSDs with a Standard Block Device Interface. In 2019 USENIX Annual Technical Conference (USENIX ATC '19). 1009-1023.
Vijay Chidambaram. 2015. Orderless and Eventually Durable File Systems. https://research.cs.wisc.edu/wind/Publications/vijayc-thesis15.pdf.
Vijay Chidambaram, Thanumalayan Sankaranarayana Pillai, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. 2013. Optimistic Crash Consistency. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles (SOSP '13). 228-243. https://doi.org/10.1145/2517349.2522726
Vijay Chidambaram, Tushar Sharma, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau. 2012. Consistency without Ordering. In Proceedings of the 10th USENIX Conference on File and Storage Technologies (FAST '12).
Facebook. 2022. A Persistent Key-Value Store for Fast Storage. https://rocksdb.org/.
Jaehyun Hwang, Qizhe Cai, Ao Tang, and Rachit Agarwal. 2020. TCP ≈ RDMA: CPU-efficient Remote Storage Access with i10. In 17th USENIX Symposium on Networked Systems Design and Implementation (NSDI 20). 127-140.
Intel. 2022. Intel Optane SSD 905P Series. https://ark.intel.com/content/www/us/en/ark/products/series/129835/intel-optane-ssd-905p-series.html.
Intel Corporation. 2022. Intel Optane SSD DC P5800X Series. https://ark.intel.com/content/www/us/en/ark/products/201859/intel-optane-ssd-dc-p5800x-series-1-6tb-2-5in-pcie-x4-3d-xpoint.html.
Eun Young Jeong, Shinae Woo, Muhammad Jamshed, Haewon Jeong, Sunghwan Ihm, Dongsu Han, and KyoungSoo Park. 2014. mTCP: A Highly Scalable User-Level TCP Stack for Multicore Systems. In Proceedings of the 11th USENIX Conference on Networked Systems Design and Implementation (NSDI '14). 489-502.
Anuj Kalia, Michael Kaminsky, and David G. Andersen. 2016. Design Guidelines for High Performance RDMA Systems. In 2016 USENIX Annual Technical Conference (USENIX ATC 16). 437-450.
Linux kernel development community. 2022. ext4 Data Structures and Algorithms. https://www.kernel.org/doc/html/latest/filesystems/ext4/index.html.
Dohyun Kim, Kwangwon Min, Joontaek Oh, and Youjip Won. 2022. ScaleXFS: Getting Scalability of XFS Back on the Ring. In 20th USENIX Conference on File and Storage Technologies (FAST 22). 329-344.
Jongseok Kim, Cassiano Campes, Joo-Young Hwang, Jinkyu Jeong, and Euiseong Seo. 2021. Z-Journal: Scalable Per-Core Journaling. In 2021 USENIX Annual Technical Conference (USENIX ATC 21). 893-906.
Ana Klimovic, Christos Kozyrakis, Eno Thereska, Binu John, and Sanjeev Kumar. 2016. Flash Storage Disaggregation. In Proceedings of the Eleventh European Conference on Computer Systems (EuroSys '16). https://doi.org/10.1145/2901318.2901337
Ana Klimovic, Heiner Litz, and Christos Kozyrakis. 2017. ReFlex: Remote Flash ≈ Local Flash. In Proceedings of the Twenty-Second International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS '17). 345-359. https://doi.org/10.1145/3037697.3037732
Changman Lee, Dongho Sim, Joo-Young Hwang, and Sangyeun Cho. 2015. F2FS: A New File System for Flash Storage. In Proceedings of the 13th USENIX Conference on File and Storage Technologies (FAST '15). 273-286.
Gyusun Lee, Seokha Shin, Wonsuk Song, Tae Jun Ham, Jae W. Lee, and Jinkyu Jeong. 2019. Asynchronous I/O Stack: A Low-latency Kernel I/O Stack for Ultra-Low Latency SSDs. In 2019 USENIX Annual Technical Conference (USENIX ATC 19). 603-616.
Baptiste Lepers, Oana Balmau, Karan Gupta, and Willy Zwaenepoel. 2019. KVell: The Design and Implementation of a Fast Persistent Key-Value Store. In Proceedings of the 27th ACM Symposium on Operating Systems Principles (SOSP '19). 447-461. https://doi.org/10.1145/3341301.3359628
Xiaojian Liao, Youyou Lu, Erci Xu, and Jiwu Shu. 2020. Write Dependency Disentanglement with HORAE. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20). 549-565.
Xiaojian Liao, Youyou Lu, Erci Xu, and Jiwu Shu. 2021. Max: A Multicore-Accelerated File System for Flash Storage. In 2021 USENIX Annual Technical Conference (USENIX ATC 21). 877-891.
Xiaojian Liao, Youyou Lu, Zhe Yang, and Jiwu Shu. 2021. Crash Consistent Non-Volatile Memory Express. In Proceedings of the ACM SIGOPS 28th Symposium on Operating Systems Principles (SOSP '21). 132-146. https://doi.org/10.1145/3477132.3483592
Marshall Kirk McKusick and Gregory R. Ganger. 1999. Soft Updates: A Technique for Eliminating Most Synchronous Writes in the Fast Filesystem. In Proceedings of the USENIX Annual Technical Conference (ATEC '99).
Mellanox. 2022. 200Gb/s ConnectX-6 Ethernet Single/Dual-Port Adapter IC. https://www.mellanox.com/products/ethernet-adapter-ic/connectx-6-en-ic.
Mellanox. 2022. Mellanox OpenFabrics Enterprise Distribution for Linux. https://www.mellanox.com/downloads/ofed/MLNX_OFED-4.7-3.2.9.0/MLNX_OFED_LINUX-4.7-3.2.9.0-ubuntu18.04-x86_64.tgz.
Jaehong Min, Ming Liu, Tapan Chugh, Chenxingyu Zhao, Andrew Wei, In Hwan Doh, and Arvind Krishnamurthy. 2021. Gimbal: Enabling Multi-Tenant Storage Disaggregation on SmartNIC JBOFs. In Proceedings of the 2021 ACM SIGCOMM Conference (SIGCOMM '21). 106-122. https://doi.org/10.1145/3452296.3472940
NVMe organization. 2022. Non-Volatile Memory Express. https://nvmexpress.org.
Oracle. 2022. MySQL Reference Manual. https://dev.mysql.com/doc/refman/8.0/en/.
Daejun Park and Dongkun Shin. 2017. iJournaling: Fine-Grained Journaling for Improving the Latency of Fsync System Call. In 2017 USENIX Annual Technical Conference (USENIX ATC '17). 787-798.
SQLite Consortium. 2022. SQLite. https://www.sqlite.org/index.html.
Vasily Tarasov. 2017. Filebench -- A Model Based File System Workload Generator. https://github.com/filebench/filebench.
Jason Taylor. 2015. Facebook's Data Center Infrastructure: Open Compute, Disaggregated Rack, and Beyond. In 2015 Optical Fiber Communications Conference and Exhibition (OFC).
Xingda Wei, Xiating Xie, Rong Chen, Haibo Chen, and Binyu Zang. 2021. Characterizing and Optimizing Remote Persistent Memory with RDMA and NVM. In 2021 USENIX Annual Technical Conference (USENIX ATC 21). 523-536.
Youjip Won, Jaemin Jung, Gyeongyeol Choi, Joontaek Oh, Seongbae Son, Jooyoung Hwang, and Sangyeun Cho. 2018. Barrier-Enabled IO Stack for Flash Storage. In Proceedings of the 16th USENIX Conference on File and Storage Technologies (FAST '18). 211-226.
Seung Won Yoo, Joontaek Oh, and Youjip Won. 2022. O-AFA: Order Preserving All Flash Array. In Proceedings of the 15th ACM International Conference on Systems and Storage (SYSTOR '22). 96-107. https://doi.org/10.1145/3534056.3534942
| [
"https://github.com/filebench/filebench."
] |
[
"Phonons in Intrinsic Josephson Systems with Parallel Magnetic Field",
"Phonons in Intrinsic Josephson Systems with Parallel Magnetic Field"
] | [
"C Preis \nInstitut für Theoretische Physik\nUniversität Regensburg\nD-93040RegensburgGermany\n",
"C Helm \nLos Alamos National Laboratory\nT-11, 87545Los AlamosNMUSA\n",
"K Schmalzl \nInstitut für Theoretische Physik\nUniversität Regensburg\nD-93040RegensburgGermany\n",
"J Keller \nInstitut für Theoretische Physik\nUniversität Regensburg\nD-93040RegensburgGermany\n",
"R Kleiner \nPhysikalisches Institut\nUniversität Tübingen\nD-72076TübingenGermany\n",
"P Müller \nPhysikalisches Institut III\nUniversität Erlangen-Nürnberg\nD-91058ErlangenGermany\n"
] | [
"Institut für Theoretische Physik\nUniversität Regensburg\nD-93040RegensburgGermany",
"Los Alamos National Laboratory\nT-11, 87545Los AlamosNMUSA",
"Institut für Theoretische Physik\nUniversität Regensburg\nD-93040RegensburgGermany",
"Institut für Theoretische Physik\nUniversität Regensburg\nD-93040RegensburgGermany",
"Physikalisches Institut\nUniversität Tübingen\nD-72076TübingenGermany",
"Physikalisches Institut III\nUniversität Erlangen-Nürnberg\nD-91058ErlangenGermany"
] | [] | Subgap resonances in the I-V curves of layered superconductors are explained by the coupling between Josephson oscillations and phonons with dispersion in c-direction. In the presence of a magnetic field applied parallel to the layers additional structures due to fluxon motion appear. Their coupling with phonons is investigated theoretically and a shift of the phonon resonances in strong magnetic fields is predicted. | 10.1016/s0921-4534(01)00646-3 | [
"https://export.arxiv.org/pdf/cond-mat/0009195v1.pdf"
] | 5,671,906 | cond-mat/0009195 | fba63842d55ad8a010d5d6006d35ce0d58071419 |
Phonons in Intrinsic Josephson Systems with Parallel Magnetic Field
13 Sep 2000
C Preis
Institut für Theoretische Physik
Universität Regensburg
D-93040RegensburgGermany
C Helm
Los Alamos National Laboratory
T-11, 87545Los AlamosNMUSA
K Schmalzl
Institut für Theoretische Physik
Universität Regensburg
D-93040RegensburgGermany
J Keller
Institut für Theoretische Physik
Universität Regensburg
D-93040RegensburgGermany
R Kleiner
Physikalisches Institut
Universität Tübingen
D-72076TübingenGermany
P Müller
Physikalisches Institut III
Universität Erlangen-Nürnberg
D-91058ErlangenGermany
Phonons in Intrinsic Josephson Systems with Parallel Magnetic Field
13 Sep 2000. PACS: 74.50.+r, 74.60.Ge. Keywords: Josephson, phonons, flux motion
Subgap resonances in the I-V curves of layered superconductors are explained by the coupling between Josephson oscillations and phonons with dispersion in c-direction. In the presence of a magnetic field applied parallel to the layers additional structures due to fluxon motion appear. Their coupling with phonons is investigated theoretically and a shift of the phonon resonances in strong magnetic fields is predicted.
Excitation of phonons by Josephson oscillations
The c-axis transport in the highly anisotropic cuprate superconductors Tl$_2$Ba$_2$Ca$_2$Cu$_3$O$_{10+\delta}$ (TBCCO) and Bi$_2$Sr$_2$CaCu$_2$O$_{8+\delta}$ (BSCCO) can be well described by a model where the superconducting CuO$_2$ layers are coupled by tunneling barriers, forming a stack of Josephson junctions [1,2].
Subgap structures [3,4] in the I-V characteristics could be explained by the excitation of longitudinal c-axis phonons by the Josephson oscillations in resistive junctions [5-7]. In Ref. [8] a microscopic theory for the coupling between Josephson oscillations and phonons has been derived for the case of a short stack of Josephson junctions, where it can be assumed that the gauge-invariant phase difference
$$\gamma_n(x,t) = \varphi_n(x,t) - \varphi_{n+1}(x,t) - \frac{2e}{\hbar}\int_{n}^{n+1} dz\, A_z(x,z,t)$$
between superconducting layers $n$ and $n+1$ is constant along the layers (independent of $x$). In the simplest model the tunneling current through a junction $n$ is related to the bias current density $j_b$ by the RSJ equation
$$j_b = j_c \sin\gamma_n(t) + \sigma E_n(t) + \dot{D}_n(t). \qquad (1)$$
Here $E_n(t)$ is the average electric field across the junction, which is related to the phase difference $\gamma_n$ by the second Josephson equation $\hbar\dot{\gamma}_n = 2edE_n$, where $d$ is the thickness of the barrier. $D_n = E^{\rho}_n$ is the field generated by the oscillating conduction-electron charges on the CuO$_2$ layers. It can be expressed by the average field in the barrier and the ionic polarization, $D_n = \epsilon_0 E_n + P_n$. The polarization $P_n$ is proportional to the lattice displacements of ions. Phonons are excited by the field of the oscillating electronic charges on the CuO$_2$ layers. The field $D(k_z)$ can be expressed by a generalized dielectric function, $D(k_z) = \epsilon_0\,\epsilon^{\rm ph}_{zz}(k_z,\omega)\,E(k_z)$, of the form
$$\epsilon^{\rm ph}_{zz}(k_z,\omega) = \left[\, 1 - \sum_{\lambda} \frac{|\Omega(k_z,\lambda)|^2}{\omega^2(k_z,\lambda) - \omega^2} \,\right]^{-1}. \qquad (2)$$
Here $\omega(k_z,\lambda)$ are the eigenfrequencies of the dynamical matrix (including long-range Coulomb forces). The oscillator strength $|\Omega(k_z,\lambda)|^2$ takes care of the fact that ions inside the barrier and on the CuO$_2$ layers are excited by different fields and have to be counted differently in the polarization in the RSJ equation (1).
The dielectric function has zeros at the frequencies of longitudinal c-axis phonons. It can also be written in the more common form
$$\epsilon^{\rm ph}_{zz}(k_z,\omega) = \epsilon_{\infty} + \sum_{\lambda} \frac{|\Omega(k_z,\lambda)|^2}{\tilde{\omega}^2(k_z,\lambda) - \omega^2} \qquad (3)$$
where we have included a background dielectric constant $\epsilon_\infty$. The frequencies $\tilde{\omega}(k_z,\lambda)$ at which the dielectric function has poles can also be calculated from a dynamical matrix in which the long-range Coulomb forces have been subtracted. In the limit $k_z \to 0$ they correspond to the transversal optical eigenfrequencies of the system. In the resistive state the phase difference of barrier $n$ has the form
$$\gamma_n(t) = \theta_n + \omega t + \delta\gamma_n(t), \qquad (4)$$
where $\delta\gamma_n(t)$ oscillates with the same Josephson frequency $\omega$, while for a barrier in the superconducting state the term $\omega t$ is missing. Inserting this ansatz in the RSJ equations we can solve for the dc part and the oscillating part. In the special case where only one junction of the stack is in the resistive state we obtain for the normalized dc current $J_b := j_b/j_c$ as a function of the dc voltage $V = \hbar\omega/(2e)$:
$$J_b = J_{qp}(V) + \frac{\omega_J^2}{2\omega^2}\, \frac{\bar{\epsilon}_2 + \sigma/(\epsilon_0\omega)}{\bar{\epsilon}_1^{\,2} + \big(\bar{\epsilon}_2 + \sigma/(\epsilon_0\omega)\big)^2}. \qquad (5)$$
Here $\bar\epsilon(\omega) = \bar\epsilon_1(\omega) + i\bar\epsilon_2(\omega)$ is an averaged phonon dielectric function defined by
$$\bar\epsilon(\omega) = G^{-1}(\omega) + \frac{\tilde\omega_J^2}{\omega^2} - \frac{i\sigma}{\epsilon_0\omega}, \qquad (6)$$
$$G(\omega) = \frac{1}{N_z}\sum_{k_z} \frac{1}{\epsilon^{\rm ph}_{zz}(k_z,\omega) - \dfrac{\tilde\omega_J^2}{\omega^2} + \dfrac{i\sigma}{\epsilon_0\omega}}. \qquad (7)$$
Here $\tilde\omega_J^2 = \omega_J^2\sqrt{1-(j/j_c)^2}$, and $\omega_J^2 = 2edj_c/(\hbar\epsilon_0)$ is the bare Josephson plasma frequency. In the case of a dispersionless phonon we simply have $\bar\epsilon(\omega) = \epsilon^{\rm ph}_{zz}(\omega)$. In the general case the I-V curve shows peaks at the zeros of the real part of $\bar\epsilon(\omega)$, corresponding to c-axis phonons with a high density of states and non-vanishing oscillator strength $|\Omega|^2$. Besides optical $\Gamma$-point phonons, phonons from the edge of the Brillouin zone at $k_z = \pi/d$ and even acoustical phonons may contribute to phonon resonances in the I-V curves [8].
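As a quick numerical illustration of where such a peak sits, one can take a single dispersionless oscillator, $\epsilon(\omega)=\epsilon_\infty + |\Omega|^2/(\omega_{\rm TO}^2-\omega^2-ir\omega)$ (used again in Eq. (19) below), so that $\bar\epsilon = \epsilon^{\rm ph}_{zz}$, and locate the zero of ${\rm Re}\,\bar\epsilon(\omega)$, which for weak damping lies at $\omega_{\rm LO} = \sqrt{\omega_{\rm TO}^2 + |\Omega|^2/\epsilon_\infty}$. The parameter values in the sketch are arbitrary placeholders, not fits to TBCCO or BSCCO:

```python
# Sketch: locate the subgap resonance as the zero of Re eps(w) for a single
# dispersionless phonon, eps(w) = eps_inf + Omega2 / (w_TO^2 - w^2 - i r w).
# Parameter values are placeholders for illustration only (frequencies in units of w_TO).
import numpy as np

eps_inf, Omega2, w_TO, r = 10.0, 50.0, 1.0, 0.02

def eps(w):
    return eps_inf + Omega2 / (w_TO**2 - w**2 - 1j * r * w)

w = np.linspace(0.5, 4.0, 40000)
re = np.real(eps(w))
crossing = w[np.where(np.diff(np.sign(re)) > 0)[0]]      # Re eps goes from - to +
print("numerical zero of Re eps :", crossing)
print("analytic  w_LO (r -> 0)  :", np.sqrt(w_TO**2 + Omega2 / eps_inf))
```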
The coupling to an acoustical phonon at $k_z = \pi/d$ may explain a resonance observed for TBCCO at 3.2 mV by Seidel et al. [9], at a frequency/voltage which is lower than any optical phonon branch expected from model calculations.
A fit to the experimental data can be made with reasonable values of the oscillator strength and phonon damping which are compatible with optical experiments. Also, a double-peak structure found in BSCCO [7] may be due to the coupling to one optical branch with two van Hove singularities at $k_z = 0$ and $k_z = \pi/d$.
The dispersion of phonons also leads to a coupling (phase-locking) of Josephson oscillations in different resistive junctions [8,13]. Phase-locking in a stack of Josephson junctions is important for applications of such systems as high-frequency mixers and detectors.
Vortex motion and phonons
In the case of long junctions and in the presence of an external magnetic field applied parallel to the layers, we have to account for the variation of the phase $\gamma_n(x,t)$ along the layers ($x$-direction). The result is a generalization of the coupled sine-Gordon equations derived in [10-12]:
$$\partial_x^2 \gamma_n(x,t) = \frac{1}{\lambda_J^2}\, J_n - \frac{1}{\lambda_K^2}\,\big(J_{n+1} + J_{n-1}\big), \qquad (8)$$
$$J_n = \sin\gamma_n + \frac{\sigma}{\epsilon_0\omega_J^2}\,\dot\gamma_n + \frac{1}{\omega_J^2}\,\epsilon^{\rm ph}_{zz}\,\ddot\gamma_n. \qquad (9)$$
The characteristic lengths for the variation of the phase along the layers are calculated from
$$\frac{1}{\lambda_K^2} = \frac{\lambda_{ab}^2}{d^2\lambda_c^2}\,, \qquad \frac{1}{\lambda_J^2} = \frac{1}{\lambda_c^2} + \frac{2}{\lambda_K^2}\,,$$
where $\lambda_c$, $\lambda_{ab}$ are the penetration depths for magnetic fields polarized in the c-direction and parallel to the ab-planes, respectively. They are related to the corresponding plasma frequencies by $\lambda_c = c/\omega_J$, $\lambda_{ab} = c/\omega_{ab}$, where $c$ is the velocity of light. For BSCCO realistic values are $\lambda_{ab} = 170\,$nm, $d = 1.5\,$nm, $\lambda_c = 150\,\mu$m, which gives a Josephson penetration depth of $\lambda_J \simeq 1\,\mu$m.
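Plugging the quoted BSCCO numbers into these definitions reproduces the stated penetration depth. The short check below uses the relations as reconstructed above, so it inherits that reading of the (extraction-damaged) formula:

```python
# Check: Josephson penetration depth from the BSCCO parameters quoted in the text,
# using 1/lambda_K^2 = lambda_ab^2 / (d^2 lambda_c^2) and
#       1/lambda_J^2 = 1/lambda_c^2 + 2/lambda_K^2 as reconstructed above.
import math

lam_ab = 170e-9   # m
d      = 1.5e-9   # m
lam_c  = 150e-6   # m

lam_K = d * lam_c / lam_ab
lam_J = 1.0 / math.sqrt(1.0 / lam_c**2 + 2.0 / lam_K**2)
print(f"lambda_K = {lam_K * 1e6:.2f} um, lambda_J = {lam_J * 1e6:.2f} um")  # ~1 um
```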
In the derivation of Eq. (8), terms of the form $\ddot\gamma/\omega_{ab}^2$, which are small compared to $\gamma$ for the frequency range considered, have been neglected. Furthermore, in the coupling to phonons only polarization effects with polarization in the c-direction excited by fields in the c-direction (expressed by the dielectric function $\epsilon^{\rm ph}_{zz}(k_x,k_z,\omega)$) have been taken into account. It can be shown [13] that the neglect of other polarizations is a good approximation for wave vectors with $k_x d \ll 1$, which is well fulfilled for $k_x \le 1/\lambda_J$. On the other hand, $k_z$ is not small and in general $k_z \gg k_x$.
In the following numerical calculations we consider only local polarization effects, i.e. we neglect the dispersion of phonons. This will be sufficient to demonstrate the principal effects.
Boundary conditions
Assuming a stack with $N$ Josephson barriers, $n = 1\ldots N+1$ superconducting layers, and two normal contacts $n = 0, N+2$, this set of equations (8) holds for $n = 1\ldots N$. In the barriers connecting the superconducting layers with the normal contacts, $J_n$ has to be replaced by the normalized bias current $J_b = j_b/j_c$. It is useful to incorporate this boundary condition into the set of equations by subtracting $J_b$ from each term; writing $\tilde J_n := J_n - J_b$, we obtain
$$\partial_x^2\gamma_n - \frac{1}{\lambda_c^2}\, J_b = \frac{1}{\lambda_J^2}\,\tilde J_n - \frac{1}{\lambda_K^2}\,\big(\tilde J_{n+1} + \tilde J_{n-1}\big), \qquad (10)$$
which has to be solved with the boundary conditions $\tilde J_n = 0$ for $n = 0, N+1$ for a finite stack. The magnetic field (in the y-direction), which causes the variation of the phase along the layers, enters explicitly through the boundary condition for $\partial_x\gamma_n$ at the edges of the stack. It consists of the external field $B_{\rm ext}$ and the field generated by the bias current. If we neglect the latter we have
$$\partial_x\gamma_n(x=0) = \partial_x\gamma_n(x=L_x) = \frac{2ed}{\hbar}\,B_{\rm ext} =: \eta, \qquad (11)$$
and we may drop the small contribution $J_b/\lambda_c^2$ on the left side of Eq. (10). In the case of a constant phase along the layers one then recovers the RSJ equation $\tilde J_n = 0$.
Analytical solution
The set of coupled sine-Gordon equations can be solved numerically. Here one makes the general ansatz for the phases in the different layers:
$$\gamma_n(x,t) = \Gamma_n(t) + \eta x + \sum_{m=1}^{M}\delta\gamma_n(m,t)\,\cos\!\left(\frac{m\pi x}{L_x}\right), \qquad (12)$$
using a Fourier expansion for the spatially oscillating part of the phases and splitting off the term $\eta x$, which takes care of the average increase of the phase difference due to the applied magnetic field. With the help of this ansatz, a coupled set of differential equations for the components $\delta\gamma_n(m,t)$ can be derived and solved with Runge-Kutta techniques.
Approximate analytical solutions are possible by setting
$$\Gamma_n(t) = \theta_n + \omega t + \delta\gamma_n(0,t) \qquad (13)$$
and linearizing the equations (8) with respect to the small oscillating terms $\delta\gamma_n(m,t)$. This ansatz is well justified for large magnetic fields, where the magnetic flux penetrates the stack almost homogeneously. Therefore there is a voltage drop $V_n = \dot\gamma_n(t)\,\hbar/(2e) = \omega\hbar/(2e)$ over each junction. The oscillating part describes standing waves which oscillate primarily with the same basic Josephson frequency $\omega$.
Using in addition a Fourier expansion of the oscillating parts in c-direction and keeping only the lowest harmonics in ω we arrive at the following expression for the dc-current
$$J_b = \frac{\omega\sigma}{\epsilon_0\omega_J^2} - \frac{2\omega_J^2}{\omega^2}\,\frac{1}{N^2}\sum_{k_x,k_z} {\rm Im}\, P(k_x,k_z,\omega) \qquad (14)$$
with
$$P(k_x,k_z,\omega) = \frac{|I(k_x,\eta)|^2\,|p(k_z)|^2}{\epsilon^{\rm ph}_{zz}(k_x,k_z,\omega) + \dfrac{i\sigma}{\epsilon_0\omega} - \dfrac{k_x^2\,\tilde c^{\,2}(k_z)}{\omega^2}}\,,$$
where the sum runs over the discrete values $k_x = m\pi/L_x$ and $k_z = n\pi/((N+1)d)$ with $0 < |n| \le N$.
The denominator contains the phonon dielectric function and the characteristic velocity
$$\tilde c(k_z) = c\Big/\sqrt{1 - \frac{2\lambda_{ab}^2}{d^2}\,\big(\cos(k_z d) - 1\big)}\,. \qquad (15)$$
Resonances are expected at frequencies where the real part of the denominator vanishes. Their size depends on the weighting functions
$$p(k_z) = \sum_n e^{-i\theta_n}\, e^{-i k_z n d}, \qquad (16)$$
which contains a Fourier transformation of the static phase distribution and the function
$$I(k_x,\eta) = \frac{i}{2L_x}\int_0^{L_x} dx\; e^{-i\eta x}\, e^{i k_x x}, \qquad (17)$$
depending on the magnetic field, which is peaked at $k_x \simeq \eta$.
Single contact in a magnetic field
For a single contact, $N = 1$, the value of $k_z$ is fixed to $k_z = \pi/(2d)$. The characteristic (Swihart) velocity is then $\tilde c = c_0/\sqrt{1 + 2\lambda_{ab}^2/d^2} = \omega_J\lambda_J$, and the resonance frequencies in the I-V curve are determined from
$${\rm Re}\!\left[\epsilon^{\rm ph}_{zz}(\omega) - \Big(\frac{k_x\tilde c}{\omega}\Big)^{\!2}\right] = 0. \qquad (18)$$
For a constant $\epsilon^{\rm ph}_{zz}$, structures appear in the current-voltage characteristic at $\omega_{{\rm res},m} = m\pi\tilde c/(\sqrt{\epsilon^{\rm ph}_{zz}}\,L_x)$. These are the well-known Fiske steps [14]. They correspond to the excitation of standing electromagnetic waves in a Josephson junction of length $L_x$. The largest amplitude is obtained for wave vectors $k_x \simeq \eta$, where the velocity of the fluxons equals the phase velocity of the electromagnetic waves. In the case of a very long junction the Fiske steps merge into one flux-flow branch, which is peaked at $V = \hbar\omega_{\rm res}/(2e) = \tilde c\, d\, B_{\rm ext}/\sqrt{\epsilon^{\rm ph}_{zz}}$ (Eck peak [15]). In order to discuss the influence of phonons we use here, for simplicity, a dispersionless optical phonon band with the dielectric function
$$\epsilon^{\rm ph}_{zz}(\omega) = \epsilon_\infty + \frac{|\Omega|^2}{\omega_{\rm TO}^2 - \omega^2 - i r\omega}\,. \qquad (19)$$
The spectrum of resonances as a function of the discrete values $k_x = m\pi/L_x$ is shown in Fig. 1. One obtains two branches: for small $k_x$-values the lower branch corresponds to the propagation of electromagnetic waves, while the upper branch is phonon-like. The lower branch ends at the (transverse) phonon eigenfrequency $\omega_{\rm TO}$. The upper branch starts at the zero of the dielectric function, i.e. at the longitudinal eigenfrequency $\omega_{\rm LO}$. The parameters used to calculate the dispersion shown in Fig. 1 are adapted to TBCCO. In Fig. 2 the result for the I-V curve is shown for three different magnetic fields. The figures compare numerical (Runge-Kutta) with analytical results. Note that the magnetic field selects the $k_x$-value where the strongest resonance occurs: $k_x^{\rm max} = \eta = (2ed/\hbar)B_{\rm ext}$. In order to show the phonon effects more clearly, a large separation between $\omega_{\rm TO}$ and $\omega_{\rm LO}$ has been chosen. Furthermore, for numerical reasons a comparatively small value of the McCumber parameter, $\beta_c = 50$, has been used, while $\beta_c = 500$ would be more realistic ($\beta_c = \omega_c^2/\omega_J^2$ with $\omega_c = 2edj_c/(\hbar\sigma)$). For $B_{\rm ext} = 0$ only one resonance occurs, at $\omega_{\rm LO}$, corresponding to the subgap resonance discussed in Sec. 1. For finite $B_{\rm ext}$ one finds two groups of resonances corresponding to the two branches in Fig. 1. The fine structure is due to Fiske resonances in the stack of finite length $L_x$. With increasing field strength the upper peak shifts to higher frequencies. The lower peak approaches the TO frequency while losing weight. In all cases there is a gap with no resonances for frequencies between $\omega_{\rm TO}$ and $\omega_{\rm LO}$.
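The two branches in Fig. 1 follow directly from the resonance condition (18) with the model dielectric function (19): in the undamped limit, ${\rm Re}\,\epsilon^{\rm ph}_{zz}(\omega) = (k_x\tilde c/\omega)^2$ is a quadratic equation in $\omega^2$ with one electromagnetic-like and one phonon-like root. The sketch below traces both branches and the gap between $\omega_{\rm TO}$ and $\omega_{\rm LO}$; the parameter values are illustrative, not the TBCCO fit used for the figure:

```python
# Sketch: the two resonance branches of Eq. (18) with the undamped oscillator (19).
# Solving eps_inf + Omega2/(w_TO^2 - w^2) = (k_x c_tilde / w)^2 as a quadratic in w^2.
# Units: frequencies in w_TO, k_x in w_TO / c_tilde; values are illustrative only.
import numpy as np

eps_inf, Omega2, w_TO = 10.0, 50.0, 1.0
w_LO = np.sqrt(w_TO**2 + Omega2 / eps_inf)

for kc in np.linspace(0.5, 10.0, 6):                 # kc = k_x * c_tilde
    b = eps_inf * w_TO**2 + Omega2 + kc**2
    disc = np.sqrt(b**2 - 4.0 * eps_inf * kc**2 * w_TO**2)
    lower = np.sqrt((b - disc) / (2.0 * eps_inf))    # electromagnetic-like; tends to w_TO
    upper = np.sqrt((b + disc) / (2.0 * eps_inf))    # phonon-like; starts at w_LO
    print(f"k_x*c = {kc:5.2f}:  lower = {lower:.3f},  upper = {upper:.3f}")

print(f"gap between w_TO = {w_TO} and w_LO = {w_LO:.3f}: no resonances")
```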
Generally the agreement between numerical and analytical calculations is good, in particular, concerning the position of peaks. The agreement in the height of the peaks can be improved by going beyond the linear approximation in the oscillation amplitudes. The numerical calculations show the same hysteretic behavior as the experimental results if the bias current density is changed.
Several contacts in a magnetic field
Numerical calculations can also be performed for larger stacks. Analytical calculations are no longer possible for $N > 2$ without further assumptions on the relative phases $\theta_n$ of the different junctions. In Fig. 3 we show the positions of the resonances for a stack with $N = 4$ junctions for all possible values of $k_z = n\pi/(5d)$. The multiple branches correspond to different values of the characteristic velocity $\tilde c$, which depends strongly on $k_z$.
In Fig. 4 we show results for the I-V curve for a large applied magnetic field. Here we find a vortex lattice which is moving due to the bias current. Both numerical and analytical results are shown as a function of frequency, which corresponds to the dc-voltage drop over a single junction. For our analytical calculations of the I-V curves we have superimposed results calculated with all possible values of $k_z$. The comparison with the numerical results in Fig. 4 demonstrates that modes with all possible $k_z$-values will be excited by cycling the bias current. The group of peaks at the highest frequency corresponds to electromagnetic modes with the smallest $k_z$ value. The middle group contains phonon-like excitations, while the lowest group are electromagnetic excitations of Kleiner modes. Again a gap appears between $\omega_{\rm TO}$ and $\omega_{\rm LO}$, in accordance with the dispersion curves in Fig. 3.
Comparison with experimental results and conclusions
The main results of the preceding section are: a) a shift of the phonon resonances in a parallel magnetic field, b) Fiske resonances in the frequency range of phonons are no longer equidistant, c) the flux-flow voltage is no longer proportional to the magnetic field and has a gap between $\omega_{\rm TO}$ and $\omega_{\rm LO}$. In order to observe these effects experimentally, a strong magnetic field has to be applied (> 3 T). In most experiments [7] the applied field has been much lower, and no shift of the phonon resonances has been observed. A further requirement is that the field is applied strictly parallel to the layers in order to avoid vortex pinning by inhomogeneities. The fact that in the cited experiments [7] neither Fiske resonances in the frequency range of phonons nor a flux-flow branch could be observed is an indication of vortex pinning. Recently the flux-flow voltage has been measured for a stack of 30 junctions in BSCCO [16]. Deviations from the linear field dependence, together with anomalies in the frequency range of an optical phonon, have been observed, which supports the present model for the interaction between flux flow and phonons. The authors would like to thank K. Schlenga and L. Bulaevskii for fruitful discussions. Financial support by the Bayerische Forschungsstiftung (C.P.) and the Department of Energy under contract W-7405-ENG-36 (C.H.) is gratefully acknowledged.
Figure 1. Resonance frequencies for a single Josephson contact with one dispersionless phonon mode.
Figure 2. I-V curves for a single contact for three different magnetic fields. $B_0 = \Phi_0/(dL_x)$ corresponds to a magnetic field with one flux quantum per junction.
Figure 3. Resonance frequencies for a stack of 4 contacts.
Figure 4. I-V curves for a stack of 4 contacts in high magnetic field.
[1] R. Kleiner, F. Steinmeyer, G. Kunkel, P. Müller, Phys. Rev. Lett. 68, 2394 (1992)
[2] R. Kleiner, P. Müller, Phys. Rev. B 49, 1327 (1994)
[3] K. Schlenga, G. Hechtfischer, R. Kleiner, W. Walkenhorst, P. Müller, H.L. Johnson, M. Veith, W. Brodkorb, E. Steinbeiss, Phys. Rev. Lett. 76, 4943 (1996)
[4] A. Yurgens, D. Winkler, N. Zavaritsky, T. Claeson, in: D. Pavuna, I. Bozovic (Eds.), Oxide Superconductor Physics and Nano-Engineering II, Proceedings of SPIE Vol. 2697, 433 (1996)
[5] C. Helm, C. Preis, F. Forsthofer, J. Keller, K. Schlenga, R. Kleiner, P. Müller, Phys. Rev. Lett. 79, 737 (1997)
[6] C. Helm, C. Preis, F. Forsthofer, J. Keller, K. Schlenga, R. Kleiner, P. Müller, Physica C 293, 60 (1997)
[7] K. Schlenga, R. Kleiner, G. Hechtfischer, M. Mößle, P. Müller, C. Helm, C. Preis, F. Forsthofer, J. Keller, H.L. Johnson, M. Veith, E. Steinbeiss, Phys. Rev. B 57, 14518 (1998)
[8] C. Helm, C. Preis, C. Walter, J. Keller, to be published in Phys. Rev. B 62 (2000)
[9] P. Seidel, A. Pfuch, U. Hübner, F. Schmidl, H. Schneidewind, T. Ecke, J. Scherbel, Physica C 293, 49 (1997)
[10] S. Sakai, P. Bodin, N.F. Pedersen, J. Appl. Phys. 73, 2411 (1993)
[11] R. Kleiner, P. Müller, H. Kohlstedt, N.F. Pedersen, S. Sakai, Phys. Rev. B 50, 3942 (1994)
[12] L. Bulaevskii, M. Zamora, D. Baeriswyl, H. Beck, J.R. Clem, Phys. Rev. B 50, 12831 (1994)
[13] C. Preis, PhD thesis, and to be published
[14] M.D. Fiske, Rev. Mod. Phys. 36, 221 (1964)
[15] R.E. Eck, D.J. Scalapino, B.N. Taylor, Phys. Rev. Lett. 13, 15 (1964)
[16] Yu. I. Latyshev, private communication
| [] |
[
"Exploiting Shape Cues for Weakly Supervised Semantic Segmentation",
"Exploiting Shape Cues for Weakly Supervised Semantic Segmentation"
] | [
"Sungpil Kho \nGraduate School of Artificial Intelligence\nYonsei University\nSeoulRepublic of Korea\n",
"Pilhyeon Lee \nDepartment of Computer Science\nYonsei University\nSeoulRepublic of Korea\n",
"Wonyoung Lee \nGraduate School of Artificial Intelligence\nYonsei University\nSeoulRepublic of Korea\n",
"Minsong Ki \nAI Imaging Tech. Team, LG Uplus\nSeoulRepublic of Korea A R\n\nT I C L E I N F O\n\n",
"Hyeran Byun \nGraduate School of Artificial Intelligence\nYonsei University\nSeoulRepublic of Korea\n\nDepartment of Computer Science\nYonsei University\nSeoulRepublic of Korea\n"
] | [
"Graduate School of Artificial Intelligence\nYonsei University\nSeoulRepublic of Korea",
"Department of Computer Science\nYonsei University\nSeoulRepublic of Korea",
"Graduate School of Artificial Intelligence\nYonsei University\nSeoulRepublic of Korea",
"AI Imaging Tech. Team, LG Uplus\nSeoulRepublic of Korea A R",
"T I C L E I N F O\n",
"Graduate School of Artificial Intelligence\nYonsei University\nSeoulRepublic of Korea",
"Department of Computer Science\nYonsei University\nSeoulRepublic of Korea"
] | [] | A B S T R A C TWeakly supervised semantic segmentation (WSSS) aims to produce pixel-wise class predictions with only image-level labels for training. To this end, previous methods adopt the common pipeline: they generate pseudo masks from class activation maps (CAMs) and use such masks to supervise segmentation networks. However, it is challenging to derive comprehensive pseudo masks that cover the whole extent of objects due to the local property of CAMs, i.e., they tend to focus solely on small discriminative object parts. In this paper, we associate the locality of CAMs with the texturebiased property of convolutional neural networks (CNNs). Accordingly, we propose to exploit shape information to supplement the texture-biased CNN features, thereby encouraging mask predictions to be not only comprehensive but also well-aligned with object boundaries. We further refine the predictions in an online fashion with a novel refinement method that takes into account both the class and the color affinities, in order to generate reliable pseudo masks to supervise the model. Importantly, our model is end-to-end trained within a single-stage framework and therefore efficient in terms of the training cost. Through extensive experiments on PASCAL VOC 2012, we validate the effectiveness of our method in producing precise and shape-aligned segmentation results. Specifically, our model surpasses the existing state-of-the-art single-stage approaches by large margins. What is more, it also achieves a new state-of-the-art performance over multi-stage approaches, when adopted in a simple two-stage pipeline without bells and whistles. ORCID(s): 0000-0002-3082-3214 (H. Byun)1We use the term "CAM" in a broad way to represent class score maps predicted by segmentation models throughout this paper. | 10.1016/j.patcog.2022.108953 | [
"https://export.arxiv.org/pdf/2208.04286v1.pdf"
] | 251,297,160 | 2208.04286 | af2bd1207e7a781acd7b2084cc50f5a75e642b80 |
Exploiting Shape Cues for Weakly Supervised Semantic Segmentation
Sungpil Kho
Graduate School of Artificial Intelligence
Yonsei University
Seoul, Republic of Korea
Pilhyeon Lee
Department of Computer Science
Yonsei University
Seoul, Republic of Korea
Wonyoung Lee
Graduate School of Artificial Intelligence
Yonsei University
Seoul, Republic of Korea
Minsong Ki
AI Imaging Tech. Team, LG Uplus
Seoul, Republic of Korea
ARTICLE INFO
Hyeran Byun
Graduate School of Artificial Intelligence
Yonsei University
Seoul, Republic of Korea
Department of Computer Science
Yonsei University
Seoul, Republic of Korea
Exploiting Shape Cues for Weakly Supervised Semantic Segmentation
Keywords: Semantic segmentation, Weakly supervised learning, Texture biases, Shape cues
ABSTRACT
Weakly supervised semantic segmentation (WSSS) aims to produce pixel-wise class predictions with only image-level labels for training. To this end, previous methods adopt the common pipeline: they generate pseudo masks from class activation maps (CAMs) and use such masks to supervise segmentation networks. However, it is challenging to derive comprehensive pseudo masks that cover the whole extent of objects due to the local property of CAMs, i.e., they tend to focus solely on small discriminative object parts. In this paper, we associate the locality of CAMs with the texture-biased property of convolutional neural networks (CNNs). Accordingly, we propose to exploit shape information to supplement the texture-biased CNN features, thereby encouraging mask predictions to be not only comprehensive but also well-aligned with object boundaries. We further refine the predictions in an online fashion with a novel refinement method that takes into account both the class and the color affinities, in order to generate reliable pseudo masks to supervise the model. Importantly, our model is end-to-end trained within a single-stage framework and therefore efficient in terms of the training cost. Through extensive experiments on PASCAL VOC 2012, we validate the effectiveness of our method in producing precise and shape-aligned segmentation results. Specifically, our model surpasses the existing state-of-the-art single-stage approaches by large margins. What is more, it also achieves a new state-of-the-art performance over multi-stage approaches, when adopted in a simple two-stage pipeline without bells and whistles.
ORCID(s): 0000-0002-3082-3214 (H. Byun)
1 We use the term "CAM" in a broad way to represent class score maps predicted by segmentation models throughout this paper.
Introduction
The goal of semantic segmentation is to predict pixel-level categories in a given image. Thanks to its various applications, such as scene understanding [1], autonomous driving [2], and image editing [3], it has been actively studied by the research community. In particular, a number of fully supervised methods have been devised for the task and achieve excellent segmentation performance [4,5,6,7,8]. Nevertheless, the expensive cost of obtaining pixel-wise annotations largely limits their scalability and practicability.
To mitigate the cost issue, utilizing weak supervision, mainly image-level tags, has received increasing attention recently [9,10,11,12]. Given only image-level labels, existing methods employ class activation maps (CAMs)¹ [13] as an initial seed region and derive pseudo masks from it [14,10]. However, with only the image-level classification loss, CAMs tend to highlight only small discriminative parts rather than the full extent of an object [15,16,17]. This locality of CAMs leads the resulting pseudo masks to be less comprehensive as well as to disagree with object outlines. To compensate for it, existing multi-stage methods try to refine CAMs in an offline fashion and utilize them to train an external segmentation network [10,18]. Nevertheless, they necessitate high levels of computational complexity and a long training time, which eventually compromises the advantage of weak supervision in terms of costs [19].
In this paper, we aim to generate improved CAMs on the fly that better cover object regions. To that end, we first make a connection between the locality of CAMs and the texture bias of convolutional neural networks (CNNs). As diagnosed in the recent literature [20,21,22], CNNs are heavily biased toward texture information when classifying an object. Therefore, it is reasonable that CAMs show high activation only for the discriminative texture patterns of small object parts (e.g., the faces of cats), as demonstrated in Fig. 1c. Conversely, one may imagine a shape-biased CNN [20]; its CAM would focus solely on the object boundaries (e.g., the outlines of cats). To sum up, both types of biases are complementary and indispensable for obtaining comprehensive CAMs that can capture object regions to a whole extent. This motivates us to explore leveraging shape-biased features together with the conventional texture-biased ones of CNNs so as to generate better CAMs.
From this motivation, we introduce a novel framework for weakly supervised semantic segmentation, where shape information serves as a cue for producing complete CAMs that localize the entire regions of objects. Specifically, we design a shape cue module (SCM) that learns shape-related features by intentionally removing texture information. The shape-biased features extracted by SCM are injected back into the network, in order to supplement the original texture-biased features during the decoding process. Consequently, the model can derive more comprehensive segmentation maps that not only cover the discriminative parts but also align well with object boundaries. Moreover, to efficiently generate precise pseudo masks during training,
we propose an online mask refinement method, namely semantics-augmented pixel refinement (SPR). Technically, our SPR takes into account class semantics as well as color information to compute local inter-pixel affinities. Based on the affinities, it can effectively refine the initial predictions, resulting in high-quality pseudo masks. They are in turn used to supervise our model in an end-to-end fashion.
With the help of the shape cues and the effective mask refinement, our model is able to produce comprehensive activation maps with well-aligned boundaries, as shown in Fig. 1d. Through extensive experiments on PASCAL VOC 2012 [23], we clearly validate the effectiveness of the proposed methods in improving the segmentation performance in the weakly supervised setting. Moreover, we demonstrate the superiority of our model over existing state-of-the-art single-stage approaches in terms of mask quality and boundary alignment. Our model also sets a new state-of-the-art in the multi-stage settings when adopted in a simple two-stage pipeline without bells and whistles.
To summarize, our contributions are four-fold.
• We shed light on the connection between the locality of CAMs and the texture bias of CNNs, which has hardly been handled before.
• We introduce a novel weakly supervised segmentation method that explicitly leverages shape-biased features as shape cues for producing comprehensive segmentation maps, overcoming the locality of CAMs.
• We propose a new pseudo mask generation method, where both color and semantic information are leveraged for obtaining pairwise pixel affinities, thereby accurately refining initial mask predictions.
• On PASCAL VOC 2012, the most popular benchmark, our method achieves a new state-of-the-art performance with a significant margin in both the single and the multi-stage settings.
Related Work
Fully Supervised Semantic Segmentation
The goal of semantic segmentation is to classify each pixel in a given image. A majority of fully supervised methods are built upon the encoder-decoder architecture to predict segmentation masks from latent representations [24,25,6]. Meanwhile, some methods adopt the attention mechanism to capture rich contextual relations [26,27,28,29]. In addition, there have been attempts to find optimal model designs via neural architecture search [30,31]. Recently, researchers have brought the remarkable success of Transformers [32] to the segmentation task [7,33,8]. Despite their great performance, fully supervised methods suffer from the high labeling cost, which has triggered researchers to explore weak supervision [9] or even the unsupervised setting [34,35].
Weakly Supervised Semantic Segmentation
Weakly supervised learning aims to mitigate heavy annotation costs and has been widely studied in the image [13,36] and video domains [37,38,39] using various label forms. Specifically, most weakly supervised semantic segmentation work utilizes image-level labels, since they are the cheapest to obtain, while a variety of supervision levels have also been employed, such as points [40], scribbles [41,42], and bounding boxes [43,44,45,46]. Our framework employs image-level labels for training, and existing methods can be divided into two mainstreams.
Multi-stage approaches first obtain class activation maps (CAMs) [13] using a classification network and then refine them in an offline manner; the refined maps are later used as pseudo labels to train external segmentation models. Early approaches examine the adjacent pixels of initial confident seeds to grow seed regions [47,9]. Moreover, several methods consider broader pixel affinities using the random walk [10] or the self-attention [11]. There are also some attempts to improve the initial seed quality by the stochastic convolution [15] or the online attention accumulation [48]. CONTA [14] adopts the backdoor adjustment to alleviate confounding biases, while CDA [49] designs a copy-and-paste augmentation to decouple the context from objects. Recent methods mainly focus on discovering less discriminative object parts by erasing the dominant regions [50,51], using complementary image patches [52], or pushing images away from the decision boundary [16]. Meanwhile, some methods attempt to refine segmentation results by exploiting saliency maps as post-processing [53,54] or auxiliary supervision [55,56]. In addition, several recent methods consider the inter-image relation to capture richer information [57,58,59].
Among the prior work, there are several methods that try to refine the initial segmentation results to be aligned with boundaries. The first group relies on saliency labels or predictions [54,55,58]. However, the saliency maps require extra human annotations and/or auxiliary model training. Also, they intrinsically highlight only the dominant object in the scene and ignore small objects, which can be detrimental to the segmentation task where both types of objects need to be captured. The second group tries to estimate semantic boundaries. Among them, some methods [10,18] take an indirect route by allowing a model to learn inter-pixel affinities, while another approach [60] directly trains a boundary detection model by generating pseudo boundary labels. Nonetheless, they all need auxiliary networks for this purpose and cannot be trained in an end-to-end fashion, i.e., they require training of another segmentation model in the next stage. In contrast to them, our method (a) does not require any saliency labels or predicted maps, (b) does not need any auxiliary networks, and (c) is seamlessly integrated into a single-stage segmentation pipeline and provides shape cues in an end-to-end fashion.
Single-stage approaches streamline the complicated training of multi-stage counterparts in order to save the expensive training cost. EM-Adapt [61] introduces an online expectation-maximization method under weak annotation constraints. CRF-RNN [62] fuses top-down and bottom-up activation maps with smoothness visual cues, while RRM [19] integrates the classification and the segmentation networks into an end-to-end framework. Moreover, SSSS [12] introduces three desirable concepts for precise segmentation and improves the segmentation quality based on them. Most recently, AA&LR [63] proposes the adaptive affinity loss and the label reassigning strategy to better learn from noisy pseudo masks.
Our method follows the efficient single-stage pipeline. Different from previous work, we shed light on the connection between the locality of CAMs and the texture biases of CNNs. Accordingly, we propose to explicitly leverage shape cues, thereby allowing the predicted masks to capture the objects accurately.
Texture Biases in Convolutional Neural Networks
In the recent literature [20,64], it is demonstrated that, unlike humans, convolutional neural networks (CNNs) have strong biases towards textures rather than shapes, i.e., they heavily rely on texture information when classifying objects. It is also revealed that shape-biased features are more robust than texture-biased counterparts against domain shifts [65] and adversarial attacks [66]. Accordingly, a variety of methods have been proposed to ameliorate the texture biases, such as image stylization [20], information erasure [67], dedicated augmentation schemes [21], adversarial training [68], and separated supervision [22].
In this paper, we argue that the texture-biased property of CNNs hampers the segmentation performance in the weakly supervised setting. To tackle this challenge, we discover shape-related features and use them as shape cues for producing more precise segmentation masks.
Proposed Method
As depicted in Fig. 2, our framework consists of two parts: the shape-aware segmentation network and the self-supervised training with pseudo masks.
Figure 2: Overview of the proposed framework. It consists of the shape-aware segmentation network and the self-supervised training with pseudo masks.
The shape-aware
segmentation network takes an image as input and produces mask predictions. To enhance the mask quality, we introduce a novel shape cue module (SCM) that distills shape-related features from the encoder. During decoding, they are used as shape cues to encourage the segmentation mask to be comprehensive. Afterward, our semantics-augmented pixel refinement (SPR) further hones the mask prediction by leveraging both color information and class semantics, leading to accurate pseudo masks. The resulting pseudo masks in turn provide pixel-level supervision to the segmentation network in an online fashion.
Shape-aware Segmentation Network
Overview
Our shape-aware segmentation network follows the architectural design of DeepLabV3 [5]. An input image $I \in \mathbb{R}^{H \times W \times 3}$ goes through the encoder, where $H$ and $W$ are the height and the width of the image, respectively. Taking the encoded feature as input, the decoder predicts an initial segmentation score map. Following the existing work [12], we add an auxiliary map for the background class filled with constant values (i.e., 1). This results in the score map $s \in \mathbb{R}^{h \times w \times (C+1)}$, where $(h, w)$ indicates the size of the score map and $C+1$ denotes the number of object classes plus the background. To train the model with image-level labels, we obtain the image-level class scores by performing normalized global weighted pooling. In detail, we first obtain the normalized score map $m$, i.e., $m_{i,c} = \exp(s_{i,c}) / \sum_{c'=1}^{C+1} \exp(s_{i,c'})$, where $m_{i,c}$ is the normalized score for class $c$ of the $i$-th pixel². Then, the image-level class score for the $c$-th class is derived by the attended sum of pixel-level scores as follows.
$y_c = \frac{\sum_{\forall i} m_{i,c} \cdot s_{i,c}}{\epsilon + \sum_{\forall i} m_{i,c}}, \qquad (1)$
where $\epsilon$ is a small constant for numerical stability. Afterward, we apply the sigmoid function on the obtained scores to get the image-level class probabilities, i.e., $\hat{y}_c = \sigma(y_c)$.
Given the image-level class probabilities $\hat{y}$ and the image-level weak labels $z$, we calculate the classification loss using the binary cross-entropy as follows.
$\mathcal{L}_{\mathrm{cls}} = -\frac{1}{C} \sum_{c=1}^{C} \left( z_c \log \hat{y}_c + (1 - z_c) \log (1 - \hat{y}_c) \right). \qquad (2)$
We note that the background class is excluded from the loss calculation.
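The pooling and classification loss above can be written compactly, as in the following PyTorch sketch. The tensor layout and the value of the stability constant are illustrative assumptions rather than the authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def classification_loss(scores, labels, eps=1e-5):
    """scores: (B, C+1, h, w) decoder score maps, channel 0 = background.
    labels: (B, C) binary (float 0/1) image-level labels for the C object classes."""
    m = torch.softmax(scores, dim=1)               # normalized score map
    s = scores.flatten(2)                          # (B, C+1, h*w)
    m = m.flatten(2)
    y = (m * s).sum(dim=2) / (eps + m.sum(dim=2))  # Eq. (1): normalized global weighted pooling
    y_hat = torch.sigmoid(y[:, 1:])                # background channel excluded, as in Eq. (2)
    return F.binary_cross_entropy(y_hat, labels)   # mean binary cross-entropy over classes
```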
Shape cue module
As discussed in Sec. 1, the CNN-based encoder tends to learn representations biased toward texture, which results in incomplete masks in the decoding process. To improve the mask quality, we design the shape cue module (SCM) that utilizes shape-related features as shape cues along with the original texture-biased features. Although various methods can be applied to extract shape information from CNNs, such as texture diversification [20] and adversarial training [68], we here instantiate our SCM based on the information erasure approach [67] owing to its simplicity and generality. In a nutshell, we pull out shape-biased features from the encoder by intentionally erasing its texture information.
The next question is, how can we selectively remove texture information? Generally, texture regions are known to be less informative than shape ones, since they tend to be highly similar to their neighborhood. Formally, the information contained in a given local region (i.e., patch) can be estimated with the Shannon entropy [69]. However, since directly estimating the distribution of a patch is impractical, we follow Shi et al. [67] to approximate the distribution by treating the neighboring patches as its samples. Consequently, the self-information of the patch can be estimated using the kernel density estimator with the Gaussian kernel as follows.
$I(p) = -\log \frac{1}{|\mathcal{N}_p|} \sum_{\forall p' \in \mathcal{N}_p} \frac{\exp\left( -\left\| p - p' \right\|^2 / 2h^2 \right)}{\sqrt{2\pi}\, h}, \qquad (3)$
where $\mathcal{N}_p$ is the set of all neighboring patches of $p$ within the pre-defined distance $r_1$, $h$ is the bandwidth of the Gaussian kernel, and $|\cdot|$ is the cardinality operator. Considering a patch placed on a texture region, for example, it is likely to have high color similarity with its neighbors (the same texture) and therefore will get low self-information. On the contrary, a patch containing part of an object shape would have high self-information, since such patches tend to be unique among their neighborhood. Based on this tendency, we remove texture-related features by stochastically zeroing out the neurons of texture regions, where the dropping probability is inversely proportional to the self-information of the regions. Specifically, we define the dropping probability $d_x$ for the center pixel $x$ of the patch $p_x$ with a Boltzmann distribution as follows.
$\Pr(d_x) \propto \exp\left( -I(p_x) / T \right), \qquad (4)$
where $T$ is a temperature parameter that adjusts the smoothness of the dropping probabilities. For instance, a small $T$ leads to a sharp probability distribution.
We obtain the dropping probability for every pixel, based on which we block out the input feature map. For example in Fig. 3, the green and the red boxes have been dropped due to their low self-information, whereas the blue box survives and passes through the following convolutional layer to acquire the texture-dropped (i.e., shape-biased) features. We then feed the obtained shape-related features into the decoder by concatenating them with intermediate features. By utilizing them as shape cues, our segmentation network can effectively produce comprehensive and boundary-aligned segmentation masks, as verified in Sec. 4.
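A rough PyTorch sketch of the information-based dropping of Eqs. 3-4 is given below. The shifted-feature approximation of patch distances, the random neighbor sampling, and the normalization of the dropping probabilities are simplifying assumptions; the constant factor of the Gaussian kernel is omitted since it does not affect the ranking of patches.

```python
import torch
import torch.nn.functional as F

def info_drop(feat, radius=7, num_neighbors=9, bandwidth=1.0, temperature=0.5):
    """feat: (B, C, H, W) feature map. Positions with low self-information
    (texture-like regions) are zeroed out with higher probability."""
    kernels = []
    for _ in range(num_neighbors):
        # sample a random neighbor offset within the given radius
        dy, dx = torch.randint(-radius, radius + 1, (2,)).tolist()
        shifted = torch.roll(feat, shifts=(dy, dx), dims=(2, 3))
        # squared patch distance, approximated by box-filtering squared pixel differences
        d2 = F.avg_pool2d((feat - shifted) ** 2, kernel_size=3, stride=1, padding=1).sum(dim=1)
        kernels.append(torch.exp(-d2 / (2.0 * bandwidth ** 2)))
    density = torch.stack(kernels, dim=0).mean(dim=0) + 1e-8   # kernel density estimate, Eq. (3)
    self_info = -torch.log(density)                            # self-information (up to constants)
    drop_prob = torch.exp(-self_info / temperature)            # Boltzmann weighting, Eq. (4)
    drop_prob = drop_prob / drop_prob.amax(dim=(1, 2), keepdim=True)
    keep = (torch.rand_like(drop_prob) > drop_prob).float().unsqueeze(1)
    return feat * keep
```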
Self-supervised Training with Pseudo Masks
To improve the segmentation performance, existing weakly supervised segmentation methods typically generate pseudo mask labels in either an online (single-stage) [19,12] or an offline manner (multi-stage) [9,10]. Our method falls in line with the single-stage approaches and therefore obtains pseudo masks on the fly by efficiently refining the initial prediction.
Semantics-augmented pixel refinement
Existing single-stage approaches mainly exploit the color information of images (i.e., RGB) to refine initial predictions by performing dense-CRF [70] or using a local RGB affinity kernel [12]. However, hinging solely on the color space might be sub-optimal for mask refinement. For instance, considering a tree, it is implausible to propagate the predictions of leaves into branches based on color information, although they constitute the same object. To tackle this challenge, we propose to consider class semantics as well as color information to compute the pixel affinities. Specifically, we introduce a semantics-augmented pixel refinement (SPR), where two different kernels are employed respectively for the color and the label spaces. Formally, the joint kernel function between the pixels $i$ and $j$ can be formulated for class $c$ as follows.
$k_c(i, j) = -\lambda \, \frac{\left\| I_i - I_j \right\|^2}{\sigma_I^2} - (1 - \lambda) \, \frac{\left| m_{i,c} - m_{j,c} \right|^2}{\sigma_{m,c}^2}, \qquad (5)$
where $\lambda$ is a hyper-parameter to balance the two types of affinities, while $I_i$ and $m_{i,c}$ indicate the pixel intensity and the score for class $c$ of the $i$-th pixel, respectively, with their standard deviations $\sigma_I$ and $\sigma_{m,c}$.
With the kernel defined in Eq. 5, we get the local affinities for pixel $i$ by applying the softmax within a local window centered at it, i.e., $a^c_{i \to j} = \frac{\exp(k_c(i, j))}{\sum_{\forall j' \in \mathcal{W}_i} \exp(k_c(i, j'))}$, where $\mathcal{W}_i$ is the set of pixels in the window with its radius of $r_2$. Thereafter, the normalized score map is repeatedly refined using the local affinities. For example, the refined score for class $c$ of the pixel $i$ at the $t$-th iteration can be obtained by:
$m_{i,c;t} = \sum_{\forall j \in \mathcal{W}_i} a^c_{i \to j} \cdot m_{j,c;t-1}, \qquad (6)$
where $\mathcal{W}_i$ is the set of pixels within the distance $r_2$, and the initial value ($t = 0$) is set to the initial score, i.e., $m_{i,c;0} = m_{i,c}$. The refinement is performed for all pixels and for the object and background classes present in the image³. The refining process at the $t$-th iteration is illustrated in Fig. 4.
After a total of $T$ refinement iterations ($T = 10$), we get the final scores $m_{i,c;T}$ for class $c$. Then, we generate the pseudo mask $\tilde{y}$ by selecting the class with the highest probability and thresholding $m_{i,c;T}$ with the pre-defined threshold $\theta$. We assign one-hot pseudo labels only to the remaining pixels, which constitute the valid set $\mathcal{V}$.
We note that our SPR can be viewed as a generalization of the existing RGB-based refinement methods [19,12]. For instance, when $\lambda = 1$, our SPR relies exclusively on pixel intensities, which is what the previous methods do. However, working only on pixel intensities would be sub-optimal for mask refinement, and the class semantics indeed helps to improve the pseudo mask quality, which we validate in Sec. 4. Meanwhile, our SPR also relates to AffinityNet [10] in that both consider class semantics for mask refinement. Nonetheless, they clearly differ from each other in the following aspects. AffinityNet requires training of an auxiliary network and performs refinement in an offline manner. In contrast, our SPR is seamlessly integrated into an end-to-end framework and efficiently refines initial masks in an online fashion without any external network.
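Below is a minimal PyTorch sketch of the refinement in Eqs. 5-6 using a single 3x3 window; the paper merges several dilated windows with the hyper-parameters given in Sec. 4. The single-window simplification, the per-image standard deviations, and the recomputation of affinities at every iteration are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def spr_refine(img, m, lam=0.8, iters=10, eps=1e-8):
    """img: (B, 3, H, W) RGB image; m: (B, K, H, W) normalized score maps."""
    B, K, H, W = m.shape

    def neighbors(x):
        # gather the 3x3 neighborhood of every pixel: (B, C, H, W) -> (B, C, 9, H, W)
        C = x.shape[1]
        return F.unfold(x, kernel_size=3, padding=1).view(B, C, 9, H, W)

    img_n = neighbors(img)
    sigma_i = img.flatten(1).std(dim=1).view(B, 1, 1, 1) + eps
    color = -((img.unsqueeze(2) - img_n) ** 2).sum(dim=1) / sigma_i ** 2    # (B, 9, H, W)

    for _ in range(iters):
        m_n = neighbors(m)                                                  # (B, K, 9, H, W)
        sigma_m = m.flatten(2).std(dim=2).view(B, K, 1, 1, 1) + eps
        sem = -((m.unsqueeze(2) - m_n) ** 2) / sigma_m ** 2                 # (B, K, 9, H, W)
        k = lam * color.unsqueeze(1) + (1.0 - lam) * sem                    # joint kernel, Eq. (5)
        a = torch.softmax(k, dim=2)                                         # affinities over the window
        m = (a * m_n).sum(dim=2)                                            # one refinement step, Eq. (6)
    return m
```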
Learning from pseudo masks
With the pseudo masks, we train the model to produce similar segmentation results to them. From this point of view, we compute the conventional pixel-wise loss between predictions and pseudo masks. In addition, we adopt the region-wise loss that optimizes the segmentation results at the region level.
Firstly, the pixel-wise loss computes the cross-entropy losses for individual pixels and aggregates them as follows.
$\mathcal{L}_{\mathrm{pixel}} = -\frac{1}{|\mathcal{V}|} \sum_{\forall i \in \mathcal{V}} \sum_{c=1}^{C+1} \tilde{y}_{i,c} \log p_{i,c}, \qquad (7)$
where $p_{i,c}$ and $\tilde{y}_{i,c}$ are the prediction and the pseudo ground truth of pixel $i$ for class $c$, respectively. We compute the loss for only the pixels in the valid set $\mathcal{V}$ and ignore the other unconfident pixels. We also normalize the class-wise loss to handle the class imbalance. In practice, the refinement is implemented with the GPU-friendly convolution operator and is thus lightweight in terms of cost.
Figure 4: Illustration of semantics-augmented pixel refinement (SPR). Our SPR considers class semantics as well as pixel intensities to compute the inter-pixel affinities between the center and the neighboring pixels. Note that the predicted segmentation maps are presented to represent the refined score maps for illustrative purposes.
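A sketch of the masked pixel-wise loss of Eq. 7 with the class-wise normalization mentioned above is shown below; the exact normalization scheme is not specified in detail, so the per-class averaging here is one plausible reading.

```python
import torch

def pixel_loss(log_probs, pseudo, valid, eps=1e-8):
    """log_probs: (B, K, H, W) log-softmax predictions; pseudo: (B, K, H, W)
    one-hot pseudo labels; valid: (B, 1, H, W) mask of confident pixels (0/1)."""
    ce = -(pseudo * log_probs) * valid                        # per-pixel cross-entropy terms
    count = (pseudo * valid).sum(dim=(0, 2, 3))               # valid pixels per class
    per_class = ce.sum(dim=(0, 2, 3)) / count.clamp_min(eps)  # class-normalized loss
    present = count > 0
    return per_class[present].mean() if present.any() else ce.sum()
```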
Since the pixel-wise loss penalizes the pixel-level predictions independently, it ignores the inter-pixel relations that are informative for segmentation. Therefore, along with the pixel-wise loss, we also consider region-wise supervision. Motivated by Ke et al. [71], we design a loss function that involves the structural information of pixels. Technically, we first define a local window $\mathcal{W}_i$ with its radius $r_3$ centering at the $i$-th pixel in the pseudo masks. Afterward, we separate the neighboring pixels in the window $\mathcal{W}_i$ into two sets according to whether they agree with the center point $i$. If a pixel $j \in \mathcal{W}_i$ produces a different class prediction from the center $i$, i.e., $\arg\max_c(\tilde{y}_{j,c}) \neq \arg\max_c(\tilde{y}_{i,c})$, it is considered a boundary point and inserted into the boundary set $\mathcal{W}_i^{\mathrm{bnd}}$. On the contrary, we put a pixel into the non-boundary set $\mathcal{W}_i^{\mathrm{non}}$ if it shares the same class prediction with the center $i$. We note that every pixel in the window belongs to exactly one of the two subsets.
With the two disjoint subsets, we compute the region-wise loss. Concretely, we encourage the class probability distribution of the center pixel to be similar to those of the pixels in the non-boundary set, while repelling those of the pixels in the boundary set. The loss function is formulated as follows.
$\mathcal{L}_{\mathrm{region}} = \frac{1}{|\mathcal{V}|} \sum_{\forall i \in \mathcal{V}} \frac{1}{|\mathcal{W}_i|} \sum_{\forall j \in \mathcal{W}_i} d_{i,j}, \quad \text{where} \quad d_{i,j} = \begin{cases} D_{\mathrm{KL}}(p_i \,\|\, p_j), & \text{if } j \in \mathcal{W}_i^{\mathrm{non}} \\ \max\left(0, \, \mu - D_{\mathrm{KL}}(p_i \,\|\, p_j)\right), & \text{if } j \in \mathcal{W}_i^{\mathrm{bnd}} \end{cases} \qquad (8)$
where $D_{\mathrm{KL}}(\cdot \,\|\, \cdot)$ denotes the Kullback-Leibler divergence between two class probability distributions, and $\mu$ is a margin.
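The region-wise loss of Eq. 8 can be sketched as follows with a 3x3 window (the paper uses a radius-3 window); the inclusion of the center pixel in its own window and the masking by the valid set are simplifying assumptions.

```python
import torch
import torch.nn.functional as F

def region_loss(probs, pseudo_cls, valid, margin=3.0, eps=1e-8):
    """probs: (B, K, H, W) predicted class probabilities; pseudo_cls: (B, H, W)
    integer pseudo labels; valid: (B, H, W) mask of confident pixels (0/1)."""
    B, K, H, W = probs.shape
    p_n = F.unfold(probs, kernel_size=3, padding=1).view(B, K, 9, H, W)
    c_n = F.unfold(pseudo_cls.unsqueeze(1).float(), kernel_size=3, padding=1).view(B, 9, H, W)
    same = (c_n == pseudo_cls.unsqueeze(1).float()).float()     # 1 for non-boundary neighbors
    p_i = probs.unsqueeze(2).clamp_min(eps)                     # center distribution
    kl = (p_i * (p_i.log() - p_n.clamp_min(eps).log())).sum(dim=1)    # KL(p_i || p_j): (B, 9, H, W)
    d = same * kl + (1.0 - same) * torch.clamp(margin - kl, min=0.0)  # the two cases of Eq. (8)
    d = d.mean(dim=1) * valid.float()                           # average over the window
    return d.sum() / valid.float().sum().clamp_min(1.0)
```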
In the weakly supervised setting, the generated pseudo masks are often noisy and unreliable, especially at the early training stage. This can make the region-wise loss misleading. Therefore, we use an annealing schedule for the region-wise loss to stabilize the training. In summary, the overall loss from the pseudo masks can be computed by:
$\mathcal{L}_{\mathrm{mask}} = \mathcal{L}_{\mathrm{pixel}} + \gamma \, \mathcal{L}_{\mathrm{region}}, \qquad (9)$
where $\gamma$ is a weighting factor for stable training, which steadily increases from 0 to 1 during the training.
Joint Training and Inference
As aforementioned, our model follows the single-stage pipeline and thus is trained in an end-to-end manner. The overall training objective is the sum of the classification loss (Eq. 2) and the mask loss (Eq. 9).
$\mathcal{L}_{\mathrm{total}} = \mathcal{L}_{\mathrm{cls}} + \mathcal{L}_{\mathrm{mask}}. \qquad (10)$
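Putting Eqs. 9-10 together, a training step can combine the three losses as sketched below; the linear ramp for the annealing weight is an assumption (the paper only states that it increases from 0 to 1 during training).

```python
def total_loss(l_cls, l_pixel, l_region, step, total_steps):
    gamma = min(1.0, step / float(total_steps))   # annealed weight for the region-wise loss
    l_mask = l_pixel + gamma * l_region           # Eq. (9)
    return l_cls + l_mask                         # Eq. (10)
```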
For inference, we feed an image into our model and use the mask prediction by the decoder as the final result. Note that the pseudo mask generation is not performed at test time.
Experiments
Experimental Settings
Dataset and evaluation metrics
For evaluation, we use the most popular benchmark for weakly supervised semantic segmentation: PASCAL VOC 2012 [23]. It contains 20 object categories and includes 1,464, 1,449, and 1,456 images respectively for training, validation, and test. Following the convention, we augment the training set with the additional images provided by SBD [72], leading to 10,582 training samples in total. We measure the mean Intersection-over-Union (mIoU) between predicted masks and ground-truths.
Implementation details
Our segmentation network strictly follows the structure of DeepLabV3 [5], an encoder-decoder architecture with atrous spatial pyramid pooling and an output stride of 16. For a fair comparison, we use WideResNet-38 [73] pretrained on ImageNet [74] as the encoder and adopt a four-layer decoder to predict segmentation masks. Our model is trained in an end-to-end manner for 20 epochs using the SGD optimizer with a weight decay of $10^{-4}$ and a momentum of 0.9. The initial learning rate is set to $10^{-3}$ for the backbone classification network and $10^{-2}$ for the other modules. We follow the learning rate decay strategy of SEAM [11] with the decay rate of 0.9.
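As an illustration of these settings, the optimizer could be configured as in the sketch below; the parameter grouping by module name and the poly-style schedule are assumptions, since the paper only specifies the learning rates, momentum, weight decay, and the SEAM-style decay rate of 0.9.

```python
import torch

def build_optimizer(model, total_iters):
    # assume the encoder (backbone) parameters are registered under the "encoder" prefix
    backbone = [p for n, p in model.named_parameters() if n.startswith("encoder")]
    others = [p for n, p in model.named_parameters() if not n.startswith("encoder")]
    opt = torch.optim.SGD(
        [{"params": backbone, "lr": 1e-3},   # backbone classification network
         {"params": others, "lr": 1e-2}],    # decoder, SCM, and other modules
        lr=1e-3, momentum=0.9, weight_decay=1e-4)
    # poly-style decay with power 0.9, applied to both parameter groups
    sched = torch.optim.lr_scheduler.LambdaLR(
        opt, lambda it: (1.0 - it / float(total_iters)) ** 0.9)
    return opt, sched
```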
Hyperparameter settings
Data augmentation. Following the convention [10,75], we augment each training image using the following three strategies: (1) random scaling with the ratio range of [0.9, 1.0], (2) random cropping with a resolution of 448 × 448, and (3) horizontal flipping with a probability of 0.5.
Shape cue module. To estimate the self-information of each patch, we use 9 patches randomly sampled from the neighborhood within the Manhattan radius $r_1 = 7$. For the Gaussian kernel, we set its bandwidth $h$ to 1. After obtaining the self-information, we derive the dropping probability using the Boltzmann distribution with the smoothness (temperature) parameter of 0.5, followed by normalization.
Semantics-augmented pixel refinement. To diversify the receptive field, we adopt multiple 3 × 3 windows with the set of radii $r_2 \in \{1, 2, 4, 8, 12, 24\}$ and merge the results. We set the total number of refinement iterations to $T = 10$. To generate the pseudo mask $\tilde{y}$ from the refined score map, we use the adaptive threshold $\theta$, i.e., 60% of the maximum score among all positions and classes. Also, we ignore conflicting pixels and less confident pixels with a lower bound of 0.2 during the learning from pseudo masks. The balancing parameter $\lambda$ is set to 0.8 by default.
Region-wise Loss.
To build the boundary and non-boundary sets, we use a local window with its radius $r_3 = 3$. The margin value, i.e., $\mu$, is set to 3.0.
Comparison with State-of-the-arts
Single-stage results
In Table 1, we compare our model with existing state-of-the-art methods in terms of mIoU on PASCAL VOC 2012. For reference, we include fully supervised methods and multi-stage approaches that are not directly comparable to ours. On both the validation and test sets, our method achieves a new state-of-the-art performance with large margins of 2.5% and 2.0%, respectively. Notably, even without CRF post-processing, our model is able to surpass all the existing single-stage competitors, which clearly manifests its superiority. Moreover, our model shows a performance comparable to several multi-stage counterparts that rely on complicated and costly training.
Multi-stage results
To further validate the effectiveness of our model, we adopt it in a simple two-stage framework. Specifically, we first train our model with image-level labels in the single-stage setting. When the training finishes, we feed all training images into the model to generate pseudo masks. Thereafter, we perform CRF on the pseudo masks for offline refinement and utilize the refined pseudo masks as full supervision to train an external segmentation model. For the segmentation model, we employ DeepLab based on WideResNet-38 for a fair comparison with existing methods [51,52].
As shown in Table 1, the simple adoption of our method in the two-stage setting leads to a large performance boost, albeit at an additional training cost. In this comparison, it outperforms all the previous multi-stage state-of-the-art methods under image-level supervision. Furthermore, it even performs favorably against several recent methods that utilize extra saliency maps. This verifies that our method is capable of generating high-quality pseudo masks. It should be noted that this paper focuses on the single-stage pipeline; we therefore conduct the following experiments and analyses in the cost-effective single-stage setting.
Qualitative comparison
To better understand the advantages of our model, we perform a qualitative comparison with one of the state-of-the-art methods, SSSS [12]. For a clear comparison, we present the results of the comparative methods both w/o and w/ CRF, which are shown in Fig. 5 and Fig. 6, respectively. In both comparisons, it is evident that our method produces more precise segmentation masks than SSSS. In particular, our results align well with the ground-truth masks regarding the object boundaries, even without the help of CRF, as shown in the examples of Fig. 5. This clearly verifies the benefits of our novel shape-injecting strategy.
Boundary evaluation
To examine the effectiveness of our method in boundary alignment, we further evaluate it using the boundary IoU [77]. This metric has recently been proposed to measure the boundary quality of segmentation masks by computing IoUs only for pixels within a distance $d$ from object outlines. We set $d$ to 5% of the image diagonal, resulting in 30-pixel distances on average.
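For reference, the boundary IoU can be sketched as below: both masks are reduced to a band within d pixels of their contours and the IoU is computed on those bands, following the general definition of [77]; the erosion-based band extraction is a simplification of the official implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def boundary_band(mask, d):
    """Pixels of a boolean mask that lie within d pixels of its contour."""
    eroded = binary_erosion(mask, iterations=d)
    return mask & ~eroded

def boundary_iou(gt, pred, d=30):
    """gt, pred: boolean (H, W) masks for one class; d: band width in pixels."""
    gt_b, pr_b = boundary_band(gt, d), boundary_band(pred, d)
    inter = np.logical_and(gt_b, pr_b).sum()
    union = np.logical_or(gt_b, pr_b).sum()
    return inter / max(union, 1)
```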
The results are shown in Table 2, where our method achieves a new state-of-the-art performance with a considerable gap, regardless of the use of CRF post-processing. In addition, our method w/o CRF even surpasses the existing state-of-the-art methods w/ CRF, which is consistent with the shape-aligned segmentation results shown in the qualitative comparison (Fig. 6). Combining these results on boundary quality with those on the conventional evaluation metric (i.e., mIoU) clearly validates the superiority of our method over previous state-of-the-art approaches.
Class-wise segmentation results
The class-wise IoUs on the PASCAL VOC 2012 validation and test sets are presented in Table 3 and Table 4, respectively. On both sets, our model outperforms the state-of-the-art methods for the majority of the object classes, which clearly demonstrates its superiority. In addition, even without using CRF (Ours w/o CRF), our method beats all the competitors for many object classes as well as in terms of the overall mIoU.
Ablation Study and Analysis
Effects of the proposed components
We investigate the contribution of each proposed component upon the baseline. Here the baseline denotes the common segmentation pipeline with the conventional color-based refinement [19,12]. We measure mIoU and boundary IoU on the PASCAL VOC 2012 train and val sets. The results are presented in Table 5. On one hand, SPR moderately improves the performance (#1→#2), suggesting that taking class semantics into account during mask refinement is helpful. On the other hand, adding SCM leads to a larger performance gain (#1→#3), which exhibits the importance of shape cues for segmentation. It is also noticeable that the boundary quality is greatly improved thanks to the shape cues, as expected. With both modules put together (#4), our model achieves significant performance boosts compared to the cases using either of them, indicating their synergic property. Intuitively, leveraging shape cues during decoding brings about better initial predictions, which subsequently aid SPR in generating more accurate pseudo masks.
In Fig. 7, we qualitatively compare the ablated variants by visualizing their individual CAMs. We observe that the baseline (#1) produces rough and inaccurate activation maps. In addition, SPR (#2) improves the overall activation map quality, while SCM (#3) helps to produce boundary-aligned activations. Equipped with both modules, our full model (#4) predicts the activation maps that best suit the whole object regions.
Analysis on semantics-augmented pixel refinement
As mentioned in Sec. 3, our SPR extends the conventional color-based refinement [19,12] by incorporating class semantics as well as color information for mask refinement. More specifically, when $\lambda$ in Eq. 5 is set to 1, SPR becomes the same as PAMR [12]. However, we argue that considering only RGB intensities is insufficient for mask refinement. To validate this claim, we analyze the impact of $\lambda$ in Table 6. As a result, our method achieves the best performance when $\lambda = 0.8$, with a large gain of 3.9% over the purely color-based refinement ($\lambda = 1$). This confirms that both color and class affinities are important for obtaining accurate pseudo masks, and balancing between them is necessary.
Visualization of the effect of shape cue module
To better understand the effect of our shape cue module (SCM), we provide visualization results in Fig. 8. From the examples, we make several observations: (1) The self-information maps (Fig. 8b) capture the shapes (or edges) well, despite some noise in the background or complicated texture regions. (2) Our SCM (Fig. 8c) extracts useful shape-related features that are sensitive to object silhouettes while filtering out the noise. (3) Guided by the shape cues from SCM, the decoder (Fig. 8e) produces more precise activation maps that are well-aligned with object shapes, compared to the encoder (Fig. 8d). More specifically, our SCM enables sharpening the coarse activation map (1st column), capturing comprehensive object masks rather than small discriminative parts (2nd-6th columns), and producing accurate activation maps even for occluded cases (7th-8th columns). This clearly manifests the effectiveness of our shape cues.
Visualization of the effect of semantics-augmented pixel refinement
To see the effect of our mask refinement (i.e., SPR), we visualize several examples in Fig. 9. As shown in the figure, the initial predictions (Fig. 9c) are already accurate and comprehensive to a certain extent thanks to our shape cues, but the boundaries are somewhat rough and match the ground truths less well due to the large receptive field. After performing our SPR on them, we obtain more accurate pseudo masks with well-aligned outlines (Fig. 9d). They are in turn used to effectively guide our model in the self-supervised training stage.
One limitation of our SPR could be the inflexibility in balancing between the two types of information. Naturally, different object classes may have their own optimal balancing weights for refining their masks, and this may hold true even for instances within the same class. This makes our SPR produce potentially sub-optimal pseudo masks, which in turn limits the benefit of the self-supervised training. Designing adaptable (or learnable) balancing weights could be a promising direction for future work.
Effects of the loss functions
We perform ablation studies on the loss functions to inspect their contributions in Table 8. When no pseudo mask is used ($\mathcal{L}_{\mathrm{cls}}$ only), our model shows inferior performance. When the pixel-wise loss ($\mathcal{L}_{\mathrm{pixel}}$) is adopted, the performance increases significantly, indicating the important role of our SPR module. On the other hand, solely using the region-wise loss ($\mathcal{L}_{\mathrm{region}}$) does not bring a performance gain. We conjecture that the region-wise loss could be misleading without pixel-level constraints, since the boundary sets from pseudo masks are unreliable, especially at the early steps. Meanwhile, the two losses from pseudo masks play complementary roles, and our model attains the best performance when they are used together.
Training cost analysis
To verify the training efficiency of our method over the existing multi-stage approaches, we conduct a training cost analysis. For the competitors, we choose AffinityNet [10] and IRN [18], since they are widely used as baselines for the latest two-stage works such as SEAM [11], AdvCAM [16], and RIB [17], i.e., these works require at least the training time of AffinityNet and IRN. The experiments are conducted using two GTX 1080Ti GPUs and an Intel Core i7-8700 processor. The results can be found in Table 7. It can be observed that our method is much more training-efficient than the multi-stage approaches, as it can be trained in a single-stage manner without relying on auxiliary segmentation networks. In addition, our model does not require complicated training strategies, including expensive multi-scale CAM generation and carefully designed auxiliary models. Moreover, it is also noticeable that our method adds only a minor computational overhead (about 10%) upon the baseline segmentation model, i.e., DeepLabV3 [5].
Qualitative Results
We provide more visualization results to further demonstrate the strong performance of our model. The qualitative results on the PASCAL VOC 2012 training, validation, and test sets are respectively presented in Fig. 10, Fig. 11, and Fig. 12. For the training and validation sets, we show the ground-truth images and the results after CRF. On the other hand, we instead demonstrate the results of our method w/o and w/ CRF for the test set, as the ground-truths of the test split are withheld. As can be seen in Fig. 10 and Fig. 11, our method is able to produce very precise segmentation maps that well match the ground-truths even for the challenging cases (e.g., complex background, cluttered objects). Moreover, in Fig. 12, it is shown that our method produces comprehensive segmentation predictions with well-aligned boundaries even without CRF. After CRF post-processing, the results are further improved and show the high segmentation quality, even though the model is trained using only image-level labels.
Conclusion and Discussion
In this paper, we presented a novel framework for weakly supervised semantic segmentation. We started by associating the locality of CAMs with the texture bias of CNNs. To handle it and produce comprehensive segmentation masks, we proposed to extract shape information from the encoder and explicitly use it as shape cues for segmentation. Moreover, we designed the semantics-augmented pixel refinement, in which pseudo masks are obtained using pixel-wise affinities that consider class semantics as well as color intensity. Through in-depth analyses, we verified the efficacy and the complementarity of the proposed methods. Furthermore, we achieved a new state-of-the-art on the val and test sets of PASCAL VOC 2012 in both the single and the multi-stage settings.
Our proposed model effectively leverages shape cues to produce segmentation results aligned with object boundaries, greatly improving the performance in the weakly supervised setting. On the other hand, the performance of our model may depend on the quality of the shape information to an extent; e.g., noisy shape information can mislead the model. In future work, we would like to explore Vision Transformers [78], which have recently been found to focus more on object shapes than CNN variants [79], for obtaining shape information.
Figure 1: Visualization of the effectiveness of our method. Compared to the vanilla CAM that pays attention to only local parts (e.g., faces), our method is able to produce more comprehensive activation maps by exploiting object shape cues.
Figure 3: Details of the proposed shape cue module (SCM). Our SCM extracts shape-biased features and allows the decoder to use them as shape cues for accurate segmentation.
Figure 5: Qualitative comparison w/o CRF on PASCAL VOC 2012 validation set. We provide (a) input images, (b) ground-truths, (c) results of SSSS [12], and (d) results of our method.
Figure 6: Qualitative comparison w/ CRF on PASCAL VOC 2012 validation set. We provide (a) input images, (b) ground-truths, (c) results of SSSS [12], and (d) results of our method.
Figure 7: Qualitative comparison of the ablated variants.
Figure 8: Visualization of the effect of SCM. We provide (a) input images, (b) self-information maps, as well as the Grad-CAMs of (c) SCM, (d) the encoder, and (e) the decoder.
Figure 9: Visualization of the effect of SPR. We provide (a) input images, (b) ground-truths, (c) initial predictions, and (d) pseudo masks generated by SPR.
Figure 10: Qualitative results on PASCAL VOC 2012 training set. We provide (a) input images, (b) ground-truths, (c) results of our method w/ CRF.
Figure 11: Qualitative results on PASCAL VOC 2012 validation set. We provide (a) input images, (b) ground-truths, (c) results of our method w/ CRF.
Figure 12: Qualitative results on PASCAL VOC 2012 test set. We provide (a) input images, (b) results of our method w/o CRF, (c) results of our method w/ CRF.
Table 1: State-of-the-art comparison on PASCAL VOC 2012 validation and test sets. We indicate the supervision level: image-level class labels (I), full supervision (F), saliency maps (S), and additional data (D).
Method | Backbone | Sup. | CRF | val | test
DeepLabV3 [5] | ResNet-101 | F | ✗ | - | 85.7
DeepLabV3 [5] | WideResNet-38 [73] | F | ✗ | 80.8 | 82.5
Multi-stage:
DSRG [9] CVPR'18 | ResNet-101 | I, S | ✓ | 61.4 | 63.2
FickleNet [15] CVPR'19 | ResNet-101 | I, S | ✓ | 64.9 | 65.3
OAA [48] ICCV'19 | ResNet-101 | I, S | ✓ | 65.2 | 66.4
CIAN [53] AAAI'20 | ResNet-101 | I, S | ✓ | 64.1 | 64.7
ICD [57] CVPR'20 | ResNet-101 | I, S | ✓ | 67.8 | 68.0
LSISU [76] PR'21 | ResNet-101 | I, S | ✓ | 68.4 | 68.9
NSROM [54] CVPR'21 | ResNet-101 | I, S | ✓ | 68.3 | 68.5
EDAM [58] CVPR'21 | ResNet-101 | I, S | ✓ | 70.9 | 70.6
EPS [55] CVPR'21 | ResNet-101 | I, S | ✓ | 71.0 | 71.8
AuxSegNet [16] ICCV'21 | ResNet-101 | I, S | ✓ | 68.1 | 68.0
AffinityNet [10] CVPR'18 | WideResNet-38 | I | ✓ | 61.7 | 63.7
IRN [18] CVPR'19 | ResNet-50 | I | ✓ | 63.5 | 64.8
SEAM [11] CVPR'20 | WideResNet-38 | I | ✓ | 64.5 | 65.7
ICD [57] CVPR'20 | ResNet-101 | I | ✓ | 64.1 | 64.3
BES [60] ECCV'20 | ResNet-101 | I | ✓ | 65.7 | 66.6
MCIS [59] ECCV'20 | ResNet-101 | I | ✓ | 66.2 | 66.9
CONTA [14] Neurips'20 | WideResNet-38 | I | ✓ | 66.1 | 66.7
AdvCAM [16] CVPR'21 | ResNet-101 | I | ✓ | 68.1 | 68.0
ECS-Net [50] ICCV'21 | WideResNet-38 | I | ✓ | 66.6 | 67.6
CDA [49] ICCV'21 | WideResNet-38 | I | ✓ | 66.1 | 66.8
CGNet [51] ICCV'21 | WideResNet-38 | I | ✓ | 68.4 | 68.2
CPN [52] ICCV'21 | WideResNet-38 | I | ✓ | 67.8 | 68.5
RIB [17] Neurips'21 | ResNet-101 | I | ✓ | 68.3 | 68.6
Ours (two-stage) | WideResNet-38 | I | ✗ | 67.9 | 68.6
Ours (two-stage) | WideResNet-38 | I | ✓ | 69.5 | 70.5
Single-stage:
EM-Adapt [61] ICCV'15 | VGG16 | I | ✓ | 38.2 | 39.6
SEC [47] ECCV'16 | VGG16 | I, S, D | ✓ | 50.7 | 51.7
CRF-RNN [62] CVPR'17 | VGG16 | I | ✓ | 52.8 | 53.7
RRM [19] AAAI'20 | WideResNet-38 | I | ✓ | 62.6 | 62.9
SSSS [12] CVPR'20 | WideResNet-38 | I | ✓ | 62.7 | 64.3
AA&LR [63] MM'21 | WideResNet-38 | I | ✓ | 63.9 | 64.8
Ours | WideResNet-38 | I | ✗ | 65.2 | 65.6
Ours | WideResNet-38 | I | ✓ | 66.4 | 66.8
Table 2: Boundary IoU comparison with state-of-the-art approaches on the PASCAL VOC 2012 validation set. We reproduce the comparative methods using their official code.
Method | Sup. | w/o CRF | w/ CRF
DeepLabV3 [5] | F | 60.4 | 62.8
RRM [19] | I | 42.0 | 45.4
SSSS [12] | I | 39.6 | 46.2
Ours | I | 46.6 | 49.3
Table 3: Class-wise IoU results on the PASCAL VOC 2012 validation set. All the results of the previous methods are the scores after CRF.
Method | bg | aero | bike | bird | boat | bot. | bus | car | cat | chair | cow | table | dog | horse | mbk | per. | plant | sheep | sofa | train | tv | mIoU
CRF-RNN [62] | 85.8 | 65.2 | 29.4 | 63.8 | 31.2 | 37.2 | 69.6 | 64.3 | 76.2 | 21.4 | 56.3 | 29.8 | 68.2 | 60.6 | 66.2 | 55.8 | 30.8 | 66.1 | 34.9 | 48.8 | 47.1 | 52.8
RRM [19] | 87.9 | 75.9 | 31.7 | 78.3 | 54.6 | 62.2 | 80.5 | 73.7 | 71.2 | 30.5 | 67.4 | 40.9 | 71.8 | 66.2 | 70.3 | 72.6 | 49.0 | 70.7 | 38.4 | 62.7 | 58.4 | 62.6
SSSS [12] | 88.7 | 70.4 | 35.1 | 75.7 | 51.9 | 65.8 | 71.9 | 64.2 | 81.1 | 30.8 | 73.3 | 28.1 | 81.6 | 69.1 | 62.6 | 74.8 | 48.6 | 71.0 | 40.1 | 68.5 | 64.3 | 62.7
AA&LR [63] | 88.4 | 76.3 | 33.8 | 79.9 | 34.2 | 68.2 | 75.8 | 74.8 | 82.0 | 31.8 | 68.7 | 47.4 | 79.1 | 68.5 | 71.4 | 80.0 | 50.3 | 76.5 | 43.0 | 55.5 | 58.5 | 63.9
Ours w/o CRF | 89.9 | 75.6 | 21.5 | 78.4 | 64.2 | 65.6 | 80.0 | 74.0 | 86.0 | 30.1 | 73.9 | 43.8 | 82.2 | 74.2 | 64.0 | 70.0 | 43.8 | 78.5 | 39.1 | 73.3 | 61.9 | 65.2
Ours w/ CRF | 90.8 | 79.5 | 20.4 | 81.7 | 67.3 | 64.1 | 79.7 | 74.0 | 87.0 | 30.6 | 73.3 | 43.8 | 84.4 | 75.5 | 62.5 | 73.3 | 43.6 | 82.3 | 40.2 | 73.8 | 65.9 | 66.4
Table 4: Class-wise IoU results on the PASCAL VOC 2012 test set. All the results of the previous methods are the scores after CRF.
Method | bg | aero | bike | bird | boat | bot. | bus | car | cat | chair | cow | table | dog | horse | mbk | per. | plant | sheep | sofa | train | tv | mIoU
CRF-RNN [62] | 85.7 | 58.8 | 30.5 | 67.6 | 24.7 | 44.7 | 74.8 | 61.8 | 73.7 | 22.9 | 57.4 | 27.5 | 71.3 | 64.8 | 72.4 | 57.3 | 37.3 | 60.4 | 42.8 | 42.2 | 50.6 | 53.7
SSSS [12] | 89.2 | 73.4 | 37.3 | 68.3 | 45.8 | 68.0 | 72.7 | 64.1 | 74.1 | 32.9 | 74.9 | 39.2 | 81.3 | 74.6 | 72.6 | 75.4 | 58.1 | 71.0 | 48.7 | 67.7 | 60.1 | 64.3
Ours w/o CRF | 90.2 | 77.7 | 22.2 | 73.7 | 55.6 | 65.1 | 81.3 | 77.7 | 83.9 | 28.7 | 74.3 | 49.2 | 80.1 | 78.8 | 72.0 | 68.9 | 45.1 | 77.0 | 44.7 | 74.0 | 56.9 | 65.6
Ours w/ CRF | 91.1 | 81.3 | 20.8 | 76.2 | 59.2 | 64.8 | 81.8 | 78.1 | 85.0 | 29.7 | 75.3 | 47.7 | 82.5 | 82.2 | 70.5 | 72.6 | 41.6 | 82.6 | 45.3 | 75.5 | 59.4 | 66.8
Table 5: Effect of each component in our model. SCM: shape cue module. SPR: semantics-augmented pixel refinement. bIoU stands for the boundary IoU.
Variant | Baseline | SCM | SPR | mIoU train (%) | mIoU val (%) | bIoU train (%) | bIoU val (%)
#1 | ✓ | | | 42.7 | 41.7 | 18.2 | 17.8
#2 | ✓ | | ✓ | 45.5 | 44.1 | 23.1 | 22.2
#3 | ✓ | ✓ | | 53.0 | 51.0 | 28.3 | 27.8
#4 | ✓ | ✓ | ✓ | 68.9 | 65.2 | 48.9 | 46.6
Table 6: Analysis on the balancing hyper-parameter λ. We measure the mIoUs of pseudo masks and predictions for the training and validation sets, respectively.
λ | 0 | ⋯ | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0
train-pseudo | 40.4 | ⋯ | 59.9 | 65.4 | 68.8 | 68.5 | 65.6 | 60.0
val | 39.1 | ⋯ | 55.5 | 61.2 | 64.5 | 65.8 | 64.8 | 61.9
Table 7: Training time comparison with multi-stage approaches on the PASCAL VOC 2012 training set. We reproduce the comparative methods using their official code.
Method | Stage | CAM training | Multi-scale CAM generation | Auxiliary model training | Pseudo label generation | Segmentation model training | Total time
AffinityNet [10] | Multi | 15 min | 12 min | 113 min | 4 min | 340 min | 484 min
IRN [18] | Multi | 15 min | 12 min | 21 min | 33 min | 340 min | 421 min
Ours | Single | - | - | - | - | 372 min | 372 min
Table 8: Effects of the loss functions. "bIoU" indicates the boundary IoU.
$\mathcal{L}_{\mathrm{cls}}$ | $\mathcal{L}_{\mathrm{pixel}}$ | $\mathcal{L}_{\mathrm{region}}$ | mIoU train (%) | mIoU val (%) | bIoU train (%) | bIoU val (%)
✓ | | | 35.7 | 35.6 | 22.4 | 22.0
✓ | ✓ | | 61.6 | 58.1 | 38.1 | 36.4
✓ | | ✓ | 33.7 | 33.5 | 23.0 | 22.7
✓ | ✓ | ✓ | 68.9 | 65.2 | 48.9 | 46.6
² For better presentation, we use pixel indices rather than 2D coordinates throughout this paper.
[1] T. Xiao, Y. Liu, B. Zhou, Y. Jiang, J. Sun, Unified perceptual parsing for scene understanding, in: Proceedings of the European Conference on Computer Vision, 2018, pp. 418-434.
[2] X. Liu, Y. Han, S. Bai, Y. Ge, T. Wang, X. Han, S. Li, J. You, J. Lu, Importance-aware semantic segmentation in self-driving with discrete wasserstein training, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, 2020, pp. 11629-11636.
[3] T. Park, M.-Y. Liu, T.-C. Wang, J.-Y. Zhu, Semantic image synthesis with spatially-adaptive normalization, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2337-2346.
[4] L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, A. L. Yuille, Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs, IEEE Transactions on Pattern Analysis and Machine Intelligence 40 (4) (2017) 834-848.
[5] L.-C. Chen, G. Papandreou, F. Schroff, H. Adam, Rethinking atrous convolution for semantic image segmentation, arXiv preprint arXiv:1706.05587.
[6] L.-C. Chen, Y. Zhu, G. Papandreou, F. Schroff, H. Adam, Encoder-decoder with atrous separable convolution for semantic image segmentation, in: European Conference on Computer Vision, 2018, pp. 801-818.
[7] E. Xie, W. Wang, W. Wang, M. Ding, C. Shen, P. Luo, Segmenting transparent objects in the wild, in: Proceedings of the European Conference on Computer Vision, 2020, pp. 696-711.
[8] R. Strudel, R. Garcia, I. Laptev, C. Schmid, Segmenter: Transformer for semantic segmentation, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021.
[9] Z. Huang, X. Wang, J. Wang, W. Liu, J. Wang, Weakly-supervised semantic segmentation network with deep seeded region growing, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7014-7023.
[10] J. Ahn, S. Kwak, Learning pixel-level semantic affinity with image-level supervision for weakly supervised semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 4981-4990.
[11] Y. Wang, J. Zhang, M. Kan, S. Shan, X. Chen, Self-supervised equivariant attention mechanism for weakly supervised semantic segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 12275-12284.
[12] N. Araslanov, S. Roth, Single-stage semantic segmentation from image labels, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 4253-4262.
[13] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, A. Torralba, Learning deep features for discriminative localization, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 2921-2929.
[14] D. Zhang, H. Zhang, J. Tang, X. Hua, Q. Sun, Causal intervention for weakly-supervised semantic segmentation, in: Advances in Neural Information Processing Systems, 2020.
[15] J. Lee, E. Kim, S. Lee, J. Lee, S. Yoon, Ficklenet: Weakly and semi-supervised semantic image segmentation using stochastic inference, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 5267-5276.
[16] J. Lee, E. Kim, S. Yoon, Anti-adversarially manipulated attributions for weakly and semi-supervised semantic segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 4071-4080.
[17] J. Lee, J. Choi, J. Mok, S. Yoon, Reducing information bottleneck for weakly supervised semantic segmentation, in: Advances in Neural Information Processing Systems, 2021.
[18] J. Ahn, S. Cho, S. Kwak, Weakly supervised learning of instance segmentation with inter-pixel relations, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 2209-2218.
[19] B. Zhang, J. Xiao, Y. Wei, M. Sun, K. Huang, Reliability does matter: An end-to-end weakly supervised semantic segmentation approach, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, 2020, pp. 12765-12772.
[20] R. Geirhos, P. Rubisch, C. Michaelis, M. Bethge, F. A. Wichmann, W. Brendel, Imagenet-trained cnns are biased towards texture; increasing shape bias improves accuracy and robustness, in: International Conference on Learning Representations, 2019.
[21] K. L. Hermann, T. Chen, S. Kornblith, The origins and prevalence of texture bias in convolutional neural networks, in: Advances in Neural Information Processing Systems, 2020.
[22] Y. Li, Q. Yu, M. Tan, J. Mei, P. Tang, W. Shen, A. Yuille, C. Xie, Shape-texture debiased neural network training, in: International Conference on Learning Representations, 2021.
[23] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, A. Zisserman, The pascal visual object classes (voc) challenge, International Journal of Computer Vision 88 (2) (2010) 303-338.
[24] J. Long, E. Shelhamer, T. Darrell, Fully convolutional networks for semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 3431-3440.
[25] O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, in: Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer, 2015, pp. 234-241.
[26] H. Li, P. Xiong, J. An, L. Wang, Pyramid attention network for semantic segmentation, arXiv preprint arXiv:1805.10180.
[27] J. Fu, J. Liu, H. Tian, Y. Li, Y. Bao, Z. Fang, H. Lu, Dual attention network for scene segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 3146-3154.
[28] Z. Huang, X. Wang, L. Huang, C. Huang, Y. Wei, W. Liu, Ccnet: Criss-cross attention for semantic segmentation, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 603-612.
[29] X. Li, Z. Zhong, J. Wu, Y. Yang, Z. Lin, H. Liu, Expectation-maximization attention networks for semantic segmentation, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 9167-9176.
[30] G. Ghiasi, T.-Y. Lin, Q. V. Le, Nas-fpn: Learning scalable feature pyramid architecture for object detection, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2019, pp. 7036-7045.
[31] B. Zoph, G. Ghiasi, T.-Y. Lin, Y. Cui, H. Liu, E. D. Cubuk, Q. V. Le, Rethinking pre-training and self-training, arXiv preprint arXiv:2006.06882.
[32] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin, Attention is all you need, in: Advances in Neural Information Processing Systems, 2017, pp. 5998-6008.
[33] Y. Yuan, X. Chen, J. Wang, Object-contextual representations for semantic segmentation, in: Proceedings of the European Conference on Computer Vision, 2020, pp. 173-190.
[34] J. H. Cho, U. Mall, K. Bala, B. Hariharan, Picie: Unsupervised semantic segmentation using invariance and equivariance in clustering, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 16794-16804.
[35] W. Van Gansbeke, S. Vandenhende, S. Georgoulis, L. Van Gool, Unsupervised semantic segmentation by contrasting object mask proposals, arXiv preprint arXiv:2102.06191.
[36] K. Kumar Singh, Y. Jae Lee, Hide-and-seek: Forcing a network to be meticulous for weakly-supervised object and action localization, in: Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 3524-3533.
[37] P. Lee, Y. Uh, H. Byun, Background suppression network for weakly-supervised temporal action localization, in: The 34th AAAI Conference on Artificial Intelligence, 2020, pp. 11320-11327.
Weakly-supervised temporal action localization by uncertainty modeling. P Lee, J Wang, Y Lu, H Byun, The 35th AAAI Conference on Artificial Intelligence. P. Lee, J. Wang, Y. Lu, H. Byun, Weakly-supervised temporal action localization by uncertainty modeling, in: The 35th AAAI Conference on Artificial Intelligence, 2021, pp. 1854-1862.
Learning action completeness from points for weakly-supervised temporal action localization. P Lee, H Byun, 10.1109/ICCV48922.2021.01339Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV). the IEEE/CVF International Conference on Computer Vision (ICCV)P. Lee, H. Byun, Learning action completeness from points for weakly-supervised temporal action localization, in: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 13648-13657. doi:10.1109/ICCV48922.2021.01339.
What's the point: Semantic segmentation with point supervision. A Bearman, O Russakovsky, V Ferrari, L Fei-Fei, European Conference on Computer Vision. SpringerA. Bearman, O. Russakovsky, V. Ferrari, L. Fei-Fei, What's the point: Semantic segmentation with point supervision, in: European Conference on Computer Vision, Springer, 2016, pp. 549-565.
Scribblesup: Scribble-supervised convolutional networks for semantic segmentation. D Lin, J Dai, J Jia, K He, J Sun, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionD. Lin, J. Dai, J. Jia, K. He, J. Sun, Scribblesup: Scribble-supervised convolutional networks for semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp. 3159-3167.
Boundary perception guidance: a scribble-supervised semantic segmentation approach. B Wang, G Qi, S Tang, T Zhang, Y Wei, L Li, Y Zhang, International Joint Conference on Artificial Intelligence. B. Wang, G. Qi, S. Tang, T. Zhang, Y. Wei, L. Li, Y. Zhang, Boundary perception guidance: a scribble-supervised semantic segmentation approach, in: International Joint Conference on Artificial Intelligence, 2019.
Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation. J Dai, K He, J Sun, Proceedings of the IEEE international conference on computer vision. the IEEE international conference on computer visionJ. Dai, K. He, J. Sun, Boxsup: Exploiting bounding boxes to supervise convolutional networks for semantic segmentation, in: Proceedings of the IEEE international conference on computer vision, 2015, pp. 1635-1643.
Box2seg: Attention weighted loss and discriminative feature learning for weakly supervised segmentation. V Kulharia, S Chandra, A Agrawal, P Torr, A Tyagi, European Conference on Computer Vision. SpringerV. Kulharia, S. Chandra, A. Agrawal, P. Torr, A. Tyagi, Box2seg: Attention weighted loss and discriminative feature learning for weakly supervised segmentation, in: European Conference on Computer Vi- sion, Springer, 2020, pp. 290-308.
Background-aware pooling and noise-aware loss for weakly-supervised semantic segmentation. Y Oh, B Kim, B Ham, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionY. Oh, B. Kim, B. Ham, Background-aware pooling and noise-aware loss for weakly-supervised semantic segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recogni- tion, 2021, pp. 6913-6922.
Bbam: Bounding box attribution map for weakly supervised semantic and instance segmentation. J Lee, J Yi, C Shin, S Yoon, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionJ. Lee, J. Yi, C. Shin, S. Yoon, Bbam: Bounding box attribution map for weakly supervised semantic and instance segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2643-2652.
Seed, expand and constrain: Three principles for weakly-supervised image segmentation. A Kolesnikov, C H Lampert, European Conference on Computer Vision. SpringerA. Kolesnikov, C. H. Lampert, Seed, expand and constrain: Three principles for weakly-supervised image segmentation, in: European Conference on Computer Vision, Springer, 2016, pp. 695-711.
Integral object mining via online attention accumulation. P.-T Jiang, Q Hou, Y Cao, M.-M Cheng, Y Wei, H.-K Xiong, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionP.-T. Jiang, Q. Hou, Y. Cao, M.-M. Cheng, Y. Wei, H.-K. Xiong, Integral object mining via online attention accumulation, in: Proceed- ings of the IEEE/CVF International Conference on Computer Vision, 2019, pp. 2070-2079.
Context decoupling augmentation for weakly supervised semantic segmentation. Y Su, R Sun, G Lin, Q Wu, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionY. Su, R. Sun, G. Lin, Q. Wu, Context decoupling augmentation for weakly supervised semantic segmentation, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7004-7014.
Ecs-net: Improving weakly supervised semantic segmentation by using connections between class activation maps. K Sun, H Shi, Z Zhang, Y Huang, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionK. Sun, H. Shi, Z. Zhang, Y. Huang, Ecs-net: Improving weakly supervised semantic segmentation by using connections between class activation maps, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7283-7292.
Unlocking the potential of ordinary classifier: Class-specific adversarial erasing framework for weakly supervised semantic segmentation. H Kweon, S.-H Yoon, H Kim, D Park, K.-J Yoon, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionH. Kweon, S.-H. Yoon, H. Kim, D. Park, K.-J. Yoon, Unlocking the potential of ordinary classifier: Class-specific adversarial erasing framework for weakly supervised semantic segmentation, in: Pro- ceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 6994-7003.
Complementary patch for weakly supervised semantic segmentation. F Zhang, C Gu, C Zhang, Y Dai, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionF. Zhang, C. Gu, C. Zhang, Y. Dai, Complementary patch for weakly supervised semantic segmentation, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7242-7251.
Cian: Cross-image affinity net for weakly supervised semantic segmentation. J Fan, Z Zhang, T Tan, C Song, J Xiao, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34J. Fan, Z. Zhang, T. Tan, C. Song, J. Xiao, Cian: Cross-image affinity net for weakly supervised semantic segmentation, in: Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34, 2020, pp. 10762-10769.
Non-salient region object mining for weakly supervised semantic segmentation. Y Yao, T Chen, G.-S Xie, C Zhang, F Shen, Q Wu, Z Tang, J Zhang, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionY. Yao, T. Chen, G.-S. Xie, C. Zhang, F. Shen, Q. Wu, Z. Tang, J. Zhang, Non-salient region object mining for weakly supervised semantic segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 2623-2632.
Railroad is not a train: Saliency as pseudo-pixel supervision for weakly supervised semantic segmentation. S Lee, M Lee, J Lee, H Shim, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionS. Lee, M. Lee, J. Lee, H. Shim, Railroad is not a train: Saliency as pseudo-pixel supervision for weakly supervised semantic segmen- tation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 5495-5505.
Leveraging auxiliary tasks with affinity learning for weakly supervised semantic segmentation. L Xu, W Ouyang, M Bennamoun, F Boussaid, F Sohel, D Xu, ICCVL. Xu, W. Ouyang, M. Bennamoun, F. Boussaid, F. Sohel, D. Xu, Leveraging auxiliary tasks with affinity learning for weakly super- vised semantic segmentation, in: ICCV, 2021.
Learning integral objects with intraclass discriminator for weakly-supervised semantic segmentation. J Fan, Z Zhang, C Song, T Tan, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionJ. Fan, Z. Zhang, C. Song, T. Tan, Learning integral objects with intra- class discriminator for weakly-supervised semantic segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 4283-4292.
Embedded discriminative attention mechanism for weakly supervised semantic segmentation. T Wu, J Huang, G Gao, X Wei, X Wei, X Luo, C H Liu, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionT. Wu, J. Huang, G. Gao, X. Wei, X. Wei, X. Luo, C. H. Liu, Embedded discriminative attention mechanism for weakly supervised semantic segmentation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 16765- 16774.
Mining cross-image semantics for weakly supervised semantic segmentation, in: European conference on computer vision. G Sun, W Wang, J Dai, L Van Gool, SpringerG. Sun, W. Wang, J. Dai, L. Van Gool, Mining cross-image semantics for weakly supervised semantic segmentation, in: European confer- ence on computer vision, Springer, 2020, pp. 347-365.
Weakly supervised semantic segmentation with boundary exploration. L Chen, W Wu, C Fu, X Han, Y Zhang, European Conference on Computer Vision. SpringerL. Chen, W. Wu, C. Fu, X. Han, Y. Zhang, Weakly supervised seman- tic segmentation with boundary exploration, in: European Conference on Computer Vision, Springer, 2020, pp. 347-362.
Weakly-and semi-supervised learning of a deep convolutional network for semantic image segmentation. G Papandreou, L.-C Chen, K P Murphy, A L Yuille, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionG. Papandreou, L.-C. Chen, K. P. Murphy, A. L. Yuille, Weakly-and semi-supervised learning of a deep convolutional network for seman- tic image segmentation, in: Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 1742-1750.
Combining bottom-up, top-down, and smoothness cues for weakly supervised image segmentation. A Roy, S Todorovic, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionA. Roy, S. Todorovic, Combining bottom-up, top-down, and smooth- ness cues for weakly supervised image segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017, pp. 3529-3538.
Adaptive affinity loss and erroneous pseudo-label refinement for weakly supervised semantic segmentation. X Zhang, Z Peng, P Zhu, T Zhang, C Li, H Zhou, L Jiao, Proceedings of the 29th ACM International Conference on Multimedia. the 29th ACM International Conference on MultimediaX. Zhang, Z. Peng, P. Zhu, T. Zhang, C. Li, H. Zhou, L. Jiao, Adap- tive affinity loss and erroneous pseudo-label refinement for weakly supervised semantic segmentation, in: Proceedings of the 29th ACM International Conference on Multimedia, 2021.
Shape or texture: Understanding discriminative features in cnns. M A Islam, M Kowal, P Esser, S Jia, B Ommer, K G Derpanis, N Bruce, International Conference on Learning Representations. M. A. Islam, M. Kowal, P. Esser, S. Jia, B. Ommer, K. G. Derpanis, N. Bruce, Shape or texture: Understanding discriminative features in cnns, in: International Conference on Learning Representations, 2021.
Shape-biased domain generalization via shock graph embeddings. M Narayanan, V Rajendran, B Kimia, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionM. Narayanan, V. Rajendran, B. Kimia, Shape-biased domain gen- eralization via shock graph embeddings, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 1315-1325.
Can shape structure features improve model robustness under diverse adversarial settings?. M Sun, Z Li, C Xiao, H Qiu, B Kailkhura, M Liu, B Li, Proceedings of the IEEE/CVF International Conference on Computer Vision. the IEEE/CVF International Conference on Computer VisionM. Sun, Z. Li, C. Xiao, H. Qiu, B. Kailkhura, M. Liu, B. Li, Can shape structure features improve model robustness under diverse adversarial settings?, in: Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7526-7535.
Informative dropout for robust representation learning: A shape-bias perspective. B Shi, D Zhang, Q Dai, Z Zhu, Y Mu, J Wang, ICML, PMLR, 2020. B. Shi, D. Zhang, Q. Dai, Z. Zhu, Y. Mu, J. Wang, Informative dropout for robust representation learning: A shape-bias perspective, in: ICML, PMLR, 2020, pp. 8828-8839.
Interpreting adversarially trained convolutional neural networks. T Zhang, Z Zhu, International Conference on Machine Learning. PMLRT. Zhang, Z. Zhu, Interpreting adversarially trained convolutional neural networks, in: International Conference on Machine Learning, PMLR, 2019, pp. 7502-7511.
A mathematical theory of communication. C E Shannon, The Bell system technical journal. 273C. E. Shannon, A mathematical theory of communication, The Bell system technical journal 27 (3) (1948) 379-423.
Efficient inference in fully connected crfs with gaussian edge potentials, Advances in neural information processing systems. P Krähenbühl, V Koltun, 24P. Krähenbühl, V. Koltun, Efficient inference in fully connected crfs with gaussian edge potentials, Advances in neural information pro- cessing systems 24 (2011) 109-117.
Adaptive affinity fields for semantic segmentation. T.-W Ke, J.-J Hwang, Z Liu, S X Yu, ECCVT.-W. Ke, J.-J. Hwang, Z. Liu, S. X. Yu, Adaptive affinity fields for semantic segmentation, in: ECCV, 2018, pp. 587-602.
Semantic contours from inverse detectors. B Hariharan, P Arbeláez, L Bourdev, S Maji, J Malik, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionIEEEB. Hariharan, P. Arbeláez, L. Bourdev, S. Maji, J. Malik, Semantic contours from inverse detectors, in: Proceedings of the IEEE Interna- tional Conference on Computer Vision, IEEE, 2011, pp. 991-998.
Van Den Hengel, Wider or deeper: Revisiting the resnet model for visual recognition. Z Wu, C Shen, A , Pattern Recognition. 90Z. Wu, C. Shen, A. Van Den Hengel, Wider or deeper: Revisiting the resnet model for visual recognition, Pattern Recognition 90 (2019) 119-133.
J Deng, W Dong, R Socher, L.-J Li, K Li, L Fei-Fei, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionIeeeImagenet: A large-scale hierarchical image databaseJ. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, L. Fei-Fei, Imagenet: A large-scale hierarchical image database, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Ieee, 2009, pp. 248-255.
Revisiting dilated convolution: A simple approach for weakly-and semi-supervised semantic segmentation. Y Wei, H Xiao, H Shi, Z Jie, J Feng, T S Huang, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. the IEEE Conference on Computer Vision and Pattern RecognitionY. Wei, H. Xiao, H. Shi, Z. Jie, J. Feng, T. S. Huang, Revisiting di- lated convolution: A simple approach for weakly-and semi-supervised semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 7268-7277.
Weakly-supervised semantic segmentation with saliency and incremental supervision updating. W Luo, M Yang, W Zheng, 10.1016/j.patcog.2021.107858Pattern Recognition. 115107858W. Luo, M. Yang, W. Zheng, Weakly-supervised semantic segmen- tation with saliency and incremental supervision updating, Pattern Recognition 115 (2021) 107858. doi:https://doi.org/10.1016/j. patcog.2021.107858.
Boundary iou: Improving object-centric image segmentation evaluation. B Cheng, R Girshick, P Dollár, A C Berg, A Kirillov, Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. the IEEE/CVF Conference on Computer Vision and Pattern RecognitionB. Cheng, R. Girshick, P. Dollár, A. C. Berg, A. Kirillov, Boundary iou: Improving object-centric image segmentation evaluation, in: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 15334-15342.
An image is worth 16x16 words: Transformers for image recognition at scale. A Dosovitskiy, L Beyer, A Kolesnikov, D Weissenborn, X Zhai, T Unterthiner, M Dehghani, M Minderer, G Heigold, S Gelly, International Conference on Learning Representations. A. Dosovitskiy, L. Beyer, A. Kolesnikov, D. Weissenborn, X. Zhai, T. Unterthiner, M. Dehghani, M. Minderer, G. Heigold, S. Gelly, et al., An image is worth 16x16 words: Transformers for image recognition at scale, in: International Conference on Learning Representations, 2021.
Intriguing properties of vision transformers. M M Naseer, K Ranasinghe, S H Khan, M Hayat, F Khan, M.-H Yang, Advances in Neural Information Processing Systems. 34M. M. Naseer, K. Ranasinghe, S. H. Khan, M. Hayat, F. Shah- baz Khan, M.-H. Yang, Intriguing properties of vision transform- ers, Advances in Neural Information Processing Systems 34 (2021) 23296-23308.
Magnetic structure of hexagonal YMnO 3 and LuMnO 3 from a microscopic point of view

I. V. Solovyev, M. V. Valentyuk, and V. V. Mazurenko

Computational Materials Science Unit, National Institute for Materials Science, 1-2-1 Sengen, Tsukuba, Ibaraki 305-0047, Japan
Department of Theoretical Physics and Applied Mathematics, Ural Federal University, Mira str. 19, 620002 Ekaterinburg, Russia

(21 May 2012; arXiv:1205.4478; doi: 10.1103/physrevb.86.054407)
The aim of this work is to unravel a basic microscopic picture behind complex magnetic properties of hexagonal manganites. For these purposes, we consider two characteristic compounds:YMnO 3 and LuMnO 3 , which form different magnetic structures in the ground state (P 6 3 cm and P 6 3 cm, respectively). First, we establish an electronic low-energy model, which describes the behavior of the Mn 3d bands of YMnO 3 and LuMnO 3 , and derive parameters of this model from the first-principles calculations. From the solution of this model, we conclude that, despite strong frustration effects in the hexagonal lattice, the relativistic spin-orbit interactions lift the degeneracy of the magnetic ground state so that the experimentally observed magnetic structures are successfully reproduced by the low-energy model. Then, we analyze this result in terms of interatomic magnetic interactions, which were computed using different approximations (starting from the model Hamiltonian as well as directly from the first-principles electronic structure calculations in the local-spin-density approximation). We argue that the main reason why YMnO 3 and LuMnO 3 tend to form different magnetic structures is related to the behavior of the single-ion anisotropy, which reflects the directional dependence of the lattice distortion: namely, the expansion and contraction of the Mn-trimers, which take place in YMnO 3 and LuMnO 3 , respectively. On the other hand, the magnetic coupling between the planes is controlled by the next-nearest-neighbor interactions, which are less sensitive to the direction of the trimerization. In the P 6 3 cm structure of YMnO 3 , the Dzyaloshinskii-Moriya interactions lead to the spin canting out of the hexagonal plane -in the same direction as the single-ion anisotropy. Finally, using the Berry-phase formalism, we evaluate the magnetic-state dependence of the ferroelectric polarization, and discuss potential applications of the latter in magnetoelectric switching phenomena.
I. INTRODUCTION
Hexagonal manganites (the space group P 6 3 cm) are one of canonical examples of multiferroics, which have attracted an enormous attention recently. The coexistence of ferroelectricity and magnetism in such systems provides a unique possibility for manipulating the charges by applying a magnetic field and the spins by applying a voltage, which is crucially important for the construction of new forms of multifunctional devices. 1 To this end, the direct magnetic phase control by static electric field was realized in HoMnO 3 . 2 The interplay between the ferroelectric activity and the magnetic order was also demonstrated in YMnO 3 and LuMnO 3 with the measurements of the dielectric constant and the loss tangent, which were shown to exhibit clear anomalies around the Néel temperature (T N = 75 K and 88 K in YMnO 3 and LuMnO 3 , respectively), 3,4 even despite the fact that the ferroelectric transition itself occurred at much higher temperature (T C ∼ 880 K). 5 Another spectacular example is the coupling of magnetic and ferroelectric domains, which was visualized in YMnO 3 by using optical second harmonic generation technique. 6 Furthermore, the magnetic transition in YMnO 3 and LuMnO 3 is accompanied by a distinct change of the atomic positions. 7 Thus, the experimental data clearly demonstrates the existence of a strong coupling amongst electric, magnetic, and lattice degrees of freedom in these hexagonal manganites.
The magnetic frustration is one of the key concepts of multiferroic materials, which may assist the inversion symmetry breaking and, in a number of cases, be even responsible for such a breaking. 8 In this respect, the hexagonal lattice is not an exception, and is typically regarded as a playground for studying the magnetic frustration effects. However, it is also the main complication, hampering the theoretical understanding of multiferroic effects in hexagonal compounds, even despite the fact that the high-spin state (S=2), realized in manganites, is typically regarded as an "easy case" for such theoretical analysis, where the classical spin fluctuations dominate over the quantum ones. Nevertheless, the ground state of classical spins in the hexagonal lattice is expected to be highly degenerate. Different signatures of spin fluctuations, apparently originating from this degeneracy, were indeed observed in the neutron scattering experiments, even below T N . 9,10 Further evidence of spin fluctuations, which is also related to the quasi-two-dimensional character of magnetic interactions, is the large ratio of the Curie-Weiss temperature (θ CW ) to T N (about 7 in YMnO 3 ). 9 The degeneracy can be lifted by lattice distortions and, in this context, plenty of attention is paid to the so-called trimerization instability, inherent to the P 6 3 cm structure. 7,11 However, the trimerization alone does not lift the frustration of isotropic exchange interactions.
In this sense, the situation is fundamentally different from the exchange striction effect, which accompanies the formation of the E-type antiferromagnetic (AFM) state in the orthorhombic YMnO 3 and which lifts the frustration of nearest-neighbor (NN) interactions. 12 Nevertheless, the trimerization can interplay with the relativistic spin-orbit (SO) coupling and, in this way, give rise to new anisotropic interactions, which can lift the degeneracy and stabilize a particular magnetic structure with well-defined symmetry. Such structures were detected in neutron diffraction experiments (Refs. 11, 13, and 14) and by optical second harmonic generation (Ref. 15). In a number of cases (e.g., in LuMnO 3 ), there can be several magnetic structures coexisting in a narrow temperature range. 15 In short, despite these difficulties, there has been enormous experimental progress in the identification of the magnetic structures of hexagonal manganites, which result from a delicate balance between lattice distortion, SO interaction, and frustration effects.
The microscopic understanding of rich magnetic properties of the hexagonal manganites is still rather limited. To begin with, there is no clear microscopic model, which would explain the origin of basic magnetic structures of hexagonal manganites, and why different manganites tend to form different magnetic structures. Basically, it is only known how the trimerization affects the NN isotropic interactions. 11 The presence of single-ion anisotropy and Dzyaloshinskii-Moriya (DM) interactions is, of course, anticipated. However, it is absolutely not clear how all these effects come together to form a variety of magnetic structures, realized in the hexagonal manganites.
In this paper, we will try to answer some of these questions. For these purposes, we consider two characteristic manganites: YMnO 3 and LuMnO 3 , which form different magnetic structures in the ground state: P 6 3 cm and P 6 3 cm, respectively (in the International notations, where each underlined symbol means that given symmetry operation is combined with the time inversion). We will show that this difference can be naturally related to different directions of the trimerization: expansion and contraction of the Mn-trimer, which takes place in YMnO 3 and LuMnO 3 , respectively. In our study, we start from the first-principles electronic structure calculations. First, we construct a low-energy electronic model, which captures details of the magnetic structure and correctly reproduces the magnetic ground state of YMnO 3 and LuMnO 3 . Then, we analyze these results by further transforming the electronic model into the spin one and elucidating which magnetic interaction is responsible for each detail of the magnetic structure. We will also consider the 'temperature effect', associated with the temperature change of the experimental crystal structure, and show that above T N it gradually diminishes the anisotropic interactions.
The rest of the paper is organized as follows. All methodological aspects, such as construction of the electronic model and calculation of magnetic interactions, are discussed in Sec. II. Results of solution of the electronic model in the Hartree-Fock (HF) approximation are presented in Sec. III A. In Sec. III B, we give a detailed analysis of the obtained results in terms of magnetic interactions, which were computed using different starting points. In
Sec. III C, we discuss the magnetic part of the ferroelectric polarization and propose how it can be controlled by switching the magnetic state. Finally, a brief summary of the work is given in Sec. IV.
II. METHOD
Since our goal is the construction of microscopic theory for the magnetic properties of YMnO 3 and LuMnO 3 , we first adopt the low-energy model, which would provide a realistic description for the Mn 3d bands of these compounds:
$$\hat{H} = \sum_{ij}\sum_{\alpha\beta} t^{\alpha\beta}_{ij}\,\hat{c}^{\dagger}_{i\alpha}\hat{c}_{j\beta} \;+\; \frac{1}{2}\sum_{i}\sum_{\{\alpha\}} U_{\alpha\beta\gamma\delta}\,\hat{c}^{\dagger}_{i\alpha}\hat{c}^{\dagger}_{i\gamma}\hat{c}_{i\beta}\hat{c}_{i\delta}\,. \qquad (1)$$
The model is constructed in the basis of Wannier orbitals, using the input from the first-principles electronic structure calculations. Each Wannier orbital is denoted by the Greek symbol, which itself is the combination of spin (s= ↑ or ↓) and orbital (m= xy, yz, 3z 2 −r 2 , zx, or x 2 −y 2 ) variables. Since the Mn 3d bands in hexagonal manganites are well separated from the rest of the spectrum, 11 the construction of the model Hamiltonian (1) is rather straightforward; the corresponding procedure can be found in Ref. 16. After the construction, the model is solved in the HF approximation. 16 This procedure appears to be extremely useful, especially for the search of the magnetic ground state.
Typically, in frustrated magnetic systems, we are dealing with the competition of several magnetic interactions of both relativistic and non-relativistic origin. Therefore, even HF calculations for the relatively simple model (1) can be very time consuming, because they may require tens of thousands of iterations. In such a situation, full-scale electronic structure calculations are simply unaffordable. Since the degeneracy of the ground state is lifted by the lattice distortion, the HF approximation appears to be a good starting point for the analysis of the equilibrium magnetic properties. 16 Of course, the model (1) is not perfect, because it does not explicitly include the oxygen band, which can be important for the quantitative analysis of magnetic properties of manganites. Therefore, whenever possible, we check the results of our model analysis by direct calculations in the local-spin-density approximation (LSDA). For these purposes, we use the tight-binding linear muffin-tin-orbital method (in the following we will refer to such calculations as 'LMTO calculations'). 18 Importantly, in both cases we can employ the same strategy for the calculation of magnetic interactions, which is based on the local force theorem and the Green's function technique. Namely, the isotropic exchange interactions (J ij ) can be obtained in the second-order perturbation-theory expansion for infinitesimal spin rotations, 19 the antisymmetric DM interactions (d ij ) by considering a mixed-type perturbation with respect to the rotations and the relativistic SO coupling, [20][21][22] and the single-ion anisotropy tensors (τ i ) in the second order with respect to the SO interaction. 23
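For illustration only, the following is a minimal sketch (in Python; this is not the authors' code and all numbers are placeholders) of how the HF decoupling of a multi-orbital model of the type (1) can be iterated to self-consistency on a toy basis. The index convention of the Coulomb tensor is assumed to follow Eq. (1), for which the mean-field potential takes the standard Hartree-minus-exchange form.

```python
# Minimal Hartree-Fock sketch for a model of the type of Eq. (1):
# the interaction term is replaced by V_{ab} = sum_{gd} (U_{abgd} - U_{adgb}) n_{gd},
# where n_{gd} = <c^dag_g c_d> is the density matrix, and the one-electron
# problem is iterated to self-consistency.
import numpy as np

def hf_potential(U, n):
    """Hartree minus exchange potential for density matrix n (assumed convention of Eq. (1))."""
    hartree = np.einsum('abgd,gd->ab', U, n)
    exchange = np.einsum('adgb,gd->ab', U, n)
    return hartree - exchange

def solve_hf(h0, U, n_electrons, n_iter=200, mix=0.3):
    dim = h0.shape[0]
    n = np.eye(dim) * (n_electrons / dim)        # crude starting density matrix
    for _ in range(n_iter):
        h = h0 + hf_potential(U, n)
        eps, c = np.linalg.eigh(h)
        occ = c[:, :n_electrons]                 # fill the lowest levels (T = 0)
        n_new = occ.conj() @ occ.T               # n_{gd} = sum_k c*_{gk} c_{dk}
        n = (1 - mix) * n + mix * n_new          # linear mixing for stability
    return eps, n

# toy example: 4 spin-orbitals, 2 electrons, a random Hermitian h0 and a
# density-density ("Hubbard-like") interaction -- placeholder values only
rng = np.random.default_rng(0)
h0 = rng.normal(size=(4, 4)); h0 = (h0 + h0.T) / 2
U = np.zeros((4, 4, 4, 4))
for a in range(4):
    for g in range(4):
        U[a, a, g, g] = 1.0
eps, n = solve_hf(h0, U, n_electrons=2)
print(np.round(eps, 3))
```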
The LMTO calculations have been performed for the AFM configuration ↑↓↑↓↑↓, where the arrows stand for the directions of magnetic moments at the sites 1-6 (see Fig. 1 for the notations of atomic positions). The use of the AFM configuration is essential in order to open the band gap in LSDA. Due to the hybridization with the oxygen states, which is treated explicitly in the LMTO calculations, the value of the spin magnetic moment at the Mn-sites is reduced to 3.5 µ B .
Thus, some deviation of the local magnetic moment from the ionic value (4 µ B ), which is typically seen in the experiment, 11,13 can be attributed to the covalent mixing. In the model HF calculations, a similar effect can be described through the transformation from the Wannier basis to that of atomic orbitals. 16
III. RESULTS AND DISCUSSIONS
A. Optimization of Magnetic Structure
We start with the central result of our work and argue that the low-energy model (1), with the parameters derived from the first-principles electronic structure calculations, 17 successfully reproduces the magnetic ground state of YMnO 3 and LuMnO 3 .
The main candidates for the magnetic ground state of YMnO 3 and LuMnO 3 are shown in Fig. 2 (see also Refs. 13 and 14 for the notations). The unidimensional representations Γ 1 , Γ 2 , Γ 3 , and Γ 4 correspond to the magnetic space groups P 6 3 cm, P 6 3 cm, P 6 3 cm, and P 6 3 cm, respectively. The directions of the spin magnetic moment, obtained in the HF calculations for the low-energy model, are listed in Table I. In the Γ 1 and Γ 4 configurations, all magnetic moments lie in the xy-planes, while in the Γ 2 and Γ 3 configurations, there is also a small canting of magnetic moments along z. Moreover, the Γ 2 configuration allows for the weak ferromagnetism along z, while in the Γ 3 configuration, the z-components of the magnetic moments in the planes z=0 and z=c/2 cancel each other. More generally, the configurations Γ 1 (Γ 2 ) and Γ 4 (Γ 3 ) differ by the magnetic alignment in adjacent xy-planes:
{C 6 z |c/2} acts as the normal symmetry operation in Γ 1 and Γ 2 , which transforms these states to themselves, while in Γ 3 and Γ 4 , {C 6 z |c/2} enters the magnetic symmetry group in combination with the time-inversion operation T̂. It corresponds to the additional flip of magnetic moments in the odd xy-planes of Γ 3 and Γ 4 . We have also considered other possible magnetic configurations with the symmetries Γ 5 and Γ 6 , as explained in Ref. 13. However, as it will become clear from the discussion below, they have higher energies.
The total energies of different magnetic configurations are summarized in Table II.
Thus, the ground state of YMnO 3 is Γ 3 (P 6 3 cm), in agreement with the experiment. 11,15 In LuMnO 3 , the ground state changes to Γ 4 (P 6 3 cm), also in agreement with the experiment. 11,15 However, all the states are located in a narrow energy range, which is expected for frustrated magnetic systems. The lower-symmetry magnetic structure P 6 3 , which is typically regarded as another possible candidate for the magnetic ground state of the hexagonal manganites, 11,14,15 appears to be unstable and steadily converges to either P 6 3 cm (YMnO 3 ) or P 6 3 cm (LuMnO 3 ).
The band gap, obtained for both YMnO 3 and LuMnO 3 , is about 2 eV, which is larger than the experimental value of 1.3 eV. 24 Nevertheless, such a discrepancy is quite expected at the level of the HF calculations.
Directions of the spin magnetic moments at the sites 1-3 in the plane z=0 (part of Table I, 10 K structures):

Γ 1 and Γ 4 :  YMnO 3 : α 1 = 0, β 1 = 60°; α 2 = 0, β 2 = 180°; α 3 = 0, β 3 = 300°.  LuMnO 3 : α 1 = 0, β 1 = 60°; α 2 = 0, β 2 = 180°; α 3 = 0, β 3 = 300°.
Γ 2 and Γ 3 :  YMnO 3 : α 1 = −0.2°, β 1 = 150°; α 2 = −0.2°, β 2 = 270°; α 3 = −0.2°, β 3 = 30°.  LuMnO 3 : α 1 = −0.2°, β 1 = 150°; α 2 = −0.2°, β 2 = 270°; α 3 = −0.2°, β 3 = 30°.
Γ 6 with e 1 ||[120]:  YMnO 3 : α 1 = 0, β 1 = 60°; α 2 = −11.5°, β 2 = 299.2°; α 3 = 11.5°, β 3 = 180.8°.  LuMnO 3 : α 1 = 0, β 1 = 60°; α 2 = −20.2°, β 2 = 297.8°; α 3 = 20.2°, β 3 = 182.2°.
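The angle convention of Table I can be illustrated with a short sketch (not the authors' code); it converts the tabulated (α, β) pairs into unit vectors e i and generates the moments of the z=c/2 plane by the 180° rotation about z, combined with time inversion for Γ 3 , Γ 4 , and Γ 5 , as stated in the table caption.

```python
# e_i = (cos a cos b, cos a sin b, sin a); the z = c/2 plane follows from a 180-degree
# rotation about z, with an additional sign change (time inversion) for Gamma_3/4/5.
import numpy as np

def direction(alpha_deg, beta_deg):
    a, b = np.radians(alpha_deg), np.radians(beta_deg)
    return np.array([np.cos(a) * np.cos(b), np.cos(a) * np.sin(b), np.sin(a)])

def upper_plane(e_lower, time_inversion):
    Rz180 = np.diag([-1.0, -1.0, 1.0])            # 180-degree rotation about the z-axis
    e_upper = [Rz180 @ e for e in e_lower]
    if time_inversion:                             # Gamma_3, Gamma_4, Gamma_5
        e_upper = [-e for e in e_upper]
    return e_upper

# Gamma_2/Gamma_3 row for YMnO3 (10 K): sites 1, 2, 3 in the z = 0 plane
angles = [(-0.2, 150.0), (-0.2, 270.0), (-0.2, 30.0)]
e123 = [direction(a, b) for a, b in angles]
e456_gamma3 = upper_plane(e123, time_inversion=True)
print(np.round(e123, 4))
print(np.round(e456_gamma3, 4))    # note the opposite z-components in the two planes
```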
B. Analysis of Magnetic Interactions
In this section, we clarify results of the HF calculations for the low-energy model and argue that such a good agreement with the experimental data for the magnetic ground state is not surprising and can be understood from the analysis of corresponding magnetic interactions, which in turn depend on details of the lattice distortions in YMnO 3 and LuMnO 3 . Thus,
we consider the spin model: The symmetry of the P 6 3 cm lattice is such that there are two types of NN interactions.
$$H_S = -\sum_{\langle ij\rangle} J_{ij}\,\mathbf{e}_i\!\cdot\!\mathbf{e}_j \;+\; \sum_{\langle ij\rangle} \mathbf{d}_{ij}\!\cdot\![\mathbf{e}_i\times\mathbf{e}_j] \;+\; \sum_i \mathbf{e}_i\,\hat{\tau}_i\,\mathbf{e}_i\,, \qquad (2)$$
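As a simple illustration (a sketch with placeholder numbers, not the authors' code), the classical energy of the spin model (2) for a given set of unit vectors e i can be evaluated directly:

```python
# E = - sum_<ij> J_ij e_i.e_j + sum_<ij> d_ij.(e_i x e_j) + sum_i e_i.tau_i.e_i
import numpy as np

def spin_energy(e, bonds, J, d, tau):
    """e: (N,3) unit vectors; bonds: list of (i,j); J, d: per-bond scalar/vector; tau: 3x3 tensors."""
    E = 0.0
    for (i, j) in bonds:
        E += -J[(i, j)] * np.dot(e[i], e[j])
        E += np.dot(d[(i, j)], np.cross(e[i], e[j]))
    for i in range(len(e)):
        E += e[i] @ tau[i] @ e[i]
    return E

# toy 120-degree triangle with one AFM coupling (J < 0 in this convention) and no anisotropy
e = np.array([[np.cos(t), np.sin(t), 0.0] for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3)])
bonds = [(0, 1), (1, 2), (0, 2)]
J = {b: -1.0 for b in bonds}
d = {b: np.zeros(3) for b in bonds}
tau = [np.zeros((3, 3))] * 3
print(spin_energy(e, bonds, J, d, tau))   # -> -1.5 for the 120-degree structure
```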
The first type takes place in the triangles of atoms 1-2-3 (4-5-6), which are either expanded (the case of YMnO 3 ) or contracted (the case of LuMnO 3 ). Let us discuss the behavior of the single-ion anisotropy tensor. Due to the mirror reflection x→−x, the tensor τ̂ 2 at the site 2 (see Fig. 1) has the following form:
$$\hat{\tau}_2 = \begin{pmatrix} \tau_{xx} & 0 & 0 \\ 0 & \tau_{yy} & \tau_{yz} \\ 0 & \tau_{zy} & \tau_{zz} \end{pmatrix},$$
where τ zy = τ yz and τ xx +τ yy +τ zz = 0. Thus, the magnetic moments can either lie along the
x-axis or be perpendicular to it. In the latter case (and if τ yz ≠ 0) they form a canted magnetic structure. The anisotropy tensors at other Mn-sites can be generated by applying the symmetry operations of the space group P 6 3 cm. The matrix elements of τ̂ 2 can be evaluated in the second order of the perturbation-theory expansion with respect to the SO interactions. 23 Then, near the FM state, we obtain the corresponding sets of independent parameters for the 10 K and 300 K structures, respectively. Since τ zz > τ yy , all structures with large z-components of the magnetic moments are energetically unfavorable. Then, by diagonalizing τ̂ 2 , one can find that the lowest-energy configuration in LuMnO 3 is the one where the magnetic moment at the site 2 is parallel to the x-axis, while the canted configuration is higher in energy by about 0.05 meV (for the 10 K structure). This situation is reversed in YMnO 3 , where the lowest energy corresponds to the canted magnetic configuration. The angle α, formed by the magnetic moment and the y-axis, is about 7°. In the next configuration, which is higher in energy by about 0.10 meV (for the 10 K structure), the magnetic moment is parallel to the
x-axis. This energy difference is reduced to 0.01 meV for the 300 K structure. The same behavior was found in the LMTO calculations: for YMnO 3 , the lowest energy corresponds to the canted magnetic configuration (the canting from the y-axis is about 6°). The next configuration, where the magnetic moment is parallel to the x-axis, is higher in energy by 0.09 meV for the 10 K structure, and this energy difference further decreases for the 300 K structure.
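The competition between the x-axis and the canted direction can be made explicit by diagonalizing the y-z block of τ̂ 2 . The sketch below uses hypothetical values of the tensor elements (chosen only to mimic the qualitative YMnO 3 situation, τ zz > τ yy with a small τ yz ); they are not the computed parameters of the paper.

```python
import numpy as np

# placeholder values in meV, NOT the computed parameters
t_xx, t_yy, t_zz, t_yz = 0.10, -0.25, 0.15, -0.05

yz_block = np.array([[t_yy, t_yz],
                     [t_yz, t_zz]])
w, v = np.linalg.eigh(yz_block)            # ascending eigenvalues of the y-z block
e_min = w[0]
vy, vz = v[:, 0]
if vy < 0:                                 # fix the overall sign of the eigenvector
    vy, vz = -vy, -vz
alpha = np.degrees(np.arctan2(vz, vy))     # canting of the easy direction out of the xy-plane

print(f"energy for the moment along x:   {t_xx:+.3f} meV")
print(f"lowest energy in the y-z plane:  {e_min:+.3f} meV (canting ~ {alpha:+.1f} deg)")
# if e_min < t_xx, the canted (Gamma_3-like) configuration wins, as found for YMnO3;
# otherwise the moments prefer the x-axis, as found for LuMnO3.
```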
Thus, the change of the ground state from Γ 3 to Γ 4 in the direction from YMnO 3 to LuMnO 3 is related to the behavior of the single-ion anisotropy, which in turn correlates with the distortion of the 1-2-3 triangles (expansion and contraction, respectively). Moreover, due to the 180° rotation around the z-axis, which is required in order to transform the site 2 to the site 5 (see Fig. 1), the matrix element τ yz will change sign. Therefore, the cantings of spins in the planes z=0 and z=c/2 of the Γ 3 structure will act in the opposite directions, and the magnetic moments along the z-axis will cancel each other.
The single-ion anisotropy will tend to align the z-components of the magnetic moments ferromagnetically within each xy-plane. However, this effect will compete with the NN AFM interactions J 21 and J 21 ′ . The corresponding analytical expression for the spin canting can be obtained by minimizing the energies of the single-ion anisotropy and isotropic exchange interactions: by assuming that all neighboring spins in the xy-plane form the 120°-structure (as in the case of the Γ 2 and Γ 3 configurations), one can find that
$$\tan 2\alpha = -\,\frac{2\,\tau_{yz}}{\tau_{yy} - \tau_{zz} + 3J_{21} + 6J_{21'}}\,, \qquad (3)$$
where the minus-sign corresponds to the situation, which is realized in our HF calculations and where e 2 is antiparallel to the y-axis (see Fig. 2). Then, for the Γ 3 configuration of YMnO 3 (10 K), the canting angle α can be estimated (using both model and LMTO parameters of magnetic interactions) as α ≈ −τ yz /(3J 21 + 6J 21 ′ ) = −0.03 • , which is about 7 times smaller than the values obtained in self-consistent HF calculations (Table I). Nevertheless, there is an additional contribution to the spin canting, caused by the DM interactions.
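A short numerical illustration of the estimate (3) follows. The τ and J values below are placeholders (the actual parameters are given in the paper's Tables III and IV), chosen only so that the resulting angle has the order of magnitude quoted above (∼ −0.03°); the optional argument f_z anticipates the DM rotational force discussed next.

```python
import numpy as np

def canting_angle(tau_yz, tau_yy, tau_zz, J21, J21p, f_z=0.0):
    """Eq. (3), generalized by the z-component of the DM rotational force."""
    denom = tau_yy - tau_zz + 3.0 * J21 + 6.0 * J21p
    return 0.5 * np.degrees(np.arctan(-2.0 * (tau_yz + f_z) / denom))

# placeholders in meV; J21, J21p are the magnitudes of the AFM couplings entering Eq. (3)
tau_yz, tau_yy, tau_zz = 0.03, -0.2, 0.2
J21, J21p = 6.0, 7.0
print(canting_angle(tau_yz, tau_yy, tau_zz, J21, J21p))   # ~ -0.03 degrees
```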
Parameters of DM interactions between NN sites in the xy-plane are listed in Table IV. They were obtained by considering the mixed perturbation-theory expansion with respect to the infinitesimal spin rotations and the relativistic SO coupling (Sec. II). The situation in which these interactions induce a net canting is realized in the magnetic configurations Γ 2 and Γ 3 (but not in Γ 1 and Γ 4 ), as can be seen as follows. The magnetic moment at the site 2 will experience the additional rotational force from the sites 1, 1 ′ , and 1 ′′ :
$$\mathbf{f}_{1\to 2} = [\mathbf{d}_{21}\times\mathbf{e}_1] + [\mathbf{d}_{21'}\times\mathbf{e}_1] + [\mathbf{d}_{21''}\times\mathbf{e}_1].$$
For the magnetic configurations Γ 2 and Γ 3 , the sites of the type '3' will create the same rotational force:
f 3→2 = f 1→2 . However, for the Γ 1 and Γ 4 configurations, it holds f 3→2 = −f 1→2 . Therefore, these two contributions will cancel each other and there will be no canting of spins.
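The bookkeeping behind this argument can be sketched as follows (hypothetical DM vectors, not the values of Table IV): the force from the '1'-type neighbors is a sum of cross products, only its z-component enters the canting estimate, and the statement of the text is then checked trivially by adding or subtracting the '3'-type contribution.

```python
import numpy as np

# placeholder DM vectors for the bonds 2-1, 2-1', 2-1'' (units of meV, illustrative only)
d_21 = np.array([[ 0.10,  0.05, 0.02],
                 [-0.08,  0.06, 0.02],
                 [-0.02, -0.01, 0.02]])
e1 = np.array([np.cos(np.radians(150.0)), np.sin(np.radians(150.0)), 0.0])  # 120-deg in-plane order

f_12 = np.cross(d_21, e1).sum(axis=0)             # f_{1->2} = sum_k [d_{21_k} x e_1]
print("f_1->2          =", np.round(f_12, 4))
print("Gamma_2/Gamma_3 =", np.round(f_12 + f_12, 4))   # forces add: finite z-component, canting
print("Gamma_1/Gamma_4 =", np.round(f_12 - f_12, 4))   # forces cancel: no canting
```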
These rotational forces should be incorporated in the expression (3) for the spin canting, which yields α ≈ −(τ yz + f z 1→2 )/(3J 21 + 6J 21 ′ ) = −0.04°. This canting is still smaller than α ∼ −0.21°, obtained in the HF calculations for the Γ 3 configuration (Table I). Nevertheless, it should be noted that all the parameters of the spin Hamiltonian (2) were evaluated using the perturbation-theory expansion near the collinear FM state, which is very far from the ground-state configuration Γ 3 . Thus, it is difficult to expect that the perturbation theory, although very useful for the semi-quantitative analysis, should be able to reproduce all details of the solutions of the electronic model (1). In fact, some parameters of the spin Hamiltonian change appreciably when they are recalculated for a configuration close to Γ 3 : the corresponding rotational force f z 1→2 will be about 2 times larger than in the FM state, while the parameters of isotropic exchange interactions J 21 and J 21 ′ decrease by about 15%. These factors will additionally increase α.
Furthermore, the HF potential for the low-energy model (1) is orbitally dependent. In this case, the local force theorem is no longer valid. 19 Therefore, the total energy change due to the SO interaction can be replaced only approximately by the change of the single-particle energies of the HF method. For the single-ion anisotropy, a similar situation was discussed earlier. Nevertheless, the analysis above captures the main qualitative difference between YMnO 3 and LuMnO 3 : why the former tends to form the canted noncollinear magnetic structure Γ 3 , while the latter forms the planar structure Γ 4 .
C. Magnetic Contribution to Ferroelectric Polarization
Finally, we would like to comment on the behavior of electronic polarization P||c.
It was calculated within the Berry-phase formalism, 27 which was adopted for the model calculations. 8 Of course, the ferroelectric activity in YMnO 3 and LuMnO 3 is primarily caused by structural effects. For example, in YMnO 3 , the ferroelectric transition occurs at about T C = 880 K, 5 which is much higher than T N = 75 K. 7 This fact was also confirmed by first-principles calculations. 28 Another appealing piece of evidence is that the ferroelectric domains in YMnO 3 always coincide with the structural ones. 5 Nevertheless, besides this structural deformation, we have found that there is a substantial magnetic contribution to P||c. More specifically, all magnetic configurations can be divided into two groups. The first one includes Γ 1 , Γ 2 , and Γ 6 , where the magnetic moments in the planes z=0 and z=c/2
can be transformed into each other by simple rotations. The second group includes Γ 3 , Γ 4 , and Γ 5 , where these rotations should be additionally combined with the time inversion.
According to our findings, the states in each group are characterized by nearly equal values of P||c. However, the transformation of the magnetic state from one group to another would cause a finite jump of the electronic polarization. Thus, in principle, the value of the ferroelectric polarization can be controlled by changing the magnetic state (and vice versa).
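An order-of-magnitude estimate of the external fields needed for such a switching is sketched below, using the values quoted in the next paragraph for YMnO 3 (∆E ≈ 0.16 meV per Mn, ∆P ≈ 120 µC/m², ∆M ≈ 0.01 µ B per Mn). The volume per Mn site is an assumed literature-like value (not taken from this paper), so the result is only indicative; it nevertheless illustrates why the required E and B are unrealistically large.

```python
# Required fields from  dP * E * v_site = dE  and  dM * B = dE  (order of magnitude only)
e_charge = 1.602e-19          # J per eV
mu_B = 9.274e-24              # J/T
dE = 0.16e-3 * e_charge       # J per Mn site (0.16 meV)
dP = 120e-6                   # C/m^2
dM = 0.01 * mu_B              # J/T per Mn site
v_site = 61e-30               # m^3 per Mn site (assumed ~61 A^3 for hexagonal YMnO3)

E_field = dE / (dP * v_site)
B_field = dE / dM
print(f"required electric field ~ {E_field:.1e} V/m")   # ~ 3e9 V/m
print(f"required magnetic field ~ {B_field:.0f} T")     # ~ 280 T
```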
In this sense, a more promising candidate is YMnO 3 , where the ground state (Γ 3 ) and the first excited state (Γ 2 ) belong to different groups. The energy difference ∆E between these two configurations is about 0.16 meV (see Table II). Then, the change of the ferroelectric polarization, associated with the change of the magnetic state Γ 3 →Γ 2 , can be estimated as ∆P||c = −120 µC/m 2 . The practical realization of such a switching phenomenon would probably be interesting, although it is not immediately clear which external interaction could switch the magnetic state. Formally speaking, the magnetic configuration Γ 2 could be stabilized by the external electric field E||c, which couples to ∆P and results in the additional energy gain −∆PE. Alternatively, one could exploit the fact that Γ 2 allows for a weak ferromagnetism along z (while Γ 3 does not) and, therefore, could also be stabilized by the interaction with the external magnetic field, −∆MB, which couples to the net magnetic moment ∆M (∼ −0.01 µ B per Mn-site). However, in order to overcome the total energy difference ∆E, this would require unrealistically large values of E and B, which cannot be realized in practice. Therefore, one should explore alternative possibilities. For example, from the viewpoint of microscopic interactions, one could use the competition of the NN and next-NN interactions between adjacent xy-planes, which in the case of YMnO 3 act in the opposite directions (see discussions above). The Γ 3 configuration is stabilized by the next-NN interactions. However, if one could find such macroscopic conditions, which would shift this balance towards the NN interactions, one could switch the magnetic structure Γ 3 →Γ 2 and, therefore, the ferroelectric polarization. Another possibility is, of course, to exploit the magnetism of the rare-earth ions, which can act similarly to B, but produces a much stronger effect on the Mn-sublattice. Such a magnetic phase control was indeed realized experimentally in the series of hexagonal manganites with the magnetic rare-earth sublattices.

IV. SUMMARY

In this work, we have constructed the low-energy electronic model for the Mn 3d bands of hexagonal YMnO 3 and LuMnO 3 , with the parameters derived from the first-principles electronic structure calculations. Then, the model was solved in the HF approximation, by considering all possible noncollinear magnetic structures with different symmetries. Since the magnetic frustration in the hexagonal P 6 3 cm lattice is lifted by the relativistic SO interaction, the HF approximation provides a good starting point for the analysis of the magnetic properties of these compounds and successfully reproduces the experimental change of the magnetic ground state from P 6 3 cm to P 6 3 cm in the direction from YMnO 3 to LuMnO 3 , which was observed in the neutron diffraction and nonlinear optical studies.
In order to clarify the microscopic origin of such a change, we have further transformed the electronic model into the spin one and discussed the same trend in terms of the differences in the behavior of magnetic interactions in these systems. We have found that the main reason why YMnO 3 and LuMnO 3 tend to form different magnetic structures is related to the behavior of the single-ion anisotropy, which couples to the trimerization distortion in the hexagonal plane and reflects the different directions of this trimerization in YMnO 3 and LuMnO 3 (expansion and contraction of the Mn-trimers, respectively). On the other hand, the interplane coupling in both compounds is controlled by the next-NN interactions, which are less sensitive to the direction of the trimerization. The spin canting in the P 6 3 cm structure of YMnO 3 is a combined effect of both the single-ion anisotropy and the Dzyaloshinskii-Moriya interactions, which act in the same direction. As the trimerization distortion decreases with the temperature, all anisotropic interactions also decrease, thus reviving the magnetic frustration and the degeneracy of the magnetic state.
Finally, using the Berry-phase formalism, we have estimated the magnetic contribution to the ferroelectric polarization and discussed how it can be controlled by changing the magnetic structure of YMnO 3 .
All calculations have been performed using the experimental parameters of the crystal structure, measured at 10 K and 300 K (Ref. 7, Supplementary Information), i.e., well below and above the magnetic transition point. The experimental space group P 6 3 cm has 12 symmetry operations, which can be generated by the mirror reflection x→−x, m x , and the 60° rotation around the z-axis combined with the half of the hexagonal translation, {C 6 z |c/2}. The crystal-field splitting, obtained from the diagonalization of the site-diagonal part of t̂ ij = [t αβ ij ] (without spin-orbit coupling), is very similar in YMnO 3 and LuMnO 3 : for the 10 K structures, the highest of the five levels lies well above the other four (about 1.57 eV in the case of LuMnO 3 ), and the use of the 300 K structure yields similar results. Clearly, the crystal field tends to stabilize four atomic orbitals, which are separated from the fifth one by a large energy gap. Such a scheme of the crystal-field splitting is consistent with the formal d 4 configuration of the Mn-ions, which is subjected to the Jahn-Teller instability. The fifth (unoccupied) orbital is of predominantly 3z 2 −r 2 symmetry. The off-diagonal elements of t̂ ij = [t αβ ij ] with respect to the site indices stand for the transfer integrals. They are listed in Ref. 17. The value of the screened Coulomb interaction U (defined as the radial Slater's integral F 0 ) is about 2.6 eV for all considered systems. The intraatomic exchange (Hund's) coupling J H is about 0.9 eV, which is practically unscreened. The full matrices of screened Coulomb interactions can also be found in Ref. 17.
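The crystal-field analysis described above amounts to diagonalizing the 5×5 on-site block of the one-electron Hamiltonian. A toy sketch is given below; the matrix is a hypothetical example (not the actual Wannier data of Ref. 17), constructed only to reproduce the qualitative four-plus-one level scheme.

```python
import numpy as np

# placeholder on-site 5x5 block (eV): four low-lying orbitals and one high (3z^2-r^2-like) orbital
h_site = np.diag([-0.9, -0.8, 1.6, -0.7, -0.85])
h_site[0, 4] = h_site[4, 0] = 0.15              # small placeholder off-diagonal mixing

levels = np.linalg.eigvalsh(h_site)
print(np.round(levels, 2))   # four low levels separated by a large gap from the fifth one
```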
FIG. 1. (Color online) Relative positions of the Mn-sites in the hexagonal P 6 3 cm structure: the atoms located in the plane z=0 are indicated by the red (dark) spheres, and the atoms located in the plane z=c/2 are indicated by the light orange (grey) spheres. The Mn-trimers, which transform to each other by the symmetry operation {C 6 z |c/2}, are shaded.

The band gap opened by the AFM configuration in LSDA is about 0.7 eV for YMnO 3 , which is comparable with the experimental optical gap of 1.3 eV, reported in Ref. 24. Certain inconvenience of working with the AFM ↑↓↑↓↑↓ configuration is that it artificially lowers the P 6 3 cm symmetry: in this case, the local symmetry can be preserved only around the sites 2 and 5, which will be selected as the reference points for the analysis of interatomic magnetic interactions. In our LMTO calculations we decided to stick to the regular LSDA functional and not to use any corrections for the on-site Coulomb interactions (LSDA+U). On the one hand, such corrections can improve the description of interatomic magnetic interactions (similar to the model). On the other hand, the use of the LSDA+U functional is always conjugated with some additional uncertainties, related to the double-counting problem. Furthermore, the example of LaMnO 3 shows that LSDA is a reasonably good starting point for the analysis of interatomic magnetic interactions. 20 Nevertheless, when we compare the LMTO results with the model calculations, we discuss possible consequences of the Coulomb U on the magnetic interactions in the former case.
FIG. 2. (Color online) Magnetic structures obtained in the calculations (in the notations of Ref. 13): Γ 1 (a), Γ 2 (b), Γ 3 (c), Γ 4 (d), Γ 5 with e 1 ||[100] (e), Γ 5 with e 1 ||[120] (f), Γ 6 with e 1 ||[100] (g), and Γ 6 with e 1 ||[120] (h). The oxygen atoms are indicated by the small green (grey) spheres. The manganese atoms are indicated by the big spheres: the ones located in the z=0 plane are shown by the red (dark) color, and the ones in the z=c/2 plane by the light orange (grey) color.
TABLE I. The angles α and β, representing the directions e i = (cos α i cos β i , cos α i sin β i , sin α i ) of the spin magnetic moments in the plane z=0, for different magnetic configurations (results of calculations using the experimental crystal structure, measured at 10 K). The atomic positions are explained in Fig. 1. For the magnetic configurations Γ 1 , Γ 2 , and Γ 6 , the directions of the magnetic moments at the sites 4, 5, and 6 in the plane z=c/2 are obtained by the 180° rotations around the z-axis of the vectors e 1 , e 2 , and e 3 . For the magnetic configurations Γ 3 , Γ 4 , and Γ 5 , these 180° rotations should be combined with the time inversion.
Γ 5 with e 1 ||[100]:  α 1 = −9.6°, β 1 = 150°;  α 2 = 4.8°, β 2 = 30.5°;  α 3 = 4.8°, β 3 = 269.5°   |   α 1 = −8.8°, β 1 = 150°;  α 2 = 4.4°, β 2 = 30.3°;  α 3 = 4.4°, β 3 = 269.8°
Γ 5 with e 1 ||[120]:  α 1 = 0, β 1 = 60°;  α 2 = −8.3°, β 2 = 299.5°;  α 3 = 8.3°, β 3 = 180.5°   |   α 1 = 0, β 1 = 60°;  α 2 = −7.6°, β 2 = 299.8°;  α 3 = 7.6°, β 3 = 180.3°
Γ 6 with e 1 ||[100]:  α 1 = −13.4°, β 1 = 150°;  α 2 = 6.6°, β 2 = 30.8°;  α 3 = 6.6°, β 3 = 269.2°   |   α 1 = −23.3°, β 1 = 150°;  α 2 = 11.4°, β 2 = 30.1°;  α 3 = 11.4°, β 3 = 268.0°
Γ 6 with e 1 ||[120]
which can be obtained by eliminating the electronic degrees of freedom from the more general Hubbard model (1), or directly from the LMTO calculations. 16,19-23 In these notations, {J ij } are the isotropic exchange interactions, {d ij } are the antisymmetric DM interactions, {τ i } are the single-ion anisotropy tensors, e i stands for the direction of the spin magnetic moment at the site i, and the summation runs over all pairs of atoms ij. The parameters of isotropic magnetic interactions are listed in Table III, and the atomic positions are explained in Fig. 1. All NN interactions in the plane xy are AFM. This is reasonable, because the ferromagnetic (FM) coupling in the hexagonal geometry can be stabilized only by virtual hoppings onto the unoccupied 3z 2 −r 2 orbital, which are relatively small (see Ref. 17). Moreover, the number of orbital paths, available for the virtual hoppings via this particular 3z 2 −r 2 orbital, is also small. Nevertheless, from the orbital decomposition of J ij in our LMTO calculations, we can conclude that such a FM contribution does exist and compensates about 30% of the AFM contributions, involving all other orbitals, except 3z 2 −r 2 .
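For orientation, a spin model with exactly the ingredients listed above ({J ij }, {d ij }, {τ i }, and the unit vectors e i ) is conventionally written in the form sketched below. Since the actual Eq. (2) is not reproduced in this excerpt, the display should be read as a standard template with our choice of signs and normalization, not as the authors' exact expression:

\[
E(\{\mathbf{e}_i\}) \;=\; -\sum_{\langle ij \rangle} J_{ij}\, \mathbf{e}_i \cdot \mathbf{e}_j \;+\; \sum_{\langle ij \rangle} \mathbf{d}_{ij} \cdot \left[ \mathbf{e}_i \times \mathbf{e}_j \right] \;+\; \sum_{i} \mathbf{e}_i \cdot \hat{\tau}_i\, \mathbf{e}_i ,
\]

where the pair sums run over all pairs of atoms ⟨ij⟩, as stated in the text.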
(the case of YMnO 3 ) or contracted (the case of LuMnO 3 ). The second type takes place in the bonds 2-1 ′ , 2-3 ′ , 2-1 ′′ , 2-3 ′′ , which are all equivalent. Then, due to the mirror reflection x→−x, the NN bonds 2-4 and 2-6 between adjacent xy-planes are also equivalent, and differ from the bond 2-5 ′ . The same situation holds for the next-NN interactions between the planes: there are two equivalent bonds 2-4 ′ and 2-6 ′ , which differ from the bond 2-5. For the NN interactions, both in and between adjacent xy-planes, there is a clear correlation between the bondlength and the strength of the exchange coupling. For example, in the low-temperature structure of YMnO 3 , the triangle of atoms 1-2-3 (4-5-6) is expanded (for two inequivalent NN bonds 2-1 and 2-1 ′ in the xy-plane, the ratio of the bondlengths is l 21 ′ /l 21 =0.961). Therefore, the AFM interaction J 21 ′ is stronger than J 21 . 11 The same tendency holds for the interplane interactions: for two inequivalent NN bonds 2-5 ′ and 2-4 (l 25 ′ /l 24 =0.991), the AFM interaction J 25 ′ is stronger than J 24 . In LuMnO 3 , where the triangle of atoms 1-2-3 (4-5-6) is compressed, the situation is the opposite: l 21 ′ /l 21 =1.016 and l 25 ′ /l 24 =1.003. Therefore, the exchange interactions in the bonds 2-1 and 2-4 are stronger than in the bonds 2-1 ′ and 2-5 ′ . The behavior of the next-NN interactions between the planes obeys quite different rules. Since the direct transfer integrals are small (see Ref. 17 for details), these interactions are realized as "super-superexchange" processes via intermediate sites in the paths 2→6→5, 2→1→5, etc., which always include one compressed and one expanded bond. Therefore, the simple analysis in terms of the bondlengths is no longer applicable. Instead, we have found that for all considered compounds (and all considered structures), the AFM interaction in the bond 2-5 appears to be weaker than in the bond 2-4 ′ (and in the equivalent bond 2-6 ′ ). Such behavior has very important consequences: in a noncollinear structure, it is more favorable energetically to form the FM coupling in the bond 2-5 in order to maximize the AFM coupling in the two other next-NN bonds 2-4 ′ and 2-6 ′ . In particular, it explains why the magnetic ground state of YMnO 3 and LuMnO 3 should be Γ 3 , Γ 4 , or Γ 5 , which are characterized by the FM coupling in the bond 2-5, and not Γ 1 , Γ 2 , or Γ 6 , where this coupling is AFM (see Fig. 2). In LuMnO 3 , this effect is additionally enhanced by the NN interactions between the planes: since the AFM interaction in the bond 2-5 ′ is weaker than in the two equivalent bonds 2-4 and 2-6, it is more favorable energetically to form the FM coupling between the sites 2, 5 and 5 ′ , where the latter two are connected by the translation. However, in YMnO 3 , the situation is the opposite and there is a strong competition between the NN and next-NN interactions between the planes. In particular, it explains the small energy difference between the configurations Γ 3 and Γ 2 . The reliability of the obtained parameters can be checked by calculating θ CW . In the case of the classical Heisenberg model, the latter is given by the formula θ CW ≈ Σ i J 2i /(3k B ), which yields −562 and −650 K for the 10 K structure of YMnO 3 and LuMnO 3 , respectively. In the quantum case, these values should be additionally multiplied by (1+1/S).
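The Curie-Weiss estimate quoted above is simply a weighted sum of the exchange parameters over the neighbors of a reference Mn site. The Python sketch below spells out that bookkeeping for the YMnO 3 (10 K) column of Table III; the coordination numbers in the list are our own illustrative assumptions (they follow the bond labels of Fig. 1 but are not spelled out in the text), so the script demonstrates the formula θ CW ≈ Σ i J 2i /(3k B ) rather than reproducing the quoted −562 K exactly.

K_B = 0.08617  # Boltzmann constant in meV/K

def theta_cw(couplings):
    """Classical Curie-Weiss temperature (in K) from (J in meV, assumed multiplicity) pairs."""
    return sum(J * z for J, z in couplings) / (3.0 * K_B)

# Exchange parameters for YMnO3 (10 K) from Table III; the multiplicities z are assumptions.
ymno3_10K = [
    (-21.28, 2),  # in-plane NN bonds of type 2-1          (assumed z = 2)
    (-26.35, 4),  # in-plane NN bonds of type 2-1'         (z = 4, as stated in the text)
    (-0.12, 4),   # interplane NN bonds of type 2-4        (assumed z = 4)
    (-0.19, 2),   # interplane NN bonds of type 2-5'       (assumed z = 2)
    (-0.24, 4),   # interplane next-NN bonds of type 2-4'  (assumed z = 4)
    (-0.07, 2),   # interplane next-NN bonds of type 2-5   (assumed z = 2)
]

print(round(theta_cw(ymno3_10K)))  # of the same order as the -562 K quoted in the text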
The structural changes have some effect mainly on YMnO 3 , and, if one uses the parameters obtained for the 300 K structure, |θ CW | decreases by 7% (for comparison, the similar change of θ CW for LuMnO 3 is about 1%). In any case, the obtained values are in good agreement with the experimental data. 4,7,11 The calculations of T N are not straightforward: due to the quasi-two-dimensional character of the isotropic exchange interactions, T N will be strongly suppressed by thermal fluctuations, as one of the consequences of the Mermin-Wagner theorem. 25 Of course, the molecular-field approximation will overestimate T N (by a factor of 4 in comparison with the experiment). The LMTO calculations yield values of the NN interactions (J 21 , J 21 ′ ) in the plane xy (in meV) that are all weaker than in the model analysis. Nevertheless, this seems reasonable. First, the NN interactions are generally weaker in the AFM ↑↓↑↓↑↓ configuration. This effect was also found in the model calculations, as will become clear below. Second, the ratio of AFM to FM contributions to the exchange coupling in manganites scales with the value of U as (U−J H )/(U+3J H ) ≈ 1−4J H /U. 26 Thus, the larger U, which was used in the model (but not in the LMTO calculations), will shift this balance towards the AFM coupling. A similar tendency was found for the inter-layer interactions: although LSDA, underlying the LMTO calculations, somewhat overestimates the FM contributions to the exchange interactions, the modulation of these interactions, caused by the lattice distortion, again favors the formation of the magnetic configurations Γ 3 or Γ 4 . For example, in YMnO 3 (10 K), the LMTO calculations yield: J 24 = 0.20 meV, J 25 ′ = 0.04 meV, J 24 ′ = −0.14 meV, and J 25 = 0.08 meV. Therefore, these calculations confirm that the experimental coupling between the hexagonal layers is stabilized by the next-NN interactions, J 25 > J 24 ′ . The NN interactions act in the opposite direction, J 25 ′ < J 24 . However, their effect is smaller.
meV): (τ yy , τ yz , τ zz ) = (−0.34, −0.12, 0.58), (−0.29, −0.11, 0.58), (−0.25, −0.12, 0.57), and (−0.26, −0.12, 0.57) for YMnO 3 (10 K), YMnO 3 (300 K), LuMnO 3 (10 K), and LuMnO 3 (300 K), respectively.
TABLE IV. Parameters of Dzyaloshinskii-Moriya interactions (measured in meV), calculated in the ferromagnetic states of YMnO 3 and LuMnO 3 . The atomic positions are explained in Fig. 1. Calculations have been performed using the experimental parameters of the crystal structure, measured at 10 K and 300 K (as denoted in the notations).
interaction and infinitesimal rotations of spin magnetic moments. 20 In principle, the parameters d 21 ′ and d 21 ′′ are not independent and can be transformed to each other using the symmetry operations of the space group P 6 3 cm. However, it is more convenient to consider their contributions independently. Due to the mirror reflection x→−x, the elements of the two axial vectors d 23 and d 21 (see Fig. 1) obey the following rules: d x 23 = d x 21 , d y 23 = −d y 21 , and d z 23 = −d z 21 (a similar situation holds for other NN interactions). Thus, they will produce a finite canting at the site 2 only if the directions of the two other magnetic moments e 1 and e 3 have an AFM component along x and a FM component along y, i.e., e x 3 = −e x 1 and e y 3 = e y 1 .
The parameters of the spin model (2) appear to be sensitive to the state in which they are calculated. For example, we have also considered the collinear AFM configuration ↑↓↑↓↑↓, where the arrows stand for the directions of the magnetic moments at the sites 1-6. In this case, the DM interactions involving the site 2, which is AFM coupled with all NN spins in the xy-plane, become (in meV): d 21 = (0.01, 0.03, 0.01), d 21 ′ = (0.04, 0, 0), and d 21 ′′ = (−0.01, 0.05, 0.01). Then,
Appendix B of Ref. 23. Presumably, this is the main reason explaining the quantitative difference between the results of the electronic and spin models. Thus, these are typical uncertainties supplementing the construction and analysis of the spin model (2). Nevertheless, the local force theorem is valid within LSDA. Therefore, it is interesting to estimate the spin canting in the LMTO calculations, which are based on the LSDA functional. In this case, all DM interactions become larger. For example, for YMnO 3 (10 K) we have obtained the following parameters (in meV): d 21 = (−0.01, 0.14, −0.20), d 21 ′ = (−0.16, 0.04, −0.12), and d 21 ′′ = (0.06, 0.18, −0.26). Then, by combining them with the corresponding parameters of the single-ion anisotropy τ yz = −0.078 meV and the isotropic exchange interactions J 21 and J 21 ′ , which are listed above, we obtain the canting angle α = −0.12°. Thus, it is interesting that LSDA, despite its limitations, provides the best starting point for the analysis of the spin canting via the perturbation-theory expansion for the spin-orbit interaction, due to the validity of the local force theorem. A similar situation was found in the orthorhombic LaMnO 3 . 20 Thus, although the derivation of the parameters of the spin model (2) may differ in details, depending on the form of the electronic Hamiltonian, which is used as the starting point, as well as on some additional approximations underlying the definitions of the model parameters, this analysis provides a clear microscopic basis for understanding the main difference between YMnO 3 and LuMnO 3 .
On the basis of first-principles electronic structure calculations, we have established the low-energy model, which is able to capture the basic magnetic properties of hexagonal manganites. This Hubbard-type model describes the behavior of the Mn 3d bands, subjected to the lattice deformation and on-site electron-electron interactions. All parameters of such a model, derived from the first-principles calculations for two characteristic manganites, YMnO 3 and LuMnO 3 , are summarized in Ref. 17.
TABLE II. Total energies of different magnetic configurations as obtained in the Hartree-Fock calculations for the low-energy model. The energies are measured in meV per one formula unit, relative to the most stable configuration. The magnetic configurations are explained in Fig. 2. The calculations for YMnO 3 and LuMnO 3 have been performed using the experimental crystal structure, measured at 10 K and 300 K (as denoted in the notations).
configuration            YMnO 3 (10 K)   YMnO 3 (300 K)   LuMnO 3 (10 K)   LuMnO 3 (300 K)
Γ 1                          0.37            0.20             0.48             0.23
Γ 2                          0.16            0.19             0.61             0.32
Γ 3                          0               0                0.13             0.10
Γ 4                          0.21            0.01             0                0
Γ 5 with e 1 ||[100]         0.90            0.76             1.09             1.06
Γ 5 with e 1 ||[120]         0.90            0.76             1.09             1.06
Γ 6 with e 1 ||[100]         1.06            0.94             1.53             1.27
Γ 6 with e 1 ||[120]         1.06            0.94             1.53             1.27
TABLE III. Parameters of isotropic exchange interactions (measured in meV), calculated in the ferromagnetic states of YMnO 3 and LuMnO 3 . The atomic positions are explained in Fig. 1. Calculations have been performed using the experimental parameters of the crystal structure, measured at 10 K and 300 K (as denoted in the notations).
bond      YMnO 3 (10 K)   LuMnO 3 (10 K)   YMnO 3 (300 K)   LuMnO 3 (300 K)
2-1          −21.28           −31.81           −23.26           −30.16
2-1 ′        −26.35           −27.57           −22.67           −27.92
2-4           −0.12            −0.20            −0.13            −0.20
2-5 ′         −0.19            −0.11            −0.08            −0.10
2-4 ′         −0.24            −0.31            −0.21            −0.24
2-5           −0.07            −0.16            −0.16            −0.23
I. V. Solovyev and Z. V. Pchelkina, Phys. Rev. B 82, 094425 (2010); I. V. Solovyev, ibid. 83, 054404 (2011). Note that a prefactor was missing in the previous model calculations of P, and all values of the electric polarization, reported in this paper, should be additionally divided roughly by 2.5. This partly resolves the problem of disagreement with the experimental data.
[email protected] † Temporarily at Institute of Theoretical Physics. * Solovyev, 920355University of Hamburg* [email protected] † Temporarily at Institute of Theoretical Physics, University of Hamburg, Jungiusstrasse 9, 20355
S.-W. Cheong and M. Mostovoy, Nature Materials 6, 13 (2007).
Th. Lottermoser, Th. Lonkai, U. Amann, D. Hohlwein, J. Ihringer, and M. Fiebig, Nature 430, 541 (2004).
Z. J. Huang, Y. Cao, Y. Y. Sun, Y. Y. Xue, and C. W. Chu, Phys. Rev. B 56, 2623 (1997).
T. Katsufuji, S. Mori, M. Masaki, Y. Moritomo, N. Yamamoto, and H. Takagi, Phys. Rev. B 64, 104419 (2001).
T. Choi, Y. Horibe, H. T. Yi, Y. J. Choi, W. Wu, and S.-W. Cheong, Nature Materials 9, 253 (2010).
M. Fiebig, Th. Lottermoser, D. Frölich, A. V. Goltsev, and R. V. Pisarev, Nature 419, 818 (2002).
S. Lee, A. Pirogov, M. Kang, K.-H. Jang, M. Yonemura, T. Kamiyama, S.-W. Cheong, F. ...; the details will be discussed in a separate publication.
J. Park, J.-G. Park, G. S. Jeon, H.-Y. Choi, C. Lee, W. Jo, R. Bewley, K. A. McEwen, and T. G. Perring, Phys. Rev. B 68, 104426 (2003).
T. J. Sato, S.-H. Lee, T. Katsufuji, M. Masaki, S. Park, J. R. D. Copley, and H. Takagi, Phys. Rev. B 68, 014432 (2003).
J. Park, S. Lee, M. Kang, K.-H. Jang, C. Lee, S. V. Streltsov, V. V. Mazurenko, M. V. Valentyuk, J. E. Medvedeva, T. Kamiyama, and J.-G. Park, Phys. Rev. B 82, 054428 (2010).
D. Okuyama, S. Ishiwata, Y. Takahashi, K. Yamauchi, S. Picozzi, K. Sugimoto, H. Sakai, M. Takata, R. Shimano, Y. Taguchi, T. Arima, and Y. Tokura, Phys. Rev. B 84, 054440 (2011).
A. Muñoz, J. A. Alonso, M. J. Martínez-Lope, M. T. Casáis, J. L. Martínez, and M. T. Fernández-Díaz, Phys. Rev. B 62, 9498 (2000).
P. J. Brown and T. Chatterji, J. Phys.: Condens. Matter 18, 10085 (2006).
M. Fiebig, D. Frölich, K. Kohn, St. Leute, Th. Lottermoser, V. V. Pavlov, and R. V. Pisarev, Phys. Rev. Lett. 84, 5620 (2000).
I. V. Solovyev, J. Phys.: Condens. Matter 20, 293201 (2008).
O. K. Andersen, Z. Pawlowska, and O. Jepsen, Phys. Rev. B 34, 5253 (1986).
A. I. Liechtenstein, M. I. Katsnelson, V. P. Antropov, and V. A. Gubanov, J. Magn. Magn. Mater. 67, 65 (1987).
I. Solovyev, N. Hamada, and K. Terakura, Phys. Rev. Lett. 76, 4825 (1996).
V. V. Mazurenko and V. I. Anisimov, Phys. Rev. B 71, 184434 (2005). Note that this work employed the same strategy for the derivation of the parameters of DM interactions as in Ref. 20, but a different choice of phases in the spin-rotation matrix. Namely, the rotation of the spin from e 0 = (0, 0, 1) to e = (cos ϕ sin θ, sin ϕ sin θ, cos θ) was described by the following sets of the Euler angles: (α I , β I , γ I ) = (ϕ, θ, −ϕ) in this work, and (α II , β II , γ II ) = (ϕ, θ, 0) in Ref. 20. The second choice provides a more compact expression for the DM interactions and allows one to get rid of the on-site contribution to the rotational force. Of course, the total force created by all spins does not depend on the phase choice.
A. N. Rudenko, V. V. Mazurenko, V. I. Anisimov, and A. I. Lichtenstein, Phys. Rev. B 79, 144418 (2009).
I. V. Solovyev, P. H. Dederichs, and I. Mertig, Phys. Rev. B 52, 13419 (1995).
A. M. Kalashnikova and R. V. Pisarev, JETP Letters 78, 143 (2003).
N. D. Mermin and H. Wagner, Phys. Rev. Lett. 17, 1133 (1966); ibid. 17, 1307(E) (1966).
K. I. Kugel and D. I. Khomskii, Sov. Phys. Usp. 25, 231 (1982).
D. Vanderbilt and R. D. King-Smith, Phys. Rev. B 48, 4442 (1993); R. Resta, J. Phys.: Condens. Matter 22, 123201 (2010).
B. B. van Aken, T. T. M. Palstra, A. Filippetti, and N. A. Spaldin, Nature Materials 3, 164 (2004).
M. Fiebig, Th. Lottermoser, and R. V. Pisarev, J. Appl. Phys. 93, 8194 (2003).
| [] |
[
"TORSORS AND TILINGS FROM TORIC TOGGLING",
"TORSORS AND TILINGS FROM TORIC TOGGLING"
] | [
"Colin Defant ",
"Michael Joseph ",
"Matthew Macauley ",
"Alex Mcdonough "
] | [] | [] | Much of dynamical algebraic combinatorics focuses on global dynamical systems defined via maps that are compositions of local toggle operators. The second author and Roby studied such maps that result from toggling independent sets of a path graph. We investigate a "toric" analogue of this work by analyzing the dynamics arising from toggling independent sets of a cycle graph. Each orbit in the dynamical system can be encoded via a grid of 0s and 1s; two commuting bijections on the set of 1s in this grid produce torsors for what we call the infinite snake group and the finite ouroboros groups. By studying related covering maps, we deduce precise combinatorial properties of the orbits. Because the snake and ouroboros groups are abelian, they define tilings of cylinders and tori by parallelograms, which we also characterize. Many of the ideas developed here should be adaptable both to other toggle actions in combinatorics and to other cellular automata. | null | [
"https://export.arxiv.org/pdf/2305.07627v1.pdf"
] | 258,676,443 | 2305.07627 | c458af2cc9a3c3f3f58b20d3c16e50f9e96054fe |
TORSORS AND TILINGS FROM TORIC TOGGLING
Colin Defant
Michael Joseph
Matthew Macauley
Alex Mcdonough
TORSORS AND TILINGS FROM TORIC TOGGLING
Much of dynamical algebraic combinatorics focuses on global dynamical systems defined via maps that are compositions of local toggle operators. The second author and Roby studied such maps that result from toggling independent sets of a path graph. We investigate a "toric" analogue of this work by analyzing the dynamics arising from toggling independent sets of a cycle graph. Each orbit in the dynamical system can be encoded via a grid of 0s and 1s; two commuting bijections on the set of 1s in this grid produce torsors for what we call the infinite snake group and the finite ouroboros groups. By studying related covering maps, we deduce precise combinatorial properties of the orbits. Because the snake and ouroboros groups are abelian, they define tilings of cylinders and tori by parallelograms, which we also characterize. Many of the ideas developed here should be adaptable both to other toggle actions in combinatorics and to other cellular automata.
It has been said that mathematics is the study of patterns, and this paper is all about understanding and analyzing interesting patterns that arise from a simple combinatorial action. Rather than write this paper in a traditional format starting with an overview of the background followed by a list of necessary definitions, we will begin with an example that illustrates the types of curious patterns and structures that attracted us to this problem in the first place. The action we are studying is elementary enough that it can be easily understood by a student in elementary school, yet the mathematical ideas that arise come from the interplay of combinatorics, group theory, number theory, and algebraic topology. The commuting bijections that make this all possible-they define torsors and tilings from orbits of independent sets-also appear in other actions of combinatorial objects and certain cellular automata. An independent set of the cycle graph C n can be viewed as a cyclic binary string v 1 , . . . , v n such that no two adjacent entries are both 1, where "cyclic" means that v 1 and v n are adjacent. The toggle operation at position k is the function τ k that "attempts to flip" the k th bit. Specifically, if v k = 1, then τ k flips it to 0. On the other hand, if v k = 0, then τ k flips it to 1 if doing so does not introduce consecutive 1s; otherwise, it fixes the k th bit. In this paper, we will study the action of iteratively toggling the bits of our binary string in the order τ 1 , . . . , τ n , and we will denote this by the map τ = τ n • · · · • τ 1 . Given an initial cyclic binary string x (0) , let x (1) = τ x (0) , x (2) = τ x (1) , and so on. Eventually, after some m ≥ 1 number of steps, we will return to our original string. That is, x (m+i) = x (i) for all i. An example of an orbit x (0) , . . . , x (m−1) of this action with n = 12 and m = 15 is shown in Figure 1. Toggle actions on combinatorial objects are important in the field of dynamical algebraic combinatorics; in the next subsection, we will provide some context, background, and motivation for studying them, and we will discuss why independent sets are of interest. For now, let us return to the example in Figure 1 and point out that for this particular orbit, the cyclic string · · · 345345 · · · consisting of the column sums has period 3. Our computational experiments suggested that this period must be odd in any orbit. This was the first curious property that we wanted to understand. Another pattern we observed is that if one reads the orbit table as one reads a book-across each row, from top to bottom-the resulting string (called the orbit vector ) consists of several repeating copies of a shorter string. For example, the table in Figure 1 has 180 entries, but it is easy to check that it is simply four repeated copies of the first 45 entries. For each fixed n, we also saw certain orbit sizes arise more often than others, and we wanted to better understand this. Finally, we saw patterns within the 1s which held for any orbit table. Specifically, for every 1, there is another 1 either two positions to the right or one position diagonally down and to the right (where we allow "wrapping around" the end of the table and from bottom to top). Additionally, many local patterns of 1s seemed to repeat throughout the tables. Notice how Figure 1 has many examples of 10101 substrings, as well as three consecutive diagonal 1s. This regularity of patterns suggested that there could be some hidden algebraic structure. 
Indeed, there turns out to be a simply transitive action of a particular abelian group on the live entries (the 1s) in any orbit table. In other words, the live entries are endowed with a group structure, but there is no distinguished identity element; this is called a torsor. This group, which is an invariant of the orbit, encodes a number of combinatorial properties of the original action. The fact that this group is abelian means that it defines a regular tiling of an infinite cylinder by parallelograms. The combinatorial patterns that we initially saw, and much more, can be explained through this algebraic lens. In this paper, we will develop the theory of these actions, and we will prove a number of theorems about them.
This paper is organized as follows. We will conclude the introduction in Section 1.2 by reviewing the notion of toggling and further discussing how we became interested in this problem. This can be considered "optional," but it gives the rest of the paper context and should be helpful for the non-expert. In Section 2, we formalize the orbits generated by toggling independent sets in two ways: as bi-infinite strings called ticker tapes and as bi-infinite tables called scrolls, which naturally live on cylinders. We introduce commuting bijections called the successor functions and co-successor functions, which define equivalence classes called snakes and co-snakes. These correspond to cosets of an infinite abelian snake group, which acts simply transitively on the set of live entries. Studying this group and its actions-both on the scrolls and on their universal covers-helps us understand toggling dynamics. Not only does this group make the set of live entries into a torsor, but its action gives the (co-)snakes a regular "shape" called their (co-)slither. These are invariants of the orbit, and they associate to the orbit tilings of the cylinder and the plane (its universal cover) by parallelograms. In Section 3, we completely classify the dynamics of the action (Theorem 3.11) by characterizing the possible orbits (Corollary 3.9). In Section 4, we take various quotients of our scrolls to obtain finite orbit tables that naturally embed into tori. The (co-)snakes merge into equivalence classes called (co-)ouroboroi, inducing a homomorphism from the snake group to the finite abelian ouroboros group. The snake group action descends to a simply transitive action on the live entries in the quotient tables, endowing them with a torsor structure for the ouroboros group. In Section 5, we return to the original topic that drew us to this problem: the period of the so-called sum vector. The odd periodicity (Corollary 5.4) is a straightforward consequence of the theory developed in the previous sections. However, we prove a much stronger result by characterizing which odd numbers arise as the period of the sum vector of some orbit for a given n (Theorem 5.6). We conclude this paper in Section 6 with discussions of open problems, how this framework can be broadened to other actions from dynamical algebraic combinatorics, and how it arises in certain cellular automata [DDJ + 21]. The crux of why all this works is due to the existence of commuting bijections that act simply transitively on the live entries. The fact that this phenomenon appears in other problems from combinatorics and other fields such as cellular automata suggests that the theory is of general interest.
1.2. Why toggle independent sets? The notion of toggling has recently gained considerable interest within the field of dynamical algebraic combinatorics; for surveys, see [Rob16,Str17]. Toggles yield an action on a collection L of subsets of a finite set. Common examples of the set L are the set of order ideals of a poset [SW12], the set of antichains of a poset [Jos19], the set of noncrossing partitions [EFG + 16], or the set of independent sets of a path graph [JR18]. In what follows, we may assume L is a collection of subsets of the set [n] := {1, . . . , n}. For k ∈ [n], the toggle τ k : L → L is defined by
(1)  τ k (E) =
     E ∪ {k}   if k ∉ E and E ∪ {k} ∈ L,
     E \ {k}   if k ∈ E and E \ {k} ∈ L,
     E         otherwise.
By construction, each toggle is a bijection, so this defines a group of permutations Tog(L) = ⟨τ 1 , . . . , τ n ⟩ called the toggle group. Since each τ k is an involution, Tog(L) is a quotient of a Coxeter group. Following Coxeter theory, we will define a Coxeter element to be the product of all toggles in some order. Though sometimes it can be of interest to classify this group, work on toggling is usually focused elsewhere, such as understanding which classical bijections can be decomposed as products of toggles, which combinatorial statistics are invariant or homomesic under toggling, and how the dynamics change under different toggle orders. The first objects to be toggled were order ideals of a poset P . In 1974, Brouwer studied a bijection on the set J (P ) of order ideals that sends I to the order ideal generated by the minimal elements of P \ I [BS74]. In 1995, Cameron and Fon-Der-Flaass showed that this bijection can be constructed in graded posets by toggling each element of P once, in a particular order: by rows, from top-to-bottom [CF95]. In 2012, Striker and Williams observed that the conjugate Coxeter element, toggling by columns, was closely related to the classic operation of promotion on semistandard Young tableaux. This motivated them to name the aforementioned bijection rowmotion, denoted Row J , and also to formalize and name toggles and the toggle group.
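To make Eq. (1) concrete, here is a small Python sketch (the function and variable names are ours, not from the paper) that implements a toggle on an arbitrary collection L of subsets of [n] and checks the involution property noted above, taking the independent sets of the 4-cycle as the collection L.

from itertools import combinations

def toggle(k, E, L):
    """The toggle of Eq. (1): try to add or remove k while staying inside L."""
    if k not in E and frozenset(E | {k}) in L:
        return frozenset(E | {k})
    if k in E and frozenset(E - {k}) in L:
        return frozenset(E - {k})
    return E

# Example collection L: independent sets of the cycle C_4 on vertices 1, 2, 3, 4.
n = 4
edges = [{i, i % n + 1} for i in range(1, n + 1)]
subsets = [frozenset(s) for r in range(n + 1) for s in combinations(range(1, n + 1), r)]
L = {E for E in subsets if not any(e <= E for e in edges)}

# Each toggle is an involution on L, as claimed in the text.
assert all(toggle(k, toggle(k, E, L), L) == E for E in L for k in range(1, n + 1))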
In [CF95], Cameron and Fon-Der-Flaass also studied what is now known as antichain rowmotion: the map Row A sending A to the set of minimal element(s) of the complement of the order ideal I(A) := {x ∈ P | x ≤ P a for some a ∈ A}. Panyushev studied this map on root posets [Pan09]; this is one of the works that was a major impetus for the development of dynamical algebraic combinatorics. The second author of the present paper studied this bijection in the context of toggling [Jos19]. Even though order ideal and antichain rowmotion are conjugate via Row J • I = I • Row A , relating their factorizations into individual toggles is surprisingly trickier.
The aforementioned work led to the notion of toggling other combinatorial objects [Str15], as discussed above and defined in Eq. (1). Since every antichain is an independent set in a certain graph, toggling independent sets was a natural next step. Additionally, the second and third author (with others) considered toggling noncrossing partitions, viewed as collections of arcs [EFG + 16]. This revealed factorizations of existing actions such as the Kreweras complement and Simion-Ullman involution in terms of toggles. Like with antichains, toggling noncrossing partitions is a special case of toggling independent sets. All of this motivated the third author and Roby to investigate independent set toggling on its own. This is a difficult problem in general, so they started with a path graph [JR18]. The goals of the authors of [JR18] were to study combinatorial statistics called homomesies arising from toggling independent sets of path graphs, as well as to prove several conjectures of Propp. The actual toggle groups were later classified in [NY22].
In this article, we direct our attention to toggling independent sets of a cycle graph. 1 When toggling independent sets of graphs, certain aspects of cycle graphs end up being more complicated than path graphs, but others end up being nicer. For example, when toggling independent sets of a path, all Coxeter elements are conjugate; this simplifies certain arguments and makes some results hold for all Coxeter elements. In contrast, when working with the independent sets of the cycle C n , the Coxeter elements fall into n − 1 conjugacy classes [DMR16]. On the other hand, one nice property of cycle graphs is that they are vertex-transitive. In the end, different patterns and different questions arise over cycle graphs than over path graphs, in ways that were initially not clear to us and whose extent ultimately surprised us. This problem became much more algebraic in nature, and a broader mathematical theory emerged.
1 This transition from path graphs to cycle graphs explains our use of the word "toric" in the title of this article.
Indeed, the articles [DMR16,Def23] use the word "toric" to refer to cyclic analogues of objects that have previously been considered in "linear" settings. The paper [Def23] also uses local "toggle operators" to define an operator called toric promotion, although the toggles considered there are different from the ones considered here.
2. Dynamics and actions on infinite sets
2.1. Scrolls vs. ticker tapes. Let C n denote the cycle graph with vertex set V (C n ) = Z n (the integers modulo n) and edges {i, i + 1} (including {n, 1}). We often identify Z n with [n] := {1, . . . , n} or {0, . . . , n − 1} in the obvious manner. Throughout the paper, let n ≥ 2. Though an independent set of C n is a subset of [n], it will usually be easiest for us to denote it as a length-n binary string, with the requirement that it does not contain a pair of consecutive 1s, including those that "wrap around" the end of the word. We write I n for the collection of independent sets of C n , regardless of which notation we use. We will let F 2 = {0, 1} denote the bits of the binary string, i.e., the states of the vertices. We may write a binary n-tuple either as a string v 1 · · · v n or as a vector (v 1 , . . . , v n ) in F n 2 . For each vertex k ∈ [n], there is a bijective toggle operation τ k : I n → I n that adds k to an independent set I if k ∉ I and I ∪ {k} is an independent set, removes k from I if k ∈ I, and fixes I otherwise. 2 Throughout this paper, we will toggle in the order 1, . . . , n; that is, we consider the map τ ∈ Tog(I n ), τ := τ n • · · · • τ 1 .
Sometimes, we will write v 1 , . . . , v n rather than 1, . . . , n for extra emphasis, like we did in the header of Figure 1.
In the remainder of this subsection, we will describe two formats for viewing the dynamics that result from toggling independent sets of C n . The first is a vertically bi-infinite periodic table of 0s and 1s called the scroll, and the second is a bi-infinite periodic sequence called the ticker tape that we get from reading the scroll like one reads a book, across the columns and downward row-by-row. Each format has its advantages and disadvantages. Certain features are more prominent in one while hidden in the other or are notationally simpler in one than the other. Let x = x (0) = (x 1 , . . . , x n ) ∈ I n , and let x (i) = τ i (x) be the result of iterating τ exactly i times from x. Since τ is a bijection on I n , we can define this for all i ∈ Z. Consider the table with n columns, indexed by j = 1, . . . , n, and rows indexed by i ∈ Z, reading downward. The (i, j)-entry X i,j is the state of vertex v j in x (i) . In other words, the i th row is just the vector x (i) . This infinite table is called the scroll of x, and we will denote it by S = (X i,j ) = Scroll(x). The scroll of x = 00001010000 ∈ F 11 2 , shown in Figure 2, will be our new running example. Figure 2. The scroll of x (0) = (0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0) ∈ F 11 2 consists of the seven rows x (0) , . . . , x (6) repeated indefinitely. This is shown twice, with different color schemes, to emphasize visual patterns among the live (value of 1) entries that we will soon formalize as snakes and co-snakes, respectively.
x        v 1  v 2  v 3  v 4  v 5  v 6  v 7  v 8  v 9  v 10  v 11
x (0)     0    0    0    0    1    0    1    0    0    0     0
x (1)     1    0    1    0    0    0    0    1    0    1     0
x (2)     0    0    0    1    0    1    0    0    0    0     1
x (3)     0    1    0    0    0    0    1    0    1    0     0
x (4)     0    0    1    0    1    0    0    0    0    1     0
x (5)     1    0    0    0    0    1    0    1    0    0     0
x (6)     0    1    0    1    0    0    0    0    1    0     1
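The scroll above can be regenerated with a few lines of Python. The sketch below (with our own helper names) applies the toggles in the order 1, . . . , n to a cyclic 0/1 string; iterating it from x (0) = 00001010000 prints the seven rows of Figure 2 and confirms that seven applications of τ return to x (0) .

def tau(x):
    """Apply the toggles tau_1, ..., tau_n (in this order) to the cyclic 0/1 tuple x."""
    x = list(x)
    n = len(x)
    for k in range(n):
        if x[k] == 1:
            x[k] = 0                                   # a live vertex is always switched off
        elif x[(k - 1) % n] == 0 and x[(k + 1) % n] == 0:
            x[k] = 1                                   # switch on only if independence is kept
    return tuple(x)

x0 = (0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0)                 # the running example in F_2^11
x, rows = x0, []
for _ in range(7):
    rows.append(x)
    x = tau(x)
assert x == x0                                          # the orbit of Figure 2 has size 7
for r in rows:
    print("".join(map(str, r)))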
The global dynamics of τ can be read off the scroll as one reads from a book: reading the rows from top to bottom, with each row read from left to right. This defines a bi-infinite sequence called the ticker tape, denoted X = (X k ) = Tape(x). To convert between ticker tape and scroll notation, let X 1 = X 0,1 , X 2 = X 0,2 , X 3 = X 0,3 , and so on, so that X in+j = X i,j . The ticker tape of the example in Figure 2 is
. . . , X −6 , . . . , X 0 , X 1 , . . . , X 7 , X 8 , . . . , X 14 , . . . ,
where each of the indicated blocks of seven consecutive entries (X −6 , . . . , X 0 ), (X 1 , . . . , X 7 ), and (X 8 , . . . , X 14 ) is equal to 0, 0, 0, 0, 1, 0, 1.
Topologically, the ticker tape can be naturally embedded on an infinite line. In contrast, the scroll "wraps around" from the end of one row to the beginning of the next, so it is natural to view it as being embedded on a bi-infinite cylinder rather than on a plane. As such, it is always well-founded to speak of the entry immediately to the left or to the right of position (i, j), even if it is in the first or the last column. Notationally, even though we index the columns by j = 1, . . . , n, it will be convenient to set X i,k+n = X i+1,k for each k ∈ Z.
At times, it will be more convenient to "lift up" the cylinder to its universal cover and work with points in the plane. Here, we think of a scroll as a map S : Z × Z n → F 2 , which naturally lifts to the universal scroll S : Z × Z → F 2 , making the following diagram commute:
(2)   q : Z × Z → Z × Z n ,   q(i + k, j + kn) = (i, j)   for k ∈ Z and j ∈ [n],
and the universal scroll is the composition S • q, so that it sends (i + k, j + kn) to X i,j .
Note that the quotient map q is not quite the "canonical" quotient from the plane to a cylinder that reduces the second entry modulo n, because the row increases when we wrap around. There is no such analogue for lifting the ticker tape in this manner because its canonical domain is a subset of R, which is already simply connected. Note that each infinite row of the universal scroll is a shifted copy of the ticker tape. In this context, we will usually take Z n = [n] = {1, . . . , n}, rather than {0, . . . , n − 1}, because we want to index the columns by the vertices, which are in [n]. A portion of the universal scroll of our running example from Figure 2 is shown in Figure 3. The shading is meant to highlight disjoint copies of orbits under the toggling map τ . In both a scroll and a ticker tape, positions that have a value of 1 are said to be live. Formally, the sets of live entries, in both formats, are
Live(S) = {(i, j) ∈ Z × Z n | X i,j = 1},   Live(X ) = {k ∈ Z | X k = 1}.
We can also define the set of live entries in the universal scroll as
Live( S) = {(i, j) ∈ Z × Z | X i,j = 1} = q −1 (Live(S)).
2.2. The successor and co-successor functions. In this subsection, we will explore the patterns of the live entries within the scrolls. Given an arbitrary live entry (i, j), we can draw some easy conclusions about its surrounding entries. Recall that since scrolls are naturally embedded on a cylinder, we can always speak of the entry immediately to the left or to the right, even if we are in the first or the last column, by setting X i,k+n = X i+1,k . It is clear that for any live entry, the four entries that are immediately adjacent to it must be 0. Similarly, the diagonal entries in the upper-right and lower-left directions also must be 0.
Lemma 2.1. If (i, j) ∈ Live(S), then X i−1,j = X i−1,j+1 = X i,j−1 = X i,j+1 = X i+1,j−1 = X i+1,j = 0.
It is also elementary to see that exactly one of X i,j+2 and X i+1,j+1 must be 1; we will call the location of whichever entry is live the successor of (i, j). Similarly, exactly one of X i+2,j−2 and X i+2,j−1 must be 1; we will call the location of the live entry the co-successor of (i, j). The formal statement of this is given in Lemma 2.2, and a visual interpretation is shown in Figure 4. Notice how these two images also show the nearby surrounding entries that must be 0, as guaranteed by Lemma 2.1.
Figure 3. A portion of the universal scroll of the running example from Figure 2; the quotient map q of Eq. (2) is not just the standard "modulo n" reduction, since in the scroll, the end of a row wraps around to the beginning of the next one.
Lemma 2.2. If (i, j) ∈ Live(S) then X i,j+2 + X i+1,j+1 = 1, and X i+2,j−2 + X i+2,j−1 = 1.
Figure 4. The two possible positions of the successor of (i, j) and the two possible positions of the co-successor of (i, j), together with the nearby entries that are forced to be 0.
It will be convenient to think of the successor and co-successor as functions on the live entries.
Definition 2.3. Given a scroll S, the successor is the function s : Live(S) −→ Live(S) that sends (i, j) to the unique element of {(i, j + 2), (i + 1, j + 1)} ∩ Live(S). The co-successor is the function c : Live(S) −→ Live(S) that sends (i, j) to the unique element of {(i + 2, j − 2), (i + 2, j − 1)} ∩ Live(S).
Naturally, Definition 2.3 is easily translated into ticker tape notation, where the subscripts are indices rather than ordered pairs. Specifically, if X k = 1, then the successor and co-successor functions s, c : Live(X ) → Live(X ) are defined by sending k to the unique element of {k + 2, k + n + 1} ∩ Live(X ) and {k + 2n − 2, k + 2n − 1} ∩ Live(X ), respectively.
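The following Python sketch (our own helper names again) encodes the scroll of Figure 2 through its seven repeating rows and computes s and c on its live entries; the assertions check the uniqueness claim of Lemma 2.2 and, looking ahead, the commutativity established in Proposition 2.5.

BLOCK = ["00001010000", "10100001010", "00010100001", "01000010100",
         "00101000010", "10000101000", "01010000101"]   # the seven rows of Figure 2
N = len(BLOCK[0])

def norm(i, j):
    """Normalize (i, j) so that 1 <= j <= N, using the convention X[i, k+N] = X[i+1, k]."""
    return i + (j - 1) // N, (j - 1) % N + 1

def X(i, j):
    i, j = norm(i, j)
    return int(BLOCK[i % len(BLOCK)][j - 1])             # the scroll repeats every 7 rows

def s(i, j):
    """Successor: whichever of (i, j+2) and (i+1, j+1) is live."""
    live = [norm(a, b) for a, b in [(i, j + 2), (i + 1, j + 1)] if X(a, b) == 1]
    assert len(live) == 1                                # Lemma 2.2
    return live[0]

def c(i, j):
    """Co-successor: whichever of (i+2, j-2) and (i+2, j-1) is live."""
    live = [norm(a, b) for a, b in [(i + 2, j - 2), (i + 2, j - 1)] if X(a, b) == 1]
    assert len(live) == 1                                # Lemma 2.2
    return live[0]

live_entries = [(i, j) for i in range(7) for j in range(1, N + 1) if X(i, j) == 1]
assert all(s(*c(i, j)) == c(*s(i, j)) for (i, j) in live_entries)   # they commute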
Lemma 2.4. The successor and co-successor functions are bijections on Live(S).
Proof. If (i, j) ∈ Live(S) were to have two s-preimages, then they would have to be (i − 1, j − 1) and (i, j − 2). However, X i−1,j−1 = 1 would force X i,j−2 = 0 by Lemma 2.1. Similarly, if (i, j) ∈ Live(S) were to have two c-preimages, then they would have to be (i − 2, j + 1) and (i − 2, j + 2), and this is clearly not allowed because they are adjacent. Figure 5 provides a visual for why these two scenarios are impossible.
Figure 5. If the successor (respectively, co-successor) function were not bijective, then the impossible configuration on the left (respectively, right) would occur in the scroll.
Since the successor and co-successor functions are bijections on Live(S), each of them defines an equivalence relation. The color scheme back in Figure 2 highlights the equivalence classes, which is why we often prefer to use scrolls over ticker tapes. We will explore these notions more in the next subsection.
A fundamental property of the successor and co-successor functions is the simple observation that they commute.
Proposition 2.5. The successor of the co-successor is the co-successor of the successor. That is, for any live entry (i, j), we have s(c(i, j)) = c(s(i, j)).
Proof. Starting with a live entry (i, j), the two possibilities for its co-successor c(i, j) are shown in Figure 6. In each case, the successor of the co-successor is the position of either b or b̄, whichever is live. The successor of (i, j) is the position of either a or ā, whichever is live. It is easy to check that in both cases above, a = 1 forces b = 1, and ā = 1 forces b̄ = 1. In both cases, the position of whichever of b or b̄ is live is, by definition, the co-successor of the successor.
Figure 6. A picture of the argument in Proposition 2.5 of why the successor and co-successor functions commute.
At times, it will be more convenient to work in the universal scroll than in S itself. The successor and co-successor functions lift to the commuting universal successor and universal co-successor functions s, c : Live( S) → Live( S), which are defined by replacing the scroll S with the universal scroll in Definition 2.3.
2.3. Snakes and co-snakes. Recall that by Lemma 2.4, the successor and co-successor functions are bijections on Live(S). Thus, they define a group, which we will denote by G(S) = ⟨s, c⟩ or by G(X ) = ⟨s, c⟩, depending on whether we are using the notation of scrolls or ticker tapes. Clearly, the generators have infinite order, and by Proposition 2.5, this group is abelian. We will call G(S) the snake group. Our general approach for understanding the structure of the scrolls, and hence understanding toggling independent sets of the cycle graph, is to study this group and its actions. In this section, we will investigate the snake group's relations and derive a presentation.
Given a group G acting on a set X, we say X is a torsor for G (or a G-torsor ) if the action is simply transitive, i.e., it is both transitive and free. When this is the case, there is a bijection between G and X, and for a fixed generating set S ⊆ G, this action defines a Cayley graph structure on X. Specifically, the (left) Cayley graph for G = ⟨S | R⟩ has vertex set X, and for each x ∈ X and generator g ∈ S, there is a directed edge x → g.x, annotated with g (often by color). In our setting, there is a canonical action of the snake group G(S) on the set Live(S) of live entries, and we will prove that this makes Live(S) into a G(S)-torsor.
At times, it will be helpful to lift up to the universal scroll and work with the action of the affine snake group G( S) := ⟨ s, c⟩ on Live( S). The actions of these two snake groups on the corresponding sets of live entries are compatible with the quotient map q from Eq. (2), in the sense that
q( s(i + k, j + kn)) = s(i, j)   for every (i, j) ∈ Live(S) and every k ∈ Z;
that is, projecting with q after applying the universal successor agrees with applying the successor after projecting.
Naturally, the analogous relation holds for the co-successor functions c and c.
Definition 2.6. Given a live entry (i, j) in a scroll, the snake and the co-snake containing it are the following subsets of Z × Z n :
Snake(i, j) = {s k (i, j) | k ∈ Z},   CoSnake(i, j) = {c k (i, j) | k ∈ Z}.
The affine snake and affine co-snake are defined similarly, but for s and c in the universal scroll. We will denote these by Snake (i, j) and CoSnake (i, j), respectively.
Returning to our running example, the two snakes in the scroll shown in Figure 2 are highlighted by color in the table on the left. The six co-snakes are not as visually prominent in the scroll, but the live entries in the table on the right are colored to distinguish them. There are always infinitely many affine snakes and co-snakes. In the example in Figure 3, each affine snake is colored according to the snake to which the quotient map q (from Eq. (2)) sends it.
Remark 2.7. The term snake is borrowed from a paper by the third author and Roby about toggling independent sets of a path graph [JR18]. It was chosen because of the visual interpretation resulting from iterating the successor function from a given entry in a scroll. In [Had21], Haddadan studied snakes in tuple boards to analyze the dynamics of comotion on order ideals, which is also defined via toggles.
To understand the action of the snake group on the live entries in the scroll, it is easiest to consider the action of the affine snake group in the universal scroll and then project downwards. 3
Lemma 2.8. The affine snake group is free abelian, i.e., it has presentation
G( S) = ⟨ s, c | s c = c s ⟩.
Proof. It suffices to show that the action of the abelian group G( S) on Live( S) ⊂ Z×Z is free. Consider an element s k c ℓ that fixes (i, j). Since the action of s increases the second coordinate and the action of c decreases it, k and ℓ cannot have opposite signs. Without loss of generality, assume k, ℓ ≥ 0. Since c increases the first coordinate, we have ℓ = 0. Then s k (i, j) = (i, j), so k = 0.
The next lemma tells us that the live entries in the universal scroll form a G( S)-torsor.
Lemma 2.9. The affine snake group acts simply transitively on Live( S).
Proof. Since G( S) is free abelian, it suffices to show that the action is transitive. Consider the affine snake containing (i, j) ∈ Live( S). There is another affine snake below it containing c(i, j) that differs by a translation of (−1, 2) or (−2, 2). Similarly, there is one above it containing c −1 (i, j), which differs by a translation of (1, −2) or (2, −2). Clearly, there is no room for live entries between any two such consecutive snakes. In particular, this means that we can get from any live entry (i, j) to another (i , j ) in Live( S) by first applying c for some ∈ Z, to traverse from Snake (i, j) to Snake (i , j ), and then applying s k for some appropriate k ∈ Z to move within the affine snake.
Since Live( S) is a G( S)-torsor, the affine snakes are in bijection with the cosets of s , and the affine co-snakes are in bijection with the cosets of c . Moreover, there are bijections between the elements of these cosets and those in the (co)-snakes. The quotient q : Live( S) → Live(S) is a topological covering map, so it induces a group homomorphism q * :
G( S) → G(S) with q * ( s) = s and q * ( c) = c. The snake group is the quotient G(S) ∼ = G( S)/ ker q * ,
and it acts simply transitively on Live( S)/ ker q, which can be canonically identified with Live(S).
Proposition 2.10. The set Live(S) is a torsor for the snake group, which has presentation
G(S) = ⟨s, c | sc = cs, s β = c α ⟩,
where S has α snakes and β co-snakes.
Proof. We have already established the first statement. It follows that there is a bijective correspondence between snakes and cosets of s and between co-snakes and cosets of c . That is,
α = [G(S) : ⟨s⟩] and β = [G(S) : ⟨c⟩]
are the smallest positive integers for which s β ∈ ⟨c⟩ and c α ∈ ⟨s⟩. It follows that s β = c ±α . To resolve the sign ambiguity, we notice that applying c will always increase the first coordinate of a live entry, while applying s cannot possibly decrease this coordinate. Any relation in G(S) other than sc = cs arises from an element of ker q * , and these all have the form s b c a for some a, b ∈ Z. Since both generators s = q * ( s) and c = q * ( c) have infinite order, we may thus assume that ker q * = ⟨ s b c a ⟩ for some a, b ≠ 0. By minimality of α and β, we may take a = −α and b = β.
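As a worked example (our own computation, under the reading of the presentation above): the scroll of Figure 2 has α = 2 snakes and β = 6 co-snakes, so its snake group is

\[
G(S) \;=\; \langle s, c \mid sc = cs,\; s^{6} = c^{2} \rangle \;\cong\; \mathbb{Z}^{2} / \langle (6,-2) \rangle \;\cong\; \mathbb{Z} \times \mathbb{Z}/2\mathbb{Z},
\]

since (6, −2) = 2 · (3, −1) and (3, −1) can be completed to a basis of Z 2 .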
2.4. Slithers and co-slithers.
Since G(S) endows Live(S) with the structure of a Cayley graph, if we fix (i, j) ∈ Live(S), then every word in {s, s −1 , c, c −1 } corresponds to a path from (i, j). The snakes and co-snakes correspond to the cosets of s and c , respectively. In particular, this means that all snakes have the same algebraic structure, as do all co-snakes. In this section, we will prove a stronger result: the embeddings of all snakes and co-snakes in a given scroll additionally have the same "shape." As before, we will let α = [G(S) : s ] be the number of snakes and β = [G(S) : c ] be the number of co-snakes. Though we will work with scrolls, all of our definitions and results can also be translated into ticker tape notation.
From a fixed (i, j) ∈ Live(S), consider the next live entry reached when applying the successor or co-successor function. There are two cases for each, as was shown back in Figure 4. We will annotate a step of s(i, j) = (i + 1, j + 1) by "D" for "diagonal" and a step of s(i, j) = (i, j + 2) by "E" for "east." 4 Similarly, we will annotate a step of c(i, j) = (i + 2, j − 1) by "S" for "short," and c(i, j) = (i + 2, j − 2) by "L" for "long." Allowing inverses, it is straightforward to annotate any path in the Cayley graph of G(S). We will call this the shape of a path. If a path has length 1, then we will refer to it as a step and refer to its shape as its type. For example, the step from (i, j) to s(i, j) is either of type D or of type E. At times, it will be convenient to speak of a D-step or E-step (these are "s-steps"), or of an S-step or L-step (these are "c-steps"). An s-path is a sequence of s-steps (inverses allowed); a c-path is defined similarly.
Since the snake group is abelian, we have s(c(i, j)) = c(s(i, j)) for all (i, j) ∈ Live(S). Thus, if we start from (i, j), then applying an s-step and then a c-step results in the same endpoint as applying a c-step and then an s-step. A simple but useful observation is that we also take the same types of steps along these two paths, but in the opposite order. Geometrically, this means that paths formed by applying s −1 c −1 sc (and hence c −1 s −1 cs) always trace out parallelograms, and any scroll is tiled by these parallelograms. There are four such parallelograms, as shown in Figure 7. Specifically, in each of the four neighborhoods, there is a parallelogram formed by both 1s and both a's, and another one formed by both 1s and both ā's. This double-counts each parallelogram, leaving four distinct 1s. In other words, every possible scroll is described by a periodic tiling of parallelograms on a cylinder. However, there are restrictions to which of these tilings are possible, which we will explore further in Section 3.
Lemma 2.11 (Parallelogram lemma). Starting from any
(i, j) ∈ Live(S), the paths (i, j) → c(i, j) → s(c(i, j)) and (i, j) → s(i, j) → c(s(i, j)
) have the same types of steps but in the opposite order.
Proof. It suffices to check all possible cases, which are shown in Figure 7. The two diagrams on the left show the possible parallelograms formed by applying s −1 c −1 sc to (i, j), starting with an L and an S, respectively. The diagrams on the right show the possible parallelograms formed by applying c −1 s −1 cs to (i, j), starting with an E and a D, respectively.
Figure 7. Scrolls are tiled by up to four types of parallelograms; see the proof of Lemma 2.11. Each of the four diagrams here depicts two: one for each choice of a ∈ {0, 1}. Each parallelogram appears exactly twice.
The parallelogram lemma guarantees that the notion of the path shape is well-defined on snakes and co-snakes. In other words, applying the successor (resp., co-successor) function from any two live entries in the same co-snake (resp., snake) yields the same type of step.
Corollary 2.12. Suppose (i , j ) ∈ Snake(i, j) and (i , j ) ∈ CoSnake(i, j). Then the steps (i, j) → c(i, j) and (i , j ) → c(i , j ) have the same type (either both S or both L), and the steps (i, j) → s(i, j) and (i , j ) → s(i , j ) have the same type (either both D or both E).
Proof. We will show that (i, j) → c(i, j) and (i′, j′) → c(i′, j′) have the same type; the proof of the analogous statement for (i, j) → s(i, j) and (i″, j″) → s(i″, j″) is very similar. Since Snake(i, j) is the orbit of (i, j) under s, it suffices to prove the desired result when (i′, j′) = s(i, j). However, this is immediate from the parallelogram lemma.
The elements of the cyclic quotient group G(S)/⟨c⟩ ≅ Z_β correspond to the co-snakes in S. Thus, starting at any live entry (i, j) and iterating the successor function β times defines an ordering of the co-snakes. We call the shape of this length-β path the slither from (i, j), denoted Slither(i, j). For ease of notation, we can use exponents to write a slither. In our running example back in Figure 2, we have
Slither(1, 1) = EDEDED = (ED)^3 and Slither(3, 2) = DEDEDE = (DE)^3.
There is a similar construction for co-snakes. The elements of the cyclic quotient group G(S)/⟨s⟩ ≅ Z_α are the snakes in S. Starting at any live entry (i, j) and iterating the co-successor function α times defines an ordering of the snakes. We call the shape of this length-α path the co-slither from (i, j), denoted CoSlither(i, j). In our running example from Figure 2, we have CoSlither(i, j) = SS = S^2 for every live entry (i, j). The following is immediate from Corollary 2.12.
Lemma 2.13. Let (i, j) ∈ Live(S) and (i′, j′) = s^a c^b(i, j) for some a, b ∈ Z.
• The slither from (i′, j′) is the slither from (i, j), cyclically shifted a positions.
• The co-slither from (i′, j′) is the co-slither from (i, j), cyclically shifted b positions.
By Lemma 2.13, it is well-defined to speak of the slither of S, written Slither(S), as the slither from any live entry of S up to cyclic shift. The co-slither of S, written CoSlither(S), is defined similarly.
2.5. Scales and periods. Here we formalize the notion of the "exponent" that we used to write slithers and co-slithers, and we use this notion to calculate properties of scrolls and the ticker tapes. By Lemma 2.13, these exponents are inherent properties of a scroll. Not only are they notationally convenient, but they underlie a fundamental property that will be useful to us later in this paper. In Section 4, we will relate these to the periods of both the scrolls and ticker tapes. We will also use a slightly different terminology: the word "degree" will both suggest the fact that it appears as an exponent and indicate its relation to the degree of a certain covering map that we will introduce in Section 4.
Definition 2.14. The degree of a scroll, denoted deg(S), is the length of Slither(S) divided by its period as a cyclic word. The co-degree of a scroll, denoted codeg(S), is the length of CoSlither(S) divided by its period as a cyclic word.
The scroll in our running example from Figure 2 has degree 3 because its slither is (DE) 3 . Note that this must divide β, the number of co-snakes. Similarly, this scroll has co-degree 2 because its co-slither is S 2 . This number divides α, the number of snakes.
The definitions of slithers, co-slithers, degrees, and co-degrees can be defined analogously on ticker tapes. We write Slither(k) (resp., CoSlither(k)) for the slither (resp., co-slither) from the k th entry of X . We also define deg(X ) = deg(S) and codeg(X ) = codeg(S). While it is usually easy to visualize results in the scroll, sometimes it is notationally simpler to work with ticker tapes.
Going forward, it will be fruitful to write the slither as deg(X ) copies of a word P over {D, E} and the co-slither as codeg(X ) copies of a word Q over {S, L}. Notationally, for any fixed k ∈ Live(X ), we will write
(3) Slither(k) = P^{deg(X)}, β = |P| · deg(X), CoSlither(k) = Q^{codeg(X)}, α = |Q| · codeg(X).
We will also want to speak of the number of spaces that we advance in the ticker tape upon applying P and Q; we will denote these as
(4) p = s^{|P|}(k) − k and q = c^{|Q|}(k) − k.
By Lemma 2.13, these do not depend on k, though the actual words P and Q do. Note that p and q are the minimal shifts of the ticker tape that leave invariant the snakes and co-snakes, respectively. Going forward, we will refer to p as the snake scale and q as the co-snake scale.
In our running example from Figure 2, if we take k = 12, which is the live entry (i, j) = (1, 1) one row beneath the non-live entry in the upper-left corner, then
Slither(k) = (ED) 3 , β = 6, P = ED, deg(X ) = 3, p = 14,
and CoSlither(k) = S 2 , α = 2, Q = S, codeg(X ) = 2, q = 21. Starting at an arbitrary live entry k, each time we traverse the path corresponding to P , we go forward in the ticker tape p entries. For any r ∈ N, traversing the path P r advances us rp entries. Though this is intuitive, we will formalize and prove it below, as it will be a useful technical lemma for a number of results.
Lemma 2.15. For any r ∈ Z ≥0 and any k ∈ Live(X ), we have
s^{r|P|}(k) − k = r(s^{|P|}(k) − k) = rp and c^{r|Q|}(k) − k = r(c^{|Q|}(k) − k) = rq.
Proof. For the first statement, it suffices to show that s r|P | (k) = rp+k, and we will do this by induction. The cases when r = 0 or r = 1 are immediate. Assuming the hypothesis holds for r, we have
s^{(r+1)|P|}(k) = s^{|P|}(s^{r|P|}(k)) = s^{|P|}(rp + k) = rp + k + p = (r + 1)p + k.
The proof of the second equality is completely analogous; just replace s with c, |P | with |Q|, and p with q.
Recall that α and β are the number of snakes and co-snakes respectively of S.
Lemma 2.16. For any ticker tape X , the values of deg(X ) and codeg(X ) are relatively prime.
Proof. Let d = gcd(deg(X ), codeg(X )). By Lemma 2.15,
s β (k) − k = s deg(X )|P | (k) − k = d(s β/d (k) − k).
Since s β (k) = c α (k), the quantity above is equal to
c α (k) − k = c codeg(X )|Q| (k) − k = d(c α/d (k) − k),
and thus s β/d (k) = c α/d (k). It now follows from the presentation of the snake group (Proposition 2.10) that d = 1.
Lemma 2.17. For any ticker tape X , lcm(p, q) = deg(X )p = codeg(X )q.
Proof. By definition, we have the following chain of equalities.
deg(X )p = deg(X ) s |P | (k) − k = s β (k) − k = c α (k) − k = codeg(X ) c |Q| (k) − k = codeg(X )q.
From the presentation of the snake group, α and β are the smallest positive integers for which the middle equality holds. Thus, deg(X )p is the smallest multiple of p that is a multiple of q, and codeg(X )q is the smallest multiple of q that is a multiple of p. The lemma follows.
The quantity in Lemma 2.17 is significant enough that we will give it its own name.
Definition 2.18. The scale of a ticker tape X is
Scale(X ) := s β (k) − k = c α (k) − k,
where α is the number of snakes, β is the number of co-snakes, and k is an arbitrary integer. If S is the corresponding scroll, then we define the scale of S to be Scale(S) := Scale(X ).
Remark 2.19. The snake scale p and co-snake scale q of X (or S) are p = Scale(X)/deg(X) and q = Scale(X)/codeg(X).
Since the scale is the minimal positive integer σ for which k and k + σ always lie on the same snake and same co-snake, we have the following algebraic interpretation of the scale in terms of cosets.
Corollary 2.20. The scale of a ticker tape is Scale(X ) = lcm(p, q).
Proof. This follows immediately from the proof of Lemma 2.17.
Definition 2.21. The fibers of a scroll S (or ticker tape X ) are the equivalence classes of live entries defined by intersecting the snakes and co-snakes. In both notations, the fiber of a live entry is denoted
(5) Fiber(i, j) = Snake(i, j) ∩ CoSnake(i, j), Fiber(k) = Snake(k) ∩ CoSnake(k).
Algebraically, the fibers are just the orbits under the action of the cyclic group
s ∩ c = s β = c α ≤ G(S),
and the scale is the smallest positive integer σ for which two live entries in X are in the same fiber if and only if they differ by a multiple of σ in the ticker tape.
We will now explore the periodicity of the scroll and ticker tape. Though the scroll and ticker tape are both periodic, we measure their periods in different ways.
Definition 2.22. The period T (S) of a scroll S = (X i,j ) is the smallest m > 0 such that X i+m,j = X i,j for all i, j.
The period T(X) of a ticker tape X = (X_k) is the smallest ℓ > 0 such that X_{k+ℓ} = X_k for all k.
In our running example from Figure 2, the period of the ticker tape and scroll are both 7. In particular, the ticker tape is generated by the subsequence 1010000, and the scroll consists of 7 repeating rows. For the example back in Figure 1, the period of the ticker tape is 45, while the period of the scroll is 15.
Lemma 2.23. Suppose that k ∈ Live(X) and ℓ ∈ Z_{≥0}. Then T(X) divides ℓ if and only if Slither(k) = Slither(k + ℓ) and CoSlither(k) = CoSlither(k + ℓ).
Proof. The slither and co-slither from any live entry completely determine X. In particular, if Slither(k) = Slither(k + ℓ) and CoSlither(k) = CoSlither(k + ℓ), then the ticker tape remains unchanged when it is shifted ℓ positions, so T(X) divides ℓ. Conversely, if Slither(k) ≠ Slither(k + ℓ) or CoSlither(k) ≠ CoSlither(k + ℓ), then the ticker tape changes after shifting by ℓ.
As a corollary, we get an analogous characterization of the period of a scroll, which will be useful later.
Corollary 2.24. For any (i, j) ∈ Live(S), the period T(S) is the minimal ℓ > 0 such that the following three conditions hold:
(1) (i + ℓ, j) ∈ Live(S),
(2) Slither(i, j) = Slither(i + ℓ, j),
(3) CoSlither(i, j) = CoSlither(i + ℓ, j).
Theorem 2.25. The period of the ticker tape X is
T(X) = gcd(p, q) = Scale(X)/(deg(X) · codeg(X)).
Proof. For the first equality, it is immediate from the definition of p that Slither(k) = Slither(k + ℓ) if and only if ℓ is a multiple of p. Similarly, CoSlither(k) = CoSlither(k + ℓ) if and only if ℓ is a multiple of q. The equality follows from Lemma 2.23. For the second equality, by Lemma 2.17, as well as the fact that pq = lcm(p, q) gcd(p, q), we have
gcd(p, q) · deg(X) · codeg(X) = gcd(p, q) · (lcm(p, q)/p) · (lcm(p, q)/q) = lcm(p, q) = Scale(X)
by Corollary 2.20.
Corollary 2.26. The period of a scroll S is
T(S) = T(X)/gcd(T(X), n) = lcm(T(X), n)/n = Scale(X)/(deg(X) · codeg(X) · gcd(T(X), n)).
Proof. It follows from the definition that the period T(S) is the minimum positive integer m such that T(X) divides mn. Therefore, T(S) must be T(X)/gcd(T(X), n). The result now follows from Theorem 2.25.
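As a small illustration, the following sketch checks Lemma 2.17, Theorem 2.25, and Corollary 2.26 numerically on the running example. It assumes the ticker-tape advances implied by the step coordinates above (an E-step moves 2 positions forward in the ticker tape, a D-step n + 1, an S-step 2n − 1, and an L-step 2n − 2), together with the values n = 11, P = ED, Q = S, deg(X) = 3, and codeg(X) = 2 from the running example.

```python
from math import gcd, lcm

# Ticker-tape advance of each step type (columns indexed 1..n, so a D-step
# (i, j) -> (i + 1, j + 1) moves n + 1 positions forward, and so on).
def advance(step, n):
    return {"E": 2, "D": n + 1, "S": 2 * n - 1, "L": 2 * n - 2}[step]

# Running example: n = 11, Slither = (ED)^3, CoSlither = S^2.
n, P, Q, deg, codeg = 11, "ED", "S", 3, 2

p = sum(advance(s, n) for s in P)          # snake scale
q = sum(advance(s, n) for s in Q)          # co-snake scale
scale = deg * p                            # Lemma 2.17: deg(X)*p = codeg(X)*q
assert p == 14 and q == 21
assert scale == codeg * q == lcm(p, q) == 42

T_X = gcd(p, q)                            # Theorem 2.25
T_S = T_X // gcd(T_X, n)                   # Corollary 2.26
assert T_X == scale // (deg * codeg) == 7 and T_S == 7
```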
3. Combinatorial characterization and enumeration of dynamics
In this section, we will characterize when a given pair of a potential slither (a sequence of Ds and Es) and co-slither (a sequence of Ss and Ls) defines a ticker tape. This will allow us to characterize which infinite sequences can arise as ticker tapes and to enumerate them.
3.1. Computing slithers and subslithers. Define a substring of a ticker tape to be a finite sequence X i , . . . , X i+r of consecutive entries. We will consider a substring of a scroll to be any substring in the corresponding ticker tape. It is clear that a scroll or ticker tape can be reconstructed from any of its length-n substrings. In this section, we will give an algorithm to directly calculate the slither and co-slither based on the gaps between live entries of a length-n substring that begins with a 1. Each gap corresponds to a substring of the slither, which we will call a subslither. The slither is simply the concatenation of subslithers, though with the last step omitted. After establishing this construction, we will use it to combinatorially characterize all possible ticker tapes for a given n.
Since our construction depends on gaps between live entries, it will be necessary to speak of the live entries that appear immediately before and after a given live entry. We will formally define this using ticker tape notation, but it is easy to translate it back into the language of scrolls, if desired.
Definition 3.1. Given a live entry k ∈ Live(X), its previous live entry k⁻ and next live entry k⁺ are
k⁻ = max{ j < k | j ∈ Live(X) },  k⁺ = min{ ℓ > k | ℓ ∈ Live(X) }.
In scroll notation, we will write these as (i, j) − and (i, j) + . Two live entries are consecutive if one of them is the next live entry of the other. A 0-block is a maximal substring of 0s between consecutive live entries.
Clearly, any two consecutive live entries are separated by a 0-block of some length z ∈ {1, . . . , n}, and we will canonically assign a sequence of Ds and Es to each such block. The simplest case is when this 0-block has length z = 1, which occurs when (i, j) + = (i, j + 2) = s(i, j). In this case, we just take the path that is a single s-step, i.e., a length-1 sequence E.
If the 0-block has size z ∈ {2, . . . , n}, then our path must begin with a D. When this happens, we will iteratively apply the successor function until we reach c (i, j) + . Recall that the shape of this path, starting at (i, j) and ending at c (i, j) + , is the resulting sequence of Ds and Es. The following observation is straightforward but useful, and it is also necessary for the definition of a subslither.
Lemma 3.2. If k, ℓ ∈ Live(X) and |k − ℓ| < n, then CoSnake(k) ≠ CoSnake(ℓ). In particular, any two live entries on the same row of a scroll are contained in different co-snakes.
Definition 3.3. Let (i, j) ∈ Live(S) be followed by a length-z 0-block. If z = 1, then the subslither from (i, j) is E; otherwise, it is the shape of the minimal s-path from (i, j) to c (i, j) + .
Suppose we start at any live entry (i, j). The slither describes the minimal path of Ds and Es that returns us to the same co-snake. This path touches every co-snake exactly once. Therefore, the subslither from (i, j) is an initial sequence of the slither from (i, j). It begins with an E if and only if (i, j + 2) ∈ Live(S), in which case that length-1 word is the entire subslither. Otherwise, the subslither must begin with a D. To understand where it terminates, we may, without loss of generality, assume that (i, j) = (i, 1). There are now two subcases. In the first, the next live entry is (i, j)⁺ = (i + 1, j + 1), which occurs precisely when (i, j) is followed by a 0-block of length n. When this happens, the entries in row i + 1 alternate 0, 1, 0, 1, . . . , and then all but possibly the last entry in row i + 2 are 0. Two examples of opposite parity are shown in Figure 8.
Figure 8. Starting from the live entry (i, j) in the upper-left, the next live entry (i, j)⁺ is on Snake(i, j) if and only if (i, j) is followed by exactly n 0s. In this case, the successor of the last live entry in the second row is c((i, j)⁺), so the subslither from (i, j) is DE^{⌊n/2⌋−1}D.
In both subcases shown in Figure 8, the last live entry in row i + 1 is the co-successor of (i, j) = (i, 1); this is (i + 1, n − 1) if n is odd and is (i + 1, n) if n is even. Because the snake group is abelian, the next live entry is c(i + 1, j + 1) = c((i, j)⁺). By construction, this is where the subslither from (i, j) stops: at (i + 2, n) if n is odd or (i + 3, 1) if n is even. It is straightforward to see that the subslither from (i, j) is thus DE^{⌊n/2⌋−1}D.
So far, we have covered the two extreme cases: if the 0-block that follows (i, j) has (minimal) length z = 1, then the subslither from (i, j) is E. If the 0-block has (maximal) length z = n, then the subslither is DE^{⌊n/2⌋−1}D. The final case is when the 0-block has length z ∈ {2, . . . , n − 1}. An example of this appears in Figure 9. We claim that in this case, the subslither is also DE^{⌊z/2⌋−1}D, which happens to match the case of z = n.
Lemma 3.4. The subslither from (i, j) ∈ Live(S) is
E if z = 1, and DE^{⌊z/2⌋−1}D if z ∈ {2, . . . , n},
where z is the length of the 0-block that follows (i, j).
Proof. We have already verified the cases when z = 1 and when z = n. Now suppose z ≥ 2. It is elementary to show that the subslither must be of the form DE^rD for some integer r ≥ 0. Note that (i, j)⁺ = (i, j + z + 1). After starting at (i, j) and traversing the subslither, we reach the live entry (i + 2, j + 2r + 2), which must also be c((i, j)⁺). Now, c((i, j)⁺) is either (i + 2, j + z) or (i + 2, j + z − 1) (depending on whether the step from (i, j)⁺ = (i, j + z + 1) to its co-successor is of type S or L). It follows that j + 2r + 2 is either j + z or j + z − 1, so r = ⌊z/2⌋ − 1.
In constructing the slither from (i, j) = (i, 1) from subslithers, there are two cases to handle separately: (i) traversing between consecutive live entries within a row, and then (ii) leaving the final live entry in the row, in which case we prematurely reach c (i, j) + before finishing the subslither and our algorithm will terminate. The subslither between consecutive live entries is completely determined by the sizes of the 0-blocks.
Clearly, we can construct the slither of a scroll by starting at any live entry (i, j) and successively computing subslithers; the only question is when to stop. Assume once again that (i, j) = (i, 1) is in the first column, and let (i, j′) be the last live entry in row i. It is straightforward to see that j′ < n and that the subslither from (i, j′) must start with a D. This step takes us to row i + 1, and the subslither continues with Es until either the entry (i + 1, n − 1) or (i + 1, n) is reached. Whichever of these is live is the co-successor of (i, 1), so this takes us back to c((i, j)⁺) before we finish the subslither from (i, j′).
Figure 9. Starting from the live entry in the upper-left, the steps DEED reach the co-snake containing its next live entry. Then, starting at that live entry, we use a single E to reach the co-snake of its next live entry. Then a DD takes us to the co-snake of the next live entry. From here, the steps DED reach the co-snake containing the next live entry. Finally, the steps DEE get us back to our starting co-snake. The slither of the scroll is simply the concatenation of these steps DEEDEDDDEDDEE. The co-slither SLSS is obtained from the co-successor functions of live entries in the first row, but taken in opposite cyclic order (right to left) and ignoring each live entry whose previous live entry is two spaces to its left.
Notice that an L-step from (i, 1) would take us to (i + 1, n − 1), and an S-step would take us to (i + 1, n). This is stated formally as the following lemma.
Lemma 3.5. Suppose (i, 1) ∈ Live(S) and the last live entry (i, j′) in row i is followed by exactly z consecutive 0s. If we start at (i, j′) and apply the initial segment of the subslither consisting of D followed by ⌊(z − 1)/2⌋ instances of E, then we end up at c(i, 1). Moreover, the step (i, 1) → c(i, 1) is of type S if z is odd and is of type L if z is even.
Still assuming that (i, j) = (i, 1), Lemmas 3.4 and 3.5 completely characterize how to construct the slither of a scroll, given any length-n substring that starts with a 1. Specifically, we traverse the gaps of 0s between live entries from left to right and append each subslither (E or DE r D) as described by Lemma 3.4. Then we append the partial subslither DE r per Lemma 3.5.
Proposition 3.6. To construct the slither from a row with (i, 1) ∈ Live(S), traverse the 0-blocks entirely contained in row i from left to right. Do the following for each 0-block (where z is the size of the 0-block):
• If z = 1, append E.
• If z > 1, append DE^{⌊z/2⌋−1}D.
Finally, append DE^{⌊(z−1)/2⌋}, where z ≥ 1 is the number of 0s in row i after the rightmost live entry in row i.
Proof. The first step is the result of iteratively appending subslithers, which, by Lemma 3.4, are either DE^{⌊z/2⌋−1}D or E. The last step of appending DE^{⌊(z−1)/2⌋} is due to Lemma 3.5.
An example of the construction of a slither from a row beginning with a live entry is shown in Figure 10. Here, we construct the slither of the scroll shown in Figure 9 from just the first row, using only the sizes of the 0-blocks. That figure also provides a summary of how to construct the co-slither, which we will derive next.
To construct the co-slither, start from (i, j) = (i, 1), and apply the co-successor function to reach either (i + 1, n) or (i + 1, n − 1). Applying the inverse-successor function from here takes us to (i, j′), the last live entry in row i, which lies on Snake(i, j′). Applying the co-successor function from (i, j′) takes us to another snake, and we can apply s⁻¹ until we get back to row i and repeat this process until we return to Snake(i, j). Note that there could be several choices at each step, because some snake might have at least two consecutive live entries in row i, like the blue snake does in Figure 9. However, this does not matter, because applying the co-successor function from any of these lands us on the same snake.
Figure 10. The construction of the slither (read left-to-right) and co-slither (read right-to-left) from Figure 9. The slither is constructed by iteratively appending subslithers, as described in Proposition 3.6. The co-slither is constructed from the parity of the gaps from right-to-left using Proposition 3.7.
Proposition 3.7. To construct the co-slither from a row with (i, 1) ∈ Live(S), start with an L if the row ends in an even number of 0s and an S otherwise. Next, for each gap of z > 1 consecutive 0s, going from right to left:
• if z is even, append S;
• if z is odd, append L.
Proof. The first step is due to the characterization of c(i, 1) in Lemma 3.5. Next, it is straightforward to see (e.g., in Figure 9) how for each live entry in row i other than (i, 1), the step taking that live entry to its co-successor is of type S if the number of 0s preceding it is even and is of type L if the number of 0s preceding it is odd.
An example of using Proposition 3.7 to construct the co-slither of the scroll shown in Figure 9 from just the first line is shown in Figure 10.
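As an illustration, the following minimal sketch applies Propositions 3.6 and 3.7 to the first row of the scroll in Figure 9 (n = 24); the function name and the list encoding of the row are ours, not notation from the text.

```python
def slither_and_coslither(row):
    """Compute the slither and co-slither determined by `row` (a list of 0s and
    1s whose first entry is live), following Propositions 3.6 and 3.7."""
    n = len(row)
    live = [j for j, x in enumerate(row) if x == 1]
    gaps = [live[i + 1] - live[i] - 1 for i in range(len(live) - 1)]
    trailing = n - live[-1] - 1            # 0s after the rightmost live entry

    # Proposition 3.6: concatenate the subslithers, left to right.
    slither = ""
    for z in gaps:
        slither += "E" if z == 1 else "D" + "E" * (z // 2 - 1) + "D"
    slither += "D" + "E" * ((trailing - 1) // 2)

    # Proposition 3.7: read the parities of the gaps from right to left.
    coslither = "L" if trailing % 2 == 0 else "S"
    for z in reversed(gaps):
        if z > 1:
            coslither += "S" if z % 2 == 0 else "L"
    return slither, coslither

# First row of the scroll in Figure 9 (n = 24).
row = [int(c) for c in "100000010100100000100000"]
print(slither_and_coslither(row))          # ('DEEDEDDDEDDEE', 'SLSS')
```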
3.2. Characterizing ticker tapes. Propositions 3.6 and 3.7 can be used to construct all possible pairs of slithers and co-slithers for a given n. Given a ticker tape X , write β E and β D for the number of Es and Ds in Slither(X ) and α S and α L for the number of Ss and Ls in CoSlither(X ). Recall that the scale of a ticker tape is defined in Definition 2.18.
Lemma 3.8. The scale of a ticker tape (or scroll) is
σ = 2β E + (n + 1)β D = (2n − 1)α S + (2n − 2)α L .
Proof. From some fixed k ∈ Live(X ), we can compute the scale in two ways: (i) by applying the successor function β = β E + β D times, or (ii) applying the co-successor function α = α S + α L times. In the first case, we advance 2β E + (n + 1)β D positions in the ticker tape, and in the latter case, we advance (2n − 1)α S + (2n − 2)α L positions.
Corollary 3.9. For any ticker tape (or scroll), we have β D = 2(α S + α L ) − 1 and 2β E + 3α S + 4α L = n + 1.
Proof. Without loss of generality, assume that (0, 1) ∈ Live(S). There are α S + α L letters in the coslither: one for each gap of zeros between live entries and one more for the final string of 0s. Each gap contributes exactly two Ds, except the final string, which contributes one. It follows that the slither contains 2(α S + α L ) − 1 instances of D. The result now follows from substituting β D = 2(α S + α L ) − 1 into the equation in Lemma 3.8 and simplifying.
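Both identities are easy to sanity-check numerically. For instance, for the running example (n = 11, slither (ED)^3, co-slither S^2), a minimal sketch:

```python
# Running example: n = 11, slither (ED)^3, co-slither S^2.
n = 11
bE, bD = 3, 3                     # numbers of Es and Ds in the slither
aS, aL = 2, 0                     # numbers of Ss and Ls in the co-slither
# Lemma 3.8: both expressions give the scale (42 for this example).
assert 2 * bE + (n + 1) * bD == (2 * n - 1) * aS + (2 * n - 2) * aL == 42
# Corollary 3.9.
assert bD == 2 * (aS + aL) - 1 and 2 * bE + 3 * aS + 4 * aL == n + 1
```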
The equation 2β_E + 3α_S + 4α_L = n + 1 from Corollary 3.9 gives a necessary condition for the slithers and co-slithers that exist for a given n. Theorem 3.11 will show that this condition is also sufficient. In particular, any solution to this equation with β_E, α_S, α_L ≥ 0 and α_S + α_L > 0 gives a set of potential slithers and co-slithers that only differ by rearrangement. From each of these slither and co-slither combinations, we can construct a scroll; we call such a pair of words, one over {D, E} and one over {S, L} satisfying the two conditions of Corollary 3.9, a feasible pair. Without loss of generality, we will assume that (0, 1) is live. Built into the definition of a feasible pair is the assumption that α_S + α_L > 0, and hence that β_D > 0. In particular, both words must be nonempty.
Figure 11. The 17 ticker tapes for n = 13, up to cyclic shift.
β_E | α_S | α_L | β_D = 2(α_S + α_L) − 1 | Slither | Co-slither
 5  |  0  |  1  |  1 | EEEEED   | L
 3  |  0  |  2  |  3 | EEEDDD   | LL
 3  |  0  |  2  |  3 | EEDEDD   | LL
 3  |  0  |  2  |  3 | EEDDED   | LL
 3  |  0  |  2  |  3 | EDEDED   | LL
 1  |  0  |  3  |  5 | EDDDDD   | LLL
 4  |  2  |  0  |  3 | EEEEDDD  | SS
 4  |  2  |  0  |  3 | EEEDEDD  | SS
 4  |  2  |  0  |  3 | EEEDDED  | SS
 4  |  2  |  0  |  3 | EEDEEDD  | SS
 4  |  2  |  0  |  3 | EEDEDED  | SS
 2  |  2  |  1  |  5 | EEDDDDD  | SSL
 2  |  2  |  1  |  5 | EDEDDDD  | SSL
 2  |  2  |  1  |  5 | EDDEDDD  | SSL
 0  |  2  |  2  |  7 | DDDDDDD  | SSLL
 0  |  2  |  2  |  7 | DDDDDDD  | SLSL
 1  |  4  |  0  |  7 | EDDDDDDD | SSSS
Theorem 3.11. Every feasible pair defines a ticker tape.
Proof. We can explicitly construct the ticker tape using Propositions 3.6 and 3.7. Begin with a live entry at (0, 1), and read off the slither. If we read an E and there have been an even number of Ds so far, we add a single 0 and then a live entry. If we read a D that is not the final D, then let t be the number of Es between this D and the next D. We add either 2t + 2 or 2t + 3 zeros and then another live entry, and we continue reading the slither from just after that next D. To determine how many zeros to add, we look at the next entry in the co-slither, reading right to left. If the entry is S, we add 2t + 2 zeros, while if it is L, we add 2t + 3 zeros. When we reach the final D in the slither, we simply fill out the rest of the row with zeros.
By the condition that β_D = 2(α_S + α_L) − 1, we know that upon reaching the rightmost D in the slither, we also reach the leftmost letter in the co-slither. Let t be the number of Es at the end of the slither. We claim that the number of 0s at the end of the first row of the scroll is precisely 2t + 1 if the leftmost entry in the co-slither is S and is 2t + 2 if it is L. This follows from the fact that 2β_E + 3α_S + 4α_L = n + 1. In particular, when writing entries in the first row, we moved right two positions for each E, three positions for each S, and four positions for each L.
Now that we have constructed the first row, the remainder of the table is fully determined. By construction, the live entries in the first row form an independent set, and the theorem follows.
By construction, each feasible pair corresponds to a unique ticker tape. The example in Figures 9 and 10 corresponds to the solution β E = 6, α S = 3, α L = 1 to the equation 2β E + 3α S + 4α L = 25.
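The construction in the proof of Theorem 3.11 is easy to carry out mechanically. The sketch below is one way to do so, under the reading that processing an opening D consumes its whole subslither DE^tD (we resume reading after the matching closing D); applied to the feasible pair of Figure 9, it reproduces that scroll's first row. The function name is ours.

```python
def first_row(slither, coslither, n):
    """Rebuild the first row of the scroll defined by a feasible pair,
    following the constructive proof of Theorem 3.11."""
    co = list(coslither)               # consumed right to left
    row, i = [1], 0                    # (0, 1) is assumed live
    while True:
        if slither[i] == "E":          # standalone E: a gap of a single 0
            row += [0, 1]
            i += 1
            continue
        # slither[i] == "D": count the Es up to the next D (or the end)
        j = i + 1
        while j < len(slither) and slither[j] == "E":
            j += 1
        t = j - i - 1
        if j == len(slither):          # final D: fill out the row with 0s
            row += [0] * (2 * t + 1 if co[0] == "S" else 2 * t + 2)
            break
        letter = co.pop()              # next co-slither letter, right to left
        row += [0] * (2 * t + 2 if letter == "S" else 2 * t + 3) + [1]
        i = j + 1                      # resume reading after the closing D
    assert len(row) == n
    return row

# The feasible pair from Figure 9 (n = 24) reproduces its first row.
print("".join(map(str, first_row("DEEDEDDDEDDEE", "SLSS", 24))))
# -> 100000010100100000100000
```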
Remark 3.12. Using generating functions, it is straightforward to calculate the number of solutions to the equation 2β E + 3α S + 4α L = n + 1 over the nonnegative integers that satisfy α S + α L > 0. In particular, this quantity is given by the coefficient of x n+1 in the generating function
1/(1 − x^2) · ( 1/((1 − x^3)(1 − x^4)) − 1 ).
Calculating the total number of feasible pairs is more complicated because each solution corresponds to a collection of slithers and co-slithers made up of the same multiset of letters.
Example 3.13. Figure 11 gives a list of all of the ticker tapes for n = 13, which we computed using Theorem 3.11. There are a total of 17 ticker tapes (up to cyclic shift). Furthermore, one can see that there are 7 possible quadruples (α S , α L , β E , β D ). This is the coefficient of x 14 in the generating function from Remark 3.12.
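Both counts in Example 3.13 can be reproduced by brute force. The sketch below first counts the solutions of 2β_E + 3α_S + 4α_L = n + 1 directly, and then, treating slithers and co-slithers as cyclic words, multiplies the numbers of distinct arrangements of each (which matches the 17 rows of Figure 11 for n = 13). The helper names are ours, and the product formula is an assumption suggested by the table rather than a statement from the text.

```python
from itertools import permutations

def necklace_count(word):
    """Number of distinct cyclic arrangements of the multiset of letters in
    `word`, counted by brute force."""
    seen = set()
    for p in set(permutations(word)):
        w = "".join(p)
        seen.add(min(w[i:] + w[:i] for i in range(len(w))))
    return len(seen)

def feasible_quadruples(n):
    """All (beta_E, alpha_S, alpha_L, beta_D) allowed by Corollary 3.9."""
    sols = []
    for aS in range(n):
        for aL in range(n):
            if aS + aL == 0:
                continue
            rem = n + 1 - 3 * aS - 4 * aL
            if rem >= 0 and rem % 2 == 0:
                sols.append((rem // 2, aS, aL, 2 * (aS + aL) - 1))
    return sols

n = 13
quads = feasible_quadruples(n)
print(len(quads))                          # 7 quadruples, as in Example 3.13

total = sum(necklace_count("E" * bE + "D" * bD) * necklace_count("S" * aS + "L" * aL)
            for bE, aS, aL, bD in quads)
print(total)                               # 17 ticker tapes up to cyclic shift
```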
4. Dynamics and actions on finite quotient spaces
4.1. Orbit tables and ouroboroi. Thus far, we have viewed the dynamics generated by toggling independent sets using infinite scrolls and ticker tapes. However, sometimes it will be convenient to restrict our attention to a repeating sequence of rows in a scroll. If we identify two identical rows by a quotient map of the scroll (a cylinder) to get a torus, the snakes and co-snakes "wrap around" from bottom to top. Inspired by the ancient symbol of a snake swallowing its tail, as shown on the left in Figure 12, we will call such a finite circular snake an ouroboros.
Figure 12. On the left is a drawing of an ouroboros from a 1478 book on medieval alchemy by Theodoros Pelecanos; this image is from Wikipedia. On the right is the fundamental orbit table T_1 from our running example in Figure 2. When we allow snakes and co-snakes to wrap from bottom to top, the two snakes merge into one ouroboros with slither D̄Ē (top), and the six co-snakes merge into two co-ouroboroi, with co-slither S̄ (bottom).
Let x ∈ F n 2 , and suppose r is a positive integer such that x (r) = x (0) = x. Then r must be a multiple of the period T (S) of the scroll defined by x (as defined in Section 2.1). Hence, r = ωT (S) for some positive integer ω that we call a frequency. Define the ω-fold orbit table of x to be the r × n table whose rows are x (0) , . . . , x (r−1) . We denote this table by T ω = Table ω (x) or as T = Table(x), depending on whether ω is understood.
It will at times be useful to work with a finite version of the ticker tape. If r = ωT (S) as above, then define the ω-fold orbit vector of x to be the length-rn subsequence of the ticker tape that has x as an initial sequence-the result of reading the ω-fold orbit table across each column, downward row-by-row. We denote this as (6)
V ω = Vector ω (x) = X 0,1 , . . . , X 0,n , X 1,1 , . . . , X 1,n , . . . , X r−1,1 , . . . , X r−1,n ∈ F rn 2 , or V = Vector(x) if ω is understood or unimportant to the context. We refer to the 1-fold orbit table (resp., vector) as the fundamental orbit table (resp., vector).
If T is an orbit table and V an orbit vector, we define their sets of live entries to be
Live(T ) = (i, j) ∈ Z r × Z n | X i,j = 1 , Live(V) = k ∈ Z rn | X k = 1 .
Though it makes no difference either way, we will continue with the convention that the columns are numbered 1, . . . , n and the rows are numbered 0, . . . , r −1. As such, we harmlessly take Z n = {1, . . . , n} and Z r = {0, . . . , r − 1} in the orbit table and Z rn = {1, . . . , rn} in the orbit vector. The live entries in an orbit table are simply the images of the live entries in the corresponding scroll under the natural quotient map p ω : Live(S) → Live(T ω ) that reduces the first coordinate of each entry modulo r. Under this map, the successor and co-successor functions descend to bijections on Live(T ω ) that we call the ω-successor function s ω and ω-co-successor function c ω . The relationship between the successor and its ω-counterpart is illustrated by the following commutative diagrams.
Live(S)  --s-->  Live(S)              (i + kr, j)  --s-->  s(i + kr, j)
   | p_ω            | p_ω                  | p_ω                | p_ω
   v                v                      v                    v
Live(T_ω) --s_ω--> Live(T_ω)            (i, j)     --s_ω-->  s_ω(i, j)
Naturally, there is an analogous diagram relating c and c ω . The functions s ω and c ω generate a finite abelian group G(T ω ) := s ω , c ω that we call the ouroboros group of T ω , or the ω-fold ouroboros group of S. Since p ω is a topological covering map, there is an induced homomorphism p * ω : G(S) → G(T ω ) sending s → s ω and c → c ω . The ouroboros group is the quotient G(T ω ) ∼ = G(S)/ ker p * ω , and it acts simply transitively on Live(S)/ ker p ω , which can be canonically identified with Live(T ω ). We get a bijective correspondence between the orbits under s ω and c ω and the cosets of s ω and c ω . These are the images of the snakes and the co-snakes under the quotient map p ω , so we call them ouroboroi and co-ouroboroi, respectively.
Definition 4.1. Given a live entry (i, j) in an orbit table T ω , the ouroboros and co-ouroboros containing it are the sets
Ouro ω (i, j) = s k ω (i, j) | k ∈ Z , CoOuro ω (i, j) = c k ω (i, j) | k ∈ Z .
Throughout the rest of this paper, we will continue to assume that a scroll S has α snakes and β co-snakes. Recall that we denote the number of Ss and Ls in any co-slither by α S and α L , and the number of Ds and Es in any slither by β D and β E , respectively. Naturally, we have α = α S + α L and β = β D + β E .
We say the ω-fold orbit table T = T ω has α ω ouroboroi and β ω co-ouroboroi. (If ω is clear from the context, which it usually will be, then we will typically drop it as a subscript.) Similarly, we will often write s and c rather than s ω and c ω because ω will usually be unambiguous when we speak of these functions.
Theorem 4.2. For any orbit table T , the set Live(T ) is a torsor of the ouroboros group, which has presentation
G(T) = ⟨ s, c | sc = cs, s^β = c^α, s^{η/α} = c^{η/β} = 1 ⟩ ≅ Z_α × Z_{η/α} ≅ Z_β × Z_{η/β},
where η is the number of live entries.
Proof. We have already established the first statement. It follows that there are bijective correspondences between ouroboroi and cosets of ⟨s⟩ and between co-ouroboroi and cosets of ⟨c⟩. Since G(T) = ⟨s, c⟩ ≅ G(S)/ker p* is a finite abelian group of order η, the first two relations hold, and we have |⟨s⟩| = η/α and |⟨c⟩| = η/β. In other words, G(T)/⟨s⟩ ≅ Z_α and G(T)/⟨c⟩ ≅ Z_β. The result now follows.
Figure 13. In the 2-fold orbit table T_2 from our running example in Figure 2, there are α_2 = 2 ouroboroi with slither D̄Ē and β_2 = 2 co-ouroboroi with co-slither S̄^2.
Definition 4.3. The (co-)ouroboros degree of an orbit table T_ω is the number of (co-)snakes in the p_ω-preimage of each (co-)ouroboros. We denote these as deg(p_ω) = α/α_ω and codeg(p_ω) = β/β_ω, respectively. We call deg(p_1) and codeg(p_1) the fundamental ouroboros degree and fundamental co-ouroboros degree, respectively.
Returning back to our running example, the fundamental orbit table (i.e., ω = 1) is shown in Figure 12. The α = 2 snakes in S merge into α 1 = 1 ouroboros, and the β = 6 co-snakes merge into β 1 = 2 co-ouroboroi. Therefore, the fundamental ouroboros and co-ouroboros degrees are deg(p 1 ) = α/α 1 = 2/1 = 2 and codeg(p 1 ) = β/β 1 = 6/2 = 3.
In contrast, in the 2-fold orbit table, shown in Figure 13, the α = 2 snakes in S remain α 2 = 2 separate ouroboroi, and the β = 6 co-snakes become β 2 = 2 co-ouroboroi. The ouroboros and co-ouroboros degrees are thus deg(p 2 ) = α/α 2 = 2/2 = 1 and codeg(p 2 ) = β/β 2 = 6/2 = 3.
Slithers and co-slithers naturally descend to orbit tables via the quotient p_ω : Live(S) → Live(T_ω). The slither of S is a length-β sequence of Ds and Es, and it defines a cyclic ordering ⟨c⟩, s⟨c⟩, . . . , s^{β−1}⟨c⟩ of co-snakes. If we apply the quotient map p*_ω : G(S) → G(T_ω), then we get a cyclic ordering of the β_ω co-ouroboroi. Each co-ouroboros appears in the sequence ⟨c_ω⟩, s_ω⟨c_ω⟩, . . . , s_ω^{β−1}⟨c_ω⟩ exactly codeg(p_ω) = β/β_ω times, and this must be a divisor of the degree of S (the exponent that appears in the slither). Define the slither of T_ω, denoted Slither(T_ω), to be any length-β_ω subsequence of a slither of S (so Slither(T_ω) is defined up to cyclic shift). We will also refer to Slither(T_ω) as the ω-slither of S.
The preceding notions all have straightforward analogues for co-slithers. More precisely, a co-slither of S is a length-α sequence of Ss and Ls that defines a cyclic ordering ⟨s⟩, c⟨s⟩, . . . , c^{α−1}⟨s⟩ of snakes. Via the quotient map p*_ω : G(S) → G(T_ω), we get a cyclic ordering of the α_ω ouroboroi. Each ouroboros appears in the sequence ⟨s_ω⟩, c_ω⟨s_ω⟩, . . . , c_ω^{α−1}⟨s_ω⟩ exactly deg(p_ω) = α/α_ω times, and this must be a divisor of the co-degree of S (the exponent that appears in the co-slither). We define the co-slither of T_ω, denoted CoSlither(T_ω), to be any length-α_ω subsequence (defined up to cyclic shift) of a co-slither of S; we also call this the ω-co-slither of S. The preceding two paragraphs have established the following.
Lemma 4.4. For any ω ≥ 1, up to cyclic shift, Slither(S) = Slither(T_ω)^{codeg(p_ω)} and CoSlither(S) = CoSlither(T_ω)^{deg(p_ω)}.
We will refer to the (co-)slither of the fundamental orbit table (i.e., ω = 1) as the fundamental (co-)slither. To emphasize that we are taking the slither in an orbit table rather than in the scroll, we will sometimes write Ē and D̄ rather than E and D, and we will similarly use S̄ and L̄ in co-slithers.
Let us return to our running example from Figure 2 and its fundamental and 2-fold orbit tables shown in Figures 12 and 13, respectively. The length of the fundamental slither D̄Ē is β_1 = β/codeg(p_1), the number of co-ouroboroi, and the length of the fundamental co-slither S̄ is α_1 = α/deg(p_1), the number of ouroboroi. As guaranteed by Lemma 4.4, the (co-)slithers of S and T are related by
(D̄Ē)^{codeg(p_1)} = (DE)^3 and S̄^{deg(p_1)} = S^2.
The 2-fold orbit table of our running example, shown in Figure 13, has two ouroboroi with slither D̄Ē and two co-ouroboroi with co-slither S̄^2. The ouroboros degree is thus deg(p_2) = 2/2 = 1, and the co-ouroboros degree is codeg(p_2) = 6/2 = 3. As predicted by Lemma 4.4, we have (D̄Ē)^{codeg(p_2)} = (DE)^3 and (S̄^2)^{deg(p_2)} = S^2.
4.2. Swallow and co-swallow functions. In this subsection, we will look at how the snakes and co-snakes merge in various ω-fold orbit tables under the quotient maps p_ω. This will allow us to derive formulas relating α_ω to α_1 and likewise relating β_ω to β_1. We will write S(S) for the set of snakes of S and C(S) for the set of co-snakes of S. Given s ∈ S(S), define its tail to be the smallest positive coordinate t of the ticker tape that is contained in s. It is easy to translate this back into orbit table notation if desired, and we will denote this as Tail(s), regardless of the setting. Next, for any ω, with r = ωT(S), define the ω-head of s to be the largest coordinate h ∈ [rn] of the orbit vector V_ω contained in s. Applying the ω-successor function from the ω-head wraps past the end of the orbit vector back to near the beginning, landing on some live entry whose p_ω-preimage lies in some snake s′ ∈ S(S), possibly s itself. This defines a bijection on the snakes of S, and motivated by the idea of an ouroboros swallowing its tail, we will call this bijection the ω-swallow function:
Swal ω : S(S) −→ S(S). As a permutation, Swal ω contains α ω disjoint cycles, each containing exactly deg(p ω ) snakes. The next result guarantees that if the snakes of S are canonically cyclically ordered, then this bijection cyclically shifts this ordering by the same amount.
Proposition 4.5. If we cyclically order the snakes in S by s 1 , . . . , s α so that the co-successor of any live entry in s i lies in s i+1 , then for some constant k,
Swal ω (s i ) = s i+k for all 1 ≤ i ≤ α,
where all indices are taken modulo α.
Proof. Since the snakes are cyclically ordered, it suffices to show that this holds for consecutive entries. Specifically, we will show that if Swal(s_α) = s_k, then Swal(s_1) = s_{k+1}. Denote the ω-head and the tail of s_i by h_i = Head_ω(s_i) and t_i = Tail(s_i), respectively. It suffices to show that if s(h_α) = t_k, then s(h_1) = t_{k+1}. Assuming the former, start at h_1, and apply s to get to the tail t_j of some snake s_j. Alternatively, applying c⁻¹ takes us to some live entry on s_α. Now, let i ∈ N be the minimal number of s-steps needed to reach the ω-head h_α, i.e., s^i(c⁻¹(h_1)) = h_α. The next s-step takes us to the tail t_k ∈ s_k. Finally, c(t_k) lies on snake s_{k+1}. Because the ouroboros group is abelian, we have c s^{i+1} c⁻¹(h_1) = s^{i+1}(h_1) ∈ s_{k+1}. We also know that s^i(s(h_1)) = s^i(t_j), and further applications of s from the tail will remain on the snake s_j (until we reach the ω-head h_j), so j = k + 1.
We can analogously define the tail and ω-head of a co-snake c, which we will denote by Head ω (c) and Tail(c), respectively. Applying the ω-co-successor function from the ω-head of each co-snake defines a bijection
CoSwal_ω : C(S) −→ C(S) that we call the co-swallow function. The basic properties of the swallow function carry over to the co-swallow function. For example, as a permutation, CoSwal_ω contains β_ω disjoint cycles, each containing exactly codeg(p_ω) co-snakes. If the co-snakes are canonically cyclically ordered, then this bijection cyclically shifts this ordering by the same amount. The proof is analogous to that of Proposition 4.5 and will be omitted.
Proposition 4.6. If we cyclically order the co-snakes in S by c_1, . . . , c_β so that the successor of any live entry in c_i lies in c_{i+1}, then for some constant k,
CoSwal ω (c i ) = c i+k , for all i = 1, . . . , β,
where all indices are taken modulo β.
Let us return to our running example. Since there are β = 6 co-snakes but only α = 2 snakes, it is more illustrative to compute the co-swallow functions. The co-snakes are highlighted by color on the left in Figure 14, where the tails of the co-snakes are marked (these do not depend on ω), as are their 1-heads (these do depend on the choice ω = 1). From Figure 14, the co-swallow function is defined by
CoSwal_1(c_1) = c_5, CoSwal_1(c_5) = c_3, CoSwal_1(c_3) = c_1,
and also CoSwal 1 (c 2 ) = c 6 , CoSwal 1 (c 6 ) = c 4 , CoSwal 1 (c 4 ) = c 2 .
In the language of Proposition 4.6, the co-swallow function is the mapping c_i → c_{i+4}, where the indices are taken modulo 6. We can describe this by the permutation with cycle decomposition (264)(153).
By Proposition 4.5, the swallow function Swal ω is the permutation sending i → i + k, so each cycle has length α/ gcd(α, k), and there are gcd(α, k) disjoint cycles. The different cycles correspond to the different ouroboroi.
4.3. The ω-fold vs. fundamental orbit table. We are now ready to give explicit relationships between the properties of an ω-fold orbit table, such as the (co-)degree and the number of (co-)snakes, and the corresponding properties of the fundamental (i.e., 1-fold) orbit table. First, recall that deg(S) (resp., codeg(S)) is the length of Slither(S) (resp., CoSlither(S)) divided by its period as a cyclic word, and that the (co-)snake scale is the minimal shift of the ticker tape that leaves the (co-)snakes invariant. These are denoted p and q, respectively.
Proposition 4.7. For any scroll S, deg(p_1) divides codeg(S) and codeg(p_1) divides deg(S).
Proof. Without loss of generality, suppose that (0, 1) ∈ Live(S). By Corollary 2.24, the minimal k > 1 such that
(8) (k, 1) ∈ Live(S), Slither(k, 1) = Slither(0, 1), and CoSlither(k, 1) = CoSlither(0, 1)
is the period T (S) of the scroll. By definition, this is also the number of rows in the fundamental orbit table.
Next, let k′ > 0 be minimal such that the three conditions in Eq. (8) hold, with the additional condition that (k′, 1) ∈ Snake(0, 1). Note that k′/k is just the number of times the ouroboros wraps around from bottom to top before returning to the initial snake, so
k′ = k · deg(p_1) = deg(p_1) T(S) = deg(p_1) · T(X)/gcd(T(X), n) = deg(p_1) · lcm(T(X), n)/n.
Next, we will express k′ in terms of codeg(S).
Recall that Scale(X) is the minimal σ such that a live entry h is on the same snake and co-snake as h + σ. It follows that p = Scale(X)/deg(X) is the minimal ℓ such that h is on the same snake as h + ℓ and Slither(h) = Slither(h + ℓ). Using the same line of reasoning as in the proof of Corollary 2.26, we find that
k′ = (1/n) · lcm(Scale(X)/deg(X), n) = (1/n) · lcm(codeg(X)T(X), n).
Comparing the two expressions for k′ gives deg(p_1) = lcm(codeg(X)T(X), n)/lcm(T(X), n) = codeg(X) · gcd(T(X), n)/gcd(codeg(X)T(X), n), which establishes that deg(p_1) divides codeg(S). The proof that codeg(p_1) divides deg(S) is analogous: we simply replace the "additional condition" with (k′, 1) ∈ CoSnake(0, 1) and swap all instances of deg(X) and codeg(X).
Corollary 4.8. The integers deg(p 1 ) and codeg(p 1 ) are relatively prime.
Proof. Lemma 2.16 established that gcd(deg(S), codeg(S)) = 1. The result now follows immediately from Proposition 4.7.
Let s 1 , . . . , s α be the snakes in S, cyclically ordered so that the co-successor of a live entry of s i lies in s i+1 . Suppose T is an orbit table associated with S. For any s i , the final live entry of s i in T has an ω-successor corresponding to an entry on the first row of T .
Proposition 4.9. Suppose a scroll S has α snakes and β co-snakes and that its fundamental orbit table has α_1 ouroboroi and β_1 co-ouroboroi. For ω > 1, the numbers α_ω and β_ω of ouroboroi and co-ouroboroi in its ω-fold orbit table satisfy α_ω = α_1 · gcd(deg(p_1), ω) and β_ω = β_1 · gcd(codeg(p_1), ω).
Proof. By Proposition 4.5, there is some constant k such that each snake s_i maps to s_{i+k} when wrapping vertically around the fundamental orbit table. It follows that each snake s_i maps to s_{i+ωk} when wrapping vertically around the ω-fold orbit table. This means that the number of ouroboroi in the ω-fold orbit table is gcd(ωk, α), so
α_ω = gcd(ωk, α) = gcd(α, gcd(ωk, ωα)) = gcd(α, ω · gcd(k, α)) = gcd(k, α) · gcd(α/gcd(k, α), ω) = α_1 · gcd(deg(p_1), ω),
where the last step uses α_1 = gcd(k, α) and deg(p_1) = α/α_1.
The second equality is analogous.
Our running example of a scroll has α = 2 snakes and β = 6 co-snakes, which we distinguished with different colors. As we took quotients to construct ω-fold orbit tables, these snakes and co-snakes merged into (co)-ouroboroi, so we needed fewer colors to represent them. However, there are special cases when taking the quotient preserves the snakes and co-snakes, and thus the colors as well. Going back to our running example, notice that if ω is a multiple of deg(p_1) = 2, then there are α_ω = α = 2 ouroboroi, and if ω is a multiple of codeg(p_1) = 3, then there are β_ω = β = 6 co-ouroboroi. Specifically, these quantities are
α_ω = 2 if ω ≡ 0 (mod 2), and α_ω = 1 if ω ≡ 1 (mod 2);
β_ω = 6 if ω ≡ 0 (mod 3), and β_ω = 2 if ω ≡ 1 or 2 (mod 3).
We now consider when the ω-fold orbit table T_ω has the same number of ouroboroi as the scroll has snakes and also has the same number of co-ouroboroi as the scroll has co-snakes. This happens precisely when ω is a multiple of lcm(deg(p_1), codeg(p_1)) = deg(p_1) · codeg(p_1). In this case, we say that the orbit table T_ω = p_ω(S) is color-preserving. The following conditions are equivalent:
(1) α_ω = α and β_ω = β;
(2) Swal ω and CoSwal ω are the identity permutation;
(3) there is a bijection between the set of (co-)snakes in the scroll and the set of (co-)ouroboroi in the orbit table; (4) the number rn = ωnT (S) of entries in T ω is a multiple of Scale(X ); (5) deg(p 1 ) · codeg(p 1 ) divides ω.
Proof. The equivalence of conditions (1), (2), (3) is immediate from the above discussion.
Let k be the smallest positive integer such that X_k = 1. Let s and c be the snake and the co-snake, respectively, containing k. Let k′ be the smallest integer such that k′ > rn and k′ ≡ k (mod Scale(X)). Then k′ is a live entry in Swal_ω(s) ∩ CoSwal_ω(c). It follows that Scale(X) divides rn if and only if Swal(s) = s and CoSwal(c) = c. But all cycles of Swal_ω (respectively, CoSwal_ω) have the same size, so (4) is equivalent to (2).
From Proposition 4.9, we see that (1) holds if and only if lcm(deg(p 1 ), codeg(p 1 )) divides ω. Since deg(p 1 ) and codeg(p 1 ) are relatively prime by Corollary 4.8, conditions (1) and (5) are equivalent.
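For the running example, the equivalence of conditions (4) and (5) can also be checked directly; here is a small sketch using the values n = 11, Scale(X) = 42, T(S) = 7, deg(p_1) = 2, and codeg(p_1) = 3 computed earlier.

```python
# Running example: n = 11, Scale(X) = 42, T(S) = 7, deg(p_1) = 2, codeg(p_1) = 3.
n, scale, T_S, d1, c1 = 11, 42, 7, 2, 3

for omega in range(1, 25):
    rn = omega * n * T_S                     # number of entries in T_omega
    cond4 = (rn % scale == 0)                # rn is a multiple of Scale(X)
    cond5 = (omega % (d1 * c1) == 0)         # deg(p_1) * codeg(p_1) divides omega
    assert cond4 == cond5
print("conditions (4) and (5) agree for this example")
```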
5. Periods of sum vectors
In this section, we will introduce the sum vector of a scroll and fully classify the possible periods for the sum vector of a scroll on n vertices, when viewed as a cyclic word.
For an orbit table T , define the sum vector of T , written Σ(T ), to be the vector in N n given by the column sums of T . Clearly, for every ω ∈ N, the ω-fold orbit table of a scroll S has sum vector ω · Σ(T 1 ). Thus, we can define Σ(S) to be the sum vector of any orbit table of S, and this vector is well-defined up to scalar multiples. Note that Σ(S) indicates the relative frequency of live entries in each column of S. From the toggling perspective, this means that Σ(S) indicates the relative frequency with which each vertex of C n appears in the independent sets that we obtain from iteratively applying the toggling operation τ .
In many orbit tables, such as the one in our running example in Figure 2, the sum vector is constant, so its period is 1. However, this is not always the case. For example, the sum vector of the orbit table in Figure 15 has period 3.
Figure 15. An orbit table whose sum vector Σ(T_1) = (1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2) has period 3.
The sum vector of an orbit table or scroll is closely related to the scale, the distance between two consecutive entries in any fiber. Specifically, we will want to keep track of the number of columns between such entries, and this is just the scale modulo n. However, it is also helpful to explicitly define this as an integer in {0, . . . , n − 1} (i.e., a particular residue modulo n).
Definition 5.1. The column scale of S, denoted ColScale(S), is the value in {0, 1, . . . , n − 1} that is congruent to Scale(S) modulo n.
By definition, if (i, j) and (i , j ) are consecutive entries in the same fiber, then j + ColScale(S) ≡ j (mod n). Lemma 5.2. For any scroll S, the following equality holds:
ColScale(S) = β D + 2β E = n − α S − 2α L .
Proof. The first equality follows from Definition 5.1 and Lemma 3.8, as well as the fact that β D +2β E < n, which follows from Corollary 3.9. For the second equality, we again apply Corollary 3.9 to find that
ColScale(S) = β D + 2β E = 2α S + 2α L + 2β E − 1 = (2β E + 3α S + 4α L ) − α S − 2α L − 1 = n + 1 − α S − 2α L − 1 = n − α S − 2α L .
Lemma 5.3. For any scroll S, the period of Σ(S) must divide gcd(n, Scale(S)) = gcd(n, ColScale(S)).
Proof. Clearly, the period of Σ(S) divides the period of the ticker tape, which divides Scale(S) by Theorem 2.25. Since Σ(S) has length n, its period divides n. Thus, the period of Σ(S) divides gcd(n, Scale(S)).
Corollary 5.4. For any scroll S, the sum vector Σ(S) has odd period.
Proof. Let λ be the period of Σ(S). Thus far, we have established that
λ | ColScale(S)
and
ColScale(S) = β D + 2β E
where the divisibility is by Lemma 5.3 and the equality by Lemma 5.2. The result is immediate from Corollary 3.9, which tells us that β D is odd.
Theorem 5.5. If the sum vector of a scroll on n ≥ 4 vertices has period λ, then λ | n and n ≥ 4λ.
Proof. We may assume λ > 1 since the result is trivial otherwise. Since it follows from Lemma 5.3 that λ | n, we just need to show that n ∉ {λ, 2λ, 3λ}. By Lemmas 5.2 and 5.3, λ | ColScale(S) and ColScale(S) = n − α_S − 2α_L = β_D + 2β_E < n.
It follows that λ ≠ n. Suppose by way of contradiction that n = 2λ. Then, we must have ColScale(S) = n/2. In this case, α_S + 2α_L = β_D + 2β_E = n/2. By Corollary 3.9, this implies that α_S + 2α_L = 2α_S + 2α_L + 2β_E − 1. It follows that β_E = 0 and α_S = 1. However, when β_E = 0, the slither is made entirely of Ds, and each snake has the same number of live entries in each column. This means that the period of Σ(S) must be 1, which contradicts the assumption that λ > 1.
Now suppose n = 3λ. We must have α_S + 2α_L ∈ {n/3, 2n/3}. If α_S + 2α_L = 2n/3, then α_S + 2α_L = 2(β_D + 2β_E) = 2(2α_S + 2α_L + 2β_E − 1), which implies that 3α_S + 2α_L + 4β_E = 2. Since α_S + α_L > 0, this forces (α_S, α_L, β_E) = (0, 1, 0), and hence n + 1 = 2β_E + 3α_S + 4α_L = 4, contradicting n = 3λ ≥ 6. On the other hand, if α_S + 2α_L = n/3, then 2(α_S + 2α_L) = 2α_S + 2α_L + 2β_E − 1. This is impossible because the two sides of the equation have opposite parities.
Next, we will prove a partial converse to Theorem 5.5. In particular, we will show that if λ | n and n ≥ 4λ, then there must exist some scroll on n vertices whose sum vector has period λ. Before proving this theorem, we introduce one more useful definition.
Suppose a snake has a slither of length β. A segment of the snake is a subset of Live(S) of the form k, s(k), s 2 (k), . . . , s β−1 (k) ,
where k is any live entry.
Theorem 5.6. For any λ, k ∈ N such that λ is odd and k ≥ 4, there exists a scroll S on n = kλ vertices such that the period of Σ(S) is λ.
Proof. We prove this result via a construction that depends on the parity of k. Since the sum vectors of the ω-fold orbit tables for different choices of ω always have the same period, we can work with the minimal color-preserving orbit table. First, suppose k is even. We will use the slither D^{λ−2}ED^2E^{λk/2−λ−1} and the co-slither SL^{(λ−1)/2}. Note that this slither and co-slither are a feasible pair, and thus define a ticker tape (and scroll) by Theorem 3.11. Furthermore, by a straightforward calculation, we find that the column scale of the associated scroll is λ(k − 1). Because gcd(λ(k − 1), λk) = λ, each snake can be divided into n/λ = k disjoint segments.
We will start by finding the contribution to the sum vector coming from a single snake, and we will then consider the total sum vector. Consider the mod n positions of live entries that form a single snake. For a single segment, we begin with λ − 1 adjacent positions, skip one position, take two more adjacent positions, and then take every other position until we place a 0 in position λ(k − 1). The next slither has an equivalent effect on the sum, but it is shifted by λ(k − 1) positions. Figure 16 gives an example of this construction for λ = 7 and k = 4.
Figure 16. An example of the construction used for the proof of Theorem 5.6 when k = 4 and λ = 7. The first 3 rows of the figure show the column position of the live entries from a single snake, where different colors represent different segments. Note that when two live entries are adjacent in this presentation, they appear on subsequent rows of the orbit table, but the row position of live entries does not affect the column sums. The fourth row is the column sum of the first 3 rows, which is the contribution to the sum vector from a single snake. In the remaining rows of the figure, we add the contributions to the sum vector from each of the four snakes. This overall sum is the sum vector of the orbit table when k = 4 and λ = 7. We show in the proof that this sum vector always has period λ.
Notice from the structure of the slither that the contribution to the sum vector from a single snake is periodic with period λ. In particular, almost all of the first λ entries have the same value x, except that the second entry is x+1 and the λ th entry is x−1. This pattern repeats for every set of λ columns. Now, we need to consider the sum vectors of the other snakes. Notice that for every S in the coslither, we get a snake whose sum vector is shifted one position left from the previous snake, while for every L in the co-slither, we get a snake whose sum vector is shifted two positions left from the previous snake. For our co-slither of SL (λ−1)/2 , we consider how many times each of the entries x, x − 1 and x + 1 appear in each column. Through a straightforward computation, we find that x + 1 appears once in the first column as well as every column (of the first λ) of even index but does not appear in the other columns. Similarly, x − 1 appears once in the λ th column as well as once in every even-indexed column, but it does not appear in the other columns. Overall, this means that the initial λ entries in the sum vector are a + 1, a, a . . . , a, a − 1 for some a (see Figure 16). It follows that the period of the sum vector is λ.
In the case where k is odd, we use the slither D^{2λ+1}E^{((k−4)λ−1)/2} and the co-slither S^2L^{λ−1}. It is again easy to confirm that the slither and co-slither are a feasible pair. Furthermore, by a straightforward calculation, we find that the column scale of the associated scroll is λ(k − 2). The argument for this case is essentially analogous to the argument we used for the case where k is even. Figure 17 gives an example of this construction for λ = 7 and k = 5.
Figure 17. An example of the construction used for the proof of Theorem 5.6 when k = 5 and λ = 7. The different colors represent different segments. The 1s are in the correct column (but not the correct row) for a single snake so that we can see the impact on the sum vector.
Once we have the contribution to the sum vector from a single snake, showing that the total sum vector has period λ is even easier than it was in the previous case. In particular, using a co-slither of S^2L^{λ−1} is equivalent to taking λ sum vectors, each subsequently shifted 2 positions, and then adding one more sum vector shifted 1 position. The first λ contributions to the sum vectors must add up to a constant sum because gcd(2, λ) = 1. Thus, the total sum vector has period λ because a single sum vector has period λ.
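As a quick check of the two families used in this proof (with the exponents as written above), the following sketch verifies the Corollary 3.9 conditions and the claimed column scales for several values of λ and k; the helper functions are ours.

```python
def is_feasible(slither, coslither, n):
    """The two counting conditions from Corollary 3.9."""
    bE, bD = slither.count("E"), slither.count("D")
    aS, aL = coslither.count("S"), coslither.count("L")
    return bD == 2 * (aS + aL) - 1 and 2 * bE + 3 * aS + 4 * aL == n + 1

def column_scale(slither):
    """Column scale via Lemma 5.2: beta_D + 2 * beta_E."""
    return slither.count("D") + 2 * slither.count("E")

for lam in (3, 5, 7, 9):                      # lambda must be odd
    for k in range(4, 11):                    # k >= 4
        n = k * lam
        if k % 2 == 0:                        # even-k construction
            sli = "D" * (lam - 2) + "E" + "DD" + "E" * (lam * k // 2 - lam - 1)
            co = "S" + "L" * ((lam - 1) // 2)
            assert is_feasible(sli, co, n) and column_scale(sli) == lam * (k - 1)
        else:                                 # odd-k construction
            sli = "D" * (2 * lam + 1) + "E" * (((k - 4) * lam - 1) // 2)
            co = "SS" + "L" * (lam - 1)
            assert is_feasible(sli, co, n) and column_scale(sli) == lam * (k - 2)
print("both constructions give feasible pairs with the claimed column scales")
```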
There is nothing special about the constructions used for Theorem 5.6, and we expect that there are many other classes of ticker tapes satisfying the conditions of the theorem. The challenge was to incorporate just enough asymmetry so as not to shrink the period of the sum vector. One idea for future research would be to characterize the relative frequencies of different sum vector periods.
6. Concluding remarks
We began the work described in this paper thinking it was a fun but fairly narrow and self-contained problem about toggling independent sets of cycle graphs. What we encountered was an unexplored mathematical theory that is applicable to other combinatorial actions. The iteration of a fixed Coxeter element τ in any generalized toggle group defines a finite dynamical system, and the concepts of a scroll, ticker tape, and orbit table all carry over. There is nothing special about these objects on their own, as they are just ways to describe and visualize the dynamics. What makes this problem unique is the fact that there are two commuting bijections, the successor and co-successor functions, that act simply transitively on the live entries. As far as we know, studying actions on the live entries is new, and it endows the orbits of the global action with a signature algebraic structure.
It is natural to ask which other combinatorial actions also have torsor structures on their orbits. For this to have any chance of working, there needs to be strong structural regularity of the underlying graph. For example, in the current paper, the automorphism group of the graph-the dihedral group of order 2n-acts transitively on the vertices, and the action commutes with the toggle operation. It is hard to see how the orbits of a general action could admit any meaningful algebraic structure without this. In other words, we should be looking at other vertex-transitive graphs that are in some sense similar to cycle graphs. In ongoing work, we have found torsor structures on orbits from toggling independent sets over distance-2 cycle graphs and from toggling other combinatorial objects over cycle graphs.
In another direction, toggling independent sets can be formalized as one of the 256 elementary cellular automata (CA) rules. Specifically, a CA is a regular grid of cells, where each cell has a Boolean state in F 2 = {0, 1}, and the state of a vertex v is updated at each timestep based on the states of v and its neighbors. In the 1-dimensional setting, a "grid" is simply a cycle graph or an infinite path graph. In either case, the update function is defined by some f : F 3 2 → F 2 . Each one is characterized by its truth table, a generic example of which is shown in Eq. (9).
(9)
x_{i−1} x_i x_{i+1} | 111 | 110 | 101 | 100 | 011 | 010 | 001 | 000
f_i^{(k)}(x_{i−1}, x_i, x_{i+1}) | b_7 | b_6 | b_5 | b_4 | b_3 | b_2 | b_1 | b_0
Since each b_i is in F_2, there are 2^8 = 256 possible functions; each such function is indexed by the integer k ∈ {0, 1, . . . , 255}, which is the integer whose binary representation is b_7 b_6 b_5 b_4 b_3 b_2 b_1 b_0. The one indexed by k is called the elementary cellular automata (ECA) rule k. In this setting, the toggle functions introduced in this paper can be realized as ECA rule 1. Technically, the bits b_7, b_6, and b_3 can be anything because the substrings 111, 110, and 011 do not appear in the set L of independent sets of C_n. However, it is arguably more natural to use ECA rule 1, also known as the logical NOR function. ECA rules are defined on all 2^n states, and the periodic points of ECA rule 1 are precisely the independent sets. Casting the work from this paper in the setting of cellular automata poses new questions that would likely not be asked from within the dynamical algebraic combinatorics community. For example, which ECA rules lead to dynamics whose live entries admit a torsor structure? For such a toggle action to be defined, the local functions need to act on the set of periodic points, and this happens for precisely 104 ECA rules, or 41 up to the equivalences defined by reflection and inversion. These rules were classified by the third author with McCammond and Mortveit in [MMM08], and their toggle groups were studied in [MMM11], though under the name of "dynamics groups." Preliminary investigation has revealed that most of these 41 ECA rules do not admit an interesting torsor structure, but largely for mundane reasons. For example, the toggle group is trivial for 26 of the 41 rules. For more on the connections between dynamical algebraic combinatorics and cellular automata, the interested reader can consult our paper in the proceedings of the annual AUTOMATA conference [DDJ + 21]. The first two-thirds of that paper is a survey bringing these fields together, and the last third is an "extended abstract" of this current paper. In it, we also propose a number of open-ended problems in both fields, inspired by ideas and themes of the other one. It is our hope that the present work, and the ideas within, catches the interest of researchers with different backgrounds in these fields and beyond.
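As a concrete illustration of the rule-number convention in Eq. (9) above, here is a small Python sketch (our own, purely illustrative; the function name eca_rule_table is not from the paper) that recovers the truth table of an ECA rule from its index and confirms that rule 1 is the three-input NOR.

```python
def eca_rule_table(k):
    """Truth table of elementary cellular automaton rule k, following the
    b7 ... b0 convention of Eq. (9): the neighborhood read as a binary
    number idx is mapped to output bit b_idx of the rule number."""
    assert 0 <= k <= 255
    table = {}
    for idx in range(8):  # idx = 7 corresponds to neighborhood 111, idx = 0 to 000
        neighborhood = tuple(int(b) for b in format(idx, "03b"))
        table[neighborhood] = (k >> idx) & 1
    return table

rule1 = eca_rule_table(1)
# Rule 1 outputs 1 only on the all-zero neighborhood, i.e. it is the logical NOR.
assert all(out == (1 if nbhd == (0, 0, 0) else 0) for nbhd, out in rule1.items())
```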
Figure 1. An orbit of size 15 consisting of independent sets of the cycle graph C_12.
Figure 3. The universal scroll S of our running example S = Scroll(x) from Figure 2. Each 7 × 11 block highlighted by the checkerboard shading corresponds to an identical copy of a τ-orbit. The offset of the blocks arises because of the covering map.
Figure 4. A picture illustrating Lemmas 2.1 and 2.2, where ā = 1 + a and b̄ = 1 + b denote complementary entries in F_2 = {0, 1}.
Figure 11. This table classifies all possible slithers and co-slithers (up to cyclic shift) for ticker tapes on n = 13 vertices.
Definition 3.10. Fix a positive integer n. Let α_S, α_L, β_E, and β_D = 2(α_S + α_L) − 1 be nonnegative integers. Suppose W_s (the "slither") is a word over the alphabet {D, E} with β_D instances of D and β_E instances of E, and suppose W_c (the "co-slither") is a word over the alphabet {S, L} with α_S instances of S and α_L instances of L. We say the pair (W_s, W_c) is feasible if 2β_E + 3α_S + 4α_L = n + 1.
Figure 12. v_1 v_2 v_3 v_4 v_5 v_6 v_7 v_8 v_9 v_10 v_11; x^{(0)}.
α = [G(T) : ⟨s⟩] = η/β and β = [G(T) : ⟨c⟩] = η/α.
deg(p_ω) := [G(S) : ⟨s⟩] / [G(T_ω) : ⟨s_ω⟩] = α/α_ω and codeg(p_ω) := [G(S) : ⟨c⟩] / [G(T_ω) : ⟨c_ω⟩] = β/β_ω.
Lemma 4.4. For any scroll S, we have Slither(T_ω)^{codeg(p_ω)} = Slither(S) and CoSlither(T_ω)^{deg(p_ω)} = CoSlither(S).
CoSwal_1 = (c_1 c_5 c_3)(c_2 c_6 c_4)
Figure 14. On the left is the repeating portion of the running example of our scroll. The tails of co-snakes are indicated by 1. On the right is the ω-fold orbit table for ω = 1. The ω-heads of the co-snakes are the positions of 1.
Proposition 4.7. For any scroll S, deg(p_1) divides deg(S) and codeg(p_1) divides codeg(S).
We now have the explicit formula deg(S) = deg(X) = deg(p_1) · gcd(deg(X)·T(X), n) / gcd(T(X), n).
Proposition 4.10. Given an orbit table T_ω = p_ω(S), the following conditions are equivalent:
Figure 15. An orbit table whose sum vector Σ(T_1) = (1, 1, 2, 1, 1, 2, 1, 1, 2, 1, 1, 2) has period 3.
Alternatively, this can be formalized by taking L = In in the definition of generalized toggling in Eq. (1). Notice that the condition "and I \ {k} ∈ L" in the middle line is actually unnecessary, because removing a vertex from an independent set always leaves it independent.
We will continue to write snake groups multiplicatively due to their definitions in terms of function composition, and because most snakes are not adders.
Earlier conference papers involving this work, such as AUTOMATA [DDJ + 21] and FPSAC [DJMM22] used '2' instead of 'E.'
Note that if we had αS = αL = 0, then our co-slither would be empty, which is impossible.
Because we want this argument to work with both orbit table and orbit vector notation, we are not specifying whether h_i and t_i are integers or ordered pairs.
Acknowledgements
We are grateful to the Banff International Research Station and the organizers of the Dynamical Algebraic Combinatorics Workshop, which was held virtually in 2020. We thank Laurent David for several substantial ideas and conversations during the initial stages of this project. We also thank Adam Dzedzej and David Einstein for helpful discussions.
A. Brouwer and L. Schrijver. On the period of an operator, defined on antichains. Stichting Mathematisch Centrum. Zuivere Wiskunde, (ZW 24/74):1-13, 1974.
P. Cameron and D. Fon-Der-Flaass. Orbits of antichains revisited. European J. Combin., 16(6):545-554, 1995.
[DDJ+21] L. David, C. Defant, M. Joseph, M. Macauley, and A. McDonough. Toggling independent sets in cycle graphs. In 27th International Workshop on Cellular Automata and Discrete Complex Systems (AUTOMATA 2021), OpenAccess Series in Informatics (OASIcs), Dagstuhl, Germany, 2021. Schloss Dagstuhl-Leibniz-Zentrum für Informatik.
C. Defant. Toric promotion. Proc. Amer. Math. Soc., 151(7):45-57, 2023.
[DJMM22] C. Defant, M. Joseph, M. Macauley, and A. McDonough. Torsors from toggling independent sets. In 34th Int'l Conf. on Formal Power Series and Algebraic Combinatorics (FPSAC), volume 86B, Article #85, 12 pp., Bangalore, 2022.
M. Develin, M. Macauley, and V. Reiner. Toric partial orders. Trans. Amer. Math. Soc., 368(4):2263-2287, 2016.
D. Einstein, M. Farber, E. Gunawan, M. Joseph, M. Macauley, J. Propp, and S. Rubinstein-Salzedo. Noncrossing partitions, toggles, and homomesies. Electron. J. Combin., 23(3), 2016.
S. Haddadan. Some instances of homomesy among ideals of posets. Electron. J. Combin., pages P1-60, 2021.
M. Joseph. Antichain toggling and rowmotion. Electron. J. Combin., 26(1), 2019.
M. Joseph and T. Roby. Toggling independent sets of a path graph. Electron. J. Combin., 25(1):1-18, 2018.
[MMM08] M. Macauley, J. McCammond, and H. S. Mortveit. Order independence in asynchronous cellular automata. J. Cell. Autom., 3(1):37-56, 2008.
[MMM11] M. Macauley, J. McCammond, and H. S. Mortveit. Dynamics groups of asynchronous cellular automata. J. Algebraic Combin., 33(1):11-35, 2011.
Y. Numata and Y. Yamanouchi. On the action of the toggle group of the Dynkin diagram of type A. Algebr. Comb., 5(1):149-161, 2022.
D. Panyushev. On orbits of antichains of positive roots. European J. Combin., 30(2):586-594, 2009.
T. Roby. Dynamical algebraic combinatorics and the homomesy phenomenon. In Recent Trends in Combinatorics, pages 619-652. Springer, 2016.
J. Striker. The toggle group, homomesy, and the Razumov-Stroganov correspondence. Electron. J. Combin., 22(2):P2-57, 2015.
J. Striker. Dynamical algebraic combinatorics: Promotion, rowmotion, and resonance. Notices Amer. Math. Soc., 64(6), 2017.
J. Striker and N. Williams. Promotion and rowmotion. European J. Combin., 33:1919-1942, 2012.
| [] |
[
"Towards Continuous Consistency Axiom",
"Towards Continuous Consistency Axiom",
"Towards Continuous Consistency Axiom",
"Towards Continuous Consistency Axiom"
] | [
"Mieczys Law \nInstitute of Computer Science ul\nJana Kazimierza\n",
"A K Lopotek \nInstitute of Computer Science ul\nJana Kazimierza\n",
"Robert A K Lopotek Cardinal \nFaculty of Mathematics and Natural Sciences School of Exact Sciences ul\nWyszyński University\nDewajtis 501-248, 01-815Warszawa, WarszawaPoland, Poland\n",
"Stefan \nFaculty of Mathematics and Natural Sciences School of Exact Sciences ul\nWyszyński University\nDewajtis 501-248, 01-815Warszawa, WarszawaPoland, Poland\n",
"Mieczys Law \nInstitute of Computer Science ul\nJana Kazimierza\n",
"A K Lopotek \nInstitute of Computer Science ul\nJana Kazimierza\n",
"Robert A K Lopotek Cardinal \nFaculty of Mathematics and Natural Sciences School of Exact Sciences ul\nWyszyński University\nDewajtis 501-248, 01-815Warszawa, WarszawaPoland, Poland\n",
"Stefan \nFaculty of Mathematics and Natural Sciences School of Exact Sciences ul\nWyszyński University\nDewajtis 501-248, 01-815Warszawa, WarszawaPoland, Poland\n"
] | [
"Institute of Computer Science ul\nJana Kazimierza",
"Institute of Computer Science ul\nJana Kazimierza",
"Faculty of Mathematics and Natural Sciences School of Exact Sciences ul\nWyszyński University\nDewajtis 501-248, 01-815Warszawa, WarszawaPoland, Poland",
"Faculty of Mathematics and Natural Sciences School of Exact Sciences ul\nWyszyński University\nDewajtis 501-248, 01-815Warszawa, WarszawaPoland, Poland",
"Institute of Computer Science ul\nJana Kazimierza",
"Institute of Computer Science ul\nJana Kazimierza",
"Faculty of Mathematics and Natural Sciences School of Exact Sciences ul\nWyszyński University\nDewajtis 501-248, 01-815Warszawa, WarszawaPoland, Poland",
"Faculty of Mathematics and Natural Sciences School of Exact Sciences ul\nWyszyński University\nDewajtis 501-248, 01-815Warszawa, WarszawaPoland, Poland"
] | [] | Development of new algorithms in the area of machine learning, especially clustering, comparative studies of such algorithms as well as testing according to software engineering principles requires availability of labeled data sets. While standard benchmarks are made available, a broader range of such data sets is necessary in order to avoid the problem of overfitting. In this context, theoretical works on axiomatization of clustering algorithms, especially axioms on clustering preserving transformations are quite a cheap way to produce labeled data sets from existing ones. However, the frequently cited axiomatic system of of Kleinberg [18], as we show in this paper, is not applicable for finite dimensional Euclidean spaces, in which many algorithms like k-means, operate. In particular, the so-called outer-consistency axiom fails upon making small changes in datapoint positions and inner-consistency axiom is valid only for identity transformation in general settings.Hence we propose an alternative axiomatic system, in which Kleinberg's inner consistency axiom is replaced by a centric consistency axiom and outer consistency axiom is replaced by motion consistency axiom. We demonstrate that the new system is satisfiable for a hierarchical version of k-means with auto-adjusted k, hence it is not contradictory. Additionally, as k-means creates convex clusters only, we demonstrate that it is possible to create a version detecting concave clusters and still the axiomatic system can be satisfied. The practical application area of such an axiomatic system may be the generation of new labeled test data from existent ones for clustering algorithm testing.Given that, Kleinberg [18] formulated axioms for distance-based cluster analysis (Properties 4,5, 6), which are rather termed properties by other e.g.[3]. Property 4. Let Range(f ) denote the set of all partitions Γ such that f (d) = Γ for some distance function d. A function f has the richness property if Range(f ) is equal to the set of all partitions of S. Property 5. A function f has the scale-invariance property if for any distance function d and any α > 0, we have f (d) = f (α · d).Property 6. Let Γ be a partition of S, and d and d two distance functions on S. We say that d is a Γ-transformation of d (Γ(d) = d ) if (a) for all | 10.1007/s10489-022-03710-1 | [
"https://export.arxiv.org/pdf/2202.06015v1.pdf"
] | 246,823,312 | 2202.06015 | 5cdf8b70b5863760a61e3955ca144b2570f247e6 |
Towards Continuous Consistency Axiom
Mieczysław A. Kłopotek (Institute of Computer Science, ul. Jana Kazimierza, 01-248 Warszawa, Poland) and Robert A. Kłopotek (Cardinal Stefan Wyszyński University, Faculty of Mathematics and Natural Sciences, School of Exact Sciences, ul. Dewajtis 5, 01-815 Warszawa, Poland)
February 15, 2022
Keywords: cluster analysis, clustering axioms, consistency, continuous consistency, inner- and outer-consistency, continuous inner- and outer-consistency, gravitational consistency, centric consistency, motion consistency, k-means algorithm
Abstract
Development of new algorithms in the area of machine learning, especially clustering, comparative studies of such algorithms, as well as testing according to software engineering principles, requires availability of labeled data sets. While standard benchmarks are made available, a broader range of such data sets is necessary in order to avoid the problem of overfitting. In this context, theoretical works on axiomatization of clustering algorithms, especially axioms on clustering-preserving transformations, are quite a cheap way to produce labeled data sets from existing ones. However, the frequently cited axiomatic system of Kleinberg [18], as we show in this paper, is not applicable to finite-dimensional Euclidean spaces, in which many algorithms, like k-means, operate. In particular, the so-called outer-consistency axiom fails upon making small changes in datapoint positions, and the inner-consistency axiom is valid only for the identity transformation in general settings. Hence we propose an alternative axiomatic system, in which Kleinberg's inner-consistency axiom is replaced by a centric consistency axiom and the outer-consistency axiom is replaced by a motion consistency axiom. We demonstrate that the new system is satisfiable for a hierarchical version of k-means with auto-adjusted k, hence it is not contradictory. Additionally, as k-means creates convex clusters only, we demonstrate that it is possible to create a version detecting concave clusters and still the axiomatic system can be satisfied. The practical application area of such an axiomatic system may be the generation of new labeled test data from existent ones for clustering algorithm testing.
Introduction
Development of data mining algorithms, in particular also of clustering algorithms (see Def.3), requires a considerable body of labeled data. The data may be used for development of new algorithms, fine-tuning of algorithm parameters, testing of algorithm implementations, comparison of various brands of algorithms, investigating of algorithm properties like scaling in sample size, stability under perturbation and other.
There exist publicly available repositories of labeled data 1 , as well as technologies for obtaining new labeled data sets, like crowdsourcing [10], for extending existent small bodies of labeled data into bigger ones (semi-supervised learning), and various cluster generators [17, 26]. Nonetheless, these resources may prove sparse for extensive development and testing efforts because of the risk of overfitting, the risk of instability, etc. Therefore, efforts are made to provide developers with an abundant amount of training data.
The development of axiomatic systems may be exploited for such purposes. Theoretical works on axiomatization of clustering algorithms are quite a cheap way to produce labeled data sets from existing ones. These axiomatic systems (Def.2) include among others definitions of data set transformations under which some properties of the clustering algorithm will remain unchanged. The most interesting property is the partition of the data. The axiomatic systems serve of course other purposes, like deepening the understanding of the concept of clusters, the group similarity, partition and the clustering algorithm itself.
Various axiomatic frameworks have been proposed, e.g. for unsharp partitioning by [36], for graph clustering by [34], for cost function driven algorithms by [7], for linkage algorithms by [2], for hierarchical algorithms by [9,14,33], for multi-scale clustering by [8]. for settings with increasing sample sizes by [16], for community detection by [38], for pattern clustering by [30], and for distance based clustering [18], [7], see also [3]. Regrettably, the natural settings for many algorithms, like k-means, seem to have been ignored, that is (1) the embedding in the Euclidean space (Def.10), (2) partition of not only the sample but of the sample space, and (3) the behavior under continuous transformations (Def.12). It would also be a useful property if the clustering would be the same under some perturbation of the data within the range of error. Perturbation p(S) is an akin concept to continuous transformation in that for some small > 0 the distance between P ∈ S and p(P ) is below . The importance of the perturbation robustness in clustering has been recognized years ago and was studied by [5,6,25] among others. The candidate axiomatic system for this purpose seems to be the mentioned Kleinberg's system [18], as there are no formal obstacles to apply the axioms under Euclidean continuous settings. As we show, however, one of the transformations implied by Kleinberg's axioms, the consistency axiom transformation (Property.6) turns out to be identity transformation in Euclidean space, if continuity is required (Def.12.) Furthermore, its special case of inner-consistency (Def. 8) turns out to be identity transformation in Euclidean space even without continuity requirement (Theorem 16 and Theorem 17). The special case of outer-consistency (Def.9) suffers also from problems under continuous transformation (Theorem 20).
The possibility to perform continuous Γ-transformations preserving consistency is very important from the point of view of clustering algorithm testing because it is a desirable property that small perturbations of data do not produce different clusterings. In case that a consistency transform reduces to identity transform, the transformation is useless from the point of view of algorithm testing (same data set obtained).
Therefore, the axiom of consistency has to be replaced. We suggest replacing inner-consistency with centric consistency (Property 29) and outer-consistency with motion consistency (Def. 39). Both special types of consistency need replacements because each of them leads to contradictions, as shown in Section 4.1. We will demonstrate that in this way we repair the basic deficiency of Kleinberg's system, that is, its self-contradiction, as well as transfer the axiomatic thinking into the continuous domain (Theorem 42). In Section 12, we will verify the validity of our basic cluster-preserving transformations.
Previous Work
Let us recall that an axiom is defined as follows. Definition 1. An axiom or postulate is a proposition regarded as self-evidently true without proof.
A clustering axiom is understood as a property that has to hold for a clustering algorithm in order for that algorithm to be considered as a clustering algorithm.
Definition 2. An axiomatic system is a set of axioms that are considered to hold at the same time.
If an axiomatic system holds for an algorithm, then one can reason about further properties of that algorithm without bothering of the details of that algorithm.
Google Scholar lists about 600 citations of Kleinberg's axiomatic system. Kleinberg [18, Section 2] defines a clustering function as: Definition 3. A clustering function is a function f that takes a distance function d on [a set] S [of size n ≥ 2] and returns a partition Γ of S. The sets in Γ will be called its clusters.
Given that, Kleinberg [18] formulated axioms for distance-based cluster analysis (Properties 4, 5, 6), which are rather termed properties by others, e.g. [3].
Property 4. Let Range(f) denote the set of all partitions Γ such that f(d) = Γ for some distance function d. A function f has the richness property if Range(f) is equal to the set of all partitions of S.
Property 5. A function f has the scale-invariance property if for any distance function d and any α > 0, we have f(d) = f(α · d).
Property 6. Let Γ be a partition of S, and d and d' two distance functions on S. We say that d' is a Γ-transformation of d (Γ(d) = d') if (a) for all i, j ∈ S belonging to the same cluster of Γ, we have d'(i, j) ≤ d(i, j), and (b) for all i, j ∈ S belonging to different clusters of Γ, we have d'(i, j) ≥ d(i, j). The clustering function f has the consistency property if for each distance function d and its Γ-transform d' the following holds: if f(d) = Γ, then f(d') = Γ.
For algorithms like k-means, Kleinberg also defined the property of k-richness, in which he requires generation not of "all partitions of S" but rather of "all partitions of S into exactly k non-empty clusters". Kleinberg demonstrated that his three axioms (richness, Property 4; scale-invariance, Property 5; and consistency, Property 6) cannot be met all at once, but only pair-wise; see his Impossibility Theorem [18, Theorem 2.1]. In order to resolve the conflict, replacements of the consistency axiom were proposed: order-consistency [37], refinement-consistency [18], inner/outer-consistency [3], order-invariance, and so on (see [1]). We will be interested here in two concepts:
Definition 8. Inner-consistency is a special case of consistency in which the distances between members of different clusters do not change.
Definition 9. Outer-consistency is a special case of consistency in which the distances between members of the same cluster do not change.
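To make Property 6 and Definitions 8 and 9 concrete, the following Python sketch (ours, purely illustrative; all function and variable names are assumptions, not notation from the paper) checks numerically whether one distance matrix is a Γ-transformation, an inner-Γ-transformation, or an outer-Γ-transformation of another for a given partition. The small demo also shows that shrinking a cluster toward its centroid, an operation discussed later in the paper, fails condition (b) in this example.

```python
import numpy as np

def pairwise_distances(Z):
    """Euclidean distance matrix of the rows of Z."""
    return np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)

def is_gamma_transform(d_old, d_new, labels, mode="full", tol=1e-12):
    """Check Kleinberg's consistency conditions for two distance matrices and
    the partition encoded by labels.
    mode = "full":  within-cluster distances may only shrink,
                    between-cluster distances may only grow (Property 6);
    mode = "inner": additionally, between-cluster distances must be unchanged (Def. 8);
    mode = "outer": additionally, within-cluster distances must be unchanged (Def. 9)."""
    d_old, d_new, labels = np.asarray(d_old), np.asarray(d_new), np.asarray(labels)
    same = labels[:, None] == labels[None, :]
    within_ok = np.all(d_new[same] <= d_old[same] + tol)
    between_ok = np.all(d_new[~same] >= d_old[~same] - tol)
    if mode == "inner":
        between_ok = np.all(np.abs(d_new[~same] - d_old[~same]) <= tol)
    if mode == "outer":
        within_ok = np.all(np.abs(d_new[same] - d_old[same]) <= tol)
    return bool(within_ok and between_ok)

# Two clusters on a line.
X = np.array([[0., 0.], [1., 0.], [10., 0.], [11., 0.]])
labels = np.array([0, 0, 1, 1])

# Moving cluster 1 further away is a valid (outer) Gamma-transform.
Y = X.copy()
Y[labels == 1, 0] += 5.0
print(is_gamma_transform(pairwise_distances(X), pairwise_distances(Y), labels))           # True
print(is_gamma_transform(pairwise_distances(X), pairwise_distances(Y), labels, "outer"))  # True

# Shrinking cluster 0 toward its centroid is NOT a Gamma-transform here:
# the point (0, 0) moves toward cluster 1, so condition (b) fails.
Z = X.copy()
c = X[labels == 0].mean(axis=0)
Z[labels == 0] = c + 0.5 * (X[labels == 0] - c)
print(is_gamma_transform(pairwise_distances(X), pairwise_distances(Z), labels))           # False
```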
In this paper, we depart from the assumptions of previous papers on Kleinberg's axiomatic system and its variants in that we introduce the assumption of continuity in the following aspects: (1) the embedding of data in a fixed-dimensional space is assumed (Def. 10, Def. 11), (2) the Γ-transformation has to be performed in a continuous way (Def. 12, Def. 13), and (3) the data set S is considered as a sample from a probability density function over the continuous space, as discussed already by Pollard [28] in 1981 and by MacQueen [24] in 1967 when considering the in-the-limit behavior of the k-means target function, and by Klopotek [19] for k-means++. We assume that this probability density function is continuous. In this paper we will also go beyond the classical convex-shaped structure of k-means clusters.
Preliminaries
We discuss the embedding of the data points in a fixed-dimensional Euclidean space, with the distance between data points implied by the embedding, and we consider continuous versions of data transforms.
Definition 10. An embedding of a data set S into the m-dimensional space R m is a function E : S → R m inducing a distance function d E (i, j) between these data points being the Euclidean distance between E(i) and E(j).
We consider here finite dimensional Euclidean spaces R m . The continuity in this space is understood as follows:
Definition 11. A space / set S ⊆ R^m is continuous iff, for any ε > 0, within a distance of ε from any point P ∈ S there exist infinitely many points Q ∈ S.
The same applies to clusters: for any ε > 0, within a distance of ε from any point P of a cluster C there exist infinitely many points Q of the same cluster C. That is, while a clustering algorithm like k-means operates on a finite sample, it actually splits the entire space into clusters, that is, it assigns class membership also to unseen data points from the space.
Definition 12. By a continuous transformation of a (finite or infinite) data set S in such a space (S ⊂ R^m) we shall understand a function t(t, S), where t ∈ [0, 1], such that t(0, P) = P for any P ∈ S, and for each t ∈ [0, 1] and each ε > 0 there exists δ_{t,ε} > 0 such that for any 0 < δ < δ_{t,ε} and any t ∈ [0, 1] the point Q = t(t + δ, P) lies within the distance of ε from t(t, P) for each P.
k-means, kernel-k-means, their variants and many other clustering functions rely on the embedding into Euclidean space which raises the natural question on the behavior under continuous changes of positions in space. Let E S m be the set of all possible embeddings of the data set S in R m . Define: Definition 13. We speak about continuous consistency transformation or continuous Γ-transformation t of a clustering Γ if t is a continuous transformation as defined in Def. 12 and d(t(t, P ), t(t, Q)) ≥ d(t(t , P ), t(t , Q)) for any 0 ≤ t < t ≤ 1 if P, Q belong to the same cluster C ∈ Γ and d(t(t, P ), t(t, Q)) ≤ d(t(t , P ), t(t , Q)) for any 0 ≤ t < t ≤ 1 if P, Q do not belong to the same cluster.
The transform for the probability density function is understood in a natural way as a transformation of the probability density estimated from the sampling.
The clustering function f is also understood as partitioning the space in such a way that with increasing sample size the sample-based partition of the space approximates the partition of the space based on the probability density, as discussed in [21]. Continuous inner-Γ-transformation/consistency and continuous outer-Γ-transformation/consistency are understood in analogy to inner/outer-Γ-transformation/consistency. Subsequently, whenever we talk about distance, we mean Euclidean distance.
Property 14.
A clustering function f has the continuous-consistency property, if for each dataset S and the induced clustering Γ = f (S) each continuous Γtransformation t has the property that for each 0 ≤ t ≤ 1 the clustering of t(t, S) will result in the same clustering Γ.
The property of continuous consistency is a very important notion from the point of view of clustering in the Euclidean space, especially in the light of research on perturbation robustness [5,6,25].
Note that, in general, a Γ-transformation cannot be replaced by a finite sequence of inner and outer Γ-transformations. But if we have to do with a continuous Γ-transformation, then it can be replaced by a continuum of inner and outer Γ-transformations. However, we can think of a sequence of sequences of Γ-transformations approximating a continuous Γ-transformation. By a sequence t_{i,n} of Γ-transformations of a data set S we understand the following sequence: t_{0,n}(S) = S and, for each i = 1, . . . , n, t_{i,n}(S) is a Γ-transformation of t_{i−1,n}(S). In particular, for a continuous Γ-transformation t, the sequence t(i/n, S) for i = 0, . . . , n would be such a sequence. We say that a sequence t_{i,n} approximates the continuous transformation t with precision π if, for each P in S, the distance between t_{i,n}(P) and t(t, P), where t ∈ [i/n, (i + 1)/n], does not exceed π.
Theorem 15. If t is a continuous-Γ-transformation, then there exists an n for which the approximating sequence of Γ-transformations t i,n can be found approximating t up to π. What is more, for possibly a larger n, a sequence exists consisting of a mixture of only inner and outer Γ-transformations.
Proof. It can be shown by appropriately playing with the π's and ε's at various scales of δ.
The Kleinberg's axioms reflect three important features of a clustering (1) compactness of clusters, (2) separation of clusters and (3) the balance between compactness and separation. In particular, the inner-consistency property says that the increase of compactness of a cluster should preserve the partition of data. The outer consistency property says that the increase of separation of clusters should preserve the partition of data. The scale-invariance states that preserving the balance between compactness and separation should preserve partition. The richness means that variation of balances between compactness and separation should lead to different partitions.
We will demonstrate however a number of impossibility results for continuous inner and outer-consistency. Essentially we show that in a number of interesting cases these operations are reduced to identity operations. Therefore we will subsequently seek replacements for continuous consistency, that will relax their rigidness, by proposing centric consistency introduced in [21] as a replacement for inner-consistency, and motion-consistency, introduced in [22] as a replacement for outer-consistency. We will demonstrate that under this replacement, the (nearly) richness axiom may be fulfilled in combination with centric consistency and scale-invariance.
As we will refer frequently to the k-means algorithm, let us recall that it is aimed at minimizing the cost function Q (reflecting its quality in that the lower Q the higher the quality) of the form:
\[
Q(\Gamma) \;=\; \sum_{i=1}^{m}\sum_{j=1}^{k} u_{ij}\,\lVert x_i - \mu_j\rVert^2
\;=\; \sum_{j=1}^{k} \frac{1}{n_j} \sum_{\substack{x_i, x_l \in C_j \\ i<l}} \lVert x_i - x_l\rVert^2
\;=\; \sum_{j=1}^{k} \frac{1}{2 n_j} \sum_{x_i \in C_j}\sum_{x_l \in C_j} \lVert x_i - x_l\rVert^2
\qquad (1)
\]
for a dataset X under some partition Γ into the predefined number k of clusters, where u_ij is an indicator of the membership of data point x_i in the cluster C_j having the cluster center at µ_j (which is the cluster's gravity center). Note that [28] modified this definition by dividing the right-hand side by m in order to make the values comparable for samples and the population, but we will only handle samples of a fixed size, so this definition is sufficient for our purposes. A k-means algorithm finding exactly the clustering optimizing Q shall be referred to as k-means-ideal. Realistic implementations start from a randomized initial partition and then improve Q iteratively. An algorithm where the initial partition is obtained by random assignment to clusters shall be called random-set k-means.
For various versions of k-means algorithm see e.g. [35].
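Both forms of the cost function in Eq. (1) can be cross-checked numerically; the sketch below is our own illustrative Python (the function names are ours), not part of any k-means implementation discussed in the paper.

```python
import numpy as np

def kmeans_cost_centroid(X, labels):
    """First form of Eq. (1): squared distances of points to their cluster centroids."""
    return sum(np.sum((X[labels == j] - X[labels == j].mean(axis=0)) ** 2)
               for j in np.unique(labels))

def kmeans_cost_pairwise(X, labels):
    """Last form of Eq. (1): 1/(2 n_j) times the sum of all squared pairwise
    distances within each cluster."""
    total = 0.0
    for j in np.unique(labels):
        C = X[labels == j]
        diffs = C[:, None, :] - C[None, :, :]
        total += np.sum(diffs ** 2) / (2 * len(C))
    return total

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 2))
labels = rng.integers(0, 3, size=30)
assert np.isclose(kmeans_cost_centroid(X, labels), kmeans_cost_pairwise(X, labels))
```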
4 Main results
Internal Contradictions of Consistency Axioms
In this section we demonstrate that the inner-consistency axiom alone induces contradictions (Theorem 16 and Theorem 17). The continuous versions of the respective consistency transformations amplify the problems. Theorem 18 shows that if concave clusters are produced, the continuous consistency transform may not be applicable. In general settings, the continuous outer-consistency transform (Theorem 20) may not be applicable to convex clusters either. These inner contradictions can be cured by imposing additional constraints on the continuous consistency transform in the form of gravitational (Def. 22), homothetic (Theorem 24), or convergent (Property 25) transforms.
We claim first of all that inner-consistency cannot be satisfied non-trivially for quite natural clustering tasks, see Theorem 16 and Theorem 17.
Theorem 16. In m-dimensional Euclidean space for a data set with no cohyperplanarity of any m + 1 data points (that is when the points are in general position) the inner-Γ-transform is not applicable non-trivially, if we have a partition into more than m + 1 clusters.
See proof on page 15. By non-trivial we mean the case of Γ-transformation different from isometric transformation. Note that here we do not even assume continuous inner-consistency. Non-existence of non-trivial inner-consistency makes continuous one impossible.
As the data sets with properties mentioned in the premise of Theorem 16 are always possible, no algorithm producing the m + 1 or more clusters in R m has non-trivially the inner-consistency property. This theorem sheds a different light on claim of [3] that k-means does not possess the property of inner-consistency. No algorithm producing that many clusters has this property if we consider embeddings into a fixed dimensional Euclidean space. Inner-consistency is a special case of Kleinberg's consistency.
The above Theorem can be slightly generalized:
Theorem 17. In a fixed-dimensional space R m under Euclidean distance if there are at least m + 1 clusters such that we can pick from each cluster a data point so that m + 1 points are not co-hyperplanar (that is they are "in general position"), and at least two clusters contain at least two data points each, then inner-Γ-transform is not applicable (in non-trivial way). 2 See proof on page 15. Again continuous inner-consistency was not assumed.
Theorem 18. If a clustering function is able to produce concave clusters such that there are mutual intersections of one cluster with convex hull of a part of the other cluster, then it does not have the continuous consistency property if identity transformation is excluded.
See proof on page 16. The Theorem 18 referred to clustering algorithms producing concave clusters. But what when we restrict ourselves to partitions into convex clusters? Let us consider the continuous outer-consistency.
Definition 19. Consider three clusters C_1, C_2, C_3. If the points S_i, the separating hyperplanes h_ij and the vectors v_ij with the properties mentioned below exist, then we say that the three clusters follow the three-cluster-consistency principle. The requested properties are: (1) There exist points S_1, S_2, S_3 in the convex hulls of C_1, C_2, C_3 resp. such that C_i is separated from S_j by a hyperplane h_ij such that the line segment S_iS_j is orthogonal to h_ij, for i, j = 1, 2, 3. (2) Let v_ij be the speed vector of cluster C_i relative to C_j, for i, j = 1, 2, 3, such that v_12 ⊥ h_12, v_13 ⊥ h_13, and v_23 ⊥ h_23, and |v_12|/|S_1S_2| = |v_13|/|S_1S_3|.
Note that condition (2) implies that v_23 = v_12 − v_13 and |v_12|/|S_1S_2| = |v_23|/|S_2S_3|.
Theorem 20. If a clustering function may produce partitioning such that there exists a sequence of cluster indices (first and last being identical) so that one is unable to find hyperplanes separating the clusters not violating for any subsequence of three clusters the three-cluster-consistency principle and without a mismatch between the first and the last S point, then this function does not have continuous outer-consistency property if we discard the isometric transformation.
See proof on page 17. Note that k-means clustering function does not suffer from the deficiency described by the precondition of Theorem 20 because it implies a Voronoi tessellation: if we choose always the gravity centers of the clusters as the S points, then h hyperplanes can always be chosen as hyperplanes orthogonal to S j S i lying half-way between S i and S j . Hence Theorem 21. Given k-means clustering (in form of a Voronoi tessellation), there exists a non-trivial continuous outer-Γ-transform preserving the outerconsistency property in that we fix the position of one cluster C and move each other cluster C j relocating each data point of cluster C j with a speed vector proportional to the vector connecting the cluster center C with cluster center C j .
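The outer-Γ-transform of Theorem 21 is easy to realize numerically. The Python sketch below is our own reading of the construction (the function name is ours): every cluster other than a fixed reference cluster is translated rigidly, with a displacement proportional to the vector from the reference cluster centre to its own centre.

```python
import numpy as np

def expand_clusters(X, labels, reference_cluster, speed):
    """Outer Gamma-transform in the spirit of Theorem 21: keep the reference
    cluster fixed and translate every other cluster rigidly, with a
    displacement proportional to the vector connecting the reference cluster
    centre with that cluster's centre."""
    Y = X.astype(float).copy()
    centers = {j: X[labels == j].mean(axis=0) for j in np.unique(labels)}
    ref = centers[reference_cluster]
    for j, c in centers.items():
        if j != reference_cluster:
            Y[labels == j] += speed * (c - ref)
    return Y

# Example: three clusters in the plane; clusters 1 and 2 drift away from
# cluster 0 while their internal shapes stay unchanged.
X = np.array([[0, 0], [0, 1], [5, 0], [5, 1], [0, 6], [1, 6]], dtype=float)
labels = np.array([0, 0, 1, 1, 2, 2])
Y = expand_clusters(X, labels, reference_cluster=0, speed=0.5)
```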
So, by irony, the k-means clustering algorithm for which Kleinberg demonstrated that it does not have the consistency property, is the candidate for a non-trivial continuous outer consistency variant. Let us elaborate now on this variant.
Definition 22.
A Γ-transform has the property of gravitational consistency if within each cluster the distances between gravity centers of any two disjoint subsets of cluster elements do not increase. See proof on page 19. The gravitational consistency property is not easy to verify (the number of comparisons is exponential in the size of the cluster). However, some transformations have this property, for example: Theorem 24. The homothetic transformation of a cluster with a ratio in the range (0, 1), in the whole space or any subspace, has the property that the distances between gravity centers of any two disjoint subsets of cluster elements do not increase.
See proof on page 19. The gravitational consistency may be deemed a property that is too k-means-specific.
Therefore, we considered, in the paper [21], the problem of Kleinberg's consistency in its generality. We showed there that the problem of Kleinberg's contradictions lies in the very concept of consistency because his Γ-transformation creates new clusters within existent one (departs from cluster homogeneity) and clusters existent ones into bigger ones (balance between compactness and separation is lost). This effect can be avoided probably only via requiring the convergent consistency, as explained below. See proof in paper [21]. When we actually cluster data sampled from a continuous distribution, we would expect the "in the limit" condition to hold, that is that the clustering obtained via a sample converges to the clustering of the sample space.
Theorem 27. If the in-the-limit condition has to hold then the convergent-Γtransform must be an isomorphism.
See proof in paper [21]. For this reason, under conditions of continuity, we need to weaken the constraints imposed by Kleinberg on clustering functions via his consistency axiom. As stated in Theorem 15, Kleinberg's Γ-transformation under continuous conditions can be replaced by a limit of a sequence of inner and outer Γ-transforms. Deficiencies of both have just been listed. Therefore we propose here two relaxations of the two properties: centric consistency as a replacement of inner-consistency and motion consistency as a replacement of outer-consistency.
Centric Consistency
Let us start with the centric consistency. It is inspired by the result of Theorem 24.
Definition 28. Let E be an embedding of the dataset S with distance function d (induced by this embedding). Let Γ be a partition of this dataset. Let C ∈ Γ and let µ c be the gravity center of the cluster C in E. We say that we execute the Γ * transformation (or a centric transformation, Γ(d; λ) = d ) if for some 0 < λ ≤ 1 a new embedding E is created in which each element x of C, with coordinates x in E, has coordinates x in E such that x' = µ c + λ(x − µ c ), all coordinates of all other data points are identical, and d is induced by E . C is then said to be subject of the centric transform.
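The Γ* transformation of Definition 28 amounts to rescaling one cluster about its gravity centre; the following Python sketch (ours, with hypothetical names, purely for illustration) implements exactly this coordinate map.

```python
import numpy as np

def centric_transform(X, labels, cluster, lam):
    """Gamma* transform of Definition 28: move every point x of the chosen
    cluster to mu_c + lam * (x - mu_c), where mu_c is the cluster's gravity
    centre and 0 < lam <= 1; all other points keep their coordinates."""
    assert 0 < lam <= 1
    Y = X.astype(float).copy()
    mask = labels == cluster
    mu = Y[mask].mean(axis=0)
    Y[mask] = mu + lam * (Y[mask] - mu)
    return Y

X = np.array([[0., 0.], [2., 0.], [10., 0.], [12., 0.]])
labels = np.array([0, 0, 1, 1])
Y = centric_transform(X, labels, cluster=1, lam=0.5)   # cluster 1 contracts about (11, 0)
```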
Note that the set of possible centric Γ-transformations for a given partition is neither a subset nor superset of the set of possible Kleinberg's Γ-transformation in general. However, if we restrict ourselves to the fixed-dimensional space and to the fact that the inner-consistency property means in this space isometry, as shown in Theorems 16 and 17, then obviously the centric consistency property can be considered as much more general, in spite of the fact that it is a k-means clustering model-specific adaptation of the general idea of shrinking the cluster.
Property 29.
A clustering method has the property of centric consistency if after a Γ * transform it returns the same partition.
Theorem 30. The k-means algorithm satisfies the centric consistency property in the following way: if the partition Γ of the set S with distances d is a global minimum of k-means, and k = 2, and the partition Γ has been subject to a centric Γ-transformation yielding distances d', then Γ is also a global minimum of k-means under distances d'.
See proof in paper [21]. For the sake of brevity, let us use the following convention subsequently. By saying that a Γ-transformation produced the clustering Γ' out of the clustering Γ we mean that the original embedding was changed in such a way that any data point x with original coordinates x is now located at x'. If we say that the transformation turned elements of the sets P, Q into P, Q', then we mean that the elements of P did not change coordinates, while the elements of Q changed coordinates under the new embedding.
Theorem 31. Let a partition {T, Z} be an optimal partition under the 2-means algorithm. Let a subset P of T be subjected to a centric transform yielding P' (that is, all points of P were moved toward the gravity center of P by a factor λ), and let T' = (T − P) ∪ P'. Then the partition {T', Z} is an optimal partition of T' ∪ Z under 2-means.
See the proof on page 28. Let us discuss a variant of bisectional k-means by [31]. The idea behind bisectional versions of k-means is that we apply 2-means to the entire data set, and then 2-means is recursively applied to some cluster obtained in the previous step, e.g. the cluster with the largest cardinality. The algorithm is terminated e.g. upon reaching the desired number of clusters, k. Note that in this case the k-means-ideal quality function is not optimized, and the cluster borders do not even constitute a Voronoi diagram. However, we will discuss, for the purposes of this paper, a version of k-means such that the number of clusters is selected by the algorithm itself. We introduce a stopping criterion that a cluster is not partitioned if, for that cluster, the decrease of Q (Eq. (1)) for the bisection relative to the original cluster's Q does not reach some threshold. An additional stopping criterion will be used, namely that the number of clusters cannot exceed k. It is easily seen that such an algorithm is scale-invariant (because the stopping criterion is a relative one) and it is also rich "to a large extent", that is, we exclude only partitions with more than k clusters. If a clustering algorithm can return any clustering except one with the number of clusters over k, we shall call this property k↓-near-richness. We shall call the algorithm the bisectional-auto-k-means algorithm. Theorem 31 implies the following. Theorem 32. The bisectional-auto-k-means algorithm is centric-consistent, (scale-)invariant and k↓-nearly-rich.
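The following Python sketch is our own reading of the bisectional-auto-k-means scheme just described (the use of scikit-learn's KMeans for the 2-means step, the default threshold value, and all names are our assumptions, not prescriptions of the paper); it splits the largest remaining cluster, applies the relative-improvement stopping criterion, and respects the upper bound on the number of clusters.

```python
import numpy as np
from sklearn.cluster import KMeans

def bisectional_auto_kmeans(X, k_max, rel_improvement=0.1):
    """Bisectional k-means with an automatically adjusted number of clusters:
    repeatedly 2-means-split the largest remaining cluster; keep a split only
    if it reduces that cluster's cost Q by at least rel_improvement (relative),
    and never exceed k_max clusters."""
    def cost(P):
        return float(np.sum((P - P.mean(axis=0)) ** 2))

    active = [np.arange(len(X))]   # clusters still considered for splitting
    frozen = []                    # clusters the stopping criterion has closed
    while active and len(active) + len(frozen) < k_max:
        # split candidate: the active cluster with the largest cardinality
        idx = max(range(len(active)), key=lambda i: len(active[i]))
        members = active.pop(idx)
        if len(members) < 2 or cost(X[members]) == 0.0:
            frozen.append(members)
            continue
        km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X[members])
        parts = [members[km.labels_ == c] for c in (0, 1)]
        parts = [p for p in parts if len(p) > 0]
        old_q = cost(X[members])
        new_q = sum(cost(X[p]) for p in parts)
        if len(parts) == 2 and (old_q - new_q) / old_q >= rel_improvement:
            active.extend(parts)   # accept the bisection
        else:
            frozen.append(members) # relative improvement too small: stop here
    labels = np.empty(len(X), dtype=int)
    for lab, members in enumerate(frozen + active):
        labels[members] = lab
    return labels
```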
See proof on page 31. Note that the Kleinberg's impossibility theorem [18] could be worked around so far only if we assumed clustering into exactly k clusters. But one ran into contradictions already when the number of clusters could have a range of more than two values, because the so-called anti-chain property of the set of possible partitions does not hold any more (see [18] for the impact of missing antichain property on the contradictions of Kleinberg axioms). So this theorem shows the superiority of the concept of centric consistency. Though centric consistency relaxes only inner-consistency, nonetheless Kleinberg's axiomatic system with inner-consistency fails also.
Let us now demonstrate theoretically, that k-means algorithm really fits in the limit the centric-consistency axiom.
Theorem 33. k-means algorithm satisfies centric consistency in the following way: if the partition Γ is a local minimum of k-means, and the partition Γ has been subject to centric consistency yielding Γ , then Γ is also a local minimum of k-means.
See proof on page 20. However, it is possible to demonstrate that the newly defined transform preserves also the global optimum of k-means.
Theorem 34. k-means algorithm satisfies centric consistency in the following way: if the partition Γ is a global minimum of k-means, and the partition Γ has been subject to centric consistency yielding Γ , then Γ is also a global minimum of k-means.
See the proof on page 23. Hence it is obvious that Theorem 35. k-means algorithm satisfies Scale-invariance, k-Richness, and centric Consistency.
Note that k-means does not have the property of inner-consistency. This means that the centric consistency truly relaxes inner-consistency.
Under some restrictions, the concept of "centric consistency" may be broadened to non-convex clustering methods. Consider a partition Γ produced by the k-single-link algorithm. Each cluster C can be viewed as a tree T_C if we consider the links in the cluster as graph edges. Define the area A_C of a cluster C as the union of balls centered at each node and with a radius equal to the longest edge incident with the node. A cluster may be deemed link-ball-separated if for each node N ∈ C the distance of N to the area A_C' of any other cluster C' is bigger than any edge incident with N. In such a case, let us move a leaf node L of T_C along the edge connecting it to the rest of T_C by a factor 1 > λ_L > 0. For a non-leaf node M, if a branch starting at M lies entirely within the original A_C, all nodes of the branch can be moved by the same factor λ_M towards M. Both operations will be called a semi-centric transformation.
Theorem 36. (a) If the cluster C is link-ball-separated, then the application of a semi-centric transformation yields a new data set that has the same cluster structure as Γ, and C remains link-ball-separated. (b) If the cluster C_1 is link-ball-separated from other clusters and the cluster C_2 is link-ball-separated from other clusters and the just-mentioned transformation is performed on C_1, then both remain link-ball-separated.
The proof relies on trivial geometric observations. Part (a) is valid because distances will decrease within a cluster and no node from outside will get closer to N than to nodes from its own cluster, though now the tree T'_C of C may connect other nodes than T_C. The cluster area A'_C will be a subarea of A_C, hence C is still link-ball-separated from the other clusters. This transformation may violate Kleinberg's outer as well as inner consistency constraints, but will still yield new data sets with the same clustering properties. Part (b) holds due to the smaller decrease of between-cluster distances compared to the factor λ. Note that such "area" restrictions are not needed in the case of k-means centric consistency.
Motion Consistency
Centric consistency replaces only one side of Kleinberg's consistency, the so-called inner-consistency. But we also need a replacement for the outer-consistency. Let us therefore introduce the motion-consistency (see [22]).
Definition 37. Cluster area is any solid body containing all cluster data points. Gap between two clusters is the minimum distance between the cluster areas, i.e., Euclidean distance between the closest points of both areas.
Definition 38. Given a clustered data set embedded in a fixed dimensional Euclidean space, the motion-transformation is any continuous transformation of the data set in the fixed dimensional space that (1) preserves the cluster areas (the areas may only be subject of isomorphic transformations) and (2) keeps the minimum required gaps between clusters (the minimum gaps being fixed prior to transformation). By continuous we mean: there exists a continuous trajectory for each data point such that the conditions (1) and (2) are kept all the way.
Definition 39. A clustering method has the property of motion-consistency, if it returns the same clustering after motion-transformation.
Note that motion-consistency is a relaxation of outer-consistency because it does not impose restrictions on increasing distances between points in different clusters but rather it requires keeping a distance between cluster gravity centers.
Theorem 40. If random-set k-means has a local minimum in ball form that is such that the clusters are enclosed into equal radius balls centered at the respective cluster centers, and gaps are fixed at zero, then the motion-transform preserves this local minimum.
See proof in paper [22]. However, we are not so interested in keeping the local minimum of k-means, but rather the global one (as required by motion-consistency property definition).
In [22], we have discussed some special cases of motion-consistency. Here, however, we are interested in the general case.
Theorem 41. Let the partition Γ_O be the optimal clustering for k-means. Let R be the radius such that, for each cluster C ∈ Γ_O, the ball centered at the gravity center of C and with radius R contains all data points of C. Let us perform a transformation according to Theorem 21 yielding a clustering Γ so that the distances between the cluster centres are equal to or greater than the distances between them under Γ_O plus 4R. (This transformation is, by the way, motion-consistent.) Then the clustering is motion consistent under any motion transformation keeping the cluster center distances implied by Γ.
See proof on page 33. So we can state that Theorem 42. The axioms of k-richness, scale-invariance, centric-consistency and motion-consistency are not contradictory.
The proof is straightforward: the k-means algorithm has all these properties. Note that Theorem 41 can be extended to a broader range of clustering functions. Let F be the class of distance-based clustering functions embedded into Euclidean space such that the clustering quality does not decrease with the decrease of distances between cluster elements, and changes of distances between elements from distinct clusters do not impact the quality function. Theorem 43. If we replace in Theorem 41 the phrase "k-means" with "a function from the class F", then the Theorem is still valid.
See proof on page 33. But what about richness, scale-invariance, centric-consistency and motion-consistency? As already mentioned, we will not consider the full richness but rather a narrower set of possible clusterings that still does not have the anti-chain property.
Consider the Auto-means algorithm introduced in [21]. It differs from the just-introduced bisectional-auto-k-means in that two parameters p, g are introduced such that, at each step, both created clusters A, B have cardinalities satisfying p ≤ |A|/|B| ≤ 1/p for some parameter 0 < p < 1, with |A| ≥ m + 1 and |B| ≥ m + 1, where m is the dimensionality of the data set, and both A and B can be enclosed in balls centered at their respective gravity centers with a common radius R such that the distance between the gravity centers is not smaller than (2 + g)R, where g > 0 is a relative gap parameter.
Let us introduce the concept of a radius-R-bound motion transform of two clusters A, B as a motion transform preserving the gravity center of the set A ∪ B and keeping all data points within the radius R around the gravity center of A ∪ B. A hierarchical motion transform for auto-means should be understood as follows: If clusters A, B are at the top of the hierarchy, then the radius-R-bound motion transform is performed with R = ∞. If clusters C, D are subclusters of a cluster A from the next higher hierarchy level, where A was included into a ball of radius R', then the radius-R'-bound motion transform is applied to the clusters C, D. If the hierarchical motion transform does not change the clustering, then we speak about hierarchical motion consistency.
Theorem 44. The axioms of k ↓-nearly-richness, scale-invariance, centricconsistency and hierarchical motion-consistency are not contradictory.
The proof is straightforward: the Auto-means algorithm has all these properties. The disadvantage of using k-means clustering is that it produces convex (in particular ball-shaped) clusters. There are, however, efforts to overcome this limitation via constructing clusters with multiple centers by modifying the k-means cost function [27], by combining k-means clustering and agglomerative clustering based on data projections onto lines connecting cluster centers [23], or by combining k-means clustering with the single-link algorithm [11]. Let us follow the spirit of the latter paper and introduce the concave-k-means algorithm in the following way: first perform the k-means clustering; then construct a minimum-weight spanning tree over the cluster centres, where the weight of an edge connecting two clusters is the distance between their centres; then remove the l − 1 longest edges from the tree, producing l clusters. The algorithm shall be called the k-means-l-MST algorithm. As visible in the examples in Section 11, if we perform a centric transformation on the constituent k-means clusters of such a clustering, the clustering is preserved.
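A minimal sketch of the k-means-l-MST construction follows (our own illustrative Python; it assumes scikit-learn's KMeans for the first stage and uses a simple Prim's algorithm over the cluster centres; it is not the authors' reference implementation).

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_l_mst(X, k, l, random_state=0):
    """k-means-l-MST sketch: run k-means, build a minimum spanning tree over
    the k cluster centres (edge weight = distance between centres), remove the
    l-1 longest MST edges, and merge k-means clusters that remain connected."""
    assert 1 <= l <= k
    km = KMeans(n_clusters=k, n_init=10, random_state=random_state).fit(X)
    centers = km.cluster_centers_
    D = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)

    # Prim's algorithm for the MST over the centres.
    in_tree = {0}
    edges = []
    while len(in_tree) < k:
        best = None
        for u in in_tree:
            for v in range(k):
                if v not in in_tree and (best is None or D[u, v] < best[2]):
                    best = (u, v, D[u, v])
        edges.append(best)
        in_tree.add(best[1])

    # Keep the k - l shortest MST edges (i.e. drop the l - 1 longest ones)
    # and read off the connected components with a tiny union-find.
    edges.sort(key=lambda e: e[2])
    parent = list(range(k))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for u, v, _ in edges[: k - l]:
        parent[find(u)] = find(v)
    relabel = {root: i for i, root in enumerate(sorted({find(j) for j in range(k)}))}
    return np.array([relabel[find(j)] for j in km.labels_])
```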
Theorem 45. If the areas of clusters can be moved in such a way that the distances between the closest points of these clusters are not lower than some quantity m_d, then the data can be clustered using the k-means-l-MST algorithm for k bigger than some k_d in such a way that, following some centric Γ-transform on the k-means clusters, a motion-Γ-transformation can be applied preserving the clustering (motion consistency property).
See the proof on page 33. In this way
• We have shown that the concept of consistency introduced by Kleinberg is not suitable for clustering algorithms operating in continuous space as a clustering preserving Γ-transform reduces generally to identity transform.
• Therefore we proposed, for use with k-means, property of centric consistency as a replacement for inner-consistency and the property of motion consistency as a replacement of outer-consistency, that are free from the shortcomings of Kleinberg's Γ-transform.
• The newly proposed transforms can be used as a method for generating new labeled data for testing of k-means like clustering algorithms.
• In case of continuous axiomatisation, one can overcome the Kleinberg's impossibility result on clustering by slightly strengthening his richness axiom (k ↓-nearly-richness which is much weaker than k-richness because it allows for non-anti-chain partition sets) and by relaxing axioms of innerconsistency to centric-consistency and outer-consistency to hierarchicalmotion-consistency and keeping the scale-invariance.
Problems with Inner-Consistency Property -Proofs
Proof. of Theorem 16. Let E 1 and E 2 be two embeddings such that the second is an inner-Γ-transform of the first one. In R m , the position of a point is uniquely defined by distances from m + 1 distinct non-cohyperplanar points with fixed positions. Assume that the inner-Γ-transform moves closer points in cluster C 0 , when switching from embedding E 1 to E 2 . So pick m + 1 point embeddings p 1 , . . . , p m+1 from any m + 1 other different clusters C 1 , . . . , C m+1 . The distances between these m + 1 points are fixed. So let their positions, without any decrease in generality, be the same under both E 1 , E 2 . Now the distances of any point p Z in the cluster C 0 to any of these selected points cannot be changed under the transform. Hence the positions of points of (the embedding of) the first cluster are fixed, no non-trivial inner-Γ-transformation applies.
Proof of Theorem 17. The m + 1 data points from different clusters have to be rigid under an inner-Γ-transformation. Any point from outside of this set is uniquely determined in space by its distances to these points. So for any two points from clusters not belonging to the selected m + 1 clusters, no distance change is possible. So assume that these two clusters, containing at least two points each, are among the m + 1 selected ones. If a point different from the selected points were to become closer to the selected point of its own cluster after the inner-Γ-transformation, then it would have to lie on the other side of the hyperplane formed by the remaining m selected points than the (m + 1)-st selected point before the transformation, and on the same side after the transformation. But such an effect is possible for one hyperplane only, and not for two, because the distances to the other selected points would be violated.
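The geometric fact used in both proofs, namely that the distances to m + 1 affinely independent anchor points pin down a point in R^m, can be checked numerically. A minimal Python sketch follows; the anchors and the target point are arbitrary toy values of ours.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 3
anchors = rng.normal(size=(m + 1, m))      # m+1 points in R^m, affinely independent a.s.
p = rng.normal(size=m)                     # the point to be recovered
d = np.linalg.norm(anchors - p, axis=1)    # its distances to the anchors

# Subtracting the squared-distance equation for anchor 0 from the others
# yields a linear system  2 (a_i - a_0) . x = ||a_i||^2 - ||a_0||^2 - d_i^2 + d_0^2.
A = 2 * (anchors[1:] - anchors[0])
b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2)
     - d[1:] ** 2 + d[0] ** 2)
x = np.linalg.solve(A, b)

assert np.allclose(x, p)                   # the distances determine the point uniquely
```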
Problems with Continuous Outer Consistency Property -Proofs
Proof of Theorem 18 (2D case). Consider two clusters, calling them a green one and a blue one. By definition, one of them must have points outside of the convex hull of a part of the other. For an illustrative example see Figure 1.
There exists a line segment connecting two green points B, C that intersects the line segment connecting blue points X, Y, not at the endpoints of any of these segments. The existence is granted, e.g., by choosing one green point inside of the blue convex hull and the other outside. For any point A, let \vec{A} denote the vector from the coordinate system origin to the point A. Let the positions of these points after the Γ-transformation be B', C', X', Y' respectively, and let the interiors of the respective line segments B'C', X'Y' intersect at the point M', with \vec{M'} = (1 − ω)\vec{X'} + ω\vec{Y'} for some 0 < ω < 1. (With a continuous transformation this is always possible.) For this ω define the point M such that \vec{M} = (1 − ω)\vec{X} + ω\vec{Y}. Obviously it will belong to the interior of the line segment XY. Elementary geometry tells us that

cos(∠YXB) = \frac{|YX|^2 + |XB|^2 − |YB|^2}{2|YX||XB|}

and at the same time

|MB|^2 = |XM|^2 + |XB|^2 − 2 cos(∠YXB) |XM||XB|

because ∠YXB = ∠MXB. Hence, using |XM| = ω|XY|,

|MB|^2 = |XM|^2 + |XB|^2 − 2|XM||XB| \frac{|YX|^2 + |XB|^2 − |YB|^2}{2|YX||XB|}
= ω^2|XY|^2 + |XB|^2 − 2ω|XY||XB| \frac{|YX|^2 + |XB|^2 − |YB|^2}{2|YX||XB|}
= (1 − ω)|XB|^2 + ω|YB|^2 − (1 − ω)ω|XY|^2
This result makes it clear that under Kleinberg's Γ-transformation |MB| will increase or stay the same, as |XB| and |YB| increase or remain the same while |XY| decreases or is unchanged. By the definition of Kleinberg's consistency: |BC| ≥ |B'C'|, |XY| ≥ |X'Y'|, |BX| ≤ |B'X'|, |CX| ≤ |C'X'|, |BY| ≤ |B'Y'|, |CY| ≤ |C'Y'|. Now recall that |BC| ≤ |BM| + |MC| and |B'C'| = |B'M'| + |M'C'|. We have already shown that if |XY| > |X'Y'| or |BX| < |B'X'| or |BY| < |B'Y'|, then |BM| < |B'M'|, and that if |XY| > |X'Y'| or |CY| < |C'Y'| or |CX| < |C'X'|, then |MC| < |M'C'|; in either case |BC| ≤ |BM| + |MC| < |B'M'| + |M'C'| = |B'C'|.
This would be, however, a contradiction, which implies that |BC| = |B'C'|,
|XY| = |X'Y'|, |BX| = |B'X'|, |CX| = |C'X'|, |BY| = |B'Y'|, |CY| = |C'Y'|.
Now for any green datapoint A such that there exists a blue point Z (not necessarily different from Y ) and the line segment XZ intersects in the interior with BA, we can repeat the reasoning and state that the distances to B, C, X, Y will remain unchanged. Obviously, for all points A from outside of the blue convex hull it would be the case (a blue line segment must separate B from any green point outside of the convex hull). Any point inside it will either lie on the other side of the straight line XY than B or C. If the respective segments intersect, we are done. If not, the green line segment passes through blue area either "behind" X or Y . So there exists still another blue line segment intersecting with it. So the green data set is rigid. By the same argument the blue data set is also rigid -and the distances between them are also rigid, so that all data points are rigid under Kleinberg's Γ-transformation.
This may be easily generalized to n-dimensional space.
Proof of Theorem 20. It is necessary that the relative motion direction is orthogonal to a face separating the two clusters (see Section 7). If no such face exists, then there exists a face orthogonal to the relative speed vector such that members of both clusters are on each side of this face. Hence the motion causes a decrease of distance between some elements of different clusters. Consider now a loop i_1, i_2, ..., i_n of cluster indices, that is, an index sequence such that i_1 = i_n. We want the continuous outer-Γ-transform to be applicable to them. So we need to determine the points S_{i_1}, S_{i_2}, S_{i_3} following the abovementioned principle. Then, when dealing with clusters C_{i_k}, C_{i_{k+1}}, C_{i_{k+2}}, the points S_{i_k}, S_{i_{k+1}} are predefined, and the point S_{i_{k+2}} will be automatically determined, if it exists. Since i_1 = i_n, the relative speed of C_{i_{n−1}} with respect to C_{i_1}, determined as v_{i_1,i_n} = v_{i_1,i_2} + v_{i_2,i_3} + ... + v_{i_{n−1},i_n}, may turn out not to be zero, which is a contradiction (a cluster has zero relative speed with respect to itself). The proof follows directly from this problem of a non-zero relative speed of the first cluster with respect to itself.
For an example in 2D, look at Figure 1 to the right for an explanation of why each cluster has to have its own speed. Assume that we want to move only one cluster, namely FACH, with speed orthogonal to the edge AC. Imagine a point X in the cluster FABG close to F and a point Y in the cluster FACH close to A. When moving the cluster FACH away from ABC, all the points of FACH will increase their distances to all points of ABC, but the points X and Y will get closer, because the relative speed of FACH with respect to FABG is not orthogonal to the edge FA. So, in a plane, the differences between the speed vectors of clusters sharing an edge must be orthogonal to that edge (or approximately orthogonal, depending on the size of the gap). Figure 1 to the right illustrates the problem. In this figure, the direction angles of the lines AF, BG, CH were deliberately chosen in such a way that this is impossible (for the sine values of the respective angles see below). The choice was as follows. Assume the speed of cluster ACHF with respect to cluster ABC is fixed to 1. Then the speed of ABGF has to be 0.26794, because sin(F,A,C) = 0.20048 and sin(F,A,B) = 0.74820. Therefore the speed of BCHG has to be 0.26794, because sin(G,B,A) = 0.5 and sin(G,B,C) = 0.5. Therefore the speed of ACHF has to be 0.15311, because sin(H,C,B) = 0.35921 and sin(H,C,A) = 0.62861. But this is a contradiction, because the speed of ACHF was assumed to be 1, which differs from 0.15311.
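The chain of speed constraints around this loop can be verified with elementary arithmetic; the tiny Python sketch below uses the sine values quoted above (the cluster names in the variable names are those of the figure).

```python
# Each step around the loop multiplies the admissible speed by a ratio of sines.
speed_ACHF = 1.0
speed_ABGF = speed_ACHF * 0.20048 / 0.74820        # sin(F,A,C) / sin(F,A,B) ~ 0.268
speed_BCHG = speed_ABGF * 0.5 / 0.5                # sin(G,B,A) / sin(G,B,C)
speed_ACHF_back = speed_BCHG * 0.35921 / 0.62861   # sin(H,C,B) / sin(H,C,A) ~ 0.153

# The loop does not close (0.153 != 1), which is exactly the contradiction.
assert abs(speed_ACHF_back - speed_ACHF) > 0.5
```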
Experimental Illustration of the Outer Consistency Impossibility (Theorem 20)
In order to illustrate the need to move clusters perpendicularly to the cluster border between clusters in the case of a continuous outer-Γ-transformation (compare Theorem 20), we generated randomly, uniformly on a circle, small clusters of about 100 data points each in 2D. In the first experiment 2 clusters and in the second 5 clusters were considered, as shown in Figures 2 and 3, left. In the first experiment one of the clusters was moved in different directions from its original position. The more the motion deviated from the perpendicular direction, the more violations of the Γ-transform requirements were observed, as visible in Table 1. In the second experiment one of the clusters was moved orthogonally to one of its borders, but necessarily not orthogonally to the other borders. The number of distance violations was the largest for the smallest motions, as visible in Table 2. As soon as the data points of the cluster got out of the convex hull of the other clusters, the number of violations dropped to zero.
The Concept of Gravitational Consistency Property -Proofs
The problem with k-means in Kleinberg's counter-example on consistency stems from the fact that k-means is a centric algorithm, and a decrease in distances between data points may cause the distance to the cluster center to increase. It is easily shown that this is not a problem between clusters: upon a consistency transformation, cluster centers become more distant. The problem is within a cluster. Therefore we propose to add an additional constraint (the gravity center constraint) on consistency, namely that it is forbidden to increase the distance not only between data points but also between the gravity centers of any disjoint subsets of a cluster.
Proof of Theorem 23. The centric sum of squares CSS(C) of the data set C may be expressed in several ways:

CSS(C) = \sum_{x_i \in C} \|x_i − \mu(C)\|^2 = \frac{1}{2|C|} \sum_{x_i \in C} \sum_{x_j \in C} \|x_i − x_j\|^2 = \frac{1}{|C|} \sum_{\{x_i,x_j\} \subset C,\ x_i \ne x_j} \|x_i − x_j\|^2

where \mu(C) = \frac{1}{|C|} \sum_{x_i \in C} x_i. Obviously, for a partition \mathcal{C}, \sum_{C \in \mathcal{C}} CSS(C) is the k-means criterion function. There exists, however, a different way to express it. Let M(C) be an operator replacing each data point in C with \mu(C) (forming a multiset). Let \mathcal{C}(C) be a partition of C (into disjoint sets). Then

CSS(C) = \sum_{C_i \in \mathcal{C}(C)} CSS(C_i) + CSS(\cup_{C_j \in \mathcal{C}(C)} M(C_j)).

Note that the latter summand is just the weighted sum of distances between the centers of the clusters from \mathcal{C}(C). Assume now that we have a partition \mathcal{C}_0, being the optimal one under k-means, and a competing (but not optimal) one, \mathcal{C}_1. Construct a new partition (possibly into many more than k clusters) \mathcal{C}_{10} = \{C : C = C_i \cap C_j, C_i \in \mathcal{C}_0, C_j \in \mathcal{C}_1\}. The cost function of both \mathcal{C}_0 and \mathcal{C}_1 can be expressed as the sum of the CSS of all elements of \mathcal{C}_{10} plus the connections between gravity centers of \mathcal{C}_{10} elements lying inside the same \mathcal{C}_0 cluster or in different ones. By virtue of the gravity center constraint, the within-cluster connections will not increase under the consistency operation, and due to the properties of the third form of CSS, the distances between clusters of \mathcal{C}_{10} contained in different clusters of \mathcal{C}_0 will increase. Just look at the difference CSS(A ∪ B) − CSS(A) − CSS(B): this difference, computed as a sum of distances between data points, will increase, so the distance between the gravity centers will increase too, by the equivalence of the first two ways of computing CSS.
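The equivalence of the two first forms of CSS(C) used above can be checked numerically; a short Python sketch on toy data (the variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(0)
C = rng.normal(size=(50, 3))          # a toy cluster of 50 points in R^3
mu = C.mean(axis=0)

css_centre = np.sum((C - mu) ** 2)    # sum of squared distances to the centre

# 1/(2|C|) times the sum of squared pairwise distances over all ordered pairs
pairwise_sq = np.sum((C[:, None, :] - C[None, :, :]) ** 2, axis=-1)
css_pairs = pairwise_sq.sum() / (2 * len(C))

assert np.isclose(css_centre, css_pairs)
```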
Proof of Theorem 24. The proof follows from elementary geometry. When homothetic transformations are applied to various clusters, the distances between them need to be increased by the biggest distance decrease in order for the transformation to be a Γ-transformation in the spirit of Kleinberg.

Figure 4: Getting some data points closer to the cluster center does not ensure cluster stability for k-means, because the cluster center can move. Left picture: data partition before moving data closer to the cluster center. Right picture: data partition thereafter.
9 k-means and the centric-consistency axiom
The first differentiating feature of the centric consistency is that no new structures are introduced in the cluster at any scale. The second important feature is that the requirement of keeping the minimum distance to elements of other clusters is dropped and only cluster centers do not get closer to one another. Note also that the centric consistency does not suffer from the impossibility of transformation for clusters that turn out to be internal.
Proof of Theorem 33. Let V(C_j) be the sum of squares of distances of all objects of the cluster C_j from its gravity center, and let Q be the quality function from equation (1). Hence Q(Γ) = \sum_{j=1}^{k} \frac{1}{n_j} V(C_j).
Consider moving a data point x^* from the cluster C_{j_0} to the cluster C_{j_l}. As demonstrated by [13],

V(C_{j_0} \setminus \{x^*\}) = V(C_{j_0}) − \frac{n_{j_0}}{n_{j_0} − 1} \|x^* − \mu_{j_0}\|^2   and   V(C_{j_l} ∪ \{x^*\}) = V(C_{j_l}) + \frac{n_{j_l}}{n_{j_l} + 1} \|x^* − \mu_{j_l}\|^2.

So it pays off to move a point from one cluster to another if \frac{n_{j_0}}{n_{j_0} − 1} \|x^* − \mu_{j_0}\|^2 > \frac{n_{j_l}}{n_{j_l} + 1} \|x^* − \mu_{j_l}\|^2.
If we assume local optimality of Γ, this obviously did not pay off. Now transform this data set to X' by transforming the elements of the cluster C_{j_0} in such a way that it now consists of the elements

x'_i = \mu_{j_0} + \lambda (x_i − \mu_{j_0})

for some 0 < λ < 1, see Figure 8. Consider the partition Γ' of X'. All clusters are the same as in Γ except for the transformed elements, which now form a cluster C'_{j_0}. The question is: does it pay off to move a data point x'^* ∈ C'_{j_0} between the clusters? Consider the plane containing x^*, \mu_{j_0}, \mu_{j_l}.

Figure 5: Getting all data points closer to the cluster center without changing the position of the cluster center, if we do not ensure that they move along the line connecting each point with the center. Left picture: data partition before moving data closer to the cluster center. Right picture: data partition thereafter.

Project orthogonally the point x^* onto the line \mu_{j_0}, \mu_{j_l}, giving a point p. Either p lies between \mu_{j_0} and \mu_{j_l}, or \mu_{j_0} lies between p and \mu_{j_l}. The properties of k-means exclude other possibilities. Denote the distances y = \|x^* − p\|, x = \|\mu_{j_0} − p\|, d = \|\mu_{j_0} − \mu_{j_l}\|. In the second case the condition that moving the point does not pay off means:
\frac{n_{j_0}}{n_{j_0} − 1} (x^2 + y^2) \le \frac{n_{j_l}}{n_{j_l} + 1} \left((d + x)^2 + y^2\right)
If we multiply both sides by λ^2, we have:
\lambda^2 \frac{n_{j_0}}{n_{j_0} − 1} (x^2 + y^2) = \frac{n_{j_0}}{n_{j_0} − 1} \left((\lambda x)^2 + (\lambda y)^2\right) \le \frac{n_{j_l}}{n_{j_l} + 1} \left((d + \lambda x)^2 + (\lambda y)^2\right)    (2)
which means that it does not pay off to move the point x'^* between the clusters either. Consider now the first case and assume that it does pay off to move x'^*. So we would have
\frac{n_{j_0}}{n_{j_0} − 1} (x^2 + y^2) \le \frac{n_{j_l}}{n_{j_l} + 1} \left((d − x)^2 + y^2\right)
and at the same time
\frac{n_{j_0}}{n_{j_0} − 1} \lambda^2 (x^2 + y^2) > \frac{n_{j_l}}{n_{j_l} + 1} \left((d − \lambda x)^2 + \lambda^2 y^2\right)
Figure 8: Impact of the contraction of the data point x^* towards the cluster center \mu_{j_0} by a factor λ to the new location x'^*: the local optimum is maintained. The left image illustrates the situation when the point x^* is closer to the cluster center \mu_{j_0}. The right image refers to the inverse situation. The point p is the orthogonal projection of the point x^* onto the line \mu_{j_0}, \mu_{j_l}.

Subtracting the second inequality from the first, side by side, we obtain

\frac{n_{j_0}}{n_{j_0} − 1} (x^2 + y^2) − \frac{n_{j_0}}{n_{j_0} − 1} \lambda^2 (x^2 + y^2) < \frac{n_{j_l}}{n_{j_l} + 1} \left((d − x)^2 + y^2\right) − \frac{n_{j_l}}{n_{j_l} + 1} \left((d − \lambda x)^2 + \lambda^2 y^2\right)
This implies
\frac{n_{j_0}}{n_{j_0} − 1} (1 − \lambda^2)(x^2 + y^2) < \frac{n_{j_l}}{n_{j_l} + 1} \left((1 − \lambda^2)(x^2 + y^2) − 2d(1 − \lambda)x\right)
It is a contradiction because
\frac{n_{j_0}}{n_{j_0} − 1} (1 − \lambda^2)(x^2 + y^2) > \frac{n_{j_l}}{n_{j_l} + 1} (1 − \lambda^2)(x^2 + y^2) > \frac{n_{j_l}}{n_{j_l} + 1} \left((1 − \lambda^2)(x^2 + y^2) − 2d(1 − \lambda)x\right)
So it does not pay off to move x'^*; hence the partition Γ' remains locally optimal for the transformed data set (see footnote 4).
If the data have only one stable optimum, as in the case of k well-separated, normally distributed real clusters, then both local optima are in fact global optima.
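The removal/addition formulas for V used in the proof above can be verified numerically; below is a small Python sketch (the cluster sizes and the moved point are arbitrary toy data, and the helper names are ours).

```python
import numpy as np

def V(C):
    """Sum of squared distances of the points in C to their gravity centre."""
    return np.sum((C - C.mean(axis=0)) ** 2)

rng = np.random.default_rng(1)
C0 = rng.normal(loc=0.0, size=(20, 2))     # cluster the point is removed from
Cl = rng.normal(loc=5.0, size=(30, 2))     # cluster the point is added to
x = C0[0]

n0, nl = len(C0), len(Cl)
mu0, mul = C0.mean(axis=0), Cl.mean(axis=0)

# V(C0 \ {x}) = V(C0) - n0/(n0-1) * ||x - mu0||^2
assert np.isclose(V(np.delete(C0, 0, axis=0)),
                  V(C0) - n0 / (n0 - 1) * np.sum((x - mu0) ** 2))

# V(Cl + {x}) = V(Cl) + nl/(nl+1) * ||x - mul||^2
assert np.isclose(V(np.vstack([Cl, x])),
                  V(Cl) + nl / (nl + 1) * np.sum((x - mul) ** 2))

# Moving x pays off exactly when the removal gain exceeds the addition cost.
pays_off = (n0 / (n0 - 1) * np.sum((x - mu0) ** 2)
            > nl / (nl + 1) * np.sum((x - mul) ** 2))
print("moving the point pays off:", pays_off)
```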
Proof of Theorem 34. Let us consider first the simple case of two clusters only (2-means). Let the optimal clustering for a given set of objects X consist of two clusters, T and Z. The subset T shall have its gravity center at the origin of the coordinate system. The quality of this partition is Q({T, Z}) = n_T Var(T) + n_Z Var(Z), where n_T, n_Z denote the cardinalities of T, Z and Var(T), Var(Z) their variances (averaged squared distances to the gravity center). We will prove by contradiction that by applying our Γ* transform we get a partition that is still optimal for the transformed data points. Assume the contrary, that is, that we can transform the set T by some 1 > λ > 0 into T' in such a way that the optimum of 2-means clustering is not the partition {T', Z} but another one, say {A' ∪ D, B' ∪ C}, where Z = C ∪ D and A', B' are the transforms of sets A, B, for which in turn A ∪ B = T. It may be easily verified that
Q({A ∪ B, C ∪ D}) = n_A Var(A) + n_A v_A^2 + n_B Var(B) + n_B v_B^2 + n_C Var(C) + n_D Var(D) + \frac{n_C n_D}{n_C + n_D}(v_C − v_D)^2

while

Q({A ∪ D, B ∪ C}) = n_A Var(A) + n_D Var(D) + \frac{n_A n_D}{n_A + n_D}(v_A − v_D)^2 + n_B Var(B) + n_C Var(C) + \frac{n_B n_C}{n_B + n_C}(v_B − v_C)^2

and

Q({A' ∪ B', C ∪ D}) = n_A \lambda^2 Var(A) + n_A \lambda^2 v_A^2 + n_B \lambda^2 Var(B) + n_B \lambda^2 v_B^2 + n_C Var(C) + n_D Var(D) + \frac{n_C n_D}{n_C + n_D}(v_C − v_D)^2

while

Q({A' ∪ D, B' ∪ C}) = n_A \lambda^2 Var(A) + n_D Var(D) + \frac{n_A n_D}{n_A + n_D}(\lambda v_A − v_D)^2 + n_B \lambda^2 Var(B) + n_C Var(C) + \frac{n_B n_C}{n_B + n_C}(\lambda v_B − v_C)^2

where v_A, v_B, v_C, v_D denote the gravity centers of A, B, C, D. The following must hold:

Q({A' ∪ B', C ∪ D}) > Q({A' ∪ D, B' ∪ C})    (3)

and

Q({A ∪ B, C ∪ D}) < Q({A ∪ D, B ∪ C})    (4)

Additionally also

Q({A ∪ B, C ∪ D}) < Q({A ∪ B ∪ C, D})    (5)

and

Q({A ∪ B, C ∪ D}) < Q({A ∪ B ∪ D, C})    (6)

These two latter inequalities imply:

\frac{n_C n_D}{n_C + n_D}(v_C − v_D)^2 < \frac{(n_A + n_B) n_C}{(n_A + n_B) + n_C} v_C^2   and   \frac{n_C n_D}{n_C + n_D}(v_C − v_D)^2 < \frac{(n_A + n_B) n_D}{(n_A + n_B) + n_D} v_D^2
Consider now an extreme contraction (λ = 0) yielding sets A'', B'' out of A, B.
Then we have
Q({A" ∪ B", C ∪ D}) − Q({A" ∪ C, B" ∪ D}) = n C n D n C + n D (v C − v D ) 2 − n A n D n A + n D v 2 D − n B n C n B + n C v 2 C = n C n D n C + n D (v C − v D ) 2 − n A n D n A + n D (n A + n B ) + n D (n A + n B )n D (n A + n B )n D (n A + n B ) + n D v 2 D − n B n C n B + n C (n A + n B ) + n C (n A + n B )n C (n A + n B )n C (n A + n B ) + n C v 2 C = n C n D n C + n D (v C − v D ) 2 − n A n A + n D (n A + n B ) + n D (n A + n B ) (n A + n B )n D (n A + n B ) + n D v 2 D − n B n B + n C (n A + n B ) + n C (n A + n B ) (n A + n B )n C (n A + n B ) + n C v 2 C = n C n D n C + n D (v C − v D ) 2 − n A n A + n B (1 + n B n A + n D ) (n A + n B )n D (n A + n B ) + n D v 2 D − n B n A + n B (1 + n A n B + n C ) (n A + n B )n C (n A + n B ) + n C v 2 C < n C n D n C + n D (v C − v D ) 2 − n A n A + n B (n A + n B )n D (n A + n B ) + n D v 2 D − n B n A + n B (n A + n B )n C (n A + n B ) + n C v 2 C < 0
because the linear combination of two numbers that are bigger than a third yields another number bigger than this. Let us define a function
h(x) = n_A x^2 v_A^2 + n_B x^2 v_B^2 + \frac{n_C n_D}{n_C + n_D}(v_C − v_D)^2 − \frac{n_A n_D}{n_A + n_D}(x v_A − v_D)^2 − \frac{n_B n_C}{n_B + n_C}(x v_B − v_C)^2
It can be easily verified that h(x) is a quadratic polynomial with a positive coefficient at
x^2. Furthermore, h(1) = Q({A ∪ B, C ∪ D}) − Q({A ∪ D, B ∪ C}) < 0, h(λ) = Q({A' ∪ B', C ∪ D}) − Q({A' ∪ D, B' ∪ C}) > 0, and h(0) = Q({A'' ∪ B'', C ∪ D}) − Q({A'' ∪ D, B'' ∪ C}) < 0.
But no quadratic polynomial with a positive coefficient at x^2 can be negative at the ends of an interval and positive in the middle. So we have the contradiction. This proves the thesis that the (globally) optimal 2-means clustering remains (globally) optimal after the transformation. Let us turn to the general case of k-means. Let the optimal clustering for a given set of objects X consist of k clusters: T and Z_1, ..., Z_{k−1}. The subset T shall have its gravity center at the origin of the coordinate system. The quality of this partition is Q({T, Z_1, ..., Z_{k−1}}) = n_T Var(T) + \sum_{i=1}^{k−1} n_{Z_i} Var(Z_i), where n_{Z_i} is the cardinality of the cluster Z_i. We will prove by contradiction that by applying our Γ* transform we get a partition that is still optimal for the transformed data points. Assume the contrary, that is, that we can transform the set T by some 1 > λ > 0 into T' in such a way that the optimum of k-means clustering is not the partition {T', Z_1, ..., Z_{k−1}} but another one, say
{T'_1 ∪ Z_{1,1} ∪ ··· ∪ Z_{k−1,1}, T'_2 ∪ Z_{1,2} ∪ ··· ∪ Z_{k−1,2}, ..., T'_k ∪ Z_{1,k} ∪ ··· ∪ Z_{k−1,k}}, where Z_i = ∪_{j=1}^{k} Z_{i,j} (with the Z_{i,j} pairwise disjoint) and T'_1, ..., T'_k are the transforms of disjoint sets T_1, ..., T_k for which in turn ∪_{j=1}^{k} T_j = T. It may be easily verified that
Q({T, Z_1, ..., Z_{k−1}}) = \sum_{j=1}^{k} n_{T_j} Var(T_j) + \sum_{j=1}^{k} n_{T_j} v_{T_j}^2 + \sum_{i=1}^{k−1} n_{Z_i} Var(Z_i)

while (denoting Z_{*,j} = ∪_{i=1}^{k−1} Z_{i,j})

Q({T_1 ∪ Z_{*,1}, ..., T_k ∪ Z_{*,k}}) = \sum_{j=1}^{k} \left( n_{T_j} Var(T_j) + n_{Z_{*,j}} Var(Z_{*,j}) + \frac{n_{T_j} n_{Z_{*,j}}}{n_{T_j} + n_{Z_{*,j}}} (v_{T_j} − v_{Z_{*,j}})^2 \right)

whereas

Q({T', Z_1, ..., Z_{k−1}}) = \sum_{j=1}^{k} n_{T_j} \lambda^2 Var(T_j) + \sum_{j=1}^{k} n_{T_j} \lambda^2 v_{T_j}^2 + \sum_{i=1}^{k−1} n_{Z_i} Var(Z_i)

while

Q({T'_1 ∪ Z_{*,1}, ..., T'_k ∪ Z_{*,k}}) = \sum_{j=1}^{k} \left( n_{T_j} \lambda^2 Var(T_j) + n_{Z_{*,j}} Var(Z_{*,j}) + \frac{n_{T_j} n_{Z_{*,j}}}{n_{T_j} + n_{Z_{*,j}}} (\lambda v_{T_j} − v_{Z_{*,j}})^2 \right)
The following must hold:
Q({T', Z_1, ..., Z_{k−1}}) > Q({T'_1 ∪ Z_{*,1}, ..., T'_k ∪ Z_{*,k}})    (7)

and

Q({T, Z_1, ..., Z_{k−1}}) < Q({T_1 ∪ Z_{*,1}, ..., T_k ∪ Z_{*,k}})    (8)

Additionally also

Q({T, Z_1, ..., Z_{k−1}}) < Q({T ∪ Z_{*,1}, Z_{*,2}, ..., Z_{*,k}})    (9)

and

Q({T, Z_1, ..., Z_{k−1}}) < Q({T ∪ Z_{*,2}, Z_{*,1}, Z_{*,3}, ..., Z_{*,k}})    (10)

and ... and

Q({T, Z_1, ..., Z_{k−1}}) < Q({T ∪ Z_{*,k}, Z_{*,1}, ..., Z_{*,k−1}})    (11)
These latter k inequalities imply that for l = 1, . . . , k:
Q({T, Z_1, ..., Z_{k−1}}) = n_T Var(T) + \sum_{i=1}^{k−1} n_{Z_i} Var(Z_i)
< Q({T ∪ Z_{*,l}, Z_{*,1}, ..., Z_{*,l−1}, Z_{*,l+1}, ..., Z_{*,k}})
= n_T Var(T) + \sum_{j=1}^{k} n_{Z_{*,j}} Var(Z_{*,j}) + \frac{n_T n_{Z_{*,l}}}{n_T + n_{Z_{*,l}}} (v_T − v_{Z_{*,l}})^2

and hence (recalling that v_T = 0)

\sum_{i=1}^{k−1} n_{Z_i} Var(Z_i) − \sum_{j=1}^{k} n_{Z_{*,j}} Var(Z_{*,j}) < \frac{n_T n_{Z_{*,l}}}{n_T + n_{Z_{*,l}}} (v_{Z_{*,l}})^2
Consider now an extreme contraction (λ = 0) yielding sets T''_j out of T_j. Then we have
Q({T ", Z 1 , . . . , Z k−1 }) − Q({T " 1 ∪ Z * ,1 , . . . , T " k ∪ Z * ,k }) = k−1 i=1 n Zi V ar(Z i ) − k j=1 n Z * ,j V ar(Z * ,j ) + n Tj n Z * ,j n Tj + n Z * ,j (v Z * ,j ) 2 = k−1 i=1 n Zi V ar(Z i ) − k j=1 n Z * ,j V ar(Z * ,j ) − k j=1
n Tj n Z * ,j n Tj + n Z * ,j n T + n Z * ,j n T n Z * ,j
n T n Z * ,j n T + n Z * ,j (v Z * ,j ) 2 = k−1 i=1 n Zi V ar(Z i ) − k j=1 n Z * ,j V ar(Z * ,j ) − k j=1
n Tj n Tj + n Z * ,j
n T + n Z * ,j n T n T n Z * ,j n T + n Z * ,j (v Z * ,j ) 2 ≤ k−1 i=1 n Zi V ar(Z i ) − k j=1 n Z * ,j V ar(Z * ,j ) − k j=1
n Tj n T n T n Z * ,j n T + n Z * ,j (v Z * ,j ) 2 < 0 because the linear combination of numbers that are bigger than a third yields another number bigger than this. Let us define a function
g(x) = \sum_{j=1}^{k} n_{T_j} x^2 v_{T_j}^2 + \sum_{i=1}^{k−1} n_{Z_i} Var(Z_i) − \sum_{j=1}^{k} \left( n_{Z_{*,j}} Var(Z_{*,j}) + \frac{n_{T_j} n_{Z_{*,j}}}{n_{T_j} + n_{Z_{*,j}}} (x v_{T_j} − v_{Z_{*,j}})^2 \right)
It can be easily verified that g(x) is a quadratic polynomial with a positive coefficient at
x^2. Furthermore, g(1) = Q({T, Z_1, ..., Z_{k−1}}) − Q({T_1 ∪ Z_{*,1}, ..., T_k ∪ Z_{*,k}}) < 0, g(λ) = Q({T', Z_1, ..., Z_{k−1}}) − Q({T'_1 ∪ Z_{*,1}, ..., T'_k ∪ Z_{*,k}}) > 0, and g(0) = Q({T'', Z_1, ..., Z_{k−1}}) − Q({T''_1 ∪ Z_{*,1}, ..., T''_k ∪ Z_{*,k}}) < 0.
But no quadratic polynomial with a positive coefficient at x 2 can be negative at the ends of an interval and positive in the middle. So we have the contradiction. This proves the thesis that the (globally) optimal k-means clustering remains (globally) optimal after transformation.
So, summarizing, the new Γ transformation preserves local and global optima of k-means for a fixed k. Therefore the k-means algorithm is consistent under this transformation.
Note that (Γ*-based) centric-consistency is not a specialization of Kleinberg's consistency, as an increase of the distances between all elements of different clusters is not required in Γ*-based consistency.
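A minimal empirical sanity check of the statement of Theorem 34 can be written in a few lines of Python with scikit-learn (the dataset, λ and seeds below are arbitrary choices of ours, not taken from the paper): contract one cluster towards its gravity centre and verify that re-running k-means recovers the same partition.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(2)
# Three well-separated Gaussian blobs.
X = np.vstack([rng.normal(c, 0.5, size=(100, 2)) for c in ((0, 0), (8, 0), (0, 8))])

labels = KMeans(n_clusters=3, n_init=20, random_state=0).fit_predict(X)

# Centric Gamma-transform: contract cluster 0 towards its gravity centre.
lam = 0.4
X2 = X.copy()
mask = labels == 0
mu = X[mask].mean(axis=0)
X2[mask] = mu + lam * (X2[mask] - mu)

labels2 = KMeans(n_clusters=3, n_init=20, random_state=0).fit_predict(X2)
print("partition preserved:", adjusted_rand_score(labels, labels2) > 0.999)
```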
Bisectional-Auto-X-Means
Proof of Theorem 31 (Outline). Let the optimal clustering for a given set of objects X consist of two clusters, T and Z. Let T consist of two disjoint subsets P, Y, T = P ∪ Y, and let us ask the question whether or not a centric transformation of the set P will affect the optimality of the clustering. Let T(λ) = P(λ) ∪ Y, with P(λ) being the image of P under the centric transformation. The cluster centre of T(λ) will be the same as that of T. We ask if {T(λ), Z} is the globally optimal clustering of T(λ) ∪ Z. Assume the contrary, that is, that there exists a clustering into sets K'(λ) = A'(λ) ∪ B ∪ C, L'(λ) = D'(λ) ∪ E ∪ F, where P = A ∪ D, A'(λ) are the points obtained from A when P is subjected to the centric transformation, and D'(λ) is defined analogously (hence P(λ) = A'(λ) ∪ D'(λ), Y = B ∪ E, Z = C ∪ F), that, for some λ = λ* ∈ (0, 1), has a lower clustering quality function value Q({K'(λ), L'(λ)}). Define the function h(λ) = Q({T(λ), Z}) − Q({K'(λ), L'(λ)}). Due to the optimality assumption, h(1) ≤ 0.

Let us discuss now the centric transform with λ = 0. In this case all points from A'(0) and D'(0) collapse to a single point. This point can be closer to either μ(K'(0)) or μ(L'(0)). Assume it is closer to μ(K'(0)). In this case Q({K''(0), L''(0)}) ≤ Q({K'(0), L'(0)}), where K''(λ) = P(λ) ∪ B ∪ C and L''(λ) = E ∪ F. As all points subject to the centric consistency transform are contained in a single set of both partitions {T(λ), Z} and {K''(λ), L''(λ)}, we get

Q({T(0), Z}) − Q({K''(0), L''(0)}) = Q({T(1), Z}) − Q({K''(1), L''(1)}) ≤ 0

because Q({T(1), Z}) = Q({T, Z}) is the optimum. Hence also

Q({T(0), Z}) − Q({K'(0), L'(0)}) ≤ 0, that is, h(0) ≤ 0.
It is also easily seen that h(λ) is a quadratic function of λ, as follows:
Q({T(λ), Z}) = \sum_{x∈T(λ)} \|x − μ(T(λ))\|^2 + \sum_{x∈Z} \|x − μ(Z)\|^2
= \sum_{x∈P(λ)} \|x − μ(P(λ))\|^2 + \sum_{x∈Y} \|x − μ(Y)\|^2 + \|μ(P(λ)) − μ(Y)\|^2 \cdot \frac{1}{1/|P| + 1/|Y|} + \sum_{x∈Z} \|x − μ(Z)\|^2
= \sum_{x∈A'(λ)} \|x − μ(A'(λ))\|^2 + \sum_{x∈D'(λ)} \|x − μ(D'(λ))\|^2 + |A| \|μ(A'(λ)) − μ(P(λ))\|^2 + |D| \|μ(D'(λ)) − μ(P(λ))\|^2 + \sum_{x∈Y} \|x − μ(Y)\|^2 + \|μ(P(λ)) − μ(Y)\|^2 \cdot \frac{1}{1/|P| + 1/|Y|} + \sum_{x∈Z} \|x − μ(Z)\|^2
This expression is obviously quadratic in λ, since each point x ∈ P is transformed linearly to µ(P ) + λ(x − µ(P )). Note that µ(P (λ)) = µ(P ) so it does not depend on λ. On the other hand
Q({K'(λ), L'(λ)}) = Q({A'(λ) ∪ B ∪ C, D'(λ) ∪ E ∪ F})
= \sum_{x∈A'(λ)} \|x − μ(A'(λ))\|^2 + \sum_{x∈B} \|x − μ(B)\|^2 + \sum_{x∈C} \|x − μ(C)\|^2
+ \frac{1}{|A| + |B| + |C|} \left( |A||B| \|μ(A'(λ)) − μ(B)\|^2 + |A||C| \|μ(A'(λ)) − μ(C)\|^2 + |B||C| \|μ(B) − μ(C)\|^2 \right)
+ \sum_{x∈D'(λ)} \|x − μ(D'(λ))\|^2 + \sum_{x∈E} \|x − μ(E)\|^2 + \sum_{x∈F} \|x − μ(F)\|^2
+ \frac{1}{|D| + |E| + |F|} \left( |D||E| \|μ(D'(λ)) − μ(E)\|^2 + |D||F| \|μ(D'(λ)) − μ(F)\|^2 + |F||E| \|μ(F) − μ(E)\|^2 \right)

Then, subtracting and cancelling the terms common to both expressions (and noting that μ(P(λ)) = μ(P)),

h(λ) = Q({T(λ), Z}) − Q({K'(λ), L'(λ)})
= |A| \|μ(A'(λ)) − μ(P)\|^2 + |D| \|μ(D'(λ)) − μ(P)\|^2
− \frac{1}{|A| + |B| + |C|} \left( |A||B| \|μ(A'(λ)) − μ(B)\|^2 + |A||C| \|μ(A'(λ)) − μ(C)\|^2 \right)
− \frac{1}{|D| + |E| + |F|} \left( |D||E| \|μ(D'(λ)) − μ(E)\|^2 + |D||F| \|μ(D'(λ)) − μ(F)\|^2 \right)
+ c_h

where c_h collects the remaining terms, which do not depend on λ, namely \sum_{x∈Y} \|x − μ(Y)\|^2 − \sum_{x∈B} \|x − μ(B)\|^2 − \sum_{x∈E} \|x − μ(E)\|^2 + \|μ(P) − μ(Y)\|^2 \cdot \frac{1}{1/|P| + 1/|Y|} + \sum_{x∈Z} \|x − μ(Z)\|^2 − \sum_{x∈C} \|x − μ(C)\|^2 − \sum_{x∈F} \|x − μ(F)\|^2 − \frac{|B||C|}{|A| + |B| + |C|} \|μ(B) − μ(C)\|^2 − \frac{|F||E|}{|D| + |E| + |F|} \|μ(F) − μ(E)\|^2. All the μs that depend on λ depend on it linearly (by the definition of centric consistency). Recall that

μ(A'(λ)) − μ(P) = \frac{1}{|A|} \sum_{x∈A} λ(x − μ(P)) = λ v_A,

where v_A is a vector independent of λ. Hence \|μ(A'(λ)) − μ(P)\|^2 = λ^2 v_A^T v_A. Similarly

\|μ(A'(λ)) − μ(B)\|^2 = \|(μ(A'(λ)) − μ(P)) + (μ(P) − μ(B))\|^2 = \|μ(A'(λ)) − μ(P)\|^2 + \|μ(P) − μ(B)\|^2 + 2(μ(A'(λ)) − μ(P))^T(μ(P) − μ(B)) = λ^2 v_A^T v_A + λ c_{ABP} + c_{BP}
with c ABP , c BP being constants independent of λ, whereby only the first summand depends on λ 2 . Similarly
\|μ(A'(λ)) − μ(C)\|^2 = λ^2 v_A^T v_A + λ c_{ACP} + c_{CP}
with c ACP , c CP being constants independent of λ, and so on. Therefore we can rewrite the h(λ) as
h(λ) = |A| λ^2 v_A^T v_A + |D| λ^2 v_D^T v_D
− \frac{1}{|A| + |B| + |C|} \left( |A||B| (λ^2 v_A^T v_A + λ c_{ABP} + c_{BP}) + |A||C| (λ^2 v_A^T v_A + λ c_{ACP} + c_{CP}) \right)
− \frac{1}{|D| + |E| + |F|} \left( |D||E| (λ^2 v_D^T v_D + λ c_{DEP} + c_{EP}) + |D||F| (λ^2 v_D^T v_D + λ c_{DFP} + c_{FP}) \right)
+ c_h
So the coefficient at λ 2 amounts to:
\left( |A| − \frac{|A|(|B| + |C|)}{|A| + |B| + |C|} \right) v_A^T v_A + \left( |D| − \frac{|D|(|E| + |F|)}{|D| + |E| + |F|} \right) v_D^T v_D
which is bigger than 0 provided |A| > 0 and |D| > 0, which is the case by our assumption of an alternative clustering. Therefore, for λ large enough, h(λ) > 0.
As h(λ) is a quadratic function of λ with a positive coefficient at λ^2, and h(0) ≤ 0 and h(1) ≤ 0, then also h(λ) ≤ 0 for any value of λ between 0 and 1 (a convex quadratic that is non-positive at both endpoints of an interval is non-positive on the whole interval). This contradicts h(λ*) > 0 and completes the proof.
Proof of Theorem 32. The scale-invariance is implied by the properties of the k-means algorithm that serves as the subroutine and by the fact that the stopping criterion is the relative decrease of Q, so the stopping criterion is not affected by scaling either.
The 2↑-nearly-richness can be achieved as follows. Each partition consists of clusters of predefined cardinalities. Within each partition, distribute the data points uniformly on a line segment. If we set the stopping criterion as lower than a 9-fold decrease of Q, then a proper manipulation of the distances between groups of clusters, when combining them into the cluster hierarchy, will ensure that the targeted partition is restored.
The centric consistency is implied by Theorem 31 as follows. At a given stage of the algorithm, when a bisection is to be performed, Theorem 31 ensures that the optimal bisection is the one that was used in the original partitioning process. The question is only about the stopping criterion: whether or not centric consistency would imply continuing the partitioning. Consider the notation from the previous proof. The relative Q improvement in the original clustering amounts to:
\frac{1}{\sum_{x∈X} \|x − μ(X)\|^2} \left( \sum_{x∈P} \|x − μ(P)\|^2 + \sum_{x∈Y} \|x − μ(Y)\|^2 + \|μ(P) − μ(Y)\|^2 \cdot \frac{1}{1/|P| + 1/|Y|} + \sum_{x∈Z} \|x − μ(Z)\|^2 \right)

and afterwards to

\frac{1}{\sum_{x∈X} \|x − μ(X)\|^2 − \sum_{x∈P} \|x − μ(P)\|^2 + \sum_{x∈P(λ)} \|x − μ(P)\|^2} \cdot \left( \sum_{x∈P(λ)} \|x − μ(P)\|^2 + \sum_{x∈Y} \|x − μ(Y)\|^2 + \frac{\|μ(P) − μ(Y)\|^2}{1/|P| + 1/|Y|} + \sum_{x∈Z} \|x − μ(Z)\|^2 \right)
This expression is as if \sum_{x∈P} \|x − μ(P)\|^2 − \sum_{x∈P(λ)} \|x − μ(P)\|^2 (which is positive and smaller than the denominator) were subtracted from both the numerator and the denominator of the first quotient. Hence the second quotient is smaller, so if the first did not stop the partitioning, the second will not either.
Kleinberg's three axioms/properties of consistency, scale-invariance and richness are contradictory. We obtained here three similar axioms that are not contradictory. If we replaced Kleinberg's richness axiom only with 2↑-nearly-richness, this would not help to resolve the contradiction. The real driving force behind the conflict resolution is the centric consistency. Centric consistency may appear more rigid than Kleinberg's consistency, but on the other hand it is broader than Kleinberg's, as already mentioned (not all distances between elements of distinct clusters need to increase).
So at this point we know that a reasonable version of Kleinberg's Γ-transform is no more general than the centric Γ-transform. Let us briefly demonstrate that centric consistency raises claims of equivalent clustering also in those cases when Kleinberg's consistency does not.
Motion Consistency Proofs
Proof of Theorem 41. Given an optimal solution Γ_O to the k-means problem, where each cluster is enclosed in a (hyper)ball with radius R centered at its gravity center, if we rotate the clusters around their gravity centers, then the position of any data point can change by at most 2R. That is, the distances between points from different clusters can decrease by at most 4R. So if we move the clusters away from one another by the distance of 4R, then (1) the Γ_O clustering gives the same value of the k-means cost function as before (as the distances within clusters do not change) and (2) upon rotation in the new position, no clustering different from Γ_O can have a lower cost function value than any competing clustering had in the initial position of the data points. Hence the optimum clustering is preserved upon any rotating transformation, and when moving the clusters keeping the center distances as prescribed, the optimal clustering is preserved as well.
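The 4R argument above can be illustrated with a small Python sketch (toy data and helper names are ours; the clusters are pushed apart by scaling the centre positions so that every centre distance grows by at least 4R, then each cluster is rotated about its gravity centre, and k-means is re-run).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(c, 0.4, size=(80, 2)) for c in ((0, 0), (5, 0), (0, 5))])
labels = KMeans(n_clusters=3, n_init=20, random_state=0).fit_predict(X)

centres = np.array([X[labels == j].mean(axis=0) for j in range(3)])
# R: radius of the largest ball (centred at a gravity centre) enclosing a cluster.
R = max(np.linalg.norm(X[labels == j] - centres[j], axis=1).max() for j in range(3))
d_min = min(np.linalg.norm(centres[i] - centres[j])
            for i in range(3) for j in range(i + 1, 3))
alpha = 1 + 4 * R / d_min          # every centre distance grows by at least 4R

X2 = np.empty_like(X)
origin = centres.mean(axis=0)
for j in range(3):
    m = labels == j
    theta = rng.uniform(0, 2 * np.pi)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    # Rotate the cluster about its gravity centre, then translate it so that
    # its centre lands at the scaled (moved-apart) position.
    new_centre = origin + alpha * (centres[j] - origin)
    X2[m] = (X[m] - centres[j]) @ rot.T + new_centre

labels2 = KMeans(n_clusters=3, n_init=20, random_state=0).fit_predict(X2)
print("partition preserved:", adjusted_rand_score(labels, labels2) > 0.999)
```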
Proof of Theorem 45. The assumption of Theorem 45 implies that, when applying the k-means-l-MST algorithm, the k-means cluster centers from different l-MST clusters will lie at a distance of at least m_d. Therefore we can increase k to such an extent that the distances between k-means clusters within the same l-MST cluster are smaller than m_d. This means that the clustering will be preserved upon a centric Γ-transform applied to the k-means clusters. Let R_M be the maximum radius of a hyperball enclosing any k-means cluster. Let d_M be the maximum distance between k-means clusters from different l-MST clusters. So apply the centric Γ-transform to the k-means clusters with λ < m_d / (4 R_M + d_M). If we now move the clusters, the motion consistency will be preserved, as implied by Theorem 41.
Proof of Theorem 43. The proof follows the general idea behind the case of k-means. If the solution Γ_o is optimal, then the operation performed will induce no change in the distances within the clusters, and the distances between elements from different clusters will increase, which will cause an increase in the quality of the clustering function. For any alternative, that is non-optimal, clustering Γ_n, we can divide the distances between data points into the following four categories: K_1, intra-cluster distances both under Γ_o and Γ_n; K_2, intra-cluster distances under Γ_o and inter-cluster distances under Γ_n; K_3, inter-cluster distances under Γ_o and intra-cluster distances under Γ_n; K_4, inter-cluster distances both under Γ_o and Γ_n. Only K_3 is interesting for our considerations, as neither K_1 nor K_2 nor K_4 change the quality function. Under K_3, the quality of Γ_o does not change, while that of Γ_n may decrease. This completes the proof.
Experiments
In order to validate the clustering preservation by the consistency transformations and motion transformations proposed in this paper, also in cases when a violation of Kleinberg's consistency occurs, we investigated the application of k-means to the datasets described in Table 3, applying to them the transformations from Theorems 34 and 41. In Table 3, the column recs contains the number of data records; cols, the number of columns used in the experiments (numeric columns only); clusters, the number of clusters into which the data was clustered in the experiments; NSTART, the number of restarts used for the k-means algorithm (R implementation); and time.sec, the execution time of the experiment (encompassing not only the clustering, but also the time for computing various quality measures).
The datasets were downloaded from (or from links from) the Web page http://cs.joensuu.fi/sipu/datasets/.
Experiments Related to Theorem 34
The experiments were performed in the following manner. For each dataset, k-means with k equal to the value in the clusters column was performed, with the number of restarts equal to NSTART, as mentioned in Table 3. The result served as a "golden standard", that is, the "true" clustering via k-means.
Then the cluster was selected that was lying "most centrally" among all the clusters, so that the violation of Kleinberg's consistency upon the centric consistency transformation would have the biggest effect. Thereafter four experiments of the centric consistency transformation on that cluster were run, with λ = 0.8, 0.6, 0.4, 0.2 respectively. Then the resulting set was re-clustered using the same k-means parameters. The number of disagreements in cluster membership is reported in Table 4. As visible, only KDDCUP04Bio.txt exhibits problems, due to the fact that the number of restarts was kept low in order to handle the large number of records. In order to demonstrate that the centric consistency allows for cluster-preserving transformations in spite of violations of Kleinberg's consistency condition, the following statistics were computed: 100 data points were randomly picked from the "central" cluster (if there were fewer of them, all were taken) and 100 data points were randomly picked from the other clusters. Then the distances prior to and after the centric consistency transformations were computed between each element of the first and the second set and compared. The percentage of cases when the Kleinberg consistency condition was violated was computed (for each aforementioned λ). The results are presented in Table 5.
As visible, as many as 70% of distances may violate Kleinberg's condition. The seriousness of the problem depends both on the dataset and the λ considered.
The conclusion is that the centric consistency transformation provides derived labelled datasets for the investigation of k-means-like algorithms even in cases when Kleinberg's axiomatic system is violated.
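The violation statistic reported in Table 5 can be sketched as follows in Python (the sampling sizes and the function name are ours): it counts the fraction of sampled cross-cluster pairs whose distance decreased under the transformation, i.e. pairs violating Kleinberg's consistency condition.

```python
import numpy as np

def kleinberg_violation_percentage(X_before, X_after, labels, cluster,
                                   n_sample=100, seed=0):
    """Percentage of sampled (transformed-cluster point, other-cluster point)
    pairs whose distance decreased under the transformation."""
    rng = np.random.default_rng(seed)
    inside = np.flatnonzero(labels == cluster)
    outside = np.flatnonzero(labels != cluster)
    inside = rng.choice(inside, size=min(n_sample, len(inside)), replace=False)
    outside = rng.choice(outside, size=min(n_sample, len(outside)), replace=False)

    before = np.linalg.norm(X_before[inside][:, None] - X_before[outside][None, :], axis=-1)
    after = np.linalg.norm(X_after[inside][:, None] - X_after[outside][None, :], axis=-1)
    return 100.0 * np.mean(after < before)
```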
Experiments Related to Theorem 41
The experiments were performed in the following manner. For each dataset, k-means with k equal to the value in the clusters column was performed, with the number of restarts equal to NSTART, as mentioned in Table 3. As previously, the result served as a "golden standard", that is, the "true" clustering via k-means.
Then the motion experiment was initiated. Theorem 41 indicates that a "jump" of the clusters would have to be performed, but we insisted on a "continuous" transformation. So first a centric consistency transformation was performed on each cluster so that the radii of all balls enclosing clusters become equal (to the radius of the smallest ball). Then a centric consistency transformation was applied to each cluster so that the conditions of Theorem 41 hold. After these two transformations, the correctness of the clustering and the percentage of Kleinberg consistency violations were computed following the guidelines of the previous subsection. The results are reported in the columns center.clu and center.dst of Table 6, respectively. Afterwards, twenty times, a motion by a small step (with randomly picked step length) in a random direction was performed. It was checked whether the conditions of Theorem 41 hold. If so, the correctness of the clustering and the percentage of Kleinberg consistency violations were computed following the guidelines of the previous subsection. The maximal values of both are reported in the columns motion.clu and motion.dst of Table 6 with respect to the clustering after the two centric consistency transformations, and in the columns step.clu and step.dst of Table 6 with respect to the clustering in the previous step.
Again, the dataset KDDCUP04Bio.txt proves to be most problematic: due to the low number of restarts of the k-means clustering, even over 16,000 violations of cluster membership were observed. Note however that this dataset is huge compared to the other ones, with 145,751 data points in over 70 dimensions. Such data are problematic for k-means itself (difficulties in proper seeding and then in recovering from erroneous seeding). There were also minor problems with the artificial benchmark datasets dim064.txt and dim032.txt. Otherwise, even the non-optimal k-means algorithm approximated the true optimum sufficiently well to show that the motion consistency transformation really preserves the clustering.
At the same time it is visible that the motion consistency transformation provides derived labelled datasets for the investigation of k-means-like algorithms even in cases when Kleinberg's axiomatic system is seriously violated.
Discussion
The attempts to axiomatize the domain of clustering have a rich history. According to Van Laarhoven and Marchiori [34] and Ben-David and Ackerman [7], the clustering axiomatic frameworks address either: (1) required properties of clustering functions, or (2) required properties of the values of a clustering quality function, or (3) required properties of the relation between qualities of different partitions.
The work of Kleinberg [18], which we discussed extensively, fits into the first category.
Van Laarhoven and Marchiori [34] developed an axiomatic system for graphs, fitting the same category. Though it is possible to view the datapoints in R m as nodes of a graph connected by edges with appropriate weights, their axioms essentially reflect the axioms of Kleinberg with all the problems for k-means like algorithms.
To overcome the problems with the consistency property, Ackerman et al. [3] propose splitting it into the concepts of outer-consistency and inner-consistency. However, the k-means algorithm is not inner-consistent, see [3, Section 3.1]. We show that the problem with inner-consistency is much deeper, as it may not be applicable at all (Theorem 16 and Theorem 17).
The k-means algorithm is said to be in this sense outer-consistent [3, Section 5.2]. But we have demonstrated in this paper that outer-consistency constitutes a problem in Euclidean space if continuous transformations are required, see Theorem 20. They also claim that k-means-ideal has the properties of outer-consistency and locality (see footnote 5). The property of locality would be useful for the generation of testing sets for clustering function implementations. Ackerman et al. [3] show that these properties are satisfied neither by k-means-random nor by k-means with furthest element initialization.
The mentioned properties cannot be fulfilled by any algorithm under fixed-dimensional settings, so they establish them for k-means-ideal only.
Zadeh and Ben-David [37] propose instead the order-consistency, satisfied by some versions of the single-linkage algorithm, providing an elegant way of creating a multitude of derived test sets. k-means is not order-consistent, so the property is not useful for continuous test data transformations for k-means.
Ackerman and Ben-David [7] pursue the second category of approaches to axiomatization. Instead of axiomatizing the clustering function, they create axioms for the cluster quality function.
Definition 46. Let C(X) be the set of all possible partitions over the set of objects X, and let D(X) be the set of all possible distance functions over the set of objects X. A clustering-quality measure (CQM) J : X×C(X)×D(X) → R + ∪{0} is a function that, given a data set (with a distance function) and its partition into clusters, returns a non-negative real number representing how strong or conclusive the clustering is.
Ackerman and Ben-David propose among others the following axioms:
Property 47. (CQM-scale-invariance) A quality measure J satisfies scale invariance if for every clustering Γ of (X, d) and every positive β, J(X, Γ, d) = J(X, Γ, βd).
Property 48. (CQM-consistency) A quality measure J satisfies consistency if for every clustering Γ over (X, d), whenever d' is a consistency-transformation of d, then J(X, Γ, d') ≥ J(X, Γ, d).
If we define a clustering function f to maximize the quality function, then the clustering function is scale-invariant, but it does not need to be consistent in Kleinberg's sense. E.g., k-means is CQM-consistent while not being consistent in Kleinberg's sense.
The basic problem with this axiomatic set is that the CQM-consistency does not say anything about the (optimal) partition being the result of the consistency-transform, while Kleinberg's axioms make a definitive statement: the partition before and after the consistency-transform has to be the same. So k-means could in particular be rendered CQM-consistent, CQM-scale-invariant, and CQM-rich, if one applies a bisectional version (bisectional-auto-means).
A number of further characterizations of clustering functions have been proposed, e.g. by Ackerman et al. [2] for linkage algorithms and by Carlsson [8] for multiscale clustering. None of the transforms proposed there seems, however, to fit the purpose of test set derivation for k-means testing by a continuous derivation. An interesting axiomatic system for hierarchical clustering was developed in [33]. It is defined not for data points but for probability distributions over some support(s). It includes a scale invariance property, but provides nothing that can be considered as corresponding to Kleinberg's consistency. This clearly limits its usability for test set generation for (hierarchical) clustering algorithms. Nonetheless this approach is worth noting and worth pursuing for non-hierarchical clustering functions like k-means, for several reasons: (1) k-means splits not only the data themselves but also the Euclidean space as such, (2) k-means is probabilistic in nature, and rather than at exact re-clustering of the data one should aim at the recovery of appropriately defined probability distributions over the space, and (3) the concepts of centric consistency and of motion consistency are applicable to probability distributions over finite support. So it may constitute a research path worthy of further exploration.
Puzicha et al. [29] propose an axiomatic system in which the data can be transformed by shifting the entire data set (and not the individual clusters). In our approach, individual clusters may be subject to shifts.
Strazzeri et al. [32] introduced the axiom of monotonic consistency to replace Kleinberg's consistency axiom. They require that, for an expansion function (monotonically increasing) ν : [0, ∞) → [0, ∞), each inter-cluster distance d is changed to d' = ν(d) and each intra-cluster distance d is changed to d' = ν^{−1}(d). This transform, though non-linear on the one hand, is more rigid than our centric consistency and motion consistency transforms, as it has to apply globally, and not locally to a cluster; on the other hand, it is designed for and applicable to graphs, but not to R^m except for special cases.
Cohen-Addad et al. [12] proposed the following modification of Kleinberg's consistency axiom. Let OPT(k) be the optimal value of a cost function when clustering into k parts. Let k* be the k maximizing the quotient OPT(k)/OPT(k − 1). Then their refined consistency requires that Kleinberg's consistency transform yields the same clustering only if k* prior to and after the transform agree. They prove that single link and k-means fulfil the refined consistency axiom (the latter under some "balance" requirements). While the refined consistency transform (identical to Kleinberg's consistency transform) is much more flexible in generating derived test sets, it is in practice hard to test whether or not a given transform produces identically clusterable datasets.
Ackerman et al. [4] consider a different way of deriving new samples from old ones: re-weighting of data points. It turns out that some algorithms are sensitive to re-weighting while others are robust. Regrettably, k-means is sensitive, so the method cannot be applied to derive new clustering test datasets from existing ones.
A discussion and critics of various approaches to axiomatic systems for clustering can be found in [15].
Conclusions
We showed that the consistency axiom of Kleinberg [18] constitutes a problem: neither consistency, nor inner-consistency, nor outer-consistency can be executed continuously in a Euclidean space of fixed dimension. Hence the generation of new labelled data from existing ones for clustering algorithm testing cannot be based on the respective transformations. We therefore proposed alternative axioms, suitable for continuous transformations in Euclidean space, matching the spirit of Kleinberg's consistency axioms but free of the contradictions they induce. The respective transformations can therefore be applied for the mentioned test data generation.
This research was restricted to embeddings in Euclidean space. In a separate study [20] we showed that the kernel-k-means algorithm under non-Euclidean distances between data points may be deemed as working in a Euclidean space after adding a specific constant to the squared distances. Hence, for kernel-k-means testing, still another way of generating new test samples is possible, namely via adding or subtracting constants from the squared distances between data elements.
Property 7. Let Range(f) denote the set of all partitions Γ such that f(d) = Γ for some distance function d. A function f has the k-richness property if Range(f) is equal to the set of all partitions of S into k non-empty sets.

Theorem 23. Under gravitational Γ-transformation, k-means possesses the property of Kleinberg's consistency and continuous consistency.

Property 25. Define the convergent-Γ-transform as a Γ-transform from Property 6 in which, if d(i, j) ≤ d(k, l), then also d'(i, j) ≤ d'(k, l) and d(i, j)/d(k, l) ≤ d'(i, j)/d'(k, l). The clustering function f has the convergent consistency property if for each distance function d and its convergent Γ-transformation d' the following holds: if f(d) = Γ, then f(d') = Γ.

Theorem 26. The axioms of richness, scale-invariance and convergent-consistency are not contradictory.
Figure 1: Impossible continuous Γ-transform (left figure) and impossible continuous outer-Γ-transform and impossible inner-Γ-transform (right figure).

Figure 2: Effect of the outer-Γ-transform when moving not orthogonally to the cluster border. Left: the original data, clustered by 2-means. Right: effect of moving by a vector parallel to the cluster border.

Figure 3: Effect of the outer-Γ-transform when moving not orthogonally to all cluster borders. Left: the original data, clustered by 5-means. Right: effect of moving the top cluster by a vector orthogonal to only one cluster border.

Figure 6: A mixture of 8 normal distributions as clustered by the k-means algorithm (Voronoi diagram superimposed).

Figure 7: Data from Figure 6 after a centralized Γ-transformation (Γ* transformation), clustered by the k-means algorithm into 8 groups.
Figure 9: Illustration of continuous motion consistency for the k-means-2-MST clustering algorithm. The presented sample consists of 11200 data points, k = 1500. (a) original data, (b) clustering using k-means-2-MST, spanning trees shown, (c) clustering from (b) subjected to a centric Γ-transform with λ = 0.7, re-clustered, (d) clustering from (b) subjected to a centric Γ-transform with λ = 0.052, enabling motion with motion consistency, re-clustered, (e) clustering from (d) subjected to motion (perpendicular vector with length 2), re-clustered after motion, (f) clustering from (d) subjected to motion (perpendicular vector with length 4), re-clustered after motion.
Table 1: Number of data points from different clusters whose distance was reduced while moving one cluster. The move was by 0.1 of the distance between the cluster centers in the direction indicated by the rotation.

rotation                       -90°    -60°   -30°   0°   30°    60°    90°
bad distances                  8064    3166   480    0    296    2956   7920
percentage of bad distances    23.6%   9.3%   1.4%   0%   0.9%   8.6%   23.1%
Table 2: Number of data points from different clusters whose distance was reduced while moving one cluster (hence violating the Γ-transform condition). The top cluster was moved away from the central cluster by s times the distance between cluster centers.

shift s      0    0.1    0.2    0.3    0.4   0.5   0.6   0.7   0.8   0.9   1
bad dst      0    392    280    158    62    26    6     0     0     0     0
% bad dst    0%   0.2%   0.1%   0.1%   0%    0%    0%    0%    0%    0%    0%
Imagine points A[−11], B[−9], C[−2], D[2], E[9], F[11] in 1D, constituting three clusters {A, B}, {C, D}, {E, F}. Imagine a centric-Γ-transform of the cluster {C, D} into {C'[−1], D'[1]}. Note that hereby the quotient |CB|/|AB| increases while |CE|/|EF| decreases, and at the same time |DB|/|AB| decreases while |DE|/|EF| increases. No sequence of Kleinberg's Γ-transforms and invariance transforms on these data can achieve such a transformation, because all the mentioned quotients are non-decreasing upon consistency and invariance transforms. This implies that the convergent-Γ-transform (the reasonable version of Kleinberg's Γ-transform) is a special case of the centric-Γ-transform.
Table 3: Data sets information

Dataset name                           recs     cols   clusters   NSTART   time.sec
MopsiLocations2012-Joensuu.txt         6014     2      5          1000     53
MopsiLocationsUntil2012-Finland.txt    13467    2      5          1000     80
t4.8k ConfLong Demo.txt                8000     2      5          1000     68
dim032.txt                             1024     32     16         1000     243
ConfLongDemo JSI 164860.txt            164860   3      5          1000     2088
dim064.txt                             1024     64     16         977      472
KDDCUP04Bio.txt                        145751   74     8          54       6119
kddcup99 csv.csv                       494020   5      10         640      3488
Table 5: Kleinberg consistency violation percentage

λ =                                    0.8     0.6     0.4     0.2
MopsiLocations2012-Joensuu.txt         1.32    3.16    54.6    65.85
MopsiLocationsUntil2012-Finland.txt    18.75   1.965   4       0
t4.8k ConfLong Demo.txt                17.59   57.18   56.67   53.66
dim032.txt                             0       0       0.49    0
ConfLongDemo JSI 164860.txt            72.55   68.98   19.22   8.73
dim064.txt                             6       4       1       6
KDDCUP04Bio.txt                        1       0       0       2
kddcup99 csv.csv                       3.62    3.02    3.12    60.74
Table 6: Kleinberg consistency distance errors percentage upon motion

                                       center.dst   center.clu   motion.dst   motion.clu   step.dst   step.clu   time
MopsiLocations2012-Joensuu.txt         65.92        0            74.23        0            98.08      0          384.0906
MopsiLocationsUntil2012-Finland.txt    60.59        0            60.86        0            97         0          392
t4.8k ConfLong Demo.txt                61.07        0            64.46        0            100        0          408
dim032.txt                             64.12        4            68.15        5            92         6          2215.708
ConfLongDemo JSI 164860.txt            57.22        0            72.05        0            100        0          498.239
dim064.txt                             58.89        2            64.46        5            100        8          4473.421
KDDCUP04Bio.txt                        68.41        155          80.93        5393         98         16679      6737.905
kddcup99 csv.csv                       52.87        0            65.78        0            95         0          1084.221
See e.g. https://ifcs.boku.ac.at/repository/, https://archive.ics.uci.edu/ml/datasets.php
This impossibility does not mean that there is an inner-contradiction when executing the inner-Γ-transform. Rather it means that considering inner-consistency is pointless because inner-Γ-transform is in general impossible except for isometric transformation.
This property holds clearly for k-means, if quality is measured by inverted Q function, but also we can measure cluster quality of k-single-link with the inverted longest link in any cluster and then the property holds.
The k-means quality function is known to exhibit local minima in which the k-means algorithm may get stuck. This claim means that after the centric Γ-transformation the partition will still be a local optimum. If the quality function has a unique local optimum, then of course it is the global optimum, and after the transform the partition yielding this global optimum will remain the global optimum.
A clustering function clustering into k clusters has the locality property if, whenever a set S for a given k is clustered by it into the partition Γ, and we take a subset Γ' ⊂ Γ with |Γ'| = k' < k, then clustering of ∪_{C∈Γ'} C into k' clusters will yield exactly Γ'.
| [] |
[
"ADDITIVE TRIPLES OF BIJECTIONS, OR THE TOROIDAL SEMIQUEENS PROBLEM",
"ADDITIVE TRIPLES OF BIJECTIONS, OR THE TOROIDAL SEMIQUEENS PROBLEM",
"ADDITIVE TRIPLES OF BIJECTIONS, OR THE TOROIDAL SEMIQUEENS PROBLEM",
"ADDITIVE TRIPLES OF BIJECTIONS, OR THE TOROIDAL SEMIQUEENS PROBLEM"
] | [
"Sean Eberhard ",
"Freddie Manners ",
"Rudi Mrazović ",
"Sean Eberhard ",
"Freddie Manners ",
"Rudi Mrazović "
] | [] | [] | We prove an asymptotic for the number of additive triples of bijections {1, . . . , n} → Z/nZ, that is, the number of pairs of bijections π 1 , π 2 : {1, . . . , n} → Z/nZ such that the pointwise sum π 1 + π 2 is also a bijection. This problem is equivalent to counting the number of orthomorphisms or complete mappings of Z/nZ, to counting the number of arrangements of n mutually nonattacking semiqueens on an n × n toroidal chessboard, and to counting the number of transversals in a cyclic Latin square. The method of proof is a version of the Hardy-Littlewood circle method from analytic number theory, adapted to the group (Z/nZ) n . | 10.4171/jems/841 | [
"https://export.arxiv.org/pdf/1510.05987v3.pdf"
] | 119,122,400 | 1510.05987 | 71233fbc58473a7f8be61059042529f4628ad247 |
ADDITIVE TRIPLES OF BIJECTIONS, OR THE TOROIDAL SEMIQUEENS PROBLEM
22 Mar 2016
Sean Eberhard
Freddie Manners
Rudi Mrazović
ADDITIVE TRIPLES OF BIJECTIONS, OR THE TOROIDAL SEMIQUEENS PROBLEM
22 Mar 2016. arXiv:1510.05987v3 [math.CO]
We prove an asymptotic for the number of additive triples of bijections {1, . . . , n} → Z/nZ, that is, the number of pairs of bijections π 1 , π 2 : {1, . . . , n} → Z/nZ such that the pointwise sum π 1 + π 2 is also a bijection. This problem is equivalent to counting the number of orthomorphisms or complete mappings of Z/nZ, to counting the number of arrangements of n mutually nonattacking semiqueens on an n × n toroidal chessboard, and to counting the number of transversals in a cyclic Latin square. The method of proof is a version of the Hardy-Littlewood circle method from analytic number theory, adapted to the group (Z/nZ) n .
Introduction
Let S be the set of bijections {1, . . . , n} → Z/nZ, thought of as a subset of the group (Z/nZ) n . We are interested in counting the number s n of additive triples in S, that is, the number of pairs of bijections π 1 , π 2 : {1, . . . , n} → Z/nZ such that the pointwise sum π 1 + π 2 is also a bijection.
The number s n has been studied somewhat extensively, but under a different guise. Since S is invariant under precomposition with permutations of {1, . . . , n} it is easy to see by arbitrarily identifying {1, . . . , n}
with Z/nZ that s_n/n! is the number of permutations π of Z/nZ such that x → π(x) − x is also a permutation. Such maps are called orthomorphisms or complete mappings, and afford the following fun interpretation. Define a semiqueen to be a chess piece which can move any distance horizontally, vertically, or diagonally in the northeast-southwest direction. Then orthomorphisms represent ways of arranging n mutually nonattacking semiqueens on an n × n toroidal chessboard. Thus s_n/n! is the number of such arrangements. In another guise this problem is that of counting the number of transversals of a cyclic Latin square. Here a transversal of an n × n Latin square is a set of n squares with no two sharing the same row, column, or symbol, and the cyclic Latin square is the Latin square with (i, j) entry given by i + j (mod n). Then as above the number of such transversals is s_n/n!.
If n is even then s n = 0. Indeed, in this case for any bijection π : {1, . . . , n} → Z/nZ we have n i=1 π(i) = n/2, so the sum of two bijections is never again a bijection. If n is odd then it is easy to see that s n > 0, but estimating s n is not easy.
In 1991, Vardi [Var91] made a conjecture equivalent to the following.
Conjecture 1.1. There are constants c_1 > 0 and c_2 < 1 such that for any large enough odd number n we have c_1^n n!^2 ≤ s_n ≤ c_2^n n!^2.
The upper bound in this conjecture is known, and various authors have made incremental improvements to the constant c 2 . Cooper and Kovalenko [CK96] showed that c 2 = e −0.08854 is acceptable, and this was later improved by Kovalenko [Kov96] to c 2 = 1/ √ 2 and by McKay, McLeod and Wanless [MMW06] to c 2 = 0.614. More recently, Taranenko [Tar15] proved that one can take c 2 = 1/e + o(1). Glebov and Luria [GL15] proved the same bound using a somewhat simpler method based on entropy.
There has been much less progress on the lower bound, but nontrivial lower bounds for s_n have been proved under various arithmetic assumptions about n. For example, Cooper [Coo00] proved that s_n ≥ e^{(1/2)√n log n} n! under the hypothesis that n has a divisor of size roughly √n, while if, say, n is prime then a lower bound of Rivin, Vardi, and Zimmermann [RVZ94] for the toroidal queens problem gives s_n ≥ 2^{√((n−1)/2)} n!. These lower bounds were recently superseded by work of Cavenagh and Wanless [CW10], which showed that s_n ≥ 3.246^n n! for all odd n. However, this lower bound is still a long way from the one in Conjecture 1.1. Some researchers have also tried investigating the growth rate of s_n numerically: see Cooper, Gilchrist, Kovalenko, and Novaković [CGKN99] and Kuznetsov [Kuz07, Kuz08, Kuz09]. Based on this numerical evidence, Wanless [Wan11] conjectured that we can take c_1, c_2 = 1/e + o(1); specifically he conjectured that lim_{n→∞} (1/n) log(s_n/n!^2) = −1.
Finally, we should mention that s n /(n · n!) is sequence A003111 in [Slo]. In this paper we prove Vardi's conjecture with the optimal values c 1 , c 2 = 1/e + o(1), thus also confirming Wanless's conjecture. In fact our estimate is much more precise -we compute s n up to a factor of 1 + o(1).
Theorem 1.2. Let n be an odd integer. Then s_n = (e^{−1/2} + o(1)) n!^3/n^{n−1}.
Perhaps the most surprising feature of this estimate is the appearance of the constant e^{−1/2}. This constant arises as the sum of the singular series in an argument resembling the circle method from analytic number theory, but it can be rationalized heuristically as follows. If π_1, π_2 : {1, . . . , n} → Z/nZ are random bijections then the sum f = π_1 + π_2 is something like a random function subject to Σ_{x∈Z/nZ} f(x) = 0, and so we might guess that the probability that π_1 + π_2 is a bijection is about n · n!/n^n. However if we fix two distinct elements x, y ∈ Z/nZ then the difference
f(x) − f(y) = (π_1(x) − π_1(y)) + (π_2(x) − π_2(y))
is the sum of two uniformly random nonzero elements, so f(x) = f(y) with probability 1/(n − 1), not 1/n. Thus the probability that f(x) ≠ f(y) is smaller than we previously suggested by a factor of
(1 − 1/(n − 1)) / (1 − 1/n) = 1 − 1/n^2 + O(1/n^3).
There are a total of n^2/2 + O(n) pairs x, y, so we might guess that the probability that π_1 + π_2 is a bijection is more like
(1 − 1/n^2)^{n^2/2} · n · n!/n^n ≈ e^{−1/2} n!/n^{n−1}.
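The heuristic above is easy to sanity-check by brute force for very small odd n. The following Python sketch (not part of the paper; an illustrative pure-Python enumeration, so only feasible for tiny n) counts s_n exactly via the identification s_n = n! · #{permutations σ of Z/nZ : x → x + σ(x) is a bijection} and compares it with e^{−1/2} n!^3/n^{n−1}.

    from itertools import permutations
    from math import factorial, exp

    def s(n):
        # s_n = n! * #{sigma in Sym(Z/nZ) : x -> x + sigma(x) is a bijection}
        count = 0
        for sigma in permutations(range(n)):
            if len({(x + sigma[x]) % n for x in range(n)}) == n:
                count += 1
        return factorial(n) * count

    for n in (3, 5, 7, 9):
        predicted = exp(-0.5) * factorial(n) ** 3 / n ** (n - 1)
        print(n, s(n), round(predicted, 2))

As expected, the ratio of the exact count to the prediction drifts towards 1 as n grows, though for such tiny n the agreement is still rough.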
Our proof applies with only notational modifications when Z/nZ is replaced by any abelian group G of odd order. To be precise, given a finite abelian group G, let s(G) be the number of pairs of bijections π 1 , π 2 : {1, . . . , |G|} → G such that π 1 + π 2 is also a bijection. Then by the same method which proves Theorem 1.2, we have the following theorem. Theorem 1.3. Let G be a finite abelian group of odd order n. Then
s(G) = (e^{−1/2} + o(1)) n!^3/n^{n−1}.
There are additional complications however when G is allowed to have even order, say when G = (Z/2Z) d , and we do not know whether the same method can be made to work in this case.
Like the toroidal semiqueens problem, the toroidal queens problem -the problem of counting the number of ways of arranging n mutually nonattacking queens on an n × n toroidal chessboard -is equivalent to counting the number of solutions to a particular linear system in the set of bijections, namely the system π 1 + π 2 = π 3 , π 1 − π 2 = π 4 .
Linear systems of complexity 2 like this one are known not to be controlled by Fourier analysis alone, but there is some hope that one could use the higher-order theory pioneered by Gowers. This is an interesting avenue which has not yet been fully explored.
Notation. Although we have already implicitly introduced most of the notation and conventions, we include them here for the reader's convenience. We use the standard O-notation. To be concrete, for functions
f, g : N → R we write f (n) = O(g(n)) if |f (n)| ≤ Cg(n) for some constant C > 0, or f (n) ≤ O X (g(n)
) if C is allowed to depend on the parameter X. We write f (n) = o(g(n)) if f (n)/g(n) → 0 as n tends to infinity. For a finite set S and function f , we will denote the average of f on the set S by E s∈S f (s). We use a primed sum ′ s1,...,s k ∈S to denote the sum over all k-tuples of distinct elements s 1 , . . . , s k ∈ S. All the groups we will work with will be finite, and we equip each of them with the either uniform probability or counting measure, depending on whether we are working on the physical or the frequency side, respectively. Of course, this convention is respected in our definitions of convolution, inner product and L 2 -norm. Finally, we will also use the standard notation e(x) = e 2πix .
Acknowledgements. We would like to thank Ben Green for communicating the problem of estimating s n , Robin Pemantle for a discussion in connection with the method used to prove Theorem 4.1, and Fernando Shao for useful conversations.
Outline of the proof
Our approach to proving Theorem 1.2 is Fourier-analytic. Write G = Z/nZ for n an odd integer, and write S ⊂ G n for the set of bijections {1, . . . , n} → G. Our goal is to compute the quantity
E_{x,y∈G^n} 1_S(x) 1_S(y) 1_S(x + y) = ⟨1_S * 1_S, 1_S⟩.
A standard application of Parseval's identity shows that this quantity can be expressed in terms of the Fourier transform of 1_S as
Σ_{χ∈Ĝ^n} |1_S(χ)|^2 1_S(χ). (2.1)
Here we identify the dual group Ĝ with (1/n)Z/Z in the usual way, so that for χ = (r_1, . . . , r_n) ∈ ((1/n)Z/Z)^n we have explicitly
1_S(r_1, . . . , r_n) = (1/n^n) Σ′_{x_1,...,x_n} e(−r_1 x_1 − · · · − r_n x_n), (2.2)
where, as mentioned in the previous section, we use ′ to denote the sum over distinct x 1 , . . . , x n ∈ G, i.e., the sum over S. In fact it is clear that 1 S is real-valued, since S = −S, and hence we may drop the absolute value signs in (2.1).
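For very small n the identity behind (2.1) can be checked directly. The following Python sketch (not from the paper; it uses only the standard library and is feasible only for n = 3, since it enumerates all n^n characters) computes the Fourier coefficients of 1_S from (2.2) and verifies that the sum of their cubes equals E_{x,y} 1_S(x) 1_S(y) 1_S(x + y) = s_n/n^{2n}.

    from itertools import product, permutations
    from cmath import exp, pi

    n = 3
    S = set(permutations(range(n)))  # bijections {1,...,n} -> Z/nZ as tuples

    def coeff(chi):
        # (2.2): 1_S(chi) = n^{-n} * sum over S of e(-r_1 x_1 - ... - r_n x_n)
        return sum(exp(-2j * pi * sum(r * x for r, x in zip(chi, xs)) / n)
                   for xs in S) / n ** n

    lhs = sum(coeff(chi).real ** 3 for chi in product(range(n), repeat=n))
    rhs = sum(tuple((a + b) % n for a, b in zip(x, y)) in S
              for x in S for y in S) / n ** (2 * n)
    print(lhs, rhs)  # both should equal s_3 / 3^6 = 18/729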
The form of the proof can then be viewed as an analogue of the Hardy-Littlewood circle method in analytic number theory, adapted to the group G n rather than Z. We borrow some nomenclature from the classical setting.
• There are a small number of characters χ ∈ G n , namely the characters χ = (r 1 , . . . , r n ) for which almost all of the r i are equal, that make a substantial contribution to the sum (2.1). We call the totality of these χ the major arcs. We compute explicitly the contribution of these χ to (2.1), up to small errors: this is an analogue of the singular series, which accounts for the main term in Theorem 1.2, including the bizarre constant e −1/2 . For all this see Section 3.
• We bound the contribution from all other characters using the triangle inequality; that is, we obtain an upper bound for all other χ | 1 S (χ)| 3 that is smaller than the main term. We call such χ collectively the minor arcs.
A large part of this paper is therefore devoted to obtaining good bounds for Fourier coefficients 1 S (χ), either pointwise or on average in a suitable sense. In fact we will need to combine several different arguments which are effective in different regimes.
2.1. Preliminaries on 1 S . In order to describe how the characters G n are divided up into different pieces in which different approaches will be effective, we will need some preliminary remarks.
First, note that the function 1_S(r_1, . . . , r_n) is invariant under permutation of the r_i: 1_S(r_1, . . . , r_n) = 1_S(r_{σ(1)}, . . . , r_{σ(n)}) for any bijection σ : {1, . . . , n} → {1, . . . , n}. This is immediate from the definition (2.2). Hence it makes sense to specify a Fourier coefficient of interest in the form 1_S(r_1^{a_1}, . . . , r_k^{a_k}), where the r_i are distinct, the a_i are positive integers such that Σ a_i = n, and the notation r^a means r repeated a times.
Second, observe that 1_S(r_1, . . . , r_n) = 0 unless r_1 + · · · + r_n = 0. This follows from the fact that S is invariant under global shifts (x_i) → (x_i + t), so one can compute
1_S(r_1, . . . , r_n) = (1/n^n) Σ′_{x_1,...,x_n} e(−r_1 x_1 − · · · − r_n x_n) = (1/n^n) Σ′_{x_1,...,x_n} e(−r_1(x_1 + t) − · · · − r_n(x_n + t)) = 1_S(r_1, . . . , r_n) e(−(r_1 + · · · + r_n)t).
Dually, note that 1_S is invariant under global shifts of the character. This follows from the fact that x_1 + · · · + x_n = 0 for any (x_i) ∈ S. Hence,
1_S(r_1, . . . , r_n) = (1/n^n) Σ′_{x_1,...,x_n} e(−r_1 x_1 − · · · − r_n x_n) = (1/n^n) Σ′_{x_1,...,x_n} e(−(r_1 + t)x_1 − · · · − (r_n + t)x_n) = 1_S(r_1 + t, . . . , r_n + t).
Finally, note the trivial bound
|1_S(χ)| ≤ 1_S(0) = n!/n^n.
2.2. Entropy ranges for minor arcs. We now explain a straightforward but still fairly powerful bound on | 1 S (χ)| that follows directly from these elementary considerations.
Proposition 2.1 ("Entropy bound"). We have
|1_S(r_1^{a_1}, . . . , r_k^{a_k})| ≤ \binom{n}{a_1, . . . , a_k}^{−1/2} (n!/n^n)^{1/2}.
Proof. Let O ⊂ Ĝ^n denote the set of characters obtained by permuting the elements of (r_1^{a_1}, . . . , r_k^{a_k}). Hence |O| = \binom{n}{a_1, . . . , a_k}, and by permutation invariance, 1_S takes a constant value on O. Thus by Parseval,
n!/n^n = Σ_{χ∈Ĝ^n} |1_S(χ)|^2 ≥ Σ_{χ∈O} |1_S(χ)|^2 = |O| |1_S(r_1^{a_1}, . . . , r_k^{a_k})|^2,
and the result follows.
The term "entropy bound" refers to the fact that the quantity H(χ) = (1/n) log \binom{n}{a_1,...,a_k} is roughly the entropy Σ_{i=1}^{k} (a_i/n) log(n/a_i) ∈ [0, log n] of a random variable taking the value r_i with probability a_i/n. In a slight abuse of nomenclature, we refer to this first quantity H(χ) as the entropy of χ. In this language, the bound of Proposition 2.1 is precisely exp(−Hn/2)(n!/n^n)^{1/2}, or roughly exp(−Hn/2 − n/2). This is already sufficient to control the contribution to (2.1) for most characters χ.
Corollary 2.2. We have
Σ_{χ : H(χ)≥R} |1_S(χ)|^3 ≤ exp((3 − R)n/2) (n!/n^n)^3.
Proof. By Parseval and Proposition 2.1,
Σ_{χ : H(χ)≥R} |1_S(χ)|^3 ≤ (Σ_{χ : H(χ)≥R} |1_S(χ)|^2) sup_{χ : H(χ)≥R} |1_S(χ)| ≤ (Σ_{χ∈Ĝ^n} |1_S(χ)|^2) sup_{χ : H(χ)≥R} exp(−H(χ)n/2) (n!/n^n)^{1/2} ≤ exp(−Rn/2) (n!/n^n)^{3/2} ≤ exp((3 − R)n/2) (n!/n^n)^3
as required, where we have used the estimate n! > (n/e) n . This is smaller than the main term for any fixed R > 3, so from now on such high-entropy characters need not concern us. We refer to such characters as the high-entropy minor arcs. Since a typical character χ has entropy comparable to log n, the high-entropy case comprises almost all characters in some sense, so this is a good first step.
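To get a feel for the quantities involved, the entropy H(χ) and the bound of Proposition 2.1 are easy to evaluate numerically from the multiplicities (a_1, . . . , a_k) alone. The following Python sketch (illustrative only, not from the paper; it uses lgamma to evaluate log-factorials) does this for a character of the troublesome shape (r^{n/3}, (−r)^{n/3}, 0^{n/3}) with n = 3003.

    from math import lgamma, log

    def log_multinomial(a):
        n = sum(a)
        return lgamma(n + 1) - sum(lgamma(ai + 1) for ai in a)

    n = 3003
    a = (1001, 1001, 1001)                      # multiplicities of r, -r and 0
    H = log_multinomial(a) / n                  # entropy H(chi), roughly log 3
    log_trivial = lgamma(n + 1) - n * log(n)    # log of the trivial bound n!/n^n
    log_prop21 = -0.5 * log_multinomial(a) + 0.5 * log_trivial
    print(H, log_prop21, log_trivial)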
On the opposite extreme we have characters χ with entropy o(1), such as χ = (r^k, (−r)^k, 0^{n−2k}) for k = o(n). The characters with entropy O(log n / n) are the major arcs, but between O(log n / n) and o(1) we have the low-entropy minor arcs. Such characters are necessarily of the form (. . . , r^{n−o(n)}), i.e., they take a single value almost all the time. By the remarks above we may shift without loss of generality to assume that this value r is zero. Thus the low-entropy minor arcs are closely related to the set of sparse characters, characters χ comprised almost entirely of zeros. That leaves the characters χ with o(1) ≤ H(χ) ≤ 3, which form the medium-entropy minor arcs. A good model case are characters such as χ = (r^{n/3}, (−r)^{n/3}, 0^{n/3}), which are particularly troublesome. There are two fundamental areas of inefficiency in the arguments of Proposition 2.1 and Corollary 2.2 that prevent them from giving good bounds for low- or medium-entropy minor arcs. First, the bound on |1_S| given by Proposition 2.1 is less effective for smaller H, and in particular worse than trivial when H < 1.
Clearly there is no hope for this programme unless we can find a bound on | 1 S | that is nontrivial throughout the range 0 < H < 1, and moreover obtains a substantial exponential saving for most of that range.
Second, in the proof of Corollary 2.2 we made use of the convenient bound
Σ_{χ∈X} |1_S(χ)|^3 ≤ (Σ_{χ∈Ĝ^n} |1_S(χ)|^2) sup_{χ∈X} |1_S(χ)|
for some appropriate set X ⊂ G n . It is fairly clear that we cannot afford this luxury for small entropies: since the main term has order (n!/n n ) 3 , we need | 1 S (χ)| = o((n!/n n ) 2 ) for all χ ∈ X for this to be effective.
This is a saving of about exp(−n) over the trivial bound of n!/n n , and it is simply not reasonable to expect such a bound to hold if, say, H(χ) = 1/1000. One way around this is to pigeonhole the characters χ into various sets X i , and apply such an L 2 · L ∞ bound on each set. To make this work, we would need a bound on
Σ_{χ∈X_i} |1_S(χ)|^2
which makes a significant saving over the crude estimate
Σ_{χ∈X_i} |1_S(χ)|^2 ≤ Σ_{χ∈Ĝ^n} |1_S(χ)|^2
employed in Corollary 2.2. Alternatively, one could try to bound the L^3 sum Σ_{χ∈X_i} |1_S(χ)|^3 directly, using an L^∞ bound and an estimate for the size of X_i.
We address both of these inefficiencies in order to reach our final bound. Specifically, we will do each of the following.
• Improving on Proposition 2.1, we obtain a good general-purpose bound for | 1 S (χ)| that is nontrivial for values of H(χ) approaching zero. Roughly speaking, where Proposition 2.1 gives the bound e −Hn/2−n/2 we will prove the bound e −Hn/2−n . This appears in Section 4.
• We use this bound to estimate the contribution to (2.1) from χ with 10 −10 < H(χ) < 10, say, by using a dyadic decomposition and a simple estimate for the number of characters of a given entropy.
Thus we dispatch the medium-entropy minor arcs. This is covered in Section 7.
• Separately, we obtain a good estimate for
Σ_{m-sparse χ} |1_S(χ)|^2,
i.e., an improvement to Parseval when summing only over those characters χ having exactly m nonzero terms. Here we might imagine m ≤ n/1000. We call this a sparseval bound, and prove this in Section 5.
• Finally, we obtain a slightly different L ∞ bound specialized to the case of sparse characters χ.
This appears in Section 6. The total contribution from the low-entropy minor arcs is controlled by combining this with the L 2 bound above.
In summary, we split the universe of all χ into several slightly overlapping regions -major arcs, lowentropy minor arcs, medium-entropy minor arcs, and high-entropy minor arcs -and apply a cocktail of different bounds and explicit computations adapted to each region.
2.3. Interpretations of minor arc bounds. As we have said, a significant part of our effort will be spent proving nontrivial bounds on Fourier coefficients |1_S(χ)| for χ in the minor arcs. Although such bounds are at first glance fairly esoteric, in fact they have natural interpretations. For instance, they are intimately related to questions of the following flavour.
Question 2.3. Suppose the elements of G = Z/nZ are divided into k sets A_1, . . . , A_k, each of a predetermined size a_i ≈ n/k, at random. Consider the random variables T_i = Σ_{r∈A_i} r ∈ G for i = 1, . . . , k. The vector T = (T_1, . . . , T_k) is a random variable taking values in the subgroup H = {(x_i) ∈ G^k : Σ_i x_i = 0} of G^k. How close is T to being equidistributed on H, in a quantitative sense? In other words how close is the law of T to the uniform distribution on H?
Because of the connection of this question to the size of the Fourier coefficients 1 S (r a1 1 , . . . , r a k k ), our bounds give a strong answer to this question in many regimes. It is not inconceivable that these kind of bounds may find applications elsewhere.
Major arcs
In this section we compute 1 S (r 1 , . . . , r m , 0, . . . , 0) explicitly whenever m is bounded. To this end observe See, e.g., [Bón15] for more information. To apply this to 1 S , say that a partition P of {1, . . . , m} kills (r 1 , . . . , r m ) if i∈P r i = 0 for each P ∈ P, and observe that The following two observations are immediate from this formula:
• Suppose every killing partition of (r 1 , . . . , r m ) has at most k parts. Then
| 1 S (r 1 , . . . , r m , 0, . . . , 0)| ≤ O m 1 n m−k n! n n .
(3.3)
• Suppose that m is even, that r 1 , . . . , r m ∈ G are nonzero, and that (r 1 , . . . , r m ) is killed by a unique partition with m/2 parts. In other words suppose that r 1 , . . . , r m ∈ G are distinct and nonzero, and Since the number of (r 1 , . . . , r m ) killed by P is bounded by n m−k , the contribution to (3.5) from these (r 1 , . . . , r m ) is bounded by
n m n m−k O m (1) n m−k n! n n 3 = O m 1 n m−2k n! 3 n 3n ,
which is satisfactory unless k = m/2. Moreover the number of (r 1 , . . . , r m ) killed by at least two partitions P with m/2 parts is bounded by n m/2−1 , so the contribution from these (r 1 , . . . , r m ) is bounded by This proves the proposition.
Square-root cancellation for general Fourier coefficients
The aim of this section is to prove the following refinement of Proposition 2.1.
Theorem 4.1. Suppose χ = (r_1^{a_1}, . . . , r_k^{a_k}), where Σ_{i=1}^{k} a_i = n. Then
|1_S(χ)| ≤ \binom{n+k−1}{k−1}^{1/2} (a_1! · · · a_k!)^{1/2} (n!)^{1/2} / n^n.
• The terms depending on k are easily seen to be a small error in the regimes we care about. Indeed, if H(χ) = O(1) then χ can take at most O(n/ log n) = o(n) distinct values, and the additional term is then exp(o(n)), and thus small compared to the exponential entropy saving.
• The saving over Proposition 2.1 is an apparently modest factor of around e −n/2+o(n) . This may be uninspiring if H is large, but if H is small it is decisive.
• In general this bound is not sharp, often by a substantial amount. However, it is sometimes attained asymptotically for very special, arithmetically structured classes of characters. For instance, if we temporarily allow n to be even, then one can compute directly that log | 1 S ((1/2) From the above remarks, we see that one might hope to improve on Theorem 4.1, but only by ruling out the particular characters discussed above and others similar to them. For instance, a strengthening is almost certainly true under the assumption that n is prime. Thus one might think of Theorem 4.1 as the best general-purpose bound available.
We should also comment briefly on the term "square-root cancellation". If χ = (r a1 1 , . . . , r a k k ) then
1_S(χ) = (n!/n^n) E_{partitions P} e(−r_1 Σ_{x∈P_1} x − · · · − r_k Σ_{x∈P_k} x),
where the expectation is over all ordered partitions P = (P_1, . . . , P_k) of G into k pieces with |P_i| = a_i. The number of such partitions is precisely \binom{n}{a_1,...,a_k}; assuming that the phases in the average behave randomly then we should expect square-root cancellation, and we heuristically recover the bound in Theorem 4.1 up to lower-order terms.
Proof of Theorem 4.1. Let γ be the Gaussian measure on C k defined with respect to Lebesgue measure λ by
dγ/dλ = π^{−k} exp(−Σ_{i=1}^{k} |z_i|^2).
We claim that
n^n 1_S(χ) = ∫_{C^k} \bar z_1^{a_1} · · · \bar z_k^{a_k} ∏_{x∈G} (Σ_{i=1}^{k} e(−r_i x) z_i) dγ. (4.1)
One can check this by expanding the product and using the identity
∫_{C^k} ∏_{i=1}^{k} z_i^{a_i} \bar z_i^{b_i} dγ = ∏_{i=1}^{k} a_i! if a_i = b_i for each i, and 0 otherwise. (4.2)
We can now proceed by bounding the integral in (4.1) using various techniques. Applying the Cauchy-Schwarz inequality we bound the right hand side by
(∫_{C^k} |z_1|^{2a_1} · · · |z_k|^{2a_k} dγ)^{1/2} (∫_{C^k} |∏_{x∈G} Σ_{i=1}^{k} e(−r_i x) z_i|^2 dγ)^{1/2}. (4.3)
The first factor here is exactly
(∏_{i=1}^{k} a_i!)^{1/2}
by (4.2). If we now apply the AM-GM inequality to the second term and evaluate the resulting integral, we get
∫_{C^k} |∏_{x∈G} Σ_{i=1}^{k} e(−r_i x) z_i|^2 dγ ≤ ∫_{C^k} ((1/n) Σ_{x∈G} |Σ_{i=1}^{k} e(−r_i x) z_i|^2)^n dγ = ∫_{C^k} (Σ_{i=1}^{k} |z_i|^2)^n dγ = Σ_{s_1+···+s_k=n} \binom{n}{s_1, . . . , s_k} ∫_{C^k} |z_1|^{2s_1} · · · |z_k|^{2s_k} dγ = n! Σ_{s_1+···+s_k=n} 1 = n! \binom{n+k−1}{k−1}.
Thus the theorem follows by dividing (4.3) by n n .
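The Gaussian moment identity (4.2) underpinning this proof is easy to test by Monte Carlo. The following Python sketch (not from the paper; it assumes numpy is available) samples standard complex Gaussians with density π^{−1} exp(−|z|^2) and checks that E[z^a conj(z)^b] is approximately a! when a = b and approximately 0 otherwise.

    import numpy as np
    from math import factorial

    rng = np.random.default_rng(0)
    N = 1_000_000
    # density pi^{-1} exp(-|z|^2): real and imaginary parts have variance 1/2
    z = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

    for a, b in [(2, 2), (3, 3), (2, 3)]:
        est = np.mean(z ** a * np.conj(z) ** b)
        print((a, b), est, factorial(a) if a == b else 0)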
Remark 4.3. Though the identity (4.1) is easily verified directly, we briefly sketch here a possible route by which one might be motivated to write down such a formula in the first place.
Let {X x : x ∈ G} be indeterminates, and observe that the left hand side of (4.1) is precisely the coefficient of x∈G X x in the multivariate polynomial In some generality, it is possible to equate a quantity of the form
∂ b1 ∂Y b1 1 . . . ∂ b k ∂Y b k k f Y1=···=Y k =0 ,
where f (Y 1 , . . . , Y k ) is a suitable holomorphic function of k complex variables, with the corresponding integral
expression C k Y b1 1 . . . Y b k k f (Y 1 , . . . , Y k ) dγ,
where again γ denotes Gaussian measure. This can be verified by applying the case k = 1 iteratively, which in turn follows by averaging the usual Cauchy integral formula over different radii. This formula can be considered a particular higher-dimensional variant of the Cauchy integral formula.
The identity (4.1) follows by applying this identity to (4.4) and making an orthogonal change of variables.
Sparseval
We recall from Section 2 that one of our goals is to obtain good bounds on
Σ_{m-sparse χ} |1_S(χ)|^2, (5.1)
where the sum is over all characters χ = (r 1 , . . . , r n ) with precisely m nonzero entries. First consider the related sum
Σ_{≤m-sparse χ} |1_S(χ)|^2, (5.2)
i.e., the sum over characters with at most m non-zero entries. This can be bounded by applying Parseval to the set S_m of injections {1, . . . , m} → G as a subset of the group G^m. Indeed, letting N(χ) be the set of indices i for which r_i is nonzero, we have
Σ_{≤m-sparse χ} |1_S(χ)|^2 ≤ Σ_{|N|=m} Σ_{N(χ)⊂N} |1_S(χ)|^2 = \binom{n}{m} Σ_{r_1,...,r_m} |1_S(r_1, . . . , r_m, 0, . . . , 0)|^2 = \binom{n}{m} ((n − m)!^2/n^{2n−2m}) Σ_{r_1,...,r_m} |1_{S_m}(r_1, . . . , r_m)|^2 = \binom{n}{m} (n − m)! n!/n^{2n−m}. (5.3)
Here we used (3.1).
This is our first nontrivial sparseval bound. It turns out that overcounting as above is too inefficient in the context of the wider argument, when, say, m/n is a small constant. The purpose of the rest of this section therefore is to improve the bound on (5.1) as much as possible.
The starting point of our strategy is to apply inclusion-exclusion to obtain an expression for (5.1). For m ≤ n let Q(m, n) = (n^{2n}/n!^2) Σ_{m-sparse χ} |1_S(χ)|^2, and observe as in (5.3) that
n^m/m! = (n^{2n}/n!^2) Σ_{|N|=m} Σ_{N(χ)⊂N} |1_S(χ)|^2 = (n^{2n}/n!^2) Σ_{k=0}^{m} \binom{n−k}{m−k} Σ_{|N(χ)|=k} |1_S(χ)|^2 = Σ_{k=0}^{m} \binom{n−k}{m−k} Q(k, n).
By inverting this relation we have
Q(m, n) = Σ_{k=0}^{m} (−1)^{m−k} \binom{n−k}{m−k} n^k/k! = (1/m!) Σ_{k=0}^{m} \binom{m}{k} (−1)^{m−k} (n − m + 1) · · · (n − k) n^k. (5.4)
This expression exhibits a vast amount of cancellation. To exploit this, we employ a generating function argument, followed by some complex analysis in the spirit of the saddle-point method (see [FS09, Chapter VIII] for background); see also Remark 5.2 for some further intuition.
Observe from (5.4) that Q(m, n) is exactly the coefficient of X^m in
f(X) = n^{n+1} e^X / (n + X)^{n−m+1}. (5.5)
Thus by Cauchy's formula
Q(m, n) = (n^{n+1}/2πi) ∮_{|z|=r} e^z / ((n + z)^{n−m+1} z^{m+1}) dz,
for any r in the range 0 < r < n, so
Q(m, n) ≤ (n^{n+1}/r^m) max_{|z|=r} |e^z / (n + z)^{n−m+1}|.
We claim that |e^z/(n + z)^{n−m+1}| has only two local maxima on |z| = r: one at z = +r and the other at z = −r. To see this note that if |z| = r and ℜz = t then
|e^z/(n + z)^{n−m+1}|^2 = e^{2t}/(n^2 + r^2 + 2nt)^{n−m+1},
so
(d/dt) log |e^z/(n + z)^{n−m+1}|^2 = 2 − 2n(n − m + 1)/(n^2 + r^2 + 2nt).
This function has a unique pole at t = −(n + r^2/n)/2, which is to the left of the allowed region −r ≤ t ≤ r, and it has a unique zero somewhere corresponding to a minimum, not a maximum, so the claim holds. Thus |e^z/(n + z)^{n−m+1}| is bounded on |z| = r by e^{±r}/(n ± r)^{n−m+1}, so
Q(m, n) ≤ n^{n+1} e^{±r} / (r^m (n ± r)^{n−m+1}) = (e^{±r}/(1 ± r/n)^{n−m+1}) (n/r)^m.
Here
log(e^{±r}/(1 ± r/n)^{n−m+1}) = ±r − (n − m + 1)(±r/n − r^2/2n^2 + O(r^3/n^3)) = r^2/2n + O(r^3/n^2 + rm/n).
Let ∂ be the forward difference operator defined on functions g : N → R by
∂g(k) = g(k + 1) − g(k),
and observe that
Q(m, n) = 1 m! ∂ m g m,n (0),
where g m,n is the polynomial defined by g m,n (k) = n k (n − k)(n − k − 1) · · · (n − m + 1).
Intuitively, the sum (5.4) exhibits enormous cancellation, precisely because it is an expression for a high derivative of a very "smooth" function g m,n . To take advantage of this smoothness we investigate the first derivative ∂g m,n . We compute ∂g m,n (k) = n k+1 (n − k − 1)(n − k − 2) · · · (n − m + 1) − n k (n − k)(n − k − 1) · · · (n − m + 1) = kn k (n − k − 1)(n − k − 2) · · · (n − m + 1) = k n g m,n (k + 1).
From this it is possible to control the higher derivatives: using the Leibniz-type formula
∂(gh)(k) = ∂g(k)h(k + 1) + g(k)∂h(k)
we derive the recurrence ∂ ℓ g m,n (k) = ∂ ℓ−1 k n g m,n (k + 1) = k n ∂ ℓ−1 g m,n (k + 1) + ℓ − 1 n ∂ ℓ−2 g m,n (k + 2), which holds for ℓ ≥ 2. In particular, letting
a ℓ = 1 ℓ! ∂ ℓ g m,n (m − ℓ),
we have the recurrence
ℓ a_ℓ = ((m − ℓ)/n) a_{ℓ−1} + (1/n) a_{ℓ−2} (5.6)
for (a ℓ ) ℓ≥0 . Observe that a 0 = n m , a 1 = (m − 1)n m−1 , and that Q(m, n) = a m . In Section 6 we will bound solutions to recurrences like (5.6) in a direct fashion. For (5.6) itself, it is convenient to employ a generating function technique. For an indeterminate X put
f (X) = ∞ ℓ=0 a ℓ X ℓ ,
and observe by multiplying (5.6) by X ℓ−1 and summing for ℓ ≥ 2 that
f ′ (X) − a 1 = m − 1 n (f (X) − a 0 ) − 1 n Xf ′ (X) + 1 n Xf (X) .
By inputting a 0 = n m and a 1 = (m − 1)n m−1 and rearranging we arrive at
(n + X)f′(X) = (m − 1 + X)f(X).
The formula (5.5) is obtained by solving this differential equation.
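The forward-difference identity ∂g_{m,n}(k) = (k/n) g_{m,n}(k+1), which drives the recurrence (5.6), can be confirmed in exact integer arithmetic. The following Python sketch (not from the paper; the particular values n = 20, m = 7 are arbitrary) does so.

    from math import prod

    def g(k, m, n):
        # g_{m,n}(k) = n^k (n-k)(n-k-1)...(n-m+1)
        return n ** k * prod(n - j for j in range(k, m))

    n, m = 20, 7
    for k in range(m):
        # exact check of g(k+1) - g(k) == (k/n) g(k+1); n divides g(k+1)
        assert g(k + 1, m, n) - g(k, m, n) == k * g(k + 1, m, n) // n
    print("forward-difference identity verified for n=20, m=7")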
6. An L ∞ bound for low-entropy minor arcs
The main result of this section is the following proposition.
Proposition 6.1. Let χ be a character with exactly m nonzero coordinates, where m ≤ n/3. Then
|1_S(χ)| ≤ e^{O(m^{3/2}/n^{1/2} + m^{1/2})} · 2^{−m/2} \binom{n}{m}^{−1/2} n!/n^n.
It is instructive to compare this bound to what one can deduce from Theorem 4.1. If χ has exactly m nonzero coordinates and takes precisely k distinct nonzero values, i.e., χ = (r_1^{a_1}, . . . , r_k^{a_k}, 0^{n−m}) where Σ a_i = m, then Theorem 4.1 gives us a bound, and the worst case of this bound over all choices of k ≤ m and a_1, . . . , a_k can be computed approximately as
|1_S(χ)| ≤ \binom{n}{m}^{−1/2} e^{o(m)} n!/n^n,
provided that m is significantly greater than √n. In other words a character similar to χ = (r^m, 0^{n−m}) is asymptotically about as bad as the worst case. Moreover if we allow n to be even and set r = 1/2 then the bound from Theorem 4.1 is essentially sharp, as remarked in that section.
However, in this setting, Proposition 6.1 improves on this bound by a significant factor of 2 −m/2 . Clearly then such an improvement is only possible under the assumption that n is odd, and the proof of Proposition 6.1 must exploit this assumption in a fundamental way. Since we do not know how to adapt the proof of Theorem 4.1 to exploit the absence of 2-torsion, we employ a more specialized approach that suffices in the sparse regime.
We observe that this is the unique stage of the proof in which we need the full strength of the hypothesis that G has odd order. Elsewhere we only need to assume that x∈G x = 0.
Proof of Proposition 6.1. Let χ = (r 1 , . . . , r m , 0 n−m ). The key tool in our proof is an exact recursive formula for Fourier coefficients 1 S (χ), in terms of related Fourier coefficients for which m ′ < m. Specifically, as long as r m is nonzero we have As an aside, we remark that this formula can be used for efficient explicit computation of certain kinds of Fourier coefficient.
Let U m be the maximal value of the | 1 S | taken over all characters with exactly m nonzero coordinates. If we apply the triangle inequality to the right hand side of (6.1) and bound each | 1 S (χ ′ )| appearing there by the appropriate value U m ′ (where m ′ < m), we obtain recursive bounds on the U m .
However, there is a subtlety in this process: the number of nonzero coordinates of (r 1 , . . . , r i + r m , . . . , r m−1 , 0, . . . , 0) might be either (m − 1) (if r i + r m = 0) or (m − 2) (if r i + r m = 0), and we do not have much control over which occurs. Since we expect U m−1 < U m−2 , it is in our interests to land in the former case as much as possible. We have the freedom to reorder the r i before applying (6.1), so our goal is to optimize the recursive bound over all choices of r m .
For each i ≤ m let N i = {j ≤ m : r j = −r i }. By reordering, we may assume without loss of generality that |N i | is minimized when i = m. Suppose j ∈ N m . Then r j = −r m , so since |G| is odd we have r j = r m , so N m ∩ N j = ∅. Thus by minimality |N m | ≤ m/2.
Hence, applying (6.1) we deduce the bound
| 1 S (χ)| ≤ |N m |U m−2 + (m − 1 − |N m |)U m−1 n − m + 1 ≤ 1 n − m + 1 max {(m − 1)U m−1 , (m/2)U m−2 + (m/2 − 1)U m−1 }
where the former term in the maximum covers the case U m−1 > U m−2 and the latter the more probable case U m−1 ≤ U m−2 . Hence
U m ≤ 1 n − m + 1 max {(m − 1)U m−1 , (m/2)U m−2 + (m/2 − 1)U m−1 } . (6.2)
Our task is now to obtain bounds on U m by solving this recurrence, which -ignoring the maximumresembles a kind of time-dependent Fibonacci sequence. In Remark 5.2 we dealt with a similar recurrence using generating function methods. Here, the presence of the maximum of two alternatives, and possibly other reasons, make this approach less attractive. Instead, we will bound (6.2) by more hands-on methods.
Specifically, we will phrase (6.2) in terms of a product of 2 × 2 matrices (as one might for the Fibonacci sequence), and control the L 2 norm of the result by bounding the L 2 → L 2 operator norms of each matrix in the product.
Write α_m = m/(2(n−m+1)), β_m = (m−2)/(2(n−m+1)), and γ_m = (m−1)/(n−m+1). Additionally, we work with a rescaled version V_m = (α_1 α_2 · · · α_m)^{−1/2} U_m of U_m, for m ≥ 1. From (6.2) we deduce that
V_m ≤ max{γ_m α_m^{−1/2} V_{m−1}, α_m^{1/2} α_{m−1}^{−1/2} V_{m−2} + β_m α_m^{−1/2} V_{m−1}} (6.3)
holds for any m ≥ 3. If we define matrices Completely analogously, using the easy bound γ 2 m α −1 m ≤ 4m/n, we deduce
M m L 2 →L 2 ≤ 1 + O(m/n).
Thus we have
V m ≤ V 2 m + V 2 m−1 ≤ m r=3 max { M r L 2 →L 2 , N r L 2 →L 2 } · V 2 2 + V 2 1 ≤ m r=3 1 + O (r/n) 1/2 + r −1/2 · V 2 2 + V 2 1 .
We already know from Section 2 that U 1 = 0 and it is evident from (6.1) that U 2 = O(n!/n n+1 ). Hence, we can bound the term V 2 2 + V 2 1 in the above inequality by O(n!/n n ), which gives V m ≤ e O(m 3/2 /n 1/2 +m 1/2 ) · n! n n , and the claim stated in the proposition follows easily from this.
End of the argument
To estimate the sum of cubes
χ∈ G n 1 S (χ) 3
we divide the set of all χ ∈ G n into three regions depending on the value of H(χ): a high-entropy range H ≥ 10, a medium-entropy range ε ≤ H ≤ 10, and a low-entropy range H ≤ ε. Here ε is a small positive constant which will be chosen at the end of the argument.
In the high-entropy range H ≥ 10 it is enough to use the bound from Corollary 2.2, Σ_{χ : H(χ)≥10} |1_S(χ)|^3 ≤ e^{−3.5n} n!^3/n^{3n}.
In the medium-entropy range ε ≤ H ≤ 10, we first need a bound for the number of characters of a given entropy.
Lemma 7.1. If H ≤ 10 then the number of characters of entropy at most H is bounded by e Hn+o(n) .
Proof. Every character of entropy at most H has an orbit under the permutation action of size at most e Hn , so it suffices to show that there are only e o(n) such orbits of characters of entropy at most 10. Every orbit is uniquely specified by giving the number a r of appearances of r for each r ∈ G, so we must show that the number of nonnegative integer vectors (a r ) r∈ G with r a r = n and satisfying n! r∈ G a r ! ≤ e 10n is e o(n) . Let δ > 0 be a small constant, and let t be the sum of the a r for which a r ≤ e −11/δ n. Then e −11n n n ≤ e −10n n! ≤ r∈ G a r ! ≤ r∈ G a ar r ≤ (e −11/δ n) t n n−t = e −11t/δ n n , so t ≤ δn. Since at most e 11/δ of the a r are bigger than e −11/δ n, the number of (a r ) r∈ G is bounded by the number of ways of choosing the set B of at most e 11/δ indices r for which a r is big, times the number of ways of choosing these a r , times the number of ways of choosing (a r ) r / ∈B so that r / ∈B a r = n − r∈B a r ≤ δn. This is bounded by n e 11/δ n e 11/δ n + δn − 1 δn − 1 .
The claimed estimate now follows by applying Stirling's formula and choosing δ appropriately.
We conclude by combining Lemma 7.1 with Theorem 4.1. Note that if the entropy of χ is O(1) then the number of distinct coordinates of χ is O(n/ log n), so Theorem 4.1 implies | 1 S (χ)| ≤ e −H(χ)n/2+o(n) n!/n n .
Thus for any H ≤ 10 we have Σ_{χ : 9H/10<H(χ)≤H} |1_S(χ)|^3 ≤ |{χ : H(χ) ≤ H}| max_{χ : H(χ)≥9H/10} |1_S(χ)|^3 ≤ e^{−7Hn/20+o(n)} n!^3/n^{3n}.
Decomposing the range [ε, 10] into ranges of this type and bounding crudely, we obtain
Σ_{χ : ε≤H(χ)≤10} |1_S(χ)|^3 ≤ O(log(1/ε)) e^{−7εn/20+o(n)} n!^3/n^{3n}.
In the low-entropy range H ≤ ε, one can easily verify that χ = (r 1 , . . . , r n ) must repeat some coordinate r ∈ G at least (1 − ε)n times. Thus by global shift-invariance of 1 S we have ≤ e O(m 3/2 /n 1/2 +m 1/2 ) 2 −m/2 n! 3 n 3n ≤ e O(ε 1/2 m+m 1/2 ) 2 −m/2 n! 3 n 3n .
As long as ε is sufficiently small depending on the constant implicit in the O(ε 1/2 m) term, this is negligible except when m has size O(1). In this range, we can apply Proposition 3.1. We introduce a further parameter M and split the sum (7.1) into the ranges 1 ≤ m ≤ 2M and 2M < m ≤ εn, to obtain e O(ε 1/2 m+m 1/2 ) 2 −m/2 n! 3 n 3n + O log(1/ε)e −7εn/20+o(n) n! 3 n 3n
+ O e −3.5n n! 3 n 3n .
Theorem 1.2 now follows by choosing ε and M appropriately.
that 1 S
1(r 1 , . . . , r m , 0, . . . , over distinct x 1 , . . . , x m can be related to a sum over partitions of {1, . . . , m} using a type of Möbius inversion: for any function F (x 1 , . . . , x m ) we have ′ x1,...,xm F (x 1 , . . . , x m ) = P µ(P) x1,...,xm xi=xj whenever i P ∼j F (x 1 , . . . , x m ), where the outer sum runs over partitions P of {1, . . . , m}, and µ(P) = (−1) m−|P| P ∈P (|P | − 1)!.
|P| if P kills (r 1 , . . . , r m (r 1 , . . . , r m , 0, . . . , 0) = (n − m)! n n P killing (r1,...,rm) µ(P)n |P| . (3.2)
that, up to a permutation, r 2j = −r 2j−1 for each j = 1, . . . , m/2. Then 1 S (r 1 , . . . , r m , 0, . . . ,Proof. First of all note by permutation-invariance thatm-sparse χ 1 S (χ) 3 = n m r1,...,rm =0 1 S (r 1 , . . . , r m , 0, . . . , 0) 3 .(3.5) For each (r 1 , . . . , r m ) choose a maximal-size partition P which kills (r 1 , . . . , r m ), and split the sum up according to P. Suppose P has k parts. Since P cannot have singletons we have k ≤ m/2, and by (3.3) we have | 1 S (r 1 , . . . , r m , 0, . . . , 0)| ≤ O m 1 n m−k n! n n .
again satisfactory. Thus we may restrict our attention to those (r 1 , . . . , r m ) which are killed by a unique partition P with m/2 parts, and for these (3.4) gives 1 S (r 1 , . . . , r m , 0, . . . ,For a fixed such P the number of such (r 1 , . . . , r m ) is n m/2 + O m (n m/2−1 ), and the number of such P is(m − 1)(m − 3) · · · 1 = m! 2 m/2 (m/2)! ,so the total contribution from these (r 1 , . . . , r m
Theorem 4 . 1 .
41Suppose χ = (r a1 1 , . . . , r a k k ), where k i=1 a i = n. Then | 1 S (χ)| ≤ n + k − 1
n. Similarly, numerical evidence strongly suggests that log | 1 S ((1/3) a , (−1/3) a , 0 n−2a )|/(n!/n n ) is divisible by 3; for example, when n = 3003, log | 1 S ((1/3) 1001 , (−1/3) 1001 , 0 1001 )|/(n!/n n ) = −1649.01782245..., 1645.46757758.... But, for example, the evidence also suggests that log | 1 S ((1/3) a , 0 n−a )|/(n!/n n )
zero (although in fact this evaluation is redundant as this expression is a constant function).
,...,rm | 1 S (r 1 , . . . , r m , 0, . . . , 0)| 2 = n m r1,...,rm (n − m)! 2 n 2n−2m | 1 Sm (r 1 , . . . , r m )
The starting point of our strategy is to apply inclusion-exclusion to obtain an expression for (5.1). For m ≤ n let Q(m, n) = n 2n n! 2 m-sparse χ | 1 S (r)| 2 , and observe as in (5.3) that
. 2 .
2Although it is straightforward to verify from (5.4) that Q(m, n) is the coefficient of X m in (5.5), this proof is unsatisfying in that one needs to know the formula for f in advance. Here is an alternative proof of this fact which does not have this fault.
Proposition 6. 1 .. 2 .
12Let χ be a character with exactly m nonzero coordinates, where m ≤ n/3. Then | 1 S (χ)| ≤ e O(m 3/2 /n 1/2 +m 1/2 ) · 2 It is instructive to compare this bound to what one can deduce from Theorem 4.1. If χ has exactly m nonzero coordinates and takes precisely k distinct nonzero values, i.e., χ = (r a1 1 , . . . , r a k k , 0 n−m ) where a i = m, then Theorem 4.1 gives us a bound of | 1 S (χ)| ≤ worst case of this bound over all choices of k ≤ m and a 1 , . . . , a k can be computed approximately as
(r 1 , . . . , r i + r m , . . . , r m−1 , 0, . . . , 0). (6.1)
v m = (V m , V m−1 ) T , we can write (6.3) equivalently as v m ≤ max {M m v m−1 , N m v m−1 } .Using the easily obtainable bounds α m α −1 m−1 = 1 + O(1/m) and β 2 m α −1 m ≤ m/n, we get that Tr N T m N m ≤ 2 + m/n + O(1/m) and det N T m N m = 1 + O(1/m), and so by considering the singular values of N m , we get the operator norm bound N m L 2 →L 2 ≤ 1 + O((m/n) 1/2 + m −1/2 ).
Proposition 6.1 with Theorem 5.1 we have that for any m ≤ εn,
Finally, by combining the three bounds we have just proved for each range H ≤ ε, ε ≤ H
References
[Bón15] M. Bóna. Handbook of Enumerative Combinatorics. Chapman and Hall/CRC, 2015.
[CGKN99] C. Cooper, R. Gilchrist, I. N. Kovalenko, and D. Novaković. Deriving the number of "good" permutations, with applications to cryptography. Kibernet. Sistem. Anal., (5):10-16, 187, 1999.
[CK96] C. Cooper and I. N. Kovalenko. The upper bound for the number of complete mappings. Theory Probab. Math. Statist., 53:77-83, 1996.
[Coo00] C. Cooper. A lower bound for the number of good permutations. Data Recording, Storage and Processing (Nat. Acad. Sci. Ukraine), 213:15-25, 2000.
[CW10] N. J. Cavenagh and I. M. Wanless. On the number of transversals in Cayley tables of cyclic groups. Discrete Appl. Math., 158(2):136-146, 2010.
[FS09] P. Flajolet and R. Sedgewick. Analytic Combinatorics. Cambridge University Press, Cambridge, 2009.
[GL15] R. Glebov and Z. Luria. On the maximum number of Latin transversals. Preprint, 2015. Available at http://arxiv.org/pdf/1506.00983.pdf.
[Kov96] I. N. Kovalenko. On an upper bound for the number of complete mappings. Kibernet. Sistem. Anal., (1):81-85, 188, 1996.
[Kuz07] N. Yu. Kuznetsov. Applying fast simulation to find the number of good permutations. Cybernet. Systems Anal., 43:830-837, 2007.
[Kuz08] N. Yu. Kuznetsov. Estimating the number of good permutations by a modified fast simulation method. Cybernet. Systems Anal., 44:547-554, 2008.
[Kuz09] N. Yu. Kuznetsov. Estimating the number of Latin rectangles by the fast simulation method. Cybernet. Systems Anal., 45:69-75, 2009.
[MMW06] B. D. McKay, J. C. McLeod, and I. M. Wanless. The number of transversals in a Latin square. Des. Codes Cryptogr., 40(3):269-284, 2006.
[RVZ94] I. Rivin, I. Vardi, and P. Zimmermann. The n-queens problem. Amer. Math. Monthly, 101(7):629-639, 1994.
[Slo] N. J. A. Sloane. The On-Line Encyclopedia of Integer Sequences. https://oeis.org/.
[Tar15] A. A. Taranenko. Multidimensional permanents and an upper bound on the number of transversals in Latin squares. J. Combin. Des., 23(7):305-320, 2015.
[Var91] I. Vardi. Computational Recreations in Mathematica. Addison-Wesley Publishing Company, Advanced Book Program, Redwood City, CA, 1991.
[Wan11] I. M. Wanless. Transversals in Latin squares: a survey. In Surveys in Combinatorics 2011, volume 392 of London Math. Soc. Lecture Note Ser., pages 403-437. Cambridge Univ. Press, Cambridge, 2011.
| [] |
[
"MONOTONE COMPLETE C*-ALGEBRAS AND GENERIC DYNAMICS",
"MONOTONE COMPLETE C*-ALGEBRAS AND GENERIC DYNAMICS"
] | [
"Kazuyuki Saitô ",
"J D Maitland ",
"Wright "
] | [] | [] | Let S be the Stone space of a complete, non-atomic, Boolean algebra. Let G be a countably infinite group of homeomorphisms of S. Let the action of G on S have a free dense orbit. Then we prove that, on a generic subset of S, the orbit equivalence relation coming from this action can also be obtained by an action of the Dyadic Group, Z 2 . As an application, we show that if M is the monotone cross product C * -algebra, arising from the natural action of G on C(S), and if the projection lattice in C(S) is countably generated then M can be approximated by an increasing sequence of finite dimensional subalgebras. On each S, in a class considered earlier, we construct a natural action of Z 2 with a free, dense orbit. Using this we exhibit a huge family of small monotone complete C * -algebras, (B λ , λ ∈ Λ) with the following properties: (i) Each B λ is a Type III factor which is not a von Neumann algebra. (ii) Each B λ is a quotient of the Pedersen-Borel envelope of the Fermion algebra and hence is strongly hyperfinite. The cardinality of Λ is 2 c , where c = 2 ℵ 0 . When λ = µ then B λ and Bµ take different values in the classification semi-group; in particular, they cannot be isomorphic. | 10.1112/plms/pds084 | [
"https://arxiv.org/pdf/1212.6503v1.pdf"
] | 119,634,734 | 1212.6503 | 26b25fdfc0885576145530dfac7b0fe2313552ce |
MONOTONE COMPLETE C*-ALGEBRAS AND GENERIC DYNAMICS
28 Dec 2012
Kazuyuki Saitô
J D Maitland
Wright
MONOTONE COMPLETE C*-ALGEBRAS AND GENERIC DYNAMICS
28 Dec 2012
Let S be the Stone space of a complete, non-atomic, Boolean algebra. Let G be a countably infinite group of homeomorphisms of S. Let the action of G on S have a free dense orbit. Then we prove that, on a generic subset of S, the orbit equivalence relation coming from this action can also be obtained by an action of the Dyadic Group, Z 2 . As an application, we show that if M is the monotone cross product C * -algebra, arising from the natural action of G on C(S), and if the projection lattice in C(S) is countably generated then M can be approximated by an increasing sequence of finite dimensional subalgebras. On each S, in a class considered earlier, we construct a natural action of Z 2 with a free, dense orbit. Using this we exhibit a huge family of small monotone complete C * -algebras, (B λ , λ ∈ Λ) with the following properties: (i) Each B λ is a Type III factor which is not a von Neumann algebra. (ii) Each B λ is a quotient of the Pedersen-Borel envelope of the Fermion algebra and hence is strongly hyperfinite. The cardinality of Λ is 2 c , where c = 2 ℵ 0 . When λ = µ then B λ and Bµ take different values in the classification semi-group; in particular, they cannot be isomorphic.
Introduction: monotone complete C * -algebras
Let A be a C * -algebra. Its self-adjoint part, A sa , is a partially ordered, real Banach space whose positive cone is {zz * : z ∈ A}. If each upward directed, normbounded subset of A sa , has a least upper bound then A is said to be monotone complete. Each monotone complete C * -algebra has a unit element (this follows by considering approximate units). Unless we specify otherwise, all C * -algebras considered in this note will possess a unit element. Every von Neumann algebra is monotone complete but the converse is false.
Monotone complete C * -algebras arise in several different areas. For example, each injective operator system can be given the structure of a monotone complete C * -algebra, in a canonical way. Injective operator spaces can be embedded as "corners" of monotone complete C * -algebras, see Theorem 6.1.3 and Theorem 6.1.6 [11] and [17,18].
When a monotone complete C * -algebra is commutative, its lattice of projections is a complete Boolean algebra. Up to isomorphism, every complete Boolean algebra arises in this way.
We recall that each commutative (unital) C * -algebra can be identified with C(X), the algebra of complex valued continuous functions on some compact Hausdorff space X. Then C(X) is monotone complete precisely when X is extremally disconnected, that is, the closure of each open subset of X is also open.
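As a simple illustration (added here; it is not part of the original text), C([0, 1]) fails to be monotone complete, since the connected space [0, 1] is not extremally disconnected. A minimal LaTeX computation:

\[
  f_n(t)\;=\;\min\bigl\{\,1,\; n\,\max\{\,t-\tfrac12,\;0\,\}\,\bigr\},\qquad n=1,2,\dots,
\]
is a norm-bounded, increasing sequence in $C([0,1])_{sa}$. Every upper bound $g\in C([0,1])$ satisfies $g\ge 1$ on $(\tfrac12,1]$ and hence $g(\tfrac12)\ge 1$; on the other hand, for each $\delta>0$ there is an upper bound vanishing on $[0,\tfrac12-\delta]$. A least upper bound would therefore have to vanish on $[0,\tfrac12)$ while being at least $1$ at $\tfrac12$, which no continuous function can do. So the sequence has no supremum in $C([0,1])$.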
Monotone complete C * -algebras are a generalisation of von Neumann algebras. The theory of the latter is now very well advanced. In the seventies the pioneering work of Connes, Takesaki and other giants of the subject transformed our knowledge of von Neumann algebras, see [38]. By contrast, the theory of monotone complete C * -algebras is very incomplete with many fundamental questions unanswered. But considerable progress has been made in recent years.
This article follows on from [34] where we introduced a classification semi-group for small monotone complete C * -algebras which divides them into 2 c distinct equivalence classes. But it is not necessary to have read that paper in order to understand this one. Our aim is to be comprehensible by anyone with a grounding in functional analysis and some exposure to the more elementary parts of C * -algebra theory, say, the first chapter of [39].
A monotone complete C * -algebra is said to be a factor if its centre is one dimensional; we may regard factors as being as far removed as possible from being commutative. Just as for von Neumann algebras, each monotone complete factor is of Type I or Type II 1 or Type II ∞ , or Type III. Old results of Kaplansky [24,25,26] imply that each Type I factor is a von Neumann algebra. This made it natural for him to ask if this is true for every factor. The answer is "no", in general. We call a factor which is not a von Neumann algebra wild.
A C * -algebra is separably representable when it has an isometric * -representation on a separable Hilbert space. As a consequence of more general results, Wright [48] showed that if a monotone complete factor is separably representable (as a C *algebra) then it is a von Neumann algebra. So, in these circumstances, Kaplansky's question has a positive answer.
Throughout this note, a topological space is said to be separable if it has a countable dense subset; this is a weaker property than having a countable base of open sets. (But if the topology is metrisable, they coincide.) Akemann [1] showed that a von Neumann algebra has a faithful representation on a separable Hilbert space if, and only if its state space is separable.
We call a C * -algebra with a separable state space almost separably representable. Answering a question posed by Akemann, see [1], Wright [51] gave examples of monotone complete C * -algebras which have separable state spaces but which are NOT separably representable.
If a monotone complete factor M possesses a strictly positive functional and is not a von Neumann algebra then, as an application of a more general result in [52] M must be of Type III, see also [30]. Whenever an algebra is almost separably representable then it possesses a strictly positive functional. (See Corollary 3.2). So if a wild factor is almost separably representable then it must be of Type III.
A (unital) C * -algebra A is said to be small if there exists a unital complete isometry of A into L(H), where H is a separable Hilbert space. See [31] and [19,34].
It turns out that A is small if, and only if, A ⊗ M n (C) has a separable state space for n = 1, 2, ..., [31]. So clearly every small C * -algebra is almost separably representable. We do not know if the converse is true, but it is true for monotone complete factors, [31].
Examples of (small) wild factors were hard to find. The first examples were due to Dyer [10] and Takenouchi [37]. As a consequence of a strong uniqueness theorem of Sullivan-Weiss-Wright [36], it turned out that the Dyer factor and the Takenouchi factor were isomorphic. See also [29] where the Dyer factor was identified as a monotone cross product of the Dixmier algebra by an action of the dyadic rationals.
Another method of finding wild factors was given by Wright [46]. He showed that each C * -algebra A could be embedded in its "regular σ−completion", A. When A is separably representable then A is monotone complete and almost separably representable. Furthermore, when A is infinite dimensional, unital and simple, then A is a wild factor. But it was very hard to distinguish between these factors. Indeed one of the main results of [36] showed that an apparently large class of wild factors were, in fact, a unique (hyperfinite) factor. Some algebras were shown to be different in [32,20,33]. In 2001 a major breakthrough by Hamana [19] showed that there were 2 c non-isomorphic (small) wild factors, where c = 2 ℵo . This pioneering paper has not yet received as much attention as it deserves.
In [34] we introduced a quasi-ordering between monotone complete C * -algebras. From this quasi-ordering we defined an equivalence relation and used this to construct a classification semi-group W for a class of monotone complete C * -algebras. This semi-group is abelian, partially ordered, and has the Riesz decomposition property. For each monotone complete, small C * -algebra A we assign a "normality weight", w(A) ∈ W. If A and B are algebras then w(A) = w(B), precisely when these algebras are equivalent. It turns out that algebras which are very different can be equivalent. In particular, the von Neumann algebras correspond to the zero element of the semi-group. It might have turned out that W is very small and fails to distinguish between more than a few algebras. This is not so; the cardinality of W is 2 c , where c = 2 ℵ0 .
One of the useful properties of W is that it can sometimes be used to replace problems about factors by problems about commutative algebras [34]. For example, let G j be a countable group acting freely and ergodically on a commutative monotone complete algebra A j (j = 1, 2). By a cross-product construction using these group actions, we can obtain monotone complete C * -factors B j (j = 1, 2). Then it is easy to show that wA j = wB j . So if the commutative algebras A 1 and A 2 are not equivalent, then wB 1 = wB 2 . In particular, B 1 and B 2 are not isomorphic.
Influenced by K−theory, it is natural to wish to form the Grothendieck group of the semi-group W. This turns out to be futile, since this Grothendieck group is trivial, because every element of W is idempotent. By a known general theory [15], this implies that W can be identified with a join semi-lattice. The Riesz Decomposition Property for the semigroup turns out to be equivalent to the semilattice being distributive. So the known theory of distributive join semi-lattices can be applied to W.
To each monotone complete C * -algebra A we associated a spectroid invariant ∂A [34]. Just as a spectrum is a set which encodes information about an operator, a spectroid encodes information about a monotone complete C * -algebra. It turns out that equivalent algebras have the same spectroid. So the spectroid may be used as a tool for classifying elements of W. For a generalisation of spectroid, see [42].
One of the many triumphs of Connes in the theory of von Neumann algebras, was to show that the injective von Neumann factors are precisely those which are hyperfinite [6], see also [38]. It is natural to conjecture an analogous result for wild factors. See [5,43] But this is not true. For, by applying deep results of Hjorth and Kechris [22], it is possible to exhibit a small, wild, hyperfinite factor which is not injective. We shall give details of this, and other more general results in a sequel to this paper.
When dealing with monotone complete C * -algebras, saying precisely what we mean by "hyperfinite", "strongly hyperfinite", "approximately finite dimensional" and "nearly approximately finite dimensional", requires subtle distinctions which are not needed in von Neumann algebra theory. See Section 12 for details.
(# )Let Λ be a set of cardinality 2 c , where c = 2 ℵ0 . Then we showed in [34] that there exists a family of monotone complete C * -algebras {B λ : λ ∈ Λ} with the following properties. Each B λ is a monotone complete factor of Type III, and also a small C * -algebra. For λ = µ, B λ and B µ have different spectroids and so wB λ = wB µ and, in particular, B λ is not isomorphic to B µ . We show, in Section 12, that, by using the machinery constructed in this paper, we may choose each B λ such that it is generated by an increasing sequence of full matrix algebras.
Introduction: Generic dynamics
An elegant account of Generic Dynamics is given by Weiss [41]; the term first occurred in [36]. In these articles, the underlying framework is a countable group of homeomorphisms acting on a complete separable metric space with no isolated points (a perfect Polish space). This corresponds to dynamics on a unique compact, Hausdorff, extremally disconnected space (the Stone space of the complete Boolean algebra of regular open subsets of R).
Let G be a countable group. Unless we specify otherwise, G will always be assumed to be infinite and equipped with the discrete topology. Let X be a Hausdorff topological space with no isolated points. Further suppose that X is a Baire space i.e. such that the only meagre open set is the empty set. (This holds if X is compact or a G-delta subset of a compact Hausdorff space or is homeomorphic to a complete separable metric space.) A subset Y of X is said to be generic, if X\Y is meagre.
Let ε be an action of G on X as homeomorphisms of X.
In classical dynamics we would require the existence of a Borel measure on X which was G-invariant or quasi-invariant, and discard null sets. In topological dynamics, no measure is required and no sets are discarded. In generic dynamics, we discard meagre Borel sets.
We shall concentrate on the situation where, for some x 0 ∈ X, the orbit {ε g (x 0 ) : g ∈ G} is dense in X. Of course this cannot happen unless X is separable. Let S be the Stone space of the (complete) Boolean algebra of regular open sets of X. Then, see below, the action ε of G on X induces an action ε of G as homeomorphisms of S; which will also have a dense orbit.
When, as in [41], X is a perfect Polish space, then, as mentioned above, S is unique; it can be identified with the Stone space of the regular open sets of R. But if we let X range over all separable compact subspaces of the separable space, 2 R , then we obtain 2 c essentially different S; where S is compact, separable and extremally disconnected. For each such S, C(S) is a subalgebra of ℓ ∞ .
Let E be the relation of orbit equivalence on S. That is, sEt, if, for some group element g, ε g (s) = t. Then we can construct a monotone complete C * -algebra M E from the orbit equivalence relation. When there is a free dense orbit, the algebra will be a factor with a maximal abelian subalgebra, A, which is isomorphic to C(S). There is always a faithful, normal, conditional expectation from M E onto A.
For f ∈ C(S), let γ g (f ) = f • ε g −1 . Then g → γ g is an action of G as automorphisms of C(S). Then we can associate a monotone complete C * -algebra M (C(S), G), the monotone cross-product (see [37]) with this action. When the action ε is free, then M (C(S), G) is naturally isomorphic to M E . In other words, the monotone cross-product does not depend on the group, only on the orbit equivalence relation. This was a key point in [36] where a strong uniqueness theorem was proved.
In this article we consider 2 c algebras C(S), each taking different values in the weight semi-group W. (Here c = 2 ℵ0 , the cardinality of R.)
There is no uniqueness theorem but we do show the following. Let G be a countably infinite group. Let α be an action of G as homeomorphisms of S and suppose this action has a single orbit which is dense and free. Then, modulo meagre sets, the orbit equivalence relation obtained can also be obtained by an action of Z 2 as homeomorphisms of S. This should be compared with the situation in classical dynamics. e.g. It is shown in [7] that any action by an amenable group is orbit equivalent to an action of Z. But, in general, non-amenable groups give rise to orbit equivalence relations which do not come from actions of Z.
On each of 2 c , essentially different, compact extremally disconnected spaces we construct a natural action of Z 2 with a free, dense orbit. This gives rise to a family of monotone complete C * -algebras, (B λ , λ ∈ Λ) with the properties (#) described above.
Let E be the orbit equivalence relation arising from a free, ergodic action of G. Furthermore, suppose that the complete Boolean algebra of projections in C(S) is countably generated. Let N (M E ) be the smallest monotone closed * −subalgebra of M E which contains the normalising unitaries of A (that is the set of all unitaries u such that u * Au = A.). Then N (M E ) is an approximately finite dimensional (AFD) factor. More precisely there is an increasing sequence of finite dimensional, unital, * −subalgebras of N (M E ), whose union σ−generates N (M E ). (In contrast to the situation for von Neumann factors, we do not know whether we can always take these finite dimensional subalgebras to be full matrix algebras.) As pointed out above, we need to make a number of subtle distinctions when approximating monotone complete algebras by finite dimensional subalgebras; see Section12 for details. For example M E is "nearly AFD". But in Section 11 we construct huge numbers of examples of Z 2 actions on spaces S, which give rise to factors which we show to be strongly hyperfinite.
Monotone σ−complete C * -algebras
Although our focus is on monotone complete C * -algebras, we also need to consider more general objects, the monotone σ−complete C * -algebras.
A C * -algebra is monotone σ−complete if each norm bounded, monotone increasing sequence of self-adjoint elements has a least upper bound.
Lemma 3.1. Let A be a monotone σ−complete C * -algebra. Let there exist a positive linear functional µ : A → C which is faithful. Then A is monotone complete. Let Λ be a downward directed subset of A sa which is bounded below. Then there exists a monotone decreasing sequence (x n ), with each x n ∈ Λ, such that the greatest lower bound of (x n ), that is ⋀_{n=1}^∞ x_n, is also the greatest lower bound of Λ.
Proof. See [35].
Corollary 3.2. When an almost separably representable algebra is unital and monotone σ−complete then it is monotone complete.
Proof. If A is almost separably representable we can find states (φ n )(n = 1, 2, ...) which are dense in its state space. Then φ = Σ_{n=1}^∞ 2^{-n} φ_n is a faithful positive linear functional on A. So, by Lemma 3.1, A is monotone complete.
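The faithfulness of φ is routine; we record the verification here for the reader's convenience (our addition, not in the source text):

\[
a \ge 0,\ \varphi(a)=0 \;\Longrightarrow\; \varphi_n(a)=0\ \ (n=1,2,\dots)
\;\Longrightarrow\; \psi(a)=0\ \text{for every state }\psi
\;\Longrightarrow\; \|a\|=\sup_{\psi}\psi(a)=0 .
\]
The middle implication uses the weak*-density of $(\varphi_n)$ in the state space, and the final equality holds for every positive element of a unital C$^*$-algebra.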
Let A be a C * -subalgebra of L(H). Let V be a real subspace of the real Banach space L(H) sa . We call V a σ−closed subspace of L(H) sa if, whenever (a n ) is an upper bounded, monotone increasing sequence in V then its limit in the weak operator topology is in V. Consider the family of all σ−closed subspaces which contain A sa , then the intersection of this family is the (smallest) σ−closed subspace containing A sa . By a theorem of Pedersen this is the self-adjoint part of a monotone σ−complete C * -subalgebra of L(H). See Theorem 4.5.4 [28].
Let B be a (unital) C * -algebra. Let us recall some well known classical results [39,28]. Let (π, H) be the universal representation of B i.e. the direct sum of all the GNS representations corresponding to each state of B. Then the second dual of B, B ′′ , may be identified with the von Neumann envelope of π(B) in L(H). Let B ∞ sa be the smallest subspace of B ′′ sa ,( the self-adjoint part of B ′′ ), which is closed under taking limits (in the weak operator topology) of bounded, monotonic sequences.
Let B ∞ = B ∞ sa + iB ∞ sa .
Then by Pedersen's theorem B ∞ is a monotone σ−complete C * -subalgebra of B ′′ . We call B ∞ the Pedersen-Borel envelope of B.(Pedersen called this simply the "Borel envelope", it has also, with some justice, been called the Baire envelope.)
Let B be a monotone σ−complete C * -algebra. We recall that V ⊂ B sa is a σ−subspace of B sa , if V is a real vector subspace of B sa such that, whenever (b n ) is a monotone increasing sequence in V, which has a supremum b in B sa , then b ∈ V. (In particular, the σ−subspaces of L(H) are precisely the σ−closed subspaces of L(H).)
A σ−subalgebra of B is a * -subalgebra whose self-adjoint part is a σ−subspace of B sa . It follows from Lemma 1.2 [44] that each σ−subalgebra is closed in norm and hence is a C*-subalgebra; see also [5].
Further, J is a σ−ideal of B if J is a C * -ideal of B and also a σ−subalgebra of B.
When B and A are monotone σ−complete C * -algebras, a positive linear map φ : B → A is said to be σ−normal if, whenever (b n ) is monotone increasing and bounded above, then φ maps the supremum of (b n ) to the supremum of (φ(b n )); that is,
φ( ⋁_{n=1}^∞ b_n ) = ⋁_{n=1}^∞ φ(b_n).
Lemma 3.3. Let A be a monotone σ−complete C * -algebra and let J be a σ−ideal of A. Let q be the quotient homomorphism of A onto A/J. Then A/J is monotone σ−complete and q is σ−normal. Let (c n ) be a monotone increasing sequence in the self-adjoint part of A/J which is bounded above by c. Then there exists a monotone increasing sequence (a n ) in A sa such that q(a n ) = c n for each n and (a n ) is bounded above by a where q(a) = c.
Proof. This follows from Proposition 1.3 and Lemma 1.1 [44], see also [29].
The following representation theorem was proved by Wright in [45]. It may be thought of as a non-commutative generalisation of a theorem of Loomis and Sikorski in Boolean algebras [16].
Proposition 3.4. Let B be a monotone σ−complete C * -algebra. Then there exists a σ−normal homomorphism, π, from B ∞ onto B, such that π(b) = b for every b ∈ B. Let J be the kernel of the homomorphism π. Then J is a σ−ideal of B ∞ and B = B ∞ /J. Corollary 3.5. Let A and B be C * -algebras and let B be monotone σ−complete. Let φ : A → B be a positive linear map. Then φ has a unique extension to a σ−normal positive linear map, φ, from A ∞ into B. When φ is a * -homomorphism the following hold: First φ is also a * -homomorphism. Secondly, the range of φ is a σ− subalgebra of B. Thirdly, the self adjoint part of
φ[A ∞ ] is the smallest σ−subspace of B sa which contains φ[A sa ]. Finally, the kernel of φ is a σ−ideal, J, such that A ∞ /J ≈ φ[A ∞ ].
For a proof, see Proposition 1.1 [46]. REMARK Let S be a subset of a monotone σ−complete C * -algebra B. Let A be the smallest (unital) C * -subalgebra of B which contains S. Let φ be the inclusion map from A into B. By applying the preceding result, the smallest σ−subspace of B sa which contains A sa is the self adjoint part of a σ−subalgebra C of B. It is now natural to describe C as the σ−subalgebra of B which is σ−generated by S.
Extending continuous functions
We gather together some topological results which will be useful later. The most important of these is Theorem 4.7. We hope the presentation here is clear enough to ensure that the reader can reconstruct any missing proofs without difficulty. If we have misjudged this, we apologise and refer the reader to [14], see also [2].
Throughout this section, K is a compact Hausdorff space and D is a dense subset of K, equipped with the relative topology induced by K. It is easy to see that K has no isolated points if, and only if D has no isolated points.
Let us recall that a topological space T is extremally disconnected if the closure of each open subset is still an open set.
When K is extremally disconnected then, whenever Z is a compact Hausdorff space and f : D → Z is continuous, there exists a unique extension of f to a continuous function F : K → Z. In other words, K is the Stone-Czech compactification of D. (This is Theorem 4.7.)
For any compact Hausdorff space K, the closed subsets of D, in the relative topology, are all of the form F ∩ D where F is a closed subset of K. For any S ⊂ K, we denote the closure of S (in the topology of K) by cl(S). For S ⊂ D, we note that the closure of this set in the relative topology of D is cl(S) ∩ D. We denote this by cl D (S). We also use intS for the interior of S and, when S ⊂ D, the interior with respect to the relative topology is denoted by int D S.
The following lemmas are routine point-set topology. Lemma 4.6. Let D be a dense subspace of a compact Hausdorff extremally disconnected space Z. When A is a clopen subset of D in the relative topology, then clA is a clopen subset of Z. Let A and B be disjoint clopen subsets of D, in the relative topology. Then clA and clB are disjoint clopen subsets of Z.
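For the reader's convenience we sketch why Lemma 4.6 holds (our sketch, not taken verbatim from the paper); the key points are that closures of open sets are open in Z and that any non-empty open subset of Z must meet the dense set D:

\[
A=U\cap D,\ U\ \text{open in } Z \;\Longrightarrow\; \operatorname{cl}A=\operatorname{cl}(U\cap D)=\operatorname{cl}U,
\]
which is clopen because $Z$ is extremally disconnected. If $A,B\subseteq D$ are disjoint and clopen in $D$, then $\operatorname{cl}A\cap\operatorname{cl}B$ is clopen in $Z$; were it non-empty it would meet $D$, giving a point of $\operatorname{cl}_D A\cap\operatorname{cl}_D B=A\cap B=\emptyset$, a contradiction.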
The following result was given by Gillman and Jerison, see page 96 [14], as a byproduct of other results. The argument given here may be slightly easier and more direct.
Theorem 4.7. Let D be a dense subspace of a compact, Hausdorff, extremally disconnected space S. Then S is the Stone-Czech compactification of D.
Proof. Since D is a subspace of the compact Hausdorff space S, D is completely regular and hence has a well defined Stone-Czech compactification, βD. By the fundamental property of βD, there exists a unique continuous surjection α from βD onto S, which restricts to the identity on D.
Let a and b be distinct points in βD. Then there exist disjoint clopen sets U and V such that a ∈ U and b ∈ V. Let A = U ∩ D and B = V ∩ D then U = cl βD A and V = cl βD B. So α[U ] ⊂ cl S A and α[V ] ⊂ cl S B. By Lemma 4.6, cl S A and cl S B are disjoint. Hence α(a) and α(b) are distinct points of S. Thus α is injective. It now follows from compactness, that α is a homeomorphism.
Ergodic discrete group actions on topological spaces
In this section, Y is a Hausdorff topological space which has no isolated points. For example, a compact Hausdorff space with no isolated points, or a dense subset of such a space.
When G is a group of bijections of Y, and y ∈ Y, we denote the orbit {g(y) : g ∈ G} by G[y].
Lemma 5.1. Let G be a group of homeomorphisms of Y and suppose that, for some x 0 ∈ Y, the orbit G[x 0 ] is dense in Y. Then (i) every non-empty G−invariant open subset of Y is dense in Y, and (ii) for each y ∈ Y, either G[y] is dense in Y or clG[y] has empty interior.
Proof. (i) Let U be a G−invariant open set which is not empty. Since G[x 0 ] is dense, for some g ∈ G, we have g(x 0 ) ∈ U. But U is G−invariant. So x 0 ∈ U. Hence G[x 0 ] ⊂ U. So U is dense in Y. (ii) Suppose y is an element of Y such that G[y] is not dense in Y. Then Y \ clG[y] is a non-empty G−invariant open set. So it is dense in Y. So clG[y] has empty interior.
When G is a group of homeomorphisms of Y its action is said to be ergodic if each G−invariant open subset of Y is either empty or dense in Y.
Induced actions
Let X be a compact Hausdorff space. Then, see Lemma 13 [34], X is separable if, and only if C(X) is isomorphic to a closed (unital) *-subalgebra of ℓ ∞ .
The regular σ-completion of an arbitrary C * -algebra was defined in [46]. For the commutative algebra, C(X), its regular σ-completion can be identified with the monotone σ-complete C * -algebra B ∞ 0 (X)/M 0 (X), where B ∞ 0 (X) is the algebra of bounded Baire measurable functions on X and M 0 (X) is the ideal of all f in B ∞ 0 (X) for which {x : f (x) = 0} is meagre. Let S be the structure space of B ∞ 0 (X)/M 0 (X) i.e. this algebra can be identified with C(S).
Let j : C(X) → B ∞ 0 (X)/M 0 (X) be the natural embedding. This is an injective (isometric) *-homomorphism.
Suppose that X is separable. Then there exists an injective *-homomorphism h : C(X) → ℓ ∞ . Since ℓ ∞ is monotone complete, h extends to a homomorphism H : C(S) → ℓ ∞ . From standard properties of regular σ−completions [46], H is also injective. Hence C(S) supports a strictly positive linear functional. By Lemma 3.1 it follows that C(S) is monotone complete and hence S is extremally disconnected.
Since H is an injective homomorphism, it follows that there is a surjective continuous map from βN onto S. So S is separable. REMARK Because C(S) is monotone complete, S is the Stone structure space of the complete Boolean algebra of regular open subsets of X. It follows from the Birkhoff-Ulam Theorem in Boolean algebras, and the linearisation arguments in [47] that C(S) can be identified with B ∞ (X)/M (X) where B ∞ (X) is the algebra of bounded Borel measurable functions on X and M (X) is the ideal of all bounded Borel functions with meagre support. In other words, B ∞ (X)/M (X) is isomorphic to B ∞ 0 (X)/M 0 (X) which is isomorphic to C(S). By the usual duality between compact Hausdorff spaces and commutative (unital) C * -algebras, there is a continuous surjection ρ from S onto X such that
j(f ) = f • ρ for each f in C(X).
By the basic properties of regular σ-completions, for each self-adjoint b in C(S), the set {j(a) : a ∈ C R (X) and j(a) ≤ b} has b as its least upper bound in C(S) sa = C R (S).
Lemma 6.1. Let Y be a subset of S such that ρ[Y ] is dense in X. Then Y is dense in S.
Proof. Let us assume that Y is not dense in S. Then there exists a non-empty clopen set E which is disjoint from clY.
Let a ∈ C R (X) be such that j(a) ≤ χ E . Then j(a)(s) ≤ 0 for s ∈ Y. So a(ρ(s)) ≤ 0 for ρ(s) ∈ ρ[Y ]. Since ρ[Y ] is dense in X, it follows that a ≤ 0 and so j(a) ≤ 0. Taking the supremum over all such a, this implies χ E ≤ 0, which is a contradiction.
Let Y be any compact Hausdorff space. Let Homeo(Y ) be the group of all homeomorphisms from Y onto Y . Let AutC(Y ) be the group of all * −automorphisms of C(Y ). For φ ∈ Homeo(Y ) let h φ (f ) = f • φ for each f ∈ C(Y ). Then φ → h φ is a bijection from the group Homeo(Y ) onto AutC(Y ) which switches the order of multiplication. In other words it is a group anti-isomorphism.
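The anti-multiplicativity is a one-line computation, which we record here for convenience (our addition):

\[
h_{\varphi\psi}(f)\;=\;f\circ(\varphi\circ\psi)\;=\;(f\circ\varphi)\circ\psi\;=\;h_{\psi}\bigl(h_{\varphi}(f)\bigr),
\qquad f\in C(Y),\ \varphi,\psi\in\operatorname{Homeo}(Y),
\]
so $\varphi\mapsto h_{\varphi}$ reverses the order of multiplication.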
Let θ be a homeomorphism of X onto X. As above, let h θ be the corresponding * −automorphism of C(X). Also f → f • θ induces an automorphism h θ of B ∞ (X)/M (X). Since B ∞ (X)/M (X) can be identified with C(S), there exists θ in Homeo(S) corresponding to h θ . Clearly, h θ restricts to the automorphism, h θ , of C(X). Lemma 6.2. h θ is the unique automorphism of C(S) which is an extension of h θ . Hence θ is uniquely determined by θ. Furthermore, the map θ → θ is an injective group homomorphism from Homeo(X) into Homeo(S).
Proof. Let H be an automorphism of B ∞ (X)/M (X) = C(S), which is an extension of h θ . Let b be a self-adjoint element of B ∞ (X)/M (X). Then, for a ∈ C R (X), a ≤ b if, and only if, Ha ≤ Hb i.e. h θ a ≤ Hb. So Hb is the supremum of {h θ (a) : a ∈ C R (X), a ≤ b}. Hence H = h θ . That is, h θ is the unique extension of h θ to an automorphism of C(S).
Let h 1 and h 2 be in AutC(X). Then for a ∈ C(X), we have
h 1 h 2 (a) = h 1 h 2 (a) = h 1 h 2 (a) = h 1 h 2 (a).
By uniqueness, it now follows that h 1 h 2 = h 1 h 2 . Hence h → h is an injective group homomorphism of AutC(X) into AutC(S). So the map θ → θ is the composition of a group anti-isomorphism with an injective group homomorphism composed with a group anti-isomorphism. So it is an injective group homomorphism.
Corollary 6.3. θ(ρs) = ρ( θs) for each s ∈ S.
Proof. Let a ∈ C(X) and s ∈ S. Then a • θ(ρs) = h θ (a)(ρs) = j(h θ (a))(s). By Lemma 6.2, h θ extends to the automorphism of C(S) corresponding to θ; hence j(h θ (a))(s) = j(a)( θs) = a(ρ( θs)). Since C(X) separates the points of X, it follows that θ(ρs) = ρ( θs).
Throughout this paper, unless we specify otherwise, G is a countable infinite group. Let ε : G → Homeo(X) be a homomorphism into the group of homeomorphisms of X. That is, ε is an action of G on X. For each g ∈ G, let ε g be the homeomorphism of S onto S induced by ε g . Then ε is the action of G on S induced by ε.
Let us recall that an action ε : G → Homeo(X) is non-degenerate if it is injective. We shall normally only use non-degenerate actions.
Proposition 6.4. Let x 0 be a point in X such that the orbit {ε g (x 0 ) : g ∈ G} is dense in X. Let s 0 ∈ S such that ρs 0 = x 0 . Then { ε g (s 0 ) : g ∈ G}is an orbit which is dense in S.
Proof. By Corollary 6.3, ε g (x 0 ) = ρ( ε g (s 0 )). It now follows from Lemma 6.1, that the orbit { ε g (s 0 ) : g ∈ G} is dense in S.
Definition 6.5. An orbit {ε g (x 0 ) : g ∈ G} is said to be free if, for g ≠ ı, ε g (x 0 ) ≠ x 0 .
Equivalently, for g ≠ ı, ε g leaves no point of the orbit fixed.
It is easy to see that the existence of at least one free orbit implies that the action is non-degenerate. Definition 6.6. Let Y be a subset of X which is invariant under the action ε. Then the action ε is free on Y if, for each y ∈ Y, the orbit {ε g (y) : g ∈ G} is free. Lemma 6.7. Let G, X and ε be as above. Let x 0 ∈ X be such that the orbit {ε g (x 0 ) : g ∈ G} is both dense and free. Then there exists a G-invariant Y, which is a dense G δ subset of X, such that for g ≠ ı, ε g has no fixed point in Y. Also x 0 ∈ Y.
Proof. Fix g ≠ ı and let K g = {x ∈ X : ε g (x) = x}. Then K g , the fixed-point set of ε g , is closed. Let U be the interior of K g . Then the orbit {ε h (x 0 ) : h ∈ G} is disjoint from K g . So its closure is disjoint from U. But since the orbit is dense, this means that K g has empty interior.
Let Z = ∪{K g : g ∈ G, g ≠ ı}.
Then Z is the union of countably many closed nowhere dense sets. A calculation shows that
ε h [K g ] = K hgh −1 and from this it follows that Z is G-invariant.
Put Y = X\Z. Then Y has all the required properties.
Theorem 6.8. Let G, X and ε be as above. Let ε be the action of G on S induced by the action ε on X. Let x 0 ∈ X such that the orbit {ε g (x 0 ) : g ∈ G} is both dense and free. Let s 0 ∈ S such that ρs 0 = x 0 . Then { ε g (s 0 ) : g ∈ G} is a dense free orbit in S. Furthermore, there exists Y, a G-invariant, dense G δ subset of S, with s 0 ∈ Y, such that the action ε is free on Y.
Proof. By Corollary 6.3, ε g (ρs 0 ) = ρ( ε g s 0 ). That is, ε g (x 0 ) = ρ( ε g s 0 ).
It now follows from Lemma 6.1 that the orbit
{ ε g (s 0 ) : g ∈ G} is dense in S. Now suppose that ε h s 0 = s 0 . Then ρ( ε h s 0 ) = ρ(s 0 ). So ε h (x 0 ) = x 0 . Hence h = ı. It now follows that { ε g (s 0 ) : g ∈ G} is a dense free orbit in S.
The rest of the theorem follows by applying Lemma 6.7.
REMARK Let D be a countable dense subset of a compact Hausdorff space K. Let α be a homeomorphism of D onto D. Then, in general, α need not extend to a homeomorphism of K. But, from the fundamental properties of the Stone-Czech compactification, α does extend to a unique homeomorphism of βD, say θ α . Let S 1 be the Gelfand-Naimark structure space of B ∞ (βD)/M (βD). Then, from the results of this section, θ α induces a homeomorphism θ α of S 1 . Let S be the structure space of B ∞ (K)/M (K). Then, by Lemma 4.2, S is homeomorphic to S 1 . Hence each homeomorphism of D induces a canonical homeomorphism of S. So each action of G, as homeomorphisms of D, induces, canonically, an action of G as homeomorphisms of S.
Orbit equivalence
Let S be a compact Hausdorff extremally disconnected space with no isolated points. Let ε be an action of G as homeomorphisms of S which is non-degenerate.
Definition 7.1. Let Z be a G-invariant subset of S.
Then the action ε is said to be pseudo-free on Z if, for every g ∈ G, the fixed point set {z ∈ Z : ε g (z) = z} is a clopen subset of Z in the relative topology.
REMARK If an action is free on Z then, for g ≠ ı, its fixed point set is empty. So each free action is also pseudo-free. In particular, each free orbit is also pseudo-free.
In the rest of this section, s 0 ∈ S such that the orbit D = {ε g (s 0 ) : g ∈ G} is dense in S. To simplify our notation, we shall write "g" for ε g . The restriction of g to D is a homeomorphism of D onto D. We shall abuse our notation by also denoting this restriction by "g".
From the results of Section 4, S is the Stone-Czech compactification of D. So any homeomorphism of D has a unique extension to a homeomorphism of S.
Lemma 7.2. Let O be a non-empty open subset of S. Then O ∩ D is an infinite set. Proof. Suppose O ∩ D is a finite set, say, {p 1 , p 2 , · · · , p n }. Then O \ {p 1 , p 2 , · · · , p n } is an open subset of S which is disjoint from D. But D is dense in S. Hence O = {p 1 , p 2 , · · · , p n }. So {p 1 } is an open subset of S. But
S has no isolated points. So this is a contradiction.
Let Z be a G-invariant dense subset of S and let h be a bijection of Z onto itself. Then h is said to be strongly G-decomposable over Z if there exist a sequence of pairwise disjoint clopen subsets of Z, (A j ), with Z = ∪ j A j , and a sequence (g j ) in G such that h(x) = g j (x) for x ∈ A j . When this occurs, h is a continuous, open map. Hence it is a homeomorphism of Z onto Z.
We also need a slightly weaker condition. Let h be a homeomorphism of S onto itself. Then h is G-decomposable (over S) if there exist a sequence of pairwise disjoint clopen subsets of S, (K j ), with ∪ j K j dense in S, and a sequence (g j ) in G such that h(x) = g j (x) for x ∈ K j .
REMARK The set ∪ j K j is an open dense set; hence its complement is a closed nowhere dense set.
Lemma 7.3. Let h be a homeomorphism of D onto D. Let h be strongly G- decomposable over D. Let h be the unique extension of h to a homeomorphism of S. Then h is G-decomposable over S.
Proof. Let (A j ) be a sequence of pairwise disjoint clopen subsets of D. Then, by Lemma 4.6, (clA j ) is a sequence of pairwise disjoint clopen subsets of S.
Let (g j ) be a sequence in G such that h(x) = g j (x) for x ∈ A j . Then, by continuity, h(x) = g j (x) for x ∈ clA j . Also the open set ∪ j clA j is dense in S (because it contains D).
Let Γ be a countable, infinite group of homeomorphisms of S which acts transitively on D.
If each γ ∈ Γ is strongly G-decomposable over D and each g ∈ G is strongly Γ-decomposable over D then Γ and G are said to be strongly equivalent.
Let us recall that orbit equivalence, with respect to the action of G on S, is defined by
x ∼ G y if, and only if g(x) = y for some g in G.
Lemma 7.4. Let Γ and G be strongly equivalent over D. Then there exists a G δ set Y, where D ⊂ Y ⊂ S, and Y is both G-invariant and Γ-invariant, such that Γ and G are strongly equivalent over Y.
Proof. Let Λ be the countable group generated by Γ and G. By Lemma 7.3, for each γ ∈ Γ, there is an open subset of S, U γ , such that D ⊂ U γ and γ decomposes with respect to G, over U γ . Similarly, for each g ∈ G, there is a corresponding open subset of S, V g , such that D ⊂ V g and g decomposes with respect to Γ, over V g .
Let W be the intersection of all the U γ and all the V g then W is a G δ subset of S such that D ⊂ W. Now let Y be the intersection of {λ[W ] : λ ∈ Λ}. Then Y is the required G δ set.
Corollary 7.5. Let Γ and G be strongly equivalent over D. Then there exists a G δ set Y, where D ⊂ Y ⊂ S, and Y is both G-invariant and Γ-invariant, such that the orbit equivalence relations ∼ G and ∼ Γ coincide on Y.
Proof. Straightforward. Lemma 7.6. Let each γ in Γ be strongly G-decomposable over D. Let the action of G be pseudo-free on D. Then Γ and G are strongly equivalent over D.
Proof. Let g ∈ G. Since Γ acts transitively on D, there exists γ 1 in Γ such that g(s 0 ) = γ 1 (s 0 ).
Since γ 1 is strongly G-decomposable over D, there exist a clopen set A 1 ⊂ D with s 0 ∈ A 1 and g 1 in G, such that γ 1 (s) = g 1 (s) for all s in A 1 .
In particular, g(s 0 ) = g 1 (s 0 ). So g −1 1 g(s 0 ) = s 0 . But the action of G is pseudo-free. So there exists a clopen neighbourhood of s 0 ,
K 1 ⊂ A 1 such that g −1 1 g(s) = s for s ∈ K 1 . Then g(s) = γ 1 (s) for s ∈ K 1 .
Since D is countable, we can find a sequence of disjoint clopen sets, (K j ), and a sequence (γ j ) in Γ such that g(s) = γ j (s) for s ∈ K j , and ∪ j K j = D. Thus g is strongly Γ-decomposable over D. So Γ and G are strongly equivalent.
Lemma 7.7. Let A and B be non-empty, disjoint subsets of D which are clopen in the relative topology of D, and let a ∈ A and b ∈ B. Then there exists a homeomorphism h of D onto itself which is strongly G-decomposable over D and such that h interchanges A and B, h(a) = b, h(s) = s for every s ∈ D \ (A ∪ B), and h = h^{-1}.
Proof. Since a and b are in the same orbit of G, there exists g 1 in G such that
g 1 (a) = b. Then A ∩ g −1 1 [B]
is a clopen neighbourhood of a which is mapped by g 1 into B. Since S is extremally disconnected and has no isolated points and by making use of Lemma 4.6, we can find a strictly smaller clopen neighbourhood of a, say A 1 . By dropping to a clopen sub-neighbourhood if necessary, we can also demand that
g 1 [A 1 ] is a proper clopen subset of B. Let B 1 = g 1 [A 1 ].
By Lemma 7.2, A and B are infinite sets. Since they are subsets of D, they are both countably infinite. Enumerate them both. Let a 2 be the first term of the enumeration of A which is not in A 1 and let b 2 be the first term of the enumeration of B which is not in
B 1 . Then there exists g 2 in G such that g 2 (a 2 ) = b 2 . Now let A 2 be a clopen neighbourhood of a 2 , such that A 2 is a proper subset of A \ A 1 and g 2 [A 2 ] is a proper subset of B \ B 1 .
Proceeding inductively, we obtain a sequence, (A n ) of disjoint clopen subsets of A; a sequence (B n ) of disjoint clopen subsets of B and a sequence (g n ) from G such that g n maps A n onto B n . Furthermore A = A n and B = B n .
We define h as follows. For s ∈ A n , h(s) = g n (s). For s ∈ B n , h(s) = g −1 n (s). For s ∈ D \ (A ∪ B), h(s) = s. Then h has all the required properties.
Lemma 7.8. Let α and β be homeomorphisms of D onto itself. Suppose that each homeomorphism is strongly G-decomposable. Then βα is strongly G-decomposable.
Proof. Let {A i : i ∈ N} be a partition of D into clopen sets and (g α i ) a sequence in G which gives the G-decomposition of α. Similarly, let {B j : j ∈ N} be a partition of D into clopen sets and (g β j ) a sequence in G which gives the G-decomposition of β.
Then {A_i ∩ α^{-1}[B_j] : i ∈ N, j ∈ N} is a partition of D into clopen sets. Let s ∈ A_i ∩ α^{-1}[B_j]. Then βα(s) = g_j^β(α(s)) = g_j^β g_i^α(s).
Lemma 7.9. Let D be enumerated as (s_0, s_1, ...). Let (D_k)(k = 1, 2, ...) be a monotone decreasing sequence of clopen neighbourhoods of s_0 such that s_n ∉ D_n for any n. Then the following statements hold.
(a) There is a sequence (h_k)(k = 1, 2, ...) of homeomorphisms of D onto D with h_k = h_k^{-1}. The h_k are mutually commutative. Each h_k is strongly G−decomposable over D.
(b) For each positive integer n, there exists a finite family of pairwise disjoint, clopen subsets of D, {K_n(α_1, α_2, ..., α_n) : (α_1, α_2, ..., α_n) ∈ Z_2^n}, whose union is D.
(c) Let K_0 = D. For 1 ≤ p ≤ n − 1, K_p(α_1, α_2, ..., α_p) = K_{p+1}(α_1, α_2, ..., α_p, 0) ∪ K_{p+1}(α_1, α_2, ..., α_p, 1).
(d) For 1 ≤ p ≤ n, K_p(0, 0, ..., 0) ⊂ D_p and s_0 ∈ K_p(0, 0, ..., 0).
(e) Let (α_1, α_2, ..., α_p) ∈ Z_2^p where 1 ≤ p ≤ n. Then the homeomorphism h_1^{α_1} h_2^{α_2} ... h_p^{α_p} interchanges K_p(β_1, β_2, ..., β_p) with K_p(α_1 + β_1, α_2 + β_2, ..., α_p + β_p).
(f) For each n, {s_0, s_1, ..., s_n} ⊂ {h_1^{α_1} h_2^{α_2} ... h_n^{α_n}(s_0) : (α_1, α_2, ..., α_n) ∈ Z_2^n}.
(g) For each s ∈ D, if h_1^{α_1} h_2^{α_2} ... h_n^{α_n}(s) = s then α_1 = α_2 = ... = α_n = 0.
Proof. We construct the h_k inductively. Applying Lemma 7.7 with A = D_1, B = D \ D_1, a = s_0 and b = s_1, we obtain a strongly G−decomposable homeomorphism h_1 of D onto D which interchanges D_1 and D \ D_1, maps s_0 to s_1, and satisfies h_1 = h_1^{-1}. For any s ∈ D, h_1(s) and s are elements of disjoint clopen sets. Hence (g) holds for n = 1. Now let K_1(0) = D_1 and K_1(1) = D \ D_1. Let us now suppose that we have constructed the homeomorphisms h_1, h_2, ..., h_n and the clopen sets {K_p(α_1, α_2, ..., α_p) : (α_1, α_2, ..., α_p) ∈ Z_2^p} for p = 1, 2, ..., n.
We now need to make the (n + 1)th step of the inductive construction. For some (α_1, α_2, ..., α_n) ∈ {0, 1}^n, s_{n+1} ∈ K_n(α_1, α_2, ..., α_n). Let c = h_1^{α_1} h_2^{α_2} ... h_n^{α_n}(s_{n+1}). Then c ∈ K_n(0, 0, ..., 0). If c ≠ s_0, let b = c. If c = s_0, then let b be any other element of K_n(0, 0, ..., 0). Now let A be a clopen subset of K_n(0, 0, ..., 0) ∩ D_{n+1} such that s_0 ∈ A and b ∉ A. Let B = K_n(0, 0, ..., 0) \ A.
We apply Lemma 7.7 to find a homeomorphism h of D onto itself, which interchanges A and B, leaves every point outside A ∪ B fixed, maps s 0 to b, and h = h −1 . Also, h is strongly G-decomposable.
Let K_{n+1}(0, 0, ..., 0) = A and K_{n+1}(0, 0, ..., 1) = B. By construction, (d) holds for p = n + 1. Let K_{n+1}(α_1, α_2, ..., α_n, 0) = h_1^{α_1} h_2^{α_2} ... h_n^{α_n}[A] and K_{n+1}(α_1, α_2, ..., α_n, 1) = h_1^{α_1} h_2^{α_2} ... h_n^{α_n}[B]. Then (b) holds for n + 1 and (c) holds for p = n. We now define h_{n+1} as follows. For s ∈ K_n(α_1, α_2, ..., α_n),
h_{n+1}(s) = h_1^{α_1} h_2^{α_2} ... h_n^{α_n} h h_1^{α_1} h_2^{α_2} ... h_n^{α_n}(s).
Claim 1: h_{n+1} commutes with h_j for 1 ≤ j ≤ n.
To simplify our notation we shall take j = 1, but the calculation works in general, since each of {h r : r = 1, 2...n} commutes with the others.
Let s ∈ D. Then s ∈ K_n(α_1, α_2, ..., α_n) for some (α_1, α_2, ..., α_n) ∈ Z_2^n. So h_1 s ∈ K_n(α_1 + 1, α_2, ..., α_n). Then
h_{n+1}(h_1 s) = h_1^{α_1+1} h_2^{α_2} ... h_n^{α_n} h h_1^{α_1+1} h_2^{α_2} ... h_n^{α_n}(h_1 s).
So h_{n+1} h_1(s) = h_1 h_1^{α_1} h_2^{α_2} ... h_n^{α_n} h h_1^{α_1} h_2^{α_2} ... h_n^{α_n} h_1 h_1(s) = h_1 h_{n+1}(s). From this we see that h_{n+1} commutes with h_1. Similarly h_{n+1} commutes with h_j for 2 ≤ j ≤ n.
Claim 2: h_{n+1} is G-decomposable. By Lemma 7.8, h_1^{α_1} h_2^{α_2} ... h_n^{α_n} h h_1^{α_1} h_2^{α_2} ... h_n^{α_n} is G-decomposable. So, on restricting to the clopen set K_n(α_1, α_2, ..., α_n), this gives that h_{n+1} is G-decomposable over each K_n(α_1, α_2, ..., α_n). Hence h_{n+1} is G-decomposable.
So, by Claim 1 and Claim 2, (a) holds for n + 1. It is straightforward to show that (b), (c), (d) and (e) hold for n + 1.
Now consider (f). Either c = s_0, in which case s_0 = h_1^{α_1} h_2^{α_2} ... h_n^{α_n}(s_{n+1}), which gives s_{n+1} = h_1^{α_1} h_2^{α_2} ... h_n^{α_n}(s_0); or c ≠ s_0, in which case h_{n+1}(h_1^{α_1} ... h_n^{α_n}(s_{n+1})) = h(h_1^{α_1} ... h_n^{α_n}(s_{n+1})) = s_0. This gives s_{n+1} = h_{n+1} h_1^{α_1} ... h_n^{α_n}(s_0). Because the homeomorphisms commute, this gives (f) for n + 1.
Finally, consider (g). Let s ∈ D with h_1^{α_1} h_2^{α_2} ... h_{n+1}^{α_{n+1}}(s) = s. If α_{n+1} = 0 then (g), applied for n, implies α_1 = α_2 = ... = α_n = 0. So now suppose α_{n+1} = 1. Choose (β_1, ..., β_n) with h_1^{β_1} h_2^{β_2} ... h_n^{β_n}(s) ∈ K_n(0, 0, ..., 0). Then, since the h_r all commute, we can suppose without loss of generality that s ∈ K_n(0, 0, ..., 0). Then h_{n+1}(s) = h_1^{α_1} h_2^{α_2} ... h_n^{α_n}(s). But h_{n+1} maps K_n(0, 0, ..., 0) to itself and h_1^{α_1} ... h_n^{α_n} maps K_n(0, 0, ..., 0) to K_n(α_1, α_2, ..., α_n). So h_{n+1}(s) ∈ K_n(0, 0, ..., 0) ∩ K_n(α_1, α_2, ..., α_n). But this intersection is only non-empty if α_1 = α_2 = ... = α_n = 0. So h_{n+1}(s) = s. But h_{n+1}, acting on K_n(0, 0, ..., 0), interchanges K_{n+1}(0, 0, ..., 0) with K_{n+1}(0, 0, ..., 1). So s ∈ K_{n+1}(0, 0, ..., 1) ∩ K_{n+1}(0, 0, ..., 0), which is impossible.
Let us recall that Z 2 is the direct sum of an infinite sequence of copies of Z 2 . So each element of the group is an infinite sequence of 0s and 1s, with 1 occurring only finitely many times. We sometimes refer to it as the Dyadic Group.
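To fix ideas, we include a standard illustrative example (our addition, not taken from this paper): the natural action of the Dyadic Group on the Cantor space by finitely many coordinate changes.

For $\alpha=(\alpha_1,\alpha_2,\dots)\in\bigoplus_{n\ge 1}\mathbb{Z}_2$ and $x=(x_1,x_2,\dots)\in\{0,1\}^{\mathbb{N}}$ put
\[
\gamma_\alpha(x)\;=\;(x_1+\alpha_1,\;x_2+\alpha_2,\;\dots)\pmod 2 .
\]
Each $\gamma_\alpha$ is a homeomorphism and $\gamma_\alpha\gamma_\beta=\gamma_{\alpha+\beta}$. The action is free (changing finitely many coordinates of $x$ returns $x$ only when no coordinate is changed) and every orbit is dense (any cylinder set can be reached by altering finitely many coordinates).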
Theorem 7.10. Let S be a compact Hausdorff extremally disconnected space with no isolated points. Let G be a countably infinite group. Let ε : G → Homeo(S) be a non-degenerate action of G as homeomorphisms of S. Let s 0 be a point in S such that the orbit {ε g (s 0 ) : g ∈ G} = D, is dense and pseudo-free. Then there exist an action γ : Z 2 → Homeo(S) and Y , a G-delta subset of S with D ⊂ Y, such that the following properties hold.
(1) The orbit {γ δ (s 0 ) : δ ∈ Z 2 } is free and coincides with the set D.
(2) The groups of homeomorphisms ε[G] and γ[ Z 2 ] are strongly equivalent over Y, which is invariant under the action of both these groups.
(3) The orbit equivalence relations corresponding, respectively, to ε[G] and γ[ Z 2 ] coincide on Y.
(4) γ is an isomorphism.
Proof. We replace "G" by "ε[G]" in the statement of Lemma 7.9 to find a sequence (h r ) of homeomorphisms of D onto itself with the properties listed in that lemma.
Each h r has a unique extension to a homeomorphism h r of S onto itself. For each α ∈ Z 2 , there exist a natural number n and (α 1 , α 2 , ..., α n ) ∈ Z 2 n such that
α = (α 1 , α 2 , ...α n , 0, 0, ...). We define γ α = h α1 1 h α2 2 ... h αn n .
Then γ is a homomorphism of Z 2 into Homeo(S). By Lemma 7.9 (f), the orbit {γ α (s 0 ) : α ∈ Z 2 } coincides with D. By Lemma 7.9 (g), this orbit is free for the action γ.
By Lemma 7.9 and Lemma 7.4, there exists a G δ set Y, D ⊂ Y ⊂ S, which is invariant under the action of both γ[ Z 2 ] and ε[G]; moreover, ε[G] and γ[ Z 2 ] are strongly equivalent over Y. Statement (3) now follows from Corollary 7.5. Finally, statement (4) follows from part (g) of Lemma 7.9.
We shall make no direct use of Z−actions in this article but the following corollary seems worth including. We sketch an argument which makes use of the above result and the notation used in Lemma 7.9.
Corollary 7.11. There exists a homeomorphism φ : S → S and a dense G δ -subset S 0 ⊂ S with the following properties. First, S 0 is invariant under the action of G. Secondly, φ[S 0 ] = S 0 . Thirdly, the orbit equivalence relation coming from G and the Z−orbit equivalence relation coming from φ, coincide on S 0 .
Proof. (Sketch) It follows from the preceding theorem that we may replace G by Z 2 . More precisely, we shall let G be the group of homeomorphisms of D generated by (h r )(r = 1, 2...).
Let E_1 = K_1(0) and F_1 = K_1(1). Let E_{j+1} = K_{j+1}(1, 0), where 1 denotes (1, 1, ..., 1) ∈ Z_2^j, and let F_{j+1} = K_{j+1}(0, 1), where 0 denotes (0, 0, ..., 0) ∈ Z_2^j. We observe that if K_n(α_1, α_2, ..., α_n) has non-empty intersection with K_{n+p}(β_1, β_2, ..., β_{n+p}) then α_1 = β_1, ..., α_n = β_n.
For s ∈ E_n let φ(s) = h_1 h_2 ... h_n(s). Then φ is a continuous map of E_n onto F_n. From this it is straightforward to see that φ is a continuous bijection of D onto D\{s_0}. Similarly, φ^{-1} is a continuous bijection from D\{s_0} onto D.
Since S has no isolated points, D\{s 0 } is dense in S. But, see Section 4, S can be identified with the Stone-Czech compactification of any dense subset of itself. Applying this to φ −1 and φ we find continuous extensions which are homeomorphisms of S onto S and which are inverses of each other. We abuse notation and denote the extension of φ to the whole of S by φ. Then j → φ j is the Z−action considered here; let ∆ be the group generated by φ. We shall further abuse our notation by writing h r for h r , the extension to a homeomorphism of S.
On applying Lemma 4.6, we see that (clE_n)(n = 1, 2, ...) is a sequence of pairwise disjoint clopen subsets of S. So its union is a dense open subset of S, which we shall denote by O_1. By continuity, for s ∈ clE_n we have φ(s) = h_1 h_2 ... h_n(s). Similarly, (clF_n)(n = 1, 2, ...) is a sequence of pairwise disjoint clopen subsets of S whose union, O_2, is also dense in S.
Let Γ be the countable group generated by φ and G. Let S_0 be the intersection ∩{γ[O_1 ∩ O_2] : γ ∈ Γ}. Then S_0 is a dense G δ −subset of S which is invariant under the action of Γ. From the definition of φ, it is clear that φ is strongly G−decomposable over S_0 . (Recall that we have identified G with Z 2 .) Similarly, φ^{-1} is also strongly G−decomposable over S_0 . Hence each element of ∆ is strongly G−decomposable over S_0 .
Let H(∆) be the group of all homeomorphisms h, of S onto S, such that h is strongly ∆−decomposable with respect to a finite partition of S into clopen sets. We shall show that h 1 is in H(∆).
For s ∈ E 1 = K 1 (0) we have φ(s) = h 1 (s) and, for s ∈ F 1 = K 1 (1), φ −1 (s) = h 1 (s). We observe that clE 1 and clF 1 are disjoint clopen sets whose union is S. Also,
h_1 = φ χ_{clE_1} + φ^{-1} χ_{clF_1} . So h_1 ∈ H(∆).
We now suppose that h_1, ..., h_n are in H(∆). We wish to show h_{n+1} ∈ H(∆). Let s ∈ K_{n+1}(β, 0) where β ∈ Z_2^n. By Lemma 7.9 (e), h_1^{β_1+1} ... h_n^{β_n+1}(s) ∈ K_{n+1}(1, 0). So, from the definition of φ,
φ(h_1^{β_1+1} ... h_n^{β_n+1}(s)) = h_1^{β_1} ... h_n^{β_n} h_{n+1}(s).
Making use of the commutativity of the h_r, we get
h_{n+1}(s) = h_1^{β_1} ... h_n^{β_n} φ h_1^{β_1+1} ... h_n^{β_n+1}(s).
Then, by using continuity, this holds for each s ∈ clK_{n+1}(β, 0). By a similar argument, for s ∈ clK_{n+1}(β, 1) we get h_{n+1}(s) = h_1^{β_1+1} ... h_n^{β_n+1} φ h_1^{β_1} ... h_n^{β_n}(s). Since {clK_{n+1}(α) : α ∈ Z_2^{n+1}} is a finite collection of disjoint clopen sets whose union is S, it follows that h_{n+1} ∈ H(∆).
So, by induction, G ⊂ H(∆).
It now follows that, on S 0 ,the orbit equivalence relation coming from the action of G coincides with the orbit equivalence relation arising from the Z−action generated by φ.
REMARKS
In the above, D is not a subset of S_0 . Indeed, it is not obvious that S_0 has a dense orbit. So is it possible to modify the construction of φ so that it becomes a bijection of D onto itself?
8. Monotone complete C * -algebra of an equivalence relation
The idea of constructing a C * -algebra or a von Neumann algebra from a groupoid has a long history and a vast literature; there is an excellent exposition in [38]. Here, instead of general groupoids, we use an equivalence relation with countable equivalence classes. Our aim is to construct monotone complete (monotone σcomplete) algebras by a modification of the approach used in [36]. We try to balance conciseness with putting in enough detail to convince the reader that this is an easy and transparent way to construct examples of monotone complete C *algebras. It makes it possible to obtain all the algebras which arise as a monotone cross-product by a countable discrete group acting on a commutative algebra, but without needing to use monotone tensor products.
In this section, X is a topological space where X is either a G-delta subset of a compact Hausdorff space or a Polish space (i.e. homeomorphic to a complete separable metric space). Then X is a Baire space i.e. the Baire category theorem holds for X. Let B(X) be the set of all bounded complex valued Borel functions on X. When equipped with the obvious algebraic operations and the supremum norm, it becomes a commutative C * -algebra.
In the following it would be easy to use a more general setting, where we do not assume a topology for X, replace the field of Borel sets with a σ−field T , and use T −measurable bijections instead of homeomorphisms. But we stick to a topological setting which is what we need later.
Let G be a countable group of homeomorphisms of X and let E = {(x, y) ∈ X × X : ∃g ∈ G such that y = g(x)}. Then E is the graph of the orbit equivalence relation on X arising from the action of G. We shall identify this equivalence relation with its graph. We know, from the work of Section 7, that the same orbit equivalence relation can arise from actions by different groups.
Let us recall that for A ⊂ X, the saturation of A (by E) is
E[A] = {x ∈ X : ∃ z ∈ A such that xEz} = {x ∈ X : ∃g ∈ G such that g(x) ∈ A} = ∪{g[A] : g ∈ G}.
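For clarity we spell out the last equality (our addition); it uses only that G is a group:

\[
x\in E[A]\iff \exists\,g\in G:\ g(x)\in A\iff \exists\,g\in G:\ x\in g^{-1}[A]
\iff x\in\bigcup_{h\in G}h[A],
\]
where the final step re-indexes $h=g^{-1}$.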
It follows from this that the saturation of a Borel set is also a Borel set. Let I be a σ−ideal of the Borel subsets of X.
Lemma 8.1. Let A be a Borel subset of X. Then E[A] ∈ I if, and only if, g[A] ∈ I for each g ∈ G.
Proof. For each g ∈ G, g[A] ⊂ E[A]. Since I is an ideal, if E[A] ∈ I then g[A] ∈ I. Conversely, if g[A] ∈ I for each g, then E[A] is the union of countably many elements of the σ−ideal and hence is in the ideal.
In the following we require that the action of G maps the ideal I into itself. Equivalently, for any A ∈ I, its saturation by E is again in I. This is automatically satisfied if I is the ideal of meagre Borel sets but we do not wish to confine ourselves to this situation.
Following the approach of [36], we indicate how orbit equivalence relations on X give rise to monotone complete C * -algebras. A key point, used in [36], is that these algebras are constructed from the equivalence relation without explicit mention of G. But in establishing the properties of these algebras, the existence of an underlying group is used. This construction (similar to a groupoid C * -algebra) seems particularly natural and transparent. For the reader's convenience we give a brief, explicit account which is reasonably self-contained. For reasons explained in Section 9, the work of this section makes it possible for the reader to safely avoid the details of the original monotone cross-product construction. We could work in greater generality (for example we could weaken the condition that the elements of G be homeomorphisms or consider more general groupoid constructions) but for ease and simplicity we avoid this. We are mainly interested in two situations. First, where X is an "exotic" space as considered in [34] but I is only the ideal of meagre subsets of X. Secondly, where X is just the Cantor space but I is an "exotic" ideal of the Borel sets. In this paper, only the first situation will be considered but since we will make use of the second situation in a later work and since no extra effort is required, we add this small amount of generality.
Since G is a countable group, each orbit is countable, in other words, each equivalence class associated with the equivalence relation E is countable. (Countable Borel equivalence relations and their relationship with von Neumann algebras were penetratingly analysed in [12,13].)
For each x ∈ X let [x] be the equivalence class generated by x. Let [X] be the set of all equivalence classes. Let ℓ 2 ([x]) be the Hilbert space of all square summable, complex valued functions from [x] to C. For each y ∈ [x] let δ y ∈ ℓ 2 ([x]) be defined by δ y (z) = 0 for z = y; δ y (y) = 1.
Then {δ y : y ∈ [x]} is an orthonormal basis for ℓ 2 ([x]) which we shall call the canonical basis for ℓ 2 ([x]). For each x ∈ X, L(ℓ 2 ([x])) is the von Neumann algebra of all bounded operators on ℓ 2 ([x]). We now form a direct sum of these algebras by:
S = ⊕_{[x]∈[X]} L(ℓ 2 ([x])).
This is a Type I von Neumann algebra, being a direct sum of such algebras. It is of no independent interest but is a framework in which we embed an algebra of "Borel matrices" and then take a quotient, obtaining monotone complete C * -algebras. To each operator F in S we can associate, uniquely, a function f : E → C as follows. First we decompose F as F = ⊕_{[x]∈[X]} F_{[x]}. Here each F_{[x]} is a bounded operator on ℓ 2 ([x]). Now recall that (x, y) ∈ E precisely when y ∈ [x]. We now define f : E → C by f(x, y) = ⟨F_{[x]} δ_x, δ_y⟩. We call a function f : E → C arising in this way from some F ∈ S matrix bounded, and we then write F = L(f).
When f and h are such functions from E to C, straightforward matrix manipulations give L(f)L(h) = L(f • h), where f • h(x, z) = Σ_{y∈[x]} f(x, y)h(y, z). Also L(f)* = L(f*), where f*(x, y) is the complex conjugate of f(y, x), for all (x, y) ∈ E.
Let ||f|| = ||L(f)||. Then the matrix bounded functions on E form a C * -algebra isomorphic to S.
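As a small consistency check (our addition, not in the original), the involution interacts with the convolution product exactly as for matrices:

\[
(f\bullet h)^*(x,z)=\overline{(f\bullet h)(z,x)}
=\sum_{y\in[x]}\overline{h(y,x)}\;\overline{f(z,y)}
=\sum_{y\in[x]}h^*(x,y)\,f^*(y,z)=(h^*\bullet f^*)(x,z),
\]
so that $\bigl(L(f)L(h)\bigr)^*=L(h)^*L(f)^*$, as it must be.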
Let ∆ be the diagonal set {(x, x) : x ∈ X}. It is closed, because the topology of X is Hausdorff. It is an easy calculation to show that L(χ ∆ ) is the unit element of S. For each g ∈ G, the map (x, y) → (x, g(y)) is a homeomorphism.
So {(x, g(x)) : x ∈ X} is a closed set. Let us recall that
E = ∪_{g∈G} {(x, g(x)) : x ∈ X}.
Since G is countable, E is the union of countably many closed sets. Hence E is a Borel subset of X × X. Let M(E) be the set of all matrix bounded, Borel measurable functions from E to C. Let U be an open neighbourhood of T , then U ∩ K n is non-empty for all n.
Fix (x, y) in E. Fix ε > 0. Let U = {S ∈ S : | < (S − T )δ x , δ y > | < ε}. Then we can find a subsequence (f n(r) )(r = 1, 2...) for which L(f n(r) ) ∈ U for each r. Thus |(f n(r) (x, y)− < T δ x , δ y > | < ε. So |f (x, y)− < T δ x , δ y > | ≤ ε. Since this holds for all positive ε, we have f (x, y) =< T δ x , δ y > . So T = L(f ).
Let T be the locally convex topology of S generated by all seminorms of the form V → | < V δ x , δ y > | as (x, y) ranges over E. This is a Hausdorff topology which is weaker than the weak operator topology. Hence it coincides with the weak operator topology on the unit ball, because the latter topology is compact. But
< L(f n )δ x , δ y >→ f (x, y) =< L(f )δ x , δ y > for all (x, y) ∈ E.
It now follows that L(f n ) → L(f ) in the weak operator topology of S. Since f is the pointwise limit of a sequence of Borel measurable functions, it too, is Borel measurable. So f ∈ M(E) and, since T is in the unit ball of S, f is in the unit ball of M(E).
Let p be the homeomorphism of ∆ onto X, given by p(x, x) = x. So B(X), the algebra of bounded Borel measurable functions on X, is isometrically * − isomorphic to B(∆) under the map h → h • p.
For each f ∈ M(E) let Df be the function on E which vanishes off the diagonal, ∆, and is such that, for each x ∈ X,
Df (x, x) = f (x, x).
Then D is a linear idempotent map from M(E) onto an abelian subalgebra which we can identify with B(∆), which can, in turn, be identified with B(X). Let D̂f be the function on X such that D̂f(x) = Df(x, x) for all x ∈ X. We shall sometimes abuse our notation by using Df instead of D̂f.
Let π : B(X) → M(E) be defined by π(h)(x, x) = h(x) for x ∈ X and π(h)(x, y) = 0 for x = y. Then π is a * −isomorphism of B(X) onto an abelian * −subalgebra of M(E), which can be identified with the range of D.
We have π(D̂f) = Df for f ∈ M(E). Also, for g ∈ B(X), D̂π(g) = g.
(#) D(f • f*)(x, x) = Σ_{y∈[x]} f(x, y) f*(y, x) = Σ_{y∈[x]} |f(x, y)|^2 ≥ 0.
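It follows at once (our remark, added for clarity) that D is a positive map and is faithful on M(E):

\[
D(f\bullet f^*)=0 \;\Longrightarrow\; \sum_{y\in[x]}|f(x,y)|^2=0\ \ \text{for every }x\in X
\;\Longrightarrow\; f\equiv 0\ \text{on }E .
\]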
Let B_I be the set of all g ∈ B(X) such that, for some A ∈ I, g(x) = 0 for x ∉ A. Then B_I is a σ−ideal of B(X); let q be the quotient homomorphism from B(X) onto B(X)/B_I.
Definition 8.8.
Let I_I = {f ∈ M(E) : qD(f • f*) = 0} = {f ∈ M(E) : ∃A ∈ I such that D(f • f*)(x) = 0 for x ∉ A}.
Lemma 8.9. I I is a two-sided ideal of M(E).
Proof. In any C * -algebra,
(a + b)(a + b) * ≤ 2(aa * + bb * ).
So
D((f + g) • (f + g) * ) ≤ 2D(f • f * ) + 2D(g • g * ).
From this it follows that if f and g are both in I I then so also is f + g.
In any C * -algebra,
f z(f z) * = f zz * f * ≤ ||z|| 2 f f * .
From this it follows that f ∈ I_I and z ∈ M(E) implies that f • z ∈ I_I . Now suppose that f ∈ I_I . Then, for some A ∈ I, D(f • f*)(x) = 0 for x ∉ A. Since E[A] ∈ I we can suppose that A = E[A]. Hence if x ∉ E[A] then [x] ∩ E[A] = ∅. For x ∉ E[A], we have 0 = D(f • f*)(x) = Σ_{y∈[x]}
|f (x, y)| 2 . Proof. Let T be the (compact Hausdorff) structure space of the algebra B(X)/B I . By applying the Cauchy-Schwartz inequality we see that for x, y in M(E) and t ∈ T , |q D(x * • y)(t)| ≤ q D(x * • x)(t) 1/2 q D(y * • y)(t) 1/2 . Let x = 1 and let y ∈ I I . Then y * ∈ I I . So q D(y * • y) = 0. From the above inequality it follows that q D(y) = 0. Since I I is an ideal, if y ∈ I I then y • a is in the ideal for each a ∈ M (E). It now follows from the above that q D(y • a) = 0. Conversely, if q D(y • a) = 0 for all a ∈ M(E) then, on putting a = y * we see that y ∈ I I . Proof. Let (f r ) be a sequence in I I such that (L(f r )) is a sequence which converges in the weak operator topology to an element T of S. Then it follows from the Uniform Boundedness Theorem that the sequence is bounded in norm. By Proof. By Corollary 8.12 and the results of Section 3, the quotient algebra M(E)/I I is monotone σ−complete. Let g ∈ B(X). Then, as remarked before Lemma 8.7, Dπ(g) = g. Now π(g) ∈ I I if, and only if, q D(π(g) • π(g) * ) = 0. But q D(π(g) • π(g) * ) = q D(π(|g| 2 ) = q(|g| 2 ). Let (f n ) be a sequence in M(E) such that (Qf n ) is an upper bounded monotone increasing sequence in M(E)/I I . Then, by Lemma 3.3, we may assume that (f n ) is an upper bounded, monotone increasing sequence in M(E). Let Lf be the limit of (Lf n ) in the weak operator topology. (Since the sequence is monotone, Lf is also its limit in the strong operator topology.) By Lemma 8.5, f ∈ M(E), and f n (x, x) → f (x, x) for all x ∈ X. Thus Df n → Df pointwise on X. Also, since D is positive, (Df n ) is monotone increasing. Since Q is a σ−homomorphism, QDf is the least upper bound of (QDf n ). Since D (f + I I ) = QDf it now follows that D is σ−normal.
If µ is a strictly positive functional on B(X)/B I then µ D is a strictly positive linear functional on M (E)/I I . It then follows from Lemma 3.1 that M(E)/I I is monotone complete. Furthermore, if Λ is a downward directed subset of the selfadjoint part of M(E)/I I , with 0 as its greatest lower bound, then there exists a monotone decreasing sequence (x n ), with each x n in Λ, and x n = 0. It now follows from the σ−normality of D that 0 is the infimum of { D(x) : x ∈ Λ}. Hence D is normal.
We now make additional assumptions about the action of G and use this to construct a natural unitary representation of G. We give some technical results which give an analogue of Mercer-Bures convergence, see [27] and [4]. This will be useful in Section 12 when we wish to approximate elements of M(E)/I I by finite dimensional subalgebras.
For the rest of this section we suppose that the action of G on X is free on each orbit i.e. for each x ∈ X, x is not a fixed point of g, where g ∈ G, unless g is the identity element of G.
For each g ∈ G, let ∆ g = {(x, gx) : x ∈ X}. Then the ∆ g are pairwise disjoint and E = ∪ g∈G ∆ g .
For each g ∈ G, let u g : E → {0, 1} be the characteristic function of ∆ g −1 . As we pointed out earlier, χ ∆ is the unit element of M(E),so, in this notation, u 1 is the unit element of M(E).
For each (x, y) ∈ E we have:
(u_g • u_h)(x, y) = Σ_{k∈G} u_g(x, kx)u_h(kx, y). But u_g(x, kx) ≠ 0 only if k = g^{-1}, and u_h(g^{-1}x, y) ≠ 0 only if y = h^{-1}g^{-1}x = (gh)^{-1}x. So u_g • u_h = u_{gh}.
Also u*_g(x, y) = \overline{u_g(y, x)} = u_g(y, x), since u_g takes only the values 0 and 1. But u_g(y, x) ≠ 0 only if x = g^{-1}y, that is, only if y = gx. So u*_g(x, y) = u_{g^{-1}}(x, y). It follows that g → u_g is a unitary representation of G in M(E).
Let f be any element of M(E). Then
(i) (f • u_g)(x, y) = Σ_{z∈[x]} f(x, z)u_g(z, y) = f(x, gy).
So, for each x ∈ X, D(f • u_g)(x, x) = f(x, gx). Then (D(f • u_g) • u_{g^{-1}})(x, y) = Σ_{z∈[x]} D(f • u_g)(x, z)u_{g^{-1}}(z, y) = D(f • u_g)(x, x)u_{g^{-1}}(x, y) = f(x, gx)χ_{∆_g}(x, y). So
(ii) (D(f • u_g) • u_{g^{-1}})(x, y) = f(x, y) if (x, y) ∈ ∆_g, and = 0 if (x, y) ∉ ∆_g.
The identity (#), used in Lemma 8.7, can be re-written as
(iii) D(f • f*)(x, x) = Σ_{g∈G} |f(x, gx)|² = Σ_{g∈G} |D(f • u_g)(x, x)|² = Σ_{g∈G} |D(f • u_g)(x)|².
Let F be any finite subset of G.
Let f_F = Σ_{g∈F} D(f • u_g) • u_{g^{-1}}.
Then, using (ii),
(f − f_F)(x, y) = 0 if (x, y) ∈ ∆_g with g ∈ F, and (f − f_F)(x, y) = f(x, y) if (x, y) ∈ ∆_g with g ∉ F.
We now replace f by f − f F in (iii) and get:
(iv) D((f − f_F) • (f − f_F)*)(x, x) = Σ_{g∈G\F} |D(f • u_g)(x, x)|² = Σ_{g∈G\F} |D(f • u_g)(x)|².
Now let (F n )(n = 1, 2...) be any strictly increasing sequence of finite subsets of G. Write f n for f Fn . Then
D((f − f_n) • (f − f_n)*)(x, x) decreases monotonically to 0 as n → ∞. Since Q is a σ−homomorphism,
(v) ⋀_{n=1}^{∞} QD((f − f_n) • (f − f_n)*) = 0.
For each g ∈ G, let U_g = Qu_g. Since Q is a *−homomorphism onto M(E)/I_I, U_g is a unitary and g → U_g is a unitary representation of G in M(E)/I_I. On applying the preceding paragraph we get Proposition 8.15.
Proof of Corollary 8.16. This follows from Proposition 8.15 because z_n = 0 for every n.
Lemma 8.17. For each f ∈ M(E) and each g ∈ G, (u*_g • f • u_g)(x, y) = f(gx, gy). In particular, f vanishes off ∆ if, and only if, u*_g • f • u_g vanishes off ∆.
Proof. Let h ∈ M(E). Then, applying identity (i), we get (u*_g • h)(a, b) = (h* • u_g)*(a, b) = \overline{(h* • u_g)(b, a)} = \overline{h*(b, ga)} = h(ga, b).
Let h = f • u_g. Then (u*_g • f • u_g)(x, y) = (f • u_g)(gx, y) = f(gx, gy).
Corollary 8.18. For each a ∈ A, the diagonal algebra, and for each g ∈ G, U_g a U*_g is in A.
Lemma 8.19. Let T be a compact, totally disconnected space. Let θ be a homeomorphism of T onto T. Let λ_θ be the automorphism of C(T) induced by θ. Let E be a non-empty clopen set such that, for each clopen Q ⊂ T, (λ_θ(χ_Q) − χ_Q)χ_E = 0. Then θ(t) = t for each t ∈ E. In other words, λ_θ(f) = f for each f ∈ χ_E C(T).
Proof. Let us assume that t_0 ∈ E is such that θ(t_0) ≠ t_0. By total disconnectedness, there exists a clopen set Q with θ(t_0) ∈ Q and t_0 ∉ Q. We have λ_θ(χ_Q) = χ_{θ^{-1}[Q]}. So θ^{-1}[Q] ∩ E = Q ∩ E. But t_0 is an element of θ^{-1}[Q] ∩ E and t_0 ∉ Q.
This is a contradiction.
Let M be a monotone (σ−)complete C*−algebra. Then we recall that an automorphism α is properly outer if there does not exist a non-zero projection e such that α restricts to the identity on eMe.
We now give the key step in the proof of Proposition 8.20 (stated below): let z commute with each element of D[M(E)/I_I] and put a = D(z). We shall show that z = a. By Corollary 8.16, it will suffice to show that D((z − a)U_g) = 0 for each g ∈ G. We remark that D(aU_g) = aD(U_g) = 0 when g is not the identity element of G.
So it is enough to show that if g ∈ G and g is not the identity element of G then D(zU g ) = 0.
We have,
for each b ∈ D[M(E)/I I ], bz = zb. So b D(zU g ) = D(bzU g ) = D(zbU g ) = D(zU g U * g bU g ) = D(zU g )U * g bU g = U * g bU g D(zU g ). This implies that (λ g −1 (b) − b) D(zU g ) = 0. For shortness put c = D(zU g ).
Assume that c ≠ 0. Then, by spectral theory, there exists a non-zero projection e and a strictly positive real number δ such that δe ≤ cc*.
0 ≤ δ(λ g −1 (b) − b)e(λ g −1 (b) − b) * ≤ (λ g −1 (b) − b)cc * (λ g −1 (b) − b) * = 0. So, (λ g −1 (b) − b)e = 0
Cross-product algebras
First let us recall some familiar facts. Let A be a unital C * -algebra. Let α be an automorphism of A. If there exists a unitary u ∈ A such that, for each z ∈ A, α(z) = uzu * then α is said to be an inner automorphism. When no such unitary exists in A then α is an outer automorphism.
Let G be a countable group and let g → β g be an homomorphism of G into the group of all automorphisms of A. Intuitively, a cross-product algebra, for this action of G, is a larger C * -algebra, B, in which A is embedded as a subalgebra and where each β g is induced by a unitary in B. More precisely, there is an injective *-homomorphism j : A → B, and a group homomorphism g → U g , (from G into the group of unitaries in B), such that, for each z ∈ A, jβ g (z) = U g j(z)U * g . So when we identify A with its image j[A] in B, although β g need not be an inner automorphism of A it can be extended to an inner automorphism of the larger algebra B. We also require that B is "in some sense" generated by j[A] and the collection of unitaries U g . When A is a monotone complete C * -algebra, we can always construct a B which is monotone complete.
An account of monotone cross-products when A is commutative was given by Takenouchi [37], see also Saitô [29]. (This was for abelian groups, but everything extends without difficulty to non-abelian groups.) This was later generalised by Hamana [21] to the situation where A is not commutative. For the purposes of this paper we only need to consider the situation when A is commutative. So for the rest of this section, A shall be a monotone complete commutative C * -algebra. Hence A ≃ C(S) where S is a compact, Hausdorff, extremally disconnected space. We shall outline below some of the properties of the monotone cross-product over abelian algebras. More information can be found in [37] and [29]. It turns out that they can be identified with algebras already constructed in Section 8. But, historically, the construction of monotone cross-products came first.
We shall suppose for the rest of this section that S has no isolated points. Then, as remarked in Section 4, any dense subset Y has no isolated points.
We shall use a result from [36] to relate monotone cross-products to the monotone complete C * -algebra of an orbit equivalence relation.
Let G be a countably infinite group of homeomorphisms of S, where the action of G has a free, dense orbit, then we shall show that the corresponding monotone cross-product is isomorphic to one obtained by an action of Z 2 . Let X be any dense subset of S. Then, see Section 4, S can be identified with the Stone-Czech compactification of X. So if f : X → C is a bounded continuous function then it has a unique extension to a continuous function f : S → C.
It follows that f → f is an isometric * −isomorphism of C b (X), the algebra of bounded continuous functions on X, onto C(S). Similarly, as remarked earlier, any homeomorphism θ from X onto X has a unique extension to a homeomorphism θ from S onto S. We may abuse our notation by using θ instead of θ i.e. using the same symbol for a homeomorphism of X and for its unique extension to a homeomorphism of S. Slightly more generally, when X 1 and X 2 are dense subsets of S, if there exists an homeomorphism of X 1 onto X 2 then it has a unique extension to a homeomorphism of S onto itself.
Let g be any homeomorphism of S onto itself. Let α g (f ) = f • g for each f ∈ C(S). Then α g is a * −automorphism of C(S). All * −automorphisms of C(S) arise in this way. Let G be a subgroup of Homeo(S), the group of homeomorphisms of S onto itself. Then the map g → α g is an injective map from G into Aut(C(S)), the group of * −automorphisms. If G is not abelian then this is not a group homomorphism but an injective anti-homomorphism.
Let us recall that for any group Γ, the opposite group, Γ op , is the same underlying set as Γ but with a new group operation defined by x × y = yx. Also Γ op and Γ are isomorphic groups, the map g → g −1 gives an isomorphism. So, in the preceding paragraph, g → α g is a group isomorphism of G op into Aut(C(S)). Since G op is isomorphic to G this is not of major significance. But to avoid confusion, we shall define β g = α g −1 for each g ∈ G. Then g → β g is a group isomorphism of G into the automorphism group of C(S).
Let α be a * −automorphism of A. Then α is said to be properly outer if, for each non-zero projection p, the restriction of α to pA is not the identity. Let Γ be a sub-group of Aut(A) such that every element, except the identity, is properly outer. Then the action of Γ on A is said to be free.
Let G be a countable group of homeomorphisms of S. Then, see [36], if g → β g is a free action of G on A then there exists a dense G-delta set Y ⊂ S, where Y is G−invariant, such that, whenever g ∈ G is not the identity, then g has no fixed points in Y. In other words, for each y ∈ Y, G[y] is a free orbit. Conversely, we have:
Lemma 9.1. Let X be a dense subset of S, where X is G−invariant. Let G[x]
be a free orbit for each x ∈ X. Then g → β g is a free action of G on C(S).
Proof. Let g ∈ G, such that, β g is not properly outer. So, for some non-empty clopen set K,
χ K a = β g (χ K a) = (χ K • g −1 )(a • g −1 ) = χ g[K] (a • g −1 ).
In particular, K = g[K]. Suppose that x_1 ∈ K with g(x_1) ≠ x_1. Then we can find a continuous function a which takes the value 1 at g(x_1) and 0 at x_1. But, from the equation above, this implies 0 = 1. So g(x) = x for each x ∈ K. Because X is dense in S and K is a non-empty open set, there exists y ∈ X ∩ K. So y is a fixed point of g and G[y] is a free orbit. This is only possible if g is the identity element of G. Hence the action g → β_g is free.
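To spell out the final step (our own routine verification, using K = g[K], x_1 ∈ K and the stated choice of a): evaluating both sides of the displayed identity at g(x_1) gives
\[
(\chi_{K}\,a)(g(x_{1})) = a(g(x_{1})) = 1,
\qquad
\bigl(\chi_{g[K]}\,(a\circ g^{-1})\bigr)(g(x_{1})) = a(x_{1}) = 0,
\]
so the identity would force 1 = 0.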
In all that follows, G is a countably infinite group of homeomorphisms of S and Y is a dense G-delta subset of S, where Y is G−invariant. Let g → β g be the corresponding action of G as automorphisms of A. Let M (C(S), G) be the associated (Takenouchi) monotone cross-product. We shall describe the monotone cross-product below.
The key fact is that, provided the G−action is free, the monotone cross-product algebra can be identified with the monotone complete C * -algebra arising from the G−orbit equivalence relation. The end part of Section 8 already makes this plausible. The reader who is willing to assume it can skip to Theorem 9.5.
Before saying more about the monotone cross-product, we recall some properties of the Hamana tensor product, as outlined in [36]. More detailed information can be found in [20,21,35].
(Comment: An alternative, equivalent, approach avoiding the tensor product, is to use the theory of Kaplansky-Hilbert modules [24,25,26,49], see below.)
For the rest of this section, H is a separable Hilbert space and H 1 is an arbitrary Hilbert space. Let us fix an orthonormal basis for H. Then, with respect to this basis, each V ∈ L(H 1 )⊗L(H) has a unique representation as a matrix [V ij ], where each V ij is in L(H 1 ). Let M be a von Neumann subalgebra of L(H 1 ). Then the elements of M ⊗L(H) are those V for which each V ij is in M. Let T be any set and Bnd(T ) the commutative von Neumann algebra of all bounded functions on T. Then, as explained in [36], Bnd(T )⊗L(H) can be identified with the algebra of all matrices [m ij ] over Bnd(T ) for which t → [m ij (t)] is a norm bounded function over T.
We denote the commutative C*-algebra of bounded, complex valued Borel measurable functions on Y by B(Y). Following [35], let M_σ(B(Y), G) be the subalgebra of B(Y) ⊗ L(H) consisting of those elements of the tensor product which have a matrix representation over B(Y) of the form [a_{γ,σ}] (γ ∈ G, σ ∈ G) where a_{γτ,στ}(y) = a_{γ,σ}(τy) for all y ∈ Y and all γ, σ, τ in G. Let E be the orbit equivalence relation on Y arising from G, that is E = {(y, gy) : y ∈ Y, g ∈ G}.
By Lemma 3.1 [36] we have: Lemma 9.2. Assume that each g ∈ G has no fixed points in Y unless g is the identity element. Then M σ (B(Y ), G) is naturally isomorphic to M(E).
For the reader's convenience we sketch the argument. The correspondence between these two algebras is given as follows. Let f ∈ M(E). For each σ, γ in G and, for all y ∈ Y,let a γ,σ (y) = f (γy, σy) .
Then a γ,σ is in B(Y ). Also the norm of [a γ,σ (y)] is uniformly bounded for y ∈ Y. So [a γ,σ ] is in B(Y ) ⊗L(H). Also a γτ,στ (y) = f (γτ y, στ y) = a γ,σ (τ y).
It now follows that [a γ,σ ] is in
M σ (B(Y ), G). Conversely, let [a γ,σ ] be in M σ (B(Y ), G)
. We now use the freeness hypothesis on the action of G on Y to deduce that ({(y, τ y) : y ∈ Y })(τ ∈ G) is a countable family of pairwise disjoint closed subsets of E. So we can now define, unambiguously, a function f : E → C by f (y, τ y) = a ι,τ (y). This is a bounded Borel function on E. It follows from the definition of M σ (B(Y ), G) that a γ,σ (y) = a ι,σγ −1 (γy) = f (γy, σy) for all σ, γ in G and all y in Y . From this it follows that f is in M(E).
Let π be the quotient homomorphism from B(Y ) onto C(S). (Each F in B(S) differs only on a meagre set from a unique function in C(S). Since S\Y is a meagre set, each f in B(Y ) corresponds to a unique element of C(S) which we denote π(f )).
Each element of the Hamana tensor product C(S)⊗L(H) has a representation as a matrix over C(S). [Remark: the product is not straightforward. ] By Theorem 2.5 [35] there exists a σ−homomorphism Π from B(Y ) ⊗L(H) onto C(S)⊗L(H) where Π([b γ,σ ]) = [π(b γ,σ )].
(Comment: As indicated above, we may use Kaplansky-Hilbert modules as an alternative approach. We replace the Hilbert space H by the separable Hilbert space ℓ 2 (G), consisting of all square summable complex functions on G with the standard basis {ξ γ } γ∈G . Let {e γ,σ } be the standard system of matrix units for L(H).
Let M = ℓ 2 (G, C(S)) be the Kaplansky-Hilbert module, over a monotone complete C * -algebra C(S), of all ℓ 2 -summable family of elements in C(S) on G and let {δ σ } be the standard basis for M [26] and, as Kaplansky defined, let {E γ,σ } be the standard system of matrix units for the monotone complete C * -algebra End C(S) (M) of all bounded module endomorphisms on M. Then we know that (C(S)⊗L(H), {1 ⊗ e γ,σ }) is isomorphic to (End C(S) (M), {E γ,σ }) by using [24]. Then Π([b γ,σ ]) can be identified with the ℓ 2 -limit σ∈G γ∈G π(b γ,σ )E γ,σ using Kadison-Pedersen order-convergence [23].) But the Takenouchi monotone cross-product is the subalgebra of C(S)⊗L( H) corresponding to matrices [a γ,σ ] over C(S) for which β τ −1 (a γ,σ ) = a γτ,στ for all γ, σ, τ in G. Equivalently, a γτ,στ (s) = a γ,σ (τ s) for all γ, σ, τ in G and s ∈ S. From this it follows that the homomorphism Π maps M σ (B(Y ), G) onto M (C(S), G). See Lemma 3.2 [36].
The diagonal subalgebra of M(C(S), G) consists of those matrices [a_{γ,σ}] which vanish off the diagonal, i.e. a_{γ,σ} = 0 for γ ≠ σ. Also, β_{τ^{-1}}(a_{ι,ι}) = a_{τ,τ} for each τ ∈ G. It follows that we can define an isomorphism from A onto the diagonal of M(C(S), G) by j(a) = Diag(..., β_{τ^{-1}}(a), ...). We recall, see [37] and [29], that the freeness of the action G implies that the diagonal algebra of M(C(S), G) is a maximal abelian *−subalgebra of M(C(S), G); alternatively, apply the results of Section 8.
Then, by Lemma 3.3 [36], we have Lemma 9.3.
Let C(S) ×_β G be the smallest monotone closed *−subalgebra of M(C(S), G) which contains the diagonal and each unitary which implements the β−action of G. It will sometimes be convenient to call C(S) ×_β G the "small" monotone cross-product. It turns out that C(S) ×_β G does not depend on G, only on the orbit equivalence relation. This is not at all obvious but is an immediate consequence of Theorem 10.1. This theorem shows that when w is a unitary in M(C(S), G) such that w normalises the diagonal then w is in C(S) ×_β G. So, the isomorphism of M(C(S), G) onto M(E)/J maps C(S) ×_β G onto the normalizer subalgebra of M(E)/J.
Does the small monotone cross-product equal the "big" monotone cross-product? Equivalently, is M(E)/J equal to its normaliser subalgebra? This is unknown, but we can approximate each element of M(C(S), G) by a sequence in C(S) ×_β G, in the following precise sense:
Lemma 9.4. Let z ∈ M(C(S), G). Then there exists a sequence (z_n) in C(S) ×_β G such that the sequence D((z − z_n)*(z − z_n)) is monotone decreasing and ⋀_{n=1}^{∞} D((z − z_n)*(z − z_n)) = 0.
Proof. This follows from Proposition 8.15.
Theorem 9.5. Let G j (j = 1, 2) be countable, infinite groups of homeomorphisms of S. Let g → β j g be the corresponding action of G j as automorphisms of C(S). Let Y be a G-delta, dense subset of S such that G j [Y ] = Y and G j acts freely on Y . Let E j be the orbit equivalence relation on Y arising from the action of G j . Suppose that E 1 = E 2 . Then there exists an isomorphism of M (C(S), G 1 ) onto M (C(S), G 2 ) which maps the diagonal algebra of M (C(S), G 1 ) onto the diagonal algebra of M (C(S), G 2 ). Furthermore, this isomorphism maps C(S)
× β 1 G 1 onto C(S) × β 2 G 2 .
Proof. The first part is a straightforward application of Lemma 9.3. Let E = E_1 = E_2. Then both algebras are isomorphic to M(E)/J. The final part is a consequence of Theorem 10.1.
In the next result, we require S to be separable.
Proof of Corollary 9.6. By Theorem 7.10, there exists an isomorphism φ from Z_2 into Homeo(S) such that there exists a dense G-delta set Y in S with the following properties. First, Y is invariant under the action of both G and Z_2. Secondly, the induced orbit equivalence relations coincide on Y. Theorem 9.5 then gives the result.
REMARK: By Lemma 5.1, when G has a dense orbit in S then the action on S is such that the only invariant clopen set is empty or the whole space. This implies that the action g → β_g is ergodic, as defined in [37, 29], which implies that the algebra M(C(S), G) is a monotone complete factor.
Lemma 9.7. Let B be a Boolean σ−algebra. Let (p_n) be a sequence in B which σ−generates B, that is, B is the smallest σ−subalgebra of B which contains each p_n. Let Γ be a group of automorphisms of B. Let Γ be the union of an increasing sequence of finite subgroups (Γ_n). Then we can find an increasing sequence of finite Boolean algebras (B_n) where each B_n is invariant under the action of Γ_n and ∪B_n is a Boolean algebra which σ−generates B.
Proof. For any natural number k the free Boolean algebra on k generators has 2 k elements. So a Boolean algebra with k generators, being a quotient of the corresponding free algebra, has a finite number of elements.
We proceed inductively. Let B 1 be the subalgebra generated by {g(p 1 ) : g ∈ Γ 1 }. Then B 1 is finite and Γ 1 −invariant. Suppose we have constructed B 1 , B 2 ...B n . Then B n ∪ {p n+1 } is a finite set. So its saturation by the finite group Γ n+1 is again a finite set. So the Boolean algebra this generates, call it B n+1 , is finite. Clearly B n ⊂ B n+1 and B n+1 is invariant under the action of Γ n+1 .
A commutative monotone complete C * -algebra is countably σ−generated if its Boolean algebra of projections is σ−generated by a countable subset.
Proposition 9.8. Let the Boolean algebra of projections in C(S) be countably σ−generated by (p n ). Let Γ be a group of automorphisms of C(S). Let Γ be the union of an increasing sequence of finite subgroups (Γ n ). Let g → u g be the unitary representation of Γ in M (C(S), Γ) which implements the action of Γ on the diagonal algebra A. Let π be the canonical isomorphism of C(S) onto A. Then the C *algebra generated by {u g : g ∈ Γ} ∪ {π(p n ) : n = 1, 2...} is the closure of an increasing sequence of finite dimensional subalgebras.
Proof. Let B be the complete Boolean algebra of all projections in A. By Lemma 9.7, we can find an increasing sequence of finite Boolean algebras of projections (B n ) where B is σ−generated by ∪B n and each B n is invariant under the action of Γ n .
Let A n be the (complex) linear span of B n . Then A n is a finite dimensional * −subalgebra of A. Also, for g ∈ Γ n , u g A n u * g = A n . Now let B n be the linear span of {bu g : g ∈ Γ n and b ∈ A n }. Then B n is a finite dimensional * −subalgebra. Clearly (B n ) is an increasing sequence and ∪ ∞ n=1 B n is a * −subalgebra generated by {u g : g ∈ Γ} ∪ {π(p n ) : n = 1, 2...}.
The normaliser algebra
After the proof of Theorem 9.5, we made some claims concerning the normaliser algebra of a monotone cross-product. They follow from Theorem 10.1 below.
In this section M is a monotone complete C * -algebra with a maximal abelian * −subalgebra A and D : M → A a positive, linear, idempotent map of M onto A. It follows from a theorem of Tomiyama [40] that D is a conditional expectation, that is, D(azb) = a(Dz)b for each z ∈ M and every a, b in A. Clearly the monotone cross-product algebras considered in Section 9 satisfy these conditions.
Let G be a countable group. Let g → u_g be a unitary representation of G in N(A, M). Let λ_g(a) = u_g a u*_g. Let α be an automorphism of A. We recall that α is properly outer if, for each non-zero projection e ∈ A, the restriction of α to eA is not the identity map. We further recall that the action g → λ_g is free provided, for each g other than the identity, λ_g is properly outer.
We now continue the proof of Theorem 10.1 (stated below), where w is a unitary in M which normalises A, σ is the automorphism of A induced by w, and D(wau_g) = D(σ(a)wu_g) for each a ∈ A and each g ∈ G. But D is a conditional expectation. So D(wu_g)u*_g a u_g = D(wau_g) = D(σ(a)wu_g) = σ(a)D(wu_g). Because A is abelian, it follows that (σ(a) − λ_g^{-1}(a))D(wu_g) = 0. Let p_g be the range projection of D(wu_g)D(wu_g)* in A. So, for every a ∈ A,
(#) (σ(a) − λ_g^{-1}(a))p_g = 0.
Fix g and h with g ≠ h, and let e be the projection p_g p_h. Then we have, for each a ∈ A,
(λ −1 h (a) − λ −1 g (a))e = (σ(a) − λ −1 g (a))p g p h − (σ(a) − λ −1 h (a))p h p g = 0.
Let b be any element of A and let a = λ_g(b). Then (λ_{h^{-1}g}(b) − b)e = 0. If e ≠ 0, then by (i) it follows that h^{-1}g is the identity element of G. But this implies g = h, which is a contradiction. So 0 = e = p_g p_h. So {p_g : g ∈ G} is a (countable) family of orthogonal projections.
Let q be a projection in A which is orthogonal to each p_g. Then qD(wu_g) = qp_g D(wu_g) = 0. So D(qwu_g) = 0 for each g ∈ G. Hence, by applying hypothesis (ii), qw = 0. But ww* = 1. So q = 0. Thus ⋁_{g∈G} p_g = 1.
From (#) we see that (λ g σ(a) − a)λ g (p g ) = 0.
We define q g to be the projection λ g (p g ). Then (# #) (a − λ g σ(a))q g = 0.
By arguing in a similar fashion to the above, we find that {q_g : g ∈ G} is a family of orthogonal projections in A with ⋁_{g∈G} q_g = 1.
For each g ∈ G, let v g = u g p g .
Then v g is in M 0 and is a partial isometry with v g v * g = q g and v * g v g = p g . By the General Additivity of Equivalence for AW*algebras, see page 129 [3], there exists a unitary v in M 0 such that q g v = v g and vp g = v g p g = u g p g .
From (#), for each a ∈ A, σ(a)p g = u * g au g p g = p g v * avp g = v * avp g . So (σ(a) − v * av)p g = 0. Let y = σ(a) − v * av. Then y * yp g = 0. So the range projection of y * y is orthogonal to p g for each g, and hence is 0. So y = 0. It now follows that waw * = vav * for each a ∈ A. Then v * w commutes with each element
of A. Since A is maximal abelian in M it follows that v * w is in A. Since v is in M 0 , it now follows that w is in M 0 .
We note that the above theorem does not require the action g → λ g to be ergodic.
Free dense actions of the Dyadic Group
We have said a great deal about G−actions with a free dense orbit and the algebras associated with them. It is incumbent on us to provide examples. We do this in this section. We have seen that when constructing monotone complete algebras from the action of a countably infinite group G on an extremally disconnected space S, what matters is the orbit equivalence relation induced on S. When the action of G has a free, dense orbit in S then we have shown that the orbit equivalence relation (and hence the associated algebras) can be obtained from an action of Z 2 with a free, dense orbit. So, when searching for free, dense group actions, it suffices to find them when the group is Z 2 .
In this section we construct such actions of Z_2. As an application, we will find 2^c hyperfinite factors which take 2^c different values in the weight semigroup [34].
We begin with some purely algebraic considerations before introducing topologies and continuity. We will end up with a huge number of examples.
We use F(S) to denote the collection of all finite subsets of a set S. We shall always regard the empty set, the set with no elements, as a finite set. We use N to denote the set of natural numbers, excluding 0. Let C = {f_k : k ∈ F(N)} be a countable set where k → f_k is a bijection. For each n ∈ N let σ_n be defined on C by
σ_n(f_k) = f_{k\{n}} if n ∈ k, and σ_n(f_k) = f_{k∪{n}} if n ∉ k.
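For illustration (an elementary example of our own, taking k = {1, 2}):
\[
\sigma_{2}(f_{\{1,2\}}) = f_{\{1\}},\qquad
\sigma_{3}(f_{\{1,2\}}) = f_{\{1,2,3\}},\qquad
\sigma_{3}\bigl(\sigma_{3}(f_{\{1,2\}})\bigr) = f_{\{1,2\}} .
\]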
Lemma 11.1. (i) For each n, σ n is a bijection of C onto C, and σ n σ n = id, where id is the identity map on C.
(ii) When m ≠ n then σ_m σ_n = σ_n σ_m.
Proof. (i) It is clear that σ n σ n = id and hence σ n is a bijection.
(ii) Fix f k . Then we need to show σ m σ n (f k ) = σ n σ m (f k ). This is a straightforward calculation, considering separately the four cases when k contains neither m nor n, contains both m and n, contains m but not n and contains n but not m.
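As a sample of one of the four cases (our own worked instance; the remaining cases are entirely similar): suppose n ∈ k and m ∉ k, with m ≠ n. Then
\[
\sigma_{m}\sigma_{n}(f_{k}) = \sigma_{m}(f_{k\setminus\{n\}}) = f_{(k\setminus\{n\})\cup\{m\}}
= f_{(k\cup\{m\})\setminus\{n\}} = \sigma_{n}(f_{k\cup\{m\}}) = \sigma_{n}\sigma_{m}(f_{k}),
\]
where the middle equality uses m ≠ n.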
We recall that the Dyadic Group, Z 2 , can be identified with the additive group of functions from N to Z 2 , where each function takes only finitely many non-zero values. For each n ∈ N, let g n , be the element defined by g n (m) = δ m,n for all m ∈ N. Then {g n | n ∈ N} is a set of generators of Z 2 .
Take any g ∈ Z 2 then g has a unique representation as g = g n1 + · · · + g np where 1 ≤ n 1 < · · · < n p or g is the zero. Let us define
ε g = σ n1 σ n2 ...σ np .
Here we adopt the notational convention that σ n1 σ n2 ...σ np denotes the identity map of C onto itself when {n 1 , ..n p } = ∅.
Then g → ε g is a group homomorphism of Z 2 into the group of bijections of C onto C. It will follow from Lemma 11.2 (ii) that this homomorphism is injective.
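For instance (an illustrative computation of our own): for g = g_1 + g_3,
\[
\varepsilon_{g}(f_{\emptyset}) = \sigma_{1}\sigma_{3}(f_{\emptyset}) = \sigma_{1}(f_{\{3\}}) = f_{\{1,3\}} .
\]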
Lemma 11.2. (i) C = {ε g (f ∅ ) : g ∈ Z 2 } = {σ n1 σ n2 ...σ np (f ∅ ) : {n 1 , n 2 · · · , n p } ∈ F (N)}.
In other words C is an orbit.
(ii) For each k ∈ F (N), where k = {n 1 , ..., n p },
σ n1 σ n2 ...σ np (f k ) = f k only if σ n1 σ n2 ...σ np = id.
Proof. (i) Let k = {n 1 , · · · , n p } where n i = n j for i = j. Then σ n1 σ n2 ...σ np (f ∅ ) = f k .
(ii) Assume this is false. Then, for some k ∈ F (N) we have σ n1 σ n2 ...σ np (f k ) = f k where σ n1 σ n2 ...σ np is not the identity map. So we may assume, without loss of generality, that {n 1 , n 2 , ...n p } = m is a non-empty set of p natural numbers.
First consider the case where k is the empty set. Then σ n1 σ n2 ...σ np (f ∅ ) = f ∅ . So f m = f ∅ . But this is not possible because the map k → f k is injective.
So k cannot be the empty set; let k = {m 1 , m 2 , ...m q }. Then σ m1 σ m2 ...σ mq (f ∅ ) = f k . Hence σ n1 σ n2 ...σ np σ m1 σ m2 ...σ mq (f ∅ ) = σ m1 σ m2 ...σ mq (f ∅ ).
On using the fact that each σ_j is equal to its own inverse and that the σ_j are mutually commutative, we find that σ_{n_1}σ_{n_2}...σ_{n_p}(f_∅) = f_∅. But, from the above argument, this is impossible. So (ii) is proved.
In [34] we consider the "Big Cantor Space" {0, 1}^R, which is compact, totally disconnected and separable but not metrisable or second countable. In [34] we pointed out that each compact, separable, totally disconnected space is homeomorphic to a subspace of {0, 1}^R. Let C be a countable subset of {0, 1}^R; then clC, the closure of C, is a compact, separable, totally disconnected space. This implies that C is completely regular and hence has a Stone-Czech compactification βC.
We recall from the work of Section 6, that the regular σ−completion of C(clC) is monotone complete and can be identified with B ∞ (clC)/M (clC). Let clC be the maximal ideal space of B ∞ (clC)/M (clC). Then this may be identified with the Stone space of the complete Boolean algebra of regular open subsets of clC.
By varying C in a carefully controlled way, we exhibited 2^c essentially different extremally disconnected spaces in the form clC.
For each of these spaces clC we shall construct an action of Z 2 with a free dense orbit.
We need to begin by recalling some notions from [34], in particular the notion of a feasible pair (T, O) (recalled below). Condition (ii) of that definition says, in other words, that {n ∈ N : t ∈ O_n and O_n ∩ M = ∅} is an infinite set. An example satisfying these conditions can be obtained by putting T = 2^N, the Cantor space, and letting O be an enumeration (without repetitions) of the (countable) collection of all non-empty clopen subsets.
For the rest of this section (T, O) will be a fixed but arbitrary feasible pair. Let (T, O) be a feasible pair and let R be a subset of T. Then R is said to be admissible if (i) R is a subset of T,with #R = #(T \R) = c.
(ii) O n is not a subset of R for any natural number n.
Return to the example where T is the Cantor space and O an enumeration of the non-empty clopen subsets. Then, whenever R ⊂ 2 N is nowhere dense and of cardinality c, R is admissible.
Throughout this section the feasible pair is kept fixed and the existence of at least one admissible set is assumed. For the moment, R is a fixed admissible subset of T. Later on we shall vary R.
Since F(N) × F(T) has cardinality c, we can identify the Big Cantor space with 2^{F(N)×F(T)}. For each k ∈ F(N), let f_k ∈ 2^{F(N)×F(T)} be the characteristic function of the set {(l, L) : L ∈ F(T\R), l ⊂ k and O_n ∩ L = ∅ whenever n ∈ k and n ∉ l}.
As in [34], let N(t) = {n ∈ N : t ∈ O_n}. By feasibility, this set is infinite for each t ∈ T. It is immediate that f_k(l, L) = 1 precisely when L ∈ F(T\R), l ⊂ k and, for each t ∈ L, N(t) ∩ (k\l) = ∅.
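As a concrete instance of this description (the particular choices k = {1, 2} and l = {1} are ours, for illustration only):
\[
f_{\{1,2\}}(\{1\}, L) = 1
\iff
L \in F(T\setminus R)\ \text{and}\ O_{2}\cap L = \emptyset
\iff
L \in F(T\setminus R)\ \text{and}\ 2\notin N(t)\ \text{for each}\ t\in L .
\]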
Let X R be the countable set {f k : k ∈ F (N)}. Let K R be the closure of X R in the Big Cantor space. Then K R is a (separable) compact Hausdorff totally disconnected space with respect to the relative topology induced by the product topology of the Big Cantor space. We always suppose X R to be equipped with the relative topology induced by K R .
Let C = X R . If the map k → f k is an injection then we can define σ n on X R as before. For each (k, K) ∈ F (N) × F (T ) let E (k,K) = {x ∈ K R : x(k, K) = 1}. The definition of the product topology of the Big Cantor space implies that E (k,K) and its complement E c (k,K) are clopen subsets of K R . It also follows from the definition of the product topology that finite intersections of such clopen sets form a base for the topology of K R . Hence their intersections with X R give a base for the relative topology of X R . But we saw in [34] that, in fact, {E (k,K) ∩ X R : k ∈ F (N), K ∈ F (T \R)} is a base for the topology of X R . Also E (k,K) = ∅ unless K ⊂ T \R.
Since each E (k,K) is clopen, it follows from Lemma 4.1, that E (k,K) is the closure of E (k,K) ∩ X R .
To slightly simplify our notation, we shall write E(k, K) for E (k,K) ∩ X R and E n for E ({n},∅) ∩ X R . Also E c n is the complement of E n in X R , which is E c ({n},∅) ∩ X R .
We shall see, below, that {f h : n ∉ h} = E c n , equivalently, E n = {f h : n ∈ h}. When G is a subset of X R we denote its closure in βX R by clG. When G is a clopen subset of X R then clG is a clopen subset of βX R . So the closure of E n in βX R is clE n , whereas its closure in K R is E ({n},∅) .
We need to show that each σ n is continuous on X R . Since σ n is equal to its inverse, this implies that σ n is a homeomorphism of X R onto itself.
Our first step to establish continuity of σ n is the following.
Lemma 11.4. We have E n = {f k : n ∈ k} and E c n = {f m : n ∉ m}. Also σ n interchanges E n and E c n . Furthermore, for m ≠ n, σ m maps E n onto E n and E c n onto E c n .
Proof. By definition f k ({n}, ∅) = 1 if, and only if, {n} ⊂ k. So f m ∈ E c n precisely when n ∉ m. For f k ∈ E n we have σ n (f k ) = f k\{n} . So σ n maps E n onto E c n . Similarly, it maps E c n onto E n . When m ≠ n, consider f k ∈ E n . Then n ∈ k. So n ∈ k ∪ {m} and n ∈ k\{m}. Thus σ m (f k ) is in E n , i.e. σ m [E n ] ⊂ E n .
Since σ m is equal to its own inverse, we get σ m [E n ] = E n . Similarly σ m [E c n ] = E c n .
Lemma 11.5. The map σ n : X R → X R is continuous.
As in Section 6, ρ is the continuous surjection from S R onto βX R which is dual to the natural injection from C(βX R ) into B ∞ (βX R )/M (βX R ) ≃ C(S R ). Let s 0 ∈ S R such that ρ(s 0 ) = f ∅ .
Theorem 11.6. Let g → ε g be the representation of Z 2 , as homeomorphisms of S R , as defined above. Then the orbit { ε g (s 0 ) : g ∈ Z 2 } is a free, dense orbit in S R . There exists Y, a dense G δ subset of S R , with s 0 ∈ Y, such that Y is invariant under the action ε and the action ε is free on Y.
Proof. By Lemma 11.2(i), X R = {ε g (f ∅ ) : g ∈ Z 2 }. By Proposition 6.4 this implies the orbit { ε g (s 0 ) : g ∈ Z 2 } is dense in S R .
By Lemma 11.2(ii), {ε g (f ∅ ) : g ∈ Z 2 } is a free orbit. This theorem now follows from Theorem 6.8.
Corollary 11.7. The group isomorphism, g → ε g , from Z 2 into AutC(S R ) is free and ergodic.
We shall see below that we can now obtain some additional information about this action of Z 2 as automorphisms of C(S R ). This will enable us to construct huge numbers of hyperfinite, small wild factors.
We have seen that, for each natural number n, σ n is a homeomorphism of X R onto itself with the following properties. First, σ n = σ −1 n . Secondly, σ n [E n ] = E c n and, for m ≠ n, we have σ n [E m ] = E m . (This notation was introduced just before Lemma 11.4, above.) We have seen that σ n has a unique extension to a homeomorphism of βX R , which we again denote by σ n . Then σ n [clE n ] = clE c n and, for m ≠ n, we have σ n [clE m ] = clE m . We define e n ∈ C(βX R ) as the characteristic function of the clopen set clE n . Using the above notation, ε σn is the *−automorphism of B ∞ (βX R )/M (βX R ) ≃ C(S R ) induced by σ n . We have ε σn (e n ) = 1 − e n and, for m ≠ n, ε σn (e m ) = e m . We recall, see the final paragraph of Section 6, that B ∞ (K R )/M (K R ) can be identified with B ∞ (βX R )/M (βX R ) and so with C(S R ). By Proposition 13 [34] the smallest monotone σ−complete *−subalgebra of B ∞ (βX R )/M (βX R ) which contains {e n : n = 1, 2...} is B ∞ (βX R )/M (βX R ) itself. We shall see that the (norm-closed) *−algebra generated by {e n : n = 1, 2...} is naturally isomorphic to C(2^N).
When S ⊂ N we use η_S to denote the element of 2^N which takes the value 1 when n ∈ S and 0 otherwise. Let G_n be the clopen set {η_S ∈ 2^N : n ∈ S}. These clopen sets generate the (countable) Boolean algebra of clopen subsets of 2^N. An application of the Stone-Weierstrass Theorem shows that the *−subalgebra of C(2^N) generated by the constants and the functions χ_{G_n} is dense in C(2^N).
Lemma 11.8. There exists an isometric isomorphism, π 0 , from C(2 N ) into C(βX R ) such that π 0 (χ Gn ) = e n .
Proof. As in Section 6 of [34] we define a map Γ from the Big Cantor space, 2^{F(N)×F(T)}, onto the classical Cantor space, 2^N, by Γ(x)(n) = x(({n}, ∅)). Put J = {({n}, ∅) : n = 1, 2, ...}. Then we may identify 2^N with 2^J. So Γ may be regarded as a restriction map and, by definition of the topology for product spaces, it is continuous.
From the definition of f k , we see that f k ({n}, ∅) = 1 precisely when n ∈ k. So Γf k = η k . Hence Γ[E n ] = {η k : n ∈ k and k ∈ F (N)}. By the basic property of the Stone-Czech compactification, the natural embedding of X R into K R factors through βX R . So there exists a continuous surjection φ from βX R onto K R which restricts to the identity map on X R . Then Γφ maps clE n onto G n and clE c n onto G c n . For f ∈ C(2 N ) let π 0 (f ) = f • Γφ. Then π 0 is the required isometric isomorphism into C(βX R ) ⊂ B ∞ (βX R )/M (βX R ).
Let ε be the action of the Dyadic Group on C(S R ) considered above. Let M R be the corresponding monotone cross-product algebra. So there exists an isomorphism π R from C(S R ) onto the diagonal subalgebra of M R and a group representation g → u g of the Dyadic Group in the unitary group of M R such that u g π R (a)u * g = π R ( ε g (a)). Since each element of the Dyadic Group is its own inverse, we see that each u g is self-adjoint. Since the Dyadic Group is Abelian, u g u h = u h u g for each g and h.
As before, let g n be the n th term in the standard sequence of generators of Z 2 that is, g n takes the value 1 in the n th coordinate and 0 elsewhere. We abuse our notation by writing "u n " for the unitary u gn and "e n " for the projection π R (e n ) in the diagonal subalgebra of M R . We then have:
u_n e_n u_n = 1 − e_n and, for m ≠ n, u_n e_m u_n = e_m.
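These relations can be pictured in an elementary 2×2 toy model (ours, not part of the construction): with e = diag(1, 0) and u the flip matrix,
\[
u = \begin{pmatrix} 0 & 1\\ 1 & 0 \end{pmatrix},\quad
e = \begin{pmatrix} 1 & 0\\ 0 & 0 \end{pmatrix},\quad
u\,e\,u = \begin{pmatrix} 0 & 0\\ 0 & 1 \end{pmatrix} = 1 - e .
\]
In the proof of Lemma 11.9 below, the pairs (u_j, e_j) for j = 1, ..., n play an analogous role and generate a copy of the algebra of 2^n × 2^n complex matrices.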
Let A R = π R [C(S R )] be the diagonal algebra of M R . We recall that the Boolean σ−subalgebra of the projections of A R , generated by {e n : n = 1, 2...}, contains all the projections of A R . Lemma 11.9. Let F be the Fermion algebra. Then there exists an isomorphism Π from F onto the smallest norm closed * −subalgebra of M R which contains {u n : n = 1, 2...} and {e n : n = 1, 2...}. This isomorphism takes the diagonal of F onto the smallest closed abelian * −subalgebra containing {e n : n = 1, 2...}.
Proof. For any projection p we define p (0) = p and p (1) = 1 − p.
For each choice of n and for each choice of α_1, ..., α_n from Z_2, it follows from Lemma 11.8 that the product e_1^{α_1} e_2^{α_2} ... e_n^{α_n} is neither 1 nor 0. In the notation of [50], (e_n) is a sequence of (mutually commutative) independent projections. The Lemma now follows from (the easy part of) the proof of Proposition 2.1 [50]. In particular, for each n, {u_j : j = 1, 2, ..., n} ∪ {e_j : j = 1, 2, ..., n} generates a subalgebra isomorphic to the algebra of all 2^n × 2^n complex matrices.
Lemma 11.11. B_R is a monotone complete factor which contains A_R as a maximal abelian *−subalgebra. There exists a faithful normal conditional expectation from B_R onto A_R. The state space of B_R is separable. The factor B_R is wild and of Type III. It is also a small C*-algebra.
Proof. Let D R be the faithful normal conditional expectation from M R onto A R . The maximal ideal space of A R can be identified with the separable space S R . Then, arguing as in Corollary 3.2, there exists a faithful state φ on A R . Hence φD R is a faithful state on M R and restricts to a faithful state on B R . So, by Lemma 3.1, B R is monotone complete. Let D be the restriction of D R to B R then D is a faithful and normal conditional expectation from B R onto A R .
Since each e n is in B R it follows that A R is a * −subalgebra of B R . Since it is maximal abelian in M R it must be a maximal abelian * −subalgebra of B R . So the centre of B R is a subalgebra of A R . Each u n is in B R and so each central projection of B R commutes with each u n . Since the action ε of the Dyadic group is ergodic, it follows that the only projections in A R which commute with every u n are 0 and 1. So B R is a (monotone complete) factor.
The state space of every unital C * -subalgebra of M R is a surjective image of the state space of M R , which is separable. So the state space of B R is separable. Equivalently, B R is almost separably representable. A slightly more elaborate argument shows that this algebra is small. See the remark preceding Theorem 6 [34].
Since B R contains a maximal abelian * −subalgebra which is not a von Neumann algebra it is a wild factor. Also M R is almost separably representable, hence it possesses a strictly positive state and so is a Type III factor [52], see also [30].
It now follows immediately that the factor is a small C * -algebra. For, by work of K.Saitô [31], a monotone complete factor is a small C * -algebra whenever it has a separable state space.
Proposition 11.12. The homomorphism Π extends to a σ−homomorphism Π ∞ from F ∞ , the Pedersen-Borel envelope of the Fermion algebra, onto B R . Let J R be the kernel of Π ∞ then F ∞ /J R is isomorphic to B R .
Proof. This follows by Corollary 3.5.
Approximately finite dimensional algebras
We could proceed in greater generality, but for ease and simplicity, we shall only consider monotone complete C * -algebra which possess a faithful state. Every almost separably representable algebra has this property and hence so does every small C * -algebra.
Definition 12.1. Let B be a monotone complete C*-algebra with a faithful state. Then B is said to be approximately finite dimensional if there exists an increasing sequence of finite dimensional *−subalgebras (F_n) such that the smallest monotone closed subalgebra of B which contains ∪_{n=1}^{∞} F_n is B itself.
Definition 12.2. If (i) B is a monotone complete C*-algebra which satisfies the conditions of Definition 12.1 and (ii) we can take each F_n to be a full matrix algebra, then B is said to be strongly hyperfinite.
The only item left to prove is that each factor B_R is strongly hyperfinite. But the Fermion algebra is isomorphic to Π[F]. So Π[F] is the closure of an increasing sequence of full matrix algebras. It now follows that B_R is strongly hyperfinite.
Corollary 12.7. For each orbit equivalence relation E(R), corresponding to R, the orbit equivalence factor M E(R) is hyperfinite.
Theorem 12.8. Let S be a separable compact Hausdorff extremally disconnected space. Let C(S) be countably σ−generated. Let G be a countably infinite group of homeomorphisms of S with a free, dense orbit. Let E be the orbit equivalence engendered by G and M E the corresponding monotone complete factor. Suppose that the Boolean algebra of projections of C(S) is countably generated. Then M E is nearly AFD. Let B be the smallest monotone closed * −subalgebra of M E containing the diagonal of M E and the unitaries induced by the action of G. Then B is AFD.
Proof. By Corollary 9.6,we may identify M E with M (C(S), Z 2 ). In other words we can assume that G = Z 2 . We can further assume that g → β g , the action of G as homeomorphisms of S, is free and ergodic. Hence the corresponding action g → β g , as automorphisms of C(S), is free and ergodic. Let π be the isomorphism from C(S) onto the diagonal. Let g → u g be a unitary representation of Z 2 such that u g π(a)u * g = π(β g (a)) for each a ∈ C(S). Let (p n ) be a sequence of projections in C(S) which σ−generate C(S). By Proposition 9.8 the C * -algebra,B 0 , generated by {p n : n = 1, 2...} ∪ {u n : n = 1, 2...} is the closure of the union of an increasing sequence of finite dimensional subalgebras. Let B be the smallest monotone σ−closed subalgebra containing B 0 . (B is the normalizer subalgebra.) Then B is monotone closed (because M E has a faithful state) and AFD. Hence M E is nearly AFD.
In Theorem 12.8 the hypotheses allow us to deduce that we can approximate factors by increasing sequences of finite dimensional subalgebras (AFD) but in the 2 c examples constructed in Section 11, we can do better. We can approximate by sequences of full matrix algebras (strongly hyperfinite). Let M be a monotone factor which is AFD. Is M strongly hyperfinite? Experience with von Neumann algebras would suggest a positive answer but for wild factors this is unknown.
Is M E equal to its normalizer subalgebra? Equivalently is the "small" monotone cross-product equal to the "big" monotone cross-product? In general, this is unknown. This is closely related to the following question, which has been unanswered for over thirty years: Let A be a closed * −subalgebra of L(H). Let A σ be the Pedersen-Borel envelope of A. Let A Σ be the sequential closure of A in the weak operator topology. By a theorem of Davies [8] A Σ is a C * -algebra. Clearly A σ ⊂ A Σ . Are these algebras the same? For some special cases, a positive answer is known, see the discussion in [28].
Let A be a monotone complete factor which is almost separably representable. When A is a von Neumann algebra being AFD is equivalent to being strongly hyperfinite and is equivalent to being injective. But, as we pointed out in Section 1, when A is a wild factor the relationship between injectivity and being AFD is a mystery.
Lemma 4.1. Let K be a compact Hausdorff space and D a dense subset of K. (i) For any open subset U of K we have cl(U) = cl(U ∩ D). (ii) Let U, V be open subsets of K. Then V ⊂ Cl(U) if, and only if, V ∩ D ⊂ Cl(U ∩ D) ∩ D = cl_D(U ∩ D). (iii) Let U be an open subset of K. Then D ∩ int(clU) = int_D(cl_D(U ∩ D)). (iv) If U is a regular open subset of K then U ∩ D is a regular open subset of D in the relative topology of D. Conversely, if E is a regular open subset of D in the relative topology, then E = V ∩ D where V is a regular open subset of K. For any topological space Y we let RegY denote the Boolean algebra of regular open subsets of Y. Let H : P(K) → P(D) be defined by H(S) = S ∩ D.
Lemma 4.2. The function H, when restricted to RegK, is a Boolean isomorphism of RegK onto RegD.
Lemma 4.3. A Hausdorff topological space T is extremally disconnected if, and only if, each regular open set is closed, and hence clopen.
Corollary 4.4. Let D be a dense subset of a compact Hausdorff extremally disconnected space S. Let D be equipped with the relative topology. Then D is an extremally disconnected space. Proof. Let V be a regular open subset of D. Then, by Lemma 4.1, part (iv), there exists U, a regular open subset of S, such that V = U ∩ D. By Lemma 4.3, U is a clopen subset of S. Hence V is a clopen subset of D in the relative topology. Again appealing to Lemma 4.3, we have that D is an extremally disconnected space.
Lemma 4.5. Let D be an extremally disconnected topological space. Also let D be homeomorphic to a subspace of a compact Hausdorff space. Then βD, its Stone-Czech compactification, is extremally disconnected.
Theorem 4.7. Let D be a dense subspace of a compact Hausdorff extremally disconnected space S. Then S is the Stone-Czech compactification of D. More precisely, there exists a unique homeomorphism from βD onto S which restricts to the identity homeomorphism on D.
Lemma 5.1. Let G be a countable group of homeomorphisms of Y. (i) If there exists x_0 ∈ Y such that the orbit G[x_0] is dense in Y then every G−invariant open subset of Y is either empty or dense. (ii) If every non-empty open G−invariant subset of Y is dense then, for each x in Y, the orbit G[x] is either dense or nowhere dense.
Lemma 5.2. Let Y be an extremally disconnected space. Let G be a group of homeomorphisms of Y. Then the action of G is ergodic, if, and only if, the only G−invariant clopen subsets are Y and ∅. Proof. Let U be a G−invariant open set. Then clU and Y\clU are G−invariant clopen sets. Then U is neither empty nor dense, if, and only if, clU and Y\clU are non-trivial clopen sets.
Lemma 7.7. Let A and B be disjoint clopen subsets of D. Let a ∈ A and b ∈ B. Then there exists a homeomorphism h from D onto D with the following properties. First h is strongly G-decomposable. Secondly h interchanges A and B and leaves each point of D\(A ∪ B) fixed. Thirdly, h(a) = b. Fourthly h = h^{-1}.
We give an inductive argument. First, let A = D_1 and let B = D\D_1. By applying Lemma 7.7, there exists a homeomorphism h_1 of D onto itself, where h_1 interchanges D_1 and D\D_1, and maps s_0 to s_1. (So (f) holds for n = 1.) Also
From this it follows that (E_n) (n = 1, 2, ...) is a sequence of pairwise disjoint clopen subsets of D. Similarly, (F_n) (n = 1, 2, ...) is a sequence of pairwise disjoint clopen subsets of D. We find that D = ∪_{n=1}^{∞} E_n and ∪_{n=1}^{∞} F_n = D\{s_0}.
Definition 8.1. Let I be a σ−ideal of the Boolean algebra of Borel subsets of X with X ∉ I.
Definition 8.2. Let B_I be the set of all f in B(X) such that {x ∈ X : f(x) ≠ 0} is in I. Then B_I is a σ−ideal of B(X). (See Section 3.) Let q be the quotient homomorphism from B(X) onto B(X)/B_I.
Lemma 8.3. Let A ∈ I. Then E[A] ∈ I if, and only if, g[A] ∈ I for every g ∈ G.
When f is restricted to [x] × [x] then it becomes the matrix representation of F [x] with respect to the canonical orthonormal basis of ℓ 2 ([x]). It follows that there is a bijection between operators in S and those functions f : E → C for which there is a constant k such that, for each [x] ∈ [X], the restriction of f to [x] × [x] is the matrix of a bounded operator on ℓ 2 ([x]) whose norm is bounded by k. Call such an f matrix bounded. For each matrix bounded f let L(f ) be the corresponding element of S.
Definition 8.4. Let M(E) be the set of all Borel measurable functions f : E → C which are matrix bounded.
Lemma 8.5. The set {L(f) : f ∈ M(E)} is a C*-subalgebra of S which is sequentially closed with respect to the weak operator topology of S. We denote this algebra by L(M(E)). When equipped with the appropriate algebraic operations and norm, M(E) is a C*-algebra isomorphic to L[M(E)]. Proof. See Lemma 2.1 [36].
Lemma 8.6. Let (f_n) be a sequence in the unit ball of M(E) which converges pointwise to f. Then f ∈ M(E) and L(f_n) converges to L(f) in the weak operator topology of S. Also f is in the unit ball of M(E). Proof. The weak operator topology gives a compact Hausdorff topology on the norm closed unit ball of S. Let K_n = {L(f_j) : j ≥ n} and let clK_n be its closure in the weak operator topology of S. Then, by the finite intersection property, there exists T ∈ ∩_{n∈N} clK_n.
Lemma 8.7. D is a positive map. Proof. Each positive element of M(E) is of the form f • f*. But
Thus f(x, y) = 0 for xEy and x ∉ A. Then, for z ∉ A, we have D(f* • f)(z) = Σ_{y∈[z]} |f*(z, y)|² = Σ_{y∈[z]} |f(y, z)|² = 0. So f* ∈ I_I. So I_I is a two-sided ideal of M(E).
Lemma 8.10. If y ∈ I_I then qD(y) = 0. Furthermore y ∈ I_I if, and only if, qD(y • a) = 0 for all a ∈ M(E).
Lemma 8.11. L[I_I] is a (two-sided) ideal of L[M(E)] which is sequentially closed in the weak operator topology of S.
By Lemma 8.5, there exists f ∈ M(E) such that L(f) = T where L(f_r) → L(f) in the weak operator topology. So f_r → f pointwise. Hence D(f_r) → D(f) pointwise. For each r there exists A_r ∈ I such that x ∉ A_r implies D(f_r)(x) = 0. Since I is a Boolean σ−ideal of the Boolean algebra of Borel subsets of X, ∪{A_r : r = 1, 2, ...} is in I. Hence qD(f) = 0. For any a ∈ M(E), (f_r • a) is a sequence in I_I such that (L(f_r • a)) converges in the weak operator topology to L(f)L(a). So, as in the preceding paragraph, qD(f • a) = 0. By appealing to Lemma 8.10 we see that f ∈ I_I. Hence L[I_I] is sequentially closed in the weak operator topology of S.
Corollary 8.12. M(E) is monotone σ−complete and I_I is a σ−ideal. Proof. Each norm bounded monotone increasing sequence in L(M(E)) converges in the strong operator topology to an element T of S. By Lemma 8.5, T ∈ L(M(E)). Then T = L(f) for some f ∈ M(E). Hence L(M(E)) (and its isomorphic image, M(E)) are monotone σ−complete. It now follows from Lemma 8.11 that I_I is a σ−ideal.
Definition 8.13. Let Q be the quotient map from M(E) onto M(E)/I_I.
Proposition 8.14. The algebra M(E)/I_I is monotone σ−complete. There exists a positive, faithful, σ−normal, conditional expectation D from M(E)/I_I onto a commutative σ−subalgebra, which is isomorphic to B(X)/B_I. Furthermore, if there exists a strictly positive linear functional on B(X)/B_I, then M(E)/I_I is monotone complete and D is normal.
So π(g) ∈ I I if and only if |g| 2 ∈ B I i.e. if and only if g vanishes off some set A ∈ I i.e. if and only if g ∈ B I . So π induces an isomorphism from B(X)/B I onto D[M (E)]/I I . Let h ∈ I I . Then, by Lemma 8.10, q D(h) = 0. That is, D(h) ∈ B I . So π D(h) ∈ I I . But π D(h) = Dh. So h ∈ I I implies QDh = 0. It now follows that we can define D on M(E)/I I by D (f + I I ) = QDf. It is clear that D is a positive linear map which is faithful. Its range is an abelian subalgebra of M(E)/I I . This subalgebra is D[M(E)]/I I which, as we have seen above, is isomorphic to B(X)/B I ; we shall denote D[M(E)]/I I by A and call it the diagonal algebra. Furthermore, D is idempotent, so it is a conditional expectation.
Proposition 8.15. Let z ∈ M(E)/I_I. Let (F(n)) (n = 1, 2, ...) be a strictly increasing sequence of finite subsets of G. Let z_n = Σ_{g∈F(n)} D(zU_g)U_{g^{-1}}. Then ⋀_{n=1}^{∞} D((z − z_n)(z − z_n)*) = 0.
Corollary 8.16. Let z ∈ M(E)/I_I such that D(zU_g) = 0 for each g. Then z = 0.
Proposition 8.20. Whenever g ∈ G and g is not the identity, let a → U_g a U*_g be a properly outer automorphism of D[M(E)/I_I]. Then D[M(E)/I_I] is a maximal abelian *−subalgebra of M(E)/I_I. Proof. Let z commute with each element of D[M(E)/I_I]. Let a
for each b in the range of D. It now follows from Lemma 8.19 that λ_{g^{-1}}(ea) = ea for each a in the range of D. But this contradicts the freeness of the action of G. So D(zU_g) = 0. It now follows that z is in D[M(E)/I_I]. Hence D[M(E)/I_I] is a maximal abelian *−subalgebra of M(E)/I_I.
Corollary 8.21. When the action of G is free on X and I is the Boolean ideal of meagre Borel subsets of X then D[M(E)/I_I] is a maximal abelian *−subalgebra of M(E)/I_I.
A unitary w ∈ M(E)/I_I is said to normalise a *−subalgebra A if wAw* = A. For future reference we define the normaliser subalgebra of M(E)/I_I to be the smallest monotone complete *−subalgebra of M(E)/I_I which contains every unitary which normalizes the diagonal subalgebra. It follows from Corollary 8.18 that each U_g is a normalising unitary for the diagonal algebra, A = D[M(E)/I_I]; since each element of A is a finite linear combination of unitaries in A, it follows immediately that A is contained in the normaliser subalgebra.
Here the product B(Y) ⊗ L(H) may be defined as the Pedersen-Borel envelope of B(Y) ⊗_min L(H) inside Bnd(Y) ⊗ L(H). The elements of B(Y) ⊗ L(H) correspond to the matrices [b_ij] where each b_ij ∈ B(Y) and y → [b_ij(y)] is a norm bounded function from Y into L(H).
Lemma 9.3. Let E be the graph of the relation of orbit equivalence given by G acting on Y. Then there exists a σ−normal homomorphism δ from M(E) onto M(C(S), β, G). The kernel of δ is J = {z ∈ M(E) : D(zz*) vanishes off a meagre subset of Y}. Furthermore, δ maps the diagonal subalgebra of M(E) onto the diagonal subalgebra of M(C(S), β, G). In particular, δ induces an isomorphism of M(E)/J onto M(C(S), G).
Corollary 9.6. Let G be a countable, infinite group of homeomorphisms of S. Suppose, for some s_0 ∈ S, G[s_0] is a free, dense orbit. Then there exists an isomorphism φ of Z_2 into Homeo(S), such that there exists an isomorphism of M(C(S), G) onto M(C(S), Z_2) which maps the diagonal algebra of M(C(S), G) onto the diagonal algebra of M(C(S), Z_2).
We recall that a unitary w in M is a normaliser of A if wAw * = A. It is clear that the normalisers of A form a subgroup of the unitaries in M. We use N (A, M ) to denote this normaliser subgroup. Let M N be the smallest monotone closed * −subalgebra of M which contains N (A, M ). Then M N is said to be the normaliser subalgebra of M.
Theorem 10.1. Let M 0 be the smallest monotone closed subalgebra of M which contains A ∪ {u g : g ∈ G}. We suppose that:
(i) The action g → λ g is free.
(ii) For each z ∈ M, if D(zu g ) = 0 for every g ∈ G then z = 0.
Then M 0 contains every unitary in M which normalises A, that is, M 0 = M N .

Proof. Let w be a unitary in M which normalises A. Let σ be the automorphism of A induced by w. Then, for each a ∈ A, we have waw * = σ(a). So wa = σ(a)w. Hence, for each g, we have D(wau g ) = D(σ(a)wu g ).
A pair (T, O) is said to be feasible if it satisfies the following conditions: (i) T is a set of cardinality c = 2 ℵ0 ; O = (O n )(n = 1, 2...) is an infinite sequence of non-empty subsets of T , with O m = O n whenever m = n. (ii) Let M be a finite subset of T and t ∈ T \M. For each natural number m there exists n > m such that t ∈ O n and O n ∩ M = ∅.
Lemma 11.3. Let f k = f m . Then k = m.

Proof. By definition, f k (l, ∅) = 1 precisely when l ⊂ k. Since f k (m, ∅) = f m (m, ∅) = 1 it follows that m ⊂ k. Similarly, k ⊂ m. Hence m = k.
Definition 11.10. Let B R be the smallest monotone σ−complete * −subalgebra of M R which contains Π[F ].
Definition 12.3. Let M be a monotone complete C * -algebra with a faithful state. We call M nearly approximately finite dimensional (with respect to B) if it satisfies the following conditions:
(i) M contains a monotone closed subalgebra B, where B is approximately finite dimensional.
(ii) There exists a linear map D : M → B which is positive, faithful and normal.
(iii) For each z ∈ M, there exists a sequence (z n )(n = 1, 2...) in B, such that D((z − z n )(z − z n ) * ) ≥ D((z − z n+1 )(z − z n+1 ) * ) for each n, and ⋀ ∞ n=1 D((z − z n )(z − z n ) * ) = 0.
n φ n is a faithful, positive linear functional.
We need to consider three possibilities.

(1) First suppose that n ∈ h, that is, f h ∈ E n . Then f h\{n} = σ n (f h ), which is in E(l, L). So l ⊂ h\{n} which implies n / ∈ l. Also N (t) ∩ ((h\{n})\l) = ∅ for all t ∈ L. It follows that l ∪ {n} ⊂ h and, for all t ∈ L, Let f k ∈ E(l ∪ {n}, L). Then l ∪ {n} ⊂ k. Also, for t ∈ L, N (t) ∩ (k\(l ∪ {n})) = ∅. Hence l ⊂ k\{n} and, for t ∈ L, N (t) ∩ ((k\{n})\l) = ∅. This implies σ n (f k ) = f k\{n} ∈ E(l, L). Thus E(l ∪ {n}, L) is a clopen set, which is a neighbourhood of f h and a subset of

(3) We now suppose that n / ∈ h and n / ∈ l. As in (2), statements (a), (b) and (c) hold.

Note: It follows from (1), (2) and (3)

We recall that X R is completely regular because it is a subspace of the compact Hausdorff space K R . Let βX R be its Stone-Čech compactification. Then each continuous function f : X R → X R has a unique extension to a continuous function F from βX R to βX R . When f is a homeomorphism, then by considering the extension of f −1 it follows that F is a homeomorphism of βX R . In particular, each σ n has a unique extension to a homeomorphism of βX R . We abuse our notation by also denoting this extension by σ n .

Let us recall from Section 6 that when θ is in Homeo(βX R ) then it induces an automorphism h θ of C(βX R ) by h θ (f ) = f • θ. It also induces an automorphism. Then H θ is the unique automorphism of B ∞ (βX R )/M (βX R ) which extends h θ . Let S R be the (extremally disconnected) structure space of B ∞ (βX R )/M (βX R ); this algebra can then be identified with C(S R ). Then H θ corresponds to θ, a homeomorphism of S R . Then θ → h θ is a group anti-isomorphism of Homeo(βX R ) onto AutC(βX R ); h θ → H θ is an isomorphism of AutC(βX R ) into AutC(S R ). Also H θ → θ is a group anti-isomorphism of AutC(S R ) into Homeo(S R ). When G is an Abelian subgroup of Homeo(βX R ) it follows that θ → H θ and θ → θ are group isomorphisms of G into AutC(S R ) and Homeo(S R ), respectively.

We recall that g → ε g is an injective group homomorphism of Z 2 into the group of bijections of C onto C. By taking the natural bijection from C onto X R , and by applying Lemma 11.2 and Lemma 11.5, we may regard ε * as an injective group homomorphism of Z 2 into Homeo(X R ). Since each homeomorphism of X R onto itself has a unique extension to a homeomorphism of βX R onto itself, we may identify ε * with an injective group homomorphism of Z 2 into the group Homeo(βX R ). This induces a group isomorphism, g → ε g , from Z 2 into AutC(S R ) by putting ε g = H εg . The corresponding isomorphism, g → ε g , from Z 2 into Homeo(S R ), is defined by ε g = ε g for each g ∈ Z 2 .
We call M hyperfinite if it contains a monotone closed subalgebra B such that (i) M is nearly approximately finite dimensional with respect to B and (ii) B is strongly hyperfinite.
Proposition 12.5. Let S be a compact Hausdorff extremally disconnected space. Let G be a countably infinite group and g → β g be a free action of G as automorphisms of C(S). Let M (C(S), G) be the corresponding monotone cross-product; let π[C(S)] be the diagonal subalgebra; let D : M (C(S), G) → π[C(S)] be the diagonal map. Let g → u g be a unitary representation of G in M (C(S), G) such that β g (a) = u g au * g for each a ∈ π[C(S)]. Let B be the monotone closure of the * −algebra generated by π[C(S)] ∪ {u g : g ∈ G}. If B is AFD then M (C(S), G) is nearly AFD.

Proof. By Theorem 10.1, B is the normaliser subalgebra of M (C(S), G). By Lemma 9.4, M (C(S), G) satisfies condition (iii) of Definition 12.3, with respect to B and the diagonal map D. It follows immediately that M (C(S), G) is nearly AFD whenever B is AFD.

We recall that in Section 3 [34] we constructed a weight semigroup, W, which classifies monotone complete C * -algebras. In particular, for algebras B 1 and B 2 , they are equivalent (as defined in [34]) precisely when their values in the weight semigroup, wB 1 and wB 2 , are the same.

REMARK. The theory of von Neumann algebras would lead us to expect M (C(S), G) = C(S) × β G. But this is an open problem. However it is easy to show that w(M (C(S), G)) = w(C(S) × β G), that is, the two algebras are equivalent.

Let (T, O) be a feasible pair as in Section 11. Let R be the collection of all admissible subsets of T . For each R ∈ R let A R = C(S R ). Then, by Corollary 20 [34], we can find R 0 ⊂ R such that #R 0 = 2 c where c = 2 ℵ0 with the following property. Whenever R 1 and R 2 are distinct elements of R 0 then A R1 is not equivalent to A R2 , that is, wA R1 ≠ wA R2 .

Theorem 12.6. There exists a family of monotone complete C * -algebras, (B λ , λ ∈ Λ), with the following properties: Each B λ is a strongly hyperfinite Type III factor, each B λ is a small C * -algebra, and each B λ is a quotient of the Pedersen-Borel envelope of the Fermion algebra. The cardinality of Λ is 2 c , where c = 2 ℵ0 . When λ ≠ µ then B λ and B µ take different values in the classification semi-group W; in particular, they cannot be isomorphic.

Proof. First we put Λ = R 0 . For each R ∈ R 0 we have a faithful normal conditional expectation from B R onto the maximal abelian * −subalgebra A R . We use the partial ordering defined in [34]. Since A R is a monotone closed subalgebra of B R and B R is a monotone closed subalgebra of M R we obtain A R ≾ B R ≾ M R . Since D R is a faithful normal map from M R onto A R , we have M R ≾ A R . Hence A R ∼ B R ∼ M R . By using the classification weight semi-group W, we get wA R = wB R = wM R . Since, for R 1 ≠ R 2 , we have wA R1 ≠ wA R2 , this implies wB R1 ≠ wB R2 .

References
C.A. Akemann, "Separable representations of a W*-algebra", Proc. Amer. Math. Soc. 24, 354-355 (1970).
B. Balcar and P. Simon, Part III (Appendix on General Topology) in J.D. Monk and R. Bonnet, eds., "Handbook of Boolean Algebras", North Holland, Amsterdam (1989).
S.K. Berberian, Baer *-rings, Springer, Berlin (1972).
D. Bures, "Abelian subalgebras of von Neumann algebras", Mem. Amer. Math. Soc. 110, Providence, R.I. (1971).
E. Christensen, "Non-commutative integration for monotone sequentially closed C*-algebras", Math. Scand. 31, 171-190 (1972).
A. Connes, "Classification of injective factors", Ann. Math. 104, 73-115 (1976).
A. Connes, J. Feldman and B. Weiss, "Amenable equivalence relations generated by a single transformation", Ergodic Theory Dynamical Systems 1, 431-450 (1981).
E.B. Davies, "On the Borel structure of C*-algebras (with an appendix by R.V. Kadison)", Comm. Math. Phys. 8, 147-163 (1968).
R. Dougherty and M. Foreman, "Banach-Tarski decompositions using sets with the property of Baire", J. Amer. Math. Soc. 7, 75-124 (1994).
J.A. Dyer, "Concerning AW*-algebras", Notices Amer. Math. Soc. 17, 788 (1970).
E.G. Effros and Z. Ruan, Operator Spaces, O.U.P., Oxford (2000).
J. Feldman and Calvin C. Moore, "Ergodic equivalence relations, cohomology, and von Neumann algebras I", Trans. Amer. Math. Soc. 234, 289-324 (1977).
J. Feldman and Calvin C. Moore, "Ergodic equivalence relations, cohomology, and von Neumann algebras II", Trans. Amer. Math. Soc. 234, 325-359 (1977).
L. Gillman and M. Jerison, Rings of Continuous Functions, van Nostrand, Princeton (1960).
G. Grätzer, General Lattice Theory, Birkhäuser Verlag, Basel (1978).
P.R. Halmos, Lectures on Boolean Algebras, van Nostrand, Toronto (1963).
M. Hamana, "Injective envelopes of operator systems", Publ. Res. Inst. Math. Sci. Kyoto Univ. 15, 773-785 (1979).
M. Hamana, "Injective envelopes of C*-algebras", J. Math. Soc. Japan 31, 181-197 (1979).
M. Hamana, "Infinite, σ-finite non W*, AW*-factors", Int. J. Math. 12, 81-95 (2001).
M. Hamana, "Tensor products for monotone complete C*-algebras, I", Japan. J. Math. 8, 259-283 (1982).
M. Hamana, "Tensor products for monotone complete C*-algebras, II", Japan. J. Math. 8, 285-295 (1982).
G. Hjorth and A.S. Kechris, "Rigidity theorems for actions of product groups and countable Borel equivalence relations", Mem. Amer. Math. Soc. 177, 1-109 (2005).
R.V. Kadison and G.K. Pedersen, "Equivalence in operator algebras", Math. Scand. 27, 205-222 (1970).
I. Kaplansky, "Algebras of Type I", Ann. Math. 56, 460-472 (1953).
I. Kaplansky, "Projections in Banach algebras", Ann. Math. 53, 235-249 (1951).
I. Kaplansky, "Modules over operator algebras", Amer. J. Math. 75, 839-858 (1953).
R. Mercer, "Convergence of Fourier series in discrete cross products of von Neumann algebras", Proc. Amer. Math. Soc. 94, 254-258 (1985).
G.K. Pedersen, C*-algebras and Their Automorphism Groups, Academic Press, London (1979).
K. Saitô, "AW*-algebras with monotone convergence property and examples by Takenouchi and Dyer", Tohoku Math. J. 31, 31-40 (1979).
K. Saitô, "AW*-algebras with monotone convergence property and type III, non-W*, AW*-factors", Lecture Notes in Mathematics 650, 131-134, Springer, Berlin (1978).
K. Saitô, "The smallness problem for C*-algebras", J. Math. Anal. Appl. 360, 369-376 (2009).
K. Saitô, "C*-algebras with countable order dense subsets and tensor products of monotone complete C*-algebras", Quart. J. Math. Oxford 43, 349-360 (1992).
K. Saitô, "Wild, Type III, monotone complete simple C*-algebras indexed by cardinal numbers", J. London Math. Soc. 49, 543-555 (1994).
K. Saitô and J.D.M. Wright, "On classifying monotone complete algebras of operators", Ricerche Mat. 56, 321-355 (2007).
K. Saitô and J.D.M. Wright, "On tensor products of monotone complete C*-algebras", Quart. J. Math. Oxford 35, 209-221 (1984).
D. Sullivan, B. Weiss and J.D.M. Wright, "Generic dynamics and monotone complete C*-algebras", Trans. Amer. Math. Soc. 295, 795-809 (1986).
O. Takenouchi, "A non-W*, AW*-factor", Lecture Notes in Math. 650, 135-139, Springer, Berlin (1978).
M. Takesaki, Theory of Operator Algebras II, III, Springer, Berlin (2000).
M. Takesaki, Theory of Operator Algebras I, Springer, Berlin (1979).
J. Tomiyama, "On the projection of norm one in W*-algebras", Proc. Japan Acad. 33, 608-612 (1957).
B. Weiss, "A survey of generic dynamics", in Descriptive Set Theory and Dynamical Systems, ed. M. Foreman et al., LMS Lecture Note Series 277, C.U.P., 273-291 (2000).
J.D.M. Wright, "On classifying monotone complete algebras of operators", Contemporary Math. (Amer. Math. Soc.) 503, 307-317 (2009).
J.D.M. Wright, "Paradoxical decompositions of the cube and injectivity", Bull. London Math. Soc. 22, 18-24 (1989).
J.D.M. Wright, "On minimal σ-completions of C*-algebras", Bull. London Math. Soc. 6, 168-174 (1974).
J.D.M. Wright, "Every monotone σ-complete C*-algebra is the quotient of its Baire σ-envelope by a σ-ideal", J. London Math. Soc. 6, 210-214 (1973).
J.D.M. Wright, "Regular σ-completions of C*-algebras", J. London Math. Soc. 12, 299-309 (1976).
J.D.M. Wright, "Stone algebra valued measures and integrals", Proc. London Math. Soc. 19, 107-112 (1968).
J.D.M. Wright, "On some problems of Kaplansky in the theory of rings of operators", Math. Z. 172, 131-141 (1980).
J.D.M. Wright, "A spectral theorem for normal operators on a Kaplansky-Hilbert module", Proc. London Math. Soc. 19, 107-122 (1968).
J.D.M. Wright, "Hyperfiniteness in wild factors", J. London Math. Soc. 38, 492-502 (1988).
J.D.M. Wright, "On C*-algebras which are almost separably representable", J. London Math. Soc. 18, 147-150 (1978).
J.D.M. Wright, "On semifinite AW*-algebras", Math. Proc. Camb. Phil. Soc. 79, 443-445 (1975).
| [] |
[
"An end-to-end, interactive Deep Learning based Annotation system for cursive and print English handwritten text",
"An end-to-end, interactive Deep Learning based Annotation system for cursive and print English handwritten text"
] | [
"Pranav Guruprasad \nBirla Institute of Technology and Science Pilani\nK. K. Birla Goa Campus\nIndia\n\nDepartment of Biotechnology\nBhupat and Jyoti Mehta School of Biosciences\nIndian Institute of Technology\nMadrasChennaiIndia\n",
"Sujith Kumar S \nDepartment of Biotechnology\nBhupat and Jyoti Mehta School of Biosciences\nIndian Institute of Technology\nMadrasChennaiIndia\n",
"Vigneswaran C ",
"V Srinivasa Chakravarthy \nDepartment of Biotechnology\nBhupat and Jyoti Mehta School of Biosciences\nIndian Institute of Technology\nMadrasChennaiIndia\n"
] | [
"Birla Institute of Technology and Science Pilani\nK. K. Birla Goa Campus\nIndia",
"Department of Biotechnology\nBhupat and Jyoti Mehta School of Biosciences\nIndian Institute of Technology\nMadrasChennaiIndia",
"Department of Biotechnology\nBhupat and Jyoti Mehta School of Biosciences\nIndian Institute of Technology\nMadrasChennaiIndia",
"Department of Biotechnology\nBhupat and Jyoti Mehta School of Biosciences\nIndian Institute of Technology\nMadrasChennaiIndia"
] | [] | With the surging inclination towards carrying out tasks on computational devices and digital mediums, any method that converts a task that was previously carried out manually, to a digitized version, is always welcome. Irrespective of the various documentation tasks that can be done online today, there are still many applications and domains where handwritten text is inevitable, which makes the digitization of handwritten documents a very essential task. Over the past decades, there has been extensive research on offline handwritten text recognition. In the recent past, most of these attempts have shifted to Machine learning and Deep learning based approaches. In order to design more complex and deeper networks, and ensure stellar performances, it is essential to have larger quantities of annotated data. Most of the databases present for offline handwritten text recognition today, have either been manually annotated or semi automatically annotated with a lot of manual involvement. These processes are very time consuming and prone to human errors. To tackle this problem, we present an innovative, complete end-to-end pipeline, that annotates offline handwritten manuscripts written in both print and cursive English, using Deep Learning and User Interaction techniques. This novel method, which involves an architectural combination of a detection system built upon a stateof-the-art text detection model, and a custom made Deep Learning model for the recognition system, is combined with an easy-to-use interactive interface, aiming to improve the accuracy of the detection, segmentation, serialization and recognition phases, in order to ensure high quality annotated data with minimal human interaction. | 10.1007/978-981-16-3690-5_50 | [
"https://export.arxiv.org/pdf/2304.08670v1.pdf"
] | 243,896,901 | 2304.08670 | 5e830d5abc47398a1ef3007c5eb0354929c4230b |
An end-to-end, interactive Deep Learning based Annotation system for cursive and print English handwritten text
Pranav Guruprasad
Birla Institute of Technology and Science Pilani
K. K. Birla Goa Campus
India
Department of Biotechnology
Bhupat and Jyoti Mehta School of Biosciences
Indian Institute of Technology
MadrasChennaiIndia
Sujith Kumar S
Department of Biotechnology
Bhupat and Jyoti Mehta School of Biosciences
Indian Institute of Technology
MadrasChennaiIndia
Vigneswaran C
V Srinivasa Chakravarthy
Department of Biotechnology
Bhupat and Jyoti Mehta School of Biosciences
Indian Institute of Technology
MadrasChennaiIndia
An end-to-end, interactive Deep Learning based Annotation system for cursive and print English handwritten text
[email protected], [email protected]
Keywords: Handwritten word detection, Handwritten text recognition, Automated Text Annotation
With the surging inclination towards carrying out tasks on computational devices and digital mediums, any method that converts a task that was previously carried out manually, to a digitized version, is always welcome. Irrespective of the various documentation tasks that can be done online today, there are still many applications and domains where handwritten text is inevitable, which makes the digitization of handwritten documents a very essential task. Over the past decades, there has been extensive research on offline handwritten text recognition. In the recent past, most of these attempts have shifted to Machine learning and Deep learning based approaches. In order to design more complex and deeper networks, and ensure stellar performances, it is essential to have larger quantities of annotated data. Most of the databases present for offline handwritten text recognition today, have either been manually annotated or semi automatically annotated with a lot of manual involvement. These processes are very time consuming and prone to human errors. To tackle this problem, we present an innovative, complete end-to-end pipeline, that annotates offline handwritten manuscripts written in both print and cursive English, using Deep Learning and User Interaction techniques. This novel method, which involves an architectural combination of a detection system built upon a stateof-the-art text detection model, and a custom made Deep Learning model for the recognition system, is combined with an easy-to-use interactive interface, aiming to improve the accuracy of the detection, segmentation, serialization and recognition phases, in order to ensure high quality annotated data with minimal human interaction.
Introduction
Handwriting recognition has been a field of extensive research for the past few decades, and has evolved from approaches that employ the likes of Hidden Markov Models [1] to Deep Learning based approaches [2][3] in the recent past. Optical Character Recognition (OCR) systems that carry out the task of complete digitization, including text detection and recognition, have also been prevalent for a long time. Even though OCR systems have come a long way and produce excellent results on print text, their success has not carried over as well to cursive handwritten text, whose digitization faces various challenges [4]. This can be attributed mainly to variations in handwriting style, spacing, lighting, geometric orientation, noise, segmentation difficulties, and much more. Even though many recent OCR systems use Deep Learning approaches, their results have been impressive only on handwritten documents with certain predefined structural formats, and not on simple, everyday handwritten documents and manuscripts. These drawbacks arise because most OCR implementations are based on templating and feature extraction techniques. Thus, there is still abundant room for improvement in the field of handwritten text recognition and digitization.

To aid further research in these fields, we propose an annotation system for cursive and print handwritten English text that provides fast, state-of-the-art quality annotated data while requiring hardly any human intervention or effort. Our annotation system allows users to create a comprehensive annotated dataset from any kind of handwritten document of their choice, with no prerequisites on style, format, or appearance, and no cumbersome preprocessing. Upon passing mere photographs or scanned copies of the individual pages of a handwritten document into our pipeline, it converts them into a coherent annotated dataset that the user can use further for any desired task.

Our pipeline consists of a word detection system built upon the state-of-the-art text detection model EAST; an interactive user interface built using Python Tkinter, which gives the user an opportunity to remove the common errors in the detection, segmentation and serialization phases that are committed by even the best text detectors; and finally a powerful Deep Learning model for the recognition phase, which combines a custom designed multi-dimensional LSTM (Long Short-Term Memory) network, a Convolutional Neural Network (CNN) and a Connectionist Temporal Classifier (CTC). Even though we provide word level annotations, the recognition is implemented at the character level, thus allowing the recognition system to recognize words beyond its training data. In addition to this, meticulous data preprocessing techniques are applied to the images of the words that are detected and passed on to the recognition system, which helps bolster the robustness of the system and increase the accuracy of the annotations.
Related Works
The advent of Deep Learning and its rise in popularity led to an increased need for annotated data for supervised learning tasks. Most of the early datasets were annotated manually, and only in the recent past have there been attempts to automate the process of annotation and reduce the human effort involved in it. However, it is important to keep in mind that in annotation, feedback from the user is indispensable in some cases in order to create the highest quality of annotated data that is nearly devoid of any errors.
There have been attempts to automate the annotation of data in various fields such as real time video feeds [5], object detection [6], and even the semantic web [7]. Similarly, with the evident need for annotated data for offline handwritten text, especially with the shift towards Deep Learning based approaches for offline handwriting recognition, it is not surprising that there has been a lot of research in this area too. Quite a few works have proposed a systematic arrangement of stages to create a complete annotation engine for handwritten text, comprising varying levels of automation [8][9]. However, a complete end-to-end pipeline to annotate handwritten text with very minimal human interaction is still considered a very challenging task, as discussed in a part of a study by Ung et al. [10].
The two main components of our annotation pipeline are a word detection system and a handwriting recognition system. There has been extensive research in both of these fields over many years, spanning non-Machine Learning approaches, Machine Learning approaches, and most recently, Deep Learning approaches. Lavrenko et al. [11] presented a Hidden Markov Model based holistic word recognition approach, inspired by results in cognitive psychology, for word recognition in handwritten historical documents. This method, which did not segment words into characters, gave a recognition accuracy of 65%, which exceeded the results of other systems at that time. Optical Character Recognition (OCR) systems have been around for a very long time, and there have been many attempts at handwritten OCR for various languages [12]. OCR techniques have evolved considerably over this period, and over the past few years, with the advent of cloud computing, GPUs, and a larger research community, have shifted towards some very impressive Deep Learning based models [13][14][15]. However, as mentioned in the previous section, OCR techniques face various challenges [16] and have not been able to provide exciting results for cursive handwritten text documents that lack a predefined structure.
A work done by Shiedl et al. [17] implements a handwriting recognition system, with an architecture based on Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN) and a Connectionist Temporal Classifier (CTC), which provides impressive results for handwriting recognition. This was one of the main inspirations for the recognition architecture we present in this work. The 2D LSTM implemented in the recognition system of our work, is inspired by a work by Graves et al. [18].
Text detection has also been an extensively researched field, especially with the advancements in image processing, object detection and deep learning. Many works follow different categories of techniques for scene text detection and document text detection, like the sliding window techniques [19], single shot detection techniques [20] and region based text detection techniques [21]. Zhou et al. [22] proposed a simple yet robust deep learning based scene text detector known as EAST -Efficient and Accurate Scene Text Detector. In our work we build upon this EAST model as it not only outperforms a majority of the state-of-the-art text detectors in terms of accuracy and speed, but is easy to build upon, and works satisfactorily on different orientations of text words which provides a great advantage when dealing with handwritten text detection, given the variability in handwriting style from person to person.
Data and Preprocessing
This section describes the datasets used for training and testing the detection and recognition architectures in our pipeline. It also discusses the various preprocessing techniques applied to the word images, both before training the recognition system and before running the recognition system on unseen data, in order to help increase the accuracy of recognition.
Data
The EAST model upon which our detection system is built was trained on the ICDAR 2013 [23] and ICDAR 2015 [24] datasets. The ICDAR 2015 dataset consists of a total of 1500 scene text images, of which 1000 are training data and the remaining are test data. The ICDAR 2013 dataset contains 229 training images which were also used additionally for training. For the recognition system, our model was trained on the entire IAM Offline Handwriting Database [25]. This dataset contains words in both print and cursive handwritten English text written by 657 writers. The pages have been scanned, automatically segmented, and then manually verified. Containing over 1500 pages of scanned text and over 100,000 labeled words, this dataset provides the massive volume of data required to train the deep network of our recognition system well. Apart from this, the dataset offers a large diversity in the various features of offline handwritten text, such as style, geometric orientation and spacing, thus ensuring the robustness of the handwriting recognition system. This enhances the performance of the system not only with respect to new, unseen data, but also for transfer learning. Apart from the IAM offline dataset, we used the CVL dataset [26] to test the robustness of our recognition system, as the handwriting styles in this dataset make the words very hard to recognize and are not considered to be very legible.
Preprocessing
For training the recognition system
The IAM dataset consists of grayscale images of individual words. The images vary in width and height due to the lengths of different words and the heights of different characters. To maintain uniformity, and to make it easier to pass the images as inputs to the model, each image was resized, preserving its aspect ratio, so that it fits within a width of 128 pixels and a height of 32 pixels (with every dimension being at least 1 pixel). This resized image was then copied into a completely white image of fixed size 128x32. Thus, every input had a uniform dimension of 128x32 without getting distorted in the process. Basic data augmentation techniques were used to increase the dataset size and make the model robust to common variations that occur in the dataset. As a part of this, the input images were subjected to random stretches, for which the stretch values were obtained by a random function set within a specified range. They were also shifted horizontally and vertically by small amounts, so that even if parts of a character were cropped out, the recognition system would still be able to recognize the whole word. After these augmentations, the grayscale values of the images were also normalized, to make the task easier for the network. Some of the images in the IAM dataset are damaged; to take care of this problem, black images of the same dimensions were used in place of the damaged files. In addition, during the training process the recognition system was also trained on the same images with added randomized Gaussian noise, which helped reduce the chance of overfitting the IAM dataset and enabled the recognition system to recognize words to a good accuracy level irrespective of the gray values and contamination of a new image due to various natural causes.
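To make the above concrete, the sketch below shows one way such a resize-and-pad step and the random stretch/shift augmentation could be implemented with OpenCV and NumPy. It is a minimal sketch: the exact stretch and shift ranges, and the normalization shown, are illustrative assumptions rather than the precise values used in our experiments.

import random
import cv2
import numpy as np

def preprocess(img, target_w=128, target_h=32, augment=False):
    """Fit a grayscale word image into a fixed 128x32 white canvas without distortion."""
    if img is None:
        # damaged files are replaced by a blank (black) image of the target size
        img = np.zeros((target_h, target_w), dtype=np.uint8)

    if augment:
        # random horizontal stretch (the range is an assumption)
        stretch = random.uniform(-0.25, 0.25)
        new_w = max(1, int(img.shape[1] * (1 + stretch)))
        img = cv2.resize(img, (new_w, img.shape[0]))

    # scale so the word fits inside the target canvas while keeping its aspect ratio
    h, w = img.shape
    f = min(target_w / w, target_h / h)
    new_size = (max(1, int(w * f)), max(1, int(h * f)))   # (width, height) for cv2.resize
    img = cv2.resize(img, new_size)

    # copy the resized word into a white canvas; small random offsets shift it slightly
    canvas = np.full((target_h, target_w), 255, dtype=np.uint8)
    dx = random.randint(0, target_w - new_size[0]) if augment else 0
    dy = random.randint(0, target_h - new_size[1]) if augment else 0
    canvas[dy:dy + new_size[1], dx:dx + new_size[0]] = img

    # normalize gray values to zero mean / unit variance
    out = canvas.astype(np.float32)
    return (out - out.mean()) / (out.std() + 1e-8)

During training, preprocess(img, augment=True) would be called per sample; at inference time augmentation is switched off.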
Processing applied to the new unseen data
Converting to the IAM format
The pictures or scanned copies of the handwritten text documents that have to be annotated may look very different from the images used to train the recognition system. The neural network not only learns how to read the words but also learns features like the contrast, the thickness of the strokes, the style, and even features of the surroundings.

To make sure that these factors do not affect the results, and to cater to a huge variety of handwritten documents, we process the word images (which are passed on in the pipeline to the recognition system) that do not resemble the images in the IAM dataset. To make them closely resemble the IAM images, three operations are carried out:

1. The contrast of the images is increased to a high contrast level.
2. The word images are cropped to a very tight fit around the text.
3. The thickness of the text is increased to make it resemble a bold font style.

Figure 1(a) shows a handwritten image that is initially not in the IAM data format, and Figure 1(b) shows how the preprocessing step converts it to resemble IAM dataset style images. This preprocessing step has been observed to increase the accuracy of recognition significantly for word images that look different from the training data.
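A minimal sketch of these three operations is given below. It assumes dark text on a light background, uses Otsu thresholding for the contrast step and a small morphological erosion to thicken the strokes; the thresholding method and kernel size are illustrative choices, not necessarily the exact ones used in our implementation.

import cv2
import numpy as np

def to_iam_style(gray):
    """Make an arbitrary word crop resemble the IAM images: high contrast,
    tight fit around the ink, and slightly bolder strokes."""
    # 1) boost contrast: Otsu thresholding gives clean dark text on a white background
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # 2) tight crop around the text pixels (the ink is the dark part)
    ink = np.where(bw < 128)
    if ink[0].size == 0:
        return bw  # no ink found, return the image unchanged
    y0, y1 = ink[0].min(), ink[0].max()
    x0, x1 = ink[1].min(), ink[1].max()
    bw = bw[y0:y1 + 1, x0:x1 + 1]

    # 3) thicken strokes: eroding a dark-on-light image makes the dark ink grow
    kernel = np.ones((3, 3), np.uint8)
    return cv2.erode(bw, kernel, iterations=1)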
Removing the slope and slant (Normalizing)
The amount of slope and slant in cursive English words varies from writer to writer. Since this change in slope and slant affects the appearance and geometric orientation of the characters, this variation may have a significant effect on the performance of the recognition system.
To ensure that the recognition system accurately recognizes the characters irrespective of the writer's style, we apply a normalization step at test time that removes the slope and slant, to a fair extent, from all the words that come through the pipeline into the recognition system, so that they all have roughly the same amount of slope and slant. This ensures that a cursive character written by two writers with completely different styles still looks very similar in both cases, and thus prevents the recognition system from making errors. When an input image is passed into the function that carries out this operation, the entire text part of the image is deslanted and made fairly upright, and the empty part of the image is filled with white colour. The algorithm we use for this slope and slant removal is based on a work by Vinciarelli et al. [27], and was implemented using OpenCV (see Figure 2).

The functionality of the proposed pipeline can be divided into independent phases, with each phase contributing sequentially towards the goal of completely annotating the document. In this section, we discuss the functioning of the pipeline, phase by phase, when the photo or scanned copy of a single page of the handwritten text document is passed as input. Figure 6 shows a simple overview of the sequence of phases in our pipeline.
Word detection
The first phase in the pipeline is the detection of each of the words in the photograph of the uploaded page. Our detection model was built upon the pre-trained EAST model, which is a very robust text detector and achieves state-of-the-art text detection accuracies, such as an F-score of 0.7820 on the ICDAR 2015 dataset. The page that is uploaded to our pipeline is resized to a standard 720p image before being passed into the detection system for detecting the individual words. For a page of handwritten text, our detection system, which boasts very impressive speed, produces its final set of bounding boxes around all the words it detects in the page in about 0.5 seconds on average. In this network, one output layer gives the coordinates of all the initial bounding boxes that are predicted for the words detected in the page. In addition to this output layer, there is another output layer with a sigmoid activation function that outputs the probabilities signifying the presence of text in different regions of the image. After these values are obtained, a Non-Maximum Suppression technique is applied to remove the weak and overlapping bounding boxes associated with lower probability scores (those that do not have a probability higher than a specified threshold). This results in the final set of bounding boxes predicted by the system, with their respective coordinates, for all the words it could detect in the page. After this step is complete, there are still some words that are undetected (usually one- or two-lettered words), some wrongly located bounding boxes, and some bounding boxes that do not cover the entire word vertically, horizontally, or along both directions. These are common mistakes committed even by state-of-the-art object and text detection systems. Especially in our case, since the data is handwritten text, it is rife with noise and style variations, which only increases the chances of such detection imperfections. To make sure that we do not miss annotating every single word and symbol in the document, and to ensure that the accuracy of the recognized words used to annotate the document remains high, the pipeline leads into the next phase: the interactive interface that allows the user to intervene and rectify these imperfections with minimal effort.
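The snippet below sketches how such an EAST based word detector can be run with OpenCV's DNN module. The model file name, input size, thresholds and the simplified axis-aligned decoding (which ignores the rotation-angle channel of the geometry output) are illustrative assumptions, not an exact reproduction of our detection system.

import cv2
import numpy as np

def detect_words(image, model_path="frozen_east_text_detection.pb",
                 conf_thresh=0.5, nms_thresh=0.4, size=(1280, 736)):
    """Run an EAST detector via OpenCV DNN and return word boxes in image coordinates.
    The network input width and height must be multiples of 32."""
    H, W = image.shape[:2]
    rW, rH = W / size[0], H / size[1]

    net = cv2.dnn.readNet(model_path)
    blob = cv2.dnn.blobFromImage(image, 1.0, size,
                                 (123.68, 116.78, 103.94), swapRB=True, crop=False)
    net.setInput(blob)
    scores, geometry = net.forward(["feature_fusion/Conv_7/Sigmoid",
                                    "feature_fusion/concat_3"])

    boxes, confidences = [], []
    rows, cols = scores.shape[2:4]
    for y in range(rows):
        for x in range(cols):
            score = scores[0, 0, y, x]
            if score < conf_thresh:
                continue
            # distances from this cell to the top/right/bottom/left edges of the box
            top, right, bottom, left = geometry[0, 0:4, y, x]
            cx, cy = x * 4.0, y * 4.0             # the feature map stride is 4
            w, h = left + right, top + bottom     # axis-aligned approximation
            boxes.append([int(cx - left), int(cy - top), int(w), int(h)])
            confidences.append(float(score))

    # non-maximum suppression removes weak, overlapping detections
    keep = cv2.dnn.NMSBoxes(boxes, confidences, conf_thresh, nms_thresh)
    final = []
    for i in np.array(keep).flatten():
        x, y, w, h = boxes[i]
        final.append((int(x * rW), int(y * rH), int(w * rW), int(h * rH)))
    return final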
Interactive Interface
Editing, Adding and Deleting Bounding Boxes
As mentioned in the previous section, there are always certain cases where human intervention is inevitably required to create near flawless annotations, and this is facilitated in our work by the interactive interface. The interface was built entirely using Tkinter, a Python interface to the Tk GUI toolkit. Tkinter is widely used as a standard GUI for Python implementations and works across all popular operating systems. This part of the pipeline first displays the entire image of the input handwritten page, after resizing, on a Tkinter canvas of fixed size for the user to see. The bounding box coordinates from the detection system of the pipeline are then retrieved and scaled according to the Tkinter canvas size. Once the coordinates are obtained with respect to the canvas size, the bounding boxes are displayed as interactive red rectangles at the positions where the detection system predicted them on the page. Before the serialization of words, this interface provides four main functionalities: adding new bounding boxes, deleting bounding boxes, resizing bounding boxes and moving bounding boxes around.
Adding Bounding Boxes: This feature can be used by the user when the detection system has not recognized certain small words or punctuation symbols. Usually text detection systems do not recognize one or two lettered words that are written so shabbily that they hardly look like text, or some punctuation symbols like full stops or commas, that are written too lightly and are barely recognizable as a written component. This can be corrected by drawing a bounding box over any symbol or word that has not been detected. The drawing of the bounding box is made very easy for the user and a left click and drag of the mouse anywhere on the canvas simply draws a bounding box, and this is logged into the system automatically as a new bounding box, along with its respective details such as coordinates.
Deleting Bounding Boxes: This feature comes in handy when the system has detected wrong spaces of the image as a text containing region, or if there are two bounding boxes over one hyphenated words, and many other scenarios like this. We allow the user to remove any bounding box on the canvas with the simple press of a key, after which the information stored regarding this bounding box is deleted in the system.
Resizing/moving Bounding Boxes: These features are useful when the bounding box coordinates detected by the detection system are not entirely accurate and do not cover the entire part of the word or symbol. This is a very common problem in the already existing methods and can affect the output of the recognition system adversely. The resizing functionality that we provide, allows the user to resize bounding boxes very easily, identical to how one would resize an image in a word document or a powerpoint presentation document, by dragging along the corners or edges of the bounding box. Moving the bounding boxes around as a whole, can be done by hovering the mouse over the desired bounding box and pressing any one of the arrow keys according to the required direction. Once the bounding boxes have been shifted or resized, their respective modified coordinates are automatically stored. Once these editing operations are carried out and the user is satisfied with the positions of the bounding boxes around all the words and symbols in the page, the serialization of the words can be carried out. Figure 3 shows examples of when these operations are required and how our interface allows the user to fix them.
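The sketch below illustrates the general idea behind this interface with a minimal Tkinter canvas that displays the detected boxes, lets the user draw a new box with a left click and drag, and deletes the box under the cursor when the 'd' key is pressed. Resizing and moving are omitted for brevity, and the key bindings shown are illustrative rather than the exact ones used in our tool.

import tkinter as tk

class BoxEditor:
    """Minimal Tkinter canvas for adding and deleting word bounding boxes."""
    def __init__(self, photo, boxes):
        self.root = tk.Tk()
        self.photo = photo  # keep a reference so the image is not garbage collected
        self.canvas = tk.Canvas(self.root, width=photo.width(), height=photo.height())
        self.canvas.pack()
        self.canvas.create_image(0, 0, image=photo, anchor="nw")

        # draw the boxes predicted by the detection system
        for (x0, y0, x1, y1) in boxes:
            self.canvas.create_rectangle(x0, y0, x1, y1, outline="red", tags="box")

        self.start = None
        self.canvas.bind("<ButtonPress-1>", self.on_press)      # start drawing a new box
        self.canvas.bind("<B1-Motion>", self.on_drag)           # stretch it with the mouse
        self.canvas.bind("<ButtonRelease-1>", self.on_release)  # commit it
        self.root.bind("d", self.delete_under_cursor)           # delete the hovered box

    def on_press(self, event):
        self.start = (event.x, event.y)
        self.current = self.canvas.create_rectangle(event.x, event.y, event.x, event.y,
                                                    outline="red", tags="box")

    def on_drag(self, event):
        self.canvas.coords(self.current, self.start[0], self.start[1], event.x, event.y)

    def on_release(self, event):
        self.start = None

    def delete_under_cursor(self, event):
        x = self.canvas.winfo_pointerx() - self.canvas.winfo_rootx()
        y = self.canvas.winfo_pointery() - self.canvas.winfo_rooty()
        for item in self.canvas.find_overlapping(x, y, x, y):
            if "box" in self.canvas.gettags(item):
                self.canvas.delete(item)

    def boxes(self):
        # current set of box coordinates, after all user edits
        return [self.canvas.coords(i) for i in self.canvas.find_withtag("box")]

Here photo would be a tk.PhotoImage (or ImageTk.PhotoImage) of the resized page, and boxes the scaled detections from the previous phase.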
Fig. 3 (a) The detection system missed the detection of a one-lettered word, "a". (b) The interface allows the user to draw a bounding box so that "a" is detected. (c) The detection system does not draw the bounding box accurately enough to enclose the whole word. (d) The interface allows the user to resize the bounding box so that the whole word is enclosed. (e) The detection system detects one hyphenated word as two separate words. (f) The interface allows the user to delete the extra box, then move and resize the other box so that the word is enclosed as one.
Serialization
The serialization phase is very important as it determines the order of the detected words in the page of the document. Once the editing phase is complete and verified by the user, a simple press of a specific key automatically serializes all the words and symbols that have bounding boxes around them; the resulting order is represented on the canvas by straight lines between the bounding boxes, as can be seen in Figure 4. The serialization was implemented using our own sorting algorithm, based on the scaled canvas coordinates of the bounding boxes, the resolution of the image, the space between the lines of text and the space between adjacent words in a line. Once this serialization is complete and visible to the user, the user can further modify it in case they desire to change the order of any of the words. We have enabled a swapping feature to carry this out, facilitated by an elaborate dictionary data structure that contains information regarding the coordinates of each bounding box, its neighbours on either side in the serialized order, and its object tag in the canvas. By right clicking on two bounding boxes and pressing a key, the serialized order of the two bounding boxes is swapped. This change is not only shown on the interface, but also automatically logged in the system, and the new final serialized order is stored.

Fig. 4 The black solid lines between the bounding boxes indicate the order in which the words have been serialized
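Our exact sorting procedure is not reproduced here; the sketch below illustrates one common way of ordering word boxes into reading order, by grouping boxes into text lines using their vertical midpoints and then sorting each line from left to right. The tolerance parameter is an illustrative assumption.

def serialize(boxes, line_tol=0.6):
    """Order word boxes in reading order.

    boxes: list of (x0, y0, x1, y1); line_tol is the fraction of the median box
    height used to decide whether two boxes belong to the same text line."""
    if not boxes:
        return []
    heights = sorted(b[3] - b[1] for b in boxes)
    tol = line_tol * heights[len(heights) // 2]

    # sort by vertical midpoint, then sweep downwards to split into lines
    by_y = sorted(boxes, key=lambda b: (b[1] + b[3]) / 2)
    lines, current = [], [by_y[0]]
    for b in by_y[1:]:
        prev_mid = (current[-1][1] + current[-1][3]) / 2
        mid = (b[1] + b[3]) / 2
        if abs(mid - prev_mid) <= tol:
            current.append(b)
        else:
            lines.append(current)
            current = [b]
    lines.append(current)

    # within each line, read left to right
    ordered = []
    for line in lines:
        ordered.extend(sorted(line, key=lambda b: b[0]))
    return ordered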
Recognition
Once the serialization phase is complete, the order of the words present in the document is finalized. The images of the individual words within each of the bounding boxes are then extracted and stored using OpenCV. They are then passed on, one by one and in the same serialized order, as individual word inputs to the recognition system in the pipeline. Each input image that is passed into the recognition system goes through the three components of our recognition model: the Convolutional Neural Network (CNN), the multi-dimensional LSTM, and the Connectionist Temporal Classifier (CTC). The exact details of these architectures are discussed in the next section. For each individual image passed to the recognition system, the output is a sequence of characters. The recognition system thus outputs a character sequence for each of the detected and serialized images passed into it from the previous phases of the pipeline. As the image passes through the layers of the CNN, the trained layers extract the required features from that image. Three main operations are carried out on the image in each CNN layer: a convolution, a non-linear activation and a pooling operation. Apart from these three operations, we add a Gaussian noise layer that adds standard Gaussian noise to the input, for the reasons explained in Section 3.2.1. Finally, after passing through two fully connected layers, a feature map is output.
The output feature map from the CNN is then passed as input to our 2D LSTM. An LSTM is used instead of a standard unidirectional or bidirectional RNN, as LSTMs prevent loss of information over long distances, and so is very helpful when dealing with long character sequences, which is very important for the task at hand. Our custom 2D LSTM was designed and implemented instead of using a standard one dimensional LSTM, because we felt that considering both the horizontal and vertical dimensions of handwritten text while recognizing it, would be much more effective than just working along one dimension. This is because the English cursive handwritten text has a myriad of variations along both the dimensions, and the system would be very robust if it could learn the features across both these dimensions. Moysset et al. [28] show that 2D LSTMs give great results for recognition of handwritten text, and provide higher performances as compared to single dimensional RNNs or LSTMs even when used on complex, challenging and real life data.
The output sequence of the LSTM is mapped to a matrix which becomes the input to the final CTC layer. Connectionist Temporal Classification, a method by Graves et al. [29], serves two purposes: it not only calculates the loss values required for training, but also decodes the matrix output by the LSTM to obtain the final text present in the input image. During the training process, both the ground truth and the LSTM output matrix are fed to the CTC layer, and based on these, a loss value is calculated which is used to train the system to recognize the right sequence of characters. During the inference phase, only the LSTM output matrix is fed in, and it is decoded by the CTC layer, using a decoding algorithm, to get the text from the image. As the text from each input image is recognized by the recognition system, it is checked for misspellings, which are corrected using a state-of-the-art Python spell checker known as Pyspellchecker. This spell checker, which was developed and released very recently, even offers a feature where users can add words of their choice to the dictionary, thus allowing them to customize the dictionary to suit the task at hand. After this stage, all the recognized words and symbols are stored in the same serialized order. All the stored texts are then retrieved and displayed in text boxes on the Tkinter canvas, under each of their corresponding detected words/symbols, as the annotations for those words/symbols, as seen in Figure 5. We then offer a feature that allows the user to interact again with the pipeline before the final annotations are stored: the user is allowed to edit the recognized text present in each of the text boxes as desired. This ensures that any small mistakes made by the system are corrected completely before the annotations are finalized. After this stage, the user can click a button on the canvas, upon which the final modified annotations for all the words are stored and written into a text file in the same order as they appear in the original handwritten document, and are accessible to the user.
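As an illustration, the snippet below sketches how the CTC beam search decoding and the spell checking step can be wired together with TensorFlow and Pyspellchecker. The character set shown is illustrative and not the exact 79-symbol list used for training, and the beam width is an assumption.

import tensorflow as tf
from spellchecker import SpellChecker

# CHARS must list the non-blank classes in the order used during training;
# the string below is only illustrative.
CHARS = " !\"#&'()*+,-./0123456789:;?ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"

def decode_batch(logits, seq_len, beam_width=50):
    """logits: float tensor [max_time, batch, num_classes] (last class = CTC blank);
    seq_len: int tensor [batch] with the number of valid timesteps per sample."""
    decoded, _ = tf.nn.ctc_beam_search_decoder(logits, seq_len, beam_width=beam_width)
    dense = tf.sparse.to_dense(decoded[0], default_value=-1).numpy()
    return ["".join(CHARS[i] for i in row if i != -1) for row in dense]

spell = SpellChecker()
spell.word_frequency.load_words(["EAST", "Tkinter"])  # optional user-supplied vocabulary

def correct(word):
    # only purely alphabetic tokens are sent to the spell checker
    if not word.isalpha():
        return word
    fixed = spell.correction(word)
    return fixed if fixed else word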
Details of the Recognition model
The combination of a CNN, an RNN and a CTC layer has been gaining popularity in the recent past, especially for text recognition tasks. We modify this architectural combination and improve upon it by building our own 2-dimensional LSTM to replace the standard RNN, which results in a significant increase in performance. Apart from this, the CNN in our system has also been designed to be powerful and robust enough to deal with handwritten text images. The CNN model contains 10 layers, of which each of the first 8 layers performs a convolution, a non-linear activation and a pooling operation. The convolutions are carried out by filters of varying kernel sizes from 7x7 to 3x3, and a standard ReLU non-linear activation function is used. These 8 layers are followed by a Gaussian noise layer and 2 fully connected layers. The input to the CNN is the preprocessed image of dimensions 128x32, and the output is a feature map of size 32x512. This 32x512 feature map, which is passed as input to the LSTM, represents 512 features per time step, where each time step represents a position for a character that may be present in the word to be recognized. Each of these timesteps contains 512 relevant features extracted by the CNN layers. There are 32 timesteps because we set the maximum length of the character sequence that can be recognized to 32. We found that for values greater than 32 the system performed worse and the loss values were considerably higher.
The 2D LSTM, which was built using 256 hidden cells, processes this feature map further by only carrying forward the relevant information. The output of the 2D LSTM is finally mapped to a matrix of dimension 32x80, where 80 is the number of possible characters that can be recognized. This is because, apart from the 79 different characters present in our training dataset, one extra character is required for CTC operations, known as the CTC blank. Therefore, this 32x80 matrix contains the probability scores over the 80 possible entries for each timestep.
This matrix is provided as input to the CTC layer, where during training it is compared with the ground truth tensor to generate a CTC loss value. This CTC loss value was the error metric used for training. We used an RMSProp optimizer with a decaying learning rate initialized to 0.01, and a batch size of 50 for the training process. During the inference phase, the text in the image is decoded by the CTC layer using a CTC beam search decoding algorithm, which is offered as a feature in the neural network module of TensorFlow. This algorithm was used instead of the standard greedy best path decoding algorithm, because of which we were able to improve the accuracy of the recognized words even further. Figure 7 shows a summary of our recognition model architecture.
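For concreteness, the following Keras sketch shows a recognition model in the same spirit: a convolutional front end that maps a 128x32 word image to 32 timesteps, a recurrent layer, a dense layer producing 80 per-timestep character scores, a CTC loss, and an RMSProp optimizer with a decaying learning rate. It is a simplified stand-in, not our exact network: it uses fewer convolutional blocks than the 8 described above, and a standard bidirectional LSTM in place of our custom 2D LSTM.

import tensorflow as tf
from tensorflow.keras import layers

NUM_CLASSES = 80  # 79 characters + 1 CTC blank

def build_recognizer():
    inp = layers.Input(shape=(128, 32, 1))        # preprocessed 128x32 word image
    x = layers.GaussianNoise(0.05)(inp)           # noise layer to discourage overfitting
    for filters, pool in [(32, (2, 2)), (64, (2, 2)),
                          (128, (1, 2)), (128, (1, 2)), (256, (1, 2))]:
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.MaxPooling2D(pool)(x)
    # collapse the height axis: 32 timesteps with 256 features each
    x = layers.Reshape((32, 256))(x)
    # stand-in for the custom 2D LSTM described above
    x = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)
    return tf.keras.Model(inp, layers.Dense(NUM_CLASSES)(x))  # 32x80 character logits

model = build_recognizer()
lr = tf.keras.optimizers.schedules.ExponentialDecay(0.01, 1000, 0.95)
optimizer = tf.keras.optimizers.RMSprop(learning_rate=lr)

def ctc_loss(labels, logits, label_len, logit_len):
    # labels: dense int32 [batch, max_label_len]; logits are batch-major here
    loss = tf.nn.ctc_loss(labels, logits, label_len, logit_len,
                          logits_time_major=False, blank_index=-1)
    return tf.reduce_mean(loss)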
Results and Discussions
The IAM offline dataset, preprocessed as discussed in Section 3.2, was used to train the recognition system with a train-validation split ratio of 95:5; a total of 115320 words were therefore used to train the model. To measure the performance of our recognition system, the Character Error Rate (CER) was used as the error criterion, which is the standard among most works in this field. The CER was calculated from the Levenshtein edit distance between each recognized word and its ground-truth word: the edit distances were summed over all words in the epoch and divided by the total number of characters in all the ground-truth words of that epoch to obtain the final CER value. Table 1 shows the CER obtained using the two different architectures that we tried for the recognition system: one that implemented a multidimensional LSTM and one that used a single-dimensional bidirectional LSTM. The significant reduction in error rate upon using a 2D LSTM justifies our choice of building a custom multidimensional LSTM instead of using a standard unidimensional, bidirectional LSTM. Table 2 further compares our results with other recent works that aimed at similar tasks and were trained on the same dataset. As can be observed, our proposed model performs better than all of them.
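For reference, the CER computation described above can be implemented in a few lines of pure Python; the example strings are illustrative.

```python
# Reference implementation of the character error rate (CER) described above:
# total Levenshtein edit distance divided by the total number of ground-truth
# characters over an epoch.
def levenshtein(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def character_error_rate(ground_truths, predictions):
    total_dist = sum(levenshtein(g, p) for g, p in zip(ground_truths, predictions))
    total_chars = sum(len(g) for g in ground_truths)
    return total_dist / max(total_chars, 1)

print(character_error_rate(["annotation"], ["annotaton"]))  # -> 0.1
```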
The decoding algorithm that we used during inference was a beam search decoding algorithm, as mentioned in Section 4.4. Even though this gave much better results than the greedy CTC decoder, we also tried an even better decoding algorithm known as word beam search decoding, in an attempt to further improve the recognition accuracy. This resulted in a small improvement in validation word accuracy. However, a word beam search decoder limits the recognized words to those present in a dictionary/corpus created in the process. To make sure that our system is not limited by such constraints and performs well on words never seen before, we chose the beam search decoding algorithm over the word beam search decoding algorithm. To further check the robustness of our recognition system, we tested it on handwritten text with writing styles that are harder to interpret than the text in the IAM dataset, using the CVL dataset. Figure 8 shows how the system was able to pass all the phases of the pipeline successfully and recognize the words from the CVL dataset with great accuracy, even though these words are not very legible. It is clear from our results and methodology that the proposed pipeline not only provides very impressive results for automatically annotating the data, but also ensures that the user has to put in a negligible amount of effort to achieve a near-flawless annotation.
Fig. 8 Words from the CVL dataset recognized by our recognition system.
Conclusion and Future works
After observing the burgeoning need for annotated data in the field of handwritten text recognition, we have presented a robust and innovative pipeline that annotates cursive and printed handwritten English text with high accuracy while requiring minimal human effort. The annotation pipeline uses a word detection system based on a state-of-the-art model and combines it with a powerful, custom-designed recognition system. To deal with the common errors committed by these systems, we provide a very intuitive interactive interface that ensures the removal of the majority of flaws with minimum effort, resulting in fast, high-quality annotated data. The potential for this pipeline is very high and it has various applications. It can be used to easily create large amounts of custom annotated data for training systems that focus on problems such as restoring old handwritten scriptures and manuscripts, evaluating exam answer sheets, designing literature-oriented software, digitizing handwritten notes, and much more. This work can be further improved by using deeper networks and larger datasets, given the availability of more powerful computational systems and hardware, which we did not have access to. Better preprocessing techniques and more powerful CTC decoding algorithms are also aspects of the work that can be focused on for significant improvements.
Future work can focus on extending our current work to regional languages, where there is a clear lack of significant amounts of annotated data. Designing annotation pipelines for regional languages may require more sophisticated segmentation techniques and better recognition systems, and is definitely something that requires much more meticulous research. Even though there are a large number of problems that can be solved by developing systems that aim to digitize or restore documents in regional languages, one of the main limiting factors for extensive research in this field is the dearth of high quality annotated data.
Fig. 1 Handwritten text image before and after the preprocessing steps to convert it to IAM-style text: (a) a non-IAM handwritten text image before preprocessing; (b) the same handwritten text image after the preprocessing step.
Fig. 2 Handwritten text image before and after slope and slant removal: (a) a handwritten text image from the IAM dataset; (b) the same handwritten text after slant and slope removal.
Fig. 5 The recognized words are displayed in editable text boxes below their corresponding detected words.
Fig. 6 An overview of the sequence of phases in our pipeline.
Fig. 7 Summary of our recognition system architecture.
Table 1 Comparison of the two recognition models implementing different LSTM architectures (in terms of CER)
Model Architecture | Character Error Rate (CER) | Epochs trained
CNN + Bidirectional LSTM (single dimension) + CTC | 12.36 | 110
CNN + multidimensional LSTM + CTC | 9.3 | 100

Table 2 Comparison with other works that implemented different recognition architectures and were trained on the IAM offline dataset
Work | Model | CER
Ingle R et al. [30] | Two Bidirectional LSTM layers | 12.8
Ingle R et al. [30] | 1-D gated recurrent convolutional layers | 14.1
J. Almazan et al. [31] | Kernelized Common Subspace Regression | 11.27
T. Bluche [32] | Multi-layer perceptrons and Hidden Markov system | 15.6
Our work | CNN + multidimensional LSTM + CTC | 9.3
[1] M. Gilloux, "Hidden Markov Models in Handwriting Recognition," in S. Impedovo (ed.), Fundamentals in Handwriting Recognition, NATO ASI Series F: Computer and Systems Sciences, vol. 124, Springer, Berlin, Heidelberg, 1994.
[2] B. Balci, D. Saadati, and D. Shiferaw, "Handwritten Text Recognition Using Deep Learning," CS231n: Convolutional Neural Networks for Visual Recognition, Stanford University, course project report, 2017.
[3] R. Ptucha, F. Petroski Such, S. Pillai, F. Brockler, V. Singh, and P. Hutkowski, "Intelligent character recognition using fully convolutional neural networks," Pattern Recognition, 88:604-613, 2019.
[4] S. Roy et al., "A Study on Handwriting Analysis by OCR," International Journal of Scientific and Research Publications, vol. 8, issue 1, January 2018.
[5] N. S. Manikandan and K. Ganesan, "Deep Learning Based Automatic Video Annotation Tool for Self-Driving Car," 2019.
[6] T. Kiyokawa, K. Tomochika, J. Takamatsu, and T. Ogasawara, "Fully Automated Annotation With Noise-Masked Visual Markers for Deep-Learning-Based Object Detection," 4:1972-1977, 2019, doi:10.1109/LRA.2019.2899153.
[7] J. Tang et al., "Automatic Semantic Annotation Using Machine Learning," Machine Learning, 2012, pp. 535-578.
[8] U. Bhattacharya, R. Banerjee, S. Baral, R. De, and S. K. Parui, "A Semi-automatic Annotation Scheme for Bangla Online Mixed Cursive Handwriting Samples," 2012 International Conference on Frontiers in Handwriting Recognition, Bari, 2012, pp. 680-685.
[9] L. Stork, A. Weber, J. van den Herik, A. Plaat, F. Verbeek, and K. Wolstencroft, "Automated Semantic Annotation of Species Names in Handwritten Texts," in Advances in Information Retrieval (ECIR 2019), Lecture Notes in Computer Science, vol. 11437, Springer, Cham, 2019.
[10] H. Q. Ung, M. K. Phan, H. T. Nguyen, and M. Nakagawa, "Strategy and Tools for Collecting and Annotating Handwritten Descriptive Answers for Developing Automatic and Semi-Automatic Marking - An Initial Effort to Math," 2019 International Conference on Document Analysis and Recognition Workshops (ICDARW), Sydney, Australia, 2019, pp. 13-18.
[11] V. Lavrenko, T. M. Rath, and R. Manmatha, "Holistic word recognition for handwritten historical documents," First International Workshop on Document Image Analysis for Libraries, Palo Alto, CA, USA, 2004, pp. 278-287.
[12] J. Memon et al., "Handwritten Optical Character Recognition (OCR): A Comprehensive Systematic Literature Review (SLR)," IEEE Access, vol. 8, 2020, pp. 142642-142668.
[13] M. Namysl and I. Konya, "Efficient, Lexicon-Free OCR Using Deep Learning," 2019 International Conference on Document Analysis and Recognition (ICDAR), 2019.
[14] C. Bartz, H. Yang, and C. Meinel, "STN-OCR: A single neural network for text detection and text recognition," arXiv preprint arXiv:1707.08831, 2017.
[15] C.-Y. Lee and S. Osindero, "Recursive Recurrent Nets with Attention Modeling for OCR in the Wild," 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[16] K. Hamad and M. Kaya, "A Detailed Analysis of Optical Character Recognition Technology," International Journal of Applied Mathematics, Electronics and Computers, 4:244, 2016, doi:10.18100/ijamec.270374.
[17] H. Scheidl, "Handwritten Text Recognition in Historical Documents," thesis, Technische Universität Wien, 2018.
[18] A. Graves et al., "Multi-Dimensional Recurrent Neural Networks," in Artificial Neural Networks - ICANN 2007, Lecture Notes in Computer Science, 2007, pp. 549-558.
[19] K. Wang et al., "End-to-End Scene Text Recognition," 2011 International Conference on Computer Vision, 2011.
[20] P. He et al., "Single Shot Text Detector with Regional Attention," 2017 IEEE International Conference on Computer Vision (ICCV), 2017.
[21] Z. Huang et al., "Text extraction in natural scenes using region-based method," Journal of Digital Information Management, vol. 12, no. 4, 2014.
[22] X. Zhou et al., "EAST: An Efficient and Accurate Scene Text Detector," 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017, doi:10.1109/cvpr.2017.283.
[23] D. Karatzas, F. Shafait, S. Uchida, M. Iwamura, L. G. i Bigorda, S. R. Mestre, J. Mas, D. F. Mota, J. A. Almazan, and L. P. de las Heras, "ICDAR 2013 robust reading competition," in Proc. of ICDAR, 2013.
[24] D. Karatzas, L. Gomez-Bigorda, A. Nicolaou, S. Ghosh, A. Bagdanov, M. Iwamura, J. Matas, L. Neumann, V. R. Chandrasekhar, S. Lu, F. Shafait, S. Uchida, and E. Valveny, "ICDAR 2015 competition on robust reading," in Proc. of ICDAR, 2015.
[25] U. Marti and H. Bunke, "The IAM-database: An English Sentence Database for Off-line Handwriting Recognition," International Journal on Document Analysis and Recognition, vol. 5, pp. 39-46, 2002.
[26] F. Kleber, S. Fiel, M. Diem, and R. Sablatnig, "CVL-Database: An Off-line Database for Writer Retrieval, Writer Identification and Word Spotting," in Proc. of the 12th International Conference on Document Analysis and Recognition (ICDAR), 2013, pp. 560-564.
[27] A. Vinciarelli and J. Luettin, "A New Normalization Technique for Cursive Handwritten Words," Pattern Recognition Letters, vol. 22, no. 9, 2001, pp. 1043-1050.
[28] B. Moysset and R. Messina, "Are 2D-LSTM Really Dead for Offline Text Recognition?" International Journal on Document Analysis and Recognition (IJDAR), vol. 22, no. 3, 2019, pp. 193-208, doi:10.1007/s10032-019-00325-0.
[29] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification," in Proceedings of the 23rd International Conference on Machine Learning (ICML '06), 2006.
[30] R. Ingle, Y. Fujii, T. Deselaers, J. Baccash, and A. Popat, "A Scalable Handwritten Text Recognition System," 2019, pp. 17-24, doi:10.1109/ICDAR.2019.00013.
[31] J. Almazan, A. Gordo, A. Fornes, and E. Valveny, "Word spotting and recognition with embedded attributes," IEEE Transactions on Pattern Analysis & Machine Intelligence, (12):2552-2566, 2014.
[32] T. Bluche, "Deep neural networks for large vocabulary handwritten text recognition," Ph.D. dissertation, Université Paris Sud (Paris XI), 2015.
| [] |
[
"Quantum Superposition States: Spin Glasses and Entanglement",
"Quantum Superposition States: Spin Glasses and Entanglement"
] | [
"Aslı Tuncer \nInstitute of Physics\nKoç University\nIstanbulTurkey\n",
"Serhat C Kadıoglu \nInstitute of Physics\nKoç University\nIstanbulTurkey\n"
] | [
"Institute of Physics\nKoç University\nIstanbulTurkey",
"Institute of Physics\nKoç University\nIstanbulTurkey"
] | [] | Spin-glass (SG) is a fascinating system that has garnered significant attention due to its intriguing properties and implications for various research fields. In condensed matter physics, SGs are a prototypical example of disordered systems and have been studied extensively to understand the behavior of complex systems with the random disorder. One of the key characteristics of SGs is that they contain random disorder, which leads to many possible states of the system occurring with very close probabilities. We explore the concept of spin-glass superposition states (SSs), which are equiprobable SSs of possible electronic configurations. Using the Edward-Anderson (EA) type SG order parameter and magnetization, we demonstrate that these SSs can be classified based on their contribution to distinguishing magnetic order (disorder), such as SG, (anti)ferromagnetic, and paramagnetic phases. We also generalize these SSs based on the system size and investigate the entanglement of these phase-based SSs using the negativity measure. We show that the SG order parameter can be utilized to determine the entanglement of magnetically ordered (disordered) phases, or vice versa, with negativity signifying magnetic order. Our findings provide further insight into the nature of quantum SSs and their relevance to SGs and quantum magnets. They have implications for a range of fields, including condensed matter physics, where SGs are a prototypical example of disordered systems. They are also relevant for other fields, such as neural networks, optimization problems, and information storage, where complex systems with random disorder behavior are greatly interested. Overall, our study provides a deeper understanding of the behavior of spin glasses and the nature of quantum SSs, with potential applications in a variety of fields. | null | [
"https://export.arxiv.org/pdf/2304.09782v1.pdf"
] | 258,212,985 | 2304.09782 | 244a60379a46e892a12a51c5a16ae9104b54c338 |
Quantum Superposition States: Spin Glasses and Entanglement
Aslı Tuncer and Serhat C. Kadıoglu
Institute of Physics, Koç University, Istanbul, Turkey
(Dated: April 20, 2023)
arXiv:2304.09782, https://export.arxiv.org/pdf/2304.09782v1.pdf
Correspondence email address: [email protected]
Spin glasses (SGs) are fascinating systems that have garnered significant attention due to their intriguing properties and implications for various research fields. In condensed matter physics, SGs are a prototypical example of disordered systems and have been studied extensively to understand the behavior of complex systems with random disorder. One of the key characteristics of SGs is that the random disorder leads to many possible states of the system occurring with very similar probabilities. We explore the concept of spin-glass superposition states (SSs), which are equiprobable SSs of possible electronic configurations. Using the Edwards-Anderson (EA) type SG order parameter and the magnetization, we demonstrate that these SSs can be classified based on their contribution to distinguishing magnetic order (disorder), such as the SG, (anti)ferromagnetic, and paramagnetic phases. We also generalize these SSs with respect to the system size and investigate the entanglement of these phase-based SSs using the negativity measure. We show that the SG order parameter can be utilized to determine the entanglement of magnetically ordered (disordered) phases, or vice versa, with negativity signifying magnetic order. Our findings provide further insight into the nature of quantum SSs and their relevance to SGs and quantum magnets. They have implications for a range of fields, including condensed matter physics, where SGs are a prototypical example of disordered systems, and they are also relevant for other fields, such as neural networks, optimization problems, and information storage, where complex systems with random disorder are of great interest. Overall, our study provides a deeper understanding of the behavior of spin glasses and the nature of quantum SSs, with potential applications in a variety of fields.
I. INTRODUCTION
Spin glasses are a fascinating phenomenon in condensed matter physics due to their unique microscopic properties. These systems contain random disorder, which results in many possible states occurring with similar probabilities [1][2][3] and prevents the spins from arranging into a particular configuration that satisfies the energy minimum of every interaction [4,5]. Such a situation is called frustration [6]. Even if a system is unfrustrated classically, it may exhibit frustration in the quantum case [7][8][9][10][11] due to non-commutativity and entanglement [9,12]. Interestingly, even with just a few entangled elements, novel phenomena can occur in the quantum domain [9,12-14]. Quantum fluctuations and entanglement can also play essential roles in the behavior of spin glasses, since spin-glass order occurs at low temperatures [13] where thermal fluctuations do not dominate. Quantum interference may also lead to unexpected effects, such as the suppression of tunneling [15] or the formation of localized states [16,17]. Spin glasses, on the other hand, must contain frustrated spins. Viewed from the quantum perspective, the complex behavior of spin glasses can thus be understood as arising from quantum interference and the formation of entanglement.
Frustration results in a complex and disordered arrangement of the magnetic spins, which is usually described by a distribution of spin configurations rather than a well-defined pattern, since frustration leads to many local minima in the free-energy landscape and makes it difficult to single out any one configuration among the equiprobable ones. In contrast to this approach, we investigate the existence of spin glasses in distinct quantum superposition states of possible electronic configurations, without needing any ensemble of spin configurations. As a result, a spin glass can be thought of as becoming frozen in any one of these configurations. We directly measure the local magnetizations of the spins and the Edwards-Anderson SG order parameter to describe the corresponding magnetic phases of a system with all-to-all interactions and randomly distributed antiferromagnetic (AFM) impurities. The paper is organized as follows. In Section II we introduce our model and describe the procedure for identifying the superposition states that contribute to the SG phase. In Section III we extend our results to the well-known magnetic orders (and disorder) and classify these superposition states (SSs) according to their phase contributions, namely the paramagnetic (PM), ferromagnetic (FM), and antiferromagnetic phases. Once we identify the phase-based superposition states, we discuss the role of entanglement and the relationship between the SG order parameter and entanglement in Section IV. Finally, we conclude in Section V with an outlook on our results and their impact on future theoretical investigations and experimental implementations of current quantum technologies.
II. MODEL
We consider N Ising spins interacting through an infinite-ranged exchange interaction, with the Hamiltonian
H = −Σ_{(i,j)} J_{ij} σ^z_i σ^z_j .    (1)
Here σ^z_i are Pauli z-matrices with i, j = 1, . . . , N, and the interaction couplings are quenched variables governed by a Gaussian distribution with variance J²/N and zero mean, ⟨J_ij⟩ = 0, P(J_ij) ∝ exp(−N J²_ij / (2J²)).
The randomly distributed antiferromagnetic interactions are the source of frustration whenever the system's Hamiltonian cannot be minimized for at least one spin or bond. Although there is no direct analogy between geometric frustration in classical systems and its quantum counterpart, frustration has been defined in quantum systems in relation to entanglement and coherence effects [18]. In this study, we concentrate mainly on N-atom systems with all-to-all interactions. Since each spin has two states, these Ising spin systems exhibit an exponentially large phase space of 2^N configurations.
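A minimal NumPy sketch of this setup is given below: it draws the quenched Gaussian couplings with variance J²/N and evaluates the classical energies of all 2^N configurations, which coincide with the spectrum of Eq. (1) because the Hamiltonian is diagonal in the σ^z basis. The system size N = 3 and J = 1 are illustrative choices.

```python
# NumPy sketch of the quenched couplings and the diagonal Ising energies of
# Eq. (1); N = 3 spins and J = 1 are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
N, J = 3, 1.0

# Symmetric Gaussian couplings with zero mean and variance J^2 / N.
J_ij = rng.normal(0.0, J / np.sqrt(N), size=(N, N))
J_ij = np.triu(J_ij, k=1)          # keep each pair (i < j) once

# Enumerate all 2^N spin configurations (sigma^z eigenvalues +/-1).
configs = np.array([[1 - 2 * ((c >> i) & 1) for i in range(N)]
                    for c in range(2 ** N)])

# Since H is diagonal in the sigma^z basis, its spectrum is just the classical
# energies E(sigma) = -sum_{i<j} J_ij sigma_i sigma_j.
energies = np.array([-s @ J_ij @ s for s in configs])
print(energies)
```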
We start from the simplest model, an interacting three-qubit system that may exhibit frustration; see Figure 1.
In the classical case, the system must be taken to the thermodynamic limit in order to see a phase transition, so it would be impossible to observe a phase transition in a three-spin model. In addition, thermal or quantum fluctuations drive the system through the transition, but we do not consider thermal fluctuations in this work.
One of the well-known ways to inject quantum effects into the classical version of the Ising model, the so-called Edwards-Anderson spin-glass Hamiltonian (1), is to add a transverse field. The quantum fluctuations then arise from a competition between the spin-spin interactions and the applied external field. In contrast to this approach, the present study assumes that quantum fluctuations and frustration are introduced into the system via direct injection of quantum superposition states. Through measurement of the corresponding order parameters, such as the local magnetization of the i-th spin for a given realization α, m^α_i = ⟨σ_i⟩_α, we were able to determine the specific superposition states that contribute to the various magnetic phases. The EA spin-glass order parameter, which corresponds to the overlap of the local magnetizations [19] and is given in (2), was also utilized in our analysis.
q^α_EA = (1/N) Σ_{i=1}^{N} (m^α_i)² .    (2)
We adapt these definitions to our SS concept by taking the averages over our superposition states; the average magnetization is

m = (1/N) Σ_{i=1}^{N} ⟨ψ_suppos.| σ^z_i |ψ_suppos.⟩ ,    (3)

and the spin-glass order parameter includes, by definition, the overlap between the discrete states of the entities of the SS. In this way, we have found contributions to SG order from equally weighted superposition states, in both the energy and the computational basis, with non-zero q_EA and m = 0. Besides the SSs that contribute to the SG phase, we also obtained PM SSs, for which both q_EA = 0 and m = 0 vanish, and (anti)ferromagnetic SSs, for which both are non-zero. All these different ordered and disordered SSs are given in Section III, and their explicit classification and association with entanglement are explained in Section IV. The state of a system consisting of N spins can be considered as a product state, and the initial state is set to be a superposition of all possible configurations,

|ψ⟩ = ⊗_{i=1}^{N} |→⟩ with |→⟩ = (1/√2)(|↑⟩ + |↓⟩),
where |↑⟩ and |↓⟩ are the eigenbasis of the Pauli-z operator. Each spin has two possible orientations, up or down, in this representation. However, the superposition states that we create by summing these product states will in general no longer be product states, because of the quantum correlations they contain. While the random disordered interactions between spins force a fixed orientation that minimizes the energy, some of the spins may remain in the |→⟩ state, even in the absence of an external field, due to frustration.
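As a sketch of how Eqs. (2) and (3) can be evaluated directly on a state vector, the NumPy code below computes the local magnetizations, the average magnetization, and q_EA; the example state is the frustrated three-qubit superposition of Eq. (4) below. Note that σ^z eigenvalues ±1 are used here, whereas the normalization used in the figures (with q_max = 1/4) suggests spin values ±1/2, so the numerical values may differ by a constant factor.

```python
# NumPy sketch of the local magnetizations, average magnetization (Eq. (3)) and
# the EA order parameter (Eq. (2)) evaluated directly on a state vector.
import numpy as np

def local_magnetizations(psi, n):
    """<sigma^z_i> for each spin i of an n-qubit state vector psi."""
    probs = np.abs(psi) ** 2
    mags = np.zeros(n)
    for idx, p in enumerate(probs):
        for i in range(n):
            bit = (idx >> (n - 1 - i)) & 1      # 0 -> spin up, 1 -> spin down
            mags[i] += p * (1 - 2 * bit)
    return mags

def order_parameters(psi, n):
    m_i = local_magnetizations(psi, n)
    return m_i.mean(), np.mean(m_i ** 2)        # (m, q_EA)

# Example: the frustrated three-qubit state of Eq. (4), |0>|1>(|0>+|1>)/sqrt(2).
n = 3
psi = np.zeros(2 ** n)
psi[0b010] = psi[0b011] = 1 / np.sqrt(2)
m, q_ea = order_parameters(psi, n)
print(m, q_ea)          # m = 0, q_EA = 2/3 for this particular state
```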
In Figure 1, the antiferromagnetic (J < 0) interactions may cause geometric frustration in the first triangle, as shown by the two-sided arrows denoting the frustrated spin. A qubit state can be in any superposition of the form |ψ⟩ = a_1|0⟩ + a_2|1⟩ as long as |a_1|² + |a_2|² = 1; this is not the case for classical spins. We illustrate such a frustrated configuration state of a three-body system, separated into two states that do not have frustration, in the σ^z basis:
|ψ⟩ = |0⟩ ⊗ |1⟩ ⊗ (|0⟩ + |1⟩)/√2 = (1/√2)(|0⟩ ⊗ |1⟩ ⊗ |0⟩ + |0⟩ ⊗ |1⟩ ⊗ |1⟩) .    (4)
The corresponding phases can be obtained from the relevant order parameters. Therefore, although many different superposition states could be considered via the energy eigenstates or the computational basis vectors, we will only consider equally weighted two-state superpositions in the computational basis. We will continue with the standard basis of energy-state products from now on. This standard basis is |e⟩, |g⟩ for one atom; |ee⟩, |eg⟩, |ge⟩, |gg⟩ for two; and |eee⟩, |eeg⟩, |ege⟩, |gee⟩, |egg⟩, |geg⟩, |gge⟩, |ggg⟩ for three atoms. Figure 1 shows the three-atom standard basis at the bottom left. The corresponding positions in the natural basis of the levels are given in the circles next to the levels, and the numbers with the green dots below the levels denote the placement of the excited atoms and the ground-state atoms, respectively.
All of our superposition states that contribute to the spin-glass order should satisfy two simple rules:
(i) the SG-contributing SSs must have at least one co-excited spin;
(ii) the total number of excited-spin labels should not exceed the total number of spins in the system.
The first rule denotes the overlap between the states, and the second one requires the SS to have at least one frustrated spin, i.e., an equally weighted qubit state, a so-called cat state denoted by C. Once we address the excited spins, we can write down the spin-glass SSs quickly for each system size. However, for ferromagnetic (antiferromagnetic) SSs, the number of labels can be greater than the number of spins in the system. Understandably, the ferromagnetic order does not have to include any frustrated spin, and all the spins can point in the same (up or down) direction. It can be summarized as follows: one excited spin paired with a set of cat states {C, C, . . . } contributes to the FM/AFM orders. Fixing a spin in a definite state can be thought of as making a local measurement on it, and this causes a loss of quantumness. We examine this loss in terms of negativity, a measure of quantum entanglement defined as [20]
N = (‖ρ^{T_1}‖_1 − 1)/2 .    (5)
Here ‖ρ^{T_1}‖_1 is the trace norm of the partial transpose of the quantum state ρ = |ψ_suppos⟩⟨ψ_suppos|. The details of the negativity calculations and the results are explained in Section IV.
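A compact NumPy sketch of Eq. (5) is given below: it builds ρ = |ψ⟩⟨ψ|, applies the partial transpose over a chosen subset of qubits, and evaluates the trace norm from the eigenvalues of the Hermitian partially transposed matrix. The two-qubit Bell-state example, with negativity 1/2, is illustrative and does not reproduce any particular figure of the paper.

```python
# NumPy sketch of the negativity of Eq. (5): partial transpose over a chosen
# subsystem, then (||rho^{T_1}||_1 - 1) / 2.  The Bell-state example is illustrative.
import numpy as np

def negativity(psi, n, sub):
    """Negativity of |psi> (n qubits) for the bipartition sub | rest."""
    rho = np.outer(psi, psi.conj())
    dims = [2] * n
    # Reshape to a 2n-index tensor and transpose the indices of subsystem `sub`.
    tensor = rho.reshape(dims + dims)
    perm = list(range(2 * n))
    for q in sub:
        perm[q], perm[n + q] = perm[n + q], perm[q]
    rho_pt = tensor.transpose(perm).reshape(2 ** n, 2 ** n)
    trace_norm = np.sum(np.abs(np.linalg.eigvalsh(rho_pt)))
    return (trace_norm - 1) / 2

# Two-qubit Bell state (|00> + |11>)/sqrt(2): negativity 1/2.
bell = np.zeros(4)
bell[0b00] = bell[0b11] = 1 / np.sqrt(2)
print(negativity(bell, 2, sub=[0]))   # ~0.5
```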
III. QUANTUM SUPERPOSITION STATES FOR MAGNETIC ORDERS AND DISORDER
The magnetic phases of interest can be classified in terms of the magnetization and the EA spin-glass order parameter q_EA [19]. All the corresponding magnetization and q_EA values of the equally weighted and expanded superposition state spaces are displayed in Figure 2(a) and Figure 2(b), respectively. The triangular parameter domain can be seen in all cases; moreover, the maximum value of q_EA remains the same. For each system size, it is possible to see all the magnetic phases of interest.
In both figures, the appearance of several different m values for the same q_EA value indicates spontaneous symmetry breaking [21]. While this symmetry breaking signifies the FM (AFM) order, the PM phase corresponds to the point m = 0 and q_EA = 0. The red dashed line starts in the PM regime, and along this line we obtain the SSs corresponding to the SG regime. For simplicity, we will continue with a particular subset of this SS space, namely the equally weighted superposition states. Even if the system size is enlarged and extra SSs arise, the triangular structure of the diagram remains the same. In other words, the symmetry breaking can still be observed for N → ∞ in terms of the different q_EA values.
Let us return to the SSs that are equally weighted. Figure 3 shows how binary superpositions in a system of dimension N = 3 correspond to different quantum magnets, including the SG, FM, AFM, and PM phases. As shown in Figure 4(a), we have derived the matrix representation of the binary states that contribute exclusively to the SG phase in systems comprising N = 3, 4, 5, and 6 atoms. In order to maintain clarity, we have elected to represent only the SG SSs, omitting the explicit depiction of the superposition states that pertain to other magnetic phases. However, it is pertinent to note that the off-diagonal elements do indeed play a role in the PM phase, as they are composed of non-overlapping product spin states. Furthermore, the upper and lower triangular regions of the non-diagonal elements of the matrix are associated with the FM or AFM phase, depending on the specific binary superpositions involved. Consequently, this superposition matrix can be regarded as an evenly distributed superposition state space over the configuration space. The matrix is divided into distinct quantum magnetic phases, thereby serving as a phase diagram. Additionally, we have noted that the phase-partition pattern remains consistent and scalable as the system size is expanded. The differentiation of the q_EA order parameter at each even value of the system size N can be associated with replica symmetry breaking in spin-glass systems [13]. For example, while there is a unique q_EA value for N = 3, the sizes N = 4 and N = 5 have two different values of q_EA. Figure 4 illustrates a recursive pattern, wherein taking a partial trace over the most recently added qubit reduces the system state space to the previous state space. Co-excited atoms in SSs can also be interpreted as an overlap between two states. This overlap scales with the system size, similar to the differentiation of q_EA: the number of possible overlapped atoms increases with each even value of N. However, unlike the SG SSs, the PM-phase SSs have neither co-excited spins nor any overlap. All PM-phase SSs are maximally entangled, similar to Greenberger-Horne-Zeilinger (GHZ) states [22]. We illustrate the entanglement of the spin-glass SSs in the first line of Figure 4(b) and the entanglement of the SSs corresponding to the other magnetic phases in the second line of Figure 4(b).
IV. THE NEW ENTANGLEMENT WITNESS OF THE MAGNETIC STRUCTURES
The presence or absence of overlap between states in their superposed states corresponds to the ordered/disordered states of the system. Moreover, the overlapping superposition states can exhibit different magnetic orders, such as spin-glass, ferromagnetic, and antiferromagnetic order. After defining the order/disorder distinction based on the presence or absence of overlap, we first observe that paramagnetic (disordered) states, which correspond to zero overlap, also possess maximum entanglement. From this perspective, we calculated the entanglement of the superposition states using the logarithmic negativity in order to quantify the relationship between the amount of overlap and the entanglement. Considering that the Edwards-Anderson spin-glass order parameter measures the degree of overlap between two different system configurations (superposed states), we investigated the direct relationship between the negativity N and the q_EA order parameter. The numerical results illustrating the relationship between these two quantities are shown in Figure 5. We find that the order parameter q_EA decreases linearly with N,
q_EA = q_max − (1/4) N .    (6)
Here q_max = 1/4 is the maximum value of the normalized EA spin-glass order parameter. According to the figure, the state starts from the fully entangled state (PM state) with N = 1 and q_EA = 0. Its maximally entangled portion then becomes smaller and smaller until it reaches the separable state with N = 0 and q_EA = q_max = 0.25. Each distinct q_EA value corresponds to a different entangled cluster size. We separate the regions corresponding to m-partite entanglement, with m = N, N − 1, . . . , 1, by dashed lines. However, as the system size approaches infinity (N → ∞), the classification mentioned above is no longer discernible, and the different q_EA values on the fitting line become indistinguishable. This is because the system loses its quantumness and becomes classical, so the separations between distinct q_EA values vanish.
In this analysis, we present both numerical results and a discussion of the superposition states, including their multipartite entangled components, in connection with the recursive growth pattern of the SG order parameter q_EA and the negativity N illustrated in Figure 4. Specifically, we consider a three-particle system where each particle can exist in one of the three configurations from the set E_{N=3}(SG) = {C, e, g}, where E_{N=3}(SG) denotes the ensemble of the probable spin configurations. We identify six possible configuration states for the SG contribution with vanishing magnetization and non-zero q_EA.
Suppose one more qubit is added to the system. The system state has two possible configuration ensembles:
E_{N=4}(SG) = {C, C, e, g} with q_EA = 0.125, or {e, g, e, g} with q_EA = 0.25,    (7)
which correspond to SG-contributing superpositions. In this case, we observe two distinct values of q_EA corresponding to the two different configuration sets. The permutation group of the first set yields 24 different SSs with q_EA = 0.125, while the second set yields six SSs with q_EA = 0.25, consistent with Figure 4. As we increase the system size to N = 5 spins, the number of possible sets remains the same as for N = 4, and we observe two additional sets, namely {C, C, C, e, g} and {C, e, g, e, g}.
Notably, the q EA parameter differs between distinct sets and within a set, depending on the number of entangled particles in a superposition state. For instance, the set {C, C, C, e, g} can be considered as
{C, C, C, e, g} = {|GHZ_3⟩, e, g} or {|GHZ_2⟩, C, e, g},    (8)
where the subsets |GHZ_3⟩: {C, C, C} and |GHZ_2⟩: {C, C} give the maximally entangled (GHZ) states, and the subscript denotes the number of entangled particles. In general, the n-particle GHZ state can be written as
|GHZ_n⟩ = α(|g⟩^{⊗n} + |e⟩^{⊗n}),    (9)
where α is a normalization constant [22,23]. The source of these maximally entangled states is related to the number of cat states C in the permutation sets. If a possible configuration set lacks a cat state C, the resulting state is separable. The superposition state may also be partially entangled, when the number of cat states is less than N − 1. Defining the entangled portion of the state, particularly for larger systems, presents a challenge. While our focus centers on SG superposition states, analogous conditions arise in the FM and AFM cases.
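As a small consistency check of the statements above, the sketch below constructs |GHZ_n⟩ of Eq. (9) with α = 1/√2 and verifies that all local magnetizations vanish, so that m = 0 and q_EA = 0, as expected for the maximally entangled, PM-like states; the loop over n = 2, ..., 5 is illustrative.

```python
# Sketch: build |GHZ_n> of Eq. (9) and check that every local magnetization
# vanishes, i.e. m = 0 and q_EA = 0, as expected for the PM-like states above.
import numpy as np

def ghz(n):
    psi = np.zeros(2 ** n)
    psi[0] = psi[-1] = 1 / np.sqrt(2)      # alpha = 1/sqrt(2) normalization
    return psi

def q_ea(psi, n):
    probs = np.abs(psi) ** 2
    mags = np.zeros(n)
    for idx, p in enumerate(probs):
        for i in range(n):
            mags[i] += p * (1 - 2 * ((idx >> (n - 1 - i)) & 1))
    return np.mean(mags ** 2)

for n in range(2, 6):
    print(n, q_ea(ghz(n), n))               # prints 0.0 for every n
```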
We developed a metric to quantify the entanglement partition of a state in terms of the spin-glass order parameter, q EA , which allowed us to achieve our objective. Figure 5 illustrates the entanglement partitions for various system sizes, along with the corresponding inverse linear relationship between the negativity and q EA . As q EA decreases from its maximum value corresponding to separable states, the number of entangled particles increases until the system reaches a state of maximum entanglement.
Based on the presence of entangled particle ensembles, Figure 5 provides a classification of magnetic phases.
V. CONCLUSIONS
This research demonstrates that spin glasses can exist in equiprobable superposition states of potential electronic configurations in a quantum framework. We propose using cat states to define frustrated spins and link the frustration to quantum interference. By employing the Edward-Anderson spin-glass order parameter and magnetization, we classify the superposition states based on their contribution to distinguishing magnetic order (or disorder) in various phases, such as SG, (anti)FM, and PM. Our results provide valuable insights into the nature of spin glasses in quantum systems and have implications for developing quantum technologies such as quantum cryptography [24], quantum simulation [25][26][27], quantum computation [28][29][30][31], quantum sensing [32] and metrology [33].
We establish a direct correlation between the Edward-Anderson spin glass order parameter, q EA , and a measure of entanglement, negativity represented by N . We demonstrate that the spin glass order parameter can also function as an indicator of entanglement, while conversely, the negativity of entanglement can serve as the order parameter to distinguish between phases of order and disorder, specifically the ferromagnetic (FM) and paramagnetic (PM) phases. This result is due to the fact that entanglement is the ability of qubits to correlate their state with other qubits. Quantum phase transitions (QPTs) are an established finding in condensed matter physics, characterized by significant changes in the ground state properties of a quantum system induced by small variations in an external parameter. For example, QPTs can be induced in spin systems by variations in the magnetic field [34,35], while in cold atom simulators of Hubbard-like models, changes in the intensity of a laser beam can trigger QPTs [14]. While our study does not address QPTs directly, several methods for driving quantum states to a target state have been studied extensively in the literature. These include quantum entanglement, state transfer [36][37][38][39][40][41], and quantum adiabatic evolution [42,43]. Once we obtain the corresponding quantum states to the different quantum phases, phase transition can be studied in this contextuality.
This study reveals that the structural similarities between the entanglement and the spin-glass order parameter persist across systems of varying sizes, as indicated by the consistent patterns shown in Figure 4. Furthermore, our study highlights the potential use of superposition states in defining magnetic order (disorder) [33] in condensed matter physics, which has broader implications for quantum information processing and quantum computing [28][29][30][31]. These findings offer new insights into the nature of quantum superposition states and their relevance to spin glasses and quantum magnets. The categorization of states according to their magnetic properties, utilizing physical order parameters and entanglement, is a critical prerequisite for the effective manipulation of the entangled states that are necessary for quantum information processing and transfer via qubits [44]. We suggest that these superposition states are candidates for new phase-based bits for use in quantum computing [23,45]. We are currently investigating their possible use in other physical systems.
FIG. 1. Top panel: graphical representation of the SG superposition state for N = 3 qubits. The interactions (lines) between the qubits (blue spheres) are illustrated for FM (J > 0) and AFM (J < 0) couplings as yellow and light-blue edges, respectively. Bottom left: the possible SS is found by matching the dashed lines. Bottom right: one of the |ψ_SG⟩ states that contributes to the SG order. The numbers below the levels correspond to the labels of the excited atoms of the state, and the dots represent the placement of the ground-level atoms. The circled numbers next to the levels show the corresponding computational basis states. In the smaller box, the corresponding SG-contributing superposition states of the three-spin system, with vanishing magnetization and non-zero SG order parameter, are shown in green.
FIG. 2. (Color online.) The magnetization vs. the EA spin-glass order parameter q_EA, shown for (a) all equally weighted superposition states at sizes N = 3, 4, 5, 6, 7, and 8, where the red horizontal line (m = 0) corresponds to the spin-glass regime, and (b) not only the equally weighted SSs but also expanded superposition states at N = 3, obtained by giving different weights to the superposed states. In both panels, the paramagnetic phase is observed at the point m = 0 and q_EA = 0.
FIG. 3. (Color online.) Schematic representation of the SSs contributing to the (a) SG, (b) FM, (c) AFM, and (d) PM phases. Arrows depict the equally weighted summation of the states.
FIG. 4. (Color online.) (a) The matrix representation of all spin-glass-order-contributing, equally weighted superposition states for N = 3, 4, 5, and 6 spins (left to right), with non-zero Edwards-Anderson order parameter and zero magnetization. As the system size increases, the number of distinct values of q_EA increases, indicating a signal of replica symmetry breaking. The various colors denote different values of q_EA (or N), and this pattern repeats itself in subsequent generations from left to right. The diagonal elements correspond to single states and take the fixed value q_EA = 0.25, although their contribution to the SG or FM/AFM order may change with system size. The off-diagonal elements take only q_EA = 0 and contribute to the PM order for all system sizes. (b) The same pattern as for q_EA is obtained for the average negativity N with different numerical values; note the reciprocal relationship between the average negativity N and the q_EA order parameter.
FIG. 5. (Color online.) Left panel: the variation of the negativity with the Edwards-Anderson spin-glass order parameter q_EA for systems with N = 4, 5, 6, 7, and 8 particles. The distinct regions correspond to different values of q_EA indicating the extent of particle entanglement: initially all systems are fully entangled with N = 1, and as q_EA decreases, the number of entangled particles decreases, ultimately leading to a separable state with N = 0. Right panel: the phase-contributing superposition states classified by the number of entangled particles in the N-particle system.
ACKNOWLEDGEMENTS
The authors would like to acknowledge financial support from the Scientific and Technological Research Council of Türkiye (TÜBİTAK), grant No. 120F100. We would also like to express our gratitude to Ö. E. Müstecaplıoglu and M. Paternostro for their insightful discussions.
[1] K. H. Fischer and J. A. Hertz, Spin Glasses, Cambridge Studies in Magnetism (Cambridge University Press, 1991).
[2] J. A. Mydosh, Spin Glasses: An Experimental Introduction, 1st ed. (CRC Press, 1993).
[3] H. Nishimori, Statistical Physics of Spin Glasses and Information Processing: An Introduction (Oxford University Press, 2001).
[4] J. Vannimenus and G. Toulouse, Theory of the frustration effect. II. Ising spins on a square lattice, Journal of Physics C: Solid State Physics 10, L537 (1977).
[5] F. Harary, Graph Theory and Its Applications, 1st ed. (Academic Press, 1970).
[6] K. Binder and A. P. Young, Spin glasses: Experimental facts, theoretical concepts, and open questions, Rev. Mod. Phys. 58, 801 (1986).
[7] S. M. Giampaolo, G. Gualdi, A. Monras, and F. Illuminati, Characterizing and quantifying frustration in quantum many-body systems, Phys. Rev. Lett. 107, 260602 (2011).
[8] M. M. Wolf, F. Verstraete, and J. I. Cirac, Entanglement and frustration in ordered systems, International Journal of Quantum Information 01, 465 (2003).
[9] C. M. Dawson and M. A. Nielsen, Frustration, interaction strength, and ground-state entanglement in complex quantum systems, Phys. Rev. A 69, 052316 (2004).
[10] S. M. Giampaolo, G. Adesso, and F. Illuminati, Probing quantum frustrated systems via factorization of the ground state, Phys. Rev. Lett. 104, 207202 (2010).
[11] N. de Beaudrap, M. Ohliger, T. J. Osborne, and J. Eisert, Solving frustration-free spin systems, Phys. Rev. Lett. 105, 060504 (2010).
[12] S. Adhikari, B. Chakraborty, A. S. Majumdar, and S. Vaidya, Quantum entanglement in a noncommutative system, Phys. Rev. A 79, 042109 (2009).
[13] M. Mezard, G. Parisi, and M. Virasoro, Spin Glass Theory and Beyond, 2nd ed. (World Scientific, 1986).
[14] A. S. U. Sen, J. Dziarmaga, A. Sanpera, and M. Lewenstein, Frustration, area law, and interference in quantum spin models, Phys. Rev. Lett. 101, 187202 (2008).
[15] M. Razavy, Quantum Theory of Tunneling (World Scientific, 2003).
[16] P. W. Anderson, Absence of diffusion in certain random lattices, Phys. Rev. 109, 1492 (1958).
[17] G. J. Moro, G. Dall'Osto, and B. Fresch, Signatures of Anderson localization and delocalized random quantum states, Chemical Physics 514, 141 (2018).
[18] U. Marzolino, S. M. Giampaolo, and F. Illuminati, Frustration, entanglement, and correlations in quantum many-body systems, Phys. Rev. A 88, 020301 (2013).
[19] S. F. Edwards and P. W. Anderson, The order parameter in a spin glass, Journal of Physics F: Metal Physics 5, 965 (1975).
[20] G. Vidal and R. F. Werner, Computable measure of entanglement, Phys. Rev. A 65, 032314 (2002).
[21] H. Arodz, J. Dziarmaga, and W. H. Zurek, eds., Patterns of Symmetry Breaking (Springer Dordrecht, 2003).
[22] D. M. Greenberger, M. A. Horne, A. Shimony, and A. Zeilinger, Bell's theorem without inequalities, American Journal of Physics 58 (1990), doi:10.1119/1.16243.
[23] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, 10th anniversary ed. (Cambridge University Press, 2010).
[24] S. Pirandola, U. L. Andersen, L. Banchi, M. Berta, D. Bunandar, R. Colbeck, D. Englund, T. Gehring, C. Lupo, C. Ottaviani, J. L. Pereira, M. Razavi, J. S. Shaari, M. Tomamichel, V. C. Usenko, G. Vallone, P. Villoresi, and P. Wallden, Advances in quantum cryptography, Adv. Opt. Photon. 12, 1012 (2020).
[25] I. M. Georgescu, S. Ashhab, and F. Nori, Quantum simulation, Rev. Mod. Phys. 86, 153 (2014).
[26] A. J. Daley, I. Bloch, C. Kokail, S. Flannigan, N. Pearson, M. Troyer, and P. Zoller, Practical quantum advantage in quantum simulation, Nature 607, 667 (2022).
[27] K. L. Brown, W. J. Munro, and V. M. Kendon, Using quantum computers for quantum simulation, Entropy 12, 2268 (2010).
[28] C. Bennett and D. DiVincenzo, Quantum information and computation, Nature 404, 247 (2000).
[29] K. A. Valiev, Quantum computers and quantum computations, Physics-Uspekhi 48 (2005), doi:10.1070/PU2005v048n01ABEH002024.
[30] K. Kim, M.-S. Chang, S. Korenblit, R. Islam, E. E. Edwards, J. K. Freericks, G.-D. Lin, L.-M. Duan, and C. Monroe, Quantum simulation of frustrated Ising spins with trapped ions, Nature 465 (2010), doi:10.1038/nature09071.
[31] B. Normand and A. M. Oleś, Frustration and entanglement in the t2g spin-orbital model on a triangular lattice: Valence-bond and generalized liquid states, Phys. Rev. B 78, 094427 (2008).
[32] C. L. Degen, F. Reinhard, and P. Cappellaro, Quantum sensing, Rev. Mod. Phys. 89, 035002 (2017).
[33] A. Friedenauer, H. Schmitz, J. T. Glueckert, D. Porras, and T. Schaetz, Simulating a quantum magnet with trapped ions, Nature Physics 4, 757-761 (2008).
[34] S. Sachdev, Quantum Phase Transitions, 2nd ed., Cambridge Studies in Magnetism (Cambridge University Press, 2011).
[35] M. Rams, M. Zwolak, and B. Damski, A quantum phase transition in a quantum external field: Superposing two magnetic phases, Sci. Rep. 2 (2012).
[36] M. Miková, I. Straka, M. Mičuda, V. Krcmarsky, M. Dušek, M. Jezek, J. Fiurásek, and R. Filip, Faithful conditional quantum state transfer between weakly coupled qubits, Scientific Reports 6, 32125 (2016).
[37] K. Hammerer, A. S. Sørensen, and E. S. Polzik, Quantum interface between light and atomic ensembles, Rev. Mod. Phys. 82 (2010).
[38] N. Daniilidis and H. Häffner, Quantum interface between light and atomic ensembles, Annu. Rev. Condens. Matter Phys. 4, 83 (2013).
[39] J. Majer et al., Coupling superconducting qubits via a cavity bus, Nature 449, 443-447 (2007).
[40] A. Reiserer, N. Kalb, G. Rempe, and S. Ritter, A quantum gate between a flying optical photon and a single trapped atom, Nature 508, 237-240 (2014).
[41] D. N. Matsukevich and A. Kuzmich, Quantum state transfer between matter and light, Science 306, 663 (2004).
[42] A. Joye and C.-E. Pfister, Quantum adiabatic evolution, in On Three Levels: Micro-, Meso-, and Macro-Approaches in Physics, edited by M. Fannes, C. Maes, and A. Verbeure (Springer US, Boston, MA, 1994), pp. 139-148.
[43] Y. P. Kandel, H. Qiao, S. Fallahi, G. C. Gardner, M. J. Manfra, and J. M. Nichol, Nat. Commun. 12 (2021), doi:10.1038/s41467-021-22416-5.
Teleporting an unknown quantum state via dual classical and einstein-podolskyrosen channels. C H Bennett, G Brassard, C Crépeau, R Jozsa, A Peres, W K Wootters, 10.1103/PhysRevLett.70.1895Phys. Rev. Lett. 701895C. H. Bennett, G. Brassard, C. Crépeau, R. Jozsa, A. Peres, and W. K. Wootters, Teleporting an unknown quantum state via dual classical and einstein-podolsky- rosen channels, Phys. Rev. Lett. 70, 1895 (1993).
Simulating physics with computers. R P Feynman, 10.1007/BF02650179International Journal of Theoretical Physics. 21467R. P. Feynman, Simulating physics with computers., International Journal of Theoretical Physics 21, 467 (1982).
| [] |
[
"Distributed Resilient Submodular Action Selection in Adversarial Environments",
"Distributed Resilient Submodular Action Selection in Adversarial Environments"
] | [
"Jun Liu ",
"Lifeng Zhou ",
"Ryan K Williams "
] | [] | [] | In this letter, we consider a distributed submodular maximization problem for multi-robot systems when attacked by adversaries. One of the major challenges for multi-robot systems is to increase resilience against failures or attacks. This is particularly important for distributed systems under attack as there is no central point of command that can detect, mitigate, and recover from attacks. Instead, a distributed multirobot system must coordinate effectively to overcome adversarial attacks. In this work, our distributed submodular action selection problem models a broad set of scenarios where each robot in a multi-robot system has multiple action selections that may fulfill a global objective, such as exploration or target tracking. To increase resilience in this context, we propose a fully distributed algorithm to guide each robot's action selection when the system is attacked. The proposed algorithm guarantees performance in a worst-case scenario where up to a portion of the robots malfunction due to attacks. Importantly, the proposed algorithm is also consistent, as it is shown to converge to the same solution as a centralized method. Finally, a distributed resilient multi-robot exploration problem is presented to confirm the performance of the proposed algorithm. | 10.1109/lra.2021.3080629 | [
"https://arxiv.org/pdf/2105.07305v1.pdf"
] | 234,742,084 | 2105.07305 | 77c2b01d607971400f279be80d9f4644b7fce903 |
Distributed Resilient Submodular Action Selection in Adversarial Environments
Jun Liu
Lifeng Zhou
Ryan K Williams
Distributed Resilient Submodular Action Selection in Adversarial Environments
Index Terms-Distributed robot systems, planning, scheduling and coordination, multi-robot systems, resilient, submodular optimization
In this letter, we consider a distributed submodular maximization problem for multi-robot systems when attacked by adversaries. One of the major challenges for multi-robot systems is to increase resilience against failures or attacks. This is particularly important for distributed systems under attack as there is no central point of command that can detect, mitigate, and recover from attacks. Instead, a distributed multirobot system must coordinate effectively to overcome adversarial attacks. In this work, our distributed submodular action selection problem models a broad set of scenarios where each robot in a multi-robot system has multiple action selections that may fulfill a global objective, such as exploration or target tracking. To increase resilience in this context, we propose a fully distributed algorithm to guide each robot's action selection when the system is attacked. The proposed algorithm guarantees performance in a worst-case scenario where up to a portion of the robots malfunction due to attacks. Importantly, the proposed algorithm is also consistent, as it is shown to converge to the same solution as a centralized method. Finally, a distributed resilient multi-robot exploration problem is presented to confirm the performance of the proposed algorithm.
Fig. 1. In a multi-robot environmental exploration application, the robots are mounted with downward-facing cameras to explore the environment. An attacker blocks one of the robots' cameras (red).
the system against worst-case attacks. By worst-case attacks, we refer to the case where the system may have up to K sensor failures. Robots operating in adversarial scenarios may get cyber-attacked or face failures, resulting in a temporary withdrawal of robots from the task (e.g., because of temporary deactivation of their sensors, blockage of their field of view). For example, in Fig. 1, each robot in the system is equipped with a downward-facing camera to explore an environment with different weights in different areas, where there is one robot whose sensor is blocked by an attacker. It is worth mentioning that robot failure and sensor failure are different. If a sensor is attacked, the corresponding robot may not know this attack and still perform other tasks/communications as planned.
Related Work: The resilience of multi-robot systems has received attention recently (see a comprehensive survey in [1]). In [2], the authors presented a resilient formation control algorithm that steers a team of mobile robots to achieve desired flocking even though some team members are non-cooperative (or adversarial) and broadcast deceptive signals. Deceptive or spoofing attacks were also considered in the wireless communication [3] and the target state estimation [4] of multirobot teams. Another type of attack, called masquerade attack, was studied in a multi-agent path-finding problem [5]. In this letter, we instead focus on defending multi-robot systems against the denial-of-service (DoS) attacks that can compromise the sensors' functionality [6]. For example, a polynomialtime resilient algorithm to counter adversarial denial-of-service (DoS) attacks or failures in a submodular maximization problem was proposed in [7]. Meanwhile, resilient coordination algorithms have been designed to cope with adversarial attacks in multi-robot target tracking [8], the orienteering problem [9], etc. In [10], the authors proposed to solve the centralized resilient target tracking problem [8] in a distributed way. This method partitions robots into subgroups/cliques, and then the subgroups perform a centralized algorithm in parallel to counter the worst-case attacks. Thus, even if there exist communications between subgroups, these available communications are not utilized because each subgroup operates independently. Therefore, the proposed algorithm in [10] has a worse approximation bound than its centralized counterpart.
The action selection problem falls into the combinatorial robotics application domain. The authors in [11] proposed a consensus-based method for the task allocation problem. In [12], the authors used matroids to model the task allocation constraints and provided a distributed approach with a 1/2 optimality ratio. In [13], the authors extended the use of matroids for abstract modeling of task allocation constraints and demonstrated the suboptimality through a sequential auction method in a decentralized scenario. In [14], [15], the authors applied submodular and matroid techniques in a multi-robot intermittent environmental monitoring problem, where the deployment actions are selected based on the environmental process. In [16], the authors utilized the submodularity of a mutual information function to prove the performance of a distributed multi-robot exploration method, although synchronization is needed. The authors in [17] considered two coupled action selection problems in an environmental monitoring application, where the selected tasks have an impact on the monitored environmental process behavior. In [18], the authors studied how the information from other robots impacts the decisions of a multi-robot system in distributed settings. Similarly, the submodular properties were also utilized in the consensus problem [19], the leader selection problem [20], etc. However, resilience is not the primary consideration in these works, especially when the system is under worst-case attacks. In this letter, we propose a fully distributed resilient algorithm that requires no central point of command to solve the action selection problem in adversarial environments. The proposed distributed resilient method can perform as well as the corresponding centralized algorithm when subject to worst-case adversarial attacks.
Contributions: The contributions are as follows:
1) We formulate a fully distributed resilient submodular action selection problem.
2) We demonstrate how to solve the problem in a fully distributed manner where each robot computes its action only and shares the decision with its neighbors to achieve convergence with performance guarantees.
3) We prove and evaluate that the proposed algorithm is consistent, as it is shown to converge to the same solution as a centralized method.
Organization: In Section II, we introduce preliminaries followed by the problem formulation. In Section III, we use two subsections to demonstrate the two phases of the proposed algorithm. Then, the performance analysis of the proposed algorithm is shown in Section IV. In Section V, numerical evaluation is performed in a multi-robot exploration problem. We then close the letter in Section VI.
II. PRELIMINARIES AND PROBLEM FORMULATION
A. Submodular Set Functions
Definition 1 (Submodularity [21]): A submodular function f : 2 V → R is a set function satisfying the property f (X ∪ {v}) − f (X ) ≥ f (Y ∪ {v}) − f (Y), where V is the ground set, X ⊆ Y ⊆ V, and v ∈ V \ Y.
Fig. 2. Each robot chooses one motion primitive from its candidate motion primitive set to explore a region of the environment.
The power set 2 V is the set of all subsets of V, including ∅ and V itself. A set function is monotone non-decreasing if f (X ) ≤ f (Y) when X ⊆ Y ⊆ V. Submodularity appears in a wide variety of robotics applications. We refer the reader to [22], [23] for more details.
Definition 2 (Marginal gain): For a set function f : 2 V → R, let the marginal gain of adding element v ∈ V into set X ⊆ V be f X (v) := f (X ∪ {v}) − f (X ). 1
Definition 3 (Curvature [24]): Let f : 2 V → R be a monotone non-decreasing submodular function. We define the curvature of f (·) as c f = 1 − min v∈V [f (V) − f (V \ {v})] / f (v).
This curvature represents the submodularity level of f (·).
It holds that 0 ≤ c f ≤ 1. If c f = 0, then f (·) is a modular function and f (X ∪ {v}) − f (X ) = f (v). If c f = 1, then f (X ∪ {v}) − f (X ) = 0, where X ⊆ V and v ∈ V \ X .
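To make Definitions 1-3 concrete, here is a minimal Python sketch (our own illustration, not code from the letter) of a toy coverage objective together with its marginal gains and curvature; the action names and the covered cells are made-up placeholder data.

```python
# Minimal sketch (our own, not from the letter): a toy coverage objective,
# its marginal gains (Definition 2), and its curvature (Definition 3).
# The action names and the covered cells are made-up placeholder data.

ACTIONS = {
    "a1": {(0, 0), (0, 1), (1, 0)},
    "a2": {(1, 0), (1, 1)},
    "a3": {(0, 1), (2, 2)},
}

def f(S):
    """Coverage objective: number of distinct cells covered by the actions in S."""
    covered = set()
    for a in S:
        covered |= ACTIONS[a]
    return len(covered)

def marginal_gain(v, X):
    """f_X(v) = f(X + {v}) - f(X)."""
    return f(set(X) | {v}) - f(X)

def curvature():
    """c_f = 1 - min_v [f(V) - f(V minus {v})] / f({v})."""
    V = set(ACTIONS)
    return 1 - min((f(V) - f(V - {v})) / f({v}) for v in V)

print(f({"a1", "a2"}))               # 4 distinct cells are covered
print(marginal_gain("a3", {"a1"}))   # a3 only adds cell (2, 2): gain 1
print(curvature())                   # 2/3 here; always between 0 and 1
```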
B. Problem Formulation
Robots, actions, and rewards: Consider a team of N robots denoted by R = {1, . . . , N }. Each robot is equipped with one sensor. There is an (undirected) communication graph 2 G = (R, E) associated with nodes R and edges E such that (i, j) ∈ E if i and j can communicate with each other. We denote by N i the neighbors of robot i. The diameter d(G) of the communication graph G is the greatest length of the shortest paths between vertices. The communication can be synchronized or asynchronized; the performance analysis will be based on synchronized communication. Each robot i ∈ R has a set of candidate actions V i and can only choose one action from V i at each execution step. For example, in motion planning using motion primitives, the robot can only choose one motion primitive from its candidate motion primitives at a time. As shown in Fig. 2, robot 1 chooses action v 1 1 from its available action set V 1 = {v 1 1 , v 1 2 , v 1 3 } and robot 2 chooses action v 2 3 from its available action set V 2 = {v 2 1 , v 2 2 , v 2 3 , v 2 4 }, yielding the shaded explored area. We denote by V = ∪ i∈R V i the ground set containing all robots' possible actions. There is a reward associated with any action; for example, the reward associated with action v 1 1 is the gray explored area. The function value f (S), i.e., the combined reward associated with v 1 1 and v 2 3 , is the explored gray area.

Algorithm 1 Distributed resilient selection for robot i
Input: action set V i ; number of anticipated attacks K; communication graph G; objective function f (·).
Output: set S.
1: S i 1 ← ∅, S i 2 ← ∅, α i 1 ← 0, α i 2 ← 0;
2: S i 1 ← GENERATEREMOVALS(S i 1 , α i 1 );
3: S i 2 ← GENERATECOMPLEMENTS(S i 1 , S i 2 , α i 2 );
4: S ← S i 1 ∪ S i 2 .
Objective Function: We use a non-decreasing and submodular function f : 2 V → R to model the quality of each valid action set S ⊆ V since the diminishing return property of objective functions is common in robotics. For example, in Fig. 2, f (·) measures the extent of the joint area explored by chosen actions S = {v 1 1 , v 2 3 } (represented by the gray areas), which is a well-known coverage function that exhibits the submodularity property [22].
Assumption 1 (Attacks): We assume the robots encounter worst-case attacks that result in their sensor DoS failures. Thus, robots can still communicate with their neighbors even though their sensors are denied or blocked. The maximum number of anticipated attacks is upper bounded by K, (K ≤ N ), where N is the number of robots.
Problem 1 (Distributed resilient multi-robot action selection in adversarial environments): The robots, by communicating actions and rewards with their neighbors over the communication graph G, choose an action set S (one action per robot) to maximize a submodular objective f (·) against K worst-case attacks. That is
maximize S⊆V  min F ⊆S f (S \ F)
subject to  |S ∩ V i | = 1, ∀ i ∈ R,
            |F| ≤ K,          (1)
where R contains the indexes of the robots in the system, F denotes the action set associated with the attacked sensors, and V i is the available action set for robot i. The first constraint ensures that robot i only chooses one action from its action set V i . The "min" operator indicates the attacks we consider are the worst-case attacks. The constraint |F| ≤ K captures the problem assumption that at most K sensors in the team can fail or get attacked.
In this problem, each robot needs to take other robots' actions into consideration while making its decision. That is because neighboring robots' selected actions may have an impact on local robot's action selection. In other words, different action selections result in different performances.
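For a concrete reading of the objective in (1), the following sketch evaluates the worst-case value min over removals F with |F| ≤ K of f(S \ F) by brute force on a tiny made-up instance; it only illustrates the problem structure and is not the algorithm proposed in this letter.

```python
# Illustrative sketch (not the proposed algorithm): brute-force evaluation of the
# worst-case objective in (1) on a tiny made-up instance with one action per robot.
from itertools import combinations, product

V = {  # candidate action sets V_i; each action covers some grid cells
    1: {"f": {(0, 0), (0, 1)}, "b": {(1, 0)}},
    2: {"f": {(0, 1), (1, 1)}, "b": {(2, 0), (2, 1)}},
    3: {"f": {(1, 1)}, "b": {(0, 0), (2, 1)}},
}
K = 1  # number of anticipated attacks

def f(selection):
    """Coverage of a selection {robot: action}; attacked robots contribute nothing."""
    covered = set()
    for i, a in selection.items():
        covered |= V[i][a]
    return len(covered)

def worst_case_value(selection, K):
    """Minimum of f over all removal sets F with |F| <= K."""
    worst = f(selection)
    for k in range(1, K + 1):
        for F in combinations(selection, k):
            kept = {i: a for i, a in selection.items() if i not in F}
            worst = min(worst, f(kept))
    return worst

# Exhaustive search over the feasible set |S intersect V_i| = 1 for all robots i.
best = max(
    (dict(zip(V, choice)) for choice in product(*(V[i] for i in V))),
    key=lambda s: worst_case_value(s, K),
)
print(best, worst_case_value(best, K))
```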
III. A CONSISTENT ALGORITHM FOR DISTRIBUTED RESILIENT SUBMODULAR MAXIMIZATION
We present a distributed resilient algorithm (Algorithm 1) for solving Problem 1. At a high level, Algorithm 1 contains two main procedures, GENERATEREMOVALS (Algorithm 2) and GENERATECOMPLEMENTS (Algorithm 3).

Algorithm 2 (Phase I) Generate approximated removal set for each robot i
1: procedure GENERATEREMOVALS(S i 1 , α i 1 )
2:   while α i 1 < 2d(G) do
3:     if S i 1 = ∅ then                             ▷ 1) Initialization
4:       S i 1 ← arg max v∈Vi f (v);
5:       f (s) ← max v∈Vi f (v);
6:     end if
7-8:   S i 1 ← S i 1 ∪ S j 1 , ∀ j ∈ N i ;            ▷ 2) Communication
9:     M = min(K, |S i 1 |);                          ▷ 3) Local computation
10:    S i 1 ← top M actions 3 in S i 1 ;
11:    send (S i 1 , {f (s)}), ∀ s ∈ S i 1 , to all j ∈ N i ;
12:    update α i 1 .
13:  end while
14: end procedure

In the following, we present and analyze these procedures from robot i's perspective since other robots will follow the same procedures. In general, robot i will use these two procedures to approximate the following two sets:
• S i 1 : the set that approximates the optimal worst-case removal set. Since computing the optimal worst-case removal set is intractable, we use S i 1 as an approximation. We denote by A the indices of the robots used by S i 1 . This is phase I.
• S i 2 : the set that approximates the optimal set that maximizes the objective function using V \ V i , ∀ i ∈ A. Again, this is an approximation since computing the optimal set is intractable. This is phase II.
These two procedures will be executed sequentially. Robot i will use α i 1 and α i 2 as two counters for the different phases to decide whether to stop the corresponding procedure or not. Upon the stopping of Algorithm 1, both S i 1 and S i 2 converge. The final solution of Problem 1 will then be S i 1 ∪ S i 2 . In each phase, robot i will approximate and update S i 1 and S i 2 through the following processes: 1) Initialization, which is used to make the first approximation. 2) Inter-robot communication, which is used to combine its local approximation with neighbors' approximations. 3) Local computation, which is used to update the local approximation.
A. Phase I: Generate Approximated Removals
The procedure for generating approximated removals is called GENERATEREMOVALS (Algorithm 2). This procedure aims to approximate K action removals S 1 (|S 1 | = K) through the below processes.
1) Initialization: In Algorithm 2, robot i first selects an action that contributes the most to the objective function f, regardless of other robots' selections. The selected action is s ∈ arg max v∈Vi f (v). Following the constraint |S ∩ V i | = 1, ∀ i ∈ R, robot i is only allowed to select one action from its candidate action set V i to update its action set S i 1 . Meanwhile, f (s) is also recorded.
2) Inter-robot communication: To update i's local approximation set S i 1 , robot i needs to combine j's approximation S j 1 , ∀ j ∈ N i . Since our task in this phase is to approximate K action removals, we can merge j's approximation as
S i 1 ← S i 1 ∪ S j 1 .
3) Local computation: Once receiving neighbor j's action set S j 1 , robot i updates its action set S i 1 based on S j 1 . We first form a new candidate set S i 1 ← S i 1 ∪ S j 1 to update S i 1 .
Then, we need to select the top K actions for robot i. There are two cases:
• If |S i 1 | ≤ K, there is no need to update S i 1 .
• If |S i 1 | > K, robot i selects the top K actions from S i 1 .
In Algorithm 2, lines 9-10, we combine these two cases as a single operation. That is, robot i needs to select the top M := min(K, |S i 1 |) actions from S i 1 . Finally, robot i shares S i 1 and the corresponding action values f (s), ∀ s ∈ S i 1 , with all its neighbors j ∈ N i .
4) Stopping condition: After one cycle of local computation and inter-robot communication, the local counter α i 1 will be incremented by 1. Finally, when α i 1 reaches 2d(G), robot i stops and all robots have an agreement on S 1 .
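Phase I can be read as a K-max consensus on action values. The sketch below is our own paraphrase of robot i's local rule (initialization, merge, top-M truncation); the data, values, and function names are hypothetical.

```python
# Our paraphrase of robot i's local rule in Phase I (Algorithm 2): keep the top
# K (value, action) pairs among its own best action and everything received from
# its neighbors. The action names, values, and K are hypothetical.
K = 2

def local_best(V_i, f):
    """Initialization: robot i's single best action and its value."""
    best = max(V_i, key=f)
    return [(f(best), best)]

def phase1_update(S_i1, received):
    """Merge neighbors' candidate removals and keep the top min(K, |S_i1|) actions."""
    merged = {a: v for v, a in S_i1}            # action -> value
    for S_j1 in received:                       # one list per neighbor j in N_i
        merged.update({a: v for v, a in S_j1})
    ranked = sorted(((v, a) for a, v in merged.items()), reverse=True)
    return ranked[:K]

f = lambda a: {"a": 3.0, "b": 1.0, "c": 2.5, "d": 4.0}[a]
S_i1 = local_best(["a", "b"], f)                # robot i's own candidate actions
S_j1 = local_best(["c", "d"], f)                # as received from a neighbor j
print(phase1_update(S_i1, [S_j1]))              # [(4.0, 'd'), (3.0, 'a')]
```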
B. Phase II: Generate Approximated Complements
The procedure for generating the approximated complements is GENERATECOMPLEMENTS shown in Algorithm 3. This procedure aims to approximate N − K greedy action selections S 2 (|S 2 | = |R \ A| = N − K) for the remaining robots R \ A through inter-robot communication and local computation with A denoting the robots that select actions in phase I. Depending on whether robot i is used as removals or not, robot i in phase II will have two different functionalities:
• If i ∈ A, then robot i acts as a conveyor only: it merges the approximations S j 2 from j, ∀ j ∈ N i , and broadcasts the merged/updated S i 2 to j, ∀ j ∈ N i . So, robot i only participates in inter-robot communication.
• If i ∈ R \ A, robot i also needs to update its approximation S i 2 using the local computation process.
In the following, we demonstrate phase II from robot i's perspective assuming i ∈ R \ A. If i ∈ A, then the local computation process will be skipped for robot i.
1) Initialization: At the first iteration of phase II, S i 2 = ∅ and thus robot i can directly update S i 2 as the action with the maximum marginal gain based on the empty set. That is, s ∈ arg max v∈Vi f ∅ (v). Then, S i 2 is updated as S i 2 ← s. The corresponding marginal gain f ∅ (s) is also recorded.
Algorithm 3 (Phase II) GENERATECOMPLEMENTS for robot i (excerpt of the local-computation loop):
...
     if f X (X ∪ v) ≥ g, ∀ v ∈ V i then
19:    s ∈ arg max v∈Vi f X (X ∪ v);
20:    X ← X ∪ {s};
21:    break;
22:  end if
23:  X ← X ∪ {s n };
...
30: end procedure

2) Inter-robot communication: Let us consider the case where |S i 2 | = n with n ≤ N − K at some point before the algorithm stops. We first consider s ∈ S i 2 in the descending order they are added through the local computation procedure. For example, if |S i 2 | = n, we can write S i 2 as S i 2 = {s 1 , . . . , s n }, such that f ∅ (s 1 ) ≥ . . . ≥ f {s 1 ,...,s n−1 } (s n ). We also use γ(·) to denote the order of action s ∈ S i 2 as γ(s 1 ) = 1, . . . , γ(s n ) = n. Similarly, we also apply this reordering procedure to S j 2 , ∀ j ∈ N i . Thus, there is also a marginal gain and an order associated with the action s ∈ S j 2 . With the marginal gains and orders ready, we are ready to merge S j 2 with S i 2 . For every S j 2 , we augment S i 2 with S j 2 and apply an operation as
S i+ 2 ← sort({S i 2 , S j 2 }, 'descend').
This operation is read as "s ∈ {S i 2 , S j 2 } are sorted in a descending order based on the associated marginal gains".
Remove redundant actions: The merged set S i+ 2 may contain redundant actions. By redundant actions, we refer to the actions s ∈ S i+ 2 having the following redundant action properties:
• s appears in S i 2 and S j 2 , i.e., s = s ′ where s ∈ S i 2 and s ′ ∈ S j 2 ;
• the associated marginal gains are the same;
• the orders in S i 2 and S j 2 are the same, i.e., γ(s) = γ(s ′ ).
We can check these properties for each v ∈ S i+ 2 against all other actions to remove the redundant actions.
Remove order changed actions: After the above process, we know that there is an order associated with s, ∀ s ∈ S i+ 2 . Similarly, we also know that s ∈ S i 2 and s ∈ S j 2 also have their orders. If the order of any action is changed before and after the augmentation, this action and the actions having lower marginal gains than this action's marginal gain will be invalid. This rule is from the submodularity of f (·). We can then use the following properties to remove order changed actions. Specifically, we need to remove any s ∈ S i+ 2 if s satisfies the following order changed properties:
• s appears in S i+ 2 and S i 2 (or S j 2 ), i.e., s = s ′ where s ∈ S i+ 2 and s ′ ∈ S i 2 (or s ′ ∈ S j 2 );
• the orders are not the same, i.e., γ(s) ≠ γ(s ′ ).
This operation can be illustrated by the following example. If the local approximation S i 2 is S i 2 = {s 1 , s 2 } and f ∅ (s 1 ) ≥ f s 1 (s 2 ), then the orders of s 1 , s 2 ∈ S i 2 are γ(s 1 ) = 1, γ(s 2 ) = 2. Also, if a neighbor j's approximation is S j 2 = {s 2 , s 3 } and f ∅ (s 2 ) ≥ f s 2 (s 3 ), then the orders of s 2 , s 3 ∈ S j 2 are γ(s 2 ) = 1, γ(s 3 ) = 2. Now consider the case where the augmented set is S i+ 2 = {s 1 , s 2 , s 3 }, and the marginal gains are such that
f ∅ (s 1 ) ≥ f s 1 (s 2 ) [for s 2 ∈ S i 2 ] = f ∅ (s 2 ) [for s 2 ∈ S j 2 ] ≥ f s 2 (s 3 ).   (2)
After applying the above redundancy removal procedure, there may exist a case where
f {s1,s2} (v) > f {s1,s2} (s 3 ),
where v ∈ V \ (S i 1 ∪ {s 1 , s 2 }). In this case, s 3 is no longer a valid action in S i+ 2 as action v has a higher marginal gain. To deal with this case, we can use action orders. From the marginal gains relations in (2), we have
γ(s 1 ) = 1, γ(s 2 ) = 2 [for s 2 ∈ S i 2 ], γ(s 2 ) = 3 [for s 2 ∈ S j 2 ], γ(s 3 ) = 4.
Also, we know that the original orders of s 2 , s 3 ∈ S j 2 are γ(s 2 ) = 1, γ(s 3 ) = 2. So, the orders of s 2 and s 3 , where s 2 , s 3 ∈ S j 2 , are changed after merging. Therefore, we need to remove these two actions from S i+ 2 . Finally, the augmented approximation is assigned to S i 2 as S i 2 ← S i+ 2 .
3) Local computation: After the inter-robot communication process, robot i may need to change its original action selection. That is because robot i made its selection before knowing its neighbors' approximations (S j 2 , ∀ j ∈ N i ). Once receiving S j 2 , ∀ j ∈ N i , robot i can update its own action selection, i.e., v ∈ V i .
We update robot i's action selection based on the marginal gain of v ∈ V i for every possible combination of its neighbors' selections. The necessity of this operation is from the observation that the marginal gain of an action will be changed if the already selected action set is changed. For example, after the inter-robot communication, if we have S i 2 = {s 1 , . . . , s n } and v / ∈ S i 2 , ∀ v ∈ V i , we then need to check the marginal gain of v ∈ V i when v has different orders in S i 2 . Since we already know the associated marginal gains of s ∈ S i 2 , we can compare
max v∈Vi f ∅ (v) vs. f ∅ (s 1 ),
. . .
max v∈Vi f {s 1 ,...,s n−1 } (v) vs. f {s 1 ,...,s n−1 } (s n ).
Whenever some v ∈ V i generates a better marginal gain than the compared one, we replace the compared action with the action v and delete the actions selected after the compared one. This is because if the orders of the actions are changed, then the marginal gains are invalid. This operation is shown in lines 15-25 (Algorithm 3). Meanwhile, the associated marginal gain is also updated. Finally, the updated S i 2 , along with the marginal gains of s ∈ S i 2 , is broadcast to j ∈ N i .
4) Stopping condition: After one cycle of local computation and inter-robot communication, the local counter α i 2 will be incremented by 1 if there is no change of S i 2 before and after these two processes. Otherwise, the counter α i 2 is reset to 0. When α i 2 reaches 2d(G), where d(G) is the diameter of G, robot i stops operations. Meanwhile, all robots have an agreement on the approximation of S 2 .
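The merge step of Phase II can be summarized as: sort the union of the two greedy prefix lists by marginal gain, drop exact duplicates (same action, same gain, same original order), and discard every entry whose order changed, since its recorded marginal gain is then stale. The sketch below is our own reading of that rule; it reproduces the worked example around (2) but is not the authors' implementation.

```python
# Our reading of the Phase II merge rules (redundancy removal, then removal of
# order-changed actions). Each prefix list holds (action, marginal gain) pairs in
# the order the actions were greedily added; the data below mirror Eq. (2).

def merge_prefix_lists(S_i2, S_j2):
    """Merge two greedy prefix lists, keeping only entries whose order is unchanged."""
    # Tag each entry with its original 1-based order in its source list.
    tagged = [(a, g, k + 1) for k, (a, g) in enumerate(S_i2)] + \
             [(a, g, k + 1) for k, (a, g) in enumerate(S_j2)]
    # Redundant entries: same action, same gain, and same original order.
    unique = list(dict.fromkeys(tagged))
    # S_i2+ <- sort({S_i2, S_j2}, 'descend') on the marginal gains.
    unique.sort(key=lambda t: -t[1])
    # Drop every action whose order changed after the augmentation: its recorded
    # marginal gain is stale and must be recomputed in the local computation step.
    return [(a, g) for pos, (a, g, old) in enumerate(unique, start=1) if pos == old]

S_i2 = [("s1", 5.0), ("s2", 3.0)]      # robot i's greedy prefix
S_j2 = [("s2", 3.0), ("s3", 2.0)]      # neighbor j's prefix, gains as in Eq. (2)
print(merge_prefix_lists(S_i2, S_j2))  # [('s1', 5.0), ('s2', 3.0)]; s3 is dropped
```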
IV. PERFORMANCE ANALYSIS
Lemma 1: The procedure GENERATEREMOVALS (Algorithm 2) for finding the approximated removals has the following performance:
1) Approximation performance: The approximated removal set for robot i is S i 1 = S 1 , where S 1 is the K-max consensus result.
2) Convergence time: The algorithm takes d(G) steps to converge, where d(G) is the diameter of G.
3) Computational complexity: The computational complexity for every robot is at most O(|V i |).
Proof: 1) Approximation performance: In the centralized scenario, we know that we need to find the top K actions to approximate the removal set. In the distributed scenario,
S i 1 is updated as S i 1 ← arg max v∈Vi f (v)
at the beginning. Assume that i and j are different before communicating with each other. Upon receiving S j 1 , robot i's approximation S i 1 is updated by using min(K, |S i 1 ∪ S j 1 |) actions as shown in line 9 (Algorithm 2). Similarly, this procedure is also applied to j. Thus, robot i and j will agree with each other on the top K actions after communication. Finally, when all robots r ∈ R receive other robots' approximation after d(G) steps, they achieve a consensus on the top K actions.
2) Convergence time: In every execution of Algorithm 2, i needs to update S i 1 through the received S j 1 from j ∈ N i . Similarly, it takes d(G) steps for i to receive S r 1 from r that has the longest communication distance. During this procedure, every robot can receive all other robots' approximation at least once. Thus, Algorithm 2 takes d(G) steps to converge.
3) Computational complexity: Robot i needs |V i | evaluations to find the largest contribution s ∈ arg max v∈Vi f (v). After communications, if s ∈ S i 1 , then s is among the top K actions. If s / ∈ S i 1 , then s is replaced by other actions with larger contributions and we do not need to evaluate for robot i again. So, the number of evaluations for every robot is at most O(|V i |).
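The convergence part of Lemma 1 can be checked numerically: synchronous top-K merging over any connected graph reaches agreement on the global top-K values within d(G) rounds. The following sketch uses networkx and synthetic values, which are our choices and not part of the letter.

```python
# Synthetic check of the convergence claim in Lemma 1: synchronous top-K merging
# over a connected graph reaches agreement on the global top-K values within d(G)
# rounds. networkx and the random data are our choices, not part of the letter.
import random
import networkx as nx

random.seed(0)
N, K = 10, 3
G = nx.connected_watts_strogatz_graph(N, k=4, p=0.3, seed=0)
values = {i: random.random() for i in G.nodes}          # best local action value

state = {i: {values[i]} for i in G.nodes}               # each robot starts alone
target = set(sorted(values.values(), reverse=True)[:K]) # global top-K values

for _ in range(nx.diameter(G)):
    new_state = {}
    for i in G.nodes:
        pool = set(state[i]).union(*(state[j] for j in G.neighbors(i)))
        new_state[i] = set(sorted(pool, reverse=True)[:K])   # keep the top K
    state = new_state

assert all(state[i] == target for i in G.nodes)
print(f"all robots agree on the top-{K} values after {nx.diameter(G)} round(s)")
```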
Lemma 2: The procedure GENERATECOMPLEMENTS (Algorithm 3) for finding the complements has the following performance:
1) Approximation performance: The approximated complements for i is S i 2 = S 2 , where S 2 is the centralized greedy solution generated by using marginal gains. 2) Convergence time: In the worst-case, the algorithm converges in 2(N − K + 1)d(G) steps, where d(G) is the diameter of G. 3) Computational complexity: The computational complexity for every robot is at most O((N − K) 2 |V i |). Proof: 1). Approximation performance: When merging S j 2 with S i 2 , we first use the sort(·) procedure to maintain the orders of the actions s ∈ S i+ 2 regardless of the redundancy and the orders of the actions in S i+ 2 as shown in line 9 of Algorithm 3. Then, through the operation described in Section III-B (also in lines 10-11 of Algorithm 3), we resolve these two issues by removing any s ∈ S i+ 2 that is either redundant or order changed. In local computation, when robot i updates its action set, the marginal gains of s ∈ S i 2 \ V i are used as oracles. In the local computation procedure, when any v ∈ V i replaces the compared action, the actions having lower marginal gains in S i 2 are removed from S i 2 . This procedure maintains the descending orders of s ∈ S i 2 while updating robot i's contribution. Therefore, all these procedures help to keep the descending order of s ∈ S i 2 . When these procedures are applied to all robots in R, the system converges to the same approximation S i 2 since every robot will have an agreement on at least one action after each communication. Also, since the descending orders of v ∈ S i 2 are kept during all communications, the final converged S i 2 is the same as the centralized solution. That is, S i 2 = S 2 . 2) Convergence time: Through the above analysis, we know that if S i 2 = S j 2 , then this disagreement is resolved through communications. In an extreme case, we assume that the communication distance between robot i and robot r is d(G). It then takes 2d(G) steps for i to agree with r on at least one action that is selected by i or r. Also, max i∈R |S i 2 | = N − K. Therefore, it takes at most 2(N − K)d(G) steps to reach the final agreement. Meanwhile, robot i needs to take another 2d(G) steps to confirm the convergence.
3) Computational complexity: Algorithm 3 needs at most |V i ||S i 2 ∪ S j 2 | evaluations during each local computation procedure since i checks its maximum contribution against every combination of the actions in the merged set S i 2 ∪ S j 2 . Also, it holds that max i,j∈R |S i 2 ∪ S j 2 | = N − K. Therefore, the computational complexity for every robot is at most O((N − K) 2 |V i |).
Theorem 1: Algorithm 1 has the following performance: 1) Performance: The approximation ratio is
f (S \ F ∗ ) ≥ max { (1 − c f )/(1 + c f ), 1/(1 + K), 1/(|R| − K) } f (S ∗ \ F ∗ ),
where S ∗ is an optimal solution and F ∗ is an optimal removal set with respect to S ∗ .
2) Convergence time: In the worst case, the algorithm converges in (2N − 2K + 3)d(G) steps, where d(G) is the diameter of G.
3) Computational complexity: The computational complexity for every robot is at most O((N − K) 2 |V i |).
Proof: 1) Approximation performance: From Lemma 1 and Lemma 2, we know that S i 1 = S 1 and S i 2 = S 2 , where S 1 and S 2 are the corresponding centralized solutions. Then, the approximation performance of the distributed resilient algorithm (Algorithm 1) is the same as that of its centralized counterpart [7]. That is,
f (S \ F ∗ ) ≥ max { (1 − c f )/(1 + c f ), 1/(1 + K), 1/(|R| − K) } f (S ∗ \ F ∗ ),
where S ∗ is an optimal solution and F ∗ is an optimal removal set with respect to S ∗ .
2) Convergence time: Based on the results from Lemma 1 and Lemma 2, we know that the convergence time is (2N − 2K + 3)d(G).
3) Computational complexity: Combining the results from Lemma 1 and Lemma 2, we have the computational complexity as O((N − K) 2 |V i |).
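To get a feel for the bound in Theorem 1, the short sketch below evaluates max{(1 − c f)/(1 + c f), 1/(1 + K), 1/(|R| − K)} for a few illustrative curvatures and team sizes; the numbers are examples, not values from the experiments.

```python
# Numerical reading of the bound in Theorem 1 for a few illustrative settings
# (the curvature and team sizes below are examples, not experimental values).
def resilience_bound(c_f, N, K):
    """max{(1 - c_f)/(1 + c_f), 1/(1 + K), 1/(N - K)} with |R| = N."""
    return max((1 - c_f) / (1 + c_f), 1 / (1 + K), 1 / (N - K))

for c_f, N, K in [(0.3, 5, 3), (0.7, 30, 15), (1.0, 50, 37)]:
    print(f"c_f={c_f}, N={N}, K={K}: worst-case ratio >= {resilience_bound(c_f, N, K):.3f}")
```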
V. NUMERICAL EVALUATION
A. Simulation Setup
Environment settings: We verify the performance of Algorithm 1 by implementing it in a scenario where a distributed multi-robot system explores an environment modeled by a Gaussian mixture model (GMM). Specifically, the environment is generated as z(x, y) = Σ_{ℓ=1}^{B} r ℓ b ℓ (x, y) = r ⊤ b, where (x, y) are 2D coordinates, r ℓ ∈ R are the weights for the basis functions b ℓ : R 2 → R, ∀ ℓ = 1, . . . , B. Also, r = [r 1 , . . . , r B ] and b = [b 1 , . . . , b B ] are the stacked weights and basis functions, respectively. The number of basis functions and the variances are selected randomly. In the simulation, we use a 200 × 200 field to represent the environment. There is an environmental importance associated with each location; the importance value of a location equals the GMM value of that location.
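A minimal way to generate such an importance field is sketched below with NumPy; the number of components, weights, centres, and variances are placeholder choices, since the letter only states that they are drawn randomly.

```python
# Sketch of a GMM importance field z(x, y) = sum_l r_l * b_l(x, y) on a 200 x 200
# grid. The number of components, weights, centres, and variances are placeholder
# choices, since the letter only states that they are drawn randomly.
import numpy as np

rng = np.random.default_rng(1)
B, size = 5, 200
xs, ys = np.meshgrid(np.arange(size), np.arange(size), indexing="ij")

r = rng.uniform(0.5, 2.0, size=B)          # weights r_l
mu = rng.uniform(0, size, size=(B, 2))     # centres of the Gaussian basis functions
sigma = rng.uniform(10, 40, size=B)        # isotropic standard deviations

z = np.zeros((size, size))
for l in range(B):
    d2 = (xs - mu[l, 0]) ** 2 + (ys - mu[l, 1]) ** 2
    z += r[l] * np.exp(-d2 / (2 * sigma[l] ** 2))   # b_l(x, y)

print(z.shape, float(z.max()))   # environmental importance at every grid cell
```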
• An optimal method, where the solution is generated through a brute-force search. • A semi-dist method [10], where the solution is generated by first partitioning robots into groups and then running a centralized resilient algorithm in each group. • A cent-greedy method [21], where the solution is generated greedily based on marginal gains that maximize the objective function in a centralized manner. • A cent-rand method, where the solution is generated randomly in a centralized manner. Multi-robot system settings: We compare the performance of the system using two different settings: 1), In the first setting, we compare the performance of our distributed resilient method with the optimal, the semi-dist, the cent-greedy method, and the cent-rand methods. We set the number of robots as N = 5, and the maximum number of attacks as K = 3. Then, we run 200 trials to compare the performance. We generate random initial locations for the robots in each trial, and each robot has four actions (forward, backward, left, and right). 2), In the second setting, we compare the performance of the resilient method, the semi-dist, the centgreedy method, and the cent-rand method. Specifically, we set the number of robots as N ∈ {30, 40, 50}. The corresponding number of attacks K is randomly generated from [0.5N, 0.75N ] and rounded to an integer. This setting means that at least 50% of the robots will be attacked, and at most 75% of the robots will be attacked. The robots' rewards are added with white Gaussian noise with a mean of 10% of the original rewards and a variance of 5% of the original rewards. We then run 50 trials for each setting. However, since finding the worst-cast attacks is intractable and we aim to test the proposed algorithm's scalability, we use greedy attacks in this case. That is, the attacker attacks the robots' sensors greedily using the standard greedy algorithm [21]. Common simulation parameters include: in each of the above settings, the sensing range of the robots is set to 10; the reward of an action is the environmental importance explored by this action. In each trial, the robots are randomly placed in a region with x ∈ [50, 100], y ∈ [50, 100]. Finally, we perform Monte Carlo simulations to test the performance of these four methods with the same simulation parameters.
B. Performance Comparison
Performance metric: The performance of different methods is captured by the sum of the importance in the explored area after worst-case attacks. Specifically, we first generate a solution for each method. Then, attackers attack and remove the contributions of attacked robots. Since finding worst-case attacks is intractable, we use a brute-force search to find the worst-case attacks for each generated solution.
In the first setting, we compare the statistics of the utilities of different methods using 200 trials, as shown in Fig. 3. The utilities in the box plot reflect the performance of different methods by using the quartiles of each solution. As suggested by the result, we observe that the median of the utilities generated by the proposed distributed-resilient method is higher than that of the other three methods, except for the optimal method. In Fig. 4, we compare the optimality ratios of different solutions with respect to their corresponding optimal solutions in each setting. Specifically, the optimality ratio range of the proposed distributed-resilient method is [0.77, 1]. The optimality ratio range of the semi-dist method is [0.75, 1]. The optimality ratio range of the cent-greedy method is [0.55, 1]. The optimality ratio range of the cent-rand method is [0.35, 1]. This further illustrates that the proposed algorithm (Algorithm 1) is superior to the other methods, since most of the cases have a close-to-optimal optimality ratio, as shown in Fig. 4(a). In Fig. 5(a), we demonstrate the environment and the initial configuration of the robots of one instance. Then, the solution of the proposed distributed-resilient method for one instance is shown in Fig. 5(b).
In the second setting, we compare the mean of the utilities of different methods. Fig. 6 shows the utilities of four different approaches. The result demonstrates that Algorithm 1 yields superior results compared with the other methods. Finally, we plot in Fig. 7 the evolution of utilities for different robots when the number of robots is 15, and the number of attacks is 8, demonstrating the convergence of the proposed distributed resilient method. Remark 5.1: Under our problem formulation, imperfect motion and sensing impact our algorithm in two ways: they disturb how the robots evaluate the reward of their actions and the ability to execute high-reward actions as planned. These influences do not change the fundamental nature of the problem but would instead impact the collected reward. However, the proposed algorithm is still superior to the other three methods, as shown in Fig. 6.
VI. CONCLUSIONS AND FUTURE WORK
In this letter, we proposed a fully distributed algorithm for the problem of the resilient submodular action selection. We proved that the solution of the proposed algorithm converges to the corresponding centralized algorithm. We evaluated the algorithm's performance through extensive simulations. Directions for future work include exploiting connectivity of the system when communications are attacked, investigating the different importance of the robots (nodes) when the system is attacked, and revisiting our problem with noisy motion and perception considered.
3 Top M actions in action set S i 1 (|S i 1 | ≥ M ): given the function values f (s) of all actions s ∈ S i 1 , sort these function values in descending order, and set the M actions corresponding to the first M function values as the top M actions in S i 1 .
Fig. 3. The statistics of the utilities of the five different methods over 200 trials with the number of robots N = 5 and the number of attacks K = 3. The box plot demonstrates the quartiles of the different solutions.
Fig. 4. The optimality ratios of different solutions with respect to their corresponding optimal solutions in each of the 200 trials.
Fig. 5. (a) The robots (N = 5) are placed randomly in the environment, and a connected communication graph G is initialized randomly. (b) The resilient solution after the worst-case attacks (K = 3). The selections of worst-case attacked sensors are in gray; the selections of unattacked sensors are in cyan.
Fig. 6. The utilities of the four different methods with N = 30, 40, and 50 and with the number of attacks K (for each N ) randomly generated from [0.5N, 0.75N ].
Fig. 7. The evolution of the utilities of the 7 robots that are not attacked when the number of robots N = 15 and the number of attacks K = 8.
1 We will use v to represent {v} if there is no confusion.
2 It is worth noting that the proposed algorithm also works for directed communication graphs.
Multi-robot coordination and planning in uncertain and adversarial environments. L Zhou, P Tokekar, Current Robotics Reports. L. Zhou and P. Tokekar, "Multi-robot coordination and planning in uncertain and adversarial environments," Current Robotics Reports, pp. 1-11, 2021.
Resilient flocking for mobile robot teams. K Saulnier, D Saldana, A Prorok, G J Pappas, V Kumar, IEEE Robot. Autom. Letter. 22K. Saulnier, D. Saldana, A. Prorok, G. J. Pappas, and V. Kumar, "Resilient flocking for mobile robot teams," IEEE Robot. Autom. Letter, vol. 2, no. 2, pp. 1039-1046, 2017.
Guaranteeing spoof-resilient multi-robot networks. S Gil, S Kumar, M Mazumder, D Katabi, D Rus, Auton. Robots. 416S. Gil, S. Kumar, M. Mazumder, D. Katabi, and D. Rus, "Guaranteeing spoof-resilient multi-robot networks," Auton. Robots, vol. 41, no. 6, pp. 1383-1400, 2017.
Resilient distributed state estimation with mobile agents: Overcoming Byzantine adversaries, communication losses, and intermittent measurements. A Mitra, J A Richards, S Bagchi, S Sundaram, Auton. Robots. 433A. Mitra, J. A. Richards, S. Bagchi, and S. Sundaram, "Resilient distributed state estimation with mobile agents: Overcoming Byzantine adversaries, communication losses, and intermittent measurements," Auton. Robots, vol. 43, no. 3, pp. 743-768, 2019.
Masquerade attack detection through observation planning for multi-robot systems. K Wardega, R Tron, W Li, Proc. Int. Conf. Auton. Agents Multi. Syst. Int. Conf. Auton. Agents Multi. SystK. Wardega, R. Tron, and W. Li, "Masquerade attack detection through observation planning for multi-robot systems," in Proc. Int. Conf. Auton. Agents Multi. Syst., 2019, pp. 2262-2264.
Denial-of-service in wireless sensor networks: Attacks and defenses. D R Raymond, S F Midkiff, IEEE Pervasive Comput. 71D. R. Raymond and S. F. Midkiff, "Denial-of-service in wireless sensor networks: Attacks and defenses," IEEE Pervasive Comput., vol. 7, no. 1, pp. 74-81, 2008.
Resilient monotone submodular function maximization. V Tzoumas, K Gatsis, A Jadbabaie, G J Pappas, Proc. IEEE Conf. Decis. Control. IEEE Conf. Decis. ControlV. Tzoumas, K. Gatsis, A. Jadbabaie, and G. J. Pappas, "Resilient monotone submodular function maximization," in Proc. IEEE Conf. Decis. Control, 2017, pp. 1362-1367.
Resilient active target tracking with multiple robots. L Zhou, V Tzoumas, G J Pappas, P Tokekar, IEEE Robot. Autom. Letter. 41L. Zhou, V. Tzoumas, G. J. Pappas, and P. Tokekar, "Resilient active target tracking with multiple robots," IEEE Robot. Autom. Letter, vol. 4, no. 1, pp. 129-136, 2018.
Robust multiple-path orienteering problem: Securing against adversarial attacks. G Shi, L Zhou, P Tokekar, Proc. Robot.: Sci. Syst. G. Shi, L. Zhou, and P. Tokekar, "Robust multiple-path orienteering problem: Securing against adversarial attacks," in Proc. Robot.: Sci. Syst., 2020.
Distributed attackrobust submodular maximization for multi-robot planning. L Zhou, V Tzoumas, G J Pappas, P Tokekar, Proc. IEEE Int. Conf. Robot. Autom. IEEE Int. Conf. Robot. AutomL. Zhou, V. Tzoumas, G. J. Pappas, and P. Tokekar, "Distributed attack- robust submodular maximization for multi-robot planning," in Proc. IEEE Int. Conf. Robot. Autom., 2020, pp. 2479-2485.
Consensus-based decentralized auctions for robust task allocation. H.-L Choi, L Brunet, J P How, IEEE Trans. Robot. 254H.-L. Choi, L. Brunet, and J. P. How, "Consensus-based decentralized auctions for robust task allocation," IEEE Trans. Robot., vol. 25, no. 4, pp. 912-926, 2009.
Distributed greedy algorithm for multiagent task assignment problem with submodular utility functions. G Qu, D Brown, N Li, Automatica. 105G. Qu, D. Brown, and N. Li, "Distributed greedy algorithm for multi- agent task assignment problem with submodular utility functions," Automatica, vol. 105, pp. 206-215, 2019.
Decentralized matroid optimization for topology constraints in multi-robot allocation problems. R K Williams, A Gasparri, G Ulivi, Proc. IEEE Int. Conf. Robot. Autom. IEEE Int. Conf. Robot. AutomR. K. Williams, A. Gasparri, and G. Ulivi, "Decentralized matroid op- timization for topology constraints in multi-robot allocation problems," in Proc. IEEE Int. Conf. Robot. Autom., 2017, pp. 293-300.
Monitoring over the long term: Intermittent deployment and sensing strategies for multi-robot teams. J Liu, R K Williams, Proc. IEEE Int. Conf. Robot. Autom. IEEE Int. Conf. Robot. AutomJ. Liu and R. K. Williams, "Monitoring over the long term: Intermittent deployment and sensing strategies for multi-robot teams," in Proc. IEEE Int. Conf. Robot. Autom., 2020, pp. 7733-7739.
Coupled temporal and spatial environment monitoring for multi-agent teams in precision farming. J Liu, R K Williams, IEEE Conf. J. Liu and R. K. Williams, "Coupled temporal and spatial environment monitoring for multi-agent teams in precision farming," in IEEE Conf. Control Technol. Appl., 2020, pp. 273-278.
Distributed matroid-constrained submodular maximization for multi-robot exploration: Theory and practice. M Corah, N Michael, Auton. Robots. 432M. Corah and N. Michael, "Distributed matroid-constrained submodular maximization for multi-robot exploration: Theory and practice," Auton. Robots, vol. 43, no. 2, pp. 485-501, 2019.
Submodular optimization for coupled task allocation and intermittent deployment problems. J Liu, R K Williams, IEEE Robot. Autom. Letter. 44J. Liu and R. K. Williams, "Submodular optimization for coupled task allocation and intermittent deployment problems," IEEE Robot. Autom. Letter, vol. 4, no. 4, pp. 3169-3176, 2019.
The impact of information in distributed submodular maximization. D Grimsman, M S Ali, J P Hespanha, J R Marden, IEEE Trans. Control Netw. Syst. 64D. Grimsman, M. S. Ali, J. P. Hespanha, and J. R. Marden, "The impact of information in distributed submodular maximization," IEEE Trans. Control Netw. Syst., vol. 6, no. 4, pp. 1334-1343, 2018.
Submodular optimization for consensus networks with noise-corrupted leaders. E Mackin, S Patterson, IEEE Trans. Autom. Control. 647E. Mackin and S. Patterson, "Submodular optimization for consensus networks with noise-corrupted leaders," IEEE Trans. Autom. Control, vol. 64, no. 7, pp. 3054-3059, 2018.
A supermodular optimization framework for leader selection under link noise in linear multi-agent systems. A Clark, L Bushnell, R Poovendran, IEEE Trans. Autom. Control. 592A. Clark, L. Bushnell, and R. Poovendran, "A supermodular optimization framework for leader selection under link noise in linear multi-agent systems," IEEE Trans. Autom. Control, vol. 59, no. 2, pp. 283-296, 2013.
An analysis of approximations for maximizing submodular set functions-I. G L Nemhauser, L A Wolsey, M L Fisher, Math. Program. 141G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher, "An analysis of approximations for maximizing submodular set functions-I," Math. Program., vol. 14, no. 1, pp. 265-294, 1978.
Submodular function maximization. A Krause, D Golovin, Tractability: Practical Approaches to Hard Problems. Cambridge University PressA. Krause and D. Golovin, "Submodular function maximization," in Tractability: Practical Approaches to Hard Problems. Cambridge Uni- versity Press, 2014, pp. 71-104.
Combinatorial optimization: polyhedra and efficiency. A Schrijver, Springer Science & Business Media24A. Schrijver, Combinatorial optimization: polyhedra and efficiency. Springer Science & Business Media, 2003, vol. 24.
Submodular set functions, matroids and the greedy algorithm: tight worst-case bounds and some generalizations of the rado-edmonds theorem. M Conforti, G Cornuéjols, Discrete Appl. Math. 73M. Conforti and G. Cornuéjols, "Submodular set functions, matroids and the greedy algorithm: tight worst-case bounds and some generalizations of the rado-edmonds theorem," Discrete Appl. Math., vol. 7, no. 3, pp. 251-274, 1984.
| [] |
[
"Injection of photoelectrons into dense argon gas",
"Injection of photoelectrons into dense argon gas"
] | [
"A F Borghesani \nDepartment of Physics\nMax-Planck-Institut fűr Physik und Astrophysik\nWerner-Heisenberg-Institut fűr Physik\nCNISM Unit\nUniversity of Padua\nvia F. Marzolo 8, Főhringer Ring 6, D-8000 Műnchen 40I-35131PaduaItaly, Germany\n",
"P Lamp \nDepartment of Physics\nMax-Planck-Institut fűr Physik und Astrophysik\nWerner-Heisenberg-Institut fűr Physik\nCNISM Unit\nUniversity of Padua\nvia F. Marzolo 8, Főhringer Ring 6, D-8000 Műnchen 40I-35131PaduaItaly, Germany\n"
] | [
"Department of Physics\nMax-Planck-Institut fűr Physik und Astrophysik\nWerner-Heisenberg-Institut fűr Physik\nCNISM Unit\nUniversity of Padua\nvia F. Marzolo 8, Főhringer Ring 6, D-8000 Műnchen 40I-35131PaduaItaly, Germany",
"Department of Physics\nMax-Planck-Institut fűr Physik und Astrophysik\nWerner-Heisenberg-Institut fűr Physik\nCNISM Unit\nUniversity of Padua\nvia F. Marzolo 8, Főhringer Ring 6, D-8000 Műnchen 40I-35131PaduaItaly, Germany"
] | [] | The injection of photoelectrons in a gaseous or liquid sample is a widespread technique to produce a cold plasma in a weakly-ionized system in order to study the transport properties of electrons in a dense gas or liquid. We report here the experimental results of the collection efficiency of photoelectrons injected into dense argon gas at the temperature T = 142.6 K as a function of the externally applied electric field and gas density. We show that the experimental data can be interpreted in terms of the so-called Young-Bradbury model only if multiple scattering effects due to the dense environment are taken into account when computing the scattering properties and the energetics of the electrons. | 10.1088/0963-0252/20/3/034001 | [
"https://arxiv.org/pdf/1007.1187v2.pdf"
] | 95,200,573 | 1007.1187 | 3bff1f57e5e0f03bcc07976b31e941a1fbe60535 |
Injection of photoelectrons into dense argon gas
20 Jul 2010
A F Borghesani
Department of Physics
Max-Planck-Institut für Physik und Astrophysik
Werner-Heisenberg-Institut für Physik
CNISM Unit
University of Padua
via F. Marzolo 8, I-35131 Padua, Italy; Föhringer Ring 6, D-8000 München 40, Germany
P Lamp
Department of Physics
Max-Planck-Institut für Physik und Astrophysik
Werner-Heisenberg-Institut für Physik
CNISM Unit
University of Padua
via F. Marzolo 8, I-35131 Padua, Italy; Föhringer Ring 6, D-8000 München 40, Germany
Injection of photoelectrons into dense argon gas
PACS numbers: 51.50.+v, 52.25.Fi
The injection of photoelectrons in a gaseous or liquid sample is a widespread technique to produce a cold plasma in a weakly-ionized system in order to study the transport properties of electrons in a dense gas or liquid. We report here the experimental results of the collection efficiency of photoelectrons injected into dense argon gas at the temperature T = 142.6 K as a function of the externally applied electric field and gas density. We show that the experimental data can be interpreted in terms of the so-called Young-Bradbury model only if multiple scattering effects due to the dense environment are taken into account when computing the scattering properties and the energetics of the electrons.
Introduction
Injection of electrons from a metal into a gaseous or fluid dielectric is a process of technological relevance whose theoretical understanding is not yet complete [1,2]. Electrons emitted either by photo-or tunnel cathodes are injected directly into the conduction band of the dielectric medium and, hence, the energy separation of the Fermi level in the metal and the bottom of the conduction band can be measured [3].
Once injected into a medium, the hot carriers lose energy by means of scattering events that eventually lead them to thermalization. Through the same physical mechanisms of scattering, combined with the action of an externally applied electric field and of the image force field, some of the electrons are captured back by the cathode and are not collected. The collection efficiency may thus give useful pieces of information on the scattering processes a hot electron undergoes on its way to thermalization [4].
Rare gases represent a practical realization of disordered systems, and the possibility to vary their density in the range between dilute gas (N ≈ 10 −2 nm −3 ) and liquid (N ≈ 20 nm −3 ) offers a unique opportunity for studying how electron states and transport depend on density and degree of disorder.
‡ Present address: BMW Munich, Germany
The scattering processes responsible for the drift mobility of electrons are now well understood and can be described up to intermediate densities within a picture in which multiple scattering effects modify the single scattering picture, which is valid at extremely low densities, and suitably dress the electron-atom scattering cross section [5]. Whereas for gases of positive scattering length, e.g., helium and neon, this picture breaks down at even higher densities because of electron localization in density fluctuations [6], in gases with negative scattering length, in particular in gaseous and liquid argon, electrons still propagate as quasifree particles with very long mean free paths [7].
In the past, Young and Bradbury developed a theory relating the actually collected charge to injection energy and applied field [8]. This model is quite successful at predicting the overall electric field dependence of the experimental data in a very dilute gas, even though some of the assumptions it is based on are untenable.
More recently, the injection of electrons in dense argon gas and liquid has been studied by using thin-film cold-cathode emitters [9]. The researchers interpreted their experimental data within the YB model even though they acknowledged its flaws. As a result, they were able to detect an unexpected density dependence of the electronatom momentum-transfer scattering cross section. Unfortunately, their interpretation of the data is spoiled by the unavailability, at that time, of a valid model for the description of scattering at such high densities.
Newer Monte-Carlo (MC) based classical-trajectories numerical simulations were subsequently carried out in order to statistically study the collection efficiency of the electron injection process in a gas as a function of field and density without the questionable hypotheses of the YB model [10]. As a matter of fact, the results of the numerical simulations agree with the analytical prediction of the YB model as far as the electric field dependence is concerned. However, the simulations do not give any physical explanations for the explicit analytical dependence shown by both the experimental and the simulated data. In addition to that, MC simulations completely fail at predicting the experimentally observed density dependence of the data because they did not take into account, as we will show next, the quantum multiple scattering effects that affect the scattering properties of electrons at high densities.
Thus, within our program of investigating the electron mobility in dense rare gases at very high densities, we decided to study the collection efficiency of electrons injected into dense argon gas by means of the photoelectric effect in view of the fact that we now have a well-established theoretical model to describe the behaviour of the quasifree electrons in dense noble gases [5].
Experimental Details
We have used the pulsed photoemission method adopted in previous electron mobility measurements in neon [11,12], helium [13], and argon [5], and we have exploited the same experimental apparatus used for mobility measurements in liquid, gaseous, and critical argon [14]. Details of the apparatus have been published elsewhere [14,15]. We recall here only the most relevant features.
The sample cell consists of a copper block that can withstand gas pressures up to 5 MPa and is contained inside a cryostat for accurate thermoregulation within 1 mK in the temperature range (100 < T < 300) K. The cell is filled with the highest-purity (99.9999% vol.), commercial argon gas. The gas is further purified by circulating it through an Oxisorb filter (Messer Griesheim, Germany) so as to reach the final impurity content of a few tenths of a part per billion required for drift mobility measurements [5]. The gas pressure is measured with an accuracy of ±1 kPa.
An ultraviolet (UV)-grade quartz window coated with a ≈ 10 nm thick gold layer acts as both photocathode and electrode for the drift voltage [16]. A xenon flashlamp (pulse duration ≈ 1 µs; Hamamatsu, model No. L2435) is used as the UV light source. The spectral distribution of the light emitted by the flashlamp can approximately be described by an asymmetric Gaussian peak centered at about λ_m = 232 nm with left- and right-widths of (+6, +28) nm, respectively, corresponding to a photon energy of ≈ 5.4 eV.
The UV light is guided to the photocathode by means of a UV-grade quartz fiber. Typically, ≈ 10 6 electrons, i.e., 160 fC, per pulse are released in vacuo. This amount of charge corresponds to ≈ 0.5 V at the output of the active integrator connected to the anode. In order to improve the signal-to-noise ratio, 256 signals are fetched and averaged together for each value of the electric field applied to the electrodes. The signal waveforms are then analyzed by standard numerical techniques [17].
Experimental Results
In this paper we report the results for the charge injected into dense argon gas at a temperature T = 142.6 K below the critical temperature T c = 150.9 K for pressures in the one-phase region (P < 3.6 MPa). The number density N of the gas is calculated from T and P by means of an accurate equation of state [18].
The applied electric field is in the range (1 < E < 400) × 10² V/m and is small enough to avoid breakdown or gas ionization. The absence of any contact-potential effects that might affect the calculated values of E for voltages around 1 V is confirmed by checking that the drift mobility of electrons is field-independent at the lowest field strengths [5,19].
In figure 1 we report the experimental results for the charge Q collected by the anode as a function of the applied field E for some densities. The experimental accuracy is ≈ 10 %. For the sake of clarity we do not show all of the measured isopycnal curves. A qualitative analysis of this figure shows that Q steadily increases with increasing E, decreases with increasing N at constant E, does not strongly depend on N for large E once N exceeds some intermediate value, and, finally, shows a change of the dependence on E in a region around E ≈ 15 × 10² V/m. However, such a way of displaying the results does not help in identifying the regularities hidden in the data. Actually, the electric field E is not the best physical variable to describe the data because it has no specific universal significance when the drifting charges scatter off the gas atoms, as in the present case. Electrons are better characterized by the amount of energy gained from the electric field over one mean free path (mfp), eEℓ = e(E/Nσ_mt), where ℓ = (Nσ_mt)⁻¹ is the mfp and σ_mt is the electron-atom momentum transfer scattering cross section. It is therefore customary to plot Q as a function of the density-normalized electric field E/N, which is thus proportional, for a given cross section, to the energy in excess of thermal gained from the field during drift.
In figure 2 we plot Q as a function of E/N for some isopycnal curves. These results should be compared with the only other experiment on charge injection in argon gas at a similar temperature, although in that experiment hot electrons are injected into the gas by using a tunnel diode as the cathode and much stronger reduced electric fields are used [9]. The behaviour of Q as a function of E/N and N is fairly complicated. For E/N well in excess of ≈ 2 mTd (1 mTd = 10⁻²⁴ V m²), Q ∝ (E/N)^{1/2} for all densities, and the amount of charge collected at constant E/N in this range increases with increasing N. This behaviour compares favorably with the results of Smejtek et al. [9], whose experiment was carried out at T = 160 K and only spans the high-field range E/N ≳ 10 mTd. By contrast, the results of the MC simulations [10] show the opposite tendency, namely a collected charge that decreases with increasing density. Thus, we will next focus on this controversial aspect of the numerical analysis. For E/N < 2 mTd, the data deviate from the (E/N)^{1/2}-law, showing a double change of curvature.
Finally, for E/N ≈ 0.3 mTd, a crossover of the different isopycnal curves takes place so that, at even smaller E/N, the density ordering is reversed with respect to the high-field region.
Owing to the large zero-field electron mobility even at the highest N (µ ≈ 0.1 m²/V s) [5,19], to the very low impurity content (less than one part per billion O₂ equivalent) that might give origin to slow O₂⁻ ions, and to the small amount of charge injected, we can rule out space-charge effects as the cause of the observed low-field behaviour of the collected charge.
Discussion
The emission of photoelectrons from a metal cathode into vacuum takes place when the photon energy exceeds the threshold energy, or work function W_v, of the metal. If emission occurs in a medium, it is found that the threshold energy W_m is shifted from its vacuum value by the amount V_0, which is interpreted as the bottom of the conduction band of the medium [2]
W_m = W_v + V_0 \qquad (1)
In the case of argon, V 0 < 0 [20] and less photon energy is required for photoelectron emission than in vacuo. Electrons are then photoemitted into the medium with a broad energy distribution [21,22,23] up to a maximum energy
E_0 = (hc/\lambda_m) - W_m,
where λ_m is the shortest wavelength in the flashlamp spectrum, h is Planck's constant, and c is the speed of light in vacuo. Once injected into the medium, the epithermal electrons drift under the combined influence of diffusion, of their own image field [24] that brings them back to the cathode, and of the externally applied electric field E that pulls them toward the anode. The net potential energy is given by
V(x) = -\frac{1}{4}\,\frac{e^2}{4\pi\epsilon_0 K x} - eEx \qquad (2)
where x is distance from the cathode and K is the relative dielectric constant of the medium. For the densities of the present experiment K = 1 within a few percent [25].
The potential energy V (x) has a maximum at a distance
x_m = (e/16\pi\epsilon_0 K E)^{1/2}, with value V_m = V(x_m) = -2eEx_m.
The application of an electric field thus lowers the threshold energy by the Schottky correction ∆W = |V_m|. It is instructive to evaluate x_m and V_m under the conditions of the present experiments. By assuming K ≃ 1, we get x_m ≈ 2 × 10⁻⁶ m and |V_m| ≈ 4 × 10⁻⁴ eV for E = 1 × 10² V/m, and x_m ≈ 9.5 × 10⁻⁸ m and |V_m| ≈ 8 × 10⁻³ eV for E = 4 × 10⁴ V/m. We note that, at the quite small field strengths of our experiment, the Schottky lowering of the threshold energy is always smaller than the thermal energy E_T = (3/2)k_B T ≈ 18 × 10⁻³ eV and that the position of the potential energy maximum is located very far from the cathode even on the mfp scale. In figure 3 the energy levels at the metal-gas interface are schematically shown.
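These estimates are easy to verify. The following standalone sketch (not part of the original analysis; it only assumes the standard values of e and ε₀ and K = 1) reproduces the numbers quoted above:

```python
import math

e = 1.602176634e-19      # elementary charge [C]
eps0 = 8.8541878128e-12  # vacuum permittivity [F/m]
K = 1.0                  # relative dielectric constant of the gas

def schottky(E):
    """Position x_m and depth |V_m| of the image-potential barrier for field E [V/m]."""
    x_m = math.sqrt(e / (16 * math.pi * eps0 * K * E))  # [m]
    V_m = 2 * E * x_m                                   # |V_m| = 2eEx_m, expressed in eV
    return x_m, V_m

for E in (1e2, 4e4):
    x_m, V_m = schottky(E)
    print(f"E = {E:.0e} V/m:  x_m = {x_m:.2e} m,  |V_m| = {V_m:.1e} eV")
# prints x_m ≈ 1.9e-06 m, |V_m| ≈ 3.8e-04 eV  and  x_m ≈ 9.5e-08 m, |V_m| ≈ 7.6e-03 eV
```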
The emitted electrons are characterized by the initial excess kinetic energy E 0 over the barrier. The ultimate fate of an electron injected into the gas, whether it is scattered back to the cathode or is collected by the anode, depends on the distance at which it thermalizes compared to the distance of the potential maximum x m .
In (pure) argon gas electrons undergo only momentum exchange scattering processes that randomize the electron velocities and lead to a slow loss of their initial kinetic energy. One possibility for the electron is to be immediately backscattered well before x m upon injection and be returned to the cathode [9]. The probability that such a backscattered electron might still diffuse forward to the anode over the potential energy maximum or directly tunnel through the potential barrier is negligible owing to the small strength of the electric field, hence the large value of the distance x m , and owing to the quite low temperature of the experiment [3]. Some of the injected electrons may not be backscattered and may slowly lose their excess energy by these elastic scattering processes until they thermalize beyond the potential maximum and are collected by the anode [9]. The physical situation, however, is not this simple because the electron escape probability is smaller the higher the initial kinetic energy of the electrons. Actually, an electron that has already crossed the barrier at x m on its way to the anode, and still has sufficient energy to surmount it, may at any time undergo collisions that reverse its motion sending it back across the barrier into the cathode [2].
Several attempts have been made in the past to explain the ratio of the observed current to the saturation current, i.e., the current collected in vacuo [26,27,28]. The validity of the results obtained so far is difficult to ascertain because it would require an exact solution of the Boltzmann transport equation, which is not available analytically and can only be obtained by numerical MC techniques [10].
Thermalization is a complicated process in which a huge number of collisions is involved [10,29] and mainly depends on the electron-atom momentum transfer scattering cross section σ_mt. In spite of this, an oversimplified model due to Young and Bradbury (YB) [8] has proven quite successful at describing the experimental results in low-density gases, though it has attracted severe criticism [9,10].
The YB model assumes that the process responsible for the removal of electrons from the current stream is backward scattering at such an angle that the electrons can reach the emitter again. The image field is neglected. The return current is calculated by further considering only the reflection of electrons in their first encounter. In order for an electron backscattered at a distance x from the emitter to return to it, its kinetic energy (1/2)mu² associated with its velocity u towards the cathode must be greater than the work done by the applied electric field over the distance x, i.e., (1/2)mu² ≥ eEx.
On the other hand, the total kinetic energy at collision with velocity w is (1/2)mw² = E_0 + eEx, where E_0 is the injection energy. Electrons return to the cathode if they are scattered within a cone subtending the solid angle
\Omega = 2\pi \left[ 1 - \left( \frac{x}{x + E_0/eE} \right)^{1/2} \right] \qquad (3)
The return probability R(x) is thus given by Ω/4π
R(x) = \frac{1}{2}\left[ 1 - \left( \frac{x}{x + E_0/eE} \right)^{1/2} \right] \qquad (4)
By further assuming that the applied electric field is small enough not to significantly deflect the electrons before their first encounter and that ℓ is negligible with respect to the distance between cathode and anode, the ratio of the collected current I (or charge Q if current is integrated) to the saturation current I 0 (or charge Q 0 ) can be written as
\frac{I}{I_0} \equiv \frac{Q}{Q_0} = \int_0^{\infty} \ell^{-1} \exp(-x/\ell) \left( \frac{x}{x + E_0/eE} \right)^{1/2} dx = \int_0^{\infty} e^{-y} \left( \frac{y}{y + d^{-2}} \right)^{1/2} dy \qquad (5)
where y = x/ℓ and d² = eEℓ/E_0 = eE/(E_0 N σ_mt). The integral in equation 5 can be evaluated numerically as a function of the parameter d ∝ (E/N)^{1/2}. In figure 4, I/I_0 is shown as a function of d = [eE/(E_0 N σ_mt)]^{1/2}. As can be seen, for d ≤ 0.2 the integral can accurately be approximated by
\frac{I}{I_0} \equiv \frac{Q}{Q_0} \simeq A \left( \frac{e}{E_0 \sigma_{mt}} \right)^{1/2} \left( \frac{E}{N} \right)^{1/2} \qquad (6)
where A is a numerical constant of order unity [8].
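The small-d behaviour of equation 5, and hence the form of equation 6, can be checked with a few lines of numerical quadrature. The sketch below is purely illustrative (NumPy and SciPy are assumed); it suggests that A tends to Γ(3/2) = √π/2 ≈ 0.89 as d → 0, consistent with a constant of order unity:

```python
import numpy as np
from scipy.integrate import quad

def collected_fraction(d):
    """Equation 5: I/I0 = ∫_0^∞ exp(-y) [y/(y + d^-2)]^(1/2) dy."""
    integrand = lambda y: np.exp(-y) * np.sqrt(y / (y + d ** -2))
    value, _ = quad(integrand, 0.0, np.inf)
    return value

for d in (0.02, 0.05, 0.1, 0.2):
    print(f"d = {d:4}:  (I/I0)/d = {collected_fraction(d) / d:.3f}")
# The ratio approaches sqrt(pi)/2 ≈ 0.886 as d -> 0, i.e. I/I0 ≈ 0.89 d,
# which is the (E/N)^(1/2) regime described by equation 6.
```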
The condition d ≲ 0.2 sets an upper field-strength limit for the validity of equation 6. In the present experiment we estimate E_0 ≈ 0.35 eV and, by assuming σ_mt ≈ 8 × 10⁻²⁰ m² for thermal electrons [30,31] (the choice of a more appropriate value of the momentum transfer scattering cross section will be discussed in detail later on), equation 6 is valid for E/N up to 1 Td or even more.
Furthermore, it is argued on the basis of plausibility arguments that, for equation 5 to be valid, the fraction R of electrons that are returned to the emitter after they have traveled a distance equal to ℓ must be smaller than the fraction T = 1 − R of electrons that are transmitted toward the anode [9]. How much smaller this fraction has to be is not known. By assuming
T(\ell) - R(\ell) = \left( \frac{eE\ell}{E_0 + eE\ell} \right)^{1/2} \geq \alpha \qquad (7)
with 0 < α < 1, and recalling that in the cases we are interested in eEℓ ≪ E_0, i.e., the energy gained by the electron from the field over a mfp is much smaller than the injection energy, equation 7 leads to the condition
\frac{E}{N} \gtrsim \frac{\alpha^2 E_0 \sigma_{mt}}{e} \qquad (8)
If α = 0.1, as argued in the literature [9], and by using the previous estimates for E_0 and σ_mt, the YB model should be valid only for E/N ≳ 0.3 Td. This threshold value is far too high compared with the present experimental data, for which the (E/N)^{1/2}-law is obeyed at much lower reduced field values. This fact only means that the choice of the numerical value of α is rather arbitrary. Moreover, as it will become clear later, the value of σ_mt to be used in equation 8 is not easy to choose, owing to the strong energy dependence of the actual cross section and to the presence of multiple scattering effects at high densities. The data presented in figure 2 cover the low- to intermediate-field range (5 × 10⁻⁵ < E/N < 2 × 10⁻¹) Td and partially overlap the data of the previous experiment, which span the higher-field range (5 × 10⁻³ < E/N < 4 × 10¹) Td [9].
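For reference, the 0.3 Td figure quoted above follows from equation 8 by elementary arithmetic (a minimal check; 1 Td = 10⁻²¹ V m², and with E_0 expressed in eV the electron charge cancels):

```python
alpha = 0.1       # plausibility parameter suggested in Ref. [9]
E0 = 0.35         # injection energy [eV]
sigma_mt = 8e-20  # thermal momentum transfer cross section [m^2]

threshold = alpha ** 2 * E0 * sigma_mt  # [V m^2], since E0 in eV cancels the charge e
print(threshold / 1e-21, "Td")          # ≈ 0.28 Td, i.e. E/N ≳ 0.3 Td
```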
Our data clearly show that the YB law, equation 6, describes the collected charge data reasonably well from E/N ≳ 1 mTd up to E/N ≈ 0.2 Td for all the investigated densities, from N = 0.26 nm⁻³ up to N = 3.09 nm⁻³. This result extends the validity of the YB model to E/N values one order of magnitude smaller than the previous experiment [9] and suggests that the assumption expressed by equation 7 is unrealistic.
A detailed inspection of figure 2 further shows that the experimental data do not strictly obey the YB law even at the highest field strength. We believe that the upward deviations from it occur as a consequence of the energy dependence of the cross section that increases with energy for energies above the Ramsauer-Townsend (RT) minimum.
If the YB law, equation 6, holds true (at least, approximately), the value of the momentum transfer scattering cross section can be deduced at each density N. In order to do this, it is better to recast equation 6 in the following form:
\frac{Q}{Q_0} = A' S(N) \left( \frac{E}{N} \right)^{1/2} \qquad (9)
in which A' = Ae^{1/2} and S(N) = (E_0 \sigma_{mt})^{-1/2}. The density-dependent cross section is then calculated as
\sigma(N) = \frac{S^2(N_1)}{S^2(N)} \, \frac{E_0(N_1)}{E_0(N)} \, \sigma_0 \qquad (10)
where N_1 is a (low) density taken as a reference, σ_0 is a scattering cross section value obtained from low-density gas swarm experiments, and E_0 is the injection energy. This simple procedure indeed yields a density-dependent cross section, as obtained by Smejtek et al. [9]. For reasons that will become clear later, we have simply termed σ(N) the cross section determined in this way, rather than using the previous symbol σ_mt. In order to determine σ(N) from our data, we have first fitted the Q data to the (E/N)^{1/2}-law for E/N ≳ 1 mTd so as to calculate the slope S(N). The injection energy for N = 0 is estimated in our case to be E_0(N = 0) ≡ E_0(0) ≈ 0.35 eV. The variation of E_0 with N is accounted for by the density dependence of the energy of the bottom of the conduction band of the gas, V_0(N):
E_0(N) = E_0(0) - V_0(N) \qquad (11)
We have used for V_0 the experimental data of Reininger et al., which are well described by the interpolation formula [20]
V_0(N) = V_0(N_0) + a(N - N_0) + \frac{b}{c} \ln\{\cosh[c(N - N_0)]\} \qquad (12)
with N_0 = 11.03 nm⁻³, V_0(N_0) = −0.253 eV, a = −3.34 × 10⁻³ eV nm⁻³, b = 2.48 × 10⁻² eV nm⁻³, and c = −0.3 nm⁻³. The interpolation formula is corrected for impurity effects at low density [32]. In our conditions the effect of the density change of V_0 is quite important. At the highest density of our experiment, N ≈ 3.09 nm⁻³, the contribution of V_0 amounts to ≈ 25 % of E_0(0).
The normalization constant has been chosen as σ_0 = 3.4 × 10⁻²⁰ m² for N_1 = 0.26 nm⁻³, which is consistent with the mobility data published elsewhere [5,19].
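The procedure of equations 9-11 can be summarized in a few lines. The sketch below is illustrative only: the interpolant for V_0(N) and the data arrays are placeholders, not the actual analysis code:

```python
import numpy as np

def E0_of_N(N, V0, E0_vac=0.35):
    """Equation 11: density-dependent injection energy [eV].
    V0 is any interpolant of the conduction-band bottom V0(N) [eV],
    e.g. built from the data of Reininger et al. [20] (placeholder here)."""
    return E0_vac - V0(N)

def sigma_of_N(S, N, S1, N1, V0, sigma0=3.4e-20):
    """Equation 10: cross section [m^2] from the fitted slopes S(N) of
    Q = S (E/N)^(1/2), normalized at the reference density N1."""
    return (S1 / S) ** 2 * (E0_of_N(N1, V0) / E0_of_N(N, V0)) * sigma0

# For each isopycnal, the slope is obtained from a linear fit restricted to
# the YB region E/N >~ 1 mTd, e.g. with (hypothetical) arrays EoverN and Q:
#   S = np.polyfit(np.sqrt(EoverN), Q, 1)[0]
```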
In figure 5 we plot the density-dependent momentum transfer scattering cross section determined according to the above mentioned procedure. The data of Smejtek et al. are also shown for the sake of comparison.
The observed density dependence of the electron-atom momentum transfer scattering cross section can easily be explained in terms of the model we developed to explain the anomalous density effects of the electron mobility in dense noble gases [11,5]. For a detailed description of this model we refer to a previous paper [5]; we recall here its main features with the specific goal of interpreting the present data. At the densities of the present experiment the electron de Broglie wavelength, its mfp, and the average interatomic distance become comparable with each other, so that multiple scattering effects set in. In particular, the ground-state energy of a quasifree electron immersed in a medium is increased with respect to its thermal value by a density-dependent quantum shift [33] that is recognized as the bottom of the conduction band, V_0(N). V_0(N) can be written as the sum of potential and kinetic contributions [34]
V_0(N) = U_P(N) + E_K(N) \qquad (13)
U_P(N) < 0 is a potential energy contribution that arises from the screened polarization interaction of the electron with the surrounding atoms, whereas E_K(N) is a kinetic energy term due to excluded-volume effects, because the volume accessible to the electron shrinks as N increases. It turns out that E_K(N) > 0 and increases with increasing N. E_K can be calculated by enforcing the condition that the electron ground-state wave function is endowed with average translational symmetry about the equivalent Wigner-Seitz (WS) cell centered on each atom of the gas [35]. This condition leads to the eigenvalue equation
\tan[k_0(r_s - \tilde{a}(k_0))] - k_0 r_s = 0 \qquad (14)
that must be solved for the wavevector k_0(N) in a self-consistent way. Here r_s = (3/4\pi N)^{1/3} is the radius of the WS cell, \tilde{a} = (\sigma_t/4\pi)^{1/2} is the hard-core radius of the Hartree-Fock potential for rare-gas atoms [34], and σ_t is the electron-atom total scattering cross section. Finally, E_K is given by
E_K = \frac{\hbar^2 k_0^2}{2m} \qquad (15)
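A minimal self-consistent solver for equations 14 and 15 might look as follows. This is a sketch only: the total cross section σ_t(E) has to be supplied by the user (e.g., interpolated from the data of reference [31]), and the root bracketing simply isolates the lowest solution of the eigenvalue equation:

```python
import numpy as np
from scipy.optimize import brentq

HBAR = 1.054571817e-34   # J s
M_EL = 9.1093837015e-31  # kg
EV = 1.602176634e-19     # J

def kinetic_shift(N, sigma_t, E_start=0.01, tol=1e-6, max_iter=100):
    """E_K [eV] for density N [m^-3]; sigma_t(E [eV]) -> total cross section [m^2]."""
    r_s = (3.0 / (4.0 * np.pi * N)) ** (1.0 / 3.0)     # Wigner-Seitz radius [m]
    E_K = E_start
    for _ in range(max_iter):
        a_hc = np.sqrt(sigma_t(E_K) / (4.0 * np.pi))   # hard-core radius ã [m]
        f = lambda k0: np.tan(k0 * (r_s - a_hc)) - k0 * r_s
        k_max = 0.999 * (np.pi / 2.0) / (r_s - a_hc)   # stay below the first pole of tan
        k0 = brentq(f, 1e-3 / r_s, k_max)              # lowest root of equation 14
        E_new = (HBAR * k0) ** 2 / (2.0 * M_EL) / EV   # equation 15, converted to eV
        if abs(E_new - E_K) < tol:
            break
        E_K = E_new
    return E_K
```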
In figure 6 we show E_K(N) as a function of the gas density, calculated according to equation 15 by using the cross section reported by Weyhreter et al. [31]. The experiments on electron mobility in dense rare gases [11,12,5,14,19,36] have clearly shown that only the kinetic contribution E_K of the total energy shift V_0 enhances the kinetic energy of electrons during collisions. In this way, the scattering properties of electrons, namely their scattering cross sections, have to be evaluated at the shifted kinetic energy E + E_K(N). In other words, the bottom of the electron energy distribution function is shifted by an amount equal to E_K(N). The dependence of the cross section on the electron energy, shown in figure 7, and the energy shift by E_K(N) are the main physical effects leading to a density dependence of the cross section. Smejtek et al. [9] were not able to explain the causes of the observed density dependence of the cross section because the physical picture described above had yet to emerge at that time.
Two more multiple scattering effects come into play when the density is large enough for the electron mfp and wavelength to become comparable. The first one is a quantum self-interference of the electron wave function scattered off atoms located along paths connected by time-reversal symmetry [37] that leads to an increase of the rate of back-scattering [38].
The second effect is due to correlations among scatterers. The electron wave packet extends over a wide region encompassing many atoms. The total scattered wave packet is obtained by coherently summing up all partial scattering amplitudes contributed by each atom and the resulting cross section is enhanced by the static structure factor of the gas [39].
The latter two multiple scattering effects deeply influence the propagation of the wave packet and, hence, the electron mobility, though they do not alter the electron energy distribution function very much. We can neglect them in the analysis of the present experiment mainly because the YB model does not deal with the electron wave packet propagating through the gas from the cathode to the anode: it only treats the charge backscattered in the first encounter, and the collected charge is obtained simply as the difference between the injected and the back-reflected charge.
The electron energy distribution function g (E) is given by the Davydov-Pidduck distribution function [40,41]
g(E) = C \exp\left\{ -\int_0^{E} \left[ k_B T + \frac{M}{6mz} \left( \frac{eE}{N\sigma_{mt}(z + E_K(N))} \right)^2 \right]^{-1} dz \right\} \qquad (16)
Here, k_B is the Boltzmann constant, and M and m are the masses of the argon atom and of the electron, respectively. The normalization constant C is such that \int_0^{\infty} z^{1/2} g(z)\, dz = 1.
In equation 16 we have made explicit that the cross section is evaluated at the shifted energy, whereas we have dropped the corrections due to correlation and self-interference effects with respect to the formulas used for the mobility [5]. Once the distribution is known, averages can be calculated. In particular, it can be shown that the electron mean energy remains approximately thermal, except for the contribution E_K, up to quite high field strengths, E/N ≈ 2 or 3 mTd depending on the density. We believe that this further means that thermalization in dense argon gas must be a fairly rapid process.
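Equation 16 is straightforward to evaluate on an energy grid. The following sketch (illustrative only; σ_mt is a user-supplied, vectorized function and all quantities are in SI units) normalizes g so that ∫ z^{1/2} g(z) dz = 1, after which averages such as the mean energy follow by simple quadrature:

```python
import numpy as np

KB = 1.380649e-23        # Boltzmann constant [J/K]
QE = 1.602176634e-19     # elementary charge [C]
M_AR = 6.6335e-26        # argon atomic mass [kg]
M_EL = 9.1093837015e-31  # electron mass [kg]

def trap(y, x):
    """Trapezoidal integral of y(x)."""
    return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

def davydov_pidduck(z, E_over_N, N, sigma_mt, E_K, T):
    """Equation 16 on an energy grid z [J] (z[0] must be > 0 to avoid division by zero)."""
    E_field = E_over_N * N                       # electric field [V/m]
    inv = 1.0 / (KB * T + (M_AR / (6.0 * M_EL * z))
                 * (QE * E_field / (N * sigma_mt(z + E_K))) ** 2)
    cum = np.concatenate(([0.0], np.cumsum(0.5 * (inv[1:] + inv[:-1]) * np.diff(z))))
    g = np.exp(-cum)
    return g / trap(np.sqrt(z) * g, z)           # enforce the normalization of the text

# mean electron energy:  <E> = trap(z**1.5 * g, z)
```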
In figure 8 we show the momentum transfer scattering cross section evaluated at the mean shifted energy Ē = ⟨E⟩ + E_K, where ⟨. . .⟩ indicates a thermal average. As anticipated, the scattering cross section now shows a strong density dependence, especially at low E/N, that is acquired through the effect of the energy shift E_K(N) combined with the very rapid decrease of σ_mt with energy shown in figure 7. In the low-field region electrons are thermal, but their mean energy is Ē = (3/2)k_B T + E_K(N) > (3/2)k_B T. Thus, the average cross section, which can be well approximated in the low-field region by the cross section evaluated at the mean energy, ⟨σ_mt(E)⟩ ≃ σ_mt(Ē), turns out to be much smaller than if it were evaluated at thermal energy only. For instance, for T = 142.6 K and N = 0.26 nm⁻³, (3/2)k_B T ≈ 18 meV and E_K ≈ 10 meV; σ_mt(E = (3/2)k_B T) ≈ 4 × 10⁻²⁰ m², whereas σ_mt(E = Ē) ≈ 3 × 10⁻²⁰ m².
In figure 5 the solid line represents σ_mt(Ē), which compares very favorably with the values of the density-dependent cross section obtained by analyzing the collected charge data according to the YB model. This good agreement between theory and experimental data lends credibility to our analysis.
Moreover, we are now in a position to explain the discrepancy between our data and those of Smejtek et al. [9]. First of all, in the analysis of their data, Smejtek et al. assumed the injection energy E_0 ≈ 1 eV to be independent of density. They thus disregarded the density variation of the bottom of the conduction band V_0 by setting E_0(N_1)/E_0(N) = 1 in equation 9. As a consequence of the rapid variation of V_0 with N, they overestimated the cross section by a factor [E_0 − V_0(N)]/E_0, leading to a correction that can be as large as 30 % for N = 10 nm⁻³ if E_0 = 1 eV. Were the injection energy E_0 < 1 eV, the overestimation factor would be even worse.
A second issue is that they normalized the data by using σ_0 values derived from old swarm experiments at very high reduced electric fields E/N [42,43]. At such high fields the electron drift mobility no longer depends on density and is much smaller than at low fields, thus leading to an erroneously large estimate of the cross section. The old swarm experiments have been superseded by more recent ones [44,5], in which the drift mobility has also been measured in the limit of low fields. These new experiments have clarified the physical nature of the density dependence of the cross section for measurements in gas under pressure, and their results for the cross section are consistent with its determination from low-density swarm data [30] and from beam experiments [31]. The new mobility data [44,5] allow a much more accurate estimate of σ_0 at the density of Smejtek's experiment.
By correcting for the overestimation of the E_0(N_1)/E_0(N) factor and of σ_0, the data of Smejtek et al. can now be shown to be in far better agreement with the cross section values obtained in the present experiment, as shown by the crosses in figure 5.
As already stated, the present data cover a range of reduced electric fields a couple of orders of magnitude smaller than in the previous experiment [9]. In this range quasifree electrons are thermal. It is evident from figure 2 that, as E/N is reduced, for each density there is a crossover region leading to deviations from the YB (E/N)^{1/2}-law. In the low-field region the collected charge shows a much stronger dependence on E/N than at high fields. Moreover, the higher the density, the stronger the field dependence. We do not have an explanation for this behaviour, but we propose an analysis to show that it is still related to the properties of the momentum transfer scattering cross section even at quite low reduced fields.
In order to make deviations from the YB law more evident, we factor the (E/N)^{1/2} dependence out of the data and plot Q/(E/N)^{1/2} as a function of E/N in figure 9, again only for a few isopycnal curves for the sake of clarity. Except for the smallest density, for which very low values of E/N were not reached, Q/(E/N)^{1/2} is strongly peaked for all other densities in the range (0.3 < E/N < 0.7) mTd.
This behaviour of Q/(E/N)^{1/2} very closely resembles that of the density-normalized electron mobility µN as a function of E/N (see figure 2 of reference [5]). µN shows a maximum that is related to the RT minimum of σ_mt. The similarity of the behaviour of µN and Q/(E/N)^{1/2} is hardly surprising because µN is a suitable thermal average of σ_mt⁻¹ [5] and Q/(E/N)^{1/2} ∝ σ^{-1/2} according to the YB model, equation 6. The main difference between the behaviours of µN and Q/(E/N)^{1/2} is that the mobility maximum occurs at a value (E/N)_m that decreases with increasing N, starting from (E/N)_m ≈ 4 mTd for N = 0.37 nm⁻³ down to (E/N)_m ≈ 2 mTd for N ≈ 6.1 nm⁻³ [5], whereas the position of the Q/(E/N)^{1/2} maximum, though the quality of the data does not allow us to locate it with great accuracy, apparently occurs at nearly the same, or only slightly increasing, values of E/N for isopycnals of increasing N.
The decrease of (E/N ) m with increasing N in the mobility case has been rationalized [5,19] by realizing that the mobility maximum is the fingerprint of the RT minimum of σ mt occurring for E ≈ 230 meV. As the average energy of electrons is increased by the kinetic energy shift E K as N increases, less energy ∝ E/N has to be supplied by the field with increasing N in order that the average electron energy equals that of the RT minimum.
Unfortunately, we do not have at present any similar, simple explanation for the behaviour of the Q/(E/N ) 1/2 maximum. We can only argue that the values of E/N corresponding to the maximum deviation from the YB law are smaller than those of the mobility maximum because in the present case the electrons are already epithermal upon injection.
Anyway, we want to show that the collected charge data still bear a close relationship to the cross section even outside the range of (strict) validity of the YB model. In fact, by using equation 6 even at the point of maximum deviation of Q from the (E/N)^{1/2}-law, one obtains
\sigma_{Q,M} \equiv \left[ \max Q/(E/N)^{1/2} \right]^{-2} \propto \sigma \qquad (17)
In figure 10 we plot the values of σ_{Q,M} determined by using the same procedure as for equation 10. The datum for the lowest density is not shown because a maximum is hardly observed at all for that density. σ_{Q,M} has thus been normalized to the average cross section σ_mt(Ē) calculated for N = 0.77 nm⁻³. σ_mt(Ē) is calculated theoretically from σ_mt(E) as explained before and, again, Ē = (3/2)k_B T + E_K(N). Once more, we note that the overall behaviour of the density dependence of the cross section determined at low E/N is in excellent agreement with the model that takes multiple scattering effects into account. This fact validates the analysis carried out previously.
Conclusions
We have studied the collection efficiency of photoelectrons injected into dense argon gas at low temperature. Our data cover a range of density-normalized electric fields E/N much lower than that of previous data, which were measured at higher fields by exploiting a different injection technique [9]. In the high-field region, our data and the previous ones compare favorably with the only available theoretical model [8]. This model predicts that a fraction of the epithermal electrons injected into a gas may be returned to the cathode as soon as they undergo their first scattering event. According to this model, the momentum transfer scattering cross section can be deduced from the dependence of the collected charge on the reduced electric field.
In view of the more complete knowledge now available about the scattering processes in dense rare gases, which includes multiple scattering effects [5], we have been able to relate the cross section determined from the charge data to the thermal average of the gas-phase scattering cross section [31]. Several problems, however, still remain unsolved. In our opinion, the most severe one concerns the hypothesis assumed by YB to derive their model. According to this hypothesis, the fate of an electron depends on its first scattering encounter. By contrast, we obtain good agreement with the model by calculating the thermal average of the gas-phase cross section and by taking multiple scattering effects into account. This means that the electrons must have reached thermal equilibrium with the gas, and it is very well known that thermalization occurs only after a very large number of collisions [10,29]. This overtly contradicts the YB hypothesis.
MC-based calculations, by showing that a very large number of electron-atom collisions is required to determine the fate of a given electron, confirm that any attempt to explain the collection efficiency in terms of what occurs at the first encounter appears to be unrealistic (for a more complete discussion, see [10]).
MC calculations do reproduce the observed (E/N ) 1/2 -law as a consequence of purely statistical effects [10] though they do not suggest any physical explanations for the explicit square-root functional form. Moreover, they fail at reproducing the density ordering of the experimental data in argon. This result is hardly surprising because MC studies have been carried out for classical trajectories without taking into account the quantum multiple scattering effects, which are active in a dense gaseous environment [5].
In any case, neither the YB model nor the MC calculations provide an explanation for the change of behaviour of the collected charge as a function of the electric field at very low fields that we have observed.
Figure 1. Q vs E for N (nm⁻³) = 0.26 (closed circles), 0.51 (diamonds), 0.77 (triangles), 1.54 (crosses), 2.33 (squares), and 3.09 (open circles).
Figure 2. Q vs E/N for N (nm⁻³) = 3.09 (closed circles), 2.06 (open circles), 1.03 (closed diamonds), and 0.26 (squares). 1 mTd = 10⁻²⁴ V m². Solid line: (E/N)^{1/2}-law.
Figure 3. Schematic diagram of the energy levels at the cathode-gas interface.
Figure 4. Ratio of the collected to the saturation current in the YB model as a function of the parameter d = [eE/(E_0 N σ_mt)]^{1/2} (equation 5). Dotted line: equation 6.
Figure 5. Density dependence of the momentum transfer scattering cross section determined by equation 10 at fairly high fields. Dots: present determination of σ, normalized by the cross section value at N = 0.26 nm⁻³. Solid line: σ_mt evaluated at the shifted thermal energy (see text). Open squares: data by Smejtek et al. [9]. Crosses: corrected Smejtek data (see text).
Figure 6. Density dependence of the kinetic energy shift E_K [5].
Figure 7. Energy dependence of the momentum transfer scattering cross section σ_mt of argon [31].
Figure 8. E/N-dependence of the momentum transfer scattering cross section σ_mt evaluated at the shifted mean energy Ē = ⟨E⟩ + E_K for N (nm⁻³) = 0.26 (solid line), 1.28 (dash-dotted line), and 3.09 (dashed line).
Figure 9. Deviations of the experimental data from the YB law, Q/(E/N)^{1/2}, as a function of E/N for N (nm⁻³) = 3.09 (squares), 1.8 (diamonds), 0.77 (triangles), and 0.26 (circles).
Figure 10. Cross section determined by using the maximum of Q/(E/N)^{1/2} at low E/N (squares). Solid line: σ_mt(Ē) (see text).
References
[1] W. F. Schmidt. Liquid State Electronics of Insulating Liquids. CRC Press, Boca Raton, 1997.
[2] A. O. Allen, P. J. Kuntz, and W. F. Schmidt. Emergence of photoelectrons from a metal surface into liquid argon: a Monte Carlo treatment. The Journal of Physical Chemistry, 88:3718-3722, 1984.
[3] D. F. Blossey. One-dimensional Onsager theory for carrier injection in metal-insulator systems. Physical Review B, 9:5183-5187, 1974.
[4] M. Silver, P. Kumbhare, P. Smejtek, and D. G. Onn. Hot electron injection into liquid argon from a tunnel cathode. Journal of Chemical Physics, 52:5195-5199, 1970.
[5] A. F. Borghesani, M. Santini, and P. Lamp. Excess electron mobility in high-density argon gas. Physical Review A, 46:7902-7909, 1992.
[6] J. P. Hernandez. Electron self-trapping in liquids and dense gases. Reviews of Modern Physics, 63:675-697, 1991.
[7] J. Lekner. Motion of electrons in liquid argon. Physical Review, 158:130-137, 1967.
[8] L. A. Young and N. E. Bradbury. Photoelectric currents in gases between parallel plate electrodes. Physical Review, 43:34-37, 1933.
[9] P. Smejtek, M. Silver, K. S. Dy, and D. G. Onn. Hot electron injection into dense argon, nitrogen, and hydrogen. Journal of Chemical Physics, 59:1374-1384, 1973.
[10] P. J. Kuntz and W. F. Schmidt. A classical trajectory Monte Carlo model for the injection of electrons into gaseous argon. The Journal of Chemical Physics, 76:1136-1145, 1982.
[11] A. F. Borghesani, L. Bruschi, M. Santini, and G. Torzo. Electron mobility in neon at high density. Physical Review A, 37:4828-4835, 1988.
[12] A. F. Borghesani and M. Santini. Electron mobility and localization effect in high-density neon gas. Physical Review A, 42:7377-7388, 1990.
[13] A. F. Borghesani and M. Santini. High-temperature electron localization in dense He gas. Physical Review E, 65:056403, 2002.
[14] R. Eibl, P. Lamp, and G. Buschhorn. Measurement of electron mobility in liquid and critical argon at low electric-field strengths. Physical Review B, 42:4356-4362, 1990.
[15] P. Lamp. Untersuchung zur photoelektrischen Injektion von Elektronen in fluessiges Argon. PhD thesis, Technische Universitaet Muenchen, 1989.
[16] A. F. Borghesani, L. Bruschi, M. Santini, and G. Torzo. Simple photoelectronic source for swarm experiments in high-density gases. Review of Scientific Instruments, 57:2234-2237, 1986.
[17] A. F. Borghesani and M. Santini. Electron swarm experiments in fluids - signal waveform analysis. Measurement Science and Technology, 1:939-947, 1990.
[18] C. Tegeler, R. Span, and W. Wagner. Eine neue Fundamentalgleichung fuer das fluide Zustand von Argon fuer Temperaturen von der Schmelzlinie bis 700 K und Druecke 1000 MPa. Technical Report vol. 3 (nr. 480), Verein Deutscher Ingenieure Verlag, Duesseldorf, 1997.
[19] A. F. Borghesani. Electron mobility maximum in dense argon gas at low temperature. Journal of Electrostatics, 53:89-106, 2001.
[20] R. Reininger, U. Asaf, I. T. Steinberger, and S. Basak. Relationship between the energy V0 of the quasi-free-electron and its mobility in fluid argon, krypton, and xenon. Physical Review B, 28:4426-4432, 1983.
[21] R. H. Fowler. The analysis of photoelectric sensitivity curves for clean metals at various temperatures. Physical Review, 38:45-56, 1931.
[22] L. A. DuBridge. A further experimental test of Fowler's theory of photoelectric emission. Physical Review, 39:108-118, 1932.
[23] L. A. DuBridge. Theory of the energy distribution of photoelectrons. Physical Review, 43:727-741, 1933.
[24] J. D. Jackson. Classical Electrodynamics. Wiley, New York, 1999.
[25] G. C. Maitland, M. Rigby, E. Brian Smith, and W. A. Wakeham. Intermolecular Forces. Their Origin and Determination. Clarendon, Oxford, 1981.
[26] J. J. Thomson and G. P. Thomson. Conduction of Electricity Through Gases. Cambridge University Press, Cambridge, 1928.
[27] L. B. Loeb. Basic Processes of Gaseous Electronics. University of California Press, Berkeley, 1955.
[28] A. Békiarian. Transport de charges dans un gaz, au voisinage d'une paroi et en présence d'un champ magnétique. Journal de Physique (Paris), 29:434-442, 1968.
[29] A. Mozumder. Electron thermalization in gases. II. Neon, argon, krypton, and xenon. The Journal of Chemical Physics, 72:6289-6298, 1980.
[30] G. N. Haddad and T. F. O'Malley. Scattering cross sections in argon from electron transport parameters. Australian Journal of Physics, 35:35-39, 1982.
[31] M. Weyhreter, B. Barzick, A. Mann, and F. Linder. Measurements of differential cross sections for e-Ar, Kr, Xe scattering at E = 0.05-2 eV. Zeitschrift für Physik D: Atoms, Molecules and Clusters, 7:333-347, 1988.
[32] A. F. Borghesani, G. Carugno, and M. Santini. Experimental determination of the conduction band of excess electrons in liquid Ar. IEEE Transactions on Electrical Insulation, 26:615-622, 1991.
[33] E. Fermi. Sopra lo spostamento per pressione delle righe elevate delle serie spettrali. Il Nuovo Cimento, 11:157-166, 1934.
[34] B. E. Springett, J. Jortner, and M. H. Cohen. Stability criterion for the localization of an excess electron in a nonpolar fluid. The Journal of Chemical Physics, 48:2720-2731, 1968.
[35] J. P. Hernandez and L. W. Martin. Analysis of excess electron states in neon gas. Physical Review A, 43:4568-4571, 1991.
[36] P. Lamp and G. Buschhorn. Electron transport in fluid argon in combined electric and magnetic fields. Physical Review B, 50, 1994.
[37] G. Ascarelli. Calculation of the mobility of electrons injected in liquid argon. Physical Review B, 33:5825-5833, 1986.
[38] V. M. Atrazhev and I. T. Iakubov. The electron drift velocity in dense gases. Journal of Physics D: Applied Physics, 10, 1977.
[39] J. Lekner. Scattering of waves by an ensemble of fluctuating potentials. Philosophical Magazine, 18:1281-1286, 1968.
[40] G. H. Wannier. Statistical Physics. Dover, New York, 1966.
[41] M. H. Cohen and J. Lekner. Theory of hot electrons in gases, liquids, and solids. Physical Review, 158:305-309, 1967.
[42] R. Grunberg. Messungen der Elektronenbeweglichkeit bei hohen Gasdrücken in Ar, He, N2 und H2. Zeitschrift für Naturforschung, 23a:1994-2004, 1968.
[43] N. L. Allen and B. A. Prew. Some measurements of electron drift velocities in compressed gases. Journal of Physics B: Atomic and Molecular Physics, 3:1113-1126, 1970.
[44] A. K. Bartels. Density dependence of the electron drift velocity in argon. Physics Letters A, 44:403-404, 1973.
Improving Surgical Situational Awareness with Signed Distance Field: A Pilot Study in Virtual Reality
Hisashi Ishida
Juan Antonio Barragan
Adnan Munawar
Zhaoshuo Li
Peter Kazanzides
Michael Kazhdan
Danielle Trakimas
Francis X Creighton
Russell H Taylor
The introduction of image-guided surgical navigation (IGSN) has greatly benefited technically demanding surgical procedures by providing real-time support and guidance to the surgeon during surgery. To develop effective IGSN, a careful selection of the information provided to the surgeon is needed. However, identifying optimal feedback modalities is challenging due to the broad array of available options. To address this problem, we have developed an open-source library that facilitates the development of multimodal navigation systems in a wide range of surgical procedures relying on medical imaging data. To provide guidance, our system calculates the minimum distance between the surgical instrument and the anatomy and then presents this information to the user through different mechanisms. The real-time performance of our approach is achieved by calculating Signed Distance Fields at initialization from segmented anatomical volumes. Using this framework, we developed a multimodal surgical navigation system to help surgeons navigate anatomical variability in a skull-base surgery simulation environment. Three different feedback modalities were explored: visual, auditory, and haptic. To evaluate the proposed system, a pilot user study was conducted in which four clinicians performed mastoidectomy procedures with and without guidance. Each condition was assessed using objective performance and subjective workload metrics. This pilot user study showed improvements in procedural safety without additional time or workload. These results demonstrate our pipeline's successful use case in the context of mastoidectomy.
I. INTRODUCTION
Technically demanding surgical procedures, such as skull-base surgical procedures, have greatly benefited from the introduction of image-guided surgical navigation (IGSN). IGSN systems use preoperative models of patient anatomy derived from Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) images for surgical planning. Intraoperatively, these models can be registered to the actual patient, and the motions of surgical instruments relative to the patient can be tracked. Using this information, a navigation system can provide the surgeon with real-time support information and guidance, leading to improved surgical situational awareness, lower mental demands, and higher patient safety [1].
Fig. 1. The hardware setup emulates a real mastoidectomy environment with the head-mounted display (HMD) in place of a stereo microscope, the haptic device in place of the surgical drill, and a foot pedal interface for actuating the drill. The setup is housed on a movable cart for portability.
The effectiveness of a navigation system depends not only on accurate registration algorithms but also on carefully tailoring the information presented to the surgeon [2]. IGSN systems can be broadly categorized into systems that only provide support information to the surgeon and systems that directly affect the surgeon's actions, e.g., a robotic surgical system that enforces safety barriers [3], [4]. IGSN can be further categorized depending on the medium used to provide navigational information, e.g., visual, auditory, and tactile information. Given the broad spectrum of available options to present feedback, identifying optimal modalities for a specific surgical context is challenging.
Surgical simulation environments present a cost-effective solution to the problems of designing and evaluating guidance systems. First, surgical simulation allows testing the navigation systems on highly realistic surgical environments using anatomic models created from CT or MRI images. Second, simulations are controlled environments that are well suited to evaluate the effect of guidance on surgical situational awareness, perceived workload, and skill maintenance. Nevertheless, adding surgical navigation capabilities to existing simulation environments is still a challenging task, requiring the integration of multiple software components without degrading the simulation performance. For example, providing the surgeon with a real-time warning when a surgical tool is getting close to delicate anatomy requires very efficient calculation of tool-to-model distances.
Fig. 2. SDF calculation is done at the initialization phase using the same CT scan that is loaded into the simulation environment. At runtime, multimodal feedback is provided to the users via the haptic device, speakers, and head-mounted display incorporated in the FIVRS system [5].
To facilitate the development process of novel guidance systems, we have developed a modular and open-source plugin that enables multimodal surgical navigation for the Asynchronous Multi-Body Framework (AMBF) [6] simulator. At the heart of our plugin, we have integrated a library to calculate Signed Distance Fields (SDF) from segmented anatomical volumes at initialization. The resulting SDF volumes can then be used to calculate in real time the minimum distance between the virtual instruments and different anatomies and provide feedback to the user (Fig. 2).
The plugin was designed to be highly applicable to various surgical procedures that rely on imaging data. The flexibility of this framework can be attributed to two reasons. Firstly, the plugin directly supports the loading of segmented anatomical volumes created with 3D Slicer [7], an open-source platform popular among clinical users for analyzing and displaying information derived from medical imaging. Secondly, the plugin allows easy customization of the feedback modalities, so that it can be used in multiple procedures and surgical specialties with potentially different safety constraints.
Using this framework, we have developed a multimodal surgical navigation system for the skull-base surgery simulation environment FIVRS (Fig. 1) [5]. We have two goals for the navigational system: (1) identifying skull-base surgeons' preferred feedback modalities to navigate anatomical variability; and (2) demonstrating the flexibility of our proposed method to account for different types of feedback modalities. The selected feedback modalities for this system were visual, auditory, and haptic. A pilot user study was conducted with three experienced surgeons and a medical student to evaluate the utility of the system and guidance modalities. The results of this pilot study will be used in the future to guide a larger user study aimed at identifying the optimal feedback modalities in skull-base surgery.
Although the proposed framework was evaluated on a single surgical procedure, we emphasize that our plugin can be used to develop surgical navigation systems in any surgical specialty that relies on CT scans, e.g., sinus, orthopedic, spinal, and laryngeal surgery. In summary, this paper reports the following contributions: (1) a multimodal navigation system for skull base surgery; (2) an open-source and modular library that enables the development of IGSN systems based on Signed Distance Field for a wide range of surgical procedures; and (3) a pilot study showing the utility of the system in the context of a mastoidectomy procedure.
II. RELATED WORK
Traditionally, image-guidance systems have relied on additional screens to display the position of surgical tools relative to the patient anatomy. However, these systems are rarely used in temporal bone surgery, as the surgeon would have to switch their attention from the surgical field to the guidance screen [8]. In this regard, less intrusive feedback modalities such as audio, visual overlays, and haptic feedback have been shown to be more promising in mastoidectomy procedures. An audio guidance system was proposed by Cho et al. [9] to avoid damage to the facial nerve. This system would gradually increase the alarm frequency as the drill got closer to the facial nerve to alert the surgeon.
Regarding visual guidance, one common approach has been to use a head-mounted display (HMD) to annotate the surgeon's field of view. For example, Rose et al. [10] used a Microsoft HoloLens HMD to overlay transparent images of the neck and temporal bone anatomy on phantom models. Finally, haptic feedback provided by cooperatively controlled robots has been proposed as a mechanism to improve safety in surgery. Ding et al. [11] demonstrated that virtual fixtures could be enforced by a cooperatively controlled robotic system with sub-millimeter accuracy in a phantom drilling experiment.
Our current system integrates multiple previously proposed feedback modalities into a single system and uses SDF information to activate them in a timely manner. This allows surgeons to experience multiple types of feedback on the same simulated task, enabling objective comparisons across the modalities. Furthermore, testing the guidance modalities in a repeatable and controllable environment, such as a VR simulation, isolates the effects of different modalities on performance and mental demand.
III. METHODOLOGY
The development of our multimodal surgical guidance system is presented as follows. Section III-A describes how the anatomy is represented and how SDF volumes are defined. Section III-B describes how the SDF volumes are calculated and the library used to compute them. Finally, Section III-C describes the development of three SDF-based feedback modalities.
A. Anatomy representation and SDF definition
We use 3D Slicer to visualize and segment preoperative CT and MRI images. For our study, we use patient models derived from a collection of patient CT scans. Multiple anatomic structures for each patient were segmented using methods developed by Ding et al. and saved in Segmented Nearly Raw Raster Data (".seg.nrrd") format [12]. This data is then loaded in the simulation and rendered in the scene as described in [5].
The same segmented CT images are used for SDF generation to ensure consistency between the SDF's voxel coordinates and the surgical simulation models. A plugin was developed to load the SDF volumes at initialization and to query the stored values at runtime.
B. Calculation of SDF
Each segmented CT scan comprises several anatomies ($n = 16$ in our current experiments). To calculate the SDF volume for the $n$-th anatomy, $S^{(n)}$, a C++ open-source library was used [13]. This library provides an efficient and parallelized implementation of Saito and Toriwaki's method [14] for SDF calculation. An SDF volume is represented as a 3D voxel grid in which the value at each voxel is the signed distance to the closest point on a specific anatomy (Fig. 3). Positive and negative values of the distance indicate voxels that are exterior and interior to the anatomy, respectively.
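As a rough illustration of this step (not the plugin's actual implementation, which relies on the C++ library of [13]), the short Python sketch below builds one signed distance volume per anatomy from a labeled segmentation using SciPy's Euclidean distance transform; the function names, labels, and voxel spacing are placeholders.

```python
# Illustrative sketch only: build one signed distance volume per segmented
# anatomy from a labeled voxel grid (positive outside, negative inside).
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance_volume(label_volume, label, spacing=(1.0, 1.0, 1.0)):
    """Signed distance (in mm) to the surface of `label` at every voxel."""
    inside = (label_volume == label)
    dist_outside = distance_transform_edt(~inside, sampling=spacing)  # exterior voxels
    dist_inside = distance_transform_edt(inside, sampling=spacing)    # interior voxels
    return dist_outside - dist_inside

def build_sdf_volumes(label_volume, labels, spacing):
    # One SDF volume S^(n) per anatomy, computed once at initialization.
    return {n: signed_distance_volume(label_volume, n, spacing) for n in labels}

if __name__ == "__main__":
    seg = np.zeros((64, 64, 64), dtype=np.uint8)
    seg[20:30, 20:30, 20:30] = 1   # synthetic "anatomy 1"
    seg[40:50, 10:20, 30:40] = 2   # synthetic "anatomy 2"
    sdfs = build_sdf_volumes(seg, labels=[1, 2], spacing=(0.5, 0.5, 0.5))
    print(sdfs[1].shape, float(sdfs[1][25, 25, 25]))  # negative value: inside anatomy 1
```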
C. Feedback modalities based on SDF volumes
The loaded SDF volumes allow the system to easily query the distance to the closest anatomy. Using this distance, three distinct feedback modalities were developed to improve the user's situational awareness: visual, haptic, and auditory feedback. For all modalities, the drill's position is converted to the SDF frame, giving the corresponding voxel coordinate $\mathbf{x} \equiv (i, j, k)$ within the SDF volumes. The minimum distance between the drill tip and the nearest anatomy is calculated by querying all the SDF volumes ($S^{*}(\mathbf{x}) \le S^{(n)}(\mathbf{x})$ for all $n$). The SDF volume of the closest anatomy, $S^{*}(\mathbf{x})$, is used to generate user feedback. Haptic and audio feedback are activated once the drill is within predefined thresholds. These thresholds were selected by an expert otolaryngology surgeon.

1) Visual Feedback: Our method uses the SDF to display the closest anatomy's name and the distance to it in a text box on the head-mounted stereoscopic display (Fig. 5). The text changes its color once the drill is closer than 1 mm to any anatomy. The location of this text box was determined after a pilot study with an attending otologic surgeon.

2) Audio Feedback: We implement auditory feedback to notify surgeons when the tool tip is about to collide with critical anatomies. Otologic surgeons are familiar with this kind of auditory feedback from nerve proximity monitors (e.g., [15]), and it provides a form of situational awareness that is not disruptive during the surgery. An alarm sound is generated when the drill is closer than a defined distance $\tau_a$ to the critical anatomies. Furthermore, after consulting with surgeons, we discovered that audio feedback can provide initial situational awareness cues when approaching critical anatomy, so the activation threshold for audio warnings can be larger than that used for haptic feedback.
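The per-frame query described above can be sketched as follows; this is an illustrative Python fragment rather than the plugin's API, and the coordinate transform, data layout, and threshold values are placeholder assumptions (the actual thresholds were chosen by an expert surgeon, as noted above).

```python
# Illustrative runtime query: find the closest anatomy at the drill tip and
# decide which feedback modalities to trigger. Names and thresholds are
# placeholders, not the plugin's actual API.
import numpy as np

def query_closest_anatomy(drill_tip_world, world_to_voxel, sdf_volumes):
    """Return (closest anatomy id, signed distance in mm) at the drill tip."""
    i, j, k = np.round(world_to_voxel(drill_tip_world)).astype(int)
    best_id, best_dist = None, np.inf
    for anatomy_id, sdf in sdf_volumes.items():
        d = sdf[i, j, k]                  # S^(n)(x)
        if d < best_dist:
            best_id, best_dist = anatomy_id, d
    return best_id, best_dist             # S^*(x): minimum over all anatomies

def select_feedback(distance_mm, tau_text=1.0, tau_haptic=1.5, tau_audio=3.0):
    """Placeholder thresholds in mm; audio uses a larger threshold than haptics."""
    return {
        "text_warning_color": distance_mm < tau_text,
        "haptic_force": distance_mm < tau_haptic,
        "audio_alarm": distance_mm < tau_audio,
    }
```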
3) Haptic Feedback: FIVRS adopts CHAI3D's [16] finger proxy collision algorithm [17] to provide haptic feedback by simulating the collision of the drill tip with the surface of the volume. We add an additional force term, $\mathbf{F}_{SDF} \in \mathbb{R}^3$, to the contact force provided by FIVRS to prevent the user from drilling critical anatomies. The formulation of this SDF-based force can be written as follows:
$$\mathbf{F}_{SDF} = \begin{cases} F_{max}\,(\tau_f - d_a)\,\hat{\mathbf{d}}^{(SDF)} & \text{if } d_a < \tau_f \\ 0 & \text{otherwise} \end{cases} \tag{1}$$
where $F_{max} \in \mathbb{R}$ is the maximum force in Newtons, $d_a$ is the closest distance to the anatomy, $\tau_f$ is the activation threshold for haptic feedback, and $\hat{\mathbf{d}}^{(SDF)}$ is the direction of the force. This direction is calculated as $\hat{\mathbf{d}}^{(SDF)} = \vec{d}/|\vec{d}|$, where $\vec{d}$ is computed by finite differences.
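A minimal sketch of how Eq. (1) might be evaluated at runtime is given below; it is not the plugin's code, $F_{max}$ and $\tau_f$ are placeholder values, and the force direction is taken as a central finite-difference gradient of the closest anatomy's SDF (which points away from the anatomy).

```python
# Sketch of the SDF-based repulsive force in Eq. (1). F_max and tau_f are
# placeholders; d_hat is a central finite-difference gradient of S^*.
import numpy as np

def sdf_gradient_direction(sdf, i, j, k):
    d = np.array([
        sdf[i + 1, j, k] - sdf[i - 1, j, k],
        sdf[i, j + 1, k] - sdf[i, j - 1, k],
        sdf[i, j, k + 1] - sdf[i, j, k - 1],
    ])
    norm = np.linalg.norm(d)
    return d / norm if norm > 0 else np.zeros(3)

def sdf_force(sdf, voxel, d_a, F_max=3.0, tau_f=1.5):
    """F_SDF = F_max * (tau_f - d_a) * d_hat if d_a < tau_f, else 0."""
    if d_a >= tau_f:
        return np.zeros(3)
    i, j, k = voxel
    d_hat = sdf_gradient_direction(sdf, i, j, k)  # away from the anatomy surface
    return F_max * (tau_f - d_a) * d_hat
```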
IV. EVALUATION AND RESULTS
We conducted a pilot user study to illustrate the use of our framework and to provide preliminary feedback on the proposed guidance modalities for a larger follow-on study. For the experimental setup, we employed a Phantom Omni (3D Systems, USA) as a haptic device and a VIVE PRO (HTC VIVE, Taiwan) as a Head Mounted Display. The entire system was installed on a single transportable workstation (Fig. 1).
A. User Study Design
Four clinical participants (2 attending surgeons, 1 fellow, and 1 medical student), all with sufficient knowledge of mastoidectomy, were recruited for this study. We selected two segmented CT scans of the temporal bone, both of which contain the same 16 distinctive anatomies. Based on expert surgeon feedback, we confirmed that these two CT scans require similar skills to perform the mastoidectomy; thus, we assume that there is no significant difference in the complexity of the procedure. Each user was asked to drill a wide cortical mastoidectomy to the point of exposing the short process of the incus without harming the non-bone anatomies (Fig. 4). We tested four different feedback conditions: (1) no assistance, (2) visual feedback, (3) audio feedback, and (4) force feedback. The experimental order and the drilled anatomy were randomized to mitigate learning effects. Lastly, each user was allowed to familiarize themselves with the simulator before starting the experiment.
To evaluate the proposed modalities, we adopted two objective metrics, task completion time and the number of unintended voxels removed, and one subjective metric, the NASA TLX survey. Task completion time was defined as the time between the first and last removed voxel. The number of voxels removed from critical anatomy was also recorded during the experiment.
After each trial, users were asked to complete the NASA TLX [18] survey. This survey uses six indicators to evaluate the workload: mental demand, physical demand, temporal demand, performance, effort, and frustration.
B. Results
The task completion times and the number of unintended voxels removed are shown in Table I and Fig. 6. In terms of task completion time, there was no significant difference between the baseline and the three proposed assistance methods. Fig. 6 shows that haptic feedback reduced the number of inadvertent voxels removed to nearly zero. Audio feedback also reduced the number of unintended voxels removed.
The results of the NASA TLX survey can be found in Fig. 7, which shows that haptic feedback has the lowest workload in mental demand, physical demand, performance, effort, and frustration across all four conditions. Audio feedback also reduced most of the workload measures (mental demand, physical demand, performance, effort, and frustration) compared to the baseline. On the other hand, visual feedback led to higher mental demand and performance scores compared to the baseline.
V. DISCUSSION
During the user study, the proposed SDF plugin successfully offered three different assistance modalities at an interactive update rate of over 80 Hz on a system with an AMD Ryzen 5800 CPU, 32 GB of DDR4 RAM, and an RTX 3080 GPU. This suggests that the proposed pipeline was able to generate real-time guidance from patients' CT scans. The user study results showed that haptic feedback was able to reduce the number of unintended contacts with critical anatomies and improve multiple workload metrics. Although audio feedback did not impose physical restrictions to avoid collisions, most participants felt that it was useful and reduced mental demands. One of the participants did note, however, that not knowing which anatomy triggered the feedback increased his mental workload. Visual feedback had a negative impact on drilling assistance for most participants, which can be attributed to the type of information we offered. One of the participants stated that the location of the text overlay was too distant from the workspace and that showing the distance was too much information to process; consequently, they had to shift their view away from the drill. These findings align with the high variance in unintended voxels removed under visual feedback, as well as with the NASA TLX results. To address these issues, an alternate visual feedback method will be implemented in future work in which the distance information is conveyed by changing the drill's or anatomy's color. Lastly, the least experienced participant found the visual feedback to be a helpful learning tool, since it provides information about anatomies that are not physically visible.
VI. SUMMARY AND FUTURE WORK
This paper reports the development of a novel and highly applicable framework for developing real-time surgical navigation systems based on the Signed Distance Field (SDF). The framework was designed to be adaptable to any surgical procedure that relies on CT or MRI imaging and to accommodate different feedback modalities. Using our proposed method, we developed a multimodal guidance system (visual, audio, and haptic) for a mastoidectomy virtual drilling simulator. A pilot user study with 3 surgeons and 1 medical student showed that our guidance system reduced unintended contact with critical anatomies and lowered mental demands without increasing operation time. We also found that users preferred haptic and audio feedback over visual feedback.
In future work, we plan to conduct more extensive studies analyzing the effect of multiple feedback modalities on surgical performance. Furthermore, we plan to develop guidance systems for different surgical procedures, such as laminectomy and sinus surgery, to test the general applicability of our system. Lastly, by combining our approach with the digital twin framework [19], we aspire to implement SDF-based guidance modalities on existing robotic systems and real surgical phantoms.
ACKNOWLEDGMENTS AND DISCLOSURES
Hisashi Ishida was supported in part by the ITO foundation for international education exchange, Japan. This work was also supported in part by a research contract from Galen Robotics, by NIDCD K08 Grant DC019708, by a research agreement with the Hong Kong Multi-Scale Medical Robotics Centre, and by Johns Hopkins University internal funds. Russell Taylor and Johns Hopkins University (JHU) may be entitled to royalty payments related to the technology discussed in this paper, and Dr. Taylor has received or may receive some portion of these royalties. Also, Dr. Taylor is a paid consultant to and owns equity in Galen Robotics, Inc. These arrangements have been reviewed and approved by JHU in accordance with its conflict of interest policy. This study was approved by the Johns Hopkins University Institutional Review Board under the protocol IRB00264318.
SUPPLEMENTARY INFORMATION
A supplementary video is provided with the submission. For more information, visit the project repository at https://github.com/jabarragann/volumetric_drilling.
1 Department of Computer Science, Johns Hopkins University, Baltimore, MD 21218, USA. 2 Department of Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, MD 21287, USA. *These authors contributed equally. Email: {hishida3,
Fig. 1: Hardware setup for the virtual drilling simulator.
Fig. 3: Example visualization of an SDF volume's slice. (a) Segmented CT scan showing three anatomies: Temporomandibular (TMJ), Ear Canal (EAC), Sinus; (b) SDF slice for TMJ; (c) SDF slice for EAC; and (d) SDF slice for Sinus. Voxels at each slice store the minimum distance between that voxel's location and a specific anatomy. The units for the color scale are mm. (e) Combined SDF image of TMJ, EAC, and Sinus. In the combined slice, different regions are color-coded by the closest anatomy (Green: EAC, Red: TMJ, Blue: Sinus).
Fig. 4: Sequence of snapshots (from 1 to 4) showing the mastoidectomy procedure performed by the user study participants.
Fig. 5: Visual feedback. The textual overlay provides the name of the closest anatomy and the distance to that specific anatomy via the HMD in stereoscopic view.
Fig. 6: User study objective metrics. (a) Completion time per anatomy and (b) number of unintended voxels removed.
Fig. 7: NASA TLX results. Values near the center indicate better results.
Fig. 2: System architecture. The proposed surgical navigation plugin is developed on top of the FIVRS framework. (Diagram labels: drill pose, segmented CT, SDF volumes, initialization, runtime, drilling simulator, user interfaces (HMD, speaker, haptic device), and image, force, and visual/audio/haptic feedback signals.)
TABLE I: Objective performance metrics. Among all experimental conditions, haptic guidance led to a reduction of unintended voxels removed without increases in completion time.
[1] M. Luz, G. Strauss, and D. Manzey, "Impact of image-guided surgery on surgeons' performance: a literature review," International Journal of Human Factors and Ergonomics, vol. 4, no. 3-4, pp. 229-263, 2016.
[2] N. Matsumoto, B. Cho, M. Hashizume, and T. Nakagawa, "An Augmented Reality Interface for Endoscopic Ear Surgery," in Innovations in Endoscopic Ear Surgery, S. Kakehata, T. Ito, and D. Yamauchi, Eds. Singapore: Springer, 2020, pp. 73-78.
[3] M. Li, M. Ishii, and R. H. Taylor, "Spatial motion constraints in medical robot using virtual fixtures generated by anatomy," IEEE Transactions on Robotics, vol. 23, no. 1, pp. 4-19, 2007.
[4] Z. Li, A. Gordon, T. Looi, J. Drake, C. Forrest, and R. H. Taylor, "Anatomical mesh-based virtual fixtures for surgical robots," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 3267-3273.
[5] A. Munawar, Z. Li, N. Nagururu, D. Trakimas, P. Kazanzides, R. H. Taylor, and F. X. Creighton, "Fully immersive virtual reality for skull-base surgery: Surgical training and beyond," arXiv preprint arXiv:2302.13878, 2023.
[6] A. Munawar, Y. Wang, R. Gondokaryono, and G. S. Fischer, "A real-time dynamic simulator and an associated front-end representation format for simulating complex robots and environments," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Nov. 2019, pp. 1875-1882.
[7] R. Kikinis, S. D. Pieper, and K. G. Vosburgh, "3D Slicer: a platform for subject-specific image analysis, visualization, and clinical support," Intraoperative Imaging and Image-Guided Therapy, pp. 277-289, 2014.
[8] Z. G. Schwam, V. Z. Kaul, M. K. Cosetti, and G. B. Wanna, "The utility of intraoperative navigation of the temporal bone for otolaryngology resident training," The Laryngoscope, vol. 130, no. 5, pp. E368-E371, 2020.
[9] B. Cho, M. Oka, N. Matsumoto, R. Ouchida, J. Hong, and M. Hashizume, "Warning navigation system using real-time safe region monitoring for otologic surgery," International Journal of Computer Assisted Radiology and Surgery, vol. 8, no. 3, pp. 395-405, May 2013.
[10] A. S. Rose, H. Kim, H. Fuchs, and J.-M. Frahm, "Development of augmented-reality applications in otolaryngology-head and neck surgery," The Laryngoscope, vol. 129, no. S3, pp. S1-S11, 2019.
[11] A. S. Ding, S. Capostagno, C. R. Razavi, Z. Li, R. H. Taylor, J. P. Carey, and F. X. Creighton, "Volumetric accuracy analysis of virtual safety barriers for cooperative-control robotic mastoidectomy," Otology & Neurotology, vol. 42, no. 10, p. e1513, Dec. 2021.
[12] A. S. Ding, A. Lu, Z. Li, D. Galaiya, J. H. Siewerdsen, R. H. Taylor, and F. X. Creighton, "Automated registration-based temporal bone computed tomography segmentation for applications in neurotologic surgery," Otolaryngology-Head and Neck Surgery, vol. 167, no. 1, pp. 133-140, 2022.
[13] M. Kazhdan, "Euclidean distance transform (version 1.00)," https://github.com/mkazhdan/EDT, 2013.
[14] T. Saito and J.-I. Toriwaki, "New algorithms for euclidean distance transformation of an n-dimensional digitized picture with applications," Pattern Recognition, vol. 27, no. 11, pp. 1551-1565, Nov. 1994.
[15] L. R. L. Mangia, V. M. Santos, T. M. Mansur, G. R. M. Wiemes, and R. Hamerschmidt, "Facial nerve intraoperative monitoring in otologic surgeries under sedation and local anesthesia - a case series and literature review," International Archives of Otorhinolaryngology, vol. 24, no. 1, pp. e11-e17, 2020.
[16] F. Conti, F. Barbagli, D. Morris, and C. Sewell, "CHAI 3D: An open-source library for the rapid development of haptic scenes," IEEE World Haptics, vol. 38, no. 1, pp. 21-29, 2005.
[17] D. Ruspini and O. Khatib, "Haptic display for human interaction with virtual dynamic environments," Journal of Robotic Systems, vol. 18, no. 12, pp. 769-783, 2001.
[18] S. G. Hart, "NASA-task load index (NASA-TLX); 20 years later," in Proceedings of the Human Factors and Ergonomics Society Annual Meeting, vol. 50, no. 9. Sage Publications: Los Angeles, CA, 2006, pp. 904-908.
[19] H. Shu, R. Liang, Z. Li, A. Goodridge, X. Zhang, H. Ding, N. Nagururu, M. Sahu, F. X. Creighton, R. H. Taylor et al., "Twin-S: A digital twin for skull-base surgery," arXiv preprint arXiv:2211.11863, 2022.
| ["https://github.com/jabarragann/"] |
["Minimal surfaces and mean curvature flow", "Minimal surfaces and mean curvature flow"] | ["Tobias H Colding", "William P Minicozzi II"] | [] | [] | We discuss recent results on minimal surfaces and mean curvature flow, focusing on the classification and structure of embedded minimal surfaces and the stable singularities of mean curvature flow. | null | ["https://arxiv.org/pdf/1102.1411v1.pdf"] | 117117859 | 1102.1411 | 58f862d26766d31bed8bcb434a72b2a1819e112d |
Minimal surfaces and mean curvature flow

Tobias H. Colding and William P. Minicozzi II

This article is dedicated to Rick Schoen.

We discuss recent results on minimal surfaces and mean curvature flow, focusing on the classification and structure of embedded minimal surfaces and the stable singularities of mean curvature flow.
Introduction
The main focus of this survey is on minimal surfaces and mean curvature flow, but to put these topics in perspective we begin with a more elementary analysis of the energy of curves and functions. This leads us to first variation formulas for energy and to their critical points, which are of course geodesics and harmonic functions, respectively. We continue by considering the (negative) gradient flow for energy, which leads us to the curve shortening flow and the heat equation. Having touched upon these more elementary topics, we move on to one of our main topics, minimal surfaces. We discuss the first and second variations of area and volume, and the (negative) gradient flow for area and volume, which is the mean curvature flow.
The other topics that we cover are the Birkhoff min-max argument that produces closed geodesics and its higher-dimensional analog that gives the existence of closed immersed minimal surfaces. We discuss stable and unstable critical points and the index of critical points, and eventually discuss the very recent classification of all stable self-similar shrinkers for the mean curvature flow. For minimal surfaces, stability and Liouville-type theorems have played a major role in later developments, and we touch upon the Bernstein theorem, which is the minimal surface analog of the Liouville theorem for harmonic functions, and the curvature estimate, which is the analog of the gradient estimate. We discuss various quantities that are monotone under the curve shortening and mean curvature flows, like Huisken's volume, the width, and the isoperimetric ratios of Gage and Hamilton. We explain why Huisken's monotonicity implies that blow-ups of the flow at singular points in space-time can be modeled by self-similar flows, and why the classification of stable self-similar flows is expected to play a key role in understanding generic mean curvature flow, where the flow begins at a hypersurface in generic position. One of the other main topics of this survey is that of embedded minimal surfaces, where we discuss some of the classical examples going back to Euler and Monge's student Meusnier in the 18th century, the recent examples of Hoffman-Weber-Wolf, and various examples that date in between. The final main results that we discuss are the recent classification of embedded minimal surfaces and some of the uniqueness results that are now known.
It is a great pleasure for us to dedicate this article to Rick Schoen.
Harmonic functions and the heat equation
We begin with a quick review of the energy functional on functions, where the critical points are called harmonic functions and the gradient flow is the heat equation. This will give some context for the main topics of this survey, minimal surfaces and mean curvature flow, that are critical points and gradient flows, respectively, for the area functional.
Harmonic functions
Given a differentiable function $u: \mathbb{R}^n \to \mathbb{R}$, the energy is defined to be
$$E(u) = \frac{1}{2}\int |\nabla u|^2 = \frac{1}{2}\int \left(\frac{\partial u}{\partial x_1}\right)^2 + \cdots + \left(\frac{\partial u}{\partial x_n}\right)^2. \tag{1}$$
This gives a functional defined on the space of functions. We can construct a curve in the space of functions by taking a smooth function $\phi$ with compact support and considering the one-parameter family of functions $u + t\,\phi$. Restricting the energy functional to this curve gives
$$E(u + t\phi) = \frac{1}{2}\int |\nabla(u + t\,\phi)|^2 = \frac{1}{2}\int |\nabla u|^2 + t\int \langle\nabla u, \nabla\phi\rangle + \frac{t^2}{2}\int |\nabla\phi|^2. \tag{2}$$
Differentiating at t = 0, we get that the directional derivative of the energy functional (in the direction φ) is
$$\frac{d}{dt}\Big|_{t=0} E(u + t\,\phi) = \int \langle\nabla u, \nabla\phi\rangle = -\int \phi\,\Delta u, \tag{3}$$
where the last equality used the divergence theorem and the fact that $\phi$ has compact support. We conclude that:
Lemma 1. The directional derivative $\frac{d}{dt}\big|_{t=0} E = 0$ for all $\phi$ if and only if
$$\Delta u = \frac{\partial^2 u}{\partial x_1^2} + \cdots + \frac{\partial^2 u}{\partial x_n^2} = 0.$$
Thus, we see that the critical points for energy are the functions u with ∆u = 0; these are called harmonic functions. In fact, something stronger is true. Namely, harmonic functions are not just critical points for the energy functional, but are actually minimizers.
Lemma 2. If ∆u = 0 on a bounded domain Ω and φ vanishes on ∂Ω, then
$$\int_\Omega |\nabla(u + \phi)|^2 = \int_\Omega |\nabla u|^2 + \int_\Omega |\nabla\phi|^2.$$
Proof. Since $\Delta u = 0$ and $\phi$ vanishes on $\partial\Omega$, the divergence theorem gives
$$0 = \int_\Omega \mathrm{div}(\phi\,\nabla u) = \int_\Omega \langle\nabla\phi, \nabla u\rangle.$$
So we conclude that
$$\int_\Omega |\nabla(u + \phi)|^2 = \int_\Omega \left(|\nabla u|^2 + 2\langle\nabla\phi, \nabla u\rangle + |\nabla\phi|^2\right) = \int_\Omega |\nabla u|^2 + \int_\Omega |\nabla\phi|^2.$$
The heat equation
The heat equation is the (negative) gradient flow (or steepest descent) for the energy functional. This means that we evolve a function u(x, t) over time in the direction of its Laplacian ∆u, giving the linear parabolic heat equation
$$\frac{\partial u}{\partial t} = \Delta u. \tag{6}$$
Given any finite energy solution u of the heat equation that decays fast enough to justify integrating by parts, the energy is non-increasing along the flow. In fact, we have
$$\frac{d}{dt} E(u) = -\int \frac{\partial u}{\partial t}\,\Delta u = -\int (\Delta u)^2.$$
Obviously, harmonic functions are fixed points, or static solutions, of the flow.
Negative gradient flows near a critical point
We are interested in the dynamical properties of the heat equation near a harmonic function. Before getting to this, it is useful to recall the simple finite dimensional case. Suppose therefore that $f: \mathbb{R}^2 \to \mathbb{R}$ is a smooth function with a nondegenerate critical point at $0$ (so $\nabla f(0) = 0$ but the Hessian of $f$ at $0$ has rank 2). The behavior of the negative gradient flow
$$(x', y') = -\nabla f(x, y)$$
is determined by the Hessian of f at 0. The behavior depends on the index of the critical point, as is illustrated by the following examples:
(Index 0): The function f (x, y) = x 2 + y 2 has a minimum at 0. The vector field is (−2x, −2y) and the flow lines are rays into the origin. Thus every flow line limits to 0.
(Index 1): The function f (x, y) = x 2 −y 2 has an index one critical point at 0. The vector field is (−2x, 2y) and the flow lines are level sets of the function h(x, y) = xy. Only points where y = 0 are on flow lines that limit to the origin.
(Index 2): The function f (x, y) = −x 2 − y 2 has a maximum at 0. The vector field is (2x, 2y) and the flow lines are rays out of the origin. Thus every flow line limits to ∞ and it is impossible to reach 0.
Thus, we see that the critical point 0 is "generic", or dynamically stable, if and only if it has index 0. When the index is positive, the critical point is not generic and a "random" flow line will miss the critical point.
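A short numerical sketch (not from the text) makes the index-one picture concrete: integrating the negative gradient flow of $f(x, y) = x^2 - y^2$ by Euler steps, an initial point on the axis $y = 0$ flows into the origin, while an arbitrarily small $y$-component is amplified and the flow line misses the critical point.

```python
# Euler integration of (x', y') = -grad f for f(x, y) = x^2 - y^2.
# Only initial conditions with y = 0 (the stable direction) reach the origin.
import numpy as np

def flow(p0, dt=1e-3, steps=20000):
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        x, y = p
        p += dt * np.array([-2.0 * x, 2.0 * y])   # -grad(x^2 - y^2)
    return p

print(flow((1.0, 0.0)))    # approaches (0, 0): lies on the stable manifold
print(flow((1.0, 1e-6)))   # the tiny y-component blows up; the flow misses 0
```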
$f(x, y) = x^2 + y^2$ has a minimum at $0$. Flow lines: rays through the origin.

$f(x, y) = x^2 - y^2$ has an index one critical point at $0$. Flow lines: level sets of $xy$. Only points where $y = 0$ limit to the origin.
Heat flow near a harmonic function
To analyze the dynamical properties of the heat flow, suppose first that u satisfies the heat equation on a bounded domain Ω with u = 0 on ∂Ω. By the maximum principle, a harmonic function that vanishes on ∂Ω is identically zero. Thus, we expect that u limits towards 0. We show this next.
Since $u = 0$ on $\partial\Omega$, applying the divergence theorem to $u\,\nabla u$ gives
$$\int_\Omega |\nabla u|^2 = -\int_\Omega u\,\Delta u.$$
Applying Cauchy-Schwarz and then the Dirichlet Poincaré inequality gives
$$\left(\int_\Omega |\nabla u|^2\right)^2 \le \int_\Omega u^2 \int_\Omega (\Delta u)^2 \le C \int_\Omega |\nabla u|^2 \int_\Omega (\Delta u)^2.$$
Finally, dividing both sides by $\int_\Omega |\nabla u|^2$ gives
$$2\,E(t) \equiv 2\,E(u(\cdot, t)) = \int_\Omega |\nabla u|^2 \le C\int_\Omega (\Delta u)^2 = -C\,E'(t),$$
where C = C(Ω) is the constant from the Dirichlet Poincaré inequality. Integrating this gives that E(t) decays exponentially with
$$E(t) \le E(0)\, e^{-\frac{2}{C}t}.$$
Finally, another application of the Poincaré inequality shows that $\int_\Omega u^2(\cdot, t)$ also decays exponentially in $t$, as we expected.
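As a concrete check of the rate, take $\Omega = (0, \pi) \subset \mathbb{R}$ and $u(x, t) = e^{-t}\sin x$. Then $u_t = -e^{-t}\sin x = u_{xx}$, so $u$ solves the heat equation and vanishes on $\partial\Omega$, and
$$E(t) = \frac{1}{2}\int_0^\pi e^{-2t}\cos^2 x\, dx = \frac{\pi}{4}\, e^{-2t} = E(0)\, e^{-2t}.$$
Since the Dirichlet Poincaré inequality on $(0, \pi)$ holds with constant $C = 1$ (the first Dirichlet eigenvalue is $1$), this is exactly the bound $E(t) \le E(0)\, e^{-\frac{2}{C}t}$: the first eigenfunction saturates the estimate.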
If u does not vanish on ∂Ω, there is a unique harmonic function w with
$$u\big|_{\partial\Omega} = w\big|_{\partial\Omega}.$$
It follows that (u − w) also solves the heat equation and is zero on ∂Ω. By the previous argument, (u−w) decays exponentially and we conclude that all harmonic functions are attracting critical points of the flow. Since we have already shown that harmonic functions are minimizers for the energy functional, and thus index zero critical points, this is exactly what the finite dimensional toy model suggests.
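The following numerical sketch (ours) illustrates this attraction: an explicit finite-difference heat flow on $[0, 1]$ with boundary values $0$ and $1$ converges to the harmonic (here, linear) function with the same boundary data, and the Dirichlet energy of the difference decays rapidly. The grid size and time step are arbitrary choices subject to the usual stability constraint for the explicit scheme.

```python
# Explicit heat flow on [0,1] with u(0)=0, u(1)=1: u converges to the harmonic
# function w(x) = x, and the Dirichlet energy of u - w decays exponentially.
import numpy as np

N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
dt = 0.4 * dx**2                      # explicit scheme needs dt < dx^2 / 2

u = np.sin(3 * np.pi * x) + x         # initial data with the right boundary values
w = x                                 # harmonic steady state

def energy(v):
    return 0.5 * np.sum(np.diff(v)**2) / dx    # discrete Dirichlet energy

for step in range(20001):
    if step % 5000 == 0:
        print(f"step {step:6d}   E(u - w) = {energy(u - w):.3e}")
    u[1:-1] += dt * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2   # u_t = u_xx
    u[0], u[-1] = 0.0, 1.0                                     # keep boundary data

print("max |u - w| =", float(np.max(np.abs(u - w))))
```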
Energy of a curve
Geodesics in a Riemannian manifold M n arise variationally in two ways. They are critical points of the energy functional restricted to maps into M (generalizing harmonic functions) and they are also critical points of the length functional. We will first analyze the energy functional on curves.
Critical points for energy are geodesics
Suppose $\gamma$ is a closed curve in a Riemannian manifold $M^n$, i.e., $\gamma: S^1 \to M$, where the circle $S^1$ is identified with $\mathbb{R}/2\pi\mathbb{Z}$. The energy of $\gamma$ is
$$E(\gamma) = \frac{1}{2}\int_{S^1} |\gamma'|^2.$$
A variation of $\gamma$ is a curve in the space of curves that goes through $\gamma$. We can specify this by a map
$$F: S^1 \times [-\epsilon, \epsilon] \to M^n$$
with $F(\cdot, 0) = \gamma$. The variation vector field $V$ is the tangent vector to this path, given by $V = \frac{\partial F}{\partial t}$. An easy calculation shows that
$$\frac{d}{dt}\Big|_{t=0} E(\gamma(\cdot, t)) = \int \langle \gamma', F_{s,t}\rangle = -\int \langle \gamma'', V\rangle,$$
where $\gamma'' = \nabla_{\gamma'}\gamma'$. We conclude that $\frac{d}{dt}\big|_{t=0} E = 0$ for all $V$ if and only if $\gamma'' = 0$. Such a curve is called a geodesic.
Second variation of energy of a curve in a surface
We have seen that a closed geodesic $\gamma: S^1 \to M^2$ in a surface $M$ is a critical point for energy. The Hessian of the energy functional is given by the second variation formula. For simplicity, we assume that $|\gamma'| = 1$ and that $V = \phi\,\mathbf{n}$ is a normal variation, where $\mathbf{n}$ is the unit normal to $\gamma$. We compute
$$\frac{d^2}{dt^2}\Big|_{t=0} E(t) = \int |F_{s,t}|^2 - \langle\gamma'', F_{s,tt}\rangle = \int |F_{t,s}|^2 - \langle\gamma'', F_{s,tt}\rangle = \int |V'|^2 - \langle\gamma'', F_{s,tt}\rangle = \int |\phi'|^2 - K\,\phi^2 = -\int \phi\,\left(\phi'' + K\,\phi\right),$$
where K is the curvature of M .
In this calculation, we used that $F_{ss} = 0$ since $\gamma$ is a geodesic and that the curvature $K$ comes in when one changes the order of derivatives, i.e.,
$$\langle F_s, F_{s,tt}\rangle = \langle F_s, F_{tt,s}\rangle + K\left[|F_s|^2\,|F_t|^2 - \langle F_s, F_t\rangle^2\right].$$
Using that $F_t = V = \phi\,\mathbf{n}$ is perpendicular to $F_s = \gamma'$ and $|\gamma'| = 1$ gives
$$\langle F_s, F_{s,tt}\rangle = \langle F_s, F_{tt,s}\rangle + K\,\phi^2.$$
A geodesic $\gamma_0$ is stable if the Hessian of the energy functional at $\gamma_0$ has index zero, i.e., if
$$\frac{d^2}{dt^2}\Big|_{t=0} E(t) \ge 0$$
for all variations of $\gamma_0$. Roughly speaking, stable geodesics minimize energy compared to nearby curves.
Geodesics in a free homotopy class
The simplest way to produce geodesics is to look for minima of the energy functional. To get a closed geodesic, the minimization is done in a free homotopy class. A free homotopy class of a closed curve $c: S^1 \to M$ on a manifold $M$ consists of all the curves that are homotopic to $c$. Namely, a curve $\gamma$ is freely homotopic to $c$ if there exists a one-parameter family
$$F: S^1 \times [0, 1] \to M$$
so F (·, 0) = c and F (·, 1) = γ. The difference between a homotopy class and a free homotopy class is that there is no fixed base point for a free homotopy class.
Standard arguments in Riemannian geometry then give:
Lemma 3. In each free homotopy class on a closed manifold, there is at least one curve that realizes the smallest energy. This minimizing curve is a geodesic and is non-trivial if the homotopy class is non-trivial.
A torus (or donut) has genus 1.

Two curves in the same free homotopy class.
Birkhoff: A closed geodesic on a two sphere
In the 1910s, Birkhoff came up with an ingenious method of constructing non-trivial closed geodesics on a topological 2-sphere. Since $S^2$ is simply-connected, this cannot be done by minimizing in a free homotopy class. Birkhoff instead used a min-max argument to find higher index critical points. We will describe Birkhoff's idea and some related results in this section; see [B1], [B2] and section 2 in [Cr] for more about Birkhoff's ideas.
Sweepouts and the width
The starting point is a min-max construction that uses a non-trivial homotopy class of maps from S 2 to construct a geometric metric called the width. Later, we will see that this invariant is realized as the length of a closed geodesic. Of course, one has to assume that such a non-trivial homotopy class exists (it does on S 2 , but not on higher genus surfaces; fortunately, it is easy to construct minimizers on higher genus surfaces).
Let Ω be the set of continuous maps
σ : S 1 × [0, 1] → M
with the following three properties:
• For each t the map σ(·, t) is in W 1,2 .
• The map t → σ(·, t) is continuous from [0, 1] to W 1,2 .
• σ maps $S^1 \times \{0\}$ and $S^1 \times \{1\}$ to points.

Given a map $\hat\sigma \in \Omega$, the homotopy class $\Omega_{\hat\sigma}$ is defined to be the set of maps $\sigma \in \Omega$ that are homotopic to $\hat\sigma$ through maps in $\Omega$. The width $W = W(\hat\sigma)$ associated to the homotopy class $\Omega_{\hat\sigma}$ is defined by taking the infimum of the maximum of the energy of each slice. That is, set
$$W = \inf_{\sigma\in\Omega_{\hat\sigma}}\, \max_{t\in[0,1]}\, E(\sigma(\cdot, t)), \tag{7}$$
where the energy is given by
$$E(\sigma(\cdot, t)) = \int_{S^1} |\partial_x \sigma(x, t)|^2\, dx.$$
The width is always non-negative and is positive if $\hat\sigma$ is in a non-trivial homotopy class. A particularly interesting example is when $M$ is a topological 2-sphere and the induced map from $S^2$ to $M$ has degree one. In this case, the width is positive and realized by a non-trivial closed geodesic. To see that the width is positive on non-trivial homotopy classes, observe that if the maximal energy of a slice is sufficiently small, then each curve $\sigma(\cdot, t)$ is contained in a convex geodesic ball in $M$. Hence, a geodesic homotopy connects $\sigma$ to a path of point curves, so $\sigma$ is homotopically trivial.
Pulling the sweepout tight to obtain a closed geodesic
The key to finding the closed geodesic is to "pull the sweepout tight" using the Birkhoff curve shortening process (or BCSP). The BCSP is a kind of discrete gradient flow on the space of curves. It is given by subdividing a curve and then replacing first the even segments by minimizing geodesics, then replacing the odd segments by minimizing geodesics, and finally reparameterizing the curve so it has constant speed.
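As an illustration (ours, not taken from [B1] or [CM2]), here is one Birkhoff step for a closed polygon in the Euclidean plane. In the plane the minimizing geodesic between two points is the straight chord, so replacing a two-edge segment by a geodesic amounts to moving its interior vertex onto that chord; a genuine implementation on a surface would replace the chords by minimizing geodesics and reparameterize by arclength.

```python
# One Birkhoff curve-shortening step for a closed polygon in R^2 (even number
# of vertices assumed). Each step is length non-increasing.
import numpy as np

def replace_segments(points, start_parity):
    """Straighten each two-edge segment starting at an even (or odd) vertex by
    moving its interior vertex to the midpoint of the chord between its endpoints."""
    n = len(points)
    new_pts = points.copy()
    for a in range(start_parity, n, 2):
        b = (a + 2) % n
        mid = (a + 1) % n
        new_pts[mid] = 0.5 * (points[a] + points[b])
    return new_pts

def birkhoff_step(points):
    pts = replace_segments(points, start_parity=0)   # even segments first
    pts = replace_segments(pts, start_parity=1)      # then odd segments
    return pts                                       # (reparameterization omitted)

def length(pts):
    return float(np.sum(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1)))

theta = np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False)
curve = np.stack([(1.0 + 0.3 * np.cos(5 * theta)) * np.cos(theta),
                  (1.0 + 0.3 * np.cos(5 * theta)) * np.sin(theta)], axis=1)

for it in range(5):
    print(f"iteration {it}: length = {length(curve):.4f}")
    curve = birkhoff_step(curve)
```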
It is not hard to see that the BCSP has the following properties:
• It is continuous on the space of curves.
• Closed geodesics are fixed under the BCSP.
• If a curve is fixed under BCSP, then it is a geodesic.
It is possible to make each of these three properties quantitative. Using this map and more refined versions of these properties, we showed the existence of a sequence of tightened sweepouts:
Theorem 4. ([CM2]) There exists a sequence of sweepouts $\gamma^j$ with the following property: Given $\epsilon > 0$, there is $\delta > 0$ so that if $j > 1/\delta$ and
$$2\pi\, E(\gamma^j(\cdot, t_0)) = \mathrm{Length}^2(\gamma^j(\cdot, t_0)) > 2\pi\,(W - \delta), \tag{8}$$
then for this $j$ we have $\mathrm{dist}\left(\gamma^j(\cdot, t_0), G\right) < \epsilon$, where $G$ is the set of closed geodesics.
As an immediate consequence, we get the existence of non-trivial closed geodesics for any metric on S 2 ; this is due to Birkhoff. See [LzWl] for an alternative proof using the harmonic map heat flow.
Sweepouts by spheres
We will now define a two-dimensional version of the width, where we sweep out by spheres instead of curves. Let Ω be the set of continuous maps
σ : S 2 × [0, 1] → M
Figure: an initial sweepout and the tightened sweepout.

so that:
• For each t ∈ [0, 1] the map σ(·, t) is in C 0 ∩ W 1,2 .
• The map t → σ(·, t) is continuous from [0, 1] to C 0 ∩ W 1,2 .
• σ maps S 2 × {0} and S 2 × {1} to points.
Given a map β ∈ Ω, the homotopy class Ω β is defined to be the set of maps σ ∈ Ω that are homotopic to β through maps in Ω. We will call any such β a sweepout. The (energy) width W E = W E (β, M ) associated to the homotopy class Ω β is defined by taking the infimum of the maximum of the energy of each slice. That is, set
$$W_E = \inf_{\sigma\in\Omega_\beta}\, \max_{t\in[0,1]}\, E(\sigma(\cdot, t)), \tag{9}$$
where the energy is given by
$$E(\sigma(\cdot, t)) = \frac{1}{2}\int_{S^2} |\nabla_x \sigma(x, t)|^2\, dx. \tag{10}$$
The next result gives the existence of a sequence of good sweepouts.
Theorem 5. ([CM19]) Given a metric $g$ on $M$ and a map $\beta \in \Omega$ representing a non-trivial class in $\pi_3(M)$, there exists a sequence of sweepouts $\gamma^j \in \Omega_\beta$ with $\max_{s\in[0,1]} E(\gamma^j_s) \to W(g)$, and so that given $\epsilon > 0$, there exist $\bar{j}$ and $\delta > 0$ so that if $j > \bar{j}$ and
$$\mathrm{Area}(\gamma^j(\cdot, s)) > W(g) - \delta, \tag{11}$$
then there are finitely many harmonic maps $u_i: S^2 \to M$ with
$$d_V\left(\gamma^j(\cdot, s), \cup_i \{u_i\}\right) < \epsilon. \tag{12}$$
In (12), we have identified each map $u_i$ with the varifold associated to the pair $(u_i, S^2)$ and then taken the disjoint union of these $S^2$'s to get $\cup_i \{u_i\}$. The distance $d_V$ in (12) is a weak measure-theoretic distance called the "varifold distance"; see [CM19] or Chapter 3 of [CM14] for the definition.
One immediate consequence of Theorem 5 is that if s j is any sequence with Area(γ j (·, s j )) converging to the width W (g) as j → ∞, then a subsequence of γ j (·, s j ) converges to a collection of harmonic maps from S 2 to M . In particular, the sum of the areas of these maps is exactly W (g) and, since the maps are automatically conformal, the sum of the energies is also W (g). The existence of at least one non-trivial harmonic map from S 2 to M was first proven in [SaUh], but they allowed for loss of energy in the limit; cf. also [St]. Ruling out this possible energy loss in various settings is known as the "energy identity" and it can be rather delicate. This energy loss was ruled out by Siu and Yau, using also arguments of Meeks and Yau (see Chapter VIII in [ScYa2]). This was also proven later by Jost, [Jo].
Curve shortening flow
The Birkhoff curve shortening process was a kind of discrete gradient flow on the space of curves. We turn next to a continuous gradient flow that is called the curve shortening flow.
Suppose that $\gamma_0$ again is a curve, but this time we will think of it as an embedded submanifold in $\mathbb{R}^2$ or, more generally, in a surface $M^2$. We can again look at variations $\gamma_t$ of the one-dimensional submanifold $\gamma_0$ and compute the first variation of length:
$$\frac{d}{dt}\Big|_{t=0}\mathrm{Length}(\gamma_t) = \int_{\gamma_0} h\,\langle \mathbf{n}, V\rangle,$$
where $h$ is the (geodesic) curvature of the one-dimensional submanifold, given by
$$h = \langle \nabla_{e_1}\mathbf{n}, e_1\rangle,$$
where $e_1$ is a unit vector tangent to the curve $\gamma_0$ and $\mathbf{n}$ is the unit normal to $\gamma_0$. It follows that:

1. $\gamma_0$ is a critical point for length if and only if it is a geodesic (after being reparameterized to have constant speed).

2. The negative gradient flow for the length functional in $\mathbb{R}^2$ is the curve shortening flow
$$\partial_t x = h\,\mathbf{n}.$$
The simplest (non-trivial) solution of the curve shortening flow is given by a one-parameter family of concentric circles with radius
$$r(t) = \sqrt{-2t}$$
for $t \in (-\infty, 0)$. This is an ancient solution since it is defined for all $t < 0$, it is self-similar since the shape is preserved (i.e., we can think of it as a fixed circle moving under rigid motions of $\mathbb{R}^2$), and it becomes extinct at the origin in space and time.

Figure 5: Curve shortening flow: the curve evolves by its geodesic curvature. The red arrows indicate the direction of flow.
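A simple numerical sketch (ours) of the flow reproduces this solution: approximating the curvature vector of a polygonal circle by second arclength differences and stepping $\partial_t x = h\,\mathbf{n}$ explicitly, the radius follows $r(t) \approx \sqrt{r_0^2 - 2t}$. The resolution and time step below are arbitrary choices.

```python
# Polygonal curve shortening flow for a circle of radius 1. The curvature
# vector is approximated by x_ss ~ (x_{i-1} - 2 x_i + x_{i+1}) / ds^2, which is
# valid here since the points stay nearly uniformly spaced.
import numpy as np

N, r0 = 200, 1.0
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
x = r0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)

dt, t = 1e-5, 0.0
for step in range(30001):
    if step % 10000 == 0:
        r_num = float(np.mean(np.linalg.norm(x, axis=1)))
        r_exact = np.sqrt(max(r0**2 - 2.0 * t, 0.0))
        print(f"t = {t:.3f}   numerical r = {r_num:.4f}   exact r = {r_exact:.4f}")
    ds = np.linalg.norm(np.roll(x, -1, axis=0) - x, axis=1).mean()
    curvature_vec = (np.roll(x, 1, axis=0) - 2.0 * x + np.roll(x, -1, axis=0)) / ds**2
    x = x + dt * curvature_vec        # dx/dt = h n
    t += dt
```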
Self-similar solutions
A solution of the curve shortening flow is self-similar if the shape does not change with time. The simplest example is a static solution, like a straight line, that does not change at all. The next simplest is given by concentric shrinking circles, but there are many other interesting possibilities. There are three types of self-similar solutions that are most frequently considered:
• Self-similar shrinkers.
• Self-similar translators.
• Self-similar expanders.
We will explain shrinkers first. Suppose that $c_t$ is a one-parameter family of curves flowing by the curve shortening flow for $t < 0$. We say that $c_t$ is a self-similar shrinker if $c_t = \sqrt{-t}\, c_{-1}$ for all $t < 0$. For example, circles of radius $\sqrt{-2t}$ give such a solution. In 1986, Abresch and Langer, [AbLa], classified such solutions and showed that the shrinking circles give the only embedded one (cf. Andrews, [An]). In 1987, Epstein-Weinstein, [EpW], showed a similar classification and analyzed the dynamics of the curve shortening flow near a shrinker. We say that $c_t$ is a self-similar translator if there is a constant vector $V \in \mathbb{R}^2$ so that $c_t = c_0 + t\,V$ for all $t \in \mathbb{R}$. These solutions are eternal in that they are defined for all time.
It is easy to see that any translator must be non-compact. Calabi discovered a self-similar translator in the plane that he named the grim reaper. Calabi's grim reaper is given as the graph of the function $u(x, t) = t - \log\sin x$.
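One can check directly that the grim reaper translates under the flow: writing a curve as a graph $y = u(x, t)$, the curve shortening flow becomes (up to tangential motion, which does not change the image of the curve) the quasilinear equation
$$u_t = \frac{u_{xx}}{1 + u_x^2}.$$
For $u(x, t) = t - \log\sin x$ we have $u_x = -\cot x$ and $u_{xx} = \csc^2 x = 1 + u_x^2$, so $u_t = 1 = u_{xx}/(1 + u_x^2)$: the graph translates upward with unit speed.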
Self-similar expanders are similar to shrinkers, except that they move by expanding dilations. In particular, the solutions are defined as t goes to +∞. It is not hard to see that expanders must be non-compact.
There are other possible types of self-similar solutions, where the solutions move by one-parameter families of rigid motions over time. See Halldorsson, [Hh], for other self-similar solutions to the curve shortening flow.
Theorems of Gage-Hamilton and Grayson
In 1986, building on earlier work of Gage, [Ga1] and [Ga2], Gage and Hamilton classified closed convex solutions of the curve shortening flow:
Theorem 6. (Gage-Hamilton, [GaH]) Under the curve shortening flow, every simple closed convex curve remains smooth and convex and eventually becomes extinct in a "round point".

More precisely, they showed that the flow becomes extinct in a point and, if the flow is rescaled to keep the enclosed area constant, then the resulting curves converge to a round circle. They did this by tracking the isoperimetric ratio and showing that it was approaching the optimal ratio, which is achieved by round circles.
In 1987, M. Grayson, [G1], showed that any simple closed curve eventually becomes convex under the flow:
Theorem 7. (Grayson,[G1]) Any simple closed curve eventually becomes convex under the curve shortening flow. Thus, by the result of Gage-Hamilton, it becomes extinct in a "round point".
Isoperimetric monotonicity under the curve shortening flow
In 1995, Hamilton, [Ha1], and Huisken, [H5], discovered two beautiful new ways to prove Grayson's theorem. Both of these relied on proving monotonicity of various isoperimetric ratios under the curve shortening flow and using these to rule out singularities other than shrinking circles. Recently, Andrews and Bryan, [AnB], discovered another monotone quantity and used it to give a self-contained proof of Grayson's theorem. We will describe two monotone quantities discovered by Hamilton.
Figure 8: The snake manages to unwind quickly enough to become convex before extinction.
For both of Hamilton's quantities, we start with a simple closed curve
c : S 1 → R 2 .
The image of c encloses a region in R 2 . Each simple curve γ inside this region with boundary in the image of c divides the region into two subdomains; let A 1 and A 2 be the areas of these subdomains and let L be the length of the dividing curve γ.
Hamilton's first quantity I is defined to be
$$I = \inf_\gamma\, L^2\left(\frac{1}{A_1} + \frac{1}{A_2}\right),$$
where the infimum is over all possible dividing curves γ.
Theorem 8. (Hamilton,[Ha1]) Under the curve shortening flow, I increases if I ≤ π.
Hamilton's second quantity J is defined to be
$$J = \inf_\gamma \frac{L}{L_0},$$
where the infimum is again taken over all possible dividing curves $\gamma$ and the quantity $L_0 = L_0(A_1, A_2)$ is the length of the shortest curve which divides a circle of area $A_1 + A_2$ into two pieces of area $A_1$ and $A_2$.
Theorem 9. (Hamilton, [Ha1]) Under the curve shortening flow, J always increases.

We will give a rough idea why these theorems are related to Grayson's theorem. Grayson had to rule out a singularity developing before the curve became convex. As you approach a singularity, the geodesic curvature $h$ must be larger and larger. Since the curve is compact, one can magnify the curve just before this singular time to get a new curve where the maximum of $|h|$ is one. There is a blow up analysis that shows that this dilated curve must look like a circle unless it is very long and skinny (like the grim reaper). Finally, a bound on any of these isoperimetric quantities rules out these long skinny curves.
Minimal surfaces
We turn next to higher dimensions and the variational properties of the area functional. Critical points of the area functional are called minimal surfaces. In this section, we will give a rapid overview of some of the basic properties of minimal surfaces; see the book [CM14] for more details.
The first variation of area for surfaces
Let Σ 0 be a hypersurface in R n+1 and n its unit normal. Given a vector field
V : Σ → R n+1
with compact support, we get a one-parameter family of hypersurfaces
Σ s = {x + s V (x) | x ∈ Σ 0 } .
The first variation of area (or volume) is
$$\frac{d}{ds}\Big|_{s=0} \mathrm{Vol}(\Sigma_s) = \int_{\Sigma_0} \mathrm{div}_{\Sigma_0} V,$$
where the divergence div Σ0 is defined by
$$\mathrm{div}_{\Sigma_0} V = \sum_{i=1}^{n} \langle \nabla_{e_i} V, e_i\rangle,$$
where $e_i$ is an orthonormal frame for $\Sigma$. The vector field $V$ can be decomposed into the part $V^T$ tangent to $\Sigma$ and the normal part $V^\perp$. The divergence of the normal part $V^\perp = \langle V, \mathbf{n}\rangle\,\mathbf{n}$ is
$$\mathrm{div}_{\Sigma_0} V^\perp = \sum_i \left\langle \nabla_{e_i}\left(\langle V, \mathbf{n}\rangle\,\mathbf{n}\right), e_i\right\rangle = \langle V, \mathbf{n}\rangle \sum_i \langle \nabla_{e_i}\mathbf{n}, e_i\rangle = H\,\langle V, \mathbf{n}\rangle,$$
where the mean curvature scalar H is
$$H = \mathrm{div}_{\Sigma_0}(\mathbf{n}) = \sum_{i=1}^{n} \langle \nabla_{e_i}\mathbf{n}, e_i\rangle.$$
With this normalization, H is n/R on the n-sphere of radius R.
Minimal surfaces
By Stokes' theorem, $\mathrm{div}_{\Sigma_0} V^T$ integrates to zero. Hence, since $\mathrm{div}_{\Sigma_0} V^\perp = H\,\langle V, \mathbf{n}\rangle$, we can rewrite the first variation formula as
$$\frac{d}{ds}\Big|_{s=0} \mathrm{Vol}(\Sigma_s) = \int_{\Sigma_0} H\,\langle V, \mathbf{n}\rangle.$$
A hypersurface Σ 0 is minimal when it is a critical point for the area functional, i.e., when the first variation is zero for every compactly supported vector field V . By the first variation formula, this is equivalent to H = 0.
Minimal graphs
If $\Sigma$ is the graph of a function $u: \mathbb{R}^n \to \mathbb{R}$, then the upward-pointing unit normal is given by
$$\mathbf{n} = \frac{(-\nabla u, 1)}{\sqrt{1 + |\nabla_{\mathbb{R}^n} u|^2}}, \tag{13}$$
and the mean curvature of Σ is given by
$$H = -\mathrm{div}_{\mathbb{R}^n}\left(\frac{\nabla_{\mathbb{R}^n} u}{\sqrt{1 + |\nabla_{\mathbb{R}^n} u|^2}}\right). \tag{14}$$
The field of minimal surfaces dates back to the publication in 1762 of Lagrange's famous memoir "Essai d'une nouvelle méthode pour déterminer les maxima et les minima des formules intégrales indéfinies". In a paper published in 1744, Euler had already discussed minimizing properties of the surface now known as the catenoid, but he considered only variations within a certain class of surfaces. In the almost quarter of a millennium that has passed since Lagrange's memoir, minimal surfaces have remained a vibrant area of research, and there are many reasons why. The study of minimal surfaces was the birthplace of regularity theory. It lies at the intersection of nonlinear elliptic PDE, geometry, and low-dimensional topology, and over the years the field has matured through the efforts of many people. However, some very fundamental questions remain, and many of the potentially spectacular applications of the field have yet to be achieved. For instance, it has long been the hope that several of the outstanding conjectures about the topology of 3-manifolds could be resolved using detailed knowledge of minimal surfaces.

Since a graph is minimal exactly when $H = 0$, minimal graphs are solutions of the nonlinear divergence-form PDE
$$\mathrm{div}_{\mathbb{R}^n}\left(\frac{\nabla_{\mathbb{R}^n} u}{\sqrt{1 + |\nabla_{\mathbb{R}^n} u|^2}}\right) = 0. \tag{15}$$
Every smooth hypersurface is locally graphical, so small pieces of a minimal surface satisfy this equation (over some plane).
In 1916, Bernstein proved that planes were the only entire solutions of the minimal surface equation:
Theorem 10. (Bernstein, [Be]) Any minimal graph over all of R 2 must be flat (i.e., u is an affine function).
Remarkably, this theorem holds for n ≤ 7, but there are non-flat entire minimal graphs in dimensions 8 and up.
The Bernstein theorem should be compared with the classical Liouville theorem for harmonic functions:
Theorem 11. (Liouville) A positive harmonic function on R n must be constant.
Consequences of the first variation formula
Suppose that Σ ⊂ R n+1 is a hypersurface with normal n. Given f : R n+1 → R, the Laplacian on Σ applied to f is
$$\Delta_\Sigma f \equiv \mathrm{div}_\Sigma\left((\nabla f)^T\right) = \sum_{i=1}^{n} \mathrm{Hess}_f(e_i, e_i) - \langle \nabla f, \mathbf{n}\rangle\, H, \tag{16}$$
where e i is a frame for Σ and Hess f is the R n+1 Hessian of f . We will use this formula several times with different choices of f .
First, when f is the i-th coordinate function x i , (16) becomes
$$\Delta_\Sigma x_i = -\langle \partial_i, \mathbf{n}\rangle\, H.$$
We see that:
Lemma 12. Σ is minimal ⇐⇒ all coordinate functions are harmonic.
Combining this with the maximum principle, we get Osserman's convex hull property, [Os3]:
Proposition 13. If Σ is compact and minimal, then Σ is contained in the convex hull of ∂Σ.
Proof. If not, then we could translate and rotate $\Sigma$ so that $\partial\Sigma \subset \{x_1 < 0\}$ but $\Sigma$ contains a point $p \in \{x_1 > 0\}$. However, the function $x_1$ is harmonic on $\Sigma$, so the maximum principle implies that its maximum is attained on $\partial\Sigma$. This contradiction proves the proposition.
Applying (16) with f = |x| 2 and noting that the R n+1 Hessian of |x| 2 is twice the identity and the gradient is 2x, we see that
$$\Delta_\Sigma |x|^2 = 2n - 2\,\langle x, \mathbf{n}\rangle\, H$$
when $\Sigma^n \subset \mathbb{R}^{n+1}$ is a hypersurface. When $\Sigma$ is minimal, this becomes $\Delta_\Sigma |x|^2 = 2n$.
This identity is the key for the monotonicity formula:
Theorem 14. If $\Sigma \subset \mathbb{R}^{n+1}$ is a minimal hypersurface, then
$$\frac{d}{dr}\,\frac{\mathrm{Vol}(B_r \cap \Sigma)}{r^n} = \frac{1}{r^{n+1}}\int_{\partial B_r\cap\Sigma} \frac{|x^\perp|^2}{|x^T|} \ge 0.$$
Moreover, the density ratio is constant if and only if $x^\perp \equiv 0$; this is equivalent to $\Sigma$ being a cone with its vertex at the origin (i.e., $\Sigma$ is invariant with respect to dilations about $0$).

Examples of minimal surfaces
The Catenoid
The catenoid, shown in figure 12, is the only non-flat minimal surface of revolution. It was discovered by Euler in 1744 and shown to be minimal by Meusnier (a student of Monge) in 1776. It is a complete embedded topological annulus (i.e., genus zero and two ends) and is given as the set where $x_1^2 + x_2^2 = \cosh^2(x_3)$ in $\mathbb{R}^3$. It is easy to see that the catenoid has finite total curvature.
The Helicoid
The helicoid (see figure 13) is given as the set $x_3 = \tan^{-1}(x_2/x_1)$; alternatively, it is given in parametric form by
$$(x_1, x_2, x_3) = (t\cos s,\, t\sin s,\, s), \tag{17}$$
where s, t ∈ R. It was discovered by Meusnier (a student of Monge) in 1776. It is complete, embedded, singly-periodic and simply connected. The helicoid is a ruled surface since its intersections with horizontal planes {x 3 = s} are straight lines. These lines lift and rotate with constant speed to form a double spiral staircase. In 1842, Catalan showed that the helicoid is the only (non-flat) ruled minimal surface. A surface is said to be "ruled" if it can be parameterized by
$$X(s, t) = \beta(t) + s\,\delta(t), \quad s, t \in \mathbb{R}, \tag{18}$$
and $\beta$ and $\delta$ are curves in $\mathbb{R}^3$. The curve $\beta(t)$ is called the "directrix" of the surface, and a line having $\delta(t)$ as direction vector is called a "ruling". For the standard helicoid, the $x_3$-axis is a directrix, and for each fixed $t$ the line $s \mapsto (s\cos t, s\sin t, t)$ is a ruling.

Figure 13: The helicoid, with the ruling pictured. Credit: Matthias Weber, www.indiana.edu/~minimal.
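The minimality of the helicoid can also be checked symbolically. The short SymPy sketch below (ours) computes the first and second fundamental forms of the parameterization (17) and verifies that the mean curvature vanishes; the same computation applied to the catenoid parameterization $(\cosh t\,\cos s,\ \cosh t\,\sin s,\ t)$ likewise gives $H = 0$.

```python
# Symbolic check that the helicoid X(s,t) = (t cos s, t sin s, s) is minimal.
import sympy as sp

s, t = sp.symbols('s t', real=True)
X = sp.Matrix([t * sp.cos(s), t * sp.sin(s), s])

Xs, Xt = X.diff(s), X.diff(t)
normal = Xs.cross(Xt)
n = normal / sp.sqrt(normal.dot(normal))        # unit normal

# First fundamental form (E, F, G) and second fundamental form (e, f, g).
E, F, G = Xs.dot(Xs), Xs.dot(Xt), Xt.dot(Xt)
e, f, g = X.diff(s, 2).dot(n), X.diff(s, t).dot(n), X.diff(t, 2).dot(n)

H = sp.simplify((e * G - 2 * f * F + g * E) / (2 * (E * G - F**2)))
print("mean curvature of the helicoid:", H)      # prints 0
```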
The Riemann Examples
Around 1860, Riemann, [Ri], classified all minimal surfaces in $\mathbb{R}^3$ that are foliated by circles and straight lines in horizontal planes. He showed that the only such surfaces are the plane, the catenoid, the helicoid, and a two-parameter family that is now known as the Riemann examples. The surfaces that he discovered formed a family of complete embedded minimal surfaces that are singly-periodic and have genus zero. Each of the surfaces has infinitely many parallel planar ends connected by necks ("pairs of pants"). Modulo rigid motions, this is a two-parameter family of minimal surfaces. The parameters are:

• Neck size.
• Angle between period vector and the ends.
If we keep the neck size fixed and allow the angle to become vertical (i.e., perpendicular to the planar ends), the family degenerates to a pair of oppositely oriented helicoids. On the other hand, as the angle goes to zero, the family degenerates to a catenoid.
The Genus One Helicoid
In 1993, Hoffman-Karcher-Wei gave numerical evidence for the existence of a complete embedded minimal surface with genus one that is asymptotic to a helicoid; they called it a "genus one helicoid". In [HoWW], Hoffman, Weber and Wolf constructed such a surface as the limit of "singly-periodic genus one helicoids", where each singly-periodic genus one helicoid was constructed via the Weierstrass representation. Later, Hoffman and White constructed a genus one helicoid variationally in [HoWh1].
Second variation
Minimal surfaces are critical points for the area functional, so it is natural to look at the second derivative of the area functional at a minimal surface. This is called the second variation, and it has played an important role in the subject since at least the work of Simons, [Sim], in 1968.

Figure 15: The genus one helicoid.

Figure 16: A periodic minimal surface asymptotic to the helicoid, whose fundamental domain has genus one. Credit: Matthias Weber, www.indiana.edu/~minimal.
To make this precise, let Σ 0 be a 2-sided minimal hypersurface in R n+1 , n its unit normal, f a function, and define the normal variation
Σ s = {x + s f (x) n(x) | x ∈ Σ 0 } .
A calculation (see, e.g., [CM14]) shows that the second variation of area along this one-parameter family of hypersurfaces is given by
$$\frac{d^2}{ds^2}\Big|_{s=0}\mathrm{Vol}(\Sigma_s) = \int_{\Sigma_0}\left(|\nabla f|^2 - |A|^2 f^2\right) = -\int_{\Sigma_0} f\,\left(\Delta + |A|^2\right) f, \tag{19}$$
where A is the second fundamental form. When Σ 0 is minimal in a Riemannian manifold M , the formula becomes
$$\frac{d^2}{ds^2}\Big|_{s=0}\mathrm{Vol}(\Sigma_s) = \int_{\Sigma_0}\left(|\nabla f|^2 - |A|^2 f^2 - \mathrm{Ric}_M(\mathbf{n}, \mathbf{n})\, f^2\right),$$
where Ric M is the Ricci curvature of M . A minimal surface Σ 0 is stable when it passes the second derivative test, i.e., when
$$0 \le \frac{d^2}{ds^2}\Big|_{s=0}\mathrm{Vol}(\Sigma_s) = -\int_{\Sigma_0} f\,\left(\Delta + |A|^2\right) f,$$
for every compactly supported variation Σ s . Analytically, stability means that the Jacobi operator ∆ + |A| 2 is non-negative.
There is a useful analytic criterion to determine stability:
Proposition 15. (Fischer-Colbrie and Schoen, [FiSc]) A 2-sided minimal hypersurface $\Sigma \subset \mathbb{R}^3$ is stable if and only if there is a positive function $u$ with $\Delta u = -|A|^2 u$.
Since the normal part of a constant vector field automatically satisfies the Jacobi equation, we conclude that minimal graphs are stable. The same argument implies that minimal multi-valued graphs are stable. Stability is a natural condition given the variational nature of minimal surfaces, but one of the reasons that stability is useful is the following curvature estimate of R. Schoen, [Sc1]:
Theorem 16. (Schoen, [Sc1]) If $\Sigma_0 \subset \mathbb{R}^3$ is stable and 2-sided and $B_R$ is a geodesic ball in $\Sigma_0$, then
$$\sup_{B_{R/2}} |A| \le \frac{C}{R},$$
where $C$ is a fixed constant.
When Σ 0 is complete, we can let R go to infinity and conclude that Σ 0 is a plane. This Bernstein theorem was proven independently by do Carmo-Peng, [dCP], and Fischer-Colbrie-Schoen, [FiSc].
See [CM15] for a different proof of Theorem 16 and a generalization to surfaces that are stable for a parametric elliptic integrand. The key point for getting the curvature estimate is to establish uniform area bounds just using stability:
Theorem 17. ([CM15]) If $\Sigma^2 \subset \mathbb{R}^3$ is stable and 2-sided and the geodesic ball $B_{r_0} \subset \Sigma$ is simply-connected, then
$$\mathrm{Area}(B_{r_0}) \le \frac{4\pi}{3}\, r_0^2. \tag{20}$$
The corresponding result is not known in higher dimensions, although Schoen, Simon and Yau proved curvature estimates assuming an area bound in low dimensions; see [ScSiY]. The counter-examples to the Bernstein problem in dimensions seven and up show that such a bound can only hold in low dimensions. However, R. Schoen has conjectured that the Bernstein theorem and curvature estimate should be true also for stable hypersurfaces in R 4 :
Conjecture 18. (Schoen) If Σ^3 ⊂ R^4 is a complete immersed 2-sided stable minimal hypersurface, then Σ is flat.

Conjecture 19. (Schoen) If Σ^3 ⊂ B_{r_0} = B_{r_0}(x) ⊂ M^4 is an immersed 2-sided stable minimal hypersurface where |K_M| ≤ k^2 and r_0 < ρ_1(π/k, k), and ∂Σ ⊂ ∂B_{r_0}, then for some C = C(k) and all 0 < σ ≤ r_0,
\[
\sup_{B_{r_0 - \sigma}} |A|^2 \le C \, \sigma^{-2} \,. \tag{21}
\]
Any progress on these conjectures would be enormously important for the theory of minimal hypersurfaces in R 4 .
Classification of embedded minimal surfaces
One of the most fundamental questions about minimal surfaces is to classify or describe the space of all complete embedded minimal surfaces in R 3 . We have already seen three results of this type:
1. Bernstein showed that a complete minimal graph must be a plane.
2. Catalan showed that a ruled minimal surface is either a plane or a helicoid.
3. A minimal surface of revolution must be a plane or a catenoid.
Each of these theorems makes a rather strong hypothesis on the class of surfaces. It would be more useful to have classifications under weaker hypotheses, such as just the topological type of the surface.
The last decade has seen enormous progress on the classification of embedded minimal surfaces in R 3 by their topology. The surfaces are generally divided into three cases, according to the topology:
• Disks.
• Planar domains -i.e., genus zero.
• Positive genus.
The classification of complete surfaces has relied heavily upon breakthroughs on the local descriptions of pieces of embedded minimal surfaces with finite genus.
The topology of minimal surfaces
The topology of a compact connected oriented surface without boundary is described by a single non-negative number: the genus. The sphere has genus zero, the torus has genus one, and the connected sum of k-tori has genus k.
The genus of an oriented surface with boundary is defined to be the genus of the compact surface that you get by gluing in a disk along each boundary component. Since an annulus with two disks glued in becomes a sphere, the annulus has genus zero. Thus, the topology of a connected oriented surface with boundary is described by two numbers: the genus and the number of boundary components.
The last topological notion that we will need is properness. An immersed submanifold Σ ⊂ M is proper when the intersection of Σ with any compact set in M is compact. Clearly, every compact submanifold is automatically proper.
There are two important monotonicity properties for the topology of minimal surfaces in R 3 ; one does not use minimality and one does.
Lemma 20. If Σ has genus k and Σ 0 ⊂ Σ, then the genus of Σ 0 is at most k.
Proof. This follows immediately from the definition of the genus and does not use minimality.
Lemma 21. If Σ ⊂ R 3 is a properly embedded minimal surface and B R (0)∩∂Σ = ∅, then the inclusion of B R (0)∩Σ into Σ is an injection on the first homology group.
Proof. If not, then B R (0) ∩ Σ contains a one-cycle γ that does not bound a surface in B R (0) ∩ Σ but does bound a surface Γ ⊂ Σ. However, Γ is then a minimal surface that must leave B R (0) but with ∂Γ ⊂ B R (0), contradicting the convex hull property (Proposition 13).
This has the following immediate corollary for disks:
Corollary 22. If Σ is a properly embedded minimal disk, then each component of B R (0) ∩ Σ is a disk.
Multi-valued graphs
We will need the notion of a multi-valued graph from [CM6]- [CM9]. At a first approximation, a multi-valued graph is locally a graph over a subset of the plane but the projection down is not one to one. Thus, it shares many properties with minimal graphs, including stability, but includes new possibilities such as the helicoid minus the vertical axis.
To be precise, let D_r be the disk in the plane centered at the origin and of radius r and let P be the universal cover of the punctured plane C \ {0} with global polar coordinates (ρ, θ) so ρ > 0 and θ ∈ R. Given 0 ≤ r ≤ s and θ_1 ≤ θ_2, define the "rectangle" S^{θ_1,θ_2}_{r,s} ⊂ P by
\[
S^{\theta_1,\theta_2}_{r,s} = \{ (\rho, \theta) \mid r \le \rho \le s \,,\ \theta_1 \le \theta \le \theta_2 \} \,. \tag{22}
\]
An N-valued graph of a function u on the annulus D_s \ D_r is a single valued graph over
\[
S^{-N\pi, N\pi}_{r,s} = \{ (\rho, \theta) \mid r \le \rho \le s \,,\ |\theta| \le N\pi \} \,. \tag{23}
\]
(Σ^{θ_1,θ_2}_{r,s} will denote the subgraph of Σ over the smaller rectangle S^{θ_1,θ_2}_{r,s}.) The multi-valued graphs that we will consider will never close up; in fact they will all be embedded. Note that being embedded corresponds to the separation never vanishing. Here the separation w is the difference in height between consecutive sheets and is therefore given by
\[
w(\rho, \theta) = u(\rho, \theta + 2\pi) - u(\rho, \theta) \,. \tag{24}
\]
In the case where Σ is the helicoid [i.e., Σ can be parametrized by (s cos t, s sin t, t) where s, t ∈ R], then
\[
\Sigma \setminus \{ x_3\text{-axis} \} = \Sigma_1 \cup \Sigma_2 \,, \tag{25}
\]
where Σ 1 , Σ 2 are ∞-valued graphs. Σ 1 is the graph of the function u 1 (ρ, θ) = θ and Σ 2 is the graph of the function u 2 (ρ, θ) = θ + π. In either case the separation w = 2 π. Note that for an embedded multi-valued graph, the sign of w determines whether the multi-valued graph spirals in a left-handed or right-handed manner, in other words, whether upwards motion corresponds to turning in a clockwise direction or in a counterclockwise direction.
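As a quick sanity check, a small sympy sketch verifies that the helicoid parametrized by (s cos t, s sin t, t) is minimal (via the first and second fundamental forms) and that consecutive sheets of Σ_1 are separated by 2π:

import sympy as sp

s, t = sp.symbols('s t', real=True)

# Parametrize the helicoid X(s, t) = (s cos t, s sin t, t).
X = sp.Matrix([s*sp.cos(t), s*sp.sin(t), t])
Xs, Xt = X.diff(s), X.diff(t)

# First fundamental form and unit normal.
E, F, G = Xs.dot(Xs), Xs.dot(Xt), Xt.dot(Xt)
N = Xs.cross(Xt)
N = N / N.norm()

# Second fundamental form and mean curvature H = (eG - 2fF + gE) / (2(EG - F^2)).
e, f, g = X.diff(s, 2).dot(N), X.diff(s, t).dot(N), X.diff(t, 2).dot(N)
H = sp.simplify((e*G - 2*f*F + g*E) / (2*(E*G - F**2)))
print(H)  # 0, so the helicoid is minimal

# Separation between consecutive sheets of the graph u1(rho, theta) = theta.
rho, theta = sp.symbols('rho theta', real=True)
u1 = theta
print(sp.simplify(u1.subs(theta, theta + 2*sp.pi) - u1))  # 2*pi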
Disks are double spiral staircases or graphs
We will describe first the local classification of properly embedded minimal disks that follows from [CM6]- [CM9]. This turns out to be the key step for understanding embedded minimal surfaces with finite genus since any of these can be decomposed into pieces that are either disks or pairs of pants.
There are two classical models for embedded minimal disks. The first is a minimal graph over a simply-connected domain in R 2 (such as the plane itself), while the second is a double spiral staircase like the helicoid. A double spiral staircase consists of two staircases that spiral around one another so that two people can pass each other without meeting.
In [CM6]- [CM9] we showed that these are the only possibilities and, in fact, every embedded minimal disk is either a minimal graph or can be approximated by a piece of a rescaled helicoid. It is a graph when the curvature is small and is part of a helicoid when the curvature is above a certain threshold.
The main point in the proof is to show the double spiral staircase structure when the curvature is large. The proof of this is long, but can be split into three main steps.
Three main steps.
A. Fix an integer N (how large the curvature must be in what follows will depend on N). If an embedded minimal disk Σ is not a graph (or equivalently if the curvature is large at some point), then it contains an N-valued minimal graph which initially is shown to exist on the scale of 1/ max |A|. That is, the N-valued graph is initially shown to be defined on an annulus with both inner and outer radius inversely proportional to max |A|.
B. Such a potentially small N -valued graph sitting inside Σ can then be seen to extend as an N -valued graph inside Σ almost all the way to the boundary. That is, the small N -valued graph can be extended to an N -valued graph defined on an annulus where the outer radius of the annulus is proportional to R. Here R is the radius of the ball in R 3 that the boundary of Σ is contained in.
C. The N -valued graph not only extends horizontally (i.e., tangent to the initial sheets) but also vertically (i.e., transversally to the sheets). That is, once there are N sheets there are many more and, in fact, the disk Σ consists of two multi-valued graphs glued together along an axis.
This general structure result for embedded minimal disks, and the methods used in its proof, give a compactness theorem for sequences of embedded minimal disks. This theorem is modelled on rescalings of the helicoid and the precise statement is as follows (we state the version for extrinsic balls; it was extended to intrinsic balls in [CM12]):
Theorem 23. (Theorem 0.1 in [CM9].) Let Σ_i ⊂ B_{R_i} = B_{R_i}(0) ⊂ R^3 be a sequence of embedded minimal disks with ∂Σ_i ⊂ ∂B_{R_i} where R_i → ∞. If
\[
\sup_{B_1 \cap \Sigma_i} |A|^2 \to \infty \,, \tag{26}
\]
then there exists a subsequence, Σ j , and a Lipschitz curve S : R → R 3 such that after a rotation of R 3 :
1. x 3 (S(t)) = t. (That is, S is a graph over the x 3 -axis.)
2. Each Σ j consists of exactly two multi-valued graphs away from S (which spiral together).
3. For each 1 > α > 0, Σ_j \ S converges in the C^α-topology to the foliation F = {x_3 = t}_t of R^3.

4. sup_{B_r(S(t)) ∩ Σ_j} |A|^2 → ∞ for all r > 0, t ∈ R. (The curvatures blow up along S.)
This theorem is sometimes referred to as the lamination theorem. Meeks showed in [Me2] that the Lipschitz curve S is in fact a straight line perpendicular to the foliation.
The assumption that the radii R i go to infinity is used in several ways in the proof. This guarantees that the leaves are planes (this uses the Bernstein theorem), but it also is used to show that the singularities are removable. We will see in the next subsection that this is not always the case in the "local case" where the R i remain bounded.
The local case
In contrast to the global case of the previous subsection, there are local examples of sequences of minimal surfaces that do not converge to a foliation. The first such example was constructed in [CM17], where we constructed a sequence of embedded minimal disks in a unit ball in R^3 so that:
• Each contains the x 3 -axis.
• Each is given by two multi-valued graphs over {x 3 = 0} \ {0}.
• The graphs spiral faster and faster near {x 3 = 0}.
The precise statement is:
Theorem 24. (Colding-Minicozzi, [CM17]) There is a sequence of compact embedded minimal disks 0 ∈ Σ_i ⊂ B_1 ⊂ R^3 with ∂Σ_i ⊂ ∂B_1 and containing the vertical segment {(0, 0, t) | |t| < 1} ⊂ Σ_i so:

(1) lim_{i→∞} |A_{Σ_i}|^2(0) = ∞.
(2) sup i sup Σi\B δ |A Σi | 2 < ∞ for all δ > 0.
(3) Σ i \ {x 3 -axis} = Σ 1,i ∪ Σ 2,i for multi-valued graphs Σ 1,i and Σ 2,i .
(4) Σ i \ {x 3 = 0} converges to two embedded minimal disks Σ ± ⊂ {±x 3 > 0} with Σ ± \ Σ ± = B 1 ∩ {x 3 = 0}. Moreover, Σ ± \ {x 3 -axis} = Σ ± 1 ∪ Σ ± 2 for multi-valued graphs Σ ± 1 and Σ ± 2 each of which spirals into {x 3 = 0}; see fig. 21.
It follows from (4) that Σ i \ {0} converges to a lamination of B 1 \ {0} (with leaves Σ − , Σ + , and B 1 ∩ {x 3 = 0} \ {0}) which does not extend to a lamination of B 1 . Namely, 0 is not a removable singularity.
Figure: The limit in a ball of a sequence of degenerating helicoids is a foliation by parallel planes; this limit is proper.
Figure: A schematic picture of the limit in Theorem 24, which is not smooth and not proper (the dotted x_3-axis is part of the limit). The limit contains four multi-valued graphs joined at the x_3-axis; Σ^+_1, Σ^+_2 above the plane x_3 = 0 and Σ^-_1, Σ^-_2 below the plane. Each of the four spirals into the plane.
There are now a number of other interesting local examples of singular limit laminations:
• Meeks-Weber, [MeWe]: The Meeks-Weber bent helicoids are sequences of minimal surfaces defined in a tube about a curve that converge to a minimal foliation of the tube except along the curve itself, which is the singular set.
• Dean, [De]: Dean generalized the construction of [CM17] to get isolated singular points.
• Khan, [Kh]: Khan used the Weierstrass representation to construct sequences of spiraling multi-valued graphs where curvature blows up on half an interval.
• Hoffman-White, [HoWh2]: They proved the definitive existence result for subsets of an axis by getting an arbitrary closed subset as the singular set. Their proof is variational, seizing on the fact that half of the helicoid is area-minimizing (and then using reflection to construct the other half).
• Kleene, [Kl]: Gave a different proof of the Hoffman-White result using the Weierstrass representation in the spirit of [CM17], [De] and [Kh].
• Calle-Lee, [CaL]: Constructed local helicoids in Riemannian manifolds.
One of the most interesting questions is when a minimal lamination has removable singularities. This is most interesting for minimal limit laminations that arise from sequences of embedded minimal surfaces. It is clear from these examples that this is a global question. This question really has two separate cases depending on the topology of the leaves near the singularity. When the leaves are simply connected, the only possibility is the spiraling and multi-valued graph structure proven in [CM6]- [CM9]; see [CM18] for a flux argument to get removability in the global case. A different type of singularity occurs when the injectivity radius of the leaves goes to zero at a singularity; examples of this were constructed by Colding and De Lellis in [CD]. The paper [CM10] has a similar flux argument for the global case where the leaves are not simply-connected.
The one-sided curvature estimate
One of the key tools used to understand embedded minimal surfaces is the one-sided curvature estimate proven in [CM9] using the structure theory developed in [CM6]- [CM9].
The one-sided curvature estimate roughly states that an embedded minimal disk that lies on one-side of a plane, but comes close to the plane, has bounded curvature. Alternatively, it says that if the curvature is large at the center of a ball, then the minimal disk propagates out in all directions so that it cannot be contained on one side of any plane that passes near the center of the ball.
Theorem 25. ([CM9]) There exists ε_0 > 0 so that the following holds. Let y ∈ R^3, r_0 > 0 and
\[
\Sigma^2 \subset B_{2r_0}(y) \cap \{ x_3 > x_3(y) \} \subset \mathbb{R}^3 \tag{27}
\]
be a compact embedded minimal disk with ∂Σ ⊂ ∂B_{2r_0}(y). For any connected component Σ' of B_{r_0}(y) ∩ Σ with B_{ε_0 r_0}(y) ∩ Σ' ≠ ∅,
\[
\sup_{\Sigma'} |A_{\Sigma}|^2 \le r_0^{-2} \,. \tag{28}
\]
The example of a rescaled catenoid shows that simply-connected and embedded are both essential hypotheses for the one-sided curvature estimate. More precisely, the height of the catenoid grows logarithmically in the distance to the axis of rotation. In particular, the intersection of the catenoid with B r0 lies in a slab of thickness ≈ log r 0 and the ratio of
\[
\frac{\log r_0}{r_0} \to 0 \quad \text{as } r_0 \to \infty \,. \tag{29}
\]
Two key ingredients: existence of multivalued graphs and the one-sided curvature estimate
We now come to the two key results about embedded minimal disks. The first says that if the curvature of such a disk Σ is large at some point x ∈ Σ, then near x a multivalued graph forms (in Σ ) and this extends (in Σ ) almost all the way to the boundary. Moreover, the inner radius r x of the annulus where the multivalued graph is defined is inversely proportional to |A|(x), and the initial separation between the sheets is bounded by a constant times the inner radius, i.e., |w (r x , θ)| ≤ C r x .
An important ingredient in the proof of the lamination theorem (Theorem 23) is that, just like the helicoid, general embedded minimal disks with large curvature at some interior point can be built out of N-valued graphs. In other words, any embedded minimal disk can be divided into pieces, each of which is an N-valued graph. Thus the disk itself should be thought of as being obtained by stacking these pieces (graphs) on top of each other.
The second key result is the one-sided curvature estimate for embedded minimal disks in a half-space (Theorem 25). As a corollary of this theorem, we get that the set of points in an embedded minimal disk where the curvature is large lies within a cone, and thus the multivalued graphs, whose existence was discussed above, will all start off within this cone; see Figure 8 and Figure 9.
Figure: The one-sided curvature estimate for an embedded minimal disk Σ in a half-space with ∂Σ ⊂ ∂B_{2r_0}: the components of B_{r_0} ∩ Σ intersecting B_{ε r_0} are graphs.
Returning to the rescaled catenoid: after dilating the catenoid by 1/j, we get a sequence of minimal surfaces in the unit ball that converges as sets to {x_3 = 0} as j → ∞. However, the catenoid is not flat, so these rescaled catenoids have |A| → ∞ and (28) does not apply for j large.
Genus zero surfaces have a pair of pants decomposition
Thus far, we have concentrated on the case of embedded minimal disks. We turn next to the general case of embedded minimal planar domains (i.e., where the surfaces have genus zero but may not be simply connected). The new possibilities are illustrated by Riemann's family of examples where small necks connect large graphical regions that are asymptotic to planes. Cutting along these small necks, one can decompose the Riemann examples into "pairs of pants" that are topologically disks with two sub-disks removed (think of the outer boundary as the waist and the two inner boundaries corresponding to the legs).
One of the main results from [CM10] is that a general embedded minimal planar domain has a similar pair of pants decomposition:
Theorem 26. Any nonsimply connected embedded minimal planar domain with a small neck can be cut along a collection of short curves. After the cutting, we are left with graphical pieces that are defined over a disk with either one or two subdisks removed (a topological disk with two subdisks removed is called a pair of pants). Moreover, if for some point the curvature is large, then all of the necks are very small.
The compactness theorems that follow from this decomposition are described in the next subsection. Before turning to them, we make a few more remarks on the one-sided curvature estimate. The estimate is like a gradient estimate for a harmonic function, where the gradient bound holds on one half of the ball on which the function is defined.
Using the minimal surface equation and the fact that Σ has points close to a plane, it is not hard to see that, for ε_0 > 0 sufficiently small, the curvature bound (28) is equivalent to the statement that Σ is a graph over the plane {x_3 = 0}.
We will often refer to Theorem 25 as the one-sided curvature estimate (since Σ is assumed to lie on one side of a plane). Note that the assumption in Theorem 25 that Σ is simply connected (i.e., that Σ is a disk) is crucial, as can be seen from the example of a rescaled catenoid. The catenoid (see Figure 6) is the minimal surface in R^3 given by (cosh s cos t, cosh s sin t, s), where s, t ∈ R. Rescaled catenoids converge (with multiplicity two) to the flat plane; see Figure 7. Likewise, by considering the universal cover of the catenoid, one sees that Theorem 25 requires the disk to be embedded and not just immersed.
Definition. (Cones; see Figure 8). If δ > 0 and x ∈ R^3, then we denote by C_δ(x) the (convex) cone with vertex x, cone angle (π/2 − arctan δ), and axis parallel to the x_3-axis. That is,
\[
\mathcal{C}_{\delta}(x) = \{ x \in \mathbb{R}^3 \mid x_3^2 \ge \delta^2 (x_1^2 + x_2^2) \} + x \,.
\]
In the next subsection, we will turn to finer structure and compactness theorems for sequences of planar domains.
Compactness theorems for planar domains
In order to describe the results for sequences of planar domains, it will be useful to divide things into two cases depending on whether or not the topology is concentrating at points. To distinguish between these cases, we will say that a sequence of surfaces Σ^2_i ⊂ R^3 is uniformly locally simply connected (or ULSC) if for each compact subset K of R^3, there exists a constant r_0 > 0 (depending on K) so that for every x ∈ K, all r ≤ r_0, and every surface Σ_i,
\[
\text{each connected component of } B_r(x) \cap \Sigma_i \text{ is a disk.} \tag{30}
\]
For instance, a sequence of rescaled catenoids where the necks shrink to zero is not ULSC, whereas a sequence of rescaled helicoids is. Another way of locally distinguishing sequences where the topology does not concentrate from sequences where it does comes from analyzing the singular set. The singular set S is defined to be the set of points where the curvature is blowing up. That is, a point y in R 3 is in S for a sequence Σ i if sup Br(y)∩Σi |A| 2 → ∞ as i → ∞ for all r > 0.
We will show that for embedded minimal surfaces S consists of two types of points. The first type is roughly modelled on rescaled helicoids and the second on rescaled catenoids:
• A point y in R 3 is in S ulsc if the curvature for the sequence Σ i blows up at y and the sequence is ULSC in a neighborhood of y.
• A point y in R 3 is in S neck if the sequence is not ULSC in any neighborhood of y. In this case, a sequence of closed non-contractible curves γ i ⊂ Σ i converges to y.
The sets S neck and S ulsc are obviously disjoint and the curvature blows up at both, so S neck ∪ S ulsc ⊂ S. An easy argument will later show that, after passing to a subsequence, we can assume that
S = S_neck ∪ S_ulsc . (32)
Note that S_neck = ∅ is equivalent to the sequence being ULSC, as is the case for sequences of rescaled helicoids. On the other hand, S_ulsc = ∅ for sequences of rescaled catenoids. We will show that every sequence Σ_i has a subsequence that is either ULSC or for which S_ulsc is empty. This is the next "no mixing" theorem. We will see later that these two different cases give two very different structures.
Theorem 28. ([CM10]) If Σ_i ⊂ B_{R_i} = B_{R_i}(0) ⊂ R^3 is a sequence of compact embedded minimal planar domains with ∂Σ_i ⊂ ∂B_{R_i} where R_i → ∞, then there is a subsequence with either S_ulsc = ∅ or S_neck = ∅.
In view of Theorem 28 and the earlier results for disks, it is natural to first analyze sequences that are ULSC, so where S neck = ∅, and second analyze sequences where S ulsc is empty. We will do this next.
Common for both the ULSC case and the case where S ulsc is empty is that the limits are always laminations by flat parallel planes and the singular sets are always closed subsets contained in the union of the planes. This is the content of the next theorem:
Theorem 29. (Colding-Minicozzi, [CM10]) Let Σ_i ⊂ B_{R_i} = B_{R_i}(0) ⊂ R^3 be a sequence of compact embedded minimal planar domains with ∂Σ_i ⊂ ∂B_{R_i} where R_i → ∞. If
\[
\sup_{B_1 \cap \Sigma_i} |A|^2 \to \infty \,, \tag{33}
\]
then there exists a subsequence Σ j , a lamination L = {x 3 = t} {t∈I} of R 3 by parallel planes (where I ⊂ R is a closed set), and a closed nonempty set S in the union of the leaves of L such that after a rotation of R 3 :
(A) For each 1 > α > 0, Σ_j \ S converges in the C^α-topology to the lamination L \ S.

(B) sup_{B_r(x) ∩ Σ_j} |A|^2 → ∞ as j → ∞ for all r > 0 and x ∈ S. (The curvatures blow up along S.)
Loosely speaking, our next result shows that when the sequence is ULSC (but not simply connected), a subsequence converges to a foliation by parallel planes away from two lines S 1 and S 2 . The lines S 1 and S 2 are disjoint and orthogonal to the leaves of the foliation and the two lines are precisely the points where the curvature is blowing up. This is similar to the case of disks, except that we get two singular curves for non-disks as opposed to just one singular curve for disks.
Theorem 30. ([CM10]) Let a sequence Σ_i, limit lamination L, and singular set S be as in Theorem 29. Suppose that each B_R(0) ∩ Σ_i is not simply-connected. If every Σ_i is ULSC and
\[
\sup_{B_1 \cap \Sigma_i} |A|^2 \to \infty \,, \tag{34}
\]
then the limit lamination L is the foliation F = {x 3 = t} t and the singular set S is the union of two disjoint lines S 1 and S 2 such that:
(C ulsc ) Away from S 1 ∪ S 2 , each Σ j consists of exactly two multi-valued graphs spiraling together. Near S 1 and S 2 , the pair of multi-valued graphs form double spiral staircases with opposite orientations at S 1 and S 2 . Thus, circling only S 1 or only S 2 results in going either up or down, while a path circling both S 1 and S 2 closes up.
(D ulsc ) S 1 and S 2 are orthogonal to the leaves of the foliation.
Theorem 31. ([CM10]) Let a sequence Σ_i, limit lamination L, and singular set S be as in Theorem 29. If S_ulsc = ∅ and
\[
\sup_{B_1 \cap \Sigma_i} |A|^2 \to \infty \,, \tag{35}
\]
then S = S neck by (32) and (C neck ) Each point y in S comes with a sequence of graphs in Σ j that converge to the plane {x 3 = x 3 (y)}. The convergence is in the C ∞ topology away from the point y and possibly also one other point in {x 3 = x 3 (y)} ∩ S. If the convergence is away from one point, then these graphs are defined over annuli; if the convergence is away from two points, then the graphs are defined over disks with two subdisks removed.
Uniqueness of complete examples: Catenoid
The catenoid is the only (non-flat) minimal surface of revolution, but there are a number of other ways to uniquely characterize it. We will discuss this next. In this subsection, Σ will always be complete, minimal, and embedded in R 3 .
The first modern results assumed that Σ has finite total curvature 3 :
Theorem 32. (Schoen,[Sc2]) The catenoid is the unique Σ with finite total curvature and two ends.
It follows from Schoen's result that an embedded finite total curvature minimal surface with two ends cannot have positive genus.
Theorem 33. (Lopez and Ros, [LRo]) The catenoid is the unique (non-flat) Σ with finite total curvature and genus zero.
The main point of the Lopez-Ros theorem is to show that a (finite total curvature) genus zero minimal surface with more than two ends cannot be embedded.
A major breakthrough came in 1997 with Collin's proof of the generalized Nitsche conjecture:
Theorem 34. (Collin, [Co]) If Σ is proper, has finite topology and at least two ends, then it has finite total curvature.

[CM11] gives an alternative proof of Collin's theorem using the one-sided curvature estimate. The assumption that Σ has at least two ends rules out the possibility of the helicoid (which has infinite total curvature). Finally, in 2008, Colding-Minicozzi, [CM12], showed that embeddedness and finite topology together imply properness, thus removing the assumption of properness. The final result is:
Theorem 35. (Schoen, Lopez-Ros, Collin, Colding-Minicozzi) The catenoid is the only complete embedded minimal surface with finite topology and either:
• Exactly two ends, or
• Genus zero and more than one end.
There are local versions of these global uniqueness results for the catenoid. The starting point is a local version of Collin's result that follows from the argument of [CM11], using the one-sided curvature estimate, although it is not recorded there. This was done in [CM16], where the following local version was proven:
Theorem 36. ([CM16]) There exist ε > 0 and C_1, C_2, C_3 > 1 so that: If Σ ⊂ B_R ⊂ R^3 is an embedded minimal annulus with ∂Σ ⊂ ∂B_R and π_1(B_{εR} ∩ Σ) ≠ 0, then there is a simple closed geodesic γ ⊂ Σ of length ℓ so that:
• The curve γ splits the connected component of B_{R/C_1} ∩ Σ containing it into annuli Σ^+ and Σ^-, each with ∫ |A|^2 ≤ 5π.
• Each of Σ^± \ T_{C_2 ℓ}(γ) is a graph with gradient ≤ 1.
• log(R/ℓ) ≤ C_3 h, where the separation h is given by
\[
h \equiv \min \{ |x_+ - x_-| \,:\, x_\pm \in \partial B_{R/C_1} \cap \Sigma^\pm \} \,.
\]
Here T s (S) denotes the (intrinsic in Σ) tubular neighborhood of radius s about the set S ⊂ Σ.
Uniqueness of complete examples: Helicoid
In this subsection, Σ will always be complete, minimal, and embedded in R 3 .
Using the lamination theorem and one-sided curvature estimate from [CM6]- [CM9], Meeks-Rosenberg proved the uniqueness of the helicoid in 2005:
Theorem 37. ([MeR2]) The helicoid is the unique (non-flat) proper, simply-connected Σ.
By [CM12], the assumption of properness can be removed in Theorem 37.
Again using [CM6]- [CM9], Meeks-Rosenberg and Bernstein-Breiner studied the ends of finite genus embedded minimal surfaces, showing that these are asymptotic to helicoids. The Bernstein-Breiner theorem gives:
Theorem 38. ([BB2]) Any (non-flat) finite genus Σ with one end is asymptotic to a helicoid.
In particular, any finite genus embedded minimal surface with one end must be conformal to a punctured Riemann surface and one gets rather good control on the Weierstrass data (it has an essential singularity at the puncture, but of the same type that the helicoid does). It would be very interesting to get a finer description of the moduli space of such examples. One natural result in this direction is a recent theorem of Bernstein and Breiner (that proves a conjecture of Bobenko, [Bo]):
Theorem 39. ([BB3]) Let Σ be an embedded genus one helicoid in R^3. Then there is a line ℓ so that rotation by 180 degrees about ℓ is an orientation preserving isometry of Σ.
Uniqueness of complete examples: Riemann examples
Calabi-Yau conjectures
Recall that an immersed submanifold in R n is proper if the pre-image of any compact subset of R n is compact in the surface. This property has played an important role in the theory of minimal submanifolds and many of the classical theorems in the subject assume that the submanifold is proper. It is easy to see that any compact submanifold is automatically proper. On the other hand, there is no reason to expect a general immersion to be proper. For example, the non-compact curve parametrized in polar coordinates by ρ(t) = π + arctan(t) , θ(t) = t spirals infinitely between the circles of radius π/2 and 3π/2. However, it was long thought that a minimal immersion (or embedding) should be better behaved. This principle was captured by the Calabi-Yau conjectures. Their original form was given in 1965 in [Ce] where E. Calabi made the following two conjectures about minimal surfaces (see also S.S. Chern, page 212 of [Cs] and S.T. Yau's 1982 problem list, [Ya3]):
Conjecture 41. "Prove that a complete minimal hypersurface in R n must be unbounded."
Calabi continued: "It is known that there are no compact minimal submanifolds of R n (or of any simply connected complete Riemannian manifold with sectional curvature ≤ 0). A more ambitious conjecture is":
Conjecture 42. "A complete [non-flat] minimal hypersurface in R n has an unbounded projection in every (n − 2)-dimensional flat subspace."
The immersed versions of these conjectures were shown to be false by examples of Jorge and Xavier,[JXa2], and N. Nadirashvili, [Na]. The latter constructed a complete immersion of a minimal disk into the unit ball in R 3 , showing that Conjecture 41 also failed for immersed surfaces; cf. [MaMo1], [LMaMo1], [LMaMo2].
It is clear from the definition of proper that a proper minimal surface in R 3 must be unbounded, so the examples of Nadirashvili are not proper.
The strong halfspace theorem of D. Hoffman and W. Meeks shows that properness also prevents a minimal surface from being contained in a slab, or even a half-space:
Theorem 43. (Hoffman-Meeks, [HoMe]) A complete connected properly immersed minimal surface contained in {x 3 > 0} ⊂ R 3 must be a horizontal plane {x 3 = Constant}.
In [CM12], it was shown that the Calabi-Yau Conjectures were true for embedded surfaces. We will describe this more precisely below.
The main result of [CM12] is an effective version of properness for disks, giving a chord-arc bound. Obviously, intrinsic distances are larger than extrinsic distances, so the significance of a chord-arc bound is the reverse inequality, i.e., a bound on intrinsic distances from above by extrinsic distances. This is accomplished in the next theorem:
Theorem 44. ([CM12]) There exists a constant C > 0 so that if Σ ⊂ R^3 is an embedded minimal disk, B^Σ_{2R} = B^Σ_{2R}(0) is an intrinsic ball in Σ \ ∂Σ of radius 2R, and if sup_{B^Σ_{r_0}} |A|^2 > r_0^{-2} where R > r_0, then for x ∈ B^Σ_R
\[
C \, \mathrm{dist}_\Sigma(x, 0) < |x| + r_0 \,. \tag{36}
\]
The assumption of a lower curvature bound, sup B Σ r 0 |A| 2 > r −2 0 , in the theorem is a necessary normalization for a chord-arc bound. This can easily be seen by rescaling and translating the helicoid.
Properness of a complete embedded minimal disk is an immediate consequence of Theorem 44. Namely, by (36), as intrinsic distances go to infinity, so do extrinsic distances. Precisely, if Σ is flat, and hence a plane, then obviously Σ is proper and if it is non-flat, then sup B Σ r 0 |A| 2 > r −2 0 for some r 0 > 0 and hence Σ is proper by (36).
A consequence of Theorem 44 together with the one-sided curvature estimate is the following version of that estimate for intrinsic balls:
Corollary 45. [CM12]) There exists > 0, so that if
Σ ⊂ {x 3 > 0} ⊂ R 3 (37) is an embedded minimal disk, B Σ 2R (x) ⊂ Σ \ ∂Σ, and |x| < R, then sup B Σ R (x) |A Σ | 2 ≤ R −2 .(38)
As a corollary of this intrinsic one-sided curvature estimate we get that the second, and "more ambitious", of Calabi's conjectures is also true for embedded minimal disks.
In fact, [CM12] proved both of Calabi's conjectures and properness also for embedded surfaces with finite topology:
Theorem 46. ([CM12]) The plane is the only complete embedded minimal surface with finite topology in R^3 in a halfspace.

Theorem 47. ([CM12]) A complete embedded minimal surface with finite topology in R^3 must be proper.
There have been several properness results for Riemannian three-manifolds. W. Meeks and H. Rosenberg,[MeR3], generalized this to get a local version in Riemannian three-manifolds; they also extended it to embedded minimal surfaces with finite genus and positive injectivity radius in R 3 . In [Cb], B. Coskunuzer proved properness for area minimizing disks in hyperbolic three-space, assuming that there is at least one C 1 point in the boundary at infinity. Finally, Daniel, Meeks and Rosenberg proved a version for Lie Groups in [DMR].
There has been extensive work on both properness and the halfspace property assuming various curvature bounds. Jorge and Xavier, [JXa1] and [JXa2], showed that there cannot exist a complete immersed minimal surface with bounded curvature in ∩_i {x_i > 0}; later Xavier proved that the plane is the only such surface in a halfspace, [Xa]. Recently, G.P. Bessa, Jorge and G. Oliveira-Filho, [BJO], and H. Rosenberg, [Ro], have shown that if a complete embedded minimal surface has bounded curvature, then it must be proper. This properness was extended to embedded minimal surfaces with locally bounded curvature and finite topology by Meeks and Rosenberg in [MeR2]; finite topology was subsequently replaced by finite genus in [MePRs1]. In the other direction, [MaMo3] shows that any convex, possibly noncompact or nonsmooth, region of R^3 admits a proper complete minimal immersion of the unit disk. There are a number of interesting related results, including [AFM], [MaMeNa], and [ANa].
Mean curvature flow
We will now turn to the gradient flow for volume, i.e., mean curvature flow (or MCF). This is the higher dimensional analog of the curve shortening flow. In this section, we will give a rapid overview of the subject.
A one-parameter family of hypersurfaces Σ t ⊂ R n+1 flows by mean curvature if
∂ t x = −H n ,
where n is the unit normal and H is the mean curvature. Since the first variation formula gives that
\[
\frac{d}{dt} \mathrm{Vol}(\Sigma_t) = \int_{\Sigma_t} \langle \partial_t x \,, H \mathbf{n} \rangle \,,
\]
we see that mean curvature flow is the (negative) gradient flow for volume and
\[
\frac{d}{dt} \mathrm{Vol}(\Sigma_t) = -\int_{\Sigma_t} H^2 \,.
\]
Minimal surfaces (where H = 0) are fixed points for this flow. The next simplest example is given by concentric round n-dimensional spheres of radius √(−2nt) for t < 0.
Figure 25: Cylinders, spheres and planes are self-similar solutions of the mean curvature flow. The shape is preserved, but the scale changes with time.
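As an illustration, for the round sphere of radius r in R^{n+1} the mean curvature is H = n/r (with respect to the outward normal), so MCF reduces to the ODE dr/dt = −n/r; the following sympy sketch checks that r(t) = √(−2nt) solves it:

import sympy as sp

n = sp.Symbol('n', positive=True)
t = sp.Symbol('t', negative=True)

# Radius of the shrinking round sphere for t < 0.
r = sp.sqrt(-2*n*t)

# For the round sphere, MCF reduces to dr/dt = -n/r.
print(sp.simplify(sp.diff(r, t) + n/r))  # 0, so r(t) = sqrt(-2 n t) solves the flow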
MCF of graphs
We saw earlier that the mean curvature of the graph of a function u : R n → R is given by
\[
H = -\mathrm{div}_{\mathbb{R}^n} \left( \frac{\nabla_{\mathbb{R}^n} u}{\sqrt{1 + |\nabla_{\mathbb{R}^n} u|^2}} \right) .
\]
Thus, the graph of u(x, t) flows by MCF if
\[
\frac{\partial u}{\partial t} = \left( 1 + |\nabla_{\mathbb{R}^n} u|^2 \right)^{\frac12} \mathrm{div}_{\mathbb{R}^n} \left( \frac{\nabla_{\mathbb{R}^n} u}{\sqrt{1 + |\nabla_{\mathbb{R}^n} u|^2}} \right) .
\]
The factor (1 + |∇_{R^n} u|^2)^{1/2} on the right compensates for the fact that the x_{n+1} direction is not normal to the graph. We already saw that Calabi's grim reaper u(x, t) = t − log sin x gives an example of graphs flowing by mean curvature; see [AW] and [CSS] for similar examples and stability results.
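For curves (n = 1) the graphical flow equation reduces to u_t = u_xx/(1 + u_x^2), and the grim reaper can be checked directly; a small sympy sketch (on 0 < x < π):

import sympy as sp

x, t = sp.symbols('x t', real=True)

# Calabi's grim reaper: a translating graphical solution of curve shortening flow.
u = t - sp.log(sp.sin(x))

# Graphical MCF for curves (n = 1): u_t = u_xx / (1 + u_x^2).
lhs = sp.diff(u, t)
rhs = sp.diff(u, x, 2) / (1 + sp.diff(u, x)**2)
print(sp.simplify(lhs - rhs))  # 0, so the grim reaper flows by mean curvature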
Self-similar shrinkers
Let Σ t ⊂ R n+1 be a one-parameter family of hypersurfaces flowing by MCF for t < 0. Σ t is said to be a self-similar shrinker if
Σ t = √ −t Σ −1
for all t < 0. We saw that spheres of radius √ −2nt give such a solution. Unlike the case of curves, numerical evidence suggests that a complete classification of embedded self-shrinkers is impossible. In spite of all the numerical evidence, there are very few rigorous examples of self-shrinkers.
In 1992, Angenent, [A], constructed a self-similar shrinking donut in R 3 . The shrinking donut was given by rotating a simple closed curve around an axis. See Kleene and Möller, [KlMo], for a classification of self-shrinkers of rotation (including ones with boundary).
The maximum principle
The parabolic maximum principle plays an important role in mean curvature flow. For instance, it is used to prove the following key facts:
1. If two closed hypersurfaces are disjoint, then they remain disjoint under MCF.
2. A closed embedded hypersurface remains embedded under MCF.
3. If a closed hypersurface is convex, then it remains convex under MCF.
4. Likewise, mean convexity (i.e., H > 0) is preserved under MCF.
In 1989, Grayson, [G2], showed that his result for curves does not extend to surfaces. In particular, he showed that a dumbbell with a sufficiently long and narrow bar will develop a pinching singularity before extinction. A later proof was given by Angenent, [A], using the shrinking donut and the avoidance property (1); see Figures 27 to 30.
The self-shrinker equation
A MCF M_t is a self-similar shrinker if M_t = √(−t) M_{-1} for t < 0. This is equivalent to Σ = M_{-1} satisfying the equation
\[
H = \frac{\langle x, \mathbf{n} \rangle}{2} \,.
\]
That is, M_t = √(−t) M_{-1} if and only if M_{-1} satisfies H = ⟨x, n⟩/2. The self-shrinker equation arises variationally in two closely related ways: as minimal surfaces for a conformally changed metric and as critical points for a weighted area functional. We return to the second later, but state the first now:
Lemma 48. Σ is a self-shrinker ⇐⇒ Σ is a minimal surface in the metric
\[
g_{ij} = e^{-\frac{|x|^2}{2n}} \, \delta_{ij} \,.
\]
The proof follows immediately from the first variation. Unfortunately, this metric is not complete (the distance to infinity is finite) and the curvature blows up exponentially.
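As an illustration, the self-shrinker equation recovers the radius of the round sphere: with outward normal ν = x/R one has H = n/R and ⟨x, ν⟩ = R, so H = ⟨x, ν⟩/2 forces R = √(2n). A one-line symbolic version of this arithmetic (a sketch using sympy):

import sympy as sp

n, R = sp.symbols('n R', positive=True)

# Self-shrinker equation H = <x, nu>/2 for the round sphere: n/R = R/2.
print(sp.solve(sp.Eq(n/R, R/2), R))  # [sqrt(2)*sqrt(n)], i.e. R = sqrt(2 n)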
Huisken's theorem about MCF of convex hypersurfaces
In 1984, Huisken, [H1], showed that convexity is preserved under MCF and showed that the surfaces become round:
Theorem 49. (Huisken, [H1]) Under MCF, every closed convex hypersurface in R n+1 remains convex and eventually becomes extinct in a "round point".
Figure: The shrinking cube and half of the shrinking cube; a numerical example of Chopp, Exper. Math. 1994.
This is exactly analogous to the result of Gage-Hamilton for convex curves, but it's interesting to note that Huisken's proof works only for n > 1. Namely, he shows that the hypersurfaces become closer and closer to being umbilic and that the limiting shapes are umbilic. A hypersurface is umbilic if all of the eigenvalues of the second fundamental form are the same; this characterizes the sphere when there are at least two eigenvalues, but is meaningless for curves.
We mention a few related results. First, Schulze showed a generalization of this for flows by other powers of mean curvature in [Sf1] and [Sf2]. Second, Sesum showed that the rescaled mean curvature flow converges exponentially to a round sphere in [Se].
Convexity means that every eigenvalue of A has the right sign. There are weaker conditions that are also preserved under mean curvature flow and where significant results have been obtained. The first of these is mean convexity, where the hypersurfaces have positive mean curvature. Mean convex flows have been analyzed by Huisken-Sinestrari, [HS1] and [HS2], and White, [W2] and [W3]. A related class of hypersurfaces are the 2-convex ones, where the sum of any pair of principal curvatures is positive (it is not hard to see that this implies mean convexity); this case has been studied by Huisken-Sinestrari, [HS3]. In addition, Smoczyk, [Sm1], showed that star-shaped hypersurfaces remain star-shaped under MCF. Finally, Ecker-Huisken, [EH1] and [EH2], showed that being graphical is also preserved under MCF and proved estimates for graphical mean curvature flow. In each of these cases, the maximum principle is used to show that the condition is preserved under MCF.
Width and mean curvature flow
We saw previously that every closed hypersurface must become extinct under MCF in a finite amount of time. It is interesting then to estimate this extinction time. One obvious estimate is in terms of the diameter, since the hypersurface must become extinct before a round sphere that encloses it does. However, there are cases where this estimate is far from sharp. In this section, we will prove another extinction time estimate in terms of the geometric invariant called the width that was previously introduced.
Sweepouts and one-dimensional width
Let M be a smooth closed convex surface in R^3. Convexity implies that M is diffeomorphic to S^2 and, thus, we can fix a map σ : S^1 × [0, 1] → M that maps S^1 × {0} and S^1 × {1} to points and that is topologically a degree one map from S^2 to S^2. Let Ω_σ denote the homotopy class of such maps. Given this homotopy class, the width W = W(σ) was defined in (7) to be
W = inf σ∈Ωσ max s∈[0,1] E (σ(·, s)) ,
where the energy is given by
\[
E\big( \sigma(\cdot, s) \big) = \frac12 \int_{S^1} |\partial_x \sigma(x, s)|^2 \, dx \,.
\]
It is not hard to see that the width is continuous in the metric, though the curve realizing it may not be.
Figure 34: Three snapshots of a one-parameter family of "dumbbell" metrics. The geodesic realizing the width jumps from one bell to the other. The jump occurs in the middle picture where the geodesic is not unique.
Estimates for rate of change of width under the mean curvature flow
In [CM2], we proved the following estimate for the rate of change of the width under MCF:
Theorem 50. ([CM2]) If M_t is a MCF of closed convex hypersurfaces and W(t) is the width of M_t, then
\[
\frac{d}{dt} W \le -2\pi \tag{39}
\]
in the sense of limsup of forward difference quotients.
Note that W (t) is continuous but may not be differentiable in t, so (39) may not hold in the classical sense. Still, the fact that it holds in the sense of limsup of forward difference quotients is enough to integrate and get that
\[
W(t) \le W(0) - 2\pi t \,. \tag{40}
\]
Since the width is obviously positive until M_t becomes extinct, we see that M_t becomes extinct by time W(0)/(2π). Finally, observe that capping off a long thin cylinder gives convex surfaces with a fixed bound on the width (coming from the radius of the cylinder) but with arbitrarily large diameter. For these surfaces, the estimate on the extinction time coming from the width is much better than what one would get from the diameter.
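As a toy illustration, consider the shrinking round sphere in R^3 and take for granted that the width of the round sphere of radius r is π r^2 (the energy of an equatorial slice of the standard sweepout — an assumption used here only for illustration); then the estimate holds with room to spare:

import sympy as sp

t, r0 = sp.symbols('t r0', positive=True)

# Under MCF, the round sphere in R^3 of initial radius r0 has radius r(t) = sqrt(r0^2 - 4t).
r = sp.sqrt(r0**2 - 4*t)

# Assumed width of the round sphere of radius r: W = pi * r^2 (energy of an equatorial slice).
W = sp.pi * r**2

print(sp.diff(W, t))   # -4*pi, consistent with dW/dt <= -2*pi
print(sp.solve(W, t))  # [r0**2/4]: actual extinction time, before the bound W(0)/(2*pi) = r0**2/2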
Singularities for MCF
We will now leave convex hypersurfaces and go to the general case. As Grayson's dumbbell showed, there is no higher dimensional analog of Grayson's theorem for curves. The key for analyzing singularities is a blow up (or rescaling) analysis similar to the tangent cone analysis for minimal surfaces. As for minimal surfaces, the starting point is a monotonicity formula that gives uniform control over the rescalings.
Huisken's monotonicity
We will need to recall Huisken's monotonicity formula (see [H3], [E1], [E2]). To do this, first define the non-negative function Φ on R n+1 × (−∞, 0) by
\[
\Phi(x, t) = (-4\pi t)^{-\frac{n}{2}} \, e^{\frac{|x|^2}{4t}} \,, \tag{41}
\]
and then set Φ (x0,t0) (x, t) = Φ(x − x 0 , t − t 0 ). In 1990, G. Huisken proved the following monotonicity formula for mean curvature flow, [H3]:
Theorem 51. (Huisken,[H3]) If M t is a solution to the MCF and u is a C 2 function, then
\[
\frac{d}{dt} \int_{M_t} u \, \Phi_{(x_0,t_0)} = -\int_{M_t} \left| H \mathbf{n} - \frac{(x - x_0)^{\perp}}{2 (t_0 - t)} \right|^2 u \, \Phi_{(x_0,t_0)} + \int_{M_t} \left[ u_t - \Delta u \right] \Phi_{(x_0,t_0)} \,. \tag{42}
\]
When u is identically one, we get the monotonicity formula
\[
\frac{d}{dt} \int_{M_t} \Phi_{(x_0,t_0)} = -\int_{M_t} \left| H \mathbf{n} - \frac{(x - x_0)^{\perp}}{2 (t_0 - t)} \right|^2 \Phi_{(x_0,t_0)} \,. \tag{43}
\]
Huisken's density is the limit of ∫_{M_t} Φ_{x_0,t_0} as t → t_0. That is,
\[
\Theta_{x_0,t_0} = \lim_{t \to t_0} \int_{M_t} \Phi_{x_0,t_0} \,; \tag{44}
\]
this limit exists by the monotonicity (43) and the density is non-negative as the integrand Φ_{x_0,t_0} is non-negative. It is also interesting to note that Huisken's Gaussian volume is constant in time if and only if M_t is a self-similar shrinker with
M t = √ −t (M −1 ) .
One drawback to Huisken's formula is that it requires one to integrate over all of space. In 2001, K. Ecker discovered a local monotonicity formula where the integral is over bounded sets. This formula is modeled on Watson's mean-value formula for the linear heat equation; see [E2] for details.
Tangent flows
If M t is a MCF, then so is the parabolic scaling for any constant λ > 0
\[
M^{\lambda}_t = \lambda \, M_{\lambda^{-2} t} \,.
\]
When λ is large, this magnifies a small neighborhood of the origin in space-time.
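As a quick sketch of why parabolic scaling preserves the flow, consider the round-sphere solution, for which MCF reduces to dr/dt = −n/r; the rescaled radius solves the same ODE:

import sympy as sp

n, lam, r0 = sp.symbols('n lambda r0', positive=True)
t = sp.Symbol('t', real=True)

# Round-sphere MCF solution started from radius r0: r(t) = sqrt(r0^2 - 2*n*t).
r = sp.sqrt(r0**2 - 2*n*t)

# Parabolic rescaling M^lambda_t = lambda * M_{t/lambda^2} rescales the radius to:
r_resc = lam * r.subs(t, t/lam**2)

# It satisfies the same ODE dr/dt = -n/r, so the rescaled family is again a MCF.
print(sp.simplify(sp.diff(r_resc, t) + n/r_resc))  # 0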
If we now take a sequence λ_i → ∞ and let M^i_t = λ_i M_{λ_i^{-2} t}, then Huisken's monotonicity gives uniform Gaussian area bounds on the rescaled sequence. Combining this with Brakke's weak compactness theorem for mean curvature flow, [B], it follows that a subsequence of the M^i_t converges to a limiting flow M^∞_t (cf., for instance, page 675-676 of [W2] and chapter 7 of [I2]). Moreover, Huisken's monotonicity implies that the Gaussian area (centered at the origin) is now constant in time, so we conclude that M^∞_t is a self-similar shrinker. This M^∞_t is called a tangent flow at the origin. The same construction can be done at any point of space-time.
We will return to this point of view later when we describe results from [CM1] classifying the generic tangent flows.
Gaussian integrals and the F functionals
For t_0 > 0 and x_0 ∈ R^{n+1}, define F_{x_0,t_0} by
\[
F_{x_0,t_0}(\Sigma) = (4\pi t_0)^{-n/2} \int_\Sigma e^{-\frac{|x - x_0|^2}{4 t_0}} \, d\mu = \int_\Sigma \Phi_{x_0,t_0}(\cdot, 0) \,.
\]
We will think of x 0 as being the point in space that we focus on and t 0 as being the scale. By convention, we set F = F 0,1 .
We will next compute the first variation of the F functionals, but we will allow variations in all three parameters: the hypersurface Σ_0, the center x_0, and the scale t_0. Namely, fix a hypersurface Σ_0 ⊂ R^{n+1} with unit normal n, a function f, vectors x_0, y ∈ R^{n+1} and constants t_0, h ∈ R with t_0 > 0. Define variations (i.e., one-parameter families) by
Σ s = {x + s f (x) n(x) | x ∈ Σ 0 } , x s = x 0 + s y , t s = t 0 + s h .
The first variation is given by:
Lemma 52. ([CM1]) If Σ_s, x_s and t_s are variations as above, then ∂/∂s (F_{x_s,t_s}(Σ_s)) is
\[
(4\pi t_0)^{-\frac{n}{2}} \int_\Sigma \left[ f \left( H - \frac{\langle x - x_0, \mathbf{n} \rangle}{2 t_0} \right) + h \left( \frac{|x - x_0|^2}{4 t_0^2} - \frac{n}{2 t_0} \right) + \frac{\langle x - x_0, y \rangle}{2 t_0} \right] e^{-\frac{|x - x_0|^2}{4 t_0}} \, d\mu \,.
\]
Proof. From the first variation formula (for area), we know that
\[
(d\mu)' = f H \, d\mu \,. \tag{45}
\]
The s derivative of the weight e −|x−xs| 2 /(4ts) will have three separate terms coming from the variation of the surface, the variation of x s , and the variation of t s . Using, respectively, that ∇|x − x s | 2 = 2 (x − x s ),
\[
\partial_{t_s} \log \left( (4\pi t_s)^{-n/2} \, e^{-\frac{|x - x_s|^2}{4 t_s}} \right) = -\frac{n}{2 t_s} + \frac{|x - x_s|^2}{4 t_s^2} \,, \tag{46}
\]
and ∂_{x_s} |x - x_s|^2 = 2 (x_s - x), we get that the derivative of log((4π t_s)^{-n/2} e^{-|x - x_s|^2/(4 t_s)}) at s = 0 is given by
\[
-\frac{f}{2 t_0} \langle x - x_0, \mathbf{n} \rangle + h \left( \frac{|x - x_0|^2}{4 t_0^2} - \frac{n}{2 t_0} \right) + \frac{1}{2 t_0} \langle x - x_0, y \rangle \,. \tag{47}
\]
Combining this with (45) gives the lemma.
We will say that Σ is a critical point for F_{x_0,t_0} if it is simultaneously critical with respect to variations in all three parameters, i.e., variations in Σ and all variations in x_0 and t_0. Strictly speaking, it is the triplet (Σ, x_0, t_0) that is a critical point of F, but we will refer to Σ as a critical point of F_{x_0,t_0}. By Lemma 52, Σ is a critical point for F_{x_0,t_0} if
\[
\int_{\Sigma_0} \left[ f \left( H - \frac{\langle x - x_0, \mathbf{n} \rangle}{2 t_0} \right) + h \left( \frac{|x - x_0|^2}{4 t_0^2} - \frac{n}{2 t_0} \right) + \frac{\langle x - x_0, y \rangle}{2 t_0} \right] e^{-\frac{|x - x_0|^2}{4 t_0}} = 0
\]
for any choice of f , y and h. It is clear why this holds for f , but why h and y? The answer is that h and y correspond to scalings and translations, respectively, and these can equivalently be achieved by appropriate choices of f .
A self-shrinker Σ satisfies H = ⟨x, n⟩/2, so it is a critical point of the F = F_{0,1} functional. We will use these functionals to understand dynamical stability of self-shrinkers. The first step is to compute the Hessian, or second variation, of this functional. Before doing this, it will be useful to introduce some of the operators that will arise.
Weighted inner products and the drift Laplacian
Let f and g be functions. It is natural to look at the weighted L^2 inner product
\[
\int_\Sigma f \, g \, e^{-\frac{|x|^2}{4}} \,.
\]
Similarly, we have the weighted inner product for gradients:
\[
\int_\Sigma \langle \nabla^T f, \nabla^T g \rangle \, e^{-\frac{|x|^2}{4}} \,.
\]
Since
\[
\mathrm{div}_\Sigma \left( e^{-\frac{|x|^2}{4}} \nabla^T f \right) = \left( \Delta_\Sigma f - \tfrac12 \langle x, \nabla^T f \rangle \right) e^{-\frac{|x|^2}{4}} \,,
\]
the divergence theorem applied to g e^{-|x|^2/4} ∇^T f gives
\[
\int_\Sigma \langle \nabla^T f, \nabla^T g \rangle \, e^{-\frac{|x|^2}{4}} = -\int_\Sigma g \left( \Delta_\Sigma f - \tfrac12 \langle x, \nabla^T f \rangle \right) e^{-\frac{|x|^2}{4}} \,.
\]
We call this operator the "drift Laplacian" $\mathcal{L}$:
\[
\mathcal{L} f = \Delta_\Sigma f - \tfrac12 \langle x, \nabla^T f \rangle \,.
\]
It follows that $\mathcal{L}$ is symmetric in the weighted space. The $\mathcal{L}$ operator plays a role for self-shrinkers similar to the one the Laplacian plays for minimal surfaces. To see this, recall that given f : R^{n+1} → R, the Laplacian on Σ applied to f is
\[
\Delta_\Sigma f = \sum_{i=1}^{n} \mathrm{Hess}_f (e_i, e_i) - \langle \nabla f, \mathbf{n} \rangle \, H \,,
\]
where e i is a frame for Σ and Hess f is the R n+1 Hessian of f . Therefore, when Σ is a self-shrinker, it follows that
\[
\mathcal{L} x_i = -\tfrac12 x_i \,, \tag{48}
\]
\[
\mathcal{L} |x|^2 = 2n - |x|^2 \,. \tag{49}
\]
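For the reader's convenience, (48) follows by combining the formula for ∆_Σ above with the self-shrinker equation H = ⟨x, n⟩/2 (a short sketch of the computation):
\[
\mathcal{L} x_i = \Delta_\Sigma x_i - \tfrac12 \langle x, \nabla^T x_i \rangle = -H n_i - \tfrac12 \left( x_i - \langle x, \mathbf{n} \rangle n_i \right) = -\tfrac12 x_i - n_i \left( H - \tfrac{\langle x, \mathbf{n} \rangle}{2} \right) = -\tfrac12 x_i \,,
\]
and (49) then follows from the product rule $\mathcal{L}(x_i^2) = 2 x_i \mathcal{L} x_i + 2 |\nabla^T x_i|^2$ together with $\sum_{i=1}^{n+1} |\nabla^T x_i|^2 = n$.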
The second formula is closely related to the fact that self-shrinkers are critical points for variations in all three parameters of the F 0,1 functional. Namely, we saw earlier that on any self-shrinker Σ we must have
\[
\int_\Sigma \left( \frac{|x|^2}{4} - \frac{n}{2} \right) e^{-\frac{|x|^2}{4}} = 0 \,.
\]
This can also be seen by using the $\mathcal{L}$ operator. To do this, use the symmetry of $\mathcal{L}$ to get for any u and v (that do not grow too quickly)
\[
\int_\Sigma (u \, \mathcal{L} v) \, e^{-\frac{|x|^2}{4}} = -\int_\Sigma \langle \nabla^T u, \nabla^T v \rangle \, e^{-\frac{|x|^2}{4}} \,.
\]
Applying this with u = 1 and v = |x|^2 and using (49) gives
\[
\int_\Sigma \left( 2n - |x|^2 \right) e^{-\frac{|x|^2}{4}} = 0 \,.
\]
We will need another second order operator L, which differs from the drift Laplacian $\mathcal{L}$ by a zeroth order term:
\[
L = \mathcal{L} + |A|^2 + \tfrac12 \,,
\]
where, as above, the drift Laplacian $\mathcal{L}$ is given by
\[
\mathcal{L} f = \Delta_\Sigma f - \tfrac12 \langle x, \nabla^T f \rangle \,.
\]
Clearly, L is also symmetric with respect to the weighted inner product. The operator L plays the role of the second variation operator ∆ + |A| 2 + Ric(n, n) for minimal surfaces.
Second variation
Let Σ 0 ⊂ R n+1 be a self-shrinker with unit normal n, a function f , a vector y ∈ R n+1 and a constant h ∈ R. We will assume that Σ 0 is complete, ∂Σ 0 = ∅, and Σ 0 has polynomial volume growth (so that all of our Gaussian integrals converge). As before, define variations
Σ s = {x + s f (x) n(x) | x ∈ Σ 0 } , x s = s y , t s = 1 + s h .
Theorem 54. ([CM1]) If we set F'' = ∂^2_s |_{s=0} (F_{x_s,t_s}(Σ_s)), then
\[
F'' = (4\pi)^{-n/2} \int_\Sigma \left( -f L f + 2 f h H - h^2 H^2 + f \langle y, \mathbf{n} \rangle - \frac{\langle y, \mathbf{n} \rangle^2}{2} \right) e^{-\frac{|x|^2}{4}} \, d\mu \,. \tag{50}
\]
First observation: All (compact) critical points are unstable in the usual sense. Namely, if we set h = 0, y = 0 and f ≡ 1 so that
\[
L \, 1 = \mathcal{L} \, 1 + |A|^2 + \tfrac12 = |A|^2 + \tfrac12 \,,
\]
then we see that
\[
\left. \frac{\partial^2}{\partial s^2} \right|_{s=0} \left( F_{0,1}(\Sigma_s) \right) = (4\pi)^{-\frac{n}{2}} \int_{\Sigma_0} \left( -|A|^2 - \tfrac12 \right) e^{-\frac{|x|^2}{4}} < 0 \,.
\]
This instability explains why there are very few examples of embedded self-shrinkers that have been proven to exist. Namely, they are difficult to construct variationally since they tend to be highly unstable critical points of F 0,1 . In fact, we get the same instability for non-compact self-shrinkers, at least when the volume growth is under control:
Theorem 55. ([CM1]) If Σ ⊂ R^{n+1} is a smooth complete self-shrinker without boundary and with polynomial volume growth, then there exists a function u with compact support so that
\[
-\int_\Sigma (u \, L u) \, e^{-\frac{|x|^2}{4}} < 0 \,. \tag{51}
\]
The L operator applied to H and translations
The link between the mean curvature H of a self-shrinker Σ and the second variation is that H is an eigenfunction for L with eigenvalue −1; moreover, if y is a constant vector, then y, n is also an eigenfunction for L.
Theorem 56. ([CM1]) The mean curvature H and the normal part ⟨v, n⟩ of a constant vector field v are eigenfunctions of L with
\[
L H = H \quad \text{and} \quad L \langle v, \mathbf{n} \rangle = \tfrac12 \langle v, \mathbf{n} \rangle \,. \tag{52}
\]
This will be important later, but it is worth noting that this explains an odd fact in the second variation formula. Namely, since L is a symmetric operator (in the weighted L^2 space) and H and ⟨v, n⟩ are eigenfunctions with different eigenvalues, it follows that H and ⟨v, n⟩ are orthogonal. This explains why there was no H ⟨y, n⟩ term in the second variation, Theorem 54.
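As an illustration, both identities in (52) can be verified by hand on the self-shrinking cylinder S^1_{√2} × R ⊂ R^3, parametrized by (√2 cos θ, √2 sin θ, z) with outward normal n = (cos θ, sin θ, 0). There
\[
H = \tfrac{1}{\sqrt{2}} \,, \quad |A|^2 = \tfrac12 \,, \quad x^T = (0, 0, z) \,,
\]
so $\mathcal{L} H = 0$ and $L H = (|A|^2 + \tfrac12) H = H$. For the constant vector $v = (1, 0, 0)$ one has $\langle v, \mathbf{n} \rangle = \cos\theta$, and on the cylinder $\Delta_\Sigma \cos\theta = -\tfrac12 \cos\theta$ while $\langle x, \nabla^T \cos\theta \rangle = 0$, so
\[
L \langle v, \mathbf{n} \rangle = -\tfrac12 \cos\theta + \left( |A|^2 + \tfrac12 \right) \cos\theta = \tfrac12 \langle v, \mathbf{n} \rangle \,.
\]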
Remark 57. Interestingly, there is an analogous situation for Ricci flow. In this case, Cao, Hamilton and Ilmanen, [CaHI], computed the second variation formula for Perelman's shrinker entropy and discovered an analog of the L operator. Moreover, Cao and Zhu showed in [CaZ] that the Ricci tensor is an eigenvector for this operator.
Smooth compactness theorem for self-shrinkers
In [CM3], we proved the following smooth compactness theorem for self-shrinkers in R 3 :
Theorem 58. ([CM3]) Given an integer g ≥ 0 and a constant V > 0, the space of smooth complete embedded self-shrinkers Σ ⊂ R^3 with

• genus at most g,

• ∂Σ = ∅,

• Area(B_R(x_0) ∩ Σ) ≤ V R^2 for all x_0 ∈ R^3 and all R > 0

is compact.
Namely, any sequence of these has a subsequence that converges in the topology of C m convergence on compact subsets for any m ≥ 2.
The surfaces in this theorem are assumed to be homeomorphic to closed surfaces with finitely many disjoint disks removed. The genus of the surface is defined to be the genus of the corresponding closed surface. For example, an annulus is a sphere with two disks removed and, thus, has genus zero.
The main motivation for this result is that self-shrinkers model singularities in mean curvature flow. Thus, the above theorem can be thought of as a compactness result for the space of all singularities.
This should be compared with the Choi-Schoen compactness theorem for minimal surfaces in a manifold with positive Ricci curvature, [CiSc]. However, the conformal metric of Lemma 48 is not complete and even its scalar curvature changes sign, so that result does not apply directly.
The proof of smooth compactness
There are five main points in the proof of Theorem 58:
1. The bound on the genus plus local area bounds imply local bounds on |A| 2 (this follows from the local Gauss-Bonnet estimate in theorem 3 of [I1]).
We will say a bit more about step (4) and why multiplicity implies stability. The basic point is that as 2 sheets come together, they are both graphs over the limit and the difference w i between these 2 graphs does not vanish (by embeddedness). Thus, w i does not change sign and (almost) satisfies the linearized equation Lw i = 0. The w i 's go to 0, but the Harnack inequality gives convergence for (a subsequence of)
\[
u_i = \frac{w_i}{w_i(p)} \,.
\]
It is not hard to show that the limiting function u is a positive solution of Lu = 0 with u(p) = 1. Of course, u is initially defined only away from the isolated singular points, but it is possible to show that it extends across these potential singularities. Finally, as we saw for minimal surfaces, this implies positivity of the operator L.
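The last step, that a positive solution of L u = 0 forces the quadratic form of L to be non-negative, is the usual "log trick" from the theory of stable minimal surfaces. Here is a minimal sketch, written for an operator Δ + q with q its zeroth-order term (the Gaussian-weighted case needed here is identical once the weight e^{−|x|^2/4} is carried through the integration by parts): given a compactly supported test function φ, write φ = u ψ; then
\[
\int |\nabla \varphi|^2 - q\,\varphi^2
= \int \Big( \psi^2 |\nabla u|^2 + 2\,u\,\psi\,\langle \nabla u, \nabla \psi\rangle + u^2 |\nabla \psi|^2 - q\,u^2 \psi^2 \Big)
= \int u^2\, |\nabla \psi|^2 \;\ge\; 0\,,
\]
where the second equality comes from multiplying Δu = −q u by u ψ^2 and integrating by parts.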
The entropy
The F_{x_0,t_0} functional was defined for t_0 > 0 and x_0 ∈ R^{n+1} by
\[
F_{x_0,t_0}(\Sigma) = (4\pi t_0)^{-n/2} \int_\Sigma e^{-\frac{|x-x_0|^2}{4 t_0}}\, d\mu\,.
\]
If M t flows by mean curvature and t > s, then Huisken's monotonicity formula gives
\[
F_{x_0,t_0}(M_t) \;\le\; F_{x_0,\,t_0+(t-s)}(M_s)\,. \tag{53}
\]
Thus, we see that a fixed F x0,t0 functional is not monotone under the flow, but the supremum over all of these functionals is monotone. We call this invariant the entropy and denote it by
\[
\lambda(\Sigma) = \sup_{x_0,\,t_0} F_{x_0,t_0}(\Sigma)\,. \tag{54}
\]
The entropy has four key properties:
1. λ is invariant under dilations, rotations, and translations.
2. λ(M_t) is non-increasing under MCF (a short derivation from (53) is sketched after this list).
3. If Σ is a self-shrinker, then λ(Σ) = F 0,1 (Σ) = Θ 0,0 .
4. Entropy is preserved under products with a line, i.e., λ(Σ × R) = λ(Σ).
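To see property 2, here is the one-line check using only Huisken's monotonicity (53): for t > s, taking the supremum over all centers x_0 and all scales gives
\[
\lambda(M_t) = \sup_{x_0,\,t_0} F_{x_0,t_0}(M_t)
\;\le\; \sup_{x_0,\,t_0} F_{x_0,\,t_0+(t-s)}(M_s)
\;\le\; \sup_{x_0,\,t_0'>0} F_{x_0,t_0'}(M_s) = \lambda(M_s)\,.
\]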
A few entropies
Stone, [St], computed the densities Θ 0,0 , and thus also λ, for self-shrinking spheres, planes and cylinders:
• λ(R 2 ) = 1.
• λ(S^2_2) = 4/e ≈ 1.4715.
• λ(S^1_{√2}) = √(2π/e) ≈ 1.5203.
Moreover, he also showed that λ(S^n) is decreasing in n.
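These values can be checked directly from the definition of F_{0,1} together with property 3 above (a quick verification; recall that the self-shrinking sphere in R^3 has radius 2 and the self-shrinking circle in R^2 has radius √2, and that by property 4 the cylinder S^1_{√2} × R ⊂ R^3 has the same entropy as the circle):
\[
\lambda\big(S^2_2\big) = F_{0,1}\big(S^2_2\big) = (4\pi)^{-1}\,\mathrm{Area}\big(S^2_2\big)\, e^{-\frac{2^2}{4}} = (4\pi)^{-1}\,(16\pi)\, e^{-1} = \frac{4}{e}\,,
\]
\[
\lambda\big(S^1_{\sqrt 2}\big) = F_{0,1}\big(S^1_{\sqrt 2}\big) = (4\pi)^{-1/2}\,\big(2\pi\sqrt 2\big)\, e^{-\frac{1}{2}} = \sqrt{\frac{2\pi}{e}}\,.
\]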
12.2 How entropy will be used
The main point about λ is that it can be used to rule out certain singularities because of the monotonicity of entropy under MCF and its invariance under dilations:
Corollary 59. If Σ is a self-shrinker given by a tangent flow for M t with t > 0, then
\[
F_{0,1}(\Sigma) = \lambda(\Sigma) \;\le\; \lambda(M_0)\,.
\]
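Here is the chain of inequalities behind the corollary, suppressing the convergence issues and writing (x_0, T) for the space-time point at which the tangent flow Σ arises (notation introduced only for this sketch): the Gaussian density of the flow at (x_0, T) is the limit of the Gaussian ratios F_{x_0,T−t}(M_t), and for the tangent flow it agrees with F_{0,1}(Σ), so
\[
\lambda(\Sigma) = F_{0,1}(\Sigma) = \lim_{t \nearrow T} F_{x_0,\,T-t}(M_t)
\;\le\; \limsup_{t \nearrow T}\, \lambda(M_t) \;\le\; \lambda(M_0)\,,
\]
where the middle inequality is just the definition (54) of λ as a supremum and the last inequality is property 2.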
Classification of entropy stable singularities
To illustrate our results, we will first specialize to the case where n = 2, that is to mean curvature flow of surfaces in R 3 .
Theorem 60. ([CM1]) Suppose that Σ ⊂ R^3 is a smooth complete embedded self-shrinker without boundary and with polynomial volume growth.
• If Σ is not a sphere, a plane, or a cylinder, then there is a graph Σ̃ over Σ of a compactly supported function with arbitrarily small C^m norm (for any fixed m) so that λ(Σ̃) < λ(Σ).
In particular, Σ cannot arise as a tangent flow to the MCF starting from Σ̃.
Thus, spheres, planes and cylinders are the only generic self-shrinkers. This should be contrasted with Huisken's result for convex flows, where we see that any small perturbation remains convex and, thus, still becomes extinct at a round point.
Essentially the same result holds in all dimensions, with one small difference: the perturbation does not have compact support if the shrinker is a product of a line with an unstable shrinker in one dimension less; see [CM1].
F-stability
We saw that every self-shrinker is unstable as a critical point of F_{0,1} and, in fact, it is already unstable just from the variations corresponding to translations and dilations. Roughly speaking, we will say that a self-shrinker is F-stable if these are the only sources of instability (this is essentially orbital stability for a corresponding dynamical system). Namely, a self-shrinker Σ is F-stable if for every variation Σ_s there exist variations x_s and t_s so that
\[
\frac{d^2}{ds^2}\Big|_{s=0} F_{x_s,t_s}(\Sigma_s) \;\ge\; 0\,.
\]
It is not hard to see that S n and R n are F -stable:
Lemma 61. (Colding-Minicozzi, [CM1]) The n-sphere of radius √ 2n in R n+1 is F -stable.
Proof. Note that x^T = 0, A is 1/√(2n) times the metric, and L = Δ + 1. Therefore, by Theorem 54, the lemma will follow from showing that given an arbitrary normal variation f n, there exist h ∈ R and y ∈ R^{n+1} so that
\[
\int_{S^n} \Big( -f(\Delta f + f) + \sqrt{2n}\, f h - \frac{n}{2}\, h^2 + f\,\langle y,\mathbf{n}\rangle - \frac{\langle y,\mathbf{n}\rangle^2}{2} \Big) \;\ge\; 0\,. \tag{55}
\]
Recall that the eigenvalues of the Laplacian on the n-sphere of radius one are given by k^2 + (n − 1) k for k = 0, 1, . . . (see, e.g., (14) on page 35 of Chavel, [Ca]), with 0 corresponding to the constant function and the first non-zero eigenvalue n corresponding to the restrictions of the linear functions in R^{n+1}. It follows that the eigenvalues of Δ on the sphere of radius √(2n) are given by
\[
\mu_k = \frac{k^2 + (n-1)\,k}{2n}\,, \tag{56}
\]
with μ_0 = 0 corresponding to the constant functions and μ_1 = 1/2 corresponding to the linear functions. Let E be the space of W^{1,2} functions that are orthogonal to constants and linear functions; equivalently, E is the span of all the eigenfunctions for μ_k for all k ≥ 2. Therefore, we can choose a ∈ R and z ∈ R^{n+1} so that
\[
f_0 \equiv f - a - \langle z, \mathbf{n}\rangle \in E\,. \tag{57}
\]
Using the orthogonality of the different eigenspaces, we get that
\[
\int_{S^n} -f(\Delta f + f)
\;\ge\; (\mu_2 - 1)\int_{S^n} f_0^2 + (\mu_1 - 1)\int_{S^n} \langle z,\mathbf{n}\rangle^2 + (\mu_0 - 1)\int_{S^n} a^2
\;=\; \frac{1}{n}\int_{S^n} f_0^2 - \frac{1}{2}\int_{S^n} \langle z,\mathbf{n}\rangle^2 - \int_{S^n} a^2\,. \tag{58}
\]
Again using the orthogonality of different eigenspaces, we get
\[
\int_{S^n} \big( \sqrt{2n}\, f h + f\,\langle y,\mathbf{n}\rangle \big)
\;=\; \int_{S^n} \big( \sqrt{2n}\, a h + \langle z,\mathbf{n}\rangle\,\langle y,\mathbf{n}\rangle \big)\,. \tag{59}
\]
Combining (58) and (59), we get that the left hand side of (55) is greater than or equal to
\[
\int_{S^n} \Big( \frac{f_0^2}{n} - \frac{1}{2}\,\langle z,\mathbf{n}\rangle^2 - a^2 + \sqrt{2n}\, a h - \frac{n}{2}\, h^2 + \langle z,\mathbf{n}\rangle\,\langle y,\mathbf{n}\rangle - \frac{\langle y,\mathbf{n}\rangle^2}{2} \Big)
= \int_{S^n} \Big( \frac{f_0^2}{n} - \frac{1}{2}\,\big(\langle z,\mathbf{n}\rangle - \langle y,\mathbf{n}\rangle\big)^2 - \Big(a - \frac{\sqrt{n}\, h}{\sqrt{2}}\Big)^2 \Big)\,. \tag{60}
\]
This can be made non-negative by choosing y = z and h = √2 a/√n.
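Lemma 61 handles the sphere; here is a minimal sketch of why the hyperplane R^n ⊂ R^{n+1} is also F-stable (this is not the argument of [CM1] verbatim, just the same second variation formula specialized to H = 0 and A = 0, up to the positive factor (4π)^{−n/2}, with the decomposition f = a + ℓ + f_0 introduced only for this sketch). On the hyperplane x^T = x, so L reduces to the Ornstein-Uhlenbeck operator \mathcal{L} = Δ − (1/2)⟨x, ∇·⟩ plus 1/2; in the Gaussian-weighted L^2 space, \mathcal{L} has eigenvalue −k/2 on Hermite polynomials of degree k, with constants for k = 0 and linear functions for k = 1. Writing f = a + ℓ + f_0 with a constant, ℓ linear, and f_0 weighted-orthogonal to both, and using that ⟨y, n⟩ is constant along the hyperplane,
\[
\int_{\mathbf{R}^n} \Big( -f\,L f + f\,\langle y,\mathbf{n}\rangle - \frac{\langle y,\mathbf{n}\rangle^2}{2} \Big)\, e^{-\frac{|x|^2}{4}}
\;\ge\; \int_{\mathbf{R}^n} \Big( \frac{f_0^2}{2} - \frac{1}{2}\,\big(a - \langle y,\mathbf{n}\rangle\big)^2 \Big)\, e^{-\frac{|x|^2}{4}}\,,
\]
which is non-negative once y is chosen with ⟨y, n⟩ = a (the terms involving h drop out since H = 0).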
The splitting theorem
The importance of F -stability comes from the following "splitting theorem" from [CM1]: If Σ 0 is a self-shrinker that does not split off a line and Σ 0 is F -unstable, then there is a compactly supported variation Σ s with
\[
\lambda(\Sigma_s) < \lambda(\Sigma_0) \quad \text{for all } s \neq 0\,.
\]
The precise statement of the splitting theorem is:
Theorem 62. ([CM1]) Suppose that Σ ⊂ R^{n+1} is a smooth complete embedded self-shrinker with ∂Σ = ∅, with polynomial volume growth, and Σ does not split off a line isometrically.
If Σ is F-unstable, then there is a compactly supported variation Σ_s with Σ_0 = Σ so that λ(Σ_s) < λ(Σ) for all s ≠ 0.
The idea of the splitting theorem is roughly:
• Use that Σ does not split to show that F 0,1 is a strict maximum for F x0,t0 .
• Deform Σ 0 in the F -unstable direction Σ s .
• Consider the function G(s, x_0, t_0) given by G(s, x_0, t_0) = F_{x_0,t_0}(Σ_s), and show that this has a strict maximum at x_0 = 0, t_0 = 1 and s = 0.
The precise statement of the first step is:
Lemma 63. ([CM1]) Suppose that Σ is a smooth complete embedded self-shrinker with ∂Σ = ∅, polynomial volume growth, and Σ does not split off a line isometrically. Given ε > 0, there exists δ > 0 so that
\[
\sup\big\{ F_{x_0,t_0}(\Sigma) \;\big|\; |x_0| + |\log t_0| > \varepsilon \big\} < \lambda - \delta\,. \tag{61}
\]
Proof (sketch of Theorem 62). Assume that Σ is not F-stable and, thus, there is a one-parameter normal variation Σ_s for s ∈ [−2ε, 2ε] with Σ_0 = Σ so that:
(V1) For each s, the variation vector field is given by a function f Σs times the normal n Σs where every f Σs is supported in a fixed compact subset of R n+1 .
(V2) For any variations x_s and t_s with x_0 = 0 and t_0 = 1, we get that
\[
\partial_{ss}\big|_{s=0}\, F_{x_s,t_s}(\Sigma_s) < 0\,. \tag{62}
\]
We will use this to prove that Σ is also entropy-unstable.
Setting up the proof: Define a function G : R^{n+1} × R^+ × [−2ε, 2ε] → R^+ by
\[
G(x_0, t_0, s) = F_{x_0,t_0}(\Sigma_s)\,. \tag{63}
\]
We will show that there exists some ε_1 > 0 so that if s ≠ 0 and |s| ≤ ε_1, then
\[
\lambda(\Sigma_s) \equiv \sup_{x_0,\,t_0} G(x_0, t_0, s) < G(0, 1, 0) = \lambda(\Sigma)\,, \tag{64}
\]
and this will give the theorem with Σ̃ equal to Σ_s for any s ≠ 0 in (−ε_1, ε_1); by taking s > 0 small enough, we can arrange that Σ̃ is as close as we like to Σ_0 = Σ. The remainder of the proof is devoted to establishing (64). The key points will be:
1. G has a strict local maximum at (0, 1, 0).
2. The restriction of G to Σ 0 , i.e., G(x 0 , t 0 , 0), has a strict global maximum at (0, 1).
3. |∂ s G| is uniformly bounded on compact sets.
4. G(x 0 , t 0 , s) is strictly less than G(0, 1, 0) whenever |x 0 | is sufficiently large.
5. G(x 0 , t 0 , s) is strictly less than G(0, 1, 0) whenever |log t 0 | is sufficiently large.
The proof of (64) assuming (1)-(5): We will divide into three separate regions depending on the size of |x 0 | 2 + (log t 0 ) 2 . First, it follows from steps (4) and (5) that there is some R > 0 so that (64) holds for every s whenever
\[
|x_0|^2 + (\log t_0)^2 > R^2\,. \tag{65}
\]
Second, as long as s is small, step (1) implies that (64) holds when |x_0|^2 + (log t_0)^2 is sufficiently small.
Finally, in the intermediate region where |x_0|^2 + (log t_0)^2 is bounded from above and bounded uniformly away from zero, step (2) says that G is strictly less than λ(Σ) at s = 0 and step (3) says that the s derivative of G is uniformly bounded. Hence, there exists some ε_3 > 0 so that G(x_0, t_0, s) is strictly less than λ(Σ) whenever (x_0, t_0) is in the intermediate region as long as |s| ≤ ε_3.
This completes the proof of (64) assuming (1)-(5). See [CM1] for the proofs of (1)-(5).
Classification of F-stable self-shrinkers
Theorem 64. ([CM1]) If Σ is a smooth complete embedded self-shrinker in R^{n+1} without boundary and with polynomial volume growth that is F-stable with respect to compactly supported variations, then it is either the round sphere or a hyperplane. (The theorem holds more generally when n ≤ 6 and Σ is an oriented integral varifold that is smooth off of a singular set with locally finite (n − 2)-dimensional Hausdorff measure.)
Combined with the splitting theorem, this gives the classification of generic self-shrinkers.
The main steps in the proof of Theorem 64 are:
• Show that F -stability implies mean convexity (i.e., H ≥ 0).
• Classify the mean convex self-shrinkers (see Theorem 65 below).
The classification of mean convex self-shrinkers began with [H3], where Huisken showed that the only smooth closed self-shrinkers with non-negative mean curvature in R n+1 (for n > 1) are round spheres (i.e., S n ). When n = 1, Abresch and Langer, [AbLa], had already shown that the circle is the only simple closed self-shrinking curve. In a second paper, [H4], Huisken dealt with the non-compact case. He showed in [H4] that the only smooth open embedded self-shrinkers in R n+1 with H ≥ 0, polynomial volume growth, and |A| bounded are isometric products of a round sphere and a linear subspace (i.e. S k × R n−k ⊂ R n+1 ). We will show that Huisken's classification holds even without the |A| bound which will be crucial for our applications:
Theorem 65. ( [H3], [H4] and [CM1]) S k × R n−k are the only smooth complete embedded self-shrinkers without boundary, with polynomial volume growth, and H ≥ 0 in R n+1 .
The S k factor in Theorem 65 is round and has radius √ 2k; we allow the possibilities of a hyperplane (i.e., k = 0) or a sphere (n − k = 0).
Proof in the compact case
Since L is symmetric in the weighted space, its spectral theory is similar to the Laplacian:
1. There are eigenvalues µ 1 < µ 2 ≤ . . . with µ i → ∞ and eigenfunctions u i with L u i = −µ i u i .
2. The lowest eigenfunction u 1 does not change sign.
3. If μ_i ≠ μ_j, then u_i and u_j are orthogonal, i.e.,
\[
\int_\Sigma u_i\, u_j\, e^{-\frac{|x|^2}{4}} = 0\,.
\]
Let Σ be a closed self-shrinker and suppose that H changes sign. We will show that Σ is F -unstable. We know that L H = H. Since H changes sign, it is NOT the lowest eigenfunction. It follows that µ 1 < −1. Let u 1 be the corresponding (lowest) eigenfunction and define the variation Σ s = {x + s u 1 (x) n(x) | x ∈ Σ} .
Given variations x_s = s y and t_s = 1 + s h, the second variation is (4π)^{−n/2} times
\[
\int_\Sigma \Big( \mu_1 u_1^2 + 2\,u_1 h H - h^2 H^2 + u_1\,\langle y,\mathbf{n}\rangle - \frac{\langle y,\mathbf{n}\rangle^2}{2} \Big)\, e^{-\frac{|x|^2}{4}}\,.
\]
But u_1 is orthogonal to the other eigenfunctions H and ⟨y, n⟩, so the second variation equals (4π)^{−n/2} times
\[
\int_\Sigma \Big( \mu_1 u_1^2 - h^2 H^2 - \frac{\langle y,\mathbf{n}\rangle^2}{2} \Big)\, e^{-\frac{|x|^2}{4}}\,.
\]
This is obviously negative no matter what h and y are.
An application
Fix D, V and g and let M D,V,g be all self-shrinkers in R 3 with:
• Diameter at most D.
• Entropy at most V .
• Genus at most g.
Then: the C-M compactness theorem implies that M_{D,V,g} is smoothly compact. Combined with our entropy stability: there exists ε > 0 so that if Σ ∈ M_{D,V,g} is not S^2, then there is a graph Σ̃ over Σ with λ(Σ̃) ≤ λ(Σ) − ε.
Piece-wise MCF
We next define an ad hoc notion of generic MCF that requires the least amount of technical set-up, yet should suffice for many applications.
A piece-wise MCF is a finite collection of MCF's M^i_t on time intervals [t_i, t_{i+1}] so that each M^{i+1}_{t_{i+1}} is the graph over M^i_{t_{i+1}} of a function u_{i+1},
\[
\mathrm{Area}\big(M^{i+1}_{t_{i+1}}\big) = \mathrm{Area}\big(M^i_{t_{i+1}}\big)\,, \qquad
\lambda\big(M^{i+1}_{t_{i+1}}\big) \le \lambda\big(M^i_{t_{i+1}}\big)\,.
\]
With this definition, area is non-increasing in t even across the jumps.
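The reason area cannot increase is a combination of the first variation of area under MCF on each smooth piece and the matching condition above (a short check; the formula below is the standard one for a closed hypersurface moving by mean curvature):
\[
\frac{d}{dt}\,\mathrm{Area}\big(M^i_t\big) = -\int_{M^i_t} H^2\, d\mu \;\le\; 0 \quad\text{on each } [t_i, t_{i+1}]\,,
\]
while Area(M^{i+1}_{t_{i+1}}) = Area(M^i_{t_{i+1}}) across each jump; the same reasoning with λ in place of Area uses property 2 of the entropy together with the inequality required at the jumps.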
Generic compact singularities
In this subsection, we will assume that the multiplicity one conjecture of Ilmanen holds. Under this assumption, we have the following generalization of the Grayson-Huisken theorems:
Theorem 66. ([CM1]) For any closed embedded surface Σ ⊂ R^3, there exists a piece-wise MCF M_t starting at Σ and defined up to time t_0 where the surfaces become singular. Moreover, M_t can be chosen so that if
\[
\liminf_{t\to t_0}\, \frac{\mathrm{diam}\, M_t}{\sqrt{t_0 - t}} < \infty\,,
\]
then M t becomes extinct in a round point.
14 Non-compact self-shrinkers
The spectrum of L when Σ is non-compact
If Σ is non-compact, there may not be a lowest eigenvalue for L = \mathcal{L} + |A|^2 + 1/2 (here \mathcal{L} = Δ − (1/2)⟨x, ∇·⟩ denotes the drift Laplacian). However, we can still define the bottom of the spectrum (which we still call μ_1) by
\[
\mu_1 = \inf_f \; \frac{-\int_\Sigma (f\, L f)\, e^{-\frac{|x|^2}{4}}}{\int_\Sigma f^2\, e^{-\frac{|x|^2}{4}}}\,,
\]
where the infimum is taken over smooth functions f with compact support. Warning: Since Σ is non-compact, we must allow the possibility that µ 1 = −∞.
We have the following characterization of µ 1 generalizing the compact case (as usual Σ is a complete self-shrinker without boundary and with polynomial volume growth):
Proposition 67. ([CM1]) If μ_1 ≠ −∞, then:
• There is a positive function u on Σ with L u = −µ 1 u.
• If v is in the weighted W 1,2 space and L v = −µ 1 v, then v = C u for C ∈ R.
• |A| |x| is in the weighted L 2 space.
(This proposition combines lemmas 9.15 and 9.25 in [CM1].)
µ 1 when H changes sign
We have already seen that the mean curvature H is an eigenfunction of L with eigenvalue −1. The next theorem shows that if H changes sign, then the bottom of the spectrum μ_1 is strictly less than −1.
Theorem 68. ([CM1]) If the mean curvature H changes sign, then μ_1 < −1.
The idea of the proof is as follows:
1. We can assume that μ_1 ≠ −∞. Thus, Proposition 67 gives that |A| |x| is in the weighted L^2 space.
2. Differentiating the self-shrinker equation H = (1/2)⟨x, n⟩ gives
\[
2\,\nabla_{e_i} H = -A_{ij}\,\langle x, e_j\rangle\,.
\]
It follows from this and (1) that H, ∇H, and |A| H are in the weighted L 2 space.
3. The bounds in (2) are enough to justify using H as a test function in the definition of μ_1 and get that μ_1 ≤ −1 (the formal computation is sketched after this list).
4. It remains to rule out that µ 1 = −1. But this would imply that H does not change sign by the uniqueness part of Proposition 67.
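Formally, step 3 is just the Rayleigh quotient from the definition of μ_1 evaluated at f = H, together with L H = H (a sketch: the whole point of the weighted L^2 bounds in step 2 is to justify inserting H, which need not have compact support, via a cutoff argument):
\[
\mu_1 \;\le\; \frac{-\int_\Sigma H\,(L H)\, e^{-\frac{|x|^2}{4}}}{\int_\Sigma H^2\, e^{-\frac{|x|^2}{4}}}
= \frac{-\int_\Sigma H^2\, e^{-\frac{|x|^2}{4}}}{\int_\Sigma H^2\, e^{-\frac{|x|^2}{4}}} = -1\,.
\]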
14.3 The F -unstable variation when µ 1 < −1
We have shown that if H changes sign, then Σ has µ 1 < −1. When Σ was closed, it followed immediately from this and the orthogonality of eigenfunctions with different eigenvalues (for a symmetric operator) that Σ was F -unstable. However, this orthogonality uses an integration by parts which is not justified when Σ is open. Instead, we show that the lowest eigenfunction on a sufficiently large ball is almost orthogonal to H and the translations. This turns out to be enough to prove F -instability:
Lemma 69. ([CM1]) If μ_1 < −1, then there exists R̄ so that if R ≥ R̄ and u is a Dirichlet eigenfunction for μ_1(B_R), then for any h ∈ R and any y ∈ R^{n+1} we have
\[
\Big[ -u\, L u + 2\,u h H + u\,\langle y,\mathbf{n}\rangle - h^2 H^2 - \frac{\langle y,\mathbf{n}\rangle^2}{2} \Big]_{B_R} < 0\,. \tag{66}
\]
Here, in (66), we used [·]_{B_R} to denote the Gaussian weighted integral over the ball B_R.
To illustrate how the almost orthogonality comes in, we will explain a simple case of Lemma 69 when μ_1 < −3/2 (this still leaves the possibility that μ_1 is between −3/2 and −1).
Sketch of (66) when μ_1 < −3/2: Using the Cauchy-Schwarz inequality ab ≤ (1/2)(a^2 + b^2) on the cross-term u⟨y, n⟩, the left hand side of (66) is bounded from above by
\[
\Big[ \Big(\frac{1}{2} + \mu_1(B_R)\Big)\, u^2 + 2\,u h H - h^2 H^2 \Big]_{B_R}\,. \tag{67}
\]
We will show that this is negative when R is large. Since μ_1 < −3/2, we can choose R̄ so that μ_1(B_{R̄}) < −3/2. Given any R ≥ R̄, then (67) is strictly less than
\[
\big[ -u^2 + 2\,u h H - h^2 H^2 \big]_{B_R} = -\big[ (u - h H)^2 \big]_{B_R}\,, \tag{68}
\]
which gives (66) in this case.
Classification of mean convex self-shrinkers
Throughout this subsection, Σ is a complete, non-compact self-shrinker with polynomial volume growth, ∂Σ = ∅ and H > 0. We will sketch the proof of the classification theorem, i.e., Theorem 65, which gives that Σ is a cylinder. The classification relies heavily upon the following "Simons' identity" for the second fundamental form A of a self-shrinker Σ:
\[
L\, A = A\,,
\]
where we have extended L to act on tensors in the natural way. Taking the trace of this recovers that L H = H since traces and covariant derivatives commute (the metric is parallel).
Roughly speaking, the identity L A = A says that the whole matrix A is a lowest eigenfunction for L. Moreover, the matrix strong maximum principle shows that the kernel of A consists of parallel vector fields that split off a factor of R^k. The remaining principal curvatures must then all be multiples of each other (by the uniqueness of the lowest eigenfunctions for L). We will show that the remaining non-zero principal curvatures are in fact the same, i.e., Σ is the product of an affine space and a totally umbilic submanifold.
The main steps in the proof of Theorem 65 are:
1. Using L A = A, it follows that
\[
L\,|A| = |A| + \frac{|\nabla A|^2 - \big|\nabla |A|\big|^2}{|A|} \;\ge\; |A|
\]
(a short derivation is sketched after this list).
2. Using L A = A and the "stability inequality" coming from H > 0, we show that
\[
\int_\Sigma \big( |A|^2 + |A|^4 + |\nabla |A||^2 + |\nabla A|^2 \big)\, e^{-\frac{|x|^2}{4}} < \infty\,.
\]
(This should remind you of the Schoen-Simon-Yau, [ScSiY], curvature estimates for stable minimal hypersurfaces.)
3. Using (2) to show that various integrals converge and to justify various integrations by parts, (1) and L H = H imply that H = C |A| for some C > 0.
4. The combination of |∇A|^2 = |∇|A||^2 and H = C |A|, together with some work, gives the classification. This last step is essentially the same argument as in [H3].
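Here is a sketch of the computation behind step 1, written with \mathcal{L} = Δ − (1/2)⟨x, ∇·⟩ denoting the drift Laplacian, so that L = \mathcal{L} + |A|^2 + 1/2, and working at points where |A| ≠ 0. From L A = A we get \mathcal{L} A = (1/2) A − |A|^2 A, and combining this with the standard identity
\[
|A|\,\mathcal{L}|A| = \langle A, \mathcal{L} A\rangle + |\nabla A|^2 - \big|\nabla |A|\big|^2
\]
gives
\[
L\,|A| = \mathcal{L}|A| + \Big(|A|^2 + \tfrac{1}{2}\Big)\,|A|
= |A| + \frac{|\nabla A|^2 - \big|\nabla |A|\big|^2}{|A|} \;\ge\; |A|\,,
\]
where the last inequality is Kato's inequality |∇A| ≥ |∇|A||.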
Figure 1: Freely homotopic curves.
Figure 2: A sweepout.
Figure 3: Birkhoff's curve shortening process.
Figure 4: Tightening the sweepout.
Figure 6: Four snapshots in time of concentric circles shrinking under the curve shortening flow.
Two scaled grim reapers.
Figure 7: Calabi's grim reaper moves by translations.
Figure 9: The minimizing curve γ in Hamilton's first isoperimetric quantity; the curve c; a circle enclosing the same area.
Figure 10: Defining the length L_0 in Hamilton's second isoperimetric quantity.
Figure 11: The minimal surface called the helicoid is a double-spiral staircase. This photo shows half of a helicoid as a soap film. Photograph courtesy of John Oprea.
Figure 12: The catenoid given by revolving x_1 = cosh x_3 around the x_3-axis. Credit: Matthias Weber, www.indiana.edu/~minimal.
Figure 14: Two of the Riemann examples; the second one is starting to degenerate to helicoids. Credit: Matthias Weber, www.indiana.edu/~minimal.
Figure 17: Multi-valued graphs. The helicoid is obtained by gluing together two ∞-valued graphs along a line. Credit: Matthias Weber, www.indiana.edu/~minimal.
Figure 18: The separation w for a multi-valued minimal graph.
Figure 19: Left: Drawing by Leonardo da Vinci of a double spiral staircase from around 1490. Right: Model of da Vinci's double spiral staircase in Château de Chambord. Both figures reprinted from [CM20].
Figure 20: Three main steps: A. Finding a small N-valued graph in Σ. B. Extending it in Σ to a large N-valued graph. C. Extending the number of sheets.
Figure 21: A schematic picture of the examples in Theorem 24. Removing the x_3-axis disconnects the surface into two multi-valued graphs; one of these is in bold.
Figure 22: The one-sided curvature estimate.
Figure 23: Rescaling the catenoid shows that the property of being simply connected (and embedded) is needed in the one-sided curvature estimate.
Figure 24: The pair of pants decomposition. Credit: Matthias Weber, www.indiana.edu/~minimal.
Corollary 27. ([CM10]) A sequence of embedded minimal planar domains that are not ULSC, but with curvatures blowing up, has a subsequence that converges to a collection of flat parallel planes.
Using [CM6]-[CM10] and Colding-De Lellis-Minicozzi, [CDM], Meeks-Perez-Ros recently showed that the Riemann examples are the unique Σ's with genus zero and infinitely many ends.
Theorem 40. (Meeks-Perez-Ros, [MePRs4]) The Riemann examples are the unique complete properly embedded minimal planar domains with infinitely many ends.
This theorem completes the classification of the genus zero properly embedded minimal surfaces. Remarkably, it turned out that the classical examples discovered in the 1700's and 1800's were the only ones. A number of central questions remain, including the structure of the moduli space of finite genus properly embedded minimal surfaces and the systematic construction of examples.
Inspired by Nadirashvili's examples, F. Martin and S. Morales constructed in [MaMo2] a complete bounded minimal immersion which is proper in the (open) unit ball. That is, the preimages of compact subsets of the (open) unit ball are compact in the surface and the image of the surface accumulates on the boundary of the unit ball.
Figures 27-30 show eight snapshots in time of the evolution of a dumbbell; the figures were created by computer simulation.
Figure 26: Angenent's shrinking donut, from numerical simulations of D. Chopp, [Ch]. The vertical z-axis is the axis of rotation and the horizontal r-axis is a line of reflection symmetry.
Figure 27: Grayson's dumbbell; initial surface and step 1.
Figure 28: The dumbbell; steps 2 and 3.
Figure 29: The dumbbell; steps 4 and 5.
Figure 30: The dumbbell; steps 6 and 7.
Figure 31: A numerical example of a closed shrinker from Chopp, [Ch].
Figure 32: A non-compact numerical example from Chopp, [Ch]: a self-shrinker asymptotic to a cone (Chopp, 1994).
Figure 33: A sweepout.
Proposition 53. (Colding-Minicozzi, [CM1]) Σ is a critical point for F_{x_0,t_0} if and only if it is the time −t_0 slice of a self-shrinking solution of the mean curvature flow that becomes extinct at the point x_0 and time 0.
"Self-contained" means avoiding the use of a blow up analysis.
A multi-valued graph is a surface that is locally a graph over a subset of the plane, but the projection down to the plane is not one to one.
A minimal surface Σ has finite total curvature if Σ |A| 2 < ∞.
This equation differs by a factor of two from Huisken's definition of a self-shrinker; this is because Huisken works with the time −1/2 slice.
This works for convex hypersurfaces in R n+1 too with the obvious modifications.
The normalized curve shortening flow and homothetic solutions. U Abresch, J Langer, J. Differential Geom. 232U. Abresch and J. Langer, The normalized curve shortening flow and ho- mothetic solutions. J. Differential Geom. 23 (1986), no. 2, 175-196.
Density theorems for complete minimal surfaces in R 3. A Alarcon, L Ferrer, F Martin, Geom. Funct. Anal. 181A. Alarcon, L. Ferrer, and F. Martin, Density theorems for complete min- imal surfaces in R 3 , Geom. Funct. Anal. 18 (2008), no. 1, 1-49.
Limit sets for complete minimal immersions. A Alarcon, N Nadirashvili, Math. Z. 2581A. Alarcon and N. Nadirashvili, Limit sets for complete minimal immer- sions, Math. Z. 258 (2008), no. 1, 107-113.
On the first variation of a varifold. W Allard, Ann. of Math. 2W.K Allard, On the first variation of a varifold. Ann. of Math. (2) 95 (1972), 417-491.
Mean curvature flow through singularities for surfaces of rotation. S Altschuler, S Angenent, Y Giga, J. Geom. Anal. 53S. Altschuler, S. Angenent, and Y. Giga, Mean curvature flow through singularities for surfaces of rotation. J. Geom. Anal. 5 (1995), no. 3, 293-358.
Translating surfaces of the non-parametric mean curvature flow with prescribed contact angle. S Altschuler, L Wu, Calc. Var. Partial Differential Equations. 21S. Altschuler and L. Wu, Translating surfaces of the non-parametric mean curvature flow with prescribed contact angle, Calc. Var. Partial Differential Equations 2(1), 101-111 (1994)
Classification of limiting shapes for isotropic curve flows. B Andrews, J. Amer. Math. Soc. 162B. Andrews, Classification of limiting shapes for isotropic curve flows. J. Amer. Math. Soc. 16 (2003), no. 2, 443-459.
Curvature bound for curve shortening flow via distance comparison and a direct proof of Grayson's theorem, Crelle. B Andrews, P Bryan, to appearB. Andrews and P. Bryan Curvature bound for curve shortening flow via distance comparison and a direct proof of Grayson's theorem, Crelle, to ap- pear.
Shrinking doughnuts. S Angenent, Nonlinear diffusion equations and their equilibrium states, Birkhaüser. Boston-Basel-Berlin3S. Angenent, Shrinking doughnuts, In: Nonlinear diffusion equations and their equilibrium states, Birkhaüser, Boston-Basel-Berlin, 3, 21-38, 1992.
A computed example of nonuniqueness of mean curvature flow in R 3. S B Angenent, D L Chopp, T Ilmanen, Comm. Partial Differential Equations. 2011S. B. Angenent, D. L. Chopp, and T. Ilmanen. A computed example of nonuniqueness of mean curvature flow in R 3 . Comm. Partial Differential Equations, 20 (1995), no. 11-12, 1937-1958.
Distortions of the helicoid. J Bernstein, C Breiner, Geom. Dedicata. 137J. Bernstein and C. Breiner, Distortions of the helicoid, Geom. Dedicata 137 (2008), 143-147.
J Bernstein, C Breiner, Conformal Structure of Minimal Surfaces with Finite Topology, Crelle. to appearJ. Bernstein and C. Breiner, Conformal Structure of Minimal Surfaces with Finite Topology, Crelle, to appear.
Symmetry of Embedded Genus-One Helicoids, To appear. J Bernstein, C Breiner, Duke Math. J. J. Bernstein and C. Breiner, Symmetry of Embedded Genus-One Helicoids, To appear, Duke Math. J.
Über ein geometrisches Theorem und seine Anwendung auf die partiellen Differentialgleichungen vom ellipschen Typos. S Bernstein, 2- eme sér. 15 (1915-17) 38-45Comm. Soc. Math. Kharkov. 26Math. Zeit.S. Bernstein,Über ein geometrisches Theorem und seine Anwendung auf die partiellen Differentialgleichungen vom ellipschen Typos. Math. Zeit. 26 (1927) 551-558 (translation of the original version in Comm. Soc. Math. Kharkov 2- eme sér. 15 (1915-17) 38-45).
Half-space theorems for minimal surfaces with bounded curvature. G P Bessa, L Jorge, G Oliveira-Filho, J. Diff. Geom. 57G.P. Bessa, L. Jorge and G. Oliveira-Filho, Half-space theorems for minimal surfaces with bounded curvature, J. Diff. Geom. 57 (2001) 493-508.
Dynamical systems with two degrees of freedom. G D Birkhoff, TAMS. 182G.D. Birkhoff, Dynamical systems with two degrees of freedom. TAMS 18 (1917), no. 2, 199-300.
Dynamical systems. G D Birkhoff, AMS Colloq. Publ. 9G.D. Birkhoff, Dynamical systems, AMS Colloq. Publ. vol 9, Providence, RI, 1927.
The motion of a surface by its mean curvature. K Brakke, Mathematical Notes. 20Princeton University PressK. Brakke, The motion of a surface by its mean curvature. Mathematical Notes, 20. Princeton University Press, Princeton, N.J., 1978.
Helicoids with handles and Baker-Akhiezer spinors. A Bobenko, Math. Z. 2291A. Bobenko, Helicoids with handles and Baker-Akhiezer spinors. Math. Z. 229 (1998), no. 1, 9-29.
Problems in differential geometry. E Calabi, ; S Kobayashi, J EellsJr, Proceedings of the United States-Japan Seminar in Differential Geometry. the United States-Japan Seminar in Differential GeometryKyoto, Japan; TokyoNippon Hyoronsha Co., Ltd170E. Calabi, Problems in differential geometry, Ed. S. Kobayashi and J. Eells, Jr., Proceedings of the United States-Japan Seminar in Differential Geometry, Kyoto, Japan, 1965. Nippon Hyoronsha Co., Ltd., Tokyo (1966) 170.
Width and flow of hypersurfaces by curvature functions. M Calle, S Kleene, J Kramer, Trans. Amer. Math. Soc. 363M. Calle, S. Kleene and J. Kramer, Width and flow of hypersurfaces by curvature functions, Trans. Amer. Math. Soc. 363 (2011), 1125-1135.
Non-proper helicoid-like limits of closed minimal surfaces in 3-manifolds. M Calle, D Lee, Math. Z. 2614M. Calle and D. Lee, Non-proper helicoid-like limits of closed minimal sur- faces in 3-manifolds, Math. Z. 261 (2009), no. 4, 725-736.
Gaussian densities and stability for some Ricci solitons. H-D Cao, R S Hamilton, T Ilmanen, preprintH-D. Cao, R.S. Hamilton, and T. Ilmanen, Gaussian densities and stability for some Ricci solitons, preprint 2004.
On second variation of Perelman's Ricci shrinker entropy. H-D Cao, M Zhu, preprint 2010H-D. Cao and M. Zhu, On second variation of Perelman's Ricci shrinker entropy, preprint 2010.
Eigenvalues in Riemannian Geometry. Pure and Applied Mathematics. I Chavel, Academic Press, Inc115Orlando, FLI. Chavel, Eigenvalues in Riemannian Geometry. Pure and Applied Mathe- matics, 115. Academic Press, Inc., Orlando, FL, 1984.
Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations. Y Chen, Y Giga, S Goto, J. Differential Geom. 333Y. Chen, Y. Giga, and S. Goto, Uniqueness and existence of viscosity solutions of generalized mean curvature flow equations. J. Differential Geom. 33 (1991), no. 3, 749-786.
The geometry of G-structures. S S Chern, Bull. Amer. Math. Soc. 72S.S. Chern, The geometry of G-structures, Bull. Amer. Math. Soc. 72 (1966) 167-219.
The space of minimal embeddings of a surface into a three-dimensional manifold of positive Ricci curvature. H I Choi, R Schoen, Invent. Math. 81H.I. Choi and R. Schoen, The space of minimal embeddings of a surface into a three-dimensional manifold of positive Ricci curvature, Invent. Math. 81 (1985) 387-394.
Computation of self-similar solutions for mean curvature flow. D Chopp, Experiment. Math. 31D. Chopp, Computation of self-similar solutions for mean curvature flow. Experiment. Math. 3 (1994), no. 1, 1-15.
Stability of translating solutions to mean curvature flow. J Clutterbuck, O Schnürer, F Schulze, Calc. Var. Partial Differential Equations. 293J. Clutterbuck, O. Schnürer, and F. Schulze, Stability of translating so- lutions to mean curvature flow. Calc. Var. Partial Differential Equations 29 (2007), no. 3, 281-293.
Singular limit laminations, Morse index, and positive scalar curvature. T H Colding, C De Lellis, Topology. 4412545T.H. Colding and C. De Lellis, Singular limit laminations, Morse index, and positive scalar curvature. Topology 44 (2005), no. 1, 2545.
Three circles theorems for Schrödinger operators on cylindrical ends and geometric applications. T H Colding, C De Lellis, W P Minicozzi, I I , Comm. Pure Appl. Math. 6111T.H. Colding, C. De Lellis, and W.P. Minicozzi II, Three circles theo- rems for Schrödinger operators on cylindrical ends and geometric applications, Comm. Pure Appl. Math. 61 (2008), no. 11, 1540-1602.
Generic mean curvature flow I; generic singularities. T H Colding, W P Minicozzi, I I , T.H. Colding and W.P. Minicozzi II, Generic mean curvature flow I; generic singularities, preprint, http://lanl.arxiv.org/abs/0908.3788.
Width and mean curvature flow. T H Colding, W P Minicozzi, I I , Geom. Topol. 125T.H. Colding and W.P. Minicozzi II, Width and mean curvature flow, Geom. Topol. 12 (2008), no. 5, 2517-2535.
Smooth compactness of self-shrinkers. T H Colding, W P Minicozzi, I I , Comm. Math. Helv. to appearT.H. Colding and W.P. Minicozzi II, Smooth compactness of self-shrinkers, Comm. Math. Helv., to appear, http://arxiv.org/pdf/0907.2594.
T H Colding, W P Minicozzi, I I , Minimal surfaces. Courant Lecture Notes in Mathematics, 4. NYU. Courant Institute of Math. Sciences, NYT.H. Colding and W.P. Minicozzi II, Minimal surfaces. Courant Lecture Notes in Mathematics, 4. NYU, Courant Institute of Math. Sciences, NY, 1999.
Shapes of embedded minimal surfaces. T H Colding, W P Minicozzi, I I , Proc. Natl. Acad. Sci. USA. 10330T. H. Colding and W.P. Minicozzi II, Shapes of embedded minimal surfaces. Proc. Natl. Acad. Sci. USA 103 (2006), no. 30, 11106-11111
The space of embedded minimal surfaces of fixed genus in a 3-manifold. I. Estimates off the axis for disks. T H Colding, W P Minicozzi, I I , Ann. of Math. 2T. H. Colding and W.P. Minicozzi II, The space of embedded minimal surfaces of fixed genus in a 3-manifold. I. Estimates off the axis for disks. Ann. of Math. (2) 160 (2004), no. 1, 27-68.
The space of embedded minimal surfaces of fixed genus in a 3-manifold. II. Multi-valued graphs in disks. T H Colding, W P Minicozzi, I I , Ann. of Math. 2T. H. Colding and W.P. Minicozzi II, The space of embedded minimal surfaces of fixed genus in a 3-manifold. II. Multi-valued graphs in disks. Ann. of Math. (2) 160 (2004), no. 1, 69-92.
The space of embedded minimal surfaces of fixed genus in a 3-manifold. III. Planar domains. T H Colding, W P Minicozzi, I I , Ann. of Math. 2T. H. Colding and W.P. Minicozzi II, The space of embedded minimal surfaces of fixed genus in a 3-manifold. III. Planar domains. Ann. of Math. (2) 160 (2004), no. 2, 523-572.
The space of embedded minimal surfaces of fixed genus in a 3-manifold. IV. Locally simply connected. T H Colding, W P Minicozzi, I I , Ann. of Math. 2T. H. Colding and W.P. Minicozzi II, The space of embedded minimal surfaces of fixed genus in a 3-manifold. IV. Locally simply connected. Ann. of Math. (2) 160 (2004), no. 2, 573-615.
The space of embedded minimal surfaces of fixed genus in a 3-manifold. V. Fixed genus. T H Colding, W P Minicozzi, I I , preprintT. H. Colding and W.P. Minicozzi II, The space of embedded minimal surfaces of fixed genus in a 3-manifold. V. Fixed genus, preprint.
Complete properly embedded minimal surfaces in R 3. T H Colding, W P Minicozzi, I I , Duke Math. J. 107T.H. Colding and W.P. Minicozzi II, Complete properly embedded mini- mal surfaces in R 3 , Duke Math. J. 107 (2001) 421-426.
The Calabi-Yau conjectures for embedded surfaces. T H Colding, W P Minicozzi, I I , Ann. of Math. 2T.H. Colding and W.P. Minicozzi II, The Calabi-Yau conjectures for em- bedded surfaces, Ann. of Math. (2) 167 (2008), no. 1, 211-243.
Embedded minimal surfaces without area bounds in 3-manifolds. T H Colding, W P Minicozzi, I I , Geometry and topology: Aarhus. Providence, RIAmer. Math. Soc258T.H. Colding and W.P. Minicozzi II, Embedded minimal surfaces without area bounds in 3-manifolds. Geometry and topology: Aarhus (1998), 107-120, Contemp. Math., 258, Amer. Math. Soc., Providence, RI, 2000.
A course in minimal surfaces, Graduate Studies in Math. T H Colding, W P Minicozzi, I I , Amer. Math. SocProvidence, RIT.H. Colding and W.P. Minicozzi II, A course in minimal surfaces, Grad- uate Studies in Math., Amer. Math. Soc., Providence, RI, 2011.
Estimates for parametric elliptic integrands. T H Colding, W P Minicozzi, I I , International Mathematics Research Notices. 6T.H. Colding and W.P. Minicozzi II, Estimates for parametric elliptic integrands, International Mathematics Research Notices, no. 6 (2002) 291- 297.
T.H. Colding and W.P. Minicozzi II, On the structure of embedded minimal annuli, International Mathematics Research Notices, no. 29 (2002) 1539-1552.
Embedded minimal disks: Proper versus nonproper -global versus local. T H Colding, W P Minicozzi, I I , Transactions of the AMS. 356T.H. Colding and W.P. Minicozzi II, Embedded minimal disks: Proper versus nonproper -global versus local, Transactions of the AMS, 356 (2004) 283-289.
Multi-valued minimal graphs and properness of disks. T H Colding, W P Minicozzi, International Mathematics Research Notices. II21T.H. Colding and W.P. Minicozzi II, Multi-valued minimal graphs and properness of disks, International Mathematics Research Notices, no. 21 (2002) 1111-1127.
Width and finite extinction time of Ricci flow. T H Colding, W P Minicozzi, I I , Geom. Topol. 125T.H. Colding and W.P. Minicozzi II, Width and finite extinction time of Ricci flow, Geom. Topol. 12 (2008), no. 5, 2537-2586.
Disks that are double spiral staircases. T H Colding, W P Minicozzi, I I , Notices Amer. Math. Soc. 503T.H. Colding and W.P. Minicozzi II, Disks that are double spiral stair- cases, Notices Amer. Math. Soc. 50 (2003), no. 3, 327-339.
Topologie et courbure des surfaces minimales proprement plonges de R 3. P Collin, Ann. of Math. 2P. Collin, Topologie et courbure des surfaces minimales proprement plonges de R 3 , Ann. of Math. (2) 145 (1997) 1-31.
Notes sur la démonstration de N. Nadirashvili des conjectures de Hadamard et Calabi-Yau. P Collin, H Rosenberg, Bull. Sci. Math. 1237P. Collin and H. Rosenberg, Notes sur la démonstration de N. Nadirashvili des conjectures de Hadamard et Calabi-Yau. Bull. Sci. Math. 123 (1999), no. 7, 563-575.
Least area planes in hyperbolic 3-space are properly embedded. B Coskunuzer, Indiana Univ. Math. J. 581B. Coskunuzer, Least area planes in hyperbolic 3-space are properly embedded, Indiana Univ. Math. J. 58 (2009), no. 1, 381-392.
Example of a complete minimal immersion in R 3 of genus one and three embedded ends. C Costa, Bol. Soc. Brasil. Mat. 151-2C. Costa, Example of a complete minimal immersion in R 3 of genus one and three embedded ends, Bol. Soc. Brasil. Mat. 15 (1984), no. 1-2, 47-54.
Area and the length of the shortest closed geodesic. C B Croke, J. Diff. Geom. 271C.B. Croke, Area and the length of the shortest closed geodesic, J. Diff. Geom., 27 (1988), no. 1, 1-21.
Half-space theorems and the embedded Calabi-Yau problem in Lie groups. B Daniel, W Meeks, H Rosenberg, preprintB. Daniel, W. Meeks and H. Rosenberg, Half-space theorems and the embedded Calabi-Yau problem in Lie groups, preprint, 2010.
Embedded minimal disks with prescribed curvature blowup. B Dean, Proc. Amer. Math. Soc. 1344B. Dean, Embedded minimal disks with prescribed curvature blowup, Proc. Amer. Math. Soc. 134 (2006), no. 4, 1197-1204.
Stable complete minimal surfaces in R 3 are planes. M Carmo, C K Peng, Bull. Amer. Math. Soc. (N.S.). 16M. do Carmo and C. K. Peng, Stable complete minimal surfaces in R 3 are planes, Bull. Amer. Math. Soc. (N.S.) 1 (1979), no. 6, 903-906.
Local monotonicity formulas for some nonlinear diffusion equations. K Ecker, Calc. Var. Partial Differential Equations. 231K. Ecker, Local monotonicity formulas for some nonlinear diffusion equations. Calc. Var. Partial Differential Equations 23 (2005), no. 1, 67-81.
Regularity theory for mean curvature flow. K Ecker, Progress in Nonlinear Differential Equations and their Applications. Boston, MABirkhuser Boston, Inc57K. Ecker, Regularity theory for mean curvature flow. Progress in Nonlinear Differential Equations and their Applications, 57. Birkhuser Boston, Inc., Boston, MA, 2004.
A Formula Relating Entropy Monotonicity to Harnack Inequalities. K Ecker, Communications in Analysis and Geometry. 155K. Ecker, A Formula Relating Entropy Monotonicity to Harnack Inequalities. Communications in Analysis and Geometry, Volume 15, Number 5, 1025 - 1061, 2008.
K Ecker, Heat equations in geometry and topology. 110K. Ecker, Heat equations in geometry and topology. Jahresber. Deutsch. Math.-Verein. 110 (2008), no. 3, 117-141.
Mean curvature evolution of entire graphs. K Ecker, G Huisken, Ann. of Math. 2K. Ecker and G. Huisken, Mean curvature evolution of entire graphs, Ann. of Math. (2) 130 (1989), no. 3, 453-471.
Interior estimates for hypersurfaces moving by mean curvature. K Ecker, G Huisken, Invent. Math. 1053K. Ecker and G. Huisken, Interior estimates for hypersurfaces moving by mean curvature. Invent. Math. 105 (1991), no. 3, 547-569.
Wave motion: theory, modelling, and computation. C Epstein, M Gage, Math. Sci. Res. Inst. Publ. 7SpringerThe curve shortening flowC. Epstein and M. Gage, The curve shortening flow. Wave motion: theory, modelling, and computation (Berkeley, Calif., 1986), 15-59, Math. Sci. Res. Inst. Publ., 7, Springer, New York, 1987.
A stable manifold theorem for the curve shortening equation. C Epstein, M Weinstein, Comm. Pure Appl. Math. 401C. Epstein and M. Weinstein, A stable manifold theorem for the curve shortening equation, Comm. Pure Appl. Math. 40 (1987), no. 1, 119-139.
L C Evans, Partial differential equations. Providence, RIAMS19L. C. Evans, Partial differential equations. Graduate Studies in Mathematics, 19. AMS, Providence, RI, 1998.
Motion of level sets by mean curvature. L C Evans, J Spruck, I. J. Differential Geom. 333L. C. Evans and J. Spruck, Motion of level sets by mean curvature. I. J. Differential Geom. 33 (1991), no. 3, 635-681.
The structure of complete stable minimal surfaces in 3-manifolds of nonnegative scalar curvature. D Fischer-Colbrie, R Schoen, Comm. Pure Appl. Math. 33D. Fischer-Colbrie and R. Schoen, The structure of complete stable minimal surfaces in 3-manifolds of nonnegative scalar curvature, Comm. Pure Appl. Math. 33 (1980) 199-211.
Curve shortening makes convex curves circular. M Gage, Invent. Math. 762M. Gage, Curve shortening makes convex curves circular, Invent. Math. 76 (1984), no. 2, 357-364.
An isoperimetric inequality with applications to curve shortening. M Gage, Duke Math. J. 504M. Gage, An isoperimetric inequality with applications to curve shortening, Duke Math. J. 50 (1983), no. 4, 1225-1229.
The heat equation shrinking convex plane curves. M Gage, R S Hamilton, J. Differential Geom. 23M. Gage and R. S. Hamilton, The heat equation shrinking convex plane curves. J. Differential Geom. 23 (1986) 69-96.
The heat equation shrinks embedded plane curves to round points. M Grayson, J. Differential Geom. 262M. Grayson, The heat equation shrinks embedded plane curves to round points. J. Differential Geom. 26 (1987), no. 2, 285-314.
A short note on the evolution of a surface by its mean curvature. M Grayson, Duke Math. J. 583M. Grayson, A short note on the evolution of a surface by its mean curvature. Duke Math. J. 58 (1989), no. 3, 555-558.
Shortening embedded curves. M Grayson, Ann. of Math. 2M. Grayson, Shortening embedded curves. Ann. of Math. (2) 129 (1989), no. 1, 71-111.
Self-similar solutions to the curve shortening flow. H Halldorsson, H. Halldorsson, Self-similar solutions to the curve shortening flow, preprint 2010.
Isoperimetric estimates for the curve shrinking flow in the plane. R S Hamilton, Modern methods in complex analysis. Princeton, NJ; Princeton, NJPrinceton Univ. Press137R. S. Hamilton, Isoperimetric estimates for the curve shrinking flow in the plane. Modern methods in complex analysis (Princeton, NJ, 1992), 201-222, Ann. of Math. Stud., 137, Princeton Univ. Press, Princeton, NJ, 1995.
Harnack estimate for the mean curvature flow. R S Hamilton, J. Differential Geom. 411R. S. Hamilton, Harnack estimate for the mean curvature flow. J. Differential Geom. 41 (1995), no. 1, 215-226.
Higher genus Riemann minimal surfaces. L Hauswirth, F Pacard, Invent. Math. 1693L. Hauswirth and F. Pacard, Higher genus Riemann minimal surfaces. In- vent. Math. 169 (2007), no. 3, 569-620.
An end-to-end construction for singly periodic minimal surfaces. L Hauswirth, F Morabito, M Rodrguez, Pacific J. Math. 2411L. Hauswirth, F. Morabito, and M. Rodrguez, An end-to-end construction for singly periodic minimal surfaces. Pacific J. Math. 241 (2009), no. 1, 1-61.
Complete embedded minimal surfaces with finite total curvature, Geometry V. D Hoffman, H Karcher, Encyclopaedia Math. Sci. R. Osserman90Springer-VerlagD. Hoffman and H. Karcher, Complete embedded minimal surfaces with finite total curvature, Geometry V (R. Osserman, ed.) Encyclopaedia Math. Sci. 90, Springer-Verlag, New York (1997) 5-93.
The strong halfspace theorem for minimal surfaces. D Hoffman, W Meeks, Invent. Math. 101D. Hoffman and W. Meeks III, The strong halfspace theorem for minimal surfaces, Invent. Math. 101 (1990) 373-377.
An embedded genus-one helicoid. D Hoffman, M Weber, M Wolf, Ann. of Math. 2D. Hoffman, M. Weber, and M. Wolf, An embedded genus-one helicoid, Ann. of Math. (2) 169 (2009), no. 2, 347-448.
Genus-one helicoids from a variational point of view. D Hoffman, B White, Comment. Math. Helv. 834D. Hoffman and B. White, Genus-one helicoids from a variational point of view, Comment. Math. Helv. 83 (2008), no. 4, 767-813.
Sequences of embedded minimal disks whose curvatures blow up on a prescribed subset of a line. D Hoffman, B White, arXiv:0905.0851v2D. Hoffman and B. White, Sequences of embedded minimal disks whose curvatures blow up on a prescribed subset of a line, arXiv:0905.0851v2.
The geometry of genus-one helicoids. D Hoffman, B White, Comment. Math. Helv. 843547569D. Hoffman and B. White, The geometry of genus-one helicoids. Com- ment. Math. Helv. 84 (2009), no. 3, 547569.
Flow by the mean curvature of convex surfaces into spheres. G Huisken, JDG. 201G. Huisken, Flow by the mean curvature of convex surfaces into spheres. JDG 20 (1984) no. 1, 237-266.
Local and global behaviour of hypersurfaces moving by mean curvature. G Huisken, Proc. CMA, ANU. CMA, ANU26G. Huisken, Local and global behaviour of hypersurfaces moving by mean curvature, Proc. CMA, ANU, vol. 26, 1991.
Asymptotic behavior for singularities of the mean curvature flow. G Huisken, J. Differential Geom. 311G. Huisken, Asymptotic behavior for singularities of the mean curvature flow. J. Differential Geom. 31 (1990), no. 1, 285-299.
Local and global behaviour of hypersurfaces moving by mean curvature. G Huisken, Differential geometry: partial differential equations on manifolds. Los Angeles, CA; Providence, RIAmer. Math. Soc54G. Huisken, Local and global behaviour of hypersurfaces moving by mean curvature. Differential geometry: partial differential equations on manifolds (Los Angeles, CA, 1990), 175-191, Proc. Sympos. Pure Math., 54, Part 1, Amer. Math. Soc., Providence, RI, 1993.
A distance comparison principle for evolving curves. G Huisken, Asian J. Math. 21G. Huisken, A distance comparison principle for evolving curves. Asian J. Math. 2 (1998), no. 1, 127-133.
Convexity estimates for mean curvature flow and singularities of mean convex surfaces. G Huisken, C Sinestrari, Acta Math. 1831G. Huisken and C. Sinestrari, Convexity estimates for mean curvature flow and singularities of mean convex surfaces, Acta Math. 183 (1999) no. 1, 45-70.
Mean curvature flow singularities for mean convex surfaces. G Huisken, C Sinestrari, Calc. Var. Partial Differential Equations. 8G. Huisken and C. Sinestrari, Mean curvature flow singularities for mean convex surfaces. Calc. Var. Partial Differential Equations, 8 (1999), 1-14.
Mean curvature flow with surgeries of twoconvex hypersurfaces. G Huisken, C Sinestrari, Invent. Math. 1751G. Huisken and C. Sinestrari, Mean curvature flow with surgeries of two- convex hypersurfaces, Invent. Math. 175 (2009), no. 1, 137-221.
T Ilmanen, Singularities of Mean Curvature Flow of Surfaces. preprintT. Ilmanen, Singularities of Mean Curvature Flow of Surfaces, preprint, 1995, http://www.math.ethz.ch/˜/papers/pub.html.
Elliptic regularization and partial regularity for motion by mean curvature. T Ilmanen, Mem. Amer. Math. Soc. 108520T. Ilmanen, Elliptic regularization and partial regularity for motion by mean curvature. Mem. Amer. Math. Soc. 108 (1994), no. 520.
T Ilmanen, Lectures on Mean Curvature Flow and Related Equations (Trieste Notes). T. Ilmanen, Lectures on Mean Curvature Flow and Related Equations (Trieste Notes), 1995.
On the existence of complete bounded minimal surfaces in R n. L Jorge, F Xavier, Bol. Soc. Brasil. Mat. 102L. Jorge and F. Xavier, On the existence of complete bounded minimal surfaces in R n , Bol. Soc. Brasil. Mat. 10 (1979), no. 2, 171-173.
A complete minimal surface in R 3 between two parallel planes. L Jorge, F Xavier, Annals of Math. 2L. Jorge and F. Xavier, A complete minimal surface in R 3 between two parallel planes, Annals of Math. (2) 112 (1980) 203-206.
Two-dimensional geometric variational problems. J Jost, Wiley and SonsChichester, N.Y.J. Jost, Two-dimensional geometric variational problems, J. Wiley and Sons, Chichester, N.Y. (1991).
Complete embedded minimal surfaces of finite total curvature. N Kapouleas, J. Diff. Geom. 47N. Kapouleas, Complete embedded minimal surfaces of finite total curvature, J. Diff. Geom., 47 (1997) 95-169.
The genus one helicoid and the minimal surfaces that led to its discovery. H Karcher, F Wei, D Hoffman, Global analysis in modern mathematics. Orono, ME; Waltham, MA; Houston, TX119170Publish or PerishH. Karcher, F. Wei, and D. Hoffman, The genus one helicoid and the min- imal surfaces that led to its discovery. Global analysis in modern mathematics (Orono, ME, 1991; Waltham, MA, 1992), 119170, Publish or Perish, Houston, TX, 1993.
A Minimal Lamination of the Unit Ball with Singularities along a Line Segment. S Khan, Illinois J. Math. 533S. Khan, A Minimal Lamination of the Unit Ball with Singularities along a Line Segment, Illinois J. Math., Volume 53, Number 3 (2009), 833-855.
S Kleene, arXiv:0910.0199A Minimal Lamination with Cantor Set-Like Singularities. S. Kleene, A Minimal Lamination with Cantor Set-Like Singularities, arXiv:0910.0199.
Self-shrinkers with a rotational symmetry. S Kleene, N M Möller, preprint 2010S. Kleene and N. M. Möller, Self-shrinkers with a rotational symmetry, preprint 2010.
Local rigidity theorems for minimal hypersurfaces. H B Lawson, Jr , Ann. of Math. 892H. B. Lawson, Jr., Local rigidity theorems for minimal hypersurfaces. Ann. of Math. (2) 89 1969 187-197.
Existence of good sweepouts on closed manifolds. L Lin, L Wang, Proc. Amer. Math. Soc. 138L. Lin and L. Wang, Existence of good sweepouts on closed manifolds, Proc. Amer. Math. Soc. 138 (2010), 4081-4088.
Adding handles to Nadirashvili's surfaces. F Lopez, F Martin, S Morales, J. Diff. Geom. 601F. Lopez, F. Martin, and S. Morales, Adding handles to Nadirashvili's surfaces, J. Diff. Geom. 60 (2002), no. 1, 155-175.
Complete nonorientable minimal surfaces in a ball of R 3. F Lopez, F Martin, S Morales, Trans. Amer. Math. Soc. 3589F. Lopez, F. Martin, and S. Morales, Complete nonorientable minimal surfaces in a ball of R 3 , Trans. Amer. Math. Soc., 358 (2006), no. 9, 3807- 3820.
F. Lopez and A. Ros, On embedded complete minimal surfaces of genus zero. J. Differential Geom. 33 (1991), no. 1, 293-300.
Bounded domains which are universal for minimal surfaces. F Martin, W Meeks, N Nadirashvili, Amer. J. Math. 1292F. Martin, W. Meeks, and N. Nadirashvili, Bounded domains which are universal for minimal surfaces, Amer. J. Math. 129 (2007), no. 2, 455-461.
A complete bounded minimal cylinder in R 3 , Michigan Math. F Martin, S Morales, J. 473F. Martin and S. Morales, A complete bounded minimal cylinder in R 3 , Michigan Math. J. 47 (2000), no. 3, 499-514.
On the asymptotic behavior of a complete bounded minimal surface in R 3. F Martin, S Morales, Trans. AMS. 35610F. Martin and S. Morales, On the asymptotic behavior of a complete bounded minimal surface in R 3 , Trans. AMS, 356 (2004), no. 10, 3985-3994.
Complete proper minimal surfaces in convex bodies. F Martin, S Morales, Duke Math. J. 1283F. Martin and S. Morales, Complete proper minimal surfaces in convex bodies, Duke Math. J. 128 (2005), no. 3, 559-593.
Adding one handle to half-plane layers. L Mazet, J. Differential Geom. 842389407L. Mazet, Adding one handle to half-plane layers. J. Differential Geom. 84 (2010), no. 2, 389407.
The geometry, topology, and existence of periodic minimal surfaces. W Meeks, Proc. Sympos. Pure Math. 541American Mathematical SocietyW. Meeks III, The geometry, topology, and existence of periodic minimal surfaces, Proc. Sympos. Pure Math., 54, Part 1, American Mathematical Society, Providence, 1993.
The regularity of the singular set in the Colding and Minicozzi lamination theorem. W Meeks, Duke Math. J. 1232W. Meeks III, The regularity of the singular set in the Colding and Minicozzi lamination theorem, Duke Math. J. 123 (2004), no. 2, 329-334.
The limit lamination metric for the Colding-Minicozzi minimal lamination. W Meeks, Illinois J. Math. 492W. Meeks III, The limit lamination metric for the Colding-Minicozzi mini- mal lamination, Illinois J. Math. 49 (2005), no. 2, 645-658.
The geometry of minimal surfaces of finite genus I. Curvature estimates and quasiperiodicity. W MeeksIII, J Perez, A Ros, J. Differential Geom. 661W. Meeks III, J. Perez, and A. Ros, The geometry of minimal surfaces of finite genus I. Curvature estimates and quasiperiodicity, J. Differential Geom. 66 (2004), no. 1, 1-45.
The geometry of minimal surfaces of finite genus II. Nonexistence of one limit end examples. W MeeksIII, J Perez, A Ros, Invent. Math. 1582W. Meeks III, J. Perez, and A. Ros, The geometry of minimal surfaces of finite genus II. Nonexistence of one limit end examples, Invent. Math. 158 (2004), no. 2, 323-341.
The geometry of minimal surfaces of finite genus III; bounds on the topology and index of classical minimal surfaces. W MeeksIII, J Perez, A Ros, preprintW. Meeks III, J. Perez, and A. Ros, The geometry of minimal surfaces of finite genus III; bounds on the topology and index of classical minimal surfaces, preprint.
Properly embedded minimal planar domains. W MeeksIII, J Perez, A Ros, preprintW. Meeks III, J. Perez, and A. Ros, Properly embedded minimal planar domains, preprint.
The geometry and conformal structure of properly embedded minimal surfaces of finite topology in R 3. W Meeks Iii, H Rosenberg, Invent. Math. 1143W. Meeks III and H. Rosenberg, The geometry and conformal structure of properly embedded minimal surfaces of finite topology in R 3 , Invent. Math. 114, no. 3 (1993) 625-639.
The uniqueness of the helicoid. W Meeks Iii, H Rosenberg, Ann. of Math. 2W. Meeks III and H. Rosenberg, The uniqueness of the helicoid, Ann. of Math. (2) 161 (2005), no. 2, 727-758.
The minimal lamination closure theorem. W Meeks Iii, H Rosenberg, Duke Math. J. 1333W. Meeks III and H. Rosenberg, The minimal lamination closure theorem, Duke Math. J. 133 (2006), no. 3, 467-497.
Bending the helicoid. W H Meeks, M Weber, Math. Ann. 3394W.H. Meeks and M. Weber, Bending the helicoid, Math. Ann. 339 (2007), no. 4, 783-798.
Sobolev and mean-value inequalities on generalized submanifolds of R n. J H Michael, L M Simon, Comm. Pure Appl. Math. 26J. H. Michael and L. M. Simon, Sobolev and mean-value inequalities on generalized submanifolds of R n . Comm. Pure Appl. Math. 26 (1973), 361- 379.
Hadamard's and Calabi-Yau's conjectures on negatively curved and minimal surfaces. N Nadirashvili, Invent. Math. 126N. Nadirashvili, Hadamard's and Calabi-Yau's conjectures on negatively curved and minimal surfaces, Invent. Math. 126 (1996) 457-465.
Construction of complete embedded self-similar surfaces under mean curvature flow. Part I. X H Nguyen, Trans. Amer. Math. Soc. 3614X. H. Nguyen, Construction of complete embedded self-similar surfaces under mean curvature flow. Part I, Trans. Amer. Math. Soc., 361 (2009), no. 4, 1683-1701.
Construction of complete embedded self-similar surfaces under mean curvature flow. Part II. X H Nguyen, Adv. Differential Equations. 155-6X. H. Nguyen, Construction of complete embedded self-similar surfaces under mean curvature flow. Part II, Adv. Differential Equations 15 (2010), no. 5-6, 503530.
Translating Tridents. X H Nguyen, Comm. in PDE. 343X. H. Nguyen, Translating Tridents, Comm. in PDE, Vol. 34 (2009), no. 3, 257 -280.
Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations. S Osher, J Sethian, J. Comput. Phys. 791S. Osher and J. Sethian, Fronts propagating with curvature-dependent speed: algorithms based on Hamilton-Jacobi formulations. J. Comput. Phys. 79 (1988), no. 1, 12-49.
A survey of minimal surfaces. R Osserman, Dover2nd. editionR. Osserman, A survey of minimal surfaces, Dover, 2nd. edition (1986).
Global properties of minimal surfaces in E 3 and E n. R Osserman, Ann. of Math. 802R. Osserman, Global properties of minimal surfaces in E 3 and E n , Ann. of Math. (2) 80 1964 340-364.
The convex hull property of immersed manifolds. R Osserman, J. Diff. Geom. 6R. Osserman, The convex hull property of immersed manifolds, J. Diff. Geom., 6 (1971/72) 267-270.
Finite extinction time for the solutions to the Ricci flow on certain three-manifolds. G Perelman, math.DG/0307245G. Perelman, Finite extinction time for the solutions to the Ricci flow on certain three-manifolds, math.DG/0307245.
On Plateau's problem. T Rado, Ann. of Math. 31T. Rado, On Plateau's problem, Ann. of Math. 31 (1930) 457-469.
On the problem of Plateau. T Rado, Ergebnisse der Mathematic und ihrer Grenzgebiete. 2Springer-VerlagT. Rado, On the problem of Plateau, Ergebnisse der Mathematic und ihrer Grenzgebiete, vol. 2. Springer-Verlag, Berlin (1953).
B Riemann, Über die Fläche vom kleinsten Inhalt bei gegebener Begrenzung. 13B. Riemann,Über die Fläche vom kleinsten Inhalt bei gegebener Begrenzung, Abh. Königl. d. Wiss. Göttingen, Mathem. Cl., 13, 3-52 (1867).
Some recent developments in the theory of properly embedded minimal surfaces in R 3 , Seminare Bourbaki. H Rosenberg, Asterisque No. 92H. Rosenberg, Some recent developments in the theory of properly embedded minimal surfaces in R 3 , Seminare Bourbaki 1991/92, Asterisque No. 206 (1992) 463-535.
A cylindrical type complete minimal surface in a slab of R 3. H Rosenberg, E Toubiana, Bull. Sci. Math. III. H. Rosenberg and E. Toubiana, A cylindrical type complete minimal sur- face in a slab of R 3 , Bull. Sci. Math. III (1987) 241-245.
The existence of minimal immersions of 2-spheres. J Sacks, K Uhlenbeck, Ann. of Math. 2J. Sacks and K. Uhlenbeck, The existence of minimal immersions of 2- spheres, Ann. of Math. (2) 113 (1981) no. 1, 1-24.
Estimates for stable minimal surfaces in three-dimensional manifolds. R Schoen, Seminar on Minimal Submanifolds. Princeton, N.J.Princeton University Press103R. Schoen, Estimates for stable minimal surfaces in three-dimensional man- ifolds, In Seminar on Minimal Submanifolds, Ann. of Math. Studies, vol. 103, 111-126, Princeton University Press, Princeton, N.J., 1983.
Uniqueness, symmetry, and embeddedness of minimal surfaces. R Schoen, J. Diff. Geom. 184R. Schoen, Uniqueness, symmetry, and embeddedness of minimal surfaces, J. Diff. Geom. 18 (1983), no. 4, 791-809.
Regularity of stable minimal hypersurfaces. R M Schoen, L M Simon, Comm. Pure Appl. Math. 346R.M. Schoen and L.M. Simon, Regularity of stable minimal hypersurfaces. Comm. Pure Appl. Math. 34 (1981), no. 6, 741-797.
Regularity of simply connected surfaces with quasi-conformal Gauss map. R Schoen, L Simon, Seminar on Minimal Submanifolds. Princeton, N.J.Princeton University Press103R. Schoen and L. Simon, Regularity of simply connected surfaces with quasi-conformal Gauss map, In Seminar on Minimal Submanifolds, Annals of Math. Studies, vol. 103, 127-145, Princeton University Press, Princeton, N.J., 1983.
Curvature estimates for minimal hypersurfaces. R M Schoen, L M Simon, S T Yau, Acta Math. 1343-4R.M. Schoen, L.M. Simon, and S.T. Yau, Curvature estimates for minimal hypersurfaces. Acta Math. 134 (1975), no. 3-4, 275-288.
On the proof of the positive mass conjecture in general relativity. R Schoen, S T Yau, Comm. Math. Phys. 651R. Schoen and S.T. Yau, On the proof of the positive mass conjecture in general relativity, Comm. Math. Phys. 65 (1979), no. 1, 45-76.
R Schoen, S T Yau, Lectures on harmonic maps. Int. PressR. Schoen and S.T. Yau, Lectures on harmonic maps, Int. Press (1997).
Evolution of convex hypersurfaces by powers of the mean curvature. F Schulze, Math. Z. 2514F. Schulze, Evolution of convex hypersurfaces by powers of the mean curva- ture. Math. Z. 251 (2005), no. 4, 721-733.
Convexity estimates for flows by powers of the mean curvature. F Schulze, Ann. Sc. Norm. Super. Pisa Cl. Sci. 5F. Schulze, Convexity estimates for flows by powers of the mean curvature. Ann. Sc. Norm. Super. Pisa Cl. Sci. (5) 5 (2006), no. 2, 261-277.
Rate of convergence of the mean curvature flow. N Sesum, Comm. Pure Appl. Math. 614N. Sesum, Rate of convergence of the mean curvature flow. Comm. Pure Appl. Math. 61 (2008), no. 4, 464-485.
L M Simon, Lectures on Geometric Measure Theory, Proceedings of the CMA. Canberra3L. M. Simon, Lectures on Geometric Measure Theory, Proceedings of the CMA, ANU No. 3, Canberra, 1983.
Minimal varieties in Riemannian manifolds. J Simons, Ann. of Math. 88J. Simons, Minimal varieties in Riemannian manifolds, Ann. of Math. 88 (1968) 62-105.
Starshaped hypersurfaces and the mean curvature flow. K Smoczyk, Manuscripta Math. 952K. Smoczyk, Starshaped hypersurfaces and the mean curvature flow, Manuscripta Math. 95 (1998), no. 2, 225-236.
Self-shrinkers of the mean curvature flow in arbitrary codimension. K Smoczyk, Int. Math. Res. Not. 48K. Smoczyk, Self-shrinkers of the mean curvature flow in arbitrary codi- mension. Int. Math. Res. Not. 2005, no. 48, 2983-3004.
A density function and the structure of singularities of the mean curvature flow. A Stone, Calc. Var. 2A. Stone, A density function and the structure of singularities of the mean curvature flow, Calc. Var. 2 (1994), 443-480.
Multi-valued graphs in embedded constant mean curvature disks. G Tinaglia, Trans. Amer. Math. Soc. 3591G. Tinaglia, Multi-valued graphs in embedded constant mean curvature disks, Trans. Amer. Math. Soc. 359 (2007), no. 1, 143-164.
Structure theorems for embedded disks with mean curvature bounded in Lp. G Tinaglia, Comm. Anal. Geom. 164819836G. Tinaglia, Structure theorems for embedded disks with mean curvature bounded in Lp. Comm. Anal. Geom. 16 (2008), no. 4, 819836.
Curvature blow-up in perturbations of minimal cones evolving by mean curvature flow. J J L Velázquez, Ann. Scuola Norm. Sup. Pisa Cl. Sci. 4J.J.L. Velázquez, Curvature blow-up in perturbations of minimal cones evolv- ing by mean curvature flow. Ann. Scuola Norm. Sup. Pisa Cl. Sci. (4) 21 (1994), no. 4, 595-628.
Construction de surfaces minimales en recollant des surfaces de Scherk. M Traizet, Ann. Inst. Fourier (Grenoble). 465M. Traizet, Construction de surfaces minimales en recollant des surfaces de Scherk, Ann. Inst. Fourier (Grenoble) 46 (1996), no. 5, 1385-1442.
Adding handles to Riemann's minimal surfaces. M Traizet, J. Inst. Math. Jussieu. 1M. Traizet, Adding handles to Riemann's minimal surfaces, J. Inst. Math. Jussieu 1 (2002) 145-174.
A Bernstein Type Theorem For Self-similar Shrinkers. L Wang, preprint. submittedL. Wang, A Bernstein Type Theorem For Self-similar Shrinkers, preprint December 2009, submitted.
Minimal surfaces of least total curvature and moduli spaces of plane polygonal arcs. M Weber, M Wolf, Geom. Funct. Anal. 86M. Weber and M. Wolf, Minimal surfaces of least total curvature and moduli spaces of plane polygonal arcs, Geom. Funct. Anal. 8 (1998), no. 6, 1129-1170.
Teichmüller theory and handle addition for minimal surfaces. M Weber, M Wolf, Ann. of Math. 1562M. Weber and M. Wolf, Teichmüller theory and handle addition for minimal surfaces, Ann. of Math. (2), 156 (2002) 713-795.
Evolution of curves and surfaces by mean curvature. B White, Proceedings of the International Congress of Mathematicians. the International Congress of MathematiciansBeijing; BeijingHigher Ed. PressIB. White, Evolution of curves and surfaces by mean curvature. Proceedings of the International Congress of Mathematicians, Vol. I (Beijing, 2002), 525- 538, Higher Ed. Press, Beijing, 2002.
The size of the singular set in mean curvature flow of mean-convex sets. B White, J. Amer. Math. Soc. 133B. White, The size of the singular set in mean curvature flow of mean-convex sets. J. Amer. Math. Soc. 13 (2000), no. 3, 665-695
The nature of singularities in mean curvature flow of mean-convex sets. B White, J. Amer. Math. Soc. 161B. White, The nature of singularities in mean curvature flow of mean-convex sets. J. Amer. Math. Soc. 16 (2003), no. 1, 123-138.
Stratification of minimal surfaces, mean curvature flows, and harmonic maps. B White, J. Reine Angew. Math. 488B. White, Stratification of minimal surfaces, mean curvature flows, and har- monic maps. J. Reine Angew. Math. 488 (1997), 1-35.
A local regularity theorem for mean curvature flow. B White, Ann. of Math. 2B. White, A local regularity theorem for mean curvature flow. Ann. of Math. (2) 161 (2005), no. 3, 1487-1519.
. N Wickramasekera, in preparationN. Wickramasekera, in preparation.
Convex hulls of complete minimal surfaces. F Xavier, Math. Ann. 269F. Xavier, Convex hulls of complete minimal surfaces, Math. Ann. 269 (1984) 179-182.
S T Yau, Nonlinear analysis in geometry, L'Eseignement Mathematique. S.T. Yau, Nonlinear analysis in geometry, L'Eseignement Mathematique (2) 33 (1987) 109-158.
Open problems in geometry. S T Yau, Proc. Sympos. Pure Math. 54American Mathematical SocietyPart 1S.T. Yau, Open problems in geometry, Proc. Sympos. Pure Math., 54, Part 1, American Mathematical Society, Providence, 1993.
Problem section. S T Yau, Seminar on Differential Geometry. Princeton University Press102S.T. Yau, Problem section, Seminar on Differential Geometry, Ann. of Math. Studies, v. 102, Princeton University Press (1982) 669-706.
Review of geometry and analysis, Mathematics: frontiers and perspectives. S T Yau, Amer. Math. SocProvidence, RIS.T. Yau, Review of geometry and analysis, Mathematics: frontiers and perspectives, Amer. Math. Soc., Providence, RI, (2000) 353-401.
Industrial Forecasting with Exponentially Smoothed Recurrent Neural Networks

Matthew F. Dixon
Department of Applied Math, Illinois Institute of Technology
[email protected]

February 2020

Abstract
Industrial forecasting has entered an era of unprecedented growth in the size and complexity of data which require new modeling methodologies. While many new general purpose machine learning approaches have emerged, they remain poorly understood and irreconcilable with more traditional statistical modeling approaches. We present a general class of exponentially smoothed recurrent neural networks (RNNs) which are well suited to modeling non-stationary dynamical systems arising in industrial applications such as electricity load management and financial risk and trading. In particular, we analyze their capacity to characterize the non-linear partial autocorrelation structure of time series and directly capture dynamic effects such as seasonality and regime changes. Application of exponentially smoothed RNNs to electricity load forecasting, weather data and financial time series, such as minute level Bitcoin prices and CME futures tick data, highlights the efficacy of exponential smoothing for multi-step time series forecasting. The results also suggest that popular, but more complicated neural network architectures originally designed for speech processing, such as LSTMs and GRUs, are likely over-engineered for industrial forecasting and light-weight exponentially smoothed architectures capture the salient features while being superior and more robust than simple RNNs.
Background
Recurrent neural networks (RNNs) are the building blocks of modern sequential learning. RNNs use recurrent layers to capture non-linear temporal dependencies with a relatively small number of parameters [15]. They learn temporal dynamics by mapping an input sequence to a hidden state sequence and outputs, via a recurrent layer and a feedforward layer. However, despite the success of these and their successors, there appears to be a chasm between the statistical modeling literature (see e.g. [6,20,16]) and the machine learning literature (see e.g. [3,29,18,30]).
While there have been many recent theoretical developments in recurrent neural networks from a dynamical systems perspective [22,8,27], there are still open fundamental questions as to how the type and properties of the time series data inform the choice of architectures. For example, [24] find empirical evidence that Echo State Networks are well suited for spatio-temporal modeling and Uber [32] won the 2019 M4 forecasting competition with a hybrid exponential smoothing-RNN method, which exponentiates the output of a neural network and combines it with a past observed time series level. There have been exhaustive empirical studies on the application of recurrent neural networks to prediction from financial time series data such as historical limit order book and price history [10,4,5,31,7,26]. [31] find evidence that stacking networks leads to superior performance on intra-day stock data combined with technical indicators, whereas [2] combine wavelet transforms and stacked autoencoders with LSTMs on OHLC bars and technical indicators. [5] find evidence that dilated convolutional networks outperform LSTMs on various indices. [10] demonstrate that RNNs outperform feed-forward networks with lagged features on limit order book data.
There are still open fundamental questions as to how the type and properties of the time series data inform the choice of these architectures. One of the main contributions of this paper is to introduce a new class of RNNs, with supporting theoretical justification for architectural design choices.
A second challenge with applying neural networks to time series data is the over-reliance on extensive hyper-parameter tuning, a computationally complex global optimization problem with undesirable outcomes which are dependent on initial conditions for parameter choices. Moreover, little is known about how often to re-tune the parameters and the length of historical data used to train the model can affect the hyper-parameter tuning.
A third challenge is how to combine more traditional and informative diagnostics of time series data, such as stationarity and memory cut-off, into the model selection procedure. Such an approach is standard in the time series modeling literature [6] but absent in the machine learning literature. Finally, most of the aforementioned studies are engineering-oriented and do not provide substantive insight to justify why the architectural choice is well suited to the dataset.
One of the main contributions of this paper is to cast RNNs into a time series modeling framework and rely on statistical diagnostics in combination with cross-validation to tune the architecture. The statistical tests characterize stationarity and memory cut-off length and provide insight into whether the data is suitable for longer-term forecasting and whether the model must be non-stationary.
It is well known that plain RNNs have difficulty in learning long-term dynamics, due in part to the vanishing and exploding gradients that can result from backpropagating the gradients down through the many unfolded layers of the network [3,29]. A particular type of RNN, called a Long Short-Term Memory (LSTM) [18,30], was proposed to address this issue of vanishing or exploding gradients, which essentially arises due to the shape of the activation function. A memory unit used in an LSTM allows the network to learn which previous states can be forgotten and alleviates the vanishing gradient. Partly for this reason, LSTMs have demonstrated much empirical success in the literature [13,34].
The inclusion of forget, reset and update gates in the LSTM, and in a slightly simpler variant, Gated Recurrent Units (GRUs) [9], provides a switching mechanism to forget memory while simultaneously updating a hidden state vector. These units do not, however, provide an easily accessible mathematical structure from which to study their time series modeling properties. As such, there is much opaqueness about the types of architectures which are best suited to prediction tasks based on data properties such as wide sense non-stationarity (see Appendix A). However, we shall show that exponential smoothing not only alleviates the gradient problem but characterizes the time series modeling properties of these architectures.
The main outcome of applying this approach is that it partially identifies the choice of architecture based on both its time series properties and those of the data. The approach is general and easily extensible to GRUs and LSTMs with the inclusion of the reset gate and cell memory. In this paper, the input sequence is assumed to be of fixed length, although the methodology could be extended to variable length as in sequence-to-sequence learning models [33]. The main contributions of this paper are summarized as follows:
• We show how plain RNNs, with short-term memory, can be generalized to exhibit long-term memory with a smoothing scalar α; the smoothing also helps offset the infamous vanishing gradient problem in plain RNNs at the cost of only one extra parameter;
• We show how time series analysis guides the architectural parameters -the sequence length of the RNN can be determined from the estimated partial autocorrelogram, and tanh activation of the recurrent layer is needed for stability; and
• We demonstrate how a dynamic α_t-RNN model for non-stationary data [11], a lightweight version of GRUs and LSTMs, has the ability to model complex non-stationary time series data with comparable performance.
The remainder of this paper is outlined as follows. Section 2 introduces the α-RNN and Section 3 applies time series analysis to guide the architectural properties. Section 4 introduces a dynamic version of the model and illustrates the dynamical behavior of α. Details of the training, implementation and experiments together with the results are presented in Section 6. Finally, Section 7 concludes with directions for future research.
α-RNNs
Given auto-correlated observations of covariates or predictors, $X_t$, and responses $Y_t$ at times $t = 1, \ldots, N$, in the time series data $\mathcal{D} := \{\mathbf{x}_t\}_{t=1}^{N}$, $\mathbf{x}_t := (x_t, y_t)$, our goal is to construct an $m$-step ($m > 0$) ahead time series predictor $\hat{y}_{t+m} = F(\tilde{x}_t)$ of an observed target, $y_{t+m}$, from a $p$-length input sequence $\tilde{x}_t$:
$$y_{t+m} := F(\tilde{x}_t) + u_t, \quad \text{where } \tilde{x}_t := \{x_{t-p+1}, \ldots, x_t\},$$
$x_{t-j} =: L^j[x_t]$ is the $j$-th lagged observation of $x_t \in \mathbb{R}^d$, for $j = 0, \ldots, p-1$, and $u_t$ is the homoscedastic model error at time $t$. We introduce the α-RNN:
$$\hat{y}_{t+m} = F_{W,b,\alpha}(\tilde{x}_t), \qquad (1)$$
where $F_{W,b,\alpha}(\tilde{x}_t)$ is an $\alpha \in [0,1]$ smoothed RNN with weight matrices $W := (W_h, U_h, W_y)$ and bias vectors $b := (b_h, b_y)$.
For each time step in the sequence, $s = t-p+2, \ldots, t$, forward passes separately update a hidden internal state $\hat{h}_s \in \mathbb{R}^H$ using the recurrence relations:
$$\hat{h}_s = \sigma(W_h x_s + U_h \tilde{h}_{s-1} + b_h), \qquad \tilde{h}_s = \alpha \hat{h}_s + (1 - \alpha)\tilde{h}_{s-1},$$
where $W_h \in \mathbb{R}^{H \times d}$ is the input weight matrix and $U_h \in \mathbb{R}^{H \times H}$ is the recurrence weight matrix; $H$ is the number of hidden units and $d$ is the dimensionality of the input space. $\tilde{h}_s \in \mathbb{R}^H$ is an exponentially smoothed version of the hidden state $\hat{h}_s$. When the output is continuous, the output from the final hidden state is given by:
$$\hat{y}_{t+m} = W_y \hat{h}_t + b_y, \qquad (2)$$
with the starting condition in each sequence, $\hat{h}_{t-p+1} = \sigma(W_h x_{t-p+1})$.
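For concreteness, the following NumPy sketch implements one forward pass of the recurrence and output map above for a fixed α. The function name, the randomly drawn weights and the shapes are illustrative only; this is not the implementation used in the experiments.

```python
import numpy as np

def alpha_rnn_forward(x_seq, W_h, U_h, b_h, W_y, b_y, alpha):
    """Forward pass of a fixed-alpha RNN over one input sequence.

    x_seq : array of shape (p, d) holding x_{t-p+1}, ..., x_t.
    Returns the prediction y_hat_{t+m} = W_y h_hat_t + b_y.
    """
    h_hat = np.tanh(W_h @ x_seq[0])        # starting condition h_hat_{t-p+1}
    h_bar = h_hat.copy()                   # initialize the smoothed hidden state
    for s in range(1, x_seq.shape[0]):
        h_hat = np.tanh(W_h @ x_seq[s] + U_h @ h_bar + b_h)   # hidden state update
        h_bar = alpha * h_hat + (1 - alpha) * h_bar           # exponential smoothing
    return W_y @ h_hat + b_y

# illustrative shapes: d = 2 inputs, H = 4 hidden units, scalar output
rng = np.random.default_rng(0)
p, d, H = 5, 2, 4
x_seq = rng.normal(size=(p, d))
W_h, U_h, b_h = rng.normal(size=(H, d)), rng.normal(size=(H, H)), np.zeros(H)
W_y, b_y = rng.normal(size=(1, H)), np.zeros(1)
print(alpha_rnn_forward(x_seq, W_h, U_h, b_h, W_y, b_y, alpha=0.5))
```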
Time Series Modeling
This section bridges the time series modeling literature [6,20,21] and the machine learning literature. We shall assume here, for ease of exposition, that the time series data is univariate ($d = 1$), $\mathcal{D} := \{y_t\}_{t=1}^N$, and thus the predictor is endogenous. Since autoregressive (AR(p)) models are well known in time series modeling, we find it instructive to show that plain RNNs are non-linear AR(p) models. For ease of exposition, consider the simplest case of an RNN with one hidden unit, $H = 1$. Without loss of generality, we set $U_h = W_h = \phi$, $W_y = 1$, $b_h = 0$ and $b_y = \mu$. Then we can show by backward substitution that a plain RNN, $F_{W,b}(\tilde{x}_t)$, with sequence length $p$, is a non-linear auto-regressive, NAR(p), model of order $p$:
$$\hat{h}_{t-p+1} = \sigma(\phi y_{t-p+1})$$
$$\hat{h}_{t-p+2} = \sigma(\phi \hat{h}_{t-p+1} + \phi y_{t-p+2})$$
$$\vdots$$
$$\hat{h}_t = \sigma(\phi \hat{h}_{t-1} + \phi y_t)$$
$$\hat{y}_{t+m} = \hat{h}_t + \mu,$$
so that
$$\hat{y}_{t+m} = \mu + \sigma(\phi(1 + \sigma(\phi(L + \sigma(\phi(L^2 + \cdots + \sigma(\phi L^{p-1})\ldots)[y_t]. \qquad (3)$$
When the activation is the identity function, $\sigma := \mathrm{Id}$, then we recover the AR(p) model
$$\hat{y}_{t+m} = \mu + \sum_{i=0}^{p-1} \phi_{i+1} L^i[y_t], \quad \phi_i = \phi^i, \qquad (4)$$
with geometrically decaying autoregressive coefficients when |φ| < 1.
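As a quick sanity check of the unactivated case, the short NumPy snippet below verifies numerically that backward substitution with the identity activation reproduces the AR(p) prediction with geometrically decaying coefficients $\phi^{i+1}$ (taking $\mu = 0$); the values of $\phi$ and $p$ are arbitrary.

```python
import numpy as np

phi, p = 0.6, 4
y = np.random.default_rng(1).normal(size=p)      # y_{t-p+1}, ..., y_t

# unactivated RNN: backward substitution with sigma = identity
h = phi * y[0]
for s in range(1, p):
    h = phi * h + phi * y[s]

# AR(p) with coefficients phi_{i+1} = phi^{i+1}
ar = sum(phi ** (i + 1) * y[p - 1 - i] for i in range(p))

print(np.isclose(h, ar))    # True
```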
α-RNNs
The α-RNN(p) is almost identical to a plain RNN, but with an additional scalar smoothing parameter, α, which provides the recurrent network with "long memory". To see this, let us consider a one-step ahead univariate α-RNN(p) in which the smoothing parameter is fixed. For each time step $s = t-p+2, \ldots, t$:
$$\text{(output)} \quad \hat{y}_{t+1} = W_y \hat{h}_t + b_y,$$
$$\text{(hidden state update)} \quad \hat{h}_s = \sigma(U_h \tilde{h}_{s-1} + W_h y_s + b_h),$$
$$\text{(smoothing)} \quad \tilde{h}_s = \alpha \hat{h}_s + (1 - \alpha)\tilde{h}_{s-1}.$$
This model augments the plain RNN by replacing $\hat{h}_{s-1}$ in the hidden layer with an exponentially smoothed hidden state $\tilde{h}_{s-1}$. The effect of the smoothing is to provide infinite memory when $\alpha \neq 1$. For the special case when $\alpha = 1$, we recover the plain RNN with short memory of length $p \ll N$.
We can easily see this informally by simplifying the parameterization and considering the unactivated case. Setting $b_y = b_h = 0$, $U_h = W_h = \phi \in \mathbb{R}$ and $W_y = 1$:
$$\hat{y}_{t+1} = \hat{h}_t \qquad (5)$$
$$= \phi(\tilde{h}_{t-1} + y_t) \qquad (6)$$
$$= \phi(\alpha \hat{h}_{t-1} + (1-\alpha)\tilde{h}_{t-2} + y_t), \qquad (7)$$
with the starting condition in each sequence, $\hat{h}_{t-p+1} = \phi y_{t-p+1}$. Without loss of generality, consider $p = 2$ lags in the model so that $\hat{h}_{t-1} = \phi y_{t-1}$. Then
$$\hat{h}_t = \phi(\alpha\phi y_{t-1} + (1-\alpha)\tilde{h}_{t-2} + y_t) \qquad (8)$$
and the model can be written in the simpler form
$$\hat{y}_{t+1} = \phi_1 y_t + \phi_2 y_{t-1} + \phi(1-\alpha)\tilde{h}_{t-2}, \qquad (9)$$
with auto-regressive weights $\phi_1 := \phi$ and $\phi_2 := \alpha\phi^2$. We now see that there is a third term on the RHS of Eq. 9 which vanishes when $\alpha = 1$ but provides infinite memory to the model, since $\tilde{h}_{t-2}$ depends on $y_1$, the first observation in the whole time series, not just the first observation in the sequence. To see this, we unroll the recursion relation in the exponential smoother:
$$\tilde{h}_{t+1} = \alpha \sum_{s=0}^{t-1}(1-\alpha)^s \hat{h}_{t-s} + (1-\alpha)^t y_1, \qquad (10)$$
where we used the property that $\tilde{h}_1 = y_1$. It is often convenient to characterize exponential smoothing by the half-life: the number of lags needed for the coefficient $(1-\alpha)^s$ to equal a half, which is $s = -1/\log_2(1-\alpha)$. To gain further insight, we use partial auto-correlations to characterize the memory of the model.
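As a small, purely illustrative snippet, the half-life formula above can be tabulated for a few smoothing values:

```python
import numpy as np

def half_life(alpha):
    """Number of lags for the smoothing weight (1 - alpha)^s to decay to one half."""
    return -1.0 / np.log2(1.0 - alpha)

for a in (0.1, 0.25, 0.5, 0.9):
    print(f"alpha={a:.2f}  half-life={half_life(a):.2f} lags")
```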
Partial Autocorrelation
We consider autoregressive time series models, with additive white noise 3 , of the form
$$y_t = \hat{y}_t + u_t, \quad u_t \sim N(0, \sigma_n^2),$$
which carry a signature which allows its order, p, to be determined from "covariance stationary" time series data (see Appendix A for a definition of covariance stationarity). This signature encodes the memory in the model and is given by "partial autocorrelations". Informally each partial autocorrelation measures the correlation of a random variable, y t , with its h th lag, y t−h , while controlling for intermediate lags. The partial autocorrelation must be non-zero to be able to predict y t from y t−h . The sign of the partial autocorrelation is also useful for interpretability and describes the directional relationship between the random variables. The formal definition of the partial autocorrelation is now given.
Definition 3.1 (Partial Autocorrelation). A partial autocorrelation at lag $h \geq 2$ is a conditional autocorrelation between a variable, $y_t$, and its $h$-th lag, $y_{t-h}$, under the assumption that the values of the intermediate lags, $y_{t-1}, \ldots, y_{t-h+1}$, are controlled:
$$\tilde{\tau}_h := \tilde{\tau}_{t,t-h} := \frac{\tilde{\gamma}_h}{\sqrt{\tilde{\gamma}_{t,h}\,\tilde{\gamma}_{t-h,h}}},$$
where
$$\tilde{\gamma}_h := \tilde{\gamma}_{t,t-h} := \mathbb{E}[y_t - P(y_t \mid y_{t-1}, \ldots, y_{t-h+1}),\; y_{t-h} - P(y_{t-h} \mid y_{t-1}, \ldots, y_{t-h+1})]$$
is the lag-$h$ partial autocovariance, $P(W \mid Z)$ is an orthogonal projection of $W$ onto the set $Z$, and
$$\tilde{\gamma}_{t,h} := \mathbb{E}[(y_t - P(y_t \mid y_{t-1}, \ldots, y_{t-h+1}))^2]. \qquad (11)$$
The partial autocorrelation function (PACF) $\tilde{\tau}_h : \mathbb{N} \to [-1, 1]$ is the map $h \mapsto \tilde{\tau}_h$. The plot of $\tilde{\tau}_h$ against $h$ is referred to as the partial correlogram.
The PACF of the RNN(p) can be used to determine the lower bound on the sequence length in an α-RNN(p). To see this, we first show that the partial autocorrelation of the α-RNN(p) is time independent and has a sharp cut-off after p lags if α = 1.
Theorem 1 (Partial autocorrelation of an α-RNN(p)). The partial autocorrelation of the α-RNN(p) is time independent and exhibits a cut-off after p lags:
$$\tilde{\tau}_s = 0, \; s > p, \quad \text{if } \alpha = 1.$$
If $\alpha \in (0, 1)$, the α-RNN(p) has non-zero partial autocorrelations at lags beyond the sequence length.
See Appendix B for a proof. For $\alpha \in (0, 1)$, the α-RNN has non-zero partial autocorrelations $\tilde{\tau}_s \neq 0$, $s > p$. It is easy to see this from the additional term containing α in Equation 9. Further insight can be gained from Figure 1, which shows the fitted partial correlogram from data generated by an α-RNN(3) with additive white noise. We observe that the memory is always longer than the sequence length of 3 when $\alpha \in (0, 1)$. As α approaches zero, the model has increasing memory. The theorem and the properties of the α-RNN(p) suggest determining the sequence length $p$ from the fitted PACF of each covariate. Moreover, the prediction horizon should not exceed that suggested by the maximum order of statistically significant partial autocorrelations.
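A sketch of this diagnostic, using the pacf routine from statsmodels, is given below. The helper name choose_sequence_length and the AR(3) example series are illustrative, and the 95% band is the usual white-noise approximation rather than anything prescribed in the paper.

```python
import numpy as np
from statsmodels.tsa.stattools import pacf

def choose_sequence_length(y, max_lag=50):
    """Return the largest lag with a statistically significant partial
    autocorrelation, using the approximate 95% band +/- 1.96/sqrt(N)."""
    tau = pacf(y, nlags=max_lag)          # tau[0] = 1 corresponds to lag 0
    band = 1.96 / np.sqrt(len(y))
    significant = [h for h in range(1, max_lag + 1) if abs(tau[h]) > band]
    return max(significant) if significant else 1

# example: an AR(3) series should typically yield a sequence length close to 3
rng = np.random.default_rng(0)
y = np.zeros(5000)
for t in range(3, 5000):
    y[t] = 0.5 * y[t-1] + 0.2 * y[t-2] + 0.1 * y[t-3] + rng.normal()
print(choose_sequence_length(y, max_lag=20))
```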
Stability
Time series modeling also places emphasis on model "stability": this is the model attribute that past random disturbances decay in the model and that the effect of lagged data becomes less relevant to the model output with increasing lag. We present the following theorem, which shows that the stability of the α-RNN model is determined solely by the hidden state's activation function, with the property that $|\sigma(\cdot)| < 1$.
Theorem 2 (Stability of RNN(p) models). Suppose that there exists an invertible nonlinear function of the lag operator $\Phi(L)$ of the form:
$$y_t = \Phi^{-1}(L)[u_t] = \left(1 - \sigma(\phi\sigma(\phi\sigma(\phi\sigma(\ldots)) + \ldots + \phi L^2) + \phi L)\right)^{-1}[u_t],$$
where, without loss of generality, we have again conveniently set $W_h = U_h = \phi$, $W_y = 1$ and $b_h = b_y = 1$. Then the RNN is stable if and only if $|\sigma(x)| < 1$ for all finite $x$. See Appendix C for a proof. In particular, the theorem justifies the common choice of $\sigma(\cdot) := \tanh(\cdot)$, and while the sigmoid is another viable choice, it is too restrictive as it prohibits negative partial auto-correlations. We shall see in Section 6 that negative partial auto-correlations arise in time series data.
Vanishing gradient
It is well known that plain-RNNs exhibit a vanishing gradient problem [29,3]. Following [29], we extend some of the analysis of BPTT to α-RNNs. The α-RNN(p) BPTT can be written as:
$$\frac{\partial \mathcal{L}}{\partial W} = \sum_{t=1}^{N}\frac{\partial \mathcal{L}_t}{\partial W} \qquad (12)$$
$$= \sum_{t=1}^{N}\frac{\partial \mathcal{L}_t}{\partial \tilde{h}_t}\sum_{k=t-p}^{t-1}\left(\prod_{i=k+1}^{t-1}\frac{\partial \tilde{h}_i}{\partial \tilde{h}_{i-1}}\right)\frac{\partial \tilde{h}_k}{\partial W} \qquad (13)$$
for some generic loss function $\mathcal{L}_t$, where
$$\frac{\partial \tilde{h}_i}{\partial \tilde{h}_{i-1}} = (1-\alpha) + \alpha\frac{\partial \hat{h}_i}{\partial \tilde{h}_{i-1}} = (1-\alpha) + \alpha\,\sigma'(W_h x_i + U_h\tilde{h}_{i-1} + b_h)\,U_h.$$
Substituting the expression for $\partial \tilde{h}_i / \partial \tilde{h}_{i-1}$ into Equation 12 gives an expression proportional to
$$\prod_{i=k+1}^{t-1}\left[(1-\alpha) + \alpha\,\sigma'(I_i)\,U_h\right], \quad I_i := W_h x_i + U_h\tilde{h}_{i-1} + b_h.$$
When $\alpha = 1$ this expression is $\prod_{i=k+1}^{t-1}\sigma'(I_i)\,U_h$.
When $\sigma(\cdot) := \tanh(\cdot)$, this product goes to zero with an increasing number of function compositions, due to the gradient of tanh vanishing for large $|x|$ and the product of small terms. However, when $\alpha \neq 1$, the $(1-\alpha)$ term provides an additional contribution to the gradient which is non-trivial for small α and independent of the input.
Thus the α-RNN not only alleviates the vanishing gradient problem and has guaranteed stability if we choose a tanh activation function, but also exhibits non-zero partial auto-correlations up to at least lag $p$ for $\alpha \in (0, 1]$.
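The following toy computation (purely illustrative; the pre-activations and recurrence weight are made up) contrasts the magnitude of the unrolled gradient factor for α = 1 with that for smaller α, showing how the (1 − α) term keeps the product from collapsing to zero:

```python
import numpy as np

rng = np.random.default_rng(0)
U_h, depth = 0.9, 50
pre_activations = rng.normal(scale=2.0, size=depth)   # illustrative values of I_i

def gradient_product(alpha):
    # product over the unrolled steps of (1 - alpha) + alpha * sech^2(I_i) * U_h,
    # where sech^2 is the derivative of tanh
    terms = (1 - alpha) + alpha * (1.0 / np.cosh(pre_activations) ** 2) * U_h
    return np.prod(terms)

for a in (1.0, 0.5, 0.1):
    print(f"alpha={a:.1f}  |gradient factor| = {abs(gradient_product(a)):.3e}")
```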
The α-RNN model can be trained by treating α as an additional parameter to be fitted with stochastic gradient descent. The choice to pre-determine that α is independent of time is obviously limited to stationary time series. While this is restrictive, it suggests that a simple statistical test of the time series can pre-determine the efficacy of the approach 5 . Moreover, if the data is covariance stationary, then the α-RNN will preserve the stationarity of the partial auto-correlation structure, eliminating the need for more complex architectures such as GRUs and LSTMs (which are often motivated purely on the basis of the vanishing gradient problem). Such a procedure shall be demonstrated in Section 6. We can extend the model to non-stationary time series, which exhibit dynamic partial autocorrelation, by using a dynamic version of exponential smoothing.
Dynamic α t -RNNs
The extension of RNNs to dynamical time series models, suitable for non-stationary time series data, relies on dynamic exponential smoothing, which is a time-dependent, convex combination of the smoothed output, $\tilde{h}_t$, and the hidden state $\hat{h}_t$:
$$\tilde{h}_{t+1} = \alpha_t\hat{h}_t + (1-\alpha_t)\tilde{h}_t, \qquad (14)$$
where $\alpha_t \in [0,1]$ denotes the dynamic smoothing factor, which can be equivalently written in the one-step-ahead forecast form
$$\tilde{h}_{t+1} = \tilde{h}_t + \alpha_t(\hat{h}_t - \tilde{h}_t). \qquad (15)$$
Hence the smoothing can be viewed as a form of dynamic forecast error correction. When $\alpha_t = 0$, the forecast error is ignored and the smoothing merely repeats the current hidden state $\tilde{h}_t$, to the effect of the model losing its memory. When $\alpha_t = 1$, the forecast error overwrites the current hidden state $\tilde{h}_t$. The smoothing can also be viewed as a weighted sum of the lagged observations, with lower or equal weights $\alpha_{t-s}\prod_{r=1}^{s}(1-\alpha_{t-r+1})$ at the $s \geq 1$ lagged hidden state, $\hat{h}_{t-s}$:
$$\tilde{h}_{t+1} = \alpha_t\hat{h}_t + \sum_{s=1}^{t-1}\alpha_{t-s}\prod_{r=1}^{s}(1-\alpha_{t-r+1})\,\hat{h}_{t-s} + g(\alpha), \quad \text{where } g(\alpha) := \prod_{r=0}^{t-1}(1-\alpha_{t-r})\,\tilde{y}_1.$$
Note that for any $\alpha_{t-r+1} = 1$, the smoothed hidden state $\tilde{h}_{t+1}$ will have no dependency on the lagged hidden states $\{\hat{h}_{t-s}\}_{s \geq r}$. The model simply forgets the hidden states at or beyond the $r$-th lag.
Neural network exponential smoothing
While the class of $\alpha_t$-RNN models under consideration is free to define how α is updated (including changing the frequency of the update) based on the hidden state and input, a convenient choice is to use a recurrent layer. Returning again to the more general setup with a hidden state vector $h_t \in \mathbb{R}^H$, let us model the smoothing parameter $\hat{\alpha}_t \in [0,1]^H$ to give a filtered time series
$$\tilde{h}_t = \hat{\alpha}_t \circ \hat{h}_t + (1 - \hat{\alpha}_t)\circ\tilde{h}_{t-1}, \qquad (16)$$
where $\circ$ denotes the Hadamard product between vectors. This smoothing is a vectorized form of the above classical setting, only here we note that when $(\hat{\alpha}_t)_i = 1$, the $i$-th component of the hidden variable is unmodified and the past filtered hidden variable is forgotten. On the other hand, when $(\hat{\alpha}_t)_i = 0$, the $i$-th component of the hidden variable is obsolete, instead setting the current filtered hidden variable to its past value. The smoothing in Equation 16 can then be viewed as updating long-term memory, maintaining a smoothed hidden state variable as the memory through a convex combination of the current hidden variable and the previous smoothed hidden variable.
The hidden variable is given by the semi-affine transformation:
$$\hat{h}_t = \sigma(U_h\tilde{h}_{t-1} + W_h x_t + b_h), \qquad (17)$$
which in turn depends on the previous smoothed hidden variable. Substituting Equation 17 into Equation 16 gives a function of $\tilde{h}_{t-1}$ and $x_t$:
$$\tilde{h}_t = g(\tilde{h}_{t-1}, x_t; \alpha) \qquad (18)$$
$$:= \hat{\alpha}_t \circ \sigma(U_h\tilde{h}_{t-1} + W_h x_t + b_h) + (1-\hat{\alpha}_t)\circ\tilde{h}_{t-1}. \qquad (19)$$
We see that when α t = 0, the smoothed hidden variableh t is not updated by the input x t . Conversely, when α t = 1, we observe that the hidden variable locally behaves like a non-linear autoregressive series. Thus the smoothing parameter can be viewed as the sensitivity of the smoothed hidden state to the input x t .
The challenge becomes how to determine dynamically how much error correction is needed. As in GRUs and LSTMs, we can address this problem by learning $\hat{\alpha} = F_{(W_\alpha, U_\alpha, b_\alpha)}(x_t)$ from the input variables, with the recurrent layer parameterized by weights and biases $(W_\alpha, U_\alpha, b_\alpha)$. The one-step ahead forecast of the smoothed hidden state, $\tilde{h}_t$, is the filtered output of another plain RNN with weights and biases $(W_h, U_h, b_h)$.
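To make the architecture concrete, a minimal custom Keras cell along these lines is sketched below. This is an illustrative re-implementation and not the authors' AlphaRNN/AlphatRNN classes: the class name is hypothetical, the cell emits the unsmoothed hidden state while carrying the smoothed state between steps, and a final Dense layer maps the last output to the forecast.

```python
import tensorflow as tf

class AlphaTRNNCell(tf.keras.layers.Layer):
    """Sketch of an alpha_t-RNN cell: a plain RNN whose hidden state is
    exponentially smoothed with a learned, input-dependent smoothing vector."""

    def __init__(self, units, **kwargs):
        super().__init__(**kwargs)
        self.units = units
        self.state_size = units          # the smoothed hidden state h_tilde

    def build(self, input_shape):
        d = input_shape[-1]
        self.W_h = self.add_weight(shape=(d, self.units), initializer="glorot_uniform", name="W_h")
        self.U_h = self.add_weight(shape=(self.units, self.units), initializer="orthogonal", name="U_h")
        self.b_h = self.add_weight(shape=(self.units,), initializer="zeros", name="b_h")
        self.W_a = self.add_weight(shape=(d, self.units), initializer="glorot_uniform", name="W_a")
        self.U_a = self.add_weight(shape=(self.units, self.units), initializer="orthogonal", name="U_a")
        self.b_a = self.add_weight(shape=(self.units,), initializer="zeros", name="b_a")

    def call(self, inputs, states):
        h_tilde = states[0]
        # smoother update: alpha_t from a recurrent layer with sigmoid activation
        alpha = tf.sigmoid(tf.matmul(inputs, self.W_a) + tf.matmul(h_tilde, self.U_a) + self.b_a)
        # hidden state update using the previous smoothed state
        h_hat = tf.tanh(tf.matmul(inputs, self.W_h) + tf.matmul(h_tilde, self.U_h) + self.b_h)
        # exponential smoothing of the hidden state
        h_tilde_new = alpha * h_hat + (1.0 - alpha) * h_tilde
        return h_hat, [h_tilde_new]

# usage sketch with illustrative sequence length p = 20 and d = 1 input:
# model = tf.keras.Sequential([
#     tf.keras.layers.RNN(AlphaTRNNCell(10), input_shape=(20, 1)),
#     tf.keras.layers.Dense(1),
# ])
```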
Comparison with α-RNNs
Figure 2a shows the response of a univariate α-RNN model when the input consists of two unit impulses and zeros otherwise. For simplicity, the sequence length is assumed to be 3 (i.e. the RNN has a memory of 3 lags), the biases are set to zero, all the weights are set to one and $\sigma(x) := \tanh(x)$. The RNN loses memory of the unit impulse after three lags, whereas the RNNs with smoothed hidden states maintain memory of the first unit impulse even when the second unit impulse arrives. The figure also shows an $\alpha_t$-RNN model which, although it appears insignificant in this example, allows the model to fit non-stationary time series data. This is because the time-dependent $\alpha_t$ results in dynamic partial autocorrelations. The response of $\hat{\alpha}_t$ to shocks can be seen in Figure 2b.
Relationship to GRUs and LSTMs
GRUs
The $\alpha_t$-RNN model has no means to entirely reset its memory and become a feed-forward network (FFN). This is because the hidden variable's update equation always depends on the previous smoothed hidden state, unless $U_h = 0$. By adding a reset layer, we recover a GRU:
$$\text{smoothing:} \quad \tilde{h}_t = \hat{\alpha}_t \circ \hat{h}_t + (1-\hat{\alpha}_t)\circ\tilde{h}_{t-1}$$
$$\text{smoother update:} \quad \hat{\alpha}_t = \sigma^{(1)}(U_\alpha\tilde{h}_{t-1} + W_\alpha x_t + b_\alpha)$$
$$\text{hidden state update:} \quad \hat{h}_t = \sigma(U_h \hat{r}_t \circ \tilde{h}_{t-1} + W_h x_t + b_h)$$
$$\text{reset update:} \quad \hat{r}_t = \sigma^{(1)}(U_r\tilde{h}_{t-1} + W_r x_t + b_r).$$
When viewed as an extension of our α t RNN model, we observe that the effect of introducing a reset, or switch,r t , is to forget the dependence ofĥ t on the smoothed hidden state. Effectively, we turn the update forĥ t from a plain RNN to a FFN and entirely neglect the recurrence. The recurrence in the update ofĥ t is thus dynamic. It may appear that the combination of a reset and adaptive smoothing is redundant. But remember thatα t effects the level of error correction in the update of the smoothed hidden state,h t , whereasr t adjusts the level of recurrence in the unsmoothed hidden stateĥ t . Put differently,α t by itself can not disable the memory in the smoothed hidden state (internal memory), whereasr t in combination withα t can. More precisely, when α t = 1 andr t = 0,h t =ĥ t = σ(W h x t + b h ) which is reset to the latest input, x t , and the GRU is just a FFN. Also, when α t = 1 andr t > 0, a GRU acts like a plain RNN. Thus a GRU can be seen as a more general architecture which is capable of being a FFN or a plain RNN under certain parameter values.
These additional layers (or cells) enable a GRU to learn extremely complex long-term temporal dynamics that a vanilla RNN is not capable of. Lastly, we comment in passing that in the GRU, as in an RNN, there is a final feedforward layer to transform the (smoothed) hidden state to a response:
$$\hat{y}_t = W_Y\tilde{h}_t + b_Y. \qquad (20)$$
LSTMs
The α t -RNN model, like the GRU, provides a mechanism for propagating a smoothed hidden state -a long term memory which can be overridden and even turn the network into a plain RNN (with short memory) or even a memoryless FFN. More complex models using hidden units with varying connections within the memory unit have been proposed in the engineering literature with empirical success [18,13,34]. LSTMs are similar to GRUs but have a separate (cell) memory, c t , in addition to a hidden state h t . LSTMs also do not require that the memory updates are a convex combination. Hence they are more general than exponential smoothing. The mathematical description of LSTMs is rarely given in an intuitive form, but the model can be found in, for example, [18].
The cell memory is updated by the following expression involving a forget gate, $\hat{\alpha}_t$, an input gate $\hat{z}_t$ and a cell gate $\hat{c}_t$:
$$c_t = \hat{\alpha}_t \circ c_{t-1} + \hat{z}_t \circ \hat{c}_t. \qquad (21)$$
In the terminology of LSTMs, the triple (α t ,r t ,ẑ t ) are respectively referred to as the forget gate, output gate, and input gate. Our change of terminology is deliberate and designed to provided more intuition and continuity with RNNs and the statistics literature. We note that in the special case whenẑ t = 1 −α t we obtain a similar exponential smoothing expression to that used in our α t -RNN. Beyond that, the role of the input gate appears superfluous and difficult to reason with using time series analysis.
When the forget gate,α t = 0, then the cell memory depends solely on the cell memory gate updateĉ t . By the termα t • c t−1 , the cell memory has long-term memory which is only forgotten beyond lag s ifα t−s = 0. Thus the cell memory has an adaptive autoregressive structure.
The extra "memory", treated as a hidden state and separate from the cell memory, is nothing more than a Hadamard product:
$$h_t = \hat{r}_t \circ \tanh(c_t), \qquad (22)$$
which is reset ifr t = 0. Ifr t = 1, then the cell memory directly determines the hidden state.
Thus the reset gate can entirely override the effect of the cell memory's autoregressive structure, without erasing it. In contrast, the α t -RNN and the GRU has one memory, which serves as the hidden state, and it is directly affected by the reset gate.
The reset, forget, input and cell memory gates are updated by plain RNNs all depending on the hidden state h t .
$$\text{Reset gate:} \quad \hat{r}_t = \sigma(U_r h_{t-1} + W_r x_t + b_r)$$
$$\text{Forget gate:} \quad \hat{\alpha}_t = \sigma(U_\alpha h_{t-1} + W_\alpha x_t + b_\alpha)$$
$$\text{Input gate:} \quad \hat{z}_t = \sigma(U_z h_{t-1} + W_z x_t + b_z)$$
$$\text{Cell memory gate:} \quad \hat{c}_t = \tanh(U_c h_{t-1} + W_c x_t + b_c).$$
Like the α t -RNN, the LSTM can function as a short-memory, plain-RNN; just set α t = 0 in Equation 21. However, the LSTM can also function as a coupling of FFNs; just setr t = 0 so that h t = 0 and hence there is no recurrence structure in any of the gates. For avoidance of doubt, since the nomenclature doesn't suggest it, all models in this paper can model long and short-term autoregressive memory. The α t -RNN couples these memories through a smoothed hidden state variable. The LSTM separates out the long memory, stored in the cellular memory, but uses a copy of it, which may additionally be reset. Strictly speaking, the cellular memory has long-short autoregressive memory structure, so it would be misleading in the context of time series analysis to strictly discern the two memories as long and short (as the nomenclature suggests). The latter can be thought of as a truncated version of the former.
Numerical Experiments
This section describes numerical experiments using simulated and real time series data to evaluate the various RNN models. All models are implemented in v1.15.0 of TensorFlow [1]. Time series cross-validation is performed using separate training, validation and test sets. To preserve the time structure of the data and avoid look-ahead bias, each set represents a contiguous sampling period, with the test set containing the most recent observations. To prepare the training, validation and testing sets for m-step ahead prediction, we set the target variables (responses) to the t + m observation, y_{t+m}, and use the lags from t − p + 1, . . . , t for each input sequence. This is repeated by incrementing t until the end of each set. In our experiments, each element in the input sequence is either a scalar or vector and the target variables are scalar.
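A minimal sketch of this windowing step is given below; make_sequences is a hypothetical helper, not part of the paper's code.

```python
import numpy as np

def make_sequences(x, y, p, m):
    """Build (input sequence, target) pairs for m-step ahead forecasting.

    x : (N, d) array of covariates, y : (N,) array of targets.
    Returns X of shape (n_samples, p, d) and Y of shape (n_samples,).
    """
    X, Y = [], []
    for t in range(p - 1, len(x) - m):
        X.append(x[t - p + 1 : t + 1])   # lags x_{t-p+1}, ..., x_t
        Y.append(y[t + m])               # target y_{t+m}
    return np.asarray(X), np.asarray(Y)
```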
We use the SimpleRNN Keras method with the default settings to implement a fully connected RNN. Tanh activation functions are used for the hidden layer with the number of units found by time series cross-validation with five folds to be H ∈ {5, 10, 20} and L 1 regularization, λ 1 ∈ {0, 10 −3 , 10 −2 }. The Glorot and Bengio uniform method [14] is used to initialize the non-recurrent weight matrices and an orthogonal method is used to initialize the recurrence weights as a random orthogonal matrix. Keras's GRU method is implemented using version 1406.1078v, which applies the reset gate to the hidden state before matrix multiplication. Similarly, the LSTM method in Keras is used. Tanh activation functions are used for the recurrence layer and sigmoid activation functions are used for all other gates. The AlphaRNN and AlphatRNN classes are implemented by the authors for use in Keras. Statefulness is always disabled.
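For reference, a plain-RNN baseline matching the settings just described could be assembled as follows. This is only a sketch: build_simple_rnn is a hypothetical helper, and the cross-validated values of H and the L1 penalty vary by dataset.

```python
import tensorflow as tf

def build_simple_rnn(p, d, H=10, l1=1e-3):
    """Plain RNN baseline: tanh recurrence, Glorot uniform kernel,
    orthogonal recurrent initializer, L1 regularization and a linear output."""
    model = tf.keras.Sequential([
        tf.keras.layers.SimpleRNN(
            H, activation="tanh",
            kernel_initializer="glorot_uniform",
            recurrent_initializer="orthogonal",
            kernel_regularizer=tf.keras.regularizers.l1(l1),
            input_shape=(p, d)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```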
Each architecture is trained for up to 2000 epochs with an Adam optimization algorithm with default parameter values and using a mini-batch size of 1000 drawn from the training set. Early stopping is used with a patience of 50 to 100 and a minimum delta between 10^{-8} and 10^{-6}. No randomization is used in the mini-batch sampling in order to preserve the ordering of the data. To evaluate the forecasting accuracy, we set the forecast horizon to up to ten steps ahead instead of the usual one-step ahead forecasts often presented in the machine learning literature; longer forecasting horizons are often more relevant due to operational constraints in industry applications and are more challenging when the data is non-stationary, since the network's fixed partial auto-correlation will not adequately capture the observed changing autoregressive structure. The numerical results are separated into two areas of interest: (i) properties of recurrent architectures on archetypal data properties such as seasonality and regime switching; and (ii) evaluation on real data to demonstrate their utility in industrial forecasting.
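The corresponding training loop might look like the following sketch. Here X_train, Y_train, X_val and Y_val are assumed to come from a windowing step such as make_sequences above, build_simple_rnn is the hypothetical helper from the previous snippet, and the patience and minimum delta are just one point in the ranges quoted above.

```python
import tensorflow as tf

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=100, min_delta=1e-6, restore_best_weights=True)

model = build_simple_rnn(p=30, d=1)             # illustrative p and d
history = model.fit(
    X_train, Y_train,
    validation_data=(X_val, Y_val),
    epochs=2000, batch_size=1000,
    shuffle=False,                              # preserve the temporal ordering
    callbacks=[early_stop], verbose=0)
```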
LLM Seasonality DGP
To characterize the ability of exponentially smoothed RNNs to directly capture seasonality from noisy data, without the need to separately deseasonalize the data, we generate hourly data from an additive local level model with daily seasonality and i.i.d. noise [17]:
$$\text{observed series:} \quad y_t = \mu_t + \gamma_t + u_t, \quad u_t \sim N(0, \sigma_u^2),$$
$$\text{latent level:} \quad \mu_t = \mu_{t-1} + \chi_t, \quad \chi_t \sim N(0, \sigma_\chi^2),$$
$$\text{latent seasonal:} \quad \gamma_t = -\sum_{j=1}^{s-1}\gamma_{t-j} + \omega_t, \quad \omega_t \sim N(0, \sigma_\omega^2),$$
for $t \in \{s, \ldots, N\}$. Choosing $s = 24$, we simulate $N = 10{,}000$ observations under noise variances $\sigma_u^2 = 300$, $\sigma_\chi^2 = 1$, $\sigma_\omega^2 = 1$. The first 8,000 observations are used for training and the remaining are used for testing.
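A short simulation sketch of this data generating process is given below (simulate_llm_seasonal is a hypothetical helper and the seed is arbitrary):

```python
import numpy as np

def simulate_llm_seasonal(N=10_000, s=24, var_u=300.0, var_chi=1.0, var_omega=1.0, seed=0):
    """Simulate the additive local level model with daily seasonality described above."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(N)       # latent level
    gamma = np.zeros(N)    # latent seasonal component
    y = np.zeros(N)        # observed series
    for t in range(s, N):
        mu[t] = mu[t - 1] + rng.normal(scale=np.sqrt(var_chi))
        gamma[t] = -gamma[t - s + 1 : t].sum() + rng.normal(scale=np.sqrt(var_omega))
        y[t] = mu[t] + gamma[t] + rng.normal(scale=np.sqrt(var_u))
    return y

y = simulate_llm_seasonal()
```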
The data is non-stationary: we accept the Null hypothesis of the augmented Dickey-Fuller test, which states that the AR model contains a unit root. The test statistic is 0.317 and the p-value is 0.9230 (the critical values are 1%: -3.431, 5%: -2.862, and 10%: -2.567). We choose a sequence length of p = 30 based on the PACF (not shown as the DGP is known).
Figure 3a shows the PACF of the generated data: the positive lags at 24, 48, 72 and 96, due to the seasonality, are clearly shown. Figures 3b and 3c show the PACF of the RNN and α-RNN, both stationary architectures. Neither is able to capture the seasonality. On the other hand, the α_t-RNN, the GRU and the LSTM, shown in Figure 3(d-f), adequately capture the seasonality. This is an important observation, as typically deseasonalization techniques such as differencing or convolutional filtering are needed. For completeness, the cross-validated parameters, using five folds, are shown in Table 1.
Table 1: The cross-validated parameters of the ten-step ahead forecasting models for the llm+seasonality data generation process.
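The stationarity diagnostics quoted throughout this section can be reproduced with statsmodels; a sketch of such a reporting helper (adf_report is a hypothetical name) is:

```python
from statsmodels.tsa.stattools import adfuller

def adf_report(y):
    """Augmented Dickey-Fuller test: the Null hypothesis is that the series
    contains a unit root (i.e. is non-stationary)."""
    stat, pvalue, _, _, crit, _ = adfuller(y)
    print(f"ADF statistic: {stat:.3f}, p-value: {pvalue:.4f}")
    for level, cv in crit.items():
        print(f"  critical value {level}: {cv:.3f}")
```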
Multi-Regime DGP
We evaluate the various time series forecasting models on a simulated univariate non-stationary dataset. Specifically, we assume the existence of a discrete latent variable on which a conditionally stationary random distribution exists. Such a model is dynamically representative of the types of noisy time series generated by many applications, such as electricity demand, motor neuron events from electroencephalography (EEG) data [23], and channel detection in cognitive radio [25]. Indeed, regimes are ubiquitous in industrial applications such as recession periods in economic data and on-peak and off-peak periods in traffic and electricity demand.
To test that the time series is indeed non-stationary, we accept the Null hypothesis of the augmented Dickey-Fuller test at the 99% confidence level, which states that the AR model contains a unit root and is thus non-stationary. The test statistic is −3.17 and the p-value is 0.021 (the critical values are 1%: -3.434, 5%: -2.863, and 10%: -2.568). We choose a sequence length of p = 30 based on the PACF (not shown as the DGP is known).
Figure 4 compares the performance of the various forecasting networks. The ten-step ahead forecasts are compared in ascending order of the number of hidden layers: plain RNNs, α-RNNs, α_t-RNNs, GRUs and LSTMs. The figure shows that stationary models such as the plain RNN and the α-RNN inadequately capture the regime change, with the peaks being severely underestimated. The latter effect is symptomatic of a model which has learned the auto-regressive structure in the lower regime, to the detriment of the other. All methods produce significant forecast error at the regime transition. We further observe minor differences in the performance of the GRU versus the α_t-RNN model, suggesting that the reset gate provides marginal effect for this dataset. We also observe no additional benefit in maintaining an additional cell memory. See Section 5 for further discussion on the cellular memory.
Table 2: The ten-step ahead forecasting models for regime switching data are compared for various architectures using time series cross-validation.
Table 2 compares the average MSEs and their standard deviations for each ten-step ahead forecasting model using time series cross-validation with ten folds. To assess overfitting, the MSEs over the training periods are also provided for comparison of in- and out-of-distribution model performance. The stationary models are observed to exhibit the worst MSE: the plain RNN and the α-RNN are similar in performance, the latter alleviating the vanishing gradient problem. The α_t-RNN and the GRU compare in performance, suggesting that the effect of the reset gate in the latter is marginal.
Figure 4: The ten-step ahead forecasts are compared for various recurrent architectures over a regime switching dataset whose regime alternates every 100 observations.
Short-term climate forecasting
The Jena climate modeling dataset was recorded at the Weather Station operated by the Max Planck Institute for Biogeochemistry in Jena, Germany. 14 different quantities (such as air temperature, atmospheric pressure, humidity, wind direction etc) were recorded every 10 minutes, over several years. The dataset contains 420,551 observations covering the period 2009-2016 [19]. We demonstrate how the different networks forecast the temperature using all lagged observed variables. Each covariate in the training and the test set is normalized using the moments of the training data only so as to avoid look-ahead bias or introduce a bias in the test data.
We reject the Null hypothesis of the augmented Dickey-Fuller test at the 99% confidence level for each covariate in favor of the alternative hypothesis that the data is stationary (contains no unit roots). The largest test statistic is −3.81841 and the p-value is 0.002 (the critical values are 1%: -3.431, 5%: -2.862, and 10%: -2.567). However, we observe some evidence of cyclical memory in some of the covariates, as seen in Figure 5.
We choose a sequence length of p = 20 based on the PACF and perform a ten-step ahead forecast. Figure 6 compares the performance of the various forecasting networks and shows that stationary models such as the plain RNN and the α-RNN adequately capture the temperature dynamics, even when the forecasting date moves further from the training period -this is expected because the partial autocorrelation is stationary and there is hence no secular drift in the error.
Viewing the results of time series cross-validation, using the first 23,971 observations, in Table 3, we observe minor differences in the performance of the GRU versus the α_t-RNN, suggesting that the reset gate provides no benefit for this dataset. In this case, we observe no additional benefit in the LSTM.
Figure 6: The ten-step ahead forecasts of temperature using the Jena weather data with MSEs shown in parentheses.
Electricity consumption
N = 30000 observations of hourly power system loads are provided over the period from January 1st 2008 to June 20th 2011 for the DK2 bidding zone collected in the open power systems time series data [28]. The consumption time series is chosen as it exhibits both short term cyclical patterns and longer-term trends. However, while these cyclical patterns correspond to peak and off-peak consumption periods, the transitions are gradual. Without attempting to deseasonalize the data nor de-trend it, we apply the various architectures as in the previous experiment.
We reject the Null hypothesis of the augmented Dickey-Fuller test at the 99% confidence level in favor of the alternative hypothesis that the data is stationary (contains no unit roots). The test statistic is −10.991 and the p-value is 0 (the critical values are 1%: -3.431, 5%: -2.862, and 10%: -2.567).
Figure 7: The PACF of the electricity load data exhibits seasonality.
The PACF in Figure 7 is observed to exhibit seasonality. We choose a sequence length of p = 30 and perform a ten-step ahead forecast to highlight the limitations of not including high order lags. Figure 8 compares the performance of the various networks and shows that the plain RNN performs poorly, whereas the α_t-RNN better captures the load dynamics. From Table 6, we further observe relatively minor differences in the performance of the GRU versus the α_t-RNN, suggesting that the reset gate provides no benefit. We also observe no additional benefit in the LSTM.
Figure 8: The ten-step ahead forecasts are compared for various architectures over the hourly electricity consumption dataset.
Bitcoin forecasting
One minute snapshots of USD denominated Bitcoin mid-prices are captured from Coinbase over the period from January 1st to November 10th, 2018. We demonstrate how the different networks forecast Bitcoin prices using lagged observations of prices. The predictor in the training and the test set is normalized using the moments of the training data only, so as to avoid look-ahead bias or introduce a bias in the test data. We accept the Null hypothesis of the augmented Dickey-Fuller test, as we cannot reject it at even the 90% confidence level. The data is therefore non-stationary (contains at least one unit root). The largest test statistic is −2.094 and the p-value is 0.237 (the critical values are 1%: -3.431, 5%: -2.862, and 10%: -2.567). While the partial autocovariance structure is expected to be time dependent, we observe a short memory of only four lags by estimating the PACF over the entire history (see Figure 9). We choose a sequence length of p = 4 based on the PACF and perform a four-step ahead forecast. We comment in passing that there is little, if any, merit in forecasting beyond this time horizon given the largest significant lag indicated by the PACF. Figure 10 compares the performance of the various forecasting networks and shows that stationary models such as the plain RNN and the α-RNN least capture the price dynamics; this is expected because the partial autocorrelation is non-stationary.
Viewing the results of time series cross-validation, using the first 30,000 observations, in Table 5, we observe minor differences in the performance of the LSTM and GRU versus the α_t-RNN, suggesting that the reset gate and the extra cellular memory in the LSTM provide negligible benefit for this dataset. In this case, we observe very marginal additional benefit in the LSTM, yet the complexity of the latter is approximately 50x that of the α_t-RNN.
Figure 10: The four-step ahead forecasts of the minute snapshot Bitcoin prices (USD) with MSEs shown in parentheses. Note that the prices have been standardized.
High Frequency Trading Data
Our dataset consists of N = 1, 033, 468 observations of tick-by-tick Volume Weighted Average Prices (VWAPs) of CME listed ESU6 level II data over the month of August 2016 [12,10].
We reject the Null hypothesis of the augmented Dickey-Fuller test at the 99% confidence level in favor of the alternative hypothesis that the data is stationary (contains no unit roots). The test statistic is −5.243 and the p-value is 7.16 × 10^{-6} (the critical values are 1%: -3.431, 5%: -2.862, and 10%: -2.567). The PACF in Figure 11 is observed to exhibit a cut-off at approximately 23 lags. We therefore choose a sequence length of p = 23 and perform a ten-step ahead forecast. Note that the time-stamps of the tick data are not uniform and hence a step refers to a tick. Figure 12 compares the performance of the various networks and shows that the plain RNN performs poorly, whereas the α_t-RNN better captures the VWAP dynamics. From Table 6, we further observe relatively minor differences in the performance of the GRU versus the α_t-RNN, again suggesting that the reset gate and the extra cellular memory in the LSTM provide no benefit. In this case, we find that the GRU is approximately 10x the complexity of the α_t-RNN with very marginal benefit.
Figure 12: The ten-step ahead forecasts of VWAPs are compared for various architectures using the tick-by-tick dataset.
Conclusion
Industrial forecasting has entered an era of unprecedented growth in the size and complexity of data which require new modeling methodologies. This paper presented a general class of exponential smoothed recurrent neural networks (RNNs) which are well suited to modeling nonstationary dynamical systems arising in industrial applications such as electricity load management and financial risk and trading. In particular, we demonstrated how they characterize the nonlinear partial autocorrelation structure of time series and directly capture dynamic effects such as seasonality and regime changes. Application of exponentially smoothed RNNs to electricity load forecasting, weather data and financial time series, such as minute level Bitcoin prices and CME futures tick data, highlighted the efficacy of exponential smoothing for multi-step time series forecasting. In all of these examples, we show that exponentially smoothed RNNs are well suited to forecasting, being much simpler than more complex architectures such as GRUs and
LSTMs, yet retaining the most important aspects needed for forecasting non-stationary series. These methods scale to large numbers of covariates and complex data. The experimental design and architectural parameters, such as the predictive horizon and model parameters, can be determined by simple statistical tests and diagnostics, without the need for extensive parameter optimization. Moreover, unlike traditional time series methods such as ARIMA models, these methods are shown to be unconditionally stable without the need to pre-process the data.
A Time Series Modeling Definitions
Definition A.1 (Time series). A time series is one observation of a stochastic process, over a specific interval: {y_t}_{t=1}^N.

Definition A.2 (Autocovariance). The j-th autocovariance of a time series is γ_{jt} := E[(y_t − µ_t)(y_{t−j} − µ_{t−j})], where µ_t := E[y_t].
Definition A.3 (Covariance (weak) stationarity). A time series is weak (or wide-sense) covariance stationary if it has time-constant mean and autocovariances of all orders:

µ_t = µ, ∀t;   γ_{jt} = γ_j, ∀t.
As we've seen, this implies that γ j = γ −j : the autocovariances depend only on the interval between observations, but not the time of the observations.
Definition A.4 (Autocorrelation). The j-th autocorrelation, τ_j, is just the j-th autocovariance divided by the variance:

τ_j = γ_j / γ_0.   (23)
Definition A.5 (White noise). White noise, φ_t, is i.i.d. error which satisfies all three conditions:

1. E[φ_t] = 0, ∀t;
2. V[φ_t] = σ², ∀t; and
3. φ_t and φ_s are independent for t ≠ s.
Gaussian white noise just adds a normality assumption to the error. White noise error is often referred to as a "disturbance", "shock" or "innovation" in the time series literature.
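As a concrete illustration of Definitions A.2–A.5, the following is a minimal sketch (not from the paper) that simulates Gaussian white noise and computes sample autocovariances and autocorrelations; the sample size, variance, and seed are arbitrary choices.

```python
# A minimal sketch illustrating Definitions A.2-A.5 with simulated Gaussian white noise.
import numpy as np

rng = np.random.default_rng(42)
N, sigma = 10_000, 1.0
phi = rng.normal(0.0, sigma, size=N)        # Gaussian white noise: mean 0, variance sigma^2, independent

def sample_autocovariance(y, j):
    """gamma_j = E[(y_t - mu)(y_{t-j} - mu)], estimated with the sample mean."""
    mu = y.mean()
    return np.mean((y[j:] - mu) * (y[:len(y) - j] - mu))

gamma0 = sample_autocovariance(phi, 0)      # sample variance
for j in range(1, 4):
    gamma_j = sample_autocovariance(phi, j)
    tau_j = gamma_j / gamma0                # autocorrelation, Equation (23)
    print(f"lag {j}: gamma = {gamma_j:+.4f}, tau = {tau_j:+.4f}")  # approximately 0 for white noise
```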
B Proof of Partial Autocovariance Property of α−RNNs
Proof. Let us first consider an RNN(1) process, i.e. α = p = 1. The lag-1 partial autocovariance is

γ̃_1 = E[y_t − µ, y_{t−1} − µ] = E[ŷ_t + u_t − µ, y_{t−1} − µ],   (24)

and using the RNN(1) model with, for simplicity, a single recurrence weight φ:

ŷ_t = σ(φ y_{t−1}),   (25)

gives

γ̃_1 = E[σ(φ y_{t−1}) + u_t − µ, y_{t−1} − µ] = E[y_{t−1} σ(φ y_{t−1})],   (26)

where we have assumed µ = 0 in the second part of the expression.
Continuing with the lag-2 partial autocovariance gives

γ̃_2 = E[y_t − P(y_t | y_{t−1}), y_{t−2} − P(y_{t−2} | y_{t−1})],   (27)

and P(y_t | y_{t−1}) is approximated by the RNN(1):

ŷ_t = P(y_t | y_{t−1}) = σ(φ y_{t−1}),   (28)

and P(y_{t−2} | y_{t−1}) is approximated by the backward RNN(1):

ŷ_{t−2} = P(y_{t−2} | y_{t−1}) = σ(φ(ŷ_{t−1} + u_{t−1})),   (29)

so that we see, crucially, that ŷ_{t−2} depends on u_{t−1} but not on u_t. Substituting the backward RNN(1) and u_t = y_t − ŷ_t into Equation 27 gives

γ̃_2 = E[u_t, y_{t−2} − σ(φ(ŷ_{t−1} + u_{t−1}))],   (30)

and y_{t−2} − P(y_{t−2} | y_{t−1}) hence depends on {u_{t−1}, u_{t−2}, . . . }. Thus we have that γ̃_2 = 0.

Now suppose α ∈ (0, 1). Repeating the above, we have the backward α-RNN:

ŷ_{t−2} = P(y_{t−2} | y_{t−1}) = σ(φ(α(ŷ_{t−1} + u_{t−1}) + (1 − α)h̃_t)),   (31)

and we see that, by virtue of the dependency of h̃_t on y_t and hence on ŷ_t + u_t, the lag-2 partial autocovariance is no longer zero.
Now consider the lag-2 partial autocovariance of the RNN(2) process, again with α = 1. Using the backward RNN(2) model

ŷ_{t−2} = P(y_{t−2} | y_{t−1}) = σ(φ σ(φ(ŷ_t + u_t) + y_{t−1})),   (32)

which depends on u_t, the lag-2 partial autocovariance

γ̃_2 = E[u_t, y_{t−2} − σ(φ σ(φ(ŷ_t + u_t) + ŷ_{t−1} + u_{t−1}))]   (33)

is not zero. It follows by induction that the lag-s partial autocovariances

γ̃_s = E[u_t, y_{t−s} − P(y_{t−s} | y_{t−s+1}, . . . , y_{t−1})] = 0,  s > p,   (34)

since P(y_{t−s} | y_{t−s+1}, . . . , y_{t−1}) is approximated by the backward RNN(p):

ŷ_{t−s} = P(y_{t−s} | y_{t−s+1}, . . . , y_{t−1})   (35)
        = σ(φ σ(φ σ(. . . φ σ(ŷ_{t−s+p} + u_{t−s+p}) + . . . + ŷ_{t−s+p−1} + u_{t−s+p−1}) + . . . ) + ŷ_{t−1} + u_{t−1}).   (36)

Thus the PACF for an α-RNN(p) has a cut-off at p lags when α = 1. With long memory, i.e. α ∈ (0, 1), τ̃_s ≠ 0 for s > p, and hence the minimum memory of the α-RNN(p) model with α ∈ (0, 1] is p.
Such a property can be used to identify the order of the RNN model from the estimated PACF and hence determines the sequence length in the α-RNN which is guaranteed to have at least the same order for α ∈ (0, 1].
C Proof of RNN stability theorem
Proof. The proof proceeds by induction. We first consider the RNN(1) model:
y_t = Φ^{−1}(L)[u_t] = (1 − σ(φL))^{−1}[u_t],   (37)
where for ease of exposition we have set W_y = 1, U_h = W_x = φ ∈ R, and b_h = b_y = 0 without loss of generality. Expressing this as an infinite dimensional non-linear moving average model,

y_t = (1 − σ(φL))^{−1}[u_t] = Σ_{j=0}^∞ (σ(φL)[u_t])^j,   (38)
and the infinite sum will be stable when the (σ(·)) j terms do not grow with j, i.e. |σ| < 1 for all values of φ and y t−1 . In particular, the choice tanh satisfies the requirement on σ. For higher order models, we follow an induction argument and show first that for a RNN(2) model we obtain
y_t = (1 − σ(φσ(φL²) + φL))^{−1}[u_t] = Σ_{j=0}^∞ σ^j(φσ(φL²) + φL)[u_t],
which again is stable if |σ| < 1 and it follows for any model order that the stability condition holds.
It follows that lagged unit impulses of the data strictly decay with the order of the lag when |σ| < 1. Again by induction, at lag 1, the output from the hidden layer is
h_t = σ(φ1 + φ0) = σ(φ1).   (39)
The absolute value of each component of the hidden variable under a unit vector impulse at lag 1 is strictly less than 1:
|h_t|_j = |σ(φ1)|_j < 1,   (40)

if |σ(x)| < 1 and each element of φ1 is finite. Additionally, if σ is strictly monotone increasing then |h_t|_j under a lag-two unit innovation is strictly less than |h_t|_j under a lag-one unit innovation:

|σ(φ1)|_j > |σ(φσ(φ1))|_j.   (41)
The choice of tanh or sigmoid activation is therefore suitable for RNNs with finite weights and input.
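The following is a minimal numerical sketch (not from the paper) of the stability property above: with a bounded activation such as tanh, the hidden-state response of a simple RNN(1) recursion to a unit impulse stays in (−1, 1) and each lagged response is strictly smaller than the previous one, as in Equation (41). The weight value and horizon are arbitrary choices.

```python
# A minimal sketch: bounded, decaying unit-impulse response of a tanh RNN(1) recursion.
import numpy as np

phi, T = 1.5, 10                      # arbitrary finite weight and horizon
x = np.zeros(T); x[0] = 1.0           # unit impulse at t = 0
h = 0.0
for t in range(T):
    h = np.tanh(phi * h + phi * x[t]) # h_t = sigma(phi * h_{t-1} + phi * x_t)
    # |h_t| < 1 for all t, and after the impulse the magnitudes are strictly decreasing
    print(t, f"{h:+.6f}")
```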
D Regime Switching DGP
The simulated regime switching data is given by:
y_{t+1} = µ_t + Σ_{i=1}^{30} φ_{i,t} y_{t−i+1} + ε_t,   ε_t ~ N(0, σ_n²),   (42)

where σ_n = 1 × 10^{−4}, y_0 = 0.38, and, for j = 0, . . . , 150,

φ_{i,t} = φ_0,  t = jL, . . . , (j + 1)L,  mod(j, 2) = 0,
φ_{i,t} = φ_1,  t = jL, . . . , (j + 1)L,  mod(j, 2) ≠ 0,

with lag parameters φ_0 = 0.02, φ_1 = 0.01, regime length L = 100, and

µ_t = c + µ(−1)^j,  t = jL, . . . , (j + 1)L,  j = 0, . . . , 150,

where µ = 0.1 and c = 0.14.
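A minimal simulation sketch of this data generating process (not from the paper) is given below. The random seed is an arbitrary choice, and the total length follows the regime specification above (j = 0, . . . , 150 with L = 100) rather than the N = 20000 used in the experiments.

```python
# A minimal simulation sketch of the regime-switching AR(30) DGP of Appendix D.
import numpy as np

rng = np.random.default_rng(0)

sigma_n, y0 = 1e-4, 0.38
phi0, phi1 = 0.02, 0.01
L_regime, n_regimes, p = 100, 151, 30      # j = 0, ..., 150

T = L_regime * n_regimes
y = np.zeros(T + 1)
y[0] = y0

for t in range(T):
    j = t // L_regime                          # current regime index
    phi = phi0 if j % 2 == 0 else phi1         # lag coefficient switches with regime parity
    mu_t = 0.14 + 0.1 * (-1) ** j              # regime-dependent level: mu_t = c + mu * (-1)^j
    lags = y[max(0, t - p + 1): t + 1][::-1]   # y_t, y_{t-1}, ..., up to 30 lags
    y[t + 1] = mu_t + phi * lags.sum() + rng.normal(0.0, sigma_n)
```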
Figure 1: The fitted partial correlogram of univariate data generated by various additive noise α-RNN(3) models.

See Section 5 for a discussion of the differences between the α_t-RNN and the GRU and LSTM.

Figure 2: An illustrative example of the response of an α-RNN in comparison with a plain RNN. (a) Comparison of model responses in the presence of shocks to the inputs. (b) Response of α_t to shocks in the input. The RNN model is chosen for illustrative purposes to be a RNN(3) model, i.e. with a sequence length of 3.

Figure 3 compares the PACFs of various architectures on the test set.

Figure 3: The PACFs are compared for various recurrent architectures over a local level seasonality dataset with a periodicity of 24 hours.
N = 20000 observations of the univariate time series are generated from a two-regime AR(30) model. The two-regime AR(30) model is a linear model of the form given in Appendix D.

Figure 5: The partial autocorrelogram (PACF) for each of the covariates (features) used in the model. Some of the covariates exhibit a monotonically decaying PACF while others exhibit oscillatory decay, with positive and negative partial autocorrelations.

Figure 9: The partial autocorrelogram (PACF) for 1 minute snapshots of Bitcoin mid-prices (USD) over the period 2018-01-01 to 2018-11-10.

Figure 11: The PACF of the tick-by-tick VWAP of ESU6 over the month of August 2016.
Table 3: The ten-step ahead climate forecasts are compared for various architectures using time series cross-validation.

Table 4: The ten-step ahead forecasting models for electricity load are compared for various architectures using time series cross-validation.

Table 5: The four-step ahead Bitcoin forecasts are compared for various architectures using time series cross-validation. The half-life of the α-RNN is found to be 1.077 minutes (α = 0.4744).

Table 6: The ten-step ahead forecasting models for VWAPs are compared for various architectures using time series cross-validation. The half-life of the α-RNN is found to be 2.398 periods (α = 0.251).
Footnotes:
1. The sequence of features is from the same time series as the predictor.
2. Long memory refers to autoregressive memory beyond the sequence length. This is also sometimes referred to as "stateful". For avoidance of doubt, we are not suggesting that the α-RNN has an additional cellular memory, as in LSTMs.
3. The assumption of Gaussian white noise is convenient but not necessary -- any type of white noise is possible.
4. Although the model has no memory in the limit α = 0.
5. For multivariate data, the test should be applied to each covariate.
6. Note that the weights have not been fitted here; we are merely observing the effect of smoothing on the hidden state for the simplest choice of parameter values.
7. Note that the value of α is initially set to one since no long-term memory is needed. Hence the response to the second shock at time t = 12 is more insightful.
(a) The forecasts for each architecture and the observed out-of-sample time series. (b) The errors for each architecture over the same test period.
References

[1] Abadi, M., P. Barham, J. Chen, Z. Chen, A. Davis, J. Dean, et al. (2016). TensorFlow: A System for Large-scale Machine Learning. In Proceedings of the 12th USENIX Conference on Operating Systems Design and Implementation, OSDI'16, pp. 265-283.
[2] Bao, W., J. Yue, and Y. Rao (2017). A deep learning framework for financial time series using stacked autoencoders and long-short term memory. PLOS ONE 12 (7), 1-24.
[3] Bayer, J. (2015). Learning sequence representations. Technische Universität München.
[4] Borovkova, S. and I. Tsiamas (2019). An ensemble of LSTM neural networks for high-frequency stock market classification. Journal of Forecasting 38 (6), 600-619.
[5] Borovykh, A., S. Bohte, and C. W. Oosterlee (2017). Conditional time series forecasting with convolutional neural networks.
[6] Box, G. and G. M. Jenkins (1976). Time Series Analysis: Forecasting and Control. Holden-Day.
[7] Chen, S. and L. Ge (2019). Exploring the attention mechanism in LSTM-based Hong Kong stock price movement prediction. Quantitative Finance 19 (9), 1507-1515.
[8] Chen, Z., J. Zhang, M. Arjovsky, and L. Bottou (2019). Symplectic recurrent neural networks.
[9] Chung, J., Ç. Gülçehre, K. Cho, and Y. Bengio (2014). Empirical evaluation of gated recurrent neural networks on sequence modeling. CoRR abs/1412.3555.
[10] Dixon, M. (2017). Sequence classification of the limit order book using recurrent neural networks. Journal of Computational Science.
[11] Dixon, M. and J. London (2021). Time series modeling with α-RNNs. Technical report.
[12] Dixon, M. F., N. G. Polson, and V. O. Sokolov (2019). Deep learning for spatio-temporal modeling: Dynamic traffic flows and high frequency trading. Applied Stochastic Models in Business and Industry 35 (3), 788-807.
[13] Gers, F. A., D. Eck, and J. Schmidhuber (2001). Applying LSTM to Time Series Predictable through Time-Window Approaches, pp. 669-676. Berlin, Heidelberg: Springer Berlin Heidelberg.
[14] Glorot, X. and Y. Bengio (2010). Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the International Conference on Artificial Intelligence and Statistics (AISTATS'10). Society for Artificial Intelligence and Statistics.
[15] Graves, A. (2013). Generating sequences with recurrent neural networks.
[16] Hamilton, J. (1994). Time series analysis. Princeton, NJ: Princeton Univ. Press.
[17] Harvey, A. (1990). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge University Press.
[18] Hochreiter, S. and J. Schmidhuber (1997, November). Long short-term memory. Neural Comput. 9 (8), 1735-1780.
[19] Jena (2016). Jena climate data. Collected by the Jena weather station of the Max Planck Institute for Biogeochemistry. url: https://www.bgc-jena.mpg.de/wetter.
[20] Kirchgässner, G. and J. Wolters (2007). Introduction to Modern Time Series Analysis. Berlin, Heidelberg: Springer-Verlag Berlin Heidelberg.
[21] Li, D. and K. Zhu (2020). Inference for asymmetric exponentially weighted moving average models. Journal of Time Series Analysis 41 (1), 154-162.
[22] Lim, S. H., L. T. Giorgini, W. Moon, and J. S. Wettlaufer (2019). Predicting rare events in multiscale dynamical systems using machine learning.
[23] Lisi, G., D. Rivela, A. Takai, and J. Morimoto (2018). Markov switching model for quick detection of event related desynchronization in EEG. Frontiers in Neuroscience 12, 24.
[24] McDermott, P. L. and C. K. Wikle (2018). Deep echo state networks with uncertainty quantification for spatio-temporal forecasting.
[25] Mikaeil, A. M., B. Guo, X. Bai, and Z. Wang (2014, Nov). Hidden Markov and Markov switching model for primary user channel state prediction in cognitive radio. In The 2014 2nd International Conference on Systems and Informatics (ICSAI 2014), pp. 854-859.
[26] Mäkinen, Y., J. Kanniainen, M. Gabbouj, and A. Iosifidis (2019). Forecasting jump arrivals in stock prices: new attention-based network architecture using limit order book data. Quantitative Finance 19 (12), 2033-2050.
[27] Niu, M. Y., L. Horesh, and I. Chuang (2019). Recurrent neural networks in the eye of differential equations.
[28] OPS (2019). Open power systems data. Total load excl. transmission system losses in DK2 in MW (DK 2 load actual net consumption tso). url: https://data.open-power-system-data.org/time_series.
[29] Pascanu, R., T. Mikolov, and Y. Bengio (2012). On the difficulty of training recurrent neural networks.
[30] Schmidhuber, J. and S. Hochreiter (1997). Long Short-Term Memory. Neural Comput 9 (8), 1735-1780.
[31] Sirignano, J. and R. Cont (2019). Universal features of price formation in financial markets: perspectives from deep learning. Quantitative Finance 19 (9), 1449-1459.
[32] Smyl, S. (2020). A hybrid method of exponential smoothing and recurrent neural networks for time series forecasting. International Journal of Forecasting 36 (1), 75-85. M4 Competition.
[33] Sutskever, I., O. Vinyals, and Q. V. Le (2014). Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger (Eds.), Advances in Neural Information Processing Systems 27, pp. 3104-3112. Curran Associates, Inc.
[34] Zheng, J., C. Xu, Z. Zhang, and X. Li (2017, March). Electric load forecasting in smart grids using long-short-term-memory based recurrent neural network. In 2017 51st Annual Conference on Information Sciences and Systems (CISS), pp. 1-6.
"On PFH and HF spectral invariants",
"On PFH and HF spectral invariants"
] | [
"Guanheng Chen "
] | [] | [] | In this note, we define the link spectral invariants by using the cylindrical formulation of the quantitative Heegaard Floer homology. We call them HF spectral invariants. We deduce a relation between the HF spectral invariants and the PFH spectral invariants by using closed-open morphisms and open-closed morphisms. For the sphere, we prove that the homogenized HF spectral invariants at the unit are equal to the homogenized PFH spectral invariants. Moreover, we show that the homogenized PFH spectral invariants are quasimorphisms. | null | [
"https://export.arxiv.org/pdf/2209.11071v4.pdf"
] | 252,438,703 | 2209.11071 | f7a5be868104109c259d1b34636bad5ad230bd47 |
On PFH and HF spectral invariants
Guanheng Chen
On PFH and HF spectral invariants
In this note, we define the link spectral invariants by using the cylindrical formulation of the quantitative Heegaard Floer homology. We call them HF spectral invariants. We deduce a relation between the HF spectral invariants and the PFH spectral invariants by using closed-open morphisms and open-closed morphisms. For the sphere, we prove that the homogenized HF spectral invariants at the unit are equal to the homogenized PFH spectral invariants. Moreover, we show that the homogenized PFH spectral invariants are quasimorphisms.
Introduction and main results
Let Σ be a closed surface with genus g and ω a volume form of volume 1 (of course, the number 1 can be replaced by any positive number). Given a volume-preserving diffeomorphism ϕ : Σ → Σ, M. Hutchings defines a version of Floer homology for (Σ, ω, ϕ) which he calls periodic Floer homology [18,20], abbreviated as PFH. called PFH spectral invariants [4] (also see [6,16] for the non-Hamiltonian case).
A link L on Σ is a disjoint union of simple closed curves. Under certain monotone assumptions, D. Cristofaro-Gardiner, V. Humilière, C. Mak, S. Seyfaddini and I. Smith show that the Lagrangian Floer homology of a Lagrangian pair (Sym d ϕ H (L), Sym d L) in Sym d Σ, denoted by HF (Sym d Σ, Sym d L, Sym d ϕ H ), is well-defined and non-vanishing [7]. Here ϕ H is a Hamiltonian symplecticmorphism. They call the Floer homology HF (Sym d Σ, Sym d L, Sym d ϕ H ) quantitative Heegaard Floer homology, abbreviated as QHF. For any two different Hamiltonian symplecticmorphisms, the corresponding QHF are canonically isomorphic to each other. Let HF (Sym d L) denote an abstract group that is a union of all the QHF defined by Hamiltonian symplecticmorphisms, modulo the canonical isomorphisms. Using R. Leclercq and F. Zapolsky's general results [31], they define a set of numerical invariants parameterized by QFH where η is a fixed nonnegative constant. These numerical invariants are called link spectral invariants. Even though these two spectral invariants come from different Floer theories, they satisfy many parallel properties. So it is natural to study whether they have any relation. To this end, our strategy is to construct morphisms between these two Floer homologies. Because these two Floer theories are defined by counting holomorphic curves in manifolds of different dimensions, it is hard to define the morphisms directly. To overcome this issue, the author follows R. Lipshitz's idea [29] to define a homology by counting holomorphic curves in a 4-manifold, denoted by HF (Σ, ϕ H (L), L) [13]. Moreover, the author proves that there is an isomorphism [1]. A version of (1.2) also has been constructed by V. Colin, P. Ghiggini, and K. Honda [15] for a different setting. Using these morphisms, we obtain a partial result on the relation between PFH spectral invariants and HF spectral invariants [13].
In this note, we define quantum product structures and spectral invariants for HF(Σ, ϕ_H(L), L), in analogy with the Lagrangian Floer homology. As for the QHF, for any ϕ_H the group HF(Σ, ϕ_H(L), L) is canonically isomorphic to an abstract group HF(Σ, L). The spectral invariants defined by HF(Σ, ϕ_H(L), L) are denoted by c_{L,η}. To distinguish them from the link spectral invariants c^link_{L,η}, we call c_{L,η} the HF spectral invariants instead. Via the isomorphism (1.1), we know that c_{L,η} is equivalent to c^link_{L,η} in a certain sense (see (2.16)).
The purpose of this paper is to study the properties of c_L and to try to understand the relations between c_L and c^pfh_d. Before we state the main results, let us recall the assumptions on a link.

Definition 1.1. Fix a nonnegative constant η. Let L = ∪_{i=1}^d L_i be a disjoint union of d simple closed curves on Σ. We call L a link on Σ. We say a link L is η-admissible if it satisfies the following properties:
A.1 The integer d satisfies d = k + g, where g is the genus of Σ and k > 1. The union ∪_{i=1}^k L_i is a disjoint union of contractible simple closed curves. For k + 1 ≤ i ≤ d, L_i is the cocore of a 1-handle, and for each 1-handle we have exactly one corresponding L_i.

A.2 We require that Σ − L = ∪_{i=1}^{k+1} B̊_i. Let B_i be the closure of B̊_i. Then B_i is a disk for 1 ≤ i ≤ k and B_{k+1} is a planar domain with 2g + k boundary components. For 1 ≤ i ≤ k, the circle L_i is the boundary of B_i.

A.3 B_i ∩ B_j = ∅ for 1 ≤ i < j ≤ k.

A.4 For 1 ≤ i < j ≤ k, we have ∫_{B_i} ω = ∫_{B_j} ω = λ. Also, λ = 2η(2g + k − 1) + ∫_{B_{k+1}} ω.
A picture of an admissible link is shown in Figure 1. Note that if L is admissible, so is ϕ(L), where ϕ is any Hamiltonian symplecticmorphism. We assume that the link is η-admissible throughout.

Figure 1: The red circles are the admissible link.

Remark 1.1. To define HF(Σ, ϕ_H(L), L), we need stronger assumptions on L than in [7], for technical reasons.
In the first part of this note, we study the properties of the spectral invariants c_{L,η}. The results are summarized in the following theorem; these properties are parallel to those in [7,31].

Remark 1.2. At this moment, we have not confirmed whether the isomorphism (1.1) is canonical, but we believe that it is true from the viewpoint of the tautological correspondence. Also, we do not know whether the product µ^2 agrees with the usual quantum product in monotone Lagrangian Floer homology. So we cannot deduce Theorem 1 from (1.1) and the results in [7,31] directly, but the methods in the proof of Theorem 1 are basically the same as in [7,31].
In [13], we define the closed-open morphisms (1.2). We use the same techniques to construct a "reverse" of the closed-open morphisms, called open-closed morphisms. Theorem 2. Let L be an admissible link and ϕ H a d-nondegenerate Hamiltonian symplecticmorphism. Fix a reference 1-cycle γ 0 with degree d and a base point x ∈ L. Let Z ref ∈ H 2 (W, x H , γ 0 ) be a reference relative homology class. Let (W, Ω H , L H ) be the open-closed symplectic cobordism defined in Section 5. Then for a generic admissible almost complex structure J ∈ J (W, Ω H ), the triple (W, Ω H , L H ) induces a homomorphism
(OC Z ref (W, Ω H , L H ) J ) * : HF (Σ, ϕ H (L), L, x) J → P F H(Σ, ϕ H , γ 0 ) J
satisfying the following properties:
• (Partial invariance) Suppose that ϕ H , ϕ G satisfy the following conditions: (see Definition 2.1) ♠.1 Each periodic orbit of ϕ H with degree less than or equal d is either d-negative elliptic or hyperbolic.
♠.2 Each periodic orbit of ϕ G with degree less than or equal d is either d-positive elliptic or hyperbolic.
Fix reference relative homology classes
Z ref ∈ H 2 (X, γ 1 , γ 0 ), Z 0 ∈ H 2 (W, x G , γ 0 ) and Z 1 ∈ H 2 (W, x H , γ 1 ) satisfying A ref #Z 0 = Z ref #Z 1 ,
where A ref is the class defining the continuous morphism I H,G 0,0 . Then for any generic admissible almost complex structures J H ∈ J (W, Ω H ) and J G ∈ J (W, Ω G ), we have the following commutative diagram:
HF_*(Σ, ϕ_H(L), L, x)_{J_H}  --(OC_{Z_1}(W, Ω_H, L_H)_{J_H})_*-->  PFH_*(Σ, ϕ_H, γ_1)_{J_H}
        |                                                                 |
  I^{H,G}_{0,0}                                                 PFH^{sw}_{Z_ref}(X, Ω_X)
        |                                                                 |
        v                                                                 v
HF_*(Σ, ϕ_G(L), L, x)_{J_G}  --(OC_{Z_0}(W, Ω_G, L_G)_{J_G})_*-->  PFH_*(Σ, ϕ_G, γ_0)_{J_G}

Here PFH^{sw}_{Z_ref}(X, Ω_X)
is the PFH cobordism map induced by symplectic cobordism (2.9) and I H,G 0,0 is the continuous morphism on QHF defined in Section 3.
• (Non-vanishing) There are nonzero classes e ♦ ∈ HF (Σ, L) and σ x ♦H ∈ P F H(Σ, ϕ H , γ x H ) such that if ϕ H satisfies the condition (♠.2), then we have
OC Z ref (W, ϕ H , L H ) * ((j x H ) −1 (e ♦ )) = σ x ♦H ,
where j x H is the canonical isomorphism (2.14). In particular, the open-closed morphism is non-vanishing.
There is a special class e ∈ HF(Σ, L) called the unit (see Definition 3.7). Suppose that the link L is 0-admissible, and define spectral invariants c^−_L(H) and c^+_L(H) from c_{L,η=0}. Then

c^pfh_d(H, σ^x_{♦H}, γ^x_H) + ∫_0^1 H_t(x) dt ≤ c^−_L(H) ≤ c^+_L(H) ≤ c^pfh_d(H, σ^x_{♥H}, γ^x_H) + ∫_0^1 H_t(x) dt,

where ∫_0^1 H_t(x) dt is short for Σ_{i=1}^d ∫_0^1 H_t(x_i) dt. In addition,

c^pfh_d(H, σ^x_{♥H}, γ^x_H) + ∫_0^1 H_t(x) dt − 1 ≤ c^−_L(H) ≤ c^+_L(H) ≤ c^pfh_d(H, σ^x_{♥H}, γ^x_H) + ∫_0^1 H_t(x) dt.
Moreover, for any ϕ ∈ Ham(S 2 , ω), we have
µ L,η=0 (ϕ, e) = µ L,η=0 (ϕ, e ♦ ) = µ pf h d (ϕ), (1.3)
where µ_{L,η=0}, µ^pfh_d are the homogenized spectral invariants (see Section 6 for details). In particular, for any two 0-admissible links L, L′ with the same number of components, we have µ_{L,η=0}(ϕ, e) = µ_{L′,η=0}(ϕ, e) = µ_{L,η=0}(ϕ, e_♦) = µ_{L′,η=0}(ϕ, e_♦).

Remark 1.3. For technical reasons, the cobordism maps on PFH are defined by using Seiberg-Witten theory [28] and the isomorphism "PFH=SWF" [30]. Nevertheless, the proof of Theorem 2 needs a holomorphic curve definition. The assumptions ♠.1, ♠.2 are used to guarantee that the PFH cobordism maps can be defined by counting holomorphic curves. According to the results in [11], the Seiberg-Witten definition agrees with the holomorphic curve definition in these special cases. We believe that the assumptions ♠.1, ♠.2 can be removed if one could define the PFH cobordism maps by purely holomorphic curve methods.
The terms c pf h d (H, σ x ♥H , γ x H )+ 1 0 H t (x)dt, c pf h d (H, σ x ♦H , γ x H )+
By Proposition 3.7 of [11], the conditions ♠.1, ♠.2 can be achieved by a C¹-perturbation of the Hamiltonian functions. More precisely, fix a metric g_Y on S¹ × Σ. For any δ > 0 and any Hamiltonian function H, there is a Hamiltonian function H′ such that

• ϕ_{H′} satisfies ♠.1 (respectively ♠.2);
• |H − H′| + |dH − dH′|_{g_Y} ≤ δ.
Even though Theorem 2 relies on the conditions ♠.1, ♠.2, the above estimates and the Hofer-Lipschitz property imply that Corollaries 1.2 and 1.3 hold for a general Hamiltonian function.
From [7], we know that the homogenized link spectral invariants are homogeneous quasimorphisms. We show that this is also true for µ pf h d (ϕ). Recall that a homogeneous quasimorphism on a group G is a map µ : G → R such that
1. µ(g n ) = nµ(g); 2. there exists a constant D = D(µ) ≥ 0, called the defect of µ, satisfying |µ(gh) − µ(g) − µ(h)| ≤ D.
Relevant results
The Calabi property in Theorem 1 in fact is an analogy of the "ECH volume property" for embedded contact homology, it was first discovered by D. Cristofaro-Gardiner, M. Hutchings, and V. Ramos [3]. Embedded contact homology (short for "ECH") is a sister version of the periodic Floer homology. The construction of ECH and PFH are the same. The only difference is that they are defined for different geometric structures. If a result holds for one of them, then one could expect that there should be a parallel result for another one. The Calabi property also holds for PFH. This is proved by O. Edtmair and Hutchings [16], also by D. Cristofaro-Gardiner, R. Prasad and B. Zhang [6] independently. The Calabi property for QHF is discovered in [7].
Recently, D. Cristofaro-Gardiner, V. Humilière, C. Mak, S. Seyfaddini and I. Smith show that the homogenized link spectral invariants satisfy the "two-terms Weyl law" for a class of automatous Hamiltonian functions [8] on the sphere. We believe that the HF spectral invariants agree with the link spectral invariants. If one could show that this is true, Corollary 1.3 implies that homogenized PFH spectral invariants agree with the homogenized link spectral invariants. This suggests that homogenized PFH spectral invariants should also satisfy the "two-terms Weyl law".
Preliminaries
Periodic Floer homology
In this section, we review the definition of twisted periodic Floer homology and PFH spectral invariants. For more details, please refer to [20,21,4,16].
Suppose that Σ is a closed surface and ω is a volume form of volume 1. Given a Hamiltonian function H : S¹_t × Σ → R, we have a unique vector field X_{H_t}, called the Hamiltonian vector field, satisfying the relation ω(X_{H_t}, ·) = d_Σ H_t. Let ϕ^t_H be the flow generated by X_{H_t}, i.e., ∂_t ϕ^t_H = X_{H_t} ∘ ϕ^t_H and ϕ^0_H = id. For each t, ϕ^t_H is a symplecticmorphism. The time-1 flow is denoted by ϕ_H := ϕ^1_H. A Hamiltonian function is called autonomous if it is t-independent.
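As a purely numerical illustration (not from the paper) of the defining relation ω(X_{H_t}, ·) = d_Σ H_t, the sketch below integrates a Hamiltonian flow on the flat torus R²/Z² with area form ω = dx ∧ dy, for which this convention gives X_H = (∂H/∂y, −∂H/∂x). The specific Hamiltonian is an arbitrary illustrative choice.

```python
# A minimal sketch: the time-1 Hamiltonian flow on the flat torus with omega = dx ^ dy.
import numpy as np
from scipy.integrate import solve_ivp

def H(t, x, y):
    # arbitrary illustrative (autonomous) Hamiltonian
    return np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

def X_H(t, z, eps=1e-6):
    x, y = z
    # finite-difference partial derivatives of H
    dHdx = (H(t, x + eps, y) - H(t, x - eps, y)) / (2 * eps)
    dHdy = (H(t, x, y + eps) - H(t, x, y - eps)) / (2 * eps)
    return [dHdy, -dHdx]          # X_H = (dH/dy, -dH/dx) for omega = dx ^ dy

# apply the time-1 map phi_H to a sample point (x0, y0)
sol = solve_ivp(X_H, (0.0, 1.0), [0.3, 0.7], rtol=1e-9, atol=1e-12)
phi_H_point = sol.y[:, -1] % 1.0  # project back to the torus
print(phi_H_point)
```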
Fix a symplecticmorphism ϕ. Define the mapping torus by
Y ϕ := [0, 1] t × Σ/(0, ϕ(x)) ∼ (1, x).
There is a natural vector field R := ∂ t and a closed 2-form ω ϕ on Y ϕ induced from the above quotient. The pair (dt, ω ϕ ) forms a stable Hamiltonian structure and R is the Reeb vector field. Let ξ := ker π * denote the vertical bundle of π : Y ϕ → S 1 .
A periodic orbit is a map γ : R/qZ → Y ϕ satisfying the ODE ∂ t γ(t) = R • γ(t). The number q ≥ 0 is called the period or degree of γ. Note that q is equal to the intersection number [γ] · [Σ].
A periodic orbit is called nondegenerate if the linearized return map does not have 1 as an eigenvalue. The nondegenerate periodic orbits are classified as either elliptic or hyperbolic according to the eigenvalues of linearized return maps. The symplecticmorphism ϕ is called d-nondegenerate if every closed orbit with degree at most d is nondegenerate.
Let γ be an elliptic periodic orbit with period q. We can find a trivialization of ξ such that the linearized return map is a rotation e i2πθt , where {θ t } t∈[0,q] is a continuous function with θ 0 = 0. The number θ = θ t | t=q ∈ R/Z is called the rotation number of γ (see [21] for details). The following definition explains the terminologies in the assumptions ♠.1, ♠.2.
Definition 2.1. (see [22] Definition 4.1) Fix d > 0. Let γ be an embedded elliptic orbit with degree q ≤ d.
• γ is called d-positive elliptic if the rotation number θ is in (0, q d ) mod 1.
• γ is called d-negative elliptic if the rotation number θ is in (− q d , 0) mod 1.
For our purpose, we assume that ϕ is Hamiltonian throughout (but the construction of PFH works for a general symplecticmorphism). Under the Hamiltonian assumption, we have the following diffeomorphism:

Ψ_H : S¹_t × Σ → Y_{ϕ_H},  (t, x) ↦ (t, (ϕ^t_H)^{−1}(x)).   (2.4)

It is easy to check that Ψ*_H(ω_{ϕ_H}) = ω + d(H_t dt) and (Ψ_H)_*(∂_t + X_H) = R.

PFH complex

Let H_2(Y_ϕ, γ_+, γ_−) denote the set of 2-chains Z in Y_ϕ with ∂Z = γ_+ − γ_−, modulo boundaries of 3-chains. We call an element Z ∈ H_2(Y_ϕ, γ_+, γ_−) a relative homology class. This is an affine space over

H_2(Y_ϕ; Z) ≅ Z[Σ] ⊕ (H_1(S¹) ⊗ H_1(Σ)).
For a relative homology class Z ∈ H 2 (Y ϕ , γ + , γ − ), Hutchings defines a topology index called J 0 index [19] that measure the topology complexity of the curves. Fix a trivialization τ of ξ. The J 0 index is given by the following formula:
J_0(Z) := −c_τ(ξ|_Z) + Q_τ(Z) + Σ_i Σ_{p=1}^{m_i − 1} CZ_τ(γ^p_{+,i}) − Σ_j Σ_{q=1}^{n_j − 1} CZ_τ(γ^q_{−,j}),
where c τ (ξ| Z ) is the relative Chern number, Q τ (Z) is the relative self-intersection number and CZ τ is the Conley-Zehnder index. There is another topological index called ECH index. It is defined by
I(Z) := c_τ(ξ|_Z) + Q_τ(Z) + Σ_i Σ_{p=1}^{m_i} CZ_τ(γ^p_{+,i}) − Σ_j Σ_{q=1}^{n_j} CZ_τ(γ^q_{−,j}).
We refer readers to [18,19] for more details on I and J 0 . Fix a reference 1-cycle γ 0 transversed to ξ positively. Assume that [γ 0 ] · [Σ] > g(Σ) throughout. An anchored orbit set is a pair (γ, [Z]), where γ is an orbit set and
[Z] ∈ H 2 (Y ϕ , γ + , γ − )/ ker(ω ϕ + ηJ 0 ). We call it an anchored PFH generator if γ is a PFH generator. Note that H 2 (Y ϕ , γ + , γ − )/ ker(ω ϕ + ηJ 0 ) is an affine space of Z[Σ].
The chain complex PFC(Σ, ϕ, γ_0) is the set of formal sums (possibly infinite)

Σ_{(γ,[Z])} a_{(γ,[Z])} (γ, [Z]),   (2.5)

where a_{(γ,[Z])} ∈ Z/2Z and each (γ, [Z]) is an anchored PFH generator. Also, for any C ∈ R, we require that there are only finitely many (γ, [Z]) such that ∫_Z ω_{ϕ_H} > C and a_{(γ,[Z])} ≠ 0.
Let Λ = { Σ_i a_i q^{b_i} | a_i ∈ Z/2Z, b_0 < b_1 < · · · } be the Novikov ring. Then PFC(Σ, ϕ, γ_0) is a Λ-module because we can define an action

Σ_i a_i q^{b_i} · (γ, [Z]) := Σ_i a_i (γ, [Z − b_i Σ]).   (2.6)
Most of the time, it is convenient to take

γ_0 = Ψ_H(S¹ × x),   (2.7)

denoted by γ^x_H, where x = (x_1, . . . , x_d) is a d-tuple of points on Σ (not necessarily distinct).
Differential on PFH To define the differential, consider the symplectization
X := R s × Y ϕ , Ω := ω ϕ + ds ∧ dt.
An almost complex structure on X is called admissible if it preserves ξ, is R-invariant, sends ∂_s to R, and its restriction to ξ is compatible with ω_ϕ. The set of admissible almost complex structures is denoted by J(Y_ϕ, ω_ϕ). Given J ∈ J(Y_ϕ, ω_ϕ) and orbit sets γ_+ = {(γ_{+,i}, m_i)}, γ_− = {(γ_{−,j}, n_j)}, let M_J(γ_+, γ_−, Z) be the moduli space of punctured holomorphic curves u : Ḟ → X with the following properties: u has positive ends at covers of γ_{+,i} with total multiplicity m_i, negative ends at covers of γ_{−,j} with total multiplicity n_j, and no other ends. Also, the relative homology class of u is Z. Note that M_J(γ_+, γ_−, Z) admits a natural R-action.
The differential ∂ J on P F C(Σ, ϕ, γ 0 ) is defined by
∂_J(γ_+, [Z_+]) := Σ_{γ_−} Σ_{Z, I(Z)=1} #(M_J(γ_+, γ_−, Z)/R) · (γ_−, [Z_+ − Z]).
The homology of ( P F C(Σ, ϕ, γ 0 ), ∂ J ) is called the twisted periodic Floer homology, denoted by P F H(Σ, ϕ, γ 0 ) J . By Corollary 1.1 of [30], PFH is independent of the choice of almost complex structures and Hamiltonian isotopic of ϕ. Note that P F H(Σ, ϕ, γ 0 ) is a Λ-module because the action (2.6) descends to the homology.
The U-map There is a well-defined map
U : P F H(Σ, ϕ H , γ 0 ) → P F H(Σ, ϕ H , γ 0 ). Fix z ∈ R × Y ϕ H .
The definition of the U-map is similar to the differential. Instead of counting I = 1 holomorphic curves modulo R translation, the U-map is defined by counting I = 2 holomorphic curves that pass through the fixed point z and modulo R translation. The homotopy argument can show that the U-map is independent of the choice of z. For more details, please see Section 2.5 of [26].
Cobordism maps on PFH Let (X, Ω X ) be a symplectic 4-manifold. Suppose that there exists a compact subset K such that
(X − K, Ω_X) ≅ ([0, ∞) × Y_{ϕ_+}, ω_{ϕ_+} + ds ∧ dt) ∪ ((−∞, 0] × Y_{ϕ_−}, ω_{ϕ_−} + ds ∧ dt).   (2.8)

We allow Y_{ϕ_+} = ∅ or Y_{ϕ_−} = ∅. We call (X, Ω_X) a symplectic cobordism from (Y_{ϕ_+}, ω_{ϕ_+}) to (Y_{ϕ_−}, ω_{ϕ_−}). Fix a reference homology class Z_ref ∈ H_2(X, γ_0, γ_1). The symplectic manifold (X, Ω_X) induces a homomorphism
This homomorphism is called a PFH cobordism map. Following Hutchings-Taubes's idea [23], the cobordism map P F H sw Z ref (X, Ω X ) is defined by using the Seiberg-Witten theory [28] and Lee-Taubes's isomorphism [30]. Even the cobordism maps are defined by Seiberg-Witten theory, they satisfy some nice properties called holomorphic curves axioms. It means that the PFH cobordism maps count holomorphic curves in certain sense. For the precise statement, we refer readers to [11] and Appendix of [13].
In this paper, we will focus on the following special cases of (X, Ω X ).
1. Given two Hamiltonian functions H_+, H_−, define a homotopy H_s := χ(s)H_+ + (1 − χ(s))H_−, where χ is a cut-off function such that χ = 1 for s ≥ R_0 > 0 and χ = 0 for s ≤ 0. Define
X := R s × S 1 t × Σ, ω X := ω + dH s ∧ dt, Ω X := ω X + ds ∧ dt. (2.9)
This is a symplectic cobordism if R 0 is sufficiently large. Note that we identify Y ϕ H ± with S 1 × Σ implicitly by using (2.4). Fix a reference relative homology class Z ref ∈ H 2 (X, γ 0 , γ 1 ). Then we have a cobordism map
P F H sw Z ref (X, Ω X ) : P F H(Σ, ϕ H + , γ 0 ) → P F H(Σ, ϕ H − , γ 1 ).
This map only depends on H + , H − and the relative homology class Z ref .
2. Let (B − , ω B − , j B − ) be a sphere with a puncture p. Suppose that we have neighbourhood U of p so that we have the following identification
(B − , ω B − , j B − )| U ∼ = ([0, ∞) s × S 1 t , ds ∧ dt, j),
where j is a complex structure that maps ∂_s to ∂_t. Let χ : R → R be a cut-off function such that χ = 1 when s ≥ R_0 and χ(s) = 0 when s ≤ R_0/10. Take

X_− := B_− × Σ,   ω_{X_−} := ω + d(χ(s)H dt),   Ω_{X_−} := ω_{X_−} + ω_{B_−}.   (2.10)
For sufficiently large R 0 > 0, (X − , Ω X − ) is a symplectic manifold satisfying (2.8).
In the case (2.9), if H + satisfies ♠.1 and H − satisfies ♠.2, the author shows that the cobordism map P F H sw Z ref (X, Ω X ) can be defined alternatively by using the pure holomorphic curve methods [11]. The holomorphic curves definition will be used to prove Theorem 2. That is why we need the assumptions ♠.1, ♠.2 in the statement.
Filtered PFH. We define a functional A^η_H on the anchored orbit sets, deformed by the J_0 index, as follows:

A^η_H(γ, [Z]) := ∫_Z ω_ϕ + η J_0(Z).

When η = 0, we write A_H(γ, [Z]) := A^{η=0}_H(γ, [Z]) for short. Even though we add a perturbation term to the usual action functional, this still gives us a filtration on the PFH complex.
Lemma 2.2 (Lemma 2.4 of [13]). Let J ∈ J (Y ϕ , ω ϕ ) be an admissible almost complex structure in the symplectization of R × Y ϕ . Let C ∈ M J (α + , α − ) be a holomorphic current in R × Y ϕ without closed component. Then J 0 (C) ≥ 0. Let P F C L (Σ, ϕ, γ 0 ) be the set of formal sum (2.5) satisfying A η H (γ, [Z]) < L. By Lemma 2.2, this is a subcomplex of ( P F C(Σ, ϕ, γ 0 ), ∂ J ). The homology is denoted by P F H L (Σ, ϕ, γ 0 ). Let i L : P F H L (Σ, ϕ, γ 0 ) → P F H(Σ, ϕ, γ 0 ) be the map induced by the inclusion. Take η = 0. Fix σ ∈ P F H(Σ, ϕ, γ 0 ). The PFH spectral invariant is defined by c pf h d (H, σ, γ 0 , J) := inf{L ∈ R|σ belongs to the image of i L }.
If ϕ H is degenerate , we define
c pf h d (H, σ, γ 0 ) = lim n→∞ c pf h d (H n , σ n , γ 0 ), (2.11)
where ϕ Hn are d-nondegenerate, {H n } ∞ n=1 C ∞ -converges to H, and σ n ∈ P F H(Σ, ϕ Hn , γ 0 ) is the class corresponding to σ.
One could define the PFH spectral invariants using A η H for η > 0, however, we cannot prove the Hofer-Lipschitz property by using the methods in [4]. This is because Lemma 2.2 is not true for holomorphic currents in the symplectic cobordisms.
Quantitative Heegaard Floer homology
In this section, we review the cylindrical formulation of QHF defined in [13].
Cylinderical formulation of QHF
Fix an admissible link L = ∪ d i=1 L i and a Hamiltonian symplecticmorphism ϕ H . We always assume that ϕ H is nondegenerate in the sense that ϕ H (L) intersects L transversely.
A Reeb chord of ϕ_H is a d-union of paths y = [0, 1] × (y_1, . . . , y_d) ⊂ [0, 1] × Σ, where y_i ∈ L_i ∩ ϕ_H(L_{σ(i)}) for some permutation σ. Fix a base point x = (x_1, · · · , x_d), where x_i ∈ L_i. Define a reference chord x_H(t) = ϕ_H ∘ (ϕ^t_H)^{−1}(x) ⊂ [0, 1]_t × Σ from {0} × ϕ_H(L) to {1} × L. Let (E := R_s × [0, 1]_t × Σ, Ω := ω + ds ∧ dt) be a symplectic manifold. Let L = R × ({0} × ϕ_H(L) ∪ {1} × L)
be a union of Lagrangian submanifolds in (E, Ω). Let y ± be two Reeb chords. Then we have a concept called d-multisection in E. Roughly speaking, this is a map u :Ḟ → E which is asymptotic to y ± as s → ±∞ and satisfies the Lagrangian boundary conditions, whereḞ is a Riemann surface with boundary punctures. If a d-multisection is holomorphic, we call it an HF curve. The set of equivalence classes of the d-multisections is denoted by
H 2 (E, y + , y − ). An element in H 2 (E, y + , y − ) is also called a relative homology class. Fix A ∈ H 2 (E, y + , y − ).
The ECH index and J 0 index also can be generalized to the current setting, denoted by I(A) and J 0 (A) respectively. The definition of relative homology class, HF curves and ECH index will be postponed to Section 3. We will define these concepts for a slightly general setting. Given a Reeb chord y, a capping of y is an equivalence class [A] in H 2 (E, x H , y)/ ker(ω+ ηJ 0 ). Define the complex CF (Σ, ϕ H (L), L, x) be the set of formal sums of capping
Σ_{(y,[A])} a_{(y,[A])} (y, [A])   (2.12)

satisfying that a_{(y,[A])} ∈ Z/2Z and, for any C ∈ R, there are only finitely many (y, [A]) such that −∫_A ω > C and a_{(y,[A])} ≠ 0.

For 1 ≤ i ≤ k, let v_i : [0, 1]_s × [0, 1]_t → Σ be a map such that v_i(0, t) = v_i(1, t) = v_i(s, 0) = x_i and v_i(s, 1) ∈ L_i, and which represents the class [B_i] ∈ H_2(Σ, L_i, Z). Define

u_{x_i} : [0, 1]_s × [0, 1]_t → [0, 1]_s × [0, 1]_t × Σ,  (s, t) ↦ (s, t, ϕ_H ∘ (ϕ^t_H)^{−1} ∘ v_i(s, t)).
Together with the trivial strips at x_j (j ≠ i), u_{x_i} represents a class in H_2(E, x_H, x_H), still denoted by [B_i]. We also replace the map v_i by v′_i, where v′_i satisfies v′_i(0, t) = v′_i(1, t) = v′_i(s, 1) = x_i and v′_i(s, 0) ∈ L_i, and represents the class [B_i] ∈ H_2(Σ, L_i). Using the same construction, we have another map u′_{x_i}. The slight difference between u_{x_i} and u′_{x_i} is that u_{x_i}|_{t=1} wraps ∂B_i once, while u′_{x_i}|_{t=0} wraps ∂ϕ_H(B_i) once. So we denote the equivalence class of u′_{x_i} in H_2(E, x_H, x_H) by [ϕ_H(B_i)].

Let R = { Σ_i a_i T^{b_i} | a_i ∈ Z/2Z, b_0 < b_1 < · · · } be the Novikov ring. To distinguish it from the one for PFH, we use different notations for the ring and the formal variable. Then CF(Σ, ϕ_H(L), L, x) is an R-module because we have the action

Σ_i a_i T^{b_i} · (y, [A]) := Σ_i a_i (y, [A] + b_i B).   (2.13)

Let J_E denote the set of Ω-compatible almost complex structures satisfying that J is R_s-invariant, J(∂_s) = ∂_t, J sends T Σ to itself and J|_{TΣ} is ω-compatible. Fix J ∈ J_E.
Let M J (y + , y − , A) denote the moduli space of HF curves that are asymptotic to y ± as s → ±∞ and have relative homology
class A. Because J is R s -invariant, this induces a natural R-action on M J (y + , y − , A).
Fix a generic J ∈ J E . The differential is defined by
d_J(y_+, [A_+]) = Σ_{A ∈ H_2(E, y_+, y_−), I(A)=1} #(M_J(y_+, y_−, A)/R) · (y_−, [A_+#A]).
The homology of (CF_*(Σ, ϕ_H(L), L, x), d_J) is well defined [13], and is denoted by HF_*(Σ, ϕ_H(L), L, x)_J.
Again, the Floer homology is a R-module. By Proposition 3.9 of [13], the homology is independent of the choices of J and H. For different choices of (J, H), there is an isomorphism between the corresponding QHF called a continuous morphism. More details about this point are given in Section 3 later. For two different choices of base points, the corresponding homologies are also isomorphic. Let HF (Σ, L) be the direct limit of the continuous morphisms. For any H, we have an isomorphism
j^x_H : HF(Σ, ϕ_H(L), L, x) → HF(Σ, L).   (2.14)
Combining the isomorphism (1.1) with Lemma 6.10 of [7], we know that
HF * (Σ, L) is isomorphic to H * (T d , R) as an R-vector space, where T d is the d-torus.
Remark 2.1. Even though we only define the QHF for a Hamiltonian symplecticmorphism ϕ H , the above construction also works for a pair of Hamiltonian symplecticmorphisms (ϕ H , ϕ K ). Because ϕ K (L) is also an admissible link, we just need to replace L by ϕ K (L). The result is denoted by HF (Σ, ϕ H (L), ϕ K (L), x).
Filtered QFH and spectral invariants
Similarly to [31,7], we define an action functional on the generators by

A^η_H(y, [A]) := −∫_A ω + ∫_0^1 H_t(x) dt − η J_0(A).

We remark that the term J_0(A) corresponds to the term ∆ · [ŷ] in [7], where ∆ is the diagonal of Sym^d Σ and ŷ is a capping of a Reeb chord y. This point of view is established in Proposition 4.2 of [13].

Let CF^L(Σ, ϕ_H(L), L, x) be the set of formal sums (2.12) satisfying A^η_H(y, [A]) < L. It is easy to check that it is a subcomplex. The filtered QHF, denoted by HF^L(Σ, ϕ_H(L), L), is the homology of (CF^L(Σ, ϕ_H(L), L), d_J). Let i_L : HF^L(Σ, ϕ_H(L), L, x) → HF(Σ, ϕ_H(L), L, x)
be the homomorphism induced by the inclusion.
Definition 2.3. Fix a ∈ HF(Σ, L). The HF spectral invariant is c_{L,η}(H, a) := inf{L ∈ R | (j^x_H)^{−1}(a) belongs to the image of i_L}. Let c = Σ a_{(y,[A])} (y, [A]) be a cycle in CF(Σ, ϕ_H(L), L, x). The action of this cycle is defined by A^η_H(c) = max{A^η_H(y, [A]) | a_{(y,[A])} ≠ 0}.
Then the spectral invariant can be expressed alternatively as
c L,η (H, a) = inf{A η H (c)|[c] = (j x H ) −1 (a)}. (2.15)
Let HF (Sym d Σ, Sym d L, Sym d ϕ H ) denote the QHF defined in [7]. Because QHF is independent of the choices of ϕ H and x, we have an abstract group HF (Sym d L) and a canonical isomorphism
j x H : HF (Sym d Σ, Sym d L, Sym d ϕ H ) → HF (Sym d L).
Since the isomorphism (1.1) also preserves the action filtrations, we have
(1/d) c_{L,η}(H, a) = c^link_{L,η}(H, j^x_H ∘ Φ_H ∘ (j^x_H)^{−1}(a)).   (2.16)

We want to emphasize that the isomorphism j^x_H ∘ Φ_H ∘ (j^x_H)^{−1} : HF(Σ, L) → HF(Sym^d L)
Morphisms on HF
In this section, we define the continuous morphisms, quantum product and unit on HF (Σ, L).
Moduli space of HF curves
In this subsection, we give the definition of HF curves, relative homology class, and the ECH index.
Let D m be a disk with boundary punctures (p 0 , p 1 , ..., p m ), the order of the punctures is counter-clockwise. See Figure 2. Let ∂ i D m denote the boundary of D m connecting p i−1 and p i for 1 ≤ i ≤ m. Let ∂ m+1 D m be the boundary connecting p m and p 0 .
Fix a complex structure j_m and a Kähler form ω_{D_m} over D_m throughout. We say that D_m is a disk with strip-like ends if for each p_i we have a neighborhood U_i of p_i such that

(U_i, ω_{D_m}, j_m) ≅ (R_i × [0, 1], ds ∧ dt, j),   (3.17)

where j is the standard complex structure on R × [0, 1] with j(∂_s) = ∂_t, and where R_i = R_+ for 1 ≤ i ≤ m and R_0 = R_−. Here R_+ = [0, ∞) and R_− = (−∞, 0].

Let π : E_m = D_m × Σ → D_m be the trivial fibration. A closed 2-form ω_{E_m} is called admissible if ω_{E_m}|_Σ = ω and ω_{E_m} = ω over the strip-like ends. Note that Ω_{E_m} = ω_{E_m} + ω_{D_m} is a symplectic form on E_m if ω_{D_m} is large enough. As a result, (π : E_m → D_m, Ω_m) over U_i can be identified with

(π : U_i × Σ → U_i, Ω_{D_m}) ≅ (π_{R×[0,1]} : R_i × [0, 1] × Σ → R_i × [0, 1], ω + ds ∧ dt).   (3.18)
We call it a (strip-like) end of (E m , Ω Em ) at p i .
Let L = (L 1 , ...L m+1 ) be a chain of d-disjoint union Lagrangian submanifolds in ∂E m satisfying the following conditions:
C.1 Let L_i = L|_{∂_i D_m} ⊂ π^{−1}(∂_i D_m). Then L_i consists of a d-disjoint union of Lagrangian submanifolds.

C.2 For 1 ≤ i ≤ m, over the end at p_i (under the identification (3.18)), we have L = (R_+ × {0} × L_{p_{i−1}}) ∪ (R_+ × {1} × L_{p_i}).

C.3 Over the end at p_0 (under the identification (3.18)), we have L = (R_− × {0} × L_{p_0}) ∪ (R_− × {1} × L_{p_m}).

C.4 The links {L_{p_i}}_{i=0}^m are η-admissible and they are Hamiltonian isotopic to each other.

C.5 For z ∈ ∂D_m, L ∩ π^{−1}(z) is an admissible link.
Let (E m , Ω m , L m ) and (E n , Ω n , L n ) be two symplectic fibrations. Suppose that the negative end of (E m , Ω m , L m ) agrees with the i-th positive end of (E n , Ω n , L n ), i.e,
(E_m, Ω_m, L_m)|_{U_0} ≅ (R_{s_− ≤ 0} × [0, 1]_t × Σ, ω + ds_− ∧ dt, R_{s_− ≤ 0} × ({0} × L_0 ∪ {1} × L_1)),
(E_n, Ω_n, L_n)|_{U_i} ≅ (R_{s_+ ≥ 0} × [0, 1]_t × Σ, ω + ds_+ ∧ dt, R_{s_+ ≥ 0} × ({0} × L_0 ∪ {1} × L_1)).

Fix R ≥ 0. Define the R-stretched composition (E, Ω, L) := (E_n, Ω_n, L_n) ∘_R (E_m, Ω_m, L_m) by

(E, Ω, L) = (E_n, Ω_n, L_n)|_{s_+ ≤ R} ∪_{s_+ − R = s_− + R} (E_m, Ω_m, L_m)|_{s_− ≥ −R}.   (3.19)

Most of the time the number R is not important, so we suppress it from the notation.
Definition 3.1. Fix Reeb chords y i ∈ L p i−1 ∩ L p i and y 0 ∈ L p 0 ∩ L pm . Let (Ḟ , j) be a Riemann surface (possibly disconnected) with boundary punctures. A d-multisection is a smooth map u : (Ḟ , ∂Ḟ ) → E m such that 1. u(∂Ḟ ) ⊂ L. Let {L i j } d i=1 be the connected components of L| ∂ j D . For each 1 ≤ i ≤ d, u −1 (L i j )
consists of exactly one component of ∂Ḟ .
Let H_2(E_m, y_1, . . . , y_m, y_0) be the set of continuous maps

u : (Ḟ, ∂Ḟ) → (E_m, L ∪ ∪_{i=1}^m {∞} × y_i ∪ {−∞} × y_0)

satisfying the conditions 1), 2), 3), modulo a relation ∼. Here u_1 ∼ u_2 if and only if their compactifications are equivalent in H_2(E_m, L ∪ ∪_{i=1}^m {∞} × y_i ∪ {−∞} × y_0; Z). An element in H_2(E_m, y_1, . . . , y_m, y_0) is called a relative homology class. An easy generalization is that one could replace the Reeb chords by the reference chords x_H in the above definition.
Definition 3.2. An almost complex structure is called adapted to fibration if
1. J is Ω Em -tame. 2. Over the strip-like ends, J is R s -invariant, J(∂ s ) = ∂ t , J preserves T Σ and J| T Σ is compatible with ω. 3. π is complex linear with respect to (J, j m ), i.e., j m • π * = π * • J.
Let J tame (E m ) denote the set of the almost complex structures adapted to fibration. Fix an almost complex structure J. If u is a J-holomorphic d-multisection, then u is called an HF curve.
Using the admissible 2-form ω Em , we have a splitting
T E_m = T E^h_m ⊕ T E^v_m, where T E^v_m = ker π_* and T E^h_m = {v ∈ T E_m | ω_{E_m}(v, w) = 0 for all w ∈ T E^v_m}.

With respect to this splitting, an almost complex structure J ∈ J_tame(E_m) can be written in block form as

J = ( J^{hh}  0 ; J^{hv}  J^{vv} ).

Therefore, J is Ω_{E_m}-compatible if and only if J^{hv} = 0. Let J_comp(E_m) ⊂ J_tame(E_m) denote the set of almost complex structures which are adapted to the fibration and Ω_{E_m}-compatible. Later, we will use the almost complex structures in J_comp(E_m) for computations.
Fredholm index and ECH index
We begin to define the index of an HF curve.
There are two types of index defined for an HF curve, called Fredholm index and ECH index.
To begin with, fix a trivialization of u * T Σ as follows. Fix a non-singular vector v on L. Then (v, j Σ (v)) gives a trivialization of T Σ| L , where j Σ is a complex structure on Σ. We extend the trivialization arbitrarily along y i . Such a trivialization is denoted by τ .
Define a real line bundle L over ∂F as follows. Take L| ∂Ḟ := u * (T L ∩ T Σ). Extend L to ∂F −∂Ḟ by rotating in the counter-clockwise direction from u * T L i p j−1 and u * T L i p j by the minimum account. Then (u * T Σ, L) forms a bundle pair over ∂F . With respect to the trivialization τ , we have a well-defined Maslov index µ τ (u) := µ(u * T Σ, L, τ ) and relative Chern number c 1 (u * T Σ, τ ). The number 2c 1 (u * T Σ, τ ) + µ τ (u) is independent of the trivialization τ . The Fredholm index of an HF curve is defined by
indu := −χ(F ) + 2c 1 (u * T Σ, τ ) + µ τ (u) + d(2 − m).
The above index formula can be obtained by the doubling argument in Proposition 5.5.2 of [15]. Given A ∈ H 2 (E m , y 1 , ..y m , y 0 ), an oriented immersed surface C ⊂ E m is a τrepresentative of A if 1. C intersects the fibers positively along ∂C;
2. π [0,1]×Σ | C is an embedding near infinity;
3. C satisfies the τ -trivial conditions in the sense of Definition 4.5.2 in [15].
Let C be a τ -trivial representative of A. Let ψ be a section of the normal bundle N C such that ψ| ∂C = Jτ . Let C be a push-off of C in the direction of ψ. Then the relative self-intersection number is defined by
Q τ (A) := #(C ∩ C ).
Suppose that A ∈ H 2 (E m , y 1 , ..y m , y 0 ) admits a τ -representative. We define the ECH index of a relative homology class by
I(A) := c 1 (T Σ| A , τ ) + Q τ (A) + µ τ (A) + d(1 − m).
Using the relative adjunction formula, we have the following result. Proof. By the same argument in Lemma 4.5.9 [15], we have
c 1 (u * T E m , (τ, ∂ t )) = c 1 (du(T F ), ∂ t ) + c 1 (N u , Jτ ) = χ(F ) − d + Q τ (u) − 2δ(u),
where N u is the normal bundle of u and ∂ t is a trivialization of T D m such that it agrees with ∂ t over the ends. On the other hand, we have
c 1 (u * T E m , (τ, ∂ t )) = c 1 (u * T Σ, τ ) + c 1 (u * T D m , ∂ t ) = c 1 (u * T Σ, τ ).
Combine the above two equations; then we obtain the ECH equality. J 0 index We imitate Hutchings to define the J 0 index. The construction of J 0 here more or less comes from the relative adjunction formula. The J 0 index for the usual Heegarrd Floer homology can be found in [27]. Fix a relative homology class A ∈ H 2 (E m , y 1 , ..y m , y 0 ). The J 0 index is defined by
J 0 (A) = −c 1 (T E m | A , (τ, ∂ t )) + Q τ (A).
The following lemma summarize the properties of J 0 .
Suppose that an HF curve
u = u 0 ∪ u 1 : F → M has two irreducible components, then J 0 (u) = J 0 (u 0 ) + J 0 (u 1 ) + 2#(u 0 ∩ u 1 ).
3. If the class A supports an HF curve, then J 0 (A) ≥ 0.
4. Let A, A′ ∈ H_2(E_m, y_1, . . . , y_m, y_0). Suppose that A′ − A = m[Σ] + Σ_{i=1}^k c_i [B^i_z], where the B^i_z are the disks bounded by L_m ∩ π_m^{−1}(z) for z ∈ ∂D_m. Then J_0(A′) = J_0(A) + 2m(d + g − 1).
Proof. The first item follows directly from the definition and the relative adjunction formula. The second item also follows directly from the definition. Since an HF curve has at least one boundary component, we have −χ(F) + d ≥ 0. By the first two items, the third item holds.
Using the computations in Lemma 3.4 of [13], we obtain the last item. A quick way to see J_0(A) = J_0(A + Σ_{i=1}^k c_i B^i_z) is that adding disks along boundaries will not change the Euler characteristic of the curves.
Cobordism maps
With the above preliminaries, we now define the product structure on HF. To begin with, let us define the cobordism maps on QHF induced by (E_m, Ω_m, L_m). Assume that L_{p_i} = ϕ_{H_i}(L). Define reference chords by δ_i(t) := ϕ_{H_i}(x_{H̄_i#H_{i−1}}(t)) for 1 ≤ i ≤ m and δ_0(t) = ϕ_{H_m}(x_{H̄_m#H_0}(t)), where H̄_t(x) = −H_t(ϕ^t_H(x)). Here
H#K(t, x) := H t (x) + K t ((ϕ t H ) −1 (x))
is the composition of two Hamiltonian functions. By the chain rule, we have ϕ t H#K = ϕ t H • ϕ t K . There is another operation on Hamiltonian functions called the join. The join of H and K is defined by
H_t K_t(x) = 2ρ′(2t) K_{ρ(2t)}(x),  0 ≤ t ≤ 1/2;
H_t K_t(x) = 2ρ′(2t − 1) H_{ρ(2t−1)}(x),  1/2 ≤ t ≤ 1,

where ρ : [0, 1] → [0, 1] is a fixed non-decreasing smooth function that is equal to 0 near 0 and equal to 1 near 1. As with the composition, the time-1 map of H_t K_t is ϕ_H ∘ ϕ_K.
The following proposition is similar to the result in Section 4 of [14].
HF_{A_ref}(E_m, Ω_m, L_m)_J : ⊗_{i=1}^m HF(Σ, L_{p_{i−1}}, L_{p_i}) → HF(Σ, L_{p_0}, L_{p_m}).

1. (Invariance) HF_{A_ref}(E_m, Ω_0, L_0)_{J_0} = HF_{A_ref}(E_m, Ω_1, L_1)_{J_1}.
In particular, the cobordism maps are independent of the choice of almost complex structures.
2. (Composition rule) Suppose that the negative end of (E m , Ω m , L m ) agrees with the j-th positive end of (E n , Ω n , L n ). Then we have
HF A 2 (E n , Ω n , L n )•HF A 1 (E m , Ω m , L m ) = HF A 1 #A 2 (E m+n−1 , Ω m+n−1 , L m+n−1 ),
where (E m+n−1 , Ω m+n−1 , L m+n−1 ) is the composition of (E m , Ω m , L m ) and (E n , Ω n , L n ) defined in (3.19).
Proof. At the chain level, define
\[ CF_{A_{ref}}(E_m, \Omega_m, L_m)_J\big((y_1,[A_1]) \otimes \cdots \otimes (y_m,[A_m])\big) = \sum_{I(A)=0} \#\mathcal{M}_J(y_1, \dots, y_m; y_0, A)\,(y_0, [A_0]). \]
Here A_0 is determined by the relation A_1 \# \cdots \# A_m \# A \# (−A_0) = A_ref.
To see that the above map is well defined, first note that the HF curves are simple because they are asymptotic to the Reeb chords. Therefore, the transversality of the moduli spaces can be achieved by a generic choice of almost complex structure. By Theorem 3.3, the ECH indices of HF curves are nonnegative.
Secondly, consider a sequence of HF curves {u_n : Ḟ → E_m}_{n=1}^∞ in M_J(y_1, …, y_m; y_0, A) and apply Gromov compactness [2] to {u_n}_{n=1}^∞. To rule out the bubbles, our assumptions on the links play a key role here. Note that the bubbles arise from pinching an arc or an interior simple closed curve in F_n. Since the Lagrangian components of L_m are pairwise disjoint, if an irreducible component v of the bubbles comes from pinching an arc a, then the endpoints of a must lie inside the same component of L_m. By the open mapping theorem, v lies in a fiber π^{-1}(z), where z ∈ ∂_j D_m. Let L_m ∩ π^{-1}(z) = ∪_{i=1}^d L^i_z. By our assumptions, L^i_z bounds a disk B^i_z for 1 ≤ i ≤ k. Then the image of v is either B^i_z or Σ − B^i_z for some 1 ≤ i ≤ k, or Σ − L^j_z for some k+1 ≤ j ≤ d.
Similarly, if v comes from pinching an interior simple closed curve in F_n, then the image of v must be a fiber Σ. The index formula in Lemma 3.3 of [13] generalizes easily to the current setting. As a result, the bubble v contributes at least 2 to the ECH index; roughly speaking, this is because the Maslov index of a disk is 2. Also, adding a copy of Σ increases the ECH index by 2(k+1). This violates the condition I = 0. Hence, the bubbles can be ruled out, and therefore M_J(y_1, …, y_m; y_0, A) is compact. Similarly, bubbles cannot appear in the moduli space of HF curves with I = 1. The standard neck-stretching and gluing argument [29] shows that CF_{A_ref}(E_m, Ω_m, L_m) is a chain map.
The invariance and the composition rule follow from the standard homotopy and neck-stretching arguments. Again, the bubbles can be ruled out for the same index reasons as above.
Reference relative homology classes
Obviously, the cobordism maps depend on the choice of the reference relative homology class A_ref. For any two different reference homology classes, the cobordism maps defined by them differ by a shift (2.13). To exclude this ambiguity, we fix a reference relative homology class in the following way:
Let χ_+(s) : R_s → R be a function such that χ_+ = 1 when s ≤ −R_0 and χ_+ = 0 when s ≥ −1. Define a diffeomorphism
\[ F_+ : \mathbb{R}_-\times[0,1]\times\Sigma \to \mathbb{R}_-\times[0,1]\times\Sigma, \qquad (s,t,x) \mapsto \big(s, t, \varphi_K \circ \varphi_{\chi_+(s)H} \circ (\varphi^t_{\chi_+(s)H})^{-1}(x)\big). \]
We view F + as a map on the end of E 0 by extending F + to be (z, x) → (z, ϕ K (x)) over the rest of E 0 . Let L + := F + (∂D 0 × L) ⊂ ∂E 0 be a submanifold. Note that
L + | s≤−R 0 = R s≤−R 0 × ({0} × ϕ K • ϕ H (L) ∪ {1} × ϕ K (L)
). The surface F + (D 0 × {x}) represent a relative homology class A + ∈ H 2 (E 0 , ∅, ϕ K (xK #(K#H) )). For any Hamiltonian functions H 1 , H 2 , we find a suitable H such that H 1 = H#K and H 2 = K. So the above construction gives us a class A + H 1 ,H 2 ∈ H 2 (E 0 , ∅, ϕ H 2 (xH 2 #H 1 )). Let D 0 be a disk with a strip-like positive end. Define E 0 := D 0 × Σ. By a similar construction, we have a fiber-preserving diffeomorphism F − :
E 0 → E 0 . Let L − := F − (∂D 0 × L). Then A − H 1 ,H 2 := [F − (D 0 × {x})]
gives a relative homology class in H 2 (E 0 , ϕ H 2 (xH 2 #H 1 ), ∅).
Using A ± H 1 ,H 2 , we determine a unique reference homology class A ref ∈ H 2 (E m , δ 1 , .., δ m , δ 0 ) as follows: For i-th positive end of (E m , L m ), we glue it with (E 0 , L + ) as in (3.19), where L + is determined by H i−1 , H i . Similarly, we glue the negative end of (E m , L m ) with (E 0 , L − ). Then this gives us a pair (E = D × Σ, L), where D is a closed disk without puncture. Note that H 2 (E, L; Z) ∼ = H 2 (E, ∂D×L; Z). Under this identification, we have a canonical class A can = [D × {x}] ∈ H 2 (E, L; Z). We pick A ref ∈ H 2 (E m , δ 1 , .., δ m , δ 0 ) to be a unique class such that
A − H 0 ,Hm #A ref # m i=1 A + H i−1 ,H i = A can .
Continuous morphisms
In the case that m = 1, we identify π : E_1 → D_1 with π : R_s × [0,1]_t × Σ → R_s × [0,1]_t.
Given two pairs of symplectomorphisms (φ_{H_1}, φ_{K_1}) and (φ_{H_2}, φ_{K_2}), we can use the same argument as in Lemma 6.1.1 of [14] to construct a pair (Ω_1, L_1) such that
1. Ω_1 is a symplectic form such that Ω_1|_{|s|≥R_0} = ω + ds ∧ dt;
2. L_1 ⊂ R × {0,1} × Σ consists of two d-disjoint unions of Ω_1-Lagrangian submanifolds;
3. L_1|_{s≥R_0} = (R_{s≥R_0} × {0} × φ_{H_1}(L)) ∪ (R_{s≥R_0} × {1} × φ_{K_1}(L));
4. L_1|_{s≤−R_0} = (R_{s≤−R_0} × {0} × φ_{H_2}(L)) ∪ (R_{s≤−R_0} × {1} × φ_{K_2}(L)).
We call the above triple (E_1, Ω_1, L_1) a Lagrangian cobordism from (φ_{H_1}(L), φ_{K_1}(L)) to (φ_{H_2}(L), φ_{K_2}(L)).
Recall that the reference class A_ref is the unique class defined in Section 3.2.1. By the invariance property in Proposition 3.5, the cobordism map HF_{A_ref}(E_1, Ω_1, L_1) only depends on {(H_i, K_i)}_{i=1,2}; we call it a continuous morphism and denote it by I^{H_1,H_2}_{K_1,K_2}.
Lemma 3.6. The naturality isomorphisms satisfy the following commutative diagram:
\[
\begin{CD}
HF(\Sigma, \varphi_{K_1}(L), L) @>{(I_{H_1})_*}>> HF(\Sigma, \varphi_{H_1\#K_1}(L), \varphi_{H_1}(L))\\
@VV{I^{K_1,K_2}_{0,0}}V @VV{I^{H_1\#K_1,\,H_2\#K_2}_{H_1,H_2}}V\\
HF(\Sigma, \varphi_{K_2}(L), L) @>{(I_{H_2})_*}>> HF(\Sigma, \varphi_{H_2\#K_2}(L), \varphi_{H_2}(L))
\end{CD}
\]
In particular, we have (I_{H_1})_* = I^{H_1\#K_1, K_1}_{H_1, 0}.
Proof. To prove the statement, we first split the diagram into two:
\[
\begin{CD}
HF(\Sigma, \varphi_{K_1}(L), L) @>{(I_{H_1})_*}>> HF(\Sigma, \varphi_{H_1\#K_1}(L), \varphi_{H_1}(L))\\
@VV{I^{K_1,K_2}_{0,0}}V @VV{I^{H_1\#K_1,\,H_1\#K_2}_{H_1,H_1}}V\\
HF(\Sigma, \varphi_{K_2}(L), L) @>{(I_{H_1})_*}>> HF(\Sigma, \varphi_{H_1\#K_2}(L), \varphi_{H_1}(L))
\end{CD}
\]
\[
\begin{CD}
HF(\Sigma, \varphi_{K_2}(L), L) @>{(I_{H_1})_*}>> HF(\Sigma, \varphi_{H_1\#K_2}(L), \varphi_{H_1}(L))\\
@VV{\mathrm{Id}}V @VV{I^{H_1\#K_2,\,H_2\#K_2}_{H_1,H_2}}V\\
HF(\Sigma, \varphi_{K_2}(L), L) @>{(I_{H_2})_*}>> HF(\Sigma, \varphi_{H_2\#K_2}(L), \varphi_{H_2}(L))
\end{CD}
\]
To prove the first diagram, we define a diffeomorphism
\[ F_{H_1} : \mathbb{R}\times[0,1]\times\Sigma \to \mathbb{R}\times[0,1]\times\Sigma, \qquad (s,t,x) \mapsto (s, t, \varphi_{H_1}(x)). \]
Let (R × [0,1] × Σ, Ω_1, L) be a Lagrangian cobordism from (φ_{K_1}(L), L) to (φ_{K_2}(L), L). Note that if u ∈ M_J(y_+, y_-) is an HF curve in (R × [0,1] × Σ, Ω_1) with Lagrangian boundary L, then F_{H_1}(u) is an (F_{H_1})_*J-holomorphic HF curve in (R × [0,1] × Σ, (F_{H_1}^{-1})^*Ω_1) with Lagrangian boundary F_{H_1}(L). This gives a 1-1 correspondence between the curves in (E_1, Ω_1, L) and the curves in (E_1, (F_{H_1}^{-1})^*Ω_1, F_{H_1}(L)). Note that F_{H_1}(u) is a holomorphic curve contributing to the cobordism map CF_{A_ref}(E_1, (F_{H_1}^{-1})^*Ω_1, F_{H_1}(L)), which induces I^{H_1\#K_1, H_1\#K_2}_{H_1,H_1}; hence the first diagram commutes.
To prove the second diagram, the idea is the same. Let H_s : [0,1] × Σ → R be a family of Hamiltonian functions such that H_s = H_1 for s ≥ R_0 and H_s = H_2 for s ≤ −R_0, and let F_{\{H_s\}}(s,t,x) := (s, t, φ_{H_s}(x)). Then
\[
F_{\{H_s\}}(L) =
\begin{cases}
\mathbb{R}_{\ge R_0}\times\big(\{0\}\times\varphi_{H_1\#K}(L)\cup\{1\}\times\varphi_{H_1}(L)\big) & s\ge R_0,\\
\mathbb{R}_{\le -R_0}\times\big(\{0\}\times\varphi_{H_2\#K}(L)\cup\{1\}\times\varphi_{H_2}(L)\big) & s\le -R_0.
\end{cases}
\]
Therefore, we obtain the continuous morphism I^{H_1\#K_2, H_2\#K_2}_{H_1,H_2} by counting holomorphic curves in this cobordism, which gives the second diagram. To see (I_{H_1})_* = I^{H_1\#K_1, K_1}_{H_1, 0}, we just need to take K_2 = K_1 and H_2 = 0 in the diagram.
Quantum product on HF
Consider E_2 = D_2 × Σ with a symplectic form Ω_{E_2} = ω + ω_{D_2}. Take L_2 = (∂_1 D_2 × φ_{H_1}(L)) ∪ (∂_2 D_2 × φ_{H_2}(L)) ∪ (∂_3 D_2 × φ_{H_3}(L)). Define μ_2^{H_1,H_2,H_3} := HF_{A_ref}(E_2, Ω_2, L_2), where A_ref is the reference class in Section 3.2.1. Then μ_2^{H_1,H_2,H_3} is a map
\[ \mu_2^{H_1,H_2,H_3} : HF(\Sigma, \varphi_{H_1}(L), \varphi_{H_2}(L)) \otimes HF(\Sigma, \varphi_{H_2}(L), \varphi_{H_3}(L)) \to HF(\Sigma, \varphi_{H_1}(L), \varphi_{H_3}(L)). \]
By Proposition 3.5, we have the following commutative diagram:
\[
\begin{CD}
HF(\Sigma,\varphi_{H_1}(L),\varphi_{H_2}(L))\otimes HF(\Sigma,\varphi_{H_2}(L),\varphi_{H_3}(L)) @>{\mu_2^{H_1,H_2,H_3}}>> HF(\Sigma,\varphi_{H_1}(L),\varphi_{H_3}(L))\\
@VV{I^{H_1,K_1}_{H_2,K_2}\otimes I^{H_2,K_2}_{H_3,K_3}}V @VV{I^{H_1,K_1}_{H_3,K_3}}V\\
HF(\Sigma,\varphi_{K_1}(L),\varphi_{K_2}(L))\otimes HF(\Sigma,\varphi_{K_2}(L),\varphi_{K_3}(L)) @>{\mu_2^{K_1,K_2,K_3}}>> HF(\Sigma,\varphi_{K_1}(L),\varphi_{K_3}(L))
\end{CD}
\]
Therefore, µ H 1 ,H 2 ,H 3 2 descends to a bilinear map µ 2 : HF (Σ, L)⊗HF (Σ, L) → HF (Σ, L). We call µ 2 the quantum product on QHF.
Unit
In this subsection, we define the unit of the quantum product µ 2 .
Consider the case that m = 0. Let L 0 ⊂ ∂E 0 = ∂D 0 × Σ be d-disjoint union of submanifolds such that
L 0 | s≤−R 0 = R| s≤−R 0 × ({0} × ϕ H (L) ∪ {1} × ϕ K (L)).
Take a symplectic form Ω 0 such that Ω 0 | s≤−R 0 = ω + ds ∧ dt and L 0 is a union of Ω 0 -Lagrangians. The tuple (E 0 , Ω 0 , L 0 ) can be constructed as follows: First, we take a Lagrangian cobordism (E 1 , Ω 1 , L 1 ) from (L, L) to (ϕ H (L), ϕ K (L)). Then take (E 0 , Ω 0 , L 0 ) to be the composition of (E 1 , Ω 1 , L 1 ) and (E 0 , ω + ω D 0 , ∂D 0 × L).
These data induce a cobordism map
HF A ref (E 0 , Ω 0 , L 0 ) : R → HF (Σ, ϕ H (L), ϕ K (L)).
Again, A ref is the reference class in Section 3.2.1. Define
e H,K := HF A ref (E 0 , Ω 0 , L 0 )(1).
By Proposition 3.5, we have
\[ I^{H_1,H_2}_{K_1,K_2}(e_{H_1,K_1}) = e_{H_2,K_2} \qquad\text{and}\qquad \mu_2^{H_1,H_2,H_3}(a \otimes e_{H_2,H_3}) = I^{H_1,H_1}_{H_2,H_3}(a), \]
where a ∈ HF (Σ, ϕ H 1 (L), ϕ H 2 (L)). These identities imply that the following definition makes sense.
Definition 3.7. The class e H,K descends to a class e ∈ HF (Σ, L). We call it the unit. It is the unit with respect to µ 2 in the sense that µ 2 (a ⊗ e) = a.
We now describe the unit when H is a small Morse function. Fix perfect Morse functions f L i : L i → R with a maximum point y + i and a minimum point y − i . Extend ∪ i f L i to be a Morse function f : Σ → R satisfying the following conditions:
M.1 (f, g Σ ) satisfies the Morse-Smale condition, where g Σ is a fixed metric on Σ.
M.2 f | L i has a unique maximum y + i and a unique minimum y − i .
M.3 {y + i } are the only maximum points of f . Also, f ≤ 0 and f (y + i ) = 0 for 1 ≤ i ≤ d.
M.4 f = f_{L_i} − \tfrac12 y^2 in a neighborhood of L_i, where y is the coordinate in the normal direction.
Take H = ϵf, where 0 < ϵ ≪ 1. By Lemma 6.1 in [13], the set of Reeb chords of φ_H is
{y = [0, 1] × (y 1 , ...y d ) | y i ∈ Crit(f | L i )} (3.22)
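For instance, since each component L_i is an embedded circle and f|_{L_i} is a perfect Morse function with exactly one maximum y^+_i and one minimum y^-_i, the set (3.22) is finite, of cardinality
\[ \prod_{i=1}^d \#\,\mathrm{Crit}(f|_{L_i}) = 2^d. \]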
For each y = [0, 1] × (y 1 , ...y d ), we construct a relative homology class A y as follows:
Let η = ∪_{i=1}^d η_i : ⊔_{i=1}^d [0,1]_s → L be a d-union of paths in L, where η_i ⊂ L_i satisfies η_i(0) = y_i and η_i(1) = x_i. Let u_i(s,t) = (s, t, φ_H ∘ (φ^t_H)^{-1}(η_i(s))). Then u = ∪_{i=1}^d u_i is a d-multisection and it gives rise to a relative homology class A_y ∈ H_2(E, x_H, y). It is easy to show that (see Equation (3.18) of [13])
\[ \mathcal{A}_H\Big(y, [A_y] + \sum_{i=1}^k c_i[B_i] + m[\Sigma]\Big) = H(y) - \lambda\sum_{i=1}^k c_i - m, \qquad J_0\Big([A_y] + \sum_{i=1}^k c_i[B_i] + m[\Sigma]\Big) = 2m(g + d - 1). \tag{3.23} \]
Lemma 3.8. Take H = ϵf. Let y_♥ = [0,1] × (y^+_1, …, y^+_d) and let A_ref be the reference homology class defined in Section 3.2.1. Then there is a suitable pair (Ω_{E_0}, L_0) such that for a generic J ∈ J_comp(E_0), we have
\[ CF_{A_{ref}}(E_0, \Omega_{E_0}, L_0)_J(1) = (y_♥, [A_{y_♥}]). \]
In particular, (y_♥, [A_{y_♥}]) is a cycle that represents the unit.
The idea of the proof is to use index and energy constraints to show that the union of horizontal sections is the only I = 0 holomorphic curve contributing to the cobordism map CF_{A_ref}(E_0, Ω_{E_0}, L_0)_J(1). Since the proof of Lemma 3.8 is the same as that of Lemma 6.6 in [13], we omit the details here. From Lemma 3.8, we also know that the definition of the unit in Definition 3.7 agrees with Definition 6.7 of [13].
Proof of Theorem 1
In this section, we study the properties of the spectral invariants c_{L,η}. These properties and their proofs are parallel to the ones in [7,31].
The HF action spectrum
Fix a base point x and define the action spectrum to be Spec(H : L, x) := {A^η_H(y, [A]) | A ∈ H_2(E, x_H, y)}. For different base points x, x′, we have an isomorphism
\[ \Psi^A_{x,x'} : H_2(E, x_H, y) \to H_2(E, x'_H, y) \]
preserving the action functional (see Equation 3.17 of [13]). In particular, the action spectrum is independent of the base point, so we omit x from the notation. A Hamiltonian function H is called mean-normalized if ∫_Σ H_t ω = 0 for any t.
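For instance, any Hamiltonian H_t can be mean-normalized by subtracting its fiberwise mean,
\[ \widetilde H_t := H_t - \frac{\int_\Sigma H_t\,\omega}{\int_\Sigma \omega}, \]
which generates the same Hamiltonian isotopy since the subtracted term depends only on t.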
Definition 4.1. Two mean-normalized Hamiltonians H_0, H_1 are said to be homotopic if there exists a smooth path of Hamiltonians {H^s}_{s∈[0,1]} connecting H_0 to H_1 such that each H^s is mean-normalized and φ_{H^s} = φ_{H_0} = φ_{H_1} for all s.
Lemma 4.2. If two mean-normalized Hamiltonian functions H, K are homotopic, then Spec(H : L) = Spec(K : L).
Proof. Fix a base point x = (x_1, …, x_d). Let {φ_{s,t} := φ^t_{H^s}}_{s,t∈[0,1]} be the homotopy, so that φ_{0,t} = φ^t_H, φ_{1,t} = φ^t_K and φ^1_{H^s} = φ_H = φ_K for all s ∈ [0,1]. For fixed t, {φ_{s,t}}_{s∈[0,1]} is also a family of Hamiltonian symplectomorphisms. Let F^s_t be the Hamiltonian function in the s-direction, i.e.,
\[ X_{F^s_t} = \partial_s\varphi_{s,t} \circ \varphi_{s,t}^{-1}. \]
The function F^s_t is unique if we require that it is mean-normalized. Note that X_{F^s_t} = 0 along t = 0, 1 because φ_{s,0} = id and φ_{s,1} = φ_H = φ_K = φ. By the mean-normalization condition, we have F^s_0 = F^s_1 = 0. Let u_i(s,t) = (s, t, φ ∘ φ_{s,t}^{-1}(x_i)). Then u := ∪_{i=1}^d u_i represents a class A_0 ∈ H_2(E, x_K, x_H), and this induces an isomorphism Ψ_{A_0} : CF(Σ, φ_H(L), L, x) → CF(Σ, φ_K(L), L, x) mapping (y,[A]) to (y,[A_0 # A]). Since u is a disjoint union of strips, we have J_0(A) = J_0(A_0 # A). By a direct computation, we have
\[ \int u_i^*\omega = \int_0^1\!\!\int_0^1 \omega\big(\partial_s\varphi^{-1}_{s,t}(x_i), \partial_t\varphi^{-1}_{s,t}(x_i)\big)\, ds\wedge dt = \int_0^1\!\!\int_0^1 \omega\big(X_{F^s_t}(x_i), X_{H^s_t}(x_i)\big)\, ds\wedge dt = \int_0^1\!\!\int_0^1 \{F^s_t, H^s_t\}(x_i)\, ds\wedge dt. \]
Because H, K are mean-normalized, we have ∂_s H^s_t − ∂_t F^s_t − {F^s_t, H^s_t} = 0 (see (18.3.17) of [32]). Therefore,
\[ \int u_i^*\omega = \int_0^1\!\!\int_0^1 \big(\partial_s H^s_t(x_i) - \partial_t F^s_t(x_i)\big)\, ds\wedge dt = \int_0^1 H^1_t(x_i)\,dt - \int_0^1 H^0_t(x_i)\,dt = \int_0^1 K_t(x_i)\,dt - \int_0^1 H_t(x_i)\,dt. \]
This implies that A^η_K(y, [Ψ_{A_0}(A)]) = A^η_H(y, [A]). In particular, Spec(H : L) = Spec(K : L).
Proof of Theorem 1
Proof.
1. Suppose that ϕ H is nondegenerate. Then Spec(H : L) is a discrete set over R. The spectrality follows directly from the expression (2.15). For the case that ϕ H is degenerate, the statement can be deduced from the limit argument in [31].
2. To prove the Hofer-Lipschitz, we first need to construct a Lagrangian cobordism so that we could estimate the energy of holomorphic curves.
Let χ(s) : R_s → R be a non-decreasing cut-off function such that
\[ \chi(s) = \begin{cases} 0 & s \le -R_0,\\ 1 & s \ge R_0. \end{cases} \tag{4.24} \]
Let H_s := χ(s)H_+ + (1 − χ(s))H_−. Define a diffeomorphism
\[ F : \mathbb{R}\times[0,1]\times\Sigma \to \mathbb{R}\times[0,1]\times\Sigma, \qquad (s,t,x) \mapsto \big(s, t, \varphi_{H_s}\circ(\varphi^t_{H_s})^{-1}(x)\big), \]
and set L := F(R × {0,1} × L), ω_E := (F^{-1})^*(ω + d(H^s_t\,dt)) and Ω_E = ω_E + ds ∧ dt. Then L ⊂ R × {0,1} × Σ is a disjoint union of Ω_E-Lagrangians such that
\[ L|_{s\ge R_0} = \mathbb{R}_{s\ge R_0}\times\big(\{0\}\times\varphi_{H_+}(L)\cup\{1\}\times L\big), \qquad L|_{s\le -R_0} = \mathbb{R}_{s\le -R_0}\times\big(\{0\}\times\varphi_{H_-}(L)\cup\{1\}\times L\big). \]
Let A_ref = F(R × [0,1] × {x}) ∈ H_2(E_1, x_{H_+}, x_{H_-}). Take a generic J ∈ J_comp(E_1). Then we have a cobordism map HF_{A_ref}(E_1, Ω_E, L)_J, and it is the continuous morphism I^{H_+,H_-}_{0,0}.
Let u ∈ M_J(y_+, y_-) be an HF curve in (E_1, Ω_E, L). The energy of u satisfies
\[ \int u^*\omega_E = \int_{F^{-1}(u)} \omega + d_\Sigma H_s \wedge dt + \chi'(s)(H_+ - H_-)\,ds\wedge dt \ge \int_{F^{-1}(u)} \chi'(s)(H_+ - H_-)\,ds\wedge dt \ge d\int_0^1 \min_\Sigma (H_+ - H_-)\,dt. \]
The inequality in the second step follows from the same argument as in Lemma 3.8 of [4].
On the other hand, we have
\[ \int_{A_{ref}}\omega_E = \int_{A_+}\omega + \int u^*\omega_E - \int_{A_-}\omega, \qquad J_0(A_{ref}) = J_0(A_+) + J_0(u) - J_0(A_-), \]
due to the relation A_+ \# u \# (−A_-) = A_ref. Note that ∫_{A_ref} ω_E = ∫_0^1 H_+(t,x)dt − ∫_0^1 H_-(t,x)dt. Combining the above, we obtain
\[ d\int_0^1 \min_\Sigma (H_+ - H_-)\,dt \le \int u^*\omega_E + \eta J_0(u) = \mathcal{A}^\eta_{H_+}(y_+, A_+) - \mathcal{A}^\eta_{H_-}(y_-, A_-). \]
Fix a ≠ 0 ∈ HF(Σ, L). For any fixed δ, take a cycle c_+ ∈ CF(Σ, φ_{H_+}(L), L) representing (j^x_{H_+})^{-1}(a) and satisfying A^η_{H_+}(c_+) ≤ c_{L,η}(H_+, a) + δ. On the other hand, c_{L,η}(H_s, a) is continuous with respect to s; therefore, it must be constant. By assumption A.4, we have
\[ \mathrm{Spec}(H : L) = \Big\{ m_0\lambda + m_1(1 - k\lambda) + 2m_1\eta(d+g-1) + \sum_{i=1}^d \int_0^1 c_i(t)\,dt \ \Big|\ m_0, m_1 \in \mathbb{Z} \Big\} = \Big\{ m\lambda + \sum_{i=1}^d \int_0^1 c_i(t)\,dt \ \Big|\ m \in \mathbb{Z} \Big\}. \]
Define a family of Hamiltonian functions {H_s := sH}_{s∈[0,1]}. By the spectrality,
we have c_L(H_s, a) = m_0λ + \sum_{i=1}^d \int_0^1 s\,c_i(t)\,dt.
Here m 0 must be a constant due to the Hofer-Lipschitz continuity. We know that m 0 λ = c L (0, a) by taking s = 0. Then the Lagrangian control property follows from taking s = 1.
6. Let a, b ∈ HF(Σ, L). Take
\[ \Omega_2 = \omega + \omega_{D_2}, \qquad L_2 = (\partial_1 D_2 \times \varphi_H\circ\varphi_K(L)) \cup (\partial_2 D_2 \times \varphi_H(L)) \cup (\partial_3 D_2 \times L). \]
Then
\[ \mu_2 : HF(\Sigma, \varphi_H\circ\varphi_K(L), \varphi_H(L)) \otimes HF(\Sigma, \varphi_H(L), L) \to HF(\Sigma, \varphi_H\circ\varphi_K(L), L). \]
Let us first consider the following special case: Suppose that there is a base point x = (x_1, …, x_d) ∈ L such that, for 1 ≤ i ≤ d,
\[ d_\Sigma H_t(x_i) = d_\Sigma K_t(x_i) = 0 \quad\text{and}\quad \nabla^2 H_t(x_i),\ \nabla^2 K_t(x_i) \ \text{are non-degenerate}. \]
This assumption implies that φ^t_H(x_i) = x_i, φ^t_K(x_i) = x_i and d_Σ(H_t ⋄ K_t)(x_i) = 0.
In particular, x is a non-degenerate Reeb chord of ϕ H , ϕ K and ϕ H • ϕ K . Also, the reference chords become
x_H = φ_H(x_K) = x_{H⋄K} = x. Take A_ref = [D_2 × {x}] ∈ H_2(φ_H(x_K), x_H, x_{H⋄K}) to be the reference class. Let u ∈ M_J((y_1,[A_1]) ⊗ (y_2,[A_2]), (y_0,[A_0])) be an HF curve with I = 0. Here the relative homology classes satisfy A_1 # A_2 # u # (−A_0) = A_ref. Therefore, the energy and J_0 index of u satisfy
\[ \int u^*\omega = -\int_{A_1}\omega - \int_{A_2}\omega + \int_{A_0}\omega + \int_{A_{ref}}\omega, \qquad J_0(A_1) + J_0(A_2) + J_0(u) - J_0(A_0) = J_0(A_{ref}). \tag{4.29} \]
Take J ∈ J_comp(E_2). Then ∫u^*ω ≥ 0, and by Lemma 3.4, J_0(u) ≥ 0. Combine these facts with (4.28) and (4.29). Since the above construction works for any δ, we can take δ → 0.
Since the normalizations of H ⋄ K and H#K are homotopic, we can replace H ⋄ K in the triangle inequality by H#K.
7. By the triangle inequality, we have c L,η (0, e) = c L,η (0, µ 2 (e ⊗ e)) ≤ c L,η (0, e) + c L,η (0, e).
Hence, we get c L,η (0, e) ≥ 0. On the other hand, Lemma 3.8 and (2.15) imply that c L,η (0, e) ≤ 0.
8. The proof of the Calabi property relies on the Hofer-Lipschitz and the Lagrangian control properties. We have obtained these properties. One can follow the same argument in [7] to prove the Calabi property. We skip the details here.
Open-closed morphisms
In this section, we prove Theorem 2. Most of the arguments here are parallel to [15] and the counterparts of the closed-open morphisms [13]. Therefore, we will just outline the construction of the open-closed morphisms and the proof of partial invariance. We will focus on proving the non-vanishing of the open-closed morphisms.
To begin with, let us introduce the open-closed symplectic manifold and the Lagrangian submanifolds. The construction follows [15]. Define a base surface B ⊂ R_s × (R_t/(2Z)) by B := R_s × (R_t/(2Z)) − B_c, where B_c is (2, ∞)_s × [1,2]_t with the corners rounded; see Figure 3.
Figure 3: The open-closed surface.
Let Y_{φ_H} := [0,2] × Σ/(0, φ_H(x)) ∼ (2, x) be the mapping torus of φ_H. Then π : R_s × Y_{φ_H} → R_s × (R_t/(2Z)) is a surface bundle over the cylinder. Define a surface bundle W_H by π_W = π|_W : W_H := π^{-1}(B) → B. The symplectic form Ω_H on W_H is defined to be the restriction of ω_{φ_H} + ds ∧ dt. Note that W_H is diffeomorphic (preserving the fibration structure) to B × Σ, so we denote W_H by W when the context is clear. We place a copy of L on the fiber π_W^{-1}(3, 1) and take its parallel transport along ∂B using the symplectic connection. The parallel transport sweeps out an Ω_H-Lagrangian submanifold L_H in W, and L_H consists of d disjoint connected components. Moreover, we have
\[ L_H|_{\{s\ge 3\}\times\{0\}} = \mathbb{R}_{s\ge 3}\times\{0\}\times\varphi_H(L), \qquad L_H|_{\{s\ge 3\}\times\{1\}} = \mathbb{R}_{s\ge 3}\times\{1\}\times L. \]
We call the triple (W_H, Ω_H, L_H) an open-closed cobordism.
Definition 5.1 (Definition 5.4.3 of [15]). Fix a Reeb chord y and an orbit set γ with degree d. Let (Ḟ, j) be a Riemann surface (possibly disconnected) with punctures. A d-multisection in W is a smooth map u : (Ḟ, ∂Ḟ) → W such that
1. u(∂Ḟ) ⊂ L_H. Write L_H = ∪_{i=1}^d L^i_H, where L^i_H is a connected component of L_H. For each 1 ≤ i ≤ d, u^{-1}(L^i_H)
consists of exactly one component of ∂Ḟ .
2. u is asymptotic to y as s → ∞.
3. u is asymptotic to γ as s → −∞.
4. F u * ω ϕ H < ∞.
A J-holomorphic d-multisection is called an HF-PFH curve. We remark that the HF-PFH curves are simple because they are asymptotic to Reeb chords. This observation is crucial in the proof of Theorem 2.
Let Z_{y,γ} := L_H ∪ ({∞} × y) ∪ ({−∞} × γ) ⊂ W.
We denote H 2 (W, y, γ) the equivalence classes of continuous maps u : (Ḟ , ∂Ḟ ) → (W, Z y,γ ) satisfying 1), 2), 3) in the above definition. Two maps are equivalent if they represent the same element in H 2 (W, Z y,γ ; Z). Note that H 2 (W, y, γ) is an affine space of H 2 (W, L H ; Z). The difference of any two classes can be written as
\[ Z' - Z = \sum_{i=1}^k c_i[B_i] + m[\Sigma] + [S], \]
where [B i ] is the class represented by the parallel translation of B i and [S] is a class in the
H 1 (S 1 , Z) ⊗ H 1 (Σ, Z)-component of H 2 (Y ϕ H , Z).
Fix a nonvanishing vector field on L. This gives a trivialization τ of T Σ| L . We extend it to T Σ| L H by using the symplectic parallel transport. We then extend the trivialization of T Σ| L H in an arbitrary manner along {∞} × y and along {−∞} × γ. Then we can define the relative Chern number c 1 (u * T Σ, τ ). This is the obstruction of extending τ to u.
Define a real line bundle L of T Σ along L H ∪ {∞} × y as follows. We set L| L H := T L H ∩T Σ. Then extend L across {∞}×y by rotating in the counterclockwise direction from T ϕ H (L) to T L in T Σ by the minimum amount. With respect to the trivialization τ , we have Maslov index for the bundle pair (u * L, u * T Σ), denoted by µ τ (u).
The Fredholm index for an HF-PFH curve is
\[ \operatorname{ind} u := -\chi(\dot F) - d + 2c_1(u^*T\Sigma, \tau) + \mu_\tau(u) - \mu^{ind}_\tau(\gamma). \]
The notation μ^{ind}_τ(γ) is explained as follows. Let γ = {(γ_i, m_i)}. Suppose that for each i, u has k_i negative ends and each end is asymptotic to some cover γ_i^{q_j}, so that the total multiplicity is m_i = \sum_{j=1}^{k_i} q_j. Define
\[ \mu^{ind}_\tau(\gamma) := \sum_i \sum_{j=1}^{k_i} CZ_\tau(\gamma_i^{q_j}), \]
where CZ_τ is the Conley-Zehnder index. Given Z ∈ H_2(W, y, γ), the ECH index is (see Definition 5.6.5 of [15])
\[ I(Z) := c_1(TW|_Z, \tau) + Q_\tau(Z) + \mu_\tau(Z) - \mu^{ech}_\tau(\gamma) - d, \qquad \text{where } \mu^{ech}_\tau(\gamma) := \sum_i \sum_{p=1}^{m_i} CZ_\tau(\gamma_i^p). \]
We define the J_0 index of Z by
\[ J_0(Z) := -c_1(TW|_Z, \tau) + Q_\tau(Z) - \mu^{J_0}_\tau(\gamma), \qquad \text{where } \mu^{J_0}_\tau(\gamma) = \sum_i \sum_{p=1}^{m_i-1} \mu_\tau(\gamma_i^p). \]
The index inequalities still hold in the open-closed setting.
Theorem 5.2 (Theorem 5.6.9 of [15], Lemma 5.2 of [13]). Let u ∈ M_J(y, γ) be an irreducible HF-PFH curve in (W_H, Ω_H, L_H). Then we have
\[ I(u) \ge \operatorname{ind} u + 2\delta(u), \qquad J_0(u) \ge 2\big(g(F) - 1 + \delta(u)\big) + \#\partial F + |\gamma|, \]
where |γ| is a quantity satisfying |γ| ≥ 1 provided that it is nonempty. Moreover, I(u) = indu holds if and only if u satisfies the ECH partition condition. If u = ∪ a u a is an HF-PFH curve consisting of several (distinct) irreducible components, then
\[ I(u) \ge \sum_a I(u_a) + \sum_{a\ne b} 2\#(u_a\cap u_b), \qquad J_0(u) \ge \sum_a J_0(u_a) + \sum_{a\ne b} 2\#(u_a\cap u_b). \]
In particular, J 0 (u) ≥ 0 for an HF-PFH curve.
In this paper, we don't need the details on "ECH partition condition" and |γ|. For the readers who are interested in it, please refer to [18,19]. The proof of Theorem 5.2 basically is a combination of the relative adjunction formula and Hutchings's analysis in [19]. We omit the details here.
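As a simple illustration of the difference between the two total Conley-Zehnder sums above: if γ = {(γ_1, 2)} with γ_1 elliptic and u has a single negative end asymptotic to the double cover γ_1^2 (so k_1 = 1 and q_1 = 2), then
\[ \mu^{ind}_\tau(\gamma) = CZ_\tau(\gamma_1^2), \qquad \mu^{ech}_\tau(\gamma) = CZ_\tau(\gamma_1) + CZ_\tau(\gamma_1^2). \]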
Construction and invariance of OC
In this subsection, we outline the construction of the open-closed morphisms. Also, we will explain why it satisfies the partial invariance.
To begin with, we need the following lemma to rule out the bubbles.
Lemma 5.3. Let Z, Z′ ∈ H_2(W, y, γ) be relative homology classes such that Z′ − Z = m[Σ] + \sum_{i=1}^k c_i[B_i] + [S], where [S] ∈ H_1(S^1, Z) ⊗ H_1(Σ, Z). Then we have
\[ I(Z') = I(Z) + \sum_{i=1}^k 2c_i + 2m(k+1), \qquad J_0(Z') = J_0(Z) + 2m(d+g-1). \tag{5.31} \]
Proof. Using the same argument in Lemma 3.3 of [13], we know that adding B i to a relative homology class Z will increase the ECH index by 2 because the Maslov index of B i is 2. Also, the adding disks will not change the topology of the curves. Hence, J 0 is still the same. If we add [Σ] to Z, then the ECH index will increase by 2(k + 1). See the index ambiguity formula in Proposition 1.6 of [18]. Similarly, adding [S] doesn't change the ECH index and J 0 index.
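For instance (with values chosen purely for illustration), if k = 1, d = 3, g = 2 and Z′ − Z = [Σ] (so m = 1, c_1 = 0, [S] = 0), then (5.31) gives
\[ I(Z') = I(Z) + 2m(k+1) = I(Z) + 4, \qquad J_0(Z') = J_0(Z) + 2m(d+g-1) = J_0(Z) + 8. \]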
Fix a reference relative homology class Z_ref ∈ H_2(W, x_H, γ_ref). The open-closed morphism at the chain level is defined by
\[ OC_{Z_{ref}}(W_H, \Omega_H, L_H)_J(y, [A]) = \sum_{(\gamma, Z),\ I(Z)=0} \#\mathcal{M}_J(y, \gamma, Z)\,(\gamma, [Z]), \]
The class Z is characterized by A#Z#Z = Z ref . The arguments in [15] (also see the relevant argument for closed-open morphisms [13]) show that this is well defined and it is a chain map. The main difference here with [15] is that the bubbles would appear, but these can be ruled out by Lemma 5.3 and the argument in Proposition 3.5.
To prove the partial invariance, the arguments consist of the following key steps:
1. If we deform the open-closed morphism smoothly over a compact set of W (the deformation needs to be generic), then the standard homotopy arguments show that the open-closed morphism is invariant.
2. Assume that φ_H satisfies ♠.1 and φ_G satisfies ♠.2. Let (E_1, Ω_1, L_1) be a Lagrangian cobordism from (φ_G(L), L) to (φ_H(L), L). Let (X, Ω_X) be a symplectic cobordism from (Y_{φ_H}, ω_{φ_H}) to (Y_{φ_G}, ω_{φ_G}) defined by (2.9). Consider the R-stretched composition of (E_1, Ω_1, L_1), (W_H, Ω_H, L_H) and (X, Ω_X), denoted by (W_R, Ω_R, L_R). As R → ∞, the I = 0 HF-PFH curves in (W_R, Ω_R, L_R) converge to a holomorphic building. Under assumptions ♠.1, ♠.2, the holomorphic curves in (X, Ω_X) have nonnegative ECH index (see Section 7.1 of [11]). Combining this with Theorems 3.3 and 5.2, the holomorphic curves in each level have nonnegative ECH index. As a result, these holomorphic curves have zero ECH index; they are either embedded or branched covers of trivial cylinders. By Hutchings-Taubes's gluing argument [24,25], the open-closed morphism defined by (W_R, Ω_R, L_R) is equal to I^{G,H}_{0,0} ∘ (OC_{Z_ref}(W_H, Ω_H, L_H))_* ∘ PFH^{hol}_{Z_ref}(X, Ω_X) for R ≫ 1. Here PFH^{hol}_{Z_ref}(X, Ω_X)
is the PFH cobordism map defined by counting embedded holomorphic curves in X. By Theorem 3 in [11], we can replace it by P F H sw Z ref (X, Ω X ). Finally, by the homotopy invariance in step 1, we get the partial invariance.
For more details, we refer the readers to [13].
Computations of OC
In this subsection, we compute the open-closed morphism for a special Hamiltonian function H satisfying ♠.1. Using partial invariance, we deduce the non-vanishing result under the assumption ♠.2. The main idea here is also the same as [13].
Suppose that f is a Morse function satisfying M.1, M.2, M.3, and M.4. Define H′ = −ϵf, where 0 < ϵ ≪ 1. H′ is a slight perturbation of the height function in Figure 1. This is a nice candidate for computation because we can describe the periodic orbits and Reeb chords in terms of the critical points, and the indices of the holomorphic curves are computable. However, H′ does not satisfy ♠.1 or ♠.2, so we need to follow the discussion in Section 6.1 of [13] to modify H′. Fix numbers 0 < ϵ ≤ ϵ_0 ≪ 1 and δ, δ_0 > 0. By [13], we have a function ε : Σ → R such that 0 < ϵ ≤ ε ≤ ϵ_0 and the new autonomous Hamiltonian function H_ε = −εf satisfies the following conditions:
F.1 There is an open set U_{δ+δ_0} = ∪_p U^{δ+δ_0}_p such that H_ε|_{Σ−U_{δ+δ_0}} = H′|_{Σ−U_{δ+δ_0}}, where p runs over all the local maxima of −f and U^{δ+δ_0}_p is a (δ+δ_0)-neighbourhood of p.
F.2 H_ε is still a Morse function satisfying the Morse-Smale conditions. Moreover, Crit(H_ε) = Crit(−f).
F.3 φ_{H_ε} is d-nondegenerate. The periodic orbits of φ_{H_ε} with period at most d are covers of the constant orbits at critical points of H_ε.
F.4 For each local maximum p, φ_{H_ε} has a family of periodic orbits γ_{r_0,θ}(t) that foliates S^1_t × ∂U^{r_0}_p, where δ + δ_0 ≤ r_0 ≤ δ + 2δ_0. Moreover, the period of γ_{r_0,θ}(t) is strictly greater than d.
F.5 The Reeb chords of φ_{H_ε} still correspond to the critical points of ∪_{i=1}^d f_{L_i}; see (3.22).
By Proposition 3.7 of [11], we perturb H_ε to a new Hamiltonian function H′_ε (which may depend on t) such that it satisfies the following properties:
1. H′_ε|_{Σ−U_δ} = H_ε|_{Σ−U_δ}.
2. H′_ε still satisfies F.4 and F.5.
3. |H′_ε − H_ε| ≤ c_0δ and |dH′_ε − dH_ε| ≤ c_0δ.
4. The periodic orbits of φ_{H′_ε} with period less than or equal to d are either hyperbolic or d-negative elliptic. In other words, φ_{H′_ε} is d-nondegenerate and satisfies ♠.1.
Remark 5.1. Because we take H_ε = −εf, the maximum points {y^+_i} of f are the minimum points of H_ε. We use {y^-_i} to denote the minimum points of H_ε from now on.
Let y be a critical point of H ε . Let γ y denote the constant simple periodic orbit at the critical point y. We define PFH generators and a Reeb chord as follows:
1. Let I = (i_1, …, i_d). Here we allow i_j = i_k for j ≠ k. Let α_I = γ_{y^{i_1}_-} \cdots γ_{y^{i_d}_-}. When I = (1, 2, …, d), we denote α_I by α_♦. Here we use multiplicative notation to denote an orbit set instead.
2. y ♦ = [0, 1] × (y 1 − , ...y d − ).
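For instance, with d = 3 and I = (1, 1, 2), the notation above gives α_I = γ_{y^1_-}^2\,γ_{y^2_-}, while y_♦ = [0,1] × (y^1_-, y^2_-, y^3_-).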
Let α = γ p 1 · · · γ p d and β = γ q 1 · · · γ q d be two orbit sets, where p i , q j ∈ Crit(H ε ). Following [13], we define a relative homology class Z α,β as follows:
Let η = ∪_{i=1}^d η_i : ⊔_{i=1}^d [0,1] → Σ be a union of paths with d components such that η_i(1) = p_i and η_i(0) = q_i. Define a relative homology class
\[ Z_{\alpha,\beta} := [S^1 \times \eta] \in H_2(Y_{\varphi_{H_\epsilon}}, \alpha, \beta). \tag{5.32} \]
We also use this way to define Z α ∈ H 2 (Y ϕ Hε , α, γ x H ). Let J (W, Ω H ε ) ⊂ J tame (W, Ω H ε ) be a set of almost complex structures which are the restriction of admissible almost complex structures in J (Y ϕ H ε , ω ϕ H ε ).
Take a J ∈ J(W, Ω_{H_ε}). Let u_{y_i} be the restriction of R × γ_{y_i} to W. Obviously, it is a J-holomorphic curve in M_J(y_i, γ_{y_i}); it is called a horizontal section of (W, Ω_{H_ε}, L_{H_ε}, J). Moreover, it is easy to check from the definition that ind u_{y^-_i} = 0.
Lemma 5.4. Let u : F → W be a J-holomorphic HF-PFH curve in (W, Ω_H, L_H) and J ∈ J(W, Ω_H). Then the ω_{φ_H}-energy satisfies E_{ω_{φ_H}}(u) := ∫_F u^*ω_{φ_H} ≥ 0. Moreover, when H = H_ε, E_{ω_{φ_H}}(u) = 0 if and only if u is a union of the horizontal sections.
Proof. The proof is the same as that of Lemma 6.6 in [13].
The horizontal sections ∪ d i=1 u y i − represent a relative homology class Z hor . We take the reference relative homology class to be
Z ref = A y ♦ #Z hor #Z α ♦ ∈ H 2 (W, x H , γ x H ).
Lemma 5.5. For a generic J ∈ J(W, Ω_{H_ε}), we have
\[ OC_{Z_{ref}}(W, \Omega_{H_\epsilon}, L_{H_\epsilon})_J(y_\diamond, A_{y_\diamond}) = (\alpha_\diamond, Z_{\alpha_\diamond}) + (\beta, Z), \]
where (β, Z) satisfies β ≠ α_♦ and A^η_{H_ε}(α_♦, Z_{α_♦}) > A^η_{H_ε}(β, Z).
Proof. Consider the moduli space of HF-PFH curves M J 0 (y ♦ , α ♦ ) with I = 0. Let u ∈ M J 0 (y ♦ , α ♦ ). By Lemma 5.3, J 0 (u) = 2m(d + g − 1). Also, I(u) = 0 implies that k i=1 c i + m(k + 1) = 0. On the other hand,
E ω H ε (u) + ηJ 0 (u) = |d vert u| 2 + ηJ 0 (u) = k i=1 λc i + m + η2m(d + g − 1) = λ k i=1 c i + (k + 1)m = 0.
Theorem 5.2 and Lemma 5.4 imply that u = ∪ i u y i − is a union of horizontal sections. In other words, the union of horizontal sections is the unique element in M J 0 (y ♦ , α ♦ ). If u is an HF-PFH curve in M J 0 (y ♦ , β) and β = α ♦ , then E ωϕ H ε (u) > 0; otherwise, u is horizontal and u must be asymptotic to α ♦ . By Theorem 5.2, we have
0 < E ω H ε (u) + ηJ 0 (u) = Z ref ω ϕ H ε − Ay ♦ ω − Z ω ϕ H ε + η(J 0 (Z ref ) − J 0 (A y ♦ ) − J 0 (Z)) = Zα ♦ ω ϕ H ε − Z ω ϕ H ε + η(J 0 (Z α ♦ ) − J 0 (Z)) = A η H ε (α ♦ , Z α ♦ ) − A η H ε (β, Z).
Note that the above intersection numbers are well defined because γ r 0 ,θ and α I are disjoint. Because R × γ r 0 ,θ is holomorphic by the choice of J X , the above equality implies that C doesn't intersect R × γ r 0 ,θ . In particular, C is contained in the product region R × S 1 t × (Σ − U δ+δ 0 ). Then C ω X = 0 implies that C is a union of trivial cylinders (Proposition 9.1 of [18]). Thus we must have α I = α ♦ .
Lemma 5.8. Let J_X be a generic almost complex structure in J_comp(X, Ω_X) such that J_X is R-invariant in the product region R × S^1_t × (Σ − U_{δ+δ_0}). Then we have
\[ PFC^{sw}_{Z_{ref}}(X, \Omega_X)_{J_X}(\alpha_\diamond, Z_{\alpha_\diamond}) = (\alpha_\diamond, Z_{\alpha_\diamond}) + (\beta', Z'), \]
where (β′, Z′) satisfies \mathcal{A}_{H'_\epsilon}(\alpha_\diamond, Z_{\alpha_\diamond}) - \mathcal{A}_{H_\epsilon}(\beta', Z') \ge \frac{1}{4(k+1)} and β′ ≠ α_I.
Proof. By the holomorphic axioms (Theorem 1 of [11] and Appendix of [13]) and Lemma 5.7, we know that
\[ \langle PFC^{sw}_{Z_{ref}}(X,\Omega_X)_{J_X}(\alpha_\diamond, Z_{\alpha_\diamond}), (\alpha_I, Z)\rangle = 0 \ \text{ when } (\alpha_I, Z) \ne (\alpha_\diamond, Z_{\alpha_\diamond}), \quad\text{and}\quad \langle PFC^{sw}_{Z_{ref}}(X,\Omega_X)_{J_X}(\alpha_\diamond, Z_{\alpha_\diamond}), (\alpha_\diamond, Z_{\alpha_\diamond})\rangle = 1. \]
Assume that ⟨PFC^{sw}_{Z_ref}(X, Ω_X)_{J_X}(α_♦, Z_{α_♦}), (β′, Z′)⟩ = 1 for some (β′, Z′) with β′ ≠ α_I. Again by the holomorphic axioms, we have a holomorphic curve C ∈ M^{J_X}_0(α_♦, β′). It is easy to check that
\[ I(C) = -h(\beta') - 2e_+(\beta') + 2m(k+1) = 0, \qquad \mathcal{A}_{H'_\epsilon}(\alpha_\diamond, Z_{\alpha_\diamond}) - \mathcal{A}_{H_\epsilon}(\beta', Z) = -H'_\epsilon(\beta') + m, \tag{5.33} \]
where h(β′) is the total multiplicity of all the hyperbolic orbits in β′ and e_+(β′) is the total multiplicity of all the elliptic orbits at local maxima of H_ε. Because β′ ≠ α_I, we have h(β′) + 2e_+(β′) ≥ 1. Therefore, we have
\[ \mathcal{A}_{H'_\epsilon}(\alpha_\diamond, Z_{\alpha_\diamond}) - \mathcal{A}_{H_\epsilon}(\beta', Z') = -H'_\epsilon(\beta') + \frac{h(\beta') + 2e_+(\beta')}{2(k+1)} \ge \frac{1}{2(k+1)} + O(\delta_0) \ge \frac{1}{4(k+1)}. \]
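Concretely, for k = 1 (purely as an illustration) the above estimate gives a uniform action gap
\[ \mathcal{A}_{H'_\epsilon}(\alpha_\diamond, Z_{\alpha_\diamond}) - \mathcal{A}_{H_\epsilon}(\beta', Z') \ge \frac{1}{4(k+1)} = \frac{1}{8}. \]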
Lemma 5.9. Let (β, Z) be a factor of c given in Lemma 5.5. Let J X be the almost complex structure in Lemma 5.8. Then we have
\[ PFC^{sw}_{Z_{ref}}(X, \Omega_X)_{J_X}(\beta, Z) = (\beta', Z'), \]
where (β′, Z′) satisfies A^η_{H_ε}(α_♦, Z_{α_♦}) − A^η_{H_ε}(β′, Z′) > 0 and β′ ≠ α_I.
Proof. First, we show that β cannot be α I . Assume that
< P F C sw Z ref (X, Ω X ) J X (β, Z), (α I , Z ) >= 1.
Then we have a broken holomorphic curve C = (C, C 0 ), where C ∈ M J 0 (y ♦ , β) is an HF-PFH curve and C 0 ∈ M J X 0 (β, α I ). The holomorphic curve gives us a relative homology class Z ∈ H 2 (W, y ♦ , α I ).
Reintroduce the periodic orbits γ i r 0 ,θ 0 near the local maximums of H ε . The superscript "i" indicates that the local maximum lies in the domainB i , where 1 ≤ i ≤ k + 1. In particular, γ i r 0 ,θ 0 lies in S 1 ×B i . Define a curve v i := (R × γ i r 0 ,θ 0 ) ∩ W . Note that it is J-holomorphic and ∂v i is disjoint from the Lagrangian L H ε . Then for any relative homology class Z ∈ H 2 (W, y ♦ , α I ), we have a well-defined intersection number
n i (Z ) := #(Z ∩ v i ).
The relative homology class Z ∈ H 2 (W, y ♦ , α I ) can be written as
\[ Z = Z_{hor}\#Z_{\alpha_\diamond,\alpha_I} + \sum_{i=1}^k c_i[B_i] + m[\Sigma] + [S], \]
where [S] ∈ H_1(S^1, Z) ⊗ H_1(Σ, Z) and Z_{hor} is the class represented by the union of horizontal sections. By (5.31), the ECH index of Z is
\[ I(Z) = \sum_i 2c_i + 2m(k+1) = I(C) + I(C_0) = 0. \tag{5.34} \]
Let q_i denote the period of γ^i_{r_0,θ_0}. From the construction in [13], the period of γ^i_{r_0,θ_0} is determined by the function ε. For a suitable choice of ε, we can choose q_i = q for 1 ≤ i ≤ k+1. By definition, we have
\[ n_i(Z_{hor}\#Z_{\alpha_\diamond,\alpha_I}) = 0, \quad n_i([B_j]) = \delta_{ij}q, \quad n_i([S]) = 0, \quad n_i([\Sigma]) = q \tag{5.35} \]
for 1 ≤ i, j ≤ k+1. From (5.34) and (5.35), we know that
\[ \#\Big(C\cap\Big(\bigcup_{i=1}^{k+1} v_i\Big)\Big) = \sum_{i=1}^{k+1} n_i(Z) = \sum_{i=1}^k c_i q + (k+1)mq = 0. \]
By the intersection positivity of holomorphic curves, C doesn't intersect R × γ i r 0 ,θ 0 . In particular, C 0 lies inside the product region of X. Therefore, C 0 ω ϕ H ε ≥ 0 and J 0 (C 0 ) ≥ 0. By Theorem 5.2, J 0 (Z) = J 0 (C) + J 0 (C 0 ) ≥ 0. By (5.34), Lemmas 5.3, 5.4, we have
Z ω ϕ H ε + ηJ 0 (Z) = Z hor #Zα ♦ ,α I ω ϕ H ε + λ k i=1 c i + m + 2mη(d + g − 1) = λ k i=1 c i + m(k + 1) = 0, Z ω ϕ H ε + ηJ 0 (Z) ≥ C ω ϕ H ε + C 0 ω ϕ H ε > 0.
We obtain a contradiction. Now we consider the case that < P F C sw Z ref (X, Ω X ) J X (β, Z), (β , Z ) >= 1 and β = α I . As before, we have a broken holomorphic curve C = (C, C 0 ), where C ∈ M J 0 (y ♦ , β) is an HF-PFH curve and C 0 ∈ M J X (β, β ).
Suppose that β has E + distinct simple orbits (ignoring the multiplicity) at the local maximums and E − distinct simple orbits at the local minimums. Similar as (5.33), we have 0 = I(C) = I(C) + I(C 0 ) = −h(β ) − 2e + (β ) + k i=1 2c i + 2m(k + 1)
J 0 (C) = d − h(β ) − 2e + (β ) + E + − E − + 2m(d + g − 1) A H ε (y ♦ , A y ♦ ) − A Hε (β , Z ) = C ω ϕ H ε +
Proof. Let (X − , Ω X − ) be the symplectic cobordism from (Y ϕ Hε , ω ϕ Hε ) to ∅ in (2.10). Fix a generic J X − ∈ J comp (X − , Ω X − ). Using the same argument as in [12], we define a homomorphism P F C hol Z ref (X − , Ω X − ) J X − : P F C(Σ, ω ϕ Hε , γ x Hε ) → Λ, by counting I = 0 (unbroken) holomorphic curves in (X − , Ω X − ). Moreover, this is a chain map. Therefore, P F C hol Z ref (X − , Ω X − ) J X − induces a homomorphism in homology level:
P F H hol Z ref (X − , Ω X − ) J X − : P F H(Σ, ω ϕ Hε , γ x Hε ) → Λ,
Using Taubes's techniques [33,34] and C. Gerig's generalization [10], P F H hol Z ref (X − , Ω X − ) J X − should agree with the PFH cobordism map P F H sw Z ref (X − , Ω X − ) J X − (see Remark 1.3 of [12]). But we don't need this to prove the lemma.
To show that c is non-exact, it suffices to prove P F C hol Z ref (X − , Ω X − ) J X − (c ) = 0. In [12], the author computes the map P F C hol Z ref (X, Ω X ) J X for the elementary Lefschetz fibration (a symplectic fibration over a disk with a single singularity). The current situation is an easier version of [12]. By the argument in [12], we have
\[ PFC^{hol}_{Z_{ref}}(X_-, \Omega_{X_-})_{J_{X_-}}(\alpha_I, Z_I) = 1, \qquad PFC^{hol}_{Z_{ref}}(X_-, \Omega_{X_-})_{J_{X_-}}(\beta', Z') = 0 \ \text{ for } (\beta', Z') \ne (\alpha_I, Z_I). \]
(5.36) Therefore, Lemmas 5.8 and 5.9 imply that P F C hol Z ref (X − , Ω X − ) J X − (c ) = 1. Here let us explain a little more about how to get (5.36). Basically, the idea is the same as Lemma 3.8. From the computation of the ECH index, we know that I = 0 implies that the holomorphic curves must be asymptotic to α I . Also, the energy is zero. Therefore, the unbranched covers of the horizontal sections are the only curves that contribute to P F C hol Z ref (X − , Ω X − ) J X − , and this leads to (5.36). Even these holomorphic curves may not be simple, they are still regular (see [9]).
Spectral invariants
In this section, we assume that the link L is 0-admissible. Fix a base point x = (x 1 , · · · x d ). Define a reference 1-cycle γ x H := Ψ H (S 1 × x). Let (X, Ω X ) symplectic cobordism (2.8) from (Y ϕ H , ω ϕ H ) to (Y ϕ G , ω ϕ G ). Let Z H,G
Comparing the PFH and HF spectral invariants
Homogenized spectral invariants
Let \widetilde{Ham}(Σ, ω) be the universal cover of Ham(Σ, ω). An element of \widetilde{Ham}(Σ, ω) is a homotopy class of paths in Ham(Σ, ω) from id to φ; for a class φ̃ represented by a path generated by a mean-normalized Hamiltonian H, define c_L(φ̃, a) := c_L(H, a). By Theorem 1, this is well defined. Thus, the HF spectral invariants descend to invariants of φ̃ ∈ \widetilde{Ham}(Σ, ω). But in general, the spectral invariants cannot descend to Ham(Σ, ω); this is also true for the PFH spectral invariants. To obtain numerical invariants for the elements in Ham(Σ, ω) rather than its universal cover, we need the homogenized spectral invariants. It is well known that \widetilde{Ham}(Σ, ω) = Ham(Σ, ω) when g(Σ) ≥ 1. Therefore, we only consider the case Σ = S^2. Fix φ ∈ Ham(Σ, ω). We define the homogenized HF spectral invariant by
\[ \mu_{L,\eta}(\varphi, a) := \limsup_{n\to\infty} \frac{c_{L,\eta}(\tilde\varphi^n, a)}{n}. \]
These two inequalities imply that μ_{L,η=0} is a quasimorphism with defect 1; so is μ^{pfh}_d.
Shenzhen University. E-mail address: [email protected]
Recently, D. Cristofaro-Gardiner, V. Humilière, and S. Seyfaddini use a twisted version of PFH to define a family of numerical invariants c pf h d : C ∞ (S 1 × Σ) × P F H(Σ, ϕ, γ 0 ) → {−∞} ∪ R.
c link L,η : C ∞ ([0, 1] × Σ) × HF (Sym d L) → {−∞} ∪ R,
Φ
H : HF (Σ, ϕ H (L), L) → HF (Sym d Σ, Sym d L, Sym d ϕ H ). (1.1) Therefore, this can be viewed as an alternative formulation of the quantitative Heegaard Floer homology. When the context is clear, we also call it QHF. It serves as a bridge between the QHF and PFH. The author establishes a homomorphism from PFH to QHF (CO Z 0 (W, Ω H , L H ) J ) * : P F H(Σ, ϕ H , γ 0 ) J → HF (Σ, ϕ H (L), L, x) J (1.2) which is called the closed-open morphism. The map (1.2) is an analogy of the usual closed-open morphism from the symplectic Floer homology to Lagrangian Floer homology defined by P. Albers
Theorem 1 .(
1The spectral invariant c L,η : C ∞ ([0, 1] × Σ) × HF (Σ, L) → {−∞} ∪ R satisfies the following properties: 1. (Spectrality) For any H and a = 0 ∈ HF (Σ, L), we have c L,η (H, a) ∈ Spec(H : L). 2. (Hofer-Lipschitz) For a = 0 ∈ HF (Σ, L), we have H t − K t )dt ≤ c L,η (H, a) − c L,η (K, a) ≤ d 1 0 max Σ (H t − K t )dt. 3. (Shift) Fix a = 0 ∈ HF (Σ, L). Let c : [0, 1] t → R be a function only dependent on t. Then c L,η (H + c, a) = c L,η (H, a) invariance) Let H, K are two mean-normalized Hamiltonian functions. Suppose that they are homotopic in the sense of Definition 4.1. Then c L,η (H, a) = c L,η (K, a).5. (Lagrangian control) If H t | L i = c i (t) for i = 1, .., d, then c L,η (H, a) = c L,η (0, a) dt + c L,η (0, a) ≤ c L,η (H, a) ≤ c L,η (0, a) Triangle inequality) For any Hamiltonian functions H, K and a, b ∈ HF (Σ, L), we have c L,η (H#K, µ 2 (a ⊗ b)) ≤ c L,η (H, a) + c L,η (K, b), where µ 2 is the quantum product defined in Section 3. 7. (Normalization) For the unit e, we have c L,η (0, e) = 0. 8. (Calabi property) Let {L m } ∞ m=1 be a sequence of η-admissible links. Suppose that {L m } ∞ m=1 is equidistributed in the sense of [7]. Let d m denote the number of components of L m . Then for a = 0 ∈ HF (Σ, L), we have lim m→∞ 1 d m c L m ,η (H, a) − c L m ,η (0, dt ∧ ω.
Corollary 1. 3 .
3Suppose that L is 0-admissible and Σ = S 2 . Then for any Hamiltonian function H, we have
1 0
1H t (x)dt and c L,η (H, a) are independent of the choice of the base point. See discussion in Section 7 of[13].
Theorem 3 .
3The homogenized spectral invariants µ pf h d : Ham(S 2 , ω) → R are homogeneous quasimorphisms with defect 1.
An orbit set is a finite set of pairs γ = {(γ i , m i )}, where {γ i } are distinct embedded periodic orbits and {m i } are positive integers. An orbit set is called a PFH generator if it satisfies a further condition: If γ i is hyperbolic, then m i = 1.
may depend on H and x a priori. Therefore, the relation (2.16) is not strong enough to transfer all the properties of c link L to c L .
Figure 2: A picture of the case m = 5.
K 1 ,
1K 2 is an isomorphism.The direct limit of HF (Σ, ϕ H (L), ϕ K (L), x) is denoted by HF (Σ, L). Because HF (Σ, ϕ H (L), ϕ K (L), x) is independent of x, so is HF (Σ, L). We have a canonicalisomorphism j x H,K : HF (Σ, ϕ H (L), ϕ K (L), x) → HF (Σ, L). (3.20)that is induced by the direct limit.Let H be a Hamiltonian function. We consider another homomorphismI H : CF (Σ, ϕ K (L), L) → CF (Σ, ϕ H#K (L), ϕ H (L)) (3.21) defined by mapping (y, [A]) to (ϕ H (y), [ϕ H (A)]). Obviously, it induces an isomorphism (I H ) * in the homology level.We call it the naturality isomorphism. In the following lemma, we show that it agrees with the continuous morphism.
F
{Hs} :R × [0, 1] × Σ → R × [0, 1] × Σ (s, t, x) → (s, t, ϕ Hs (x)) Let L = R×(({0}×ϕ K (L))∪({1}×L)) be Lagrangians in (R×[0, 1]×Σ, Ω = ω+ds∧dt). Then F {Hs} (L) is a disjoint union (F −1{Hs} ) * Ω-Lagrangian submanifolds such that
H 1
1#K 2 ,H 2 #K 2 H 1 ,H 2 by counting the holomorphic curves in (R × [0, 1] × Σ, (F −1 {Hs} ) * Ω, I {Hs} (L)). Similar as the previous case, the map F {Hs} establishes a 1-1 correspondence between the curves in (R × [0, 1] × Σ, Ω, L) and curves in (R × [0, 1] × Σ, (F −1 {Hs} ) * Ω, I {Hs} (L)). This gives us the second diagram. To see (I H 1 ) * = I H 1 #K 1 ,K 1 H 1 ,0
c + ) ≤ c L,η (H + , a) + δ. Let c − = I H + ,H − 0,0 (c + ). Then it is a cycle represented (j x H − ) −1 (a). Take a factor (y − , [A − ]) in c − such that A η H (y − , [A − ]) = A η H (c − ). Find a factor (y + , [A + ]) in c + such that < I H + ,H − 0,0 (y + , [A + ]), (y − , [A − ]) >= 1. Then the above + − H − )dt ≤ c L,η (H + , a) − c L,η (H − , a) + δ. Take δ → 0. Interchange the positions of H + and H − ; then we obtain the Hofer-Lipschitz property. 3. Since H and K are homotopic, we have a family of Hamiltonian functions {H s t } s∈[0,1] with H 0 t = H t and H 1 t = K t . By Lemma 4.2, we have Spec(H : L) = Spec(H s : L) = Spec(K : L).
4 .
4Define a family of functions H s = H + sc. Note that A η H+sc (y, A) = A η H (y, A) + s 1 0 c(t)dt. Therefore, c L,η (H s , a) − s 1 0 c(t)dt ∈ Spec(H : L). By the Hofer-Lipschitz property, c L,η (H s , a) − s 1 0 c(t)dt is a constant. Taking s = 0, we know that the constant is c L,η (H, a).
5 .
5If H t | L i = c i (t), then ϕ H (L i ) = L i .The Reeb chords are corresponding to y ∈ L.
Let H ε be the function satisfying F.1, F.2, F.3, F.4, F.5 . Let H ε be the perturbation. Let y ♦ = [0, 1] × (y 1 − , ..y d − ) be the Reeb chord of ϕ H ε , where {y i − } d i=1 are minimums of H ε . By Lemma 5.6, [(y ♦ , [A y ♦ ])] represents a class e ♦ ∈ HF (Σ, L), i.e., e ♦ = j x H ε ([(y ♦ , [A y ♦ ])]). Let e be the unit of HF (Σ, L). Reintroduce c + L (H) := c L,η=0 (H, e) and c − L (H) := c L,η=0 (H, e ♦ ).
1 ×x] ∈ H 2 (X, γ x H , γ x G ) be a reference class. Given another base point x , let η be d union of paths starting from x and ending atx . Let Z x ,x = [S 1 × η] ∈ H 2 (X, γ x H , γ x H ). The cobordism map P F H sw Z H,G x (X, Ω X ) only depends on x, H, G. Thus we denote it by I x H,G . The cycle c = (α ♦ , Z α ♦ ) + (β, Z) in Lemma 5.5 represents a class σ ♦ = 0 ∈ P F H(Σ, ϕ H ε , γ H ε ). Define σ x ♦H := Ψ Zy ♦ ,x • I y ♦ H ε ,H (σ ♦ ).Let H − and H + be Hamiltonian functions satisfying conditions ♠.1 and ♠.2 respectively. Let W + be the open-closed cobordism and W − the closed-open cobordism. Then by[13] and Theorem 2, we haveCO Z − ref (W − , ϕ H − , L H − ) * (σ x ♥H − ) = (j x H − ) −1 (e) OC Z + ref (W + , Ω H + , L H + ) * ((j x H + ) −1 (e ♦ )) = σ x ♦H + . Proof of Corollary 1.2. The inequality c + L (H) ≤ c pf h d (H, σ x ♥H , γ x t (x)dt ≤ c − L (H), it suffices to prove the inequality for a Hamiltonian function H satisfying ♠.2 because of the Hofer-Lipschitz property and Remark 1.3. By using Theorem 2 and the argument in Section 7 of[13], we obtain the inequality.By definition, (y ♦ ,A y ♦ ) represents (j x H ε ) −1 (e ♦ ) = (j x − f ) −1 (e ♦ ). Therefore,c L (− f, e ♦ ) ≤ A − f (y ♦ , A y ♦ ) = O( ).Let → 0. We have c L (0, e ♦ ) ≤ 0. By the triangle inequality, we havec − L (H) ≤ c + L (H) + c L (0, e ♦ ) ≤ c + L (H).
A element in Ham(Σ, ω) is a homotopy class of paths {ϕ t } t∈[0,1] ⊂ Ham(Σ, ω) with fixed endpoints ϕ 0 = id and ϕ 1 = ϕ. Letφ ∈ Ham(Σ, ω) be a class represented by a path generated by a meannormalized Hamiltonian H. Define c L (φ, a) := c L (H, a).
µ
L,η (ϕ, a) := lim sup n→∞ c L,η (φ n , a) n , −c pf h d (H, σ x ♥H ) = c pf h d (H, σ x ♦H ) ≥ c pf h d (H, σ x ♥H ) − 1.By Corollary 1.3, we get the second inequality for c + L .Proof of Theorem 3. By Theorem 1, we havec + L (H) + c + L (K) =c + L (H) + c + L (H H K) ≤c + L (H) + c + L (H) + c + L (H K) ≤ c + L (H K) + 1.
η=0 (H, e ♦ ) and c + L (H) := c L,η=0 (H, e).Corollary 1.2. Suppose that the link L is 0-admissible. For any Hamiltonian function
H, let σ x
♥H ∈ P F H(Σ, ϕ H , γ x
H ) be the class defined in Section 7 of [13]. We have
and σ : {1, ..., d} → {1, ..., d} is a permutation. Obviously, a Reeb chord is determined by d-distinct intersection points (y 1 , ...y d ). Thus, we don't distinguish the Reeb chords and d-intersection points. Fix a base point x
,2 . We call it a continuous morphism, denoted by I H 1 ,H 2 K 1 ,K 2 . The continuous morphisms satisfy I H 2 ,H 3K 2 ,K 3 • I H 1 ,H 2
K 1 ,K 2 = I H 1 ,H 3
K 1 ,K 3 . Thus, I H 1 ,H 2
). Also, the cobordism map induces I H 1 #K 1 ,H 1 #K 2 H 1 ,H 1 . Hence, the first diagram is true. To prove the second diagram, the idea is the same. Let H s : [0, 1] × Σ → R be a family of Hamiltonian functions such that H s = H 1 for s ≥ R 0 and H s = H 2 for s ≤ −R 0 . Define a diffeomorphism
. For 1 ≤ i ≤ m, u is asymptotic to y i as s → ∞.3. u is asymptotic to y 0 as s → −∞.4. F u * ω Em < ∞.
t K δ t | ≤ c 0 δ,Apply the triangle inequality to H δ t , K δ t , H δ t K δ t . By the Hofer-Lipschitz continuity, we have c L,η (H K, µ 2 (a ⊗ b)) ≤ c L,η (H, a) + c L,η (K, b) + O(δ).
.(4.30)Assume that µ 2 (a⊗b) = 0. Let c 0 ∈ CF (Σ, ϕ H •ϕ K (L), L), c 1 ∈ CF (Σ, ϕ H (L), L) and c 2 ∈ CF (Σ, ϕ H •ϕ K (L), ϕ H (L)) be cycles represented j −1 H K (µ 2 (a⊗b)), j −1 H (a) and j −1 H K,H (b) respectively. By Lemma 3.6, ϕ −1 H (c 2 ) is a cycle represented j −1 K (b). We choose c 1 , c 2 such thatTherefore,(4.30)implies that A η H K (c 0 ) ≤ A η H (c 1 ) + A η K (ϕ −1 H (c 2 )). Take δ → 0. We have c L,η (H K, µ 2 (a ⊗ b)) ≤ c L,η (H, a) + c L,η (K, b).For general Hamiltonians H t , K t , we first construct approximations H δ t , K δ t satisfying the assumptions(4.27).Fix local coordinates (x, y) around x i . Then we can write H t (x, y) = H t (0) + ∂ x H t (0)x + ∂ y H t (0)y + R t (x, y).We may assume that ∇ 2 H t (0) is non-degenerate; otherwise, we can achieve this by perturbing H t using a small Morse function with a critical point at x i . Pick a cut-off function χ δ (r) : R + → R such that χ δ (0) = 1, χ δ (0) = 0 andWe perform the same construction for K t . Apparently, we haveAccording to Lemma 6.8 in[7], we know that. By Lemma 5.6, c is a cycle. To show that c is non-exact, we first need to find the corresponding cycle c ∈ P F C(Σ, ϕ Hε , γ 0 ) because the elements in P F C(Σ, ϕ Hε , γ 0 ) can be figured out easily. Let (X, Ω X ) be the symplectic cobordism in (2.9). We take H + = H ε andHε ) be the reference homology class.Let J comp (X, Ω X ) be the set of Ω X -compatible almost complex structures such thatIn the following lemmas, we compute P F C sw Z ref (X, Ω X ) J X (c).Lemma 5.7. Let J X ∈ J comp (X, Ω X ) be an almost complex structure such that it is R-invariant in the product regionProof. Let C ∈ M J X 0 (α ♦ , α I ) be a (broken) holomorphic curve. Let Z ∈ H 2 (X, α ♦ , α I ) denote the relative homology class of C. Then Z can be written as. It is easy to show that I(α ♦ , α I , Z) = 2m(Z)(k + 1) andThen m(Z) = 0 because I = 0. Also, we havewhere h(β ) is the total multiplicities of the hyperbolic orbits and e + (β ) is the total multiplicities of the elliptic orbits at the local maximums..Then the cycle c is non-exact, i.e., it represents a non-zero class in P F H(Σ, ϕ Hε , γ x Hε ).whereφ is a lift of ϕ. One can define the PFH spectral invariant forφ in the same manner, denoted by c pf h d (φ, σ, γ 0 ). Similarly, the homogenized PFH-spectral invariant iswhere H is a mean-normalized Hamiltonian function generatedφ. This is well defined by Proposition 3.6 of[5].Remark 6.1. Let x = (x 1 , ..., x d ). Suppose that each x i is the south pole of the sphere,Proof of Corollary 1.3. There is a natural trivialization τ H of ξ| γ x H defined by pushing forward the S 1 -invariant trivialization over S 1 × {x}. Then we have a well-defined grading gr(α, [Z]) for each anchored orbit set (see(11)of[5]). We claim thatBecause the cobordism maps I x H,G preserve the grading, one only needs to check this for a special case that H is a small Morse function. Take H = H ε . Then σ x ♦H is represented by (α ♦ , Z α ♦ ). The class σ x ♥H = P F H sw Z ref (X + , Ω X + )(1) (see Remark 6.1 of[13]), where X + = B + × Σ and B + is a punctured sphere with a negative end. The construction of (X + , Ω X + ) is similar to (2.10). By index reason, the class σ x ♥H can be The usual energy estimate imply that the U -map decreases the PFH spectral invariants. As a result,According to Proposition 4.2 of[16], we haveTherefore, we haveThis implies that (1.3).QuasimorphismsIn this section, we show that µ pf h d is a quasimorphism on Ham(S 2 , ω). The argument is similar to M. Entov and L. 
Polterovich[17]. Before we prove the result, let us recall some facts about the duality in Floer homology.Let c be a graded filtered Floer-Novikov complex over a field F in the sense of[35]. We can associate c with a graded chain complex (C * (c), ∂). One can define the homology and spectral numbers for (C * (c), ∂). Roughly speaking, c is an abstract complex that is characterized by the common properties of Floer homology. It is not hard to see that the PFH chain complex is an example of graded filtered Floer-Novikov complexes.For c, M. Usher defines another graded filtered Floer-Novikov complex c op called the opposite complex. Roughly speaking, the homology of (C * (c op ), δ) is the Poincare duality of H * (C * (c)) in the following sense: There is a non-degenerate pairing ∆ : H −k (C * (c op )) × H k (C * (c)) → F. We refer the readers to[35]for the details of the graded filtered Floer-Novikov complex and opposite complex.Let c 1 , c 2 be graded filtered Floer-Novikov complexes. Let I : C * (c 1 ) → C * (c 2 ) be a 0-degree chain map given bywhere p i are generators of C * (c i ) and n(p 1 , p 2 ) ∈ F. Define I op : C * (c op 2 ) → C * (c op 1 ) byLemma 6.1. The map I op : C * (c op 2 ) → C * (c op 1 ) satisfies the following properties:• I op is a chain map. It descends to a map I op * : H * (C * (c op 2 )) → H * (C * (c op 1 )).• Let I 1 : C * (c 1 ) → C * (c 2 ) and I 2 : C * (c 2 ) → C * (c 3 ) be two 0-degree chain maps.In particular, if I * is an isomorphism, so is I op * .• Let a ∈ H −k (C * (c op 2 )) and b ∈ H k (C * (c 1 )). Then we haveThe proof of this lemma is straightforward (see Proposition 2.4 in[35]for the case c 1 = c 2 ), we left the details to the readers. Now we construct the opposite complex of (P F C * (Note that (ι −1 ) * (ω +dH t ∧dt) = ω +dH τ ∧dτ. If γ is a ϕ H periodic orbit, thenγ := ι•γ is a ϕ −1 H periodic orbit. Here we orientγ such that it transverse Σ positively. Recall that the symplectic cobordism (X = R×S 1 ×Σ, Ω X = ω +d(H s t dt)+ds∧dt). We extend the map ι to beO.2 gr(γ, −ι * Z) = −gr(γ, Z).O.3 Let u ∈ M J (γ + , γ − , Z) be a holomorphic curve in (X, Ω X ). Thenū = ι • u ∈ MJ (γ − ,γ + , ι * Z) be a holomorphic curve in X, (ι −1 ) * Ω X , whereJ = ι * • J • ι −1 * . This establishes a 1-1 correspondence between M J (γ + , γ − , Z) and MJ (γ − ,γ + , ι * Z).These three points implies that P F C * (S 2 , ϕ −1 H , γ x H ) is the opposite complex of P F C * (S 2 , ϕ −1 H , γ x H ). The pairing ∆ :This pairing descends to the homologies. By Usher's result[35], we haveProof. Let g : S 2 → R be a Morse function with two critical points x + , x − , where x + is the maximum point and x − is the minimum point. LetḠ := g. Take x = (x − , ..., x − ) be the base point. Thenwhere d ± ≥ 0 such that d + + d − = d. The grading formula implies that ∂ = 0. Also, we have σ x ♥Ḡ = (γ d x + , Z γ d ). Therefore, σ = σ x ♦H . We have
Lemma 5.6. Let d_J be the differential of CF(Σ, φ_{H_ε}(L), L). Then d_J = 0. In particular, (y_♦, A_{y_♦}) is a cycle.
represented by a cycle that is a certain combination of constant orbits at the maximum points. It is not difficult to show that the claim is true. According to Example 2.16 of [16], we know that
[1] P. Albers, A Lagrangian Piunikhin-Salamon-Schwarz morphism and two comparison homomorphisms in Floer homology, Int. Math. Res. Not. IMRN 2008, no. 4, 2008.
[2] F. Bourgeois, Y. Eliashberg, H. Hofer, K. Wysocki, and E. Zehnder, Compactness results in symplectic field theory, Geom. Topol. 7 (2003), 799-888.
[3] D. Cristofaro-Gardiner, M. Hutchings, and V. Ramos, The asymptotics of ECH capacities, Invent. Math. 199 (2015), 187-214.
[4] D. Cristofaro-Gardiner, V. Humilière, and S. Seyfaddini, Proof of the simplicity conjecture, arXiv:2001.01792, 2020.
[5] D. Cristofaro-Gardiner, V. Humilière, and S. Seyfaddini, PFH spectral invariants on the two-sphere and the large scale geometry of Hofer's metric, arXiv:2102.04404v1, 2021.
[6] D. Cristofaro-Gardiner, R. Prasad, and B. Zhang, The smooth closing lemma for area-preserving surface diffeomorphisms, arXiv:2110.02925, 2021.
[7] D. Cristofaro-Gardiner, V. Humilière, C. Mak, S. Seyfaddini, and I. Smith, Quantitative Heegaard Floer cohomology and the Calabi invariant, arXiv:2105.11026, 2022.
[8] D. Cristofaro-Gardiner, V. Humilière, C. Mak, S. Seyfaddini, and I. Smith, Subleading asymptotics of link spectral invariants and homeomorphism groups of surfaces, arXiv:2206.10749.
[9] C. Gerig, Taming the pseudoholomorphic beasts in R × S¹ × S², Geom. Topol. 24 (2020), 1791-1839.
[10] C. Gerig, Seiberg-Witten and Gromov invariants for self-dual harmonic 2-forms, arXiv:1809.03405, 2018.
[11] G. Chen, On cobordism maps on periodic Floer homology, Algebr. Geom. Topol. 21 (2021), no. 1, 1-103.
[12] G. Chen, Cobordism maps on periodic Floer homology induced by elementary Lefschetz fibrations, Topology Appl. 302 (2021), Paper No. 107818, 23 pp.
[13] G. Chen, Closed-open morphisms on periodic Floer homology, arXiv:2111.11891, 2021.
[14] V. Colin, K. Honda, and Y. Tian, Applications of higher-dimensional Heegaard Floer homology to contact topology, arXiv:2006.05701, 2020.
[15] V. Colin, P. Ghiggini, and K. Honda, The equivalence of Heegaard Floer homology and embedded contact homology via open book decompositions I, arXiv:1208.1074, 2012.
[16] O. Edtmair and M. Hutchings, PFH spectral invariants and C∞-closing lemmas, arXiv:2110.02463, 2021.
[17] M. Entov and L. Polterovich, Calabi quasimorphism and quantum homology, Int. Math. Res. Not. 2003, no. 30, 1635-1676.
[18] M. Hutchings, An index inequality for embedded pseudoholomorphic curves in symplectizations, J. Eur. Math. Soc. 4 (2002), 313-361.
[19] M. Hutchings, The embedded contact homology index revisited, in New perspectives and challenges in symplectic field theory, 263-297, CRM Proc. Lecture Notes 49, Amer. Math. Soc., 2009.
[20] M. Hutchings and M. Sullivan, The periodic Floer homology of Dehn twist, Algebr. Geom. Topol. 5 (2005), 301-354.
[21] M. Hutchings, Lecture notes on embedded contact homology, in Contact and Symplectic Topology, Bolyai Society Mathematical Studies, vol. 26, Springer, 2014, 389-484.
[22] M. Hutchings, Beyond ECH capacities, Geom. Topol. 20 (2016), 1085-1126.
[23] M. Hutchings and C. H. Taubes, Proof of the Arnold chord conjecture in three dimensions II, Geom. Topol. 17 (2013), 2601-2688.
[24] M. Hutchings and C. H. Taubes, Gluing pseudoholomorphic curves along branched covered cylinders I, J. Symplectic Geom. 5 (2007), 43-137.
[25] M. Hutchings and C. H. Taubes, Gluing pseudoholomorphic curves along branched covered cylinders II, J. Symplectic Geom. 7 (2009), 29-133.
[26] M. Hutchings and C. H. Taubes, The Weinstein conjecture for stable Hamiltonian structures, Geom. Topol. 13 (2009), 901-941.
[27] C. Kutluhan, G. Matic, J. Van Horn-Morris, and A. Wand, Filtering the Heegaard Floer contact invariant, arXiv:1603.02673.
[28] P. Kronheimer and T. Mrowka, Monopoles and three-manifolds, New Math. Monogr. 10, Cambridge Univ. Press, 2007.
[29] R. Lipshitz, A cylindrical reformulation of Heegaard Floer homology, Geom. Topol. 10 (2006).
[30] Y.-J. Lee and C. H. Taubes, Periodic Floer homology and Seiberg-Witten-Floer cohomology, J. Symplectic Geom. 10 (2012), no. 1, 81-164.
[31] R. Leclercq and F. Zapolsky, Spectral invariants for monotone Lagrangians, J. Topol. Anal. 10 (2018), no. 3, 627-700.
[32] Y.-G. Oh, Symplectic topology and Floer homology. Vol. 2: Symplectic geometry and pseudoholomorphic curves, New Mathematical Monographs, vol. 28, Cambridge University Press, Cambridge, 2015.
[33] C. H. Taubes, Seiberg-Witten and Gromov invariants for symplectic 4-manifolds, First International Press Lecture Series 2, International Press, Somerville, MA, 2000.
[34] C. H. Taubes, Embedded contact homology and Seiberg-Witten Floer cohomology I-V, Geom. Topol. 14 (2010).
[35] M. Usher, Duality in filtered Floer-Novikov complexes, J. Topol. Anal. (2011).
| [] |
[
"Topic Extraction and Bundling of Related Scientific Articles",
"Topic Extraction and Bundling of Related Scientific Articles"
] | [
"Shameem A Puthiya Parambath [email protected] \nUmea University\nUmeaSweden\n"
] | [
"Umea University\nUmeaSweden"
] | [] | Automatic classification of scientific articles based on common characteristics is an interesting problem with many applications in digital library and information retrieval systems. Properly organized articles can be useful for automatic generation of taxonomies in scientific writings, textual summarization, efficient information retrieval etc. Generating article bundles from a large number of input articles, based on the associated features of the articles is tedious and computationally expensive task. In this report we propose an automatic two-step approach for topic extraction and bundling of related articles from a set of scientific articles in real-time. For topic extraction, we make use of Latent Dirichlet Allocation (LDA) topic modeling techniques and for bundling, we make use of hierarchical agglomerative clustering techniques.We run experiments to validate our bundling semantics and compare it with existing models in use. We make use of an online crowdsourcing marketplace provided by Amazon called Amazon Mechanical Turk to carry out experiments. We explain our experimental setup and empirical results in detail and show that our method is advantageous over existing ones. | null | [
"https://arxiv.org/pdf/1212.5423v2.pdf"
] | 722,492 | 1212.5423 | 67bd270b4fa28e770256f8a2420ff6d435b9b78f |
Topic Extraction and Bundling of Related Scientific Articles
21 Dec 2012
Shameem A Puthiya Parambath [email protected]
Umea University
UmeaSweden
Topic Extraction and Bundling of Related Scientific Articles
21 Dec 2012
Automatic classification of scientific articles based on common characteristics is an interesting problem with many applications in digital library and information retrieval systems. Properly organized articles can be useful for automatic generation of taxonomies in scientific writings, textual summarization, efficient information retrieval etc. Generating article bundles from a large number of input articles, based on the associated features of the articles is tedious and computationally expensive task. In this report we propose an automatic two-step approach for topic extraction and bundling of related articles from a set of scientific articles in real-time. For topic extraction, we make use of Latent Dirichlet Allocation (LDA) topic modeling techniques and for bundling, we make use of hierarchical agglomerative clustering techniques.We run experiments to validate our bundling semantics and compare it with existing models in use. We make use of an online crowdsourcing marketplace provided by Amazon called Amazon Mechanical Turk to carry out experiments. We explain our experimental setup and empirical results in detail and show that our method is advantageous over existing ones.
Introduction
With the advancement of information retrieval systems, especially search technologies, finding relevant information about any topic under the sky is a relatively easy task. Search engines like Google are very effective and popular for web retrieval. Researchers rely on these search engines to gather related work relevant to their field. Most of the search engines run dedicated services for scientific literature search; examples include popular websites like Google Scholar [htt12c] and CiteSeerX [htt12b]. All of the above-mentioned websites are very capable and retrieve a large number of articles given a proper input query. For example, our search for scholarly articles on the topic 'topic modeling' returned 1,190,000 and 141,843 articles using Google Scholar and CiteSeerX, respectively. These results are ordered based on the indexing and ranking algorithms used by the underlying search system and contain similar articles scattered over different pages.
Grouping or bundling the articles resulting from any extensive search into smaller coherent groups is an interesting but difficult task. Even though many research studies have been conducted in the area of data bundling, a concrete generalized algorithm does not exist. Effective grouping of data requires a precise definition of closeness between a pair of data items, and the notion of closeness always depends on the data and the problem context. Closeness is defined in terms of the similarity of data pairs, which in turn is measured in terms of the dissimilarity or distance between pairs of items. In this report we use the terms similarity, dissimilarity and distance to denote the measure of closeness between data items. Most bundling schemes start with identifying the common attributes (metadata) of the data set, here scientific articles, and create bundling semantics based on combinations of these attributes. Here we suggest a two-step algorithm to bundle scientific articles. In the first step we group articles based on the latent topics in the documents, and in the second step we carry out agglomerative hierarchical clustering based on the inter-textual distance and co-authorship similarity between articles. We run experiments to validate the bundling semantics and to compare it with content-only based similarity. We used 19937 articles related to Computer Science from arXiv [htt12a] for our experiments.
Topic Extraction
Latent Dirichlet Allocation
Latent Dirichlet Allocation (LDA) [BNJ03] is a probabilistic generative model for document modeling. It is based on Probabilistic Latent Semantic Analysis (PLSA), a generative model suggested by Thomas Hofmann [Hof99, LP99]. LDA [BNJ03] relies on a dimensionality reduction assumption and the bag-of-words assumption, i.e., the order of words in a document is not important. Words are considered to be conditionally independent and identically distributed. The ordering of documents is also neglected; documents are assumed to be independent and identically distributed. This is called document exchangeability. The same principle applies for topics: there is no prior ordering of topics, which makes it identifiable. The basic assumptions in the LDA model are given below.
• Number of documents are fixed
• Vocabulary size is fixed
• Number of topics are fixed
• Word distribution is a multinomial distribution
• Topic distribution is a multinomial distribution
• Topic weight distribution is a Dirichlet distribution
• Word distribution per topic is a Dirichlet distribution

A generative model suggests a probabilistic procedure to generate documents given a distribution over topics: a document can be generated by repeatedly selecting a topic from the given topic distribution and then selecting a word from the selected topic. We now formally define the mathematical model behind LDA. We are given a fixed set of documents D = {d_1, d_2, ..., d_N}, a fixed vocabulary W = {w_1, w_2, ..., w_M} and a set of topics T = {t_1, t_2, ..., t_k}. Let d ∈ D denote a random document, w ∈ W a random word and t ∈ T a random topic. Let P(d) be the probability of selecting a document, P(t|d) the probability of selecting topic t in document d, and P(w|t) the probability of selecting word w from topic t. The joint probability distribution of the observed variables (d, w) is

P(d, w) = P(d) P(w|d).

Since w and d are conditionally independent given t,

P(w|d) = Σ_{t=t_1}^{t_k} P(w|t) P(t|d)   ⟹   P(d, w) = P(d) Σ_{t=t_1}^{t_k} P(w|t) P(t|d).

According to Bayes' rule, P(d) P(t|d) = P(d|t) P(t), so

P(d, w) = Σ_{t=t_1}^{t_k} P(t) P(w|t) P(d|t).
Now, thinking in the opposite direction: given a document, statistical inference can recover the topics associated with it. This is the inverse of the generative procedure described above. Given a document, we would like to find the associated topics that are most likely to have generated it. We refer to the set of topics generated using the topic modeling method as topic classes. This involves inferring the word distribution of the topics and the topic distribution of the documents, given the word distribution of the documents. The LDA algorithm generates these topic classes using statistical inference techniques based on the assumptions given earlier. The LDA tool we used, MALLET, uses an algorithm based on Gibbs sampling [CG92] to estimate the topic classes.
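The paper's pipeline uses MALLET for this step; as a rough illustration of the same topic-extraction idea, the sketch below fits an LDA model with scikit-learn (a substitution for MALLET, with made-up toy documents and an arbitrary number of topics):

```python
# Illustrative LDA topic extraction (scikit-learn stands in for MALLET here).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "graph theory shortest path algorithms",
    "machine learning classification with kernels",
    "latent topic models for information retrieval",
]  # toy corpus; in practice these would be the article texts

vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)                 # bag-of-words counts

lda = LatentDirichletAllocation(n_components=2, random_state=0)
doc_topic = lda.fit_transform(X)                   # per-document topic weights

# Assign each document to its dominant topic ("topic class" in the paper's terms)
topic_class = doc_topic.argmax(axis=1)

# Top words per topic, useful for labelling the extracted topic classes
terms = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[::-1][:5]]
    print(f"topic {k}: {top}")
```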
Bundling
In this section, we elaborate on the second step of our algorithm, i.e., bundling the documents in a given topic class. A topic, selected from the set of topics generated by LDA, is given to the clustering system, which generates coherent bundles based on the selected similarity measures. The similarity measures for our data set are defined based on extended co-authorship and inter-textual distance.
Extended Co-authorship Dissimilarity
Extended Co-authorship Dissimilarity between two articles is conceived in terms of the similarity between the extended co-authors of the articles. The extended co-authors of an article are defined as the union of its set of authors and its set of referenced authors. The Extended Co-authorship Similarity between two articles A and B is defined as the Jaccard coefficient on the extended co-authors of the two articles,

SIM(A, B) = |Extended Co-auth(A) ∩ Extended Co-auth(B)| / |Extended Co-auth(A) ∪ Extended Co-auth(B)|,

where Extended Co-auth(A) and Extended Co-auth(B) denote the extended co-authorships of articles A and B. The corresponding Extended Co-authorship Dissimilarity is defined as ExtCoauth(A, B) = 1 − SIM(A, B). We create a proximity matrix ExtCoauth containing the Extended Co-authorship Dissimilarities among all the articles, ExtCoauth = [ExtCoauth(i, j)]_{n×n}.
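As an illustration (with hypothetical author and reference fields; none of the identifiers below come from the paper), the extended co-authorship dissimilarity can be computed as follows:

```python
# Extended co-authorship dissimilarity = 1 - Jaccard similarity of
# (authors ∪ referenced authors) of two articles.
def extended_coauthors(article):
    """article is assumed to be a dict with 'authors' and 'ref_authors' lists."""
    return set(article["authors"]) | set(article["ref_authors"])

def ext_coauth_dissimilarity(a, b):
    ca, cb = extended_coauthors(a), extended_coauthors(b)
    union = ca | cb
    if not union:                      # guard against two articles with no author info
        return 1.0
    jaccard = len(ca & cb) / len(union)
    return 1.0 - jaccard

# Example with made-up articles
art1 = {"authors": ["A. Smith"], "ref_authors": ["B. Jones", "C. Lee"]}
art2 = {"authors": ["B. Jones"], "ref_authors": ["C. Lee", "D. Wu"]}
print(ext_coauth_dissimilarity(art1, art2))   # 0.5
```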
Inter-textual Distance
Inter-textual distance, due to Labbé [LL01], is defined over the frequency of the common vocabulary of two texts. It measures the relative distance of the texts from each other. Mathematically, the inter-textual distance between two texts A and B is defined as

D(A, B) = Σ_{i ∈ V_A, V_{A(B)}} |F_{iA} − E_{iA(B)}| / (N_A + N_{A(B)}).

Here, F_{iA} is the frequency of the i-th vocabulary item in document A, F_{iB} is the frequency of the i-th vocabulary item in document B, and E_{iA(B)} is the frequency of the i-th vocabulary item in B whose mathematical expectation with respect to A is greater than or equal to one. N_A is the sum of the frequencies of the vocabulary in A, N_B is the sum of the frequencies of the vocabulary in B, and N_{A(B)} is the sum of the frequencies of the vocabulary in B with expectation value greater than or equal to one:

E_{iA(B)} = F_{iB} × N_A / N_B,   N_A = Σ_{i∈V_A} F_{iA},   N_B = Σ_{i∈V_B} F_{iB},   N_{A(B)} = Σ_{i∈V_B} E_{iA(B)}.

A proximity matrix Cont, containing the inter-textual distances between all the articles in the given topic class, is then constructed.
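A direct transcription of these formulas is sketched below, assuming each text is already tokenised into a list of words; the absolute difference follows Labbé's definition:

```python
from collections import Counter

def intertextual_distance(tokens_a, tokens_b):
    """Labbé inter-textual distance between two tokenised texts (lists of words)."""
    fa, fb = Counter(tokens_a), Counter(tokens_b)
    n_a, n_b = sum(fa.values()), sum(fb.values())

    # Expected frequency of each word of B rescaled to the size of A,
    # keeping only words whose expectation is at least one.
    e_ab = {w: f * n_a / n_b for w, f in fb.items() if f * n_a / n_b >= 1.0}
    n_ab = sum(e_ab.values())

    vocab = set(fa) | set(e_ab)
    num = sum(abs(fa.get(w, 0.0) - e_ab.get(w, 0.0)) for w in vocab)
    return num / (n_a + n_ab)

# Toy example
print(intertextual_distance("a b b c".split(), "a b d d".split()))
```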
Bundling
To apply a hierarchical agglomerative clustering algorithm, we create a combined proximity matrix D from the two distance measures ExtCoauth and Cont, as given by

D = α · ExtCoauth + (1 − α) · Cont,   0 ≤ α ≤ 1,

where α is the weight factor. We apply the fastcluster algorithm [Mul11] for hierarchical agglomerative clustering to create √n bundles, where n is the number of articles in the selected topic class.
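A sketch of this bundling step with SciPy's hierarchical-clustering routines is given below (the fastcluster package exposes a compatible linkage function; the linkage method and α value are arbitrary choices, not taken from the paper):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def bundle(ext_coauth, cont, alpha=0.5):
    """ext_coauth, cont: symmetric n x n dissimilarity matrices with zero diagonal."""
    d = alpha * ext_coauth + (1.0 - alpha) * cont
    n = d.shape[0]
    z = linkage(squareform(d, checks=False), method="average")  # agglomerative tree
    k = max(1, int(round(np.sqrt(n))))                          # sqrt(n) bundles
    return fcluster(z, t=k, criterion="maxclust")               # bundle label per article

# Toy example with 4 articles
ext = np.array([[0, .2, .8, .9], [.2, 0, .7, .8], [.8, .7, 0, .1], [.9, .8, .1, 0]])
con = np.array([[0, .1, .9, .9], [.1, 0, .8, .9], [.9, .8, 0, .2], [.9, .9, .2, 0]])
print(bundle(ext, con))   # e.g. [1 1 2 2]
```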
Experiments
In this section, we detail the experimental protocol based on Amazon Mechanical Turk, a crowdsourcing marketplace operated by Amazon. Two types of evaluation techniques are commonly employed to measure the quality of clustering schemes: theoretical evaluation and user evaluation. We make use of user evaluation techniques here. Amazon Mechanical Turk (AMT) [htt] is a crowdsourcing marketplace service provided by Amazon where users can work on small tasks that are currently difficult to achieve using computers, i.e., work that requires human intelligence. In Mechanical Turk terminology, tasks are called Human Intelligence Tasks (HITs), a user who provides a task is called a Requester, and a user who works on a task is called a Worker. A HIT is a well-explained, self-contained question of the type described earlier. A Requester creates HITs and publishes them on AMT. A Requester can adopt mechanisms to recruit suitable Workers or filter out unskilled Workers through qualification tests. To run the experiments, we selected three topic classes from the set of twenty-six topic classes: Machine Learning, Information Retrieval and Graph Theory. We selected five bundles from these three topic classes. Each of these bundles is presented to users to check its quality and to compare it with bundles generated using content-only clustering.
Independent Study
In the independent study, we measure the quality of the bundling process independently through Worker feedback. Here we validate the semantics by asking the Workers to comment on the quality of the bundles generated by our algorithm. We make use of a survey questionnaire in which the Workers are asked to read the articles in a bundle and give their feedback on the similarity of the articles in the bundle.
Results
The results of the survey questions are detailed in Table 1. All 29 users who participated in the survey for the topic Information Retrieval confirm that the articles in the bundles are very similar, a 100% success ratio. Out of the 24 users who participated in the survey for Graph Theory, 20 affirm that the member articles in the bundles are similar. In the case of Machine Learning, 84.2% of the participating workers agree with the member-article similarity in the bundle. Overall, the agreement ratio of the independent study is 89.1%, which is a strong indication that our choice of clustering semantics is good.
Comparative Study
In the comparative study, we ask the Worker to do a "side-by-side" comparison of two bundling results: one based on the combination of content and extended co-authorship similarity, and the other based on content similarity only. The aim of the comparative study is to check whether the semantics we use gives a better result than other popular, commonly used semantics. We employ a survey-type questionnaire in which we ask the Worker to read the two bundles and give each bundle its most appropriate name. At the end, they are asked to point out the bundle that was easiest to name. Our assumption is that a diverse bundle will be difficult to name, whereas a coherent bundle will be easy to name.
[BNJ03] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[CG92] George Casella and Edward I. George. Explaining the Gibbs Sampler. The American Statistician, 46(3):167-174, 1992.
[Hof99] Thomas Hofmann. Probabilistic Latent Semantic Indexing. In SIGIR, pages 50-57, 1999.
[htt] https://www.mturk.com/mturk. Amazon Mechanical Turk.
[LL01] Cyril Labbé and Dominique Labbé. Inter-Textual Distance and Authorship Attribution: Corneille and Molière. Journal of Quantitative Linguistics, 8(3):213-231, 2001.
[LP99] Kathryn B. Laskey and Henri Prade, editors. UAI '99: Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, Stockholm, Sweden, July 30 - August 1, 1999. Morgan Kaufmann, 1999.
[Mul11] Daniel Mullner. Modern hierarchical, agglomerative clustering algorithms. CoRR, abs/1109.2378, 2011.
| [] |
[
"Federated Distributionally Robust Optimization for Phase Configuration of RISs",
"Federated Distributionally Robust Optimization for Phase Configuration of RISs"
] | [
"Chaouki Ben Issaid ",
"Sumudu Samarakoon ",
"Mehdi Bennis [email protected] ",
"H Vincent Poor \nElectrical Engineering Department\nPrinceton University\nPrincetonUSA\n",
"\nCentre for Wireless Communications (CWC)\nUniversity of Oulu\nFinland\n"
] | [
"Electrical Engineering Department\nPrinceton University\nPrincetonUSA",
"Centre for Wireless Communications (CWC)\nUniversity of Oulu\nFinland"
] | [] | In this article, we study the problem of robust reconfigurable intelligent surface (RIS)-aided downlink communication over heterogeneous RIS types in the supervised learning setting. By modeling downlink communication over heterogeneous RIS designs as different workers that learn how to optimize phase configurations in a distributed manner, we solve this distributed learning problem using a distributionally robust formulation in a communication-efficient manner, while establishing its rate of convergence. By doing so, we ensure that the global model performance of the worst-case worker is close to the performance of other workers. Simulation results show that our proposed algorithm requires fewer communication rounds (about 50% lesser) to achieve the same worst-case distribution test accuracy compared to competitive baselines.Index Terms-Reconfigurable intelligent surface (RIS), federated learning, communication-efficiency, distributionally robust optimization (DRO). | 10.1109/globecom46510.2021.9685599 | [
"https://arxiv.org/pdf/2108.09026v2.pdf"
] | 237,259,941 | 2108.09026 | 3bd7a4c581ba6d9128143d2ee29ff33230c06c2f |
Federated Distributionally Robust Optimization for Phase Configuration of RISs
Chaouki Ben Issaid
Sumudu Samarakoon
Mehdi Bennis [email protected]
H Vincent Poor
Electrical Engineering Department
Princeton University
PrincetonUSA
Centre for Wireless Communications (CWC)
University of Oulu
Finland
Federated Distributionally Robust Optimization for Phase Configuration of RISs
In this article, we study the problem of robust reconfigurable intelligent surface (RIS)-aided downlink communication over heterogeneous RIS types in the supervised learning setting. By modeling downlink communication over heterogeneous RIS designs as different workers that learn how to optimize phase configurations in a distributed manner, we solve this distributed learning problem using a distributionally robust formulation in a communication-efficient manner, while establishing its rate of convergence. By doing so, we ensure that the global model performance of the worst-case worker is close to the performance of other workers. Simulation results show that our proposed algorithm requires fewer communication rounds (about 50% lesser) to achieve the same worst-case distribution test accuracy compared to competitive baselines.Index Terms-Reconfigurable intelligent surface (RIS), federated learning, communication-efficiency, distributionally robust optimization (DRO).
I. INTRODUCTION
Towards enabling non-line-of-sight (NLOS) connectivity, the concept of reconfigurable intelligent surfaces (RISs) has gained significant interest recently in both industry and academic fora. Due to the capability of dynamically controlling electromagnetic wave propagation using multiple nearly passive reflective elements, RIS technology is identified as a low-cost and scalable communications solution [1], [2]. However, the dynamic configuration of passive reflective elements under changes in the communication system and different RIS manufacturing designs remains one of the main challenges in RIS-aided wireless communication. The majority of the existing literature on RIS-assisted communication, including [2], [3] and references therein, relies on centralized controller-driven optimization and machine learning (ML) techniques. Therein, the main focus is to devise RIS configuration techniques by exploiting statistical correlations within the observed channel state information (CSI) without distinguishing the impacts of system designs (e.g., differences in propagation environments, transmitter (Tx)/receiver (Rx)/RIS locations, size of the RIS, etc.). In fact, these works neglect the limitations imposed by communication and privacy concerns during local data sharing, calling for distributed and privacy-preserving approaches. Federated learning (FL) is a learning framework that allows a centralized model to be trained between several devices and a central entity, a parameter server (PS), while preserving privacy by relying on shared models/gradients rather than accessing their individual data. While several federated algorithms have been proposed [4]-[6], FedAvg [7] remains the state-of-the-art approach for solving the distributed learning problem in a PS-based architecture. In a nutshell, FedAvg is a communication-efficient primal approach that consists of running several local iterations at each worker before exchanging information with the PS. However, since FedAvg solves the distributed learning problem using empirical risk minimization (ERM), i.e., FedAvg minimizes the local losses under the empirical (uniform) distribution, its performance drops when the local data are non-identically distributed across devices.
(This work is supported by Academy of Finland 6G Flagship (grant no. 318927) and project SMARTER, projects EU-ICT IntellIoT and EU-CHISTERA LearningEdge, and CONNECT, Infotech-NOOR, and NEGEIN.)
The heterogeneity of local data owned by the devices involved in the learning is a significant challenge in FL settings compared to classical distributed optimization. In fact, several works [8]- [10] have demonstrated that increasing the diversity of local data distributions harms the generalization capability of the central model obtained by solving the distributed learning problem using FedAvg. This is because the ERM formulation assumes that all local data are drawn from the same distribution. However, this assumption is strong since local data distributions can in practice differ significantly from the average distribution. Consequently, though the global model has a good average performance in terms of test accuracy, its performance locally reduces significantly when the local data are heterogeneous. To obviate this issue, recently, the authors in [11] proposed a distributionally robust federated averaging (DRFA) algorithm with reduced communication. Instead of using the ERM formulation, the authors adopt a distributionally robust optimization (DRO) objective by formulating the distributed learning problem as a minimization problem of a distributionally robust empirical loss. However, a major weakness with DRFA is that it requires two communication rounds: one to update the primal variable and another one to update the dual variable.
The main contribution of this paper is to propose a communication-efficient and distributionally robust learning algorithm, dubbed Federated Group Distributionally Robust Averaging (FGDRA), to learn the optimal RIS configuration yielding maximum downlink capacity in the heterogeneous network setting. Specifically, we define the dual variables locally for each worker and propose to update them in an adversarial way [12], [13] before sharing them with the PS, which performs a normalization step to ensure that the dual variables belong to the simplex. As a consequence, the update of the primal and dual variables in our proposed algorithm requires only a single communication round between the devices and the PS. Simulation results show that our proposed approach is more communication-efficient than DRFA. Moreover, it incorporates the benefits of the DRO formulation by being more robust to the heterogeneity of local data compared to FedAvg.
II. SYSTEM MODEL & PROBLEM FORMULATION
We consider a set of multiple downlink RIS communication scenarios, each consisting of a single Tx-Rx pair without line-of-sight (LOS), a RIS, and randomly located scatterers in the Tx's vicinity, as illustrated in Fig. 1. The NLOS connectivity between Tx and Rx is provided by the RIS via reflecting signals transmitted from the Tx and diffracted signals from a set of S scatterers. Here, the phases of the RIS elements are adjusted by a built-in controller to maximize the Tx-Rx communication data rate. Note that for each scenario, RISs with different design specifications in terms of the size of the surface and the distance between RIS elements are used. In this view, we hereinafter define the scenario-specific RIS controller as a worker.
A. Channel Model
Let h = [h q ] q∈Q and g = [g q ] q∈Q be the channel vectors of the incident (Tx-RIS) and reflected (RIS-Rx) signals defined over the reflective elements Q in the RIS. The channel model is based on the work in [14], in which, the link between Tx and RIS is composed of LOS channels as well as NLOS channels due to the presence of scatterers, while the RIS-Rx link has LOS connectivity due to their close proximity. Let d o , a o , and b o be the distance, azimuth angle, and elevation angle of an object o ∈ {Tx, Rx, Scatterers} with respect to the RIS. With a uniformly distributed random phase η ∼ U[0, 2π] and ı 2 = −1, under the assumption that the scatterers are only in the vicinity of the Tx, the RIS-Rx channel is modeled as follows
g = G(b_{RIS,R}) L(d_{RIS,R}) e^{ıη} Ω(a_{RIS,R}, b_{RIS,R}),   (1)
where G(·), L(·), and Ω(·) are the RIS element radiation pattern, distance-dependent path loss, and array response, respectively [14]. Similar to (1), the LOS component of the channel between RIS and Tx is modeled by
h_{LOS} = G(b_{RIS,T}) L(d_{RIS,T}) e^{ıη} Ω(a_{RIS,T}, b_{RIS,T}),   (2)
as defined in the fifth generation (5G) channel model [14]. The NLOS links between the Tx and RIS are due to the presence of scatterers. Let d s , a s , and b s be the traveleddistance of the reflected signal from Tx to RIS at scatterer s and the azimuth and elevation angles of scatterer s with respect to the RIS, respectively. Then, the NLOS channel is modeled as follows
h_{NLOS} = (1/S) Σ_{s=1}^{S} γ_s G(b_s) L(d_s) Ω(a_s, b_s),   (3)
where γ s ∼ CN (0, 1) is a scatterer-dependent random path gain. In this view, the channel between Tx and RIS becomes h = h LOS + h NLOS .
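A rough numerical sketch of this channel model is given below; the radiation-pattern, path-loss and array-response functions are placeholders, since their exact forms are specified in [14] rather than here:

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 16                                    # number of RIS elements (arbitrary)

def G(b):  return np.cos(b) ** 2          # placeholder element radiation pattern
def L(d):  return d ** -2.0               # placeholder distance-dependent path loss
def Omega(a, b):                          # placeholder array response over Q elements
    return np.exp(1j * np.pi * np.arange(Q) * np.sin(a) * np.cos(b))

def los_channel(d, a, b):
    eta = rng.uniform(0.0, 2.0 * np.pi)   # uniformly distributed random phase
    return G(b) * L(d) * np.exp(1j * eta) * Omega(a, b)          # cf. (1), (2)

def nlos_channel(ds, azs, els):           # cf. (3)
    S = len(ds)
    gam = (rng.standard_normal(S) + 1j * rng.standard_normal(S)) / np.sqrt(2)
    return sum(gam[s] * G(els[s]) * L(ds[s]) * Omega(azs[s], els[s])
               for s in range(S)) / S

g = los_channel(5.0, 0.3, 0.2)                                    # RIS -> Rx
h = los_channel(20.0, -0.4, 0.1) + nlos_channel([22.0, 25.0],     # Tx -> RIS
                                                [0.5, -0.2], [0.15, 0.3])
```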
B. Downlink Rate Maximization
For a given scenario, worker n adjusts the phases of the incident signals at the RIS to improve the downlink data rate. Let φ = [φ_q]_{q∈Q} be the phase-change decision at the RIS over its reflective elements, with |φ_q| = 1. Under this setting, the received signal u at the Rx is given by
u = g† φ h v + z,   (4)

where v is the transmit signal with E[v²] = p and z ∼ N(0, N₀) is the noise. The data rate at the Rx is r(φ, h, g) = ω log₂(1 + |g† φ h|² p / (ω N₀)), where ω is the bandwidth. The downlink data-rate maximization is then cast as follows:

max_{φ∈C} r(φ, h, g) = ω log₂(1 + |g† φ h|² p / (ω N₀)),   (5)
where C is the feasible set of RIS configurations, which are referred to as configuration classes. Due to the notion of configuration classes, an analytical solution cannot be directly derived to determine the optimal configuration φ . The alternate approach is to adopt a heuristic searching mechanism, but the complexity of such a heuristic search increases with the number of reflective elements and their configurations. Hence, we resort to ML to develop a data-driven solution.
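For intuition, the heuristic search over configuration classes referred to above can be sketched as follows; the candidate set and parameter values are arbitrary, and a real RIS would have far more elements and configurations:

```python
import numpy as np

def rate(phi, h, g, p=1.0, bw=1.0, n0=1e-3):
    """Downlink rate for a phase configuration phi (unit-modulus vector)."""
    eff = np.vdot(g, phi * h)                 # g^H diag(phi) h
    return bw * np.log2(1.0 + (abs(eff) ** 2) * p / (bw * n0))

def best_configuration(candidates, h, g):
    """Exhaustive search over a finite set of configuration classes C."""
    rates = [rate(phi, h, g) for phi in candidates]
    best = int(np.argmax(rates))
    return best, rates[best]

# Toy example: 8-element RIS, 4 candidate configurations with quantised phases
rng = np.random.default_rng(1)
h = rng.standard_normal(8) + 1j * rng.standard_normal(8)
g = rng.standard_normal(8) + 1j * rng.standard_normal(8)
C = [np.exp(1j * rng.choice([0, np.pi/2, np.pi, 3*np.pi/2], size=8)) for _ in range(4)]
print(best_configuration(C, h, g))
```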
Consider that worker n has a dataset D_n = {(x_j, c*_j) | j ∈ {1, ..., J_n}} consisting of observed CSI x_j = (h_j, g_j) and a label c*_j corresponding to the optimal RIS configuration φ*_j. The data-driven design seeks a parameterized probabilistic classifier f_θ(x_j) = [f_θ^c(x_j)]_{c∈C} that satisfies

min_θ − (1/N) Σ_{n=1}^{N} (1/J_n) Σ_{j∈D_n} Σ_{c∈C} I_c(c*_j) log f_θ^c(x_j),   (6)

where the indicator I_c(c*_j) = 1 only if the configuration c is equivalent to c*_j, and zero otherwise. Note that (6) relies on a centralized training mechanism where workers share their datasets with a centralized server. Under the limitations in data sharing due to communication constraints and/or privacy concerns, (6) is formulated as a distributed learning problem under the ERM formulation as follows:

min_θ (1/N) Σ_{n=1}^{N} ℓ_n(θ),   (7)

where ℓ_n(θ) = − (1/J_n) Σ_{j∈D_n} Σ_{c∈C} I_c(c*_j) log f_θ^c(x_j)
is the local loss function of the n th worker. In (7), the parameter vector θ is referred to as the global model, which can be obtained via the FedAvg algorithm [7]. Note that under the formulation introduced in (7), it is assumed that the weight associated with each worker participating in the training is the same, i.e., the centralized model is trained to minimize the loss with respect to the uniform distribution over worker datasets. In the presence of heterogeneous local data, relying on the above assumption could result in a model that fails to perform well for some workers yielding a non-robust global model. An alternative approach to solving (7) is rather to minimize the distributionally robust empirical loss to learn a model with uniformly good performance across all workers. Next, we describe the distributed learning problem under the DRO formulation and elaborate on our approach to solve it.
III. DISTRIBUTIONALLY ROBUST DESIGN OF RISS
We start by stating the DRO formulation for the distributed learning problem as
min_θ max_{λ∈Λ} F(λ, θ) = Σ_{n=1}^{N} λ_n ℓ_n(θ),   (8)

where λ ∈ Λ ≜ {λ ∈ R^N_+ : Σ_{n=1}^{N} λ_n = 1} is the vector of weights associated with each local loss function. Unlike the ERM formulation, which involves only minimizing a uniform combination of the loss functions, the DRO formulation is a min-max problem over a weighted sum of the loss functions. Solving the learning problem introduced in (8) ensures good performance of the global model over the worst-case combination of empirical local distributions.
Our proposed approach to solve (8) is closely related to the DRFA algorithm proposed in [11] with a subtle difference in which instead of defining the dual variables vector λ at the PS side, we define locally for each worker n the dual variable λ n . By doing so, we avoid communicating twice between the workers and the PS to update the primal and dual variables, and hence the algorithm is more communication-efficient. For each worker n, we define the primal variable as θ n . Let K denote the number of communication rounds between the PS and the workers, τ the number of local SGD steps for updating the primal variables, and B the size of the mini-batch used to compute the stochastic gradient. In this case, the total number of iterations is T = Kτ . Finally, let α and γ denote the learning rates used to update the primal and dual variables, respectively.
At a given communication round k, the FGDRA algorithm runs as follows (a small numerical sketch of one round is given below):
1) The PS selects a subset S_k of size m from the set [N] ≜ {1, ..., N} of all workers and sends θ^k and λ_n^k to each worker n ∈ S_k.
2) Each worker n ∈ S_k runs τ local SGD steps from θ^k to update its primal variable θ_n^{(k+1)τ}.
3) Given θ_n^{(k+1)τ}, each worker n ∈ S_k updates its dual variable λ_n^k using an exponentiated gradient ascent step, then shares both primal and dual variables with the PS.
4) The PS collects the primal variables from every worker n ∈ S_k and performs model averaging to update the global model, and then normalizes the dual variables vector.
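The following numpy sketch mimics one FGDRA communication round on a toy quadratic problem; it is only an illustration of the update rules above (the losses, learning rates and sampling are made up), not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N, m, tau, alpha, gamma, d = 4, 3, 10, 1e-2, 5e-3, 5

# Toy local objectives: l_n(theta) = 0.5 * ||theta - c_n||^2 with noisy gradients
centers = rng.standard_normal((N, d))
def loss(n, th):       return 0.5 * np.sum((th - centers[n]) ** 2)
def stoch_grad(n, th): return (th - centers[n]) + 0.1 * rng.standard_normal(d)

theta = np.zeros(d)
lam = np.ones(N) / N                       # dual weights on the simplex

def fgdra_round(theta, lam):
    workers = rng.choice(N, size=m, replace=False, p=lam)   # sample according to lam
    new_thetas, new_lam = [], lam.copy()
    for n in workers:
        th = theta.copy()
        for _ in range(tau):                                 # tau local primal steps
            th -= alpha * lam[n] * stoch_grad(n, th)         # scaled by dual weight
        new_lam[n] = lam[n] * np.exp(gamma * loss(n, th))    # exponentiated ascent
        new_thetas.append(th)
    theta = np.mean(new_thetas, axis=0)                      # PS: model averaging
    new_lam /= new_lam.sum()                                 # PS: re-normalize onto simplex
    return theta, new_lam

for _ in range(50):
    theta, lam = fgdra_round(theta, lam)
print(lam)   # larger weight ends up on the worker with the largest residual loss
```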
Note that while in FedAvg the PS selects the subset of workers S randomly in a uniform manner, in our setting (similar to DRFA) the PS selects the subset according to the weighting vector λ. The detailed steps of the FGDRA algorithm are summarized in Algorithm 1.

Algorithm 1: Federated Group Distributionally Robust Averaging (FGDRA)
Inputs: N, τ, K, α, γ, B, m, θ^0, λ^0. Outputs: θ^K, λ^K.
1: for k = 0 to K − 1 do
2:   PS samples S_k ⊂ [N] according to λ^k with size m
3:   PS broadcasts θ^k and λ_n^k to each worker n ∈ S_k
4:   for worker n ∈ S_k in parallel do
5:     Worker sets θ_n^{kτ} = θ^k
6:     for t = kτ, ..., (k+1)τ − 1 do
7:       Worker samples a mini-batch ξ_n^t of size B
8:       Worker updates its primal variable using θ_n^{t+1} = θ_n^t − α λ_n^k ∇ℓ_n(θ_n^t; ξ_n^t)
9:     end for
10:    Worker updates its dual variable using λ_n^k = λ_n^k exp(γ ℓ_n(θ_n^{(k+1)τ}; ξ_n))   (10)
11:  end for
12:  Each worker n ∈ S_k sends θ_n^{(k+1)τ} and λ_n^k back to the PS
13:  PS computes θ^{k+1} = (1/m) Σ_{n∈S_k} θ_n^{(k+1)τ}
14:  PS normalizes the dual variables vector λ
15: end for

Next, we present the theoretical guarantees of our proposed algorithm. First, we state some standard assumptions needed for the proof.
Assumption 1 (Smoothness). Each local loss function ℓ_n(·), n ∈ [N], and the global function F(·, ·) are L-smooth.
Assumption 2 (Bounded Gradient). There exists a constant σ > 0 such that E[‖∇ℓ_n(θ, ξ_n)‖] ≤ σ, ∀n ∈ [N].
Assumption 3 (Bounded Variance). There exists a constant ν > 0 such that E[‖∇ℓ_n(θ, ξ_n) − ∇ℓ_n(θ)‖] ≤ ν, ∀n ∈ [N].
The following theorem establishes the convergence rate of our proposed algorithm.
Theorem 1. Suppose Assumptions 1-3 hold. If we set the step size α appropriately, then

(1/T) Σ_{t=0}^{T} E[‖∇F(λ^{(⌊t/τ⌋)}, θ̄^t)‖²] ≤ [2 E[F(λ^0, θ̄^0)] + (17/2 + 8/m) σ² + 17ν²] · (1/√T).   (11)
The proof is deferred to Appendix A.
IV. SIMULATION RESULTS
A. Simulation Settings
In our experiments, we consider that each worker n has its dataset D n generated as detailed in Section II. B. We report both the average global test accuracy, and the worst distribution test accuracy and their corresponding one standard error shaded area based on five runs. The worst distribution test accuracy is defined as the worst of all test accuracies for each local distribution. For fair comparison between the algorithms, we use the same hyperparameters, detailed in Table I, unless otherwise stated in the text. We use a multi-layer perceptron (MLP) neural network with two hidden layers having 64 and 32 neurons, respectively, while the input layer and the output layers have 400 and 4 neurons, respectively. The activation function used in the hidden layers is the rectified linear unit (ReLU), while the softmax activation function is used at the output layer. The loss function used is the cross-entropy loss.
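For reference, the described classifier architecture can be written in a few lines of PyTorch; this is an illustrative stand-in, not necessarily the framework the authors actually used:

```python
import torch
import torch.nn as nn

# 400-dimensional CSI feature vector in, |C| = 4 configuration classes out.
model = nn.Sequential(
    nn.Linear(400, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 4),               # logits; softmax is implicit in the loss below
)
criterion = nn.CrossEntropyLoss()   # cross-entropy over the 4 configuration classes

x = torch.randn(8, 400)             # dummy mini-batch of CSI features
y = torch.randint(0, 4, (8,))       # dummy configuration labels
loss = criterion(model(x), y)
loss.backward()
```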
B. Communication Efficiency
We compare the performance of our proposed approach to two baselines, namely DRFA and FedAvg. In Fig. 2, we plot the average global test accuracy, the worst distribution test accuracy, as well as the standard deviation (SD) of the global test accuracy, as a function of the number of communication rounds. We can observe from Fig. 2a that the performance of the three algorithms in terms of the average global test accuracy is quite similar. However, Fig. 2b shows that FGDRA outperforms the baselines in terms of the number of communication rounds to achieve the same level of worst distribution test accuracy. By examining Figs. 2a and 2b together, we can clearly see that the heterogeneity of the local data has an effect on the performance of the global model. However, the drop in performance is more evident in FedAvg and DRFA compared to our proposed algorithm. Moreover, our approach provides gains in terms of the number of communication rounds compared to DRFA. In fact, FGDRA requires around 800 communication rounds to converge compared to DRFA requiring more than 1500 communication rounds. To further support our claim, Fig. 2c depicts the SD of different workers' accuracy, indicating the degree of fairness of the global model across workers. Compared to FedAvg and DRFA, we clearly see that our proposed approach promotes more fairness among workers in the sense that the global model performs well on the worst distribution compared to the average one.
C. Sensitivity to Hyperparameters
Next, we study the impact of the number of local iterations τ , the size of the mini-batch B as well as the sampling size m on the average global test accuracy and worst distribution test accuracy, for K = 800 communication rounds in Table II, III, and IV respectively. Table II shows that increasing τ corresponds to an increase in both test accuracies for all algorithms. A similar conclusion can be drawn from Table III when increasing B, though the impact of τ seems more noticeable. We report the test accuracies for different values of m in Table IV. Note that considering a subset of workers participating in the training at each communication round mimics the asynchronous setting. We can observe that increasing m improves both test accuracies for all three algorithms. However, the gap between the average global accuracy and the worst distribution test accuracy is larger in the case of FedAvg and DRFA compared to our proposed approach. If we consider m = 2, i.e. only half of the workers are sampled, the difference between the average global accuracy and the worst distribution accuracy is 17% in the FedAvg case while it is around 10% for DRFA and about 3% for FGDRA.
V. CONCLUSIONS AND FUTURE WORK
This work proposes a novel distributed robust RIS-aided communication design for heterogeneous RIS configurations. The problem is cast as a classification-type DRO problem as opposed to data heterogeneity-unaware ERM approach. To solve this problem, we propose a communication-efficient and distributionally robust algorithm with convergence guarantees. As a solution, a neural network-based classifier for phase configuration is trained in a distributed supervised learning manner and compared with two state-of-the-art techniques.
The results indicate that the proposed classifier is more robust across heterogeneous system designs, with faster convergence compared to the existing designs. Future extensions will be focused on systems consisting of multiple transmitters, receivers, and RISs with several antennas.

APPENDIX A
PROOF OF THEOREM 1

Let 0 ≤ k < K. For kτ ≤ t < (k + 1)τ, we define
θ̄^t = (1/m) Σ_{n∈S^{(⌊t/τ⌋)}} θ_n^t.   (12)
From the update rule, we have
θ_n^{t+1} = θ_n^t − α λ_n^k G_n^t,   (13)
where G_n^t = ∇ℓ_n(θ_n^t, ξ_n^t). Hence, we can write
θ_n^{t+1} = θ_n^{kτ} − α λ_n^k Σ_{r=kτ}^{t} G_n^r,   (14)
θ̄^{t+1} = θ^{kτ} − (α/m) Σ_{j∈S^{(⌊t/τ⌋)}} Σ_{r=kτ}^{t} λ_j^k G_j^r.   (15)
For ease of notation, we set S_t = S^{(⌊t/τ⌋)}. Adding and subtracting λ_n^k ∇ℓ_n(θ_n^r) and (1/m) Σ_{j∈S_t} λ_j^k ∇ℓ_j(θ_j^r), and using Assumptions 1 and 2, we get
‖θ̄^{t+1} − θ_n^{t+1}‖² ≤ 4α²τ (λ_max^k)² Σ_{r=kτ}^{t−1} [ (1/m) Σ_{j∈S_t} ‖∇ℓ_j(θ_j^r) − G_j^r‖² + ‖∇ℓ_n(θ_n^r)‖² + ‖G_n^r − ∇ℓ_n(θ_n^r)‖² + (1/m) Σ_{j∈S_t} ‖∇ℓ_j(θ_j^r)‖² ]
  ≤ 4α²τ² [ (1 + 1/m) σ² + 2ν² ],   (17)
where λ_max^k = max_{i∈S_t} λ_i^k and we used λ_max^k ∈ (0, 1]. Using the smoothness of F, we have
E[F(λ^k, θ̄^{t+1})] ≤ E[F(λ^k, θ̄^t)] + E[⟨∇F(λ^k, θ̄^t), θ̄^{t+1} − θ̄^t⟩]  (term I)  + (L/2) E[‖θ̄^{t+1} − θ̄^t‖²]  (term II).   (18)
From (13), we can write
θ̄^{t+1} = θ̄^t − (α/m) Σ_{n∈S_t} λ_n^k G_n^t.   (19)
We start by rewriting term (II) as
E[‖θ̄^{t+1} − θ̄^t‖²] = α² E[‖(1/m) Σ_{n∈S_t} λ_n^k ∇ℓ_n(θ_n^t)‖²] + α² E[‖(1/m) Σ_{n∈S_t} (λ_n^k G_n^t − λ_n^k ∇ℓ_n(θ_n^t))‖²].   (20)
For the stochastic term in (20), we can write
E[‖(1/m) Σ_{n∈S_t} (λ_n^k G_n^t − λ_n^k ∇ℓ_n(θ_n^t))‖²] ≤ ((λ_max^k)²/m²) Σ_{n∈S_t} E[‖G_n^t − ∇ℓ_n(θ_n^t)‖²] ≤ σ²/m,   (21)
where we used λ_max^k ∈ (0, 1] and Assumption 3. Replacing (21) in (20), we get
E[‖θ̄^{t+1} − θ̄^t‖²] ≤ α² σ²/m + α² E[‖Σ_{n∈S_t} (λ_n^k/m) ∇ℓ_n(θ_n^t)‖²].   (22)
For term (I), we can write
E[⟨∇F(λ^k, θ̄^t), θ̄^{t+1} − θ̄^t⟩] = −α E[⟨∇F(λ^k, θ̄^t), (1/m) Σ_{n∈S_t} λ_n^k G_n^t⟩].   (23)
Using the unbiasedness of G_n^t and the identity ⟨a, b⟩ = ½(‖a‖² + ‖b‖² − ‖a − b‖²), we get
E[⟨∇F(λ^k, θ̄^t), θ̄^{t+1} − θ̄^t⟩] = −(α/2) E[‖∇F(λ^k, θ̄^t)‖²] − (α/2) E[‖Σ_{n∈S_t} (λ_n^k/m) ∇ℓ_n(θ_n^t)‖²] + (α/2) E[‖∇F(λ^k, θ̄^t) − (1/m) Σ_{n∈S_t} λ_n^k ∇ℓ_n(θ_n^t)‖²]  (term III).   (24)
Using [11, Lemma 1], we can rewrite term (III) as
E[‖∇F(λ^k, θ̄^t) − (1/m) Σ_{n∈S_t} λ_n^k ∇ℓ_n(θ_n^t)‖²] ≤ E[‖(1/N) Σ_{n=1}^{N} λ_n^k (∇ℓ_n(θ̄^t) − ∇ℓ_n(θ_n^t))‖²] ≤ (2L²/N) E[Σ_{n=1}^{N} ‖θ̄^t − θ_n^t‖²]
Fig. 1. RIS-aided downlink communication system highlighting the scenario and worker definitions over different RIS sizes, as well as the role of the PS.
Fig. 2. Comparing FGDRA with DRFA and FedAvg in terms of: (a) average global test accuracy, (b) worst distribution test accuracy, and (c) standard deviation (SD) of the global accuracy.
TABLE I. PARAMETERS USED IN THE NUMERICAL EXPERIMENTS.
PARAMETER | VALUE
Learning rate for primal update (α) | 2 × 10⁻³
Learning rate for dual update (γ) | 5 × 10⁻³
Mini-batch size (B) | 50
Number of workers (N) | 4
Number of local iterations (τ) | 10
Sampling size (m) | 3
TABLE II. IMPACT OF THE NUMBER OF LOCAL ITERATIONS τ ON THE TEST ACCURACIES¹.
        | τ = 1       | τ = 5       | τ = 10
FGDRA   | 54.96/34.12 | 72.78/65    | 77.58/73.62
DRFA    | 46.75/31.62 | 66.25/44    | 75.59/64.26
FedAvg  | 48.56/24.5  | 64.7/36.5   | 73.96/57.62

TABLE III. IMPACT OF THE MINI-BATCH SIZE B ON THE TEST ACCURACIES.
        | B = 10      | B = 30      | B = 50
FGDRA   | 70.09/64.87 | 73.96/69.75 | 77.58/73.62
DRFA    | 70.46/58.62 | 75.28/62    | 75.59/64.26
FedAvg  | 70.28/51.37 | 76.18/62.37 | 73.96/57.62
TABLE IV. IMPACT OF THE SAMPLING SIZE m ON THE TEST ACCURACIES.
        | m = 1       | m = 2       | m = 3
FGDRA   | 57.37/38.5  | 62.56/59.15 | 77.58/73.62
DRFA    | 56.24/36    | 61.2/51.87  | 75.59/64.26
FedAvg  | 58.06/34.87 | 62.62/45.12 | 73.96/57.62
Test accuracies (expressed in %) are reported in the form (average global test accuracy/worst distribution test accuracy) based on five runs for K = 800.
[1] J. He, H. Wymeersch, L. Kong, O. Silvén, and M. Juntti, "Large intelligent surface for positioning in millimeter wave MIMO systems," in 2020 IEEE 91st Vehicular Technology Conference (VTC2020-Spring). IEEE, 2020, pp. 1-5.
[2] Ö. Özdogan and E. Björnson, "Deep learning-based phase reconfiguration for intelligent reflecting surfaces," preprint arXiv:2009.13988, 2020.
[3] J. Gao, C. Zhong, X. Chen, H. Lin, and Z. Zhang, "Unsupervised learning for passive beamforming," IEEE Commun. Lett., vol. 24, no. 5, pp. 1052-1056, 2020.
[4] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings et al., "Advances and open problems in federated learning," arXiv preprint arXiv:1912.04977, 2019.
[5] Q. Yang, Y. Liu, T. Chen, and Y. Tong, "Federated machine learning: Concept and applications," ACM Transactions on Intelligent Systems and Technology (TIST), vol. 10, no. 2, pp. 1-19, 2019.
[6] T. Li, A. K. Sahu, A. Talwalkar, and V. Smith, "Federated learning: Challenges, methods, and future directions," IEEE Signal Processing Magazine, vol. 37, no. 3, pp. 50-60, 2020.
[7] B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, "Communication-efficient learning of deep networks from decentralized data," in Artificial Intelligence and Statistics (AISTATS), 2017, pp. 1273-1282.
[8] F. Haddadpour and M. Mahdavi, "On the convergence of local descent methods in federated learning," arXiv preprint arXiv:1910.14425, 2019.
[9] S. P. Karimireddy, S. Kale, M. Mohri, S. Reddi, S. Stich, and A. T. Suresh, "Scaffold: Stochastic controlled averaging for federated learning," in International Conference on Machine Learning (ICML), 2020, pp. 5132-5143.
[10] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, "FedDANE: A federated Newton-type method," in 2019 53rd Asilomar Conference on Signals, Systems, and Computers. IEEE, 2019, pp. 1227-1231.
[11] Y. Deng, M. M. Kamani, and M. Mahdavi, "Distributionally robust federated averaging," in Advances in Neural Information Processing Systems (NIPS), vol. 33, 2020, pp. 15111-15122.
[12] S. Sagawa, P. W. Koh, T. B. Hashimoto, and P. Liang, "Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization," arXiv preprint arXiv:1911.08731, 2019.
[13] Q. Qian, S. Zhu, J. Tang, R. Jin, B. Sun, and H. Li, "Robust optimization over multiple domains," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01, 2019, pp. 4739-4746.
[14] E. Basar, I. Yildirim, and I. F. Akyildiz, "Indoor and outdoor physical channel modeling and efficient positioning for reconfigurable intelligent surfaces in mmWave bands," arXiv preprint arXiv:2006.02240, 2020.
| [] |
[
"Supporting the Curation of Twitter User Lists",
"Supporting the Curation of Twitter User Lists"
] | [
"Derek Greene [email protected] ",
"Fergal Reid [email protected] ",
"Gavin Sheridan [email protected] ",
"Pádraig Cunningham [email protected] ",
"\nSchool of Computer Science & Informatics\nSchool of Computer Science & Informatics\nUniversity College Dublin\nStoryful DublinIreland, Ireland\n",
"\nUniversity College Dublin\nIreland\n"
] | [
"School of Computer Science & Informatics\nSchool of Computer Science & Informatics\nUniversity College Dublin\nStoryful DublinIreland, Ireland",
"University College Dublin\nIreland"
] | [] | Twitter introduced lists in late 2009 as a means of curating tweets into meaningful themes. Lists were quickly adopted by media companies as a means of organising content around news stories. Thus the curation of these lists is important, they should contain the key information gatekeepers and present a balanced perspective on the story. Identifying members to add to a list on an emerging topic is a delicate process. From a network analysis perspective there are a number of views on the Twitter network that can be explored, e.g. followers, retweets mentions etc. We present a process for integrating these views in order to recommend authoritative commentators to include on a list. This process is evaluated on manually curated lists about unrest in Bahrain and the Iowa caucuses for the 2012 US election. | null | [
"https://arxiv.org/pdf/1110.1349v1.pdf"
] | 14,839,971 | 1110.1349 | cffe9d9db2b6ce056024b4794b1589a770d2a5e6 |
Supporting the Curation of Twitter User Lists
Derek Greene [email protected]
Fergal Reid [email protected]
Gavin Sheridan [email protected]
Pádraig Cunningham [email protected]
School of Computer Science & Informatics
School of Computer Science & Informatics
University College Dublin
Storyful DublinIreland, Ireland
University College Dublin
Ireland
Supporting the Curation of Twitter User Lists
Twitter introduced lists in late 2009 as a means of curating tweets into meaningful themes. Lists were quickly adopted by media companies as a means of organising content around news stories. Thus the curation of these lists is important, they should contain the key information gatekeepers and present a balanced perspective on the story. Identifying members to add to a list on an emerging topic is a delicate process. From a network analysis perspective there are a number of views on the Twitter network that can be explored, e.g. followers, retweets mentions etc. We present a process for integrating these views in order to recommend authoritative commentators to include on a list. This process is evaluated on manually curated lists about unrest in Bahrain and the Iowa caucuses for the 2012 US election.
Introduction
Media outlets that leverage the content produced by users of social media sites can now break or cover stories as they evolve on the ground, in real time (e.g. videos, photographs, tweets). However, a significant issue arises when trying to (a) identify content around a breaking news story in a timely manner, (b) monitor the proliferation of content on a certain news event over a period of time, and (c) ensure that content is reliable and accurate. Storyful 1 is a social media news agency established in 2009 with the aim of filtering news, or newsworthy content, from the vast quantities of noisy data that streams through social networks. To this end, Storyful invests significant time into the manual curation of content on social media networks, such as Twitter and YouTube. In some cases this involves identifying "gatekeepers" who are prolific in their ability to locate, filter and monitor news from eyewitnesses.
Twitter users can organise the users they follow into Twitter lists. Storyful maintains lists of users relevant to a given news story, as a means of monitoring breaking news related to that story. Often these stories generate community-decided hashtags (e.g. #occupywallstreet) -but even with small news events, using such hashtags to track the evolution of a story becomes difficult. Spambots quickly intervene, while users with no proximity (in space, time or expertise) to the news story itself drown out other voices. Manual curation of lists is one way to overcome this problem, but is time consuming, and risks incomplete coverage. In order to support the list curation process, we propose methods for identifying the important users that form the "community" around a news story on Twitter. Specifically, given a small seed list of users supplied by a domain expert, we are interested in using network analysis techniques to expand this set to produce a user list that provides comprehensive coverage of the story. The motivation is that the members of this list will provide additional valuable content relating to the story.
A number of authors have considered the related problem of producing personal recommendations for additional users to follow on Twitter, either by following user links or performing textual analysis of tweet content. Hannon et al. [3] proposed a set of techniques for producing personal recommendations on users to follow, based on the similarity of the aggregated tweets or "profiles" of users that are connected to the ego in the Twitter social graph. Such techniques have primarily relied on a single view of the network to produce suggestions. However, we can view the same Twitter network from a range of different perspectives. For instance, Conover et al. [2] performed an analysis of Twitter data based on references to other Twitter screen names in a tweet, while researchers have also looked at the diffusion of content via retweets to uncover the spread of memes and opinions on Twitter [2,6]. The idea is that both mentions and retweets provide us with some insight of the differing interactions between microblogging users.
In Section 2 we describe a set of recommendation criteria and network exploration methods used to support user list curation on Twitter. Rather than using a single view of the network to produce recommendations, we employ a multi-view approach that produces user rankings based on different graph representations of the Twitter network surrounding a given user list, and combines them using an SVD-based aggregation approach [10]. Information from multiple views is also used to control the exploration of the Twitter network -this is an important consideration due to the limitations surrounding Twitter data access. To verify the accuracy of the resulting recommendations, in Section 3 we describe experiments performed on a previously-curated Twitter list relating to coverage of the Iowa caucuses in advance of the 2012 US Presidential Election. In Section 4 we investigate whether a "silo" effect arises in cases where a user list is expanded from an initial seed list with a strong bias towards a particular perspective on a story. We do this by evaluating the proposed recommendation techniques on subsets of a previously-curated list covering the current political situation in Bahrain. This study motivates further work in this area, which is discussed in Section 5.
Methods
Bootstrapping
We now describe our proposed system for supporting user list curation. The initial input to the system is a seed list of one or more users that have been manually labelled as being relevant to a particular news story. Once a seed list has been supplied, the first operation of the system involves a bootstrapping phase, which retrieves follower ego networks around all seed list members. Other information regarding these users is also retrieved -such as user list membership information and a limited number of tweets. The extent of the exploration process can be controlled by setting an upper limit for the number of links to follow and tweets to retrieve -these parameters control the trade-off between network exploration depth and the number of queries required. The latter is an important consideration, not only in terms of running time, but also due to the fact that Twitter employs a quota system that limits the number of permitted API queries that can be made per hour.
After the bootstrap phase, the system will have two disjoint lists for the news story. The core set contains curated Twitter accounts; initially this corresponds to the members of the seed list. The candidate set contains Twitter accounts that are not in the core set, but exist in the wider network around the core - some of these users may potentially be relevant for curation, while others will be spurious. Initially this will consist of the new non-seed users that were found during the bootstrap phase.
Recommending Users
In the subsequent recommendation phase, a ranked list of the r top users from the candidate set is produced. Firstly, we produce individual rankings using a number of criteria applied to different graphs, each representing a different view of the same network. The motivation is that each view potentially captures a different aspect of the relations between Twitter users around a given news story. We construct four different views:
1. Core friend graph: This is a directed graph which contains nodes representing all users in the core set, along with the non-core users who they follow.

2. Core mention graph: As an alternative network view, we analyse the non-core users mentioned by the users in the core set. Specifically, an edge links from a core node A to a non-core node B if A has mentioned B in at least one tweet - the weight of the edge corresponds to the number of tweets. The idea here is that this directed, weighted mention graph is a proxy for the dialogue between these Twitter users.

3. Core retweet graph: We also analyse retweeting activity by core users involving tweets originally posted by non-core users. This involves the construction of a weighted, directed graph, where an edge links from core node A to non-core node B if A has retweeted B's tweets at least once - the weight of the edge corresponds to the number of retweets.

4. Weighted co-listed graph: Another alternative view, which has not been widely explored in the literature, is to look at relations based on the aggregation of co-assignments to Twitter user lists. At an aggregate level, this could be regarded as a form of crowd-sourced curation, where the assumption is that related pairs of users will be more frequently assigned to the same list than users who are dissimilar to one another. Based on this idea, we construct a weighted, undirected graph as follows. For each user list that has been identified, we measure the overlap w between the list's members and the core set using the Jaccard set similarity measure [4]. If w > 0 then, for each unique core/non-core pair of users in the user list, we create an edge between these two users with weight w. If an edge between the users already exists, we increment the weight on the edge by w. A short sketch of this construction is given after this list.
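The co-listed view lends itself to a compact implementation. The sketch below is a minimal illustration, assuming the core set and the user lists are available as Python sets; the function names and data layout are our own assumptions, not the authors' implementation.

from collections import defaultdict

def jaccard(a, b):
    # Jaccard set similarity [4]: |intersection| / |union|
    union = a | b
    return len(a & b) / float(len(union)) if union else 0.0

def co_listed_graph(core, user_lists):
    """core: set of curated screen names; user_lists: iterable of member sets."""
    edges = defaultdict(float)              # (core user, non-core user) -> weight
    for members in user_lists:
        w = jaccard(members, core)          # overlap between the list and the core set
        if w > 0:
            for u in members & core:
                for v in members - core:
                    edges[(u, v)] += w      # accumulate w over every co-listing
    return edges

def weighted_in_degree(edges):
    # Ranking criterion used later: sum of incoming edge weights per candidate.
    score = defaultdict(float)
    for (_, v), w in edges.items():
        score[v] += w
    return score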
The criteria that we use on these graphs are as follows:
1. In-degree: A simple approach for directed graphs is to look at the in-degree centrality of each Twitter user. For weighted graphs, we calculate the sum of the weights on incoming edges.

2. Normalised in-degree: Using standard in-degree centrality can potentially lead to the selection of high-degree Twitter users who are not specialised in a particular geographic or topical area. Our solution has been to introduce a normalisation factor to reduce the impact of high degree nodes. The normalisation approach is similar to standard log-based TF-IDF term weighting functions that are widely applied in text mining to reduce the influence of frequently-occurring terms [7]. The normalised follower count value for the user u is defined as (see the sketch after this list):

nfc(u) = log(seed_followers(u)) · log(max_followers / all_followers(u))    (1)

where

• seed_followers(u) = the number of users in the core set that follow the user u.
• all_followers(u) = the total number of all users following the user u on Twitter.
• max_followers = a scaling factor, defined to be the largest number of Twitter followers among any of the core and non-core users.

3. HITS with priors: The HITS algorithm, originally proposed by [5], has been widely used to assign hub and authority scores to each node in a graph, depending upon its topology. We can use the authority scores applied to a Twitter network to identify key users in that network. Since we wish to focus on authority relative to our pre-curated core list, we use the variation of HITS proposed by [9], which introduces prior probabilities for each node. Specifically, each of the m users in the core list is given an initial probability 1/m, while the other non-core nodes are given an initial probability of 0.
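For illustration only, the normalised in-degree score of Eq. (1) can be computed as below; the argument names are our own shorthand for the quantities defined above, and the counts would be read off the core friend graph.

import math

def nfc(seed_followers, all_followers, max_followers):
    # seed_followers: core-set users following u; all_followers: all of u's followers;
    # max_followers: largest follower count over core and non-core users.
    if seed_followers < 1 or all_followers < 1:
        return 0.0
    return math.log(seed_followers) * math.log(max_followers / float(all_followers))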
Naturally, certain criteria are only meaningful when applied to certain graphs. For the purpose of the evaluations described in this paper, we use the following five combinations:
• Normalised degree applied to the core friend graph.
• HITS with priors applied to the core friend graph.
• Weighted in-degree applied to the co-listed, mention, and retweet graphs.

Figure 1: Overview of curation support system, illustrating the workflow between the bootstrap, recommendation, and update phases.
Combining Rankings
The various graph/criterion combinations can potentially produce rankings of users that differ significantly. To combine rankings, we use SVD-based aggregation, which has previously been shown to be effective for this task [10]. We construct a matrix X from the ranks (rather than the raw scores), with users on the rows and rankings on the columns. We then apply SVD to this matrix and extract the first left singular vector. The values in this vector provide aggregated scores for the users. By arranging these values in descending order, we can produce a final ranking of users. We select the top r users to form our list of user recommendations. Finally, we can also apply additional filtering of recommendations based on a minimum tweet count filter and a filter to remove users who have not tweeted within a given time period.
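A minimal sketch of the aggregation step is given below, assuming a users-by-rankings matrix of rank positions; the handling of the singular vector's sign and the exact matrix construction follow our reading of the description above and of [10], so they should be treated as assumptions rather than the authors' code.

import numpy as np

def aggregate_rankings(rank_matrix, r=50):
    """rank_matrix: array of shape (users, rankings), entry = rank position."""
    X = np.asarray(rank_matrix, dtype=float)
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    scores = U[:, 0]                  # first left singular vector
    if scores.sum() < 0:              # the overall SVD sign is arbitrary; fix it
        scores = -scores
    order = np.argsort(scores)[::-1]  # arrange aggregated scores in descending order
    return order[:r]                  # indices of the top-r recommended users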
After a set of recommendations has been generated, the ranked list of suggested users would be presented to a human curator, who could then select a subset to migrate to the core set (i.e. to augment the existing Twitter user list). The use of a "human in the loop" in the proposed system resembles the role of the oracle in active learning algorithms for classification [8].
Network Exploration
Once the core set has been modified, the system enters the update phase, which modifies the current copy of the network to reflect (a) changes in membership of the core set, and (b) any changes in the Twitter network since the last update (e.g. addition/removal of follower links, new tweets). Specifically, the network is explored using a process based both on the follower graph and also on tweet content:
• For the current core set, retrieve their friend/follower links, user list memberships, and recent tweets for all set members (i.e. same process as in the bootstrap phase).
• For the last set of recommended users who were not migrated to the core set, retrieve their friend/follower links, user list memberships, and recent tweets.
• For the set of m users who were most frequently mentioned in tweets posted by the core set, retrieve their friend/follower links, user list memberships, and recent tweets.
Again the extent of the exploration for each of the above can be controlled by setting maximum values for the number of links to follow and tweets to retrieve. Once the local copy of the network has been updated, the data then feeds back to the recommendation phase and another iteration of the recommendation-selection-update process is executed. A visual overview of the complete curation system process is shown in Fig. 1.

Case Study 1: Iowa

Experimental Setup

First, we evaluated the proposed recommendation system on a Twitter user list previously curated by Storyful, covering Iowa politics during the 2012 US Presidential Primaries 2 . At the time of initial data collection - 16 September 2011 - this list contained 128 unique users. To evaluate the robustness of the user recommendation process, we use cross validation, randomly dividing the complete Iowa user list into four disjoint datasets, each containing 32 users. As an example, the subgraph induced by the core set on the follower graph of Iowa dataset #1 is shown in Fig. 2(a); the positions of nodes were calculated using the force directed layout implementation provided by Gephi [1].
In our experiments we applied the workflow shown in Fig. 1 to each of the sets individually for six recommendation-selection-update iterations after the initial bootstrapping phase. Note that no information was shared between the runs. The extent of network exploration during the update phase was controlled using the following constraints:
• A maximum of up to 1,000 friend/follower links were retrieved per user at a given iteration.
• A maximum of up to 1,000 user lists were retrieved per user at a given iteration.
• A maximum of up to 1,000 tweets was retrieved per user at a given iteration.
• Very high-degree users (> 50,000 friends and/or 50,000 followers) were filtered.
To generate recommendations, we used the views and criteria as described in Section 2. We filtered the recommendations to remove those users who had not tweeted in the previous two weeks and/or those who had posted fewer than 25 tweets in total. At each iteration we generated r = 50 recommendations -by the final iteration, users were selected from a complete candidate set with average size of ≈ 62k users. At this stage, we had also collected an average ≈ 63k tweets and ≈ 138k follower links for each dataset. In place of a manual curator, after each complete iteration we automatically selected the top five highest ranked users (based on SVD aggregation) to add to the core set. The six iterations thus yielded 30 additional core users for each of the four sets. As an example, the final expanded core set for Iowa dataset #1 is shown in Fig. 2(b). It is interesting to observe that several high-degree nodes were added to the core set, such as the user @TerryBranstad, the official account of the Governor of Iowa.
Discussion
Next, to quantitatively validate the relevance of the recommendations produced by our proposed techniques, we use two measures that are frequently used in information retrieval tasks: precision and recall.

In general, we observe that the Iowa user list studied here consists of a relatively homogeneous group of users pertaining to a story with a relatively narrow focus - the users are predominantly Republicans involved in the Iowa caucuses. Therefore, unlike the study in [2] which analysed Twitter relations across the entire country during the 2010 US midterm elections, here a pronounced partisan divide is not evident.
Case Study 2: Bahrain
Experimental Setup
For our second study, we analyse a dataset with significantly different characteristics. As a seed list we use a Twitter list covering the current political situation in Bahrain which was also manually curated by Storyful 3 . As of 27 September 2011, this list contained 51 users. A small number of these have a "loyalist" or "pro-government" stance, while the remaining users could be regarded as being either "non-loyalist", or "neutral" observers with an interest in Bahrain. This natural division in the seed list raises an interesting question -does starting with a seed list that takes a particular stance on a given news story lead to the construction of localised network "silos", which may lead an automated system to give biased user recommendations?
To investigate this, we generate recommendations based on a seed list Bahrain-L containing a subset of 14 users that have been putatively labelled as "loyalist". We ran four complete iterations using the same exploration constraints, filters, and selection mechanism used in the previous evaluation. This resulted in a core set containing 34 users, a candidate set of 51,114 users, 138,777 follower links, and 53,450 unique tweets. Fig. 3(a) shows the subgraph induced by the original complete curated list of 51 users on the follower graph -the split between the "loyalist" users and the other users is evident from the positions calculated by force directed layout. In particular, the latter group of users form a densely connected core, while most of the "loyalist" nodes are not well-connected with the rest of the subgraph. Fig. 3(b) shows a subgraph induced by the union of the curated list, with the set of nodes selected based on the recommendation process using Bahrain-L alone as the seed set. We observe that none of the 37 "non-loyalist" nodes from the curated list were selected during the four iterations. In contrast, we see that the new users are closely connected with the other "loyalist" users, forming a second dense core. While we might expect this if recommendations were only generated based on follower links, recall that rankings based on mentions and retweets are also being aggregated to select new users. In fact, the addition of these rankings appears to further compound the "silo" effect which is evident from Fig. 4.2.
Discussion
Our analysis suggests that there is little interaction on Twitter between users with differing stances on the political situation in Bahrain. On the one hand, this highlights a weakness of the proposed recommendation techniques in the case of stories that are highly-polarised. Alternative criteria, which emphasise diversity over homogeneity, may provide a solution - this is analogous to the attempts in active learning to identify diverse examples in order to widely cover the sample space [8]. On the other hand, these results also highlight the continued importance of the role of the curator in (a) selecting a suitably diverse seed list as a starting point, and (b) actioning recommendations produced by the system.
Conclusions
In this paper we have proposed a comprehensive approach for automating aspects of the Twitter list curation process, based on novel network exploration and multi-view recommendation techniques.
In the evaluation in Section 3, we showed that, using different starting subsets of a manually-curated list, we can recall the original human annotations while maintaining high precision.
Based on the observations made in Section 4, we suggest that the next major phase of our work will involve exploring the diffusion patterns of newsworthy multimedia resources (e.g. links to images and videos) in the network surrounding a user list. For instance, identifying users who are frequently early in retweet chains for such resources may help diversify user list recommendations in situations where the "silo" effect is pronounced, such as in the Bahrain case study. In future we also plan to apply the proposed recommendation and network exploration techniques beyond Twitter, looking at multiple views across several different social networks. A key issue here will be the generation of a reliable mapping between users on different networks.
Figure 2: Induced subgraph of the follower graph for the core set members in the Iowa dataset #1 after (a) the initial bootstrap phase, (b) six complete iterations. Larger nodes with a more saturated colour are indicative of nodes with a higher in-degree (i.e. users with more followers within the core set). Highlighted edges indicate reciprocated follower links between users. Layout positions are preserved for both figures.
Figure 3: Follower graph for core set members in the Bahrain dataset after (a) the initial bootstrap phase, (b) four complete iterations. Blue nodes denote users in the original user list that are putatively labelled as "loyalist", while the remaining members of the user list are coloured green. The additional nodes that have been selected, based on recommendations using Bahrain-L as a seed list, are coloured red.
http://www.storyful.com
http://twitter.com/#!/trailmix12/iowa
http://twitter.com/#!/storyfulpro/bahrain
Acknowledgments

This work is supported by Science Foundation Ireland Grant No. 08/SRC/I140 (Clique: Graph & Network Analysis Cluster). The authors thank Storyful for their participation in the evaluations performed in this paper.
References

[1] M. Bastian, S. Heymann, and M. Jacomy. Gephi: An open source software for exploring and manipulating networks. In Proc. International AAAI Conference on Weblogs and Social Media (ICWSM'09), pages 361-362, 2009.

[2] M. D. Conover, J. Ratkiewicz, M. Francisco, B. Gonçalves, A. Flammini, and F. Menczer. Political polarization on Twitter. In Proc. 5th International AAAI Conference on Weblogs and Social Media (ICWSM'11), 2011.

[3] J. Hannon, M. Bennett, and B. Smyth. Recommending twitter users to follow using content and collaborative filtering approaches. In Proc. 4th ACM Conference on Recommender Systems, pages 199-206. ACM, 2010.

[4] P. Jaccard. The distribution of flora in the alpine zone. New Phytologist, 11(2):37-50, 1912.

[5] J. M. Kleinberg. Authoritative sources in a hyperlinked environment. Journal of the ACM (JACM), 46(5):604-632, 1999.

[6] J. Ratkiewicz, M. Conover, M. Meiss, B. Gonçalves, A. Flammini, and F. Menczer. Detecting and tracking political abuse in social media. In Proc. 5th International AAAI Conference on Weblogs and Social Media (ICWSM'11), 2011.

[7] G. Salton and C. Buckley. Term weighting approaches in automatic text retrieval. Technical Report 87-881, Department of Computer Science, Cornell University, Ithaca, NY, USA, 1987.

[8] B. Settles. Active Learning Literature Survey. Technical Report 1648, University of Wisconsin-Madison, 2009.

[9] S. White and P. Smyth. Algorithms for estimating relative importance in networks. In Proc. 9th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 266-275. ACM, 2003.

[10] G. Wu, D. Greene, and P. Cunningham. Merging multiple criteria to identify suspicious reviews. In Proc. 4th ACM Conference on Recommender Systems (RecSys'10), 2010.
| [] |
[
"Heat kernel regularization of the effective action for stochastic reaction-diffusion equations",
"Heat kernel regularization of the effective action for stochastic reaction-diffusion equations"
] | [
"+,++David Hochberg \nLaboratorio de Astrofísica Espacial y Física Fundamental\n++ Centro de Astrobiología\nApartado 5072728080MadridSpain\n\nTheoretical Division T-8\nCSIC-INTA\nCarretera de Ajalvir Km. 428850Torrejón de Ardoz, MadridSpain +++\n\n++++ Physics Department\nLos Alamos National Laboratory\n87545Los AlamosNew Mexico\n\nWashington University\n63130-4899Saint LouisMissouriUSA\n",
"++Carmen Molina-París \nLaboratorio de Astrofísica Espacial y Física Fundamental\n++ Centro de Astrobiología\nApartado 5072728080MadridSpain\n\nTheoretical Division T-8\nCSIC-INTA\nCarretera de Ajalvir Km. 428850Torrejón de Ardoz, MadridSpain +++\n\n++++ Physics Department\nLos Alamos National Laboratory\n87545Los AlamosNew Mexico\n\nWashington University\n63130-4899Saint LouisMissouriUSA\n",
"Matt Visser ++++ \nLaboratorio de Astrofísica Espacial y Física Fundamental\n++ Centro de Astrobiología\nApartado 5072728080MadridSpain\n\nTheoretical Division T-8\nCSIC-INTA\nCarretera de Ajalvir Km. 428850Torrejón de Ardoz, MadridSpain +++\n\n++++ Physics Department\nLos Alamos National Laboratory\n87545Los AlamosNew Mexico\n\nWashington University\n63130-4899Saint LouisMissouriUSA\n"
] | [
"Laboratorio de Astrofísica Espacial y Física Fundamental\n++ Centro de Astrobiología\nApartado 5072728080MadridSpain",
"Theoretical Division T-8\nCSIC-INTA\nCarretera de Ajalvir Km. 428850Torrejón de Ardoz, MadridSpain +++",
"++++ Physics Department\nLos Alamos National Laboratory\n87545Los AlamosNew Mexico",
"Washington University\n63130-4899Saint LouisMissouriUSA",
"Laboratorio de Astrofísica Espacial y Física Fundamental\n++ Centro de Astrobiología\nApartado 5072728080MadridSpain",
"Theoretical Division T-8\nCSIC-INTA\nCarretera de Ajalvir Km. 428850Torrejón de Ardoz, MadridSpain +++",
"++++ Physics Department\nLos Alamos National Laboratory\n87545Los AlamosNew Mexico",
"Washington University\n63130-4899Saint LouisMissouriUSA",
"Laboratorio de Astrofísica Espacial y Física Fundamental\n++ Centro de Astrobiología\nApartado 5072728080MadridSpain",
"Theoretical Division T-8\nCSIC-INTA\nCarretera de Ajalvir Km. 428850Torrejón de Ardoz, MadridSpain +++",
"++++ Physics Department\nLos Alamos National Laboratory\n87545Los AlamosNew Mexico",
"Washington University\n63130-4899Saint LouisMissouriUSA"
] | [] | The presence of fluctuations and non-linear interactions can lead to scale dependence in the parameters appearing in stochastic differential equations. Stochastic dynamics can be formulated in terms of functional integrals. In this paper we apply the heat kernel method to study the short distance renormalizability of a stochastic (polynomial) reaction-diffusion equation with real additive noise. We calculate the one-loop effective action and its ultraviolet scale dependent divergences. We show that for white noise a polynomial reaction-diffusion equation is one-loop finite in d = 0 and d = 1, and is one-loop renormalizable in d = 2 and d = 3 space dimensions. We obtain the one-loop renormalization group equations and find they run with scale only in d = 2.PACS number(s): 02.50.Ey;02.50.-r;05.40.+j | 10.1103/physreve.63.036132 | [
"https://arxiv.org/pdf/cond-mat/0009424v1.pdf"
] | 14,844,864 | cond-mat/0009424 | acd1842a764be7543171759be0ef8cf67c81f835 |
Heat kernel regularization of the effective action for stochastic reaction-diffusion equations
27 Sep 2000
+,++David Hochberg
Laboratorio de Astrofísica Espacial y Física Fundamental
++ Centro de Astrobiología
Apartado 5072728080MadridSpain
Theoretical Division T-8
CSIC-INTA
Carretera de Ajalvir Km. 428850Torrejón de Ardoz, MadridSpain +++
++++ Physics Department
Los Alamos National Laboratory
87545Los AlamosNew Mexico
Washington University
63130-4899Saint LouisMissouriUSA
++Carmen Molina-París
Laboratorio de Astrofísica Espacial y Física Fundamental
++ Centro de Astrobiología
Apartado 5072728080MadridSpain
Theoretical Division T-8
CSIC-INTA
Carretera de Ajalvir Km. 428850Torrejón de Ardoz, MadridSpain +++
++++ Physics Department
Los Alamos National Laboratory
87545Los AlamosNew Mexico
Washington University
63130-4899Saint LouisMissouriUSA
Matt Visser ++++
Laboratorio de Astrofísica Espacial y Física Fundamental
++ Centro de Astrobiología
Apartado 5072728080MadridSpain
Theoretical Division T-8
CSIC-INTA
Carretera de Ajalvir Km. 428850Torrejón de Ardoz, MadridSpain +++
++++ Physics Department
Los Alamos National Laboratory
87545Los AlamosNew Mexico
Washington University
63130-4899Saint LouisMissouriUSA
Heat kernel regularization of the effective action for stochastic reaction-diffusion equations
27 Sep 2000. Keywords: Reaction-diffusion; Effective action; Beta functions
The presence of fluctuations and non-linear interactions can lead to scale dependence in the parameters appearing in stochastic differential equations. Stochastic dynamics can be formulated in terms of functional integrals. In this paper we apply the heat kernel method to study the short distance renormalizability of a stochastic (polynomial) reaction-diffusion equation with real additive noise. We calculate the one-loop effective action and its ultraviolet scale dependent divergences. We show that for white noise a polynomial reaction-diffusion equation is one-loop finite in d = 0 and d = 1, and is one-loop renormalizable in d = 2 and d = 3 space dimensions. We obtain the one-loop renormalization group equations and find they run with scale only in d = 2.PACS number(s): 02.50.Ey;02.50.-r;05.40.+j
I. INTRODUCTION
Many examples abound where particular spacetime distributions of matter are selected over a wide variety of seemingly possible choices. In many of these cases, these patterns are well described, and their temporal evolution accurately modelled, by non-linear partial differential equations subject to noise, or equivalently, by stochastic partial differential equations (SPDEs). Concrete examples can be found in the domains of pattern formation, chemical chaos, biological morphogenesis, and flame-front propagation, just to name a few [1][2][3]. As argued in Ref. [4] the effective potential is a superb tool for studying the onset of pattern formation (i.e., symmetry breaking) about the static and spatially homogeneous solutions of the SPDE. This quantity takes into account both interactions and fluctuations to a given number of loops. (The loop expansion is a controlled expansion in the amplitude of the fluctuations.) To go beyond the study of symmetry breaking and static homogeneous configurations, it is crucial to be able to include and account for time development and spatial inhomogeneities. Ultimately, we are interested in studying the late time behavior of the solutions of SPDEs; we would like to know if there is an asymptotic steady state, or state of equilibrium, as this impinges on the late time behavior of the emergent pattern. The effective potential does not yield this information, and one must turn to the effective action. The effective action contains all the dynamical solutions of the SPDE and their asymptotic behavior. It obeys a variational principle wherein its first variation yields the SPDE. Its exact calculation, however, is very difficult. Along the way, one must first "regularize" its short distance divergences that are dimension dependent. These must be identified, isolated, regulated, and if possible, renormalized by suitable redefinitions of the bare parameters appearing in the SPDE. The study of these dimension dependent divergences lends itself to a direct calculation of the renormalization group equations (RGEs) that encode the scale dependence (or "running") of these parameters. The RGE fixed points provide information about the asymptotic states. For these and other reasons, the divergent structure of the effective action warrants a detailed analysis in its own right.
In this paper we study the stochastic reaction-diffusion (RD) equation by means of a functional integral representation that will be used to calculate its one-loop effective action. In earlier work we calculated and analyzed the effective potential for the RD equation (the effective action evaluated for special field configurations that are time independent) [5]. However, the effective potential does not provide information about the dynamics of the system or wavefunction renormalization. The effective potential can only signal the static homogeneous states around which one can study the onset of pattern formation (i.e., symmetry breaking), but does not indicate which such state (if any) is dynamically accessible or most likely. To go beyond the limitations inherent in an effective potential analysis, and in order to judge the importance of the noise-induced symmetry broken ground state configuration, one should explore the space of dynamical (spatially inhomogeneous) solutions of the system. For instance, are there solutions that indicate the system is "thermalizing", or can we find steady state solutions? An important step in this direction is the analysis of the effective action undertaken in this paper. In this calculation we encounter short distance divergences that need to be regularized. We have chosen to carry out this regularization procedure by means of (generalized) heat kernel techniques. To do so, we will be required to calculate the first (integrated) Seeley-DeWitt coefficients.
In addition to the physical RD field itself, we must also calculate the contribution from the ghost field, a necessary ingredient in this formalism [4].
After the regularization step, we turn to the one-loop renormalization of the effective action. We find that for additive white noise RD systems are one-loop finite in d = 0 and d = 1 space dimensions, and (at least) one-loop renormalizable in d = 2 and d = 3 space dimensions for polynomial reaction kinetics. There is no wavefunction renormalization in any of these dimensions, irrespective of the degree of the reaction polynomial [6]. Moreover, the ultraviolet renormalizability requires including a bare tadpole λ 0 , or constant term, in the equation of motion.
Application of heat kernel methods to stochastic field theories far from equilibrium may not be familiar, so a few comments regarding them may be useful. These techniques have been used primarily for calculating effective actions and the one-loop physics in quantum field theories (QFTs) and in curved space quantum field theory [7][8][9][10][11][12]. In the quantum domain, one is interested in computing quantities such as the effective action and the effective potential , which provide crucial information regarding the structure of the underlying theory at different length and time scales and are important in assessing the theory's renormalizability (or lack thereof), the determination of the running of couplings and parameters, patterns of spontaneous and dynamical symmetry breaking, and the structure of short distance (ultraviolet) and long distance (infrared) divergences [13][14][15][16][17]. Moreover, for renormalizable theories, the computation of the effective action (actually, only its divergent part is needed) can be used to extract the RGEs that govern the scale dependence of the couplings and parameters appearing in the theory [18][19][20]. Though perhaps better known in the context of these fields, these same techniques can be generalized and applied to reveal the corresponding one-loop physics associated with stochastic dynamic phenomena and to systems subject to fluctuations.
As a key technical step, we need to obtain the Green functions for certain higher-order differential operators that are neither elliptic nor hyperbolic. (The operators treated here are second-order in physical time derivatives and fourth-order in space derivatives.) We set up a generalization of the Schwinger proper time asymptotic expansion to compute these Green functions and obtain explicit expressions for the first Seeley-DeWitt coefficients associated with these operators. We carry this out employing the minimal representation in which only physical and ghost field variables appear in the action [4].
At this stage we emphasize the following: We start from a given "classical" SPDE, together with a specification of the type of noise and its correlations. Our aim is to map this SPDE to an equivalent generating functional and effective action, at which point QFT techniques can be used. We emphasize that our approach and use of field theory is rather different from that initiated some years ago, whose aim is to derive the SPDE and the noise correlations from microphysics by mapping a classical master equation to "second-quantized" variables, and then finally to an action principle [21][22][23].
We conclude with a summary and discussion of this work, and prospects for further development.
II. STOCHASTIC FIELD THEORY FOR REACTION-DIFFUSION EQUATIONS
In this paper we demonstrate that heat kernel techniques can be used to compute the ultraviolet divergent terms arising in the one-loop effective action associated with field theory formulations of SPDEs. In particular, we apply these methods to the class of single component reaction-diffusion equations
Dφ(x) = V[φ(x)] + η(x),   (1)
where
D = ∂/∂t − ν∇²,   (2)
and
V[φ] = Σ_{j=0}^{N} (λ_j / j!) φ^j(x).   (3)
Here η(x) is a noise term with normalized probability distribution given by P [η]. We employ the shorthand notation x = ( x, t). Furthermore, V [φ] is a polynomial of degree N in the concentration (or field variable φ) and the λ j 's are a set of reaction rates. For convenience we have placed the decay (or "mass" term) into the potential and have treated it as another coupling: λ 1 ≡ −m 2 . It must be noted that this equation has the form of a purely dissipative system and has a bona-fide potential energy term V [φ] [15]. Thus, it makes sense to calculate an effective potential for constant field configurations, as well as an effective action for inhomogeneous fields. Both the effective action and the effective potential are derived by means of a field theory for this SPDE.
Before continuing, we should point out that there are reasons to believe that as a phenomenological equation, the RD being considered here, (and others structurally similar to it), might not be adequate to describe certain two-body annihilation processes, or pair reaction kinetics, since recent derivations based on master equations show that the SPDE in question should actually be complex with imaginary noise (leading to negative noise correlations) [6,24,25]. On the other hand, for processes involving particle clustering, these same derivations yield real stochastic equations and noise, as well as positive noise correlations [26]. Of course, there are many situations in which a microscopic derivation of the SPDE is entirely out of the question, either because explicit knowledge of the microscopic details is lacking and/or because the random fluctuations owe to uncontrollable contingencies. In these situations the benefit of adopting a phenomenological strategy should be self-evident. Finally, the application of this equation need not be restricted to just chemical diffusion [27].
For homogeneous and static concentrations it is sufficient to study the effective potential [5]. In this paper we complement that analysis by making use of the effective action to consider inhomogeneous and time dependent field configurations. In the minimalist approach (see Ref. [4]) one starts with the normalized generating functional Z[J] encoding the stochastic dynamics described by the RD equation (1). This involves the RD scalar field φ plus the Jacobian determinant (denoted here by J ) and its adjoint (J † ), (these determinants arise from a change of variables). The generating functional (partition function) is given by [4,15]
Z[J] = ( ∫[dφ] exp{[−S[φ] + ∫ d^n x J(x) φ(x)]/A} √(J J†) ) / ( ∫[dφ] exp{−S[φ]/A} √(J J†) ),   (4)
where the "classical" action is [valid for Gaussian noise: G_η(x, y) = ⟨η(x)η(y)⟩ = A g_2(x, y)]

S[φ] = (1/2) ∫ d^n x_1 d^n x_2 (Dφ(x_1) − V[φ(x_1)]) g_2^{−1}(x_1, x_2) (Dφ(x_2) − V[φ(x_2)]),   (5)
with n = d + 1 and d the number of space dimensions (we will keep d as a free parameter throughout this paper). For a general SPDE there may be non-vanishing contributions from the "ghost" fields (Jacobian determinants). We follow here the discussion of Ref. [4] to separate the noise two-point function into the product of a shape g 2 (x, y) and a constant amplitude A. Irrespective of how we decide to normalize the shape, the constant amplitude A is always the loop-counting parameter of the perturbation expansion [4]. A loop-counting parameter is very useful in organizing such a perturbative expansion. Moreover, any symmetry that is present in the classical action (5) is preserved at each order in the loop expansion since the loop-counting parameter multiplies the entire action (and the source term J) in (4). One of the advantages of the minimal representation is that it leads to this natural identification of the noise amplitude [4,15]. We introduce the generating functional for connected correlation functions W [J] and its Legendre transform, the effective action Γ[φ] [15] , (note the explicit factor of the noise amplitude A)
W[J] = +A log Z[J],   (6a)
Γ[φ] = −W[J] + ∫ d^n x J(x) ( φ̄[J](x) − φ̄[0](x) ),   (6b)

with

φ̄[J] = δW[J]/δJ.   (7)
The barred fields φ̄[J] and φ̄[0] are the solutions of the equations of motion

δΓ[φ]/δφ |_{φ̄[J]} = J(x),  and  δΓ[φ]/δφ |_{φ̄[0]} = 0,   (8)
respectively. It is usually assumed that the former equation has a unique solution φ̄[J] (at least for small J), and that for vanishing source (J = 0) the unique solution is the vanishing mean field φ̄[0] = 0, i.e., ⟨φ(x, t)⟩_{J=0} = 0, where the angular brackets denote the stochastic average with respect to the noise probability distribution P[η]. This is valid for a symmetric ground state. We next expand the action (5) about this background up to quadratic order in the stochastic fluctuation δφ = φ − φ̄. We can carry out a perturbation expansion in the small parameter A and compute the one-loop effective action to obtain [13]

Γ[φ] = S[φ] − S[0] + (A/2) { Tr log S^(2)_field[φ] − Tr log S^(2)_field[0] − log J[φ] − log J†[φ] + log J[0] + log J†[0] } + O(A²)
     = S[φ] − S[0] + Γ^(1−loop)[φ] + O(A²)
     = S[φ] − S[0] + Γ^(1−loop)_ǫ[φ] + Γ^(1−loop)_finite[φ] + O(A²),   (9)
where the matrix elements of the Jacobi field operator S^(2)_field[φ] are

⟨x_1| S^(2)_field[φ] |x_2⟩ = S^(2)_field(φ, x_1, x_2) = δ²S[φ] / δφ(x_1) δφ(x_2).   (10)
We have anticipated the appearance of divergences in the one-loop contribution to the effective action, arising from both the physical and ghost fields, and have supplied it with a cut-off ǫ. The notation S[0] is actually shorthand for S[φ[J = 0]], and for a symmetric ground state one typically has φ[0] = 0 and S[0] = 0, unless there is a "tadpole" in the classical action. In fact, when looking for mean field solutions of the zero-loop equation of motion, we will find it convenient to consider a non-vanishing value of the mean field φ[0] = v 0 = 0 and will study fluctuations about this mean value. The terms involving S[0] and S (2) field [0] appear due to the normalization factor in (4). The notation "Tr" stands for the trace and indicates that we are to take the (time and space) coincidence limits x 2 → x and x 1 → x, followed by an integration over the common limit x. The one-loop effective action will contain divergent terms and it is precisely these terms we wish to isolate and compute. We have collected all such divergences into the expression Γ f inite . There may also be higher-loop contributions, denoted by O(A 2 ), whenever we need to emphasize them explicitly. Although these latter contributions are important for constructing the full effective action, that calculation is beyond the scope of the present paper.
In order to compute the one-loop effective action we need to obtain S^(2)_field[φ]. This Jacobi field operator is diagonal in coordinate space. For the purpose of this calculation and in the interest of simplicity, we consider the case of white noise. (Colored noise can be dealt with, but it brings in time and space derivatives of the shape function, which complicate the heat kernel analysis.) For white noise we have ⟨η(x)η(y)⟩ = 2 D_0 δ^n(x, y) and therefore we can write

G_η(x, y) = 2 D_0 δ^n(x, y),  ⇒  A = 2 D_0,  g_2(x, y) = δ^n(x, y),  and  g_2^{−1}(x, y) = δ^n(x, y),   (11)
which fixes the noise normalization. The Jacobi field operator corresponding to the RD equation is easy to calculate starting from the classical action. We simplify notation and write the zero-noise action as
S[φ] = (1/2) ∫ d^n x (Dφ − V[φ])² = (1/2) ∫ d^n x (∂_t φ − ν∇²φ − V[φ])².   (12)
The Jacobi operator for the physical field, S^(2)_field(φ, x_1, x_2), is given by

S^(2)_field(φ, x_1, x_2) ≡ [ (−∂_t − ν∇² − V′[φ(x_1)]) (∂_t − ν∇² − V′[φ(x_1)]) − V′′[φ(x_1)] (Dφ(x_1) − V[φ(x_1)]) ] δ^n(x_1, x_2),   (13)

where V′[φ] = dV[φ]/dφ and V′′[φ] = d²V[φ]/dφ².
For the ghost field the Jacobi operator is given by [4]
S^(2)_ghost(φ, x_1, x_2) ≡ (−∂_t − ν∇² − V′[φ(x_1)]) (∂_t − ν∇² − V′[φ(x_1)]) δ^n(x_1, x_2),   (14)
and its determinant can be written as [4]

det[S^(2)_ghost(φ, x_1, x_2)] = J[φ] J†[φ].   (15)
In order to carry out the perturbation expansion we also need to consider the "free" case defined by the limit V[φ] → 0:

S^(2)_free(x_1, x_2) = (−∂_t − ν∇²) (∂_t − ν∇²) δ^n(x_1, x_2) = [−∂_t² + (ν∇²)²] δ^n(x_1, x_2).   (16)
Free physical fields have the same Jacobi operator as free ghost fields, so that as V[φ] → 0, the physical and ghost field contributions to the effective action cancel. We now look ahead a little: as the Jacobi operator S^(2)_free(x_1, x_2) contains fourth order space derivatives (bi-harmonic operator), rather than second order derivatives, the behavior of the DeWitt-Schwinger expansion [7][8][9][10][11] is qualitatively different in that it includes fractional powers of the Schwinger proper time parameter. We now calculate the mean field v_0 (i.e., the background field) by studying the solutions of the classical equation of motion, which is given (for arbitrary source J) by
δS/δφ |_{φ[J]} = (−∂_t − ν∇² − V′[φ(x)]) (Dφ(x) − V[φ(x)]) = J(x).   (17)
If the source vanishes and the mean field is homogeneous and static, we have
V′[v_0] V[v_0] = 0,   (18)
which always has at least one real solution [5].
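For concreteness, the short sketch below (with arbitrary, purely illustrative reaction rates that are not taken from the paper) solves Eq. (18) symbolically for a cubic reaction polynomial and confirms that at least one real root exists.

import sympy as sp

v = sp.symbols('v', real=True)
lam0, lam1, lam2, lam3 = 1, -2, 3, 4        # hypothetical reaction rates, for illustration only
V = lam0 + lam1*v + sp.Rational(1, 2)*lam2*v**2 + sp.Rational(1, 6)*lam3*v**3

poly = sp.Poly(sp.expand(sp.diff(V, v) * V), v)   # V'(v) V(v), a degree-5 polynomial
real_roots = sp.real_roots(poly)                  # exact real roots only
print([sp.N(r, 6) for r in real_roots])           # non-empty list: at least one real v_0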
In order to calculate the one-loop effective action for RDs, one must include the contribution from the "ghost" fields. These "ghost" Jacobians are given by [5]
J = det( D − δV/δφ ),  and  J† = det( D† − δV†/δφ ).   (19)
We can now complete the formal calculation of the one-loop effective action. We have [4]

Γ[φ] = S[φ] − S[v_0] + (A/2) { Tr log S^(2)_field[φ] − Tr log [ (D† − δV†/δφ)(D − δV/δφ) ] − (φ → v_0) } + O(A²),   (20)
so the one-loop effective action receives one contribution from the physical field
Γ^(1−loop)_field = (A/2) { Tr log S^(2)_field[φ] − Tr log S^(2)_field[v_0] },   (21)
and a contribution from the ghost field
Γ^(1−loop)_ghost = −(A/2) { Tr log [ (D† − δV†/δφ)(D − δV/δφ) ]|_φ − Tr log [ (D† − δV†/δφ)(D − δV/δφ) ]|_{v_0} }.   (22)
We will soon see that individually, each contribution has complicated Seeley-DeWitt coefficients, but when taken together, the physical plus ghost sectors yield simple net Seeley-DeWitt coefficients.
III. COMPUTING THE ONE-LOOP EFFECTIVE ACTION
In this section we construct the regulated expression Γ^(1−loop)_ǫ[φ] for the RD equation. We follow closely the DeWitt-Schwinger (DS) proper time formalism to analyze the ultraviolet divergences [7][8][9][10][11][14][15][16]. (We have striven to keep this section self-contained.)
In this formalism the integral representation for Γ (1−loop) [φ], eq. (9), involves a fictitious "time" parameter s (denoted as Schwinger proper time). To this end, we define the following function, where A is any operator
g_α(A) ≡ ∫_0^{+∞} ds s^{α−1} e^{−sA} = A^{−α} Γ(α),   (23)
with Γ(α) the Gamma function. We consider the limit α → 0
g_α(A) → 1/α − γ − log A,   (24)
where γ = 0.577..., is Euler's constant. Although this integral is divergent, the difference of two such integrals is finite and well defined
lim_{α→0} [g_α(B) − g_α(A)] = log A − log B = − ∫_0^{+∞} (ds/s) ( e^{−sA} − e^{−sB} ).   (25)
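Equation (25) is a Frullani-type identity; for positive scalars it can be checked numerically, as in the short sketch below (illustrative only; the small lower cutoff mirrors the ǫ-regularization introduced later and is otherwise immaterial).

import numpy as np
from scipy.integrate import quad

A, B, eps = 2.0, 5.0, 1e-12     # positive "operators" reduced to scalars
integral, _ = quad(lambda s: (np.exp(-s*A) - np.exp(-s*B)) / s, eps, np.inf)
print(-integral, np.log(A) - np.log(B))   # both approximately -0.9163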
Comparing with (9), the desired proper time integral for the one-loop effective action is given by
Γ^(1−loop)[φ] = Γ^(1−loop)_field[φ] + Γ^(1−loop)_ghost[φ]   (26)
             = −(A/2) ∫_0^{+∞} (ds/s) Tr( e^{−s H_field} − e^{−s [H_0]_field} ) + (A/2) ∫_0^{+∞} (ds/s) Tr( e^{−s H_ghost} − e^{−s [H_0]_ghost} ),   (27)
where the "Hamiltonians" in the exponentials are the Jacobi operators
H_field = S^(2)_field[φ[J]] = (D† − V′)(D − V′) − V′′ (Dφ − V),   (28a)
[H_0]_field = S^(2)_field[φ[0]],   (28b)
H_ghost = S^(2)_ghost[φ[J]] = (D† − V′)(D − V′),   (28c)
[H_0]_ghost = S^(2)_ghost[φ[0]],   (28d)
H_free = D† D,   (28e)
as can be seen by comparing (26) with (9), (13), (14), and (16). To proceed with the calculation, we need an explicit form for the operators e −sH (that is, for e −sH field , e −s[H0] field , e −sH ghost , and e −s[H0] ghost ), or rather, their matrix elements, so that we can take the indicated traces. To solve for them, we note that e −sH is the exact solution of the following operator differential equation
H e^{−sH} = − ∂/∂s e^{−sH}.   (29)
If one takes matrix elements in the spacetime coordinate basis |x⟩ ≡ | x, t⟩, inserts a complete set of states, and makes use of the diagonality of H in the coordinate basis [note that S^(2)_field[φ] is proportional to δ^n(x_1, x_2)], we obtain

H(x) ⟨x|e^{−sH}|x′⟩ = ∫ dy ⟨x|H|y⟩ ⟨y|e^{−sH}|x′⟩ = ⟨x|H e^{−sH}|x′⟩ = − ∂/∂s ⟨x|e^{−sH}|x′⟩,   (30)
or equivalently
H(x) G(x, x′|s) = − ∂/∂s G(x, x′|s),  with  G(x, x′|s) ≡ ⟨x|e^{−sH}|x′⟩.   (31)
This latter equation defines the Green function G(x, x ′ |s) in terms of the matrix element of the operator e −sH in the coordinate representation. These steps can be repeated for the other Hamiltonians. Fortunately, for the purposes of the present work, it is not necessary to solve this equation exactly (for either H field or H ghost ), as we are interested in the short distance divergent part of the one-loop effective action. What we will do instead is solve the "heat" equations (29), (30), and (31) adiabatically by expanding in small positive fractional powers of the proper time variable s, which is where all the ultraviolet divergences are to be found (different techniques are required if one is interested in the infrared limit). Nevertheless, to get "off the ground" it will be most useful to have the exact solution to (30) in the free limit (V [φ] → 0). We now turn to this task, which entails solving exactly (29) with H free . Since equation (30) looks like a heat equation in a n + 1 dimensional spacetime (parabolic equation in the proper time variable), we know how to solve it together with specified boundary and/or initial conditions. In the free field limit (G → G free ), we must solve the following equation
[ −∂_t² + (ν∇²)² + ∂/∂s ] G_free(x, x′|s) = δ^n(x − x′) δ(s),   (32)
subject to the boundary (initial) condition G_free(x, x′|0) = δ^n(x − x′). Strictly speaking, the Green function depends on both arguments x and x′, but due to the translational invariance of the dynamical equation, we have G(x, x′|s) = G(x − x′|s) = G( x − x ′ , t − t′ | s), and it suffices to treat it as a function of one spacetime coordinate. We can always restore its dependence on both spacetime arguments at any time.
The formal solution for G free , expanded to fourth order in x − x ′ (along the diagonal), is given by
G_free(x, x′|s) = A_d exp( −(t − t′)²/(4s) ) s^{−1/2 − d/4} [ 1 − (C_{d,1}/(2d)) | x − x ′ |²/(ν √s) + (C_{d,2}/(8d(d + 2))) (| x − x ′ |²)²/(ν² s) + O( (| x − x ′ |²)³ / s^{3/2} ) ],   (33)
where
A_d = (1/(4π))^{1/2} [ π^{d/2} Γ(d/4) / ( 2 (2π)^d Γ(d/2) ) ] (1/ν²)^{d/4},  and  C_{d,n} = Γ((d + 2n)/4) / Γ(d/4).   (34)
Details of the calculation leading to the final expression for G (free) (x, x ′ |s) are given in Appendix A.
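As an independent cross-check (not part of the paper's derivation), the prefactor A_d given above can be compared with a direct momentum-space evaluation of the coincidence limit of the free heat kernel; the sketch below does this for d = 1, 2, 3 under the stated normalization, with ν set to unity.

import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

def A_closed(d, nu=1.0):
    # Closed form of the prefactor A_d
    return (1.0/(4*np.pi))**0.5 * np.pi**(d/2.0)*gamma(d/4.0) \
           / (2*(2*np.pi)**d * gamma(d/2.0)) * nu**(-d/2.0)

def A_numeric(d, nu=1.0, s=1.0):
    # Coincidence limit from the Fourier representation of the free kernel:
    # [G_free](s) = (4 pi s)^(-1/2) * Int d^d k/(2 pi)^d exp(-s nu^2 k^4)
    S_sphere = 2*np.pi**(d/2.0)/gamma(d/2.0)          # surface of the unit (d-1)-sphere
    radial, _ = quad(lambda k: k**(d-1)*np.exp(-s*nu**2*k**4), 0.0, np.inf)
    G_coincidence = (4*np.pi*s)**-0.5 * S_sphere/(2*np.pi)**d * radial
    return G_coincidence * s**(0.5 + d/4.0)           # strip off the s^(-1/2 - d/4) factor

for d in (1, 2, 3):
    print(d, A_closed(d), A_numeric(d))               # the two columns should agree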
We also wish to point out that for static and homogeneous background fields (v 0 ) the computation of G field [v 0 ] and G ghost [v 0 ] is not much more complicated than that for G free . Details are presented in Appendix B. These static and homogeneous calculations allow one to compare with the effective potential formalism of [5], and serve as a check on the current effective action calculation.
We adopt the following ansatz to perturbatively solve the heat equations (30) for small s (adiabatic approximation)
G_field(x, x′|s) = G_free(x, x′|s) f_field(x, x′|s),   (35a)
G_ghost(x, x′|s) = G_free(x, x′|s) f_ghost(x, x′|s),   (35b)
where
f_field(x, x′|s) = Σ_{l=0}^{+∞} s^{l/2} b_{l/2}(x, x′) = b_0 + s^{1/2} b_{1/2} + s b_1 + O(s^{3/2}),   (36a)
f_ghost(x, x′|s) = Σ_{l=0}^{+∞} s^{l/2} a_{l/2}(x, x′) = a_0 + s^{1/2} a_{1/2} + s a_1 + O(s^{3/2}),   (36b)
are asymptotic series in half-integer powers of the proper time with coefficient functions b_{l/2} and a_{l/2} (called "Seeley-DeWitt" coefficients). Note that we have had to consider fractional powers in this small s expansion. [By considering simple cases it is easy to convince oneself that for a differential operator of order n the "heat kernel" expansion should start with an overall factor proportional to s^{−d/n} and then contain subdominant terms that are integer powers of s^{2/n}.]
In principle, these coefficients can be calculated to arbitrarily high order by solving recursion relations obtained by substituting (35a) and (35b) into (30) for H field and H ghost , respectively. The boundary (initial) conditions
G field (x, x ′ |0) = G ghost (x, x ′ |0) = δ n (x, x ′ )
imply that b 0 = 1 and a 0 = 1. These coefficients start the Seeley-DeWitt hierarchy and allow for a complete determination of the Seeley-DeWitt coefficients appearing in (36a)-(36b). For second-order differential operators this procedure has now become automated [28]. For fourth-order differential operators considerably less is known [29].
In practice, we will see that only the first Seeley-DeWitt coefficients are germane to the problem and that it is sufficient to find the "integrated" Seeley-DeWitt coefficients. This permits us to calculate the relevant coefficients by means of a technique based on the Feynman-Hellman formula [30], which can itself be viewed as a specialization of the Baker-Campbell-Hausdorff formula [31].
We regulate the one-loop effective action by cutting off the lower limit of the proper time integral
Γ^(1−loop)_ǫ[φ] ≡ −(A/2) ∫_ǫ^{+∞} (ds/s) Tr( e^{−s H_field} − e^{−s [H_0]_field} ) + (A/2) ∫_ǫ^{+∞} (ds/s) Tr( e^{−s H_ghost} − e^{−s [H_0]_ghost} ),   (37)

where we can identify ǫ = 1/Ω²_cut−off and Γ^(1−loop)_ǫ[φ] with Γ^(1−loop)_{Ω_cut−off}[φ]. As the product sH must be dimensionless, we deduce that s has engineering dimensions of (time)², or equivalently, (frequency)^{−2}. In this stochastic field theory the cut-off can be taken to be a frequency scale Ω_cut−off, and this identification allows us to compare between these two types of cut-off (proper time versus frequency). Since these theories are not Lorentz invariant, a frequency cut-off is not "quite" interchangeable with a wavenumber cut-off (more on this point below).
Substituting the above ansatz (35a)-(35b) into (37), making use of the explicit form for G free (33), and expanding out the first terms of f field and f ghost yields
Γ^(1−loop)_ǫ[φ]/A = −(1/2) ∫_ǫ^{+∞} (ds/s) ∫ d^n x [G_free(x, x′|s)] ( [f_field(x, x′|s)] − [f_ghost(x, x′|s)] )
 = −(1/2) ∫_ǫ^{+∞} (ds/s) ∫ d^n x [G_free(x, x′|s)] ( s^{1/2} [b_{1/2}(x, x′)] − s^{1/2} [a_{1/2}(x, x′)] + s [b_1(x, x′)] − s [a_1(x, x′)] + O(s^{3/2}) )
 = −(1/2) A_d ∫_ǫ^{+∞} ds s^{−3/2 − d/4} ∫ d^n x ( s^{1/2} [c_{1/2}(x, x′)] + s [c_1(x, x′)] + O(s^{3/2}) ).   (38)
In arriving at this last expression we have used the fact that the coincidence limit of the free heat kernel is
[G_free(x, x′|s)] = A_d s^{−1/2 − d/4},   (39)
which follows immediately from (33). We have also made use of the standard notation to express coincidence limits. Given any function h(x, x′), we write

lim_{x′→x} h(x, x′) = [h(x, x′)].   (40)
(Although we also employ the square brackets to denote arguments of functionals and functions, and to group expressions, the intended meaning should be clear from context and there should be no confusion.) We have defined net Seeley-DeWitt coefficients
c_{l/2} ≡ b_{l/2} − a_{l/2},  ∀ l = 0, 1, 2, . . . .   (41)
The Seeley-DeWitt coefficients b l 2 , a l 2 , and c l 2 are functions of the mean field φ( x, t) and its derivatives, and as remarked above, can in principle be determined by solving a recursion relation resulting from inserting the ansatz (35a) and (35b) into (30). However, to obtain the form of the divergences of the one-loop effective action we need not evaluate these coefficients. It suffices to calculate the lower bound (s → 0) of the proper time integral. In the limit ǫ → 0 we find that the divergent terms in the RD effective action are given by
Γ^(1−loop)_ǫ[φ] = −(1/2) A_d A { (4/d) ǫ^{−d/4} ∫ dⁿx [c_{1/2}(x, x′)] − [ǫ^{1/2−d/4}/(1/2 − d/4)] ∫ dⁿx [c_1(x, x′)] − [ǫ^{1−d/4}/(1 − d/4)] ∫ dⁿx [c_{3/2}(x, x′)] + · · · }.  (42)
We now list the divergences in the RD one-loop effective action for the following space dimensions
d = 0:  Γ^(1−loop)_ǫ[φ] = −2 A_0 A log(Ω²ǫ) ∫ dt [c_{1/2}],  (43a)
d = 1:  Γ^(1−loop)_ǫ[φ] = −2 A_1 A ǫ^{−1/4} ∫ dx dt [c_{1/2}],  (43b)
d = 2:  Γ^(1−loop)_ǫ[φ] = −(A_2/2) A { 2 ǫ^{−1/2} ∫ d²x dt [c_{1/2}] − log(Ω²ǫ) ∫ d²x dt [c_1] },  (43c)
d = 3:  Γ^(1−loop)_ǫ[φ] = −(A_3/2) A { (4/3) ǫ^{−3/4} ∫ d³x dt [c_{1/2}] + 4 ǫ^{−1/4} ∫ d³x dt [c_1] }.  (43d)
In all of these cases we only need to solve for the first two adiabatic (Seeley-DeWitt) coefficients c 1 2 , and c 1 ; indeed, it is only the spacetime integrated net coefficients that are needed. In higher space dimensions, additional c l 2 's would be required. However, for most practical applications it is enough to consider 0 ≤ d ≤ 3. (Moreover, earlier work regarding the one-loop effective potential for RD indicates that this field theory is non-renormalizable for d ≥ 4 [5]). This dimension range covers the spatially homogeneous limit (d = 0), one-dimensional (linear) systems (d = 1), surfaces (d = 2), and bulk systems (volumes) (d = 3). We see that the divergences are of two types: (fractional) powers of the cut-off and logarithms of the cut-off. Only the latter can yield one-loop renormalization group beta functions and associated RGEs.
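The bookkeeping behind this divergence pattern — which proper-time exponent s^{−3/2−d/4+l/2} produces a fractional power of the cut-off, a logarithm, or a finite contribution — is easy to tabulate. The sketch below is only an illustration of that counting (it is not code from the paper), and the labels are ours.

import sympy as sp

labels = {1: "c_{1/2}", 2: "c_1", 3: "c_{3/2}"}
for dim in range(4):                      # space dimensions d = 0, ..., 3
    report = []
    for l in (1, 2, 3):                   # Seeley-DeWitt index l/2
        p = sp.Rational(-3, 2) - sp.Rational(dim, 4) + sp.Rational(l, 2)   # integrand ~ s^p
        if p < -1:
            report.append(f"{labels[l]}: power divergence eps^({p + 1})")
        elif p == -1:
            report.append(f"{labels[l]}: logarithmic divergence")
        else:
            report.append(f"{labels[l]}: finite as eps -> 0")
    print(f"d = {dim}:", "; ".join(report))

Running it reproduces the pattern of (43a)-(43d): logarithms only for even d (d = 0 from c_{1/2}, d = 2 from c_1), fractional powers otherwise.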
IV. CALCULATION OF THE SEELEY-DEWITT COEFFICIENTS
In this section we present a formalism that can in principle yield all the Seeley-DeWitt coefficients. As we have seen in the previous section, the calculation of the one-loop effective action involves solving the auxiliary partial differential equations of parabolic type (denoted as heat equations, even if the diffusion is non-standard).
In this formalism (see Appendix C) the first step is to write
Tr[exp(−sH)] = Tr{exp[−s(H free + δH)]},(44)
where δH is a lower-order differential operator when compared to H or H_free. We now apply a version of the Feynman-Hellman formula [30,31], as discussed in Appendix C, to obtain
Tr[exp(A + ǫB)] = Tr[exp(A)] + ǫ Tr[B exp(A)] + (ǫ²/2) ∫₀¹ dℓ Tr{B exp(ℓA) B exp[(1 − ℓ)A]} + O(ǫ³).  (45)
This equation will be the basis for extracting the first two integrated Seeley-DeWitt coefficients. The second step is to realise that we only need to look at the difference
Tr[exp(−sH_field)] − Tr[exp(−sH_ghost)],  (46)
and write
Tr[exp(−sH_ghost)] = Tr{exp[−s(H_free + δH_1)]},  (47a)
Tr[exp(−sH_field)] = Tr{exp[−s(H_free + δH_1 + δH_2)]}.  (47b)
If we take the difference of the previous operators, the O(ǫ⁰) term automatically cancels, as does the O[ǫ²(δH_1)²] term, leaving
Tr[exp(−sH_field)] − Tr[exp(−sH_ghost)] = −ǫ s Tr[δH_2 exp(−sH_free)]
 + (ǫ²/2) s² ∫₀¹ dℓ Tr{δH_2 exp(−ℓ s H_free) δH_2 exp[−(1 − ℓ) s H_free]}
 + ǫ² s² ∫₀¹ dℓ Tr{δH_1 exp(−ℓ s H_free) δH_2 exp[−(1 − ℓ) s H_free]}
 + O(ǫ³).  (48)
We now make use of the explicit form of these "Hamiltonians". We write
H ghost = (D † − V ′ )(D − V ′ ) = D † D − (D † − V ′ )V ′ + 2ν∇V ′ · ∇,(49)
where in the last term of the right hand side both ∇'s act on everything to the right. We know that the free Hamiltonian is given by
H free = D † D = [−∂ 2 t + ν 2 (∇ 2 ) 2 ],(50)
so that we can identify δH 1 as
δH 1 = −[(D † − V ′ )V ′ − 2ν∇V ′ · ∇].(51)
(Note that δH 1 is a linear differential operator.) From the definition of the ghost Hamiltonian
H field = H ghost − V ′′ (Dφ − V ),(52)
we deduce the following form for δH 2
δH 2 = −V ′′ (Dφ − V ).(53)
(Note that δH 2 is a function, not a differential operator.) Now consider the first order perturbation [the O(ǫ) term]
X 1 ≡ s Tr{[V ′′ (Dφ − V )] exp(−sH free )}.(54)
From the known form of the free kernel, [see, e.g., equation (33) or equation (A11)], and the fact that δH 2 is a function, this reduces to
X₁ = A_d s^{−1/2−d/4} ∫ dⁿx [s V″(Dφ − V)].  (55)
This implies that the first-order perturbation does not contribute to the Seeley-DeWitt coefficient c 1 2 , though it does contribute to c 1 . In fact, we can write
∫ dⁿx [c_1] = ∫ dⁿx V″(Dφ − V) + · · · .  (56)
This is actually the only contribution to the relevant Seeley-DeWitt coefficients. (There might have been additional contributions coming from those portions of the second-order term X 2 that have space derivatives; fortunately they vanish, as we now verify.) Let us consider
X₂ ≡ (s²/2) ∫₀¹ dℓ Tr{ [V″(Dφ − V)] exp(−s ℓ H_free) [V″(Dφ − V)] exp[−s(1 − ℓ) H_free] }
 + s² ∫₀¹ dℓ Tr{ [V″(Dφ − V)] exp(−s ℓ H_free) [ (D† − V′)V′ − 2ν(V′∇² + (∇V′)·∇) ] exp[−s(1 − ℓ) H_free] }.  (57)
We can have any of the following cases -No gradients hit the free kernel: the term containing two factors of [V ′′ (Dφ − V )] is proportional to s 2 and can only contribute to c 2 , which is not needed in the present context.
-One gradient hits one kernel: from equation (33) or equation (A11), one can see that there is a factor of (x − x ′ ) i /(ν √ s) of order s 3/2 . Such a term is odd under the interchange of x and x ′ and will vanish when taking the spacetime trace.
-Two gradients hit the same free kernel: there will be contributions of the type
C d,1 δ ij ν √ s + (C d,1 ) 2 (x − x ′ ) i (x − x ′ ) j d 2 s ,(58)
which, after tracing with the free kernel, yield contributions proportional to s 3/2 . Therefore, these terms contribute to c 3 2 , which is not needed. Continuing in this manner, it is easy to convince oneself that there are no additional contributions to the required coefficients c 1 2 and c 1 . We can finally write
∫ dⁿx [c_{1/2}] = 0,  (59a)
∫ dⁿx [c_1] = ∫ dⁿx [V″(Dφ − V)].  (59b)
Note that the present calculation only yields the integrated on-diagonal (x = x ′ ) Seeley-DeWitt coefficients and is insensitive to any term that vanishes upon integration. With a little more work along these lines, it is also possible to obtain the Seeley-DeWitt coefficients: [a 1 2 ] and [a 1 ] for the ghost field, and [b 1 2 ] and [b 1 ] for the physical field. We only quote the results here
∫ dⁿx [a_{1/2}] = ∫ dⁿx 2 C_{d,1} V′,  (60a)
∫ dⁿx [a_1] = ∫ dⁿx { (d/2)(V′)² + (D† − V′)V′ },  (60b)
∫ dⁿx [b_{1/2}] = ∫ dⁿx 2 C_{d,1} V′,  (60c)
∫ dⁿx [b_1] = ∫ dⁿx { (d/2)(V′)² + (D† − V′)V′ + V″(Dφ − V) }.  (60d)
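A one-line symbolic check (ours, not the authors') that the net coefficients (59a)-(59b) follow from subtracting (60a)-(60b) from (60c)-(60d); the composite objects V′, V″, (D† − V′)V′ and (Dφ − V) are treated as opaque commuting symbols under the spacetime integral, which is all the subtraction requires.

import sympy as sp

d, C_d1, Vp, Vpp, DphiMinusV, DdagVpVp = sp.symbols('d C_d1 Vp Vpp DphiMinusV DdagVpVp')

a_half = 2*C_d1*Vp                                              # (60a)
b_half = 2*C_d1*Vp                                              # (60c)
a_one = sp.Rational(1, 2)*d*Vp**2 + DdagVpVp                    # (60b)
b_one = sp.Rational(1, 2)*d*Vp**2 + DdagVpVp + Vpp*DphiMinusV   # (60d)

print(sp.simplify(b_half - a_half))   # 0              -> ∫ dⁿx [c_{1/2}] = 0, eq. (59a)
print(sp.simplify(b_one - a_one))     # Vpp*DphiMinusV -> ∫ dⁿx [c_1] = ∫ dⁿx V''(Dφ − V), eq. (59b)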
V. ONE-LOOP RENORMALIZATION
We have already calculated the (regularized) one-loop effective action (20) for the RD equation, and thus, we may now explore the renormalizability of this field theory following the prescription reviewed in [20]. In order to do so we must analyze the divergences of the one-loop effective action Γ (1−loop) ǫ . We must also keep in mind the fact that the bare theory (i.e., defined by the action (12)) does not depend on the arbitrary scale µ introduced by the renormalization scheme. Therefore, just as for the case of QFTs [18,19], we will derive a set of equations that govern the scale dependence of the parameters appearing in the RD effective action from the identity
µ dΓ[φ]/dµ = 0 = µ d(S[φ] − S[v₀])/dµ + µ dΓ^(1−loop)_ǫ[φ]/dµ + O(A²),  (61)
where the O(A 2 ) indicates there will be higher-loop contributions to the effective action. In quantum field theory this identity does yield the one-loop RGEs since equation (61) can be expressed in terms of a sum of independent field operators (operator basis) and each coefficient of an element of this basis determines an independent RGE. As we have already calculated the relevant Seeley-DeWitt coefficients for the RD equation, we now turn to investigate the one-loop renormalizability of its stochastic field theory. The renormalizability criteria are based on the following definitions. For renormalizable and super-renormalizable theories the counterterms needed to cancel the divergences are equal to, or fewer in number than the terms appearing in the zero-loop action, which implies that the Seeley-DeWitt coefficients are expandable in terms of the same operator basis appearing in the classical action. In particular, this basis set consists of {∂ t φ, ∇ 2 φ, 1, φ, φ 2 , · · · , φ N }. For non-renormalizable theories this criterion fails. That is, there are terms in the integrated Seeley-DeWitt coefficients that do not appear in the classical action [14,15,32,33].
By comparing the zero-loop action (12) with the divergent terms of the one-loop effective action, it is easy to see that the latter do not contain any field operators not already present in the bare (classical) action. The divergent contributions to the one-loop effective action in d-dimensions are given by
Γ^(1−loop)_ǫ[φ; v₀] = (A/2) A_d [ǫ^{1/2−d/4}/(1/2 − d/4)] ∫ d^d x dt [c_1(φ) − c_1(v₀)] + O(ǫ^{1−d/4})
 = (A/2) A_d [ǫ^{1/2−d/4}/(1/2 − d/4)] ∫ d^d x dt { V″(φ)[Dφ − V(φ)] + V″(v₀)V(v₀) } + O(ǫ^{1−d/4}).  (62)
Some remarks are in order. First of all we point out the fact that the one-half Seeley-DeWitt coefficients of the field and ghost mutually cancel out. This cancellation is special to the RD system and does not take place for generic SPDEs. Secondly, the ill-defined quantity "ǫ 0 /0" arising in d = 2 must be replaced by log(Ω 2 ǫ) = log(Ω 2 /Ω 2 cut−off ). The dimensionfull (but arbitrary) parameter Ω is required to make the argument of the logarithm dimensionless. It is often more convenient to introduce a cut-off in wavenumber, rather than in frequency. In Lorentz invariant theories (QFTs, for example) these are essentially equivalent and it is usual to adopt units where the speed of light is one. In the RD system this would be inappropriate, since the equation is not Lorentz invariant. Instead, one notes that dimensionally
[ǫ] = [proper time s] = [physical time t] 2 = [ν] −2 [distance] 4 ,(63)
and therefore, in terms of a wavenumber cut-off Λ and a wavenumber subtraction point µ "ǫ 0 /0" → log(Ω 2 ǫ) = log(Ω 2 /Ω 2 cut−off ) = log(µ 4 /Λ 4 ).
(This observation is important when comparing different regularization schemes; for instance the effective potential calculation of [5] uses a wavenumber cut-off.) Thirdly, it is of great importance to study the type of divergence arising in the one-loop effective action for the RD equation, i.e., logarithms versus (fractional) power. From the above we see that the type of divergence depends on the number of space dimensions d. If d is odd, there will never be logarithmic divergences to one-loop order in the RD field theory; to get a logarithm, the space dimensionality must be even. A similar feature holds also for the one-loop effective action for QFTs in odd spacetime dimensions [20,33]. We can conclude that the appearance of logarithmic divergences for specific space dimensions is not an artifact of the RD field theory, nor of SPDEs, nor is it an artifact of the regularization scheme we have employed. In QFT the RGEs yield the running of the coupling constants, i.e., give the scale dependence, if and only if, there are logarithmic divergences in the effective action. Thus, we can already predict that the parameters in the RD equation will not run in the ultraviolet region (to one-loop order) for odd space dimensions [33].
Nevertheless, the one-loop effective action in odd space dimensions still contains divergences (though not for d = 1), which must be subtracted by suitable counterterms, but once this subtraction has been performed, the remaining finite part Γ^(1−loop)_finite is independent of the subtraction scale µ.
VI. RENORMALIZATION OF THE RD EFFECTIVE ACTION
In this section we calculate the counterterms needed to renormalize the one-loop effective action. We first start with the bare classical action (12) for the reaction-diffusion equation
S[φ] = (1/2) ∫ dⁿx ( ∂_t φ − ν∇²φ − V[φ] )² = (1/2) ∫ dⁿx ( ∂_t φ − ν∇²φ − Σ_{j=0}^{N} (λ_j/j!) φ^j )².  (65)
The bare parameters (no subscript) are related to the renormalized ones (denoted by a subscript R) as follows
φ = Z^{−1/2} φ_R,  with Z = 1 + δZ,  (66)
λ_j = λ^R_j + δλ_j,  (67)
ν = ν_R + δν,  (68)
with δZ, δλ j , and δν the corresponding counterterms for Z, λ j , and ν, respectively. (Our convention for the definition of the wavefunction renormalization constant Z does not follow the standard one in QFT [14,15].) Our task consists in demonstrating that all the divergences appearing in the regulated one-loop effective action can be cancelled by suitable choices for these counterterms. In effect, we absorb the divergences into the (bare) parameters of the RD equation by renormalizing these parameters. If we write the bare action in terms of the renormalized parameters and counterterms, and keep up to linear order in the counterterms, (which is sufficient for a one-loop analysis), we find
S[φ] = (1/2) ∫ dⁿx ( ∂_t φ_R − ν_R∇²φ_R − Σ_{j=0}^{N} (λ^R_j/j!) φ^j_R )²
 − (δZ/2) ∫ dⁿx ( ∂_t φ_R − ν_R∇²φ_R − Σ_{j=0}^{N} (λ^R_j/j!) φ^j_R )²
 − ∫ dⁿx ( ∂_t φ_R − ν_R∇²φ_R − Σ_{j=0}^{N} (λ^R_j/j!) φ^j_R ) [ δν ∇²φ_R + Σ_{j=0}^{N} (φ^j_R/j!) ( δλ_j + δZ λ^R_j (1 − j)/2 ) ],
which can be written in a more compact and transparent notation as follows
S[φ] = (1/2) ∫ dⁿx (D_Rφ_R − V_R[φ_R])² − (δZ/2) ∫ dⁿx (D_Rφ_R − V_R[φ_R])² − ∫ dⁿx (D_Rφ_R − V_R[φ_R]) [ δν ∇²φ_R + Σ_{j=0}^{N} (φ^j_R/j!) ( δλ_j + δZ λ^R_j (1 − j)/2 ) ]  (69)
 = S_R[φ_R, µ] + S_δ[φ_R],  (70)
where we have introduced the scale dependent renormalized action S R [φ R , µ]
S_R[φ_R, µ] ≡ (1/2) ∫ dⁿx ( Z^{1/2}(µ) ∂_t φ − ν_R(µ) Z^{1/2}(µ) ∇²φ − V[Z^{1/2}φ, λ^R_j(µ)] )²,  (71)
with
V_R[φ_R] ≡ V[Z^{1/2}φ, λ^R_j(µ)] = Σ_{j=0}^{N} (λ^R_j(µ)/j!) Z^{j/2}(µ) φ^j,  (72)
the renormalized (scale dependent) potential. The meaning of D R is clear from inspection. The final equality in (70) defines the finite, renormalized action S R and the divergent (but regulated) counterterm action S δ . The individual counterterms appearing in S δ will be used to cancel off the divergences arising in (62). We carry out this cancellation separately in each space dimension since each case leads to structurally different divergences [see equation (62)].
A. d = 0 counterterms and renormalization
The case d = 0 is very simple: in zero space dimensions there is no diffusion. There is a brief discussion in reference [5] and we do not belabor the point here except to reiterate that in d = 0 the RD equation is one-loop renormalizable and one-loop finite.
B. d = 1 counterterms and renormalization
In one space dimension the (formally) divergent effective action is given by
Γ^(1−loop)_ǫ[φ_R] = −2 A A_1 ǫ^{−1/4} ∫ dx dt [c_{1/2}] + O(A²),  (73)
with A 1 = Γ(1/4)/[8π(πν R ) 1/2 ]. We did not explicitly write this term in equation (62) since it vanishes identically. From our previous calculation of the Seeley-DeWitt coefficients we know that [c 1 2 ] = 0 (in all dimensions), which tells us that in one space dimension there are no divergences, that is, the theory is one-loop finite and there is no need to introduce counterterms. Since no renormalization is required, there will be no scale dependence in the parameters appearing in the RD equation. The beta functions, β O ≡ µ dO/dµ, (encoding the scale dependence of the parameters) are therefore zero [at least up to order O(A 2 )], and we have
Z = 1 + O(A²),  (74)
ν = ν_R + O(A²),  (75)
λ_j = λ^R_j + O(A²).  (76)
C. d = 2 counterterms and renormalization
In two space dimensions the divergent effective action is given by equation (62):
Γ^(1−loop)_ǫ[φ_R] = +(A A_2/2) log(µ⁴/Λ⁴) ∫ d²x dt [c_1],  (77)
where A_2 = 1/(16πν_R).
From the calculation of the Seeley-DeWitt coefficients we know that for d = 2 we have
[c_{1/2}] = 0,  (78a)
[c_1] = V″_R[φ_R](Dφ_R − V_R[φ_R]),  (78b)
where we have written the Seeley-DeWitt coefficient [c 1 ] in terms of the renormalized parameters as we are only working to one-loop order. Therefore, for the divergent part of effective action we can write
Γ^(1−loop)_ǫ[φ_R] = (A/(8πν_R)) log(µ/Λ) ∫ d²x dt V″_R[φ_R](D_Rφ_R − V_R[φ_R]).  (79)
In order to determine the value of the counterterms and to cancel them off, we must set
Γ (1−loop) ǫ [φ R ] = −S δ [φ R ] + finite.(80)
This cancellation can be made up to a residual finite but scale dependent logarithm. This is because the difference of two divergent logarithms can be finite and non-zero. The counterterms are proportional to log(µ/Λ), where µ is an arbitrary scale needed to render the argument dimensionless, but this scale need not coincide with µ 0 , the other arbitrary scale needed to render the argument of the other logarithm, log(µ 0 /Λ), dimensionless [33]. If we perform this cancellation, we obtain the following µ-dependent family of solutions for the counterterms δλ j
A A_2 log(µ²/Λ²) V″_R[φ] = A A_2 log(µ²/Λ²) Σ_{j=0}^{N} (λ^R_j/j!) j(j − 1) φ^{j−2} = Σ_{j=0}^{N} (δλ_j/j!) φ^j.  (81)
As we are working to one-loop order, we can set Z(µ) equal to one in V ′′ R . We can then read off the individual counterterms from this equation, using the linear independence of the basis {∂ t φ, ∇ 2 φ, 1, φ, φ 2 , · · · , φ N }, to obtain
δZ = 0 + O(A²),  (82a)
δν = 0 + O(A²),  (82b)
δλ_0 = (A/(8πν_R)) log(µ/Λ) λ^R_2 + O(A²),  (82c)
δλ_1 = (A/(8πν_R)) log(µ/Λ) λ^R_3 + O(A²),  (82d)
⋮
δλ_{N−2} = (A/(8πν_R)) log(µ/Λ) λ^R_N + O(A²),  (82e)
δλ_{N−1} = 0 + O(A²),  (82f)
δλ_N = 0 + O(A²).  (82g)
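The way (82c)-(82g) are read off from (81) is a purely algebraic coefficient comparison, which can be mimicked symbolically. The sketch below is our own illustration (kappa stands for the whole factor (A/(8πν_R)) log(µ/Λ), and a quartic reaction polynomial is chosen merely as an example).

import sympy as sp

N = 4                                     # example degree of the reaction polynomial
phi, kappa = sp.symbols('phi kappa')      # kappa ~ (A/(8π ν_R)) log(µ/Λ)
lamR = sp.symbols(f'lambdaR0:{N + 1}')    # renormalized couplings λ^R_0, ..., λ^R_N

V = sum(lamR[j]/sp.factorial(j)*phi**j for j in range(N + 1))
lhs = sp.expand(kappa*sp.diff(V, phi, 2))           # left-hand side of (81)

for j in range(N + 1):
    delta_lambda_j = sp.simplify(sp.factorial(j)*lhs.coeff(phi, j))
    print(f"delta_lambda_{j} =", delta_lambda_j)
# Prints kappa*lambdaR_{j+2} for j <= N-2 and 0 for j = N-1, N, i.e. exactly (82c)-(82g).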
In particular, we see that there is no wavefunction renormalization nor diffusion constant renormalization in two dimensions at one-loop order. The couplings associated with the highest and next-to-highest powers of the field (λ R N −1 , λ R N ) do not require renormalization to this order. As pointed out above, due to the logarithmic divergence in two dimensions, when we subtract the divergences from the counterterm action, we are left with a finite µ-dependent logarithmic piece in addition to the renormalized action, that is
Γ[φ] = S[φ] + Γ^(1−loop)_ǫ[φ] + Γ^(1−loop)_finite[φ]
 = S_R[φ_R, µ] + S_δ[φ_R] + Γ^(1−loop)_ǫ[φ_R] + Γ^(1−loop)_finite[φ_R]
 = S_R[φ_R, µ] + (A A_2/2) log(µ₀⁴/µ⁴) ∫ d²x dt (D_Rφ_R − V_R[φ_R]) V″_R[φ_R] + O(A²).  (83)
We insert this expression into (61) to obtain the one-loop RGE
µ (d/dµ)(D_Rφ_R − V_R[φ_R]) = (A/(8πν_R)) V″_R[φ_R] + O(A²).  (84)
In arriving at this equation, we have cancelled an overall common factor of the classical equation of motion, since
D_Rφ_R − V_R[φ_R] ≠ 0 in general.
By collecting up the coefficients of the linearly independent terms in (84) we find the corresponding one-loop RGEs and beta functions in d = 2 to be given by
β_Z = µ dZ/dµ = 0 + O(A²),  (85a)
β_ν = µ dν_R/dµ = 0 + O(A²),  (85b)
β_{λ_0} = µ dλ^R_0/dµ = −(A/(8πν_R)) λ^R_2 + O(A²),  (85c)
β_{λ_1} = µ dλ^R_1/dµ = −(A/(8πν_R)) λ^R_3 + O(A²),  (85d)
⋮
β_{λ_{N−2}} = µ dλ^R_{N−2}/dµ = −(A/(8πν_R)) λ^R_N + O(A²),  (85e)
β_{λ_{N−1}} = µ dλ^R_{N−1}/dµ = 0 + O(A²),  (85f)
β_{λ_N} = µ dλ^R_N/dµ = 0 + O(A²).  (85g)
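Because each β_{λ_j} in (85) only involves the coupling two steps higher, the hierarchy is triangular and can be integrated in closed form from the top down. The sketch below is ours and purely illustrative: τ = log(µ/µ_0), kappa stands for A/(8πν_R), and a quartic polynomial is used as an example.

import sympy as sp

N = 4
tau, kappa = sp.symbols('tau kappa')
lam_init = sp.symbols(f'lambda0:{N + 1}')     # couplings at the reference scale µ_0 (τ = 0)

# dλ_j/dτ = -kappa*λ_{j+2} for j <= N-2; the two highest couplings do not run, cf. (85f)-(85g).
lam = {N: lam_init[N], N - 1: lam_init[N - 1]}
for j in range(N - 2, -1, -1):
    lam[j] = lam_init[j] - kappa*sp.integrate(lam[j + 2], (tau, 0, tau))

for j in range(N, -1, -1):
    print(f"lambda_{j}(tau) =", sp.expand(lam[j]))
# λ_4, λ_3 constant; λ_2, λ_1 linear in τ; λ_0 picks up a τ² term: the flow terminates after
# finitely many steps because the reaction polynomial has finite degree.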
Since there is no wavefunction nor diffusion constant renormalization the set of one-loop RGEs can be summarized in the following way
µ dV_R[φ_R]/dµ = −(A/(8πν_R)) V″_R[φ_R] + O(A²).  (86)
This equation agrees with the computation of the RGEs based on the effective potential, which was calculated in Ref. [5]. [See equation (51) of that paper.] Furthermore, if we define µ = µ 0 exp(τ ), the previous RGE becomes
dV_R[φ_R]/dτ = −(A/(8πν_R)) d²V_R[φ_R]/dφ_R² + O(A²),  (87)
which implies the fact that the one-loop RGE in d = 2 behaves like an anti-diffusion process in field space. At this point it is interesting to compare our results with independent renormalization group results obtained, for example, by Cardy in Ref. [6]. If a path integral is derived (along the lines given in [21][22][23]) for the two-body process A + A → inert, then the corresponding RD equation turns out to be given by [6]
Dφ = −2λφ 2 + η(x),(88)
where however, the noise must be pure imaginary. A renormalization group analysis of equation (88) shows that the field φ does not require wavefunction renormalization, nor does the diffusion constant ν renormalize. Our one-loop heat kernel computation performed for an arbitrary reaction polynomial, (85a)-(85b), is in complete accord with these results (even though we treat real noise). Returning to (88), the only non-vanishing renormalization is that of the coupling λ. It turns out that the one-loop renormalization group beta function is exact, and when expressed in terms of the dimensionless renormalized coupling g_R is given by [6]
β(g_R) = b g_R²,  (89)
in d = 2 dimensions, where b is a positive constant (the value of this constant is not specified in [6]). In order to compare these results, we define the following dimensionless couplings
g_j ≡ (A/(8πν_R)) (λ^R_{j+2}/λ^R_j),  0 ≤ j ≤ N − 2,  (90)
provided, of course, that for a given j the coupling constant λ R j = 0. This definition together with the hierarchy of beta functions given in (85) show that the dimensionless coupling constants g j satisfy the following one-loop RGE
β(g_j) ≡ µ dg_j/dµ = g_j ( λ̇^R_{j+2}/λ^R_{j+2} − λ̇^R_j/λ^R_j ),  (91)
where the overdot is shorthand notation for µ d/dµ. We now consider an RD equation of the type given in (1) with real noise and for a degree-two (N = 2) reaction polynomial (3) V [φ] = λ 0 + λ2 2 φ 2 . This particular choice is made in order to be able to treat an RD equation as close as possible in structure to the one in (88). Apart from the imaginary versus real noise, the essential difference lies in the fact that we (must) have a tadpole term, whereas (88) lacks such a term. In this case there is only one dimensionless coupling which can be defined, namely
g_0 = (A/(8πν_R)) (λ^R_2/λ^R_0),  (92)
and equation (91) implies the following one-loop beta function for this dimensionless coupling
β(g_0) = g_0 ( λ̇^R_2/λ^R_2 − λ̇^R_0/λ^R_0 ) = −g_0 λ̇^R_0/λ^R_0 = g_0².  (93)
This follows from (85c) and from the fact that λ̇^R_2 ∝ λ^R_4 = 0. Thus, for the purposes of renormalization and calculating the RGEs, this example demonstrates that it is equivalent to start from a complex or real SPDE, and that the field theory can be derived from microscopic principles or obtained by means of the procedure outlined in [4].
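For completeness, the flow implied by (93) can be integrated explicitly; the snippet below (ours) simply solves µ dg_0/dµ = g_0² with initial value g_0(µ_0), writing t = log(µ/µ_0).

import sympy as sp

t = sp.Symbol('t')                     # t = log(µ/µ_0)
g = sp.Function('g')
g_init = sp.Symbol('g_init', positive=True)

sol = sp.dsolve(sp.Eq(g(t).diff(t), g(t)**2), ics={g(0): g_init})
print(sp.simplify(sol.rhs))            # equivalent to g_init/(1 - g_init*t)
# The coupling grows towards the ultraviolet and formally blows up at t = 1/g_init,
# the usual behaviour of a one-loop beta function of the form β = +g².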
D. d = 3 counterterms and renormalization
In three space dimensions the divergent effective action is given by equation (62)
Γ^(1−loop)_ǫ[φ_R] = −2 A A_3 ǫ^{−1/4} ∫ d³x dt [c_1],  (94)
with A_3 = Γ(3/4)/[16π(πν_R)^{3/2}].
The net Seeley-DeWitt coefficient is given by
[c 1 ] = V ′′ [φ R ](Dφ R − V R [φ R ]).(95)
The vanishing of the index one-half Seeley-DeWitt coefficient means that the divergent effective action in d = 3 becomes
Γ^(1−loop)_ǫ[φ_R] = −2 A A_3 ǫ^{−1/4} ∫ d³x dt V″[φ_R](D_Rφ_R − V[φ_R]) + O(A²).  (96)
In order to determine the value of the counterterms we must once again set
Γ (1−loop) ǫ [φ R ] = −S δ [φ R ]+finite.(97)
The last identification yields the following (µ-independent) set of counterterms
δZ = 0 + O(A²),  (98a)
δν = 0 + O(A²),  (98b)
δλ_0 = −A A_3 λ^R_2 ǫ^{−1/4} + O(A²),  (98c)
δλ_1 = −A A_3 λ^R_3 ǫ^{−1/4} + O(A²),  (98d)
⋮
δλ_{N−2} = −A A_3 λ^R_N ǫ^{−1/4} + O(A²),  (98e)
δλ_{N−1} = 0 + O(A²),  (98f)
δλ_N = 0 + O(A²).  (98g)
As there is no scale dependent logarithmic divergence at one-loop order in three-dimensions, all the beta functions vanish [33].
VII. DISCUSSION
In this paper we have generalized and applied a method based on the DeWitt-Schwinger proper time expansion to calculate the ultraviolet divergences of the one-loop effective action associated to the RD equation. This particular approach involves the physical degrees of freedom plus the "ghost" fields, which are needed to account for the functional Jacobian that arises from a certain change of variables [4]. For RDs this Jacobian is generally non-vanishing and must be taken into account in the computation of the characteristic functional. The importance of the effective action lies in the fact that it is the quantity needed to derive equations of motion, which correctly take into account fluctuations and interactions to a given number of loops. The effective action encodes the dynamics of the system. By contrast, the effective potential can only tell us about static solutions. In order to know whether the minima of the effective potential are relevant as solutions of the late time behavior of the system we must see how accessible these solutions are. But before any of these questions can be answered, the effective action must be calculated.
The one-loop effective action is given in terms of a functional determinant, which must be regulated and renormalized. The heat kernel technique is an established method (used in QFT) for carrying this out. In QFT the functional determinant appearing in the one-loop effective action is usually quadratic in both time (∂ 2 t ) and space derivatives (∇ 2 ). In passing to a Euclidean spacetime, the corresponding proper time equation for the kernel to be solved (29) is a heat equation for diffusion in n = d + 1 dimensions, with s playing the rôle of diffusion time. This is why its Green function (whether exactly or approximately calculated) is justifiably denoted as the heat kernel. However, for SPDEs, such as those considered in [4], the functional determinant in the corresponding one-loop effective action involves not only a "mismatch" between the number of spatial and temporal derivatives, but also fourth-or even higher-order spatial derivatives. We have seen an explicit example of this in the RD equation treated here, which yields second order in time but fourth order in space derivatives, (∇ 2 ) 2 , for its associated operator determinant. (Even higher derivatives will be encountered in the one-loop effective actions based on the Sivashinski and the Swift-Hohenberg equations: two time derivatives but eight spatial derivatives.) While much is known about the standard heat kernel and its associated Seeley-DeWitt coefficients, very few (as far as the authors are aware) of these ideas have been applied to other types of field theories whose fluctuations may be of a non-quantum nature (i.e., noise) [29]. The heat kernel technique (and its generalization) is well suited to regularize one-loop effective actions, and therefore, is very useful to handle theories with derivative-type interactions, as well as higher derivative "kinetic" terms.
We have applied these techniques to compute the one-loop effective action for the RD equation. As regards its ultraviolet renormalizability, we found that the terms appearing at one-loop order have the same structure, i.e., involve the same terms present already at the level of the "classical" or zero-loop action. Strictly speaking, this claim holds true only if a certain bare constant is added to the original equation of motion, as we have seen. This constant, or "tadpole", is needed to carry out the one-loop renormalization of the leading divergence that appears in the effective action. Moreover, this constant admits a simple physical interpretation and can be ascribed either to a constant flux rate or as the mean value of the additive noise source. In regards to the scale of application of the RD equation, the one-loop renormalizability indicates that although RD is a macroscopic equation, [only intended to make physical sense for scales greater than a certain minimum length scale L 0 , defined by the underlying molecular physics (if one is discussing chemical reactions)], we have shown in this paper that the ultraviolet behavior of the RD equation is controlled, and that considered as a strictly phenomenological equation its short distance behavior is much better than one had any right to expect . The short distance structure of this effective action has been revealed via the calculation of the Seeley-DeWitt coefficients up to one-loop order in the noise amplitude. By means of this information, we have been able to establish the one-loop finiteness of RD equations driven by additive white noise for d = 0 and d = 1 space dimensions and their renormalizability for d = 2 and d = 3 space dimensions. In d = 2 space dimensions there are logarithmic divergences which lead to running, (scale dependence) of the parameters that describe the RD equation. There is no wavefunction renormalization at one-loop order. These results hold for polynomial reaction kinetics of arbitrary degree N . (Note: The absence of wavefunction renormalization has already been demonstrated for the case of a (complex) RD equation describing pair reactions (N = 2), where it turns out that the one-loop beta function in d = 2 is exact [6].) When taken as a model for pattern development, this result becomes even more striking since this means we can safely use the RD equation to investigate the important short distance and small time limit of the field correlations that arise in pattern formation, as already remarked earlier.
The RGE results obtained here are identical to those obtained (by different means) from an effective potential calculation for RD equations presented in Ref. [5]. The effective potential is the effective action restricted to constant field configurations and plays an important rôle in uncovering patterns of symmetry breaking and in the onset of pattern formation. Nevertheless, the claim of the one-loop renormalizability made in [5] requires the investigation of the wavefunction renormalization which was beyond the scope of that paper. The work presented here is also intended to complete and complement that discussion. Moreover, as pointed out there, the combined effects of noise and interactions is to shift the symmetric states of the system, as well as to change the nature of the linear instabilities that may be induced by perturbations around these new states. A spatial pattern is, by definition, a spatially inhomogeneous configuration with a higher or lesser degree of symmetry, if any such symmetry is at all present. Thus, to investigate the onset of the spatial-temporal patterns resulting from fluctuations and interactions, one must go beyond the effective potential. This requires working with inhomogeneous fields and the effective action.
The cautious reader will have noticed that most of the calculations developed here depend solely on the reaction potential V and its derivatives and not on the fact that V is a polynomial. In fact, this is easy to see from the structure of our Seeley-DeWitt coefficients, (59a)-(59b) and (60a)-(60b). Thus, the question arises if our treatment can be extended and applied to handle RD equations with non-polynomial reaction kinetics. The answer is in the affirmative provided that the potential admits a real solution to (18) (e.g., V [φ] ∼ sinh[φ]) since we need a constant background about which to expand the action, as indicated in (62). Provided such a solution exists, the rest of the analysis presented here carries through as is, up to the extraction of the renormalization group equations (which will again be non-vanishing only for d = 2). At this point the explicit functional form of the potential changes the nature of the basis set of independent field operators that leads to the RGEs. Non-polynomial reaction kinetics do indeed arise in many applications in chemical physics and in the modelling of biological pattern formation, typically in the form of rational functions (i.e., ratios of polynomials) and whenever constraints are to be imposed on the model [1,2]. When a coarse-grained field theoretic approach is applied to density waves in earthquakes for example, a stochastic PDE for the (scalar) slip field (which measures the relative displacement of two elastic media in contact taken along the surface of contact) results which depends on the cosine of the slip field, cos φ, and is driven by additive white noise [34]. Thus, the work presented here is broad in scope.
Finally, these results also have the following practical application: as analytic calculations can be carried only so far, it is clear that numerical studies of SPDEs are crucial. Ultraviolet renormalizability corresponds to the situation in which long distance physics is largely insensitive to the details of short distance physics. In considering numerical studies of the RD equation, we can therefore assert the cut-off insensitivity of the numerical solutions, at least to one-loop. (In numerical studies, the ultraviolet cut-off is provided by the lattice spacing.) This is of paramount importance since a numerical study of RD will give us the information needed to see if the system thermalizes, if it has steady state solutions, and most importantly, if the minima of the effective potential calculated in Ref. [5] are explored in the time evolution of the system.
We now drop the overbar on φ with the understanding that this stands for the conjugate field of J and not the field appearing in the classical (zero-loop) action.
ACKNOWLEDGEMENTSThis work was supported in part by the Spanish Ministry of Education and Culture and the Spanish Ministry of Defense (DH and CMP). In the US, support was provided by the US Department of Energy (CMP and MV). Additionally, MV wishes to acknowledge support from the Spanish Ministry of Education and Culture through the Sabbatical Program, and to recognize the kind hospitality provided by LAEFF(Madrid, Spain). The authors would like to gratefully acknowledge discussions with Juan Pérez-Mercader, whose interest and support was crucial in completing this work.APPENDIX A: FREE GREEN FUNCTION FOR THE RD EFFECTIVE ACTIONIn this Appendix we calculate the free Green function appearing in(33). The only "difficult" part of the analysis is dealing with the fourth-order spatial bi-harmonic operator (∇ 2 ) 2 . Using translational invariance to set x − x ′ → x, and introducing Fourier transforms (d n k = d d k dω) as followswe easily find thatGwhere k 2 = k · k. By inverting the Fourier transform (A1) one obtains the following integral representation for the free Green functionWe first perform the integral over Ω by means of a contour integral in the complex Ω-plane. There is only one simple pole on the negative imaginary Ω-axis and we close the (semi-circular) contour (centered at the origin) in the lower half plane. As the radius of this arc goes to infinity, only the contribution along the real Ω-axis remains (s > 0). As a result and by the Residue Theorem we have (the contour is closed in the clockwise sense)We can go further and compute the ω integral exactly to obtain G free (x, 0|s) = 1 4πsand the remaining momentum integral is manifestly convergent (for d > 0). As for the boundary condition, note that for s → 0, lim s→0 G (free) ( xt, 00|s) = δ(t, 0)δ d ( x, 0), (in the sense of distributions) as it must, sinceandOne can also check that the boundary condition is satisfied, before integrating over ω, by simply setting s = 0 in (A4). Let us now work out the momentum integration. The angular integration is given byThe exact Green function or kernel for our free "heat" operator in(32)isThis solves the differential equation(32)and satisfies the boundary condition G free ( x, t|0) = δ d ( x) δ(t) for vanishing proper time s, hence this is the unique solution of (32). Important point: as remarked above, translational invariance implies that G free ( xt,We are treating stochastic processes on a flat d + 2-dimensional background (d-space dimensions plus real physical time t plus Schwinger proper time s).Using the Taylor series representationand integrating (A9) term-by-term, we find that (after making use of the time and space translational invariance of the Green function)where.(A13)Special attention should be paid to the fact that this free Green function involves a series in half-integer powers of Schwinger proper time, √ s, a feature that we use in choosing our ansatz for the full Green function.APPENDIX B: HOMOGENEOUS FIELD CONFIGURATIONSFor constant (homogeneous and static) fields, φ = v 0 , the associated "heat kernel" can be solved for exactly. We simply present the final result and skip the details of a calculation that is an analog of that for G free (x, x ′ |s). The on-diagonal Green function G field (x, x|s) is given bywhere we have again made use of the definition C d,ℓ . 
We conclude thatWe now match the first fractional powers in s and obtain the Seeley-DeWitt coefficients for a constant field configuration v 0The coefficients for Gghost can be immediately obtained from the previous Seeley-DeWitt coefficients by settingFinally, for the net Seeley-DeWitt coefficients, we can write
[1] M.C. Cross and P.C. Hohenberg, Rev. Mod. Phys. 65, 851 (1993).
[2] D. Walgraef, Spatio-Temporal Pattern Formation (Springer, New York, 1997).
[3] G.I. Sivashinski, Acta Astronautica 6, 569 (1979).
[4] D. Hochberg, C. Molina-París, J. Pérez-Mercader, and M. Visser, Phys. Rev. E 60, 6343 (1999), and references therein.
[5] D. Hochberg, C. Molina-París, J. Pérez-Mercader, and M. Visser, J. Stat. Phys. 99, 903 (2000).
[6] J. Cardy, in Proceedings of Mathematical Beauty of Physics, Adv. Ser. in Math. Phys. 24, ed. J.B. Zuber (cond-mat/9607163).
[7] N.D. Birrell and P.C.W. Davies, Quantum Fields in Curved Space (Cambridge University Press, Cambridge, 1982).
[8] S.A. Fulling, Aspects of Quantum Field Theory in Curved Spacetime (Cambridge University Press, Cambridge, England, 1989).
[9] A.A. Grib, S.G. Mamayev, and V.M. Mostepanenko, Vacuum Quantum Effects in Strong Fields (Friedmann Lab, St. Petersburg, Russia, 1994).
[10] B.S. DeWitt, in General Relativity: An Einstein Centenary Survey, ed. S.W. Hawking and W. Israel (Cambridge University Press, Cambridge, England, 1979).
[11] G. Gibbons, in General Relativity: An Einstein Centenary Survey, ed. S.W. Hawking and W. Israel (Cambridge University Press, Cambridge, England, 1979).
[12] See, e.g., Quantum Field Theory under the Influence of External Conditions, edited by M. Bordag (Teubner, Leipzig, 1996).
[13] There is a wealth of excellent monographs and texts treating the subject of quantum field theory. A small and select sampling of some modern expositions may be found in [14-17]. For additional references, see the bibliography in Ref. [20].
[14] P. Ramond, Field Theory: A Modern Primer (Addison-Wesley, Massachusetts, 1981).
[15] J. Zinn-Justin, Quantum Field Theory and Critical Phenomena, 2nd ed. (Oxford University Press, Oxford, 1989).
[16] R.J. Rivers, Path Integral Methods in Quantum Field Theory (Cambridge University Press, Cambridge, England, 1987).
[17] S. Weinberg, The Quantum Theory of Fields I & II (Cambridge University Press, Cambridge, England, 1996).
[18] Y. Fujimoto, L. O'Raifeartaigh, and G. Parravicini, Nucl. Phys. B 212, 268 (1983).
[19] B. Gato, J. León, J. Pérez-Mercader, and M. Quirós, Nucl. Phys. B 253, 285 (1985).
[20] D. Hochberg, C. Molina-París, J. Pérez-Mercader, and M. Visser, Int. J. Mod. Phys. A 14, 1485 (1999).
[21] M. Doi, J. Phys. A 9, 1465 (1976).
[22] L. Peliti, J. Physique 46, 1469 (1985).
[23] P. Grassberger, F. Krause, and T. von der Twer, J. Phys. A 17, L105 (1984).
[24] M.J. Howard and U.C. Tauber, J. Phys. A: Math. Gen. 30, 7721 (1997).
[25] P.-A. Rey and J. Cardy, J. Phys. A: Math. Gen. 32, 1585 (1999).
[26] H. Janssen, Z. Phys. B 42, 151 (1981).
[27] J.D. Murray, Mathematical Biology (Springer-Verlag, Berlin, 1993).
[28] I.G. Avramidi and R. Schimming, in [12], and the references therein.
[29] Calculations of heat kernel coefficients for some specific fourth-order differential operators on curved backgrounds have been performed by V.P. Gusynin, Phys. Lett. B 255, 233 (1989); Nucl. Phys. B 333, 296 (1990).
[30] Assigning proper credit for this formula is extremely difficult: it is roughly equivalent to the Hellman-Feynman theorem of quantum mechanical forces, which was originally proven by P. Ehrenfest, Z. Phys. 45, 455 (1927), later discussed by Hellman (1937), and independently rediscovered by R. Feynman, Phys. Rev. 56, 340 (1939). The version used here is equivalent to equation (2.181) of R. Feynman, Statistical Mechanics (Benjamin, Reading, Massachusetts, 1972), where Feynman simply called it "perturbation theory". The formulae of Appendix C can also be found in the mathematical literature as several un-named lemmata on the way to proving the Baker-Campbell-Hausdorff formula [31].
[31] F. Hausdorff, "Die symbolische Exponential Formel in der Gruppen Theorie", Leipziger Ber. 58, 19 (1906); R. Gilmore, "Baker-Campbell-Hausdorff formulas", J. Math. Phys. 15, 2090 (1974); J.A. Oteo, "The Baker-Campbell-Hausdorff formula and nested commutator identities", J. Math. Phys. 32, 419 (1991); H. Kobayashi, N. Hatano, and M. Suzuki, "Goldberg's theorem and the Baker-Campbell-Hausdorff formula", Physica A 250, 535 (1998).
[32] S.K. Blau, M. Visser, and A. Wipf, Nucl. Phys. B 310, 163 (1988).
[33] D. Hochberg, C. Molina-París, J. Pérez-Mercader, and M. Visser, Phys. Rev. Lett. 81, 4802 (1998).
[34] J.B. Rundle, W. Klein, and S. Gross, Phys. Rev. Lett. 76, 4285 (1996).
Dynamic cycles in edge-colored multigraphs *

Hortensia Galeana-Sánchez and Carlos Vilchis-Alfaro
Instituto de Matemáticas, Universidad Nacional Autónoma de México, Área de la investigación científica, Circuito Exterior, Ciudad Universitaria, 04510, Coyoacán, CDMX, México

5 Mar 2023 (arXiv:2303.02548)

Abstract. Let H be a graph possibly with loops and G be a multigraph without loops. An H-coloring of G is a function c : E(G) → V (H). We will say that G is an H-colored multigraph, whenever we are taking a fixed H-coloring of G. The set of all the edges with end vertices u and v will be denoted by E uv .
Introduction
For basic concepts, terminology and notation not defined here, we refer the reader to [4] and [7]. Throughout this work, we will consider graphs, multigraphs (graphs allowing parallel edges) and
Theorem 1 (Grossman and Häggkvist [19], and Yeo [29]). Let G be a c-edge-colored graph, c ≥ 2, with no PC cycle. Then, G has a vertex z ∈ V (G) such that no connected component of G − z is joined to z with edges of more than one color.
Abouelaoualim, Das, Fernandez de la Vega, Karpinski, Manoussakis, Martinhon and Saad [1] gave degree conditions, sufficient for an edge-colored multigraph to have a PC hamiltonian cycle.
Theorem 2 (Abouelaoualim et al. [1]). Let G be a c-edge-colored multigraph, such that no two parallel edges have the same color, of orden n. If for every x ∈ V (G), δ i (x) ≥ ⌈(n + 1)/2⌉, for every i ∈ {1, . . . , c}.
(a) If c = 2, then G has a PC hamiltonian cycle when n is even, and a PC cycle of length n − 1, when n is odd.
(b) If c ≥ 3, then G has a PC hamiltonian cycle.
In this paper we will consider the following edge-coloring. Let H be a graph possibly with loops and G be a graph without loops. An H-coloring of G is a function c : E(G) → V (H). We will say that G is an H-colored graph, whenever we are taking a fixed H-coloring of G. A walk W = (v 0 , e 0 , v 1 , e 1 , . . . , e k−1 , v k ) in G, where e i = v i v i+1 for every i ∈ {0, . . . , k − 1}, is an Hwalk iff (c(e 0 ), a 0 , c(e 1 ), . . . , c(e k−2 ), a k−2 , c(e k−1 )) is a walk in H, with a i = c(e i )c(e i+1 ) for every i ∈ {0, . . . , k − 2}. We will say that W is closed if v 0 = v k and c(e k−1 )c(e 0 ) ∈ E(H). Notice that if H is a complete graph without loops, then an H-walk is a properly colored walk. And moreover, if H is looped graph with no more edges, then an H-walk is a monochromatic walk.
The concepts of H-coloring and H-walks were introduced, for the first time by Linek and Sands in [22], in the context of kernel theory and related topics, see [10,11,18]. In [16], Galeana-Sánchez, Rojas-Monroy, Sánchez-López and Villarreal-Valdés gave necessary and sufficient conditions for the existence of closed Euler H-trails. In [17], Galeana-Sánchez, Rojas-Monroy and Sánchez-López study the existence of H-cycle, in H-colored graphs, and extended the Theorem 1, in the context of H-colored graphs.
Benítez-Bobadilla, Galeana-Sánchez and Hernández-Cruz [6] introduced a generalization of H-walks as follows. They allowed "lane changes", i.e., they allowed the concatenation of two H-walks, say W 1 = (x 0 , e 0 , x 1 , . . . , x k−1 , e k−1 , x k ) and W 2 = (y 0 , f 0 , y 1 , . . . , y j−1 , f j−1 , y j ), as long as the last edge of the first one and the first edge of the second one satisfy x k−1 = y 0 and x k = y 1 (i.e., the edges e k−1 and f 0 lie between the same end vertices and travel in the same direction, that is, they start at the same vertex and end at the same vertex). As a result, they defined the following concept: Let G be an H-colored multigraph, a dynamic H-walk in G is a sequence of vertices W = (x 0 , x 1 , . . . , x k ) in G such that for each i ∈ {0, . . . , k − 2} there exists an edge f i = x i x i+1 and there exists an edge
f i+1 = x i+1 x i+2 such that c(f i )c(f i+1 ) is an edge in H.
When we deal with a multigraph G, we will denote by E G uv the set of all the edges in G with end vertices u and v.
For the purpose of this paper, we need a definition and some more notation that will allow us to know the edges belonging to a dynamic H-walk. So, we will say that W = (v 0 , e 1 0 , . . . , e k 0 0 , v 1 , e 1 1 , . . . , e k 1 1 , v 2 , . . . , v n−1 , e 1 n−1 , . . . , e
k n−1 n−1 , v n ), where for each i ∈ {0, . . . , n − 1}, k i ≥ 1 and e j i ∈ E G v i v i+1 for every j ∈ {1, . . . , k i }, is a dynamic H-walk iff c(e k i i )c(e 1 i+1
) is an edge in H, for each i ∈ {0, . . . , n − 2}. We will say that a dynamic H-walk is a closed dynamic H-walk whenever v_0 = v_n and c(e^{k_{n−1}}_{n−1})c(e^1_0) is an edge in H (unless n = 1, i.e., W is of the form (v 0 , e 0 , . . . , e k , v 1 ), where k ≥ 1). If W is a dynamic H-walk that does not repeat edges (vertices), then W will be called a dynamic H-trail (dynamic H-path). If W is closed and does not repeat any vertex, except for the first and the last one, then W will be called a dynamic H-cycle.
It follows from the definition of dynamic H-walk that every H-walk in G is a dynamic H-walk in G. Moreover, if G has no parallel edges, then every dynamic H-walk is an H-walk.
A motivation for the study of dynamic H-walks in H-colored multigraphs are their possible applications. For example, suppose that we are working with a communication network, represented by a graph G, where each vertex represent a connection point, and an edge between two connection points means that they have a direct link between them. Moreover, each directed link have failure probability (namely risk; such as, damage, attack, virus, blockage, among many others), this failure probability will be represented by a color assigned to that edge. Now, consider a new graph, say H, where each vertex of H is one of the color used in the described coloring of the edges of G; and we add an edge in H from one color to another whenever such a colors transition is convenient or possible (for example, if transitions with the same probability of failure are forbidden, then H will have no loops). Notice that in the practice, it is possible to send the same message simultaneously over two or more parallel connections. If it is required to send a message from point A to point B through G with the more convenient form, we need to find a dynamic H-walk in G from A to B.
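To make this routing picture concrete, here is a minimal reachability sketch of our own (it is not an algorithm taken from the paper): since only the colour of the last edge used to leave a segment constrains the first edge of the next segment, it suffices to run a breadth-first search over states (current vertex, colour of the last edge used). The encoding of G as labelled edge triples and of E(H) as a set of unordered colour pairs is an assumption made purely for illustration.

from collections import deque

def has_dynamic_H_walk(edges, H_adj, source, target):
    """edges: iterable of (u, v, colour) parallel edges of G;
    H_adj: set of frozensets {c1, c2} describing E(H) (a loop at c is frozenset({c}))."""
    if source == target:
        return True
    colours = {}                                        # colours[u][v] = set of colours on E_uv
    for u, v, c in edges:
        colours.setdefault(u, {}).setdefault(v, set()).add(c)
        colours.setdefault(v, {}).setdefault(u, set()).add(c)

    start = {(v, c) for v, cs in colours.get(source, {}).items() for c in cs}
    queue, seen = deque(start), set(start)
    while queue:
        v, last = queue.popleft()
        if v == target:
            return True
        for w, cs in colours.get(v, {}).items():
            # enter E_vw on any edge whose colour is H-adjacent to the previous exit colour ...
            if any(frozenset((last, c)) in H_adj for c in cs):
                # ... and leave it on any parallel edge, whose colour becomes the new state.
                for c_exit in cs:
                    if (w, c_exit) not in seen:
                        seen.add((w, c_exit))
                        queue.append((w, c_exit))
    return False

Recording a predecessor for each state would also recover an explicit dynamic H-walk rather than a yes/no answer.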
In addition, multigraphs can model several applied problems in a more natural way than simple graphs, see [14,26,27].
In this work, we study the existence of dynamic H-cycles and dynamic H-trails, and the length of dynamic H-cycles and dynamic H-paths in H-colored multigraphs. To accomplish this, we introduce a new concept of color degree, namely, the dynamic degree, that allows us to extend some classic results, such as, Ore's Theorem, for H-colored multigraphs. Also, we give sufficient conditions for the existence of hamiltonian dynamic H-cycles in H-colored multigraphs with at most one "lane change", and as a consequence, we obtain sufficient conditions for the existence of PC hamiltonian cycle in c-edge-colored multigraphs, with c ≥ 3. Moreover, we improve the conditions given in Theorem 2 b) for an infinitely family of multigraphs.
Notation and Terminology
Let G be a multigraph. If e is an edge and u and v are the vertices such that e = uv, then e is said to join u and v, we will say that u and v are the ends of e, we will say that the edge e is incident with u (respectively v) and also we will say that u and v are adjacent. If u = v, then the edge e is a loop. The set of all the edges in G with end vertices u and v will be denoted by E G uv , when there is no confusion, for simplicity, we will write E uv . Let e and f be two edges in E uv , we will say that e and f are parallel. The neighborhood of a vertex u, denoted by N G (u), is defined as the set of all the vertices adjacent with u in G.
Figure 1: An H-colored multigraph G with vertex set {v 1 , . . . , v 7 } and colors B, G, R.

A walk in a multigraph G is a sequence (v 0 , v 1 , . . . , v k ), where v i v i+1 ∈ E(G) for every i in {0, . . . , k − 1}.
We define the length of the walk W as the number k, denoted by l(W ). We will say that a walk is closed
if v 0 = v k . If v i ≠ v j for all i and j with i ≠ j, it is called a path. A cycle is a closed walk (v 0 , v 1 , . . . , v k , v 0 ), with k ≥ 3, such that v i ≠ v j for all i and j with i ≠ j.
A graph G is said to be multipartite if, for some positive integer k, there exists a partition X 1 , . . . , X k of V (G) such that X i is an independent set in G (that is, no two vertices of X i are adjacent) for every i in {1, . . . , k}; in this case, G is also called k-partite. It is said that G is a complete k-partite graph whenever G is k-partite and for every u in X i and for every v in X j , with i ≠ j, we have that u and v are adjacent, denoted by K n 1 ,...,n k , where |X i | = n i for every i in {1, . . . , k}. In the particular case when k = 2, the graph G is said to be a bipartite graph.
Let G be an H-colored multigraph, a sequence W = (v 0 , e 1 0 , . . . , e k 0 0 , v 1 , e 1 1 , . . . , e k 1 1 , v 2 , . . . , v n−1 , e 1 n−1 , . . . , e
k n−1 n−1 , v n ) in G, where for each i ∈ {0, . . . , n − 1}, k i ≥ 1 and e j i ∈ E v i v i+1 for every j ∈ {1, . . . , k i }, is a dynamic H-walk in G iff c(e k i i )c(e 1 i+1
) is an edge in H, for each i ∈ {0, . . . , n − 2}. We define the length of the dynamic H-walk, denoted by l(W ), as the number n. We will say that the dynamic H-walk W has k i − 1 changes from v i to v i+1 , and the number of changes of W is ∑_{i=0}^{n−1} (k i − 1). Notice that if W is a dynamic H-walk with zero changes, then W is an H-walk. In Figure 1, T = (v 1 , e 4 , v 4 , e 10 , v 5 , e 14 , e 13 , v 6 , e 15 , v 7 ) is a dynamic H-trail with one change and C = (v 4 , e 9 , v 5 , e 14 , e 13 , v 6 , e 15 , v 7 , e 11 , v 4 ) is a dynamic H-cycle with one change.
Main Results
In what follows H will be a graph possibly with loops, and G will be an H-colored multigraph.
We will use an auxiliary graph, denoted by G u , that is defined as follows: Let G be an Hcolored multigraph and u be a vertex of G; G u is the simple graph such that V (G u ) = {e ∈ E(G) : e is incidentwith u}, and two different vertices a and b are joining by only one edge in G u if and only if c(a) and c(b) are adjacent in H.
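A small sketch of this construction (ours, with an edge list carrying explicit identifiers so that parallel edges stay distinct, and with E(H) given as a set of unordered colour pairs):

import itertools

def build_G_u(edges, H_adj, u):
    """edges: list of (edge_id, x, y, colour); H_adj: set of frozensets of colours (loops allowed).
    Returns the vertices of G_u (identifiers of edges incident with u) and its edge set."""
    incident = [e for e in edges if u in (e[1], e[2])]
    gu_edges = {
        frozenset((a[0], b[0]))
        for a, b in itertools.combinations(incident, 2)
        if frozenset((a[3], b[3])) in H_adj
    }
    return [e[0] for e in incident], gu_edges

In the results below, the hypothesis that every G_u is a complete multipartite graph is what makes a greedy choice of edges along a walk possible.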
Let G be an H-colored multigraph and {u, v} ⊆ V (G). We will say that E uv is a dynamic edge set if and only if there exist {e, f } ⊆ E uv such that N H (c(e)) ≠ N H (c(f )) and neither of them is a subset of the other. The dynamic degree of u, denoted by δ dym (u), is the number of vertices v such that E uv is a dynamic edge set. In Figure 1, the set E v 1 v 4 is a dynamic edge set but E v 1 v 2 is not a dynamic edge set, since N H (c(e 1 )) = N H (c(e 2 )) = {R}.

Observation 1. If G u is a complete k u -partite graph, for some k u ≥ 2, then E uv is a dynamic edge set if and only if there exist e and f in E uv in different sets of the partition of V (G u ).

Proposition 3. Let G be an H-colored multigraph such that G u is a complete k u -partite graph, for every u in V (G) and for some k u ,
k u ≥ 2. If T = (x 0 , x 1 , . . . , x n ) is a walk in G such that for each i ∈ {0, . . . , n − 1}, E x i x i+1 is a dynamic edge set, then there exist e i ∈ E x i x i+1 , for every i ∈ {0, . . . , n − 1}, such that T ′ = (x 0 , e 0 , x 1 , . . . , x n−1 , e n−1 , x n ) is an H-walk. Moreover, if E x 0 xn
is a dynamic edge set, then there exist {e n , e n+1 } ⊆ E x 0 xn such that C = (x 0 , e 0 , x 1 , . . . , x n−1 , e n−1 , x n , e n , x 0 ) is a closed H-walk or C = (x 0 , e 0 , x 1 , . . . , x n−1 , e n−1 , x n , e n , e n+1 , x 0 ) is a closed dynamic H-walk, i.e., there exists a closed dynamic H-walk with at most one change.
Proof. Suppose that G is an H-colored multigraph such that G u is a complete k u -partite graph, for every u in V (G) and for some
k u ≥ 2. Let T = (x 0 , x 1 , . . . , x n ) be a walk in G such that E x i x i+1 is a dynamic edge set. Consider the edge e 0 = x 0 x 1 in E(G), since E x 1 x 2 is a dynamic edge set, by Observation 1, we have that there exist f 1 = x 1 x 2 and f 2 = x 1 x 2 in different sets of the partition of V (G x 1 ). It follows from the fact that e 0 ∈ V (G x 1 ) and G x 1 is a complete k x 1 -partite graph, that e 0 f 1 ∈ E(G x 1 ) or e 0 f 2 ∈ E(G x 1 ).
And then e 1 will be the edge such that e 0 e 1 ∈ E(G x 1 ), i.e., e 1 = f 1 or e 1 = f 2 (in case that both edges are adjacent to e 0 , we take e 1 = f 1 ).
Since E x 2 x 3 is a dynamic edge set, by Observation 1, we have that there exist g 1 = x 2 x 3 and
g 2 = x 2 x 3 in different sets of the partition of V (G x 2 ). It follows from the fact that e 1 ∈ V (G x 2 ) and G x 2 is a complete k x 2 -partite graph, that e 1 g 1 ∈ E(G x 2 ) or e 1 g 2 ∈ E(G x 2 )
. And then e 2 will be the edge such that e 1 e 2 ∈ E(G x 2 ), i.e., e 2 = g 1 or e 2 = g 2 (in case that both edges are adjacent to e 1 , we take e 2 = g 1 ). Repeating this procedure, we have that for every
i ∈ {0, . . . , n − 1}, there exist e i ∈ E x i x i+1 such that T ′ = (x 0 , e 0 , x 1 , e 1 , x 2 , . . . , x n−1 , e n−1 , x n ) is an H-walk. Now, suppose that E x 0 xn is a dynamic edge set, since G xn is a complete k xn -partite graph, then there exists e n = x n x 0 such that e n−1 e n ∈ E(G xn ). If e n e 0 ∈ E(G x 0 ), then C = (x 0 , e 0 , x 1 , . . . , x n , e n , x 0 ) is a closed H-walk.
Otherwise, e n e 0 ∈ E(G x 0 ). And since E xnx 0 is a dynamic edge set, then there exist an edge e n+1 = x n x 0 such that e n and e n+1 are in different sets of the partition of V (G x 0 ). Now, since G x 0 is a complete k x 0 -partite graph and e n e 0 ∈ E(G x 0 ), then e n+1 e 0 ∈ E(G x 0 ). Therefore, C = (x 0 , e 0 , x 1 , T ′ , x n , e n , e n+1 , x 0 ) is a closed dynamic H-walk.
Corollary 4. Let G be an H-colored multigraph such that G u is a complete k u -partite graph, for every u in V (G) and for some k u , k u ≥ 2. If T = (x 0 , x 1 , . . . , x n ) is a path in G such that for every i ∈ {0, . . . , n − 1}, E x i x i+1 is a dynamic edge set, then there exist e i ∈ E x i x i+1 , for every i ∈ {0, . . . , n − 1}, such that T ′ = (x 0 , e 0 , x 1 , . . . , x n−1 , e n−1 , x n ) is an H-path. Moreover, if E x 0 xn is a dynamic edge set, then there exist {e n , e n+1 } ⊆ E x 0 xn such that C = (x 0 , e 0 , x 1 , . . . , x n−1 , e n−1 , x n , e n , x 0 ) is an H-cycle or C = (x 0 , e 0 , x 1 , . . . , x n−1 , e n−1 , x n , e n , e n+1 , x 0 ) is a dynamic H-cycle, i.e., there exists a dynamic H-cycle with at most one change.
Theorem 5. Let G be an H-colored multigraph such that G u a complete k u -partite graph, for every u in V (G) and for some k u , k u ≥ 2. If δ dym (u) ≥ d ≥ 2, for every u ∈ V (G), then G has a dynamic H-cycle of length at least d + 1 and with at most one change.
Proof. Let T = (x 0 , x 1 , . . . , x k ) be a path of maximum length such that E x i x i+1 is a dynamic edge set, for every i ∈ {0, . . . , k − 1}. Claim 1. T has length at least d.
Suppose that T has length at most d − 1. Since δ dym (x k ) ≥ d, then there exists a vertex x k+1 ∈ V (G) \ V (T ) such that E x k x k+1 is a dynamic edge set. Hence, T ′ = (x 0 , x 1 , . . . , x k , x k+1 ) is a path of length k+1 such that E x i x i+1 is a dynamic edge set, for every i ∈ {0, . . . , k}, contradiction. Therefore, T has length at least d.
Since T is of maximum length, if E x 0 u is a dynamic edge set, then u ∈ V (T ) (otherwise we can extend T ). Let j = max{i | E x 0 x i is a dynamic edge set}. Since δ dym (x 0 ) ≥ d, we have that j ≥ d. Therefore, C ′ = (x 0 , x 1 , . . . , x j , x 0 ) is a cycle such that E x i x i+1 is a dynamic edge set, for every i ∈ {0, . . . , j}, hence by Corollary 4, we have that there exist C a dynamic H-cycle of length j + 1 ≥ d + 1 with at most one change.
Let K 2 n be a complete multigraph with |E uv | = 2, for every {u, v} ∈ V (K 2 n ). And, let H be a complete simple graph with k ≥ 2 vertices, and G the union of two K 2 n that share a unique vertex. If we H-color G in such a way that every pair of parallel edges has different color, then δ dym (x) ≥ n − 1, for every x ∈ V (G), and the length of the maximum dynamic H-cycle in G is n. So, we cannot improve the length of the dynamic H-cycle in Theorem 5.
Corollary 6. Let G be an H-colored complete multigraph such that G u is a complete k u -partite graph, for every u ∈ V (G) and for some k u , k u ≥ 2. If E xy is a dynamic edge set, for every {x, y} ⊆ V (G), then G has a hamiltonian dynamic H-cycle.
Theorem 7. Let G be an H-colored multigraph such that G u a complete k u -partite graph, for every u in V (G) and for some k u , k u ≥ 3. If δ dym (u) ≥ d ≥ 2, for every u ∈ V (G), then G has an H-path of length at least min{2d, n}, or G has an H-cycle of length at least d + 1.
Proof. Suppose that G is an H-colored multigraph such that for every u ∈ V (G), we have that δ dym (u) ≥ d ≥ 2, and G u is a complete k u -partite graph, for some k u ≥ 3.
Then, for every
x ∈ V (G), there exist e x , f x , g x ∈ E(G) such that {e x , f x } ⊆ E xvx , for some v x ∈ V (G), g x = xy x ∈ E xvx
and e x , f x and g x are in different parts of the partition of G x (it possible by Observation 1 and the fact that δ dym (x) ≥ 2).
Let T = (u 0 , u 1 , . . . , u j−1 = v x , u j = x, u j+1 = y x , . . . , u k ) the longest path such that {g x , e x } ⊆ E(T ) and E u i u i+1 is a dynamic edge set, for every i ∈ {0, . . . , j − 1, j + 1, . . . , k − 1}, i.e., E xyx is the only not necessarily a dynamic edge set in T .
Notice that l(T ) is at least d since δ dym (u 0 ) ≥ d and T is of maximum length path, hence k = l(W ) ≥ d.
If k ≥ 2d, since E u j−2 u j−1 is a dynamic edge set, then there is an edge e j−2 ∈ E u j−2 u j−1 such that c(e j−2 )c(g x ) ∈ E(H). So, by following the same procedure as in the proof of Proposition 3, we can construct the following: 1) An H-path from u j−1 to u 0 starting with the edge e j−2 , say T 0 = (u j−1 , e j−2 , u j−2 , . . . , u 1 , e 0 , u 0 ); and 2) an H-path from u j to u k starting with the edge e x , say T 1 = (u j , e x , u j+1 , . . . , u k−1 , e k , u k ).
Hence, T ′ = (u 0 , T −1 0 , u j−1 , g x , u j , T 1 , u k ) is an H-path of length k ≥ 2d. So, suppose that k ≤ 2d − 1. Case 1. j + 1 ≤ d. Notice that if E u 1 v is a dynamic edge set, then v ∈ V (T ). Otherwise, T ′ = (v, u 0 , u 1 , . . . , u k ) is a path of length k+1 such that E v i v i+1 is a dynamic edge set, for every i ∈ {0, . . . , j−1, j+1, . . . , k−1}, contradicting the choice of T . Therefore, v ∈ V (T ).
On the other hand, since δ dym (u 0 ) ≥ d, then there is u p ∈ V (T ), where d ≤ p ≤ k such that E u 0 up is a dynamic edge set. Hence, C = (u 0 , u 1 , . . . , u j , u j+1 , . . . , u p , u 0 ) is a cycle in G such that E u i u i+1 is a dynamic edge set, for every i ∈ {0, . . . , j − 1, j + 1, . . . , p − 1}.
Since C is a cycle, we can rewrite it as follows C = (u j = x, u j−1 = y x , u j−2 , . . . , u 0 , u p , u p−1 , . . . , u j+1 = v x , u j = x). Since E u j−2 u j−1 is a dynamic edge set, there is an edge e j−2 ∈ E u j−2 u j−1 such that c(e j−2 )c(g x ) ∈ E(H). Hence, by Corollary 4, there is an H-path T ′ = (x, g x , y x , e j−2 , u j−2 , . . . , u j+2 , e j+2 , u j+1 = v x ).
Since e x and f x are incident with v x , then e j+2 , e x and f x are vertices of G vx . We know that e x and f x are in different parts of the partition, then e j+2 e x ∈ E(G vx ) or e j+2 f x ∈ E(G vx ). Hence, C ′ = (x, g x , y x , e j−2 , u j−3 , . . . , u j+2 , e j+2 , v x , e x , x) or C ′ = (x, g x , y x , e j−2 , u j−3 , . . . , u j+2 , e j+2 , v x , f x , x) is an H-cycle (because, pairwise e x , f x and g x are in different parts of the partition of G x ) and C ′ is of length at least d + 1.
Case 2. j + 1 > d.
Notice that if E u k v is a dynamic edge set, then v ∈ V (T ). Otherwise, T ′ = (u 0 , u 1 , . . . , u k , u k+1 = v) is a path of length k + 1 such that E v i v i+1 is a dynamic edge set, for every i ∈ {0, . . . , j − 1, j + 1, . . . , k}, contradicting the choice of T . Therefore, v ∈ V (T ).
On the other hand, since δ dym (u k ) ≥ d, then there is
u p ∈ V (T ), where p ≤ 2d − d − 1 = d − 1,
such that E u k up is a dynamic edge set. Hence, C = (u p , u p+1 , . . . , u j−1 , u j , u j+1 , . . . , u k , u p ) is a cycle in G such that E u i u i+1 is a dynamic edge set, for every i ∈ {p, . . . , k − 1} \ {j}.
Since C is a cycle, we can rewrite it as follows C = (u j = x, u j−1 = y x , u j−2 , . . . , u p , u k , u k−1 , . . . , u j+1 = v x , u j = x). Since E u j−2 u j−1 is a dynamic edge set, there is an edge e j−2 ∈ E u j−2 u j−1 such that c(e j−2 )c(g x ) ∈ E(H). Hence, by Corollary 4, there is an H-path T ′ = (x, g x , y x , e j−2 , u j−2 , . . . , u p , e k , u k , e k−1 , u k−1 , . . . , u j+2 , e j+2 , u j+1 = v x ).
Since e x and f x are incident with v x , then e j+2 , e x and f x are vertices of G vx . We know that e x and f x are in different parts of the partition, then e j+2 e x ∈ E(G vx ) or e j+2 f x ∈ E(G vx ). Hence,
C ′ = (x, g x , y, e j−2 , u j−3 , T ′ , u j+2 , e j+2 , v x , e x , x) or C ′ = (x, g x , y, e j−2 , u j−3 , T ′ , u j+2 , e j+2 , v x , f x , x)
is an H-cycle (because, pairwise e x , f x and g x are in different parts of the partition of G x ) and C ′ is of length at least d + 1.
Let G be an H-colored multigraph. We will say that the dynamic graph of G, denoted by G dym , is the simple graph such that V (G dym ) = V (G) and two different vertices u and v are adjacent, with only one edge, in G dym if and only if E uv is a dynamic edge set in G.
Theorem 8. Let G be an H-colored multigraph such that G u a complete k u -partite graph, for every u in V (G) and for some k u , k u ≥ 2. If G dym is connected and δ dym (x) = 2p x , where p x ≥ 1, then G has a spanning closed dynamic H-trail with at most one change.
Proof. Suppose that G dym is connected and δ dym (x) = 2p x , where p x ≥ 1, then we have that G dym has a closed Euler trail, say T = (x 0 , x 1 , . . . , x n , x 0 ).
Then, T is a spanning closed dynamic H-trail in G such that E x i x i+1 is a dynamic edge set, for every i ∈ {0, 1, 2, . . . , n} (when i = n, then x i+1 = x 0 ). Therefore, by Proposition 3, there exist a spanning closed dynamic H-trail in G with at most one change.
Theorem 9. Let G be an H-colored multigraph with n vertices such that G u a complete k u -partite graph, for every u in V (G) and for some k u , k u ≥ 2.
(a) If δ dym (u) + δ dym (v) ≥ n, for every {u, v} ⊆ V (G) such that E uv is not a dynamic edge set, then G has a hamiltonian dynamic H-cycle with at most one change.
(b) If there is x 0 ∈ V (G) such that k x 0 ≥ 3, and δ dym (x) + δ dym (y) ≥ n + 1, for every {x, y} ⊆ V (G), such that E xy is not a dynamic edge set, then G has a hamiltonian H-cycle.
Proof. a) Notice that δ G dym (u) + δ G dym (v) ≥ n for every pair of non adjacent vertices u and v in G dym . Hence, by Ore's Theorem, we have that G dym has a hamiltonian cycle, say C = (x 1 , x 2 , . . . , x n , x 1 ). Then, C is a hamiltonian cycle in G such that E x i x i+1 is a dynamic edge set, for every i ∈ {1, 2, . . . , n} (when i = n, then x i+1 = x 1 ). Therefore, by Corollary 4, there exist a hamiltonian dynamic H-cycle with at most one change. b) Suppose that G is an H-colored multigraph such that G u is a complete k u -partite graph, for some k u ≥ 2, for every u ∈ V (G), and δ dym (x) + δ dym (y) ≥ n + 1, for every {x, y} ⊆ V (G) such that E xy is not a dynamic edge set and there is x 0 ∈ V (G) such that k x 0 ≥ 3.
It follows from (a) that G has a hamiltonian cycle such that E x i x i+1 is a dynamic edge set, for every i ∈ {0, . . . , n} (when i = n, then x i+1 = x 0 ), say C = (x 0 , x 1 , . . . , x n , x 0 ).
Notice that G x i [E x i x i−1 ∪ E x i x i+1 ] is a complete k ′ x i -partite graph, where 2 ≤ k ′ x i ≤ k x i , for every i ∈ {0, . . . , n}.
Case 1. There exists i ∈ {0, . . . , n} such that k ′
x i ≥ 3. Then there exist e ∈ E x i−1 x i , g ∈ E x i x i+1 and f ∈ E x i x i−1 ∪ E x i x i+1 which are in different parts of G x i [E x i x i−1 ∪ E x i x i+1 ]. If f ∈ E x i x i−1 . By Corollary 4, there is an H-path T 1 = (x i , g, x i+1 , . . . , x i−2 , e i−2 , x i−1 ). Hence, C 1 = (x i , T 1 , x i−1 , e, x i ) or C 2 = (x i , T 1 , x i−1 , f, x i ) is a hamiltonian H-cycle in G.
If f ∈ E x i x i+1 . By Corollary 4, there is an H-path T 2 = (x i , e, x i−1 , . . . , x i+2 , e i+2 , x i+1 ). Hence,
C 3 = (x i , T 2 , x i+1 , f, x i ) or C 4 = (x i , T 2 , x i−1 , g, x i ) is a hamiltonian H-cycle in G. Case 2. k ′ x i = 2, for every x ∈ V (G), i.e., G x i [E x i x i−1 ∪ E x i x i+1 ] is a complete bipartite graph. Let A = {g ∈ V (G x 0 ) : ge ∈ E(G x 0 ) for every e ∈ V (G x 0 [E x 0 x 1 ∪ E x 0 xn ])}. Since G x 0 is a complete k x 0 -partite graph and k x 0 ≥ 3, then A = ∅. Let p = max{i : E x 0 x i ∩ A = ∅}.
Notice that p ∈ {0, 1, n} because of the condition of the case and there is no loops.
If E x 1 x p+1 is a dynamic edge set. By Corollary 4 and E xpx p−1 is a dynamic edge set, there is an H-path T 3 = (x p , e p , x p−1 , . . . , x 1 , e 1 , x p+1 , e p+1 , x p+2 , . . . , x n ) such that ge p ∈ E(G vp ). Since E xnx 0 is a dynamic edge set and g ∈ A, we have that there is an edge e n ∈ E xnx 0 such that C 5 = (x 0 , g, x 1 , T, x n , e n , x 0 ) is a hamiltonian H-cycle in G.
If E x 1 x p+1 is not a dynamic edge set, then by the hypothesis δ dym (x 1 ) + δ dym (x p+1 ) ≥ n + 1. Subcase 1. There is j, where 2 < j ≤ p, such that E x 1 x j and E x p+1 x j−1 are dynamic edge sets.
When j = p, Corollary 4 and the fact that E x 1 xp is a dynamic edge set, imply that T 4 = (x 0 , g, x p , e p , x 1 , . . . , x p−1 , e p−1 , x p+1 , . . . , x n ) is an H-path. Since, E xnx 0 is a dynamic edge set and g ∈ A, there is an edge e n ∈ E xnx 0 such that C 4 = (x 0 , g, x p , T 4 , x n , e n , x 0 ) is a hamiltonian H-cycle. Otherwise, T 5 = (x 0 , g, x p , e p , x p−1 , . . . , x j , e j , x 1 , e 1 , x 2 , . . . , x j−1 , e j−1 , x p+1 , e p+1 , x p+2 , . . . , x n ) is an H-path. Since, E xnx 0 is a dynamic edge set and g ∈ A, there is an edge e n ∈ E xnx 0 such that C 5 = (x 0 , g, x p , T 5 , x n , e n , x 0 ) is a hamiltonian H-cycle.
Subcase 2.
There is j, where p + 2 ≤ j ≤ n, such that E x 1 x j and E x p+1 x j+1 are dynamic edge sets.
When j = n, Corollary 4 and the fact that E xpx p−1 is a dynamic edge set, imply that T 6 = (x 0 , g, x p , e p , x p−1 , . . . , x 1 , e 1 , x n , e n , x n−1 , . . . , x p+1 ) is an H-path. Since E x 0 x p+1 is a dynamic edge set, g ∈ A and by the maximality of g; there is an edge e p+1 ∈ E x p+1 x 0 such that C 6 = (x 0 , T 6 , x p+1 , e p+1 , x 0 ) is a hamiltonian H-cycle. Otherwise, T 7 = (x 0 , g, x p , e p , x p−1 , . . . , x 1 , e 1 , x j , e j , x j−1 , . . . , x p+1 , e p+1 , x j+1 , e j+1 , x j+2 , . . . , x n ) is an H-path. Since, E xnx 0 is a dynamic edge set and g ∈ A, there is an edge e n ∈ E xnx 0 such that C 7 = (x 0 , g, x p , T 7 , x n , e n , x 0 ) is a hamiltonian H-cycle.
Subcase 3. For each j, where 2 < j ≤ p, at least one of E x 1 x j or E x p+1 x j−1 is not a dynamic edge set, and for each k, where p + 2 ≤ k ≤ n, at least one of E x 1 x k or E x p+1 x k+1 is not a dynamic edge sets.
In this case, δ dym (x p+1 ) ≤ (n−2)−(δ dym (x 1 )−2) = n−δ dym (x 1 ). So, δ dym (x 1 )+δ dym (x p+1 ) ≤ n, a contradiction.
Therefore, G has a hamiltonian H-cycle.
We think (but still we cannot prove) that the statement of Theorem 9b remains true if we replace the condition δ dym (x) + δ dym (y) ≥ n + 1 by δ dym (x) + δ dym (y) ≥ n. Moreover, we cannot replace it by δ dym (x) + δ dym (y) ≥ n − 1, since if we H-color G, the multigraph resulting from the union of two K 3 n that share a unique vertex, in such a way that every pair of parallel edges has different color, where H is a complete simple graph with at least three vertices. Then, G has no hamiltonian H-cycle.
Theorem 10. Let G be an H-colored multigraph with n vertices such that G u is a complete k upartite graph, for every u in V (G) and for some k u , k u ≥ 2. If δ dym (u) + δ dym (v) ≥ n − 1, for every pair of distinct vertices u and v of G such that E uv is not a dynamic edge set, then G has a hamiltonian H-path.
Theorem 11. Let G be an H-colored multigraph with n vertices such that G u is a complete k upartite graph, for every u in V (G) and for some k u , k u ≥ 2. If δ dym (u) + δ dym (v) ≥ n + 1, for every pair of distinct vertices u and v of G such that E uv is not a dynamic edge set, then for every pair of distinct vertices x and y, there is a hamiltonian H-path between x and y.
Corollary 12. Let G be an H-colored multigraph such that G u a complete k u -partite graph, for every u in V (G) and for some k u , k u ≥ 2. If δ dym (u) ≥ n/2, for every u ∈ V (G), then G has a hamiltonian dynamic H-cycle with at most one change.
Corollary 13. Let G be an H-colored multigraph such that G u is a complete k u -partite graph, for every u ∈ V (G), and for some k u , k u ≥ 3. If δ dym (x) ≥ (n + 1)/2, for every x ∈ V (G), then G has a hamiltonian H-cycle.
Recall that a c-edge-colored multigraph can be represented as an H-colored multigraph if H is a complete graph with c vertices and without loops. Moreover, if {e, f } ⊆ E xy , for some {x, y} ⊆ V (G), such that c(e) = c(f ), i.e., e and f are parallel edges of different color, then E xy is a dynamic edge set, by Observation 1.
Corollary 14.
Let G be a c-edge-colored multigraph such that every vertex is incident to at least two edges of different color. If at least one vertex is incident to at least three edges of different color and, for every pair of distinct vertices x and y, δ dym (x) + δ dym (y) ≥ n + 1, then G has a PC hamiltonian cycle.
Corollary 15. Let G be a c-edge-colored multigraph such that every vertex is incident to at least two edges of different color. If δ dym (x) ≥ (n + 1)/2, for every x ∈ V (G), and at least one vertex is incident to at least three edges of different color, then G has a PC hamiltonian cycle.
Recall that in a c-edge-colored multigraph, say G, we say that N G i (x) denotes the set of vertices of G that are joined to x with an edge of color i. The ith degree of x, x ∈ V (G), denoted by δ G i (x), is equal to |N G i (x)|, i.e., the cardinality of N G i (x). When there is no confusion, for simplicity, we will write N i (x) and δ i (x) instead of N G i (x) and δ G i (x), respectively. Its follows by the definition of δ G i (x) that if {e, f } ⊆ E xu , for some u ∈ V (G) such that e and f have the same color, then δ G\{e} i (x) = δ G\{f } i (x) = δ G i (x). So, in what follows, we will consider edge-colored multigraphs with no parallel edges with the same color. Therefore, if G is an c-edge-colored multigraph, then |E uv | ≤ c, for every {u, v} ⊆ V (G).
Theorem 16. Let G be a c-edge-colored multigraph, c ≥ 3, with n vertices and |E uv | ≤ c − 1, for every {u, v} ⊆ V (G). If for every x ∈ V (G), δ i (x) ≥ n/2, for every i ∈ {1, . . . , c}, then G has PC hamiltonian cycle.
Proof. Suppose that G is a c-edge-colored multigraph, c ≥ 3, with n vertices, |E uv | ≤ c − 1, for every {u, v} ⊆ V (G), and δ i (x) ≥ n/2, for every x ∈ V (G) and for every i ⊆ {1, . . . , c}.
Claim. δ dym (x) ≥ (n + 1)/2, for every x ∈ V (G). Proceeding by contradiction, suppose that there is a vertex u ∈ V (G) such that δ dym (u) = k < (n + 1)/2.
On the one hand, since δ i (u) ≥ n/2, for every i ∈ {1, . . . , c}, we have that d(u) ≥ cn/2, i.e, the number of edges incident to x is at least cn/2.
On the other hand, if E uy is a dynamic edge set, then u is joined to y with at most (c − 1) edges. Otherwise, E uy is not a dynamic edge set and u is joined to y with at most one edge. Then, d(u) ≤ (n − 1 − k) + (c − 1)k = n − 1 + (c − 2)k.
Therefore, d(u) ≤ n−1+(c−2)k < n−1+(c−2)(n+1/2) = c(n+1/2)−2 < c(n+1/2) ≤ d(u), a contradiction.
Therefore, δ dym (x) ≥ (n+1)/2, for every x ∈ V (G), and by Corollary 13, G has PC hamiltonian cycle.
We think (but still we cannot prove) that the previous theorem remains true, if we remove the condition "|E uv | ≤ c − 1, for every {u, v} ⊂ V (G)".
is an edge in H. Notice that if W is a closed dynamic H-walk satisfying that v 1 = v n and e k n−1 n−1 and e 1 0 are parallel in G, then W can be rewrite as W = (v 1 , e 1 1 , ..., v n−1 = v 0 , e 1 n−1 , . . . , e k n−1
Figure 1 :
1The sequence P = (v 4 , e 9 , v 5 , e 14 , e 13 , v 6 , e 16 , v 7 , e 11 , e 12 , v 4 ) is a dynamic H-cycle in G and there is no H-cycle of length greater than 2 containing v 5 , v 6 or v 7
Cycles and paths in edge-colored graphs with given degrees. A Abouelaoualim, K C Das, W F De La, M Vega, Y Karpinski, C Manoussakis, R Martinhon, Saad, J. Graph Theory. 64A. Abouelaoualim, K. C. Das, W. F. de la Vega, M. Karpinski, Y. Manoussakis, C. Martinhon, and R. Saad, Cycles and paths in edge-colored graphs with given degrees, J. Graph Theory, 64 (2010), 63-86.
Algorithms for routing and channel assignment in wireless infrastructure networks. S K Ahuja, The University of ArizonaPhD thesisS. K. Ahuja, Algorithms for routing and channel assignment in wireless infrastructure net- works, PhD thesis, The University of Arizona, 2010.
On supereulerian 2-edge-coloured graphs. J Bang-Jensen, T Bellitto, A Yeo, Graphs Combin. 37J. Bang-Jensen, T. Bellitto, and A. Yeo, On supereulerian 2-edge-coloured graphs, Graphs Combin, 37 (2021), 2601-2620.
J Bang-Jensen, G Z Gutin, Digraphs: theory, algorithms and applications. Springer Science & Business MediaJ. Bang-Jensen and G. Z. Gutin, Digraphs: theory, algorithms and applications, Springer Science & Business Media, 2008.
Partitioning 2-edge-colored Ore-type graphs by monochromatic cycles. J Barát, G N Sárközy, J. Graph Theory. 81J. Barát and G. N. Sárközy, Partitioning 2-edge-colored Ore-type graphs by monochromatic cycles, J. Graph Theory, 81 (2016), 317-328.
Characterization of color patterns by dynamic H-paths. G Benítez-Bobadilla, H Galeana-Sánchez, C Hernández-Cruz, Discrete Appl. Math. 267G. Benítez-Bobadilla, H. Galeana-Sánchez, and C. Hernández-Cruz, Characterization of color patterns by dynamic H-paths, Discrete Appl. Math., 267 (2019),41-51.
J A Bondy, U S R Murty, Graph theory with applications. Macmillan London290J. A. Bondy, U. S. R. Murty, et al., Graph theory with applications, vol. 290, Macmillan London, 1976.
Alternating hamiltonian cycles in two colored complete bipartite graphs. A G Chetwynd, A J Hilton, J. Graph Theory. 16A. G. Chetwynd and A. J. Hilton, Alternating hamiltonian cycles in two colored complete bipartite graphs, J. Graph Theory, 16 (1992), 153-158.
Paths through fixed vertices in edge-colored graphs. W Chou, Y Manoussakis, O Megalakaki, M Spyratos, Z Tuza, Math. Inform. Sci. Humaines. 127W. Chou, Y. Manoussakis, O. Megalakaki, M. Spyratos, and Z. Tuza, Paths through fixed vertices in edge-colored graphs, Math. Inform. Sci. Humaines, 127 (1994),49-58.
. P Delgado-Escalante, H Galeana-Sánchez, AKCE Int. J. Graphs Comb. 11Restricted domination in arc-colored digraphsP. Delgado-Escalante and H. Galeana-Sánchez, Restricted domination in arc-colored digraphs, AKCE Int. J. Graphs Comb., 11 (2014),95-104.
Independent restricted domination and the line digraph. P Delgado-Escalante, H Galeana-Sánchez, L P Ramírez, AKCE Int. J. Graphs Comb. 9P. Delgado-Escalante, H. Galeana-Sánchez, and L. P. Ramírez, Independent restricted domi- nation and the line digraph, AKCE Int. J. Graphs Comb., 9 (2012),31-42.
Hamiltonian circuits determining the order of chromosomes. D Dorninger, Discrete Appl. Math. 50D. Dorninger, Hamiltonian circuits determining the order of chromosomes, Discrete Appl. Math., 50 (1994),159-168.
Geometrical constraints on Bennett's predictions of chromosome order. D Dorninger, W Timischl, Heredity. 58D. Dorninger and W. Timischl, Geometrical constraints on Bennett's predictions of chromo- some order, Heredity, 58 (1987),321-325.
Effects of fisheries management on local ecological knowledge. E R Farr, J S Stoll, C M Beitl, Ecol. Soc. 23E. R. Farr, J. S. Stoll, and C. M. Beitl, Effects of fisheries management on local ecological knowledge, Ecol. Soc., 23 (2018).
Characterization of edge-colored complete graphs with properly colored Hamilton paths. J Feng, H.-E Giesen, Y Guo, G Gutin, T Jensen, A Rafiey, J. Graph Theory. 53J. Feng, H.-E. Giesen, Y. Guo, G. Gutin, T. Jensen, and A. Rafiey, Characterization of edge-colored complete graphs with properly colored Hamilton paths, J. Graph Theory, 53 (2006),333-346.
Villarreal-Valdés, Some conditions for the existence of Euler H-trails. H Galeana-Sánchez, R Rojas-Monroy, R Sánchez-López, J , Graphs Combin. 35H. Galeana-Sánchez, R. Rojas-Monroy, R. Sánchez-López, and J. I. Villarreal-Valdés, Some conditions for the existence of Euler H-trails, Graphs Combin, 35 (2019),1197-1208.
H-cycles in H-colored multigraphs. H Galeana-Sánchez, R Rojas-Monroy, R Sánchez-López, J I Villarreal-Valdés, J Imelda, Graphs Combin. 38H. Galeana-Sánchez, R. Rojas-Monroy, R. Sánchez-López, J. I. Villarreal-Valdés, and J. Imelda, H-cycles in H-colored multigraphs, Graphs Combin, 38 (2022),1-20.
H Galeana-Sánchez, R Sánchez-López, H-kernels in infinite digraphs. 29H. Galeana-Sánchez and R. Sánchez-López, H-kernels in infinite digraphs, Graphs Combin, 29 (2013),913-920.
Alternating cycles in edge-partitioned graphs. J W Grossman, R Häggkvist, J. Combin. Theory Ser. B. 34J. W. Grossman and R. Häggkvist, Alternating cycles in edge-partitioned graphs, J. Combin. Theory Ser. B, 34 (1983),77-81.
Almost eulerian compatible spanning circuits in edge-colored graphs. Z Guo, H Broersma, B Li, S Zhang, Discrete Math. 344112174Z. Guo, H. Broersma, B. Li, and S. Zhang, Almost eulerian compatible spanning circuits in edge-colored graphs, Discrete Math., 344 (2020), 112174.
Compatible spanning circuits in edge-colored graphs. Z Guo, B Li, X Li, S Zhang, Discrete Math. 343111908Z. Guo, B. Li, X. Li, and S. Zhang, Compatible spanning circuits in edge-colored graphs, Discrete Math., 343 (2020), 111908.
A note on paths in edge-coloured tournaments. V Linek, B Sands, Ars Combin. 44V. Linek and B. Sands, A note on paths in edge-coloured tournaments, Ars Combin., 44 (1996),225-228.
A Dirac type condition for properly coloured paths and cycles. A Lo, J. Graph Theory. 76A. Lo, A Dirac type condition for properly coloured paths and cycles, J. Graph Theory, 76 (2014),60-87.
Long properly coloured cycles in edge-coloured graphs. A Lo, J. Graph Theory. 90A. Lo, Long properly coloured cycles in edge-coloured graphs, J. Graph Theory, 90 (2019), 416-442.
On channeldiscontinuity-constraint routing in wireless networks. S Sankararaman, A Efrat, S Ramasubramanian, P K Agarwal, Ad hoc networks. 13S. Sankararaman, A. Efrat, S. Ramasubramanian, and P. K. Agarwal, On channel- discontinuity-constraint routing in wireless networks, Ad hoc networks, 13 (2014),153-169.
A multigraph approach to social network analysis. T Shafie, J. Soc. Struct. 16T. Shafie, A multigraph approach to social network analysis, J. Soc. Struct., 16 (2015).
Multiplexity analysis of networks using multigraph representations. T Shafie, D Schoch, Stat. Methods Appl. 30T. Shafie and D. Schoch, Multiplexity analysis of networks using multigraph representations, Stat. Methods Appl., 30 (2021), 1425-1444.
The orderly colored longest path problem-a survey of applications and new algorithms. M Szachniuk, M C De Cola, G Felici, J Blazewicz, RAIRO Oper. Res. 48M. Szachniuk, M. C. De Cola, G. Felici, and J. Blazewicz, The orderly colored longest path problem-a survey of applications and new algorithms, RAIRO Oper. Res., 48 (2014),25-51.
A note on alternating cycles in edge-coloured graphs. A Yeo, J. Combin. Theory Ser. B. 69A. Yeo, A note on alternating cycles in edge-coloured graphs, J. Combin. Theory Ser. B, 69 (1997), 222-225.
| [] |
[
"REMARKS ON RITT OPERATORS, THEIR H ∞ FUNCTIONAL CALCULUS AND ASSOCIATED SQUARE FUNCTION ESTIMATES",
"REMARKS ON RITT OPERATORS, THEIR H ∞ FUNCTIONAL CALCULUS AND ASSOCIATED SQUARE FUNCTION ESTIMATES"
] | [
"Bernhard H Haak "
] | [] | [] | This note deals with the boundedness of the H ∞ functional calculus of Ritt operators T and associated square function estimates. The purpose is to give a shorter, concise and slightly more general approach towards Le Merdy's results [19] on this subject. Date: 7th March 2023. | null | [
"https://export.arxiv.org/pdf/2303.03022v1.pdf"
] | 257,364,971 | 2303.03022 | 704fad9eb4a76c70feffd0068217c0f173097e8e |
REMARKS ON RITT OPERATORS, THEIR H ∞ FUNCTIONAL CALCULUS AND ASSOCIATED SQUARE FUNCTION ESTIMATES
6 Mar 2023
Bernhard H Haak
REMARKS ON RITT OPERATORS, THEIR H ∞ FUNCTIONAL CALCULUS AND ASSOCIATED SQUARE FUNCTION ESTIMATES
6 Mar 2023
This note deals with the boundedness of the H ∞ functional calculus of Ritt operators T and associated square function estimates. The purpose is to give a shorter, concise and slightly more general approach towards Le Merdy's results [19] on this subject. Date: 7th March 2023.
Introduction and main result
Ritt -or Ritt-Tadmor -operators and their functional calculus have attracted some attention in recent years, see for example [1,2,4,5,8,14,17,19,21,24,26]. It is well known that the boundedness of the H ∞ functional calculus of sectorial or strip-type operators is linked to certain square function estimates, see for example [15,18,20], as well as [10,12] for extensive references. A 'discrete' analogue to such results for Ritt operators is stated next.
E ∞ k=1 r k f k X or E ∞ k=1 γ k f k X
where the variables r k (respectively, γ k ) refer to a sequence of independent, identically distributed Rademacher (respectively standard Gaussian) random variables. In the case that X = L p (Ω), and, more generally, the case of Banach spaces of finite cotype, these two random sums are comparable in the sense of a double inequality. Moreover, both expressions are equivalent to (1.1) if X = L p (Ω) with 1 < p < ∞.
The proposed extension in this article can be split up in several specific substatements under different hypotheses on the operators and the Banach space geometry, and are the subject of sections 4-6. A framework that contains all of these results, and allows us to compare our findings with Le Merdy's Theorem 1.1 is the case of a Banach space enjoying Pisier's property (α), as defined below. Our main result can now be phrased as follows: Theorem 1.2. Let X be a Banach space that has Pisier's property (α), and T ∈ B(X) be a bounded operator on X, such that (I − T ) is injective and has dense range. Then the following assertions are equivalent.
(a) The operator T admits a bounded H ∞ (Stolz ω )-functional calculus for some ω ∈ (0, π/2) where Stolz ω is a Stolz type domain, see figure 1 below.
(b) For some (and hence all) m 1 , m 2 ≥ 1, the operator T and its adjoint T ′ both satisfy uniform estimates
E γ k k m1− 1 /2 (Id − T ) m1 T k−1 x 2 X ≤ C m1 x and E γ k k m2− 1 /2 (Id − T ′ ) m2 (T ′ ) k−1 x ′ 2 X ′ ≤ C m2 x ′ X ′ .
Moreover, T is an R-Ritt and hence γ-Ritt operator in this case.
Notice that if T is weakly compact, the mean ergodic theorem [28] shows that X decomposes as
X = ker(Id − T ) ⊕ Im(Id − T ) =: X 0 ⊕ X 1 .
One can then define f (T ) := f (1)Id ⊕ f (T | X1 ) and observe that (Id − T | X1 ) is injective with dense range. This allows to remove the hypothesis of injectivity and dense range of the operator Id−T in Theorem 1.2 at the expense of requiring that X is a reflexive Banach space.
The fact that Theorem 1.1 can be formulated in a context of Banach spaces with property (α) already appears in an unnumbered remark in [19, end of section 7]. The emphasis of our approach is therefore on different proofs that yield a slightly more general result. As an example, in contrast with Theorem 1.1, in our Theorem 1.2 the R-Ritt property is no longer a hypothesis, but a conclusion. Observe that Rboundedness or γ-boundedness, (and hence the R/γ-Ritt property) is usually hard to check in concrete examples: we consider our version an important improvement for this reason.
Our approach also allows a bit more freedom in the choice of the square function, and a sharper domain of the H ∞ -functions (see below for a discussion of different domain types in the literature). Moreover, our proofs rely on the same structural arguments that we use in [9] to explain the link between H ∞ -calculus and (dual) square function estimates for sectorial or strip-type operators. Finally, our approach avoids the 'detour' to the functional calculus of the sectorial operator A = Id−T that Le Merdy [19] uses at some steps in his proofs. Instead, we constantly stay in the "Ritt world".
A Banach space X is said to have Rademacher type p ∈ [1,2], if there exists a constant t p (X) such that, for all N ≥ 1 and all x 1 , . . . ,
x N ∈ X E N n=1 r n x n ≤ t p (X) N n=1 x n p 1 /p
All Banach spaces have type p = 1. We say the the type is non-trivial if p > 1.
Similarly, X has cotype q ≥ 2 if there exists a constant c q (X) such that, for all N ≥ 1 and all
x 1 , . . . , x N ∈ X N n=1 x n q 1 /q ≤ c q (X)E N n=1
r n x n with the obvious modification if q = +∞. All Banach spaces have cotype q = +∞. There are two reasons to consider such spaces in the following. First, since spaces like c 0 , ℓ ∞ have trivial cotype, no Banach space with finite cotype can contain an isomorphic copy of such spaces. The Kwapién -Hoffman Jørgensen theorem [11,16] ensures therefore that on Banach spaces with non-trivial cotype, uniform boundedness (say, in L 2 Ω; X)-norm), and convergence of sequence of random sums We remind the reader of Kahane's contraction principle: for any Banach space, any N ≥ 1 and x 1 , . . . x N ∈ X and any sequence (α n ) n≥1 in R N ,
S N = N n=1 r n x(2.1) E N n=1 α n r n x n ≤ max{|α n | : 1 ≤ n ≤ N } E N n=1 r n x n .
If a similar "contraction" holds for a collection T of operators, instead of scalars, i.e. if T ⊂ B(X; Y ) is such that any N ≥ 1 and x 1 , . . . x N ∈ X and any T 1 , . . . ,
T N ∈ T E N n=1 r n T n x n Y ≤ C E N n=1 r n x n X .
we say that T is R-bounded. Letting N =1 shows that R-bounded sets are bounded in B(X), but the converse is wrong, in general. The corresponding expression with Gaussians instead of Rademachers leads to the notion of γ-boundedness.
Rademacher-or R-boundedness implies γ-boundedness, but in spaces of finite cotype, both notions coincide. We need at some places that
(2.2) T R-bounded ⇒ absco(T ) R-bounded,
a statement that is commonly referred to as "convex hull lemma", see for example [6,12].
Finally, we recall the definition of Pisier's property (α). A Banach space is said to enjoy this property, if double-indexed random sums admit a 'contraction principle': ≤ max |α n,k | : 1 ≤ n ≤ N, 1 ≤ k ≤ L E E n≤N,k≤K r n r k x n,k .
As an example, this will be the case for all separable L p -spaces, Besov spaces or Sobolev spaces. Property (α) implies finite cotype, but not necessarily non-trivial type of the Banach space X (think of ℓ 1 ). We refer to [12, Section 7.5].
Basic definitions of Ritt operators and their functional calculus
Ritt operators are bounded operators on a Banach space X, whose spectrum lies in the closed unit disc and that satisfy Ritt's condition [23] (3.1) ∃K>0 ∀|λ| > 1 :
(λ − 1)R(λ, T ) ≤ K
We say that T is R-Ritt (respectively γ-Ritt), if the set
(3.2) {(λ − 1)R(λ, T ) : |λ| > 1} is R-bounded (respectively γ-bounded).
When we talk about the H ∞ -calculus of an operator, we need to specify the domain of holomorphic functions. Functional calculi on smaller domains are 'better', since a larger set of functions is admissible. By the classical Dunford-Riesz calculus, any bounded operator operator admits a bounded functional calculus on a sufficiently large domain, i.e. there exists a bounded algebra homomorphism Φ : Indeed, evaluating condition (3.1) on the line {Re(λ) = 1} easily shows that the spectrum of T is contained in a symmetric sector around the real axis of angle < π / 2 , centred at z = 1 and open to the left. Moreover, the uniform boundedness of the resolvent on D ∁ ∩ {Re(z) < 1 − ε} shows that the spectrum σ(T ) satisfies also
H ∞ (O) → B(X) f → f (T ) := 1 2πi Γ f (z) R(z, T ) dzσ(T ) ∩ {Re(z) < 1−ε} ⊆ B(0, R) ∩ {Re(z) < 1−ε}
for some radius R ∈ (0, 1). Intersecting both spectral information, one obtains for a suitable r ∈ (0, 1), necessarily σ(T ) ⊆ B r , where B r is convex hull of the singleton z=1 and the ball B(0, r), see figure 1. A more refined analysis [8,Proposition 4.2] has shown that the spectrum of a Ritt operator lies in a much smaller subset, that we will call a Stolz domain: a set of the form
(3.3) Stolz ω = z ∈ D : |1 − z| 1 − |z| < ω
for some ω > 1. We call ω the Stolz type of the domain. It's opening angle is 2 arccos( 1 ω ). We mention that a bounded operator T on a Banach space is a Ritt operator if, and only the following two conditions hold simultaneously:
(pb) ∃C 1 > 0 : ∀n ≥ 0 : T n ≤ C 1 and (dd) ∃C 2 > 0 : ∀n ≥ 0 : kT k−1 (Id − T ) ≤ C 2
The first condition is power-boundedness, the second the discrete derivative condition. While the first one corresponds to the boundedness of a C 0 -semigroup, the latter corresponds to its analyticity in a sector. None of the two conditions can be omitted to characterise Ritt operators. Indeed, a simple 2-dimensional rotation satisfies (pb) but not (dd), whereas a counterexample to the opposite implication is presented in [13]. Similarly, R-Ritt means that the set
{T n : n ≥ 1} ∪ {kT k−1 (Id − T ) : k ≥ 1} is R-bounded in B(X), see [19, Lemma 5.2].
Given a Ritt operator T whose spectrum is contained in a Stolz domain of type ω and given ν > ω we let
E ω,ν := ϕ ∈ H ∞ (Stolz ν ) : ∃ϑ : ω < ϑ < ν such that ϕ(z) 1−z ∈ L 1 (∂Stolz ϑ ) For f ∈ E ω,ν we have the absolutely convergent integral (3.4) Φ(f ) := f (T ) := 1 2πi ∂Stolz ϑ f (z) R(z, T ) dz.
that defines a bounded operator, see also figure 2. We call the map Φ : E ω,ν → B(X) the elementary functional calculus for T . We extend this by an abstract approach outlayed in [9]: suppose that every f ∈ H ∞ (Stolz ν ) admits an elementary function e ∈ E ω,ν such that e(T ) is injective and ef ∈ E ω,ν . We say that e regularises f . We can then define
(3.5) Φ(f ) := f (T ) := e(T ) −1 (ef )(T )
and extend the functional calculus from E ω,ν to H ∞ (Stolz ν ): the operator f (T ) := Φ(f ) will be at least a densely defined, closed operator that may, or may not, be bounded: if so, we say that T has a bounded H ∞ (Stolz ν )-functional calculus.
x y σ(T ) γ Since we will deal with upper, but also "lower" square function estimates, i.e. estimates of the form
E γ k k m1− 1 /2 (Id − T ) m1 T k−1 x 2 X ≥ c x
for T as well as its adjoint, we see that necessarily, (Id−T ) is injective has dense range. This shows that such hypothesis are natural in the given context, in particular in Theorem 1.2. Here is another useful consequence: the Caley transform function e(z) := 1 − z 1 + z provides a nice elementary function that regularises all f ∈ H ∞ simultaneously. Consequently, the construction given in (3.5) above works well if (Id−T ) is injective.
Finally, usual functional calculus constructions require a 'weak' control on the behaviour on approximating sequences : assume that the uniformly bounded sequence (f n ) converges pointwise to f in H ∞ . Then we need
(3.6) Φ(f n )x → Φ(f )x
weakly, for a dense set of points x ∈ X. Limiting ourselves to points of the form x = e(T )y for y ∈ X, and assume injectivity and dense range of (Id−T ) provides such a dense set, on which even strong convergence follows easily from the dominated convergence theorem.
Notice that the equivalent Ritt condition (pb) and (dd) can be reformulated in terms bounded operators acting form X to ℓ ∞ (X) by
x → (T n x) n≥0 and x → (n(Id − T )T n−1 x) n≥1
respectively. When we talk about square function characterisations of the boundedness of the H ∞ -calculus, this is done in a similar way, but using different sequence norms. Given a separable Hilbert space H and a Banach space X, we say that a (bounded) operator Φ :
H → X is γ-radonifying, if the (Gaussian) random series ∞ n=1 γ n Φ(h n ) converges in L 2 (Ω; X), where (h n )
is some orthonormal basis. We refer to [12,25] for more details on the ideal of γ-radonifying operators. In the situation we consider in this note, we shall always let H := ℓ 2 . Then X-valued sequences can then be identified with operators, by defining Φ(e n ) := x n on some orthonormal basis (e n ) of ℓ 2 . The γ-norm of a sequence (x n ) is then defined as the γ-norm of the corresponding operator Φ:
(x n ) n≥1 γ(ℓ2;X) := E n γ n x n 2 X 1 /2 .
In the case that X is a Hilbert space, this is just the ℓ 2 (X)-norm.
We say that our Ritt operator T admits a square function estimate Φ m , if operator
(3.7) Φ m : X → γ(ℓ 2 ; X) x → k m− 1 /2 T k−1 (Id − T ) m x k
is bounded. Along with square functions we consider "dual" square functions estimates. These are formally bounded operators acting
(3.8) Φ * m : X ′ → γ(ℓ 2 ; X) ′ x ′ → k m− 1 /2 (T ′ ) k−1 (Id − T ′ ) m x ′ k
where γ(ℓ 2 ; X) ′ is the dual space of γ(ℓ 2 ; X), defined by trace duality, see for example [12,15,25]:
(3.9) (x ′ n ) n≥1 γ(ℓ2;X) ′ := sup n x ′ n , x n : (x n ) γ(ℓ2;X) ≤ 1
If X has nontrivial type, then γ(ℓ 2 ; X) ′ = γ(ℓ 2 ; X ′ ). This will be the case for reflexive L p -spaces, reflexive Besov or Sobolev spaces, etc.
We will several times in the proofs use a handy observation, that appears as well in the proof of [2, Theorem 3.3].
Lemma 3.1. Let |u| < 1 and n ≥ 0. Then ∞ k=1 k n (1 − u) n+1 u k−1 = 1
Proof of Lemma 3.1. Since |u| < 1, the geometric series j u j can be differentiated term-wise. Therefore, we obtain
∞ k=1 k n u k−1 = 1 n! d du n ∞ j=0 u j = 1 (1 − u) n+1 .
We are now ready to prove different results that, put together, show in particular Theorem 1.2. They are split in three parts. In section 4 we give a new, and simple method to infer square function estimates from the boundedness of the functional calculus. The approach also slightly improves [19,Thm. 7.3] where only the case m 1 = m 2 = 1 is considered. In the following section 5 we show how square function estimates can be used to "upgrade" the Ritt property to γ-Ritt or R-Ritt on Banach spaces with property (α). Finally, in section 6 we close the circle from square functions towards functional calculus. This does not differ in essential points from Le Merdy's ideas developed in [19]: it is a classical "pushing through the square function" technique that relies on the fact that any operator has a bounded functional calculus with respect to square-function norms.
4.
Obtaining square functions estimates from the boundedness of the H ∞ -calculus on Stolz domains
We will use an idea from [9] that in its most simplified version. To stay selfcontained we re-do some of the arguments that will avoid us the more abstract notation from [9]. Let us study the case of square function estimates Φ m from (3.7). The key is to conceive them as the following functional calculus expression
Φ m : X → γ(ℓ 2 ; X) x → (h → [F m (z)|h] (T )x))
where F m is a vector-valued function
(4.1) F m : Stolz ω → ℓ 2 , z → k m− 1 /2 (1 − z) m z k−1 k≥1
.
Since we work with a Banach space X of finite cotype, Gaussian and Rademacher random sums are equivalent. So let (r n ) be a sequence of independent Rademacher variables and (h n ) some sequence in ℓ 2 . Moreover, assume that T has a bounded H ∞ (Stolz ν )-calculus on X, with ν > ω. Then
E N n=1 r n [F m |h n ] (T )x = E N n=1 r n [F m |h n ] (T )x ≤ C H ∞ (T ) x E sup z∈Stolzν N n=1 r n [F m (z)|h n ] ≤ C H ∞ (T ) x sup z∈Stolzν ([F m (z)|h n ]) k ℓ1 . (4.2)
If we let (h n ) an orthonormal basis we arrive precisely at the definition γ-norms (actually, uniform boundedness of these partial sums, and convergence of the full random series are equivalent in Banach spaces not containing c 0 and a fortiori on spaces of finite cotype). Due to the "ideal property" of γ-radonifying operators (cf. e.g. [25, Theorem 6.2]), we have even more freedom, for example to select (h n ) as a Riesz basis (i.e. an isomorphic image of an orthonormal basis).
If we consider the vector-valued function (4.1), the estimate given in (4.2) tells us that we obtain the desired square function estimate Φ m from the boundedness of the functional calculus, if we can pick a Riesz basis (h n ) in a way that the scalar products ([F m (z)|h n ]) n≥1 form an absolutely summable sequence, whose ℓ 1norm is uniformly bounded on Stolz domains. While the square summability of the sequence of these scalar products is not sensitive to the choice of the Riesz-basis (h n ), absolute sums are. Let us illustrate this in the case m = 1. We write then F = F 1 .
Taking scalar products of F (z) against a canonical orthonormal basis (e n ) of ℓ 2 , we obtain [F (z)|e n ] = √ n(1 − z)z n−1 and the ℓ 1 -norm of this sequence equals where Li s (z) are the poly-logarithmic functions. By Wirtinger's theorem [27] (see also, for example [22]) it has the following asymptotics when |z| < 1, z → 1:
(4.3) Li − 1 /2 (z) ∼ Γ( 3 /2)(1 − z) −3/2
as a consequence, the ℓ 1 -sums are not uniformly bounded on Stolz domains. This simply means that h n = e n is a bad choice of a basis.
Let us now devise another basis of ℓ 2 that is better behaved. We will write a := e 2πi/3 , one non-trivial third root of unity, and we let b := i for a forth root. With these, we build the two 5 × 5 matrices
A = 1 a a 2 0 0 1 a 2 a 4 0 0 0 1 b b 2 b 4 0 1 b 2 b 4 b 6 0 1 b 3 b 6 b 9 and D k = diag( 5k+1 5k+j ) j=1..5
Since A is invertible and D k has clearly uniformly bounded condition numbers, also A k = AD k has clearly uniformly bounded condition numbers. By identifying ℓ 2 with 2 C 5 we define the operator A = A k on ℓ 2 and see that it is an isomorphism of ℓ 2 . Consequently, the lines of the infinite block diagonal matrix
A 0 0 0 0 . . . 0 A 1 0 0 . . . 0 0 A 2 0 . . . . . . . . . 0 . . . form a Riesz basis {b 1 , b 2 , b 3 , . . .} of ℓ 2 .
This basis has some nice features that the canonical basis of ℓ 2 does not have. Indeed, we observe that
[F (z)|b 1 ] = √ 1(1 + az + a 2 z 2 )(1 − z) = √ 1(1 − z) 1−z 3 1−az [F (z)|b 2 ] = √ 2(1 + a 2 z + a 4 z 2 )(1 − z)z = √ 2(1 − z)z 1−z 3 1−a 2 z [F (z)|b 3 ] = √ 3(1 + bz + b 2 z 2 + b 3 z 3 )(1 − z)z 2 = √ 3(1 − z)z 2 1−z 4 1−bz [F (z)|b 4 ] = √ 4(1 + b 2 z + b 4 z 2 + b 6 z 3 )(1 − z)z 3 = √ 4(1 − z)z 3 1−z 4 1−b 2 z [F (z)|b 5 ] = √ 5(1 + b 3 z + b 6 z 2 + b 9 z 3 )(1 − z)z 4 = √ 5(1 − z)z 4 1−z 4 1−b 3 z [F (z)|b 6 ] = √ 6(1 + az + a 2 z 2 )(1 − z) 5 = √ 6(1 − z)z 5 1−z 3 1−az [F (z)|b 7 ] = √ 7(1 + a 2 z + a 4 z 2 )(1 − z)z 6 = √ 7(1 − z)z 6 1−z 3
1−a 2 z and so forth: compared to the scalar product of F (z) against a canonical basis vector e n we gain an additional regularising factor (1 − z) in the numerator of each fraction, and "pay" the gain with a pole at some non-trivial root of unity by the denominator -but these poles lie all outside of Stolz domains: they do not harm the uniform boundedness of ℓ 1 -norms for z ∈ Stolz ω . In virtue of the Stolz domain condition (3.3), the growth order of (1 − |z|) −3/2 that we saw in (4.3) will now be (more than) compensated by the factor |1 − z| 2 : as a consequence, our approach outlayed in (4.2) does work when we use the Rieszbasis (b n ) instead of the canonical orthonormal basis (e n ). We obtain the following result.
Proposition 4.1. Let X be a Banach space of finite cotype, and T a Ritt operator of type ω on X. If T has a bounded H ∞ (Stolz ν )-calculus for some ν > ω, then for any m ≥ 1
Φ m : X → γ(ℓ 2 ; X) x → k m− 1 /2 T k−1 (Id − T ) m x k and Φ * m : X ′ → γ(ℓ 2 ; X) ′ x ′ → k m− 1 /2 (T ′ ) k−1 (Id − T ′ ) m x ′ k define bounded (dual) square functions.
Proof. The proof of the case m > 1 for square function estimates is a straightforward modification of the case m = 1 explained above, and therefore omitted. In virtue of the preceding discussion, the square function estimate for Φ m is clear, so we only have to explain the dual square functions. Recall the trace duality from (3.9). It inherits the ideal property from γ-norms, i.e. we may pass from an orthonormal basis to a Riesz basis. For a sequence (x k ) ∈ γ(ℓ 2 ; X) we consider
k [F m (z)|b k ] (T ) ′ x ′ , x k E x ′ , k r k [F m (z)|b k ] (T ) j r j x j ≤ C H ∞ (T ) x ′ sup z∈Stolzν ([F m (z)|b k ]) k ℓ1 E j r j x j ≤ π 2 C H ∞ (T ) x ′ sup z∈Stolzν ([F m (z)|b k ]) k ℓ1 E j γ j x j .
The last inequality does not use finite cotype, but a simple domination of Rademacher sums by Gaussian sums, see [7,Prop. 12.11]. Taking now the supremum over all sequences (x n ) with (x n ) γ ≤ 1 finishes to proof.
'Ritt property' via square function estimates
In this section we extend [19,Theorem 5.3] to Banach spaces with property (α). Often, property (α) is used to improve properties, like 'upgrading' Ritt property to the R-Ritt property. Notice that we go further here: we show that merely (dual) square function estimates for a bounded operator suffice to obtain the R-Ritt property, under the mild geometrical property (α). Recall that R-boundedness and γ-boundedness are equivalent notions on Banach spaces with property (α).
Theorem 5.1. Let T be a bounded operator on a Banach space having property (α) such that Id + T is invertible. Assume that T admits square functions Φ m1 for m 1 ≥ 1 and dual square functions Φ * m2 , where m 2 ≥ 1. Then T is R-Ritt (and γ-Ritt).
Recall that property (α) implies finite cotype, so that Rademacher-and Gaussian averages are equivalent: γ-Ritt can be replaced by R-Ritt in this section.
Proof.
Step 1: T is an γ-power-bounded operator. The main idea is to observe that T n acts as a shift on the square functions of the type Φ m1 , Φ m2 . This gives the idea to "push" the operator T n through the squarefunction to obtain R-boundedness. To this end we need lower square function estimates. Up to some minor modification, these come out of Lemma 3.1 that we read as an approximate identify: Clearly, for any m ≥ 1,
k m− 1 /2 ≈ √ k(k + 1)(k + 2) · · · (k + m − 1)
in the sense of a double inequality with uniform constants for all k > 0. As a consequence of the contraction principle (2.1), and the fact that (Id+T ) and (Id+T ′ ) are isomorphisms acting on X and X ′ respectively, the supposed (dual) square functions for Φ m1 , Φ * m2 imply (dual) square functions for
Φ m1 : X → γ(ℓ 2 ; X) x → √ k(k + 1) · · · (k + m 1 − 1)T k−1 (Id − T ) m1 (Id + T ) −m1 x k and Φ m2 : X → γ(ℓ 2 ; X) x → √ k(k + 1) · · · (k + m 2 − 1)T k−1 (Id − T ) m2 (Id + T ) −m2 x k
The corresponding functional calculus expressions are given by
ϕ m1 (z) = √ k(k + 1) · · · (k + m 1 − 1)z k−1 (1 − z) m1 (1 + z) −m1 and ϕ m2 (z) = √ k(k + 1) · · · (k + m 2 − 1)z k−1 (1 − z) m2 (1 + z) −m2
Now using Lemma 3.1 again, we have
[ ϕ m1 (z)| ϕ m2 (z)] ℓ2 = ∞ k=1 k(k + 1) · · · (k + m − 1)(1 − z) m z 2k−2 (1 + z) −m = m!
By Theorem A.2, this implies lower square function estimates for Φ m1 and hence for Φ m1 . We are ready to prove γ-boundedness. Let N ≥ 1 and x 1 , . . . , x N ∈ X be given. Then property (α) allows to use the contraction principle for double-indexed random sums in the following estimate:
E N n=1 γ n T n x n (lower sqf) E E N n=1 ∞ k=1 γ n γ k k m1− 1 /2 (I − T ) m1 T n+k−1 x n (property (α)) E E N n=1 ∞ k=1 γ n γ k (k + n) m1− 1 /2 (I − T ) m1 T n+k−1 x n (j = k+n) = E E N n=1 j≥n γ n γ j j m1− 1 /2 (I − T ) m1 T j−1 x n (property (α)) ≤ E E N n=1 j≥1 γ n γ j j m1− 1 /2 (I − T ) m1 T j−1 x n (upper sqf) ≤ E n γ n x n .
We obtain that {T n : n ≥ 0} is γ-bounded. The discrete derivatives are obtained in two further steps.
Step 2: Obtaining a special γ-bounded set We start with the following formula that relies on the geometric series for z 2 :
(2k − 2) m−1 (1 − z) m−1 z 2k−2 = 2(1 + z) ∞ j=k (k − 1) m−1 × (1 − z) m z 2j−2 = 2(1 + z) ∞ j=k k−1 j m−1 × (j m1− 1 /2 (1 − z) m1 z k−1 ) × (j m2− 1 /2 (1 − z) m2 z k−1 )
The front factor 2(Id + T ) is an isomorphism and therefore unimportant to us. The idea is to conceive this sum as an 'integral representation' with the multipliers
m k : j → ½ j≥k k−1 j m−1 ,
and to appeal to Theorem A.1. Clearly, |m k (j)| ≤ 1. By the assumed (dual) squarefunction estimates and the fact that X has property (α), the integral representation theorem A.1 yields that
k m−1 (Id − T ) m−1 T 2k−2 : k ≥ 1 γ Φ m1 Φ * m2
A similar calculation can be done for odd powers of T , by pulling out the factor z in the representation formula and reducing it to an even power. Summarising,
k m−1 (Id − T ) m−1 T k−1 : k ≥ 1 γ Φ m1 Φ * m2 .
Step 3: Discrete derivatives form a γ-bounded set As a consequence of lemma 3.1 for the choice n = m − 2, we have
n(1 − T )T n = ∞ k=n n k 2 × k 2 (1 − T ) 2 T k if m = 3 ∞ k=n n(k−n+1) k 3 × k 3 (1 − T ) 3 T k if m = 4 ∞ k=n n cm−3(k−n+1) (m−4)! k m−1 × k m−1 (Id − T ) m−1 T k if m ≥ 5.
The scalar multipliers satisfy in each case, uniformly over all n ≥ 1, By the convex-hull property (2.2), and the previous step,
n(1 − T )T n : n ≥ 1 γ Φ γ (Φ m1 ) Φ γ ′ (Φ * m2 )
follows. The discrete derivatives of (T n ) form therefore a γ-bounded set. Together with our findings in Step 1, we proved that
{T n , n(Id − T )T n−1 : n ≥ 1}
is R-bounded, which by [19,Lemma 5.2] is equivalent to T being R-Ritt or γ-Ritt.
We stress the importance of (dual) square function estimates for these arguments: of course a bounded operator needs not to be Ritt. But even if we assumed the Ritt property, instead of the (dual) square function estimates, we could not conclude: indeed, as stated in [3,Corollary 1.3.2] and the subsequent remarks, any Banach space X that admits an unconditional basis while not being isomorphic to a Hilbert space allows to construct Ritt operators which are not R-Ritt. This shows that our hypotheses of square function estimates cannot be weakened easily.
Obtaining a bounded H ∞ -calculus on Stolz domains via square functions estimates
In this section we give a similar proof to Le Merdy's result [19,Theorem 7.3]. This allows to strengthen step 1 of the proof of Theorem 5.1 and push f (T ) through the square function. We re-formulate the proof, although being very close for the sake of completeness, but also since the domain of the underlying functional calculus (i.e. Stolz ω instead of B r ) is different. Theorem 6.1. Let X be a Banach space and an R-Ritt operator of type ω on X that admits (dual) square function estimates Φ m1 , Φ * m2 . Then T has a bounded H ∞ (Stolz θ )-calculus for all θ > ω.
Observe that if X has Pisier's property (α), the R-Ritt condition is not needed thanks to Theorem 5.1.
Proof. Let f ∈ H ∞ (Stolz ν ) and where ν > ω and fix some ω < ϑ < ν. We start applying Lemma 3.1 for n = m + 1 and u = z 3 with |z| < 1: multiplying with f (z), we obtain the representation formula
(6.1) f (z) = ∞ k=1 m k (z) × ϕ m1 (k, z) × ϕ m2 (k, z). where m k (z) = 1 (m+1)! f (z)(1 + z + z 2 ) m+1 m j=1 k+j k × k(1 − z)z k−1
are elementary functions. If we had assumed X to have property (α), then
(6.2) m k (T ) : k ≥ 1 γ C ϑ f ∞ .
follows right away from the second part of Theorem A.1. But the result is true on any Banach spaces, as we will show below. Let us first put into light its usefulness:
| f (T )x, x ′ | = ∞ k=1 [ϕ m2 (z)m k (z)ϕ m1 (z)|e k ] (T )x, x ′ = E ∞ n=1 γ n [ϕ m2 (z)|e n ] (T ) ∞ k=1 γ k m k (T ) [ϕ m1 (z)|e k ] (T )x , x ′ ≤ E ∞ k=1 γ k m k (T ) [ϕ m1 (z)|e k ] (T )x 2 X 1 /2 E ∞ n=1 γ n [ϕ m2 (z)|e n ] (T ) ′ x ′ 2 X ′ 1 /2 (6.2) ≤ C ϑ f ∞ Φ m1 x γ Φ * m2 x ′ γ ′ ≤ C ϑ f ∞ Φ m1 Φ * m2
x X x ′ X ′ . and taking the supremum over all x ′ ≤ 1 and x ≤ 1 the result follows.
It remains to prove the claim. First, the scalar factors
1 (m+1)! m j=1 k+j k
in m k can be removed by the contraction principle (2.1). Moreover, Id + T + T 2 is an isomorphism by the spectral mapping theorem. This means that we may focus on the operators {f (T ) k(Id − T )T k−1 : k ≥ 1}. To treat the R-boundedness of this set, a functional calculus argument is used: the boundary of Γ := ∂Stolz ϑ is parameterised by
(6.3) γ(t) := 1 − r(t)e it r(t) = 2ϑ ϑ 2 −1 (ϑ cos(t) − 1), where t ∈ [−C ϑ , C ϑ ] with C ϑ := arccos( 1 ϑ ). Since f (T ) k(Id − T )T k−1 x = 1 2πi Γ kz k−1 f (z)(Id − T )R(λ, T )xdz
it is sufficient to verify that $z\mapsto k z^{k-1} f(z)$ is absolutely integrable on Γ, satisfying $\|k z^{k-1} f(z)\|_{L^{1}(\Gamma)}\lesssim \|f\|_{\infty}$, to conclude by the convex-hull lemma that
$$\big\{\,f(T)\,k(\mathrm{Id}-T)T^{k-1}:\ k\ge 1\,\big\}_{R}\ \lesssim\ \|f\|_{H^{\infty}(\mathrm{Stolz}_{\nu})}.$$
Similar arguments appear in Le Merdy's paper [19], which refers essentially to Vitse [26]. Her estimate is carried out for the Stolz-type domain B_ϑ instead of Stolz_ϑ, so we provide full details for our setting here, even if there are some similarities. First notice that
$$|\gamma(t)| = \frac{1+\vartheta^{2}-2\vartheta\cos(t)}{\vartheta^{2}-1} = 1-\frac{r(t)}{\vartheta} \qquad\text{and}\qquad |\gamma'(t)| = \frac{2\vartheta}{\vartheta^{2}-1}\sqrt{1+\vartheta^{2}-2\vartheta\cos(t)} = \frac{2\vartheta}{\sqrt{\vartheta^{2}-1}}\,|\gamma(t)|^{1/2}.$$
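For convenience, here is a short algebraic check of the first identity above (an elementary verification that is not part of the original argument):

```latex
% With r(t) = \frac{2\vartheta}{\vartheta^2-1}(\vartheta\cos t-1) and \gamma(t)=1-r(t)e^{it}:
\begin{align*}
|\gamma(t)|^{2} &= 1-2r(t)\cos t+r(t)^{2}
   = \Big(\frac{1+\vartheta^{2}-2\vartheta\cos t}{\vartheta^{2}-1}\Big)^{2},\\
1-\frac{r(t)}{\vartheta} &= \frac{(\vartheta^{2}-1)-2(\vartheta\cos t-1)}{\vartheta^{2}-1}
   = \frac{1+\vartheta^{2}-2\vartheta\cos t}{\vartheta^{2}-1}\ \ge 0
   \quad\text{for } t\in[-C_{\vartheta},C_{\vartheta}],
\end{align*}
```

so taking square roots gives the stated formula for |γ(t)|, and the expression for |γ'(t)| follows from $|\gamma'(t)|^{2}=r'(t)^{2}+r(t)^{2}$.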
Therefore, using that c ≤ |γ(t)| ≤ 1, we find that
$$\int_{\Gamma} k|z|^{k-1}|f(z)|\,|dz| \;\lesssim_{\vartheta}\; \|f\|_{H^{\infty}(\mathrm{Stolz}_{\nu})}\int_{-C_{\vartheta}}^{C_{\vartheta}} k|\gamma(t)|^{k-1/2}\,dt \;\lesssim_{\vartheta}\; \|f\|_{H^{\infty}(\mathrm{Stolz}_{\nu})}\int_{-C_{\vartheta}}^{C_{\vartheta}} k|\gamma(t)|^{k-1}\,dt.$$
We split this integral into two parts: the first part, where t ∈ (−ε, ε), stays away from z = 1; the second part, |t| ≥ ε, contains the point z = 1. We fix some ε > 0, taken sufficiently small to satisfy ϑcos(ε) > 1. In the first region we have |γ(t)| ≤ q < 1, and so $\sup_{k\ge 1} k q^{k}\le \sup_{t>0} t q^{t}=\big(e\,|\ln(q)|\big)^{-1}$ shows
$$\int_{|t|<\varepsilon}|f(\gamma(t))|\,k|\gamma(t)|^{k-1}\,dt \;\le\; \|f\|_{\infty}\,|\Gamma|\,q^{-1}\big(e\,|\ln(q)|\big)^{-1}\;\lesssim\;\|f\|_{\infty}.$$
When |t| ≥ ε, we use a change of variables: since $r'(t)=\frac{2\vartheta^{2}}{\vartheta^{2}-1}\sin(t)$,
$$\int_{\varepsilon<|t|<C_{\vartheta}}|f(\gamma(t))|\,k|\gamma(t)|^{k-1}\,dt \;\lesssim_{\vartheta}\; \|f\|_{\infty}\int_{\varepsilon<|t|<C_{\vartheta}} k\Big(1-\frac{r(t)}{\vartheta}\Big)^{k-1}dt \;\lesssim_{\vartheta}\; \|f\|_{\infty}\,\frac{1}{\sin(\varepsilon)}\int_{\varepsilon<|t|<C_{\vartheta}} k\sin(t)\Big(1-\frac{r(t)}{\vartheta}\Big)^{k-1}dt$$
$$\lesssim_{\vartheta}\; \|f\|_{\infty}\,\frac{1}{\sin(\varepsilon)}\int_{0}^{\frac{2}{\vartheta^{2}-1}(\vartheta\cos(\varepsilon)-1)} k(1-s)^{k-1}\,ds \;\le\; \|f\|_{\infty}\,\frac{1}{\sin(\varepsilon)}\Big[-(1-s)^{k}\Big]_{s=0}^{s=\frac{2}{\vartheta^{2}-1}(\vartheta\cos(\varepsilon)-1)} \;\le\; \frac{1}{\sin(\varepsilon)}\,\|f\|_{\infty},$$
and the proof is complete: we refrain from optimising over all possible ε > 0 in both inequalities.
Open question: For a given Ritt operator, the link between its H ∞ functional calculus and its associated square function estimates resembles the analogous link for sectorial (or strip type) operators A. In fact, the proofs we present are built up using the same abstract principles. For both, Ritt and sectorial operators on Banach spaces of finite cotype, the functional calculus (if it is bounded) can be extended to so-called "bounded square functional" (or "quadratic") H ∞ -calculus, verifying
(6.4) $\mathbb{E}\Big\|\sum_{n}\gamma_{n}f_{n}(T)x\Big\| \;\lesssim\; \sup_{z}\Big(\sum_{n}|f_{n}(z)|^{2}\Big)^{1/2}\,\|x\|.$
In the case of sectorial (or strip type) operators, the square functional calculus can be deduced directly from Theorem A.1 via integral representations that make direct use of "standard" square functions. In the case of Ritt operators however, this does not seem to work. The integral representation analogous to the sectorial situation would be
(6.5) $f(z)=\sum_{k} a_{k}\,\varphi_{m_1}(k,z)\,\varphi_{m_2}(k,z)$
where the coefficients a_k do not depend on the variable z (unlike the formula (6.1) above). Despite this lack, the validity of (6.4) was shown in Le Merdy's paper [19]. Instead of using the assumed (dual) square functions of the functions ϕ_m(z), he rather bootstraps the square functional calculus from the (scalar) H∞-calculus by means of the Franks-McIntosh decomposition, which produces a kind of "brute force square functions" for a collection of functions which are not explicitly known. Our integral representation Theorem A.1 can then be applied to the Franks-McIntosh decomposition.
From this observation two questions emerge: first, we would like to identify the (non-separable) subspaces $S_{m_1,m_2}:=\big\{\sum_{k}a_{k}\varphi_{m_1}(k,z)\varphi_{m_2}(k,z):\ (a_{k})\in\ell^{\infty}\big\}$ and their sum in H∞(Stolz_ω). Second, we expect that other, explicit square functions for Ritt operators are to be found that will allow, amongst others, general representation formulas. This question might be linked to the conformal map of Stolz regions to the unit disc, which seems not to be explicitly known either.
Appendix A. Tools from abstract functional calculus theory
We cite in this section two abstract results borrowed from Haak-Haase [9]. We base our results on them to emphasise that the H∞-calculus theory of Ritt operators, as well as that of sectorial or strip type operators, can be explained with a unified abstract approach to holomorphic functional calculus. For both theorems we require that (H∞(O), Φ) be a functional calculus on the Banach space X satisfying the abstract construction principles laid out in (3.5) and (3.6): in particular they do apply to Ritt operators T, for which (Id − T) is injective and has dense range. If K is finite-dimensional or if X has finite cotype and x ∈ D(Φ_γ(g)), then x ∈ D(Φ_γ(u)) and
(A.2) $\|\Phi_{\gamma}(u)x\|_{\gamma} \le C\,\|m\|_{L^{\infty}(\Omega;K')}\,\|\Phi_{\gamma'}(f)\|\,\|\Phi_{\gamma}(g)x\|_{\gamma}$,
where C depends only on dim(K) or the cotype (constant) of X, respectively.
The following strengthening holds true: if X additionally has property (α), then the set $\{\Phi(u_{m}):\ m\in L^{\infty}(\Omega,\mu),\ \|m\|_{\infty}\le 1\}$ is γ-bounded, with bound
$$\big\{\Phi(u_{m}):\ \|m\|_{\infty}\le 1\big\}_{\gamma} \;\le\; C^{+}C^{-}\,\|\Phi_{\gamma}(g)\|_{\gamma}\,\|\Phi_{\gamma'}(f)\|_{\gamma'},$$
where $C^{+}C^{-}$ is the condition number of the isomorphism $\gamma(\ell^{2}\oplus\ell^{2};X)\simeq\gamma(\ell^{2};\gamma(\ell^{2};X))$.
The following result from [9] gives an abstract tool to infer lower square function estimates for a function g from upper ones of a function f under an "identity condition" on the pointwise scalar products [f(z)|g(z)]_H, namely the equivalence $\|x\|_{X}\simeq\|\Phi_{\gamma}(g)x\|_{\gamma}$ for all x ∈ X.
Theorem 1.1 (Le Merdy). Let 1 < p < ∞ and T : L^p(Ω) → L^p(Ω) be a Ritt operator. Then the following assertions are equivalent.
(a) The operator T admits a bounded H∞(B_γ)-functional calculus for some γ ∈ (0, π/2), where B_γ is a certain Stolz type domain within the complex unit circle (see Figure 1 below).
(b) The operator T as well as its adjoint T* : L^q(Ω) → L^q(Ω) both satisfy uniform square function estimates of the form $\big\|\big(\sum_{k=1}^{\infty}k\,|(\mathrm{Id}-T)T^{k-1}x|^{2}\big)^{1/2}\big\|_{p}\lesssim\|x\|_{p}$, and analogously with $(\mathrm{Id}-T^{*})(T^{*})^{k-1}$.
We will formulate Le Merdy's result for Banach spaces having certain geometric conditions (see section 2 for precise definitions). The basic idea is to replace the
Figure 1. Two definitions of Stolz domains in the literature: as an example, for α = 2, the region B_r in light grey, and the corresponding smaller Stolz region Stolz_α in dark grey. Precise definitions of both regions will be given in section 3 below.
for all complex domains O that contain the spectrum of T. The point of Ritt operators is that the resolvent condition (3.1) implies further spectral properties that allow one to narrow down the domain O (initially a neighbourhood of D) to smaller domains that are subsets of D ∪ {1}, where D = {z ∈ C : |z| < 1}.
Figure 2. Spectrum and integration path for the elementary functional calculus of Ritt operators.
$\sum_{n}\big|[F(z)|e_{n}]\big| = |1-z|\,\mathrm{Li}_{-\frac{1}{2}}(|z|)$
Theorem A.1. Let K be a Hilbert space and H := L²(Ω) for some measure space (Ω, µ). Suppose further that f, g ∈ H∞(O; H) are such that the square function associated with g, Φ_γ(g) : X → γ(H; X), as well as the dual square function associated with f, Φ_{γ'}(f) : X' → γ'(H; X'), are bounded. Consider, for m ∈ L∞(Ω; K'), the function u ∈ H∞(O; K') defined by $u(z)=\int_{\Omega} m(t)\cdot f(t,z)\,g(t,z)\,\mu(dt)\in K'$ (z ∈ O).
Theorem A.2. Suppose that f ∈ H∞(O; H) and g ∈ H∞(O; H') are such that Φ_γ(g) and Φ_{γ'}(f) are bounded operators. If [f(z)|g(z)]_H = 1 for all z ∈ O, one has the norm equivalence $\|x\|_{X}\simeq\|\Phi_{\gamma}(g)x\|_{\gamma}$ for all x ∈ X.
... are equivalent. Second, on Banach spaces with finite cotype, Gaussian and Rademacher random sums $\mathbb{E}\big\|\sum_{n} r_{n}x_{n}\big\|_{X}$ and $\mathbb{E}\big\|\sum_{n}\gamma_{n}x_{n}\big\|_{X}$ are equivalent with constants depending only on the cotype constant, see for example [12, Theorem 8.1.3].
Acknowledgement: the author would like to thank Markus Haase for many helpful discussions on functional calculi and square function estimates.
Basic definitions from the geometry of Banach spaces
We recall some standard terminology from the geometry of Banach spaces, cf. [7, Chapter 11] or [12, Chapter 7].
[1] C. Arhancet, Square functions for Ritt operators on noncommutative L^p-spaces, Math. Scand. 113 (2013), no. 2, 292-319.
[2] C. Arhancet and C. Le Merdy, Dilation of Ritt operators on L^p-spaces, Israel J. Math. 201 (2014), no. 1, 373-414.
[3] L. Arnold, γ-bounded c_0-semigroups and power γ-bounded operators: characterizations and functional calculi, Ph.D. thesis, Université de Besançon, France, 2022, https://tel.archives-ouvertes.fr/tel-03545380/document.
[4] O. Arrigoni and C. Le Merdy, H∞-functional calculus for commuting families of Ritt operators and sectorial operators, Oper. Matrices 13 (2019), no. 4, 1055-1090.
[5] I. Assani, R. S. Hallyburton, S. McMahon, S. Schmidt, and C. Schoone, New estimates on the Brunel operator, Colloq. Math. 169 (2022), no. 1, 117-139.
[6] P. Clément, B. de Pagter, F. A. Sukochev, and H. Witvliet, Schauder decomposition and multiplier theorems, Studia Math. 138 (2000), no. 2, 135-163.
[7] J. Diestel, H. Jarchow, and A. Tonge, Absolutely summing operators, Cambridge Studies in Advanced Mathematics, vol. 43, Cambridge University Press, Cambridge, 1995.
[8] A. Gomilko and Y. Tomilov, On discrete subordination of power bounded and Ritt operators, Indiana Univ. Math. J. 67 (2018), no. 2, 781-829.
[9] B. H. Haak and M. Haase, Square function estimates and functional calculus, in preparation.
[10] M. Haase, The functional calculus for sectorial operators, Operator Theory: Advances and Applications, vol. 169, Birkhäuser Verlag, 2006.
[11] J. Hoffmann-Jørgensen, Sums of independent Banach space valued random variables, Studia Math. 52 (1974), 159-186.
[12] T. Hytönen, J. van Neerven, M. Veraar, and L. Weis, Analysis in Banach spaces. Vol. II, Ergebnisse der Mathematik und ihrer Grenzgebiete. 3. Folge, vol. 67, Springer, Cham, 2017.
[13] N. Kalton, S. Montgomery-Smith, K. Oleszkiewicz, and Y. Tomilov, Power-bounded operators and related norm estimates, J. Lond. Math. Soc. (2) 70 (2004), no. 2, 463-478.
[14] N. J. Kalton and P. Portal, Remarks on ℓ^1 and ℓ^∞-maximal regularity for power-bounded operators, J. Aust. Math. Soc. 84 (2008), no. 3, 345-365.
[15] N. J. Kalton and L. Weis, The H∞-functional calculus and square function estimates, in: Selecta. Volume 1 (F. Gesztesy, G. Godefroy, L. Grafakos, and I. Verbitsky, eds.), Birkhäuser/Springer, Basel, 2016, pp. 716-764.
[16] S. Kwapień, On Banach spaces containing c_0, Studia Math. 52 (1974), 187-188. A supplement to the paper by J. Hoffmann-Jørgensen, "Sums of independent Banach space valued random variables" (Studia Math. 52 (1974), 159-186).
[17] F. Lancien and C. Le Merdy, On functional calculus properties of Ritt operators, Proc. Roy. Soc. Edinburgh Sect. A 145 (2015), no. 6, 1239-1250.
[18] C. Le Merdy, On square functions associated to sectorial operators, Bull. Soc. Math. France 132 (2004), no. 1, 137-156.
[19] C. Le Merdy, H∞ functional calculus and square function estimates for Ritt operators, Rev. Mat. Iberoam. 30 (2014), no. 4, 1149-1190.
[20] A. McIntosh, Operators which have an H∞ functional calculus, in: Miniconference on Operator Theory and Partial Differential Equations (North Ryde, 1986), Proc. Centre Math. Anal. Austral. Nat. Univ., vol. 14, Austral. Nat. Univ., Canberra, 1986, pp. 210-231.
[21] P. Mohanty and S. K. Ray, On joint functional calculus for Ritt operators, Integral Equations Oper. Theory 91 (2019), no. 2, Id/No 14.
[22] L. M. Navas, F. J. Ruiz, and J. L. Varona, Some functional relations derived from the Lindelöf-Wirtinger expansion of the Lerch transcendent function, Math. Comput. 84 (2015), no. 292, 803-813.
[23] R. K. Ritt, A condition that lim_{n→∞} n^{-1} T^n = 0, Proc. Am. Math. Soc. 4 (1953), 898-899.
[24] F. L. Schwenninger, Functional calculus estimates for Tadmor-Ritt operators, J. Math. Anal. Appl. 439 (2016), no. 1, 103-124.
[25] J. van Neerven, γ-radonifying operators - a survey, in: The AMSI-ANU Workshop on Spectral Theory and Harmonic Analysis, Proc. Centre Math. Appl. Austral. Nat. Univ., vol. 44, Austral. Nat. Univ., Canberra, 2010, pp. 1-61.
[26] P. Vitse, A band limited and Besov class functional calculus for Tadmor-Ritt operators, Arch. Math. 85 (2005), no. 4, 374-385.
[27] W. Wirtinger, Über eine besondere Dirichletsche Reihe, J. Reine Angew. Math. 129 (1905), 214-219.
[28] K. Yosida, Mean ergodic theorem in Banach spaces, Proc. Imp. Acad. Japan 14 (1938), 292-294.
| [] |
[
"Robust Sum-Rate Maximization in Transmissive RMS Transceiver-Enabled SWIPT Networks",
"Robust Sum-Rate Maximization in Transmissive RMS Transceiver-Enabled SWIPT Networks"
] | [
"Zhendong Li ",
"Senior Member, IEEEWen Chen ",
"Ziheng Zhang ",
"Senior Member, IEEEQingqing Wu ",
"Huanqing Cao ",
"Senior Member, IEEEJun Li "
] | [] | [] | In this paper, we propose a state-of-the-art downlink communication transceiver design for transmissive reconfigurable metasurface (RMS)-enabled simultaneous wireless information and power transfer (SWIPT) networks. Specifically, a feed antenna is deployed in the transmissive RMS-based transceiver, which can be used to implement beamforming. According to the relationship between wavelength and propagation distance, the spatial propagation models of plane and spherical waves are built. Then, in the case of imperfect channel state information (CSI), we formulate a robust system sum-rate maximization problem that jointly optimizes RMS transmissive coefficient, transmit power allocation, and power splitting ratio design while taking account of the non-linear energy harvesting model and outage probability criterion. Since the coupling of optimization variables, the whole optimization problem is non-convex and cannot be solved directly. Therefore, the alternating optimization (AO) framework is implemented to decompose the non-convex original problem. In detail, the whole problem is divided into three sub-problems to solve. For the non-convexity of the objective function, successive convex approximation (SCA) is used to transform it, and penalty function method and difference-of-convex (DC) programming are applied to deal with the non-convex constraints. Finally, we alternately solve the three sub-problems until the entire optimization problem converges. Numerical results show that our proposed algorithm has convergence and better performance than other benchmark algorithms.Monte Carlo method[18]. All of the above works demonstrate that the performance of the PS scheme is better than that of the TS scheme. However, PS-based SWIPT can solve the energy shortage in IoT devices, but the energy consumption and cost of BSs also need to be considered urgently.Considering the requirements to reduce the power consumption and cost of the BS, the recently proposed reconfigurable metasurface (RMS) may be a potential solution. RMS also known as reconfigurable intelligent surface (RIS), is an advanced technology that makes it possible to reconfigure wireless channels in wireless communications networks. RMS contains many passive elements with adjustable phase and amplitude. Since RMS is a passive communication equipment, it can only reflect or transmit signal and does not perform signal processing. RMS has the characteristics of low cost and easy deployment and is an environment-friendly communication device [19]-[21]. Because of the above advantages of RMS, it has been widely studied in both academia and industry. Specifically, depending on the medium material, RMS is mainly divided into three types: reflective RMS [22]-[25], transmissive RMS [26], [27] and simultaneously transmitting and reflecting (STAR) RMS [28], [29]. For reflective RMS, it is also called intelligent reflecting surface (IRS) and used to improve the energy efficiency and spectral efficiency of communication networks, and RMS can make the system obtain obvious performance gains in the main communication scenarios. Zhang et al. and Yang et al. maximized the communication capacity of the IRS-assisted system in MIMO systems, respectively [22]. Yang et al. applied IRS to physical layer security to maximize the secret rate [24]. For the transmissive RMS, it can solve the problem of blind coverage in the communication networks. Zeng et al. 
evaluated the performance of the downlink RIS-assisted communication system and summarized the selection of the optimal working mode of RIS for a specific user location [26]. Zhang et al.proposed an intelligent omni-surface communication system, where transmissive elements adjust the phase of the received signal to improve network coverage[27]. While STAR RMS can split the incident signal into transmitted and reflected signals, helping to achieve full spatial coverage on both sides of the surface. Wu et al. studied the problem of resource allocation in STAR RMS-assisted multi-carrier communication networks[29]. In the above researches, RMS is used as a communication auxiliary device for channel reconstruction and performance boost in two modes.Furthermore, RMS can also be used as a transmitter, which is a very promising research direction. Tang et al. implemented real-time communication of quadrature amplitude modulation (QAM)-MIMO by using reflective RMS and verified the theoretical model[30]. In terms of transmitter design, transmissive RMS has better performance than reflective RMS, which is mainly because of the following two reasons [31]-[34]. One of the reasons is that when RMS works in the reflective mode, the user and the feed antenna are located on the same side of the RMS, which makes the incident and the reflected electromagnetic (EM) waves to interfere with each other. Another reason is that the transmissive RMS transceiver can be designed with higher aperture efficiency | 10.1109/jiot.2022.3228868 | [
"https://export.arxiv.org/pdf/2212.05288v1.pdf"
] | 254,564,609 | 2212.05288 | a44e52f2009d06ef0e02ae746ea1e90a15b50717 |
Robust Sum-Rate Maximization in Transmissive RMS Transceiver-Enabled SWIPT Networks
Zhendong Li
Senior Member, IEEEWen Chen
Ziheng Zhang
Senior Member, IEEEQingqing Wu
Huanqing Cao
Senior Member, IEEEJun Li
Robust Sum-Rate Maximization in Transmissive RMS Transceiver-Enabled SWIPT Networks
Index Terms-RMS, SWIPT, imperfect CSI, non-linear energy harvesting, outage probability criterion.
In this paper, we propose a state-of-the-art downlink communication transceiver design for transmissive reconfigurable metasurface (RMS)-enabled simultaneous wireless information and power transfer (SWIPT) networks. Specifically, a feed antenna is deployed in the transmissive RMS-based transceiver, which can be used to implement beamforming. According to the relationship between wavelength and propagation distance, the spatial propagation models of plane and spherical waves are built. Then, in the case of imperfect channel state information (CSI), we formulate a robust system sum-rate maximization problem that jointly optimizes RMS transmissive coefficient, transmit power allocation, and power splitting ratio design while taking account of the non-linear energy harvesting model and outage probability criterion. Since the coupling of optimization variables, the whole optimization problem is non-convex and cannot be solved directly. Therefore, the alternating optimization (AO) framework is implemented to decompose the non-convex original problem. In detail, the whole problem is divided into three sub-problems to solve. For the non-convexity of the objective function, successive convex approximation (SCA) is used to transform it, and penalty function method and difference-of-convex (DC) programming are applied to deal with the non-convex constraints. Finally, we alternately solve the three sub-problems until the entire optimization problem converges. Numerical results show that our proposed algorithm has convergence and better performance than other benchmark algorithms.
I. INTRODUCTION
T HE rapid development of wireless communication enables the Internet-of-Things (IoT) to be utilized in more scenarios, e.g., smart industry, smart medical and the Internet of vehicles [1], [2]. Based on the relevant data, it is inferred that the number of IoT devices worldwide will rise to 14.7 billion by 2030 in the future IoT networks [3]. However, IoT devices are usually small in size, which makes the battery capacity often limited and have difficulty in meeting the energy requirements of rich applications in IoT. Therefore, energy management for large-scale IoT devices is a critical issue.
Meanwhile, to solve the path loss problem caused by highfrequency communication and ensure that the coverage is not reduced, the number of 5G base stations (BSs) is greatly increased compared with 4G BSs [4]. In addition, massive multiple-input multiple-output (MIMO) requires numerous radio frequency (RF) links to provide support, which will lead to a surge in power consumption and cost. Hence, it is urgent to seek a novel transceiver architecture with low power consumption and low cost.
As a promising technique for energy harvesting in the IoT, wireless energy transmission (WET) can convert the received RF signal into electrical energy, which can be well applied to solve the energy management of large-scale IoT devices [5]. Simultaneous wireless information and power transfer (SWIPT) is a valid mode in WET. Specifically, in SWIPT, the user divides the received RF signal into an information decoder (ID) and an energy harvester (EH) through power splitting (PS) or time switching (TS) [6]-[10]. With MIMO technology, SWIPT can also be implemented through antenna switching or spatial switching. In antenna switching, each antenna element is switched dynamically between decoding/rectifying in the antenna domain [11]. In spatial switching, information or energy is transmitted through eigenchannels obtained by eigenvalue decomposition of the MIMO channel matrix. According to the above-mentioned implementation technology of SWIPT, there have been many studies on the integration of SWIPT into existing communication technology [12]-[14]. Power splitting factor and signal autocorrelation matrix are designed jointly to maximize the power harvested in the MIMO channel [12]. Buckley et al. proposed an energy receiving architecture under an orthogonal frequency division multiplexing (OFDM) system [13]. Under this architecture, the user performs energy harvesting and storage from the cyclic prefix of the signal. SWIPT in a non-orthogonal multiple access (NOMA) network was studied in [14], where the authors consider the energy harvesting constraint and the quality-of-service (QoS) requirement of each user and minimize the BS transmit power. For the various implementations of SWIPT mentioned above, there are also studies comparing these implementations in specific scenarios, especially PS and TS [15]-[17]. The authors compare the attainable rate-energy trade-off in SWIPT-based communication systems for the multiple-input single-output (MISO) channel [15] and the MIMO channel [16]. Zhou et al. considered the joint optimization of resource allocation and power splitting in the OFDM system [17]. Jiang et al. approximately obtained the optimal solution to the probability of information and energy coverage for UAVs assisting SWIPT networks and verified it with the Monte Carlo method [18]. All of the above works demonstrate that the performance of the PS scheme is better than that of the TS scheme. However, PS-based SWIPT can solve the energy shortage in IoT devices, but the energy consumption and cost of BSs also need to be considered urgently.
Considering the requirements to reduce the power consumption and cost of the BS, the recently proposed reconfigurable metasurface (RMS) may be a potential solution. RMS, also known as reconfigurable intelligent surface (RIS), is an advanced technology that makes it possible to reconfigure wireless channels in wireless communications networks. RMS contains many passive elements with adjustable phase and amplitude. Since the RMS is a passive communication device, it can only reflect or transmit signals and does not perform signal processing. RMS has the characteristics of low cost and easy deployment and is an environment-friendly communication device [19]-[21]. Because of the above advantages of RMS, it has been widely studied in both academia and industry. Specifically, depending on the medium material, RMS is mainly divided into three types: reflective RMS [22]-[25], transmissive RMS [26], [27] and simultaneously transmitting and reflecting (STAR) RMS [28], [29]. Reflective RMS, also called intelligent reflecting surface (IRS), is used to improve the energy efficiency and spectral efficiency of communication networks, and it can make the system obtain obvious performance gains in the main communication scenarios. Zhang et al. and Yang et al. maximized the communication capacity of the IRS-assisted system in MIMO systems, respectively [22]. Yang et al. applied IRS to physical layer security to maximize the secret rate [24]. Transmissive RMS can solve the problem of blind coverage in communication networks. Zeng et al. evaluated the performance of the downlink RIS-assisted communication system and summarized the selection of the optimal working mode of RIS for a specific user location [26]. Zhang et al. proposed an intelligent omni-surface communication system, where transmissive elements adjust the phase of the received signal to improve network coverage [27]. STAR RMS can split the incident signal into transmitted and reflected signals, helping to achieve full spatial coverage on both sides of the surface. Wu et al. studied the problem of resource allocation in STAR RMS-assisted multi-carrier communication networks [29]. In the above studies, RMS is used as a communication auxiliary device for channel reconstruction and performance boost in two modes.
Furthermore, RMS can also be used as a transmitter, which is a very promising research direction. Tang et al. implemented real-time communication of quadrature amplitude modulation (QAM)-MIMO by using reflective RMS and verified the theoretical model [30]. In terms of transmitter design, transmissive RMS has better performance than reflective RMS, which is mainly because of the following two reasons [31]-[34]. One reason is that when RMS works in the reflective mode, the user and the feed antenna are located on the same side of the RMS, which makes the incident and the reflected electromagnetic (EM) waves interfere with each other. Another reason is that the transmissive RMS transceiver can be designed with higher aperture efficiency and operating bandwidth [31]. For the above reasons, applying transmissive RMS to multi-antenna transmitter designs is a potential technique in future wireless communications [35].
In view of the two important issues in IoT networks: the limited battery capacity of the devices and the excessive energy consumption of the BSs, we propose a downlink transmission design scheme for SWIPT networks based on the transmissive RMS transceiver. In order to make the design more practical, a nonlinear energy harvesting model is applied to this network model. Compared with the linear energy harvesting model, the nonlinear model has higher energy conversion efficiency [36]. Considering the difficulty of channel estimation in RMSassisted systems, the channel estimation error matrix is introduced into our model to simulate the impact of imperfect channel state information (CSI). In this paper, we aim to maximize the system sum-rate downlink by jointly optimizing RMS transmissive coefficient, power allocation, and power splitting ratio with the outage probability criterion. Given that the problem formulated is non-convex, it is necessary to design a reasonable and effective algorithm to solve it. The main contributions of this paper can be summarized as follows:
• We propose a novel transmissive RMS transceiver-enabled SWIPT network architecture, where the RMS is used as a transceiver to implement beamforming. Specifically, the RMS transmissive coefficient, transmit power allocation and power splitting ratio are designed jointly to maximize the system sum-rate. Taking into account the imperfect CSI, we use the outage probability to measure the QoS and energy harvesting requirements, which demonstrates the robustness of our design. However, it is non-trivial to directly obtain the global optimal solution to this problem due to the high coupling of the optimization variables.
• We propose a joint optimization algorithm based on an alternating optimization (AO) framework to solve this formulated robust system sum-rate maximization problem. Specifically, the original problem is first transformed into a tractable problem. Then, the original problem is decoupled into three sub-problems with respect to transmit power allocation, power splitting ratio and RMS transmissive coefficient, which are solved separately. Finally, we alternately optimize the three sub-problems until the entire problem converges.
• Numerical results reveal the superior performance of the proposed algorithm in downlink multi-user SWIPT networks with transmissive RMS as transmitter. Specifically, the algorithm first has good convergence. Secondly, under the constraints of information and energy harvesting requirements based on the outage probability criterion, the robust joint optimization algorithm can improve the system sum-rate compared to other benchmarks under different numbers of RMS elements, numbers of users, and maximum transmit powers.
The rest of this paper is as follows. In section II, we delineate the system model and optimization problem formulation in transmissive RMS transceiver-enabled SWIPT networks when considering the non-linear EH model and the
$\mathbf{h}_{k,\mathrm{LoS}} = \big[1,\ e^{-j\frac{2\pi}{\lambda}d\sin\theta_{k}^{\mathrm{AoD}}\cos\varphi_{k}^{\mathrm{AoD}}},\ \ldots,\ e^{-j\frac{2\pi}{\lambda}(N_{x}-1)d\sin\theta_{k}^{\mathrm{AoD}}\cos\varphi_{k}^{\mathrm{AoD}}}\big]^{T} \otimes \big[1,\ e^{-j\frac{2\pi}{\lambda}d\sin\theta_{k}^{\mathrm{AoD}}\sin\varphi_{k}^{\mathrm{AoD}}},\ \ldots,\ e^{-j\frac{2\pi}{\lambda}(N_{z}-1)d\sin\theta_{k}^{\mathrm{AoD}}\sin\varphi_{k}^{\mathrm{AoD}}}\big]^{T}, \quad (3)$
imperfect CSI. Then, in section III, the proposed robust joint optimization algorithm is elaborated. Section IV reveals the performance superiority of the proposed algorithm compared to other benchmarks. Finally, section V concludes this paper.
Notations: Matrices are represented by bold uppercase letters. Vectors are denoted by bold lowercase letters. Scalars are represented by standard lowercase letters. For a complex-valued scalar x, |x| denotes its absolute value, and for a complex-valued vector x, ‖x‖ represents its Euclidean norm. For a general matrix A, rank(A), A^H, [A]_{m,n} and ‖A‖ denote its rank, conjugate transpose, (m, n)-th entry and matrix norm, respectively. For a square matrix X, tr(X) and rank(X) denote its trace and rank, and X ⪰ 0 denotes that X is a positive semidefinite matrix. C^{M×N} represents the M × N dimensional complex matrix space and j is the imaginary unit. Finally, CN(µ, C) denotes the distribution of a circularly symmetric complex Gaussian (CSCG) random vector with mean µ and covariance matrix C, and ∼ stands for 'distributed as'.
II. SYSTEM MODEL AND OPTIMIZATION PROBLEM FORMULATION
A. System Model
As shown in Fig. 1, the system model of transmissive RMS transceiver-enabled SWIPT networks is first introduced and it mainly includes a transmissive RMS transceiver and K users with a single antenna. The transceiver is composed of a transmissive RMS with N elements and a feed antenna. It is worth noting that although we are considering a transmissive RMS transceiver architecture, a portion of the electromagnetic waves emitted from the feed antenna will always be reflected. However, we can quantify this part of the reflected electromagnetic wave by a certain ratio, so it does not affect the algorithm design of this problem. In this paper, for the convenience of analysis, we assume that the electromagnetic wave is completely transmitted, i.e., no incident electromagnetic waves are reflected. The transmissive RMS is equipped with an intelligent controller which can control the amplitude and phase shift of all transmissive elements. We let f = [f 1 , ..., f N ] T ∈ C N ×1 represent the RMS transmissive coefficient vector at the transmitter, where f n = β n e jθn represents the amplitude and phase shift of the n-th element respectively, which should satisfy
|f n | ≤ 1, ∀n.(1)
The channel from the RMS transceiver to the k-th user can be named as the RMS-user channel, and the channel gain can be denoted by h H k ∈ C 1×N . For ease of analysis, all channels are assumed to be quasi-static flat fading, i.e., h H k is constant within each transmission time T . It is worth noting that the transmissive RMS transmits the signal passively and has no ability to actively send and receive signals. We assume that the communication works in time division duplex (TDD) mode, i.e., the channel estimation is completed in the uplink transmission. Downlink CSI can be obtained according to channel reciprocity. This paper assumes that the transmissive RMS transceiver cannot obtain the CSI perfectly, and the specific modeling is explained below.
In this paper, we model the array of RMS as a uniform planar array (UPA), which is a more realistic array response, i.e., N = N x ×N z , N x and N z denote the number of elements in the horizontal and vertical directions of the transmissive RMS, respectively. Herein, RMS-user channel is modeled as a Rice channel model, which can be given by
$\mathbf{h}_{k} = \beta\Big(\frac{d_{k}}{d_{0}}\Big)^{-\alpha}\Big(\sqrt{\frac{\kappa}{\kappa+1}}\,\mathbf{h}_{k,\mathrm{LoS}} + \sqrt{\frac{1}{\kappa+1}}\,\mathbf{h}_{k,\mathrm{NLoS}}\Big), \ \forall k, \quad (2)$
where β denotes the channel gain when the reference distance d 0 = 1 m, α is the path loss exponent between the RMS transceiver and the user, d k is the distance between RMS transceiver and the k-th user. κ denotes the Rician factor, h k,LoS represents the LoS component, which can be determined by the Eq. (3) at the top of this page, where θ AoD k and φ AoD k are the vertical angle and horizontal angle of the angleof-departure (AoD) at the RMS transceiver, respectively. d denotes the spacing between successive antenna elements and λ denotes the carrier wavelength. h k,NLoS represents the NLoS component and [h k,NLoS ] (nx−1)Nz+nz ∼ CN (0, 1) is the (n x − 1) N z + n z element of the vector h k,NLoS . Accordingly, the signal received by the k-th user can be denoted by
$y_{k} = \mathbf{h}_{k}^{H}\mathbf{f}\sum_{i=1}^{K}\sqrt{p_{i}}\,s_{i} + n_{k}, \ \forall k, \quad (4)$
where s_i denotes the signal from the RMS transceiver to the i-th user. Without loss of generality, we assume that it is an independent and identically distributed (i.i.d.) CSCG random variable, i.e., s_i ∼ CN(0, 1). n_k represents the additive white Gaussian noise (AWGN) introduced at the k-th user's receiving antenna, and it is also assumed to be an i.i.d. CSCG variable, i.e., n_k ∼ CN(0, σ_k²). p_k represents the power allocated to the k-th user and should satisfy the following constraints
$p_{k}\ge 0, \ \forall k, \quad (5)$
and
$\sum_{k=1}^{K}p_{k}\le P_{\max}, \quad (6)$
where P max is the maximum transmit power of transmissive RMS transceiver.
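To make the channel and signal model concrete, the following NumPy sketch draws one RMS-user channel according to Eqs. (2)-(3). The function name and the numeric defaults are our own illustrative choices (they are not taken from the paper), and the square-root Rician weighting follows Eq. (2) above.

```python
import numpy as np

def rician_rms_user_channel(Nx, Nz, d_over_lambda, theta, phi, dist,
                            beta=1e-2, alpha=3.0, kappa=2.0, rng=None):
    """Draw one RMS-user channel following Eqs. (2)-(3).

    theta/phi are the vertical/horizontal AoDs, dist is the RMS-user distance
    (with d_0 = 1 m), and d_over_lambda is the element spacing in wavelengths
    (1/2 in the simulations of Section IV)."""
    rng = np.random.default_rng() if rng is None else rng
    nx, nz = np.arange(Nx), np.arange(Nz)
    # UPA steering vectors of Eq. (3), combined through the Kronecker product
    ax = np.exp(-1j * 2 * np.pi * d_over_lambda * nx * np.sin(theta) * np.cos(phi))
    az = np.exp(-1j * 2 * np.pi * d_over_lambda * nz * np.sin(theta) * np.sin(phi))
    h_los = np.kron(ax, az)
    # NLoS component with CN(0, 1) entries
    h_nlos = (rng.standard_normal(Nx * Nz) +
              1j * rng.standard_normal(Nx * Nz)) / np.sqrt(2)
    path_loss = beta * dist ** (-alpha)
    return path_loss * (np.sqrt(kappa / (kappa + 1)) * h_los +
                        np.sqrt(1 / (kappa + 1)) * h_nlos)
```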
This paper considers transmissive RMS transceiver-enabled SWIPT networks. Specifically, from the received RF signal, each user adopts the PS protocol to coordinate energy harvesting and information decoding, i.e., each user's received signal is divided between the ID and the EH by a power splitter. The k-th user divides a ρ_k portion of the received signal power to the ID and the remaining (1 − ρ_k) portion to the EH. Therefore, the received signal for the ID in the downlink of the k-th user is denoted by
$y_{k}^{\mathrm{ID}} = \sqrt{\rho_{k}}\,y_{k} + z_{k} = \sqrt{\rho_{k}}\Big(\mathbf{h}_{k}^{H}\mathbf{f}\sum_{i=1}^{K}\sqrt{p_{i}}\,s_{i} + n_{k}\Big) + z_{k}, \ \forall k, \quad (7)$
where z k represents AWGN caused by the ID of the k-th user and it is set to be an i.i.d CSCG variable, z k ∼ CN 0, δ 2 k . Then, the signal to interference plus noise ratio (SINR) of the k-th user is denoted by
$\mathrm{SINR}_{k} = \frac{\rho_{k}p_{k}\,|\mathbf{h}_{k}^{H}\mathbf{f}|^{2}}{\rho_{k}\sum_{i\neq k}p_{i}\,|\mathbf{h}_{k}^{H}\mathbf{f}|^{2} + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}}, \ \forall k. \quad (8)$
In addition, for the k-th user, the received signal for EH in the downlink can be given by
$y_{k}^{\mathrm{EH}} = \sqrt{1-\rho_{k}}\,y_{k} = \sqrt{1-\rho_{k}}\Big(\mathbf{h}_{k}^{H}\mathbf{f}\sum_{i=1}^{K}\sqrt{p_{i}}\,s_{i} + n_{k}\Big), \ \forall k. \quad (9)$
Accordingly, the power obtained by the k-th user for EH is given by
$p_{k}^{\mathrm{EH}} = \mathbb{E}\big\{|y_{k}^{\mathrm{EH}}|^{2}\big\} = (1-\rho_{k})\Big(\sum_{i=1}^{K}p_{i}\,|\mathbf{h}_{k}^{H}\mathbf{f}|^{2} + \sigma_{k}^{2}\Big), \ \forall k. \quad (10)$
In this paper, a more practical non-linear energy harvesting model is adopted. Hence, the power harvested by the k-th user can be expressed as
$\Psi\big(p_{k}^{\mathrm{EH}}\big) = \frac{\partial_{k}}{X_{k}\big(1+\exp\big(-a_{k}(p_{k}^{\mathrm{EH}}-b_{k})\big)\big)} - Y_{k}, \ \forall k, \quad (11)$
where ∂ k represents the maximum energy harvested of the kth user, a k and b k are specific parameters related to the circuit.
X k = exp (a k b k )/(1 + exp (a k b k )) and Y k = ∂ k /exp (a k b k ).
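As a quick illustration of Eqs. (8), (10) and (11), the Python sketch below evaluates the SINR and the non-linearly harvested power of one user for a given channel, RMS coefficient vector and power allocation. The helper name is ours, and the default circuit parameters follow the simulation settings of Section IV (a_k = 150, b_k = 0.024, saturation level 24 mW).

```python
import numpy as np

def sinr_and_harvested_power(h_k, f, p, rho_k, k, sigma2_k, delta2_k,
                             a_k=150.0, b_k=0.024, psi_max=0.024):
    """Per-user SINR, Eq. (8), and non-linear harvested power, Eqs. (10)-(11).

    h_k : (N,) complex channel of user k;  f : (N,) RMS transmissive coefficients;
    p   : (K,) numpy array of transmit powers;  rho_k : power splitting ratio."""
    gain = np.abs(np.vdot(h_k, f)) ** 2                    # |h_k^H f|^2
    sinr = (rho_k * p[k] * gain) / (
        rho_k * (p.sum() - p[k]) * gain + rho_k * sigma2_k + delta2_k)

    p_eh = (1.0 - rho_k) * (p.sum() * gain + sigma2_k)     # Eq. (10)
    X = np.exp(a_k * b_k) / (1.0 + np.exp(a_k * b_k))      # X_k below Eq. (11)
    Y = psi_max / np.exp(a_k * b_k)                        # Y_k below Eq. (11)
    harvested = psi_max / (X * (1.0 + np.exp(-a_k * (p_eh - b_k)))) - Y  # Eq. (11)
    return sinr, harvested
```

Note that this non-linear model satisfies Ψ(0) = 0 and saturates at the maximum harvestable power as p_k^EH grows, which is the behaviour the sigmoid parameters a_k and b_k are meant to capture.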
We consider that under normalized time, the energy harvested by the k-th user can be given by
$E_{k} = \Psi\big(p_{k}^{\mathrm{EH}}\big), \ \forall k. \quad (12)$
Let $\mathbf{\Phi}_{k} = \mathbb{E}\{\mathbf{h}_{k}\mathbf{h}_{k}^{H}\} = \mathbf{h}_{k}\mathbf{h}_{k}^{H} \in \mathbb{C}^{N\times N}$ represent the channel covariance matrix of the k-th user in the downlink. Then, the SINR of the k-th user can be expressed by the channel covariance matrix as
$\mathrm{SINR}_{k} = \frac{\rho_{k}p_{k}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F})}{\rho_{k}\sum_{i\neq k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}}, \ \forall k, \quad (13)$
where $\mathbf{F} = \mathbf{f}\mathbf{f}^{H} \in \mathbb{C}^{N\times N}$ and it should satisfy $\mathrm{rank}(\mathbf{F}) = 1$, $\mathbf{F}\succeq 0$ and $[\mathbf{F}]_{n,n}\le 1, \forall n$. In addition, the energy harvested by the k-th user is further denoted by
$E_{k} = \Psi\Big((1-\rho_{k})\Big(\sum_{i=1}^{K}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \sigma_{k}^{2}\Big)\Big), \ \forall k. \quad (14)$
To make the model more realistic, we consider that the CSI of the downlink cannot be obtained accurately, i.e., in the case of imperfect CSI. Specifically, the channel covariance matrix is assumed to be expressed as Φ k + ∆Φ k , where Φ k ∈ C N ×N denotes the covariance matrix of the estimated channel in the downlink and ∆Φ k ∈ C N ×N is the error matrix corresponding to the estimated error of Φ k , which can also be called the uncertainty matrix, because it represents the difference between the estimated value and the true value [37]. Note that Φ k and ∆Φ k are Hermitian matrices, then the SINR and the energy harvested for the k-th user is denoted by
$\mathrm{SINR}_{k} = \frac{\rho_{k}p_{k}\,\mathrm{tr}\big((\mathbf{\Phi}_{k}+\Delta\mathbf{\Phi}_{k})\mathbf{F}\big)}{\rho_{k}\sum_{i\neq k}p_{i}\,\mathrm{tr}\big((\mathbf{\Phi}_{k}+\Delta\mathbf{\Phi}_{k})\mathbf{F}\big) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}}, \ \forall k, \quad (15)$
and
$E_{k} = \Psi\Big((1-\rho_{k})\Big(\sum_{i=1}^{K}p_{i}\,\mathrm{tr}\big((\mathbf{\Phi}_{k}+\Delta\mathbf{\Phi}_{k})\mathbf{F}\big) + \sigma_{k}^{2}\Big)\Big), \ \forall k. \quad (16)$
Accordingly, the k-th user's achievable rate (bps/Hz) is expressed as
R k = log 2 (1 + SINR k ) , ∀k.(17)
Since random matrix variable terms are involved in R k , we take its expectation, which can be defined as E {R k }. However, we can't use general methods to directly obtain a closed-form expression for the expectation. To solve this problem, we approximate the expectation of the achievable rate by applying Proposition 1 below.
Proposition 1: For any a and b, if X is a random variable term or contains a random variable term, the following approximation holds,
$\mathbb{E}\Big\{\log_{2}\Big(1+\frac{aX}{bX+1}\Big)\Big\} \approx \log_{2}\Big(1+\frac{\mathbb{E}\{aX\}}{\mathbb{E}\{bX\}+1}\Big). \quad (18)$
Proof : The proof of this formula is similar to the proof of Theorem 1 in Ref. [38] and here the proof is omitted.
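A quick Monte Carlo sanity check of the approximation in Proposition 1 can be run in a few lines of Python. The constants a, b and the lognormal distribution of X below are illustrative assumptions of ours, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Compare E[log2(1 + aX/(bX+1))] with log2(1 + a*E[X]/(b*E[X]+1)), Eq. (18).
a, b = 2.0, 0.5
X = rng.lognormal(mean=0.0, sigma=0.3, size=200_000)

lhs = np.mean(np.log2(1.0 + a * X / (b * X + 1.0)))
rhs = np.log2(1.0 + a * X.mean() / (b * X.mean() + 1.0))
print(f"E[rate] = {lhs:.4f},  approximation = {rhs:.4f}")
```

The two values are close when the fluctuation of X around its mean is moderate, which is the regime in which the approximation is used here.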
For the convenience of analysis, we assume that ∆Φ k is a Hermitian matrix, and the elements on the diagonal are i.i.d.
$O_{k}^{\mathrm{ID}} = \Pr\{\mathrm{SINR}_{k}\le\gamma_{th}\} = \Pr\Big\{\frac{\rho_{k}p_{k}\,\mathrm{tr}((\mathbf{\Phi}_{k}+\Delta\mathbf{\Phi}_{k})\mathbf{F})}{\rho_{k}\sum_{i\neq k}p_{i}\,\mathrm{tr}((\mathbf{\Phi}_{k}+\Delta\mathbf{\Phi}_{k})\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}}\le\gamma_{th}\Big\}, \ \forall k, \quad (20)$
$O_{k}^{\mathrm{EH}} = \Pr\{E_{k}\le E_{th}\} = \Pr\Big\{\Psi\Big((1-\rho_{k})\Big(\sum_{i=1}^{K}p_{i}\,\mathrm{tr}((\mathbf{\Phi}_{k}+\Delta\mathbf{\Phi}_{k})\mathbf{F}) + \sigma_{k}^{2}\Big)\Big)\le E_{th}\Big\}, \ \forall k. \quad (21)$
real Gaussian random variables with zero mean and variance σ_Φ². The other elements are i.i.d. CSCG random variables with zero mean and variance σ_Φ². According to Proposition 1, the expectation of the k-th user's achievable rate can be approximated as follows
$\mathbb{E}\{R_{k}\} \approx \log_{2}\Big(1+\frac{\rho_{k}p_{k}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F})}{\rho_{k}\sum_{i\neq k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}}\Big), \ \forall k. \quad (19)$
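The statistical model of the CSI error matrix described above (real Gaussian diagonal, CSCG off-diagonal, Hermitian overall) is easy to sample, which is useful for validating the outage expressions derived next. The helper below is our own sketch; the value of σ_Φ is a placeholder.

```python
import numpy as np

def sample_error_matrix(N, sigma_phi, rng):
    """Draw one Hermitian CSI-error matrix DeltaPhi_k: real N(0, sigma_phi^2)
    entries on the diagonal and CSCG entries of variance sigma_phi^2 off it."""
    d = rng.normal(0.0, sigma_phi, size=N)                          # diagonal
    upper = (rng.normal(0.0, sigma_phi / np.sqrt(2), size=(N, N)) +
             1j * rng.normal(0.0, sigma_phi / np.sqrt(2), size=(N, N)))
    delta = np.triu(upper, k=1)                                     # strict upper part
    return delta + delta.conj().T + np.diag(d)                      # Hermitian matrix

rng = np.random.default_rng(1)
DeltaPhi = sample_error_matrix(16, sigma_phi=0.01, rng=rng)
assert np.allclose(DeltaPhi, DeltaPhi.conj().T)
```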
Considering imperfect CSI, the user's SINR is a random variable, which means that we can only express the information and energy harvested requirement with outage probability. We define the information outage probability of the k-th user as the probability that its SINR is smaller than the threshold γ th , which can be expressed as the Eq. (20), where Pr {·} is the probability operator. Similarly, energy harvested outage probability is defined as the probability that the energy harvested is lower than the threshold E th , which can be expressed as the Eq. (21).
B. Problem Formulation
Let ρ = [ρ 1 , ..., ρ K ], p = [p 1 , ..., p K ]. We consider that the information outage probability of each user is not greater than ζ k , and the energy outage probability of each user is not greater than ε k . By jointly optimizing the power splitting ratio ρ, RMS transmissive coefficient F and the transmit power allocation p, the expectation of the system sum-rate is maximized. Therefore, the original problem P0 can be expressed as
P0: $\max_{\boldsymbol{\rho},\mathbf{p},\mathbf{F}}\ \sum_{k=1}^{K}\log_{2}\Big(1+\frac{\rho_{k}p_{k}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F})}{\rho_{k}\sum_{i\neq k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}}\Big)$
s.t. $p_{k}\ge 0, \forall k$, (22a)
$\sum_{k=1}^{K}p_{k}\le P_{\max}$, (22b)
$0\le\rho_{k}\le 1, \forall k$, (22c)
$\Pr\{\mathrm{SINR}_{k}\le\gamma_{th}\}\le\zeta_{k}, \forall k$, (22d)
$\Pr\{E_{k}\le E_{th}\}\le\varepsilon_{k}, \forall k$, (22e)
$[\mathbf{F}]_{n,n}\le 1, \forall n$, (22f)
$\mathbf{F}\succeq 0$, (22g)
$\mathrm{rank}(\mathbf{F}) = 1$, (22h)
where constraint (22a) and constraint (22b) are the transmit power allocation constraints of transmissive RMS transceiver, constraint (22c) is the power splitting ratio constraint of each user. To guarantee the QoS of user information and energy harvesting at the same time, constraint (22d) ensures that the information outage probability of each user is not greater than ζ k and constraint (22e) ensures that the energy harvesting outage probability of each user is not greater than ε k . Constraints (22f)-(22h) are RMS transmissive coefficient constraints.
As can be observed, the original problem P0 is non-convex for the following reasons. First, the highly coupled variables make the objective function non-concave. In addition, constraints (22d) and (22e) are based on the outage probability criterion and are difficult to handle directly. Finally, a non-convex rank-one constraint (22h) is introduced after the RMS transmissive coefficient vector is lifted to a matrix. Therefore, solving this problem is challenging.
III. ROBUST JOINT OPTIMIZATION ALGORITHM DESIGN IN TRANSMISSIVE RMS TRANSCEIVER-ENABLED SWIPT NETWORKS
A. Problem Transformation
Obviously, the problem P0 is a non-convex optimization problem and needs to be transformed into a tractable convex problem. Next, we reformulate the probability constraint (22d) through a statistical model. Herein, O_k^ID can be rewritten as Eq. (23) on the top of the next page. We introduce the auxiliary matrices
$\bar{\mathbf{\Phi}}_{k} = \rho_{k}p_{k}\mathbf{\Phi}_{k} - \rho_{k}\gamma_{th}\sum_{i\neq k}p_{i}\mathbf{\Phi}_{k}, \ \forall k, \quad (24)$
and
$\Delta\bar{\mathbf{\Phi}}_{k} = \rho_{k}p_{k}\Delta\mathbf{\Phi}_{k} - \rho_{k}\gamma_{th}\sum_{i\neq k}p_{i}\Delta\mathbf{\Phi}_{k}, \ \forall k. \quad (25)$
Then the information outage probability of the k-th user can be given by
$O_{k}^{\mathrm{ID}} = \Pr\big\{\mathrm{tr}\big((\bar{\mathbf{\Phi}}_{k}+\Delta\bar{\mathbf{\Phi}}_{k})\mathbf{F}\big)\le\rho_{k}\gamma_{th}\sigma_{k}^{2} + \gamma_{th}\delta_{k}^{2}\big\}, \ \forall k. \quad (26)$
We define a random variable $\chi_{k} = \mathrm{tr}\big((\bar{\mathbf{\Phi}}_{k}+\Delta\bar{\mathbf{\Phi}}_{k})\mathbf{F}\big), \forall k$, and an intermediate variable to be optimized $c_{k} = \rho_{k}\gamma_{th}\sigma_{k}^{2} + \gamma_{th}\delta_{k}^{2}, \forall k$. Since $\bar{\mathbf{\Phi}}_{k}$, $\Delta\bar{\mathbf{\Phi}}_{k}$ and $\mathbf{F}$ are all Hermitian matrices, the following Proposition 2 can be cited for the probability distribution analysis of $\chi_{k}$.
$O_{k}^{\mathrm{ID}} = \Pr\Big\{\rho_{k}p_{k}\,\mathrm{tr}\big((\mathbf{\Phi}_{k}+\Delta\mathbf{\Phi}_{k})\mathbf{F}\big)\le\rho_{k}\gamma_{th}\sum_{i\neq k}p_{i}\,\mathrm{tr}\big((\mathbf{\Phi}_{k}+\Delta\mathbf{\Phi}_{k})\mathbf{F}\big) + \rho_{k}\gamma_{th}\sigma_{k}^{2} + \gamma_{th}\delta_{k}^{2}\Big\}, \ \forall k, \quad (23)$
$O_{k}^{\mathrm{ID}} = \Pr\{\chi_{k}\le c_{k}\} = \int_{-\infty}^{c_{k}}\frac{1}{\sqrt{2\pi}\,\sigma_{e,k}\|\mathbf{F}\|}\exp\Big(-\frac{\big(\chi_{k}-\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F})\big)^{2}}{2\sigma_{e,k}^{2}\|\mathbf{F}\|^{2}}\Big)d\chi_{k}, \ \forall k, \quad (30)$
$O_{k}^{\mathrm{EH}} = \Pr\Big\{\Psi\Big((1-\rho_{k})\Big(\sum_{i=1}^{K}p_{i}\,\mathrm{tr}\big((\mathbf{\Phi}_{k}+\Delta\mathbf{\Phi}_{k})\mathbf{F}\big) + \sigma_{k}^{2}\Big)\Big)\le E_{th}\Big\} = \Pr\big\{\mathrm{tr}\big((\tilde{\mathbf{\Phi}}_{k}+\Delta\tilde{\mathbf{\Phi}}_{k})\mathbf{F}\big)\le\phi_{k}\big\}, \ \forall k, \quad (35)$
Proposition 2: If X is a random matrix whose elements are CSCG random variables with zero mean and variance σ_x², then for any deterministic matrix Y the following holds:
$\mathrm{tr}(\mathbf{Y}\mathbf{X}) \sim \mathcal{CN}\big(0,\ \sigma_{x}^{2}\,\mathrm{tr}(\mathbf{Y}\mathbf{Y}^{H})\big). \quad (27)$
According to Proposition 2 [39], we can obtain $\chi_{k}\sim\mathcal{CN}\big(\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F}),\ \sigma_{e,k}^{2}\,\mathrm{tr}(\mathbf{F}\mathbf{F}^{H})\big)$, where $\sigma_{e,k}^{2}$ can be given by
$\sigma_{e,k}^{2} = \rho_{k}^{2}p_{k}^{2}\sigma_{\Phi}^{2} + \rho_{k}^{2}\gamma_{th}^{2}\sum_{i\neq k}p_{i}^{2}\sigma_{\Phi}^{2}, \ \forall k, \quad (28)$
then,
$\sigma_{e,k}^{2} = \rho_{k}^{2}\sigma_{\Phi}^{2}\Big(p_{k}^{2} + \gamma_{th}^{2}\sum_{i\neq k}p_{i}^{2}\Big), \ \forall k. \quad (29)$
Therefore, the information outage probability of the k-th user, $O_{k}^{\mathrm{ID}}$, can be obtained by Eq. (30), where $\|\mathbf{F}\|^{2} = \mathrm{tr}(\mathbf{F}\mathbf{F}^{H})$.
According to the definition of the error function
$\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}}\int_{0}^{x}\exp(-u^{2})\,du, \quad (31)$
the information outage probability of the k-th user can finally be given by
$O_{k}^{\mathrm{ID}} = \frac{1}{2} - \frac{1}{2}\,\mathrm{erf}\Big(\frac{\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F}) - c_{k}}{\sqrt{2}\,\sigma_{e,k}\|\mathbf{F}\|}\Big), \ \forall k. \quad (32)$
Thus, constraint (22d) can be rewritten as
$\frac{1}{2} - \frac{1}{2}\,\mathrm{erf}\Big(\frac{\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F}) - c_{k}}{\sqrt{2}\,\sigma_{e,k}\|\mathbf{F}\|}\Big)\le\zeta_{k}, \ \forall k. \quad (33)$
This formula can be converted to
$\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F}) - c_{k} \ge \sqrt{2}\,\sigma_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\zeta_{k}), \ \forall k. \quad (34)$
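The closed-form outage expression (32) and its deterministic reformulation (34) translate directly into code. The helpers below are our own sketch; they assume the auxiliary quantities $\bar{\mathbf{\Phi}}_k$, $c_k$ and $\sigma_{e,k}$ from Eqs. (24) and (28)-(29) have already been computed, and that the norm is the Frobenius norm $\|\mathbf{F}\|^2 = \mathrm{tr}(\mathbf{F}\mathbf{F}^H)$.

```python
import numpy as np
from scipy.special import erf, erfinv

def info_outage_probability(Phi_bar, F, c_k, sigma_e):
    """Closed-form information outage probability of Eq. (32)."""
    mean = np.real(np.trace(Phi_bar @ F))
    scale = np.sqrt(2.0) * sigma_e * np.linalg.norm(F, 'fro')
    return 0.5 - 0.5 * erf((mean - c_k) / scale)

def info_outage_constraint_ok(Phi_bar, F, c_k, sigma_e, zeta_k):
    """Deterministic reformulation of Pr{SINR_k <= gamma_th} <= zeta_k, Eq. (34)."""
    lhs = np.real(np.trace(Phi_bar @ F)) - c_k
    rhs = (np.sqrt(2.0) * sigma_e * np.linalg.norm(F, 'fro')
           * erfinv(1.0 - 2.0 * zeta_k))
    return lhs >= rhs
```

The energy outage constraint (39) derived below has exactly the same structure, with $\tilde{\mathbf{\Phi}}_k$, $\phi_k$ and $\beta_{e,k}$ in place of $\bar{\mathbf{\Phi}}_k$, $c_k$ and $\sigma_{e,k}$.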
Similarly, the k-th user's energy outage probability O EH k is denoted by Eq.
(35), where $\tilde{\mathbf{\Phi}}_{k} = (1-\rho_{k})\sum_{i=1}^{K}p_{i}\mathbf{\Phi}_{k}$, $\Delta\tilde{\mathbf{\Phi}}_{k} = (1-\rho_{k})\sum_{i=1}^{K}p_{i}\Delta\mathbf{\Phi}_{k}$ and $\phi_{k} = \Psi^{-1}(E_{th}) - (1-\rho_{k})\sigma_{k}^{2}$.
Then, we define a random variable $\tilde{\chi}_{k} = \mathrm{tr}\big((\tilde{\mathbf{\Phi}}_{k}+\Delta\tilde{\mathbf{\Phi}}_{k})\mathbf{F}\big)$. According to Proposition 2, we can obtain $\tilde{\chi}_{k}\sim\mathcal{CN}\big(\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F}),\ \beta_{e,k}^{2}\,\mathrm{tr}(\mathbf{F}\mathbf{F}^{H})\big)$, where $\beta_{e,k}^{2}$
can be given by
$\beta_{e,k}^{2} = (1-\rho_{k})^{2}\Big(\sum_{i=1}^{K}p_{i}\Big)^{2}\sigma_{\Phi}^{2}, \ \forall k. \quad (36)$
Therefore, the k-th user's energy outage probability O EH k is obtained by Eq. (37) on the top of the next page. Thus, the constraint (22e) can be rewritten as
$\frac{1}{2} - \frac{1}{2}\,\mathrm{erf}\Big(\frac{\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F}) - \phi_{k}}{\sqrt{2}\,\beta_{e,k}\|\mathbf{F}\|}\Big)\le\varepsilon_{k}, \ \forall k. \quad (38)$
This formula can be converted to
$\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F}) - \phi_{k} \ge \sqrt{2}\,\beta_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\varepsilon_{k}), \ \forall k. \quad (39)$
Hence, we can transform the problem P0 into problem P1, which can be given by
P1: $\max_{\boldsymbol{\rho},\mathbf{p},\mathbf{F}}\ \sum_{k=1}^{K}\log_{2}\Big(1+\frac{\rho_{k}p_{k}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F})}{\rho_{k}\sum_{i\neq k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}}\Big)$
s.t. $p_{k}\ge 0, \forall k$, (40a)
$\sum_{k=1}^{K}p_{k}\le P_{\max}$, (40b)
$0\le\rho_{k}\le 1, \forall k$, (40c)
$\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F}) - c_{k}\ge\sqrt{2}\,\sigma_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\zeta_{k}), \forall k$, (40d)
$\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F}) - \phi_{k}\ge\sqrt{2}\,\beta_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\varepsilon_{k}), \forall k$, (40e)
$[\mathbf{F}]_{n,n}\le 1, \forall n$, (40f)
$\mathbf{F}\succeq 0$, (40g)
$\mathrm{rank}(\mathbf{F}) = 1$. (40h)
After the original problem is transformed, the AO framework can be implemented to decouple the problem P1 into
$O_{k}^{\mathrm{EH}} = \Pr\{\tilde{\chi}_{k}\le\phi_{k}\} = \int_{-\infty}^{\phi_{k}}\frac{1}{\sqrt{2\pi}\,\beta_{e,k}\|\mathbf{F}\|}\exp\Big(-\frac{\big(\tilde{\chi}_{k}-\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F})\big)^{2}}{2\beta_{e,k}^{2}\|\mathbf{F}\|^{2}}\Big)d\tilde{\chi}_{k} = \frac{1}{2} - \frac{1}{2}\,\mathrm{erf}\Big(\frac{\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F}) - \phi_{k}}{\sqrt{2}\,\beta_{e,k}\|\mathbf{F}\|}\Big), \ \forall k. \quad (37)$
$\sum_{k=1}^{K}\Big[\log_{2}\Big(\sum_{i=1}^{K}\rho_{k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}\Big) - \log_{2}\Big(\rho_{k}\sum_{i\neq k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}\Big)\Big] = \sum_{k=1}^{K}\big(g_{k}(\mathbf{F}) - \bar{g}_{k}(\mathbf{F})\big). \quad (41)$
three sub-problems: RMS transmissive coefficient optimization, transmit power allocation optimization, and power splitting ratio optimization. Then three non-convex sub-problems are transformed into convex sub-problems by applying DC programming and SCA, respectively. Next, by alternately optimizing these three sub-problems to reach convergence, the final RMS transmissive coefficient, transmit power allocation, and power splitting ratio scheme can be obtained.
B. RMS Transmissive Coefficient Optimization
In this subsection, we first fix the power splitting ratio ρ and transmit power allocation p, and optimize the RMS transmissive coefficient F. The objective function can be expressed as Eq. (41), which is the difference of two concave functions with respect to (w.r.t.) F and is therefore not concave in general. Herein, we approximate $\bar{g}_{k}(\mathbf{F})$ linearly by SCA as follows
$\bar{g}_{k}(\mathbf{F}) \le \bar{g}_{k}(\mathbf{F}^{r}) + \mathrm{tr}\big((\nabla_{\mathbf{F}}\bar{g}_{k}(\mathbf{F}^{r}))^{H}(\mathbf{F}-\mathbf{F}^{r})\big) \triangleq \bar{g}_{k}(\mathbf{F})^{\mathrm{ub}}, \ \forall k, \quad (42)$
with
$\nabla_{\mathbf{F}}\bar{g}_{k}(\mathbf{F}^{r}) = \frac{\rho_{k}\sum_{i\neq k}p_{i}\mathbf{\Phi}_{k}^{H}}{\big(\rho_{k}\sum_{i\neq k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}^{r}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}\big)\ln 2}, \ \forall k, \quad (43)$
where $\mathbf{F}^{r}$ represents the value at the r-th SCA iteration. Therefore, the problem P1 can be approximately expressed as follows
P2: $\max_{\mathbf{F}}\ \sum_{k=1}^{K}\big(g_{k}(\mathbf{F}) - \bar{g}_{k}(\mathbf{F})\big)$
s.t. $\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F}) - c_{k}\ge\sqrt{2}\,\sigma_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\zeta_{k}), \forall k$, (44a)
$\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F}) - \phi_{k}\ge\sqrt{2}\,\beta_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\varepsilon_{k}), \forall k$, (44b)
$[\mathbf{F}]_{n,n}\le 1, \forall n$, (44c)
$\mathbf{F}\succeq 0$, (44d)
$\mathrm{rank}(\mathbf{F}) = 1$. (44e)
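To make the SCA surrogate (42)-(43) concrete, the NumPy sketch below evaluates the concave term $\bar{g}_k$ and its first-order upper bound around a previous iterate. Both function names are our own helpers, not part of the paper.

```python
import numpy as np

def g_bar(F, Phi_k, p, rho_k, k, sigma2_k, delta2_k):
    """Second (concave) term of the k-th rate term, see Eq. (41)."""
    interf = rho_k * (p.sum() - p[k]) * np.real(np.trace(Phi_k @ F))
    return np.log2(interf + rho_k * sigma2_k + delta2_k)

def g_bar_upper_bound(F, F_r, Phi_k, p, rho_k, k, sigma2_k, delta2_k):
    """First-order SCA upper bound of g_bar around F_r, Eqs. (42)-(43)."""
    interf_r = rho_k * (p.sum() - p[k]) * np.real(np.trace(Phi_k @ F_r))
    grad = rho_k * (p.sum() - p[k]) * Phi_k.conj().T / (
        (interf_r + rho_k * sigma2_k + delta2_k) * np.log(2.0))
    return (g_bar(F_r, Phi_k, p, rho_k, k, sigma2_k, delta2_k)
            + np.real(np.trace(grad.conj().T @ (F - F_r))))
```

Because $\bar{g}_k$ is concave in F, this linearization is a global upper bound, which is what makes the surrogate objective of P2 a valid lower bound on the true objective.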
Since the constraint (44e) is non-convex, we consider applying DC programming to address this non-convex rank-one constraint.
Lemma 1: For any square matrix $\mathbf{B}\in\mathbb{C}^{N\times N}$ with $\mathbf{B}\succeq 0$ and $\mathrm{tr}(\mathbf{B})>0$, the rank-one property can be equivalently expressed as
$\mathrm{rank}(\mathbf{B}) = 1 \Leftrightarrow \mathrm{tr}(\mathbf{B}) - \|\mathbf{B}\|_{2} = 0, \quad (45)$
where $\mathrm{tr}(\mathbf{B}) = \sum_{i}\sigma_{i}(\mathbf{B})$ and $\|\mathbf{B}\|_{2} = \sigma_{1}(\mathbf{B})$ denote the sum of the singular values and the largest singular value of B, respectively, so that $\mathrm{tr}(\mathbf{B})\ge\|\mathbf{B}\|_{2}$ always holds, with equality if and only if B has rank one. Applying Lemma 1 to the transmissive coefficient matrix, the rank-one constraint (44e) is equivalent to
$\mathrm{tr}(\mathbf{F}) - \|\mathbf{F}\|_{2} = 0. \quad (46)$
Then, a penalty factor is introduced and the above Eq. (46) is added to the objective function of the problem P2. Next, it is converted into the problem P3, which can be given by
P3: $\max_{\mathbf{F}}\ \sum_{k=1}^{K}\big(g_{k}(\mathbf{F}) - \bar{g}_{k}(\mathbf{F})^{\mathrm{ub}}\big) - \eta\big(\mathrm{tr}(\mathbf{F}) - \|\mathbf{F}\|_{2}\big)$
s.t. $\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F}) - c_{k}\ge\sqrt{2}\,\sigma_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\zeta_{k}), \forall k$, (47a)
$\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F}) - \phi_{k}\ge\sqrt{2}\,\beta_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\varepsilon_{k}), \forall k$, (47b)
$[\mathbf{F}]_{n,n}\le 1, \forall n$, (47c)
$\mathbf{F}\succeq 0$, (47d)
where η > 0 represents the penalty factor associated with the rank-one constraint. Because $\|\mathbf{F}\|_{2}$ is a convex function, the problem P3 is still not a convex problem, but it can be linearized by using the SCA technique, and its lower bound can be given by
$\|\mathbf{F}\|_{2} \ge \|\mathbf{F}^{r}\|_{2} + \mathrm{tr}\big(\mathbf{u}_{\max}(\mathbf{F}^{r})\mathbf{u}_{\max}(\mathbf{F}^{r})^{H}(\mathbf{F}-\mathbf{F}^{r})\big) \triangleq \big(\|\mathbf{F}\|_{2}\big)^{\mathrm{lb}}, \quad (48)$
where $\mathbf{u}_{\max}(\mathbf{F}^{r})$ denotes the eigenvector corresponding to the largest eigenvalue of $\mathbf{F}^{r}$ at the r-th SCA iteration. Thus, the problem P3 can be further converted into
$\sum_{k=1}^{K}\Big[\log_{2}\Big(\sum_{i=1}^{K}\rho_{k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}\Big) - \log_{2}\Big(\rho_{k}\sum_{i\neq k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}\Big)\Big] = \sum_{k=1}^{K}\big(h_{k}(p_{i}) - \bar{h}_{k}(p_{i})\big). \quad (50)$
$\bar{h}_{k}(p_{i}) \le \bar{h}_{k}(p_{i}^{r}) + \frac{\rho_{k}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F})}{\big(\rho_{k}\sum_{i\neq k}p_{i}^{r}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}\big)\ln 2}\,(p_{i}-p_{i}^{r}) \triangleq \bar{h}_{k}(p_{i})^{\mathrm{ub}}, \ \forall k, \quad (51)$
$\sum_{k=1}^{K}\Big[\log_{2}\Big(\sum_{i=1}^{K}\rho_{k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}\Big) - \log_{2}\Big(\rho_{k}\sum_{i\neq k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}\sigma_{k}^{2} + \delta_{k}^{2}\Big)\Big] = \sum_{k=1}^{K}\big(f(\rho_{k}) - \bar{f}(\rho_{k})\big), \ \forall k. \quad (53)$
the problem P4 as follows
P4: $\max_{\mathbf{F}}\ \sum_{k=1}^{K}\big(g_{k}(\mathbf{F}) - \bar{g}_{k}(\mathbf{F})^{\mathrm{ub}}\big) - \eta\big(\mathrm{tr}(\mathbf{F}) - (\|\mathbf{F}\|_{2})^{\mathrm{lb}}\big)$
s.t. $\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F}) - c_{k}\ge\sqrt{2}\,\sigma_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\zeta_{k}), \forall k$, (49a)
$\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F}) - \phi_{k}\ge\sqrt{2}\,\beta_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\varepsilon_{k}), \forall k$, (49b)
$[\mathbf{F}]_{n,n}\le 1, \forall n$, (49c)
$\mathbf{F}\succeq 0$. (49d)
After this analysis, when the probability of the user's information and energy outage is less than 0.5, the coefficients multiplying ‖F‖ on the right-hand side of the inequalities (49a) and (49b) are positive. In general, the outage probability is not greater than 0.5; the subsequent simulation in this paper sets it to 0.1, which satisfies this condition. If the outage probability were set to be greater than 0.5, SCA could be further used to linearize the right-hand side (RHS) of the inequalities (49a) and (49b) to solve the problem. Therefore, this problem is a semidefinite programming (SDP) problem, which can be efficiently solved by utilizing the CVX toolbox to obtain the RMS transmissive coefficient.
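The rank-one handling in (45)-(48) is easy to implement numerically. The helper below (our own sketch, not part of the paper) evaluates the penalized surrogate $\mathrm{tr}(\mathbf{F}) - (\|\mathbf{F}\|_2)^{\mathrm{lb}}$ around the previous iterate, which is the quantity the penalty factor η multiplies inside P4.

```python
import numpy as np

def rank_one_penalty_surrogate(F, F_r):
    """Convex surrogate of the rank-one penalty tr(F) - ||F||_2 used in P3/P4,
    with the spectral norm replaced by its linear lower bound (48) built from
    the dominant eigenvector of the previous iterate F_r (Hermitian PSD)."""
    w, V = np.linalg.eigh(F_r)
    u = V[:, np.argmax(w)]                                   # u_max(F_r)
    norm_lb = np.max(w) + np.real(u.conj() @ (F - F_r) @ u)  # Eq. (48)
    return np.real(np.trace(F)) - norm_lb                    # -> 0 forces rank(F) = 1
```

Driving this quantity towards zero during the iterations is what eventually recovers a (near) rank-one F, from which the transmissive coefficient vector f can be extracted by eigen-decomposition.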
C. Transmit Power Allocation Optimization
In this subsection, the RMS transmissive coefficient F and power splitting ratio ρ are given, and we optimize the transmit power allocation p. The objective function can be denoted by Eq. (50). It can be seen that the objective function is the difference of two concave functions w.r.t p i . Thus, it is a nonconcave function. It can be linearized by SCA, i.e., we perform a first-order Taylor expansion on the second term and the Eq. (51) can be obtained, where p r i represents the value at the r-th SCA iteration. Hence, the problem P1 is transformed as
follows
P5: $\max_{\mathbf{p}}\ \sum_{k=1}^{K}\big(h_{k}(p_{i}) - \bar{h}_{k}(p_{i})^{\mathrm{ub}}\big)$
s.t. $p_{k}\ge 0, \forall k$, (52a)
$\sum_{k=1}^{K}p_{k}\le P_{\max}$, (52b)
$\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F}) - c_{k}\ge\sqrt{2}\,\sigma_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\zeta_{k}), \forall k$, (52c)
$\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F}) - \phi_{k}\ge\sqrt{2}\,\beta_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\varepsilon_{k}), \forall k$. (52d)
Since the problem is a standard convex optimization problem, we can use CVX toolbox to solve it and obtain the transmit power allocation p.
D. Power Splitting Ratio Optimization
In this subsection, the power splitting ratio ρ for each user is optimized when the remaining two sets of variables are fixed. Herein, the objective function can be given by the Eq. (53). Similarly, by using the SCA to linearize the second term in Eq. (53), we can obtain the Eq. (54) on the top of the next page.
Therefore, the problem P1 can be transformed into the problem P6, which can be given by
P6: $\max_{\boldsymbol{\rho}}\ \sum_{k=1}^{K}\big(f(\rho_{k}) - \bar{f}(\rho_{k})^{\mathrm{ub}}\big)$
s.t. $0\le\rho_{k}\le 1, \forall k$, (55a)
$\mathrm{tr}(\bar{\mathbf{\Phi}}_{k}\mathbf{F}) - c_{k}\ge\sqrt{2}\,\sigma_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\zeta_{k}), \forall k$, (55b)
$\mathrm{tr}(\tilde{\mathbf{\Phi}}_{k}\mathbf{F}) - \phi_{k}\ge\sqrt{2}\,\beta_{e,k}\|\mathbf{F}\|\,\mathrm{erf}^{-1}(1-2\varepsilon_{k}), \forall k$. (55c)
$\bar{f}(\rho_{k}) \le \bar{f}(\rho_{k}^{r}) + \frac{\sum_{i\neq k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \sigma_{k}^{2}}{\big(\rho_{k}^{r}\sum_{i\neq k}p_{i}\,\mathrm{tr}(\mathbf{\Phi}_{k}\mathbf{F}) + \rho_{k}^{r}\sigma_{k}^{2} + \delta_{k}^{2}\big)\ln 2}\,(\rho_{k}-\rho_{k}^{r}) \triangleq \bar{f}(\rho_{k})^{\mathrm{ub}}, \ \forall k. \quad (54)$
We can see that this problem is a standard convex optimization problem and can be efficiently solved by utilizing the CVX toolbox.
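For illustration, a possible CVXPY-based sketch of sub-problem P6 is given below (the SCA sub-problem P5 can be coded in the same way). The function name, argument layout and the pre-computation of t_k = tr(Φ_kF), σ_{e,k} and β_{e,k} are our own choices under the expressions (24), (29), (36) and (54); Ψ^{-1}(E_th) is assumed to be passed in as a pre-computed scalar.

```python
import numpy as np
import cvxpy as cp
from scipy.special import erfinv

def solve_power_splitting(t, p, rho_r, F_fro, sigma2, delta2, sigma_phi,
                          gamma_th, zeta, eps, psi_inv_Eth):
    """Sketch of P6 with F and p fixed. t[k] = tr(Phi_k F) (real), p is a
    numpy array, rho_r is the previous SCA iterate, F_fro = ||F||_F."""
    K = len(p)
    P_tot = p.sum()
    rho = cp.Variable(K)
    obj, cons = 0, [rho >= 0, rho <= 1]
    for k in range(K):
        interf = P_tot - p[k]
        # concave part f(rho_k) and affine SCA upper bound of f_bar, Eq. (54)
        f_k = cp.log(rho[k] * (P_tot * t[k] + sigma2) + delta2) / np.log(2)
        slope = (interf * t[k] + sigma2) / (
            (rho_r[k] * (interf * t[k] + sigma2) + delta2) * np.log(2))
        f_bar_ub = (np.log2(rho_r[k] * (interf * t[k] + sigma2) + delta2)
                    + slope * (rho[k] - rho_r[k]))
        obj += f_k - f_bar_ub
        # information outage constraint (55b), linear in rho_k
        sig_e = sigma_phi * np.sqrt(p[k]**2 + gamma_th**2 * (np.sum(p**2) - p[k]**2))
        cons.append(rho[k] * (p[k] - gamma_th * interf) * t[k]
                    - (rho[k] * gamma_th * sigma2 + gamma_th * delta2)
                    >= np.sqrt(2) * sig_e * F_fro * erfinv(1 - 2 * zeta) * rho[k])
        # energy outage constraint (55c), linear in rho_k
        cons.append((1 - rho[k]) * P_tot * t[k]
                    - (psi_inv_Eth - (1 - rho[k]) * sigma2)
                    >= np.sqrt(2) * (1 - rho[k]) * P_tot * sigma_phi
                       * F_fro * erfinv(1 - 2 * eps))
    cp.Problem(cp.Maximize(obj), cons).solve()
    return rho.value
```

Because every term is either concave (the log of an affine expression) or affine in ρ, the problem is accepted by the disciplined convex programming rules and is solved by an exponential-cone-capable solver.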
E. The Overall Robust Joint Optimization Algorithm in Transmissive RMS-enabled SWIPT Networks
In this subsection, we propose the overall joint RMS transmissive coefficient, transmit power allocation, and power splitting ratio optimization algorithm and summarize it in Algorithm 1. First, when the transmit power allocation and power splitting ratio are given, the RMS transmissive coefficient are determined by solving the problem P4. We can respectively solve the problem P5 and P6 to obtain the transmit power allocation and power splitting ratio. At last, the three sub-problems are optimized alternately until the entire problem converges.
Algorithm 1 Robust Joint Optimization Algorithm in Transmissive RMS-enabled SWIPT Networks
1: Input: F^0, p^0, ρ^0, convergence threshold and iteration index r = 0.
2: repeat
3: Solve the problem P4 to obtain the RMS transmissive coefficient F*.
4: Solve the problem P5 to obtain the transmit power allocation p*.
5: Solve the problem P6 to obtain the power splitting ratio ρ*.
6: Update the iteration index r = r + 1.
7: until The whole problem satisfies the convergence threshold requirement.
8: return RMS transmissive coefficient, transmit power allocation, power splitting ratio.
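A compact Python skeleton of this alternating loop is sketched below. The three sub-problem solvers and the objective evaluator are injected as callables (for instance CVXPY-based implementations such as the P6 sketch above), so the snippet only fixes the control flow of Algorithm 1; none of the callable names come from the paper.

```python
def alternating_optimization(F0, p0, rho0, solve_P4, solve_P5, solve_P6,
                             sum_rate, max_iter=50, tol=1e-3):
    """Skeleton of Algorithm 1 (AO over the three convex sub-problems)."""
    F, p, rho = F0, p0, rho0
    prev = float("-inf")
    for _ in range(max_iter):
        F = solve_P4(p, rho, F)      # step 3: RMS transmissive coefficient
        p = solve_P5(F, rho, p)      # step 4: transmit power allocation
        rho = solve_P6(F, p, rho)    # step 5: power splitting ratios
        obj = sum_rate(F, p, rho)    # objective of P1 at the new point
        if obj - prev < tol:         # convergence threshold (10^-3 in Sec. IV)
            break
        prev = obj
    return F, p, rho
```

Since each sub-problem can only improve the common objective, the recorded objective values are monotone non-decreasing, which is exactly the convergence argument given in Section III-F.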
F. Computational Complexity and Convergence Analysis 1) Computational complexity analysis: In each iteration, the computational complexity of the proposed robust joint optimization algorithm is divided into three parts. The first is to solve the SDP problem P4 with complexity O(M^{3.5}) through the interior point method [40]. In addition, the complexity of calculating the subgradient through singular value decomposition is O(M^{3}). Accordingly, the computational complexity of the first part is at most O(M^{3.5}). Then, the second and third parts solve problems P5 and P6 with computational complexity O(K^{3.5}) each. Herein, letting r be the number of iterations required for the proposed robust joint optimization algorithm to reach convergence, the computational complexity of Algorithm 1 can be expressed as O(r(K^{3.5} + M^{3.5})).
2) Convergence analysis: The convergence of the proposed robust joint optimization Algorithm 1 in transmissive RMS-enabled SWIPT networks can be proved as follows.
Let F^r, p^r and ρ^r denote the solutions to the problems P4, P5 and P6 at the r-th iteration. The objective function can be expressed as R(F^r, p^r, ρ^r). In step 3 of Algorithm 1, the RMS transmissive coefficient F* can be obtained for given p^r and ρ^r. Hence, we have
R(F^r, p^r, ρ^r) ≤ R(F^{r+1}, p^r, ρ^r). (56)
In step 4 of Algorithm 1, the transmit power allocation p* can be obtained when F^{r+1} and ρ^r are given. Herein, we also have
R(F^{r+1}, p^r, ρ^r) ≤ R(F^{r+1}, p^{r+1}, ρ^r). (57)
Similarly, in step 5 of Algorithm 1, the power splitting ratio ρ* can also be obtained when F^{r+1} and p^{r+1} are given. Thus, we have
R(F^{r+1}, p^{r+1}, ρ^r) ≤ R(F^{r+1}, p^{r+1}, ρ^{r+1}). (58)
Based on the above, we can obtain
R(F^r, p^r, ρ^r) ≤ R(F^{r+1}, p^{r+1}, ρ^{r+1}). (59)
The above inequality shows that the value of the objective function is monotonically non-decreasing after each iteration of Algorithm 1. In addition, there is an upper bound on the objective function value of the problem P1. These two facts together ensure the convergence of Algorithm 1.
IV. NUMERICAL RESULTS
In this section, we demonstrate the effectiveness of the proposed robust joint optimization algorithm in transmissive RMS-enabled SWIPT networks through numerical simulations. In the simulation setting, we consider a three-dimensional communication network scenario, where the position of the RMS transmitter is (0m, 0m, 15m), and K = 4 users are randomly distributed in a circle centered at (0m, 0m, 0m) with a radius of 50m. The RMS is equipped with N = 16 elements. The antenna spacing is set to half the wavelength of the carrier. Meanwhile, we assume that the parameters of all users are the same, i.e., a_k = 150, b_k = 0.024 and ξ_k = 24mW [41]. Herein, we set σ^2 = −50dBm, γ_th = −30dB and E_th = −40dB in the simulations. The path loss exponent is set as α = 3. We set the path loss β to −20dB at a reference distance of 1m and set the Rician factor κ to 3dB. The threshold for algorithm convergence is set as 10^{−3}.
First, the convergence of the proposed algorithm is verified in transmissive RMS-enabled SWIPT networks. Fig. 2 shows the change of the system sum-rate over the algorithm iterations. It is obvious that the sum-rate increases as the number of iterations increases, which verifies that our proposed algorithm has good convergence. In addition, we compare the effect of different RMS transmissive element counts on system performance.
Considering that the array of the RMS is distributed in a UPA with the same number of elements in the horizontal and vertical directions, the number of RMS elements is a perfect square. Specifically, we compare the system sum-rate of the proposed algorithm when the number of RMS transmissive elements is 9, 16, and 36. It can be concluded that the larger the number of RMS elements, the higher the system sum-rate. In this section, we also verify the good performance of the proposed robust joint optimization algorithm in transmissive RMS-enabled SWIPT networks compared with other benchmark algorithms. (1) Benchmark 1 (RMS-random-phase): In this case, we adopt a random RMS coefficient to deploy the RMS and do not optimize its coefficient, i.e., only the problems P5 and P6 are optimized alternately. (2) Benchmark 2 (fixed-transmit-power): In this case, the transmit power is allocated equally to each user, while the RMS transmissive coefficient and power splitting ratio optimization still use the solutions of problems P4 and P6. (3) Benchmark 3 (fixed-power-splitting-ratio): In this case, the power splitting ratio is regarded as a constant, i.e., ρ_k = 0.5, ∀k, and we optimize problems P4 and P5 alternately.
Next, we investigate the relationship between the system sum-rate and the maximum transmit power of the RMS transceiver. As shown in Fig. 3, the system sum-rate increases with the maximum transmit power of the RMS transceiver, and the performance of our proposed algorithm outperforms all benchmarks, which reflects the advantage of jointly optimizing the RMS transmissive coefficient, transmit power allocation and power splitting ratio. The performance of benchmark 2 is the worst because it allocates the transmit power equally to each user and does not take advantage of the channel differences between users. The performance of the system can be improved by allocating more resources to users whose channel quality is better. Compared with benchmark 3, the proposed scheme has similar performance when the transmit power is high, because when the power is high, the constraints on the user's SINR and harvested energy are easier to meet, and the system performance mainly depends on the transmissive RMS coefficient design and transmit power allocation. Fig. 4 shows the system sum-rate versus the number of RMS transmissive elements. It can be seen that the system sum-rate increases as the number of transmissive RMS elements increases for all benchmark algorithms, which is mainly because when the number of transmissive elements increases, the number of reconstructed channels increases, and the channel gain at the receiver increases. This also reflects the performance advantage of the RMS as a low-cost passive component, which improves spatial diversity by increasing the number of RMS elements without requiring additional signal processing. It has a wide range of applications in IoT networks. Moreover, the proposed algorithm has obvious performance advantages for different numbers of RMS elements, which reflects the advantage of the robust joint optimization algorithm. Then, the system sum-rate versus the number of users is depicted in Fig. 5. It is obvious that the system sum-rate decreases as the number of users increases. This is mainly because we keep the maximum transmit power unchanged. When the number of users increases and the SINR constraints of each user must be satisfied, each user needs to obtain a certain amount of energy, which leads to increased mutual interference, and users with better channels have difficulty obtaining more power. Furthermore, our proposed algorithm still outperforms the other benchmarks for the same number of users, which indicates that our proposed algorithm can better deal with mutual interference. Fig. 6 shows the system sum-rate versus the energy harvested threshold. It is obvious that when the user's energy harvested threshold increases, the system sum-rate decreases. This is because when the threshold increases, the user needs to obtain a larger power allocation or decrease the power splitting ratio to meet the energy harvesting constraints, and the achievable rate of each user decreases with the decrease of the power splitting ratio. Therefore, the system sum-rate also decreases. The performance of benchmark 3 remains almost unchanged; this is because we satisfy the constraints by initially setting a reasonable splitting ratio, and the power splitting ratio does not change in the subsequent alternate optimization process. Fig. 7 depicts the system sum-rate versus the noise power spectral density.
It can be seen that the performance is greatly affected by noise due to the large interference between users in the model considered in this paper. As the noise power spectral density increases, the system sum-rate decreases. Compared with the other benchmarks, our proposed algorithm has the best performance, and the advantage is more obvious in environments with larger noise, which shows that our proposed joint optimization of the RMS transmissive coefficient, transmit power allocation and power splitting ratio can be used well in all environments. Fig. 8 illustrates the variation of our proposed robust joint optimization algorithm and the other benchmark algorithms versus different spectral norms of the channel error matrix. The abscissa of Fig. 8 is logarithmic. The increase of the channel estimation error leads to the degradation of system performance. This is mainly because a larger channel estimation error matrix makes the constraints (22d) and (22e) tighter, which degrades the performance of the system in terms of sum-rate. It is obvious that when the spectral norm of the channel error matrix is large, the performance of benchmark 3 decreases sharply. This is because a small power splitting ratio is set initially to meet the requirements of constraint (22e), the power splitting ratio cannot be updated during the alternate optimization process, and the objective function is significantly affected by the power splitting ratio at this time. In fact, perfect CSI cannot be obtained at the transmitter in a practical system; a certain channel estimation error is considered in our model, which makes it more robust and more conducive to deployment in actual communication networks.
V. CONCLUSIONS
In this paper, we investigate the system sum-rate maximization problem for transmissive RMS-enabled SWIPT networks. Specifically, the RMS transmissive coefficient, transmit power allocation and power splitting ratio are jointly designed under SINR and energy harvesting requirements based on the outage probability criterion. First, the problem containing outage probability constraints is transformed into a tractable optimization problem. Owing to the non-convexity of the transformed problem, an AO algorithm based on SCA, DC and the penalty function method is implemented to handle the non-convexity and solve the problem. Besides, we analyze the complexity of the proposed algorithm and prove its convergence. From the numerical results, it can be concluded that our proposed algorithm outperforms other algorithms in terms of system sum-rate, which demonstrates that the transmissive RMS transceiver is a promising multi-antenna technology for the design of future wireless communication networks.
Fig. 1. Transmissive RMS transceiver-enabled SWIPT networks.
Note that tr(B) = Σ_n σ_n(B), ‖B‖_2 = σ_1(B) represents the spectral norm of matrix B, and σ_n(B) represents the n-th largest singular value of matrix B. On the basis of Lemma 1, the rank-one constraint (44e) in the optimization problem P2 can be transformed as follows: rank(F) = 1 ⇒ tr(F) − ‖F‖_2 = 0.
Fig. 3. System sum-rate versus maximum transmit power.
Fig. 4. System sum-rate versus the number of RMS transmissive elements.
Fig. 5. System sum-rate versus the number of users.
Fig. 6. System sum-rate versus the energy harvested threshold.
Fig. 7. System sum-rate versus the noise power spectral density.
Fig. 8. System sum-rate versus spectral norm of channel error matrix.
(Each figure plots the system sum-rate (bps/Hz) for the proposed algorithm and Benchmarks 1-3.)
We consider a quasi-static channel model, i.e., during each transmission time duration T , h k is a constant. Therefore, we use the instantaneous value of the channel gain to compute the channel covariance matrix instead of the expectation operator.
Data delivery in wireless multimedia sensor networks: Challenging and defying in the IoT era. F Al-Turjman, A Radwan, IEEE Wireless Commun. 245F. Al-Turjman and A. Radwan, "Data delivery in wireless multimedia sensor networks: Challenging and defying in the IoT era," IEEE Wireless Commun., vol. 24, no. 5, pp. 126-131, Oct. 2017.
A survey on resource allocation for 5G heterogeneous networks: Current research, future trends, and challenges. Y Xu, G Gui, H Gacanin, F Adachi, IEEE Commun. Surveys Tuts. 232Y. Xu, G. Gui, H. Gacanin, and F. Adachi, "A survey on resource allocation for 5G heterogeneous networks: Current research, future trends, and challenges," IEEE Commun. Surveys Tuts., vol. 23, no. 2, pp. 668-695, Secondquarter 2021.
Adaptive resource allocation in SWIPT-enabled cognitive IoT networks. W Sun, Q Song, J Zhao, L Guo, A Jamalipour, IEEE Internet Things J. 91W. Sun, Q. Song, J. Zhao, L. Guo, and A. Jamalipour, "Adaptive resource allocation in SWIPT-enabled cognitive IoT networks," IEEE Internet Things J., vol. 9, no. 1, pp. 535-545, Jan. 2022.
Wireless powered sensor networks for internet of things: Maximum throughput and optimal power allocation. Z Chu, F Zhou, Z Zhu, R Q Hu, P Xiao, IEEE Internet Things J. 51Z. Chu, F. Zhou, Z. Zhu, R. Q. Hu, and P. Xiao, "Wireless powered sensor networks for internet of things: Maximum throughput and optimal power allocation," IEEE Internet Things J., vol. 5, no. 1, pp. 310-321, Feb. 2018.
Resource allocation for secure swipt-enabled D2D communications with α fairness. Y Xu, B Gu, D Li, Z Yang, C Huang, K.-K Wong, IEEE Trans. Veh. Technol. 711Y. Xu, B. Gu, D. Li, Z. Yang, C. Huang, and K.-K. Wong, "Resource allocation for secure swipt-enabled D2D communications with α fair- ness," IEEE Trans. Veh. Technol., vol. 71, no. 1, pp. 1101-1106, Jan. 2022.
Power splittingbased SWIPT with decode-and-forward full-duplex relaying. H Liu, K J Kim, K S Kwak, H Vincent Poor, IEEE Trans. Wireless Commun. 1511H. Liu, K. J. Kim, K. S. Kwak, and H. Vincent Poor, "Power splitting- based SWIPT with decode-and-forward full-duplex relaying," IEEE Trans. Wireless Commun., vol. 15, no. 11, pp. 7561-7577, Nov. 2016.
Joint beamforming and power-splitting control in downlink cooperative SWIPT NOMA systems. Y Xu, C Shen, Z Ding, X Sun, S Yan, G Zhu, Z Zhong, IEEE Trans. Signal Process. 6518Y. Xu, C. Shen, Z. Ding, X. Sun, S. Yan, G. Zhu, and Z. Zhong, "Joint beamforming and power-splitting control in downlink cooperative SWIPT NOMA systems," IEEE Trans. Signal Process., vol. 65, no. 18, pp. 4874-4886, Sept. 2017.
Energy-assisted decodeand-forward for energy harvesting cooperative cognitive networks. D K Verma, R Y Chang, F.-T Chien, IEEE Trans. Cogn. Commun. Netw. 33D. K. Verma, R. Y. Chang, and F.-T. Chien, "Energy-assisted decode- and-forward for energy harvesting cooperative cognitive networks," IEEE Trans. Cogn. Commun. Netw., vol. 3, no. 3, pp. 328-342, Sept. 2017.
Energy efficiency optimization for CoMP-SWIPT heterogeneous networks. J Tang, A Shojaeifard, D K C So, K.-K Wong, N Zhao, IEEE Trans. Commun. 6612J. Tang, A. Shojaeifard, D. K. C. So, K.-K. Wong, and N. Zhao, "Energy efficiency optimization for CoMP-SWIPT heterogeneous networks," IEEE Trans. Commun., vol. 66, no. 12, pp. 6368-6383, Dec. 2018.
Robust resource allocation and power splitting in SWIPT enabled heterogeneous networks: A robust minimax approach. Y Xu, G Li, Y Yang, M Liu, G Gui, IEEE Internet Things J. 66Y. Xu, G. Li, Y. Yang, M. Liu, and G. Gui, "Robust resource allocation and power splitting in SWIPT enabled heterogeneous networks: A robust minimax approach," IEEE Internet Things J., vol. 6, no. 6, pp. 10 799- 10 811, Dec. 2019.
Simultaneous wireless information and power transfer in modern communication systems. I Krikidis, S Timotheou, S Nikolaou, G Zheng, D W K Ng, R Schober, IEEE Commun. Mag. 5211I. Krikidis, S. Timotheou, S. Nikolaou, G. Zheng, D. W. K. Ng, and R. Schober, "Simultaneous wireless information and power transfer in modern communication systems," IEEE Commun. Mag., vol. 52, no. 11, pp. 104-110, Nov. 2014.
Joint transceiver optimization of MIMO SWIPT systems for harvested power maximization. Z Chen, Q Shi, Q Wu, W Xu, IEEE Signal Process. Lett. 2410Z. Chen, Q. Shi, Q. Wu, and W. Xu, "Joint transceiver optimization of MIMO SWIPT systems for harvested power maximization," IEEE Signal Process. Lett., vol. 24, no. 10, pp. 1557-1561, Sept. 2017.
System and design for selective OFDM SWIPT transmission. R F Buckley, R W Heath, IEEE Trans. Green Commun. Netw. 51R. F. Buckley and R. W. Heath, "System and design for selective OFDM SWIPT transmission," IEEE Trans. Green Commun. Netw., vol. 5, no. 1, pp. 335-347, Dec. 2021.
Joint beamforming design and power splitting optimization in IRS-assisted SWIPT NOMA networks. Z Li, W Chen, Q Wu, K Wang, J Li, IEEE Trans. Wireless Commun. 213Z. Li, W. Chen, Q. Wu, K. Wang, and J. Li, "Joint beamforming design and power splitting optimization in IRS-assisted SWIPT NOMA networks," IEEE Trans. Wireless Commun., vol. 21, no. 3, pp. 2019- 2033, Sept. 2022.
Wireless information and energy transfer in multi-antenna interference channel. C Shen, W.-C Li, T.-H Chang, IEEE Trans. Signal Process. 6223C. Shen, W.-C. Li, and T.-H. Chang, "Wireless information and energy transfer in multi-antenna interference channel," IEEE Trans. Signal Process., vol. 62, no. 23, pp. 6249-6264, Dec. 2014.
Max-min fairness optimal rate-energy trade-off of SWIPT for massive MIMO downlink. D Kudathanthirige, R Shrestha, G A Baduge, IEEE Commun. Lett. 234D. Kudathanthirige, R. Shrestha, and G. A. Aruma Baduge, "Max-min fairness optimal rate-energy trade-off of SWIPT for massive MIMO downlink," IEEE Commun. Lett., vol. 23, no. 4, pp. 688-691, Apr. 2019.
Wireless information and power transfer in multiuser OFDM systems. X Zhou, R Zhang, C K Ho, IEEE Trans. Wireless Commun. 134X. Zhou, R. Zhang, and C. K. Ho, "Wireless information and power transfer in multiuser OFDM systems," IEEE Trans. Wireless Commun., vol. 13, no. 4, pp. 2282-2294, Apr. 2014.
Outage probability and throughput of multirelay SWIPT-WPCN networks with nonlinear EH model and imperfect CSI. R Jiang, K Xiong, P Fan, L Zhou, Z Zhong, IEEE Syst. J. 141R. Jiang, K. Xiong, P. Fan, L. Zhou, and Z. Zhong, "Outage probability and throughput of multirelay SWIPT-WPCN networks with nonlinear EH model and imperfect CSI," IEEE Syst. J., vol. 14, no. 1, pp. 1206- 1217, Oct. 2020.
Reconfigurable intelligent surfaces vs. relaying: Differences, similarities, and performance comparison. M Di Renzo, K Ntontin, J Song, F H Danufane, X Qian, F Lazarakis, J De Rosny, D.-T Phan-Huy, O Simeone, R Zhang, M Debbah, G Lerosey, M Fink, S Tretyakov, S Shamai, IEEE Open J. Commun. Soc. 1M. Di Renzo, K. Ntontin, J. Song, F. H. Danufane, X. Qian, F. Lazarakis, J. De Rosny, D.-T. Phan-Huy, O. Simeone, R. Zhang, M. Debbah, G. Lerosey, M. Fink, S. Tretyakov, and S. Shamai, "Reconfigurable in- telligent surfaces vs. relaying: Differences, similarities, and performance comparison," IEEE Open J. Commun. Soc., vol. 1, pp. 798-807, Jun. 2020.
Robust beamforming design and time allocation for IRS-assisted wireless powered communication networks. Z Li, W Chen, Q Wu, H Cao, K Wang, J Li, IEEE Trans. Commun. 704Z. Li, W. Chen, Q. Wu, H. Cao, K. Wang, and J. Li, "Robust beam- forming design and time allocation for IRS-assisted wireless powered communication networks," IEEE Trans. Commun., vol. 70, no. 4, pp. 2838-2852, Apr. 2022.
Robust max-min energy efficiency for RIS-aided hetnets with distortion noises. Y Xu, H Xie, Q Wu, C Huang, C Yuen, IEEE Trans. Commun. 702Y. Xu, H. Xie, Q. Wu, C. Huang, and C. Yuen, "Robust max-min energy efficiency for RIS-aided hetnets with distortion noises," IEEE Trans. Commun., vol. 70, no. 2, pp. 1457-1471, Feb. 2022.
Capacity characterization for intelligent reflecting surface aided mimo communication. S Zhang, R Zhang, IEEE J. Sel. Areas Commun. 388S. Zhang and R. Zhang, "Capacity characterization for intelligent reflect- ing surface aided mimo communication," IEEE J. Sel. Areas Commun., vol. 38, no. 8, pp. 1823-1838, Jun. 2020.
Resource allocation for IRS-assisted wireless powered communication networks. H Cao, Z Li, W Chen, IEEE Wireless Commun. Lett. 1011H. Cao, Z. Li, and W. Chen, "Resource allocation for IRS-assisted wireless powered communication networks," IEEE Wireless Commun. Lett., vol. 10, no. 11, pp. 2450-2454, Nov. 2021.
Deep reinforcement learning-based intelligent reflecting surface for secure wireless communications. H Yang, Z Xiong, J Zhao, D Niyato, L Xiao, Q Wu, IEEE Trans. Wireless Commun. 201H. Yang, Z. Xiong, J. Zhao, D. Niyato, L. Xiao, and Q. Wu, "Deep reinforcement learning-based intelligent reflecting surface for secure wireless communications," IEEE Trans. Wireless Commun., vol. 20, no. 1, pp. 375-388, Sept. 2021.
RISenhanced WPCNs: Joint radio resource allocation and passive beamforming optimization. Y Xu, Z Gao, Z Wang, C Huang, Z Yang, C Yuen, IEEE Trans. Veh. Technol. 708Y. Xu, Z. Gao, Z. Wang, C. Huang, Z. Yang, and C. Yuen, "RIS- enhanced WPCNs: Joint radio resource allocation and passive beam- forming optimization," IEEE Trans. Veh. Technol., vol. 70, no. 8, pp. 7980-7991, Aug. 2021.
Reconfigurable intelligent surfaces in 6g: Reflective, transmissive, or both?. S Zeng, H Zhang, B Di, Y Tan, Z Han, H V Poor, L Song, IEEE Commun. Lett. 256S. Zeng, H. Zhang, B. Di, Y. Tan, Z. Han, H. V. Poor, and L. Song, "Reconfigurable intelligent surfaces in 6g: Reflective, transmissive, or both?" IEEE Commun. Lett., vol. 25, no. 6, pp. 2063-2067, Feb. 2021.
Beyond intelligent reflecting surfaces: Reflective-transmissive metasurface aided communications for full-dimensional coverage extension. S Zhang, H Zhang, B Di, Y Tan, Z Han, L Song, IEEE Trans. Veh. Technol. 6911S. Zhang, H. Zhang, B. Di, Y. Tan, Z. Han, and L. Song, "Beyond intelligent reflecting surfaces: Reflective-transmissive metasurface aided communications for full-dimensional coverage extension," IEEE Trans. Veh. Technol., vol. 69, no. 11, pp. 13 905-13 909, Sept. 2020.
Joint design for simultaneously transmitting and reflecting (STAR) RIS assisted NOMA systems. J Zuo, Y Liu, Z Ding, L Song, H Vincent Poor, IEEE Trans. Wireless Commun. 2022J. Zuo, Y. Liu, Z. Ding, L. Song, and H. Vincent Poor, "Joint design for simultaneously transmitting and reflecting (STAR) RIS assisted NOMA systems," IEEE Trans. Wireless Commun., pp. 1-1, early access 2022.
Resource allocation in STAR-RIS-aided networks: OMA and NOMA. C Wu, X Mu, Y Liu, X Gu, X Wang, IEEE Trans. Wireless Commun. 219C. Wu, X. Mu, Y. Liu, X. Gu, and X. Wang, "Resource allocation in STAR-RIS-aided networks: OMA and NOMA," IEEE Trans. Wireless Commun., vol. 21, no. 9, pp. 7653-7667, Sept. 2022.
MIMO transmission through reconfigurable intelligent surface: System design, analysis, and implementation. W Tang, J Y Dai, M Z Chen, K.-K Wong, X Li, X Zhao, S Jin, Q Cheng, T J Cui, IEEE J. Sel. Areas Commun. 3811W. Tang, J. Y. Dai, M. Z. Chen, K.-K. Wong, X. Li, X. Zhao, S. Jin, Q. Cheng, and T. J. Cui, "MIMO transmission through reconfigurable intelligent surface: System design, analysis, and implementation," IEEE J. Sel. Areas Commun., vol. 38, no. 11, pp. 2683-2699, Jul. 2020.
High-efficiency transmissive programmable metasurface for multimode OAM generation. X Bai, F Kong, Y Sun, G Wang, J Qian, X Li, A Cao, C He, X Liang, R Jin, Advanced Optical Materials. 817X. Bai, F. Kong, Y. Sun, G. Wang, J. Qian, X. Li, A. Cao, C. He, X. Liang, R. Jin et al., "High-efficiency transmissive programmable metasurface for multimode OAM generation," Advanced Optical Ma- terials, vol. 8, no. 17, p. 2000570, Jun. 2020.
Space-time-frequency modulation mechanisms of monochromatic and nonmonochromatic electromagnetic waves on a digital programmable transmission metasurface. X Wan, J W Wang, Z A Huang, B Y Li, Q Xiao, T J Cui, Advanced Functional Materials. 32132107557X. Wan, J. W. Wang, Z. A. Huang, B. Y. Li, Q. Xiao, and T. J. Cui, "Space-time-frequency modulation mechanisms of monochromatic and nonmonochromatic electromagnetic waves on a digital programmable transmission metasurface," Advanced Functional Materials, vol. 32, no. 13, p. 2107557, Dec. 2022.
Multifunctional vortex beam generation by a dynamic reflective metasurface. B Liu, Y He, S.-W Wong, Y Li, Advanced Optical Materials. 94B. Liu, Y. He, S.-W. Wong, and Y. Li, "Multifunctional vortex beam generation by a dynamic reflective metasurface," Advanced Optical Materials, vol. 9, no. 4, p. 2001689, Dec. 2021.
A 1-bit 10 × 10 reconfigurable reflectarray antenna: Design, optimization, and experiment. H Yang, F Yang, S Xu, Y Mao, M Li, X Cao, J Gao, IEEE Trans. Antennas Propag. 646H. Yang, F. Yang, S. Xu, Y. Mao, M. Li, X. Cao, and J. Gao, "A 1-bit 10 × 10 reconfigurable reflectarray antenna: Design, optimization, and experiment," IEEE Trans. Antennas Propag., vol. 64, no. 6, pp. 2246- 2254, Apr. 2016.
Beamforming design and power allocation for transmissive RMS-based transmitter architectures. Z Li, W Chen, H Cao, IEEE Wireless Commun. Lett. 111Z. Li, W. Chen, and H. Cao, "Beamforming design and power allocation for transmissive RMS-based transmitter architectures," IEEE Wireless Commun. Lett., vol. 11, no. 1, pp. 53-57, Jan. 2022.
Practical non-linear energy harvesting model and resource allocation for SWIPT systems. E Boshkovska, D W K Ng, N Zlatanov, R Schober, IEEE Commun. Lett. 1912E. Boshkovska, D. W. K. Ng, N. Zlatanov, and R. Schober, "Practical non-linear energy harvesting model and resource allocation for SWIPT systems," IEEE Commun. Lett., vol. 19, no. 12, pp. 2082-2085, Dec. 2015.
Robust downlink beamforming based on outage probability specifications. B K Chalise, S Shahbazpanahi, A Czylwik, A B Gershman, IEEE Trans. Wireless Commun. 610B. K. Chalise, S. Shahbazpanahi, A. Czylwik, and A. B. Gershman, "Robust downlink beamforming based on outage probability specifica- tions," IEEE Trans. Wireless Commun., vol. 6, no. 10, pp. 3498-3503, Oct. 2007.
3D UAV trajectory and communication design for simultaneous uplink and downlink transmission. M Hua, L Yang, Q Wu, A L Swindlehurst, IEEE Trans. Commun. 689M. Hua, L. Yang, Q. Wu, and A. L. Swindlehurst, "3D UAV trajectory and communication design for simultaneous uplink and downlink trans- mission," IEEE Trans. Commun., vol. 68, no. 9, pp. 5908-5923, Sept. 2020.
Robust uplink beamforming based upon minimum outage probability criterion. B Chalise, A Czylwik, IEEE Global Telecommunications Conference, 2004. GLOBECOM '04. 6B. Chalise and A. Czylwik, "Robust uplink beamforming based upon minimum outage probability criterion," in IEEE Global Telecommunica- tions Conference, 2004. GLOBECOM '04., vol. 6, 2004, pp. 3974-3978
S Boyd, S P Boyd, L Vandenberghe, Convex optimization. Cambridge university pressS. Boyd, S. P. Boyd, and L. Vandenberghe, Convex optimization. Cambridge university press, 2004.
Global energy efficiency in secure MISO SWIPT systems with nonlinear power-splitting EH model. Y Lu, K Xiong, P Fan, Z Ding, Z Zhong, K B Letaief, IEEE J. Sel. Areas Commun. 371Y. Lu, K. Xiong, P. Fan, Z. Ding, Z. Zhong, and K. B. Letaief, "Global energy efficiency in secure MISO SWIPT systems with non- linear power-splitting EH model," IEEE J. Sel. Areas Commun., vol. 37, no. 1, pp. 216-232, Sept. 2019.
| [] |
[
"Vine dependence graphs with latent variables as summaries for gene expression data",
"Vine dependence graphs with latent variables as summaries for gene expression data"
] | [
"Xinyao Fan \nDepartment of Statistics\nUniversity of British Columbia\nCanada\n",
"Harry Joe \nDepartment of Statistics\nUniversity of British Columbia\nCanada\n",
"Yongjin Park \nDepartment of Statistics\nUniversity of British Columbia\nCanada\n"
] | [
"Department of Statistics\nUniversity of British Columbia\nCanada",
"Department of Statistics\nUniversity of British Columbia\nCanada",
"Department of Statistics\nUniversity of British Columbia\nCanada"
] | [] | The advent of high-throughput sequencing technologies has lead to vast comparative genome sequences. The construction of gene-gene interaction networks or dependence graphs on the genome scale is vital for understanding the regulation of biological processes. Different dependence graphs can provide different information.Some existing methods for dependence graphs based on high-order partial correlations are sparse and not informative when there are latent variables that can explain much of the dependence in groups of genes. Other methods of dependence graphs based on correlations and first-order partial correlations might have dense graphs. When genes can be divided into groups with stronger within group dependence in gene expression than between group dependence, we present a dependence graph based on truncated vines with latent variables that makes use of group information and low-order partial correlations. The graphs are not dense, and the genes that might be more central have more neighbors in the vine dependency graph. We demonstrate the use of our dependence graph construction on two RNA-seq data sets -yeast and prostate cancer. There is some biological evidence to support the relationship between genes in the resulting dependence graphs.A flexible framework is provided for building dependence graphs via low-order partial correlations and formation of groups, leading to graphs that are not too sparse or dense. We anticipate that this approach will help to identify groups that might be central to different biological functions.• FOCI: This method can be considered as improvement on the direct threshold method by considered first-order partial correlations. However, the method only consider the (modified) first-order partial correlation. For variables that are highly correlated due to several latent variables (assume only positive dependence exists), it will produce a dense connected graph, because information from higher-order partial correlation is not used.• Truncated partial correlation vines: The partial correlation vine is explained in details in Section 2.4. Before details of the dependence graph based on the vine is given, the example below is used to show that, for factor structures, it avoids the denseness of the graph based on thresholding correlations and it avoids the sparseness of the conditional independence graph. The reason is that it makes good use of the strong correlations and some additional low-order partial correlations. | null | [
"https://export.arxiv.org/pdf/2303.01626v1.pdf"
] | 257,353,455 | 2303.01626 | b962c489d51323c1904c09d1b5e28bd544f300e8 |
Vine dependence graphs with latent variables as summaries for gene expression data
February, 2023
Xinyao Fan
Department of Statistics
University of British Columbia
Canada
Harry Joe
Department of Statistics
University of British Columbia
Canada
Yongjin Park
Department of Statistics
University of British Columbia
Canada
Vine dependence graphs with latent variables as summaries for gene expression data
February, 2023partial correlationGaussian factor modeldependence graphgene-gene networktruncated vinelatent dependenceRNA-seq
The advent of high-throughput sequencing technologies has lead to vast comparative genome sequences. The construction of gene-gene interaction networks or dependence graphs on the genome scale is vital for understanding the regulation of biological processes. Different dependence graphs can provide different information.Some existing methods for dependence graphs based on high-order partial correlations are sparse and not informative when there are latent variables that can explain much of the dependence in groups of genes. Other methods of dependence graphs based on correlations and first-order partial correlations might have dense graphs. When genes can be divided into groups with stronger within group dependence in gene expression than between group dependence, we present a dependence graph based on truncated vines with latent variables that makes use of group information and low-order partial correlations. The graphs are not dense, and the genes that might be more central have more neighbors in the vine dependency graph. We demonstrate the use of our dependence graph construction on two RNA-seq data sets -yeast and prostate cancer. There is some biological evidence to support the relationship between genes in the resulting dependence graphs.A flexible framework is provided for building dependence graphs via low-order partial correlations and formation of groups, leading to graphs that are not too sparse or dense. We anticipate that this approach will help to identify groups that might be central to different biological functions.• FOCI: This method can be considered as improvement on the direct threshold method by considered first-order partial correlations. However, the method only consider the (modified) first-order partial correlation. For variables that are highly correlated due to several latent variables (assume only positive dependence exists), it will produce a dense connected graph, because information from higher-order partial correlation is not used.• Truncated partial correlation vines: The partial correlation vine is explained in details in Section 2.4. Before details of the dependence graph based on the vine is given, the example below is used to show that, for factor structures, it avoids the denseness of the graph based on thresholding correlations and it avoids the sparseness of the conditional independence graph. The reason is that it makes good use of the strong correlations and some additional low-order partial correlations.
Introduction
With the rapid advancements in accuracy and throughput, high-throughput sequencing technologies enabled researchers to routinely measure mRNA transcript levels over tens of thousands of genes. Since observed gene expression values were derived from interactions between genes, having gene expression profiles across many samples has naturally sparked scientific interest in inferring such interaction patterns from the observed data. Gene-gene interaction networks implicate functional organizations of complex biological pathways and essential cellular processes. Building an accurate interaction network of genes improves our understanding of complex biological systems and provides a tool to make proper interpretations of large, high-dimensional genomics data. Since the invention of microarray technology [28], RNA-sequencing [30], and single-cell RNA-sequencing [23] to date, finding an underlying network structure from observed data has been a fundamentally important problem in biology.
Many approaches have been proposed in statistics to understand interaction patterns embedded in a large-scale, high-dimensional data matrix by constructing a statistical (conditional) dependency graph. Traditionally, the inference problem has been formulated to search for an optimal model in the class of Gaussian graphical models or other relevant conditional independence graphs [32]. There, we measure high-order partial correlation values directly from the inverse covariance matrix estimated from observed data and identify one type of dependence graph.
However, finding such a complex dependency graph could become intractable in computation, resulting in a graph model overfitting to the observed data unless stringent regularization penalties were imposed; see [21] and references therein. Alternatively, dependence graphs can be constituted of low-order partial correlations, which rely only on relatively fast and reliable estimates.
The examples include a traditional spanning tree model [7] and a vine structure [19]. Although such a method based on low-order partial correlation patterns generally leads to a more informative dependence graph than the high-order graphs given the limited sample size [24], we noticed challenges in real-world data, which were generally affected by a handful of latent group-wise effects. Unless we properly address latent group variables, which explain a large proportion of variance in the observed data, we show that learning a conditional independence graph is not suitable and may lead to an unwanted sparse graph. This work will propose a new methodology for inferring a dependency graph from gene expression data, building on a low-order vine graph model [19] while incorporating latent group variables to improve the resolutions among the observed variables along with the relationships with the nuisance variables introduced by a latent group structure.
Summary of existing methods
In this section, a few methods for constructing dependence graphs are summarized. The methods are defined in terms of partial correlations computed from a covariance or correlation matrix so the main formulas for partial correlations are given in section 2.1 before methods are summarized in Section 2.2. The methods include the conditional independence graph with thresholding, the FOCI method in [24] and the truncated partial correlation vine. A more basic graph is based on the Thresholding-Correlation method, for which two variables are connected in the dependence graph if their absolute correlation exceeds a given threshold. Section 2.3 includes the simple case of a correlation matrix based on a factor structure to illustrate that the Thresholding-Correlation method tends to be too dense and the conditional independence graph tends to be too sparse when the dimension d is large.
Partial correlations
This section summarizes some required results on partial correlations in order to explain different methods of constructing dependence graphs. There is an assumption that the variables jointly have a multivariate Gaussian distribution after appropriate transforms on each variable.
Let the (transformed) observation vector be X_I = (X_1, . . . , X_d), where d ≥ 2 is large and I is the index set {1, 2, . . . , d}. Suppose X_I has a multivariate Gaussian distribution. For a subset S ⊆ I containing {i, j}, let L_{i,j} = S \ {i, j} be the conditioning set; the partial correlation ρ_{i,j;L_{i,j}} is the correlation of X_i and X_j given X_{L_{i,j}}. The cardinality of L_{i,j} is referred to as the order of the partial correlation coefficient. In the case where the order is zero, |S| = |{i, j}| = 2 and L_{i,j} = ∅. Then ρ_{i,j;∅} = ρ_{ij}, which is the standard pairwise correlation between X_i and X_j, i, j ∈ I, i ≠ j. For any order |L_{i,j}| ≥ 1, [1] derives a formula to recursively calculate the partial correlations in terms of lower-order (partial) correlations. Suppose S ⊆ I with cardinality |S| ≥ 3. Consider a set {i, j, k} ⊆ S with three distinct indices. Let L̃ := S \ {i, j, k} and L_{i,j} = L̃ ∪ {k}. Then
ρ_{i,j;L_{i,j}} = ( ρ_{i,j;L̃} − ρ_{i,k;L̃} ρ_{j,k;L̃} ) / ( √(1 − ρ²_{i,k;L̃}) √(1 − ρ²_{j,k;L̃}) ). (1)
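A minimal sketch of Eq. (1) as a recursive computation on a given correlation matrix; the example matrix below is an arbitrary illustrative (approximately 1-factor) positive definite correlation matrix, and the indices in the code are 0-based.

```python
# Recursive computation of partial correlations via Eq. (1).
import numpy as np

def partial_corr(R, i, j, cond):
    """rho_{i,j;cond} from correlation matrix R, applying Eq. (1)
    recursively; cond is a list of conditioning indices (0-based)."""
    if len(cond) == 0:
        return R[i, j]
    k, rest = cond[0], list(cond[1:])        # peel off one conditioning index
    r_ij = partial_corr(R, i, j, rest)
    r_ik = partial_corr(R, i, k, rest)
    r_jk = partial_corr(R, j, k, rest)
    return (r_ij - r_ik * r_jk) / np.sqrt((1 - r_ik**2) * (1 - r_jk**2))

# illustrative correlation matrix, close to a 1-factor structure
R = np.array([[1.0, 0.6, 0.5, 0.4],
              [0.6, 1.0, 0.5, 0.4],
              [0.5, 0.5, 1.0, 0.3],
              [0.4, 0.4, 0.3, 1.0]])
print(round(partial_corr(R, 0, 1, [2]), 4))     # rho_{1,2;3}
print(round(partial_corr(R, 0, 1, [2, 3]), 4))  # rho_{1,2;3,4}
```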
Let the precision matrix (inverse of the covariance matrix) be Σ^{−1} = (σ^{ij}) and let I_{ij} = I \ {i, j}. For the inverse correlation matrix, it is well known from [32], Section 5.8, that for i ≠ j,
ρ_{ij;I_{ij}} = − σ^{ij} / √(σ^{ii} σ^{jj}). (2)
These are the highest order partial correlations.
Summary of dependence graphs
Conditional dependence graph with thresholding
Let V = {1, . . . , d} be the set of vertices, one for each variable. The edge set E consists of unordered pairs (i, j); (i, j) ∈ E if there is an edge between X_i and X_j. For a theoretical covariance matrix, X_i and X_j are connected in the graph if they are conditionally dependent given X_{L_{i,j}}, and are not connected in the graph only if X_i ⊥ X_j | X_{L_{i,j}}. That is, there is no edge connecting variables i, j if σ^{ij} = 0 or, equivalently, ρ_{ij;L_{i,j}} = 0 for i ≠ j.
For a sample covariance matrix, Section 6.1 of [32] suggests a naive procedure with a threshold. The sample-based conditional independency graph is based on the rule that an edge for two variables is not included in the graph if the absolute high-order partial correlation coefficient in (2) for these two variables is below the threshold.
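A minimal sketch of this thresholding procedure, assuming a sample correlation matrix R is already available: the highest-order partial correlations are obtained from R^{-1} via Eq. (2), and an edge is kept only when its absolute value exceeds the chosen threshold (0.2 here is only an illustrative value).

```python
# Highest-order partial correlations via Eq. (2) and a thresholded
# conditional independence graph.
import numpy as np

def highest_order_pcor(R):
    """Matrix of partial correlations of each pair given all other variables."""
    P = np.linalg.inv(R)
    scale = np.sqrt(np.outer(np.diag(P), np.diag(P)))
    pcor = -P / scale
    np.fill_diagonal(pcor, 1.0)
    return pcor

def ci_graph_edges(R, threshold=0.2):
    """Keep edge (i, j) only if the absolute partial correlation exceeds the threshold."""
    pcor = highest_order_pcor(R)
    d = R.shape[0]
    return [(i, j) for i in range(d) for j in range(i + 1, d)
            if abs(pcor[i, j]) > threshold]
```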
FOCI (first-order-conditional independence method)
In the FOCI method proposed in [24], the two variables X i and X j are connected only if there are no other variables in the analysis for which X i and X j are conditionally independent or which causes an association reversal (that means partial correlation given some variable X k and correlation between X i , X j has opposite signs). Their method defines a modified first-order partial correlation similar to equation (1).
More formally, e_{ij} is an edge in G if and only if there is no other variable k (k ≠ i, k ≠ j) such that ρ̃_{ij;k} ≈ 0 or ρ̃_{ij;k} < 0, where
ρ̃_{ij;k} = ( |ρ_{ij}| − |ρ_{ik}| |ρ_{jk}| ) / ( √(1 − ρ²_{ik}) √(1 − ρ²_{jk}) ).
A threshold is used for the closeness to 0.
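A minimal sketch of the FOCI edge rule applied to a correlation matrix R; the tolerance eps used for "close to 0" is an illustrative choice, not a value prescribed by [24].

```python
# Sketch of the FOCI edge rule on a correlation matrix R.
import numpy as np

def foci_edge(R, i, j, eps=0.05):
    """(i, j) is connected only if no third variable k makes the modified
    first-order partial correlation approximately zero or negative."""
    d = R.shape[0]
    for k in range(d):
        if k in (i, j):
            continue
        num = abs(R[i, j]) - abs(R[i, k]) * abs(R[j, k])
        den = np.sqrt((1 - R[i, k]**2) * (1 - R[j, k]**2))
        if num / den < eps:          # near zero or association reversal
            return False
    return True

def foci_edges(R, eps=0.05):
    d = R.shape[0]
    return [(i, j) for i in range(d) for j in range(i + 1, d)
            if foci_edge(R, i, j, eps)]
```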
Truncated partial correlation vines
[3] propose vines as a graphical object to summarize conditional dependence; definitions for vines are given later in Section 2.4. A common use is to find a parametrization of the correlation matrix so that dependence can be summarized with low-order partial correlations; that is, higher-order partial correlations are small in absolute value. An algorithm to do this is briefly summarized next.
First, a maximal spanning tree with (d − 1) edges is constructed, combining variables with the largest absolute correlations. The tree can summarize all of the dependence of a set of variables only if, whenever A-B-C is a segment of the tree, variables A and C are conditionally independent given variable B. That is, if a maximal spanning tree summarizes the dependence, then there is Markov dependence, meaning that two variables are conditionally independent given the variables in the path between the two. If the correlation matrix based on this tree is not close to the (empirical) correlation matrix, then additional edges between 2 variables are added based on large absolute partial correlations given one conditioning variable (the 2 variables are each connected to the conditioning variable). Subsequently, more edges can be added based on conditioning on 2 or more variables.
More details about vines are reviewed in Section 2.4.
Comparisons
Some comments of the different dependence graphs are summarized in this section. A numerical example is given to show that the conditional independence graph with thresholding can lead to sparse graphs when latent variables can explain much of the dependence among the variables.
A theoretical explanation is given for this property.
• Thresholding on correlations: The main problem with this method is that it leads to a dense graph if the variables are mostly at least moderately correlated with each other.
For high-dimensional data, if there can be a cluster or subset of variables that are highly correlated, the dependence graph will have a near-complete graph for this cluster. In addition, the method does not consider any information on conditional dependence.
• Conditional dependence graph with thresholding:
The method relies on nonzero elements in the inverse correlation matrix related to the partial correlation of two variables given the remaining variables. However, the highorder interactions in the graph are usually not easy to interpret. For high-dimensional correlation matrices based on structured factor models, there are theoretically no zeros in the inverse correlation matrix if each variables loads to at least one latent factor; see for example the form of the inverse correlation matrix in Section 3.10.4 of [14]. But as the dimension gets larger and larger, many entries of the inverse correlation matrix approach zero. Consider a 1-factor model and assume the dependence in the model is strong. [9] show that in such case, the latent variable can be recovered consistently from the observed variables as the dimension goes to infinity. This indicates that the conditional correlation of two variables given the remaining variables is asymptotically equivalent to the conditional correlations of two variables given the latent variables; the latter conditional correlation is zero. Given a threshold, the dependency graph tends to be sparser as the number of variables increases if added variables continue to load moderately on the latent variable.
Therefore, constructing graphs based on non-zeros elements in the precision matrix is not informative for the Gaussian factor dependence structures when the dimension is large.
The conditional dependency graph can be useful when Σ comes from models not linked to latent variables; one example is from Proposition 2 of [15]. A small numerical check of the sparsity effect described above is given after this list. In the dependence graph based on a vine, the first tree is a maximal spanning tree and its edges are shown in black, with the correlation labeling the edge between each pair of variables. Additional coloured edges are drawn to indicate some partial correlations that exceed thresholds. Blue, red, and green colored edges are used to represent the partial correlations between variables given one, two, and more variables, respectively.
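The numerical check referred to above: for an exchangeable 1-factor correlation matrix with a common loading (0.7 is an arbitrary illustrative value), the maximum absolute highest-order partial correlation computed from the inverse correlation matrix shrinks as the dimension d grows, so a fixed threshold yields an increasingly sparse conditional independence graph.

```python
# Numerical check: under an exchangeable 1-factor correlation matrix,
# the highest-order partial correlations (from the inverse matrix, Eq. (2))
# shrink as the dimension grows; the common loading 0.7 is illustrative.
import numpy as np

for d in (5, 20, 100):
    a = np.full(d, 0.7)
    Sigma = np.outer(a, a) + np.diag(1 - a**2)
    P = np.linalg.inv(Sigma)
    pcor = -P / np.sqrt(np.outer(np.diag(P), np.diag(P)))
    np.fill_diagonal(pcor, 0.0)
    print(d, round(float(np.max(np.abs(pcor))), 4))   # max off-diagonal value
```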
Examples to compare different methods
A small illustrative example is provided to compare the different dependence graphs built by the various methods. Data are simulated from the 1-factor model with latent variable W:
Z_j = α_j W + ψ_j ε_j, j ∈ {1, . . . , d}, (3)
where W, ε_1, ε_2, . . . are mutually independent N(0, 1) random variables, and −1 < α_j < 1, ψ_j² = 1 − α_j² for all j. Let the loading matrix/vector be A = [α_1, . . . , α_d]^T and Ψ² = diag(ψ_1², . . . , ψ_d²). The correlation matrix of (Z_1, . . . , Z_d)^T is Σ = AA^T + Ψ², and the correlation matrix of (Z_1, . . . , Z_d, W)^T is
Σ* = [ Σ  A ; A^T  1 ].
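A minimal simulation sketch of the 1-factor model (3) with illustrative loadings and sample size; it also builds the model-based Σ and Σ* and compares them with the sample correlation matrix of the simulated (Z, W).

```python
# Simulation from the 1-factor model (3) with illustrative loadings,
# plus the model-based correlation matrices Sigma and Sigma*.
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 10
alpha = rng.uniform(0.5, 0.8, size=d)                  # illustrative loadings
W = rng.standard_normal(n)
eps = rng.standard_normal((n, d))
Z = W[:, None] * alpha + eps * np.sqrt(1 - alpha**2)   # Z_j = alpha_j W + psi_j eps_j

Sigma = np.outer(alpha, alpha) + np.diag(1 - alpha**2)       # model corr. of Z
Sigma_star = np.block([[Sigma, alpha[:, None]],
                       [alpha[None, :], np.ones((1, 1))]])   # model corr. of (Z, W)
sample = np.corrcoef(np.column_stack([Z, W]), rowvar=False)
print(np.round(np.abs(sample - Sigma_star).max(), 3))        # sampling error only
```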
Using (2), the partial correlation matrix based on Σ is:
and the partial correlation matrix based on Σ * is:
The connected edges of the 4 methods of dependence graphs are summarized in Table 1 For TC wth a threshold of 0.2, the dependence graph is a complete graph and so the edges are not listed. The truncation level of the vine is 2 (higher order partial correlations are very small in higher order trees of the vine) in the upper panel and the threshold for partial correlations are set to be 0.2 for drawing edges. The truncation level of the vine is 1 in the lower panel because this suffices to replicate the actual graph of the 1-factor model with each (observed) variable Z j linked to the latent variable W .
From Table 1, for the highly correlated variables based on a factor model, the graph obtained from the direct-threshold is dense, while the conditional independency graph becomes sparser as more variables are linked to the latent variable. In this setting, the FOCI and vine methods which relies on the lower-order partial correlations provide more parsimonious dependency graphs.
With the latent variable, the vine dependency graph has an edge for each observed variables Table 1 with a threshold of 0.2, the edge [2-3] is added for conditional dependence but not [9][10].
In the simple 1-factor setting, the FOCI method gives the same graph as the vine method when the latent variable is included. However, the FOCI method considers only first-order partial correlation, while the vine graphs consider the higher-order conditional relationships of the variables.
For structured factor models with more than one latent variable (and structured zeros in the loading matrix), such as data simulated from a bi-factor model (see Section 3.11.1 of [14]) with variables in non-overlapping groups, such that the variables in the same group are more correlated, the FOCI method cannot recover the dependence structure of the latent variable model, and the graph will be dense in order to explain the dependence of variables in each group. The theory in [9] shows, under some mild assumptions, that the latent variables can be consistently estimated from the observed variables in the bi-factor model when the dimension of each group is large.
Therefore, the partial correlations of two variables given the remaining variables will decrease towards 0 as d becomes large. The conclusions for comparison of dependency graphs of the four methods in the bi-factor case will be similar to the 1-factor case.
Background on truncated partial correlation vines
The example in the preceding section was used to provide an idea of how the dependence graph based on vines includes edges based on some larger absolute partial correlations. The mathematics behind truncated partial correlation vines is summarized in this section.
A vine is a graphical object on d variables consisting of a sequence of linked trees. The first tree summarizes edges of d − 1 pairs of variables, with the d variables as nodes. Subsequent trees summarize conditional dependencies. The mathematical definition of a vine as a sequence of trees is developed in [3]. V is a regular vine on d variables, indexed as 1, . . . , d, with E(V) = ∪_{i=1}^{d−1} E(T_i) denoting the set of edges of V, if
1. V = (T_1, . . . , T_{d−1}) [consists of d − 1 trees];
2. T_1 is a tree with node set N_1 = {1, . . . , d} and edge set E(T_1), and for i = 2, . . . , d − 1, T_i is a tree with node set N_i = E(T_{i−1});
3. (proximity condition) for i = 2, . . . , d − 1, two nodes n_1, n_2 of T_i can be joined by an edge only if the corresponding edges of T_{i−1} share a node, that is, only if |n_1 ∆ n_2| = 2, where ∆ denotes the symmetric difference.
Example: The terminology in the above definition is illustrated with d = 4 variables denoted via indices 1, 2, 3, 4. Suppose [1,2], [1,3], [2,4] are edges in tree T_1. For tree 2, we could have two edges: [2,3;1] with nodes [1,2] and [1,3], and [1,4;2] with nodes [1,2] and [2,4]. If n_1 = [1,2] and n_2 = [2,4], then n_1 ∆ n_2 = {1, 4} with cardinality 2, and n_1 ∩ n_2 = {2}; the edge in tree 2 for nodes n_1, n_2 is denoted as [1,4;2]. Note that no edge can be constructed from [1,3] and [2,4], because if n_1 = [1,3] and n_2 = [2,4], then n_1 ∩ n_2 = ∅ and |n_1 ∆ n_2| = 4 ≠ 2, so the proximity condition is not satisfied.
Suppose the edges of the trees in the vine have values ρ_{a,b} for edges (a, b) in tree 1, and ρ_{a,b;S} for edges e = (a, b; S) after tree 1. A d × d correlation matrix can be parametrized as a partial correlation vine with d(d − 1)/2 parameters that are algebraically independent in (−1, 1), using the values of the partial correlations on these edges; see [19]. There are many different ways to re-parametrize a correlation matrix into a partial correlation vine. For any regular vine structure V, there is a one-to-one relationship between the d(d − 1)/2 correlation entries of a positive definite correlation matrix and the set of algebraically independent (d − 1) correlations and (d − 1)(d − 2)/2 partial correlations in the partial correlation vine.
The vine in the above example leads to the partial correlation vine with parameters (ρ_12, ρ_13, ρ_24; ρ_23;1, ρ_14;2, ρ_34;12). Another partial correlation vine with 4 variables has parameters (ρ_12, ρ_13, ρ_14; ρ_23;1, ρ_24;1, ρ_34;12).
Algorithm for truncated partial correlation vine and vine dependency graph
A sequential maximum spanning tree (MST) algorithm can be used to find truncated partial correlation vines such that the partial correlations in higher-order trees are small. From [20], the log-determinant of the correlation matrix satisfies log det(R) = Σ_{e ∈ E(V)} log(1 − ρ_e²) for any regular vine structure V, where {ρ_e} is the set of algebraically independent correlations and partial correlations on the edges of the vine. A good m-truncated partial correlation vine should lead to a correlation matrix that approximates R; this means
− Σ_{e ∈ T_1, . . . , T_m} log(1 − ρ_e²) ≈ − log det(R), (4)
as this would imply that the partial correlations in trees m + 1, . . . , d − 1 are close to 0.
For (4), the choice of edge weight is − log(1 − ρ 2 e ) for edge e for the sequential MST algorithm summarized in Section 6.17 of [14]. The (l + 1)-th tree T l+1 is constructed based on the locally optimal tree T l . In general, the algorithm with local optimality at each tree level is not globally optimal; enumeration over vines is only possible for d ≤ 7. To decide on the truncation level m, a comparative fit index is proposed in [4]; this is fine for a sample correlation matrix when the sample size is large enough relative to d. Otherwise one can stop the sequential MST when the absolute partial correlation in trees start to be below a specified threshold.
If the optimum over the left-hand side of (4) can be found, it might have mostly the same edges as that from the MST algorithm, but not necessarily in the same tree; this does not affect the identification of nodes or genes that have strong links to many other nodes/genes.
For a vine dependency graph, two variables are connected if, in a tree of the truncated vine, they form an edge with absolute partial correlation that exceeds a threshold. Variables that are in a path with one or a few variables in between generally have weak conditional dependence given the intermediate variables but have non-negligible dependence.
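A minimal sketch of the first vine tree under the edge weight −log(1 − ρ_e²) discussed above; it uses networkx for the maximum spanning tree (a convenience assumption, not part of the original algorithm), applies it to an illustrative 1-factor correlation matrix, and reports the tree-1 contribution to the left-hand side of (4) next to −log det(R).

```python
# First vine tree as a maximum spanning tree with edge weights -log(1 - rho^2),
# applied to an illustrative 1-factor correlation matrix (networkx for brevity).
import numpy as np
import networkx as nx

def first_vine_tree(R):
    d = R.shape[0]
    G = nx.Graph()
    for i in range(d):
        for j in range(i + 1, d):
            G.add_edge(i, j, weight=-np.log(1 - R[i, j]**2), rho=R[i, j])
    return list(nx.maximum_spanning_tree(G, weight="weight").edges(data=True))

a = np.array([0.8, 0.7, 0.6, 0.7, 0.5])                 # illustrative loadings
R = np.outer(a, a) + np.diag(1 - a**2)
edges = first_vine_tree(R)
tree1_fit = sum(-np.log(1 - e[2]["rho"]**2) for e in edges)
print([(e[0], e[1], round(e[2]["rho"], 3)) for e in edges])
print(round(tree1_fit, 3), round(-np.log(np.linalg.det(R)), 3))  # cf. (4)
```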
Methodology
In data sets with a large number of variables, one can expect that the variables can be partitioned into non-overlapping or overlapping groups, and group information can provide insight on the dependence of the variables. We describe the methodology of finding groups and adding groupbased latent variables to get a dependency graph based on truncated vines. The outline of the steps are first presented, followed by details of the implementation.
If the variables are not normally distributed, then the first processing step is a rank transform to standard normal N (0, 1) via the probability integral transform.
Suppose there are d variables (such as gene expression measurements for d genes) and the sample size is n. The variables are assumed to be monotonically related so that summarization via correlations is meaningful. Let the data vectors be (x i1 , . . . , x id ), for i = 1, 2, . . . , n. For a fixed variable index j,
z_ij = Φ^{−1}( (rank(x_ij) − 0.5) / n ),
where Φ is the cumulative distribution function of the standard normal distribution. The transformed variables are denoted as (z i1 , . . . , z id ), for i = 1, 2, . . . , n and they are said to be on the z-scale. After the transformation, the correlation matrix R data is obtained from the(z i1 , . . . , z id ) vectors.
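A minimal sketch of the rank transform to the z-scale described above, applied column by column to an n × d data matrix; the gamma-distributed toy data are only for illustration.

```python
# Rank transform of each variable (column) to the z-scale.
import numpy as np
from scipy.stats import norm, rankdata

def to_z_scale(X):
    """n x d data matrix -> z_ij = Phi^{-1}((rank(x_ij) - 0.5) / n),
    with ranks computed within each column."""
    n = X.shape[0]
    ranks = np.apply_along_axis(rankdata, 0, X)
    return norm.ppf((ranks - 0.5) / n)

rng = np.random.default_rng(0)
X = rng.gamma(shape=2.0, scale=1.0, size=(100, 3))   # skewed toy data
Z = to_z_scale(X)
print(np.round(Z.mean(axis=0), 3), np.round(Z.std(axis=0), 3))
```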
Outline of the main steps
This section outlines the steps to find groups and introduce latent variables to explain dependence within groups. Since all genes interact with other genes in order to form a larger protein complex, as machinery consisting of small parts, the activities of genes within the same complex are coexpressed and become present in the cell simultaneously. Functionally homogeneous groups of genes (variables) naturally emerge in biological networks, and many empirical studies indeed confirm the existence of group structures intact in both manifested gene expression profiles and underlying physical interaction networks (e.g., [2]; [13]).
To identify groups, some variable clustering algorithms can be applied to obtain initial non-overlapping groups, and some additional diagnostic tools can be used to make adjustments so that the dependence within each smaller group has the structure of 1-factor with residual dependence. For each such resulting group, a latent variable can explain most of the dependence among the variables in the group, and a proxy variable is created as an estimate of the latent variable.
The procedure consists of several steps.
1. Apply a variable clustering algorithm to partition the d variables into initial non-overlapping groups G_1, . . . , G_m.
2. Within each initial group, separate out the variables that are only weakly correlated with the other variables in the group.
3. Now each group G_g has variables with moderate to strong correlations. Fit a 1-factor model to the correlation matrix of group G_g, and note the variables within the group that can have stronger residual dependence. The remaining variables in G_g have the structure of 1-factor with weak residual dependence.
4. For the variables in G g with weak residual dependence, a latent variable W g can explain most of the dependence, and theory from [18] provides an approach to compute a proxy variable w g that estimates the latent variable.
5. Add w 1 , . . . , w m to the d variables and construct a truncated vine dependency graph using the algorithm summarized earlier.
For simpler interpretation, it might be useful to re-orient (negate) variables within each group that are negatively correlated with most other variables in the group. These variables might be those with negative loadings in the fitted 1-factor models. A heatmap can be plotted to visualize the group structure before and after negation of some variables (on the transformed z-scale).
Details of the steps
1. For high-dimensional data, a variable clustering algorithm such as the CLV algorithm in [29] (R library ClustVarLV) can be used to form non-overlapping homogeneous groups.
The CLV algorithm tries to find an optimal partition of the variables to maximize the summation of a homogeneity measure of variables within resulting clusters. The measure of homogeneity is larger when the variables are more strongly associated with the latent component in each cluster.
2. The weakly correlated variables to separate out can be found based on the correlation matrices for the groups G 1 , . . . , G m from the CLV algorithm.
3. For group G_g (g = 1, . . . , m) after step 2, let R_{g,data} be the empirical correlation matrix for group G_g and let R_{g,1-factor} be the correlation matrix from a fitted 1-factor dependence model. The residual correlation matrix is D_g = |R_{g,1-factor} − R_{g,data}|. A variable is considered to have stronger residual dependence if, in the row of D_g corresponding to this variable, there is a large value (exceeding threshold 1) or the sum of values in this row is large (exceeding threshold 2); a small sketch of this flagging rule is given after this list.
4. Suppose there are d g variables in group G g with weaker residual dependence: z 1g , z 2g , . . . , z dg,g .
The proxy variable is defined as
$$w_g = \Phi^{-1}\!\left( d_g^{-1} \sum_{j=1}^{d_g} z_{jg} \right). \qquad (5)$$
[18] show that this proxy variable can act as a good estimate of the latent variable under some mild assumptions of weak residual dependence, when d g is large enough and the variables have been oriented to have positive loadings in the 1-factor model fit. The variables with stronger residual dependence remain in group G g , but there is no guarantee that a proxy computed with these variables included would be theoretically consistent. (Illustrative code sketches for steps 3 and 4 are given after this list.)
5. In the vine dependency graph with proxy variables, one can expect the variables in group G g to be mostly linked to w g in the first vine tree. The additional edges beyond the first vine tree explain the local residual dependence after conditioning on the group latent effect. Overall, the vine dependency graph not only shows the dependence or conditional dependence between variables but also provides information on the group structure and summarizes the latent dependence among variables.
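The sketch below illustrates steps 3 and 4 under stated assumptions: `Z_g` is the n × d_g matrix of normal scores for one group (variables already oriented to have positive loadings), the thresholds `tau1`, `tau2` are placeholders for the two thresholds mentioned in step 3, the 1-factor fit is approximated with scikit-learn's FactorAnalysis, and the proxy is a simple re-standardized average of the scores; the exact standardization of w_g in (5) should be taken from [18].

```python
import numpy as np
from scipy.stats import norm, rankdata
from sklearn.decomposition import FactorAnalysis

def residual_dependence_flags(Z_g, tau1=0.15, tau2=1.0):
    """Step 3: fit a 1-factor model and flag variables with stronger residual
    dependence via D_g = |R_{g,1-factor} - R_{g,data}| (row max / row sum rules)."""
    R_data = np.corrcoef(Z_g, rowvar=False)
    fa = FactorAnalysis(n_components=1).fit(Z_g)
    load = fa.components_.ravel()                       # approximate loadings
    R_fit = np.outer(load, load) + np.diag(fa.noise_variance_)
    D = np.abs(R_fit - R_data)
    np.fill_diagonal(D, 0.0)
    flags = (D.max(axis=1) > tau1) | (D.sum(axis=1) > tau2)
    return flags, D

def proxy_variable(Z_g):
    """Step 4: a simple proxy, averaging the normal scores of the group and
    mapping the averages back to a N(0,1) scale via a rank/quantile step."""
    avg = Z_g.mean(axis=1)                              # d_g^{-1} * sum_j z_{jg}
    n = len(avg)
    return norm.ppf((rankdata(avg) - 0.5) / n)
```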
Biological Applications
Two gene applications are presented in this section. One involves a yeast gene dataset, and another involves a prostate cancer dataset. The method for building the vine dependency graphs with latent variables is applied to some selected genes of the two datasets. The aim is to explore whether the resulting dependency graphs can be informative and match some biological findings in the literature.
MAP kinase pathway inference in Yeast Data
The yeast dataset is from the Gene Expression Omnibus database (accession no. GSE1990). From the documentation of the dataset, the samples are haploid segregants from a cross between the yeast strains BY4716 and RM11-1a, as in [5]. This series contains all GSE617 samples, plus 27 additional segregants assayed with the same protocol and the same reference sample as GSE617, consisting of 262 gene expression vectors of approximately 7,000 genes. Since the actual regulatory network topology is known for the mitogen-activated protein kinase (MAPK) signalling pathway in the Kyoto Encyclopedia of Genes and Genomes (KEGG) database [16], we focused on identifying relationships among d = 37 genes that are known to constitute the MAPK pathway. The ground-truth gene regulatory networks are available in [17]. We show the illustrative ground-truth gene regulatory network in Figure 1. In this figure, several genes appear in multiple positions, and therefore it cannot be considered a dependency graph. However, our goal is to construct dependency graphs for these genes at the expression level and compare the results with this reference graph to see if there is some biological match. For each group, we introduce the proxy variables in (5) even though the group sizes d g are not large. The proxy variables are referred to as proxy1, proxy2, proxy3 and proxy4. The resulting vine dependency graph is shown in Figure 3.
Comparing the vine dependency graph with the ground-truth figure, the genes linked to proxy1 and proxy2, such as MAT-α, GPA1, DIG1, FAR1, are closely related and are involved in the cell cycle arrest process. A group of genes, such as SLN1, SHO1, YPD1, CTT1, are linked to proxy4, and these genes are involved in the osmolyte synthesis process. Another group of genes, RHO1, BCK1, SWI4, SLT2, and RLM1, are linked to proxy3, and they are involved in the cell wall remodelling process. These findings suggest that the links between the proxy variables and the observed variables can provide group information, and that the genes clustered in the same group are usually involved in the same cell process.
In addition to the group information, some interesting links in the vine dependency graph match the ground-truth pathway graph in Figure 1. For example, in the cell wall remodelling process, the genes involved in the process are all linked closely in the vine graph: SLT2 is linked to RLM1 in the first tree and linked to SWI4 through RLM1. This matches the path SLT2 −→ RLM1 and SWI4 in the ground-truth pathway. A group of genes, PKC1, BCK1, proxy3 and RHO1, form a path in the first tree; this roughly matches the ground-truth path RHO1 −→ PKC1 −→ BCK1. Furthermore, genes with many links to other genes may act as hub genes and be involved in multiple processes. For example, STE20 appears in multiple locations in Figure 1 and plays roles in multiple cell processes. In the vine dependency graph, it also has many links within two edges to other genes in different groups.
Constructing condition-specific dependency networks in Prostate Cancer Study
We collected gene expression data profiled by RNA-sequencing over n = 497 prostate adenocarcinoma patients in the cancer genome atlas (TCGA) cohort. RNA-sequencing technology directly measures the number of short reads mapped onto each gene (exons), which shows robust correlations with the number of mRNA molecules transcribed within each gene. Here, we specifically focus on genes previously known to be involved in the cancer developmental process (KEGG), as was done in the previous analysis [21]. For the selected 314 genes available in the TCGA data, and overlapping with the KEGG cancer pathway annotation, we applied our proposed methodology.
The CLV clustering algorithm partitioned the 314 genes into six homogeneous groups, consisting of 55, 44, 63, 62, 44, and 46 variables. In order to estimate 1-factor models, we identified a total of 44 weakly-correlated variables (3, 6, 5, 15, 8 and 7 from these six clusters) and separated them out from each cluster, resulting in tightly-connected groups of 52, 38, 58, 47, 36, and 39 variables largely explained by a single latent factor within each group. For simpler illustration (less busy graphs), we show the connected sub-graph constructed from the CLV groups with indices 1, 6, 3, which have more between-group dependence. In Figure 4, there are three homogeneous groups among the 164 genes; we denote the groups along the diagonal from the bottom left as group 1, group 6 and group 3, followed by 15 isolated genes separated from these CLV groups.
For the three groups, enrichment analysis is performed on each group. The R package goseq is used to identify the ontologies of the three groups, and the top three potential ontologies for each group are shown in Table 2.
group  top ontologies
1      positive regulation of peptide hormone secretion; drug metabolic process; generation of precursor metabolites and energy
6      collagen-containing extracellular matrix; angiogenesis; skeletal system development
3      regulation of muscle system process; nuclear lumen; nucleoplasm
Vine dependency graphs
For the tumor cases, vine dependency graphs were obtained without and with introducing the proxy variables; they are in Figures 6 and 7 respectively.
The simple thresholding on elements of the partial correlations in (2) is also performed to get conditional independency graphs. The summary statistics of the partial correlations are minimum: -0.34, 1st quartile: -0.04, median: 0.00, 3rd quartile: 0.05, maximum: 0.77.
Most of the partial correlations are small in absolute value and close to zero. The node degree distributions for the inferred networks, for truncated vines and for conditional independency graphs based on thresholds 0.1 and 0.15 respectively, are shown in Figure 5. Looking at the genes with the largest node degrees and their neighbors, the resulting subsets are not related to the gene groups based on our methodology. [21] includes several methods for producing graphs based on the inverse covariance matrix; however, we cannot match the tuning parameters of those methods to thresholds on partial correlations, so we cannot make comparisons.
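A minimal sketch of this thresholding step follows, assuming the pairwise partial correlations given all remaining variables are read off the precision matrix of the (possibly proxy-augmented) correlation matrix `R`; the threshold value is a placeholder.

```python
import numpy as np

def partial_correlation_graph(R, threshold=0.1):
    """Conditional independency graph by thresholding partial correlations,
    computed from the precision matrix: rho_{ij.rest} = -P_ij / sqrt(P_ii P_jj)."""
    P = np.linalg.inv(R)
    d = np.sqrt(np.diag(P))
    pcor = -P / np.outer(d, d)
    np.fill_diagonal(pcor, 1.0)
    edges = np.abs(pcor) > threshold
    np.fill_diagonal(edges, False)
    return pcor, edges

# e.g. node degrees of the thresholded graph: degrees = edges.sum(axis=1)
```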
Based on the vine dependency graphs, we identify the most connected genes and summarize the results in Table 3. In the table, we also show the effect size of these genes, computed from the raw RNA-seq counts on the log scale, and the logFC value (log of fold change) obtained from the differentially expressed gene (DEG) analysis using the R package edgeR. The fold change describes the difference in RNA expression of a gene between normal and tumor cases; genes with positive fold changes are up-regulated and genes with negative fold changes are down-regulated compared to the normal cases. In addition, all the genes in the table pass the test for differential expression using the likelihood ratio test in edgeR, with the cut-off p-value set at 0.05.
From the two vine dependency graphs, isolated variables usually have node degree 1, and these are far from the clusters connected around the proxy variables. The genes used to calculate the proxy variables are usually directly linked to the corresponding group-based proxy variable, while some of them are linked to the proxy variable via a secondary tree through one of the other genes in that group. The hub genes are usually strongly correlated with the proxy variable in their group and also have relatively strong correlations with other variables in the group. Examples are the genes BIRC5, GSTP1, TCF7L1, CREBBP; they are linked to a proxy variable and are connected to other variables in a star shape. Some hub genes such as FGF2 connect to the proxy variable in one group and also to genes in another group, acting as a bridge between the two groups. Another example of bridging is the gene COL4A2; it connects to the proxy variable of group 6 and to the genes of group 1. LAMB3 is a gene from group 3, but was not used to calculate the proxy variable because it shows strong residual dependence. The graph shows that this gene is linked to multiple groups. The links with genes in group 3 explain the residual local dependence after taking into account the group effect, while the links with the group 6 proxy variable through the gene TCF7L1 explain between-group dependence (and non-weak residual dependence).
From Table 3, there are several hub genes (with "−" after the gene name) that are re-oriented because of negative correlation of their RNA-seq values with those of many other genes; examples are RBX1, BIRC5, E2F1. The effect sizes of these genes are positive, which indicates they are up-regulated in prostate cancer patients relative to non-diseased subjects. The RNA-seq variables that were not re-oriented had mainly negative effect sizes, indicating that the corresponding genes are down-regulated in prostate cancer patients.
Combining the two vine dependency graphs with Table 3, we expect that the most connected genes with a relatively large effect size may play an important role in the development of prostate cancer. A shortlist of hub genes in tumor patients consists of COL4A2, GSTP1, BIRC5, FGF2, TCF7L1, CREBBP, LAMB3, E2F1 from the vine graph with proxies, and the most connected genes with large effect size are RBX1, LAMA4, ITGB1, FGFR1, FZD7, EP300, TGFBR2 in the vine graph without proxies.
For those potential hub genes, some of them have related biological evidence to support them.
• TCF7L1 : [31] suggest that induction of WNT4/TCF7L1 results in increased NED and malignancy in prostate cancer that is linked to dysregulation of androgen receptor signaling and activation of the IL-8/CXCR2 pathway.
• BIRC5 : BIRC5 is an immune-related gene that inhibits apoptosis and promotes cell proliferation. It is highly expressed in most tumors and leads to poor prognosis in cancer
patients. [33] shows BIRC5 was significantly correlated with multiple immune cells infiltrates in a variety of tumors.
• GSTP1 : [25] states the gene belongs to the GSTs family, a group of enzymes involved in detoxification of exogenous substances and it also plays an important role in cell cycle regulation. Its dysregulation correlates with a large variety of tumors, in particular with prostate cancer.
• FGF2 : From [12] and [27], Fibroblast growth factor (FGF) 2 (or basic FGF) is expressed at increased levels in human prostate cancer. FGF2 can promote cell motility and proliferation, increase tumor angiogenesis, and inhibit apoptosis, all of which play an important role in tumor progression. Recently, [22] also identifies the FGF2 gene as a hub gene in the development of the prostate cancer.
Based on all the biological evidence, the hub genes we identified from the small subset (164 genes) play important roles in the progression of prostate cancer; this indicates that the vine dependency graphs can provide some interesting and useful insights.
Discussion
This paper presents a method to construct dependency graphs via truncated vines with latent variables. It makes use of low-order partial correlations and avoids the problem of high-order partial correlations used in conditional dependency graphs. Low-order partial correlations are easier to interpret for the conditional dependence between variables. With latent variables that can explain much of the dependence in groups of variables, high-order partial correlations get closer to 0 in absolute value as the number of variables linked to each latent variable increases.
Here we address the problem of finding a specialized form of dependency structure from data while constraining candidate graphs to be vine graphs. Finding a statistical dependency network from observed data has long been of great interest to computational biologists, since the first introduction of microarray technology to date. However, structural learning over combinatorial spaces of general graphs, such as directed acyclic graphs (DAGs), makes exact latent structure inference highly intractable and resorts to greedy optimization algorithms [11]. Even if we could enumerate all of these structures, given the limited sample size of expression data, a scoring function (objective) often runs into the critical problem of distinguishing multiple equally, or at least similarly, likely structures [6]. In contrast, interpretable truncated vine structures can be obtained with a relatively smaller data set.
Statistical dependency structures generally do not correspond to physical protein-protein interaction networks; hence, care should be taken with biological interpretations. Instead, dependency maps typically represent the notion of functional modules, and hub nodes (high-degree vertices) implicate essential genes that participate in multiple biological processes. Modeling statistical dependency structures over high-dimensional gene expression data also benefits subsequent analysis. For instance, conditional probability calculation facilitated by a vine graph (conditional independency structures) will improve regression and classification problems in predicting the outcome of diseases based on observed gene expressions [[10], [8]]. We could also expect our approach to provide an efficient way to compute marginal probabilities over many genes, thanks to the sparsity by construction. This is particularly appealing for G-formula computation in causality inference [26].
X_I ∼ N(μ, Σ), where Σ is the d × d covariance matrix. Let S ⊆ I with cardinality at least 2, i.e. |S| ≥ 2. For a pair i, j ∈ S, i ≠ j, denote S with {i, j} removed by L_{i,j} := S \ {i, j}, and let the corresponding random vector be X_{L_{i,j}} := {X_k , k ∈ L_{i,j}}. Define ρ_{ij;L_{i,j}}, the partial correlation, as the correlation parameter of the conditional (Gaussian) distribution of [X_i , X_j | X_{L_{i,j}}]. It quantifies the dependence between X_i and X_j with the linear effect of X_{L_{i,j}} removed.
For an m-truncated partial correlation vine, the conditional independency graph based on Σ^{-1} is parsimonious and informative. If the vine truncation level is m, the edges of the resulting conditional independency graph are those in trees 1 to m of the vine; these involve $\sum_{i=1}^{m}(d-i)$ non-zero entries in Σ^{-1}. The remaining (d−m)(d−m−1)/2 positions of Σ^{-1} are zero.
Suppose d = 10 loadings are generated uniformly from the interval [0.5, 0.9]. With weaker loadings, a larger d would be needed to illustrate the comparisons. In one simulation, after sorting into decreasing order, the loading vector A is [0.90, 0.86, 0.81, 0.77, 0.72, 0.68, 0.63, 0.59, 0.54, 0.50]^T.
Each Z_j is linked to the latent variable W, and this replicates the dependency graph used in latent variable models and structural equation models; the model (3) implies Cov(Z_i, Z_j | W) = 0 for i ≠ j, so edges for conditional dependence are not needed. Without the latent variable in the graph, the variable most correlated with W is Z_1, and the vine dependency graph first links each of the other Z_j to Z_1; but there is conditional dependence of the other variables given Z_1: from (1) and Σ, ρ_{2,3;1} = (0.69 − 0.77 × 0.73)/√((1 − 0.77²)(1 − 0.73²)) = 0.29 and ρ_{9,10;1} = (0.27 − 0.49 × 0.45)/√((1 − 0.49²)(1 − 0.45²)) = 0.06, so in the vine dependency graph, edges for conditional dependence given Z_1 appear for the pairs with the larger partial correlations.
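A quick numerical check of the two first-order partial correlations quoted above (standard recursion formula; values rounded to two decimals):

```python
import numpy as np

def pcor1(r_ab, r_ac, r_bc):
    """First-order partial correlation rho_{ab;c}."""
    return (r_ab - r_ac * r_bc) / np.sqrt((1 - r_ac**2) * (1 - r_bc**2))

print(round(pcor1(0.69, 0.77, 0.73), 2))   # 0.29 = rho_{2,3;1}
print(round(pcor1(0.27, 0.49, 0.45), 2))   # 0.06 = rho_{9,10;1}
```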
One can construct numerical examples with a large d and a structured bi-factor loading matrix with a large number of variables in each group. The correlations between observed variables and the global latent variable can be sampled from a positive interval such as [a 0 , b 0 ] = [0.3, 0.8], while the partial correlations of observed variables in a group and the local latent variable given the global latent variable can be sampled from another interval, for example [a 1 , b 1 ] = [0.4, 0.7].
1. T_1 is a connected tree with nodes N(T_1) = {1, 2, . . . , d} and edges E(T_1); 2. for l > 1, T_l is a tree with nodes N(T_l) = E(T_{l−1}) [edges in a tree become nodes in the next tree]; 3. (proximity condition) for l = 2, . . . , d − 1 and {n_1, n_2} ∈ E(T_l), #(n_1 Δ n_2) = 2, where Δ denotes symmetric difference and # denotes cardinality [nodes joined in an edge differ by two elements].
Truncated vines provide a parsimonious way of representing the dependence of d variables. This is usually done by constructing the trees so that pairs of variables with the strongest correlations are in tree 1, and pairs of variables with the strongest partial correlations ρ_{a,b;S} are in the low-order trees. An m-truncated vine (with m ≪ d) assumes that most of the dependence among the variables is captured by the first m trees V_m = (T_1 , . . . , T_m ) of the vine. The remaining trees have zero or weak conditional dependence (zero remaining partial correlations for an exact m-truncated partial correlation vine, and weak partial correlations below a threshold for an approximate m-truncated partial correlation vine).
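A common heuristic for the first tree, used here only as an illustration (the authors' structure-selection algorithm may differ), is to take T_1 as a maximum spanning tree of the absolute correlations:

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def first_vine_tree(R):
    """Tree 1 as a maximum spanning tree on |correlations|: minimizing the
    strictly positive weights (2 - |r_ij|) is equivalent to maximizing |r_ij|."""
    W = 2.0 - np.abs(R)
    np.fill_diagonal(W, 0.0)        # zeros on the diagonal = no self loops
    T = minimum_spanning_tree(W).toarray()
    return [(int(i), int(j)) for i, j in zip(*np.nonzero(T))]
```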
Then a linear regression model is fitted with the surrogate variable as predictor and the variable to be imputed as response. The predictions, on the N(0, 1) scale, of the variable to be imputed are then converted to the original scale by the inverse rank transform, and the values are imputed at the missing positions.
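The sketch below illustrates this surrogate-based imputation under stated assumptions: the function and variable names are hypothetical, the surrogate `x` is assumed fully observed, and the inverse rank transform is approximated by mapping the normal-scale predictions through the empirical quantiles of the observed values of `y`.

```python
import numpy as np
from scipy.stats import norm, rankdata

def impute_with_surrogate(y, x):
    """Impute missing entries of y using a highly correlated surrogate x."""
    obs = ~np.isnan(y)
    nscore = lambda v: norm.ppf((rankdata(v) - 0.5) / len(v))
    b1, b0 = np.polyfit(nscore(x[obs]), nscore(y[obs]), 1)   # fit on complete records
    z_pred = b0 + b1 * nscore(x)[~obs]                        # predict on N(0,1) scale
    y_out = y.copy()
    y_out[~obs] = np.quantile(y[obs], norm.cdf(z_pred))       # back to original scale
    return y_out
```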
Figure 1: MAPK signaling pathway (Saccharomyces cerevisiae) from Wikipedia.
The proposed methodology is applied to the z-scale data after imputation of missing values and transformation to standard normal. Combining the initial clustering results with the diagnostic results from the heatmap, there are 4 small homogeneous groups. For variables that are mostly negatively correlated with other variables within their groups, we re-orient them by changing the sign so that the correlations within each group are positive.
Figure 2: Heatmap of the yeast RNA expression data converted to normal scores; there are four homogeneous groups from the bottom left to the upper right, with group sizes 7, 7, 9, 8 respectively. There are 6 isolated variables not in the 4 groups.
Figure 3: Vine dependency graph for yeast RNA expression data. Edges in black are from a maximal spanning tree, and the connected edges in blue, red, and green explain the conditional dependence of variables given one, two, and three or more variables. Edges are drawn in the plot if the partial correlation of two variables given one, two, and three or more variables is greater than 0.25, 0.15, and 0.30 respectively, and are labelled if greater than 0.5, 0.3, 0.4 respectively. The labels on the spanning tree in black are correlations between two variables, while the labels on additional edges are partial correlations of variables given other variables. There are four proxy variables (shown in circles); the observed variables are shown in rectangles.
For groups 1, 6, 3, the numbers of variables used for the proxy calculations were respectively 52, 39, 45. The vine dependency graph for all 6 groups is in additional file 1. From the correlation matrix, some genes have RNA-seq values that are mostly negatively correlated with the RNA-seq values of other genes. For these genes, we change the sign of the RNA-seq value on the z-scale; equivalently, this is a negation of the original RNA-seq values. A − (minus sign) is added to the gene name for the heatmap and vine graphs.
Figure 4: A gene expression correlation matrix between 164 genes computed on prostate cancer tumor samples. The color and intensity represent Pearson correlation values (between −1 and 1) based on normalized gene expression z-scores. From the bottom left to the right, the first block has 52 genes in group 1, the second block has 39 genes in group 6, and the third block has 58 genes in group 3. After the third group, there are 15 genes separated out of the CLV groups.
Figure 5: Node degree distributions for the inferred networks for the considered methods.
Figure 6: Vine dependency graph for 164 genes in the RNA-seq data of prostate cancer. Edges in black are from a maximal spanning tree, and the connected edges in blue, red, and green explain the conditional dependence of variables given one, two, and three or more variables. Edges are drawn in the plot if the absolute partial correlation of two variables given one, two, and three or more variables is greater than 0.30, 0.15, and 0.30 respectively, and are labelled if greater than 0.5, 0.3, 0.4 respectively. The labels on the spanning tree in black are correlations between two variables, while the labels on additional edges are partial correlations of variables given other variables. There are no proxy variables introduced, and the observed variables are shown in rectangles.
Figure 7: Vine dependency graph for 164 genes in the RNA-seq data of prostate cancer, with three proxy variables.
Table 1: The connected edges for four methods applied on the correlation matrices Σ and Σ*. The latent variable is denoted as W. Abbreviations are TC for thresholding on correlation and CDG for conditional dependence graph with thresholding. The threshold is fixed at 0.2 for TC, CDG and FOCI.

matrix  method  connected edges                                                              # edges
Σ       TC      complete graph                                                               55
Σ       CDG     1-2, 1-3, 1-4                                                                3
Σ       FOCI    1-2, 1-3, 1-4, 1-5, 1-6, 1-7, 1-8, 1-9, 1-10, 2-3, 2-4, 2-5, 2-6, 3-4, 3-5   15
Σ       Vine    1-2, 1-3, 1-4, 1-5, 1-6, 1-7, 1-8, 1-9, 1-10, 2-3, 2-4, 2-5, 2-6             13
Σ*      TC      complete graph                                                               66
Σ*      CDG     1-W, 2-W, 3-W, 4-W, 5-W, 6-W, 7-W                                            7
Σ*      FOCI    1-W, 2-W, 3-W, 4-W, 5-W, 6-W, 7-W, 8-W, 9-W, 10-W                            10
Σ*      Vine    1-W, 2-W, 3-W, 4-W, 5-W, 6-W, 7-W, 8-W, 9-W, 10-W                            10
condition does not hold. For tree 3, nodes [2, 3; 1] and [1, 4; 2] have edge [3, 4; 1, 2] because the symmetric difference of the two nodes is {3, 4} with cardinality 2.
The proportions of missing values for 50% of the genes are below 3%, while others have missing proportion between 3% and 10%. For missing values in the raw dataset, we use Gaussian bivariate copulas for imputation. For the variable with missing values, we find a surrogate variable which is highly correlated with the variable. The non-missing records of both surrogate and the imputed variable are extracted and rank-transformed to N (0, 1) distributed variables.
Table 2: The top 3 ontologies for the CLV-groups 1, 6, 3.
Table 3: The genes which pass the DEG test (LRT test using the R package edgeR; p-value cut-off 0.05) and also have a large number of links in the built vine dependency graphs. The panels show the hub genes from the vine graphs built on the tumor samples; the left panel is for the vine built without the proxies, while the right panel is for the vine built with the proxies. The effect size is computed on the log scale: (log y_tumor − log y_normal)/s_pooled, where y is the raw RNA-seq data; logFC denotes the log transform of the fold change, defined as the ratio of 'expression level' in the two groups. Proxy1, proxy6, proxy3 are for groups 1, 6, 3. For gene names ending with −, the RNA-seq values were re-oriented for drawing the heatmap and vine graphs.

without proxies                              with proxies
gene(node-degree)  effect size  logFC        gene(node-degree)[group]  effect size  logFC
TCF7L1(18)         -1.81        -0.86        proxy3(33)                -            -
CREBBP(16)         -0.27         0.23        proxy1(32)                -            -
RBX1-(11)           0.43         0.63        proxy6(28)                -            -
LAMA4(10)          -0.86        -0.45        COL4A2(14)[B]             -0.78        -0.51
BIRC5-(10)          1.74         2.33        GSTP1(12)[C]              -2.04        -1.87
ITGB1(9)           -1.28        -0.51        BIRC5-(11)[C]              1.74         2.33
FGFR1(9)           -1.10        -0.54        FGF2(9)[B]                -1.19        -0.61
FZD7(9)            -1.21        -0.64        TCF7L1(9)[C]              -1.81        -0.86
GSTP1(9)           -2.04        -1.87        CREBBP(9)[A]              -0.27         0.23
COL4A2(8)          -0.78        -0.51        LAMB3(9)[C]               -1.28        -1.42
FGF2(8)            -1.19        -0.61        E2F1-(8)[C]                0.90         1.36
EP300(7)           -0.25         0.26        MMP2(7)[B]                -0.73        -0.26
LAMB3(8)           -1.28        -1.42        CKS2-(7)[C]                0.85         1.11
TGFBR2(8)          -1.05        -0.46        CBL(7)[A]                 -0.42         0.15
ITGA3(8)           -1.45        -0.67        -                         -            -
HRAS-(7)            0.30         0.52        -                         -            -
GTSE1-(7)           1.70         2.02        -                         -            -
RRM2-(7)            1.79         1.94        -                         -            -
E2F1-(7)            0.90         1.36        -                         -            -
CHEK2-(7)           1.06         1.04        -                         -            -
[1] T. W. Anderson. An Introduction to Multivariate Statistical Analysis. Wiley, New York, 1958.
[2] A.-L. Barabasi and Z. N. Oltvai. Network biology: understanding the cell's functional organization. Nature Reviews Genetics, 5(2):101-113, 2004.
[3] T. Bedford and R. M. Cooke. Vines - a new graphical model for dependent random variables. Annals of Statistics, 30(4):1031-1068, 2002.
[4] E. C. Brechmann and H. Joe. Truncation of vine copulas using fit indices. Journal of Multivariate Analysis, 138:19-33, 2015.
[5] R. B. Brem, G. Yvert, R. Clinton, and L. Kruglyak. Genetic dissection of transcriptional regulation in budding yeast. Science, 296(5568):752-755, 2002.
[6] D. M. Chickering. Learning equivalence classes of Bayesian-Network structures. The Journal of Machine Learning Research, 2:445-498, 2002.
[7] C. Chow and C. Liu. Approximating discrete probability distributions with dependence trees. IEEE Transactions on Information Theory, 14(3):462-467, 1968.
[8] H.-Y. Chuang, E. Lee, Y.-T. Liu, D. Lee, and T. Ideker. Network-based classification of breast cancer metastasis. Molecular Systems Biology, 3(1):140, 2007.
[9] X. Fan and H. Joe. High-dimensional factor copula models with estimation of latent variables. arXiv preprint arXiv:2205.14487, 2022.
[10] N. Friedman, D. Geiger, and M. Goldszmidt. Bayesian Network classifiers. Machine Learning, 29(2):131-163, 1997.
[11] N. Friedman, M. Linial, I. Nachman, and D. Pe'er. Using Bayesian Networks to analyze expression data. Journal of Computational Biology, 7(3-4):601-620, 2000.
[12] D. Giri, F. Ropiquet, and M. Ittmann. Alterations in expression of basic Fibroblast Growth Factor (FGF) 2 and its receptor FGFR-1 in human prostate cancer. Clinical Cancer Research, 5(5):1063-1071, 1999.
[13] R. Jansen, D. Greenbaum, and M. Gerstein. Relating whole-genome expression data with protein-protein interactions. Genome Research, 12(1):37-46, 2002.
[14] H. Joe. Dependence Modeling with Copulas. Chapman & Hall/CRC, Boca Raton, FL, 2014.
[15] H. Joe. Parsimonious graphical dependence models constructed from vines. Canadian Journal of Statistics, 46(4):532-555, 2018.
[16] M. Kanehisa, S. Goto, M. Furumichi, M. Tanabe, and M. Hirakawa. KEGG for representation and analysis of molecular networks involving diseases and drugs. Nucleic Acids Research, 38(suppl 1):D355-D360, 2010.
[17] T. Kelder, M. P. Van Iersel, K. Hanspers, M. Kutmon, B. R. Conklin, C. T. Evelo, and A. R. Pico. WikiPathways: building research communities on biological pathways. Nucleic Acids Research, 40(D1):D1301-D1307, 2012.
[18] P. Krupskii and H. Joe. Approximate likelihood with proxy variables for parameter estimation in high-dimensional factor copula models. Statistical Papers, 63:543-569, 2022.
[19] D. Kurowicka and R. Cooke. A parameterization of positive definite matrices in terms of partial correlation vines. Linear Algebra and its Applications, 372:225-251, 2003.
[20] D. Kurowicka and R. M. Cooke. Uncertainty Analysis with High Dimensional Dependence Modelling. Wiley, Chichester, 2006.
[21] L. Lin, M. Drton, and A. Shojaie. Estimation of high-dimensional graphical models using regularized score matching. Electronic Journal of Statistics, 10(1):806, 2016.
[22] S. Liu, W. Wang, Y. Zhao, K. Liang, and Y. Huang. Identification of potential key genes for pathogenesis and prognosis in prostate cancer by integrated analysis of gene expression profiles and the cancer genome atlas. Frontiers in Oncology, 10:809, 2020.
[23] E. Z. Macosko, A. Basu, R. Satija, J. Nemesh, K. Shekhar, M. Goldman, I. Tirosh, A. R. Bialas, N. Kamitaki, E. M. Martersteck, et al. Highly parallel genome-wide expression profiling of individual cells using nanoliter droplets. Cell, 161(5):1202-1214, 2015.
[24] P. M. Magwene and J. Kim. Estimating genomic coexpression networks using first-order conditional independence. Genome Biology, 5(12):1-16, 2004.
[25] F. Martignano, G. Gurioli, S. Salvi, D. Calistri, M. Costantini, R. Gunelli, U. De Giorgi, F. Foca, and V. Casadio. GSTP1 methylation and protein expression in prostate cancer: diagnostic implications. Disease Markers, 2016.
[26] A. I. Naimi, S. R. Cole, and E. H. Kennedy. An introduction to g methods. International Journal of Epidemiology, 46(2):756-762, 2017.
[27] N. Polnaszek, B. Kwabi-Addo, L. E. Peterson, M. Ozen, N. M. Greenberg, S. Ortega, C. Basilico, and M. Ittmann. Fibroblast Growth Factor 2 promotes tumor progression in an autochthonous mouse model of prostate cancer. Cancer Research, 63(18):5754-5760, 2003.
[28] M. Schena, D. Shalon, R. W. Davis, and P. O. Brown. Quantitative monitoring of gene expression patterns with a complementary DNA microarray. Science, 270(5235):467-470, 1995.
[29] E. Vigneau and E. Qannari. Clustering of variables around latent components. Communications in Statistics - Simulation and Computation, 32(4):1131-1150, 2003.
[30] Z. Wang, M. Gerstein, and M. Snyder. RNA-Seq: a revolutionary tool for transcriptomics. Nature Reviews Genetics, 10(1):57-63, 2009.
[31] Y.-C. Wen, Y.-N. Liu, H.-L. Yeh, W.-H. Chen, K.-C. Jiang, S.-R. Lin, J. Huang, M. Hsiao, and W.-Y. Chen. TCF7L1 regulates cytokine response and neuroendocrine differentiation of prostate cancer. Oncogenesis, 10(11):1-11, 2021.
[32] J. Whittaker. Graphical Models in Applied Multivariate Statistics. Wiley, Chichester, 1990.
[33] L. Xu, W. Yu, H. Xiao, and K. Lin. BIRC5 is a prognostic biomarker associated with tumor immune cell infiltration. Scientific Reports, 11(1):1-13, 2021.
| [] |
[
"Tracking monocular camera pose and deformation for SLAM inside the human body",
"Tracking monocular camera pose and deformation for SLAM inside the human body"
] | [
"Juan J Gómez Rodríguez ",
"Member, IEEEJ M M Montiel ",
"Fellow, IEEEJuan D Tardós "
] | [] | [] | Monocular SLAM in deformable scenes will open the way to multiple medical applications like computerassisted navigation in endoscopy, automatic drug delivery or autonomous robotic surgery. In this paper we propose a novel method to simultaneously track the camera pose and the 3D scene deformation, without any assumption about environment topology or shape. The method uses an illumination-invariant photometric method to track image features and estimates camera motion and deformation combining reprojection error with spatial and temporal regularization of deformations. Our results in simulated colonoscopies show the method's accuracy and robustness in complex scenes under increasing levels of deformation. Our qualitative results in human colonoscopies from Endomapper dataset show that the method is able to successfully cope with the challenges of real endoscopies: deformations, low texture and strong illumination changes. We also compare with previous tracking methods in simpler scenarios from Hamlyn dataset where we obtain competitive performance, without needing any topological assumption. | 10.1109/iros47612.2022.9981203 | [
"https://arxiv.org/pdf/2204.08309v1.pdf"
] | 248,227,463 | 2204.08309 | bca362233d9ed4db710a36f0a9782732e71adc6f |
Tracking monocular camera pose and deformation for SLAM inside the human body
Juan J Gómez Rodríguez
Member, IEEEJ M M Montiel
Fellow, IEEEJuan D Tardós
Tracking monocular camera pose and deformation for SLAM inside the human body
Monocular SLAM in deformable scenes will open the way to multiple medical applications like computerassisted navigation in endoscopy, automatic drug delivery or autonomous robotic surgery. In this paper we propose a novel method to simultaneously track the camera pose and the 3D scene deformation, without any assumption about environment topology or shape. The method uses an illumination-invariant photometric method to track image features and estimates camera motion and deformation combining reprojection error with spatial and temporal regularization of deformations. Our results in simulated colonoscopies show the method's accuracy and robustness in complex scenes under increasing levels of deformation. Our qualitative results in human colonoscopies from Endomapper dataset show that the method is able to successfully cope with the challenges of real endoscopies: deformations, low texture and strong illumination changes. We also compare with previous tracking methods in simpler scenarios from Hamlyn dataset where we obtain competitive performance, without needing any topological assumption.
I. INTRODUCTION
Visual Simultaneous Localization and Mapping (SLAM)
and Visual Odometry (VO) in static environments have been hot research topics in the last decades, and many methods have arisen to solve them with outstanding accuracy and robustness using features [1], direct methods [2], or hybrid techniques [3]. The increasing popularity of these techniques has raised expectations to solve SLAM in more complex scenarios. For example, one can think of many useful applications of SLAM in Minimally Invasive Surgery (MIS), like guiding surgeons through augmented reality annotations towards the place where a polyp was detected in a previous exploration, and automatic polyp measurement to analyze its evolution. Moreover, surgical robots would greatly benefit from SLAM inside the human body as they will be more secure, robust and accurate, and they will be able to combine information coming from previous explorations or from other sensors like Computerized Tomography (CT).
However, visual SLAM inside the human body poses tremendous challenges like weak texture, changing illumination, specular reflections, and lack of rigidity (Fig. 1). Weak texture and specular reflections hinder data association algorithms based on feature matching, preventing methods like ORB-SLAM3 [4] from working in these sequences. On the other hand, changing illumination puts direct methods like DSO [2] or DSM [5] and hybrid methods like SVO [3] in serious trouble, as they assume constant illumination of the environment. In contrast, we solve data association with a modified Lucas-Kanade algorithm, first presented in [6], that is able to cope with local illumination changes.
But the major challenge to be addressed for SLAM inside the human body is deformable scenes, as breaking up with the rigidity assumption impairs both environment reconstruction and camera tracking. The recent DefSLAM [7] is the first monocular deformable SLAM system able to perform tracking and mapping, but it strongly relies on the assumption of a smooth continuous shape with planar topology, which does not hold in colonoscopies (see Fig. 1).
In this paper we present the first pure monocular method able to initialize a map and track camera pose and scene deformation in general scenes inside the human body ( Fig. 1), without any topological or shape assumption. Our main contribution is a simple formulation that combines photometric feature tracking and an optimization based on reprojection error with spatial and temporal regularizers that encode local assumptions over the environment deformation, endowing our algorithm with enough expressivity to model complex scenes and track their deformations in real-life endoscopies. We provide quantitative evaluation on realistic colonoscopy simulations [8] and qualitative results on real human colonoscopies from the Endomapper project [9], that were out of reach for previous techniques. We present quantitative comparisons in almost-planar scenes from Hamlyn dataset [10] where we obtain competitive performance, despite not using any assumption on the scene topology or shape.
II. RELATED WORK
The computer vision and robotics communities have developed excellent rigid visual SLAM systems [2] [3] [4] in recent years. While all these algorithms use quite different techniques, they all rely on a vital, yet simple, assumption: that the environment is static. In contrast, deformable SLAM, which completely breaks with the rigidity assumption, is still a challenging research topic.
Many works have tried to solve deformable SLAM by using sensors that provide complete 3D information of the environment, like stereo or RGB-D cameras. This is the case of the seminal DynamicFusion [11], which uses RGB-D images to reconstruct highly deforming environments with an Iterative Closest Point (ICP) algorithm and a spatial regularizer to constrain deformations of points that are close to each other, which we adopt in our work. Several extensions to DynamicFusion have been developed since then, the most notable being VolumeDeform [12], which combines the use of SIFT features and reprojection error with a dense ICP data term to reduce drift and increase robustness. In [13] the authors formulate a variational method that takes RGB-D images to reconstruct a deforming environment. This work is later extended in [14] by introducing camera pose computation.
There is increasing interest in SLAM in Minimally Invasive Surgery (MIS), where RGB-D sensors are not available. For this reason, works like [15] [16] use depth coming from stereo images and an error function that combines reprojection errors and regularizers to perform deformable SLAM. As before, the reprojection error is augmented using other 3D terms like ICP errors or point-to-plane errors. Regarding the regularizers used, they are similar to the one introduced in DynamicFusion to represent that deformations occur locally, using pair-wise deformation terms between close points, which prevents divergence of individual points in the reconstruction.
Nevertheless, stereo cameras are not appropriate for certain applications like colonoscopies, where two cameras with enough baseline may not fit in the body cavities. In this kind of scenario, only monocular deformable SLAM can be performed. This is an even harder problem since no real 3D information is available from a single view, scale is unobservable, and combining multiple views of a deforming scene is an open issue. The first monocular deformable SLAM system is DefSLAM [7], which splits the deformable SLAM problem into two threads, one for tracking and one for mapping. It uses ORB features and minimizes a reprojection error term with a deformation energy term that penalizes stretching and bending of the imaged surface. However, as ORB features are quite unstable in intracorporeal sequences, SD-DefSLAM [6] extends DefSLAM to a semi-direct method by integrating an illumination-invariant Lucas-Kanade tracker to perform data association, achieving better robustness and accuracy. Crucially, both methods assume that the surface has planar topology, and model the surface with a triangle mesh, which imposes a strong global condition on the environment: the imaged surfaces have to be continuous with no holes. This is quite a strong assumption that seriously limits the kind of scenes that can be handled by both algorithms, excluding, for example, colonoscopies (Fig. 1).
To tackle this limitation, [17] proposes a fully photometric algorithm to track camera pose and deformation using sparse 3D surfels (surface elements) under the assumption of local isometry. The use of surfels that have no constraints between them allows modelling any kind of topology. While obtaining very good results in medical scenes, the method still requires 3D information coming from a stereo camera to initialize the surfels. Also, the use of large surfels (in practice, square patches of 23 × 23 pixels in the image) can easily violate the local isometry assumption and is inefficient, as using too many close pixels provides redundant information with little to no improvement in accuracy [2]. Furthermore, the regularizers used impose small deformations with respect to a pose at rest, which can be inappropriate in many applications.
In contrast, in this work we propose a pure monocular method for tracking camera pose and deformation. Under the assumption of slow deformations, we perform fully automatic monocular map initialization to obtain a first seed of the environment structure. Following previous works [3], [6], we use photometric feature tracking for robustness and accuracy, and reprojection error for convergence and efficiency during optimization. In addition, we integrate two regularizers that encode our assumptions of smooth and slow deformation in order to constrain the reconstruction problem.
III. DEFORMABLE TRACKING
This section is devoted to presenting our tracking algorithm. We first present the assumptions that govern our system. Afterwards we introduce our data association for tracking. Then we present our algorithm for monocular map initialization, and finally we present our optimization backbone and its formulation encoding each one of our assumptions.
A. Assumptions
The biggest difficulty when dealing with deformable scenarios is that the rigid assumption is violated. This makes camera pose and deformation estimation a non-separable problem for which infinite solutions arise, i.e. not all degrees of freedom (DoF) are observable. This is drastically worse when using a single monocular camera, as the scale is also unknown. For all these reasons, one must incorporate some a priori knowledge into the problem in order to confine the possible solutions to a reduced set that correctly represents the real nature of the environment. In this paper, we propose the following assumptions to constrain our reconstruction problem: 1) Local isometry: we assume that the vicinity of a surface point follows an isometric model, that is, local distances are preserved. 2) Smooth deformation: we consider that points that are close in space must undergo similar deformations.
3) Slow deformation: deformations are assumed to happen slowly over time. 4) Camera motion is faster than deformation: finally, we assume that the camera can move faster than deformations, so we attribute rigid motions to the camera, computing deformations as small as possible. Assumption (1) enables us to use a photometric feature tracker, defining a small neighbourhood around each tracked point that is assumed to be locally rigid. This allows us to take an approach similar to [18] to perform short-term data association. Assumption (2) introduces local constraints on the observed deformations without imposing a global deformation or surface model. This effectively makes our system general enough to model any environment. Assumption (3) allows us to impose temporal continuity on the position of surface points, reducing the effect of image and data association noise. Finally, Assumption (4) is the one that allows us to separate camera motion and environment deformation. In all SLAM systems, the sensor provides relative information, and as a result, the absolute pose of camera and environment is not observable. In rigid SLAM this is simply addressed by choosing an arbitrary global pose, for example the first camera pose is chosen to be zero. In deformable SLAM this is not enough, as a camera motion is indistinguishable from a hypothetical case where all the environment moves rigidly, what is called the floating map ambiguity in [17]. This assumption allows us to use regularizers that penalize deformation over camera motion. In that way, the rigid part of the relative motion between environment and camera will be attributed to camera motion, obtaining deformations as small as possible.
It is important to note that none of the above assumptions imposes global constraints on the surface topology, smoothness, or deformation, allowing us to model generic deformations and environments.
B. Data Association
Our previous experience in monocular deformable SLAM [6] has proven that accurate data association is crucial in order to reach good accuracy and robustness. Indeed, other works have shown the potential of direct methods for this task, like [2], in which the photometric term yields the feature associations as a byproduct of tracking. This is done by imposing a global rigid transformation on all points, as it is assumed that the environment is stationary. However, this is far from true in deformable SLAM. Indeed, one cannot impose any global constraint on the data association step, as it is easily violated by deformations.
For that, we propose to perform photometric data association with Shi-Tomasi features [19] prior to the camera pose and deformation estimation, using the modified multi-scale Lucas-Kanade algorithm proposed in [6]:
$$\arg\min_{\mathbf{d},\alpha,\beta} \sum_{\mathbf{v} \in P(\mathbf{u})} \left( I_0(\mathbf{v}) - \alpha I_t(\mathbf{v} + \mathbf{d}) - \beta \right)^2 \qquad (1)$$
where P(u) is a small pixel patch centered at the keypoint u, I_0 is the first frame, where the points are initialized, and I_t is the current frame.
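A minimal numpy sketch (not the C++ tracker of [6]) of the gain-and-bias part of (1): for a fixed displacement d, the optimal α and β are a linear least-squares fit, and the residual is the illumination-compensated photometric error; the displacement itself would be refined by the usual Lucas-Kanade iterations.

```python
import numpy as np

def gain_bias_residual(patch_ref, patch_cur):
    """Closed-form alpha, beta minimizing sum (I0 - alpha*It - beta)^2 over a
    patch, plus the resulting photometric residual vector."""
    x = patch_cur.ravel().astype(float)     # I_t(v + d)
    y = patch_ref.ravel().astype(float)     # I_0(v)
    A = np.column_stack([x, np.ones_like(x)])
    (alpha, beta), *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - alpha * x - beta, alpha, beta
```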
In order to remove any possible outlier track, we compute the Structural Similarity Index (SSIM) [20] between the reference x and tracked y pixel patches to identify any outlier track:
$$\mathrm{SSIM}(x, y) = \frac{(2\mu_x \mu_y + C_1)(2\sigma_{xy} + C_2)}{(\mu_x^2 + \mu_y^2 + C_1)(\sigma_x^2 + \sigma_y^2 + C_2)} \qquad (2)$$
where µ_x and σ_x² are the mean and variance of pixel patch x (and similarly for y), σ_xy is the cross covariance between both patches, and C_1 and C_2 are constant values that avoid instability when the means and variances approach zero. This has been proven to be a good similarity metric for small pixel windows, as it combines in a single metric a luminance, contrast and structure comparison.
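A short sketch of (2) for two equally sized patches is given below; the constants follow the common (0.01·L)² and (0.03·L)² choice with intensities assumed normalized to [0, 1], which may differ from the implementation used in the paper.

```python
import numpy as np

def ssim_patch(x, y, c1=1e-4, c2=9e-4):
    """SSIM of two patches of equal size (intensities in [0, 1])."""
    x, y = x.ravel().astype(float), y.ravel().astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cxy + c2)) / ((mx**2 + my**2 + c1) * (vx + vy + c2))

# tracks with ssim_patch(ref, cur) below the chosen threshold would be rejected
```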
C. Monocular map initialization
Initializing a map from monocular images in rigid environments is well known in Structure from Motion (SfM). In deforming environments, Non-Rigid Structure from Motion (NRSfM) techniques can be used [7]. However, they require, for example in the map initialization, assumptions such as a smooth scene surface with planar topology, which are not met in real colonoscopies (see Fig. 1 and 2).
We propose to exploit assumption (4) using two close frames in which the environment can be considered quasi-rigid, and most image innovation can be attributed to the camera motion. This allows us to apply SfM to obtain a first estimation of the map as if it were rigid, treating any deformation as small noise.
Ideally, the method should be independent of the camera model, either pinhole or fish-eye. We propose to initialize the monocular map by computing the Essential matrix between two close frames, using as input normalized projective rays from features in the images. Our proposed initialization algorithm goes through the following steps:
1) Extract Shi-Tomasi features evenly distributed in the reference frame I_0 and track them in the current frame I_t using the Lucas-Kanade optical flow algorithm. Unproject the matched features into normalized rays x^0_i and x^t_i using the camera model unprojection function.
2) Compute an Essential matrix that relates poses of the two frames:
$$\mathbf{x}_i^{t\,\top} \mathbf{E}\, \mathbf{x}_i^{0} = 0 \qquad (3)$$
This is done inside a RANSAC scheme to reduce the influence of outliers coming from the data association. 3) Recover the relative camera motion T C t C 0 from E.
This yields 4 motion hypotheses (2 rotations and 2 translations). We are using close frames to initialize, hence the camera rotation should be small, so we can safely select the smallest rotation to solve the rotation ambiguity. Finally, we disambiguate the translation component by selecting the one that yields the highest number of points in front of both cameras. 4) Reconstruct the environment using the recovered camera motion. For that, we use the Inverse Depth Weighted Midpoint [21] to triangulate tracked features, as it provides low 3D-2D errors in low-parallax scenarios. A small illustrative sketch of these steps is given below.
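The sketch below illustrates steps 2-4 for the simpler pinhole case using OpenCV, with `pts0` and `pts1` assumed to be N × 2 float arrays of matched pixel coordinates and `K` the intrinsics; it is not the paper's implementation, which operates on normalized rays (supporting fish-eye models) and triangulates with the inverse-depth weighted midpoint rather than OpenCV's linear triangulation.

```python
import cv2
import numpy as np

def initialize_map(pts0, pts1, K):
    """Two-view initialization: essential matrix with RANSAC, pose
    disambiguation, and linear triangulation of the inlier tracks."""
    E, inliers = cv2.findEssentialMat(pts0, pts1, K, method=cv2.RANSAC,
                                      prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts0, pts1, K, mask=inliers)
    P0 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P1 = K @ np.hstack([R, t])
    Xh = cv2.triangulatePoints(P0, P1, pts0.T, pts1.T)   # 4 x N homogeneous
    return R, t, (Xh[:3] / Xh[3]).T, mask
```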
D. Camera pose prediction
To encode assumption (4), we first estimate a preliminary camera pose T_{C_t W} for time t prior to estimating any deformation. We assume that the camera follows a constant velocity model. This provides us with an initial guess of the camera pose, which is then refined with Non-Linear Least Squares (NLLS) on a reprojection error against the environment geometry observed in the previous frame t − 1. This can be seen as a way to attribute most of the observed image innovation to a camera motion. Note that this is not the final pose we compute, but just a seed for the global optimization of the deformations and camera pose detailed in the next section.
E. Tracking camera pose and deformation
Our goal is, given some feature matches in the current frame u^t_i and the 3D reconstruction at the previous time instant X^{t-1}_i, to estimate the current camera pose T_{C_t W} and the deformation $\boldsymbol{\delta}^t_i$ of each point, such that the current scene can be estimated as $X^t_i = X^{t-1}_i + \boldsymbol{\delta}^t_i$.
For that we introduce a reprojection data term $E^t_{i,\mathrm{rep}}$ along with two regularizers $E^t_{i,\mathrm{spa}}$ and $E^t_{i,\mathrm{tmp}}$ to constrain the deformations in our total cost function $E^t$ for time t, defined by:
$$E^t = \sum_{i \in P} E^t_{i,\mathrm{rep}} + \lambda_{\mathrm{spa}} E^t_{i,\mathrm{spa}} + \lambda_{\mathrm{tmp}} E^t_{i,\mathrm{tmp}} \qquad (4)$$
where P represents the set of points observed in the current frame. Our global problem can be solved with non-linear least squares optimization as:
$$T_{C_t W},\, \boldsymbol{\delta}^t_i = \arg\min_{T_{C_t W},\, \boldsymbol{\delta}^t_i} E^t \qquad (5)$$
Next we define the terms of our cost function $E^t$. 1) Reprojection term: we obtain feature matches $\hat{u}^t_i$ in the current frame with the modified Lucas-Kanade algorithm presented in [6], and compute the reprojection error as follows:
$$E^t_{i,\mathrm{rep}} = \rho\left( \left\| u^t_i - \hat{u}^t_i \right\|^2_{\Sigma_{\mathrm{rep}}} \right) \qquad (6)$$
where ρ is the Huber robust cost, u_i^t is the match of feature i in the current image I_t, and \hat{u}_i^t is its projection given by:
\hat{u}_i^t = \Pi\left( T_{C_t C_0} \left( \Pi^{-1}(u_i^0, d_i) + X_i^{t-1} + \delta_i^t \right) \right) \qquad (7)
The accuracy of indirect methods is limited by the feature detector resolution (typically no better than 1 pixel). However, matches obtained with photometric methods have subpixel accuracy, boosting in this way the accuracy of our reprojection term while keeping its good convergence basin.
2) Spatial regularizer: following [11], we encode assumption (2) with a regularizer that constrains deformations locally so that they are spatially smooth:

E^t_{i,\mathrm{spa}} = \sum_{j \in G(i)} \rho\left( \lVert w^t_{ij} (\delta_i^t - \delta_j^t) \rVert^2_{\Sigma_{\mathrm{spa}}} \right) \qquad (8)
Here G represents a weighted graph that encodes related points whose deformations should be regularized together. The weight in G of two connected points i and j is w^t_{ij}, which depends on the Euclidean distance between both points at the immediately previous time instant t - 1 and is computed according to the following formula:

w^t_{ij} = \exp\left( -\frac{\lVert X_i^{t-1} - X_j^{t-1} \rVert^2}{2\sigma^2} \right) \qquad (9)
where σ is a radial basis weight that controls the influence radius of each point. This regularizer is crucial as it enforces as-rigid-as-possible deformations and contributes towards a global consistency of the deformations.
3) Temporal regularizer: finally, we add a temporal regularizer on the deformations to represent that they occur slowly over time (assumption (3)):

E^t_{i,\mathrm{tmp}} = \rho\left( \lVert \delta_i^t \rVert^2_{\Sigma_{\mathrm{tmp}}} \right) \qquad (10)
This regularizer also interacts with assumption (4), as it penalizes large deformations that could otherwise be explained by a camera motion.
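The following sketch summarizes how the total cost of Eqs. (4)-(10) could be evaluated. It is illustrative only: the actual system minimizes this cost with Levenberg-Marquardt in g2o, whereas here a plain Python function with a simple pinhole projection and hypothetical containers (matches, X_prev, deltas, graph_w) is used.

# Schematic evaluation of the total cost E^t of Eq. (4) (illustrative sketch).
import numpy as np

def total_cost(T_cw, deltas, X_prev, matches, graph_w,
               sigma_rep=1.0, sigma_spa=10.0, sigma_tmp=10.0,
               lambda_spa=1.0, lambda_tmp=1.0):
    def project(Xc):                      # simple pinhole projection (unit focal length)
        return Xc[:2] / Xc[2]

    def huber(e2, k):                     # Huber robust cost on a squared residual e2
        return e2 if e2 <= k else 2.0 * np.sqrt(k * e2) - k

    E = 0.0
    for i, u_obs in matches.items():
        Xc = T_cw[:3, :3] @ (X_prev[i] + deltas[i]) + T_cw[:3, 3]
        r = (u_obs - project(Xc)) / sigma_rep
        E += huber(r @ r, 5.99)                          # E_rep, Eq. (6), 95% chi2 with 2 DoF
        for j, w_ij in graph_w[i].items():               # E_spa, Eq. (8)
            r = w_ij * (deltas[i] - deltas[j]) / sigma_spa
            E += lambda_spa * huber(r @ r, 7.81)         # 95% chi2 with 3 DoF
        r = deltas[i] / sigma_tmp                        # E_tmp, Eq. (10)
        E += lambda_tmp * huber(r @ r, 7.81)
    return E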
IV. EXPERIMENTS
We evaluate our method in Minimally Invasive Surgery sequences, more specifically in colonoscopies. These sequences pose significant challenges, as they exhibit continuous deformations, poor texture and harsh illumination conditions. We provide quantitative results in photorealistic synthetic data and qualitative experiments with in-vivo human colonoscopies. For comparison purposes, we also test our method on the Hamlyn dataset, using its stereo setup to evaluate our reconstructions against other state-of-the-art methods. A summary of our main results can be seen in Fig. 3.
A. Implementation details
We implement the monocular map initialization, camera pose and deformation estimation in C++. For Non-Linear Least Squares optimization we use the Levenberg-Marquardt algorithm implemented in the g2o library [22]. For feature extraction and matching we implement our own Shi-Tomasi feature extractor and Lucas-Kanade tracker (Eq. 1). We set a threshold of 0.8 for the SSIM score (Eq. 2) to detect and reject spurious feature tracks. Regarding the optimization, we set Σ_rep to 1 pixel, and Σ_spa and Σ_tmp to 10 mm. Since Σ_spa and Σ_tmp already correctly scale the E_spa and E_tmp terms, we set their respective λ to 1. Finally, for the Huber cost threshold we use the 95th percentile of the χ² distribution, with 2 DoF for E_rep and 3 DoF for E_spa and E_tmp.
Regarding the regularization graph G, for each point we only add regularization terms with its K = 20 closest points in 3D, with σ set to 15 mm when initializing the map with the monocular camera and 55 mm when using the stereo to get the first map reconstruction. This ensures that a point is always regularized while ignoring points that have little influence on the current point, reducing the computational burden.
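A possible construction of the graph G with the weights of Eq. (9) is sketched below; the brute-force nearest-neighbour search and the data layout are illustrative assumptions, not the actual implementation.

# Building the regularization graph with Gaussian weights, Eq. (9) (illustrative sketch).
import numpy as np

def build_graph(X_prev, K=20, sigma=15.0):
    """X_prev: (N, 3) array of map points at time t-1 (millimetres).
    Returns, for every point i, a dict {j: w_ij} over its K nearest neighbours."""
    graph = {}
    for i, Xi in enumerate(X_prev):
        d2 = np.sum((X_prev - Xi) ** 2, axis=1)
        neighbours = np.argsort(d2)[1:K + 1]          # skip the point itself
        graph[i] = {int(j): float(np.exp(-d2[j] / (2.0 * sigma ** 2)))
                    for j in neighbours}
    return graph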
B. Simulated Colon dataset
We use VR-Caps [8] to generate photorealistic synthetic image sequences of a 3D colon model obtained from a Computed Tomography. Since this is a simulation, we have full access to the camera pose and 3D scene ground truth. Indeed, we can generate sequences with different camera trajectories and degrees of deformation, enabling us to test each one of the components of our system individually.
For evaluation purposes, we simulate an insertion maneuver (Fig. 3b) with different degrees of deformation. We model the deformations via a sine wave propagating along the simulated colon. We apply these deformations to the y coordinate of the surface points, simulating peristaltic movements according to the following formula:
V_y^t = V_y^0 + A \sin\left( \omega t + V_x^0 + V_y^0 + V_z^0 \right) \qquad (11)
where V_x^0, V_y^0 and V_z^0 are the coordinates of the surface point at rest. We can control the magnitude and velocity of the deformations through the parameters A and ω, respectively. Table I shows the reconstruction accuracy of our system in the simulated sequence for different deformation velocities and amplitudes. The error reported is the Root Mean Square Error (RMSE) of the reconstructed points over all frames, according to:
e_{\mathrm{rms}} = \sqrt{ \frac{ \sum_i \lVert s_t X_i^t - X_i^{t,\mathrm{gt}} \rVert^2 }{ n } } \qquad (12)
Since this is a fully monocular formulation, we find, for each frame, an optimal scale factor s_t to align our reconstructions with the ground truth.
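The per-frame alignment of Eq. (12) can be sketched as follows; the closed-form least-squares scale is an assumption about how the optimal s_t is obtained.

# Per-frame scale-aligned RMSE, Eq. (12) (illustrative sketch).
import numpy as np

def scale_aligned_rmse(X_est, X_gt):
    """X_est, X_gt: (n, 3) arrays of reconstructed and ground-truth points."""
    s_t = np.sum(X_est * X_gt) / np.sum(X_est * X_est)   # least-squares optimal scale
    rmse = np.sqrt(np.mean(np.sum((s_t * X_est - X_gt) ** 2, axis=1)))
    return s_t, rmse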
Results show that our formulation reaches reconstruction errors of around 2-3 mm even in the presence of deformations. One interesting result is that our system is more sensitive to the deformation velocity than to its magnitude, which is aligned with assumption (3).
C. Hamlyn dataset
We also test our formulation on real endoscopic sequences. For that purpose, we use sequences 20 (Fig. 3c) and 21 (Fig. 3d) of the Hamlyn dataset [10]. Sequence 20 (from frame #750) corresponds to an abdominal exploration with slow deformation. Sequence 21 (also from frame #750) images a liver with 2 lobes, each of them moving on its own, which can be considered an articulated motion. In both sequences, surface texture is poor and illumination conditions are unfavorable. This dataset is recorded with a stereo endoscope, allowing us to estimate the environment ground truth from the disparity observed by the stereo sensor.
We evaluate our formulation in 2 setups (Table II) for comparison purposes. In the first setup, we initialize our system with the first stereo images and perform monocular tracking, in order to allow comparison with the previous methods ORB-SLAM [1], SD-DefSLAM [6] and Direct and Sparse Deformable Tracking (DSDT) [17]. Since we are initializing from the stereo images, we do not perform any scale alignment when computing the RMSE.
We achieve competitive results regarding reconstruction error, obtaining a consistent error of around 1.5 mm. Since we do not impose any restriction on the surface topology or on the deformations, we achieve a significantly smaller error than SD-DefSLAM. This is especially clear in sequence 21, where the 2 lobes moving independently limit the accuracy of SD-DefSLAM. The comparison with DSDT suggests that our regularizers are versatile: we are able to better encode the spatial smoothness of sequence 20, achieving a lower error, while still being competitive in the hard discontinuities of sequence 21. DSDT is able to keep the track longer because, in contrast with DSDT, our method does not yet implement any policy to recover points lost during tracking. The second setup uses the full monocular pipeline, including our monocular initialization (Sec. III-C), computing the RMSE after a per-frame scale correction (Eq. 12). In this scenario, our system reaches errors of around 2.8-3.3 mm, which is aligned with the errors obtained in the simulation dataset under significant deformations. The increase in error compared with the stereo setup is due to the quality of the map initialization, which no longer relies on a perfect stereo initialization.
It is also important to note that in these sequences the surface shapes and deformations observed are completely different from the ones seen in the simulation dataset, showing that we can model general surface shapes and deformations.
D. Real endoscopy sequences
We provide qualitative results in real in-vivo human colonoscopy sequences from the EndoMapper project [9]. These sequences display the big challenges real colonoscopies pose, such as deformation, little to no texture in the images, lighting conditions varying from frame to frame, reflections and fish-eye optics (Fig. 1). In this case there is no ground truth to compare with, because the dataset just records standard monocular endoscope procedures; for this reason, we only provide qualitative results. Fig. 2 displays how we are able to initialize maps with a high density of points from quite close frames (3 frames apart), capturing the tubular topology of the colon. In Figures 1 and 3a, it can be seen how our algorithm is able to capture the scene deformation and the endoscope trajectory, tracking map points for more than 30 frames in two examples of real colonoscopies of two different patients.
V. CONCLUSIONS
In this work we have presented an approach for monocular camera tracking and deformation estimation without assumptions on the environment shape or topology. Instead, we successfully encode with simple regularizers the assumptions about the type of deformations that are common in endoscopy. Compared with the state of the art, our method, including map initialization, is applicable in a much wider range of shapes and topologies, like colonoscopies, while having similar accuracy in more standard almost-planar scenarios.
The presented monocular initialization and tracking contributes to making a fully deformable SLAM system a reality. Deformable mapping to expand the map as the camera explores new regions is closer after our contribution, and is a promising avenue for future work in the short term. In the mid-term, a multi-map deformable SLAM offers a profitable direction for future work, because it will be able to cope with the occlusions and tracking losses prevalent in real colonoscopies.
Fig. 1: Tracking the camera pose and scene deformation in a human colonoscopy from the EndoMapper dataset. Top: images from the sequence. Bottom: camera trajectory in blue, undeformed map points in black and map point deformation trajectories in red.
Fig. 2: Top: two images separated by 3 frames from our EndoMapper dataset with tracked features. Bottom: map initialized from those tracks. The tracking patches are updated every 5 images to account for big scale changes or rotations.
Fig. 3: Results of our algorithm for different sequences. For each sequence, the columns show results after the initial, middle and final frames, with two rows per sequence. The first row displays the 3D reconstruction: black points are the undeformed map, red lines are the map point deformation trajectories, and the camera trajectory is shown in blue. The second row shows the RGB frames with the tracked features in green. From top to bottom: (a) EndoMapper real in-vivo sequence, (b) simulated sequence, (c) Hamlyn 20 sequence and (d) Hamlyn 21 sequence. All datasets have been processed using only monocular images.
TABLE I: Reconstruction RMSE (mm) in simulated colonoscopies [23] for different deformation types.
                               Seq. 20            Seq. 21
                            RMSE    # Fr.      RMSE    # Fr.
Stereo initialization:
  ORB-SLAM3 [4]             1.37     220         -       -
  SD-DefSLAM [6]            4.68     252        6.19    323
  DSDT [17]                 2.9      500        1.3     300
  Ours                      1.48     350        1.55    300
Monocular initialization:
  Ours                      2.79     350        3.31    300
TABLE II: Comparison with previous methods in sequences 20 and 21 from the Hamlyn dataset as shown in [17]. We report reconstruction RMSE (mm) and number of frames processed.
This work was supported by EU-H2020 grant 863146: ENDOMAPPER, Spanish government grant PGC2018-096367-B-I00, Aragón government grant DGA T45-17R, and a PhD scholarship of J. J. Gómez-Rodríguez. The authors are with the Instituto de Investigación en Ingeniería de Aragón (I3A), Universidad de Zaragoza, María de Luna 1, 50018 Zaragoza, Spain. E-mail: {jjgomez, josemari, tardos}@unizar.es.
[1] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM: a versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147-1163, 2015.
[2] J. Engel, V. Koltun, and D. Cremers, "Direct sparse odometry," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 3, pp. 611-625, 2017.
[3] C. Forster, Z. Zhang, M. Gassner, M. Werlberger, and D. Scaramuzza, "SVO: Semidirect visual odometry for monocular and multicamera systems," IEEE Transactions on Robotics, vol. 33, no. 2, pp. 249-265, 2016.
[4] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. Montiel, and J. D. Tardós, "ORB-SLAM3: An accurate open-source library for visual, visual-inertial, and multimap SLAM," IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1874-1890, 2021.
[5] J. Zubizarreta, I. Aguinaga, and J. M. M. Montiel, "Direct sparse mapping," IEEE Transactions on Robotics, vol. 36, no. 4, pp. 1363-1370, 2020.
[6] J. J. Gómez-Rodríguez, J. Lamarca, J. Morlana, J. D. Tardós, and J. M. Montiel, "SD-DefSLAM: Semi-direct monocular SLAM for deformable and intracorporeal scenes," in IEEE Int. Conf. on Robotics and Automation (ICRA), 2021, pp. 5170-5177.
[7] J. Lamarca, S. Parashar, A. Bartoli, and J. Montiel, "DefSLAM: Tracking and mapping of deforming scenes from monocular sequences," IEEE Transactions on Robotics, vol. 37, no. 1, pp. 291-303, 2020.
[8] K. İncetan, I. O. Celik, A. Obeid, G. I. Gokceler, K. B. Ozyoruk, Y. Almalioglu, R. J. Chen, F. Mahmood, H. Gilbert, N. J. Durr et al., "VR-Caps: a virtual environment for capsule endoscopy," Medical Image Analysis, vol. 70, p. 101990, 2021.
[9] "EndoMapper project," https://sites.google.com/unizar.es/endomapper/home, accessed: 2022-02-28.
[10] P. Mountney, D. Stoyanov, and G.-Z. Yang, "Three-dimensional tissue deformation recovery and tracking," IEEE Signal Processing Magazine, vol. 27, no. 4, pp. 14-24, 2010.
[11] R. A. Newcombe, D. Fox, and S. M. Seitz, "DynamicFusion: Reconstruction and tracking of non-rigid scenes in real-time," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2015, pp. 343-352.
[12] M. Innmann, M. Zollhöfer, M. Nießner, C. Theobalt, and M. Stamminger, "VolumeDeform: Real-time volumetric non-rigid reconstruction," in European Conference on Computer Vision. Springer, 2016, pp. 362-379.
[13] M. Slavcheva, M. Baust, and S. Ilic, "SobolevFusion: 3D reconstruction of scenes undergoing free non-rigid motion," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2646-2655.
[14] M. Slavcheva, M. Baust, and S. Ilic, "Variational level set evolution for non-rigid 3D reconstruction from a single depth camera," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 8, pp. 2838-2850, 2020.
[15] J. Song, J. Wang, L. Zhao, S. Huang, and G. Dissanayake, "Dynamic reconstruction of deformable soft-tissue with stereo scope in minimal invasive surgery," IEEE Robotics and Automation Letters, vol. 3, no. 1, pp. 155-162, 2017.
[16] H. Zhou and J. Jayender, "EMDQ-SLAM: Real-time high-resolution reconstruction of soft tissue surface from stereo laparoscopy videos," in International Conference on Medical Image Computing and Computer-Assisted Intervention. Springer, 2021, pp. 331-340.
[17] J. Lamarca, J. J. G. Rodriguez, J. D. Tardos, and J. Montiel, "Direct and sparse deformable tracking," arXiv preprint arXiv:2109.07370, 2021.
[18] B. D. Lucas and T. Kanade, "An iterative image registration technique with an application to stereo vision," in Proceedings of the 7th International Joint Conference on Artificial Intelligence, vol. 2, 1981, pp. 674-679.
[19] J. Shi and C. Tomasi, "Good features to track," in Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, 1994, pp. 593-600.
[20] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, "Image quality assessment: from error visibility to structural similarity," IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, 2004.
[21] S. H. Lee and J. Civera, "Triangulation: why optimize?" arXiv preprint arXiv:1907.11917, 2019.
[22] G. Grisetti, R. Kümmerle, H. Strasdat, and K. Konolige, "g2o: A general framework for (hyper) graph optimization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 9-13.
[23] K. B. Ozyoruk, G. I. Gokceler, G. Coskun, K. Incetan, Y. Almalioglu, F. Mahmood, E. Curto, L. Perdigoto, M. Oliveira, H. Sahin, H. Araujo, H. Alexandrino, N. J. Durr, H. B. Gilbert, and M. Turan, "EndoSLAM dataset and an unsupervised monocular visual odometry and depth estimation approach for endoscopic videos: Endo-SfMLearner," 2020.
| [] |
[
"Complementary aspects of non-equilibrium thermodynamics",
"Complementary aspects of non-equilibrium thermodynamics"
] | [
"Lee Jinwoo \nDepartment of Mathematics\nKwangwoon University\n20 Kwangwoon-ro, Nowon-gu01897SeoulKorea\n",
"Hajime Tanaka \nDepartment of Fundamental Engineering\nInstitute of Industrial Science\nUniversity of Tokyo\n4-6-1 Komaba, Meguro-ku153-8505TokyoJapan\n"
] | [
"Department of Mathematics\nKwangwoon University\n20 Kwangwoon-ro, Nowon-gu01897SeoulKorea",
"Department of Fundamental Engineering\nInstitute of Industrial Science\nUniversity of Tokyo\n4-6-1 Komaba, Meguro-ku153-8505TokyoJapan"
] | [] | Bio-molecules are active agents in that they consume energy to perform tasks. The standard theoretical description, however, considers only a system-external work agent. Fluctuation theorems, for example, do not allow work-exchange between fluctuating molecules. This tradition leaves 'action through work', an essential characteristic of an active agent, out of proper thermodynamic consideration. Here, we investigate thermodynamics that considers internal-work. We find a complementary set of relations that capture the production of free energy in molecular interactions while obeying the second law of thermodynamics. This thermodynamic description is in stark contrast to the traditional one. A choice of an axiom whether one treats a portion of Hamiltonian as 'internal-work' or 'internal-energy' decides which of the two complementary descriptions manifests among the dual. We illustrate, by examining an allosteric transition and a single-molecule fluorescence-resonance-energy-transfer measurement of proteins, that the complementary set is useful in identifying work content by experimental and numerical observation. | null | [
"https://export.arxiv.org/pdf/1705.01234v2.pdf"
] | 208,076,864 | 1705.01234 | 4da693cdbdc13b4fa6cd016e07a157df29567555 |
Complementary aspects of non-equilibrium thermodynamics
Lee Jinwoo
Department of Mathematics
Kwangwoon University
20 Kwangwoon-ro, Nowon-gu01897SeoulKorea
Hajime Tanaka
Department of Fundamental Engineering
Institute of Industrial Science
University of Tokyo
4-6-1 Komaba, Meguro-ku153-8505TokyoJapan
Complementary aspects of non-equilibrium thermodynamics
(Dated: October 12, 2021)
Bio-molecules are active agents in that they consume energy to perform tasks. The standard theoretical description, however, considers only a system-external work agent. Fluctuation theorems, for example, do not allow work-exchange between fluctuating molecules. This tradition leaves 'action through work', an essential characteristic of an active agent, out of proper thermodynamic consideration. Here, we investigate thermodynamics that considers internal-work. We find a complementary set of relations that capture the production of free energy in molecular interactions while obeying the second law of thermodynamics. This thermodynamic description is in stark contrast to the traditional one. A choice of an axiom whether one treats a portion of Hamiltonian as 'internal-work' or 'internal-energy' decides which of the two complementary descriptions manifests among the dual. We illustrate, by examining an allosteric transition and a single-molecule fluorescence-resonance-energy-transfer measurement of proteins, that the complementary set is useful in identifying work content by experimental and numerical observation.
The bio-molecular process is an essential class of chemical reactions, involving big molecules which fold [1], change their configurations [2], and associate and dissociate with each other or with chemicals [3,4] to carry out their biological functions. The energy landscape theory [5][6][7] has successfully explained those molecular phenomena using non-equilibrium statistical mechanics [8][9][10], taking into account fluctuations at the single-molecule level [11][12][13]. The theory, for example, explains molecules' personalities [14] or molecule-to-molecule variations [15] by effective-ergodicity-breaking transitions over a long observation time [16], caused mainly due to the ruggedness of the free energy landscape [17].
T. Hill pioneered thermodynamic formalism of molecular phenomena for a steady-state assuming detailed balance and time-scale separation between slow variables (e.g., molecular machines) and fast variables (e.g., ATP molecules) [18,19]. It enables us to discuss what type of thermodynamic changes have occurred to biological systems, and study the budget of work and entropy production with thermodynamic rigor [20][21][22][23][24][25][26][27].
Beyond steady-states, the study of non-equilibrium thermodynamics is still in progress. Hatano and Sasa have extended the second-law of thermodynamics for transitions between steady states [28]. Seifert has discussed the second-law along a single trajectory for nonequilibrium processes in terms of stochastic entropy that is time-dependent and converges to equilibrium entropy as a system goes to equilibrium [10,29]. Bochkov and Kuzovlev (BK) have introduced work to the formalism of fluctuation theorems [30,31]. Jarzynski and Crooks * e-mail: [email protected] † e-mail: [email protected] have linked non-equilibrium work to equilibrium free energy [32,33]. The authors have related work to point-wise non-equilibrium free energy [34].
Depending on whether work affects the Hamiltonian of a system or not, thermodynamic descriptions of a nonequilibrium process become fundamentally different. In Jarzynski's equality, external perturbation λ t modifies the Hamiltonian and thus equilibrium free energy, deforming the energy landscape itself of a system, while it does not in BK's approach, merely driving a system out of equilibrium on a fixed energy landscape [35,36]. In both cases, λ t that mediates work-exchange (see Fig. 1A) should vary in a pre-determined and repeatable manner [11-13, 37, 38]. Thus if a work source fluctuates (e.g., cases as depicted in Fig. 1B), work fluctuation theorems do not apply.
In this paper, we consider a situation where system-internal perturbation (e.g., the interaction between molecules) drives a non-equilibrium process. We take two different approaches: one is that the interaction between interacting molecules affects the Hamiltonian and thus the equilibrium free energy of a system (see Fig. 1C), and the other is that it does not modify the Hamiltonian but merely drives them out of equilibrium (see Fig. 1D). We refer to the former and the latter as views I and II, respectively. We will prove a work fluctuation theorem in terms of a new quantity Ψ that encodes work for both cases and derive complementary sets of thermodynamic relations for a non-equilibrium process.

FIG. 1. A molecular system z = (x, y) composed of helicase (x; yellow object) and DNA (y; red-black zipper) is schematically represented. In each case, the shaded area in light pink is the location of a work source, and the blue dashed ellipse indicates the boundary of a system, which is defined by the independent variable of the non-equilibrium free energy ψ, as we describe below. A, External control λ_t that varies in a pre-determined manner applies work to the molecular system z. The system includes molecules x and y and is described by ψ(z; λ_t). B, If a work source (helicase) fluctuates, work fluctuation theorems do not apply. The depicted boundary of the system indicates that we are considering ψ(y, ·), including DNA only, upon which helicase (x) works. C, The interaction between the two molecules is considered as a part of internal-energy, which we call view I. D, The interaction between the two molecules is considered as internal-work, which we call view II. In (C, D), we consider ψ(z, ·), so that the system includes both molecules. It is crucial to note the fundamental difference between B and D in the system's boundary: in the former, x works on y, but in the latter, z works on itself.
I. RESULTS
A. A state function Ψ that encodes work
We decompose the phase space of a system into disjoint mesoscopic states {χ j |j = 1, · · · , J}. We also partition the time axis {τ k |k = 0, · · · , K} with τ 0 = 0, depending on the time resolution (δτ k ) of an experiment. An unusual aspect of our approach is to treat an intermediate non-equilibrium state χ j at time τ k as a thermodynamic ensemble, more specifically, an ensemble of paths to the state χ j from an initial probability distribution [34], which enables us to treat χ j with thermodynamic rigor (see Fig. 2).
Let λ_t for 0 ≤ t ≤ τ be an arbitrary process which we repeat with an initial probability distribution p(χ_j, τ_0). We define Ψ for each mesoscopic state χ_j at coarse-grained time τ_k as follows:

\Psi(\chi_j, \tau_k) := -\beta^{-1} \ln \left\langle e^{-\beta \psi(z,t)} \right\rangle_{z \in \chi_j,\, t \in \tau_k}, \qquad (1)
where ψ(z, t) is the local free energy of microstate z at time t, β is the inverse temperature (β := 1/(k_B T), where k_B is the Boltzmann constant and T is the temperature of the heat bath), and the brackets indicate the average over all z ∈ χ_j and t ∈ τ_k with respect to the conditional probability p(z, t|χ_j, τ_k). We remark that Ψ is an original quantity that is different from the average non-equilibrium free energy, since we have Ψ(χ_j, τ_k) ≤ ⟨ψ(z, t)⟩_{z∈χ_j, t∈τ_k}, where the equality holds if and only if local equilibrium holds. The local free energy ψ, and thus Ψ, depend on the internal energy, which differs in the two approaches (views I and II) that we take. Thus we will discuss them in detail below. Temporarily, we indicate the dependency on the internal energy by the subscript v. Let G_v(χ_j; τ_k) be the (time-averaged) conformational free energy of χ_j, i.e.,

G_v(\chi_j; \tau_k) := -\beta^{-1} \ln \left\langle \int_{z \in \chi_j} e^{-\beta E_v(z; \lambda_t)} \, dz \right\rangle_{\tau_k},

where E_v is the system's internal energy, and ⟨·⟩_{τ_k} indicates the average over time t ∈ τ_k. We note that G_v(χ_j; τ_k) is well defined regardless of whether the condition of local equilibrium within χ_j holds or not. Now Eq. (1) implies
p(\chi_j, \tau_k) = \frac{e^{-\beta G_v(\chi_j; \tau_k)}}{e^{-\beta \Psi_v(\chi_j, \tau_k)}}, \qquad (2)
which holds in full non-equilibrium situations (see Appendix A.1 for the proof). If we further assume microscopic reversibility [33,39-41], we have

\Psi_v(\chi_j, \tau_k) = -\beta^{-1} \ln \left\langle e^{-\beta W_{\mathrm{tot}}} \right\rangle_{\chi_j}, \qquad (3)
where W tot := W v + ψ v (z, 0), W v is work, and the brackets indicate the average over all paths to state χ j at time τ k (see Appendix A.2 for the proof). Eq. (3) tells that Ψ v (χ j , τ k ) encodes a property of the ensemble of trajectories that reach χ j at time τ k from an initial probability distribution, more specifically, an amount of work done for reaching state χ j at time τ k . Eq. (3) implies a refined version of the second law of thermodynamics within each ensemble χ j :
\langle W_v \rangle_{\chi_j} \geq \Delta \Psi_v(\chi_j, \tau_k), \qquad (4)

where ΔΨ_v(χ_j, τ_k) := Ψ_v(χ_j, τ_k) − ⟨ψ_v(z, 0)⟩. Here the last term, ⟨ψ_v(z, 0)⟩, is the average non-equilibrium free energy at time 0 (see Appendix A.2 for the proof).
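As an illustration of how Eqs. (1) and (3) could be evaluated numerically from sampled data, the following sketch estimates Ψ with a log-sum-exp; the sampling of the local free energies or of the total work is assumed to be available from simulation or experiment, and the unit choice β = 1 is arbitrary.

# Estimating Psi for a mesostate from samples, Eqs. (1) and (3) (illustrative sketch).
import numpy as np

def psi_state(samples, beta=1.0):
    """samples: 1D array of either local free energies psi(z,t) over (z,t) in
    (chi_j, tau_k) [Eq. (1)] or W_tot = W + psi(z_0,0) over paths ending in chi_j
    [Eq. (3)]. Returns Psi = -beta^{-1} ln < exp(-beta * samples) >."""
    x = -beta * np.asarray(samples, dtype=float)
    m = np.max(x)
    log_mean = m + np.log(np.mean(np.exp(x - m)))   # numerically stable log of the mean
    return -log_mean / beta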
B. A choice of an axiom; internal-work vs. internal-energy
Let us consider a finite classical stochastic system, weakly coupled to the heat bath of inverse temperature β, that is composed of two subsystems. We may consider an active agent to act upon another molecule (see Figs. 1C and 1D). Let z = (x, y), where x (y) denotes a phase space point of each molecule. We set λ_t as follows.
FIG. 2. Non-equilibrium state χ_j as an ensemble of trajectories. The colored surface is a schematic representation of the free energy landscape G_pass of a molecular system in terms of hypothetical reaction coordinates 1 and 2. The molecular states χ_i and χ_j are crystal structures of open and closed conformations of a ribose-binding protein, respectively [42]. Some members of the barrier-overcoming trajectories reaching the closed conformation χ_j are presented as orange dotted curved arrows. Each trajectory consumes work that is internally supplied through the interaction with a ribose. An ensemble average of this work along all paths to χ_j at time τ from an initial distribution p_0(χ_j) determines Ψ_II(χ_j, τ) (see Eq. (3)).

We assume that before time τ_0, the two subsystems are in inert equilibrium with the Hamiltonian
E_{\mathrm{pass}}(z) := E_x(x) + E_y(y), \qquad (5)
where E x and E y are Hamiltonians of each subsystem. At time τ 0 , they start to interact through activating interaction energy E int (x, y):
E_{\mathrm{act}}(z) := E_{\mathrm{int}}(x, y). \qquad (6)
To motivate two different approaches that we take, we start by obtaining the energy balance equation for this process following Jarzynski's approach [32]. We may describe the change of internal energy of the system during the process as follows:
E(z, t) := E_{\mathrm{pass}}(z) + H(t) E_{\mathrm{act}}(z), \qquad (7)
where H(t) is the Heaviside step function, i.e., H(t) := 1 for t > 0 and H(t) := 0 for t ≤ 0, and the time derivative of H(t) gives the Dirac delta function δ(t). Then Jarzynski's work W_jar done on the system along trajectory {z_t}_{0≤t≤τ} is

W_{\mathrm{jar}} := \int_0^{\tau} \frac{\partial E}{\partial t} \, dt = \int_0^{\tau} \delta(t)\, E_{\mathrm{act}}(z_t) \, dt = E_{\mathrm{act}}(z_0), \qquad (8)

which is the interaction energy evaluated at the initial point z_0 [32]. The Jarzynski equation gives the difference between the initial equilibrium free energy A_0 and the final one A_1 in terms of W_jar, telling us that the equilibrium free energy instantly changes due to the activated interaction energy. Now, Eq. (7) and Eq. (8) provide the following energy balance equation:

E_{\mathrm{act}}(z_0) = \Delta E_{\mathrm{pass}} + E_{\mathrm{act}}(z_\tau) + Q_b, \qquad (9)
where ∆E pass := E pass (z τ ) − E pass (z 0 ), and Q b is dissipated heat. We would like to rewrite Eq. (9) into the following canonical form of the first law of thermodynamics:
W_v = \Delta E_v + Q_b, \qquad (10)
where W v is work done on the system, ∆E v is the change of an internal energy, and subscript v indicates the dependency on our different approaches. A comparison of Eq. (9) to Eq. (10) gives us two options; moving E act (z 0 ) in Eq. (9) to the right-hand side or moving E act (z τ ) to the left-hand side. If we move E act (z 0 ) in Eq. (9) to the right-hand side, the first law of thermodynamics reads
0 = \Delta E + Q_b, \qquad (11)
where ΔE := ΔE_pass + ΔE_act, and ΔE_act := E_act(z_τ) − E_act(z_0). From Eq. (A8) and Eq. (A9), E(z) = E_x(x) + E_int(x, y) + E_y(y) is the total energy of the composite system. The local free energy then becomes
\psi_I(z, t) := E(z) - \beta^{-1} \sigma(z, t), \qquad (12)
where σ(z, t) := − ln p(z, t) is the stochastic entropy [29,33]. We define Ψ I (χ j , τ k ) by Eq. (1) accordingly (see Appendix A.3 and A.4). Here subscript I indicates this first approach (v=I), say view I (see Fig. 1C). If we move E act (z τ ) in Eq. (9) to the left-hand side and set
W_{\mathrm{act}} := -\Delta E_{\mathrm{act}}, \qquad (13)
the first-law of thermodynamics reads
W_{\mathrm{act}} = \Delta E_{\mathrm{pass}} + Q_b. \qquad (14)
Contrary to view I, the corresponding local free energy is subtle: in Eq. (13), the conformational change of the molecules releases some energy −ΔE_act, which we treat as internal-work W_act. If we took that portion of energy not only as internal-work but also as a part of the internal-energy, it would violate the first law of thermodynamics, Eq. (14), by counting the same portion of energy twice. Thus, we are forced to have
\psi_{II}(z, t) := E_{\mathrm{pass}}(z) - \beta^{-1} \sigma(z, t). \qquad (15)
We define Ψ_II(χ_j, τ_k) by Eq. (1) accordingly (see Appendix A.3 and A.4). Here subscript II indicates this second approach (v=II), say view II (see Fig. 1D). It is important not to confuse this case, where the system includes both x and y, with that depicted in Fig. 1B, where the system includes only y (cf. [43,44], where a work source as a subsystem does not fluctuate). We remark three things. Firstly, views II and I respectively treat the activated energy E_act(z) as internal-work and internal-energy, which behave in a complementary manner: if one increases during a process, the other decreases, forming a precursor of a drastic divergence of the thermodynamics of the process, as we describe below. Secondly, except for tradition, no physical constraint forces us to select either one of the two approaches, making the choice axiomatic. Thirdly, Eq. (13) can deal with generalized (e.g., entropic) forces by including the relevant degrees of freedom involved in the effects (e.g., energy from chemical potentials by incorporating some solution molecules into the system [45]).
C. A simple process in the two views
Let us consider a molecular system in equilibrium in a hypothetical one-dimensional reaction coordinate χ_j (0 ≤ χ_j ≤ L) (see Appendix B.1 for the details of the simulation). Here we set the initial probability distribution p_0(χ_j) for the system to be in state χ_j at time τ_0 = 0 to be uniform:
p_0(\chi_j) \propto e^{-\beta G_{\mathrm{pass}}(\chi_j)} = |\chi_j|, \qquad (16)
where G_pass(χ_j) := −β^{-1} ln ∫_{z∈χ_j} e^{−βE_pass(z)} dz with E_pass(z) := 0, under the assumption that the volume |χ_j| is constant for all j. At time τ_0, one activates the energy E_act(z) such that the conformational free energy G(χ_j) := −β^{-1} ln ∫_{z∈χ_j} e^{−βE(z)} dz is as depicted in Fig. 3A(6). After the activation at time τ_0, the system evolves towards a new steady state:

p_1(\chi_j) \propto e^{-\beta G(\chi_j)}. \qquad (17)
We assume a linear relationship between forces and fluxes, so that the Fokker-Planck equation governs the dynamics of the probability p(χ_j, τ_k) of finding the system at state χ_j at time τ_k [46,47]. Fig. 3 shows the time evolution of this bare process in terms of Ψ_I (Figs. 3A(1)-(2)) and Ψ_II (Figs. 3A(3)-(4)).
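The one-dimensional example can be reproduced schematically as follows. The sketch below evolves p(χ_j, τ_k) with a simple detailed-balance hopping dynamics (a stand-in for the Fokker-Planck equation) on an arbitrary activated landscape, and evaluates Ψ_I and Ψ_II through Eqs. (18) and (19); all numerical parameters and the landscape itself are illustrative, not those of Fig. 3.

# Schematic simulation of the one-dimensional example, Eqs. (16)-(19).
import numpy as np

beta, J, dt, rate = 1.0, 100, 1e-3, 1.0
x = np.linspace(0.0, 1.0, J)
G = 4.0 * np.sin(2.0 * np.pi * x)            # activated landscape G(chi_j) (arbitrary)
p = np.full(J, 1.0 / J)                      # initial equilibrium for E_pass = 0
p0, p1 = p.copy(), np.exp(-beta * G) / np.exp(-beta * G).sum()
A0 = -np.log(float(J)) / beta                # initial free energy (up to a constant volume factor)
A1 = -np.log(np.exp(-beta * G).sum()) / beta # final free energy in the same discrete convention

def step(p):
    """One explicit Euler step of nearest-neighbour hopping with Metropolis rates."""
    q = p.copy()
    for j in range(J - 1):
        kf = rate * min(1.0, np.exp(-beta * (G[j + 1] - G[j])))   # j -> j+1
        kb = rate * min(1.0, np.exp(-beta * (G[j] - G[j + 1])))   # j+1 -> j
        flux = kf * p[j] - kb * p[j + 1]
        q[j] -= dt * flux
        q[j + 1] += dt * flux
    return q

for k in range(5000):
    p = step(p)
Psi_I = A1 + np.log(p / p1) / beta           # Eq. (18): relaxes towards A1
Psi_II = A0 + np.log(p / p0) / beta          # Eq. (19): accumulated internal work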
In view I, the initial equilibrium distribution p 0 (χ j ) suddenly becomes a non-equilibrium one due to the activated energy E act (z). In detail, the work fluctuation theorem, Eq. (3), applied at time τ 0 using Jarzynski's work W jar in Eq. (8) together with energy balance Eq. (9) gives Ψ I (χ j , τ 0 ) − A 0 , which is the amount of abrupt change from the initial equilibrium free energy A 0 due to the activated energy E act (z) (see Appendix A.3 for the proof). For time τ k > τ 0 , the deviation of the probability p(χ j , τ k ) from the final probability p 1 (χ j ) would cause the dynamics of p. Eq. (2) implies
\Psi_I(\chi_j, \tau_k) = A_1 + \beta^{-1} \ln \frac{p(\chi_j, \tau_k)}{p_1(\chi_j)}, \qquad (18)
where A_1 := −β^{-1} ln Σ_{χ_j} ∫_{z∈χ_j} e^{−βE(z)} dz is the final equilibrium free energy. Thus, ΔΨ_I := Ψ_I(χ_j, τ_k) − A_1 in Figs. 3A(1)-(2) indicates the extent to which each state χ_j is away from the final state, and the imbalance of Ψ_I(χ_j, τ_k) in χ_j leads the destination of the ensemble of paths to the steepest descent direction of Ψ_I until the ensemble resolves the imbalance: Ψ_I(χ_j, τ_∞) = A_1 for all χ_j [34]. Taking the average of Eq. (18) with respect to p(χ_j, τ_k) over all χ_j gives ⟨Ψ_I(χ_j, τ_k)⟩_j = A_1 + D(p(χ_j, τ_k) ‖ p_1(χ_j)), where D(p ‖ p_1) is the Kullback-Leibler divergence, which is positive and converges to 0 as p approaches p_1 (see the solid blue curve in Fig. 3B). Thus, we have

⟨Ψ_I⟩_j tends to be minimized.
In short, view I defines the thermodynamics of the process as the relaxation of a non-equilibrium state towards the equilibrium state.
In view II, Eq. (2) implies
\Psi_{II}(\chi_j, \tau_k) = A_0 + \beta^{-1} \ln \frac{p(\chi_j, \tau_k)}{p_0(\chi_j)}, \qquad (19)
where A_0 := −β^{-1} ln Σ_{χ_j} ∫_{z∈χ_j} e^{−βE_pass(z)} dz is the initial equilibrium free energy. The effect of the activation of E_act(z) is not immediate; it changes neither the internal energy of the system, Eq. (15), nor Ψ_II at time τ_0 in Eq. (19). As the molecule changes its conformation such that it generates power as internal-work as in Eq. (13), it increases Ψ_II, as Eq. (3) indicates. Thus ΔΨ_II(χ_j, τ_k) := Ψ_II(χ_j, τ_k) − A_0 in Figs. 3A(3)-(4) shows the amount of internal work that the ensemble of molecules has generated to form χ_j at time τ_k. Taking the average of Eq. (19) over χ_j gives ⟨Ψ_II(χ_j, τ_k)⟩_j = A_0 + D(p(χ_j, τ_k) ‖ p_0(χ_j)), where D(p ‖ p_0) becomes larger as p continues to deviate from p_0 as the system-internal work accumulates (see the solid pink curve in Fig. 3B). Thus, we have (see also Appendix A.5)

⟨Ψ_II⟩_j tends to be maximized.
We note that this statement does not violate the second law of thermodynamics which reads as follows:
\langle W_{\mathrm{act}} \rangle \geq \langle \Delta \Psi_{II} \rangle_j. \qquad (20)
Eq. (20) follows from Eq. (4) by taking the average over all χ_j.
FIG. 3. A(1)-(2), ΔΨ_I(χ_j, τ_k) (in units of k_B T) along the reaction coordinate χ_j: (1) shows ΔΨ_I(χ_j, τ_k) from τ_0 to τ_100 and (2) that from τ_101 to τ_5000. A(3)-(4), ΔΨ_II(χ_j, τ_k) := Ψ_II(χ_j, τ_k) − A_0, where A_0 is the initial equilibrium free energy, indicates the amount of work that the ensemble of molecules has generated to form χ_j at time τ_k from an initial equilibrium state: (3) shows ΔΨ_II(χ_j, τ_k) from τ_0 to τ_100 and (4) that from τ_101 to τ_5000. A(5), The color code for (1)-(4) is shown. A(6), The conformational free energy G(χ_j) for a hypothetical one-dimensional reaction coordinate χ_j is shown. B, The solid pink curve shows ⟨Ψ_II⟩_j, the average of Ψ_II over all χ_j, for τ_0 < t < τ_4000, and the solid blue curve is for ⟨Ψ_I⟩_j. The dotted curves are those quantities for the reverse process. E_act(χ_j) := −β^{-1} ln ⟨e^{−βE_act(z)}⟩_{p_0(z|χ_j)}, where the brackets indicate the average over all z ∈ χ_j with respect to p_0(z|χ_j), is the instant Jarzynski's work, and D indicates the Kullback-Leibler divergence. ⟨−E_act(χ_j)⟩_{p(χ_j,τ_k)} indicates the average of −E_act(χ_j) over all χ_j with respect to p(χ_j, τ_k).
In summary, view II defines the thermodynamics of the process as the accumulation of internal-work that drives the system away from equilibrium. We note that Eq. (18) and Eq. (19) hold in full non-equilibrium situations with no time-scale separation assumption between fast (z ∈ χ j ) and slow (χ j ) variables (see Appendix A.4 for additional relations in the two views).
D. A work content of each state
The work content revealed from view II provides more valuable information in realistic cases than in previous artificial ones, as we illustrate below.
Ribose-binding protein
We consider the allosteric transition of ribose-binding protein (RBP), which is an essential class of biomolecular reactions (see also Appendix B.2). We set
E_{\mathrm{pass}}(z) := E_{\mathrm{RBP}}(x) + E_{\mathrm{rib}}(y), \qquad E_{\mathrm{act}}(z) := E_{\mathrm{int}}(x, y),
where E_RBP(x) and E_rib(y) are the internal energies of RBP and ribose, respectively, E_int(x, y) gives the interaction between them, and z = (x, y) is a phase-space point of the system. The reaction coordinate that we consider is the angle θ between the centers of mass of the two domains of RBP and the center of mass of the hinge (see the crystal structures of open and closed conformations in Fig. 2).
Before time τ_0, the system is in inert equilibrium with the Hamiltonian E_pass(z). The light green curve in Fig. 4A, from the computer simulations in [48], represents the energy landscape

G_{\mathrm{pass}}(\theta) := -\beta^{-1} \ln \int_{z \in \theta} e^{-\beta E_{\mathrm{pass}}(z)} \, dz, \qquad (21)
where the open state (θ ≥ 125 • ) is more stable than the closed state (θ < 125 • ). It indicates that ribose-binding is an active process that should overcome the free energy barrier G pass (θ) over 124 • ≥ θ ≥ 112 • separating the two states. At time τ 0 , one activates E act (z), which we treat as internal-work. Then, the non-equilibrium probability distribution in Eq. (2) reads:
p(\theta, \tau_k) = \frac{\exp\left[-\beta G_{\mathrm{pass}}(\theta)\right]}{\exp\left[-\beta \Psi_{II}(\theta, \tau_k)\right]}, \qquad (22)
which decomposes a molecular state into two contributions: one that forms the free energy barrier and the other that overcomes the barrier to perform a task. The red curve in Fig. 4A, calculated using Eq. (19) on the data from [48], represents the work content Ψ_II(χ_j, τ_∞).
Here we define ΔΨ_II(χ_j, τ_∞) := Ψ_II(χ_j, τ_∞) − A_0, from which we can, for example, obtain the following information. We have two remarks. Firstly, ΔΨ_II(112°, τ_∞) ≈ ΔG_pass(112°) (:= G_pass(112°) − G_pass(128°)), where G_pass(128°) is the conformational free energy of the most populated state in the initial ribose-free condition. This seems reasonable, since Eq. (4) tells us that ΔΨ_II(χ_j, τ_∞) forms the minimum of the average internal-work ⟨W_act⟩_{χ_j} for reaching χ_j from the initial equilibrium ensemble, in which the most populated state is dominant. Secondly, a sharp drop in Ψ_II(θ, τ_∞) for 108° ≤ θ ≤ 112° indicates that the cause of binding in this regime is not the interaction energy between RBP and ribose (since −W_act = ΔE_int > 0 in this regime), but a structural preference of RBP (and ribose) for a more closed state, since G_pass(108°) < G_pass(112°), where G_pass(θ) = −β^{-1} ln ∫_{(x,y)∈θ} e^{−β(E_RBP(x)+E_rib(y))} dx dy.
In this way, view II provides us with invaluable intuitive information on the biological reaction.
Single-molecule FRET measurement
The new perspective (view II) also enables us to extract from a kinetic experiment of single molecules a work content during molecular interactions under conditions far from equilibrium. We analyze an experiment reported in [49] that uses a microfabricated laminar-flow mixer coupled to the measurement of Förster resonance energy transfer (FRET) efficiencies of individual protein molecules under an abrupt change of solution conditions (see also Appendix B.3).
In detail, the cold shock protein (Csp), labeled with fluorescent donor and acceptor dyes at the terminal Cys residues, is initially under equilibrium in pH 7 phosphate buffer. Unfolding is triggered by mixing the solution with a denaturant, 8 M guanidinium chloride (GdmCl). After mixing is completed, by about 50 ms, FRET efficiencies are measured at chosen regions, which correspond, via the fixed flow rate of their experimental setup, to times τ_k ≈ 0.1, 0.2, 0.5, 1.0, 2.0, and 4.0 seconds after triggering unfolding [49].
GdmCl molecules unfold a protein by engaging in hydrogen bonds with the protein backbone or solvating the charged residues of a protein, directly altering electrostatic interactions [50]. Thus we can extract the amount of electrostatic interactions from the kinetic experiment by setting:

E_{\mathrm{pass}}(z) := E_{\mathrm{CspInBuffer}}(x) + E_{\mathrm{GdmCl}}(y), \qquad E_{\mathrm{act}}(z) := E_{\mathrm{int}}(x, y), \qquad (23)
where E CspInBuffer (x) is the internal energy of the solution (x), i.e. Csp and buffer molecules, E GdmCl (y) is the internal energy of GdmCl molecules (y), and E int (x, y) gives the interaction between the solution and GdmCl. We consider that before time τ 0 , the solution with inert GdmCl is in equilibrium with Hamiltonian E pass (z), and at time τ 0 , the interaction between the solution and GdmCl, which we treat as internal-work, is turned-on.
The reaction coordinate that we consider is the measured FRET efficiency E_m := n_a/(n_a + n_d) at the chosen regions that correspond to the delays τ_k. Here n_d and n_a are the sums (over 30 minutes) of donor counts and acceptor counts, respectively, for each single-molecule event that lasts for 1 ms [50]. We note that a standard correction procedure, well established only recently [51], enables one to convert E_m to the distance between fluorophores using the Förster theory [52,53], so that one can count thermodynamic states [54].
From the relative event probability of measured FRET efficiencies p(E_m, τ_k) at each time τ_k [49], we obtained the temporal change of Ψ_II(E_m, τ_k), which encodes the amount of interaction between the solution and GdmCl molecules in forming state E_m, as shown in Fig. 4B. Here we used Eq. (19) with p_0(E_m), which is the probability of E_m before mixing the denaturant. Fig. 4C shows ΔG_pass(E_m) (:= G_pass(E_m) − A_0), calculated as −β^{-1} ln p_0(E_m).
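The computation behind Figs. 4B and 4C can be sketched directly from the measured histograms using Eq. (19); the histogram inputs below are illustrative placeholders for the digitized event counts.

# Work content from FRET-efficiency histograms, Eq. (19) (illustrative sketch).
import numpy as np

def work_content(hist_0, hist_t):
    """hist_0, hist_t: event counts per E_m bin before mixing and at delay tau_k
    (bins are assumed to have nonzero counts). Returns, per bin,
    beta*(Psi_II(E_m, tau_k) - A_0) and beta*(G_pass(E_m) - A_0)."""
    p0 = hist_0 / hist_0.sum()
    pt = hist_t / hist_t.sum()
    dPsi_II = np.log(pt / p0)        # beta * (Psi_II - A_0), from Eq. (19)
    dG_pass = -np.log(p0)            # beta * (G_pass - A_0)
    return dPsi_II, dG_pass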
We note that E_m ≈ 0.95 corresponds to the folded state, where most of the fluorescence photons are emitted by the acceptor due to a dye-dye separation as small as 1 nm. In such a situation, the excitation of the donor dye by a focused laser results in rapid energy transfer to the acceptor dye [49]. On the other hand, lower FRET efficiencies, E_m ≈ 0.4, indicate a greater dye-dye separation that corresponds to unfolded states.
After time τ_0, staying in the unfolded state requires energy from the heat bath against W_act := −ΔE_act, which favors unfolding. This example also shows the usefulness of the thermodynamic descriptions based on view II.
II. DISCUSSION
The following equality in [55] is well known
\frac{p(z, t)}{p^{\mathrm{eq}}(z, \lambda_t)} = \frac{e^{-\beta \Delta A}}{\left\langle e^{-\beta W(t)} \right\rangle_{z,t}}, \qquad (24)
where W is Jarzynski's work, ∆A := A t −A 0 , and A t and p eq (z, λ t ) are respectively the equilibrium free energy and the equilibrium probability density corresponding to the value of external parameter λ t at time t. Eq. (24) implies
\langle W \rangle_{z,t} - \Delta A \geq \beta^{-1} \ln \frac{p(z, t)}{p^{\mathrm{eq}}(z, \lambda_t)}, \qquad (25)
which is also well-known [55].
With λ_t taken as in Eq. (7), so that the work is Jarzynski's work W_jar of Eq. (8) and ΔA = A_1 − A_0, Eq. (25) reads

\langle W_{\mathrm{jar}} \rangle_{z,t} \geq \Psi_I(z, t) - A_0, \qquad (26)
as we have discussed (see Appendix A.3). Alternatively, with an appropriate non-equilibrium preparation of system's initial state, we may set λ t := E(z) as in Eq. (11) for t ≥ 0 (including t = 0 so that ∆A = 0 like BK's approach), then, Eq. (25) reads by Eq. (18):
0 = \langle W \rangle_{z,t} \geq \Psi_I(z, t) - A_1. \qquad (27)
Conceptually, we have introduced a new perspective that considers system-internal perturbation as thermodynamic work. View II is similar to BK's approach in the sense that work does not modify a system's Hamiltonian. However, it is fundamentally different from BK's approach as well as from conventional approaches, in both of which work is defined as a system-external perturbation. Simply by substituting Jarzynski's work with internal-work as in Eq. (13), Eq. (25) enables us to extract an internal-work content from molecular interactions using Eq. (19):
\langle W_{\mathrm{act}} \rangle_{z,t} \geq \Psi_{II}(z, t) - A_0, \qquad (28)
since we have ∆A = 0.
III. SUMMARY
Conventionally, the thermodynamic description of a system has exclusively considered a system-external work agent, i.e., an external viewpoint. Here we have considered how to make a thermodynamic description from the viewpoint of an internal-work agent. We have introduced a new state function Ψ(χ_j) for mesoscopic state χ_j with no local equilibrium assumption and linked it to the work done by the internal agent. We have derived a complementary set of relations, showing that thermodynamic states seen from an internal-work agent are fundamentally different from those seen from an external viewpoint (compare, for example, Eq. (26) and Eq. (28)). We have demonstrated that the new thermodynamic description based on the system-internal work agent not only provides a fresh perspective on the non-equilibrium evolution of a system, but is also useful for quantifying molecules' efforts, or internal-work, in chemical and biological reactions.

discussions. L.J. was supported by the National Research

Competing financial interests. The authors declare no competing financial interests.
Appendix A: Proof and analysis
We consider a finite classical stochastic system weakly coupled to a heat bath of inverse temperature β := 1/(k B T ) where k B is the Boltzmann constant and T is the temperature of the heat bath. We decompose the phase space of a system into disjoint mesoscopic states {χ j |j = 1, · · · , J}. We also partition the time axis {τ k |k = 0, · · · , K} with τ 0 = 0, depending on the time resolution (δτ k ) of an experiment.
Proof of equation (2)
Let us define the local form of non-equilibrium free energy [34]:
\psi_v(z, t) := E_v(z; \lambda_t) + \beta^{-1} \ln p(z, t), \qquad (A1)
where the subscript v indicates the viewpoint dependency, E_v(z; λ_t) is the internal energy of the system at microstate z, λ_t is the external control, and p(z, t) is the probability density of z at time t. The definition of Ψ in equation (1) implies

e^{-\beta \Psi_v(\chi_j, \tau_k)} = \frac{1}{p(\chi_j, \tau_k)\, \delta\tau_k} \int_{z \in \chi_j,\, t \in \tau_k} e^{-\beta \psi_v(z,t)}\, p(z, t)\, dz\, dt = \frac{1}{p(\chi_j, \tau_k)\, \delta\tau_k} \int_{z \in \chi_j,\, t \in \tau_k} e^{-\beta E_v(z; \lambda_t)}\, dz\, dt = \frac{e^{-\beta G_v(\chi_j; \tau_k)}}{p(\chi_j, \tau_k)}. \qquad (A2)

Here G_v(χ_j; τ_k) := −β^{-1} ln ⟨∫_{z∈χ_j} e^{−βE_v(z;λ_t)} dz⟩_{τ_k}, where the brackets indicate the average over time τ_k. We used Eq. (A1) to obtain the second equality from the first. Rearranging Eq. (A2) with respect to p(χ_j, τ_k) gives:
p(\chi_j, \tau_k) = \frac{e^{-\beta G_v(\chi_j; \tau_k)}}{e^{-\beta \Psi_v(\chi_j, \tau_k)}}, \qquad (A3)
which proves equation (2) in the main text.
Proof of equations (3) and (4)
We (temporarily) consider an arbitrary process λ_t for 0 ≤ t ≤ τ with a well-defined initial probability density p(z, 0) for each microstate z of the system. We denote the phase-space point of the system at time t as z_t. For each trajectory {z_t}_{0≤t≤τ}, we define the time-reversed conjugate as {\tilde{z}_t}_{0≤t≤τ} := {z*_{τ−t}}, where * denotes momentum reversal. We assume that the internal energy is invariant upon momentum reversal and assume microscopic reversibility [33,39-41]:
\frac{p(\{z_t\} \mid z_0)}{\tilde{p}(\{\tilde{z}_t\} \mid \tilde{z}_0)} = e^{\beta Q_b}, \qquad (A4)
where Q_b is the energy transferred to the heat bath, p({z_t}|z_0) is the conditional probability of path {z_t} given the initial point z_0, and \tilde{p}({\tilde{z}_t}|\tilde{z}_0) is that for the reverse process, which is defined by \tilde{\lambda}_t := λ_{τ−t} for 0 ≤ t ≤ τ and \tilde{p}(\tilde{z}_0, 0) := p(z_τ, τ). Now we have

\frac{p(\{z_t\})}{\tilde{p}(\{\tilde{z}_t\})} = \frac{p(\{z_t\} \mid z_0)\, p(z_0, 0)}{\tilde{p}(\{\tilde{z}_t\} \mid \tilde{z}_0)\, \tilde{p}(\tilde{z}_0, 0)} = \exp\{\beta Q_b + \beta \Delta E_v - \beta \Delta \psi_v\} = \exp\{\beta W_v - \beta \Delta \psi_v\}, \qquad (A5)

where Δψ_v := ψ_v(z_τ, τ) − ψ_v(z_0, 0). Here we used the first law of thermodynamics, equation (10), which reads W_v = ΔE_v + Q_b, and Eq. (A1). Focusing on the trajectories Γ_{χ_j,τ_k} that reach χ_j at time τ_k, Eq. (A5) implies

\left\langle e^{-\beta W_{\mathrm{tot}}} \right\rangle_{\chi_j} := \left\langle e^{-\beta W_v - \beta \psi_v(z_0, 0)} \right\rangle_{\chi_j} = \frac{1}{p(\chi_j, \tau_k)\, \delta\tau_k} \int_{\tau \in \tau_k} \int_{\{z_t\} \in \Gamma_{\chi_j, \tau}} e^{-\beta W_v - \beta \psi_v(z_0, 0)}\, p(\{z_t\})\, d\{z_t\}\, d\tau = \frac{1}{p(\chi_j, \tau_k)\, \delta\tau_k} \int_{\tau \in \tau_k} \int_{\{z_t\} \in \Gamma_{\chi_j, \tau}} e^{-\beta \psi_v(z_\tau, \tau)}\, \tilde{p}(\{\tilde{z}_t\})\, d\{\tilde{z}_t\}\, d\tau = \frac{1}{p(\chi_j, \tau_k)\, \delta\tau_k} \int_{\tau \in \tau_k} \int_{z_\tau \in \chi_j} e^{-\beta \psi_v(z_\tau, \tau)}\, p(z_\tau, \tau)\, dz_\tau\, d\tau = e^{-\beta \Psi_v(\chi_j, \tau_k)}, \qquad (A6)

which proves equation (3) in the text. Here we used p(z_τ, τ) = \tilde{p}(\tilde{z}_0, 0), the assumption that the internal energy is invariant upon momentum reversal, and d\{z_t\} = d\{\tilde{z}_t\}. The application of Jensen's inequality to Eq. (A6) implies a refined version of the second law of thermodynamics (equation (4)):

\langle W_v \rangle_{\chi_j} \geq \Delta \Psi_v(\chi_j, \tau_k), \qquad (A7)

where ΔΨ_v(χ_j, τ_k) := Ψ_v(χ_j, τ_k) − ⟨ψ_v(z_0, 0)⟩. Here the last term, ⟨ψ_v(z_0, 0)⟩, is the average of ψ_v(z_0, 0) with respect to p(z_0, 0).
The relationship between the two views and Jarzynski's work
From now on, we restrict our attention to the following process: Before time τ 0 , two subsystems (x and y) are in inert equilibrium with Hamiltonian
E_{\mathrm{pass}}(z) := E_x(x) + E_y(y), \qquad (A8)
where E x and E y are Hamiltonians of each subsystem, and z = (x, y) is a microstate of the total system. At time τ 0 , they start interaction by activating interaction energy E int (x, y):
E_{\mathrm{act}}(z) := E_{\mathrm{int}}(x, y). \qquad (A9)
We define E(z) := E pass (z) + E act (z). Now we take views I and II in combination, instead of taking a single point of view exclusively, and repeat the proof of the work fluctuation theorem, which clarifies the relationship between the two views and the activated energy interpreted as Jarzynski's work. By combining the energy balance equation (9) and Jarzynski's work, we obtain:
W_{\mathrm{jar}} = E_{\mathrm{act}}(z_0) = \Delta E_{\mathrm{pass}} + E_{\mathrm{act}}(z_\tau) + Q_b, \qquad (A10)
together with p(z_0, 0) = exp{β[ψ_II(z_0, 0) − E_pass(z_0)]} from Eq. (A1) in view II, and p(z_τ, τ) = exp{β[ψ_I(z_τ, τ) − E(z_τ)]} from Eq. (A1) in view I. We note that ΔE_pass := E_pass(z_τ) − E_pass(z_0) and ψ_II(z_0, 0) = A_0, where A_0 := −β^{-1} ln Σ_{χ_j} ∫_{z∈χ_j} e^{−βE_pass(z)} dz is the initial equilibrium free energy. Then, Eq. (A5) reads:
\frac{p(\{z_t\})}{\tilde{p}(\{\tilde{z}_t\})} = \exp\{\beta [Q_b + \Delta E_{\mathrm{pass}} + E_{\mathrm{act}}(z_\tau) - \psi_I(z_\tau, \tau) + A_0]\} = \exp\{\beta [W_{\mathrm{jar}} - \psi_I(z_\tau, \tau) + A_0]\}, \qquad (A11)
By following the same argument as in Eq. (A6), we obtain
\Psi_I(\chi_j, \tau_k) = A_0 - \beta^{-1} \ln \left\langle e^{-\beta W_{\mathrm{jar}}} \right\rangle_{\chi_j}. \qquad (A12)
At time τ 0 = 0, Eq. (A12) tells that Jarzynski's work gives Ψ I (χ j , 0) − A 0 , which is the amount of abrupt change from the initial equilibrium free energy A 0 due to the activated energy E act (z 0 ). We may rewrite Eq. (A12) at time τ 0 = 0 as follows:
\Psi_I(\chi_j, 0) - \Psi_{II}(\chi_j, 0) = E_{\mathrm{act}}(\chi_j), \qquad (A13)
where

E_{\mathrm{act}}(\chi_j) := -\beta^{-1} \ln \left\langle e^{-\beta E_{\mathrm{act}}(z_0)} \right\rangle_{p_0(z_0 \mid \chi_j)}. \qquad (A14)
Here the brackets indicate the average with respect to p_0(z_0|χ_j) over all z_0 ∈ χ_j. Now we show that Eq. (A13) holds for all τ_k, and that Jarzynski's work E_act(χ_j) gives the difference between the conformational free energies G_pass(χ_j) := −β^{-1} ln ∫_{z∈χ_j} e^{−βE_pass(z)} dz and G(χ_j) := −β^{-1} ln ∫_{z∈χ_j} e^{−βE(z)} dz.
To this end, we may write p(χ j , τ k ) in Eq. (A3) by taking views II and I, respectively, as follows:
p(\chi_j, \tau_k) = \frac{e^{-\beta G_{\mathrm{pass}}(\chi_j)}}{e^{-\beta \Psi_{II}(\chi_j, \tau_k)}} = \frac{e^{-\beta G(\chi_j)}}{e^{-\beta \Psi_I(\chi_j, \tau_k)}}, \qquad (A15)
which implies
\Psi_I(\chi_j, \tau_k) - \Psi_{II}(\chi_j, \tau_k) = G(\chi_j) - G_{\mathrm{pass}}(\chi_j) \qquad (A16)

for all τ_k. At time τ_0, Eq. (A16) implies

G(\chi_j) - G_{\mathrm{pass}}(\chi_j) = E_{\mathrm{act}}(\chi_j) \qquad (A17)

by Eq. (A13). Combining Eq. (A16) and Eq. (A17) shows that, for all τ_k,
\Psi_I(\chi_j, \tau_k) - \Psi_{II}(\chi_j, \tau_k) = E_{\mathrm{act}}(\chi_j), \qquad (A18)
proving that Jarzynski's work E act (χ j ) links views I and II.
Corollaries
We have two corollaries. Firstly, Eq. (A15) immediately implies the following fluctuation theorem for Ψ that holds for all τ k :
\left\langle e^{-\beta \Psi_{II}(\chi_j, \tau_k)} \right\rangle_{\chi_j} = e^{-\beta A_0}, \qquad \left\langle e^{-\beta \Psi_I(\chi_j, \tau_k)} \right\rangle_{\chi_j} = e^{-\beta A_1}, \qquad (A19)
where A_0 := −β^{-1} ln Σ_{χ_j} ∫_{z∈χ_j} e^{−βE_pass(z)} dz and A_1 := −β^{-1} ln Σ_{χ_j} ∫_{z∈χ_j} e^{−βE(z)} dz are the initial and the final equilibrium free energy, respectively. Here the brackets indicate the average over all χ_j. By Jensen's inequality, Eq. (A19) implies
Ψ II (χ j , τ k ) χj ≥ A 0 Ψ I (χ j , τ k ) χj ≥ A 1 .(A20)
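For readers who want the intermediate step, the passage from Eq. (A19) to Eq. (A20) is the standard convexity (Jensen) argument; a brief sketch, assuming β > 0 and the same average over χ_j as in Eq. (A19):

```latex
% Jensen step from (A19) to (A20): e^{-\beta x} is convex in x, so the
% average of the exponential bounds the exponential of the average.
\begin{align*}
e^{-\beta A_0}
  &= \bigl\langle e^{-\beta \Psi_{II}(\chi_j,\tau_k)} \bigr\rangle_{\chi_j}
   \;\ge\; e^{-\beta \langle \Psi_{II}(\chi_j,\tau_k)\rangle_{\chi_j}}
   \;\Longrightarrow\;
   \langle \Psi_{II}(\chi_j,\tau_k)\rangle_{\chi_j} \ge A_0 ,\\
e^{-\beta A_1}
  &= \bigl\langle e^{-\beta \Psi_{I}(\chi_j,\tau_k)} \bigr\rangle_{\chi_j}
   \;\ge\; e^{-\beta \langle \Psi_{I}(\chi_j,\tau_k)\rangle_{\chi_j}}
   \;\Longrightarrow\;
   \langle \Psi_{I}(\chi_j,\tau_k)\rangle_{\chi_j} \ge A_1 .
\end{align*}
```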
Secondly, the exponents G_pass(χ_j) and G(χ_j) in Eq. (A15) do not imply that the system is in local equilibrium within χ_j. We have the following non-equilibrium equalities for any t:

⟨e^{−βψ_II(z,t)} δ_{χ_j}(z)⟩_z = e^{−βG_pass(χ_j)}, ⟨e^{−βψ_I(z,t)} δ_{χ_j}(z)⟩_z = e^{−βG(χ_j)}, (A21)

where the bracket indicates the average over all z, and δ_{χ_j}(z) = 1 if z ∈ χ_j and 0 otherwise. We remark that Eq. (A21) links two known relations in the literature; one for work and the conformational free energy G(χ_j) [56], and the other for work and ψ(z, t) [34].
Extremal principle in view II
The upper-bound of the average of Ψ II (χ j , τ k ) over all χ j comes from equation (20) by taking view II as follows:
⟨Ψ_II(χ_j, τ_k)⟩_j ≤ A_0 + ⟨W_act⟩, (A22)

where W_act := E_act(z_0) − E_act(z_τ) for a trajectory {z_t}_{0≤t≤τ}, and the average ⟨W_act⟩ is taken over all paths that end at time τ ∈ τ_k. Now we convert this inequality into an equality by analyzing how the average internal-work, ⟨W_act⟩, is consumed during the process. By Eq. (A1), we have
ψ I (z, t) − ψ II (z, t) = E act (z),(A23)
which implies
⟨W_act⟩ = ∆⟨ψ_II⟩_z − ∆⟨ψ_I⟩_z, (A24)
where ∆f := f(τ_k) − f(0) for any function f of time. Here f(τ_k) := ∫_{τ∈τ_k} f(τ) dτ / δτ_k is the time average over all τ ∈ τ_k, so that we have ∆⟨ψ_II⟩_z := ⟨ψ_II(z, τ)⟩_{p(z,τ_k)} − ⟨ψ_II(z, 0)⟩_{p(z,0)} and ∆⟨ψ_I⟩_z := ⟨ψ_I(z, τ)⟩_{p(z,τ_k)} − ⟨ψ_I(z, 0)⟩_{p(z,0)}, where the brackets ⟨·⟩_{p(z,τ_k)} indicate the average over all z and τ ∈ τ_k with respect to p(z, τ) (note that τ_0 := 0). To engage Ψ_II(χ_j, τ_k) with Eq. (A24), we consider p(z, t|χ_j, τ_k) and the locally-equilibrated distribution p_loc^eq(z, t|χ_j, τ_k) within z ∈ χ_j, which reads e^{−βE_pass(z)}/e^{−βG_pass(χ_j)} in view II. Direct calculation of the Kullback-Leibler divergence D(p(z, t|χ_j, τ_k) ‖ p_loc^eq(z, t|χ_j, τ_k)) using Eq. (A1) and Eq. (A3) gives

⟨ψ_II(z, t)⟩_{p(z,t|χ_j,τ_k)} = Ψ_II(χ_j, τ_k) + D(p(z, t|χ_j, τ_k) ‖ p_loc^eq(z, t|χ_j, τ_k)). (A25)
Taking the average of Eq. (A25) over all χ j with respect to p(χ j , t) gives
⟨ψ_II(z, t)⟩_z = ⟨Ψ_II(χ_j, τ_k)⟩_j + D(p(z, t) ‖ p_loc^eq(z, t)). (A26)

Finally, combining Eq. (A24) and Eq. (A26) proves

⟨W_act⟩ = ∆⟨Ψ_II⟩_j − ∆⟨ψ_I⟩_z + ∆I, (A27)

where ⟨Ψ_II⟩_j indicates the average of Ψ_II(χ_j, ·) over all χ_j, and I(t) := D(p(z, t) ‖ p_loc^eq(z, t)). Eq. (A27) tells us that some of the internal-work increases ⟨Ψ_II⟩_j, some dissipates, resulting in the irreversible entropy production −∆⟨ψ_I⟩_z, and the rest develops local details away from local equilibrium, D(p(z, ·) ‖ p_loc^eq(z, ·)). Now we return to the inequality for the upper bound of ⟨Ψ_II⟩_j, Eq. (A22); taking τ_k → ∞, we have

⟨Ψ_II(χ_j, ∞)⟩_j ≤ A_0 + ⟨W_act⟩. (A28)
During the process, the total internal-work becomes

⟨W_act⟩ = ⟨E_act(z)⟩_{p_0(z)} − ⟨E_act(z)⟩_{p_1(z)}, (A29)

and the irreversible entropy production −∆⟨ψ_I⟩_z reads

⟨ψ_I(z, 0)⟩_{p_0(z)} − ⟨ψ_I(z, ∞)⟩_{p_1(z)} = A_0 + ⟨E_act(z)⟩_{p_0(z)} − A_1 (A30)

by Eq. (A23). The amount of local details developed away from local equilibrium, ∆I, is

⟨ψ_II(z, ∞)⟩_{p_1(z)} − ⟨Ψ_II(χ_j, ∞)⟩_{p_1(χ_j)} = ⟨E_act(χ_j)⟩_{p_1(χ_j)} − ⟨E_act(z)⟩_{p_1(z)} (A31)

by Eq. (A18) and Eq. (A23), since we have I(0) = 0. We should subtract Eq. (A30) and Eq. (A31) from Eq. (A29), so that the maximally attainable value of ⟨Ψ_II(χ_j, ∞)⟩_j from Eq. (A28) reads

⟨Ψ_II(χ_j, ∞)⟩_j ≤ A_1 − ⟨E_act(χ_j)⟩_{p_1(χ_j)}. (A32)

Now Eq. (A18) tells that

⟨Ψ_II(χ_j, ∞)⟩_j = A_1 − ⟨E_act(χ_j)⟩_{p_1(χ_j)}, (A33)
which proves that ⟨Ψ_II⟩_j tends to be maximized.

The solutions are driven into the mixing region with compressed air and are mixed by the time they have arrived at a point 50 µm beyond the center inlet, which corresponds to 50 ms given the flow velocity of about 1 µm/ms. Actual FRET measurements are made at distances ≥ 100 µm from the center inlet, which corresponds to ≥ 0.1 s. We digitized histograms of the measured FRET efficiency E_m during unfolding and fitted double Gaussians to the data, which is justified since Csp exhibits two-state folding/unfolding kinetics [62,63], as shown in Fig. 5 in the Appendix (we refer the reader to [49] for more details on obtaining E_m). We observe that as time flows, the event probability at lower transfer efficiency (greater dye-dye separation) increases, indicating that non-equilibrium unfolding progresses. We calculated β(Ψ_II(E_m, τ_k) − A_0) as ln(p(E_m, τ_k)/p_0(E_m)), and β(G_pass(E_m) − A_0) as −ln p_0(E_m) for Figs. 4B and 4C.
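As an illustration of the double-Gaussian fitting step described above, the following sketch fits a two-Gaussian mixture to a digitized E_m histogram with SciPy. The array names (em_bins, counts) and the initial guesses are placeholders standing in for the experimental data, not values from the paper.

```python
# Hedged sketch: fit a double Gaussian to a FRET-efficiency histogram.
# `em_bins` (bin centers) and `counts` (relative event probabilities) are
# placeholder arrays for the digitized histogram at a given tau_k.
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(e, a1, mu1, s1, a2, mu2, s2):
    """Sum of two Gaussians, one per state (unfolded / folded)."""
    g1 = a1 * np.exp(-0.5 * ((e - mu1) / s1) ** 2)
    g2 = a2 * np.exp(-0.5 * ((e - mu2) / s2) ** 2)
    return g1 + g2

# Placeholder data: replace with the digitized histogram values.
em_bins = np.linspace(0.0, 1.0, 50)
counts = double_gaussian(em_bins, 0.3, 0.4, 0.08, 0.7, 0.95, 0.05)
counts += 0.01 * np.random.default_rng(0).normal(size=em_bins.size)

# Initial guesses near the unfolded (~0.4) and folded (~0.95) peaks.
p0 = [0.5, 0.4, 0.1, 0.5, 0.95, 0.05]
params, _ = curve_fit(double_gaussian, em_bins, counts, p0=p0)
print("fitted (a1, mu1, s1, a2, mu2, s2):", params)
```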
The use of data from the initial solution condition without GdmCl to obtain p 0 (E m ) for the complex of Csp with inert GdmCl is justified again since the conformational free energy of the complex can be decomposed into that of the solution of Csp and that of GdmCl molecules due to the absence of interactions between them as follows:
FIG. 1. Work in fluctuation theorems.

FIG. 3. The time evolution of Ψ(χ_j, τ_k) and its average ⟨Ψ⟩_j from views I and II. A(1)-(2), ∆Ψ_I(χ_j, ·) := Ψ_I(χ_j, ·) − A_1, where A_1 is the final equilibrium free energy, indicates the extent to which each state χ_j is away from the final equilibrium state.
FIG. 4. A work-content revealed from an equilibrium computer simulation of the ribose-binding protein (RBP) and a non-equilibrium kinetic experiment of a microfabricated laminar-flow mixer coupled to the measurement of Förster resonance energy transfer (FRET) for the cold-shock protein (Csp). (State labels from the schematic: open state without ribose; partially open, ribose-bound; closed, ribose-bound; final closed state with ribose.) A, The light green curve is the conformational free energy G_pass(θ) at the initial ribose-free state and the red curve is Ψ_II(θ, τ_∞), which encodes a work-content for forming the state θ at time τ_∞. B, A work-content of each state characterized by the measured FRET efficiency E_m after an abrupt change at time τ_0 of solution conditions to those that favor unfolded states, E_m ≈ 0.4, from those that favor the folded state, E_m ≈ 0.95. Blue, orange, green, red, purple, and brown curves represent Ψ_II(E_m, τ_k) at τ_k = 0.1, 0.2, 0.5, 1.0, 2.0, and 4.0 seconds, respectively. C, The conformational free energy G_pass(E_m) of Csp in the initial (before mixing) equilibrium solution conditions that favor the folded state is shown.

From Fig. 4A, the ensemble of molecules applies work of ∆Ψ_II(115°, τ_∞) ≈ 3k_BT on average, in the sense of Eq. (3), to reach the right edge of the plateau of G_pass(θ) at θ = 115° and should spend an additional energy of 3k_BT to walk over to the left edge; in total, it applies internal-work of ∆Ψ_II(112°, τ_∞) ≈ 6k_BT in reaching θ = 112° for ribose binding from the initial inert equilibrium state of free energy A_0.
Fig. 4B shows that it ranges from −∆Ψ_II(0.95, τ_k) ≈ 0.4k_BT to 4.0k_BT as time flows, which could occur only very rarely in the end. Most of the interaction energy has been consumed to make Csp unfolded, E_m ≈ 0.4, where we observe that, as time flows, ∆Ψ_II(0.4, τ_k) → ∆G_pass(0.4) := G_pass(0.4) − G_pass(0.95), which is 1.8k_BT.
Technically, we have decomposed Eq. (24) into Eq. (1), Eq. (2), and Eq. (3) so that Eq. (25) reduces to Eq. (4) of view I. Considering, for example, molecular interactions as a system-external perturbation by setting W = W_jar as in Eq. (8) (so that ∆A = 0), Eq. (25) reads, by Eq. (18):
Author Contributions. L.J. and H.T. conceived the work, developed the theory, and wrote the paper.

Additional information. Correspondence and requests for materials should be addressed to L.J. ([email protected]) or H.T. ([email protected]).
FIG. 5. Microfluidic mixer and measured FRET efficiencies. A, A schematic representation of the microfluidic mixer is drawn. Because the flow velocity is about 1 µm/ms, the observation spots 0.1, 0.2, 0.5, 1.0, 2.0, and 4.0 mm away from the center inlet correspond to time delays of τ_k ≈ 0.1, 0.2, 0.5, 1.0, 2.0, and 4.0 s after an abrupt change of solution conditions at time τ_0 = 0 to those that favor unfolding. B, The histogram of measured FRET efficiency E_m in the initial (before mixing) solution conditions that favor folding. C, Histograms of measured FRET efficiency E_m at time τ_k. In B and C, green dotted lines with blue solid-circle ends indicate the values of the relative event probability of E_m digitized from [49], and orange curves are double-Gaussian fits to the data.
p_0(E_m) := ∫_{z∈E_m} e^{−βE_pass(z)} dz / ∫_z e^{−βE_pass(z)} dz = [∫_{x∈E_m} e^{−βE_CspInBuffer(x)} dx ∫_y e^{−βE_GdmCl(y)} dy] / [∫_x e^{−βE_CspInBuffer(x)} dx ∫_y e^{−βE_GdmCl(y)} dy] = ∫_{x∈E_m} e^{−βE_CspInBuffer(x)} dx / ∫_x e^{−βE_CspInBuffer(x)} dx,
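To make the histogram-based estimates concrete, the following minimal sketch computes β(Ψ_II(E_m, τ_k) − A_0) = ln(p(E_m, τ_k)/p_0(E_m)) and β(G_pass(E_m) − A_0) = −ln p_0(E_m) from binned event probabilities; the arrays p0 and p_tk are placeholders, not the measured data.

```python
# Hedged sketch: work-content and conformational free energy from binned
# FRET-efficiency probabilities, following the relations quoted in the text.
# `p0` is the initial (pre-mixing) histogram and `p_tk` the histogram at a
# later time tau_k; both are placeholder arrays normalized to sum to one.
import numpy as np

p0 = np.array([0.02, 0.08, 0.15, 0.25, 0.50])      # p_0(E_m), placeholder
p_tk = np.array([0.10, 0.20, 0.25, 0.25, 0.20])    # p(E_m, tau_k), placeholder

beta_dPsi_II = np.log(p_tk / p0)    # beta * (Psi_II(E_m, tau_k) - A_0)
beta_dGpass = -np.log(p0)           # beta * (G_pass(E_m) - A_0)

print("beta*(Psi_II - A0):", np.round(beta_dPsi_II, 3))
print("beta*(G_pass - A0):", np.round(beta_dGpass, 3))
```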
Acknowledgments. L.J. thanks Chan-Woong Lee for

Appendix B: Simulation details

Activation of bi-stable potential

The initial distribution p_0(χ) for a hypothetical one-dimensional reaction coordinate χ (0 ≤ χ ≤ L) with L = 10 is set to the uniform distribution, and at time τ_0 we activated G(χ), which is the bistable potential represented in Fig. 3A(6). We numerically solved the over-damped Langevin equation 300,000 times, where the thermal fluctuation ξ satisfies the fluctuation-dissipation relation ⟨ξ(t)ξ(t′)⟩ = 2k_BTζ δ(t − t′), and ζ = 1. We partitioned the one-dimensional domain into 50 bins to form {χ_j | j = 1, ..., 50} and counted the number of particles in each bin at each time to obtain Ψ_I(χ_j, τ_k), Ψ_II(χ_j, τ_k), ⟨Ψ_I⟩_j, and ⟨Ψ_II⟩_j in Fig. 3. After integrating over 4000 steps with a 0.01 discretization interval, we deactivated G(χ) to simulate the inverse process and to calculate ⟨Ψ_I⟩_j and ⟨Ψ_II⟩_j.

Ribose binding protein

We have analyzed the simulation results from Ravindranathan et al. [48] to calculate Ψ_II(θ, ∞) and G_pass(θ) in Fig. 4A by setting E_pass(z) := E_RBP(x) + E_rib(y) and E_act(z) := E_int(z), where E_RBP(x) and E_rib(y) are the internal energies of the ribose-binding protein (RBP) and ribose, respectively, E_int(z) gives the interaction between them, and z = (x, y) is a phase-space point of the system. The reaction coordinate is the angle θ between the centers of mass of the two domains of RBP and the center of mass of the three-stranded hinge. We consider the case where, before time τ_0, RBP and ribose are in equilibrium without interaction, and at time τ_0 they start interacting by activating E_act(z). Ravindranathan et al. [48] carried out a total of 32 umbrella-sampling molecular dynamics simulations [57], 16 of RBP with ribose and 16 of RBP without ribose, using the OPLS-AA all-atom force field [58] with the AGBNP implicit solvent model [59]. They took crystal structures of RBP from the RCSB Protein Data Bank [42], added hydrogen atoms, heated the system to 300 K over 3 ps, equilibrated for 225 ps at 300 K, and, after equilibration, gathered data for 800 ps. The time step is 1 fs, and all atoms are treated explicitly. They applied the weighted histogram analysis method [60,61] to obtain the unbiased (independent of biasing potential) population distribution p_0(θ) of ribose-free RBP and p_1(θ) of ribose-bound RBP, which we have digitized to calculate β(Ψ_II(θ, ∞) − A_0) as ln(p_1(θ)/p_0(θ)) and β(G_pass(θ) − A_0) as −ln p_0(θ) for Fig. 4A. The use of data from ribose-free RBP simulations to obtain p_0(θ) for the complex of RBP and ribose with no interaction is justified since the conformational free energy of the complex can be decomposed into that of each molecule, owing to the absence of interactions between them.

Single-molecule FRET measurement

Lipman et al. [49] have measured fluorescence resonance energy transfer (FRET) efficiencies of single molecules under conditions far from equilibrium by coupling a microfabricated laminar-flow mixer, which enables time-resolved measurement of FRET after an abrupt change in solution conditions. They have carried out kinetic experiments for both the folding and the unfolding of the cold-shock protein (Csp). We analyze their data for Csp unfolding. A solution of labeled Csp in phosphate buffer is prepared in the center inlet channel, and 8 M GdmCl molecules are in the two outer inlet channels (see Fig. 5 in the Appendix). Solutions in the inlets are driven to the mixing region with compressed air.
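As a rough computational companion to the bi-stable potential simulation described in Appendix B above, the sketch below integrates an over-damped Langevin equation with the Euler-Maruyama scheme and bins the trajectories to estimate p(χ_j, τ_k). The specific double-well form of G(χ), the trajectory count, and the reflecting boundaries are placeholder choices made for illustration, not the ones used in the paper.

```python
# Hedged sketch: over-damped Langevin dynamics in a bi-stable potential,
# binned into 50 states to estimate p(chi_j, tau_k). The double-well shape,
# trajectory count, and boundary handling are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
L, n_bins, kBT, zeta = 10.0, 50, 1.0, 1.0
dt, n_steps, n_traj = 0.01, 4000, 10_000   # fewer trajectories than the 300,000 in the text

def grad_G(chi):
    # Gradient of a generic double well with minima near chi = 3 and chi = 7 (assumed shape).
    return 4.0 * (chi - 3.0) * (chi - 7.0) * (chi - 5.0) / 10.0

chi = rng.uniform(0.0, L, size=n_traj)          # uniform initial distribution p_0
noise_scale = np.sqrt(2.0 * kBT * dt / zeta)

snapshots = {}
for step in range(1, n_steps + 1):
    # Euler-Maruyama update: zeta * dchi = -G'(chi) dt + sqrt(2 kBT zeta) dW
    chi += (-grad_G(chi) / zeta) * dt + noise_scale * rng.normal(size=n_traj)
    chi = np.clip(chi, 0.0, L)                   # crude reflecting boundaries
    if step % 1000 == 0:
        hist, _ = np.histogram(chi, bins=n_bins, range=(0.0, L))
        snapshots[step * dt] = hist / n_traj     # estimate of p(chi_j, tau_k)

for t, p in snapshots.items():
    print(f"t = {t:5.1f}: occupied bins = {(p > 0).sum()} / {n_bins}")
```

Given such binned snapshots, the Ψ_I and Ψ_II estimates in Fig. 3 follow from the log-ratio relations used in the main text.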
[1] Anfinsen, C. B. & Scheraga, H. A. Experimental and theoretical aspects of protein folding. Advances in Protein Chemistry 29, 205-300 (1975).
[2] Robertson, E. G. & Simons, J. P. Getting into shape: conformational and supramolecular landscapes in small biomolecules and their hydrated clusters. Physical Chemistry Chemical Physics 3, 1-18 (2001).
[3] Fan, E., Van Arman, S. A., Kincaid, S. & Hamilton, A. D. Molecular recognition: hydrogen-bonding receptors that function in highly competitive solvents. Journal of the American Chemical Society 115, 369-370 (1993).
[4] Chong, S.-H. & Ham, S. Impact of chemical heterogeneity on protein self-assembly in water. Proceedings of the National Academy of Sciences 109, 7636-7641 (2012).
[5] Frauenfelder, H., Sligar, S. G. & Wolynes, P. G. The energy landscapes and motions of proteins. Science 254, 1598-1603 (1991).
[6] Leopold, P. E., Montal, M. & Onuchic, J. N. Protein folding funnels: a kinetic approach to the sequence-structure relationship. Proceedings of the National Academy of Sciences 89, 8721-8725 (1992).
[7] Lazaridis, T. & Karplus, M. "New view" of protein folding reconciled with the old through multiple unfolding simulations. Science 278, 1928-1931 (1997).
[8] Sevick, E. M., Prabhakar, R., Williams, S. R. & Searles, D. J. Fluctuation theorems. Annual Review of Physical Chemistry 59, 603-633 (2008).
[9] Jarzynski, C. Equalities and inequalities: irreversibility and the second law of thermodynamics at the nanoscale. Annu. Rev. Condens. Matter Phys. 2, 329-351 (2011).
[10] Seifert, U. Stochastic thermodynamics, fluctuation theorems and molecular machines. Rep. Prog. Phys. 75, 126001 (2012).
[11] Liphardt, J., Onoa, B., Smith, S. B., Tinoco, I. & Bustamante, C. Reversible unfolding of single RNA molecules by mechanical force. Science 292, 733-737 (2001).
[12] Collin, D. et al. Verification of the Crooks fluctuation theorem and recovery of RNA folding free energies. Nature 437, 231-234 (2005).
[13] Alemany, A., Mossa, A., Junier, I. & Ritort, F. Experimental free-energy measurements of kinetic molecular states using fluctuation theorems. Nature Phys. (2012).
[14] Henzler-Wildman, K. & Kern, D. Dynamic personalities of proteins. Nature 450, 964 (2007).
[15] Hyeon, C., Lee, J., Yoon, J., Hohng, S. & Thirumalai, D. Hidden complexity in the isomerization dynamics of Holliday junctions. Nature Chemistry 4, 907-914 (2012).
[16] Roldán, É., Martinez, I. A., Parrondo, J. M. R. & Petrov, D. Universal features in the energetics of symmetry breaking. Nature Physics (2014).
[17] Solomatin, S. V., Greenfeld, M., Chu, S. & Herschlag, D. Multiple native states reveal persistent ruggedness of an RNA folding landscape. Nature 463, 681 (2010).
[18] Hill, T. L. Thermodynamics of small systems. The Journal of Chemical Physics 36, 3182-3197 (1962).
[19] Hill, T. Free Energy Transduction in Biology: The Steady-State Kinetic and Thermodynamic Formalism (Elsevier, 2012).
[20] Parmeggiani, A., Jülicher, F., Ajdari, A. & Prost, J. Energy transduction of isothermal ratchets: generic aspects and specific examples close to and far from equilibrium. Physical Review E 60, 2127 (1999).
[21] Fisher, M. E. & Kolomeisky, A. B. The force exerted by a molecular motor. Proceedings of the National Academy of Sciences 96, 6597-6602 (1999).
[22] Fisher, M. E. & Kolomeisky, A. B. Simple mechanochemistry describes the dynamics of kinesin molecules. Proceedings of the National Academy of Sciences 98, 7748-7753 (2001).
[23] Bustamante, C., Keller, D. & Oster, G. The physics of molecular motors. Accounts of Chemical Research 34, 412-420 (2001).
[24] Schmiedl, T. & Seifert, U. Efficiency of molecular motors at maximum power. EPL (Europhysics Letters) 83, 30005 (2008).
[25] Liepelt, S. & Lipowsky, R. Kinesin's network of chemomechanical motor cycles. Physical Review Letters 98, 258102 (2007).
[26] Hwang, W. & Hyeon, C. Quantifying the heat dissipation from a molecular motor's transport properties in nonequilibrium steady states. The Journal of Physical Chemistry Letters 8, 250-256 (2016).
[27] Hwang, W. & Hyeon, C. Energetic costs, precision, and transport efficiency of molecular motors. The Journal of Physical Chemistry Letters 9, 513-520 (2018).
[28] Hatano, T. & Sasa, S.-i. Steady-state thermodynamics of Langevin systems. Phys. Rev. Lett. 86, 3463-3466 (2001).
[29] Seifert, U. Entropy production along a stochastic trajectory and an integral fluctuation theorem. Phys. Rev. Lett. 95, 40602 (2005).
[30] Bochkov, G. N. & Kuzovlev, Y. E. General theory of thermal fluctuations in nonlinear systems. Zh. Eksp. Teor. Fiz. 72, 238-243 (1977).
[31] Bochkov, G. N. & Kuzovlev, Y. E. Fluctuation-dissipation relations for nonequilibrium processes in open systems. Soviet Journal of Experimental and Theoretical Physics 49, 543 (1979).
[32] Jarzynski, C. Nonequilibrium equality for free energy differences. Phys. Rev. Lett. 78, 2690-2693 (1997).
[33] Crooks, G. E. Entropy production fluctuation theorem and the nonequilibrium work relation for free energy differences. Phys. Rev. E 60, 2721-2726 (1999).
[34] Jinwoo, L. & Tanaka, H. Local non-equilibrium thermodynamics. Sci. Rep. 5, 7832 (2015).
[35] Jarzynski, C. Comparison of far-from-equilibrium work relations. C. R. Physique 8, 495-506 (2007).
[36] Campisi, M., Hänggi, P. & Talkner, P. Colloquium: Quantum fluctuation relations: foundations and applications. Reviews of Modern Physics 83, 771 (2011).
[37] Liphardt, J., Dumont, S., Smith, S. B., Tinoco Jr, I. & Bustamante, C. Equilibrium information from nonequilibrium measurements in an experimental test of Jarzynski's equality. Science 296, 1832-1835 (2002).
[38] Trepagnier, E. H. et al. Experimental test of Hatano and Sasa's nonequilibrium steady-state equality. Proc. Nat. Acad. Sci. USA 101, 15038-15041 (2004).
[39] Kurchan, J. Fluctuation theorem for stochastic dynamics. J. Phys. A: Math. Gen. 31, 3719 (1998).
[40] Maes, C. The fluctuation theorem as a Gibbs property. Journal of Statistical Physics 95, 367-392 (1999).
[41] Jarzynski, C. Hamiltonian derivation of a detailed fluctuation theorem. J. Stat. Phys. 98, 77-102 (2000).
[42] Berman, H. M., Bourne, P. E., Westbrook, J. & Zardecki, C. The Protein Data Bank. In Protein Structure, 394-410 (CRC Press, 2003).
[43] Balian, R. From Microphysics to Macrophysics: Methods and Applications of Statistical Physics, Volume I (Springer, 2006).
[44] Deffner, S. & Jarzynski, C. Information processing and the second law of thermodynamics: an inclusive, Hamiltonian approach. Physical Review X 3, 41003 (2013).
[45] Seifert, U. Stochastic thermodynamics of single enzymes and molecular motors. The European Physical Journal E 34, 26 (2011).
[46] Reguera, D., Rubí, J. M. & Vilar, J. M. G. The mesoscopic dynamics of thermodynamic systems. J. Phys. Chem. B 109, 21502-21515 (2005).
[47] Qian, H. Relative entropy: free energy associated with equilibrium fluctuations and nonequilibrium deviations. Physical Review E 63, 42103 (2001).
[48] Ravindranathan, K. P., Gallicchio, E. & Levy, R. M. Conformational equilibria and free energy profiles for the allosteric transition of the ribose-binding protein. Journal of Molecular Biology 353, 196-210 (2005).
[49] Lipman, E. A., Schuler, B., Bakajin, O. & Eaton, W. A. Single-molecule measurement of protein folding kinetics. Science 301, 1233-1235 (2003).
[50] O'Brien, E. P., Dima, R. I., Brooks, B. & Thirumalai, D. Interactions between hydrophobic and ionic solutes in aqueous guanidinium chloride and urea solutions: lessons for protein denaturation mechanism. Journal of the American Chemical Society 129, 7346-7353 (2007).
[51] Hellenkamp, B. et al. Precision and accuracy of single-molecule FRET measurements: a multi-laboratory benchmark study. Nature Methods 15, 669 (2018).
[52] Roy, R., Hohng, S. & Ha, T. A practical guide to single-molecule FRET. Nature Methods 5, 507 (2008).
[53] Joo, C., Balci, H., Ishitsuka, Y., Buranachai, C. & Ha, T. Advances in single-molecule fluorescence methods for molecular biology. Annu. Rev. Biochem. 77, 51-76 (2008).
[54] Schuler, B. & Eaton, W. A. Protein folding studied by single-molecule FRET. Current Opinion in Structural Biology 18, 16-26 (2008).
[55] Vaikuntanathan, S. & Jarzynski, C. Dissipation and lag in irreversible processes. Europhys. Lett. 87, 60005 (2009).
[56] Hummer, G. & Szabo, A. Free energy reconstruction from nonequilibrium single-molecule pulling experiments. Proc. Nat. Acad. Sci. USA 98, 3658-3661 (2001).
[57] Bartels, C. & Karplus, M. Multidimensional adaptive umbrella sampling: applications to main chain and side chain peptide conformations. Journal of Computational Chemistry 18, 1450-1462 (1997).
[58] Jorgensen, W. L., Maxwell, D. S. & Tirado-Rives, J. Development and testing of the OPLS all-atom force field on conformational energetics and properties of organic liquids. J. Am. Chem. Soc. 118, 11225-11236 (1996).
[59] Gallicchio, E. & Levy, R. M. AGBNP: an analytic implicit solvent model suitable for molecular dynamics simulations and high-resolution modeling. Journal of Computational Chemistry 25, 479-499 (2004).
[60] Kumar, S., Rosenberg, J. M., Bouzida, D., Swendsen, R. H. & Kollman, P. A. The weighted histogram analysis method for free-energy calculations on biomolecules. I. The method. Journal of Computational Chemistry 13, 1011-1021 (1992).
[61] Roux, B. The calculation of the potential of mean force using computer simulations. Computer Physics Communications 91, 275-282 (1995).
[62] Perl, D. et al. Conservation of rapid two-state folding in mesophilic, thermophilic and hyperthermophilic cold shock proteins. Nature Structural Biology 5, 229 (1998).
[63] Wassenberg, D., Welker, C. & Jaenicke, R. Thermodynamics of the unfolding of the cold-shock protein from Thermotoga maritima. Journal of Molecular Biology 289, 187-193 (1999).
| [] |
[
"A Survey of Machine and Deep Learning Methods for Internet of Things (IoT) Security",
"A Survey of Machine and Deep Learning Methods for Internet of Things (IoT) Security"
] | [
"Mohammed Ali Al-Garadi \nInternet of Things Security\nSecurity based Intelligence\nIoT Big Data\nGuizani is with Department of Electrical and Computer Engineering\nUniversity of Idaho\nMoscowIdahoUSA\n",
"Amr Mohamed \nInternet of Things Security\nSecurity based Intelligence\nIoT Big Data\nGuizani is with Department of Electrical and Computer Engineering\nUniversity of Idaho\nMoscowIdahoUSA\n",
"Abdulla Al-Ali \nInternet of Things Security\nSecurity based Intelligence\nIoT Big Data\nGuizani is with Department of Electrical and Computer Engineering\nUniversity of Idaho\nMoscowIdahoUSA\n",
"Xiaojiang Du [email protected] \nInternet of Things Security\nSecurity based Intelligence\nIoT Big Data\nGuizani is with Department of Electrical and Computer Engineering\nUniversity of Idaho\nMoscowIdahoUSA\n",
"Mohsen Guizani [email protected]. \nInternet of Things Security\nSecurity based Intelligence\nIoT Big Data\nGuizani is with Department of Electrical and Computer Engineering\nUniversity of Idaho\nMoscowIdahoUSA\n"
] | [
"Internet of Things Security\nSecurity based Intelligence\nIoT Big Data\nGuizani is with Department of Electrical and Computer Engineering\nUniversity of Idaho\nMoscowIdahoUSA",
"Internet of Things Security\nSecurity based Intelligence\nIoT Big Data\nGuizani is with Department of Electrical and Computer Engineering\nUniversity of Idaho\nMoscowIdahoUSA",
"Internet of Things Security\nSecurity based Intelligence\nIoT Big Data\nGuizani is with Department of Electrical and Computer Engineering\nUniversity of Idaho\nMoscowIdahoUSA",
"Internet of Things Security\nSecurity based Intelligence\nIoT Big Data\nGuizani is with Department of Electrical and Computer Engineering\nUniversity of Idaho\nMoscowIdahoUSA",
"Internet of Things Security\nSecurity based Intelligence\nIoT Big Data\nGuizani is with Department of Electrical and Computer Engineering\nUniversity of Idaho\nMoscowIdahoUSA"
] | [] | The Internet of Things (IoT) integrates billions of smart devices that can communicate with one another with minimal human intervention. It is one of the fastest developing fields in the history of computing, with an estimated 50 billion devices by the end of 2020. On the one hand, IoT technologies play a crucial role in enhancing several real-life smart applications that can improve life quality. On the other hand, the crosscutting nature of IoT systems and the multidisciplinary components involved in the deployment of such systems have introduced new security challenges. Implementing security measures, such as encryption, authentication, access control, network security and application security, for IoT devices and their inherent vulnerabilities is ineffective. Therefore, existing security methods should be enhanced to secure the IoT ecosystem effectively. Machine learning and deep learning (ML/DL) have advanced considerably over the last few years, and machine intelligence has transitioned from laboratory curiosity to practical machinery in several important applications. The ability to monitor IoT devices intelligently provides a significant solution to new or zero-day attacks. ML/DL are powerful methods of data exploration for learning about 'normal' and 'abnormal' behaviour according to how IoT components and devices perform within the IoT environment. Consequently, ML/DL methods are important in transforming the security of IoT systems from merely facilitating secure communication between devices to security-based intelligence systems. The goal of this work is to provide a comprehensive survey of ML methods and recent advances in DL methods that can be used to develop enhanced security methods for IoT systems. IoT security threats that are related to inherent or newly introduced threats are presented, and various potential IoT system attack surfaces and the possible threats related to each surface are discussed. We then thoroughly review ML/DL methods for IoT security and present the opportunities, advantages and shortcomings of each method. We discuss the opportunities and challenges involved in applying ML/DL to IoT security. These opportunities and challenges can serve as potential future research directions. | 10.1109/comst.2020.2988293 | [
"https://arxiv.org/pdf/1807.11023v1.pdf"
] | 51,877,788 | 1807.11023 | da8a949f9c9f1df3a38f12c2cac97b789c705465 |
A Survey of Machine and Deep Learning Methods for Internet of Things (IoT) Security
Mohammed Ali Al-Garadi, Amr Mohamed, Abdulla Al-Ali, Xiaojiang Du ([email protected]), and Mohsen Guizani ([email protected])
M. Guizani is with the Department of Electrical and Computer Engineering, University of Idaho, Moscow, Idaho, USA.
Index Terms-Deep Learning, Machine Learning, Internet of Things Security, Security based Intelligence, IoT Big Data
The Internet of Things (IoT) integrates billions of smart devices that can communicate with one another with minimal human intervention. It is one of the fastest developing fields in the history of computing, with an estimated 50 billion devices by the end of 2020. On the one hand, IoT technologies play a crucial role in enhancing several real-life smart applications that can improve life quality. On the other hand, the crosscutting nature of IoT systems and the multidisciplinary components involved in the deployment of such systems have introduced new security challenges. Implementing security measures, such as encryption, authentication, access control, network security and application security, for IoT devices and their inherent vulnerabilities is ineffective. Therefore, existing security methods should be enhanced to secure the IoT ecosystem effectively. Machine learning and deep learning (ML/DL) have advanced considerably over the last few years, and machine intelligence has transitioned from laboratory curiosity to practical machinery in several important applications. The ability to monitor IoT devices intelligently provides a significant solution to new or zero-day attacks. ML/DL are powerful methods of data exploration for learning about 'normal' and 'abnormal' behaviour according to how IoT components and devices perform within the IoT environment. Consequently, ML/DL methods are important in transforming the security of IoT systems from merely facilitating secure communication between devices to security-based intelligence systems. The goal of this work is to provide a comprehensive survey of ML methods and recent advances in DL methods that can be used to develop enhanced security methods for IoT systems. IoT security threats that are related to inherent or newly introduced threats are presented, and various potential IoT system attack surfaces and the possible threats related to each surface are discussed. We then thoroughly review ML/DL methods for IoT security and present the opportunities, advantages and shortcomings of each method. We discuss the opportunities and challenges involved in applying ML/DL to IoT security. These opportunities and challenges can serve as potential future research directions.
I. INTRODUCTION
THE recent progress in communication technologies, such as the Internet of Things (IoT), has remarkably transcended the traditional sensing of surrounding environments. IoT technologies can enable modernisations that improve life quality [1] and have the capability to collect, quantify and understand the surrounding environments. This situation simplifies the new communication forms among things and humans and thus enables the realisation of smart cities [2]. IoT is one of the fastest emerging fields in the history of computing, with an estimated 50 billion devices by the end of 2020 [3,4]. On the one hand, IoT technologies play a crucial role in enhancing real-life smart applications, such as smart healthcare, smart homes, smart transportation and smart education. On the other hand, the crosscutting and large-scale nature of IoT systems, with various components involved in the deployment of such systems, has introduced new security challenges.

Mohammed Ali Al-garadi, Amr Mohamed and Abdulla Al-Ali are with the Department of Computer Science and Engineering, Qatar University, 2713, Doha, Qatar. E-mails: {mohammed.g, abdulla.alali, amrm}@qu.edu.qa. Xiaojiang Du is with the Department of Computer and Information Sciences, Temple University, Philadelphia. E-mail: [email protected].
IoT systems are complex and contain integrative arrangements. Therefore, maintaining the security requirement over the wide-scale attack surface of an IoT system is challenging, and solutions must include holistic considerations to satisfy the security requirement. IoT devices mostly work in an unattended environment, so an intruder may physically access these devices. IoT devices are normally connected over wireless networks, where an intruder may access private information from a communication channel by eavesdropping. IoT devices cannot support complex security structures given their limited computation and power resources [5]. The difficulty goes beyond limited computation, communication and power resources: because the IoT system is also part of a cyber-physical system, it must interact trustworthily with a physical domain whose behaviour can be unanticipated and unpredictable, and IoT systems must autonomously and constantly adapt and survive in a precise and predictable manner, with safety as a key priority, particularly in settings where threatening conditions might occur, such as health systems [6]. Moreover, new attack surfaces are introduced by the IoT environment. Such attack surfaces are caused by the interdependent and interconnected environments of the IoT. Consequently, security is at higher risk in IoT systems than in other computing systems, and traditional solutions may be ineffective for such systems [7].
A critical consequence of the extensive application of IoT is that IoT deployment becomes an interconnected task. For example, IoT systems should simultaneously consider energy efficiency, security, big IoT data analytics methods and interoperability with software applications [4] during the deployment stage. One aspect cannot be ignored when considering advances in another [4]. This integration provides a new opportunity for researchers from interdisciplinary fields to investigate current challenges in IoT systems from different perspectives. However, this integration also introduces new security challenges due to the distribution nature of IoT devices, which provide a large and vulnerable surface. This characteristic of IoT devices presents many security issues. Moreover, the IoT platform generates a large volume of valuable data. If these data are not transmitted and analysed securely, then a critical privacy breach may occur.
IoT systems are accessible worldwide, consist mainly of constrained resources and are constructed over lossy links [8]. Therefore, crucial modifications of existing security concepts for information and wireless networks should be implemented to provide effective IoT security methods. Applying existing defence mechanisms, such as encryption, authentication, access control, network security and application security, is challenging and insufficient for mega systems with many connected devices, with each part of the system having inherent vulnerabilities. For example, 'Mirai' is an exceptional type of botnet that has recently caused large-scale DDoS attacks by exploiting IoT devices [7,9]. Existing security mechanisms should be enhanced to fit the IoT ecosystem [7]. However, the implementation of security mechanisms against a specified security threat is quickly conquered by new types of attacks created by attackers to circumvent existing solutions. For example, amplified DDoS attacks utilise spoofed source IP addresses so that the attack location is untraceable by defenders. Consequently, attacks that are more complex and more destructive than Mirai can be expected because of the vulnerabilities of IoT systems. Moreover, understanding which methods are suitable for protecting IoT systems is a challenge because of the extensive variety of IoT applications and scenarios [7]. Therefore, developing effective IoT security methods should be a research priority [7,9].
As shown in Figure 1, having the capability to monitor IoT devices can intelligently provide a solution to new or zero-day attacks. Machine learning and deep learning (ML/DL) are powerful methods of data exploration to learn about 'normal' and 'abnormal' behaviour according to how IoT components and devices interact with one another within the IoT environment. The input data of each part of the IoT system can be collected and investigated to determine normal patterns of interaction, thereby identifying malicious behaviour at early stages. Moreover, ML/DL methods could be important in predicting new attacks, which are often mutations of previous attacks, because they can intelligently predict future unknown attacks by learning from existing examples. Consequently, IoT systems must have a transition from merely facilitating secure communication amongst devices to security-based intelligence enabled by DL/ML methods for effective and secure systems. Although essentially DL is a branch of ML, this paper discusses them in two separate sections to provide the readers with in-depth review, inclusive comparisons, and potential applications of both traditional ML and DL methods for IoT security. The main differences between traditional ML and DL methods has been discussed in previous literature [10,11]. Similarly, in this paper ML refers to traditional ML methods that require engineered features, while DL methods refer to recent advances in learning methods that utilise several nonlinear processing layers for discriminative or generative feature abstraction and transformation for pattern analysis [12]. Figure 2 shows a thematic taxonomy of ML/DL for IoT security. The remaining parts of the paper adopt the classification presented on the thematic taxonomy. The present survey comprehensively reviews ML/DL algorithms for IoT security that can provide researchers and developers a manual guide to developing an effective and end-to-end security solution based on intelligence. This survey also aims to highlight the list of challenges of using ML/DL to secure IoT systems. Section II provides an overview of general IoT systems, but the purpose of such an overview is to summarise the method used by the IoT model and its characteristics for increasing security risk. The summary points are provided at the end of Section II. Section III presents the IoT security properties and threats and discusses the potential vulnerabilities and attack surfaces of IoT systems (IoT attack surfaces are categorised into physical device, network service, cloud service and web and application interfaces). Moreover, we discuss a new attack surface caused by the IoT environment. In Section IV, we discuss the most promising ML and DL algorithms, their advantages, disadvantages and applications in the IoT security and then present the comparison and summary table for the reviewed ML/DL methods at the end of each section. Section V discusses and comprehensively compares the application of ML/DL methods in securing each IoT layer, and a summary table of the studies that used ML and DL for IoT security is presented. In this section, we also present the enabling technology of ML/DL deployment for IoT security. 
In Section VI, the issues, challenges and future directions in using ML/DL for effectively securing IoT systems are presented and classified; the challenges are related to IoT data issues, learning strategies, operations under the interdependent, interconnected and interactive environments, possible misuse of ML and DL algorithms by attackers, inherent privacy and security issues of ML and DL and inherent properties of an IoT device. These challenges prevent the implementations of effective ML/DL methods for IoT system security (i.e. computational complexity or security vs. other trade-offs) and are presented as future directions. Furthermore, we present other future directions, such as integrating ML/DL with other technology (e.g. edge computing and blockchain) to provide reliable and effective IoT security methods. Section VII presents the conclusions drawn from this survey.
The key contributions of this survey are listed as follows:
• Comprehensive discussion on the potential vulnerabilities and attack surfaces of IoT systems: We discuss various threats and attack surfaces in IoT systems. The attack surfaces are categorised into physical device, network service, cloud service and web and application interfaces, with several examples of security threat and potential vulnerabilities for each attack surfaces. We also discuss a new attack surface caused by the interdependent, interconnected and interactive environments of IoT systems. • In-depth review of the ML and recent advances in DL methods for IoT security: The most promising ML and DL algorithms for securing IoT systems are reviewed, and their advantages, disadvantages and applications in IoT security are discussed. Furthermore, comparisons and summary tables for ML and DL methods are presented to provide learned lessons. • Application of ML/DL for each IoT layers: The application of ML/DL for securing perception, network and application layers is reviewed. The works reviewed are compared on the basis of the type of learning method used, the type of attack surfaces secured and the type of threats detected. The enabling technologies of ML/DL deployment for IoT security are discussed. • Challenges and future directions: Several potential research challenges and future directions of ML/DL for IoT security are presented. The following subsection discusses related works to highlight the major differences of this survey from the previous survey on IoT security.
A. Related Work
Several researchers have conducted surveys on the IoT security to provide a practical guide for existing security vulnerabilities of IoT systems and a roadmap for future works. However, most of the existing surveys on IoT security have not particularly focused on the ML/DL applications for IoT security. For example, Surveys [13][14][15][16][17][18][19] reviewed extant research and classified the challenges in encryption, authentication, access control, network security and application security in IoT systems. Granjal, Monteiro and Silva [20] emphasised the IoT communication security after reviewing issues and solutions for the security of IoT communication systems. Zarpelão et al. [21] conducted a survey on intrusion detection for IoT systems. Weber [22] focused on legal issues and regulatory approaches to determine whether IoT frameworks satisfy the privacy and security requirements. Roman, Zhou and Lopez [23] discussed security and privacy in the distributed IoT context. These researchers also enumerated several challenges that must be addressed and the advantages of the distributed IoT approach in terms of security and privacy concerns. Survey [24] reviewed evolving vulnerabilities and threats in IoT systems, such as ransomware attacks and security concerns. Xiao et al. [25] briefly considered the ML methods for protecting data privacy and security in the IoT context. Their study also indicated three challenges in future directions of ML implementation in IoT systems (i.e. computation and communication overhead, backup security solutions and partial state observation). Other survey papers such as [26,27] focused on the uses of data mining and machine learning methods for cybersecurity to support intrusion detection. The surveys mainly discussed the security of the cyber domain using data mining and machine learning methods and mainly reviewed misuse and anomaly detections in cyberspace [26,27].
However, in contrast to other surveys, our survey presents a comprehensive review of cutting-edge machine and recent advances in deep learning methods from the perspective of IoT security. This survey identifies and compares the opportunities, advantages and shortcomings of various ML/DL methods for IoT security. We discuss several challenges and future directions and present the identified challenges and future directions on the basis of reviewing the potential ML/DL applications in the IoT security context, thereby providing a useful manual for researchers to transform the IoT system security from merely enabling a secure communication among IoT components to end-to-end IoT security-based intelligent approaches.
Figure 2 Thematic Taxonomy of ML/DL for IoT Security
II. OVERVIEW OF THE IOT SYSTEM
This section provides an overview of the general IoT systems. However, the objective of this section is to highlight the characteristics of IoT systems that may increase security risk. The summary points are provided at the end of this section.
IoT converts a physical object from a conventional object to a smart object by utilising technologies, such as communication technologies, Internet protocols and applications, sensor networks and ubiquitous and pervasive computing [28]. The implementation of a flawless IoT system is crucial in the academe and industry due to the wide range of applications that can enable the execution of smart city concepts through billions of connected smart devices [29]. The IoT model can be defined as the interconnection of massive heterogeneous devices and systems in diverse communication patterns, such as thing-tothing human-to-human or human-to-thing [15,28]. The IoT architecture consists of physical objects that are integrated into a communication network and supported by computational equipment with the aim of delivering smart services to users. The IoT architecture generally has three layers, namely, application, network and perception [16]. This architecture can be further taxonomized for simplicity and improved analysis, as shown in Figure 3. Each level is described in the following subsections. The physical object level involves IoT physical sensors. The main function of physical objects is to sense, collect and possibly process information. This level adopts sensors and actuators, such as temperature, humidity, motion and acceleration sensors, to implement diverse sensing functionalities. The plug-and-play mechanism must be applicable at this level to configure heterogeneous sensors [28,30,31]. IoT sensors are resource-constrained devices because they have limited battery capacity and computation capability. Understanding the sensor data delivered by these objects is a key step in achieving a context-aware IoT system [32,33]. A large part of the big data of IoT is generated at this level. The increase in the number of IoT devices and the extensive increment in data volume indicate a positive correlation between the growth of big data and the growth of IoT devices. Effective analysis of the big data of IoT can result in improved decision making for a highly secure IoT implementation.
B. Connectivity
One of the main objectives of the IoT platform is to connect heterogeneous sensors cooperatively and subsequently provide smart services [28]. The sensors implemented in the IoT platform are resource-constrained because they are powered by batteries and have a limited computation and storage capability [33,34]. Therefore, IoT sensors must work with low-power resources under a lossy and noisy communication environment [28]. The following connectivity challenges are encountered in the deployment of IoT devices.
• The first one is providing unique IPs to billions of devices connected to the Internet. This challenge can be mitigated by incorporating 6LoWPAN, which uses IPv6.
• The second challenge is developing low-power communication for transmitting data generated by sensors.
• The third challenge is implementing effective routing protocols that consider the limited memory of sensors and support the flexibility and mobility of smart objects.

The recent communication technologies employed in IoT are 6LoWPAN, Bluetooth, IEEE 802.15.4, WiFi, ultra-wide bandwidth, RFID and near-field communication (NFC) [28].
C. Middleware
A middleware aims to effectively represent the complexities of a system or hardware, thus allowing developers to focus only on the issue to be solved without interruption at the system or hardware level [35,36]. These complexities are commonly related to communication and computational issues. A middleware offers a software level amongst applications, the operating system and the network communication levels; it enables cooperative processing. From the computational perspective, a middleware offers a level between an application and the system software [33,35,37]. Its main functions can be summarised as follows. First, it enables cooperation between heterogeneous IoT objects so that the diverse categories of IoT can interact with one another effortlessly through middleware assistance [33,35,37]. One of key roles of middleware is to provide interoperability between the IoT devices. Second, a middleware must provide scalability amongst several devices that are likely to interact in the IoT realm. The future growth of IoT devices should be handled by the middleware by providing vital modifications when the organisation scales [33]. The third function is device discovery [33] and context awareness, which should be provided by a middleware to support the objects' awareness of all other surrounding IoT objects. A middleware should provide context-aware computing to understand sensor data. Sensor data can be utilised to obtain the context, and the obtained context can be used to provide smart services to users [38]. The last function is to provide security and privacy to IoT devices because the data collected by IoT devices are generally related to humans or an industry. Security and privacy concerns must be addressed in such circumstances. A middleware must construct mechanisms to provide a secure IoT system [33].
D. Big Data Analytics
The huge amounts of data produced or captured by IoT are extremely valuable. ML can play an analytical role in building intelligent IoT systems to deliver smart services in the IoT realm [39]. Big data are created [40] by several physical objects that are used in various IoT applications. However, physical devices produce volumes of data that should be analysed in real time to acquire useful knowledge. To obtain insights from these data, researchers [39][40][41][42][43][44] have discussed different methods of integrating big data analytical methods with IoT design. Unlike traditional analytical methods, ML and DL can effectively derive unobserved insights from big data and convert big data into useful data with minimal human assistance [40]. Analytical methods can be categorised into three types: descriptive, predictive and prescriptive analytics [40]. Descriptive analytics is used for analysing data to describe current or past events. Predictive analytics is used for analysing data to predict the future based on the patterns that occur in current events. Prescriptive analytics is used for analysing data to make decisions by examining various real scenarios and providing a set of recommendations to decision makers. The big data related to the behaviour of IoT systems are vital in building ML/DL to secure IoT systems.
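To ground the predictive-analytics idea in the security setting discussed throughout this survey, the following sketch trains a simple classifier on labelled IoT network-flow features to flag anomalous device behaviour. The feature names, the synthetic data and the random-forest choice are illustrative assumptions rather than a prescribed pipeline.

```python
# Hedged sketch: predictive analytics for IoT security, i.e. learning
# "normal" vs "abnormal" device behaviour from labelled flow features.
# The synthetic features (packet rate, mean packet size, distinct
# destinations) and labels are placeholders for real IoT telemetry.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 2000

# Benign traffic: moderate packet rates, few destination hosts.
benign = np.column_stack([
    rng.normal(50, 10, n),      # packets per second
    rng.normal(500, 80, n),     # mean packet size (bytes)
    rng.poisson(3, n),          # distinct destination hosts
])
# Attack-like traffic (e.g. scanning/flooding): high rate, many destinations.
attack = np.column_stack([
    rng.normal(400, 60, n),
    rng.normal(120, 30, n),
    rng.poisson(40, n),
])

X = np.vstack([benign, attack])
y = np.concatenate([np.zeros(n), np.ones(n)])   # 0 = normal, 1 = abnormal

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), digits=3))
```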
E. Applications
IoT has several applications. The commonly known applications include smart healthcare, smart transportation, smart grid and smart building. These applications are briefly discussed in the following subsections.
1) Smart healthcare
IoT devices have become popular in health applications in recent years. The IoT system is rapidly becoming a key instrument in healthcare [45]. IoT devices are used in healthcare sectors to closely observe and record patient conditions and send warnings to the concerned healthcare system in critical circumstances to provide rapid and timely treatment to patients. Internet of Medical Things (IoMT) devices have been adopted in approximately 60% of the healthcare sector [46]. IoMT is believed to have a significant role in transforming the healthcare field by empowering the evolution from disorganised healthcare to synchronised healthcare. In 2015, 30.3% of the 4.5 billion IoT devices were IoMT devices. This number is estimated to rise to 20-30 billion IoMT devices by 2020.
However, in contrast to other applications, the IoT in healthcare systems must be secured whilst providing flexible access to devices to save lives in emergency cases. For example [47], consider an individual with an implanted IoT-based medical device who experiences an emergency and must suddenly be admitted to a hospital other than the one he/she regularly visits. In this case, the staff at the new hospital must be able to access the implanted IoT-based medical device easily. Therefore, an overly complex security requirement may not be acceptable, and the security method must balance security and flexible access during emergency situations.
Moreover, IoT sensors are widely used to monitor daily health-related activities. A smartphone is usually employed to monitor health-related activities, such as daily activity (number of steps, walking and running distance and cycling distance), and sleep analysis. IoT has prodigious opportunities to potentially advance healthcare systems and a wide range of applications [48]. The recent development in traditional medical devices towards interactive environment medical devices can be further advanced by the IoT system by connecting implanted, wearable and environmental sensors collaboratively within the IoT system to monitor users' health effectively and ensure real-time health support [45]. However, securing IoT systems remains a critical issue [49,50], and further investigation is required to securely implement IoT devices in healthcare.
2) Smart transportation
Smart or intelligent transport systems have become attainable with the help of IoT systems. The main objective of smart transport is to manage daily traffic in cities intelligently by analysing data from well-connected sensors located in different places and implementing data fusion (data from CCTV, mobile devices, GPS, accelerometers, gyroscope-based applications and weather sensors). The data are then explored and integrated to provide smart choices to users [51]. Moreover, the data analytics of smart transport can implicitly enhance shipment schedules, advance road safety and improve delivery time [40].
3) Smart governance
IoT can facilitate smart governance. Integrating data from different governmental sectors can provide authorities with abundant information from a wide range of sensor data (from weather-related data to security-related data). The huge amount of data generated by IoT sensors can overcome the limitations of conventional monitoring systems in an exceptional manner, thereby presenting a knowledge-based system from information fusion sources that compiles and correlates data from different sectors to deliver an optimal decision considering multiple perspectives.
4) Smart agriculture
IoT systems can be applied to improve the agriculture sector. IoT sensors can be implemented to enable real-time monitoring of the agriculture sector. IoT sensors can collect useful data on humidity level, temperature level, weather conditions and moisture level. The collected data can then be analysed to provide important real-time mechanisms, such as automatic irrigation, water quality monitoring, soil constituent monitoring and disease and pest monitoring [52].
5) Smart grid
The latest development in power grids was achieved by using the IoT platform to construct a smart grid in which the electricity flow between suppliers and consumers is handled smartly to improve efficiency, safety and real-time monitoring [53][54][55]. The IoT platform plays a significant role in effective grid management. Applying IoT technology in a smart grid can help prevent disasters, reduce power transmission losses, enhance the reliability of power transmission and minimise economic losses [56]. Moreover, analysing the data generated by IoT sensors can help decision makers select a suitable electricity supply level to deliver to customers.
6) Smart homes
IoT components are used to realise smart homes. Home IoT-based machines and systems (e.g. fridge, TV, doors, air conditioner, heating systems and so on) are now easy to observe and control remotely [28,57]. A smart home system can understand and respond to surrounding changes, such as automatically switching on air conditioners based on weather predictions and opening the door based on face recognition. Intelligent homes should consistently collaborate with their internal and external environments [58]. The internal environment involves all home IoT devices that are managed internally, and the external environment involves objects that are not managed by the smart home but play important roles in its construction, such as smart grids [28,58].
7) Smart supply chain
An important application of IoT technology in real life is the development of easier and more flexible business processes than before. The development of IoT-embedded sensors, such as RFID and NFC, enables the interaction between the IoT sensors embedded in products and business supervisors. Therefore, these goods can be tracked throughout production and transportation processes until they reach the consumer. The monitoring process and the data generated through this process are crucial in making appropriate decisions, which can in turn improve machine uptime and the service provided to customers [55].
F. Collaboration and Business Objective
At this level, IoT service is delivered to users, and the data captured and analysed at the lower levels are integrated into the business objective. This level mainly involves human interaction with all of the levels of the IoT model. The aim at this level is to effectively utilise the data captured, transmitted and analysed at different levels of the IoT model to improve social and economic growth. The analysis of big data generated by IoT devices can be incorporated into the business objective at this stage to identify factors that can improve the business outcome and create optimal strategic business plans.
From the above discussions of the IoT system, we can conclude that the nature of IoT systems can increase security risks because of the following reasons:
• By nature, the IoT is a multipart model with various applications that have diverse requirements. This nature demonstrates the huge complexity of such systems through extensive IoT applications, from the smart home and smart car to smart healthcare. These diverse applications can be a challenge when developing an effective security scheme, because a security method proposed for a specific application or requirement may be unsuitable for other applications with different requirements.
• IoT systems are vastly heterogeneous in protocols, platforms and devices that are accessible worldwide, consist mainly of constrained resources, are constructed over lossy links [7,8] and lack standardization. Such features of IoT systems become bottlenecks that prevent the development of effective and generalised security schemes for such systems.
• IoT devices can be designed to autonomously adapt to the surrounding environment. Consequently, IoT devices can be controlled by other devices [7]. In such cases, an effective IoT security solution must not only secure each device independently but also provide end-to-end security.
• The data generated by the IoT are valuable, and such data can be analysed to understand the behaviour of individuals and their daily activities. Such information can be used by policymakers to adjust their products smartly to satisfy the preferences and requirements of individuals. However, this can also turn IoT devices into eavesdropping devices that capture user information, including biometric data such as voices, faces and fingerprints, that can aid in IoT device intrusion.
• Physical attacks can increase with the deployment of IoT systems because most of the physical things of IoT (e.g. sensors) may be ubiquitously and physically reachable [23,59]. Physical threats may also be caused by unintended damage from natural disasters, such as floods or earthquakes, or disasters caused by humans, such as wars [60,61]. Therefore, an effective security solution must be context-aware by considering such characteristics of IoT systems.
• IoT systems do not have exact boundaries and are constantly adjusted as new devices are added or as users move. Such characteristics allow the IoT model to continually expand its attack surface and introduce several vulnerabilities. Therefore, methods that can comprehensively understand and gain knowledge of the behaviour of things and other IoT components within such large systems are required. ML/DL methods can predict the expected behaviour of a system by learning from previous experience. Therefore, applying ML/DL methods can significantly advance security methods by transforming the security of IoT systems from simply facilitating secure communication between devices to security-based intelligence systems.
III. IOT SECURITY THREATS
IoT integrates the Internet with the physical world to provide an intelligent interaction between the physical world and its surroundings. Generally, IoT devices work in diverse surroundings to accomplish different goals. However, their operation must meet a comprehensive security requirement in cyber and physical states [60,62]. IoT systems are complex and contain multidisciplinary arrangements. Therefore, maintaining the security requirement with the wide-scale attack surface of the IoT system is challenging. To satisfy the desired security requirement, the solution should include holistic considerations. However, IoT devices mostly work in an unattended environment. Consequently, an intruder may physically access these devices. IoT devices are normally connected over wireless networks, where an intruder might expose private information from the communication channel by eavesdropping. IoT devices cannot support complex security structures because of their limited computation and power resources [5]. Therefore, securing the IoT system is a complex and challenging task. Given that the main objective of the IoT system is to be accessed by anyone, anywhere and at any time, attack vectors or surfaces also become accessible to attackers [23,63]. Consequently, potential threats become more probable. A threat is an act that can exploit security weaknesses in a system and exerts a negative impact on it [5]. Numerous threats, such as passive attacks (e.g. eavesdropping) and active threats (e.g. spoofing, Sybil, man-in-the-middle, malicious inputs and denial of service (DoS)), might affect the IoT system. Figure 4 shows the potential attacks that can affect the main security requirements (authentication, integrity, non-repudiation, confidentiality, availability and authorisation). The following main security properties should be considered when developing effective IoT security methods.
Confidentiality: Confidentiality is a vital security characteristic of IoT systems. IoT devices may store and transfer sensitive information that should not be revealed by unauthorised individuals. Medical (patient related), personal, industry and military data are highly confidential and must be secured against unauthorised access [5,64]. However, in specific scenarios, such as IoT medical devices, although communications are encrypted and data are confidentially stored and transferred, attackers can still sense the existence of the physical device and can even track the holder. In such a situation, the location confidentiality of the holder is exposed and put at risk [47].
Integrity: Data from IoT devices are generally transferred through wireless communication and must be changed only by authorised entities. Integrity features are thus fundamental in ensuring an effective checking mechanism to detect any modification during communication over an insecure wireless network. Integrity features can secure the IoT system from malicious inputs that might be used to launch structured query language (SQL) injection attacks [65]. A deficiency in integrity inspection can allow for modification of the data stored on the memory of IoT devices, which can affect the main operational functions of the physical devices for a long time without being detected easily. IoT systems have various integrity requirements. For example, IoT implantable medical devices require effective integrity checking against random errors because they affect human lives directly. Loss, errors or modification of information in several circumstances can lead to loss of human lives [47,66,67].
Authentication: The identity of entities should be perfectly established prior to performing any other process. However, due to the nature of IoT systems, authentication requirements differ from system to system. For example, authentication should be robust in an IoT system where a service needs to offer robust security rather than high flexibility. Trade-offs are a major challenge in developing an effective authentication scheme. For example, in IoT medical devices, security and safety must be balanced when designing an authentication scheme. Similarly, the trade-off can be between an effective authentication scheme and battery-powered devices, or between privacy and security [68]. Therefore, an IoT system requires an effective authentication that can balance system constraints and provide robust security mechanisms [69].
Authorisation: Authorisation includes granting users access rights to an IoT system, such as a physical sensor device. The users may be machines, humans or services. For example, the data collected by sensors should only be delivered to and accessed by authorised users (authorised objects and service requesters) [15,70]. In other words, an action must be performed only if the requester has satisfactory authorisation to command it. The main challenge in authorisation in IoT environments is how to grant access successfully in an environment where not only humans but also physical sensors (things) should be authorised to interact with the IoT system [14]. In addition, in handling huge amounts of data in such a heterogeneous environment, the data must be protected throughout the sensing and transmission process and should be made available only to authorised parties [71].
Availability: The services delivered by IoT systems must always be available to authorised entities. Availability is a fundamental feature of a successful deployment of IoT systems. However, IoT systems and devices can still be rendered unavailable by many threats, such as DoS or active jamming. Therefore, ensuring the continuous availability of IoT services to users is a critical property of IoT security.
Non-repudiation: The non-repudiation property is meant to provide access logs that serve as evidence in situations where users or objects cannot repudiate an action. Generally, nonrepudiation is not considered a key security property for many IoT systems [5]. However, non-repudiation can be an important security property in specific contexts, such as payment systems where both parties cannot repudiate a payment transaction [5].
For an effective IoT security scheme, the security properties above should be considered. However, these properties can be exploited by several security threats, as shown in Figure 4. In the following subsection, we briefly discuss potential security threats to understand how different security properties should be maintained in a secure IoT environment.
A. Threats in IoT
Security threats can be categorised as cyber and physical. Cyber threats can be further classified as passive or active. The following subsection provides a brief discussion of these threats.
1) Cyber Threats
Passive threats: A passive threat is performed only by eavesdropping through communication channels or the network. By eavesdropping, an attacker can collect information from sensors, track the sensor holders, or both. Currently, collecting valuable personal information, particularly personal health data, has become rampant on the black market [62]. The value of personal health information on the black market is $50 compared with $1.50 for credit card information and $3 for a social security number [62]. Moreover, an attacker can eavesdrop on communication channels to track the location of the IoT device holder if its communication channel is within range [72,73], thus causing a violation of privacy.
Active threats: In active threats, the attacker is not only skilful in eavesdropping on communication channels, but also in modifying IoT systems to change configurations, control communication, deny services and so on. Attacks may include a sequence of interventions, disruptions and modifications. For example, potential attacks on an IoT system (shown in Figure 4) may involve the following active attacks: impersonation (e.g. spoofing, Sybil and man-in-the-middle), malicious inputs, data tampering and DoS. An impersonation attack is intended to impersonate an IoT device or authorised users. If an attack path exists, active intruders can attempt to partially or fully impersonate an IoT entity [23]. Malicious input attacks are intended to insert malicious software into the targeted IoT system. This software will run a code injection attack. The injected malicious software has a dynamic nature, and new types of attacks are constantly introduced to violate IoT components remarkably because IoT systems have a naturally large, well-connected surface [24,74]. Meanwhile, data tampering is the act of intentionally changing (deleting, changing, manipulating or editing) information via unauthorised operations. Data are commonly transmitted or stored. In both situations, data might be captured and tampered with, which might affect the significant functions of IoT systems, such as changing the billing price in the case of an IoT-based smart grid [75]. Many types of DoS attacks can be utilised against IoT. These types range from conventional Internet DoS attacks that are established to deplete the resources of the service provider and network bandwidth to signal jamming that targets wireless communication. Distributed DoS (DDoS) is a severe form of DoS in which several attacks are launched from different IPs, which makes discriminating the attack traffic from the normal traffic of legitimate devices challenging; in contrast, an attack that generates huge traffic from a single or limited number of devices is easier to discriminate from normal traffic and devices. Although different forms of DoS attacks exist, they have a common aim: to interfere with the availability of IoT services [21]. IoT systems have billions of connected devices that can be exploited through destructive DDoS, such as Mirai. Mirai is an exceptional type of botnet that has recently caused large-scale DDoS attacks by using IoT devices [7,9].
2) Physical Threats
Physical threats can be in terms of physical destruction. In these threats, the attacker generally does not have technical capabilities to conduct a cyber-attack. Therefore, the attacker can only affect the reachable physical objects and other components of IoT that lead to terminating the service. By adopting IoT systems, these types of attacks may become widescale because most of the physical objects of IoT (sensors and cameras) are expected to be everywhere and physically accessible [23,59]. Physical threats may also be caused by unintended damage from natural disasters, such as floods or earthquakes, or disasters caused by humans, such as wars [60,61].
B. Attack Surfaces
In this section, we discuss possible IoT system attack surfaces and the potential threats related to each surface. As shown in Figure 5, IoT attack surfaces can be categorised into physical device, network service, cloud service, and web and application interface. In addition, the IoT environment itself introduces new attack surfaces, which are discussed at the end of this section.
1) Physical Device Surface
Physical devices, such as RFID, are a main part of IoT systems. In an embedded communication system, RFID plays a significant role in implementing microprocessors for wireless communication [76]. The key characteristic of RFID tags is automatic identification through a unique identifier that involves fast information transmission between tags (RFID is tagged to an object that can be anything, from human to animal) and readers [77]. The main function of RFID technology is to supervise the process of objects in real time; this bridges the interaction between virtual and actual worlds. Consequently, these tiny physical devices can be employed in an exceptionally wide range of applications [76]. However, most physical devices suffer from many security-related issues. Another unit of the physical device surface is the sensor node. Sensor nodes mainly consist of sensors used for sensing and actuators used for actuating devices in accordance with specified instructions. Sensor nodes commonly have high latency.
Most physical devices are resource-constrained and contain valuable information, which makes them a potential surface for attackers; for example, they can be exploited to track their holders, flooding them with many access requests that cause DoS or other attacks, such as eavesdropping, spoofing and counterfeiting [78,79]. Moreover, this surface is highly vulnerable to physical threats because it is the most physically accessible surface for an attacker.
2) Network Service Surface
The IoT system contains physical objects (sensors and actuators) that are connected through wired and wireless technologies. Sensor networks (SNs) are significant resources for realising IoT systems. SNs can be constructed without an IoT system. However, an IoT system cannot be constructed without SNs [38,80]. An IoT system consists of SNs and a wired network, thus creating a large-scale network surface. Such a wide network surface can be potentially vulnerable. Moreover, IoT systems face new security threats that are inherited from wired and wireless SNs. These new threats are introduced when traditional networks are directly integrated into IoT networks. The direct integration of a wireless SN into an IoT network poses several issues because traditional networks are no longer secure within IoT environments; for example, the resilience of WSNs (the sensors within a WSN openly provide its information to external parties) makes this network completely vulnerable to attacks in IoT environments [38,80].
Other threats can be designed by attackers to target the routing protocol that may lead to network failure. Accordingly, designing a secure routing protocol is important to IoT system security [78,81]. Attacks can also be launched at a port by searching and examining open ports. Detection of open ports can encourage attackers to launch an attack on the services operating on these open ports. Such an attack can extract detailed information about the network, such as IP address, MAC address, router and gateway [9,82].
IoT has expanded network connectivity, mobility and collaboration between users. Such features increase network service surfaces, leading to frequent security risks and attacks, such as hacking, interruption, acknowledgement spoofing, DoS, man-in-the-middle attacks, protocol tunnelling and interception [83]. Furthermore, the Internet network, which is a key component that connects IoT devices, has different players ranging from business subscribers to individual subscribers and from a local area network (LAN) to a wide area network (WAN), thereby connecting a wide range of devices and servers [84]. On the one hand, the Internet can provide a wide range of services and applications that can work in synergy with the information collected from sensors to achieve a fully functional IoT system for providing intelligent services. On the other hand, the continuous use of the traditional Internet protocol suite (TCP/IP) to connect billions of objects and devices worldwide is highly vulnerable to numerous security and privacy threats, such as viruses, intrusion and hacking, replay attacks and identity theft [15,48,85].
3) Cloud Service Surface
Cloud computing provides a set of innovative services that are introduced to offer access to stores and processes for obtaining information from anywhere and at any time; accordingly, the requirement for hardware equipment is either limited or eliminated [86]. Cloud computing can be defined as enabling remote access to shared service resources [87,88]. Cloud computing can serve as a platform that can be used as a base technology to realise the vision of IoT [89]. Cloud computing has significant characteristics that can benefit IoT systems, such as computational and energy efficiencies and storage, service and application over the Internet [89]. The integration of cloud and IoT offers great opportunities for IoT systems. IoT can benefit from the unrestrained resources of the cloud, thereby overcoming the main constraints of IoT, such as computational and energy capabilities [90]. The integration of cloud and IoT offers opportunities for the cloud as well. The cloud can use an IoT device as a bridge to be integrated into real-life applications through a dynamic and distributed means, consequently supplying cloud services to a large consumer base [89,91]. However, with the integration of cloud and IoT systems, several security concerns arise because such a distributed system is vulnerable to numerous attacks, such as (1) malicious attacks that can manipulate flaws in data security to obtain unauthorised access (e.g. cross-site scripting (XSS), SQL injection flaws, cross-site request forgery (CSRF) and insecure storage [92]); (2) insufficient integrity controls at the data level that can result in security threats by avoiding the authorisation process to directly access the database [92,93]; and (3) a security threat may exist in all virtualisation software which can be utilised by intruders (e.g. the vulnerability of a virtual server might allow a guest OS to run codes on the host). Consequently, the vulnerability of the virtual server could be exploited to allow the elevation of privileges [92,93].
Cloud computing has substantial consequences on information privacy and confidentiality. Privacy and confidentiality risks differ significantly according to the terms and conditions between the cloud service provider and the cloud service consumers. However, the integration of IoT devices with cloud computing introduces several privacy concerns, such as exposing highly confidential data (e.g. personal medical data of the holder or home-based sensor data). Privacy is a vital factor that prevents users from adopting IoT devices. Therefore, development should be accompanied with effective privacy protection for a successful IoT system deployment [94][95][96]. Moreover, multi-tenancy, which is one of the main features of cloud computing, can cause security threats that may lead to private information leakage. Multi-tenancy allows multiple users to store their data using the cloud via the application interface (API). In such a condition, the data of several users can be stored at the same locality, and data in such an environment can be accessed by one of these users. By either hacking through the loop holes in API or inserting the client code into the cloud system, an unauthorised operation attack can be launched against the data [92]. Authorised cloud users might also misuse their permissible access to gain unauthorised privileges and launch attacks, such as internal DoS. Such attacks can be called insider attacks [82], which can introduce a critical trust issue when the cloud is integrated with IoT.
4) Web and Application Interface
Most services provided by IoT systems provide users remote access via the Web or mobile applications. For example, in a smart home, the smart things that are connected to home appliances are designed to be controlled by users using their mobile applications or by webpage interfaces in a few cases. Mobile applications have also been developed for smart cars, watches, belts, shoes, glasses, lights, parking and other things that are becoming IoT-based devices controlled by mobile applications. With the rapid development of IoT, virtual and real worlds are being integrated, and soon the difference between the two worlds will become undefinable. IoT devices can interact with one another in real time. This scenario can be ultimately achieved with the help of smartphone applications [97]. Smartphones have become ubiquitous because of the extensive services they provide to users through their applications. Android-based devices are among the popular smart devices. They have captured a massive market because of their open architecture and the popularity of their application programming interface (APIs) among developer groups [98,99]. However, the open nature of mobile operating systems permits users to download diverse applications involving malicious applications that are uploaded by a third-party to online application stores without thorough security checks [98,100]. The growing popularity of Android-based devices and other operating system devices has attracted malware developers, followed by a huge increase in Android malware [98]. Malware developers can control smartphones by utilising platform vulnerabilities, extracting private user information or constructing botnets. Furthermore, Android applications may release private information carelessly or maliciously. Consequently, their functioning behaviours, operational models and usage patterns should be recognised to develop practical security methods for mobile devices [98]. Mobile devices are exposed to risks and threats, such as bluesnarfing, bluejacking, eavesdropping, tracking and DoS [15,75,101].
5) New attack surfaces introduced by IoT
In this section, we discuss new attack surfaces introduced by the IoT environment.
a) Threats caused by IoT interdependent environment
With the rapid growth of IoT objects, the collaboration between objects has become more automated and require less human involvement. IoT objects no longer merely interact with one another like devices within a network. Many IoT devices nowadays are designed to achieve the vision of a smart city, such that many of these devices are controlled by other devices or depend on the operational condition of other devices or the surrounding environment. For example, if a GPS sensor is aware of the traffic situation in a different road from the user's home to work and the user's health condition (asthma) is known, then the GPS should select the route from the user's home to work that is most suitable for his health condition (less traffic and air pollution) based on the health information and traffic and pollution sensors. Similarly, [102] provided another example where a sensor senses that the indoor temperature is raised and a smart plug senses that the air cooler is turned off; then, the windows automatically open. Such interdependent processes are common in applications that utilise IoT devices to achieve a fully automated process. In this environment, the targeted IoT device may be unreachable by an attacker, but the attacker could modify the operation mode of another device or its sensing parameter through the environment that has direct interdependence to launch a threat [102]. Therefore, attacking one surface, such as reducing the temperature or manipulating pollution data, can cause severe effects on other sensors whose operations depend on the information from these sensors. In such an interdependent environment, the attacker can select the weakest nodes in the systems to interrupt the entire systems.
b) Interconnected environment
IoT systems connect billions of devices. This architecture expands not only the attack surface but also the magnitude of the attack. With these densely interconnected devices, an infected thing can become the source of a destructive attack that infects numerous things at a large scale, thus affecting a large part of a city. This situation of nuclear destruction of technology can be described as 'IoT goes nuclear' [103]. Research [103] shows that IoT devices, even when secured with industry-standard cryptographic methods, may be exploited by attackers to produce a new-fangled category of security risks that can be circulated from one IoT device to all its physically connected devices through the IoT medium. Consequently, an attacker can launch rapid and destructive attacks that may not be easy to control. To illustrate the impact of this scenario, an experimental case was conducted in [103], where an infection attack was launched by exploiting the popular Philips Hue smart lamps. The malware was diffused by moving directly from one lamp to the adjacent lamp through the wireless connectivity provided by the built-in ZigBee and physical proximity. The researchers [103] found that the global AES-CCM key used to encrypt and authenticate new firmware could be recovered using cheap, readily available equipment, without access to any real firmware updates for the smart lamps. This situation shows how vulnerable such devices are, even devices produced by a well-known company that applies reliable cryptographic methods for security. Such attacks can start at a single point at any location and may end up infecting the entire city, thereby allowing the attackers to control the lights of the city or use IoT lamps in DDoS attacks [103]. Consequently, an infection attack may spread rapidly to a large number of devices and components due to the interconnected nature of IoT systems.
c) Social IoT environment
The Social Internet of Things (SIoT) was introduced recently to integrate social networking into IoT. The basis of such integration is that each thing (object) can obtain preferred services through its social objects called friends in a distributed manner with just local information [104]. Consequently, the threats caused by this IoT environment can be related to privacy concerns that may cause exposing sensitive information about the objects when integrated into a social network of IoT devices [105].
IV. REVIEW OF MACHINE LEARNING AND DEEP LEARNING APPLICATIONS IN IOT SECURITY
Learning algorithms have been widely adopted in many real-world applications because of their unique nature of solving problems. Such algorithms handle the construction of machines that progress automatically through experience [106]. Recently, learning algorithms have been widely applied in practice. The current advancement of learning algorithms has been driven by the development of new algorithms and the availability of big data, in addition to the emergence of low-computation-cost algorithms [106]. ML and DL have advanced considerably over the past few years, starting from laboratory curiosity and progressing to practical machinery with extensive, significant applications [106]. Even though DL is an ML sub-field, in this paper ML refers to traditional ML methods that require engineered features, while DL refers to recent advances in learning methods that utilise several non-linear processing layers for discriminative or generative feature abstraction and transformation for pattern analysis [12]. The purpose of discussing ML and DL in two separate sections is to provide the readers with an in-depth review of both.
Generally, learning algorithms aim to improve performance in accomplishing a task with the help of training and learning from experience. For instance, in learning intrusion detection, the task is to classify system behaviour as normal or abnormal. An improvement in performance can be achieved by improving classification accuracy, and the experiences from which the algorithms learn are a collection of normal system behaviour. Learning algorithms are classified into three main categories: supervised, unsupervised and reinforcement learning (RL).
Supervised learning methods form their classification or prediction model on the basis of a learnt mapping [106] that produces an output from the observed input parameters. In other words, these methods capture the relationships between the input parameters (features) and the required output. Therefore, at the initial stage of supervised learning, labelled examples are needed to train the algorithms, which are then used to predict or classify new inputs [107]. Recent prodigious advancements in supervised learning involve deep networks. These networks can be viewed as multilayer networks with threshold units [106], each of which calculates a function of its input [108,109].
Although many practical realisations of DL have originated from supervised learning methods for learning representations, recent works have achieved progress in improving DL systems that learn important representations of the input without the necessity of pre-labelled training data [110]. These learning algorithms are unsupervised learning methods, which are generally intended to analyse unlabelled data. The objective of an unsupervised learning algorithm is to categorise the input data into distinctive groups by examining the similarity between them.
The third common type of ML is RL [111,112]. RL algorithms are trained by the data from an environment. RL aims to understand an environment and discover the best actions for a given agent in different situations [113]. The training data in RL lie halfway between those of supervised and unsupervised learning. In place of training samples in which the right output is provided for a specified input, the training data in RL only indicate whether an action is right or not; if an action is not right, the problem remains until the right action is discovered [106]. Thus, RL is trial-and-error learning.
In this section, we discuss the most promising ML and DL algorithms from an IoT security perspective. First, we discuss traditional ML algorithms and their advantages, disadvantages and applications in IoT security. Second, we discuss DL algorithms and their advantages, disadvantages and applications in IoT security.
A. Machine learning (ML) methods for IoT security
In this subsection, we discuss the common ML algorithms (i.e. decision trees (DT), support vector machines (SVM), Bayesian algorithms, k-nearest neighbour (KNN), random forest (RF), association rule (AR) algorithms, ensemble learning, k-means clustering and principal component analysis (PCA)) and their advantages, disadvantages and applications in IoT security.
1) Decision Trees (DTs)
DT-based methods mainly classify by sorting samples according to their feature values. Each vertex (node) in a tree represents a feature, and each edge (branch) denotes a value that the vertex can have in a sample to be classified. The samples are classified starting at the origin vertex and with respect to their feature values. The feature that optimally splits the training samples is deemed the origin vertex of the tree [114]. Several measures are used to identify the optimal feature that best splits the training samples, including information gain [115] and the Gini index [116].
Most DT-based approaches consist of two main processes: building (induction) and classification (inference) [117]. In the building (induction) process, a DT is constructed typically by initially having a tree with unoccupied nodes and branches. Subsequently, the feature that best splits the training samples is considered the origin vertex of the tree. This feature is selected using different measures, such as information gain. The premise is to assign the feature root nodes that maximally reduce the intersection area between classes in a training set, consequently improving the discrimination power of the classifier. The same procedure is applied to each sub-DT until leaves are obtained and their related classes are set. In the classification (inference) process, after the tree is constructed, the new samples with a set of features and unknown class are classified by starting with the root nodes of the constructed tree (i.e. the tree constructed during the training process) and proceeding on the path corresponding to the learnt values of the features at the inner nodes of the tree. This procedure is sustained until a leaf is acquired. Finally, the related labels (i.e. predicted classes) of the new samples are obtained [117].
Researchers in [117] summarised the main points for simplifying DT construction. Firstly, pre-pruning or postpruning is applied to the tree to reduce the tree size. Secondly, the space of the states searched is adjusted. Thirdly, the search algorithm is enhanced. Next, the data features are reduced by removing or disregarding redundant features through the search process. Finally, the structure of the tree is converted into an alternative data structure, such as a set of rules. The main weaknesses of DT-based methods are summarised as follows [117]. Firstly, they require large storage because of the nature of construction. Secondly, understanding DT-based methods is easy only if few DTs are involved. However, certain applications involve a massive construction of trees and several decision nodes. In these applications, the computational complexity is high, and the underlying model for classifying samples is complex.
A DT is used as a main classifier or collaborative classifier with other ML classifiers in security applications, such as intrusion detection [118,119]. For example, a previous study proposed the use of a fog-based system call system to secure IoT devices [120]. The research used DT to analyse network traffic to detect suspicious traffic sources and consequently detect DDoS behaviour.
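To make the induction/inference workflow concrete, the sketch below trains a decision tree on synthetic traffic features and uses it to flag DDoS-like flows. It is only a minimal illustration, assuming scikit-learn is available; the feature names, synthetic distributions and tree depth are illustrative choices and do not reproduce the cited studies.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Illustrative "network flow" features: [duration, bytes_sent, packet_rate, n_connections]
rng = np.random.default_rng(0)
normal = rng.normal(loc=[10, 500, 20, 5], scale=[3, 100, 5, 2], size=(500, 4))
attack = rng.normal(loc=[2, 50, 200, 60], scale=[1, 20, 40, 10], size=(500, 4))
X = np.vstack([normal, attack])
y = np.hstack([np.zeros(500), np.ones(500)])   # 0 = normal, 1 = DDoS-like traffic

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# The split criterion selects, at each node, the feature that best separates the classes;
# "gini" uses the Gini index, while criterion="entropy" corresponds to information gain
clf = DecisionTreeClassifier(criterion="gini", max_depth=4, random_state=0)
clf.fit(X_train, y_train)                       # induction (building) phase
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))  # inference phase
```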
2) Support Vector Machines (SVMs)
SVMs are used for classification by creating a splitting hyperplane in the data attributes between two or more classes such that the distance between the hyperplane and the most adjacent sample points of each class is maximised [121]. SVMs are notable for their generalisation capability and specifically suitable for datasets with a large number of feature attributes but a small number of sample points [122,123]. Theoretically, SVMs were established from statistical learning [121]. They were initially created to categorise linearly divisible classes into a two-dimensional plane comprising linearly separable data points of different classes (e.g. normal or abnormal). SVMs should produce an excellent hyperplane, which delivers maximum margin, by increasing the distance between the hyperplane and the most adjacent sample points of each class. The advantages of SVMs are their scalability and their capabilities to perform real-time intrusion detection and update the training patterns dynamically.
SVMs have been widely used in various security applications, such as intrusion detection [124][125][126], and are efficient in terms of memory storage because they create a hyperplane to divide the data points with a time complexity on the order of O(N^2), where N refers to the number of samples [122,123]. In relation to the IoT environment, a study [127] developed an Android malware detection system to secure IoT systems and applied a linear SVM in their system. They compared the detection performance of SVM with other ML algorithms, namely, naïve Bayes (NB), RF and DT. The comparison results showed that SVM outperformed the other ML algorithms. Such results confirmed the robust application of SVM for malware detection. Nevertheless, additional studies are essential to investigate the performance of SVMs with enriched datasets and datasets that are created with different environments and attack scenarios. Moreover, comparing the performances of SVM with DL algorithms, such as convolutional neural network (CNN) algorithms, in such a situation may be interesting.
In a previous work, an SVM was used to secure a smart grid, and attack detection in a smart grid was empirically studied [128]. This research showed that the ML algorithms SVM, KNN, perceptron, ensemble learning and sparse logistic regression are effective in detecting known and unknown attacks, performing better than traditional methods used for attack detection in smart grids.
In another research direction, SVM was recently used as a tool to exploit device security. The results in [129,130] showed that ML methods can break cryptographic devices and that SVM is more effective in breaking cryptographic devices than the traditional method (i.e. template attack).
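As a rough illustration of the hyperplane-based classification described above, the sketch below fits a linear SVM to a synthetic, high-dimensional, imbalanced dataset standing in for malware-detection features. scikit-learn is assumed to be available, and the dataset, class ratio and parameters are illustrative assumptions rather than a reproduction of the cited studies.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC
from sklearn.metrics import classification_report

# Many features, few samples: the regime in which SVMs are said to generalise well
X, y = make_classification(n_samples=300, n_features=50, n_informative=10,
                           weights=[0.8, 0.2], random_state=1)  # 1 = malicious
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

# Feature scaling matters for margin-based methods; LinearSVC learns the separating hyperplane
model = make_pipeline(StandardScaler(), LinearSVC(C=1.0, max_iter=5000))
model.fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te), target_names=["benign", "malicious"]))
```

Non-linearly separable problems would instead use a kernel SVM (e.g. an RBF kernel), which is where the kernel-selection difficulty mentioned later in Table 1 arises.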
3) Bayesian theorem-based algorithms
Bayes' theorem explains the probability of an incident on the basis of previous information related to the incident [131]. For instance, DoS attack detection is associated with network traffic information. Therefore, instead of assessing network traffic without knowledge of previous network traffic, Bayes' theorem can be used to evaluate the probability that the network traffic is attack related by using previous traffic information. A common ML algorithm based on Bayes' theorem is the Naive Bayes (NB) classifier.
The NB classifier is a commonly used supervised classifier known for its simplicity. NB calculates posterior probability and uses Bayes' theorem to forecast the probability that a particular feature set of unlabelled examples fits a specific label with the assumption of independence amongst the features. For example, for intrusion detection, NB can be used to classify the traffic as normal or abnormal. The features that can be used for traffic classification, such as connection duration, connection protocol (e.g. TCP and UDP) and connection status flag, are treated by the NB classifier independently even though these features may depend on one another. In NB classification, all features individually contribute to the probability that the traffic is normal or abnormal; thus, the modifier "naïve" is used. NB classifiers have been used for network intrusion detection [132,133] and anomaly detection [134,135]. The main advantages of NB classifiers include simplicity, ease of implementation, applicability to binary and multi-class classification, low training sample requirement [136] and robustness to irrelevant features (the features are treated independently). However, NB classifiers cannot capture useful clues from the relationships and interactions among features. The interactions among features can be important for accurate classification, particularly in complex tasks in which the interactions among features can significantly help the classifier increase its discrimination power among classes [137].
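The following sketch illustrates the naïve-independence assumption in practice: a Gaussian NB classifier is trained on a few hand-made connection features and outputs posterior probabilities for normal versus attack traffic. The feature set and data are illustrative assumptions, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Illustrative connection features: [duration (s), bytes transferred, packets per second]
X_train = np.array([
    [12.0, 4200.0,  15.0],   # normal
    [ 9.5, 3900.0,  12.0],   # normal
    [10.8, 4500.0,  18.0],   # normal
    [ 0.4,   90.0, 450.0],   # attack (flood-like)
    [ 0.6,  120.0, 380.0],   # attack
    [ 0.3,   70.0, 500.0],   # attack
])
y_train = np.array([0, 0, 0, 1, 1, 1])  # 0 = normal, 1 = attack

# Each feature contributes independently to the posterior P(class | features)
nb = GaussianNB().fit(X_train, y_train)

new_flow = np.array([[0.5, 100.0, 420.0]])
print("posterior [P(normal), P(attack)]:", nb.predict_proba(new_flow))
print("predicted class:", nb.predict(new_flow))
```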
4) k-Nearest neighbour (KNN)
KNN is a nonparametric method. KNN classifiers often use the Euclidean distance as the distance metric [138]. Figure 6 demonstrates KNN classification, in which new input samples are classified. In the figure, the red circles represent malicious behaviours, and the green circles represent the normal behaviours of the system. The newly unknown sample (blue circle) needs to be classified as malicious or normal behaviour. The KNN classifier categorises the new example on the basis of the votes of the selected number of its nearest neighbours; i.e. KNN decides the class of unknown samples by the majority vote of its nearest neighbours. For instance, in Figure 6, if the KNN classification is based on one nearest neighbour (k = 1), then it will categorise the class of the unseen sample as normal behaviour (because the nearest circle is a green circle). If the KNN classification is based on two nearest neighbours (k = 2), then the KNN classifier will categorise the class of the unseen sample as normal behaviour because the two nearest circles are green (normal behaviour). If the KNN classification is based on three or four nearest neighbours (k = 3, k = 4), then the KNN classifier will categorise the class of the unknown sample as malicious behaviour because the three and four nearest circles are red (malicious behaviour). Testing different values of k during the cross-validation process is an important step to determine the optimal value of k for a given dataset. Although the KNN algorithm is a simple classification algorithm and effective for large training datasets [139], the best k value always varies depending on the dataset. Therefore, determining the optimal value of k may be a challenging and time-consuming process. KNN classifiers have been used for network intrusion detection and anomaly detection [140][141][142][143][144][145]. Considering the IoT environment, a study [146] proposed a model for the detection of U2R and R2L attacks. The model reduced the dimensionality of the features to enhance efficiency using two layers of feature reduction and then applied a two-tier classification model that uses NB and KNN classifiers; the proposed model showed good detection results for both attacks. Another study developed a KNN-based intrusion detection system [147]. The developed system was meant for classifying nodes as normal or abnormal in a wireless sensor network (WSN), which is an important unit of IoT systems; the proposed system exhibited efficient and accurate intrusion detection.
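The voting procedure and the sensitivity to k described above can be sketched as follows, with cross-validation over candidate k values as one common way to pick k for a given dataset. This is a minimal example assuming scikit-learn, with synthetic data standing in for labelled node behaviour; the candidate k values are illustrative.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.neighbors import KNeighborsClassifier

# Synthetic two-class data: normal (0) vs. malicious (1) node behaviour
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 3)),
               rng.normal(3.0, 1.0, size=(200, 3))])
y = np.hstack([np.zeros(200), np.ones(200)])

# KNN votes over the k nearest neighbours (Euclidean distance by default);
# grid search with cross-validation tests several k values, as suggested above
search = GridSearchCV(KNeighborsClassifier(),
                      param_grid={"n_neighbors": [1, 3, 5, 7, 9]},
                      cv=5)
search.fit(X, y)
print("best k:", search.best_params_["n_neighbors"])
print("cross-validated accuracy:", round(search.best_score_, 3))
```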
5) Random forest (RF)
RFs are supervised learning algorithms. In an RF, several DTs are constructed and combined to acquire a precise and robust prediction model for improved overall results [148,149]. Therefore, an RF consists of numerous trees that are constructed randomly and trained to vote for a class. The most voted class is selected as the final classification output [148]. Even though the RF classifier is constructed mainly using DTs, these classification algorithms substantially differ. Firstly, DTs normally formulate a set of rules when the training set is provided, and this set of rules is subsequently used to classify a new input. RF uses DTs to construct subsets of rules for voting on a class; thus, the classification output is the average of the results, and RF is robust against over-fitting. Moreover, RF bypasses feature selection and requires only a few input parameters [26]. However, the use of RF may be impractical in specific real-time applications in which the required training dataset is large because RF needs the construction of several DTs. RF algorithms have been used for network intrusion detection and anomaly detection [150,151]. In a previous study [152], RF, SVM, KNN and ANN were trained to detect DDoS in IoT systems, and RF provided slightly better classification results than the other classifiers when limited feature sets were used to avoid additional computational overhead and improve the applicability of the system to real-time classification. In another study, RF was trained using features obtained from network traffic with the purpose of correctly recognising IoT device categories from a white list. The authors extracted and manually labelled network traffic data from 17 IoT devices. These devices belonged to nine categories of IoT devices and were used to train a multi-class classifier based on RF algorithms. The study concluded that ML algorithms in general, and RF in particular, hold practical significance in correctly identifying unauthorised IoT devices [153].
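A hedged sketch of an RF-based detector follows: many randomised trees vote on each sample, and the averaged feature importances hint at which traffic attributes drive the decision. scikit-learn is assumed; the data and feature names are invented for illustration and do not correspond to the datasets of [152,153].

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

feature_names = ["pkt_rate", "mean_pkt_size", "n_dst_ports", "tcp_syn_ratio"]

# Synthetic flows: benign traffic (0) vs. DDoS-like traffic (1)
rng = np.random.default_rng(7)
benign = rng.normal([20, 800, 3, 0.1], [5, 150, 1, 0.05], size=(400, 4))
ddos = rng.normal([300, 90, 40, 0.9], [60, 30, 10, 0.05], size=(400, 4))
X = np.vstack([benign, ddos])
y = np.hstack([np.zeros(400), np.ones(400)])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=7)

# 100 randomised trees vote; the majority class is the prediction
rf = RandomForestClassifier(n_estimators=100, random_state=7).fit(X_tr, y_tr)
print("test accuracy:", rf.score(X_te, y_te))
for name, importance in zip(feature_names, rf.feature_importances_):
    print(f"{name}: {importance:.2f}")
```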
6) Association Rule (AR) algorithms
AR algorithms [154] have been used to identify an unknown variable by investigating the relationship among various variables in a training dataset. For example, let X, Y and Z be variables in a dataset D. An AR algorithm aims to study the relationship among these variables to discover their correlations and consequently construct a model. Subsequently, this model is used to predict the class of new samples. AR algorithms identify frequent sets of variables [26], which are combinations of variables that frequently co-exist in attack examples. For example, in a previous study [155], the associations between TCP/IP variables and attack types were investigated using ARs, and the occurrence of various variables, such as service name, destination port, source port and source IP, was examined to predict the attack type. The AR algorithm reported in [156] exhibited favourable performance in intrusion detection. The researchers used fuzzy association rules in an intrusion detection model, which yielded a high detection rate and a low false positive rate [156]. However, compared with other learning methods, AR methods are not commonly used in IoT environments; thus, further exploration is suggested to check whether an AR method can be optimised or combined with another technique to provide an effective solution to IoT security. The main drawbacks of AR algorithms in practice are as follows. Firstly, the time complexity of AR algorithms is high. Association rules increase rapidly to an unmanageable quantity, particularly when the frequency among variables is decreased. Although several different approaches have been introduced to tackle the issue of efficiency, they are not always effective [157]. Moreover, AR algorithms are based on simple assumptions among variables (direct relationships and occurrence). In certain cases, these assumptions are inapplicable, especially to security applications, in which attackers usually attempt to imitate the behaviour of normal users.
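The following pure-Python sketch mirrors the idea: it counts frequent attribute combinations in a handful of toy connection records and derives simple "attributes → label" rules with their support and confidence. It is a brute-force stand-in for a proper AR miner such as Apriori, and the records, attribute names and thresholds are invented for illustration only.

```python
from itertools import combinations
from collections import Counter

# Each "transaction" is the set of attribute=value items observed in one connection record
transactions = [
    {"proto=tcp", "dst_port=23", "flag=SYN", "label=botnet"},
    {"proto=tcp", "dst_port=23", "flag=SYN", "label=botnet"},
    {"proto=udp", "dst_port=53", "flag=-",   "label=normal"},
    {"proto=tcp", "dst_port=80", "flag=ACK", "label=normal"},
    {"proto=tcp", "dst_port=23", "flag=SYN", "label=botnet"},
]

min_support = 0.4
n = len(transactions)

# 1) Count the support of every 1- and 2-item set (exhaustively, instead of Apriori pruning)
counts = Counter()
for t in transactions:
    for size in (1, 2):
        for itemset in combinations(sorted(t), size):
            counts[itemset] += 1
frequent = {s: c / n for s, c in counts.items() if c / n >= min_support}

# 2) Derive rules "antecedent -> label=..." and report their support and confidence
for itemset, support in frequent.items():
    labels = [item for item in itemset if item.startswith("label=")]
    if len(itemset) == 2 and len(labels) == 1:
        antecedent = tuple(item for item in itemset if not item.startswith("label="))
        confidence = support / frequent[antecedent]
        print(f"{antecedent} -> {labels[0]}  support={support:.2f} confidence={confidence:.2f}")
```

The exhaustive counting above is exactly why the time complexity concern noted later applies: the number of candidate itemsets grows rapidly as the support threshold is lowered.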
7) Ensemble learning (EL)
One of the promising directions in ML is EL. EL combines the outputs of numerous basic classification methods to produce a collective output and consequently improve classification performance. EL aims to combine heterogeneous or homogeneous multi-classifiers to obtain a final result [158]. At the initial stage of ML development, every learning method has its own advantages and achievements in specific applications or with specific datasets. Experimental comparisons in [159] found that the best learning method differs by application. The underlying learning theory used for a classifier depends on the data. Given that the nature of data apparently changes with the application, the best learning method that suits the given application data may not be the best for other applications. Therefore, researchers have started combining different classifiers to improve accuracy. EL uses several learning methods; thus, it reduces variance and is robust to over-fitting. The combination of different classifiers can provide results beyond the original set of hypotheses; thus, EL can adapt well to a problem [160]. However, the time complexity of an EL-based system is higher than that of a single classifier-based system because EL comprises several classifiers [161,162]. EL has been effectively used for intrusion, anomaly and malware detection [163][164][165][166].
A previous study [167] showed that the time complexity of such learning models can be reduced to make them suitable for devices with limited hardware resources, such as IoT devices. The authors proposed a lightweight, application-independent, ensemble learning-based framework for detecting online anomalies in the IoT environment. The proposed framework aims to tackle two issues: 1) accomplishing automated and distributed online learning approaches to identifying anomalies for resource-constrained devices and 2) evaluating the proposed framework with real data. The study reported that the ensemble-based method outperformed each individual classifier [167].
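A minimal sketch of a heterogeneous ensemble follows, assuming scikit-learn: a decision tree, a KNN classifier and a Gaussian NB classifier are combined by majority vote, which is one simple way to realise the EL idea described above (the cited lightweight framework [167] is not reproduced here, and the data are synthetic).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for labelled IoT traffic (0 = normal, 1 = anomalous)
X, y = make_classification(n_samples=600, n_features=12, n_informative=6, random_state=3)

# Three heterogeneous base classifiers combined by majority (hard) voting
ensemble = VotingClassifier(
    estimators=[("dt", DecisionTreeClassifier(max_depth=5)),
                ("knn", KNeighborsClassifier(n_neighbors=5)),
                ("nb", GaussianNB())],
    voting="hard",
)

# Compare a single classifier against the ensemble with 5-fold cross-validation
for name, clf in [("dt only", DecisionTreeClassifier(max_depth=5)), ("ensemble", ensemble)]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(name, "mean CV accuracy:", round(scores.mean(), 3))
```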
8) k-Means clustering
k-Means clustering is based on an unsupervised ML approach. This method aims to discover clusters in the data, and k refers to the number of clusters to be generated by the algorithm. The method is implemented by iteratively allocating each data point to one of the k clusters according to the given features. Each cluster will contain samples with similar features. The k-means algorithm applies iterative refinement to generate an ultimate result. The inputs of the algorithm are the number of clusters (k) and the dataset, which contains a set of features for each sample. Firstly, the k centroids are initialised, and then each sample is assigned to its closest cluster centroid according to the squared Euclidean distance. Secondly, after all the data samples are assigned to a specific cluster, the cluster centroids are recalculated by computing the mean of all samples assigned to that cluster. The algorithm iterates these steps until no sample that can modify the clusters exists [168,169]. The main limitations of k-means clustering are as follows. Firstly, the user has to select k in the beginning. Secondly, this algorithm assumes that clusters are spherical and have approximately equal numbers of samples. The k-means algorithm can be applied to anomaly detection by distinguishing normal behaviour from abnormal behaviour through feature similarity calculations [170,171]. Muniyandi, Rajeswari and Rajaram [172] proposed an anomaly detection method using k-means with DT (i.e. the C4.5 DT algorithm). However, the performance of k-means was less effective than that of supervised learning methods, specifically in detecting known attacks [173]. Unsupervised algorithms are generally a good choice when generating labelled data is difficult. However, the application of clustering methods in general, and k-means in particular, to IoT system security is still in its infancy and should be explored further.
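One common way to turn k-means into an anomaly detector, as alluded to above, is to cluster (mostly normal) behaviour and flag new samples that lie unusually far from every centroid. The sketch below assumes scikit-learn; the number of clusters, the percentile threshold and the synthetic data are illustrative assumptions rather than a recommended configuration.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Normal behaviour grouped around three operating modes of a device (two features)
normal = np.vstack([rng.normal(c, 0.5, size=(200, 2)) for c in ([0, 0], [5, 5], [0, 5])])

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normal)

# Distance of each sample to its closest centroid; the 99th percentile of the
# training distances serves as an (illustrative) anomaly threshold
train_dist = kmeans.transform(normal).min(axis=1)
threshold = np.percentile(train_dist, 99)

new_points = np.array([[0.2, 0.1],    # near a known mode -> expected to be normal
                       [9.0, -4.0]])  # far from every centroid -> expected to be anomalous
new_dist = kmeans.transform(new_points).min(axis=1)
print("anomalous:", new_dist > threshold)
```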
Unsupervised ML methods have many applications in securing IoT systems. For instance, k-means clustering was used to secure WSNs by detecting intrusions [174]. In a study on Sybil detection in industrial WSNs [175], a kernel-oriented scheme was proposed to differentiate Sybil attackers from normal sensors by clustering the channel vectors. A clustering algorithm also showed the potential to preserve private data anonymisation in an IoT system [176]. The use of clustering to develop data anonymisation algorithms can significantly advance data exchange security [176].
9) Principal component analysis (PCA)
PCA is a feature-reduction technique that transforms a large set of variables into a reduced set that preserves most of the information represented in the original set. It converts a number of possibly correlated features into a smaller number of uncorrelated features, called principal components [177]. The main working principle of PCA can therefore be exploited for feature reduction to realise real-time intrusion detection for IoT systems; a previous work proposed a model that uses PCA for feature reduction and adopts softmax regression and the KNN algorithm as classifiers. The authors reported that combining PCA with these classifiers yields a time- and computation-efficient system that can be used in real time in IoT environments [178]. Table 1 summarises the potential ML methods for securing IoT systems, together with their advantages, disadvantages and applications in IoT security.
Table 1. Potential ML methods for securing IoT systems: advantages, disadvantages and applications in IoT security.
DT. Advantages: DT is a simple, easy-to-use and transparent method. Disadvantages: DT requires large storage because of its construction nature, and understanding DT-based methods is easy only if few DTs are involved. IoT security applications: detection of intrusion [118,119] and suspicious traffic sources [120].
SVM. Description: SVMs form a splitting hyperplane in the feature space of two or more classes such that the distance between the hyperplane and the most adjacent sample points of each class is maximised [121]. Advantages: SVMs are known for their generalisation capability and suitability for data consisting of a large number of feature attributes but a small number of sample points [122,123]. Disadvantages: the optimal selection of a kernel is difficult, and understanding and interpreting SVM-based models are difficult. IoT security applications: detection of intrusion [124][125][126], malware [127] and attacks in smart grids [128].
NB. Description: NB calculates the posterior probability; it uses Bayes' theorem to forecast the probability that a particular feature set of an unlabelled sample fits a specific label, under the assumption of independence amongst features. Advantages: NB is known for its simplicity, ease of implementation, low training sample requirement [136] and robustness to irrelevant features (the features are treated independently).
KNN. Advantages: KNN is a popular and effective ML method for intrusion detection. Disadvantages: the optimal k value usually varies from one dataset to another; therefore, determining it may be a challenging and time-consuming process. IoT security applications: detection of intrusions [146] and anomalies [140][141][142][143][144][145].
RF. Description: in an RF, several DTs are constructed and combined to acquire a precise and stable prediction model for improved overall results. Advantages: RF is robust to over-fitting, bypasses feature selection and requires only a few input parameters. Disadvantages: RF is based on constructing several DTs; thus, it may be impractical in specific real-time applications in which the required training dataset is large. IoT security applications: detection of intrusion [150], anomalies [151], DDoS attacks [152] and unauthorised IoT devices [153].
AR algorithms. Description: AR algorithms study the relationships among the variables in a given training dataset T to discover correlations and consequently construct a model, which is then used to predict the class of new samples. Advantages: AR algorithms are simple and easy to use. Disadvantages: the time complexity of the algorithms is high, and AR algorithms use simple assumptions among variables (direct relationships and occurrence) that are inapplicable in certain cases, especially in security applications. IoT security applications: detection of intrusion [156].
EL. Description: EL combines the outputs of numerous basic classification methods to produce a collective output and consequently improve classification performance. Advantages: EL reduces variance and is robust to over-fitting; it can provide results beyond the original set of hypotheses and can therefore adapt better to a problem than a single classifier-based method. Disadvantages: the time complexity of an EL system is higher than that of a single classifier-based system. IoT security applications: detection of intrusion, anomalies and malware [163][164][165][166][167].
k-Means clustering. Description: k-means clustering is an unsupervised learning approach that identifies clusters in the data according to feature similarities; k refers to the number of clusters to be generated by the algorithm. Advantages: unsupervised algorithms are generally a good choice when generating labelled data is difficult, and k-means clustering can be used for private data anonymisation in an IoT system because it does not require labelled data. Disadvantages: k-means clustering is less effective than supervised learning methods, specifically in detecting known attacks [173]. IoT security applications: Sybil detection in industrial WSNs [175] and private data anonymisation in an IoT system [176].
PCA. Description: PCA converts a number of probably correlated features into a reduced number of uncorrelated features, called principal components [177]. Advantages: PCA achieves dimensionality reduction and consequently reduces the complexity of the model. Disadvantages: PCA is a feature-reduction method that should be used with other ML methods to establish an effective security approach. IoT security applications: PCA can support real-time detection systems in IoT environments [178] by reducing the model features.
B. Deep learning (DL) methods for IoT Security
Recently, the applications of DL to IoT systems have become an imperative research topic [179]. The most vital advantage of DL over traditional ML is its superior performance in large datasets. Several IoT systems produce a large amount of data; thus, DL methods are suitable for such systems. Moreover, DL can automatically extract complex representations from data [179]. DL methods can enable the deep linking of the IoT environment [180]. Deep linking is a unified protocol that permits IoT-based devices and their applications to interact with one another automatically without human intervention. For example, the IoT devices in a smart home can automatically interact to form a fully smart home [179].
DL methods provide a computational architecture that combines several processing levels (layers) to learn data representations with several levels of abstraction. Compared with traditional ML methods, DL methods have considerably enhanced the state of the art in many applications [12]. DL is an ML subfield that utilises several non-linear processing layers for discriminative or generative feature abstraction and transformation for pattern analysis. DL methods are also known as hierarchical learning methods because they can capture hierarchical representations in a deep architecture. The working principle of DL is inspired by the working mechanisms of the human brain and neurons for processing signals. Deep networks are constructed for supervised learning (discriminative), unsupervised learning (generative) and the combination of these learning types, which is called hybrid DL.
1) Convolutional neural networks (CNNs)
CNNs were introduced to reduce the data parameters used in a traditional artificial neural network (ANN). The data parameters are reduced by utilising three concepts, namely, sparse interaction, parameter sharing and equivariant representation [181]. Reducing the connections between layers increases the scalability and improves the training time complexity of a CNN.
A CNN consists of two alternating types of layers: convolutional layers and pooling layers. The convolutional layers convolve the data with multiple filters (kernels) of equal size [11]. The pooling layers perform down-sampling to decrease the sizes of the subsequent layers through max pooling or average pooling. Max pooling divides the input into non-overlapping clusters and selects the maximum value of each cluster in the previous layer [182,183], whereas average pooling averages the values of each cluster in the previous layer. Another important element of a CNN is the activation unit, which applies a non-linear activation function to each element in the feature space. A common choice is the rectified linear unit (ReLU), with activation function f(x) = max(0, x) [184]. The working principle of a CNN applied to IoT security is shown in Figure 8.
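As an illustration only, the following sketch (an assumed toy architecture, not a model from the surveyed papers) shows how convolution, ReLU activation and max pooling compose in a small 1-D CNN that scores fixed-length windows of traffic features; the layer sizes and the input length of 100 are arbitrary assumptions.

```python
# A minimal sketch: a 1-D CNN mapping a length-100 feature window to a
# benign-vs-malicious score, built from convolution, ReLU and max pooling.
import torch
import torch.nn as nn

class TrafficCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(in_channels=1, out_channels=16, kernel_size=5, padding=2),
            nn.ReLU(),                    # f(x) = max(0, x)
            nn.MaxPool1d(kernel_size=2),  # keep the maximum of each non-overlapping pair
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(kernel_size=2))
        self.classifier = nn.Linear(32 * 25, n_classes)  # 100 -> 50 -> 25 after pooling

    def forward(self, x):                 # x: (batch, 1, 100)
        z = self.features(x)
        return self.classifier(z.flatten(start_dim=1))

model = TrafficCNN()
dummy = torch.randn(8, 1, 100)            # batch of 8 length-100 windows
print(model(dummy).shape)                 # torch.Size([8, 2])
```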
The main advantage of a CNN is that it allows the automatic learning of features from raw data with high performance, which is why it is extensively applied in DL training approaches. However, a CNN has a high computational cost; thus, implementing it on resource-constrained devices to support on-board security systems is challenging. Nevertheless, a distributed architecture can solve this issue: a light deep neural network (DNN) is implemented and trained with only a subset of important output classes on-board, whereas the complete training of the algorithm is performed at the cloud level for deep classification [185].
The development of CNNs has mainly been driven by advances in image recognition. Accordingly, CNNs have become widely used, leading to successful and effective models for image classification and recognition trained on large public image sources, such as ImageNet [186,187]. Furthermore, CNNs demonstrate robustness in numerous other applications. For IoT security, a study [188] proposed a CNN-based malware detection method for Android. With the application of the CNN, the significant features related to malware detection are learnt automatically from the raw data, thereby eliminating the need for manual feature engineering.
The key point in using a CNN is that the network is trained to learn suitable features and execute classification conjointly, thus eliminating the extraction process required in traditional ML and consequently providing an end-to-end model [188]. However, the robust learning performance of CNNs can be used by attackers as a weapon. A previous study [189] showed that a CNN algorithm can break cryptographic implementations successfully.
Figure 8. Illustration of the CNN working principle for IoT security.
2) Recurrent neural networks (RNNs)
An RNN is a vital category of DL algorithms. RNNs were proposed to handle sequential data. In several applications, forecasting the current output is based on the analysis of associations among several previous samples; thus, the output of the neural network depends on the present and past inputs. In such an arrangement, a feed-forward NN is inappropriate because it maps inputs to outputs with no dependency on previous inputs [190]. Therefore, when the backpropagation algorithm was introduced, its most remarkable application was the training of RNNs [12,191]. For applications that consist of sequential inputs (e.g. speech, text and sensor data), RNNs are recommended [12,191].
An RNN integrates a temporal layer to capture sequential data and then learns multifaceted variations through the hidden units of the recurrent cell [192]. The hidden units are modified according to the data presented to the network and are continually updated to reflect the current state of the network. The RNN processes the present hidden state by estimating the subsequent hidden state as an activation of the formerly hidden state. RNNs are used because of their capability to manage sequential data effectively. This capability is advantageous for various tasks, such as threat detection, in which the patterns of the threat are time dependent. Therefore, using recurrent connections can improve neural networks and reveal important behaviour patterns. The main drawback of RNNs, however, is the issue of vanishing or exploding gradients [193]. RNNs and their variants have achieved excellent performance in many applications with sequential data, such as machine translation and speech recognition [194][195][196]. Moreover, RNNs can be used for IoT security. IoT devices generate large amounts of sequential data from several sources, such as network traffic flows, which are among the key features for detecting several potential network attacks. For example, a previous study [197] discussed the feasibility of an RNN in examining network traffic behaviour to detect potential attacks (malicious behaviour) and confirmed the usefulness of the RNN in classifying network traffic for accurate malicious behaviour detection; thus, RNNs provide a practical solution in real-world scenarios. Exploring RNNs and their variants is of significance in improving IoT system security, specifically for time series-based threats.
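The sketch below (an assumed setup, not the model of [197]) illustrates the recurrent idea: an LSTM consumes a sequence of per-packet or per-flow feature vectors and predicts whether the flow is benign or malicious from its final hidden state. The feature and hidden dimensions are illustrative assumptions.

```python
# A minimal sketch: an LSTM-based classifier over sequential traffic features.
import torch
import torch.nn as nn

class FlowLSTM(nn.Module):
    def __init__(self, n_features=10, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_features, hidden_size=hidden,
                            batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time, n_features)
        _, (h_last, _) = self.lstm(x)     # h_last: (1, batch, hidden)
        return self.head(h_last.squeeze(0))

model = FlowLSTM()
flows = torch.randn(4, 30, 10)            # 4 flows, 30 time steps, 10 features each
print(model(flows).shape)                 # torch.Size([4, 2])
```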
3) Deep autoencoders (AEs)
A deep AE is an unsupervised learning neural network trained to reproduce its input at its output. An AE has a hidden layer h, which defines a code used to represent the input [181]. An AE neural network is divided into two parts: the encoder function h = f(x) and the decoder function, which attempts to reproduce the input as r = g(h). The encoder takes the input and converts it into an abstraction, generally termed a code. Subsequently, the decoder takes the code, which was produced to represent the input, and rebuilds the original input. The training process of an AE should be accomplished with minimum reconstruction error [198]. However, AEs cannot learn to replicate the input perfectly. AEs are also restricted because they can produce only an approximate copy, merely copying inputs that are similar to the training data. The model is required to prioritise which characteristics of the inputs should be copied; thus, it frequently learns useful characteristics of the data [181]. AEs are potentially important for feature extraction. AEs can be successfully used for representation learning to learn features (in place of the manually engineered features used in traditional ML) and to reduce dimensionality with no prior data knowledge. AEs, nevertheless, consume considerable computational time. Although AEs can effectively learn to capture the characteristics of the training data, they may only complicate the learning process rather than represent the characteristics of the dataset if the training dataset is not representative of the testing dataset.
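A minimal sketch of the reconstruction-error idea follows, assuming a generic tabular feature vector rather than any dataset from the cited works: the AE is trained to reconstruct normal samples, and at test time inputs whose reconstruction error exceeds a threshold chosen on normal data would be flagged as anomalous. The layer sizes, code size and threshold percentile are assumptions.

```python
# A minimal sketch: autoencoder anomaly detection via reconstruction error.
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, n_features=20, code=4):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 16), nn.ReLU(),
                                     nn.Linear(16, code))        # h = f(x)
        self.decoder = nn.Sequential(nn.Linear(code, 16), nn.ReLU(),
                                     nn.Linear(16, n_features))  # r = g(h)

    def forward(self, x):
        return self.decoder(self.encoder(x))

ae, loss_fn = AE(), nn.MSELoss()
opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
X_normal = torch.randn(512, 20)            # stand-in for "normal" samples
for _ in range(200):                       # minimise reconstruction error
    opt.zero_grad()
    loss = loss_fn(ae(X_normal), X_normal)
    loss.backward()
    opt.step()

with torch.no_grad():
    err = ((ae(X_normal) - X_normal) ** 2).mean(dim=1)
    threshold = err.quantile(0.99)         # flag anything above this at test time
print("anomaly threshold:", float(threshold))
```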
AEs were used to detect network-based malware in [199]; the AEs were trained to learn the latent representation of a diverse feature set; particularly, AEs were trained on the feature vector extracted from the cybersystems. The AEs exhibited better detection performance than did the traditional ML algorithms SVM and KNN [199]. In another study [200], an AE was combined with a DBN to construct a malware detection method and used for data dimensionality reduction by non-linear mapping to extract only the significant features; subsequently, the DBN learning algorithm was trained to detect malicious code.
4) Restricted Boltzmann machines (RBMs)
RBMs are deep generative models developed for unsupervised learning [201]. An RBM is a completely undirected model with no link between any two nodes in the same layer. RBMs consist of two types of layers: visible and hidden layers. The visible layer holds the known input, whereas the hidden layer consists of multiple layers that include the latent variables. RBMs hierarchically understand features from data, and the features captured in the initial layer are used as latent variables in the following layer.
The research in [202] developed a network anomaly detection model that can overcome the inherent challenges in developing such a model. These challenges include the generation of labelled data required for the effective training of the model because a network traffic dataset is multi-part and irregular. The second challenge is the constant evolution of anomaly behaviour with time. Therefore, the model should be dynamically adapted to detect any new form of attacks and generalised to detect the anomaly in different network environments. To solve these challenges, the researchers in [202] proposed a learning model that is based on a discriminative RBM, which they selected due to its capability to combine generative models with suitable classification accuracy to detect network anomaly in a semi-supervised fashion even with incomplete training data. However, their experimental results showed that the classification performance of the discriminative RBM was affected when the classifier was tested on a network dataset that differed from the network dataset on which the classifier was trained. This finding should be further investigated, and how a classifier can be generalised to detect an anomaly in different network environments should be further studied.
The feature representation capability of a single RBM is limited. However, RBM can be substantially applied by stacking two or more RBMs to form a DBN. This process is discussed in the following section.
5) Deep belief networks (DBNs)
DBNs are generative methods [203]. A DBN consists of stacked RBMs that execute greedy layer-wise training to accomplish robust performance in an unsupervised environment. In a DBN, training is accomplished layer by layer, each of which is executed as an RBM trained on top of the formerly trained layer (a DBN is a set of RBM layers used in the pre-training phase, which subsequently becomes a feed-forward network for weight fine-tuning with contrastive divergence) [192]. In the pre-training phase, the initial features are trained through a greedy layer-wise unsupervised approach, whereas a softmax layer is applied in the fine-tuning phase to the top layer to fine-tune the features with respect to the labelled samples [195].
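A rough sketch of this layer-wise idea is given below: scikit-learn's BernoulliRBM can be stacked to approximate greedy unsupervised pre-training, with a logistic regression on top standing in for the supervised fine-tuning stage. This is only an approximation of a DBN (the RBM weights themselves are not fine-tuned here), and the layer sizes, learning rates and synthetic data are assumptions; features are assumed scaled to [0, 1].

```python
# A rough sketch: two stacked RBMs (greedy pre-training) feeding a supervised
# classifier, loosely mirroring the DBN pre-training + fine-tuning recipe.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import BernoulliRBM
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((500, 30))             # stand-in for normalised security features in [0, 1]
y = rng.integers(0, 2, size=500)      # stand-in labels (0 = benign, 1 = malicious)

dbn_like = Pipeline([
    ("rbm1", BernoulliRBM(n_components=64, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
dbn_like.fit(X, y)
print("training accuracy:", dbn_like.score(X, y))
```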
DBNs have been successfully implemented in malicious attack detection. A previous study [204] proposed an approach to secure mobile edge computing by applying a DL-based approach to malicious attack detection. The study used a DBN for automatic detection, and the proposed DBN-based model showed vital improvement in malware detection accuracy compared with ML-based algorithms [204]. This result demonstrated the superiority of DL, in general, and DBNs in particular, to traditional manual feature engineering methods in malware detection. In another study [200], an AE was combined with a DBN to establish a malware detection method, and an AE DL algorithm was used for the reduction of data dimensionality by non-linear mapping to extract only the significant features; subsequently, the DBN learning algorithm was trained to detect malicious code.
DBNs are unsupervised learning methods trained with unlabelled data iteratively for significant feature representation. However, even though DBNs use contrastive divergence to reduce computational time, these networks are still inapplicable to on-board devices with limited resources.
6) Generative adversarial networks (GANs)
Introduced by [205], GANs have recently emerged as promising DL frameworks. A GAN framework simultaneously trains two models, namely, generative and discriminative models, via an adversarial process as shown in Figure 9. The generative model learns the data distribution and generates data samples, and the discriminative model predicts the possibility that a sample originates from the training dataset rather than the generative model (i.e. evaluates the sample for authenticity). The objective of training the generative model is to increase the probability that the discriminative model misclassifies the sample [205]. In each stage, the generative model, which is the generator, is prepared to deceive the discriminator by generating a sample dataset from random noise. By contrast, the discriminator is fed with several real data samples from the training set, accompanied by the samples from the generator. The discriminator aims to classify real (from the training dataset) and unreal (from the generative model) samples. The performances of the discriminative and generative models are measured by the correctly and incorrectly classified samples, respectively. Subsequently, both models are updated for the next iteration. The output discriminative model assists the generative model to enhance the samples generated for the subsequent iteration [198].
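The following compact sketch (an assumed toy setup, not the architecture of [206]) shows the adversarial loop described above: the generator maps noise to synthetic feature vectors, and the discriminator is trained to separate real samples from generated ones while the generator is trained to fool it. Network sizes, learning rates and the random "real" data are assumptions.

```python
# A minimal sketch of the GAN training loop on tabular feature vectors.
import torch
import torch.nn as nn

n_features, noise_dim = 20, 8
G = nn.Sequential(nn.Linear(noise_dim, 32), nn.ReLU(), nn.Linear(32, n_features))
D = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()
real_data = torch.randn(256, n_features)   # stand-in for real training samples

for step in range(200):
    # Discriminator step: real samples -> label 1, generated samples -> label 0.
    z = torch.randn(64, noise_dim)
    fake = G(z).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator output 1 on generated samples.
    z = torch.randn(64, noise_dim)
    g_loss = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print("final d_loss %.3f, g_loss %.3f" % (float(d_loss), float(g_loss)))
```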
GANs have recently been implemented in IoT security. For example, the study in [206] proposed an architecture for securing the cyberspace of IoT systems that involves training DL algorithms to classify the system behaviour as normal or abnormal. GAN algorithms were integrated into the proposed architecture for a preliminary study, whose evaluation results showed the effectiveness of the GAN-based architecture in detecting abnormal system behaviour [206].
GANs may have a potential application in IoT security because they may learn different attack scenarios to generate samples similar to a zero-day attack and provide algorithms with a set of samples beyond the existing attacks. GANs are suitable for training classifiers through a semi-supervised approach. GANs can generate samples more rapidly than can fully visible DBNs because the former is not required to generate different entries in the samples sequentially. In GANs, generating a sample needs only one pass through the model, unlike in RBMs, which require an unidentified number of iterations of a Markov chain [205,207]. However, GAN training is unstable and difficult. Learning to generate discrete data, such as text, by using a GAN is a challenging task [205,207].
7) Ensemble of DL networks (EDLNs)
Several DL algorithms can work collaboratively to perform better than independently implemented algorithms. EDLNs can be accomplished by merging generative, discriminative or hybrid models. EDLNs are often used to handle complex problems with uncertainties and high-dimensional features. An EDLN comprises stacked individual classifiers, either homogeneous (classifiers from the same family) or heterogeneous (classifiers from different families), and is used to enhance diversity, accuracy, performance and generalisation [208]. Although EDLNs have achieved remarkable success in many applications, such as human activity recognition, their application in IoT security needs further investigation, particularly the possibility of implementing light homogeneous or heterogeneous classifiers in a distributed environment to improve the accuracy and performance of an IoT security system and to solve challenges related to computational complexity. Table 2 summarises the potential DL methods for securing IoT systems, together with their advantages, disadvantages and applications in IoT security.
Table 2. Potential DL methods for securing IoT systems: advantages, disadvantages and applications in IoT security.
CNNs. Description: CNNs mainly aim to reduce the data parameters used by applying sparse interactions, parameter sharing and equivariant representations [181], thereby reducing the connections between layers to quantities less than those in ANNs. Advantages: CNNs are robust supervised DL methods with highly competitive performance; with these features, their scalability is increased and their training time complexity is improved compared with those of ANNs, and their ability to automatically learn features from raw security data gives them potential application in IoT security. Disadvantages: CNNs have high computational cost; thus, implementing them on resource-constrained devices to support on-board security systems is challenging. IoT security applications: malware detection [188]; because CNNs can automatically learn features from raw security data, they can construct an end-to-end security model for IoT systems [188].
RNNs. Description: RNNs integrate a temporal layer to take sequential data and then learn multi-faceted variations with the hidden unit of the recurrent cell [192]. Advantages: RNNs and their variants have achieved excellent performance in many applications with sequential data; in certain cases, IoT security data consist of sequential data, so RNNs have potential application in IoT security. Disadvantages: the main drawback of RNNs is the issue of vanishing or exploding gradients [193]. IoT security applications: RNNs can classify network traffic with high accuracy in detecting malicious behaviour [197]; RNNs and their variants show considerable potential in improving IoT system security, specifically for time series-based threats.
AEs. Description: an AE has a hidden layer h, which holds a code used to represent the input; the network is divided into an encoder function h = f(x) and a decoder function that attempts to reproduce the input as r = g(h). The encoder converts the input into an abstraction (the code), and the decoder uses this code to rebuild the original input. Advantages: AEs are potentially important for feature extraction; they can be effectively used for representation learning to learn features in place of the manually engineered features used in traditional ML and to reduce dimensionality with no prior data knowledge. Disadvantages: AEs consume considerable computational time; moreover, if the training dataset is not representative of the testing dataset, AEs may only complicate the learning process rather than represent the characteristics of the dataset. IoT security applications: AEs can be used for detecting malware [199]; an AE has been combined with a DBN to establish a malware detection method [200].
RBMs. Description: RBMs are deep generative models developed for unsupervised learning [201]; they are completely undirected models with no link between any two nodes in the same layer. Advantages: using a feedback mechanism on RBMs allows the extraction of numerous vital features through an unsupervised approach. Disadvantages: RBMs have high computational cost; thus, implementing them on resource-constrained IoT devices to support on-board security systems is challenging. IoT security applications: RBMs can be used for network anomaly detection [202].
DBNs. Description: DBNs consist of stacked RBMs that execute greedy layer-wise training to accomplish robust performance in an unsupervised environment. Advantages: DBNs are unsupervised learning methods trained with unlabelled data iteratively for significant feature representation. Disadvantages: DBNs present high computational cost due to the extensive initialisation process caused by the large number of parameters. IoT security applications: DBNs can be used for malicious attack detection [204].
GANs. Description: a GAN framework simultaneously trains two models (i.e. generative and discriminative models) via an adversarial process; the generative model learns the data distribution and generates data samples, and the discriminative model predicts the possibility that a sample originates from the training dataset rather than from the generative model (i.e. evaluates the instance for authenticity). Advantages: in GANs, generating a sample needs only one pass through the model, unlike in DBNs and RBMs, in which an unidentified number of iterations of a Markov chain is required [205,207]. Disadvantages: GAN training is unstable and difficult, and learning to generate discrete data by using a GAN is a difficult task [205,207]. IoT security applications: GANs can be used to build an architecture for securing the cyberspace of IoT systems [206].
EDLNs. Description: EDLNs can be accomplished by merging generative, discriminative or hybrid models. Advantages: combining DL classifiers can help achieve model diversity, improve model performance and expand model generalisation. Disadvantages: the time complexity of the system can be significantly increased. IoT security applications: the use of EDLNs in securing IoT systems needs further investigation, particularly the possibility of implementing light homogeneous or heterogeneous classifiers in a distributed environment to improve the accuracy and performance of a system.
C. Reinforcement learning (RL) methods for IoT security
Learning from the surrounding environment is one of the first learning methods humans experience: humans naturally start learning by interacting with their environment. RL is inspired by the psychological and neuroscientific perspectives on animal behaviour and by the mechanisms through which agents can enhance their control of the environment [111,112]. RL involves making an agent learn how to map situations to actions appropriately to achieve the highest rewards [112]. The agent does not have previous knowledge of which actions to take but has to learn which actions produce the most reward by attempting them through trial and error, which are the defining features of RL. Thus, the agent continues to learn from its experience to increase its rewards. One of the recent successful RL methods is the deep Q network [111]. Extensions of deep Q networks have been suggested, including double Q-learning [209], continuous control with deep RL [210] and prioritised experience replay [211].
RL has been implemented to solve several IoT issues. Studies [212,213] proposed RL-based anti-jamming schemes for wideband autonomous cognitive radios (WACRs). In [212], information about the sweeping jammer signal and unintentional interference was used to distinguish them from the transmissions of other WACRs; RL used this information to learn a sub-band selection policy that accurately evades the jammer signal and the interference from other WACRs. Similarly, in [213], an RL method based on Q-learning was trained to effectively avoid jamming attacks sweeping over a wide spectrum of hundreds of MHz in real time. In the same direction, [214] used RL to develop an anti-jamming scheme for cognitive radios and integrated the scheme with a deep CNN to improve the efficiency of RL over a large number of frequency channels. A similar scheme against aggressive jamming, of the kind normally expected in tactical mobile networking, was proposed using deep RL in [215]; the results showed that RL is a promising approach for developing schemes against aggressive jamming.
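The toy sketch below (not any of the cited schemes in [212][213][214][215]) illustrates the underlying trial-and-error mechanism with tabular Q-learning: the agent picks one of a few radio channels at each step, receives a positive reward when it avoids a sweeping jammer and a negative reward on a collision, and gradually learns a channel-hopping policy. The state encoding, reward values and jammer model are all assumptions.

```python
# A toy sketch: tabular Q-learning for jammer-avoiding channel selection.
import numpy as np

n_channels, alpha, gamma, eps = 4, 0.1, 0.9, 0.1
Q = np.zeros((n_channels, n_channels))   # state = channel jammed at the last step
rng = np.random.default_rng(0)
state = 0
for t in range(5000):
    jammed = t % n_channels              # simple deterministic sweeping jammer
    action = (rng.integers(n_channels) if rng.random() < eps
              else int(Q[state].argmax()))          # epsilon-greedy channel choice
    reward = -1.0 if action == jammed else 1.0      # collision penalised
    next_state = jammed
    # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
print(np.round(Q, 2))
```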
V. IOT SECURITY LAYERS BASED ON ML AND DL METHODS
In this section, we classify the previous studies on ML and DL methods for IoT security according to the layers these methods intend to protect. Even though ML and DL may be applied to protect more than one layer or the end-to-end system (which is the advantage of ML and DL methods over other methods and holds potential future uses), the following classification is proposed to highlight the conceptualisation of the ML and DL methods for IoT security. At the end of this section the technology tools that can essentially enable ML/DL deployment for IoT security are listed.
A. Perception layer
One of the promising applications of DL methods is physical-layer authentication. Traditional physical-layer authentication techniques apply assumption checks and exploit the randomness and exclusiveness of the radio channel between “Alice” and “Bob” to detect the spoofing attacker “Eve” in a wireless network. Nonetheless, such an approach is not always practical, specifically in dynamic networks [216]. Wang, Jiang, Lv and Xiao [216] used a learning model to construct a physical-layer authentication model that uses past data generated from a spoofing model as learning vectors to train an extreme learning machine. The proposed model exhibited improved spoofing detection performance and consequently achieved considerably enhanced authentication accuracy compared with that of state-of-the-art methods.
Shi, Liu, Liu and Chen [217] proved that the present Wi-Fi signals generated by IoT objects can be adopted to detect distinctive human behavioural and physiological features and can be utilised to authenticate individuals on the basis of an
understanding of their daily activities. The authors proposed a scheme that uses a single pair of Wi-Fi signals generated by IoT devices to mine Wi-Fi channel state information and thus obtain the amplitude and the relative phase for precise user authentication without the need for user participation. Using these features, the authors developed a DL model (i.e. a deep neural network (DNN)) to identify the daily human activity distinctiveness of each individual and subsequently generate a fingerprint for each user, called a Wi-Fi fingerprint, to capture the distinct characteristics of different users; the proposed DL-based authentication method exhibited high accuracy [217]. This study validates the potential application of DL algorithms in constructing authentication systems. In another study [215], discussed in the RL subsection, a scheme against aggressive jamming of the kind normally expected in tactical mobile networking was developed using deep RL and was found effective [215].
The research in [218] also considered the issue of jamming in an IoT network and introduced a centralised approach to addressing possible jamming attacks in an IoT environment consisting of resource-constrained devices. The idea of the proposed model is to use the IoT access point to protect against the jamming attacker by distributing its power over the subcarriers in an intelligent manner using an evolutionary-based algorithm. The proposed method converges within a practical number of iterations; thus, it provides a better solution than a random power allocation strategy.
Along the same direction, and as noted in the RL subsection, RL-based anti-jamming schemes for WACRs were proposed in [212,213]: [212] combined RL with information about the sweeping jammer signal and unintentional interference to learn a sub-band selection policy that evades the jammer and the interference from other WACRs, whereas [213] trained a Q-learning-based method to avoid jamming attacks sweeping over a wide spectrum of hundreds of MHz in real time. In the same direction, [214] integrated RL with a deep CNN to improve the efficiency of RL over a large number of frequency channels in an anti-jamming scheme for cognitive radios.
Incorporating cognitive radio (CR) capability into IoT devices has paved the way for innovative research on IoT systems [219]. Currently, many researchers are conducting studies on communication and computing in IoT systems. According to two previous studies [220,221], IoT systems cannot be sustained without comprehensive cognitive capability because of growing issues. CRs are radio devices that can learn and change in accordance with their dynamic environment [222]. The main step towards accomplishing such cognitive operation is enabling CRs to sense and understand their working environment. Ideally, CRs should be able to work over a wide frequency range. However, sensing all required frequencies in real time is a challenging task, specifically in the presence of jamming attacks. CRs can become increasingly useful and reliable communication systems if they can eliminate the incidence of accidental interference or deliberate jamming attacks [213].
B. Network layer
The network layer forms the largest surface of the IoT system. This layer is responsible for transmitting and routing data. It provides a ubiquitous access environment to the perception layer, i.e. data communication and storage functionalities [223]. Therefore, securing the IoT network layer should be of high technical priority. Along the same line of thought, Yavuz [224] proposed a DL-based model to detect routing attacks in IoT systems and created a dataset for training and testing the DL model by using the Cooja IoT simulator, with simulations of up to 1000 nodes within 16 networks, to detect three types of attacks, namely, decreased rank, hello flood and version number attacks. The DNN achieved high performance in detecting the three attacks. However, the authors did not report how many normal and anomalous samples were in the created dataset. Precision, recall and F-measure were used as evaluation metrics; however, these metrics may not reflect the actual performance of the model and tend to be biased if the created dataset is imbalanced.
Nobakht, Sivaraman and Boreli [225] proposed an intrusion detection framework that is implemented at the network level and constructed with ML algorithms to protect smart devices installed in home environments. They used precision and recall metrics to measure the performance of the classifiers. However, the dataset used was unbalanced, with illegal access samples forming the majority of the samples in the dataset; thus, both evaluation metrics may not precisely reflect the model performance. In the case of imbalanced data, other performance metrics, such as the area under the receiver operating characteristic curve (AUC), can be a better choice for evaluating performance than the accuracy, recall and precision metrics [226,227].
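The small illustration below (synthetic numbers, not results from the cited studies) shows why accuracy alone can mislead on imbalanced security data: a classifier that labels everything as benign reaches 99% accuracy on a dataset with 1% attacks, while its AUC reveals that it has no discriminative power.

```python
# Accuracy vs AUC on an imbalanced intrusion detection toy example.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = np.r_[np.zeros(990, dtype=int), np.ones(10, dtype=int)]  # 1% attack samples
y_pred = np.zeros_like(y_true)          # "always benign" decisions
y_scores = rng.random(1000)             # scores with no relation to the labels

print("accuracy:", accuracy_score(y_true, y_pred))       # 0.99, looks excellent
print("ROC AUC :", roc_auc_score(y_true, y_scores))       # ~0.5, i.e. chance level
```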
A previous study [197] discussed the viability of an RNN (i.e. a long short-term memory [LSTM] network) in the analysis of network traffic behaviour to detect potential attacks (malicious behaviour) and confirmed the effectiveness of the RNN in precisely classifying network traffic to detect malicious behaviour; thus, the LSTM network can be adopted as a practical solution in real-world scenarios.
Cañedo and Skjellum [228] used ML to detect anomalies, specifically training ANN algorithms to detect whether the data sent from an edge to the smart object in an IoT system are valid or invalid. They generated the data from the edge to the device nodes and then inserted invalid and valid data to train the model; the experimental results showed that the ANN can effectively detect invalid data. However, diverse and enriched datasets that contain various data tampering attacks should be used to train and test the ANN to reconfirm whether it can maintain high accuracy in practical settings or other advanced learning algorithms are required. An investigation in this research direction is recommended to generate enriched datasets.
In another study [229], an intrusion detection system (IDS) based on a hybrid detection method (i.e. an unsupervised ML method combined with a specification-based method) was used for an IoT system. For this purpose, the author proposed a local intrusion detection method at the local node by using a specification-based intrusion detection approach; the method examined the behaviour of the host nodes and sent the analysis results to the global node, which used an ML-based intrusion detection method (i.e. the unsupervised optimum-path forest algorithm [230] for clustering the data from the local nodes on the basis of the MapReduce design [231]).
A generative (i.e. unsupervised) model using AEs was proposed in [199] to detect network-based malware anomalies in cybersystems. The AEs were trained to learn the latent representation of a diverse feature set extracted from the cybersystems; compared with SVM and KNN, the AEs exhibited improved detection performance [199].
Wi-Fi technology is an IoT-enabling technology, especially for smart homes [232], and is of practical importance to the expansion of IoT [233]. A previous study [234] aimed to detect impersonation attacks in a Wi-Fi environment by developing a method called weighted feature selection for extracting and selecting deep features, which were combined with the features generated by a stacked AE (SAE) algorithm. The combined features were then fed into a neural network trained to classify the input data into two classes (i.e. impersonation or normal) [234]. This combination of an unsupervised DL algorithm (i.e. SAE) and a supervised DL algorithm (i.e. ANN) showed high detection accuracy, confirming the potential application of deep algorithms in securing Wi-Fi networks from impersonation attacks. A similar study [235] used a combination of two unsupervised algorithms: an SAE for mining features and k-means clustering for categorising the input into two classes, benign and malicious.
As discussed in the RBM subsection, the discriminative RBM-based model in [202] addresses the inherent challenges of network anomaly detection, namely the difficulty of generating labelled training data from multi-part, irregular network traffic and the constant evolution of anomaly behaviour over time, by detecting network anomalies in a semi-supervised fashion even with incomplete training data. Its classification performance, however, degraded when the classifier was tested on a network dataset different from the one on which it was trained, and how such a classifier can be generalised to detect anomalies in different network environments should be further studied.
Saied, Overill and Radzik [236] used an ANN to detect known and unknown DDoS attacks in a real-time environment.
The proposed defence technique aimed to thwart fake packets and permit real packets to pass through. They assessed the ANN's performance in unknown DDoS detection when trained with old and updated datasets and reported that the more they trained the algorithm with the latest features of known DDoS attacks, the better the detection probabilities for known and unknown DDoS attacks became. The ANN algorithm learns from training samples and then detects zero-day attack features that are comparable to the features on which it was trained [236].
Chen, Zhang and Maharjan [204] developed a DL-based model for malicious attack detection to secure mobile edge computing. The approach used a DBN for automatic detection, and the model exhibited improved accuracy in malware detection compared with ML-based algorithms, confirming the effectiveness of the automatic feature learning characteristic of DL compared with traditional feature engineering methods.
Meidan et al. [237] implemented ML algorithms for precise IoT device identification by utilising network traffic features, which are then fed into a multi-stage classifier. The classifier categorises the devices that are connected to the network as IoT or non-IoT devices; the ML algorithms identify unauthorised links of IoT devices automatically and accordingly alleviate the disruptions that may occur due to threats.
In a previous study [238], the abnormal behaviour of IoT objects was profiled, and the dataset generated from profiling was used to train a classifier to detect abnormal behaviour. The author investigated how a partial variation of sensed data (assuming that the attacker can utilise such changes for malicious purposes) can influence the accuracy of the learning algorithm and used SVM and k-means clustering as experimental cases for examining the impact of such changes on the detection accuracy of both ML algorithms. The results showed that both algorithms (i.e. SVM and k-means) suffered from drops in detection accuracy. Zero-day attacks are mostly variations of existing attacks; thus, the accuracy of classifiers in detecting variations and changes in the dataset is a research topic for future investigation.
A system called ‘IoT SENTINEL’, which is based on the RF classification algorithm, was proposed in [239] to automatically recognise the types of devices connected to an IoT system and to execute an action restraining any vulnerable connection, thereby reducing the damage that may be caused by compromised devices.
A previous study [240] developed an IDS for IoT by combining fuzzy c-means clustering [241] and the feature selection method PCA [177]. The results of the study indicated that the proposed method can increase detection effectiveness.
In [242], the authors proposed a framework to recognise all potential attack paths and alleviate the effects of attacks on the IoT system; the proposed framework contains a graphical security model. The framework consists of five connected stages, starting with data processing, in which the information from the system and the security metrics is fed and processed. In the second stage, security model generation, a graph model is generated; this model contains all potential attack paths in the IoT system, where an attack path identifies the structure of the nodes that the intruder can compromise to gain access to the required node. In the third and fourth stages, the IoT network, including the attack paths, is visualised (i.e. security visualisation) and analysed (i.e. security analysis), respectively. Finally, the security model is updated on the basis of the analysis of the attack paths and patterns captured in the previous stages. However, this study used basic statistical analysis to obtain the security model; therefore, whether the proposed framework can be improved by integrating it with intelligent methods, such as ML or DL methods, should be investigated.
In [243], a solution was proposed to detect and restrain malware diffusion in an IoT network. The solution is based on fog computing, which can simultaneously maximise malware detection and minimise the possibility of privacy breach. The proposed malware detection system was constructed using an IDS, and deployment was accomplished at cloud and fog computing to avoid the restrictions on IDS deployment in smart objects [243]. The authors also presented a framework to show the possible application of malware dissemination restraint in IoT networks.
C. Application layer
Currently, most IoT services have application and user interfaces; for example, the Android platform is becoming a vital element in enabling the IoT system [49]. In the related security literature, a previous study [244] showed the effective performance of DL in accurately detecting Android malware: the authors constructed a DL model to learn features from Android apps, and the learning model was subsequently used to identify unspecified Android malware. The authors demonstrated the effectiveness of using DL in Android malware detection in terms of accuracy and time efficiency, indicating that DL can be adapted to real-world applications.
As noted in the CNN subsection, a past work [188] proposed an Android malware detection method that utilises a CNN: the significant features related to malware detection are learnt automatically from the raw data, eliminating the need for manual feature engineering, and the network learns suitable features and executes classification conjointly, thereby providing an end-to-end model [188].
Similarly, as discussed in the GAN subsection, the study in [206] proposed an architecture for securing the cyberspace of IoT systems that trains learning algorithms to classify the system behaviour as normal or abnormal; the preliminary GAN-based evaluation showed the effectiveness of the DL-based architecture in detecting abnormal system behaviour.
Cybersecurity remains a serious challenge, especially with the steadily increasing number of objects connecting to the cyberspace, such as IoT devices. Cyberattacks, including zero-day attacks, are incessantly evolving; consequently, the vulnerabilities and opportunities open to attackers increase with the rapid growth of IoT. Many of these attacks are minor variations of formerly identified cyberattacks [245]. Therefore, the recent improvements in effective learning algorithms are significant. Effective learning algorithms can be trained to adapt to attack variations in the cyberspace with high-level feature abstraction capability; thus, they can provide resilient solutions to variations of formerly identified cyberattacks and to new attacks [245]. A previous study [245] proposed a DL model to enhance cybersecurity and enable attack detection in IoT systems and verified the appropriateness of the DL model for securing the cyberspace of IoT systems. Similarly, [246] proposed a distributed DL model to deliver accurate protection against cyberattacks and threats in fog-to-things computing and used an SAE algorithm to construct the learning model. The authors confirmed that DL models are more suitable for such cyberattack protection than traditional methods in terms of scalability, accuracy and false alarm rate. Table 3 compares and summarises the studies on ML and DL for IoT security.
D. Enabling technology for ML/DL deployment for IoT security
On the one hand, realising ML/DL to construct intelligence-based security for IoT systems can be practically challenging because robust software and hardware are required to implement such complex algorithms. On the other hand, the recent advancements in the computational capability of tiny devices and in several ML/DL implementation platforms can make it possible to implement these methods on-board devices, such as smartphones [247], or in fog and edge computing platforms [247]. As shown in Figure 10, the technology tools that can essentially enable ML/DL deployment for IoT security can be generally listed as the large growth of IoT data, robust software frameworks that facilitate the development of security models based on ML/DL methods and sophisticated hardware on which to deploy the developed security model.
Large growth of IoT data: The continued growth of IoT systems produces large volumes of data, which contain useful information about system behaviour under different modes, that is, normal and attack modes. The current volume of data is considerably larger than in the past, and data are the main ingredient for successfully implementing ML/DL-based systems. Therefore, additional data about system behaviour, which can be used to enable ML/DL deployment for IoT security, are produced as IoT systems continue to grow. However, several challenges related to creating security data remain; these challenges are discussed in the challenges and future directions section.
Robust software frameworks: The recent development of ML/DL has resulted in several dedicated implementation frameworks and libraries that can empower ML/DL deployment for IoT security. The learning algorithms are continuously developing; however, building and deploying them successfully can be challenging without effective frameworks and dedicated libraries. Currently, several frameworks allow the building of models that offer an enhanced level of abstraction with minimal programming complications. These ML/DL frameworks and dedicated libraries support several programming languages and are built with GPU support to optimise the training process of DL algorithms [248]. Consequently, these libraries can enable ML/DL deployment for IoT security by providing an efficient and easy implementation of ML/DL. Therefore, researchers and scientists working on security requirements mostly focus on applying and optimising such algorithms rather than building them from scratch, which can be time-consuming and entail high costs. These libraries include TensorFlow [249], convolutional architecture for fast feature embedding (Caffe) [250], Theano [251], deeplearning4j [252], Torch [253], Neon and MXNet [254] (for additional DL libraries, see [252]).
Table 3. Summary of studies on ML and DL for IoT security and the threats they address: [224] routing attack detection; [215] jamming attacks; [128] false data injection attacks; [225] intrusion detection; [197] malicious behaviour detection; [228] data tampering; [216] spoofing attack detection; [212] jamming attacks; [199] malware detection; [233] impersonation attacks; [235] impersonation attacks; [245] cyberattacks; [246] cyberattacks; [202] network anomaly detection; [236] DDoS attack detection; [204] malicious attack detection; [217] authentication; [188] malware detection; [237] IoT device identification (authorisation); [238] data tampering and abnormal behaviour; [239] authorisation; [240] intrusion detection; [206] abnormal behaviour detection.
Effective deployment strategies: The ML/DL model for IoT security can be deployed on-board, on cloud computing or on edge computing. The search for the optimal deployment strategy is vital for implementing ML/DL-based models in real life to secure IoT devices and systems. The deployment of ML/DL for IoT system security must consider the limited resources of IoT devices (i.e. limited computational power and memory size), real-time threat detection and response and the ability to frequently update the security models so that newly emerging threats can be detected.
Given the limited resources of IoT devices, offloading the ML/DL execution can be a practical solution in terms of computational power, memory size and performance and a suitable means for regularly updating the ML/DL model. However, offloading may lead to high latency [255], which may not satisfy the real-time detection requirements of practical IoT systems. The second choice is to deploy the ML/DL security model on the IoT device itself, which removes the dependence on communication quality [255]. However, deploying ML/DL on the IoT device remains challenging: IoT devices have limited computational power and memory size, whereas applying ML/DL-based models requires substantial computational power, entails high power consumption and needs sizable memory to store the model. Moreover, current ML/DL frameworks are commonly built on third-party libraries, which makes migration to on-board deployment even more challenging [255].
On the one hand, the abovementioned solutions can still be optimised in the future to satisfy the requirements for deploying ML/DL for IoT system security. On the other hand, edge computing can bridge the gap between cloud deployment, with powerful resources but high latency, and on-board deployment, with no latency but limited resources. Deploying ML/DL for IoT system security using edge computing enables computation power beyond on-board deployment while being executed at the edge of the network; the processing is therefore performed near the data sources and has less latency than deployment on the cloud. Edge computing can process downstream data from cloud services to IoT devices and upstream data from IoT devices to cloud services (e.g. a smartphone that operates between body sensors and the cloud) [256]. By implementing an edge computing framework, the ML/DL security model for the IoT system can be placed near the network edge, operate as a guard to detect malicious behaviour, secure devices and reduce the severity of eavesdropping threats by sitting close to the IoT devices [257].
VI. ISSUES, CHALLENGES AND FUTURE DIRECTIONS
In this section, we present a list of issues, challenges and future directions for using ML and DL methods to mitigate security weaknesses in IoT systems, classified according to data, learning strategies, IoT environments, inherent ML and DL challenges, opportunities to integrate ML/DL with other technologies, computational complexity issues and the trade-offs between security and other requirements.
A. IoT data-related issues
1) Availability of security-related datasets
The general purpose of learning algorithms is to capture the patterns in the available partial training dataset and then construct a model to categorise new inputs on the basis of the learnt patterns. In this process, a question to investigate is the volume of training data required to train the learning algorithms sufficiently for these algorithms to be generalised to new inputs in the given domain [258]. In the context of applying ML and DL to IoT security, the major challenge encountered by ML and DL methods in general, and supervised ML and DL methods in particular, is how to extract or generate a realistic and high-quality training dataset that contains various possible attack types. A high-quality training dataset is an essential ingredient for training ML and DL algorithms accurately. The training datasets should be comprehensive and diverse. They should contain information that reflects nearly all of the strategies of real-world attacks because these training datasets are the basis for obtaining model knowledge, and this condition directly influences model accuracy. Given that IoT systems generate large volumes of streaming data in real time, maintaining data quality remains a challenge.
A vital future research direction is the use of crowd-sourcing methods for generating datasets related to IoT threats and attacks. Rich datasets that include nearly all attack patterns should be generated for training ML and DL algorithms. Furthermore, such datasets can be used to benchmark the accuracy of newly proposed algorithms against that of existing methods for attack detection. Although generating collaborative IoT threat datasets, which can be continuously updated with new attacks, is of great importance, it is technically challenging because of the wide diversity of IoT devices. Furthermore, a privacy issue prevails because datasets may contain sensitive or critical information that is not meant to be shared publicly, specifically for industrial and medical IoT devices.
2) Learning to secure IoT with low-quality data
Most of the proposed DL representations are generally designed for high-quality data [195]. However, IoT systems comprise heterogeneous connected devices and large-scale streaming, so high-noise and corrupted data may be gathered from such systems [198,259]. Learning to secure IoT systems therefore requires effective DL models that can handle and learn from low-quality data, particularly when obtaining high-quality training data is practically infeasible. Hence, multi-modal and effective DL models should be developed to secure IoT systems with large-scale streaming, heterogeneous and high-noise data.
3) Augmentation of IoT security data to improve learning algorithm performance
Intuitively, the richer the data that ML and DL algorithms have to learn from, the more accurate they can be [260]. Although obtaining a large dataset is relatively easy in certain domains, such as image and natural language processing, acquiring a large dataset for ML and DL is relatively difficult in the domain of data security in IoT systems. Therefore, finding alternative means to obtain substantial amounts of data in this domain is desirable. Data augmentation is used to expand limited data by generating new samples from existing ones. In the augmentation of IoT security data, the limited number of existing IoT security samples can be utilised to generate new samples.
The key challenge in data augmentation is producing new data samples that preserve the appropriate data distribution for each class, normally necessitating domain knowledge [192,261]. In view of this problem, suitable methods for the augmentation of IoT security data should be investigated to improve the classification accuracy of learning methods.
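As a rough illustration of the idea, the sketch below oversamples a scarce attack class by perturbing existing records with small Gaussian noise; the feature matrix, noise scale and sample counts are assumptions for illustration and would have to be validated against the true class distributions.

```python
# A minimal sketch of augmenting a scarce attack class by jittering existing
# samples with small Gaussian noise; all values here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical minority-class samples (rows = attack records, columns = features).
attack_samples = rng.random((50, 8))

def augment(samples, n_new, noise_scale=0.01):
    """Create n_new synthetic records by perturbing randomly chosen originals."""
    idx = rng.integers(0, len(samples), size=n_new)
    noise = rng.normal(0.0, noise_scale, size=(n_new, samples.shape[1]))
    return samples[idx] + noise

synthetic = augment(attack_samples, n_new=200)
augmented_set = np.vstack([attack_samples, synthetic])
print(augmented_set.shape)  # (250, 8)
```

More principled schemes, such as SMOTE-style interpolation between minority samples or generative (e.g. GAN-based) synthesis, follow the same pattern of deriving new records from existing ones, and the central difficulty remains the one noted above: the synthetic records must respect the class-conditional distribution of the real data.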
B. Learning Strategies for Effective IoT Security 1) Zero-day attacks on IoT
The main advantage of ML and DL methods over traditional security methods, such as threat signature-based methods, is their capability to detect zero-day attacks. Zero-day attacks, which are evolving threats, were previously unknown to detection systems. These attacks have varying potentials, such as metamorphic malware attacks that automatically reprogram themselves each time they are circulated or transmitted; consequently, detecting these malware attacks with traditional methods is difficult [262,263]. The number of emerging IoT security threats, such as zero-day attacks, is continuously growing at an alarming rate [24]. For example, the Mirai botnet and its derivations are becoming an alarming threat to the security of IoT systems [7,9]. The development of the recent derivation of the Mirai botnet, Satori, proves that other malicious IoT botnets are emerging to exploit known and zero-day vulnerabilities [264].
On the one hand, the recent derivations of Mirai suggest that IoT malware will continue to grow because Mirai's open source code allows creators of IoT malware to produce new variants that exploit known and zero-day vulnerabilities to attack IoT devices [7,9,24,264]. On the other hand, the ability to monitor and control IoT security intelligently provides an important solution to these new attacks, i.e. zero-day attacks and their variations. ML and DL algorithms are powerful analysis tools for learning normal or abnormal behaviour on the basis of the interactions among the IoT systems and devices within the IoT ecosystem. The input data from each element of an IoT system and its devices can be collected and examined to determine normal patterns of interaction and consequently identify malicious behaviour at an early stage. Moreover, in view of the capability of ML and DL methods to learn from existing samples to predict future unknown samples intelligently, these methods have the potential to predict new attacks, which, in many cases, are simple derivations and mutations of previous attacks. Therefore, IoT security systems need to advance from the simple facilitation of secure communication between devices to intelligent security enabled by DL and ML methods.
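A minimal sketch of this behaviour-profiling idea is given below: an unsupervised Isolation Forest is fitted only on traffic features observed during benign operation and then flags deviating observations, which is one way previously unseen (zero-day-like) behaviour could be surfaced without attack signatures. The synthetic features and the contamination rate are illustrative assumptions.

```python
# A minimal sketch of learning "normal" device behaviour and flagging deviations
# with an unsupervised Isolation Forest; the data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.RandomState(42)

# Features collected during benign operation (e.g. packet rate, payload size).
normal_traffic = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))

detector = IsolationForest(contamination=0.01, random_state=42)
detector.fit(normal_traffic)

# New observations: the last row is deliberately far from the learnt profile.
new_windows = np.vstack([rng.normal(0.0, 1.0, size=(3, 4)),
                         np.full((1, 4), 8.0)])
labels = detector.predict(new_windows)  # +1 = normal, -1 = anomalous
print(labels)
```

Because the model encodes a profile of normality rather than signatures of known attacks, a novel attack only needs to perturb the observed behaviour, not match a previously catalogued pattern, in order to be flagged.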
2) Lifelong learning of IoT threats
One of the main characteristics of the IoT environment is dynamism; several new things join and numerous objects leave the system, given the numerous and diverse IoT devices used to manage different applications and scenarios. Given IoT's nature, the normal structures and patterns of IoT systems may change considerably with time, and the threats and attacks targeting the IoT system may likewise vary persistently with time. Therefore, the distinction between normal and abnormal IoT system behaviour cannot always be pre-defined, and frequent updating of security models is required to handle and understand IoT modifications. In an actual IoT environment, the short-term learning of threats and attacks targeting IoT systems may be ineffective for long-term protection. Consequently, the concept of lifelong learning can hold realistic significance in long-term real-world applications. The concept of lifelong machine learning [106,265,266] is directed towards the construction of a model that can perform the retraining process repeatedly to learn newly emerging patterns related to each behaviour. The model should be able to adapt to and learn from new environments continuously [106,265,266]. Researchers have reported that the more they trained the algorithm with the latest features of known DDoS attacks, the more the detection probabilities for known and unknown DDoS attacks improved. The ANN algorithm learns from training samples and then detects zero-day attack features that are comparable to the features on which it was trained [236]. Therefore, frequently updating the training samples is important for developing effective real-world security models for IoT-related threats.
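The following minimal sketch, built on scikit-learn's partial_fit interface, indicates how a detector could be retrained incrementally as newly labelled traffic arrives, which is one simple realisation of the repeated-retraining idea; the synthetic batches and the simulated drift are assumptions used purely for illustration.

```python
# A minimal sketch of incremental model updating as new labelled IoT traffic
# arrives; the streaming batches are synthetic placeholders with mild drift.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
model = SGDClassifier()

classes = np.array([0, 1])  # 0 = benign, 1 = attack; declared on the first call

for batch in range(5):  # each batch mimics a new monitoring period
    X_batch = rng.normal(size=(200, 6)) + batch * 0.1  # slowly drifting behaviour
    y_batch = rng.randint(0, 2, size=200)
    model.partial_fit(X_batch, y_batch, classes=classes)

print(model.predict(rng.normal(size=(3, 6))))
```

The practical point is that the model is never frozen: every monitoring period contributes new labelled evidence, so the decision boundary tracks the evolving mixture of normal behaviour and attack variants instead of reflecting only the conditions at initial training time.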
3) Transfer learning
Transfer learning refers to the idea of transferring knowledge from a domain with sufficient training data to a domain with insufficient training data. The main purpose of transfer learning is to reduce the time and effort required for the new learning process. The main concern in transfer learning is identifying the part of the knowledge that is common between the domains and can therefore be usefully transferred, while avoiding the transfer of knowledge that is specific to a particular domain and holds no importance for other domains [267].
The concept of transfer learning can be useful for securing IoT systems, which comprise different elements, such as devices, WSNs and cloud computing. The security of these elements has already been extensively studied, and well-established training samples on different attacks have been generated. Consequently, if transfer learning is accomplished successfully across the elements of the IoT, then such learning may significantly improve the security performance of the entire IoT system with less effort and cost in constructing training samples.
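The sketch below outlines one common realisation of this idea under stated assumptions: the lower layers of a network assumed to be already trained on a data-rich IoT element (e.g. WSN intrusion data) are frozen and reused as a feature extractor, and only a small new head is trained on the data-poor target element. The layer sizes, architecture and data are placeholders, not a recipe from the surveyed works.

```python
# A minimal sketch of feature-extractor reuse across IoT security domains.
import numpy as np
import tensorflow as tf

# Source model, assumed to be already trained on the data-rich domain.
source = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Reuse all but the last layer as a frozen feature extractor.
feature_extractor = tf.keras.Sequential(source.layers[:-1])
feature_extractor.trainable = False

target_model = tf.keras.Sequential([
    feature_extractor,
    tf.keras.layers.Dense(1, activation="sigmoid"),  # new head for the target domain
])
target_model.compile(optimizer="adam", loss="binary_crossentropy")

# Small labelled set from the data-poor target domain (synthetic placeholder).
X_small, y_small = np.random.rand(64, 10), np.random.randint(0, 2, 64)
target_model.fit(X_small, y_small, epochs=3, verbose=0)
```

Only the final layer's parameters are updated, which is why a handful of labelled target samples can suffice; the transferable part of the knowledge lives in the frozen layers, exactly the distinction discussed above.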
C. ML and DL for IoT security in interdependent, interconnected and interactive environments
In this section, we present the opportunities for using ML and DL methods to mitigate internal security issues arising from the structures of IoT systems, which are interdependent, interconnected and interactive environments.
As explained previously, with the rapid increase in the number of IoT devices, the collaboration among devices is becoming increasingly autonomous, i.e. it requires reduced human involvement. IoT devices no longer simply interact with one another like devices within a network. Many current IoT devices are designed to achieve the vision of a smart city, in which many of the devices are controlled by other devices or depend on the operational condition of other devices or the surrounding environment.
The advantage of using ML and DL in securing IoT devices in such an environment is that these methods can be developed to go beyond simply understanding the operational behaviour of specific devices to understanding the operational behaviour of entire systems and their devices.
Moreover, IoT systems connect billions of devices; thus, not only the surface of the attack but also the magnitude of the attack should be considered in IoT systems. With these densely interconnected devices, an infected thing can result in a destructive attack that infects a considerable number of things at a large scale, even affecting a substantial part of a city.
For interconnected systems, the benefit of using ML and DL for securing IoT devices is that ML and DL methods can provide systems with the intelligence to detect abnormal behaviours of a thing or groups of things and thus respond automatically at an early stage. This strategy may mitigate the impact of the attack and support learning how to prevent future occurrences of similar attacks on the basis of a solid understanding of the current causes.
Along the same direction, ML and DL can be effective for securing IoT devices in an interactive environment. In the social IoT (SIoT), suitable instructions should be established for objects to choose their appropriate friends because these choices impact the service outputs built on top of the social network [104]. The advancement of SIoT raises critical security and privacy concerns regarding the disclosure of sensitive information related to the objects [105].
ML and DL methods can potentially contribute to securing the integration of social networking into IoT. However, this direction is still in its infancy and needs further investigation.
D. ML and DL Challenges 1) Possible misuse of ML and DL algorithms by attackers (breaking cryptographic implementations by ML and DL methods)
Recent advances in ML and DL algorithms have enabled them to be used in breaking cryptographic implementations. For example, two previous studies [129,130] used ML to break cryptographic systems using SVMs, which outperformed the template attack. Similarly, the authors in [189] investigated different DL algorithms for breaking cryptographic systems and reported that DL can break them; specifically, CNN and AE algorithms performed better than ML algorithms (SVM and RF) and the profiling-based template attack.
A previous study showed that RNNs can learn decryption. Specifically, an RNN with a 3000-unit LSTM can learn the Enigma machine decryption function by learning effective internal representations of these ciphers; the results suggested that DL algorithms, such as RNNs, can capture and learn the algorithmic representations of polyalphabetic ciphers for cryptanalysis [268].
2) Privacy of ML and DL
Recent studies [269][270][271] have shown that ML and DL algorithms can leak data. Privacy-preserving ML and DL algorithms are vulnerable to dominant attacks [269]. A study showed that federated, distributed or even decentralised DL methods are easily broken and unable to keep the training set private [269]. The authors developed an attack that manipulates the real-time nature of the learning process: the adversary is allowed to train a GAN that creates samples similar to those in the targeted training dataset, which is supposed to be private, and the samples produced by the GAN appear to originate from the same distribution as the training dataset [269]. Therefore, DL algorithms themselves are vulnerable to potential attacks when generating the training data. Consequently, attackers can build a DL system that learns how DL-based detection methods work and thus generate attacks that cannot be detected easily. This area of research is still in its infancy and needs to be investigated further to find appropriate solutions to such issues.
3) Security of ML and DL methods
Researchers have recently investigated various threats that can be launched against ML and DL algorithms. These algorithms are susceptible to many threats that either decrease the accuracy and performance of the classifiers or expose sensitive data used in the training process of the classifiers. Examples of the potential threats that can be utilised by attackers include poisoning, evasion, impersonation and inversion attacks [272]. Poisoning is a threat in which the attacker injects malicious samples with incorrect labels into the training dataset to modify the training data distribution, decrease the discrimination power of the classifier in distinguishing between the normal and abnormal behaviour of the system, and ultimately decrease classifier accuracy and performance. Such attacks can potentially be launched against ML algorithms that need to dynamically update their training sets and learning models to adapt to new attack features, such as ML algorithms for malware detection [272,273]. The second possible attack on ML and DL is the evasion attack. This attack is based on generating adversarial samples by modifying the attack features to be slightly different from the malicious samples used to train the model; consequently, the probability of the attack being detected by the classifier is decreased, and the attack avoids detection, thereby reducing the performance of the system remarkably [272]. The third possible attack is impersonation. In this attack, the attacker attempts to mimic the data samples to deceive the ML algorithms into incorrectly classifying the original samples with labels that differ from those of the impersonated ones [272,274,275]. The last possible attack is inversion, which exploits the application programming interfaces provided to users by current ML platforms to collect approximate information about the pre-trained ML models [271,276]. Subsequently, this extracted information is used to perform reverse engineering to obtain the sensitive data of users. This kind of attack violates the privacy of users by exposing the data embedded in the ML models, which are sensitive in certain cases (e.g. patients' medical data) [277,278].
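To illustrate the evasion mechanism on a toy scale, the sketch below perturbs a malicious sample against the gradient (here, simply the weight signs) of a hypothetical linear detector until it is scored as benign; the weights, sample and perturbation budget are assumptions, and real attacks must additionally respect feature-validity constraints that are not modelled here.

```python
# A minimal sketch of an evasion-style (FGSM-like) perturbation against a
# hypothetical linear detector whose score > 0 means "attack".
import numpy as np

w = np.array([1.5, -0.8, 2.0, 0.5])  # assumed trained weights
b = -0.3                             # assumed bias

def score(x):
    return float(np.dot(w, x) + b)

x_malicious = np.array([1.0, 0.2, 0.9, 0.4])
print("original score:", score(x_malicious))   # positive, i.e. detected

# Move each feature a small step opposite to the weight sign.
epsilon = 0.7
x_adv = x_malicious - epsilon * np.sign(w)
print("perturbed score:", score(x_adv))        # pushed below the detection threshold
```

The same principle underlies gradient-based adversarial examples against deep models: small, targeted changes to the input features degrade the classifier's decision without materially changing the attack's payload.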
4) Insights into DL architecture
ML and DL methods change the means through which a computer solves a problem, from instructing the computer what to do programmatically to training the computer what to do intelligently (learning from experience). However, despite the progress achieved by DL algorithms in many applications, a theory that can describe why and how DNNs perform depending on their architecture has not yet been established. Such a theory would be significant in determining the quantity of data or the number of layers required to achieve the desired performance. The theory could also facilitate the reduction of the resources (e.g. time, energy and memory) required to construct a DL model [180], thereby providing a sophisticated but lightweight DL model that is useful for resource-constrained systems, such as IoT devices. Establishing a lightweight DL model is a significant step towards the implementation of on-board security systems for IoT devices. Thus, this topic needs further exploration in future studies.
E. Integrating DL/ML with Other Technology for IoT Security 1) Implementation of ML and DL at the edge
Edge computing has become an essential technology in providing IoT services. Edge computing migrates service provision from the cloud to the network edge, which offers a potential solution in the IoT era [179,279]. Implementing DL and ML at the edge for IoT security can minimise delays, realise near-real-time detection systems, improve energy efficiency and enhance the scalability of lightweight IoT objects. Such implementation can offer an effective framework for data processing with a reduced network traffic load. However, edge computing is still in its infancy, and its implementation is accompanied by several challenges. Further research needs to be conducted to explore and develop effective strategies for implementing DL and ML at the edge to provide real-time IoT security.
2) Synergic integration of ML and DL with blockchain for IoT security
Blockchain is an emerging technology that uses cryptography to secure transactions within a network. A blockchain delivers a decentralised database (called 'digital ledger') of transactions, of which each node on the network is aware [280]. The network is a chain of devices (e.g. computers) that all need to endorse a transaction before it can be verified and recorded [280]. In other words, a blockchain is simply a data structure that allows the production and distribution of a 'tamper-proof digital ledger' of exchanges [281]. The decentralised architecture of a blockchain is antithetical to the security issues that are inherent in a centralised architecture. Using decentralised database architecture, transaction authentication depends on the approval of many parties in systems rather than of a single authority, as is common practice in centralised systems. Therefore, blockchain systems can render transactions relatively more secure and transparent than those in centralised systems. IoT systems are distributed by nature. Thus, the distributed digital ledger blockchain can play a significant role in securing IoT systems.
ML and DL are concerned with training machines to learn from real-world samples to act autonomously and intelligently; their goal is to make machines smart. The simplified definitions of both technologies (i.e. ML/DL and blockchain) reveal that a synergic relation can be obtained by combining them to achieve a fully functional IoT security system. Firstly, ML and DL may assist blockchain technology in realising smart decision making and improved evaluation, filtering and comprehension of data and devices within a network to facilitate the effective implementation of blockchain for enhanced trust and security services for IoT systems. Secondly, blockchain may assist ML and DL by providing a large volume of data because blockchain is a decentralised database that stresses the importance of data distribution among several nodes on a specific network. The availability of big data is a main factor in establishing an accurate ML- and DL-based model. Therefore, with the increase in the volume of data to be analysed, particularly security-related data, the accuracy of ML and DL methods can be considerably increased and generalised to develop a security model with enhanced reliability.
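The tamper-evidence property that underlies this synergy can be sketched in a few lines: each block stores the hash of its predecessor, so any modification of a recorded IoT security event breaks the chain. The block contents below are illustrative assumptions, and a real deployment would additionally require consensus and distribution of the ledger across nodes.

```python
# A minimal sketch of a hash-chained ledger of IoT security events.
import hashlib
import json
import time

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    # The hash is computed over the block body before the hash field is added.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()
    ).hexdigest()
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"device": "sensor-17", "event": "firmware update"},
                        chain[-1]["hash"]))
chain.append(make_block({"device": "camera-03", "event": "anomaly alert"},
                        chain[-1]["hash"]))

def verify(chain):
    """Recompute every hash and check each block points to its predecessor."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

print(verify(chain))                      # True
chain[1]["data"] = {"device": "sensor-17", "event": "tampered"}
print(verify(chain))                      # False: the ledger exposes the change
```

In the envisioned synergy, ML/DL-based detectors could write their alerts and model updates to such a ledger, while the ledger in turn supplies a trustworthy, distributed record of security-related data from which the models can continue to learn.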
F. Computational complexity
IoT devices are resource-constrained devices. The resources of IoT devices (things), such as memory, computation and energy, which are required for ML and DL deployment, are limited and create a crucial bottleneck in the adoption of DL and ML for real-time on-board implementation [282]. The current solutions of computation offloading and execution in the cloud suffer from high wireless energy overhead. Moreover, the availability of applications under such solutions depends on network conditions; if network connectivity is weak, then cloud offloading becomes unattainable, leading to the unavailability of the applications. Another recent solution that may advance the implementation of ML and DL for IoT security is the development of edge computing GPUs (mobile GPUs). However, mobile GPUs can still consume considerable mobile battery reserves [282].
On the one hand, enhancing GPU-based solutions and proposing an efficient offloading strategy are important for advancing the implementation of ML- and DL-based IoT security and for enhancing the performance of IoT DL applications in IoT systems with cloud and edge computing [179]. On the other hand, ML and DL frameworks that can efficiently reduce computational complexity should be developed. Developing real-time detection and protection systems is important for providing effective security mechanisms, particularly for large-scale IoT systems. Thus, reducing computational complexity holds practical importance in future research.
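One concrete, widely available way to reduce the footprint of a trained model before on-device or edge deployment is post-training quantization; the sketch below applies it through the TensorFlow Lite converter to a small placeholder classifier (the architecture and output file name are illustrative assumptions, not a prescribed design).

```python
# A minimal sketch of shrinking a trained Keras model with post-training
# quantization via the TensorFlow Lite converter.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enables weight quantization
tflite_bytes = converter.convert()

with open("detector_quantized.tflite", "wb") as f:
    f.write(tflite_bytes)
print("compressed model size (bytes):", len(tflite_bytes))
```

Quantization trades a small amount of accuracy for substantially lower memory and compute requirements, which is the kind of complexity reduction argued for above; the accuracy cost must, of course, be measured on the actual security workload.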
G. Security vs Trade-offs in IoT Applications
The existing security trade-offs, such as that between availability and safety, are another challenge to the achievement of a robust security scheme for IoT systems. Moreover, the importance of various security trade-offs differs from one IoT application to another. For example, an IoMT system should provide a security scheme, but it should also offer the flexibility of being accessible in emergency situations. When a patient with an implanted IoMT device, which monitors his or her health conditions, is suddenly in an emergency situation, easy access to the IoMT device is the first priority in saving his or her life. Therefore, creating a design that balances providing a robust security scheme to protect the implanted IoMT device and guaranteeing accessibility of such devices during emergency situations is necessary. Such a trade-off between security and safety poses a critical challenge. An appropriate balance between patient safety and device security is an important parameter to be considered in the design phase [47,62]. ML and DL methods mainly aim to provide intelligence and contextual awareness to devices; therefore, these methods can mitigate security trade-off issues better than traditional access control methods can.
Similarly, other applications of IoT have different security trade-offs in accordance with the diverse implemented environments. Given the required security level and security trade-offs in specified IoT applications, security design should satisfy different operation modes within the given applications. Future research may utilise the intelligence capability of ML and DL methods to design security schemes that can effectively satisfy various security trade-offs under different operation modes within a specified application.
VII. CONCLUSION
The requirements for securing IoT devices have become complex because several technologies, from physical devices and wireless transmission to mobile and cloud architectures, need to be secured and combined with other technologies. The advancement in ML and DL has allowed for the development of various powerful analytical methods that can be used to enhance IoT security.
In this survey, various IoT security threats and IoT attack surfaces are discussed. A comprehensive review of the potential uses of ML and DL methods in IoT security is provided, and these methods are compared at the end of each subsection in terms of their advantages, disadvantages and applications in IoT security. Afterward, the uses of ML and DL methods for securing the main IoT layers (i.e. the perception, network and application layers) are reviewed. Finally, an extensive list of issues, challenges and future directions related to the use of ML and DL in effectively securing IoT systems is presented and classified according to data; learning strategies; ML and DL for IoT security in the interdependent, interconnected and interactive environments of IoT systems; diverse security trade-offs in IoT applications; and the synergic integration of ML and DL with blockchain for IoT security.
This survey aims to provide a useful manual that can encourage researchers to advance the security of IoT systems from simply enabling secure communication among IoT components to developing intelligent end-to-end IoT security-based approaches.
Acknowledgements: This work is supported by NPRP grant #8-408-2-172 from the Qatar National Research Fund (a member of Qatar Foundation). The statements made herein are solely the responsibility of the authors.
List of Acronyms
Acronym: Description
6LoWPAN: Combination of IPv6 and Low-power Wireless Personal Area Networks
PCA: Principal Component Analysis
RBMs: Restricted Boltzmann Machines
ReLU: Rectified Linear Units
RNN: Recurrent Neural Network
RF: Random Forest
RFID: Radio Frequency Identification
WSN: Wireless Sensor Network
Wi-Fi: Wireless Fidelity
Figure 1: Illustration of the potential role of ML/DL in IoT security
Figure 3: IoT architecture
Figure 4: Potential threats in the IoT system
Figure 5: IoT attack surfaces
Figure 6: KNN working principle
CNNs and recurrent neural networks (RNNs) are examples of discriminative DL methods. Deep autoencoders (AEs), deep belief networks (DBNs), restricted Boltzmann machines (RBMs), generative adversarial networks (GANs) and ensembles of DL networks (EDLNs) are examples of hybrid DL methods.
Figure 7: Illustration of the NN working principle for IoT security
Figure 9: Illustration of the GAN working principle
Figure 10: Illustration of the technology tools that can essentially enable ML/DL deployment for IoT security
Table 1: Potential ML methods for securing IoT systems (columns: Method; Working principle; Advantages; Disadvantages; Potential application in IoT security)
DT: A DT-based method uses a decision tree to establish a prediction model that learns from training samples by representing them as branches and leaves. The pre-trained model is then used to predict the class of the new sample.
Table 2: Potential DL methods for securing IoT systems (columns: Method; Working principle; Advantages; Disadvantages; Potential application in IoT security)
Table 3: Comparison and summary of studies on ML and DL for IoT security (columns: Attack surfaces secured; Threats detected or security application)
REFERENCES
[1] A. V. Dastjerdi and R. Buyya, "Fog computing: Helping the Internet of Things realize its potential," Computer, vol. 49, no. 8, pp. 112-116, 2016.
[2] Z. Yan, P. Zhang, and A. V. Vasilakos, "A survey on trust management for Internet of Things," Journal of Network and Computer Applications, vol. 42, pp. 120-134, 2014.
[3] D. Evans, "The internet of things: How the next evolution of the internet is changing everything," CISCO white paper, vol. 1, no. 2011, pp. 1-11, 2011.
[4] S. Ray, Y. Jin, and A. Raychowdhury, "The changing computing paradigm with internet of things: A tutorial introduction," IEEE Design & Test, vol. 33, no. 2, pp. 76-96, 2016.
[5] M. Abomhara, "Cyber security and the internet of things: vulnerabilities, threats, intruders and attacks," Journal of Cyber Security and Mobility, vol. 4, no. 1, pp. 65-88, 2015.
[6] D. Serpanos, "The Cyber-Physical Systems Revolution," Computer, vol. 51, no. 3, pp. 70-73, 2018.
[7] E. Bertino and N. Islam, "Botnets and internet of things security," Computer, vol. 50, no. 2, pp. 76-79, 2017.
[8] S. Raza, L. Wallgren, and T. Voigt, "SVELTE: Real-time intrusion detection in the Internet of Things," Ad Hoc Networks, vol. 11, no. 8, pp. 2661-2674, 2013.
[9] C. Kolias, G. Kambourakis, A. Stavrou, and J. Voas, "DDoS in the IoT: Mirai and other botnets," Computer, vol. 50, no. 7, pp. 80-84, 2017.
[10] Y. Xin et al., "Machine Learning and Deep Learning Methods for Cybersecurity," IEEE Access, 2018.
[11] X.-W. Chen and X. Lin, "Big data deep learning: challenges and perspectives," IEEE Access, vol. 2, pp. 514-525, 2014.
[12] Y. LeCun, Y. Bengio, and G. Hinton, "Deep learning," Nature, vol. 521, no. 7553, p. 436, 2015.
[13] A. R. Sfar, E. Natalizio, Y. Challal, and Z. Chtourou, "A roadmap for security challenges in the Internet of Things," Digital Communications and Networks, 2017.
[14] S. Sicari, A. Rizzardi, L. A. Grieco, and A. Coen-Porisini, "Security, privacy and trust in Internet of Things: The road ahead," Computer Networks, vol. 76, pp. 146-164, 2015.
[15] F. A. Alaba, M. Othman, I. A. T. Hashem, and F. Alotaibi, "Internet of Things security: A survey," Journal of Network and Computer Applications, vol. 88, pp. 10-28, 2017.
[16] K. Zhao and L. Ge, "A survey on the internet of things security," in Computational Intelligence and Security (CIS), 2013 9th International Conference on, 2013, pp. 663-667: IEEE.
[17] J. S. Kumar and D. R. Patel, "A survey on internet of things: Security and privacy issues," International Journal of Computer Applications, vol. 90, no. 11, 2014.
[18] H. Suo, J. Wan, C. Zou, and J. Liu, "Security in the internet of things: a review," in Computer Science and Electronics Engineering (ICCSEE), 2012 International Conference on, 2012, vol. 3, pp. 648-651: IEEE.
[19] D. E. Kouicem, A. Bouabdallah, and H. Lakhlef, "Internet of Things Security: a top-down survey," Computer Networks, 2018.
[20] J. Granjal, E. Monteiro, and J. S. Silva, "Security for the internet of things: a survey of existing protocols and open research issues," IEEE Communications Surveys & Tutorials, vol. 17, no. 3, pp. 1294-1312, 2015.
[21] B. B. Zarpelão, R. S. Miani, C. T. Kawakani, and S. C. de Alvarenga, "A survey of intrusion detection in Internet of Things," Journal of Network and Computer Applications, vol. 84, pp. 25-37, 2017.
[22] R. H. Weber, "Internet of Things-New security and privacy challenges," Computer Law & Security Review, vol. 26, no. 1, pp. 23-30, 2010.
[23] R. Roman, J. Zhou, and J. Lopez, "On the features and challenges of security and privacy in distributed internet of things," Computer Networks, vol. 57, no. 10, pp. 2266-2279, 2013.
[24] I. Yaqoob et al., "The rise of ransomware and emerging security challenges in the Internet of Things," Computer Networks, vol. 129, pp. 444-458, 2017.
[25] L. Xiao, X. Wan, X. Lu, Y. Zhang, and D. Wu, "IoT Security Techniques Based on Machine Learning," arXiv preprint arXiv:1801.06275, 2018.
[26] A. L. Buczak and E. Guven, "A survey of data mining and machine learning methods for cyber security intrusion detection," IEEE Communications Surveys & Tutorials, vol. 18, no. 2, pp. 1153-1176, 2016.
[27] P. Mishra, V. Varadharajan, U. Tupakula, and E. S. Pilli, "A Detailed Investigation and Analysis of using Machine Learning Techniques for Intrusion Detection," IEEE Communications Surveys & Tutorials, 2018.
[28] A. Al-Fuqaha, M. Guizani, M. Mohammadi, M. Aledhari, and M. Ayyash, "Internet of things: A survey on enabling technologies, protocols, and applications," IEEE Communications Surveys & Tutorials, vol. 17, no. 4, pp. 2347-2376, 2015.
[29] A. Whitmore, A. Agarwal, and L. Da Xu, "The Internet of Things-A survey of topics and trends," Information Systems Frontiers, vol. 17, no. 2, pp. 261-274, 2015.
[30] Z. Yang, Y. Yue, Y. Yang, Y. Peng, X. Wang, and W. Liu, "Study and application on the architecture and key technologies for IOT," in Multimedia Technology (ICMT), 2011 International Conference on, 2011, pp. 747-751: IEEE.
[31] M. Wu, T.-J. Lu, F.-Y. Ling, J. Sun, and H.-Y. Du, "Research on the architecture of Internet of things," in Advanced Computer Theory and Engineering (ICACTE), 2010 3rd International Conference on, 2010, vol. 5, pp. V5-484-V5-487: IEEE.
[32] C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos, "Context aware computing for the internet of things: A survey," IEEE Communications Surveys & Tutorials, vol. 16, no. 1, pp. 414-454.
[33] P. Sethi and S. R. Sarangi, "Internet of things: architectures, protocols, and applications," Journal of Electrical and Computer Engineering, vol. 2017, 2017.
[34] D. Zeng, S. Guo, and Z. Cheng, "The web of things: A survey," JCM, vol. 6, no. 6, pp. 424-438, 2011.
[35] M. A. Razzaque, M. Milojevic-Jevric, A. Palade, and S. Clarke, "Middleware for internet of things: a survey," IEEE Internet of Things Journal, vol. 3, no. 1, pp. 70-95, 2016.
[36] S. Neely, S. Dobson, and P. Nixon, "Adaptive middleware for autonomic systems," in Annales des télécommunications, 2006, vol. 61, no. 9-10, pp. 1099-1118: Springer.
[37] S. Bandyopadhyay, M. Sengupta, S. Maiti, and S. Dutta, "Role of middleware for internet of things: A study."
[38] C. Perera, A. Zaslavsky, P. Christen, and D. Georgakopoulos, "Context aware computing for the internet of things: A survey," IEEE Communications Surveys & Tutorials, vol. 16, no. 1, pp. 414-454, 2014.
[39] C.-W. Tsai, C.-F. Lai, M.-C. Chiang, and L. T. Yang, "Data mining for Internet of Things: A survey," IEEE Communications Surveys and Tutorials, vol. 16, no. 1, pp. 77-97, 2014.
[40] E. Ahmed et al., "The role of big data analytics in Internet of Things," Computer Networks, vol. 129, pp. 459-471, 2017.
[41] M. K. Saggi and S. Jain, "A survey towards an integration of big data analytics to big insights for value-creation," Information Processing & Management, 2018.
[42] D. Gil, A. Ferrández, H. Mora-Mora, and J. Peral, "Internet of things: A review of surveys based on context aware intelligent services," Sensors, vol. 16, no. 7, p. 1069, 2016.
[43] F. Alam, R. Mehmood, I. Katib, N. N. Albogami, and A. Albeshri, "Data fusion and IoT for smart ubiquitous environments: A survey," IEEE Access, vol. 5, pp. 9533-9554, 2017.
[44] O. B. Sezer, E. Dogdu, and A. M. Ozbayoglu, "Context-Aware Computing, Learning, and Big Data in Internet of Things: A Survey," IEEE Internet of Things Journal, vol. 5, no. 1, pp. 1-27, 2018.
[45] S. Amendola, R. Lodato, S. Manzari, C. Occhiuzzi, and G. Marrocco, "RFID technology for IoT-based personal healthcare in smart spaces," IEEE Internet of Things Journal, vol. 1, no. 2, pp. 144-152, 2014.
[46] "Internet of Medical Things, Forecast to 2021," [Online]: https://store.frost.com/internet-of-medical-things-forecast-to-2021.html, 06-Jun-2017.
[47] C. Camara, P. Peris-Lopez, and J. E. Tapiador, "Security and privacy issues in implantable medical devices: A comprehensive survey," Journal of Biomedical Informatics, vol. 55, pp. 272-289, 2015.
[48] S. R. Islam, D. Kwak, M. H. Kabir, M. Hossain, and K.-S. Kwak, "The internet of things for health care: a comprehensive survey," IEEE Access, vol. 3, pp. 678-708, 2015.
[49] W.-T. Sung and Y.-C. Chiang, "Improved particle swarm optimization algorithm for android medical care IOT using modified parameters," Journal of Medical Systems, vol. 36, no. 6, pp. 3755-3763, 2012.
[50] X. M. Zhang and N. Zhang, "An open, secure and flexible platform based on internet of things and cloud computing for ambient aiding living and telemedicine," in Computer and Management (CAMAN), 2011 International Conference on, 2011, pp. 1-4: IEEE.
[51] G. Dimitrakopoulos, "Intelligent transportation systems based on internet-connected vehicles: Fundamental research areas and challenges," in ITS Telecommunications (ITST), 2011 11th International Conference on, 2011, pp. 145-151: IEEE.
[52] T. Baranwal and P. K. Pateriya, "Development of IoT based smart security and monitoring devices for agriculture," in Cloud System and Big Data Engineering (Confluence), 2016 6th International Conference, 2016, pp. 597-602: IEEE.
[53] Y. Yan, Y. Qian, H. Sharif, and D. Tipper, "A survey on smart grid communication infrastructures: Motivations, requirements and challenges," IEEE Communications Surveys & Tutorials, vol. 15, no. 1, pp. 5-20, 2013.
[54] S. Bera, S. Misra, and J. J. Rodrigues, "Cloud computing applications for smart grid: A survey," IEEE Transactions on Parallel and Distributed Systems, vol. 26, no. 5, pp. 1477-1494, 2015.
[55] M. Marjani et al., "Big IoT data analytics: Architecture, opportunities, and open research challenges," IEEE Access, vol. 5, pp. 5247-5261, 2017.
[56] Q. Ou, Y. Zhen, X. Li, Y. Zhang, and L. Zeng, "Application of internet of things in smart grid power transmission," in Mobile, Ubiquitous, and Intelligent Computing (MUSIC), 2012 Third FTRA International Conference on, 2012, pp. 96-100: IEEE.
[57] D. J. Cook, A. S. Crandall, B. L. Thomas, and N. C. Krishnan, "CASAS: A smart home in a box," Computer, vol. 46, no. 7, pp. 62-69, 2013.
[58] N. Komninos, E. Philippou, and A. Pitsillides, "Survey in smart grid and smart home security: Issues, challenges and countermeasures," IEEE Communications Surveys & Tutorials, vol. 16, no. 4, pp. 1933-1954, 2014.
[59] M. Nawir, A. Amir, N. Yaakob, and O. B. Lynn, "Internet of Things (IoT): Taxonomy of security attacks," in Electronic Design (ICED), 2016 3rd International Conference on, 2016, pp. 321-326: IEEE.
[60] A. Banerjee, K. K. Venkatasubramanian, T. Mukherjee, and S. K. S. Gupta, "Ensuring safety, security, and sustainability of mission-critical cyber-physical systems," Proceedings of the IEEE, vol. 100, no. 1, pp. 283-299, 2012.
[61] K. Wan and V. Alagar, "Context-aware security solutions for cyber-physical systems," Mobile Networks and Applications, vol. 19, no. 2, pp. 212-226, 2014.
[62] R. AlTawy and A. M. Youssef, "Security tradeoffs in cyber physical systems: A case study survey on implantable medical devices," IEEE Access, vol. 4, pp. 959-979, 2016.
[63] S. Babar, P. Mahalle, A. Stango, N. Prasad, and R. Prasad, "Proposed security model and threat taxonomy for the Internet of Things (IoT)," in International Conference on Network Security and Applications, 2010, pp. 420-429: Springer.
[64] J. Lopez, R. Roman, and C. Alcaraz, "Analysis of security threats, requirements, technologies and standards in wireless sensor networks," in Foundations of Security Analysis and Design V: Springer, 2009, pp. 289-338.
[65] M. R. Rieback, B. Crispo, and A. S. Tanenbaum, "Is your cat infected with a computer virus?," in Pervasive Computing and Communications, 2006. PerCom 2006. Fourth Annual IEEE International Conference on, 2006, pp. 10 pp.-179: IEEE.
[66] B. Schneier, Secrets and Lies: Digital Security in a Networked World. John Wiley & Sons, 2011.
[67] B. Jung, I. Han, and S. Lee, "Security threats to Internet: a Korean multi-industry investigation," Information & Management, vol. 38, no. 8, pp. 487-498, 2001.
[68] T. Bose, S. Bandyopadhyay, A. Ukil, A. Bhattacharyya, and A. Pal, "Why not keep your personal data secure yet private in IoT?: Our lightweight approach," in Intelligent Sensors, Sensor Networks and Information Processing (ISSNIP), 2015 IEEE Tenth International Conference on, 2015, pp. 1-6: IEEE.
[69] X. Yao, Z. Chen, and Y. Tian, "A lightweight attribute-based encryption scheme for the Internet of Things," Future Generation Computer Systems, vol. 49, pp. 104-112, 2015.
[70] M. R. Abdmeziem and D. Tandjaoui, "An end-to-end secure key management protocol for e-health applications," Computers & Electrical Engineering, vol. 44, pp. 184-197, 2015.
[71] S. R. Moosavi et al., "SEA: a secure and efficient authentication and authorization architecture for IoT-based healthcare using smart gateways," Procedia Computer Science, vol. 52, pp. 452-459, 2015.
[72] S. F. Wamba, A. Anand, and L. Carter, "A literature review of RFID-enabled healthcare applications and issues," International Journal of Information Management, vol. 33, no. 5, pp. 875-891, 2013.
[73] K. Malasri and L. Wang, "Securing wireless implantable devices for healthcare: Ideas and challenges," IEEE Communications Magazine, vol. 47, no. 7, 2009.
[74] J. Zhou, Z. Cao, X. Dong, and A. V. Vasilakos, "Security and privacy for cloud-based IoT: Challenges," IEEE Communications Magazine, vol. 55, no. 1, pp. 26-33, 2017.
[75] C. Bekara, "Security issues and challenges for the IoT-based smart grid," Procedia Computer Science, vol. 34, pp. 532-537, 2014.
[76] L. Atzori, A. Iera, and G. Morabito, "The internet of things: A survey," Computer Networks, vol. 54, no. 15, pp. 2787-2805, 2010.
[77] J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, "Internet of Things (IoT): A vision, architectural elements, and future directions," Future Generation Computer Systems, vol. 29, no. 7, pp. 1645-1660, 2013.
[78] Q. Jing, A. V. Vasilakos, J. Wan, J. Lu, and D. Qiu, "Security of the Internet of Things: perspectives and challenges," Wireless Networks, vol. 20, no. 8, pp. 2481-2501, 2014.
[79] C. Karlof, N. Sastry, and D. Wagner, "TinySec: a link layer security architecture for wireless sensor networks," in Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, 2004, pp. 162-175: ACM.
[80] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, vol. 40, no. 8, pp. 102-114, 2002.
[81] Z. Cao, J. Hu, Z. Chen, M. Xu, and X. Zhou, "Feedback: Towards Dynamic Behavior and Secure Routing for Wireless Sensor Networks," in Advanced Information Networking and Applications, 2006. AINA 2006. 20th International Conference on, 2006, vol. 2, pp. 160-164: IEEE.
[82] C. Modi, D. Patel, B. Borisaniya, H. Patel, A. Patel, and M. Rajarajan, "A survey of intrusion detection techniques in cloud," Journal of Network and Computer Applications, vol. 36, no. 1, pp. 42-57, 2013.
[83] Y. Liu, C. Cheng, T. Gu, T. Jiang, and X. Li, "A lightweight authenticated communication scheme for smart grid," IEEE Sensors Journal, vol. 16, no. 3, pp. 836-842, 2016.
[84] Ş. Bahtiyar and M. U. Çağlayan, "Extracting trust information from security system of a service," Journal of Network and Computer Applications, vol. 35, no. 1, pp. 480-490, 2012.
[85] A. Akhunzada et al., "Secure and dependable software defined networks," Journal of Network and Computer Applications, vol. 61, pp. 199-221, 2016.
[86] M. Díaz, C. Martín, and B. Rubio, "State-of-the-art, challenges, and open issues in the integration of Internet of things and cloud computing," Journal of Network and Computer Applications, vol. 67, pp. 99-117, 2016.
[87] M. Armbrust et al., "A view of cloud computing," Communications of the ACM, vol. 53, no. 4, pp. 50-58, 2010.
[88] M. Armbrust et al., "Above the clouds: A Berkeley view of cloud computing," Technical Report UCB/EECS-2009-28, EECS Department, University of California, Berkeley, 2009.
[89] C. Stergiou, K. E. Psannis, B.-G. Kim, and B. Gupta, "Secure integration of IoT and cloud computing," Future Generation Computer Systems, vol. 78, pp. 964-975, 2018.
[90] K. Lee, D. Murray, D. Hughes, and W. Joosen, "Extending sensor networks into the cloud using Amazon web services," in Networked Embedded Systems for Enterprise Applications (NESEA), 2010 IEEE International Conference on, 2010, pp. 1-7: IEEE.
[91] A. Botta, W. de Donato, V. Persico, and A. Pescapé, "Integration of cloud computing and internet of things: a survey," Future Generation Computer Systems, vol. 56, pp. 684-700, 2016.
[92] S. Subashini and V. Kavitha, "A survey on security issues in service delivery models of cloud computing," Journal of Network and Computer Applications, vol. 34, no. 1, pp. 1-11, 2011.
[93] T. Bhattasali, R. Chaki, and N. Chaki, "Secure and trusted cloud of things," in India Conference (INDICON), 2013 Annual IEEE, 2013, pp. 1-6: IEEE.
[94] E. Shi, Y. Niu, M. Jakobsson, and R. Chow, "Implicit authentication through learning user behavior," in International Conference on Information Security, 2010, pp. 99-113: Springer.
[95] S. Fremdt, R. Beck, and S. Weber, "Does cloud computing matter? An analysis of the cloud model software-as-a-service and its impact on operational agility," in System Sciences (HICSS), 2013 46th Hawaii International Conference on, 2013, pp. 1025-1034: IEEE.
[96] A. Ukil, S. Bandyopadhyay, and A. Pal, "IoT-privacy: To be private or not to be private," in Computer Communications Workshops (INFOCOM WKSHPS), 2014 IEEE Conference on, 2014, pp. 123-124: IEEE.
[97] s. o. technologies, "Mobile Apps leveraging the Internet of Things (IoT)," [Online]: https://www.spaceotechnologies.com/mobile-apps-leveraging-the-internet-of-things/, last accessed 4 April 2018.
[98] P. Faruki et al., "Android security: a survey of issues, malware penetration, and defenses," IEEE Communications Surveys & Tutorials, vol. 17, no. 2, pp. 998-1022, 2015.
[99] S. Das, J. Divakarla, and P. Sharma, "Detection and prevention of installation of malicious mobile applications," ed: Google Patents, 2015.
[100] J. Huang, X. Zhang, L. Tan, P. Wang, and B. Liang, "Asdroid: Detecting stealthy behaviors in android applications by user interface and program behavior contradiction," in Proceedings of the 36th International Conference on Software Engineering, 2014, pp. 1036-1046: ACM.
[101] S. R. Steinhubl, E. D. Muse, and E. J. Topol, "The emerging field of mobile health," Science Translational Medicine, vol. 7, no. 283, pp. 283rv3-283rv3, 2015.
[102] W. Zhou, Y. Zhang, and P. Liu, "The Effect of IoT New Features on Security and Privacy: New Threats, Existing Solutions, and Challenges Yet to Be Solved," arXiv preprint arXiv:1802.03110, 2018.
[103] E. Ronen, A. Shamir, A.-O. Weingarten, and C. O'Flynn, "IoT goes nuclear: Creating a ZigBee chain reaction," in Security and Privacy (SP), 2017 IEEE Symposium on, 2017, pp. 195-212: IEEE.
Friendship selection in the social internet of things: challenges and possible strategies. M Nitti, L Atzori, I P Cvijikj, IEEE Internet of things journal. 23M. Nitti, L. Atzori, and I. P. Cvijikj, "Friendship selection in the social internet of things: challenges and possible strategies," IEEE Internet of things journal, vol. 2, no. 3, pp. 240-247, 2015.
From" smart objects" to" social objects": The next evolutionary step of the internet of things. L Atzori, A Iera, G Morabito, IEEE Communications Magazine. 521L. Atzori, A. Iera, and G. Morabito, "From" smart objects" to" social objects": The next evolutionary step of the internet of things," IEEE Communications Magazine, vol. 52, no. 1, pp. 97-105, 2014.
Machine learning: Trends, perspectives, and prospects. M I Jordan, T M Mitchell, Science. 3496245M. I. Jordan and T. M. Mitchell, "Machine learning: Trends, perspectives, and prospects," Science, vol. 349, no. 6245, pp. 255-260, 2015.
The elements of statistical learning: data mining, inference and prediction. J Franklin, The Mathematical Intelligencer. 272J. Franklin, "The elements of statistical learning: data mining, inference and prediction," The Mathematical Intelligencer, vol. 27, no. 2, pp. 83-85, 2005.
Deep learning in neural networks: An overview. J Schmidhuber, Neural networks. 61J. Schmidhuber, "Deep learning in neural networks: An overview," Neural networks, vol. 61, pp. 85-117, 2015.
Learning deep architectures for AI. Y Bengio, Foundations and trends® in Machine Learning. 2Y. Bengio, "Learning deep architectures for AI," Foundations and trends® in Machine Learning, vol. 2, no. 1, pp. 1-127, 2009.
Reducing the dimensionality of data with neural networks. G E Hinton, R R Salakhutdinov, science. 3135786G. E. Hinton and R. R. Salakhutdinov, "Reducing the dimensionality of data with neural networks," science, vol. 313, no. 5786, pp. 504-507, 2006.
Human-level control through deep reinforcement learning. V Mnih, Nature. 5187540529V. Mnih et al., "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, p. 529, 2015.
Reinforcement learning: An introduction. R S Sutton, A G Barto, MIT press CambridgeR. S. Sutton and A. G. Barto, Reinforcement learning: An introduction (no. 1). MIT press Cambridge, 1998.
Machine learning in wireless sensor networks: Algorithms, strategies, and applications. M A Alsheikh, S Lin, D Niyato, H.-P Tan, IEEE Communications Surveys & Tutorials. 164M. A. Alsheikh, S. Lin, D. Niyato, and H.-P. Tan, "Machine learning in wireless sensor networks: Algorithms, strategies, and applications," IEEE Communications Surveys & Tutorials, vol. 16, no. 4, pp. 1996-2018, 2014.
Supervised machine learning: A review of classification techniques. S B Kotsiantis, I Zaharakis, P Pintelas, Emerging artificial intelligence applications in computer engineering. 160S. B. Kotsiantis, I. Zaharakis, and P. Pintelas, "Supervised machine learning: A review of classification techniques," Emerging artificial intelligence applications in computer engineering, vol. 160, pp. 3-24, 2007.
Induction of decision trees. J R Quinlan, Machine learning. 11J. R. Quinlan, "Induction of decision trees," Machine learning, vol. 1, no. 1, pp. 81-106, 1986.
Building decision tree classifier on private data. W Du, Z Zhan, Proceedings of the IEEE international conference on Privacy, security and data mining. the IEEE international conference on Privacy, security and data miningAustralian Computer Society, Inc14W. Du and Z. Zhan, "Building decision tree classifier on private data," in Proceedings of the IEEE international conference on Privacy, security and data mining-Volume 14, 2002, pp. 1-8: Australian Computer Society, Inc.
Decision trees: a recent overview. S B Kotsiantis, Artificial Intelligence Review. 394S. B. Kotsiantis, "Decision trees: a recent overview," Artificial Intelligence Review, vol. 39, no. 4, pp. 261-283, 2013.
Reducing false positives in intrusion detection systems using data-mining techniques utilizing support vector machines, decision trees, and naive Bayes for off-line analysis. K Goeschel, SoutheastCon. IEEEK. Goeschel, "Reducing false positives in intrusion detection systems using data-mining techniques utilizing support vector machines, decision trees, and naive Bayes for off-line analysis," in SoutheastCon, 2016, 2016, pp. 1-6: IEEE.
A novel hybrid intrusion detection method integrating anomaly detection with misuse detection. G Kim, S Lee, S Kim, Expert Systems with Applications. 414G. Kim, S. Lee, and S. Kim, "A novel hybrid intrusion detection method integrating anomaly detection with misuse detection," Expert Systems with Applications, vol. 41, no. 4, pp. 1690-1700, 2014.
Secure the internet of things with challenge response authentication in fog computing. S Alharbi, P Rodriguez, R Maharaja, P Iyer, N Subaschandrabose, Z Ye, Performance Computing and Communications Conference (IPCCC). IEEES. Alharbi, P. Rodriguez, R. Maharaja, P. Iyer, N. Subaschandrabose, and Z. Ye, "Secure the internet of things with challenge response authentication in fog computing," in Performance Computing and Communications Conference (IPCCC), 2017 IEEE 36th International, 2017, pp. 1-2: IEEE.
Support vector machine active learning with applications to text classification. S Tong, D Koller, Journal of machine learning research. 2S. Tong and D. Koller, "Support vector machine active learning with applications to text classification," Journal of machine learning research, vol. 2, no. Nov, pp. 45-66, 2001.
The nature of statistical learning theory. V Vapnik, Springer Science & Business MediaV. Vapnik, The nature of statistical learning theory. Springer Science & Business Media, 2013.
A survey of data mining and machine learning methods for cyber security intrusion detection. A L Buczak, E Guven, IEEE Communications Surveys & Tutorials. 182A. L. Buczak and E. Guven, "A survey of data mining and machine learning methods for cyber security intrusion detection," IEEE Communications Surveys & Tutorials, vol. 18, no. 2, pp. 1153-1176, 2015.
Robust Support Vector Machines for Anomaly Detection in Computer Security. W Hu, Y Liao, V R Vemuri, ICMLA. W. Hu, Y. Liao, and V. R. Vemuri, "Robust Support Vector Machines for Anomaly Detection in Computer Security," in ICMLA, 2003, pp. 168-174.
A Novel Kernel SVM Algorithm with Game Theory for Network Intrusion Detection. Y Liu, D Pi, KSII Transactions on Internet & Information Systems. 118Y. Liu and D. Pi, "A Novel Kernel SVM Algorithm with Game Theory for Network Intrusion Detection," KSII Transactions on Internet & Information Systems, vol. 11, no. 8, 2017.
Machine learning approach for ip-flow record anomaly detection. C Wagner, J François, T Engel, International Conference on Research in Networking. SpringerC. Wagner, J. François, and T. Engel, "Machine learning approach for ip-flow record anomaly detection," in International Conference on Research in Networking, 2011, pp. 28-39: Springer.
Linear SVM-based android malware detection for reliable IoT services. H.-S Ham, H.-H Kim, M.-S Kim, M.-J Choi, Journal of Applied Mathematics. 2014H.-S. Ham, H.-H. Kim, M.-S. Kim, and M.-J. Choi, "Linear SVM-based android malware detection for reliable IoT services," Journal of Applied Mathematics, vol. 2014, 2014.
. M Ozay, I Esnaola, F T Y Vural, S R Kulkarni, H , M. Ozay, I. Esnaola, F. T. Y. Vural, S. R. Kulkarni, and H.
Machine learning methods for attack detection in the smart grid. V Poor, IEEE Transactions on Neural Networks and Learning Systems. 278V. Poor, "Machine learning methods for attack detection in the smart grid," IEEE Transactions on Neural Networks and Learning Systems, vol. 27, no. 8, pp. 1773-1786, 2016.
A machine learning approach against a masked AES. L Lerman, G Bontempi, O Markowitch, Journal of Cryptographic Engineering. 52L. Lerman, G. Bontempi, and O. Markowitch, "A machine learning approach against a masked AES," Journal of Cryptographic Engineering, vol. 5, no. 2, pp. 123-139, 2015.
Intelligent machine homicide. A Heuser, M Zohner, International Workshop on Constructive Side-Channel Analysis and Secure Design. SpringerA. Heuser and M. Zohner, "Intelligent machine homicide," in International Workshop on Constructive Side-Channel Analysis and Secure Design, 2012, pp. 249-264: Springer.
A multidimensional unfolding method based on Bayes' theorem. G , Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment. 3622-3G. D'Agostini, "A multidimensional unfolding method based on Bayes' theorem," Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, vol. 362, no. 2-3, pp. 487-498, 1995.
Network intrusion detection using naive bayes. M Panda, M R Patra, International journal of computer science and network security. 712M. Panda and M. R. Patra, "Network intrusion detection using naive bayes," International journal of computer science and network security, vol. 7, no. 12, pp. 258-263, 2007.
Intrusion detection using naive Bayes classifier with feature reduction. S Mukherjee, N Sharma, Procedia Technology. 4S. Mukherjee and N. Sharma, "Intrusion detection using naive Bayes classifier with feature reduction," Procedia Technology, vol. 4, pp. 119-128, 2012.
Survey on anomaly detection using data mining techniques. S Agrawal, J , Procedia Computer Science. 60S. Agrawal and J. Agrawal, "Survey on anomaly detection using data mining techniques," Procedia Computer Science, vol. 60, pp. 708-713, 2015.
OCPAD: One class Naive Bayes classifier for payload based anomaly detection. M Swarnkar, N Hubballi, Expert Systems with Applications. 64M. Swarnkar and N. Hubballi, "OCPAD: One class Naive Bayes classifier for payload based anomaly detection," Expert Systems with Applications, vol. 64, pp. 330-339, 2016.
Bayesian inference in statistical analysis. G E Box, G C Tiao, John Wiley & SonsG. E. Box and G. C. Tiao, Bayesian inference in statistical analysis. John Wiley & Sons, 2011.
On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes. A Y Ng, M I Jordan, Advances in neural information processing systems. A. Y. Ng and M. I. Jordan, "On discriminative vs. generative classifiers: A comparison of logistic regression and naive bayes," in Advances in neural information processing systems, 2002, pp. 841-848.
A simple KNN algorithm for text categorization. P Soucy, G W Mineau, Proceedings IEEE International Conference on. IEEE International Conference onIEEEData MiningP. Soucy and G. W. Mineau, "A simple KNN algorithm for text categorization," in Data Mining, 2001. ICDM 2001, Proceedings IEEE International Conference on, 2001, pp. 647-648: IEEE.
Efficient kNN classification algorithm for big data. Z Deng, X Zhu, D Cheng, M Zong, S Zhang, Neurocomputing. 195Z. Deng, X. Zhu, D. Cheng, M. Zong, and S. Zhang, "Efficient kNN classification algorithm for big data," Neurocomputing, vol. 195, pp. 143-148, 2016.
Use of k-nearest neighbor classifier for intrusion detection1. Y Liao, V R Vemuri, Computers & security. 215Y. Liao and V. R. Vemuri, "Use of k-nearest neighbor classifier for intrusion detection1," Computers & security, vol. 21, no. 5, pp. 439-448, 2002.
Network intrusion detection based on rough set and k-nearest neighbour. A O Adetunmbi, S O Falaki, O S Adewale, B K , International Journal of Computing and ICT Research. 21A. O. Adetunmbi, S. O. Falaki, O. S. Adewale, and B. K. Alese, "Network intrusion detection based on rough set and k-nearest neighbour," International Journal of Computing and ICT Research, vol. 2, no. 1, pp. 60-66, 2008.
Intrusion detection by machine learning: A review. C.-F Tsai, Y.-F Hsu, C.-Y. Lin, W.-Y Lin, Expert Systems with Applications. 3610C.-F. Tsai, Y.-F. Hsu, C.-Y. Lin, and W.-Y. Lin, "Intrusion detection by machine learning: A review," Expert Systems with Applications, vol. 36, no. 10, pp. 11994-12000, 2009.
Nearest neighbors based density peaks approach to intrusion detection. L Li, H Zhang, H Peng, Y Yang, Chaos, Solitons & Fractals. 110L. Li, H. Zhang, H. Peng, and Y. Yang, "Nearest neighbors based density peaks approach to intrusion detection," Chaos, Solitons & Fractals, vol. 110, pp. 33-40, 2018.
Intrusion detection system using hybrid binary PSO and K-nearest neighborhood algorithm. A R Syarif, W Gata, Information & Communication Technology and System (ICTS), 2017 11th International Conference on. IEEEA. R. Syarif and W. Gata, "Intrusion detection system using hybrid binary PSO and K-nearest neighborhood algorithm," in Information & Communication Technology and System (ICTS), 2017 11th International Conference on, 2017, pp. 181-186: IEEE.
Real-time anomaly detection systems for Denialof-Service attacks by weighted k-nearest-neighbor classifiers. M.-Y. Su, Expert Systems with Applications. 384M.-Y. Su, "Real-time anomaly detection systems for Denial- of-Service attacks by weighted k-nearest-neighbor classifiers," Expert Systems with Applications, vol. 38, no. 4, pp. 3492-3498, 2011.
A two-layer dimension reduction and two-tier classification model for anomaly-based intrusion detection in IoT backbone networks. H H Pajouh, R Javidan, R Khayami, D Ali, K.-K R Choo, IEEE Transactions on Emerging Topics in Computing. H. H. Pajouh, R. Javidan, R. Khayami, D. Ali, and K.-K. R. Choo, "A two-layer dimension reduction and two-tier classification model for anomaly-based intrusion detection in IoT backbone networks," IEEE Transactions on Emerging Topics in Computing, 2016.
A new intrusion detection system based on KNN classification algorithm in wireless sensor network. W Li, P Yi, Y Wu, L Pan, J Li, Journal of Electrical and Computer Engineering. 2014W. Li, P. Yi, Y. Wu, L. Pan, and J. Li, "A new intrusion detection system based on KNN classification algorithm in wireless sensor network," Journal of Electrical and Computer Engineering, vol. 2014, 2014.
Random forests. L Breiman, Machine learning. 451L. Breiman, "Random forests," Machine learning, vol. 45, no. 1, pp. 5-32, 2001.
Random forests for classification in ecology. D R Cutler, Ecology. 8811D. R. Cutler et al., "Random forests for classification in ecology," Ecology, vol. 88, no. 11, pp. 2783-2792, 2007.
A hybrid network intrusion detection technique using random forests. J Zhang, M Zulkernine, Availability, Reliability and Security. IEEEThe First International Conference onJ. Zhang and M. Zulkernine, "A hybrid network intrusion detection technique using random forests," in Availability, Reliability and Security, 2006. ARES 2006. The First International Conference on, 2006, pp. 8 pp.-269: IEEE.
Network Intrusion Detection Based on Random Forest and Support Vector Machine. Y Chang, W Li, Z Yang, Computational Science and Engineering (CSE) and Embedded and Ubiquitous Computing. IEEE12017 IEEE International Conference onY. Chang, W. Li, and Z. Yang, "Network Intrusion Detection Based on Random Forest and Support Vector Machine," in Computational Science and Engineering (CSE) and Embedded and Ubiquitous Computing (EUC), 2017 IEEE International Conference on, 2017, vol. 1, pp. 635-638: IEEE.
Machine Learning DDoS Detection for Consumer Internet of Things Devices. R Doshi, N Apthorpe, N Feamster, arXiv:1804.04159arXiv preprintR. Doshi, N. Apthorpe, and N. Feamster, "Machine Learning DDoS Detection for Consumer Internet of Things Devices," arXiv preprint arXiv:1804.04159, 2018.
Detection of Unauthorized IoT Devices Using Machine Learning Techniques. Y Meidan, arXiv:1709.04647arXiv preprintY. Meidan et al., "Detection of Unauthorized IoT Devices Using Machine Learning Techniques," arXiv preprint arXiv:1709.04647, 2017.
Mining association rules between sets of items in large databases. R Agrawal, T Imieliński, A Swami, Acm sigmod record. ACM22R. Agrawal, T. Imieliński, and A. Swami, "Mining association rules between sets of items in large databases," in Acm sigmod record, 1993, vol. 22, no. 2, pp. 207-216: ACM.
OMC-IDS: at the cross-roads of OLAP mining and intrusion detection. H Brahmi, I Brahmi, S B Yahia, Pacific-Asia Conference on Knowledge Discovery and Data Mining. SpringerH. Brahmi, I. Brahmi, and S. B. Yahia, "OMC-IDS: at the cross-roads of OLAP mining and intrusion detection," in Pacific-Asia Conference on Knowledge Discovery and Data Mining, 2012, pp. 13-24: Springer.
Intrusion detection using fuzzy association rules. A Tajbakhsh, M Rahmati, A Mirzaei, Applied Soft Computing. 92A. Tajbakhsh, M. Rahmati, and A. Mirzaei, "Intrusion detection using fuzzy association rules," Applied Soft Computing, vol. 9, no. 2, pp. 462-469, 2009.
Association rules mining: A recent overview. S Kotsiantis, D Kanellopoulos, GESTS International Transactions on Computer Science and Engineering. 321S. Kotsiantis and D. Kanellopoulos, "Association rules mining: A recent overview," GESTS International Transactions on Computer Science and Engineering, vol. 32, no. 1, pp. 71-82, 2006.
A survey of multiple classifier systems as hybrid systems. M Woźniak, M Graña, E Corchado, Information Fusion. 16M. Woźniak, M. Graña, and E. Corchado, "A survey of multiple classifier systems as hybrid systems," Information Fusion, vol. 16, pp. 3-17, 2014.
A few useful things to know about machine learning. P Domingos, Communications of the ACM. 5510P. Domingos, "A few useful things to know about machine learning," Communications of the ACM, vol. 55, no. 10, pp. 78-87, 2012.
Ensemble machine learning: methods and applications. C Zhang, Y Ma, SpringerC. Zhang and Y. Ma, Ensemble machine learning: methods and applications. Springer, 2012.
A comparative analysis of genetic algorithm and ant colony optimization to select attributes for an heterogeneous ensemble of classifiers. L E Santana, L Silva, A M Canuto, F Pintro, K O Vale, Evolutionary Computation (CEC). IEEEL. E. Santana, L. Silva, A. M. Canuto, F. Pintro, and K. O. Vale, "A comparative analysis of genetic algorithm and ant colony optimization to select attributes for an heterogeneous ensemble of classifiers," in Evolutionary Computation (CEC), 2010 IEEE Congress on, 2010, pp. 1-8: IEEE.
Current Issues in Ensemble Methods and Its Applications. N M Baba, M Makhtar, S A Fadzli, M K Awang, Journal of Theoretical and Applied Information Technology. 812266N. M. Baba, M. Makhtar, S. A. Fadzli, and M. K. Awang, "Current Issues in Ensemble Methods and Its Applications," Journal of Theoretical and Applied Information Technology, vol. 81, no. 2, p. 266, 2015.
Intrusion detection system using bagging ensemble method of machine learning. D Gaikwad, R C Thool, Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on. IEEED. Gaikwad and R. C. Thool, "Intrusion detection system using bagging ensemble method of machine learning," in Computing Communication Control and Automation (ICCUBEA), 2015 International Conference on, 2015, pp. 291-295: IEEE.
A novel SVM-kNN-PSO ensemble method for intrusion detection system. A A Aburomman, M B I Reaz, Applied Soft Computing. 38A. A. Aburomman and M. B. I. Reaz, "A novel SVM-kNN- PSO ensemble method for intrusion detection system," Applied Soft Computing, vol. 38, pp. 360-372, 2016.
Enhanced anomaly detection using ensemble support vector machine. R R Reddy, Y Ramadevi, K Sunitha, Big Data Analytics and Computational Intelligence (ICBDAC. IEEER. R. Reddy, Y. Ramadevi, and K. Sunitha, "Enhanced anomaly detection using ensemble support vector machine," in Big Data Analytics and Computational Intelligence (ICBDAC), 2017 International Conference on, 2017, pp. 107-111: IEEE.
High accuracy android malware detection using ensemble learning. S Y Yerima, S Sezer, I Muttik, IET Information Security. 96S. Y. Yerima, S. Sezer, and I. Muttik, "High accuracy android malware detection using ensemble learning," IET Information Security, vol. 9, no. 6, pp. 313-320, 2015.
Ensembles of incremental learners to detect anomalies in ad hoc sensor networks. H H Bosman, G Iacca, A Tejada, H J Wörtche, A Liotta, ad hoc networks. 35H. H. Bosman, G. Iacca, A. Tejada, H. J. Wörtche, and A. Liotta, "Ensembles of incremental learners to detect anomalies in ad hoc sensor networks," ad hoc networks, vol. 35, pp. 14-36, 2015.
Algorithm AS 136: A kmeans clustering algorithm. J A Hartigan, M A Wong, Journal of the Royal Statistical Society. Series C (Applied Statistics). 281J. A. Hartigan and M. A. Wong, "Algorithm AS 136: A k- means clustering algorithm," Journal of the Royal Statistical Society. Series C (Applied Statistics), vol. 28, no. 1, pp. 100- 108, 1979.
Data clustering: 50 years beyond K-means. A K Jain, Pattern recognition letters. 318A. K. Jain, "Data clustering: 50 years beyond K-means," Pattern recognition letters, vol. 31, no. 8, pp. 651-666, 2010.
Traffic anomaly detection using k-means clustering. G Münz, S Li, G Carle, GI/ITG Workshop MMBnet. G. Münz, S. Li, and G. Carle, "Traffic anomaly detection using k-means clustering," in GI/ITG Workshop MMBnet, 2007.
Network anomaly detection: methods, systems and tools. M H Bhuyan, D K Bhattacharyya, J K Kalita, IEEE communications surveys & tutorials. 161M. H. Bhuyan, D. K. Bhattacharyya, and J. K. Kalita, "Network anomaly detection: methods, systems and tools," IEEE communications surveys & tutorials, vol. 16, no. 1, pp. 303-336, 2014.
Network anomaly detection by cascading k-Means clustering and C4. 5 decision tree algorithm. A P Muniyandi, R Rajeswari, R Rajaram, Procedia Engineering. 30A. P. Muniyandi, R. Rajeswari, and R. Rajaram, "Network anomaly detection by cascading k-Means clustering and C4. 5 decision tree algorithm," Procedia Engineering, vol. 30, pp. 174-182, 2012.
Learning intrusion detection: supervised or unsupervised?. P Laskov, P Düssel, C Schäfer, K Rieck, International Conference on Image Analysis and Processing. SpringerP. Laskov, P. Düssel, C. Schäfer, and K. Rieck, "Learning intrusion detection: supervised or unsupervised?," in International Conference on Image Analysis and Processing, 2005, pp. 50-57: Springer.
Intrusion detection for wireless sensor networks based on multi-agent and refined clustering. H Wang, Z Yuan, C.-D Wang, Communications and Mobile Computing, 2009. CMC'09. WRI International Conference on. IEEE3H.-b. Wang, Z. Yuan, and C.-d. Wang, "Intrusion detection for wireless sensor networks based on multi-agent and refined clustering," in Communications and Mobile Computing, 2009. CMC'09. WRI International Conference on, 2009, vol. 3, pp. 450-454: IEEE.
Channel-based Sybil Detection in Industrial Wireless Sensor Networks: a Multi-kernel Approach. Q Li, K Zhang, M Cheffena, X Shen, GLOBECOM 2017-2017 IEEE Global Communications Conference. IEEEQ. Li, K. Zhang, M. Cheffena, and X. Shen, "Channel-based Sybil Detection in Industrial Wireless Sensor Networks: a Multi-kernel Approach," in GLOBECOM 2017-2017 IEEE Global Communications Conference, 2017, pp. 1-6: IEEE.
The anonymization protection algorithm based on fuzzy clustering for the ego of data in the Internet of Things. M Xie, M Huang, Y Bai, Z Hu, Journal of Electrical and Computer Engineering. 2017M. Xie, M. Huang, Y. Bai, and Z. Hu, "The anonymization protection algorithm based on fuzzy clustering for the ego of data in the Internet of Things," Journal of Electrical and Computer Engineering, vol. 2017, 2017.
Principal component analysis. S Wold, K Esbensen, P Geladi, Chemometrics and intelligent laboratory systems. 21-3S. Wold, K. Esbensen, and P. Geladi, "Principal component analysis," Chemometrics and intelligent laboratory systems, vol. 2, no. 1-3, pp. 37-52, 1987.
A Dimension Reduction Model and Classifier for Anomaly-Based Intrusion Detection in Internet of Things. S Zhao, W Li, T Zia, A Y Zomaya, Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence & Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress. S. Zhao, W. Li, T. Zia, and A. Y. Zomaya, "A Dimension Reduction Model and Classifier for Anomaly-Based Intrusion Detection in Internet of Things," in Dependable, Autonomic and Secure Computing, 15th Intl Conf on Pervasive Intelligence & Computing, 3rd Intl Conf on Big Data Intelligence and Computing and Cyber Science and Technology Congress (DASC/PiCom/DataCom/CyberSciTech), 2017 IEEE 15th
Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing. H Li, K Ota, M Dong, IEEE Network. 321H. Li, K. Ota, and M. Dong, "Learning IoT in Edge: Deep Learning for the Internet of Things with Edge Computing," IEEE Network, vol. 32, no. 1, pp. 96-101, 2018.
State-of-the-art deep learning: Evolving machine intelligence toward tomorrow's intelligent network traffic control systems. Z M Fadlullah, IEEE Communications Surveys & Tutorials. 194Z. M. Fadlullah et al., "State-of-the-art deep learning: Evolving machine intelligence toward tomorrow's intelligent network traffic control systems," IEEE Communications Surveys & Tutorials, vol. 19, no. 4, pp. 2432-2455, 2017.
I Goodfellow, Y Bengio, A Courville, Y Bengio, Deep learning. MIT press CambridgeI. Goodfellow, Y. Bengio, A. Courville, and Y. Bengio, Deep learning. MIT press Cambridge, 2016.
Evaluation of pooling operations in convolutional architectures for object recognition. D Scherer, A Müller, S Behnke, International conference on artificial neural networks. SpringerD. Scherer, A. Müller, and S. Behnke, "Evaluation of pooling operations in convolutional architectures for object recognition," in International conference on artificial neural networks, 2010, pp. 92-101: Springer.
Flexible, high performance convolutional neural networks for image classification. D C Ciresan, U Meier, J Masci, L Maria Gambardella, J Schmidhuber, IJCAI Proceedings-International Joint Conference on Artificial Intelligence. Barcelona, Spain221237D. C. Ciresan, U. Meier, J. Masci, L. Maria Gambardella, and J. Schmidhuber, "Flexible, high performance convolutional neural networks for image classification," in IJCAI Proceedings-International Joint Conference on Artificial Intelligence, 2011, vol. 22, no. 1, p. 1237: Barcelona, Spain.
. !!! Invalid Citation !!!, !!! INVALID CITATION !!! .
Distributed neural networks for Internet of Things: the Big-Little approach. E De Coninck, International Internet of Things Summit. SpringerE. De Coninck et al., "Distributed neural networks for Internet of Things: the Big-Little approach," in International Internet of Things Summit, 2015, pp. 484-492: Springer.
Imagenet classification with deep convolutional neural networks. A Krizhevsky, I Sutskever, G E Hinton, Advances in neural information processing systems. A. Krizhevsky, I. Sutskever, and G. E. Hinton, "Imagenet classification with deep convolutional neural networks," in Advances in neural information processing systems, 2012, pp. 1097-1105.
Deep learning for remote sensing data: A technical tutorial on the state of the art. L Zhang, L Zhang, B Du, IEEE Geoscience and Remote Sensing Magazine. 42L. Zhang, L. Zhang, and B. Du, "Deep learning for remote sensing data: A technical tutorial on the state of the art," IEEE Geoscience and Remote Sensing Magazine, vol. 4, no. 2, pp. 22-40, 2016.
Deep android malware detection. N Mclaughlin, Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy. the Seventh ACM on Conference on Data and Application Security and PrivacyACMN. McLaughlin et al., "Deep android malware detection," in Proceedings of the Seventh ACM on Conference on Data and Application Security and Privacy, 2017, pp. 301-308: ACM.
Breaking cryptographic implementations using deep learning techniques. H Maghrebi, T Portigliatti, E Prouff, International Conference on Security, Privacy, and Applied Cryptography Engineering. SpringerH. Maghrebi, T. Portigliatti, and E. Prouff, "Breaking cryptographic implementations using deep learning techniques," in International Conference on Security, Privacy, and Applied Cryptography Engineering, 2016, pp. 3-26: Springer.
How to construct deep recurrent neural networks. R Pascanu, C Gulcehre, K Cho, Y Bengio, arXiv:1312.6026arXiv preprintR. Pascanu, C. Gulcehre, K. Cho, and Y. Bengio, "How to construct deep recurrent neural networks," arXiv preprint arXiv:1312.6026, 2013.
Training and analysing deep recurrent neural networks. M Hermans, B Schrauwen, Advances in neural information processing systems. M. Hermans and B. Schrauwen, "Training and analysing deep recurrent neural networks," in Advances in neural information processing systems, 2013, pp. 190-198.
Deep Learning Algorithms for Human Activity Recognition using Mobile and Wearable Sensor Networks: State of the Art and Research Challenges. H F Nweke, Y W Teh, M A , U R Alo, Expert Systems with Applications. H. F. Nweke, Y. W. Teh, M. A. Al-garadi, and U. R. Alo, "Deep Learning Algorithms for Human Activity Recognition using Mobile and Wearable Sensor Networks: State of the Art and Research Challenges," Expert Systems with Applications, 2018.
On the difficulty of training recurrent neural networks. R Pascanu, T Mikolov, Y Bengio, International Conference on Machine Learning. R. Pascanu, T. Mikolov, and Y. Bengio, "On the difficulty of training recurrent neural networks," in International Conference on Machine Learning, 2013, pp. 1310-1318.
Speech recognition with deep recurrent neural networks. A Graves, A Mohamed, G Hinton, Acoustics, speech and signal processing (icassp), 2013 ieee international conference on. IEEEA. Graves, A.-r. Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in Acoustics, speech and signal processing (icassp), 2013 ieee international conference on, 2013, pp. 6645-6649: IEEE.
A survey on deep learning for big data. Q Zhang, L T Yang, Z Chen, P Li, Information Fusion. 42Q. Zhang, L. T. Yang, Z. Chen, and P. Li, "A survey on deep learning for big data," Information Fusion, vol. 42, pp. 146- 157, 2018.
Learning phrase representations using RNN encoder-decoder for statistical machine translation. K Cho, arXiv:1406.1078arXiv preprintK. Cho et al., "Learning phrase representations using RNN encoder-decoder for statistical machine translation," arXiv preprint arXiv:1406.1078, 2014.
An analysis of Recurrent Neural Networks for Botnet detection behavior. P Torres, C Catania, S Garcia, C G Garino, Biennial Congress of Argentina (ARGENCON). IEEEP. Torres, C. Catania, S. Garcia, and C. G. Garino, "An analysis of Recurrent Neural Networks for Botnet detection behavior," in Biennial Congress of Argentina (ARGENCON), 2016 IEEE, 2016, pp. 1-6: IEEE.
M Mohammadi, A Al-Fuqaha, S Sorour, M Guizani, arXiv:1712.04301Deep Learning for IoT Big Data and Streaming Analytics: A Survey. arXiv preprintM. Mohammadi, A. Al-Fuqaha, S. Sorour, and M. Guizani, "Deep Learning for IoT Big Data and Streaming Analytics: A Survey," arXiv preprint arXiv:1712.04301, 2017.
Autoencoder-based feature learning for cyber security applications. M Yousefi-Azar, V Varadharajan, L Hamey, U Tupakula, Neural Networks (IJCNN. 2017M. Yousefi-Azar, V. Varadharajan, L. Hamey, and U. Tupakula, "Autoencoder-based feature learning for cyber security applications," in Neural Networks (IJCNN), 2017
A hybrid malicious code detection method based on deep learning. Y Li, R Ma, R Jiao, methods. 95Y. Li, R. Ma, and R. Jiao, "A hybrid malicious code detection method based on deep learning," methods, vol. 9, no. 5, 2015.
A practical guide to training restricted Boltzmann machines. G E Hinton, Neural networks: Tricks of the trade. SpringerG. E. Hinton, "A practical guide to training restricted Boltzmann machines," in Neural networks: Tricks of the trade: Springer, 2012, pp. 599-619.
Network anomaly detection with the restricted Boltzmann machine. U Fiore, F Palmieri, A Castiglione, A De Santis, Neurocomputing. 122U. Fiore, F. Palmieri, A. Castiglione, and A. De Santis, "Network anomaly detection with the restricted Boltzmann machine," Neurocomputing, vol. 122, pp. 13-23, 2013.
A fast learning algorithm for deep belief nets. G E Hinton, S Osindero, Y.-W Teh, Neural computation. 187G. E. Hinton, S. Osindero, and Y.-W. Teh, "A fast learning algorithm for deep belief nets," Neural computation, vol. 18, no. 7, pp. 1527-1554, 2006.
Deep Learning for Secure Mobile Edge Computing. Y Chen, Y Zhang, S Maharjan, arXiv:1709.08025arXiv preprintY. Chen, Y. Zhang, and S. Maharjan, "Deep Learning for Secure Mobile Edge Computing," arXiv preprint arXiv:1709.08025, 2017.
Generative adversarial nets. I Goodfellow, Advances in neural information processing systems. I. Goodfellow et al., "Generative adversarial nets," in Advances in neural information processing systems, 2014, pp. 2672-2680.
A secure architecture for IoT with supply chain risk management. R E Hiromoto, M Haney, A Vakanski, Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2017 9th IEEE International Conference on. IEEE1R. E. Hiromoto, M. Haney, and A. Vakanski, "A secure architecture for IoT with supply chain risk management," in Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS), 2017 9th IEEE International Conference on, 2017, vol. 1, pp. 431- 435: IEEE.
Improved techniques for training gans. T Salimans, I Goodfellow, W Zaremba, V Cheung, A Radford, X Chen, Advances in Neural Information Processing Systems. T. Salimans, I. Goodfellow, W. Zaremba, V. Cheung, A. Radford, and X. Chen, "Improved techniques for training gans," in Advances in Neural Information Processing Systems, 2016, pp. 2234-2242.
Combining pattern classifiers: methods and algorithms. L I Kuncheva, John Wiley & SonsL. I. Kuncheva, Combining pattern classifiers: methods and algorithms. John Wiley & Sons, 2004.
Deep Reinforcement Learning with Double Q-Learning. H Van Hasselt, A Guez, D Silver, AAAI. 16H. Van Hasselt, A. Guez, and D. Silver, "Deep Reinforcement Learning with Double Q-Learning," in AAAI, 2016, vol. 16, pp. 2094-2100.
T P Lillicrap, arXiv:1509.02971Continuous control with deep reinforcement learning. arXiv preprintT. P. Lillicrap et al., "Continuous control with deep reinforcement learning," arXiv preprint arXiv:1509.02971, 2015.
Prioritized experience replay. T Schaul, J Quan, I Antonoglou, D Silver, arXiv:1511.05952arXiv preprintT. Schaul, J. Quan, I. Antonoglou, and D. Silver, "Prioritized experience replay," arXiv preprint arXiv:1511.05952, 2015.
Multi-agent Reinforcement Learning Based Cognitive Anti-jamming. M A Aref, S K Jayaweera, S Machuzak, Wireless Communications and Networking Conference. IEEEM. A. Aref, S. K. Jayaweera, and S. Machuzak, "Multi-agent Reinforcement Learning Based Cognitive Anti-jamming," in Wireless Communications and Networking Conference (WCNC), 2017 IEEE, 2017, pp. 1-6: IEEE.
Reinforcement learning based anti-jamming with wideband autonomous cognitive radios. S Machuzak, S K Jayaweera, Communications in China (ICCC). S. Machuzak and S. K. Jayaweera, "Reinforcement learning based anti-jamming with wideband autonomous cognitive radios," in Communications in China (ICCC), 2016
IEEE/CIC International Conference on. IEEEIEEE/CIC International Conference on, 2016, pp. 1-5: IEEE.
Two-dimensional antijamming communication based on deep reinforcement learning. G Han, L Xiao, H V Poor, Acoustics, Speech and Signal Processing. IEEEG. Han, L. Xiao, and H. V. Poor, "Two-dimensional anti- jamming communication based on deep reinforcement learning," in Acoustics, Speech and Signal Processing (ICASSP), 2017 IEEE International Conference on, 2017, pp. 2087-2091: IEEE.
Competing mobile network game: Embracing antijamming and jamming strategies with reinforcement learning. Y Gwon, S Dastangoo, C Fossa, H Kung, Communications and Network Security (CNS), 2013 IEEE Conference on. IEEEY. Gwon, S. Dastangoo, C. Fossa, and H. Kung, "Competing mobile network game: Embracing antijamming and jamming strategies with reinforcement learning," in Communications and Network Security (CNS), 2013 IEEE Conference on, 2013, pp. 28-36: IEEE.
Physical-Layer Authentication Based on Extreme Learning Machine. N Wang, T Jiang, S Lv, L Xiao, IEEE Communications Letters. 217N. Wang, T. Jiang, S. Lv, and L. Xiao, "Physical-Layer Authentication Based on Extreme Learning Machine," IEEE Communications Letters, vol. 21, no. 7, pp. 1557-1560, 2017.
Smart User Authentication through Actuation of Daily Activities Leveraging WiFi-enabled IoT. C Shi, J Liu, H Liu, Y Chen, Proceedings of the 18th ACM International Symposium on Mobile Ad Hoc Networking and Computing. the 18th ACM International Symposium on Mobile Ad Hoc Networking and ComputingACM5C. Shi, J. Liu, H. Liu, and Y. Chen, "Smart User Authentication through Actuation of Daily Activities Leveraging WiFi-enabled IoT," in Proceedings of the 18th ACM International Symposium on Mobile Ad Hoc Networking and Computing, 2017, p. 5: ACM.
Jamming in the Internet of Things: A game-theoretic perspective. N Namvar, W Saad, N Bahadori, B Kelley, Global Communications Conference (GLOBECOM). N. Namvar, W. Saad, N. Bahadori, and B. Kelley, "Jamming in the Internet of Things: A game-theoretic perspective," in Global Communications Conference (GLOBECOM), 2016
. IEEE. IEEEIEEE, 2016, pp. 1-6: IEEE.
Cognitiveradio-based internet of things: Applications, architectures, spectrum related functionalities, and future research directions. A A Khan, M H Rehmani, A Rachedi, IEEE wireless communications. 243A. A. Khan, M. H. Rehmani, and A. Rachedi, "Cognitive- radio-based internet of things: Applications, architectures, spectrum related functionalities, and future research directions," IEEE wireless communications, vol. 24, no. 3, pp. 17-25, 2017.
Cognitive internet of things: a new paradigm beyond connection. Q Wu, IEEE Internet of Things Journal. 12Q. Wu et al., "Cognitive internet of things: a new paradigm beyond connection," IEEE Internet of Things Journal, vol. 1, no. 2, pp. 129-143, 2014.
When cognitive radio meets the internet of things?. A A Khan, M H Rehmani, A Rachedi, Wireless Communications and Mobile Computing Conference (IWCMC). IEEEA. A. Khan, M. H. Rehmani, and A. Rachedi, "When cognitive radio meets the internet of things?," in Wireless Communications and Mobile Computing Conference (IWCMC), 2016 International, 2016, pp. 469-474: IEEE.
A survey on machine-learning techniques in cognitive radios. M Bkassiny, Y Li, S K Jayaweera, IEEE Communications Surveys & Tutorials. 153M. Bkassiny, Y. Li, and S. K. Jayaweera, "A survey on machine-learning techniques in cognitive radios," IEEE Communications Surveys & Tutorials, vol. 15, no. 3, pp. 1136-1159, 2013.
A survey: Attacks on RPL and 6LoWPAN in IoT. P Pongle, G Chavan, Pervasive Computing (ICPC). P. Pongle and G. Chavan, "A survey: Attacks on RPL and 6LoWPAN in IoT," in Pervasive Computing (ICPC), 2015
Deep learning in cyber security for internet of things. F Y Yavuz, F. Y. Yavuz, "Deep learning in cyber security for internet of things," 2018.
A host-based intrusion detection and mitigation framework for smart home IoT using OpenFlow. M Nobakht, V Sivaraman, R Boreli, Availability, Reliability and Security. IEEE11th International Conference onM. Nobakht, V. Sivaraman, and R. Boreli, "A host-based intrusion detection and mitigation framework for smart home IoT using OpenFlow," in Availability, Reliability and Security (ARES), 2016 11th International Conference on, 2016, pp. 147-156: IEEE.
The use of the area under the ROC curve in the evaluation of machine learning algorithms. A P Bradley, Pattern recognition. 307A. P. Bradley, "The use of the area under the ROC curve in the evaluation of machine learning algorithms," Pattern recognition, vol. 30, no. 7, pp. 1145-1159, 1997.
Learning from imbalanced data. H He, E A Garcia, IEEE Transactions on knowledge and data engineering. 219H. He and E. A. Garcia, "Learning from imbalanced data," IEEE Transactions on knowledge and data engineering, vol. 21, no. 9, pp. 1263-1284, 2009.
Using machine learning to secure IoT systems. J Cañedo, A Skjellum, Privacy, Security and Trust (PST), 2016 14th Annual Conference on. IEEEJ. Cañedo and A. Skjellum, "Using machine learning to secure IoT systems," in Privacy, Security and Trust (PST), 2016 14th Annual Conference on, 2016, pp. 219-222: IEEE.
Hybrid of anomaly-based and specification-based IDS for Internet of things using unsupervised OPF based on MapReduce approach. H Bostani, M Sheikhan, Computer Communications. 98H. Bostani and M. Sheikhan, "Hybrid of anomaly-based and specification-based IDS for Internet of things using unsupervised OPF based on MapReduce approach," Computer Communications, vol. 98, pp. 52-71, 2017.
Data clustering as an optimum-path forest problem with applications in image analysis. L M Rocha, F A Cappabianco, A X Falcão, International Journal of Imaging Systems and Technology. 192L. M. Rocha, F. A. Cappabianco, and A. X. Falcão, "Data clustering as an optimum-path forest problem with applications in image analysis," International Journal of Imaging Systems and Technology, vol. 19, no. 2, pp. 50-68, 2009.
MapReduce: simplified data processing on large clusters. J Dean, S Ghemawat, Communications of the ACM. 511J. Dean and S. Ghemawat, "MapReduce: simplified data processing on large clusters," Communications of the ACM, vol. 51, no. 1, pp. 107-113, 2008.
Securing IoT for smart home system. F K Santoso, N C Vun, Consumer Electronics (ISCE), 2015 IEEE International Symposium on. IEEEF. K. Santoso and N. C. Vun, "Securing IoT for smart home system," in Consumer Electronics (ISCE), 2015 IEEE International Symposium on, 2015, pp. 1-2: IEEE.
The applications of wifi-based wireless sensor network in internet of things and smart grid. L Li, H Xiaoguang, C Ke, H Ketai, Industrial Electronics and Applications (ICIEA), 2011 6th IEEE Conference on. IEEEL. Li, H. Xiaoguang, C. Ke, and H. Ketai, "The applications of wifi-based wireless sensor network in internet of things and smart grid," in Industrial Electronics and Applications (ICIEA), 2011 6th IEEE Conference on, 2011, pp. 789-793: IEEE.
Deep Abstraction and Weighted Feature Selection for Wi-Fi Impersonation Detection. M E Aminanto, R Choi, H C Tanuwidjaja, P D Yoo, K Kim, IEEE Transactions on Information Forensics and Security. 133M. E. Aminanto, R. Choi, H. C. Tanuwidjaja, P. D. Yoo, and K. Kim, "Deep Abstraction and Weighted Feature Selection for Wi-Fi Impersonation Detection," IEEE Transactions on Information Forensics and Security, vol. 13, no. 3, pp. 621- 636, 2018.
Improving Detection of Wi-Fi Impersonation by Fully Unsupervised Deep Learning. M E Aminanto, K Kim, Information Security Applications: 18th International Workshop. M. E. Aminanto and K. Kim, "Improving Detection of Wi- Fi Impersonation by Fully Unsupervised Deep Learning," in Information Security Applications: 18th International Workshop, WISA 2017, 2017.
Detection of known and unknown DDoS attacks using Artificial Neural Networks. A Saied, R E Overill, T Radzik, Neurocomputing. 172A. Saied, R. E. Overill, and T. Radzik, "Detection of known and unknown DDoS attacks using Artificial Neural Networks," Neurocomputing, vol. 172, pp. 385-393, 2016.
ProfilIoT: a machine learning approach for IoT device identification based on network traffic analysis. Y Meidan, Proceedings of the Symposium on Applied Computing. the Symposium on Applied ComputingACMY. Meidan et al., "ProfilIoT: a machine learning approach for IoT device identification based on network traffic analysis," in Proceedings of the Symposium on Applied Computing, 2017, pp. 506-509: ACM.
ProFiOt: Abnormal Behavior Profiling (ABP) of IoT devices based on a machine learning approach. S.-Y Lee, S Wi, E Seo, J.-K Jung, T.-M Chung, Telecommunication Networks and Applications Conference (ITNAC. IEEES.-Y. Lee, S.-r. Wi, E. Seo, J.-K. Jung, and T.-M. Chung, "ProFiOt: Abnormal Behavior Profiling (ABP) of IoT devices based on a machine learning approach," in Telecommunication Networks and Applications Conference (ITNAC), 2017 27th International, 2017, pp. 1-6: IEEE.
. M Miettinen, S Marchal, I Hafeez, N Asokan, A.-R , M. Miettinen, S. Marchal, I. Hafeez, N. Asokan, A.-R.
IoT Sentinel: Automated devicetype identification for security enforcement in IoT. S Sadeghi, Tarkoma, Distributed Computing Systems (ICDCS). IEEESadeghi, and S. Tarkoma, "IoT Sentinel: Automated device- type identification for security enforcement in IoT," in Distributed Computing Systems (ICDCS), 2017 IEEE 37th International Conference on, 2017, pp. 2177-2184: IEEE.
Mobile network intrusion detection for IoT system based on transfer learning algorithm. L Deng, D Li, X Yao, D Cox, H Wang, Cluster Computing. L. Deng, D. Li, X. Yao, D. Cox, and H. Wang, "Mobile network intrusion detection for IoT system based on transfer learning algorithm," Cluster Computing, pp. 1-16, 2018.
FCM: The fuzzy cmeans clustering algorithm. J C Bezdek, R Ehrlich, W Full, Computers & Geosciences. 102-3J. C. Bezdek, R. Ehrlich, and W. Full, "FCM: The fuzzy c- means clustering algorithm," Computers & Geosciences, vol. 10, no. 2-3, pp. 191-203, 1984.
A framework for automating security analysis of the internet of things. M Ge, J B Hong, W Guttmann, D S Kim, Journal of Network and Computer Applications. 83M. Ge, J. B. Hong, W. Guttmann, and D. S. Kim, "A framework for automating security analysis of the internet of things," Journal of Network and Computer Applications, vol. 83, pp. 12-27, 2017.
Multistage Signaling Game-based Optimal Detection Strategies for Suppressing Malware Diffusion in Fog-Cloudbased IoT Networks. S Shen, L Huang, H Zhou, S Yu, E Fan, Q Cao, IEEE Internet of Things Journal. S. Shen, L. Huang, H. Zhou, S. Yu, E. Fan, and Q. Cao, "Multistage Signaling Game-based Optimal Detection Strategies for Suppressing Malware Diffusion in Fog-Cloud- based IoT Networks," IEEE Internet of Things Journal, 2018.
A deep learning approach to android malware feature learning and detection. X Su, D Zhang, W Li, K Zhao, Trustcom/BigDataSE/I SPAIEEEX. Su, D. Zhang, W. Li, and K. Zhao, "A deep learning approach to android malware feature learning and detection," in Trustcom/BigDataSE/I SPA, 2016 IEEE, 2016, pp. 244-251: IEEE.
Distributed attack detection scheme using deep learning approach for Internet of Things. A A Diro, N Chilamkurti, Future Generation Computer Systems. A. A. Diro and N. Chilamkurti, "Distributed attack detection scheme using deep learning approach for Internet of Things," Future Generation Computer Systems, 2017.
Deep Learning: The Frontier for Distributed Attack Detection in Fog-to-Things Computing. A Abeshu, N Chilamkurti, IEEE Communications Magazine. 562A. Abeshu and N. Chilamkurti, "Deep Learning: The Frontier for Distributed Attack Detection in Fog-to-Things Computing," IEEE Communications Magazine, vol. 56, no. 2, pp. 169-175, 2018.
C Zhang, P Patras, H Haddadi, arXiv:1803.04311Deep Learning in Mobile and Wireless Networking: A Survey. arXiv preprintC. Zhang, P. Patras, and H. Haddadi, "Deep Learning in Mobile and Wireless Networking: A Survey," arXiv preprint arXiv:1803.04311, 2018.
cudnn: Efficient primitives for deep learning. S Chetlur, arXiv:1410.0759arXiv preprintS. Chetlur et al., "cudnn: Efficient primitives for deep learning," arXiv preprint arXiv:1410.0759, 2014.
Tensorflow: a system for large-scale machine learning. M Abadi, OSDI. 16M. Abadi et al., "Tensorflow: a system for large-scale machine learning," in OSDI, 2016, vol. 16, pp. 265-283.
Caffe: Convolutional architecture for fast feature embedding. Y Jia, Proceedings of the 22nd ACM international conference on Multimedia. the 22nd ACM international conference on MultimediaACMY. Jia et al., "Caffe: Convolutional architecture for fast feature embedding," in Proceedings of the 22nd ACM international conference on Multimedia, 2014, pp. 675-678: ACM.
Theano: A Python framework for fast computation of mathematical expressions. R Al-Rfou, arXiv preprintR. Al-Rfou et al., "Theano: A Python framework for fast computation of mathematical expressions," arXiv preprint, 2016.
Deep Learning moving beyond shallow machine learning since. "" Deep Learning moving beyond shallow machine learning since 2006" http://deeplearning.net/software_links/,"
Torch7: A matlab-like environment for machine learning. R Collobert, K Kavukcuoglu, C Farabet, EPFL-CONF-192376BigLearn, NIPS workshop. R. Collobert, K. Kavukcuoglu, and C. Farabet, "Torch7: A matlab-like environment for machine learning," in BigLearn, NIPS workshop, 2011, no. EPFL-CONF-192376.
Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems. T Chen, arXiv:1512.01274arXiv preprintT. Chen et al., "Mxnet: A flexible and efficient machine learning library for heterogeneous distributed systems," arXiv preprint arXiv:1512.01274, 2015.
Enabling deep learning on iot devices. J Tang, D Sun, S Liu, J.-L Gaudiot, Computer. 5010J. Tang, D. Sun, S. Liu, and J.-L. Gaudiot, "Enabling deep learning on iot devices," Computer, vol. 50, no. 10, pp. 92- 96, 2017.
The promise of edge computing. W Shi, S Dustdar, Computer. 495W. Shi and S. Dustdar, "The promise of edge computing," Computer, vol. 49, no. 5, pp. 78-81, 2016.
End-to-End Trust and Security for Internet of Things Applications. S Bhattarai, Y Wang, Computer. 514S. Bhattarai and Y. Wang, "End-to-End Trust and Security for Internet of Things Applications," Computer, vol. 51, no. 4, pp. 20-27, 2018.
Quantum Boltzmann equation of composite fermions interacting with a gauge field

Yong Baek Kim, Patrick A. Lee, and Xiao-Gang Wen
Department of Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139
April 17, 1995

arXiv:cond-mat/9504063v2; DOI: 10.1103/PhysRevB.52.17275

ABSTRACT

We derive the quantum Boltzmann equation (QBE) of composite fermions at/near the ν = 1/2 state using the non-equilibrium Green's function technique. The lowest order perturbative correction to the self-energy due to the strong gauge field fluctuations suggests that there is no well defined Landau-quasi-particle. Therefore, we cannot assume the existence of the Landau-quasi-particles a priori in the derivation of the QBE. Using an alternative formulation, we derive the QBE for the generalized Fermi surface displacement which corresponds to the local variation of the chemical potential in momentum space. From this QBE, one can understand in a unified fashion the Fermi-liquid behaviors of the density-density and the current-current correlation functions at ν = 1/2 (in the long wave length and the low frequency limits) and the singular behavior of the energy gap obtained from the finite temperature activation behavior of the compressibility near ν = 1/2. Implications of these results to the recent experiments are also discussed.

PACS numbers: 73.40.Hm, 71.27.+a, 11.15.-q
I. INTRODUCTION
Since the discovery of the integer (IQH) and fractional quantum Hall (FQH) effects, the two-dimensional electron system in strong magnetic fields has often surprised us. Among recent developments, a lot of attention has been given to the appearance of the new metallic state at the filling fraction ν = 1/2 [1] and the associated Shubnikov-de Haas oscillations of the longitudinal resistance around ν = 1/2 [2,3]. The similarity between these phenomena near ν = 1/2 and those of electrons in weak magnetic fields was successfully explained by the composite fermion approach [4]. Using the fermionic Chern-Simons gauge theory of the composite fermions [5,6], Halperin, Lee, and Read (HLR) developed a theory that describes the new metallic state at ν = 1/2 [6].
A composite fermion is obtained by attaching an even number 2n of flux quanta to an electron, and the transformation can be realized by introducing an appropriate Chern-Simons gauge field [4-6]. It is found that the most singular contribution to the self-energy Σ(k,ω) comes from the transverse part of the gauge field fluctuations [6,11]. The lowest order perturbative correction to the self-energy (due to the transverse gauge field) has been calculated by several authors [6,11]. It turns out that Re Σ ∼ Im Σ ∼ ω^{2/(1+η)} for 1 < η ≤ 2, while Re Σ ∼ ω ln ω and Im Σ ∼ ω for η = 1 (Coulomb interaction). Thus the Landau criterion for the quasi-particle is violated in the case 1 < η ≤ 2, and the case η = 1 shows the marginal Fermi liquid behavior. In either case, the effective mass of the fermions diverges, as m*/m ∝ |ξ_k|^{-(η-1)/(η+1)} for 1 < η ≤ 2 and as m*/m ∝ |ln ξ_k| for η = 1, where ξ_k = k²/2m − μ and μ is the chemical potential [6].
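The scaling relations quoted above are simple enough to tabulate directly. The short numerical sketch below evaluates them; the power laws are taken from the text, while the unit prefactors and the sample values of η and ξ_k are illustrative assumptions rather than results of the paper.

import numpy as np

def self_energy_scale(omega, eta):
    """|Sigma(omega)| ~ |omega|**(2/(1+eta)); for eta = 1 this is linear up to a log."""
    return np.abs(omega) ** (2.0 / (1.0 + eta))

def mass_enhancement(xi_k, eta):
    """m*/m ~ |xi_k|**(-(eta-1)/(eta+1)) for 1 < eta <= 2, ~ |ln xi_k| for eta = 1."""
    xi = np.abs(xi_k)
    if np.isclose(eta, 1.0):
        return np.abs(np.log(xi))
    return xi ** (-(eta - 1.0) / (eta + 1.0))

if __name__ == "__main__":
    for eta in (1.0, 1.5, 2.0):               # eta = 1: Coulomb; eta = 2: short range
        for xi in (1e-2, 1e-4, 1e-6):         # energies measured from the Fermi surface
            print(f"eta={eta:3.1f}  xi_k={xi:7.0e}  "
                  f"m*/m ~ {mass_enhancement(xi, eta):8.2f}  "
                  f"|Sigma| ~ {self_energy_scale(xi, eta):8.2e}")

The divergence of m*/m as ξ_k → 0 is visible already over a few decades of ξ_k, and is much slower (logarithmic) in the Coulomb case η = 1.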
In a self-consistent treatment of the self-energy [6], the energy gap of the system in the presence of a small effective magnetic field ΔB can be determined as E_g ∝ |ΔB|^{(1+η)/2} for 1 < η ≤ 2 and E_g ∝ |ΔB|/|ln ΔB| for η = 1. Therefore, the energy gap of the system vanishes faster than the mean field prediction or, equivalently, the effective mass diverges in a singular way as ν = 1/2 is approached. These results suggest that the effective Fermi velocity of the fermion v_F* goes to zero at ν = 1/2 even though the Fermi wave vector k_F is finite, and that the quasi-particles have a very short lifetime, of order (T/ε_F)^{-2/(1+η)} (1/ε_F), where T is the temperature and ε_F is the Fermi energy. However, the recent magnetic focusing experiment [10] suggests that the fermion has a long lifetime, or a long mean free path, which seems inconsistent with the above picture.
Since the one-particle Green's function is not gauge-invariant, the singular self-energy could be an artifact of the gauge choice. To address this question, we recently examined the lowest order perturbative corrections to the gauge-invariant density-density and current-current correlation functions [12]. It is found that there are important cancellations between the self-energy corrections and the vertex corrections due to the Ward-identity [12,13]. As a result, the density-density and the current-current correlation functions show a Fermi-liquid behavior for all ratios of ω and v_F q [12]. In particular, the edge of the particle-hole continuum ω = v_F q is essentially not changed, which may suggest a finite effective mass. From the current-current correlation function, the transport scattering rate (due to the transverse part of the gauge field) is given by 1/τ_tr ∝ ω^{4/(1+η)} ≪ ω after the cancellation (the scattering rate would be much larger, 1/τ_tr ∝ ω^{2/(1+η)} ≫ ω, had we ignored the vertex correction) [12]. Therefore, the fermions have a long transport lifetime, which explains the long mean free path in the magnetic focusing experiment. From these results, one may suspect whether the divergent mass obtained from the self-energy has any physical meaning.
However, due to the absence of the underlying quasi-particle picture, we cannot simply conclude that the fermions have a finite effective mass associated with the long lifetime obtained from the small q and ω behaviors of the density-density and the current-current correlation functions. In fact, it is found that 2k_F response functions show singular behaviors compared to the usual Fermi liquid theory [13]. We also like to mention that the recent experiments on the Shubnikov-de Haas oscillations [3] have observed some features which were interpreted as a sign of a divergent effective mass of the fermions as ν = 1/2 is approached.

In order to answer the question about the effective mass, it is important to examine other gauge-invariant quantities which may potentially show a divergent effective mass. In a recent paper [14], we calculated the lowest order perturbative correction to the compressibility with a fixed ΔB, which shows a thermally activated behavior when the chemical potential lies exactly at the middle of the successive effective Landau levels. It turns out that the corrections to the activation energy gap and the corresponding effective mass are singular and consistent with the previous self-consistent treatment of the self-energy [6]. Thus it is necessary to understand the apparently different behaviors of the density-density correlation function at ν = 1/2 and the activation energy gap determined from the compressibility near ν = 1/2.

One resolution of the problem was suggested by Stern and Halperin [15] within the usual Landau-Fermi-liquid theory framework. The idea is that both the effective mass and the Landau-interaction-function are singular in such a way that they cancel each other in the density-density correlation function. Recently, Stern and Halperin [15] put forward this idea and constructed a Fermi-liquid theory of the fermion-gauge system in the case of the Coulomb interaction. Even though the use of the Landau-Fermi-liquid theory, or equivalently the existence of well defined quasi-particles, can be marginally justified in the case of the Coulomb interaction, we feel that it is necessary to construct a more general framework which applies to an arbitrary two-particle interaction (1 < η ≤ 2 as well as η = 1), allows us to check the validity of the Fermi liquid theory, and tells us when the divergent mass shows up. In particular, it is worthwhile to provide a unified picture for understanding the previous theoretical studies [16-24].

In the usual Fermi-liquid theory, the QBE of the quasi-particles provides useful information about the low lying excitations.
II. THE MODEL AND THE QUANTUM BOLTZMANN EQUATION IN THE ABSENCE OF THE QUASI-PARTICLES
where the Lagrangian density L is

\[
L = \psi^\dagger (\partial_0 + i a_0)\, \psi
  - \frac{1}{2m}\, \psi^\dagger (\partial_i - i a_i + i A_i)^2\, \psi
  - \frac{i}{2\pi\tilde\phi}\, a_0\, \varepsilon_{ij}\, \partial_i a_j
  + \frac{1}{2} \int d^2 r'\; \psi^\dagger(\mathbf r)\psi(\mathbf r)\, v(\mathbf r - \mathbf r')\, \psi^\dagger(\mathbf r')\psi(\mathbf r') , \qquad (2)
\]

and the effective magnetic field seen by the fermions is

\[
\Delta B = \nabla \times \Delta \mathbf A = B - B_{1/2n} , \qquad (5)
\]

where the fluctuations δa = a − ⟨a⟩ of the Chern-Simons gauge field are incorporated via

\[
L = \psi^\dagger (\partial_0 + i\, \delta a_0)\, \psi
  - \frac{1}{2m}\, \psi^\dagger (\partial_i - i\, \delta a_i + i\, \Delta A_i)^2\, \psi
  - \frac{i}{2\pi\tilde\phi}\, \delta a_0\, \varepsilon_{ij}\, \partial_i\, \delta a_j
  + \frac{1}{2\,(2\pi\tilde\phi)^2} \int d^2 r'\; (\nabla \times \delta a(\mathbf r))\, v(\mathbf r - \mathbf r')\, (\nabla \times \delta a(\mathbf r')) . \qquad (7)
\]
After integrating out the fermions and including the gauge field fluctuations within the random phase approximation (RPA) [6], the effective action of the gauge field can be obtained as

\[
S_{\rm eff} = \frac{1}{2} \int \frac{d^2 q}{(2\pi)^2} \frac{d\omega}{2\pi}\;
\delta a_\mu(-q,-\omega)\, D^{-1}_{\mu\nu}(q,\omega;\Delta B)\, \delta a_\nu(q,\omega) , \qquad (8)
\]

where D^{-1}_{\mu\nu}(q,ω;ΔB) was calculated by several authors [6,29,30]. For our purpose, the 2×2 matrix form for D^{-1} is sufficient, so that μ, ν = 0, 1, where 1 represents the direction perpendicular to q. In particular, when ΔB = 0, the gauge field propagator has the following form [6]:

\[
D^{-1}(q,\omega) =
\begin{pmatrix}
\dfrac{m}{2\pi} & \dfrac{i q}{2\pi\tilde\phi} \\[6pt]
-\dfrac{i q}{2\pi\tilde\phi} & -\dfrac{i \gamma \omega}{q} + \tilde\chi(q)\, q^2
\end{pmatrix} , \qquad (9)
\]

where γ = 2 n_e / k_F and \tilde\chi(q) = 1/(24\pi m) + v(q)/(2\pi\tilde\phi)^2. Since the most singular contribution to the self-energy correction comes from the transverse part of the gauge field [6,11], we concentrate on the effect of the transverse gauge field fluctuations. In the infrared limit, the transverse gauge field propagator can be taken as [12,14]

\[
D_{11}(q,\omega) = \frac{1}{-\,\dfrac{i \gamma \omega}{q} + \chi\, q^{\eta}} , \qquad (10)
\]

where χ = 1/(24πm) + V_0/(2π\tilde\phi)² for η = 2 and χ = V_0/(2π\tilde\phi)² for 1 ≤ η < 2.
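As a quick numerical illustration of Eq. (10), the sketch below evaluates the overdamped transverse propagator at its characteristic relaxation frequency ω_q ∼ χ q^{1+η}/γ; the values of γ, χ, and η are arbitrary illustrative choices, not parameters taken from the paper.

import numpy as np

def D11(q, omega, gamma=1.0, chi=1.0, eta=1.0):
    """Transverse gauge-field propagator of Eq. (10): 1 / (-i*gamma*omega/q + chi*q**eta)."""
    return 1.0 / (-1j * gamma * omega / q + chi * q ** eta)

if __name__ == "__main__":
    gamma, chi, eta = 1.0, 1.0, 1.0              # illustrative values only
    for q in (0.1, 0.01, 0.001):
        w_q = chi * q ** (1.0 + eta) / gamma     # scale where the two terms in Eq. (10) balance
        D = D11(q, w_q, gamma, chi, eta)
        print(f"q={q:7.3f}  w_q={w_q:9.2e}  Re D11={D.real:10.3e}  Im D11={D.imag:10.3e}")

The characteristic frequency ω_q collapses rapidly with q, which is the origin of the strong low-frequency gauge fluctuations discussed above.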
Before explaining the way we construct the QBE for the fermion-gauge-field system, in which there is no well defined Landau-quasi-particle in general, we review the usual derivation of the QBE for the Fermi liquid with well defined quasi-particles [19,21]. The QBE is nothing but the equation of motion of the fermion distribution function. Therefore, it can be derived from the equation of motion of the non-equilibrium one-particle Green's function. Following Kadanoff and Baym [26], let us consider the following one-particle Green's function:

\[
G^<(x_1, x_2) = i\, \langle \psi^\dagger(x_2)\, \psi(x_1) \rangle , \qquad (11)
\]

where x_1 = (r_1, t_1) and x_2 = (r_2, t_2). At non-equilibrium, G^<(x_1, x_2) does not satisfy the translational invariance in space-time, so that it cannot be written as G^<(x_1 − x_2). By the following change of variables,

\[
(r_{\rm rel}, t_{\rm rel}) = x_1 - x_2 \quad {\rm and} \quad (r, t) = (x_1 + x_2)/2 ,
\]

G^<(x_1, x_2) can be written as

\[
G^<(r_{\rm rel}, t_{\rm rel}; r, t) = i \left\langle \psi^\dagger\!\left(r - \tfrac{r_{\rm rel}}{2},\, t - \tfrac{t_{\rm rel}}{2}\right) \psi\!\left(r + \tfrac{r_{\rm rel}}{2},\, t + \tfrac{t_{\rm rel}}{2}\right) \right\rangle .
\]

By the Fourier transformation with respect to the relative coordinates t_rel and r_rel, we get G^<(p, ω; r, t).
At equilibrium, G^< can be written as [26-28]

\[
G^<_0(p,\omega) = i f_0(\omega)\, A(p,\omega) , \qquad (14)
\]

where f_0(ω) = 1/(e^{ω/T} + 1) is the equilibrium Fermi distribution function and (Σ^R is the retarded self-energy)

\[
A(p,\omega) = \frac{-2\, {\rm Im}\, \Sigma^R(p,\omega)}
{\left[\omega - \xi_p - {\rm Re}\, \Sigma^R(p,\omega)\right]^2 + \left[{\rm Im}\, \Sigma^R(p,\omega)\right]^2} . \qquad (15)
\]

In the usual Fermi-liquid theory, Im Σ^R ≪ ω, so that A(p,ω) is a peaked function of ω around ω = ξ_p + Re Σ^R. In this case, the equilibrium spectral function can be taken as [26-28]

\[
A(p,\omega) = 2\pi\, \delta\!\left(\omega - \xi_p - {\rm Re}\, \Sigma^R(p,\omega)\right) . \qquad (16)
\]
Using this property, if the system is not far away from equilibrium, one can construct a closed equation for the fermion distribution function f(p; r, t) [26-28], which is the QBE. The linearized QBE for δf(p; r, t) = f(p; r, t) − f_0(p), where f_0(p) is the equilibrium distribution function, is the QBE of the quasi-particles in the Fermi-liquid theory. From this QBE, the equation of motion for the Fermi surface deformation, which is defined as [26-28]

\[
u(\theta; r, t) = \int d|p|\; \delta f(p; r, t) , \qquad (17)
\]

can also be constructed.
In the case of the fermion-gauge-field system, as mentioned in the introduction, Im Σ^R(ω) is larger than ω (1 < η ≤ 2) or comparable to ω (η = 1); i.e., strictly speaking, there is no well defined Landau-quasi-particle from the viewpoint of perturbation theory. However, Stern and Halperin [15] showed that, within a self-consistent treatment, the Fermi-liquid theory can be barely applied to the case of the Coulomb interaction, in the sense that Re Σ^R is logarithmically larger than Im Σ^R. Note that, in general, A(p,ω) at equilibrium is not a peaked function of ω anymore in the fermion-gauge-field system. Because of this, δf(p; r, t) does not satisfy a closed equation of motion even near equilibrium. However, if Σ^R is only a function of ω, A(p,ω) is still a well peaked function of ξ_p around ξ_p = 0 for sufficiently small ω [25]. This observation leads us to define the following generalized distribution function [25]:

\[
f(\theta, \omega; r, t) = -i \int \frac{d\xi_p}{2\pi}\; G^<(p, \omega; r, t) , \qquad (18)
\]
\[
u(\theta; r, t) = \int \frac{d\omega}{2\pi}\; \delta f(\theta, \omega; r, t) , \qquad (19)
\]

which corresponds to the variation of the local chemical potential in momentum space. This object can still be well defined even in the absence of a sharp Fermi surface. This is because one can always define a chemical potential at each angle θ, which is the energy required to put an additional fermion in the direction labeled by θ in momentum space. In the next section, we derive the linearized QBE for the generalized distribution function δf(θ, ω; r, t).
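The key observation above—that A(p, ω) remains sharply peaked in ξ_p even when it is broad in ω—can be illustrated with the spectral function of Eq. (15) and a model self-energy that depends on ω only. In the sketch below, the ansatz Σ^R(ω) = c sgn(ω)|ω|^{2/(1+η)} − i c|ω|^{2/(1+η)} uses the scaling quoted in the introduction, but the constant c, the value of η, and the scan ranges are illustrative assumptions, not quantities from the paper.

import numpy as np

ETA, C = 1.5, 1.0                      # illustrative choices, not values from the paper
S = 2.0 / (1.0 + ETA)                  # self-energy exponent quoted in the text

def spectral(xi, w):
    """A(p, w) of Eq. (15) with a momentum-independent model self-energy."""
    re = C * np.sign(w) * np.abs(w) ** S
    im = -C * np.abs(w) ** S           # retarded: Im Sigma^R <= 0
    return -2.0 * im / ((w - xi - re) ** 2 + im ** 2)

if __name__ == "__main__":
    w0 = 1e-3
    xi = np.linspace(-0.05, 0.05, 2001)
    A_xi = spectral(xi, w0)            # sharply peaked in xi_p at fixed small w
    print("peak position in xi:", xi[np.argmax(A_xi)], "  width ~", C * abs(w0) ** S)
    w = np.linspace(-0.05, 0.05, 2000) # grid chosen to avoid w = 0 exactly
    A_w = spectral(0.0, w)             # broad in w at xi_p = 0: no quasi-particle peak
    print("spread of A(0, w) over the scan: max/median =", A_w.max() / np.median(A_w))

At fixed small ω the peak in ξ_p has width of order |Im Σ^R(ω)|, which vanishes as ω → 0, while at fixed ξ_p = 0 the function of ω shows no comparable quasi-particle peak; this is what makes the ξ_p-integrated object of Eq. (18) well defined.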
III. QUANTUM BOLTZMANN EQUATION FOR THE GENERALIZED DISTRIBUTION FUNCTION
In the non-equi l i bri um G reen' s functi on form ul ati on, the fol l ow i ng m atri ces of the G reen' s functi on and the sel f-energy sati sfy the D yson' s equati on [ 28]
G = G t G < G > G t and~ = t < > t ;(20)
w here
G > (x 1 ;x 2 )= ih (x 1 ) y (x 2 )i; G < (x 1 ;x 2 )= ih y (x 1 ) (x 2 )i; G t (x 1 ;x 2 )= (t 1 t 2 )G > (x 1 ;x 2 )+ (t 2 t 1 )G < (x 1 ;x 2 ); G t (x 1 ;x 2 )= (t 2 t 1 )G > (x 1 ;x 2 )+ (t 1 t 2 )G < (x 1 ;x 2 );(21)
and > ; < ; t ; t are the associ ated sel f-energi es. (t)= 1 for t> 0 and zero for t< 0.
G R (retarded) and G A (advanced) G reen' s functi ons can be expressed i n term s of G t (ti m e-ordered),G t (anti ti m e-ordered),G < ,G > as fol l ow s.
G R = G t G < = G > G t ; G A = G t G > = G < G t :(22)
Si m i l arl y, R and A are gi ven by
R = t < = > t ; A = t > = < t :(23)
T he m atri x G reen' s functi on sati s es the fol l ow i ng equati ons ofm oti on
i @ @t 1 H 0 (r 1 ) G (x 1 ;x 2 )= (x 1 x 2 )Ĩ + Z dx 3~ (x 1 ;x 3 )G (x 3 ;x 2 ); i @ @t 2 H 0 (r 2 ) G (x 1 ;x 2 )= (x 1 x 2 )Ĩ + Z dx 3G (x 1 ;x 3 )~ (x 3 ;x 2 ) ;(24)
w here
H 0 (r 1 )= 1 2m @ @r 1 2 and H 0 (r 2 )= 1 2m @ @r 2 2 :(25)
For our purpose,we need onl y the equati on ofm oti on for
G < i @ @t 1 H 0 (r 1 ) G < (x 1 ;x 2 )= Z dx 3 t (x 1 ;x 3 )G < (x 3 ;x 2 ) < (x 1 ;x 3 )G t (x 3 ;x 2 ) ; i @ @t 2 H 0 (r 2 ) G < (x 1 ;x 2 )= Z dx 3 G t (x 1 ;x 3 ) < (x 3 ;x 2 ) G < (x 1 ;x 3 ) t (x 3 ;x 2 ) :(26)
Taki ng the di erence ofthe two equati ons ofEq. (26),and usi ng the fol l ow i ng rel ati ons
G t = R e G R + 1 2 (G < + G > ) ; G t = 1 2 (G < + G > ) R e G R ;(27)
we get
" i @ @t 1 + i @ @t 2 + 1 2m @ @r 1 2 1 2m @ @r 2 2 # G < (x 1 ;x 2 ) = Z dx 3 R e R (x 1 ;x 3 )G < (x 3 ;x 2 )+ < (x 1 ;x 3 )R e G R (x 3 ;x 2 ) R e G R (x 1 ;x 3 ) < (x 3 ;x 2 ) G < (x 1 ;x 3 )R e R (x 3 ;x 2 ) + 1 2 > (x 1 ;x 3 )G < (x 3 ;x 2 ) 1 2 < (x 1 ;x 3 )G > (x 3 ;x 2 ) 1 2 G > (x 1 ;x 3 ) < (x 3 ;x 2 )+ 1 2 G < (x 1 ;x 3 ) > (x 3 ;x 2 ) :(28)
N ear equi l i bri um , one can l i neari ze thi s equati on assum i ng that G =G G 0 and ~ =~ ~ 0 are sm al l ,w hereG 0 and~ 0 are m atri ces ofthe equi l i bri um G reen' s functi on and the sel f-energy. T he Fouri er transformG (
p 1 ;p 2 ) (p 1 = (p 1 ;! 1 ), p 2 = (p 2 ;! 2 )) of G (x 1 ;x 2 )
can be w ri tten i n term s ofthe new vari abl es de ned by
p = (p;!)= (p 1 p 2 )=2 and q = (q; )= p 1 + p 2 :(29)
U si ng these vari abl es, the Fouri er transform ed l i neari zed equati on of G < (p;q) can be w ri tten as
[ v F j qjcos p q ] G < (p;q) [R e R 0 (p + q=2) R e R 0 (p q=2) ] G < (p;q) + [G < 0 (p + q=2) G < 0 (p q=2) ] (R e R (p;q)) [ < 0 (p + q=2) < 0 (p q=2) ] (R e G R (p;q)) + [R e G R 0 (p + q=2) R e G R 0 (p q=2) ] < (p;q) = G < 0 (p) > (p;q)+ > 0 (p) G < (p;q) G > 0 (p) < (p;q) < 0 (p) G > (p;q) ;(30)
w here p q i s the angl e between p and q. In the presence ofan externalpotenti alU (q), one shoul d add a term U (q)[G < 0 (p+ q=2) G < 0 (p q=2)]i n the l efthand si de ofEq. (30).
W e next check that thi s expressi on i s equi val ent to the usualQ B E for G < (p;!;r;t), w here r and tare conjugate vari abl es ofq and . N ote that
F (p + q=2) F (p q=2) q @F @p + @F @! ;(31)
for sm al lj qjand . From Eq. (30) and Eq. (31),one can check that G < (p;!;r;t),w hi ch i s the Fouri er transform of G < (p;q),sati s es the fol l ow i ng equati on.
[! p 2 =2m ; G < (p;!;r;t)] [R e R 0 (p;!);G < (p;!;r;t)] [ (R e R (p;!));G < 0 (p;!) ] [ < 0 (p;!); (R e G R (p;!;r;t)] [ < (p;!;r;t);R e G R 0 (p;!) ] = G < 0 (p;!) > (p;!;r;t)+ > 0 (p;!) G < (p;!;r;t) G > 0 (p;!) < (p;!;r;t) < 0 (p;!) G > (p;!;r;t);(32)< (p;!)= X q Z 1 0 d p q m 2 Im D 11 (q; ) [(n 0 ( )+ 1)G < (p + q;! + )+ n 0 ( )G < (p + q;! ) ]; > (p;!)= X q Z 1 0 d p q m 2 Im D 11 (q; ) [n 0 ( )G > (p + q;! + )+ (n 0 ( )+ 1)G > (p + q;! ) ];(35)
w here n 0 ( )= 1=(e =T 1) i s the equi l i bri um boson di stri buti on functi on. T he realpart ofthe retarded sel f-energy i s gi ven by
R e R (p;!;q; )= Z d! 0 P Im R (p;! 0 ;q; ) ! ! 0 = Z d! 0 2 i P > (p;! 0 ;q; ) < (p;! 0 ;q; ) ! ! 0 ;
(36) w here P represents the pri nci pal val ue and Im R = 1 2i ( > < ) i s used. T he sam e rel ati ons hol d for the G reen' s functi on G R ,
R e G R (p;!;q; )= Z d! 0 2 i P G > (p;! 0 ;q; ) G < (p;! 0 ;q; ) ! ! 0(37)
and
Im G R = 1 2i (G > G < ).
A t equi l i bri um ,the G reen' s functi ons G < ,G > can be w ri tten as [26][27][28]
G < (p;!)= if 0 (!)A (p;!); G > (p;!)= i(1 f 0 (!))A (p;!);(38)
w here A (p;!) i s gi ven by Eq. (15). From these rel ati ons,the one-l oop sel f-energy R 0 at equi l i bri um can be w ri tten as
R 0 (p;!)= X q Z 1 0 d p q m 2 1 + n 0 ( ) f 0 ( p + q ) ! + i p + q + n 0 ( )+ f 0 ( p + q ) ! + i p + q +(39)
A sem phasi zed i n the previ ous secti on,i fthe sel f-energy depends onl y on the frequency !, A (p;!) at equi l i bri um i s a peaked functi on of p . T herefore,as far as the system i s not far away from the equi l i bri um ,the general i zed di stri buti on functi on f( p q ;!;q; ),w hi ch i s gi ven by the fol l ow i ng rel ati ons,can be wel lde ned at zero tem perature [ 25] :
Z d p 2 iG < (p;!;q; ) f( p q ;!;q; ) ; Z d p 2 iG > (p;!;q; ) 1 f( p q ;!;q; ) ;(40)
w here p q i s the angl e between p and q.
T he extensi on to the case of ni te tem peratures requi resspeci alcare because,even at equi l i bri um ,Im R 0 (p;!) i s know n to be di vergent [ 11]so that A (p;!),G < 0 ,and G > 0 at equi l i bri um are not wel lde ned. T herefore,the non-equi l i bri um G < and G > are al so not wel lde ned near equi l i bri um . In order to resol ve thi s probl em , l et us rst separate the gauge el d uctuati ons i nto two parts,i.e.,a(q; ) a (q; ) for < T and a(q; ) A ssum i ng that j pj j p 0 j k F and usi ng j p 0 pj k F j p 0 q p q j ,we get D 11 (q; )
a + (q; )for > T ,D 11 (k F j p 0 q p q j ;! 0 !)
. U si ng the above resul tsand the factthatG < and G > are wel l peaked functi ons of p near the equi l i bri um ,R e R can be w ri tten as
R e R = N (0) Z d p 0 q 2 Z d! 0 v 2 F R e D 11 (k F j p 0 q p q j ;! 0 !) f( p 0 q ;! 0 ;q; ) ; (41) w here N (0) = m 2
i s the densi ty of state. Si nce we assum e that the gauge el d i s at equi l i bri um , (R e R ),w hi ch i s the devi ati on from the equi l i bri um ,can be w ri tten as
(R e R )= N (0) Z d p 0 q 2 Z d! 0 v 2 F R eD 11 (k F j p 0 q p q j ;! 0 !) f( p 0 q ;! 0 ;q; ): (42)
W e al so assum e thatthe non-equi l i bri um sel f-energy dependsonl y on ! asthatofthe equil i bri um case,w hi ch i s pl ausi bl e as far as the system i s not far away from the equi l i bri um .
In orderto getthe equati on forf( p q ;!;q; ),we perform [
R d p =2 i ntegralon both si des ofthe Eq. (30). N ote that Z d p 2 R e G R (p;! 0 ;q; ) = Z d! 0 2 P (1 f( p q ;! 0 ;q; ))+ f( p q ;! 0 ;q; ) ! ! 0 = Z d! 0 2 P 1 ! ! 0 = 0 :(43)v F q cos p q ] f( p q ;!) N (0) Z d p 0 q 2 Z d! 0 v 2 F R e D 11 (k F j p 0 q p q j ;! 0 !) [f 0 (! 0 + =2) f 0 (! 0 =2) ] f( p q ;!) + N (0) Z d p 0 q 2 Z d! 0 v 2 F R e D 11 (k F j p 0 q p q j ;! 0 !) [f 0 (! + =2) f 0 (! =2) ] f( p 0 q ;! 0 ) = N (0) Z d p 0 q Z 1 0 d Z d! 0 v 2 F Im D 11 (k F j p 0 q p q j ; ) (! 0 ! + ) [ f( p q ;!) (1 f 0 (! 0 )+ n 0 ( )) f( p 0 q ;! 0 ) (f 0 (!)+ n 0 ( )) ] (! 0 ! ) [ f( p 0 q ;! 0 ) (1 f 0 (!)+ n 0 ( )) f( p q ;!)(f 0 (! 0 )+ n 0 ( )) ] :(44)F ( p 0 q p q ;! 0 !)= v 2 F R e D 11 (k F j p 0 q p q j ;! 0 !) :(45)[ v F q cos p q ] f( p q ;!) [R e R 0 (! + =2) R e R 0 (! =2) ] f( p q ;!) + N (0) Z d p 0 q 2 Z d! 0 F ( p 0 q p q ;! 0 !) [f 0 (! + =2) f 0 (! =2) ] f( p 0 q ;! 0 ) =N (0) Z d p 0 q 2 Z d! Z d! 0 v 2 F R e D 11 (k F j p 0 q p q j ;! 0 !) [f 0 (! 0 + =2) f 0 (! 0 =2) ] f( p q ;!) f( p 0 q ;!) = N (0) Z d p 0 q Z 1 0 d Z d! Z d! 0 v 2 F Im D 11 (k F j p 0 q p q j ; ) [ (! 0 ! + ) (1 f 0 (! 0 )+ n 0 ( ))+ (! 0 ! ) (f 0 (! 0 )+ n 0 ( )) ] f( p q ;!) f( p 0 q ;!) :(46)
In the presence of the external potenti al U (q; ), one shoul d add an addi ti onal term
!) = v 2 F R e D 11 (k F j j ;!). N ote that R e D 11 (k F j j ;!)= ( = 2 ) k 2+ F j j 2+ ! 2 + ( = ) 2 k 2+ 2 F j j 2+ 2 :(47)
It can be checked from Eq. (46) that f( p q ;!;q; ) i s ni te onl y w hen j !j at zero tem perature. T herefore,the frequency ! i n R e D 11 (k F j j ;!) i s cuto by . In thi s case, one can i ntroduce the dependent cuto
w here [ v F q cos p q ]u( p q ;q; )
F ( ;! = 0)= v 2 F k F 1 j j :(49)+ N (0) Z d p 0 q 2 F L andau ( p 0 q p q ) u( p q ;q; ) u( p 0 q ;q; ) = N (0) Z d p 0 q Z 1 0 d Z d! Z d! 0 v 2 F Im D 11 (k F j p 0 q p q j ; ) [ (! 0 ! + ) (1 f 0 (! 0 ))+ (! 0 ! ) f 0 (! 0 ) ] f( p q ;!) f( p 0 q ;!) : (50) N ote that N (0) R d p 0 q =2 F L andau ( p 0 q p q ) / 2 1+
one can get

\[
\Omega\, u_l(q,\Omega) - \frac{v_F q}{2} \left[ u_{l+1}(q,\Omega) + u_{l-1}(q,\Omega) \right]
+ N(0) \int \frac{d\theta}{2\pi}\; F_{\rm Landau}(\theta) \left[ 1 - \cos(l\theta) \right] u_l(q,\Omega)
\]
\[
= N(0) \int d\theta \int_0^\infty d\nu \int d\omega \int d\omega'\;
v_F^2\; {\rm Im}\, D_{11}(k_F|\theta|,\nu) \left[ 1 - \cos(l\theta) \right]
\left[ \delta(\omega' - \omega + \nu)\,(1 - f_0(\omega')) + \delta(\omega' - \omega - \nu)\, f_0(\omega') \right]
\delta f_l(\omega; q, \Omega) . \qquad (52)
\]
V. QUANTUM BOLTZMANN EQUATION IN THE PRESENCE OF EFFECTIVE MAGNETIC FIELD AND THE ENERGY GAP
Since the self-energy does not depend on the momentum P in the fermion-gauge-field system, the only term which contributes to the QBE is

\[
\frac{\mathbf P}{m} \times \Delta \mathbf B \cdot \frac{\partial}{\partial \mathbf P}\; \delta G^{<}(\mathbf P, \omega; q, \Omega) .
\]

The resulting QBE for the generalized Fermi surface displacement reads

\[
\left[\Omega - v_F q \cos\theta_{Pq}\right] u(\theta_{Pq}; q, \Omega)
- i\, \omega_c\, \frac{\partial}{\partial \theta_{Pq}}\, u(\theta_{Pq}; q, \Omega)
+ N(0) \int \frac{d\theta_{P'q}}{2\pi}\; F_{\rm Landau}(\theta_{P'q} - \theta_{Pq})
\left[ u(\theta_{Pq}; q, \Omega) - u(\theta_{P'q}; q, \Omega) \right]
\]
\[
= N(0) \int d\theta_{P'q} \int_0^\infty d\nu \int d\omega \int d\omega'\;
v_F^2\; {\rm Im}\, D_{11}(k_F|\theta_{P'q} - \theta_{Pq}|, \nu)
\left[ \delta(\omega' - \omega + \nu)\,(1 - f_0(\omega')) + \delta(\omega' - \omega - \nu)\, f_0(\omega') \right]
\left[ \delta f(\theta_{Pq}, \omega) - \delta f(\theta_{P'q}, \omega) \right] , \qquad (57)
\]

and for the smooth fluctuations the QBE reduces to

\[
\left[\Omega - v_F q \cos\theta_{Pq}\right] u(\theta_{Pq}; q, \Omega)
- i\, \omega_c\, \frac{\partial}{\partial \theta_{Pq}}\, u(\theta_{Pq}; q, \Omega) \approx 0 .
\]
On the other hand, for the rough fluctuations (l > l_c), the self-energy part dominates and we have a contribution which is of the order of E_g^{-(η-1)/(η+1)} (1 < η ≤ 2) or |ln E_g| (η = 1). Ignoring terms of relative order E_g^{(η-1)/(η+1)} (1 < η ≤ 2) or 1/|ln E_g| (η = 1) on both sides of the equation, we get

\[
\left[\Omega - v_F^* q \cos\theta_{Pq}\right] u(\theta_{Pq}; q, \Omega)
- i\, \omega_c^*\, \frac{\partial}{\partial \theta_{Pq}}\, u(\theta_{Pq}; q, \Omega)
= {\rm collision\ integral} , \qquad (59)
\]

where v_F^* = k_F/m^*, ω_c^* = ΔB/m^*, and m^*/m ∝ E_g^{-(η-1)/(η+1)} (1 < η ≤ 2) or |ln E_g| (η = 1).
Let us consider two different types of wave packets created along the Fermi surface. The frequency of revolution of a narrow wave packet, and hence the energy gap, is set by E_g = ω_c^* ∝ ΔB E_g^{(η-1)/(η+1)} (1 < η ≤ 2) or ΔB/|ln E_g| (η = 1). Solving this self-consistent equation for E_g, we get

\[
E_g \propto
\begin{cases}
|\Delta B|^{\frac{1+\eta}{2}} , & {\rm if}\ 1 < \eta \le 2 , \\[4pt]
|\Delta B| / |\ln \Delta B| , & {\rm if}\ \eta = 1 .
\end{cases} \qquad (60)
\]

This result is the same as the self-consistent treatment of HLR [6] and also the perturbative evaluation of the activation energy gap in the finite temperature compressibility [14]. Note that, for small q ≲ l_c ω_c/v_F, u(θ_{Pq}; q, t) corresponds to a smooth fluctuation of the Fermi surface, while, for large q ≳ l_c ω_c/v_F, even the smooth parts of u(θ_{Pq}; q, t), around θ_{Pq} ≈ ±π/2, correspond to a rough fluctuation; hence the whole function u(θ_{Pq}; q, t) corresponds to a rough fluctuation. Thus, we expect that the small q
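The self-consistency behind Eq. (60) can be checked with a few lines of fixed-point iteration: E_g = |ΔB|/m*(E_g) with the mass enhancement quoted above. The constants c and m in the sketch below are illustrative (set to one); only the exponents come from the text.

import numpy as np

def gap(delta_B, eta, m=1.0, c=1.0, n_iter=400):
    """Solve E_g = |delta_B| / m*(E_g), with m*(E)/m = c*E**(-(eta-1)/(eta+1))
    for 1 < eta <= 2 and m*(E)/m = c*|ln E| for eta = 1 (illustrative prefactors)."""
    E = abs(delta_B) / m
    for _ in range(n_iter):
        if np.isclose(eta, 1.0):
            m_star = m * c * abs(np.log(E))
        else:
            m_star = m * c * E ** (-(eta - 1.0) / (eta + 1.0))
        E = abs(delta_B) / m_star
    return E

if __name__ == "__main__":
    for eta in (1.0, 1.5, 2.0):
        E1, E2 = gap(1e-4, eta), gap(1e-5, eta)
        slope = np.log(E1 / E2) / np.log(10.0)   # apparent exponent of E_g vs |dB|
        print(f"eta={eta}: E_g(1e-4)={E1:.3e}  E_g(1e-5)={E2:.3e}  "
              f"slope={slope:.3f}  (1+eta)/2={(1 + eta) / 2:.2f}")

For 1 < η ≤ 2 the measured slope approaches (1+η)/2, reproducing Eq. (60); for η = 1 it stays close to 1 with the logarithmic correction.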
\[
\Omega\, v_l = \frac{v_F q}{2} \left[ \frac{v_{l+1}}{\sqrt{g(l)\, g(l+1)}} + \frac{v_{l-1}}{\sqrt{g(l)\, g(l-1)}} \right] ,
\qquad v_l = \sqrt{g(l)}\; u_l , \qquad (61)
\]

\[
\Omega\, v_l = \frac{l\, \omega_c}{g(l)}\, v_l
+ \frac{v_F q}{2} \left[ \frac{v_{l+1}}{\sqrt{g(l)\, g(l+1)}} + \frac{v_{l-1}}{\sqrt{g(l)\, g(l-1)}} \right] . \qquad (63)
\]
H ( P q ;l)= l ! c g(l) + v F q g(l) cos( P q ) :(65)
A ssum i ng g(l)i sa sl ow l y varyi ng functi on ofl,one arri vesatthe fol l ow i ng si m pl e cl assi cal equati ons ofm oti on
_ P q = ! c g(l) ; _ l= v F q g(l) si n( P q ) :(66)
From thi s equati on,one can easi l y show that
l= v F q ! c cos( P q )+ l 0 ;(67)_ P q = ! c g( v F q ! c cos( P q )+ l 0 ) ;(68)
w hi ch descri bes a peri odi c m oti on. T he angul ar frequency ofthe peri odi c m oti on i s gi ven
by ! = 2 ! c R 2 0 g( v F q ! c cos( P q )+ l 0 ) d P q :(69)
The above classical frequency ω has a quantum interpretation. It is the gap between neighboring energy levels whose energy is close to the classical energy associated with the classical motion described by Eq. (67). In particular, the cyclotron frequency ω_cyc is given by the gap between the lowest level and the first excited level. Therefore,

\[
\omega_{\rm cyc} = \frac{2\pi\, \omega_c}{\displaystyle\int_0^{2\pi} g\!\left( \frac{v_F q}{\omega_c} \cos(\theta_{Pq}) + 1 \right) d\theta_{Pq}} . \qquad (70)
\]

To analyze the behavior of ω_cyc, we first make an approximation for Eq. (70) as

\[
\omega_{\rm cyc} = \frac{\omega_c}{g\!\left( \alpha\, \frac{v_F q}{\omega_c} + 1 \right)} , \qquad (71)
\]

where α is a non-zero constant between 0 and 1. We see that ω_cyc(q) has a sharp dependence on q around q ∼ ω_c/v_F. The smaller the ω_c, the sharper the q dependence. This sharp dependence is not due to the singular gauge interaction, but merely a consequence of the fact that g(1) ≠ g(2) ≠ ⋯.
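A rough numerical reading of Eqs. (70)-(71) is given below. The crossover function g(l) is not specified at this point in the text, so the sketch uses an assumed smooth interpolation between g ≈ 1 at small l and a saturated value g_∞ ~ m*/m at large l; only the angular-average structure of Eq. (70) is taken from the text, and all parameter values are illustrative.

import numpy as np

def g_model(l, l_c=50.0, g_inf=20.0):
    """Assumed crossover: g ~ 1 for |l| << l_c, saturating at g_inf for |l| >> l_c."""
    x = (np.abs(l) / l_c) ** 2
    return 1.0 + (g_inf - 1.0) * x / (1.0 + x)

def omega_cyc(q, omega_c, v_F=1.0, g=g_model):
    """Eq. (70): omega_cyc = 2*pi*omega_c / Int_0^{2pi} g(v_F*q/omega_c*cos(theta) + 1) dtheta."""
    theta = np.linspace(0.0, 2.0 * np.pi, 20001)
    return omega_c / np.mean(g(v_F * q / omega_c * np.cos(theta) + 1.0))

if __name__ == "__main__":
    omega_c = 1e-3
    for q in (0.5e-3, 1e-3, 1e-2, 0.1, 1.0):     # v_F*q/omega_c from 0.5 up to 1000
        print(f"q={q:8.1e}  omega_cyc/omega_c = {omega_cyc(q, omega_c) / omega_c:6.3f}")

With this assumed g(l), ω_cyc ≈ ω_c for v_F q ≲ ω_c and is suppressed by roughly the factor g_∞ ~ m*/m once v_F q/ω_c exceeds the crossover scale, which is the qualitative behavior described in the following paragraphs.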
As q increases, g(α v_F q/ω_c + 1) becomes larger and larger, and thus we expect that ω_cyc(q) decreases. When q exceeds a crossover value q_c, g(α v_F q/ω_c + 1) saturates at a very large value and ω_cyc(q) is drastically reduced. This phenomenon is a result of the singular gauge interaction. The crossover momentum q_c is determined from

\[
\frac{v_F q_c}{\omega_c} = l_c = k_F \left[ \frac{\gamma\, \omega_{\rm cyc}(q \to \infty)}{\chi} \right]^{-\frac{1}{1+\eta}} ,
\qquad
\omega_{\rm cyc}(q \to \infty) = \frac{\omega_c}{C(\eta)\, \left[\omega_{\rm cyc}(q \to \infty)\right]^{-\frac{\eta-1}{1+\eta}}}
\quad {\rm for}\ 1 < \eta \le 2 ,
\]
\[
\omega_{\rm cyc}(q \to \infty) = \frac{\omega_c}{C(\eta=1)\, |\ln \omega_{\rm cyc}(q \to \infty)|}
\quad {\rm for}\ \eta = 1 , \qquad (72)
\]

where C(η) is a constant determined by v_F, γ, χ, and η. We find

\[
q_c = B(\eta)\, \sqrt{\omega_c} \quad {\rm for}\ 1 < \eta \le 2 ,
\qquad
q_c = B(\eta=1)\, \sqrt{\omega_c\, |\ln \omega_c|} \quad {\rm for}\ \eta = 1 , \qquad (73)
\]

where B(η) = m (χ/γ)^{1/(1+η)} \sqrt{C(η)}. When q ≫ q_c, the cyclotron frequency saturates at the following values:

\[
\omega_{\rm cyc}(q \to \infty) = \left( \omega_c / C(\eta) \right)^{\frac{1+\eta}{2}} \quad {\rm for}\ 1 < \eta \le 2 ,
\qquad
\omega_{\rm cyc}(q \to \infty) = \frac{\omega_c / C(\eta=1)}{|\ln( \omega_c / C(\eta=1))|} \quad {\rm for}\ \eta = 1 .
\]
When v_F q/ω_c ≲ 1, the cyclotron frequency is expected to have the following scaling form:

\[
\omega_{\rm cyc}(q) \propto (\omega_c)\,\cdots
\]

This is because, as q decreases below a value of order ω_c/v_F, the l = 1 modes start to have a higher energy than that of the l = 2 modes, and the lowest lying modes cross over to the l = 2 modes.

In the absence of the singular gauge interaction, according to the picture developed in Ref. 30, one expects that the intra-Landau-level plasma mode near ν = 1/2 has a gap 2ω_c for q < ω_c/v_F. The gap is expected to be reduced by the factor 2 when q > ω_c/v_F. In the presence of the singular gauge interaction, we find that the plasma mode has a gap of order 2ω_c^* (since g(l ≈ 2) ≠ 1) for q < ω_c/v_F.

Setting E_g = ω_c^*, we get E_g ∝ |ΔB|^{(1+η)/2} for 1 < η ≤ 2 and E_g ∝ |ΔB|/|ln ΔB| for η = 1. These are consistent with the previous results [6,14,15].
Appendix
In this appendix, we consider the QBE at finite temperatures. Recall that Im Σ^R(p,ω) at equilibrium diverges at finite temperatures, with no cutoff [11]. In this case, it is clear from Eq. (15)
A t the m ean el d l evel ,one takes i nto account onl y the average of the stati sti cal m agneti c el d due to the attached m agneti c ux. If the i nteracti on between ferm i onsi si gnored,the system can bedescri bed asthefree ferm i onsi n an e ecti ve m agneti c el d B = B B 1=2n , w here B 1=2n = 2nn e hc=e i s the averaged stati sti cal m agneti c el d and n e i s the densi ty ofel ectrons. T herefore,i n the m ean el d theory,the FQ H states w i th = p 2n p+ 1 can be descri bed as the IQ H state ofthe com posi te ferm i ons w i th p l l ed Landau l evel s occupi ed i n an e ecti ve m agneti c el d B [ 4-6] . In parti cul ar, B = 0 atthe l l i ng fracti ons = 1=2n so thatthe ground state ofthe system i sthe l l ed Ferm isea w i th a wel lde ned Ferm iwave vector k F [ 6, 7] . A s a resul t,the Shubni kov-de H aas osci l l ati ons near = 1=2 can be expl ai ned by the presence ofa wel lde ned Ferm i wave vector at = 1=2 [ 6] . T he m ean el d energy gap ofthe system w i th = p 2p+ 1 i n the p ! 1 l i m i t i s gi ven by E g = e B m c ,w here m i s the m ass ofthe com posi te ferm i ons. N ote that,i n the l arge ! c l i m i t,the ni te m i s caused by the C oul om b i nteracti on between the ferm i ons. T he e ecti ve m ass m shoul d be chosen such that the Ferm ienergy E F i s gi ven by the C oul om b energy scal e. T here are a num ber of experi m ents w hi ch show that there i s a wel l de ned Ferm i wave vector at = 1=2 [ 8-10] . T hey observed the geom etri cal resonances between the sem i cl assi calorbi tofthecom posi teferm i onsand anotherl ength scal earti ci al l y i ntroduced to the system near = 1=2. H owever,i t i s possi bl e that the uctuati ons and the two-parti cl e i nteracti ons,w hi ch are i gnored i n the m ean el d theory,are very i m portant.N ote thatthe densi ty uctuati ons correspond to the uctuati ons of the stati sti cal m agneti c el d. T herefore, the densi ty uctuati ons above the m ean el d state i nduces the gauge el d uctuati ons [ 5, 6] . If the ferm i ons are i nteracti ng vi a a two-parti cl e i nteracti on v(q) = V 0 =q 2 (1 2), the e ects ofthe gauge el d uctuati ons can be m odi ed. In fact,the gauge el d uctuati ons becom e m ore si ngul ar as the i nteracti on range becom es shorter (l arger ). T he reason i s that the l onger range i nteracti on (sm al l er ) suppresses m ore e ecti vel y the densi ty uctuati ons, thus i t i nduces the l ess si ngul ar gauge el d uctuati ons. T herefore, i t i s i m portantto exam i ne w hetherthe m ean el d Ferm i -l i qui d state i sstabl e agai nstthe gauge el d uctuati ons w hi ch al so i ncl udes the e ects ofthe two-parti cl e i nteracti on. O ne way to study the stabi l i ty ofthe m ean el d Ferm i -l i qui d state i s to exam i ne the l ow energy behavi or ofthe sel f-energy correcti on i nduced by the gauge el d uctuati ons.
have observed som e features w hi ch were i nterpretated as a si gn ofthe di vergent e ecti ve m ass ofthe ferm i ons as = 1=2 i s approached. T he experi m ental l y determ i ned e ecti ve m ass di verges i n a m ore si ngul ar way than any theoreti calpredi cti on. H owever,thei r determ i nati on of the e ecti ve m ass i s based on a theory for the non-i nteracti ng ferm i ons and al so the di sorder e ect i s very i m portant near = 1=2 because the stati c uctuati ons of the densi ty due to the i m puri ti es i nduces an addi ti onal stati c random m agneti c el d. Si nce there i s no sati sfactory theory i n the presence ofdi sorder,i ti sdi cul tto com pare the present theory and the experi m ents.
sel f-energy correcti on w hi ch gi ves the si ngul ar m ass correcti on,the other one com es from the general i zed Landau-i nteracti on-functi on,and nal l y i t contai ns the col l i si on i ntegral .T hese quanti ti es are cal cul ated to the l owest order i n the coupl i ng to the gauge el d.B y studi ng the dynam i calproperti es ofthe col l ecti ve m odes usi ng the Q B E,we nd thatthe sm ooth uctuati onsofthe Ferm isurface (orthe sm al langul arm om entum m odes) show the usualFerm i -l i qui d behavi or,w hi l e the rough uctuati ons (or the l arge angul ar m om entum m odes)show the si ngul arbehavi ordeterm i ned by the si ngul arsel f-energy correcti on.H eretheangul arm om entum i stheconjugatevari abl eoftheangl em easured from a gi ven di recti on i n m om entum space. T here i sa forward scatteri ng cancel l ati on between the si ngul ar sel f-energy correcti on and the si ngul ar (general i zed) Landau-i nteracti on-functi on and a si m i l arcancel l ati on exi stsi n the col l i si on i ntegralasfarasthe sm al langul arm om entum m odes l< l c (l c / 1 1+ ,w here i s the sm al lexternalfrequency) are concerned. H owever,i n the case ofthe l arge angul ar m om entum m odes l> l c ,the contri buti on from theLandau-i nteracti on-functi on becom esvery sm al lso thatthesel f-energy correcti on domi natesand thecol l i si on i ntegralal so cannotbei gnored i n general .In thi scasethebehavi ors ofthe l ow l yi ng m odes are very di erent from those i n the Ferm il i qui ds. Ifwei gnorethecol l i si on i ntegral ,i tcan beshow n thatthesystem hasa l otofcol l , / q=j l n qj( = 1) and = v F q w hi l e there i s the parti cl e-hol e conti nuum bel ow / q 1+ 2 (1 < 2), / q=j l n qj( = 1). T he di sti ncti on between these two types ofl ow l yi ng exci tati ons are obscured by the exi stence ofthe col l i si on i ntegral . From the above resul ts,we see that the densi ty-densi ty and the current-current correl ati on functi ons,bei ng dom i nated by the sm al langul ar m om entum m odes l< l c ,show the usualFerm i -l i qui d behavi or. O n the other hand,the energy gap away from = 1=2 i s determ i ned by the behavi ors ofthe l arge angul ar m om entum m odes l> l c so that the si ngul ar m ass correcti on show s up i n the energy gap ofthe system . T heoutl i neofthepaperi sasfol l ow s.In secti on II,wei ntroducethem odeland expl ai n the way we contruct the Q B E w i thout assum i ng the exi stence of the quasi -parti cl es. In secti on III, the Q B E for the general i zed di stri buti on functi on i s deri ved for B = 0. In secti on IV ,we construct the Q B E for the general i zed Ferm isurface di spl acem ent for B = 0. W e al so determ i ne the general i zed Landau-i nteracti on-functi on and di scuss i ts consequences. In secti on V ,T he Q B E i n the presence ofa sm al l B i s constructed and the energy gap of the system i s determ i ned. In secti on V I, W e di scuss the col l ecti ve exci tati ons ofthe system for the cases of B = 0 and B 6 = 0. W e concl ude the paper and di scuss the i m pl i cati ons ofour resul ts to experi m ents i n secti on V II.W e concentrate on the zero tem perature case i n the m ai n text and provi de the deri vati on ofthe Q B E at ni te tem peratures i n the appendi x,w hi ch requi res som e speci altreatm ents com pared to the zero tem perature counterpart.
A SI-PA R T IC LE S T he two di m ensi onalel ectrons i nteracti ng vi a a two-parti cl e i nteracti on can be transform ed to the com posi te ferm i onsi nteracti ng vi a the sam e two-parti ce i nteracti on and al so coupl ed to an appropri ate C hern-Si m ons gauge el d w hi ch appears due to the stati sti cal m agneti c ux quanta attached to each el ectron [ 5, 6] . T he m odelcan be constructed as fol l ow s ( h = e = c = 1).
w here represents the ferm i on el d and~ i s an even num ber 2n w hi ch i s the num ber of ux quanta attached to an el ectron, and v(r) / V 0 =r i s the Fouri er transform of v(q) = V 0 =q 2 (1 2) w hi ch represents the i nteracti on between the ferm i ons. A i s the externalvector potenti al(B = r A ) and we choose the C oul om b gauge r a = 0 for the C hern-Si m ons gauge el d. N ote that the i ntegrati on over a 0 enforces the fol l ow i ch represents the fact that~ num ber of ux quanta are attached to each el ectron. T he saddl e poi nt ofthe acti on i s gi ven by the fol l ow i ng condi ti ons: r hai= 2 ~ n e = B 1=2n and ha 0 i= 0 : (4) T herefore, at the m ean el d l evel , the ferm i ons see an e ecti ve m agneti c el d ( A = A hai):
w hi ch becom es zero at the Landau l evel l l i ng factor = 1=2n. T he IQ H e ect of the ferm i ons m ay appear w hen the e ecti ve Landau l evel l l i ng factor p = 2 n e B becom es an i nteger. T hi s i m pl i es that the realexternal m agneti c el d i s gi ven by B = B 1=2n + B = 2 n e 2n p+ 1 p w hi ch corresponds to a FQ H state ofel ectronsw i th the l l i ng factor = p 2n p+ 1 . T he uctuati onsoftheC hern-Si m onsgauge el d, a = a ha i,can bei ncorperated as fol l ow s.
w here i s the angl e between p and a gi ven di recti on. T he l i neari zed quantum B ol tzm ann equati on for f( ;!;r;t)= f( ;!;r;t) f 0 (!) can be deri ved,w hi ch i s anal ogous to the Q B E ofthe quasi -parti cl es i n the usualFerm i -l i qui d theory. From thi s Q B E,one can al soconstruct the equati on ofm oti on for the general i zed Ferm isurface di spl acem ent[ 25]
[! p 2
2=2m R e R (p;!;r;t);G < (p;!;r;t)] [ < (p;!;r;t);R e G R (p;!;r;t)] = > (p;!;r;t)G < (p;!;r;t) G > (p;!;r;t) < (p;!;r;t): (34) W e di rectl y dealw i th Eq. (30) i n m om entum space (q; ) rather than the l ong ti m e, l ong wave l ength expansi on i n realspace (r;t)gi ven by Eq. (32).Forsi m pl i ci ty,we assum e thatthe gauge el d i si n equi l i bri um . T he non-equi l i bri um one-l oop sel f-energy correcti on,w hi ch i s gi ven by the di agram i n Fi g. 1,can be w ri tten as[ 27,28]
then exam i ne the e ectsofa + ,a separatel y. T he cl assi cal uctuati on a ofthe gauge el d can be regarded as a vector potenti alw hi ch corresponds to a stati c but spati al l y varyi ng m agneti c el d b = r a . In order to rem ove the di vergence i n the sel f-energy,one can consi derthe one-parti cl e G reen' sfuncti onG G (P ;!;r;t)asa functi on ofa new vari abl e P = p a . Si nce we e ecti vel y separate outa uctuati ons, the sel f-energy,w hi ch appears i n the equati on ofm oti on gi ven by Eq. (24),shoul d contai n onl y a + uctuati ons and i s free of di vergences. T herefore, G < G < (P ;!;r;t) i s wel lde ned and i ts equati on ofm oti on i s gi ven by the Fouri er transform ofEq. (30) w i th the fol l ow i ng repl acem ent. In the rst pl ace,the vari abl e p shoul d be changed to a new vari abl eP = p a .Secondl y,the sel f-energy~ shoul d be changed to~ + w hi ch contai ns now onl y a + uctuati ons. Fi nal l y,the equati on ofm oti on contai ns a term w hi ch depends on b . W e argued i n the appendi x that i gnori ng thi s term does not a ect the physi cal i nterpretati ons ofthe Q B E,w hi ch w i l lappear i n secti ons IV ,V ,and V I.W e provi de the detai l s ofthe anaysi s for the ni te tem perature case i n the appendi x. From now on,we w i l ladopt the notati on that G < shoul d be understood as G < for ni te tem peratures. For exam pl e,the general i zed di stri buti on functi on at ni te tem peratures i s gi ven by Eq. (40) w i th the repl acem entthatG < ;G > ! G < ;G > . T he sam e type ofabuse ofnotati on appl i es to the sel f-energy,w here onl y a + uctuati ons shoul d be i ncl uded,i.e. the Q B E i s val i d at ni te T ,provi ded that the l ower cuto T i s i ntroduced for the frequency i ntegral s. In Eq. (35), one can change the vari abl es such that p 0 = p + q and ! 0 = ! + . T he gauge el d propagator can be w ri tten i n term s ofthe new vari abl es as D 11 (q; ) = D 11 (p 0 p;! 0 !),w here (p;!)and (p 0 ;! 0 )representthe i ncom i ng and outgoi ng ferm i ons.
T
hus the fourth and the fth term s i n the l eft hand si de ofthe Q B E (gi ven by Eq. (30)) vani sh after R d p =2 i ntegrati on. A fter thi s i ntegral ,usi ng Eqs. (36),(40) and (42),the rem ai ni ng parts ofthe Eq. (30) can be w ri tten as ( f( p q ;!) f( p q ;!;q; ))
Som e expl ai nati ons of each term i n the Eq. (44) are i n order. In the rst pl ace, as m enti oned i n the previ ous secti on, the Eq. (44) i s the anal og of the usual Q B E for the quasi -parti cl e di stri buti on functi on f(p;q; ),thus the structures ofthe Q B Es i n both cases are si m i l ar. T he rst term on the l eft hand si de of the equati on corresponds to the free ferm i ons. T he second term on the l eft hand si de corresponds to the sel f-energy correcti on w hi ch renorm al i zes the m ass ofthe ferm i ons. T he thi rd term on the l eft hand si de can be regarded asthe contri buti on from the general i zed Landau-i nteracti on functi on w hi ch can be de ned as
Note that this generalized Landau-interaction function contains the frequency dependence as well as the usual angular dependence. This is due to the fact that the gauge interaction is retarded in time, and it is also one of the major differences between the fermion-gauge-field system and the usual Fermi liquid. The right hand side of the equation is nothing but the collision integral I_collision and is given by the Fermi golden rule. Thus, Eq. (44) can be written as
v_F q cos θ_{pq} U(q, Ω) in the left hand side of Eq. (46), which requires a careful derivation. Note that the contributions from the self-energy and the generalized Landau-interaction-function are combined in the left hand side of the QBE. Even though the above equation is already useful, it is worthwhile to transform it into a more familiar one. In the next section, we provide the approximate QBE for u(θ_{pq}; q, Ω), which is more useful for understanding the low energy excitations of the system.

IV. QUANTUM BOLTZMANN EQUATION FOR THE GENERALIZED FERMI SURFACE DISPLACEMENT

In order to transform the QBE given by Eq. (45) or Eq. (46) to a more familiar form, it is necessary to simplify the generalized Landau-interaction-function F(θ, ω). We approximate F(θ, ω) by the following F_Landau(θ):
\[
F_{\rm Landau}(\theta) =
\begin{cases}
F(\theta, \omega = 0) , & {\rm if}\ |\theta| > \theta_c , \\[4pt]
F(\theta = \theta_c, \omega = 0) , & {\rm otherwise} ,
\end{cases}
\]
Using this approximation and f_0(ω) = θ(−ω) at zero temperature, the QBE given by Eq. (46) at zero temperature can be transformed into (the finite temperature case is discussed in the appendix)
the contribution from the real part of the retarded self-energy. On the other hand, N(0) ∫ dθ_{p'q}/(2π) F_Landau(θ_{p'q} − θ_{pq}) u(θ_{p'q}; q, Ω) represents the Landau-interaction part. For smooth fluctuations of the generalized Fermi surface displacement, u(θ; q, Ω) is a slowly varying function of θ, so that there is a forward scattering cancellation between the self-energy part and the Landau-interaction part. Therefore, for smooth fluctuations, the singular behavior of the self-energy does not appear in the dynamics of the generalized Fermi surface displacement. On the other hand, for rough fluctuations, u(θ; q, Ω) is a rapidly varying function. In this case, the Landau-interaction part becomes very small and the self-energy part dominates. Thus, for rough fluctuations, the dynamics of the generalized Fermi surface displacement should show the singular behavior of the self-energy. From these results, one can expect that the smooth and the rough fluctuations provide very different physical pictures for the elementary excitations of the system. One can make this observation more concrete by looking at the QBE in angular momentum l (which is the conjugate variable of θ) space. By the following Fourier expansion,
Note that, in the 1 − cos(lθ) factor inside the integral on the left hand side of the QBE given by Eq. (52), the 1 comes from the self-energy part and the cos(lθ) comes from the Landau-interaction part. For l < l_c, 1 − cos(lθ) ≈ (lθ)²/2, and the additional θ² dependence makes the angle integral less singular because the typical θ is of the order of Ω^{1/(1+η)}. Due to this cancellation for the small angle (forward) scattering, the correction from the self-energy part and the Landau-interaction part becomes of the order of Ω^{4/(1+η)}, so that it does not cause any singular correction. Note that a similar type of cancellation occurs in the collision integral. Therefore, for the small angular momentum modes l < l_c, the system behaves like the usual Fermi liquid. For l > l_c, the cos(lθ) factor becomes highly oscillating as a function of θ, so that the Landau-interaction part becomes very small. As a result, the self-energy part dominates and the dispersion relation for the dynamics of the generalized Fermi surface displacement is changed from Ω = v_F q to Ω ∝ q^{(1+η)/2} (1 < η ≤ 2) or Ω ∝ q/|ln q| (η = 1). Also, a similar thing happens in the collision integral, i.e., the cos(lθ) factor does not contribute and the remaining contribution shows the singular behavior of the imaginary part of the self-energy, so that the collision integral cannot be ignored for 1 < η ≤ 2 and can be marginally ignored for η = 1.
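The forward-scattering cancellation can be made quantitative with a small numerical experiment. The sketch below integrates a model Landau kernel F_Landau(θ) ∝ 1/|θ|^η above the Ω-dependent cutoff θ_c ∝ Ω^{1/(1+η)}, weighted by the 1 − cos(lθ) factor of Eq. (52); the overall prefactor, the value of η, and the sample Ω are illustrative assumptions, not values from the paper.

import numpy as np

def landau_kernel(l, theta_c, eta=1.5, npts=400001):
    """Int_{theta_c}^{pi} dtheta (1 - cos(l*theta)) / theta**eta (doubled for +/- theta).
    Model form of N(0)*Int F_Landau*(1 - cos(l*theta)); prefactor set to one."""
    theta = np.linspace(theta_c, np.pi, npts)
    d = theta[1] - theta[0]
    return 2.0 * np.sum((1.0 - np.cos(l * theta)) / theta ** eta) * d

if __name__ == "__main__":
    eta = 1.5
    Omega = 1e-6
    theta_c = Omega ** (1.0 / (1.0 + eta))      # cutoff angle quoted in the text
    l_c = 1.0 / theta_c
    for l in (1, 10, int(0.1 * l_c), int(10 * l_c)):
        print(f"l = {l:8d}   kernel = {landau_kernel(l, theta_c, eta):10.3e}")
    print(f"l_c ~ {l_c:.0f};  theta_c**(1-eta) = {theta_c ** (1.0 - eta):10.3e}  (large-l scale)")

For l ≪ l_c the kernel is small and regular (the (lθ)²/2 expansion applies), while for l ≫ l_c it saturates at the singular scale set by θ_c^{1−η} ∝ Ω^{−(η−1)/(1+η)}, the same factor that renormalizes the mass; this is the numerical content of the small-l/large-l dichotomy described above.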
We follow Hansch and Mahan [31] to derive the QBE in the presence of the finite effective magnetic field ∆B. The only difference between the case of ∆B ≠ 0 and that of ∆B = 0 is that the external vector potential ∆A = (1/2) r × ∆B enters the kinetic energy in the equation of motion of the one-particle Green's function [31]. The same procedure used in the case of ∆B = 0 can be employed to derive the QBE from the equation of motion of the one-particle Green's function. The resulting equation can be transformed to a convenient form by a change of variables P = p − ∆A; one can then construct the QBE for G^<(P, ω; q, Ω), which is now a function of P [31]. As a result, the change we have to make for the case of ∆B ≠ 0 (compared to the case of ∆B = 0 given by Eq. (30)) is that all the momentum variables should be changed from p to P and the following additional terms should be added to Eq. (30) [31].
< (P ;!;q; )+ B @ @P < 0 (P ;!;q; ) @ @P R e G R (P ;!;q; ) :
In principle, the self-energy and the Green's function in the QBE also depend on the effective magnetic field ∆B. In the semiclassical approximation for very small ∆B, we ignore this type of ∆B dependence and, instead of that, we introduce a low energy cutoff E_g in the frequency integrals, which is the energy gap of the system. Then, after the integration ∫ dε_P/2π, the equation becomes that of Eq. (30) with a low energy cutoff E_g and it also contains an additional term given by
ω_c (∂/∂θ_{P_q}) f(θ_{P_q}, ω; q, Ω),  (56)
where ω_c = ∆B/m. After ∫ dω/2π, the QBE for a generalized Fermi surface displacement can be written as
where a low energy cutoff E_g is introduced in the frequency integrals. In particular, the angle cutoff θ_c in F_Landau(θ) should be changed from Ω^{1/(1+η)} to E_g^{1/(1+η)}; similar interpretations can be made as in the case of ∆B = 0. For the smooth fluctuations (l < l_c ≈ 1/θ_c), there is a cancellation between the self-energy and the Landau-interaction parts. As a result, we have a term which is of the order of E_g^{3/(1+η)}, which can be ignored compared to Ω because E_g is very small near ν = 1/2 or ∆B = 0. Also, a similar thing happens in the collision integral. Therefore, the QBE for the smooth fluctuations can be written as
Note that the revolution of these wave packets is governed by two different frequencies, ω_c and ω̃_c. The frequency of revolution of the broad wave packet (see Fig. 2 (a)) is given by ω_c because it mainly consists of small angular momentum modes. On the other hand, if we ignore the collision integral in the QBE, the frequency of revolution of the narrow wave packet (see Fig. 2 (b)) is given by ω̃_c because it mainly contains the large angular momentum modes. The energy gap of the system can be obtained by quantizing the motion of revolution and taking the smallest quantized frequency as the energy gap of the system. Therefore, the energy gap of the system is given by E_g = ω̃_c ∝ ω_c^{(1+η)/2}, or ω_c/|ln ω_c| for η = 1.
We see that the divergent effective mass shows up in the energy gap E_g. More detailed discussions of the low lying excitations described by the QBE can be found in the next section.

VI. COLLECTIVE EXCITATIONS

Let us first study the collective excitations of the system with ∆B = 0 by looking at the QBE given by Eq. (52). We ignore the collision integral for the time being and discuss its influence later. In the absence of the collision integral, Eq. (52) can be considered as the Schrödinger equation of an equivalent tight binding model in the angular momentum space. It is convenient to rewrite Eq. (52) as
… F_Landau(θ) [1 − cos(lθ)] .  (62)
Eq. (61) describes a particle hopping in a 1D lattice with a 'spatial' dependent hopping amplitude t_l ≡ v_F q / (2 g(l)). Note that g(l) is of the order one for l < l_c and becomes much larger, g(l) ∝ Ω^{−(η−1)/(1+η)}, when l > l_c. Due to this type of 'spatial' dependent hopping amplitude, the eigenspectrum of Eq. (61) consists of two parts. That is, there is a continuous spectrum near the center of the band and a discrete spectrum in the tail of the band. The discrete spectrum appears above and below the continuous spectrum (see Fig. 3). The boundary between these two different spectra is determined from Ω = 2t_{l→∞} ∝ v_F q Ω^{(η−1)/(1+η)}, which self-consistently generates a singular dispersion relation Ω(q) ∝ q^{(1+η)/2}, i.e., q/|ln q| for η = 1. Also, the tail of the band ends at Ω(q) = 2t_1 ≈ v_F q. One can map this energy spectrum to the diagram for the excitations in the usual Ω–q plane, which is given by Fig. 4. Note that the continuum states (l > l_c) can be mapped to the particle-hole continuum, which exists below Ω ∝ q^{(1+η)/2} (1 < η < 2) or Ω ∝ q/|ln q| (η = 1). On the other hand, the bound states (the discrete spectrum) (l < l_c) can be mapped to the collective modes which exist between Ω ∝ q^{(1+η)/2} and v_F q. However, the distinction between these two different elementary excitations is obscured by the presence of the collision integral, which provides the life time for the excitations. In particular, since g(l, Ω) does not provide a sharp boundary between l > l_c and l < l_c, one expects a crossover from the particle-hole excitations to the collective modes even in the absence of the collision integral. Now let us consider the case of ∆B ≠ 0 (i.e., away from the ν = 1/2 state). In this case, Eq. (61) becomes (see also Eq. (57))
When g(l) = 1, one can write the solution of Eq. (63) (or Eq. (57)) as u(θ_{P_q}; q, t) ∝ e^{i n θ_{P_q} − i Ω t} e^{i (v_F q/ω_c) sin θ_{P_q}} with Ω = n ω_c. Thus, we recover the well known spectrum of degenerate Landau levels for free fermions. When g(l) ≠ const., it is difficult to calculate the spectrum of Eq. (63). However, using g(l) = g(−l), we can show that the spectrum of Eq. (63) is symmetric about Ω = 0, and Ω = 0 is always an eigenvalue of Eq. (63). Also, for non-zero ω_c, the spectrum is always discrete.
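The two-component spectrum described around Eqs. (61)–(62) can be illustrated numerically. The sketch below is my own toy version, not the paper's calculation: it diagonalizes a finite tight-binding chain in l-space with hopping t_l = v_F q / (2 g(l)), using a purely hypothetical interpolation for g(l) that is O(1) for |l| < l_c and much larger beyond it. Eigenvalues inside ±2 t_{l→∞} play the role of the continuum; those outside it correspond to the bound (collective) states reaching up to ≈ 2 t_1 ~ v_F q.

```python
import numpy as np

def g(l, l_c=20, g_max=10.0):
    # Hypothetical crossover function: g ~ 1 for |l| < l_c, g ~ g_max beyond.
    return 1.0 + (g_max - 1.0) * 0.5 * (1.0 + np.tanh(np.abs(l) - l_c))

vF_q = 1.0                        # sets the energy scale v_F * q
L = 400                           # angular momenta l = -L, ..., L
ls = np.arange(-L, L + 1)
t = vF_q / (2.0 * g(ls))          # 'spatially' dependent hopping t_l

# Nearest-neighbour hopping Hamiltonian in l-space; the bond value is taken as
# the average of t_l on the two sites it connects (a modelling choice of mine).
t_bond = 0.5 * (t[:-1] + t[1:])
H = np.diag(t_bond, 1) + np.diag(t_bond, -1)
omega = np.linalg.eigvalsh(H)

band_edge = 2.0 * t[-1]           # 2 t_{l -> infinity}
print("continuum-like levels :", np.sum(np.abs(omega) <= band_edge))
print("bound-state-like levels:", np.sum(np.abs(omega) > band_edge))
print("largest |Omega|       :", np.abs(omega).max(), "(close to 2 t_1 ~ v_F q)")
```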
m odes and the l arge q m odes have very di erent dynam i cs. T he sm al lq m odes shoul d be control ed by the ni te e ecti ve m ass and the l arge q m odes,the di vergent m ass. To understand the behavi or ofthe m odes i n m ore detai l ,i n the fol l ow i ng,we present a sem i cl assi calcal cul ati on. T he m ai n resul tthatwe obtai n i sthe Eq. (75).T he di spersi on w i th f(1 )= const:and f(x 1)/ x 1 . T he crossover m om entum q
where l_0 is a constant. Note that Eq. (67) with l_0 = 0 is an exact solution for the classical system Eq. (65), which describes a motion with zero energy. Now the first equation in Eq. (66) can be simplified as
Here we have chosen l_0 = 1 (instead of l_0 = 0) so that Eq. (70) reproduces the exact result Eq. (64) when q = 0. Note that g(l) also depends on frequency and we should set Ω = ω_cyc in the function g(l). Thus, the cyclotron frequency should be self-consistently determined from Eq. (70). We would like to remark that when q ≫ ω_c/v_F, the classical frequency in Eq. (69) is a smooth function of l_0, hence a smooth function of the energy. This means that the gap between the neighboring energy levels is also a smooth function of the energy of the levels. The validity of the semiclassical approach requires that the gap between neighboring energy levels is almost a constant in the neighborhood of the energies of interest. Thus the above behavior of the classical frequency indicates that the semiclassical approach is at least self-consistent.
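The self-consistency described here (setting Ω = ω_cyc inside g(l)) can be mimicked by a simple fixed-point iteration. The sketch below is my own toy version, not Eq. (70) itself: it solves a schematic η = 1, Coulomb-like gap equation of the form E_g = ω_c / ln(Λ/E_g), which reproduces the E_g ∝ ω_c/|ln ω_c| behavior quoted later in the text. The cutoff Λ is a hypothetical parameter.

```python
import numpy as np

def solve_gap(omega_c, Lambda=1.0, tol=1e-12, max_iter=10000):
    """Fixed-point iteration for the toy gap equation E = omega_c / ln(Lambda/E).

    A schematic stand-in for the self-consistent determination of the
    renormalized cyclotron frequency / energy gap, not the paper's Eq. (70).
    """
    E = omega_c                      # start from the bare cyclotron energy
    for _ in range(max_iter):
        E_new = omega_c / np.log(Lambda / E)
        if abs(E_new - E) < tol * omega_c:
            return E_new
        E = E_new
    return E

for omega_c in (1e-2, 1e-3, 1e-4, 1e-5):
    E_g = solve_gap(omega_c)
    print(f"omega_c = {omega_c:.0e}   E_g = {E_g:.3e}   E_g/omega_c = {E_g/omega_c:.3f}")
# E_g/omega_c ~ 1/|ln omega_c| shrinks slowly as omega_c -> 0, i.e. the
# effective mass enhancement m*/m ~ |ln E_g| diverges logarithmically.
```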
f(1 ) i s determ i ned from ! cyc (q ! 1 )/ ( ! c )1+ 2 and f(x 1) can be obtai ned from the condi ti on that ! cyc (q)= ! c for q ! c =v F . N ote that the di vergence off(x)for sm al lx shoul d be cuto w hen x ! c =v F q c . A s a resul t,the cycl otron spectrum of the system l ooks l i ke the one gi ven by Fi g.5.T he sm al l er gap for q > q c corresponds to a di vergent e ecti ve m ass m / ( ! c ) l e the l argergap nearq = 0 can be vi ewed asa cycl otron freqency deri ved from a ni tee ecti vem ass. T he therm alacti vati on gap m easured through the l ongi tudi nalconductance i s gi ven by the sm al l er gap at l arge wave vectors q > q c . H owever the cycl otron frequency m easured through the cycl otron resonance forthe uni form el ectri c el d shoul d be gi ven by the l arger gap. T he above di scussi on ofthe cycl otron frequency i s for the toy m odel ,w here onl y the transverse gauge el d uctuati onsarei ncl uded. O ne m ay wonderw hetherthesam epi cture al so appl i es to the real = 1=2 state. In the real = 1=2 state,the l owest l yi ng pl asm a m odes correspond to the i ntra-Landau-l evelexci tati ons,ofw hi ch energy i s m uch l ess than the i nter-Landau-l evelgap ! c . In the q ! 0 l i m i t,such m odes decoupl e from the center ofm ass m oti on. T hi s m eans that the u 1 com ponents (w hi ch correspond to the di pol ar di storti ons ofthe Ferm isurface) ofthe ei genm odes m ust di sappear i n the q ! 0 l i m i t as far as the l owest l yi ng m odes (i ntra-Landau-l evelm odes) are concerned. T he m ode that contai ns u 1 com ponents shoul d have the bi g i nter-Landau-l evelgap i n the q ! 0 l i m i t i n order to sati sfy the K ohn' s theorem . Exam i ni ng our sol uti on for the ei genm odes i n the q ! 0 l i m i t,we nd that the l owest ei genm odes are gi ven by u l / 1;l . T herefore, accordi ng to the above consi derati on,we cannoti denti fy the l owestl yi ng m odesi n the toy m odelw i th the l owest l yi ng i ntra-Landau l evelpl asm a m odes i n the realm odel . H owever, thi s probl em can be xed fol l ow i ng the procedure i ntroduced i n R ef. 30. T hat i s,we m ay i ntroduce an addi ti onalnon-di vergentLandau-Ferm i -l i qui d param eter F 1 w hi ch m odi es onl y the val ue ofg( 1). W e m ay ne-tune the val ue of F 1 such that the l= 1 m odes i n Eq. (64) w i l l have the bi g i nter-Landau-l evel gap = ! c g( 1) = ! c . In thi s case the l= 2 m odes becom e the l owest l yi ng m odes i n the q ! 0 l i m i t. Such m odes correspond to the quadradpol ardi storti onsofthe Ferm isurface and decoupl e from the center ofm ass m oti on. T he above correcti on onl y a ects the energy of the l owest l yi ng m odes for the sm al lm om enta,q < ! c =v F . W i th thi s type ofcorrecti on,our resul ts for the toy m odel essenti al l y appl i es to the = 1=2 state. T he onl y change i sthatthe l owest l yi ng m odes at sm al lm om enta,q ! c =v F ,i s gi ven by the l= 2 m odes i nstead ofthe l= 1 m odes.
H owever,the gap for the l arge m om enta can be m uch l essthan ! c . O bservi ng thi sdrasti c gap reducti on w i l lcon rm the presence ofthe si ngul ar gauge i nteracti on. In the above di sscusi on, we have i gnored the e ects of the col l i si on term . T he rol e ofcol l i si on i ntegrali s si m pl y to provi de the l i fe ti m e e ects on the col l ecti ve exci tati ons. H owever, due to the energy conservati on, onl y the col l ecti ve m odes w i th energy greater than 2! cyc (q m in ) w i l lhave a ni te l i fe ti m e. H ere ! cyc (q m in ) i s the m i ni m um energy gap ofthe l owest l yi ng pl asm a m ode and q m in i s the m om entum w here the energy takes the m i ni m um val ue. For l arge q,the m odes above 2! cyc (q m in ) m ay have a short l i fe ti m e such that the m odes are not wel lde ned. V II. SU M M A R Y ,C O N C LU SIO N , A N D IM P LIC A T IO N S T O E X P E R IM E N T SIn thi s secti on, we sum m ari ze the resul ts and provi de the uni ed pi cture for the com posi te ferm i ons i nteracti ng w i th a gauge el d. In thi s paper,we construct a general fram ework,w hi ch i stheQ B E ofthesystem ,to understand theprevi ousl y know n theoreti cal[ 6,[12][13][14][15][16]and experi m ental[ 1-3, 8-10]resul ts. Si nce there i s no wel lde ned Landau-quasiparti cl e,we cannot use the usualform ul ati on ofthe Q B E so that we used an al ternati ve form ul ati on w hi ch wasused by Prange and K adano[ 25]forthe el ectron-phonon probl em . W e used the non-equi l i bri um G reen' s functi on techni que[25][26][27][28] to deri ve the Q B E of the general i zed di stri buti on functi on f( p q ;!;q; ) for B = 0, and f( P q ;!;q; ) (P = p A ) for B 6 = 0. From thi s equati on, we al so deri ved the Q B E for the general i zed Ferm isurface di spl acem ent u( p q ;q; ) ( B = 0) or u( P q ;q; ) ( B 6 = 0) w hi ch corresponds to the l ocalvari ati on ofthe chem i calpotenti ali n m om entum space. For B = 0, the Q B E consi sts of three parts; the sel f-energy part, the generali zed Landau-i nteracti on part, and the col l i si on i ntegral . T he Landau-i nteracti on functi on F L andau ( ) can be taken as F L andau ( ) / 1=j j for > c / 1 1+ and 1=j c j for < c . For the sm ooth uctuati ons of the genaral i zed Ferm i surface di spl acem ent (l < l c 1= c / 1 1+ ), w here l (the angul ar m om entum i n m om entum space) i s the conjugate vari abl e ofthe angl e ,there i s a sm al l -angl e-(forward)-scatteri ng cancel l ati on between the sel f-energy partand the Landau-i nteracti on part. B oth ofthe sel f-energy part and the Landau-i nteracti on part are ofthe order ati on,the com bi nati on ofthese contri buti ons becom esofthe order 4 1+ . T here i s al so a si m i l ar cancel l ati on i n the col l i si on i nteralso that the transport scatteri ng rate becom es ofthe order 4 1+ . A s a resul t,the sm ooth uctuati ons show no anom al ous behavi orexpected from the si ngul arsel f-energy correcti on. O n the otherhand,forthe rough uctuati ons (l> l c ),the Landau-i nteracti on part becom es very sm al land the sel f-energy part,w hi ch i s proporti onalto 2 1+ ,dom i nates. A l so the col l i si on i ntegralbecom es ofthe order 2 1+ . T herefore,the rough uctuati ons show anom al ousbehavi or ofthe sel f-energy correcti on and suggestthatthe e ecti ve m assshow s a di vergentbehavi orm / m / j l n jfor = 1. 
From these resul ts, one can understand the densi ty-densi ty and the current-current correl ati on functi ons cal cul ated i n the perturbati on theory [ 12, 13] ,w hi ch show no anom al ousbehavi ori n thel ong wavel ength and thel ow frequency l i m i ts.U si ng theQ B E,onecan eval uate these correl ati on functi onsby taki ng the angul araverage ofthe densi ty orcurrent di sturbance due to the external potenti aland cal cul ati ng the l i near response. T hus, i n these correl ati on functi ons, the sm al langul ar m om entum m odes are dom i nati ng so that the resul ts do not show any si ngul ar behavi or. N ote that the cancel l ati on w hi ch exi sts i n the col l i si on i ntegrali m pl i es that the transport l i fe ti m e i s su ci entl y l ong to expl ai n the l ong m ean free path ofthe com posi te ferm i ons i n the recent m agneti c focusi ng experi m ent [ 10] . For the 2k F response functi ons,there i s no correspondi ng cancel l ati on between the sel f-energy part and the Landau-i nteracti on part so that i t show s the si ngul ar behavi or [ 13] . T he Q B E i n the presence ofthe sm al le ecti ve m agneti c el d B was used to understand the energy gap E g ofthe system . A s the case of B = 0,there can be two di erent behavi ors ofthe general i zed Ferm isurface di spl acem ent. For the sm ooth uctuati ons (l< l c / E 1 1+ g ),the frequency ofrevol uti on ofthewave packeti sgi ven by ! c = B =m , i: e: ,there i sno anom al ousbehavi orafter the cancel l ati on between the sel f-energy and the Landau-i nteracti on parts. For the rough uctuati ons,the sel f-energy part dom i nates and the frequency ofrevol uti on ofthe wave packeti srenorm al i zed as ! c / ! c E can be obtai ned by quanti zi ng the m oti on ofthe wave packet and taki ng the l owestquanti zed frequency w hi ch i snothi ng but ! c . Sol vi ng the sel f-consi stent equati on
/
T he exci tati ons of the system were studi ed from the Q B E of the general i zed Ferm i surface di spl acem ent. For B = 0,i n the absence ofthe col l i si on i ntegral ,there are two types ofthe exci tati ons w hi ch can be descri bed m ost easi l y i n the q pl ane. T here are parti cl e-hol e exci tati ons w hi ch exi st bel ow an edge / q 1+ 2 (1 < 2) or / q=j l n qj ( = 1). T here are al so col l ecti ve m odes w hi ch exi st between / q q=j l n qj( = 1) and v F q. H owever,the di sti ncti on between these two di erent el em entary exci tati ons i s obscured by the presence ofthe col l i si on i ntegralw hi ch provi des the l i fe ti m e of the exci tati ons. In the case of B 6 = 0,the Q B E i n the presence ofthe ni te B i s agai n used to understand the l ow l yi ng pl asm a spectrum ofthe system as a functi on ofq. For q < q c ,w here q c / p j B jfor 1 < 2 and q c / p j B jl n j B jfor = 1,the pl asm a m ode corresponds to a sm ooth uctuati on ofthe Ferm isurface,and the exci tati on gap i sgi ven by ! c B =m . O n the otherhand,for q > q c ,the pl asm a m ode corresponds to a rough uctuati on ofthe Ferm isurface. A s a consequence,the exci tati on gap becom es m uch sm al l er and proporti onalto j j B j =j l n B j for = 1. T hus,the l owest exci tati on spectrum ofthe system l ooks l i ke the one gi ven by Fi g. 5,w hi ch i s consi stent w i th the previ ous num eri calcal cul ati ons [ 30] . A ppl yi ng the pi cture devel oped i n thi s paper for the = 1=2 m etal l i c state to the m agneti c focusi ng experi m ent ofR ef. 10,we nd that the observed osci l l ati ons shoul d not be i nterpreted as the e ects due to the focusi ng ofthe quarsi parti cl es. T hi s i sbecause the i nel asti c m ean free path L q = v F and the l i fe ti m e 1=T ofthe quasi parti cl e i s qui te short. H ere v F i s the renorm al i zed Ferm ivel oci ty ofthe quasi parti cl e. For the C oul l n(E F =T ) H ere n i sthe densi ty ofthe el ectron,T the tem perature,m the bare m assofthe com posi te ferm i on,and E F = k 2 F 2m = 2 n m . Taki ng n = 10 11 cm 2 and m to be the el ectron m ass i n the vacuum (see R ef. 2,R ef. 3 and R ef. 10)ch i s m uch l ess than the l ength ofthe sem i -ci rcul ar path, 6 m ,w hi ch connects the two sl i ts. T herefore,the osci l l ati ons observed i n R ef. 10 cannot be expl ai ned by the focusi ng of the quasi parti cl es w hi ch have a di vergent e ecti ve m ass and a short l i fe ti m e. T here i s another way to expl ai n the observed osci l l ati ons i n R ef. 10. W e can i nject a net current i nto one sl i t,w hi ch causes a di pol ar di storti on ofthe l ocalFerm isurface near the sl i t. T he currentand the associ ated di pol ardi storti on propagate i n space accordi ng to the Q B E and are bended by the e ecti ve m agneti c el d B . T hi scausesthe osci l l ati on i n the current recei ved by the other sl i t. A ccordi ng to thi s pi cture,the osci l l ati ons observed i n R ef. 10 i s caused by the sm ooth uctuati ons ofthe Ferm isurface w hose dynam i cs i s i denti calto those ofa Ferm il i qui d w i th a nite e ecti ve m ass. T hus,the osci l l ati ons i n the m agneti c focusi ng experi m ents behave as i fthey are caused by quasi parti cl es w i th a ni te e ecti ve m ass and a l ong l i fe ti m e. T he rel exati on ti m e for the current di stri buti on i sgi ven by j E F T 2 ln (E F =T ) forthe C oul om b i nteracti on. 
T hi sl eadsto a di ussi on l ength (caused by the gauge uctuati ons) L j = v F j ,w here v F i s the bare Ferm ivel oci ty ofthe com posi te ferm i ons. realdi ussi on l ength shoul d be shorter than the above val ue due to other possi bl e scatteri ng m echani sm s. T hus,we expect that the crossover tem perature,above w hi ch the osci l l ati onsdi sappear,shoul d be l owerthan 150m K .In the experi m ent[ 10] ,no osci l l ati ons were observed above 100m K .A nother i m portant consequence ofour pi cture i s that,i fa ti m e-of-i ght m easurem ent can be perform ed by pul si ng the i ncom i ng current,the ti m e i s gi ven by the bare vel oci ty v F and notthe quasi parti cl e vel oci ty v F . Fi nal l y,we m ake a rem ark on the surface acousti c wave experi m ent. T he condi ti on thatwe can see the resonance between the cycl otron radi usand sound wave l ength i sgi ven by ! cyc ! s ,w here ! cyc i s the cycl otron frequency and ! s i s the sound wave frequency. T he reason i sthatwe can regard the sound wave asa standi ng wave onl y w hen ! cyc ! s . Let us i m agi ne that we are changi ng ! s such that ! s ! c . Ifwe use the quasi parti cl e pi cture to expl ai n the above resonance, then the cycl otron frequency ! cyc i s determ i ned by the di vergent e ecti ve m ass,and ! cyc shoul d be com parabl e to ! c . T herefore,there shoul d not be any resonance because ! cyc ! s i n thi s case. H owever, i n real i ty, the resonance i s governed by the sm ooth uctuati on ofthe Ferm isurface,so that ! cyc ! c i s a cycl otron frequency determ i ned by the ni te bare m ass of the com posi te ferm i on. A s a resul t, one shoul d sti l l see the resonance because ! cyc ! s ! c . T herefore, one can expect that there shoul d be sti l lresonance e ects even w hen the phonon energy exceeds the energy gap determ i ned from the Shubni kov-de H aas osci l l ati ons. T he bottom l i ne i s that the cycl otron frequency m easured i n acousti c wave experi m ents can be m uch l arger than the energy gap m easured i n transport experi m ents. In a recent experi m ent of W i l l et et. al[ 32] , resonance was observed w hen ! s i s l arger than the energy gap of the system determ i ned by the l arge e ecti ve m ass obtai ned from the Shubni kov-de H aas osci l l ati ons [ 3] . T he authors cl ai m ed that thi s i s an apparent contradi cti on between the surface acousti c wave experi m ent and the Shubni kov-de H aas osci l l ati ons. W e woul d l i ke to poi nt out that the cycl otron frequency (for sm al l q) i s determ i ned by the bare m ass (In a crude esti m ati on [ 6] ,the bare m ass i s about 1/3 ofthe el ectron m ass i n vacuum ). O n the other hand,the m ass obtai ned from Shubuni kov-de H aas osci l l ati ons or from the acti vati on gap i n transportm easurem entsi si n pri nci pl e a di erentm ass,w hi ch i n practi ce turns out to be oforder ofthe el ectron m ass i n vacuum even away from = 1=2. Even though we do not understand quanti tati vel y the m ass di erence,there i s i n pri nci pl e no contradi cti on. T he surface acousti c experi m ent i s i n fact an excel l ent way of m easuri ng the bare m ass. A C K N O W LE D G M E N T S W e are gratefulto A .Stern and B .I.H al peri n fordi scussi ng thei rresul tsw i th uspri or to the publ i cati on. W e al so woul d l i ke to thank A .Furusaki ,W .K ang,A .J.M i l l i s,T . M .R i ce,M .Si gri st,H .L.Storm er,R .L.W i l l et,A .Yacoby for hel pfuldi scussi ons and E. 
H. Fradkin, C. M. Varma, P. B. Wiegmann for enlightening comments. YBK and XGW are supported by NSF grant No. DMR-9411574. PAL is supported by NSF grant No. DMR-9216007.
..for j r r 0 j l 0 ..
0that G < 0 (p;!) = if 0 (!)A (p;!) i s not wel l de ned. T hus, i t i s al so di cul t to de ne G < (p;!;r;t) for the non-equi l i bri um case. Si nce the di vergent contri buti on to the sel f-energy com es from the gauge el d uctuati ons w i th < T ,w here i s the energy transfer by the gauge el d[ 11] ,i ti s worthw hi l e to separate the gauge el d uctuati ons i nto two parts,i.e.,a(q; ) a (q; ) for < T and a(q; ) a + (q; ) for > T ,and exam i ne the e ects ofa + ,a separatel y.T he cl assi cal uctuati on a ofthe gauge el d can be regarded as a vector potenti al w hi ch corresponds to a stati c but spati al l y varyi ng m agneti c el d b = r a . For a gi ven random ' m agneti c' el d,b (r),and i n a xed gauge,the uctuati on ofthe gauge potenti ala can be very l arge. T he gauge potenti alcan have huge di erences from one poi nt to another,as l ong as the two poi nts are wel lseparated. W e know that l ocal l y the centerofthe Ferm isurface i satthe m om entum p a (r)around the poi ntr i n space. T he huge uctuati on ofa i ndi cates that the l ocalFerm isurfaces at di erent poi nts i n space m ay appear i n very di erent regi ons i n the m om entum space. T hi s i s the reason w hy the one-parti cl eG reen' sfuncti on i n them om entum spacei snotwel lde ned. T hi sal so suggests that the Ferm i on di stri buti on i n the m om entum space,f(p;!),m ay be i l l -de ned. N ote thatthe l ocalFerm isurface can be determ i ned i n term softhe vel oci ty ofthe ferm i ons(i.e., the states w i th m 2 v 2 = 1 2m (p a ) 2 < E F are l l ed) and the vel oci ty i s a gauge-i nvari nt physi calquanti ty.T hi ssuggeststhati ti sm orereasonabl eto study theferm i on di stri buti on i n the physi calvel ocity space. T he above di scussi on l eads us to consi der the one-parti cl e G reen' s functi onG (P ;!;r;t) as a functi on of a new vari abl e P = m v = p a . N ote that thi s transform ati on i s rem i ni cent of the procedure we used i n the case of the ni te e ecti ve m agneti c el d (see secti on V ).W e m ay fol l ow the si m i l ar l i ne ofderi vati on to obtai n the Q B E i n the random m agneti c el d. Si nce we e ecti vel y separate out a uctuati ons, the sel f-energy, w hi ch appears i n the equati on of m oti on gi ven by Eq. (24), shoul d contai n onl y a + uctuati ons. T herefore,the equati on ofm oti on for G < (P ;!;r;t) i s gi ven by the Fouri er transform ofEq. (30) w i th the fol l ow i ng repl acem ent. In the rst pl ace,the vari abl e p shoul d be changed to a new vari abl e P = p a . Secondl y,the sel f-energy~ shoul d be changed to~ + w hi ch contai ns now onl y a + uctuati ons. Fi nal l y, as we can see from the case ofthe ni te e ecti ve m agneti c el d i n secti on V ,the fol l ow on ofm oti on contai nstheterm w hi ch dependson b ,butdoesnotcontai n the term s w hi ch depend on a i n an expl i ci tway. Si nce we rem oved the source ofthe di vergence (non-gauge-i nvari ance w i th respect to a ),the G reen' s functi onG (P ;!;r;t) or the correspondi ng sel f-energy i s now ni te for ni te T or !. N ow one can perform the i ntegrati on R d P =2 of G < (P ;!;r;t)safel y to de ne Z d P 2 iG < (P ;!;r;t) f( ;!;r;t); Z d P 2 iG > (P ;!;r;t) 1 f( ;!;r;t); (A : 2) w here i s the angl e between P and a gi ven di recti on. 
For a w hi l e, l et us i gnore the contri buti on com i ng from the term that depends on b (r) i n the equati on ofm oti on for f( ;!;r;t),w hi ch i s gi ven by absence ofthi sterm ,the equati on ofm oti on ofthe general i zed di stri buti on functi on f( ;!;q; ) i s gi ven by Eq. (44) w i th the constrai nt that the l ower cuo T shoul d be i ntroduced i n the frequency i ntegral s,w hi ch i s due to the fact that onl y a + uctuati ons shoul d be i ncl uded. U si ng the sam e procedure we used i n secti on IV ,we can construct the equati on of m oti on for the general i zed Ferm i surface di spl acem ent (the change that c i n the de ni ti on of the Landau-i nteracti on-functi on F L andau T herefore, the sam e argum ents for the sm al l and l arge angul ar m om entum m odes can be used to di scuss the physi cal consequences of the Q B E and the change i s that the crossover angul ar m om entum i s now gi ven by l et us di scuss the e ect ofthe term w hi ch depends on b (r). A fter i ntegrati on R d!=2 ofthe Q B E for the general i zed di stri buti on functi on f( ;!;r;t),thi s term has the fol l ow i ng form i n the Q B E for u( s term provi des the scatteri ng m echani sm due to a uctuati ons and generates a di spersi on ofthe angl e . T he transport scatteri ng rate 1= w hi ch i s due to a uctuati ons can be esti m ated as fol l ow s. In order to exam i ne b uctuati ons,l et us rst consi der hb (q)b ( T herefore,the typi call ength scal e ofb (r) uctuati ons i s gi ven by l 0 = 1=q 0 . T he typi calval ue ofb (r) over the l ength scal e l 0 can be esti m ated from hb (r)b (r 0 so that typi cal b 1= p l 5 0 . T he di spersi on of the angl e after the ferm i on travel s over the l ength l 0 can be esti m ) (l 0 =v F ) 1=(k F l 0 ) 3=2 . Let l M = nl 0 be the m ean free path w hi ch i s de ned by the l ength scal e after w hi ch the totaldi spersi on ofthe angl e becom es ofthe order one. T he num ber n can be esti m ated by requi ri ng that the totaldi spersi on ofthe angl e p n p n=(k F l 0 ) 3=2 becom es ofthe order one so that n (k F l 0 ) 3 . T hus, From l M = v F ,the scatteri ng rate due to a uctuati ons can be esti sam e order as that of the scatteri ng rate due to a + uctuati ons i n the case of the sm al langul ar m om entum m odes (l< l c ). For l< l c ,the contri buti on from the i m agi nary part ofthe sel f-energy Im R / T 2 1+ i s cancel ed by the contri buti on from the Landau-i nteracti on functi on so that the resul ti ng scatteri ng rate In the other l i m i tofl arge angul ar m om entum m odes (l> l c ),1= can be com pl etel y i gnored. T hi s i s because the sel f-energy contri buti on dom i nates. Si nce 1= < T and i t i s at m ost the sam e order as the scatteri ng rate due to a + uctuati ons even i n the case ofthe sm al langul ar m om entum m odes,i gnori ng thi s contri buti on does nota ectthe generalconsequences ofthe Q B E,w hi ch are di scussed i n secti onsIV ,V ,and V I. T herefore, the Q B E for the general i zed di stri buti on functi on at ni te tem peratures i s essenti al l y gi ven by Eq. (44) w i th the l ower cuto T of the frequency i ntegral i n the expressi on ofthe contri buti ons from the sel f-energy and the Landau-i nteracti on-functi on. A s a resul t,the form ofthe Q B E i s the sam e as thatofthe zero tem perature case and the onl y di erence i s that the crossover angl e c and the crossover angul ar m om entum l c are now gi ven by c
Fig. 1 The one-loop Feynman diagram for the self-energy of the fermions. Here the solid line represents the fermion propagator and the wavy line denotes the RPA gauge field propagator.
Fig. 2 A broad wave packet (a) and a narrow wave packet (b) (given by the shaded region) created in the momentum space. The circle is the schematic representation of the Fermi surface, which is actually not so well defined, and the arrow represents the direction of motion of the wave packet.
Fig. 3 The energy band Ω(κ) of the tight binding model given by Eq. (61) as a function of κ. The shaded region around the center of the band corresponds to the continuum states and the hatched region in the tails of the band corresponds to the bound states.
Fig. 4 The elementary excitations in the Ω–q space in the absence of the collision integral. The shaded region corresponds to the particle-hole continuum and the hatched region corresponds to the collective modes. The boundary is given by the singular dispersion relation Ω ∝ q^{(1+η)/2} for 1 < η < 2 and Ω ∝ q/|ln q| for η = 1.
Fig. 5 The lowest excitation spectrum of the composite fermion system in the presence of the finite effective magnetic field ∆B as a function of the wave vector q (solid line). The dashed line is the scaling curve described in the text. For q ≫ q_c, the excitation gap becomes smaller and is proportional to |∆B|^{(1+η)/2} for 1 < η < 2 and |∆B|/|ln ∆B| for η = 1. q_c ∝ √|∆B| for 1 < η < 2 and q_c ∝ √(|∆B| |ln ∆B|) for η = 1.
tati onsofthe system . O urobjecti ve i sto constructa si m i l ar Q B E w hi ch descri bes al lthe l ow energy physi cs ofthe com posi te ferm i on system . O ne i m portant di cul ty we are faci ng here i s that we cannot assum e the exi stence of the qui asi -parti cl es a priori i n the deri vati on of the Q B E even though the conventi onal deri vati on of the Q B E of the Ferm i -l i qui d theory rel i es on the exi stence of these quasiparti cl es. Fol l ow i ng cl osel y the work of Prange and K adano [ 25] about the el ectronphonon system , w here there i s al so no wel l de ned quasi -parti cl e at tem peratures hi gh com pared w i th the D ebye tem perature, we concentrate on a general i zed Ferm i surface di spl acem entw hi ch,i n ourcase,correspondsto thel ocalvari ati on ofthechem i calpotenti al i n m om entum space. D ue to the non-exi stence ofa wel lde ned quasi parti cl e,the usual di stri buti on functi on n k i n the m om entum space cannotbe descri bed by a cl osed equati on ofm oti on. H owever we w i l lsee l aterthat the general i zed Ferm isurface di spl acem ent does sati sfy a cl osed equati on ofm oti on. T hi s equati on ofm oti on w i l lbe al so cal l ed as Q B E. W e use the non-equi l i bri um G reen' sfuncti on techni que [ 26-28]to deri ve the new Q B E and cal cul ate the general i zed Landau-i nteracti on-functi on w hi ch has the frequency dependence as wel las the usual angul ar dependence due to the retarded nature of the gauge i nteracti on. T he Q B E at = 1=2 consi sts ofthree parts. O ne i sthe contri buti on from the
I_collision. After taking the integral ∫ dω/2π on both sides of Eq. (44), it can be seen that one cannot write the QBE only in terms of u(θ_{p_q}; q, Ω) = ∫ dω/2π f(θ_{p_q}, ω; q, Ω), which is the generalized Fermi surface displacement. That is, the QBE becomes
[Ω − v_F q cos θ_{p_q}] u(θ_{p_q}; q, Ω)
[1] H. W. Jiang et al., Phys. Rev. B 40, 12013 (1989).
[2] R. R. Du et al., Phys. Rev. Lett. 70, 2944 (1993); D. R. Leadley et al., Phys. Rev. Lett. 72, 1906 (1994).
[3] R. R. Du et al., Phys. Rev. Lett. 73, 3274 (1994); H. C. Manoharan, M. Shayegan, and S. J. Klepper et al., Phys. Rev. Lett. 73, 3270 (1994).
[4] J. K. Jain, Phys. Rev. Lett. 63, 199 (1989); Phys. Rev. B 41, 7653 (1990); Adv. Phys. 41, 105 (1992).
[5] A. Lopez and E. Fradkin, Phys. Rev. B 44, 5246 (1991); Phys. Rev. Lett. 69, 2126 (1992).
[6] B. I. Halperin, P. A. Lee, and N. Read, Phys. Rev. B 47, 7312 (1993).
[7] V. Kalmeyer and S. C. Zhang, Phys. Rev. B 46, 9889 (1992).
[8] R. L. Willett et al., Phys. Rev. Lett. 71, 3846 (1993); R. L. Willett et al., Phys. Rev. B 47, 7344 (1993).
[9] W. Kang et al., Phys. Rev. Lett. 71, 3850 (1993).
[10] V. J. Goldman, B. Su, and J. K. Jain, Phys. Rev. Lett. 72, 2065 (1994).
[11] N. Nagaosa and P. A. Lee, Phys. Rev. Lett. 64, 2550 (1990); P. A. Lee and N. Nagaosa, Phys. Rev. B 46, 5621 (1992).
[12] Y. B. Kim, A. Furusaki, X.-G. Wen, and P. A. Lee, Phys. Rev. B 50, 17917 (1994).
[13] B. L. Altshuler, L. B. Ioffe, and A. J. Millis, Phys. Rev. B 50, 14048 (1994).
[14] Y. B. Kim, P. A. Lee, X.-G. Wen, and P. C. E. Stamp, Influence of gauge field fluctuations on composite fermions near the half-filled state, cond-mat/9411057.
[15] A. Stern and B. I. Halperin, Singularities in the Fermi liquid description of a partially filled Landau level and energy gaps of fractional quantum Hall states, cond-mat/9502032.
[16] B. L. Altshuler and L. B. Ioffe, Phys. Rev. Lett. 69, 2979 (1992).
[17] D. V. Khveshchenko, R. Hlubina, and T. M. Rice, Phys. Rev. B 48, 10766 (1993).
[18] D. V. Khveshchenko and P. C. E. Stamp, Phys. Rev. Lett. 71, 2118 (1993); Phys. Rev. B 49, 5227 (1994).
[19] J. Polchinski, Nucl. Phys. B 422, 617 (1994).
[20] Junwu Gan and Eugene Wong, Phys. Rev. Lett. 71, 4226 (1994).
[21] L. B. Ioffe, D. Lidsky, and B. L. Altshuler, Phys. Rev. Lett. 73, 472 (1994).
[22] H.-J. Kwon, A. Houghton, and J. B. Marston, Phys. Rev. Lett. 73, 284 (1994); Brown University preprint, Theory of fermion liquid, cond-mat/9501067.
[23] C. Nayak and F. Wilczek, Nucl. Phys. B 417, 359 (1994); Nucl. Phys. B 430, 534 (1994).
[24] S. He, P. M. Platzman, and B. I. Halperin, Phys. Rev. Lett. 71, 777 (1993); Y. Hatsugai, P.-A. Bares, and X.-G. Wen, Phys. Rev. Lett. 71, 424 (1993); Y. B. Kim and X.-G. Wen, Phys. Rev. B 50, 8078 (1994).
[25] R. E. Prange and L. P. Kadanoff, Phys. Rev. 134, A566 (1964).
[26] L. P. Kadanoff and G. Baym, Quantum Statistical Mechanics, Benjamin, New York, 1962.
[27] L. V. Keldysh, Zh. Eksp. Teor. Fiz. 47, 1515 (1964) [Sov. Phys. JETP 20, 1018 (1965)].
[28] G. D. Mahan, Many Particle Physics, 2nd Edition, Plenum, New York, 1990; J. Rammer and H. Smith, Rev. Mod. Phys. 58, 323 (1986).
[29] Y. H. Chen, F. Wilczek, E. Witten, and B. I. Halperin, Int. J. Mod. Phys. 3, 1001 (1989).
[30] S. H. Simon and B. I. Halperin, Phys. Rev. B 48, 17368 (1993); S. H. Simon and B. I. Halperin, Phys. Rev. B 50, 1807 (1994); Song He, S. H. Simon, and B. I. Halperin, Phys. Rev. B 50, 1823 (1994).
[31] W. Hansch and G. D. Mahan, Phys. Rev. B 28, 1886 (1983).
[32] R. L. Willett, K. W. West, and L. N. Pfeiffer, preprint.
| [] |
[
"SYMMETRY OF HYPERSURFACES WITH SYMMETRIC BOUNDARY",
"SYMMETRY OF HYPERSURFACES WITH SYMMETRIC BOUNDARY"
] | [
"Hui Ma ",
"Chao Qian ",
"Jing Wu ",
"ANDYongsheng Zhang "
] | [] | [] | In this paper, we gain the interior symmetry of certain hypersurfaces with symmetric boundary under appropriate boundary conditions, including minimal hypersurfaces, hypersurfaces of constant mean curvature, hypersurfaces of constant higher order mean curvature, and Helfrich-type hypersurfaces in R^{n+1}. Suppose that X : Σ^n → R^{n+1} is an embedding of a compact, connected, C^2 hypersurface of constant r-th (r ≥ 1) order mean curvature with boundary ∂Σ. Let G ⊂ SO(n + 1) be a compact connected Lie subgroup. We prove that if ∂Σ is a G-invariant submanifold, one component of which is real analytic, and satisfies some natural geometric conditions, then Σ is also G-invariant. In particular, if one of the connected components of ∂Σ is an orbit of G, the analytic condition automatically holds. For the cases of minimal hypersurfaces and CMC hypersurfaces, the C^2 condition can be weakened to C^1. Last but not least, a similar result can also be obtained for C^{4,α} Helfrich-type hypersurfaces in R^{n+1}. | null | [
"https://export.arxiv.org/pdf/2211.06836v1.pdf"
] | 253,510,558 | 2211.06836 | c79602fc430cda0a518f091dd07cb5cee0ffa7a7 |
SYMMETRY OF HYPERSURFACES WITH SYMMETRIC BOUNDARY
13 Nov 2022
Hui Ma
Chao Qian
Jing Wu
and Yongsheng Zhang
Introduction
The Plateau problem is searching for surfaces with minimal area spanned by a given boundary. To solve generalized Plateau problems in various situations, Federer and Fleming introduced the concept of integral current in [15]. When the boundary has certain symmetries, one may expect that the solution of the Plateau problem also has same symmetries as the boundary. More specifically, for a compact, connected Lie group G acting orthogonally on R n+1 , it is natural to explore whether or not there exists a G-invariant (integral current) solution T to the Plateau problem for a given closed G-invariant boundary M .
Lawson [30] answered the above question affirmatively in the area-minimizing setting when M is of codimension 2. In fact, an area-minimizing solution T discovered by Lawson is either a hypercone C(M) := {t · x : t ∈ [0, 1] and x ∈ M} with the origin as the only singular point or a compact analytic hypersurface with boundary M. When the action G is of cohomogeneity 2, his idea is to reduce the solution to the 2-dimensional orbit space R := R^{n+1}/G. Now M descends to an interior point p_M. With a canonical metric, a shortest geodesic γ : [0, 1] → R from p_M to ∂R produces an area-minimizing solution. It is easy to see that the unit outward conormal n of T, corresponding to the normalization of the lifting vector field of −γ′(0), is G-invariant. In Figure 1, γ in Case (A) generates an area-minimizing cone, while those in Case (B) produce two analytic solutions. Apparently, Case (B) exhibits that only part of the full symmetry of M can descend to the interior of T. Namely, either γ_1 or γ_2 is not Z_2 symmetric whereas p_M is for the Z_2 action of switching coordinate arguments. Failure of inheriting the Z_2 symmetry comes from the nonsymmetry of γ′_i(0) for both i = 1 and 2. In general, we raise the following Problem 1.1. What kind of boundary symmetry can pass to the interior of certain interesting hypersurfaces?
In this paper we shall mainly focus on symmetries corresponding to connected Lie subgroups (not necessarily of cohomogeneity 2 in R n+1 ) of SO(n + 1) and those from inhomogeneous isoparametric foliations of S n (1). In the sequel, G will denote a compact connected Lie subgroup of SO(n + 1).
Four kinds of hypersurfaces of our particular interest are minimal hypersurfaces, hypersurfaces with constant mean curvature (or simply CMC hypersurfaces), hypersurfaces with constant higher order mean curvature and Helfrich-type hypersurfaces.
For minimal hypersurfaces, Morgan [33] proved that a compact boundary can only bound finitely many stable minimal hypersurfaces in R n+1 for n ≤ 5. Therefore, if the boundary of a stable minimal hypersurface is G-invariant, whereas the interior is not, then the G-action can generate infinitely many stable minimal hypersurfaces with the same boundary, which contradicts to the above finiteness. The same reason actually tells that certain uniqueness result for submanifolds of higher codimension can also lead to a corresponding interior symmetry, e.g., complex submanifolds and special Lagrangian submanifolds in C n with G-invariant boundary (cf. [19] and [20]).
For CMC hypersurfaces, since soap bubbles with circular rims are always observed to be spherical caps, it is natural to investigate CMC hypersurfaces in Problem 1.1. Koiso [27] employed a reflection method of Alexandrov [1] and proved that under mild condition the rotational symmetry of a sphere must be inherited by any compact CMC hypersurface with the sphere being its boundary. After Koiso's result, Earp, Brito, Meeks and Rosenberg [13] provided another condition for a compact CMC hypersurface with spherical boundary to be a spherical cap.
As a generalization of the mean curvature, the r-th mean curvature H_r is defined by $H_r := S_r / \binom{n}{r}$, where S_r is the r-th elementary symmetric polynomial of the principal curvatures.
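For concreteness, H_r can be computed directly from the principal curvatures. The short sketch below is only an illustration of the definition, not part of the paper; for a round sphere of radius ρ in R^{n+1}, all principal curvatures equal 1/ρ, so H_r = ρ^{-r} for every r.

```python
from itertools import combinations
from math import comb, prod

def mean_curvature_r(kappas, r):
    """r-th mean curvature H_r = S_r / C(n, r) of a hypersurface with
    principal curvatures `kappas` (S_r = r-th elementary symmetric polynomial)."""
    n = len(kappas)
    S_r = sum(prod(c) for c in combinations(kappas, r))
    return S_r / comb(n, r)

# Round sphere of radius rho in R^{n+1}: all kappas = 1/rho, hence H_r = rho**(-r).
rho, n = 2.0, 4
kappas = [1.0 / rho] * n
print([round(mean_curvature_r(kappas, r), 6) for r in range(1, n + 1)])
# -> [0.5, 0.25, 0.125, 0.0625]; H_1 is the mean curvature, H_n the Gauss-Kronecker curvature.
```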
For hypersurfaces with constant r-th mean curvature and spherical boundary, we remark that Problem 1.1 has been solved by Alías and Malacarne [2]. They showed that a hypersurface with constant r-th mean curvature (r ≥ 2) and spherical boundary must be either a planar ball (H_r = 0) or a spherical cap (H_r ≠ 0).
As stated above, previous contributions mainly focus on rotational symmetry of embedded hypersurfaces with connected boundaries. For immersed case, Kapouleas [26] constructed infinitely many immersed compact non-spherical CMC surfaces of genus g ≥ 3 whose boundary is a round planar circle. Therefore, in order to obtain interior symmetry from boundary for immersed hypersurfaces, it is necessary to impose additional topological restriction or other suitable restrictions. In [40], Palmer and Pámpano studied rotational symmetry of CMC surfaces with a round circle as a boundary component and the geodesic torsion τ g ≡ 0 along the circle. It is worth noting that the condition τ g ≡ 0 is indispensable.
In this paper we shall deal with general symmetries for both embedded hypersurfaces and immersed hypersurfaces (possibly with non-connected boundaries) under suitable boundary condition given by the following concept of contact angle. Definition 1.2. Let Σ be an n-dimensional manifold with boundary ∂Σ. Suppose that X : (Σ, ∂Σ) → R n+1 is an embedding with unit normal vector field ν. When a connected component Γ of ∂Σ lies in a hypersphere S n (R), one can define contact angle θ pointwise in Γ by
$$\begin{pmatrix} \nu \\ n \end{pmatrix} = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix} \begin{pmatrix} X/R \\ N \end{pmatrix}, \qquad (1.1)$$
where n is the outward unit conormal of ∂Σ in Σ and N the unit conormal of Γ in S n (R).
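Relation (1.1) says that (ν, n) is obtained from (X/R, N) by a rotation through θ in the 2-plane they span, so θ can be read off from inner products. The sketch below is an illustration of mine, set up as a round-trip check rather than a specific geometric configuration, since the sign conventions for ν and N are not fixed by the displayed relation alone.

```python
import numpy as np

def frame_from_theta(X_over_R, N, theta):
    """(nu, n) from (X/R, N) via the rotation in (1.1)."""
    nu = np.cos(theta) * X_over_R + np.sin(theta) * N
    n = -np.sin(theta) * X_over_R + np.cos(theta) * N
    return nu, n

def contact_angle(nu, X, N, R):
    """Recover theta from nu = cos(theta) X/R + sin(theta) N,
    assuming X/R and N are orthonormal."""
    return np.arctan2(np.dot(nu, N), np.dot(nu, X) / R)

# Round-trip check at a hypothetical boundary point on S^2(R) in R^3.
R = 2.0
X = np.array([R, 0.0, 0.0])        # boundary point on the sphere of radius R
N = np.array([0.0, 0.0, 1.0])      # a unit conormal of Gamma inside S^2(R)
for theta in (0.3, np.pi / 2, 2.0):
    nu, n = frame_from_theta(X / R, N, theta)
    print(theta, contact_angle(nu, X, N, R))   # recovers theta
```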
Throughout our paper except Section 7, we assume that X : Σ n → R n+1 is an embedding of a hypersurface with boundary ∂Σ . If there is no confusion we will identify X(Σ) with Σ and X(∂Σ) with ∂Σ, respectively.
As one of our main results, we have Theorem 1.3. Let X : Σ n → R n+1 be an embedding of a connected hypersurface with boundary ∂Σ, and Γ ⊂ S n (R) be a connected component of ∂Σ. Let G be a compact connected Lie subgroup of SO(n+1) with Lie algebra G. Assume that Γ is a real analytic G-invariant submanifold in R n+1 , which means that Γ = p∈Γ G · p consists of G-orbits. If the contact angle θ is constant on each G-orbit in Γ and Σ satisfies one of the following (i). Σ ⊂ R n+1 is a C 1 minimal hypersurface;
(ii). Σ ⊂ R n+1 is a C 1 hypersurface of constant mean curvature;
(iii). Σ ⊂ R n+1 is a C 2 hypersurface of constant r-th mean curvature containing an interior elliptic point, r > 1 and H r > 0 with respect to a unit normal vector field, then the interior of Σ is locally G-invariant 1 .
If in addition Σ is complete with respect to the induced metric, and each connected component of ∂Σ is a G-invariant submanifold in R n+1 , then Σ is G-invariant. Remark 1.4. As a special case, when Γ is an orbit of G, it is actually a homogeneous isoparametric hypersurface in S n (1). Since isoparametric hypersurfaces are real analytic, Theorem 1.3 can answer Problem 1.1 in many interesting situations, for instance see Corollary 3.3 and Corollary 3.6.
Remark 1.5. Let B^{n+1} be the (n + 1)-dimensional closed unit ball in R^{n+1}. Let Σ = B^n with X(Σ) ⊂ B^{n+1} and X(∂Σ) ⊂ ∂B^{n+1} = S^n(1).
If the boundary ∂Σ has constant contact angle θ, in some literature Σ is also said to be with a θ-capillary boundary. In particular, if θ = π 2 , then Σ is said to be with free boundary.
Remark 1.6. For area-minimizing integral current of arbitrary codimension, Lander [29] showed the invariance of an area-minimizing integral current with boundary B under a polar group action G, where B is supposed to be G-invariant and lying in the union of the principal orbits. Unlike Lander's requirement, in Theorem 1.3 the orbits in Γ are not necessarily of same type and the action of G is not necessarily a polar action. Remark 1.7. In the case of hypersurfaces with constant r-th mean curvature, we assume an additional condition that there exists an interior elliptic point in X(Σ), that is, a point where all principal curvatures have the same sign. In particular, when Σ is compact and ∂Σ is a round (n − 1)-sphere, this condition holds naturally except that Σ is a flat disk. However, in general the compactness cannot ensure the existence of an interior elliptic point.
In fact, our method to prove Theorem 1.3 also applies to other hypersurfaces satisfying certain elliptic equation. We will introduce the Helfrich-type hypersurfaces in Section 6. By studying first variation formula we get an Euler-Lagrange equation (6.8) which is a fourth order elliptic equation. For Helfrich-type hypersurfaces, we have the following Theorem 1.8. Let X : Σ n → R n+1 be an embedding of a connected C 4,α hypersurface with boundary ∂Σ, α ∈ (0, 1). Assume that Σ is a Helfrich-type hypersurface and Γ ⊂ S n (R) is a connected component of ∂Σ. Let G be a compact connected Lie subgroup of SO(n+1). Assume that Γ is real analytic G-invariant submanifold in R n+1 , which means that Γ = p∈Γ G · p consists of G-orbits.
If the contact angle θ is constant on each G-orbit in Γ, then the interior of Σ is locally G-invariant.
If in addition Σ is complete with respect to the induced metric and each connected component of ∂Σ is a G-invariant submanifold in R n+1 , then Σ is G-invariant.
Since most of our work takes place locally and an immersion is locally an embedding, we have the following 1 See Definition 2.3. Theorem 1.9. Let X : Σ n → R n+1 be an immersion of a connected hypersurface with boundary ∂Σ such that X| ∂Σ is an embedding, and Γ be a connected component of ∂Σ.
Let G be a compact connected Lie subgroup of SO(n+1) with Lie algebra G. Assume that X(Γ) lies in S n (R) and is a real analytic G-invariant submanifold in R n+1 , which means that Γ = p∈Γ G · p consists of G-orbits. If the contact angle θ is constant on each G-orbit in Γ and X satisfies one of the following (i). X is a C 1 minimal immersion;
(ii). X is a C 1 immersed hypersurface of constant mean curvature;
(iii). X is a C 2 immersed hypersurface of constant r-th mean curvature containing an interior elliptic point, r > 1 and H r > 0 with respect to a unit normal vector field;
(iv). X is a C 4,α immersed hypersurface of Helfrich-type, then the interior of X(Σ) is locally G-invariant.
If in addition
X(Σ) is a closed subset in R n+1 and X(∂Σ) is a G-invariant subset in R n+1 , then X(Σ) is a G-invariant set.
This paper is organized as follows. Section 2 contains some basic definitions and results on group action and real analyticity. In Section 3, we are concerned with CMC hypersurfaces with rotationally symmetric boundary at first and arrive at a preliminary answer to Problem 1.1. Beyond this, we consider isoparametric hypersurfaces in unit spheres to be boundaries in our setting. Based on isoparametric foliations, we construct examples of minimal hypersurfaces and CMC hypersurfaces with G-invariant boundaries and G-invariant conormals. Section 4 is devoted to certain real analytic properties through Morrey's regularity theory. Section 5 exhibits proofs of our main results (Theorem 1.3 and Theorem 5.1) by making use of the Cauchy-Kovalevskaya theorem. In Section 6, we introduce the definition of Helfrich-type hypersurface and study its real analyticity and symmetry. In Section 7, we explore immersion situations and inheritance of symmetry from boundary. For completeness we review the Cauchy-Kovalevskaya theorem and adapt it to our setting in Appendix A.
Preliminaries
Infinitesimal action, local action and global action.
Definition 2.1 (cf. [5]). Let G be a Lie group with Lie algebra G and Σ a smooth manifold. An infinitesimal action of G on Σ is a homomorphism of G to the Lie algebra of smooth vector fields on Σ. A partial action A of G on Σ is a smooth map
$$A : D \to \Sigma, \qquad (g, p) \mapsto g \cdot p, \qquad (2.1)$$
in a neighborhood D ⊂ G × Σ of {e} × Σ such that e · p = p and g 1 · (g 2 · p) = (g 1 g 2 ) · p whenever g 1 · (g 2 · p) lies in D. A partial action defined on D = G × Σ is called a global action. Two partial actions (A 1 , D 1 ), (A 2 , D 2 ) are said to be equivalent if there is a domain D ⊂ D 1 , D 2 containing {e} × Σ such that A 1 | D = A 2 | D . A local action is an equivalence class of partial actions.
Remark 2.2.
(i). A global action A defines a local action A loc in an obvious way. (ii). The category of local actions is equivalent to the category of infinitesimal actions. Definition 2.3. A local action defined by some global action is said to be complete. A subset Γ ⊂ Σ is called locally A-invariant (or locally G-invariant if there is no confusion) if for some representative (A, D) we have g · p ∈ Γ for all p ∈ Γ and (g,
p) ∈ D. A function f : Σ → R is called locally A-invariant if for the equivalent infinitesimal action V on Σ we have V f ≡ 0.
The next proposition guarantees assembling local G-invariance to a global one.
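As a small illustration of the infinitesimal criterion Vf ≡ 0 (my own example, not from the paper): for the generator of rotations in the (x1, x2)-plane of R^3, the vector field is V(x) = (−x2, x1, 0), and a function is locally invariant exactly when its derivative along V vanishes.

```python
import numpy as np

def V(x):
    """Infinitesimal generator of rotations in the (x1, x2)-plane of R^3."""
    return np.array([-x[1], x[0], 0.0])

def directional_derivative(f, x, v, h=1e-6):
    return (f(x + h * v) - f(x - h * v)) / (2.0 * h)

f_inv = lambda x: x[0] ** 2 + x[1] ** 2 + np.sin(x[2])   # rotation invariant
f_not = lambda x: x[0] + x[1] ** 2                        # not invariant

x = np.random.default_rng(0).normal(size=3)
print(directional_derivative(f_inv, x, V(x)))   # ~ 0, i.e. V f = 0
print(directional_derivative(f_not, x, V(x)))   # generically nonzero
```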
Proposition 2.4. Let X : Σ → R n+1 be an embedding of a connected hypersurface with boundary ∂Σ. Let G be a connected Lie subgroup of SO(n + 1) with Lie algebra G. Assume that Σ is complete with respect to the induced metric, the interior of Σ is locally G-invariant and ∂Σ is G-invariant. Then Σ is G-invariant.
Proof. For any φ ∈ G, since Σ is locally G-invariant, φX is a Killing vector field on Σ with the induced metric. Since ∂Σ is G-invariant, we only need to show that g · p ∈ Σ − ∂Σ for each g ∈ G and p ∈ Σ − ∂Σ. Equivalently, for any φ ∈ G, we need to show that each maximal integral curve of φX in Σ − ∂Σ is defined for all t ∈ R. Let γ be an integral curve of V := φX in Σ − ∂Σ whose maximal domain is (a, b), where −∞ ≤ a < 0 < b ≤ +∞. Let p = γ(0) and let θ denote the flow of V on Σ.
Assume that b < +∞; we will show that γ can be extended past b, which will lead to a contradiction. Since V is a Killing vector field on Σ, along γ we have
$$\frac{d}{dt}|\gamma'(t)|^2 = \frac{d}{dt}\langle V, V\rangle = 2\langle \nabla_{\gamma'(t)} V, V\rangle = \langle \nabla_{\gamma'(t)} V, V\rangle - \langle \gamma'(t), \nabla_V V\rangle = 0. \qquad (2.2)$$
Hence V has constant speed along γ. Let {t_i} ⊂ (a, b) be a sequence such that lim_{i→∞} t_i = b. Then we have
$$\mathrm{dist}(\gamma(t_i), \gamma(t_j)) \le \int_{t_i}^{t_j} |\gamma'(t)|\, dt = |V(p)|\,|t_i - t_j|.$$
It follows that {γ(t_i)} is a Cauchy sequence in Σ. Since Σ is complete with respect to the induced metric, {γ(t_i)} converges to a point q ∈ Σ. Again by the G-invariance of ∂Σ, we can see that q ∉ ∂Σ. In fact, if q ∈ ∂Σ, then we can define an integral curve γ̄ by
$$\bar\gamma(t) = \begin{cases} \gamma(t), & a < t < b, \\ q, & t = b. \end{cases} \qquad (2.3)$$
By the G-invariance of ∂Σ, the whole integral curve γ̄ must be contained in ∂Σ. However, we have γ̄(0) = p ∉ ∂Σ, which leads to a contradiction. Now choose a neighborhood U ⊂ Σ − ∂Σ of q and a small ǫ such that θ is defined on (−ǫ, ǫ) × U. There exists t_k > b − ǫ such that γ(t_k) ∈ U. Then define γ̄ : (a, b + ǫ) → Σ − ∂Σ by
$$\bar\gamma(t) = \begin{cases} \gamma(t), & a < t < b, \\ \theta_{t - t_k}(\gamma(t_k)), & t_k - \epsilon < t < t_k + \epsilon. \end{cases} \qquad (2.4)$$
γ̄ is well-defined since θ_{t−t_k}(γ(t_k)) = θ_t(p) = γ(t) for t < b. Thus γ can be extended to γ̄, which contradicts the maximality. ⊓⊔
Real analyticity.
In this subsection, we will review some definitions about real analyticity of embedded submanifolds in R^{n+1}.
Definition 2.5 (cf. [28]). A subset Σ ⊂ R^{n+1} is called an m-dimensional real analytic submanifold if, for each p ∈ Σ, there exist an open set U ⊂ R^m, an open neighborhood W of p in R^{n+1}, and an injective real analytic map f : U → R^{n+1} with f(U) = Σ ∩ W such that Df(u) has rank m for every u ∈ U, where Df(u) is the Jacobian matrix of f at u. The pair (U, f) is called a local parametrization around p.
Now real analytic functions can be defined on a real analytic submanifold.
Definition 2.7. With the notations above, let Σ be a real analytic submanifold (with or without boundary) in R n+1 . A function h : Σ → R is said to be real analytic at p ∈ Σ if there exists a local parametrization (U, f ) around p ∈ Σ with f (q) = p such that h • f : U → R is real analytic at q.
Various examples
3.1. Rotationally symmetric case. In this section, we start with an embedded hypersurface Σ^n ⊂ R^{n+1} with rotationally symmetric boundary for arbitrary n ≥ 2. Here we allow the boundary ∂Σ to be non-connected and only assume that some connected component Γ of ∂Σ is an (n − 1)-sphere. If the contact angle is constant along Γ, we obtain the following

Proposition 3.1. Let X : Σ^n → R^{n+1} be an embedding of a connected, real analytic hypersurface of constant mean curvature with boundary ∂Σ and Γ ⊂ ∂Σ be a connected component of ∂Σ. Assume that Γ ⊂ {x_{n+1} = 0} is an (n − 1)-sphere with constant contact angle θ. Then the interior of Σ^n is locally rotationally symmetric around E_{n+1}.
If in addition Σ is complete with respect to the induced metric and ∂Σ is rotationally symmetric around E n+1 , then Σ n is rotationally symmetric around E n+1 .
Remark 3.2. Proposition 3.1 is essentially a special case of Theorem 1.3 and we omit the proof here. Moreover, this result can be regarded as a high dimensional generalization of Proposition 5.1 in [40]. In the following sections we shall deal with other hypersurfaces with less regularity assumptions and general symmetries.
Note that rotationally symmetric hypersurfaces of constant mean curvature in R n+1 have been classified by Hsiang and Yu [24]. Those hypersurfaces provide natural examples of CMC hypersurfaces with constant contact angle along boundaries.
Isoparametric hypersurfaces as boundaries.
In this subsection, we will introduce isoparametric hypersurfaces in unit spheres which generate a large family of intriguing examples.
Recall that an isoparametric hypersurface M n−1 in the unit sphere S n (1) is a hypersurface with constant principal curvatures. Denote the number of distinct principal curvatures of a closed isoparametric hypersurface M n−1 by g, and the principal curvatures by λ 1 > λ 2 > · · · > λ g with multiplicities m 1 , . . . , m g , respectively. Münzner [37] proved that g can be only 1, 2, 3, 4 or 6, m i = m i+2 (subscripts mod g), the principal curvatures could be written as λ i = cot(θ + i−1 g π) with θ ∈ (0, π g ) (i = 1, . . . , g), and a closed, connected isoparametric hypersurface M must be a level set of the restriction to S n (1) of a homogeneous polynomial F : R n+1 → R of degree g satisfying the Cartan-Münzner equations:
|∇F|^2 = g^2 |x|^{2g−2}, \qquad \triangle F = \frac{m_2 − m_1}{2}\, g^2 |x|^{g−2}.
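As a quick sanity check, added here for the reader's convenience (the specific quadratic form below is an illustration and is not taken from the original text), the case g = 2 can be verified by hand:
$$F(x) = \sum_{i=1}^{p+1} x_i^2 - \sum_{j=p+2}^{n+1} x_j^2 ,$$
for which $|\nabla F|^2 = 4|x|^2 = g^2|x|^{2g-2}$ and $\triangle F = 2(p+1) - 2(n-p) = 2(2p+1-n)$, matching $\frac{m_2-m_1}{2}g^2|x|^{g-2} = 2(m_2-m_1)$ with $\{m_1, m_2\} = \{p,\, n-p-1\}$ up to the labeling of the multiplicities. The level sets $f^{-1}(t)$, $-1 < t < 1$, are the generalized Clifford tori $S^{p}(r)\times S^{n-p-1}(s)$ with $r^2 + s^2 = 1$.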
Such a polynomial F is called the Cartan-Münzner polynomial, and
f = F | S n (1) takes values in [−1, 1]. For −1 < t < 1, f −1 (t) is an isoparametric hypersurface. The level sets M + = f −1 (1) and M − = f −1 (−1)
are the two focal submanifolds of codimensions m 1 + 1 and m 2 + 1 respectively in S n (1), which are smooth minimal submanifolds of S n (1) (cf. [39]). Here are some nice properties of isoparametric hypersurfaces:
(i). dist(M_+, M_−) = π/g, where g ∈ {1, 2, 3, 4, 6}.
(ii). For each θ ∈ (0, π/g), let M_θ be the tube of constant radius θ around M_+ in S^n(1). Then there exists some c ∈ (−1, 1) such that M_θ = f^{−1}(c).
(iii). For each θ ∈ (0, π/g), the volume V(M_θ) of M_θ is given by sin^{m_1}(gθ/2) cos^{m_2}(gθ/2) times a positive constant independent of θ, and the mean curvature of M_θ is \frac{d}{dθ}\log V(M_θ) with respect to the unit normal ∇f/|∇f|.
(iv). Each isoparametric hypersurface in S^n(1) is actually a real analytic submanifold in R^{n+1} of codimension 2, which is an intersection of some level set of the Cartan-Münzner polynomial F and S^n(1).

Examples and classification results of isoparametric hypersurfaces are discussed as follows. For g ≤ 3, Cartan showed that the isoparametric hypersurfaces in this case must be homogeneous and hence get classified. For g = 4, the most complicated and beautiful case, the isoparametric hypersurfaces are either of OT-FKM type or homogeneous with (m_1, m_2) = (2, 2), (4, 5) (cf. [4], [6], [7], [8], [25]). For g = 6, the multiplicities satisfy m_1 = m_2 = 1 or 2 and the isoparametric hypersurfaces must be homogeneous (cf. [11], [31], [32]). For recent progress on isoparametric theory and related topics, we refer to [42].
Due to the classification results, isoparametric hypersurfaces in unit spheres are either homogeneous or of OT-FKM type with g = 4. Hence, we will recall homogeneous isoparametric hypersurfaces in Section 3.2.1, and isoparametric hypersurfaces of OT-FKM type, most of which are inhomogeneous, in Section 3.2.2.
3.2.1. Homogeneous isoparametric hypersurfaces. By definition, a hypersurface M n−1 in S n (1) is homogeneous if there exists a closed connected Lie subgroup G ⊂ SO(n + 1) such that M n−1 is an orbit of G. It is obvious that homogeneous hypersurfaces in S n (1) are always isoparametric. By virtue of Hsiang-Lawson [23] and Takagi-Takahashi [44], we know that all homogeneous isoparametric hypersurfaces M n−1 in S n (1) can be obtained as principal orbits of isotropy representations of compact Riemannian symmetric pairs (U, G) of rank 2.
As a corollary of Theorem 1.3 and the property (iv), when the boundary is a homogeneous isoparametric hypersurface, we gain the following Corollary 3.3. Let Σ n be an embedded hypersurface in R n+1 with boundary ∂Σ. Assume that one of the connected components of ∂Σ is a homogeneous isoparametric hypersurface M n−1 in S n (1), i.e., an orbit of some closed connected Lie subgroup G ⊂ SO(n + 1). If the contact angle is constant along M n−1 and X(Σ) satisfies the same condition as in Theorem 1.3, then the interior of Σ is locally G-invariant.
If in addition Σ is complete with respect to the induced metric, and each connected
component of ∂Σ is a G-invariant submanifold in R n+1 , then Σ is G-invariant.
3.2.2.
Isoparametric hypersurfaces of OT-FKM type. For a given symmetric Clifford system {P_0, P_1, ···, P_m} on R^{2l} satisfying P_α P_β + P_β P_α = 2δ_{αβ} Id for 0 ≤ α, β ≤ m, Ferus, Karcher and Münzner [17] constructed a Cartan-Münzner polynomial F of degree 4 on R^{2l}:
F : R^{2l} → R, \qquad F(x) = |x|^4 − 2\sum_{α=0}^{m} ⟨P_α x, x⟩^2.
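For concreteness, we record one explicit choice of such a Clifford system; the matrices below are an illustration added here, not quoted from the text. For m = 1 and l ≥ 3, write $\mathbb{R}^{2l} = \mathbb{R}^{l}\times\mathbb{R}^{l}$ and take
$$P_0 = \begin{pmatrix} I_l & 0 \\ 0 & -I_l \end{pmatrix}, \qquad P_1 = \begin{pmatrix} 0 & I_l \\ I_l & 0 \end{pmatrix},$$
which are symmetric and satisfy $P_\alpha P_\beta + P_\beta P_\alpha = 2\delta_{\alpha\beta}\,\mathrm{Id}$. Writing $x = (y, z)$, one has $\langle P_0 x, x\rangle = |y|^2 - |z|^2$ and $\langle P_1 x, x\rangle = 2\langle y, z\rangle$, so
$$F(x) = \big(|y|^2 + |z|^2\big)^2 - 2\big(|y|^2 - |z|^2\big)^2 - 8\langle y, z\rangle^2,$$
which corresponds to the case m = 1, l ≥ 3 discussed in Remark 3.5 below.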
It is not difficult to verify that f = F|_{S^{2l−1}(1)} is an isoparametric function on S^{2l−1}(1). The regular level sets of f are called isoparametric hypersurfaces of OT-FKM type. The multiplicity pair is (m_1, m_2) = (m, l − m − 1), provided m > 0 and l − m − 1 > 0, where l = kδ(m), k being a positive integer and δ(m) the dimension of an irreducible module of the Clifford algebra C_{m−1}. In general, isoparametric hypersurfaces of OT-FKM type are not extrinsically homogeneous. In this paper, the following fact is interesting to us.
Proposition 3.4 ([17]). Let M 2l−2 be an isoparametric hypersurface of OT-FKM type in S 2l−1 (1) with (m 1 , m 2 ) = (m, l − m − 1), and Spin(m + 1) be the connected Lie subgroup in SO(2l) generated by the Lie subalgebra Span{P α P β | 0 ≤ α, β ≤ m} ⊂ so(2l). Then M 2l−2 is Spin(m + 1)-invariant.
Remark 3.5. As a simple illustration of the Spin(m + 1) action in Proposition 3.4, a concrete example is given as follows. For m = 1, l ≥ 3, the corresponding isoparametric hypersurface is diffeomorphic to
S^1 × V_2(R^l), where V_2(R^l) = {(x, y) | x, y ∈ R^l, |x| = |y| = 1, x ⊥ y}. Now the Spin(2) action is given by
Spin(2) × (S^1 × V_2(R^l)) → S^1 × V_2(R^l),
(e^{it}, (e^{iθ}, (x, y))) ↦ (e^{i(θ−2t)}, (\cos t\, x + \sin t\, y,\; −\sin t\, x + \cos t\, y)).
When the boundary is an isoparametric hypersurface of OT-FKM type, we have the following application of the property (iv), Proposition 3.4 and Theorem 1.3. Corollary 3.6. Let Σ 2l−1 be an embedded hypersurface in R 2l with boundary ∂Σ. Assume that a connected component of ∂Σ is an isoparametric hypersurface M 2l−2 of OT-FKM type in S 2l−1 (1) which is Spin(m + 1)-invariant. Moreover, assume that Σ satisfies the same condition as in Theorem 1.3 with G = Spin(m + 1). Then the interior of Σ is locally Spin(m + 1)-invariant.
If in addition Σ is complete with respect to the induced metric, and ∂Σ is Spin(m + 1)invariant, then Σ is Spin(m + 1)-invariant.
Constructions via isoparametric foliations.
In this subsection, we aim to construct examples of minimal hypersurfaces and CMC hypersurfaces with isoparametric boundaries and certain prescribed conormals via isoparametric foliations. In particular, we will show that given any G-homogeneous isoparametric boundary M and any prescribed G-invariant exterior unit conormal V along M , except in the case of minimal cone, there exists a complete properly immersed (possibly embedded) G-invariant minimal hypersurface Σ with ∂Σ = M and n = V . Here an immersed hypersurface X :
Σ n → R n+1 is called G-invariant if there exists a smooth G-action on Σ such that X • h = h • X for any h ∈ G.
Recall that a G-invariant immersed hypersurface Σ is minimal if and only if Σ is minimal among G-invariant competitors (see Hsiang-Lawson [23]). Therefore, when the G-action on R n+1 is of cohomogeneity 2, finding a G-invariant minimal hypersurface is equivalent to finding a corresponding geodesic in the orbit space
R = C_{π/g} := \Big\{ (x, y) ∈ R^2 : 0 ≤ \arctan\frac{y}{x} ≤ \frac{π}{g} \ \text{and}\ x ≥ 0 \Big\}
with the canonical metric
g_c = V^2 (dx^2 + dy^2).
Here g is the number of distinct principal curvatures of a principal orbit in S n (1) and V = V (x, y) is the volume of G-invariant orbit in R n+1 represented by the point (x, y).
The reason for this working so well is the nice property (⋆) that the length of any immersed curve segment in R equals the volume of its corresponding hypersurface in R n+1 .
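A short justification of (⋆) is supplied here for readability (with V normalized as in the preceding paragraph): since g_c = V^2(dx^2 + dy^2) is conformal to the flat quotient metric,
$$\mathrm{Length}_{g_c}(\gamma) = \int_\gamma \sqrt{g_c(\gamma',\gamma')}\, ds = \int_\gamma V\, ds_{\mathrm{eucl}},$$
and the last integral equals, by the coarea formula applied to the orbit map, the n-dimensional volume of the G-invariant hypersurface swept out by the orbits over γ, because V(x, y) is precisely the volume of the orbit represented by (x, y).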
The geometric behavior of the complete geodesics in R was studied by Wang in [46]. Note that a complete geodesic γ : J −→ R starting from p M where J = [0, 1] or [0, ∞) is determined by the direction −γ ′ (0) that corresponds to V . According to Lemma 3.1 and Theorem 4.8 in [46], there are three different types of γ depending on the choice of p M and V : (i) γ goes to the origin O forming a minimal cone; (ii) γ hits ∂R perpendicularly; (iii) γ extends to infinity asymptotically toward the minimal cone (see Figure 2). Moreover, among all geodesics starting from p M , there are at most one of them goes to the origin O and at most two of them hit ∂R\{O}. We would like to remark that there is no geodesic with both ending points in ∂R since there is no closed minimal hypersurface in R n+1 .
In fact, one can obtain the same type of results for an inhomogeneous isoparametric hypersurface M of S^n(1). Note that R^{n+1}\{O} is foliated by positive homotheties of leaves of an isoparametric foliation of S^n(1). Now we define an "orbit space" R = C_{π/g} in the same way, endowed with the metric g_c where, up to a positive constant,
g_c = V^2 (dr^2 + r^2 dθ^2).   (3.1)
The property (⋆) is still valid.
According to Ferus and Karcher [16], for any immersed curve γ(s) = (r(s), θ(s)) in R parametrized by its arc length, the mean curvatureh of the hypersurface (in R n+1 ) corresponding to γ was given by
r'(s) = \sin α, \qquad θ'(s) = \frac{\cos α}{r}, \qquad α'(s) = −\bar h + \frac{n \cos α}{r} − \frac{h(θ)\sin α}{r},   (3.2)
where h(θ) = \frac{g}{2}\big(m_1 \cot\frac{gθ}{2} − m_2 \tan\frac{gθ}{2}\big) is the mean curvature (with respect to the unit normal toward M_+) of the isoparametric hypersurface of distance θ to M_+ in S^n(1), and α is the angle between γ'(s) and ∂/∂θ. A wonderful property of (3.2), as known in the homogeneous situation, is that h̄ only relies on r and θ, independent of the point on the isoparametric hypersurface of S^n(1). Therefore, given a homogeneous or inhomogeneous isoparametric boundary M and a prescribed constant contact angle along M, the existence question for a hypersurface of non-zero constant mean curvature h̄ can be solved by the ODE system (3.2). We notice that the geometric behavior of solution curves of (3.2) was studied by Hsiang and Huynh in [22]. On the one hand, Theorem A in [22] states that every global solution curve has two asymptotic lines parallel to the boundary lines θ = 0 and θ = π/g respectively. On the other hand, Theorem B in [22] says that a global solution curve can hit the boundary line θ = 0 (resp. θ = π/g) at most once. As a result, a solution curve γ of (3.2) starting from p_M either hits ∂R (in fact, perpendicularly) or extends to infinity. Moreover, either case produces both properly immersed and embedded examples (see Figure 3).
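One consistency check, added here, ties h(θ) to property (iii) above (assuming, as stated there, that the volume of M_θ is, up to a constant, sin^{m_1}(gθ/2)cos^{m_2}(gθ/2)):
$$\frac{d}{d\theta}\log\Big(\sin^{m_1}\tfrac{g\theta}{2}\,\cos^{m_2}\tfrac{g\theta}{2}\Big) = \frac{g}{2}\Big(m_1\cot\tfrac{g\theta}{2} - m_2\tan\tfrac{g\theta}{2}\Big) = h(\theta),$$
in agreement with the statement that the mean curvature of M_θ equals \frac{d}{dθ}\log V(M_θ).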
Apparently, an h̄-solution curve generates a corresponding smooth h̄-CMC hypersurface. Coupling with property (⋆), a 0-solution curve for a minimal hypersurface is indeed a geodesic with respect to (3.1). In fact, h̄/V is exactly the curvature of γ with respect to g_c in (3.1).
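Purely as an illustration of how one can explore the solution curves of (3.2) numerically (this sketch is not part of the paper; the right-hand side follows the reconstruction of (3.2) above, and the parameters g = 4, m_1 = m_2 = 2, n = 9, the value of h̄, and the initial data are assumptions chosen for the experiment):

```python
# Numerical sketch of the ODE system (3.2); illustrative only, see the caveats above.
import numpy as np
from scipy.integrate import solve_ivp

g, m1, m2, n = 4, 2, 2, 9   # isoparametric data: dim M = g*(m1+m2)/2 = 8, so M^8 in S^9 (assumed)
hbar = 0.0                  # prescribed constant mean curvature of the generated hypersurface

def h(theta):
    # mean curvature h(theta) of the isoparametric hypersurface at distance theta from M_+
    t = 0.5 * g * theta
    return 0.5 * g * (m1 / np.tan(t) - m2 * np.tan(t))

def rhs(s, y):
    r, theta, alpha = y
    return [np.sin(alpha),
            np.cos(alpha) / r,
            -hbar + n * np.cos(alpha) / r - h(theta) * np.sin(alpha) / r]

def hit_wall(s, y):
    # stop the integration when the curve approaches the walls theta = 0 or theta = pi/g
    return min(y[1], np.pi / g - y[1])
hit_wall.terminal = True

# start on the unit sphere (r = 1) at an interior theta, leaving radially (alpha = pi/2)
y0 = [1.0, 0.4 * np.pi / g, 0.5 * np.pi]
sol = solve_ivp(rhs, (0.0, 20.0), y0, events=hit_wall, max_step=1e-3)
print("final (r, theta, alpha):", sol.y[:, -1])
```

Varying α and h̄ in the initial data should reproduce, qualitatively, the behaviors described above (curves hitting a wall of the wedge or escaping to infinity), as in Figure 3.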
Based on the discussion above, we conclude this section by the following

Proposition 3.7. Given any isoparametric hypersurface M^{n−1} in S^n(1) and any angle ϕ ∈ [0, 2π), it holds:
(i). The minimal case: Either there exists a minimal cone Σ in R^{n+1} or a complete properly immersed (possibly embedded) minimal hypersurface X : Σ → R^{n+1} with ∂Σ = M and constant contact angle ϕ along M.
(ii). The CMC case: For any h̄ ≠ 0, there exists a complete properly immersed (possibly embedded) hypersurface X : Σ → R^{n+1} of constant mean curvature h̄ with ∂Σ = M and constant contact angle ϕ along M.

Remark 3.8. When Σ is connected we have uniqueness on the level of integral currents, i.e., in either case every immersion map X in the statement induces the same integral current X_#[[Σ]] (cf. [15]).
Real analyticity of hypersurfaces in R n+1
In this section, we will review some materials on real analyticity, and prove Theorem 4.7 and Theorem 6.3, which will play an important role in this paper.
4.1.
Morrey's regularity theory. In this subsection, we will recall the interior and boundary regularity theory of Morrey [35,36], which is crucial for us to relax the restriction on the regularity of hypersurfaces considered in our main results. For the reader's convenience, we will introduce some basic notations before stating Morrey's results.
Let D ⊂ R^p be a bounded domain, D̄ be its closure and x = (x_1, ···, x_p) ∈ D. Consider a system of nonlinear partial differential equations in u = (u_1, ···, u_N) : D̄ → R^N,
φ_j(x, u, Du, D^2 u, ···) = 0, \quad j = 1, ···, N.   (4.1)
This system is called a real analytic system if each φ_j is real analytic for all values of its arguments. The linearization of the nonlinear system (4.1) along u is defined by
L_{jk}(x, D)v_k := \frac{d}{dλ}\, φ_j(x, u + λv, D(u + λv), D^2(u + λv), ···)\Big|_{λ=0} = 0, \quad v = (v_1, ···, v_N), \ j, k = 1, ···, N.   (4.2)
Assume that there exist integers s_1, ···, s_N and t_1, ···, t_N such that each operator L_{jk}(x, D) is of order not greater than s_j + t_k. Let L^0_{jk}(x, D) be the terms in L_{jk}(x, D) which are exactly of order s_j + t_k and denote its characteristic polynomial by L^0_{jk}(x, λ), where λ = (λ_1, ···, λ_p).

Definition 4.1 (cf. [12]). The linear system (4.2) is called elliptic if such integers s_1, ···, s_N and t_1, ···, t_N exist that at each point x the determinant L(x, λ) := det(L^0_{jk}(x, λ)) of the characteristic polynomial is not zero for any non-zero λ ∈ R^p.

Definition 4.2 (cf. [12]). The nonlinear system (4.1) is called elliptic along u if its linearization (4.2) along u forms a linear elliptic system.
Since we can add the same integer to all the t_j and subtract it from all the s_j, we assume that max{s_j | 1 ≤ j ≤ N} = 0 in Section 4.1.1.

4.1.1. Interior regularity. Now we are ready to state the following theorem.

Theorem 4.3 (cf. [35]). Let D ⊂ R^p be a bounded domain and D̄ be its closure. Let u = (u_1, ···, u_N) : D̄ → R^N be a solution of a real analytic nonlinear system
φ_j(x_1, ···, x_p, ···, u_k, Du_k, ···, D^{s_j+t_k}u_k, ···) = 0, \quad j = 1, ···, N,   (4.3)
where φ j involves derivatives of u k of order not greater than s j + t k for 1 ≤ j, k ≤ N and max{s j |1 ≤ j ≤ N } = 0. Assume that the system (4.3) is elliptic along u and each u k is of class C t k ,µ inD with some 0 < µ < 1, then u is real analytic at each interior point of D.
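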
4.1.2. Boundary regularity.
Assume that there exist non-negative integers s 1 , · · · , s N such that each operator L jk (x, D) is of order not greater than s j +s k and define s := max{s j |1 ≤ j ≤ N }. Let L 0 jk (x, D) be the terms in L jk (x, D) which are exactly of order s j + s k and denote its characteristic polynomial by L 0 jk (x, λ), where λ = (λ 1 , · · · , λ p ).
Definition 4.4 (cf. [38]). The linear system (4.2) is called strongly elliptic if such integers s_1, ···, s_N exist that at each point x the characteristic matrix (L^0_{jk}(x, λ)) is definite, i.e., L^0_{jk}(x, λ)ξ_j \bar{ξ}_k ≠ 0 for any non-zero λ ∈ R^p and any non-zero ξ ∈ C^N. Moreover, the linear system (4.2) is called uniformly strongly elliptic along u in D̄ if it is strongly elliptic and
Re\big[L^0_{jk}(x, λ)ξ_j \bar{ξ}_k\big] ≥ M \sum_{j=1}^{N} |λ|^{2s_j} |ξ_j|^2, \quad ∀λ ∈ R^p, \ ∀ξ ∈ C^N,   (4.4)
for some M > 0 which is independent of λ, ξ and x ∈ D̄.
Definition 4.5 (cf. [38]). The nonlinear system (4.1) is called strongly elliptic (uniformly strongly elliptic, respectively) along u if its linearization (4.2) is strongly elliptic (uniformly strongly elliptic, respectively). Now we are ready to state the following theorem about the boundary regularity.
Theorem 4.6 (cf. [36]). Let D ⊂ R p be a bounded domain andD be its closure. Let u = (u 1 , · · · , u N ) :D → R N be a solution of the following real analytic nonlinear system φ j (x 1 , · · · , x p , · · · , u k , Du k , · · · , D s j +s k u k , · · · ) = 0, j = 1, · · · , N, (4.5)
where φ j involves derivatives of u k of order not greater than s j + s k for 1 ≤ j, k ≤ N . Assume that the system (4.5) is uniformly strongly elliptic along u inD and each u k is of class C s+s k ,µ inD with some 0 < µ < 1 and s := max{s j |1 ≤ j ≤ N }. If u possesses Dirichlet data which are real analytic along a real analytic portion C of the boundary of D, then u can be extended real analytically across C.
4.2.
Real analyticity of hypersurfaces. As an application of Morrey's regularity theorems, we get the following Theorem 4.7. Let X : Σ n → R n+1 be an embedding of a hypersurface with boundary ∂Σ. If X(Σ) satisfies one of the following:
(i). X(Σ) is a C 1 minimal hypersurface;
(ii). X(Σ) is a C 1 hypersurface of constant mean curvature;
(iii). X(Σ) is a C 2 hypersurface of constant r-th mean curvature containing an interior elliptic point, r > 1 and H r > 0 for proper choice of the unit normal vector field, then X(Σ) is real analytic at each interior point of Σ and can be extended analytically across real analytic portion of the boundary ∂Σ.
Proof. We will divide the proof into three parts. In the first two parts we deal with cases (i) and (ii) simultaneously and in Part 3 we deal with case (iii).
Part 1: Firstly, we can regard minimal hypersurfaces and hypersurfaces of non-zero constant mean curvature as critical hypersurfaces of the functionals A(Σ_t) and J(Σ_t), respectively. Here A(Σ_t) := Area(Σ_t) and J(Σ_t) := A(Σ_t) + \frac{n}{n+1} H_0 \int_{Σ_t} ⟨X, ν⟩, where H_0 := A(Σ)^{−1}\int_Σ H\, dA, ν is a unit normal vector field and H is the mean curvature with respect to ν. By Theorem 9.2 in [34] we can see that C^1 minimal hypersurfaces and C^1 CMC hypersurfaces are of class C^{2,α}.
Part 2: Secondly, since we can regard a minimal hypersurface as a hypersurface of constant mean curvature zero, we only need to deal with C 2,α CMC hypersurfaces.
For each interior point p in Σ, without loss of generality we can choose an orthonormal basis {e A } 1≤A≤n+1 of R n+1 such that p is the origin and ν(p) = e n+1 . Let π : R n+1 → T p Σ be the orthogonal projection to T p Σ. Then there is a neighborhood V of p such that V ∩ Σ can be regarded as a graph defined on π(V ∩ Σ) ⊂ T p Σ, i.e., there exists u : π(V ∩ Σ) → R such that X(q) = (x, u(x)) for any q ∈ V ∩ Σ and x = π(q). By Part 1, u is locally C 2,α around p.
Under this graph representation it is well known that u satisfies the following quasilinear elliptic equation of second order:
\sum_{i,j=1}^{n}\Big(δ_{ij} − \frac{u_i u_j}{W^2}\Big) u_{ij} = −nHW,   (4.6)
where u_i = \frac{∂u}{∂x_i} and W = \big(1 + \sum_i u_i^2\big)^{1/2}.
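Since it is short, we record the standard ellipticity estimate for (4.6) here (this verification is supplied for convenience and uses only the Cauchy-Schwarz inequality): for any $\lambda \in \mathbb{R}^n$,
$$\sum_{i,j=1}^n\Big(\delta_{ij}-\frac{u_iu_j}{W^2}\Big)\lambda_i\lambda_j = |\lambda|^2 - \frac{\langle \nabla u, \lambda\rangle^2}{W^2} \ \ge\ |\lambda|^2\,\frac{W^2 - |\nabla u|^2}{W^2} = \frac{|\lambda|^2}{W^2} > 0,$$
so (4.6) is elliptic, and uniformly strongly elliptic on any set where $|\nabla u|$ stays bounded.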
Thus as a C 2,α solution to the quasilinear real analytic elliptic equation (4.6), u is real analytic at each interior point by Theorem 4.3.
As for the boundary part, for each point p on a real analytic portion Γ ⊂ ∂Σ, we also have the above graph representation. Moreover, we can choose a local parametrization (U, f) of p such that f(U) = V ∩ Γ, where f : U ⊂ R^{n−1} → R^{n+1} is a real analytic map with rank n − 1.
We need to show that u possesses real analytic Dirichlet data along π(f (U )). Since π • f : U → T p Σ is real analytic with rank n − 1, π(f (U )) is a real analytic hypersurface of T p Σ ∼ = R n . More precisely, (π(f (U )), π • f ) is a real analytic parametrization of π(f (U )) in T p Σ. For each q ∈ U we have
u ∘ π ∘ f(q) = ⟨f(q), e_{n+1}⟩,
which is real analytic in U . Hence u is real analytic in π(f (U )). Note that each u i (1 ≤ i ≤ n) is uniformly bounded around p, we can see that (4.6) is uniformly strongly elliptic.
Therefore, as a C 2,α solution to the quasilinear real analytic strongly elliptic equation (4.6), u can be extended analytically to a neighborhood of p ∈ R n+1 by Theorem 4.6. This completes the proof of the first two cases.
Part 3: Finally, to deal with the case of constant r-th mean curvature, let us first introduce the L r operator.
Choose a local orthonormal frame {e 1 , · · · , e n } of Σ and let ν be a unit normal vector field. Denote the components of the second fundamental form with respect to ν by (h ij ), then the classical Newton transformations T r are defined inductively by
T^0_{ij} = δ_{ij}, \quad T^1_{ij} = S_1 δ_{ij} − h_{ij}, \quad ···, \quad T^r_{ij} = S_r δ_{ij} − \sum_k T^{r−1}_{ik} h_{kj},   (4.7)
where S r is the r-th elementary symmetric polynomial of the principal curvatures. Associated to each Newton transformation T r , there is a second-order differential operator L r defined by
L_r f := \sum_{i,j} T^r_{ij} f_{ij}, \quad \text{for any } f ∈ C^∞(Σ),   (4.8)
where f_{ij} is the Hessian of f. When r = 0, L_0 is just the Laplace-Beltrami operator on Σ. It is also known (cf. [3]) that L_{r−1} satisfies
L_{r−1}⟨X, a⟩ = r S_r ⟨ν, a⟩,   (4.9)
for any fixed vector a ∈ R n+1 . In light of the proof for Proposition 3.2 in [3], when H r > 0 and in addition there exists an interior elliptic point, one can prove that L r−1 is elliptic for the hypersurface Σ with boundary.
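A brief indication of why (4.9) holds is added here (with the sign convention for the second fundamental form chosen so that the Hessian identity below is valid; this sketch is not quoted from the source). For a fixed $a \in \mathbb{R}^{n+1}$ the $\Sigma$-Hessian of $\langle X, a\rangle$ is $\langle X, a\rangle_{ij} = h_{ij}\langle\nu, a\rangle$, so
$$L_{r-1}\langle X, a\rangle = \sum_{i,j} T^{r-1}_{ij} h_{ij}\,\langle\nu, a\rangle = \operatorname{tr}\!\big(T_{r-1}A\big)\,\langle\nu, a\rangle = rS_r\,\langle\nu, a\rangle,$$
where the last step is the classical Newton-transformation identity $\operatorname{tr}(T_{r-1}A) = rS_r$.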
Similar to cases (i) and (ii), for each point p in Σ we can regard a neighborhood of p in Σ as a graph X = (x, u(x)) defined on a domain in T p Σ. By assumption u is C 2 and by (4.9) we can derive that u satisfies the following elliptic equation of second order:
L_{r−1} u = −\frac{r S_r}{W}.   (4.10)
Since (4.10) is a nonlinear real analytic elliptic equation, by Theorem 9.1 in [34] we can see that u is of class C 2,α . Therefore the conclusion follows from Theorem 4.3 and Theorem 4.6 as before.
⊓ ⊔ Remark 4.8. We can also use the regularity theory of second-order elliptic PDE to prove Part 1 (cf. [18] Section 7 and Section 9). Moreover, in case (i) the C 1 condition can be weaken to be Lipschitz continuous.
Proof of the main theorem
Now we consider arbitrary compact connected Lie subgroup G ⊂ SO(n + 1) instead of SO(n) ⊂ SO(n + 1). In Theorem 1.3, we assume that Γ is real analytic and G-invariant with constant contact angle on each G-orbit. However, in order to prove the first part of Theorem 1.3, we only need a local assumption and have the following local result Theorem 5.1. Let X : Σ n → R n+1 be an embedding of a connected hypersurface with boundary ∂Σ, and Γ ⊂ S n (R) be a connected component of ∂Σ. Let G be a compact connected Lie subgroup of SO(n+1) with Lie algebra G. Assume that U is an open subset in Γ and it is real analytic and locally G-invariant. If the contact angle θ is locally G-invariant in U and X(Σ) satisfies one of the following (i). X(Σ) is a C 1 minimal hypersurface;
(ii). X(Σ) is a C 1 hypersurface of constant mean curvature;
(iii). X(Σ) is a C 2 hypersurface of constant r-th mean curvature containing an interior elliptic point, r > 1 and H r > 0 with respect to a unit normal vector field, then the interior of Σ is locally G-invariant.
Remark 5.2. Similar to Theorem 5.1, we also have local versions of Theorem 1.8 and Theorem 1.9. Here for simplicity we omit them.
Proof of Theorem 5.1. Consider any one-parameter subgroup in G and denote the infinitesimal generator by φ ∈ G. Let ν be a unit normal vector field and Φ(X) := φX, ν be the normal part of φX on Σ. In the following, we want to show that Φ(X) ≡ 0 in Σ.
In the first two cases, the mean curvature H is constant. Since φX is a Killing vector field, Φ satisfies L[Φ] = ∆ Σ Φ + |A| 2 Φ = 0, (5.1) where ∆ Σ is the Laplacian on Σ with the induced metric and A is the second fundamental form of Σ with respect to ν.
As for the third case, since φX is a Killing vector field, Φ(X) satisfies
L[Φ] = L r−1 Φ + (S 1 S r − (r + 1)S r+1 )Φ = 0, (5.2)
where L r−1 is defined by (4.8) and S r is the r-th elementary symmetric polynomial of the principal curvatures.
One can check that when r = 1 the equation (5.2) coincides with (5.1), so there is no confusion about the operator L.
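For the reader's convenience, the check mentioned in the previous sentence can be written out (this verification is supplied here). For $r = 1$, (4.7)-(4.8) give $L_0 = \Delta_\Sigma$, and with principal curvatures $\kappa_1,\dots,\kappa_n$,
$$S_1S_1 - 2S_2 = \Big(\sum_i \kappa_i\Big)^2 - 2\sum_{i<j}\kappa_i\kappa_j = \sum_i \kappa_i^2 = |A|^2,$$
so (5.2) with $r = 1$ reads $\Delta_\Sigma\Phi + |A|^2\Phi = 0$, which is exactly (5.1).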
For each point p ∈ U , let {T i } be a local orthonormal frame of G·p around p and extend it to a local orthonormal frame {T i , S j } of U around p. Then we have
∂_n Φ|_p = ⟨φn, ν⟩ + ⟨φX, ∇_n ν⟩ = \Big\langle φ\Big(−\sin θ \frac{X}{|X|} + \cos θ\, N\Big), \cos θ \frac{X}{|X|} + \sin θ\, N\Big\rangle + ⟨φX, ∇_n ν⟩ = \sum_i ⟨φX, T_i⟩⟨T_i, ∇_n ν⟩ + \sum_j ⟨φX, S_j⟩⟨S_j, ∇_n ν⟩ + ⟨φX, ν⟩⟨ν, ∇_n ν⟩ + ⟨φX, n⟩⟨n, ∇_n ν⟩ = \sum_i ⟨φX, T_i⟩⟨T_i, ∇_n ν⟩,   (5.3)
where ⟨ , ⟩ denotes the standard Euclidean inner product and ∇ denotes the Levi-Civita connection of R^{n+1}. Here we use the fact that φX is tangent to the orbit G · p and φ ∈ G ⊂ so(n + 1) is anti-symmetric.
By considering the parallel hypersurfaces along the normal exponential map of U in Σ with respect to the inward unit conormal −n, we can extend {T i , S j , n} to a local orthonormal frame of Σ on a neighborhoodŨ of p. Moreover, at each point q ∈Ũ , {T i , S j } are tangent to the parallel hypersurface of U through q and n is normal to this parallel hypersurface. Let [ , ] denote the Lie bracket of vector fields along Σ. By definition [T i , n]| p ∈ T p Σ at each point p ∈ U , hence we have ν, [T i , n] | U = 0. Moreover, since the contact angle θ is locally G-invariant, we have T i θ| U = 0. Now it follows that
⟨T_i, ∇_n ν⟩ = −⟨ν, ∇_n T_i⟩ = −⟨ν, ∇_{T_i} n⟩ + ⟨ν, [T_i, n]⟩ = −\Big\langle \cos θ \frac{X}{|X|} + \sin θ\, N, ∇_{T_i}\Big(−\sin θ \frac{X}{|X|} + \cos θ\, N\Big)\Big\rangle = \sin θ \cos θ \Big\langle\frac{X}{|X|}, ∇_{T_i}\frac{X}{|X|}\Big\rangle + \sin^2 θ \Big\langle N, ∇_{T_i}\frac{X}{|X|}\Big\rangle − \cos^2 θ \Big\langle\frac{X}{|X|}, ∇_{T_i} N\Big\rangle − \sin θ \cos θ ⟨N, ∇_{T_i} N⟩ = \Big\langle N, ∇_{T_i}\frac{X}{|X|}\Big\rangle = 0.   (5.4)

Combining (5.3) and (5.4), we obtain Φ|_U ≡ ∂_n Φ|_U ≡ 0 (note that Φ = ⟨φX, ν⟩ vanishes on U since φX is tangent to the G-orbits in U). By Theorem 4.7, X(Σ) is real analytic at each interior point and it can be extended analytically across U. Now taking (5.2), (5.3) and (5.4) into account, we deduce that f = Φ is a real analytic solution to the following Cauchy problem
L[f] = 0 \ \text{in } Σ, \qquad f = ∂_n f = 0 \ \text{on } U.   (5.5)
By the Cauchy-Kovalevskaya theorem, the Cauchy problem (5.5) has a unique real analytic solution (cf. Appendix A). Since there is a trivial solution f ≡ 0, we have f = Φ ≡ 0 on Σ. Therefore the interior of Σ is invariant under the infinitesimal action φ, that is, Σ is locally G-invariant.
⊓ ⊔
With the help of the preparation above, we are in position to prove Theorem 1.3.
Proof of Theorem 1.3. By assumption, Σ is complete with respect to the induced metric and each connected component of ∂Σ is a G-invariant submanifold in R n+1 . Moreover, we have proved in Theorem 5.1 that Σ is locally G-invariant. Hence, it follows from Proposition 2.4 that Σ is G-invariant. ⊓ ⊔
Symmetry of Helfrich-type hypersurface
In this section, we turn to study interior symmetry of a new class of hypersurfaces, called Helfrich-type hypersurfaces, which can be regarded as an extension of Willmore surfaces. For an immersion X : Σ n → R n+1 , the Helfrich energy is defined by
H_c[Σ] := \int_Σ (H + c)^2\, dΣ,   (6.1)
where c ∈ R is a constant. Obviously, when n = 2 and c = 0, H 0 is exactly the Willmore energy in R 3 , and critical surfaces are just Willmore surfaces. When n = 2, as critical surfaces of Helfrich energy, Helfrich surfaces have also been widely studied ( [9,10,14,21,40,41,45], etc. and references therein).
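As a simple worked example, added here for orientation (with the unit normal chosen so that the round sphere $S^n(r)\subset\mathbb{R}^{n+1}$ has $H = 1/r$; the computation is ours, not quoted from the source): for $S^n(r)$ one has $H \equiv 1/r$ and $|A|^2 = n/r^2$, so the interior Euler-Lagrange equation $\Delta_\Sigma H + (H+c)[|A|^2 - \frac{n^2}{2}(H+c)H] = 0$ appearing in (6.2) below reduces to
$$\Big(\tfrac1r + c\Big)\Big[\tfrac{n}{r^2} - \tfrac{n^2}{2}\Big(\tfrac1r + c\Big)\tfrac1r\Big] = 0,$$
i.e. either $1/r = -c$ (so $c < 0$ and $r = -1/c$) or $c = \tfrac{2-n}{n\,r}$; in particular for $n = 2$ a round sphere is critical only when $c = 0$ (the Willmore case) or $r = -1/c$.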
Firstly, we derive the following first variational formula of the Helfrich energy H c for n ≥ 2, which includes the n = 2 case in [45]. Proposition 6.1. Let X : Σ n → R n+1 be an immersion of a hypersurface with boundary ∂Σ. Consider any sufficiently smooth variation of X with compactly supported variational vector field V + Φν, where V is tangent to X(Σ) and ν is the unit normal vector field of X(Σ). Then the first variation of the Helfrich energy H c is given by
\frac{∂}{∂t} H_c = −\frac{2}{n}\int_Σ \Big\{∆_Σ H + (H + c)\Big[|A|^2 − \frac{n^2}{2}(H + c)H\Big]\Big\} Φ\, dΣ + \int_{∂Σ} \Big\{(H + c)^2 ⟨V, n⟩ + \frac{2}{n}\big[Φ\, ∂_n H − (H + c)\, ∂_n Φ\big]\Big\} ds.   (6.2)
Proof. Let X t := X + t(V + Φν). Under a natural coordinate system {x i } 1≤i≤n , we have
g_{ij} = \Big\langle \frac{∂}{∂x_i} X_t, \frac{∂}{∂x_j} X_t \Big\rangle \quad \text{and} \quad h_{ij} = −\Big\langle \frac{∂}{∂x_j}\frac{∂}{∂x_i} X_t, ν \Big\rangle. Firstly, let us consider the case V ≡ 0. By direct calculation we have
\frac{∂}{∂t} X_t = Φν, \quad \frac{∂}{∂t} g_{ij} = 2Φ h_{ij}, \quad \frac{∂}{∂t} dΣ = nΦH\, dΣ, \quad \frac{∂}{∂t} h_{ij} = −∇_i∇_j Φ + Φ h_{ik} h_{jl} g^{kl}, \quad \frac{∂}{∂t} H = −\frac{1}{n}(∆_Σ Φ + |A|^2 Φ),   (6.3)
at t = 0. Then the first variation of H c is
\frac{∂}{∂t} H_c = \int_Σ 2(H + c)\frac{∂}{∂t} H\, dΣ + (H + c)^2 \frac{∂}{∂t} dΣ = \int_Σ −\frac{2}{n}(H + c)(∆_Σ Φ + |A|^2 Φ) + n(H + c)^2 H Φ\, dΣ = −\frac{2}{n}\int_Σ \Big\{∆_Σ(H + c) + (H + c)|A|^2 − \frac{n^2}{2}(H + c)^2 H\Big\} Φ\, dΣ − \frac{2}{n}\int_{∂Σ} \big[(H + c)∂_n Φ − Φ\, ∂_n(H + c)\big] ds,   (6.4)
where we use the Green's second identity in the last equality.
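The identity referred to is the standard Green's second identity on $(\Sigma, g)$ with outward unit conormal $n$ along $\partial\Sigma$, recorded here for convenience:
$$\int_\Sigma \big(u\,\Delta_\Sigma v - v\,\Delta_\Sigma u\big)\, d\Sigma = \int_{\partial\Sigma}\big(u\,\partial_n v - v\,\partial_n u\big)\, ds,$$
applied with $u = H + c$ and $v = \Phi$, which produces the boundary integral in (6.4).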
Secondly, for the case Φ = 0, a direct calculation shows that the variation reduces to the boundary term involving ⟨V, n⟩ in (6.2). ⊓ ⊔

Definition 6.2. Let X : Σ^n → R^{n+1} be an immersion of a hypersurface with boundary ∂Σ. We say Σ is Helfrich-type if
∆_Σ H + (H + c)\Big[|A|^2 − \frac{n^2}{2}(H + c)H\Big] = 0 \ \text{in } Σ, \qquad H = −c, \ ∂_n H = 0 \ \text{on } ∂Σ.   (6.7)

Similar to Theorem 4.7, we have the following regularity result for Helfrich-type hypersurfaces.

Theorem 6.3. Let X : Σ^n → R^{n+1} be a C^{4,α} embedding of a hypersurface with boundary ∂Σ for some α ∈ (0, 1). If Σ is a Helfrich-type hypersurface, then Σ is real analytic at each interior point and can be extended analytically across any real analytic portion of the boundary ∂Σ.
Proof. As in the proof of Theorem 4.7, we can write X as a graph of u locally. Then the Euler-Lagrange equation (6.7) can be written as an equation of u:
∆_Σ ∆_Σ u + \frac{1}{W}\big(∆_Σ W\, ∆_Σ u + ⟨∇_Σ W, ∇_Σ ∆_Σ u⟩\big) − \Big(c − \frac{W}{n} ∆_Σ u\Big)\Big[\frac{n}{W^3}|∇_Σ^2 u|^2 + \frac{n^2}{2} ∆_Σ u \Big(c − \frac{W}{n} ∆_Σ u\Big)\Big] = 0.   (6.8)
Since the equation (6.8) is a quasilinear equation of fourth order, the principal part of its linearization is just the terms of order four and its characteristic polynomial becomes
L^0(x, λ) = \frac{1}{W^4}\sum_{i,j,k,l=1}^{n}\Big(δ_{kl} − \frac{u_k u_l}{W^2}\Big)\Big(δ_{ij} − \frac{u_i u_j}{W^2}\Big)λ_i λ_j λ_k λ_l,   (6.9)
where λ = (λ 1 , · · · , λ n ) ∈ R n . Hence, the equation (6.8) is a quasilinear real analytic elliptic equation and the conclusion follows from Theorem 4.3 and Theorem 4.6 as before.
⊓ ⊔
Now based on the Euler-Lagrange equation (6.7) and (6.8), we can use the method in the proof of Theorem 1.3 and prove Theorem 1.8.
Proof of Theorem 1.8. Let G ⊂ so(n + 1) be the Lie algebra of G. Consider any one-parameter subgroup of G and denote its infinitesimal generator by φ ∈ G. Let ν be a unit normal vector field and Φ(X) := ⟨φX, ν⟩ be the normal part of φX; we want to show that Φ(X) ≡ 0 in Σ. By Theorem 6.3, we can see that X(Σ) is real analytic at each interior point and can be extended analytically across Γ. Moreover, we can regard the first equation in (6.7) as a real analytic elliptic equation F[X] = 0. Since φX is a Killing vector field, Φ satisfies the linearization of this equation:
L[Φ] := ∂ ∂t F [X + tφX] = ∂ ∂t F [X + tΦν] = 0. (6.10)
In order to compute L[Φ], let us consider the variation X t := X +tΦν. Under this variation we have (6.3) and can further compute the variation of |A| 2
\frac{∂}{∂t}|A|^2 = −2 g^{ik} g^{jl} h_{kl} ∇_i∇_j Φ − 2Φ\, h_{ij} h_{kl} h_{mn}\, g^{il} g^{jm} g^{kn}.   (6.11)
Moreover, we have
L[Φ] = \frac{∂}{∂t} F[X_t] = \frac{∂}{∂t}\Big\{∆_Σ H + (H + c)\Big[|A|^2 − \frac{n^2}{2}(H + c)H\Big]\Big\}.   (6.12)
By substituting (6.3) and (6.11) into (6.12), we can see that L[Φ] = 0 is a quasilinear equation of fourth order and the principal part of its linearization is just −\frac{1}{n}∆_Σ ∆_Σ Φ. Obviously L[Φ] is elliptic with real analytic coefficients.
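Concretely, ellipticity can be read off from the principal symbol (this one-line verification is added here):
$$\sigma\Big(-\tfrac1n\,\Delta_\Sigma\Delta_\Sigma\Big)(\lambda) = -\tfrac1n\,|\lambda|_g^4 \neq 0 \quad \text{for } \lambda \neq 0,$$
so the linearized operator $L$ is elliptic of order four, with coefficients that are real analytic because the immersion $X$ is.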
For each point p ∈ Γ, let {T_i} be a local orthonormal frame of Γ around p. As in the proof of Theorem 1.3, we have
Φ|_Γ ≡ ∂_n Φ|_Γ ≡ ⟨φX, n⟩|_Γ ≡ 0,   (6.13)
which also means ∇Φ|_Γ ≡ 0.
The first derivative of H with respect to t reads
\frac{∂}{∂t} H = −\frac{1}{n}(∆Φ + |A|^2 Φ) + ⟨∇H, φX⟩.   (6.14)
Since H is constant on Γ, by the above properties of Φ, (6.14) yields
0 = ∆Φ|_Γ = ⟨∇_n ∇Φ, n⟩|_Γ.   (6.15)
Now we have
∂_n^2 Φ|_Γ = ⟨∇Φ, ∇_n n⟩|_Γ + ⟨∇_n ∇Φ, n⟩|_Γ = 0,   (6.16)
which also means ∇∇Φ|_Γ ≡ 0.
Since H is constant and ∂_n H = 0 on Γ, it follows that ∇H = ∂_n H · n = 0 on Γ. By (6.14) again, on Γ we have
0 = ∂_n(∆Φ) + ⟨∇_n ∇H, φX⟩ = ∂_n(∆Φ) + \sum_{i=1}^{n−1} ⟨∇_n ∇H, T_i⟩⟨T_i, φX⟩ = ∂_n(∆Φ) + \sum_{i=1}^{n−1} ⟨∇_{T_i} ∇H, n⟩⟨T_i, φX⟩ = ∂_n(∆Φ).   (6.17)
On the other hand,
∂_n(∆Φ) = ∂_n^3 Φ − ⟨∇_n ∇Φ, ∇_n n⟩ + \sum_{i=1}^{n−1} ⟨∇_n ∇_{T_i} ∇Φ, T_i⟩ + \sum_{i=1}^{n−1} ⟨∇_{T_i} ∇Φ, ∇_n T_i⟩.   (6.18)
It follows that
∂_n^3 Φ|_Γ = −\sum_{i=1}^{n−1} ⟨∇_n ∇_{T_i} ∇Φ, T_i⟩\Big|_Γ = \sum_{i=1}^{n−1} ⟨R(n, T_i)∇Φ − ∇_{T_i} ∇_n ∇Φ − ∇_{[n, T_i]} ∇Φ, T_i⟩\Big|_Γ = 0,   (6.19)
where R is the curvature tensor on Σ.
Now we have a real analytic solution f = Φ to the following Cauchy problem
L[f] = 0 \ \text{in } Σ, \qquad f = ∂_n f = ∂_n^2 f = ∂_n^3 f = 0 \ \text{on } Γ.   (6.20)
By the Cauchy-Kovalevskaya theorem, the Cauchy problem (6.20) has a unique real analytic solution (cf. Appendix A). Since there is a trivial solution f ≡ 0, we have f = Φ ≡ 0 on Σ. Therefore the interior of Σ is invariant under the infinitesimal action φ, that is, Σ is locally G-invariant.
Moreover, if Σ is complete with respect to the induced metric and each connected component of ∂Σ is a G-invariant submanifold in R n+1 , then it follows from Proposition 2.4 that Σ is G-invariant.
⊓ ⊔ Remark 6.4.
(i). In particular, analogous to Remark 1.4, the assumption that Γ is real analytic and G-invariant can also be replaced by that Γ is an orbit of G.
(ii). When n = 2, the rotational symmetry of Helfrich surface was studied by Palmer and Pámpano in [41].
Further discussion on immersions
In this section we consider immersions of hypersurfaces. Using the fact that an immersion is locally an embedding, we can apply the proof of Theorem 5.1 to obtain Theorem 1.9.
Proof of Theorem 1.9. Consider any one-parameter subgroup in G and denote the infinitesimal generator by φ ∈ G. For each point p in Σ, locally we can regard X as an embedding in a neighborhood U p . Let ν be a unit normal vector field and Φ(X) := φX, ν be the normal part of φX defined on X(U p ). Firstly, we want to show that Φ ≡ 0 in any such neighborhood X(U p ).
For any p ∈ ∂Σ, there exists a neighborhood U p such that X| Up : U p → R n+1 is an embedding. As the proof of Theorem 1.3, we have Φ ≡ 0 in X(U p ). Consider a subset F of Σ defined by F := {p ∈ Σ : there exists a neighborhood U p such that X| Up is an embedding, and Φ ≡ 0 in X(U p )}.
Then ∂Σ ⊂ F and F is obviously open. For each point q ∈ ∂F , let U q be a neighborhood of q such that X| Uq is an embedding. By definition of ∂F , there exists a sequence of points {q i } ∈ F ∩U q such that lim n→∞ q i = q. Since Φ is real analytic in X(U q ) and Φ(X(q i )) = 0 for each i, we have Φ ≡ 0 in X(U q ) which means q ∈ F . Therefore, F is closed and hence F = Σ. Now for each p ∈ Σ, we find a neighborhood U p such that X| Up is an embedding and Φ ≡ 0 in X(U p ). Then we have a partial action (ψ p , D p ) defined by ψ p : D p → X(U p ) (g, X(p)) → ψ p (g, X(p)) := g · X(p) (7.1) in a neighborhood D p ⊂ G × X(U p ) of {e} × X(U p ). If X(U p ) ∩ X(U q ) is not empty, then for each (g, X(s)) in D p we have ψ p (g, X(s)) = g · X(s) = ψ q (g, X(s)). So we have a partial action (ψ, D) in D = p D p such that ψ| Dp = ψ p and the interior of X(Σ) is locally G-invariant.
Moreover, if X(Σ) is closed and X(∂Σ) is G-invariant, then X(Σ) is actually G-invariant. To see this, for each p ∈ Σ − ∂Σ, we only need to show that G · X(p) ⊂ X(Σ). For each φ ∈ G, we denote θ φ (t, p) to be the global flow generated by φX in R n+1 . Since G is a compact connected Lie group, we have G · X(p) = φ∈G t∈R θ φ (t, p). Now let W := {t ∈ R|θ φ (t, p) ∈ X(Σ)}. For each t 0 ∈ W , we can find q ∈ Σ such that X(q) = θ φ (t 0 , p) ∈ X(Σ). Then there exists a neighborhood U q of q such that X| Uq is an embedding. As the proof before, we have Φ ≡ 0 in X(U q ). Therefore, φX is a Killing vector field on X(U q ) and there exists an integral curve θ X(q) (t) : (−ǫ, ǫ) → X(U q ). By the uniqueness of integral curve, we have (t 0 − ǫ, t 0 + ǫ) ⊂ W which means W is open.
On the other hand, let {t i } be a sequence in W converging to b, then θ φ (t i , p) is a sequence in X(Σ) converging to θ φ (b, p). Since X(Σ) is closed in R n+1 , θ φ (b, p) is also contained in X(Σ), that is, b ∈ W . Hence W is closed. Consequently we have W = R and G · X(Σ) ⊂ X(Σ).
⊓ ⊔ Appendix A. The Cauchy-Kovalevskaya theorem
For completeness, in this section we review the classical Cauchy-Kovalevskaya theorem. We will mainly follow the discussion in Rauch's book [43].
In R n+1 with coordinate (t, x 1 , · · · , x n ), consider the fully nonlinear partial differential equation
F(t, x, ∂_t^m u, ∂_t^j ∂_x^α u \mid j ≤ m − 1, \ j + |α| ≤ m) = 0   (A.1)
with prescribed data
∂_t^j u(0, x) = g_j(x), \quad 0 ≤ j ≤ m − 1.   (A.2)
Given (0, x̄) ∈ R^{n+1}, suppose that there exists γ ∈ R such that
F(0, x̄, γ, ∂_x^α g_j(x̄)) = 0.   (A.3)
In order to solve (A.1) for ∂_t^m u by the implicit function theorem, we need the following non-characteristic condition to hold.

Definition A.1. The hypersurface {t = 0} is said to be non-characteristic at (0, x̄) with respect to γ if
\frac{∂F}{∂(∂_t^m u)}(0, x̄, γ, ∂_x^α g_j(x̄)) ≠ 0.   (A.4)
More precisely, if (A.3) and (A.4) hold, and F is real analytic near (0, x̄, γ, ∂_x^α g_j(x̄)), then it follows from the implicit function theorem that
∂_t^m u(t, x) = G(t, x, ∂_t^j ∂_x^α u \mid j ≤ m − 1, \ j + |α| ≤ m),
with G real analytic near (0, x̄, ∂_x^α g_j(x̄)). Then we state the Cauchy-Kovalevskaya theorem for fully nonlinear PDE.

Theorem A.2 (Cauchy-Kovalevskaya). Consider the fully nonlinear partial differential equation (A.1) with prescribed data (A.2). Given (0, x̄) ∈ R^{n+1}, suppose that there exists γ ∈ R satisfying (A.3), and F and g_j are real analytic near (0, x̄, γ, ∂_x^α g_j(x̄)) and x̄, respectively. In addition, suppose that the hypersurface t = 0 is non-characteristic at (0, x̄) with respect to γ. Then, there exists an analytic solution u to (A.1) realizing (A.2) and ∂_t^m u(0, x̄) = γ. Moreover, two such solutions defined on a connected neighborhood of (0, x̄) must be equal.
Finally, we explain the uniqueness of the analytic solution to the Cauchy problems (5.5) and (6.20) by using the Cauchy-Kovalevskaya theorem.

Theorem A.3. Φ ≡ 0 is the unique analytic solution to the Cauchy problems (5.5) and (6.20) locally. Moreover, if Σ is connected, then Φ ≡ 0 in Σ.

Proof. We only prove the case in Theorem 5.1, i.e., the Cauchy problem (5.5), since the same proof is also applicable to the other cases.
For each point p ∈ U, there is a real analytic local parametrization (V, f) with f(0) = p. Let (t, x_2, ···, x_n) be the coordinates of the closed upper half space H^n, so that H^n = {(t, x_2, ···, x_n) ∈ R^n | t ≥ 0}. Then locally the Cauchy problem (5.5) can be regarded as the following Cauchy problem
L[u(t, x_2, ···, x_n)] = 0 \ \text{in } V, \qquad u = ∂_t u = 0 \ \text{on } V ∩ ∂H^n.   (A.5)
In order to apply the Cauchy-Kovalevskaya theorem, we only need to verify the non-characteristic condition (A.4). Under the coordinates (x_1 = t, x_2, ···, x_n), the principal part of L[u] can be written as L^0[u] = \sum_{i,j=1}^n A_{ij}(x, Du) ∂_i ∂_j u. Since we have shown that the operator L is elliptic and ellipticity is coordinate-free, the coefficient matrix (A_{ij}(x, Du)) is positive definite. In particular, the main diagonal element A_{11} is positive. Hence the surface t = 0 is non-characteristic at the origin of R^n on the solution u to (A.5). Now by the Cauchy-Kovalevskaya theorem, the Cauchy problem (A.5) has a unique analytic solution locally. Obviously, u ≡ 0 is a trivial real analytic solution to (A.5), hence u = Φ ≡ 0 is just this unique solution.
⊓ ⊔
Figure 1. Shortest curves to the boundary of R, for G = SO(3) × SO(3) with M = S^3(1) × S^3(1), and for G = SO(2) × SO(2) with M = S^2(1) × S^2(1).
Figure 2. Complete geodesics starting from p_M in the orbit space R in polar coordinates, where V is r^{n−1} times the orbit-volume factor in sin(gθ/2); panels include (a) g = 2, m_1 = 2, m_2 = 6 with p_M not on the minimal cone, and (d) g = 6, m_1 = m_2 = 1.
Figure 3. Solution curves of (3.2) in the "orbit space" R with g = 4, m_1 = m_2 = 2: (a), (b) solution curves starting from p_M with different α and h̄; (c) global solution curves with the same h̄; (d) solution curves starting from p_M with α = 0 and different h̄.
Remark A.4. Actually, we only use the uniqueness part of the Cauchy-Kovalevskaya theorem. Let u be a real analytic solution to (A.1) realizing (A.2) and ∂_t^m u(0, x̄) = γ. By assuming the non-characteristic condition, we can use the implicit function theorem to solve for the highest-order t-derivative in terms of the lower-order ones. Since the right-hand side only involves terms with partial derivatives of order ≤ k − 1 in t, by induction all the derivatives of u at (0, x̄) will be uniquely determined. Therefore any two real analytic solutions must coincide on any connected open set containing (0, x̄).

Remark A.5. In the proof of Theorem A.3, we can see that by means of local parametrizations one can apply the Cauchy-Kovalevskaya theorem to any real analytic non-characteristic hypersurface. Moreover, the non-characteristic condition always holds for elliptic operators.

Acknowledgments. The authors would like to thank Prof. Yuxiang Li for his helpful discussion on the regularity theory of second-order elliptic PDE. The first and the third named authors are partially supported by NSFC (No. 11831005, 12061131014). The second named author is partially supported by NSFC (No. 11871282). The fourth named author is partially supported by NSFC (No. 11971352, 12022109).
References

[1] A. D. Alexandrov. A characteristic property of spheres. Ann. Mat. Pura Appl. (4), 58:303-315, 1962.
[2] L. J. Alías and J. M. Malacarne. Constant scalar curvature hypersurfaces with spherical boundary in Euclidean space. Rev. Mat. Iberoamericana, 18(2):431-442, 2002.
[3] J. L. M. Barbosa and A. G. Colares. Stability of hypersurfaces with constant r-mean curvature. Ann. Global Anal. Geom., 15(3):277-297, 1997.
[4] T. E. Cecil, Q.-S. Chi, and G. R. Jensen. Isoparametric hypersurfaces with four principal curvatures. Ann. of Math. (2), 166(1):1-76, 2007.
[5] J. Cheeger and M. Gromov. Collapsing Riemannian manifolds while keeping their curvature bounded. I. J. Differential Geom., 23(3):309-346, 1986.
[6] Q.-S. Chi. Isoparametric hypersurfaces with four principal curvatures, II. Nagoya Math. J., 204:1-18, 2011.
[7] Q.-S. Chi. Isoparametric hypersurfaces with four principal curvatures, III. J. Differential Geom., 94(3):469-504, 2013.
[8] Q.-S. Chi. Isoparametric hypersurfaces with four principal curvatures, IV. J. Differential Geom., 115(2):225-301, 2020.
[9] R. Choksi, M. Morandotti, and M. Veneroni. Global minimizers for axisymmetric multiphase membranes. ESAIM Control Optim. Calc. Var., 19(4):1014-1029, 2013.
[10] K. Deckelnick, M. Doemeland, and H.-C. Grunau. Boundary value problems for a special Helfrich functional for surfaces of revolution: existence and asymptotic behaviour. Calc. Var. Partial Differential Equations, 60(1):Paper No. 32, 31, 2021.
[11] J. Dorfmeister and E. Neher. Isoparametric hypersurfaces, case g = 6, m = 1. Comm. Algebra, 13(11):2299-2368, 1985.
[12] A. Douglis and L. Nirenberg. Interior estimates for elliptic systems of partial differential equations. Comm. Pure Appl. Math., 8:503-538, 1955.
[13] R. Earp, F. Brito, W. H. Meeks, III, and H. Rosenberg. Structure theorems for constant mean curvature surfaces bounded by a planar curve. Indiana Univ. Math. J., 40(1):333-343, 1991.
[14] S. Eichmann. The Helfrich boundary value problem. Calc. Var. Partial Differential Equations, 58(1):Paper No. 34, 26, 2019.
[15] H. Federer and W. H. Fleming. Normal and integral currents. Ann. of Math. (2), 72:458-520, 1960.
[16] D. Ferus and H. Karcher. Nonrotational minimal spheres and minimizing cones. Comment. Math. Helv., 60(2):247-269, 1985.
[17] D. Ferus, H. Karcher, and H. F. Münzner. Cliffordalgebren und neue isoparametrische Hyperflächen. Math. Z., 177(4):479-502, 1981.
[18] D. Gilbarg and N. S. Trudinger. Elliptic partial differential equations of second order. Classics in Mathematics. Springer-Verlag, Berlin, 2001. Reprint of the 1998 edition.
[19] F. R. Harvey and H. B. Lawson, Jr. On boundaries of complex analytic varieties. I. Ann. of Math. (2), 102(2):223-290, 1975.
[20] R. Harvey and H. B. Lawson, Jr. Calibrated geometries. Acta Math., 148:47-157, 1982.
[21] W. Helfrich. Elastic properties of lipid bilayers: theory and possible experiments. Zeitschrift für Naturforschung C, 28(11-12):693-703, 1973.
[22] W.-Y. Hsiang and H.-L. Huynh. Generalized rotational hypersurfaces of constant mean curvature in the Euclidean spaces. II. Pacific J. Math., 130(1):75-95, 1987.
[23] W.-Y. Hsiang and H. B. Lawson, Jr. Minimal submanifolds of low cohomogeneity. J. Differential Geometry, 5:1-38, 1971.
[24] W.-Y. Hsiang and W. C. Yu. A generalization of a theorem of Delaunay. J. Differential Geometry, 16(2):161-177, 1981.
[25] S. Immervoll. On the classification of isoparametric hypersurfaces with four distinct principal curvatures in spheres. Ann. of Math. (2), 168(3):1011-1024, 2008.
[26] N. Kapouleas. Compact constant mean curvature surfaces in Euclidean three-space. J. Differential Geom., 33(3):683-715, 1991.
[27] M. Koiso. Symmetry of hypersurfaces of constant mean curvature with symmetric boundary. Math. Z., 191(4):567-574, 1986.
[28] S. G. Krantz and H. R. Parks. A primer of real analytic functions. Birkhäuser Advanced Texts: Basler Lehrbücher [Birkhäuser Advanced Texts: Basel Textbooks]. Birkhäuser.
[29] J. C. Lander. Area-minimizing integral currents with boundaries invariant under polar actions. Trans. Amer. Math. Soc., 307(1):419-429, 1988.
[30] H. B. Lawson, Jr. The equivariant Plateau problem and interior regularity. Trans. Amer. Math. Soc., 173:231-249, 1972.
[31] R. Miyaoka. Isoparametric hypersurfaces with (g, m) = (6, 2). Ann. of Math. (2), 177(1):53-110, 2013.
[32] R. Miyaoka. Errata of "Isoparametric hypersurfaces with (g, m) = (6, 2)" [MR2999038]. Ann. of Math. (2), 183(3):1057-1071, 2016.
[33] F. Morgan. On finiteness of the number of stable minimal hypersurfaces with a fixed boundary. Indiana Univ. Math. J., 35(4):779-833, 1986.
[34] C. B. Morrey, Jr. Second-order elliptic systems of differential equations. In Contributions to the theory of partial differential equations, Annals of Mathematics Studies, no. 33, pages 101-159. Princeton University Press, Princeton, N.J., 1954.
[35] C. B. Morrey, Jr. On the analyticity of the solutions of analytic non-linear elliptic systems of partial differential equations. I. Analyticity in the interior. Amer. J. Math., 80:198-218, 1958.
[36] C. B. Morrey, Jr. On the analyticity of the solutions of analytic non-linear elliptic systems of partial differential equations. II. Analyticity at the boundary. Amer. J. Math., 80:219-237, 1958.
[37] H. F. Münzner. Isoparametrische Hyperflächen in Sphären. Math. Ann., 251(1):57-71, 1980.
[38] L. Nirenberg. Remarks on strongly elliptic partial differential equations. Comm. Pure Appl. Math., 8:649-675, 1955.
[39] K. Nomizu. Some results in E. Cartan's theory of isoparametric families of hypersurfaces. Bull. Amer. Math. Soc., 79:1184-1188, 1973.
[40] B. Palmer and A. Pámpano. Minimizing configurations for elastic surface energies with elastic boundaries. J. Nonlinear Sci., 31(1):23-36, 2021.
[41] B. Palmer and A. Pámpano. The Euler-Helfrich functional. Calc. Var. Partial Differential Equations, 61(3):Paper No. 79, 28, 2022.
[42] C. Qian and Z. Tang. Recent progress in isoparametric functions and isoparametric hypersurfaces. In Real and complex submanifolds, volume 106 of Springer Proc. Math. Stat., pages 65-76. Springer, Tokyo, 2014.
[43] J. Rauch. Partial differential equations, volume 128 of Graduate Texts in Mathematics. Springer-Verlag, New York, 1991.
[44] R. Takagi and T. Takahashi. On the principal curvatures of homogeneous hypersurfaces in a sphere. In Differential geometry (in honor of Kentaro Yano), pages 469-481. Kinokuniya Book Store Co., Ltd., Tokyo, 1972.
[45] Z. C. Tu and Z. C. Ou-Yang. A geometric theory on the elasticity of bio-membranes. J. Phys. A: Math. Gen., 37(47):11407-11429, 2004.
[46] Q. M. Wang. On a class of minimal hypersurfaces in R^n. Math. Ann., 298(2):207-251, 1994.
[
"Modulation of Negative Index Metamaterials in the Near-IR Range",
"Modulation of Negative Index Metamaterials in the Near-IR Range"
] | [
"Evgenia Kim [email protected] \nDepartment of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California\n",
") † \nDepartment of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California\n",
"Wei Wu \nDepartment of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California\n",
"Ekaterina Ponizovskaya \nDepartment of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California\n",
"Zhaoning Yu \nDepartment of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California\n",
"Alexander M Bratkovsky \nDepartment of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California\n",
"Shih-Yuang Wang \nDepartment of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California\n",
"R Stanley Williams \nDepartment of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California\n",
"Y Ron Shen \nDepartment of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California\n"
] | [
"Department of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California",
"Department of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California",
"Department of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California",
"Department of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California",
"Department of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California",
"Department of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California",
"Department of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California",
"Department of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California",
"Department of Physics\nQuantum Science Research\nUniversity of California\nHewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California"
] | [] | Abstract: Optical modulation of the effective refractive properties of a "fishnet" metamaterial with a Ag/Si/Ag heterostructure is demonstrated in the near-IR range and the associated fast dynamics of negative refractive index is studied by pump-probe method. Photo excitation of the amorphous Si layer at visible wavelength and corresponding modification of its optical parameters is found to be responsible for the observed modulation of negative refractive index in near-IR. | 10.1063/1.2801701 | [
"https://export.arxiv.org/pdf/0708.2095v1.pdf"
] | 119,618,642 | 0708.2095 | f69a6a75e184fae2e0629b64d382b1fdff6f4932 |
Modulation of Negative Index Metamaterials in the Near-IR Range
Evgenia Kim [email protected]
Department of Physics
Quantum Science Research
University of California
Hewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California
) †
Department of Physics
Quantum Science Research
University of California
Hewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California
Wei Wu
Department of Physics
Quantum Science Research
University of California
Hewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California
Ekaterina Ponizovskaya
Department of Physics
Quantum Science Research
University of California
Hewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California
Zhaoning Yu
Department of Physics
Quantum Science Research
University of California
Hewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California
Alexander M Bratkovsky
Department of Physics
Quantum Science Research
University of California
Hewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California
Shih-Yuang Wang
Department of Physics
Quantum Science Research
University of California
Hewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California
R Stanley Williams
Department of Physics
Quantum Science Research
University of California
Hewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California
Y Ron Shen
Department of Physics
Quantum Science Research
University of California
Hewlett-Packard Laboratories94720, 94304BerkeleyCalifornia, California
Modulation of Negative Index Metamaterials in the Near-IR Range
Abstract: Optical modulation of the effective refractive properties of a "fishnet" metamaterial with a Ag/Si/Ag heterostructure is demonstrated in the near-IR range and the associated fast dynamics of negative refractive index is studied by pump-probe method. Photo excitation of the amorphous Si layer at visible wavelength and corresponding modification of its optical parameters is found to be responsible for the observed modulation of negative refractive index in near-IR.
Negative index metamaterials (NIM) that exhibit unique refractive properties [1][2][3] are currently a focus of research in optoelectronics. Numerous unusual properties and applications of NIM have been discussed [4][5][6]. Some of them, such as near-field photonic links for optical communications and data processing, require modulation of the effective negative refractive index of the material. This can be achieved by modulating the optical properties of the constituents of an NIM. We report here the first study of optical modulation of an NIM in the near-IR range. Using the pump/probe method, we observed a pump-induced change of 40% in the effective negative refractive index of an NIM composed of a Ag/Si/Ag fishnet heterostructure, with a relaxation time of 58 ps limited by carrier relaxation in Si.
In this study, the fishnet structure [7] was designed with the use of FDTD method (see Fig. 1a) to have a negative refractive index in the 1.6-1.8 μm near-IR range (Fig. 1d) and a magnetic resonance around 1.7 μm (Fig. 1c). The fishnet on a glass substrate consisted of two 25 nm Ag metallic layers separated by a 80 nm amorphous (α−) Si layer and perforated by a periodic array of holes (Fig. 1b). The period of the resulting network of metallic wires was 320 nm along the wires in both directions (Fig. 1a). The widths of the Ag wires along the two perpendicular directions were ~220 nm and ~110 nm for the bottom layer, respectively, and were approximately 40% smaller for the top layer as a result of our fabrication procedure (Fig. 1b). The fishnet was fabricated by using nanoimprint and electron-beam lithography. The fabrication procedure was described in detail elsewhere [8]. Figure 1e shows the SEM image of the sample. A similar sample, with SiO2 silica replacing Si as the dielectric spacer, was studied earlier to demonstrate the effectiveness of such fishnet structures as negative index materials in the near-IR range [9]. Linear transmission and reflection spectra were measured in order to characterize the sample. The measurements have been carried out with a Nd:YAG laser/optical parametric system generating 20 ps pulses tunable in the entire near-IR range. The experimental setup was the same as that described earlier [8]. For transmission measurement, the input beam was normal to the surface with polarization parallel to the thin Ag wires. For reflection measurement, the same polarized beam was tilted by 10° to the normal to the surface. The phase difference was also measured in transmission and reflection of the two beams polarized parallel to the thin and thick Ag wires, respectively. To study the pump-induced change of transmission and reflection, we adopted the pump/probe method using Q-switched YAG:Nd3+ doubled output at wavelength of 532 nm as the pump pulses and the time-delayed wavelength tunable IR pulses from the OPO system as the probe. We expected that the pump would excite carriers in Ag and Si, modify their refractive indices, and alter the optical responses of the fishnet. Displayed in Figs. 2a and 2b are the transmittance and reflectance spectra for our sample, with and without pump. They show a resonant structure at ~1.7 μm in agreement with the theoretical prediction presented in Figs. 1b and 1c. The pump fluence was 320 μJ/cm2. It is seen that the pump induces a red shift of 15±2 nm, and a decrease of ~50% in the peak magnitude of the magnetic resonance in the transmittance spectrum. Results of the phase measurement are shown in Fig. 2c. The observed phase difference is mainly due to the effect of the magnetic resonance seen only by the waves with polarization of the magnetic field vector along the thick Ag wires. Without the pump, it reaches −38° in transmission and in reflection at the peak of the resonance. With the pump, the values change, respectively, following the change of the resonance structure. Real and imaginary parts of the effective refractive index, n, can then be deduced from the experimental data in Figs. 2a and 2c using the method of Ref. [10], with results shown in Fig. 2d. At the magnetic resonance, the real part of n for the fishnet exhibits a dip reaching a value of . With the pump reducing the resonance strength, it changes to . Fig. 2e also shows the measured transmittance at the magnetic resonance as a function of the pump fluence. The linear relation indicates that the effect is due to a linear increase of pump absorption in the structure. Pumping the sample (i) produces free carriers in Si and Ag and their relaxation also leads to (ii) heating of the sample. Both processes could modify the optical constants of the materials and hence the optical response of the metamaterial, but carrier relaxation is expected to be much faster than heating. In our pump/probe measurement, we measured a set of transmission spectra at different delay times between pump and probe, and observed the recovery of the induced changes on the resonance structure. Fig. 3a shows the pump-induced change of transmission at the resonance peak of the fishnet as a function of the probe time delay for the pump fluence of 320 μJ/cm2. For comparison, we also display in Fig. 3b the cross-correlation trace of our pump and probe pulses obtained from sum-frequency generation in a barium borate crystal.
To fit the experimental data for the fishnet in Fig. 3a one can use the following expression:
S(\tau) \propto \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \theta(t_2 - t_1)\,\left[ e^{-\alpha (t_2 - t_1)} + A \right] e^{-2 t_1^2 / w^2}\, e^{-2 (t_2 - \tau)^2 / w^2}\, dt_1\, dt_2 . \qquad (1)
Here, θ(t2 − t1) is the unit
step function, the exponential decay term exp[-α(t 2 -t 1 )] and the constant A describe, respectively, the effects of carrier relaxation and heating on modulation; the Gaussian functions represent the pump and probe pulse profiles that reproduce the cross-correlation trace inFig. 3bwith w = 19 ps, and τ is the time delay between pump and probe pulses.Figure 3ashows a fit of Eq. (1) to experimental data with τ = 58 ps.We expect that the pump-induced modulation comes mainly from photo-excitation of carriers in the α−Si layer. In that case, the observed fast relaxation would reflect the carrier relaxation in the Si layer. A similar pump/probe measurement are carried out on an 80-nm α−Si film alone, without top and bottom Ag layers The pump-induced absorption and reflectance changes versus time for α−Si film are shown inFig. 3a and3b, respectively. The initial dip in reflectance results from a pump-induced reduction of the refractive index, and can be associated with pump-induced free carriers in α−Si. The change in the absorption (inFig. 3a)is determined from simultaneous measurements of reflectance and transmittance. The decay of absorption changes can be fit by a single exponential with a relaxation time of 50 ps (dashed line inFig.3 a), characteristic of carrier relaxation in α−Si[11, 12]. The fact that the pump-induced modulation of our fishnet sample has a decay closely resembling that of the free α-Si film indicates that free carrier excitation and relaxation is indeed the dominant mechanism responsible for the modulation, while the excitations in the Ag wires appear not to be important. The tail observed at a long probe delay time, on the other hand, must have resulted from a thermal modulation as the excited carriers relaxed and released the energy to heat up the sample.To further confirm that carrier excitation in α-Si is the dominating mechanism underlying the observed pump-induced modulation of the fishnet structure, we deduced from the pump/probe measurement of the α-Si film (without silver layers) a maximum pumppump fluence of 320 μJ/cm 2 . The imaginary part of the index is due to finite conductivity of photoinduced carriers, that we estimated to be calculation of the effective refractive index of the fishnet structure with thechanges in α−Si refractive red shift of the magnetic resonance [the dip in Re ] of about 5 and 30 nm and decreases of the resonant amplitude by 30% and 70%, respectively, Fig. 4. The results are in fair agreement with the experimental observation shown in Fig. 2d. The pump/probe measurement on a fishnet structure with silica replacing Si are made and found that the maximum pump-induced change of transmittance with 320 μJ/cm Because silica does not absorb at the pump frequency, the effect, if any, would have come from pump excitation of the Ag wires. This again indicates that modulation of the fishnet structure by excitation of the Ag wires is not effective. In conclusion, pump-probe experiments and FDTD simulations demonstrate photoinduced modulation of the effective negative refractive index of a Ag/Si/Ag fishnet structure. A pump with fluence of 320 μJ/cm 2 at visible wavelength can change the refractive index of a Ag/Si/Ag fishnet negative index structure at the resonance from carriers in the α-Si spacer are responsible for the modulation. It is characterized by dynamic response of 58 ps governed by the carrier relaxation time in α-Si. The present work opens up the possibility of fast switching and/or modulation of NIM devices in the optical range. 
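As a sanity check on this analysis, the double integral in Eq. (1) can be evaluated numerically and compared with the measured trace in Fig. 3a. The short Python sketch below is only an illustration of that step: the parameter values (a relaxation rate corresponding to 58 ps, an arbitrary thermal amplitude A, and w = 19 ps) and the use of scipy are assumptions of this sketch, not the authors' original fitting code.

# Minimal numerical sketch of Eq. (1); parameter values are assumed for illustration only.
import numpy as np
from scipy.integrate import dblquad

ALPHA = 1.0 / 58.0   # carrier-relaxation rate in 1/ps (assuming the 58 ps fit corresponds to 1/alpha)
A = 0.05             # relative weight of the slow thermal contribution (arbitrary assumption)
W = 19.0             # Gaussian pulse-width parameter in ps (from the cross-correlation trace)

def signal(tau):
    """Evaluate S(tau) of Eq. (1) up to an overall constant."""
    def integrand(t2, t1):
        # The step function theta(t2 - t1) is enforced by integrating t2 from t1 upward.
        relaxation = np.exp(-ALPHA * (t2 - t1)) + A
        pulses = np.exp(-2.0 * t1**2 / W**2) * np.exp(-2.0 * (t2 - tau)**2 / W**2)
        return relaxation * pulses
    value, _ = dblquad(integrand, -np.inf, np.inf, lambda t1: t1, lambda t1: np.inf)
    return value

delays = np.linspace(-50.0, 200.0, 26)        # pump-probe delays in ps
trace = np.array([signal(t) for t in delays])
trace /= trace.max()                          # normalize for comparison with Fig. 3a
print(np.round(trace, 3))

Fitting such a curve to the measured data with, e.g., scipy.optimize.curve_fit would recover the quoted 58 ps relaxation time.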
The authors thank DARPA for partial support.
References
1. L.I. Mandelshtam, Lectures in Optics, Relativity, and Quantum Mechanics (Moscow, Nauka, 1972), p. 389.
2. R.A. Silin, Usp. Fiz. Nauk, 175, 562 (2006); R.A. Silin and V.P. Sazonov, Delay Systems (Radio, Moscow, 1966).
3. V.G. Veselago, Usp. Fiz. Nauk, 92, 517 (1967) [Sov. Phys. Usp. 10, 509 (1968)].
4. J.B. Pendry, Phys. Rev. Lett. 85, 3966-3969 (2000).
5. N. Fang, H. Lee, X. Zhang, Science, 308, 534-537 (2005).
6. D.O.S. Melville, R.J. Blaikie, C.R. Wolf, Appl. Phys. Lett. 84, 4403 (2004).
7. S. Zhang, W. Fan, B.K. Minhas, A. Frauenglass, K.J. Malloy, and S.R.J. Brueck, Phys. Rev. Lett. 94, 037402 (2005).
8. W. Wu, Y. Liu, E.M. Kim, Z. Yu, N. Fang, X. Zhang, Y.R. Shen, S.Y. Wang, Appl. Phys. Lett. 90, 063107 (2007).
9. E.M. Kim, W. Wu, E. Ponizovskaya, Y. Liu, Z. Yu, A.M. Bratkovsky, N. Fang, X. Zhang, Y.R. Shen, and S.Y. Wang, Appl. Phys. A 87, 143 (2007).
10. D. Smith, S. Schultz, P. Markos, and C. Soukoulis, Phys. Rev. B 65, 195104 (2002).
11. A. Esser, K. Seibert, H. Kurz, G.N. Parsons, C. Wang, B.N. Davidson, G. Lucovsky, and R.J. Nemanich, Phys. Rev. B 41, 2879 (1990).
12. M. Kubinyi, A. Grofcsik, and W. Jones, J. Molec. Struc. 408, 121 (1997).
Figure captions
Figure 1. Schematic of the "fishnet" Ag/Si/Ag structure: (a) top view, (b) side view. (c) Effective magnetic permeability μ_eff and (d) effective refractive index n_eff for the "fishnet" structure calculated by the FDTD method with the structural parameters given on panel (b), with pump off. (e) SEM images of a fabricated fishnet structure: the top frame has a lower magnification than the bottom frame.
Figure 2. (a) Transmittance and (b) reflectance spectra from the "fishnet" structure without pump (black dots) and with a pump fluence of 320 (open black dots) at zero pump-probe time delay. (c) Phase difference spectra for transmitted and reflected light without the pump (black and red dots, respectively) and with the pump (open black and red dots, respectively). (d) Real and imaginary parts of the refractive index deduced from experimental data without the pumping (black and red dots, respectively) and with the pumping (open black and red dots, respectively). (e) Variation of transmittance at the magnetic resonance as a function of the pump fluence.
Figure 3. (a) Pump-induced transmission change at the magnetic resonance of the fishnet (red dots) and pump-induced absorption variation from an amorphous (α−) Si film (black open dots) as functions of the probe time delay for a pump fluence of 320 μJ/cm2. (b) Cross-correlation trace of pump and probe pulses obtained from sum-frequency generation in a barium borate crystal (red dots) and the time-resolved pump-induced reflectance change from an amorphous (α−) Si film (open black dots).
Figure 4. FDTD simulation of effective refractive indices of the fishnet structure with various refractive index changes in the α−Si layer:
| [] |
[
"UNIVERSIDADE FEDERAL DE SERGIPE CENTRO DE CIÊNCIAS EXATAS E TECNOLOGIA DEPARTAMENTO DE COMPUTAÇÃO",
"UNIVERSIDADE FEDERAL DE SERGIPE CENTRO DE CIÊNCIAS EXATAS E TECNOLOGIA DEPARTAMENTO DE COMPUTAÇÃO"
] | [
"Sistema De Navegação \nDepartamento de Computação/UFS\nTrabalho de Conclusão de Curso\n\n",
"Autônomo Baseado \nDepartamento de Computação/UFS\nTrabalho de Conclusão de Curso\n\n",
"Visão Computacional \nDepartamento de Computação/UFS\nTrabalho de Conclusão de Curso\n\n",
"Michel Conrado \nDepartamento de Computação/UFS\nTrabalho de Conclusão de Curso\n\n",
"Cardoso Meneses \nDepartamento de Computação/UFS\nTrabalho de Conclusão de Curso\n\n"
] | [
"Departamento de Computação/UFS\nTrabalho de Conclusão de Curso\n",
"Departamento de Computação/UFS\nTrabalho de Conclusão de Curso\n",
"Departamento de Computação/UFS\nTrabalho de Conclusão de Curso\n",
"Departamento de Computação/UFS\nTrabalho de Conclusão de Curso\n",
"Departamento de Computação/UFS\nTrabalho de Conclusão de Curso\n"
] | [] | ResumoRobôs autônomos são utilizados como ferramenta para a solução de diversos tipos de problemas, como o mapeamento e o monitoramento de ambientes. Seja pelas condições adversas à presença humana ou mesmo pela necessidade de reduzirem-se custos, é certo que vários esforços vêm sendo somados para desenvolverem-se robôs com nível de autonomia cada vez maior. Estes devem ser capazes de se locomover em ambientes dinâmicos, sem contar com o auxílio de operadores humanos ou sistemas externos. Nota-se, portanto, que a forma de percepção e modelagem do ambiente torna-se significativamente relevante para a navegação. Dentre os principais métodos de sensoriamento, destacam-se aqueles baseados na visão. Através desta é possível gerar modelos com alto nível de detalhamento acerca do ambiente, uma vez que diversas características podem ser aferidas, tais como textura, cor e luminosidade. No entanto, as técnicas mais precisas de navegação autônoma baseadas em visão apresentam custo computacional superior ao suportado por plataformas móveis de baixo custo, tais como a Raspberry Pi. Esta, por sua vez, vem ganhando cada vez mais espaço em aplicações comerciais e científicas, devido ao seu eficiente gerenciamento de energia e às suas dimensões reduzidas. Sendo assim, o trabalho realizado teve como objetivo desenvolver um robô de baixo custo, controlado por uma Raspberry Pi e cujo sistema de navegação autônomo é baseado em visão. Para tanto, a estratégia utilizada consistiu na identificação de obstáculos a partir do reconhecimento de padrões de fluxo óptico. Através deste sinal é possível inferir a movimentação relativa entre o robô e os demais elementos inseridos no ambiente. Para estimá-lo, utilizou-se o algoritmo de Lucas-Kanade, o qual é capaz de ser executado pela Raspberry Pi sem que seu desempenho seja prejudicado. Finalmente, utilizou-se um classificador SVM para identificar padrões deste sinal associados à movimentação de obstáculos. O sistema desenvolvido foi avaliado considerando-se sua execução sobre uma base de dados formada por padrões de fluxo óptico extraídos de um ambiente real de navegação. Ao fim, seu desempenho foi comparado ao obtido por outros trabalhos relacionados. Constatou-se que a frequência de processamento deste sistema foi superior à apresentada pelos demais. Além disso, sua acurácia e seu custo de aquisição foram, respectivamente, superior e inferior ao da maioria dos trabalhos considerados.Palavras-chave: Navegação autônoma, Visão computacional, Fluxo óptico, Reconhecimento de padrões, Raspberry Pi.AbstractAutonomous robots are used as the tool to solve many kinds of problems, such as environmental mapping and monitoring. Either for adverse conditions related to human presence or even for the need to reduce costs, it is certain that many efforts have been made to develop robots with increasingly high level of autonomy. They must be capable of locomotion through dynamic environments, without human operators or assistant systems' help. It is noted, thus, that the form of perception and modeling of the environment becomes significantly relevant to navigation. Among the main sensing methods are those based on vision. Through this it is possible to create highly-detailed models about the environment, since many characteristics can be measure, such as texture, color and illumination. However, the most accurate vision based navigation techniques are computationally expensive to run on low-cost mobile platforms, as the Raspberry Pi. 
This computer, in turn, has been increasingly used in scientific and commercial applications, due to its efficient power supply management and reduced dimensions. Therefore, the goal of this work was to develop a low-cost robot, controlled by a Raspberry Pi, whose navigation system is based on vision. For this purpose, the strategy used consisted in identifying obstacles via optical flow pattern recognition. Through this signal it is possible to infer the relative displacement between the robot and other elements in the environment. Its estimation was done using the Lucas-Kanade algorithm, which can be executed by the Raspberry Pi without harming its performance. Finally, a SVM based classifier was used to identify patterns of this signal associated to obstacles movement. The developed system was evaluated considering its execution over an optical flow pattern dataset extracted from a real navigation environmet. In the end, it was verified that the processing frequency of the system was superior to the others. Furthermore, its accuracy and acquisition cost were, respectively, higher and lower than most of the cited works. | null | [
"https://arxiv.org/pdf/1710.06518v1.pdf"
] | 7,494,784 | 1710.06518 | 16e2a3291b65adec88303bb929cabeaeda7a66ab |
UNIVERSIDADE FEDERAL DE SERGIPE CENTRO DE CIÊNCIAS EXATAS E TECNOLOGIA DEPARTAMENTO DE COMPUTAÇÃO
17 Oct 2017
Sistema de Navegação Autônomo Baseado em Visão Computacional
Michel Conrado Cardoso Meneses
Departamento de Computação/UFS
Trabalho de Conclusão de Curso
ResumoRobôs autônomos são utilizados como ferramenta para a solução de diversos tipos de problemas, como o mapeamento e o monitoramento de ambientes. Seja pelas condições adversas à presença humana ou mesmo pela necessidade de reduzirem-se custos, é certo que vários esforços vêm sendo somados para desenvolverem-se robôs com nível de autonomia cada vez maior. Estes devem ser capazes de se locomover em ambientes dinâmicos, sem contar com o auxílio de operadores humanos ou sistemas externos. Nota-se, portanto, que a forma de percepção e modelagem do ambiente torna-se significativamente relevante para a navegação. Dentre os principais métodos de sensoriamento, destacam-se aqueles baseados na visão. Através desta é possível gerar modelos com alto nível de detalhamento acerca do ambiente, uma vez que diversas características podem ser aferidas, tais como textura, cor e luminosidade. No entanto, as técnicas mais precisas de navegação autônoma baseadas em visão apresentam custo computacional superior ao suportado por plataformas móveis de baixo custo, tais como a Raspberry Pi. Esta, por sua vez, vem ganhando cada vez mais espaço em aplicações comerciais e científicas, devido ao seu eficiente gerenciamento de energia e às suas dimensões reduzidas. Sendo assim, o trabalho realizado teve como objetivo desenvolver um robô de baixo custo, controlado por uma Raspberry Pi e cujo sistema de navegação autônomo é baseado em visão. Para tanto, a estratégia utilizada consistiu na identificação de obstáculos a partir do reconhecimento de padrões de fluxo óptico. Através deste sinal é possível inferir a movimentação relativa entre o robô e os demais elementos inseridos no ambiente. Para estimá-lo, utilizou-se o algoritmo de Lucas-Kanade, o qual é capaz de ser executado pela Raspberry Pi sem que seu desempenho seja prejudicado. Finalmente, utilizou-se um classificador SVM para identificar padrões deste sinal associados à movimentação de obstáculos. O sistema desenvolvido foi avaliado considerando-se sua execução sobre uma base de dados formada por padrões de fluxo óptico extraídos de um ambiente real de navegação. Ao fim, seu desempenho foi comparado ao obtido por outros trabalhos relacionados. Constatou-se que a frequência de processamento deste sistema foi superior à apresentada pelos demais. Além disso, sua acurácia e seu custo de aquisição foram, respectivamente, superior e inferior ao da maioria dos trabalhos considerados.Palavras-chave: Navegação autônoma, Visão computacional, Fluxo óptico, Reconhecimento de padrões, Raspberry Pi.AbstractAutonomous robots are used as the tool to solve many kinds of problems, such as environmental mapping and monitoring. Either for adverse conditions related to human presence or even for the need to reduce costs, it is certain that many efforts have been made to develop robots with increasingly high level of autonomy. They must be capable of locomotion through dynamic environments, without human operators or assistant systems' help. It is noted, thus, that the form of perception and modeling of the environment becomes significantly relevant to navigation. Among the main sensing methods are those based on vision. Through this it is possible to create highly-detailed models about the environment, since many characteristics can be measure, such as texture, color and illumination. However, the most accurate vision based navigation techniques are computationally expensive to run on low-cost mobile platforms, as the Raspberry Pi. 
This computer, in turn, has been increasingly used in scientific and commercial applications, due to its efficient power supply management and reduced dimensions. Therefore, the goal of this work was to develop a low-cost robot, controlled by a Raspberry Pi, whose navigation system is based on vision. For this purpose, the strategy used consisted in identifying obstacles via optical flow pattern recognition. Through this signal it is possible to infer the relative displacement between the robot and other elements in the environment. Its estimation was done using the Lucas-Kanade algorithm, which can be executed by the Raspberry Pi without harming its performance. Finally, a SVM based classifier was used to identify patterns of this signal associated to obstacles movement. The developed system was evaluated considering its execution over an optical flow pattern dataset extracted from a real navigation environmet. In the end, it was verified that the processing frequency of the system was superior to the others. Furthermore, its accuracy and acquisition cost were, respectively, higher and lower than most of the cited works.
Agradecimentos
A realização deste trabalho, juntamente com todo o contexto que o cerca, deve-se aos seguintes envolvidos:
• Minha família, maior apoiadora e principal inspiração para os meus sonhos;
• Os amigos que estiveram ao meu lado durante esta jornada, com os quais pude compartilhar experiências e aprendizados valiosíssimos;
• Os professores que se empenharam em transmitir seu conhecimento e cujos ensinamentos jamais serão esquecidos;
• Todos os colegas e instituições que direta ou indiretamente contribuíram para esta realização.
A todos, minha imensurável gratidão.
Your time is limited, so don't waste it living someone else's life. Don't be trapped by dogma -which is living with the results of other people's thinking.
Don't let the noise of others' opinions drown out your own inner voice. And most important, have the courage to follow your heart and intuition. (Steve Jobs) 1 Introdução O desenvolvimento de robôs capazes de se locomover de maneira autônoma pode ser considerado um dos temas de estudo mais promissores da robótica (BEKEY, 2005). Tal importância deve-se à quantidade quase que irrestrita de aplicações relevantes e inovadoras proporcionadas pela utilização destes instrumentos. De uma maneira generalizada, estas aplicações podem ser catalogadas de acordo com uma das seguintes categorias: transporte, busca, mapeamento e monitoramento (SIEGWART; NOURBAKHSH, 2004). No entanto, tal categorização não significa que o uso dessas ferramentas está restrito a um contexto ou a uma área de atuação específica. De fato, atualmente é possível observar a utilização de robôs autônomos como a principal ferramenta para a solução dos mais variados tipos de problemas. Como exemplos, podem-se citar a utilização de robôs aquáticos para a captura de enxames de águas-vivas com o intuito de realizar o controle populacional de tal espécie (Figura 1), o uso de veículos aéreos autônomos para monitorar o tráfego de veículos em grandes metrópoles e assim extrair dados que possam ajudar a diagnosticar e a solucionar problemas no trânsito da região (WANG; CHEN; YIN, 2015), a aplicação de robôs em ambientes industriais com o objetivo de transportar mercadorias pesadas entre pontos de carregamento e descarregamento (KADIR et al., 2015) O primeiro subprocesso refere-se à elaboração de um mapa do ambiente que envolve o robô. Pelo fato de ser apenas um modelo, este mapa apresenta uma quantidade restrita de informações acerca do ambiente, sendo que seu nível de detalhamento é estipulado a partir das necessidades do robô e das especificações do caso de uso que o mesmo deve realizar. A elaboração deste mapa é feita com base na medição de algumas das características do ambiente, como variação da taxa de ocupação, distribuição de profundidade, geometria, entre outras. A partir do mapa gerado, parte-se para o segundo subprocesso, o qual tem como objetivo identificar a posição do robô neste mapa. Este processo de localização pode ser realizado tanto com base em novas medições do ambiente como também na extração de dados do próprio robô, processo este conhecido na literatura como odometria (CHO et al., 2013). Conhecendo-se a localização do robô no ambiente, é possível então traçar o caminho a ser percorrido. Em geral, escolhe-se o trajeto que apresente maior nível de segurança e menor custo para o robô. Por exemplo, pode-se buscar o caminho que apresente a menor quantidade de obstáculos, que leve o robô ao ponto de destino em menor tempo ou mesmo que apresente superfície com maior regularidade (GANGANATH; CHENG; TSE, 2015). Finalmente, definido o caminho a ser percorrido parte-se para a última etapa do ciclo de navegação do robô, a qual consiste em definir a sequência de acionamento dos seus atuadores que efetivamente garanta o cumprimento do trajeto estabelecido. Esta sequência de acionamento pode consistir em alterar a direção das rodas do robô, em variar a velocidade de rotação dos seus motores, etc.
Obviamente, cada um destes subprocessos pode apresentar maior ou menor relevância a depender das características do contexto no qual o robô está inserido. Por exemplo, há situações nas quais antes mesmo do início da navegação o robô já possui alguma representação do ambiente. Nesses casos, geralmente não há a necessidade de se criar um novo mapa, pois se assume que o ambiente em questão seja estático, isto é, suas características relevantes não são alteradas ao longo do tempo. Há também casos nos quais o robô é capaz de receber, através de outro dispositivo, informações sobre sua própria localização. Nesse tipo de contexto, não há a necessidade de ser estimado tal posicionamento através de medições do ambiente. É o caso da navegação baseada em GPS (LEE;CHUNG, 2015). A elaboração do caminho a ser percorrido também apresenta relevância variável a depender das condições impostas ao robô. Há contextos que permitem a construção de tal caminho antes mesmo de iniciar a navegação. Nesses casos, o trajeto a ser seguido é calculado uma única vez, não havendo mais a necessidade do robô realizar este cálculo novamente ao longo de sua locomoção. Finalmente, pode-se dizer que a relevância do controle dos atuadores utilizados pelo robô está relacionada à complexidade mecânica apresentada pelos mesmos. Por exemplo, no caso de robôs cujos atuadores correspondem a rodas, seu controle pode resumir-se simplesmente a acionar ou não tais atuadores. Já no caso de robôs cuja locomoção baseia-se no movimento de pernas mecânicas, o controle dos atuadores pode envolver questões mais complexas, como a manutenção do equilíbrio ou mesmo a definição da amplitude de cada passada (Figura 2).
Figura 2 -Robô AiDIN III, cujos atuadores são pernas articuladas (KOO et al., 2013).
Dessa forma, em contextos nos quais o robô deve apresentar autonomia completa percebe-se que o problema de percepção e modelagem do ambiente torna-se significativamente relevante para os demais subprocessos envolvidos na navegação. Esta autonomia não se refere somente à total independência de intervenção humana durante a navegação, mas também à não utilização de qualquer espécie de sistema auxiliar externo (como radares ou sistemas de geolocalização) e até mesmo à necessidade de conhecimentos prévios mínimos a respeito do ambiente a ser explorado. De fato, nestas situações a única fonte de informação sobre o ambiente utilizada pelos demais subprocessos corresponde justamente ao modelo gerado pelo próprio robô. Consequentemente, o nível de detalhamento deste modelo é um dos principais responsáveis por definir a complexidade e as limitações envolvidas na realização das demais etapas da navegação. É devido a isso que as técnicas existentes para resolução do problema de navegação autônoma podem ser superficialmente categorizadas de acordo com o método utilizado para o sensoriamento do ambiente e para a representação das informações extraídas. Dentre os principais métodos destacam-se aqueles baseados na utilização de sensores de alcance e aqueles baseados em visão (SIEGWART; NOURBAKHSH, 2004).
Assim como qualquer sensor ativo, sensores de alcance funcionam com base na emissão de energia e na medição da resposta gerada pelo ambiente (Figura 3). Mais especificamente, estes sensores emitem algum tipo de sinal (como uma onda sonora ou um sinal eletromagnético) e medem o tempo que o mesmo leva para ser refletido pelo ambiente. Este tipo de sensor é amplamente utilizado em sistemas robóticos devido à sua simples manipulação e pelo fato das medidas obtidas serem de fácil interpretação. No entanto, por serem ativos há sempre o risco destes sensores alterarem características de interesse do ambiente. Além disso, a utilização de sensores mais simples (como o ultrassônico) provê apenas medidas unidimensionais do ambiente, de modo que o modelo gerado é consideravelmente simplificado. Ainda, sensores que permitem a obtenção de medidas multidimensionais, como lasers (SUI et al., 2011), apresentam alto custo de aquisição, principalmente quando utilizados em sistemas robóticos de pequeno porte. Por sua vez, o sensoriamento através de visão baseia-se na utilização de câmeras para a obtenção de dados tanto dimensionais quanto relacionados a características puramente visuais do ambiente, como textura, cor, luminosidade, etc. Em muitas situações, estas características apresentam importância fundamental para a correta modelagem do ambiente, sendo que sensores de alcance são incapazes de medi-las. Além disso, o custo de aquisição de câmeras com desempenho razoável é consideravelmente menor que o de lasers. Sendo assim, técnicas de navegação autônoma baseadas em visão têm sido amplamente desenvolvidas, sendo atualmente aplicadas em diversos problemas relevantes.
Figura 3 -Funcionamento de sensor ultrassônico de alcance: (1) sinal ultrassônico transmitido pelo sensor; (2) sinal refletido pelo obstáculo.
As técnicas de sensoriamento baseadas em visão podem ser divididas em duas classes: as que utilizam visão estereoscópica e as que usam visão monocular (SZELISKI, 2010). Técnicas que utilizam visão estereoscópica são inspiradas no funcionamento do sistema de visão humano e, de modo simplificado, consistem na captura de múltiplas imagens do ambiente sob diferentes perspectivas para a construção de um modelo tridimensional. Para isso, identificam-se pontos correspondentes contidos nas múltiplas imagens relacionadas a uma mesma cena. Esta etapa geralmente é realizada utilizando-se algoritmos de correlação (por exemplo, o SIFT ou o SURF (KOSTAVELIS et al., 2016)). Ao identificá-los, realiza-se o cálculo da profundidade através de algum método de triangulação. Dessa forma, é possível obter um modelo tridimensional do ambiente. Em geral, os modelos gerados através de visão estereoscópica apresentam alta exatidão e baixo índice de incerteza (LINS et al., 2016). No entanto, o custo computacional exigido para a execução dos algoritmos de correlação e de triangulação é elevado. Sendo assim, torna-se inviável a utilização desta abordagem em plataformas robóticas com especificações de hardware mais econômicas ou que estejam inseridas em contextos que exijam baixo tempo de processamento.
Por sua vez, as técnicas baseadas em visão monocular utilizam algoritmos que, em geral, têm como objetivo identificar padrões de cores, texturas e símbolos (LI; BIRCHFIELD, 2010), rastrear bordas (CONRAD; DESOUZA, 2010) e calcular a movimentação aparente das imagens capturadas (BOROUJENI, 2012). Estas operações apresentam baixo custo computacional, principalmente quando comparadas às realizadas por técnicas de visão estereoscópica. Devido a isso, abordagens baseadas em visão monocular vêm sendo amplamente aplicadas em sistemas robóticos de navegação autônoma, com ênfase naqueles de pequeno porte, de baixo custo e que devem operar em alta frequência. Ainda assim, observa-se a existência de plataformas móveis nas quais técnicas de visão monocular ainda não foram devidamente investigadas no contexto da navegação autônoma de robôs. Plataformas mais recentes, como as exibidas na Figura 4, apresentam especificações de hardware e software que as tornam adequadas para serem aplicadas neste tipo de contexto. Sendo assim, percebe-se a oportunidade de explorar o potencial destas plataformas para a implementação de sistemas de navegação autônomos baseados em visão monocular.
Motivação
A motivação encontrada para o desenvolvimento deste trabalho está relacionada às características promissoras de robôs capazes de se locomover de maneira completamente autônoma, ou seja, sem o auxílio de agentes ou sistemas externos. Devido a tais características, estes instrumentos podem ser aplicados numa grande variedade de situações sem que haja a necessidade de adaptá-los ou mesmo de alterar o ambiente em questão. Além disso, percebe-se que a maioria dos trabalhos voltados para o desenvolvimento deste tipo de ferramenta utiliza plataformas computacionais de alto custo de aquisição, dificultando sua utilização em aplicações cotidianas. Finalmente, o surgimento de novas plataformas de programação móvel capazes de apresentar poder de processamento relativamente alto, eficiente gerenciamento de energia e custo de aquisição não tão elevado representa uma oportunidade para o desenvolvimento de sistemas robóticos menos custosos e com dimensões reduzidas. Dentre tais plataformas, destaca-se a Raspberry Pi (FOUNDATION, 2016c), a qual é amplamente comercializada e possui vasta documentação.
Objetivos
O objetivo principal deste trabalho é desenvolver um sistema robótico capaz de se locomover de maneira autônoma através de visão monocular e que seja baseado na plataforma de baixo custo Raspberry Pi.
Para tanto, os objetivos específicos deste trabalho são:
• Realizar uma revisão bibliográfica sobre as principais técnicas de visão monocular utilizadas para o desenvolvimento de robôs autônomos;
• Investigar o desempenho da plataforma Raspberry Pi ao executar alguns dos principais algoritmos utilizados em visão computacional;
• Desenvolver a plataforma robótica na qual o sistema de navegação será inserido;
• Desenvolver uma estratégia de navegação autônoma baseada em visão monocular capaz de ser executada pela Raspberry Pi;
• Comparar o desempenho do sistema final desenvolvido com o dos demais sistemas descritos na literatura.
Estrutura do trabalho
A sequência deste trabalho está organizada da seguinte forma:
• Capítulo 2 consiste numa revisão bibliográfica acerca das principais estratégias de navegação autônoma;
• Capítulo 3 apresenta o conceito de fluxo óptico e discute a detecção de obstáculos com base no reconhecimento deste padrão;
• Capítulo 4 apresenta os detalhes da plataforma física desenvolvida e descreve o funcionamento do sistema de navegação proposto;
• Capítulo 5 descreve a metodologia utilizada para avaliar o sistema desenvolvido e discute os resultados obtidos.
2 Trabalhos Relacionados
Diferentes estratégias de navegação autônoma baseadas em visão monocular podem ser encontradas em publicações recentes. As próximas seções descrevem aquelas consideradas mais relevantes para este trabalho. Ao fim deste capítulo é realizada uma análise comparativa entre os sistemas de navegação desenvolvidos em cada um dos trabalhos citados.
Detecção de piso por homografia
A detecção de piso corresponde ao processo de identificação dos pontos de uma imagem que representam a superfície do ambiente capaz de ser percorrida. Esta informação é relevante para a navegação pois permite a identificação dos possíveis objetos contidos na cena. Através deste conhecimento é possível elaborar algoritmos capazes de evitar obstáculos, rastrear alvos, planejar rotas, dentre outros. O sistema proposto por (CONRAD; DESOUZA, 2010) realiza a detecção do piso através de uma abordagem probabilística baseada em informações de homografia obtidas a partir da correlação de pontos da imagem. Mais precisamente, este sistema utiliza uma variação do algoritmo EM (Expectation Maximization) para realizar o agrupamento nãosupervisionado dos pontos, de modo a separar aqueles que representam o piso daqueles que não. Para isso, são apresentados ao EM os parâmetros da matriz de homografia obtida com base nos pontos correlacionados presentes em duas imagens subsequentes. Esta correlação é encontrada a partir da aplicação do algoritmo SIFT. Experimentalmente, o classificador gerado apresentou acurácia equivalente a 99,6%. No entanto, a plataforma utilizada para os testes tinha como base um processador com frequência igual a 2,0GHz. Este tipo de configuração é inviável para sistemas robóticos de baixo custo e de tamanho reduzido.
Detecção de piso por segmentação de linhas
A detecção de piso também é tratada por (LI; BIRCHFIELD, 2010). Este propõe um algoritmo de detecção baseado na classificação de linhas horizontais e verticais. Estas linhas correspondem às bordas presentes nas imagens capturadas do ambiente. Após detectá-las, utilizase um classificador SVM para determinar se tais linhas representam de fato uma fronteira entre o piso e as demais estruturas presentes na imagem. Durante os experimentos realizados, este algoritmo apresentou acurácia igual a 89,1%. Além disso, pelo fato do classificador desenvolvido utilizar diferentes características visuais para ponderar o valor final da classe indicada, o método proposto apresentou alto índice de acerto mesmo em ambientes com grande incidência de luz e reflexão. No entanto, a taxa de acerto foi baixa para ambientes que apresentavam pisos com texturas não-uniformes. Mais ainda, o sistema final desenvolvido foi capaz de processar apenas 5 imagens por segundo.
Segmentação de fluxo óptico
Por sua vez, (CALDEIRA; SCHNEEBELI; SARCINELLI-FILHO, 2007) sugere a segmentação de objetos com base exclusivamente na sua movimentação relativa ao observador. Esta é conhecida como fluxo óptico. O objetivo desta estratégia é agrupar pontos da imagem que apresentam o mesmo tipo de movimento. Assim, cada agrupamento pode ser associado a estruturas tridimensionais inseridas no ambiente. Após esta segmentação, utiliza-se o fluxo óptico de cada região da imagem para estimar o tempo em que possíveis objetos presentes no ambiente entrarão em contato com o robô. Dessa forma, podem-se determinar as regiões mais seguras para a navegação. Durante experimentos, o sistema desenvolvido foi capaz de realizar todo o processo de segmentação, cálculo do tempo de contato e controle do robô em cerca de 135ms. Este tempo é razoável, considerando-se que todo o processo é realizado pelo próprio robô durante a navegação.
Tempo de contato
O tempo de contato entre objetos contidos no ambiente e o robô também é calculado por (SANCHEZ-GARCIA et al., 2015). Neste trabalho, inicialmente utiliza-se a segmentação por cor para identificar um possível obstáculo. Ao identificá-lo, calcula-se seu tamanho aparente com base nas bordas da região segmentada. Em seguida, utiliza-se a variação deste tamanho para calcular o coeficiente Tau Margin. Este valor é proporcional ao tempo de contato entre o objeto segmentado e o robô. Quando este valor é menor que um determinado limiar deve-se alterar a direção da navegação. Para isso, calcula-se o fluxo óptico em diferentes regiões da imagem com o intuito de verificar o nível de movimentação relativa. Finalmente, o robô é desviado para a região com menor intensidade de fluxo óptico. O método proposto por este trabalho tem como vantagem o baixo custo computacional, uma vez que a maior parte das operações realizadas ocorrem apenas sobre a região segmentada da imagem. No entanto, por realizar a segmentação baseada exclusivamente em cor, este método é incapaz de identificar obstáculos com texturas não-uniformes ou que apresentem baixo contraste em relação ao ambiente.
Classificação de fluxo óptico
Finalmente, (SHANKAR; VATSA; SUJIT, 2014) propõe um sistema de navegação cuja identificação de obstáculos é realizada exclusivamente com base na detecção de padrões de fluxo óptico. Para isso, utiliza-se um classificador SVM, o qual é treinado antes da navegação a partir de amostras de fluxo óptico extraídas de ambientes semelhantes àquele no qual o robô deverá navegar autonomamente. Após a fase de treinamento, o classificador embarcado no robô é utilizado para indicar a existência ou não de obstáculos em cada imagem capturada. Nos casos em que há a indicação de obstáculos, desvia-se o robô para a direção na qual há menor índice de fluxo óptico. A intensidade do desvio é proporcional à taxa de confiança do classificador. Experimentalmente, o sistema apresentou acurácia equivalente a 88,8%. Além disso, a taxa de processamento do robô foi de aproximadamente 7 imagens por segundo.
Análise comparativa
Com base nos dados fornecidos em cada um dos trabalhos citados anteriormente, foi construída a Tabela 1. Nela, são apresentados os principais parâmetros comparativos entre os sistemas de navegação propostos nestes trabalhos. Vale ressaltar que dentre os parâmetros listados, encontra-se o custo estimado de aquisição dos seus componentes físicos. Além disso, como cada trabalho utilizou uma metodologia própria de avaliação, os valores de acurácia e da taxa de processamento não devem ser utilizados diretamente para estabelecerem-se relações de superioridade. Tais parâmetros, portanto, apenas podem ser considerados a título de comparação estimada.
Tabela 1 -Análise comparativa entre os trabalhos relacionados citados neste capítulo. Ao analisar-se a Tabela 1, torna-se evidente que as técnicas de navegação baseadas na detecção de piso exigem plataformas físicas robustas e com grande poder de processamento. Este fator contribui diretamente para o alto custo financeiro do sistema de navegação. Em contrapartida, aquele baseado na segmentação de fluxo óptico apresentou a maior frequência de processamento. Além disso, o sistema baseado na classificação deste mesmo sinal apresentou o menor custo financeiro sem que sua acurácia fosse prejudicada. Sendo assim, percebe-se que a utilização do fluxo óptico como característica primária a ser considerada para a navegação demostra ser mais adequada para sistemas que devem apresentar baixo custo financeiro e computacional, como é o caso daquele pretendido por este trabalho.
3 Detecção de Obstáculos por Classificação de Fluxo Óptico
Fluxo óptico corresponde ao conjunto de vetores que descrevem a movimentação aparente dos padrões de brilho de uma imagem. Esta movimentação geralmente está associada ao deslocamento relativo de objetos contidos numa sequência de imagens e a câmera (Figura 5). Consequentemente, através deste fluxo é possível mensurar tal movimento (HORN; SCHUNCK, 1981).
Figura 5 -Exemplo de fluxo óptico percebido entre duas imagens em sequência. Os vetores ilustrados em 5c representam a movimentação aparente percebida entre as imagens 5a e 5b.
Para o problema de navegação autônoma este tipo de informação é extremamente relevante, uma vez que a mesma está relacionada ao movimento percebido pelo robô acerca dos demais elementos contidos no ambiente. Assim, através do fluxo óptico é possível determinar suas dimensões (CALDEIRA; SCHNEEBELI; SARCINELLI-FILHO, 2007) e posições relativas (CROON, 2015). Ainda, tais características podem ser generalizadas e, consequentemente, transformadas em conhecimento útil acerca do ambiente (SHANKAR; VATSA; SUJIT, 2014). Dentre as aplicações deste conhecimento está a detecção de obstáculos. Esta tarefa corresponde ao processo de identificação de elementos que bloqueiam a passagem do robô (BOROUJENI, 2012). Devido à sua relevância, entende-se que tal processo representa o requisito mínimo para qualquer sistema de navegação autônomo. Sendo assim, buscou-se investigar sua realização com base no reconhecimento de padrões de fluxo óptico.
As seções a seguir apresentam a definição formal de fluxo óptico e discutem o problema de detecção de obstáculos a partir da classificação deste padrão.
Fluxo Óptico
Seja a intensidade do brilho de uma imagem num instante t definida pela função I( x,t), onde x = (x, y) T corresponde à posição de cada pixel. Considerando-se que em t + 1 tal intensidade é transladada e mantem-se constante, tem-se que:
I( x,t) = I( x + u,t + 1), (3.1) onde u = (u 1 , u 2 ) T representa o deslocamento no plano 2D e corresponde, portanto, ao vetor de fluxo óptico.
A Equação 3.1 é geralmente utilizada como ponto de partida para o desenvolvimento de estimadores de fluxo. Muito embora a constância da intensidade de brilho aparente ser uma suposição não realista, na prática consegue-se estimar fluxos com grande precisão (FLEET; WEISS, 2005).
A função de brilho transladada pode ser aproximada por uma série de Taylor de primeira ordem da seguinte forma:
I( x + u,t + 1) ≈ I( x,t) + u.∇I( x,t) + I t ( x,t), (3.2) sendo ∇I ≡ (I x , I y ) o vetor gradiente espacial discreto de I(x, y,t), I t a derivada parcial temporal discreta de I(x, y,t) e u.∇I( x,t) o produto escalar.
Combinando-se a Equação 3.1 e a Equação 3.2, chega-se à seguinte expressão:
u.∇I( x,t) + I t ( x,t) ≈ 0 (3.3)
A Equação 3.3 é conhecida como equação de restrição de gradiente (ANTON; BIVENS; DAVIS, 2014). Apenas a partir desta equação não é possível determinar os valores u 1 e u 2 . É necessário que outras restrições sejam impostas. Na literatura, são listados vários métodos para a resolução dessa equação. Dentre os principais, encontra-se o método de Lucas-Kanade.
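Apenas como ilustração, o trecho em Python a seguir mostra uma forma possível de se estimarem as derivadas discretas I_x, I_y e I_t que aparecem na Equação 3.3, a partir de dois quadros consecutivos em tons de cinza. A escolha de numpy, do operador np.gradient e da diferença simples no tempo é uma suposição deste esboço, e não uma descrição da implementação adotada neste trabalho.

# Esboço mínimo (suposição deste texto): derivadas discretas da restrição de gradiente (Eq. 3.3).
import numpy as np

def gradientes(frame_anterior, frame_atual):
    """Retorna I_x, I_y e I_t a partir de dois quadros em escala de cinza."""
    I1 = frame_anterior.astype(np.float32)
    I2 = frame_atual.astype(np.float32)
    # Derivadas espaciais aproximadas por diferenças centrais no quadro atual;
    # np.gradient devolve (derivada ao longo das linhas, derivada ao longo das colunas).
    I_y, I_x = np.gradient(I2)
    # Derivada temporal aproximada pela diferença simples entre os quadros.
    I_t = I2 - I1
    return I_x, I_y, I_t

Essas três matrizes são justamente os termos utilizados pelo método de Lucas-Kanade, descrito na seção seguinte.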
Algoritmo de Lucas-Kanade
Para solucionar a Equação 3.3, o algoritmo de Lucas-Kanade (LUCAS; KANADE, 1981) propõe que o fluxo u seja considerado constante para pontos da imagem pertencentes a uma mesma vizinhança de tamanho N. Dessa forma, passa-se a contar com N equações de restrição de gradiente através das quais pode-se determinar u. Para isso, utiliza-se o método dos mínimos quadrados (FILHO, 2016) para estimar-se o valor de u que minimize a função de erro:
E( u) = ∑ x g( x).[ u.∇I( x,t) + I t ( x,t)] 2 , (3.4)
onde g( x) é uma função de ponderação utilizada para dar maior relevância a restrições de pontos específicos da vizinhança.
Finalmente, o valor mínimo de E( u) pode ser encontrado a partir das equações:
∂E(u₁, u₂)/∂u₁ = ∑_x g(x)·[u₁ I_x² + u₂ I_x I_y + I_x I_t] = 0,    (3.5)
∂E(u₁, u₂)/∂u₂ = ∑_x g(x)·[u₁ I_x I_y + u₂ I_y² + I_y I_t] = 0.    (3.6)
Por estimar o fluxo com base no cálculo de derivadas de primeira e segunda ordem referentes a pontos contidos numa mesma vizinhança, o algoritmo de Lucas-Kanade apresenta baixo custo computacional (BARRON; FLEET; BEAUCHEMIN, 1994). No entanto, para que a suposição da igualdade do fluxo nas vizinhanças seja válida, é necessário que a velocidade da movimentação relativa entre a câmera e os objetos contidos na imagem seja aproximadamente constante. Além disso, é comum a utilização de filtros passa-baixa para a suavização da imagem, de modo a reduzirem-se os termos derivativos de ordem mais alta e assim tornar a aproximação da Equação 3.2 válida.
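Para tornar o procedimento mais concreto, o esboço abaixo resolve, com numpy, o sistema 2×2 obtido das Equações 3.5 e 3.6 para uma única vizinhança de pixels, com ponderação uniforme g(x) = 1. Trata-se apenas de uma ilustração; em uma implementação prática, rotinas prontas como cv2.calcOpticalFlowPyrLK, da biblioteca OpenCV, executam esse mesmo cálculo de forma piramidal e otimizada.

# Esboço mínimo (suposição deste texto): fluxo (u1, u2) de uma vizinhança pelo
# método de Lucas-Kanade, com ponderação uniforme g(x) = 1.
import numpy as np

def fluxo_lucas_kanade(Ix_viz, Iy_viz, It_viz):
    """Recebe as derivadas I_x, I_y e I_t de uma vizinhança (arrays 2D) e devolve
    o vetor u = (u1, u2) que minimiza a função de erro E(u) da Equação 3.4."""
    ix = Ix_viz.ravel()
    iy = Iy_viz.ravel()
    it = It_viz.ravel()
    # Sistema normal 2x2:
    # [sum(Ix^2)  sum(IxIy); sum(IxIy)  sum(Iy^2)] u = -[sum(IxIt); sum(IyIt)]
    A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    # lstsq trata vizinhanças mal condicionadas (regiões com pouca textura).
    u, *_ = np.linalg.lstsq(A, b, rcond=None)
    return u  # u[0] = u1, u[1] = u2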
Reconhecimento de Padrões
As principais definições a respeito de reconhecimento de padrões podem ser apresentadas a partir da seguinte ilustração: considerem-se as duas espécies ilustradas na Figura 6. Uma delas corresponde ao coelho europeu (Oryctolagus cuniculus), enquanto a outra refere-se à lebre europeia (Lepus europaeus). Ambas as espécies apresentam características físicas bem semelhantes, como as orelhas compridas e o formato da cabeça, por exemplo. No entanto, ao analisá-las com mais rigor, é possível notar que a lebre possui orelhas mais compridas e aparenta ter um corpo com dimensões maiores. De fato, sabe-se que o coelho europeu adulto pesa em torno de 2,77Kg (KRAUS, 2013) e possui orelhas com comprimento médio igual a 7,8cm (QUESENBERRY; CARPENTER, 2011). Por sua vez, a lebre europeia pesa cerca de 3,33Kg e suas orelhas chegam a atingir em média 12,78cm (ANGELICI; LUISELLI, 2007). Assim, percebe-se que, num primeiro momento, tais características podem ser úteis para a identificação de integrantes de cada umas destas espécies.
(a) Oryctolagus cuniculus (coelho europeu).
(b) Lepus europaeus (lebre europeia).
Figura 6 -Exemplo de espécies semelhantes cuja principal diferença encontra-se no peso e no comprimento das orelhas.
Com base nas informações apresentadas, suponha-se que foram coletadas medidas referentes ao peso e ao comprimento das orelha de alguns animais pertencentes às espécies citadas. Estas medidas são ilustradas na Tabela 2 e na Tabela 3.
Tabela 2 -Características de possíveis amostras da espécie Oryctolagus cuniculus (coelho europeu). Ao analisar-se a Figura 7, nota-se que os pontos refentes aos membros da mesma espécie localizam-se em regiões bem definidas, de forma a ser possível identificá-los com certa facilidade. De fato, a separação entre tais regiões pode ser sinalizada através de uma reta, como ilustrado na que diferenciam os membros de cada espécie até a suposição da classe à qual pertence um elemento desconhecido, corresponde a uma tarefa de reconhecimento de padrões. Um padrão nada mais é do que um objeto definido por um conjunto de características (THEODORIDIS, 2003). Na situação apresentada até então, os coelhos e as lebres correspondem a objetos. palavras, foi possível identificá-lo.
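Apenas para ilustrar numericamente a ideia de separar as duas espécies por uma reta, o esboço abaixo treina um classificador linear sobre valores fictícios de peso e comprimento de orelha, gerados em torno das médias citadas no texto (2,77 kg e 7,8 cm para o coelho; 3,33 kg e 12,78 cm para a lebre). Os dados são hipotéticos e o uso de scikit-learn é apenas uma escolha deste esboço.

# Esboço mínimo com dados hipotéticos: separação linear entre coelhos e lebres
# a partir de duas características (peso em kg, comprimento da orelha em cm).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
coelhos = rng.normal(loc=[2.77, 7.8], scale=[0.30, 0.50], size=(20, 2))   # amostras fictícias
lebres = rng.normal(loc=[3.33, 12.78], scale=[0.30, 0.80], size=(20, 2))  # amostras fictícias

X = np.vstack([coelhos, lebres])
y = np.array([0] * 20 + [1] * 20)   # 0 = coelho, 1 = lebre

modelo = SVC(kernel="linear").fit(X, y)
# Classificação de um animal desconhecido (valores também hipotéticos).
print(modelo.predict([[3.0, 11.0]]))   # provavelmente [1] (lebre)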
O processo de reconhecimento ilustrado pode ser estendido para vários outros problemas, nos quais seja preciso levar em consideração mais do que apenas duas características de cada objeto. De forma geral, um padrão pode ser representado como um vetor de características da seguinte maneira:
x = (X 1 , X 2 , ..., X N ) (3.7)
Na Equação 3.7, N corresponde à quantidade de características consideradas e X N ao valor da N-ésima característica. Como pode-se perceber, para problemas onde os vetores de características apresentam mais do que três dimensões, torna-se inviável determinar graficamente um modelo capaz de separá-los corretamente. Além disso, pode haver situações nas quais a fronteira de classificação não seja linear, de modo que a regra de separação a ser estabelecida seja mais complexa do que um reta. Finalmente, em muitas situações os padrões avaliados podem ser separados por mais de um único modelo. Nestes casos, surge a necessidade de escolher-se aquele capaz de otimizar algum parâmetro específico exigido pelo contexto em questão, como a distância entre as regiões separadas, a simplicidade da fronteira, o ajuste do modelo aos padrões conhecidos, dentre outros.
A depender do problema tratado, pode-se preferir separar os padrões analisados com base em mais de duas regiões. Independente da quantidade escolhida, é importante ressaltar que os modelos gerados durante o processo de reconhecimento de padrões são apenas capazes de estimar a classe a qual pertencem os objetos. Tal estimativa, portanto, não garante que o reconhecimento seja correto. Sendo assim, durante a fase de criação do modelo (também conhecida como treinamento) busca-se estudar padrões que apresentem características marcantes sobre seu conjunto. Estes objetos geralmente fazem parte de uma base de treino utilizada exclusivamente para a criação do modelo. Uma vez criado, sua qualidade é aferida com base na apresentação de novos objetos, os quais fazem parte de uma base de validação. Para isso, seus rótulos são inicialmente escondidos, de modo que o modelo treinado seja utilizado para predizê-los. Ao final, as classes atribuídas através do modelo são comparadas aos rótulos reais dos padrões apresentados. A partir daí, é possível inferir a qualidade do modelo gerado. Ao analisá-la, pode-se decidir realizar novos ajustes no modelo e testá-lo novamente. Todo este processo é realizado até que a qualidade aferida seja satisfatória, de modo a minimizar a quantidade de padrões mal classificados.
Finally, it is worth highlighting the importance of choosing the features to be considered when building the model. In general, the patterns studied in recognition problems have many features. This is the case for the two species in the earlier illustration: several parameters could have been used to characterize them. However, it was observed that weight and ear length alone were already highly correlated with the species, so only these attributes were considered, allowing the resulting model to separate all the known objects in the simplest possible way. In more complex problems, such as those related to computer vision, the number of features to be considered is naturally much larger. Even so, the goal should be to select the smallest possible number of features, so as to make the use of the model for classification as simple as possible (DUDA; HART; STORK, 2001).
Obstacle Detection
Obstacle detection is the process of identifying objects that block the path traveled by an agent. One of the main characteristics of such objects is their pronounced relative motion: their intensity of approach contrasts with that of the other elements in the scene, since these are farther away from the agent (Figure 10).
In turn, the relative motion between the elements of an image and the camera can be estimated by computing the optical flow. Thus, drawing on the pattern recognition concepts above, optical flow samples extracted from several scenes with and without obstacles can be used to build a model capable of separating these patterns. In other words, a classifier can be trained on already labeled objects and then applied to indicate whether or not a new scene contains an obstacle. Obstacle detection based on optical flow recognition can therefore be performed by a classifier whose supervised learning relies on the following definitions:
• x_i = (F_1, ..., F_n) is the feature vector associated with the i-th image of the sequence under consideration;
• F_n is the pair (v_1, v_2), where v_1 and v_2 are the norm and the phase of the optical flow vector associated with the n-th point of the image, respectively;
• each x_i is associated with a value y_i ∈ {−1, +1}, where the labels −1 and +1 indicate the absence and the presence of obstacles in image i, respectively.
As seen above, the pattern under analysis is the motion perceived between a sequence of images. Its feature vector is composed of the optical flow estimated at n distinct points, and the information used to represent each flow vector is its norm and its phase. The former is justified by the contrast in motion intensity discussed earlier. The phase, in turn, is considered because a potential obstacle may be moving out of the agent's trajectory, in which case it no longer represents a blockage. This is the case in a sharp turn, where the flow intensity is high but the object ahead is already being avoided by the agent (Figure 11). Finally, for the problem at hand it is sufficient to assume the existence of only two classes: the motion associated with obstacles and the motion of non-blocking elements.
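To make this representation concrete, the sketch below builds a feature vector x_i from a set of flow displacement vectors, storing the norm and phase of each one. It is an illustrative example rather than the exact code used in this work; the function name and the array shapes are assumptions.

```python
import numpy as np

def build_feature_vector(flow_vectors):
    """Build x_i = (F_1, ..., F_n), where each F_k = (norm, phase) of a flow vector.

    flow_vectors: array of shape (n, 2) with the (dx, dy) displacement of each
    monitored point between two consecutive frames.
    """
    dx, dy = flow_vectors[:, 0], flow_vectors[:, 1]
    norms = np.hypot(dx, dy)       # |F_k|: apparent motion intensity
    phases = np.arctan2(dy, dx)    # phase of F_k, in radians
    # Interleave (norm, phase) pairs into a single 1-D feature vector.
    return np.column_stack((norms, phases)).ravel()

# Example: 101 monitored points yield a feature vector with 202 attributes,
# matching the dataset description given later in the text.
x_i = build_feature_vector(np.random.randn(101, 2))
print(x_i.shape)  # (202,)
```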
Proposed System
Based on the discussions in the previous chapters, a robotic platform was developed whose navigation system relies on obstacle detection through optical flow classification. This system was implemented on the low-cost Raspberry Pi board. The following sections describe the hardware and software of the developed robot.
Hardware
The hardware components of the designed platform are a Raspberry Pi board, a supporting chassis with two actuators, a power supply, and two sensors: a monocular camera and an ultrasonic sensor. The technical specifications and the use of each of these components are described below, together with the final hardware obtained and the costs incurred.
Raspberry Pi
The Raspberry Pi platform (FOUNDATION, 2016c) had its first model released worldwide in 2012 and has since become widely adopted by educators, educational institutions, and technology enthusiasts. Created to stimulate interest in learning Computing, this small computer has drawn the attention of the academic community because of its extremely small dimensions, low acquisition cost, and high performance. Based on the ARM architecture (HOLDINGS, 2016), the Raspberry Pi integrates, in addition to its processor, a graphics processing unit (GPU), RAM, USB ports, and an Ethernet interface on a single board whose dimensions are comparable to those of a credit card. Figure 12 shows the Raspberry Pi 3 Model B used in this project. Its main specifications are (FOUNDATION, 2016d):
• ARMv8 quad-core 64-bit 1.2 GHz CPU;
• 1 GB of RAM;
• VideoCore IV 3D GPU;
• 1 Micro SD card reader;
• 4 USB ports;
• 802.11n Wireless LAN;
• Bluetooth 4.1;
• 1 Ethernet port;
• 1 HDMI port.
Figure 12 - Raspberry Pi 3 Model B
As seen in the specifications above, the Raspberry Pi 3 Model B includes a built-in wireless adapter, making it even more suitable for autonomous-system applications such as the one proposed in this work. In addition, one of the great advantages of the Raspberry Pi is its set of general-purpose pins, known as GPIO (Figure 13). These pins allow the Raspberry Pi to communicate through digital signals with external input and output devices, such as sensors, indicators, and motors (CORPORATION, 2012). The maximum input and output voltage supported by each pin is 3.3 V, the maximum current per pin is 16 mA, and the whole set can supply at most 50 mA simultaneously. Finally, despite all these resources and specifications, the Raspberry Pi requires only a 5.1 V power supply capable of delivering about 2.5 A, with an average consumption below 5.1 W (FOUNDATION, 2016b).
Chassis
Figure 14 shows the chassis used as the base for building the robotic platform, which contains the actuators to be controlled by the navigation system. This choice was due to the easy control of the chassis wheels and to the variety of surfaces on which it can be used without losing stability. It was also considered appropriate to choose a chassis that emphasized the simplicity, low cost, and efficiency of the final system to be developed.
Figure 14 - Chassis used as the base for the robotic platform

This chassis has the following specifications:
Power Supply
Initially, four 1.5 V batteries were used to power the two motors. To power the Raspberry Pi in a stable way, an attempt was made to use rechargeable 1.25 V batteries together with the LM317 linear voltage regulator (Figure 17). This choice was made because of the easy handling and low acquisition cost of this regulator.
However, being a linear regulator, the LM317 requires an input voltage 3 V higher than the desired output, which causes it to heat up quickly due to the high power dissipation. Moreover, the output current recommended by the manufacturer so that no heat sink is needed is only 1.5 A, lower than what the Raspberry Pi requires. Thus, when attempting to power the board with the LM317, the regulator heated up very quickly and its protection circuit limited the output voltage to a value far below that required by the Raspberry Pi.
Figure 17 - LM317 voltage regulator initially used to stabilize the Raspberry Pi power supply. It proved unfeasible, however, since the maximum current it can deliver without a heat sink is lower than that required by the board (INCORPORATED, 2002).
Due to the reduced dimensions of the chassis, installing a heat sink was not feasible. For the same reason, the use of a step-down regulator was also ruled out (INCORPORATED, 2016).

When the costs listed in Table 5 are compared with those of the related works shown in Table 1, it can be seen that the developed platform is cheaper than most of the systems cited. It is also worth noting that the work proposing the cheapest platform in Table 1 does not include the costs of the chassis and power supply used. The operation and implementation details of each stage of the navigation are described in the subsections that follow.
Software
Optical Flow Computation
To estimate the optical flow between two consecutively captured images of the environment, the Lucas-Kanade algorithm was used, whose implementation is available in OpenCV. To use it, a circular and symmetric distribution of points was adopted, defined from visual experiments carried out in this work (Appendix C). It consists of 1 central point and 5 concentric rings, each containing 20 equally spaced points. The rings were positioned so that their distances to the center increase exponentially. This distribution was generated by an R script (R Core Team, 2013) and saved to a .dat file, to be loaded by the navigation system. When loading it, the system projects it onto the center of the image used for flow estimation; this projection was configured to cover 80% of the image size. Figure 24 shows the projected distribution.
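A minimal sketch of how such a circular, symmetric distribution could be generated, here in Python/NumPy rather than the R script mentioned above. The exponential growth factor and the exact radii are assumptions; only the ring structure (1 central point, 5 rings of 20 points) and the 80% coverage follow the text.

```python
import numpy as np

def circular_grid(n_rings=5, points_per_ring=20, growth=1.6):
    """1 central point plus concentric rings whose radii grow exponentially,
    normalized to the unit disc (101 points in total)."""
    pts = [(0.0, 0.0)]                                  # central point
    radii = growth ** np.arange(1, n_rings + 1)
    radii /= radii[-1]                                  # outermost ring at radius 1
    angles = np.linspace(0.0, 2.0 * np.pi, points_per_ring, endpoint=False)
    for r in radii:
        pts.extend((r * np.cos(a), r * np.sin(a)) for a in angles)
    return np.array(pts, dtype=np.float32)              # shape (101, 2)

def project_on_image(grid, width, height, coverage=0.8):
    """Scale the unit-disc grid so it covers `coverage` of the image and center it."""
    scale = 0.5 * coverage * min(width, height)
    centre = np.array([width / 2.0, height / 2.0], dtype=np.float32)
    return grid * scale + centre
```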
Once the points to be monitored were defined and based on the definitions presented in Chapter 3, some filters were applied to the images used for flow estimation. Figure 25 illustrates the whole optical flow computation process. First, the images are converted to grayscale, since the flow estimation is based on their brightness intensity. Next, a Gaussian filter with a 3x3-pixel window is applied. This is a low-pass filter capable of smoothing the images in a weighted and symmetric way (STRINGHINI; ILANA DE ALMEIDA SOUZA, 2011); it attenuates higher-order derivative terms and, consequently, makes the approximation of Equation 3.2 valid. The smoothed images are then passed through a Laplacian high-pass filter, whose purpose is to enhance their edges, since these are the regions with the greatest brightness variation. The Laplacian filter was preferred in this work because it outputs a grayscale image, unlike filters such as Canny's (CANNY, 1986), which produce a binary image; this preserves more detail about the enhanced edges. Finally, the filtered images are fed to the Lucas-Kanade algorithm, configured with a neighborhood size of 31x31 pixels and stopping criteria of at most 10 iterations or a minimum error of 0.03. In addition, the pyramidal implementation of this algorithm was used, with a total of 3 layers (BOUGUET, 2001). This implementation estimates the flow recursively, with each layer associated with a lower-resolution version of the image; it can therefore handle displacements of brightness patterns larger than the neighborhood size, which arise from pronounced motion. Figure 26 illustrates each of these steps.
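The pipeline just described can be sketched with the OpenCV functions it relies on. The parameter values follow the text; the variable names and the intermediate Laplacian depth (CV_16S converted back to 8-bit) are assumptions of this sketch, not necessarily the exact choices made in the implementation.

```python
import cv2
import numpy as np

LK_PARAMS = dict(
    winSize=(31, 31),                                   # neighborhood size
    maxLevel=2,                                         # 3 pyramid layers (levels 0..2)
    criteria=(cv2.TERM_CRITERIA_COUNT | cv2.TERM_CRITERIA_EPS, 10, 0.03),
)

def preprocess(frame):
    """Grayscale -> 3x3 Gaussian low-pass -> Laplacian high-pass (kept as 8-bit)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    smooth = cv2.GaussianBlur(gray, (3, 3), 0)
    edges = cv2.Laplacian(smooth, cv2.CV_16S)
    return cv2.convertScaleAbs(edges)                   # back to uint8 for LK

def optical_flow(prev_frame, next_frame, points):
    """Pyramidal Lucas-Kanade flow at the monitored points.

    points: float32 array of shape (n, 1, 2) with the projected grid positions.
    Returns the per-point displacement vectors and the tracking status flags.
    """
    prev_img, next_img = preprocess(prev_frame), preprocess(next_frame)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, points, None, **LK_PARAMS)
    return new_pts - points, status
```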
Optical Flow Classification
To classify the optical flow patterns, the SVM implementation (Appendix D) with RBF kernel provided by Scikit-Learn was used. This choice is justified by the high degree of generalization achieved by this machine, since it maximizes the separation margin between examples of different classes. The good performance of SVM classifiers when recognizing patterns in high-dimensional spaces was also taken into account (DUDA; HART; STORK, 2001). This is the case of the feature vector formed by the optical flows, which has dozens of attributes. It is also the case of feature vectors formed from sequential flow patterns, through which the temporal variation of the perceived motion could be analyzed; although this analysis was not carried out in this work, it may be investigated in future work. Once trained, the machine is used to indicate whether or not a presented flow pattern reveals the existence of obstacles. Before the pattern is presented, however, the feature vector must be conditioned to ease the classification process. The purpose of this normalization is to homogenize the representation of attributes that lie on different scales but have the same relevance (THEODORIDIS, 2003). The normalized vector is then projected onto a reduced-dimension space, defined by the vectors obtained through the PCA algorithm (Principal Component Analysis (HOTELLING, 1933)) during the SVM training phase. The purpose of this dimensionality reduction is to speed up classification, since it reduces the number of attributes presented to the SVM and attenuates noise. Finally, the resulting vector is presented to the SVM, which signals whether or not it indicates the presence of obstacles.
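A sketch of this classification step using the Scikit-Learn objects mentioned in the text. The normalization scheme (a StandardScaler is used here only as an example), the function name, and the variable names are assumptions; `pca` and `svm` are assumed to have been fitted during training (see Section 5.1).

```python
import numpy as np

def classify_flow(x, scaler, pca, svm):
    """Condition the raw feature vector and ask the SVM whether it indicates an obstacle.

    x:      raw feature vector (norms and phases of the monitored flow vectors);
    scaler: normalizer fitted on the training set (e.g. sklearn StandardScaler);
    pca:    sklearn.decomposition.PCA fitted on the training set;
    svm:    sklearn.svm.SVC with RBF kernel, already trained.
    Returns +1 (obstacle) or -1 (free path).
    """
    x = np.asarray(x, dtype=np.float64).reshape(1, -1)
    x = scaler.transform(x)     # homogenize the scales of the norm/phase attributes
    x = pca.transform(x)        # project onto the reduced-dimension space
    return int(svm.predict(x)[0])
```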
Decision Making
Figure 28 illustrates the decision-making process based on the classifier's output. If it signals the absence of obstacles, the robot is driven in a straight line. Otherwise, an avoidance maneuver is performed, and the direction to be taken must be chosen. To this end, the last captured image is divided into two symmetric partitions, and the direction of the partition with the lower mean optical flow magnitude is chosen. This strategy assumes that the direction with lower flow intensity is less likely to contain nearby obstacles, since the relative motion perceived between the robot and the elements in that part of the environment is smaller. This is the case of the scene shown in Figure 29, in which the objects on the left side make the mean flow magnitude in that partition higher than on the opposite side; in this situation, the robot would be steered to the right.
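This decision rule can be sketched as follows. The sketch assumes the flow vectors and the image positions of the monitored points are available, and that the left/right partitioning is done on the x coordinate of each point; these details are assumptions, not a description of the exact implementation.

```python
import numpy as np

def avoidance_direction(points, flow, image_width):
    """Choose the half of the image with the lower mean flow magnitude.

    points: (n, 2) array with the (x, y) position of each monitored point;
    flow:   (n, 2) array with the corresponding flow vectors.
    Returns 'left' or 'right'.
    """
    magnitudes = np.hypot(flow[:, 0], flow[:, 1])
    left_mask = points[:, 0] < image_width / 2.0
    mean_left = magnitudes[left_mask].mean()
    mean_right = magnitudes[~left_mask].mean()
    # Steer toward the side that appears to be moving less (i.e. is farther away).
    return 'left' if mean_left < mean_right else 'right'
```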
Driving the Actuators
To drive the actuators of the navigation system, the Python module RPi.GPIO (CROSTON, 2017) was used as a basis. It provides a class with functions to access and control the GPIO interface of the Raspberry Pi. On top of it, two new classes were designed: Roda (wheel) and Robô (robot). The first models the behavior of the actuators controlled through the GPIO interface, while the second offers the navigation system a high-level control interface. Figure 30 presents the UML diagram describing the relationship between these classes.
|V_L| = |V_R| cos(θ_V) (4.1)
|V_A| = |V_R| sin(θ_V) (4.2)
where |V_L| and |V_A| are the magnitudes of the components proportional to the robot's linear and angular velocities, respectively, and θ_V is the angle of the vector V_R.
The alterarVelocidade method first uses the expressions above to compute the magnitudes of the components of the desired resultant velocity. The value |V_A| is then used to set the robot's angular velocity: it is interpreted as the difference between the duty cycles of the two wheels, and is therefore passed as an argument to the setDutyCycle method of the wheel farthest from the robot's center of rotation. Next, the rotation direction of the wheels is set based on the cosine of the angle θ_V: if this value is positive, the wheels are configured so that the robot moves forward; if negative, they are set to spin in the opposite direction. This configuration is performed through the setSentido method of the Roda class. Finally, the value |V_L| is added to the duty cycle of both wheels through setDutyCycle, respecting its upper limit, since the duty cycle is defined as a percentage. It is worth noting that alterarVelocidade gives priority to the angular velocity over the linear one; the latter is interpreted by the Robô class as the velocity of the slower wheel. Consequently, the desired linear velocity may not be reached if the duty cycle of one of the wheels saturates.
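A simplified sketch of the logic just described, with the actual Roda class replaced by a hypothetical stand-in exposing the setDutyCycle and setSentido methods mentioned above. The mapping of the sign of sin(θ_V) to the left or right wheel, the saturation handling, and the PWM details are assumptions of this sketch.

```python
import math

def alterar_velocidade(roda_esq, roda_dir, modulo, angulo):
    """Decompose the desired velocity V_R (percentual magnitude `modulo`, angle
    `angulo` in radians) into linear and angular components and map them to
    duty cycles, following Equations 4.1 and 4.2."""
    v_linear = abs(modulo * math.cos(angulo))    # |V_L|
    v_angular = abs(modulo * math.sin(angulo))   # |V_A|

    # Forward or backward depending on the sign of cos(theta_V).
    sentido = 1 if math.cos(angulo) >= 0 else -1
    roda_esq.setSentido(sentido)
    roda_dir.setSentido(sentido)

    # Angular velocity first: a duty-cycle difference applied to the wheel
    # farther from the rotation center (left/right mapping assumed here).
    duty_esq = v_angular if math.sin(angulo) > 0 else 0.0
    duty_dir = v_angular if math.sin(angulo) <= 0 else 0.0

    # Then add the linear component to both wheels, saturating at 100%.
    roda_esq.setDutyCycle(min(duty_esq + v_linear, 100.0))
    roda_dir.setDutyCycle(min(duty_dir + v_linear, 100.0))
```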
Finally, the interface provided by the Robô class was used by the navigation system to drive the actuators after the decision-making step. When the robot's trajectory was to be kept in a straight line, alterarVelocidade was called with a percentage velocity of 50% and a direction of 0 rad. For avoidance maneuvers, the same method was first called with a percentage velocity of 60% and a direction of π/2 rad (turn to the right) or 3π/2 rad (turn to the left). After 200 ms, alterarVelocidade was called again with the same arguments used for the straight-line trajectory.
Platform Evaluation
The developed platform was evaluated through two experiments: one focused exclusively on the offline performance of its classifier and another emphasizing its online behavior. The following sections describe the methodologies used for each type of evaluation.
Offline Evaluation
Methodology
The offline methodology used to evaluate the quality of the system consisted of presenting previously labeled examples and then recording the number of correct and incorrect indications made by the classifier, together with the time taken for each indication. More specifically, the k-fold cross-validation method (KOHAVI et al., 1995) was used to partition a dataset into k mutually exclusive splits. This dataset was built from already labeled optical flow patterns. Over k iterations, the examples of (k − 1) partitions were used to train the classifier; at the end of training, the examples of the remaining partition were presented to it, and it had to predict the class of each one. Finally, the classifier's predictions were compared with the previously known labels, and the numbers of correct and incorrect predictions were recorded. Note that at each iteration the partition left out of training was rotated, which guarantees that by the end of the validation process the classifier has predicted the class of every example in the k partitions.
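A sketch of this k-fold procedure using Scikit-Learn. The value of k, the shuffling, and the `build_and_train` helper (standing for the PCA + SVM training pipeline described later) are assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import confusion_matrix

def cross_validate(X, y, build_and_train, k=10):
    """Train on k-1 folds, predict the remaining one, and accumulate one
    confusion matrix per iteration (X and y are NumPy arrays)."""
    matrices = []
    for train_idx, test_idx in KFold(n_splits=k, shuffle=True, random_state=0).split(X):
        model = build_and_train(X[train_idx], y[train_idx])
        y_pred = model.predict(X[test_idx])
        matrices.append(confusion_matrix(y[test_idx], y_pred, labels=[1, -1]))
    return np.array(matrices)
```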
The classifier's errors and hits throughout the validation process were recorded for both classes, so that its confusion matrix could be filled in, as illustrated in Figure 32. As can be seen, the cell TP (True Positive) contains the number of examples correctly predicted as belonging to the positive class, FP (False Positive) the number of examples incorrectly classified as positive, TN (True Negative) the number of examples correctly labeled as negative, and FN (False Negative) the number of examples incorrectly predicted as negative.
                          Real class
                          Positive    Negative
Predicted    Positive     TP          FP
class        Negative     FN          TN
Figure 32 - Illustration of a confusion matrix. Its rows are defined by the classifier's prediction, while its columns are defined by the class to which an example actually belongs.
From the confusion matrix, measures were extracted that express more clearly the relationship between the classifier's errors and hits for both classes. These measures are precision (Equation 5.1), recall (Equation 5.2), F-measure (Equation 5.3), and accuracy (Equation 5.4). Precision refers to the classifier's ability to be correct when it predicts the positive class. Recall measures the fraction of examples belonging to the positive class that were correctly predicted. The F-measure is the harmonic mean of precision and recall. Finally, accuracy gives the total fraction of correct predictions made by the classifier, regardless of the class.
Precision = TP / (TP + FP) (5.1)
Recall = TP / (TP + FN) (5.2)
F-measure = 2 · Precision · Recall / (Precision + Recall) (5.3)
Accuracy = (TP + TN) / (TP + FP + TN + FN) (5.4)
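These measures follow directly from the confusion-matrix cells; a minimal sketch:

```python
def metrics(tp, fp, fn, tn):
    """Precision, recall, F-measure and accuracy (Equations 5.1-5.4)."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_measure = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, f_measure, accuracy
```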
The values of these measures, together with the execution times, were recorded at the end of the cross-validation process, to be analyzed and compared with those obtained in the related works.

Figure 33 - Illustration of the offline test circuit.
Dataset
The videos were recorded at a resolution of 320x240 pixels, with the robot's linear velocity fixed at approximately 0.1 m/s. Each video consisted of captures taken during 3 complete laps of the robot around the circuit. In addition, to later assess how much the navigation system had memorized versus actually learned, the characteristics of the circuit were changed between recordings. The repositioning of the obstacles for the last 4 videos was done in a similar way: the layout of video 5 is analogous to that of video 1, that of video 6 resembles that of video 2, and so on. The difference in this case is that the navigation direction is opposite to the one used when recording the first 4 videos. With this change, it was possible not only to record the same obstacles against different backgrounds, but also to capture turning motions in directions not seen before. This change was therefore relevant in that it could be used to teach the robot the motion pattern of an obstacle being avoided in both possible directions.
Finally, the dataset to be built had to contain not only the optical flow patterns but also their respective labels. In this work, manual labeling was not considered viable, since it is inherently subjective: depending on the human supervisor, the patterns could be labeled differently, which would prevent the process from being reproduced by other works. Moreover, since the resulting dataset tends to contain thousands of examples, manual labeling would be an extremely exhausting process and, to some extent, unfeasible for practical applications. Thus, to automate this process, the HC-SR04 ultrasonic sensor already installed on the platform was used. With this sensor it was possible to record the distance between the robot and the obstacles; these readings were later stored so that they could be easily associated with the corresponding frames of the captured videos.
Since the ultrasonic sensor had to record the distance from the robot to the obstacles with precision and accuracy, the obstacles were chosen so as to maximize the quality of its measurements. The obstacles shown in Figure 35 were therefore selected; they have smooth, reflective surfaces, which are appropriate for this kind of range sensor. Figure 36 shows one of the frames captured during the robot's navigation, together with the reading recorded by the ultrasonic sensor. These obstacles were also selected because of their high level of texture, in contrast to the other elements of the scenario considered; this degree of texture makes it possible to estimate better-defined optical flow patterns.

Figure 36 - Example of a frame captured while recording the dataset, with the ultrasonic sensor reading overlaid on the frame.

Once the videos had been captured, the optical flow associated with them was computed using the Lucas-Kanade implementation provided by OpenCV, with the same parameters used during classification, detailed in Chapter 4. In the end, 8 files containing flow samples were generated. The readings obtained with the ultrasonic sensor, in turn, were converted into labels −1 and +1 as follows:
f(m) = 1, if L_inf ≤ m ≤ L_sup; −1, otherwise (5.5)
where m is the ultrasonic sensor reading, L_sup is the maximum distance at which an object is considered an obstacle, and L_inf is the largest reading treated as noise. In this work, L_sup = 70 cm and L_inf = 10 cm were adopted.
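Equation 5.5 translates directly into a small labeling routine; the thresholds below are the ones stated in the text.

```python
L_SUP = 70.0   # cm: maximum distance at which an object counts as an obstacle
L_INF = 10.0   # cm: readings below this value are treated as noise

def label_from_distance(m):
    """Return +1 (obstacle) when L_INF <= m <= L_SUP, otherwise -1 (Equation 5.5)."""
    return 1 if L_INF <= m <= L_SUP else -1
```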
The characteristics of the final dataset are listed below:
• Number of attributes: 202;
• Number of positive-class examples: 6,533;
• Number of negative-class examples: 31,627;
• Total number of examples: 38,160.
Training
The classifier training process in each cross-validation iteration began with the extraction of the main characteristics of the patterns to be learned. This extraction was considered necessary to ease the classifier's learning process, since the examples contained hundreds of attributes. PCA (Principal Component Analysis) was therefore applied to the training set, using the implementation provided by the Scikit-Learn library. To use it, one only needs to provide the dataset to be analyzed and the percentage of information to be retained by the set of extracted components. This value was set to 90%, on the understanding that this percentage would be enough to retain the information most relevant for classification, drastically reduce the number of dimensions of the patterns to be learned, and attenuate noisy data. At the end of the PCA step, the main components extracted were stored, to be used both during training and during testing. Throughout the validation process, the dimensionality of the patterns was reduced to 43 on average.
The main components extracted from the training set were used to project its examples onto the reduced-dimension vector space. The projected vectors were presented to the classifier, an SVM (Support Vector Machine), also using an implementation provided by Scikit-Learn. The main configuration choices concerned the kernel and the weighting of each class during learning. For the former, the RBF (Radial Basis Function) kernel was selected (Chih-Wei Hsu, Chih-Chung Chang; LIN, 2008); through it, the SVM can map the feature vectors nonlinearly into higher-dimensional spaces, making it possible to handle cases in which the separation boundary cannot be linear. The latter consisted of using the balanced parameter to enable the balanced class weighting provided by the Scikit-Learn implementation. This was done because of the imbalance of the dataset, which contains roughly 5 times more negative than positive examples; the feature adjusts the contribution of each example to the SVM's learning, increasing the weight of the minority class as compensation. The remaining parameters were kept at their default values.
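A sketch of this training step for one cross-validation iteration, with the settings stated above and default values elsewhere; the function and variable names are assumptions.

```python
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def train_classifier(X_train, y_train):
    """Fit PCA retaining 90% of the variance, then an RBF-kernel SVM with
    balanced class weights, as described in the text."""
    pca = PCA(n_components=0.90)               # fractional value -> keep 90% of the variance
    X_reduced = pca.fit_transform(X_train)     # about 43 dimensions on average in this work
    svm = SVC(kernel='rbf', class_weight='balanced')
    svm.fit(X_reduced, y_train)
    return pca, svm
```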
At the end of the training process, the vectors corresponding to the main components extracted by PCA and the support vectors defined during SVM training were saved, so that they could be used by the classification routine.
Baseline
The methodology used to evaluate the system with the SVM classifier was also applied to two other learning models, whose implementations were likewise provided by Scikit-Learn: a Perceptron classifier (ROSENBLATT, 1958) and an SVR (Support Vector Regressor) (DRUCKER et al., 1997). The first is a linear classification model, whose main training parameters were a maximum of 100 iterations and balanced class weighting; the purpose of evaluating the system with this model was to use its results as a reference against which to analyze those achieved with the SVM. The second is a regression model, from which the distance associated with each flow pattern can be estimated; the purpose of measuring the system's performance with this model was to investigate the relevance of the feature-vector normalization step. To use it, the non-normalized patterns were presented together with the original readings from the ultrasonic sensor. After training, and during the tests, the distances estimated by this model were converted into labels, analogously to the process used for the other models, and from these labels the patterns could be classified. As with the SVM, the SVR was used with an RBF kernel.
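For the SVR baseline, the regressor's distance estimates are turned into class labels in the same way as the ultrasonic readings. A sketch, assuming the same thresholds as in Equation 5.5:

```python
import numpy as np
from sklearn.svm import SVR

def svr_baseline(X_train, distances_train, X_test, l_inf=10.0, l_sup=70.0):
    """Train an RBF-kernel SVR on raw (non-normalized) patterns and their measured
    distances, then threshold its predictions into -1 / +1 labels."""
    svr = SVR(kernel='rbf')
    svr.fit(X_train, distances_train)
    predicted = svr.predict(X_test)
    return np.where((predicted >= l_inf) & (predicted <= l_sup), 1, -1)
```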
Results
Table 6 presents the confusion matrix of the SVM classifier obtained at each cross-validation iteration. From Table 6, the classifier's average precision, recall, F-measure, and accuracy were computed. These measures are listed below:
• Precision = 75.46 ± 6.21%;
• Recall = 61.71 ± 4.75%;
• F-measure = 68.00 ± 3.75%;
• Accuracy = 89.90 ± 1.36%.
Analyzing these measures, one sees that the recall was considerably low, indicating that many of the flow patterns associated with obstacle motion were not correctly recognized by the classifier. It should be remembered, however, that in a real situation only one flow sample needs to be recognized as belonging to an obstacle for the avoidance maneuver to be performed; that is, the recall is not necessarily equivalent to the robot's collision rate. The classifier's precision is more relevant, since the lower this measure, the larger the number of mistaken avoidance maneuvers the system tends to perform. The average precision obtained was considerably higher than the recall, as expected. In addition, a good portion of the patterns incorrectly classified as positive were captured only a few centimeters above the threshold L_sup used to generate the labels, as illustrated in Figure 37; in a real situation, the avoidance maneuvers triggered by these indications could be tolerated. Finally, the classifier's average accuracy is above that of most of the systems discussed in Chapter 2. Table 7 presents the performance measures obtained by the SVM classifier together with those of the other models used as baselines. Analyzing them, the linear Perceptron classifier had the lowest accuracy, which suggests that a linear boundary is inadequate to separate the optical flow patterns. The average times required by each stage of the classification were also recorded:

• Time to estimate the optical flow: 50.50 ± 1.64 ms;
• Time for the PCA projection: 0.72 ± 0.02 ms;
• Time for the SVM prediction: 15.97 ± 0.66 ms.
Analyzing these measurements, the most expensive step is the optical flow estimation. However, adding the time spent on all classification stages, the system's processing rate, not counting image capture time, is 14.88 FPS. Considering the system's average capture rate of 25.28 FPS (Appendix B), its total average processing rate is 9.4 FPS. This value can be considered satisfactory, since it is considerably higher than those reported by the works cited in Chapter 2. Moreover, the capture rate was estimated using 640x480-pixel images; as the classifier evaluation was performed on images of half this size, the capture rate in a real situation can be expected to be even higher. Hence, the average processing rate of the developed system can be estimated to exceed 9.4 FPS.
Table 9 therefore presents a final comparison between the developed system and those proposed in the works cited in Chapter 2. As can be seen, the navigation system developed in this work achieved the highest processing rate; its final cost is higher than only one of the cited systems; and its average accuracy is above that of most of the others. Based on these comparative parameters, it can be concluded that the developed navigation system is low-cost, has a high processing rate, and achieves satisfactory accuracy.

For the robot to be able to recognize the obstacles placed along the circuit, an SVM trained on all the patterns in the dataset created during the offline experiment (Section 5.1) was embedded in the platform, trained exactly as in that experiment. Regarding the robot's configuration, the same linear velocity and the same camera angle used when capturing the training patterns were adopted. It should be noted that some of the obstacles in the online test circuit were not used to build the training set, so the robot had to generalize the knowledge acquired in that phase. Figure 40 shows the obstacles considered. The video available at <http://www.youtube.com/v/hzyKAGhQExg?rel=0> shows the robot navigating the test circuit, and Figure 41 shows some scenes from this recording. Analyzing it, one can see that the robot avoided the first two obstacles before getting closer than 70 cm; these maneuvers were nevertheless valid, since the robot was on a collision course with them. It should be emphasized that at no point did the robot collide with the obstacles in the circuit. Furthermore, all avoidance maneuvers were performed in the expected direction. Finally, the actuator control allowed precise turns, through which the robot could position itself toward each subsequent obstacle.
Conclusion

In this work, the use of computer vision techniques for the development of a low-cost robotic system, capable of navigating autonomously and based on the Raspberry Pi platform, was investigated. More specifically, different approaches proposed in recent works for building autonomous systems whose only form of sensing is monocular vision were reviewed. From this review, optical flow emerged as the most suitable primary feature for navigation systems that must have low computational cost. The definition of optical flow was then explored and its use for the obstacle recognition task was evaluated. Based on this investigation, an autonomous robot capable of identifying obstacles through optical flow classification was developed. The development process consisted of two stages: building the hardware and implementing the software. The first stage involved selecting and integrating the different hardware components used to build the physical platform. The second stage consisted, initially, of configuring and evaluating the Raspberry Pi board, and then implementing the navigation system, whose optical flow estimation is based on the Lucas-Kanade algorithm and whose classification is performed by an SVM.
The developed system was evaluated with respect to its offline and online performance. For the first type of evaluation, an automatic strategy for collecting already labeled optical flow patterns was implemented, in order to build a dataset containing samples of real scenarios. The k-fold cross-validation method was then used to train the classifier on a given subset of the dataset and apply it to the remaining group, iteratively and exclusively. At each iteration, the time required by the system to execute each classification stage and the classifier's confusion matrix were recorded. The average parameters obtained were then used to compare the quality of the developed system with that of the related works. This comparison showed that the implemented system achieved higher accuracy and a lower acquisition cost than most of the cited works, and its processing rate exceeded those of the compared systems. In the online evaluation, the developed system was able to identify and correctly avoid all the obstacles placed in the traversed environment, even though some of these elements had not been presented to the robot during training. At the end of the evaluations, it was therefore concluded that the robotic platform built achieved the defined objectives.
For future work, we intend to explore probabilistic classification models, with which a navigation system could predict obstacles based on optical flow patterns estimated in the past (LIPTON; BERKOWITZ; ELKAN, 2015). We also wish to experiment with new strategies for extracting features from the flow estimated over a scene, so as to reduce the dimensionality of the patterns presented to the classifier. Finally, we intend to develop a test protocol that allows the system's generalization capability to be measured against that of systems based on range sensors.

The Python interpreter (FOUNDATION, 2017) was adopted as the basis on which all of the system's code would be developed. This choice was made because of the simplicity and, consequently, the practicality offered by the language. Moreover, the functions provided by the libraries to be used were already compiled, so the Python scripts would merely call them; the system's performance would therefore not be harmed.
With Python support in place, version 3.2 of the cross-platform computer vision and machine learning library OpenCV (TEAM, 2017) was installed. This open-source library contains more than 2500 optimized algorithms and is used worldwide by many companies, research groups, and government agencies (TEAM, 2016). To install it, packages were first added to allow loading images and videos in different formats, such as JPEG, PNG, and AVI. The OpenCV source code was then downloaded. Before compiling it, the numerical processing library NumPy (WALT; COLBERT; VAROQUAUX, 2011) had to be installed. Finally, OpenCV was compiled; among the build options configured, TBB support (INTEL, 2017) was enabled, which allows OpenCV code to run in parallel on the Raspberry Pi's 4 processing cores.
The average number of images processed per second on each platform was measured, with the images always sized 640x480 pixels. The final results are presented in the table below. Analyzing the results in Table 10, it can first be seen that the Raspberry Pi's capture rate was close to the maximum allowed by the camera. The rate obtained by the desktop was lower, which can be explained by a possible incompatibility between its operating system and the encoding format used by the camera. In the remaining tests, the Raspberry Pi's performance was about 15 times lower than the desktop's, since its hardware specifications are significantly inferior. Furthermore, autonomous systems based on computer vision commonly use low- to mid-cost cameras with a maximum capture rate of 30 frames per second. Since the Raspberry Pi's average processing rate was close to this value, there is a reasonable margin of time that can be filled with additional operations without a significant loss of performance. Based on the performance results obtained in these tests, it was concluded that using the Raspberry Pi 3 Model B to develop the system proposed in this work is feasible.
APPENDIX C - Selection of the Points Considered for Optical Flow Estimation
Before extracting the optical flow, one must first determine which points of the image will be monitored. This choice matters because the larger the number of points, the higher the computational cost of the flow computation, while the smaller that number, the lower the resolution of the observed motion pattern. Moreover, the distribution of the chosen points is also relevant to the obstacle detection problem, since regions of the image with a higher concentration of points become more influential in determining the motion pattern. In other words, non-uniform distributions of monitored points can be used to emphasize the contribution of certain image regions to the indication of an obstacle.
To determine the best distribution of monitoring points for the obstacle detection problem, visual tests were performed by applying the Lucas-Kanade algorithm to navigation videos recorded in real scenarios. The purpose of these tests was to assess visually whether the resulting flow highlighted the motion of the obstacles in the environment satisfactorily. Figure 46 presents the results of some of these tests.
Figure 46a shows the flow computed over 100 equally spaced points. Because it covers all regions of the image, this approach seems adequate for analyzing the overall motion of the environment; for example, it is possible to estimate which direction shows the highest average motion and steer the robot the other way. However, this approach does not seem viable for identifying obstacles, since the concentration of analysis points over specific obstacles tends to be insufficient for classification. The flow shown in Figure 46b, in turn, was computed only over points with distinctive visual characteristics, such as edges and pronounced textures, obtained with the Shi-Tomasi algorithm (SHI; TOMASI, 1994). Despite the higher concentration of points, this approach may lead to the analysis of points that are irrelevant to the robot's navigation, as in a corridor whose walls are more textured than the obstacles lying just ahead on the path.
Figure 46c, in turn, shows the flow computed over 100 points placed according to a Gaussian distribution centered on the image. The goal of this approach was to create a focus on the region most relevant for obstacle detection, guaranteeing that this region is always monitored while the other regions are still analyzed, though with lower weight. This approach seems better suited to this kind of problem, since its focus concentrates enough points for classifying specific obstacles. However, the asymmetry of this distribution makes it harder to cover obstacles uniformly throughout the navigation. For this reason, the final test was carried out with the distribution shown in Figure 46d. It keeps the same property as the distribution in Figure 46c, concentrating most points at the center of the image and reducing this concentration in a Gaussian-like manner, but it is symmetric, so that regions at the same distance from the center have the same probability of covering an obstacle.
Based on these tests, the circular distribution of Figure 46d was chosen for the optical flow extraction in this work.
APPENDIX D - Support Vector Machine (SVM)
Support Vector Machines (SVMs) are a machine learning technique based on maximizing the margin that separates patterns. To this end, the technique maps the patterns into spaces usually of higher dimension than the feature vectors, in order to find the hyperplane that optimizes this separation (THEODORIDIS, 2003). The simplest formulation of an SVM is the linear, hard-margin one. In this case, let the feature vectors of a training set X be given by x_i, i = 1, 2, ..., N. Each pattern is associated with exactly one of two possible classes, w_1 and w_2, which are linearly separable. The goal is then to find a hyperplane

g(x) = w^T x + w_0 = 0 (D.1)

that correctly classifies all the training vectors. Figure 47 illustrates two possible hyperplanes capable of separating patterns of two distinct classes. As can be seen, the hyperplane drawn as a solid line appears more suitable for classifying new patterns, since it leaves more room for samples of each class to be distributed without violating the separation rule. In other words, the classifier formed by this hyperplane has greater generalization capability. Thus, in general, the hyperplane chosen to build classifiers should maximize the margin between the two separated classes.
To find this hyperplane, one starts from the fact that its distance to a given point is

z = |g(x)| / ||w|| (D.2)

Next, assume that the value of g(x) at the points of w_1 and w_2 closest to the hyperplane (the filled patterns in Figure 48) is +1 for w_1 and −1 for w_2. This assumption is equivalent to the following statements:

1. The width of the desired margin is 1/||w|| + 1/||w|| = 2/||w||.
2. The constraints to be satisfied are
Figure 47 - Example of possible hyperplanes capable of separating patterns of two different classes.
w^T x + w_0 ≥ +1, ∀x ∈ w_1 (D.3)
w^T x + w_0 ≤ −1, ∀x ∈ w_2 (D.4)
Figure 48 - Example of a hyperplane (solid line) that maximizes the separation margin between patterns of two distinct classes.
Finally, letting y_i denote the class label of x_i, the problem of finding the hyperplane that maximizes the separation margin is defined as:
Minimize J(w) ≡ (1/2)||w||² (D.5)
subject to y_i(w^T x_i + w_0) ≥ 1, i = 1, 2, ..., N (D.6)
where the constraints of Equation D.6 guarantee that no training data lie between the class separation margins.
Figure 48 illustrates the optimal hyperplane (solid line) for a given classification problem. The classifier formed by this hyperplane is therefore called a Support Vector Machine. The support vectors associated with the machine are those passing through the patterns closest to the boundary found (shown filled in Figure 48).
As stated above, Equation D.5 and Equation D.6 define the task of finding a linear separation boundary. For cases in which this boundary must be nonlinear, the following equations are considered (Chih-Wei Hsu, Chih-Chung Chang; LIN, 2008):
Minimize J(w, w_0, ε) ≡ (1/2)||w||² + C Σ_{i=1}^{N} I(ε_i) (D.7)
subject to y_i(w^T φ(x_i) + w_0) ≥ 1 − ε_i, i = 1, 2, ..., N (D.8)
ε_i ≥ 0, i = 1, 2, ..., N (D.9)

where K(φ(x_i), φ(x_j)) = φ(x_i)^T φ(x_j) is called the kernel function, the ε_i are known as slack variables, ε is the vector of parameters ε_i, C is the penalty parameter of the error term, and

I(ε_i) = 1 if ε_i > 0, and 0 if ε_i = 0. (D.10)
Figure 4 - Examples of mobile computing platforms suitable for the autonomous navigation context.
Suppose, finally, that the weight and ear length of a new element have been measured, but that its species is unknown. The point corresponding to this element is plotted on the chart containing the already known species; the result is shown in Figure 9. Analyzing the new point, one notices that it lies close to the region where the European rabbits are concentrated. It is therefore reasonable to assume that the unknown element most likely corresponds to a member of this species. The whole problem illustrated so far, from the process of identifying the characteristics (...)
Figure 8 - Illustration of a line marking a possible boundary between the regions defined by each species.
Figure 10 - Example of the characteristic motion of an obstacle. 10a shows the optical flow of a scene with no nearby obstacles. 10b illustrates the contrasting motion of an obstacle relative to the other elements of the environment.
Figure 11 - Illustration of optical flow extracted during a turn. As can be seen, the motion intensity is high, but the obstacle just avoided on the left no longer represents a blockage.
Figure 13 - General-purpose pins of a Raspberry Pi 2; these are the same ones present on the Raspberry Pi 3 Model B (FOUNDATION, 2016a).
Figure 22 - Robotic platform built. 22a shows a perspective view of the complete platform. 22b shows the underside of the platform, focusing on the connection of the motors to the L293D.
The proposed navigation system was developed on top of the Raspbian operating system (FOUNDATION, 2016e), using the OpenCV computer vision library (TEAM, 2017) and the Scikit-Learn machine learning library (PEDREGOSA et al., 2011) (Appendix A). Its operating cycle is summarized by the flowchart in Figure 23. As shown in that diagram, the system initially captures a reference image and starts the navigation cycle. In each cycle, a new image of the environment is captured and the optical flow between the two recorded images is computed. The estimated flow is presented to the classifier, which indicates whether there is an obstacle along the path. Based on this indication, a decision is made on how to update the robot's heading, either performing an avoidance maneuver or keeping the robot moving in a straight line.
Figure 23 - Flowchart of the proposed navigation system.
Figure 24 - Distribution of points considered for optical-flow estimation.
The sequence of steps executed in this stage is illustrated in Figure 27.
Figure 27 - Flowchart of the optical-flow preprocessing and classification stage.
First, the flow amplitude is linearly normalized to the interval [0, 1].
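As a rough illustration of this stage (not the author's original code), the sketch below normalizes the flow magnitudes to [0, 1], projects them with PCA and classifies the pattern with an RBF-kernel SVM, as described in the text; the number of PCA components and the SVM parameters are placeholder assumptions.

```python
# Rough sketch of the preprocessing/classification stage (not the author's code):
# flow magnitudes are normalized to [0, 1], projected with PCA and classified by
# an RBF-kernel SVM. Component count and SVM parameters are placeholder choices.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def normalize_flow(magnitudes):
    """Linearly rescale a vector of flow magnitudes to the interval [0, 1]."""
    m = np.asarray(magnitudes, dtype=float)
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def train(X_train, y_train, n_components=10):
    """X_train: one flow-magnitude vector per frame; y_train: obstacle labels."""
    Xn = np.array([normalize_flow(x) for x in X_train])
    pca = PCA(n_components=n_components).fit(Xn)
    svm = SVC(kernel="rbf").fit(pca.transform(Xn), y_train)
    return pca, svm

def classify(pca, svm, magnitudes):
    """Classify the flow pattern of the current frame pair."""
    x = normalize_flow(magnitudes).reshape(1, -1)
    return svm.predict(pca.transform(x))[0]
```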
As seen in the diagram, the Roda (wheel) class has four attributes, three of which represent pins of H-bridge circuits such as those contained in the L293D IC (Figure 16) used to connect the robot's motors to the GPIO interface. The pinoHabilitação attribute refers to the pin that enables the bridge, analogous to the 1,2EN pin of the L293D. The pinoSentidoHorário and pinoSentidoAntiHorário attributes refer to the control switches of the bridge, analogous to the 1A and 2A pins of the L293D; by combining them it is possible to change the direction of the current through the bridge and thus set the rotation direction of the associated wheel. Each pin of a Roda object is initialized using the setUp method provided by an RPi.GPIO object, which receives as parameters the number of the actual GPIO pin to be used and an indication of the pin's purpose, that is, whether it will be used as a signal input or output. In the Roda class, all pins are configured as outputs. The last attribute is an object capable of controlling the application of PWM modulation on any general-purpose pin of the Raspberry Pi. This object is instantiated through the PWM method provided by the RPi.GPIO class, which receives as parameters a reference to the pinoHabilitação attribute and the value 2000 Hz for the frequency of the generated wave. From these attributes, the desired behavior of each wheel of the robot was modeled through four methods: ligar, setDutyCycle, setSentido, and desligar. The first starts the generation of the modulated signal on the enable pin of the modeled wheel; its implementation consists of invoking the start method of the PWM class, passing a null pulse width as parameter. The second method changes the width of the pulse generated and applied to the wheel's enable pin; this width is directly proportional to the wheel's torque, and it is changed through the ChangeDutyCycle method provided by the PWM class. The setSentido method configures the wheel's rotation direction; its implementation enables the control switches of the H-bridge associated with the modeled wheel, represented by the pinoSentidoHorário and pinoSentidoAntiHorário attributes. For clockwise rotation, the first attribute is set to a high logic level and the second to a low logic level; for the opposite direction, the first is set low and the second high. These logic-level assignments are performed by the Roda class through the output method provided by an RPi.GPIO object. Finally, the last method ends the work performed by the modeled wheel; its implementation stops the modulated signal supplied to the wheel's enable pin, using the stop method provided by the PWM class. To model the physical behavior of the designed robot and provide a high-level interface for controlling the actuators, the Robô class was designed. The methods it exposes are implemented by manipulating the two Roda objects that compose it: ligar, desligar, and alterarVelocidade. The first two simply invoke the methods of the same name implemented by the Roda objects.
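A minimal sketch of the Roda class as described above is given below, written against the standard RPi.GPIO API (setup, PWM, output, start, ChangeDutyCycle, stop); the pin numbers, the numbering scheme and the constructor signature are assumptions, not the author's original code.

```python
# Hypothetical sketch of the Roda (wheel) class described above, written against
# the standard RPi.GPIO API (setup, PWM, output, start, ChangeDutyCycle, stop).
# Pin numbers, numbering scheme and constructor signature are assumptions.
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)  # assumed pin-numbering scheme

class Roda(object):
    def __init__(self, pino_habilitacao, pino_horario, pino_antihorario):
        self.pino_habilitacao = pino_habilitacao    # analogous to 1,2EN on the L293D
        self.pino_horario = pino_horario            # analogous to 1A
        self.pino_antihorario = pino_antihorario    # analogous to 2A
        for pino in (pino_habilitacao, pino_horario, pino_antihorario):
            GPIO.setup(pino, GPIO.OUT)              # all pins configured as outputs
        self.pwm = GPIO.PWM(pino_habilitacao, 2000) # 2000 Hz PWM on the enable pin

    def ligar(self):
        self.pwm.start(0)                           # start with a null duty cycle

    def setDutyCycle(self, duty_cycle):
        self.pwm.ChangeDutyCycle(duty_cycle)        # pulse width ~ wheel torque

    def setSentido(self, horario):
        GPIO.output(self.pino_horario, horario)     # clockwise: first switch high
        GPIO.output(self.pino_antihorario, not horario)

    def desligar(self):
        self.pwm.stop()                             # stop the modulated signal
```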
The third method, in turn, configures the robot's linear and angular velocities. Its parameters correspond to the magnitude and phase of the desired resulting velocity vector V R (Figure 31). The first parameter is given as a duty-cycle percentage, while the second is given in radians. The components of the resulting vector were modeled as follows:
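The component equations themselves are cut off in this copy of the text. The sketch below shows one plausible way to implement alterarVelocidade, assuming a cosine/sine decomposition of the resulting velocity into linear and angular parts and a simple differential-drive mixing; it is an illustration, not the author's exact formulation.

```python
# One plausible (assumed) implementation of alterarVelocidade: the resulting
# velocity of magnitude `modulo` (duty-cycle %) and phase `direcao` (radians) is
# split into a linear part (cosine) and an angular part (sine) and mixed into
# per-wheel duty cycles. Illustration only, not the author's exact formula.
import math

class Robo(object):
    def __init__(self, roda_direita, roda_esquerda):
        self.roda_direita = roda_direita
        self.roda_esquerda = roda_esquerda

    def ligar(self):
        self.roda_direita.ligar()
        self.roda_esquerda.ligar()

    def desligar(self):
        self.roda_direita.desligar()
        self.roda_esquerda.desligar()

    def alterarVelocidade(self, modulo, direcao):
        v_linear = modulo * math.cos(direcao)    # component driving both wheels equally
        v_angular = modulo * math.sin(direcao)   # component that turns the robot
        direita = max(0.0, min(100.0, v_linear - v_angular))
        esquerda = max(0.0, min(100.0, v_linear + v_angular))
        self.roda_direita.setSentido(True)
        self.roda_esquerda.setSentido(True)
        self.roda_direita.setDutyCycle(direita)
        self.roda_esquerda.setDutyCycle(esquerda)
```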
The database used for the classifier validation process consisted of optical-flow patterns extracted from 8 videos. These were recorded with the robotic platform itself, so that the captured images represented the same point of view perceived by the robot during autonomous navigation. To this end, the robot was guided remotely by a human operator along a circuit containing different obstacles, with the goal of recording the approach of each obstacle toward the image observer. Whenever the robot got sufficiently close to an obstacle, it was steered toward the next one, and so on. This behavior is illustrated in Figure 33, which shows the spatial distribution of the 4 obstacles considered and the path traveled by the robot. This strategy made it possible to record continuous navigation sequences. Figure 34 shows the environment in which the videos were recorded.
Figure 34 - Environment in which the database videos were recorded.
(...) the environment as new videos were recorded. To this end, it was decided that the positioning of the obstacles would be alternated between each of the first four recordings. For example, assuming the obstacles are initially distributed as illustrated in Figure 33, after recording the initial video the first obstacle was placed in the position of the second, which was placed in the position of the third, which in turn was placed in the position of the fourth, which was moved to the position of the first. From this new arrangement the second video was recorded. The same sequential obstacle-redistribution process was then repeated before recording the third video, and again for the recording of the fourth video.
Figure 35 - Different obstacles placed in the offline test circuit.
Figure 37 - Example of flow patterns labeled by the classifier. The image on the left corresponds to the original frame captured by the robot, over which the reading obtained by the ultrasonic sensor at that instant is shown. The image on the right shows the estimated optical flow, with red representing the system's obstacle indication. 37a shows a pattern correctly identified by the classifier, while 37b shows a misclassified pattern, since the associated distance is above the threshold L sup.
Figure 38 - Illustration of the online test circuit.
Figure 39 - Navigation environment used for the online evaluation.
Figure 40 - Different obstacles placed in the online test circuit.
Figure 45 - SSH connection using the PuTTY software. (a) Main window of the PuTTY software. (b) SSH session between the PuTTY client and Raspbian.
Figure 46 - Different applications of the Lucas-Kanade algorithm. 46a shows the flow over 100 equidistant points, 46b over the points with the best features, 46c over points positioned according to a Gaussian distribution, and 46d over points positioned in a circular pattern.
List of abbreviations and acronyms
A - Ampere
ARM - Advanced RISC Machine
AVI - Audio Video Interleave
CI - Integrated Circuit
cm - Centimeter
cos - Cosine function
DC - Direct Current
DFT - Discrete Fourier Transform
EM - Expectation Maximization
FN - False Negative
FP - False Positive
FPS - Frames Per Second
GB - Gigabytes
GPIO - General Purpose Input/Output
GPU - Graphics Processing Unit
GHz - Gigahertz
GPS - Global Positioning System
HDMI - High-Definition Multimedia Interface
Hz - Hertz
JPEG - Joint Photographic Experts Group
Kg - Kilogram
LAN - Local Area Network
m - Meter
mm - Millimeter
mA - Milliampere
mAH - Milliampere-hour
ms - Millisecond
MB - Megabytes
MHz - Megahertz
PCA - Principal Component Analysis
PNG - Portable Network Graphics
PWM - Pulse Width Modulation
rad - Radians
RAM - Random Access Memory
RBF - Radial Basis Function
RPM - Rotations Per Minute
sen - Sine function
SD - Secure Digital
SFTP - SSH File Transfer Protocol
SIFT - Scale-Invariant Feature Transform
SSH - Secure Shell
SURF - Speeded-Up Robust Features
SVM - Support Vector Machine
SVR - Support Vector Regressor
TB - Terabytes
TBB - Threading Building Blocks
TN - True Negative
TP - True Positive
UML - Unified Modeling Language
USB - Universal Serial Bus
W - Watt
V - Volt
and the use of robots to collect radiation readings in contaminated areas that pose risks to human health (CHAIYASOONTHORN; HONGYIM; MITATHA, 2015). Besides being promising, the development of robots that navigate autonomously can also be considered an extremely challenging research topic. This challenge is justified by the complexity of the different stages that make up this type of navigation. Although the final goal is to make the robot move safely between an origin point and a destination point, the task comprises at least four subprocesses of a multidisciplinary nature, involving the areas of Computing, Physics, Mechanical Engineering, Electronic Engineering, and Biology (SIEGWART; NOURBAKHSH, 2004). These four subprocesses can be enumerated as:
1. Modeling the environment in which the robot is placed;
2. Localizing the robot with respect to the generated model of the environment;
3. Planning the route to be traveled by the robot;
4. Controlling the robot's actuators so as to guarantee that the planned route is followed.
Figure 1 - Aquatic robot capable of capturing jellyfish (KIM et al., 2012).
To better visualize the difference between the two species, each sample was plotted on a two-dimensional chart, using the values of its measured characteristics as coordinates (Figure 7).
Figure 7 - Plot of the samples of each species based on their characteristics. The abscissa axis corresponds to the weight and the ordinate axis represents the ear length. The blue symbols correspond to the European rabbits, while the red ones refer to the European hares.
Table 2 - Characteristics of possible samples of the species Oryctolagus cuniculus (European rabbit).
Weight (kg)   Ear length (cm)
2.77          7.3
2.76          7.1
2.77          7.4
2.78          7.9
2.77          7.4
Table 3 - Characteristics of possible samples of the species Lepus europaeus (European hare).
Weight (kg)   Ear length (cm)
3.3           12.8
3.5           13.2
3.6           13.4
3.2           12.9
3.3           13.0
These are defined by numerous characteristics, such as size, weight, color, speed, and others. Recognizing a pattern therefore corresponds to the attempt to group objects that have similar characteristics. In the problem illustrated here, the desired characteristic is the species. As seen, one possible recognition strategy is to evaluate the similarities between objects whose classes are already known; in the example presented, these similarities were the animal's weight and the length of its ears. Next, a model is developed to represent the rule that separates the elements of the different classes; the line illustrated in Figure 8 represents the rule developed for the problem at hand. Finally, through this model one can infer the class to which an object with a still unknown label belongs, as was the case of the new element plotted in Figure 9: since its point fell below the dividing line, it could be inferred that it corresponds to a European rabbit. In other words, (...)
Figure 9 - Illustration of an element whose species is still unknown (green symbol). Since it lies below the separating line, one can estimate that this element belongs to the species Oryctolagus cuniculus.
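As an illustration of this workflow, the sketch below fits a linear separator to the samples of Tables 2 and 3 with scikit-learn and classifies a new, hypothetical measurement; the query values are invented for the example, since Figure 9 does not give them.

```python
# Illustrative sketch of the rabbit/hare example: fit a linear separator on the
# samples of Tables 2 and 3 and classify a new, hypothetical measurement.
from sklearn.svm import SVC

# (weight in kg, ear length in cm)
rabbits = [(2.77, 7.3), (2.76, 7.1), (2.77, 7.4), (2.78, 7.9), (2.77, 7.4)]
hares = [(3.3, 12.8), (3.5, 13.2), (3.6, 13.4), (3.2, 12.9), (3.3, 13.0)]

X = rabbits + hares
y = ["Oryctolagus cuniculus"] * len(rabbits) + ["Lepus europaeus"] * len(hares)

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([(2.9, 8.0)]))  # expected output: ['Oryctolagus cuniculus']
```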
To drive the chassis wheels, two DC motors with gearbox and double shaft were used (Figure 15). The relationship between voltage, current, and rotation speed for these motors is shown in Table 4. To drive them, the L293D integrated circuit was used (Texas Instruments Incorporated, 2016), which internally contains two circuits known as H-bridges. As seen in Figure 16a, in this type of circuit the motor M is driven through four switches, which define the direction in which the current generated by the source V flows through the motor. Using this kind of circuit provides two advantages: the motor's rotation direction can easily be reversed, and its control can be performed separately from its power supply. The latter advantage is especially important when the controller cannot supply the power required by the motor, as is the case of the Raspberry Pi used in this project.
Figure 15 - Model of the DC motor used to drive the chassis wheels.
Figure 16b shows the pinout of the L293D. Pins 1Y and 2Y correspond to the terminals of the first H-bridge to which the motor M of Figure 16a would be connected; likewise, pins 3Y and 4Y correspond to the same terminals of the second H-bridge. The output value of these pins is determined by the input values on the A pins of the same number, while pins 1,2E and 3,4E enable pins 1Y, 2Y and 3Y, 4Y, respectively. Finally, the voltage applied to pin VCC1 powers the logic circuits of this integrated circuit, whereas the voltage applied to pin VCC2 is destined exclusively to the Y output pins. Accordingly, one of the motors installed on the chassis was connected to pins 1Y and 2Y of the L293D, while the other motor was connected to pins 3Y and 4Y. The remaining control pins (types E and A) were connected to pins of the Raspberry Pi's GPIO interface, which was also used to supply 5 V to the logic circuit of the L293D.
• Dimensions: 21.3 x 15.3 cm;
• Wheel diameter: 6.5 cm;
• Wheel thickness: 2.7 cm.
Table 4 - DC motor specifications.
Voltage   Current   RPM
3 V       100 mA    120
5 V       100 mA    208
6 V       120 mA    240
Figure 16 - H-bridge. 16a shows the electrical schematic of this circuit (motor M, switches s1-s4, and supply V). 16b shows the pinout of the L293D integrated circuit.
). Finally, it was decided to use the Power Pack APC portable charger (Figure 18), which supplies an already regulated 5000 mAh through two 5 V outputs, one providing at most 1 A and the other up to 2.4 A. This made it possible to power both the Raspberry Pi and the two DC motors, while also reducing the total weight on the chassis and increasing the available space, since the batteries previously used to power the motors were removed.
Figure 18 - Power Pack APC portable charger used to power the Raspberry Pi and the two DC motors of the chassis (ELETRIC, 2016).
4.1.4 Sensors
With the additional space available on the chassis, the two sensors used by the navigation system could be installed. The first was the LG AN-VC500 camera (Figure 19), which has the following specifications:
• Maximum resolution: 1920x1080 pixels;
• Capture rate: 30 FPS;
• Output formats: H.264 and YUY2;
• Interface: USB 2.0.
Figure 19 - LG AN-VC500 camera installed on the chassis (LG, 2016).
The camera was positioned at the front of the chassis so that its inclination with respect to the ground was approximately 0°, and then simply connected to one of the Raspberry Pi's USB ports. Next, the HC-SR04 ultrasonic sensor was installed (Figure 20). It works by emitting a 40 kHz sound wave and measuring the time it takes to be reflected by the environment. This sensor was installed on the platform to be used during the training of the navigation system; through it, the distance from the robot to rigid, smooth, reflective obstacles can be inferred. Its range goes from 2 cm to 4 m, with 3 mm precision according to the manufacturer, and it requires a 5 V supply. Its 4 pins are named VCC, GND, Trigger, and Echo. The first two are the supply and reference (ground) pins. The third is the input pin used to fire the sound wave, which requires a pulse of 10 µs. The last one is the output pin, which returns a wave whose time at a high logic level is proportional to the round-trip time of the emitted signal. The first three pins were connected directly to the Raspberry Pi's GPIO interface. The same was not possible for the Echo pin: since its output voltage is 5 V, a voltage-divider circuit formed by 4.7 kΩ and 10 kΩ resistors was needed to connect it to the Raspberry Pi (Figure 21).
Figure 20 - HC-SR04 ultrasonic sensor installed on the chassis for use during system training (INDOWARE, 2013).
Figure 21 - Voltage divider used to connect the Echo output of the ultrasonic sensor to the Raspberry Pi's GPIO interface.
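The reading protocol described above (10 µs trigger pulse, echo-high time proportional to the round trip) can be sketched as follows; the pin numbers are placeholders and the code is an illustrative assumption, not the thesis' implementation.

```python
# Illustrative sketch (assumed pin numbers, not the thesis' code) of reading the
# HC-SR04 through the GPIO interface: a 10 us pulse on Trigger, then the time the
# Echo line stays high is proportional to the round trip of the ultrasonic burst.
import time
import RPi.GPIO as GPIO

TRIGGER, ECHO = 23, 24            # placeholder BCM pin numbers

GPIO.setmode(GPIO.BCM)
GPIO.setup(TRIGGER, GPIO.OUT)
GPIO.setup(ECHO, GPIO.IN)         # Echo arrives through the 4.7 k / 10 k divider

def read_distance_cm():
    GPIO.output(TRIGGER, True)
    time.sleep(10e-6)             # 10 microsecond trigger pulse
    GPIO.output(TRIGGER, False)
    start = time.time()
    while GPIO.input(ECHO) == 0:  # wait for the rising edge of the echo pulse
        start = time.time()
    end = start
    while GPIO.input(ECHO) == 1:  # measure how long the pulse stays high
        end = time.time()
    return (end - start) * 34300.0 / 2.0  # speed of sound ~343 m/s, halved (round trip)
```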
4.1.5 Final Hardware
Figure 22 shows the final platform built; its financial cost is detailed in Table 5.
Table 5 - Cost of the developed platform.
Component               Cost (US$)
Raspberry Pi 3          30.00
LG AN-VC500 camera      89.99
Chassis                 21.55
HC-SR04 sensor          2.00
L293D                   1.90
Power Bank APC M5WH     25.38
Total                   170.82
Figure 25 - Flowchart of the optical-flow computation stage. Its steps are: convert the images to grayscale, apply a Gaussian filter, apply a Laplacian filter, and compute the optical flow.
Figure 26 - Preprocessing sequence for the optical-flow computation. 26a and 26b correspond to captures taken at t and t + 1, respectively. 26c and 26d show the conversion of these captures to grayscale. The application of a Gaussian filter to these conversions is illustrated in 26e and 26f. In turn, 26g and 26h illustrate the application of a Laplacian filter to the results of the Gaussian filter. Finally, 26i shows the resulting optical flow.
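A compact OpenCV sketch of this preprocessing chain is shown below; the kernel size and the 10x10 grid of fixed points are illustrative assumptions rather than the exact values used in the thesis.

```python
# Sketch of the preprocessing chain of Figure 26 using OpenCV: grayscale
# conversion, Gaussian smoothing, Laplacian filtering, then sparse Lucas-Kanade
# optical flow over a fixed grid of points. Kernel sizes and grid size are
# illustrative choices, not necessarily those used in the thesis.
import cv2
import numpy as np

def preprocess(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    smooth = cv2.GaussianBlur(gray, (5, 5), 0)        # low-pass Gaussian filter
    return cv2.Laplacian(smooth, cv2.CV_8U)           # high-pass Laplacian filter

def fixed_grid(width, height, n=10):
    xs = np.linspace(0, width - 1, n)
    ys = np.linspace(0, height - 1, n)
    pts = np.array([[x, y] for y in ys for x in xs], dtype=np.float32)
    return pts.reshape(-1, 1, 2)                      # 100 fixed points

def optical_flow(prev_frame, next_frame):
    prev_img, next_img = preprocess(prev_frame), preprocess(next_frame)
    pts = fixed_grid(prev_img.shape[1], prev_img.shape[0])
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_img, next_img, pts, None)
    flow = (new_pts - pts).reshape(-1, 2)             # displacement vectors
    return flow, status.reshape(-1)
```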
Figure 29 - Comparison of optical flow for decision making. 29a shows the optical flow computed over a given scene. 29b illustrates the average amplitude of the flow vectors associated with each partition of the image.
Figure 28 - Flowchart of the decision-making stage. Its logic is: if the classifier indicates an obstacle, the flow amplitudes of the left and right sides of the image are summed; if the left-side sum is larger than the right-side one, the robot swerves to the right, otherwise it swerves to the left. If no obstacle is indicated, the robot is kept moving in a straight line.
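The decision rule summarized above can be sketched as follows; the function name and the flow/point representation are assumptions made for illustration.

```python
# Sketch of the decision step of Figure 28: when the classifier signals an
# obstacle, the flow amplitudes of the left and right image halves are compared
# and the robot swerves away from the side with the larger apparent motion.
import numpy as np

def decide(obstacle, flow, points, image_width):
    """flow: Nx2 displacement vectors; points: Nx2 pixel positions of those vectors."""
    if not obstacle:
        return "straight"
    amplitudes = np.linalg.norm(flow, axis=1)
    left = amplitudes[points[:, 0] < image_width / 2].sum()
    right = amplitudes[points[:, 0] >= image_width / 2].sum()
    return "turn_right" if left > right else "turn_left"
```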
Figure 30 - Class diagram for actuator control. It comprises the Robô class (attributes rodaDireita and rodaEsquerda of type Roda; methods ligar, desligar, and alterarVelocidade(módulo, direção)), the Roda class (attributes pinoHabilitação, pinoSentidoHorário, pinoSentidoAntiHorário, and pwm; methods ligar, setDutyCycle, setSentido, and desligar), the RPi.GPIO class (setUp, PWM, output), and the PWM class (start, ChangeDutyCycle, stop).
Figure 31 - Illustration of the velocity vector and its components (labeled V R, V L, V A, and θ V in the figure).
The SVR, in turn, was not able to recognize any example of the positive class. In fact, during its evaluation it was observed that all the estimated distances fell within the interval [135 cm, 150 cm], even though the flow vectors were not normalized at any stage of the validation. Therefore, based on the measures associated with each model, the use of the SVM with RBF kernel combined with the normalization of the optical flow is justified.
Table 7 - Average measures obtained with the different models used.
Model        Precision         Recall            F-measure         Accuracy
SVM          75.46 ± 6.21%     61.71 ± 4.75%     68.00 ± 3.75%     89.90 ± 1.36%
Perceptron   16.37 ± 2.36%     45.92 ± 9.89%     24.08 ± 3.81%     50.83 ± 3.09%
SVR          -                 0.00 ± 0.00%      -                 82.84 ± 1.08%
Table 8, in turn, shows the average time taken by each stage of the classification during the cross-validation iterations. T op corresponds to the average time taken to estimate the optical flow, T pca is the average time used to project the feature vector through PCA, and T svm is the average time spent by the SVM to classify the flow pattern.
Table 8 - Average processing time used by each classification stage during cross-validation.
Iteration (K)   T op (ms)   T pca (ms)   T svm (ms)
1               53.66       0.76         17.16
2               51.99       0.74         16.76
3               48.93       0.70         15.72
4               50.53       0.72         15.77
5               50.58       0.72         16.04
6               50.53       0.71         15.33
7               49.01       0.68         15.23
8               49.08       0.70         15.70
From Table 8, the average time taken by the system to execute each classification stage was computed. These values are listed below:
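The averaged values themselves are truncated in this copy of the text; recomputing them directly from Table 8 gives approximately 50.5 ms for the optical flow, 0.72 ms for the PCA projection and 16.0 ms for the SVM, about 67 ms per classification in total:

```python
# Averages recomputed from the values of Table 8.
t_op = [53.66, 51.99, 48.93, 50.53, 50.58, 50.53, 49.01, 49.08]
t_pca = [0.76, 0.74, 0.70, 0.72, 0.72, 0.71, 0.68, 0.70]
t_svm = [17.16, 16.76, 15.72, 15.77, 16.04, 15.33, 15.23, 15.70]
means = [sum(t) / len(t) for t in (t_op, t_pca, t_svm)]
print(means, sum(means))  # ~[50.54, 0.72, 15.96] ms, ~67.2 ms per classification
```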
Table 9 - Comparative analysis between the developed system and the related work cited in Chapter 2.
System                                                                     Accuracy   Frames per second   Platform cost (US$)
Floor detection by homography (CONRAD; DESOUZA, 2010)                      99.60%     -                   7142.82
Floor detection by line segmentation (LI; BIRCHFIELD, 2010)                89.10%     5                   10612.82
Optical-flow segmentation (CALDEIRA; SCHNEEBELI; SARCINELLI-FILHO, 2007)   -          7.41                4080.00
Time to contact (SANCHEZ-GARCIA et al., 2015)                              -          -                   -
Optical-flow classification (SHANKAR; VATSA; SUJIT, 2014)                  88.80%     7                   50
Developed system                                                           89.90%     9.4                 170.82
5.2 Online Evaluation
The online evaluation of the platform consisted of placing the robot in a given environment and observing it navigate autonomously. For this purpose, the circuit illustrated in Figure 38 was considered, in which three different obstacles were distributed. With the arrangement considered, the avoidance maneuvers performed by the robot would guide it along the whole circuit as follows: initially it would already be positioned facing the first obstacle; upon approaching it, the expected maneuver should direct it toward the second obstacle; once close to that one, a new maneuver should be able to position it in front of the third obstacle; finally, after avoiding this last one, the robot would reach the end of the circuit. Figure 39 shows an image of the navigation environment considered.
Figure 41 - Autonomous navigation of the robot along the test circuit. 41a and 41b illustrate the robot approaching the first obstacle and its avoidance maneuver, respectively. 41c and 41d show, respectively, the approach to the second obstacle and its avoidance. Finally, 41e and 41f show the approach to the third obstacle and its avoidance, respectively.
FOUNDATION, R. P. Raspberry Pi - Teach, Learn, and Make with Raspberry Pi. 2016. Available at: <https://www.raspberrypi.org/>. Cited 2 times, on pages 22 and 38.
APPENDIX A - Configuring the Raspberry Pi for the Development of the Navigation System
Initially, the image of the Raspbian operating system (FOUNDATION, 2016e), which is based on Debian, was downloaded on a separate computer. Although several other operating systems (Linux-based or not) are supported, Raspbian was chosen because it is the official system of the Raspberry Pi. The image was then written to a 32 GB, 10 MB/s Micro SD card (Figure 42) using the Win32 Disk Imager software (Figure 43). After writing, the card was inserted into the Raspberry Pi, which was connected to a power source, to a USB mouse and keyboard, to a local network through an Ethernet connection, and to a monitor through HDMI. After powering on the Raspberry Pi, a login was performed with the default credentials provided by the manufacturer and the platform's configuration tool was run. Through it, the operating system was allowed to occupy all the space provided by the SD card; in addition, the system was configured to always boot to the command line, and its firmware was updated.
Figure 42 - SD card onto which the Raspbian image was written (SANDISK, 2017).
After configuring the Raspberry Pi's operating system, it had to be connected to a Wi-Fi network. For this, the wizard provided through the system's graphical interface was used (Figure 44); once connected, the Ethernet connection was removed. Next, the PuTTY software (Figure 45) was installed on a separate computer, through which an SSH connection to the server built into Raspbian could be established, allowing the Raspberry Pi to be controlled remotely. Once the connection was established, the mouse, keyboard, and monitor previously connected to the Raspberry Pi could be removed. Finally, the FileZilla software was also installed on this control computer, allowing files to be transferred between that machine and Raspbian through the SFTP protocol.
Figure 43 - Win32 Disk Imager software used to write the Raspbian image to the SD card (SOURCEFORGE, 2017).
Figure 44 - Raspbian graphical interface. In the upper right corner the network-configuration wizard can be accessed.
Once the Raspberry Pi was connected to the control computer, the tools used to develop the navigation system had to be installed. The first of them was Python 2.7
Table 10 - Performance test results.
Experiment        Average FPS on the Raspberry Pi   Average FPS on the desktop
Capture rate      25.28                             22.78
DFT computation   27.68                             451.81
Canny filter      29.81                             455.34
Optical flow      36.69                             472.36
ANGELICI, F. M.; LUISELLI, L. Body Size and Altitude Partitioning of the Hares Lepus Europaeus and L. Corsicanus Living in Sympatry and Allopatry in Italy. Wildlife Biology, v. 13, n. Toschi 1965, p. 251-257, 2007. ISSN 0909-6396. Citado na página 30.
ANTON, H.; BIVENS, I.; DAVIS, S. Cálculo - V2 (Portuguese Edition). Bookman, 2014. ISBN 9788582602461. Disponível em: <https://www.amazon.com/C%C3%A1lculo-V2-Portuguese-Howard-Anton-ebook/dp/B015WUBGAU?SubscriptionId=0JYN1NVW651KCA56C102&tag=techkie-20&linkCode=xm2&camp=2025&creative=165953&creativeASIN=B015WUBGAU>. Citado na página 29.
Systems and Experiment Performance of optical flow techniques. J L Barron, D J Fleet, S S Beauchemin, http:/link.springer.com/10.1007/BF014209840920-5691International Journal of Computer Vision. 1BARRON, J. L.; FLEET, D. J.; BEAUCHEMIN, S. S. Systems and Experiment Performance of optical flow techniques. International Journal of Computer Vision, v. 12, n. 1, p. 43-77, 1994. ISSN 0920-5691. Disponível em: <http://link.springer.com/10.1007/BF01420984>. Citado na página 30.
org -community supported open hardware computers for making. Org Beagleboard, Beagleboard, BEAGLEBOARD.ORG. BeagleBoard.org -community supported open hardware computers for making. 2016. Disponível em: <https://beagleboard.org/>. Citado na página 22.
Disponível em: <https: //www.amazon.com/Autonomous-Robots-Inspiration-Implementation-Intelligent-ebook/dp/ B006H2CD7I?SubscriptionId=0JYN1NVW651KCA56C102&tag=techkie-20&linkCode= xm2&camp=2025&creative=165953&creativeASIN=B006H2CD7I>. G A Bekey, 978026253418517Autonomous Robots: From Biological Inspiration to Implementation and Control (Intelligent Robotics and Autonomous Agents series). A Bradford BookBEKEY, G. A. Autonomous Robots: From Biological Inspiration to Im- plementation and Control (Intelligent Robotics and Autonomous Agents se- ries). A Bradford Book, 2005. ISBN 9780262534185. Disponível em: <https: //www.amazon.com/Autonomous-Robots-Inspiration-Implementation-Intelligent-ebook/dp/ B006H2CD7I?SubscriptionId=0JYN1NVW651KCA56C102&tag=techkie-20&linkCode= xm2&camp=2025&creative=165953&creativeASIN=B006H2CD7I>. Citado na página 17.
Fast obstacle detection using targeted optical flow. N S Boroujeni, 29BOROUJENI, N. S. Fast obstacle detection using targeted optical flow. . . . (ICIP), 2012 19th IEEE . . . , v. 2, n. 3, p. 65-68, 2012. Citado 2 vezes nas páginas 21 e 29.
Pyramidal implementation of the affine lucas kanade feature tracker-description of the algorithm. J.-Y Bouguet, 10828907Pages.Slc.Edu. 249BOUGUET, J.-Y. Pyramidal implementation of the affine lucas kanade feature tracker-description of the algorithm. Pages.Slc.Edu, v. 2, p. 3, 2001. ISSN 10828907. Citado na página 49.
An optical flow-based sensing system for reactive mobile robot navigation. E M D O Caldeira, H J A Schneebeli, M Sarcinelli-Filho, 0103-1759Controle & Automação Sociedade Brasileira de Automaticav. 18, n. 3. Citado 3 vezes nas páginas 25, 28 e 70CALDEIRA, E. M. D. O.; SCHNEEBELI, H. J. A.; SARCINELLI-FILHO, M. An optical flow-based sensing system for reactive mobile robot navigation. Sba: Controle & Automação Sociedade Brasileira de Automatica, v. 18, n. 3, p. 265-277, 2007. ISSN 0103-1759. Citado 3 vezes nas páginas 25, 28 e 70.
A computational approach to edge detection. J Canny, IEEE Transactions on Pattern Analysis and Machine Intelligence, Institute of Electrical and Electronics Engineers. 649IEEECANNY, J. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence, Institute of Electrical and Electronics Engineers (IEEE), PAMI-8, n. 6, p. 679-698, nov 1986. Citado na página 49.
Building automatic packet report system to report position and radiation data for autonomous robot in the disaster area. S Chaiyasoonthorn, N Hongyim, S Mitatha, 10.1109/ICCAS.2015.736488315th International Conference on Control, Automation and Systems (ICCAS). Institute of Electrical and Electronics Engineers. IEEECHAIYASOONTHORN, S.; HONGYIM, N.; MITATHA, S. Building automatic packet report system to report position and radiation data for autonomous robot in the disaster area. In: 2015 15th International Conference on Control, Automation and Systems (ICCAS). Institute of Electrical and Electronics Engineers (IEEE), 2015. Disponível em: <http://dx.doi.org/10.1109/ICCAS.2015.7364883>. Citado na página 17.
Chih-Wei Hsu; Chih-Chung Chang; LIN, C.-J. A Practical Guide to Support Vector Classification. BJU international, v. 101, n. 1, p. 1396-400, 2008. ISSN 1464-410X. Disponível em: <http://www.csie.ntu.edu.tw/{~}cjlin/papers/guide/guide.p>. Citado 3 vezes nas páginas 66, 85 e 92.
Positioning of a mobile robot based on odometry and a new ultrasonic lps. B.-S Cho, 10.1007/s12555-012-0045-x2005-4092. DisponívelInternational Journal of Control, Automation and Systems. 11CHO, B.-S. et al. Positioning of a mobile robot based on odometry and a new ultrasonic lps. International Journal of Control, Automation and Systems, v. 11, n. 2, p. 333-345, 2013. ISSN 2005-4092. Disponível em: <http://dx.doi.org/10.1007/s12555-012-0045-x>. Citado na página 18.
Homography-based ground plane detection for mobile robot navigation using a Modified EM algorithm. D Conrad, G N Desouza, IEEE. Citado 3 vezes nas páginas 21, 24 e 70CONRAD, D.; DESOUZA, G. N. Homography-based ground plane detection for mobile robot navigation using a Modified EM algorithm. 2010 IEEE International Conference on Robotics and Automation, p. 910-915, 2010. Disponível em: <http: //ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=5509457>. Citado 3 vezes nas páginas 21, 24 e 70.
. B Corporation, Bcm2835, Peripherals, Science. 39CORPORATION, B. BCM2835 ARM Peripherals. Science, 2012. Citado na página 39.
Distance estimation with efference copies and optical flow maneuvers : a stability-based strategy. G C H E G Croon, De, 10.1088/1748-3190/11/1/0160041748-3190. DisponívelBioinspiration & Biomimetics. 11IOP PublishingCROON, G. C. H. E. G. de. Distance estimation with efference copies and optical flow maneuvers : a stability-based strategy . Bioinspiration & Biomimetics, IOP Publishing, v. 11, n. 1, p. 1-30, 2015. ISSN 1748-3190. Disponível em: <http://dx.doi.org/10.1088/1748-3190/11/1/016004>. Citado na página 28.
GPIO 0.6.3. B Croston, Rpi, CROSTON, B. RPi.GPIO 0.6.3. 2017. Disponível em: <https://pypi.python.org/pypi/RPi.GPIO>. Citado na página 53.
Support vector regression machines. H Drucker, 10495258Advances in Neural Information Processing Systems. DRUCKER, H. et al. Support vector regression machines. Advances in Neural Information Processing Systems, v. 1, p. 155-161, 1997. ISSN 10495258. Disponível em: <http://papers.nips.cc/paper/1238-support-vector-regression-machines.pdf>. Citado na página 66.
. R O Duda, P E Hart, D G Stork, Pattern Classification. 2001. 680 p. Citado 2 vezes nas páginas 35 e 52DUDA, R. O.; HART, P. E.; STORK, D. G. Pattern Classification. 2001. 680 p. Citado 2 vezes nas páginas 35 e 52.
APC -Carregador Portátil -Power Bank -Schneider Eletric. S Eletric, Power, Pack, ELETRIC, S. Power Pack APC -Carregador Portátil -Power Bank -Schneider Eletric. 2016. Disponível em: <http://powerpackapc.com.br/>. Citado 2 vezes nas páginas 8 e 44.
Fundamentos de Cálculo Numérico. A Filho, 9788582603857. Citado na página 30BookmanS.l.FILHO, A. Fundamentos de Cálculo Numérico. [S.l.]: Bookman, 2016. ISBN 9788582603857. Citado na página 30.
Optical Flow Estimation. Mathematical models for Computer Vision: The Handbook. D Fleet, Y Weiss, 1941-0042FLEET, D.; WEISS, Y. Optical Flow Estimation. Mathematical models for Computer Vision: The Handbook, p. 239-257, 2005. ISSN 1941-0042. Disponível em: <http: //eprints.pascal-network.org/archive/00001065/>. Citado na página 29.
Disponível em: <https: //docs. P S Foundation, 83Python 2.7.13 documentationFOUNDATION, P. S. Python 2.7.13 documentation. 2017. Disponível em: <https: //docs.python.org/2/>. Citado na página 83.
GPIO -Raspberry Pi Documentation. R P Foundation, FOUNDATION, R. P. GPIO -Raspberry Pi Documentation. 2016. Disponível em: <https://www.raspberrypi.org/documentation/usage/gpio-plus-and-raspi2/>. Citado 2 vezes nas páginas 7 e 40.
Power Supply -Raspberry Pi Documentation. R P Foundation, FOUNDATION, R. P. Power Supply -Raspberry Pi Documentation. 2016. Disponível em: <https://www.raspberrypi.org/documentation/hardware/raspberrypi/power/README.md>. Citado na página 39.
Raspberry Pi 3 Model B -Raspberry Pi. R P Foundation, FOUNDATION, R. P. RaspbianAbout -RaspbianFOUNDATION, R. P. Raspberry Pi 3 Model B -Raspberry Pi. 2016. Disponível em: <https://www.raspberrypi.org/products/raspberry-pi-3-model-b/>. Citado na página 38. FOUNDATION, R. P. RaspbianAbout -Raspbian. 2016. Disponível em: <https: //www.raspbian.org/RaspbianAbout>. Citado 2 vezes nas páginas 46 e 82.
A Constraint-Aware Heuristic Path Planner for Finding Energy-Efficient Paths on Uneven Terrains. N Ganganath, C Cheng, C K Tse, 1551-3203. DisponívelIEEE Transactions on Industrial Informatics. GANGANATH, N.; CHENG, C.-t.; TSE, C. K. A Constraint-Aware Heuristic Path Planner for Finding Energy-Efficient Paths on Uneven Terrains. IEEE Transactions on Industrial Informatics, v. 11, n. 3, p. 601-611, 2015. ISSN 1551-3203. Disponível em: <http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=7061469>. Citado na página 18.
. A Holdings, Home -Arm, HOLDINGS, A. Home -ARM. 2016. Disponível em: <http://www.arm.com/>. Citado na página 38.
Determining optical flow. B K P Horn, B G Schunck, 00043702Artificial Intelligence. 1-328HORN, B. K. P.; SCHUNCK, B. G. Determining optical flow. Artificial Intelligence, v. 17, n. 1-3, p. 185-203, 1981. ISSN 00043702. Citado na página 28.
Analysis of a complex of statistical variables into principal components. H Hotelling, Journal of educational psychology. 6417Citado 2 vezes nas páginas 53 e 85HOTELLING, H. Analysis of a complex of statistical variables into principal components. Journal of educational psychology, Warwick & York, v. 24, n. 6, p. 417, 1933. Citado 2 vezes nas páginas 53 e 85.
LM317L 3-Terminal Adjustable Regulator LM317L 3-Terminal Adjustable Regulator. T I Incorporated, 43INCORPORATED, T. I. LM317L 3-Terminal Adjustable Regulator LM317L 3-Terminal Adjustable Regulator. v. 5, n. March, p. 1-4, 2002. Citado 2 vezes nas páginas 8 e 43.
LM2679 SIMPLE SWITCHER ® 5-A Step-Down Voltage Regulator. T I Incorporated, INCORPORATED, T. I. LM2679 SIMPLE SWITCHER ® 5-A Step-Down Voltage Regulator.
INTEL. Threading Building Blocks. INTEL. Threading Building Blocks. 2017. Disponível em: <https://www.threadingbuildingblocks. org/>. Citado na página 84.
An autonomous industrial robot for loading and unloading goods. M A Kadir, 10.1109/ICIEV.2015.73339842015 International Conference on Informatics, Electronics & Vision (ICIEV). Institute of Electrical and Electronics Engineers. IEEE17KADIR, M. A. et al. An autonomous industrial robot for loading and unloading goods. In: 2015 International Conference on Informatics, Electronics & Vision (ICIEV). Institute of Electrical and Electronics Engineers (IEEE), 2015. Disponível em: <http://dx.doi.org/10.1109/ICIEV.2015.7333984>. Citado na página 17.
Development of jellyfish removal robot system JEROS. D Kim, 10.1109/URAI.2012.64630922012 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). Institute of Electrical and Electronics Engineers. IEEEKIM, D. et al. Development of jellyfish removal robot system JEROS. In: 2012 9th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). Institute of Electrical and Electronics Engineers (IEEE), 2012. Disponível em: <http: //dx.doi.org/10.1109/URAI.2012.6463092>. Citado 2 vezes nas páginas 7 e 18.
A study of cross-validation and bootstrap for accuracy estimation and model selection. R Kohavi, STANFORD, CA. Ijcai60S.l.], 1995. v. 14, n. 2KOHAVI, R. et al. A study of cross-validation and bootstrap for accuracy estimation and model selection. In: STANFORD, CA. Ijcai. [S.l.], 1995. v. 14, n. 2, p. 1137-1145. Citado na página 60.
Development of a quadruped walking robot AiDIN-III using biologically inspired kinematic analysis. I M Koo, International Journal of Control, Automation and Systems. 11Springer NatureCitado 2 vezes nas páginas 7 e 19KOO, I. M. et al. Development of a quadruped walking robot AiDIN-III using biologically inspired kinematic analysis. International Journal of Control, Automation and Systems, Springer Nature, v. 11, n. 6, p. 1276-1289, nov 2013. Citado 2 vezes nas páginas 7 e 19.
Stereo-based Visual Odometry for Autonomous Robot Navigation. I Kostavelis, 1729-8806International Journal of Advanced Robotic Systems. 1KOSTAVELIS, I. et al. Stereo-based Visual Odometry for Autonomous Robot Navigation. International Journal of Advanced Robotic Systems, p. 1, 2016. ISSN 1729-8806. Disponível em: <http://www.intechopen.com/journals/international{\_}journal{\_}of{\_}advanced{\_ }robotic{\_}systems/stereo-based-visual-odometry-for-autonomous-r>. Citado na página 21.
The Biology of the Laboratory Rabbit. S H W R E F A Kraus, Elsevier ScienceKRAUS, S. H. W. R. E. F. A. L. The Biology of the Laboratory Rabbit. Elsevier Science, 2013. Disponível em: <http://www.ebook.de/de/product/23226530/the_biology_of_the_laboratory_ rabbit.html>. Citado na página 30.
Position estimation using multiple low-cost GPS receivers for outdoor mobile robots. W Lee, W Chung, 10.1109/URAI.2015.735890612th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). Institute of Electrical and Electronics Engineers. IEEELEE, W.; CHUNG, W. Position estimation using multiple low-cost GPS receivers for outdoor mobile robots. In: 2015 12th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI). Institute of Electrical and Electronics Engineers (IEEE), 2015. Disponível em: <http://dx.doi.org/10.1109/URAI.2015.7358906>. Citado na página 19.
G Lg, Portugal, Acessórios TV -Câmara Videoconferência AN-VC500. LG, G. LG Portugal :: Acessórios TV -Câmara Videoconferência AN-VC500. 2016. Disponível em: <http://www.lg.com/pt/acessorios-tv/lg-AN-VC500-camara-smart>.
Image-based segmentation of indoor corridor floors for a mobile robot. Y Li, S T Birchfield, 2153-0858IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 -Conference Proceedings. Citado 3 vezes nas páginas 21, 25 e 70LI, Y.; BIRCHFIELD, S. T. Image-based segmentation of indoor corridor floors for a mobile robot. IEEE/RSJ 2010 International Conference on Intelligent Robots and Systems, IROS 2010 - Conference Proceedings, p. 837-843, 2010. ISSN 2153-0858. Citado 3 vezes nas páginas 21, 25 e 70.
A Novel Machine Vision Approach Applied for Autonomous Robotics Navigation. R G Lins, Proceedings -2015 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2015. -2015 IEEE International Conference on Systems, Man, and Cybernetics, SMC 201521LINS, R. G. et al. A Novel Machine Vision Approach Applied for Autonomous Robotics Navigation. Proceedings -2015 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2015, p. 1912-1917, 2016. Citado na página 21.
A Critical Review of Recurrent Neural Networks for Sequence Learning. Z C Lipton, J Berkowitz, C Elkan, 9781450330633LIPTON, Z. C.; BERKOWITZ, J.; ELKAN, C. A Critical Review of Recurrent Neural Networks for Sequence Learning. p. 1-38, 2015. ISSN 9781450330633. Disponível em: <http://arxiv.org/abs/1506.00019>. Citado na página 75.
An Iterative Image Registration Technique with an Application to Stereo Vision. Imaging, v. 130, n. x. B D Lucas, T Kanade, 17486815LUCAS, B. D.; KANADE, T. An Iterative Image Registration Technique with an Application to Stereo Vision. Imaging, v. 130, n. x, p. 674-679, 1981. ISSN 17486815. Disponível em: <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.49.2019{&}rep=rep1{&}ty>. Citado na página 30.
Scikit-learn: Machine learning in Python. F Pedregosa, Journal of Machine Learning Research. 1285PEDREGOSA, F. et al. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, v. 12, p. 2825-2830, 2011. Citado 3 vezes nas páginas 46, 66 e 85.
Rabbits and Rodents -E-Book. K Quesenberry, J W Carpenter, Ferrets, Elsevier Health SciencesQUESENBERRY, K.; CARPENTER, J. W. Ferrets, Rabbits and Rodents -E-Book. Elsevier Health Sciences, 2011. Disponível em: <http://www.ebook.de/de/product/23019441/katherine_ quesenberry_james_w_carpenter_ferrets_rabbits_and_rodents_e_book.html>. Citado na página 30.
R: A Language and Environment for Statistical Computing. R Core Team, Vienna, AustriaR Core Team. R: A Language and Environment for Statistical Computing. Vienna, Austria, 2013. Disponível em: <http://www.R-project.org/>. Citado na página 48.
The perceptron: A probabilistic model for information storage and organization in the brain. F Rosenblatt, Psychological Review. 666APAROSENBLATT, F. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, American Psychological Association (APA), v. 65, n. 6, p. 386-408, 1958. Citado na página 66.
Decision making for obstacle avoidance in autonomous mobile robots by time to contact and optical flow. A J Sanchez-Garcia, 25th International Conference on Electronics. Citado 2 vezes nas páginas 25 e 70SANCHEZ-GARCIA, A. J. et al. Decision making for obstacle avoidance in autonomous mobile robots by time to contact and optical flow. 25th International Conference on Electronics, Communications and Computers, CONIELECOMP 2015, p. 130-134, 2015. Citado 2 vezes nas páginas 25 e 70.
Global em Soluções de Armazenamento de Memória Flash. Sandisk, Sandisk, Líder, SANDISK. SanDisk | Líder Global em Soluções de Armazenamento de Memória Flash. 2017. Disponível em: <https://www.sandisk.com.br/>. Citado 2 vezes nas páginas 9 e 82.
Collision avoidance for a low-cost robot using SVM-based monocular vision. A Shankar, M Vatsa, P B Sujit, IEEE International Conference on Robotics and Biomimetics. Citado 3 vezes nas páginas 26, 29 e 70SHANKAR, A.; VATSA, M.; SUJIT, P. B. Collision avoidance for a low-cost robot using SVM-based monocular vision. 2014 IEEE International Conference on Robotics and Biomimetics, IEEE ROBIO 2014, p. 277-282, 2014. Citado 3 vezes nas páginas 26, 29 e 70.
Good features to track. J ; Shi, Tomasi, 10.1109/CVPR.1994.323794Proceedings of IEEE Conference on Computer Vision and Pattern Recognition CVPR-94. Institute of Electrical and Electronics Engineers. IEEE Conference on Computer Vision and Pattern Recognition CVPR-94. Institute of Electrical and Electronics EngineersIEEESHI, J.; TOMASI. Good features to track. In: Proceedings of IEEE Conference on Computer Vision and Pattern Recognition CVPR-94. Institute of Electrical and Electronics Engineers (IEEE), 1994. Disponível em: <http://dx.doi.org/10.1109/CVPR.1994.323794>. Citado na página 88.
Introduction to Autonomous Mobile Robots. R Siegwart, I Nourbakhsh, Massachusetts Institute of Technology. 20SIEGWART, R.; NOURBAKHSH, I. R. Introduction to Autonomous Mobile Robots. [S.l.]: Massachusetts Institute of Technology, 2004. Citado 2 vezes nas páginas 17 e 20.
Sourceforge. Win32, Disk Imager / Wiki / Home. 2017. Disponível em: <https. SOURCEFORGE. Win32 Disk Imager / Wiki / Home. 2017. Disponível em: <https:
Visão Computacional Usando OpenCV. Stringhini Ilana De Almeida Souza, L A D S E M M , 49Universidade Presbiteriana MackenzieS.l.STRINGHINI ILANA DE ALMEIDA SOUZA, L. A. d. S. e. M. M. D. Visão Computacional Usando OpenCV. [S.l.]: Universidade Presbiteriana Mackenzie, 2011. Citado na página 49.
Laser Measurement Key Technologies and Application in Robot Autonomous Navigation. J Sui, http:/www.worldscientific.com/doi/abs/10.1142/S02180014110089200218-0014. DisponívelInternational Journal of Pattern Recognition and Artificial Intelligence. SUI, J. et al. Laser Measurement Key Technologies and Application in Robot Autonomous Navigation. International Journal of Pattern Recognition and Artificial Intelligence, v. 25, n. 07, p. 1127-1146, 2011. ISSN 0218-0014. Disponível em: <http://www.worldscientific.com/doi/abs/10.1142/S0218001411008920>. Citado na página 20.
Computer Vision. R Szeliski, Springer LondonSZELISKI, R. Computer Vision. Springer London, 2010. Disponível em: <http: //www.ebook.de/de/product/19111262/richard_szeliski_computer_vision.html>. Citado na página 21.
. O D Team, About | Opencv, TEAM, O. D. ABOUT | OpenCV. 2016. Disponível em: <http://opencv.org/about.html>. Citado na página 83.
OpenCV 3.2 | OpenCV. O D Team, TEAM, O. D. OpenCV 3.2 | OpenCV. 2017. Disponível em: <http://opencv.org/opencv-3-2.
Texas Instruments Incorporated. L293x Quadruple Half-H Drivers. Texas Instruments Incorporated. 21Texas Instruments Incorporated. L293x Quadruple Half-H Drivers. Texas Instruments Incorporated, p. 21, 2016. Disponível em: <http://www.ti.com/lit/ds/symlink/l293.pdf>. Citado na página 41.
Pattern Recognition. K S K Theodoridis, Elsevier Academic Press4th. ed. [S.l.. Citado 3 vezes nas páginas 33, 53 e 90THEODORIDIS, K. S. K. Pattern Recognition. 4th. ed. [S.l.]: Elsevier Academic Press, 2003. Citado 3 vezes nas páginas 33, 53 e 90.
WALT, S. van der; COLBERT, S. C.; VAROQUAUX, G. The NumPy array: A structure for efficient numerical computation. Computing in Science & Engineering, Institute of Electrical and Electronics Engineers (IEEE), v. 13, n. 2, p. 22-30, mar 2011. Citado na página 84.
WANG, L.; CHEN, F.; YIN, H. Detecting and tracking vehicles in traffic by unmanned aerial vehicles. Automation in Construction, n. May, 2015. ISSN 09265805. Citado na página 17.
Finally, the Scikit-Learn machine learning library (PEDREGOSA et al., 2011) was installed on the Raspberry Pi; it provides tools relevant to the development of the system, such as the implementations of PCA (HOTELLING, 1933) and of the SVM (Chih-Wei Hsu, Chih-Chung Chang; LIN, 2008).
APPENDIX B - Evaluation of the Raspberry Pi 3 Model B
To evaluate the performance of the Raspberry Pi 3 Model B and analyze its viability for this project, performance tests were designed based on the image-capture frequency of the camera and on the execution of algorithms frequently used in computer vision applications, which are implemented in OpenCV. The tests consisted of measuring the time taken by the Raspberry Pi to execute the following operations:
• Capture an image;
• Compute the DFT;
• Apply the Canny filter;
• Estimate the optical flow.
The first test measured the time taken by the platform to capture 120 images with the camera used in this work, in order to estimate the Raspberry Pi's average capture rate. This test is relevant because, although the camera in question can capture up to 30 frames per second, the final rate of images captured by the Raspberry Pi can be influenced by the latency of the USB connection, by process scheduling in the system, and by other factors. The second test measured the time to compute the DFT (Discrete Fourier Transform) over a video containing 60 frames; this experiment was run 50 times and the average time taken by the platform was collected. The third test evaluated the Raspberry Pi's performance when preprocessing a video containing 480 frames and applying the Canny filter to it, a high-pass filter widely used for edge detection; the time taken to convert each frame to grayscale, apply a Gaussian low-pass filter, and then the Canny filter was measured, and the average processing time per frame was stored. Finally, the last experiment measured the optical-flow estimation time over a video containing 230 images, using the Lucas-Kanade algorithm on a distribution of 100 fixed points; the average time per frame was collected.
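A minimal sketch of how such per-frame timings can be collected with OpenCV is given below; it is an illustrative reconstruction, not the benchmark code actually used, and the video path is a placeholder.

```python
# Illustrative per-frame timing measurement with OpenCV: each operation is timed
# over all frames of a video and the mean frames-per-second is reported.
import time
import cv2

def mean_fps(video_path, operation):
    cap = cv2.VideoCapture(video_path)
    times = []
    ok, frame = cap.read()
    while ok:
        t0 = time.time()
        operation(frame)
        times.append(time.time() - t0)
        ok, frame = cap.read()
    cap.release()
    return len(times) / sum(times) if times else 0.0

# Example: average FPS of the Canny filter over a test video (path is a placeholder).
print(mean_fps("test.avi", lambda f: cv2.Canny(cv2.cvtColor(f, cv2.COLOR_BGR2GRAY), 50, 150)))
```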
In addition to running them on the Raspberry Pi, these tests were also executed on a desktop computer whose main specifications are: an Intel Core i7 4470K 3.5 GHz processor, 16 GB of 800 MHz memory, a 2 TB 6 Gb/s drive, and the Windows 8.1 operating system. After executing each test,
| [] |
[
"A SUFFICIENT CONDITION FOR THE LOWER SEMICONTINUITY OF NONLOCAL SUPREMAL FUNCTIONALS IN THE VECTORIAL CASE",
"A SUFFICIENT CONDITION FOR THE LOWER SEMICONTINUITY OF NONLOCAL SUPREMAL FUNCTIONALS IN THE VECTORIAL CASE"
] | [
"Giuliano Gargiulo ",
"Elvira Zappale "
] | [] | [] | In this note we present a sufficient condition ensuring lower semicontinuity for nonlocal supremal functionals of the typewhere Ω is a bounded open subset of R N and W : Ω × Ω × R d×N × R d×N → R. MSC (2020): 49J45 (primary); 26B25 | null | [
"https://export.arxiv.org/pdf/2305.14801v1.pdf"
] | 258,866,216 | 2305.14801 | e8c2cd91d8e49810630da9499abc6d003f9c0164 |
A SUFFICIENT CONDITION FOR THE LOWER SEMICONTINUITY OF NONLOCAL SUPREMAL FUNCTIONALS IN THE VECTORIAL CASE
24 May 2023
Giuliano Gargiulo
Elvira Zappale
A SUFFICIENT CONDITION FOR THE LOWER SEMICONTINUITY OF NONLOCAL SUPREMAL FUNCTIONALS IN THE VECTORIAL CASE
24 May 2023. Keywords: nonlocality, supremal functionals, lower semicontinuity, Young measures. Date: May 25, 2023
In this note we present a sufficient condition ensuring lower semicontinuity for nonlocal supremal functionals of the type u ↦ ess sup_{(x,y)∈Ω×Ω} W(x, y, ∇u(x), ∇u(y)), where Ω is a bounded open subset of R^N and W : Ω × Ω × R^{d×N} × R^{d×N} → R. MSC (2020): 49J45 (primary); 26B25
Introduction
In recent years a great deal of attention has been devoted to nonlocal functionals, both in the integral and in the supremal setting, due to the many applications to peridynamics, machine learning, image processing, etc. [1,5,6,7,9,14,18], and to L^∞ variational problems, e.g. [2,11,12,15,19,27], among a wide literature.
Motivated by the Direct Methods in the Calculus of Variations, the study of necessary and sufficient conditions ensuring lower semicontinuity of such functionals has been conducted in many papers, see [8,10,21,22,23,24,25].
In particular, given a bounded open set Ω ⊂ R^N, in [22] characterizing conditions for the sequential lower semicontinuity in L^∞(Ω; R^d) of the functional G : L^∞(Ω; R^d) → R defined as
G(v) := ess sup_{(x,y)∈Ω×Ω} W(v(x), v(y))    (1.1)
have been provided. Furthermore, in [21] necessary and sufficient assumptions on the supremand W have been determined to ensure that, in the absence of lower semicontinuity, the sequentially weakly* lower semicontinuous envelope of G has the same form, i.e. it can be expressed as a double supremal functional. We also emphasize that [21] contains a power-law approximation result for functionals as in (1.1), which, on the other hand, also appear in their inhomogeneous version in the context of image denoising (cf. [14]). Unfortunately, analogous results are not available in the context where the fields v satisfy some differential constraint, in particular when v(x) = ∇u(x), with u ∈ W^{1,∞}(Ω; R^d). In this paper we will show that a sufficient condition for the functional
W^{1,∞}(Ω; R^d) ∋ u ↦ ess sup_{(x,y)∈Ω×Ω} W(x, y, ∇u(x), ∇u(y))
to be weakly* sequentially lower semicontinuous in W^{1,∞}(Ω; R^{d×N}) is the separate curl Young quasiconvexity in the second set of variables. The notion of curl Young quasiconvexity has been introduced in [2] as a sufficient condition for the sequential weak* lower semicontinuity of functionals of the type ess sup_{x∈Ω} f(x, ∇u(x)) in W^{1,∞}(Ω; R^d) (see also [13] for a similar notion suited for L^p approximation of supremal functionals, and [28] for the setting adopted in this paper).
Let Q be the unit cube ]0, 1[^N, and let f : R^{d×N} → R be a lower semicontinuous function, bounded from below. f is curl Young quasiconvex if
f( ∫_{R^{d×N}} ξ dν_x(ξ) ) ≤ ess sup_{y∈Q} ( ν_y-ess sup_{ξ∈R^{d×N}} f(ξ) ),   for L^N-a.e. x ∈ Q,
whenever ν ≡ {ν_x}_{x∈Q} is a W^{1,∞}-gradient Young measure (see [20] for the introduction, [29] and [17] for a comprehensive description). For the readers' convenience we just say that Young measures encode information on the oscillation behaviour of weakly converging sequences. For a more detailed introduction to the topic, we refer to the broad literature, e.g. [16, Chapter 8], [23], [29, Section 4].
In general dimensions l, m, n ∈ N, we denote by M(R^l) the set of bounded Radon measures and by Pr(R^l) its subset of probability measures.
Let U ⊂ R^n be a Lebesgue measurable set with finite measure. By definition, a Young measure ν = {ν_x}_{x∈U} is an element of the space L^∞_w(U; M(R^m)) of essentially bounded, weakly* measurable maps U → M(R^m), which is isometrically isomorphic to the dual of L^1(U; C_0(R^m)), such that ν_x := ν(x) ∈ Pr(R^m) for L^n-a.e. x ∈ U. One calls ν homogeneous if there is a measure ν_0 ∈ Pr(R^m) such that ν_x = ν_0 for L^n-a.e. x ∈ U. A sequence (z_j)_j of measurable functions z_j : U → R^m is said to generate a Young measure ν ∈ L^∞_w(U; Pr(R^m)) if for every h ∈ L^1(U) and ϕ ∈ C_0(R^m),
lim_{j→∞} ∫_U h(x) ϕ(z_j(x)) dx = ∫_U h(x) ( ∫_{R^m} ϕ(ξ) dν_x(ξ) ) dx = ∫_U h(x) ⟨ν_x, ϕ⟩ dx,
or, equivalently, ϕ(z_j) ⇀* ⟨ν_x, ϕ⟩ for all ϕ ∈ C_0(R^m); in formulas, z_j → ν in the sense of Young measures as j → ∞, with ⟨·, ·⟩ denoting the duality product between probability measures and continuous functions C_0(R^m).
To keep the brevity of this article we omit the fundamental theorem for Young measures, for which we refer to [16, Theorems 8.2 and 8.6] and [29, Theorem 4.1, Proposition 4.6]. We also recall that if (z_j)_j ⊂ L^p(U; R^m) (p ∈ (1, +∞]) generates a Young measure ν and converges weakly(*) in L^p(U; R^m) to a limit function u, then [ν_x] := ⟨ν_x, id⟩ = ∫_{R^m} ξ dν_x(ξ) = u(x) for L^n-a.e. x ∈ U.
In the sequel we will mainly restrict ourselves to gradient Young measures: with U := Ω ⊂ R^N a bounded open set and m = N × d, a W^{1,∞}-gradient Young measure (see [20]) is a Young measure generated by a sequence (∇u_j)_j with u_j ∈ W^{1,∞}(Ω; R^d).
For our purposes, we also recall that in [28, Proposition 4.3 and Remark 4.4] curl Young quasiconvexity has been characterized as follows.
f is curl Young quasiconvex if and only if it verifies
f( ∫_{R^{d×N}} ξ dν(ξ) ) ≤ ν-ess sup_{ξ∈R^{d×N}} f(ξ)
whenever ν is a W 1,∞ -gradient Young measure.
Lower semicontinuity
The notion which will play a crucial role for us is the separate curl Young quasiconvexity.
Definition 2.1. Let W : R^{d×N} × R^{d×N} → R be a lower semicontinuous function. W is said to be separately curl Young quasiconvex if
W([ν], [µ]) ≤ (ν ⊗ µ)-ess sup_{(ξ,ζ)∈R^{d×N}×R^{d×N}} W(ξ, ζ) = ν-ess sup_{ξ∈R^{d×N}} ( µ-ess sup_{ζ∈R^{d×N}} W(ξ, ζ) )    (2.1)
    = µ-ess sup_{ζ∈R^{d×N}} ( ν-ess sup_{ξ∈R^{d×N}} W(ξ, ζ) )
for every pair of W^{1,∞}-gradient Young measures ν, µ. If W : Ω × Ω × R^{d×N} × R^{d×N} → R is a normal integrand bounded from below, then it is said to be separately curl Young quasiconvex if W(x, y, ·, ·) is separately curl Young quasiconvex for L^N ⊗ L^N-a.e. (x, y) ∈ Ω × Ω.
A key tool for the proof of our result is the following lemma, first stated in [3] in the continuous and homogeneous case, and then proved in its current version in [28].
Lemma 2.2. Let U ⊂ R^n be an open set with finite measure and let f : U × R^m → R be a normal integrand bounded from below. Further, let (u_k) be a uniformly bounded sequence of functions in L^∞(U; R^m) generating a Young measure ν = {ν_x}_{x∈U}. Then,
lim inf_{k→∞} ess sup_{x∈U} f(x, u_k(x)) ≥ ess sup_{x∈U} f̄(x),   where f̄(x) := ν_x-ess sup_{ξ∈R^m} f(x, ξ) for x ∈ U.
With the aim of analyzing nonlocal problems, in [22] to any function u ∈ L^1(Ω; R^m) it has been associated the vector field
w_u(x, y) := (u(x), u(y)) for (x, y) ∈ Ω × Ω.    (2.2)
In the sequel we will consider nonlocal fields w_{∇u}(x, y) = (∇u(x), ∇u(y)) for (x, y) ∈ Ω × Ω.
The following lemma, which was established by Pedregal in [23, Proposition 2.3], gives a characterization of Young measures generated by sequences as in (2.2).
Lemma 2.3. Let (u j ) j ⊂ L p (Ω; R m ) with 1 ≤ p ≤ ∞ generate a Young measure ν = {ν x } x∈Ω , and let Λ = {Λ (x,y) } (x,y)∈Ω×Ω be a family of probability measures on R m × R m .
Then Λ is the Young measure generated by the sequence (w uj ) j ⊂ L p (Ω × Ω; R m × R m ) defined according to (2.2) if and only if
Λ_{(x,y)} = ν_x ⊗ ν_y for L^N ⊗ L^N-a.e. (x, y) ∈ Ω × Ω and, in addition, ∫_Ω ∫_{R^m} |ξ|^p dν_x(ξ) dx < ∞ if p < ∞, while supp ν_x ⊂ K for L^N-a.e. x ∈ Ω, with a fixed compact set K ⊂ R^m, if p = ∞.
Remark 2.4. The class of separately curl Young quasiconvex functions is not empty, since any separately level convex function is separately curl Young quasiconvex; indeed, in [22, Lemma 3.5 (iv)] it has been proven that any Borel function W whose sublevel sets are separately convex (i.e. W is separately level convex) satisfies (2.1) for every ν, µ ∈ Pr(R^{d×N}). On the other hand, the notions are not equivalent, as we can see by considering the function W : R^{2×2} × R^{2×2} → [0, +∞] defined as W(ξ, η) = (sup{h(|ξ|), k(ξ)}) (sup{h(|η|), k(η)}), with h and k as in [2, Example 6.7], namely k(Σ) := arctan(det Σ) and h(t) = 0 if t ≤ 1, h(t) = t − 1 if 1 ≤ t ≤ 2, h(t) = 1 if t ≥ 2. Indeed, for any fixed η or ξ, the function W(·, η) or W(ξ, ·) turns out to be curl Young quasiconvex but not generally level convex.
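For the reader's convenience, here is a brief sketch of the standard barycenter argument behind the first claim of the remark (separately level convex implies separately curl Young quasiconvex); it is only an outline, and the precise statement is the one of [22, Lemma 3.5 (iv)].
\[
\text{Let } f \text{ be lower semicontinuous and level convex, and set } t^\ast := \nu\text{-}\operatorname*{ess\,sup}_{\xi} f(\xi) < +\infty.
\]
The sublevel set $E := \{\xi : f(\xi) \le t^\ast\}$ is closed and convex with $\nu(E) = 1$, hence $\operatorname{supp}\nu \subseteq E$ and the barycenter $[\nu] = \int \xi\, d\nu(\xi)$ belongs to $E$, so that
\[
f\Big(\int_{\mathbb{R}^{d\times N}} \xi \, d\nu(\xi)\Big) \;\le\; \nu\text{-}\operatorname*{ess\,sup}_{\xi \in \mathbb{R}^{d\times N}} f(\xi).
\]
Applying this bound first in the variable $\xi$ (for fixed second entry) and then in $\zeta$ yields (2.1) for every separately level convex $W$ and every pair $\nu, \mu \in \Pr(\mathbb{R}^{d\times N})$ admitting a barycenter.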
We are in a position to establish our main result.
Theorem 2.5. Let W : Ω × Ω × R^{d×N} × R^{d×N} → R be a normal integrand, bounded from below and such that W(x, y, ·, ·) is separately curl Young quasiconvex for L^N ⊗ L^N-a.e. (x, y) ∈ Ω × Ω. Let F : W^{1,∞}(Ω; R^d) → R be the functional defined by
F(u) = ess sup_{(x,y)∈Ω×Ω} W(x, y, ∇u(x), ∇u(y)).    (2.3)
Then the functional F is sequentially weakly* lower semicontinuous in W^{1,∞}(Ω; R^d).
Remark 2.6. We observe that this result extends to the non-homogeneous and differential setting [22,Proposition 3.6].
The same proof could be used to show that separate level convexity of W (x, y, ·, ·) for L N ⊗ L N -a.e. (x, y) ∈ Ω × Ω is sufficient to guarantee the sequential weak* lower semicontinuity in L ∞ (Ω; R d ) of ess sup (x,y)∈Ω×Ω W (x, y, u(x), u(y)).
Nevertheless, as proven in the latter setting under homogeneity assumptions, we conjecture that separate curl Young quasiconvexity is not 'really' necessary for the sequential lower semicontinuity of the functional in (2.3). On the one hand, some symmetry of W should be taken into account (cf. the notions of Cartesian separate level convexity in [22, 21]); on the other hand, it is worth observing that even in the local setting the necessity of curl Young quasiconvexity for the sequential weak* lower semicontinuity of ess sup_{x∈Ω} f(∇u(x)) is currently an open question, namely it is not known, in general, whether curl Young quasiconvexity is equivalent to the Strong Morrey quasiconvexity introduced in [4], except for some particular cases such as those considered in [2] and [26].
Finally, we also point out that, under suitable continuity conditions on the second set of variables for W , our arguments could be successfully employed to prove the lower semicontinuity of nonlocal supremal functionals under more general differential constraints than curl.
Proof. The result follows from Lemmas 2.2 and 2.3 and Definition 2.1. Without loss of generality we can assume that W is nonnegative.
Let (w_{∇u_j})_j ⊂ W^{1,∞}(Ω × Ω; R^{d×N} × R^{d×N}) be the sequence of nonlocal vector fields associated with (∇u_j)_j, cf. (2.2), and let Λ = {Λ_{(x,y)}}_{(x,y)∈Ω×Ω}, Λ_{(x,y)} = ν_x ⊗ ν_y for x, y ∈ Ω, be the generated W^{1,∞}-gradient Young measure according to Lemma 2.3. Lemma 2.2 implies that
lim inf_{j→∞} F(u_j) = lim inf_{j→∞} ess sup_{(x,y)∈Ω×Ω} W(x, y, ∇u_j(x), ∇u_j(y)) ≥ ess sup_{(x,y)∈Ω×Ω} W̄(x, y),    (2.4)
where W̄(x, y) := Λ_{(x,y)}-ess sup_{(ξ,ζ)∈R^{d×N}×R^{d×N}} W(ξ, ζ). By Lemma 2.3,
W̄(x, y) = ν_x ⊗ ν_y-ess sup_{(ξ,ζ)∈R^{d×N}×R^{d×N}} W(ξ, ζ) = ν_x-ess sup_{ξ∈R^{d×N}} ( ν_y-ess sup_{ζ∈R^{d×N}} W(ξ, ζ) )
for L^N ⊗ L^N-a.e. (x, y) ∈ Ω × Ω, and since W is separately curl Young quasiconvex, it follows that
W̄(x, y) ≥ W(x, y, [ν_x], [ν_y]) = W(x, y, ∇u(x), ∇u(y))    (2.5)
for L^N ⊗ L^N-a.e. (x, y) ∈ Ω × Ω. The proof follows from (2.4) and (2.5).
Conclusions
In this paper we provide a sufficient condition for the lower semicontinuity of nonlocal supremal functionals depending on the gradients of suitable Lipschitz fields. We conjecture that this notion is also suitable to provide an L p approximation result in the spirit of what is proven for L ∞ fields in [21]. This latter study and the search for necessary conditions will be the subject of future research. We conclude observing that analogous results in the case of nonlocal integral functionals, depending on the gradient of scalar fields, can be found in [10].
Acknowledgements. GG and EZ gratefully acknowledge support from INdAM GNAMPA. The authors do not have any conflict of interest with third parties, and they both contributed in the same way to the redaction of the manuscript. Data availability statement: not applicable.
Bilevel optimization, deep learning and fractional Laplacian regularization with applications in tomography. H Antil, Z W Di, R Khatri, Inverse Problems. 36622H. Antil, Z. W. Di ans R. Khatri, Bilevel optimization, deep learning and fractional Laplacian regular- ization with applications in tomography, Inverse Problems , 36, 2020, No. 6, pp. 22.
On the lower semicontinuity of supremal functional under differential constraints. N Ansini, F Prinari, ESAIM Control Optim. Calc. Var. 214N. Ansini and F. Prinari, On the lower semicontinuity of supremal functional under differential con- straints, ESAIM Control Optim. Calc. Var. 21, 2015, No. 4, pp. 1053-1075.
Viscosity solutions and analysis in L ∞ , in Nonlinear analysis, differential equations and control. E N Barron, Proceedings of the NATO Advanced Study Institute and séminaire de mathématiques supérieures. the NATO Advanced Study Institute and séminaire de mathématiques supérieuresMontréal, Canada; DordrechtKluwer Acad. PublE. N. Barron, Viscosity solutions and analysis in L ∞ , in Nonlinear analysis, differential equations and control. Proceedings of the NATO Advanced Study Institute and séminaire de mathématiques supérieures, Montréal, Canada, July 27-August 7, 1998, Kluwer Acad. Publ., Dordrecht, (1999), pp. 1-60.
Lower semicontinuity of L ∞ functionals. E N Barron, R R Jensen, C Y Wang, Ann. Inst. H. Poincaré Anal. Non Linéaire. 18E. N. Barron, R. R. Jensen, and C.Y. Wang, Lower semicontinuity of L ∞ functionals, Ann. Inst. H. Poincaré Anal. Non Linéaire, 18 (2001), pp. 495-517.
Γ-convergence of polyconvex functionals involving s-fractional gradients to their local counterparts. J C Bellido, J Cueto, C Mora-Corral, Calc. Var. Partial Differential Equations. 6029J. C. Bellido, J. Cueto and C. Mora-Corral, Γ-convergence of polyconvex functionals involving s-fractional gradients to their local counterparts., Calc. Var. Partial Differential Equations, 60, (2021), No. 1, Paper No. 7, 29.
C Bond-based peridynamics does not converge to hyperelasticity as the horizon goes to zero. J C Bellido, J Cueto, C Mora-Corral, J. Elasticity. 1412J. C. Bellido, J. Cueto and C. Mora-Corral, C Bond-based peridynamics does not converge to hypere- lasticity as the horizon goes to zero, J. Elasticity, 141, 2020, No. 2, pp. 273-289.
Fractional Piola identity and polyconvexity in fractional spaces. J C Bellido, J Cueto, C Mora-Corral, Ann. Inst. H. Poincaré C Anal. Non Linéaire. 374J. C. Bellido, J. Cueto and C. Mora-Corral, Fractional Piola identity and polyconvexity in fractional spaces, Ann. Inst. H. Poincaré C Anal. Non Linéaire, 37, 2020, No. 4, pp. 955-981.
Lower semicontinuity and relaxation via Young measures for nonlocal variational problems and applications to peridynamics. J C Bellido, C Mora-Corral, SIAM J. Math. Anal. 501J.C. Bellido, and C. Mora-Corral, Lower semicontinuity and relaxation via Young measures for nonlocal variational problems and applications to peridynamics, SIAM J. Math. Anal., 50, 2018, No. 1, pp. 779-809.
Hyperelasticity as a Γ-limit of peridynamics when the horizon goes to zero. J C Bellido, C Mora-Corral, P , Calc. Var. Partial Differential Equations. 542J. C. Bellido, C. Mora-Corral and P. Pedregal, Hyperelasticity as a Γ-limit of peridynamics when the horizon goes to zero, Calc. Var. Partial Differential Equations, 54, 2015, No. 2, 1643-1670.
A necessary and sufficient condition for the weak lower semicontinuity of onedimensional non-local variational integrals. J Bevan, P , Proc. Roy. Soc. Edinburgh Sect. A. 1364J. Bevan, P. Pedregal, A necessary and sufficient condition for the weak lower semicontinuity of one- dimensional non-local variational integrals, Proc. Roy. Soc. Edinburgh Sect. A., 136, 2006, No. 4, 701- 708.
Vectorial variational principles in L ∞ and their characterisation through PDE systems. A Birzhan, N Katzourakis, Appl. Math. Optim. 832A. Birzhan and N. Katzourakis, Vectorial variational principles in L ∞ and their characterisation through PDE systems, Appl. Math. Optim.,83, 2021, No. 2, 833-848.
Vectorial variational problems in L ∞ constrained by the Navier-Stokes equations. E Clark, N Katzourakis, B Muha, Nonlinearity. 351E. Clark, N. Katzourakis, and B. Muha, Vectorial variational problems in L ∞ constrained by the Navier-Stokes equations, Nonlinearity, 35, (2022) No. 1, 470-491,
Γ convergence and absolute minimizers for supremal functionals. ESAIM: Control, Optimisation and Calculus of Variations. T Champion, L De Pascale, F Prinari, 10T. Champion, L. De Pascale, and F. Prinari, Γ convergence and absolute minimizers for supremal functionals. ESAIM: Control, Optimisation and Calculus of Variations, 10, (2004), pp. 14-27.
Structural changes in nonlocal denoising models arising through bi-level parameter learning. E Davoli, R Ferreira, C Kreisbeck, H Shoenberger, E. Davoli, R. Ferreira, C. Kreisbeck and H. Shoenberger, Structural changes in nonlocal denoising models arising through bi-level parameter learning https://arxiv.org/abs/2209.06256.
Γ-convergence for power-law functionals with variable exponents. M Eleuteri, F Prinari, Nonlinear Anal. Real World Appl. 58Paper No. 103221, 21M. Eleuteri, and F. Prinari, Γ-convergence for power-law functionals with variable exponents, Nonlinear Anal. Real World Appl., 58, (2021), Paper No. 103221, 21.
I Fonseca, G Leoni, Modern Methods in the Calculus of Variations: L p Spaces. New YorkSpringer-Verlag599Springer Monographs in MathematicsI. Fonseca and G. Leoni, Modern Methods in the Calculus of Variations: L p Spaces, Springer Mono- graphs in Mathematics, Springer-Verlag, New York, 2007, pp. xiv+599.
Modern Methods in the Calculus of Variations: Sobolev Spaces. I Fonseca, G Leoni, in preparationI. Fonseca and G. Leoni, Modern Methods in the Calculus of Variations: Sobolev Spaces, in preparation.
Learning nonlocal regularization operators. G Holler, K Kunisch, Math. Control Relat. Fields. 12G. Holler, and K. Kunisch, Learning nonlocal regularization operators, Math. Control Relat. Fields, 12, 2022, 1, pp. 81-114.
A minimisation problem in L ∞ with PDE and unilateral constraints. N Katzourakis, ESAIM Control Optim. Calc. Var. 2627Paper No. 60N. Katzourakis, A minimisation problem in L ∞ with PDE and unilateral constraints, ESAIM Control Optim. Calc. Var., 26, (2020, Paper No. 60, 27.
Pedregal Characterizations of Young measures generated by gradients. D Kinderlehrer, P , Arch. Rational Mech. Anal. 4D. Kinderlehrer, P. Pedregal Characterizations of Young measures generated by gradients, Arch. Ra- tional Mech. Anal. 4, 115 (1991), pp. 329-365.
Zappale Cartesian convexity as the key notion in the variational existence theory for nonlocal supremal functionals. C Kreisbeck, A Ritorto, E , 113111Nonlinear Anal. 225C. Kreisbeck, A. Ritorto, E. Zappale Cartesian convexity as the key notion in the variational existence theory for nonlocal supremal functionals, Nonlinear Anal., 225, (2022), No. 113111.
Lower Semicontinuity and Relaxation of Nonlocal L ∞ functionals. C Kreisbeck, E Zappale, Calc. Var. Partial Differential Equations. 594ppC. Kreisbeck and E. Zappale, Lower Semicontinuity and Relaxation of Nonlocal L ∞ functionals, Calc. Var. Partial Differential Equations 59, 138 (2020), n. 4, 36 pp.
Nonlocal variational principles. P , Nonlinear Anal. 2912P. Pedregal, Nonlocal variational principles, Nonlinear Anal., 29, 1997, No. 12, pp. 1379-1392.
Weak lower semicontinuity and relaxation for a class of non-local functionals. P , Rev. Mat. Complut. 293P. Pedregal, Weak lower semicontinuity and relaxation for a class of non-local functionals, Rev. Mat. Complut., 29, 2016, No. 3, pp. 485-495.
On non-locality in the calculus of variations. P , SeMA J. 784P. Pedregal, On non-locality in the calculus of variations, SeMA J., 78, 2021, No. 4, pp. 435-456.
A relaxation result in the vectorial setting and power law approximation for supremal functionals. F Prinari, E Zappale, J. Optim. Theory Appl. 1862F. Prinari and E. Zappale, A relaxation result in the vectorial setting and power law approximation for supremal functionals. J. Optim. Theory Appl., 186, 2020, 2, pp. 412-452.
Existence of minimizers for nonlevel convex supremal functionals. A M Ribeiro, E Zappale, SIAM J. Control Optim. 52A. M. Ribeiro and E. Zappale, Existence of minimizers for nonlevel convex supremal functionals, SIAM J. Control Optim. 52, 2014, 5, pp. 3341-3370.
. A M Ribeiro, E Zappale, in preparationA. M. Ribeiro and E. Zappale, in preparation
F. Rindler, Calculus of variations, Universitext, Springer, Cham, 2018, xii+444.
Dipartimento di Scienze e Tecnologie, Universitá degli studi del Sannio, via de Sanctis, 82100, Benevento, Italy. Email address: [email protected]
Dipartimento di Scienze di Base ed Applicate per l'Ingegneria, Sapienza, Universitá di Roma, via Antonio Scarpa 16, 00161 Roma, Italy. *corresponding author. Email address: [email protected]
| [] |
[
"Global Existence of Strong Solutions and Serrin-Type Blowup Criterion for 3D Combustion Model in Bounded Domains",
"Global Existence of Strong Solutions and Serrin-Type Blowup Criterion for 3D Combustion Model in Bounded Domains"
] | [
"Jiawen Zhang \nSchool of Mathematical Sciences\nUniversity of Chinese Academy of Sciences\n100049BejingP.R. China\n"
] | [
"School of Mathematical Sciences\nUniversity of Chinese Academy of Sciences\n100049BejingP.R. China"
] | [] | The combustion model is studied in three-dimensional (3D) smooth bounded domains with various types of boundary conditions. The global existence and uniqueness of strong solutions are obtained under the smallness of the gradient of initial velocity in some precise sense. Using the energy method with the estimates of boundary integrals, we obtain the a priori bounds of the density and velocity field. Finally, we establish the blowup criterion for the 3D combustion system. | null | [
"https://export.arxiv.org/pdf/2302.06270v2.pdf"
] | 256,827,776 | 2302.06270 | 23b517203747b7d1ab7b37f032c344c388bc10f0 |
Global Existence of Strong Solutions and Serrin-Type Blowup Criterion for 3D Combustion Model in Bounded Domains
14 Feb 2023
Jiawen Zhang
School of Mathematical Sciences
University of Chinese Academy of Sciences
100049BejingP.R. China
Global Existence of Strong Solutions and Serrin-Type Blowup Criterion for 3D Combustion Model in Bounded Domains
14 Feb 2023. Keywords: 3D combustion model, Dirichlet boundary conditions, slip boundary conditions, global strong solutions, Serrin's condition
The combustion model is studied in three-dimensional (3D) smooth bounded domains with various types of boundary conditions. The global existence and uniqueness of strong solutions are obtained under the smallness of the gradient of initial velocity in some precise sense. Using the energy method with the estimates of boundary integrals, we obtain the a priori bounds of the density and velocity field. Finally, we establish the blowup criterion for the 3D combustion system.
Introduction
In this paper, we assume that Ω is a simply connected bounded domain in R^3 with smooth boundary and investigate the following system in Ω:
ρ_t + div(ρu) = 0,  ρ ≥ 0,
(ρu)_t + div(ρu ⊗ u) − div[2µ(ρ)D(u)] + ∇π = 0,
div u = c_0 ∆ψ(ρ),  ψ(ρ) := ρ^{-1},    (1.1)
where u = (u^1, u^2, u^3), ρ and π stand for the unknown velocity field, density and pressure, respectively, c_0 > 0 is a fixed constant, and µ is a positive function with
µ(s) ∈ C^∞(0, ∞).    (1.2)
The deformation tensor D(u) is denoted by
D(u) = (1/2)(∇u + (∇u)^t) = (1/2)(∂_i u^j + ∂_j u^i),  1 ≤ i, j ≤ 3.    (1.3)
The system is equipped with the initial data
u(x, 0) = u_0(x),  ρ(x, 0) = ρ_0(x),  x ∈ Ω,    (1.4)
and one of the following boundary conditions:
n · ∇ρ = 0,  u · n = 0,  curl u × n = −B · u  on ∂Ω × (0, T),    (A)
where B = B(x) is a smooth positive semi-definite matrix, or
n · ∇ρ = 0,  u = 0  on ∂Ω × (0, T).    (B)
Combustion model is the low Mach number limit of the fully compressible Navier-Stokes equations, see [25], and it is tightly linked with the non-homogeneous incompressible Navier-Stokes equations (taking c 0 = 0) and the homogeneous one (taking ρ be a constant). There are lots of works studying the combustion model (1.1) and the problems associated with it. The study of the system (1.1), which has been introduced by A. Majda [27], can date back to the 1980s. P. Embid [14] has proved the local-in-time well-posedness for classical solutions of the system (1.1) with the periodic boundary condition. Also, the local well-posedness was considered by H. B. da Veiga [12] with (1.1) 3 replaced by Fick's law ψ(ρ) = log ρ. Danchin-Liao [13] established the local well-posedness in critical homogeneous Besov spaces under some smallness assumptions and that in non-homogeneous Besov space for arbitrarily large data.
For the global-in-time existence of weak and strong solutions of (1.1) and relative problems, P. Secchi [30] proved that there exists a unique global strong solution in the two-dimensional domain providing the diffusion coefficient c 0 is small enough. They also considered the limiting behavior of the solutions when c 0 → 0 + for 2D and 3D case and the convergence towards the corresponding solutions of non-homogeneous incompressible Navier-Stokes equations. Another remarkable work comes from P. Lions [26] where he has shown the global existence of weak solutions only under a small perturbation of a constant density without any restriction on the initial velocity. However, in [26], he only gives the proof for R 2 and periodic case. Also in [13], Danchin-Liao proved the existence of solutions in critical homogeneous Besov spaces provided the initial density is closed to a constant and the initial velocity is small enough. For large initial data, Bresch-Essoufi-Sy [5] showed the global existence of the weak solutions for the combustion model in dimensions 2 and 3 by taking µ(ρ) be a specific function c0 2 log ρ and then, in [6], Bresch-Giovangigli-Zatorska relaxed the restriction on µ(ρ) by renormalizing the mass equation. Recently, W. Tan [34] proved the global existence of the weak and strong solutions for the system (1.1) with general coefficient µ(ρ) in (1.1) 2 and ψ(ρ) in (1.1) 3 provided ∇ρ L 2 is small enough.
Another relative model to the system (1.1) is the so-called Kazhikhov-Smagulov type model, see (1.16). In [9,10], Cai-Liao-Sun established the global-in-time existence of strong solutions to the initial-boundary value problem of a 2D Kazhikhov-Smagulov type model for incompressible non-homogeneous fluids with mass diffusion for the arbitrary size of initial data. For other works on the classical Kazhikhov-Smagulov's model, we refer the reader to [2,4].
If the diffusion coefficient c 0 tends to zero, (1.1) may reduce to the general non-homogeneous incompressible Navier-Stokes equations. There are also plenty of works studying it with the general viscosity coefficient µ(ρ), we refer the reader to [1,8,11,16,21,20,25] and the references therein.
In the final part of this paper we focus on the mechanism of blowup and the structure of possible singularities of strong solutions to the Navier-Stokes system. The blowup criterion for Leray-Hopf weak solutions to the 3D incompressible homogeneous Navier-Stokes equations was first given by J. Serrin [31]: if a weak solution u satisfies
u ∈ L^s(0, T; L^r),  2/s + 3/r ≤ 1,  3 < r ≤ ∞,    (1.5)
then it is regular. Later, He-Xin [17] showed that Serrin's criterion (1.5) still holds even in the case of the incompressible MHD equations. For non-homogeneous incompressible Navier-Stokes equations, H. Kim [22] has shown that if (ρ, u) blows up at T*, then
lim_{t→T*} ‖u‖_{L^s(0,T;L^r_w)} = ∞  for all  2/s + 3/r ≤ 1,  3 < r ≤ ∞.    (1.6)
In recent works, X. Zhong [39] obtained a blowup criterion of type (1.5) for the non-homogeneous incompressible heat-conducting Navier-Stokes flows in a bounded domain of R^3. For compressible fluids, we refer the reader to [18,19,36] and references therein. However, the theory for the 3D combustion model with a general viscosity coefficient in a bounded domain is still blank. Therefore, our goal is to obtain the global existence of strong solutions with small initial data and to extend Serrin's blow-up criterion to (1.1).
Before stating the main theorem, let us explain some notation and conventions used throughout the paper. First, we define the strong solutions as follows.
A triple (ρ, u, π) is called a strong solution of (1.1) on Ω × (0, T) if it satisfies the equations (1.1) almost everywhere in Ω × (0, T) and
α ≤ ρ ≤ β,
ρ ∈ C([0, T]; H^2) ∩ L^2(0, T; H^3),  ρ_t ∈ C([0, T]; L^2) ∩ L^2(0, T; L^2),
u ∈ C([0, T]; H^1) ∩ L^2(0, T; H^2),  u_t ∈ L^2(0, T; L^2),  π ∈ L^2(0, T; H^1).    (1.7)
In particular, if (1.7) holds for all T ∈ (0, ∞), we call (ρ, u, π) a global strong solution.
For 1 ≤ p ≤ ∞ and integer numbers k ≥ 1, the standard Sobolev spaces and other functional spaces are defined as follows:
L^p = L^p(Ω),  W^{k,p} = W^{k,p}(Ω),  H^k = W^{k,2},
W^{k,p}_0 = closure of C^∞_0 in the norm of W^{k,p},
‖·‖_{B_1∩B_2} = ‖·‖_{B_1} + ‖·‖_{B_2} for two Banach spaces B_1 and B_2,
H^1_ω := {u ∈ H^1 : u · n = 0, curl u × n = −B · u on ∂Ω}.
Next, we set
∫ f dx := ∫_Ω f dx,   ∫_∂ f := ∫_{∂Ω} f dS,   and   f_Ω := (1/|Ω|) ∫ f,
which is the average of a function f over Ω. The weak, weak* and strong convergence of a sequence {f_n} are denoted, respectively, by
f_n ⇀ f,   f_n ⇀* f,   f_n → f.
Finally, for two 3 × 3 matrices A = {a_{ij}}, B = {b_{ij}}, the symbol A : B represents the trace of AB, that is,
A : B := tr(AB) = Σ_{i,j=1}^{3} a_{ij} b_{ji}.
Now, we give our main theorems. The first theorem concerns the global existence of strong solutions for (1.1) when Ω is a bounded domain.
Theorem 1.2. Suppose that Ω ⊂ R^3 is a simply connected bounded domain with smooth boundary and (ρ_0, u_0) satisfies
0 < α ≤ ρ_0 ≤ β < ∞,  x ∈ Ω,    (1.8)
the compatibility condition
div u_0 = c_0 ∆ρ_0^{-1},  x ∈ Ω,    u_0 · n = c_0 n · ∇ρ_0^{-1},  x ∈ ∂Ω,    (1.9)
and u_0 ∈ H^1_ω if u satisfies the boundary condition (A); u_0 ∈ H^1_0 if u satisfies the boundary condition (B). Then there exists a positive constant δ depending only on Ω, c_0, α and β such that if
‖∇u_0‖_{L^2} ≤ δ    (1.10)
and π satisfies the normalized condition
∫ π = 0,    (1.11)
then the problem (1.1), (1.4) with (A) or (B) admits a unique global strong solution (ρ, u, π).
Next, we give the Serrin-type blowup criterion.
Theorem 1.3. If (ρ, u, π) is a local strong solution on Ω × (0, T*) and T* < ∞ is the maximal time of existence, then
lim_{T→T*} ‖u‖_{L^r(0,T;L^s)} = ∞,    (1.12)
where r and s satisfy the relation
2/s + 3/r ≤ 1,  3 < r ≤ ∞.    (1.13)
In [20, 37], the authors obtain global strong solutions for non-homogeneous incompressible Navier-Stokes equations with a density-dependent viscosity coefficient µ(ρ) and Dirichlet boundary conditions; our result can be seen as an extension from the divergence-free velocity field u, div u = 0, to the non-divergence-free one, that is, (1.1)_3.
Remark 1.6. In our proof of the theorem, we only need ψ(s) ∈ C 3 (0, ∞), thus, more general ψ(ρ) can also be considered under the same assumptions.
Remark 1.7. From the hypothesis of Theorem 1.2, one may notice that we do not impose any information about the regularity of ρ 0 (except for the size restriction (1.8)). This is mainly because of the compatibility condition (1.9). Indeed, for example, if u 0 ∈ H 1 ω , one can solve the following elliptic problem
c_0 ∆ρ_0^{-1} = div u_0,  x ∈ Ω,    n · ∇ρ_0^{-1} = 0,  x ∈ ∂Ω,
from which the regularity of ρ_0 is completely determined by that of u_0. More precisely, we have, for all 1 < p ≤ 6,
‖∇ρ_0‖_{L^p} ≤ C(p) ‖u_0‖_{L^p},    ‖∇ρ_0‖_{H^1} ≤ C ‖∇u_0‖_{L^2}.    (1.14)
Now, we give some comments about the analysis throughout the whole paper. Generally speaking, in order to overcome the fact that u is not divergence free, our proof of Theorem 1.2 is based on two types of decomposition. For the first case, that is, u satisfying the boundary condition (A), we may write, in view of (1.1)_3,
v = u − c_0 ∇ρ^{-1}.    (1.15)
Consequently, using (1.15), the original system (1.1) can be changed into the following Kazhikhov-Smagulov type model,
ρ_t + v · ∇ρ + c_0 ρ^{-2} |∇ρ|^2 − c_0 ρ^{-1} ∆ρ = 0,
(ρv)_t + div(ρv ⊗ v) − div[2µ(ρ)D(v)] + ∇π_1 = c_0 div[2µ(ρ)∇^2 ρ^{-1}] − c_0 div(ρv ⊗ ∇ρ^{-1}) − c_0 div(ρ∇ρ^{-1} ⊗ v) − c_0^2 div(ρ∇ρ^{-1} ⊗ ∇ρ^{-1}),
div v = 0,    (1.16)
where π_1 = π − c_0 (log ρ)_t is a modified pressure. Then, one can find that the mass equation (1.1)_1 becomes a parabolic type one, which provides some higher regularity properties for ρ, and, on the other hand, v is divergence free, which allows us to use some "standard" treatments of the classical incompressible Navier-Stokes equations. Thus, in Section 3, we will mainly discuss the system (1.16) and try to derive the a priori estimates of (ρ, v); a short derivation of (1.16) from (1.1) and (1.15) is sketched below.
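For clarity, we sketch the elementary computation behind (1.16); this is only an outline, based on the identities ∇ρ^{-1} = −ρ^{-2}∇ρ and ∆ρ^{-1} = −ρ^{-2}∆ρ + 2ρ^{-3}|∇ρ|^2.
\[
\operatorname{div} v = \operatorname{div} u - c_0 \Delta \rho^{-1} = 0 \quad \text{by } (1.1)_3,
\]
while substituting $u = v + c_0 \nabla \rho^{-1}$ into the mass equation $(1.1)_1$ gives
\[
0 = \rho_t + \operatorname{div}(\rho u)
  = \rho_t + v\cdot\nabla\rho + c_0\, \nabla\rho^{-1}\cdot\nabla\rho + c_0\, \rho\, \Delta\rho^{-1}
  = \rho_t + v\cdot\nabla\rho + c_0\, \rho^{-2}|\nabla\rho|^2 - c_0\, \rho^{-1}\Delta\rho,
\]
which is $(1.16)_1$; equivalently, since $\Delta \log\rho = \rho^{-1}\Delta\rho - \rho^{-2}|\nabla\rho|^2$, this can be written as $\rho_t + v\cdot\nabla\rho - c_0\, \Delta\log\rho = 0$.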
So, here, we give an explanation of the definition of v_0, the initial value of v, and of the boundary condition related to v. Since we have the compatibility condition (1.9), we can find a unique function v_0 defined by
v_0 := u_0 − c_0 ∇ρ_0^{-1}.    (1.17)
Then, we may impose v_0 as the initial value of v. Of course, in view of the estimates (1.14), v_0 is also controlled by u_0, that is,
‖v_0‖_{L^p} ≤ C(p) ‖u_0‖_{L^p},    ‖∇v_0‖_{L^2} ≤ C ‖∇u_0‖_{L^2}.    (1.18)
For the boundary condition, if u satisfies the condition (A), applying curl to (1.15) implies that v satisfies
curl v × n = −B · (v + c_0 ∇ρ^{-1})  on ∂Ω × (0, T).    (A')
In this case, we say that (ρ, v) or v satisfies the condition (A'). In addition, from (1.17), we can obtain the compatibility condition corresponding to (ρ_0, v_0), that is,
div v_0 = 0,  x ∈ Ω,    v_0 · n = 0,  curl v_0 × n = −B · (v_0 + c_0 ∇ρ_0^{-1}),  x ∈ ∂Ω,    (1.19)
provided u_0 ∈ H^1_ω. To sum up, our sketch of the proof is given by
(ρ_0, u_0)  ==(1.17)==>  (ρ_0, v_0)  ==>  existence of (ρ, v)  ==(1.15)==>  existence of (ρ, u).
Another difficulty in this situation comes from the boundary integrals. To overcome it, we mainly adapt the idea from Cai-Li [7].
Since v · n = 0 on ∂Ω, we have v = v^⊥ × n on ∂Ω, where v^⊥ := −v × n. Then, for f ∈ H^1,
∫_∂ v · ∇f = ∫_∂ (v^⊥ × n) · ∇f = ∫ curl v^⊥ · ∇f ≤ C ‖v‖_{H^1} ‖∇f‖_{L^2},
which clearly has an advantage over using the trace inequality, since the latter needs f ∈ H^2. For u satisfying (B), the situation is somewhat different, since, in every case that follows, v satisfies the non-homogeneous Dirichlet boundary condition
v = −c_0 ∇ρ^{-1}  on ∂Ω × (0, T).    (1.20)
Such a condition brings too many high order derivatives, so that the boundary integrals are no longer controllable, especially when we treat the energy estimates for v. Therefore, we shall apply another type of decomposition whose idea comes from Lemma 2.6 (see Section 2), from which one can find a function Q = B[c_0 ∆ρ^{-1}], where B is the Bogovskiǐ operator. As a consequence, u will be split into
u = w + Q,    (1.21)
and, hence, one can hope to get the energy estimates for the system (1.1). The advantage of the above decomposition is obvious: on the one hand, from Lemma 2.6, Q is "almost" ∇ρ; in other words, for all 1 < p < ∞, Q has the following bounds
‖Q‖_{L^p} ≤ C ‖∇ρ‖_{L^p},
‖Q‖_{H^1} ≤ C ( ‖∆ρ‖_{L^2} + ‖∇ρ‖_{L^3} ‖∇ρ‖_{L^6} ),
‖Q_t‖_{L^p} ≤ C ( ‖∇ρ_t‖_{L^p} + ‖ |ρ_t| |∇ρ| ‖_{L^p} );    (1.22)
on the other hand, it is easy to check that w has vanishing boundary values, so no boundary terms are generated when applying the energy estimates (see the sketch after this paragraph). Therefore, the strategy of the proof can be summarized as follows:
(ρ_0, u_0)  ==(1.21)==>  (1.22) estimates for (ρ, u)  ==>  · · ·
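For clarity, here is a short sketch of why the splitting (1.21) removes the boundary terms in case (B); we assume Lemma 2.6 is applied with f = c_0 ∆ρ^{-1}, which has zero mean over Ω thanks to the Neumann condition n · ∇ρ = 0 on ∂Ω.
\[
\operatorname{div} Q = c_0 \Delta\rho^{-1} = \operatorname{div} u \ \text{ in } \Omega,
\qquad Q = B[c_0 \Delta\rho^{-1}] \in W^{1,p}_0(\Omega),
\]
so that $w := u - Q$ satisfies
\[
\operatorname{div} w = 0 \ \text{ in } \Omega, \qquad w = u - Q = 0 \ \text{ on } \partial\Omega \ \text{ (since } u = 0 \text{ on } \partial\Omega \text{ in case (B))},
\]
and hence testing the momentum equation with $w$ produces no boundary integrals.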
At last, to prove Theorem 1.3, we mainly adapt the proofs mentioned above with a slight change. We first suppose that (1.12) is false, that is,
lim_{t→T*} ‖u‖_{L^s(0,T;L^r)} ≤ M_0 < ∞;    (1.23)
then, following the proof of Theorem 1.2, one may obtain bounds for (ρ, u, π) satisfying (1.7), which contradicts the maximality of T*. However, when it comes to the higher order estimates of (ρ, v) (or (ρ, u)), one has to control
‖ |∇ρ|^3 ‖_{L^2} = ‖∇ρ‖^3_{L^6},
due to the nonlinear terms
ρ^{-2} |∇ρ|^2,    c_0^2 div(ρ∇ρ^{-1} ⊗ ∇ρ^{-1}).
To this end, we rewrite the mass equation (1.16)_1 in the form
ρ_t + v · ∇ρ − c_0 ∆ log ρ = 0,    (1.24)
(recall that ∆ log ρ = ρ^{-1}∆ρ − ρ^{-2}|∇ρ|^2), which pushes us to estimate log ρ thanks to the pure transport structure ρ_t + v · ∇ρ and the dissipation term −∆ log ρ; see Section 4 for details. The rest of this paper is organized as follows. In Section 2, we give some elementary results which will be used later. Section 3 is devoted to the a priori estimates for system (1.1) and the proof of Theorem 1.2. Finally, in Section 4, we will give the proof of Theorem 1.3.
Preliminaries
First, we give the following local existence result for system (1.1). We have already proved this for 2D case in our previous work [38] and the 3D one can be established step by step only after some minor adaptions. Moreover, if µ(ρ) is a positive constant, then the above result also holds for the condition (B).
Remark 2.2.
Even if we restrict µ(ρ) = µ a positive constant in the case (B), the existence result can be extended to µ(ρ) = µ(ρ ǫ ) (see [38] for details), where
ρ ǫ ∈ C ∞ (Ω), α ≤ ρ ǫ ≤ β, ρ ǫ s − − → ρ in W k,p for all ρ ∈ W k,p , k ∈ N, 1 ≤ p < ∞.
This extension will help us to fill the gap between the existence of local strong solutions and that of global one when (ρ, u) satisfies the condition (B).
Next, we give the well-known Gagliardo-Nirenberg inequalities, which will be used frequently later.
Lemma 2.3 (Gagliardo-Nirenberg [24, 28]). Assume that Ω is a bounded domain in R^3 with smooth boundary. Then there exist generic constants C and C_1 which depend only on p and Ω such that, for all p ∈ [2, 6] and f ∈ H^1,
‖f‖_{L^p(Ω)} ≤ C ‖f‖_{L^2}^{(6−p)/(2p)} ‖∇f‖_{L^2}^{(3p−6)/(2p)} + C_1 ‖f‖_{L^2}.
Moreover, if either f · n| ∂Ω = 0 or f Ω = 0, we can choose C 1 = 0.
The next two lemmas can be found in [3,35].
Lemma 2.4. Let Ω be a bounded simply connected domain in R^3 with smooth boundary. Assume that k ≥ 0 is an integer and 1 < p < ∞. Then for all u ∈ W^{k+1,p} with u · n = 0 on ∂Ω, there exists a positive constant C = C(k, p, Ω) such that
‖u‖_{W^{k+1,p}} ≤ C ( ‖div u‖_{W^{k,p}} + ‖curl u‖_{W^{k,p}} ).
Lemma 2.5. Suppose that Ω is a bounded simply connected domain in R^3 with smooth boundary. Let k ≥ 0 be an integer and 1 < p < ∞. Then for u ∈ W^{k+1,p} with u × n = 0 on ∂Ω, there exists a constant C = C(k, p, Ω) such that
‖u‖_{W^{k+1,p}} ≤ C ( ‖div u‖_{W^{k,p}} + ‖curl u‖_{W^{k,p}} + ‖u‖_{L^p} ).
Next, consider the problem
div u = f,  x ∈ Ω,    u = Φ,  x ∈ ∂Ω,    (2.1)
where Ω is a bounded smooth domain in R 3 . We have the following standard estimates, which will be used to eliminate the non-homogeneity of equations.
Lemma 2.6. 1) If Φ = 0, there exists a bounded linear operator B = [B_1, B_2, B_3], B : {f ∈ L^p : f_Ω = 0} → (W^{1,p}_0)^3, such that
‖B[f]‖_{W^{1,p}} ≤ C(p) ‖f‖_{L^p} for all p ∈ (1, ∞),
and the function Q = B[f] solves the problem (2.1). Moreover, if f = div g with a certain g ∈ L^r, g · n|_{∂Ω} = 0, then for any r ∈ (1, ∞),
‖B[f]‖_{L^r} ≤ C(r) ‖g‖_{L^r}.
B is the so-called Bogovskiǐ operator.
2) If f = 0, there exists a bounded linear operator C = [C_1, C_2, C_3], C : {Φ : Φ · n|_{∂Ω} = 0, div Φ ∈ L^p} → (W^{1,p})^3, such that
‖C[Φ]‖_{W^{1,p}} ≤ C(p) ‖div Φ‖_{L^p}
for all p ∈ (1, ∞), and the function R = C[Φ] solves the problem (2.1).
The next two lemmas about the estimates of Stokes system are important to the higher order estimates of v.
Lemma 2.7. Let Ω be a bounded simply connected domain in R^3 with smooth boundary and let (u, p) satisfy the following Stokes equations
−∆u + ∇p = F,  x ∈ Ω,    div u = 0,  x ∈ Ω,    (2.2)
where p is normalized by the condition ∫ p = 0 and F ∈ L^2. Then, we have the following conclusions:
(1) If u satisfies the boundary condition u · n = 0, curl u × n = Φ on ∂Ω, where Φ ∈ H^1 is a function defined on Ω, then there exists a positive constant C depending only on Ω such that
‖u‖_{H^2} + ‖p‖_{H^1} ≤ C( ‖F‖_{L^2} + ‖Φ‖_{H^1} ).    (2.3)
(2) If u satisfies the boundary condition u = Φ on ∂Ω, where Φ ∈ H^2 is a function defined on Ω, then there exists a positive constant C depending only on Ω such that
‖u‖_{H^2} + ‖p‖_{H^1} ≤ C( ‖F‖_{L^2} + ‖Φ‖_{H^2} ).    (2.4)
Proof. We only give the proof for (1), since (2) can be found in [15], Chapter IV. Multiplying u on both side of (2.2) 1 and integrating by parts, one has
| curl u| 2 = ∂ Φ · u + F · u,
which, using Lemma 2.4 and trace inequality, implies that
u H 1 ≤ C ( F L 2 + Φ H 1 ) . (2.5)
Then, ∇p ∈ H −1 and, using the condition p = 0, we have
p L 2 ≤ C ( F L 2 + u H 1 ) . (2.6)
Next, applying curl on (2.2) 1 leads to the following Laplace equations,
−∆ curl u = curl F.
Then, multiplying curl u − Φ ⊥ and integrating over Ω gives
| curl curl u| 2 − curl curl u · curl Φ ⊥ + ∂ (n × curl curl u) · curl u − Φ ⊥ = F · curl curl u − curl Φ ⊥ + ∂ (n × F ) · curl u − Φ ⊥ , that is, using the indentity a · (b × c) = b · (c × a) = c · (a × b), | curl curl u| 2 − curl curl u · curl Φ ⊥ = F · curl curl u − curl Φ ⊥ , which imlplies that curl curl u L 2 ≤ C ( F L 2 + Φ H 1 ) .
It follows from Lemma 2.4-2.5 and (2.5) that
u H 2 ≤ C ( F L 2 + Φ H 1 + u L 2 ) . (2.7)
Because of the uniqueness of the Stokes system, one can eliminate the L 2 -norm of u on the righthand side of (2.7). On the other hand, of course, we have
p H 1 ≤ C ∇p L 2 ≤ C( ∆u L 2 + F L 2 ).
Thus, alonging with (2.7), we complete the proof.
Lemma 2.8.
Let Ω be a bounded simply connnected domain in R 3 with smooth boundary. Let (u, p) be a strong solution of the following Stokes type system,
− div[2µ(ρ)D(u)] + ∇p = F, x ∈ Ω, div u = 0, x ∈ Ω, (2.8)
where p is normalized by the condition p = 0, F ∈ L 2 and
0 < µ ≤ µ(ρ) ≤ µ < ∞, ∇µ(ρ) ∈ L r , r ∈ (3, ∞]
Then, we have the following results:
(1) If u satisfies the boundary condition u · n = 0, curl u × n = Φ on ∂Ω, where Φ ∈ H 1 is a function defined on Ω. Then there exists a positive constant C depending only on µ, µ and Ω such that
u H 2 + p H 1 ≤ C ∇µ(ρ) r r−3 L r ∇u L 2 + 1 + ∇µ(ρ) r r−3 L r ( F L 2 + Φ H 1 ) . (2) If u satisfies the boundary condition u = Φ on ∂Ω, where Φ ∈ H 2 is a function defined on Ω.
Then there exists a positive constant C depending only on µ, µ and Ω such that
u H 2 + p H 1 ≤ C ∇µ(ρ) r r−3 L r ∇u L 2 + 1 + ∇µ(ρ) r r−3 L r ( F L 2 + Φ H 2 ) .
Proof. We still only give the proof of (1), since (2) can be checked in a similar way. Using Lemma 2.6, we can find a function R = C[Φ] such that div R = 0 and R| ∂Ω = Φ. Then, we rewrite (2.12)
1 as − div[2µ(ρ)D(u − R)] + ∇p = F + div[2µ(ρ)D(R)]. (2.9)
Multiplying u − R on both sides of (2.9), integrating by parts like in Lemma 2.7 and using the control
R H 1 ≤ C div Φ L 2 , one has u H 1 + p L 2 ≤ C ( F L 2 + Φ H 1 ) . (2.10) Next, converting (2.12) 1 into the form − ∆u + ∇ p µ(ρ) = F µ(ρ) + 2∇µ(ρ) · D(u) µ(ρ) − p∇µ(ρ) µ(ρ) 2 ,(2.u H 2 + p H 1 ≤ C u H 2 + ∇ p µ(ρ) L 2 + ∇µ(ρ) · D(u) µ(ρ) L 2 + p∇µ(ρ) µ(ρ) 2 L 2 ≤ C( F L 2 + Φ H 1 + ∇µ(ρ) L r ∇u L 2r r−2 + ∇µ(ρ) L r p L 2r r−2 ) ≤ C F L 2 + Φ H 1 + ∇µ(ρ) r r−3 L r ∇u L 2 + ∇µ(ρ) r r−3 L r p L 2 + 1 2 ( u H 2 + p H 1 ) ,
which implies that
u H 2 + p H 1 ≤ C F L 2 + Φ H 1 + ∇µ(ρ) r r−3 L r ∇u L 2 + ∇µ(ρ) r r−3 L r p L 2 ≤ C F L 2 + Φ H 1 + ∇µ(ρ) r r−3 L r ∇u L 2 + ∇µ(ρ) r r−3 L r ( F L 2 + Φ H 1 ) .
This completes the proof.
Next, we consider the Hölder continuity of ρ and the non-divergence type Stokes model.
Lemma 2.9 ([10, 23, 33, 38]). Let v ∈ L s (0, T ; L r ), div v = 0, v · n = 0 and ρ ∈ C([0, T ]; L 2 ) ∩ L 2 (0, T ; H 1 ) be the weak solution of equation (1.8) 1 , α ≤ ρ ≤ β. Let ρ satisfy n · ∇ρ = 0 on ∂Ω provided Ω ⊂ R 3 is a bounded domain with smooth boudary. Suppose that ρ 0 ∈ C γ0 (Ω) for some γ 0 ∈ (0, 1), then ρ is Hölder continuous. More precisely, ρ ∈ C γ, γ 2 (Q T )
, for some γ depending only on γ 0 , α and β.
Lemma 2.10.
Let Ω be a bounded simply connnected domain in R 3 with smooth boundary. Let (u, p) be a strong solution of the following Stokes type system,
−µ(x)∆u + ∇p = F, x ∈ Ω, div u = 0, x ∈ Ω, (2.12)
where p is normalized by the condition p = 0, F ∈ L 2 and
0 < µ ≤ µ(x) ≤ µ < ∞, µ(x) ∈ C(Ω).
Then, we have the following results:
(1) If u satisfies the boundary condition u · n = 0, curl u × n = Φ on ∂Ω, where Φ ∈ H 1 is a function defined on Ω. Then there exists a positive constant C depending only on Ω such that
u H 2 + p H 1 ≤ C( F L 2 + Φ H 1 ). (2.13) (2) If u satisfies the boundary condition u = Φ on ∂Ω, where Φ ∈ H 2 is a function defined on Ω.
Then there exists a positive constant C depending only on Ω such that
u H 2 + p H 1 ≤ C( F L 2 + Φ H 2 ). (2.14)
Proof. The proof of Lemma 2.10 is an easy consequence of the freezing point argument, since we already have the conclusion when µ ≡ constant from the Lemma 2.7.
At last, in subsection 3.2, we need the following lemma.
Lemma 2.11 (Simon [29,32]). Let X ֒→ B ֒→ Y be three Banach spaces with compact imbedding
X ֒→֒→ Y . Further, let there eixst 0 < θ < 1 and M > 0 such that v B ≤ M v 1−θ X v θ Y , for all v ∈ X ∩ Y. Denote for T > 0, W (0, T ) := W s0,r0 (0, T ; X) ∩ W s1,r1 (0, T ; Y ) with s 0 , s 1 ∈ R, r 1 , r 0 ∈ [1, ∞], and s θ := (1 − θ)s 0 + θs 1 , 1 r θ := 1 − θ r 0 + θ r 1 , s * := s θ − 1 r θ . Assume that s θ > 0 and F is a bounded set in W (0, T ). (1) If s * ≤ 0, then F is precompact in L p (0, T ; B) for all 1 ≤ p < − 1 s * . (2) If s * > 0, then F is precompact in C([0, T ]; B).
Proof of Theorem 1.2
In this section, we assume u 0 ∈ C ∞ (Ω) ∩ H 1 0 (or H 1 ω ). We always suppose that the assumptions in Theorem 1.2 hold. In the following proof, in order to simplify the notation, we denote by ε i , i ∈ N + , the arbitrarily small number belongs to (0, 1/2] and we use the subscript C εi to emphasize the dependency of the constant C on ε i .
A Priori Estimates
Case (A)
The key of the proof is deriving the following proposition. Using the idea from [20,37], we first assume the bounds (3.1) and obtain the a priori estimates of (ρ, v) (see below). Then, using the a priori estimates in Lemma 3.2-3.3 leads to a smaller bounds (3.2), which means that we can close the energy estimates.
Proposition 3.1. If
sup_{t∈[0,T]} ‖∇ρ‖_{L^6} ≤ 2,    ∫_0^T ( ‖∇v‖^4_{L^2} + ‖∆ρ‖^4_{L^2} ) dt ≤ 2 ‖∇u_0‖^2_{L^2},    (3.1)
then one has
sup_{t∈[0,T]} ‖∇ρ‖_{L^6} ≤ 1,    ∫_0^T ( ‖∇v‖^4_{L^2} + ‖∆ρ‖^4_{L^2} ) dt ≤ ‖∇u_0‖^2_{L^2}.    (3.2)
We first come to prove the lower order estimates of (ρ, v).
Lemma 3.2. Let (ρ, v)
be a smooth solution of (1.16), then α ≤ ρ ≤ β and there exist some positive constant C depending only on Ω, c 0 , α and β such that, for all T ∈ (0, ∞),
sup t∈[0,T ] ρ − (ρ 0 ) Ω 2 L 2 + T 0 ∇ρ 2 L 2 dt ≤ C ∇u 0 2 L 2 . (3.3) Furthermore, if ∇u 0 L 2 ≤ 1 and the condition (3.1) holds, one has sup t∈[0,T ] ∇ρ 2 L 2 + T 0 ∇ρ 4 L 3 + ∆ρ 2 L 2 dt ≤ C ∇u 0 2 L 2 , (3.4) sup t∈[0,T ] v 2 L 2 + T 0 v 4 L 3 + ∇v 2 L 2 dt ≤ C ∇u 0 2 L 2 . (3.5)
Proof. First of all, α ≤ ρ ≤ β is a consequence of the standard maximal principle. Next, multiplying ρ − (ρ 0 ) Ω on both sides of (1.16) 1 and integrating over Ω, one has
ρ − (ρ 0 ) Ω 2 L 2 t + ν ∇ρ 2 L 2 ≤ 0. Therefore, (3.3)
is an easy consequence of Grönwall's inequality and the control
ρ 0 − (ρ 0 ) Ω L 2 ≤ C ∇ρ 0 L 2 ≤ C ∇u 0 L 2 .
To prove (3.4), multiplying −∆ρ and integrating over Ω, one has
1 2 |∇ρ| 2 t + c 0 ρ −1 |∆ρ| 2 = (v · ∇ρ)∆ρ + c 0 ρ −2 |∇ρ| 2 ∆ρ := 2 i=1 G i ,(3.6)
where, using Lemma 2.3,
|G 1 | ≤ C v L 6 ∇ρ L 3 ∆ρ L 2 ≤ C ε1 ∇v 4 L 2 ∇ρ 2 L 2 + ε 1 ∆ρ 2 L 2 , |G 2 | ≤ C ∇ρ L 6 ∇ρ L 3 ∆ρ L 2 ≤ C ε2 ∆ρ 4 L 2 ∇ρ 2 L 2 + ε 2 ∆ρ 2 L 2 . (3.7)
Thus, we have ∇ρ
2 L 2 t + ν ∆ρ 2 L 2 ≤ C ∆ρ 4 L 2 + ∇v 4 L 2 ∇ρ 2 L 2 . (3.8)
Applying the Grönwall's inequality on (3.8) and using cndition (3.1), we obtain (3.4). With help of (3.3)-(3.4), we next come to the proof of (3.5). Multiplying v on both sides of (1.16) 2 and integrating over Ω, one has
1 2 ρ|v| 2 t − div[2µ(ρ)D(v)] · v = − 2c 0 µ(ρ)∇ 2 ρ −1 : ∇v + c 0 ρv · ∇v · ∇ρ −1 + c 2 0 ρ∇ρ −1 · ∇v · ∇ρ −1 + ∂ 2c 0 µ(ρ)n · ∇ 2 ρ −1 · v := 4 i=1 H i . (3.9)
For the second term on the left-hand side, using ∆v = − curl curl v and Lemma 2.3 and 2.5, we have
− div[2µ(ρ)D(v)] · v = µ(ρ) curl curl v · v − 2µ ′ (ρ)∇ρ · D(v) · v = µ(ρ)| curl v| 2 + ∂ µ(ρ)v · B · v + ∂ c 0 µ(ρ)v · B · ∇ρ −1 + curl v · (∇µ(ρ) × v) − 2∇µ(ρ) · D(v) · v ≥ µ curl v 2 L 2 + ∂ v · B · v − C v H 1 ∇ρ H 1 − C ∇ρ L 6 v L 3 ∇v L 2 ≥ ν ∇v 2 L 2 − C ∆ρ 2 L 2 + ∆ρ 4 L 2 v 2 L 2 .
(3.10)
Next, for H 1 -H 4 , one has, applying Lemma 2.3,
|H 1 | ≤ C ( ∆ρ L 2 + ∇ρ L 6 ∇ρ L 3 ) ∇v L 2 ≤ C ε1 ∆ρ 2 L 2 + ∆ρ 4 L 2 ∇ρ 2 L 2 + ε 1 ∇v 2 L 2 , |H 2 | ≤ C ∇ρ L 6 v L 3 ∇v L 2 ≤ C ε2 ∆ρ 4 L 2 v 2 L 2 + ε 2 ∇v 2 L 2 , |H 3 | ≤ C ∇ρ L 6 ∇ρ L 3 ∇v L 2 ≤ C ε3 ∆ρ 4 L 2 ∇ρ 2 L 2 + C ε3 ∆ρ 2 L 2 + ε 3 ∇v 2 L 2 .
(3.11) and, using the fact that v · ∇ 2 ρ −1 · n = −v · ∇n · ∇ρ −1 on ∂Ω × (0, T ),
|H 4 | = ∂ 2c 0 µ(ρ)n · ∇ 2 ρ −1 · v = ∂ 2c 0 µ(ρ)v · ∇n · ∇ρ −1 = ∂ 2c 0 µ(ρ)(v ⊥ × n) · ∇n · ∇ρ −1 = 2c 0 µ(ρ) curl v ⊥ · ∇n · ∇ρ −1 − 2c 0 v ⊥ · curl µ(ρ)∇n · ∇ρ −1 = 2c 0 µ(ρ) curl v ⊥ · ∇n · ∇ρ −1 + 2c 0 µ(ρ)v ⊥ · curl ∇n · ∇ρ −1 − 2c 0 ∇µ(ρ) × ∇n · ∇ρ −1 · v ⊥ ≤ C v H 1 ∇ρ H 1 + C ∇ρ 2 L 3 v L 3 ≤ C ε4 ∆ρ 2 L 2 + ∇ρ 4 L 3 + ε 4 ∇v 2 L 2 .
(3.12)
Combining (3.10)-(3.12), we deduce from (3.9) that √ ρv
2 L 2 t + ν ∇v 2 L 2 ≤ C ∆ρ 4 L 2 √ ρv 2 L 2 + ∆ρ 2 L 2 + ∆ρ 4 L 2 ∇ρ 2 L 2 + ∇ρ 4 L 3 .v 2 L 2 + T 0 v 4 L 3 + ∇v 2 L 2 dt ≤ C ∇u 0 2 L 2 + v 0 2 L 2 ≤ C ∇u 0 2 L 2 ,
which gives (3.5). Thus, we complete the proof of Lemma 3.2.
Next, we prove the higher order estimates for (ρ, v), that is, Lemma 3.3. Let (ρ, v, π 1 ) be a smooth solution of (1. 16). Suppose that ∇u 0 L 2 ≤ 1 and the condition (3.1) holds, then there exist some positive constants C depending only on Ω, c 0 , α and β such that, for all T ∈ (0, ∞),
sup t∈[0,T ] P(t) + T 0 Q(t) + π 2 H 1 dt ≤ C ∇u 0 2 L 2 . (3.14)
where
P(t) := ∇v 2 L 2 + ∆ρ 2 L 2 + ρ t 2 L 2 , Q(t) := v t 2 L 2 + v 2 H 2 + ∇∆ρ 2 L 2 + ∇ρ t 2 L 2 .
Proof. We first apply −∇∆ρ · ∇ on both sides of (1.16) 1 and, then, integrate over Ω, we have
1 2 |∆ρ| 2 t + c 0 ρ −1 |∇∆ρ| 2 = ∇∆ρ · ∇v · ∇ρ + v · ∇ 2 ρ · ∇∆ρ + ∇ c 0 ρ 2 |∇ρ| 2 · ∇∆ρ + c 0 ρ 2 ∆ρ∇ρ · ∇∆ρ := 4 i=1 I i ,(3.15)
where, applying Lemma 2.3,
|I 1 | ≤ C ∇v L 3 ∇ρ L 6 ∇∆ρ L 2 ≤ C ε1 ∆ρ 4 L 2 ∇v 2 L 2 + ε 1 ∇∆ρ 2 L 2 + v 2 H 2 , |I 2 | ≤ C v L 6 ∆ρ L 3 ∇∆ρ L 2 ≤ C ε2 ∇v 4 L 2 ∆ρ 2 L 2 + ε 2 ∇∆ρ 2 L 2 , |I 3 | ≤ C ∇ρ 3 L 6 + ∇ρ L 6 ∇ 2 ρ L 3 ∇∆ρ L 2 ≤ C ε3 ∆ρ 4 L 2 ∆ρ 2 L 2 + ε 3 ∇∆ρ 2 L 2 , |I 4 | ≤ C ∇ρ L 6 ∆ρ L 3 ∇∆ρ L 2 ≤ C ε4 ∆ρ 4 L 2 ∆ρ 2 L 2 + ε 4 ∇∆ρ 2 L 2 .
(3.16)
Thus, substituting (3.16) into (3.15) leads to
∆ρ 2 L 2 t + ν ∇∆ρ 2 L 2 ≤ C ε ∇v 4 L 2 + ∆ρ 4 L 2 ∆ρ 2 L 2 + ∇v 2 L 2 + ε v 2 H 2 (3.17)
For the higher order estimates of v, multiplying v t on both sides of (1.16) 2 and integrating over Ω lead to
ρ|v t | 2 − div[2µ(ρ)D(v)] · v t = − ρu · ∇v · v t + c 0 div 2µ(ρ)∇ 2 ρ −1 · v t − c 0 div ρv ⊗ ∇ρ −1 · v t − c 2 0 div ρ∇ρ −1 ⊗ ∇ρ −1 · v t := 4 i=1 J i .(3.18)
For the second term on the left-hand side, we have
− div[2µ(ρ)D(v)] · v t = µ(ρ) curl curl v · v t − 2∇µ(ρ) · D(v) · v t = ∂ µ(ρ)v t · B · v + ∂ c 0 µ(ρ)v t · B · ∇ρ −1 + µ(ρ) curl v · curl v t + curl v · [∇µ(ρ) × v t ] − 2∇µ(ρ) · D(v) · v t = 1 2 ∂ µ(ρ)v · B · v + µ(ρ)| curl v| 2 t + ∂ c 0 µ(ρ)v t · B · ∇ρ −1 − ∂ 1 2 µ(ρ) t v · B · v − 1 2 µ(ρ) t | curl v| 2 + curl v · [∇µ(ρ) × v t ] − 2∇µ(ρ) · D(v) · v t := 1 2 ∂ µ(ρ)v · B · v + µ(ρ)| curl v| 2 t + 5 i=1 K i . (3.19)
However, For K 1 -K 5 , we have, using Lemma 2.3,
K 1 = − ∂ c 0 µ(ρ) ρ 2 (v ⊥ t × n) · B · ∇ρ = c 0 µ(ρ) ρ 2 curl v ⊥ t · B · ∇ρ − c 0 v ⊥ t · ∇ µ(ρ) ρ 2 × (B · ∇ρ) − c 0 µ(ρ) ρ 2 v ⊥ t · curl(B · ∇ρ) = M ′ (t) − c 0 curl v ⊥ · B · µ(ρ) ρ 2 ∇ρ t − c 0 v ⊥ t · ∇ µ(ρ) ρ 2 × (B · ∇ρ) − c 0 µ(ρ) ρ 2 v ⊥ t · curl(B · ∇ρ) ≤ M ′ (t) + C ∇v L 2 ( ∇ρ t L 2 + ρ t L 6 ∇ρ L 3 ) + C v t L 2 ( ∇ρ L 6 ∇ρ L 3 + ∆ρ L 2 ) ≤ M ′ (t) + C ε1 ∇ρ 4 L 3 ∇v 2 L 2 + ∇v 2 L 2 + ε 1 ∇ρ t 2 L 2 + C ε2 ∆ρ 4 L 2 ∇ρ 2 L 2 + ∆ρ 2 L 2 + ε 2 v t 2 L 2 ,(3.20)
where M (t) := c 0 µ(ρ) ρ 2 curl v ⊥ · B · ∇ρ, and
K 2 = − ∂ 1 2 µ(ρ) t v · B · v = ∂ 1 2 µ(ρ) t (n × v ⊥ ) · B · v = 1 2 µ(ρ) t curl v ⊥ · B · v − 1 2 v ⊥ · [∇µ(ρ) t × (B · v)] − 1 2 µ(ρ) t v ⊥ · curl(B · v) ≤ C ρ t L 6 v L 3 ∇v L 2 + C v L 6 v L 3 ∇ρ t L 2 + C v L 3 v L 6 ∇ρ L 3 ρ t L 6 ≤ C ε3 v 4 L 3 ∇v 2 L 2 + ∇ρ 4 L 3 ∇v 2 L 2 + ∇v 2 L 2 + ε 3 ∇ρ t 2 L 2 (3.21)
and
|K 3 | ≤ C ρ t L 6 ∇v L 3 ∇v L 2 ≤ C ε4 ∇v 4 L 2 ∇v 2 L 2 + ε 4 v 2 H 2 + ∇ρ t 2 L 2 , |K 4 | ≤ C ∇v L 3 ∇ρ L 6 v t L 2 ≤ C ε5 ∆ρ 4 L 2 ∇v 2 L 2 + ε 5 v 2 H 2 + v t 2 L 2 , |K 5 | ≤ C ∇v L 3 ∇ρ L 6 v t L 2 ≤ C ε6 ∆ρ 4 L 2 ∇v 2 L 2 + ε 6 v 2 H 2 + v t 2 L 2 . (3.22) Combining (3.19)-(3.22), we have − div[2µ(ρ)D(v)] · v t ≥ 1 2 ∂ µ(ρ)v · B · v + µ(ρ)| curl v| 2 t + M ′ (t) − C ε ∇ρ 4 L 3 + ∆ρ 4 L 2 + v 4 L 3 + ∇v 4 L 2 ∇v 2 L 2 − C ε ∇v 2 L 2 − C ε ∆ρ 4 L 2 ∇ρ 2 L 2 + ∆ρ 2 L 2 − ε v 2 H 2 + v t 2 L 2 .
(3.23)
Next, we turn to estimate J 1 -J 4 and apply Lemma 2.3, that is,
|J 1 | ≤ C ( ∇ρ L 6 + v L 6 ) ∇v L 3 v t L 2 ≤ C ε1 ∆ρ 4 L 2 + ∇v 4 L 2 ∇v 2 L 2 + ε 1 v 2 H 2 + v t 2 L 2 , |J 2 | ≤ C ∇ρ 3 L 6 + ∇ρ L 6 ∆ρ L 3 + ∇∆ρ L 2 v t L 2 ≤ C ε2 ∆ρ 4 L 2 ∆ρ 2 L 2 + ∇∆ρ 2 L 2 + ε 2 v t 2 L 2 , |J 3 | ≤ C ∇ρ 2 L 6 v L 6 + ∇ρ L 6 ∇v L 3 + ∆ρ L 3 v L 6 v t L 2 ≤ C ε3 ∆ρ 4 L 2 ∇v 2 L 2 + ∇v 4 L 2 ∆ρ 2 L 2 +ε 3 ∆v 2 L 2 + v t 2 L 2 + ∇∆ρ 2 L 2 , |J 4 | ≤ C ∇ρ 3 L 6 + ∇ρ L 6 ∆ρ L 3 v t L 2 ≤ C ε4 ∆ρ 4 L 2 ∆ρ 2 L 2 + ∇∆ρ 2 L 2 + ε 4 v t 2 L 2 .
(3.24) For simplicity, we rewrite (3.25) as
Now, substituting (3.23) and (3.24) into (3.18), one can deduce that
∂ µ(ρ)v · B · v + µ(ρ)| curl v| 2 t + ν v t 2 L 2 + M ′ (t) ≤ C ε ∇ρ 4 L 3 + ∆ρ 4 L 2 + v 4 L 3 + ∇v 4 L 2 + 1 ∇v 2 L 2 + C ε ∆ρ 4 L 2 ∆ρ 2 L 2 + ∇∆ρ 2 L 2 + ε v 2 H 2 + ∇ρ t 2 L 2 .∇v 2 L 2 t + ν v t 2 L 2 + M ′ (t) ≤ C ε ∇ρ 4 L 3 + ∆ρ 4 L 2 + v 4 L 3 + ∇v 4 L 2 + 1 ∇v 2 L 2 + C ε ∆ρ 4 L 2 ∆ρ 2 L 2 + ∇∆ρ 2 L 2 + ε v 2 H 2 , ≤ C ε ∆ρ 4 L 2 + ∇v 4 L 2 ∇v 2 L 2 + ∆ρ 2 L 2 + C ε ∇v 2 L 2 + ∇∆ρ 2 L 2 + ε v 2 H 2 + ∇ρ t 2 L 2 ,(3.26)
since, from the positivity of B and Lemma 2.4
∂ µ(ρ)v · B · v ≥ 0, µ(ρ)| curl v| 2 ∼ ∇v 2 L 2 ,
and they do not influent the results after applying the Grönwall's inequality for (3.25).
We still need to estimate v H 2 . To get this, we convert (1.16) 2 into the form
− div[2µ(ρ)D(v)] + ∇π = F,(3.27)
where
F := −ρv t − ρu · ∇v + c 0 div 2µ(ρ)∇ 2 ρ −1 − c 0 div ρv ⊗ ∇ρ −1 − c 2 0 div ρ∇ρ −1 ⊗ ∇ρ −1 + c 0 ∇(log ρ) t .
(3.28)
In order to use Lemma 2.8, from the embedding L 2 ֒→ H −1 , one should estimate F L 2 , that is,
F 2 L 2 ≤ C v t 2 L 2 + ∇ρ t 2 L 2 + C ∆ρ 4 L 2 + ∇v 4 L 2 ∆ρ 2 L 2 + ∇∆ρ 2 L 2 + C ε1 ∆ρ 4 L 2 ∇v 2 L 2 + ε 1 v 2 H 2 , (3.29)
where we have used
∇(log ρ) t L 2 ≤ C ( ∇ρ t L 2 + ρ t L 3 ∇ρ L 6 ) ≤ C ∇ρ t L 2
from Lemma 2.3 and condition (3.1).
On the other hand, in this case, Φ := −B · (v + c 0 ∇ρ −1 ), where Φ as in Lemma 2.8. Hence, applying Poincaré's inequality leads to
Φ 2 H 1 ≤ C v 2 H 1 + ∇ρ 2 H 1 + ∇ρ 2 L 3 ∇ρ 2 L 6 ≤ C ∇v 2 L 2 + ∆ρ 2 L 2 + ∆ρ 4 L 2 ∇ρ 2 L 2 .v 2 H 2 + π 2 H 1 ≤ C ∇ρ 4 L 6 ∇v 2 L 2 + 1 + ∇ρ 4 L 6 F 2 L 2 + Φ 2 H 1 ≤ C F 2 L 2 + Φ 2 H 1 + ∇v 2 L 2 ≤ C v t 2 L 2 + ∇ρ t 2 L 2 + C ∆ρ 4 L 2 + ∇v 4 L 2 ∆ρ 2 L 2 + C ∇∆ρ 2 L 2 + C ∇v 2 L 2 + C ε ∆ρ 4 L 2 ∇v 2 L 2 + ε v 2 H 2 , which gives v 2 H 2 + π 2 H 1 ≤ C v t 2 L 2 + ∇ρ t 2 L 2 + C ∆ρ 4 L 2 + ∇v 4 L 2 ∆ρ 2 L 2 + ∇v 2 L 2 + C ∇∆ρ 2 L 2 + ∇v 2 L 2 (3.31)
Combining (3.26) and (3.31), one has ∇v 2
L 2 t + ν v t L 2 + ν v 2 H 2 + M ′ (t) ≤ C ε ∆ρ 4 L 2 + ∇v 4 L 2 ∇v 2 L 2 + ∆ρ 2 L 2 + C ε ∇∆ρ 2 L 2 + ∇v 2 L 2 + ε ∇ρ t 2 L 2 .
(3.32)
At last, we come to estimate ∇ρ t . Applying ρ t ∂ t on both sides of (1.16) 1 and integrating over Ω yield that
1 2 |ρ t | 2 t + c 0 ρ −1 |∇ρ t | 2 = − (v t · ∇ρ)ρ t + 2c 0 ρ −3 |ρ t | 2 |∇ρ| 2 − 2c 0 ρ −1 (∇ρ · ∇ρ t ) ρ t − c 0 ρ −1 |ρ t | 2 ∆ρ := 4 i=1 L i . (3.33)
It follow from Lemma 2.3 that
|L 1 | ≤ v t L 2 ∇ρ L 6 ρ t L 3 ≤ C ε1 ∆ρ 4 L 2 ρ t 2 L 2 + ε 1 v t 2 L 2 + ∇ρ t 2 L 2 , |L 2 | ≤ ∇ρ L 2 ∇ρ L 6 ρ t L 3 ≤ C ε2 ∆ρ 4 L 2 ρ t 2 L 2 + ε 2 ∇ρ 2 L 2 + ∇ρ t 2 L 2 , |L 3 | ≤ ∇ρ t L 2 ∇ρ L 6 ρ t L 3 ≤ C ε3 ∆ρ 4 L 2 ρ t 2 L 2 + ε 3 ∇ρ t 2 L 2 , |L 4 | ≤ ∆ρ L 2 ρ t L 6 ρ t L 3 ≤ C ε4 ∆ρ 4 L 2 ρ t 2 L 2 + ε 4 ∇ρ t 2 L 2 .
(3.34)
Combining (3.33) and (3.34) leads to
ρ t 2 L 2 t + ν ∇ρ t 2 L 2 ≤ C ε ∆ρ 4 L 2 ρ t 2 L 2 + ε v t 2 L 2 + ∇ρ 2 L 2 . (3.35)
This, alonging with (3.32), yields that
∇v 2 L 2 + ρ t 2 L 2 t + ν v t 2 L 2 + ν v 2 H 2 + ν ∇ρ t 2 L 2 + M ′ (t) ≤ C ∆ρ 4 L 2 + ∇v 4 L 2 P(t) + C ∇∆ρ 2 L 2 + ∇v 2 L 2 .
(3.36)
On the other hand, combining (3.17) and (3.31) leads to
∆ρ 2 L 2 t + ν ∇∆ρ 2 L 2 ≤ C ε ∇v 4 L 2 + ∆ρ 4 L 2 ∆ρ 2 L 2 + ∇v 2 L 2 + ε ∇v 2 L 2 + v t 2 L 2 + ∇ρ t 2 L 2 , (3.37) that is, ν 2 ∇∆ρ 2 L 2 ≤ − ν 2 ∇∆ρ 2 L 2 − ∆ρ 2 L 2 t + C ε ∇v 4 L 2 + ∆ρ 4 L 2 ∆ρ 2 L 2 + ∇v 2 L 2 + ε ∇v 2 L 2 + v t 2 L 2 + ∇ρ t 2 L 2 .
(3.38)
Thus, substituting (3.38) into (3.36) and choosing ε small enough, we obtain
2C ν ∆ρ 2 L 2 + ∇v 2 L 2 + ρ t 2 L 2 t + ν 2 2C ν ∇∆ρ 2 L 2 + v t 2 L 2 + ∇ρ t 2 L 2 + M ′ (t) ≤ C ∇v 4 L 2 + ∆ρ 4 L 2 P(t) + C ∇v 2 L 2 ,
or, equivalently, using the definitions of P(t) and Q(t),
P ′ (t) + νQ(t) + M ′ (t) ≤ C ∇v 4 L 2 + ∆ρ 4 L 2 P(t) + C ∇v 2 L 2 ,P(t) + T 0 Q(t), dt ≤ C ∇u 0 2 L 2 ,(3.39)
where we have used the control
ρ t,0 2 L 2 ≤ C ∇ρ 0 2 L 3 v 0 2 L 6 + ∇ρ 0 2 L 3 ∇ρ 0 2 L 6 + ∆ρ 0 2 L 2 ≤ C ∆ρ 0 2 L 2 ∇v 0 2 L 2 + ∆ρ 0 4 L 2 + ∆ρ 0 2 L 2 ≤ C ∇u 0 4 L 2 + ∇u 0 2 L 2 ≤ C ∇u 0 2 L 2 ,
and the following estimates
sup t∈[0,T ] M (t) ≤ ε sup t∈[0,T ] ∇v 2 L 2 + C ε sup t∈[0,T ] ∇ρ 2 L 2 ≤ ε sup t∈[0,T ] P(t) + C ε ∇u 0 2 L 2 ,P(t) + C ε ∇u 0 2 L 2 (3.41)
where h(t) is an integrable function on [0, ∞). Finally, plugging (3.39) into (3.31), we have
T 0 π 2 H 1 dt ≤ C ∇u 0 2 L 2 ,
which, together with (3.39), completes the proof of (3.14).
∇ρ L 6 ≤ C 1 sup t∈[0,T ] ∆ρ L 2 ≤ C 1 C ∇u 0 L 2 ,
where C is as in Lemma 3.3 and C_1 is the Sobolev embedding constant. Thus, if we choose
∇u 0 L 2 ≤ δ 1 := (C 1 C) −1 , (3.42)
we can derive the first part of (3.2). For the rest of (3.2), using Lemma 3.3 again leads to
T 0 ∇v 4 L 2 + ∆ρ 4 L 2 dt ≤ sup t∈[0,T ] ∆ρ 2 L 2 + sup t∈[0,T ] ∇v 2 L 2 T 0 ∆ρ 2 L 2 + ∇v 2 L 2 dt ≤ λ −1 C 2 ∇u 0 4 L 2 ≤ ∇u 0 2 L 2 provided ∇u 0 L 2 ≤ δ 2 := λ 1/2 C −1 , (3.43)
where C is as in Lemma 3.3. It follows from (3.42) and (3.43) that one should choose δ := min{1, δ_1, δ_2}. Such a δ depends only on Ω, c_0, α and β and, therefore, we have established (3.2).
Case (B)
Similarly to the preceding subsection, we are going to prove the following proposition.
∇ρ L 6 ≤ 1, T 0 ∇u 4 L 2 + ∆ρ 4 L 2 dt ≤ ∇u 0 2 L 2 . (3.45)
One should notice that the norms of v and u are equivalent in the following sense under condition
(3.44), v L p + ∇ρ L p ∼ u L p + ∇ρ L p , ∇v L p + ∆ρ L p + ∇ρ 2 L 2p ∼ ∇u L p + ∆ρ L p + ∇ρ 2 L 2p , ∆v L 2 + ∇∆ρ L 2 ∼ ∆u L 2 + ∇∆ρ L 2 , (3.46)
where (3.46) 3 is deduced by
∆v L 2 ≤ C ∆u L 2 + ∇∆ρ L 2 + ∇ρ L 6 ∇ 2 ρ L 3 + ∇ρ 3 L 6 ≤ C ∆u L 2 + ∇∆ρ L 2 + ∇ρ L 6 ∇∆ρ L 2 + ∇ρ 2 L 6 ∇∆ρ L 2 ≤ C ( ∆u L 2 + ∇∆ρ L 2 )
and vice versa. We can easily derive a lemma similar to Lemma 3.2, which is given as follows.
Lemma 3.5. Let (ρ, u, π) be a smooth solution of (1.1). Then there exists a positive constant C depending only on Ω, c_0, α and β such that, for all T ∈ (0, ∞),
sup t∈[0,T ] ρ − (ρ 0 ) Ω 2 L 2 + T 0 ∇ρ 2 L 2 dt ≤ C ∇u 0 2 L 2 .∇ρ 2 L 2 + T 0 ∇ρ 4 L 3 + ∆ρ 2 L 2 dt ≤ C ∇u 0 2 L 2 , (3.48) sup t∈[0,T ] F (t) + T 0 G(t) + π 2 H 1 dt ≤ C ∇u 0 2 L 2 , (3.49) where F (t) := ∇u 2 L 2 + ∆ρ 2 L 2 + ρ t 2 L 2 , G(t) := ∇∆ρ 2 L 2 + u t 2 L 2 + ∆u 2 L 2 + ∇ρ t 2 L 2
Proof. (3.47) has been proved in Lemma 3.2 and (3.48) is a trivial consequence of (3.8). Indeed, using (3.46), condition (3.44), Lemma 2.3 and Poincaré's inequality leads to
∇v 4 L 2 ≤ C ∇u 4 L 2 + ∆ρ 4 L 2 + ∇ρ 4 L 3 ∇ρ 4 L 6 ≤ C ∇u 4 L 2 + ∆ρ 4 L 2 ,
and, thus,
(‖∇ρ‖_{L^2}^2)_t + ν ‖∆ρ‖_{L^2}^2 ≤ C (‖∆ρ‖_{L^2}^4 + ‖∇u‖_{L^2}^4) ‖∇ρ‖_{L^2}^2.
To prove (3.49), we first derive the lower-order estimate of u. Multiplying both sides of (1.1)_2 by w and integrating over Ω, one has
1 2 ρ|u| 2 t + 2µ(ρ)|D(u)| 2 = ρu t · Q + ρu · ∇u · Q − div[2µ(ρ)D(u)] · Q = ρu t · Q + ρu · ∇u · Q + 2µ(ρ)D(u) · ∇Q := 3 i=1 M i , (3.50)
where, from Lemma 2.3,
|M 1 | ≤ C u t L 2 Q L 2 ≤ C ε1 ∇ρ 2 L 2 + ε 1 u t 2 L 2 , |M 2 | ≤ C u L 3 ∇u L 2 ∇ρ L 6 ≤ C ε2 ∆ρ 4 L 2 u 2 L 2 + ε 2 ∇u L 2 , |M 3 | ≤ C ∇u L 2 ∇Q L 2 ≤ C ε3 ∆ρ 2 L 2 + ε 3 ∇u L 2 .
(3.51)
Here, we have used the following control
∇Q L 2 ≤ C ∆ρ L 2 + ∇ρ 2 L 4 ≤ C ( ∆ρ L 2 + ∇ρ L 3 ∇ρ L 6 ) ≤ C ∆ρ L 2 .
Thus, substituting (3.51) into (3.50) gives
√ ρu 2 L 2 t + ν ∇u 2 L 2 ≤ C ε ∆ρ 4 L 2 √ ρu 2 L 2 + C ε ∇ρ 2 L 2 + ∆ρ 2 L 2 + ε u t 2 L 2 . (3.52)
Multiplying w t on both sides of (1.1) 2 and integrating over Ω, one has
ρ|u t | 2 + µ(ρ)|D(u)| 2 t = − ρu t · Q t − ρu · ∇u · w t + µ(ρ) t |D(u)| 2 − div[2µ(ρ)D(u)] · Q t := 4 i=1 N i , (3.53)
where, using Lemma 2.3,
|N 1 | ≤ C u t L 2 Q t L 2 ≤ C ε1 ∇ρ t 2 L 2 + ε 1 u t 2 L 2 , |N 2 | ≤ C u L 6 ∇u L 3 w t L 2 ≤ C ε2 ∇u 4 L 2 ∇u 2 L 2 + ε 2 ∇ρ t 2 L 2 + ∆ρ 4 L 2 ρ t 2 L 2 +ε 2 ∆u 2 L 2 + u t 2 L 2 , |N 3 | ≤ C ρ t L 3 ∇u L 2 ∇u L 6 ≤ C ε3 ∇u 4 L 2 ρ t 2 L 2 + C ∇ρ t 2 L 2 + ε 3 ∆u 2 L 2 , |N 4 | ≤ C ( ∇ρ L 6 ∇u L 3 + ∆u L 2 ) Q t L 2 ≤ C ε4 ∆ρ 4 L 2 ∇u 2 L 2 + C ε4 ∇ρ t 2 L 2 + ε 4 ∆u 2 L 2 , (3.54)
where we have used
Q t L 2 ≤ C ( ∇ρ t L 2 + ∇ρ L 6 ρ t L 3 ) ≤ C ∇ρ t L 2 .
Combining (3.53) and (3.54) leads to
µ(ρ)|D(u)| 2 L 2 t + ν u t 2 L 2 ≤ C ε ∇u 4 L 2 + ∆ρ 4 L 2 ∇u 2 L 2 + ρ t 2 L 2 + C ε ∇ρ t 2 L 2 + ε ∆u 2 L 2 .v 2 H 2 + ∇π 2 L 2 ≤ C ∇ρ 4 L 6 ∇v 2 L 2 + 1 + ∇ρ 4 L 6 F 2 L 2 + Φ 2 H 2 ≤ C F 2 L 2 + Φ 2 H 2 + ∆ρ 4 L 2 ∇v 2 L 2 ≤ C v t 2 L 2 + ∇ρ t 2 L 2 + C ∆ρ 4 L 2 + ∇v 4 L 2 ∆ρ 2 L 2 + C ∇∆ρ 2 L 2 + C ∇v 2 L 2 + C ε ∇v 4 L 2 + ∆ρ 4 L 2 ∇v 2 L 2 + ε v 2 H 2
where F is as in (3.28)-(3.29). Thus, we still have (3.31) and, converting v into u and ρ via (3.46) and condition (3.44), we can derive the bound for ∆u, that is,
∆u 2 L 2 + π 2 H 1 ≤ C ∆ρ 4 L 2 + ∇u 4 L 2 ∆ρ 2 L 2 + ∇u 2 L 2 + C ∇∆ρ 2 L 2 + ∇u 2 L 2 + u t 2 L 2 + ∇ρ t 2 L 2 (3.56)
Combining (3.52), (3.55) and (3.56) and choosing ε small enough, one has
∇u 2 L 2 t + ν 2 ∇u 2 L 2 + ε 2C ∆u 2 L 2 + ν 2 u t 2 L 2 ≤ C ε ∇u 4 L 2 + ∆ρ 4 L 2 F (t) + C ε ∇ρ t 2 L 2 + ε ∇∆ρ 2 L 2 + C ∇ρ 2 L 2 + ∆ρ 2 L 2 ,(3.57)
where we have used
√ ρu L 2 + µ(ρ)|D(u)| L 2 ∼ ∇u L 2 .
Similarly, converting v into u and ρ, we can also obtain an analogous estimate from (3.17), that is, ∆ρ
2 L 2 t + ν ∇∆ρ 2 L 2 ≤ C ∇u 4 L 2 + ∆ρ 4 L 2 ∆ρ 2 L 2 + C ε1 ∆ρ 4 L 2 ∇u 2 L 2 + ε 1 ∆u 2 L 2 , which, combining with (3.56), gives ∆ρ 2 L 2 t + ν ∇∆ρ 2 L 2 ≤ C ε1 ∇u 4 L 2 + ∆ρ 4 L 2 ∆ρ 2 L 2 + ∇u 2 L 2 + ε 1 ∇u 2 L 2 + ∇ρ t 2 L 2 .
(3.58)
Combining (3.57)-(3.58) and letting ε, ε 1 suitably small yield that, ∃ ν > 0,
∇u 2 L 2 + ∆ρ 2 L 2 t + ν ∇∆ρ 2 L 2 + ∇u 2 L 2 + ∆u 2 L 2 + u t 2 L 2 ≤ C ∇u 4 L 2 + ∆ρ 4 L 2 F (t) + C ∇ρ t 2 L 2 + C ∇ρ 2 L 2 + ∆ρ 2 L 2 ,(3.59)
On the other hand, from (3.34), we can deduce similarly that
ρ t 2 L 2 t + ν ∇ρ t 2 L 2 ≤ C ε2 ∆ρ 4 L 2 ρ t 2 L 2 + C ε2 ∇ρ 2 L 2 + ε 2 u t 2 L 2 . (3.60)
Then, multiplying (3.60) by 2C, plugging it into (3.59), choosing ε_2 sufficiently small and using Poincaré's inequality, we have, for some positive constant ν,
F ′ (t) + νG(t) ≤ C ∇u 4 L 2 + ∆ρ 4 L 2 F (t) + C ∇ρ 2 L 2 + ∆ρ 2 L 2 , (3.61)
where we have used the following equivalent norms for convenience
F (t) ∼ ∇u 2 L 2 + ∆ρ 2 L 2 + 2C ρ t 2 L 2 , G(t) ∼ ∇∆ρ 2 L 2 + u t 2 L 2 + ∆u 2 L 2 + 2C ∇ρ t 2 L 2
and these equivalences do not influence the final result after applying Grönwall's inequality.
Thus, we get the higher-order estimates for (ρ, u) by using Grönwall's inequality and (3.47)-(3.48), and the estimate of π can be obtained from (3.56). Consequently, we obtain the estimate (3.49) and finish the proof.
Proof of Proposition 3.4. The proof is exactly the same as that of Proposition 3.1 and, thus, we omit it and leave the details to the reader.
Proof of Theorem 1.2
With the uniform bounds in hand, the proof is rather simple. We first prove case (A). By Lemma 2.1, there exists a unique strong solution (ρ, u) of (1.1) on Ω × (0, T_1) with initial data (ρ_0, u_0) satisfying the boundary condition (A), for some positive time T_1. Then one may use the a priori estimates, Proposition 3.1 and Lemmas 3.2-3.3, to extend the strong solution (ρ, u) globally in time. Indeed, if T_1 < ∞ is the maximal time of existence, then, using the uniform bounds, we have
(ρ, u)(x, T 1 ) := lim t→T − 1 (ρ, u)(x, t) in the sense of H 2 × H 1 (3.62)
satisfying the conditions imposed on the initial data, that is, α ≤ ρ(T_1) ≤ β and u(T_1) ∈ H^1_ω, at the time T_1. Furthermore, it is easy to check that (ρ, u)(x, T_1) satisfies the compatibility condition (1.9). Therefore, we can take (ρ, u)(x, T_1) as the initial data and apply Lemma 2.1 to extend the strong solution beyond T_1. This contradicts the maximality of T_1 and, hence, we finish the proof of Theorem 1.2 for the case (A).
However, for (ρ, u) satisfying (B), we can use Lemma 2.1 and Remark 2.2 to extend (ρ, u) on Ω × (0, T_1) to a global solution for every fixed ε ∈ (0, 1]. Then, using the a priori estimates, Proposition 3.4 and Lemma 3.5, we obtain uniform bounds for (ρ^ε, u^ε, π^ε) for all ε ∈ (0, 1]. More precisely, one has, as ε → 0^+,
ρ ǫ w * − − ⇀ ρ in C([0, T ]; H 2 ) ∩ L 2 (0, T ; H 3 ), ρ ǫ t w * − − ⇀ ρ t in C([0, T ]; L 2 ) ∩ L 2 (0, T ; H 1 ), u ǫ w * − − ⇀ u in C([0, T ]; H 1 ) ∩ L 2 (0, T ; H 2 ), u ǫ t w * − − ⇀ u t in C([0, T ]; H 1 ) ∩ L 2 (0, T ; L 2 ), π ǫ w − − ⇀ π in L 2 (0, T ; H 1 ).
(3.63)
Then, after applying Lemma 2.11, we may derive that
ρ ǫ −→ ρ uniformly for all (x, t) ∈ Ω × [0, T ], ρ ǫ s − − → ρ in C([0, T ]; H 2 ), u ǫ s − − → u in C([0, T ]; H 1 ).
Proof of Theorem 1.3
We first prove the blowup criterion. Throughout this section, we let (ρ, u, π) be a strong solution as described in Theorem 1.3 and let C̄ be a positive generic constant depending on c_0, α, β, T^*, M_0 and ‖u_0‖_{H^1}. Suppose that (1.12) were false, that is, for some r and s,
lim_{T→T^*} ‖u‖_{L^s(0,T;L^r)} ≤ M_0 < ∞, (4.1)
and we aim to show that, for all T ∈ [0, T^*),
sup_{t∈[0,T]} (‖ρ_t‖_{L^2}^2 + ‖ρ‖_{H^2}^2 + ‖u‖_{H^1}^2) + ∫_0^T (‖ρ_t‖_{H^1}^2 + ‖ρ‖_{H^3}^2 + ‖∇u‖_{H^1}^2) dt ≤ C̄. (4.2)
The proof of Proposition 4.1 will be separated into the following two parts.
Case for (ρ, u) satisfying (A)
The first lemma is part of Lemma 3.2; we restate it here for convenience.
Lemma 4.2. The following bounds hold for condition (A) and for all
T ∈ [0, T * ), that is, α ≤ ρ ≤ β, sup t∈[0,T ] ρ − (ρ 0 ) Ω 2 L 2 + ν T 0 ∇ρ 2 L 2 dt ≤ ρ 0 − (ρ 0 ) Ω 2 L 2 ,(4.3)
Next, we give the lower-order bounds for (log ρ, v), that is, Lemma 4.3.
Proof. We first rewrite (1.16)_1 in the form
(log ρ) t + v · ∇ log ρ − c 0 ρ −1 ∆ log ρ = 0,1 2 |∇ log ρ| 2 t + c 0 ρ −1 |∆ log ρ| 2 = (v · ∇ log ρ)∆ log ρ ≤ ∇ log ρ L r v L 2r r−2 ∆ log ρ L 2 ≤ C ε ∇ log ρ 2 L r v 2r−6 r L 2 ∇v 6 r L 2 + ε ∆ log ρ 2 L 2 ≤ C ε ( ∇ρ s L r + 1) v 2 L 2 + ε ∆ log ρ 2 L 2 + ∇v 2 L 2 , (4.6) that is ∇ log ρ 2 L 2 t + ν ∆ log ρ 2 L 2 ≤ C ε ( ∇ρ s L r + 1) v 2 L 2 + ε ∇v 2 L 2 . (4.7)
To estimate the remaining part of (4.4), it follows from (3.9)-(3.10) that √ρv
2 L 2 t + ν ∇v 2 L 2 ≤ C ∇ log ρ H 1 ∇v L 2 + ∇ log ρ L r v L 2r r−2 ∇v L 2 + 4 i=1 H i ,(4. |H 1 | ≤ C ∆ log ρ L 2 + ∇ log ρ L r ∇ log ρ L 2r r−2 ∇v L 2 ≤ C ε1 ( ∇ρ s L r + 1) ∇ log ρ 2 L 2 + ε 1 ∇v 2 L 2 + ∆ log ρ 2 L 2 , |H 2 | ≤ C ∇ log ρ L r v L 2r r−2 ∇v L 2 ≤ C ε2 ( ∇ρ s L r + 1) v 2 L 2 + ε 2 ∇v 2 L 2 + ∆ log ρ 2 L 2 , |H 3 | ≤ C ∇ log ρ L r ∇ log ρ L 2r r−2 ∇v L 2 ≤ C ε3 ( ∇ρ s L r + 1) ∇ log ρ 2 L 2 + ε 3 ∇v 2 L 2 + ∆ log ρ 2 L 2 , |H 4 | ≤ C v H 1 ∇ log ρ H 1 + ∇ log ρ L r v L 2r r−2 ∇ log ρ L 2 ≤ C ε4 ∆ log ρ 2 L 2 + ( ∇ρ s L r + 1) v 2 L 2 + ε 4 ∇v 2 L 2 + ∇ log ρ 2 L 2 .
(4.9)
Combining (4.8) and (4.9), we deduce that √ ρv
2 L 2 t + ν ∇v 2 L 2 ≤ C ( ∇ρ s L r + 1) v 2 L 2 + ∇ log ρ 2 L 2 + C ∆ log ρ 2 L 2 . (4.10)
Multiplying (4.10) by 2ε and combining it with (4.7), then choosing ε suitably small, gives
1 2C √ ρv 2 L 2 + ∇ log ρ 2 L 2 t + ν 2 1 2C ∇v 2 L 2 + ∆ log ρ 2 L 2 ≤ C ( ∇ρ s L r + 1) v 2 L 2 + ∇ log ρ 2 L 2 .
which, using Grönwall's inequality, condition (4.1) and Lemma 4.2, implies (4.4).
L 2 + ∆ρ 2 L 2 + ∆ log ρ 2 L 2 + ∇v 2 L 2 , Q(t) := ∇(log ρ) t 2 L 2 + ∇∆ρ 2 L 2 + ∇∆ log ρ 2 L 2 + ∇ 2 v . L 2
Proof. Applying −∇∆ log ρ · ∇ on both sides of (4.5) and integrating over Ω, we have
1 2 |∆ log ρ| 2 t + c 0 ρ −1 |∇∆ log ρ| 2 = ∇∆ log ρ · ∇v · ∇ log ρ + c 0 ρ −1 ∆ log ρ∇ log ρ · ∇∆ log ρ + v · ∇ 2 log ρ · ∇∆ log ρ := 3 i=1 O i ,(4. |O 1 | ≤ ∇ log ρ L r ∇v L 2r r−2 ∇∆ log ρ L 2 ≤ C ε1 ( ∇ρ s L r + 1) ∇v 2 L 2 + ε 1 ∇∆ log ρ 2 L 2 + v 2 H 2 , |O 2 | ≤ C ∇ log ρ L r ∆ log ρ L 2r r−2 ∇∆ log ρ L 2 ≤ C ε2 ( ∇ρ s L r + 1) ∆ log ρ 2 L 2 + ε 2 ∇∆ρ 2 L 2 .
(4.13)
For O 3 , we integrate by parts to get
O 3 = v i ∂ ij log ρ∂ j ∆ log ρ = ∂ (v i ∂ ij log ρn j )∆ log ρ − (∂ j v i ∂ ij log ρ)∆ log ρ = ∂ (v i ∂ ij log ρn j )∆ log ρ − ∂ (∂ j v i ∂ j log ρn i )∆ log ρ + ∂ j v i ∂ j log ρ∂ i ∆ log ρ := B 1 + B 2 + B 3 .
(4.14)
Since the simplest part B_3 can be handled similarly to O_1, we only need to estimate B_1 and B_2. First, using the boundary condition v · n = n · ∇ log ρ = 0 and Lemma 2.3, we have
B 1 = ∂ v · ∇ 2 log ρ · n∆ log ρ = − ∂ v · ∇n · ∇ log ρ∆ log ρ = ∂ (n × v ⊥ ) · ∇n · ∇ log ρ∆ log ρ = (curl v ⊥ · ∇n · ∇ log ρ)∆ log ρ − v ⊥ · [∇∆ log ρ × (∇n · ∇ log ρ)] − v ⊥ · curl(∇n · ∇ log ρ)∆ log ρ ≤ C ∇ log ρ L r ∇v L 2r r−2 ∆ log ρ L 2 + C ∇ log ρ L r v L 2r r−2 ∇∆ log ρ L 2 + C v L 3 ∆ log ρ L 6 ∆ log ρ L 2 ≤ C ε3 ( ∇ρ s L r + 1) ∇v 2 L 2 + C ε3 v 4 L 3 ∆ log ρ 2 L 2 + ε 3 ∇∆ log ρ 2 L 2 .
(4.15) Hence,
|B 1 | ≤ C ε3 ( ∇ρ s L r + 1) ∇v 2 L 2 + C ε3 v 4 L 3 ∆ log ρ 2 L 2 + ε 3 ∇∆ log ρ 2 L 2 .
(4.16)
Similarly, for B 2 , one has
B 2 = − ∂ ∇ log ρ · ∇v · n∆ log ρ = ∂ ∇ log ρ · ∇n · v∆ log ρ = ∂ ∇ log ρ · ∇n · (v ⊥ × n)∆ log ρ = − (∇ log ρ · ∇n · curl v ⊥ )∆ log ρ + ∇∆ log ρ × (∇ log ρ · ∇n) · v ⊥ + ∆ log ρ curl(∇ log ρ · ∇n) · v ⊥ ≤ C ε4 ( ∇ρ s L r + 1) ∇v 2 L 2 + C ε4 v 4 L 3 ∆ log ρ 2 L 2 + ε 4 ∇∆ log ρ 2 L 2 ,(4.17)
that is,
|B 2 | ≤ C ε4 ( ∇ρ s L r + 1) ∇v 2 L 2 + C ε4 v 4 L 3 ∆ log ρ 2 L 2 + ε 4 ∇∆ log ρ 2 L 2 . (4.18)
Combining (4.13)-(4.14), (4.16) and (4.18), one can deduce that
∆ log ρ 2 L 2 t + ν ∇∆ log ρ 2 L 2 ≤ C ∇ρ s L r + v 4 L 3 + 1 ∆ log ρ 2 L 2 + C ε ( ∇ρ s L r + 1) ∇v 2 L 2 + ε v 2 H 2 . (4.19)
On the other hand, we slightly change (3.15) (more precisely, I 3 ) into the form
1 2 |∆ρ| 2 t + c 0 ρ −1 |∇∆ρ| 2 = ∇∆ρ · ∇v · ∇ρ + v · ∇ 2 ρ · ∇∆ρ + c 0 ∇|∇ log ρ| 2 · ∇∆ρ + c 0 ρ −2 ∆ρ∇ρ · ∇∆ρ := 4 i=1 I i .
(4.20)
Then, exactly following the proof of (4.13)-(4.18), we can obtain an estimate similar to (4.19), that is,
∆ρ 2 L 2 t + ν ∇∆ρ 2 L 2 ≤ C ε ∇ρ s L r + v 4 L 3 + 1 ∆ρ 2 L 2 + ∆ log ρ 2 L 2 + C ε ( ∇ρ s L r + 1) ∇v 2 L 2 + ε v 2 H 2 + ∇∆ log ρ 2 L 2 ,(4.21)
together with (4.19) yields
∆ρ 2 L 2 + ∆ log ρ 2 L 2 t + ν ∇∆ρ 2 L 2 + ∇∆ log ρ 2 L 2 ≤ C ∇ρ s L r + v 4 L 3 + 1 ∆ρ 2 L 2 + ∆ log ρ 2 L 2 + ∇v 2 L 2 + ε v 2 H 2 ,(4.22)
For the estimate of (log ρ)_t, applying ∂_t to both sides of (4.5), multiplying by (log ρ)_t and integrating over Ω, one has
1 2 |(log ρ) t | 2 t + c 0 ρ −1 |∇(log ρ) t | 2 = − c 0 ρ −1 ∇(log ρ) t · ∇ log ρ(log ρ) t − v t · ∇ log ρ(log ρ) t + c 0 ρ −1 |(log ρ) t | 2 |∇ log ρ| 2 := 3 i=1 P i ,(4.23)
where, using Lemma 2.3,
|P 1 | ≤ C ∇ log ρ L r (log ρ) t L 2r r−2 ∇(log ρ) t L 2 ≤ C ε1 ( ∇ρ s L r + 1) (log ρ) t 2 L 2 + ε 1 ∇(log ρ) t 2 L 2 |P 2 | ≤ ∇ log ρ L r (log ρ) t L 2r r−2 v t L 2 ≤ C ε2 ( ∇ρ s L r + 1) (log ρ) t 2 L 2 + ε 2 v t 2 L 2 |P 3 | ≤ ∇ log ρ 2 L r (log ρ) t 2 L 2r r−2 ≤ C ε3 ( ∇ρ s L r + 1) (log ρ) t 2 L 2 + ε 3 ∇(log ρ) t 2 L 2 .
(4.24)
Combining (4.23) and (4.24) leads to
(log ρ) t 2 L 2 t + ν ∇(log ρ) t 2 L 2 ≤ C ε ( ∇ρ s L r + 1) (log ρ) t 2 L 2 + ε v t 2 L 2 (4.25)
We still need to treat the higher-order bounds for v. The proof is basically the same as in (3.19)-(3.22), and the main differences one should notice are the terms K_3 and J_2-J_4. For K_3,
K 3 = − 1 2 µ(ρ) t | curl v| 2 = − 1 2 ∂ (n × v) · curl vµ(ρ) t − 1 2 ∇µ(ρ) t × curl v · v − 1 2 µ(ρ) t ∆v · v = − 1 2 ∂ ρµ ′ (ρ)(log ρ) t v · B · v − 1 2 ρµ ′ (ρ)∇(log ρ) t × curl v · v − 1 2 ρµ ′ (ρ) + ρ 2 µ ′′ (ρ) (log ρ) t ∇ log ρ × curl v · v − 1 2 µ(ρ) t ∆v · v ≤ |K 2 | + C ∇(log ρ) t L 2 + ∇ρ L r (log ρ) t L 2r r−2 v L r ∇v L 2r r−2 + C v L r (log ρ) t L 2r r−2 ∆v L 2 ≤ |K 2 | + C ε ( v s L r + ∇ρ s L r + 1) ∇v 2 L 2 + (log ρ) t 2 L 2 + ε ∇(log ρ) t 2 L 2 + v 2 H 2 ,(4.26)
while, for J 2 -J 4 , using the relation
∇ 2 ρ −1 = 1 ρ 2 ∇ 2 ρ − 2 ρ ∇ 2 log ρ
and Lemma 2.3, we have
J 2 = c 0 div 2µ(ρ)∇ 2 ρ −1 · v t ≤ C ∇ρ L r ∇ 2 ρ −1 L 2r r−2 v t L 2 + C ∇∆ρ −1 L 2 v t L 2 ≤ C ε1 ( ∇ρ s L r + 1) ∆ρ 2 L 2 + ∆ log ρ 2 L 2 + C ε1 ∇∆ρ 2 L 2 + ∇∆ log ρ 2 L 2 + ε 1 v t 2 L 2 (4.27) J 3 = − c 0 div ρv ⊗ ∇ρ −1 · v t = c 0 div (v ⊗ ∇ log ρ) · v t ≤ C ∇ρ L r ∇v L 2r r−2 v t L 2 + C v L r ∇ 2 log ρ L 2r r−2 v t L 2 ≤ C ε2 ( ∇ρ s L r + v s L r + 1) ∆ log ρ 2 L 2 + ∇v 2 L 2 + ε 2 ∇∆ log ρ 2 L 2 + v 2 H 2 + v t 2 L 2 (4.28) J 4 = − c 2 0 div ρ∇ρ −1 ⊗ ∇ρ −1 · v t = c 2 0 div ∇ log ρ ⊗ ∇ρ −1 · v t ≤ C ∇ρ L r ∇ 2 log ρ L 2r r−2 + ∇ 2 ρ −1 L 2r r−2 v t L 2 ≤ C ε3 ( ∇ρ s L r + 1) ∆ log ρ 2 L 2 + ∆ρ 2 L 2 + ε 3 ∇∆ log ρ 2 L 2 + ∇∆ρ 2 L 2 + v t 2 L 2 (4.29)
Therefore, replacing the corresponding norms of (v, ∇ρ) in (3.19)-(3.22) by L^r-norms and combining with (4.26)-(4.29), we have
∂ µ(ρ)v · B · v + µ(ρ)| curl v| 2 t + ν v t 2 L 2 + M ′ (t) ≤ C ε ∇ρ 4 L 3 + ∇ρ s L r + v 4 L 3 + v s L r + 1 ∇v 2 L 2 + ∆ log ρ 2 L 2 + ∆ρ 2 L 2 + C ε ∇∆ log ρ 2 L 2 + ∇∆ρ 2 L 2 + ε v 2 H 2 + ∇(log ρ) t 2 L 2 .
(4.30)
For the sake of simplicity, as explained in (3.26) and (3.40), we can rewrite (4.30) as
∇v 2 L 2 t + ν v t 2 L 2 ≤ C ε [I(t) + 1] ∇v 2 L 2 + ∆ log ρ 2 L 2 + ∆ρ 2 L 2 + C ε ∇∆ log ρ 2 L 2 + ∇∆ρ 2 L 2 + ε v 2 H 2 + ∇(log ρ) t 2 L 2 ,(4.31)
where I(t) is an integrable function over (0, T * ).
For the H^2-norm of v, proceeding analogously to (3.27)-(3.31) and applying Lemma 2.3, one has
F 2 L 2 ≤ C ε ( ∇ρ s L r + v s L r + 1) ∆ log ρ 2 L 2 + ∆ρ 2 L 2 + ∇v 2 L 2 + C v t 2 L 2 + ∇(log ρ) t 2 L 2 + ε v 2 H 2 + ∇∆ log ρ 2 L 2 + ∇∆ρ 2 L 2 Φ 2 H 1 ≤ C ∇v 2 L 2 + ∆ρ 2 L 2 + ∆ log ρ 2 L 2 .v 2 H 2 + p 2 H 1 ≤ C ε ( ∇ρ s L r + v s L r + 1) ∆ log ρ 2 L 2 + ∆ρ 2 L 2 + ∇v 2 L 2 + C v t 2 L 2 + ∇(log ρ) t 2 L 2 + ε ∇∆ log ρ 2 L 2 + ∇∆ρ 2 L 2 ,(4.33)
which, along with (4.31), gives
(‖∇v‖_{L^2}^2)_t + (ε/2C)‖v‖_{H^2}^2 + (ν/2)‖v_t‖_{L^2}^2 ≤ C_ε [I(t) + 1] (‖∇v‖_{L^2}^2 + ‖∆ log ρ‖_{L^2}^2 + ‖∆ρ‖_{L^2}^2) + C_ε (‖∇∆ log ρ‖_{L^2}^2 + ‖∇∆ρ‖_{L^2}^2) + ε ‖∇(log ρ)_t‖_{L^2}^2.
Remark 4.5. From the proof above, one should notice that it is the convection term ρu · ∇u that restricts us to using Serrin's condition on v. In fact, we can directly use the bound u ∈ L^s(0, T; L^r) in (4.6) to get the lower bounds for log ρ (see also Lemma 4.6), but, in order to highlight this point, we insist on only using ∇ρ ∈ L^s(0, T; L^r).
Now, we turn back to prove Proposition 4.1 for (ρ, u) satisfying (A).
∇ρ t L 2 ≤ C ( ρ t ∇ρ L 2 + ∇(log ρ) t L 2 ) ≤ C ∆ρ 2 L 2 ρ t L 2 + ∇(log ρ) t L 2 + 1 2 ∇ρ t L 2 , that is, T 0 ∇ρ t 2 L 2 dt ≤ C sup t∈[0,T ] ∆ρ 2 L 2 sup t∈[0,T ] ρ t 2 L 2 T 0 ∆ρ 2 L 2 dt + T 0 ∇(log ρ) t 2 L 2 dt ≤C.
Case for (ρ, u) satisfying (B)
We basically follow the proof in subsection 4.1. Due to the nonlinear term |∇ρ|^2, one still has to estimate ρ together with log ρ. For later use, we collect some bounds from (1.22):
∇Q L 2 ≤ C ∆ρ L 2 + ∇ρ L r ∇ρ L 2r r−2 ≤ C ∆ρ L 2 + ( ∇ρ s L r + 1) ∇ρ 2 L 2 , (4.35) Q t L 2 ≤ C ∇(log ρ) t L 2 + ∇ρ L r ρ t L 2r r−2 ≤ C ∇(log ρ) t L 2 + ( ∇ρ s L r + 1) (log ρ) t 2 L 2 .
(4.36)
First, we give a lemma for the lower-order bounds of ρ. Proof. The estimates for ρ and log ρ come from (3.6) and (4.6), respectively; we only give the proof for log ρ here, since the other one can be proved similarly. From (4.6), we have
1 2 |∇ log ρ| 2 t + c 0 ρ −1 |∆ log ρ| 2 = (v · ∇ log ρ)∆ log ρ ≤ v L r ∇ log ρ L 2r r−2 ∆ log ρ L 2 ≤ C ε ( v s L r + 1) ∇ log ρ 2 L 2 + ε ∆ log ρ 2 L 2 ≤ C ε ( u s L r + 1) ∇ log ρ 2 L 2 + ε ∆ log ρ 2 L 2 ,(4.38)
then, applying (4.1) and Grönwall's inequality to (4.38), we conclude the proof. ≤ C ( ∇ρ s L r + v s L r + 1) ∆ρ 2 L 2 + ∆ log ρ 2 L 2 + ∇v 2 L 2 + ε v 2
H 2 ≤ C ( u s L r + 1)F (t) + ε v 2 H 2 ,(4.40)
On the other hand, we still have (4.25), that is,
(log ρ) t 2 L 2 t + ν ∇(log ρ) t 2 L 2 ≤ C ε ( ∇ρ s L r + 1) (log ρ) t 2 L 2 + ε v t 2 L 2 ≤ C ε ( u s L r + 1)F (t) + ε u t 2 L 2 .
(4.41)
Here, we have used the fact that
∇v 2 L 2 ≤ C ∇u 2 L 2 + ∇ 2 ρ −1 2 L 2 ≤ C ∇u 2 L 2 + ∆ρ 2 L 2 + ∆ log ρ 2 L 2 , v t 2 L 2 ≤ C u t 2 L 2 + ∇ρ −1 t 2 L 2 ≤ C u t 2 L 2 + ∇(log ρ) t 2 L 2 + ∇ρ 2 L r (log ρ) t 2 L 2r r−2 .
For u, similarly to the proof in subsection 4.1, applying Serrin's condition (4.1) to (3.50)-(3.55) and using (4.35)-(4.36), we can derive that √ρu
2 L 2 t + ν ∇u 2 L 2 ≤ C ε ( u s L r + 1) √ ρu 2 L 2 + ∇ρ 2 L 2 + C ε ∇ρ 2 L 2 + ∆ρ 2 L 2 + ε u t 2 L 2 ≤ C ε ( u s L r + 1)F(t) + C ε ∇ρ 2 L 2 + ∆ρ 2 L 2 + ε u t 2 L 2 ,(4.42)
and µ(ρ)|D(u)| 2
L 2 t + ν u t 2 L 2 ≤ C ε ( u s L r + 1)F (t) + C ε ∇(log ρ) t 2 L 2 + ε ∆u 2 L 2 ,(4.43)
where the only term we need to be concerned with is N_3 = µ(ρ)_t |D(u)|^2 in (3.54).
However, this term can be computed by integrating by parts,
N 3 = µ(ρ) t |D(u)| 2 = − ∇µ(ρ) t · D(u) · u − 1 2 µ(ρ) t ∆u · u = − ρµ ′ (ρ)∇(log ρ) t · D(u) · u − (log ρ) t (ρµ ′ (ρ)) ′ ∇ρ · D(u) · u − 1 2 ρµ ′ (ρ)(log ρ) t ∆u · u ≤ C u L r ∇u L 2r r−2 ∇(log ρ) t L 2 + C ∇ρ L r (log ρ) t L 2r r−2 u L r ∇u L 2r r−2 + C u L r (log ρ) t L 2r r−2 ∆u L 2 ≤ C ε ( u s L r + 1) ∇u 2 L 2 + (log ρ) t 2 L 2 + ε ∇(log ρ) t 2 L 2 + ∆u 2 L 2 ≤ C ε ( u s L r + 1)F(t) + ε ∇(log ρ) t 2 L 2 + v 2 H 2 ,
where we have used
∆u 2 L 2 ≤ C v 2 H 2 + ∇∆ρ −1 2 L 2 ≤ C v 2 H 2 + ∇∆ρ 2 L 2 + ∇∆ log ρ 2 L 2 + C ∇ρ 2 L r ∆ρ 2 L 2r r−2 + ∆ log ρ 2 L 2r r−2
To estimate ∆u, we apply Lemmas 2.3, 2.9 and 2.10 to (3.27) and then use (4.32)-(4.33) with Φ = −c_0∇ρ^{-1} and
Φ H 2 ≤ C ∇∆ρ −1 L 2 ≤ C ( ∇∆ρ L 2 + ∇∆ log ρ L 2 ) + C ∇ρ L r ∆ρ L 2r r−2 + ∆ log ρ L 2r r−2 to deduce that v 2 H 2 + π 2 H 1 ≤ C ( u s L r + 1) ∆ log ρ 2 L 2 + ∆ρ 2 L 2 + ∇v 2 L 2 + C v t 2 L 2 + ∇(log ρ) t 2 L 2 + ∇∆ log ρ 2 L 2 + ∇∆ρ 2 L 2 ≤ C ( u s L r + 1)F (t) + C u t 2 L 2 + ∇(log ρ) t 2 L 2 + ∇∆ log ρ 2 L 2 + ∇∆ρ 2 L 2 .
(4.44) Now, collecting the bounds (4.40)-(4.44) and following the proof from (3.57) to (3.61), one has F ′ (t) + νG(t) ≤ C( u s L r + 1)F (t) + C ∇ρ 2 L 2 + ∆ρ 2 L 2 .
(4.45)
Applying Grönwall's inequality and Lemma 4.6 to (4.45) and then turning back to (4.44), we obtain (4.39).
The proof of Proposition 4.1 is the same as that at the end of subsection 4.1; we omit it and leave it to the reader.
Proof of Theorem 1.3
Since we have Proposition 4.1 and the constant C̄ is independent of T ∈ (0, T^*), we can let t → T^* and consider (ρ, u)(x, T^*) as the initial data. Then, following the proof in subsection 3.2, we deduce a contradiction with the maximality of T^*. Therefore, we complete the proof of Theorem 1.3.
Definition 1.1. (ρ, u, π) is called a strong solution of (1.1) on Ω × (0, T) if (1.1) holds almost everywhere in Ω × (0, T).
(1.11), the system (1.1)-(1.4), (A) or (B) admits a unique global strong solution (ρ, u, π).
Lemma 2.1. Assume that (ρ_0, u_0) satisfies the same conditions as in Theorem 1.2 and Ω ⊂ R^3 is a simply connected bounded domain with smooth boundary. Let π satisfy the condition (1.11). Then there exists a positive time T_1, depending on Ω, c_0, α, β and ‖u_0‖_{H^1}, such that the problem (1.1)-(1.4), (A) admits a unique strong solution (ρ, u, π) on Ω × (0, T_1).
Lemma 2.6 ([15], Theorem III.3.3). Suppose that Φ · n = 0 on ∂Ω and f_Ω = 0. Then,
Proposition 3.1. There exists a positive constant δ depending on Ω, c_0, α and β such that, if ‖∇u_0‖_{L^2} ≤ δ and sup_{t∈[0,T]}
∫ h(s) ds dt ≤ ε sup_{t∈[0,T]} ‖∇v‖_{L^2}^2 + C_ε sup_{t∈[0,T]} ‖∇ρ‖_{L^2}^2 ≤ ε sup_{t∈[0,T]}
Proposition 3.4. There exists a positive constant δ̄ depending on Ω, c_0, α and β such that, if ‖∇u‖_{L^2} ≤ δ̄ and sup_{t∈[0,T]}
(3.47)
Furthermore, if ‖∇u_0‖_{L^2} ≤ 1 and the condition (3.44) holds, one has sup_{t∈[0,T]}
For ‖∆u‖_{L^2}, we follow the proof of (3.27)-(3.31) and use Lemma 2.8 (2) with Φ = −c_0∇ρ^{-1} and condition (3.44) to deduce
and (3.64) are enough to let ε → 0^+ and recover the original system (1.1). The uniqueness can be obtained by a similar method to [38].
lim_{T→T^*} (‖v‖_{L^s(0,T;L^r)} + ‖∇ρ‖_{L^s(0,T;L^r)}) ≤ M_0, and we want to show that the following estimate holds. Proposition 4.1. Under the above condition, one has, for all T ∈ [0, T^*), sup_{t∈[0,T]}
Lemma 4.3. Suppose that (4.1) holds and (ρ, u) satisfies (A); then one has sup_{t∈[0,T]} (‖∇ log ρ‖_{L^2}^2 + ‖v‖_{L^2}^2) + ∫_0^T (‖∆ log ρ‖_{L^2}^2 + ‖∇v‖_{L^2}^2) dt ≤ C̄. (4.4)
Lemma 4.4. Suppose that (4.1) holds and (ρ, u) satisfies (A); then sup_{t∈[0,T]} P(t) + ∫_0^T (Q(t) + ‖π‖_{H^1}^2) dt ≤ C̄, where
P(t) := ‖(log ρ)_t‖_{L^2}^2 + ‖∆ρ‖_{L^2}^2 + ‖∆ log ρ‖_{L^2}^2 + ‖∇v‖_{L^2}^2, Q(t) := ‖∇(log ρ)_t‖_{L^2}^2 + ‖∇∆ρ‖_{L^2}^2 + ‖∇∆ log ρ‖_{L^2}^2 + ‖∇^2 v‖_{L^2}^2.
Φ := −B · [v + c_0∇ρ^{-1}]. Thus, from Lemma 2.10, (2.13), we have
combining (4.22), (4.25), (4.33) and (4.34) by the approach of (3.32)-(3.38), then applying Grönwall's inequality, we deduce the estimate (4.11).
Lemma 4.6. Suppose that (ρ, u) satisfies the condition (B). Then, for all T ∈ (0, T^*),
Lemma 4.7. Suppose that (ρ, u) satisfies the condition (B). Then, on the one hand, we follow the proof from (4.12) to (4.22) and replace all v ‖∇∆ log ρ‖_{L^2}^2 +
Now, we turn back to prove Proposition 3.1. Proof of Proposition 3.1. Since, from Lemma 3.3 and the Sobolev embedding theorem, sup_{t∈[0,T]}
Proof of Proposition 4.1. Combining Lemmas 4.3-4.4, we obtain Proposition 4.1. The only point one should notice is that
[1] H. Abidi and P. Zhang. Global well-posedness of 3-D density-dependent Navier-Stokes system with variable viscosity. Science China Mathematics, 58:1129-1150, 2015.
[2] S. N. Antontsev, A. Kazhiktov, and V. N. Monakhov. Boundary value problems in mechanics of nonhomogeneous fluids. Elsevier, 1989.
[3] J. Aramaki. L^p theory for the div-curl system. Int. J. Math. Anal, 8(6):259-271, 2014.
[4] H. Beirao Da Veiga. Diffusion on viscous fluids. Existence and asymptotic properties of solutions. Annali della Scuola Normale Superiore di Pisa-Classe di Scienze, 10(2):341-355, 1983.
[5] D. Bresch, E. H. Essoufi, and M. Sy. Effect of density dependent viscosities on multiphasic incompressible fluid models. Journal of Mathematical Fluid Mechanics, 9(3):377-397, 2007.
[6] D. Bresch, V. Giovangigli, and E. Zatorska. Two-velocity hydrodynamics in fluid mechanics: Part I well posedness for zero mach number systems. Journal de mathematiques pures et appliquees, 104(4):762-800, 2015.
[7] G. Cai and J. Li. Existence and exponential growth of global classical solutions to the compressible Navier-Stokes equations with slip boundary conditions in 3D bounded domains. arXiv preprint arXiv:2102.06348, 2021.
[8] G. Cai, B. Lü, and Y. Peng. Global strong solutions to density-dependent viscosity Navier-Stokes equations in 3D exterior domains. arXiv preprint arXiv:2205.05925, 2022.
[9] X. Cai, L. Liao, and Y. Sun. Global regularity for the initial value problem of a 2-D Kazhikhov-Smagulov type model. Nonlinear Analysis: Theory, Methods & Applications, 75(15):5975-5983, 2012.
[10] X. Cai, L. Liao, and Y. Sun. Global strong solution to the initial-boundary value problem of a 2-D Kazhikhov-Smagulov type model. Discrete & Continuous Dynamical Systems-S, 7(5):917, 2014.
[11] Y. Cho and H. Kim. Unique solvability for the density-dependent Navier-Stokes equations. Nonlinear Analysis: Theory, Methods & Applications, 59(4):465-489, 2004.
[12] H. B. da Veiga, R. Serapioni, and A. Valli. On the motion of non-homogeneous fluids in the presence of diffusion. Journal of Mathematical Analysis and Applications, 85(1):179-191, 1982.
[13] R. Danchin and X. Liao. On the well-posedness of the full low mach number limit system in general critical Besov spaces. Communications in Contemporary Mathematics, 14(03):1250022, 2012.
[14] P. Embid. Well-posedness of the nonlinear equations for zero mach number combustion. Communication in Partial Differential Equation, 12(11):1227-1283, 1987.
[15] G. Galdi. An introduction to the mathematical theory of the Navier-Stokes equations: Steady-state problems. Springer Science & Business Media, 2011.
[16] C. He, J. Li, and B. Lü. Global well-posedness and exponential stability of 3D Navier-Stokes equations with density-dependent viscosity and vacuum in unbounded domains. Archive for Rational Mechanics and Analysis, 239(3):1809-1835, 2021.
[17] C. He and Z. Xin. On the regularity of weak solutions to the magnetohydrodynamic equations. Journal of Differential Equations, 213(2):235-254, 2005.
[18] X. Huang, J. Li, and Y. Wang. Serrin-type blowup criterion for full compressible Navier-Stokes system. Archive for Rational Mechanics and Analysis, 207(1):303-316, 2013.
[19] X. Huang, J. Li, and Z. Xin. Serrin-type criterion for the three-dimensional viscous compressible flows. SIAM Journal on Mathematical Analysis, 43(4):1872-1886, 2011.
[20] X. Huang and Y. Wang. Global strong solution of 3D inhomogeneous Navier-Stokes equations with density-dependent viscosity. Journal of Differential Equations, 259(4):1606-1627, 2015.
[21] H. Jun Choe and H. Kim. Strong solutions of the Navier-Stokes equations for nonhomogeneous incompressible fluids. 2003.
[22] H. Kim. A blow-up criterion for the nonhomogeneous incompressible Navier-Stokes equations. SIAM Journal on Mathematical Analysis, 37(5):1417-1434, 2006.
[23] O. A. Ladyzhenskaia, V. A. Solonnikov, and N. N. Ural'tseva. Linear and quasi-linear equations of parabolic type, volume 23. American Mathematical Soc., 1988.
[24] G. Leoni. A first course in Sobolev spaces. American Mathematical Soc., 2017.
[25] P.-L. Lions. Mathematical Topics in Fluid Mechanics: Volume 1: Incompressible Models, volume 1. Oxford University Press on Demand, 1996.
[26] P.-L. Lions. Mathematical Topics in Fluid Mechanics: Volume 2: Compressible Models, volume 2. Oxford University Press on Demand, 1998.
[27] A. Majda. Compressible fluid flow and systems of conservation laws in several space variables, volume 53. Springer Science & Business Media, 2012.
[28] L. Nirenberg. On elliptic partial differential equations. In Il principio di minimo e sue applicazioni alle equazioni funzionali, pages 1-48. Springer, 2011.
[29] A. Novotny and I. Straskraba. Introduction to the mathematical theory of compressible flow, volume 27. OUP Oxford, 2004.
[30] P. Secchi. On the motion of viscous fluids in the presence of diffusion. SIAM Journal on Mathematical Analysis, 19(1):22-31, 1988.
[31] J. Serrin. On the interior regularity of weak solutions of the Navier-Stokes equations. Archive for Rational Mechanics and Analysis, 9:187-195, 1962.
[32] J. Simon. Compact sets in the space L^p(0, T; B). Annali di Matematica pura ed applicata, 146(1):65-96, 1986.
[33] Y. Sun and Z. Zhang. Global regularity for the initial-boundary value problem of the 2-D Boussinesq system with variable viscosity and thermal diffusivity. Journal of Differential Equations, 255(6):1069-1085, 2013.
[34] W. Tan. Two-velocity hydrodynamics in fluid mechanics: global existence for 2D case. Nonlinearity, 34(2):964, 2021.
[35] W. Von Wahl. Estimating ∇u by div u and curl u. Mathematical methods in the applied sciences, 15(2):123-143, 1992.
[36] X. Xu and J. Zhang. A blow-up criterion for 3D compressible magnetohydrodynamic equations with vacuum. Mathematical Models and Methods in Applied Sciences, 22(02):1150010, 2012.
[37] J. Zhang. Global well-posedness for the incompressible Navier-Stokes equations with density-dependent viscosity coefficient. Journal of Differential Equations, 259(5):1722-1742, 2015.
[38] J. Zhang. Well-posedness for 2D combustion model in bounded domains and Serrin-type blowup criterion. arXiv preprint arXiv:2301.02976, 2023.
[39] X. Zhong. Global strong solution for 3D viscous incompressible heat conducting Navier-Stokes flows with non-negative density. Journal of Differential Equations, 263(8):4978-4996, 2017.
| [] |
[
"Language-Grounded Indoor 3D Semantic Segmentation in the Wild",
"Language-Grounded Indoor 3D Semantic Segmentation in the Wild"
] | [
"David Rozenberszki \nTechnical University of Munich\n\n",
"Angela Dai \nTechnical University of Munich\n\n"
] | [
"Technical University of Munich\n",
"Technical University of Munich\n"
] | [] | 2 NVIDIA https://rozdavid.github.io/scannet200 D. Rozenberszki et al.challenging for existing 3D semantic segmentation methods. To learn more robust 3D features in this context, we propose a language-driven pre-training method to encourage learned 3D features that might have limited training examples to lie close to their pre-trained text embeddings. Extensive experiments show that our approach consistently outperforms state-of-the-art 3D pre-training for 3D semantic segmentation on our proposed benchmark (+9% relative mIoU), including limited-data scenarios with +25% relative mIoU using only 5% annotations. | 10.48550/arxiv.2204.07761 | [
"https://export.arxiv.org/pdf/2204.07761v2.pdf"
] | 248,227,627 | 2204.07761 | b3dd7c78ae3960474541f01d9a9ce7b5de65c110 |
Language-Grounded Indoor 3D Semantic Segmentation in the Wild
David Rozenberszki
Technical University of Munich
Angela Dai
Technical University of Munich
Language-Grounded Indoor 3D Semantic Segmentation in the Wild
3D semantic scene understanding · 3D semantic segmentation · 3D representation learning · language + 3D vision
2 NVIDIA    https://rozdavid.github.io/scannet200
Fig. 1: We present the ScanNet200 benchmark, which studies 200-class 3D semantic segmentation - an order of magnitude more categories than previous 3D scene understanding benchmarks. To address this challenging task, we propose to guide 3D feature learning by anchoring it to the richly-structured text embedding space of CLIP for the semantic class labels. This results in improved 3D semantic segmentation across the large set of class categories.
Abstract. Recent advances in 3D semantic segmentation with deep neural networks have shown remarkable success, with rapid performance increase on available datasets. However, current 3D semantic segmentation benchmarks contain only a small number of categories - less than 30 for ScanNet and SemanticKITTI, for instance, which are not enough to reflect the diversity of real environments (e.g., semantic image understanding covers hundreds to thousands of classes). Thus, we propose to study a larger vocabulary for 3D semantic segmentation with a new extended benchmark on ScanNet data with 200 class categories, an order of magnitude more than previously studied. This large number of class categories also induces a large natural class imbalance, both of which are challenging for existing 3D semantic segmentation methods. To learn more robust 3D features in this context, we propose a language-driven pre-training method to encourage learned 3D features that might have limited training examples to lie close to their pre-trained text embeddings. Extensive experiments show that our approach consistently outperforms state-of-the-art 3D pre-training for 3D semantic segmentation on our proposed benchmark (+9% relative mIoU), including limited-data scenarios with +25% relative mIoU using only 5% annotations.
1 Introduction
In recent years, remarkable advances have been made in 3D semantic segmentation as a core task underlying 3D perception for myriad applications, including robotics, autonomous navigation, and mixed reality. The introduction of several large-scale real-world 3D datasets [10,4,1] has led to rapid developments in data-driven 3D deep learning techniques, with an emphasis on point- and sparse-voxel-based approaches [15,48,9,52,18]. However, popular benchmarks such as ScanNet [10] or SemanticKITTI [1] focus on a limited number of class categories (20 and 28 classes, respectively), and thus these label sets do not well-represent the diversity and complexity of real scene content that would be encountered in the wild. In contrast, common image segmentation benchmarks [13,30] contain over 80 annotated class labels, with recent large-vocabulary image challenges [17] presenting over 1000 categories for recognition tasks.
Thus, we propose to address a larger-vocabulary setting for 3D semantic segmentation. In particular, we focus on the indoor domain and consider 3D scans of ScanNet [10] where a variety of different object categories are seen in the RGB-D scans despite its benchmark evaluating on only 20 classes. We present ScanNet200, a 200-class 3D semantic segmentation benchmark, considering an order of magnitude more class annotations than previously considered. This new set of classes includes both finer-grained categories of previous classes as well as a large number of previously unaddressed classes. This induces a much more challenging setting reflecting the naturally observed semantic classes already seen in the raw ScanNet RGB-D observations, where the data also reflects naturally encountered class imbalances (e.g., walls and floors are seen much more often than nightstands, which are also seen far more often than fire extinguishers). In addition considering the setting where all dense annotations are available for train scenes for the 200 classes, we also consider limited annotation scenarios with only sparse annotations per scene, given the expense of 3D data annotation.
In order to address this challenging new benchmark for 3D semantic segmentation, we explore standard techniques for data and loss balancing for the much larger number of class categories. In combination with the most effective techniques, we further observe that, unlike the limited, imbalanced geometric content, state-of-the-art language models have observed and attained rich representations of all categories, and so can induce a better structure onto learned 3D embeddings. Thus, we propose to ground 3D feature learning with strong pre-trained CLIP text features to construct a richly-structured 3D feature representation space. To this end, we formulate a language-grounded pre-training by mapping learned 3D features to pre-trained language embeddings with a contrastive loss. This enables a more robust 3D representation learning under imbalanced and limited 3D observations. Experiments on our ScanNet200 semantic segmentation as well as semantic segmentation in the limited data regime demonstrate the effectiveness of our language-grounded 3D semantic segmentation. In summary, our contributions are:
-We propose a new 200-class 3D semantic segmentation benchmark on real-world 3D ScanNet scene data, considering an order of magnitude more category annotation labels than existing 3D semantic segmentation benchmarks.
-In order to guide the construction of robust 3D semantic feature representations for this challenging task, we propose to align geometric feature extraction to the category embedding of the CLIP pre-trained language model. This results in improved performance both overall and on rarely-seen categories, including in the limited-data regime.
Related Work
3D Semantic Segmentation. With the introduction of large-scale annotated real-world 3D datasets [10,4,1], 3D semantic segmentation has seen significant focus in recent years with various deep learning-based methods developed around different 3D representations. Early works tackled 3D semantic segmentation on dense volumetric grids [10,11], but were limited by cubic growth in memory and compute. [15,9] enabled significant performance improvements by leveraging a structured space representation in a sparse fashion to operate efficiently at high resolutions. In this work, we also adopt a sparse 3D convolutional backbone to explore language-guided pre-training for larger-vocabulary semantic segmentation.
3D Representation Learning. Recent work introduced an unsupervised contrastive pre-training in the context of data-efficient 3D scene understanding with limited reconstruction and limited annotations available. In contrast to these 3D pre-training methods, we propose a supervised multi-modal 3D representation learning guided by text encoded features to learn a more robust feature representation space covering significantly more class categories than previously studied for 3D. Inspired by the data-efficient scene understanding of [20], we additionally study a limited annotations scenario for our ScanNet200 benchmark. Additionally, Mix3D [37] presented a data augmentation scheme to mix multiple 3D scenes together to generate semantic segmentation that is more robust against undesired context biases. Our instance-based sampling when fine-tuning the learned language-guided 3D features is inspired by the scene mixing, but operates at an instance level to help mitigate class imbalances. Previous methods have also leveraged text embeddings in 3D learning for zero-shot pointcloud segmentation [34,8] and classification [56]. More recently, CLIP [43] was shown as a powerful conditioner for generative 3D models [47,33]. We also aim to leverage powerful CLIP text embeddings for robust 3D semantic pre-training.
3D Scene Understanding Benchmarks. Recently, various large-scale real-world 3D scene understanding benchmarks have been established. Early benchmarks such as the NYUv2 dataset [36] introduced RGB-D frame-based annotations on a limited number of frames (e.g., 1449 for NYUv2). ScanNet [10] presented a much larger-scale RGB-D dataset and benchmark with 1513 densely-annotated reconstructed 3D scans. While it contains hundreds of raw annotated label data, the ScanNet benchmark evaluates only 20 class categories for its 3D scene understanding tasks. Similarly, Matterport3D [4] presents a large-scale RGB-D dataset with a 20-class semantic segmentation evaluation. Additionally, SemanticKITTI [1] established an outdoor 3D dataset and benchmark for LiDAR-based scene understanding with 28 class category annotations. We present our ScanNet200 benchmark based on ScanNet scene data with an order of magnitude more classes than previous benchmarks.
Class Imbalance. Real-world dataset annotations tend to contain natural class imbalances which can lead to skewed learning of more frequently observed class categories. Despite the lack of study on mitigating class imbalances in 3D, various methods have been presented to address them in 2D image understanding tasks.
In particular, class imbalance in image classification problems is often addressed by oversampling underrepresented categories with strong data augmentation techniques to obtain an evenly-distributed dataset. Various methods have been introduced towards data-sampling-based re-balancing, for instance random oversampling of underrepresented classes [3,54,50], sampling novel poses of known categories [32], undersampling overrepresented classes [35], frequency-based sampling [25], as well as feature-based or generative sampling [40,39,55]. Inspired by such approaches, we propose a 3D instance-based sampling to mitigate class imbalances for 3D semantic segmentation.
Fig. 2: During pre-training, we guide 3D feature learning by mapping learned features to text encoded anchors of the corresponding semantic labels, constructed by a contrastive loss between text and 3D. This establishes a more robust 3D feature representation space guided by the rich structure of the text embeddings.
Alternative methods have been proposed to re-balance the loss for image understanding tasks [28,21,29]. In particular, the focal loss [29] has been shown to be effective for 2D object detection and semantic segmentation by focusing the training on hard examples or to instance contours [2]. We also study the effect of focal loss balancing for the 3D semantic segmentation task.
Method
Our approach tackles the 200-class 3D semantic segmentation task on ScanNet [10] data, exploiting well-structured language models that have trained on rich observations across all category labels. In particular, we leverage pre-trained text embeddings from CLIP [43] as anchors to which we learn to map geometric features during the pre-training step. We then use these language-grounded features for fine-tuning downstream 3D semantic segmentation. During fine-tuning, we further address the class imbalance by instance-based augmentation as well as focal loss-based class-balancing for the downstream loss.
Language-Grounded 3D Feature Learning
As training data for language-based models are available in far greater quantities than 3D semantic annotations, we propose to ground 3D feature learning to well-structured, pre-trained text encodings. This enables a more robust construction of a learned feature space guided towards a highly-structured, rich text feature space, to support downstream 3D semantic segmentation. An overview of our language-grounded 3D pre-training is shown in Figure 2.
Text Encoder. We leverage a pre-trained CLIP [43] to map semantic labels to text features. Note that our approach is agnostic to the specific language model used, but we found CLIP's multi-modal training is well-suited to our language-3D pre-training. We refer to the supplemental for additional analysis on alternative text embeddings.
During pre-training, the text encoder is kept fixed, and takes the N_class = 200 target semantic labels in their text form, tokenizes them, and encodes them to their text encodings f^t_1, ..., f^t_{N_class} ∈ R^D, where D is the dimensionality of the text representation space. We leverage the text features f^t_i to anchor learning of 3D features such that learned 3D features will lie close to text encodings if they represent the same semantic class.
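As an illustration of this step, a minimal sketch of producing the fixed text anchors with the public CLIP package is given below; the prompt format (using the raw class name), the ViT-B/32 model variant, and the small list of class names are assumptions made only for this example, not details taken from the paper.

import torch
import clip  # https://github.com/openai/CLIP

# Hypothetical subset of the 200 ScanNet200 class names.
class_names = ["wall", "chair", "nightstand", "fire extinguisher"]

device = "cuda" if torch.cuda.is_available() else "cpu"
model, _ = clip.load("ViT-B/32", device=device)            # 512-dim text features

with torch.no_grad():
    tokens = clip.tokenize(class_names).to(device)          # (N_class, 77) token ids
    text_anchors = model.encode_text(tokens).float()        # (N_class, 512)
    text_anchors = torch.nn.functional.normalize(text_anchors, dim=-1)
# text_anchors stays fixed during pre-training; only the 3D encoder is optimized.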
3D Encoder. For 3D feature extraction, we employ a state-of-the-art sparse 3D convolutional U-Net [9]. Our 3D encoder backbone takes as input a sparse voxelized 3D scan S, with RGB color as input features, and produces for each sparse voxel location a 3D feature f s i ∈ R D .
Text-supervised Contrastive optimization. We then train the 3D feature encoder to map to the well-structured space of the text model by formulating a contrastive objective to bring together the different data modalities. For a 3D scan S with all N p sparse voxel locations in the current batch, we map together 3D features f s i to text features f t h(i) representing the semantic label text:
L_pos = Σ_{i=1}^{N_p} max( 0, (f^s_i · f^t_{h(i)}) / (|f^s_i| · |f^t_{h(i)}|) − t_pos ), (1)
where h(i) is the semantic text label for location i, and t pos is a threshold value for gradient clipping. Similarly, multiple non-matching semantic text features, sampled from all text semantic labels, are pushed away from the learned features as negatives:
L_neg = Σ_{i=1}^{N_p} (1/|M|) Σ_{j∈M} max( 0, t_neg − (f^s_i · f^t_j) / (|f^s_i| · |f^t_j|) ), (2)
where M ∈ N class are a set of semantic label encodings different from i, f t j is the corresponding text feature, and t neg is a threshold value for gradient clipping.
We found that a cosine distance between features empirically produced the best results compared to alternative distance measures such as ℓ 1 , ℓ 2 , or MSE. This allows for more flexibility in the feature learning by constraining only vector directions, and is similarly reflected in CLIP-driven image classification [27,16].
The final language-3D pre-training objective is then:
L = L pos + λL neg(3)
where λ weights the effect of the multiple negatives with the positive loss. We found empirically that negative sampling was necessary for effective 3D representation learning, rather than employing positive text associations only. During optimization, multiple possible point feature trajectories are converging to the target anchors, and we encourage the solutions that maximize cluster separation at all times (see Sec. 5 for additional analysis). Additionally, as we sample target feature anchors from the complete set of categories, we are able to maximize cluster separation within categories rarely appearing together in the same scenes, in contrast to unsupervised algorithms.
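A compact sketch of the pre-training objective, written to mirror Eqs. (1)-(3) exactly as printed above, is shown below; the threshold values, the number of sampled negatives, and the way negatives are drawn per point are assumptions for illustration rather than the paper's exact settings.

import torch
import torch.nn.functional as F

def language_grounding_loss(feat3d, text_anchors, labels,
                            t_pos=0.0, t_neg=0.0, num_neg=8, lam=1.0):
    # feat3d: (P, D) per-voxel 3D features, text_anchors: (C, D), labels: (P,) class ids
    f = F.normalize(feat3d, dim=-1)
    t = F.normalize(text_anchors, dim=-1)
    cos = f @ t.t()                                          # (P, C) cosine similarities

    pos = cos.gather(1, labels.view(-1, 1)).squeeze(1)
    loss_pos = torch.clamp(pos - t_pos, min=0.0).sum()       # Eq. (1) as printed

    # sample negative class indices different from the target label
    P, C = cos.shape
    neg_idx = torch.randint(0, C, (P, num_neg), device=cos.device)
    neg_idx = torch.where(neg_idx == labels.view(-1, 1), (neg_idx + 1) % C, neg_idx)
    neg = cos.gather(1, neg_idx)
    loss_neg = torch.clamp(t_neg - neg, min=0.0).mean(dim=1).sum()   # Eq. (2)

    return loss_pos + lam * loss_neg                          # Eq. (3)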
3D Semantic Segmentation Fine-tuning
We use the language-grounded pre-trained 3D features for fine-tuning for 3D semantic segmentation. Here, we also directly address the inherent class imbalance due to the natural long-tail distribution of the class categories in denselyannotated 3D scenes (e.g., far more walls and floors than lamps or dumbbells). In particular, we address this through data augmentation for class balancing as well as a class-balanced loss.
Class re-balancing by instance sampling. We observe that since rare classes are not only infrequently observed but are often small objects and thus represented by smaller sets of points or voxels, they often overfit to recognizing both the surrounding context and the object. We thus propose to augment scenes by placing instances of infrequently seen class categories in them and breaking overly specific context dependencies for recognition. An overview of our instance sampling is shown in Figure 3. We obtain instances from ScanNet200 semantic instance annotations, and sample from instances of rare class categories from train scenes. We note here, that we relied on the available ScanNet instance annotations, but since we are augmenting long tail categories only, sparsely appearing in all scenes, the conversion from semantic to instance segmentations comes essentially free with surface label clustering. We place these sampled instances in potentially physically valid locations in a new scene. To this end, we compute a height map of the scene in which the object is to be inserted and iteratively sample instance centroid candidates where the new object can be placed. Any sampled object center where the inserted object would collide with existing objects, based on bounding box overlap, is discarded. For all accepted placements we update the height map and continue with the iterations until the condition on the number of samples is met. This enables class re-balancing by over-sampling rare categories and breaking unduly specific context dependencies for recognition. For additional implementation details please refer to Section 8 in our supplemental material.
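The height-map-based placement test described above can be sketched roughly as follows; the grid resolution, the axis-aligned bounding-box collision test, and the rejection-sampling loop are assumptions about details not spelled out in this section (the full procedure is in the paper's supplemental material).

import numpy as np

def try_place_instance(scene_pts, scene_boxes, inst_pts, cell=0.05, max_tries=50):
    # scene_pts: (N, 3) scene points; scene_boxes: list of (min_xyz, max_xyz) per object;
    # inst_pts: (M, 3) points of the sampled rare-class instance.
    xy_min = scene_pts[:, :2].min(0)
    ij = ((scene_pts[:, :2] - xy_min) / cell).astype(int)
    hmap = {}                                    # height map: max z per 2D grid cell
    for key, z in zip(map(tuple, ij), scene_pts[:, 2]):
        hmap[key] = max(hmap.get(key, -np.inf), z)

    size = inst_pts.max(0) - inst_pts.min(0)
    cells = list(hmap.keys())
    for _ in range(max_tries):
        key = cells[np.random.randint(len(cells))]            # candidate centroid cell
        cx, cy = np.array(key) * cell + xy_min
        cz = hmap[key]                                        # place on top of the support surface
        shifted = (inst_pts - inst_pts.min(0)
                   + np.array([cx - size[0] / 2, cy - size[1] / 2, cz]))
        bmin, bmax = shifted.min(0), shifted.max(0)
        collides = any((bmin < b_max).all() and (bmax > b_min).all()
                       for b_min, b_max in scene_boxes)
        if not collides:
            return shifted        # accepted; caller updates scene_boxes and the height map
    return None                   # rejected after max_tries candidates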
Class-balanced loss. As instance sampling-based data augmentation will not fully balance classes (e.g., walls, floors, and other frequently seen categories still dominate), we also consider the class balancing of the loss function. Rather than a standard cross entropy loss for semantic segmentation, we adapt a focal loss [29] which was shown to be effective in mitigating class imbalance effects for 2D object detection. The focal loss applies a dynamic weighting factor based on the usefulness of a given sample to re-weight the cross entropy, focusing on difficult-to-classify examples. In particular, the focal loss proposes a modulating factor for a cross entropy loss:
L focal (p t ) = −(1 − p t ) γ log(p t ),(4)
where p t is the point prediction probability for the respective target label and γ ≥ 0 is focusing the modulating factor (1 − p t ) γ . In practice, we did not see a direct improvement over cross entropy training by applying a focal loss directly, so we additionally re-balance the loss based on the class imbalance of the train set:
FL(p_t) = −α (1 − p_t)^γ log(p_t),   α_i = log(n_i) / Σ_{j=1}^{N_class} log(n_j)   (5)
By explicitly considering category imbalances, we found this to provide improved performance over both a standard focal loss or direct category-balanced cross entropy (c.f. Sec 5 for more analysis).
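A possible PyTorch sketch of this class-balanced focal loss is given below; the per-class weights alpha are assumed to be precomputed from the train-set label counts n_i exactly as written in Eq. (5), and the value γ = 2 is an assumed default rather than a setting stated in the text.

import torch
import torch.nn.functional as F

def balanced_focal_loss(logits, target, alpha, gamma=2.0, ignore_index=-100):
    # logits: (P, C) class scores, target: (P,) labels, alpha: (C,) per-class weights (Eq. 5)
    valid = target != ignore_index
    logits, target = logits[valid], target[valid]
    log_p = F.log_softmax(logits, dim=-1)
    log_pt = log_p.gather(1, target.view(-1, 1)).squeeze(1)   # log p_t of the target class
    pt = log_pt.exp()
    weight = alpha.to(logits.device)[target]
    loss = -weight * (1.0 - pt) ** gamma * log_pt             # Eq. (4) weighted as in Eq. (5)
    return loss.mean()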
Implementation Details
During pre-training, we use a sparse 3D U-Net backbone for 3D feature encoding, implemented with the MinkowskiEngine [9]. We adapt the MinkUNet34 to output feature dimension maps of size D = 512 to match the dimensionality of the pre-trained text encoding from CLIP [43]. For additional details on optimization, please refer to Section 7 of our supplemental material. We follow a two-stage training with pre-training and fine-tuning for both semantic and instance segmentation, where for all comparisons we use the same 3D backbone architecture.
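For concreteness, a rough sketch of a backbone forward pass with MinkowskiEngine is given below; the MinkUNet34 variant (e.g., the one shipped with the MinkowskiEngine examples), the 2 cm voxel size, and the exact utility calls are assumptions made for illustration - the section above only states an 80M-parameter sparse U-Net with 512-dimensional outputs.

import torch
import MinkowskiEngine as ME

# Assumed: net = MinkUNet34(in_channels=3, out_channels=512, D=3), a 3D sparse U-Net
# whose final layer was changed to produce 512-channel features.

def encode_scene(net, coords_m, colors, voxel_size=0.02):
    # coords_m: (N, 3) float coordinates in meters, colors: (N, 3) RGB input features
    coords = torch.floor(coords_m / voxel_size).int()
    coords, feats = ME.utils.sparse_quantize(coords, colors)   # keep one point per voxel
    coords = ME.utils.batched_coordinates([coords])            # prepend batch index
    x = ME.SparseTensor(features=feats.float(), coordinates=coords)
    return net(x).F                                            # (V, 512) per-voxel features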
ScanNet200 Benchmark
The ScanNet Benchmark has provided an active online benchmark evaluation for 3D semantic segmentation, but only considers 20 class categories, which is insufficient to capture the diversity of many real-world environments. We thus present the ScanNet200 Benchmark for 3D semantic segmentation with 200 class categories, an order of magnitude more than previously considered. We follow the original train/val/test split of ScanNet [10], while training and evaluating over significantly more class categories. Figure 4 shows the class category distribution for ScanNet200 over the number of annotated instances and the number of annotated surface points per category in the train set.
To obtain the 200 class categories, we considered the raw semantic label annotations provided by ScanNet [10], which contains 607 raw categories. After merging near-duplicate labels, this resulted in 550 unique semantic classes, from which we selected the 200-most represented categories by the number of instances, forming ScanNet200. The 200-class selection enables enforcing a minimum of 10 samples from all categories.
In order to better understand performance under the natural class imbalance of the ScanNet200 benchmark, we further split the 200 categories into sets of 66, 68, and 66 categories, based on the number of labeled surface points per category in the train set: head, common, and tail, respectively. Evaluation over all categories as well as over the head, common, and tail splits enables a more precise understanding of segmentation performance.
Limited Annotation Task. We additionally study semantic segmentation performance on ScanNet200 in the limited annotation regime, as dense 3D annotations are expensive to acquire. In this setting, we emulate annotations queried from annotators by labeling one randomly sampled point per object and selecting any additional annotated points by farthest point sampling, similar to the settings of weakly-supervised methods [31]. We consider scenarios with (5%, 10%, 50%) of annotated surface points provided, where all scene geometry is available (but surface points without annotations remain unlabeled).
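A minimal sketch of how such limited annotations can be emulated per object (one random seed point plus farthest-point-sampled extras) is given below; the helper name and interface are hypothetical.

```python
import numpy as np

def sample_object_annotations(points, num_points, rng=None):
    """Pick `num_points` annotated indices for one object's points:
    the first at random, the rest by farthest point sampling."""
    rng = rng or np.random.default_rng()
    chosen = [int(rng.integers(len(points)))]
    dist = np.linalg.norm(points - points[chosen[0]], axis=1)
    for _ in range(num_points - 1):
        idx = int(dist.argmax())                  # farthest point from all chosen so far
        chosen.append(idx)
        dist = np.minimum(dist, np.linalg.norm(points - points[idx], axis=1))
    return np.array(chosen)
```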
Instance Segmentation Task. In addition to 3D semantic segmentation, we also evaluate 3D instance segmentation on ScanNet200. We evaluate methods by mean Average Precision (mAP) at IoU of (25%, 50%) and averaged over all overlaps between [50%, 95%] at 5% steps, following the original [10] benchmark.
Evaluation metrics. To evaluate semantic segmentation, we consider several evaluation metrics. The primary evaluation metric is the category-level mean intersection-over-union (mIoU) score, tp/(tp + fp + fn), a commonly adopted segmentation measure. Additionally, we evaluate precision as tp/(tp + fp) and recall as tp/(tp + fn), to provide further insight into over-prediction and under-prediction, respectively. All evaluation metrics are measured across the head, common, and tail splits as well as globally across all categories, in order to consider performance for more and less frequently seen class categories.
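For clarity, these metrics reduce to simple per-category counts; a NumPy sketch (with illustrative names) is:

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Per-class IoU, precision and recall averaged over categories."""
    iou, prec, rec = [], [], []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        iou.append(tp / max(tp + fp + fn, 1))
        prec.append(tp / max(tp + fp, 1))
        rec.append(tp / max(tp + fn, 1))
    return np.mean(iou), np.mean(prec), np.mean(rec)   # mIoU, mean precision, mean recall
```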
Experiments
We evaluate our approach for language-grounded pre-training against state-of-the-art alternatives for 3D semantic segmentation on our ScanNet200 benchmark. For our method and all baselines, we use the same 80M parameter sparse 3D U-Net backbone implemented with MinkowskiNet [9]; results are reported in Table 1. For CSC, we use the same pre-training experimental setup as proposed by the authors for our 3D backbone. For SupCon, we sample 5 positive and 5 negative candidates from the training scene for each source point and train for 300 epochs with the same optimization parameters as our method. Our instance sampling as well as our focal loss individually help to improve performance, particularly for lesser-seen class categories. Additionally, all pre-training approaches improve performance over training from scratch, while our language-grounded feature learning enables more effective semantic reasoning with consistent improvements over baselines across common and rarely seen class categories.
Limited annotation semantic segmentation. As data annotations remain expensive to acquire, we additionally evaluate our approach in comparison with state of the art in the limited annotation scenario of our ScanNet200 Benchmark described in Sec. 4. Figure 5 shows performance over varying amounts of labeled annotation data available (5%, 10%, 50%, 100%). Note that since our pre-training leverages text labels to guide pre-training, we only pre-train with the available annotations, whereas CSC is pre-trained with all geometric data available for the train scenes and fine-tuned with the limited annotation data. Our approach enables more robust semantic segmentation on this challenging benchmark, consistently improving and recovering the performance of training from scratch with only 5% of the annotations. Moreover, in the very low annotation regime, we see significant improvements on tail categories, with an increase of +8 mIoU from the state-of-the-art 3D pre-training of CSC with 5% of annotations available.
How much does a class-balanced focal loss help? We evaluate the effect of our class-balanced focal loss [29] variant (C-Focal) in Table 1, which helps to improve performance over training from scratch with a standard cross entropy loss. Additionally, we see a consistent improvement with a smaller 3D backbone model in Table 3 of the supplementary material, particularly for tail categories. We note that the class-balanced focal loss improves notably over both the original focal loss formulation (both using γ = 2) and a class-balanced cross entropy.
Fig. 6: Qualitative semantic segmentation results on ScanNet [10] scenes. In comparison to training from scratch, the class-balanced focal loss, and the 3D pre-training of CSC [20], our language-grounded 3D feature learning enables more consistent and accurate semantic segmentation, even for challenging, less frequently seen class categories (e.g., "dish rack" in row 4, "telephone" in the last row).
What is the impact of data balancing with instance sampling? We additionally evaluate the effect of applying data balancing by our instance sampling during training in Table 1 (Ins. samp) as well as for a smaller 3D backbone in supplemental Table 3. We find that this instance sampling consistently provides a small improvement in performance across common and rare class categories.
What is the effect of our language-grounded pre-training? Table 1 shows that our language-grounded pretraining to text-based CLIP [43] embeddings without focal loss or instance sampling already improves over all baselines. Our full approach with focal loss and instance sampling in addition to text-anchored pre-training enables consistent, effective improvement in comparison to alternative approaches.
3D instance segmentation task. In addition to 3D semantic segmentation, we also analyze a 3D instance segmentation task in Table 2, showing that our approach generalizes across multiple downstream tasks with consistent performance improvement. We use the same pre-trained 3D backbones and fine-tune them for instance segmentation by predicting, together with the semantic labels, an offset vector for every scene point as a voting mechanism. These directional distance vectors are optimized at train time, while the clustering of the instances is computed only at test time. For the task and clustering algorithm, we adapt the paradigms of [24,20] to our ScanNet200 benchmark. For this task, we train our models with a batch size of 8 for 300 epochs and a momentum SGD optimizer with the same parameters as in the semantic segmentation experiments, except for a smaller starting learning rate of 0.02.
Learned feature representation space. We analyze the pre-trained representation spaces by visualizing a t-SNE projection of the learned features in Figure 7. By anchoring 3D feature learning to a richly-structured text embedding space, we learn a more structured 3D feature representation space, and our full language-grounded pre-training yields both a more structured representation space and improved semantic segmentation performance.
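The representation-space visualization can be reproduced in spirit with an off-the-shelf t-SNE as sketched below; the feature dumps, file names, and plotting choices are assumptions for illustration.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

# Hypothetical dumps of pooled per-object features from the pre-trained backbone and their labels.
feats = np.load("pretrained_features.npy")        # (N, D)
labels = np.load("feature_labels.npy")            # (N,), used only for coloring

emb = TSNE(n_components=2, perplexity=30, init="pca").fit_transform(feats)

plt.scatter(emb[:, 0], emb[:, 1], c=labels, s=2, cmap="tab20")
plt.savefig("tsne_features.png", dpi=200)
```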
Limitations and Future Work. We believe our language-grounded 3D feature learning provides a promising step towards more robust and general 3D scene understanding, though several important limitations remain. It is often the case that infrequently observed objects are small and their geometric resolution is limited, so while tail category performance has improved using only geometric input, there is still much room for improvement. In particular, we note that color image observations could provide significantly higher resolution signals to explore for more accurate tail category recognition. Additionally, text encodings are used to anchor learned 3D feature representations, but currently, only the semantic labels of each object are considered, whereas text caption or object attribute descriptions could potentially provide a richer signal.
Conclusion
We have presented ScanNet200, a new benchmark for 3D semantic segmentation with an order of magnitude more class categories, along with a new approach for language-grounded pre-training to address 3D semantic feature learning under imbalanced and limited data. Our approach demonstrates robust feature learning by anchoring learned features to richly-structured CLIP text embeddings, demonstrating consistent improvements over strong baselines on our challenging ScanNet200 Benchmark and under limited annotation scenarios. We believe that this makes an important step towards 3D semantic scene understanding in the wild, and forms a basis for future multi-modal exploration for a variety of 3D perception tasks.
16. Gu, X., Lin, T.Y., Kuo, W., Cui, Y.: Zero-shot detection via vision and language knowledge distillation. arXiv e-prints pp. arXiv-2104 (2021) 6 17. Gupta, A., Dollar, P., Girshick, R.
Additional Ablations
Generalization across backbone sizes. In Table 3, we evaluate our approach and baselines using a smaller 20M parameter 3D backbone model. In this scenario, we do not map 3D features directly to their 512-dimensional text embeddings from CLIP, but to a 96-dimensional projection of them (obtained by PCA). We see consistent improvements over training from scratch as well as over the state of the art while using this smaller backbone architecture.
Table 3: Generalization across backbone sizes: 3D semantic segmentation with a 20M parameter 3D U-Net backbone on ScanNet200. Our approach maintains consistent improvements over the state of the art with this smaller 3D backbone.
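A minimal sketch of the dimensionality reduction used for the smaller backbone, projecting the 512-dimensional CLIP text anchors to 96 dimensions with PCA, is shown below; the file name and exact settings are our assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

text_anchors = np.load("clip_text_anchors.npy")    # (num_classes, 512), hypothetical dump
pca = PCA(n_components=96)
anchors_96 = pca.fit_transform(text_anchors)       # (num_classes, 96) targets for the 96-d backbone
print("retained variance:", pca.explained_variance_ratio_.sum())
```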
Comparison with point-based baselines. As an additional ablation, we compare our method with point-based state-of-the-art segmentation models capable of processing complete ScanNet scenes at once. We choose RandLA-Net [22] and SCF-Net [14] as baselines and use the official authors' implementations and hyperparameters for both methods. With our language-guided pre-training, our method outperforms both approaches on this challenging large-vocabulary task. The performance evaluated on ScanNet200 is reported in Table 4.
Effect of contrastive distance metric. Table 5 additionally considers the alternative distance metrics ℓ1 and ℓ2 in comparison with the cosine distance metric we use. The ℓ1 and ℓ2 distances were more challenging to optimize for aligning to the corresponding text embeddings, with the cosine distance producing the best performance.
Table 4: Comparison with point-based RandLA-Net and SCF-Net on ScanNet200 semantic segmentation (mIoU).
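The three distance choices compared in Table 5 can be written compactly as below; note this sketch shows only the point-to-anchor distance term, not the full contrastive objective with positive/negative thresholds, and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def alignment_distance(point_feats, text_anchors, labels, metric="cosine"):
    """Distance between per-point features and the text anchor of their label."""
    target = text_anchors[labels]                                          # (N, D)
    if metric == "cosine":
        return (1.0 - F.cosine_similarity(point_feats, target, dim=-1)).mean()
    if metric == "l1":
        return (point_feats - target).abs().sum(dim=-1).mean()
    if metric == "l2":
        return (point_feats - target).pow(2).sum(dim=-1).sqrt().mean()
    raise ValueError(f"unknown metric: {metric}")
```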
Effect of the pre-trained language model. In Table 5, we consider alternative language models to CLIP [43]; both BERT [12] and GPT2 [44] are popular language models trained on large amounts of text data alone, rather than with the image-text training of CLIP.
For text encodings, we use the BERT variant bert_uncased_L-8_H-512_A-8 from [49], and project the 768-dimensional GPT2 encodings from the small GPT2 model to 512 dimensions by PCA. We find that the rich embedding structure arising from the multi-modal nature of CLIP produces the best results.
Table 5: Ablation study on different language models for generating the text anchors during the pre-training stage. While the model benefited from pre-training guided by all language models, CLIP was found to be the most suitable for this task. We also show that more rigid loss distance metrics such as ℓ1 or ℓ2 can even significantly hinder performance.
Implementation Details
Training parameters. In the pre-training stage, we use a momentum SGD optimizer with batch size 8, an initial learning rate of 0.05 decayed by a factor of 0.3 at epochs 150 and 250, and a momentum of 0.9, and train for 400 epochs until convergence. We additionally use λ = 1, t_pos = 0, t_neg = 0.6, and N_i = 3 uniformly sampled from all ScanNet200 categories. We then fine-tune the pre-trained 3D backbone for 3D semantic segmentation, optimizing with the same momentum SGD and batch size, an initial learning rate of 0.05 decayed by a factor of 0.3, and training for 170 epochs until convergence. For the instance sampling, we sample from the 66 classes least frequently represented in the training set surface point annotations. A schematic of this optimization setup is given below.
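The pre-training schedule above corresponds to a standard PyTorch setup along the following lines; the tiny linear model and random batches are stand-ins so the sketch runs on its own, and the loss is a placeholder for the actual text-alignment objective.

```python
import torch

model = torch.nn.Linear(3, 512)                    # stand-in for the sparse 3D U-Net backbone
optimizer = torch.optim.SGD(model.parameters(), lr=0.05, momentum=0.9)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[150, 250], gamma=0.3)

for epoch in range(400):                           # 400 pre-training epochs, LR x0.3 at epochs 150 and 250
    for _ in range(10):                            # stand-in for iterating the train loader (batch size 8)
        x = torch.randn(8, 3)
        optimizer.zero_grad()
        loss = model(x).pow(2).mean()              # placeholder loss; the real objective is the text-alignment loss
        loss.backward()
        optimizer.step()
    scheduler.step()
```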
Instance Sampling. For the instance sampling, we randomly sample from the 66 tail classes, selecting them with probability computed from the inverse log frequencies of the train set histogram. Object centers are placed in the scene by randomly sampling (x, y) locations, with z determined by the maximum height of the scene geometry at (x, y) (such that the object sits on the scene geometry). Objects are then inserted with a random orientation around the up (z) axis. Any object insertion that induces a bounding box collision with scene objects or previously inserted objects is discarded. Determining placement by the object center on the support plane encourages instances to be placed with sufficient physical support in the scene. In combination with color and geometry augmentation, this helps to learn more robust features in our large-vocabulary setting.
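The class-selection step ("probability computed from the inverse log frequencies of the train set histogram") can be sketched as follows; the function name and the +1 smoothing are illustrative assumptions.

```python
import numpy as np

def tail_class_sampler(class_counts, num_draws, rng=None):
    """Sample class ids with probability proportional to 1 / log(count)."""
    rng = rng or np.random.default_rng()
    counts = np.asarray(class_counts, dtype=float)
    weights = 1.0 / np.log(counts + 1.0)           # +1 keeps log positive for single-sample classes
    probs = weights / weights.sum()
    return rng.choice(len(counts), size=num_draws, p=probs)
```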
We also note that while we do not explicitly address lighting effects during instance augmentation, so that a minor appearance difference is noticeable between the original and sampled parts of the scene, we still achieve a clear effect with this augmentation technique. We hypothesize that at the 2 cm resolution which our method (and state-of-the-art methods) use, lighting inconsistencies have a limited-to-negligible effect. In addition, we use random color jittering, which reduces the chance of learning from an erroneous signal, and observe that the geometric augmentation provides notably more benefit than color. As evidence for this reasoning, we trained the same baseline sparse 20M parameter U-Net model with and without the voxel color signal and observed that the final performance differs by only 1% mIoU.
10 Breakdown of Class IoU Scores
Table 6 shows the per-category IoU scores for 3D semantic segmentation on ScanNet200.
Fig. 8: Qualitative results for 3D semantic instance segmentation on ScanNet [10] scenes. Our language-grounded pre-training together with class-balanced losses can also effectively improve performance in object recognition.
Table 6: Class IoU scores on the ScanNet200 benchmark of our proposed method, compared with other state-of-the-art approaches.
3D Instance Segmentation Results
Fig. 3: Our instance sampling augments scenes during training by placing rarely-seen class category instances into them, breaking unduly specific context dependencies that can be easily learned from only a few examples.
Fig. 4: Class category distribution for our ScanNet200 Benchmark showing the number of instances per category; note that the frequencies are given on a log scale and ordered by the number of instances per category.
Fig. 5: 3D semantic segmentation under varying amounts of limited annotations. Even when considering only a small number of annotated surface points for our supervised language-guided 3D pre-training, our approach improves notably over the state-of-the-art 3D pre-training of CSC [20].
Fig. 7: We show a comparison with the representation learned by CSC [20], SupCon [26], as well as our approach when training with only positive samples.
: LVIS: A dataset for large vocabulary instance segmentation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2019) 2 18. Han, L., Zheng, T., Xu, L., Fang, L.: Occuseg: Occupancy-aware 3d instance segmentation. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 2940-2949 (2020) 2 19. He, K., Fan, H., Wu, Y., Xie, S., Girshick, R.: Momentum contrast for unsupervised visual representation learning. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 9729-9738 (2020) 3 20. Hou, J., Graham, B., Nießner, M., Xie, S.: Exploring data-efficient 3d scene understanding with contrastive scene contexts. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 15587-15597 (2021) 4, 10, 11, 12, 13, 14, 19, 21 21. Hsieh, T.I., Robb, E., Chen, H.T., Huang, J.B.: Droploss for long-tail instance segmentation. In: AAAI. vol. 3, p. 15 (2021) 5 22. Hu, Q., Yang, B., Xie, L., Rosa, S., Guo, Y., Wang, Z., Trigoni, N., Markham, A.: Randla-net: Efficient semantic segmentation of large-scale point clouds. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2020) 19 23. Huang, S., Xie, Y., Zhu, S.C., Zhu, Y.: Spatio-temporal self-supervised representation learning for 3d point clouds. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 6535-6545 (2021) 3 24. Jiang, L., Zhao, H., Shi, S., Liu, S., Fu, C.W., Jia, J.: Pointgroup: Dual-set point grouping for 3d instance segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2020) 13 25. Kang, B., Xie, S., Rohrbach, M., Yan, Z., Gordo, A., Feng, J., Kalantidis, Y.: Decoupling representation and classifier for long-tailed recognition. In: Eighth International Conference on Learning Representations (ICLR) (2020) 4 26. Khosla, P., Teterwak, P., Wang, C., Sarna, A., Tian, Y., Isola, P., Maschinot, A., Liu, C., Krishnan, D.: Supervised contrastive learning. Advances in Neural Information Processing Systems 33, 18661-18673 (2020) 10, 11, 14 27. Li, B., Weinberger, K.Q., Belongie, S., Koltun, V., Ranftl, R.: Language-driven semantic segmentation. arXiv preprint arXiv:2201.03546 (2022) 6 28. Li, Y., Wang, T., Kang, B., Tang, S., Wang, C., Li, J., Feng, J.: Overcoming classifier imbalance for long-tail object detection with balanced group softmax. In: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. pp. 10991-11000 (2020) 5 29. Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision. pp. 2980-2988 (2017) 5, 8, 11, 19 30. Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Dollár, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision. pp. 740-755. Springer (2014) 2 31. Liu, Z., Qi, X., Fu, C.W.: One thing one click: A self-training approach for weakly supervised 3d semantic segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 1726-1736 (2021) 10 32. Manhardt, F., Kehl, W., Gaidon, A.: Roi-10d: Monocular lifting of 2d detection to 6d pose and metric shape. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. Michele, B., Boulch, A., Puy, G., Bucher, M., Marlet, R.: Generative zero-shot learning for semantic segmentation of 3d point cloud. 
CoRR abs/2108.06230 (2021), https://arxiv.org/abs/2108.06230 4 35. More, A.: Survey of resampling techniques for improving classification performance in unbalanced datasets. arXiv preprint arXiv:1608.06048 (2016) 4 36. Nathan Silberman, Derek Hoiem, P.K., Fergus, R.: Indoor segmentation and support inference from rgbd images. In: ECCV (2012) 4 37. Nekrasov, A., Schult, J., Litany, O., Leibe, B., Engelmann, F.: Mix3D: Out-of-Context Data Augmentation for 3D Scenes. In: International Conference on 3D Vision (3DV) (2021) 4 38. Van den Oord, A., Li, Y., Vinyals, O.: Representation learning with contrastive predictive coding. arXiv e-prints pp. arXiv-1807 (2018) 3 39. Peng, M., Zhang, Q., Xing, X., Gui, T., Huang, X., Jiang, Y.G., Ding, K., Chen, Z.: Trainable undersampling for class-imbalance learning. In: AAAI (2019) 4 40. Perez-Ortiz, M., Tiňo, P., Mantiuk, R., Hervás-Martínez, C.: Exploiting synthetically generated data with semi-supervised learning for small and imbalanced datasets. In: Proceedings of the AAAI Conference on Artificial Intelligence. vol. 33, pp. 4715-4722 (2019) 4 41. Qi, C.R., Su, H., Mo, K., Guibas, L.J.: Pointnet: Deep learning on point sets for 3d classification and segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 652-660 (2017) 3 42. Qi, C.R., Yi, L., Su, H., Guibas, L.J.: Pointnet++: Deep hierarchical feature learning on point sets in a metric space. Advances in neural information processing systems 30 (2017) 3 43. Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J., et al.: Learning transferable visual models from natural language supervision. In: International Conference on Machine Learning. pp. 8748-8763. PMLR (2021) 4, 5, 8, 13, 20 44. Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I., et al.: Language models are unsupervised multitask learners. OpenAI blog 1(8), 9 (2019) 20 45. Rao, Y., Liu, B., Wei, Y., Lu, J., Hsieh, C.J., Zhou, J.: Randomrooms: Unsupervised pre-training from synthetic shapes and randomized layouts for 3d object detection. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 3283-3292 (2021) 3 46. Riegler, G., Osman Ulusoy, A., Geiger, A.: Octnet: Learning deep 3d representations at high resolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition. pp. 3577-3586 (2017) 3 47. Sanghi, A., Chu, H., Lambourne, J.G., Wang, Y., Cheng, C., Fumero, M.: Clipforge: Towards zero-shot text-to-shape generation. CoRR abs/2110.02624 (2021), https://arxiv.org/abs/2110.02624 4 48. Thomas, H., Qi, C.R., Deschaud, J.E., Marcotegui, B., Goulette, F., Guibas, L.J.: Kpconv: Flexible and deformable convolution for point clouds. In: Proceedings of the IEEE/CVF international conference on computer vision. pp. 6411-6420 (2019) 2, 3 49. Turc, I., Chang, M.W., Lee, K., Toutanova, K.: Well-read students learn better: On the importance of pre-training compact models. arXiv preprint arXiv:1908.08962 (2019) 20
Figure 8 shows qualitative visualizations for the downstream task of 3D instance segmentation in comparison to training from scratch and CSC [20].
The introduction of PointNet [41] presented a point-based alternative with strong memory efficiency by operating on unstructured point clouds, with various methods introducing local operators to better learn neighborhood structures [42,48,53]. Hierarchical grid structures such as octrees provided a more structured alternative for grid-based reasoning without dense memory consumption [46]. Recently, the introduction of sparse 3D convolutions
Table 1: Comparison to state of the art on ScanNet200 (mIoU, precision, and recall, each reported over the head, common, tail, and all splits). Our language-grounded 3D feature learning enables improved performance across frequent and infrequently seen categories in comparison with pure data augmentation or loss balancing techniques as well as state-of-the-art 3D pre-training. Our approach achieves over 5% mIoU performance over training from scratch, more than double the performance improvement of CSC [20].

                     mIoU                           Precision                      Recall
                     Head   Common  Tail   All      Head   Common  Tail   All      Head   Common  Tail   All
Scratch              48.29  19.08    7.86  25.02    68.81  66.29   39.88  58.32    60.45  25.50   15.06  33.67
Ins. samp.           48.46  18.97    9.22  25.49    70.04  62.98   49.41  60.81    59.64  24.66   19.25  34.52
C-Focal              48.10  20.28    9.38  25.86    68.10  65.64   47.43  60.39    60.08  26.28   19.14  35.48
SupCon [26]          48.55  19.17   10.34  26.02    69.52  65.42   40.62  58.52    60.27  26.28   19.14  35.23
CSC [20]             49.43  19.52   10.28  26.41    70.00  67.75   40.78  59.51    61.01  25.75   17.62  34.79
Ours (CLIP only)     50.39  22.84   10.10  27.73    71.64  69.72   44.47  61.94    62.20  29.37   17.35  36.16
Ours                 51.51  22.68   12.41  28.87    72.72  66.69   58.30  65.90    62.50  29.09   26.61  39.40

Comparison to the state of the art. We compare with the state-of-the-art pre-training approaches Contrastive Scene Contexts (CSC) [20] and Supervised Contrastive Learning (SupCon) [26], along with our instance-based data balancing and focal loss [29] training, in Table 1.
Table 2: 3D instance segmentation, in comparison with training from scratch and the state-of-the-art 3D pre-training approach CSC [20]. Our language-grounded pre-training improves over both baselines.
http://kaldir.vc.in.tum.de/scannet_benchmark/
Acknowledgements. This project is funded by the Bavarian State Ministry of Science and the Arts and coordinated by the Bavarian Research Institute for Digital Transformation (bidt).
Language-Grounded Indoor 3D Semantic Segmentation in the Wild
Appendix
In this supplemental material, we show additional results on the downstream task of 3D instance segmentation and the extended benchmark of ScanNet200 in Sec. 9. We additionally provide further ablation analysis in Sec. 7 on the effect of the pretrained language model and distance metric used in the contrastive objective of the point-to-language features. Finally, we present a break-down of per-class IoU scores in Sec. 10.
1. Behley, J., Garbade, M., Milioto, A., Quenzel, J., Behnke, S., Stachniss, C., Gall, J.: SemanticKITTI: A dataset for semantic scene understanding of LiDAR sequences. In: Proc. of the IEEE/CVF International Conf. on Computer Vision (ICCV) (2019)
2. Biasutti, P., Lepetit, V., Aujol, J.F., Brédif, M., Bugeau, A.: LU-Net: An efficient network for 3d lidar point cloud semantic segmentation based on end-to-end-learned 3d features and U-Net. In: Proceedings of the IEEE/CVF International Conference on Computer Vision Workshops (2019)
3. Buda, M., Maki, A., Mazurowski, M.A.: A systematic study of the class imbalance problem in convolutional neural networks. Neural Networks 106, 249-259 (2018)
4. Chang, A., Dai, A., Funkhouser, T., Halber, M., Niessner, M., Savva, M., Song, S., Zeng, A., Zhang, Y.: Matterport3D: Learning from RGB-D data in indoor environments. arXiv preprint arXiv:1709.06158 (2017)
5. Chen, T., Kornblith, S., Norouzi, M., Hinton, G.: A simple framework for contrastive learning of visual representations. In: International Conference on Machine Learning. pp. 1597-1607. PMLR (2020)
6. Chen, X., He, K.: Exploring simple siamese representation learning. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 15750-15758 (2021)
7. Chen, Y., Nießner, M., Dai, A.: 4DContrast: Contrastive learning with dynamic correspondences for 3d scene understanding. arXiv preprint arXiv:2112.02990 (2021)
8. Cheraghian, A., Rahman, S., Chowdhury, T.F., Campbell, D., Petersson, L.: Zero-shot learning on 3d point cloud objects and beyond. CoRR abs/2104.04980 (2021), https://arxiv.org/abs/2104.04980
9. Choy, C., Gwak, J., Savarese, S.: 4D spatio-temporal ConvNets: Minkowski convolutional neural networks. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 3075-3084 (2019)
10. Dai, A., Chang, A.X., Savva, M., Halber, M., Funkhouser, T., Nießner, M.: ScanNet: Richly-annotated 3d reconstructions of indoor scenes. In: Proc. Computer Vision and Pattern Recognition (CVPR), IEEE (2017)
11. Dai, A., Nießner, M.: 3DMV: Joint 3d-multi-view prediction for 3d semantic scene segmentation. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 452-468 (2018)
12. Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
13. Everingham, M., Van Gool, L., Williams, C.K., Winn, J., Zisserman, A.: The Pascal visual object classes (VOC) challenge. International Journal of Computer Vision 88(2), 303-338 (2010)
14. Fan, S., Dong, Q., Zhu, F., Lv, Y., Ye, P., Wang, F.Y.: SCF-Net: Learning spatial contextual features for large-scale point cloud segmentation. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). pp. 14504-14513 (June 2021)
15. Graham, B., Engelcke, M., van der Maaten, L.: 3d semantic segmentation with submanifold sparse convolutional networks. CVPR (2018)
50. Wang, C., Ma, C., Zhu, M., Yang, X.: PointAugmenting: Cross-modal augmentation for 3d object detection. In: Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. pp. 11794-11803 (2021)
51. Xie, S., Gu, J., Guo, D., Qi, C.R., Guibas, L., Litany, O.: PointContrast: Unsupervised pre-training for 3d point cloud understanding. In: European Conference on Computer Vision. pp. 574-591. Springer (2020)
52. Xu, J., Zhang, R., Dou, J., Zhu, Y., Sun, J., Pu, S.: RPVNet: A deep and efficient range-point-voxel fusion network for lidar point cloud segmentation. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 16024-16033 (2021)
53. Xu, Y., Fan, T., Xu, M., Zeng, L., Qiao, Y.: SpiderCNN: Deep learning on point sets with parameterized convolutional filters. In: Proceedings of the European Conference on Computer Vision (ECCV). pp. 87-102 (2018)
54. Yan, Y., Mao, Y., Li, B.: SECOND: Sparsely embedded convolutional detection. Sensors (Basel, Switzerland) 18 (2018)
55. Yan, Y., Tan, M., Xu, Y., Cao, J., Ng, M.K., Min, H., Wu, Q.: Oversampling for imbalanced data via optimal transport. In: AAAI (2019)
56. Zhang, R., Guo, Z., Zhang, W., Li, K., Miao, X., Cui, B., Qiao, Y., Gao, P., Li, H.: PointCLIP: Point cloud understanding by CLIP. CoRR abs/2112.02413 (2021), https://arxiv.org/abs/2112.02413
57. Zhang, Z., Girdhar, R., Joulin, A., Misra, I.: Self-supervised pretraining of 3d features on any point-cloud. In: Proceedings of the IEEE/CVF International Conference on Computer Vision. pp. 10252-10263 (2021)
Oscillatory surface dichroism of an insulating topological insulator Bi2Te2Se
arXiv:1307.7311v1 [cond-mat.mes-hall], 27 Jul 2013 (Dated: May 11, 2014)
M. Neupane,1 S. Basak,2 N. Alidoust,1 S.-Y. Xu,1 Chang Liu,1 I. Belopolski,1 G. Bian,1 J. Xiong,1 H. Ji,3 S. Jia,3 S.-K. Mo,4 M. Bissen,5 M. Severson,5 H. Lin,2 N. P. Ong,1 T. Durakiewicz,6 R. J. Cava,3 A. Bansil,2 and M. Z. Hasan1

1 Joseph Henry Laboratory, Department of Physics, Princeton University, Princeton, New Jersey 08544, USA
2 Department of Physics, Northeastern University, Boston, Massachusetts 02115, USA
3 Department of Chemistry, Princeton University, Princeton, New Jersey 08544, USA
4 Advanced Light Source, Lawrence Berkeley National Laboratory, Berkeley, California 94305, USA
5 Synchrotron Radiation Center, Stoughton, WI 53589-3097, USA
6 Condensed Matter and Magnet Science Group, Los Alamos National Laboratory, Los Alamos, NM 87545, USA
Using circular dichroism-angle resolved photoemission spectroscopy (CD-ARPES), we report a study of the effect of angular momentum transfer between polarized photons and topological surface states on the surface of highly bulk insulating topological insulator Bi2Te2Se. The photoelectron dichroism is found to be strongly modulated by the frequency of the helical photons including a dramatic sign-flip. Our results suggest that the observed dichroism and its sign-flip are consequences of strong coupling between the photon field and the spin-orbit nature of the Dirac modes on the surface. Our studies reveal the intrinsic dichroic behavior of topological surface states and point toward the potential utility of bulk insulating topological insulators in device applications.
While the basic electronic structure and spinmomentum locking of topological insulators have been studied using surface sensitive probes such as angleresolved photoemission spectroscopy and scanning tunneling microscopy [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16][17][18], much remains to be discovered regarding their critical and strong response to light, electric or magnetic fields. Such perturbations can selectively couple to different aspects of the surface wavefunction. The full wavefunction of the topological surface states (TSSs) is known to feature not only strong spin-orbit coupled texture but also its variation and modulation from layer to layer due to its finite penetration into the bulk [12,19]. Therefore, it is of critical importance to understand the nature of electron-photon scattering process in the TSS. It is commonly believed that the single frequency dichroic signal reveals the spin (and/or orbital) texture of the material and also controls the photocurrent [8][9][10][12][13][14][15]. However, in real materials, this apparently simple control process is further complicated by multiple factors including the presence of bulk bands at the Fermi level leading to surface bulk hybridization, quantum well formation and surface-bulk scattering thus masking the intrinsic response. Predictable control of the topological surface states has not yet been achieved.
In order to understand the intrinsic dichroic behavior of topological surface states it is important to study the effect of angular momentum transfer between the polarized photons and the surface states as a function of photon frequency and polarization in a highly bulk insulating topological insulator class where Fermi level lies within the bulk band-gap and cuts across the topological surface states only. We carried out circular dichroism-angle resolved photoemission (CD-ARPES) measurements on Bi 2 Te 2 Se (BTS221), a recently realized bulk resistive topological insulator (more than 6 Ω · cm). BTS221 sample shows much better insulating characteristics compared to Bi 2 Te 3 or Bi 2 Se 3 , with an in-gap Fermi level, and is thus ideal for exploring the real origin of dichroic effects without complications related to interaction between the bulk and surface states. This is not possible in Bi 2 Te 3 [20]. We report that the intrinsic dichroism is strongly modulated by the frequency of photons including a dramatic sign-flip which further undergoes magnitude oscillations. Our results suggest a lack of unique experimental correspondence between the dichroism and spin-texture chirality (right or left handedness) for a specific photon frequency. We present theoretical calculations accounting for the Dirac-electron and helicalphoton interaction and show that the sign-flip and the magnitude modulation in dichroism are consequences of the combined effect of strong coupling between the photon helicity and the spin-orbit texture of the massless Dirac modes and the projection of the multiple orbitaltextures within the effective skin depth of the topological surface states.
Single crystalline samples of topological insulators were grown using the Bridgman method, which is detailed elsewhere [21][22][23]. ARPES measurements of the low energy electronic structure were performed at the Synchrotron Radiation Center (SRC), Wisconsin, using the U9 VLS-PGM beamline equipped with high efficiency VG-Scienta SES2002 electron analyzers, and at the Advanced Light Source (ALS), California, using BL10 equipped with high efficiency R4000 electron analyzers. The polarization purity is better than 99% for horizontal polarization (HP) and better than 80% for right circularly polarized (RCP) and left circularly polarized (LCP) light. Samples were cleaved in situ and measured at 20 K in a vacuum better than 1 × 10^-10 torr. Energy and momentum resolution were better than 15 meV and 1% of the surface Brillouin zone (BZ), respectively. We theoretically calculate the CD response on the surface of BTS221, where the electronic structure of BTS221 is modeled by tight-binding theory with parameters fitted to the GGA results. The ARPES matrix element effect is considered in the electron-photon scattering process (see [24] for details).
The crystalline symmetry, the cleavage plane (Te layer), and sample characterization for BTS221 are shown in Fig. 1. It is believed that the reduction in the bulk conductivity is possible in BTS221 due to the confinement of Se atoms within the central layer, which likely suppresses Se vacancy generation as well as reduces the antisite defects between Bi and Te atoms. Comparative resistivity profiles show a significant degree of bulk insulation in BTS221 with respect to prototype materials such as Bi2Te3 and Bi2Se3. Based on the period of oscillations in high-field transport, we obtain an averaged 2D carrier concentration n_s ~ 1.7 × 10^12 cm^-2 and hence a Fermi momentum of k_F ~ 0.047 Å^-1. Applying a standard Dingle analysis to the SdH amplitudes, we infer a surface mobility μ_s = 2,800 cm^2/Vs and a Fermi velocity v_F = 6 × 10^5 m/s in our samples [25]. The nonconducting behavior of the bulk and the in-gap Fermi level in our samples reduce the possibility of interaction of bulk and surface states, which is also evident from the high degree of surface state contribution to transport typically seen in the quantum oscillation data [25]. These results are in qualitative agreement with conventional band structure measurements of BTS221 [17,18]. BTS221 samples thus provide an ideal platform to explore the intrinsic CD effect theoretically expected from the topological surface states, which is not possible with the metallic Bi2Te3 TI [20]. The ARPES dispersion maps of surface states for BTS221 are shown in Fig. 2(a), while the experimental geometry used for the CD measurements is shown in the inset of Fig. 3(a). The sample surface is parallel to the XY plane and circularly polarized photons (spiral arrow) propagate in the XZ plane at an angle (θ) of 50° to the sample surface normal. The chemical potential is found to lie within the bulk band gap, cutting across the TSS only. A nearly isotropic surface Fermi surface without any significant hexagonal deformation is seen, which suggests that this system can be thought of as a material realization of a nearly-ideal Dirac fermion gas near the native chemical potential. This also indicates a near-absence of interaction between bulk and surface states (in contrast to the hexagonally warped lower Dirac cone of Bi2Te3 shown in the inset of Fig. 1(b) [26]). The Dirac node in BTS221 is found to be nearly buried within the bulk valence band, which makes the surface state in the lower cone degenerate with bulk bands. As a result, the intrinsic CD effect associated with the lower Dirac cone cannot be clearly disentangled from the bulk. We therefore focus on the detailed CD behavior of the upper Dirac band.
A clear surface-state CD response in the photoelectron signal from the upper Dirac cone is observed, where the +k Dirac branch is positive and the -k Dirac branch is negative in CD intensity (Fig. 2). The magnitude of the CD response signal, defined as I_CD = (I_RCP − I_LCP)/(I_RCP + I_LCP), is observed to be about 20% for incident photons with an energy of 18 eV in BTS221, for electrons with a binding energy of about 150 meV, well below the chemical potential. This CD behavior is qualitatively consistent with previous work on other TIs such as Cu_x Bi_2 Se_3 [11] and Bi_2 Se_3 [12,14,27]. Previously, such a CD response has been used to derive the details of spin texture and chirality under the assumption that the response measured at a single frequency qualitatively samples the complete surface-state wavefunction properties. In a multi-orbital system, where the surface state penetrates more than the very top layer, a single photon energy may not capture the full details of the wavefunction. Indeed, the analogously measured CD response for 23 eV photons shows, in our data (Fig. 2(b)), a momentum-space reversal of the CD sign per Dirac band, namely, the +k Dirac branch is negative whereas the -k Dirac branch is positive. The reversal of CD between 18 and 23 eV is also seen in our systematic measurements of the momentum distribution profiles (Figs. 2(c-f)). We further study the CD response of these samples with photons of intermediate energies to determine the functional dependence on photon frequency or energy. Fig. 3 reveals the systematics of the CD response, magnitude and sign, in BTS221 for photon energies from 20 to 31 eV. We find that the reversal of CD between 18 eV and 23 eV is in fact part of a full oscillation profile.
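For readers reproducing such maps, the normalized dichroism defined above is a simple pixel-wise operation on the two polarization-resolved intensity maps. The sketch below uses placeholder arrays in place of real analyzer output.

```python
import numpy as np

# Hypothetical RCP/LCP intensity maps I(k, E) on the same (k, E) grid;
# random placeholders stand in for measured spectra here.
I_rcp = np.random.rand(200, 400) + 0.5
I_lcp = np.random.rand(200, 400) + 0.5

# Normalized circular-dichroism signal used in the text:
# I_CD = (I_RCP - I_LCP) / (I_RCP + I_LCP)
I_cd = (I_rcp - I_lcp) / (I_rcp + I_lcp)
print(I_cd.min(), I_cd.max())   # bounded in [-1, 1] by construction
```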
In order to further check the generality of our observation, we perform surface CD-ARPES measurements on two other TI systems, namely Bi_1.4 Sb_0.6 Te_1.5 S_1.5 (BiSbTeS) and the prototype Bi_2 Te_3 (see [24] for details). The flipping of the CD sign is also observed in these compounds. The flipping of CD in metallic Bi_2 Te_3 is in agreement with ref. [20]. These systematic measurements imply that the CD modulation and sign-flip behavior are likely to be a general property of the topological surface states beyond BTS221. Our results indicate that the intrinsic CD of the topological surface states is strongly modulated with photon energy, and the existence of the sign flip suggests that the CD signal cannot be a straightforward reflection of the spin texture of the initial ground state [20,28,29]. Taken alone, the experimental data suggest a lack of unique correspondence between the dichroism and the spin-texture chirality, namely left- or right-handedness, which is critical for the mirror Chern number measurement reflecting the class of topological order.
In order to understand the general trend observed in our experimental data, we carry out DFT (GGA) based first-principles calculations with consideration of the photoemission electron-photon scattering process and its microscopic relation to the measured CD profile. We write the photoemission matrix element as

M_±(k_f) = Σ_{i,nlm} J_{i,nlm}(k_f) [ ±ia χ_{nlm,↓(↑)} + (1 ∓ ib) χ_{nlm} ],   (1)

where the '+' and '−' signs refer to the right and left circularly polarized light, respectively. Here a and b are constants related to the crystal potential, J_{i,nlm}(k_f) = (−i)^l e^{i k_f·R_i} F_{nl}(k_f) Y_{lm}(θ_{k_f}, φ_{k_f}) and χ_{nlm} = χ_{nlm,↑} − χ_{nlm,↓}. F_{nl}(k_f) = ∫ r^2 dr j_l(k_f r) R_{nl}(r) is the form factor associated with the atomic orbital (nlm), j_l is the spherical Bessel function, k_f is the momentum of the free-electron final state, and Y_{lm} is the spherical harmonic for the angular variables of k_f. The initial state is expanded into atomic orbitals (nlm) of the i-th atom in the unit cell at position R_i, |i⟩ = R_{nl}(r) Y_{lm}(θ, φ) |χ_{nlm}⟩, where the spin function is |χ_{nlm}⟩ = χ_{nlm,↑}|↑⟩ + χ_{nlm,↓}|↓⟩, with |↑⟩ and |↓⟩ being the spin eigenstates for the quantization axis along the z-direction. The circular dichroism is a consequence of the combination of the spectral function of the topological surface state and the photoemission matrix element under left and right circularly polarized light:

I_CD(k_f, k, E) ∝ [ |M_+(k_f, ↑, ↓)|^2 − |M_−(k_f, ↑, ↓)|^2 ] W(k, E),   (2)

where W(k, E) is the spectral function of the initial topological surface states, and k, E, ↑, and ↓ are the momentum, the energy, and the spin of the initial topological surface states, respectively. The details of the calculation are given in [24]. It can be seen from equation (2) that the term (|M_+|^2 − |M_−|^2) is a function of the spin of the initial ground states of the topological surface states. More importantly, we also note that the photoelectron momentum k_f is a photon-energy-dependent quantity, and consequently I_CD is also a function of the incident photon energy, as follows from equation (2). Thus our model naturally explains the experimental observation of the CD signal dependence on incident photon energy. In Figs. 4(a) and 4(b) we plot the calculated CD spectra for two sets of photon energies, and the sign-flip oscillation is seen in Fig. 4(c). The structure factor e^{i k_f·R_i} also indicates that the surface states may have a spatially dependent orbital mixing, which can be probed by varying the perpendicular component of the Bloch wave vector, k_{f⊥}, through its dependence on photon energy. As a result, the CD of the surface states could have a non-trivial dependence on the k_{f⊥} of the emitted electron. Thus the photon-energy-dependent photoelectron interference effect plays a role in shaping the spectra by masking the chirality of the spin texture of the initial electronic states.
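The role of the photon-energy-dependent structure factor e^{i k_f·R_i} in Eq. (1) can be illustrated with a deliberately minimal two-layer toy model. This is not the tight-binding calculation of [24]; the layer spacing, amplitudes, inner potential and work function below are assumed for illustration only. Two emitting layers separated by d contribute with a relative phase k_{f⊥}d that varies with photon energy through the standard free-electron final-state approximation, so the resulting CD oscillates and can change sign:

```python
import numpy as np

# Toy two-layer interference model (all parameter values are assumptions).
hbar2_2m = 3.81          # eV*A^2, hbar^2/(2 m_e) for a free electron
V0, W = 10.0, 4.5        # assumed inner potential and work function (eV)
d = 7.5                  # assumed effective layer separation (A)
a, b = 1.0, 0.6          # assumed layer amplitudes

hv = np.linspace(18, 31, 200)            # photon-energy range studied in the text (eV)
Ekin = hv - W                            # photoelectron kinetic energy
kperp = np.sqrt((Ekin + V0) / hbar2_2m)  # free-electron final-state k_f_perp (A^-1)

# M_pm = a +/- i*b*exp(i*kperp*d): the polarization-dependent part interferes with
# the photon-energy-dependent structure factor exp(i k_f . R_i) of Eq. (1).
M_plus  = a + 1j * b * np.exp(1j * kperp * d)
M_minus = a - 1j * b * np.exp(1j * kperp * d)
I_cd = np.abs(M_plus)**2 - np.abs(M_minus)**2   # numerator of Eq. (2)

print(np.sign(I_cd[::40]))   # sign changes across 18-31 eV mimic the observed reversal
```

Changing d or the amplitudes moves the photon energies at which the sign flips occur, which is the qualitative point: the flip positions encode final-state interference rather than the ground-state spin texture alone.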
According to the helical spin-texture, the CD of the lower Dirac cone is expected to have opposite sign with respect to that of the upper Dirac cone. However, CD of the lower Dirac cone is found to have a complex profile in which at certain photon energies it even shows the same sign as the upper Dirac cone. This behavior of CD of the lower Dirac cone can possibly be attributed to the intermixing between the bulk valence bands and surface states well below the Fermi level. Furthermore, we use the three-step model of photoemission to determine the ARPES matrix element which is technically very different from the one-step model [20]. Despite the use of the one-step formulation of ARPES in treating the photoemission process as a single coherent event, it is often difficult to adduce physical microscopic insight into the origin of spectral features resulting from these quite complex calculations. The three-step modeling, on the other hand, is very transparent, allowing straightforward disentanglement of various factors at play in the observed spectral features by considering the microscopic processes at play. Thus, our model is able to identify the photon energy dependent structure factor e ik f ·R i and J i,nlm (k f ) to be the physical quantities controlling the sign-flip of the CD profile. This model can be applied further to disentangle the contributions of various orbitals as a function of photon energy.
In conclusion, we presented a systematic photon-energy-dependent circular polarization response of photoelectrons in ARPES, revealing anomalous behavior of the CD signal on topological insulator surfaces. Our experimental results, supported by our theoretical calculations, suggest that the measured CD response depends not only on the orbital/spin angular momentum of the initial states but also on the photon energy sampled, the mixed-orbital content, and the details of the coupling mechanism of the initial state to the electric field of the incident light.
Our experimental findings reveal a rich response behavior of topological surface states and thus open new avenues for understanding and controlling topological insulator properties with polarized light.
FIG. 1 .
1(a) Crystal structure of BTS221. Red, green and blue circles represent the Se, Bi and Te atoms, respectively. (b) Normalized in-plane resistivities (R/R300) plotted as a function of temperature (T) for BTS221. The resistivity profiles of Bi2Se3 and Bi2Te3 are added for comparison. Insets show the Fermi surface plots for BTS221 (upper panel) and Bi2Te3 (lower panel). Arrows around the FS represent the in-plane spin-texture. (c) Shubnikov-de Haas oscillation measurements on the topological surface of BTS221 (see[25] for details).
FIG. 2 .
(a) High-resolution ARPES measurements of Bi2Te2Se for right circularly polarized (RCP) light, left circularly polarized (LCP) light, and the photoelectron circular dichroism (I_RCP − I_LCP), measured with photon energy ω1 = 18 eV. (b) Analogous measurements as in (a) for photon energy ω2 = 23 eV. These spectra are measured along the Γ−M high-symmetry momentum-space cut. (c) The measured CD values for binding energies of 50 meV and 100 meV, as marked on the I_CD plot of (a) by black dashed lines and denoted by numbers 1 and 2. (d) Similar measurements as (c) for (b). (e) and (f) The momentum distribution curves of the CD spectra for 18 eV and 23 eV photons, respectively. See [24] for additional CD measurement data.
FIG. 3 .
(a) The measured CD values are plotted as a function of photon energy. The photoelectron CD value is estimated as I_CD = (I_+ − I_−)/(I_+ + I_−) for data taken with momentum k ∼ −0.05 Å^-1 and binding energy ∼ 100 meV. Arrows represent the photon energies of the representative CD-ARPES spectra presented in (b). The inset shows the geometry of the ARPES measurement (see text for details). (b) ARPES plots of the circular dichroic photoemission at various photon energies. The corresponding photon energies are noted on the plots. The spectra taken with 20-23 eV exhibit negative CD values (top row), while those taken with 24-31 eV exhibit positive CD values (bottom row). See [24] for additional data.
FIG. 4 .
4Model calculations are carried out for photon energies of (a) 21 eV and 22 eV, and (b) 24 eV and 25 eV. The change in the sign of CD can be observed by comparing spectra shown in (a) and (b). (c) The theoretically calculated ICD at constant kx and E as a function of photon energy. The black arrow indicates the photon energy value where flipping of CD sign is expected in our theoretical model for BTS221.
J. E. Moore, Nature 464, 194 (2010).
M. Z. Hasan and C. L. Kane, Rev. Mod. Phys. 82, 3045 (2010).
X.-L. Qi and S.-C. Zhang, Rev. Mod. Phys. 83, 1057 (2011).
D. Hsieh et al., Nature 452, 970 (2008).
Y. Xia et al., Nat. Phys. 5, 398 (2009).
Y. L. Chen et al., Science 325, 178 (2009).
D. Hsieh et al., Phys. Rev. Lett. 103, 146401 (2009).
S. Raghu et al., Phys. Rev. Lett. 104, 116401 (2010).
J. Wunderlich et al., Science 330, 1801 (2010).
P. Hosur et al., Phys. Rev. B 83, 035309 (2011).
Y. Ishida et al., Phys. Rev. Lett. 107, 077601 (2011).
S. R. Park et al., Phys. Rev. Lett. 108, 046805 (2012).
J.-H. Park et al., Phys. Rev. B 85, 195401 (2012).
Y. H. Wang et al., Phys. Rev. Lett. 107, 207602 (2011).
J. W. McIver et al., Nat. Nano. 7, 96 (2012).
Y. Wang and N. Gedik, Phys. Status Solidi RRL 7, 64 (2013).
S.-Y. Xu et al., arXiv:1007.5111v1 (2010).
M. Neupane et al., Phys. Rev. B 85, 235406 (2012).
S. V. Eremeev et al., Nat. Commun. 3, 635 (2011).
M. R. Scholz et al., Phys. Rev. Lett. 110, 216801 (2013).
H. Ji et al., Phys. Rev. B 85, 201103 (2012).
Z. Ren et al., Phys. Rev. B 82, 241306(R) (2010).
S. Jia et al., Phys. Rev. B 86, 165119 (2012).
J. Xiong et al., Physica E 44, 917 (2012).
L. Fu, Phys. Rev. Lett. 103, 266801 (2009).
M. S. Bahramy et al., Nat. Commun. 3, 1159 (2012).
C. Jozwiak et al., Nat. Phys. 9, 293 (2013).
Z.-H. Zhu et al., Phys. Rev. Lett. 110, 216401 (2013).
Correspondence and requests for materials should be addressed to M.Z.H. (Email: [email protected]).
| [] |
[
"SYMPLECTIC SYMMETRY AND RADIAL SYMMETRY EITHER PERSISTENCE OR BREAKING OF INCOMPRESSIBLE FLUID",
"SYMPLECTIC SYMMETRY AND RADIAL SYMMETRY EITHER PERSISTENCE OR BREAKING OF INCOMPRESSIBLE FLUID"
] | [
"Yongqian Han "
] | [] | [] | The incompressible Navier-Stokes equations are considered. We find that these equations have symplectic symmetry structures. Two linearly independent symplectic symmetries form moving frame. The velocity vectors possess symplectic representations in this moving frame. The symplectic representations of twodimensional Navier-Stokes equations hold radial symmetry persistence. On the other hand, we establish some results of radial symmetry either persistence or breaking for the symplectic representations of three-dimensional Navier-Stokes equations. Thanks radial symmetry persistence, we construct infinite non-trivial solutions of static Euler equations with given boundary condition. Therefore the randomness and turbulence of incompressible fluid appear provided Navier-Stokes flow converges to static Euler flow.2000 Mathematics Subject Classification. 35Q30, 76D05, 76F02, 37L20. | null | [
"https://export.arxiv.org/pdf/2305.13737v1.pdf"
] | 258,841,234 | 2305.13737 | 2b295c5cbf45e442614c7d9f9dfa20d10aafb595 |
SYMPLECTIC SYMMETRY AND RADIAL SYMMETRY EITHER PERSISTENCE OR BREAKING OF INCOMPRESSIBLE FLUID
Yongqian Han
23 May 2023
The incompressible Navier-Stokes equations are considered. We find that these equations have symplectic symmetry structures. Two linearly independent symplectic symmetries form a moving frame. The velocity vectors possess symplectic representations in this moving frame. The symplectic representations of the two-dimensional Navier-Stokes equations exhibit radial symmetry persistence. On the other hand, we establish some results of radial symmetry either persistence or breaking for the symplectic representations of the three-dimensional Navier-Stokes equations. Thanks to radial symmetry persistence, we construct infinitely many non-trivial solutions of the static Euler equations with a given boundary condition. Therefore, the randomness and turbulence of the incompressible fluid appear provided the Navier-Stokes flow converges to a static Euler flow.
2000 Mathematics Subject Classification. 35Q30, 76D05, 76F02, 37L20.
1. Introduction
The Navier-Stokes equations in R^3 with initial data are given by
u_t − ν∆u + (u · ∇)u + ∇P = 0,   (1.1)
∇ · u = 0,   (1.2)
u|_{t=0} = u_0,   (1.3)
where u = u(t, x) = (u^1(t, x), u^2(t, x), u^3(t, x)) and P = P(t, x) stand for the unknown velocity vector field of the fluid and its pressure, u_0 = u_0(x) = (u_0^1(x), u_0^2(x), u_0^3(x)) is the given initial velocity vector field satisfying ∇ · u_0 = 0, and ν > 0 is the coefficient of viscosity.
Here ∂_{x_j} is denoted by ∂_j (j = 1, 2, 3).
For the mathematical setting of this problem, we introduce the Hilbert space
H(R^3) = { u ∈ L^2(R^3)^3 : ∇ · u = 0 },
endowed with the L^2(R^3)^3 norm (resp. scalar product, denoted by (·, ·)). For simplicity of presentation, the space H^m(R^3)^3 is denoted by H^m, where m ≥ 0. In what follows we use the usual convention and sum over repeated indices. The Fourier transform of u(t, x) with respect to x is denoted by û(t, ξ). Then
ξ_j û^j = ξ_1 û^1 + ξ_2 û^2 + ξ_3 û^3 = 0.   (1.4)
That is, û(t, ξ) is perpendicular to ξ, which we denote by ξ ⊥ û; equivalently, û(t, ξ) ∈ T_ξ S^2. Let A = (a, b, c) ∈ R^3 − {0} and ξ ∈ S^2. Then the vector {A × ξ} ∈ T_ξ S^2 is called the 1st-order incompressible symplectic symmetry. Take a 3 × 3 matrix M = (m_{ij}). Then the vector {ξ × (Mξ)} ∈ T_ξ S^2 is called the 2nd-order incompressible symplectic symmetry. Generally, let T = (T_{i_1···i_n}) be an nth-order tensor and T(ξ···ξ) = T_{i_1 i_2···i_n} ξ_{i_2} ··· ξ_{i_n}. Then the vector {ξ × T(ξ···ξ)} ∈ T_ξ S^2 is called the nth-order incompressible symplectic symmetry.
Let the vectors A ∈ R^3 − {0} and B ∈ R^3 − {0} be linearly independent. Then A × ξ and B × ξ are linearly independent for any ξ ∈ R^3 − {0}, and they form a basis of the space T_ξ S^2, the so-called moving frame. Hence there exist φ̂(ξ) and ψ̂(ξ) such that
û(ξ) = φ̂(ξ) · {A × ξ} + ψ̂(ξ) · {B × ξ},  ∀ û(ξ) ∈ T_ξ S^2.
Therefore, for any velocity vector u satisfying the equation (1.2), there exist linearly independent vectors A, B ∈ R^3 − {0} and real scalar functions φ and ψ such that
u(t, x) = {A × ∇}φ(t, x) + {B × ∇}ψ(t, x).   (1.5)
We say that the formulation (1.5) is the (1,1)-symplectic representation of the velocity vector u.
Similarly, we say that the formulation
u(t, x) = {(A × ∇) × ∇}φ(t, x) + {(B × ∇) × ∇}ψ(t, x)   (1.6)
is the (2,2)-symplectic representation of the velocity vector u, and that the formulation
u(t, x) = {A × ∇}φ(t, x) + {(B × ∇) × ∇}ψ(t, x)   (1.7)
is the (1,2)-symplectic representation of the velocity vector u.
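As a sketch of how these representations can be checked symbolically (only the definitions above are used; the helper names are ours), the following SymPy snippet verifies that both the (1,1)-form (1.5) and the (1,2)-form (1.7) are automatically divergence-free for arbitrary smooth φ, ψ and constant vectors A, B.

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
X = (x1, x2, x3)
a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3', real=True)
A = sp.Matrix([a1, a2, a3]); B = sp.Matrix([b1, b2, b3])
phi = sp.Function('phi')(*X); psi = sp.Function('psi')(*X)

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in X])
div  = lambda F: sum(sp.diff(F[i], X[i]) for i in range(3))

def second_order(Bvec, f):
    # {(B x nabla) x nabla} f, component i = sum eps_{ijk} eps_{jlm} B_l d_m d_k f
    comps = []
    for i in range(3):
        s = sp.S.Zero
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    for m in range(3):
                        s += sp.LeviCivita(i, j, k) * sp.LeviCivita(j, l, m) \
                             * Bvec[l] * sp.diff(f, X[m], X[k])
        comps.append(s)
    return sp.Matrix(comps)

u_11 = A.cross(grad(phi)) + B.cross(grad(psi))      # the (1,1)-representation (1.5)
u_12 = A.cross(grad(phi)) + second_order(B, psi)    # the (1,2)-representation (1.7)

print(sp.simplify(div(u_11)))   # expected: 0
print(sp.simplify(div(u_12)))   # expected: 0
```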
There is a large literature studying the incompressible Navier-Stokes equations. In 1934 Leray [32] proved that there exists a global weak solution to the problem (1.1)-(1.3) with initial data in L^2. In 1951 Hopf [22] extended this result to bounded smooth domains. Moreover, Leray-Hopf weak solutions satisfy the energy inequality [51]
‖u(t, ·)‖^2_{L^2} + 2 ∫_0^t ‖∇u(τ, ·)‖^2_{L^2} dτ ≤ ‖u_0‖^2_{L^2},  ∀t > 0.   (1.8)
The uniqueness and regularity of Leray-Hopf weak solutions is a famous open question. Numerous regularity criteria have been proved [12,15,27,28,30,40,48,54].
Local existence and uniqueness of H^m solutions can be established by using the analytic semigroup [36] with initial data in H^m(R^3), m ≥ 1. This result is stated as follows. Local existence and uniqueness of mild solutions or strong solutions were established [4,17,19,24,25,26,57] with initial data in L^p(R^3), p > 3. The main result is as follows. The uniqueness in Propositions 1.1 and 1.2 ensures that the symplectic symmetries corresponding to the initial data u_0 are kept by the solution u.
Besides the local well-posedness, lower bounds for possible blowup solutions were considered [8,9,11,15,19,32,41]. The concentration phenomenon of possible blowup solutions was studied in [33].
It is well-known that the equations (1.1) (1.2) are scaling-invariant in the sense that if u solves (1.1) (1.2) with initial data u_0, so does u_λ(t, x) = λu(λ^2 t, λx) with initial data λu_0(λx). A space X defined on R^3 is called critical provided ‖u_0‖_X = ‖λu_0(λ·)‖_X for any λ > 0. L^3(R^3) is one of the critical spaces. For initial data in critical spaces, global well-posedness of the equations (1.1)-(1.3) has been obtained [5,10,19,29,39] for small initial data. Regularity criteria were established [12,13,27,34,47]. On the other hand, ill-posedness was shown in [2,14,55,58].
Solutions of the problem (1.1)-(1.3) have also been studied in various function spaces [6,16,18,23,29,50]. Partial regularity of suitable weak solutions was established [3,31,35,43,44,53,56]. Non-existence of self-similar solutions was proved [38,52]. Decay of the solutions can be found in [7,19,37,45,46], etc.
In this paper, we study the radial symmetry of symplectic representation for the solutions of the problem (1.1)-(1.3).
Firstly, we consider the two-dimensional Navier-Stokes equations. Here x = (x_1, x_2) ∈ R^2. The main result is as follows.
Theorem 1.3. Consider the (1,1)-symplectic representation
u(t, x) = (−∂_2 φ(t, x), ∂_1 φ(t, x), 0),   (1.9)
u_0(x) = (−∂_2 φ_0(x), ∂_1 φ_0(x), 0).   (1.10)
Assume that the initial data φ_0(x) is a radially symmetric function and regular enough. Then there exists a unique global solution u of the problem (1.1)-(1.3) such that u satisfies (1.9) and
φ(t, x) = (1/(4πνt)) ∫_{R^2} exp{−|y|^2/(4νt)} φ_0(x − y) dy.   (1.11)
Moreover, the function φ is a radially symmetric function.
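A discretized illustration of this persistence statement (a sketch only: the grid, viscosity and profile below are arbitrary, and the FFT evolution is a periodic-box stand-in for the convolution (1.11)) evolves a radial φ_0 by the heat flow and checks that the result still depends on |x| alone.

```python
import numpy as np

# Evolve a radial phi0 with the 2-D heat flow and verify the result stays radial.
nu, t, L, n = 0.1, 0.5, 10.0, 256
xs = np.linspace(-L/2, L/2, n, endpoint=False)
X, Y = np.meshgrid(xs, xs, indexing='ij')
R = np.hypot(X, Y)

phi0 = np.exp(-R**2)                      # an arbitrary radial initial profile

kx = 2*np.pi*np.fft.fftfreq(n, d=L/n)     # heat evolution via FFT, i.e. convolution
KX, KY = np.meshgrid(kx, kx, indexing='ij')
phi_t = np.fft.ifft2(np.fft.fft2(phi0) * np.exp(-nu*t*(KX**2 + KY**2))).real

# Radial symmetry persistence: profiles along the x- and y-axes coincide.
print(np.allclose(phi_t[:, n//2], phi_t[n//2, :], atol=1e-10))
```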
By the proof of Theorem 1.3, equation (2.3) means that u_0 defined by (1.10) is a static Euler flow provided φ_0(x) is a radially symmetric function. Let Φ(s) ∈ C_c^∞(0, 1) and φ_0(x) = φ_0(r) = Φ(r/R_a) for any R_a > 0, where x ∈ R^2 and r^2 = x · x. Then u = (−∂_2 φ_0(r), ∂_1 φ_0(r), 0) is a solution of the static two-dimensional Euler equations
(u · ∇)u + ∇P = 0,  ∇ · u = 0,   (1.12)
with Dirichlet boundary condition
u|_{∂B^2(r<R_a)} = 0.   (1.13)
It is obvious that the number of solutions of the static two-dimensional Euler equations (1.12) (1.13) is infinite.
Provided the Navier-Stokes flow (1.1)-(1.3) (1.13) converges to a static Euler flow (1.12) (1.13) as t → ∞ and ν → 0, formally the randomness and turbulence of the incompressible fluid appear, which is due to the fact that there exist infinitely many non-trivial solutions of the static two-dimensional Euler equations (1.12) with Dirichlet boundary condition (1.13).
For instance, there exists a unique Navier-Stokes flow
u(t, x) = (−∂_2 φ(t, x), ∂_1 φ(t, x), 0),
φ(t, x) = (1/(4πνt)) ∫_{R^2} exp{−|y|^2/(4νt)} Φ(|x − y|/R_a) dy.   (1.14)
Assume that
lim_{t→∞, ν→0} tν = w,  where w is a random number.   (1.15)
We solve the limit of the Navier-Stokes flow defined by (1.14):
u_e(r) = lim_{t→∞, ν→0} u_ns(t, x) = (−∂_2 φ_c(r), ∂_1 φ_c(r), 0),
φ_c(r) = φ_c(x) = (1/(4πw)) ∫_{R^2} exp{−|y|^2/(4w)} Φ(|x − y|/R_a) dy.   (1.16)
It is obvious that the random u_e is a solution of the static Euler equations (1.12). Similarly, let Φ_j(r) = α sin(jr) + β cos(jr), j = 1, 2, ···, where α and β are any constants. Then
u(x) = (−∂_2 Φ_j(r), ∂_1 Φ_j(r), 0)
is a solution of the static two-dimensional Euler equations (1.12) with symplectic representation
u(x) = (−∂_2 φ(x), ∂_1 φ(x), 0),
and periodic boundary condition
φ(r + 2π) = φ(r),  ∀r ≥ 0,   (1.17)
for any j = 1, 2, ···. Considering the two-dimensional Navier-Stokes equations (1.1)-(1.3) with symplectic representation (1.9) (1.10), periodic boundary condition (1.17) and initial data φ_0 = Φ_k for given k, α and β, there exists a unique solution u of this problem.
Provided the Navier-Stokes flow (1.1)-(1.3) (1.17) converges to a static Euler flow (1.12) (1.17) as t → ∞ and ν → 0, formally the randomness and turbulence of the incompressible fluid appear, which is due to the fact that there exist infinitely many non-trivial solutions of the static two-dimensional Euler equations (1.12) with periodic boundary condition (1.17).
For instance, there exists a unique Navier-Stokes flow
u(t, x) = (−∂_2 φ_k(t, r), ∂_1 φ_k(t, r), 0),
φ_k(t, r) = φ_k(t, x) = (1/(4πνt)) ∫_{R^2} exp{−|y|^2/(4νt)} Φ_k(|x − y|) dy.   (1.18)
Provided assumption (1.15) is satisfied, we solve the limit of the Navier-Stokes flow defined by (1.18):
u_e(r) = lim_{t→∞, ν→0} u_ns(t, x) = (−∂_2 φ_k(r), ∂_1 φ_k(r), 0),
φ_k(r) = φ_k(x) = (1/(4πw)) ∫_{R^2} exp{−|y|^2/(4w)} Φ_k(|x − y|) dy.   (1.19)
It is obvious that the random u_e is a solution of the static Euler equations (1.12). For example, take φ_0(r) = r^{−2/p+1}. By Theorem 1.3, there exists a unique global solution u although ‖u_0‖_{L^p(R^2)} = ∞ for any 1 ≤ p ≤ ∞. The problem is whether this solution u has continuous dependence on the initial data or not.
In contrast to the radial symmetry persistence of the two-dimensional Navier-Stokes equations, more complicated and more interesting phenomena appear for the three-dimensional Navier-Stokes equations.
We find that the following two equations
(1/r)∂_r ψ · ∂_r((1/r)∂_r φ) − (1/r)∂_r φ · ∂_r((1/r)∂_r ψ) = 0,   (1.20)
(1/r)∂_r φ · ∂_r((1/r)∂_r φ) + (1/r)∂_r ψ · ∂_r((1/r)∂_r ∆ψ) = 0,   (1.21)
play a key role in solving the three-dimensional Navier-Stokes equations and the static three-dimensional Euler equations.
Theorem 1.4. Consider the symplectic representation
u(t, x) = (A × ∇)φ(t, x) + {(A × ∇) × ∇}ψ(t, x),   (1.22)
u_0(x) = (A × ∇)φ_0(x) + {(A × ∇) × ∇}ψ_0(x),   (1.23)
where the vector A ∈ R^3 − {0}. Let φ_0(x) and ψ_0(x) be regular enough.
(I) Provided (φ, ψ) = (Φ(r), Ψ(r)) is a solution of equations (1.20) (1.21), then u = (A × ∇)Φ + {(A × ∇) × ∇}Ψ is a solution of the static Euler equations (1.12).
(II) (Radial Symmetry Persistence) Provided (φ_0, ψ_0) = (Φ(r), Ψ(r)). Let us define
φ(t, x) = φ(t, r) = e^{−ν∆t}φ_0 = e^{−ν∆t}Φ,  ψ(t, x) = ψ(t, r) = e^{−ν∆t}ψ_0 = e^{−ν∆t}Ψ.   (1.24)
Then u defined by (1.22) with (1.24) is the solution of the problem (1.1)-(1.3).
(III) (Radial Symmetry Breaking) Provided (φ_0, ψ_0) ≠ (Φ(r), Ψ(r)), and u defined by (1.22) is a solution of the problem (1.1)-(1.3). Then the functions φ(t, x) and ψ(t, x) defined by (1.22) cannot be radially symmetric functions of x for any t > 0, although the initial data φ_0(x) and ψ_0(x) are radially symmetric functions.
Corollary 1.5. Let us define
Φ_{λαβ}(r) = λΨ_{λαβ}(r),  Ψ_{λαβ}(r) = α (1/r) sin(λr) + β (1/r) cos(λr),   (1.25)
where λ, α and β are any real constants. Then (φ, ψ) = (Φ_{λαβ}(r), Ψ_{λαβ}(r)) satisfies equations (1.20) (1.21), and u = (A × ∇)Φ_{λαβ} + {(A × ∇) × ∇}Ψ_{λαβ} is a solution of the static three-dimensional Euler equations (1.12).
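Corollary 1.5 can be checked symbolically; the short SymPy sketch below (our own helper notation) confirms that Φ_{λαβ} = λΨ_{λαβ} with Ψ_{λαβ} = α sin(λr)/r + β cos(λr)/r satisfies (1.20) and (1.21), using the radial Laplacian ∆ = ∂_r^2 + (2/r)∂_r.

```python
import sympy as sp

r, lam, alpha, beta = sp.symbols('r lam alpha beta', positive=True)
Dr  = lambda f: sp.diff(f, r)
lap = lambda f: sp.diff(f, r, 2) + 2/r*sp.diff(f, r)   # radial Laplacian in R^3

Psi = alpha*sp.sin(lam*r)/r + beta*sp.cos(lam*r)/r     # Psi_{lam,alpha,beta} of (1.25)
Phi = lam*Psi                                          # Phi_{lam,alpha,beta} = lam * Psi

eq_120 = (1/r)*Dr(Psi)*Dr((1/r)*Dr(Phi)) - (1/r)*Dr(Phi)*Dr((1/r)*Dr(Psi))
eq_121 = (1/r)*Dr(Phi)*Dr((1/r)*Dr(Phi)) + (1/r)*Dr(Psi)*Dr((1/r)*Dr(lap(Psi)))

print(sp.simplify(eq_120), sp.simplify(eq_121))        # expected: 0 0
```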
Let us take
φ(t, x) = φ(t, r) = e^{−νλ^2 t} Φ_{λαβ} = λψ(t, x),  ψ(t, x) = ψ(t, r) = e^{−νλ^2 t} Ψ_{λαβ}.   (1.26)
In Corollary 1.5, let λ = j = 1, 2, ···. Then
u = (A × ∇)Φ_{jαβ} + {(A × ∇) × ∇}Ψ_{jαβ}
is the solution of static three dimensional Euler equations (1.12) with symplectic representation
(1.27) u = (A × ∇)φ + {(A × ∇) × ∇}ψ
and periodic boundary condition
{ξφ(ξ)}|_{ξ=r+2π} = rφ(r),  {ξψ(ξ)}|_{ξ=r+2π} = rψ(r),  ∀r ≥ 0,   (1.28)
for any j = 1, 2, ···.
Considering three dimensional Navier-Stokes equations (1.1)-(1.3) with symplectic representation (1.22) (1.23), periodic boundary condition (1.28) and initial data (φ 0 , ψ 0 ) = Φ kαβ , Ψ kαβ for given k, α and β, there exists a unique solution u of this problem.
Provided Navier-Stokes flow (1.1)-(1.3) (1.28) converges to static Euler flow (1.12) (1.28) as t → ∞ and ν → 0, formally the randomness and turbulence of incompressible fluid appear, which is due to there exist infinite non-trivial solutions of static three dimensional Euler equations (1.12) with periodic boundary condition (1.28).
For instance, provided assumption (1.15) is satisfied, we solve the limit of Navier-Stokes flow defined by (1.22) (1.26)
u_e(r) = lim_{t→∞, ν→0} u_ns(t, x) = (A × ∇)φ_{λαβ}(r) + {(A × ∇) × ∇}ψ_{λαβ}(r),
φ_{λαβ}(r) = e^{−λ^2 w} Φ_{λαβ}(r),  ψ_{λαβ}(r) = e^{−λ^2 w} Ψ_{λαβ}(r).   (1.29)

(1/r)∂_r Ψ · ∂_r((1/r)∂_r Φ) − (1/r)∂_r Φ · ∂_r((1/r)∂_r Ψ) = 0,   (1.30)
(1/r)∂_r Φ · ∂_r((1/r)∂_r Φ) + (1/r)∂_r Ψ · ∂_r((1/r)∂_r ∆Ψ) = 0.   (1.31)
Let φ(t, x) = e^{−ν∆t}Φ, ψ(t, x) = e^{−ν∆t}Ψ, and
u(t, x) = (A × ∇)φ(t, x) + {(B × ∇) × ∇}ψ(t, x),   (1.33)
u_0(x) = (A × ∇)φ_0(x) + {(B × ∇) × ∇}ψ_0(x),   (1.34)
where the vectors A ∈ R^3 − {0} and B ∈ R^3 − {0} are perpendicular to each other, A ⊥ B. Let φ_0(x) and ψ_0(x) be regular enough. Then the functions φ(t, x) and ψ(t, x) cannot be radially symmetric functions of x for any t > 0, although the initial data φ_0(x) and ψ_0(x) are radially symmetric functions, except that φ_0 = f_2 r^2 + f_0, ψ_0 = g_4 r^4 + g_2 r^2 + g_0 and f_2 g_4 = 0. Here f_j and g_j (j = 0, 2, 4) are arbitrary constants.
Especially, provided u_0 ≠ 0 and u_0 ∈ L^2(R^3), the functions φ(t, x) and ψ(t, x) cannot be radially symmetric functions of x for any t > 0, although the initial data φ_0(x) and ψ_0(x) are radially symmetric functions.
Next, consider the (1,1)-symplectic representation
u(t, x) = (A × ∇)φ(t, x) + (B × ∇)ψ(t, x),   (1.35)
u_0(x) = (A × ∇)φ_0(x) + (B × ∇)ψ_0(x),   (1.36)
where the vectors A ∈ R^3 − {0} and B ∈ R^3 − {0} are linearly independent. Let φ_0(x) and ψ_0(x) be regular enough. Then the functions φ(t, x) and ψ(t, x) cannot be radially symmetric functions of x for any t > 0, although the initial data φ_0(x) and ψ_0(x) are radially symmetric functions, except that φ_0 = f_2 r^2 + f_0 and ψ_0 = g_2 r^2 + g_0. Here f_j and g_j (j = 0, 2) are arbitrary constants.
Especially, provided u_0 ≠ 0 and u_0 ∈ L^2(R^3), the functions φ(t, x) and ψ(t, x) cannot be radially symmetric functions of x for any t > 0, although the initial data φ_0(x) and ψ_0(x) are radially symmetric functions.
Similarly, for the (2,2)-symplectic representation (1.6), where the vectors A ∈ R^3 − {0} and B ∈ R^3 − {0} are linearly independent and φ_0(x), ψ_0(x) are regular enough, the functions φ(t, x) and ψ(t, x) cannot be radially symmetric functions of x for any t > 0, although the initial data φ_0(x) and ψ_0(x) are radially symmetric functions, except that φ_0 = f_4 r^4 + f_2 r^2 + f_0, ψ_0 = g_4 r^4 + g_2 r^2 + g_0 and f_2 g_4 = f_4 g_2. Here f_j and g_j (j = 0, 2, 4) are arbitrary constants.
Especially, provided u_0 ≠ 0 and u_0 ∈ L^2(R^3), the functions φ(t, x) and ψ(t, x) cannot be radially symmetric functions of x for any t > 0, although the initial data φ_0(x) and ψ_0(x) are radially symmetric functions.
Here radial symmetry breaking is a kind of singularity of structure. Appropriate orthogonal transformations are the main ingredient of proving these theorems.
The plan of this paper is as follows. Section 2 is devoted to study radial symmetry persistence of two-dimensional Navier-Stokes equations. Section 3 is devoted to show radial symmetry either persistence or breaking of three-dimensional Navier-Stokes equations with (1,2)-symplectic representation. Section 4 is devoted to establish radial symmetry breaking of three-dimensional Navier-Stokes equations with (1,1)-symplectic representation. Section 5 is devoted to investigate radial symmetry breaking of threedimensional Navier-Stokes equations with (2,2)-symplectic representation.
2. (1,1)-Symplectic Representation and Radial Symmetry Persistence in R^2
In this section, we assume that x = (x_1, x_2) ∈ R^2 and the velocity vector u has the following (1,1)-symplectic representation
u(t, x) = u(t, x_1, x_2) = (u^1(t, x_1, x_2), u^2(t, x_1, x_2), 0),
u^1(t, x_1, x_2) = −∂_2 φ(t, x_1, x_2),  u^2(t, x_1, x_2) = ∂_1 φ(t, x_1, x_2).   (2.1)
Putting together (1.1) and (2.1), we have
∂_t ∆φ − ν∆^2 φ = −∂_1[(u · ∇)u^2] + ∂_2[(u · ∇)u^1] = −{φ, ∆φ},   (2.2)
where ∆ = ∂_1^2 + ∂_2^2 and the Poisson bracket {·, ·} is given by {f, g} = ∂_1 f ∂_2 g − ∂_2 f ∂_1 g.
The equation (2.2) is the well-known Hasegawa-Mima equation ([20,21]). Now we assume that φ is a radially symmetric function with respect to the space variable x ∈ R^2. That is, φ(t, x) = φ(t, r) with r^2 = x_1^2 + x_2^2. Then we have
{φ, ∆φ} = (x_1/r)∂_r φ · (x_2/r)∂_r ∆φ − (x_2/r)∂_r φ · (x_1/r)∂_r ∆φ = 0,   (2.3)
so (2.2) reduces to ∆{∂_t φ − ν∆φ} = 0, and we take
∂_t φ − ν∆φ = w_0(t),   (2.4)
where w 0 is any function of t.
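The vanishing of the Poisson bracket in (2.3) for radial φ, which is what collapses (2.2) to the heat-type equation (2.4), can be confirmed symbolically (a sketch with an arbitrary radial profile f):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.Function('f')
phi = f(x1**2 + x2**2)          # an arbitrary radial profile, phi = f(r^2)

lap = sp.diff(phi, x1, 2) + sp.diff(phi, x2, 2)
bracket = sp.diff(phi, x1)*sp.diff(lap, x2) - sp.diff(phi, x2)*sp.diff(lap, x1)
print(sp.simplify(bracket))     # expected: 0, i.e. {phi, laplace(phi)} = 0
```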
There exists a global solution of equation (2.4)
φ(t, x) = (1/(4πνt)) ∫_{R^2} exp{−|y|^2/(4νt)} φ_0(x − y) dy + ∫_0^t w_0(s) ds,   (2.5)
where φ_0(r) = φ(t, r)|_{t=0}. The function φ(t, x) is radially symmetric since φ_0 is radial. Because w_0(t) makes no contribution to the velocity u, we select w_0 = 0.
For any point ξ ∈ S 1 , the unit tangent vector v ∈ T ξ S 1 is unique for given positive direction. Then the moving frame on T S 1 is also unique, and the solution u is unique. Theorem 1.3 is proved.
3. (1,2)-Symplectic Representation and Radial Symmetry Either Persistence or Breaking in R^3
In this section, we assume that the velocity vector u has the following (1,2)-symplectic representation
u(t, x) = (A × ∇)φ(t, x) + {(B × ∇) × ∇}ψ(t, x),   (3.1)
where the vectors A = (a_1, a_2, a_3) ∈ R^3 − {0} and B = (b_1, b_2, b_3) ∈ R^3 − {0}.
Let
ω(t, x) =∇ × u(t, x) = − {(A × ∇) × ∇}φ(t, x) + (B × ∇)∆ψ(t, x), (3.2)
Taking the curl of equation (1.1), we have
ω_t − ν∆ω + (u · ∇)ω − (ω · ∇)u = 0.   (3.3)
Thanks to the following observations
(B × ∇) · u(t, x) = (A × ∇) · (B × ∇)φ(t, x) = [(A · B)∆ − (A · ∇)(B · ∇)] φ(t, x),   (3.4)
(A × ∇) · ω(t, x) = (A × ∇) · (B × ∇)∆ψ(t, x) = [(A · B)∆ − (A · ∇)(B · ∇)] ∆ψ(t, x),   (3.5)
taking the scalar product of B × ∇ with equation (1.1), we derive
[(A · B)∆ − (A · ∇)(B · ∇)] {φ_t − ν∆φ} + (B × ∇) · {(u · ∇)u} = 0,   (3.6)
and taking the scalar product of A × ∇ with equation (3.3), we derive
[(A · B)∆ − (A · ∇)(B · ∇)] ∆{ψ_t − ν∆ψ} + (A × ∇) · {(u · ∇)ω − (ω · ∇)u} = 0.   (3.7)
Now we assume that φ and ψ are radially symmetric functions with respect to the space variable x ∈ R^3. That is, φ(t, x) = φ(t, r), ψ(t, x) = ψ(t, r) with r^2 = x_1^2 + x_2^2 + x_3^2. Then we have
u(t, x) =(A × x) 1 r ∂ r φ + (B · x)x 1 r ∂ r 1 r ∂ r ψ + B 1 r ∂ r ψ − B∆ψ, (3.8) ω(t, x) = − (A · x)x 1 r ∂ r 1 r ∂ r φ − A 1 r ∂ r φ + A∆φ + (B × x) 1 r ∂ r ∆ψ, (3.9) (u · ∇)u = 1 r ∂ r φ{(A × x) · ∇} + (B · x) 1 r ∂ r 1 r ∂ r ψ {x · ∇} + 1 r ∂ r ψ{B · ∇} − ∆ψ{B · ∇} (A × x) 1 r ∂ r φ + (B · x)x 1 r ∂ r 1 r ∂ r ψ + B 1 r ∂ r ψ − B∆ψ (3.10) ={A × (A × x)} 1 r ∂ r φ · 1 r ∂ r φ − x{(A × B) · x} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 2(A × x)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (A × x)(B · x)∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r ψ + 2x(B · x) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + x(B · x) 2 ∂ r 1 r ∂ r ψ · ∂ r 1 r ∂ r 1 r ∂ r ψ + B(B · x)∂ r 1 r ∂ r ψ · ∂ r 1 r ∂ r ψ − B(B · x)∂ r 1 r ∂ r ψ · ∂ r ∆ψ + (A × B) 1 r ∂ r φ · 1 r ∂ r ψ + (A × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + x(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + B(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + x(B · x) 2 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ + B(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − B(B · x) 1 r ∂ r ψ · 1 r ∂ r ∆ψ − (A × B)∆ψ · 1 r ∂ r φ − (A × x)(B · x)∆ψ · 1 r ∂ r 1 r ∂ r φ − x(B · B)∆ψ · 1 r ∂ r 1 r ∂ r ψ − B(B · x)∆ψ · 1 r ∂ r 1 r ∂ r ψ − x(B · x) 2 ∆ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ − B(B · x)∆ψ · 1 r ∂ r 1 r ∂ r ψ + B(B · x)∆ψ · 1 r ∂ r ∆ψ, (B × ∇) · {(u · ∇)u} =(B × ∇) · {A(A · x) − x(A · A)} 1 r ∂ r φ · 1 r ∂ r φ − (B × ∇) · x{(A × B) · x} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 2(B × ∇) · (A × x)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (B × ∇) · (A × x)(B · x)∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r ψ + 2(B × ∇) · x(B · x) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + (B × ∇) · x(B · x) 2 ∂ r 1 r ∂ r ψ · ∂ r 1 r ∂ r 1 r ∂ r ψ + (B × ∇) · B(B · x)∂ r 1 r ∂ r ψ · ∂ r 1 r ∂ r ψ − (B × ∇) · B(B · x)∂ r 1 r ∂ r ψ · ∂ r ∆ψ + (B × ∇) · (A × B) 1 r ∂ r φ · 1 r ∂ r ψ + (B × ∇) · (A × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (B × ∇) · x(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + (B × ∇) · B(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + (B × ∇) · x(B · x) 2 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ + (B × ∇) · B(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − (B × ∇) · B(B · x) 1 r ∂ r ψ · 1 r ∂ r ∆ψ − (B × ∇) · (A × B)∆ψ · 1 r ∂ r φ − (B × ∇) · (A × x)(B · x)∆ψ · 1 r ∂ r 1 r ∂ r φ − (B × ∇) · x(B · B)∆ψ · 1 r ∂ r 1 r ∂ r ψ − (B × ∇) · B(B · x)∆ψ · 1 r ∂ r 1 r ∂ r ψ − (B × ∇) · x(B · x) 2 ∆ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ − (B × ∇) · B(B · x)∆ψ · 1 r ∂ r 1 r ∂ r ψ + (B × ∇) · B(B · x)∆ψ · 1 r ∂ r ∆ψ (3.11) ={(A × B) · x}(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r φ + {(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 4(A · B)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 2{(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 2(A · B)(B · x)∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r ψ + {(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r ∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r ψ + {(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r ψ + 2(A · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + {(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − {(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r ∆ψ · 1 r ∂ r φ − 2(A · B)(B · x)∆ψ · 1 r ∂ r 1 r ∂ r φ − {(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r ∆ψ · 1 r ∂ r 1 r ∂ r φ , (u · ∇)ω = 1 r ∂ r φ{(A × x) · ∇} + (B · x) 1 r ∂ r 1 r ∂ r ψ {x · ∇} + 1 r ∂ r ψ{B · ∇} − ∆ψ{B · ∇} − (A · x)x 1 r ∂ r 1 r ∂ r φ − A 1 r ∂ r φ + A∆φ + (B × x) 1 r ∂ r ∆ψ (3.12) = − (A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + (B × (A × x)) 1 r ∂ r φ · 1 r ∂ r ∆ψ − 2x(A · x)(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − x(A · x)(B · x)∂ r 1 r ∂ r ψ · ∂ r 1 r ∂ r 1 r ∂ r φ − A(B · x)∂ r 1 r ∂ r ψ · ∂ r 1 r ∂ r φ + A(B · x)∂ r 1 r ∂ r ψ · ∂ r ∆φ + (B × x)(B · x) 1 r ∂ r 
1 r ∂ r ψ · 1 r ∂ r ∆ψ + (B × x)(B · x)∂ r 1 r ∂ r ψ · ∂ r 1 r ∂ r ∆ψ − x(A · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − B(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − x(A · x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r φ − A(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + A(B · x) 1 r ∂ r ψ · 1 r ∂ r ∆φ + (B × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ + x(A · B)∆ψ · 1 r ∂ r 1 r ∂ r φ + B(A · x)∆ψ · 1 r ∂ r 1 r ∂ r φ + x(A · x)(B · x)∆ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r φ + A(B · x)∆ψ · 1 r ∂ r 1 r ∂ r φ − A(B · x)∆ψ · 1 r ∂ r ∆φ − (B × x)(B · x)∆ψ · 1 r ∂ r 1 r ∂ r ∆ψ = − (A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + A(B · x) 1 r ∂ r φ · 1 r ∂ r ∆ψ + A(B · x) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − A(B · x) 2 r ∂ r ψ · 1 r ∂ r ∆φ − x2(A · x)(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + x(A · x)(B · x) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r φ − x(A · B) 1 r ∂ r φ · 1 r ∂ r ∆ψ − x(A · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + x(A · B)∆ψ · 1 r ∂ r 1 r ∂ r φ − B(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + B(A · x)∆ψ · 1 r ∂ r 1 r ∂ r φ + (B × x)(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r ∆ψ − (B × x)(B · x) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ , (ω · ∇)u = − (A · x) 1 r ∂ r 1 r ∂ r φ {x · ∇} − 1 r ∂ r φ{A · ∇} + ∆φ{A · ∇} + 1 r ∂ r ∆ψ{(B × x) · ∇} (A × x) 1 r ∂ r φ + (B · x)x 1 r ∂ r 1 r ∂ r ψ + B 1 r ∂ r ψ − B∆ψ (3.13) = − (A × x)(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r φ − (A × x)(A · x)∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r φ − 2x(A · x)(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − x(A · x)(B · x)∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r 1 r ∂ r ψ − B(A · x)∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r ψ + B(A · x)∂ r 1 r ∂ r φ · ∂ r ∆ψ − (A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − x(A · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − A(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − x(A · x)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ − B(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + B(A · x) 1 r ∂ r φ · 1 r ∂ r ∆ψ + (A × x)(A · x)∆φ · 1 r ∂ r 1 r ∂ r φ + x(A · B)∆φ · 1 r ∂ r 1 r ∂ r ψ + A(B · x)∆φ · 1 r ∂ r 1 r ∂ r ψ + x(A · x)(B · x)∆φ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ + B(A · x)∆φ · 1 r ∂ r 1 r ∂ r ψ − B(A · x)∆φ · 1 r ∂ r ∆ψ + (A × (B × x)) 1 r ∂ r ∆ψ · 1 r ∂ r φ + (B × x)(B · x) 1 r ∂ r ∆ψ · 1 r ∂ r 1 r ∂ r ψ =(A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − A(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + A(B · x)∆φ · 1 r ∂ r 1 r ∂ r ψ − 2x(A · x)(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + x(A · x)(B · x) 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ − x(A · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + x(A · B)∆φ · 1 r ∂ r 1 r ∂ r ψ − x(A · B) 1 r ∂ r φ · 1 r ∂ r ∆ψ + B(A · x) 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − B(A · x) 1 r ∂ r φ · 1 r ∂ r ∆ψ + (B × x)(B · x) 1 r ∂ r ∆ψ · 1 r ∂ r 1 r ∂ r ψ , (A × ∇) · {(u · ∇)ω − (ω · ∇)u} =(A × ∇) · − 2(A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + A(B · x) 1 r ∂ r φ · 1 r ∂ r ∆ψ + A(B · x) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − A(B · x) 2 r ∂ r ψ · 1 r ∂ r ∆φ + A(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − A(B · x)∆φ · 1 r ∂ r 1 r ∂ r ψ + x(A · x)(B · x) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r φ − x(A · x)(B · x) 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ − x(A · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + x(A · B)∆ψ · 1 r ∂ r 1 r ∂ r φ + x(A · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − x(A · B)∆φ · 1 r ∂ r 1 r ∂ r ψ − B(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + B(A · x)∆ψ · 1 r ∂ r 1 r ∂ r φ − B(A · x) 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + B(A · x) 1 r ∂ r φ · 1 r ∂ r ∆ψ − (B × x)(B · x) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ (3.14) = − (A · A)(A · x) 4 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − {(A · A)r 2 − (A · x) 2 }(A · x) 2 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + {(A × B) · x}(A · x) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r φ − {(A × B) · x}(A · x) 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r 1 r 
∂ r ψ + {(A × B) · x}(A · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − {(A × B) · x}(A · x) 1 r ∂ r ∆ψ · 1 r ∂ r 1 r ∂ r φ + {(A × B) · x}(A · x) 1 r ∂ r 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − {(A × B) · x}(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r ∆ψ − (A · B)(B · x) 4 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ − {(A · B)(B · x) − (B · B)(A · x)} 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ − {(A · B)r 2 − (A · x)(B · x)}(B · x) 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ .
Putting (3.11) into (3.6), we have
(A · x)(B · x) 1 r ∂ r 1 r ∂ r − (A · B){ 2 r ∂ r + r∂ r 1 r ∂ r } {φ t − ν∆φ} ={(A × B) · x}(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r φ + {(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 4(A · B)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 2{(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 2(A · B)(B · x)∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r ψ + {(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r ∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r ψ + {(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r ψ + 2(A · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + {(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − {(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r ∆ψ · 1 r ∂ r φ − 2(A · B)(B · x)∆ψ · 1 r ∂ r 1 r ∂ r φ − {(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r ∆ψ · 1 r ∂ r 1 r ∂ r φ . (3.15)
Putting (3.14) into (3.7), we derive
(A · x)(B · x) 1 r ∂ r 1 r ∂ r − (A · B){ 2 r ∂ r + r∂ r 1 r ∂ r } ∆{ψ t − ν∆ψ} = − (A · A)(A · x) 4 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − {(A · A)r 2 − (A · x) 2 }(A · x) 2 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + {(A × B) · x}(A · x) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r φ − {(A × B) · x}(A · x) 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ + {(A × B) · x}(A · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − {(A × B) · x}(A · x) 1 r ∂ r ∆ψ · 1 r ∂ r 1 r ∂ r φ + {(A × B) · x}(A · x) 1 r ∂ r 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − {(A × B) · x}(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r ∆ψ − (A · B)(B · x) 4 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ − {(A · B)(B · x) − (B · B)(A · x)} 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ − {(A · B)r 2 − (A · x)(B · x)}(B · x) 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ .(A · x) 2 1 r ∂ r − (A · A){2 + r∂ r } 1 r ∂ r {φ t − ν∆φ} = − (A · x) (A · x) 2 1 r ∂ r − (A · A){2 + r∂ r } 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (A · x) (A · x) 2 1 r ∂ r − (A · A){2 + r∂ r } ∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r ψ − (A · x) (A · x) 2 1 r ∂ r − (A · A){2 + r∂ r } 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (A · x) (A · x) 2 1 r ∂ r − (A · A){2 + r∂ r } ∆ψ · 1 r ∂ r 1 r ∂ r φ = (A · x) 2 1 r ∂ r − (A · A){2 + r∂ r } 1 r ∂ r (A · x)· r 0 s 2 s ∂ s ψ · 1 s ∂ s 1 s ∂ s φ − 2 s ∂ s φ · 1 s ∂ s 1 s ∂ s ψ ds , (3.17) (A · x) 2 1 r ∂ r − (A · A){2 + r∂ r } 1 r ∂ r ∆{ψ t − ν∆ψ} = (A · x) 2 1 r ∂ r − (A · A){2 + r∂ r } 1 r ∂ r (A · x)· r 0 s 2 s ∂ s φ · 1 s ∂ s 1 s ∂ s φ + 2 s ∂ s ψ · 1 s ∂ s 1 s ∂ s ∆ψ ds ,(3.18)
where we have used the following facts
(A × ∇)(A · x) = A × A = 0,   (3.19)
(A × ∇) · (A × ∇)ϕ(r) = [(A · x)^2 (1/r)∂_r − (A · A){2 + r∂_r}] (1/r)∂_r ϕ(r)   (3.20)
for any radial function ϕ(r). The equations (3.17) (3.18) imply that
φ t − ν∆φ =(A · x) r 0 s 2 s ∂ s ψ · 1 s ∂ s 1 s ∂ s φ − 2 s ∂ s φ · 1 s ∂ s 1 s ∂ s ψ ds, (3.21) ∆{ψ t − ν∆ψ} =(A · x) r 0 s 2 s ∂ s φ · 1 s ∂ s 1 s ∂ s φ + 2 s ∂ s ψ · 1 s ∂ s 1 s ∂ s ∆ψ ds. (3.22)
Let us select orthogonal transformation ρ as follows
y = ρx = x [ [0, 0, 1], [1, 0, 0], [0, 1, 0] ] = (x_2, x_3, x_1).   (3.23)
Then
r 2 = y · y = ρx · ρx = x · x.
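A small numerical illustration of why the cyclic permutation (3.23) is useful (the test values below are arbitrary): it is orthogonal, it preserves r^2 = x · x, and it generically changes A · x, which is exactly what allows (3.21) to be compared with (3.24) and the nonlinear terms to be cancelled.

```python
import numpy as np

# The cyclic permutation (3.23): y = rho(x) = (x2, x3, x1), as a right multiplication x M.
M = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)

x = np.array([1.0, -2.0, 0.5])      # arbitrary test point
A = np.array([0.3, 1.1, -0.7])      # arbitrary fixed vector A

y = x @ M                           # = (x2, x3, x1)
print(np.allclose(M @ M.T, np.eye(3)))         # rho is orthogonal
print(np.isclose(np.dot(y, y), np.dot(x, x)))  # r^2 = y.y = x.x is preserved
print(np.dot(A, y) - np.dot(A, x))             # generically A.(rho(x) - x) != 0
```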
Applying the orthogonal transformation (3.23) in the equations (3.21) (3.22), we obtain
φ t − ν∆φ =(A · y) r 0 s 2 s ∂ s ψ · 1 s ∂ s 1 s ∂ s φ − 2 s ∂ s φ · 1 s ∂ s 1 s ∂ s ψ ds, (3.24) ∆{ψ t − ν∆ψ} =(A · y) r 0 s 2 s ∂ s φ · 1 s ∂ s 1 s ∂ s φ + 2 s ∂ s ψ · 1 s ∂ s 1 s ∂ s ∆ψ ds, (3.25)
where y = ρx. Employing the equations (3.21) (3.24), we get
(A · (ρx − x)) r 0 s 2 s ∂ s ψ · 1 s ∂ s 1 s ∂ s φ − 2 s ∂ s φ · 1 s ∂ s 1 s ∂ s ψ ds = 0. (3.26) Similarly using (3.22) (3.25), we derive (A · (ρx − x)) r 0 s 2 s ∂ s φ · 1 s ∂ s 1 s ∂ s φ + 2 s ∂ s ψ · 1 s ∂ s 1 s ∂ s ∆ψ ds = 0. (3.27)
Given r, since x ∈ S^2_r is arbitrary, the equations (3.26) and (3.27) imply that
(1/r)∂_r ψ · ∂_r((1/r)∂_r φ) − (1/r)∂_r φ · ∂_r((1/r)∂_r ψ) = 0,   (3.28)
(1/r)∂_r φ · ∂_r((1/r)∂_r φ) + (1/r)∂_r ψ · ∂_r((1/r)∂_r ∆ψ) = 0.   (3.29)
Putting (3.28) into (3.21) and putting (3.29) into (3.22), we derive that
φ t − ν∆φ = 0, (3.30) ∆{ψ t − ν∆ψ} = 0. (3.31) Let (φ, ψ) = Φ(r), Ψ(r) be solution of (3.28) (3.29). Then u = (A × ∇)Φ + {(A × ∇) × ∇}Ψ satisfies static three dimensional Euler equation (u · ∇)u + ∇P = 0, ∇ · u = 0 by above all of calculations. Result (I) is proved. The equation (3.28) means that φ = h(t)ψ, (3.32) where h(t) is any function of t. Putting (3.32) into (3.29), we have h 2 (t)∂ r 1 r ∂ r ψ + ∂ r 1 r ∂ r ∆ψ = 0. (3.33)
There exists a unique solution
φ(t, r) = e −ν∆t φ 0 (r), ψ(t, r) = e −ν∆t ψ 0 (r), (3.34) of equations (3.30) (3.31) with initial data φ(t, r)| t=0 = φ 0 (r), ψ(t, r)| t=0 = ψ 0 (r). Take (φ 0 , ψ 0 ) = Φ(r), Ψ(r) . Then φ(t, r) = e −ν∆t φ 0 (r) = e −ν∆t Φ(r), ψ(t, r) = e −ν∆t ψ 0 (r) = e −ν∆t Ψ(r). (3.35) Since (φ, ψ) = (φ 0 , ψ 0 ) satisfies equations (3.32) (3.33) with h(t) ≡ h(0), φ(t, r), ψ(t, r)
also satisfies equations (3.32) (3.33). Thus φ(t, r), ψ(t, r) satisfies equations (3.28) (3.29). This means that φ(t, r), ψ(t, r) is solution of the equations (3.17) (3.18).
Result (II) is proved. On the other hand, let (φ_0, ψ_0) ≠ (Φ(r), Ψ(r)) and let (φ_0, ψ_0) be regular enough. By the theory of local well-posedness of the Navier-Stokes equations, there exist T > 0 and a solution
φ(t, x), ψ(t, x) of equations (3.15) (3.16) such that φ(t, x) ∈ C([0, T ], H m ), ψ(t, x) ∈ C([0, T ], H m+1 ),
where m > 0 is large enough. Assume that (φ(t, x), ψ(t, x)) is radial and satisfies equations (3.28) (3.29) for t > 0. Letting t → 0, we derive that (φ, ψ) = (φ_0, ψ_0) satisfies equations (3.28) (3.29). This is a contradiction.
These facts mean that equations (3.15) (3.16) cannot be satisfied by any radially symmetric functions φ(t, r) and ψ(t, r) provided (φ_0, ψ_0) ≠ (Φ(r), Ψ(r)).
In summary, Theorem 1.4 is proved.
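The explicit decay factor e^{−νλ^2 t} appearing in (1.26) can be traced to the fact that Ψ_{λαβ} is a radial eigenfunction of the Laplacian, so the linear flow (3.30)-(3.31) acts on it by a scalar. A SymPy sketch of this check (our notation):

```python
import sympy as sp

r, t, lam, alpha, beta, nu = sp.symbols('r t lam alpha beta nu', positive=True)
lap = lambda f: sp.diff(f, r, 2) + 2/r*sp.diff(f, r)     # radial Laplacian in R^3

Psi = alpha*sp.sin(lam*r)/r + beta*sp.cos(lam*r)/r       # Psi_{lam,alpha,beta} of (1.25)
print(sp.simplify(lap(Psi) + lam**2*Psi))                # eigenfunction: expected 0

# Hence psi(t, r) = exp(-nu*lam^2*t)*Psi solves psi_t - nu*laplace(psi) = 0
# (and therefore also (3.31)), which is where the factor in (1.26) comes from.
psi_t = sp.exp(-nu*lam**2*t)*Psi
print(sp.simplify(sp.diff(psi_t, t) - nu*lap(psi_t)))    # expected: 0
```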
Proof of Corollary 1.5. Now we consider the following equations
(B · x) 1 r ∂ r 1 r ∂ r {φ t − ν∆φ} ={(A × B) · x} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r φ − (B · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B · B) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r ψ + (B · B) 1 r ∂ r 1 r ∂ r φ · ∆ψ − (B · x) 2 1 r ∂ r 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (B · x) 2 1 r ∂ r 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ , (3.39) (B · x) 1 r ∂ r 1 r ∂ r ∆{ψ t − ν∆ψ} = − (A · A) 4 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − {(A · A)r 2 − (A · x) 2 } 2 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + {(A × B) · x} 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r φ − {(A × B) · x} 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ + {(A × B) · x} 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − {(A × B) · x} 1 r ∂ r ∆ψ · 1 r ∂ r 1 r ∂ r φ + {(A × B) · x} 1 r ∂ r 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − {(A × B) · x} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r ∆ψ + (B · B) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ + (B · x) 2 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ .
(3.40)
Choose the orthogonal transformation ρ_b whose rotation axis is the vector B, such that
B · (ρ b x) = (ρ t b B) · x = B · x, (3.41)
where ρ t b is the adjoint operator of ρ b . For example
y = ρ b x = xM b 0 1 0 −1 0 0 0 0 1 M t b , (3.42) M b = 0 − b 2 2 +b 2 3 |B| 2 b 1 |B| b 3 |B| b 1 b 2 |B| 2 b 2 |B| − b 2 |B| b 1 b 3 |B| 2 b 3 |B| , M t b = 0 b 3 |B| − b 2 |B| − b 2 2 +b 2 3 |B| 2 b 1 b 2 |B| 2 b 1 b 3 |B| 2 b 1 |B| b 2 |B| b 3 |B| .(B · y) 1 r ∂ r 1 r ∂ r {φ t − ν∆φ} ={(A × B) · y} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r φ − (B · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B · B) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r ψ + (B · B) 1 r ∂ r 1 r ∂ r φ · ∆ψ − (B · y) 2 1 r ∂ r 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (B · y) 2 1 r ∂ r 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ , (3.44) (B · y) 1 r ∂ r 1 r ∂ r ∆{ψ t − ν∆ψ} = − (A · A) 4 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − {(A · A)r 2 − (A · y) 2 } 2 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + {(A × B) · y} 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r 1 r ∂ r φ − {(A × B) · y} 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r 1 r ∂ r ψ + {(A × B) · y} 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − {(A × B) · y} 1 r ∂ r ∆ψ · 1 r ∂ r 1 r ∂ r φ + {(A × B) · y} 1 r ∂ r 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − {(A × B) · y} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r ∆ψ + (B · B) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ + (B · y) 2 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ .{(A × B) · (x − ρ b x)} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r φ = 0. (3.46)
Given r, since x ∈ S^2_r is arbitrary, we have
∂_r((1/r)∂_r φ) · (1/r)∂_r φ = 0.   (3.47)
This equation (3.47) implies that
φ = f_2 r^2 + f_0,   (3.48)
where f 0 and f 2 are any functions of t.
Provided f 2 = 0, putting (3.48) into (3.39), the equation (3.39) is satisfied. Putting (3.48) into (3.40), we obtain that
(B · x) 1 r ∂ r 1 r ∂ r ∆{ψ t − ν∆ψ} =(B · B) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ + (B · x) 2 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ .(B · y) 1 r ∂ r 1 r ∂ r ∆{ψ t − ν∆ψ} =(B · B) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ + (B · y) 2 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ .
(3.50)
Solving difference of (3.49) and (3.50), we derive Applying the orthogonal transformation z = O r x in the equation (3.49), by the same arguments as in the proof of (3.51), we have
1 r ∂ r 1 r ∂ r ∆{ψ t − ν∆ψ} ={B · (x + ρx)} 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ .1 r ∂ r 1 r ∂ r ∆{ψ t − ν∆ψ} ={B · (x + O r x)} 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ .
(3.53)
Solving difference of (3.53) and (3.51), we derive
{B · (ρx − O r x)}∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ = 0. (3.54)
Given r, since x ∈ S^2_r is arbitrary, we have
∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ = 0.{(A × B) · (x − ρ b x)} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r ∆ψ = 0. (3.59) Given r, since x ∈ S 2 r is arbitrary, we have ∂ r 1 r ∂ r φ · 1 r ∂ r ∆ψ = 0. (3.60)
We can also derive (3.58) from the equation (3.60).
The velocity u corresponding to (φ, ψ) defined by (3.48) (3.58) is as follows:
u(t, x) = (A × ∇)φ + {(B × ∇) × ∇}ψ = 2f_2(A × x) + 8g_4(B · x)x − B(16g_4 r^2 + 4g_2).   (3.61)
It is obvious that ∫_{R^3} |u(t, x)|^2 dx = ∞ for all t ≥ 0.
On the other hand, provided that at least one of (3.48) and (3.58) is not satisfied, or f_2 g_4 ≠ 0 although (3.48) and (3.58) are satisfied, then the equations (3.15) (3.16) cannot be satisfied by any radially symmetric functions φ and ψ.
In summary, Theorem 1.7 is proved.
4. (1,1)-Symplectic Representation and Radial Symmetry Breaking in R^3
In this section, we assume that the velocity vector u has the following (1,1)-symplectic representation
u(t, x) = {A × ∇}φ(t, x) + {B × ∇}ψ(t, x),   (4.1)
where the vectors A = (a_1, a_2, a_3) ∈ R^3 − {0} and B = (b_1, b_2, b_3) ∈ R^3 − {0} are linearly independent.
Let
ω(t, x) = ∇ × u(t, x) = −{(A × ∇) × ∇}φ(t, x) − {(B × ∇) × ∇}ψ(t, x).   (4.2)
Taking the curl of equation (1.1), we have
ω_t − ν∆ω + (u · ∇)ω − (ω · ∇)u = 0.   (4.3)
Thanks to the following observations
(B × ∇) · ω(t, x) = [{(A × ∇) × (B × ∇)} · ∇] φ(t, x) = (A × B) · ∇∆φ(t, x),   (4.4)
(A × ∇) · ω(t, x) = −[{(A × ∇) × (B × ∇)} · ∇] ψ(t, x) = −(A × B) · ∇∆ψ(t, x),   (4.5)
taking the scalar product of B × ∇ with equation (4.3), we derive
(A × B) · ∇∆{φ_t − ν∆φ} + (B × ∇) · {(u · ∇)ω − (ω · ∇)u} = 0,   (4.6)
and taking the scalar product of A × ∇ with equation (4.3), we derive
(A × B) · ∇∆{ψ_t − ν∆ψ} − (A × ∇) · {(u · ∇)ω − (ω · ∇)u} = 0.   (4.7)
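The key algebraic identity behind (4.4) — that the (A × ∇)φ part of u contributes (A × B) · ∇∆φ to (B × ∇) · ω — can be verified symbolically for arbitrary smooth φ and constant A, B (a sketch in our own notation):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3', real=True)
X = (x1, x2, x3)
a1, a2, a3, b1, b2, b3 = sp.symbols('a1 a2 a3 b1 b2 b3', real=True)
A = sp.Matrix([a1, a2, a3]); B = sp.Matrix([b1, b2, b3])
phi = sp.Function('phi')(*X)

grad = lambda f: sp.Matrix([sp.diff(f, v) for v in X])
lap  = lambda f: sum(sp.diff(f, v, 2) for v in X)
def curl(F):
    return sp.Matrix([sp.diff(F[2], x2) - sp.diff(F[1], x3),
                      sp.diff(F[0], x3) - sp.diff(F[2], x1),
                      sp.diff(F[1], x1) - sp.diff(F[0], x2)])

def op_dot(Cvec, F):
    # (C x nabla) . F = sum_i eps_{ijk} C_j d_k F_i
    return sum(Cvec.cross(grad(F[i]))[i] for i in range(3))

omega_phi = curl(A.cross(grad(phi)))      # curl of (A x nabla)phi, i.e. -{(A x nabla) x nabla}phi
lhs = op_dot(B, omega_phi)                # (B x nabla) . omega restricted to the phi-part
rhs = (A.cross(B)).dot(grad(lap(phi)))    # (A x B) . grad(laplace(phi))
print(sp.simplify(lhs - rhs))             # expected: 0
```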
Now we assume that φ and ψ are radially symmetric functions with respect to the space variable x ∈ R^3. That is, φ(t, x) = φ(t, r), ψ(t, x) = ψ(t, r) with r^2 = x_1^2 + x_2^2 + x_3^2. Then we have
u(t, x) =(A × x) 1 r ∂ r φ + (B × x) 1 r ∂ r ψ, (4.8) ω(t, x) =A ∆φ − 1 r ∂ r φ − x(A · x) 1 r ∂ r 1 r ∂ r φ + B ∆ψ − 1 r ∂ r ψ − x(B · x) 1 r ∂ r 1 r ∂ r ψ , (4.9) (u · ∇)ω = 1 r ∂ r φ(A × x) · ∇ + 1 r ∂ r ψ(B × x) · ∇ A ∆φ − 1 r ∂ r φ − x(A · x) 1 r ∂ r 1 r ∂ r φ + B ∆ψ − 1 r ∂ r ψ − x(B · x) 1 r ∂ r 1 r ∂ r ψ = − (A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (A × x)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + x{(A × B) · x} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − x{(A × B) · x} 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (B × x)(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (B × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ , (4.10) (ω · ∇)u = ∆φ − 1 r ∂ r φ A · ∇ − (A · x) 1 r ∂ r 1 r ∂ r φ x · ∇ + ∆ψ − 1 r ∂ r ψ B · ∇ − (B · x) 1 r ∂ r 1 r ∂ r ψ x · ∇ (A × x) 1 r ∂ r φ + (B × x) 1 r ∂ r ψ =(A × x)(A · x) 1 r ∂ r 1 r ∂ r φ · ∆φ − 1 r ∂ r φ + (B × x)(A · x) 1 r ∂ r 1 r ∂ r ψ · ∆φ − 1 r ∂ r φ − (A × B) 1 r ∂ r ψ · ∆φ − 1 r ∂ r φ − (A × x)(A · x)∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r φ − (A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (B × x)(A · x)∂ r 1 r ∂ r ψ · ∂ r 1 r ∂ r φ − (B × x)(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (A × x)(B · x) 1 r ∂ r 1 r ∂ r φ · ∆ψ − 1 r ∂ r ψ + (A × B) 1 r ∂ r φ · ∆ψ − 1 r ∂ r ψ + (B × x)(B · x) 1 r ∂ r 1 r ∂ r ψ · ∆ψ − 1 r ∂ r ψ − (A × x)(B · x)∂ r 1 r ∂ r φ · ∂ r 1 r ∂ r ψ − (A × x)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B × x)(B · x)∂ r 1 r ∂ r ψ · ∂ r 1 r ∂ r ψ − (B × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ (4.11) =(A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (A × x)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (A × x)(B · x) 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (A × B)∂ r φ · ∂ r 1 r ∂ r ψ − (A × B)∂ r ψ · ∂ r 1 r ∂ r φ + (B × x)(A · x) 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B × x)(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (B × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ , (u · ∇)ω − (ω · ∇)u = − 2(A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + x{(A × B) · x} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (A × B)∂ r φ · ∂ r 1 r ∂ r ψ + (A × B)∂ r ψ · ∂ r 1 r ∂ r φ − x{(A × B) · x} 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2(B × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2(B × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ , (4.12) (B × ∇) · {(u · ∇)ω − (ω · ∇)u} =(B × ∇) · − 2(A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + x{(A × B) · x)} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (A × B)∂ r φ · ∂ r 1 r ∂ r ψ + (A × B)∂ r ψ · ∂ r 1 r ∂ r φ − x{(A × B) · x} 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2(B × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2(B × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = − 4(A · B)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2{(A × B) × A} · x 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2{(A · B)r 2 − (A · x)(B · x)}(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 4(A · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2{(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + {(A × B) × B} · x 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − {(A × B) × B} · x 1 r ∂ r ∂ r φ · ∂ r 1 r ∂ r ψ + {(A × B) × B} · x 1 r ∂ r ∂ r ψ · ∂ r 1 r ∂ r φ + {(A × B) × B} · x 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 4|B| 2 (B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{|B| 2 r 2 − (B · x) 2 }(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 4|B| 2 (A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2{|B| 2 r 2 − (B · x) 2 }(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ , (4.13) (A × ∇) · {(u · ∇)ω − (ω · ∇)u} =(A × ∇) · − 2(A × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + x{(A × B) · x} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (A × B)∂ r φ · ∂ r 1 r ∂ r ψ + (A × B)∂ r ψ · ∂ r 1 r ∂ r φ − x{(A × 
B) · x} 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2(B × x)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2(B × x)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = − 4|A| 2 (A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2{|A| 2 r 2 − (A · x) 2 }(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 4|A| 2 (B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2{|A| 2 r 2 − (A · x) 2 }(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − {(A × B) × A} · x 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − {(A × B) × A} · x 1 r ∂ r ∂ r φ · ∂ r 1 r ∂ r ψ + {(A × B) × A} · x 1 r ∂ r ∂ r ψ · ∂ r 1 r ∂ r φ − {(A × B) × A} · x 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 4(A · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{(A × B) × B} · x 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 4(A · B)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2{(A · B)r 2 − (A · x)(B · x)}(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ , (4.14) where ∆f = 3 r ∂ r f + r∂ r ( 1 r ∂ r f ).
Employing (4.6) and (4.13), we obtain that
{(A × B) · x} 1 r ∂ r ∆{φ t − ν∆φ} − 4(A · B)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2{(A · A)(B · x) − (A · B)(A · x)} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2{(A · B)r 2 − (A · x)(B · x)}(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 4(A · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2{(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + {(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − {(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r ∂ r φ · ∂ r 1 r ∂ r ψ + {(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r ∂ r ψ · ∂ r 1 r ∂ r φ + {(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 4|B| 2 (B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{|B| 2 r 2 − (B · x) 2 }(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 4|B| 2 (A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2{|B| 2 r 2 − (B · x) 2 }(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ ={(A × B) · x} 1 r ∂ r ∆{φ t − ν∆φ} − 6(A · B)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A · B)(A · x)r∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(B · B)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B · B)(A · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2(B · B)(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (B · B)(A · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(A · A)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (A · B)(B · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (A · B)(B · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 4(B · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{(B · B)r 2 − (B · x) 2 }(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2(A · x) 2 (B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2(A · x)(B · x) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(A · x)(B · x) 2 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ =0. (4.15)
Applying (4.7) and (4.14), we get that
{(A × B) · x} 1 r ∂ r ∆{ψ t − ν∆ψ} + 4|A| 2 (A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2{|A| 2 r 2 − (A · x) 2 }(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 4|A| 2 (B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2{|A| 2 r 2 − (A · x) 2 }(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + {(A · A)(B · x) − (A · B)(A · x)} 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + {(A · A)(B · x) − (A · B)(A · x)} 1 r ∂ r ∂ r φ · ∂ r 1 r ∂ r ψ − {(A · A)(B · x) − (A · B)(A · x)} 1 r ∂ r ∂ r ψ · ∂ r 1 r ∂ r φ + {(A · A)(B · x) − (A · B)(A · x)} 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 4(A · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2{(A · B)(B · x) − (B · B)(A · x)} 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2{(A · B)r 2 − (A · x)(B · x)}(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 4(A · B)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 2{(A · B)r 2 − (A · x)(B · x)}(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ ={(A × B) · x} 1 r ∂ r ∆{ψ t − ν∆ψ} + 4(A · A)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2{(A · A)r 2 − (A · x) 2 }(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + (A · B)(A · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (A · B)(A · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2(B · B)(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2(A · A)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (A · A)(B · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(A · A)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (A · A)(B · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 6(A · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2(A · B)(B · x)r∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2(A · x) 2 (B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2(A · x) 2 (B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2(A · x)(B · x) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ =0.
(4.16)
Proof of Theorem 1.8. Since the vectors A and B are linearly independent, they span a plane Γ AB = span{A, B}. We select a reflection transformation ρ ab for which Γ AB is an invariant plane, so that
r 2 = x · x = ρ ab x · ρ ab x, A · ρ ab x = A · x, B · ρ ab x = B · x.
(4.17)
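A remark added for clarity: one concrete choice of such a reflection is ρ ab = I − 2 n̂ n̂^T, with unit normal n̂ = (A × B)/|A × B| to the plane Γ AB . This map is orthogonal, fixes every vector in Γ AB (in particular A and B), and reverses the normal direction, which gives exactly the three properties in (4.17). The explicit matrix realization written out below is of this type (up to the normalization of its third column).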
Here
\[
|A|^2 = A\cdot A,\qquad B' = B - \frac{A\cdot B}{A\cdot A}\,A = (b'_1, b'_2, b'_3),
\]
\[
M_{ab} = \begin{pmatrix}
\dfrac{a_1}{|A|} & \dfrac{b'_1}{|B'|} & \dfrac{m_1}{|A||B|} \\[4pt]
\dfrac{a_2}{|A|} & \dfrac{b'_2}{|B'|} & \dfrac{m_2}{|A||B|} \\[4pt]
\dfrac{a_3}{|A|} & \dfrac{b'_3}{|B'|} & \dfrac{m_3}{|A||B|}
\end{pmatrix},
\qquad
m_1 = a_2 b_3 - a_3 b_2,\quad m_2 = a_3 b_1 - a_1 b_3,\quad m_3 = a_1 b_2 - a_2 b_1,
\quad (4.18)
\]
\[
y = \rho_{ab}\,x = x\,M_{ab}
\begin{pmatrix} 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & -1 \end{pmatrix}
M^{t}_{ab},
\quad (4.19)
\]
where M t ab denotes the transpose of M ab . Applying the orthogonal transformation y = ρ ab x to the equation (4.15), we obtain
{(A × B) · y} 1 r ∂ r ∆{φ t − ν∆φ} − 6(A · B)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A · B)(A · x)r∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(B · B)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B · B)(A · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2(B · B)(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (B · B)(A · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(A · A)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (A · B)(B · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (A · B)(B · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 4(B · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{(B · B)r 2 − (B · x) 2 }(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2(A · x) 2 (B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2(A · x)(B · x) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(A · x)(B · x) 2 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ =0, (4.20)
where we have used (4.17). The equations (4.15) (4.20) imply
{(A × B) · (x − ρ ab x)} 1 r ∂ r ∆{φ t − ν∆φ} =0. (4.21)
Given r, since x ∈ S 2 r is arbitrary, we get ∂ r ∆{φ t − ν∆φ} = 0. (4.22) Similarly, using the same arguments as in the proof of (4.22) (for example, employing the orthogonal transformation y = ρ ab x in the equation (4.16)), we derive
∂ r ∆{ψ t − ν∆ψ} = 0. (4.23)
Putting (4.22) into (4.15), we have (4.24) below; putting (4.23) into (4.16), we have (4.25):
− 6(A · B)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A · B)(A · x)r∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(B · B)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B · B)(A · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2(B · B)(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (B · B)(A · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(A · A)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (A · B)(B · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (A · B)(B · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 4(B · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{(B · B)r 2 − (B · x) 2 }(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2(A · x) 2 (B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2(A · x)(B · x) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(A · x)(B · x) 2 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0.4(A · A)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2{(A · A)r 2 − (A · x) 2 }(A · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + (A · B)(A · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (A · B)(A · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2(B · B)(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2(A · A)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (A · A)(B · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(A · A)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (A · A)(B · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 6(A · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2(A · B)(B · x)r∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2(A · x) 2 (B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2(A · x) 2 (B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2(A · x)(B · x) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ = 0.
(4.25)
Observe that (A·x)(B·x) is equal either to the scalar product of the vectors (B · x)A and x, or to the scalar product of the vectors (A · x)B and x. Using this observation, the equation (4.24) can be rewritten as follows
− 6(A · B)A 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A · B)Ar∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(B · B)A 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B · B)A∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2(B · B)A 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (B · B)A∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2Aξ(A · x)(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2Aη(B · x) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2Aζ(B · x) 2 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 2(A · A)B 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (A · B)B∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (A · B)B∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 4(B · B)B 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{(B · B)r 2 − (B · x) 2 }B 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2B(1 − ξ)(A · x) 2 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2B(1 − η)(A · x)(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2B(1 − ζ)(A · x)(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0,(4.26)
where parameters ξ, η, ζ ∈ R.
Since the vectors A and B are linearly independent, the coefficients of A and B on the left-hand side of equation (4.26) must all vanish. Thus we have
− 6(A · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A · B)r∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(B · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B · B)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (B · B)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2ξ(A · x)(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2η(B · x) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2ζ(B · x) 2 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0, (4.27) 2(A · A) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (A · B)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (A · B)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 4(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{(B · B)r 2 − (B · x) 2 } 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2(1 − ξ)(A · x) 2 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2(1 − η)(A · x)(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(1 − ζ)(A · x)(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0.
(4.28)
Let us select an orthogonal transformation ρ a whose rotation axis is the vector A, and an orthogonal transformation ρ b whose rotation axis is the vector B.
Applying the orthogonal transformation y = ρ b x in (4.27), we obtain
− 6(A · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A · B)r∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(B · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B · B)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (B · B)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2ξ(A · y)(B · y) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2η(B · y) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2ζ(B · y) 2 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0. (4.29)
Calculating the difference of (4.27) and (4.29), we get
ξ{A · (x − ρ b x)}(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ = 0, (4.30) ξ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ = 0. (4.31)
In fact, the equation (4.31) first holds for any x ∈ R 3 − {x|x = ρ b x}. Next, for x ∈ {x|x = ρ b x}, choosing x n ∈ R 3 − {x|x = ρ b x} such that x n → x as n → ∞, we conclude by continuity that the equation (4.31) is also satisfied.
Putting (4.31) into (4.27) and using the orthogonal transformation z = ρ a x, we derive (4.32) below; calculating the difference of (4.27) and (4.32), we then get (4.33) and (4.34):
− 6(A · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A · B)r∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(B · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B · B)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (B · B)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2η(B · z) 2 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2ζ(B · z) 2 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0.η{B · (x − ρ a x)}{B · (x + ρ a x)} 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + ζ{B · (x − ρ a x)}{B · (x + ρ a x)} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0, (4.33) η∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + ζ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0.
(4.34)
In fact, the equation (4.34) first holds for any x ∈ R 3 − {x|x = ±ρ a x}. Next, for x ∈ {x|x = ±ρ a x}, choosing x n ∈ R 3 − {x|x = ±ρ a x} such that x n → x as n → ∞, we conclude by continuity that the equation (4.34) is also satisfied. Putting (4.31) and (4.34) into (4.27), we have
− 6(A · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(A · B)r∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − 2(B · B) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − (B · B)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 2(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (B · B)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ = 0. (4.35)
Applying the orthogonal transformation y = ρ b x in (4.28), we obtain (4.36) below; calculating the difference of (4.28) and (4.36), we then get (4.37):
2(A · A) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (A · B)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (A · B)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 4(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{(B · B)r 2 − (B · y) 2 } 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2(1 − ξ)(A · y) 2 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + 2(1 − η)(A · y)(B · y) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(1 − ζ)(A · y)(B · y) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0.(1 − ξ){A · (x − ρ b x)}{A · (x + ρ b x)} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + (1 − η){A · (x − ρ b x)}(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (1 − ζ){A · (x − ρ b x)}(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0.
(4.37)
Using the same arguments as in the proof of (4.31), we have
(1 − ξ){A · (x + ρ b x)} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + (1 − η)(B · x) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (1 − ζ)(B · x) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0.
(4.38)
Using the orthogonal transformation z = ρ a x in (4.38), we derive
(1 − ξ){A · (z + ρ b z)} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + (1 − η)(B · z) 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (1 − ζ)(B · z) 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0.
(4.39)
Using ρ a ρ b = ρ b ρ a and solving the difference of (4.38) and (4.39), we get
(1 − η){B · (x − ρ a x)} 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (1 − ζ){B · (x − ρ a x)} 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0.
(4.40)
Given r, since x ∈ S 2 is arbitrary, we obtain
(1 − η)∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (1 − ζ)∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0. (4.41)
Inserting (4.41) into (4.38), we have
(1 − ξ)∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ = 0. (4.42)
In fact, the equation (4.42) first holds for any x ∈ R 3 − {x|x = −ρ b x}. Next, for x ∈ {x|x = −ρ b x}, choosing x n ∈ R 3 − {x|x = −ρ b x} such that x n → x as n → ∞, we conclude by continuity that the equation (4.42) is also satisfied.
Putting (4.42) (4.41) into (4.28), and employing the orthogonal transformation z = ρ a x, we derive
2(A · A) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (A · B)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (A · B)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 4(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ − 2{(B · B)r 2 − (B · z) 2 } 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ = 0. (4.43)
Calculating the difference of (4.28) and (4.43), we get
{B · (x − ρ a x)}{B · (x + ρ a x)} 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ = 0. (4.44)
By the same arguments as in the proof of (4.34), we prove
∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ = 0. (4.45)
2(A · A) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ − (A · B)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − (A · B)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ − 4(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ = 0. (4.46)
Putting (4.31) (4.42) together, we derive
∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ = 0. (4.47)
Putting (4.34) (4.41) together, we derive
∂ r 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + ∂ r 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ = 0. (4.48)
4(A · A)(A · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + (A · B)(A · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (A · B)(A · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2(B · B)(A · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ + 2(A · A)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (A · A)(B · x)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(A · A)(B · x) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (A · A)(B · x)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 6(A · B)(B · x) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ = 0. (4.49)
Since the vectors A and B are linearly independent, the coefficients of A and B in the equation (4.49) are zero. Therefore we have
4(A · A) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r φ + (A · B)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (A · B)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ − 2(B · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ = 0, (4.50) 2(A · A) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + (A · A)∂ r ∂ r ψ · 1 r ∂ r 1 r ∂ r φ + 2(A · A) 1 r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + (A · A)∂ r ∂ r φ · 1 r ∂ r 1 r ∂ r ψ + 6(A · B) 1 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ψ = 0.
(4.51)
The equation (4.47) implies that
1 r ∂ r φ = ±{f 2 r 2 + f 1 } 1/2 , f 1 ≥ 0, (4.52) ∂ r 1 r ∂ r φ = ±f 2 r{f 2 r 2 + f 1 } −1/2 , ∆φ = 3 r ∂ r φ + r∂ r 1 r ∂ r φ = ± 4{f 2 r 2 + f 1 } 1/2 ∓ f 1 {f 2 r 2 + f 1 } −1/2 , ∂ r ∆φ = ±4f 2 r{f 2 r 2 + f 1 } −1/2 ± f 1 f 2 r{f 2 r 2 + f 1 } −3/2 , ∂ r {∂ r ∆φ} = ±2f 1 f 2 {f 2 r 2 + f 1 } −3/2 ± 3f 2 1 f 2 {f 2 r 2 + f 1 } −5/2 , ∂ 2 r {∂ r ∆φ} = ∓6f 1 f 2 2 r{f 2 r 2 + f 1 } −5/2 ∓ 15f 2 1 f 2 2 r{f 2 r 2 + f 1 } −7/2 , ∆{∂ r ∆φ} = ∂ 2 r {∂ r ∆φ} + 2 r ∂ r {∂ r ∆φ} = ∓ 2 r f 1 f 2 {f 2 r 2 + f 1 } −3/2 ∓ 3 r f 2 1 f 2 {f 2 r 2 + f 1 } −5/2 ± 15 r f 3 1 f 2 {f 2 r 2 + f 1 } −7/2 , (4.53) ∂ t {∂ r ∆φ} = ± 2f 2t r{f 2 r 2 + f 1 } −1/2 ± 3 2 f 1 f 2t r{f 2 r 2 + f 1 } −3/2 ∓ f 1t f 2 r{f 2 r 2 + f 1 } −3/2 ± 3 2 f 2 1 f 2t r{f 2 r 2 + f 1 } −5/2 ∓ 3 2 f 1 f 1t f 2 r{f 2 r 2 + f 1 } −5/2 ,(4.54)
where f 2 and f 1 are any functions of t. Putting (4.53) and (4.54) into equation (4.22), we derive
2f 2t r{f 2 r 2 + f 1 } −1/2 + 3 2 f 1 f 2t r{f 2 r 2 + f 1 } −3/2 − f 1t f 2 r{f 2 r 2 + f 1 } −3/2 + 3 2 f 2 1 f 2t r{f 2 r 2 + f 1 } −5/2 − 3 2 f 1 f 1t f 2 r{f 2 r 2 + f 1 } −5/2 = − 2ν 1 r f 1 f 2 {f 2 r 2 + f 1 } −3/2 − 3ν 1 r f 2 1 f 2 {f 2 r 2 + f 1 } −5/2 + 15ν 1 r f 3 1 f 2 {f 2 r 2 + f 1 } −7/2 . (4.55)
Assume that f 2 ≠ 0. Letting r → ∞ in equation (4.55), we obtain f 2t = 0. Then equation (4.55) implies that
f 1t r 3 {f 2 r 2 + f 1 } −3/2 + 3 2 f 1t f 1 r 3 {f 2 r 2 + f 1 } −5/2 = 2νf 1 r{f 2 r 2 + f 1 } −3/2 + 3νf 2 1 r{f 2 r 2 + f 1 } −5/2 − 15νf 3 1 r{f 2 r 2 + f 1 } −7/2 . (4.56)
Let r → ∞ in equation (4.56), we obtain f 1t = 0. Then equation (4.56) implies that
0 = 2{f 2 r 2 + f 1 } 2 + 3f 1 {f 2 r 2 + f 1 } − 15f 2 1 , ∀r > 0. (4.57)
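To spell out the step that follows: expanding (4.57) in powers of r gives
\[
2\{f_2 r^2 + f_1\}^2 + 3 f_1\{f_2 r^2 + f_1\} - 15 f_1^2
= 2 f_2^2\, r^4 + 7 f_1 f_2\, r^2 - 10 f_1^2 ,
\]
so requiring this polynomial to vanish for all r > 0 forces first f 2 = 0 (coefficient of r^4) and then f 1 = 0 (constant term).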
Thus f 2 = f 1 = 0, which contradicts the assumption f 2 ≠ 0.
Therefore f 2 = 0 and ∂ r φ = f 1 r where f 1 is any function of t. Similarly, the equation (4.45) implies that
1 r ∂ r ψ = ±{g 2 r 2 + g 1 } 1/2 , g 1 ≥ 0. (4.58)
Putting (4.58) into equation (4.23), we derive 2g 2t r{g 2 r 2 + g 1 } −1/2
+ 3 2 g 1 g 2t r{g 2 r 2 + g 1 } −3/2 − g 1t g 2 r{g 2 r 2 + g 1 } −3/2 + 3 2 g 2 1 g 2t r{g 2 r 2 + g 1 } −5/2 − 3 2 g 1 g 1t g 2 r{g 2 r 2 + g 1 } −5/2 = − 2ν 1 r g 1 g 2 {g 2 r 2 + g 1 } −3/2 − 3ν 1 r g 2 1 g 2 {g 2 r 2 + g 1 } −5/2 + 15ν 1 r g 3 1 g 2 {g 2 r 2 + g 1 } −7/2 .
(4.59)
Assume that g 2 ≠ 0. Letting r → ∞ in equation (4.59), we obtain g 2t = 0. Then equation (4.59) implies that
g 1t r 3 {g 2 r 2 + g 1 } −3/2 + 3 2 g 1t g 1 r 3 {g 2 r 2 + g 1 } −5/2 =2νg 1 r{g 2 r 2 + g 1 } −3/2 + 3νg 2 1 r{g 2 r 2 + g 1 } −5/2 − 15νg 3 1 r{g 2 r 2 + g 1 } −7/2 .
(4.60)
Let r → ∞ in equation (4.60), we obtain g 1t = 0. Then equation (4.60) implies that
0 = 2{g 2 r 2 + g 1 } 2 + 3g 1 {g 2 r 2 + g 1 } − 15g 2 1 , ∀r > 0. (4.61)
Thus g 2 = g 1 = 0, which contradicts the assumption g 2 ≠ 0.
Therefore g 2 = 0 and ∂ r ψ = g 1 r, where g 1 is an arbitrary function of t. Provided ∂ r φ = f 1 r and ∂ r ψ = g 1 r, all of the equations (4.47) (4.45) (4.48) (4.35) (4.46) (4.50) (4.51) (4.22) and (4.23) are satisfied. Therefore the equations (4.15) (4.16) are satisfied.
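Two remarks added for the reader's convenience. First, the closing assertion can be checked directly: ∂ r φ = f 1 r gives 1 r ∂ r φ = f 1 (t), hence ∂ r ( 1 r ∂ r φ) = 0 and ∆φ = 3f 1 (t); every term of (4.35), (4.46)-(4.48), (4.50) and (4.51) contains a factor of this type and vanishes, while ∆{φ t − ν∆φ} = 3f 1t is independent of r, so that ∂ r ∆{φ t − ν∆φ} = 0, which is (4.22); the same computation with g 1 gives (4.23). Second, under the reading of (4.47) that is consistent with (4.52), namely
\[
\partial_r\Big[\frac{1}{r}\partial_r\phi\cdot\frac{1}{r}\partial_r\Big(\frac{1}{r}\partial_r\phi\Big)\Big]
= \partial_r\Big[\frac{1}{2r}\,\partial_r\Big\{\Big(\frac{1}{r}\partial_r\phi\Big)^{2}\Big\}\Big] = 0 ,
\]
the quantity \(\tfrac{1}{2r}\partial_r\{(\tfrac{1}{r}\partial_r\phi)^2\}\) depends on t only, say f 2 (t); integrating in r then yields \((\tfrac{1}{r}\partial_r\phi)^2 = f_2(t)\,r^2 + f_1(t)\), which is (4.52).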
In summary, Theorem 1.8 is proved.
(2,2)-Symplectic Representation and Radial Symmetry Breaking in R 3
In this section, we assume that the velocity vector u admits the following (2,2)-symplectic representation
u(t, x) ={(A × ∇) × ∇}φ(t, x) + {(B × ∇) × ∇}ψ(t, x) = ∇ 1 att , ∇ 2 att , ∇ 3 att φ(t, x) + ∇ 1 btt , ∇ 2 btt , ∇ 3 btt ψ(t, x), (5.1) where vectors A = (a 1 , a 2 , a 3 ) ∈ R 3 − {0} and B = (b 1 , b 2 , b 3 ) ∈ R 3 − {0} are linearly independent, (A × ∇) × ∇ = ∇ 1 att , ∇ 2 att , ∇ 3 att , (B × ∇) × ∇ = ∇ 1 btt , ∇ 2 btt , ∇ 3 btt , ∇ 1 att = (A · ∇)∂ 1 − a 1 ∆, ∇ 2 att = (A · ∇)∂ 2 − a 2 ∆, ∇ 3 att = (A · ∇)∂ 3 − a 3 ∆, (5.2) ∇ 1 btt = (B · ∇)∂ 1 − b 1 ∆, ∇ 2 btt = (B · ∇)∂ 2 − b 2 ∆, ∇ 3 btt = (B · ∇)∂ 3 − b 3 ∆. (5.3)
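A remark added for clarity: any velocity field of the form (5.1) is automatically divergence free, since by (5.2)
\[
\nabla\cdot\big[\{(A\times\nabla)\times\nabla\}\phi\big]
= \nabla\cdot\big[(A\cdot\nabla)\nabla\phi - A\,\Delta\phi\big]
= (A\cdot\nabla)\Delta\phi - (A\cdot\nabla)\Delta\phi = 0 ,
\]
and the same computation applies to the ψ term in (5.1) with A replaced by B.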
Thanks to the following observations
(B × ∇) · u(t, x) = − {(A × ∇) × (B × ∇)} · ∇ φ(t, x) = −(A × B) · ∇∆φ(t, x), (5.4) (A × ∇) · u(t, x) = {(A × ∇) × (B × ∇)} · ∇ ψ(t, x) = (A × B) · ∇∆ψ(t, x),(5.5)
taking the scalar product of equation (1.1) with B × ∇, we have
(5.6) (A × B) · ∇∆{φ t − ν∆φ} − (B × ∇) · {(u · ∇)u} = 0.
Similarly, taking the scalar product of equation (1.1) with A × ∇, we derive
(5.7) (A × B) · ∇∆{ψ t − ν∆ψ} + (A × ∇) · {(u · ∇)u} = 0.
We introduce the symbols ∂ j at and ∂ j bt as follows:
∂ 1 at = a 2 ∂ 3 − a 3 ∂ 2 , ∂ 2 at = a 3 ∂ 1 − a 1 ∂ 3 , ∂ 3 at = a 1 ∂ 2 − a 2 ∂ 1 , ∂ 1 bt = b 2 ∂ 3 − b 3 ∂ 2 , ∂ 2 bt = b 3 ∂ 1 − b 1 ∂ 3 , ∂ 3 bt = b 1 ∂ 2 − b 2 ∂ 1 .
(5.8)
Then A × ∇ = ∂ 1 at , ∂ 2 at , ∂ 3 at and B × ∇ = ∂ 1 bt , ∂ 2 bt , ∂ 3 bt .
Let us rewrite the nonlinear terms in the equation (5.6)
(B × ∇) · {(u · ∇)u} = ∂ j bt u k ∂ k u j =∂ j bt ∇ k att φ + ∇ k btt ψ ∂ k ∇ j att φ + ∇ j btt ψ = − ∇ k att φ + ∇ k btt ψ ∂ k (A × B) · ∇∆φ + ∂ j bt ∇ k att φ + ∂ j bt ∇ k btt ψ ∂ k ∇ j att φ + ∇ j btt ψ = − (A × B) · ∇ ∇ k att φ + ∇ k btt ψ ∂ k ∆φ + (A × B) · ∇∇ k att φ + (A × B) · ∇∇ k btt ψ ∂ k ∆φ + ∂ j bt ∇ k att φ + ∂ j bt ∇ k btt ψ ∂ k ∇ j att φ + ∇ j btt ψ = − (A × B) · ∇ {A · ∇∂ k φ}∂ k ∆φ − ∆φ{A · ∇∆φ} − (A × B) · ∇ {B · ∇∂ k ψ}∂ k ∆φ − ∆ψ{B · ∇∆φ} + ∇∆φ · (B × ∇)(A · ∇) 2 φ + ∇∆φ · (B × ∇)(A · ∇)(B · ∇)ψ + ∇∆ψ · (B × ∇)(B · ∇) 2 ψ + ∇∆ψ · (B × ∇)(A · ∇)(B · ∇)φ, (5.9) where {∇ k att φ}∂ k = {(A · ∇)∂ k − a k ∆}φ ∂ k ={(A · ∇)∂ k φ}∂ k − ∆φ A · ∇, (5.10) {∇ k btt ψ}∂ k = {(B · ∇)∂ k − b k ∆}ψ ∂ k ={(B · ∇)∂ k ψ}∂ k − ∆ψ B · ∇, (5.11) ∂ j bt ∇ k att φ + ∂ j bt ∇ k btt ψ ∂ k ∇ j att φ + ∇ j btt ψ = ∂ j bt ∇ k att φ + ∂ j bt ∇ k btt ψ {(A · ∇)∂ j − a j ∆}∂ k φ + {(B · ∇)∂ j − b j ∆}∂ k ψ , (5.12) ∂ j bt ∇ k att φ ∂ j (A · ∇)∂ k φ ={(A · ∇)∂ k − a k ∆}∂ j bt φ ∂ k ∂ j (A · ∇)φ =∂ j bt (A · ∇)∂ k φ ∂ j (A · ∇)∂ k φ − ∂ j bt ∆φ ∂ j (A · ∇)(A · ∇)φ =∂ j ∆φ ∂ j bt (A · ∇) 2 φ =∇∆φ · (B × ∇)(A · ∇) 2 φ, (5.13) −∂ j bt ∇ k att φ a j ∆∂ k φ = −(A × B) · ∇∇ k att φ ∂ k ∆φ, (5.14) ∂ j bt ∇ k att φ ∂ j (B · ∇)∂ k ψ ={(A · ∇)∂ k − a k ∆}∂ j bt φ ∂ k ∂ j (B · ∇)ψ =∂ j bt (A · ∇)∂ k φ ∂ j (B · ∇)∂ k ψ − ∂ j bt ∆φ ∂ j (A · ∇)(B · ∇)ψ = − ∂ j (A · ∇)∂ k φ ∂ j bt (B · ∇)∂ k ψ + ∂ j ∆φ ∂ j bt (A · ∇)(B · ∇)ψ = − ∇(A · ∇)∂ k φ · (B × ∇)(B · ∇)∂ k ψ + ∇∆φ · (B × ∇)(A · ∇)(B · ∇)ψ, (5.15) − ∂ j bt ∇ k att φ b j ∆∂ k ψ = 0, (5.16) ∂ j bt ∇ k btt ψ ∂ j (A · ∇)∂ k φ =∂ j bt {(B · ∇)∂ k − b k ∆}ψ ∂ j (A · ∇)∂ k φ =∂ j bt (B · ∇)∂ k ψ ∂ j (A · ∇)∂ k φ − ∂ j bt ∆ψ ∂ j (A · ∇)(B · ∇)φ =∂ j bt (B · ∇)∂ k ψ ∂ j (A · ∇)∂ k φ + ∂ j ∆ψ ∂ j bt (A · ∇)(B · ∇)φ =(B × ∇)(B · ∇)∂ k ψ · ∇(A · ∇)∂ k φ + ∇∆ψ · (B × ∇)(A · ∇)(B · ∇)φ, (5.17) −∂ j bt ∇ k btt ψ a j ∆∂ k φ = −(A × B) · ∇∇ k btt ψ ∂ k ∆φ, (5.18) ∂ j bt ∇ k btt ψ ∂ j (B · ∇)∂ k ψ =∂ j bt {(B · ∇)∂ k − b k ∆}ψ ∂ j (B · ∇)∂ k ψ =∂ j bt (B · ∇)∂ k ψ ∂ j (B · ∇)∂ k ψ − ∂ j bt ∆ψ ∂ j (B · ∇)b k ∂ k ψ = − ∂ j bt ∆ψ ∂ j (B · ∇) 2 ψ =∂ j ∆ψ ∂ j bt (B · ∇) 2 ψ = ∇∆ψ · (B × ∇)(B · ∇) 2 ψ, (5.19) − ∂ j bt ∇ k btt ψ b j ∆∂ k ψ = 0. (5.20)
Similarly we rewrite the nonlinear terms in the equation (5.7)
(A × ∇) · {(u · ∇)u} = ∂ j at u k ∂ k u j =∂ j at ∇ k att φ + ∇ k btt ψ ∂ k ∇ j att φ + ∇ j btt ψ = ∇ k att φ + ∇ k btt ψ ∂ k (A × B) · ∇∆ψ + ∂ j at ∇ k att φ + ∂ j at ∇ k btt ψ ∂ k ∇ j att φ + ∇ j btt ψ =(A × B) · ∇ ∇ k att φ + ∇ k btt ψ ∂ k ∆ψ − (A × B) · ∇∇ k att φ + (A × B) · ∇∇ k btt ψ ∂ k ∆ψ + ∂ j at ∇ k att φ + ∂ j at ∇ k btt ψ ∂ k ∇ j att φ + ∇ j btt ψ =(A × B) · ∇ {A · ∇∂ k φ}∂ k ∆ψ − ∆φ{A · ∇∆ψ} + (A × B) · ∇ {B · ∇∂ k ψ}∂ k ∆ψ − ∆ψ{B · ∇∆ψ} + ∇∆φ · (A × ∇)(A · ∇) 2 φ + ∇∆φ · (A × ∇)(A · ∇)(B · ∇)ψ + ∇∆ψ · (A × ∇)(B · ∇) 2 ψ + ∇∆ψ · (A × ∇)(A · ∇)(B · ∇)φ, (5.21) where ∂ j at ∇ k att φ + ∂ j at ∇ k btt ψ ∂ k ∇ j att φ + ∇ j btt ψ = ∂ j at ∇ k att φ + ∂ j at ∇ k btt ψ {(A · ∇)∂ j − a j ∆}∂ k φ + {(B · ∇)∂ j − b j ∆}∂ k ψ , (5.22) ∂ j at ∇ k att φ ∂ j (A · ∇)∂ k φ ={(A · ∇)∂ k − a k ∆}∂ j at φ ∂ k ∂ j (A · ∇)φ =∂ j at (A · ∇)∂ k φ ∂ j (A · ∇)∂ k φ − ∂ j at ∆φ ∂ j (A · ∇)(A · ∇)φ = − ∂ j at ∆φ ∂ j (A · ∇) 2 φ =∂ j ∆φ ∂ j at (A · ∇) 2 φ = ∇∆φ · (A × ∇)(A · ∇) 2 φ, (5.23) −∂ j at ∇ k att φ a j ∆∂ k φ = 0, (5.24) ∂ j at ∇ k att φ ∂ j (B · ∇)∂ k ψ ={(A · ∇)∂ k − a k ∆}∂ j at φ ∂ k ∂ j (B · ∇)ψ =∂ j at (A · ∇)∂ k φ ∂ j (B · ∇)∂ k ψ − ∂ j at ∆φ ∂ j (A · ∇)(B · ∇)ψ =∂ j at (A · ∇)∂ k φ ∂ j (B · ∇)∂ k ψ + ∂ j ∆φ ∂ j at (A · ∇)(B · ∇)ψ =(A × ∇)(A · ∇)∂ k φ · ∇(B · ∇)∂ k ψ + ∇∆φ · (A × ∇)(A · ∇)(B · ∇)ψ, (5.25) − ∂ j at ∇ k att φ b j ∆∂ k ψ = {(A × B) · ∇∇ k att φ}∆∂ k ψ, (5.26) ∂ j at ∇ k btt ψ ∂ j (A · ∇)∂ k φ =∂ j at {(B · ∇)∂ k − b k ∆}ψ ∂ j (A · ∇)∂ k φ =∂ j at (B · ∇)∂ k ψ ∂ j (A · ∇)∂ k φ − ∂ j at ∆ψ ∂ j (A · ∇)(B · ∇)φ = − ∂ j (B · ∇)∂ k ψ ∂ j at (A · ∇)∂ k φ + ∂ j ∆ψ ∂ j at (A · ∇)(B · ∇)φ = − ∇(B · ∇)∂ k ψ · (A × ∇)(A · ∇)∂ k φ + ∇∆ψ · (A × ∇)(A · ∇)(B · ∇)φ, (5.27) −∂ j at ∇ k btt ψ a j ∆∂ k φ = 0, (5.28) ∂ j at ∇ k btt ψ ∂ j (B · ∇)∂ k ψ =∂ j at {(B · ∇)∂ k − b k ∆}ψ ∂ j (B · ∇)∂ k ψ =∂ j at (B · ∇)∂ k ψ ∂ j (B · ∇)∂ k ψ − ∂ j at ∆ψ ∂ j (B · ∇)b k ∂ k ψ = − ∂ j at ∆ψ ∂ j (B · ∇) 2 ψ =∂ j ∆ψ ∂ j at (B · ∇) 2 ψ = ∇∆ψ · (A × ∇)(B · ∇) 2 ψ, (5.29) − ∂ j at ∇ k btt ψ b j ∆∂ k ψ = {(A × B) · ∇∇ k btt ψ}∆∂ k ψ. (5.30)
Putting together (5.6) and (5.9), we derive
(A × B) · ∇∆{φ t − ν∆φ} + (A × B) · ∇ {A · ∇∂ k φ}∂ k ∆φ − ∆φ{A · ∇∆φ} + (A × B) · ∇ {B · ∇∂ k ψ}∂ k ∆φ − ∆ψ{B · ∇∆φ} − ∇∆φ · (B × ∇)(A · ∇) 2 φ − ∇∆φ · (B × ∇)(A · ∇)(B · ∇)ψ − ∇∆ψ · (B × ∇)(B · ∇) 2 ψ − ∇∆ψ · (B × ∇)(A · ∇)(B · ∇)φ = 0. (5.31)
Putting together (5.7) and (5.21), we derive
(A × B) · ∇∆{ψ t − ν∆ψ} + (A × B) · ∇ {A · ∇∂ k φ}∂ k ∆ψ − ∆φ{A · ∇∆ψ} + (A × B) · ∇ {B · ∇∂ k ψ}∂ k ∆ψ − ∆ψ{B · ∇∆ψ} + ∇∆φ · (A × ∇)(A · ∇) 2 φ + ∇∆φ · (A × ∇)(A · ∇)(B · ∇)ψ + ∇∆ψ · (A × ∇)(B · ∇) 2 ψ + ∇∆ψ · (A × ∇)(A · ∇)(B · ∇)φ = 0. (5.32)
Now we assume that φ and ψ are radially symmetric functions of the space variable x ∈ R 3 , that is, φ(t, x) = φ(t, r) and ψ(t, x) = ψ(t, r), where
r 2 = x 2 1 + x 2 2 + x 2 3 .
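The computations below repeatedly use the elementary radial identities (recorded here for convenience)
\[
\partial_k r = \frac{x_k}{r},\qquad
\partial_k \phi(t,r) = \frac{x_k}{r}\,\partial_r\phi,\qquad
\nabla\phi = \frac{x}{r}\,\partial_r\phi,\qquad
(A\cdot\nabla)\phi = (A\cdot x)\,\frac{1}{r}\partial_r\phi,
\]
together with the radial form of the Laplacian already recorded after (4.14).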
Then we have the identities displayed below. Putting together (5.31) and (5.33)-(5.38), we derive the equation that follows them. Given r, since x ∈ S 2 r is arbitrary, the equations (5.50) (5.51) imply the relations below. Since the vectors A and B are linearly independent, the equation (5.52) implies that ∂ r φ · ∂ r 1 r ∂ r ∆φ = 0, (5.56) ∂ r 1 r ∂ r ψ · ∂ r ∆φ + 2∂ r ψ · ∂ r 1 r ∂ r ∆φ = ∂ r ∆ψ · ∂ r ( 1 r ∂ r φ).
(A × B) · ∇ {A · ∇∂ k φ}∂ k ∆φ − ∆φ{A · ∇∆φ} =(A × B) · ∇ {A · ∇ x k r ∂ r φ} x k r ∂ r ∆φ − {A · x}∆φ · 1 r ∂ r ∆φ =(A × B) · ∇ (A · x) { 1 r ∂ r φ + r∂ r ( 1 r ∂ r φ)} 1 r ∂ r ∆φ − ∆φ · 1 r ∂ r ∆φ =(A × B) · x{A · x} 1 r ∂ r { 1 r ∂ r φ + r∂ r ( 1 r ∂ r φ)} 1 r ∂ r ∆φ − ∆φ · 1 r ∂ r ∆φ = − (A × B) · x{A · x} 1 r ∂ r 2 r ∂ r φ · 1 r ∂ r ∆φ = − (A × B) · x{A · x} 2 r ∂ r 1 r ∂ r φ · 1 r ∂ r ∆φ + 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ∆φ , (5.33) (A × B) · ∇ {B · ∇∂ k ψ}∂ k ∆φ − ∆ψ{B · ∇∆φ} =(A × B) · ∇ {B · ∇x k 1 r ∂ r ψ}x k 1 r ∂ r ∆φ − {B · x}∆ψ · 1 r ∂ r ∆φ =(A × B) · ∇ (B · x) { 1 r ∂ r ψ + r∂ r ( 1 r ∂ r ψ)} 1 r ∂ r ∆φ − ∆ψ · 1 r ∂ r ∆φ =(A × B) · x{B · x} 1 r ∂ r { 1 r ∂ r ψ + r∂ r ( 1 r ∂ r ψ)} 1 r ∂ r ∆φ − ∆ψ · 1 r ∂ r ∆φ = − (A × B) · x{B · x} 1 r ∂ r 2 r ∂ r ψ · 1 r ∂ r ∆φ = − (A × B) · x{B · x} 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r ∆φ + 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆φ , (5.34) − ∇∆φ · (B × ∇)(A · ∇) 2 φ = − x 1 r ∂ r ∆φ · (B × ∇){(A · ∇)(A · x) 1 r ∂ r φ} = − x 1 r ∂ r ∆φ · (B × ∇){A · A 1 r ∂ r φ + (A · x) 2 1 r ∂ r ( 1 r ∂ r φ)} =2(A × B) · x(A · x) 1 r ∂ r ∆φ · { 1 r ∂ r ( 1 r ∂ r φ)}, (5.35) − ∇∆φ · (B × ∇)(A · ∇)(B · ∇)ψ = − x 1 r ∂ r ∆φ · (B × ∇)(A · ∇)(B · x) 1 r ∂ r ψ = − x 1 r ∂ r ∆φ · (B × ∇){(A · B) 1 r ∂ r ψ + (A · x)(B · x) 1 r ∂ r ( 1 r ∂ r ψ)} =(A × B) · x(B · x) 1 r ∂ r ∆φ · { 1 r ∂ r ( 1 r ∂ r ψ)},= − x 1 r ∂ r ∆ψ · (B × ∇)(A · ∇)(B · x) 1 r ∂ r φ = − x 1 r ∂ r ∆ψ · (B × ∇){(A · B) 1 r ∂ r φ + (A · x)(B · x) 1 r ∂ r ( 1 r ∂ r φ)} =(A × B) · x(B · x) 1 r ∂ r ∆ψ · { 1 r ∂ r ( 1 r ∂ r φ)}, (5.38) (A × B) · ∇ {A · ∇∂ k φ}∂ k ∆ψ − ∆φ{A · ∇∆ψ} =(A × B) · ∇ {A · ∇x k 1 r ∂ r φ}x k 1 r ∂ r ∆ψ − {A · x}∆φ · 1 r ∂ r ∆ψ =(A × B) · ∇ (A · x) { 1 r ∂ r φ + r∂ r ( 1 r ∂ r φ)} 1 r ∂ r ∆ψ − ∆φ · 1 r ∂ r ∆ψ =(A × B) · x{A · x} 1 r ∂ r { 1 r ∂ r φ + r∂ r ( 1 r ∂ r φ)} 1 r ∂ r ∆ψ − ∆φ · 1 r ∂ r ∆ψ = − (A × B) · x{A · x} 1 r ∂ r 2 r ∂ r φ · 1 r ∂ r ∆ψ = − (A × B) · x{A · x} 2 r ∂ r 1 r ∂ r φ · 1 r ∂ r ∆ψ + 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ∆ψ ,∂ r ψ + r∂ r ( 1 r ∂ r ψ)} 1 r ∂ r ∆ψ − ∆ψ · 1 r ∂ r ∆ψ =(A × B) · x{B · x} 1 r ∂ r { 1 r ∂ r ψ + r∂ r ( 1 r ∂ r ψ)} 1 r ∂ r ∆ψ − ∆ψ · 1 r ∂ r ∆ψ = − (A × B) · x{B · x} 1 r ∂ r 2 r ∂ r ψ · 1 r ∂ r ∆ψ = − (A × B) · x{B · x} 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r ∆ψ + 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ , (5.40) ∇∆φ · (A × ∇)(A · ∇) 2 φ =x 1 r ∂ r ∆φ · (A × ∇)(A · ∇)(A · x) 1 r ∂ r φ =x 1 r ∂ r ∆φ · (A × ∇){(A · A) 1 r ∂ r φ + (A · x) 2 1 r ∂ r ( 1 r ∂ r φ)} =0,(5.1 r ∂ r ψ + (A · x)(B · x) 1 r ∂ r ( 1 r ∂ r ψ)} =(A × B) · x(A · x) 1 r ∂ r ∆φ · { 1 r ∂ r ( 1 r ∂ r ψ)}, (5.42) ∇∆ψ · (A × ∇)(B · ∇) 2 ψ =x 1 r ∂ r ∆ψ · (A × ∇)(B · ∇)(B · x) 1 r ∂ r ψ =x 1 r ∂ r ∆ψ · (A × ∇){(B · B) 1 r ∂ r ψ + (B · x) 2 1 r ∂ r ( 1 r ∂ r ψ)} =2(A × B) · x(B · x) 1 r ∂ r ∆ψ · { 1 r ∂ r ( 1 r ∂ r ψ)}, (5.43) ∇∆ψ · (A × ∇)(A · ∇)(B · ∇)φ =x 1 r ∂ r ∆ψ · (A × ∇)(A · ∇)(B · x) 1 r ∂ r φ =x 1 r ∂ r ∆ψ · (A × ∇){(A · B) 1 r ∂ r φ + (A · x)(B · x) 1 r ∂ r ( 1 r ∂ r φ)} =(A × B) · x(A · x) 1 r ∂ r ∆ψ · { 1 r ∂ r ( 1 r ∂ r φ)}.∆{φ t − ν∆φ} ={A · x} 2 r ∂ r 1 r ∂ r φ · 1 r ∂ r ∆φ + 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ∆φ + {B · x} 2 r ∂ r 1 r ∂ r ψ · 1 r ∂ r ∆φ + 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆φ − 2(A · x) 1 r ∂ r ∆φ · { 1 r ∂ r ( 1 r ∂ r φ)} − (B · x) 1 r ∂ r ∆φ · { 1 r ∂ r ( 1 r ∂ r ψ)} − (B · x) 1 r ∂ r ∆ψ · { 1 r ∂ r ( 1 r ∂ r φ)} ={A · x} 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ∆φ − (B · x) 1 r ∂ r ∆ψ · 1 r ∂ r ( 1 r ∂ r φ) + {B · x} 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r ∆φ + 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆φ = A 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ∆φ − B 1 r ∂ r ∆ψ · 1 r ∂ r ( 1 r ∂ r φ) + B 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r ∆φ + 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆φ 
· x.A 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ∆φ − B 1 r ∂ r ∆ψ · 1 r ∂ r ( 1 r ∂ r φ) + B 1 r ∂ r 1 r ∂ r ψ · 1 r ∂ r ∆φ + 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆φ = 0, (5.52) A 1 r ∂ r 1 r ∂ r φ · 1 r ∂ r ∆ψ + 2 r ∂ r φ · 1 r ∂ r 1 r ∂ r ∆ψ − A 1 r ∂ r ∆φ · 1 r ∂ r ( 1 r ∂ r ψ) + B 2 r ∂ r ψ · 1 r ∂ r 1 r ∂ r ∆ψ = 0.
(5.57)
And the equation (5.53) implies that ∂ r 1 r ∂ r φ · ∂ r ∆ψ + 2∂ r φ · ∂ r 1 r ∂ r ∆ψ =∂ r ∆φ · ∂ r ( 1 r ∂ r ψ), Now assume that at least one of ∂ r φ and ∂ r ψ is not zero. Then at least one of the following equations ∂ r 1 r ∂ r ∆φ = 0 (5.60) and ∂ r 1 r ∂ r ∆ψ = 0 (5.61) is satisfied.
The equations (5.60) and (5.54) imply that φ = f 4 r 4 + (12f 4 νt + f 2 )r 2 + f 0 (t), (5.62) where f 2 and f 4 are arbitrary constants and f 0 (t) is an arbitrary function of t.
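One can verify (5.62) directly; the precise form of (5.54) is not reproduced in the surviving text, so only the following two computations are claimed here. For φ = f 4 r 4 + (12f 4 νt + f 2 )r 2 + f 0 (t),
\[
\Delta\phi = 20 f_4 r^2 + 6(12 f_4\nu t + f_2),\qquad
\frac{1}{r}\partial_r\Delta\phi = 40 f_4,\qquad
\partial_r\Big(\frac{1}{r}\partial_r\Delta\phi\Big) = 0,
\]
which is (5.60), and
\[
\phi_t - \nu\Delta\phi = -8\nu f_4 r^2 + \big(f_0'(t) - 72 f_4\nu^2 t - 6\nu f_2\big),
\qquad
\Delta\{\phi_t - \nu\Delta\phi\} = -48\,\nu f_4 ,
\]
which is a constant in r.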
Similarly, the equations (5.61) and (5.55) imply that ψ = g 4 r 4 + (12g 4 νt + g 2 )r 2 + g 0 (t), (5.63) where g 2 and g 4 are arbitrary constants and g 0 (t) is an arbitrary function of t. On the other hand, if at least one of (5.62), (5.63) and (5.64) fails to hold, then the equations (5.45) (5.46) cannot be satisfied by any radially symmetric functions φ and ψ.
In summary, Theorem 1.9 is proved.
Proposition 1.1 (Local H m Solution). Let u 0 ∈ H m (R 3 ) ∩ H(R 3 ) and m ≥ 1. Then there exist T = T (‖u 0 ‖ H m ) > 0 and a unique solution u of the problem (1.1)-(1.3) such that u ∈ C([0, T ]; H m (R 3 ) ∩ H(R 3 )).
Proposition 1.2 (Local Mild Solution). Let u 0 ∈ L p (R 3 ) satisfy (1.2) in distribution and p > 3. Then there exist T = T (‖u 0 ‖ L p ) > 0 and a unique solution u of the problem (1.1)-(1.3) such that u ∈ C([0, T ]; L p (R 3 )).
Theorem 1.3 (Radial Symmetry Persistence). For the problem (1.1)-(1.3), let x = (x 1 , x 2 ) ∈ R 2 , and let the velocity vectors u and u 0 respectively hold the following (1,1)-symplectic representation: u(t, x) = (− ∂ 2 φ(t, x), ∂ 1 φ(t, x), 0), (1.9) u 0 (x) = (− ∂ 2 φ 0 (x), ∂ 1 φ 0 (x), 0). (1.10)
Theorem 1.4 (Radial Symmetry Either Persistence or Breaking). For the problem (1.1)-(1.3), assume that the velocity vectors u and u 0 ≠ 0 respectively hold the following (1,2)-symplectic representation
(φ, ψ) = (Φ(r), Ψ(r)) satisfies equations (1.20) (1.21). (I) (Static Euler Flow) u = (A × ∇)Φ + {(A × ∇) × ∇}Ψ satisfies the static three-dimensional Euler equations (1.12).
Then u defined by (1.22) (1.24) is a solution of the problem (1.1)-(1.3).
Then u defined by (1.22) (1.26) is a solution of the problem (1.1)-(1.3).
Theorem 1.8 (Radial Symmetry Breaking). Let u be a solution of the problem (1.1)-(1.3). Assume that the velocity vectors u and u 0 ≠ 0 respectively hold the following (1,1)-symplectic representation
Theorem 1.9 (Radial Symmetry Breaking). Let u be a solution of the problem (1.1)-(1.3). Assume that the velocity vectors u and u 0 respectively hold the following (2,2)-symplectic representation u(t, x) = {(A × ∇) × ∇}φ(t, x) + {(B × ∇) × ∇}ψ(t, x), (1.37) u 0 (x) = {(A × ∇) × ∇}φ 0 (x) + {(B × ∇) × ∇}ψ 0 (x), (1.38)
Taking the scalar product with B × ∇ and equation (1.1), we have (3.6) (A · B)∆ − (A · ∇)(B · ∇) {φ t − ν∆φ} + (B × ∇) · {(u · ∇)u} = 0.
Proof of Theorem 1.4. First we consider the case in which the vectors A and B are linearly dependent. Without loss of generality, we assume A = B. The equations (3.15) (3.16) are as follows
is any real constant. Equations (3.36) (3.37) are a special case of equations (3.32) (3.33). Equation (3.37) can be written λ 2 (rψ) + ∂ 2 r (rψ) (3.38) for any real constants λ, α, β. Let Φ λαβ (r) = λΨ λαβ (r). Then (φ, ψ) = (Φ λαβ (r), Ψ λαβ (r)) satisfies equations (3.28) (3.29). By Theorem 1.4, this corollary is proved. Proof of Theorem 1.7. Now we consider the case in which the vectors A and B are perpendicular, i.e. A ⊥ B. The equations (3.15) (3.16) are as follows
Applying the orthogonal transformation y = ρx defined by (3.23) in the equation (3.49), we have
Firstly, (3.51) is satisfied provided x ∈ R 3 − {x|ρx = x}. Finally, for x ∈ {x|ρx = x}, selecting x n ∈ R 3 − {x|ρx = x} such that x n → x as n → ∞, we can prove that (3.51) is also satisfied in the limit n → ∞. Let us select another orthogonal transformation O r as follows: (3.52) z = O r x = (x 3 , x 1 , x 2 ).
(3.55) Inserting (3.55) into (3.53), we obtain ∂ r 1 r ∂ r ∆{ψ t − ν∆ψ} = r ψ = 0, then equations (3.39) (3.40) are satisfied by (φ, ψ) which is the solution of (∂ r φ, ∂ r ψ) = 0. In this case, the corresponding velocity u = 0. It is trivial. Now provided ∂ r ψ ≠ 0, then the equation (3.57) implies that ∂ r 1 r ∂ r ∆ψ = 0 and ∆ψ = 20g 4 r 2 + 6g 2 , ψ = g 4 r 4 + g 2 r 2 + g 0 , (3.58), where g 0 , g 2 and g 4 are any functions of t. Putting (3.48) (3.58) into (3.39) (3.40), the equations (3.39) and (3.40) are satisfied provided f 2 = 0. Provided f 2 = 0, putting (3.40) (3.41) (3.45) (3.48) together, we derive
Putting (3.48) (3.58) into (3.39) (3.40), the equations (3.39) and (3.40) are satisfied provided f 2 g 4 = 0.
(A × ∇) × ∇ = (A · ∇)∇ − A∆ and (B × ∇) × ∇ = (B · ∇)∇ − B∆.
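For completeness, this identity (acting on a scalar field f) can be checked in index notation with ε ijk ε jlm = δ kl δ im − δ km δ il :
\[
\big[(A\times\nabla)\times\nabla\big]_i f
= \varepsilon_{ijk}(A\times\nabla)_j\,\partial_k f
= \varepsilon_{ijk}\varepsilon_{jlm}A_l\,\partial_m\partial_k f
= A_k\partial_i\partial_k f - A_i\,\Delta f
= \big[(A\cdot\nabla)\nabla f - A\,\Delta f\big]_i .
\]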
(A × B) · ∇∆{φ t − ν∆φ} + (B × ∇) · {(u · ∇)ω − (ω · ∇)u} = 0.
· (B × ∇)(B · ∇) 2 ψ = − x 1 r ∂ r ∆ψ · (B × ∇)(B · ∇)(B · x) 1 r ∂ r ψ = − x 1 r ∂ r ∆ψ · (B × ∇){B · B 1 r ∂ r ψ + (B · x)
× B) · ∇ {B · ∇∂ k ψ}∂ k ∆ψ − ∆ψ{B · ∇∆ψ} =(A × B) · ∇ {B · ∇x k 1 r ∂ r ψ}x k 1 r ∂ r ∆ψ − {B · x}∆ψ · 1 r ∂ r ∆ψ =(A × B) · ∇ (B · x) { 1 r
r ∆φ · (A × ∇)(A · ∇)(B · x) 1 r ∂ r ψ =x 1 r ∂ r ∆φ · (A × ∇){(A · B)
Provided ∂ r φ = ∂ r ψ = 0, all equations (5.54), (5.55), (5.56), (5.57), (5.58) and (5.59) are satisfied. Here the velocity vector u = 0. This is trivial.
∇φ + (B × ∇) × ∇ψ| 2 dx = ∞, where φ and ψ are defined by (5.62) (5.63) respectively. In summary, the equations (5.45) (5.46) are only satisfied by (φ, ψ) defined in (5.62) (5.63) (5.64).
It is obvious that the random u e is a solution of the static Euler equations (1.12). Remark 1.6. The equation (1.27) is the so-called Bäcklund transformation, which changes equations (1.20) (1.21) into the static three-dimensional Euler equations (1.12).
(1.29)
Provided (Φ(r), Ψ(r)) satisfies equations (1.20) (1.21), that is,
Acknowledgments. This work is supported by National Natural Science Foundation of China-NSF, Grant No. 11971068 and No. 11971077. Putting together (5.32) and (5.39)-(5.44), we derive the corresponding equation. Then r 2 = y · y = ρx · ρx = x · x. Applying the orthogonal transformation (5.47) in the equations (5.45) (5.46), we obtain the equation for ∆{φ t − ν∆φ}. Similarly, using the equations (5.46) (5.49), we derive (5.51).
Institute of Applied Physics and Computational Mathematics, P.O. Box 8009, Beijing 100088, P. R. China. Email address: han [email protected]
| [] |
[
"Learning a transferable kinetic energy functional KineticNet: Deep learning a transferable kinetic energy functional for orbital-free density functional theory",
"Learning a transferable kinetic energy functional KineticNet: Deep learning a transferable kinetic energy functional for orbital-free density functional theory"
] | [
"R Remme \nIWR\nHeidelberg University\n\n",
"T Kaczun \nIWR\nHeidelberg University\n\n",
"M Scheurer \nIWR\nHeidelberg University\n\n",
"A Dreuw \nIWR\nHeidelberg University\n\n",
"F A Hamprecht \nIWR\nHeidelberg University\n\n"
] | [
"IWR\nHeidelberg University\n",
"IWR\nHeidelberg University\n",
"IWR\nHeidelberg University\n",
"IWR\nHeidelberg University\n",
"IWR\nHeidelberg University\n"
] | [] | Orbital-free density functional theory (OF-DFT) holds the promise to compute ground state molecular properties at minimal cost. However, it has been held back by our inability to compute the kinetic energy as a functional of the electron density only. We here set out to learn the kinetic energy functional from ground truth provided by the more expensive Kohn-Sham density functional theory. Such learning is confronted with two key challenges: Giving the model sufficient expressivity and spatial context while limiting the memory footprint to afford computations on a GPU; and creating a sufficiently broad distribution of training data to enable iterative density optimization even when starting from a poor initial guess. In response, we introduce KineticNet, an equivariant deep neural network architecture based on point convolutions adapted to the prediction of quantities on molecular quadrature grids. Important contributions include convolution filters with sufficient spatial resolution in the vicinity of the nuclear cusp, an atom-centric sparse but expressive architecture that relays information across multiple bond lengths; and a new strategy to generate varied training data by finding ground state densities in the face of perturbations by a random external potential. KineticNet achieves, for the first time, chemical accuracy of the learned functionals across input densities and geometries of tiny molecules. For two electron systems, we additionally demonstrate OF-DFT density optimization with chemical accuracy. | 10.48550/arxiv.2305.13316 | [
"https://export.arxiv.org/pdf/2305.13316v1.pdf"
] | 258,841,301 | 2305.13316 | f837a7979f0d73875fd4405c719622aa7c08d113 |
KineticNet: Deep learning a transferable kinetic energy functional for orbital-free density functional theory
R Remme
IWR
Heidelberg University
T Kaczun
IWR
Heidelberg University
M Scheurer
IWR
Heidelberg University
A Dreuw
IWR
Heidelberg University
F A Hamprecht
IWR
Heidelberg University
(Dated: 5 May 2023)
Orbital-free density functional theory (OF-DFT) holds the promise to compute ground state molecular properties at minimal cost. However, it has been held back by our inability to compute the kinetic energy as a functional of the electron density only. We here set out to learn the kinetic energy functional from ground truth provided by the more expensive Kohn-Sham density functional theory. Such learning is confronted with two key challenges: Giving the model sufficient expressivity and spatial context while limiting the memory footprint to afford computations on a GPU; and creating a sufficiently broad distribution of training data to enable iterative density optimization even when starting from a poor initial guess. In response, we introduce KineticNet, an equivariant deep neural network architecture based on point convolutions adapted to the prediction of quantities on molecular quadrature grids. Important contributions include convolution filters with sufficient spatial resolution in the vicinity of the nuclear cusp, an atom-centric sparse but expressive architecture that relays information across multiple bond lengths; and a new strategy to generate varied training data by finding ground state densities in the face of perturbations by a random external potential. KineticNet achieves, for the first time, chemical accuracy of the learned functionals across input densities and geometries of tiny molecules. For two electron systems, we additionally demonstrate OF-DFT density optimization with chemical accuracy.
I. INTRODUCTION
Kohn-Sham density functional theory (KS-DFT) has become the workhorse of quantum chemistry thanks to its appealing trade-off of computational cost vs. accuracy of molecular property predictions. Even so, its use of orbitals and resulting cubic scaling with system size precludes its application to larger systems with thousands of atoms that are needed to faithfully model, e.g., macromolecules in solution. The main reason that KS-DFT needs orbitals in the first place is that, despite decades of theoretical work, a concrete recipe to accurately compute the non-interacting kinetic energy T s 1 from the electron density has remained elusive; whereas it can be computed from Kohn-Sham orbitals φ i via T s = ∫ t s (r) d 3 r with a kinetic energy density
$t_s(r) = \frac{1}{2}\sum_{i=1}^{N} |\nabla\phi_i(r)|^2$   (1)
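As a toy numerical illustration of Eq. (1), added here for concreteness (it is not the pipeline used in the paper, which works on molecular quadrature grids where orbital gradients are typically available analytically from the basis functions), the sketch below evaluates t_s from occupied orbitals sampled on a regular Cartesian grid with numpy finite differences; the function name and array layout are our own choices.

```python
import numpy as np

def kinetic_energy_density(orbitals, spacing):
    """t_s(r) = 1/2 * sum_i |grad phi_i(r)|^2 on a regular Cartesian grid.

    orbitals: array of shape (N, nx, ny, nz); each occupied orbital is
              sampled on the same grid.
    spacing:  uniform grid spacing used for the finite differences.
    """
    t_s = np.zeros(orbitals.shape[1:])
    for phi in orbitals:
        gx, gy, gz = np.gradient(phi, spacing)  # finite-difference gradient
        t_s += 0.5 * (gx ** 2 + gy ** 2 + gz ** 2)
    return t_s

# The total non-interacting kinetic energy is then the grid sum
# T_s = kinetic_energy_density(orbitals, h).sum() * h ** 3
```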
Yet, the mesmerizing promise of the Hohenberg-Kohn theorems is that it suffices to solve a single integrodifferential equation for the density ρ(r) to find the ground state of a system, provided we find a concrete means to compute T s and the kinetic potential (its functional derivative with respect to the electron density, δT s /δρ) as a functional of the electron density only.
Extensive theoretical and experimental work has shown that the kinetic energy density is not merely local or "semi-local", i.e., t s (r) is not a function of the electron density ρ(r) and its spatial derivatives only. On the other hand, aromatic systems and conductors aside, chemistry exhibits a large degree of locality, suggesting that it should be possible to learn a kinetic energy density functional that generalizes across relevant swathes of chemical space.
In response, we here propose a deep equivariant neural network architecture to approximate the kinetic energy density t s (r). Specifically, we make the following contributions:
• We propose an equivariant deep architecture ingesting an electron density represented on a quadrature grid along with nuclear locations and charges, and predicting a kinetic energy density on the same grid.
• We show how to generate varied electron densities and associated kinetic energy potentials needed to achieve convergence when initiating density optimization far from the ground state.
• We demonstrate orbital-free density optimization in systems with two electrons, reproducing bonding with chemical accuracy.
• We offer machine learned functionals for the kinetic energy density and gradient which yield chemical accuracy in OF-DFT calculations, generalizing over input electron densities, external potential and molecular geometry.
Related Work
Machine learning has been used to improve DFT pipelines before. A large number of works 2-7 focus on learning an approximation to the exchange correlation functional, where Kirkpatrick et al. 2 recently demonstrated impressive results. Dick and Fernandez-Serra 5 use an architecture that is similar in some respects to learn the exchange correlation functional. That said, they move to invariant features early on in their model and do not learn the atomic representations, relying instead on hand-crafted features to encode the atomic environments. They also train their model to only predict the total exchange correlation energy as a scalar, and not the potential. A number of works demonstrate the potential of ML for OF-DFT on one-dimensional data sets, such as Snyder et al. 8 Meyer, Weichselbaum, and Hauser 9 and Saidaoui et al. 10 . The approach of Ghasemi and Kühne 11 works in 3D, but on single, rotationally symmetric atoms only, effectively reducing the dimensionality to one. Golub and Manzhos 12 take a semi-local approach to the 3D problem as they train a neural network that takes five features reflecting the electron density, its gradient as well as its Laplacian to model the kinetic energy density, which they apply to each grid point individually. Seino et al. 13 and Fujinami et al. 14 show promising results for learning the kinetic energy and potential for molecules, however their models generalize only over different densities and hence only work for a single molecule with fixed geometry at a time. Ryczko et al. 15 learn the kinetic functional on a voxel-grid representation that works well for their application to graphene lattices, but is less suited for molecules. They present one of the few approaches with successful density optimization, however only for a learned functional that is trained to mimic the flawed Thomas-Fermi approximation. The most extensive results including density optimization are presented by Imoto, Imada, and Oshiyama 16 . Like Golub and Manzhos, they use a simple neural network applied pointwise and learn an enhancement factor to the Thomas-Fermi (TF) functional in a way that guarantees both correct scaling and asymptotic behaviour. They outperform classical approximations, but not by the extent required to reach chemical accuracy.
II. KINETICNET: A DEEP EQUIVARIANT ARCHITECTURE
When developing the architecture for our machine learning model, we were guided by a number of physically motivated constraints: Firstly, input and output should be represented on the quadrature grid (consisting of grid points evenly distributed on each of a number of spherical shells arranged around each atomic nucleus), such that it can seamlessly replace existing functional approximations. Secondly, the model should be equivariant with respect to the group E(3), i.e. rotations and translations of the input molecule should not change the predicted kinetic energy, and the predicted kinetic potential should be transformed in accordance with the input. Finally, the field of view, i.e. the spatial extent of the input grid points that influence the output at a given point, should span several bond lengths. On the other hand, the model should still be local in the sense that for very big molecules, only nearby atoms influence the prediction, thus conceptually allowing for the generalization towards bigger molecules.
We guarantee translational equivariance by only using relative positions in our model, and rotational equivariance by using equivariant convolutions as presented in Tensor Field Networks 17 and implemented in the e3nn library 18 . This amounts to decomposing convolutional filters F into a radial part R depending on the distance r = |r| and an angular part Y depending on the direction r̂ = r/r. The former is learned and the latter is given by the spherical harmonics (depending on the representation of the in- and output features of the convolution):
$$F^{(l_f, l_i)}_{cm}(\mathbf{r}) = R^{(l_f, l_i)}_{c}(r)\, Y^{(l_f)}_{m}(\hat{\mathbf{r}}) \qquad (2)$$
with non-negative integer rotation orders of the input and filter l i and l f , channel index c and order inside the representation m ∈ {−l f , .., l f }. Multiplying the filters with the input features and computing a certain linear combination, using Clebsch-Gordan coefficients as weights, yields equivariant output features of the point convolution. We learn separate convolutional filters for each element and use tensorial features up to order l = 4. To achieve a sufficient field of view while keeping the computational cost tractable, we use an encoder-decoder structure: In a first atomic encoding layer, we use a point convolution to compute features at every atom of the molecule (and not every input grid point). This is followed by a number of atom-atom interaction layers, each of which consists of a point convolution with the atomic nuclei positions as in-and outputs, followed by a nonlinear activation function. These layers are computationally cheap and greatly increase the field of view. Finally, a decoding layer, again a single point convolution, propagates the information back to the quadrature grid. This architecture has a sufficient field of view to capture functional groups and some of their context in molecules, while still being local and allowing for the generalization over molecule compositions. The learned atomic encoding layers are one advantage over prior work, as most commonly 5,19 handcrafted features are used to encode the local environments of the atoms. When predicting energy densities, we additionally scale the output with a Superposition of Atomic Densities (SAD) (commonly used as an initial guess in KS-DFT), allowing the model to predict the correct asymptotic behaviour for larger distances from the atoms. In particular, the prediction of very small values becomes possible in low-density regions without extremely precise tuning of the parameters of the radial models, which would otherwise be necessary.
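To make the filter decomposition of eq. 2 concrete, the sketch below evaluates one such filter component from a toy radial profile and scipy's (complex) spherical harmonics. The Gaussian radial model, its weights, and the radius range are made-up placeholders rather than learned KineticNet parameters, which live in e3nn's real-spherical-harmonics convention.

```python
import numpy as np
from scipy.special import sph_harm

def radial_profile(r, weights):
    """Toy stand-in for the learned radial model R_c^{(l_f, l_i)}(r):
    a linear combination of Gaussian bumps with hypothetical weights."""
    centers = np.linspace(0.0, 4.0, len(weights))        # Bohr, assumed range
    return np.sum(weights * np.exp(-(r[..., None] - centers) ** 2), axis=-1)

def filter_value(r_vec, l_f, m, weights):
    """F_{cm}^{(l_f, l_i)}(r) = R(r) * Y_m^{(l_f)}(r_hat), cf. eq. 2.
    Uses scipy's complex spherical harmonics as a stand-in for the real
    ones used by e3nn; the two differ only by a fixed unitary change of basis."""
    r = np.linalg.norm(r_vec)
    x, y, z = r_vec / r
    theta = np.arctan2(y, x) % (2 * np.pi)   # azimuthal angle
    phi = np.arccos(np.clip(z, -1.0, 1.0))   # polar angle
    return radial_profile(np.atleast_1d(r), weights)[0] * sph_harm(m, l_f, theta, phi)

# Example: evaluate one filter component at a displacement vector.
w = np.array([0.5, -0.2, 0.1, 0.05])                     # hypothetical weights
print(filter_value(np.array([1.0, 0.3, -0.5]), l_f=2, m=1, weights=w))
```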
As a loss we use a smooth L1-loss, applied to the pointwise difference between the kinetic energy density and potential predictions and the corresponding ground truth on the grid. We use an adaptive scale parameter for the transition between the quadratic and linear regimes of the loss: the parameter grows linearly with the target value, but we threshold it at 10 −6 Ha/Bohr 3 from below, such that the quadratic region can be reached even at grid locations with very small target values (e.g. far away from the nuclei).
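A minimal NumPy sketch of such an adaptive smooth-L1 loss; the proportionality factor between target magnitude and transition scale is a hypothetical illustration, only the 10 −6 Ha/Bohr 3 floor is taken from the text.

```python
import numpy as np

def adaptive_smooth_l1(pred, target, rel_scale=0.1, min_scale=1e-6):
    """Pointwise smooth-L1 loss whose quadratic/linear transition scale
    grows linearly with |target| but is clipped from below at min_scale
    (in Ha/Bohr^3), so tiny far-from-nucleus targets still see the
    quadratic regime. rel_scale is a made-up proportionality factor."""
    beta = np.maximum(rel_scale * np.abs(target), min_scale)
    diff = np.abs(pred - target)
    quadratic = 0.5 * diff ** 2 / beta          # region |diff| <  beta
    linear = diff - 0.5 * beta                  # region |diff| >= beta
    return np.where(diff < beta, quadratic, linear).mean()

# Example on fake grid values of the kinetic energy density.
target = np.array([2.3e-1, 4.0e-5, 1.2e-8])
pred = np.array([2.2e-1, 3.0e-5, 5.0e-8])
print(adaptive_smooth_l1(pred, target))
```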
As mentioned above, the parameters of KineticNet lie in the radial models. We parameterize them as Weiler et al. 20 , by a 3-layer fully connected MLP applied to the radius encoded by a set of cosine basis functions. For the initial atomic encoding and final decoder layer, we additionally transform the input distances r with the inverse of the Treutler-Ahlrichs 21 map f TA before feeding them to each radial model R i (where i is a shorthand for the indices c, l f , l i in eq. 2):

$$R_i(r) = R_i\!\left(f_{TA}^{-1}(r)\right). \qquad (3)$$

This effectively changes the radial model to have a distance-dependent spatial resolution (see Figure 2), exactly in correspondence to the spherical shells of the quadrature grid around the atoms. One feature of our method is that the learning of a spatial filter in terms of absolute distances allows us to deal with varying grid resolutions, i.e. spacing of radial shells and angular grids. We do not exploit this explicitly in this work, but it can be useful to speed up the training by first utilizing lower-resolution samples before fine-tuning in the high-resolution setting, as well as granting the added flexibility of allowing a single model to be deployed at multiple grid resolutions.
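A sketch of this construction, with the cosine radial basis and the MLP reduced to a single linear layer in NumPy; the inverse Treutler-Ahlrichs map is left as an assumed, user-supplied callable (the surrogate below is not the real map), and all weights are random placeholders.

```python
import numpy as np

def cosine_basis(x, n_basis=8, x_max=1.0):
    """Encode a (transformed) distance with cosine basis functions on [0, x_max]."""
    k = np.arange(n_basis)
    return np.cos(np.pi * k * np.clip(x, 0.0, x_max) / x_max)

def radial_model(r, f_ta_inv, weights):
    """R_i(r) = MLP(cosine_basis(f_TA^{-1}(r))), cf. eq. 3.
    `f_ta_inv` is assumed to implement the inverse Treutler-Ahlrichs map;
    `weights` is a single hypothetical linear layer standing in for the
    3-layer MLP of the real model."""
    feats = cosine_basis(f_ta_inv(r))
    return feats @ weights

# Example with a made-up surrogate for f_TA^{-1} and random weights.
rng = np.random.default_rng(0)
fake_f_ta_inv = lambda r: r / (1.0 + r)          # placeholder, NOT the real map
print(radial_model(1.7, fake_f_ta_inv, rng.normal(size=8)))
```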
III. TRAINING DATA GENERATION
Sufficiently large and representative datasets are as decisive for the success of a machine learning approach as the training setup and architecture. We use KS-DFT employing the BLYP XC functional 22,23 to generate ground truth data for the supervised training of our functionals. Generating a large number of training samples is easy, but to ensure sufficient variability in the data, we had to employ a new technique that we discuss in this section.
We use eq. 1 to generate ground truth kinetic energy density. Many other definitions exist; in particular, any additive contribution to t s that integrates to zero yields the same total kinetic energy. That said, we choose eq. 1 over other formulations for the kinetic energy density as its values lie in a smaller range, which is preferred for machine learning models. The kinetic potential δT s /δρ can be computed 24 from
$$\frac{\delta T_s}{\delta \rho} - \mu = \frac{\sum_i^{N} \left[ -\frac{1}{2}\varphi_i(\mathbf{r})\nabla^2 \varphi_i(\mathbf{r}) - \varepsilon_i \varphi_i^2(\mathbf{r}) \right]}{\rho(\mathbf{r})} \qquad (4)$$
where ε i stands for the eigenvalue/orbital energy of the i-th KS orbital, ρ for the electron density and µ for the chemical potential, which is assumed equal to the energy of the highest occupied molecular orbital ε HOMO . 24 The derivation by King and Handy 24 equates parts of the Euler and Kohn-Sham equations, suggesting that the equation is only valid for stationary states. Yet any OF-DFT algorithm will encounter non-stationary electron densities on its way from the initial guess to the true ground state. As generalization from ground-state densities to these intermediate states cannot be expected, it is crucial to also include non-ground state electron densities in the training set. Such training makes the model sufficiently robust to achieve convergence of the iterative density optimization. This necessity has also been noted by Ryczko et al. 15 , who observe convergence only for a functional trained to mimic the TF approximation on a varied dataset, but not for the functional trained on KS ground truth at ground states only. In summary, the paradoxical task is to compute the kinetic potential for densities other than the true ground state while at the same time eq. 4 holds only for stationary states. This is where our second contribution lies.
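For concreteness, a NumPy sketch of eq. 4 evaluated on a grid, assuming the occupied KS orbitals, their Laplacians, and orbital energies have already been tabulated (e.g. with pyscf); the density floor and the omission of occupation factors are simplifications of this sketch, not part of the reference derivation.

```python
import numpy as np

def kinetic_potential(phi, lap_phi, eps, e_homo, rho_floor=1e-12):
    """Evaluate (delta T_s / delta rho)(r) from eq. 4 on a grid.

    phi:      (N_occ, N_grid) occupied KS orbitals on the grid
    lap_phi:  (N_occ, N_grid) their Laplacians on the grid
    eps:      (N_occ,) orbital energies
    e_homo:   energy of the highest occupied orbital (chemical potential mu)
    """
    rho = np.sum(phi ** 2, axis=0)                       # toy density, no occupations
    num = np.sum(-0.5 * phi * lap_phi - eps[:, None] * phi ** 2, axis=0)
    return num / np.maximum(rho, rho_floor) + e_homo     # add mu back onto the rhs

# Tiny fake example: 2 orbitals on 4 grid points.
phi = np.array([[0.3, 0.2, 0.1, 0.05], [0.1, 0.15, 0.05, 0.02]])
lap_phi = -0.5 * phi                                     # made-up Laplacians
eps = np.array([-0.5, -0.3])
print(kinetic_potential(phi, lap_phi, eps, e_homo=-0.3))
```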
The first Hohenberg-Kohn theorem states that a one to one mapping exists between the external potential and the ground state electron density of a system. 25 Thus, slightly perturbing the external potential of a molecule will lead to a different electron density as ground state and thereby enable the use of eq. 4. Therefore, we slightly perturb the external potential matrix in KS-DFT, by adding a randomly sampled symmetric matrix with an appropriately chosen norm, to generate our training data. The pyscf software package 26,27 is used for this purpose as it is efficient and well suited for the integration of ML models trained with python.
For our model, as with most neural networks, it is useful if the inputs and targets have similar scales, and that their values do not vary over many orders of magnitude within a single sample (and between samples). Hence, an important detail in the training data generation is how we deal with the cusps at the atoms, of both the input electron density and the output energy density and its potential. Here, we take the approach of subtracting spherically symmetric "Atomic Contributions" (ACs) for each atom and each of the fields. We compute them by applying restricted or restricted open shell KS-DFT to each atom type, and spherically symmetrizing the result, for details see appendix S IV (in the few cases, in which KS-DFT did not converge, these non-converged solutions still fulfill their purpose). This greatly reduces the magnitude of the cusps, see figure S1.
Another relevant detail is the choice of target for the kinetic potential: We follow Ryczko et al. 15 and do not directly predict the kinetic potential δT s /δρ, but rather √ρ · δT s /δρ, its product with the square root of the electron density. They report that this gives a lower training error, and we have two additional reasons to make this choice: On the one hand, the denominator in eq. 4 leads to numerical problems for small densities, e.g. far from the atomic nuclei, which are alleviated by taking this product. On the other hand, in our OF-DFT calculations, the kinetic potential is multiplied with the square root of the density whenever evaluated, due to our Ansatz (eq. 5), see section IV below, hence directly predicting this quantity is natural.
We generate datasets for a number of different atoms and molecules: First, the two-electron systems He, H 2 and H + 3 , and furthermore the molecules HF, H 2 O as well as two neon atoms in the vicinity of each other as an instance of a non-binding system, which we label as Ne 2 . We sample the perturbation of the external potential and the molecule geometry independently for each training instance. For the linear molecules, we sample the inter-atomic distance uniformly in a range from around 0.4 Å to around 2.0 Å, thus covering both strongly compressed structures and nearly dissociated ones. For H + 3 , we arrange the nuclei in an equilateral triangle of side length √ 2 Å and perturb the position of each atom by a random vector with a length that is sampled uniformly in [0, 0.5 Å]. Lastly, for water, we apply the following procedure: Each O−H bond length is uniformly sampled between 90% and 110% of its equilibrium value. The bond angle is varied by uniformly sampling each O−H "vector" from a 10° cone. 80% of each dataset is used for training and the rest for validation.
IV. DENSITY OPTIMIZATION
After successfully training these models, the next logical step is to use them "in the wild", i.e. in an actual OF-DFT calculation to compute the ground state of a geometry not seen at training time, and thus to see if density optimization is possible. To this end, we have implemented an OF-DFT solver based on the work of Chan, Cohen, and Handy 28 and Ryley et al. 29 . We use the approach in which the density ρ is represented as the square of a single "orbital", or more precisely of a linear combination of atomic basis functions χ ν :
$$\rho(\mathbf{r}) = \Big( \sum_\nu c_\nu \chi_\nu(\mathbf{r}) \Big)^2 . \qquad (5)$$
The coefficients c ν are the variables which are optimized to minimize the energy functional while ensuring the correct normalization of the density. This approach allows the use of well established quantum chemical libraries for the evaluation of many of the integrals. To achieve this optimization of the total energy w.r.t. the electron density under the constraint of its normalization to the correct number of electrons N e , a Lagrange multiplier µ is introduced:
$$L = T_s[\rho] + \int v_{ext}(\mathbf{r})\rho(\mathbf{r})\, d\mathbf{r} + J[\rho] + E_{xc}[\rho] - \mu \Big( N_e - \int \rho(\mathbf{r})\, d\mathbf{r} \Big) \qquad (6)$$
where T s is the non-interacting kinetic energy, J the classical electron-electron interaction, and E xc the exchange correlation energy. The ground state electron density is then given by the global minimum of this equation. Therefore, its functional derivative w.r.t. the electron density gives us the stationarity condition
$$\frac{\delta L}{\delta \rho} = 0 = \frac{\delta T_s}{\delta \rho} + v_{ext}(\mathbf{r}) + \frac{\delta J}{\delta \rho} + \frac{\delta E_{xc}}{\delta \rho} - \mu . \qquad (7)$$
Introducing the basis expansion, the gradient w.r.t. the expansion coefficients is then given by

$$\frac{\partial L}{\partial c_j} = 2 \Big\langle \varphi_j \Big|\, \frac{\delta T_s}{\delta \rho} + v_{ext}(\mathbf{r}) + \frac{\delta J}{\delta \rho} + \frac{\delta E_{xc}}{\delta \rho} - \mu \,\Big|\, \sum_i c_i \varphi_i \Big\rangle \qquad (8)$$
Furthermore, the Lagrange multiplier µ, which corresponds to the chemical potential, needs to be optimized:
$$\frac{\partial L}{\partial \mu} = N_e - \int \rho(\mathbf{r})\, d\mathbf{r} . \qquad (9)$$
To iteratively solve this constrained optimization problem, we use the SLSQP solver 30 as implemented in the scipy package 31 .
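A minimal sketch of this constrained minimization with scipy's SLSQP; the energy, gradient (eq. 8), and normalization callables are placeholders assumed to be supplied by the surrounding OF-DFT code, and SLSQP handles the multiplier µ internally rather than optimizing it explicitly via eq. 9.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_density(c0, energy_fn, grad_fn, norm_fn, n_elec, max_steps=100):
    """Minimize E[rho(c)] over expansion coefficients c subject to the
    electron-number constraint. energy_fn / grad_fn / norm_fn are assumed
    callables from the OF-DFT code: total energy, its gradient w.r.t. c
    (eq. 8), and the integral of rho over space."""
    constraint = {"type": "eq", "fun": lambda c: norm_fn(c) - n_elec}
    result = minimize(energy_fn, c0, jac=grad_fn, method="SLSQP",
                      constraints=[constraint],
                      options={"maxiter": max_steps, "ftol": 1e-6})
    return result.x, result.fun

# Toy example: a quadratic "energy" with a fake normalization, just to run.
energy_fn = lambda c: np.sum((c - 1.0) ** 2)
grad_fn = lambda c: 2.0 * (c - 1.0)
norm_fn = lambda c: np.sum(c ** 2)
coeffs, energy = optimize_density(np.ones(4) * 0.5, energy_fn, grad_fn, norm_fn, n_elec=2.0)
print(coeffs, energy)
```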
For the initial guess in our optimizations, we use an adapted version of a SAD guess. For this we take the atomic KS-DFT densities and fit OF-DFT density coefficients to them. Those coefficients are then used to construct a guess by simply placing them at the positions of the corresponding atomic basis functions.
V. EXPERIMENTS
A. Training details
For each of the five datasets, we train one instance of our model to predict the kinetic energy density t s (to be integrated to the kinetic energy T s ) and a second instance for the kinetic potential δT s /δρ. Training a single model would be computationally more efficient, both at train and test times. Here we choose the split into two models to simplify our training by obviating the need for weighting the two objectives. We train our models using the Adam optimizer 32 with default parameters and a learning rate of 0.01 that we decay exponentially after 10 5 training iterations. We use a batch size of 64 and train until convergence. We observe some amount of overfitting in the sense that the loss is greater during validation than during training; however, both decrease during the whole training procedure, which allows us to simply evaluate the final saved model.
B. Validation results
For all datasets, the mean absolute error (MAE) of the predicted kinetic energy over 100 samples from the validation set falls below the threshold of chemical accuracy, 1 mHa per electron, see table I. We compare these results to three classical approximations of the kinetic energy functional: the first-order Thomas-Fermi (TF) functional, the second-order von Weizsäcker (vW) correction, and the MGE2 functional, which is a linear combination of the two former approximations and which performed best in the extensive comparison by Fujinami et al. 14 . The superiority of the ML functional in this metric is very obvious, as it outperforms the classical approximations by more than two orders of magnitude throughout, with one exception, however: The vW functional is exact for two-electron systems, hence its MAE is zero for He, H 2 and H + 3 . One could argue that the learning task in these cases is also much easier for the ML functional, as the semilocal expression of the vW functional is already exact; but on the one hand our model by construction cannot simply reproduce this term, and on the other hand we demonstrate a similar accuracy (per electron) on the bigger systems, where vW alone fails spectacularly.
For HF and Ne 2 we additionally demonstrate that our model is accurate enough to model chemical bonds (or the absence thereof) by evaluating it on the KS ground states for varying inter-atomic distances, and plotting the resulting dissociation curves in figures 3 and 4. Note that none of the geometries, nor any ground states (without a perturbed external potential), were part of the training sets.

TABLE II: Density optimization results for two-electron systems using our ML functionals as well as classical functionals. Note that the vW functional is exact for these systems; the small deviations are due to the limited steps and finite convergence threshold in our OF-DFT implementation. For each dataset, the columns report ∆E [mHa] and ‖ρ − ρ KS ‖ 1 for V ext = 0, for V ext taken from the validation set, and with the solvation model.
C. Density optimization
The results of applying our machine learned functionals in OF-DFT are summarized in table II. We evaluated each of our models on 100 geometries from the corresponding validation set (except of course for He, where only a single geometry is available), setting the SLSQP convergence threshold to 10 −6 and allowing a maximum of 100 steps. In the few cases when no convergence is reached, we evaluate the best solution obtained so far. To quantify the results, we compute the mean over the absolute energy errors, as well as the L1 density deviation

$$\|\rho - \rho_{KS}\|_1 = \int |\rho(\mathbf{r}) - \rho_{KS}(\mathbf{r})|\, d\mathbf{r} \qquad (10)$$

between the KS density ρ KS and the result of the OF calculation ρ. For He, H 2 and H + 3 , we obtain errors of less than 1 mHa and L1 density deviations on the order of 10 −3 electrons, see table II. This is more than precise enough to correctly model the H 2 bond, see figure 5.
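On the quadrature grid, the integral in eq. 10 reduces to a weighted sum; a short sketch, assuming the grid weights are available (as they are for pyscf's molecular grids):

```python
import numpy as np

def l1_density_deviation(rho_of, rho_ks, grid_weights):
    """Approximate eq. 10 as a quadrature sum over the molecular grid:
    ||rho - rho_KS||_1 ~= sum_k w_k |rho(r_k) - rho_KS(r_k)|."""
    return np.sum(grid_weights * np.abs(rho_of - rho_ks))

# Fake three-point example, weights in Bohr^3, densities in electrons/Bohr^3.
w = np.array([0.2, 0.5, 0.3])
print(l1_density_deviation(np.array([1.0, 0.4, 0.1]),
                           np.array([0.98, 0.42, 0.09]), w))
```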
Furthermore, the way our learned functionals generalize allows us to apply them in different settings: Just as for KS-DFT during data generation, we can apply orbital-free density optimization to molecules in the presence of an additional external potential. For this, we use potentials from the validation set and observe that the accuracy of our model is still good, see the middle two columns in table II.
We can also apply solvation models that simulate a chemical environment by a density-dependent contribution to the external potential. To this end we employ the ddCOSMO solvation model 33-35 as implemented in pyscf with default parameters, i.e. simulating a solution in water, see the two rightmost columns of table II.
Note that none of these modifications would have been possible if we took a very direct black-box ML approach of e.g. directly predicting the ground-state electron density.
The reason why we only present density optimization results for the two-electron systems He, H 2 and H + 3 is that only for those there is an exact correspondence between the possible KS densities and the Ansatz we use in our OF calculations (eq. 5; for details see appendix S III). Hence, for these systems densities close to the KS ground state are obtainable. For systems with more than two electrons, on the other hand, it is impossible in the cc-pVDZ basis that we are using to model densities close to the KS ground state with the OF Ansatz; even fitting the coefficients to best mimic the KS density leads to an L1 deviation of multiple electrons. So while OF-DFT calculations using our learned functionals sometimes converge for these larger systems, either a sufficiently larger basis, maybe optimized for this application, or an entirely different Ansatz is required to reach quantitatively interesting results.
VI. CONCLUSION
We present KineticNet, a new equivariant machine learning model adapted for the prediction of molecular properties on quadrature grids. Using the electron density on the grid and the positions of all nuclei as input, it can successfully predict the corresponding non-interacting kinetic energy density for a variety of systems such as HF, H 2 O and Ne 2 . The new functional correctly describes chemical bonding as well as the absence of it in Ne 2 . We offer proof of principle that this architecture can predict the kinetic potential with sufficient accuracy to allow actual OF-DFT density optimization to reach the respective KS-DFT ground state for the model systems H 2 , H + 3 and He. Additionally, we show that the generation of varied training data, by invoking fundamental concepts of DFT, allows training a model that generalizes over densities arising in the presence of different external potentials. This also includes simple solvent models such as ddCOSMO, which can be applied out of the box without any additional retraining.
Given these encouraging results, the next step is to generalize the entire workflow to afford density optimization for more than two electrons. We conjecture that the principal obstacle in the current setup is that the KS-DFT ground state cannot be represented by our combination of basis and description of the density. In response, we are now working on a specialized OF-DFT basis set which hopefully overcomes this limitation and allows for chemically accurate OF-DFT calculations on systems of unprecedented size.

Supplemental Material

S I. DATA GENERATION

As mentioned in section III, to generate the training data for our model we slightly perturbed the external potential of our molecules in order to sample a diverse set of densities as solutions of Kohn-Sham DFT and thereby calculate our targets. To achieve this, we used the pyscf package as code base and implemented our own restricted Kohn-Sham class which takes an additional matrix and adds it to the external potential matrix in the Hamiltonian of the SCF procedure.
For the sampling of those perturbation matrices the following approach was adopted:
1. (Relative) entries of the perturbation matrix are drawn from some random distribution, ensuring a symmetric matrix.
2. The norm of the perturbation matrix is drawn from some distribution, and the matrix is normalized accordingly.
The distributions are chosen to ensure that a majority of data points are somewhat close to the ground state. The reasoning is that, for accurate convergence and ground state values, the machine learning model should be well trained on this part of density space, while far away from the solution a rough estimate of the kinetic energy and potential should be enough to guide the OF-DFT solver roughly in the correct direction. Details regarding those distributions for the different datasets are given in table S1. All datasets have been calculated at the BLYP/cc-pVDZ level of theory.
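A sketch of this sampling procedure; the distribution parameters below are arbitrary illustrations rather than the values of table S1.

```python
import numpy as np

def sample_perturbation(n_ao, entry_std=1.0, norm_mean=0.05, norm_std=0.05,
                        norm_min=0.005, seed=None):
    """Draw a random symmetric perturbation for the external-potential matrix.
    1) sample relative entries and symmetrize, 2) rescale to a norm drawn
    from a (clipped) normal distribution. All numbers here are illustrative."""
    rng = np.random.default_rng(seed)
    a = rng.normal(scale=entry_std, size=(n_ao, n_ao))
    v = 0.5 * (a + a.T)                                  # ensure symmetry
    target_norm = max(rng.normal(norm_mean, norm_std), norm_min)
    return v * (target_norm / np.linalg.norm(v))

dv = sample_perturbation(n_ao=10, seed=0)
# In the data generation, this matrix would be added to the external-potential
# matrix inside the customized restricted Kohn-Sham class built on pyscf.
print(np.allclose(dv, dv.T), np.linalg.norm(dv))
```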
S II. OF-DFT IMPLEMENTATION
Our OF-DFT implementation is based on the pyscf package, which we use to compute all the required integrals. Density fitting as implemented in the pyscf package is used for the calculation of the Coulomb matrix.
S III. CORRESPONDENCE BETWEEN KS AND OF ANSATZ FOR TWO ELECTRONS
Recall the equation for the electron density in terms of the coefficients c ν in our OF approach:
$$\rho(\mathbf{r}) = \Big( \sum_\nu c_\nu \chi_\nu(\mathbf{r}) \Big)^2 . \qquad (S1)$$
In KS-DFT one can write the electron density in the basis of atomic basis functions using the molecular orbital coefficients m iν :
$$\rho(\mathbf{r}) = \sum_i \Big( \sum_\nu m_{i\nu} \chi_\nu(\mathbf{r}) \Big)^2 . \qquad (S2)$$
For two electron systems, there is only one molecular orbital, hence the first sum disappears. Thus, the two expressions for the density are equal for m 1ν = c ν .

FIG. S1: Effectiveness of subtracting atomic contributions demonstrated for H 2 O. For electron density, kinetic energy density as well as our target for the kinetic potential, subtracting the ACs decreases the value range by at least two orders of magnitude.
S IV. ATOMIC CONTRIBUTIONS
The atomic contributions have been calculated using either restricted or restricted open-shell KS-DFT at the BLYP/cc-pVDZ level of theory. An 'atomic' initial guess was used, as implemented in the pyscf package. The convergence tolerance was set to 10 −6 and a grid level of 2 was used. Symmetry was employed to remove directional bias; its usage ensures separation w.r.t. angular momentum. This allows the following procedure for spherical symmetrization of p-type orbitals: First, the MO coefficients and energies are averaged, weighted by their occupation. Next, the electrons in p-orbitals are evenly distributed over all three p-orbitals. As this procedure has only been implemented for p-type orbitals, only elements up to neon can be used.
S V. MODEL HYPERPARAMETERS
In Table S2, we list the model hyperparameters we use. All models employ L = 5 atom-atom interaction layers. In general, we choose the number of features per order l such that approximately the same number of floats are used for each l.
FIG. 1: The proposed KineticNet architecture is an equivariant deep neural network with three types of layers: First, an atomic encoder relying on point convolutions (eq. 2) to summarize the density information on the quadrature grid in terms of tensorial features associated with the nuclei; then a number L of atom-atom interaction layers; and finally a decoding layer making predictions at all grid points.

FIG. 2: Schematic of the radial basis to model R in eq. 2, with and without transformation to adjust to the Treutler-Ahlrichs shells, as used in the atomic encoding and decoding layers. We apply a smooth cutoff towards the maximum radius.
FIG. 3: Total ground state energy of HF at different bond lengths, as computed by KS-DFT as well as the prediction of our ML functional on the KS ground-state densities (without orbital-free density optimization).

FIG. 4: Total ground state energy of Ne 2 at different "bond" lengths, as computed by KS-DFT as well as the prediction of our ML functional on the KS ground-state densities (without orbital-free density optimization).
FIG. 7: Comparison between predicted and KS kinetic energies for H + 3 on 100 samples from the validation set, and representative electron densities (minus SAD).
TABLE S1: Datasets used for training. The norm of the perturbation matrix is sampled from a normal distribution with a minimum norm of 0.005 and the mean and standard deviation given in the table. The relative entries of these matrices are sampled from a normal distribution with mean and standard deviation given in the table.
TABLE S2: Model hyperparameters. The representations are listed as multiplicities for different l, e.g. (20, 5, 3) means 20 scalars, 5 vectors, and 3 type l = 2 tensors.
FIG. 5: Total ground state energy of H 2 at different bond lengths, as computed by KS-DFT as well as OF-DFT using our machine learned functionals.
TABLE I: Energy mean absolute error for our models (KineticNet) and classical functionals on validation sets consisting of KS solution densities (for varying v ext ).

|∆T s | [mHa]    He     H 2    H + 3    HF      Ne 2     H 2 O
KineticNet      0.31   0.16   0.84     4.41    4.86     2.69
TF              291    212    276      9058    21631    6927
vW              0      0      0        26357   77716    18671
MGE2            79     102    155      779     2599     811
FIG. 6: Slices through input, prediction, and error for H + 3 , on three validation samples. From left to right: Electron density, electron density minus AC, predicted kinetic energy density minus AC, error of the electron density, predicted kinetic potential times square root of electron density and lastly its error.
Acknowledgements

We would like to thank Christof Gehrig for his help in achieving the first successful density optimization, in particular by suggesting the use of the product of the kinetic potential and the square root of the electron density as a target. This work is supported by Deutsche Forschungsgemeinschaft (DFG) under Germany's Excellence Strategy EXC-2181/1 -390900948 (the Heidelberg STRUCTURES Excellence Cluster), as well as by Klaus Tschira Stiftung gGmbH in the framework of the SIMPLAIX consortium. T.K. and A.D. acknowledge support by the state of Baden-Württemberg through bwHPC and the German Research Foundation (DFG) through grant no INST 40/575-1 FUGG (JUSTUS 2 cluster).

R. G. Parr and W. Yang, "Density-functional theory of the electronic structure of molecules," Annual Review of Physical Chemistry 46, 701-728 (1995).
Pushing the frontiers of density functionals by solving the fractional electron problem. J Kirkpatrick, B Mcmorrow, D H P Turban, A L Gaunt, J S Spencer, A G D G Matthews, A Obika, L Thiry, M Fortunato, D Pfau, L R Castellanos, S Petersen, A W R Nelson, P Kohli, P Mori-Sánchez, D Hassabis, A J Cohen, 10.1126/science.abj6511Science. 374J. Kirkpatrick, B. McMorrow, D. H. P. Turban, A. L. Gaunt, J. S. Spencer, A. G. D. G. Matthews, A. Obika, L. Thiry, M. Fortunato, D. Pfau, L. R. Castellanos, S. Petersen, A. W. R. Nelson, P. Kohli, P. Mori-Sánchez, D. Hassabis, and A. J. Cohen, "Pushing the frontiers of density function- als by solving the fractional electron problem," Science 374, 1385-1389 (2021).
Completing density functional theory by machine learning hidden messages from molecules. R Nagai, R Akashi, O Sugino, npj Computational Materials. 6R. Nagai, R. Akashi, and O. Sugino, "Completing density functional theory by machine learning hidden messages from molecules," npj Computational Materials 6, 1-8 (2020).
Cider: An expressive, nonlocal feature set for machine learning density functionals with exact constraints. K Bystrom, B Kozinsky, Journal of Chemical Theory and Computation. 18K. Bystrom and B. Kozinsky, "Cider: An expressive, nonlocal feature set for machine learning density functionals with exact constraints," Journal of Chemical Theory and Computation 18, 2180-2192 (2022).
Machine learning accurate exchange and correlation functionals of the electronic density. S Dick, M Fernandez-Serra, Nature communications. 11S. Dick and M. Fernandez-Serra, "Machine learning accurate exchange and correlation functionals of the electronic density," Nature communications 11, 1-10 (2020).
ωb97x-v: A 10-parameter, rangeseparated hybrid, generalized gradient approximation density functional with nonlocal correlation, designed by a survival-of-the-fittest strategy. N Mardirossian, M Head-Gordon, Physical Chemistry Chemical Physics. 16N. Mardirossian and M. Head-Gordon, "ωb97x-v: A 10-parameter, range- separated hybrid, generalized gradient approximation density functional with nonlocal correlation, designed by a survival-of-the-fittest strategy," Physical Chemistry Chemical Physics 16, 9904-9924 (2014).
Machine learning the physical nonlocal exchange-correlation functional of densityfunctional theory. J Schmidt, C L Benavides-Riveros, M A Marques, The journal of physical chemistry letters. 10J. Schmidt, C. L. Benavides-Riveros, and M. A. Marques, "Machine learning the physical nonlocal exchange-correlation functional of density- functional theory," The journal of physical chemistry letters 10, 6425-6431 (2019).
Orbital-free bond breaking via machine learning. J C Snyder, M Rupp, K Hansen, L Blooston, K.-R Müller, K Burke, The Journal of chemical physics. 139224104J. C. Snyder, M. Rupp, K. Hansen, L. Blooston, K.-R. Müller, and K. Burke, "Orbital-free bond breaking via machine learning," The Journal of chemical physics 139, 224104 (2013).
Machine learning approaches toward orbital-free density functional theory: Simultaneous training on the kinetic energy density functional and its functional derivative. R Meyer, M Weichselbaum, A W Hauser, Journal of chemical theory and computation. 16R. Meyer, M. Weichselbaum, and A. W. Hauser, "Machine learning ap- proaches toward orbital-free density functional theory: Simultaneous train- ing on the kinetic energy density functional and its functional derivative," Journal of chemical theory and computation 16, 5685-5694 (2020).
Direct scheme calculation of the kinetic energy functional derivative using machine learning. H Saidaoui, S Kais, S Rashkeev, F Alharbi, arXiv:2003.00876arXiv preprintH. Saidaoui, S. Kais, S. Rashkeev, and F. Alharbi, "Direct scheme calcu- lation of the kinetic energy functional derivative using machine learning," arXiv preprint arXiv:2003.00876 (2020).
Artificial neural networks for the kinetic energy functional of non-interacting fermions. S A Ghasemi, T D Kühne, The Journal of Chemical Physics. 15474107S. A. Ghasemi and T. D. Kühne, "Artificial neural networks for the kinetic energy functional of non-interacting fermions," The Journal of Chemical Physics 154, 074107 (2021).
Kinetic energy densities based on the fourth order gradient expansion: performance in different classes of materials and improvement via machine learning. P Golub, S Manzhos, Physical Chemistry Chemical Physics. 21P. Golub and S. Manzhos, "Kinetic energy densities based on the fourth order gradient expansion: performance in different classes of materials and improvement via machine learning," Physical Chemistry Chemical Physics 21, 378-395 (2019).
Semilocal machine-learned kinetic energy density functional with third-order gradients of electron density. J Seino, R Kageyama, M Fujinami, Y Ikabata, H Nakai, The Journal of chemical physics. 148241705J. Seino, R. Kageyama, M. Fujinami, Y. Ikabata, and H. Nakai, "Semi- local machine-learned kinetic energy density functional with third-order gradients of electron density," The Journal of chemical physics 148, 241705 (2018).
Orbitalfree density functional theory calculation applying semi-local machinelearned kinetic energy density functional and kinetic potential. M Fujinami, R Kageyama, J Seino, Y Ikabata, H Nakai, Chemical Physics Letters. 748137358M. Fujinami, R. Kageyama, J. Seino, Y. Ikabata, and H. Nakai, "Orbital- free density functional theory calculation applying semi-local machine- learned kinetic energy density functional and kinetic potential," Chemical Physics Letters 748, 137358 (2020).
Toward orbital-free density functional theory with small data sets and deep learning. K Ryczko, S J Wetzel, R G Melko, I Tamblyn, Journal of Chemical Theory and Computation. 18K. Ryczko, S. J. Wetzel, R. G. Melko, and I. Tamblyn, "Toward orbital-free density functional theory with small data sets and deep learning," Journal of Chemical Theory and Computation 18, 1122-1128 (2022).
Order-n orbital-free densityfunctional calculations with machine learning of functional derivatives for semiconductors and metals. F Imoto, M Imada, A Oshiyama, Physical Review Research. 333198F. Imoto, M. Imada, and A. Oshiyama, "Order-n orbital-free density- functional calculations with machine learning of functional derivatives for semiconductors and metals," Physical Review Research 3, 033198 (2021).
Tensor field networks: Rotation-and translation-equivariant neural networks for 3d point clouds. N Thomas, T Smidt, S Kearnes, L Yang, L Li, K Kohlhoff, P Riley, arXiv:1802.08219arXiv preprintN. Thomas, T. Smidt, S. Kearnes, L. Yang, L. Li, K. Kohlhoff, and P. Riley, "Tensor field networks: Rotation-and translation-equivariant neural net- works for 3d point clouds," arXiv preprint arXiv:1802.08219 (2018).
M Geiger, T Smidt, arXiv:2207.09453e3nn: Euclidean neural networks. arXiv preprintM. Geiger and T. Smidt, "e3nn: Euclidean neural networks," arXiv preprint arXiv:2207.09453 (2022).
Symmetry-adapted machine learning for tensorial properties of atomistic systems. A Grisafi, D M Wilkins, G Csányi, M Ceriotti, Physical review letters. 12036002A. Grisafi, D. M. Wilkins, G. Csányi, and M. Ceriotti, "Symmetry-adapted machine learning for tensorial properties of atomistic systems," Physical review letters 120, 036002 (2018).
3d steerable cnns: Learning rotationally equivariant features in volumetric data. M Weiler, M Geiger, M Welling, W Boomsma, T S Cohen, Advances in Neural Information Processing Systems. 31M. Weiler, M. Geiger, M. Welling, W. Boomsma, and T. S. Cohen, "3d steerable cnns: Learning rotationally equivariant features in volumetric data," Advances in Neural Information Processing Systems 31 (2018).
Turbomole version 5, theoretical chemistry group, university of karlsruhe, 2002;(b) o. treutler, r. ahlrichs. R Ahlrichs, M Bär, H Baron, R Bauernschmitt, S Böcker, M Ehrig, K Eichkorn, S Elliott, F Furche, F Haase, J. Chem. Phys. 102346R. Ahlrichs, M. Bär, H. Baron, R. Bauernschmitt, S. Böcker, M. Ehrig, K. Eichkorn, S. Elliott, F. Furche, F. Haase, et al., "Turbomole version 5, theoretical chemistry group, university of karlsruhe, 2002;(b) o. treutler, r. ahlrichs," J. Chem. Phys 102, 346 (1995).
Density-functional exchange-energy approximation with correct asymptotic behavior. A D Becke, Physical review A. 383098A. D. Becke, "Density-functional exchange-energy approximation with correct asymptotic behavior," Physical review A 38, 3098 (1988).
Development of the colle-salvetti correlation-energy formula into a functional of the electron density. C Lee, W Yang, R G Parr, Physical review B. 37785C. Lee, W. Yang, and R. G. Parr, "Development of the colle-salvetti correlation-energy formula into a functional of the electron density," Phys- ical review B 37, 785 (1988).
Kinetic energy functionals from the kohn-sham potential. R A King, N C Handy, Physical Chemistry Chemical Physics. 2R. A. King and N. C. Handy, "Kinetic energy functionals from the kohn-sham potential," Physical Chemistry Chemical Physics 2, 5049-5056 (2000).
Inhomogeneous electron gas. P Hohenberg, W Kohn, 10.1103/PhysRev.136.B864Phys. Rev. 136P. Hohenberg and W. Kohn, "Inhomogeneous electron gas," Phys. Rev. 136, B864-B871 (1964).
Pyscf: the pythonbased simulations of chemistry framework. Q Sun, T C Berkelbach, N S Blunt, G H Booth, S Guo, Z Li, J Liu, J D Mcclain, E R Sayfutyarova, S Sharma, Wiley Interdisciplinary Reviews: Computational Molecular Science. 81340Q. Sun, T. C. Berkelbach, N. S. Blunt, G. H. Booth, S. Guo, Z. Li, J. Liu, J. D. McClain, E. R. Sayfutyarova, S. Sharma, et al., "Pyscf: the python- based simulations of chemistry framework," Wiley Interdisciplinary Re- views: Computational Molecular Science 8, e1340 (2018).
Recent developments in the pyscf program package. Q Sun, X Zhang, S Banerjee, P Bao, M Barbry, N S Blunt, N A Bogdanov, G H Booth, J Chen, Z.-H Cui, The Journal of chemical physics. 15324109Q. Sun, X. Zhang, S. Banerjee, P. Bao, M. Barbry, N. S. Blunt, N. A. Bog- danov, G. H. Booth, J. Chen, Z.-H. Cui, et al., "Recent developments in the pyscf program package," The Journal of chemical physics 153, 024109 (2020).
Thomas-Fermi-Dirac-von Weizsäcker models in finite systems. G K Chan, A J Cohen, N C Handy, 10.1063/1.1321308J. Chem. Phys. 114631G. K.-L. Chan, A. J. Cohen, and N. C. Handy, "Thomas-Fermi-Dirac-von Weizsäcker models in finite systems," J. Chem. Phys. 114, 631 (2001).
Robust all-electron optimization in orbital-free density-functional theory using the trust-region image method. M S Ryley, M Withnall, T J P Irons, T Helgaker, A M Teale, 10.1021/acs.jpca.0c09502J. Phys. Chem. A. 125American Chemical SocietyM. S. Ryley, M. Withnall, T. J. P. Irons, T. Helgaker, and A. M. Teale, "Ro- bust all-electron optimization in orbital-free density-functional theory using the trust-region image method," J. Phys. Chem. A 125, 459-475 (2021), publisher: American Chemical Society.
D Kraft, A Software Package for Sequential Quadratic Programming, Deutsche Forschungs-und Versuchsanstalt für Luft-und Raumfahrt Köln: Forschungsbericht (Wiss. Berichtswesen d. DFVLR. D. Kraft, A Software Package for Sequential Quadratic Programming, Deutsche Forschungs-und Versuchsanstalt für Luft-und Raumfahrt Köln: Forschungsbericht (Wiss. Berichtswesen d. DFVLR, 1988).
. P Virtanen, R Gommers, T E Oliphant, M Haberland, T Reddy, D Cournapeau, E Burovski, P Peterson, W Weckesser, J Bright, S J Van Der Walt, M Brett, J Wilson, K J Millman, N Mayorov, A R J Nelson, E Jones, R Kern, E Larson, C J Carey, İ Polat, Y Feng, E W Moore, J Vanderplas, D Laxalde, J Perktold, R Cimrman, I Henriksen, E A Quintero, C R Harris, A M Archibald, A H Ribeiro, F Pedregosa, P Van Mulbregt, 10.1038/s41592-019-0686-2Nature Methods. 17and SciPy 1.0 Contributors, "SciPy 1.0: Fundamental Algorithms for Scientific Computing in PythonP. Virtanen, R. Gommers, T. E. Oliphant, M. Haberland, T. Reddy, D. Cour- napeau, E. Burovski, P. Peterson, W. Weckesser, J. Bright, S. J. van der Walt, M. Brett, J. Wilson, K. J. Millman, N. Mayorov, A. R. J. Nelson, E. Jones, R. Kern, E. Larson, C. J. Carey,İ. Polat, Y. Feng, E. W. Moore, J. VanderPlas, D. Laxalde, J. Perktold, R. Cimrman, I. Henriksen, E. A. Quintero, C. R. Harris, A. M. Archibald, A. H. Ribeiro, F. Pedregosa, P. van Mulbregt, and SciPy 1.0 Contributors, "SciPy 1.0: Fundamental Algo- rithms for Scientific Computing in Python," Nature Methods 17, 261-272 (2020).
Adam: A method for stochastic optimization. D P Kingma, J Ba, arXiv:1412.6980arXiv preprintD. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," arXiv preprint arXiv:1412.6980 (2014).
| [] |
[
"Making Language Models Better Tool Learners with Execution Feedback",
"Making Language Models Better Tool Learners with Execution Feedback"
] | [
"Shuofei Qiao [email protected] \nZhejiang University\n\n",
"Honghao Gui [email protected] \nZhejiang University\n\n",
"Huajun Chen ♠♥ [email protected] \nZhejiang University\n\n\nDonghai Laboratory\n\n",
"Ningyu Zhang [email protected] \nZhejiang University\n\n"
] | [
"Zhejiang University\n",
"Zhejiang University\n",
"Zhejiang University\n",
"Donghai Laboratory\n",
"Zhejiang University\n"
] | [] | Tools serve as pivotal interfaces that enable humans to understand and reshape the world. With the advent of foundational models, AI systems can utilize tools to expand their capabilities and interact with the world. Existing tool learning methodologies, encompassing supervised fine-tuning and prompt engineering approaches, often induce language models to utilize tools indiscriminately, as complex problems often exceed their own competencies. However, introducing tools for simple tasks, which the models themselves can readily resolve, can inadvertently propagate errors rather than enhance performance. This leads to the research question: can we teach language models when and how to use tools? To meet this need, we propose Tool leaRning wIth exeCution fEedback (TRICE), a twostage end-to-end framework that enables the model to continually learn through feedback derived from tool execution, thereby learning when and how to use tools effectively. Experimental results, backed by further analysis, show that TRICE can make the language model to selectively use tools by decreasing the model's dependency on tools while enhancing the performance 1 . * Corresponding Author. | 10.48550/arxiv.2305.13068 | [
"https://export.arxiv.org/pdf/2305.13068v1.pdf"
] | 258,832,372 | 2305.13068 | 7919cb1a1dcf70ed7803c43a71d43dba696ef149 |
Making Language Models Better Tool Learners with Execution Feedback
Shuofei Qiao [email protected]
Zhejiang University
Honghao Gui [email protected]
Zhejiang University
Huajun Chen ♠♥ [email protected]
Zhejiang University
Donghai Laboratory
Ningyu Zhang [email protected]
Zhejiang University
Making Language Models Better Tool Learners with Execution Feedback
Tools serve as pivotal interfaces that enable humans to understand and reshape the world. With the advent of foundational models, AI systems can utilize tools to expand their capabilities and interact with the world. Existing tool learning methodologies, encompassing supervised fine-tuning and prompt engineering approaches, often induce language models to utilize tools indiscriminately, as complex problems often exceed their own competencies. However, introducing tools for simple tasks, which the models themselves can readily resolve, can inadvertently propagate errors rather than enhance performance. This leads to the research question: can we teach language models when and how to use tools? To meet this need, we propose Tool leaRning wIth exeCution fEedback (TRICE), a twostage end-to-end framework that enables the model to continually learn through feedback derived from tool execution, thereby learning when and how to use tools effectively. Experimental results, backed by further analysis, show that TRICE can make the language model to selectively use tools by decreasing the model's dependency on tools while enhancing the performance 1 . * Corresponding Author.
Introduction
Tools serve as vital interfaces that allow humans to comprehend and reshape the world. A defining characteristic that distinguishes humans from other animals is our remarkable capacity to create and utilize tools (Orban and Caruana, 2014;Osiurak and Reynaud, 2020). The recent rapid advancement of foundation models (Brown et al., 2020;Ouyang et al., 2022;Chowdhery et al., 2022) enables them to possess surprising capabilities of generation (Brown et al., 2020;Touvron et al., 2023), reasoning (Qiao et al., 2022;Huang and Chang, 2022), and decision-making (Yao et al., 2023;Shen et al., 2023), making it practical for AI machines to utilize tools effectively (Paranjape et al., 2023;.
Existing research has shed light on the potential of Large Language Models (LLMs) to exhibit a promising level of dexterity and finesse in tool use (Qin et al., 2023). Toolformer (Schick et al., 2023) teaches LLMs themselves to use tools by fine-tuning them in a self-supervised way. ART (Paranjape et al., 2023) leverages task-specific demonstrations to prompt frozen LLMs to generate intermediate reasoning steps and tool use. Other works (Shen et al., 2023;Ge et al., 2023; employ LLMs as a hub for human-tool interaction, responsible for orchestrating the deployment and usage of tools and consolidating the results for user interpretation.
Despite the empirical success of previous work, a critical issue remains: LLMs often do not understand when and how to properly use which tools. On one hand, the use of tools is necessary to augment LLMs when facing complex problems that surpass their inherent capabilities. On the other hand, for simpler problems that can readily be solved by the models themselves, introducing tools can paradoxically propagate errors rather than enhance performance. These errors can include but are not limited to, improper selection of tool types, generation of incorrect tool inputs, and ineffective utilization of tool return results. Intuitively, it's crucial for LLMs to develop an awareness of when tools are necessary and when they are not, and to be able to make decisions about selecting the most appropriate tools for the task at hand.
To address the above issues, we propose Tool leaRning wIth exeCution fEedback (TRICE) as shown in Figure 1, a two-stage end-to-end framework that enables the model to continually learn through feedback derived from tool execution, thereby learning when and how to use tools effectively. Specifically, we first build a dataset that helps discern when tool usage is necessary for LLMs and when it is not. We evaluate the untrained model by having it answer questions directly, considering correct responses as instances where tools aren't needed, and incorrect responses as instances where tools are required for assistance. Given the lack of gold labels, we utilize ChatGPT (OpenAI, 2022) to automatically generate tool usage APIs for data requiring tools, thereby effectively eliminating the need for time-consuming manual annotation. For data that does not require tools, we directly use the model's correct answer as the label. Then, we introduce a two-stage training strategy to teach the model when to use tools: 1) Behavior Cloning. We conduct supervised fine-tuning on the dataset to let the model imitate the tool-using behavior. 2) Reinforcement Learning with Execution Feedback (RLEF). We further reinforce the model with tool execution feedback, guiding the model to selectively use tools to avoid error propagation. We utilize RRHF , a simple and effective reinforcement learning algorithm, as the basic backbone.
We train and evaluate TRICE on two mathematical reasoning datasets. Experimental results and further analyses demonstrate that TRICE successfully instructs the model to judiciously use tools, simultaneously reducing its reliance on tools and enhancing the accuracy of tool usage. In summary, the key contributions of our study are as follows:
• We introduce TRICE, a two-stage end-to-end training framework that leverages execution feedback to help LLMs become more proficient tool learners.
• We achieve superior performance on two benchmark datasets compared to previous fine-tuning-based tool learning methods.
• Through extensive empirical analysis, we demonstrate that our TRICE framework can guide the model in judicious tool usage, thereby reducing the model's dependency on tools and improving the precision of tool use.
Related Work
Tool Learning. Though possessing remarkable capabilities of generation (Brown et al., 2020;Touvron et al., 2023), reasoning (Qiao et al., 2022;Huang and Chang, 2022), and decisionmaking (Yao et al., 2023;Shen et al., 2023;, LLMs still struggle in many basic aspects such as arithmetic calculation (Patel et al., 2021), knowledge querying (Komeili et al., 2022;Ji et al., 2022), etc. where much smaller and simpler tools may precisely excel. Under this circumstance, a new paradigm, called Tool Learning (Qin et al., 2023), is born to combine the strengths of both LLMs and specialized tools. Some works (Driess et al., 2023;Shen et al., 2023; regard LLMs as a decisionmaking hub for compositional tool using which can be called Tool-Oriented Learning (Qin et al., 2023), while others (Gao et al., 2022;Liu et al., 2023;Schick et al., 2023) treat tools as complementary resources to extend the power of LLMs which can be called Tool-Augmented Learning (Mialon et al., 2023;Qin et al., 2023).
Despite their success, tool-augmented approaches tend to force LMs to use tools mindlessly regardless of whether they actually need to lend tools for help. This may, in some scenarios, steer LMs to erroneously choose the type of tools or the way to use tools, making the loss outweighs the gain. Compared to previous works, we focus on the tool-augmented learning paradigm and try to make LMs better tool learners by teaching them to use tools selectively instead of blindly.
Learning from Feedback. An intuitive training approach of tool learning is to fit LMs on examples with human-labeled tools directly (Torabi et al., 2018; which is so-called Behavior Cloning (Bain and Sammut, 1995). However, this method is time-consuming and labor-intensive. Moreover, it is impractical to explicitly annotate every possible scenario (Codevilla et al., 2019) and LMs can only imitate human-labeled behavior but are difficult to generalize to new scenarios. It is worth noting that humans generally have the ability to correct and reflect on their own behavior from trial and error (Allen et al., 2019). Intuitively, feedbacks from the environments or humans enable LMs to understand the impact of their actions and adapt their behavior accordingly.
Reinforcement learning (RL) excels at enabling models to learn decision-making abilities in complex environments through feedback (Schrittwieser et al., 2020;Yao et al., 2022;Ge et al., 2023). RLHF (Christiano et al., 2017;Ouyang et al., 2022) applies a state-of-the-art RL algorithm, Proximal Policy Optimization (PPO) (Schulman et al., 2017), to align LMs with human feedback. Nevertheless, PPO has shown sensitivity to hyperparameters and its conventional implementation requires a minimum of four models, making it memoryintensive and challenging to train. RAINIER (Liu et al., 2022) reinforces knowledge introspection for commonsense question answering with a fixed QA model providing feedback. OpenAGI (Ge et al., 2023) proposes RL with task feedback for complex task-solving with various external expert models.
Compared to previous feedback strategies, we introduce RLEF for tool learning which reinforces the LMs by leveraging the execution result of tools. We further apply a much simpler RL framework in contrast to RLHF, called RRHF , where the model learns to align with human preferences by scoring the responses produced by various sampling policies and utilizing ranking loss.
Methodology
Problem Overview. We mainly focus on the mathematical reasoning task, with each training instance in the format of x = (s, q, t, a), where s is the specialized instruction of each task, q is the question, t is the tool API and a is the gold answer. Following an instruction-following paradigm, the complete input of the LLM is as follows:
input = {instruction}{question}    (1)
As for the output, when LLM deems that no tool is necessary, it outputs the answer a. Conversely, if the model identifies the need for a tool, it outputs the tool API t, which encompasses the specific type of tool and its corresponding input:
$$\text{output} = \begin{cases} \text{answer}, & \text{use\_tool} = \text{false} \\ \text{tool\_name(tool\_input)}, & \text{use\_tool} = \text{true} \end{cases} \qquad (2)$$

Given the problem, the main challenges lie in 1) determining when the LLM should or should not harness tools for help, and 2) how to impart to the model the ability to make selective use of tools. For the former, we allow the untrained model to infer answers, considering the correct ones as not requiring tools and the incorrect ones as indicating the need for tool assistance. For the latter, we adopt TRICE, a two-stage training strategy. In the first stage, we use Behavior Cloning to teach the model how to invoke tools. Building upon this foundation, in the second stage, we leverage RLEF to train the model in selective tool usage. The overview of our method is illustrated in Figure 2.

Algorithm 1 Two-Stage Training
Input: initial model θ, train dataset D tool , response generation model set M
1: θ clone ← Optimize θ with Eq. 3 from D tool .    ▷ Section 3.1
2: res, scores ← Collect responses and calculate reward scores with M, D tool .
3: θ RLEF ← RLEF(D tool , θ clone , res, scores)    ▷ Section 3.2
4:
5: procedure RLEF(D tool , θ, res, scores)
6:     θ old ← θ
7:     for iterations = 1, 2, ... do
8:         Sample a minibatch from D tool .
9:         for step = 1, 2, ..., |minibatch| do
10:            Calculate L rank and L sft with res, scores, D tool based on Eq. 5 & 6.
11:            Calculate L policy based on Eq. 7.
12:            Optimize θ with L policy for one step.
13:        end for
14:        θ old ← θ
15:    end for
16:    return θ
17: end procedure
Output: θ RLEF
Data Construction. The construction of the dataset is based on the assumption that when the LLM generates incorrect answers, it indicates the need for assistance from tools. For the initial training set $D_{init} = \{(q, a)\}_{i=1}^{|D_{init}|}$, we first utilize the pretrained LLM without fine-tuning to generate final answers. Since we do not have gold labels of tool APIs, we employ ChatGPT (OpenAI, 2022) to generate t = tool_name(tool_input) as pseudo-labels in a few-shot prompting manner for questions where the generated answers are incorrect. As for questions with correct answers, we directly set t = None, indicating that no tool API is required. We devise specific instructions s tailored to the mathematical reasoning task. In the end, we obtain the dataset $D_{tool} = \{(s, q, t, a)\}_{i=1}^{|D_{tool}|}$ as we desire.
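A sketch of this labeling loop; `model_answer` and `annotate_tool_call` are hypothetical stand-ins for the untrained backbone LLM and the few-shot ChatGPT annotator, and the tool-call string follows the format of Eq. 2.

```python
def build_d_tool(d_init, instruction, model_answer, annotate_tool_call):
    """Label each (q, a) pair: if the untrained model already answers
    correctly, no tool is needed (t = None); otherwise ask an annotator
    model for a tool API call 'tool_name(tool_input)' as a pseudo-label.
    `model_answer` and `annotate_tool_call` are assumed callables."""
    d_tool = []
    for q, a in d_init:
        pred = model_answer(instruction, q)
        t = None if is_correct(pred, a) else annotate_tool_call(q)
        d_tool.append({"instruction": instruction, "question": q,
                       "tool": t, "answer": a})
    return d_tool

def is_correct(pred, gold, tol=1e-4):
    """Numeric comparison used for math word problems."""
    try:
        return abs(float(pred) - float(gold)) < tol
    except (TypeError, ValueError):
        return False
```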
Training. As shown in Figure 2, based on D tool , we conduct a two-stage training approach: I) Behavior Cloning ( §3.1). In this stage, we teach the model to imitate the tool usage behavior of ChatGPT by fine-tuning it on D tool in a sequence-to-sequence manner. This empowers our model with the preliminary functionality of tool API calls. II) Reinforcement Learning with Execution Feedback ( §3.2). We continue to train our model obtained in stage I with reinforcement learning in order to enhance the accuracy of tool utilization. Specifically, we leverage RRHF as our backbone framework and replace its reward score with the effect of tool execution. The entire training procedure is outlined in Algorithm 1.
Training Stage I: Behavior Cloning
During the behavior cloning stage, we aim to enable the LM to master the schema of tool API calls and develop preliminary skills in selectively utilizing tools. As existing datasets for tool learning are limited and the accuracy of tool usage in this stage may not be high, we fine-tune the model on D tool , which contains pseudo-labels for tool API calls generated by ChatGPT.
Specifically, for the model p LM with tunable parameters θ, the training loss of stage I can be formulated as:
$$\mathcal{L}_{clone}(\theta) \propto \sum_{(s,q,t,a)\in D_{tool}} -\log p_{LM}(o \mid s, q; \theta), \qquad (3)$$
where o is the specified output of the model as defined in Eq.2. The final parameterized model of this stage is denoted as θ clone .
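In token-level form, Eq. 3 is the standard causal-LM loss restricted to the output span; a NumPy sketch, assuming the model's probabilities for the gold output tokens (with prompt tokens masked out) are already available:

```python
import numpy as np

def clone_loss(token_probs_per_example):
    """Negative log-likelihood of the target output o given (s, q), summed
    over the dataset (Eq. 3). Each entry is an array of the model's
    probabilities for the gold output tokens, with prompt tokens excluded
    from the loss (standard instruction-tuning masking)."""
    loss = 0.0
    for probs in token_probs_per_example:
        loss += -np.sum(np.log(np.clip(probs, 1e-12, 1.0)))
    return loss

# Two fake examples: one direct answer, one tool call.
print(clone_loss([np.array([0.9, 0.8, 0.95]), np.array([0.7, 0.6, 0.5, 0.9])]))
```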
Training Stage II: RLEF
In stage II, we continue to optimize θ clone with execution feedback, so as to enhance its capability to selectively utilize tools and improve the accuracy of decision-making regarding tool types and corresponding inputs.
Policy Loss. For each question q, we have k different responses y i , 1 ≤ i ≤ k generated by LLMs like ChatGPT, LLaMA, or even provided by human experts. We apply a reward function to score each y i with R(a, y i ) = r i where a is the gold answer of question q. Aiming to align with scores {r i } k , we then score each y i with our model:
p_i = ( Σ_m log p_LM(y_{i,m} | q, y_{i,<m}; θ_clone) ) / ||y_i||,   (4)
where p i is the conditional log probability of y i and ||y i || is the length-normalized factor. To facilitate the LM in learning the correct score ordering of different y i , we introduce a ranking loss during training:
L_rank = Σ_{r_i < r_j} max(0, p_i − p_j).   (5)
Meanwhile, in order to prevent the model from deviating too far from the original parameters and generating unreasonable tool API call structures, we reintroduce the supervised fine-tuning loss:
L_sft = − Σ_m log p_LM(o_m | q, o_{<m}; θ_clone).   (6)
Finally, the overall policy loss is defined as follows:
L_policy = α · L_rank + L_sft,   (7)
where α is a hyperparameter that determines the proportion of the rank loss.
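A toy, runnable sketch of Eqs. 4-7. Here scores[i] plays the role of the reward r_i of response y_i and logprobs[i] is a list of per-token log-probabilities of y_i under θ_clone; both inputs are fabricated for illustration.

```python
def length_normalized(logprobs_i):
    return sum(logprobs_i) / len(logprobs_i)                 # Eq. 4

def rank_loss(p, r):
    # Eq. 5: hinge penalty whenever a lower-reward response outscores a higher-reward one.
    return sum(max(0.0, p[i] - p[j])
               for i in range(len(p)) for j in range(len(p)) if r[i] < r[j])

def sft_loss(target_logprobs):
    return -sum(target_logprobs)                             # Eq. 6, on the pseudo-expert output o

def policy_loss(logprobs, scores, target_logprobs, alpha=1.0):
    p = [length_normalized(lp) for lp in logprobs]
    return alpha * rank_loss(p, scores) + sft_loss(target_logprobs)   # Eq. 7

logprobs = [[-0.2, -0.3], [-1.0, -0.9], [-0.5, -0.4]]
scores = [3.0, 1.0, 2.0]
print(policy_loss(logprobs, scores, target_logprobs=[-0.1, -0.2]))
```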
Reward Function. The purpose of the reward function is to give each y i a score r i and rank them accordingly. The main criterion for scoring is derived from the feedback obtained by executing y i . Concretely, we set S as the maximum reward score. We execute y i to obtain the predicted answer a * i : if y i contains a tool API call, we invoke the tool and use the returned result as the predicted answer a * i ; if y i is an answer generated directly by the LLM (meaning y i does not need a tool for help), then the generated answer is taken as the final prediction a * i . Since the answer to a mathematical reasoning task is exactly a number, we compare a * i and the gold answer a by evaluating their proximity:
e_i = |a − a*_i|.   (8)
Afterward, we equally divide S into k values:
{r_i}^k = { S/k, 2S/k, . . . , (k−1)S/k, S }.   (9)
We view the output o regulated in D tool as the pseudo-human-expert response and assign it with the maximum score S. The remaining responses are scored based on e i , where a smaller e i corresponds to a higher r i , and in the case of equal e i , random ordering is applied to ensure fairness. Obviously, the process of obtaining responses and reward scores can be decoupled from the training process which greatly simplifies the training process. Therefore, in this stage, we only need to train a single model.
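A toy sketch of the reward assignment just described: responses are executed (or read off directly) to obtain a predicted number, ranked by |a − a*_i|, and mapped onto the grid {S/k, 2S/k, ..., S} of Eq. 9, with the pseudo-expert output pinned to the maximum score S. The executor is a hypothetical stand-in for the Calculator tool.

```python
import random

def execute(response):
    # Hypothetical executor: a calculator call is evaluated, a plain answer is parsed.
    if response.startswith("calculator(") and response.endswith(")"):
        return float(eval(response[len("calculator("):-1]))   # toy calculator
    return float(response)

def reward_scores(responses, expert_output, gold, S=4.0):
    k = len(responses) + 1
    errors = [(abs(gold - execute(y)), random.random(), y) for y in responses]
    ranked = sorted(errors, reverse=True)                     # largest error gets the smallest score
    grid = [S * (i + 1) / k for i in range(k)]                # Eq. 9
    scores = {y: grid[i] for i, (_, _, y) in enumerate(ranked)}
    scores[expert_output] = S                                 # pseudo-expert gets the maximum
    return scores

print(reward_scores(["calculator(8+7)", "14", "calculator(8-7)"],
                    expert_output="calculator(8+7)", gold=15.0))
```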
Experiments
Experimental Settings
Models. We apply Alpaca-7B (Taori et al., 2023) (instruction fine-tuned LLaMA) trained in the LoRA (Hu et al., 2022) manner as the backbone LLM and continue to train the LoRA during both training stages I and II. In the response generation part, we sample responses from four different models, namely ChatGPT, InstructGPT, Alpaca-LoRA-7B, and LLaMA-LoRA-7B, plus the output o specified in D tool as the pseudo-human-expert response. For ChatGPT and InstructGPT, we prompt them with instructions and few-shot examples, and for Alpaca-LoRA-7B and LLaMA-LoRA-7B, we fine-tune them on D tool for a few epochs in order to equip them with initial abilities for answer and tool generation.
Datasets. We mainly train and test our model on two mathematical reasoning datasets, ASDiv (Miao et al., 2020) and SVAMP (Patel et al., 2021). ASDiv is a diverse English math word problem corpus covering various arithmetic operations. To facilitate the use of the calculator, we select its basic arithmetic operations part, which contains 1,218 instances. We randomly split them into 952 instances for the training set and 266 instances for the test set. SVAMP is another, more challenging math word problem dataset containing 1,000 instances. We randomly select 856 as the training set and 144 as the test set. We combine their training sets as the original dataset D init and test our model on their test sets.
Baselines. Throughout the remainder of this section, we mainly compare the following models:
• Toolformer (Schick et al., 2023), an approach that also teaches the LLM to utilize tools on its own via fine-tuning, but where the model is trained to use tools blindly.
• Alpaca-LoRA-7B, Alpaca-7B (Taori et al., 2023) trained in the LoRA (Hu et al., 2022) manner, which is our backbone model without any further training.
• TRICE-I, model trained only on the behavior cloning stage to show the imitation ability of our model on tool learning.
• TRICE-II, model trained only on the RLEF stage to reflect the results of training with execution feedback alone.
• TRICE-All, model trained on both two stages.
Table 1: Performance comparison of different baselines. We show the accuracy score on ASDiv (Miao et al., 2020) and SVAMP (Patel et al., 2021).
Setups. For the mathematical reasoning task, we choose the Calculator tool to probe the effectiveness of our method. During both stages I and II, we train LoRA with lora_r = 8, lora_alpha = 16, and lora_target_modules = Q, V. In stage I, we use a batch size of 32 per GPU; in stage II, a batch size of 4 per GPU. In both stages, we apply gradient accumulation over 8 steps and a maximum model length of 512 tokens. We first warm up the learning rate to 2e-5 and then decay it to 0 linearly. Since sampling responses and training are separated, our whole training procedure only needs to load one model, largely reducing the training costs. We train on three 24GB Nvidia 3090 GPUs, which typically takes 0.5-1 hours for stage I and 1-2 hours for stage II.
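A sketch of the LoRA setup reported above (r = 8, alpha = 16, target modules Q and V), using the Hugging Face peft library. The module names "q_proj"/"v_proj", the dropout value, and the base checkpoint id are assumptions for a LLaMA-style backbone, not values taken from the paper.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")  # hypothetical checkpoint id
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # "lora_target_modules = Q, V"
    lora_dropout=0.05,                    # assumed value, not stated in the paper
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()
```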
Main Results

Table 1 presents the performance of our method compared to various baselines on two mathematical reasoning datasets. We observe that, with stage I training on D tool alone (TRICE-I), our method significantly outperforms Toolformer by an average of 15.4% accuracy, not to mention the untrained Alpaca-LoRA-7B, which performs even worse. Building upon this, after further training in stage II (TRICE-All), our method achieves an additional 3.0% improvement. This indicates that our approach equips the model with a solid ability to learn and utilize tools. However, the results obtained solely from training in stage II (TRICE-II) are not satisfactory, indicating that the initial tool generation ability endowed to the model in stage I is essential for more stable training with reinforcement learning. Another intriguing observation is that our method exhibits a more significant enhancement on the more challenging SVAMP dataset (4.5%) compared to ASDiv (1.4%) from TRICE-I to TRICE-All. This finding suggests that tool learning with execution feedback may have the potential to generalize the problem-solving capabilities of the model to complex tasks.
Figure 3: Comparison of tool usage rate among different training stages on the test dataset. In the w/o Training stage, we consider a need for using tools when the model reaches a wrong answer.
Analysis
Selective Tool Usage. Figure 3 illustrates the proportion of tool usage by our model at each training stage. It is noticed that after the training stage I, the reliance of our model on tools has significantly deepened to an average of 98.5% compared to the original 76.4% without training. This indicates that due to the imbalanced data distribution regarding the presence or absence of tools in the training set, supervised fine-tuning tends to make the model overly dependent on tools. However, after undergoing training in stage II, our model not only shows improvement in performance but also reduces its dependency on tools to 92.0%. This demonstrates that our model can learn selective tool usage through execution feedback.
Case Study. To further analyze the role of each training stage, we provide several cases of responses from different stages in Table 2. As shown in Case 1&2, stage I equips the model with a certain level of tool generation capability, but it may not excel in making optimal decisions about the input of tools. Stage II alleviates this limitation and enhances the effectiveness of our model in learning to utilize tools. Additionally, Case 3 confirms once again that our proposed method, TRICE, enables the model to use tools selectively. Furthermore, as illustrated in Case 4, our model still exhibits certain flaws leading to errors in tool usage. We speculate that this could be attributed to two factors: 1) our backbone model has a scale of 7B, which may limit its performance in mathematical reasoning tasks; 2) training incorporating reinforcement learning introduces certain instabilities that could contribute to the occurrence of errors in tool usage.
Discussion
In tool learning, LLMs manipulate tools and respond to users conditioned on a variety of knowledge sources. The most crucial aspect is for the model to have its own understanding of when it needs to use tools and when it doesn't need to use them. One particularly challenging issue in this context is the problem of knowledge conflicts (Qin et al., 2023) which may derive from the conflicts between model knowledge and augmented knowledge from tools, and among augmented knowledge from different tools. This may lead to a lack of explainability in model prediction and planning. LMs need to have the ability to differentiate knowledge from various sources and discern which ones are valuable, which ones are irrelevant, and even which ones may be harmful. This ability becomes even more critical in highly specialized fields such as biomedical research and legal assistance, where the accuracy and reliability of knowledge are of utmost importance. Our approach leverages the feedback loop of trial and error to learn when to use tools and when not to. The model learns to recognize situations where relying solely on its intrinsic knowledge may not be sufficient and utilizing tools is more reliable. Similarly, it learns to identify scenarios where its own learned knowledge is capable of solving the problem without the need for extensive tool usage. This learning process allows the model to adapt and make informed decisions about when to rely on its own capabilities and when to utilize tools effectively.
However, our current method is unable to learn the usage of multiple tools or tool compositions. In the future, more sophisticated trial-and-error processes and feedback mechanisms will be necessary to teach the LMs to better utilize tools and even create new tools.
Conclusion
In this paper, we focus on addressing the challenge of selective utilization of external tools by LLMs and propose a two-stage end-to-end training framework dubbed TRICE to make language models better tool learners with execution feedback. We also create a dataset to assist the model in learning when and how to use tools properly. Through comprehensive experiments on two mathematical reasoning datasets, we have shown that our method can achieve better performance compared to fine-tuning-based tool learning approaches. Extensive analysis illustrates that our TRICE can selectively use tools by reducing the dependency of the model on tools while improving the accuracy of tool usage.
Limitations
Given our limited computational resources, we only conduct experiments with a single tool. However, in the future, we plan to incorporate additional tasks and tools, and further investigate the use of compositional tools to enhance our approach. To cater to various tasks, we intend to explore more complex reward mechanisms that integrate execution feedback from a range of tasks across various scenarios.
Figure 1: Language model learns to use tools from execution feedback.

Figure 2: The overview of our proposed framework TRICE. In stage I (Behavior Cloning), we conduct supervised fine-tuning on the dataset to let the model imitate the tool-using behavior. In stage II (RLEF), we further reinforce the model based on RRHF with tool execution feedback, guiding the model to selectively use tools. Example input shown in the figure: "Given a math problem, solve it and you can use a calculator for help. Input: Jerry had 7 action figures and 2 books on a shelf in his room. Later he added 4 more books to the shelf. How many more action figures than books were on his shelf?"
Table 2: Case analysis of the responses after training in stage I and in stages I & II.

Id 1. Question: While playing at the arcade, Kaleb won 8 tickets playing 'whack a mole' and 7 tickets playing 'skee ball'. If he was trying to buy candy that cost 5 tickets a piece, how many could he buy? Gold answer: 3. After stage I: calculator(15/5). After stage I & II: calculator((8+7)/5).

Id 2. Question: Winter is almost here and most animals are migrating to warmer countries. There were 89 bird families living near the mountain. If 60 bird families flew away for winter. How many more bird families flew away for the winter than those that stayed behind? Gold answer: 31. After stage I: calculator(89-60). After stage I & II: calculator(60-29).

Id 3. Question: Rachel had to complete 8 pages of math homework. If she had to complete 6 more pages of reading homework than math homework How many pages of reading homework did she have to complete? Gold answer: 14. After stage I: calculator(8-6). After stage I & II: She had to complete 14 pages of reading homework.

Id 4. Question: Jerry had 7 action figures and 2 books on a shelf in his room. Later he added 4 more books to the shelf. How many more action figures than books were on his shelf? Gold answer: 1. After stage I: 3. After stage I & II: calculator(7-2).
References

Kelsey R. Allen, Kevin A. Smith, and Joshua B. Tenenbaum. 2019. The tools challenge: Rapid trial-and-error learning in physical problem solving. CoRR, abs/1907.09620.
Michael Bain and Claude Sammut. 1995. A framework for behavioural cloning. In Machine Intelligence 15, Intelligent Agents, pages 103-129. Oxford University Press.
Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In NeurIPS 2020.
Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. PaLM: Scaling language modeling with pathways. CoRR, abs/2204.02311.
Paul F. Christiano, Jan Leike, Tom B. Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. In NeurIPS 2017, pages 4299-4307.
Felipe Codevilla, Eder Santana, Antonio M. López, and Adrien Gaidon. 2019. Exploring the limitations of behavior cloning for autonomous driving. In ICCV 2019, pages 9328-9337. IEEE.
Danny Driess, Fei Xia, Mehdi S. M. Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. 2023. PaLM-E: An embodied multimodal language model. CoRR, abs/2303.03378.
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022. PAL: Program-aided language models. CoRR, abs/2211.10435.
Yingqiang Ge, Wenyue Hua, Jianchao Ji, Juntao Tan, Shuyuan Xu, and Yongfeng Zhang. 2023. OpenAGI: When LLM meets domain experts. CoRR, abs/2304.04370.
Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2022. LoRA: Low-rank adaptation of large language models. In ICLR 2022.
Jie Huang and Kevin Chen-Chuan Chang. 2022. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403.
Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Andrea Madotto, and Pascale Fung. 2022. Survey of hallucination in natural language generation. CoRR, abs/2202.03629.
Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-augmented dialogue generation. In ACL 2022, pages 8460-8478.
Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyürek, Anima Anandkumar, Jacob Andreas, Igor Mordatch, Antonio Torralba, and Yuke Zhu. 2022. Pre-trained language models for interactive decision-making. In NeurIPS.
Jiacheng Liu, Skyler Hallinan, Ximing Lu, Pengfei He, Sean Welleck, Hannaneh Hajishirzi, and Yejin Choi. 2022. Rainier: Reinforced knowledge introspector for commonsense question answering. In EMNLP 2022, pages 8938-8958.
Ruibo Liu, Jason Wei, Shixiang Shane Gu, Te-Yen Wu, Soroush Vosoughi, Claire Cui, Denny Zhou, and Andrew M. Dai. 2023. Mind's Eye: Grounded language model reasoning through simulation. In ICLR 2023.
Pan Lu, Baolin Peng, Hao Cheng, Michel Galley, Kai-Wei Chang, Ying Nian Wu, Song-Chun Zhu, and Jianfeng Gao. 2023. Chameleon: Plug-and-play compositional reasoning with large language models. CoRR, abs/2304.09842.
Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ramakanth Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented language models: a survey. CoRR, abs/2302.07842.
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. 2020. A diverse corpus for evaluating and developing English math word problem solvers. In ACL 2020, pages 975-984.
OpenAI. 2022. ChatGPT: Optimizing language models for dialogue. https://openai.com/blog/chatgpt/.
Guy A. Orban and Fausto Caruana. 2014. The neural basis of human tool use. Frontiers in Psychology, 5.
François Osiurak and Emanuelle Reynaud. 2020. The elephant in the room: What matters cognitively in cumulative technological culture. Behavioral and Brain Sciences, 43:e156.
Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022. Training language models to follow instructions with human feedback. In NeurIPS.
Bhargavi Paranjape, Scott M. Lundberg, Sameer Singh, Hannaneh Hajishirzi, Luke Zettlemoyer, and Marco Túlio Ribeiro. 2023. ART: Automatic multi-step reasoning and tool-use for large language models. CoRR, abs/2303.09014.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. 2021. Are NLP models really able to solve simple math word problems? In NAACL-HLT 2021, pages 2080-2094.
Shuofei Qiao, Yixin Ou, Ningyu Zhang, Xiang Chen, Yunzhi Yao, Shumin Deng, Chuanqi Tan, Fei Huang, and Huajun Chen. 2022. Reasoning with language model prompting: A survey. arXiv preprint arXiv:2212.09597.
Yujia Qin, Shengding Hu, Yankai Lin, Weize Chen, Ning Ding, Ganqu Cui, Zheni Zeng, Yufei Huang, Chaojun Xiao, Chi Han, et al. 2023. Tool learning with foundation models. CoRR, abs/2304.08354.
Timo Schick, Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. CoRR, abs/2302.04761.
Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, Timothy P. Lillicrap, and David Silver. 2020. Mastering Atari, Go, chess and shogi by planning with a learned model. Nature, 588(7839):604-609.
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proximal policy optimization algorithms. CoRR, abs/1707.06347.
Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in HuggingFace. CoRR, abs/2303.17580.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. 2023. Stanford Alpaca: An instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca.
Faraz Torabi, Garrett Warnell, and Peter Stone. 2018. Behavioral cloning from observation. In IJCAI 2018, pages 4950-4957.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models. CoRR, abs/2302.13971.
Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. 2022. WebShop: Towards scalable real-world web interaction with grounded language agents. In NeurIPS.
Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik R. Narasimhan, and Yuan Cao. 2023. ReAct: Synergizing reasoning and acting in language models. In ICLR 2023.
Zheng Yuan, Hongyi Yuan, Chuanqi Tan, Wei Wang, Songfang Huang, and Fei Huang. 2023. RRHF: Rank responses to align language models with human feedback without tears. CoRR, abs/2304.05302.
| [] |
[] | [
"Matteo Acclavio ",
"Matteo Acclavio "
] | [] | [] | ACographs are a class of (undirected) graphs, characterized by the absence of induced subgraphs isomorphic to the four-vertices path, showing an intuitive oneto-one correspondence with classical propositional formulas. In this paper we study sequent calculi operating on graphs, as a generalization of sequent calculi operating on formulas -therefore on cographs.We mostly focus on sequent systems with multiplicative rules (in the sense of linear logic, that is, linear and context-free rules) extending multiplicative linear logic with connectives allowing us to represent modular decomposition of graphs by formulas, therefore obtaining a representation of a graph with linear size with respect to the number of its vertices. We show that these proof systems satisfy basic proof theoretical properties such as initial coherence, cut-elimination and analyticity of proof search. We prove that the system conservatively extend multiplicative linear logic with and without mix, and that the system extending the former derives the same graphs which are derivable in the deep inference system GS from the literature.We provide a syntax for proof nets for our systems by extending the syntax of Retoré's RB-structures to represent graphical connectives. A topological characterization of those structures encoding correct proofs is given, as well as a sequentialization procedure to construct a derivation from a correct structure.We conclude the paper by discussing how to extend those linear systems with the structural rules of weakening and contraction, providing a sequent system for an extension of classical propositional logic beyond cographs. | 10.48550/arxiv.2305.12975 | [
"https://export.arxiv.org/pdf/2305.12975v1.pdf"
] | 258,832,495 | 2305.12975 | b064c758ac4737fbee1ba4dc6b88c7f3cb06e6a6 |
22 May 2023
Matteo Acclavio. 22 May 2023. Graphical Proof Theory I: Multiplicative Linear Logic Beyond Cographs
Cographs are a class of (undirected) graphs, characterized by the absence of induced subgraphs isomorphic to the four-vertex path, showing an intuitive one-to-one correspondence with classical propositional formulas. In this paper we study sequent calculi operating on graphs, as a generalization of sequent calculi operating on formulas, and therefore on cographs. We mostly focus on sequent systems with multiplicative rules (in the sense of linear logic, that is, linear and context-free rules) extending multiplicative linear logic with connectives allowing us to represent the modular decomposition of graphs by formulas, therefore obtaining a representation of a graph of linear size with respect to the number of its vertices. We show that these proof systems satisfy basic proof-theoretical properties such as initial coherence, cut-elimination and analyticity of proof search. We prove that these systems conservatively extend multiplicative linear logic with and without mix, and that the system extending the former derives the same graphs which are derivable in the deep inference system GS from the literature. We provide a syntax for proof nets for our systems by extending the syntax of Retoré's RB-structures to represent graphical connectives. A topological characterization of those structures encoding correct proofs is given, as well as a sequentialization procedure to construct a derivation from a correct structure. We conclude the paper by discussing how to extend those linear systems with the structural rules of weakening and contraction, providing a sequent system for an extension of classical propositional logic beyond cographs.
Introduction
In theoretical computer science, formulas are used to describe complex structures using elementary operators such as logical connectives and modalities. In particular, the proof theory of propositional logic typically considers formulas built from a very limited palette of binary (connectives) and unary (modalities) operators. Although the restriction to these basic operators does not generally limit the expressiveness of the language, as soon as proof theory is used to define paradigms such as "formulas-as-types", "formulas-as-programs", or "formulas-as-processes", this limitation comes at a cost in terms of efficiency whenever we aim at providing efficient implementations: in order to describe complex interactions, ad-hoc encodings need to be put in place. As a consequence, automated tools relying on formula-based proof systems are either sub-optimal, because of the blow-up in computational complexity due to the use of encodings, or sacrifice the quality of information, by reducing their scope to only considering simpler configurations. This latter possibility may lead to information loss, potentially causing, among others, security issues or imprecise results in AI for decision systems. For this reason, graphs are often used in computer science practice, from abstract definitions to practical implementations, to describe systems with complex interactions: it is often the case that "a picture is worth a thousand words" 1 . By means of example, consider a system consisting of four processes a, b, c and d racing to access shared resources, and assume that the pairs of processes a and b, b and c, and c and d share the access to a same resource. This configuration can be represented by the graph below on the left (called P 4 ), where vertices represent processes and an edge is drawn whenever two processes share the access to a same resource.
Similarly, we could consider a dependency relation (e.g., causality) in a system where a depends on b, and c depends on both b and d. In this case, again, the binary relation of "non-causal dependency" can be represented by a graph with a similar shape (see the graph above on the right). It is well-known that the graph P 4 cannot be expressed by a formula containing only binary connectives with a one-to-one correspondence between atoms and vertices of the graph [34,55]. Despite the simplicity of the pattern P 4 , it occurs in a graph representing a relation as soon as we consider non-series-parallel ones, which are ubiquitous in distributed systems (see, e.g., non-transitive conflict of interest relations in access control models such as [18], in dependency graphs, or in producer-consumer queues). It is worth noticing that the use of graph-based syntaxes is not new in logic and proof theory for this exact reason: a same object may admit multiple representations, but graphs allow us to provide more canonical ones. By means of example, graphs are largely used in defining semantics (see, e.g., Kripke semantics for modal logics [17]), in proof systems capturing semantical structures (see, e.g., nested sequents [58,20,70]), and in proof systems capturing proof equivalence (e.g., proof nets [42] or combinatorial proofs [52,52]).
However, proof theory has rarely considered graphs as primitive terms to reason on: prior to [7,8,5] we cannot find proof systems conceived to handle graphs as terms of an inference system defined with proof-theoretical purposes. 2 In these works, the authors move from the well-known correspondence between classical propositional formulas and cographs (graphs containing no induced subgraph isomorphic to a P 4 ) [55] to generalize proof-theoretical methodologies for inference systems on formulas to graphs. In fact, we could say that inference systems operating on formulas can be seen as inference systems operating on cographs, that is, on graphs with "less complex" structure where no induced subgraph isomorphic to P 4 occurs 3 . In these works, the authors consider only the deep inference formalism [48,12] to design proof systems operating on graphs. Such an unconventional choice with respect to, e.g., sequent calculi or natural deduction, pays off in [5], where a proof system operating on graphs with both symmetric and non-symmetric edges defines a conservative extension of the non-commutative logic BV 4 , for which a cut-free sequent calculus cannot exist [76].
Methodology
This paper aims at extending proof-theoretical methodologies from formulas to graphs. For this purpose, I define the notion of graphical connectives to build formulas representing graphs via their modular decompositions, that is, abstract syntax trees uniquely describing graphs with a term of linear size with respect to the number of vertices of the graph. This provides a foundation for the methodologies used in [7,8,24] to design proof systems operating on graphs by handling their modular decomposition trees. Note that in this paper I only discuss graphical connectives defined on undirected graphs, generalizing the well-known correspondence between classical propositional formulas and cographs. However, the proposed methodology scales to more general graphs such as the mixed graphs used in [5].
Using graphical connectives I then define multiplicative proof systems (in the sense of [29,44]) operating on formulas which can naturally be interpreted as graphs (see Figure 1), proving basic proof-theoretical properties for these systems such as cut-elimination, initial coherence and a weaker notion of the analyticity condition taking into account the richer structure of non-binary connectives. I prove that the logic MPL is a conservative extension of multiplicative linear logic and that the logic MPL • is a conservative extension of multiplicative linear logic with mix. Moreover, I prove that MPL • is sound with respect to the semantics interpreting formulas as graphs, that is, if two formulas are interpreted as isomorphic graphs, then we can prove that they are logically equivalent.
I prove that one of these sequent systems internalizes the notion of graph isomorphism as a logical equivalence between formulas, that is, two formulas encoding isomorphic graphs are logically equivalent, and that such a system is sound and complete with respect to the set of graphs in the graphical logic GS from my previous works [7,8]. This latter result indirectly provides a proof of analyticity and transitivity of implication for the logic GS using more standard techniques 5 .
Then in the second part of the paper I provide a formalism for proof nets for these substructural graphical logics, extending Retoré's syntax of RB-proof nets [73]. In particular, this syntax is defined by generalizing the gates for the binary connectives ` and ⊗ in order to represent the graphical connectives I introduce. Then I provide a correctness criterion for these proof nets with respect to the logics MPL and MPL • , together with a sequentialization procedure. This criterion is obtained by refining Retoré's criterion, weakening the condition of acyclicity (therefore ruling out only specific cycles 6 ) and including an order relation on gates, derivable from the topological structure of the RB-structure, providing constraints on the order in which connectives can be sequentialized, as in C-nets [33].

Figure 2: A graph, its modular decomposition and a derivation in MPL • of the formula encoding it.
Figure 3: The RB-proof net encoding the derivation in Figure 2.
Outline
In Section 2 we recall definitions and results in graph theory and the notion of modular decomposition. We then use these notions to extend the correspondence between classical propositional formulas and cographs to formulas containing graphical connectives and to any graph. In Section 3 we define sequent calculi operating on formulas containing graphical connectives, proving their proof-theoretical properties, as well as the fact that they are conservative extensions of multiplicative linear logic with and without mix. In Section 4 we recall the graphical proof system on undirected graphs from [7,8] and we prove that it recognizes the same set of graphs recognized by the extension of multiplicative linear logic with mix with graphical connectives.
In the second part of the paper we provide a way to represent proofs via RB-proof nets [71,73]. In Section 5 we recall the original definitions and we explain how to generalize RB-proof nets for multiplicative linear logic to represent proofs in our logics, and we explain why the criterion for multiplicative linear logic with mix does not extend to our logics. We then extend this criterion for the graphical logic GS in Section 7, providing a sequentialization procedure for correct proof nets. To conclude, in Section 8 we show how our multiplicative graphical logics can be extended with structural rules, and we summarize in Section 9 some of the possible research directions opened by this work.
From Formulas To Graphs
In this section we recall standard results from the literature on graphs, and how the notion of graph modular decomposition allows us to extend the connection between cographs (i.e., the class of graphs in which no four vertices induce a subgraph isomorphic to the four-vertex path) and classical propositional formulas to general graphs and to a class of formulas built using certain new n-ary connectives, introduced in this paper, called graphical connectives.
Graphs and Modular Decomposition
In this work we are interested in using graphs to model patterns of interactions, describing such patterns by means of the binary relations (edges) between their components (vertices). For this reason we define at the same time the graphs and the corresponding notion of identity allowing us to consider patterns differing only in their syntactic description as the same graph.
Definition 1. An L-labeled graph (or simply graph) G = ⟨V_G, ℓ_G, ⌢_G⟩ is given by a finite set of vertices V_G, a partial labeling function ℓ_G : V_G → L associating a label ℓ(v) from a given set of labels L to each vertex v ∈ V_G (we denote by ∅ the empty function), and a non-reflexive symmetric edge relation ⌢_G ⊂ V_G × V_G whose elements, called edges, may be denoted vw instead of (v, w). A graph is empty (denoted G = ∅) if V_G = ∅.
A symmetry between two graphs G and G′ is a bijection f : V_G → V_{G′} such that x ⌢_G y iff f(x) ⌢_{G′} f(y) for any x, y ∈ V_G. An isomorphism is a symmetry f such that ℓ(v) = ℓ(f(v)) for any v ∈ V_G.
Two graphs G and G′ are symmetric (denoted G ∼ G′) if there is a symmetry between G and G′. They are isomorphic if there is an isomorphism between G and G′. From now on, we consider isomorphic graphs to be the same graph (denoted G = G′).
Two vertices v and w in G are connected if there is a sequence v = u_0, . . . , u_n = w of vertices in G (called a path) such that u_{i−1} ⌢_G u_i for all i ∈ {1, . . . , n}. A connected component of G is a maximal set of connected vertices in G. A graph G is a clique iff any two distinct vertices of G are adjacent, and a stable set iff ⌢_G = ∅.
Observation. The problem of graph isomorphism is a standard NP problem: verifying that a given bijection between the sets of vertices of two graphs is an isomorphism can be done in polynomial time, while no polynomial-time algorithm is known for finding a graph isomorphism. For this reason, whenever we say that two graphs are the same, either we assume they share the same set of vertices, therefore implicitly assuming the isomorphism f to be defined by the identity function over the set of vertices, or we assume an isomorphism to be given. This allows us to verify whether two graphs are the same in polynomial time.
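A small sketch, in the spirit of Definition 1 and the observation above, of how a given bijection between two labeled graphs can be checked to be a symmetry or an isomorphism in polynomial time. Graphs are represented here as triples (vertices, labels, edges); the encoding is an illustration, not notation from the paper.

```python
def is_symmetry(G, H, f):
    VG, _, EG = G
    VH, _, EH = H
    if sorted(f.keys()) != sorted(VG) or sorted(f.values()) != sorted(VH):
        return False                     # f must be a bijection V_G -> V_H
    # x adjacent to y in G iff f(x) adjacent to f(y) in H
    return all(((x, y) in EG or (y, x) in EG) == ((f[x], f[y]) in EH or (f[y], f[x]) in EH)
               for x in VG for y in VG if x != y)

def is_isomorphism(G, H, f):
    return is_symmetry(G, H, f) and all(G[1][v] == H[1][f[v]] for v in G[0])

# F and H are the graphs of Example 3: symmetric but not isomorphic.
F = (["u1", "u2", "u3", "u4"], {"u1": "a", "u2": "b", "u3": "c", "u4": "d"},
     {("u1", "u2"), ("u2", "u3"), ("u3", "u4")})
H = (["w1", "w2", "w3", "w4"], {"w1": "a", "w2": "b", "w3": "c", "w4": "d"},
     {("w1", "w2"), ("w1", "w3"), ("w3", "w4")})
f = {"u1": "w2", "u2": "w1", "u3": "w3", "u4": "w4"}
print(is_symmetry(F, H, f), is_isomorphism(F, H, f))   # True, False (labels differ)
```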
Notation 2. When drawing a graph or an unlabeled graph we draw an edge v—w whenever v ⌢ w, and we draw no edge at all whenever v and w are not adjacent. We may represent a vertex of a graph by using its label instead of its name. For example, the single-vertex graph G = ⟨{v}, ℓ_G, ∅⟩ may be represented either by the vertex name v or by the vertex label ℓ(v). Note that, because of our notion of identity of graphs, whenever there is no ambiguity due to two vertices with the same label, the representation of a graph provides us with the same information as its definition as the triple containing the set of vertices, the label function and the set of edges.
Example 3. Consider the following graphs:
F = {u 1 , u 2 , u 3 , u 4 } , {ℓ(u 1 ) = a, ℓ(u 2 ) = b, ℓ(u 3 ) = c, ℓ(u 4 ) = d} , {u 1 u 2 , u 2 u 3 , u 3 u 4 } G = {v 1 , v 2 , v 3 , v 4 } , {ℓ(v 1 ) = b, ℓ(v 2 ) = a, ℓ(v 3 ) = c, ℓ(v 4 ) = d} , {v 1 v 2 , v 1 v 3 , v 3 v 4 } H = {w 1 , w 2 , w 3 , w 4 } , {ℓ(w 1 ) = a, ℓ(w 2 ) = b, ℓ(w 3 ) = c, ℓ(w 4 ) = d} , {w 1 w 2 , w 1 w 3 , w 3 w 4 }
They are all symmetric, that is F ∼ G ∼ H, but F = G ≠ H, as can easily be verified using their representations:
F = a b c d = G and H = b a c d
In order to use proof-theoretical methodologies on graphs, we need a suitable notion of subgraph to be used in the same way subformulas are used in proof systems, that is, to state properties of the calculus or to define the behavior of rules. For this purpose, we use a notion of module to identify subgraphs, allowing us to decompose a graph using abstract syntax trees similar to the ones underlying formulas. Intuitively, a module is a subset of vertices of a graph having the same edge-relation with any vertex outside the subset. This generalizes what we observe in formulas, where any propositional atom of a subformula has the same relation (the one given by the least common ancestor node in the formula tree) with any given propositional atom not in the subformula.
Definition 4. Let G = V G , ℓ G , E G be a graph and W ⊆ V G . The graph induced by W is the graph G| W ≔ W, ℓ G | W , G ⌢ ∩ (W × W) where ℓ G | W (v) ≔ ℓ G (v) for all v ∈ W. A module of a graph G is a subset M of V G such that x⌢z iff y⌢z for any x, y ∈ M, z ∈ V G \ M. A module M is trivial if M = ∅, M = V G , or M = {x} for some x ∈ V G .
From now on, we identify a module M of a graph G with the induced subgraph G| M .
Remark 5. A connected component of a graph G is a module of G.
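A sketch of Definition 4: M ⊆ V_G is a module iff every vertex outside M sees all of M in the same way. The (vertices, labels, edges) encoding with edges stored as frozensets is our own illustrative choice.

```python
def is_module(G, M):
    V, _, E = G
    M = set(M)
    adjacent = lambda x, y: frozenset((x, y)) in E
    for z in set(V) - M:
        views = {adjacent(x, z) for x in M}
        if len(views) > 1:          # z distinguishes two vertices of M: not a module
            return False
    return True

G = (["a", "b", "c", "d", "e"], {},
     {frozenset(p) for p in [("a", "c"), ("b", "c"), ("c", "d"), ("c", "e"), ("d", "e")]})
print(is_module(G, {"d", "e"}), is_module(G, {"c", "d"}))   # True False
```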
Using modules we can optimize the way we represent graphs, reducing the number of edges drawn without losing information, relying on the fact that all vertices of a module have the same edge-relation with any vertex outside the module. Notation 6. In order to improve readability, we may border vertices of a same module by a closed line and draw edges connecting those closed lines to denote the existence of an edge between each vertex inside them. By means of example, consider the following graph and its more compact modular representation.
a c e b d = a b c d e   (2)
The notion of module is related to a notion of context, which can be intuitively formulated as a graph with a special vertex playing the role of a hole in which we can plug in a module. Definition 7. A context C[□] is a (non-empty) graph containing a single occurrence of a special vertex □. It is trivial if C[□] = □. If C[□] is a context and G a graph, we define C[G] as the graph obtained by replacing □ by G. Formally,
C[G] ≔ ⟨ (V_{C[□]} \ {□}) ⊎ V_G , { vw | v, w ∈ V_{C[□]} \ {□}, v ⌢_{C[□]} w } ∪ { vw | v ∈ V_{C[□]} \ {□}, w ∈ V_G, v ⌢_{C[□]} □ } ⟩
Remark 8. A set of vertices M is a module of a graph G iff there is a context C[□] such that G = C[M].
This idea of plugging a graph inside another graph can be generalized, providing the definition of a composition-via a graph, allowing to compose multiple graphs in a "modular way" using a graph itself as an operation.
Definition 9. Let G be a graph with V G = {v 1 , . . . , v n } and let H 1 , . . . , H n be n graphs. We define the composition of H 1 , . . . , H n via G as the graph G H 1 , . . . , H n obtained by replacing each vertex v i of G with the graph H i for all i ∈ {1, . . . , n}. Formally,
G⟨H_1, . . . , H_n⟩ = ⟨ ⋃_{i=1}^{n} V_{H_i} , ⋃_{i=1}^{n} ⌢_{H_i} ∪ { (x, y) | x ∈ V_{H_i}, y ∈ V_{H_j}, v_i ⌢_G v_j } ⟩   (3)
The subgraphs H 1 , . . . , H n are called factors of G H 1 , . . . , H n and are (possibly not maximal) modules of G H 1 , . . . , H n .
Observation. By definition, H 1 , . . . , H n are (possibly not maximal) modules of G H 1 , . . . , H n .
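A sketch of Definition 9 (Eq. 3): composing graphs H_1, ..., H_n via a graph G with vertices v_1, ..., v_n. Graphs are (vertices, labels, edges) with undirected edges stored as frozensets, and the factors are assumed vertex-disjoint; both conventions are ours, chosen only for illustration.

```python
def compose_via(G, factors):
    VG, _, EG = G
    V, L, E = [], {}, set()
    for (Vi, Li, Ei) in factors:
        V += Vi
        L.update(Li)
        E |= Ei
    # add an edge between every x in H_i and y in H_j whenever v_i is adjacent to v_j in G
    for i, vi in enumerate(VG):
        for j, vj in enumerate(VG):
            if i < j and frozenset((vi, vj)) in EG:
                E |= {frozenset((x, y)) for x in factors[i][0] for y in factors[j][0]}
    return (V, L, E)

S2 = (["1", "2"], {}, set())                          # two-vertex stable set
K2 = (["1", "2"], {}, {frozenset(("1", "2"))})        # two-vertex clique
a = (["a"], {"a": "a"}, set())
b = (["b"], {"b": "b"}, set())
c = (["c"], {"c": "c"}, set())
print(compose_via(K2, [compose_via(S2, [a, b]), c]))  # the cograph of (a ∨ b) ∧ c
```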
Remark 10. The information about the labels of the graph G used to define the composition-via operation is lost. In particular, if G is a graph with V_G = {v_1, . . . , v_n} and σ is a permutation over the set {1, . . . , n} such that the map f_σ : V_G → V_G mapping v_i to f_σ(v_i) = v_{σ(i)} for all i ∈ {1, . . . , n} is an isomorphism from G to G, then G⟨H_1, . . . , H_n⟩ = G⟨H_{σ(1)}, . . . , H_{σ(n)}⟩.
In order to establish a connection between graphs and formulas, from now on we only consider graphs whose set of labels belongs to the set L = {a, a⊥ | a ∈ A}, where A is a fixed set of propositional variables. We then define the dual of a graph.
Definition 11. Let G = ⟨V_G, ℓ_G, ⌢_G⟩ be a graph. We define the edge relation ⌢_{G⊥} ≔ {(v, w) | v ≠ w and vw ∉ ⌢_G} and we define the dual graph of G as the graph G⊥ ≔ ⟨V_G, ℓ_{G⊥}, ⌢_{G⊥}⟩ with ℓ_{G⊥}(v) = (ℓ_G(v))⊥.
Remark 12. By definition, each module of a graph corresponds to a module of its dual graph. It follows that a connected component of G ⊥ is a module of G.
Notation 13. If G is the representation of a graph G, then we may represent the graph G ⊥ by bordering the representation of G with a closed line and with the symbol for negation on the upper-right corner, that is, G ⊥ .
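A sketch of Definition 11: the dual of a labeled graph is its edge-complement with dualized labels (a ↔ a⊥). The (vertices, labels, edges) encoding and the string-based label dualization are our own assumptions for illustration.

```python
def dual_label(l):
    return l[:-1] if l.endswith("⊥") else l + "⊥"

def dual(G):
    V, L, E = G
    E_dual = {frozenset((v, w)) for i, v in enumerate(V) for w in V[i + 1:]
              if frozenset((v, w)) not in E}
    return (list(V), {v: dual_label(l) for v, l in L.items()}, E_dual)

G = (["x", "y", "z"], {"x": "a", "y": "b", "z": "a⊥"},
     {frozenset(("x", "y"))})
print(dual(G))   # edges x-z and y-z; labels a⊥, b⊥, a
```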
Classical Propositional Formulas and Cographs
The set of classical (propositional) formulas is generated from a set of propositional variable A using the negation (·) ⊥ , the disjunction ∨ and the conjunction ∧ using the following grammar:
φ, ψ ≔ a | φ ∨ ψ | φ ∧ ψ | φ ⊥ with a ∈ A.(4)
We consider the following equivalence laws over classical formulas:
φ ∨ ψ ≡ ψ ∨ φ φ ∨ (ψ ∨ χ) ≡ (φ ∨ ψ) ∨ χ φ ∧ ψ ≡ ψ ∧ φ φ ∧ (ψ ∧ χ) ≡ (φ ∧ ψ) ∧ χ(5)
and with the following De-Morgan laws:
(φ ⊥ ) ⊥ ≡ φ (φ ∧ ψ) ⊥ ≡ φ ⊥ ∨ ψ ⊥(6)
We denote by ≡ the equivalence relation generated by the equivalence and De Morgan laws. We define a map ⟦·⟧ from literals to single-vertex graphs, which extends to formulas via the composition-via a two-vertex stable set S_2 (for formulas which are disjunctions) and a two-vertex clique K_2 (for formulas which are conjunctions).
Definition 14.
Let φ be a classical formula; then [[φ]] is the graph inductively defined as follows:
[[a]] = a   [[φ^⊥]] = [[φ]]^⊥   [[φ ∨ ψ]] = S_2⟨[[φ]], [[ψ]]⟩   [[φ ∧ ψ]] = K_2⟨[[φ]], [[ψ]]⟩
where S_2 is a given stable set with 2 vertices, K_2 is a given clique with 2 vertices, and where we denote by a the single-vertex graph whose vertex is labeled by a.
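The translation of Definition 14 can be sketched as follows (illustrative encoding, not from the paper): formulas are nested tuples, conjunction composes via K_2 (a join), disjunction via S_2 (a disjoint union), and negation is pushed to the labels by taking the dual graph.

```python
# Minimal sketch (illustrative encoding): the map [[.]] of Definition 14.
# A formula is a nested tuple ("or", f, g), ("and", f, g), ("not", f), or an
# atom string.  Graphs are returned as (labels, edges) with fresh integer vertices.

from itertools import count, combinations
fresh = count()

def graph_of(formula):
    if isinstance(formula, str):                       # an atom a
        return {next(fresh): formula}, set()
    op, *args = formula
    if op == "not":                                    # dualize the subgraph
        labels, edges = graph_of(args[0])
        dual_labels = {v: l[:-1] if l.endswith("^") else l + "^" for v, l in labels.items()}
        dual_edges = {frozenset(p) for p in combinations(labels, 2)} - edges
        return dual_labels, dual_edges
    (l1, e1), (l2, e2) = graph_of(args[0]), graph_of(args[1])
    labels, edges = {**l1, **l2}, e1 | e2
    if op == "and":                                    # compose via the clique K2: a join
        edges |= {frozenset((v, w)) for v in l1 for w in l2}
    return labels, edges                               # "or": compose via the stable set S2

# Example: (a and b) or c gives the single edge between a and b, with c isolated.
labels, edges = graph_of(("or", ("and", "a", "b"), "c"))
print(labels, [tuple(sorted(e)) for e in edges])
```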
We can easily observe that the map [[·]] behaves well with respect to the equivalence ≡ over formulas, that is, equivalent formulas are mapped to the same graph.
Proposition 15. Let φ and ψ be classical formulas.
Then φ ≡ ψ iff [[φ]] = [[ψ]].
We finally recall the definition of cographs and the theorem establishing the relation between cographs and classical formulas, i.e., providing an alternative definition of cographs as the graphs generated from single-vertex graphs using the composition-via a two-vertex edgeless graph and a two-vertex one-edge graph.
Definition 16. A cograph is a graph G such that for any four distinct vertices v_1, v_2, v_3, v_4 ∈ V_G the induced subgraph G|_{v_1, v_2, v_3, v_4} is not symmetric to the graph ⟨{a, b, c, d}, ∅, {ab, bc, cd}⟩ (i.e., the path a-b-c-d).
Theorem 17 ([39]). A graph G is a cograph iff there is a formula φ such that G ∼ [[φ]].
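The cograph condition of Definition 16 can be tested by brute force on small graphs, as in the following sketch (illustrative names, not the paper's code): it simply looks for four vertices inducing a path of length three.

```python
# Minimal sketch (exponential, for small graphs only): checking the cograph
# condition of Definition 16 by searching for an induced P4.

from itertools import combinations, permutations

def is_cograph(vertices, edges):
    def adj(x, y):
        return frozenset((x, y)) in edges
    for quad in combinations(vertices, 4):
        for a, b, c, d in permutations(quad):
            induced_p4 = (adj(a, b) and adj(b, c) and adj(c, d)
                          and not adj(a, c) and not adj(a, d) and not adj(b, d))
            if induced_p4:
                return False
    return True

# Example: the path on four vertices is not a cograph, while the 4-cycle is.
P4 = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4)]}
C4 = {frozenset(p) for p in [(1, 2), (2, 3), (3, 4), (4, 1)]}
print(is_cograph({1, 2, 3, 4}, P4))  # False
print(is_cograph({1, 2, 3, 4}, C4))  # True
```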
Modular Decomposition of Graphs
We can now introduce the notion of prime graph, which plays a special role in the modular decomposition of graphs, that is, in the possibility of inductively defining graphs from single-vertex graphs using the operation of composition-via restricted to specific graphs (see, e.g., [39,55,51,60,63,35]).
Definition 18.
A graph G is prime if |V G | > 1 and all its modules are trivial. A graph G is quasi-prime if it is prime, a clique or a stable set.
We recall the following standard result from the literature, which guarantees the possibility of inductively describing graphs using single-vertex graphs and the operation of composition-via prime graphs. More precisely, we can define the notion of modular decomposition of a graph via composition-via quasi-prime graphs, providing a more canonical representation.
Definition 20. Let G be a non-empty graph. A modular decomposition of G is a way to write G using single-vertex graphs and the operation of composition-via quasi-prime graphs:
• if G is a graph with a single vertex x labeled by a, then G = a (i.e. G = {x}, ℓ(x) = a, ∅ );
• if G is disconnected with connected components H 1 , . . . , H n , then G = S H 1 , . . . , H n for a stable set S with |V S | = n;
• if G ⊥ is disconnected with connected components H 1 , . . . , H n , then G = C H 1 , . . . , H n for a clique C with |V C | = n;
• if both G and G ⊥ are connected and H 1 , . . . , H n are maximal modules of G, then there is a unique prime graph P (with |V P | > 2) such that G = P H 1 , . . . , H n .
A spurious modular decomposition of G is a modular decomposition of G in which we allow occurrences of the empty graph ∅ to occur as leaves of the abstract syntax tree.
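The case analysis of Definition 20 translates directly into a (naive, exponential) procedure. The sketch below is illustrative only and is not the efficient modular decomposition algorithm from the literature: in the prime case it enumerates all proper modules and keeps the maximal ones, which partition the vertex set when both the graph and its dual are connected.

```python
# Minimal sketch (exponential, for tiny graphs only; illustrative names):
# modular decomposition following the cases of Definition 20.

from itertools import combinations

def neighbours(v, vertices, edges):
    return {w for w in vertices if frozenset((v, w)) in edges}

def components(vertices, edges):
    left, comps = set(vertices), []
    while left:
        comp, todo = set(), [next(iter(left))]
        while todo:
            v = todo.pop()
            if v in comp:
                continue
            comp.add(v)
            todo += list(neighbours(v, vertices, edges) & left)
        comps.append(comp); left -= comp
    return comps

def is_module(vertices, edges, M):
    return all(not (0 < len(neighbours(w, M, edges)) < len(M))
               for w in vertices - set(M))

def decompose(vertices, edges):
    vertices = set(vertices)
    if len(vertices) == 1:
        return ("leaf", next(iter(vertices)))
    co_edges = {frozenset(p) for p in combinations(vertices, 2)} - edges
    comps, co_comps = components(vertices, edges), components(vertices, co_edges)
    if len(comps) > 1:                       # G disconnected: stable-set node
        return ("S", [decompose(c, edges) for c in comps])
    if len(co_comps) > 1:                    # dual disconnected: clique node
        return ("K", [decompose(c, edges) for c in co_comps])
    # prime case: the maximal proper modules partition the vertex set
    modules = [set(M) for k in range(1, len(vertices))
               for M in combinations(vertices, k) if is_module(vertices, edges, set(M))]
    maximal = [M for M in modules if not any(M < N for N in modules)]
    reps = [next(iter(M)) for M in maximal]
    quotient = {frozenset((a, b)) for a, b in combinations(reps, 2)
                if frozenset((a, b)) in edges}
    return ("P", quotient, [decompose(M, edges) for M in maximal])

# Example: the path a-b-c-d is prime, so the decomposition is a P-node whose
# quotient is the path itself and whose parts are the four leaves.
E = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d")]}
print(decompose({"a", "b", "c", "d"}, E))
```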
Observation. Modular decomposition does not provide a unique way to write graphs.
In fact, whenever two graphs are symmetric (even when they are not isomorphic), they provide different ways to define the same graph using the composition-via operation. By means of example, if we consider the non-isomorphic but symmetric graphs P = a-b-c-d and P′ = a-c-b-d, then we have that
P⟨a′, b′, c′, d′⟩ = a′-b′-c′-d′ = P′⟨a′, c′, b′, d′⟩.
Even considering isomorphic graphs, we may have permutations allowing us to write a graph using the same composition-via a graph G while changing the order of its factors, that is, G⟨H_1, . . . , H_n⟩ may be the same graph as G⟨H_{σ(1)}, . . . , H_{σ(n)}⟩ for some permutation σ over the set {1, . . . , n}. Note that if G is a clique or a stable set, then σ can be any permutation.
Moreover, the associativity of cliques and stable sets creates additional ambiguity. By means of example, consider a clique K_3 with three vertices and a clique K_2 with two vertices; then
K_3⟨a, b, c⟩ = K_2⟨a, K_2⟨b, c⟩⟩ = K_2⟨K_2⟨a, b⟩, c⟩.
In order to limit the proliferation of composition-via operations, we introduce the notion of base of graphical connectives, allowing us to provide a more canonical modular decomposition of graphs.
Definition 21. A graphical connective C⟨v_1, . . . , v_n⟩ = ⟨V_C, ⌢_C⟩ is given by an ordered list of vertices V_C = ⟨v_1, . . . , v_n⟩ and a non-reflexive symmetric edge relation ⌢_C over the set of vertices occurring in V_C. We define the composition-via a graphical connective similarly to the composition-via the graph G_C = ⟨⋃_{v ∈ V_C}{v}, ∅, ⌢_C⟩ (see Definition 9). A graphical connective is prime (resp. a clique, a stable set) if C⟨a_1, . . . , a_n⟩ is a prime graph (resp. a clique, a stable set) for any single-vertex graphs a_1, . . . , a_n.
The group of symmetries and the set of dualizing symmetries of a graphical connective C are respectively defined as the following subsets of the set S_n of permutations over the set {1, . . . , n}:^a
S(C) ≔ { σ ∈ S_n | C⟨a_1, . . . , a_n⟩ = C⟨a_{σ(1)}, . . . , a_{σ(n)}⟩ }
S^⊥(C) ≔ { σ ∈ S_n | (C⟨a_1, . . . , a_n⟩)^⊥ = C⟨a^⊥_{σ(1)}, . . . , a^⊥_{σ(n)}⟩ }   (7)
for any single-vertex graphs a_1, . . . , a_n. A set of graphical connectives Q is a base (resp. prime base) if for each quasi-prime graph (resp. for each prime graph) G with V_G = {w_1, . . . , w_n} there is a unique C ∈ Q such that G = C⟨w_{σ(1)}, . . . , w_{σ(n)}⟩ for some permutation σ ∈ S_n. Notation 22. We define the following graphical connectives (with n > 1):
`_n⟨v_1, . . . , v_n⟩ ≔ ⟨⟨v_1, . . . , v_n⟩, ∅⟩
⊗_n⟨v_1, . . . , v_n⟩ ≔ ⟨⟨v_1, . . . , v_n⟩, {v_i v_j | i ≠ j}⟩
Bull⟨v_1, . . . , v_5⟩ ≔ ⟨⟨v_1, . . . , v_5⟩, {v_1 v_2, v_2 v_3, v_3 v_4, v_5 v_2, v_5 v_3}⟩
P_n⟨v_1, . . . , v_n⟩ ≔ ⟨⟨v_1, . . . , v_n⟩, {v_i v_{i+1} | i ∈ {1, . . . , n − 1}}⟩   (8)
and we denote ` ≔ `_2 and ⊗ ≔ ⊗_2 = P_2. That is, `_3⟨a_1, a_2, a_3⟩ is the edgeless graph on a_1, a_2, a_3; ⊗_3⟨b_1, b_2, b_3⟩ is the triangle on b_1, b_2, b_3; Bull⟨c_1, . . . , c_5⟩ is the bull graph on c_1, . . . , c_5; and P_n⟨d_1, . . . , d_n⟩ is the path d_1 - d_2 - · · · - d_{n−1} - d_n.
We use the following notation for the composition-via the graphical connectives`and ⊗:
H_1 ` H_2 = `⟨H_1, H_2⟩   H_1 ⊗ H_2 = ⊗⟨H_1, H_2⟩
From now on, we only consider bases containing the graphical connectives in Equation (8).
^a More precisely, S_n provided with the operation of composition is a group whose neutral element is the identity permutation (denoted id).
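The connectives of Notation 22 can be written down explicitly as ordered vertex lists with edge sets, as in this illustrative sketch (names are not from the paper).

```python
# Minimal sketch (illustrative): the graphical connectives of Notation 22,
# each given as an ordered tuple of formal vertices together with its edge set.

def stable(n):            # `_n : no edges
    vs = tuple(f"v{i}" for i in range(1, n + 1))
    return vs, set()

def clique(n):            # ⊗_n : all edges
    vs = tuple(f"v{i}" for i in range(1, n + 1))
    return vs, {frozenset((vs[i], vs[j])) for i in range(n) for j in range(i + 1, n)}

def path(n):              # P_n : v_i -- v_{i+1}
    vs = tuple(f"v{i}" for i in range(1, n + 1))
    return vs, {frozenset((vs[i], vs[i + 1])) for i in range(n - 1)}

def bull():               # Bull : triangle v2 v3 v5 with horns v1 and v4
    vs = ("v1", "v2", "v3", "v4", "v5")
    return vs, {frozenset(p) for p in
                [("v1", "v2"), ("v2", "v3"), ("v3", "v4"), ("v5", "v2"), ("v5", "v3")]}

print(path(4))   # the prime connective P4 used throughout the paper
```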
Example 23. Consider the graph G = P_4⟨a ` b, c ⊗ d, e ⊗ f, ⊗_3⟨g, h, i⟩⟩ on the vertices a, b, c, d, e, f, g, h, i. We can also write G as P_4⟨⊗_3⟨g, i, h⟩, e ⊗ f, d ⊗ c, a ` b⟩ (or as P_4⟨a ` b, c ⊗ d, e ⊗ f, g ⊗ (h ⊗ i)⟩ if we only use prime connectives). The dual graph of G can be written as
G^⊥ = P_4^⊥⟨a^⊥ ⊗ b^⊥, c^⊥ ` d^⊥, e^⊥ ` f^⊥, `_3⟨g^⊥, h^⊥, i^⊥⟩⟩.
We can reformulate the standard result on modular decomposition as follows.
Theorem 24. Let G be a non-empty graph. Then there is a unique way (up to symmetries of graphical connectives) to write G using single-vertex graphs and the graphical connectives of a given base Q.
Proof. The result follows from Definition 20 and from the fact that there is a unique way, up to connective symmetries, to write a quasi-prime graph using the operation of composition-via a graphical connective of a base.
Corollary 25. Two graphs are symmetric iff they admit the same modular decomposition.
M I M G L
In this section we define connectives in one-to-one correspondence with the graphical connectives, and a set of formulas constructed using these connectives which we can interpret (semantically) as graphs.
We then provide two sequent calculi using linear and context-free sequent rules and we prove their proof-theoretical properties.
Graphical Formulas
In order to represent graphs as formulas using graph modular decomposition, we need to define new connectives beyond conjunction and disjunction in order to have a correspondence between the graphs of our base and the connectives of our logic. For this purpose, we define a set of formulas whose connectives are in one-to-one correspondence with the graphical connectives in a prime base P.
Definition 26. Assume a prime base P to be fixed. The set of formulas is generated by the set of propositional atoms A, a unit •, and the set of (graphical) connectives C = {κ_P | P ∈ P} using the following syntax:
φ ≔ • | a | a^⊥ | κ_P⟨φ_1, . . . , φ_{|P|}⟩   with a ∈ A and κ_P ∈ C   (9)
The arity of the connective κ_Q is defined as |κ_Q| ≔ |V_Q|. We may denote by ` (resp. ⊗) the binary connective κ_` (resp. κ_⊗), and we may write φ ` ψ (resp. φ ⊗ ψ) instead of κ_`⟨φ, ψ⟩ (resp. κ_⊗⟨φ, ψ⟩). A literal is a formula of the form a or a^⊥ for an atom a ∈ A. The set of literals is denoted L. A formula is unit-free if it contains no occurrences of •. A formula containing no literal is said to be vacuous. A MLL-formula is a formula containing only occurrences of the ` and ⊗ connectives.
A formula κ φ 1 , . . . , φ n is called a κ-formula and we say that κ is its main connective.
A formula is compact if no `-formula occurring in it has a `-formula as an immediate subformula and no ⊗-formula occurring in it has a ⊗-formula as an immediate subformula, that is, if it contains no subformulas of the form `_n⟨φ_1, . . . , φ_k, `_m⟨φ_{k+1}, . . . , φ_{k+m}⟩, φ_{k+m+1}, . . . , φ_{n+m−1}⟩ or ⊗_n⟨φ_1, . . . , φ_k, ⊗_m⟨φ_{k+1}, . . . , φ_{k+m}⟩, φ_{k+m+1}, . . . , φ_{n+m−1}⟩ for any n, m ∈ N.
The size (resp. energy) of a formula φ is the number |φ| of (resp. the multiset of) literals, units, and connectives occurring in it.
We consider the following equivalence laws:
κ_Q⟨φ_1, . . . , φ_{|Q|}⟩ ≡ κ_Q⟨φ_{σ(1)}, . . . , φ_{σ(|Q|)}⟩ for each σ ∈ S(Q)
φ ⊗ (ψ ⊗ χ) ≡ (φ ⊗ ψ) ⊗ χ   φ ` (ψ ` χ) ≡ (φ ` ψ) ` χ   (10)
and the following De Morgan laws:
•^⊥ ≡ •   φ^{⊥⊥} ≡ φ
only if S^⊥(Q) = ∅ : (κ_Q⟨φ_1, . . . , φ_{|Q|}⟩)^⊥ ≡ κ_{Q^⊥}⟨φ^⊥_{σ(1)}, . . . , φ^⊥_{σ(|V_Q|)}⟩
only if S^⊥(Q) ≠ ∅ : (κ_Q⟨φ_1, . . . , φ_{|Q|}⟩)^⊥ ≡ κ_Q⟨φ^⊥_{ρ(1)}, . . . , φ^⊥_{ρ(|V_Q|)}⟩ for each ρ ∈ S^⊥(Q)   (11)
We denote by ≡ the equivalence relation generated by these equivalence and De Morgan laws. A context formula (or simply context) ζ[ ] is a formula containing a hole taking the place of an atom. Given a context ζ[ ], the formula ζ[φ] is defined by simply replacing the hole with the formula φ.
For example, if ζ[ ] = ψ`( ⊗ χ), then ζ[φ] = ψ`(φ ⊗ χ).
Each formula φ with set of occurrences of literals x 1 , . . . , x n can be considered as a synthetic connective, that is, given ψ 1 , . . . , ψ n formulas we denote by φ ψ 1 , . . . , ψ n the formula obtained by replacing x i with ψ i for all i ∈ {1, . . . , n}. Therefore we define the set of symmetries of φ as the set S(φ) of permutations σ over {1, . . . , n} such that φ ψ 1 , . . . , ψ n ≡ φ ψ σ(1) , . . . , ψ σ(n) for any formulas ψ 1 , . . . , ψ n .
The linear implication φ ⊸ ψ is defined as φ ⊥`ψ . We write φ ψ as a shortcut for "φ ⊸ ψ and ψ ⊸ φ".
Observation. In multiplicative linear logic the connectives ` and ⊗ are considered commutative because the implications φ ` ψ ⊸ ψ ` φ and φ ⊗ ψ ⊸ ψ ⊗ φ are provable for all φ and ψ.
For this reason, the laws φ ` ψ ≡ ψ ` φ and φ ⊗ ψ ≡ ψ ⊗ φ are in some way subsumed, and the De Morgan law establishing a relation between ` and ⊗ is usually written in the form (φ ⊗ ψ)^⊥ = φ^⊥ ` ψ^⊥, similar to the one in the second line of Equation (11) with σ being the identity. However, in [1,2] the authors consider non-commutative versions of linear logic, where sequents are considered as lists of formulas and the exchange rule is removed or restricted. In these logics both the ⊗ and the ` connectives are non-commutative, and the De Morgan law establishing the duality between ` and ⊗ is written as
(φ ⊗ ψ)^⊥ = ψ^⊥ ` φ^⊥, that is, it is of the form (κ⟨φ_1, φ_2⟩)^⊥ ≡ κ^⊥⟨φ^⊥_{σ(1)}, φ^⊥_{σ(2)}⟩
as in the second line of Equation (11) with σ the permutation exchanging 1 and 2. These two cases cover all the possible ways to define De Morgan laws between pairs of dual binary connectives. However, we can also consider cases in which only one connective occurs in the law. The non-commutative logic Pomset [71] provides an example of this case, where the non-commutative binary connective ⊳ is self-dual, that is, it satisfies the De Morgan law (φ ⊳ ψ)^⊥ = φ^⊥ ⊳ ψ^⊥, as in the third line of Equation (11) with σ being the identity.
A fourth and last way to define a De Morgan law for binary connectives would be a law of the form (κ⟨φ, ψ⟩)^⊥ ≡ κ^⊥⟨ψ^⊥, φ^⊥⟩. Note that this writing is closer to the law defining the inverse of the product of two elements in a group.
For graphical connectives we observe new behaviors similar to this latter one (but more complex), which cannot properly be called self-duality, as in the case of ⊳, but rather a duality "up to isomorphism". Consider the connective κ_{P_4}, whose dual connective is κ_{P_4} itself; the De Morgan law establishing this duality, (κ_{P_4}⟨a, b, c, d⟩)^⊥ ≡ κ_{P_4}⟨b^⊥, d^⊥, a^⊥, c^⊥⟩, is not simply expressed by negating the subformulas of a κ_{P_4}-formula, as in the case of ⊳, but also by changing their order in a more complex way than just "inverting the order of subformulas" as in the aforementioned fourth possible way to define De Morgan laws for a binary connective.
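The permutation appearing in this De Morgan law can be checked concretely: the complement of the path a-b-c-d is again a path, visited in the order b, d, a, c. The sketch below (illustrative, not from the paper) verifies this.

```python
# Minimal sketch (illustrative): the complement of P4<a,b,c,d> is P4<b,d,a,c>,
# matching the De Morgan law for the connective associated to P4.

from itertools import combinations

vertices = {"a", "b", "c", "d"}
edges = {frozenset(p) for p in [("a", "b"), ("b", "c"), ("c", "d")]}

dual_edges = {frozenset(p) for p in combinations(vertices, 2)} - edges
expected = {frozenset(p) for p in [("b", "d"), ("d", "a"), ("a", "c")]}
print(dual_edges == expected)  # True
```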
[Figure 4: the sequent rules ax, •, mix, `, ⊗, `_n, ⊗_n, d-κ and wd_⊗, together with the derivable rules AX, cut, cxt-⊗, cxt-` and d-χ.]
Remark 27. As explained in [8] (Section 9), the so-called generalized multiplicative connectives from the literature in linear logic [29,44,62,9] are different from the ones discussed here. In fact, the unique 4-ary graphical connective P_4 is iso-dual and has symmetry group {id, (1 4)(2 3)}.
Definition 28. If φ is a formula, we define the graph [[φ]] as follows:
[[•]] = ∅   [[a]] = a   [[φ^⊥]] = [[φ]]^⊥   [[κ_Q⟨φ_1, . . . , φ_n⟩]] = Q⟨[[φ_1]], . . . , [[φ_n]]⟩
where we denote by a the single-vertex graph whose vertex is labeled by a. Conversely, given a (possibly spurious) modular decomposition (via graphical connectives) of a graph G, we define [[G]]^{−1} as the formula whose abstract syntax tree is in one-to-one correspondence (respecting the parenthood relation) with the decomposition: leaves labeled by a literal x correspond to leaves labeled by x, leaves labeled by the empty graph ∅ correspond to leaves labeled by •, and nodes labeled by a graphical connective Q correspond to nodes labeled by the connective κ_Q.
For each formula φ = ψ⟨x_1, . . . , x_n⟩ (where x_1, . . . , x_n are literals), we define S(φ) ≔ S([[φ]]).
Observation. Intuitively, compact and unit-free formulas are the representations of the modular decompositions of graphs via graphical connectives, providing a one-to-one correspondence between the graphical connectives in the abstract syntax trees of the two syntaxes.
We have the following immediate results.
Proposition 29. Let φ and ψ be formulas.
If φ ≡ ψ, then [[φ]] = [[ψ]]. Moreover, if φ and ψ are unit-free, then φ ≡ ψ iff [[φ]] = [[ψ]].
However, for the expected stronger statements, such as the logical equivalence of formulas whose interpretations are the same graph, we need the results in the next sections.
Extending Multiplicative Linear Logic with Graphical Connectives
We assume the reader to be familiar with the definition of sequent calculus derivations as trees of sequents (see, e.g., [77]) but we recall here some definitions.
Definition 30. A sequent is a set of occurrences of formulas.
A sequent system S is a set of sequent rules as the ones in Figure 4. In a sequent rule ρ, we say that a formula is active if it occurs in one of its premises (the sequents above the horizontal line) but not in its conclusion (the sequent below the horizontal line), and principal if it occurs in its conclusion but in none of its premises.
A proof of a sequent Γ in S is a derivation with no open premises. We denote by an (open) derivation of Γ from Γ′ a proof tree having exactly one open premise Γ′.
A rule is admissible if, whenever its premises are provable, its conclusion is derivable without using the rule itself. A rule is derivable from a set of rules S if it is possible to define an open derivation having the same premises and the same conclusion as the rule using only rules in S.
We then use the sequent rules in Figure 4 to define two logics over formulas.
Notation 31. In this paper, as in the tradition of linear logic, we use the same notation to denote a proof system S and the logic it identifies, that is, the set of formulas admitting a proof in S.
We generalize the multiplicative linear logic (with mix) [40] to the following logics operating on (more general) formulas constructed using graphical connectives beyond`and ⊗.
Definition 32. We define the following logics via their sequent systems:
Multiplicative Prime Logic : MPL = { ax, `, ⊗, d-κ | κ ∈ C prime }
Multiplicative Prime Logic with mix : MPL• = { ax, •, `_n, mix, ⊗_n, d-κ, wd_⊗ | κ ∈ C }   (12)
Observation (Rules Exegesis). The rules ax, `, ⊗, cut, mix and • are the standard rules from multiplicative linear logic with mix. In particular, ax is the restriction of the general axiom rule AX to atomic formulas. The rule ` can be read as the rule making explicit the meta-connective "comma" we use in sequents to separate formulas. The "true" commutativity of `, that is, the fact that we consider the formulas φ ` ψ and ψ ` φ to be the same formula, is a natural consequence of the fact that we consider the sequents φ, ψ and ψ, φ to be the same sequent (since we do not consider sequents as lists of formulas). Similarly, the rule ⊗ can be read as the rule making the meta-connective "parallel branches" in derivation trees a concrete one, and then applying two occurrences of the weak-
distributivity law (i.e., φ ⊗ (ψ ` χ) →_{w.d.} (φ ⊗ ψ) ` χ)
in the following way:
from the premises (Γ, φ) and (ψ, ∆), which have the same interpretation as (Γ ` φ) ⊗ (ψ ` ∆), two applications of weak distributivity yield first Γ ` (φ ⊗ (ψ ` ∆)) and then Γ ` (φ ⊗ ψ) ` ∆, which has the same interpretation as the conclusion Γ, φ ⊗ ψ, ∆.
To simplify proofs, in MPL• we generalize the rules for ` and ⊗ to their n-ary versions, proving that the n-ary versions of these two connectives are derivable using the associativity of the binary ones. The (double) dual connectives rule d-κ introduces a pair of dual connectives at the same time.^a This rule is a reformulation in sequent calculi of the rule p↓ from the logic GS (see Section 4), which takes two dual graphical connectives and creates a graph where the edges of these two connectives have been merged (i.e., a clique) and where modules in the same "position" of the graphical connectives are gathered in a same module. To have an intuition about how this rule behaves, consider the equivalent two-sided formulation of our sequents and rules, where formulas can move across the turnstile (⊢) modulo negation. In this setting, the rule d-⊗ below introduces a ⊗ on the right-hand side, together with a ⊗ on the left-hand side (that is, a `). As mentioned above, the premises of this rule should be considered in a ⊗ relation, as in the regular ⊗ (see below).
Γ 1 , φ 1 ⊢ ψ 1 , ∆ 1 Γ 2 , φ 2 ⊢ ψ 2 , ∆ 2 d-⊗ Γ 1 , Γ 2 , φ 1 ⊗ φ 2 ⊢ ψ 1 ⊗ ψ 2 , ∆ 1 , ∆ 2 equivalent to ⊢ Γ ⊥ 1 , φ ⊥ 1 , ψ 1 , ∆ 1 , ∆ 1 ⊢ Γ ⊥ 2 , φ ⊥ 2 , ψ 2 , ∆ 2 , ∆ 2 ⊗ ⊢ Γ ⊥ 1 , Γ ⊥ 2 , φ ⊥ 1 , φ ⊥ 2 , ψ 1 ⊗ ψ 2 , ∆ 1 , ∆ 2 ⊢ Γ ⊥ 1 , Γ ⊥ 2 , φ ⊥ 1`φ ⊥ 2 , ψ 1 ⊗ ψ 2 , ∆ 1 , ∆ 2
Similarly, we should consider the premises of a d-κ_Q to be in a Q-shaped relation. For example, for Q = P_4 we should consider the first and second premises, the second and third premises, and the third and fourth premises to be in such a ⊗ relation, as shown below.
Γ 1 , φ 1 ⊢ ψ 1 , ∆ 1 Γ 2 , φ 2 ⊢ ψ 2 , ∆ 2 Γ 3 , φ 3 ⊢ ψ 3 , ∆ 3 Γ 4 , φ 4 ⊢ ψ 4 , ∆ 4 d-κ P 4 Γ 1 , Γ 2 , Γ 3 , Γ 4 , κ P 4 φ 1 , φ 2 , φ 3 , φ 4 ⊢ κ P 4 ψ 1 , ψ 2 , ψ 3 , ψ 4 , ∆ 1 , ∆ 2 , ∆ 3 , ∆ 4
With this intuition, it appears clear that during proof construction (top-down interpretation of a derivation) the rule d-κ simply makes explicit the relation between the premises of the rule (which could be thought of as a meta-connective), "keeping track" of this relation by introducing a copy of the connective capturing this same pattern on each side of the sequent. At the same time, in proof search (i.e., bottom-up interpretation of a derivation) in single-sided sequents, the rule d-κ can be thought of as a rule merging a κ-formula κ⟨φ_1, . . . , φ_n⟩ with a dual κ^⊥-formula κ^⊥⟨ψ_1, . . . , ψ_n⟩ into the formula (φ_1 ` ψ_1) ⊗ · · · ⊗ (φ_n ` ψ_n), followed by a ⊗_n splitting the context, and some `. Note that having a rule introducing only one of the two dual connectives inevitably leads to the same problems as the rules for generalized multiplicative connectives introduced in the early works on linear logic [29,44], where initial coherence (i.e., the possibility of having only atomic axioms in a cut-free system, [14]) is ruled out because of the so-called packaging problem. This problem, due to the fact that some rules would require introducing a new connective between formulas from a same sequent and from different sequents (therefore enforcing strong constraints in proof search), does not occur in the syntax of proof nets with generalized connectives, where the rigid structure imposed by derivation branching is removed. An extensive discussion of such a single-connective rule and the results on it is provided in Appendix C.
The rule wd_⊗ allows us to construct derivations where we can simulate the application of certain deep inference rules to subformulas of a sequent, in the style of deep inference systems [47]. It is a generalization of the weak-distributivity law in symmetric monoidal categories (see, e.g., [61,3])
φ ⊗ (ψ ` χ) −→ (φ ⊗ ψ) ` χ   (13)
distributing the ⊗ over the other connectives, that is,
χ ⊗ κ⟨φ_1, . . . , φ_k, ψ, φ_{k+1}, . . . , φ_n⟩ −→ κ⟨φ_1, . . . , φ_k, ψ ⊗ χ, φ_{k+1}, . . . , φ_n⟩   (14)
Note that this law, together with the "dual" weak-distributivity law
κ⟨φ_1, . . . , φ_k, ψ ` χ, φ_{k+1}, . . . , φ_n⟩ −→ κ⟨φ_1, . . . , φ_k, ψ, φ_{k+1}, . . . , φ_n⟩ ` χ   (15)
distributing the connective ` over the other connectives, usually collapse into a single law (the standard one in Equation (13)) whenever we consider only the two connectives ⊗ and `. The law for the ` is captured by the admissible rule context-par (cxt-`), which generalizes it using the unit •, while the admissible rule context-tensor (cxt-⊗) generalizes wd_⊗. The other derivable rules in the last row are the generalization of the axiom to arbitrary formulas (AX), the generalization of the rule d-κ to synthetic connectives (d-χ), and the standard cut-rule.
Notation 33. Unless needed for the sake of clarity, we omit the permutations over the indices of the subformulas in rules.
Remark 34. If we consider only MLL-formulas, then the rule wd ⊗ is admissible.
Properties of the Logics MPL and MPL•
We start by observing that these systems are initial coherent [14,64], that is, we can derive the implication φ ⊸ φ for any formula φ using only atomic axioms. This allows us to prove that in IsoMix the unit • is, in some sense, the unit for all connectives. We conclude by proving the admissibility of cut via cut-elimination, together with the admissibility of certain rules which are useful to prove the results in the next sections.
Theorem 35. The logics MPL and MPL • are initial coherent.
Proof. By induction on the structure of a formula φ: • if φ = a is a literal, then there is a derivation a, a ⊥ made of a single ax occurrence;
• if φ = •, then, since •^⊥ = •, the sequent ⊢ •, • is provable using two instances of • and a mix;
• if φ = κ ψ 1 , . . . , ψ n , then we can apply a d-κ to the sequent φ, φ ⊥ and obtaining sequents ψ i , ψ ⊥ i for all i ∈ {1, . . . , |κ|}. We conclude by inductive hypothesis.
Note that if φ is unit-free, then we only need the rules ax and d-κ to prove ⊢ φ, φ^⊥.
The derivability of the general axiom rule and the general d-χ immediately follows by a similar argument.
Lemma 36. Let χ be a formula such that |χ| > 1. Then rule d-χ is derivable.
Proof. By induction on the structure of χ, using the rule d-κ.
To prove cut-elimination in MPL•, we rely on the admissibility of the rule cxt-` in MPL•.
Lemma 38. The rule cxt-`is admissible in MPL • . Proof. To prove the admissibility of cxt-`we show that ⊢ MPL • Γ, ζ[φ], then ⊢ MPL • Γ, ζ[•], φ: • If ζ[ ] = is trivial, then ζ[φ] = φ and we conclude since • ⊢ • HP ⊢ Γ, φ mix ⊢ Γ, φ, • . • If, w.l.o.g., ζ[ ] =`n ζ ′ [ ], ψ 2 , . . . , ψ n , then there is a derivation ⊢ Γ, ζ ′ [φ], ψ 2 , . . . , ψ ǹ n ⊢ Γ,`n ζ ′ [φ], ψ 2 , . . . , ψ n , thus a derivation IH ⊢ Γ, ζ ′ [•], ψ 2 , . . . , ψ n , φ n ⊢ Γ,`n ζ ′ [•], ψ 2 , . . . , ψ n , φ a
The existence of rules introducing two (or more) operators at the same time is not a novelty in structural proof theory. Similar rules can be observed in focused proof systems (see, e.g., [13,65,64]), where a rule can handle multiple connectives of a same formula, or in modal logic and linear logic (more precisely, variants of linear logic with a functorial promotion rule), where rules for modalities often introduce multiple modalities in a single application (see, e.g., [43,17,21,59]). In recent works on display calculi [25], the authors use rules for two-sided sequent systems which introduce a modality (which could be of any arity) on one side of the sequent together with its associated weak modality, internalizing the introduction of the dual connective on the other side of the sequent.
• If, w.l.o.g., ζ[ ] = φ ⊗ C ′ [ ], then there is a derivation ⊢ Γ 1 , ζ ′ [φ] ⊢ Γ 2 , ψ 2 · · · ⊢ Γ n , ψ n ⊗ n ⊢ Γ 1 , Γ 2 , ⊗ n ζ ′ [φ], ψ 2 , . . . , ψ n , thus a derivation IH ⊢ Γ 1 , ζ ′ [•], φ IH ⊢ Γ 2 , ψ 2 · · · IH ⊢ Γ n , ψ n ⊗ n ⊢ Γ 1 , Γ 2 , ψ ⊗ ζ ′ [•], φ • If, w.l.o.g., ζ[ ] = κ ζ ′ [ ], ψ 2 , . . . , ψ n , then: -either there is a derivation ⊢ Γ 1 , ζ ′ [φ], χ 1 ⊢ Γ 2 , ψ 2 , χ 2 · · · ⊢ Γ n , ψ n , χ n d-κ ⊢ Γ 1 , . . . , Γ n , κ ⊥ χ 1 , . . . , χ n , κ ζ ′ [φ], ψ 2 , . . . , ψ n thus a derivation IH ⊢ Γ 1 , ζ ′ [•], φ, χ 1 HP ⊢ Γ 2 , ψ 2 , χ 2 · · · HP ⊢ Γ n , ψ n , χ n d-κ ⊢ Γ 1 , . . . , Γ n , κ ⊥ χ 1 , . . . , χ n , κ ζ ′ [•], ψ 2 , . . . , ψ n , φ -or there is a derivation ⊢ Γ 1 , ψ ′ ⊢ Γ 2 , χ ζ ′ [φ], ψ 2 , . . . , ψ n wd ⊗ ⊢ Γ 1 , Γ 2 , κ ζ ′ [φ], ψ 2 , . . . , ψ k , ψ ′ , ψ k+1 , . . . , ψ n thus a derivation ⊢ Γ 1 , ψ ′ IH ⊢ Γ 2 , χ ζ ′ [•], ψ 2 , . . . , ψ n , φ wd⊗ ⊢ Γ, κ ζ ′ [•], ψ 2 , . . . , ψ k , ψ ′ , ψ k+1 , . . . , ψ n , φ
The admissibility of cut is proven by providing a cut-elimination procedure.
Theorem 39 (Cut-elimination). Let X ∈ {MPL, MPL • }. The rule cut is admissible in X.
Proof. For MPL, it suffices to define the weight of an instance of a cut-rule as the maximum length of a branch above one of its premises, and the weight of a derivation as the sum of the weights of its cuts. To conclude, it suffices to remark that each cut-elimination step from Figure 5 reduces the weight of a derivation.
For MPL•, we also have to define the energy of an instance of a cut-rule as the (multiset) union of the energies of its cut-formulas, and the energy of a derivation as the multiset of the energies of its cuts. We then consider the order over multisets of units, literals and connectives defined in such a way that κ < κ′ whenever |κ| < |κ′| and • < x for any literal x. According to this order, each non-commutative cut-elimination step reduces the energy of a derivation. The only non-trivial case is the one in which we cut a principal formula of a wd_⊗ against a principal formula of another wd_⊗, where the two wd_⊗-rules are applied to principal subformulas with different indices (more precisely, whose indices are not related by a permutation in S(κ)). In this case, the cut-elimination step introduces three new cuts, all of which have smaller energy. To conclude, it suffices to remark that the lexicographic order over the pairs given by the energy and the weight of a derivation decreases at each step, because each commutative cut-elimination step does not change the energy but reduces the weight.
The admissibility of cut implies analyticity, via the subformula property, for MPL.
Proof. It suffices to remark that the rules in MPL satisfy the subformula property, that is, all formulas occurring in a premise of a rule are subformulas of the formulas occurring in its conclusion.
[Figure 5: the cut-elimination steps, covering the key cases ax against cut, • against mix, ⊗_n against `_n, d-κ against d-κ, d-κ against wd_⊗, wd_⊗ against wd_⊗, and the commutative cases.]
The same result cannot be immediately stated for MPL• because of the rule unitor_κ. This is because, as already observed in previous works on graphical logic [7,8,5], having more-than-binary connectives implies the possibility of having sub-connectives, that is, graphical connectives of smaller arity corresponding to the synthetic connective obtained by fixing certain entries of a connective to be units.
Proof. For MPL it is a consequence of the subformula property. For MPL• it suffices to remark that ` and ⊗ have no sub-connectives, therefore quasi-subformulas are simply subformulas.
Definition 41. A graphical connective κ_Q is a sub-connective of κ_{Q′} if Q is an induced (quasi-prime) subgraph of Q′. We may denote κ_Q = κ_{Q′}|_{i_1,...,i_k} with i_1, . . . , i_k ∈ {1, . . . , n} such that i_1 < · · · < i_k if Q′⟨•, . . . , •, v_{i_1}, •, . . . , •, v_{i_k}, •, . . . , •⟩ ∼ Q⟨v_{i_1}, . . . , v_{i_k}⟩ for any single-vertex graphs v_1, . . . , v_n. A quasi-subformula of a formula φ = ζ[κ_{Q′}⟨ψ_1, . . . , ψ_n⟩] is a literal in φ or a formula κ_{Q′}|_{i_1,...,i_k}⟨ψ′_{i_1}, . . . , ψ′_{i_k}⟩ with ψ′_{i_j} a quasi-subformula of ψ_{i_j} for all j ∈ {1, . . . , k}.
For both MPL and MPL• we have the following result, which takes the name of splitting in the deep inference literature (see, e.g., [11,49,50]). This lemma states that it is always possible, during proof search, to apply a rule removing a connective after having applied certain rules in the context. Note that in the linear logic literature the term splitting is usually used as an adjective for an instance of a ⊗ on which a rule can be applied splitting the context into two premises, that is, as a specific instance of this more general formulation.
Lemma 44 (Splitting). Let Γ, κ⟨φ_1, . . . , φ_n⟩ be a sequent and let X ∈ {MPL, MPL•}. If ⊢_X Γ, κ⟨φ_1, . . . , φ_n⟩, then there are sequents ∆_1, . . . , ∆_n, Γ′, proofs π_1, . . . , π_n, and an open derivation π_0 of ⊢ Γ, κ⟨φ_1, . . . , φ_n⟩ from ⊢ Γ′, κ⟨φ_1, . . . , φ_n⟩, such that ⊢ Γ′, κ⟨φ_1, . . . , φ_n⟩ is either the conclusion of a rule ρ ∈ {`_n, ⊗_n, d-κ} whose premises ⊢ ∆_1, φ_1, . . . , ⊢ ∆_n, φ_n are proved by π_1, . . . , π_n, or the conclusion of a wd_⊗ whose premises ⊢ ∆_1, φ_1 and ⊢ ∆_2, χ⟨φ_2, . . . , φ_n⟩ are proved by π_1 and π_2.
Proof. By case analysis of the last rule occurring in a proof π of Γ, κ φ 1 , . . . , φ n :
• the last rule cannot be a ax or • since the formula κ φ 1 , . . . , φ n occurs in the conclusion.
• if the last rule is a`n, then we conclude by inductive hypothesis on its premise.
• if the last rule is a mix, then we conclude by inductive hypothesis on the premise containing the formula κ φ 1 , . . . , φ n ;
• if the last rule is in {⊗ n , d-κ, wd ⊗ } then:
either this is the desired rule ρ;
or one of the (provable) premises of this rule is of the shape Γ ′ , κ φ 1 , . . . , φ n , allowing us to conclude by inductive hypothesis.
We conclude this section proving the admissibility of the rule cxt-⊗ in MPL • .
Lemma 45. The rule cxt-⊗ is admissible in MPL • .
Proof. We proceed by induction on ζ[ ]:
• If ζ[ ] = [ ], then cxt-⊗ is an instance of mix.
• If ζ[ ] = ζ ′ [ ]`ψ, then cxt-⊗ can be replaced by a mix followed by a`.
• If, w.l.o.g., ζ[ ] = κ⟨ζ′[ ], ψ_2, . . . , ψ_n⟩ for a κ ≠ `, then we can apply Lemma 44 and conclude since we have a derivation
IH ⊢ Γ′_1, ∆_1, ζ′[φ]   HP ⊢ ∆_2, ψ_2 · · · HP ⊢ ∆_n, ψ_n
ρ ⊢ Γ′_1, Γ′_2, κ⟨ζ′[φ], ψ_2, . . . , ψ_n⟩
π_0
⊢ Γ_1, Γ_2, κ⟨ζ′[φ], ψ_2, . . . , ψ_n⟩
Some Logical Equivalences in MPL•
In order to prove that two formulas φ and ψ interpreted by the same graph [[φ]] = [[ψ]] are logically equivalent (i.e., φ ψ), we here provide intermediate results allowing us to decompose this equivalence into smaller steps.
We first prove that connective symmetries are derivable in MPL, and therefore in MPL•.
Lemma 46. Let κ = κ_Q be a graphical connective. For every σ ∈ S(Q), the formulas κ⟨φ_1, . . . , φ_n⟩ and κ⟨φ_{σ(1)}, . . . , φ_{σ(n)}⟩ are logically equivalent in MPL; for every ρ ∈ S^⊥(Q), the formulas κ⟨φ_1, . . . , φ_n⟩ and κ^⊥⟨φ_{ρ(1)}, . . . , φ_{ρ(n)}⟩ are logically equivalent in MPL.
Proof. By Theorem 39, it suffices to prove that the following implications are derivable.
κ⟨φ_1, . . . , φ_n⟩ ⊸ κ⟨φ_{σ(1)}, . . . , φ_{σ(n)}⟩ and κ⟨φ_{σ(1)}, . . . , φ_{σ(n)}⟩ ⊸ κ⟨φ_1, . . . , φ_n⟩ for all σ ∈ S(Q), and
κ⟨φ_1, . . . , φ_n⟩ ⊸ κ^⊥⟨φ_{ρ(1)}, . . . , φ_{ρ(n)}⟩ and κ^⊥⟨φ_{ρ(1)}, . . . , φ_{ρ(n)}⟩ ⊸ κ⟨φ_1, . . . , φ_n⟩ for all ρ ∈ S^⊥(Q).
These are easily derivable using an instance of d-κ and AX-rules.
Remark 47. The rule sym-` is derivable directly because sequents are sets of occurrences of formulas, therefore the order of the occurrences of the formulas in a sequent is not relevant, and we can permute this order before applying the rule `. This is because the interpretation of the meta-connective comma we use to separate formulas in a sequent is the same as that of `. Similarly, the rule sym-⊗ is derivable because in our sequent system, as in standard sequent calculus, the order of the premises of the rules is not relevant. Said differently, the space between branches in a derivation is a commutative meta-connective which is internalized by the ⊗.
As a consequence of the analyticity of MPL and MPL•, we could consider the connectives multi-par (κ_{`_n}) and multi-tensor (κ_{⊗_n}) superfluous in our syntax for formulas, since they are synthetic connectives definable via the binary ` and ⊗. In particular, this allows us to restrict our reasoning to compact formulas only, since rules expressing the associativity of `_n and ⊗_n are derivable.
Lemma 48. The connectives ` and ⊗ are associative and any formula admits an equivalent compact formula.
Proof. We only prove the associativity result for`n, since the proof for ⊗ n is similar.
The result follows by Theorem 39 after proving that the following implications hold for any n, m ∈ N:
`_n⟨φ_1, . . . , φ_n⟩ ⊸ `_{n−m+1}⟨`_m⟨φ_1, . . . , φ_m⟩, φ_{m+1}, . . . , φ_n⟩ and `_{n−m+1}⟨`_m⟨φ_1, . . . , φ_m⟩, φ_{m+1}, . . . , φ_n⟩ ⊸ `_n⟨φ_1, . . . , φ_n⟩.
We can therefore immediately conclude that MPL is sound and complete with respect to graph isomorphism if we consider unit-free formulas.
Proposition 49. Let φ and ψ be unit-free formulas. If φ ≡ ψ, then φ ⊸ ψ and ψ ⊸ φ.
Proof. By induction on the formulas φ and ψ using Lemmas 46 and 48.
For a stronger result on general formulas, we need to show that whenever two formulas φ and ψ are interpreted (via [[·]]) by the same non-empty graph, both formulas are equivalent to a unit-free formula χ representing the modular decomposition of this graph via graphical connectives.
Lemma 50. The following rule unitor_κ is admissible in MPL•:
⊢ Γ, χ⟨φ_{σ(1)}, . . . , φ_{σ(n)}⟩
unitor_κ   [n > 1, χ a compact formula, [[κ⟨φ_1, . . . , φ_k, •, φ_{k+1}, . . . , φ_n⟩]] = [[χ⟨φ_{σ(1)}, . . . , φ_{σ(n)}⟩]], σ ∈ S(χ)]
⊢ Γ, κ⟨φ_1, . . . , φ_k, •, φ_{k+1}, . . . , φ_n⟩   (18)
Proof. It suffices to consider the derivation obtained by applying wd_⊗ to the premise ⊢ Γ, χ⟨φ_{σ(1)}, . . . , φ_{σ(n)}⟩ and the premise ⊢ • (derived by the rule •), with conclusion ⊢ Γ, κ⟨φ_1, . . . , φ_k, •, φ_{k+1}, . . . , φ_n⟩.
Theorem 51. Let φ and ψ be formulas. If [[φ]] = [[ψ]] ≠ ∅, then φ and ψ are equivalent in MPL•, that is, φ ψ is valid in MPL•.
Proof. Given any formula φ, we define, by induction on the number of units • occurring in φ, a unit-free compact formula φ′ such that φ φ′.
• if φ is a literal, then φ ′ = φ;
• if φ = κ⟨φ_1, . . . , φ_n⟩ and φ_i ≠ • for all i ∈ {1, . . . , n}, then φ′ = κ⟨φ′_1, . . . , φ′_n⟩.
Otherwise, w.l.o.g., we assume φ_1 = • and we let φ′ = χ⟨φ′_2, . . . , φ′_n⟩ for a compact formula χ such that [[κ⟨•, φ_2, . . . , φ_n⟩]] = [[χ⟨φ_2, . . . , φ_n⟩]], and we conclude by inductive hypothesis since we have the following derivations:
IH ⊢ φ ⊥ 2 , φ ′ 2 · · · IH ⊢ φ ⊥ n , φ ′ n d-χ ⊢ χ ⊥ φ ⊥ 2 , . . . , φ ⊥ n , χ φ ′ 2 , . . . , φ ′ n unitor κ ⊢ κ ⊥ •, φ ⊥ 2 , . . . , φ ⊥ n , χ φ ′ 2 , . . . , φ ′ n `⊢ κ ⊥ •, φ ⊥ 2 , . . . , φ ⊥ n `χ φ ′ 2 , . . . , φ ′ n and IH ⊢ φ ′⊥ 2 , φ 2 · · · IH ⊢ φ ′⊥ n , φ n d-χ ⊢ χ ⊥ •, φ ′⊥ 2 , . . . , φ ′⊥ n , χ •, φ 2 , . . . , φ n unitor κ ⊢ χ ⊥ φ ′⊥ 2 , . . . , φ ′⊥ n , κ •, φ 2 , . . . , φ n `⊢ χ ⊥ φ ′ 2 , . . . , φ ′⊥ n `κ •, φ 2 , . . . , φ n
Therefore we can construct unit-free compact formulas φ′ and ψ′ such that φ φ′ and ψ ψ′. Moreover, by definition of [[·]] and the rule unitor_κ, we have [[φ′]] = [[φ]] = [[ψ]] = [[ψ′]]. Because of the unicity of the modular decomposition via graphical connectives of the graph [[φ′]] = [[ψ′]] modulo symmetries of connectives, and of its correspondence with unit-free compact formulas, we must have φ′ ≡ ψ′. We conclude using the transitivity of and Proposition 49, by letting χ be the formula φ′.
The Graphical Logic GS Matches MPL•
In this section we prove that the logic on graphs defined by the deep inference proof system GS from [7,8] is the same logic identified by the graphs corresponding to formulas which are provable in MPL•. In this paper we define the deep inference system GS = {ai↓, s_`, s_⊗, p↓} using the rules in Figure 6. The definition of derivations in deep inference systems operating on graphs is provided in Appendix A.
Remark 52. At the syntactical level, the system GS operates on graphs by manipulating their spurious modular decompositions via graphical connectives. Therefore, for any derivation in GS we can assume to be given a spurious modular decomposition of each graph G occurring in a derivation, therefore a unique formula [[G]] −1 (defined as in Definition 28) to be given.
Remark 53. We here provide a slightly different formulation of GS with respect to [7] and [8]. In particular, we consider a p↓-rule with a stronger side condition, which is balanced by the presence of s_⊗ in the system. However, it can easily be shown that the systems are equivalent (see Appendix A.1).
We can easily prove that each sequent provable in MPL• is interpreted by [[·]] as a graph admitting a proof in GS.
Lemma 54. Let Γ be a sequent. If ⊢ MPL • Γ, then ⊢ GS [[Γ]].
Proof. Let π be a proof of Γ in MPL•; we define a derivation of [[Γ]] in GS by induction on π, translating each rule as shown below.
To prove the converse, we use the admissibility of cxt-` to prove, in a more concise way, that every time there is a rule r in GS with premise H and conclusion G, there are formulas φ and ψ such that [[φ]] = G and [[ψ]] = H, and such that ψ ⊸ φ.
• if r = ai↓, then φ = a ` a^⊥ and ψ = •, and we have the derivation
⊢ ∆ 1 π2 ⊢ ∆ 2 mix ⊢ ∆ 1 , ∆ 2 ∅ [[π1]].. ∆ 1 , ∆ 2 , ⊗ n φ 1 , . . . , φ n π1 ⊢ ∆ 1 , φ 1 π2 ⊢ ∆ n , χ φ 2 , . . . , φ n wd⊗ ⊢ ∆ 1 , ∆ 2 , κ P φ 1 , . . . , φ n• • ax ⊢ a, a ⊥ ⊢ a`a ⊥ mix ⊢ •, a`a ⊥
• if r = s`, then φ = µ i`κ µ 1 , . . . , µ i−1 , •`ν, µ i+1 , . . . µ n and ψ = κ µ 1 , . . . , µ i−1 , µ iν , µ i+1 , . . . µ n for some formulas µ 1 , . . . , µ n , ν such that µ i = M i for all i ∈ {1, . . . , n} and [[ν]] = N. We conclude by Lemma 38 since we have the following derivation
AX ⊢ ψ ⊥ , κ µ 1 , . . . , µ i−1 , µ i`ν , µ i+1 , . . . , µ n cxt-`⊢ ψ ⊥ , µ i , κ µ 1 , . . . , µ i−1 , •`ν, µ i+1 , . . . , µ n `⊢ ψ ⊥ , φ • if r = s ⊗ then φ = κ µ 1 , . . . , µ i−1 , µ i ⊗ ν, µ i+1 , .
. . µ n and ψ = µ i ⊗ κ µ 1 , . . . , µ i−1 , • ⊗ ν, µ i+1 , . . . µ n for some formulas µ 1 , . . . , µ n , ν such that µ i = M i for all i ∈ {1, . . . , n} and [[ν]] = N. We conclude by Lemma 38 since we have the following derivation
AX ⊢ κ ⊥ µ ⊥ 1 , . . . , µ ⊥ i−1 , µ ⊥ i`ν ⊥ , µ ⊥ i+1 , . . . µ ⊥ n , φ cxt-`⊢ µ ⊥ i , κ ⊥ µ ⊥ 1 , . . . , µ ⊥ i−1 , •`ν ⊥ , µ ⊥ i+1 , . . . µ ⊥ n , φ ⊢ ψ ⊥ , φ • if r = p↓ then φ = κ P ⊥ µ 1 , .
. . , µ n `κ P ν 1 , . . . , ν n and ψ ⊥ = (µ ⊥ 1 ⊗ ν ⊥ 1 )`· · ·`(µ ⊥ n ⊗ ν ⊥ n ) for some formulas µ 1 , . . . , µ n , ν 1 , . . . , ν n such that µ i = M i ∅ and [[ν i ]] = N i ∅ for all i ∈ {1, . . . , n}. We conclude since we have the following derivation We conclude since, w.l.o.g., there is a derivation of the following forms:
AX ⊢ µ 1 , µ ⊥ 1 AX ⊢ ν 1 , ν ⊥ 1 ⊗ ⊢ µ ⊥ 1 ⊗ ν ⊥ 1 , µ 1 , ν 1 · · · AX ⊢ µ n , µ ⊥ n AX ⊢ ν n , ν ⊥ n ⊗ ⊢ µ ⊥ n ⊗ ν ⊥ n , µ n , ν n d-κ ⊢ (µ ⊥ 1 ⊗ ν ⊥ 1 ), . . . , (µ ⊥ n ⊗ ν ⊥ n ), φ (µ ⊥ 1 ⊗ ν ⊥ 1 )`· · ·`(µ ⊥ n ⊗ ν ⊥ n ), φ If C[ ] ,IH ⊢ (ζ ′ [ψ ′ ]) ⊥ , ζ ′ [φ ′ ] AX ⊢ µ ⊥ 1 , µ 1 · · · AX ⊢ µ ⊥ n , µ n d-κ ⊢ κ P ⊥ (ζ ′ [ψ ′ ]) ⊥ , µ ⊥ 1 , . . . , µ ⊥ n , κ P ζ ′ [φ ′ ], µ 1 , . . . , µ n .
We are now able to prove the main result of this section, that is, to establish a correspondence between graphs provable in GS and graphs which are interpretations via [[·]] of formulas provable in MPL•.
Definition 56. We define the following graphical logics (i.e., sets of graphs):
Graphical Multiplicative Logic : GML = { [[φ]] | φ formula such that ⊢_MPL φ }
Graphical IsoMix Logic : GML• = { [[φ]] | φ formula such that ⊢_{MPL•} φ }   (19)
We say that G is provable in X ∈ {GML, GML•} (denoted ⊢_X G) if there is a formula φ such that [[φ]] = G and φ is provable in MPL (resp. MPL•).
Theorem 57. Let G be a graph. Then ⊢ GS G iff ⊢ GML G.
Proof. If ⊢_GML G, then there is a (compact unit-free) formula φ such that [[φ]] = G and ⊢_{MPL•} φ. We conclude by applying Lemma 54 to a given proof of φ in MPL•. To prove the converse, let D be a proof of G in GS. We define a proof π_D of a formula φ = [[G]]^{−1} (see Remark 52) by induction on the number n of rules in D.
• If n = 0, then G = ∅ and π D = • ⊢ • .
• If n = 1, then G = a`a ⊥ and π D = ⊢ a, a ⊥ ⊢ a`a ⊥ .
• If n > 1, then by inductive hypothesis we have a proof π_{D′} of a formula ψ such that [[ψ]] is the premise graph of the last rule r in D (which may be applied deep inside a context); we conclude using the implication ψ ⊸ φ provided by the case analysis above and the admissibility of cut.
RB-Proof Nets
In this section we present a way to encode proofs in MPL and MPL• by means of graphs with two kinds of edges. We then provide a calculus operating on these graphs, in the style of sequent calculus, which is sound and complete with respect to graphs encoding proofs. For this purpose, we extend the syntax of RB-proof nets introduced by Retoré in his PhD thesis for MLL-proof nets [71,75]. The main difference between the syntax for proof nets commonly used in the literature (that is, the one used in, e.g., [40,29,42] or any of its reformulations) and RB-proof nets is that in the former nodes are labeled by connectives and the edges (called wires) by formulas, while in the latter each connective is represented by a small graph keeping track of the relations between the inputs and the output of the gate (see Figure 8), while wires are represented by a different kind of edges. However, the idea behind the correctness criterion for MLL-proof nets can be found almost unchanged in the syntax of RB-proof nets. In fact, the correctness criterion for MLL-proof nets checks the absence of elementary cycles in any possible graph obtained by pruning one of the two input wires of each `-gate. In RB-proof nets this is captured by simply having no edge connecting the two inputs of a `-gate, preventing the existence of an alternating elementary path passing from one input to the other. This elegant change in the syntax allows us to still have a criterion based on checking the absence of cycles in a graph, while avoiding the need of checking a number of graphs exponential in the number of `-gates (one for each possible combination of prunings of the inputs of the `-gates).
[Figure 8: a standard MLL-proof net and the corresponding RB-proof net for the sequent ⊢ b^⊥ ` a^⊥, a ⊗ b.]
The idea for designing RB-proof nets for multiplicative prime logic (with and without mix) comes from the remark that in RB-proof nets the graph induced by the inputs of the small graph representing a `-gate (resp. a ⊗-gate) is isomorphic to the prime graph ` (respectively ⊗). We define G-gates for any graph G by mimicking this construction: consider the vertices of a graph as inputs of a gate to which an output and an outgoing wire are attached (see Figure 9).
Figure 9: A `-gate, a ⊗-gate, a P_4-gate and an intuitive depiction of a G-gate in RB-structures.
Formulas and Graphs as RB-graphs
We start by recalling the definition of RB-graphs, which are graphs with two kinds of (undirected) edges; we use them to represent RB-structures, that is, both the decomposition trees of graphs and the encodings of proofs.
Definition 58. An RB-graph G = ⟨V_G, ℓ_G, ⌢_G, ⊥_G⟩ is given by a set of vertices V_G, a (partial) labeling function ℓ_G (we denote by ∅ the empty function), and two symmetric and non-reflexive edge relations ⌢_G and ⊥_G over V_G such that ⊥_G is a perfect matching in the graph ⟨V_G, ℓ_G, ⌢_G ∪ ⊥_G⟩ (that is, every vertex in V_G belongs to exactly one edge in ⊥_G), and such that if v ⊥_G w, then ℓ(v) = (ℓ(w))^⊥. The edges in ⌢_G are called R-edges (for red or regular) and the edges in ⊥_G are called B-edges (for blue or bold). We denote by ∅ the empty RB-graph ⟨∅, ∅, ∅⟩ and we extend the notion of induced subgraph to RB-graphs.
Notation 59. When drawing an RB-graph we draw a red/regular edge between v and w whenever v ⌢ w, and a blue/bold edge between v and w whenever v ⊥ w.
In order to represent the tree structure of the formula tree of a formula, or of the modular decomposition of a graph, we define gates encoding graphical connectives.
Definition 60. Let G⟨v_1, . . . , v_n⟩ be a connective. A G-gate (or simply gate) is an RB-graph of the form G = ⟨V_G, ∅, ⌢_G, ⊥_G⟩ with a vertex i^i_G (its i-th input) for each vertex v_i ∈ V_G, plus a vertex r_G (its root) and a vertex o_G (its output), having an R-edge between the i-th and the j-th inputs iff v_i ⌢_G v_j, an R-edge between each input and the output, and a B-edge between the output and the root. Formally,
G = ⟨ { i^i_G | v_i ∈ V_G } ∪ {o_G, r_G} , { i^i_G i^j_G | v_i ⌢_G v_j } ∪ { i^i_G o_G | v_i ∈ V_G } , {o_G r_G} ⟩
We denote by In(G) the set of inputs of G; we call an R-edge connecting two inputs of a gate a connector edge, and the B-edge connecting the output to the root of a gate a wire. We say that G has the type of G (denoted G : G), or that G is a G-gate, whenever G|_{In(G)} ∼ G.
We define the operation of gluing two graphs by identifying some of their vertices.
Definition 61. Let G and H be RB-graphs with disjoint sets of vertices. An interface X is a set of pairs (x, y) ∈ V_G × V_H such that if ℓ(x) and ℓ(y) are both defined then ℓ(x) = ℓ(y), and such that x ≠ x′ and y ≠ y′ for any distinct (x, y), (x′, y′) ∈ X.
The gluing of G and H via an interface X is the RB-graph G ⊲⊳_X H obtained by identifying the vertices occurring in a same pair in X. Formally, G ⊲⊳_X H has vertices V_G ∪ (V_H \ {y | (x, y) ∈ X}), the labeling function defined by the union of the two labeling functions, and an R-edge (resp. B-edge) uv whenever either u and v are both in V_G or both in V_H and u ⌢ v (resp. u ⊥ v), or u ∈ V_G, v ∈ V_H and there is (v′, v) ∈ X such that u ⌢_G v′ (resp. u ⊥_G v′). The disjoint union of G and H is defined as G ⊎ H ≔ G ⊲⊳_∅ H.^a
We use the operation of gluing to construct tree-like RB-graphs reproducing the abstract syntax tree of a graph (i.e., its modular decomposition via graphical connectives) or of a formula (i.e., its formula tree).
Definition 62. Let G be a graph described by a (possibly spurious) modular decomposition via graphical connectives of a given base Q. The RB-tree of G is the RB-graph {{G}} define as follow by induction on the modular decomposition of G using graphical connectives of a base:
• if G = •, then {{G}} = ∅;
• if G = x is a single-vertex graph with label x, then {{G}} is the RB-graph made of a single vertex v labeled by x; we say that v is at the same time a leaf and a root of {{G}};
• if G = Q⟨G_1, . . . , G_n⟩, then {{G}} = G_Q ⊲⊳_X (⊎_{1 ≤ i ≤ n} {{G_i}}) where X = { (i^i_Q, r_i) | i ∈ {1, . . . , n} and r_i a root of {{G_i}} }.
Notation 63. When referring to gates we may intend them as the induced subgraphs in an RB-tree, as we do for modules. Note that gluing identifies roots with inputs of gates, but we may still denote such a vertex using either of the two identified vertex names in order to simplify certain definitions.
Observation. We naturally have correspondences between leaves of {{Γ}} (leaves in {{G}}) and occurrences of literals in Γ (vertices in V G ), between gates of {{Γ}} and occurrences of quasi-prime graphs in the modular decomposition of [[Γ]] by means of quasi-prime graphs (respectively occurrence of connectives in Γ), and between roots of {{Γ}} and occurrences of formulas in Γ.
In the following section we need to consider RB-forests obtained by pruning certain leaves in a given RB-forest. For this purpose we introduce the following notation.
Notation 64. Let G be an RB-tree with leaves v_1, . . . , v_n. If W is a subset of the leaves of G, then the RB-forest G♣W is defined as the RB-forest of the graph G⟨x_1, . . . , x_n⟩ where x_i = ∅ if v_i ∈ W and x_i = v_i otherwise.
^a In the literature on graphs and hypergraphs with interfaces, interfaces are defined as pairs of bijections from a set of n elements to the sets of vertices of two distinct graphs. This definition is equivalent to our definition of interface, just as our definition of gluing is equivalent to that of graph composition by pushout.
Example 65. Let G = P_4⟨a ` b, c ⊗ d, e ⊗ f, ⊗_3⟨g, h, i⟩⟩ be the graph from Example 23. Its RB-forest is shown below on the left.
[Two RB-forests: on the left, the RB-forest {{G}}; on the right, the RB-forest {{G}}♣{a, b, c, e, f, i}.]
RB-Graphs Representing Proofs
Intuitively, MLL-proof nets of a formula φ are encoded as the formula tree of φ decorated with a function pairing the occurrences of literals occurring in φ and satisfying certain topological properties. In RB-graphs, such a pairing function is encoded by means of B-edges.
Definition 66. Let Γ be a P-sequent. An axiom linking Link for Γ is a (total) bijection between occurrences of literals in Γ such that if x is an occurrence of a literal a in Γ, then Link(x) is an occurrence of the literal a^⊥. We denote by ⊓_Link the set of two-vertex sets {v_x, v_{Link(x)}} containing the leaves of {{Γ}} paired by Link.
An RB-structure of Γ is an RB-graph G of the form
G = ⟨ V_{{{Γ}}}, ℓ_Γ, ⌢_{{{Γ}}}, ⊥_{{{Γ}}} ∪ ⊓_Link ⟩ ≕ {{Γ}} ⊔ Link
Gates, leaves, roots and wires of G are those of {{Γ}}. The (axiom) links of G are the B-edges in ⊓_Link. Let X ∈ {MLL, MLL•, MPL, MPL•} and let π be a derivation of Γ in X. The axiom linking of π (denoted Link_π) is defined by the set of pairs of dual literals matched by the ax-rules in π, that is, Link(x) is the unique occurrence of a literal such that there is an ax-rule with conclusion ⊢ x, Link(x) in π. An X-net is an RB-structure of the form {{π}} ≔ {{Γ}} ⊔ Link_π for a proof π of Γ in X.
We define two inference systems for RB-structures in order to characterize families of RB-graph.
Definition 67. We define the following inference systems for RB-graphs using the rules in Figure 10:
RB_P : { ax^RB, `^RB, ⊗^RB, d-κ^RB_P | P ∈ P \ {`, ⊗} }
RB•_Q : { •^RB, ax^RB, `^RB, mix^RB, ⊗^RB, d-κ^RB_Q, s^RB_⊗ | Q ∈ Q \ {`, ⊗} }   (20)
We say that a RB-graph G is provable in X ∈ {RB P , RB • Q } (denoted ⊢ X G), or simply is in X, if there is proof (i.e. a derivation with no open premises) of G in X.
Remark 68. The rules `^RB, d-κ^RB_Q and s^RB_⊗ glue roots of RB-structures to a gate; thus they require the non-emptiness of each of their premises.
We establish a correspondence between proofs in MPL • of a sequent Γ and proofs of RB-structures of the form {{Γ}} ⊔ Link in RB • Q .
[Figure 10: the rules •^RB, ax^RB, mix^RB, `^RB, ⊗^RB, d-κ^RB_Q, s^RB_⊗ and s-κ^RB_G for RB-structures, together with the definitions of the interfaces X_`, X_s, X_d, X_+ and X_−.]
AE-Correctness for RB-structures
We recall here the topological notions required to formulate the correctness criterion for RB-structures encoding proofs in MLL and MLL• in terms of connectedness and acyclicity of RB-structures with respect to alternating elementary paths.
Definition 70. Let G = ⟨V, ℓ, ⌢, ⊥⟩ be an RB-graph. An alternating path is a path p = v_0 · · · v_n in the graph ⟨V, ℓ, ⌢ ∪ ⊥⟩ such that v_i ⌢ v_{i+1} iff v_{i+1} ⊥ v_{i+2}, and v_i ⊥ v_{i+1} iff v_{i+1} ⌢ v_{i+2}, for all i ∈ {0, . . . , n − 2}.
We say that such a path connects v_0 with v_n, and that it covers the vertices v_i for all i ∈ {1, . . . , n} and the edges v_i v_{i+1} for all i ∈ {0, . . . , n − 1}. An ae-path is an alternating path which is also elementary, that is, such that each vertex occurs at most once in it. If X, Y ∈ {R, B}, an XY-path is an ae-path v_0 · · · v_n such that v_0 v_1 is an X-edge and v_{n−1} v_n is a Y-edge. We say that two vertices v and w of G are ae-connected if there is an ae-path connecting them; an RB-graph is ae-connected if any two of its vertices are ae-connected.
An ae-cycle is an ae-path c = v_0 · · · v_{2n} such that v_0 = v_{2n}. Note that we consider ae-cycles modulo cyclic permutations of the indices, that is, we identify the ae-cycle v_0 · · · v_{2n−1} · v_0 with the ae-cycle v_i · · · v_{2n−1} · v_0 · · · v_i for any i ∈ {0, . . . , 2n − 1}. A chord of c is an R-edge v_h v_k with h, k ∈ {0, . . . , 2n} and k > h + 1. It is a shortcut if there is a BB-path from v_k to v_h which is a subsequence of c and such that v_h · · · v_k · v_h is an ae-cycle. The set of ae-cycles of G is denoted AE(G).
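The acyclicity condition AE(G) = ∅ used below can be tested by brute force on small RB-graphs, as in the following sketch (illustrative names, exponential search, not the algorithm used in the paper): it enumerates alternating elementary cycles by a depth-first search that also checks the alternation when the cycle is closed.

```python
# Minimal sketch (exponential, for small RB-graphs only): detecting an
# alternating elementary cycle.  R and B are sets of frozenset edges of the
# two colours; consecutive edges of a cycle must have different colours,
# including the pair formed by the closing edge and the first edge.
# (Degenerate 2-edge cycles are ignored.)

def has_ae_cycle(vertices, R, B):
    def extend(path, next_colour, start_colour):
        v, edges = path[-1], (R if next_colour == "R" else B)
        for e in edges:
            if v not in e:
                continue
            w = next(u for u in e if u != v)
            if w == path[0] and len(path) > 2 and next_colour != start_colour:
                return True        # closed an alternating elementary cycle
            if w not in path:
                if extend(path + [w], "B" if next_colour == "R" else "R", start_colour):
                    return True
        return False

    return any(extend([v], c, c) for v in vertices for c in ("R", "B"))

# Example: a 4-cycle with alternating colours has an ae-cycle; a short
# alternating path does not.
R1, B1 = {frozenset((1, 2)), frozenset((3, 4))}, {frozenset((2, 3)), frozenset((4, 1))}
R2, B2 = {frozenset((2, 3))}, {frozenset((1, 2)), frozenset((3, 4))}
print(has_ae_cycle({1, 2, 3, 4}, R1, B1))  # True
print(has_ae_cycle({1, 2, 3, 4}, R2, B2))  # False
```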
Notation 71. When drawing a RB-graph we draw v w if there is an ae-path between v and w. Whenever we want to point at a specific path or an induced subgraph, we highlight the vertices and the edges it covers as follow v w.
Remark 72. An ae-cycle of a RB-structure covering a connector edge and the output of a gate has a shortcut. This can be observed in the example below on the left where the non-connector edge between the rightmost input and the output of the gate is a shortcut (dotted lines represent possible R-edges).
i i G i j G i k G o G or i i G i j G i h G i k G o G (21)
Note that also the ae-cycle above on the right has a shortcut i j
G i h G .
We then formalize in this framework two intuitive notions we use in the next sections. The first is simply a formalization of the idea that we draw RB-forests with roots at the bottom.
Definition 73. Let G = V, ℓ, ⌢, ⊥ be a RB-forest or a RB-structure. We say that a vertex v is above a vertex w (w is below v) if there is a ae-path from w to a leaf of T passing through no connector edges covering v. Similarly, a gate G is above (below) a gate G ′ if its output is below the output of G ′ .
We then define the notion of g-path, allowing us to define a notion of connectedness for RB-structures similar to the one used in standard MLL-proof nets, that is, where paths are sequences of wires connected by gates.
Definition 74. We write v▽w if v and w are vertices of a same gate, that is, if there is a gate G ∈ Gates(G) such that v, w ∈ In(G) ∪ {o G }.
A g-path from v to w in G is an alternating elementary path p = v 0 · · · v n in the RB-graph V, ℓ, ▽, ⊥ with n > 2 such that there are at most two i, j ∈ {1, . . . , n} with i ≠ j such that v i ▽v j . The notions of g-connectedness and of g-connected component are defined in the standard way (see Definition 1), using g-paths instead of paths (or ae-paths).
Lemma 75. Let G be a RB-structure. Then G = G 1 ⊎ G 2 iff there are no g-paths connecting vertices in G 1 with vertices in G 2 .

Proof. The non-trivial implication follows by definition: the absence of such g-paths implies the absence of B-edges or gates containing at the same time vertices in G 1 and in G 2 .

We recall Retoré's characterization of those RB-structures which are encodings of proofs in MLL • via {{·}}.

Theorem 76 ([75]). Let G = {{Γ}} ⊔ Link be a RB-structure of a sequent Γ of MLL-formulas. Then
1. G is a MLL-net iff G is ae-connected and AE(G) = ∅;
2. G is a MLL • -net iff AE(G) = ∅.
The proof of this theorem can be reconstructed from the one using the Danos-Regnier switching criterion [30] for standard MLL-proof nets. For the reader familiar with the terminology of Danos-Regnier switchings for MLL proof nets [30], the idea is that ae-paths in a RB-structure of a MLL-formula are exactly the paths which may be observed in a test of the proof net. In fact, ae-paths can pass at most once through each `- or ⊗-gate; thus an ae-path may only pass through one input of a `-gate to its output, which can be interpreted as if a switching selecting the input occurring in the path had been applied. Details can be found in [72,36,66].
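Read as an executable test on the small encodings introduced above, Theorem 76 amounts to checking the absence of ae-cycles and, in the case without mix, ae-connectedness. The sketch below reuses ae_cycles from the previous sketch and is again only an illustration under the same assumed representation.

```python
def ae_path_exists(vertices, r_edges, b_edges, source, target):
    """Is there an alternating elementary path from source to target?"""
    colour = {e: 'R' for e in r_edges}
    colour.update({e: 'B' for e in b_edges})
    adj = {v: set() for v in vertices}
    for e in colour:
        v, w = tuple(e)
        adj[v].add(w)
        adj[w].add(v)

    def dfs(path, last_colour):
        v = path[-1]
        if v == target:
            return True
        for w in adj[v]:
            c = colour[frozenset((v, w))]
            if c != last_colour and w not in path:
                if dfs(path + [w], c):
                    return True
        return False

    return source == target or dfs([source], None)

def retore_check(vertices, r_edges, b_edges, with_mix=False):
    """Theorem 76 read as a test: no ae-cycles, and (without mix)
    any two vertices are ae-connected."""
    if ae_cycles(vertices, r_edges, b_edges):
        return False
    if with_mix:
        return True
    return all(ae_path_exists(vertices, r_edges, b_edges, v, w)
               for v in vertices for w in vertices)
```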
Remark 77. In a RB-structure G such that any gate is a`-or a ⊗-gate, any ae-cycle is chordless because gates have only two inputs and one output.
However, it is easy to find graphs which are provable in GML but do not satisfy this criterion: any graph of the shape P ⊸ P for a prime graph P ∉ {`, ⊗} is provable in GML (see Theorem 35), but the RB-structure representing such a proof has ae-cycles.
Example 78. Consider the RB-proof net below, corresponding to the (unique) proof of P 4 a, b, c, d ⊸ P 4 a, b, c, d in GML.

(22) [RB-proof net with leaves c ⊥ , a ⊥ , d ⊥ , b ⊥ , a, b, c, d, a P 4 ⊥ -gate with output o P ⊥ 4 and root r P ⊥ 4 , and a P 4 -gate with output o P 4 and root r P 4 ]

It exhibits an ae-cycle a · b · b ⊥ · d ⊥ · d · c · c ⊥ · a ⊥ having two chords a ⊥ d ⊥ and bc. a More generally, we remark that during the construction of MPL • -nets using the rules in RB • Q , ae-cycles can be introduced only by an instance of d-κ RB Q , and those ae-cycles always cover the P 4 's over the inputs of the gates in its conclusion which are not in its premises.
7 Generalizing the Correctness Criterion to MPL and MPL •
In this section we identify a topological characterization of those RB-structures which are MPL • -nets by means of a correctness criterion, and we define a sequentialization procedure allowing us to reconstruct a proof in MPL • . For this purpose, we isolate a family of ae-cycles allowing us to retrieve all the information witnessing the correct application of d-κ-rules. This result is possible thanks to results on the primeval decomposition of graphs [56] allowing us to further characterize prime graphs by specific topological properties we recall in the next subsection. We then provide a method inspired by the sequential edges introduced in C-nets [33] in order to recover partial information about the possible order in which the connectives (or more precisely, the rules introducing them) can be sequentialized. The correctness criterion is obtained by combining this order with a refinement of Retoré's criterion (via the absence of specific ae-cycles, as theorized in [67] for coherent interaction graphs).
7.1 Connectedness and P-Connectedness in Graphs
The notion of modular decomposition has been refined in [15] by underlining the importance of the induced subgraphs isomorphic to P 4 . Due to their importance, we introduce the following convention.
Notation 79. We say that a quadruple a, b, c, d of four vertices of a graph G is a P 4 of G if G| {a,b,c,d} = P 4 a, b, c, d . A P 4 of a RB-graph is a P 4 containing only R-edges.
Definition 80. In a graph P 4 a, b, c, d we call a and d its end-points, b and c its mid-points, the edge bc is its mid-edge, and the edges ab and cd are its end-edges.
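Since P 4 's do most of the work in the rest of this section, it may help to fix a small executable check for them. The helpers below are a hypothetical illustration, assuming a graph given as a dictionary mapping each vertex to its adjacency set; for a RB-graph one would pass the adjacency restricted to R-edges, as required by Notation 79.

```python
from itertools import permutations

def induces_P4(adj, a, b, c, d):
    """True when the four (distinct) vertices induce exactly the path
    a-b-c-d, i.e. the graph P4 a,b,c,d, in the graph `adj`."""
    quad = (a, b, c, d)
    if len(set(quad)) != 4:
        return False
    required = {frozenset((a, b)), frozenset((b, c)), frozenset((c, d))}
    present = {frozenset((x, y))
               for i, x in enumerate(quad) for y in quad[i + 1:]
               if y in adj[x]}
    return present == required

def all_P4s(adj):
    """All ordered quadruples inducing a P4 (each P4 appears twice,
    once per traversal direction)."""
    return [q for q in permutations(adj, 4) if induces_P4(adj, *q)]
```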
a We here put emphasis on the presence of chords because the absence of chordless ae-cycles allows us to provide a correctness criterion for the different encoding of MLL-nets via the RB-structures discussed in Section 9.
We recall that a graph G is connected if there is a path connecting any pair of vertices. This definition is equivalent to requiring that for any partition of V G into two disjoint non-empty sets V 1 and V 2 there is a crossing P 2 , that is, an induced subgraph isomorphic to P 2 with vertices in both V 1 and V 2 . In this paper we are interested in the generalization of this alternative definition using P 4 instead of P 2 .
Definition 81 ([56]). A graph G is p-connected if for any partition of V G into two disjoint non-empty sets V 1 and V 2 there is a crossing P 4 , that is, there is a P 4 of G with vertices in both V 1 and V 2 . A p-component of G is a maximal p-connected subset of V G . A p-component V ′ of G is separable if there is a partition of V ′ into two disjoint subsets V 1 and V 2 such that every P 4 in G has mid-points in V 1 and end-points in V 2 . Such a partition is denoted V 1 | V 2 and is called a separation of V ′ .
Notation 82. As for modules, we may identify a p-component with its induced subgraph. Moreover, we may identify a separable p-component with its separation.
Proposition 83 ([56]). Let G be a graph. Then G is p-connected iff any two vertices v, w ∈ V G admit a p-chain from v to w, that is, a sequence v = v 1 , . . . , v n = w such that v i , v i+1 , v i+2 , v i+3 is a P 4 of G for all i ∈ {1, . . . , n − 3}.
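A possible way to test Definition 81 on small graphs is to merge the vertex sets of all induced P 4 's with a union-find and check that a single class covering all vertices remains; assuming the crossing-P 4 formulation and the chain characterization of Proposition 83, this should be equivalent to p-connectedness. The sketch below reuses all_P4s from the previous sketch.

```python
def p_connected(adj):
    """Check p-connectedness (Definition 81) by gluing together the vertex
    sets of all induced P4's and testing that one class remains."""
    vertices = list(adj)
    if len(vertices) <= 1:
        return True
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v

    for quad in all_P4s(adj):               # from the previous sketch
        a = quad[0]
        for v in quad[1:]:
            parent[find(v)] = find(a)
    return len({find(v) for v in vertices}) == 1
```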
The following result is a consequence of a more general result known as Structure Theorem [56] and results on separable p-connected graphs (for a survey on the topic see [15]).
Theorem 84. Let P be a prime graph which is not a clique or a stable set. If P is not p-connected, then P has a unique separable p-component V ′ = K | S such that V P = V ′ ⊎ {w P } and such that
• if v ∈ K, then v is a mid-point of a P 4 in P and v ⌢ w P ;
• if v ∈ S, then v is an end-point of a P 4 in P and v ̸⌢ w P .
The vertex w P is called the weak vertex of P, and the set of vertices K and S are called the strong component and the stable component of P respectively.
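Theorem 84, together with the observation (used in the proof of Lemma 87 below) that P admits no P 4 containing w P , suggests a simple way to locate the weak vertex of a non p-connected prime graph: it should be the only vertex lying in no induced P 4 . The helper below is only a sketch built on all_P4s above.

```python
def vertices_in_no_P4(adj):
    """Vertices not covered by any induced P4.  For a prime graph that is
    not p-connected this set is expected to contain exactly the weak
    vertex; for the other prime graphs different from ` and ⊗ it is
    expected to be empty."""
    covered = set()
    for quad in all_P4s(adj):
        covered.update(quad)
    return set(adj) - covered
```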
We conclude this section by providing some additional lemmas required for the proofs in the rest of this section.
Lemma 85. Let P be a non p-connected prime graph with weak vertex w P and let G be a graph obtained from P by removing some edges (at least one) incident to w P . If G is connected, then G is p-connected.
Proof. We first observe that if vw are p-connected in P, then they are in G. Moreover, if G is connected, then there is a v ∈ S connected to w P , thus there is a P 4 of the form w, v, u, t in P. We conclude by Proposition 83 since for any u ∈ S either {w P , v, u} induces a P 3 in G, or there is a v u ∈ K \ {v} such that w P , v, v u , u is a P 4 of G.
Remark 86. If a, b, c, d is a P 4 of a graph G, then c ⊥ , a ⊥ , d ⊥ , b ⊥ is a P 4 of G ⊥ .

Lemma 87. Let P = P v 1 , . . . , v n and P ′ = P ′ v ⊥ 1 , . . . , v ⊥ n be prime graphs such that v i , v j , v h , v k is a P 4 of P iff v ⊥ j , v ⊥ k , v ⊥ i , v ⊥ h is a P 4 of P ′ . Then P ′ = P ⊥ .
Proof. If P is p-connected, then for any u, v ∈ V P such that u P ⌢v there is a P 4 in P of the form, w.l.o.g., u, v, w, t or w, u, v, t . Then u ⊥ P ′ ⌢v ⊥ since, by hypothesis, there is a P 4 in P ′ of the form w ⊥ , u ⊥ , t ⊥ , v ⊥ or w, u, v, t respectively.
Otherwise, by Theorem 84, we know that P is not p-connected and we repeat the same argument above for the vertices of its p-component. Moreover, we know that the weak vertex w P of P is connected to each vertex in the strong component of P. If the same does not hold for the vertex w ⊥ P in P ⊥ , then by Lemma 85 P ⊥ admits a P 4 containing w P . This is impossible (see Remark 86) since P admits no P 4 containing w P .
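Remark 86 can be sanity-checked mechanically on a single P 4 , reusing induces_P4 and the assumed adjacency-dictionary representation from the earlier sketches.

```python
def complement(adj):
    """Complement of a simple graph given as an adjacency-set dict."""
    return {v: {w for w in adj if w != v and w not in adj[v]} for v in adj}

# If a,b,c,d induce a P4 in G, then c,a,d,b induce a P4 in the complement
# (Remark 86), illustrated here on a single P4.
p4 = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
assert induces_P4(p4, 'a', 'b', 'c', 'd')
assert induces_P4(complement(p4), 'c', 'a', 'd', 'b')
```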
7.2 A Correctness Criterion for MPL
From the results on p-connectedness of prime graphs in the previous subsection, we deduce that any prime graph different from`and ⊗ is "tiled" by P 4 's except for at most one vertex (the weak vertex of a non p-connected prime graph). This allows us to isolate a family of ae-cycles we can use to check whether two gates can be sequentialized by a same d-κ RB Q -rule (i.e. if they are gates with dual type). Moreover, we prove that from these ae-cycles we can also extract information about whether two gates of dual type can eventually be sequentialized at the same time.
Definition 88. Let G be a RB-structure, G ∈ Gates(G), and c ∈ AE(G). The RB-subgraph induced by
c in G (in G) is defined as the RB-graph G| c (G| c ) induced by the vertices of G (of G) covered by c. A vertex v of G is p-covered if there is a c ∈ AE m (G) such that v belongs to a P 4 in G| c .
The ae-cycle c is minimal if it contains no shortcuts and if for any G ∈ Gates(G) the graph G| c is isomorphic to ∅ or ⊗ or P 4 . The set of minimal ae-cycles of G is denoted AE m (G).
Remark 89. After Remark 72, a minimal ae-cycle covering edges of a gate G induces one of the following configurations.

(23) [three gate configurations: one covering the input i i G and the output o G ; one covering the inputs i i G , i j G and o G ; one covering the inputs i i G , i j G , i h G , i k G and o G ]
In order to ensure that gates whose type is not ` or ⊗ can eventually be sequentialized using a d-κ-rule, we need a criterion ensuring the possibility of pairing those gates in such a way that paired gates not only have dual types, but their inputs are also connected in a proper way with respect to this duality.
Definition 90. Let G be a RB-structure. An entangling relation is a bijection ♥ over the set of inputs of gates of G whose type is not ` or ⊗ such that the following hold:
1. ♥ is an involution, that is, ♥(♥(v)) = v; 2. if v ∈ G 1 and ♥(v) ∈ G 2 , then G 1 ≠ G 2 ;
3. for any P 4 over the inputs of G 1 , there is an entangling c ∈ AE m (G) covering it, that is, a minimal ae-cycle c such that G| c is of the following form
(24) [a minimal ae-cycle alternating through the vertices w 1 , v 1 , v 2 , w 2 , w 3 , v 3 , v 4 , w 4 ] where v 1 , v 2 , v 3 , v 4 is a P 4 in G 1 , w 3 , w 1 , w 4 , w 2 is a P 4 in G 2 , and ♥(v i ) = w i for all i ∈ {1, 2, 3, 4}.
We say that two gates G 1 and G 2 are entangled (denoted G 1 ♥G 2 ) iff all their inputs are. We denote by AE ♥ (G) the set of minimal entangling cycles in AE m (G). We say that ♥ is simply entangling iff AE m (G) = AE ♥ (G) and for all c ∈ AE ♥ (G) the graph G| c contains exactly two P 4 's.
Remark 91. If G 1 and G 2 are two entangled gates of a RB-structure G, then, as a consequence of Lemma 87, their types are dual. Said differently, for each v, v ′ ∈ G 1 , if v⌢v ′ (resp. v ̸⌢ v ′ ) in G| In(G 1 ) , then ♥(v) ̸⌢ ♥(v ′ ) (resp. ♥(v) ⌢ ♥(v ′ )) in G| In(G 2 ) .
Example 92. Consider the RB-structures in Figure 11. The P 4 -gate in the RB-structure on the left is not covered, and the unique ae-cycle in AE m (G) is not in AE ♥ (G). In the RB-structure on the right, the P 4 's containing the vertex w ⊥ in the P 5 -gate are not p-covered.

[Figure 11: Examples of RB-structures whose gates cannot be entangled; the highlighted P 4 's are not p-covered. The left structure has a P 4 -gate and two ⊗-gates over the leaves a, b, c, d, c ⊥ , a ⊥ , d ⊥ , b ⊥ ; the right structure has a P 5 -gate and a Bull-gate over the leaves w ⊥ , c ⊥ , a ⊥ , d ⊥ , b ⊥ , w, a, b, c, d.]

[Figure 12: An example of a RB-structure where ≺ ♥ is not well-founded: G 1 ≺ ♥ G 4 and G 4 ≺ ♥ G 1 . Here G 1 is the gate containing a, G 2 is the gate containing a ⊥ , G 3 is the gate containing g, and G 4 is the gate containing g ⊥ ; G 1 ≺ G 3 and G 4 ≺ G 2 ; G 1 ≺ ♥ G 4 because G 4 ♥G 3 and G 1 ≺ G 3 ; G 4 ≺ ♥ G 1 because G 1 ♥G 2 and G 4 ≺ G 2 .]
However, this condition is not enough to guarantee sequentializability (see Figure 13). In fact, since the rule d-κ RB Q sequentializes two gates at a time, we need to check whether a pair of entangled gates can eventually both occur as root gates during the sequentialization procedure. This condition is equivalent to checking whether we can formulate a version of the splitting lemma (Lemma 44) for the system RB P .
For this purpose, we enrich the partial order over gates given by the below relation between vertices in a RB-structure with the minimal set of constraints on the order in which cycles could be removed during proof search. This order provides the same information of a minimal set of sequential edges in a C-nets (see [33]), but without modifying the structure of our RB-graphs to accommodate additional edges. In fact, the information of this order can be solely defined in function of the ae-paths in the RB-structure.
Definition 93. Let G be a RB-structure, ♥ an entangling relation for G, and G 1 , G 2 ∈ Gates(G). We say that G 1 precedes G 2 (denoted G 1 ≺ ♥ G 2 ) whenever G 1 is below G 2 or below a vertex of an ae-cycle in AE m (G) covering a P 4 of G 2 .
The transitive closure of the relation ≺ ♥ is a (strict) order over Gates(G) (see Figure 12).
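Since a RB-structure is finite, well-foundedness of (the transitive closure of) ≺ ♥ amounts to the absence of directed cycles in the precedence relation, as in the counterexample of Figure 12. The sketch below checks this with a topological sort; the representation of gates and of the relation is an assumption made for illustration, not something taken from the paper.

```python
def precedence_well_founded(gates, precedes):
    """Check that a finite precedence relation has a well-founded (acyclic)
    transitive closure, via Kahn's topological sort.

    gates:    iterable of gate identifiers
    precedes: set of pairs (g1, g2) standing for g1 ≺♥ g2"""
    gates = list(gates)
    succ = {g: set() for g in gates}
    indegree = {g: 0 for g in gates}
    for g1, g2 in precedes:
        if g2 not in succ[g1]:
            succ[g1].add(g2)
            indegree[g2] += 1
    frontier = [g for g in gates if indegree[g] == 0]
    removed = 0
    while frontier:
        g = frontier.pop()
        removed += 1
        for h in succ[g]:
            indegree[h] -= 1
            if indegree[h] == 0:
                frontier.append(h)
    return removed == len(gates)

# The situation of Figure 12: G1 ≺♥ G4 and G4 ≺♥ G1 form a cycle.
assert not precedence_well_founded(
    ['G1', 'G2', 'G3', 'G4'], {('G1', 'G4'), ('G4', 'G1')})
```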
Using ≺ ♥ we are able to characterize "statically", i.e., without attempting a sequentialization, those pairs of entangled gates which can eventually be removed so as to form the premises of a sound application of a d-κ RB Q , that is, in such a way that if we remove both entangled gates we split the RB-structure into a set of disjoint RB-structures.
Definition 94. A RB-structure G is MPL-correct iff G is ae-connected, there is a simply entangling relation ♥ over Gates(G) such that ≺ ♥ is well-founded.
Remark 95. The correctness criterion for MPL subsumes Retoré's criterion for MLL since ♥ = ∅ and the RB-structure of a sequent of MLL-formulas contains no P 4 's.
Example 96. The RB-structure in the left-hand side of Figure 13 contains a unique ae-cycle which should entangle a P 4 and a Bull. This is impossible since |V P ⊥ 4 | = 4 ≠ 5 = |V Bull |. Similarly, in the RB-structure on the right-hand side of Figure 11 we cannot have entangling relations.
The RB-structure in the right-hand side of Figure 13 contains a unique ae-cycle, but it induces four P 4 's , therefore it cannot be simply entangling.
The RB-structure in Figure 12 has an entangling relation which is simply entangling.
[Figure 13: Examples of non-correct RB-structures: the entangling relation defined by Link is not simply entangling. The left structure pairs a P 4 -gate with a Bull-gate; the right structure consists of four P 4 -gates.]
Theorem 97. Let Γ be a sequent. Then
⊢ MPL Γ ⇐⇒ there is a Link such that G = {{Γ}} ⊔ Link is MPL-correct
Proof. If ⊢ MPL Γ, then there is a derivation π of Γ and we can define a derivation of G = {{Γ}} ⊔ Link π in RB P by induction on the rules in a derivation in MPL. To conclude it suffices to check that each rule preserves correctness, that is, if all its premises are MPL-correct, then also its conclusion is.
• the conclusion of a rule ax RB is MPL-correct;
• the conclusion G of a rule`R B contains no new ae-cycles with respect to its premise G 1 , that is, AE m (G) = AE m (G 1 ). We conclude since the order ≺ ♥ over Gates(G) is well-defined iff it also is over Gates(G 1 );
• the conclusion G of a rule ⊗ RB contains no new ae-cycles with respect to its premises G 1 and G 2 , that is, AE m (G) = AE m (G 1 ) ∪ AE m (G 2 ). We conclude since the order ≺ ♥ over Gates(G) is well-defined iff it also is over Gates(G 1 ) and Gates(G 2 ), and AE m (G) = AE ♥ (G) iff both AE m (G 1 ) = AE ♥ (G 1 ) and AE m (G 2 ) = AE ♥ (G 2 );
• the conclusion G of a rule d-κ RB Q contains new ae-cycles with respect to its premises G 1 , . . . , G n , but they are all simply entangling. We conclude similarly to the previous case, since both new gates can only be below some gates in ⋃ i∈{1,...,n} Gates(G i );
To prove the converse we provide a sequentialization procedure returning a derivation π G in MPL, defined by induction on the number of gates in Gates(G) and links in ⊓ G : we apply ax RB and `RB whenever possible, ⊗ RB whenever there is a ⊗-gate in Root(G) whose inputs are not covered by any cycle in AE m (G), and a d-κ RB Q -rule whenever correctness ensures the presence of two entangled root gates.
More precisely, we define a proof π G from roots to leaves as follows:
1. if Gates(G) = ∅, then G = {a, a ⊥ }, ∅, {aa ⊥ } and π G is an instance ax RB ;
2. if R-Gates(G) contains a`n-gate, then G = G`n ⊲⊳ X`G1 and
π G = π 1 IH ⊢ G 1 RB ⊢ G`n ⊲⊳ X`G1
where π 1 is defined by inductive hypothesis since |⊓ G | = |⊓ G 1 | but |Gates(G 1 )| < |Gates(G)|, and the correctness of G 1 is guaranteed by the fact that AE m (G) = AE m (G 1 );
[Figure 14: A proof in MPL • where the connective κ P 9 has been introduced by a s ⊗ , and the corresponding derivation in RB • Q where we highlighted the two residual components of the P 9 -gate.]

3. if no gate in R-Gates(G) ≠ ∅ is a `n-gate and there is a ⊗-gate in R-Gates(G) which is not covered by a cycle in AE m (G), then there are r 1 ∈ Root(G 1 ) and r 2 ∈ Root(G 2 ) such that π G is obtained by applying ⊗ RB to π 1 ⊢ G 1 and π 2 ⊢ G 2 , concluding ⊢ G ⊲⊳ {(i 1 G ,r 1 ),(i 2 G ,r 2 )} (G 1 ⊎ G 2 ),
where the two proofs π 1 and π 2 are defined by inductive hypothesis since ⊓ G = ⊓ G 1 ⊎ ⊓ G 2 and the correctness of G 1 and G 2 is guaranteed by the fact that AE m (G) = AE m (G 1 ) ⊎ AE m (G 2 ); 4. Otherwise, we can assume R-Gates(G) contains no `-gates, nor ⊗-gates whose inputs are not covered by a cycle in AE ♥ (G). Therefore we must have two entangled gates G, G ′ ∈ R-Gates(G) because ≺ ♥ is well-founded.
By Remark 91, the types of these gates are dual connectives, and G = (G P ⊎ G P ⊥ ) ⊲⊳ X G ′ . In order to be sure that such a RB-structure is the conclusion of a d-κ RB Q , it suffices to prove that G ′ = G 1 ⊎ · · · ⊎ G n . This is equivalent to checking that there are no ae-paths from r 1 G i to any r 1 G j or r 2 G j whenever i ≠ j; however, if such a path existed, then we would have a shortcut for one of the ae-cycles in AE m (G) covering a P 4 in G 1 or in G 2 .
Thus we conclude since we have
π G is obtained by applying d-κ RB Q to π 1 ⊢ G 1 , . . . , π n ⊢ G n , concluding ⊢ (G P ⊎ G P ⊥ ) ⊲⊳ X d (G 1 ⊎ · · · ⊎ G n ), where π 1 , . . . , π n are defined by inductive hypothesis since ⊓ G = ⊎ n i=1 ⊓ G i and the correctness of G 1 , . . . , G n is guaranteed by the fact that AE m (G) ⊇ ⋃ n i=1 AE m (G i ).
7.3 A Correctness Criterion for MPL •
As shown in Figure 14, in MPL • the rule s ⊗ could introduce (top-down) an occurrence of a new connective from a formula containing smaller ones. This prevents us from defining a correctness criterion reasoning directly on the gates of a RB-structure, since some of them could be deconstructed by a s RB ⊗ , splitting a gate into smaller ones. For this purpose, we define the residual components allowing us to spot in a RB-structure the connectives originally introduced by d-κ.
[Figure 15: A RB-structure where the residual components are highlighted, together with the lattice of the ≺ ♥ relation between its gates. Here G 1 is the P 5 -gate containing a, G 2 is the P 4 -gate containing c ⊥ , G 3 is the P 5 -gate containing e ⊥ , G 4 is the `-gate containing g, G 5 is the `-gate containing r P 4 , G 6 is the `-gate containing h, and G 7 is the bottommost `-gate. In particular: G 5 ≺ ♥ G 1 and G 5 ≺ ♥ G 2 because of ≺; G 3 ≺ ♥ G 4 , G 7 ≺ ♥ G 3 , G 3 ≺ ♥ G 5 and G 7 ≺ ♥ G 6 because of ≺; G 7 ≺ ♥ G 1 because G 3 ≺ G 5 and there is a cycle in AE m (G) covering G 1 and G 5 ; G 7 ≺ ♥ G 2 because G 3 ≺ G 5 and there is a cycle in AE m (G) covering G 2 and G 5 .]
Definition 98. Let G = {{Γ}} ⊔ Link be a RB-structure. An input v of a gate G ∈ Gates(G) is a graft if there is no BB-path from v to any other input of the same gate.
The residual of G is the graph induced by the inputs of the gates in G which are not grafts and the R-edges of G (we assume the labelling function to be empty). We denote by Res(G) the set of residual components of G, that is, the set of connected components of the residual of G.
Remark 99. Intuitively, grafts witness an application of s RB ⊗ (or ⊗ RB ), while the residual of G identifies the types of the gates which have been introduced by a d-κ RB Q -rule. Therefore, by removing grafts we are able to split those gates which may have been "merged" by a s RB ⊗ . For an example, see Figure 14, where the P 9 -gate in the RB-proof net in the conclusion of the derivation in RB • Q has been introduced by a s RB ⊗ -rule merging two P 4 -gates in the RB-structure (this is because Res(G P 9 ♣{i 5 G P 9 }) ∼ (P 4 ` ∅ ` P 4 )).
Remark 100. Since in MLL • -nets we only have`-and ⊗-gates, then by ae-acyclicity each input of a ⊗-gate must be a graft.
We refine the notion of entanglement by defining it on residual components instead of gates, in order to ensure sequentializability of a residual component which, during sequentialization, eventually becomes a gate which should have been introduced by a d-κ RB Q (or a ⊗ RB ).
Definition 101. Two residual components R 1 and R 2 of a RB-structure G are entangled (denoted R 1 ♥R 2 ) iff there is a total bijection ♥ between the vertices in R 1 and R 2 (we denote v♥w if v = ♥(w)) such that it satisfies conditions (1)-(3) in Definition 90 formulated on residual components instead of on gates. An entangling relation is fully entangling iff AE ♥ (G) = AE m (G).
Given R 1 , R 2 ∈ Res(G), we write R 1 ≺ R 2 if there is a vertex of R 1 below a vertex of R 2 . We write R 1 ≺ ♥ R 2 if there are R ′ 1 , R ′ 2 ∈ Res(G) such that R ′ 1 ≺ R ′ 2 and such that there are ae-cycles in AE m (G) covering R 1 and R ′ 1 and covering R 2 and R ′ 2 (see Figure 15 for an example).
Remark 102. As in Remark 91, two entangled residual components induce graphs which are dual modulo symmetries, that is, such that G| R 1 ∼ ♥(G| R 2 ) ⊥ .
Definition 103. The RB-structure G is GS-correct iff there is a fully entangling relation ♥ over Res(G) such that ≺ ♥ is well-founded.
Remark 104. Our correctness criteria subsume Retoré's ones for MLL and MLL • since in a RBstructure of a sequent of MLL-formulas there are no P 4 's.
Theorem 105. Let G = {{Γ}} ⊔ Link be a RB-structure. Then ⊢ MPL • [[Γ]] ⇐⇒ G is GS-correct.

Proof. The proof is similar to the one of Theorem 97.
To construct the RB-proof net encoding a derivation in RB • Q we have to consider the following additional cases, handling the additional rules in the system RB • Q which are not in RB P :
• if ρ = mix RB , then the gates in the conclusion are the same gates in the premises, AE m (G) = ⋃ n i=1 AE m (G i ), and Res(G) = ⋃ n i=1 Res(G i ). Note that the conclusion is neither ae-connected nor g-connected;
• if ρ = s RB ⊗ , then (using the same convention of Figure 10) the k-th input i k G Q of the active gate G Q is a graft and no new ae-cycle has been created; that is, AE m (G) = AE m (G 1 ) ⊎ AE m (G 2 ) and Res(G) = Res(G 1 ) ∪ Res(G 2 ). In fact, even if G Q contains a P 4 not occurring in its premises, it must contain the vertex i k G Q , which is a graft, therefore not occurring in any residual component of G.
• if ρ = d-κ RB Q , then the only difference with the case analysis of in the proof of Theorem 97 is that we are assuming the weaker condition that ♥ is fully entangling.
Note that an instance of a`R B could have conclusion a g-connected RB-structure but a premise which is not g-connected.
Conversely, the sequentialization procedure is refined as follows:
1. if G = ∅ then Γ = • and π G is an instance of • RB ;
2. if G ∅ and Gates(G) = ∅, then G = {a, a ⊥ }, ∅, {aa ⊥ } and π G is an instance ax RB ;
3. if G ∅ is not g-connected, then G = G 1 ⊎ · · · ⊎ G n and
π G = π 1 IH ⊢ G 1 · · · π n IH ⊢ G n mix RB ⊢ G
where the proofs π 1 , . . . , π n are inductively defined since we have that ⊓ G = ⊎ i∈{1,...,n} ⊓ G i , Res(G) = ⋃ i∈{1,...,n} Res(G i ) and the correctness of G 1 , . . . , G n is guaranteed by the fact that AE m (G) = ⋃ i∈{1,...,n} AE m (G i ); 4. if G is g-connected and R-Gates(G) contains a`n-gate, then G = G`n ⊲⊳ X`G1 and
π G = π 1 IH ⊢ G 1 RB ⊢ G`n ⊲⊳ X`G1
where π 1 is defined by inductive hypothesis since |⊓ G | = |⊓ G 1 | but |Gates(G 1 )| < |Gates(G)| and the correctness of G 1 is guaranteed by the fact that AE m (G) = AE m (G 1 ) and Res(G) = Res(G 1 ); 5. if G ≠ ∅ is g-connected, no gate in R-Gates(G) ≠ ∅ is a`n-gate, and at least one input of a G ∈ R-Gates(G) is a graft, then
• either G is a ⊗ n -gate whose inputs are all graft, therefore there are G 1 , . . . , G n and r i ∈ Root(G i ) for all i ∈ {1, . . . , n} such that
π G = π 1 IH ⊢ G 1 · · · π n IH ⊢ G n ⊗ RB ⊢ G ⊲⊳ {(i i G ,r i) |i∈{1,...,n}} i∈{1,...,n} G i
• or G is a Q-gate with Q quasi-prime and |V Q | = n + 1 > 2 and, w.l.o.g., there is an interface X + = X − ∪ { (i 1 Q , r 2 ) } where r 2 ∈ Root(G 2 ) such that
π G = π 1 IH ⊢ Q ∅, v 2 , . . . , v n+1 ⊲⊳ X − G 1 π 2 IH ⊢ G 2 s RB ⊗ ⊢ G Q ⊲⊳ X + (G 1 ⊎ G 2 )
In both cases, the two proofs of the premises are defined by inductive hypothesis;
6. Otherwise G ≠ ∅ is g-connected, no gate in R-Gates(G) ≠ ∅ is a`n-gate, and no input of a gate in R-Gates(G) is a graft. We conclude as in Case 4 in the proof of Theorem 97.
7.4 A Topological Characterization of GS
An axiom linking Link for a graph G is a (total) bijection between atoms mapping each atom to one of its dual. If D is a proof of G in GS we define a set of B-edges ⊓ D by pairing the vertices matched by the applications of the ai↓-rules in D.
Theorem 106. Let G be a graph. Then there is a proof D of G in GS iff the RB-structure V {{G}} , ℓ G , ⌢ {{G}} , ⊥ T ∪ ⊓ D is GS-correct.
Proof. It follows from Theorems 57 and 105 and the fact that the translations between GS and MPL • preserve the pairs of atoms matched by an ax-rule and the vertices matched by an ai↓-rule.
Classical Logic Beyond Cographs
We conclude this paper by providing a simple extension of MPL with structural rules allowing us to generalize classical logic beyond cographs.
Definition 107. We define the following logics of formulas (defined via a proof system) and graphs respectively:
Classical Prime Logic: PLK ≔ MPL ∪ {w, c}
Classical Graphical Logic: GLK ≔ { φ | φ formula such that ⊢ PLK φ }    (25)
We say that a graph G is provable in
GLK (denoted ⊢ GLK G) if G ∈ GLK.
For PLK we can prove the admissibility of the cut-rule via cut-elimination.
Theorem 108 (Cut-elimination). The rule cut is admissible in PLK.
Proof. Consider the cut-elimination steps from Figure 5 and Figure 16 and the definition of weight from the proof of Theorem 39. We conclude by remarking that by applying a cut-elimination step to any one of the top-most cuts in the derivation we obtain a derivation with smaller weight.
[Figure 16: Structural rules for sequent calculi (weakening w and contraction c), the corresponding rules in deep inference (w↓ and c↓), the atomic contraction ac↓ and the generalized medial rule m (for P prime, from P φ 1 , . . . , φ n ` P ψ 1 , . . . , ψ n derive P φ 1`ψ 1 , . . . , φ n`ψ n ), and the cut-elimination steps handling weakening and contraction against cut.]
Moreover, we can define a sequent system containing the deep inference version of the structural rules (see Figure 16) to obtain the following decomposition result.
Theorem 109 (Decomposition). Let φ be a formula such that ⊢ PLK φ. Then there is a formula φ ′ such that ⊢ MPL φ ′ and φ ′ ⊢ {w,w↓,c↓} φ.
Proof. The proof is immediate by applying rule permutations. For a reference, see [10].
This result can be refined using the generalized medial rule proposed in [24] to restrict the instances of contraction rules to atomic ones.
Lemma 110. The (deep) contraction rule c↓ is derivable using atomic contraction (ac↓) and medial rule (m).
Proof. By induction on the contracted formula φ. If φ = a is an atom, then an instance of c↓ can be replaced by an instance of ac↓. Otherwise, φ = κ ψ 1 , . . . , ψ n and we conclude since we can replace each application of c↓ with a derivation which first applies the medial rule m to κ ψ 1 , . . . , ψ n ` κ ψ 1 , . . . , ψ n , obtaining κ ψ 1`ψ 1 , . . . , ψ n`ψ n , and then contracts each ψ i`ψ i by inductive hypothesis.

This would allow us to provide a stronger result, similar to the one for deep inference systems for classical graphical logic [19,22], formulated as follows.
Corollary 111 (Decomposition). Let φ be a formula such that ⊢ PLK φ. Then there are formulas φ ′ , ψ ′ and ψ such that
⊢ PLK φ ⇐⇒ ⊢ MPL φ ′ and φ ′ ⊢ {m} ψ ′ , ψ ′ ⊢ {ac↓} ψ , ψ ⊢ {w↓} φ
Proof. By Theorem 109 we find the desired φ ′ . Using rule permutations, we can push occurrences of w↓ down in a derivation, finding the desired ψ. We then apply Lemma 110 and replace all instances of c↓-rules with derivations containing only m and ac↓. We conclude by applying rule permutations to move all ac↓-rules below the instances of m-rules.
To conclude this section, we recall that classical graphical logic, besides being a conservative extension of classical propositional logic, is not the same logic as the boolean graphical logic (denoted GBL) defined in [24] (an inference system on graphs obtained by extending the semantics of read-once boolean relations from cographs to general graphs). As shown in [8], the following graph is expected to be provable in GBL, but is not provable in GS nor in GLK.

[a graph on the vertices b, c ⊥ , a, b ⊥ , c, a ⊥ ]

Conclusion and Future Works
In this paper we provided the definition of the notion of graphical connectives, and we defined a class of formulas generated by a signature of logical connectives corresponding to the graphical ones. We provided three proof systems operating on these generalized formulas (MPL, MPL • and PLK), proving that these systems satisfy cut-elimination and are conservative extensions of multiplicative linear logic with and without mix, and of classical logic, respectively. We proved that the logic GS from [7,6] provides a model for the system extending multiplicative linear logic with mix (MPL • ). We then provided proof nets for the substructural logics MPL and MPL • by extending the syntax of RB-proof nets with additional types of gates whose design is based on the structure of prime graphs, cliques and stable sets. We provided a topological characterization of the formulas and graphs which are provable in both these logics by extending the syntax of RB-proof nets for multiplicative logic.
Future Works
Categorical Semantics. We are interested in defining the categorical structures for the multiplicative prime logic (with and without mix). We conjecture that such categories are extensions of star-autonomous and IsoMix [27,28] categories respectively, with additional (n-ary) monoidal products. In particular, the categorical structure for MPL • should be a quotient of a free multi-monoidal category whose products share the same unit (for each of their entries) and whose unitors are defined according to the unitor κ defined in Lemma 50.
Digraphs, Games and Event Structures. In this work we started our investigation from the correspondence between classical formulas (and multiplicative linear logic formulas) and cographs. However, a different approach could be taken by considering the correspondence between intuitionistic propositional formulas and the Hyland-Ong arenas [54] (directed graphs). We foresee interesting connections with game semantics, concurrent games and event structures [79]. In this setting, graphs generalizing the connectives from additive linear logic [26] could allow expressing non-transitive conflict relations, as well as handling the general setting where the conflict relation # could define patterns which cannot be expressed via linear formulas (i.e., without repetition of events) constructed using binary connectives only.
A new framework for GoI. In Section 5 we defined a general setting for generalized proof nets. The current models for the geometry of interaction (or GoI) [41,46,67] and Girard's transcendental syntax [45,46,37,38] are constructed using the Danos-Regnier correctness criterion for multiplicative proof nets to define tests. The atomic component used to build of these proof structures, as well as the tests used to define the correctness criterion, are based on a paradigm which could be named connectives-as-permutations (if we follow the approach from, e.g., [29,37]) or connectives-as-partitions (if we follow the approach from, e.g., [44,9]). As explained in Section 9 of [8], connectives-as-partitions are distinct from the graphical connectives used in this paper. Thus we foresee the possibility of exploring entire new models of GoI over the RB-structures defined in this paper.
Relational RB-Nets. In Section 7 we provided a topological characterization of RB-structures encoding correct derivations in RB • Q , relying on the encoding of a graph (or a sequent) with an RB-forest. However, RB-cographs 8 provide another possible encoding of RB-graph proofs in MLL [72,74,73]. This encoding is obtained by directly enriching a cograph (whose edges are R-edges) encoding a MLL-formula with B-edges pairing the vertices corresponding to the atoms matched by the axiom rules. The correctness criterion for these RB-graphs crucially relies on the presence of chords in an ae-cycle. Intuitively, chords identify (non-elementary) alternating cycles in {{Γ}} ⊔ Link covering the connector edge of a ⊗. It follows that chords are indeed paired, inducing "bow-tie" subgraphs as the one below on the left.
(26) [two RB-graphs on the vertices a, a ⊥ , b, b ⊥ : on the left a "bow-tie" induced by a pair of chords, on the right a configuration whose R-edges contain a P 4 ]
The criterion for MLL fails for MPL because of the presence of P 4 's, as shown in the example above on the right. We conjecture the possibility of reproducing the correctness criterion for RB-structures directly on these RB-graphs. Moreover, a correctness criterion on RB-nets could be used to extend this syntax to include exponentials by reformulating the correctness criterion provided in [4].
On Graphical Classical Logic.
In [24] the authors investigate the possibility of extending boolean logic beyond cographs. Besides the fact that this logic has been proved to be incompatible with extensions of the multiplicative graphical logic, it is still interesting to determine the exact relation between our graphical classical logic and the boolean graphical logic presented in the aforementioned paper.
The decomposition result for GLK suggests the possibility of defining combinatorial proofs [53,52] for this logic and studying its proof equivalence, which would present some non-trivial derivations (see the left-most RB-structure in Figure 13 for an example). Moreover, we foresee the possibility of extending the results in [16] to classical graphical logic.
A correctness criterion for BV. The technique of sequentializing and entangling connectives could provide insights on the correctness criterion for BV [47,69,68] and its extensions GV and GV sl from [5].
Automated Theorem Provers for Graphical Logic. The introduction of graphical connectives provides a representation of graphs which is linear with respect to the number of their vertices. Such a representation could be used to implement efficient automated tools for the current results in graphical logic, as well as to provide new automated tools to address challenging problems in mathematics and computer science where the graphical syntax improves usability and efficiency.
A Deep Inference and the Open Deduction Formalism

Open deduction [48] is a proof formalism based on deep inference [12]. It has originally been defined for formulas, but it is abstract enough that it can equally well be used for graphs, as already done in [6].

Definition 112. An inference system S is a set of inference rules (as for example shown in Figure 4). A derivation D in S with premise G and conclusion H is defined inductively as follows:
• every graph G is a (trivial) derivation with premise G and conclusion G (also denoted G);
• an instance of a rule in S is a derivation with premise G and conclusion H;
• if D 1 is a derivation with premise G 1 and conclusion H 1 , and D 2 is a derivation with premise G 2 and conclusion H 2 , and H 1 = G 2 , then the composition of D 1 and D 2 is a derivation, denoted D 2 ; D 1 . Note that even if the symmetry between G 2 and H 1 is not written, we always assume it is part of the derivation and explicitly given;
• if G is a graph with n vertices and D 1 , . . . , D n are derivations with premise G i and conclusion H i for each i ∈ {1, . . . , n}, then G D 1 , . . . , D n is a derivation with premise G G 1 , . . . , G n and conclusion G H 1 , . . . , H n .

A.1 Equivalent Definitions of GS

We here show that the formulation of the system GS provided in this paper is equivalent to the one provided in [7,8]. In particular, in these previous papers the rule s ⊗ was not included in the system. However, as shown in [8], this rule plays a crucial role in the proof that GS is a conservative extension of MLL • , and in [5] it is shown that this rule cannot be admissible in proof systems operating on mixed graphs. Moreover, we here give a weaker side condition on the p-rule with respect to the rules below:
p 1 ↓ (p↓ in [8]): from (M 1`N 1 ) ⊗ · · · ⊗ (M n`N n ) derive P ⊥ M 1 , . . . , M n ` P N 1 , . . . , N n , where P ∉ {`, ⊗} is prime and M i ≠ ∅ for all i ∈ {1, . . . , n};
p 2 ↓ (p↓ in [7]): from (M 1`N 1 ) ⊗ · · · ⊗ (M n`N n ) derive P ⊥ M 1 , . . . , M n ` P N 1 , . . . , N n , where P ∉ {`, ⊗} is prime and M i`N i ≠ ∅ for all i ∈ {1, . . . , n}.
In order to prove the equivalence between our system and the ones in [7,8] we recall the following lemma, allowing us to prove that in GS we can derive any graph of the shape G ⊸ G.

Lemma 113. Let M 1 , . . . , M n , N 1 , . . . , N n and G be graphs such that |V G | = n. Then there is a derivation from (M 1`N 1 ) ⊗ · · · ⊗ (M n`N n ) to G ⊥ M 1 , . . . , M n ` G N 1 , . . . , N n using only the rules s ⊗ and p↓.

Proof. By induction on the modular decomposition of G.

Thanks to this lemma, we can prove the admissibility of the weaker rule.

Proposition 114. The rule p 1 ↓, which is a version of p↓ with weaker side conditions, is admissible in GS.

Proof. Note that we may have N i = ∅ for some i ∈ {1, . . . , n}. If N i ≠ ∅ for all i, then p 1 ↓ is an occurrence of p↓. Otherwise, w.l.o.g., N 1 = ∅, and we conclude using Lemma 113 and s ⊗ .

Theorem 115. Let G be a graph. Then ⊢ GS G ⇔ ⊢ {ai↓,s`,s ⊗ ,p 1 ↓} G ⇔ ⊢ {ai↓,s`,p 1 ↓} G ⇔ ⊢ {ai↓,s`,p 2 ↓} G.

Proof. The first equivalence follows from Proposition 114. The other has been proved in [8].
B Soundness and Completeness of RB-nets
In order to prove Theorem 69, we use the following technical lemmas.
Lemma 116. Let H be a graph. Then the following rule is admissible in RB P :
d-κ RB G : from ⊢ G 1 · · · ⊢ G n derive ⊢ (G G ⊎ G G ⊥ ) ⊲⊳ X d (G 1 ⊎ · · · ⊎ G n ).

Proof. By induction on the modular decomposition via quasi-prime graphs of G. In this case, we consider the root of κ P φ 1 , . . . , φ n and of κ P ⊥ ψ 1 , . . . , ψ n to respectively be the root of κ P φ 1 , . . . , φ k , • ∈ Root({{K}}) and the root of κ P ⊥ ψ 1 , . . . , ψ k , • ∈ Root({{K ⊥ }}).

We can now provide a proof of Theorem 69.

Proof of Theorem 69. Given a derivation π of a sequent ∆ in MPL • , we define a derivation {{π}} in RB • Q by induction on the last rule ρ in π:
• if ρ = mix, then: if, w.l.o.g., ∆ 2 only contains vacuous formulas, we let {{π}} = {{π 1 }} ⊢ {{∆ 1 }}; otherwise we let {{π}} = {{π 1 }} ⊎ {{π 2 }}, obtained by applying mix RB to {{π 1 }} and {{π 2 }} (given by inductive hypothesis);
• if ρ = wd ⊗ , with premises π 1 ⊢ ∆ 1 , φ k and π 2 ⊢ ∆ 2 , χ φ 1 , . . . , φ k−1 , φ k+1 , . . . , φ n and conclusion ⊢ ∆ 1 , ∆ 2 , κ φ 1 , . . . , φ n , then, since ∆, κ φ 1 , . . . , φ k−1 , •, φ k+1 , . . . , φ n = ∆, χ φ 1 , . . . , φ n by definition of the rule wd ⊗ :
if φ k is vacuous, then then we let {{π}} = {{π 1 }} ⊎ {{π 2 }} and we consider the root of χ φ 1 , . . . , φ k−1 , φ k+1 , . . . , φ n to be the root of κ φ 1 , . . . , φ n ; if χ φ 1 , . . . , φ k−1 , φ k+1 , . . . , φ n is vacuous, then we also let {{π}} = {{π 1 }} ⊎ {{π 2 }} but now we consider the root of {{φ k }} to be the root of κ φ 1 , . . . , φ n ; otherwise, {{π 2 }} = G Q v 1 ,...,v k−1 ,∅,v k+1 ,...,v n ⊲⊳ X − {{Γ 2 , φ 1 , . . . , φ k−1 , φ k+1 , . . . , φ n }} ⊔ Link with X − = v i , r {{φi}} and we conclude by letting {{π}} = {{π 1 }} ⊎
{{Γ 2 , φ 1 , . . . , φ k−1 , φ k+1 , . . . , φ n }} ⊔ Link ⊲⊳ X + G Q v 1 ,...,v k with x + = x − ∪ v k , r {{φk}} .
We let the root of κ φ 1 , . . . , φ n to be the root of χ φ 1 , . . . , φ k−1 , φ k+1 , . . . , φ n in {{π}}.
Conversely, given a proof π of G = {{Γ}} ⊔ Link in RB • Q , we define a proof π(G) of Γ in MPL • by induction on the last rule ρ in π:
• if ρ = • RB , then G = ∅ and π ′ = • ⊢ •
• if ρ = ax RB , then G = a a ⊥ and π ′ = ax ⊢ a, a ⊥
• if ρ =`R B , then G = G ⊲⊳ X`G1 with G 1 = {{Γ, φ 1 , . . . , φ n }} ⊔ Link and G :`n. Thus π(G) = π(G 1 ) IH ⊢ Γ, φ 1 , . . . , φ ǹ n ⊢ Γ,`n φ 1 , . . . , φ n • if ρ = ⊗ RB , then G = G ⊗ n ⊲⊳ X s n i=1 G i with G i = {{Γ i , φ i }} ⊔ Link i for all i ∈ {1, . . . , n}. Thus π(G) is of the shape π(G 1 ) IH ⊢ Γ 1 , φ 1 · · · π(G n ) IH ⊢ Γ n , φ n ⊗ ⊢ Γ 1 , . . . , Γ n , φ 1 ⊗(· · · (φ n−1 ⊗ φ n ))
• if ρ = d-κ RB Q then G = (G P ⊎ G P ⊥ ) ⊲⊳ X d n i=1 G i with G i = {{Γ i , φ i , ψ i }} ⊔ Link i for all i ∈ {1
, . . . , n}. Thus π(G) = π(G 1 ) IH ⊢ Γ 1 , φ 1 , ψ 1 . . . π(G n ) IH ⊢ Γ n , φ n , ψ n d-κ P ⊢ Γ 1 , . . . , Γ n , κ P φ 1 , . . . , φ n , κ P ⊥ ψ 1 , . . . , ψ n • if ρ = mix RB , then G = n i=1 G i with G i = {{Γ i }} ⊔ Link i for all i ∈ {1, . . . , n}. Thus π(G) = π(G 1 ) IH ⊢ Γ 1 · · · π(G n ) IH ⊢ Γ n mix ⊢ Γ 1 , . . . , Γ n • if ρ = s RB ⊗ , then G = G Q ⊲⊳ X + (G 1 ⊎ G 2 ) with 1. G 1 = {{Γ 1 , φ k }} ⊔ Link 1 ; 2. G 2 = Q v 1 , . . . , v k−1 , ∅, v k+1 , . . . , v n ⊲⊳ X − {{Γ 2 , φ 1 , . . . , φ k−1 , φ k+1 , . . . , φ n }} ⊔ Link 2 .
Thus π(G) = π(G 1 ) IH ⊢ Γ 1 , φ k π(G 2 ) IH ⊢ Γ 2 , χ φ 1 , . . . , φ k−1 , φ k+1 , . . . , φ n wd⊗ ⊢ Γ 2 , κ Q φ 1 , . . . , φ n with χ ∼ Q v 1 , . . . , v k−1 , ∅, v k+1 , . . . , v n .
C On Rules Introducing a Connective at a Time

In this appendix we discuss the results about the system extending multiplicative linear logic with the rule s-κ, that is, the system MLL s-κ ≔ { ax,`, ⊗, mix, s-κ P | κ P ∈ C with P ∉ {`, ⊗} prime }, where we consider the following rule, introducing a prime graphical connective different from`n and ⊗ one at a time, instead of d-κ.
s-κ: from ⊢ Γ 1 , φ 1 · · · ⊢ Γ n , φ n derive ⊢ Γ 1 , . . . , Γ n , κ φ 1 , . . . , φ n
We first observe that the system no longer satisfies initial coherence (it suffices to consider the formula κ P 4 a, b, c, d ⊸ κ P 4 a, b, c, d ), even if the system still satisfies cut-elimination. In fact, for a proof of cut-elimination it suffices to add to the proof of Theorem 39 the following cut-elimination step.
⊢ Γ 1 , φ 1 · · · ⊢ Γ n , φ n s-κ ⊢ Γ 1 , . . . , Γ n , κ P φ 1 , . . . , φ n ⊢ ∆ 1 , φ ⊥ 1 · · · ⊢ ∆ n , φ ⊥ n s-κ ⊢ ∆ 1 , . . . , ∆ n , κ P ⊥ φ ⊥ 1 , . . . , φ ⊥ n cut ⊢ Γ 1 , . . . , Γ n , ∆ 1 , . . . , ∆ n ⊢ Γ 1 , φ 1 ⊢ ∆ 1 , φ ⊥ 1 cut ⊢ Γ 1 , ∆ 1 , χ 1 , ψ 1 · · · ⊢ Γ n , φ ⊥ n ⊢ ∆ n , φ ⊥ n cut ⊢ Γ n , χ n , ψ n mix ⊢ Γ 1 , . . . , Γ n , ∆ 1 , . . . , ∆ n Note that s-κ is derivable in MPL • using d-κ and unitor κ .
Lemma 117. The rule s-κ is derivable in MPL • .
Proof. It suffices to consider the following derivation: for each i, from an instance of • and the premise ⊢ Γ i , φ i we obtain ⊢ Γ i , •, φ i by mix; applying d-κ we obtain ⊢ Γ 1 , . . . , Γ n , κ ⊥ •, . . . , • , κ φ 1 , . . . , φ n ; then (n−1) applications of unitor κ turn κ ⊥ •, . . . , • into •, giving ⊢ Γ 1 , . . . , Γ n , •, κ φ 1 , . . . , φ n ; finally,`gives ⊢ Γ 1 , . . . , Γ n , •`κ φ 1 , . . . , φ n and a last unitor κ yields ⊢ Γ 1 , . . . , Γ n , κ φ 1 , . . . , φ n .

We can directly prove the derivability of the corresponding rule in the system RB • Q .
Lemma 118. The rule s-κ RB G is derivable in RB • Q .
Proof. We proceed by induction on the modular decomposition of G via quasi-prime graphs:
• if G =`n, then
IH ⊢ G 1 · · · IH ⊢ G n mix RB ⊢ G 1 ⊎ · · · ⊎ G ǹ RB ⊢ G ⊲⊳ X s (G 1 ⊎ · · · ⊎ G n )
• if G = ⊗ n , then s-κ RB Q = ⊗ RB .
• if G = P, for a prime graph P {`, ⊗}, then there is
T = G Q ♣{i k G Q } such that IH ⊢ G T ⊲⊳ X ′ s (G 1 ⊎ · · · ⊎ G k−1 ⊎ G k+1 ⊎ · · · ⊎ G n ) IH ⊢ G k s RB ⊗ ⊢ G Q ⊲⊳ X s (G 1 ⊎ G k )
• otherwise G = Q M 1 , . . . , M n and we can apply inductive hypothesis.
1. To be more precise, we should instead say "a picture is worth an exponential number of words".
2. Another line of works [23,78,24,31,32] explored the extensions of the semantics of boolean logic from cographs encoding formulas to graphs. However, in these works graphical logic is investigated from a semantical viewpoint rather than under the lens of proof systems.
3. Note that several NP-hard optimization problems on graphs become solvable in polynomial time if restricted to cographs [57].
4. The logic BV is a NP-time decidable fragment of Pomset logic [69,68]. This logic is sound and complete with respect to series-parallel order refinements: if φ and ψ are formulas encoding series-parallel orders, then the order encoded by φ is a refinement of the order encoded by ψ iff ⊢ BV φ ⊸ ψ.
5. Note that the full proof of the admissibility of the rule simulating the cut in deep inference systems in the system GS, as well as the proof that GS is a conservative extension of multiplicative linear logic with mix, are quite convoluted and take several pages in the Appendix of [8].
6. In [67] the authors theorized the possibility of the existence of logics satisfying a weaker correctness criterion on RB-structures with respect to the one from [73].
7. More precisely, in these works the author defines proof nets for Pomset logic including not only undirected edges, but also directed edges connecting the two inputs of a gate representing the non-commutative connective ⊳, internalizing the pomset order by a logical connective.
8. Also known as handsome proof nets, relational RB-prenets [69], RB-cographs [68], or closed coherent spaces [36].
Open deduction [48] is a proof formalism based on deep inference [12]. It has originally been defined for formulas, but it is abstract enough that it can equally well be used for graphs, as already done in [6].

Definition 112. An inference system S is a set of inference rules (as for example shown in Figure 4). A derivation D in S with premise G and conclusion H is defined inductively as follows:
• Every graph G is a (trivial) derivation with premise G and conclusion G (also denoted G).
• An instance of a rule in S is a derivation with premise G and conclusion H.
• If D_1 is a derivation with premise G_1 and conclusion H_1, and D_2 is a derivation with premise G_2 and conclusion H_2, and H_1 = G_2, then the composition of D_1 and D_2 is a derivation D_2 ; D_1, denoted as below. Note that even if the symmetry between G_2 and H_1 is not written, we always assume it is part of the derivation and explicitly given.
• If G is a graph with n vertices and D_1, . . . , D_n are derivations with premise G_i and conclusion H_i for each i ∈ {1, . . . , n}, then G⟨D_1, . . . , D_n⟩ is a derivation with premise G⟨G_1, . . . , G_n⟩ and conclusion G⟨H_1, . . . , H_n⟩, denoted as below on the left. [. . .] , ⊗} we may write the derivations as above on the right. [. . .] is well-defined for any context C[ ] and any derivation A.

E. DGS. We here show that the formulation of the system GS provided in this paper is equivalent to the one provided in [7, 8]. In particular, in these previous papers the rule s⊗ was not included in the system. However, as shown in [8], this rule plays a crucial role in the proof that GS is a conservative extension of MLL•, and in [5] it is shown that this rule cannot be admissible in proof systems operating on mixed graphs. Moreover, we here give a weaker side condition on the p-rule with respect to the rules below, where G is a graph with |V_G| = n and [. . .].

Proof. By induction on the modular decomposition via quasi-prime graphs of G: for a given X_d; for given X_d and X`n. Note that the conclusion can also be written as [. . .].

We can now provide a proof of Theorem 69.
Theorem (Theorem 69). Let Γ be a P-sequent. Then
Vito Michele Abrusci. Phase semantics and sequent calculus for pure non-commutative classical linear logic. Journal of Symbolic Logic, 56(4):1403-1451, December 1991.
Vito Michele Abrusci and Paul Ruet. Non-commutative logic I: The multiplicative fragment. Annals of Pure and Applied Logic, 101:29-64, 2000.
Matteo Acclavio. A constructive proof of coherence for symmetric monoidal categories using rewriting, 2017.
Matteo Acclavio. Exponentially handsome proof nets and their normalization. Electronic Proceedings in Theoretical Computer Science, 353:1-25, December 2021.
Matteo Acclavio, Ross Horne, Sjouke Mauw, and Lutz Straßburger. A Graphical Proof Theory of Logical Time. In Amy P. Felty, editor, 7th International Conference on Formal Structures for Computation and Deduction (FSCD 2022), volume 228 of Leibniz International Proceedings in Informatics (LIPIcs), pages 22:1-22:25, Dagstuhl, Germany, 2022. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Matteo Acclavio, Ross Horne, and Lutz Straßburger. An Analytic Propositional Proof System On Graphs. Extended version of a paper published at LICS 2020 [AHS20], December 2020.
Matteo Acclavio, Ross Horne, and Lutz Straßburger. Logic beyond formulas: A proof system on graphs. In Proceedings of the 35th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS '20, pages 38-52, New York, NY, USA, 2020. Association for Computing Machinery.
Matteo Acclavio, Ross Horne, and Lutz Straßburger. An Analytic Propositional Proof System on Graphs. Logical Methods in Computer Science, 18(4), October 2022.
Matteo Acclavio and Roberto Maieli. Generalized connectives for multiplicative linear logic. In Maribel Fernández and Anca Muscholl, editors, 28th EACSL Annual Conference on Computer Science Logic (CSL 2020), volume 152 of LIPIcs, pages 6:1-6:16, Dagstuhl, Germany, 2020. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Matteo Acclavio and Lutz Straßburger. From syntactic proofs to combinatorial proofs. In Didier Galmiche, Stephan Schulz, and Roberto Sebastiani, editors, Automated Reasoning - 9th International Joint Conference, IJCAR 2018, Held as Part of the Federated Logic Conference, FloC 2018, Oxford, UK, July 14-17, 2018, Proceedings, volume 10900, pages 481-497. Springer, 2018.
Andrea Aler Tubella and Alessio Guglielmi. Subatomic proof systems: Splittable systems. ACM Trans. Comput. Logic, 19(1), January 2018.
Andrea Aler Tubella and Lutz Straßburger. Introduction to Deep Inference. Lecture, August 2019.
Jean-Marc Andreoli. Logic programming with focusing proofs in linear logic. Journal of Logic and Computation, 2(3):297-347, 1992.
Arnon Avron and Iddo Lev. Canonical propositional Gentzen-type systems. In Rajeev Goré, Alexander Leitsch, and Tobias Nipkow, editors, Automated Reasoning, pages 529-544, Berlin, Heidelberg, 2001. Springer Berlin Heidelberg.
Luitpold Babel and Stephan Olariu. On the p-connectedness of graphs - a survey. Discrete Applied Mathematics, 95(1-3):11-33, 1999.
Victoria Barrett and Alessio Guglielmi. Totally linear proofs. In 5th International Workshop on Trends in Linear Logic and Applications (TLLA 2021), Rome (virtual), Italy, June 2021.
Patrick Blackburn, Maarten de Rijke, and Yde Venema. Modal Logic, volume 53. Cambridge University Press, 2001.
David F. C. Brewer and Michael J. Nash. The Chinese Wall security policy. In IEEE Symposium on Security and Privacy, volume 1989, page 206. Oakland, 1989.
Kai Brünnler. Locality for classical logic. Notre Dame Journal of Formal Logic, 47(4):557-580, 2006.
Kai Brünnler. Deep sequent systems for modal logic. Archive for Mathematical Logic, 48(6):551-577, 2009.
Kai Brünnler and Lutz Straßburger. Modular sequent systems for modal logic. In Martin Giese and Arild Waaler, editors, Automated Reasoning with Analytic Tableaux and Related Methods, TABLEAUX'09, volume 5607 of Lecture Notes in Computer Science, pages 152-166. Springer, 2009.
Paola Bruscoli and Lutz Straßburger. On the length of medial-switch-mix derivations. In Juliette Kennedy and Ruy J. G. B. de Queiroz, editors, Logic, Language, Information, and Computation - 24th International Workshop, WoLLIC 2017, London, UK, July 18-21, 2017, Proceedings, volume 10388 of Lecture Notes in Computer Science, pages 68-79. Springer, 2017.
Cameron Calk. A graph theoretical extension of boolean logic. Bachelor's thesis, 2016.
Cameron Calk, Anupam Das, and Tim Waring. Beyond formulas-as-cographs: an extension of boolean logic to arbitrary graphs, 2020.
Jinsheng Chen, Giuseppe Greco, Alessandra Palmigiano, and Apostolos Tzimoulis. Syntactic completeness of proper display calculi. ACM Trans. Comput. Logic, 23(4), October 2022.
J. R. B. Cockett and C. A. Pastro. A language for multiplicative-additive linear logic. Electronic Notes in Theoretical Computer Science, 122:23-65, 2005. Proceedings of the 10th Conference on Category Theory in Computer Science (CTCS 2004).
J. R. B. Cockett and R. A. G. Seely. Proof theory for full intuitionistic linear logic, bilinear logic, and mix categories. Theory and Applications of Categories, 3(5):85-131, 1997.
J. R. B. Cockett and R. A. G. Seely. Weakly distributive categories. Journal of Pure and Applied Algebra, 114:133-173, 1997.
Vincent Danos and Laurent Regnier. The structure of multiplicatives. Archive for Mathematical Logic, 28(3):181-203, 1989.
Vincent Danos and Laurent Regnier. The structure of the multiplicatives. Arch. Math. Log., 28(3):181-203, 1989.
Anupam Das. Complexity of evaluation and entailment in boolean graph logic. Preprint, 2019.
Anupam Das and Alex A. Rice. New minimal linear inferences in boolean logic independent of switch and medial. In Naoki Kobayashi, editor, 6th International Conference on Formal Structures for Computation and Deduction, FSCD 2021, July 17-24, 2021, Buenos Aires, Argentina (Virtual Conference), volume 195 of LIPIcs, pages 14:1-14:19. Schloss Dagstuhl - Leibniz-Zentrum für Informatik, 2021.
Paolo Di Giamberardino and Claudia Faggian. Proof nets sequentialisation in multiplicative linear logic. Annals of Pure and Applied Logic, 155(3):173-182, 2008.
R. J. Duffin. Topology of series-parallel networks. Journal of Mathematical Analysis and Applications, 10(2):303-318, 1965.
A. Ehrenfeucht, T. Harju, and G. Rozenberg. The Theory of 2-Structures: A Framework for Decomposition and Transformation of Graphs. World Scientific, 1999.
Thomas Ehrhard. A new correctness criterion for MLL proof nets. In Proceedings of the Joint Meeting of the Twenty-Third EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS), CSL-LICS '14, New York, NY, USA, 2014. Association for Computing Machinery.
Boris Eng and Thomas Seiller. A gentle introduction to Girard's transcendental syntax. In 5th International Workshop on Trends in Linear Logic and Applications (TLLA 2021), 2021.
Boris Eng and Thomas Seiller. Multiplicative linear logic from logic programs and tilings. hal-02895111, 2021.
Tibor Gallai. Transitiv orientierbare Graphen. Acta Mathematica Academiae Scientiarum Hungarica, 18(1-2):25-66, 1967.
Jean-Yves Girard. Linear logic. Theoretical Computer Science, 50:1-102, 1987.
Jean-Yves Girard. Towards a geometry of interaction. Contemporary Mathematics, 92:69-108, 1989.
Jean-Yves Girard. Proof-nets: the parallel syntax for proof-theory. In Aldo Ursini and Paolo Agliano, editors, Logic and Algebra. Marcel Dekker, New York, 1996.
Jean-Yves Girard. Light linear logic. Information and Computation, 143:175-204, 1998.
Jean-Yves Girard. On the meaning of logical rules II: multiplicatives and additives. NATO ASI Series F: Computer and Systems Sciences, 175:183-212, 2000.
Jean-Yves Girard. Three lightings of logic (Invited Talk). In Simona Ronchi Della Rocca, editor, Computer Science Logic 2013 (CSL 2013), volume 23 of Leibniz International Proceedings in Informatics (LIPIcs), pages 11-23, Dagstuhl, Germany, 2013. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Jean-Yves Girard. Transcendental syntax I: deterministic case. Mathematical Structures in Computer Science, 27(5):827-849, 2017.
Alessio Guglielmi. A system of interaction and structure. ACM Transactions on Computational Logic, 8(1):1-64, 2007.
Alessio Guglielmi, Tom Gundersen, and Michel Parigot. A proof calculus which reduces syntactic bureaucracy. In Christopher Lynch, editor, Proceedings of the 21st International Conference on Rewriting Techniques and Applications, volume 6 of LIPIcs, pages 135-150, Dagstuhl, Germany, 2010. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Alessio Guglielmi and Lutz Straßburger. Non-commutativity and MELL in the calculus of structures. In Laurent Fribourg, editor, Computer Science Logic, pages 54-68, Berlin, Heidelberg, 2001. Springer.
Alessio Guglielmi and Lutz Straßburger. A non-commutative extension of MELL. In Matthias Baaz and Andrei Voronkov, editors, Logic for Programming, Artificial Intelligence, and Reasoning, pages 231-246, Berlin, Heidelberg, 2002. Springer.
Michel Habib and Christophe Paul. A survey of the algorithmic aspects of modular decomposition. Computer Science Review, 4(1):41-59, 2010.
Dominic Hughes. Proofs Without Syntax. Annals of Mathematics, 164(3):1065-1076, 2006.
Dominic Hughes. Towards Hilbert's 24th problem: Combinatorial proof invariants (preliminary version). Electronic Notes in Theoretical Computer Science, 165:37-63, 2006.
J. Martin E. Hyland and Chih-Hao Luke Ong. On full abstraction for PCF: I. Models, observables and the full abstraction problem, II. Dialogue games and innocent strategies, III. A fully abstract and universal game model. Information and Computation, 163:285-408, 2000.
Lee O. James, Ralph G. Stanton, and Donald D. Cowan. Graph decomposition for undirected graphs. In Proceedings of the Third Southeastern Conference on Combinatorics, Graph Theory, and Computing (Florida Atlantic Univ., Boca Raton, Fla., 1972), pages 281-290, 1972.
Beverly Jamison and Stephan Olariu. P-components and the homogeneous decomposition of graphs. SIAM Journal on Discrete Mathematics, 8(3):448-463, 1995.
David S. Johnson. The NP-completeness column: an ongoing guide. Journal of Algorithms, 6(3):434-451, 1985.
Ryo Kashima. Cut-free sequent calculi for some tense logics. Studia Logica, 53(1):119-136, 1994.
Björn Lellmann and Elaine Pimentel. Modularisation of sequent calculi for normal and non-normal modalities. ACM Trans. Comput. Logic, 20(2), February 2019.
László Lovász and Michael D. Plummer. Matching Theory, volume 367. American Mathematical Society, 2009.
Saunders Mac Lane. Categories for the Working Mathematician. Number 5 in Graduate Texts in Mathematics. Springer, 1971.
Roberto Maieli. Non decomposable connectives of linear logic. Annals of Pure and Applied Logic, 170(11):102709, 2019.
Ross M. McConnell and Jeremy P. Spinrad. Linear-time modular decomposition and efficient transitive orientation of comparability graphs. In Proceedings of the Fifth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '94, pages 536-545, USA, 1994. Society for Industrial and Applied Mathematics.
Dale Miller and Elaine Pimentel. A formal framework for specifying sequent calculus proof systems. Theoretical Computer Science, 474:98-116, 2013.
Dale Miller and Alexis Saurin. From proofs to focused proofs: a modular proof of focalization in linear logic. In J. Duparc and T. A. Henzinger, editors, CSL 2007: Computer Science Logic, volume 4646 of LNCS, pages 405-419. Springer-Verlag, 2007.
Lê Thành Dũng Nguyên. Unique perfect matchings, forbidden transitions and proof nets for linear logic with Mix. Logical Methods in Computer Science, 16(1), February 2020.
Lê Thành Dũng Nguyên and Thomas Seiller. Coherent interaction graphs: A non-deterministic geometry of interaction for MLL, 2019.
Lê Thành Dũng Nguyên and Lutz Straßburger. A System of Interaction and Structure III: The Complexity of BV and Pomset Logic. Working paper or preprint, 2022.
Lê Thành Dũng Nguyên and Lutz Straßburger. BV and Pomset Logic are not the same. In Florin Manea and Alex Simpson, editors, 30th EACSL Annual Conference on Computer Science Logic (CSL 2022), volume 216 of Leibniz International Proceedings in Informatics (LIPIcs), pages 3:1-3:17, Dagstuhl, Germany, 2022. Schloss Dagstuhl - Leibniz-Zentrum für Informatik.
Francesca Poggiolesi. The method of tree-hypersequents for modal propositional logic. In D. Makinson, J. Malinowski, and H. Wansing, editors, Towards Mathematical Philosophy, volume 28 of Trends in Logic, pages 31-51. Springer, 2009.
Christian Retoré. Réseaux et Séquents Ordonnés. PhD thesis, Université Paris VII, 1993.
Christian Retoré. Perfect matchings and series-parallel graphs: multiplicatives proof nets as R&B-graphs. Electronic Notes in Theoretical Computer Science, 3, 1996.
Christian Retoré. Handsome proof-nets: R&B-graphs, perfect matchings and series-parallel graphs. Rapport de recherche 3652, INRIA, 1999. Appeared as [75].
Christian Retoré. Pomset logic as a calculus of directed cographs. In V. M. Abrusci and C. Casadio, editors, Dynamic Perspectives in Logic and Linguistics, pages 221-247. Bulzoni, Roma, 1999. Also available as INRIA Rapport de Recherche RR-3714.
Christian Retoré. Handsome proof-nets: perfect matchings and cographs. Theoretical Computer Science, 294(3):473-488, 2003.
Alwen Fernanto Tiu. A system of interaction and structure II: The need for deep inference. Logical Methods in Computer Science, 2(2):1-24, 2006.
Anne Sjerp Troelstra and Helmut Schwichtenberg. Basic Proof Theory. Cambridge University Press, second edition, 2000.
Timothy Waring. A graph theoretic extension of boolean logic. Master's thesis, 2019.
Glynn Winskel, Silvain Rideau, Pierre Clairambault, and Simon Castellan. Games and strategies as event structures. Logical Methods in Computer Science, 13, 2017.
| [] |
[
"Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision",
"Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision",
"Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision",
"Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision"
] | [
"Senior Member, IEEEJi Id Zhong ",
"Jin Li ",
"Qiang Wang ",
"Fellow, IEEEZhongfei Zhang ",
"Senior Member, IEEEJi Id Zhong ",
"Jin Li ",
"Qiang Wang ",
"Fellow, IEEEZhongfei Zhang "
] | [] | [
"SUBMITTED TO IEEE TRANSACTIONS ON IMAGE PROCESSING",
"SUBMITTED TO IEEE TRANSACTIONS ON IMAGE PROCESSING"
] | General Continual Learning (GCL) aims at learning from non independent and identically distributed stream data without catastrophic forgetting of the old tasks that don't rely on task boundaries during both training and testing stages. We reveal that the relation and feature deviations are crucial problems for catastrophic forgetting, in which relation deviation refers to the deficiency of the relationship among all classes in knowledge distillation, and feature deviation refers to indiscriminative feature representations. To this end, we propose a Complementary Calibration (CoCa) framework by mining the complementary model's outputs and features to alleviate the two deviations in the process of GCL. Specifically, we propose a new collaborative distillation approach for addressing the relation deviation. It distills model's outputs by utilizing ensemble dark knowledge of new model's outputs and reserved outputs, which maintains the performance of old tasks as well as balancing the relationship among all classes. Furthermore, we explore a collaborative self-supervision idea to leverage pretext tasks and supervised contrastive learning for addressing the feature deviation problem by learning complete and discriminative features for all classes. Extensive experiments on four popular datasets show that our CoCa framework achieves superior performance against state-of-the-art methods. | 10.1109/tip.2022.3230457 | [
"https://export.arxiv.org/pdf/2109.02426v2.pdf"
] | 237,420,393 | 2109.02426 | 3944dd9dbd77966acbf405425f2d6a227b4bac79 |
Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision
Senior Member, IEEEJi Id Zhong
Jin Li
Qiang Wang
Fellow, IEEEZhongfei Zhang
Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision
SUBMITTED TO IEEE TRANSACTIONS ON IMAGE PROCESSING
Index Terms—General continual learning, complementary calibration, knowledge distillation, self-supervised learning, supervised contrastive learning
General Continual Learning (GCL) aims at learning from non independent and identically distributed stream data without catastrophic forgetting of the old tasks that don't rely on task boundaries during both training and testing stages. We reveal that the relation and feature deviations are crucial problems for catastrophic forgetting, in which relation deviation refers to the deficiency of the relationship among all classes in knowledge distillation, and feature deviation refers to indiscriminative feature representations. To this end, we propose a Complementary Calibration (CoCa) framework by mining the complementary model's outputs and features to alleviate the two deviations in the process of GCL. Specifically, we propose a new collaborative distillation approach for addressing the relation deviation. It distills model's outputs by utilizing ensemble dark knowledge of new model's outputs and reserved outputs, which maintains the performance of old tasks as well as balancing the relationship among all classes. Furthermore, we explore a collaborative self-supervision idea to leverage pretext tasks and supervised contrastive learning for addressing the feature deviation problem by learning complete and discriminative features for all classes. Extensive experiments on four popular datasets show that our CoCa framework achieves superior performance against state-of-the-art methods.
I. INTRODUCTION
Human beings have the capability to continuously acquire, adjust and transfer knowledge, which is exactly what we desire agents to have. Continual learning [1], [2], also called incremental learning or lifelong learning, focuses on the problem of learning from a data stream under non-stationary data distributions. These data come from different tasks, in which the input domains are constantly changing. In this situation, we hope to exploit the acquired knowledge to solve both the old and new tasks. Continual learning has a wide range of related applications in the real world, such as object detection [3], product search [4] and 3D object classification [5].
Early studies of continual learning primarily focused on the Task Incremental Learning (Task-IL) scenario [7], [8], [9], in which the difficulty is greatly reduced by employing task boundaries during testing stage. Recently, lots of studies consider a more challenging setting, i.e., Class Incremental Learning (Class-IL) [10], [11], [12], in which task boundaries are unavailable when testing stage. However, existing methods for both Task-IL and Class-IL rely on task boundaries in the training stage, which are not in line with the practical requirement. In this paper, we consider a more complex and practical setting: General Continual Learning (GCL) [2], [13], whose task boundaries are not available during both training and testing stages. Therefore, most of the existing continual learning methods cannot be applied to GCL.
Recently, Buzzega et al. [13] proposed a simple and strong GCL baseline named Dark Experience Replay (DER). They balanced the stability-plasticity dilemma by knowledge distillation. Concretely, they took the old model as the teacher and reserved the old model's outputs of the old samples. However, new samples are unseen to the old model, which leads to inaccurate old-model outputs. As shown in Fig. 1, the outputs of an old model lack the relationship between the old and new classes. In addition, when a new task consists of new samples of the old classes, the relationship among the classes in the old model may be incomplete. We refer to the deficiency of the relationship among all classes in knowledge distillation as relation deviation, which interferes with balancing the relationship between the old and new classes.
Fig. 2. The feature extractor tends to extract discriminative features for a single task rather than all tasks. For example, the "horn" feature is enough to distinguish sheep from wolf for task 1, while the "ear" feature is enough to distinguish the sheepdog and goose for task 2. However, when both tasks are mixed, previously extracted features may not be discriminative enough, that is, the feature representations are incomplete. For example, the "ear" and "horn" features are indiscriminative in representing sheepdog and wolf.
Moreover, another reason for catastrophic forgetting in GCL is that the feature representations are not discriminative enough in representing both the old and new classes, that is, the feature representations are incomplete for all classes in GCL. As shown in Fig. 2, the feature extractor tends to extract the discriminative features of a single task, which are indiscriminative in representing the data in the following classes. Thus, it is hard to distinguish all classes with them. We name this indiscriminative feature representation problem as feature deviation.
To tackle the above problems, we propose a Complementary Calibration (CoCa) framework to alleviate the relation and feature deviations by mining the complementary information of the model's outputs and features. Particularly, we first utilize the collaborative distillation technique, ensembling dark knowledge to balance the relationship among classes while maintaining the performance of old tasks. Then, we employ collaborative self-supervision composed of pretext tasks and supervised contrastive learning, in which pretext tasks enable the feature extractor to learn complete features, while supervised contrastive learning maintains the meaningful transformations of the pretext tasks and learns discriminative features between the new and old classes. Pretext tasks and supervised contrastive learning complement each other, ensuring the feature representations to be complete and discriminative for all classes in GCL. The proposed framework is shown in Fig. 3.
The main highlights are summarized below:
(1) We reveal that relation and feature deviations are crucial problems for catastrophic forgetting in GCL, and propose a novel Complementary Calibration (CoCa) framework for GCL to alleviate these two deviations by exploring the complementary information of model's outputs and features.
(2) We ensemble dark knowledge to alleviate the relation deviation, keeping the performance of the old tasks and balancing the inter-class relationships via collaborative distillation.
(3) We leverage a collaborative self-supervised network by exploiting pretext tasks and supervised contrastive learning, which enables the feature extractor to learn complete and discriminative features for all classes in GCL.
(4) Extensive experiments on four popular datasets, namely sequential CIFAR-10, sequential CIFAR-100, sequential Tiny ImageNet and MNIST-360, show that our CoCa framework outperforms the state-of-the-art methods.
II. RELATED WORK
A. Continual Learning
Early studies focus on Task Incremental Learning (Task-IL) [2], the simplest continual learning scenario, in which the task boundaries are available during both training and testing stages. Such approaches can be roughly categorized into three types: rehearsal-based methods, regularization-based methods and structure-based methods. Rehearsal-based methods [14], [7], [15] replay reserved samples from old tasks while learning a new task to mitigate catastrophic forgetting. Regularization-based methods [8], [9], [16] constrain the parameters of each part of the model to protect the previously learned knowledge. Structure-based methods [17], [18], [19] alleviate catastrophic forgetting by modifying the underlying architecture of the network.
Due to the rigid restriction of Task-IL, recent studies pay more attention to Class Incremental Learning (Class-IL) [10], [20], which prohibits access to the task boundaries during the testing stage. Different from Task-IL, Class-IL needs to distinguish all seen classes at test time. Many studies [10], [21], [22] have revealed that Class-IL models are easily biased towards new classes; thus existing efforts usually aim at alleviating this problem by deviation amendment from different aspects, such as normalized features [10], class statistics [23] and weight aligning [22].
Although great progress has been achieved, most Task-IL and Class-IL approaches depend on task boundaries in the training stage. Actually, task boundaries are blurry in practical scenarios due to the fact that stream data usually have no clear task divisions. Thus, recent studies set out to explore General Continual Learning (GCL) [2], [13], whose differences from the Task-IL and Class-IL settings are mainly in two aspects: (i) Task boundaries are not necessary during both training and testing stages; (ii) Memory size is limited even facing infinite stream data. Therefore, it is a quite challenging setting.
Some efforts towards GCL focus on the sampling strategy. Isele et al. [ ] employ a sampling strategy so that the probability that each sample is selected is equal. Aljundi et al. [25] proposed a greedy selection method named Gradient based Sample Selection (GSS), which aims at improving the diversity of samples. Afterwards, some methods concentrate on mixed methods. Rao et al. [26] proposed an unsupervised continual learning approach called CURL with model expansion and generative replay to maintain the performance of old tasks. On the basis of CURL, Lee et al. [27] proposed Continual Neural Dirichlet Process Mixture (CN-DPM), which utilizes the Bayesian nonparametric framework to enlarge the number of experts. Buzzega et al. [13] proposed Dark Experience Replay (DER) with the combination of regularization and rehearsal-based methods, which employs experience replay and knowledge distillation to promote the consistency of the new model's outputs with the original outputs. Our CoCa framework also combines knowledge distillation with rehearsal; the key difference is that we explore collaborative distillation to balance the relationship among all classes by utilizing the new model's outputs.
B. Knowledge Distillation
Knowledge distillation [28] refers to the approach in which the training of a student model is carried out under the supervision of a teacher model with dark knowledge (soft targets). The dark knowledge contains the rich similarity relationships among all classes. In addition, the student model could distill knowledge by itself, which is called self-distillation [29]. Knowledge distillation is widely applied in continual learning to address the catastrophic forgetting problem. LWF [30] is the earliest work to explore it in continual learning, leveraging the old model's outputs on new samples to constrain the new model's outputs. Afterwards, FDR [31] stores samples as well as the dark knowledge of the old model, and constrains the ℓ2 norm of the difference between the new model's outputs and the dark knowledge. Unlike label distillation, LUCIR [10] directly constrains the normalized features extracted by the new model to be consistent with those of the old model, while PODNet [11] constrains the evolution of each layer's output. Further, DDE [12] introduces causal inference to distill the causal effect between the old and new data. Different from them, our proposed collaborative distillation explores ensemble dark knowledge from the old and new models, which contains more informative similarity relationships than that from a single model.
C. Self-Supervised Learning
Self-Supervised Learning (SSL) [32] refers to learning representations from large amounts of data without manual labels, which explores input samples' inherent co-occurrence relationships as supervision. A typical type of SSL is pretext tasks, which generally leverage the spatial structure or sequential relationships of input images, such as pretext-invariant representations [33] and geometric transformation [34]. Another type is unsupervised contrastive learning [35], [36], which utilizes the contrastive loss to pull multiple views of an image closer and push those from other samples apart in an embedding space. SSL is widely applied in many fields, including few-shot learning [37], imbalance learning [38] and continual learning [39]. Zhu et al. [39] utilized SSL to extract discriminative feature representations and memorized class-representative prototypes to maintain the class boundaries. Khosla et al. [40] extended unsupervised contrastive learning to the supervised setting by employing images from the same class as positive samples. Positive samples are pulled closer and the other samples are pushed away in an embedding space. Mai et al. [41] proposed Supervised Contrastive Replay (SCR), leveraging supervised contrastive learning and a nearest-class-mean classifier to mitigate catastrophic forgetting.
The closest studies to our work are PASS [39] and SCR [41]. Particularly, PASS memorizes each class-representative prototype, which depends on task boundaries. SCR [41] adopts supervised contrastive learning to obtain well-separated representations, in which the trainings of the feature extractor and the classifier are separated. Moreover, the samples of old classes are apt to be lost, especially when the memory is very limited. Thus, SCR is hard to apply in GCL. Different from them, we introduce pretext tasks and supervised contrastive learning into collaborative self-supervision in our GCL framework, which complement each other in our Feature Calibration Module. In this way, complete and discriminative features can be obtained.
III. METHODOLOGY
In this work, we focus on GCL for image classification. Formally, the stream data are characterized by tasks D_1, D_2, . . . , D_T, where each task D_t = \{(x_{t,i}, y_{t,i})\}_{i=1}^{N_t} consists of N_t images x ∈ X and the corresponding labels y ∈ Y. The task boundaries are unavailable during both the training and testing stages. To alleviate catastrophic forgetting, similar to ER [24], we employ a constant memory M = \{(x_i, \hat{o}_i, y_i)\}_{i=1}^{B} to store a limited amount of previous training samples (x, y) and the corresponding model outputs \hat{o}, where B denotes the buffer size.
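To make the memory M concrete, below is a minimal sketch of a reservoir-sampling replay buffer in Python. The class and method names are our own illustration rather than the authors' implementation, and we assume samples arrive one at a time together with the logits to be stored.

```python
import random

class ReservoirBuffer:
    """Fixed-size memory M storing (x, o_hat, y) triples via reservoir sampling."""

    def __init__(self, buffer_size):
        self.buffer_size = buffer_size   # B
        self.data = []                   # list of (x, o_hat, y)
        self.num_seen = 0                # number of stream samples observed so far

    def add(self, x, o_hat, y):
        # Reservoir sampling: every stream sample ends up in the buffer
        # with equal probability, regardless of when it arrived.
        if len(self.data) < self.buffer_size:
            self.data.append((x, o_hat, y))
        else:
            idx = random.randint(0, self.num_seen)
            if idx < self.buffer_size:
                self.data[idx] = (x, o_hat, y)
        self.num_seen += 1

    def sample(self, batch_size):
        # Draw a replay mini-batch (without replacement, for simplicity).
        return random.sample(self.data, min(batch_size, len(self.data)))
```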
As shown in Fig. 3, our proposed CoCa framework consists of three parts: an Experience Replay Module, a Relation Calibration Module and a Feature Calibration Module. In what follows, we first introduce Experience Replay Module and analyze the relation deviation among classes in existing knowledge distillation methods. Then in Relation Calibration Module, we elaborate on how collaborative knowledge distillation mitigates the relation deviation. Finally, the Feature Calibration Module leverages collaborative self-supervision to alleviate the feature deviation, which is composed of pretext tasks based on geometric transformations and supervised contrastive learning to learn complete and discriminative features respectively. Interestingly, the alleviation process for relation and feature deviations are mutually complementary.
A. Experience Replay Module
Our feature extractor f_Θ for GCL is a neural network parameterized by Θ. The role of f_Θ is to extract feature representations for all tasks. Meanwhile, a classifier f_θ needs to be trained to project the learned feature representations into the label space. The ideal goal is to minimize the following formula:
\arg\min_{\Theta,\theta} \sum_{t=1}^{T} \mathbb{E}_{(x,y)\sim D_t}\big[ L_{CE}\big(\sigma(f_{\Theta,\theta}(x)), y\big) \big],  (1)
where L_CE is the cross-entropy loss and σ(·) is the softmax function. Since all samples from old tasks cannot be accessed, the Experience Replay Module stores a subset of the previous training sets and employs them to jointly optimize the model. This is equivalent to correctly classifying new tasks given the limited memory M from old tasks. Therefore, Eq. 1 is substituted by the following loss terms:
L_{base} = \mathbb{E}_{(x,y)\sim D_t}\big[ L_{CE}\big(\sigma(f_{\Theta,\theta}(x)), y\big) \big] + \mathbb{E}_{(x,y)\sim M}\big[ L_{CE}\big(\sigma(f_{\Theta,\theta}(x)), y\big) \big].  (2)
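As a sketch of Eq. (2) (assuming a PyTorch model `net` that maps images to logits over all classes; the function and argument names are ours), L_base is the cross-entropy on the incoming mini-batch plus the cross-entropy on a mini-batch replayed from M:

```python
import torch.nn.functional as F

def base_loss(net, x_new, y_new, x_buf, y_buf):
    # Cross-entropy on the current stream batch plus a replayed batch from M, Eq. (2).
    return F.cross_entropy(net(x_new), y_new) + F.cross_entropy(net(x_buf), y_buf)
```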
The Experience Replay Module is widely applied in image classification. For example, Buzzega et al. [13] proposed a strong baseline dubbed DER that employs experience replay and knowledge distillation to maintain the performance of old tasks in the GCL setting. In particular, it constrains the new model's outputs to be as consistent as possible with the old model's outputs. Formally, the loss is written as:
L_{KD} = \mathbb{E}_{(x,\hat{o})\sim M}\, \big\| f_{\Theta,\theta}(x) - \hat{o} \big\|_2^2,  (3)
where \hat{o} denotes the model outputs sampled from old models and \|\cdot\|_2 refers to the ℓ2 norm. However, when training the new model in GCL, \hat{o} is deficient in expressing the relationship among all classes. We refer to this deficiency as relation deviation.
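The logit-matching term of Eq. (3) can be sketched as below, assuming `o_hat_buf` holds the logits that were stored together with the buffered images. The helper name is ours, and `F.mse_loss` averages over elements, so it matches the squared ℓ2 norm only up to a constant factor.

```python
import torch.nn.functional as F

def kd_loss(net, x_buf, o_hat_buf):
    # Match the current logits on buffered images to the stored old logits, Eq. (3).
    return F.mse_loss(net(x_buf), o_hat_buf)
```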
B. Relation Calibration Module
As mentioned above, the relation deviation in GCL is caused by forcing the outputs of the new model to be consistent with the old model's outputs, which lack the complete inter-class relationships available in the outputs of the new model. Meanwhile, when the model is trained on a new task, its performance on old tasks usually drops significantly, indicating inaccurate outputs of the new model on old tasks. Fortunately, the missing inter-class relationships exist in the new model's outputs, and this inaccuracy can be compensated by the old model's outputs. Therefore, we naturally aim to explore the complementary properties of the two outputs by forcing the new model's outputs to be consistent with the ensemble outputs of the old and new models.
Specifically, we propose a collaborative distillation loss:
L_{CKD} = \mathbb{E}_{(x,\hat{o})\sim M}\, \big\| o - o^{*}(o,\hat{o}) \big\|_2^2.  (4)
Noticeably, we cannot directly combine o and \hat{o} linearly. Otherwise, if the ensemble outputs o^{*} were composed as o^{*}(o,\hat{o}) = \gamma o + (1-\gamma)\hat{o}, where γ is a positive trade-off hyper-parameter balancing \hat{o} and o, then L_CKD and L_KD would be linearly correlated:
L_{CKD} = \mathbb{E}_{(x,\hat{o})\sim M}\, \big\| o - \big(\gamma o + (1-\gamma)\hat{o}\big) \big\|_2^2 = (1-\gamma)^2 L_{KD}.  (5)
Meanwhile, the new model's outputs may be inaccurate. Therefore, we adopt the feature similarity within the same batch to propagate the new model's outputs o and fuse them with the old model's outputs \hat{o}. In this way, the ensemble outputs o^{*}(o,\hat{o}) are obtained with complete inter-class relationships, calculated by the following three steps. Firstly, we calculate the normalized samples' similarity \hat{S}(i,j) within the same batch. For each pair of samples (x_i, x_j), with normalized feature embeddings (\hat{z}_i, \hat{z}_j) obtained by the feature extractor f_Θ, the normalized similarity matrix \hat{S} ∈ R^{N×N} is calculated by
\hat{S}(i,j) = \frac{\exp(S(i,j))}{\sum_{k \neq i} \exp(S(i,k))},  (6)
where the similarity function S(i, j) is defined as
S(i,j) = \begin{cases} \hat{z}_i^{T}\hat{z}_j & (i \neq j) \\ 0 & (i = j) \end{cases}.  (7)
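Eqs. (6) and (7) can be sketched as follows (our own illustration; `feats` are the feature embeddings of one mini-batch, and setting the diagonal of \hat{S} to zero is our choice since Eq. (6) leaves it unspecified):

```python
import torch
import torch.nn.functional as F

def normalized_similarity(feats):
    # feats: (N, d) feature embeddings of one mini-batch.
    z = F.normalize(feats, dim=1)              # normalized embeddings z_hat
    s = z @ z.t()                              # z_hat_i^T z_hat_j, Eq. (7)
    s.fill_diagonal_(0.0)                      # S(i, i) = 0
    diag = torch.eye(s.size(0), dtype=torch.bool, device=s.device)
    e = torch.exp(s).masked_fill(diag, 0.0)    # exclude k = i from the sum
    return e / e.sum(dim=1, keepdim=True)      # row-normalized S_hat, Eq. (6)
```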
Secondly, we conduct label propagation with o and the normalized similarity matrix \hat{S}, as described in [42]:
Q_t = \omega \hat{S} Q_{t-1} + (1-\omega)\, o \quad (t \geq 0),  (8)
where ω is a weighting factor in range [0, 1) and Q 0 = o.
The label propagation is conducted many times for obtaining more accurate outputs:
Q_{\infty} = \lim_{t \to \infty} \Big[ (\omega\hat{S})^{t} o + (1-\omega) \sum_{i=0}^{t-1} (\omega\hat{S})^{i} o \Big].  (9)
Since ω and the eigenvalues of \hat{S} are both in the range [0, 1), we obtain a closed-form formulation for Q_∞ as:
Q_{\infty} = (1-\omega)\big( I - \omega\hat{S} \big)^{-1} o,  (10)
where I is an identity matrix. Finally, the ensemble outputs o^{*} consist of the old model's outputs \hat{o} and the modified outputs Q_∞, which are written as:
o^{*}(o,\hat{o}) = \gamma (1-\omega)\big( I - \omega\hat{S} \big)^{-1} o + (1-\gamma)\hat{o}.  (11)
In this way, we alleviate the relation deviation among all classes in the old model's outputs. Even though the matrix inversion takes O(n^3) time, the computational cost is trivial when the batch size n is limited.
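Putting Eqs. (8)-(11) and Eq. (4) together, a possible sketch of the Relation Calibration step is given below. It reuses the `normalized_similarity` helper sketched above; `omega` and `gamma` correspond to ω and γ, and detaching the ensemble target is our assumption rather than something stated in the text.

```python
import torch
import torch.nn.functional as F

def ensemble_outputs(o_new, o_old, feats, omega=0.1, gamma=0.1):
    # Closed-form label propagation of the new logits, Eq. (10),
    # followed by fusion with the stored old logits, Eq. (11).
    s_hat = normalized_similarity(feats)
    eye = torch.eye(s_hat.size(0), device=s_hat.device)
    q_inf = (1.0 - omega) * torch.linalg.solve(eye - omega * s_hat, o_new)
    return gamma * q_inf + (1.0 - gamma) * o_old

def ckd_loss(o_new, o_old, feats):
    # Collaborative distillation, Eq. (4); the ensemble target is detached so
    # gradients only flow through the left-hand logits (our choice).
    with torch.no_grad():
        o_star = ensemble_outputs(o_new, o_old, feats)
    return F.mse_loss(o_new, o_star)
```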
C. Feature Calibration Module
Besides relation deviation, feature deviation is another key challenge, which is caused by indiscriminative feature representations. To address this challenge, we develop a Feature Calibration Module, which consists of pretext tasks and supervised contrastive learning. In detail, we first design self-supervised pretext tasks as auxiliary supervision, enabling the feature extractor to learn complete features. Then, we utilize supervised contrastive learning to learn discriminative features between the new and old classes.
As the samples of the old tasks are unavailable in GCL, the feature extractor tends to extract discriminative features only for the new incoming task. This tendency results in incomplete feature representations, which generally cannot well distinguish the old tasks from the new ones. To calibrate the incomplete feature representations, we exploit self-supervised learning based on geometric-transformation pretext tasks to enable the feature extractor to extract complete features, which represent the rich spatial or sequential relationships of the samples. In particular, we apply a pretext-task loss L_PT to distinguish which geometric transformation has been applied to the original images. A multi-layer perceptron f_ψ is applied as the auxiliary classifier to project the features extracted by f_Θ into the proxy-label space. Accordingly, the pretext-task loss L_PT of the geometric transformation tasks is designed as
L_{PT} = L_{CE}\big( \sigma( f_{\Theta,\psi}(x_p) ), y_p \big),  (12)
where the proxy label y_p indexes a set of geometric transformations, such as rotation and scaling, and the image x_p is produced by applying the transformation indicated by y_p to the original image x.
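A sketch of the geometric-transformation pretext task of Eq. (12), restricted to rotations for brevity; `backbone`, `aux_head` and the choice of one random transformation per image are our own illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def pretext_loss(backbone, aux_head, x):
    # Rotate each (C, H, W) image by a random multiple of 90 degrees and let the
    # auxiliary classifier predict which rotation was applied, Eq. (12).
    y_p = torch.randint(0, 4, (x.size(0),), device=x.device)   # proxy labels
    x_p = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                       for img, k in zip(x, y_p)])
    return F.cross_entropy(aux_head(backbone(x_p)), y_p)
```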
To better learn discriminative features for all tasks, we further leverage supervised contrastive learning in the CoCa framework. In detail, we introduce another MLP f_φ with parameters φ, whose purpose is to map the features into an embedding space where the supervised contrastive loss is applied. In this embedding space, the distances between samples from the same class are shortened and those between samples from different classes are enlarged. Let z = f_{Θ,φ}(x) denote the embedding, and let {z_i^+} and {z_i^-} denote the sets of all positive and negative samples distinct from z_i in the multi-viewed batch, respectively. The supervised contrastive loss is then written as
L_{SCL} = \mathbb{E}_{z_i \sim z}\Big[ -\sum_{z_j \in \{z_i^{+}\}} \frac{S(z_i,z_j)}{\tau} + \log\Big( \sum_{z_j \in \{z_i^{+}\}} \exp\big(\tfrac{S(z_i,z_j)}{\tau}\big) + \sum_{z_k \in \{z_i^{-}\}} \exp\big(\tfrac{S(z_i,z_k)}{\tau}\big) \Big) \Big],  (13)
where (i, j, k) denote indices, τ is a scalar temperature parameter, and the term \sum_{z_k \in \{z_i^{-}\}} \exp(S(z_i,z_k)/\tau) in Eq. 13 encourages samples from different classes to be pushed as far apart as possible on the unit hypersphere. In this way, the features are distributed as evenly as possible on the unit hypersphere. Supervised contrastive learning can well distinguish the old and new classes as it utilizes the label information.
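A sketch of the supervised contrastive term as written in Eq. (13), i.e., without the usual 1/|P(i)| normalization; `z` are projected, L2-normalized embeddings of a multi-viewed batch, `labels` are their class labels, and the helper name is ours.

```python
import torch

def supcon_loss(z, labels, tau=0.1):
    # z: (N, d) normalized embeddings; labels: (N,) class ids.
    sim = (z @ z.t()) / tau                                    # S(z_i, z_j) / tau
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    exp_sim = torch.exp(sim).masked_fill(self_mask, 0.0)       # every sample except i itself
    log_denom = torch.log(exp_sim.sum(dim=1))                  # log-sum over positives and negatives
    loss_i = -(sim * pos_mask.float()).sum(dim=1) + log_denom  # Eq. (13), per anchor i
    return loss_i.mean()
```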
Noticeably, pretext tasks and supervised contrastive learning are cooperative and complementary. They not only cooperate to obtain better feature representations that alleviate the feature deviation problem, but also make up for each other's defects. Concretely, on the one hand, the pretext tasks may be redundant and may even produce interference. For example, it is difficult to distinguish the digit 6 from the digit 9 after a 180° rotation. Thankfully, supervised contrastive learning suppresses the redundancy of the pretext tasks via the supervised information; on the other hand, when samples of the old classes are missing due to the limited memory, the complete features learned by the pretext tasks assist supervised contrastive learning in obtaining discriminative features for all tasks. Therefore, the collaborative self-supervision loss L_CSS of the Feature Calibration Module becomes
L_{CSS} = L_{PT} + L_{SCL}.  (14)
The Feature Calibration Module explores existing samples and features to learn complete and discriminative features for both old and new tasks; these complementary features alleviate catastrophic forgetting effectively.
D. The Overall Objective
Since our proposed CoCa framework consists of Experience Replay Module, Feature Calibration Module and Relation Calibration Module, the objective function of the whole training stage is as follows:
L_{CoCa} = L_{base} + \lambda_1 L_{CKD} + \lambda_2 L_{CSS},  (15)
where λ 1 and λ 2 are hyperparameters. At the test stage, the Feature Calibration Module and Relation Calibration Module are removed.
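A sketch of one CoCa training step combining Eqs. (2), (4) and (12)-(15). All module and function names are ours, the illustrative helpers sketched above are reused, and the batched buffer API (`sample_batch`, `add_batch`) is assumed rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def train_step(backbone, classifier, aux_head, proj_head, buffer,
               optimizer, x_new, y_new, lambda1=1.0, lambda2=1.0):
    optimizer.zero_grad()
    # Replay mini-batch from memory M.
    x_buf, o_hat_buf, y_buf = buffer.sample_batch(x_new.size(0))

    feats_new, feats_buf = backbone(x_new), backbone(x_buf)
    logits_new, logits_buf = classifier(feats_new), classifier(feats_buf)

    # Eq. (2): cross-entropy on stream and replayed samples.
    loss = F.cross_entropy(logits_new, y_new) + F.cross_entropy(logits_buf, y_buf)
    # Eq. (4): collaborative distillation on the replayed samples.
    loss = loss + lambda1 * ckd_loss(logits_buf, o_hat_buf, feats_buf)
    # Eqs. (12)-(14): collaborative self-supervision on the whole batch.
    x_all, y_all = torch.cat([x_new, x_buf]), torch.cat([y_new, y_buf])
    z_all = F.normalize(proj_head(backbone(x_all)), dim=1)
    loss = loss + lambda2 * (pretext_loss(backbone, aux_head, x_all)
                             + supcon_loss(z_all, y_all))

    loss.backward()
    optimizer.step()
    # Reservoir update: store new samples together with their current logits.
    buffer.add_batch(x_new.detach(), logits_new.detach(), y_new)
    return loss.item()
```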
IV. EXPERIMENTS
A. Datasets
Four popular datasets are selected in our image classification experiments: sequential CIFAR-10, sequential CIFAR-100, sequential Tiny ImageNet and MNIST-360.
CIFAR-10 [44] consists of 10 classes; each class has 6000 samples of 32×32 color images, including 5000 training samples and 1000 test samples. CIFAR-100 [44] is similar to CIFAR-10 except that the number of classes is 100. Each class has 600 images, which are divided into 500 for training and 100 for testing. Tiny ImageNet [45] is a subset of ImageNet [46], which contains 200 classes, and each class has 500 samples with 64 × 64 color images for training. We split CIFAR-10, CIFAR-100 and Tiny ImageNet evenly into 5, 20 and 10 sequential tasks, respectively, each of which includes 2, 5 and 20 classes, i.e., sequential CIFAR-10, sequential CIFAR-100 and sequential Tiny ImageNet. MNIST-360 [13] is specially designed for the GCL setting, which offers a sequence of MNIST digits from 0 to 8 at increasing angles. It builds each batch by using samples belonging to two consecutive classes at a time, such as (0, 1), (1, 2), ..., (8, 0). Different from the other three datasets, the task boundaries are blurry and the same task appears repeatedly. It is more challenging and practical for continual learning.
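As an illustration of this protocol, below is a minimal sketch of splitting CIFAR-10 into 5 two-class sequential tasks with torchvision; the helper is ours, not the authors' data pipeline.

```python
from torch.utils.data import Subset
from torchvision.datasets import CIFAR10
import torchvision.transforms as T

def make_sequential_cifar10(root, n_tasks=5, train=True):
    dataset = CIFAR10(root, train=train, download=True, transform=T.ToTensor())
    classes_per_task = 10 // n_tasks
    tasks = []
    for t in range(n_tasks):
        task_classes = set(range(t * classes_per_task, (t + 1) * classes_per_task))
        idx = [i for i, y in enumerate(dataset.targets) if y in task_classes]
        tasks.append(Subset(dataset, idx))
    return tasks   # five two-class datasets, presented sequentially
```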
B. Implementation details
We employ a fully connected network with two hidden layers as the backbone for the MNIST-360 dataset. As for the other datasets, we employ ResNet-18 [47] as the backbone. Two fully connected networks with three hidden layers are employed as the projector and the auxiliary classifier, respectively. The hyperparameters are selected via grid search by employing reserved samples of the validation set from all tasks' training sets. The numbers of epochs for MNIST-360, sequential CIFAR-10, sequential CIFAR-100 and sequential Tiny ImageNet are 1, 50, 50 and 100, respectively. Following [13], we utilize Stochastic Gradient Descent (SGD) as the optimizer and fix the batch size at 64 to ensure that the amount of updates for all methods is the same. Concretely, each batch consists of 32 new samples and 32 reserved samples, and the latter are updated with the reservoir sampling strategy [48] at the end of each batch.
In the Relation Calibration Module, we set both the weighting factor ω and the trade-off hyper-parameter γ to 0.1. In the Feature Calibration Module, the pretext tasks are composed of three types of geometric transformations: rotation {0°, 90°, 180°, 270°}, scaling {0.67, 1.0}, and aspect ratio {0.67, 1.33}. Additionally, we apply only one randomly chosen transformation per sample to save computation cost. Our approach is implemented with the PyTorch framework and trained on one NVIDIA GeForce RTX 3060 GPU.
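One possible way to draw a single random geometric transformation per sample, as described above, is sketched below with torchvision. The pretext-label encoding and the crop/resize realization of the scale and aspect-ratio distortions are assumptions made for illustration, not the paper's code.

```python
import random
import torchvision.transforms.functional as TF

# Transformation set assumed to mirror the one described in the text.
ROTATIONS = [0, 90, 180, 270]
SCALES = [0.67, 1.0]
ASPECT_RATIOS = [0.67, 1.33]

def random_pretext_transform(img):
    """Apply one randomly chosen transformation to a CxHxW tensor image and
    return the transformed image with an integer pretext label (0..7)."""
    kind = random.choice(["rotation", "scale", "aspect_ratio"])
    _, h, w = img.shape
    if kind == "rotation":
        idx = random.randrange(len(ROTATIONS))
        out = TF.rotate(img, ROTATIONS[idx])
        label = idx                                      # 0..3
    elif kind == "scale":
        idx = random.randrange(len(SCALES))
        crop = [max(1, int(h * SCALES[idx])), max(1, int(w * SCALES[idx]))]
        out = TF.resize(TF.center_crop(img, crop), [h, w])
        label = len(ROTATIONS) + idx                     # 4..5
    else:
        idx = random.randrange(len(ASPECT_RATIOS))
        # Change the height/width ratio of a (padded) center crop, then resize back.
        new_h = max(1, int(round(h / ASPECT_RATIOS[idx])))
        out = TF.resize(TF.center_crop(img, [new_h, w]), [h, w])
        label = len(ROTATIONS) + len(SCALES) + idx       # 6..7
    return out, label
```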
C. Comparison with State-of-the-Art Methods
1) Competitors: Two groups of competitors are selected. The first group consists of GCL approaches, including (1) CN-DPM (continual neural Dirichlet process mixture) [27]; (2) ER (experience replay with reservoir) [24]; (3) A-GEM (average gradient episode memory) [15]; (4) GSS (gradient sample selection) [25]; and (5) DER (dark experience replay) [13]. The second group consists of Class-IL and Task-IL approaches, for which task boundaries must be provided during the training stage, including: (1) LWF (learning without forgetting) [30];
(2) oEWC (online elastic weight consolidation) [16]; (3) SI (synaptic intelligence) [9]; (4) GEM (gradient episode memory) [7]; (5) iCaRL (incremental classifier and representation learning) [20]; (6) FDR (function distance regularization) [31]; and (7) HAL (hindsight anchor learning) [43]. In addition, we also report the performance bound, including: (1) JOINT represents that all data are available at any time, which is an upper bound; and (2) SGD means that no strategy is adopted to alleviate forgetting at the training time, which is the lower bound.
Following [2] and [13], we employ the average accuracy on all tasks as the evaluation criterion. To make a fair comparison, we apply the single-head setting during the training stage, and no pretrained model is used in any method. It should be emphasized that no task boundaries are provided to our proposed CoCa approach, even in comparison with the Class-IL and Task-IL approaches.
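For clarity, this criterion reduces to the mean of the per-task test accuracies measured after the whole task sequence has been learned; the sketch below uses made-up accuracy values purely for illustration.

```python
import numpy as np

def average_accuracy(per_task_accuracies):
    """Average accuracy over all tasks, evaluated after training on the
    complete task sequence (the criterion used in the comparisons below)."""
    return float(np.mean(per_task_accuracies))

# Example with arbitrary per-task accuracies (%) on a 5-task sequence
print(average_accuracy([91.2, 74.5, 63.0, 58.4, 88.9]))  # ≈ 75.2
```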
2) Results: Table I reports the results of the comparison in the GCL setting. It can be observed that our approach outperforms the competitors in all cases. Specifically, on the sequential CIFAR-10 dataset, the proposed CoCa framework outperforms the state-of-the-art competitors by at least 1.3%. Furthermore, CoCa beats all methods on the sequential CIFAR-100 dataset; for example, it surpasses the second-best method DER by around 7% with a buffer size of 500. As for the sequential Tiny ImageNet dataset, CoCa gains 1.82%, 0.95%, and 0.77% over the best competitor under buffer sizes of 200, 500, and 5120, respectively.
Since the MNIST-360 dataset has no task boundaries, many methods that rely on task boundaries, such as LWF [30], oEWC [16], SI [9], GEM [7], iCaRL [20], FDR [31], and HAL [43], are inapplicable to it; thus, their results are unavailable on the MNIST-360 dataset. We observe that our CoCa framework achieves the top performance across all buffer sizes, surpassing the second-best method by at least 5%. The margin is particularly impressive when the buffer size is limited, e.g., 200, with at least 12% gains over the other competitors.
These results indicate that the collaborative distillation and self-supervision greatly alleviate catastrophic forgetting by mitigating deviation in the absence of task boundaries.
In addition, we also observe significant performance differences of CoCa across datasets and buffer sizes. Specifically, the performance on the sequential CIFAR-10 dataset is higher than on the sequential Tiny ImageNet and sequential CIFAR-100 datasets; the reason is that it has far fewer classes than the other datasets. The performance differences among buffer sizes for the same dataset arise because the more replay samples are retained, the easier it is to alleviate the deviation. When the buffer size is large enough, the setting approximates joint training, i.e., the performance upper bound. As the buffer size increases from 200 to 5120, the average accuracy improves by at least 23% on these datasets.
Moreover, since all methods are applicable to the Task-IL setting, following [13], we also make a comparison in the Task-IL setting, as shown in Table II. Note that Task-IL experiments cannot be conducted on the MNIST-360 dataset since there are no task boundaries during the testing stage. We observe that our proposed approach also outperforms all the competitors on the three datasets, even though additional task boundaries are provided for the Class-IL and Task-IL approaches. Taking the sequential CIFAR-100 dataset as an example, the accuracies of our CoCa are 75.23%, 82.28%, and 91.36% under buffer sizes of 200, 500, and 5120, respectively.
D. Ablation Study
1) The impact of each component: To evaluate the impact of different components in CoCa, we conduct ablation studies on sequential CIFAR-100 and MNIST-360 datasets, as shown in Table III. We take the Experience Replay Module as baseline, upon which the following components are considered: CKD applies collaborative distillation loss, PT adds pretext tasks loss, SCL introduces supervised contrastive learning loss, CSS represents the combination of pretext tasks loss and supervised contrastive learning loss.
As shown in Table III, each component contributes positively to the model except for PT on the MNIST-360 dataset. This is because the samples in the MNIST-360 dataset are rotated at increasing angles, which interferes with the prediction of the pretext tasks. However, its combination with SCL, i.e., CSS, performs better than each of them individually, which shows that PT and SCL complement each other. Furthermore, when all the components are combined, the best performance is achieved in all settings. This consistent improvement verifies our statement that the joint calibration between CKD and CSS is beneficial for alleviating the deviation, that is, the relation calibration and feature calibration modules are mutually complementary.
2) The impact of the different numbers of tasks: We further conduct experiments to explore the impact of different numbers of tasks. We choose the sequential CIFAR-100 dataset as an example, with 10-split and 20-split settings, i.e., dividing CIFAR-100 into 10 and 20 tasks, respectively. As shown in Table IV, all competitors except for the upper bound JOINT are sensitive to the number of tasks: as the number of tasks increases from 10 to 20, the performance drops considerably. For example, under buffer sizes of 200, 500, and 5120, the declines are 4.86%, 5.55%, and 2.86% for CoCa, respectively. This is because an increasing number of tasks means a decreasing number of classes per task. Nevertheless, in both the 10-split and 20-split settings, our approach is superior to the other competitors, demonstrating the effectiveness of the CoCa framework.
3) The impact of parameters ω and γ: The hyper-parameters ω and γ are important for calibrating the relation deviation in Eq. (11). We again select the sequential CIFAR-100 dataset as an example; the results are shown in Fig. 4. As observed from Fig. 4(a), the proportion ω of label propagation has little impact on the Relation Calibration Module within the interval (0.1, 0.9). However, when ω reaches 1, there is a significant decline. This is because the ensemble output o* becomes equal to the reserved output ô, which fails to exploit collaborative distillation. Moreover, as shown in Fig. 4(b), γ is a sensitive hyper-parameter for balancing the relationship among all classes in collaborative distillation. Even with γ = 0.1, the accuracy improves by more than 2% over γ = 0, which shows that it is necessary to alleviate the relation deviation in knowledge distillation. However, as γ increases further, the ensemble output o* contains less information from the old model, resulting in a dramatic decline in model performance.
E. Visualization Analysis
1) t-SNE Results: To further verify the effectiveness of our CoCa framework in calibrating the feature deviation, we visualize the features of the test set of the MNIST-360 dataset, as shown in Fig. 5. We observe that the t-SNE of the baseline method ER is better than that of SGD. However, as evident in Fig. 5(b), the classes are not well distinguished and the class boundaries are neither precise nor compact. The Relation Calibration Module is formed by adding the collaborative distillation loss to the baseline; its t-SNE is shown in Fig. 5(c). Compared with ER, the Relation Calibration Module reduces the interference of features among different classes through the normalized similarity matrix, which further helps to aggregate samples of the same class. Figure 5(e) shows the t-SNE corresponding to the Feature Calibration Module, from which we observe that each class is well distinguished and the boundaries of most classes are clear. On the one hand, the complete features obtained through the pretext tasks lead to class representations that are well spread out; on the other hand, the supervised contrastive learning enlarges the inter-class distances and reduces the intra-class distances, which makes the boundaries clearer. Finally, our method takes advantage of these complementary modules, leading to more compact clusters and discriminative class boundaries. It is worth noting that the t-SNE of our method is comparable to that of JOINT, which also proves the effectiveness of our CoCa framework in alleviating the feature deviation.
2) Visualization of confusion matrices: Figure 6 provides the confusion matrices on the test set of the MNIST-360 dataset to give further insight into the effectiveness of our CoCa framework. The diagonal entries represent the accuracy of each class. As shown in Fig. 6(b), SGD is obviously biased towards the last task (8, 0). Interestingly, it obtains an easily distinguishable feature for the digit 1 (see Fig. 5(a)), but its accuracy on that class is still zero. This shows that catastrophic forgetting results from both the feature extractor and the classifier. By comparing Fig. 6(c) and Fig. 6(d), we observe that CoCa deviates only slightly towards the last task and achieves superior performance across all classes.
V. CONCLUSION
In this paper, we have proposed the CoCa framework to alleviate the relation and feature deviations in GCL by collaborative distillation and self-supervision. Specifically, the collaborative distillation mitigates the relation deviation by exploring ensemble dark knowledge in knowledge distillation to balance the relationship among classes. The collaborative self-supervision is composed of pretext tasks and supervised contrastive learning, which aims at learning complete and discriminative features to alleviate the feature deviation. Extensive experiments have demonstrated that our proposed CoCa framework outperforms the state-of-the-art approaches. In future work, we will consider leveraging online distillation approaches and exploring how to select positive samples in contrastive learning.
Fig. 1. Diagram of relation deviation in knowledge distillation. The old model has not seen the new classes, thus its output ô lacks the relationship between old and new classes. When the reserved sample x is replayed, this deficiency results in a deviation between the output o of the new model and the output ô of the old model, which leads to inaccurate continual learning.
L CKD to keep the new model's outputs o consistent with the ensemble outputs o * (o,ô) obtained by Relation Calibration Module. It utilizes the features similarity matrix, the outputs o of the new model and the reserved outputsô of the old model to obtain ensemble outputs o * (o,ô). Hence, the learning objective for collaborative knowledge distillation is:
Fig. 4. The impacts of the relation calibration parameters ω and γ on the sequential CIFAR-100 dataset with a buffer size of 500.
Fig. 5. t-SNE visualization of features for the test set of the MNIST-360 dataset. (a) SGD. (b) ER. (c) Relation Calibration Module. (d) JOINT. (e) Feature Calibration Module. (f) CoCa.
Fig. 6. Confusion matrices of four different variations for the test set of the MNIST-360 dataset. (a) JOINT. (b) SGD. (c) ER. (d) CoCa (ours).
received xxx xx, 2021; revised xxx xx, 2021. This work was supported by the National Natural Science Foundation of China (NSFC) under Grants 6217020340 and 61771329. Zhong Ji, Jin Li, and Qiang Wang are with the School of Electrical and Information Engineering, Tianjin University, Tianjin 300072, China, and also with the Tianjin Key Laboratory of Brain-Inspired Intelligence Technology, Tianjin University, Tianjin 300072, China (e-mail: [email protected]; [email protected]; [email protected]). Zhongfei Zhang is with the Computer Science Department, Binghamton University, State University of New York, Binghamton, NY 13902, USA (email: [email protected]). *The corresponding author is Qiang Wang.
Fig. 2. Diagram of the indiscriminative feature representations in GCL.
Fig. 3. Overview of the proposed CoCa framework at the training stage, which is composed of three parts: an Experience Replay Module, a Relation Calibration Module, and a Feature Calibration Module. A backbone f_Θ is shared between the Experience Replay Module and the Feature Calibration Module to extract image representations; then an auxiliary classifier f_ψ with a non-linear MLP and a classifier f_θ with a single linear layer are applied on top of the image representations to predict labels. A non-linear projector f_φ in the Feature Calibration Module is adopted to translate the image representations for the supervised contrastive learning loss.
TABLE I: AVERAGE ACCURACY (%) ON GCL SETTING. "*" INDICATES THAT TASK BOUNDARIES ARE PROVIDED AT THE TRAINING STAGE.

Method       | sequential CIFAR-10    | sequential CIFAR-100   | sequential Tiny ImageNet | MNIST-360
JOINT        | 92.20                  | 69.55                  | 59.99                    | 82.98
SGD          | 19.62                  | 4.33                   | 7.92                     | 19.02
LWF* [30]    | 19.61                  | 4.26                   | 8.46                     | -
oEWC* [16]   | 19.49                  | 3.49                   | 7.58                     | -
SI* [9]      | 19.48                  | 4.60                   | 6.58                     | -
CN-DPM [27]  | 45.21                  | 20.10                  | -                        | -
B            | 200 / 500 / 5120       | 200 / 500 / 5120       | 200 / 500 / 5120         | 200 / 500 / 1000
GEM* [7]     | 25.54 / 26.20 / 25.26  | 8.17 / 12.45 / 4.55    | - / - / -                | - / - / -
iCaRL* [20]  | 49.02 / 47.55 / 55.07  | 19.26 / 24.71 / 29.78  | 7.53 / 9.38 / 14.08      | - / - / -
FDR* [31]    | 30.91 / 28.71 / 19.70  | 11.67 / 18.00 / 32.35  | 8.70 / 10.54 / 28.97     | - / - / -
HAL* [43]    | 32.36 / 41.79 / 59.12  | 7.60 / 9.55 / 22.11    | - / - / -                | - / - / -
ER [24]      | 44.79 / 57.74 / 82.47  | 9.84 / 14.64 / 44.79   | 8.49 / 9.99 / 27.40      | 49.27 / 65.04 / 75.18
A-GEM [15]   | 20.04 / 22.67 / 21.99  | 4.73 / 4.74 / 4.87     | 8.07 / 8.06 / 7.96       | 28.34 / 28.13 / 29.21
GSS [25]     | 39.07 / 49.73 / 67.27  | 6.35 / 7.44 / 9.71     | - / - / -                | 43.92 / 54.45 / 63.84
DER [13]     | 64.88 / 72.70 / 85.24  | 18.66 / 28.70 / 51.20  | 10.96 / 19.38 / 39.02    | 54.16 / 69.62 / 76.03
CoCa (ours)  | 66.25 / 76.27 / 89.52  | 21.20 / 32.88 / 58.38  | 12.78 / 20.33 / 39.79    | 67.02 / 76.49 / 81.82
TABLE II: AVERAGE ACCURACY (%) ON TASK-IL SETTING. "*" INDICATES THAT TASK BOUNDARIES ARE PROVIDED AT THE TRAINING STAGE.

Method       | sequential CIFAR-10    | sequential CIFAR-100   | sequential Tiny ImageNet
JOINT        | 98.31                  | 95.22                  | 82.04
SGD          | 61.02                  | 39.91                  | 18.31
LWF* [30]    | 63.29                  | 25.02                  | 15.85
oEWC* [16]   | 68.29                  | 23.13                  | 19.20
SI* [9]      | 68.05                  | 38.37                  | 36.32
B            | 200 / 500 / 5120       | 200 / 500 / 5120       | 200 / 500 / 5120
GEM* [7]     | 90.44 / 92.16 / 95.55  | 68.56 / 71.69 / 78.21  | - / - / -
iCaRL* [20]  | 88.99 / 88.22 / 92.23  | 71.56 / 77.99 / 81.59  | 28.19 / 31.55 / 40.83
FDR* [31]    | 91.01 / 93.29 / 94.32  | 68.78 / 75.20 / 82.33  | 40.36 / 49.88 / 68.01
HAL* [43]    | 82.51 / 84.54 / 88.51  | 51.49 / 57.03 / 67.42  | - / - / -
ER [24]      | 91.19 / 93.61 / 96.98  | 69.40 / 74.45 / 88.83  | 38.17 / 48.64 / 67.29
A-GEM [15]   | 83.88 / 89.48 / 90.10  | 58.88 / 59.58 / 64.68  | 22.77 / 25.33 / 26.22
GSS [25]     | 88.80 / 91.02 / 94.19  | 41.94 / 56.18 / 67.65  | - / - / -
DER [13]     | 91.92 / 93.88 / 96.12  | 72.21 / 77.30 / 87.66  | 40.87 / 51.91 / 69.84
CoCa (ours)  | 92.95 / 94.54 / 97.17  | 75.23 / 82.28 / 91.36  | 46.13 / 55.03 / 71.19
TABLE III: ABLATION STUDIES ON COCA COMPONENTS ON SEQUENTIAL CIFAR-100 AND MNIST-360 DATASETS. 'CKD' INDICATES COLLABORATIVE KNOWLEDGE DISTILLATION, 'PT' INDICATES PRETEXT TASKS AND 'SCL' INDICATES SUPERVISED CONTRASTIVE LEARNING.

Components (CKD / PT / SCL)  | sequential CIFAR-100 (B = 200 / 500 / 5120) | MNIST-360 (B = 200 / 500 / 1000)
Baseline (ER)                | 9.84 / 14.64 / 44.79                        | 49.27 / 65.04 / 75.18
CKD                          | 20.26 / 30.27 / 51.55                       | 56.71 / 71.07 / 76.18
PT                           | 10.79 / 19.95 / 47.38                       | 47.09 / 62.24 / 72.75
SCL                          | 11.33 / 16.44 / 51.11                       | 59.37 / 66.90 / 75.73
PT + SCL (CSS)               | 12.25 / 21.25 / 52.31                       | 64.14 / 70.83 / 75.91
CKD + PT                     | 20.77 / 30.43 / 54.29                       | 53.59 / 71.35 / 77.49
CKD + SCL                    | 18.60 / 30.65 / 53.19                       | 66.46 / 74.75 / 81.28
CKD + PT + SCL (CoCa)        | 21.20 / 32.88 / 58.38                       | 67.02 / 76.49 / 81.82

TABLE IV: ABLATION STUDIES ON THE DIFFERENT NUMBERS OF TASKS ON SEQUENTIAL CIFAR-100 DATASET.

Method       | 10 Tasks               | 20 Tasks
JOINT        | 69.55                  | 69.55
SGD          | 8.54                   | 4.33
B            | 200 / 500 / 5120       | 200 / 500 / 5120
ER [24]      | 13.81 / 21.81 / 49.83  | 9.84 / 14.64 / 44.79
DER [13]     | 23.25 / 36.20 / 56.20  | 18.66 / 28.70 / 51.20
CoCa (ours)  | 26.06 / 38.43 / 61.25  | 21.20 / 32.88 / 58.38
REFERENCES
[1] G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, and S. Wermter, "Continual lifelong learning with neural networks: A review," Neural Networks, vol. 113, pp. 54-71, 2019.
[2] M. Delange, R. Aljundi, M. Masana, S. Parisot, X. Jia, A. Leonardis, G. Slabaugh, and T. Tuytelaars, "A continual learning survey: Defying forgetting in classification tasks," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[3] X. Chen, Y. Wang, J. Liu, and Y. Qiao, "DID: Disentangling-imprinting-distilling for continuous low-shot detection," IEEE Transactions on Image Processing, vol. 29, pp. 7765-7778, 2020.
[4] Q. Wang, X. Liu, W. Liu, A.-A. Liu, W. Liu, and T. Mei, "Metasearch: Incremental product search via deep meta-learning," IEEE Transactions on Image Processing, vol. 29, pp. 7549-7564, 2020.
[5] Y. Liu, Y. Cong, G. Sun, T. Zhang, J. Dong, and H. Liu, "L3DOC: Lifelong 3D object classification," IEEE Transactions on Image Processing, 2021.
[6] M. McCloskey and N. J. Cohen, "Catastrophic interference in connectionist networks: The sequential learning problem," Psychology of Learning and Motivation, vol. 24, pp. 109-165, 1989.
[7] D. Lopez-Paz and M.-A. Ranzato, "Gradient episodic memory for continual learning," in Proceedings of the Advances in Neural Information Processing Systems, vol. 30, 2017, pp. 6467-6476.
[8] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska et al., "Overcoming catastrophic forgetting in neural networks," Proceedings of the National Academy of Sciences, vol. 114, no. 13, pp. 3521-3526, 2016.
[9] F. Zenke, B. Poole, and S. Ganguli, "Continual learning through synaptic intelligence," in Proceedings of the International Conference on Machine Learning, vol. 70, 2017, pp. 3987-3995.
[10] S. Hou, X. Pan, C. C. Loy, Z. Wang, and D. Lin, "Learning a unified classifier incrementally via rebalancing," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 831-839.
[11] A. Douillard, M. Cord, C. Ollion, T. Robert, and E. Valle, "PODNet: Pooled outputs distillation for small-tasks incremental learning," in Proceedings of the European Conference on Computer Vision, 2020, pp. 86-102.
[12] X. Hu, K. Tang, C. Miao, X.-S. Hua, and H. Zhang, "Distilling causal effect of data in class-incremental learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 3957-3966.
[13] P. Buzzega, M. Boschini, A. Porrello, D. Abati, and S. Calderara, "Dark experience for general continual learning: A strong, simple baseline," in Proceedings of the Advances in Neural Information Processing Systems, 2020, pp. 15920-15930.
[14] A. Robins, "Catastrophic forgetting, rehearsal and pseudorehearsal," Connection Science, vol. 7, no. 2, pp. 123-146, 1995.
[15] A. Chaudhry, M. Ranzato, M. Rohrbach, and M. Elhoseiny, "Efficient lifelong learning with A-GEM," in Proceedings of the International Conference on Learning Representations, 2019, pp. 1-12.
[16] J. Schwarz, W. Czarnecki, J. Luketina, A. Grabska-Barwinska, Y. W. Teh, R. Pascanu, and R. Hadsell, "Progress & compress: A scalable framework for continual learning," in Proceedings of the International Conference on Machine Learning, 2018, pp. 4528-4537.
[17] S. Ebrahimi, F. Meier, R. Calandra, T. Darrell, and M. Rohrbach, "Adversarial continual learning," in Proceedings of the European Conference on Computer Vision, 2020, pp. 386-402.
[18] A. Mallya and S. Lazebnik, "PackNet: Adding multiple tasks to a single network by iterative pruning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 7765-7773.
[19] J. Serra, D. Suris, M. Miron, and A. Karatzoglou, "Overcoming catastrophic forgetting with hard attention to the task," in Proceedings of the International Conference on Machine Learning, 2018, pp. 4548-4557.
[20] S.-A. Rebuffi, A. Kolesnikov, G. Sperl, and C. H. Lampert, "iCaRL: Incremental classifier and representation learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2017, pp. 2001-2010.
[21] Y. Wu, Y. Chen, L. Wang, Y. Ye, Z. Liu, Y. Guo, and Y. Fu, "Large scale incremental learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 374-382.
[22] B. Zhao, X. Xiao, G. Gan, B. Zhang, and S.-T. Xia, "Maintaining discrimination and fairness in class incremental learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 13205-13214.
[23] E. Belouadah and A. Popescu, "IL2M: Class incremental learning with dual memory," in Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 583-592.
[24] D. Isele and A. Cosgun, "Selective experience replay for lifelong learning," in Proceedings of the AAAI Conference on Artificial Intelligence, 2018, pp. 3302-3309.
[25] R. Aljundi, M. Lin, B. Goujaud, and Y. Bengio, "Gradient based sample selection for online continual learning," in Proceedings of the Advances in Neural Information Processing Systems, 2019, pp. 11816-11825.
[26] D. Rao, F. Visin, A. Rusu, R. Pascanu, Y. W. Teh, and R. Hadsell, "Continual unsupervised representation learning," in Proceedings of the Advances in Neural Information Processing Systems, 2019, pp. 7647-7657.
[27] S. Lee, J. Ha, D. Zhang, and G. Kim, "A neural Dirichlet process mixture model for task-free continual learning," in Proceedings of the International Conference on Learning Representations, 2020, pp. 1-11.
[28] L. Wang and K.-J. Yoon, "Knowledge distillation and student-teacher learning for visual intelligence: A review and new outlooks," IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[29] Y. Ge, C. L. Choi, X. Zhang, P. Zhao, F. Zhu, R. Zhao, and H. Li, "Self-distillation with batch knowledge ensembling improves ImageNet classification," 2021. [Online]. Available: https://arxiv.org/abs/2104.13298/
[30] Z. Li and D. Hoiem, "Learning without forgetting," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 12, pp. 2935-2947, 2018.
[31] A. Benjamin, D. Rolnick, and K. Kording, "Measuring and regularizing networks in function space," in Proceedings of the International Conference on Learning Representations, 2019, pp. 1-12.
[32] X. Liu, F. Zhang, Z. Hou, L. Mian, Z. Wang, J. Zhang, and J. Tang, "Self-supervised learning: Generative or contrastive," IEEE Transactions on Knowledge and Data Engineering, 2021.
[33] I. Misra and L. van der Maaten, "Self-supervised learning of pretext-invariant representations," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 6706-6716.
[34] S. Gidaris, P. Singh, and N. Komodakis, "Unsupervised representation learning by predicting image rotations," in Proceedings of the International Conference on Learning Representations, 2018, pp. 1-14.
[35] T. Chen, S. Kornblith, M. Norouzi, and G. Hinton, "A simple framework for contrastive learning of visual representations," in Proceedings of the International Conference on Machine Learning, vol. 119, 2020, pp. 1597-1607.
[36] T. Wang and P. Isola, "Understanding contrastive representation learning through alignment and uniformity on the hypersphere," in Proceedings of the International Conference on Machine Learning, vol. 119, 2020, pp. 9929-9939.
[37] M. N. Rizve, S. Khan, F. S. Khan, and M. Shah, "Exploring complementary strengths of invariant and equivariant representations for few-shot learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 10836-10846.
[38] P. Wang, K. Han, X.-S. Wei, L. Zhang, and L. Wang, "Contrastive learning based hybrid networks for long-tailed image classification," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 943-952.
[39] F. Zhu, X.-Y. Zhang, C. Wang, F. Yin, and C.-L. Liu, "Prototype augmentation and self-supervision for incremental learning," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 5871-5880.
[40] P. Khosla, P. Teterwak, C. Wang, A. Sarna, Y. Tian, P. Isola, A. Maschinot, C. Liu, and D. Krishnan, "Supervised contrastive learning," in Proceedings of the Advances in Neural Information Processing Systems, 2020, pp. 18661-18673.
[41] Z. Mai, R. Li, H. Kim, and S. Sanner, "Supervised contrastive replay: Revisiting the nearest class mean classifier in online class-incremental continual learning," in Proceedings of the CVPR Continual Learning in Computer Vision Workshop, 2021, pp. 1-11.
[42] D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Schölkopf, "Learning with local and global consistency," in Proceedings of the Advances in Neural Information Processing Systems, 2004, pp. 1-8.
[43] A. Chaudhry, A. Gordo, P. K. Dokania, P. H. S. Torr, and D. Lopez-Paz, "Using hindsight to anchor past knowledge in continual learning," in Proceedings of the AAAI Conference on Artificial Intelligence, 2021, pp. 6993-7001.
[44] A. Krizhevsky and G. Hinton, "Learning multiple layers of features from tiny images," Technical Report, University of Toronto, 2009.
[45] P. Hadi and G. Saman, "Tiny ImageNet visual recognition challenge," 2015. [Online]. Available: http://tiny-imagenet.herokuapp.com/
[46] O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, M. Bernstein, A. C. Berg, and L. Fei-Fei, "ImageNet large scale visual recognition challenge," International Journal of Computer Vision, vol. 115, no. 3, pp. 211-252, 2015.
[47] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778.
[48] J. S. Vitter, "Random sampling with a reservoir," ACM Transactions on Mathematical Software, vol. 11, no. 1, pp. 37-57, 1985.
| [] |
[
"DyLoc: Dynamic Localization for Massive MIMO Using Predictive Recurrent Neural Networks",
"DyLoc: Dynamic Localization for Massive MIMO Using Predictive Recurrent Neural Networks"
] | [
"Farzam Hejazi [email protected] \nDepartment of Electrical and Computer Engineering\nDepartment of Electrical and Computer Engineering\nUniversity of Central Florida Orlando\nUSA\n",
"Katarina Vuckovic [email protected] \nDepartment of Electrical and Computer Engineering\nUniversity of Central Florida Orlando\nUSA\n",
"Nazanin Rahnavard [email protected] \nUniversity of Central Florida Orlando\nUSA\n"
] | [
"Department of Electrical and Computer Engineering\nDepartment of Electrical and Computer Engineering\nUniversity of Central Florida Orlando\nUSA",
"Department of Electrical and Computer Engineering\nUniversity of Central Florida Orlando\nUSA",
"University of Central Florida Orlando\nUSA"
] | [] | This paper presents a data-driven localization framework with high precision in time-varying complex multipath environments, such as dense urban areas and indoors, where GPS and model-based localization techniques come short. We consider the angle-delay profile (ADP), a linear transformation of channel state information (CSI), in massive MIMO systems and show that ADPs preserve users' motion when stacked temporally. We discuss that given a static environment, future frames of ADP time-series are predictable employing a video frame prediction algorithm. We express that a deep convolutional neural network (DCNN) can be employed to learn the background static scattering environment. To detect foreground changes in the environment, corresponding to path blockage or addition, we introduce an algorithm taking advantage of the trained DCNN. Furthermore, we present DyLoc, a data-driven framework to recover distorted ADPs due to foreground changes and to obtain precise location estimations. We evaluate the performance of Dy-Loc in several dynamic scenarios employing DeepMIMO dataset [1] to generate geo-tagged CSI datasets for indoor and outdoor environments. We show that previous DCNN-based techniques fail to perform with desirable accuracy in dynamic environments, while DyLoc pursues localization precisely. Moreover, simulations show that as the environment gets richer in terms of the number of multipath, DyLoc gets more robust to foreground changes. | 10.1109/infocom42981.2021.9488913 | [
"https://arxiv.org/pdf/2101.07848v2.pdf"
] | 231,648,134 | 2101.07848 | 74840f58caf3590cb4b73fc65b4217b5b945659d |
DyLoc: Dynamic Localization for Massive MIMO Using Predictive Recurrent Neural Networks
Farzam Hejazi [email protected]
Department of Electrical and Computer Engineering
Department of Electrical and Computer Engineering
University of Central Florida Orlando
USA
Katarina Vuckovic [email protected]
Department of Electrical and Computer Engineering
University of Central Florida Orlando
USA
Nazanin Rahnavard [email protected]
University of Central Florida Orlando
USA
DyLoc: Dynamic Localization for Massive MIMO Using Predictive Recurrent Neural Networks
Index Terms-Data-driven Localization, massive MIMO, Deep Learning, Dynamic Environments, Frame Prediction
This paper presents a data-driven localization framework with high precision in time-varying complex multipath environments, such as dense urban areas and indoors, where GPS and model-based localization techniques come short. We consider the angle-delay profile (ADP), a linear transformation of channel state information (CSI), in massive MIMO systems and show that ADPs preserve users' motion when stacked temporally. We discuss that given a static environment, future frames of ADP time-series are predictable employing a video frame prediction algorithm. We express that a deep convolutional neural network (DCNN) can be employed to learn the background static scattering environment. To detect foreground changes in the environment, corresponding to path blockage or addition, we introduce an algorithm taking advantage of the trained DCNN. Furthermore, we present DyLoc, a data-driven framework to recover distorted ADPs due to foreground changes and to obtain precise location estimations. We evaluate the performance of Dy-Loc in several dynamic scenarios employing DeepMIMO dataset [1] to generate geo-tagged CSI datasets for indoor and outdoor environments. We show that previous DCNN-based techniques fail to perform with desirable accuracy in dynamic environments, while DyLoc pursues localization precisely. Moreover, simulations show that as the environment gets richer in terms of the number of multipath, DyLoc gets more robust to foreground changes.
I. INTRODUCTION
With the expansion of location-based services such as peer-to-peer ride sharing, local search-and-discovery mobile apps, navigation services, store locators, autonomous driving, and urban unmanned aerial systems (UAS) traffic management, the demand for highly accurate positioning technologies is growing [2]. When the environment is free from strong multipath (mainly outdoor environments), localization can be considered a mostly solved problem [3]. Nonetheless, localization in harsh multipath environments (mainly indoors and dense urban areas) has remained under extensive investigation [4].
For environments where line of sight (LOS) is dominant and multipath is scarce, various localization techniques have been proposed in the literature, the majority of which are model-based. In these environments, physical models can describe the scattering surroundings quite well. These techniques mainly employ received signal strength indicator (RSSI), time of arrival (ToA), time difference of arrival (TDoA), and angle of arrival (AOA) measurements to pursue localization [5]-[14]. One major issue with model-based techniques is that they normally require measurements of those parameters from multiple anchor nodes, which may not be available in every environment. To address the shortfalls of model-based techniques and to tackle localization in complex multipath environments, several data-driven approaches have been proposed. Data-driven localization techniques are mainly called fingerprinting-based localization [15]. This type of localization generally constitutes gathering a dataset of a geo-tagged communication parameter (e.g., RSSI or channel state information (CSI)) all over the environment and training a neural network based on the dataset for online localization. Unfortunately, the data-driven approach has also failed to solve the problem of localization in complex multipath environments thus far. Data-driven approaches are highly dependent on an exorbitant campaign of gathering a geo-tagged dataset. Moreover, they need several hours of training. During these two relatively prolonged procedures, it is probable that the environment changes and the dataset becomes invalid [16]. Some efforts have been conducted to taper the required dataset and reduce the training time [17]; however, in the best-case scenario they could reduce the initially required time to a couple of hours, while these environments can change in a couple of seconds. Furthermore, it is almost impossible to gather data for every possible dynamic scenario and retrain the network.
Massive multiple-input multiple-output (MIMO) is a technique in wireless networks that utilizes numerous antennas, mainly at base stations, to take advantage of the multipath effect to spatially multiplex users [18]. Massive MIMO is considered a core technology behind the revolution in ultra-high-speed communications promised by 5G cellular networks [19]. To enable spatial multiplexing, base stations must identify the propagation environment from their antennas to the users' antennas. This task is routinely conducted by measuring CSI. When measured perfectly, CSI preserves all information about scattering, fading, delays, and power decay of the channel. Due to the rich information contained in CSI, it is considered an integral parameter for single-site fingerprinting-based localization. Vieira et al. considered using a deep convolutional neural network (DCNN) trained by angle-delay profiles (ADPs) for localization for the first time in the literature [20]. Vieira shows that, in addition to memorizing the dataset, the trained DCNN can generalize localization to unknown locations within the environment. Sun et al. trained two different DCNNs to pursue the localization task. The first network is similar to the regression network proposed by Vieira. The second network incorporates two different blocks: the first block is a classification DCNN that defines to which grid cell the user location belongs, and the second block uses a weighted K-nearest neighbor (WKNN) algorithm to find the user location within the cell precisely [21]. In [22], the authors design input features to make them robust to CSI impairments. They consider an autocorrelated version of CSI as the input to the CNN. In [23], De Bast et al. showed that a CNN trained using CSI performs with centimeter accuracy in a static indoor environment; however, when a person is walking in the room the error increases tenfold or more. To the best of our knowledge, this is the only work that examines the effect of dynamic scenarios on the accuracy of localization via a CNN trained using CSI. Unfortunately, the available literature mostly considers static scenarios and ignores the dynamic nature of complex scattering environments. To address this shortcoming, in our work, we mainly focus on addressing localization in dynamic scenarios leveraging a data-driven approach. To the best of the authors' knowledge, this work is the first attempt to tackle localization in complex dynamic environments from a data-driven perspective.
Deep learning has shown outstanding performance in the field of computer vision (CV) so far. Among the various topics in the CV context, video surveillance, video frame prediction, and video foreground and anomaly detection tackle highly dynamic problems [24]-[26]. In our work, we adopt some ideas from the frame prediction literature to address data-driven localization in dynamic environments. First, we prove that a time-series of ADPs preserves users' movements assuming a static environment (the static environment is referred to as the background (BG)). Consequently, this leads us to infer the predictability of the next frame of an ADP temporal sequence based on previous frames using a predictive recurrent neural network. We model changes in the environment by LOS blockage, non-LOS (NLOS) blockage, and NLOS addition, referred to as the foreground (FG). We propose an algorithm to discriminate between those ADPs which are accurate and those distorted by the FG. Consequently, we propose DyLoc to recover distorted ADPs and to estimate the user location by incorporating a WKNN block with a predictive recurrent neural network (PredRNN) block. To examine the performance of DyLoc we consider an indoor and an outdoor environment utilizing the DeepMIMO dataset [1]. We show that a trained CNN fails to estimate the user location with acceptable accuracy in dynamic environments, while the proposed technique can estimate the user location with decent accuracy. The main contributions of our work can be summarized as follows:
• Proving that a time-series of ADPs can preserve users' movements;
• Showing that an ADP can be predicted based on previous frames of the ADP time-series;
• Modeling changes in the environment as LOS blockage, NLOS blockage, and NLOS addition;
• Proposing DyLoc, a novel localization algorithm, that includes two steps: (i) an algorithm to detect distorted ADPs, and (ii) an algorithm to recover distorted ADPs and to estimate users' location utilizing PredRNN and WKNN techniques.
The rest of the paper is organized as follows. In Section II, we present the considered system and channel model and define the ADP. We introduce the motion preservation property of the ADP in Section III-A. In Section III-B, we express how we can model a dynamic propagation environment, where propagation paths can be blocked or added at any time. We present a new perspective toward fingerprint gathering campaigns in Section IV. In Section V, we introduce DyLoc to tackle the localization task in complex environments. In Section VI, we examine the performance of the proposed technique via various simulations. Finally, we conclude the paper in Section VII.
II. SYSTEM AND CHANNEL MODEL
Assume we require to localize a single user utilizing a single base station (BS) of a typical MIMO-orthogonal frequency-division multiplexing (OFDM) wireless network. For ease of exposition, and similar to [27], we suppose that the BS is equipped with a uniform linear array (ULA) with half-wavelength spacing between two adjacent antennas, and that the user's device has a single omni-directional antenna. The BS has N_t antennas and uses OFDM signaling with N_c subcarriers. We assume a geometric channel model between the BS and the user with C distinguishable clusters. Moreover, each cluster constitutes R_C distinguishable paths. Each path m of cluster k can be characterized by a complex gain \alpha_m^{(k)}, an angle of arrival (AOA) \theta_m^{(k)}, and a delay \tau_m^{(k)} = n_m^{(k)} T_s, where T_s and n_m^{(k)} denote the sampling duration and the sampled delay belonging to the path m of the cluster k, respectively [21]. Assuming these parameters, the channel frequency response (CFR) for each sub-carrier l can be written as [28]
h[l] = \sum_{k=1}^{C} \sum_{m=1}^{R_C} \alpha_m^{(k)} \, e(\theta_m^{(k)}) \, e^{-j 2\pi l n_m^{(k)} / N_c} ,    (1)
where j denotes the imaginary unit and e(θ) denotes the array response vector of the ULA given by
e(\theta) = \left[1, \; e^{-j 2\pi d \cos(\theta)/\lambda}, \; \ldots, \; e^{-j 2\pi (N_t - 1) d \cos(\theta)/\lambda}\right]^{T} ,    (2)
where d is the gap between two adjacent antennas and λ is the wavelength. Thus, the overall CFR matrix of the channel between the BS and the user can be expressed as
H = [h[1], \ldots, h[N_c]] .    (3)
This matrix is commonly referred to as the CSI in the literature.
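A small NumPy sketch of how the CFR matrix of Eqs. (1)-(3) could be generated for a toy set of path parameters is given below; the function names and the example path gains, angles, and delays are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ula_response(theta, n_t, d_over_lambda=0.5):
    """Array response vector of Eq. (2) for a ULA with half-wavelength spacing."""
    n = np.arange(n_t)
    return np.exp(-2j * np.pi * n * d_over_lambda * np.cos(theta))

def cfr_matrix(alphas, thetas, delays_in_samples, n_t, n_c):
    """Build H = [h[1], ..., h[N_c]] from path gains, AOAs and sampled delays
    following Eqs. (1) and (3). Subcarriers are indexed from 0 here."""
    H = np.zeros((n_t, n_c), dtype=complex)
    for l in range(n_c):
        for a, th, n_delay in zip(alphas, thetas, delays_in_samples):
            H[:, l] += a * ula_response(th, n_t) * np.exp(-2j * np.pi * l * n_delay / n_c)
    return H

# Toy example: two paths, 32 antennas, 64 subcarriers (arbitrary parameters)
H = cfr_matrix(alphas=[1.0, 0.3j],
               thetas=[np.deg2rad(18), np.deg2rad(75)],
               delays_in_samples=[3, 10],
               n_t=32, n_c=64)
```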
Angle Delay profile (ADP) is a linear transformation of the CSI computed by multiplying it with two discrete Fourier transform (DFT) matrices. Let us define the DFT matrix V ∈ C Nt×Nt as
[V]_{z,q} \triangleq \frac{1}{\sqrt{N_t}} \, e^{-j 2\pi z (q - N_t/2) / N_t} ,
and F ∈ C Nc×Nc as
[F]_{z,q} \triangleq \frac{1}{\sqrt{N_c}} \, e^{-j 2\pi z q / N_c} .
Then the ADP matrix G is defined as [21]
G = V^{H} H F .    (4)
Now, let us define [A]_{z,q} = |[G]_{z,q}|, where |·| denotes the absolute value. Throughout this paper, we refer to A as the ADP. When measured perfectly, CSI is very rich data and preserves all scattering characteristics of the channel; however, when depicted in its raw format it is hard to interpret. On the other hand, referring to [29], the (z, q) element of the ADP represents the absolute gain of the z-th AOA and the q-th delay, as illustrated in Fig. 1. Therefore, we can simply interpret the ADP as a visual representation of all distinguishable paths between the user and the BS. For example, we can deduce from Fig. 1 that there is a LOS path cluster with an AOA around 18° and a delay of approximately 10^{-8} s, and that there are eight NLOS clusters between the user and the BS. Using the ADP, we can cast the localization problem as a pattern recognition problem and take advantage of the rich literature on deep learning applications in CV [21].
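The ADP transformation of Eq. (4) can be sketched directly from the DFT-matrix definitions above; the helper below reuses the toy CSI matrix H from the previous sketch and is an illustration, not the paper's implementation.

```python
import numpy as np

def adp_from_csi(H):
    """Compute the ADP A = |V^H H F| of Eq. (4) for a CSI matrix H of shape (N_t, N_c)."""
    n_t, n_c = H.shape
    # DFT matrix V over the antenna dimension, [V]_{z,q} as defined above.
    z_t, q_t = np.meshgrid(np.arange(n_t), np.arange(n_t), indexing="ij")
    V = np.exp(-2j * np.pi * z_t * (q_t - n_t / 2) / n_t) / np.sqrt(n_t)
    # DFT matrix F over the subcarrier dimension, [F]_{z,q} as defined above.
    z_c, q_c = np.meshgrid(np.arange(n_c), np.arange(n_c), indexing="ij")
    F = np.exp(-2j * np.pi * z_c * q_c / n_c) / np.sqrt(n_c)
    G = V.conj().T @ H @ F
    return np.abs(G)

A = adp_from_csi(H)   # reusing the toy H built in the previous sketch
print(A.shape)        # (32, 64): AOA bins x delay bins
```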
III. SCATTERING ENVIRONMENT
In this section, we discuss static and dynamic environments and how a user's motion reflects on ADP. We will show that when a user moves in a continuous track in the location domain, all paths in ADP also move in a continuous track, assuming a totally static environment. Moreover, we discuss dynamic changes in the environment and how we can model their effects on scatterings.
A. Static Environment
First, let us define what we mean by a static environment. Definition 1. Static environment is an environment including a user and at least one base station, in which nothing other than the user can move and all the materials of the surfaces remain the same within the environment. In this environment, scattering (influenced by propagation paths, decays, and delays) remains unchanged. Moreover, the user's motion does not affect the scattering and does not block or add any path, while it may change visible paths between the BS and the user.
Theorem 1. Let us consider a static environment where the user's movement does not change the paths between the BS and the user. Given a static environment, assume the user movement does not change the visible paths between the BS and the user. If user's position shifts by some very small positive amount δd, changes in delay of paths are limited to the following bounds
\delta\tau_m^{(k)} \leq \frac{\delta d}{v_c} ; \quad \forall k \in \{1, \ldots, C\}, \; \forall m \in \{1, \ldots, R_C\} ,    (5)
where v_c denotes the speed of light and \delta\tau_m^{(k)} denotes the change in the delay of the path m of cluster k. Further, the following bound on the path AOA shift holds for any \alpha > 1:
\lim_{\delta d \to 0} \delta\theta_m^{(k)} \leq \alpha \frac{\delta d}{d_m^{(k)}} ; \quad \forall k \in \{1, \ldots, C\}, \; \forall m \in \{1, \ldots, R_C\} ,    (6)
where \delta\theta_m^{(k)} and d_m^{(k)} denote the change in the AOA and the length of the path m of cluster k, respectively.
Proof. When the user's position changes by \delta d, the length of each path from the BS to the user (LOS and NLOS) changes by the same amount or less; thus the change in the delay is limited to \delta d / v_c. Considering a LOS path (path 1 in Fig. 2), the change in the angle of the path is maximal when the movement is perpendicular to the path. Thus, assuming the movement is perpendicular to the LOS, \tan(\delta\theta) = \delta d / d_{path}, where d_{path} is the length of the path from the BS to the user and \delta\theta is the change in the AOA of the signal from the user to the BS. Considering that \tan(x) \to x as x \to 0, for any \alpha > 1 there is a \delta d close enough to zero such that \delta\theta_m^{(k)} \leq \alpha \, \delta d / d_m^{(k)}.
Referring to [30], every NLOS path from the BS to the user, can be considered as a LOS path from a virtual BS to the user, with the same length (Fig. 2). Thus, (6) holds for a NLOS path as well.
Referring to Theorem 1, we can infer that, given a static environment and fixed paths between the BS and the user, any continuous user's movement in the location domain will result in a continuous movement in the angle-delay domain. In other words, when the user moves within the environment, all paths in the angle-delay domain start moving in a continuous track. Consequently, if we cascade consecutive ADPs to form a time-series data, the sequence is highly correlated temporally.
Theorem 1 expresses that if a path exists in the ADP, it moves in a continuous track when the user moves. In addition to path motion, even in a static environment, it is possible that some paths dissipate and some new visible paths emerge during the user movement. Referring to [29], ADP is highly correlated in location domain and similarities between the ADPs decrease smoothly with respect to their physical distances. Thus, emergence and dissipation of paths occur very smoothly during the user movement. Hence, the following conclusion can be inferred:
Corollary 1: Assuming a static environment, consecutive frames of the time-series of the user's measured ADPs show a continuous movement of all paths between the BS and the user.
This conclusion leads us to the idea that it may be possible to predict future ADP frames based on the time-series of past frames.
B. Dynamic Behaviour of The Environment
Till now, we have discussed that a time-series of ADPs belonging to a certain user is highly correlated temporally, assuming a static environment. Nonetheless, complex scattering environments are normally highly dynamic. This means that several objects can move into and out of the environment and can change the scattering quickly and thoroughly. Dynamic changes in the environment result in the following changes in the static scattering environment:
1) LOS blockage: a new object blocks the LOS path between the user and the BS.
2) NLOS blockage: a new object blocks some NLOS paths between the user and the BS.
3) NLOS addition: scattering from the surfaces of a new object adds some NLOS paths between the user and the BS.
We assume that all objects and surfaces belonging to the static environment stay fixed and unchanged; therefore, a LOS path already blocked by the static environment cannot get unblocked by the dynamic movements inside the environment.
IV. RETHINKING OF FINGERPRINT GATHERING CAMPAIGNS
To form a dataset of geo-tagged CSIs (or any other communication parameter), previous studies have generally measured CSI at several locations inside a static environment. Such campaigns may be ineffective for user localization in dynamic scenarios, since a few movements inside the environment can quickly invalidate the whole dataset. In this work, we introduce a new perspective on these data-gathering campaigns that pictures them as an integral part of any data-driven dynamic localization framework. In fact, by measuring CSI throughout a static environment, we map the spatial distribution of all propagation paths within it. This is akin to recording video footage of the whole static environment from the BS point of view: such footage would reveal, in the visible-light band, all LOS and NLOS paths from any point inside the environment to the BS (and vice versa). Similarly, measurement campaigns in radio frequency bands let us understand the underlying scattering environment thoroughly, from the BS point of view. In the video processing literature, the underlying static environment and the changing environment are called the "background" (BG) and the "foreground" (FG), respectively. Here, we mimic the same pattern: measurement campaigns serve to characterize the static scattering environment (BG); once the measurements are complete, we can utilize our understanding of the BG to detect the FG and employ proper algorithms to track changes and recover the true BG. Such a perspective on fingerprinting, in conjunction with a meaningful representation of the environment via ADP images, enables us to cast the dynamic fingerprinting problem (or any other wireless communication problem that must deal with a dynamic scattering environment) as a video processing problem. Eventually, this reformulation enables us to take advantage of the rich literature on video surveillance, video frame prediction, and video foreground and anomaly detection in computer vision to tackle wireless communication problems in dynamic environments.
V. DYLOC: DYNAMIC LOCALIZATION VIA FINGERPRINTING
In this section, we introduce our proposed localization framework for dynamic environments. Suppose a user is moving inside an environment in which a BS utilizes massive MIMO technology for communication, as defined in Section II. Some of the measured CSIs may be affected by the FG and become distorted compared to those of a static environment, as explained in Section III-B. First, we train an off-the-shelf DCNN, as introduced in [20], [21], to conduct localization assuming the environment is static. In this regard, we suppose we have a dataset of geo-tagged CSIs that maps the underlying BG exhaustively. To obtain a dataset of geo-tagged ADPs, we transform all CSIs to ADPs using (4), and denote by Υ the resulting dataset of ADPs paired with locations. We then train the DCNN on this dataset to conduct the localization task when ADPs are not distorted. Now, assume a stream of CSIs is measured consecutively to establish and maintain the link between the user and the BS; H_t denotes the CSI measured at time t, and A_t denotes the corresponding ADP. First, we develop an algorithm to determine whether the measured ADP is distorted. To this end, we pass A_t through the DCNN and estimate the user location. We then search the dataset for locations near the estimated location and compare their paired ADPs with the measured ADP, to see whether at least one of them is similar to the measured one. To quantitatively measure the similarity between two ADPs, we need a similarity metric. In [29], the authors introduce the joint angle-delay similarity coefficient (JDASC) and prove that it is a decreasing function of physical distance. We observed that a simple normalized correlation between two ADPs does the job as well. Thus, we define the normalized correlation S as
$$ S(A, \hat{A}) = \frac{\operatorname{vec}(A) \cdot \operatorname{vec}(\hat{A})}{\|A\|_F \, \|\hat{A}\|_F}, \tag{7} $$
where A and Â denote two arbitrary ADPs, vec(·) denotes the operator that concatenates the columns of a matrix into a vector, · denotes the inner product, and ||·||_F denotes the Frobenius norm. If at least one ADP from the neighboring locations has a similarity to the measured ADP greater than a predefined threshold (thr_2), we label the measured ADP as "accurate"; otherwise, we label it as "distorted". The procedure for distorted ADP detection is summarized in Algorithm 1. The rationale behind the proposed algorithm stems from our discussion in Section III-A that the ADP is highly correlated in the location domain. Since the DCNN is trained solely on accurate ADPs, it returns inaccurate locations when faced with distorted ADPs; hence, the nearby ADPs will not show high similarity to the measured ADP. On the other hand, if the measured ADP is accurate, there will be an ADP among the nearby ones that looks very similar to the measured one.
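As a concrete reference, the sketch below computes the normalized correlation in (7) for two ADP magnitude matrices; it assumes the ADPs are available as NumPy arrays and is not tied to the released implementation. This is the similarity test used by Algorithm 1 below.

```python
import numpy as np

def adp_similarity(A, A_hat):
    """Normalized correlation between two ADP images, as in (7)."""
    a = A.ravel().astype(float)        # vec(A)
    b = A_hat.ravel().astype(float)    # vec(A_hat)
    denom = np.linalg.norm(a) * np.linalg.norm(b)  # ||A||_F * ||A_hat||_F
    return float(np.dot(a, b) / denom) if denom > 0 else 0.0
```

With this in hand, the detection test in Algorithm 1 reduces to checking whether any neighboring ADP A satisfies adp_similarity(A_t, A) > thr_2.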
Algorithm 1 Distorted ADP Detection
Require: measured CSI at time t, H_t; thresholds thr_1, thr_2
Ensure: H_t is labeled as distorted or accurate
 1: Convert H_t to the ADP A_t using (4)
 2: Apply A_t to the DCNN and get the location estimate x_t
 3: X_t ← all paired ADPs in the dataset whose tagged locations are closer than thr_1 to x_t
 4: flag ← 0
 5: for all ADPs A ∈ X_t do
 6:   if S(A_t, A) > thr_2 then flag ← 1
 7: end for
 8: if flag = 1 then return "accurate" else return "distorted"

If the measured ADP is accurate, we can simply use the DCNN for localization. On the other hand, if it is distorted, we propose to use WKNN along with a video frame prediction algorithm to recover the true ADP and conduct localization. Based on our discussion in Sections III-A and IV, a time series of accurate ADPs is highly correlated temporally. Thus, given the time series of accurate ADPs observed before a distorted ADP is encountered, we try to predict the next frame of the time series using a predictive recurrent neural network (PredRNN). PredRNN is a frame prediction algorithm that learns dependencies between consecutive frames and uses this knowledge to predict the next frame of the sequence. Frame prediction is a challenging problem in computer vision, and several algorithms have been proposed to address it [31], [32]. We chose PredRNN mainly because it shows promising results on the radar echo dataset; a detailed discussion of PredRNN can be found in [33]. In this work, we assume that at time t we have a sequence of past accurate ADPs of length f ∈ N, A_{t−fT_s}, . . . , A_{t−T_s}, denoted by 𝒜_t. We train the network on a dataset of random walks that we generate using Υ. In Section VI-C, we describe in detail how we generate the moving dataset, the PredRNN structure, and how we train it.
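The sketch below illustrates how the sliding window 𝒜_t of the f most recent accurate ADPs might be maintained and handed to a trained frame predictor. The function predict_next_frame stands in for the trained PredRNN and is a hypothetical interface, not the actual API of [33]; here it only returns a trivial persistence baseline.

```python
from collections import deque
import numpy as np

F = 10  # window length f; 10 past frames are used in this work

class ADPBuffer:
    """Keeps the f most recent accurate ADP frames (the sequence 𝒜_t)."""
    def __init__(self, f=F):
        self.frames = deque(maxlen=f)

    def push_accurate(self, adp):
        self.frames.append(adp)

    def ready(self):
        return len(self.frames) == self.frames.maxlen

    def window(self):
        # shape (f, H, W): oldest frame first, as fed to the predictor
        return np.stack(self.frames, axis=0)

def predict_next_frame(window):
    # Placeholder for the trained PredRNN forward pass; as a stand-in we
    # simply return the last frame (a persistence baseline).
    return window[-1]
```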
After detecting a distorted ADP, we pass 𝒜_t through the trained PredRNN to predict the accurate ADP, denoted by Â_t. Next, we pass Â_t through the DCNN to obtain an initial estimate of the user location, x̂_t. In addition to frame prediction, and in light of the fact that ADPs are highly correlated in the location domain, we can take the last location estimate based on the last accurate ADP, find nearby locations in the database, and use them to reach a better location estimate and recover the true ADP. To clarify, if some paths in the ADP get blocked or added, the remaining paths still correlate with nearby ADPs. Thus, using the similarity criterion (7), we can extract the residual similarities between the distorted ADP and the nearby ADPs. Moreover, we calculate the similarity between the distorted ADP and the predicted one. We are then able to combine the nearby locations and ADPs with the estimated location and ADP via a WKNN algorithm to obtain a better location estimate and recover the ADP, where the WKNN weights are determined directly from the calculated similarities. Hence, the location and the true ADP are estimated as
$$ x_t = \sum_{x \in \mathcal{N}} w_x\, x, \qquad \bar{A}_t = \sum_{x \in \mathcal{N}} w_x A_x, \tag{8} $$
where x_t denotes the estimated location, Ā_t denotes the recovered ADP, A_x is the ADP at location x, 𝒩 denotes the union of the set of nearby locations and the predicted location, and w_x is the weight, given by
$$ w_x = \frac{S(A_t, A_x)}{\sum_{A \in \mathcal{A}} S(A_t, A)}, \tag{9} $$
where 𝒜 denotes the set of ADPs corresponding to the locations in 𝒩. Finally, we can estimate the user location at time t (denoted by x_t) and recover the ADP (Ā_t), which can be used for future location estimates. Algorithm 2 summarizes the proposed procedure for ADP recovery and location estimation, and Fig. 3 summarizes the end-to-end DyLoc localization framework.
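A minimal sketch of the fusion step in (8)-(9) is given below, assuming the neighbor set is provided as location/ADP pairs and reusing the adp_similarity function sketched earlier; all variable names are illustrative.

```python
import numpy as np

def wknn_recover(A_t, A_hat_t, x_hat_t, neighbors):
    """
    Weighted combination (8)-(9) of nearby fingerprints and the predicted frame.
    A_t       : distorted measured ADP
    A_hat_t   : ADP predicted by the frame predictor
    x_hat_t   : location estimated from A_hat_t by the DCNN
    neighbors : list of (location, ADP) pairs taken from the dataset
    """
    # Augment the neighbor set with the predicted location/ADP pair.
    points = list(neighbors) + [(np.asarray(x_hat_t, float), A_hat_t)]
    sims = np.array([adp_similarity(A_t, A) for _, A in points])
    w = sims / sims.sum()                                   # weights (9)
    x_bar = sum(wi * np.asarray(x, float) for wi, (x, _) in zip(w, points))
    A_bar = sum(wi * np.asarray(A, float) for wi, (_, A) in zip(w, points))
    return x_bar, A_bar                                     # estimates (8)
```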
VI. SIMULATIONS
In this section, the performance of our proposed localization framework is studied for an indoor and an outdoor environment. In Section VI-A, we present the datasets used for static-environment fingerprinting. We define the structure of the DCNN and how to train it in Section VI-B. Then, in Section VI-C, we clarify how we generate the moving dataset from the static datasets of Section VI-A, explain the structure of the PredRNN and how we trained it, and describe the dynamic scenarios generated to test the performance of DyLoc. Finally, in Section VI-D we evaluate the performance of DyLoc in various dynamic scenarios and compare it with the state-of-the-art.¹
Algorithm 2 Location Estimation When ADP Is Distorted
Require: time series of accurate ADPs 𝒜_t: A_{t−fT_s}, . . . , A_{t−T_s}; distorted ADP A_t; geo-tagged ADP dataset Υ; previous estimated location x_{t−T_s}; threshold thr_3
Ensure: the recovered ADP Ā_t and the location estimate x_t
 1: Frame prediction:
 2: Apply 𝒜_t to the PredRNN and get the predicted ADP Â_t
 3: Apply Â_t to the DCNN and get the location estimate x̂_t
 4: WKNN:
 5: 𝒩 ← all data points in the dataset whose locations are closer than thr_3 to x_{t−T_s}
 6: 𝒩_A ← paired ADPs of the locations in 𝒩
 7: 𝒩 ← 𝒩 ∪ {x̂_t}
 8: 𝒩_A ← 𝒩_A ∪ {Â_t}
 9: x_t ← Σ_{x∈𝒩} w_x x, with w_x = S(A_t, A_x) / Σ_{A∈𝒩_A} S(A_t, A)
10: Ā_t ← Σ_{x∈𝒩} w_x A_x
11: return Ā_t, x_t
A. Static Datasets
In this work, we use the DeepMIMO dataset to generate CSI datasets in static environments.² Thus far, DeepMIMO scenarios have been released for one outdoor and two indoor environments; we picked one indoor and one outdoor environment for our simulations.
1) Outdoor Environment: To generate the outdoor environment, we select DeepMIMO outdoor scenario number 1 (O1) at the 3.5 GHz band. "O1" is an urban environment comprising two streets and one intersection. We suppose only BS number 2 (BS2) is active and that it is equipped with a ULA of N_t = 64 antennas aligned with the y-axis. We set the OFDM bandwidth to 10 MHz and N_c = 64, and set the number of paths to 25. Furthermore, we only generate data for R1 to R1100 (Rows 1 to 1100, which indicate the locations of data points in DeepMIMO); the dataset therefore consists of 199100 data points. Table I summarizes the dataset parameters.
2) Indoor Environment: We picked DeepMIMO indoor scenario number 3 ("I3") at 60 GHz to emulate an indoor environment. "I3" simulates a 10 m × 11 m conference room and its hallway. We assume only access point number 2 (BS2) is active. The other parameters that set up the indoor propagation environment are listed in Table I.
B. DCNN Setup
As explained in Section I, two different DCNNs have been introduced in [21] to pursue localization using ADPs. We refer to the first setup, which utilizes a regression network, as DCNN, and to the second setup, which uses a classification network along with WKNN, as DCNN+WKNN. In our work, we train the DCNN to learn the background scattering environment as a part of DyLoc, as described in Section V, and we also compare the performance of DCNN and DCNN+WKNN with DyLoc.
1) Outdoor Environment: Based on the architecture presented in [21], we choose the parameters listed in Table II for the DCNN setup. We use max pooling of size 2 × 2 for the pooling layers and ReLU as the activation function, and we set the number of training epochs to 500. The setup of the classification network in DCNN+WKNN is the same as that of the DCNN, with a softmax layer added to conduct classification. Defining the area of interest as the set of all locations in the dataset, we assume an 18 × 55 grid over the area of interest (18 equally spaced segments in the x-direction and 55 in the y-direction). The classification network is trained to determine the cell of the grid to which an input ADP belongs; the location is then estimated using a WKNN technique with k = 3.
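One plausible reading of the final DCNN+WKNN step is sketched below: the classifier's scores over the 18 × 55 grid are reduced to the k = 3 most likely cells, whose centers are averaged with score-proportional weights. The exact weighting used in [21] may differ; this is an assumption for illustration only.

```python
import numpy as np

def wknn_from_cell_scores(cell_scores, cell_centers, k=3):
    """
    cell_scores  : (N_cells,) classifier scores (e.g. softmax outputs) over the grid cells
    cell_centers : (N_cells, 2) x-y coordinates of the cell centers
    Returns a location estimate as a score-weighted average of the top-k cells.
    """
    top = np.argsort(cell_scores)[-k:]             # indices of the k best-scoring cells
    w = cell_scores[top] / cell_scores[top].sum()  # normalized weights
    return w @ cell_centers[top]                   # weighted average location (x, y)
```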
2) Indoor Environment: The parameter setup of Section VI-A2 results in 32 × 32 ADP images in this scenario. We train a 5-layer regression DCNN with the parameters in Table II to learn the underlying propagation environment, setting the number of training epochs to 200. The classification CNN in the DCNN+WKNN technique uses the same setup as the DCNN. Similar to the outdoor environment, we assume an 18 × 55 grid over the area of interest and set k = 3 for the WKNN technique.
TABLE II: DCNN setup for "O1" and "I3"

Layer | Kernel Size (O1) | Kernel Number (O1) | Kernel Size (I3) | Kernel Number (I3)
  1   |   32 × 32 × 1    |         2          |   16 × 16 × 1    |         4
  2   |   16 × 16 × 2    |         4          |    8 × 8 × 4     |         8
  3   |    8 × 8 × 4     |         8          |    7 × 7 × 8     |        16
  4   |    7 × 7 × 8     |        16          |    5 × 5 × 16    |        32
  5   |    5 × 5 × 16    |        32          |    3 × 3 × 32    |        64
  6   |    3 × 3 × 32    |        64          |        —         |         —
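The Table II entries can be read as per-layer feature-map sizes and channel counts. A rough PyTorch sketch of a regression CNN in that spirit is shown below; the kernel sizes, strides, and head are assumptions, since they are not fully specified here, and the sketch is not the exact architecture of [21].

```python
import torch
import torch.nn as nn

class ADPRegressor(nn.Module):
    """Rough CNN regressor over ADP images; the channel widths loosely follow the
    O1 column of Table II, while kernel sizes and strides are assumptions."""
    def __init__(self, in_channels=1):
        super().__init__()
        widths = [2, 4, 8, 16, 32, 64]   # channel counts per conv block (O1 column)
        layers, c_in = [], in_channels
        for c_out in widths:
            layers += [nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                       nn.ReLU(),
                       nn.MaxPool2d(2)]  # 2x2 max pooling, as stated in the text
            c_in = c_out
        self.features = nn.Sequential(*layers)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(widths[-1], 2))  # (x, y) regression

    def forward(self, x):
        return self.head(self.features(x))
```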
C. Moving Dataset and PredRNN Setup
To train the PredRNN and to emulate dynamic scenarios, we need moving datasets. We generated moving datasets from the static datasets introduced in the previous section. Since the static datasets define the propagation environment thoroughly, they can be used to form dynamic datasets by stacking adjacent locations and their paired ADPs. Both the "O1" and "I3" datasets assume a grid over the corresponding environment ("O1" has a 20 cm spacing between adjacent grid points and "I3" a 1 cm spacing), and we take advantage of this underlying grid to form our moving dataset. To initialize each movement, we suppose the user is randomly placed on one of the grid points; at each step of the movement, the user then moves to one of the adjacent grid points.
We assume the movement continues for f consecutive steps. Next, we stack all locations and paired ADPs together to form and save one sequence. We consider two modes for the user's random walks:
• Mode 1: The user chooses a direction only at the first time step and keeps the same direction in the following time steps. If the user reaches the boundary of the environment, another direction is chosen randomly from the remaining options.
• Mode 2: The user performs a completely random walk and chooses a new direction at each time step.
To train the PredRNN, we generated the whole training dataset from Mode 1 movements, since we do not expect the PredRNN to be able to predict ADPs stemming from fully random walks. We generated 10000 sequences, each of length 11, and employed the PredRNN presented in [33] with the same setup and parameters. We fed the first 10 ADP frames to the network and optimized it to predict the 11th ADP; the location sequence is not used for training, only the paired ADPs. Eventually, we expect the trained PredRNN to be capable of predicting the next ADP in the sequence, given the last 10 accurate ADPs. A sketch of the walk generation is given below.
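The following sketch shows one way the Mode 1 random walks could be generated over a grid-indexed static dataset; the dataset lookup adp_at mentioned in the comment is a hypothetical helper, and only the walk logic mirrors the description above.

```python
import random

def mode1_walk(grid_shape, length):
    """Grid indices of a Mode 1 walk: one random direction, kept until a boundary
    is hit, at which point a new direction is drawn from the remaining options."""
    rows, cols = grid_shape
    pos = [random.randrange(rows), random.randrange(cols)]
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    d = random.choice(directions)
    path = [tuple(pos)]
    while len(path) < length:
        nxt = [pos[0] + d[0], pos[1] + d[1]]
        if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
            pos = nxt
            path.append(tuple(pos))
        else:
            d = random.choice([x for x in directions if x != d])
    return path

# Each walk is turned into a training sequence by stacking the paired ADPs, e.g.
# frames = [adp_at(r, c) for (r, c) in mode1_walk((R, C), 11)],
# where (R, C) are the grid dimensions and adp_at is a hypothetical dataset lookup.
```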
1) Dynamic Test Scenarios:
To evaluate the performance of the proposed DyLoc algorithm, we generate 1000 sequences of length 20 for each of the indoor and outdoor environments. Half of these sequences are generated based on Mode 1 movements and half based on Mode 2. We assume the first 10 frames of each sequence consist of accurate ADPs unaffected by the FG, whereas the last 10 frames of each sequence are distorted according to one of the following scenarios:
• LOS blockage: we assume that the most powerful path between the BS and the user gets blocked, and thus eliminate this path from all 10 ADP frames.
• NLOS blockage: we assume that the second most powerful path between the BS and the user gets blocked, and thus wipe out this path from all 10 ADP frames.
• NLOS addition: we add a path 6 dB weaker than the strongest path, arbitrarily located in the ADP image, to all 10 ADP frames.
We inspect the performance of DyLoc in the above three scenarios for both the "O1" and "I3" environments and compare it with the state-of-the-art DCNN and DCNN+WKNN algorithms [21]; a sketch of how these distortions can be applied is given after this list.
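A minimal sketch of applying the three distortions to a single ADP frame is shown below. Treating each path as the peak pixel of its cluster, using a small blocking window, and interpreting the ADP gains in the amplitude domain are simplifying assumptions for illustration.

```python
import numpy as np

def block_strongest_path(adp, rank=1, neighborhood=1):
    """Zero out the rank-th strongest pixel and a small window around it
    (rank=1 approximates LOS blockage, rank=2 NLOS blockage)."""
    out = adp.copy()
    flat = np.argsort(out, axis=None)                 # ascending flat indices
    r, c = np.unravel_index(flat[-rank], out.shape)   # rank-th strongest pixel
    r0, r1 = max(r - neighborhood, 0), r + neighborhood + 1
    c0, c1 = max(c - neighborhood, 0), c + neighborhood + 1
    out[r0:r1, c0:c1] = 0.0
    return out

def add_nlos_path(adp, rng=None):
    """Add a spurious path 6 dB weaker than the strongest one at a random pixel,
    assuming amplitude-domain ADP values."""
    rng = np.random.default_rng() if rng is None else rng
    out = adp.copy()
    gain = out.max() * 10 ** (-6 / 20)                # 6 dB below the strongest path
    r, c = rng.integers(out.shape[0]), rng.integers(out.shape[1])
    out[r, c] += gain
    return out
```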
D. Results and Discussion
In this section, we present the performance of DyLoc for the scenarios introduced in Section VI-C for the outdoor environment "O1" and the indoor environment "I3".
1) Outdoor Environment: Table III summarizes the root mean square error (RMSE) of location estimation over the distorted frames for the three scenarios using DyLoc and compares it with DCNN and DCNN+WKNN. For the first 10 accurate frames, the RMSE for all scenarios using DCNN and DCNN+WKNN is 19 cm and 14.5 cm, respectively. Since DyLoc uses the same DCNN when frames are accurate, the DyLoc accuracy is 19 cm. However, when the frames are distorted, the DCNN error grows to more than 20 m when the LOS is blocked, more than 10 m when an NLOS path is added, and more than 7.5 m when an NLOS path is blocked. This huge surge in error occurs because the DCNN has not been trained for localization based on distorted ADPs. Moreover, the DCNN error is highest when the LOS is blocked: since the outdoor environment is not a rich scattering environment and there are generally only 2-3 paths between the BS and the user, the LOS path contains the most valuable information in the ADP, so when it is blocked the DCNN loses its most important clue for localization. When NLOS paths are distorted, the effect of NLOS addition is larger than that of NLOS blockage, since we assume a very strong path is added to the ADP. The DCNN+WKNN performance is even worse than DCNN when facing distorted ADPs, with errors of more than 180 m when the LOS is blocked, more than 30 m when an NLOS path is added, and more than 50 m when an NLOS path is blocked. Since the classification network cannot find the correct cell to which the distorted ADP belongs, the WKNN layer does not perform well and the performance plunges drastically.
In contrast to DCNN and DCNN+WKNN, DyLoc performs with high accuracy when facing distorted ADPs. Regarding Table III, the DyLoc error increases with the frame number. This is expected, since the prediction error of the previously predicted frames propagates to the next frame and results in a higher error there. Nevertheless, the error remains less than 1.8 m, 1.2 m, and 0.9 m for LOS blockage, NLOS addition, and NLOS blockage, respectively, over all the distorted frames. These error values show quite promising performance by DyLoc in the outdoor environment for dynamic scenarios.
Table III also reports the RMSE of location estimation via the predictive arm of Algorithm 2 (PredRNN), i.e., before incorporating WKNN (x̂_t), for all distorted frames in the three scenarios. The error ranges from 2.0 m to 2.4 m in the NLOS blockage scenario, from 2.0 m to 2.6 m in the NLOS addition scenario, and from 2.0 m to 3.6 m in the LOS blockage scenario. The error grows at a higher rate when the LOS gets blocked, since the measured ADP then helps the least to improve the prediction. Moreover, in the LOS blockage scenario, especially when there is only one path between the BS and the user and it gets blocked (i.e., the connection is lost), x̂_t is our exclusive source of location estimation; thus, the predictive arm is crucial for location estimation in the LOS blockage scenario. Additionally, when we incorporate WKNN to form DyLoc, the error decreases to 0.30 m (frame 11) and 0.89 m (frame 20) for NLOS blockage, 0.30 m (frame 11) and 1.16 m (frame 20) for NLOS addition, and 0.37 m (frame 11) and 1.71 m (frame 20) for LOS blockage. These results show that the WKNN arm is quite successful in reducing the total estimation error. Consequently, both the WKNN and PredRNN arms are crucial for accurate location estimation and robustness to dynamic FG changes.
2) Indoor Environment: Similar to the outdoor environment, we compare the DCNN, DCNN+WKNN, and DyLoc algorithms in the indoor environment "I3" for the three dynamic scenarios. In contrast to the outdoor environment, the indoor environment is a rich scattering environment with several propagation paths between the BS and the user. The RMSE for accurate frames is 5 cm with DCNN and 4.5 cm with DCNN+WKNN. Unlike the outdoor scenario, the DCNN performs much better in the indoor scenario when an NLOS path is added to the ADP; this happens because the scattering environment is rich and the NLOS addition can be filtered out by the DCNN.
As Table III shows, the RMSE of DyLoc is less than 8 cm for all scenarios, and its performance is very similar across them. This can be justified by the fact that the scattering environment is very rich, and the abundance of paths helps DyLoc obtain a location estimate that is robust to environment changes. This fact is also reflected in the RMSE of PredRNN: its performance is again close across the three scenarios, with errors between 15 cm and 18 cm. Interestingly, the PredRNN appears to predict the position of the LOS path in the ADP better than those of the NLOS paths. We may explain this phenomenon by the fact that the LOS path is stronger than the NLOS paths, so the predictive network learns to predict its location better than those of the weaker NLOS paths. Thus, centimeter-level accuracy in indoor environments is achievable using DyLoc.
VII. CONCLUSION
We have introduced a novel framework to address data-driven localization in dynamic environments. We have discussed that the deep learning algorithms proposed in the literature fail to tackle localization in dynamic environments, since they depend fundamentally on prolonged data-gathering and network-training tasks. Taking advantage of a meaningful representation of the communication channel, we have devised an algorithm to discover dynamic changes in the propagation environment and, based on it, developed DyLoc to perform localization in time-varying environments. We have showcased the performance of DyLoc in indoor and outdoor environments. Our results show that DyLoc is able to localize accurately in both environments; moreover, the simulation results reveal that as the number of multipath components increases, DyLoc becomes more robust to time-varying changes.
ACKNOWLEDGMENT
This work is supported by the National Science Foundation under Grant No. CCF-1718195.
Fig. 1: A sample ADP image. Each pixel represents the absolute gain of the path with the corresponding AOA and delay. Each "+"-like shape in the image shows a path cluster between the BS and the user.
Fig. 2: LOS and NLOS path geometry. The NLOS path can be considered as a LOS path from a virtual BS located at the reflection of the BS with respect to the reflective surface.
Fig. 3: Block diagram of the proposed localization framework DyLoc.
TABLE I: "O1" and "I3" DeepMIMO dataset parameters

Parameter                | Outdoor Scenario (O1) | Indoor Scenario (I3)
Frequency Band           | 3.5 GHz               | 60 GHz
Bandwidth                | 10 MHz                | 0.5 GHz
BS                       | BS2                   | BS2
Antenna                  | ULA                   | ULA
Antenna Elements (Nt)    | 64                    | 32
Antenna Alignment        | y-axis                | x-axis
Sub-carrier Number (Nc)  | 64                    | 32
Path Number              | 25                    | 25
Locations                | R1 to R1100           | R1 to R550
TABLE III: Location estimation RMSE (m) for the last 10 frames of the time series using DyLoc, DCNN [21], DCNN+WKNN [21], and PredRNN [33] for the LOS blockage, NLOS blockage, and NLOS addition scenarios in the outdoor environment "O1" and the indoor environment "I3". The PredRNN rows show the location estimation error of the predictive arm of DyLoc.

Frame Number              11      12      13      14      15      16      17      18      19      20
Scenario O1
 LOS Blockage
  DyLoc                 0.37    0.53    0.69    0.85    1.00    1.14    1.29    1.43    1.57    1.71
  DCNN                 25.55   25.14   25.51   25.01   25.18   24.58   25.17   25.72   24.11   24.96
  DCNN+WKNN           181.66  179.30  178.88  178.90  175.11  178.90  183.42  176.93  180.52  176.00
  PredRNN               2.05    1.96    2.32    2.45    2.56    2.72    2.88    3.05    3.24    3.56
 NLOS Blockage
  DyLoc                 0.30    0.38    0.45    0.52    0.58    0.65    0.71    0.78    0.83    0.89
  DCNN                 10.15   10.69   10.95   11.05   10.60   11.23   11.06   10.95   10.98   10.65
  DCNN+WKNN            36.17   34.66   34.52   34.45   38.95   35.32   36.86   36.23   35.72   39.38
  PredRNN               2.05    2.09    2.16    2.16    2.21    2.22    2.21    2.26    2.28    2.34
 NLOS Addition
  DyLoc                 0.30    0.42    0.53    0.63    0.72    0.82    0.91    0.99    1.08    1.16
  DCNN                 15.09   15.09   15.26   15.44   15.00   15.21   14.80   15.36   15.12   15.66
  DCNN+WKNN            55.86   57.43   55.06   55.45   54.41   56.40   53.86   54.69   53.70   55.59
  PredRNN               2.05    2.23    2.39    2.38    2.38    2.40    2.44    2.48    2.51    2.57
Scenario I3
 LOS Blockage
  DyLoc                 0.04    0.04    0.04    0.05    0.05    0.06    0.06    0.07    0.07    0.08
  DCNN                  0.93    0.93    0.93    0.92    0.93    0.92    0.93    0.93    0.93    0.93
  DCNN+WKNN             2.02    2.00    2.04    2.03    1.99    2.01    2.03    1.99    2.00    1.99
  PredRNN               0.15    0.16    0.16    0.16    0.17    0.17    0.17    0.17    0.17    0.17
 NLOS Blockage
  DyLoc                 0.05    0.05    0.06    0.06    0.07    0.07    0.07    0.08    0.08    0.08
  DCNN                  0.47    0.47    0.47    0.48    0.48    0.48    0.47    0.46    0.46    0.47
  DCNN+WKNN             1.12    1.10    1.07    1.11    1.10    1.06    1.11    1.13    1.12    1.10
  PredRNN               0.15    0.16    0.16    0.17    0.17    0.18    0.18    0.18    0.18    0.18
 NLOS Addition
  DyLoc                 0.04    0.04    0.05    0.05    0.06    0.06    0.07    0.07    0.08    0.08
  DCNN                  0.20    0.20    0.20    0.19    0.19    0.19    0.19    0.19    0.19    0.19
  DCNN+WKNN             1.35    1.27    1.33    1.39    1.32    1.36    1.34    1.37    1.33    1.32
  PredRNN               0.15    0.16    0.16    0.17    0.17    0.18    0.18    0.18    0.18    0.18
¹ The authors release their code at the following link: https://github.com/FarzamHejaziK/DyLoc.
² http://www.deepmimo.net/
REFERENCES
[1] A. Alkhateeb, "DeepMIMO: A generic deep learning dataset for millimeter wave and massive MIMO applications," arXiv preprint arXiv:1902.06435, 2019.
[2] I. A. Junglas and R. T. Watson, "Location-based services," Communications of the ACM, vol. 51, no. 3, p. 65, 2008.
[3] I. Guvenc and C. Chong, "A survey on TOA based wireless localization and NLOS mitigation techniques," IEEE Communications Surveys & Tutorials, vol. 11, no. 3, pp. 107-124, 2009.
[4] F. Zafari, A. Gkelias, and K. K. Leung, "A survey of indoor localization systems and technologies," IEEE Communications Surveys & Tutorials, vol. 21, no. 3, pp. 2568-2599, 2019.
[5] M. Zane, M. Rupp, and S. Schwarz, "Performance investigation of angle of arrival based localization," in WSA 2020; 24th International ITG Workshop on Smart Antennas. VDE, 2020, pp. 1-4.
[6] S. Sadowski and P. Spachos, "RSSI-based indoor localization with the internet of things," IEEE Access, vol. 6, pp. 30149-30161, 2018.
[7] Z. Ma and K. C. Ho, "TOA localization in the presence of random sensor position errors," in 2011 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2011, pp. 2468-2471.
[8] R. Kaune, J. Hörst, and W. Koch, "Accuracy analysis for TDOA localization in sensor networks," in 14th International Conference on Information Fusion. IEEE, 2011, pp. 1-8.
[9] F. Hejazi, M. Joneidi, and N. Rahnavard, "A tensor-based localization framework exploiting phase interferometry measurements," in 2020 IEEE International Radar Conference (RADAR). IEEE, 2020, pp. 554-559.
[10] F. Hejazikookamari, Y. Norouzi, E. S. Kashani, and M. M. Nayebi, "A novel method to detect and localize LPI radars," IEEE Transactions on Aerospace and Electronic Systems, vol. 55, no. 5, pp. 2327-2336, 2018.
[11] F. Hejazi, M. M. Khalili, Y. Norouzi, and M. M. Nayebi, "A new pseudolinear solution to bearing-only tracking," in 2013 IEEE Radar Conference (RadarCon13). IEEE, 2013, pp. 1-4.
[12] F. Hejazi, Y. Norouzi, and M. M. Nayebi, "SAR processing to localize LPI radars," in 2014 International Radar Conference. IEEE, 2014, pp. 1-4.
[13] F. Hejazi, Y. Norouzi, and M. M. Nayebi, "Lower bound of error in AOA based passive source localization using single moving platform," in East-West Design & Test Symposium (EWDTS 2013). IEEE, 2013, pp. 1-4.
[14] F. Hejazi Kookamari, Y. Norouzi, and M. M. Nayebi, "Using a moving aerial platform to detect and localise a low probability of intercept radar," IET Radar, Sonar & Navigation, vol. 11, no. 7, pp. 1062-1069, 2017.
[15] C. Zhang, P. Patras, and H. Haddadi, "Deep learning in mobile and wireless networking: A survey," IEEE Communications Surveys & Tutorials, vol. 21, no. 3, pp. 2224-2287, 2019.
[16] F. Zafari, A. Gkelias, and K. K. Leung, "A survey of indoor localization systems and technologies," IEEE Communications Surveys & Tutorials, vol. 21, no. 3, pp. 2568-2599, 2019.
[17] H. He, S. Jin, C.-K. Wen, F. Gao, G. Y. Li, and Z. Xu, "Model-driven deep learning for physical layer communications," IEEE Wireless Communications, vol. 26, no. 5, pp. 77-83, 2019.
[18] E. G. Larsson, O. Edfors, F. Tufvesson, and T. L. Marzetta, "Massive MIMO for next generation wireless systems," IEEE Communications Magazine, vol. 52, no. 2, pp. 186-195, 2014.
[19] V. Jungnickel, K. Manolakis, W. Zirwas, B. Panzner, V. Braun, M. Lossow, M. Sternad, R. Apelfröjd, and T. Svensson, "The role of small cells, coordinated multipoint, and massive MIMO in 5G," IEEE Communications Magazine, vol. 52, no. 5, pp. 44-51, 2014.
[20] J. Vieira, E. Leitinger, M. Sarajlic, X. Li, and F. Tufvesson, "Deep convolutional neural networks for massive MIMO fingerprint-based positioning," in 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). IEEE, 2017, pp. 1-6.
[21] X. Sun, C. Wu, X. Gao, and G. Y. Li, "Fingerprint-based localization for massive MIMO-OFDM system with deep convolutional neural networks," IEEE Transactions on Vehicular Technology, vol. 68, no. 11, pp. 10846-10857, 2019.
[22] P. Ferrand, A. Decurninge, and M. Guillaud, "DNN-based localization from channel estimates: Feature design and experimental results," arXiv preprint arXiv:2004.00363, 2020.
[23] S. De Bast and S. Pollin, "MaMIMO CSI-based positioning using CNNs: Peeking inside the black box," arXiv preprint arXiv:2003.04581, 2020.
[24] R. Nawaratne, D. Alahakoon, D. De Silva, and X. Yu, "Spatiotemporal anomaly detection using deep learning for real-time video surveillance," IEEE Transactions on Industrial Informatics, vol. 16, no. 1, pp. 393-402, 2019.
[25] H. Choi and I. V. Bajić, "Deep frame prediction for video coding," IEEE Transactions on Circuits and Systems for Video Technology, 2019.
[26] W. Sultani, C. Chen, and M. Shah, "Real-world anomaly detection in surveillance videos," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6479-6488.
[27] A. Ali, N. González-Prelcic, and R. W. Heath, "Millimeter wave beam-selection using out-of-band spatial information," IEEE Transactions on Wireless Communications, vol. 17, no. 2, pp. 1038-1052, 2017.
[28] A. Alkhateeb and R. W. Heath, "Frequency selective hybrid precoding for limited feedback millimeter wave systems," IEEE Transactions on Communications, vol. 64, no. 5, pp. 1801-1818, 2016.
[29] X. Sun, X. Gao, G. Y. Li, and W. Han, "Single-site localization based on a new type of fingerprint for massive MIMO-OFDM systems," IEEE Transactions on Vehicular Technology, vol. 67, no. 7, pp. 6134-6145, 2018.
[30] Y. Shen and M. Z. Win, "On the use of multipath geometry for wideband cooperative localization," in GLOBECOM 2009 - 2009 IEEE Global Telecommunications Conference. IEEE, 2009, pp. 1-6.
[31] X. Liang, L. Lee, W. Dai, and E. P. Xing, "Dual motion GAN for future-flow embedded video prediction," in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp. 1744-1752.
[32] W. Liu, W. Luo, D. Lian, and S. Gao, "Future frame prediction for anomaly detection - a new baseline," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 6536-6545.
[33] Y. Wang, M. Long, J. Wang, Z. Gao, and S. Yu Philip, "PredRNN: Recurrent neural networks for predictive learning using spatiotemporal LSTMs," in Advances in Neural Information Processing Systems, 2017, pp. 879-888.