Thornton Wilder's Our Town is an American classic, first produced over 80 years ago, and continuing through the years with frequent productions in theaters and schools around the country. It's a simple story really; its three acts explore the ideas of "Daily Life," "Love and Marriage," and "Death and Dying" through the interconnected residents of Grover's Corners. But it's really quite profound in its simplicity, the final act being especially poignant as it forces us to look at the beauty of everyday life and communion with our fellow human beings, something that is often overlooked in the busyness of life.* The new production by Artistry, perhaps best known for their musicals, features a fantastic cast that brings out all of the humor, heart, and meaning in this classic. There's a reason that Our Town continues to be produced, and audiences continue to see it - it speaks to us in a very real and deep way.
Linda Kelsey and the cast of Our Town (photo by Devon Cox)
The play is written in an unusual style, in which a character known as "Stage Manager" (played by Linda Kelsey whom you may know from TV, who is so warm and natural here) serves as narrator, and fully acknowledges that this is a play, introducing scenes and cutting them off when time is short. She speaks directly to the audience as she tells us the story of this extraordinarily ordinary town. We meet many people in the town, from the milkman to the constable to the town drunk, but the focus is on the Gibbs and Webb families. George Gibbs and Emily Webb (Chicago imports Jelani Pitcher and Brianna Joy Ford, both full of youthful charm that grows into something deeper throughout the play) are teenagers and best friends in the first act, and the second act features their wedding at a young age. The third act takes place in the cemetery, with the deceased observing and commenting on the living. The recently departed wants to relive one mundane day in their life, against the advice of the other residents of the cemetery. They soon find out that it's too painful to watch the careless way people go about the day, not realizing how precious each moment is, and beg to be returned to their grave.*
George and Emily (Jelani Pitcher and Brianna Joy Ford,
photo by Devon Cox)
This production is done in the traditional minimalist style, with the stage open to the back walls, the only set pieces a few tables, chairs, ladders, and simple backdrops in unfinished wood (designed by Rick Polenek). Benjamin McGovern directs the cast in a playful way, at times acknowledging they're in a play, at times totally immersed in the scene. Actors mime actions with invisible props, so that we can really focus on the words, emotions, and story rather than the objects on stage. Other highlights in the cast include Adelin Phelps, Ansa Akyea, Elise Langer, and Jason Ballweber as the Gibbs and Webb parents (the latter two bringing the funny, as they do); Craig Johnson as the funny/sad alcoholic choir director; and Catie Bair and Liam Beck-O'Sullivan as the younger siblings.
I've seen this play a few times now (who hasn't?), but I always forget just how brilliant it is at capturing the very essence of life. The humor, the pain, the delight, the difficulties, all with very simple and almost matter-of-fact language that cuts right to the truth of the matter. Whether you've never seen Our Town or have seen it a dozen times, it's still worth a visit to this lovely depiction of Grover's Corners.
Our Town continues through September 29 (click here for info and tickets).
*Some text borrowed from what I wrote about previous productions I've seen.
Labels: Adelin Phelps, Ansa Akyea, Artistry, Benjamin McGovern, Brianna Joy Ford, Catie Bair, Craig Johnson, Elise Langer, Jason Ballweber, Jelani Pitcher, Linda Kelsey, Our Town, Rick Polenek
Save Yourselves
Strange Dark Matter Findings Could Rewrite the Universe's History
Scientists believe that mysterious dark matter is key to forming galaxies. Now a recent series of bizarre findings threatens to undermine everything we think we know.
by Carly Minsky
07 January 2020, 12:11pm
Photo by Getty/Paolo Carnassale
In November, astronomers at the Chinese Academy of Sciences in Beijing published a paper identifying 19 galaxies which might violate the most fundamental theory of how the universe first formed.
They had been searching the sky for yet-undiscovered galaxies which seem to be lacking the usual dark matter component, aiming to add more evidence to a baffling phenomenon scientists had begun observing last year. And they claimed to have found a whole group of them.
Until recently, it was almost unanimously assumed that huge amounts of invisible dark matter were the key to galaxy formation; the gravitational effects experienced and induced by clumps of dark matter in the universe produced swirling disks of gas clouds, stars and dark matter. Dark matter had to make up the majority of matter in these galaxies, according to standard models of the universe, otherwise they would never have formed.
But since 2016, researchers keep stumbling upon galaxies that don’t seem to be dominated by dark matter. Some calculations imply that these galaxies lack dark matter entirely. Either way, galaxies which have far less dark matter than expected would give physicists a lot of explaining to do: why are they lacking dark matter, and how did they form in the first place?
GALAXIES WITHOUT DARK MATTER COULD REWRITE THE HISTORY OF THE UNIVERSE
Unlike the researchers in Beijing, Yale professor Pieter van Dokkum wasn’t trying to find anomalies among the "ultra diffuse" galaxies he studies – those with very few stars. After all, before van Dokkum made a chance discovery in 2018, there was very little reason to expect to find any galaxies which had formed and continued to exist without dark matter.
In fact, van Dokkum and his team expected to find dark matter everywhere they looked. Unlike our own star-packed Milky Way, where dark matter is only on the periphery, ultra-diffuse galaxies should be densely filled with dark matter. Their analysis instead identified, for the first time, a galaxy without any dark matter at all.
“The way the stars move should be entirely dominated by the amount of dark matter,” he says. “The fact we found this deficit was not a subtle result.”
Though van Dokkum's findings were and remain controversial, new analysis of galaxies observed through giant telescopes keeps poking holes in accepted theories. If nothing else, the latest study by researchers in Beijing highlights still-open questions about the shape and structure of galaxies.
Attempts to explain the apparent lack of dark matter in the identified dwarf galaxies fall into two broad categories: theories about how and why galaxies may have been stripped of the dark matter they once had, and theories which postulate the formation of galaxies without dark matter at all.
In the first group, researchers such as Go Ogiya at Observatoire de la Côte d'Azur in France are exploring a phenomenon called "tidal stripping" in which matter is pulled away from a galaxy due to tidal forces that arise when it is interacting with the gravitational field of another galaxy. Even before van Dokkum reported his observations of two dark-matter-deficient galaxies, there was evidence that tidal stripping would disproportionately affect dark matter, which is less tightly bound than visible or “baryonic” matter (i.e. stars) in the centre of the galaxy.
“Whether the baryonic matter is stripped from the satellite galaxy depends on its structure: the distribution of stars determines their resilience to the tidal force,” Ogiya wrote in an email. “In my numerical simulations... stars are hardly stripped from the satellite galaxy while its size doubles as a result of dark matter stripping.”
Ogiya also suggests a process by which galaxies might form without ever containing dark matter. So-called “tidal dwarf galaxies” could form when dark matter and baryonic matter is “ejected” from an existing galaxy due to tidal forces, but the dark matter component evaporates due to its higher velocity, leaving only stars and gas to form a new galaxy.
However, the theories with game-changing potential are those which rewrite the history of the early universe, around 13 billion years ago, long before the “heyday of galaxy formation” when tidal stripping might have taken place. Though he says he's agnostic about proposed explanations of dark-matter-deficiency, van Dokkum favours a theory suggesting that dark matter and gas interacted just after the Big Bang in ways which are not currently predicted. This could cause gas clumps which would separate from the dark matter and then later form galaxies made up only of gas and stars.
“That would be really cool because then we would learn something about the conditions of the early universe and probe the epoch when galaxy formation was not yet underway,” he says.
DARK MATTER DEFICIENCY, OR MISCALCULATION?
Some scientists believe that van Dokkum's result, while striking, may not be reliable. Since dark matter can’t be seen, how much of it is present in a galaxy has to be inferred from the behaviour of stars in the galaxy, which can be seen. Calculating this behaviour (for instance, velocity of the stars), depends on other galactic traits, like the shape of the galaxy and its distance away from us.
Ignacio Trujillo, a scientist specializing in Extragalactic Astronomy at the Institute of Astrophysics of the Canary Islands, has shown that the strange behaviour of these ultra-diffuse galaxies can be explained away by measuring how far away they are using alternative methods. If the anomalous galaxies are in fact much closer to Earth than the distance used for van Dokkum’s calculations, then calculating their dark matter components gives results within the expected range.
"You find yourself in this strange regime where you have to ask yourself: how strange a hypothesis can I accommodate which is still less strange than finding no dark matter?"
According to Trujillo, it is indeed possible for very tiny galaxies to form without the presence of dark matter, but not the way galaxies were formed just after the Big Bang. Rather than gas and stars clumping together due to gravitational effects, these galaxies form when an existing galaxy collides with another and residual matter stripped from the colliding galaxies forms a new galaxy – all without any dark matter. In contrast to galaxies formed by the tidal-stripping effects Ogiya studies, these galaxies would live for a relatively short time by cosmic standards: a few hundred years. Examples have already been identified for decades, he says, but these aren’t the sort of dark-matter-deficient galaxies that challenge the current understanding of our universe.
“People have already found things they have called galaxies without dark matter, but they are transient features which then disappear,” he says. “The novelty would be to find a galaxy which was formed originally in the early universe without dark matter, and I don’t think we have any strong candidates yet.”
Already, Trujillo’s colleagues say they have also found issues with the research published last month identifying 19 possible galaxies lacking dark matter. This week the American Astronomical Society published a research note authored by Jorge Sanchez Almeida which argued that the dark matter calculations for the 19 galaxies failed to account for the true shape of dwarf galaxies, simplifying them as being “disk-like” instead of elliptical 3D shapes.
For van Dokkum, questions on both sides of the debate are very much still open. The two anomalous galaxies he’s identified as being dark-matter-deficient are strange in a number of ways, by anyone’s standards, he says. If they do in fact contain dark matter, contrary to his own hypothesis, other assumptions about the distribution of dark matter, as well as the shape and orientation of the galaxies and their history will need to be revised.
“You can come up with unlikely scenarios in which these galaxies do contain dark matter,” he says. “And unlikely is relative in this case to the hypothesis that there is no dark matter.”
“You find yourself in this strange regime where you have to ask yourself: how strange a hypothesis can I accommodate which is still less strange than finding no dark matter?”
The “fun thing” about the ultra-diffuse dwarf galaxies van Dokkum and others are studying is that they are “nearby and bright," van Dokkum said, which makes it relatively easy to investigate their behaviour.
With the powerful telescopes available today, van Dokkum hopes not only to make more accurate measurements of galaxy velocities and distance, but also identify the epoch in which the galaxies formed, using new data from the Hubble telescope. This empirical evidence would rule out a whole class of explanations for the apparent dark matter deficiency, one way or another.
Nicolas Martin, a researcher at the Observatoire Astronomique de Strasbourg in France, believes that the observations needed to drive research forward are just beyond the limits of what is possible with the best apparatus around at the moment. Emailing while on location at one such cutting-edge telescope, he said that the research community would likely have to wait for two next-generation telescopes, currently planned or under construction in Chile and Hawaii, before they could generate even more precise measurements of the velocity of stars in the dwarf galaxies.
In the meantime, Martin, van Dokkum, Ogiya, and others will all be searching the sky to determine how many other strange dark-matter-deficient galaxies might be out there.
“If they are very common, then they need to be naturally produced by our models of galaxy formation,” Martin says. “If they are not common, then maybe they are just weirdos, coming from environments that are poorly understood corner cases, which is less problematic overall.”
From Murata: "Murata develops world's smallest 32.768 kHz MEMS resonator"
Murata Manufacturing Co., Ltd. (Head Office: Nagaokakyo-shi, Kyoto; Chairman of the Board and President: Tsuneo Murata) has developed the world’s smallest*1 32.768kHz*2 MEMS*3resonator*4, which is expected to make a significant contribution to reducing the size and power consumption of IoT devices, wearables, and healthcare devices.
IoT, wearable, and healthcare related applications where small size, long operating times, and longer battery life are essential are increasing demand for compact electronic components that reduce the power consumption. 32.768kHz resonators are used extensively in power sensitive applications to keep accurate time while allowing power hungry resources to be put into a deep sleep mode, thereby saving system level power and extending overall battery life.
The new MEMS resonator is more than 50% smaller than competing solutions, while featuring a low ESR*5, excellent frequency accuracy, and low power consumption. This is due to the use of MEMS, a technology that was developed by Murata Electronics Oy (formerly VTI Technologies) which has a proven track record of unique and innovative MEMS used in a wide variety of applications originating with the automotive industry.
2. Main Features
While achieving miniaturization through MEMS technology, the new MEMS resonator exhibits frequency temperature characteristics of less than 160ppm (Operating temperature: -30 to 85°C ) with an initial frequency accuracy (25°C ) that is comparable to or better than that of a quartz tuning fork crystal resonator.
Key features are as follows:
- Over 50% smaller than conventional tuning fork quartz crystal resonators
With dimensions measuring 0.9 x 0.6 x 0.3mm (width, length, height), the new MEMS resonator is more than 50% smaller*6 than a conventional 32.768kHz tuning fork crystal resonator.
- Built-in load capacitors
A typical pierce-type oscillator circuit design uses two external multilayer ceramic load capacitors. The new MEMS resonator is equipped with built in load capacitors, which makes possible a reduction in external parasitic capacitance, mounting space, and further contributes to more flexible circuit design.
- Reduced power consumption by realizing a low ESR
With crystal resonators in general, the ESR tends to rise as the device becomes smaller in size. However, with a low ESR (75kΩ) the MEMS resonator can generate a stable reference clock signal by reducing the IC gain while also cutting power consumption by 13% compared to a conventional quartz crystal. (Based on internal tests)
- Can be built into an IC package
With silicon-based wafer-level chip scale packaging (WLCSP), the resonator can be co-packaged with an IC, eliminating the need for any external low-frequency clock references.
3. Availability
Mass production of the new resonator is scheduled to begin in December 2018 as the WMRAG Series.
The new resonator will be showcased at the Murata booth at CEATEC JAPAN 2018, which will be held at Makuhari Messe from October 16 to 19, 2018.
*1 The world's smallest 32.768kHz resonator based on an in-house study, as of the end of September 2018
*2 32.768kHz is used as reference clock signal for driving watches and ICs because it is easy to obtain highly accurate one-second signals using this frequency with digital electronic circuits.
*3 MEMS refers to micro electro mechanical systems. These systems have 3D microstructures formed using semiconductor manufacturing process technology.
*4 Timing devices such as resonators are passive components designed to generate a reference clock signal when an IC operates. High-quality resonators generate highly accurate and highly stable signals that are essential for stable IC operation.
*5 ESR refers to equivalent series resistance. A smaller ESR indicates easier generation of stable clock signals.
*6 The comparative size is with respect to an equivalent conventional product of dimensions of 1.2 x 1.0 x 0.3mm (width, length, height). (As of October 2018)
Source: https://www.murata.com/en-us/products/info/timingdevice/mems-r/2018/1004
Looking to integrate Murata products with your design? Our Applications Engineers offer free design and technical help for your latest designs. Contact us today!
Why partner with Symmetry Electronics? Symmetry's technical staff is specially trained by our suppliers to provide a comprehensive level of technical support. Our in-house Applications Engineers provide free design services to help customers early in the design cycle, providing solutions to save them time, money and frustration. Contact Symmetry for more information.
dinner table heights typical table height typical table height furniture standard table height mm average dinner table height standard dining table dimensions for 4.
dinner table heights dinner table dimensions dimensions 8 dining table dimensions metric dinner table height cm.
dinner table heights what is a standard table height standard dinner table height dining table height ideal height for what is a standard table height dining dinner table measurements.
dinner table heights great round dining table for 8 people round dining table dimensions dining table dimensions australia.
dinner table heights topic related to coffee table dimensions home design and interior decorating size mm dinner table measurements.
dinner table heights 6 person table size inspiring dining chair art designs from 8 table dimensions far fetched dining co design 6 person dinner table size dining table dimensions in mm.
dinner table heights restaurant dining table dimensions also ideal design ideas dining table dimensions in cm.
dinner table heights park heights restaurant table set for wedding dinner dining table dimensions for 4.
dinner table heights the new heights of fine dining dining table dimensions for 4.
dinner table heights standard dining table height cm standard dinner table height large size of dining room chair height dinner table height.
dinner table heights 6 person table size 6 person table dimensions dining room chair dimensions dining table sizes for dinner table length.
dinner table heights inch wide rustic dining table skinny dining table narrow bar inside tables d on dinning dinner table dimensions.
dinner table heights standard dining table width average kitchen table size average kitchen table height medium size of dining standard dining table dinner table sizes cm.
dinner table heights x dinner table height cm.
dinner table heights interior piece round dining set person table dimensions with leaf extension alluring kitchen sets full size large circle dinner white and brown modern room dinner table sizes.
dinner table heights coffee table legs cocktail table height lift up coffee table adjustable height patio dinner table length.
dinner table heights adjustable height coffee dining table dining table dimensions for 14.
dinner table heights kitchen table dimensions farmhouse table round kitchen table dimensions dinner table dimensions.
dinner table heights standard dinner table height standard table height standard bar table dimensions full image for medium image standard dinner table height dining table dimensions in cm.
dinner table heights height of coffee table modern dining useful round standard throughout dining table dimensions uk.
dinner table heights table dimensions dining room table white dining table set tall dining room sets dinner table dimensions dining table dimensions australia.
dinner table heights breathtaking dining chair tip about modern concept large dining room table dimensions large dining dinner table sizes.
dinner table heights typical dining room dimensions restaurant table fascinating ideas on standard size in dinner table dimensions.
dinner table heights average dining table height average kitchen table height dining tables dining room chair dimensions average kitchen average dining table height dinner table length.
dinner table heights dining tables with storage medium size of dining room table heights dinner table height counter height dining tables dinner table height cm.
dinner table heights dinner table dimensions standard dining room table size of worthy standard dining room table size photo dinner table length.
dinner table heights table dimension top table of dimensions dinner table dimensions.
dinner table heights dining room tables table regular 1 dinner table length.
dinner table heights northern heights 5 piece oval dining table set in black dinner table dimensions.
dinner table heights dinner table heights remarkable dining chair height of standard room dining table dimensions dining table dimensions per person.
dinner table heights dining table dimensions cad ray style walnut dining table dimensions for 14.
dinner table heights dinner table heights fabulous 6 seat dining table dimensions square dining table dimensions square dining table dinner table heights dining table dimensions in inches.
dinner table heights dinner table heights table dimensions beautiful dining room table height with fine dining room table dimensions dinner table heights dining table dimensions for 8 seating.
dinner table heights round table dimensions dimension image 1 for lady round base tables dinner table dimensions for 4 dinner table dimensions.
dinner table heights dining room table dimensions cool with picture of dining room painting at design dining table dimensions in inches.
dinner table heights captivating coffee table dimensions at tea luxury room decor dinner table measurements.
dinner table heights view video dimensions dining table dimensions for 4.
dinner table heights pool table sizes 6 foot standard bar dimensions full size of counter height dining metric amazing dining table dimensions australia.
dinner table heights dinner table heights a x dining room for 8 with a circular table this room dining table dimensions for dinner table measurements.
Applications of the Gauss-Bonnet theorem to gravitational lensing
G W Gibbons¹ and M C Werner²
¹ Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA, UK
² Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA, UK
In this geometrical approach to gravitational lensing theory, we apply the Gauss-Bonnet theorem to the optical metric of a lens, modelled as a static, spherically symmetric, perfect non-relativistic fluid, in the weak deflection limit. We find that the focusing of the light rays emerges here as a topological effect, and we introduce a new method to calculate the deflection angle from the Gaussian curvature of the optical metric. As examples, the Schwarzschild lens, the Plummer sphere and the singular isothermal sphere are discussed within this framework.
PACS numbers: 95.30.Sf, 98.62.Sb, 04.40.Dg, 02.40.Hw
Journal: Class. Quantum Grav.

1 Introduction
The deflection of light by gravitational fields has been studied with great interest in astrophysics as well as in theoretical physics. Fundamental properties such as Fermat’s principle for Lorentzian manifolds, conditions on image multiplicity and caustics in spacetime have been discussed in a fully relativistic setting (e.g., see [1] and references therein). In the astrophysical context, however, an impulse approximation with piecewise straight light rays in flat space has proven useful since deflection angles on cosmological scales are very small (for a comprehensive introduction, see e.g. [2] or [3] pp 272–95 and references therein). Despite their different premisses, both treatments have yielded mathematically interesting, general properties which depend on topology. In particular, image counting theorems like the odd number theorem have been established with different versions of Morse theory both in the spacetime lensing and impulse approximation frameworks [4] .
In this article, we would like to present another approach to gravitational lensing theory which emphasizes global properties. Specifically, we consider the astrophysically relevant weak deflection limit not in the impulse approximation but treat light rays as spatial geodesics of the optical metric, and use the Gauss-Bonnet theorem. This approach has previously been applied to lensing by cosmic strings [5] , and we extend it here to static, spherically symmetric bodies of a perfect fluid as simple models for galaxies acting as gravitational lenses. It turns out that the focusing of light rays is, from this point of view, essentially a topological effect. Hence we find, rather surprisingly, that the deflection angle can be calculated by integrating the Gaussian curvature of the optical metric outwards from the light ray, in contrast to the usual description in terms of the mass enclosed within the impact parameter of the light ray. To illustrate, we discuss how this works for three well-known models, namely the Schwarzschild lens, the Plummer sphere and the singular isothermal sphere.
The structure of this article is therefore as follows. In section 2 we give a brief review of the Gauss-Bonnet theorem and introduce two constructions which will be used to investigate the lensing geometry. The optical metric and its Gaussian curvature for static, spherically symmetric systems of a perfect fluid is discussed in section 3, followed by the application to the three lens models mentioned above in section 4.
With regard to conventions, we use metric signature (−,+,+,+), Latin and Greek indices for space and spacetime coordinates, respectively, and set the speed of light c=1. G denotes the gravitational constant as usual.
2 Gauss-Bonnet theorem and lensing geometry
The Gauss-Bonnet theorem connects the intrinsic differential geometry of a surface with its topology, and this will be the main tool in our exposition. Let the domain (D,χ,g) be a subset of a compact, oriented surface, with Euler characteristic χ and a Riemannian metric g giving rise to a Gaussian curvature K. Furthermore, let ∂D:{t}→D be its piecewise smooth boundary with geodesic curvature κ, where defined, and exterior angle αi at the ith vertex, traversed in the positive sense. Then the local and global versions of the Gauss-Bonnet theorem (see, e.g., [6] pp 139, 143) can be combined to give
$$\iint_D K\,\mathrm{d}S + \int_{\partial D} \kappa\,\mathrm{d}t + \sum_i \alpha_i = 2\pi\chi(D). \qquad (1)$$
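Equation (1) can be made concrete with a simple numerical check. The following Python snippet (an illustration added here, not part of the original text) verifies the theorem for a spherical cap of polar angle θ₀ on the unit sphere, where K = 1 everywhere, the boundary circle has geodesic curvature cot θ₀, there are no vertices, and χ = 1:

```python
import math

def gauss_bonnet_check(theta0, n=100000):
    """Check Eq. (1) for a spherical cap 0 <= theta <= theta0 on the unit sphere.

    K = 1 on the cap, the boundary circle has geodesic curvature
    kappa = cot(theta0), there are no vertices, and chi(D) = 1 (a disc).
    """
    # curvature integral: dS = sin(theta) dtheta dphi (midpoint rule in theta)
    h = theta0 / n
    curv = 2 * math.pi * sum(math.sin((i + 0.5) * h) for i in range(n)) * h
    # boundary integral: kappa times the circumference 2*pi*sin(theta0)
    bdry = (math.cos(theta0) / math.sin(theta0)) * (2 * math.pi * math.sin(theta0))
    return curv + bdry  # should equal 2*pi*chi(D) = 2*pi

print(gauss_bonnet_check(1.0))  # ~ 6.2832 for any cap angle
```

The two terms trade off against each other, 2π(1 − cos θ₀) against 2π cos θ₀, but their sum is always 2πχ(D); this trade-off between curvature and boundary integrals is exactly the mechanism exploited below.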
Next, consider a smooth curve γ:{t}→D of unit speed such that g(˙γ,˙γ)=1, and let ¨γ be the unit acceleration vector. This vector, then, is perpendicular to ˙γ and hence spans a Frenet frame together with ˙γ. The geodesic curvature of γ is therefore given by (see, e.g., [6] p 138)
$$\kappa = g(\nabla_{\dot\gamma}\dot\gamma, \ddot\gamma), \qquad (2)$$
which clearly vanishes if, and only if, γ is a geodesic. We will now describe two specific setups for the domain, D1 and D2, which will represent the weak deflection lensing geometries discussed in section 4, and are shown in figure 1.
Figure 1: Weak deflection lensing geometry. Two geodesics γ1 and γ2 from the source S to the observer O are deflected by a lens with centre at L. D1 and D2 are two domains with boundary curves γL and γP as discussed in the text.
Firstly, let D1 be bounded by two geodesics γ1,γ2 intersecting in two vertices, the first being the source S, and the second the observer O. Hence κ(γ1)=κ(γ2)=0 except at the source and observer where the interior angles are θS=π−αS and θO=π−αO, respectively. D1 contains the lens centre L, and both S and O are assumed to be very distant from L such that θS and θO are small and positive. If the lens centre is non-singular, then D1 is simply connected so that χ(D1)=1. On the other hand, if L is singular, then χ(D1)=0 and there is a contribution to ∂D1 given by the boundary curve γL enclosing L. Hence the Gauss-Bonnet theorem (1) yields
$$\theta_S + \theta_O = \iint_{D_1} K\,\mathrm{d}S \qquad (3)$$
in the non-singular case, and
$$\theta_S + \theta_O = 2\pi + \oint_{\gamma_L} \kappa\,\mathrm{d}t + \iint_{D_1} K\,\mathrm{d}S \qquad (4)$$
in the singular case.
Secondly, let D2 be a simply connected domain such that L∉D2 and D1∩D2=γ1. Furthermore, let D2 be asymptotically flat such that S and O are in this regime. Thus ∂D2 consists of a geodesic γ1 from S to O, and the perimeter curve γP of a circular sector centred on L intersecting γ1 in S and O at right angles. If ϕ is the angular coordinate centred on L, then γP has angular range π+δ, where δ is the small and positive asymptotic deflection angle of γ1. Therefore we can use (1) to write
$$\int_0^{\pi+\delta} \kappa(\gamma_P)\,\frac{\mathrm{d}t}{\mathrm{d}\phi}\,\mathrm{d}\phi - \pi = -\iint_{D_2} K\,\mathrm{d}S, \qquad (5)$$
again using κ(γ1)=0. By construction, the calculation of the deflection angle according to (5) is obviously independent of any singularity that may occur at L in D1.
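To see how (5) produces a deflection angle in practice, here is a hedged numerical sketch for the weak-field Schwarzschild lens. It assumes the leading-order optical Gaussian curvature K ≈ −2GM/r³, the flat area element dS ≈ r dr dϕ, and the undeflected ray r = b/sin ϕ as the inner boundary of D2; these weak-field expressions are assumptions of the sketch rather than results quoted in this section.

```python
import math

def deflection_angle(GM, b, n_phi=2000):
    """delta = -∫∫_{D2} K dS, with the assumed weak-field inputs:

    K ≈ -2GM/r^3 (leading-order optical curvature of the Schwarzschild lens),
    dS ≈ r dr dphi, inner boundary r = b/sin(phi) (undeflected ray).
    The inner radial integral is done analytically:
    ∫_{b/sin(phi)}^∞ (2GM/r^3) r dr = 2GM sin(phi)/b.
    """
    dphi = math.pi / n_phi
    return sum(2 * GM * math.sin((i + 0.5) * dphi) / b
               for i in range(n_phi)) * dphi

# impact parameter much larger than GM, as in the weak deflection limit
print(deflection_angle(1.0, 1.0e5))  # ~ 4e-05, i.e. the Einstein angle 4GM/b
```

Note that the curvature is integrated over the region outside the light ray, in keeping with the remark in the introduction that the deflection here emerges from the Gaussian curvature of the optical metric rather than from the mass enclosed within the impact parameter.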
3 Optical metric and its Gaussian curvature
We shall now consider null geodesics deflected by a static, spherically symmetric massive body consisting of a perfect fluid. In the present context, this is to be thought of as the stellar fluid of a galaxy acting as a gravitational lens. The corresponding line element written in Schwarzschild coordinates $x^\mu = (t, r, \vartheta, \phi)$ is given by (e.g., see [3] p 375)
$$\mathrm{d}s^2 = g_{\mu\nu}\,\mathrm{d}x^\mu\,\mathrm{d}x^\nu = -\mathrm{e}^{2A(r)}\,\mathrm{d}t^2 + \mathrm{e}^{2B(r)}\,\mathrm{d}r^2 + r^2(\mathrm{d}\vartheta^2 + \sin^2\vartheta\,\mathrm{d}\phi^2). \qquad (6)$$
By spherical symmetry, we can assume, without loss of generality, that the null geodesics with ds² = 0 are in the equatorial plane ϑ = π/2. All images are therefore collinear with the lens centre L in this case. Spatial projection of the null geodesics yields the light rays, and these are geodesics of the optical metric g^opt_{mn} = g_{mn}/(−g_{00}), by Fermat's principle. The geometry of the optical metric, also known as optical reference geometry, is useful for the study of gravitational and inertial forces in general relativity [7]. It is convenient to introduce a radial Regge-Wheeler tortoise coordinate r* defined by
dr* = exp[B(r) − A(r)] dr,
so that the line element of the optical metric becomes
dt² = g^opt_{mn} dx^m dx^n = exp[2(B−A)] dr² + exp(−2A) r² dϕ² (7)
= dr*² + f(r*)² dϕ²
from (6), where f(r∗)=exp[−A(r)]r, r=r(r∗). It is clear, then, that the equatorial plane in the optical metric is a surface of revolution when it is embedded, where possible, in R3. Its (intrinsic) Gaussian curvature K (e.g., see [6] p 66) can be found from (7) and expressed in terms of the Schwarzschild radial coordinate r,
K = −(1/f(r*)) d²f/dr*² (8)
= −exp[2(A−B)] [(dA/dr)(dB/dr) − (1/r)(dA/dr) − (1/r)(dB/dr) − d²A/dr²].
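As a numerical sanity check (not part of the original derivation), the two forms of the curvature can be compared directly. The sketch below specializes to the Schwarzschild case of section 4.1, A(r) = ½ ln(1 − 2μ/r) and B = −A, and evaluates K once from the bracket expression −exp[2(A−B)][A′B′ − A′/r − B′/r − A″] and once from −(1/f) d²f/dr*² with f = exp(−A) r and dr*/dr = exp(B − A), using central finite differences. The step size and test radius are arbitrary illustrative choices.

```python
import math

mu = 1.0  # Schwarzschild case (section 4.1): A = (1/2) ln(1 - 2 mu/r), B = -A

def A(r): return 0.5 * math.log(1.0 - 2.0 * mu / r)
def B(r): return -A(r)

def d1(fn, r, h=1e-2):   # central first derivative
    return (fn(r + h) - fn(r - h)) / (2.0 * h)

def d2(fn, r, h=1e-2):   # central second derivative
    return (fn(r + h) - 2.0 * fn(r) + fn(r - h)) / h**2

def K_bracket(r):
    # K = -exp(2(A-B)) [A'B' - A'/r - B'/r - A''], the r-form of (8)
    Ap, Bp = d1(A, r), d1(B, r)
    return -math.exp(2.0 * (A(r) - B(r))) * (Ap * Bp - Ap / r - Bp / r - d2(A, r))

def K_revolution(r):
    # K = -(1/f) d^2 f/dr*^2, with f = exp(-A) r and dr*/dr = exp(B - A)
    f = lambda s: math.exp(-A(s)) * s
    df = lambda s: d1(f, s) / math.exp(B(s) - A(s))        # df/dr*
    return -(d1(df, r) / math.exp(B(r) - A(r))) / f(r)

print(K_bracket(10.0), K_revolution(10.0))  # both ≈ -1.7e-3, negative as expected
```

Both evaluations agree and are negative, consistent with the Schwarzschild discussion in section 4.1.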
One can now specialize to the perfect fluid case. Then the energy-momentum tensor is Tμν=diag(ρ,p,p,p) in the local flat metric, where ρ=ρ(r) denotes the density and p=p(r) the pressure of the lens model. Now from Einstein’s field equations (e.g., see [3] pp 376–7),
exp[−2B(r)] = 1 − 2μ(r)/r, where μ(r) = 4πG ∫_0^r ρ(r′) r′² dr′. (9)
The conservation of the energy-momentum tensor, together with the field equations, gives rise to the Tolman-Oppenheimer-Volkoff equation
dp/dr = −(ρ + p)(μ/r² + 4πGpr)/(1 − 2μ/r), (10)
which expresses the hydrostatic equilibrium, and we also have
dA/dr = (1/(1 − 2μ/r))(μ/r² + 4πGpr). (11)
Hence using (8), the Gaussian curvature is
K = −(2μ exp[2(A−B)] / (r³(1 − 2μ/r)²)) {1 − 3μ/(2r) − (4πGr³/μ)[ρ(1 − 2μ/r) + p(1 − 3μ/r − 2πGpr²)]}.
Since we are presently interested in weak deflections only, one can limit the discussion to a non-relativistic stellar fluid subject to the collisionless Boltzmann equation (e.g. [8] ). But this means that the pressure may be neglected in (8). To see this, recall that non-relativistic kinetic theory, for instance in the case of an isotropic velocity dispersion σ2, implies that p=ρσ2/3 but σ2≪1. Now the area element of the equatorial plane in the optical metric is
dS = √|det g^opt| dr dϕ = exp[B(r) − 2A(r)] r dr dϕ
from (7). Together with (9), the integrand of the Gaussian curvature therefore reduces to
K dS = −(2μ/(r²(1 − 2μ/r)^{3/2})) [1 − 3μ/(2r) − (4πGr³ρ/μ)(1 − 2μ/r)] dr dϕ. (12)
Notice that for realistic lens models, where ρ → ρ0 > 0 as r → 0, and μ → μ∞ < ∞ as r → ∞, the Gaussian curvature must change sign at some radius in the equatorial plane. For if r → ∞, then ρ has to decrease faster than r^{−3} for μ to be asymptotically finite, so K < 0 follows immediately from (12). Conversely, if r → 0, then μ(r) → 4πGρ0r³/3 by (9), so the term in the square bracket in (12) tends to −2, and the result follows.
In the next section, we shall discuss three particular lens models as examples.
4 Applications to lensing models
4.1 Schwarzschild lens
The Schwarzschild lens can be characterized by μ(r)=const. and ρ(r)=0, r>0 so that outside the event horizon at r=2μ,
K dS = −(2μ/(r²(1 − 2μ/r)^{3/2})) (1 − 3μ/(2r)) dr dϕ < 0 (13)
from (12). The optical Schwarzschild metric has therefore negative Gaussian curvature, and an embedding diagram illustrating this can be found e.g. in [7]. Now, in light of this fact, it might appear surprising at first glance that light rays can be focused. For consider a geodesic γ(t) separated from a neighbouring geodesic by a Jacobi field Y(t) = y(t)e(t), where e(t) is a unit vector orthogonal to the tangent of γ; then the geodesic deviation equation becomes (see, e.g., [6] p 102)
d²y/dt² + K(γ(t)) y = 0,
implying that the light rays should diverge. However, given a domain D1 as described in section 2, the focusing of two light rays at the observer's vertex O is made possible by the non-trivial topology of D1 such that χ(D1)=0 because of the event horizon. It is convenient to use the circular photon orbit at r=3μ as inner boundary curve γL because its geodesic curvature vanishes. Then equation (4) applies,
θ_S + θ_O = 2π + ∫∫_{D1} K dS,
which can clearly be fulfilled for small, positive θS,θO despite the negative Gaussian curvature. The topological contribution is therefore essential to the focusing, and it is in this sense that gravitational lensing can be understood as a topological effect. In fact, its global nature is also borne out by equation (5). Here, the deflection angle is calculated by integrating the Gaussian curvature in the domain D2 outwards from the light ray, as opposed to the usual treatment where the deflection is determined by the mass enclosed within the impact parameter (e.g. [2] p 231 and [3] p 277). Now since the optical Schwarzschild metric is asymptotically Euclidean, we can take κ(γP) dt/dϕ = 1 on the circular boundary of D2. Moreover, because of the weak deflection limit, we may assume that the light ray is given by r(t) = b/sinϕ at zeroth order with impact parameter b ≫ 2μ. Hence (5) together with (13) implies
δ = −∫∫_{D2} K dS ≈ ∫_0^π ∫_{b/sinϕ}^∞ (2μ/r²) dr dϕ (14)
= 4μ/b.
This is the well-known formula for the Schwarzschild deflection angle in the weak limit (e.g., [2] p 25), as required.
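The leading-order integral (14) is easy to check numerically. The following sketch (an illustration only, with arbitrary values μ = 1 and b = 10⁴ satisfying b ≫ 2μ) evaluates the inner r-integral in closed form, 2μ sin ϕ / b, and the outer ϕ-integral by a midpoint rule:

```python
import math

# delta ≈ ∫_0^π ∫_{b/sinφ}^∞ (2 mu / r^2) dr dφ: inner integral in closed form
# (2 mu sinφ / b), outer integral by midpoint rule.

def deflection(mu, b, n=100000):
    dphi = math.pi / n
    return sum(2 * mu * math.sin((i + 0.5) * dphi) / b * dphi for i in range(n))

mu, b = 1.0, 1.0e4   # illustrative values with b >> 2 mu
print(deflection(mu, b), 4 * mu / b)  # both ≈ 4e-4
```

The numerical value reproduces 4μ/b to machine-level accuracy, as required.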
Finally, it should be remarked that the negative Gaussian curvature of the optical metric is a rather general feature of black hole metrics, as is the closed photon orbit forming part of ∂D1. The existence of photon spheres for static, spherically symmetric metrics was first discussed in [9] , and more generally, in terms of an energy condition, in [10] . A fuller discussion of the Schwarzschild case, including applications of the Gauss-Bonnet theorem to geodesic triangles, can be found in the forthcoming article [11] .
4.2 Plummer model
The next example is one of the simplest realistic (in the sense of section 3) models for a stellar system, the Plummer sphere [12] . It is a polytrope with density distribution (e.g., [8] p 225)
ρ(r) = ρ0 [1 + (r/r0)²]^{−5/2} (15)
where ρ0 is the central density and r0 a scale radius, so that the mass parameter defined in (9) becomes
μ(r) = μ∞ (r/r0)³ [1 + (r/r0)²]^{−3/2}. (16)
The corresponding metric is therefore an interior solution with finite μ∞ proportional to the total Newtonian mass of this model in the non-relativistic limit where μ∞/r0≪1. We can now compute the Gaussian curvature integrand from (12) to first order in μ∞/r0, to obtain
K dS = −(2μ∞/r0) (u/(1 + u²)^{3/2}) (1 − 3/(1 + u²)) du dϕ + O(μ∞²/r0²), (17)
where a dimensionless radius u≡r/r0 has been introduced for convenience. Since μ is finite at large radii, the optical metric of the Plummer sphere approximates the optical metric of the Schwarzschild lens and its negative Gaussian curvature at large radii. Of course, there is no event horizon in this model, so the lens centre L is non-singular, and the domain D1 is simply connected. Then the relevant form of the Gauss-Bonnet theorem (3) requires that K>0 somewhere in D1 for focusing to be possible. Indeed, it can be seen from (17) that the leading term of the Gaussian curvature changes sign at u=√2, and this is in agreement with the more general conclusion mentioned in section 3.
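As a quick numerical illustration of the sign change (dropping the overall positive prefactor 2μ∞/r0, and with arbitrary sample points on either side of the root), the leading term of (17) can be evaluated and its zero located by bisection:

```python
import math

def lead(u):
    # leading term of K, up to the positive prefactor 2 mu_inf / r0
    return -u / (1 + u * u)**1.5 * (1 - 3 / (1 + u * u))

# bisect the bracket 1 - 3/(1+u^2), whose zero marks the sign change of K
lo, hi = 1.0, 2.0
while hi - lo > 1e-12:
    mid = 0.5 * (lo + hi)
    if 1 - 3 / (1 + mid * mid) < 0:
        lo = mid
    else:
        hi = mid

print(lead(1.0) > 0, lead(2.0) < 0, lo)  # True True 1.41421356... (= sqrt 2)
```

The curvature is positive inside u = √2 and negative outside, in agreement with the text.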
We can now calculate the deflection angle for the Plummer model by considering the domain D2 as in the previous example. From (5) and (17),
δ = −∫∫_{D2} K dS ≈ ∫_0^π ∫_{b/(r0 sinϕ)}^∞ (2μ∞/r0) [u/(1 + u²)^{3/2} − 3u/(1 + u²)^{5/2}] du dϕ
= (4μ∞/r0) (b/r0) [1 + (b/r0)²]^{−1}
after suitable substitutions. This is the weak deflection angle as expected (e.g., [2] p 245), and we recover the Schwarzschild deflection angle (14) in the appropriate limit b≫r0.
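A numerical sketch of this calculation (with arbitrary illustrative parameters μ∞ = 10⁻³, r0 = 1, b = 2, chosen only so that μ∞/r0 ≪ 1): since u(1+u²)^{−3/2} and 3u(1+u²)^{−5/2} have elementary antiderivatives, the inner u-integral reduces to w²/(1+w²)^{3/2} with w = b/(r0 sin ϕ); the remaining ϕ-integral is done by a midpoint rule and compared with the stated closed form.

```python
import math

def plummer_deflection(mu_inf, r0, b, n=200000):
    # inner u-integral over D2 in closed form: w^2/(1+w^2)^{3/2}, w = b/(r0 sin φ);
    # remaining φ-integral by midpoint rule
    B = b / r0
    dphi = math.pi / n
    total = 0.0
    for i in range(n):
        w = B / math.sin((i + 0.5) * dphi)
        total += (2 * mu_inf / r0) * w * w / (1 + w * w)**1.5 * dphi
    return total

mu_inf, r0, b = 1e-3, 1.0, 2.0           # illustrative weak-lens parameters
delta = plummer_deflection(mu_inf, r0, b)
closed = 4 * mu_inf / r0 * (b / r0) / (1 + (b / r0)**2)
print(delta, closed)  # agree; for b >> r0 this tends to the Schwarzschild 4 mu_inf / b
```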
4.3 Singular isothermal sphere
The last model discussed here is the singular isothermal sphere. Fully relativistic, isothermal solutions of the Tolman-Oppenheimer-Volkoff equation exist [13] , and the density distribution of the singular isothermal sphere is, in fact, similar to the global monopole [14] . However, since we are presently concerned with the weak deflection limit, it suffices to consider the non-relativistic, singular isothermal sphere with the following density distribution (see, e.g., [3] pp 289–90 and [8] p 226–8) and mass parameter,
ρ(r) = σ²/(2πGr²) ⇒ μ(r) = 2σ²r, (18)
using (9), where σ2≪1 is again the isotropic velocity dispersion of the stellar fluid. This model is a solution of the Tolman-Oppenheimer-Volkoff equation (10) in the non-relativistic limit, that is, to first order in σ2. Then (12) shows that the Gaussian curvature vanishes for r>0 in this limit, so that the equatorial plane in the optical metric is a cone with a singular vertex at r=0. Another way to see this is to isometrically embed the optical metric (7) in R3 with cylindrical coordinates (z,R,ϕ) such that
dt² = exp[2(B−A)] dr² + exp(−2A) r² dϕ²
= dz(r)² + dR(r)² + R(r)² dϕ².
Using the equations (9), (11) and (18), one can find the coefficients of the spacetime metric (6),
exp(2A) = C^{−2} r^{4σ²/(1−4σ²)},
exp(2B) = (1 − 4σ²)^{−1},
where C is a non-zero constant. Then the embedding yields
R(r) = C r^{(1−6σ²)/(1−4σ²)},
and the equatorial plane in the optical metric is described by a cone in R3 of the form
z = (√(8σ²) (1 − 9σ²/2)^{1/2} / (1 − 6σ²)) R
in the non-relativistic limit. This corresponds to a deficit angle of Δ ≈ 8πσ².
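This can be checked from elementary cone geometry (a consistency sketch, not in the original text): a cone z = cR has induced metric (1+c²)dR² + R²dϕ² and hence deficit angle 2π(1 − 1/√(1+c²)); inserting the slope read off above should reproduce Δ ≈ 8πσ² for small σ².

```python
import math

def deficit(sigma2):
    # slope of the embedded cone z = c R read off from the text
    c = math.sqrt(8 * sigma2) * math.sqrt(1 - 4.5 * sigma2) / (1 - 6 * sigma2)
    # a cone z = c R has metric (1+c^2) dR^2 + R^2 dφ^2, hence this deficit angle
    return 2 * math.pi * (1 - 1 / math.sqrt(1 + c * c))

sigma2 = 1e-6                      # sigma^2 << 1, non-relativistic regime
print(deficit(sigma2), 8 * math.pi * sigma2)  # ≈ equal for small sigma^2
```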
Now in order to find the deflection angle, we can again apply Gauss-Bonnet in the form of equation (5) which reads
∫_0^{π+δ} κ(γP) (dt/dϕ) dϕ − π = 0 (19)
since K=0 in D2. The geodesic curvature of the circular perimeter curve γP can be calculated directly from (2),
κ(γP) dt = (1 − 4σ²)^{1/2} (1 − r dA/dr) dϕ
= ((1 − 6σ²)/(1 − 4σ²)^{1/2}) dϕ
using (11). The leading term of the deflection angle is therefore
δ = 4πσ²
from (19), in agreement with the standard treatment (e.g., [3] p 290). This shows that, from this point of view, the constant deflection angle of the singular isothermal sphere comes essentially from the deficit angle of the conical optical metric, similar to cosmic string lensing (e.g., see [5] ).
In this article, we have introduced a geometrical approach to gravitational lensing theory different from the spacetime and impulse approximation treatments. By applying the Gauss-Bonnet theorem to the optical metric, whose geodesics are the spatial light rays, we found that the focusing of light rays can be regarded as a topological effect. We also gave a new expression (5) to calculate the deflection angle by integrating the Gaussian curvature of the optical metric outwards from the light ray, in the weak deflection limit. The lens models considered were given by static, spherically symmetric bodies of a non-relativistic, perfect fluid, and we discussed as examples the Schwarzschild lens, the Plummer sphere and the singular isothermal sphere.
It would therefore be interesting to see whether this approach could also be extended and be fruitfully applied to lenses without spherical symmetry, where images are no longer collinear with the lens centre, or to the relativistic strong deflection limit.
MCW would like to thank Claude Warnick for useful discussions, and the Science and Technology Facilities Council, UK, for funding.
[1] Perlick V 2000 Ray Optics, Fermat’s Principle, and Applications to General Relativity (Berlin: Springer-Verlag)
[2] Schneider P, Ehlers J and Falco EE 1999 Gravitational Lenses (Berlin: Springer-Verlag)
[3] Straumann N 2004 General Relativity: With Applications to Astrophysics (Berlin: Springer-Verlag)
[4] McKenzie R H 1985 A gravitational lens produces an odd number of images J. Math. Phys. 26 1592–96; Petters A O 1995 Multiplane gravitational lensing. I. Morse theory and image counting J. Math. Phys. 36 4263–75
[5] Gibbons G W 1993 No glory in cosmic string theory Phys. Lett.B 308 237–39
[6] Klingenberg W 1978 A Course in Differential Geometry (New York: Springer-Verlag)
[7] Abramowicz M A, Carter B and Lasota J P 1988 Optical reference geometry for stationary and static dynamics Gen. Rel. Grav. 20 1173–83
[8] Binney J and Tremaine S 1987 Galactic Dynamics (Princeton: Princeton University Press)
[9] Atkinson R d’E 1965 On light tracks near a very massive star Astron. J. 70 517–23
[10] Claudel C-M, Virbhadra K S and Ellis G F R 2001 The geometry of photon surfaces J. Math. Phys.42 818–38
[11] Gibbons G W and Warnick C M 2008 Hyperbolic space and no hair theorems (forthcoming)
[12] Plummer H C 1911 On the problem of distribution in globular star clusters Mon. Not. R. Astron. Soc. 71 460–70
[13] Bisnovatyi-Kogan G S and Zel’dovich Ya B 1969 Models of clusters of point masses with large central redshifts Astrophys. 5 105–9
[14] Barriola M and Vilenkin A 1989 Gravitational field of a global monopole Phys. Rev. Lett. 63 341–43
\section{Preliminaries}\label{prelim}
Throughout this paper, we use the shorthand notation $[d] = \{1,\ldots,d\}$.
We write
\begin{eqnarray*} \begin{aligned}
H(\mathcal{B}_t,\ket{\phi}) = - \sum_{k = 1}^{d}|\inp{\phi}{b_k^t}|^2 \log |\inp{\phi}{b_k^t}|^2,
\end{aligned} \end{eqnarray*}
for the Shannon entropy~\cite{shannon:info}
arising from measuring the pure state $\ket{\phi}$ in basis $\mathcal{B}_t = \{\ket{b_1^t},\ldots,\ket{b_d^t}\}$.
In general, we will use $\ket{b_k^t}$ with $k \in [d]$ to denote the $k$-th element of a basis $\mathcal{B}_t$
indexed by $t$. We also briefly refer to the R\'enyi entropy of order 2 (collision entropy)
of measuring $\ket{\phi}$ in basis $\mathcal{B}_t$ given by $H_2(\mathcal{B}_t,\ket{\phi}) = - \log \sum_{k=1}^d
|\inp{\phi}{b_k^t}|^4$~\cite{cachin:renyi}.
\subsection{Mutually unbiased bases}
We also need the notion of mutually unbiased bases (MUBs),
which were initially introduced in the context
of state estimation~\cite{wootters:mub}, but appear in many other problems in quantum information.
The following definition closely follows the one given in~\cite{boykin:mub}.
\begin{definition}[MUBs] \label{def-mub}
Let $\mathcal{B}_1 = \{|b^1_1\rangle,\ldots,|b^1_{d}\rangle\}$ and $\mathcal{B}_2 =
\{|b^2_1\rangle,\ldots,|b^2_{d}\rangle\}$ be two orthonormal bases in
$\mathbb{C}^d$. They are said to be
\emph{mutually unbiased} if
$|\langle b^1_k |b^2_l\rangle| = 1/\sqrt{d}$, for every $k,l \in[d]$. A set $\{\mathcal{B}_1,\ldots,\mathcal{B}_m\}$ of
orthonormal bases in $\mathbb{C}^d$ is called a \emph{set of mutually
unbiased bases} if each pair of bases is mutually unbiased.
\end{definition}
We use $N(d)$ to denote the maximal number of MUBs in dimension $d$.
In any dimension $d$, we have that
$\mbox{N}(d) \leq d+1$~\cite{boykin:mub}. If $d = p^k$ is a prime power, we have
that $\mbox{N}(d) = d+1$ and explicit constructions are
known~\cite{boykin:mub,wootters:mub}. If $d = s^2$ is a square,
$\mbox{N}(d) \geq \mbox{MOLS}(s)$ where $\mbox{MOLS}(s)$ denotes the number of mutually orthogonal
$s \times s$ Latin squares~\cite{wocjan:mub}. In general, we have
$\mbox{N}(n m) \geq \min\{\mbox{N}(n),\mbox{N}(m)\}$ for all $n,m \in \mathbb{N}$~\cite{zauner:diss,klappenecker:mubs}.
It is also known that in any dimension, there exists an explicit construction for 3 MUBs~\cite{grassl:mub}.
Unfortunately, not very much is known for other dimensions. For example,
it is still an open problem whether there exists a set of $7$ MUBs in dimension $d=6$.
We say that a unitary $U_t$ transforms the computational basis into the $t$-th MUB
$\mathcal{B}_t = \{\ket{b^t_1},\ldots,\ket{b^t_d}\}$
if for all $k \in [d]$ we have $\ket{b^t_k} = U_t\ket{k}$.
Here, we are particularly concerned with two specific constructions of mutually unbiased bases.
\subsubsection{Latin squares}
First of all, we consider MUBs based on mutually orthogonal Latin squares~\cite{wocjan:mub}.
Informally, an $s \times s$ Latin square over the symbol set $[s] = \{1,\ldots,s\}$ is an arrangement
of elements of $[s]$ into an $s \times s$ square such that in each row and each column every element
occurs exactly once. Let $L_{ij}$ denote the entry in a Latin square in row $i$ and column $j$.
Two Latin squares $L$ and $L'$ are called mutually orthogonal if and only if
$\{(L_{i,j},L'_{i,j})|i,j \in [s]\} = \{(u,v)|u,v \in [s]\}$.
From any $s\times s$ Latin square we can obtain a basis for $\mathbb{C}^{s}\otimes \mathbb{C}^{s}$.
First, we construct $s$ of the basis vectors from the entries of
the Latin square itself. Let $\ket{v_{1,\ell}} = (1/\sqrt{s}) \sum_{i,j\in [s]} E^L_{i,j}(\ell) \ket{i,j}$
where $E^L$ is a predicate such that $E^L_{i,j}(\ell) = 1$ if and only if $L_{i,j} = \ell$.
Note that for each $\ell$ we have exactly $s$ pairs $i,j$ such that $E_{i,j}(\ell) = 1$, because
each element of $[s]$ occurs exactly $s$ times in the Latin square.
Secondly, from each such vector we obtain $s-1$ additional vectors by adding successive rows
of an $s \times s$ (complex) Hadamard matrix $H = (h_{ij})$ as coefficients to obtain the remaining
$\ket{v_{t,j}}$ for $t \in [s]$, where $h_{ij} = \omega^{ij}$ with $i,j \in \{0,\ldots,s-1\}$ and
$\omega = e^{2 \pi i/s}$.
Two additional MUBs can then be obtained in the same way from the two non-Latin squares where
each element occurs for an entire row or column respectively. From each mutually orthogonal latin square
and these two extra squares which also satisfy the above orthogonality condition, we obtain one basis.
This construction therefore gives $\mbox{MOLS}(s) + 2$ many MUBs. It is known that if $s = p^k$ is a
prime power itself, we obtain
$p^k+1\approx \sqrt{d}$ MUBs from this construction. Note, however, that there do exist many more
MUBs in prime power dimensions, namely $d+1$. If $s$ is not a prime power, it is merely known
that $\mbox{MOLS}(s) \geq s^{1/14.8}$~\cite{wocjan:mub}.
As an example, consider the following $3 \times 3$ Latin square and
the $3 \times 3$ Hadamard matrix\\
\begin{center}
\begin{tabular}{lr}
\begin{tabular}{|c|c|c|}
\hline
1 & 2 & 3\\\hline
2 & 3 & 1\\\hline
3 & 1 & 2 \\\hline
\end{tabular},
&
$
H = \left(\begin{array}{ccc}
1 &1& 1\\
1 &\omega &\omega^2\\
1 &\omega^2& \omega
\end{array}\right)$,
\end{tabular}
\end{center}
where $\omega = e^{2 \pi i/3}$.
First, we obtain vectors
\begin{eqnarray*}
\ket{v_{1,1}} &=& (\ket{1,1} + \ket{2,3} + \ket{3,2})/\sqrt{3}\\
\ket{v_{1,2}} &=& (\ket{1,2} + \ket{2,1} + \ket{3,3})/\sqrt{3}\\
\ket{v_{1,3}} &=& (\ket{1,3} + \ket{2,2} + \ket{3,1})/\sqrt{3}.
\end{eqnarray*}
With the help of $H$ we obtain 3
additional vectors from the ones above. From the vector $\ket{v_{1,1}}$,
for example, we obtain
\begin{eqnarray*}
\ket{v_{1,1}} &=& (\ket{1,1} + \ket{2,3} + \ket{3,2})/\sqrt{3}\\
\ket{v_{2,1}} &=& (\ket{1,1} + \omega \ket{2,3} + \omega^2 \ket{3,2})/\sqrt{3}\\
\ket{v_{3,1}} &=& (\ket{1,1} + \omega^2 \ket{2,3} + \omega \ket{3,2})/\sqrt{3}.
\end{eqnarray*}
This gives us basis $\mathcal{B} = \{\ket{v_{t,\ell}}|t,\ell \in [s]\}$ for $s = 3$.
The construction of another basis follows in exactly the same way from a mutually orthogonal
Latin square. The fact that two such squares $L$ and $L'$ are mutually orthogonal ensures
that the resulting bases will be mutually unbiased. Indeed, suppose we are given another such basis,
$\mathcal{B'} = \{\ket{u_{t,\ell}}|t,\ell \in [s]\}$ belonging to $L'$. We then have for any $\ell,\ell' \in [s]$ that
$|\inp{u_{1,\ell'}}{v_{1,\ell}}|^2 =
|(1/s) \sum_{i,j\in [s]} E^{L'}_{i,j}(\ell') E^L_{i,j}(\ell)|^2 = 1/s^2$, as for each pair $\ell,\ell' \in [s]$ there exists exactly
one pair $i,j \in [s]$ such that $E^{L'}_{i,j}(\ell') E^L_{i,j}(\ell) = 1$. Clearly, the same argument
holds for the additional vectors derived from the Hadamard matrix.
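The $s=3$ example above can be checked numerically. The sketch below builds both bases exactly as described, with Hadamard coefficients $\omega^{ti}$ attached to the row index $i$; the second square `L2` is one arbitrary choice of a Latin square orthogonal to the one in the text (an assumption for illustration). It then verifies orthonormality within a basis and $|\langle u|v\rangle| = 1/\sqrt{d} = 1/3$ across the two bases.

```python
import math, cmath, itertools

s = 3
w = cmath.exp(2j * math.pi / s)
L1 = [[1, 2, 3], [2, 3, 1], [3, 1, 2]]   # the Latin square from the text
L2 = [[1, 2, 3], [3, 1, 2], [2, 3, 1]]   # one choice of an orthogonal mate (assumed)

def basis(L):
    vecs = []
    for t in range(s):                    # Hadamard row t gives coefficients w^{t i}
        for ell in range(1, s + 1):
            v = [0j] * (s * s)
            for i, j in itertools.product(range(s), range(s)):
                if L[i][j] == ell:
                    v[s * i + j] = w**(t * i) / math.sqrt(s)
            vecs.append(v)
    return vecs

B1, B2 = basis(L1), basis(L2)
ip = lambda u, v: abs(sum(a.conjugate() * b for a, b in zip(u, v)))

ortho = all(abs(ip(u, v) - (1.0 if u is v else 0.0)) < 1e-9 for u in B1 for v in B1)
unbiased = all(abs(ip(u, v) - 1/3) < 1e-9 for u in B1 for v in B2)
print(ortho, unbiased)  # True True
```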
\subsubsection{Generalized Pauli matrices}
The second construction we consider is based on the generalized Pauli matrices
$X_d$ and $Z_d$~\cite{boykin:mub}, defined by their actions on the
computational basis $C = \{\ket{1},\ldots,\ket{d}\}$ as follows:
$$
X_d\ket{k} = \ket{k+1}, \and Z_d\ket{k} = \omega^k\ket{k},~\forall \ket{k} \in C,
$$
where $\omega = e^{2 \pi i/d}$ and addition is taken modulo $d$. We say that $\left(X_{d}\right)^{a_1} \left(Z_{d}\right)^{b_1}$
\otimes \cdots \otimes \left(X_{d}\right)^{a_N} \left(Z_{d}\right)^{b_N}$ for
$a_k,b_k \in \{0,\ldots,d-1\}$ and $k \in [N]$ is a \emph{string of Pauli Matrices}.
If $d$ is a prime, it is known that the $d+1$ MUBs constructed first by
Wootters and Fields~\cite{wootters:mub} can also be obtained as the eigenvectors of the
matrices $Z_d,X_d,X_dZ_d,X_dZ_d^2,\ldots,X_dZ_d^{d-1}$~\cite{boykin:mub}. If $d = p^k$ is a prime power,
consider all $d^2-1$ possible strings of Pauli matrices excluding the identity and group them
into sets $C_1,\ldots,C_{d+1}$ such that $|C_i| = d - 1$ and $C_i \cap C_j = \emptyset$ for $i \neq j$ and all
elements of $C_i$ commute. Let $B_i$ be the common eigenbasis of all elements of $C_i$. Then
$B_1,\ldots,B_{d+1}$ are MUBs~\cite{boykin:mub}. A similar result for $d = 2^k$ has also been shown
in~\cite{lawrence:mub}.
A special case of this construction is the set of three mutually unbiased bases in dimension $d=2^k$ given by
the unitaries $\mathbb{I}^{\otimes k}$, $H^{\otimes k}$ and $K^{\otimes k}$ with $K = (\mathbb{I} + i\sigma_x)/\sqrt{2}$ applied to the computational
basis.
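For $k = 1$ this gives the three single-qubit MUBs, which is easy to verify directly (a minimal sketch; the basis vectors are the columns of $\mathbb{I}$, $H$ and $K$):

```python
# Columns of I, H, K are the three qubit MUBs; check |<b|b'>|^2 = 1/2 across bases.
s2 = 1 / 2**0.5
I = [[1, 0], [0, 1]]
H = [[s2, s2], [s2, -s2]]
K = [[s2, 1j * s2], [1j * s2, s2]]   # K = (I + i sigma_x)/sqrt(2)

def cols(U):
    return [[U[0][c], U[1][c]] for c in (0, 1)]

def unbiased(Bs, Cs):
    return all(abs(abs(u[0].conjugate() * v[0] + u[1].conjugate() * v[1])**2 - 0.5) < 1e-12
               for u in Bs for v in Cs)

bases = [cols(I), cols(H), cols(K)]
ok = all(unbiased(bases[a], bases[b]) for a in range(3) for b in range(3) if a != b)
print(ok)  # True
```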
\subsection{2-designs}
For the purposes of the present work, \emph{spherical $t$-designs} (see for example Ref.\ \cite{Renesetal04a}) can be defined as follows.
\begin{definition}[$t$-design]
Let $\{|\tau_1\rangle,\ldots,|\tau_{m}\rangle\}$ be a set of state vectors in $\mathbb{C}^d$, they are said to form a $t$-design if
\begin{eqnarray*} \begin{aligned}
\frac{1}{m}\sum_{i=1}^m [|\tau_i\rangle \langle \tau_i|]^{\otimes t} = \frac{\Pi_+^{(t,d)}}{\mathop{\mathrm{Tr}}\nolimits \Pi_+^{(t,d)}},
\end{aligned} \end{eqnarray*}
where $\Pi_+^{(t,d)}$ is a projector onto the completely symmetric subspace of ${\mathbb{C}^d}^{\otimes t}$ and
\begin{eqnarray*} \begin{aligned}
\mathop{\mathrm{Tr}}\nolimits \Pi_+^{(t,d)}=\cmb{d+t-1}{d-1}=\frac{(d+t-1)!}{(d-1)!~t!},
\end{aligned} \end{eqnarray*}
is its dimension.
\end{definition}
Any set $\mathbb{B}$ of $d+1$ MUBs forms a \emph{spherical $2$-design} \cite{KlappeneckerRotteler05a,Renesetal04a},
i.e., we have for $\mathbb{B} = \{\mathcal{B}_1,\ldots,\mathcal{B}_{d+1}\}$ with $\mathcal{B}_t = \{\ket{b^t_1},\ldots,\ket{b^t_d}\}$ that
\begin{eqnarray*} \begin{aligned}
\frac{1}{d(d+1)}\sum_{t=1}^{d+1}\sum_{k=1}^{d} [|b^t_k\rangle \langle b^t_k|]^{\otimes 2} &= 2\frac{\Pi_+^{(2,d)}}{d(d+1)}.
\end{aligned} \end{eqnarray*}
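For $d = 2$ this 2-design identity can be verified numerically with the three qubit MUBs from the previous section (a small sketch; $\Pi_+$ is built as $(\mathbb{I} + \mathrm{SWAP})/2$ on $\mathbb{C}^2\otimes\mathbb{C}^2$):

```python
# Verify the 2-design identity for the full set of d+1 = 3 MUBs in d = 2:
# (1/6) sum_{t,k} (|b^t_k><b^t_k|)^{(x)2} = 2 Pi_+ / (d(d+1)) = Pi_+ / 3.

s2 = 1 / 2**0.5
states = [[1, 0], [0, 1],                      # Z eigenbasis
          [s2, s2], [s2, -s2],                 # X eigenbasis (columns of H)
          [s2, 1j * s2], [1j * s2, s2]]        # columns of K

avg = [[0j] * 4 for _ in range(4)]
for v in states:
    w = [a * b for a in v for b in v]          # v (x) v
    for a in range(4):
        for b in range(4):
            avg[a][b] += w[a] * w[b].conjugate() / 6

SWAP = [[1, 0, 0, 0], [0, 0, 1, 0], [0, 1, 0, 0], [0, 0, 0, 1]]
Pi = [[(SWAP[a][b] + (a == b)) / 2 for b in range(4)] for a in range(4)]

err = max(abs(avg[a][b] - Pi[a][b] / 3) for a in range(4) for b in range(4))
print(err)  # ≈ 0 (numerical noise only)
```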
\section{Uncertainty relations}
We now prove tight entropic uncertainty for measurements in MUBs in square dimensions.
The main result of~\cite{maassen:entropy}, which will be very useful for us, is stated next.
\begin{theorem}[Maassen and Uffink]
Let $\mathcal{B}_1$ and $\mathcal{B}_2$ be two orthonormal basis in a Hilbert space of dimension $d$. Then
for all pure states $\ket{\psi}$
\begin{eqnarray} \begin{aligned} \label{eq:maasenuffinkbound}
\frac{1}{2}\left[ H(\mathcal{B}_1,\ket{\psi})+H(\mathcal{B}_2,\ket{\psi})\right]\geq -\log c(\mathcal{B}_1,\mathcal{B}_2),
\end{aligned} \end{eqnarray}
where $c(\mathcal{B}_1,\mathcal{B}_2)=\max \left \{|\langle b_1|b_2\rangle|:|b_1\rangle \in \mathcal{B}_1,|b_2\rangle \in \mathcal{B}_2\right \}$.
\end{theorem}
The case when $\mathcal{B}_1$ and $\mathcal{B}_2$ are MUBs is of special interest for us. More generally, when one has a set of MUBs, a trivial application of (\ref{eq:maasenuffinkbound}) leads to the following corollary, also noted in~\cite{azarchs:entropy}.
\begin{corollary}\label{MUderived}
Let $\mathbb{B}=\{\mathcal{B}_1,\ldots,\mathcal{B}_m\}$, be a set of MUBs in a Hilbert space of dimension $d$. Then
\begin{eqnarray} \begin{aligned} \label{eq:manymubsbound}
\frac{1}{m} \sum_{t=1}^m H(\mathcal{B}_t,|\psi\rangle)\geq \frac{\log d}{2}.
\end{aligned} \end{eqnarray}
\end{corollary}
\begin{proof}
Using (\ref{eq:maasenuffinkbound}), one gets that for any pair of MUBs $\mathcal{B}_t$ and $\mathcal{B}_{t'}$ with $t\neq t'$
\begin{eqnarray} \begin{aligned}\label{eq:maassenuffinkij}
\frac{1}{2}\left[ H(\mathcal{B}_t,\ket{\psi})+H(\mathcal{B}_{t'},\ket{\psi})\right]\geq \frac{\log d}{2}.
\end{aligned} \end{eqnarray}
Adding up the resulting inequality for all pairs $t\neq t'$ we get the desired result (\ref{eq:manymubsbound}).
\end{proof}
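A quick numerical sanity check of this corollary for $d = 2$ (illustrative only; 2000 random states drawn from complex Gaussians, which after normalization are Haar-distributed): with the $m = 3$ MUBs $\{\mathbb{I}, H, K\}$, the average entropy in bits should never drop below $(\log_2 d)/2 = 1/2$.

```python
import math, random

s2 = 1 / 2**0.5
# bra vectors (conjugated basis kets) of the three MUBs I, H, K
bras = [[[1, 0], [0, 1]],
        [[s2, s2], [s2, -s2]],
        [[s2, -1j * s2], [-1j * s2, s2]]]

def H_bits(p):
    return -sum(q * math.log2(q) for q in p if q > 1e-15)

random.seed(1)
worst = float("inf")
for _ in range(2000):
    a = complex(random.gauss(0, 1), random.gauss(0, 1))
    b = complex(random.gauss(0, 1), random.gauss(0, 1))
    n = (abs(a)**2 + abs(b)**2)**0.5
    psi = (a / n, b / n)
    avg = sum(H_bits([abs(r[0] * psi[0] + r[1] * psi[1])**2 for r in B])
              for B in bras) / 3
    worst = min(worst, avg)
print(worst)  # stays above 0.5 = (log2 d)/2
```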
Here, we now show that this bound can in fact be tight for a large set of MUBs.
\subsection{MUBs in square dimensions}
Corollary \ref{MUderived} gives a lower bound on the average of the entropies of a set of MUBs. The obvious question is whether that bound is tight. We show that the bound is indeed tight when we consider product MUBs in a Hilbert space of square dimension.
\begin{theorem}\label{squareThm}
Let $\mathbb{B}=\{\mathcal{B}_1,\ldots,\mathcal{B}_m\}$ with $m\geq 2$ be a set of MUBs in a Hilbert space $\mathcal{H}$ of dimension $s$. Let $U_t$ be the unitary operator that transforms the computational basis to $\mathcal{B}_t$.
Then $\mathbb{V}=\{\mathcal{V}_1,\ldots,\mathcal{V}_m\}$, where
\begin{eqnarray*} \begin{aligned}
\mathcal{V}_t=\left \{U_t|k\rangle \otimes U_t^* |l\rangle: k,l\in[s] \right \},
\end{aligned} \end{eqnarray*}
is a set of MUBs in $\mathcal{H} \otimes \mathcal{H}$, and it holds that
\begin{eqnarray} \begin{aligned}\label{eq:squaremubsbound}
\min_{\ket{\psi}} \frac{1}{m} \sum_{t=1}^m H(\mathcal{V}_t,|\psi\rangle)= \frac{\log d}{2},
\end{aligned} \end{eqnarray}
where $d=\dim(\mathcal{H} \otimes \mathcal{H})=s^2$.
\end{theorem}
\begin{proof}
It is easy to check that $\mathbb{V}$ is indeed a set of MUBs. Our proof works by constructing a state $\ket{\psi}$
that achieves the bound in Corollary~\ref{MUderived}.
It is easy to see that the maximally entangled state
\begin{eqnarray*} \begin{aligned}
|\psi\rangle = \frac{1}{\sqrt{s}}\sum_{k=1}^{s}|kk\rangle,
\end{aligned} \end{eqnarray*}
satisfies $U\otimes U^*|\psi\rangle=|\psi\rangle$ for any $U\in \textrm{U}(d)$. Indeed,
\begin{eqnarray*} \begin{aligned}
\langle \psi |U\otimes U^*|\psi\rangle&=\frac{1}{s}\sum_{k,l=1}^{s} \langle k |U|l\rangle\langle k |U^*|l\rangle\\
&=\frac{1}{s}\sum_{k,l=1}^{s} \langle k |U|l\rangle\langle l |U^\dagger|k\rangle\\
&=\frac{1}{s}\mathop{\mathrm{Tr}}\nolimits UU^\dagger=1.
\end{aligned} \end{eqnarray*}
Therefore, for any $t\in[m]$ we have that
\begin{eqnarray*} \begin{aligned}
H(\mathcal{V}_t,|\psi\rangle) &=-\sum_{kl}|\langle kl|U_t\otimes U_t^*|\psi\rangle|^2\log|\langle kl|U_t\otimes U_t^*|\psi\rangle|^2\\
&=-\sum_{kl}|\langle kl|\psi\rangle|^2\log|\langle kl|\psi\rangle|^2\\
&=\log s=\frac{\log d}{2}.
\end{aligned} \end{eqnarray*}
Taking the average of the previous equation we get the desired result.
\end{proof}
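A concrete instance of the proof for $s = 2$ (a sketch with $U = H$, which is real so $U^* = U$): the outcome probabilities of measuring $\ket{\psi} = (\ket{00}+\ket{11})/\sqrt{2}$ in the basis $\{U\ket{k}\otimes U^*\ket{l}\}$ coincide with its computational-basis distribution $\{1/2, 0, 0, 1/2\}$, giving entropy $\log s = 1$ bit $= (\log d)/2$.

```python
import math

s2 = 1 / 2**0.5
Hm = [[s2, s2], [s2, -s2]]   # U = H (real, so U* = U)
psi = [s2, 0.0, 0.0, s2]     # (|00> + |11>)/sqrt(2), amplitudes indexed by 2a+b

probs = []
for k in range(2):
    for l in range(2):
        # amplitude <psi| (H|k> (x) H|l>)
        amp = sum(psi[2 * a + b] * Hm[a][k] * Hm[b][l]
                  for a in range(2) for b in range(2))
        probs.append(abs(amp)**2)

ent = -sum(p * math.log2(p) for p in probs if p > 1e-12)
print(sorted(probs), ent)  # ≈ [0, 0, 0.5, 0.5] and entropy 1.0 = (log2 4)/2
```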
\subsection{MUBs based on Latin Squares}
We now consider mutually unbiased bases based on Latin squares~\cite{wocjan:mub}
as described in Section~\ref{prelim}. Our proof again follows by providing a state
that achieves the bound in Corollary~\ref{MUderived}, which turns out to have
a very simple form.
\begin{lemma}\label{LSentropy}
Let $\mathbb{B}=\{\mathcal{B}_1,\ldots,\mathcal{B}_m\}$ with $m \geq 2$ be any set of MUBs in a Hilbert space of dimension $d=s^2$ constructed
on the basis of Latin squares. Then
$$
\min_{\ket{\psi}} \frac{1}{m} \sum_{\mathcal{B}\in\mathbb{B}} H(\mathcal{B},\ket{\psi}) = \frac{\log d}{2}.
$$
\end{lemma}
\begin{proof}
Consider the state $\ket{\psi} = \ket{1,1}$ and fix a basis
$\mathcal{B}_t = \{\ket{v^t_{i,j}}|i,j \in [s]\} \in \mathbb{B}$
coming from a Latin square.
It is easy
to see that there exists exactly one $j \in [s]$ such that $\inp{v^t_{1,j}}{1,1} = 1/\sqrt{s}$. Namely this
will be the $j \in [s]$ at position $(1,1)$ in the Latin square. Fix this $j$. For any other
$\ell \in [s], \ell \neq j$, we have $\inp{v^t_{1,\ell}}{1,1} = 0$. But this means that
there exist exactly $s$ vectors in $\mathcal{B}$ such that $|\inp{v^t_{i,j}}{1,1}|^2 = 1/s$, namely
exactly the $s$ vectors derived
from $\ket{v^t_{1,j}}$ via the Hadamard matrix. The same argument holds for any such basis $\mathcal{B} \in \mathbb{B}$.
We get
\begin{eqnarray*}
\sum_{\mathcal{B} \in \mathbb{B}} H(\mathcal{B},\ket{1,1}) &=& - \sum_{\mathcal{B} \in \mathbb{B}}
\sum_{i,j \in [s]} |\inp{v^t_{i,j}}{1,1}|^2 \log |\inp{v^t_{i,j}}{1,1}|^2\\
&=& - |\mathbb{B}| s \frac{1}{s} \log \frac{1}{s}\\
&=& |\mathbb{B}| \frac{\log d}{2}.
\end{eqnarray*}
The result then follows directly from Corollary~\ref{MUderived}.
\end{proof}
\subsection{Using a full set of MUBs}
We now provide an alternative proof of an entropic uncertainty relation
for a full set of mutually unbiased bases. This has previously been proved
in~\cite{sanchez:entropy2}. Nevertheless, because our proof is so
simple using existing results about 2-designs we include it here for completeness,
in the hope that it may offer additional insight.
\begin{lemma}\label{FullentropyCollision}
Let $\mathbb{B}$ be a set of $d+1$ MUBs in a Hilbert space of dimension $d$.
Then
$$
\frac{1}{d+1}\sum_{\mathcal{B}\in\mathbb{B}}H_2(\mathcal{B},\ket{\psi}) \geq \log\left(\frac{d+1}{2}\right).
$$
\end{lemma}
\begin{proof}
Let $\mathcal{B}_t = \{\ket{b^t_1},\ldots,\ket{b^t_d}\}$ and $\mathbb{B} = \{\mathcal{B}_1,\ldots,\mathcal{B}_{d+1}\}$.
We can then write
\begin{eqnarray*}
\frac{1}{d+1}\sum_{\mathcal{B}\in\mathbb{B}}H_2(\mathcal{B},\ket{\psi}) &=&
- \frac{1}{d+1}\sum_{t=1}^{d+1} \log \sum_{k=1}^d |\inp{b^t_k}{\psi}|^4\\
&\geq& \log\left(\frac{1}{d+1}\sum_{t=1}^{d+1}\sum_{k=1}^d |\inp{b^t_k}{\psi}|^4\right)\\
&=&\log\left(\frac{d+1}{2}\right),
\end{eqnarray*}
where the inequality follows from the concavity of the $\log$,
and the final equality follows directly from the fact that a full set of MUBs forms a 2-design
and~\cite[Theorem 1]{KlappeneckerRotteler05a}.
\end{proof}
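The lemma can be spot-checked numerically for $d = 2$ (an illustration with 2000 random states; the three bases are the qubit MUBs used earlier): the average collision entropy should stay above $\log_2((d+1)/2) = \log_2(3/2) \approx 0.585$ bits.

```python
import math, random

s2 = 1 / 2**0.5
# bra vectors of the full set of d+1 = 3 qubit MUBs (I, H, K)
bras = [[[1, 0], [0, 1]],
        [[s2, s2], [s2, -s2]],
        [[s2, -1j * s2], [-1j * s2, s2]]]

def H2_bits(p):
    return -math.log2(sum(q * q for q in p))

random.seed(7)
worst = float("inf")
for _ in range(2000):
    a = complex(random.gauss(0, 1), random.gauss(0, 1))
    b = complex(random.gauss(0, 1), random.gauss(0, 1))
    n = (abs(a)**2 + abs(b)**2)**0.5
    psi = (a / n, b / n)
    avg = sum(H2_bits([abs(r[0] * psi[0] + r[1] * psi[1])**2 for r in B])
              for B in bras) / 3
    worst = min(worst, avg)
print(worst, math.log2(1.5))  # worst-case average stays above log2(3/2)
```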
We then obtain the original result by Sanchez-Ruiz~\cite{sanchez:entropy2} by noting that
$H(\cdot) \geq H_2(\cdot)$.
\begin{corollary}\label{Fullentropy}
Let $\mathbb{B}$ be a set of $d+1$ MUBs in a Hilbert space of dimension $d$.
Then
$$
\frac{1}{d+1}\sum_{\mathcal{B}\in\mathbb{B}}H(\mathcal{B},\ket{\psi}) \geq \log\left(\frac{d+1}{2}\right).
$$
\end{corollary}
\section{Locking}
We now turn our attention to locking. We first explain the connection between locking
and entropic uncertainty relations. In particular, we show that for MUBs based on generalized
Pauli matrices, we only need to look at such uncertainty relations to determine the exact strength
of the locking effect. We then consider how good MUBs based on Latin squares are for locking.
In order to determine how large the locking effect is for some set
of mutually unbiased bases $\mathbb{B}$,
and the state
\begin{equation}\label{rhoAB}
\rho_{AB} =
\sum_{t=1}^{|\mathbb{B}|}
\sum_{k=1}^{d} p_{t,k}
(\outp{k}{k} \otimes \outp{t}{t})_A
\otimes (\outp{b^t_k}{b^t_k})_B,
\end{equation}
we must find an optimal bound for $\mathcal{I}_c(\rho_{AB})$.
Here, $\{p_{t,k}\}$ is a probability distribution over $\mathbb{B} \times [d]$.
That is, we must find a POVM
$M_A \otimes M_B$ that maximizes Eq.\ (\ref{mutualInfo}).
It has been shown in~\cite{terhal:locking} that we can restrict ourselves to taking
$M_A$ to be the local measurement determined by the projectors $\{\outp{k}{k} \otimes \outp{t}{t}\}$.
It is also known that we can limit ourselves to taking the measurement $M_B$ consisting of rank one
elements $\{\alpha_i \outp{\Phi_i}{\Phi_i}\}$ only~\cite{davies:access}, where $\alpha_i \geq 0$ and $\ket{\Phi_i}$
is normalized.
Maximizing over $M_B$ then corresponds to maximizing Bob's accessible information~\cite[Eq.\ (9.75)]{peres:book} for
the ensemble $\mathcal{E} = \{p_{k,t},\outp{b^t_k}{b^t_k}\}$
\begin{eqnarray}
\begin{aligned}\label{accessible}
&&\mathcal{I}_{acc}(\mathcal{E})= \max_M \left(- \sum_{k,t} p_{k,t} \log p_{k,t} + \right.\\
&&\left.\sum_{i} \sum_{k,t} p_{k,t} \alpha_i \bra{\Phi_i}\rho_{k,t}\ket{\Phi_i}
\log \frac{p_{k,t} \bra{\Phi_i}\rho_{k,t}\ket{\Phi_i}}{\bra{\Phi_i}\mu\ket{\Phi_i}} \right),
\end{aligned}
\end{eqnarray}
where $\mu = \sum_{k,t} p_{k,t} \rho_{k,t}$ and $\rho_{k,t} = \outp{b^t_k}{b^t_k}$.
Therefore, we have $\mathcal{I}_c(\rho_{AB}) = \mathcal{I}_{acc}(\mathcal{E})$.
We are now ready to prove our locking results.
\subsection{An example}
We first consider a very simple example with only three MUBs that provides the intuition behind
the remainder of our paper. The three MUBs we consider now are generated by the unitaries
$\mathbb{I}$, $H$ and $K = (\mathbb{I} + i\sigma_x)/\sqrt{2}$ when applied to the computational basis.
For this small example, we also investigate the role of the prior over the bases and the encoded
basis elements. It turns out that a non-uniform prior does not increase the strength of the locking effect. In fact, it is possible
to show the same for encodings in many other bases. However, we
do not consider this case in full generality so as not to obscure our main line of argument.
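As a quick sanity check (ours, not part of the original argument), the mutual unbiasedness of the three single-qubit bases generated by $\mathbb{I}$, $H$ and $K$ can be verified numerically: every squared overlap between vectors taken from two different bases should equal $1/2$.

```python
import itertools
import numpy as np

# The three generating unitaries; their columns are the basis vectors.
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
sigma_x = np.array([[0, 1], [1, 0]])
K = (np.eye(2) + 1j * sigma_x) / np.sqrt(2)

# Mutual unbiasedness: |<b|c>|^2 = 1/d = 1/2 for all vectors b, c
# drawn from two different bases.
for B1, B2 in itertools.combinations([I2, H, K], 2):
    overlaps = np.abs(B1.conj().T @ B2) ** 2
    assert np.allclose(overlaps, 0.5)
print("I, H, K generate three mutually unbiased qubit bases")
```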
\begin{lemma}\label{3mubLocking}
Let $U_1=\mathbb{I}^{\otimes n}$, $U_2 = H^{\otimes n}$, and $U_3 = K^{\otimes n}$,
acting on basis states $\ket{k}$ with $k \in \{0,1\}^n$, where $n$ is an even integer.
Let $\{p_t\}$ with $t \in [3]$ be a probability distribution over
the set $\mathcal{S} = \{U_1,U_2,U_3\}$. Suppose that $p_1,p_2,p_3 \leq 1/2$ and let
$p_{t,k} = p_t (1/d)$.
Consider the ensemble $\mathcal{E} = \{ p_t \frac{1}{d},U_t \outp{k}{k}U_t^\dagger\}$, then
$$
\mathcal{I}_{acc}(\mathcal{E}) = \frac{n}{2}.
$$
If, on the other hand, there exists a $t \in [3]$ such that
$p_t > 1/2$, then $\mathcal{I}_{acc}(\mathcal{E}) > n/2$.
\end{lemma}
\begin{proof}
We first give an explicit measurement strategy and then prove a matching upper bound
on $\mathcal{I}_{acc}$.
Consider the Bell basis vectors $\ket{\Gamma_{00}} = (\ket{00} + \ket{11})/\sqrt{2}$,
$\ket{\Gamma_{01}} = (\ket{00} - \ket{11})/\sqrt{2}$, $\ket{\Gamma_{10}} = (\ket{01} + \ket{10})/\sqrt{2}$,
and $\ket{\Gamma_{11}} = (\ket{01} - \ket{10})/\sqrt{2}$. Note that we can write
for the computational basis
\begin{eqnarray*}
\ket{00} &=& \frac{1}{\sqrt{2}}(\ket{\Gamma_{00}} + \ket{\Gamma_{01}})\\
\ket{01} &=& \frac{1}{\sqrt{2}}(\ket{\Gamma_{10}} + \ket{\Gamma_{11}})\\
\ket{10} &=& \frac{1}{\sqrt{2}}(\ket{\Gamma_{10}} - \ket{\Gamma_{11}})\\
\ket{11} &=& \frac{1}{\sqrt{2}}(\ket{\Gamma_{00}} - \ket{\Gamma_{01}}).
\end{eqnarray*}
The crucial fact to note is that if we fix some $k_1k_2$, then
there exist exactly two Bell basis vectors $\ket{\Gamma_{i_1i_2}}$ such that
$|\inp{\Gamma_{i_1i_2}}{k_1k_2}|^2 = 1/2$. For the remaining two basis vectors
the inner product with $\ket{k_1k_2}$ will be zero.
A simple calculation shows that we can express the two qubit basis states of
the other two mutually unbiased bases analogously: for each two qubit basis state
there are exactly two Bell basis vectors such that the
inner product is zero and for the other two the inner product squared is $1/2$.
We now take the measurement given by $\{\outp{\Gamma_i}{\Gamma_i}\}$ with
$\ket{\Gamma_i} = \ket{\Gamma_{i_1i_2}} \otimes \ldots \otimes \ket{\Gamma_{i_{n-1}i_{n}}}$ for the binary
expansion of $i = i_1i_2\ldots i_n$. Fix a $k = k_1k_2\ldots k_n$. By the above argument,
there exist exactly $2^{n/2}$ strings $i \in \{0,1\}^n$ such that
$|\inp{\Gamma_i}{k}|^2 = 1/(2^{n/2})$. Putting everything together, Eq.\ (\ref{accessible})
now gives us for any prior distribution $\{p_{t,k}\}$ that
\begin{equation}\label{Ibell}
-\sum_i \bra{\Gamma_i}\mu\ket{\Gamma_i} \log \bra{\Gamma_i}\mu\ket{\Gamma_i} - \frac{n}{2} \leq \mathcal{I}_{acc}(\mathcal{E}).
\end{equation}
For our particular distribution we have $\mu = \mathbb{I}/d$ and thus
$$
\frac{n}{2} \leq \mathcal{I}_{acc}(\mathcal{E}).
$$
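For $n = 2$, this lower bound can also be checked numerically (an illustrative sketch of ours, not part of the proof): the mutual information between the label of the twelve equiprobable encoded states and the Bell-measurement outcome comes out to exactly $n/2 = 1$ bit.

```python
import numpy as np

# Single-qubit generators of the three MUBs.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
K = (np.eye(2) + 1j * np.array([[0, 1], [1, 0]])) / np.sqrt(2)
Us = [np.eye(2), H, K]

# Bell basis for n = 2 qubits (columns of B).
e = np.eye(4)
B = np.column_stack([
    (e[:, 0] + e[:, 3]) / np.sqrt(2),   # |Gamma_00>
    (e[:, 0] - e[:, 3]) / np.sqrt(2),   # |Gamma_01>
    (e[:, 1] + e[:, 2]) / np.sqrt(2),   # |Gamma_10>
    (e[:, 1] - e[:, 2]) / np.sqrt(2),   # |Gamma_11>
])

# Outcome distributions P(i | t, k) for the 12 equiprobable states
# (U_t tensor U_t)|k>, t in [3], k in [4].
cond = np.array([np.abs(B.conj().T @ np.kron(U, U)[:, k]) ** 2
                 for U in Us for k in range(4)])

def h(p):
    p = p[p > 1e-12]
    return -np.sum(p * np.log2(p))

# I(X;Y) = H(Y) - H(Y|X): here 2 - 1 = 1 = n/2 bits.
mutual = h(cond.mean(axis=0)) - np.mean([h(row) for row in cond])
print(round(mutual, 6))  # 1.0
```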
We now prove a matching upper bound that shows that our measurement is optimal.
For our distribution, we can rewrite Eq.\ (\ref{accessible}) for the POVM
given by $\{\alpha_i \outp{\Phi_i}{\Phi_i}\}$ to
\begin{eqnarray*}
\mathcal{I}_{acc}(\mathcal{E}) &=& \max_M \left(\log d + \right.\\
&&\left. \sum_i \frac{\alpha_i}{d} \sum_{k,t} p_{t} |\bra{\Phi_i}U_t\ket{k}|^2 \log |\bra{\Phi_i}U_t\ket{k}|^2 \right)\\
&=& \max_M \left(\log d - \sum_i \frac{\alpha_i}{d} \sum_{t} p_t H(\mathcal{B}_t,\ket{\Phi_i}) \right).
\end{eqnarray*}
It follows from Corollary~\ref{MUderived} that $\forall i\in \{0,1\}^n$ and
$p_1,p_2,p_3\leq 1/2$,
\begin{eqnarray*}
(1/2-p_1) [H(\mathcal{B}_2,\ket{\Phi_i}) + H(\mathcal{B}_3,\ket{\Phi_i}) ]&+&\\
(1/2-p_2) [H(\mathcal{B}_1,\ket{\Phi_i}) + H(\mathcal{B}_3,\ket{\Phi_i})]&+&\\
(1/2-p_3)[H(\mathcal{B}_1,\ket{\Phi_i})+H(\mathcal{B}_2,\ket{\Phi_i})]&& \geq n/2.
\end{eqnarray*}
Reordering the terms (the coefficient of $H(\mathcal{B}_1,\ket{\Phi_i})$ is $(1/2-p_2)+(1/2-p_3) = p_1$, and analogously
for the other two entropies) we now get
$\sum_{t=1}^3 p_{t} H(\mathcal{B}_t,\ket{\Phi_i})\geq n/2.$
Putting things together and using the fact that $\sum_i \alpha_i=d$, we obtain
$$
\mathcal{I}_{acc}(\mathcal{E}) \leq \frac{n}{2},
$$
from which the result follows.
If, on the other hand, there exists a $t \in [3]$ such that $p_t > 1/2$, then
by measuring in the basis $\mathcal{B}_t$ we obtain $\mathcal{I}_{acc}(\mathcal{E}) \geq p_t n > n/2$.
\end{proof}
Above, we have only considered a non-uniform prior over the set
of bases.
In \cite{BalWehWin:pistar} it is observed that when we want to
guess the XOR of a string of length $2$ encoded in one (unknown to us)
of these three bases,
the uniform prior on the strings is not the one that gives the smallest
probability of success. This might lead one to think that a similar
phenomenon could be observed in the present setting, i.e., that one
might obtain better locking with three bases for a non-uniform prior on
the strings. In what follows, however, we show that this is not the case.
Let $p_t=\sum_{k} p_{k,t}$ be the marginal distribution on the basis,
then the difference in Bob's knowledge between receiving only the
quantum state and receiving the quantum state \emph{and} the basis information
is given by
\begin{eqnarray*}
\Delta(p_{k,t})=H(p_{k,t})-\mathcal{I}_{acc}(\mathcal{E}) -H(p_t),
\end{eqnarray*}
where the term $H(p_t)$ subtracts the basis information itself. Consider the post-measurement state
$\nu=\sum_i \bra{\Gamma_i}\mu\ket{\Gamma_i}\ket{\Gamma_i}\bra{\Gamma_i}$. Using (\ref{Ibell}) we obtain
\begin{eqnarray} \label{gap1}
\Delta(p_{k,t})\leq H(p_{k,t})-S(\nu)+n/2 -H(p_t),
\end{eqnarray}
where $S$ is the von Neumann entropy. Consider the state
\begin{eqnarray*}
\rho_{12} =
\sum_{k=1}^{d} \sum_{t=1}^{3} p_{k,t}(\outp{t}{t})_1
\otimes (U_t \outp{k}{k} U_t^{\dagger})_2,
\end{eqnarray*}
we have that
\begin{eqnarray*}
S(\rho_{12})=H(p_{k,t}) &\leq& S(\rho_1) +S(\rho_2)\\
&=& H(p_t) +S(\mu)\\
&\leq& H(p_t)+S(\nu).
\end{eqnarray*}
Using (\ref{gap1}) and the previous equation we get
\begin{eqnarray*}
\Delta(p_{k,t})\leq n/2,
\end{eqnarray*}
for any prior distribution. This bound is saturated by the uniform prior
and therefore we conclude that the uniform prior results in the largest
gap possible.
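To make the saturation explicit (a small illustrative check of ours): for the uniform prior with $n = 2$ we have $H(p_{k,t}) = \log(3d)$, $H(p_t) = \log 3$, and $\mathcal{I}_{acc}(\mathcal{E}) = n/2$ by Lemma~\ref{3mubLocking}, so $\Delta = \log d - n/2 = n/2$.

```python
import numpy as np

n = 2                      # number of qubits (even)
d = 2 ** n
H_kt = np.log2(3 * d)      # entropy of the uniform joint prior over 3 bases x d strings
H_t = np.log2(3)           # entropy of the uniform marginal over the bases
I_acc = n / 2              # accessible information for the uniform prior (lemma above)
delta = H_kt - I_acc - H_t
print(delta)               # equals n/2, i.e. 1.0 for n = 2
```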
\subsection{MUBs from generalized Pauli Matrices}
We first consider MUBs based on the generalized Pauli matrices $X_d$ and $Z_d$ as described
in Section~\ref{prelim}. We consider a uniform prior over the elements of each basis and the set of bases.
Choosing a non-uniform prior does not lead to a better locking effect.
\begin{lemma}\label{equiv}
Let $\mathbb{B}=\{\mathcal{B}_1,\ldots,\mathcal{B}_{m}\}$ be any set of MUBs constructed on the basis of generalized Pauli matrices
in a Hilbert space of prime power dimension $d = p^N$.
Consider the ensemble $\mathcal{E} = \{ \frac{1}{d m},\outp{b^t_k}{b^t_k}\}$. Then
$$
\mathcal{I}_{acc}(\mathcal{E}) = \log d -
\frac{1}{m} \min_{\ket{\psi}} \sum_{\mathcal{B}_t \in \mathbb{B}} H(\mathcal{B}_t,\ket{\psi}).
$$
\end{lemma}
\begin{proof}
We can rewrite Eq.\ (\ref{accessible}) for the POVM
given by $\{\alpha_i \outp{\Phi_i}{\Phi_i}\}$ to
\begin{eqnarray*}
\mathcal{I}_{acc}(\mathcal{E}) &=& \max_M \left(\log d + \right.\\
&&\left. \sum_i \frac{\alpha_i}{d m} \sum_{k,t} |\inp{\Phi_i}{b^t_k}|^2 \log |\inp{\Phi_i}{b^t_k}|^2 \right)\\
&=& \max_M \left(\log d - \sum_i \frac{\alpha_i}{d} \sum_{t} p_{t} H(\mathcal{B}_t,\ket{\Phi_i}) \right).
\end{eqnarray*}
For convenience, we split up the index $i$ into $i = ab$ with $a = a_1,\ldots,a_N$ and $b=b_1,\ldots,b_N$,
where $a_\ell,b_\ell \in \{0,\ldots,p-1\}$ in the following.
We first show
that applying generalized Pauli matrices to the basis vectors of a MUB merely permutes those vectors.
\begin{claim}
Let $\mathcal{B}_t = \{\ket{b^t_1},\ldots,\ket{b^t_d}\}$ be a basis based on generalized Pauli matrices
(Section~\ref{prelim}) with
$d = p^N$. Then $\forall a,b \in \{0,\ldots,p-1\}^N, \forall k \in [d]$ we have that $\exists k' \in [d],$ such
that $\ket{b^{t}_{k'}} = X_d^{a_1}Z_d^{b_1} \otimes \ldots
\otimes X_d^{a_N}Z_d^{b_N}\ket{b^t_k}$.
\end{claim}
\begin{proof}
Let $\Sigma_p^i$ for $i \in \{0,1,2,3\}$ denote the generalized Pauli's
$\Sigma_p^0 = \mathbb{I}_p$,
$\Sigma_p^1 = X_p$,
$\Sigma_p^3 = Z_p$, and
$\Sigma_p^2 = X_p Z_p$. Note that $X_p^uZ_p^v = \omega^{uv} Z_p^v X_p^u$,
where $\omega = e^{2\pi i/p}$.
Furthermore, define
$
\Sigma_p^{i,(x)} = \mathbb{I}^{\otimes (x - 1)} \otimes \Sigma_p^{i} \otimes \mathbb{I}^{\otimes (N-x)}
$
to be the Pauli operator $\Sigma_p^i$ applied to the $x$-th qupit.
Recall from Section~\ref{prelim} that the basis $\mathcal{B}_t$ is the unique simultaneous eigenbasis
of the set of operators in $C_t$, i.e., for all $k \in [d]$ and $f,g \in [N]$,
$\ket{b^t_k} \in \mathcal{B}_t$ and $c_{f,g}^t \in C_t$, we have
$c_{f,g}^t \ket{b^t_k}=\lambda_{k,f,g}^t \ket{b^t_k} \textrm{ for some value }\lambda^t_{k,f,g}$.
Note that any vector $\ket{v}$ that satisfies this equation
is proportional to a vector in $\mathcal{B}_t$. To prove
that any application of one of the generalized Paulis merely permutes the vectors in $\mathcal{B}_t$
is therefore equivalent to proving that $\Sigma^{i,(x)}_{p}
\ket{b^t_k}$ are eigenvectors of $c_{f,g}^t$ for all $f,g$ and $i \in \{1,
3\}$. This can be seen as follows: Note that $c_{f,g}^t=\bigotimes_{n=1}^N
\left(\Sigma^{1, (n)}_{p}\right)^{f_n} \left(\Sigma^{3,(n)}_{p}\right)^{g_n}$
for $f = (f_1,\ldots,f_N)$ and $g=(g_1, \ldots, g_N)$
with $f_n,g_n \in \{0,\ldots,p-1\}$~\cite{boykin:mub}. A calculation then shows that
$$
c_{f,g}^t \Sigma^{i,(x)}_p \ket{b^t_k}= \tau_{f_x,g_x, i} \lambda_{k,f,g}^t \Sigma^{i,(x)}_{p} \ket{b^t_k},
$$
where $\tau_{f_x,g_x, i}=\omega^{g_x}$ for $i = 1$ and
$\tau_{f_x,g_x,i}=\omega^{-f_x}$ for $i = 3$. Thus
$\Sigma^{i,(x)}_{p} \ket{b^t_k}$ is an eigenvector of $c^t_{f,g}$ for
all $t, f, g$ and $i$, which proves our claim.
\end{proof}
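The commutation relation used in the proof is easy to verify numerically. The following sketch (ours) uses the standard clock-and-shift convention $X_p\ket{j} = \ket{j+1 \bmod p}$ and $Z_p\ket{j} = \omega^j\ket{j}$, under which the phase appears as $Z_p^v X_p^u = \omega^{uv} X_p^u Z_p^v$; which side of the product carries the phase depends on the convention chosen for $X_p$ and $Z_p$.

```python
import numpy as np

p = 3
w = np.exp(2j * np.pi / p)
X = np.roll(np.eye(p), 1, axis=0)        # shift: X|j> = |j+1 mod p>
Z = np.diag(w ** np.arange(p))           # clock: Z|j> = w^j |j>

mpow = np.linalg.matrix_power
for u in range(p):
    for v in range(p):
        lhs = mpow(Z, v) @ mpow(X, u)
        rhs = w ** (u * v) * (mpow(X, u) @ mpow(Z, v))
        assert np.allclose(lhs, rhs)
print("Weyl commutation relation verified for p =", p)
```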
Suppose we are given $\ket{\psi}$ that minimizes
$\sum_{\mathcal{B}_t \in \mathbb{B}} H(\mathcal{B}_t,\ket{\psi})$.
We can then construct a full POVM with $d^2$ elements by taking
$\{\frac{1}{d}\outp{\Phi_{ab}}{\Phi_{ab}}\}$ with $\ket{\Phi_{ab}} = (X_d^{a_1}Z_d^{b_1} \otimes \ldots
\otimes X_d^{a_N}Z_d^{b_N})^\dagger\ket{\psi}$. However, it follows from our claim
above that $\forall a,b,k, \exists k'$ such that $|\inp{\Phi_{ab}}{b^t_k}|^2 = |\inp{\psi}{b^{t}_{k'}}|^2$,
and thus
$H(\mathcal{B}_t,\ket{\psi}) = H(\mathcal{B}_t,\ket{\Phi_{ab}})$, from which the result follows.
\end{proof}
Determining the strength of the locking effects for such MUBs is thus equivalent to proving bounds on
entropic uncertainty relations. We thus obtain as a corollary of
Theorem~\ref{squareThm} and Lemma~\ref{equiv}, that, for dimensions which are the square of a prime power
$d = p^{2N}$, using any product MUBs based on generalized Paulis does not give us any better
locking than just using 2 MUBs.
\begin{corollary}\label{pauliLocking}
Let $\mathbb{S}=\{\mathcal{S}_1,\ldots,\mathcal{S}_{m}\}$ with $m \geq 2$ be any set of MUBs constructed on the basis of generalized Pauli matrices
in a Hilbert space of prime (power) dimension $s = p^N$.
Define $U_t$ as the unitary that transforms the computational basis
into the $t$-th MUB, i.e., $\mathcal{S}_t = \{U_t\ket{1},\ldots,U_t\ket{s}\}$.
Let $\mathbb{B} = \{\mathcal{B}_1,\ldots,\mathcal{B}_{m}\}$ be the set of product MUBs with
$\mathcal{B}_t = \{U_t \otimes U_t^* \ket{1},\ldots,U_t \otimes U_t^*\ket{d}\}$ in dimension $d=s^2$.
Consider the ensemble $\mathcal{E} = \{ \frac{1}{d m},\outp{b^t_k}{b^t_k}\}$. Then
$$
\mathcal{I}_{acc}(\mathcal{E}) = \frac{\log d}{2}.
$$
\end{corollary}
\begin{proof}
The claim follows from Theorem~\ref{squareThm} and the proof of Lemma~\ref{equiv}, by constructing
a similar measurement formed from vectors $\ket{\hat{\Phi}_{\hat{a}\hat{b}}} = K_{a^1b^1}
\otimes K_{a^2b^2}^* \ket{\psi}$
with $\hat{a} = a^1a^2$ and $\hat{b} = b^1b^2$, where $a^1,a^2$ and $b^1,b^2$ are
defined like $a$ and $b$ in the proof of Lemma~\ref{equiv}, and $K_{ab} = (X_d^{a_1}Z_d^{b_1}\otimes\ldots\otimes X_d^{a_N}Z^{b_N}_d)^\dagger$
from above.
\end{proof}
The simple example we considered above is in fact a special case of Corollary~\ref{pauliLocking}.
It shows that if the vector that minimizes the sum of entropies has certain symmetries,
such as for example the Bell states, the resulting POVM can even be much simpler.
\subsection{MUBs from Latin Squares}
At first glance, one might think that maybe the product MUBs based on generalized Paulis are not well suited
for locking just because of their product form. Perhaps MUBs with entangled basis vectors do not exhibit this problem.
To this end, we examine how well MUBs based on Latin squares can lock classical information in a quantum state.
All such MUBs are highly entangled, with the exception of the two extra MUBs based on non-Latin squares.
Surprisingly, however, it turns out that \emph{any} set of at least two MUBs based on Latin squares does exactly as well
at locking as using just two such MUBs. Thus such MUBs perform equally ``badly'', i.e., we cannot improve the strength of
the locking effect by using more MUBs of this type.
\begin{lemma}\label{LSlocking}
Let $\mathbb{B}=\{\mathcal{B}_1,\ldots,\mathcal{B}_m\}$ with $m \geq 2$ be any set of MUBs in a Hilbert space of dimension $d=s^2$ constructed
on the basis of Latin squares.
Consider the ensemble $\mathcal{E} = \{ \frac{1}{d m},\outp{b^t_k}{b^t_k}\}$. Then
$$
\mathcal{I}_{acc}(\mathcal{E}) = \frac{\log d}{2}.
$$
\end{lemma}
\begin{proof}
Note that we can again rewrite $\mathcal{I}_{acc}(\mathcal{E})$ as in the proof of Lemma~\ref{equiv}. Consider
the simple measurement in the computational basis $\{\outp{i,j}{i,j}|i,j \in [s]\}$. The result
then follows by the same
argument as in Lemma~\ref{LSentropy}.
\end{proof}
\section{Conclusion and Open Questions}
We have shown tight bounds on entropic uncertainty relations and locking for specific sets of mutually unbiased bases.
Surprisingly, it turns out that using more mutually unbiased bases does not always lead to a better locking effect.
It is interesting to consider what may make these bases so special. The example of three MUBs considered in Lemma~\ref{3mubLocking}
may provide a clue. These three bases are given by the common eigenbases of $\{\sigma_x \otimes \sigma_x, \sigma_x \otimes \mathbb{I},
\mathbb{I} \otimes \sigma_x\}$, $\{\sigma_z \otimes \sigma_z, \sigma_z \otimes \mathbb{I}, \mathbb{I} \otimes \sigma_z\}$ and $\{\sigma_y \otimes \sigma_y,
\sigma_y \otimes \mathbb{I}, \mathbb{I} \otimes \sigma_y\}$ respectively~\cite{boykin:mub}. However, $\sigma_x \otimes \sigma_x$, $\sigma_z \otimes \sigma_z$ and
$\sigma_y \otimes \sigma_y$ commute and thus also share a common eigenbasis, namely the Bell basis. This is exactly the basis we will use as our
measurement. For all MUBs based on generalized Pauli matrices, the MUBs in prime power dimensions are given as the common eigenbasis
of similar sets consisting of strings of Paulis. It would be interesting to determine the strength of the locking effect on the
basis of the commutation relations of elements of \emph{different} sets. Perhaps it is possible to obtain good locking from a subset of such MUBs
where none of the elements from different sets commute.
It is also worth noting that the numerics of~\cite{terhal:locking}
indicate that at least in dimension $p$ using more than three bases does indeed lead to a stronger
locking effect. It would be interesting to know whether the strength of the locking effect
depends not only on the number of bases, but also on the dimension of the system in question.
Whereas general bounds still elude us, we have shown that merely choosing mutually unbiased bases is not sufficient to obtain good locking effects or high lower bounds
for entropic uncertainty relations. We thus have to look for different properties.
\acknowledgments
We would like to thank Harry Buhrman, Hartwig Bosse, Matthias Christandl, Richard Cleve, Debbie Leung, Serge Massar,
David Poulin, and Ben Toner for discussions. We would especially like to thank Andris Ambainis and Andreas Winter for many
helpful comments and interesting discussions. We would also like to thank Debbie Leung, John Smolin and Barbara Terhal for providing
us with explicit details on the numerical studies conducted in~\cite{terhal:locking}.
Thanks also to Matthias Christandl and Serge Massar for discussions on errors in string commitment protocols, to which
end claim 1 was proved in the first place. Thanks also to Matthias Christandl and Ronald de Wolf for helpful
comments on an earlier version of this note.
We are supported by an NWO vici grant 2004-2009
and by the EU project QAP (IST 015848).
November 2006 ISSUE - Leonard N. Garcia (QSS Group, Inc.), Editor
The Pisgah Astronomical Research Institute (PARI) Hosts the Space Science Lab
The Warren Rupp Observatory and Hidden Hollow Astronomical Convention 2006
Radio Jove in the News
Radio JOVE in the classroom in Puerto Rico
Sonification Tools for Radio JOVE
A New Home for the WCCRO
New Horizons Probe to Fly by Jupiter in February
JOVE Around the World
Introducing The Radio JOVE Project, Inc.
by Charles Osborne, (PARI Technical Director)
In July and August of 2006 PARI hosted 27 local high school students in our first Space Science Lab (SSL). Radio JOVE kits were their centerpiece project.
The SSL is funded by a three year grant to PARI from the Burroughs-Wellcome Fund Student Science Enrichment Program to encourage under-represented students in math and science. The students were chosen by their teachers as good candidates to participate in this project. Each student received a backpack with the Radio JOVE kit, a soldering iron, and tools. The program was a one week summer school with the students staying on site at PARI.
Their studies began with lectures on astronomy, radio astronomy, solar physics, and basic electronics. Component identification, how to use a digital voltmeter, and soldering were practiced under the watchful eyes of local mentors. Each group of three to four students had one mentor. We started by desoldering random components from old circuit boards, and in some cases resoldering them back into the boards to gain experience using the tools and soldering irons.
The individualized instruction paid off with all twenty-seven students leaving with working JOVE receivers. On the last day we participated in a JOVE teleconference. You should have seen the looks on the students' faces when they realized Dick Flagg and Jim Sky, the creators of the JOVE receiver and SkyPipe software, were right there on the teleconference bridge answering their questions. PARI gave away donated PCs to the students making sure that every student had their own dedicated PC to use for the Radio JOVE project (most of the students did not have computers at home). Many of the donated PCs were Pentium 100s, but have proven perfect for tasks like collecting Radio JOVE data.
PARI will host the students for several Saturday group meetings in coming months to help them maintain momentum in receiving solar flares and Jupiter. In addition to introducing them to the Radio JOVE list server, we also created their own email group on our server so they could compare notes and work with PARI staff and each other. Thanks to the grant from Burroughs-Wellcome, for the next two summers we anticipate repeating this process with new groups of high school students from surrounding counties. For further information visit PARI's website http://www.pari.edu and http://www.pari.edu/programs/K12Programs/ssl/ .
PARI is a not-for-profit public foundation located at a former NASA site 30 miles southwest of Asheville, NC. Our mission is to provide research and educational opportunities for a broad cross-section of users in radio and optical astronomy and in the related disciplines of physics, mathematics, engineering, earth sciences, chemistry and computer science.
Paul Doupont helps with part identification as students check their parts kits against the lists.
Christi Whitworth shows how to read a part on a circuit board with the digital voltmeter.
Charles Osborne answering questions.
Thurburn Barker talks about reading color bands.
The watchmaker extreme magnifier goggles are neat!
by Jason Shinn(Astronomy Club of Akron, Ohio USA )
The Warren Rupp Observatory, Mansfield, Ohio.
The Warren Rupp Observatory (WRO) is located on the grounds of Friendly House Hidden Hollow Resident Camp near Mansfield, Ohio (USA). The observatory holds monthly public events and hosts numerous educational programs for Hidden Hollow campers, student groups, scouts, and anyone interested in pursuing astronomical outreach - all at no cost. WRO and the Richland Astronomical Society (RAS) also host one of Ohio's premier astronomical conventions, the Hidden Hollow astronomical convention. Once each year, folks from all over the United States gather under some of the darkest skies in Ohio to swap stories and observations. Aside from observing at night, lectures and workshops are held during the day.
Participants of the Hidden Hollow Astronomical Convention were able to observe live radio observations made by the Canal Fulton Amateur Radio Observatory using Radio JOVE equipment.
The Canal Fulton Amateur Radio Observatory (CFARO) made its second appearance at Hidden Hollow this year. I was again invited by RAS president and observatory director, Tammy Plotner, to lecture on my radio astronomy hobby. At last year's event I made a kiosk of sorts to allow folks to listen to sample radio sounds one might hear at decametric wavelengths. This year the CFARO telescope traveled to Hidden Hollow for a live demonstration! I arrived the day before opening registration and set up a complete Radio JOVE system with one 10' dipole and two computers. Both computers were used to monitor the radio background continuously. Computer one charted the Milky Way over a twenty-four hour period at one sample per second. SkyPipe was set to show a 24 hour window from 0000 UT to 2359 UT through which it displayed the local transit of the plane of the Milky Way. During the day, computer two monitored the background for possible solar radio bursts, with a sample rate of 10 samples per second. Having a "live" demonstration like this, open all day and all night, allowed those interested to see the Radio JOVE project firsthand.
Examples of two 24-hour Milky Way drift scans on October 20, 21, 2006.
Despite less than cooperative weather, this year's convention at Hidden Hollow was a tremendous success. Many thanks to the Radio JOVE project for donating a copy of the project's educational CDs, "Visual Primer to Radio JOVE" and "Radio JOVE Reference", for the door prize drawing. The prize winner, a budding amateur radio astronomer, was thrilled to have won the CDs! (Contributed door prizes are raffled to participants to meet the expenses of having such an event.) I am looking forward to attending future conventions with the complete JOVE telescope. Responses from convention visitors to CFARO and the JOVE project were very positive, a major score in outreach for Radio JOVE. As the sun approaches solar maximum there will be tremendous opportunity to show others just how fascinating the Radio JOVE project truly is. With as much excitement as I have seen over simple lightning strikes, a live demonstration with an active sun or Jupiter could be nothing short of amazing!
A special thanks to Tammy Plotner for her help in preparing this article.
The Radio JOVE Project: http://radiojove.gsfc.nasa.gov/
The Warren Rupp Observatory: http://www.wro.org/
The Canal Fulton Amateur Radio Observatory: http://members.aol.com/cfaro/index.html
The Astronomy Club of Akron - Akron, OH: http://www.acaoh.org/
The Bluegrass Amateur Astronomy Club - Lexington, KY: http://www.ms.uky.edu/~bgaac/
The Astronomical League: http://www.astroleague.org/
The U.S. Geological Survey: http://www.usgs.gov/
Michael Stephan and the Warsaw Astronomical Society of Indiana have recently been featured in an article published in the South Bend Tribune. The article describes their new Radio Jove system in operation at the Potawotami Wildlife Park.
We heard about this from the Radio Jove email list but we thought this ought to be highlighted more prominently. Below is a part of the email message and a link to a news article from the South Bend Tribune. Thank you Michael. Great work! [Editor.]
"We don't yet know how to interpret our results but hey we at least have a graph going when we hook things up. Our set-up is only temporary while I run permanent runs to my house (closest building) where we will house the receiver and then feed a signal to the Interpretive Center (300' away) for hopefully a public display.
We have a New Moon Group and now a "Radio Astronomy Group" for our small society here in Indiana. We are a core group of maybe half dozen members working our way through the PPTs and trying to learn about radio astronomy."
Read more about it at http://www.southbendtribune.com/apps/pbcs.dll/article?AID=/20061030/News01/610300366/-1/NEWS01.
Michael D. Stephan
Warsaw Astronomical Society
Potawatomi Wildlife Park
Indiana's First Dark Sky Preserve
by Wanda Diaz(Shirohisa Ikeda Project)
Maunabo is a small town in the south-east corner of the island of Puerto Rico. It is a town bathed by the waters of the Caribbean sea, where the population mainly depends on fishing for their income. Maunabo's Higinio Figueroa Villegas elementary and middle school recently acquired a Radio Jove kit with the assistance of the town's mayor, the Honorable Jorge Marquez, and the Shirohisa Ikeda Project. This kit will enable the students to learn physics, mathematics and astronomy while having fun.
The group of students meets once a week with Mrs. Aida Soto, their science teacher, and Ms. Wanda Diaz from the Shirohisa Ikeda Project to work on subjects related to radio astronomy. These students assembled their first Radio JOVE kit a year ago, while in sixth grade. At that time the soldering took the entire semester. This time the kids used a new technique developed by Wanda Diaz and Leonard Garcia from the Radio JOVE team. This technique was developed to enable the visually impaired to safely solder electrical components. The first kit built with this technique was soldered by Wanda Diaz, Young Lee and Dr. Garcia at NASA/GSFC during the summer of 2006. That receiver is working properly and is being installed in the Cecilia Benitez elementary school in Caguas, PR. The soldering technique was explained to the kids in Maunabo as if they were blind. They were given oral explanations of each component while making use of their visual skills. They were also encouraged to make tactile exploration of the components. The students said they understood better this time. They also thought it was safer and more fun.
The students and the teacher are very enthusiastic!
Ms. Wanda Diaz is advising seventh-grade students in Maunabo, Puerto Rico on how to assemble the Radio Jove receiver.
Seventh-grade students in Maunabo, Puerto Rico soldering components onto the printed circuit board.
by Jim Sky(RadioSky Publishing)
As part of the effort to make Radio JOVE more accessible to everyone, we have been developing non-visual software tools to handle some of the basic functions currently provided by Radio-Jupiter prediction software, Radio-SkyPipe data collection software, and the Spectrograph viewer. These new Windows programs are in the experimental stages but are available for anyone to work with.
These tools do not require the use of a mouse or a monitor. They rely on keyboard inputs to produce the desired functions. This means that the user will have to learn a few commands by reading the appropriate sections of the supplied documentation. There is also some help by pressing F1. Most of the commands are short and intuitive. The output is either spoken in English as in the case of the Radio-Jupiter storm predictions or presented as sonified data. Sonification involves converting the data (such as signal strength) into sound that helps you visualize the signal intensity over time. In the case of the Spectrograph data, a file conversion tool produces a sound spectrum that mimics the RF spectrum over time.
Please feel free to use these programs and provide your comments about them. You can find a page describing the various downloads at: http://radiosky.com/sonification.htm
by Richard Flagg(Windward Community College Radio Observatory (WCCRO) )
The Windward Community College Radio Observatory (WCCRO) has moved into the new Lanihuli Observatory building. Eventually the observatory will also house a 16 inch optical telescope to be used for WCC's astronomy courses, students engaged in Hawaii Space Grant Consortium research projects, school groups touring the Aerospace Exploration Lab and the community in general. The observatory also houses a NOAA weather satellite tracking station.
Interior of the Lanihuli Observatory showing the Jove installation. Receivers and support electronics are located in the cabinet along the rear wall. Computers used for serving SkyPipe, spectrograph images, and streaming audio are also visible. The large rectangular tube to the left is part of a heliostat which tracks the sun and projects its image downward onto a flat and stable surface for viewing and analysis.
The WCCRO log-periodic antenna is seen in the background of this rooftop shot, with the recently installed heliostat in the foreground. The antenna is supported on an azimuth/elevation mount atop a 20 ft tower.
by Jim Gass(Raytheon/ITSS)
"New Horizons" is a NASA deep-space probe currently en route to fly within 10,000 km of Pluto and its three moons, Charon, Nix, and Hydra. From there, it will continue outward into the Kuiper Belt. Launched in January of this year, the spacecraft was built by John Hopkins Applied Physics Lab and the Southwest Research Institute. It is scheduled to reach Pluto in July 2015.
New Horizons' LORRI image of Jupiter from 181 million miles away taken on September 4, 2006.
On its way to Pluto, New Horizons will pass within 2.3 million kilometers of Jupiter at 05:41 UTC on February 28, 2007. This planned maneuver will give it a gravity assist. At the point of closest approach to Jupiter, the spacecraft will be traveling at 21 km/s. During this encounter, it will study Jovian atmospheric dynamics, its ring composition, the charged particle composition in Jupiter's magnetotail, lightning, aurorae, and the atmospheres and composition of the Galilean moons. Some limited observations will begin as early as January 1, 2007 with the most intense series of observations being made in a ten day period centered on closest approach. The observations will continue until May or June 2007 as New Horizons becomes the first spacecraft to travel down Jupiter's magnetotail and gathers information on its distant magnetosphere.
The craft has seven scientific instruments. These include the Long Range Reconnaissance Imager telescopic camera (LORRI), the "Ralph" visible/infrared imager/spectrometer, the "Alice" UV imaging spectrometer, the Radio Science Experiment (REX), the Solar Wind around Pluto plasma spectrometer (SWAP), the Pluto Energetic Particle Spectrometer Science Investigation (PEPSSI), and the Venetia Burney Student Dust Counter (VBSDC).
Radio Jovers may find REX interesting. This experiment is integrated into the spacecraft's main radio communications system. At Pluto, REX will measure the refraction of the New Horizons' communication signals by Pluto's atmosphere. This will correlate to the average molecular weight and temperature. REX will also be used as a passive radiometer, measuring natural radio emissions in its operating range.
Radio Jove participants may want to make an extra effort to observe Jupiter during the closest approach period and for the few months afterwards. We expect to set up some coordinated observing sessions during this period. Look for announcements on this in emails on the Radio Jove distribution list.
For more information about New Horizons, see http://pluto.jhuapl.edu.
by Jim Thieman (NASA's GSFC)
Over the many years that the Radio JOVE project has been in existence, we have taken and/or received images of JOVE telescopes in many places around the world. See the last issue for Radio JOVE dipoles in Australia, Montana, and Japan. Below are shown some more photos from the archive that we have built. Perhaps you would like to contribute to a follow-on to this article as well. It would be good, of course, if there were something unique about the photo, such as a unique approach to how the antenna is supported, a unique type of antenna, a unique setting in which it appears, or maybe just the unique person or persons associated with that antenna. Please be sure that we have permission to show the pictures of individuals if they are identifiable in the picture. Thanks for any submissions.
Is that Radio JOVE on Mars? Well at least in a Mars-like area in Utah. Thanks to Dusty Samouce.
Who says you can't observe from a rooftop? Thanks to Chuck Higgins in Tennessee.
One antenna is just not enough for some people. Thanks to Dusty Samouce in Montana.
A Yagi antenna operating at 20 MHz and installed on the rooftop of Mendel Hall, The College of Saint Catherine, St. Paul, Minnesota. Dick Flagg in Hawaii tells us this is the Mother of all Yagi antennas.
Early JOVE Team listening for solar radio bursts. Very early. Thanks to Dick Flagg in Hawaii.
by Chuck Higgins (Middle Tennessee State University)
On behalf of all of the folks who volunteer their time and talents to the Radio JOVE Project, I would like to announce the creation of The Radio JOVE Project, Inc. This is a not-for-profit company created in August 2006 that will specialize in radio science education products and mainly serve as the distributor for the Radio JOVE telescope kits. The primary purpose for starting this company is to be able to more easily handle the RJ kit parts ordering and financial aspects of kit improvement and distribution. Another nice benefit is that we can now accept donations from folks who are interested in radio astronomy and education. Therefore, let me make the first solicitation: If you or anyone you know (individual or company) would like to make a tax-deductible contribution to the Radio JOVE Project, Inc. please inquire at the address below. Thank you very much.
The Radio JOVE Project, Inc.
MTSU Box 412
FEIN: 20-5239863
E-mail: [email protected]
INSPIRE http://image.gsfc.nasa.gov/poetry/inspire/ | {"pred_label": "__label__cc", "pred_label_prob": 0.6987537145614624, "wiki_prob": 0.3012462854385376, "source": "cc/2022-05/en_middle_0065.json.gz/line815675"} |
high_school_physics | 506,691 | 16.410545 | 1 | Why does the International Space Station have a downward facing light?
I really enjoy watching the International Space Station fly over head. I subscribe to alerts so I know when it is in my area during the night time. I wouldn't be able to see it if it weren't for the light. Perhaps the light is only so we can see it more easily. I'm not exactly sure why there is a light on it and I've searched for answers.
Why is there a light on the International Space Station facing us on Earth?
It's a very bright white light. It's not a blinking light for collision avoidance. It uses power, maybe not a lot but still a consideration.
Tags: iss, observation
TildalWave
$\begingroup$ Where do you subscribe to those alerts? It sounds great! $\endgroup$ – TMH Oct 13 '14 at 9:12
$\begingroup$ You can subscribe to ISS Alerts here: spotthestation.nasa.gov $\endgroup$ – Scott Oct 13 '14 at 14:00
$\begingroup$ "It's a very bright white light" - yup that's our sun alright!! $\endgroup$ – Fattie Oct 14 '14 at 10:09
$\begingroup$ en.wikipedia.org/wiki/Satellite_flare $\endgroup$ – Nick T Oct 15 '14 at 7:37
$\begingroup$ "It's not a blinking light for collision avoidance." You mean like dodging bullets? $\endgroup$ – a CVn Oct 15 '14 at 13:10
There are a few positional lights on the visiting spacecraft docked to the International Space Station (ISS), which also double as indicators that those spacecraft are powered, among other purposes. The Canadarm2 has lights on it so it can be remotely / CCTV operated even when the station is in the Earth's shadow, and they do have lights inside the station so astronauts / cosmonauts can see; some of this light would be reflected towards the Earth when someone is in the Cupola pointing towards nadir and its protective shutters are open. But none of these lights are nearly powerful enough to be seen from at least 410 km (254 miles) away, which is the station's current orbital altitude and the minimum distance between you and the station when it passes directly overhead. And most of them blink.
What you can see from the ground is however this:
What's perhaps less well known is that the station's solar panel arrays are double-sided, so they also collect some of the sunlight reflected off the Earth (our planet's albedo). They generate up to roughly 120 kW of power (on average about 84 kW) needed by the station's large amount of equipment, life support, experiments, and so on. The station stores excess collected power in batteries for when it doesn't generate electric power with its photovoltaics (that's where that average comes from, lower than its maximum output), but more importantly to your question, the arrays are roughly the size of a U.S. football field:
Image source and credit: International Space Station - Facts and Figures
So what you see is indeed as @GWP mentions in his/her answer, even if that started as a rather vague one-line answer. How can we be sure? Simple. The station simply doesn't have sufficient power to run any such lights powerful and large enough to be seen from the ground, as even at 100% efficiency they would consume roughly the same amount of power that the station generates at its own photovoltaics efficiency of well under 50% (yes, they're due for an upgrade if they want to run it till 2024 and run even more experiments). The rest of the incident light from the Sun is mostly reflected, and some absorbed as heat that needs to be radiated to space. But you'd want as little of this heat absorption, since convective heat transfer doesn't really work in the near vacuum in the Low Earth Orbit (LEO), so the solar arrays use coating that matches its efficiency and reflects the rest in wavelengths it's not as efficient in converting incident sunlight into electricity. And if it can utilize less than 50% of what light it has available to it and reflects as much as possible of the rest,... well, you can do the maths.
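A hedged back-of-envelope version of that maths (the assumptions here are this sketch's, not claims from the thread: a Venus-like target brightness of magnitude -4, fluxes anchored to the Sun at magnitude -26.74 and about 1361 W/m^2, an isotropic and perfectly efficient lamp, and a 400 km distance):

```python
import math

SUN_MAG, SUN_FLUX = -26.74, 1361.0   # apparent magnitude and irradiance of the Sun (W/m^2)
target_mag = -4.0                    # Venus-like brightness (assumption)
distance = 400e3                     # metres to the station (assumption)

# Irradiance at the observer needed to match magnitude -4
flux = SUN_FLUX * 10 ** (-0.4 * (target_mag - SUN_MAG))

# Electrical power of an isotropic, 100%-efficient lamp producing that irradiance
power_W = flux * 4 * math.pi * distance ** 2

print(f"required lamp power: about {power_W / 1e6:.1f} MW")
```

Even under these generous assumptions the lamp would need a couple of megawatts, well above the station's roughly 84-120 kW electrical budget, which supports the argument above.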
The light you see is sunlight reflected off the station's enormous solar power arrays, and can only be visible when the observational conditions are dark enough that the bright spot of the station isn't lost in the sunlit background, and the arrays reflect sunlight towards you, so when the station is not in Earth's shadow. When the station is just entering the Earth's shadow, and if it was visible beforehand, you can also visually observe it slowly fading into the shadow with your own eyes. That's another proof that it doesn't keep lights strong enough to be seen from the surface turned on, if the fact that there's absolutely no good reason to do that isn't proof enough on its own.
Here are four frames from the station showing two Soyuz spacecraft docked to it entering the Earth's shadow as seen from one of the ISS HD Earth Viewing Experiment aft facing cams (click on images for larger versions):
This happens about every 93 minutes on the station, or each time it completes one of its orbits. Except during the period of station's high beta angle when it might not enter Earth's shadow at all and its orbit remains more or less aligned with the Earth's day/night terminator for a few days or so. For more on all of this, read e.g.:
Can I see the ISS from the surface with the naked eye?
Can you make out the shape of the ISS with the unaided eye as it passes overhead?
How often does the ISS orbit align with the day/night terminator?
OK, so why white if the station's solar arrays reflect brownish-golden color on their own? Simply because of the intensity of reflected sunlight when compared to the darker background of dusk or dawn when the station can be seen with a naked eye. When the station is in the process of entering Earth's shadow and/or its beta angle (the angle between the Sun, the station, and the observer) is shallower, and the station's solar arrays are rotated to collect as much sunlight so towards the Sun, the amount of this reflected light will also be smaller, the light beam will also lose some of its intensity through the atmosphere through a process known as atmospheric diffraction (see the other linked question), and your eyes will be able to distinguish colors more precisely. It will also appear a bit more yellowish at those times.
To somewhat reaffirm my last point, consider this frame capture from the ISS HD Earth Viewing Experiment showing solar arrays on two Soyuz visiting spacecraft docked to the station and reflecting incident sunlight at different angles:
As you can see, the solar panels are actually not white, but due to the intensity of reflected sunlight, the array of the foremost Soyuz spacecraft appears bright white. It's not exactly white, just as the intensity-averaged wavelength of the light emitted by our Sun isn't, and the reflected light does include a color component from the materials used on the panels, but for all intents and purposes, it's close enough.
TildalWave
$\begingroup$ If the solar panels are tracking the sun, when the ISS is passing the terminator they will be perpendicular to the ground below and will not reflect light in that direction. What would, are the aluminum radiator panels that cool the solar panels, which have an area of 156 m^2. They wouldn't be directly reflecting sunlight, though - they'd be bouncing back the light from the sunlit portion of the earth below them. I've struggled with this before and that's as near as i can figure it... $\endgroup$ – kim holder wants Monica back Oct 14 '14 at 15:54
$\begingroup$ @briligg It's not as simple as that because they rotate the panels to provide as much current as possible from both of its sides. IIRC they actually even made some algorithm competition or alike to solve that. But you're right, it's not just the solar panels that reflect light. The station's truss is also mostly white or otherwise reflective (anodized metal and such) so it would also reflect some of the sunlight. As for radiators, I think the angle would be too steep most of the time. Maybe when the station is low on the horizon to the west and at dawn (Sun to the east) it could. $\endgroup$ – TildalWave Oct 14 '14 at 16:05
$\begingroup$ Ooo that does sound like a complicated calculation, i like it. All the same i don't think the solar panels are the main contributors to the light most of the time. For the sake of clarity, i'm going to add an answer that shows some photos taken with telescopes from the ground, and quotes the Hayden Planetarium's commentary on the subject. $\endgroup$ – kim holder wants Monica back Oct 14 '14 at 16:26
$\begingroup$ I am surprised this has less votes than the #1 answer. I'd like to reward both of you. Maybe edit them together. $\endgroup$ – Scott Feb 15 '15 at 2:56
It's not a powered light; what you're seeing is sunlight reflected from the solar panels. That's why you can only see it during near overhead passes around dusk and dawn - the sun has not yet "set" from the space station's altitude.
From http://nasa.gov/vision/space/travelinginspace/f_skywatch.html
The Space Station is one of the most visible man made objects in the sky, because it reflects sunlight and often looks like a slow-moving star.
GWP
$\begingroup$ @Scott That sounds very similar to the sun reflecting from the moon. It could even be exactly the same mechanism! $\endgroup$ – Gusdor Oct 13 '14 at 14:32
$\begingroup$ @Gusdor Good point. I would feel more confident that it is not a powered light with a reputable reference. $\endgroup$ – Scott Oct 13 '14 at 14:54
$\begingroup$ @Scott Hopefully this will be reputable enough. nasa.gov/vision/space/travelinginspace/f_skywatch.html The Space Station is one of the most visible man made objects in the sky, because it reflects sunlight and often looks like a slow-moving star. $\endgroup$ – Gusdor Oct 13 '14 at 15:00
$\begingroup$ You should do some more stargazing at dusk! In fact, quite a few satellites passing overhead at any given time, visible as a steadily moving 'star'. "At any given time, there are hundreds of satellites in the sky. Most of them are too faint to see, but if you're in an area without much light pollution, and you look carefully enough, there's virtually always a satellite visible. Their rapid motion across the sky and various highly inclined orbits make them unlikely to be anything but artificial." from what-if.xkcd.com/60 $\endgroup$ – Sanchises Oct 13 '14 at 16:18
$\begingroup$ @Scott: It's the sun. Source: I work on the space station. $\endgroup$ – Tristan Oct 14 '14 at 18:42
The ISS is covered with highly reflective material to help regulate its temperature. It isn't so easy to radiate energy away in space; a vacuum is actually the best insulator known to science. So when sunlight hits the station, the best thing is to prevent as much of its energy as possible from being absorbed, thus everything is bright white. There are also giant radiators on it for the same reason. They are mostly there to prevent the solar panels from overheating, because the panels are actually the only things on the ISS that aren't bright white; they are a dark coppery color. The radiators also help cool the rest of the station. Because the solar panels are angled for best capture of light, they rarely reflect much light towards the ground. For the most part, what you see from the ground is light reflected from the body of the space station and from the radiators.
There are a number of photos around online taken with telescopes of the ISS passing overhead near dusk or dawn. The one below is in the public domain and was taken by Ralf Vandebergh. In it, the bright sections are on the main truss and the hab modules.
And here are links to several more copyrighted ones: One from the European Space Agency
Another from Vandebergh that appeared in Wired
And several by Thierry Legault
When the ISS does reflect sunlight off the solar panels towards the ground, what is seen is a flare, and it can be very bright indeed. Here is what they say about it on the Hayden Planetarium website:
And as a bonus, sunlight glinting directly off the solar panels can sometimes make the ISS appear to briefly "flare" in brilliance to as bright as magnitude -8; more than 16 times brighter than Venus!
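The magnitude arithmetic behind that claim is easy to check (a sketch; five magnitudes correspond to a brightness factor of 100, and -4.9 for Venus at its brightest is an assumed value):

```python
m_flare = -8.0     # bright ISS flare, from the quote above
m_venus = -4.9     # Venus near maximum brightness (assumption)

# Ratio of apparent brightnesses for two magnitudes
ratio = 100 ** ((m_venus - m_flare) / 5)

print(f"flare / Venus brightness ratio: about {ratio:.0f}")
```

That comes out around 17, consistent with the quoted "more than 16 times brighter."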
kim holder wants Monica back
$\begingroup$ It would be nice to have a good measure of apparent magnitude of the ISS on one of these passes and use it to calculate how much light was being reflected towards the observer - if i could figure out how to do that. $\endgroup$ – kim holder wants Monica back Oct 14 '14 at 17:21
$\begingroup$ For ISS apparent magnitude best I could find is from Heavens Above: Intrinsic brightness (Magnitude) -1.3 (at 1000 km distance, 50% illuminated), Maximum brightness (Magnitude) -5.1 (at perigee, 100% illuminated). My own rule of a thumb is that it's about as bright as Venus (max –4.89) when "riding the day/night terminator" and about as bright as Mars (max –2.91) otherwise. Both of which also vary in time, that's why it's at least as correct as a clock that doesn't work - two times a day it's spot on. :) $\endgroup$ – TildalWave Oct 14 '14 at 17:28
$\begingroup$ There's one problem with what you suggest (that the existing photographs of the station from the ground fairly represent what parts of it reflect the most light). Namely that there's an intrinsic problem to taking stills / video of a brightly lit distant object with optical equipment due to atmospheric diffraction. When the station would be fully lit (or close to it) it would simply be impossible to get much detail of it from ground. So what photos of it you see online taken from the ground are mostly when it reflects less light and not the best representative of what we see with a naked eye. $\endgroup$ – TildalWave Oct 15 '14 at 13:26
high_school_physics | 28,168 | 16.404118 | 1 | Can We See Stars Forming?
A recent article on the Internet was entitled “Infant Stars Caught in Act of Feeding” [1]. New techniques are allowing astronomers to study disks of dust and gas around stars at very high levels of detail. The European Southern Observatory's Very Large Telescope Interferometer (or VLTI) in Chile is able to measure at an angle so small that it would be like looking at the period of a sentence at a distance of 50 kilometers (31 miles). An interferometer combines the data from two or more telescopes that are separated from each other in such a way that the multiple telescopes act like one much larger telescope. A recent study looked at six stars known as Herbig Ae/Be objects, believed to be young stars still growing in size from their formation. This study was directed at finding what is happening to the dust and gas surrounding these stars.
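That resolution claim is easy to sanity-check with the small-angle approximation (a sketch; the 0.5 mm diameter assumed for a printed period is an assumption, not from the article):

```python
import math

period_m = 0.5e-3    # assumed diameter of a printed period (metres)
distance_m = 50e3    # 50 kilometers (metres)

theta_rad = period_m / distance_m                  # small-angle approximation
theta_mas = math.degrees(theta_rad) * 3600 * 1e3   # radians -> milliarcseconds

print(f"angular size: {theta_rad:.0e} rad, about {theta_mas:.1f} milliarcseconds")
```

That works out to roughly 2 milliarcseconds, which is indeed the order of the angular resolution achieved by long-baseline optical interferometers such as the VLTI.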
Astronomers frequently report observations like this of “new stars” or “young stars,” which assume that these stars formed within the last few million years. Astronomers who believe the big bang and today’s other naturalistic origins theories would say stars can form in the present from clouds of dust and gas in space. Realize that no one saw these stars form. Instead, the properties of these stars, along with their location near gas and dust clouds where astronomers think that stars form is the basis for the belief that they are recently formed stars.
Young-universe creationist physicists and astronomers tend to be skeptical of reports claiming certain stars have recently formed. These claims often make many assumptions including that 1) the age of the star is known based on today’s accepted ideas of millions of years of stellar evolution and 2) that the dust disk surrounding the star had a role in the star’s formation. Evolutionary scientists would often assume the dust disk formed at about the same time as the star, though astronomers were not present to observe such events in the past.
Some creation scientists might argue that stars could not form after the Creation Week. However, others would say that stars could form after the Creation Week, but would argue that the naturalistic origins theories accepted today are not adequate explanations of the process. It is true that stars and other objects we have not seen before become visible to us all the time. There are a number of scientific reasons why scientists may see a star today that could not have been seen just days or weeks earlier in the same region of the sky. In the case of these Herbig Ae/Be stars, they simply were not observed before.
The recent observations of the six Herbig Ae/Be stars showed that for two cases gas was falling into the star, and, for the other four, gas was moving outward away from the star or from a disk around the star. Stars go through a variety of stages as they age. In some of these stages there are particularly strong stellar winds made up of charged particles that flow outward from the star, driving gas away from the star. However some stars are “quieter” so that gas is more likely to be pulled into the star by gravity. Either of these processes is possible in a creation view, so these observations are not surprising.
From a creation viewpoint, the interesting questions raised by these observations are about the age of the disks and which came first, the star or the disk. What was created in the Creation Week? Was it the star, the disk, or were both created by God at the same time? Was the star formed out of the disk at creation, though perhaps in a supernatural manner? Young-universe creationist scientists research these questions and have various opinions. It is important to note that just because gas is observed falling into the star, this does not necessarily mean that the disk had anything to do with the formation of the star.
If some disks formed from collisions since creation, these disks would be very young in age and limited in size. On the other hand, if the disks were created in the Creation Week, they would still be only several thousand years old. An age of thousands of years means that the amount of change in the disk since the beginning would be limited. This seems to agree with this report about the six Herbig Ae/Be stars, which said that some of the stars had dust present closer to the star than was expected considering the temperature [8]. It is not surprising to find evidence of gas near the star, but these observations suggest there are microscopic dust grains close to the star. Evolutionary scientists would expect that in millions of years, dust very near the star would be driven away or would be vaporized.
So, a question raised is why have the dust particles close to the star not evaporated when it is more than hot enough to vaporize them. This suggests the disks are very young indeed. To evolutionary scientists, the dust grains near the star would be perhaps hundreds of thousands to millions of years old. Over those kinds of time scales the dust could not still be so close to the star unless something keeps it from being too hot, e.g., gas shielding the dust from the star’s light. This is an example of how scientists assume processes they have not observed are at work in order to explain how the observed dust could still be present. Instead, why not consider the star and the disks to be only several thousand years old, then many of the difficulties of explaining the dust disks disappear.
Andrea Thompson, “Infant Stars Caught in Act of Feeding,” Fox News.
Amazing Old Stars Give Birth Again, Space.com.
Two Planets Suffer Violent Collision, ScienceDaily.com.
B. Zuckerman, Francis C. Fekel, Michael H. Williamson, Gregory W. Henry, M. P. Muno, Planetary systems around close binary stars: the case of the very dusty, Sun-like, spectroscopic binary BD+20 307, Astrophysical Journal (in press for December 10, 2008), also available at arxiv.org/abs/0808.1765.
Spencer, W., “The Existence and Origin of Extrasolar Planets,” TJ 15 no. 1 (2001):17–25.
S. Kraus et. al., “The Origin of Hydrogen Line Emission for Five Herbig Ae/Be Stars Spatially Resolved by VLTI/AMBER Spectro-interferometry,” Astronomy and Astrophysics 489 (2008):1157–1173. | {'timestamp': '2019-04-19T04:31:55Z', 'url': 'https://answersingenesis.org/astronomy/stars/star-formation-and-creation/', 'language': 'en', 'source': 'c4'} |
high_school_physics | 209,299 | 16.373484 | 1 | y = nanmean(X) returns the mean of the elements of X, computed after removing all NaN values.
If X is a vector, then nanmean(X) is the mean of all the non-NaN elements of X.
If X is a matrix, then nanmean(X) is a row vector of column means, computed after removing NaN values.
If X is a multidimensional array, then nanmean operates along the first nonsingleton dimension of X. The size of this dimension becomes 1 while the sizes of all other dimensions remain the same. nanmean removes all NaN values.
For information on how nanmean treats arrays of all NaN values, see Tips.
y = nanmean(X,'all') returns the mean of all elements of X, computed after removing NaN values.
y = nanmean(X,dim) returns the mean along the operating dimension dim of X, computed after removing NaN values.
y = nanmean(X,vecdim) returns the mean over the dimensions specified in the vector vecdim. The function computes the means after removing NaN values. For example, if X is a matrix, then nanmean(X,[1 2]) is the mean of all non-NaN elements of X because every element of a matrix is contained in the array slice defined by dimensions 1 and 2.
Find the column means for matrix data with missing values.
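A minimal MATLAB sketch consistent with this description (the specific matrix and the positions of the missing values are assumptions):

```matlab
X = magic(3);        % 3-by-3 matrix of data
X([1 6 9]) = NaN;    % introduce missing values (linear indexing)
% X =
%    NaN     1     6
%      3     5     7
%    NaN   NaN     2

y = nanmean(X)       % column means, NaNs removed first
% y =
%      3     3     5
```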
Find the mean of all the values in an array, ignoring missing values.
Create a 2-by-5-by-3 array X with some missing values.
Find the mean of the elements of X.
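A sketch of code matching this narration (the values placed in X, and which elements are set to NaN, are assumptions):

```matlab
X = reshape(1:30, [2 5 3]);   % 2-by-5-by-3 array
X([4 11 23]) = NaN;           % some missing values

y = nanmean(X, 'all')         % single mean over all non-NaN elements
% y = 15.8148
```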
Find the row means for matrix data with missing values by specifying to compute the means along the second dimension.
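For instance (a sketch reusing the matrix assumed in the first example):

```matlab
X = magic(3);
X([1 6 9]) = NaN;

y = nanmean(X, 2)    % operate along dimension 2, i.e. row means
% y =
%    3.5000
%    5.0000
%    2.0000
```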
Find the mean of a multidimensional array over multiple dimensions.
Find the mean of each page of X by specifying dimensions 1 and 2 as the operating dimensions.
For example, ypage(1,1,1) is the mean of the non-NaN elements in X(:,:,1).
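A sketch consistent with this description (the array contents are assumptions):

```matlab
X = reshape(1:12, [2 2 3]);   % three 2-by-2 pages
X(1,1,1) = NaN;

ypage = nanmean(X, [1 2])     % 1-by-1-by-3 array: one mean per page
% ypage(:,:,1) = 3,  ypage(:,:,2) = 6.5,  ypage(:,:,3) = 10.5
```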
Find the mean of the elements in each X(i,:,:) slice by specifying dimensions 2 and 3 as the operating dimensions.
For example, yrow(2) is the mean of the non-NaN elements in X(2,:,:).
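A self-contained sketch (same assumed array as above, redefined here):

```matlab
X = reshape(1:12, [2 2 3]);
X(1,1,1) = NaN;

yrow = nanmean(X, [2 3])      % 2-by-1 vector: one mean per X(i,:,:) slice
% yrow =
%     7
%     7
```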
Input data, specified as a scalar, vector, matrix, or multidimensional array.
If X is an empty array, then nanmean(X) is NaN. For more details, see Tips.
Dimension to operate along, specified as a positive integer scalar. If you do not specify a value, then the default value is the first array dimension whose size does not equal 1.
dim indicates the dimension whose length reduces to 1. size(y,dim) is 1 while the sizes of all other dimensions remain the same.
If dim is equal to 1, then nanmean(X,1) returns a row vector containing the mean for each column.
If dim is equal to 2, then nanmean(X,2) returns a column vector containing the mean for each row.
If dim is greater than ndims(X) or if size(X,dim) is 1, then nanmean returns X.
Vector of dimensions, specified as a positive integer vector. Each element of vecdim represents a dimension of the input array X. The output y has length 1 in the specified operating dimensions. The other dimension lengths are the same for X and y.
For example, if X is a 2-by-3-by-3 array, then nanmean(X,[1 2]) returns a 1-by-1-by-3 array. Each element of the output is the mean of the elements on the corresponding page of X.
Mean values, returned as a scalar, vector, matrix, or multidimensional array.
When nanmean computes the mean of an array of all NaN values, the array is empty once the NaN values are removed and, therefore, the sum of the remaining elements is 0. Because the mean calculation involves division by 0, the mean value is NaN. The output NaN is not a mean of NaN values.
Instead of using nanmean, you can use the MATLAB® function mean with the input argument nanflag specified as the value 'omitnan'.
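For example (a sketch):

```matlab
A = [1 NaN 3; 4 5 NaN];

nanmean(A)             % ans = 2.5000    5.0000    3.0000
mean(A, 'omitnan')     % same result
```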
The 'all' and vecdim input arguments are not supported.
The dim input argument must be a compile-time constant.
If you do not specify the dim input argument, the working (or operating) dimension can be different in the generated code. As a result, run-time errors can occur. For more details, see Automatic dimension restriction (MATLAB Coder). | {'timestamp': '2019-04-19T15:20:48Z', 'url': 'https://www.mathworks.com/help/stats/nanmean.html', 'language': 'en', 'source': 'c4'} |
high_school_physics | 1,475 | 16.371788 | 1 | \section{INTRODUCTION}
Fractal dimension is one of the major themes in Fractal Geometry. Estimation of fractal dimension of sets and graphs has received a lot of attention in the literature \cite{Fal}.
In 1986, Barnsley \cite{MF1} introduced the idea of fractal interpolation functions (FIFs) and computed the Hausdorff dimension of affine FIFs. Estimation of the box dimension for a class of affine FIFs was presented in \cite{MF6,B&H,M&H}.
\par In 1986, Mauldin and Williams \cite{w&l} were the poineers who studied the problem of decomposition of the continuous functions in terms of fractal dimensions. They proved the existence of decomposition of any continuous function on $[0,1]$ into sum of two continuous functions, where each have Hausdorff dimension one. Later in 2000, Wingren \cite{pt} gave a technique to construct the above decomposition of Mauldin and Williams. Moreover, he proved the same type result of Mauldin and Williams for the lower box dimension. Bayart and Heurteaux \cite{BFY} also proved similar result for Hausdorff dimension $\beta=2$, and raised the question for $\beta\in [1,2]$. Recently in 2013, Jia Liu and Jun Wu \cite{LW} solved the question which was raised by Bayart and Haurteaux. More preciesly
they proved that, for any $\beta \in [1,2]$, each continuous function on $[0,1]$ can be decomposed into sum of two continuous functions, where each have Hausdorff dimension $\beta.$ Falconer and Fraser \cite{F&F} found an upper bound for upper box dimension of graph of sum of two continuous functions which depends on dimension of both of graphs.
\par
In \cite{LW,F&F}, it is clear that the Hausdorff dimension of graph of $g+h$ does not depend on the Hausdorff dimension of graph of $g$ and $h$ whereas, the upper box dimension depends on both.
Motivated from this we think about the behaviour of Hausdorff dimension of graph of $g+ih$, whether it depends on the Hausdorff dimension of graphs of $g$ and $h $ or not? We obtained an affirmative answer for this question. Also the upper box dimension of $g+ih$ depends on the upper box dimensions of $g$ and $h$ which is quite different from the upper box dimension of $g+h$. Finally, we studied some relations between fractal dimensions of the graphs of $g(x)+ i h(x)$, $g(x)+ h(x)$ and $(g(x), h(x))$.
\par
The paper is organized as follows. In upcoming section \ref{se2}, we give some preliminary results and required definition for next section. Section \ref{se3} consists of some dimensional results for complex-valued continuous functions and FIFs. In this section, first we establish some propositions and lemmas to form a relation between the fractal dimension of complex-valued and real-valued continuous functions. After that, we determine bound of Hausdorff dimension of $\alpha$-fractal function under some assumption.
We also obtain some conditions under which $\alpha$-fractal function becomes a H\"{o}lder continuous function and bounded variation function, and calculate its fractal dimension.
\section{Preliminaries}\label{se2}
\begin{definition}
Let $F$ be a subset of a metric space $(Y,d)$. The Hausdorff dimension of $F$ is defined as
$$ \dim_H{F}=\inf\{\beta>0: \text{for every}~\epsilon>0,~\text{there is a cover}~\{V_i\}~\text{of}~F~\text{with}~\sum_i |V_i|^\beta<\epsilon \}.$$
\end{definition}
\begin{definition}
The box dimension of a non-empty bounded subset $F$ of a metric space $(Y,d)$ is defined as
$$\dim_{B}F=\lim_{\delta \to 0}\frac{\log{N_{\delta}(F)}}{-\log\delta},$$
where $N_\delta(F)$ is the minimum number of sets of diameter $\delta>0$ that can cover $F.$
If this limit does not exist, then the corresponding upper and lower limits are known as the upper and lower box dimensions, respectively.
\end{definition}
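As an aside, the covering count $N_\delta(F)$ in this definition can be probed numerically for graphs of functions. The following Python sketch (the function names and the sampling scheme are ours, not part of the text) counts occupied $\delta$-mesh squares and fits the slope of $\log N_\delta$ against $-\log\delta$:

```python
import math

def box_count(f, delta, a=0.0, b=1.0, samples_per_box=16):
    """Count the delta-mesh squares met by the graph of f over [a, b]."""
    boxes = set()
    n_cols = int(math.ceil((b - a) / delta))
    for i in range(n_cols):
        for j in range(samples_per_box + 1):
            # sample the column [a + i*delta, a + (i+1)*delta]
            x = min(a + (i + j / samples_per_box) * delta, b)
            boxes.add((i, math.floor(f(x) / delta)))
    return len(boxes)

def box_dimension_estimate(f, deltas):
    """Least-squares slope of log N_delta(G(f)) against -log(delta)."""
    xs = [-math.log(d) for d in deltas]
    ys = [math.log(box_count(f, d)) for d in deltas]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```

For the graph of $f(x)=x$ the fitted slope is close to $1$, consistent with the fact that the graph of a Lipschitz function has box dimension one.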
\begin{definition}
For $r> 0$ and $t\geq0$, let
$P^t_r(F)=\sup\big \{\sum^{}_i|B_i|^t \big\}$, where $\{B_i\}$ is a collection of disjoint balls of radii at most $r$ with centres in $F$.
As $r$ decreases, $P^t_r$ also decreases. Therefore the limit
$$P^t_0(F)= \lim_{r \to 0}P^t_r(F)$$
exists. We define
$$P^t(F)=\inf\bigg\{\sum_iP^t_0(F_i): F\subset\bigcup^{\infty}_{i=1}F_i\bigg\},$$
and it is known as the $t$-dimensional packing measure. Packing dimension is defined as follows:
$$\dim_{P}(F)=\inf\{t\geq 0 : P^{t}(F)=0\}=\sup\{t\geq 0 : P^{t}(F)=\infty\}.$$
\end{definition}
\textbf{Note-} Throughout this paper, we denote the graph of a function $f$ by $G(f)$.
\begin{remark}
$f=f_1+if_2 :[a,b] \to \mathbb{C} $ is a H\"older continuous function with H\"{o}lder exponent $\sigma$ if and only if $f_i$ is a H\"older continuous function with H\"{o}lder exponent $\sigma $ for each $i=1,2$.
\end{remark}
\begin{theorem}\cite{Fal}
Let $f: [0,1] \to \mathbb{R}$ be a H\"{o}lder continuous function with H\"{o}lder exponent $ \sigma\in (0,1)$. Then $\overline{\dim}_B(G(f))\leq 2-\sigma. $
\end{theorem}
\subsection{Iterated Function Systems} Let $(Y, d)$ be a complete metric space, and denote the family of all nonempty compact subsets of $Y$ by $H(Y)$. For any $A_1,A_2\in H(Y)$, we define the Hausdorff metric by
$$h(A_1,A_2) = \inf\{\delta>0 : A_1\subset {A_2}_\delta~~\text{and}~~ A_2 \subset {A_1}_\delta \} ,$$
where ${A_1}_\delta$ and ${A_2}_\delta$ denote the $\delta $-neighbourhood of sets $A_1$ and $A_2$, respectively. It is well-known that $(H(Y),h)$ is a complete metric space.
\\\textbf{Note-} A map $\theta: (Y,d) \to (Y,d) $ is called a contraction if there exists a constant $c<1$ such that
$$d(\theta(a),\theta(b)) \le c~~d(a,b),~~\forall~~~ a , b \in Y.$$
\begin{definition}
The system $\mathcal{I}=\big\{(Y,d); \theta_1,\theta_2,\dots,\theta_N \big\}$ is called an iterated function system (IFS), if each $\theta_i$ is a contraction self-map on $Y$ for $i\in \{1,2,\dots,N\}$.
\end{definition}
\textbf{Note-} Let $\mathcal{I}=\big\{(Y,d); \theta_1,\theta_2,\dots,\theta_N \big\}$ be an IFS. We define a mapping $S$ from $H(Y)$ into $H(Y)$ given by
$$ S(A) = \cup_{i=1}^N \theta_i (A).$$
The map $S$ is a contraction
map under the Hausdorff metric $h$. Since $(Y,d)$ is a complete metric space, $(H(Y),h)$ is complete as well; therefore, by the Banach contraction principle, there exists a unique $E\in H(Y)$ such that $ E = \cup_{i=1}^N \theta_i (E) $, called the attractor of the IFS. We refer the reader to \cite{MF2,Fal}
for details.
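To make the attractor concrete, one can approximate $E=\cup_{i}\theta_i(E)$ by random iteration. The sketch below (a standard ``chaos game''; all identifiers are ours) does this for the middle-third Cantor IFS $\theta_1(x)=x/3$, $\theta_2(x)=x/3+2/3$ on $\mathbb{R}$:

```python
import random

def chaos_game(maps, x0=0.0, n_points=2000, burn_in=50, seed=7):
    """Approximate the attractor E = U_i theta_i(E) by iterating
    randomly chosen contractions (the 'chaos game')."""
    rng = random.Random(seed)
    x, points = x0, []
    for step in range(n_points + burn_in):
        x = rng.choice(maps)(x)
        if step >= burn_in:
            points.append(x)
    return points

# Middle-third Cantor set: theta_1(x) = x/3, theta_2(x) = x/3 + 2/3.
cantor_maps = [lambda x: x / 3, lambda x: x / 3 + 2 / 3]
```

Starting from a point of the attractor, every iterate stays in it, so the sample never enters the removed middle interval $(1/3,2/3)$.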
\begin{definition}
We say that an IFS $\mathcal{I}=\{(Y,d);\theta_1,\theta_2,\dots,\theta_N\}$ satisfies the open set condition (OSC) if there is a non-empty open set $O$ with $\theta_i(O) \subset O~~\forall~i\in \{1,2,\cdots,N\}$ and $ \theta_i(O)\cap \theta_j(O)= \emptyset $ for $i\ne j$. Moreover, if $O \cap E \ne \emptyset,$ where $E$ is the attractor of $\mathcal{I}$, then we say that $\mathcal{I}$ satisfies the strong open set condition (SOSC). If $\theta_i(E)\cap \theta_j(E)=\emptyset$ for $i\ne j$, then we say that the IFS $\mathcal{I}$ satisfies the strong separation condition (SSC).
\end{definition}
\subsection{Fractal Interpolation Functions}
Consider a set of data points $ \{(x_i,y_i)\in \mathbb{R}\times\mathbb{C} : i=1,2,\dots,N\} $ with $x_1<x_2<\dots <x_N$. Set $ T = \{1,2,...,N-1\}$ and $J= [x_1, x_N] .$ For each $ k \in T,$ set $J_k= [x_k, x_{k+1}]$ and let $P_k: J \rightarrow J_k $ be a contractive homeomorphism satisfying
$$ P_k(x_1)=x_k,~~P_k(x_N)=x_{k+1}.$$
For each $ k\in T $, let $\Psi_k: J\times \mathbb{C} \rightarrow \mathbb{C} $ be a continuous map such that
$$ |\Psi_k(t,z_1) - \Psi_k(t,z_2)| \leq \tau_k |z_1- z_2| ,$$
$$ \Psi_k(x_1,y_1)=y_k, \Psi_k(x_N,y_N)=y_{k+1},$$
where $(t,z_1), (t,z_2) \in J\times \mathbb{C} $ and $ 0 \leq \tau_k < 1.$ In particular, we can take for each $k\in T$,
$$P_k(t)=a_k t+ d_k, \quad \Psi_k(t,y) = \alpha_k y + q_k (t).$$
The constants $a_k$ and $d_k$ are uniquely determined by the conditions $ P_k(x_1)=x_k$ and $P_k(x_N)=x_{k+1}.$ The multiplier $\alpha_k$, called the scaling factor, satisfies $-1< \alpha_k <1$, and $q_k:J\rightarrow \mathbb{C}$ is a continuous function such that $q_k(x_1)=y_k-\alpha_k y_1$ and $q_k(x_N)=y_{k+1}-\alpha_k y_N$.
Now for each $k \in T $, we define functions $W_k:J\times \mathbb{C} \rightarrow J\times \mathbb{C} $ by $$W_k(t,y)=\big(P_k(t),\Psi_k(t,y)\big). $$
Then the IFS $\mathcal{J}:=\{J\times \mathbb{C};W_1,W_2,\dots,W_{N-1}\}$ has a unique attractor \cite[Theorem 1]{MF1}, which is the graph of a function $h$ satisfying
the self-referential equation:
$$h(t)= \alpha_k h \big(P_k^{-1}(t) \big)+ q_k \big(P_k^{-1}(t)\big), ~t \in J_k, k \in T.$$
The above function $h$ is known as the fractal interpolation function (FIF).
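The self-referential equation can also be solved numerically: the operator implicit in it is a contraction when $|\alpha_k|<1$, so fixed-point iteration on a grid converges to the FIF. Below is a minimal Python sketch, assuming affine $P_k$ and affine $q_k$ chosen to satisfy the endpoint conditions; all names are ours, not the paper's.

```python
def make_fif(data, alphas, grid_size=257, iterations=40):
    """Solve h(t) = alpha_k * h(P_k^{-1}(t)) + q_k(P_k^{-1}(t)) on J_k
    by fixed-point iteration on a uniform grid (affine q_k)."""
    xs = [p[0] for p in data]
    ys = [p[1] for p in data]
    x1, xN, y1, yN = xs[0], xs[-1], ys[0], ys[-1]
    N = len(data)
    # P_k(t) = a_k t + d_k maps [x1, xN] onto [x_k, x_{k+1}]
    a = [(xs[k + 1] - xs[k]) / (xN - x1) for k in range(N - 1)]
    d = [xs[k] - a[k] * x1 for k in range(N - 1)]

    def q(k, t):
        # Affine q_k with q_k(x1) = y_k - alpha_k*y_1, q_k(xN) = y_{k+1} - alpha_k*y_N.
        u = (t - x1) / (xN - x1)
        return (1 - u) * (ys[k] - alphas[k] * y1) + u * (ys[k + 1] - alphas[k] * yN)

    ts = [x1 + (xN - x1) * i / (grid_size - 1) for i in range(grid_size)]
    h = [0.0] * grid_size

    def interp(t):
        # Piecewise-linear evaluation of the current iterate.
        u = (t - x1) / (xN - x1) * (grid_size - 1)
        i = min(int(u), grid_size - 2)
        frac = u - i
        return (1 - frac) * h[i] + frac * h[i + 1]

    for _ in range(iterations):
        new = []
        for t in ts:
            # Index k of the subinterval J_k containing t.
            k = N - 2 if t >= xs[-2] else max(j for j in range(N - 1) if xs[j] <= t)
            s = (t - d[k]) / a[k]          # s = P_k^{-1}(t)
            new.append(alphas[k] * interp(s) + q(k, s))
        h = new
    return ts, h
```

At the data sites the iterate reproduces the prescribed values $y_i$, which is the interpolation property of the FIF.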
\subsection{$\alpha$-Fractal Functions}
To obtain a class of fractal functions corresponding to a given continuous function on a compact interval in $\mathbb{R}$, we adapt the construction of the FIF. Let $\mathcal{C}(J)$ denote the space of all complex-valued continuous functions on $J=[x_1,x_N]$, equipped with the sup norm. Let $f $ be a given function in $ \mathcal{C}(J)$, known as the germ function. For constructing the IFS, we make the following assumptions:
\begin{enumerate}
\item Let $\Delta:=\{x_1,x_2,\dots,x_N:x_1<x_2<\dots<x_N\}$ be a partition of $J=[x_1,x_N]$.
\item Let $\alpha_k: J \rightarrow \mathbb{C}$ be continuous functions with $\| \alpha_k\|_{\infty}=\max\{|\alpha_k(t)|:t\in J\} < 1$, for all $ k \in T $. These $\alpha_k$ are called the scaling functions and $\alpha= \big(\alpha_1, \alpha_2, \dots, \alpha_{N-1}\big) \in \big(\mathcal{C}(J) \big)^{N-1}$ is called the scaling vector.
\item Let $b:J \rightarrow \mathbb{C}$ be a continuous function, called the base function, such that $b \ne f$, $b(x_1)=f(x_1)$ and $b(x_N)= f(x_N)$.
\end{enumerate}
Motivated by \cite{MF1,MF2}, Navascu\'{e}s \cite{N2} considered the following functions
\begin{equation} \label{BE1}
\begin{split}
P_k(t) =&~ a_k t + d_k, \\
\Psi_k(t,y)=&~ \alpha_k(t) y + f \big( P_k(t)\big) -\alpha_k(t) b(t).
\end{split}
\end{equation}
Then the corresponding IFS $\mathcal{J}:=\{J\times \mathbb{C};W_1,W_2,\dots,W_{N-1}\}$, where
$$W_k(t,y) = \Big(P_k(t), \Psi_k(t,y)\Big),$$
has a unique attractor, which is the graph of a continuous function $f_{\Delta,b}^{\alpha}: J \rightarrow \mathbb{C}$ such that $f_{\Delta,b}^{\alpha}(x_n)= f(x_n)$ for $n=1,2,\dots,N$. For simplicity, we denote $f_{\Delta,b}^{\alpha}$ by $f^\alpha$. In the real-valued setting, $f^\alpha$ is widely known as the $\alpha $-fractal function, see, for instance, \cite{VV,JV1,TV,SS2,JCN}. Moreover, $f^{\alpha}$ satisfies the following equation
\begin{equation}\label{FractalEquation}
f^{\alpha}(t)= f(t)+\alpha_k(P_k^{-1}(t))\cdot(f^{\alpha}- b)\big(P_k^{-1}(t)\big)~~~~\forall~~ t \in J_k,~~ k \in T.
\end{equation}
We can also treat $f^{\alpha}$ as a ``fractal perturbation'' of $f$.
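Equation (\ref{FractalEquation}) likewise lends itself to fixed-point iteration. The sketch below is our own illustration (with constant scaling factors rather than variable ones, and all names ours): it computes the fractal perturbation on a grid; with zero scaling it returns the germ $f$ itself, and at the knots it agrees with $f$.

```python
def fractal_perturbation(f, b, alpha, knots, grid_size=257, iterations=60):
    """Iterate f_a(t) = f(t) + alpha_k * (f_a - b)(P_k^{-1}(t)) for t in J_k
    (constant scalings alpha_k), starting from f_a = f on a uniform grid."""
    x1, xN = knots[0], knots[-1]
    N = len(knots)
    a = [(knots[k + 1] - knots[k]) / (xN - x1) for k in range(N - 1)]
    d = [knots[k] - a[k] * x1 for k in range(N - 1)]
    ts = [x1 + (xN - x1) * i / (grid_size - 1) for i in range(grid_size)]
    h = [f(t) for t in ts]

    def interp(vals, t):
        # Piecewise-linear evaluation of the current iterate.
        u = (t - x1) / (xN - x1) * (grid_size - 1)
        i = min(int(u), grid_size - 2)
        fr = u - i
        return (1 - fr) * vals[i] + fr * vals[i + 1]

    for _ in range(iterations):
        new = []
        for t in ts:
            # Index k of the subinterval J_k containing t.
            k = N - 2 if t >= knots[-2] else max(j for j in range(N - 1) if knots[j] <= t)
            s = (t - d[k]) / a[k]          # s = P_k^{-1}(t)
            new.append(f(t) + alpha[k] * (interp(h, s) - b(s)))
        h = new
    return ts, h
```

With matching base function ($b(x_1)=f(x_1)$, $b(x_N)=f(x_N)$), the computed perturbation interpolates $f$ at the knots, mirroring $f^{\alpha}(x_n)=f(x_n)$.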
\section{Main Theorems}\label{se3}
In the following lemma, we relate the Hausdorff dimension of the graph of a complex-valued continuous function to those of its real and imaginary parts.
\begin{lemma}\label{new222}
Suppose $f: [a,b] \rightarrow \mathbb{C}$ is a continuous function and $g,h:[a,b] \to \mathbb{R}$ are the real and imaginary parts of $f$, respectively, that is, $f= g +ih $. Then we have
\begin{itemize}
\item[(1)] \label{t1} $\dim_H (\text{G}(g+i h))\geq \max\{\dim_H (\text{G}(g)),\dim_H (\text{G}(h)) \}.$
\item[(2)] $\dim_H (\text{G}(g+i h)) = \dim_H( \text{G}(g))$, provided the imaginary part $ h$ is Lipschitz.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[(1)] Let us define a mapping $\Phi : G(f) \to G(g)$ as follows
$$\Phi(x,g(x)+i h(x) )=(x,g(x)).$$
We aim to show that $\Phi$ is a Lipschitz mapping. Using simple properties of the norm, it follows that
\begin{align*}
\|\Phi&(x_1,g(x_1)+i h(x_1) )-\Phi(x_2,g(x_2)+i h(x_2) )\|^2\\
=~&\|(x_1,g(x_1))-(x_2,g(x_2))\|^2\\
=~&|x_1-x_2|^2 +|g(x_1)-g(x_2)|^2\\
\leq~&|x_1-x_2|^2 +|g(x_1)-g(x_2)|^2
+|h(x_1)-h(x_2)|^2\\
=~&\|(x_1,g(x_1)+i h(x_1) )-(x_2,g(x_2)+i h(x_2) )\|^2.
\end{align*}
That is, $\Phi$ is a Lipschitz map. Now, Lipschitz invariance property of Hausdorff dimension yields $$\dim_H (\text{G}(f))\geq \dim_H (\text{G}(g)).$$
On similar lines, we obtain $$\dim_H (\text{G}(f))\geq \dim_H (\text{G}(h)).$$
Combining both of the above inequalities, we get
$$\dim_H (\text{G}(f))\geq \max\{\dim_H (\text{G}(g)),\dim_H (\text{G}(h)) \},$$
completing the proof of item (1).
\item[(2)] In this part, we continue our proof with the same mapping $\Phi : G(f) \to G(g),$ defined by
$$\Phi(x,g(x)+i h(x) )=(x,g(x)).$$
Here our aim is to show that $\Phi$ is a bi-Lipschitz map.
From Part (1) of Lemma \ref{t1}, it is obvious that $\Phi$ is a Lipschitz map.
Moreover,
\begin{align*}
\|(x_1&,g(x_1)+i h(x_1) )-(x_2,g(x_2)+i h(x_2) )\|^2\\
=~& |x_1-x_2|^2 +|g(x_1)-g(x_2)|^2
+|h(x_1)-h(x_2)|^2\\
\leq~&|x_1-x_2|^2 +C_1^2 |x_1-x_2|^2
+|g(x_1)-g(x_2)|^2\\
\leq~& (1+C_1^2)\{\ |x_1-x_2|^2+|g(x_1)-g(x_2)|^2 \}\\=~&(1+C_1^2)\|(x_1,g(x_1))-(x_2,g(x_2))\|^2\\
=~& (1+C_1^2) \|\Phi(x_1,g(x_1)+i h(x_1) )-\Phi(x_2,g(x_2)+i h(x_2) )\|^2.
\end{align*}
Therefore, $\Phi$ is a bi-Lipschitz map. In light of the bi-Lipschitz invariance property of the Hausdorff dimension, we get
$$\dim_H(\text{G}(f))= \dim_H (\text{G}(g)),$$
which completes the proof.
\end{itemize}
\end{proof}
Now, we will present some results similar to the above in terms of other dimensions.
\begin{proposition}
Suppose $f: [a,b] \rightarrow \mathbb{C}$ is a continuous function and $g,h:[a,b] \to \mathbb{R}$ are the real and imaginary parts of $f$, respectively, that is, $f= g +ih $. Then we have
$$\dim_P (\text{G}(g+i h))\geq \max\{\dim_P (\text{G}(g)),\dim_P (\text{G}(h)) \},$$ $$\overline{\dim}_B (\text{G}(g+i h))\geq \max\{\overline{\dim}_B (\text{G}(g)),\overline{\dim}_B (\text{G}(h)) \},$$ $$\underline{\dim}_B (\text{G}(g+i h))\geq \max\{\underline{\dim}_B (\text{G}(g)),\underline{\dim}_B (\text{G}(h)) \}.$$
\end{proposition}
\begin{proof}
The proof is similar to that of part (1) of Lemma \ref{t1}, hence we omit it.
\end{proof}
\begin{proposition} Suppose $f: [a,b] \rightarrow \mathbb{C}$ is a continuous function and $g,h:[a,b] \to \mathbb{R}$ are the real and imaginary parts of $f$, respectively, that is, $f= g +ih $. If $h$ is a Lipschitz function, then we have
$$\dim_P(\text{G}(f))= \dim_P (\text{G}(g)),\overline{\dim}_B(\text{G}(f))= \overline{\dim}_B (\text{G}(g)),$$ and
$$\underline{\dim}_B(\text{G}(f))= \underline{\dim}_B (\text{G}(g)).$$
\end{proposition}
\begin{proof}
The proof is similar to that of part (2) of Lemma \ref{t1}, hence omitted.
\end{proof}
\begin{lemma}
Suppose $f: [a,b] \rightarrow \mathbb{C}$ is a continuous function and $g,h:[a,b] \to \mathbb{R}$ are the real and imaginary parts of $f$, respectively, that is, $f= g +ih .$ If $h$ is a Lipschitz function on $[a,b]$, then
$$\dim_H (\text{G}(g+i h)) =\dim_H (\text{G}(g+ h))=\dim_H (\text{G}(g, h))= \dim_H( \text{G}(g)),$$
$$\overline{\dim}_B (\text{G}(g+i h)) =\overline{\dim}_B (\text{G}(g+ h))=\overline{\dim}_B (\text{G}(g, h))= \overline{\dim}_B( \text{G}(g)),$$
$$\dim_P (\text{G}(g+i h)) =\dim_P (\text{G}(g+ h))=\dim_P (\text{G}(g, h))= \dim_P( \text{G}(g)).$$
\end{lemma}
\begin{proof}
The mapping $\Phi :G(g+h)\to G(g) $ defined by
$$\Phi(x,g(x)+h(x))=(x,g(x))$$
is a bi-Lipschitz map; this follows as in the proof of Lemma \ref{t1}, using the Lipschitz continuity of $h$. The bi-Lipschitz invariance property of the Hausdorff dimension then gives
\begin{equation}\label{e1}
\dim_H (\text{G}(g+ h))=\dim_H( \text{G}(g)).
\end{equation}
Similarly, the mapping $\Phi :G(g,h)\to G(g) $ defined by $$\Phi(x,(g(x),h(x)))=(x,g(x))$$ is a bi-Lipschitz map, as in part (2) of Lemma \ref{t1}. By the bi-Lipschitz invariance property of the Hausdorff dimension, we get
\begin{equation}\label{e2}
\dim_H (\text{G}(g, h))=\dim_H( \text{G}(g)).
\end{equation}
Further, by Lemma \ref{t1}, Equation \ref{e1} and Equation \ref{e2}, we get
$$\dim_H (\text{G}(g+i h)) =\dim_H (\text{G}(g+ h))=\dim_H (\text{G}(g, h))= \dim_H( \text{G}(g)).$$
Since upper box dimension, lower box dimension and packing dimension satisfy bi-Lipschitz invariance property, the rest follows.
\end{proof}
\begin{lemma}\label{l2}
Suppose $g,h :[a,b]\to \mathbb{R}$ are continuous functions. Then $g+ih :[a,b]\to \mathbb{C}$ and $(g,h):[a,b] \to \mathbb{R}^2$ are continuous functions, and $\dim_H G({g+ih})= \dim_H G{(g,h)}$.
\end{lemma}
\begin{proof}
Let us define a mapping $\Phi:G({g+ih})\to G{(g,h)}$ as follows
$$\Phi(x,g(x)+ih(x))=(x,(g(x),h(x))).$$ We aim to show that $\Phi$ is a bi-Lipschitz map. Performing simple calculations, we have
\begin{align*}
\|\Phi&(x_1,g(x_1)+ih(x_1))-\Phi(x_2,g(x_2)+ih(x_2))\|^2\\=
&\|(x_1,(g(x_1),h(x_1)))-(x_2,(g(x_2),h(x_2)))\|^2\\=&|x_1-x_2|^2 +|g(x_1)-g(x_2)|^2
+|h(x_1)-h(x_2)|^2\\=&\|(x_1,g(x_1)+ih(x_1))-(x_2,g(x_2)+ih(x_2))\|^2.
\end{align*}
Therefore, $\Phi$ is a bi-Lipschitz map.
By the bi-Lipschitz invariance property of the Hausdorff dimension, we get $$\dim_H G({g+ih})= \dim_H G{(g,h)}.$$
Since the upper box, lower box and packing dimensions also satisfy the bi-Lipschitz invariance property, the corresponding equalities hold for them as well, which completes the proof.
\end{proof}
\begin{remark}
The Peano space-filling curve $\textbf{g}:[0,1] \to [0,1] \times[0,1]$ is $\frac{1}{2}$-H\"older continuous; see \cite{Kono} for details. Its component functions satisfy $\dim_H G({g_1}) =\dim_H G({g_2})= 1.5 $. On the other hand, we have $\dim_H G(\textbf{g}\big) \geq 2.$ Now consider the complex-valued mapping $f(x)=g_1(x)+ i g_2(x)$; using Lemma \ref{l2}, $\dim_H G(f) \geq 2.$ This example shows that an upper bound for the Hausdorff dimension of the graph of a complex-valued function cannot be expressed in terms of its H\"{o}lder exponent, as is done for real-valued functions.
\end{remark}
\par
From the above, it is clear that dimensional results for complex-valued and real-valued functions differ. Now we are ready to give some dimensional results for complex-valued fractal interpolation functions.
\par
Define a metric $D$ on $ J\times \mathbb{C}$ by
$$D((t_1,z_1),(t_2,z_2))= |t_1-t_2|+|z_1-z_2|~~~~~~~\forall~~(t_1,z_1),(t_2,z_2)\in J\times \mathbb{C}.$$
Then $\big( J\times \mathbb{C}, D \big)$ is a complete metric space.
\begin{theorem}
Let $\mathcal{I}:=\{J\times \mathbb{C};W_1,W_2,\dots,W_{N-1}\}$ be the IFS defined in the construction of $f^\alpha$ such that $$ c_k D((t_1,z_1),(t_2,z_2) ) \le D(W_k(t_1,z_1) , W_k(t_2,z_2)) \le C_k D((t_1,z_1),(t_2,z_2)) ,$$ where $(t_1,z_1),(t_2,z_2) \in J\times \mathbb{C}$ and $0 < c_k \le C_k < 1 ~ \forall~ k \in T .$ Then $r \le \dim_H(G(f^{\alpha})) \le R ,$ where $r$ and $R$ are given by $ \sum\limits_{k=1}^{N-1} c_k^{r} =1$ and $ \sum\limits_{k=1}^{N-1} C_k^{R} =1$ respectively.
\end{theorem}
\begin{proof}
For the upper bound of $\dim_H(G(f^{\alpha}))$, follow Proposition $9.6$ in \cite{Fal}. For the lower bound of $\dim_H(G(f^{\alpha}))$, we proceed as follows.
Let $V = (x_1,x_N) \times \mathbb{C}.$ Then $$W_i(V) \cap W_{j}(V)=\emptyset~~\text{for all}~~i\ne j \in T,$$ because $$P_i\big((x_1,x_N)\big) \cap P_j\big((x_1,x_N)\big)=\emptyset ~~~~\forall~~i\ne j \in T.$$ We observe that $ V \cap G(f^{\alpha}) \ne \emptyset,$ which implies that the IFS $\mathcal{I}$ satisfies the SOSC. Hence there exists an index $i\in T^*$ such that $W_i(G(f^{\alpha}))\subset V,$ where $T^*:=\cup_{n \in \mathbb{N}}\{1,2,\dots, N-1\}^n$. We denote $ W_i(G(f^{\alpha}))$ by $ (G(f^{\alpha}))_i$ for any $i\in T^*$. Now, for each $n\in \mathbb{N}$, the sets $\{(G(f^{\alpha}))_{ji}: j \in T^n \}$ are pairwise disjoint. Therefore, for each $n\in \mathbb{N}$, the IFS $\mathcal{L}_n=\{W_{ji}: j \in T^n\}$ satisfies the hypothesis of Proposition $9.7$ in \cite{Fal}. Hence, by Proposition $9.7$ in \cite{Fal}, if $A_n^* $ is the attractor of the IFS $\mathcal{L}_n,$ then $ r_n \le \dim_H(A_n^*)$, where $r_n$ is given by $ \sum_{ j \in T^n} c_{ji}^{r_n} =1.$ Since $A_n^*\subset G(f^{\alpha})$, we get $ r_n \le \dim_H(A_n^*) \le \dim_H(G(f^{\alpha}))$. Suppose, to the contrary, that $ \dim_H(G(f^{\alpha})) < r.$ This implies that $ r_n < r $. Let $ c_{max}=\max\{c_1, c_2, \dots,c_{N-1}\}.$ We have
$$
c_{i}^{- r_n} = \sum_{ j \in T^n} c_{j}^{r_n}\ \ge \sum_{ j \in T^n} c_{j}^{r} c_{j}^{\dim_H(G(f^{\alpha})) -r} \ge \sum_{ j \in T^n} c_{j}^{r} c_{max}^{n(\dim_H(G(f^{\alpha})) - r)}. $$
This implies that $$c_{i}^{- r} \geq c_{max}^{n(\dim_H(G(f^{\alpha})) -r)}. $$
This gives a contradiction for large values of $n\in \mathbb{N} $. Therefore, we get $ \dim_H(G(f^{\alpha})) \ge r,$ proving the assertion.
\end{proof}
\begin{remark}
In \cite{MR}, Roychowdhury estimated the Hausdorff and box dimensions of the attractor of a hyperbolic recurrent iterated function system consisting of bi-Lipschitz mappings under the open set condition, using Bowen's pressure function and a volume argument. Since recurrent iterated function systems generalize iterated function systems, his result covers the present setting. We emphasize that the proof above avoids the pressure function and the volume argument, and it can be generalized to arbitrary complete metric spaces.
\end{remark}
\begin{remark}
This theorem can be compared with Theorem 2.4 in \cite{JV1}.
\end{remark}
The H\"{o}lder space is defined as follows:$$ \mathcal{H}^{\sigma}(J ) := \{h:J \rightarrow \mathbb{C}: ~\text{h is H\"{o}lder continuous with exponent}~ \sigma \} .$$
Note that $(\mathcal{H}^{\sigma}(J),\|.\|_\mathcal{H})$ is a Banach space, where $ \|h\|_{\mathcal{H}}:= \|h\|_{\infty} +[h]_{\sigma}$ and $$[h]_{\sigma} = \sup_{t_1\ne t_2} \frac{|h(t_1)-h(t_2)|}{|t_1-t_2|^{\sigma}}.$$
\begin{theorem}\label{BBVL3}
Let $f, b, \alpha \in \mathcal{H}^{\sigma}(J )$ such that $b(x_1)=f(x_1)$ and $b(x_N)=f(x_N).$ Set $c:= \min\{a_k: k \in T \}$. If $ \frac{\|\alpha\|_{\mathcal{H}}}{c^\sigma}< 1 $, then
$f^{\alpha}$ is H\"{o}lder continuous with exponent $\sigma$.
\end{theorem}
\begin{proof}
Let us define $ \mathcal{H}^{\sigma}_f(J ):= \{ h \in \mathcal{H}^{\sigma}(J ): h(x_1)=f(x_1), ~h(x_N)=f(x_N) \}.$
By standard real-analysis arguments, one sees that $\mathcal{H}^{\sigma}_f(J )$ is a closed subset of $\mathcal{H}^{\sigma}(J).$ Since $(\mathcal{H}^{\sigma}(J),\|.\|_\mathcal{H})$ is a Banach space, $\mathcal{H}^{\sigma}_f(J )$ is a complete metric space with respect to the metric induced by $\|.\|_\mathcal{H}.$ We define a map $S: \mathcal{H}^{\sigma}_f(J) \rightarrow \mathcal{H}^{\sigma}_f(J )$ by $$ (Sh)(t)=f(t)+\alpha_k(P_k^{-1}(t)) ~(h-b)(P_k^{-1}(t)) $$
for all $t \in J_k$, $k \in T.$ We shall show that $S$ is well-defined and a contraction map on $\mathcal{H}^{\sigma}_f(J)$. First,
\begin{equation*}
\begin{split}
[Sh]_\sigma = &\max_{k \in T} \sup_{t_1 \ne t_2, t_1,t_2 \in J_k} \frac{|Sh(t_1)-Sh(t_2)|}{|t_1-t_2|^{\sigma}}\\
\le& \max_{k \in T} \Bigg[ \sup_{t_1 \ne t_2, t_1,t_2 \in J_k} \frac{|f(t_1)-f(t_2)|}{|t_1-t_2|^{\sigma}}\\
& + \sup_{t_1\ne t_2, t_1,t_2 \in J_k} \frac{|\alpha_k(P_k^{-1}(t_1))| \Big|(h-b)(P_k^{-1}(t_1))-(h-b)(P_k^{-1}(t_2))\Big|}{|t_1-t_2|^{\sigma}}\\ & + \sup_{t_1\ne t_2, t_1,t_2 \in J_k} \frac{|(h-b)(P_k^{-1}(t_2))| \Big|\alpha_k(P_k^{-1}(t_1))-\alpha_k(P_k^{-1}(t_2))\Big|}{|t_1-t_2|^{\sigma}}\Bigg]\\
\le & ~[f]_{\sigma}+ \frac{\|\alpha\|_{\infty}}{c^{\sigma}} \big( [h]_{\sigma}+[b]_{\sigma} \big)+ \frac{\|h-b\|_{\infty}}{c^{\sigma}} [\alpha]_{\sigma},
\end{split}
\end{equation*}
where $[\alpha]_{\sigma}= \max\limits_{k \in T} \sup\limits_{t_1 \ne t_2, t_1,t_2 \in J} \frac{ |\alpha_k(t_1)-\alpha_k(t_2)|}{|t_1-t_2|^{\sigma}}.$ Let $g, h \in \mathcal{H}^{\sigma}_f(J )$, we have
\begin{equation*}
\begin{aligned}
\|Sg -Sh\|_{\mathcal{H}} &= \|Sg -Sh\|_{\infty} + [Sg-Sh]_{\sigma}\\
&\le \| \alpha \|_{\infty} \|g -h\|_{\infty} + \frac{\|\alpha\|_{\infty}}{c^{\sigma}} [g-h]_{\sigma}+\frac{\|g-h\|_{\infty}}{c^{\sigma}} [\alpha]_{\sigma}\\
&\le \frac{\|\alpha\|_{\mathcal{H}}}{c^{\sigma}} \| g-h\|_{\mathcal{H}}.
\end{aligned}
\end{equation*}
The first estimate shows that $S$ maps $\mathcal{H}^{\sigma}_f(J )$ into itself, so $S$ is a well-defined map on $\mathcal{H}^{\sigma}_f(J )$.
Since $\frac{\|\alpha\|_{\mathcal{H}}}{c^{\sigma}} < 1$, the second estimate shows that $S$ is a contraction map. By the Banach contraction mapping theorem, $S$ has a unique fixed point $f^{\alpha} \in \mathcal{H}^{\sigma}_f(J )$. Hence we are done.
\end{proof}
\begin{theorem}\label{mainthm}
Let germ function $f$, base function $b$ and scaling function $ \alpha_j$ be complex-valued functions such that
\begin{equation}\label{Hypo}
\begin{aligned}
& |f(t_1) -f(t_2)| \le l_f |t_1-t_2 |^{\sigma},\\&
|b(t_1) -b(t_2)| \le l_b |t_1-t_2|^{\sigma},\\&
|\alpha_j(t_1) -\alpha_j(t_2)| \le l_{\alpha} |t_1-t_2|^{\sigma}
\end{aligned}
\end{equation}
for each $t_1,t_2 \in J$, $j \in T,$ and for some $l_f, l_b, l_{\alpha} > 0$, $\sigma\in (0,1]$. Let $f_1,f_2$ be the components of $f$; $b_1,b_2$ the components of $b$; $\alpha_j^{1},\alpha_j^{2}$ the components of $\alpha_j$; and $f^{\alpha}_1,f^{\alpha}_2$ the components of $f^{\alpha}$.
Also, consider constants $l_{f_i}, \delta_0> 0$ such that for all $t_1 \in J $ and $\delta < \delta_0$, there exists $t_2\in J $ with $|t_1-t_2| \le \delta$ and $$ |f_i(t_1)-f_i(t_2)| \ge l_{f_i} |t_1 -t_2|^{\sigma}~~\text{ for}~ i\in\{1,2\} .$$
If $ \|\alpha\|_{\mathcal{H}}< c^\sigma ~\min\Big\{1,\frac{l_{f_1}-2(\|b\|_{\infty}+M)l_{\alpha}c^{-\sigma} }{2(k_{f,b,\alpha}+l_b)},\frac{l_{f_2}-2(\|b\|_{\infty}+M)l_{\alpha}c^{-\sigma} }{2(k_{f,b,\alpha}+l_b)}\Big\},$ where $M:=\|f^{\alpha}\|_{\infty}$, then we have $$1 \le \dim_H\big(G{(f^{\alpha}_i)}\big) \le \dim_B\big(G{(f^{\alpha}_i)}\big) = 2 - \sigma~~~\text{for}~~i=1,2.$$ Moreover, $1 \le \dim_H\big(G{(f^{\alpha})}\big)$ and $\dim_B\big(G{(f^{\alpha})}\big) \ge 2 - \sigma.$
\end{theorem}
\begin{proof}
Since $ \|\alpha\|_{\mathcal{H}}< c^\sigma$, Theorem \ref{BBVL3} yields that the fractal version $f^{\alpha}$ of $f$ is H\"older continuous with exponent $\sigma$. Consequently, $$|f^{\alpha}_i(t_1)-f^{\alpha}_i(t_2)|\leq |f^{\alpha}(t_1)-f^{\alpha}(t_2)| \le k_{f,b,\alpha} |t_1-t_2|^{\sigma}$$ for some $ k_{f,b,\alpha}> 0$ and for $i=1,2.$
First, we derive an upper bound for the upper box dimension of $G(f^{\alpha}_i) $ for $i=1,2$, as follows.
For $ \delta \in (0,1) $, let $m$ be the smallest natural number greater than or equal to $\frac{1}{\delta}$, and let $N_{\delta}(G({f^{\alpha}_i}))$ be the number of squares of the $\delta$-mesh that intersect $G(f^{\alpha}_i)$. Writing $R_{f^{\alpha}_i}[t_1,t_2]$ for the maximum range of $f^{\alpha}_i$ over $[t_1,t_2]$, we have
\begin{equation}
\begin{aligned}
N_{\delta}(G({f^{\alpha}_i})) & \le 2m+ \sum_{r=0}^{m-1} \Bigg( { \frac{R_{f^{\alpha}_i}[(r\delta,(r+1)\delta]}{\delta} } \Bigg)\\& \leq 2 \bigg({\frac{1}{\delta}}+1\bigg) + \sum_{r=0}^{m-1} \frac{R_{f^{\alpha}_i}[r\delta,(r+1)\delta]}{\delta}\\& \leq 2 \bigg({\frac{1}{\delta}}+1\bigg)+ \sum_{r=0}^{m-1} k_{f,b,\alpha} \delta^{\sigma -1} .
\end{aligned}
\end{equation}
From this, we conclude that
$$\overline{\dim}_B\big(G(f^{\alpha}_i)\big) =\varlimsup_{\delta \rightarrow 0} \frac{\log N_{\delta}(G(f^{\alpha}_i))}{- \log \delta}\le 2- \sigma~~~~~\forall~~~ i=1,2.$$
Next, we prove that $\underline{\dim}_B\big(G(f^{\alpha}_i)\big) \ge 2 - \sigma$ for $i=1,2$. For this, using the self-referential equation (\ref{FractalEquation}), we can write
\begin{equation}\label{new333}
\begin{aligned}
f^{\alpha}_1(t)=&f_1(t)+\alpha_k^1\big(P_k^{-1}(t)\big) \big[f^{\alpha}_1\big(P_k^{-1}(t)\big) - b_1\big(P_k^{-1}(t)\big)\big]\\&-\alpha_k^2\big(P_k^{-1}(t)\big) \big[f^{\alpha}_2\big(P_k^{-1}(t)\big)- b_2\big(P_k^{-1}(t)\big)\big]
\end{aligned}
\end{equation}
for every $t \in J_k $ and $k \in T.$
Let $ t_1 ,t_2 \in J_k $ such that $|t_1-t_2| \le \delta.$ From Equation \ref{new333}, we have
\begin{align*}
|f^{\alpha}_1(t_1)- f^{\alpha}_1(t_2)| =& \Big| f_1(t_1)-f_1(t_2) \\&+ \alpha_k^1\big(P_k^{-1}(t_1)\big) ~ f^{\alpha}_1\big(P_k^{-1}(t_1)\big) - \alpha_k^1\big(P_k^{-1}(t_2)\big) ~ f^{\alpha}_1\big(P_k^{-1}(t_2)\big)\\& - \alpha_k^1\big(P_k^{-1}(t_1\big) ~ b_1\big(P_k^{-1}(t_1)\big) + \alpha_k^1\big(P_k^{-1}(t_2)\big) ~ b_1\big(P_k^{-1}(t_2)\big) \\&- \alpha_k^2\big(P_k^{-1}(t_1)\big) ~ f^{\alpha}_2\big(P_k^{-1}(t_1)\big) + \alpha_k^2\big(P_k^{-1}(t_2)\big) ~ f^{\alpha}_2\big(P_k^{-1}(t_2)\big)\\& + \alpha_k^2\big(P_k^{-1}(t_1)\big) ~ b_2\big(P_k^{-1}(t_1)\big) - \alpha_k^2\big(P_k^{-1}(t_2)\big) ~ b_2\big(P_k^{-1}(t_2)\big) \Big|\\
\ge & | f_1(t_1)-f_1(t_2)| - \|\alpha\|_{\infty} ~ \Big|f^{\alpha}_1\big(P_k^{-1}(t_1)\big) - f^{\alpha}_1\big(P_k^{-1}(t_2)\big) \Big|\\& - \|\alpha\|_{\infty} ~ \Big|b_1\big(P_k^{-1}(t_1)\big) - b_1\big(P_k^{-1}(t_2)\big) \Big| - \big(\|b\|_{\infty}+\|f^{\alpha}\|_{\infty}\big)\\& ~\Big|\alpha_k^1\big(P_k^{-1}(t_1)\big)-\alpha_k^1\big(P_k^{-1}(t_2)\big)
\Big| -\|\alpha\|_{\infty} ~ \Big|f^{\alpha}_2\big(P_k^{-1}(t_1)\big) \\&- f^{\alpha}_2\big(P_k^{-1}(t_2)\big) \Big| - \|\alpha\|_{\infty} ~ \Big|b_2\big(P_k^{-1}(t_1)\big) - b_2\big(P_k^{-1}(t_2)\big) \Big|\\& - \big(\|b\|_{\infty}+\|f^{\alpha}\|_{\infty}\big) ~\Big|\alpha_k^2\big(P_k^{-1}(t_1)\big)-\alpha_k^2\big(P_k^{-1}(t_2)\big)
\Big|.
\end{align*}
With the help of Equation (\ref{Hypo}), we get
\begin{equation*}
\begin{aligned}
|f^{\alpha}_1(t_1)- f^{\alpha}_1(t_2)|
\ge & ~l_{f_1} | t_1-t_2|^{\sigma}-2\|\alpha\|_{\infty} ~ k_{f,b,\alpha}\Big|P_k^{-1}(t_1) - P_k^{-1}(t_2) \Big|^{\sigma}\\& -2 \|\alpha\|_{\infty}~ l_b \Big|P_k^{-1}(t_1) - P_k^{-1}(t_2) \Big|^{\sigma}\\&-2 \big(\|b\|_{\infty}+M\big) l_{\alpha} ~\Big|P_k^{-1}(t_1) - P_k^{-1}(t_2) \Big|^{\sigma}\\
\ge & ~l_{f_1} | t_1-t_2|^{\sigma}-2\|\alpha\|_{\infty} ~ k_{f,b,\alpha} c^{-\sigma} | t_1-t_2|^{\sigma}\\& - 2\|\alpha\|_{\infty} ~ l_b c^{-\sigma} | t_1-t_2|^{\sigma}\\&- 2\big(\|b\|_{\infty}+M\big)c^{-\sigma} l_{\alpha} ~|t_1-t_2|^{\sigma}\\
= & ~\Big(l_{f_1}-2(k_{f,b,\alpha}+ l_b)\|\alpha\|_{\infty} c^{-\sigma}-2\big(\|b\|_{\infty}+M\big)c^{-\sigma} l_{\alpha}\Big) | t_1-t_2|^{\sigma}.
\end{aligned}
\end{equation*}
Set $L:=l_{f_1}-2(k_{f,b,\alpha}+ l_b)\|\alpha\|_{\infty} c^{-\sigma}-2\big(\|b\|_{\infty}+M\big)c^{-\sigma} l_{\alpha}.$ Then, by the given condition, $L > 0 $. Let $\delta= c^{n}$ for $n\in \mathbb{N}$, and let $w$ be the smallest natural number greater than or equal to $\frac{1}{c^n}$. We estimate
\begin{equation*}
\begin{aligned}
N_{\delta}(G(f^{\alpha}_1)) &\ge \sum_{r=0}^{w} \max\Big\{1, \big( c^{-n} R_{f^{\alpha}_1}[r\delta,(r+1)\delta]\big ) \Big\}\\
& \ge \sum_{r=0}^{w} { \big( c^{-n} R_{f^{\alpha}_1}[r\delta,(r+1)\delta] \big)} \\ & \ge \sum_{r=0}^{w } {L c^{-n} c^{n \sigma }}\\
& \ge L c^{n( \sigma-2)} .
\end{aligned}
\end{equation*}
By using the above inequality for $N_{\delta}(G(f^{\alpha}_1))$, we obtain
\begin{equation*}
\begin{aligned}
\underline{\dim}_B\big(G(f^{\alpha}_1)\big) =\varliminf_{\delta \rightarrow 0}\frac{ \log\Big( N_{\delta}(G(f^{\alpha}_1))\Big)}{- \log (\delta)}
& \ge \varliminf_{n \rightarrow \infty}\frac{ \log\Big(L c^{n(\sigma - 2)} \Big) }{-n \log c}\\ & =
2- \sigma.
\end{aligned}
\end{equation*}
Similarly, we get $$\underline{\dim}_B\big(G(f^{\alpha}_2)\big) \ge 2 - \sigma,$$
establishing the result.
\end{proof}
\begin{definition}
A complex-valued function $ g:J \rightarrow \mathbb{C}$ is said to be of bounded variation if the total variation $V(g,J) $ of $g$, defined by $$V(g,J)= \sup_{Q=(y_0,y_1, \dots,y_m) ~~\text{partition of } J}~ \sum_{i=1}^{m} |g(y_i)-g(y_{i-1})|,$$ is finite.
The space of all bounded variation functions on $J,$ denoted by $\mathcal{BV}(J,\mathbb{C}),$ forms a Banach space with respect to the norm
$\|g\|_{\mathcal{BV}}:= |g(y_0)|+ V(g,J).$
\end{definition}
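For a sampled function, the supremum in this definition restricted to the partition given by the sample points is easy to compute, and for continuous $g$ it increases toward $V(g,J)$ under refinement. A small sketch (the name is ours):

```python
def total_variation(samples):
    """Variation of g over the partition given by its sample points:
    the sum of |g(y_i) - g(y_{i-1})|; complex values are allowed."""
    return sum(abs(v - u) for u, v in zip(samples, samples[1:]))
```

For example, the samples $0,1,0,1$ contribute a variation of $3$, while a monotone run contributes only the total rise.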
\begin{theorem}
Let $f, b\in \mathcal{C}(J,\mathbb{C})\cap \mathcal{BV}(J,\mathbb{C})$ and $\alpha_k\in \mathcal{C}(J,\mathbb{C})\cap \mathcal{BV}(J,\mathbb{C})$ for all $k\in T$, with $\|\alpha\|_{\mathcal{BV}}< \frac{1}{2(N-1)}.$ Then $f^\alpha \in \mathcal{C}(J,\mathbb{C})\cap \mathcal{BV}(J,\mathbb{C})$. Moreover, $\dim_H(G(f^{\alpha}))=\dim_B(G(f^{\alpha}))=1 .$
\end{theorem}
\begin{proof}
Following Theorem \ref{BBVL3} and \cite[Theorem 3.24 ]{JV1}, one may complete the proof.
\end{proof}
\begin{remark}
The above theorem will reduce to \cite[Theorem 3.24]{JV1} when all functions $f,b $ and $\alpha_k$ are real-valued.
\end{remark}
\bibliographystyle{amsplain}
/*
Holder - 2.2 - client side image placeholders
(c) 2012-2013 Ivan Malopinsky / http://imsky.co
Provided under the MIT License.
Commercial use requires attribution.
*/
var Holder = Holder || {};
(function (app, win) {
var preempted = false,
fallback = false,
canvas = document.createElement('canvas');
var dpr = 1, bsr = 1;
var resizable_images = [];
if (!canvas.getContext) {
fallback = true;
} else {
if (canvas.toDataURL("image/png")
.indexOf("data:image/png") < 0) {
//Android doesn't support data URI
fallback = true;
} else {
var ctx = canvas.getContext("2d");
}
}
if(!fallback){
dpr = window.devicePixelRatio || 1,
bsr = ctx.webkitBackingStorePixelRatio || ctx.mozBackingStorePixelRatio || ctx.msBackingStorePixelRatio || ctx.oBackingStorePixelRatio || ctx.backingStorePixelRatio || 1;
}
var ratio = dpr / bsr;
var settings = {
domain: "holder.js",
images: "img",
bgnodes: ".holderjs",
themes: {
"gray": {
background: "#eee",
foreground: "#aaa",
size: 12
},
"social": {
background: "#3a5a97",
foreground: "#fff",
size: 12
},
"industrial": {
background: "#434A52",
foreground: "#C2F200",
size: 12
},
"sky": {
background: "#0D8FDB",
foreground: "#fff",
size: 12
},
"vine": {
background: "#39DBAC",
foreground: "#1E292C",
size: 12
},
"lava": {
background: "#F8591A",
foreground: "#1C2846",
size: 12
}
},
stylesheet: ""
};
app.flags = {
dimensions: {
regex: /^(\d+)x(\d+)$/,
output: function (val) {
var exec = this.regex.exec(val);
return {
width: +exec[1],
height: +exec[2]
}
}
},
fluid: {
regex: /^([0-9%]+)x([0-9%]+)$/,
output: function (val) {
var exec = this.regex.exec(val);
return {
width: exec[1],
height: exec[2]
}
}
},
colors: {
regex: /#([0-9a-f]{3,})\:#([0-9a-f]{3,})/i,
output: function (val) {
var exec = this.regex.exec(val);
return {
size: settings.themes.gray.size,
foreground: "#" + exec[2],
background: "#" + exec[1]
}
}
},
text: {
regex: /text\:(.*)/,
output: function (val) {
return this.regex.exec(val)[1];
}
},
font: {
regex: /font\:(.*)/,
output: function (val) {
return this.regex.exec(val)[1];
}
},
auto: {
regex: /^auto$/
},
textmode: {
regex: /textmode\:(.*)/,
output: function(val){
return this.regex.exec(val)[1];
}
}
}
//getElementsByClassName polyfill
document.getElementsByClassName||(document.getElementsByClassName=function(e){var t=document,n,r,i,s=[];if(t.querySelectorAll)return t.querySelectorAll("."+e);if(t.evaluate){r=".//*[contains(concat(' ', @class, ' '), ' "+e+" ')]",n=t.evaluate(r,t,null,0,null);while(i=n.iterateNext())s.push(i)}else{n=t.getElementsByTagName("*"),r=new RegExp("(^|\\s)"+e+"(\\s|$)");for(i=0;i<n.length;i++)r.test(n[i].className)&&s.push(n[i])}return s})
//getComputedStyle polyfill
window.getComputedStyle||(window.getComputedStyle=function(e){return this.el=e,this.getPropertyValue=function(t){var n=/(\-([a-z]){1})/g;return t=="float"&&(t="styleFloat"),n.test(t)&&(t=t.replace(n,function(){return arguments[2].toUpperCase()})),e.currentStyle[t]?e.currentStyle[t]:null},this})
//http://javascript.nwbox.com/ContentLoaded by Diego Perini with modifications
function contentLoaded(n,t){var l="complete",s="readystatechange",u=!1,h=u,c=!0,i=n.document,a=i.documentElement,e=i.addEventListener?"addEventListener":"attachEvent",v=i.addEventListener?"removeEventListener":"detachEvent",f=i.addEventListener?"":"on",r=function(e){(e.type!=s||i.readyState==l)&&((e.type=="load"?n:i)[v](f+e.type,r,u),!h&&(h=!0)&&t.call(n,null))},o=function(){try{a.doScroll("left")}catch(n){setTimeout(o,50);return}r("poll")};if(i.readyState==l)t.call(n,"lazy");else{if(i.createEventObject&&a.doScroll){try{c=!n.frameElement}catch(y){}c&&o()}i[e](f+"DOMContentLoaded",r,u),i[e](f+s,r,u),n[e](f+"load",r,u)}}
//https://gist.github.com/991057 by Jed Schmidt with modifications
function selector(a){
a=a.match(/^(\W)?(.*)/);var b=document["getElement"+(a[1]?a[1]=="#"?"ById":"sByClassName":"sByTagName")](a[2]);
var ret=[]; b!==null&&(b.length?ret=b:b.length===0?ret=b:ret=[b]); return ret;
}
//shallow object property extend
function extend(a,b){
var c={};
for(var i in a){
if(a.hasOwnProperty(i)){
c[i]=a[i];
}
}
for(var i in b){
if(b.hasOwnProperty(i)){
c[i]=b[i];
}
}
return c;
}
//hasOwnProperty polyfill
if (!Object.prototype.hasOwnProperty)
/*jshint -W001, -W103 */
Object.prototype.hasOwnProperty = function(prop) {
var proto = this.__proto__ || this.constructor.prototype;
return (prop in this) && (!(prop in proto) || proto[prop] !== this[prop]);
}
/*jshint +W001, +W103 */
function text_size(width, height, template) {
height = parseInt(height, 10);
width = parseInt(width, 10);
var bigSide = Math.max(height, width);
var smallSide = Math.min(height, width);
var scale = 1 / 12;
var newHeight = Math.min(smallSide * 0.75, 0.75 * bigSide * scale);
return {
height: Math.round(Math.max(template.size, newHeight))
}
}
function draw(args) {
var ctx = args.ctx;
var dimensions = args.dimensions;
var template = args.template;
var ratio = args.ratio;
var holder = args.holder;
var literal = holder.textmode == "literal";
var exact = holder.textmode == "exact";
var ts = text_size(dimensions.width, dimensions.height, template);
var text_height = ts.height;
var width = dimensions.width * ratio,
height = dimensions.height * ratio;
var font = template.font ? template.font : "Arial,Helvetica,sans-serif";
canvas.width = width;
canvas.height = height;
ctx.textAlign = "center";
ctx.textBaseline = "middle";
ctx.fillStyle = template.background;
ctx.fillRect(0, 0, width, height);
ctx.fillStyle = template.foreground;
ctx.font = "bold " + text_height + "px " + font;
var text = template.text ? template.text : (Math.floor(dimensions.width) + "x" + Math.floor(dimensions.height));
if (literal) {
var dimensions = holder.dimensions;
text = dimensions.width + "x" + dimensions.height;
}
else if(exact && holder.exact_dimensions){
var dimensions = holder.exact_dimensions;
text = (Math.floor(dimensions.width) + "x" + Math.floor(dimensions.height));
}
var text_width = ctx.measureText(text).width;
if (text_width / width >= 0.75) {
text_height = Math.floor(text_height * 0.75 * (width / text_width));
}
//Resetting font size if necessary
ctx.font = "bold " + (text_height * ratio) + "px " + font;
ctx.fillText(text, (width / 2), (height / 2), width);
return canvas.toDataURL("image/png");
}
function render(mode, el, holder, src) {
var dimensions = holder.dimensions,
theme = holder.theme,
text = holder.text ? decodeURIComponent(holder.text) : holder.text;
var dimensions_caption = dimensions.width + "x" + dimensions.height;
theme = (text ? extend(theme, {
text: text
}) : theme);
theme = (holder.font ? extend(theme, {
font: holder.font
}) : theme);
el.setAttribute("data-src", src);
holder.theme = theme;
el.holder_data = holder;
if (mode == "image") {
el.setAttribute("alt", text ? text : theme.text ? theme.text + " [" + dimensions_caption + "]" : dimensions_caption);
if (fallback || !holder.auto) {
el.style.width = dimensions.width + "px";
el.style.height = dimensions.height + "px";
}
if (fallback) {
el.style.backgroundColor = theme.background;
} else {
el.setAttribute("src", draw({ctx: ctx, dimensions: dimensions, template: theme, ratio:ratio, holder: holder}));
if(holder.textmode && holder.textmode == "exact"){
resizable_images.push(el);
resizable_update(el);
}
}
} else if (mode == "background") {
if (!fallback) {
el.style.backgroundImage = "url(" + draw({ctx:ctx, dimensions: dimensions, template: theme, ratio: ratio, holder: holder}) + ")";
el.style.backgroundSize = dimensions.width + "px " + dimensions.height + "px";
}
} else if (mode == "fluid") {
el.setAttribute("alt", text ? text : theme.text ? theme.text + " [" + dimensions_caption + "]" : dimensions_caption);
if (dimensions.height.slice(-1) == "%") {
el.style.height = dimensions.height
} else if(holder.auto == null || !holder.auto){
el.style.height = dimensions.height + "px"
}
if (dimensions.width.slice(-1) == "%") {
el.style.width = dimensions.width
} else if(holder.auto == null || !holder.auto){
el.style.width = dimensions.width + "px"
}
if (el.style.display == "inline" || el.style.display === "" || el.style.display == "none") {
el.style.display = "block";
}
set_initial_dimensions(el)
if (fallback) {
el.style.backgroundColor = theme.background;
} else {
resizable_images.push(el);
resizable_update(el);
}
}
}
function dimension_check(el, callback) {
var dimensions = {
height: el.clientHeight,
width: el.clientWidth
};
if (!dimensions.height && !dimensions.width) {
if (el.hasAttribute("data-holder-invisible")) {
throw new Error("Holder: placeholder is not visible");
} else {
el.setAttribute("data-holder-invisible", true)
setTimeout(function () {
callback.call(this, el)
}, 1)
return null;
}
} else {
el.removeAttribute("data-holder-invisible")
}
return dimensions;
}
function set_initial_dimensions(el){
if(el.holder_data){
var dimensions = dimension_check(el, set_initial_dimensions)
if(dimensions){
var holder = el.holder_data;
holder.initial_dimensions = dimensions;
holder.fluid_data = {
fluid_height: holder.dimensions.height.slice(-1) == "%",
fluid_width: holder.dimensions.width.slice(-1) == "%",
mode: null
}
if(holder.fluid_data.fluid_width && !holder.fluid_data.fluid_height){
holder.fluid_data.mode = "width"
holder.fluid_data.ratio = holder.initial_dimensions.width / parseFloat(holder.dimensions.height)
}
else if(!holder.fluid_data.fluid_width && holder.fluid_data.fluid_height){
holder.fluid_data.mode = "height";
holder.fluid_data.ratio = parseFloat(holder.dimensions.width) / holder.initial_dimensions.height
}
}
}
}
function resizable_update(element) {
var images;
if (element.nodeType == null) {
images = resizable_images;
} else {
images = [element]
}
for (var i in images) {
if (!images.hasOwnProperty(i)) {
continue;
}
var el = images[i]
if (el.holder_data) {
var holder = el.holder_data;
var dimensions = dimension_check(el, resizable_update)
if(dimensions){
if(holder.fluid){
if(holder.auto){
switch(holder.fluid_data.mode){
case "width":
dimensions.height = dimensions.width / holder.fluid_data.ratio;
break;
case "height":
dimensions.width = dimensions.height * holder.fluid_data.ratio;
break;
}
}
el.setAttribute("src", draw({
ctx: ctx,
dimensions: dimensions,
template: holder.theme,
ratio: ratio,
holder: holder
}))
}
if(holder.textmode && holder.textmode == "exact"){
holder.exact_dimensions = dimensions;
el.setAttribute("src", draw({
ctx: ctx,
dimensions: holder.dimensions,
template: holder.theme,
ratio: ratio,
holder: holder
}))
}
}
}
}
}
function parse_flags(flags, options) {
var ret = {
theme: extend(settings.themes.gray, {})
};
var render = false;
for (var sl = flags.length, j = 0; j < sl; j++) {
var flag = flags[j];
if (app.flags.dimensions.match(flag)) {
render = true;
ret.dimensions = app.flags.dimensions.output(flag);
} else if (app.flags.fluid.match(flag)) {
render = true;
ret.dimensions = app.flags.fluid.output(flag);
ret.fluid = true;
} else if (app.flags.textmode.match(flag)) {
ret.textmode = app.flags.textmode.output(flag)
} else if (app.flags.colors.match(flag)) {
ret.theme = app.flags.colors.output(flag);
} else if (options.themes[flag]) {
//If a theme is specified, it will override custom colors
if(options.themes.hasOwnProperty(flag)){
ret.theme = extend(options.themes[flag], {});
}
} else if (app.flags.font.match(flag)) {
ret.font = app.flags.font.output(flag);
} else if (app.flags.auto.match(flag)) {
ret.auto = true;
} else if (app.flags.text.match(flag)) {
ret.text = app.flags.text.output(flag);
}
}
return render ? ret : false;
}
for (var flag in app.flags) {
if (!app.flags.hasOwnProperty(flag)) continue;
app.flags[flag].match = function (val) {
return val.match(this.regex)
}
}
app.add_theme = function (name, theme) {
name != null && theme != null && (settings.themes[name] = theme);
return app;
};
app.add_image = function (src, el) {
var node = selector(el);
if (node.length) {
for (var i = 0, l = node.length; i < l; i++) {
var img = document.createElement("img")
img.setAttribute("data-src", src);
node[i].appendChild(img);
}
}
return app;
};
app.run = function (o) {
preempted = true;
var options = extend(settings, o),
images = [],
imageNodes = [],
bgnodes = [];
if (typeof (options.images) == "string") {
imageNodes = selector(options.images);
} else if (window.NodeList && options.images instanceof window.NodeList) {
imageNodes = options.images;
} else if (window.Node && options.images instanceof window.Node) {
imageNodes = [options.images];
}
if (typeof (options.bgnodes) == "string") {
bgnodes = selector(options.bgnodes);
} else if (window.NodeList && options.bgnodes instanceof window.NodeList) {
bgnodes = options.bgnodes;
} else if (window.Node && options.bgnodes instanceof window.Node) {
bgnodes = [options.bgnodes];
}
for (var i = 0, l = imageNodes.length; i < l; i++) images.push(imageNodes[i]);
var holdercss = document.getElementById("holderjs-style");
if (!holdercss) {
holdercss = document.createElement("style");
holdercss.setAttribute("id", "holderjs-style");
holdercss.type = "text/css";
document.getElementsByTagName("head")[0].appendChild(holdercss);
}
if (!options.nocss) {
if (holdercss.styleSheet) {
holdercss.styleSheet.cssText += options.stylesheet;
} else {
holdercss.appendChild(document.createTextNode(options.stylesheet));
}
}
var cssregex = new RegExp(options.domain + "\/(.*?)\"?\\)");
for (var l = bgnodes.length, i = 0; i < l; i++) {
var src = window.getComputedStyle(bgnodes[i], null)
.getPropertyValue("background-image");
var flags = src.match(cssregex);
var bgsrc = bgnodes[i].getAttribute("data-background-src");
if (flags) {
var holder = parse_flags(flags[1].split("/"), options);
if (holder) {
render("background", bgnodes[i], holder, src);
}
} else if (bgsrc != null) {
var holder = parse_flags(bgsrc.substr(bgsrc.lastIndexOf(options.domain) + options.domain.length + 1)
.split("/"), options);
if (holder) {
render("background", bgnodes[i], holder, src);
}
}
}
for (l = images.length, i = 0; i < l; i++) {
var attr_datasrc, attr_src;
attr_src = attr_datasrc = src = null;
try {
attr_src = images[i].getAttribute("src");
attr_datasrc = images[i].getAttribute("data-src");
} catch (e) {}
if (attr_datasrc == null && !! attr_src && attr_src.indexOf(options.domain) >= 0) {
src = attr_src;
} else if ( !! attr_datasrc && attr_datasrc.indexOf(options.domain) >= 0) {
src = attr_datasrc;
}
if (src) {
var holder = parse_flags(src.substr(src.lastIndexOf(options.domain) + options.domain.length + 1)
.split("/"), options);
if (holder) {
if (holder.fluid) {
render("fluid", images[i], holder, src)
} else {
render("image", images[i], holder, src);
}
}
}
}
return app;
};
contentLoaded(win, function () {
if (window.addEventListener) {
window.addEventListener("resize", resizable_update, false);
window.addEventListener("orientationchange", resizable_update, false);
} else {
window.attachEvent("onresize", resizable_update)
}
preempted || app.run();
});
if (typeof define === "function" && define.amd) {
define([], function () {
return app;
});
}
})(Holder, window);
This type implements ICloneable, ICollection, IEnumerable, and IList.
Serves as the base class for arrays. Provides methods for creating, copying, manipulating, searching, and sorting arrays.
This class is intended to be used as a base class by language implementations that support arrays. Only the system can derive from this type: derived classes of Array are not to be created by the developer.
An array is a collection of identically typed data elements that are accessed and referenced by sets of integral indices.
The rank of an array is the number of dimensions in the array. Each dimension has its own set of indices. An array with a rank greater than one can have a different lower bound and a different number of elements for each dimension. Multidimensional arrays (i.e. arrays with a rank greater than one) are processed in row-major order.
The lower bound of a dimension is the starting index of that dimension.
The length of an array is the total number of elements contained in all of its dimensions.
A vector is a one-dimensional array with a lower bound of '0'.
If the implementer creates a derived class of Array, expected Array behavior cannot be guaranteed. For information on array-like objects with increased functionality, see the IList and IList<T> interfaces. For more information regarding the use of arrays versus the use of collections, see Partition V of the CLI Specification.
Every specific Array type has three instance methods defined on it. While some programming languages allow direct access to these methods, they are primarily intended to be called by the output of compilers based on language syntax that deals with arrays.
Get: Takes as many Int32 arguments as the array has dimensions and returns the value stored at the given index. It throws a IndexOutOfRangeException exception for invalid indices.
Set: Takes as many Int32 arguments as the array has dimensions, plus one additional argument (the last argument) which has the same type as an array element. It stores the final value in the specified index of the array. It throws a IndexOutOfRangeException exception for invalid indices.
Address: Takes as many Int32 arguments as the array has dimensions and returns the address of the element at the given index. It throws a IndexOutOfRangeException exception for invalid indices.
In addition, every specific Array type has a constructor on it that takes as many non-negative Int32 arguments as the array has dimensions. The arguments specify the number of elements in each dimension, and a lower bound of 0. Thus, a two-dimensional array of Int32 objects would have a constructor that could be called with (2, 4) as its arguments to create an array of eight zeros with the first dimension indexed with 0 and 1 and the second dimension indexed with 0, 1, 2, and 3.
For all specific array types except vectors (i.e. those permitted to have non-zero lower bounds and those with more than one dimension) there is an additional constructor. It takes twice as many arguments as the array has dimensions. The arguments are considered in pairs, with the first of the pair specifying the lower bound for that dimension and the second specifying the total number of elements in that dimension. Thus, a two-dimensional array of Int32 objects would also have a constructor that could be called with (-1, 2, 1, 3) as its arguments, specifying an array of 6 zeros, with the first dimension indexed by -1 and 0, and the second dimension indexed by 1, 2, and 3.
Parallel implementation of methods taking a System.Predicate argument are not permitted.
Constructs a new instance of the Array class.
Returns a read-only System.Collections.Generic.IList<T> wrapper around the specified array.
The array to wrap in a read-only IList<T> wrapper.
A read-only IList<T> wrapper around the specified array.
ArgumentNullException array is null .
The returned IList<T> has the same enumeration order as the array it wraps.
A collection that is read-only is simply a collection with a wrapper that prevents modifying the underlying array; therefore, if changes are made to the underlying array, the read-only collection reflects those changes.
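The wrapper semantics above (reads pass through to the underlying array, writes are rejected) can be sketched in JavaScript with a Proxy. The helper name `asReadOnly` is an illustrative assumption, not part of any real API.

```javascript
// Illustrative sketch of the read-only wrapper semantics described above.
// The name "asReadOnly" is assumed for this example only.
function asReadOnly(arr) {
    return new Proxy(arr, {
        get: function (target, prop) {
            return target[prop]; // reads pass through to the underlying array
        },
        set: function () {
            throw new TypeError("collection is read-only");
        }
    });
}

var source = [1, 2, 3];
var view = asReadOnly(source);
source[0] = 99;       // changes to the underlying array...
console.log(view[0]); // ...are visible through the wrapper: 99
```

Assigning through the wrapper (for example, `view[1] = 5`) throws, which models the "prevents modifying the underlying array" behavior.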
Searches the specified section of the specified one-dimensional Array for the specified value, using the specified IComparer implementation.
A Int32 that contains the index at which searching starts.
A Int32 that contains the number of elements to search, beginning with index .
A Object for which to search.
The IComparer implementation to use when comparing elements. Specify a null reference to use the IComparable implementation of each element.
A Int32 with one of the following values based on the result of the search operation.
The index of value in the array. value was found.
The bitwise complement of the index of the first element that is larger than value. value was not found, and at least one array element in the range of index to index + length - 1 was greater than value.
The bitwise complement of (index + length). value was not found, and value was greater than all array elements in the range of index to index + length- 1.
RankException array has more than one dimension.
ArgumentOutOfRangeException index is less than array.GetLowerBound(0) .
ArgumentException index + length is greater than array.GetLowerBound(0) + array.Length .
InvalidOperationException comparer is null , and both value and at least one element of array do not implement the IComparable interface.
value is compared to each element of array using comparer until an element with a value greater than or equal to value is found. If comparer is null , the IComparable interface of the element being compared (or of value, if the element being compared does not implement the interface) is used. If value does not implement the IComparable interface and is compared to an element that does not implement the IComparable interface, an InvalidOperationException exception is thrown. If array is not already sorted, correct results are not guaranteed.
This example demonstrates the System.Array.BinarySearch(System.Array,System.Object) method.
The object searched for, 3, was not found.
The next larger object is at index 2.
The object searched for, 6, was found at index 3.
The object searched for was not found, and no object in the array had greater value.
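The return convention described above (the index on a hit, the bitwise complement on a miss) can be sketched in JavaScript. The function name `binarySearch` and the array contents are assumptions for illustration; they mirror the three outcomes listed in the example output.

```javascript
// Sketch of the bitwise-complement convention described above.
// On a hit, returns the index; on a miss, returns the bitwise
// complement (~) of the index of the first larger element, or of
// the array length when value is greater than all elements.
function binarySearch(arr, value) {
    var lo = 0, hi = arr.length - 1;
    while (lo <= hi) {
        var mid = (lo + hi) >> 1;
        if (arr[mid] === value) return mid;
        if (arr[mid] < value) lo = mid + 1;
        else hi = mid - 1;
    }
    return ~lo; // lo is the index of the first element larger than value
}

var sorted = [0, 2, 4, 6, 8];
console.log(binarySearch(sorted, 3)); // -3 (~2: next larger element at index 2)
console.log(binarySearch(sorted, 6)); //  3 (found at index 3)
console.log(binarySearch(sorted, 9)); // -6 (~5: greater than all elements)
```

Note that `~index` is always negative for a valid index, so a caller can distinguish hits from misses by sign and recover the insertion point with a second `~`.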
Searches the specified one-dimensional Array for the specified value, using the specified IComparer implementation.
The bitwise complement of the index of the first element that is larger than value. value was not found, and at least one array element was greater than value.
The bitwise complement of (array.GetLowerBound(0) + array.Length). value was not found, and value was greater than all array elements.
This version of System.Array.BinarySearch(System.Array,System.Object) is equivalent to System.Array.BinarySearch(System.Array,System.Object)(array, array.GetLowerBound(0), array.Length, value, comparer).
value is compared to each element of array using comparer until an element with a value greater than or equal to value is found. If comparer is null , the IComparable interface of the element being compared (or of value, if the element being compared does not implement the interface) is used. If value does not implement the IComparable interface and is compared to an element that does not implement the IComparable interface, an InvalidOperationException exception is thrown. If array is not already sorted, correct results are not guaranteed.
Searches the specified section of the specified one-dimensional Array for the specified value.
ArgumentException index and length do not specify a valid range in array (i.e. index + length > array.GetLowerBound(0) + array.Length).
InvalidOperationException Either value or at least one element of array does not implement the IComparable interface.
This version of System.Array.BinarySearch(System.Array,System.Object) is equivalent to System.Array.BinarySearch(System.Array,System.Object)(array, array.GetLowerBound(0), array.Length, value, null ).
value is compared to each element of array using the IComparable interface of the element being compared (or of value, if the element being compared does not implement the interface) until an element with a value greater than or equal to value is found. If value does not implement the IComparable interface and is compared to an element that does not implement the IComparable interface, an InvalidOperationException exception is thrown. If array is not already sorted, correct results are not guaranteed.
Searches the specified one-dimensional Array for the specified object.
A Array to search for an object.
The bitwise complement of the index of the first element that is larger than value. value was not found and the value of at least one element of array was greater than value.
The bitwise complement of (array.GetLowerBound(0) + array.Length). value was not found, and value was greater than the value of all array elements.
InvalidOperationException Both value and at least one element of array do not implement the IComparable interface.
Searches an entire one-dimensional sorted array for a specific element, using the IComparable<T> or IComparable interface implemented by each element of the array and by the specified object.
The one-dimensional array to search.
The object for which to search.
A non-negative index of value in the array. value was found.
A negative value, which is the bitwise complement of the index of the first element that is larger than value. value was not found and the value of at least one element of array was greater than value.
A negative value, which is the bitwise complement of one more than the index of the final element. value was not found, and value was greater than the value of all array elements.
InvalidOperationException Neither value nor the elements of the array implement the IComparable<T> or IComparable interfaces.
Duplicate elements are allowed. If the array contains more than one element equal to value, the method returns the index of only one of the occurrences, but not necessarily the first one.
Searches an entire one-dimensional sorted array for a value using the specified IComparer<T> interface.
The implementation to use when comparing elements.
null to use the IComparable<T> or IComparable implementation of each element.
InvalidOperationException comparer is null , and neither value nor the elements of the array implement the IComparable<T> or IComparable interface.
The comparer customizes how the elements are compared.
If comparer is not null , the elements of array are compared to the specified value using the specified System.Collections.Generic.IComparer implementation.
If comparer is null , the default comparer is used.
Searches a range of elements in a one-dimensional sorted array for a value, using the IComparable interface implemented by each element of the array and by the specified value.
The starting index of the range to search.
The length of the range to search.
ArgumentException index + length is greater than array.Length .
InvalidOperationException Neither value nor the elements of the array implement the IComparable<T> or IComparable interface.
Searches a range of elements in a one-dimensional sorted array for a value, using the specified IComparer<T> interface.
ArgumentException index and length do not specify a valid range in array.
The elements of array must already be sorted in increasing value according to the sort order defined by comparer; otherwise, the behavior is unspecified.
If comparer is not null , the elements of array are compared to the specified value using the specified IComparer<T> implementation.
If comparer is null , the comparison is done using the IComparable<T> or System.IComparable implementation provided by the element itself or by the specified value.
Sets the specified range of elements in the specified Array to zero, false, or to a null reference, depending on the element type.
A Int32 that contains the index at which clearing starts.
A Int32 that contains the number of elements to clear, beginning with index.
index and length do not specify a valid range in array (i.e. index + length > array.GetLowerBound(0) + array.Length ).
Reference-type elements will be set to null . Value-type elements will be set to zero, except for Boolean elements, which will be set to false .
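The per-type defaults described above can be sketched in JavaScript. The helper name `clearRange` and the `defaultValue` parameter are illustrative assumptions; in the real API the default is determined by the element type rather than passed in.

```javascript
// Sketch of Clear semantics: reset a range of elements to the element
// type's default value. "clearRange" and "defaultValue" are illustrative
// names, not part of any real API.
function clearRange(arr, index, length, defaultValue) {
    if (index < 0 || length < 0 || index + length > arr.length) {
        throw new RangeError("index and length do not specify a valid range");
    }
    for (var i = index; i < index + length; i++) {
        arr[i] = defaultValue;
    }
}

var numbers = [5, 6, 7, 8];
clearRange(numbers, 1, 2, 0);   // numeric elements reset to zero
console.log(numbers);           // [5, 0, 0, 8]

var flags = [true, true];
clearRange(flags, 0, 2, false); // Boolean elements reset to false
console.log(flags);             // [false, false]
```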
Returns a Object that is a copy of the current instance.
A Object that is a copy of the current instance.
This example demonstrates the System.Array.Clone method by clearing the values of the original array after cloning it.
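Clone produces a copy that is independent of the original array's element slots: clearing the original afterwards leaves the copy intact. A JavaScript sketch of that behavior, using `slice` to model the copy:

```javascript
// Sketch of Clone semantics: a copy whose element slots are
// independent of the original array's.
var original = [1, 2, 3];
var clone = original.slice(); // models Array.Clone for a vector

// Clear the values of the original array.
for (var i = 0; i < original.length; i++) original[i] = 0;

console.log(original); // [0, 0, 0]
console.log(clone);    // [1, 2, 3] - unaffected by clearing the original
```

For reference-type elements the copy is shallow: both arrays would still point at the same underlying objects.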
Converts an array of one type to an array of another type.
The one-dimensional array to convert.
A Converter<T,U> that converts each element from one type to another type.
A new array of the target type containing the converted elements from array.
ArgumentNullException array is null or converter is null .
The Converter<T,U> is a delegate that converts an array element to the target type. The elements of array are individually passed to this converter, and the converted elements are saved in the new array. The source array remains unchanged.
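The converter delegate described above corresponds to a plain mapping function. A minimal JavaScript sketch (the name `convertAll` is an illustrative assumption):

```javascript
// Sketch of ConvertAll: each element is passed to the converter and
// the converted elements are saved in a new array; the source array
// remains unchanged.
function convertAll(arr, converter) {
    var result = new Array(arr.length);
    for (var i = 0; i < arr.length; i++) {
        result[i] = converter(arr[i]);
    }
    return result;
}

var ints = [1, 2, 3];
var strings = convertAll(ints, function (n) { return "#" + n; });
console.log(strings); // ["#1", "#2", "#3"]
console.log(ints);    // [1, 2, 3] - source unchanged
```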
Copies the specified number of elements from the specified source array to the specified destination array.
A Array that contains the data to copy.
A Array that receives the data.
A Int32 designating the number of elements to copy, starting with the first element and proceeding in order.
ArgumentNullException sourceArray or destinationArray is null .
RankException sourceArray and destinationArray have different ranks.
ArrayTypeMismatchException The elements in both arrays are built-in types, and converting from the type of the elements of sourceArray into the type of the elements in destinationArray requires a narrowing conversion.
Both arrays are built-in types, and one array is a value-type array and the other an array of interface type not implemented by that value-type.
Both arrays are user-defined value types and are not of the same type.
InvalidCastException At least one of the elements in sourceArray is not assignment-compatible with the type of destinationArray.
This version of System.Array.Copy(System.Array,System.Array,System.Int32) is equivalent to System.Array.Copy(System.Array,System.Array,System.Int32) (sourceArray, sourceArray.GetLowerBound(0), destinationArray, destinationArray.GetLowerBound(0), length).
If an exception is thrown while copying, the state of destinationArray is undefined.
If sourceArray and destinationArray are the same array, System.Array.Copy(System.Array,System.Array,System.Int32) copies the source elements safely to their destination, as if the copy were done through an intermediate array.
This example demonstrates the System.Array.Copy(System.Array,System.Array,System.Int32) method.
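This overload copies the first length elements, start to start. A JavaScript sketch (the helper name `copyN` is an illustrative assumption):

```javascript
// Sketch of the three-argument Copy: copies "length" elements from
// the start of the source array to the start of the destination array.
function copyN(sourceArray, destinationArray, length) {
    for (var i = 0; i < length; i++) {
        destinationArray[i] = sourceArray[i];
    }
}

var src = [10, 20, 30, 40];
var dst = [0, 0, 0, 0, 0];
copyN(src, dst, 3);
console.log(dst); // [10, 20, 30, 0, 0]
```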
Copies the specified number of elements from a source array starting at the specified source index to a destination array starting at the specified destination index.
A Int32 that contains the index in sourceArray from which copying begins.
A Int32 that contains the index in destinationArray at which storing begins.
A Int32 that contains the number of elements to copy.
InvalidCastException At least one element in sourceArray is assignment-incompatible with the type of destinationArray.
ArgumentException (sourceIndex + length ) > (sourceArray.GetLowerBound(0) + sourceArray.Length).
(destinationIndex + length ) > ( destinationArray.GetLowerBound(0) + destinationArray.Length).
If sourceArray and destinationArray are the same array, System.Array.Copy(System.Array,System.Array,System.Int32) copies the source elements safely to their destination as if the copy were done through an intermediate array.
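The "as if through an intermediate array" guarantee means a source range may safely overlap the destination range within the same array. A JavaScript sketch using an explicit intermediate buffer (the name `copyRange` is an illustrative assumption):

```javascript
// Sketch of the overlapping-copy guarantee: copy through an
// intermediate buffer so that a source range can safely overlap the
// destination range in the same array.
function copyRange(sourceArray, sourceIndex, destinationArray, destinationIndex, length) {
    var buffer = sourceArray.slice(sourceIndex, sourceIndex + length);
    for (var i = 0; i < length; i++) {
        destinationArray[destinationIndex + i] = buffer[i];
    }
}

var a = [1, 2, 3, 4, 5];
copyRange(a, 0, a, 1, 4); // shift the first four elements right by one
console.log(a); // [1, 1, 2, 3, 4]
```

Without the buffer, a naive forward loop would overwrite source elements before reading them, producing [1, 1, 1, 1, 1] instead.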
Copies all the elements of the current zero-based instance to the specified one-dimensional array starting at the specified subscript in the destination array.
A one-dimensional Array that is the destination of the elements copied from the current instance.
A Int32 that contains the index in array at which copying begins.
RankException The current instance has more than one dimension.
ArgumentOutOfRangeException index < array.GetLowerBound(0) .
ArgumentException array has more than one dimension.
( index + Length of the current instance) > (array.GetLowerBound(0) + array.Length ).
The number of elements in the current instance is greater than the available space from index to the end of array.
ArrayTypeMismatchException The element type of the current instance is not assignment-compatible with the element type of array.
index is the array index in the destination array at which copying begins.
This method is implemented to support the ICollection interface. If implementing ICollection is not explicitly required, use System.Array.Copy(System.Array,System.Array,System.Int32) to avoid an extra indirection.
If this method throws an exception while copying, the state of array is undefined.
The following example shows how to copy the elements of one Array into another.
Creates a zero-based, multidimensional array of the specified Type and dimension lengths.
The Type of the elements contained in the new Array instance.
A one-dimensional array of Int32 objects that contains the size of each dimension of the new Array instance.
A new zero-based, multidimensional Array instance of the specified Type with the specified length for each dimension. The System.Array.Rank of the new instance is equal to lengths.Length.
ArgumentNullException elementType or lengths is null .
ArgumentException elementType is not a valid Type.
ArgumentOutOfRangeException A value in lengths is less than zero.
The number of elements in lengths is required to equal the number of dimensions in the new Array instance. Each element of lengths specifies the length of the corresponding dimension in the new instance.
The following example shows how to create and initialize a multidimensional Array.
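A JavaScript sketch of building a zero-based, zero-filled structure from a list of dimension lengths, modeled as nested arrays (the name `createInstance` is an illustrative assumption; it does not carry an element type):

```javascript
// Sketch of CreateInstance(lengths): builds a zero-based, zero-filled
// structure with one nesting level per dimension.
function createInstance(lengths) {
    if (lengths.some(function (n) { return n < 0; })) {
        throw new RangeError("a value in lengths is less than zero");
    }
    function build(dim) {
        if (dim === lengths.length - 1) {
            var row = []; // last dimension: a row of zeros
            for (var i = 0; i < lengths[dim]; i++) row.push(0);
            return row;
        }
        var outer = [];
        for (var j = 0; j < lengths[dim]; j++) outer.push(build(dim + 1));
        return outer;
    }
    return build(0);
}

var grid = createInstance([2, 4]); // models CreateInstance(elementType, 2, 4)
console.log(grid.length);    // 2
console.log(grid[0].length); // 4
console.log(grid[1][3]);     // 0
```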
Creates a zero-based, three-dimensional array of the specified Type and dimension lengths.
A Int32 that contains the number of elements contained in the first dimension of the new Array instance.
A Int32 that contains the number of elements contained in the second dimension of the new Array instance.
A Int32 that contains the number of elements contained in the third dimension of the new Array instance.
A new zero-based, three-dimensional Array instance of elementType objects with the size length1 for the first dimension, length2 for the second, and length3 for the third.
ArgumentNullException elementType is null .
The following example shows how to create and initialize a three-dimensional Array.
Creates a zero-based, two-dimensional array of the specified Type and dimension lengths.
A new zero-indexed, two-dimensional Array instance of elementType objects with the size length1 for the first dimension and length2 for the second.
The following example shows how to create and initialize a two-dimensional Array.
Constructs a zero-based, one-dimensional array with the specified number of elements of the specified type.
A Int32 that contains the number of elements contained in the new Array instance.
A zero-based, one-dimensional Array object containing length elements of type elementType.
The following example shows how to create and initialize a one-dimensional Array.
Creates a multidimensional array whose element type is the specified Type, and dimension lengths and lower bounds, as specified.
A one-dimensional array of Int32 objects that contains the lower bound of each dimension of the new Array instance.
A new multidimensional Array whose element type is the specified Type and with the specified length and lower bound for each dimension.
ArgumentNullException elementType, lengths, or lowerBounds is null .
lengths and lowerBounds do not contain the same number of elements.
Each element of lengths specifies the length of the corresponding dimension in the new Array instance.
Each element of lowerBounds specifies the lower bound of the corresponding dimension in the new Array instance.
The following example shows how to create and initialize a multidimensional Array with specified low bounds.
Determines whether the specified array contains any element that matches the conditions defined by the specified predicate.
The predicate that defines the conditions of the elements to search for.
true , if the array contains one or more elements that match the conditions defined by the specified predicate; otherwise, false .
ArgumentNullException array or match is null .
The predicate returns true if the object passed to it matches the delegate. Each element of array is passed to the predicate in turn, and processing is stopped when the predicate returns true .
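For example (array contents are illustrative; assumes a modern C# top-level program):

```csharp
using System;

int[] primes = { 2, 3, 5, 7 };

// Stops at the first element for which the predicate returns true (here, 2).
bool hasEven = Array.Exists(primes, n => n % 2 == 0);
Console.WriteLine(hasEven);                           // True
Console.WriteLine(Array.Exists(primes, n => n > 10)); // False
```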
Searches for an element that matches the predicate, and returns the first occurrence within the entire array.
The elements of array are individually passed to the predicate, moving forward in the array, starting with the first element and ending with the last element. Processing is stopped when the predicate returns true .
The predicate that specifies the elements to search for.
The elements of array are individually passed to the predicate, and those elements for which the predicate returns true , are saved in the returned array.
Searches for an element that matches the predicate, and returns the zero-based index of the first occurrence within the entire array.
The zero-based index of the first occurrence of an element that matches the conditions defined by match, if found; otherwise, -1.
The elements of array are individually passed to the predicate. The array is searched forward starting at the first element and ending at the last element. Processing is stopped when the predicate returns true .
Searches for an element that matches the predicate, and returns the zero-based index of the first occurrence within the range of elements in the array that extends from the specified index to the last element.
The zero-based starting index of the search.
ArgumentOutOfRangeException startIndex is less than zero or greater than array.Length .
The elements of array are individually passed to the predicate. The array is searched forward starting at the specified index and ending at the last element. Processing is stopped when the predicate returns true .
Searches for an element that matches the predicate, and returns the zero-based index of the first occurrence within the range of elements in the array that starts at the specified index and contains the specified number of elements.
The number of consecutive elements to search.
ArgumentOutOfRangeException startIndex is less than zero.
count is less than zero.
startIndex + count is greater than array.Length .
The elements of array are individually passed to the predicate. The array is searched forward starting at the specified index and going for count elements. Processing is stopped when the predicate returns true .
Searches for an element that matches the predicate, and returns the last occurrence within the entire array.
The last element that matches the conditions defined by the specified predicate, if found; otherwise, the default value for type T.
The elements of array are individually passed to the predicate, moving backward in the array, starting with the last element and ending with the first element. Processing is stopped when a match is found.
Searches for an element that matches the predicate, and returns the zero-based index of the last occurrence within the entire array.
The elements of array are individually passed to the predicate. The array is searched backwards starting at the last element and ending at the first element. Processing is stopped when the predicate returns true .
Searches for an element that matches the predicate, and returns the zero-based index of the last occurrence within the range of elements in the array that extends from the specified index to the last element.
The zero-based starting index of the backward search.
The elements of array are individually passed to the predicate. The array is searched backward starting at the specified index and ending at the first element. Processing is stopped when the predicate returns true .
Searches for an element that matches the predicate, and returns the zero-based index of the last occurrence within the range of elements in the array that ends at the specified index and contains the specified number of elements.
count is greater than startIndex + 1.
The elements of array are individually passed to the predicate. The array is searched backward starting at the specified index and going for count elements. Processing is stopped when the predicate returns true .
Performs the specified action on each element of the specified array.
The array on whose elements the action is to be performed.
The action to perform on each element of array.
ArgumentNullException array or action is null .
The elements of array are individually passed to the action. The elements of the current array are individually passed to the action delegate, sequentially, in index order, and on the same thread as that used to call ForEach . Execution stops if the action throws an exception.
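For example (values are illustrative; assumes a modern C# top-level program):

```csharp
using System;

int[] data = { 1, 2, 3 };

// The action runs once per element, in index order: prints 1, 4, 9.
Array.ForEach(data, n => Console.WriteLine(n * n));
```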
Returns a IEnumerator for the current instance.
A IEnumerator for the current instance.
A IEnumerator grants read-access to the elements of a Array.
[Behaviors: Enumerators can be used to read the data in the collection, but they cannot be used to modify the underlying collection.
Initially, the enumerator is positioned before the first element of the current instance. System.Collections.IEnumerator.Reset returns the enumerator to this position. Therefore, after an enumerator is created or after a System.Collections.IEnumerator.Reset, System.Collections.IEnumerator.MoveNext is required to be called to advance the enumerator to the first element of the collection before reading the value of System.Collections.IEnumerator.Current.
System.Collections.IEnumerator.Current returns the same object until either System.Collections.IEnumerator.MoveNext or System.Collections.IEnumerator.Reset is called. System.Collections.IEnumerator.MoveNext sets System.Collections.IEnumerator.Current to the next element.
If System.Collections.IEnumerator.MoveNext passes the end of the collection, the enumerator is positioned after the last element in the collection and System.Collections.IEnumerator.MoveNext returns false. When the enumerator is at this position, subsequent calls to System.Collections.IEnumerator.MoveNext also return false . If the last call to System.Collections.IEnumerator.MoveNext returned false , System.Collections.IEnumerator.Current is unspecified. To set System.Collections.IEnumerator.Current to the first element of the collection again, you can call System.Collections.IEnumerator.Reset followed by System.Collections.IEnumerator.MoveNext.
An enumerator remains valid as long as the collection remains unchanged. If changes are made to the collection, such as adding, modifying, or deleting elements, the enumerator is irrecoverably invalidated and its behavior is undefined.
The enumerator does not have exclusive access to the collection; therefore, enumerating through a collection is intrinsically not a thread safe procedure. To guarantee thread safety during enumeration, you can lock the collection during the entire enumeration. To allow the collection to be accessed by multiple threads for reading and writing, you must implement your own synchronization.
[Default: Multidimensional arrays are processed in row-major order.
This example demonstrates the System.Array.GetEnumerator method.
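A minimal demonstration (array contents are illustrative; assumes a modern C# top-level program):

```csharp
using System;
using System.Collections;

int[,] m = { { 1, 2 }, { 3, 4 } };

// A multidimensional array is enumerated in row-major order: prints 1 2 3 4.
IEnumerator e = m.GetEnumerator();
while (e.MoveNext())
    Console.Write(e.Current + " ");
```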
Gets the number of elements in the specified dimension of the array.
The zero-based dimension of the array whose length is to be determined.
The number of elements in the specified dimension of the array.
IndexOutOfRangeException dimension is less than zero.
dimension is equal to or greater than System.Array.Rank.
Returns the lower bound of the specified dimension in the current instance.
A Int32 that contains the zero-based dimension of the current instance whose lower bound is to be determined.
A Int32 that contains the lower bound of the specified dimension in the current instance.
dimension is equal to or greater than the System.Array.Rank property of the current instance.
Returns the upper bound of the specified dimension in the current instance.
A Int32 that contains the zero-based dimension of the current instance whose upper bound is to be determined.
A Int32 that contains the upper bound of the specified dimension in the current instance.
Gets the value at the specified position in the current multidimensional instance.
A one-dimensional array of Int32 objects that contains the indices that specify the position of the element in the current instance whose value to get.
A Object that contains the value at the specified position in the current instance.
ArgumentNullException indices is null .
ArgumentException The number of dimensions in the current instance is not equal to the number of elements in indices.
IndexOutOfRangeException At least one element in indices is outside the range of valid indices for the corresponding dimension of the current instance.
The number of elements in indices is required to be equal to the number of dimensions in the current instance. All elements in indices collectively specify the position of the desired element in the current instance.
Gets the value at the specified position in the current one-dimensional instance.
A Int32 that contains the position of the value to get from the current instance.
ArgumentException The current instance has more than one dimension.
IndexOutOfRangeException index is outside the range of valid indices for the current instance.
This example demonstrates the System.Array.GetValue(System.Int32) method.
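A minimal demonstration (array contents are illustrative; assumes a modern C# top-level program):

```csharp
using System;

string[] days = { "Mon", "Tue", "Wed" };

// GetValue returns the element at the given index, boxed as an object.
Console.WriteLine(days.GetValue(1)); // Tue
```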
Gets the value at the specified position in the current two-dimensional instance.
A Int32 that contains the first-dimension index of the element in the current instance to get.
A Int32 that contains the second-dimension index of the element in the current instance to get.
ArgumentException The current instance does not have exactly two dimensions.
IndexOutOfRangeException At least one of index1 or index2 is outside the range of valid indexes for the corresponding dimension of the current instance.
Gets the value at the specified position in the current three-dimensional instance.
A Int32 that contains the third-dimension index of the element in the current instance to get.
ArgumentException The current instance does not have exactly three dimensions.
IndexOutOfRangeException At least one of index1, index2, or index3 is outside the range of valid indexes for the corresponding dimension of the current instance.
Searches the specified one-dimensional Array, returning the index of the first occurrence of the specified Object in the specified range.
A one-dimensional Array to search.
A Object to locate in array.
A Int32 that contains the number of elements to search, beginning with startIndex.
ArgumentOutOfRangeException startIndex is less than array.GetLowerBound(0) .
startIndex + count is greater than array.GetLowerBound(0) + array.Length .
The elements are compared using System.Object.Equals(System.Object).
Searches the specified one-dimensional Array, returning the index of the first occurrence of the specified Object between the specified index and the last element.
ArgumentOutOfRangeException startIndex is less than array.GetLowerBound(0) or greater than array.GetLowerBound(0) + array.Length .
This version of System.Array.IndexOf(System.Array,System.Object) is equivalent to System.Array.IndexOf(System.Array,System.Object)(array, value, startIndex, (array.Length - startIndex + array.GetLowerBound(0))).
Searches the specified one-dimensional Array, returning the index of the first occurrence of the specified Object.
This version of System.Array.IndexOf(System.Array,System.Object) is equivalent to System.Array.IndexOf(System.Array,System.Object)(array, value, array.GetLowerBound(0),array.Length).
The following example demonstrates the System.Array.IndexOf(System.Array,System.Object) method.
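A minimal demonstration (array contents are illustrative; assumes a modern C# top-level program):

```csharp
using System;

string[] fruit = { "apple", "pear", "apple" };

Console.WriteLine(Array.IndexOf(fruit, "apple"));    // 0  (first match)
Console.WriteLine(Array.IndexOf(fruit, "apple", 1)); // 2  (search from index 1)
Console.WriteLine(Array.IndexOf(fruit, "plum"));     // -1 (not found)
```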
Searches for the specified value and returns the index of the first occurrence within the range of elements in the array starting at the specified index and continuing for, at most, the specified number of elements.
The zero-based index of the first occurrence of value within the range of elements in array that starts at startIndex and contains the number of elements specified in count , if found; otherwise, -1.
startIndex + count is greater than System.Array.Length.
The elements are compared using System.Object.Equals(System.Object). The array is searched forward starting at startIndex and ending at startIndex + count - 1. Processing is stopped when the first match is found.
Searches the specified array, returning the index of the first occurrence of the specified value, starting at the specified index and including the last element.
The zero-based index of the first occurrence of value within the range of elements in array that extends from startIndex to the last element, if found; otherwise, -1. If startIndex is equal to the length of the array, -1 is returned.
The elements are compared using System.Object.Equals(System.Object). The array is searched forward starting at startIndex and ending at the last element. Processing is stopped when the first match is found.
Searches the specified array, returning the index of the first occurrence of the specified value.
The zero-based index of the first occurrence of value in array, if found; otherwise, -1.
The elements are compared using System.Object.Equals(System.Object). The array is searched forward starting at the first element and ending at the last element. Processing is stopped when the first match is found.
Initializes every element of the current instance of value-type objects by calling the default constructor of that value type.
This method cannot be used on reference-type arrays.
If the current instance is not a value-type Array or if the value type does not have a default constructor, the current instance is not modified.
The current instance can have any lower bound and any number of dimensions.
Searches the specified one-dimensional Array, returning the index of the last occurrence of the specified Object in the specified range.
A Int32 that contains the number of elements to search, beginning with startIndex .
ArgumentOutOfRangeException startIndex is outside the range of valid indices for array.
Searches the specified one-dimensional Array, returning the index of the last occurrence of the specified Object between the specified index and the first element.
This version of System.Array.LastIndexOf(System.Array,System.Object) is equivalent to System.Array.LastIndexOf(System.Array,System.Object)( array, value, startIndex,startIndex+ 1 -array.GetLowerBound(0)).
Searches the specified one-dimensional Array, returning the index of the last occurrence of the specified Object.
This version of System.Array.LastIndexOf(System.Array,System.Object) is equivalent to System.Array.LastIndexOf(System.Array,System.Object)(array, value, (array.GetLowerBound(0) + array.Length - 1), array.Length).
The following example demonstrates the System.Array.LastIndexOf(System.Array,System.Object) method.
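A minimal demonstration (array contents are illustrative; assumes a modern C# top-level program):

```csharp
using System;

string[] fruit = { "apple", "pear", "apple" };

Console.WriteLine(Array.LastIndexOf(fruit, "apple"));    // 2  (last match)
Console.WriteLine(Array.LastIndexOf(fruit, "apple", 1)); // 0  (search backward from index 1)
Console.WriteLine(Array.LastIndexOf(fruit, "plum"));     // -1 (not found)
```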
Searches for the specified value and returns the index of the last occurrence within the range of elements in the array starting at the specified index and continuing backwards for, at most, the specified number of elements.
The zero-based index of the last occurrence of value within the range of elements in array that ends at startIndex and contains the number of elements specified in count , if found; otherwise, -1.
The elements are compared using System.Object.Equals(System.Object). The array is searched backward starting at startIndex and going for count elements. Processing is stopped when the first match is found.
Searches the specified array backwards, returning the index of the last occurrence of the specified value, starting at the specified index.
The zero-based index of the last occurrence of value within the range of elements in array that extends from startIndex to the first element, if found; otherwise, -1.
The elements are compared using System.Object.Equals(System.Object). The array is searched backward starting at startIndex and ending at the first element. Processing is stopped when the first match is found.
Searches the specified array, returning the index of the last occurrence of the specified value.
The zero-based index of the last occurrence of value in array, if found; otherwise, -1.
The elements are compared using System.Object.Equals(System.Object). The array is searched backward starting at the last element and ending at the first element. Processing is stopped when the first match is found.
Changes the size of an array to the specified new size.
null to create a new array with the specified size.
The size of the new array.
ArgumentOutOfRangeException newSize is less than zero.
If array is null , this method creates a new array with the specified size.
If array is not null , then if newSize is equal to System.Array.Length of the old array, this method does nothing. Otherwise, this method allocates a new array with the specified size, copies elements from the old array to the new one, and then assigns the new array reference to the array parameter. If newSize is greater than System.Array.Length of the old array, a new array is allocated and all the elements are copied from the old array to the new one. If newSize is less than System.Array.Length of the old array, a new array is allocated and elements are copied from the old array to the new one until the new one is filled; the rest of the elements in the old array are ignored.
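For example (values are illustrative; assumes a modern C# top-level program):

```csharp
using System;

int[] a = { 1, 2, 3 };

Array.Resize(ref a, 5);      // a is now { 1, 2, 3, 0, 0 } (padded with defaults)
Array.Resize(ref a, 2);      // a is now { 1, 2 }          (extra elements dropped)
Console.WriteLine(a.Length); // 2
```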
Reverses the sequence of the elements in the specified range of the specified one-dimensional Array.
The one-dimensional Array to reverse.
A Int32 that contains the index at which reversing starts.
A Int32 that contains the number of elements to reverse.
The following example demonstrates the System.Array.Reverse(System.Array) method.
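A minimal demonstration (array contents are illustrative; assumes a modern C# top-level program):

```csharp
using System;

int[] a = { 1, 2, 3, 4 };

Array.Reverse(a, 1, 2);                 // reverse 2 elements from index 1: { 1, 3, 2, 4 }
Array.Reverse(a);                       // reverse the whole array: { 4, 2, 3, 1 }
Console.WriteLine(string.Join(",", a)); // 4,2,3,1
```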
Reverses the sequence of the elements in the specified one-dimensional Array.
This version of System.Array.Reverse(System.Array) is equivalent to System.Array.Reverse(System.Array)(array, array.GetLowerBound(0), array.Length).
Sets the value of the element at the specified position in the current one-dimensional instance.
A Object that contains the new value for the specified element.
A Int32 that contains the index of the element whose value is to be set.
InvalidCastException value is not assignment-compatible with the element type of the current instance.
[Note: Use the System.Array.GetLowerBound(System.Int32) and System.Array.GetUpperBound(System.Int32) methods to determine whether index is out of bounds.
For more information regarding valid conversions that will be performed by this method, see Convert.
Sets the value of the element at the specified position in the current two-dimensional instance.
A Int32 that contains the first-dimension index of the element in the current instance to set.
A Int32 that contains the second-dimension index of the element in the current instance to set.
IndexOutOfRangeException At least one of index1 or index2 is outside the range of valid indices for the corresponding dimension of the current instance.
[Note: For more information regarding valid conversions that will be performed by this method, see Convert.
Use the System.Array.GetLowerBound(System.Int32) and System.Array.GetUpperBound(System.Int32) methods to determine whether any of the indices are out of bounds.
Sets the value of the element at the specified position in the current three-dimensional instance.
A Int32 that contains the third-dimension index of the element in the current instance to set.
IndexOutOfRangeException At least one of index1, index2, or index3 is outside the range of valid indices for the corresponding dimension of the current instance.
Sets the value of the element at the specified position in the current multidimensional instance.
A one-dimensional array of Int32 objects that contains the indices that specify the position of the element in the current instance to set.
Use the System.Array.GetLowerBound(System.Int32) and System.Array.GetUpperBound(System.Int32) methods to determine whether any of the values in indices is out of bounds.
Sorts the specified range of the specified pair of one-dimensional Array objects (one containing a set of keys and the other containing corresponding items) based on the keys in the first specified Array using the specified IComparer implementation.
A one-dimensional Array that contains the keys to sort.
A one-dimensional Array that contains the items that correspond to each element of keys. Specify a null reference to sort only keys.
A Int32 that contains the index at which sorting starts.
A Int32 that contains the number of elements to sort.
ArgumentNullException keys is null .
RankException keys has more than one dimension.
items is not a null reference and has more than one dimension.
ArgumentException items is not a null reference, and keys.GetLowerBound(0) does not equal items.GetLowerBound(0).
index and length do not specify a valid range in keys.
items is not a null reference, and index and length do not specify a valid range in items.
InvalidOperationException comparer is null , and one or more elements in keys that are used in a comparison do not implement the IComparable interface.
Each key in keys is required to have a corresponding item in items. The sort is performed according to the order of keys. After a key is repositioned during the sort, the corresponding item in items is similarly repositioned. Only keys.Length elements of items will be sorted. Therefore, items is sorted according to the arrangement of the corresponding keys in keys. If the sort is not successfully completed, the results are undefined.
If comparer is a null reference, each element of keys is required to implement the IComparable interface to be capable of comparisons with every other element in keys.
Sorts the elements in the specified section of the specified one-dimensional Array using the specified IComparer implementation.
A one-dimensional Array to sort.
InvalidOperationException comparer is null , and one or more elements in array that are used in a comparison do not implement the IComparable interface.
This version of System.Array.Sort(System.Array) is equivalent to System.Array.Sort(System.Array)(array, null , index, length, comparer).
If comparer is a null reference, each element of array is required to implement the IComparable interface to be capable of comparisons with every other element in array. If the sort is not successfully completed, the results are unspecified.
Sorts the specified pair of one-dimensional Array objects (one containing a set of keys and the other containing corresponding items) based on the keys in the first specified Array using the specified IComparer implementation.
A one-dimensional Array that contains the items that correspond to each element in keys. Specify a null reference to sort only keys.
items is not a null reference, and keys.Length > items.Length.
InvalidOperationException comparer is null , and one or more elements in keys that are used in a comparison do not implement the IComparable interface.
This version of System.Array.Sort(System.Array) is equivalent to System.Array.Sort(System.Array)(keys, items, keys.GetLowerBound(0), keys.Length, comparer).
Each key in keys is required to have a corresponding item in items. The sort is performed according to the order of keys . After a key is repositioned during the sort, the corresponding item in items is similarly repositioned. Only keys.Length elements of items are sorted. Therefore, items is sorted according to the arrangement of the corresponding keys in keys. If the sort is not successfully completed, the results are unspecified.
Sorts the elements in the specified one-dimensional Array using the specified IComparer implementation.
The one-dimensional Array to sort.
InvalidOperationException comparer is a null reference, and one or more elements in array that are used in a comparison do not implement the IComparable interface.
This version of System.Array.Sort(System.Array) is equivalent to System.Array.Sort(System.Array)(array, null , array.GetLowerBound(0), array.Length, comparer).
Sorts the specified ranges of the specified pair of one-dimensional Array objects (one containing a set of keys and the other containing corresponding items) based on the keys in the first specified Array.
A Int32 that contains the index at which sorting begins.
index and length do not specify a valid range in keys.
InvalidOperationException One or more elements in keys that are used in a comparison do not implement the IComparable interface.
This version of System.Array.Sort(System.Array) is equivalent to System.Array.Sort(System.Array)(keys, items, index, length, null ).
Each key in keys is required to have a corresponding item in items. The sort is performed according to the order of keys . After a key is repositioned during the sort, the corresponding item in items is similarly repositioned. Therefore, items is sorted according to the arrangement of the corresponding keys in keys. If the sort is not successfully completed, the results are undefined.
Each element of keys is required to implement the IComparable interface to be capable of comparisons with every other element in keys.
Sorts the elements of the specified one-dimensional Array.
InvalidOperationException One or more elements in array that are used in a comparison do not implement the IComparable interface.
This version of System.Array.Sort(System.Array) is equivalent to System.Array.Sort(System.Array)(array, null , array.GetLowerBound(0), array.Length, null ).
Each element of array is required to implement the IComparable interface to be capable of comparisons with every other element in array.
This example demonstrates the System.Array.Sort(System.Array) method.
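A minimal demonstration of the keys/items form (array contents are illustrative; assumes a modern C# top-level program):

```csharp
using System;

int[] keys     = { 3, 1, 2 };
string[] items = { "three", "one", "two" };

// Sorting the keys repositions the corresponding items the same way.
Array.Sort(keys, items);
Console.WriteLine(string.Join(",", keys));  // 1,2,3
Console.WriteLine(string.Join(",", items)); // one,two,three
```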
Sorts the specified pair of one-dimensional Array objects (one containing a set of keys and the other containing corresponding items) based on the keys in the first specified Array.
A one-dimensional Array that contains the items that correspond to each element of keys. Specify a null reference to sort only keys.
This version of System.Array.Sort(System.Array) is equivalent to System.Array.Sort(System.Array)(keys, items, keys.GetLowerBound(0), keys.Length, null ).
Sorts the elements in the specified range of the specified one-dimensional Array.
This version of System.Array.Sort(System.Array) is equivalent to System.Array.Sort(System.Array)(array, null , index, length, null ).
Each element of array is required to implement the IComparable interface to be capable of comparisons with every other element in array. If the sort is not successfully completed, the results are unspecified.
Sorts a range of elements in a pair of arrays based on the keys in the first array using the specified System.Collections.Generic.IComparer<K>.
The array that contains the keys to sort.
The array that contains the items that correspond to each of the keys in keys.
null to sort only the keys array.
The starting index of the range to sort.
The number of elements in the range to sort.
The System.Collections.Generic.IComparer<K> implementation to use when comparing elements.
null to use the System.IComparable<K> or IComparable implementation of each element.
ArgumentException index and length do not specify a valid range in keys.
items is not null , and index and length do not specify a valid range in items.
ArgumentOutOfRangeException index is less than zero.
InvalidOperationException comparer is null , and one or more elements in keys that are used in a comparison do not implement the System.IComparable<K> or System.IComparable interface.
If items is non-null, each key in keys is required to have a corresponding item in items. The sort is performed according to the order of keys. After a key is repositioned during the sort, the corresponding item in items is similarly repositioned. Only keys.Length elements of items will be sorted. Therefore, items is sorted according to the arrangement of the corresponding keys in keys. If the sort is not successfully completed, the results are undefined.
If comparer is a null reference, each element of keys is required to implement the System.IComparable<K> or IComparable interface to be capable of comparisons with every other element in keys.
Sorts a pair of arrays based on the keys in the first array, using the specified System.Collections.Generic.IComparer.
ArgumentException items is not null , and the length of keys does not match the length of items.
This version of System.Array.Sort is equivalent to System.Array.Sort<K,V>( keys, items, 0, keys.Length , comparer) .
If items is non-null, each key in keys is required to have a corresponding item in items. The sort is performed according to the order of keys. After a key is repositioned during the sort, the corresponding item in items is similarly repositioned. Only keys.Length elements of items will be sorted. Therefore, items is sorted according to the arrangement of the corresponding keys in keys. If the sort is not successfully completed, the results are unspecified.
Sorts a range of elements in a pair of arrays based on the keys in the first array, using the System.IComparable<K> or IComparable implementation of each key.
InvalidOperationException One or more elements in keys that are used in a comparison are the null reference or do not implement the System.IComparable<K> or System.IComparable interface.
If items is non-null, each key in keys is required to have a corresponding item in items. When a key is repositioned during the sorting, the corresponding item in items is similarly repositioned. Therefore, items is sorted according to the arrangement of the corresponding keys in keys.
If the sort is not successfully completed, the results are unspecified.
Each key within the specified range of elements in keys must implement the System.IComparable<K> or IComparable interface to be capable of comparisons with every other key.
This implementation performs an unstable sort; that is, if two elements are equal, their order might not be preserved. In contrast, a stable sort preserves the order of elements that are equal.
Sorts a pair of arrays based on the keys in the first array using the IComparable implementation of each key.
ArgumentException items is not null , and the length of keys does not equal the length of items.
Each key in keys must implement the System.IComparable<K> or IComparable interface to be capable of comparisons with every other key.
If the sort is not successfully completed, the results are undefined.
Sorts the elements in a range of elements in an array using the specified comparer.
InvalidOperationException comparer is null , and one or more elements in array that are used in a comparison do not implement the System.IComparable<K> or System.IComparable interface.
If comparer is null, each element within the specified range of elements in array must implement the IComparable interface to be capable of comparisons with every other element in array.
Sorts the elements in an array using the specified comparer.
The IComparer<T> implementation to use when comparing elements.
InvalidOperationException comparer is null , and one or more elements in array that are used in a comparison do not implement the IComparable<T> or System.IComparable interface.
If comparer is null, each element of array must implement the IComparable<T> or IComparable interface to be capable of comparisons with every other element in array.
Sorts the elements in an array using the specified comparison.
The Comparison<T> to use when comparing elements.
Sorts the elements in an entire array using the IComparable<T> or IComparable implementation of each element of that array.
InvalidOperationException One or more elements in array that are used in a comparison are the null reference or do not implement the IComparable<T> or System.IComparable interface.
Each element of array is required to implement the IComparable<T> or IComparable interface to be capable of comparisons with every other element in array.
Sorts an array using the IComparable<T> or IComparable implementation of each element of that array.
InvalidOperationException One or more elements in array that are used in a comparison do not implement the IComparable<T> or System.IComparable interface.
Each element within the specified range of elements in array must implement the IComparable<T> or IComparable interface to be capable of comparisons with every other element in array.
Determines whether every element in the array matches the predicate.
The array to check against the conditions.
The predicate against which the elements are checked.
true, if every element in array matches the specified predicate; otherwise, false.
The predicate returns true if the object passed to it matches the delegate. The elements of array are individually passed to the predicate, and processing is stopped when the delegate returns false for any element.
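The short-circuit behavior described above can be sketched in Python (an illustrative analogue, not the .NET source):

```python
def true_for_all(array, predicate):
    # Pass each element to the predicate; stop at the first False.
    for element in array:
        if not predicate(element):
            return False
    return True

print(true_for_all([2, 4, 6], lambda x: x % 2 == 0))  # True
print(true_for_all([2, 3, 6], lambda x: x % 2 == 0))  # False
```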
Gets the total number of elements in all the dimensions of the current instance.
An Int32 that contains the total number of elements in all the dimensions of the current instance.
An Int64 value containing the length of the array.
Gets the rank (number of dimensions) of the current instance.
An Int32 that contains the rank (number of dimensions) of the current instance.
\section{Introduction}
\paragraph*{Problem statement and motivation}
Let $\QQ,\RR,\CC$ be respectively the
fields of rational, real and complex numbers, and let $m,n$
be positive integers. Given $m \times m$
matrices ${H}_0, {H}_1, \ldots, {H}_n$ with entries in $\QQ$ and Hankel
structure, i.e. constant skew diagonals,
we consider the {\it linear Hankel matrix} ${H}(\vecx) = {H}_0+\X_1{H}_1+\ldots+\X_n{H}_n$,
denoted ${H}$ for short, and the algebraic set
\[
{\mathcal{H}}_r =
\{\vecx \in \CC^n : {\rm rank} \, {H}(\vecx) \leq r\}.
\]
The goal of this paper is to provide an efficient algorithm for
computing at least one sample point per connected component of the
real algebraic set ${\mathcal{H}}_r \cap \RR^n$.
Such an algorithm can be used to solve the matrix rank minimization
problem for ${H}$. Matrix rank minimization mostly consists of
minimizing the rank of a given matrix whose entries are subject to
constraints defining a convex set. These problems arise in many
engineering or statistical modeling applications and have recently
received a lot of attention. Considering Hankel structures is
relevant since this structure arises in many applications (e.g. for model
reduction in linear dynamical systems described by Markov parameters, see
\cite[Section 1.3]{markovsky12}).
Moreover, an algorithm for computing sample points in each connected
component of ${\mathcal{H}}_r\cap\RR^n$ can also be used to decide the
emptiness of the feasibility set $S=\{\vecx \in \RR^n : {H}(\vecx)\succeq 0\}$.
Indeed, considering the minimum rank $r$ attained in the boundary of
$S$, it is easy to prove that one of the connected components of
${\mathcal{H}}_{r} \cap \RR^n$ is actually contained in $S$. Note also that
such feasibility sets, also called Hankel spectrahedra, have recently
attracted some attention (see e.g. \cite{BS14}).
The intrinsically algebraic nature of our problem makes the
design of
exact algorithms relevant to achieve reliability.
On the one hand, we aim at
exploiting algorithmically the special Hankel structure to
gain efficiency.
On the other hand, the design of a dedicated algorithm
for the case of linear Hankel matrices can lay the foundations of a
general approach to e.g. the symmetric case, which is important for
semi-definite programming, i.e. solving linear matrix inequalities.
\paragraph*{Related works and state-of-the-art} Our problem consists
of computing sample points in real algebraic sets. The first algorithm
for this problem is due to Tarski but its complexity was not
elementary recursive \cite{Tarski}. Next, Collins designed the
Cylindrical Algebraic Decomposition algorithm \cite{c-qe-1975}. Its
complexity is doubly exponential in the number of variables which is
far from being optimal since the number of connected components of a
real algebraic set defined by $n$-variate polynomial equations of
degree $\leq d$ is upper bounded by $O(d)^n$. Next, Grigoriev and
Vorobjov \cite{GV88} introduced the first algorithm based on critical
point computations, which computes sample points in real algebraic sets
within $d^{O(n)}$ arithmetic operations. This work has since been
improved and generalized (see \cite{BaPoRo06} and references therein)
from the complexity viewpoint. We may apply these algorithms to our
problem by computing all $(r+1)$-minors of
the Hankel matrix and compute sample points in the real algebraic set
defined by the vanishing of these minors. This is done in time
$(\binom{m}{r+1}\binom{n+r}{r})^{O(1)}+r^{O(n)}$; however, since the
constant in the exponent is rather high, these algorithms did not lead
to efficient implementations in practice. Hence, another series of
works, still using the critical point method but aiming at designing
algorithms that combine asymptotically optimal complexity and
practical efficiency has been developed (see e.g. \cite{BGHSS, SaSc03,
GS14} and references therein).
Under regularity assumptions, these yield probabilistic algorithms
running in time which is essentially $O(d^{3n})$ in the smooth case
and $O(d^{4n})$ in the singular case (see \cite{S05}). Practically,
these algorithms are implemented in the library {\sc RAGlib} which
uses Gr\"obner bases computations (see \cite{faugere2012critical,
Sp14} about the complexity of computing critical points with
Gr\"obner bases).
Observe that determinantal varieties such as ${\mathcal{H}}_r$ are generically
singular (see \cite{bruns1988determinantal}). Also the
aforementioned algorithms do not exploit the structure of the
problem. In \cite{HNS2014}, we introduced an algorithm for computing
real points at which a
{\em generic} linear square matrix of size $m$ has rank $\leq m-1$, by
exploiting the structure of the problem. However, because of the
requested genericity of the input linear matrix, we cannot use it for
linear Hankel matrices. Also, it does not allow one to obtain sample points
for a given, smaller rank deficiency.
\paragraph*{Methodology and main results} Our main result is an
algorithm that computes sample points in each connected component of
${\mathcal{H}}_r \cap \RR^n$ under some genericity assumptions on the entries
of the linear Hankel matrix ${H}$ (these genericity assumptions are
made explicit below). Our algorithm exploits the Hankel structure of
the problem. Essentially, its complexity is quadratic in a multilinear
B\'ezout bound on the number of complex solutions. Moreover, we find
that, heuristically, this bound is less than
${{m}\choose{r+1}}{{n+r}\choose{r}}{{n+m}\choose{r}}$.
Hence, for subfamilies of the real root finding problem on linear
Hankel matrices where the maximum rank allowed $r$ is fixed, the complexity
is essentially in $(nm)^{O(r)}$.
The very basic idea is to study the algebraic set ${\mathcal{H}}_r\subset
\CC^n$ as the Zariski closure of the projection of an incidence
variety, lying in $\CC^{n+r+1}$. This variety encodes the fact that
the kernel of ${H}$ has dimension $\geq m-r$. This lifted variety
turns out to be generically smooth and equidimensional and defined by
quadratic polynomials with multilinear structure. When these
regularity properties are satisfied, we prove that computing one point
per connected component of the incidence variety is sufficient to
solve the same problem for the variety ${\mathcal{H}}_r \cap \RR^n$. We also
prove that these properties are generically satisfied. We remark that
this method is similar to the one used in \cite{HNS2014}, but in this
case it takes strong advantage of the Hankel structure of the linear
matrix, as detailed in Section \ref{sec:prelim}. This also reflects on
the complexity of the algorithm and on practical performances.
Let ${{C}}$ be a connected component of ${\mathcal{H}}_r\cap\RR^n$, and let
$\Pi_1,\pi_1$ be the canonical projections $\Pi_1: (\x_1, \ldots,
\x_n, \y_1, \ldots, \y_{r+1})\to \x_1$ and $\pi_1: (\x_1, \ldots,
\x_n)\to \x_1$. We prove that in generic coordinates, either {\em (i)}
$\pi_1(C) = \RR$ or {\em (ii)} there exists a critical point of the
restriction of $\Pi_1$ to the considered incidence variety. Hence,
after a generic linear change of variables, the algorithm consists of
two main steps: {\em (i)} compute the critical points of the
restriction of $\Pi_1$ to the incidence variety and {\em (ii)}
instantiate the first variable $\X_1$ at a generic value and perform
a recursive call following a geometric pattern introduced in
\cite{SaSc03}.
Step ({\em i}) is performed by building the
Lagrange system associated to the optimization problem whose solutions
are the critical points of the restriction of $\Pi_1$ to the incidence
variety. Hence, we use the algorithm in \cite{jeronimo2009deformation}
to solve it. One also observes heuristically that these Lagrange
to solve it. One also observes heuristically that these Lagrange
systems are typically zero-dimensional.
However, we were not able to prove this finiteness property,
but we
prove that it holds when we restrict the optimization step
to the set
of points $\vecx \in {\mathcal{H}}_r$ such that ${\rm rank} \, {H}(\vecx)
= p$, for
any $0 \leq p \leq r$. Nevertheless, this is sufficient to
conclude that there are finitely many critical points of the
restriction of $\pi_1$ to ${\mathcal{H}}_r \cap \RR^n$, and that the
algorithm returns a correct output.
When the
Lagrange system has dimension $0$, the complexity of
solving
its equations is essentially
quadratic
in the number of its complex solutions.
As previously announced, by the
structure of these systems one can deduce multilinear
B\'ezout bounds on
the number of solutions that are polynomial in $nm$ when
$r$ is fixed, and polynomial in $n$ when $m$ is fixed.
This complexity result outperforms
the
state-of-the-art algorithms.
We finally
remark that the complexity gain is reflected also
in the
first implementation of the algorithm, which allows us
to solve instances of our problem that are out of reach of the
general algorithms implemented in {\sc RAGlib}.
\paragraph*{Structure
of the paper}
The paper
is structured as follows. Section \ref{sec:prelim} contains
preliminaries
about Hankel matrices and the basic notation of the
paper; we
also prove that our regularity assumptions are generic. In
Section
\ref{sec:algo} we describe the algorithm and prove its
correctness.
This is done by using preliminary results proved in
Sections
\ref{sec:dimension} and \ref{sec:closedness}. Section
\ref{ssec:algo:complexity} contains the complexity analysis
and bounds
for the number of complex solutions of the output of the
algorithm.
Finally, Section \ref{sec:exper} presents the results of our
experiments on generic linear Hankel matrices, and comparisons
with the state-of-the-art algorithms for the real root finding
problem.
\section{Notation and preliminaries} \label{sec:prelim}
\paragraph*{Basic notations} \label{ssec:prelim:basic}
We denote by $\GL(n, \QQ)$ (resp. $\GL(n, \CC)$)
the set of $n \times n$ non-singular matrices with rational (resp.
complex) entries. For a matrix $M \in \CC^{m \times m}$ and an integer
$p \leq m$, we denote by $\minors(p,M)$ the list of determinants
of $p \times p$ sub-matrices of $M$. We denote by $M'$ the
transpose matrix of $M$.
Let $\QQ[\vecx]$ be the ring of polynomials on $n$ variables $\vecx = (\X_1, \ldots, \X_n)$
and let $\mathbf{f} = (f_1, \ldots, f_p) \in \QQ[\vecx]^p$ be a polynomial system.
The common zero locus of the entries of $\mathbf{f}$ is denoted by
$\zeroset{\mathbf{f}} \subset \CC^n$, and its dimension with $\dim \, \zeroset{\mathbf{f}}$. The ideal generated by $\mathbf{f}$ is denoted by
$\left\langle \mathbf{f} \right\rangle$, while if $\mathcal{V} \subset \CC^n$ is any set, the
ideal of polynomials vanishing on $\mathcal{V}$ is denoted by $\ideal{\mathcal{V}}$, and the
set of regular (resp. singular) points of $\mathcal{V}$ is denoted by ${\rm reg}\, \, \mathcal{V}$
(resp. ${\rm sing}\, \, \mathcal{V}$). If $\mathbf{f} = (f_1, \ldots, f_p) \subset \QQ[\vecx]$, we denote
by $\jac \mathbf{f} = \left( \partial f_i / \partial \X_j \right)$ the Jacobian matrix
of $\mathbf{f}$. We denote by ${\rm reg}\,(\mathbf{f}) \subset \zeroset{\mathbf{f}}$ the subset where
$\jac \mathbf{f}$ has maximal rank.
A set $\mathcal{E} \subset \CC^n$ is locally closed if $\mathcal{E} = {\mathcal{Z}}
\cap \mathscr{O}$ where ${\mathcal{Z}}$ is a Zariski closed set and $\mathscr{O}$ is a
Zariski open set.
Let $\mathcal{V} = \zeroset{\mathbf{f}} \subset \CC^n$ be a smooth equidimensional algebraic set, of dimension $d$,
and let $\mathbf{g} \colon \CC^n \to \CC^p$ be an algebraic map. The set of critical points of
the restriction of $\mathbf{g}$ to $\mathcal{V}$ is the solution set of $\mathbf{f}$ and of the $(n-d+p)-$minors
of the matrix $\jac (\mathbf{f}, \mathbf{g})$, and it is denoted by ${\rm crit}\,(\mathbf{g}, \mathcal{V})$. Finally, if $\mathcal{E}
\subset \mathcal{V}$ is a locally closed subset of $\mathcal{V}$, we denote by ${\rm crit}\,(\mathbf{g}, \mathcal{E})
= \mathcal{E} \cap {\rm crit}\,(\mathbf{g}, \mathcal{V})$.
Finally, for $M \in \GL(n, \CC)$ and $f \in \QQ[\vecx]$, we denote by $f^M(\vecx) = f(M\,\vecx)$,
and if $\mathbf{f}=(f_1, \ldots, f_p) \subset \QQ[\vecx]$ and $\mathcal{V} = \zeroset{\mathbf{f}}$, by
$\mathcal{V}^M = \zeroset{\mathbf{f}^M}$ where $\mathbf{f}^M = (f^M_1, \ldots, f^M_p)$.
\paragraph*{Hankel structure} \label{ssec:prelim:hanktoep}
Let $\{h_1, \ldots, h_{2m-1}\} \subset \QQ$. The matrix ${H} = (h_{i+j-1})_{1 \leq i,j \leq m}
\in \QQ^{m \times m}$ is called a Hankel matrix,
and we use the notation ${H} = {\sf Hankel}(h_1, \ldots, h_{2m-1})$.
The structure of a Hankel matrix induces structure on its kernel. By
\cite[Theorem 5.1]{heinig1984algebraic}, one has that if ${H}$ is a Hankel
matrix of rank at most $r$, then there exists a non-zero vector $\vecy = (\y_1,
\ldots, \y_{r+1}) \in \QQ^{r+1}$ such that the columns of the $m \times
(m-r)$ matrix
\[
{Y}(\vecy)=
\begin{bmatrix}
\vecy & 0 & \ldots & 0 \\
0 & \vecy & \ddots & \vdots \\
\vdots & \ddots & \ddots & 0 \\
0 & \ldots & 0 & \vecy \\
\end{bmatrix}
\]
generate an $(m-r)$-dimensional subspace of the kernel of ${H}$.
We observe that ${H} \, {Y}(\vecy)$ is also a Hankel matrix.
The product ${H} \, {Y}(\vecy)$ can be re-written as a matrix-vector product
$\tilde{{H}} \, y$, with $\tilde{{H}}$ a given rectangular Hankel matrix.
Indeed, let ${H}={\sf Hankel}(h_1, \ldots, h_{2m-1})$. Then, as previously
observed, ${H} \, {Y}(\vecy)$ is a rectangular Hankel matrix, of size
$m \times (m-r)$, whose entries coincide with the entries of
\[
\tilde{{H}} \, \vecy =
\begin{bmatrix}
h_1 & \ldots & h_{r+1} \\
\vdots & & \vdots \\
h_{2m-r-1} & \ldots & h_{2m-1}
\end{bmatrix}
\begin{bmatrix}
\y_1 \\
\vdots \\
\y_{r+1}
\end{bmatrix}.
\]
Let $H(\vecx)$ be a linear Hankel matrix.
From \cite[Corollary 2.2]{conca1998straightening} we deduce that, for $p \leq r$,
the ideals $\left\langle \minors(p+1,{H}(\vecx)) \right\rangle$ and
$\left\langle\minors(p+1,\tilde{{H}}(\vecx))\right\rangle$ coincide. One deduces that
$\vecx = (\X_1, \ldots, \X_n) \in \CC^n$ satisfies ${\rm rank} \, {H}(\vecx) = p$
if and only if it satisfies ${\rm rank} \, \tilde{{H}}(\vecx) = p$.
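The identity relating $H\,Y(\vecy)$ and $\tilde{H}\vecy$ can be checked numerically. The following Python sketch (the helper names `hankel`, `kernel_block`, etc. are ours, introduced only for this illustration) builds a random Hankel matrix $H$, the structured kernel matrix $Y(\vecy)$ and the rectangular matrix $\tilde{H}$, and verifies that the skew diagonals of $H\,Y(\vecy)$ are exactly the entries of $\tilde{H}\vecy$:

```python
import random

def hankel(h, rows, cols):
    # Hankel matrix with constant skew diagonals: M[i][j] = h[i + j]
    return [[h[i + j] for j in range(cols)] for i in range(rows)]

def kernel_block(y, m, r):
    # The m x (m - r) matrix Y(y): shifted copies of y in each column
    return [[y[i - j] if 0 <= i - j <= r else 0 for j in range(m - r)]
            for i in range(m)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matvec(M, v):
    return [sum(row[k] * v[k] for k in range(len(v))) for row in M]

m, r = 5, 2
h = [random.randint(-9, 9) for _ in range(2 * m - 1)]   # h_1, ..., h_{2m-1}
y = [random.randint(-9, 9) for _ in range(r + 1)]       # y_1, ..., y_{r+1}

H   = hankel(h, m, m)                  # square Hankel matrix H
Ht  = hankel(h, 2 * m - r - 1, r + 1)  # rectangular H-tilde
HY  = matmul(H, kernel_block(y, m, r))
Hty = matvec(Ht, y)

# H * Y(y) is again Hankel, and its skew diagonals are the entries of H-tilde * y
assert all(HY[i][j] == Hty[i + j]
           for i in range(m) for j in range(m - r))
```

The assertion is exactly the computation $(HY)_{i,j} = \sum_s h_{i+j+s}\,y_s = (\tilde{H}\vecy)_{i+j}$ carried out above.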
\paragraph*{Basic sets} \label{ssec:prelim:polsys}
We first recall that the linear matrix ${H}(\vecx) = {H}_0 + \X_1{H}_1 + \ldots + \X_n{H}_n$,
where each $H_i$ is a Hankel matrix, is also a Hankel matrix. It is identified
by the $(2m-1)(n+1)$ entries of the matrices ${H}_i$. Hence we often consider ${H}$ as an element of
$\CC^{(2m-1)(n+1)}$. For $M \in \GL(n, \QQ)$, we denote by ${H}^M(\vecx)$ the linear
matrix ${H}(M \vecx)$.
We define in the following the main algebraic sets appearing during
the execution of our algorithm, given ${H} \in \CC^{(2m-1)(n+1)}$,
$0 \leq p \leq r$, $M \in \GL(n, \CC)$ and $\u = (u_1, \ldots, u_{p+1}) \in \QQ^{p+1}$.
{\it Incidence varieties.} We consider the polynomial system
\[
\begin{array}{lccl}
\mathbf{f}({H}^M,\u,p): & \CC^{n} \times \CC^{p+1} & \longrightarrow & \CC^{2m-p-1} \times \CC \\
& (\vecx, \vecy) & \longmapsto &
\left ((\tilde{H}(M \, \vecx)\, \vecy)', \u'\vecy-1 \right )
\end{array}
\]
where $\tilde{H}$ has been defined in the previous section.
We define ${\incidence}({H}^M, \u, p) = \zeroset{\mathbf{f}({H}^M,\u,p)} \subset \CC^{n+p+1}$, and write simply
${\incidence}={\incidence}({H}^M,\u,p)$ and $\mathbf{f}=\mathbf{f}({H}^M, \u, p)$ when $p, {H}, M$ and $\u$ are clear from the context.
We also denote by $\incidencereg({H}^M, \u, p) = \incidence({H}^M,\u, p) \cap
\{(\vecx, \vecy)\in \CC^{n+p+1} : {\rm rank} \,{H}(\vecx)=p\}.$
{\it Fibers.} Let ${\alpha} \in \QQ$. We denote by
$\mathbf{f}_{{\alpha}}({H}^M, \u, p)$ (or simply $\mathbf{f}_{{\alpha}}$) the polynomial system obtained
by adding $\X_1-{\alpha}$ to $\mathbf{f}({H}^M, \u, p)$. The resulting algebraic set
$\zeroset{\mathbf{f}_{{\alpha}}}$, denoted by ${\incidence}_{{\alpha}}$, equals ${\incidence} \cap \zeroset{\X_1-{\alpha}}$.
{\it Lagrange systems.}
Let $\v \in \QQ^{2m-p}$. Let $\jac_1\mathbf{f}$ denote the matrix of size $c \times (n+p)$,
with $c = 2m-p$, obtained by removing the first column of $\jac \mathbf{f}$ (the derivative
w.r.t. $\X_1$), and define $\lagrange=\lagrange({H}^M, \u, \v, p)$ as the map
\[
\begin{array}{lrcl}
\lagrange : & \CC^{n+2m+1} & \to & \CC^{n+2m+1} \\
& (\vecx,\vecy,\vecz) & \mapsto & (\tilde{{H}}(M \, \vecx) \, \vecy, \u'\vecy-1, \vecz'\jac_1\mathbf{f}, \v'\vecz-1)
\end{array}
\]
where $\vecz=(\z_1, \ldots, \z_{2m-p})$ is the vector of Lagrange multipliers. We
finally define ${\mathcal{Z}}({H}^M, \u, \v, p) = \zeroset{\lagrange({H}^M, \u, \v, p)} \subset
\CC^{n+2m+1}.$
\paragraph*{Regularity property $\sfG$} \label{ssec:prelim:regul}
We say that a polynomial system $\mathbf{f} \in \QQ[\vecx]^c$ satisfies Property $\sfG$ if
the Jacobian matrix $\jac \, \mathbf{f}$ has maximal rank at any point of $\zeroset{\mathbf{f}}$.
We remark that this implies that:
\begin{enumerate}
\item the ideal $\ideal{\mathbf{f}}$ is radical;
\item the set $\zeroset{\mathbf{f}}$ is either empty or smooth and equidimensional of co-dimension $c$.
\end{enumerate}
We say that ${\lagrange}({H}^M, \u, \v, p)$ satisfies $\sfG$ over
$\incidencereg(H^M, \u, p)$ if the following holds: for $(\vecx,\vecy,\vecz) \in
{\mathcal{Z}}({H}^M, \u, \v, p)$ such that $(\vecx,\vecy) \in \incidencereg(H^M, \u, p)$,
the matrix $\jac({\lagrange}({H}^M, \u, \v, p))$ has maximal rank at $(\vecx,\vecy,\vecz)$.
Let $\u \in \QQ^{p+1}$. We say that ${H} \in \CC^{(2m-1)(n+1)}$ satisfies Property
$\sfG$ if $\mathbf{f}({H}, \u, p)$ satisfies Property $\sfG$ for all $0 \leq p \leq r$.
The first result essentially shows that $\sfG$ holds
for $\mathbf{f}({H}^M, \u, p)$ (resp. $\mathbf{f}_{\alpha}({H}^M, \u, p)$) when the
input parameter ${H}$ (resp. ${\alpha}$) is generic enough.
\begin{proposition} \label{prop:regularity}
Let $M \in \GL(n,\CC)$.
\begin{itemize}
\item[(a)] There exists a non-empty Zariski-open set ${\mathscr{H}} \subset
\CC^{(2m-1)(n+1)}$ such that, if ${H} \in {\mathscr{H}} \cap
\QQ^{(2m-1)(n+1)}$, for all $0 \leq p \leq r$ and $\u \in \QQ^{p+1}-\{\mathbf{0}\}$,
$\mathbf{f}({H}^M, \u, p)$ satisfies Property
$\sfG$;
\item[(b)] For ${H} \in {\mathscr{H}}$ and $0 \leq p \leq r$, if ${\incidence}({H}^M, \u, p) \neq \emptyset$
then $\dim \, {\mathcal{H}}_p \leq n-2m+2p+1$;
\item[(c)] For $0 \leq p \leq r$ and $\u \in \QQ^{p+1}$, if $\mathbf{f}({H}^M, \u, p)$ satisfies
$\sfG$, there exists a non-empty Zariski open set ${\mathscr{A}} \subset \CC$ such
that, if ${\alpha} \in {\mathscr{A}}$, the polynomial system $\mathbf{f}_{{\alpha}}$
satisfies $\sfG$.
\end{itemize}
\end{proposition}
\proof
Without loss of generality, we can assume that $M = {\rm I}_n$. We let $0
\leq p \leq r$, $\u \in \QQ^{p+1}-\{\mathbf{0}\}$ and recall that we identify the
space of linear Hankel matrices with $\CC^{(2m-1)(n+1)}$. This space is
endowed with the coordinates $\mathfrak{h}_{k,\ell}$, with $1\leq k \leq 2m-1$
and $0\leq \ell \leq n$; the generic linear Hankel matrix is then
given by
$\mathfrak{H}=\mathfrak{H}_0+\X_1\mathfrak{H}_1+\cdots+\X_n\mathfrak{H}_n$
with $\mathfrak{H}_i={\sf Hankel}(\mathfrak{h}_{1,i}, \ldots, \mathfrak{h}_{2m-1, i})$.
We consider the map
\[
\begin{array}{lccc}
q : & \CC^{n+(p+1)+(2m-1)(n+1)} & \longrightarrow & \CC^{2m-p} \\
& (\vecx, \vecy, {H}) & \longmapsto& \mathbf{f}({H}, \u, p)
\end{array}
\]
and, for a given ${H} \in \CC^{(2m-1)(n+1)}$, its section-map $q_{H}
\colon \CC^{n+(p+1)} \to \CC^{2m-p}$ sending $(\vecx,\vecy)$ to
$q(\vecx,\vecy,{H})$. We also consider the map $\tilde{q}$ which associates
to $(\vecx, \vecy, {H})$ the entries of $\tilde{H}\vecy$ and its section map
$\tilde{q}_H$; we will consider these latter maps over the open set
$O=\{(\vecx, \vecy)\in \CC^{n+p+1}\mid \vecy\neq \mathbf{0}\}$. We prove below
that $\mathbf{0}$ is a regular value for both $q_H$ and $\tilde{q}_H$.
Suppose first that $q^{-1}(\mathbf{0}) = \emptyset$
(resp. $\tilde{q}^{-1}(\mathbf{0})$). We deduce that for all ${H} \in
\CC^{(2m-1)(n+1)}$, $q_H^{-1}(\mathbf{0}) = \emptyset$ (resp
$\tilde{q}_H^{-1}(\mathbf{0}) = \emptyset$) and $\mathbf{0}$ is a
regular value for both maps $q_H$ and $\tilde{q}_H$. Note also that
taking ${\mathscr{H}} = \CC^{(2m-1)(n+1)}$, we deduce that $\mathbf{f}({H}, \u, p)$
satisfies $\sfG$.
Now, suppose that $q^{-1}(\mathbf{0})$ is not empty and let
$(\vecx,\vecy,{H}) \in q^{-1}(0)$. Consider the Jacobian matrix $\jac q$ of
the map $q$ with respect to the variables $\vecx,\vecy$ and the entries of ${H}$,
evaluated at $(\vecx,\vecy,{H})$. We consider the submatrix of $\jac q$
by selecting the column corresponding to:
\begin{itemize}
\item the partial derivatives with respect to $\mathfrak{h}_{1, 0}, \ldots,
\mathfrak{h}_{2m-1, 0}$;
\item the partial derivatives with respect to $\Y_1, \ldots,
\Y_{p+1}$.
\end{itemize}
We obtain a $(2m-p) \times (2m+p)$ submatrix of $\jac q$; we prove below
that it has full rank $2m-p$.
Indeed, remark that the first $2m-p-1$ rows correspond to the entries
of $\tilde{{H}}\vecy$ and the last row corresponds to the derivatives of
$\u'\vecy-1$. Hence, the structure of this submatrix is as below
\[
\begin{bmatrix}
\y_1 & \ldots & \y_{p+1} & 0 & \ldots & 0 &0& \cdots & 0 \\
0 & \y_1 & \ldots & \y_{p+1} & \ldots & 0 & & \\
\vdots & & \ddots & & \ddots & & \vdots & & \vdots\\
\vdots & & &\y_1 & \ldots & \y_{p+1}& 0 & & 0\\
0 & & \cdots & & \cdots & 0 & u_1 & \cdots & u_{p+1}\\
\end{bmatrix}
\]
Since this matrix is evaluated at the solution set of $\u'\vecy-1=0$,
at least one entry of $\u$ and one entry of $\vecy$ are non-zero;
hence the above matrix has full rank, and $\mathbf{0}$ is a regular
value of the map $q$.
We can do the same for $\jac \tilde{q}$, except that we do not
consider the partial derivatives with respect to $\Y_1, \ldots,
\Y_{p+1}$. The $(2m-p-1) \times (2m-1)$ submatrix we obtain corresponds
to the upper left block containing the entries of $\vecy$. Since
$\tilde{q}$ is defined over the open set $O$ in which $\vecy\neq
\mathbf{0}$, we also deduce that this submatrix has full rank
$2m-p-1$.
By Thom's Weak Transversality Theorem one deduces that there exists a
non-empty Zariski open set ${\mathscr{H}}_p \subset \CC^{(2m-1)(n+1)}$ such
that if ${H} \in {\mathscr{H}}_p$, then $\mathbf{0}$ is a regular value of
$q_{H}$ (resp. $\tilde{q}_{H}$). We deduce that for ${H} \in {\mathscr{H}}_p$, the
polynomial system $\mathbf{f}({H}, \u, p)$ satisfies $\sfG$ and, by the
Jacobian criterion \cite[Theorem 16.19]{Eisenbud95},
$\incidence({H}, \u, p)$ is either empty or smooth and equidimensional of
dimension $n-2m+2p+1$. This proves assertion (a), with ${\mathscr{H}} =
\bigcap_{0 \leq p \leq r}{\mathscr{H}}_p$.
Similarly, we deduce that $\tilde{q}_{H}^{-1}(\mathbf{0})$ is either
empty or smooth and equidimensional of dimension $n-2m+2p+2$. Let
$\Pi_\vecx$ be the canonical projection $(\vecx, \vecy)\to \vecx$;
note that for
any $\vecx \in {\mathcal{H}}_p$, the dimension of $\Pi_\vecx^{-1}(\vecx)\cap
\tilde{q}_H^{-1}(\mathbf{0})$ is $\geq 1$ (by homogeneity
of the $\vecy$-variables). By the Theorem on the Dimension of Fibers
\cite[Sect.6.3,Theorem 7]{Shafarevich77}, we deduce that
$n-2m+2p+2-\dim({\mathcal{H}}_p) \geq 1$. We deduce that for
${H} \in {\mathscr{H}}$, $\dim({\mathcal{H}}_p) \leq n-2m+2p+1$ which proves assertion
(b).
It remains to prove assertion (c). We assume that
$\mathbf{f}({H}, \u, p)$ satisfies
$\sfG$. Consider the restriction of the map $\Pi_1 \colon \CC^{n+p+1}
\to \CC$, $\Pi_1(\vecx,\vecy)=\x_1$, to ${\incidence}({H}, \u, p)$, which is smooth
and equidimensional by assertion (a).
By Sard's Lemma \cite[Section 4.2]{SaSc13}, the set of critical values
of the restriction of $\Pi_1$ to ${\incidence}({H}, \u, p)$ is finite. Hence,
its complement ${\mathscr{A}} \subset \CC$ is a non-empty Zariski open
set. We deduce that for ${\alpha} \in {\mathscr{A}}$, the polynomial system
$\mathbf{f}_\alpha({H}, \u, p)$ satisfies $\sfG$.
\hfill$\square$
\section{Algorithm and correctness} \label{sec:algo}
In this section we present the algorithm, which is called {\sf LowRank\-Hankel},
and prove its correctness.
\subsection{Description} \label{ssec:algo:desc}
\paragraph*{Data representation}
The algorithm takes as {\it input} a couple $({H}, r)$, where ${H} = ({H}_0, {H}_1, \ldots,
{H}_n)$ encodes $m \times m$ Hankel matrices with entries in $\QQ$, defining the
linear matrix ${H}(\vecx)$, and $0 \leq r \leq m-1$.
The {\it output} is represented by a rational parametrization, that is a polynomial system
\[
\mathbf{q} = (q_0({t}), q_1({t}), \ldots, q_n({t}), q({t})) \subset \QQ[{t}]
\]
of univariate polynomials, with $\gcd(q,q_0)=1$. The set of solutions of
\[
\X_i-q_i({t})/q_0({t}) = 0, \,\,i=1, \ldots, n \qquad q({t})=0
\]
is clearly finite and expected to contain at least one point per connected component
of the algebraic set ${\mathcal{H}}_r \cap \RR^n$.
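To illustrate how such an output is decoded: for each root $t$ of $q$, the point $(q_1(t)/q_0(t), \ldots, q_n(t)/q_0(t))$ is a sample point. A toy Python sketch with $n = 2$ and made-up polynomials (not the actual output of the algorithm):

```python
import math

def horner(coeffs, t):
    # Evaluate a univariate polynomial given by [c_d, ..., c_0] at t
    value = 0.0
    for c in coeffs:
        value = value * t + c
    return value

# Made-up parametrization: q(t) = t^2 - 2, q0(t) = t, q1(t) = t^2, q2(t) = 2t;
# note gcd(q, q0) = 1, as required.
q, q0 = [1.0, 0.0, -2.0], [1.0, 0.0]
q1, q2 = [1.0, 0.0, 0.0], [2.0, 0.0]

points = []
for t in (math.sqrt(2.0), -math.sqrt(2.0)):        # the two roots of q
    assert abs(horner(q, t)) < 1e-9
    points.append((horner(q1, t) / horner(q0, t),  # x1 = t
                   horner(q2, t) / horner(q0, t))) # x2 = 2
# points approximates [(sqrt(2), 2), (-sqrt(2), 2)]
```

In the algorithm's actual output, the roots of $q$ would be isolated with certified univariate solving rather than taken in closed form.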
\paragraph*{Main subroutines and formal description}
We start by describing the main subroutines we use.
\noindent {\sf ZeroDimSolve}. It takes as input a polynomial system
defining an algebraic set ${\mathcal{Z}}\subset \CC^{n+k}$ and a subset of
variables $\vecx=(\X_1, \ldots, \X_n)$.
If ${\mathcal{Z}}$ is
finite, it returns a rational parametrization of the projection of
${\mathcal{Z}}$ on the $\vecx$-space; otherwise it returns an empty list.
\noindent {\sf ZeroDimSolveMaxRank}. It takes as input a polynomial
system $\mathbf{f}=(f_1, \ldots, f_c)$ such that ${\mathcal{Z}}=\{\vecx \in
\CC^{n+k} \mid {\sf rank}(\jac\mathbf{f}(\vecx))=c\}$ is finite, and a
subset of variables $\vecx=(\X_1, \ldots, \X_n)$. It
returns {\sf fail: the assumptions are not satisfied} if the assumptions
are not satisfied; otherwise it returns a rational parametrization of the
projection of ${\mathcal{Z}}$ on the $\vecx$-space.
\noindent {\sf Lift}. It takes as input a rational parametrization of
a finite set ${\mathcal{Z}} \subset \CC^N$ and a number ${\alpha} \in \CC$, and
it returns a rational parametrization of $\{({\alpha}, \bfx) \, : \,
\bfx \in {\mathcal{Z}}\}$.
\noindent {\sf Union}. It takes as input two rational parametrizations
encoding finite sets ${\mathcal{Z}}_1, {\mathcal{Z}}_2$ and it returns a rational
parametrization of ${\mathcal{Z}}_1 \cup {\mathcal{Z}}_2$.
\noindent {\sf ChangeVariables}. It takes as input a rational
parametrization of a finite set ${\mathcal{Z}} \subset \CC^N$ and a
non-singular matrix $M \in \GL(N, \CC)$. It returns a rational
parametrization of ${\mathcal{Z}}^M$.
The algorithm {\sf LowRankHankel} is recursive, and it
assumes that its input ${H}$ satisfies Property $\sfG$.
${\sf LowRankHankel}({H},r)$:
\begin{enumerate}
\item \label{step:rec:1} If $n < 2m-2r-1$ then return $[\,]$.
\item\label{step:rec:choice1} Choose randomly $M\in \GL(n, \QQ)$, ${\alpha} \in \QQ$ and
$\u_p \in \QQ^{p+1}$, $\v_p \in \QQ^{2m-p}$ for $0\leq p \leq r$.
\item \label{step:rec:2} If $n = 2m-2r-1$ then return ${\sf ZeroDimSolve}(\mathbf{f}({H},\u_r, r), \vecx))$.
\item Let ${\mathsf{P}}={\sf ZeroDimSolve}({\lagrange}({H}^M, \u_r, \v_r, r), \vecx)$
\item \label{step:rec:3} If ${\mathsf{P}}=[\,]$ then for $p$ from 0 to $r$ do
\begin{enumerate}
\item ${\mathsf{P}}'={\sf ZeroDimSolveMaxRank}({\lagrange}({H}^M, \u_p, \v_p, p), \vecx)$;
\item ${\mathsf{P}} = {\sf Union}({\mathsf{P}}, {\mathsf{P}}')$
\end{enumerate}
\item \label{step:rec:5} ${\mathsf{Q}}={\sf Lift}({\sf LowRankHankel}({\sf Subs}(\X_1={\alpha}, {H}^M),r), {\alpha})$;
\item \label{step:rec:6} return({\sf ChangeVariables}({\sf Union}(${\mathsf{Q}}, {\mathsf{P}}$), $M^{-1}$)).
\end{enumerate}
\subsection{Correctness} \label{ssec:algo:corr} \label{sssec:algo:prelimresult}
The correctness proof is based on the two following results that are
proved in Sections \ref{sec:dimension} and \ref{sec:closedness}.
The first result states that if the input matrix ${H}$ satisfies
$\sfG$ then, for a generic choice of $M$ and $\v$ and for all
$0 \leq p \leq r$, the set of solutions $(\vecx, \vecy, \vecz)$ of $\lagrange({H}^M, \u,
\v, p)$ at which ${\rm rank}\,\tilde{{H}}(\vecx)=p$ is finite and its projection contains
${\rm crit}\,(\Pi_1, \incidencereg({H}^M,\u, p))$.
\begin{proposition} \label{prop:dimension}\label{PROP:DIMENSION} Let
${\mathscr{H}}$ be the set defined in Proposition \ref{prop:regularity} and
let ${H} \in {\mathscr{H}}$ and $\u \in \QQ^{p+1}-\{\mathbf{0}\}$ for $0 \leq p
\leq r$. There exist non-empty Zariski open sets ${\mathscr{M}}_1 \subset
\GL(n,\CC)$ and ${\mathscr{V}} \subset \CC^{2m-p}$ such that if $M \in
{\mathscr{M}}_1 \cap \QQ^{n \times n}$ and $\v \in {\mathscr{V}} \cap \QQ^{2m-p}$,
the following holds:
\begin{itemize}
\item[(a)] ${\lagrange}({H}^M, \u, \v, p)$ satisfies $\sfG$ over
$\incidencereg(H^M, u, p)$;
\item[(b)] the projection of ${\rm reg}\,({\lagrange}({H}^M, \u, \v, p))$ on the
$(\vecx,\vecy)$-spa\-ce contains ${\rm crit}\,(\Pi_1, \incidencereg({H}^M, \u,
p))$
\end{itemize}
\end{proposition}
\begin{proposition}\label{prop:closedness}\label{PROP:CLOSEDNESS}
Let ${H} \in {\mathscr{H}}$, $0\leq p\leq r$, $d_p = n-2m+2p+1$, and let
${\mathcal{C}}$ be a connected component of ${\mathcal{H}}_p \cap \RR^n$. Then
there exist non-empty Zariski open sets ${\mathscr{M}}_2 \subset \GL(n,\CC)$
and ${\mathscr{U}} \subset \CC^{p+1}$ such that for any $M \in {\mathscr{M}}_2 \cap
\QQ^{n \times n}$, $\u \in {\mathscr{U}} \cap \QQ^{p+1}$, the following
holds:
\begin{itemize}
\item[(a)] for $i = 1, \ldots, d_p$, $\pi_i({\mathcal{C}}^M)$ is closed;
\item[(b)] for any ${\alpha} \in \RR$ in the boundary of
$\pi_1({\mathcal{C}}^M)$, $\pi_1^{-1}({\alpha}) \cap {\mathcal{C}}^M$
is finite;
\item[(c)] for any $\vecx\in \pi_1^{-1}({\alpha}) \cap {\mathcal{C}}^M$ and $p$
such that ${\rm rank} \, \tilde{{H}}(\vecx)=p$, there exists $\vecy \in
\RR^{p+1}$ such that $(\vecx,\vecy) \in {\incidence}({H}^M, \u,
p)$.
\end{itemize}
\end{proposition}
Our algorithm is probabilistic and its correctness depends on the
validity of the choices made at Step
\ref{step:rec:choice1}. We formalize this assumption
below.
We need to distinguish the choices of $M, \u$ and $\v$ that are made
in the different calls of {\sf LowRankHankel}; each of these parameters
must lie in a non-empty Zariski open set defined in Propositions
\ref{prop:regularity}, \ref{prop:dimension} and \ref{prop:closedness}.
We assume that the input matrix ${H}$ satisfies $\sfG$; we denote it by
${H}^{(0)}$, where the superscript indicates that no recursive call has been
made on this input; similarly ${\alpha}^{(0)}$ denotes the choice of
${\alpha}$ made at Step \ref{step:rec:choice1} on input
${H}^{(0)}$. Next, we denote by ${H}^{(i)}$ the input of {\sf
LowRankHankel} at the $i$-th recursive call and by
${\mathscr{A}}^{(i)}\subset \CC$ the non-empty Zariski open set defined in
Proposition \ref{prop:regularity} applied to $H^{(i)}$. Note that if
${\alpha}^{(i)}\in {\mathscr{A}}^{(i)}$, we can deduce that ${H}^{(i+1)}$
satisfies $\sfG$.
Now, we denote by ${\mathscr{M}}_1^{(i)},{\mathscr{M}}_2^{(i)}$ and
${\mathscr{U}}^{(p,i)},{\mathscr{V}}^{(p,i)}$ the open sets defined in Propositions
\ref{prop:regularity}, \ref{prop:dimension} and \ref{prop:closedness}
applied to $H^{(i)}$, for $0 \leq p \leq r$ and where $i$ is the depth
of the recursion.
Finally, we denote by $M^{(i)} \in \GL(n, \QQ)$, $\u^{(i)}_p\in
\QQ^{p+1}$ and $\v^{(i)}_p$, for $0 \leq p \leq r$, respectively the
matrix and the vectors chosen at Step \ref{step:rec:choice1} of the
$i$-th call of ${\sf LowRankHankel}$.
\noindent {\bf Assumption $\sfH$}. We say that $\sfH$
is satisfied if $M^{(i)}$, ${\alpha}^{(i)}$, $\u^{(i)}_p$ and $\v^{(i)}_p$
satisfy:
\begin{itemize}
\item $M^{(i)} \in ({\mathscr{M}}_1^{(i)} \cap
{\mathscr{M}}_2^{(i)}) \cap \QQ^{i \times i}$;
\item ${\alpha}^{(i)} \in {\mathscr{A}}^{(i)}$;
\item $\u^{(i)}_p \in {\mathscr{U}}^{(p,i)} \cap
\QQ^{p+1}-\{\mathbf{0}\}$, for $0 \leq p \leq r$;
\item $\v_p^{(i)} \in {\mathscr{V}}^{(p,i)} \cap \QQ^{2m-p}-\{\mathbf{0}\}$ for $0 \leq p \leq r$;
\end{itemize}
\begin{theorem}
Let ${H}$ satisfy $\sfG$. Then, if
$\sfH$ is satisfied, ${\sf LowRankHankel}$ with input $({H}, r)$, returns a
rational para\-met\-rization that encodes a finite algebraic set in ${\mathcal{H}}_r$ meeting
each connected component of ${\mathcal{H}}_r \cap \RR^n$.
\end{theorem}
\begin{proof}
The proof is by decreasing induction on the depth of the recursion.
When $n<2m-2r-1$, ${\mathcal{H}}_r$ is empty because the input ${H}$ satisfies
$\sfG$ (recall that $\sfH$ holds). In this case, the output defines the empty set.
When $n=2m-2r-1$, since $\sfH$ is satisfied, by Proposition \ref{PROP:REGULARITY},
either ${\mathcal{H}}_r = \emptyset$ or $\dim\,{\mathcal{H}}_r = 0$. If ${\mathcal{H}}_r = \emptyset$,
then ${\incidence}_r = \emptyset$, since the projection of ${\incidence}_r$ on
the $\vecx$-space is included in ${\mathcal{H}}_r$. Suppose now that $\dim {\mathcal{H}}_r
= 0$: Proposition \ref{prop:closedness} guarantees that the output
of the algorithm defines a finite set containing ${\mathcal{H}}_r$.
Now, we assume that $n>2m-2r-1$; our induction assumption is that, for
any $i\geq 1$, ${\sf LowRankHankel}({H}^{(i)}, r)$ returns a rational
parametrization that encodes a finite set of points in the algebraic
set defined by ${\sf rank}({H}^{(i)})\leq r$ and that meets every
connected component of its real trace.
Let ${{C}}$ be a connected component of ${\mathcal{H}}_r \cap \RR^n$. To keep
notations simple, we denote by $M \in \GL(n, \QQ)$, $\u_p$ and $\v_p$
the matrix and vectors chosen at Step \ref{step:rec:choice1} for
$0\leq p \leq r$. Since $\sfH$ holds one can apply Proposition
\ref{prop:closedness}. We deduce that the image $\pi_1({{C}}^M)$ is
closed. Then, either $\pi_1({{C}}^M) = \RR$ or it is a closed interval.
Suppose first that $\pi_1({{C}}^M) = \RR$. Then for ${\alpha} \in \QQ$
chosen at Step \ref{step:rec:choice1}, $\pi_1^{-1}({\alpha}) \cap
{{C}}^M \neq \emptyset$. Remark that $\pi_1^{-1}({\alpha}) \cap {{C}}^M$ is the
union of some connected components of ${\mathcal{H}}^{(1)}_r \cap \RR^{n-1} =
\{\vecx=(\x_2, \ldots, \x_n) \in \RR^{n-1} : {\rm rank} \, {H}^{(1)} (\vecx) \leq
r\}$. Since $\sfH$ holds, assertion (c) of Proposition
\ref{prop:regularity} implies that ${H}^{(1)}$ satisfies $\sfG$. We
deduce by the induction assumption that the parametrization returned
by Step \ref{step:rec:5} where ${\sf LowRankHankel}$ is called
recursively defines a finite set of points that is contained in
${\mathcal{H}}_r$ and that meets ${{C}}$.
Suppose now that $\pi_1({{C}}^M) \neq \RR$. By Proposition
\ref{prop:closedness}, $\pi_1({{C}}^M)$ is closed. Since ${{C}}^M$ is
connected, $\pi_1({{C}}^M)$ is a connected interval, and since
$\pi_1({{C}}^M) \neq \RR$ there exists $\beta$ in the boundary of
$\pi_1({{C}}^M)$ such that $\pi_1({{C}}^M) \subset [\beta, +\infty)$ or
$\pi_1({{C}}^M) \subset (-\infty, \beta]$. Suppose without loss of
generality that $\pi_1({{C}}^M) \subset [\beta, +\infty)$, so that
$\beta$ is the minimum value attained by $\pi_1$ on ${{C}}^M$.
Let $\vecx=(\beta, \x_2, \ldots, \x_n) \in {{C}}^M$, and suppose that
${\rm rank} (\tilde{{H}}(\vecx)) = p$. By Proposition \ref{prop:closedness} (assertion (c)),
there exists $\vecy \in \CC^{p+1}$ such that $(\vecx,\vecy) \in
\incidence(H, \u, p)$. Note that since ${\rm rank} (\tilde{{H}}(\vecx)) = p$, we also deduce that
$(\vecx,\vecy) \in
\incidencereg(H, \u, p)$.
We claim that there exists $\vecz \in \CC^{2m-p}$ such that $(\vecx,\vecy,\vecz)$
lies on ${\rm reg}\,({\lagrange}({H}^M, \u, \v, p))$.
Since $\sfH$ holds, Proposition \ref{prop:dimension} implies that
${\lagrange}({H}^M, \u, \v, p)$ satisfies $\sfG$ over $\incidencereg(H^M,
\u, p)$. Also, note that the Jacobian criterion implies that
${\rm reg}\,({\lagrange}({H}^M, \u, \v, p))$ has dimension at most $0$.
We conclude that the point $\vecx \in {{C}}^M$ lies on the finite set
encoded by the rational parametrization {\sf P} obtained at Step
\ref{step:rec:3} of ${\sf LowRankHankel}$ and we are done.
It remains to prove our claim, i.e. there exists $\vecz \in \CC^{2m-p}$
such that $(\vecx,\vecy,\vecz)$ lies on ${\rm reg}\,({\lagrange}({H}^M, \u, \v, p))$.
Let ${{C}}'$ be the connected component of ${\incidence}({H}, \u, p)^M \cap
\RR^{n+p+1}$ containing $(\vecx,\vecy)$. We first prove that $\beta =
\pi_1(\vecx,\vecy)$ lies on the boundary of $\pi_1({{C}}')$. Indeed, suppose
that there exists $(\widetilde{\vecx},\widetilde{\vecy}) \in {{C}}'$ such that
$\pi_1(\widetilde{\vecx},\widetilde{\vecy}) < \beta$. Since ${{C}}'$ is
connected, there exists a continuous semi-algebraic map $\tau \colon
[0,1] \to {{C}}'$ with $\tau(0) = (\vecx,\vecy)$ and $\tau(1) =
(\widetilde{\vecx},\widetilde{\vecy})$. Let $\varphi: (\vecx, \vecy)\to \vecx$ be the
canonical projection on the $\vecx$-space.
Note that $\varphi \circ \tau$ is also continuous and semi-algebraic
(it is the composition of continuous semi-algebraic maps), with
$(\varphi \circ \tau)(0)=\vecx$, $(\varphi \circ \tau)(1)
=\widetilde{\vecx}$. Since $(\varphi \circ \tau)(\theta) \in {\mathcal{H}}_p$ for
all $\theta \in [0,1]$, then $\widetilde{\vecx} \in {{C}}$. Since
$\pi_1(\widetilde{\vecx}) = \pi_1(\widetilde{\vecx}, \widetilde{\vecy}) <
\beta$ we obtain a contradiction. So $\pi_1(\vecx,\vecy)$ lies on the
boundary of $\pi_1({{C}}')$.
By the Implicit Function Theorem, and the fact that $\mathbf{f}({H}, \u, p)$
satisfies Property ${\sfG}$, one deduces that $(\vecx,\vecy)$ is a critical
point of the restriction of $\Pi_1: (\x_1, \ldots, \x_n, \y_1, \ldots,
\y_{p+1})\to \x_1$ to ${\incidence}({H}, \u, p)$.
Since ${\sf rank}({H}^M(\vecx))=p$ by construction, we deduce that
$(\vecx,\vecy)$ is a critical point of the restriction of $\Pi_1$ to
$\incidencereg({H}^M, \u, p)$ and that, by Proposition
\ref{prop:dimension}, there exists $\vecz \in \CC^{2m-p}$ such
that $(\vecx,\vecy,\vecz)$ belongs to the set ${\rm reg}\,({\lagrange}({H}^M, \u, \v, p))$, as claimed.
\end{proof}
\section{Degree bounds and complexity} \label{ssec:algo:complexity}
We first remark that the complexities of the subroutines ${\sf Union}$,
${\sf Lift}$ and ${\sf ChangeVariables}$ (see \cite[Chap. 10]{SaSc13})
are negligible with respect to the complexity of ${\sf
ZeroDimSolveMaxRank}$.
Hence, the complexity of ${\sf LowRankHankel}$ is at most $n$ times
the complexity of ${\sf ZeroDimSolveMaxRank}$, which is computed
below.
Let $({H},r)$ be the input, and let $0 \leq p \leq r$.
We estimate the complexity of ${\sf ZeroDimSolveMaxRank}$ with input
$({H}^M, \u_p, \v_p)$. It depends on the algorithm used to solve
zero-dimensional polynomial systems. We choose the one of
\cite{jeronimo2009deformation} that can be seen as a symbolic homotopy
taking into account the sparsity structure of the system to solve.
More precisely, let $\mathbf{p}\subset \QQ[x_1, \ldots, x_n]$ and
$s\in \QQ[x_1, \ldots, x_n]$ be such that the set of common complex solutions
of the polynomials in $\mathbf{p}$ at which $s$ does not vanish is finite.
The algorithm in \cite{jeronimo2009deformation} builds a system
$\mathbf{q}$ that has the same monomial structure as $\mathbf{p}$
and defines a finite algebraic set. Next, the homotopy system
$\mathbf{t} = t\mathbf{p}+(1-t)\mathbf{q}$ where $t$ is a new
variable is built. The system $\mathbf{t}$ defines a $1$-dimensional
constructible set over the open set defined by $s\neq 0$ and for
generic values of $t$. Abusing notation, we denote by $Z(\mathbf{t})$
the curve defined as the Zariski closure of this constructible set.
Starting from the solutions
of $\mathbf{q}$ which are encoded with a rational parametrization, the
algorithm builds a rational parametrization for the solutions of
$\mathbf{p}$ which do not cancel $s$. Following
\cite{jeronimo2009deformation}, the algorithm runs in time
$\ensuremath{{O}{\,\tilde{ }\,}}(Ln^{O(1)} \delta \delta')$ where $L$ is the complexity of
evaluating the input, $\delta$ is a bound on the number of isolated
solutions of $\mathbf{p}$ and $\delta'$ is a bound on the degree of
the curve $Z(\mathbf{t})$.
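As an illustration of the homotopy principle only (the algorithm of
\cite{jeronimo2009deformation} is symbolic and exact, whereas the sketch below is a
numerical analogue with toy systems of our choosing), one can track a root of a start
system $q$ to a root of the target $p$ along $t\,p+(1-t)\,q$:

```python
# Toy numerical analogue of the homotopy t*p + (1-t)*q: track the root
# x = 1 of the start system q(x) = x^2 - 1 to a root of the target
# p(x) = x^2 - 2 by stepping t from 0 to 1 and refining with Newton.
def p(x): return x * x - 2.0
def q(x): return x * x - 1.0

def track(steps=100, newton_iters=5):
    x = 1.0                                  # known root of q
    for k in range(1, steps + 1):
        t = k / steps
        for _ in range(newton_iters):        # Newton on h = t*p + (1-t)*q
            h = t * p(x) + (1 - t) * q(x)
            dh = 2 * x                       # dh/dx, the same for p and q
            x -= h / dh
    return x                                 # close to sqrt(2)
```

Here the homotopy curve is $x^2 = 1+t$, so the tracked path ends at $\sqrt{2}$ when $t=1$.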
Below, we estimate these degrees when the input is a Lagrange system
as the ones we consider.
\noindent
{\bf Degree bounds.} We let $((\tilde{{H}}
\,\vecy)',{\u_p}'\vecy-1)$, with $\vecy = (\Y_1, \ldots, \Y_{p+1})'$, be the
system defining ${\incidence}_p({H},{\u_p})$. Since $\vecy \neq 0$, one can eliminate
w.l.o.g. $\Y_{p+1}$, and the linear form ${\u_p}'\vecy-1$, obtaining a
system $\tilde{\mathbf{f}} \in \QQ[\vecx,\vecy]^{2m-p-1}$. We recall that if
$\vecx^{(1)}, \ldots, \vecx^{(c)}$ are $c$ groups of variables, and $f
\in \QQ[\vecx^{(1)}, \ldots, \vecx^{(c)}]$, we say that the
multidegree of $f$ is $(d_1, \ldots, d_c)$ if its degree with respect
to the group $\vecx^{(j)}$ is $d_j$, for $j=1, \ldots, c$.
Let $\lagrange=(\tilde{\mathbf{f}}, \tilde{\mathbf{g}}, \tilde{\mathbf{h}})$ be the corresponding
Lagrange system, where
$$
(\tilde{\mathbf{g}}, \tilde{\mathbf{h}}) = (\tilde{g}_1, \ldots, \tilde{g}_{n-1}, \tilde{h}_1,
\ldots, \tilde{h}_{p}) = \vecz' \jac_1 \tilde{\mathbf{f}}
$$
with $\vecz = [1, \Z_2, \ldots, \Z_{2m-p-1}]$ a non-zero vector of Lagrange
multipliers (we let $\Z_1 = 1$ w.l.o.g.). One obtains that $\lagrange$ consists of
\begin{itemize}
\item
$2m-p-1$ polynomials of multidegree bounded by $(1,1,0)$ with respect to $(\vecx,\vecy,\vecz)$,
\item
$n-1$ polynomials of multidegree bounded by $(0,1,1)$ with respect to $(\vecx,\vecy,\vecz)$,
\item
$p$ polynomials of multidegree bounded by $(1,0,1)$ with respect to $(\vecx,\vecy,\vecz)$,
\end{itemize}
that is, of $n+2m-2$ polynomials in $n+2m-2$ variables.
\begin{lemma} \label{lemma1}
With the above notations, the number of isolated solutions of $\zeroset{\lagrange}$ is at most
\[
\delta(m,n,p) = \sum_{\ell}\binom{2m-p-1}{n-\ell} \binom{n-1}{2m-2p-2+\ell} \binom{p}{\ell}
\]
where $\ell\in \{\max\{0,n-2m+p+1\}, \ldots,
\min\{p,n-2m+2p+1\}\}$.
\end{lemma}
\begin{proof}
By \cite[Proposition 11.1]{SaSc13}, this degree is bounded by the multilinear
B\'ezout bound $\delta(m,n,p)$ which is the sum of the coefficients of
\[
(s_\X+s_\Y)^{2m-p-1} (s_\Y+s_\Z)^{n-1} (s_\X+s_\Z)^{p} \in \QQ[s_\X,s_\Y,s_\Z]
\]
modulo $I = \left \langle s_\X^{n+1}, s_\Y^{p+1}, s_\Z^{2m-p-1} \right
\rangle$. The conclusion follows by a direct computation.
\end{proof}
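The binomial sum of Lemma \ref{lemma1} can be checked against a direct expansion.
The following Python sketch (an independent sanity check with names of our choosing,
not part of the algorithm) sums the coefficients of the product that survive reduction
modulo $I$ and compares the result with the closed form $\delta(m,n,p)$:

```python
from math import comb

def delta(m, n, p):
    """Closed-form binomial sum of Lemma 1 (0 when the index range is empty)."""
    lo = max(0, n - 2 * m + p + 1)
    hi = min(p, n - 2 * m + 2 * p + 1)
    return sum(comb(2 * m - p - 1, n - l) * comb(n - 1, 2 * m - 2 * p - 2 + l)
               * comb(p, l) for l in range(lo, hi + 1))

def delta_bruteforce(m, n, p):
    """Sum of coefficients of (sx+sy)^(2m-p-1) (sy+sz)^(n-1) (sx+sz)^p
    modulo <sx^(n+1), sy^(p+1), sz^(2m-p-1)>, by expanding each factor."""
    total = 0
    for i in range(2 * m - p):          # sx^i sy^(2m-p-1-i) from the first factor
        for j in range(n):              # sy^j sz^(n-1-j) from the second factor
            for k in range(p + 1):      # sx^k sz^(p-k) from the third factor
                if (i + k <= n
                        and (2 * m - p - 1 - i) + j <= p
                        and (n - 1 - j) + (p - k) <= 2 * m - p - 2):
                    total += comb(2 * m - p - 1, i) * comb(n - 1, j) * comb(p, k)
    return total
```

Since the product is homogeneous of degree $n+2m-2$, which equals the sum of the degree
caps imposed by $I$, every surviving monomial saturates all three caps, so the two
computations agree.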
With input $\lagrange$, the homotopy system $\mathbf{t}$ consists of
$2m-p-1$, $n-1$ and $p$ polynomials of multidegree respectively bounded
by $(1,1,0,1), (0,1,1,1)$ and $(1,0,1,1)$ with respect to
$(\vecx,\vecy,\vecz,t)$.
We prove the following.
\begin{lemma} \label{lemma2}
${\rm deg} \, \zeroset{\mathbf{t}} \in {O}(pn(2m-p) \delta(m,n,p))$.
\end{lemma}
\begin{proof}[of Lemma \ref{lemma2}]
We use Multilinear B\'ezout bounds as in the proof of Lemma \ref{lemma1}.
The degree of $\zeroset{{\bf t}}$ is bounded by the sum of the coefficients
of
\[
(s_\X+s_\Y+s_{t})^{2m-p-1} (s_\Y+s_\Z+s_{t})^{n-1} (s_\X+s_\Z+s_{t})^{p}
\]
modulo $I = \left\langle s_\X^{n+1}, s_\Y^{p+1}, s_\Z^{2m-p-1}, s_{t}^2 \right\rangle
\subset \QQ[s_\X, s_\Y, s_\Z, s_{t}]$. Since the variable $s_t$ can appear
up to power $1$, the previous polynomial is congruent to $P_1+P_2+P_3+P_4$
modulo $I$, with
\begin{itemize}
\item[] $P_1 = (s_\X+s_\Y)^{2m-p-1} (s_\Y+s_\Z)^{n-1} (s_\X+s_\Z)^{p}$,
\item[] $P_2 = (2m-p-1) s_t (s_\X+s_\Y)^{2m-p-2} (s_\Y+s_\Z)^{n-1} \,(s_\X+s_\Z)^{p}$,
\item[] $P_3 = (n-1) s_t (s_\Y+s_\Z)^{n-2} (s_\X+s_\Y)^{2m-p-1} (s_\X+s_\Z)^{p}$,
\item[] $P_4 = p \, s_t (s_\X+s_\Z)^{p-1} (s_\X+s_\Y)^{2m-p-1} (s_\Y+s_\Z)^{n-1}.$
\end{itemize}
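The congruence $P \equiv P_1+P_2+P_3+P_4 \bmod s_t^2$ (with $P_1 =
(s_\X+s_\Y)^{2m-p-1}(s_\Y+s_\Z)^{n-1}(s_\X+s_\Z)^{p}$) is just the first-order Taylor
expansion in $s_t$. The Python sketch below (illustrative only; a spot check at random
integer points, not a proof of the polynomial identity) evaluates both sides at integer
values of $(s_\X,s_\Y,s_\Z)$ and compares the coefficients of $s_t^0$ and $s_t^1$:

```python
from random import randint, seed

def polymul(a, b):
    """Multiply univariate polynomials in t (coefficient lists, low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polypow(a, e):
    out = [1]
    for _ in range(e):
        out = polymul(out, a)
    return out

def check_mod_t2(m, n, p, x, y, z):
    """Compare (x+y+t)^(2m-p-1)(y+z+t)^(n-1)(x+z+t)^p with P1+...+P4
    in the coefficients of t^0 and t^1, at the integer point (x, y, z)."""
    a, b, c = 2 * m - p - 1, n - 1, p
    full = polymul(polymul(polypow([x + y, 1], a), polypow([y + z, 1], b)),
                   polypow([x + z, 1], c))
    P1 = (x + y) ** a * (y + z) ** b * (x + z) ** c
    dP = (a * (x + y) ** (a - 1) * (y + z) ** b * (x + z) ** c      # from P2
          + b * (y + z) ** (b - 1) * (x + y) ** a * (x + z) ** c    # from P3
          + c * (x + z) ** (c - 1) * (x + y) ** a * (y + z) ** b)   # from P4
    return full[0] == P1 and full[1] == dP

seed(0)
ok = all(check_mod_t2(m, n, p, randint(1, 9), randint(1, 9), randint(1, 9))
         for (m, n, p) in [(2, 3, 1), (3, 5, 2), (4, 7, 3)] for _ in range(3))
```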
We denote by $\Delta(P_i)$ the contribution of $P_i$ to the previous
sum.
Firstly, observe that $\Delta(P_1) = \delta(m,n,p)$ (compare with the
proof of Lemma \ref{lemma1}). Defining
$\chi_1 = \max\{0,n-2m+p+1\}$ and $\chi_2 = \min\{p,n-2m+2p+1\}$,
one has $\Delta(P_1) = \delta(m,n,p) = \sum_{\ell = \chi_1}^{\chi_2}{\gamma}(\ell)$
with
${\gamma}(\ell) = \binom{2m-p-1}{n-\ell} \binom{n-1}{2m-2p-2+\ell} \binom{p}{\ell}.$
Write now $P_2 = (2m-p-1)s_t\tilde{P}_2$, with $\tilde{P}_2 \in \QQ[\X,\Y,\Z].$
Let $\Delta(\tilde{P}_2)$ be the contribution of $\tilde{P}_2$, that is the sum
of the coefficients of $\tilde{P}_2$ modulo $I' = \left\langle s_\X^{n+1}, s_\Y^{p+1},
s_\Z^{2m-p-1} \right\rangle$, so that $\Delta(P_2) = (2m-p-1)\Delta(\tilde{P}_2)$. Then
\[
\Delta(\tilde{P}_2) = \sum_{i,j,\ell}\binom{2m-p-2}{i}\binom{n-1}{j}\binom{p}{\ell}
\]
where the sum runs in the set defined by the inequalities
\[
i + \ell \leq n, \,\,\, 2m-p-2-i+j \leq p, \,\,\, n-1-j+p-\ell \leq 2m-p-2.
\]
Now, since $\tilde{P}_2$ is homogeneous of degree $n+2m-3$, only three
cases are possible:
{\it Case (A)}. $i + \ell=n$, $2m-p-2-i+j =p$ and $n-1-j+p-\ell =2m-p-3$. Here the contribution is
$\delta_a = \sum_{\ell=\alpha_1}^{\alpha_2} {\varphi}_a(\ell)$ with
\[
{\varphi}_a(\ell) = \binom{2m-p-2}{n-\ell}\binom{n-1}{2m-2p-3+\ell}\binom{p}{\ell},
\]
and $\alpha_1 = \max\{0,n-2m+p+2\}, \alpha_2 = \min\{p,n-2m+2p+2\}.$
Suppose first that $\ell$ is an admissible index for $\Delta(P_1)$ and $\delta_a$,
that is $\max\{\chi_1,\alpha_1\}=\alpha_1 \leq \ell \leq \chi_2=\min\{\chi_2,\alpha_2\}$.
Then:
\begin{align*}
{\varphi}_a(\ell) & \leq \binom{2m-p-1}{n-\ell}\binom{n-1}{2m-2p-3+\ell}\binom{p}{\ell} = \\
& = \Psi(\ell) {\gamma}(\ell) \qquad \text{with} \, \Psi(\ell) = \frac{2m-2p-2+\ell}{n-(2m-2p-2+\ell)}.
\end{align*}
The rational function $\ell \longmapsto \Psi(\ell)$
is piecewise monotone (its first derivative is positive), and its unique possible
pole is $\ell = n-2m+2p+2$. Suppose that this value is a pole of $\Psi(\ell)$.
This would imply $\alpha_2 = n-2m+2p+2$ and so $\chi_2 = n-2m+2p+1$; since $\ell$
is admissible for $\Delta(P_1)$, one would obtain a contradiction. Hence
the rational function $\Psi(\ell)$ has no poles, its maximum is attained at
$\chi_2$ and its value is $\Psi(\chi_2) = n-1$. Hence
${\varphi}_a(\ell) \leq (n-1){\gamma}(\ell)$.
Now, we analyze all possible cases:
\begin{enumerate}
\item[(A1)] $\chi_1 = 0, \alpha_1=0$. This implies $\chi_2=n-2m+2p+1, \alpha_2=n-2m+2p+2$.
We deduce that
\begin{align*}
\delta_a &= \sum_{\ell = 0}^{\chi_2}{\varphi}_a(\ell) + {\varphi}_a(\alpha_2) \leq (n-1) \sum_{\ell = 0}^{\chi_2}{\gamma}(\ell) + \\
&+ {\varphi}_a(\alpha_2) \leq (n-1)\Delta(P_1)+ {\varphi}_a(\alpha_2).
\end{align*}
In this case we deduce the bound $\delta_a \leq n \Delta(P_1)$.
\item[(A2)] $\chi_1 = 0, \alpha_1=n-2m+p+2$. This implies $\chi_2=n-2m+2p+1, \alpha_2=p$.
In this case all indices are admissible, and hence we deduce the bound $\delta_a \leq (n-1) \Delta(P_1).$
\item[(A3)] $\chi_1 = n-2m+p+1$. This implies $\alpha_1=n-2m+p+2$, $\chi_2=p, \alpha_2=p$.
Also in this case all indices are admissible, and $\delta_a \leq (n-1) \Delta(P_1).$
\end{enumerate}
{\it Case (B)}. $i + \ell=n$, $2m-p-2-i+j =p-1$ and $n-1-j+p-\ell = 2m-p-2$. Here the contribution is
$\delta_b=\sum_\ell {\varphi}_b(\ell)$ where
\[
{\varphi}_b(\ell) = \binom{2m-p-2}{n-\ell}\binom{n-1}{2m-2p-2+\ell}\binom{p}{\ell}.
\]
One gets $\delta_b \leq \Delta(P_1)$ since the sum above is defined for $\max \{0,n-2m+p+2\} \leq
\ell \leq \min \{p,n-2m+2p+1\}$, and the inequality ${\varphi}_b(\ell) \leq {\gamma}(\ell)$ holds term-wise.
{\it Case (C)} $i + \ell=n-1$, $2m-p-2-i+j = p$ and $n-1-j+p-\ell = 2m-p-2$. Here the contribution is
$\delta_c = \sum_{\ell}{\varphi}_c(\ell)$ where
\[
{\varphi}_c(\ell) = \binom{2m-p-2}{n-1-\ell}\binom{n-1}{2m-2p-2+\ell}\binom{p}{\ell}.
\]
One gets $\delta_c \leq \Delta(P_1)$ since the sum above is defined for $\max\{0,n-2m+p+1\} \leq
\ell \leq \min\{p,n-2m+2p+1\}$, and the inequality ${\varphi}_c(\ell) \leq {\gamma}(\ell)$ holds term-wise.
We conclude that $\delta_a \leq n \Delta(P_1)$, $\delta_b \leq \Delta(P_1)$ and $\delta_c \leq \Delta(P_1)$.
Hence $\Delta(P_2) = (2m-p-1) (\delta_a+\delta_b+\delta_c) \in {O}(n(2m-p) \Delta(P_1)).$
Analogously to $\Delta(P_2)$, one can conclude that $\Delta(P_3) \in {O}(n(n+2m-p) \Delta(P_1))$
and $\Delta(P_4) \in {O}(pn(n+2m-p) \Delta(P_1)).$
\end{proof}
\noindent{\bf Estimates.}\\
We now give the overall complexity of ${\sf ZeroDimSolveMaxRank}$.
\begin{theorem}
Let $\delta = \delta(m,n,p)$ be given by Lemma \ref{lemma1}. Then
{\sf ZeroDimSolveMaxRank} with input $\lagrange({H}^M, \u_p, \v_p)$ computes
a rational parametrization
within
\[
\ensuremath{{O}{\,\tilde{ }\,}}(p(n+2m)^{O(1)}(2m-p) \delta^2),
\]
arithmetic operations over $\QQ$.
\end{theorem}
\begin{proof}
The polynomial entries of the system $\mathbf{t}$ (as defined in the
previous section) are cubic polynomials in $n+2m-1$ variables, so
the cost of their evaluation is in $O((n+2m)^3)$. Applying
\cite[Theorem 5.2]{jeronimo2009deformation} and bounds given in
Lemma \ref{lemma1} and \ref{lemma2} yield the claimed complexity
estimate.
\end{proof}
From Lemma \ref{lemma1}, one deduces that for all $0 \leq p \leq r$,
the maximum number of complex solutions computed by ${\sf
ZeroDimSolveMaxRank}$ is bounded above by $\delta(m,n,p)$. We deduce
the following result.
\begin{proposition}
Let $H$ be an $m \times m$, $n$-variate linear Hankel matrix, and let
$r \leq m-1$. The maximum number of complex solutions computed by
${\sf LowRankHankel}$ with input $(H,r)$ is
$$
\binom{2m-r-1}{r} + \sum_{k=2m-2r}^{n}\sum_{p=0}^{r} \delta(m,k,p).
$$
where
$\delta(m,k,p)$ is the bound defined in Lemma \ref{lemma1}.
\end{proposition}
\begin{proof}
The maximum number of complex solutions computed by {\sf
ZeroDimSolve} is the degree of ${\incidence}(H, \u, r)$. Using the
multilinear B\'ezout bounds, this is bounded by the coefficient of
the monomial $s_\X^{n}s_\Y^{r}$ in the expression
$(s_\X+s_\Y)^{2m-r-1}$, that is exactly $\binom{2m-r-1}{r}$. The
proof is now straightforward, since {\sf ZeroDimSolveMaxRank} runs
$r+1$ times at each recursive step of {\sf LowRankHankel}, and since
the number of variables decreases from $n$ to $2m-2r$.
\end{proof}
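For concreteness, the bound of the proposition is easy to evaluate. The following
Python sketch (illustrative; the function names are ours) computes it from the
closed form of Lemma \ref{lemma1}:

```python
from math import comb

def delta(m, n, p):
    """delta(m,n,p) from Lemma 1 (the empty sum gives 0)."""
    lo = max(0, n - 2 * m + p + 1)
    hi = min(p, n - 2 * m + 2 * p + 1)
    return sum(comb(2 * m - p - 1, n - l) * comb(n - 1, 2 * m - 2 * p - 2 + l)
               * comb(p, l) for l in range(lo, hi + 1))

def total_bound(m, n, r):
    """Bound of the proposition on the number of complex solutions
    computed by LowRankHankel on an m x m, n-variate input of rank <= r."""
    return comb(2 * m - r - 1, r) + sum(delta(m, k, p)
                                        for k in range(2 * m - 2 * r, n + 1)
                                        for p in range(r + 1))
```

For instance, for a $2\times 2$ Hankel matrix in $3$ variables and $r=1$, the bound
evaluates to $\binom{2}{1}+\delta(2,2,1)+\delta(2,3,0)+\delta(2,3,1)=8$.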
\section{Proof of Proposition \ref{prop:dimension}} \label{sec:dimension}
\noindent
We start with a local description of the algebraic sets defined by our
Lagrange systems. This is obtained from a local description of the
system defining $\incidence({H}, \u, p)$. Without loss of generality,
we can assume that $\u=(0, \ldots, 0, 1)$ throughout this section: this
situation can be reached by a linear change of the $\vecy$-variables
that leaves the $\vecx$-variables invariant.
\subsection{Local equations} \label{ssec:dimlag:local}\label{sssec:dimlag:local:inc}
\label{sssec:dimlag:local:lag}
Let $(\vecx, \vecy)\in \incidencereg({H}, \u, p)$. Then, by definition, there
exists a $p \times p$ minor of $\tilde{{H}}(\vecx)$ that is
non-zero. Without loss of generality, we assume that this minor is the
determinant of the upper left $p\times p$ submatrix of
$\tilde{H}$. Hence, consider the following block partition
\begin{equation} \label{partition}
\tilde{{H}}(\vecx) =
\left[
\begin{array}{cc}
N & Q \\
P & R \\
\end{array}
\right]
\end{equation}
with $N \in \QQ[\vecx]^{p \times p}$, and $Q \in \QQ[\vecx]^{p}$, $P \in
\QQ[\vecx]^{(2m-2p-1) \times p}$, and $R \,\in \,\QQ[\vecx]^{2m-2p-1}$. We
are going to exhibit suitable local descriptions of $\incidencereg({H},
\u, p)$ over the Zariski open set $O_N\subset \CC^{n+p+1}$ defined by
$\det N \neq 0$; we denote by $\QQ[\vecx,\vecy]_{\det N}$ the local ring of
$\QQ[\vecx, \vecy]$ localized by $\det N$.
\begin{lemma} \label{lemma:local:incidence}
Let $N,Q,P,R$ be as above, and $\u \in \QQ^{p+1}-\{\mathbf{0}\}$. Then there exist
$\{q_{i}\}_{1 \leq i \leq p} \subset \QQ[\vecx]_{\det N}$ and
$\{\tilde{q}_{i}\}_{1 \leq i \leq 2m-2p-1} \subset \QQ[\vecx]_{\det N}$ such that the
constructible set $\incidencereg({H}, \u, p) \cap O_N$ is
defined by the equations
\begin{align*}
\Y_{i} - q_{i}(\vecx) &= 0 \qquad 1 \leq i \leq p \\
\tilde{q}_{i}(\vecx) &= 0 \qquad 1 \leq i \leq 2m-2p-1 \\
\Y_{p+1} - 1 &= 0.
\end{align*}
\end{lemma}
\begin{proof}
Let $c=2m-2p-1$.
The proof follows by the equivalence
\[
\left[ \begin{array}{cc} N & Q \\ P & R \end{array} \right] \vecy = 0
\;
\text{iff}
\;
\left[ \begin{array}{cc} {\rm I}_p & 0 \\ -P & {\rm I}_{c} \end{array} \right]
\left[ \begin{array}{cc} N^{-1} & 0 \\ 0 & {\rm I}_{c} \end{array} \right]
\left[ \begin{array}{cc} N & Q \\ P & R \end{array} \right] \vecy = 0
\]
in the local ring $\QQ[\vecx,\vecy]_{\det N}$, that is if and only if
\[
\left[ \begin{array}{cc} {\rm I}_p & N^{-1}Q \\ 0 & R-PN^{-1}Q \end{array} \right] \vecy = 0
\]
Recall that we have assumed that $\u=(0, \ldots, 0, 1)$; then the
equation $\u'\vecy=1$ reads $\Y_{p+1}=1$. Denoting by $q_{i}$ and
$\tilde{q}_{i}$ respectively the entries of vectors $-N^{-1}Q$ and
$-(R-PN^{-1}Q)$ ends the proof.
\end{proof}
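The local parametrization in this proof is just block Gaussian elimination. The
Python sketch below (with arbitrary sample blocks of our choosing, in exact rational
arithmetic) verifies it on a $3\times 3$ instance ($m=3$, $p=2$, so $N$ is $2\times 2$
and the Schur complement $R-PN^{-1}Q$ is a scalar): choosing $R$ so that the Schur
complement vanishes, the vector $\vecy=(-N^{-1}Q,\,1)$ lies in the kernel of $\tilde{H}$.

```python
from fractions import Fraction as F

# Arbitrary sample blocks of the partition [[N, Q], [P, R]] (m = 3, p = 2);
# N must be invertible, and all arithmetic is exact over the rationals.
N = [[F(2), F(1)], [F(1), F(3)]]
Q = [F(4), F(5)]
P = [F(6), F(7)]

det = N[0][0] * N[1][1] - N[0][1] * N[1][0]
assert det != 0

# q = -N^{-1} Q by Cramer's rule: these are the q_i of the local system.
q = [-(Q[0] * N[1][1] - Q[1] * N[0][1]) / det,
     -(N[0][0] * Q[1] - N[1][0] * Q[0]) / det]

# Choose R so that the Schur complement R - P N^{-1} Q vanishes,
# i.e. the local equation tilde_q(x) = 0 holds at this point.
R = P[0] * (-q[0]) + P[1] * (-q[1])

# Then y = (q, 1) must be in the kernel of the full matrix.
H = [[N[0][0], N[0][1], Q[0]],
     [N[1][0], N[1][1], Q[1]],
     [P[0],    P[1],    R]]
y = q + [F(1)]
residual = [sum(H[i][j] * y[j] for j in range(3)) for i in range(3)]
assert residual == [0, 0, 0]
```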
The above local system is denoted by $\tilde{\mathbf{f}} \in \QQ[\vecx,\vecy]_{\det N}^{2m-p}$.
The Jacobian matrix of this polynomial system is
\[
\jac\tilde{\mathbf{f}} =
\left[
\begin{array}{cc}
\begin{array}{c} \jac_x\tilde{\mathbf{q}} \\ \star \end{array}
&
\begin{array}{c} 0 \\ {\rm I}_{p+1} \end{array}
\end{array}
\right]
\]
with $\tilde{\mathbf{q}} = (\tilde{q}_{1}(\vecx), \ldots,
\tilde{q}_{2m-2p-1}(\vecx))$. Its kernel defines the tangent space to
$\incidencereg({H}, \u, p)\cap O_N$. Let $\w=(\w_1, \ldots, \w_n) \in
\CC^n$ be a row vector; we denote by $\pi_\w$ the projection
$\pi_\w(\vecx,\vecy) = \w_1\X_1 + \cdots + \w_n\X_n$.
Given a row vector $\v \in \CC^{2m-p+1}$, we denote by ${\sf wlagrange}(\tilde{\mathbf{f}},
\v)$ the following polynomial system
\begin{equation} \label{local-lag}
\tilde{\mathbf{f}}, \,\,\,\, (\tilde{\mathbf{g}}, \tilde{\mathbf{h}}) = [\Z_1, \ldots, \Z_{2m-p}, \Z_{2m-p+1}]
\left[
\begin{array}{c}
\jac \tilde{\mathbf{f}} \\
\begin{array}{cc}
\w & 0
\end{array}
\end{array}
\right], \,\,\,\,
\v'\vecz-1.
\end{equation}
For all $0 \leq p \leq r$, this polynomial system contains $n+2m+2$
polynomials and $n+2m+2$ variables. We denote by $\mathcal{L}(\tilde{\mathbf{f}}, \v,
\w)$ the set of its solutions whose projection on the $(\vecx, \vecy)$-space
lies in $O_N$.
Finally, we denote by ${\sf wlagrange}({\mathbf{f}}, \v)$ the polynomial
system obtained when replacing $\tilde{\mathbf{f}}$ above with $\mathbf{f}=\mathbf{f}({H}, \u,
p)$. Similarly, its solution set is denoted by $\mathcal{L}({\mathbf{f}}, \v,
\w)$.
\subsection{Intermediate result} \label{ssec:dimlag:intlemma}
\begin{lemma} \label{lemma:intermediate}
Let ${\mathscr{H}} \subset \CC^{(2m-r)(n+1)}$ be the non-empty Zariski open set
defined by Proposition \ref{prop:regularity}, ${H} \in {\mathscr{H}}$
and $0 \leq p \leq r$.
There exist non-empty Zariski open sets ${\mathscr{V}} \subset \CC^{2m-p}$ and
${\mathscr{W}} \subset \CC^n$ such that if $\v \in {\mathscr{V}}$ and $\w \in
{\mathscr{W}}$, the following holds:
\begin{itemize}
\item[(a)] the set $\mathcal{L}_p(\mathbf{f}, \v, \w)=\mathcal{L}(\mathbf{f}, \v, \w) \cap
\{(\vecx,\vecy,\vecz) \mid {{\rm rank}}\,\tilde{H}(\vecx)=p\}$ is finite and the
Jacobian matrix of ${\sf wlagrange}({\mathbf{f}}, \v)$ has maximal rank at
any point of $\mathcal{L}_p(\mathbf{f}, \v, \w)$;
\item[(b)] the projection of $\mathcal{L}_p(\mathbf{f}, \v, \w)$ in the
$(\vecx,\vecy)$-space contains the critical points of the restriction of
$\pi_\w$ restricted to $\incidencereg({H}, \u, p)$.
\end{itemize}
\end{lemma}
\proof
We start with Assertion (a).
The statement to prove holds over $\incidencereg({H}, \u, p)$; hence it
is enough to prove it on any open set at which one $p\times p$ minor
of $\tilde {H}$ is non-zero. Hence, we assume that the determinant of
the upper left $p\times p$ submatrix $N$ of $\tilde {H}$ is non-zero;
$O_N\subset \CC^{n+p+1}$ is the open set defined by $\det\, N \neq 0$,
and we reuse the notation introduced in this section. We prove
that there exist non-empty Zariski open sets ${\mathscr{V}}'_N\subset \CC^{2m-p}$
and ${\mathscr{W}}_N \subset \CC^{n}$ such that for $\v \in {\mathscr{V}}'_N$ and $\w \in
{\mathscr{W}}_N$, $\mathcal{L}_p(\tilde{\mathbf{f}}, \v, \w)$ is finite and that the Jacobian matrix
associated to ${\sf wlagrange}(\tilde{\mathbf{f}}, \v)$ has maximal rank at any
point of $\mathcal{L}_p(\tilde{\mathbf{f}}, \v, \w)$. The Lemma follows straightforwardly
by defining ${\mathscr{V}}'$ (resp. ${\mathscr{W}}$) as the intersection of ${\mathscr{V}}'_N$
(resp. ${\mathscr{W}}_N$) where $N$ varies in the set of $p \times p$ minors
of $\tilde{H}(\vecx)$.
Equations $\tilde{\mathbf{h}}$ yield $\Z_{j}=0$ for $j=2m-2p, \ldots, 2m-p$,
and these equations can be eliminated, together with the corresponding
$\vecz$-variables, from the Lagrange system ${\sf wlagrange}(\tilde{\mathbf{f}},
\v)$. The remaining $\vecz$-variables are $\Z_1, \ldots, \Z_{2m-2p-1}, \Z_{2m-p+1}$;
we denote by $\Omega \subset \CC^{2m-2p}$ the Zariski open set where they do not
vanish simultaneously.
Now, consider the map
\[
\begin{array}{lrcc}
q : & O_N \times \Omega \times \CC^{n} & \longrightarrow & \CC^{n+2m-p} \\
& (\vecx,\vecy,\vecz,\w) & \longmapsto & (\tilde{\mathbf{f}}, \tilde{\mathbf{g}})
\end{array}
\]
and, for $\w \in \CC^n$, its section map $q_{\w}(\vecx,\vecy,\vecz) = q(\vecx,\vecy,\vecz,\w)$.
We consider $\tilde{\v} \in \CC^{2m-2p}$ and we denote by $\tilde{\vecz}$
the remaining $\vecz$-variables, as above. Hence we define
\[
\begin{array}{lrcc}
Q : & O_N \times \Omega \times \CC^{n} \times \CC^{2m-2p} & \longrightarrow & \CC^{n+2m-p+1} \\
& (\vecx, \vecy, \vecz, \w, \tilde{\v}) & \longmapsto & (\tilde{\mathbf{f}}, \tilde{\mathbf{g}}, \tilde{\v}'\vecz-1)
\end{array}
\]
and its section map $Q_{\w,\tilde{\v}}(\vecx,\vecy,\vecz) = Q(\vecx,\vecy,\vecz,\w,\tilde{\v})$.
We claim that $\mathbf{0} \in \CC^{n+2m-p}$ (resp. $\mathbf{0} \in \CC^{n+2m-p+1}$) is a
regular value for $q$ (resp. $Q$). Hence we deduce, by Thom's Weak Transversality
Theorem, that there exist non-empty Zariski open sets ${\mathscr{W}}_N \subset \CC^n$ and
$\tilde{{\mathscr{V}}}_N \subset \CC^{2m-2p}$ such that if $\w \in {\mathscr{W}}_N$ and $\tilde{\v} \in
\tilde{{\mathscr{V}}}_N$, then $\mathbf{0}$ is a regular value for $q_{\w}$ and $Q_{\w,\tilde{\v}}$.
We prove now this claim. Recall that since $H \in {\mathscr{H}}$, the Jacobian matrix
$\jac_{\vecx,\vecy} \tilde{\mathbf{f}}$ has maximal rank at any point $(\vecx,\vecy) \in \zeroset{\tilde{\mathbf{f}}}$.
Let $(\vecx,\vecy,\vecz,\w) \in q^{-1}(\bf0)$ (resp. $(\vecx, \vecy, \vecz, \w, \tilde{\v}) \in Q^{-1}(\bf0)$).
Hence $(\vecx,\vecy) \in \zeroset{\tilde{\mathbf{f}}}$. We isolate the square submatrix of
$\jac q (\vecx,\vecy,\vecz,\w)$ obtained by selecting all its rows and
\begin{itemize}
\item the columns corresponding to derivatives of $\vecx, \vecy$ yielding a
non-singular submatrix of $\jac_{\vecx,\vecy} \tilde{\mathbf{f}}(\vecx,\vecy)$;
\item the columns corresponding to the derivatives w.r.t. $\w_1,
\ldots, \w_n$; these yield a block of zeros in the rows corresponding
to $\tilde{\mathbf{f}}$ and the block ${\rm I}_n$ in the rows corresponding
to $\tilde{\mathbf{g}}$.
\end{itemize}
For the map $Q$, we consider the same blocks as above. Moreover, since
$(\vecx,\vecy,\vecz,\w,\tilde{\v}) \in Q^{-1}(\bf0)$ verifies $\tilde{\v}'\vecz-1=0$,
there exists $\ell$ such that $\z_\ell \neq 0$. Hence, we add the derivative
of the polynomial $\tilde{\v}'\vecz-1$ w.r.t. $\tilde{\v}_\ell$, which
is $\z_\ell \neq 0$. The claim is proved.
Note that $q^{-1}_\w(\mathbf{0})$ is defined by $n+2m-p$ polynomials
involving $n+2m-p+1$ variables. We deduce that for $\w \in {\mathscr{W}}_N$,
$q_\w^{-1}(\mathbf{0})$
is either empty or it is equidimensional and has dimension $1$. Using
the homogeneity in the $\vecz$-variables and the Theorem on the Dimension of
Fibers \cite[Sect. 6.3, Theorem 7]{Shafarevich77}, we deduce that the projection on the $(\vecx, \vecy)$-space of
$q_\w^{-1}(\mathbf{0})$ has dimension $\leq 0$.
We also deduce that for $\w \in {\mathscr{W}}_N$ and $\tilde{\v} \in \tilde{{\mathscr{V}}}_N$,
$Q_{\w,\tilde{\v}}^{-1}(\bf0)$ is either empty or finite.
Hence, the points of $Q^{-1}_{\w,\tilde{\v}}(\mathbf{0})$ are in bijection
with those of $\mathcal{L}(\tilde{\mathbf{f}}, \v, \w)$, forgetting the $0$-coordinates
corresponding to $\Z_j=0$.
We define ${\mathscr{V}}'_N = \tilde{{\mathscr{V}}}_N \times \CC^{p} \subset \CC^{2m-p}$.
We deduce straightforwardly that for $\v \in {\mathscr{V}}'_N$ and $\w \in {\mathscr{W}}_N$,
the Jacobian matrix of ${\sf wlagrange}(\tilde{\mathbf{f}}, \v)$ has
maximal rank at any point of $\mathcal{L}_p(\tilde{\mathbf{f}}, \v, \w)$. By the Jacobian
criterion, this also implies that the set $\mathcal{L}_p(\tilde{\mathbf{f}}, \v, \w)$ is
finite as requested.
We prove now Assertion (b).
Let ${\mathscr{W}} \subset \CC^n$ and ${\mathscr{V}}' \subset \CC^{2m-p}$ be the non-empty
Zariski open sets defined in the proof of Assertion (a). For $\w \in {\mathscr{W}}$
and $\v \in {\mathscr{V}}'$, the projection of $\mathcal{L}_p(\tilde{\mathbf{f}}, \v, \w)$ on the
$(\vecx,\vecy)$-space is finite.
Since $H \in {\mathscr{H}}$, $\incidencereg({H}, \u, p)$ is smooth and
equidimensional.
Since we work on $\incidencereg({H}, \u, p)$, one of the $p \times p$
minors of $\tilde{H}(\vecx)$ is non-zero. Hence, we may work in
$O_N \cap \incidencereg({H}, \u, p)$, where $O_N \subset \CC^{n+p+1}$
has been defined in the proof of Assertion (a). Remark that
\[
{\rm crit}\,(\pi_\w, \incidencereg({H}, \u, p)) \, = \, \bigcup_N \, {\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))
\]
where $N$ runs over the set of $p \times p$ minors of $\tilde{H}(\vecx)$.
We prove below that there exists a non-empty Zariski open set
${\mathscr{V}} \subset \CC^{2m-p}$ such that if $\v \in {\mathscr{V}}$, for all $N$
and for $\w \in {\mathscr{W}}$, the set ${\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))$
is finite and contained in the projection of $\mathcal{L}_p(\mathbf{f}, \v, \w)$. This
straightforwardly implies that the same holds for ${\rm crit}\,(\pi_\w, \incidencereg({H}, \u, p))$.
Suppose w.l.o.g. that $N$ is the upper left $p \times p$ minor of $\tilde{H}(\vecx)$.
We use the notation $\tilde{\mathbf{f}}, \tilde{\mathbf{g}}, \tilde{\mathbf{h}}$ as above. Hence,
the set ${\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))$ is the image by the
projection $\pi_{\vecx,\vecy}$ over the $(\vecx,\vecy)$-space, of the
constructible set defined by $\tilde{\mathbf{f}}, \tilde{\mathbf{g}}, \tilde{\mathbf{h}}$ and $\vecz
\neq 0$. We previously proved that, if $\w \in {\mathscr{W}}_N$, $q^{-1}(\bf0)$ is either empty
or equidimensional of dimension $1$. Hence, the constructible set defined by
$\tilde{\mathbf{f}}, \tilde{\mathbf{g}}, \tilde{\mathbf{h}}$ and $\vecz \neq 0$, which is isomorphic
to $q^{-1}(\bf0)$, is either empty or equidimensional of dimension $1$.
Moreover, for any $(\vecx,\vecy) \in {\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))$,
$\pi_{\vecx,\vecy}^{-1}(\vecx,\vecy)$ has dimension 1, by the homogeneity
of polynomials w.r.t. variables $\vecz$. By the Theorem on the Dimension
of Fibers \cite[Sect. 6.3, Theorem 7]{Shafarevich77}, we deduce that
${\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))$ is finite.
For $(\vecx,\vecy) \in {\rm crit}\,(\pi_\w, O_N \cap \incidencereg({H}, \u, p))$, let
${\mathscr{V}}_{(\vecx,\vecy),N} \subset \CC^{2m-p}$ be the non-empty Zariski open
set such that if $\v \in {\mathscr{V}}_{(\vecx,\vecy),N}$ the hyperplane
$\v'\vecz-1=0$ intersects transversely $\pi_{\vecx,\vecy}^{-1}(\vecx,\vecy)$.
Recall that ${\mathscr{V}}'_N \subset \CC^{2m-p}$ has been defined in the proof of
Assertion (a). Define
$$
{\mathscr{V}}_N = {\mathscr{V}}'_N \cap \bigcap_{(\vecx,\vecy)} {\mathscr{V}}_{(\vecx,\vecy),N}
$$
and ${\mathscr{V}} = \bigcap_N {\mathscr{V}}_N$. This concludes the proof, since ${\mathscr{V}}$
is a finite intersection of non-empty Zariski open sets.
\hfill$\square$
\subsection{Conclusion} \label{ssec:dimlag:proof}
We denote by ${\mathscr{M}}_1 \subset \GL(n,\CC)$ the set of non-singular matrices
$M$ such that the first row $\w$ of $M^{-1}$ lies in the set ${\mathscr{W}}$
given in Lemma \ref{lemma:intermediate}: this set is non-empty and Zariski
open since the entries of $M^{-1}$ are rational functions of the entries of $M$.
Let ${\mathscr{V}} \subset \CC^{2m-p}$ be the non-empty Zariski open set given by Lemma
\ref{lemma:intermediate} and let $\v \in {\mathscr{V}}$.
Let $\e_1$ be the row vector $(1, 0, \ldots, 0) \in \QQ^n$ and for all $M \in \GL(n,\CC)$, let
\[
\tilde{M} =
\left[
\begin{array}{cc}
M & {0} \\
{0} & {\rm I}_m \\
\end{array}
\right].
\]
Remark that for any $M \in {\mathscr{M}}_1$ the following identity holds:
$$
\left[\begin{array}{c}
\jac \mathbf{f}({H}^M, \u, p) \\
\e_1 \quad 0\; \cdots \; 0\\
\end{array}\right] = \left[\begin{array}{cc}
\jac \mathbf{f}({H}, \u, p) \\
\w \quad 0 \; \cdots \; 0\\
\end{array}\right]\tilde{M}.
$$
We conclude that the set of solutions of the system
\begin{equation}
\label{eq:dim:1}
\left(\mathbf{f}({H}, \u, p), \quad
\vecz'
\left[\begin{array}{c}
\jac\mathbf{f}({H}, \u, p) \\
\w \quad 0\; \cdots \; 0\\
\end{array}\right],
\quad \v'\vecz-1 \right)
\end{equation}
is the image by the map $(\vecx,\vecy) \mapsto \tilde{M}^{-1}(\vecx,\vecy)$
of the set ${S}$ of solutions of the system
\begin{equation}
\label{eq:dim:2}
\left(\mathbf{f}({H}, \u, p), \quad
\vecz'\left[\begin{array}{c}
\jac\mathbf{f}({H}, \u, p) \\
\e_1 \quad 0\; \cdots \; 0\\
\end{array}\right], \quad \v'\vecz-1 \right).
\end{equation}
Now, let $\varphi$ be the projection that eliminates the last
coordinate $\z_{2m-p+1}$. Remark that $\varphi(S)
= {\sf L}_p(\mathbf{f}^M, \v, \e_1)$.
Now, applying Lemma \ref{lemma:intermediate} ends the proof.
\hfill$\square$
\section{Proof of Proposition \ref{prop:closedness}}\label{sec:closedness}
The proof of Proposition \ref{prop:closedness} relies on results of
\cite[Section 5]{HNS2014}
and of \cite{SaSc03}. We use the
same notation as in \cite[Section 5]{HNS2014}, and we recall them below.
\paragraph*{Notations} For ${\mathcal{Z}} \subset \CC^n$ of dimension $d$, we denote by
$\Omega_i({\mathcal{Z}})$ its $i-$equidimensional component, $i=0, \ldots, d$. We denote by
${\mathscr{S}}({\mathcal{Z}})$ the union of:
\begin{itemize}
\item $\Omega_0({\mathcal{Z}}) \cup \cdots \cup \Omega_{d-1}({\mathcal{Z}})$
\item the set ${\rm sing}\,(\Omega_d({\mathcal{Z}}))$ of singular points of $\Omega_d({\mathcal{Z}})$.
\end{itemize}
Let $\pi_i$ be the map $(\x_1, \ldots, \x_n) \to (\x_1, \ldots, \x_i)$.
We denote by ${\mathscr{C}}(\pi_i, {\mathcal{Z}})$ the Zariski closure of the union of the
following sets:
\begin{itemize}
\item $\Omega_0({\mathcal{Z}}) \cup \cdots \cup \Omega_{i-1}({\mathcal{Z}})$;
\item the union for $r \geq i$ of the sets ${\rm crit}\,(\pi_i, {\rm reg}\,(\Omega_r({\mathcal{Z}})))$.
\end{itemize}
For $M \in \GL(n,\CC)$ and ${\mathcal{Z}}$ as above, we define the collection
of algebraic sets $\{\mathcal{O}_i({\mathcal{Z}}^M)\}_{0 \leq i \leq d}$ as follows:
\begin{itemize}
\item ${\mathcal O}_d({\mathcal{Z}}^M)={\mathcal{Z}}^M$;
\item ${\mathcal O}_i({\mathcal{Z}}^M)={\mathscr{S}}({\mathcal O}_{i+1}({\mathcal{Z}}^M))
\cup {\mathscr{C}}(\pi_{i+1}, {\mathcal O}_{i+1}({\mathcal{Z}}^M))
\cup {\mathscr{C}}(\pi_{i+1},{\mathcal{Z}}^M)$ for $i=0, \ldots, d-1$.
\end{itemize}
We finally recall the two following properties:
{\it Property ${\mathsf{P}}({\mathcal{Z}})$.} Let ${{\mathcal{Z}}} \subset \CC^n$ be
an algebraic set of dimension $d$. We say that $M \in \GL(n,\CC)$ satisfies
${\mathsf{P}}({\mathcal{Z}})$ when for all $i = 0, 1, \ldots, d$:
\begin{enumerate}
\item
${\mathcal O}_i({{\mathcal{Z}}}^M)$ has dimension $\leq i$ and
\item
${\mathcal O}_i({{\mathcal{Z}}}^M)$ is in Noether position with respect to $\x_1, \ldots, \x_i$.
\end{enumerate}
{\it Property ${\sf Q}$.} We say that an algebraic set ${\mathcal{Z}}$ of dimension $d$
satisfies ${\sf Q}_i({\mathcal{Z}})$ (for a given $1 \leq i \leq d$) if for any connected
component ${{C}}$ of ${{\mathcal{Z}}}\cap \RR^n$ the boundary of $\pi_i({{C}})$ is contained
in $\pi_i({\mathcal O}_{i-1}({\mathcal{Z}}) \cap {{{C}}})$. We say that ${\mathcal{Z}}$ satisfies
${\mathsf{Q}}$ if it satisfies ${\mathsf{Q}}_1, \ldots, {\mathsf{Q}}_d$.
Let ${\mathcal{Z}}\subset \CC^n$ be an algebraic set of dimension $d$. By
\cite[Proposition 15]{HNS2014}, there exists a non-empty Zariski open
set $\mathscr{M}\subset \GL(n,\CC)$ such that for $M\in
\mathscr{M}\cap \GL(n,\QQ)$ Property ${\sf P}({\mathcal{Z}})$ holds. Moreover,
if $M \in \GL(n, \QQ)$ satisfies ${\sf P}({\mathcal{Z}})$, then ${\sf
Q}_i({\mathcal{Z}}^M)$ holds for $i=1, \ldots, d$ \cite[Proposition
16]{HNS2014}.
We use these results in the following proof of Proposition \ref{prop:closedness}.
\proof We start with assertion (a). Let ${\mathscr{M}}_2 \subset
\GL(n,\CC)$ be the non-em\-pty Zariski open set of \cite[Proposition
17]{HNS2014} for ${\mathcal{Z}} = {\mathcal{H}}_p$: for $M \in {\mathscr{M}}_2$, $M$
satisfies ${{\mathsf{P}}}({\mathcal{H}}_p)$. Remark that the connected components of
${\mathcal{H}}_p \cap \RR^n$ are in bijection with those of ${\mathcal{H}}^M_p \cap \RR^n$ (given by
${{C}} \leftrightarrow {{C}}^M$). Let ${{C}}^M \subset {\mathcal{H}}^M_p \cap
\RR^n$ be a connected component of ${\mathcal{H}}^M_p \cap \RR^n$. Let
$\pi_1$ be the projection on the first variable $\pi_1 \colon
\RR^{n} \to \RR$, and consider its restriction to ${\mathcal{H}}^M_p \cap
\RR^n$. Since $M \in {\mathscr{M}}_2$, by \cite[Proposition 16]{HNS2014}
the boundary of $\pi_1({{C}}^M)$ is included in $\pi_1({\mathcal
O}_0({\mathcal{H}}^M_p) \cap {{C}}^M)$ and in particular in
$\pi_1({{C}}^M)$. Hence $\pi_1({{C}}^M)$ is closed.
We prove now assertion (b).
Let $M \in {\mathscr{M}}_2$, ${{C}}$ a connected component of ${\mathcal{H}}_p \cap \RR^n$ and
${\alpha} \in \RR$ be in the boundary of $\pi_1({{C}}^M)$. By \cite[Lemma 19]{HNS2014}
$\pi_1^{-1}({\alpha}) \cap {{C}}^M$ is finite.
We claim that, up to genericity
assumptions on $\u \in \QQ^{p+1}$, for $\vecx \in \pi_1^{-1}({\alpha}) \cap {{C}}^M$,
the linear system $\vecy \mapsto \mathbf{f}({H}^M, \u, p)$ has at least one solution.
We deduce that
there exists a non-empty Zariski open set ${\mathscr{U}}_{{{C}},\vecx} \subset
\CC^{p+1}$ such that if $\u \in {\mathscr{U}}_{{{C}},\vecx} \cap \QQ^{p+1}$, there exists $\vecy \in
\QQ^{p+1}$ such that $(\vecx,\vecy) \in {\incidence}({H}^M, \u, p)$. One concludes by taking
\[
{\mathscr{U}} = \bigcap_{{{C}} \subset {\mathcal{H}}_p \cap \RR^n} \bigcap_{\vecx \in \pi_1^{-1}({\alpha}) \cap {{C}}^M} {\mathscr{U}}_{{{C}},\vecx},
\]
which is non-empty and Zariski open since:
\begin{itemize}
\item the collection $\{{{C}} \subset {\mathcal{H}}_p \cap \RR^n \, \text{connected component}\}$
is finite;
\item the set $\pi_1^{-1}({\alpha}) \cap {{C}}^M$ is finite.
\end{itemize}
It remains to prove the claim we made. For $\vecx \in \pi_1^{-1}({\alpha}) \cap {{C}}^M$, the matrix
$\tilde{{H}}(\vecx)$ is rank defective, and let $p' \leq p$ be its rank.
The linear system
\[
\left[ \begin{array}{c} \tilde{{H}}(\vecx) \\ \u \end{array} \right] \cdot \vecy =
\left[ \begin{array}{c} {\bf 0} \\ 1 \end{array} \right]
\]
has a solution if and only if
\[
{\rm rank} \left[ \begin{array}{c} \tilde{{H}}(\vecx) \\ \u \end{array} \right] =
{\rm rank} \left[ \begin{array}{cc} \tilde{{H}}(\vecx) & {\bf 0} \\ \u & 1 \end{array} \right],
\]
and the rank of the second matrix is $p'+1$. Taking ${\mathscr{U}}_{{{C}},\vecx}
\subset \CC^{p+1}$ to be the complement in $\CC^{p+1}$ of the $p'-$dimensional
linear space spanned by the rows of $\tilde{{H}}(\vecx)$ proves the claim and
concludes the proof.\hfill$\square$
\section{Experiments} \label{sec:exper}
The algorithm {\sf LowRankHankel} has been implemented in
\textsc{Ma\-ple}. We use the \textsc{FGb} \cite{faugere2010fgb}
library implemented by J.-C. Faugère for solving
zero-dimensional polynomial systems using Gr\"obner bases. In
particular, we used the new implementation of \cite{FM11} for
computing rational parametrizations. Our implementation checks the
genericity assumptions on the input.
We test the algorithm with input $m \times m$ linear Hankel matrices
${H}(\vecx)={H}_0+\X_1{H}_1+\ldots+\X_n{H}_n$, where the entries of ${H}_0,
\ldots, {H}_n$ are random rational numbers, and an integer $0 \leq r
\leq m-1$. None of the implementations of Cylindrical Algebraic
Decomposition solved our examples involving more than $3$ variables.
Also, on all our examples, we found that the Lagrange systems define
finite algebraic sets.
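To make the shape of the experimental input concrete, here is a small Python sketch (purely illustrative: the actual implementation is in \textsc{Maple} with \textsc{FGb}, and the helper names below are ours). It builds an $m \times m$ Hankel matrix from its $2m-1$ defining entries and computes its exact rank over the rationals; entries in arithmetic progression give a rank-deficient matrix, while e.g. the Catalan numbers give full rank.

```python
from fractions import Fraction

def hankel(entries):
    # (2m-1) entries -> m x m Hankel matrix with H[i][j] = entries[i+j]
    m = (len(entries) + 1) // 2
    return [[Fraction(entries[i + j]) for j in range(m)] for i in range(m)]

def rank(M):
    # exact rank over Q via Gauss-Jordan elimination on Fractions
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

print(rank(hankel([1, 2, 3, 4, 5])))   # arithmetic entries: rank 2 < m = 3
print(rank(hankel([1, 1, 2, 5, 14])))  # Catalan entries: full rank 3
```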
We compare the practical behavior of ${\sf LowRankHankel}$ with the
performance of the library {\sc RAGlib}, implemented by the third
author (see \cite{raglib}). Its function ${\sf PointsPerComponents}$,
with input the list of $(r+1)-$minors of ${H}(\vecx)$, returns one
point per connected component of the real counterpart of the algebraic
set ${\mathcal{H}}_r$, that is, it solves the problem presented in this
paper. It also uses critical point methods. The symbol $\infty$ means
that no result has been obtained after $24$ hours. The symbol matbig
means that the standard limitation in $\textsc{FGb}$ to the size of
matrices for Gr\"obner bases computations has been reached.
We report on timings (given in seconds) of the two implementations in
the next table. The column ${\sf New}$ corresponds to timings of ${\sf
LowRankHankel}$. Both computations have been done on an Intel(R)
Xeon(R) CPU E7540 @ 2.00 GHz with 256 Gb of RAM.
We remark that $\textsc{RAGlib}$ is competitive for problems of small
size (e.g. $m=3$) but when the size increases ${\sf LowRankHankel}$
performs much better, especially when the determinantal variety does
not have codimension $1$. It can tackle problems that are out of reach of
$\textsc{RAGlib}$. Note that for fixed $r$, the algorithm seems to have a
behaviour that is polynomial in $nm$ (this is particularly visible
when $m$ is fixed, e.g. to $5$).
{\tiny
\begin{table}
\centering
\begin{tabular}{|c|c|c||c|c|}
\hline
$(m,r,n)$ & {\sf RAGlib} & {\sf New} & {\sf TotalDeg} & {\sf MaxDeg}\\
\hline
\hline
$(3,2,2)$ & 0.3 & 5 & 9 & 6\\
$(3,2,3)$ & 0.6 & 10 & 21 & 12\\
$(3,2,4)$ & 2 & 13 & 33 & 12\\
$(3,2,5)$ & 7 & 20 & 39 & 12\\
$(3,2,6)$ & 13 & 21 & 39 & 12\\
$(3,2,7)$ & 20 & 21 & 39 & 12\\
$(3,2,8)$ & 53 & 21 & 39 & 12\\
\hline
\hline
$(4,2,3)$ & 2 & 2.5 & 10 & 10\\
$(4,2,4)$ & 43 & 6.5 & 40 & 30\\
$(4,2,5)$ & 56575 & 18 & 88 & 48\\
$(4,2,6)$ & $\infty$ & 35 & 128 & 48\\
$(4,2,7)$ & $\infty$ & 46 & 143 & 48\\
$(4,2,8)$ & $\infty$ & 74 & 143 & 48\\
\hline
\hline
$(4,3,2)$ & 0.3 & 8 & 16 & 12\\
$(4,3,3)$ & 3 & 11 & 36 & 52\\
$(4,3,4)$ & 54 & 31 & 120 & 68\\
$(4,3,5)$ & 341 & 112 & 204 & 84\\
$(4,3,6)$ & 480 & 215 & 264 & 84\\
$(4,3,7)$ & 528 & 324 & 264 & 84\\
$(4,3,8)$ & 2638 & 375 & 264 & 84\\
\hline
\hline
$(5,2,5)$ & 25 & 4 & 21 & 21 \\
$(5,2,6)$ & 31176 & 21 & 91 & 70\\
$(5,2,7)$ & $\infty$ & 135 & 199 & 108\\
$(5,2,8)$ & $\infty$ & 642 & 283 & 108\\
$(5,2,9)$ & $\infty$ & 950 & 311 & 108\\
$(5,2,10)$ & $\infty$ & 1106 & 311 & 108\\
\hline
\hline
$(5,3,3)$ & 2 & 2 & 20 & 20\\
$(5,3,4)$ & 202 & 18 & 110 & 90\\
$(5,3,5)$ & $\infty$ & 583 & 338 &228\\
$(5,3,6)$ & $\infty$ & 6544 & 698 & 360\\
$(5,3,7)$ & $\infty$ & 28081 & 1058 & 360\\
$(5,3,8)$ & $\infty$ & $\infty$ & - & - \\
\hline
\end{tabular}
\label{tab:time}
\caption{Timings and degrees}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c||c|c|}
\hline
$(m,r,n)$ & {\sf RAGlib} & {\sf New} & {\sf TotalDeg} & {\sf MaxDeg}\\
\hline
\hline
$(5,4,2)$ & 1 & 5 & 25 & 20\\
$(5,4,3)$ & 48 & 30 & 105 & 80\\
$(5,4,4)$ & 8713 & 885 & 325 & 220\\
$(5,4,5)$ & $\infty$ & 15537 & 755 & 430\\
$(5,4,6)$ & $\infty$ & 77962 & 1335 & 580\\
\hline
\hline
$(6,2,7)$ & $\infty$ & 6 & 36 & 36 \\
$(6,2,8)$ & $\infty$ & matbig & - & - \\
\hline
\hline
$(6,3,5)$ & $\infty$ & 10 & 56 & 56 \\
$(6,3,6)$ & $\infty$ & 809 & 336 & 280 \\
$(6,3,7)$ & $\infty$ & 49684 & 1032 & 696 \\
$(6,3,8)$ & $\infty$ & matbig & - & - \\
\hline
\hline
$(6,4,3)$ & 3 & 5 & 35 & 35 \\
$(6,4,4)$ & $\infty$ & 269 & 245 & 210 \\
$(6,4,5)$ & $\infty$ & 30660 & 973 & 728 \\
$(6,4,6)$ & $\infty$ & $\infty$ & - & - \\
\hline
\hline
$(6,5,2)$ & 1 & 9 &36 & 30 \\
$(6,5,3)$ & 915 & 356 & 186 & 150 \\
$(6,5,4)$ & $\infty$ & 20310 & 726 & 540 \\
$(6,5,5)$ & $\infty$ & $\infty$ & - & - \\
\hline
\end{tabular}
\label{tab:time2}
\caption{Timings and degrees (continued)}
\end{table}
}
Finally, we report in column ${\sf TotalDeg}$ the degree of the
rational parametrization obtained as output of the algorithm, that is
the number of its complex solutions. We observe that this value is
eventually constant when $m,r$ are fixed and $n$ grows, as is the
maximum degree (column ${\sf MaxDeg}$) appearing during the recursive
calls.
The same holds for the multilinear bound given in Section
\ref{ssec:algo:complexity} for the total number of complex solutions.
\newpage
19.1: Current
In the preceding chapters, we examined “electrostatic” systems; those for which charges are not in motion. In electrostatic systems, the electric field inside of a conductor is zero (by definition, or charges would be moving, since they are free to move in a conductor). We argued that if charges are deposited onto a conductor, they would quickly arrange themselves into a static configuration (on the surface of the conductor).
Instead, we can build systems where charges move in a conductor. If we apply a fixed potential difference across a conductor, this will result in an electric field inside the conductor and the charges within will move as a result. In general, this requires that there be some sort of circuit formed, whereby charges enter one end of the conductor and exit the other. The simplest circuit that one can construct is to connect the two terminals of a battery to the ends of a conductor, as illustrated in Figure \(\PageIndex{1}\).
Figure \(\PageIndex{1}\): A simple circuit is created by connecting the terminals of a battery to a conducting material such as a copper wire. Note that while electrons flow from the negative to the positive terminal of the battery, conventional current is defined as if it were positive charges moving in the opposite direction.
A battery (as we will see in more detail in Section 20) is a device that provides a source of charges and a fixed potential difference. For example, a \(9\text{V}\) battery has two terminals with a constant voltage of \(9\text{V}\) between them.
“Electric current” is defined to be the rate at which charges cross a given plane (usually a plane perpendicular to some conductor through which we want to define the current). We define current, \(I\), as the total amount of charge, \(\Delta Q\), that flows through any cross-section of the conductor during an amount of time, \(\Delta t\):
\[I=\frac{\Delta Q}{\Delta t}=\frac{dQ}{dt}\]
where we take a derivative if the rate at which charges flow is not constant in time. The S.I. unit of current is the Ampère (A). Current is defined to be positive in the direction in which positive charges flow. In almost all cases, it is negative electrons that flow through a material; the current is defined to be in the opposite direction from which the actual electrons are flowing, as illustrated in Figure \(\PageIndex{1}\). To distinguish that the current is in the direction opposite to that of the flowing electrons, one sometimes uses the term “conventional current” to indicate that the current is referring to a flow of positive charges.
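For instance, with a made-up charge profile \(Q(t)\) (the values below are purely illustrative), the difference quotient \(\Delta Q/\Delta t\) approaches the derivative \(dQ/dt\) as the time window shrinks:

```python
def Q(t):
    # hypothetical total charge (in C) that has crossed the plane by time t (in s)
    return 3 * t**2 + t

t, dt = 2.0, 1e-6
I_avg = (Q(t + dt) - Q(t)) / dt  # Delta Q / Delta t over a short window
I_inst = 6 * t + 1               # dQ/dt evaluated analytically at t
print(I_avg, I_inst)             # ~13.000003 vs 13.0 (in A)
```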
Note that the definition of electric current is very similar to the “flow rate”, \(Q\), that we defined as the volume flow of a liquid across a given cross-section (Section 15.3). As we continue to develop our description of current, you will notice that there are many similarities between describing the flow of an incompressible fluid and describing the flow of charges in a conductor.
We think of current as a macroscopic quantity, something that we can easily measure in the lab. Current is a measure of the average rate at which charges are moving through the conductor, and not a measure of what is going on at a microscopic level. In order to model the motion of charges at the microscopic level, we introduce the “current density”, \(\vec j\):
\[\vec j =\frac{I}{A}\hat E\]
where, \(I\), is the current that flows through a surface with cross-sectional area, \(A\), and \(\hat E\) is a unit vector in the direction of the electric field at the point where we are determining the current density. The current density allows us to develop a microscopic description of the current, since it is the electric current per unit area and points in the direction of the electric field at some position. Given the current density, \(\vec j\), one can always determine the current through a surface with area, \(A\), and normal vector, \(\hat n\): \[\begin{aligned} I = A(\vec j\cdot \hat n)\end{aligned}\] If the current density changes over the surface, one must take an integral instead: \[\begin{aligned} I=\int \vec j \cdot d\vec A\end{aligned}\] where \(d\vec A\), is a surface element with area, \(A\), and direction given by the normal to the surface at that point. The overall sign of the current will be determined by the direction of the flow of positive charges.
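As a quick numerical illustration (the numbers are invented for the example), we can check \(I=\int \vec j \cdot d\vec A\) for an axisymmetric, non-uniform current density \(j(r)=j_0(1-r^2/R^2)\) over a circular cross-section of radius \(R\); the integral evaluates analytically to \(I=j_0\pi R^2/2\):

```python
import math

def total_current(j, R, n=2000):
    # I = integral of j over a disc of radius R, with j along the normal and
    # axisymmetric, so I = \int_0^R j(r) 2*pi*r dr  (midpoint rule)
    h = R / n
    return sum(j((i + 0.5) * h) * 2 * math.pi * (i + 0.5) * h for i in range(n)) * h

R, j0 = 0.01, 5.0e5  # 1 cm radius, peak density 5e5 A/m^2 (example values)
I_num = total_current(lambda r: j0 * (1 - (r / R) ** 2), R)
I_exact = j0 * math.pi * R**2 / 2
print(I_num, I_exact)  # both approximately 78.54 A
```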
Electric current flows through a conductor with a narrowing cross section, as illustrated in Figure \(\PageIndex{2}\). If the cross-sectional area of the conductor is \(A_1\) at one end and \(A_2\) at the other end, what is the ratio of the current densities, \(j_1/j_2\), at the two ends of the conductor?
Figure \(\PageIndex{2}\): Current flows through a conductor with a cross-section that decreases from \(A_{1}\) to \(A_{2}\).
This situation is very similar to the flow of an incompressible fluid. In this case, the number of charges entering the conductor must be equal to the number of charges exiting the conductor during a given amount of time. That is, the total current, \(I\), must be the same at both ends, since there is no place in the conductor for charges to accumulate. Since the current must be the same on both ends, we can relate the current densities at each end: \[\begin{aligned} j&=\frac{I}{A}\\ \therefore I&=j_1A_1=j_2A_2\\ \therefore \frac{j_2}{j_1}&=\frac{A_1}{A_2}\end{aligned}\] and we find that the current density at the exit of the conductor must be higher than at the entrance. This is similar to the continuity equation in the Fluid Mechanics chapter (Section 15.3), where the current density plays a role analogous to the velocity in the fluids case.
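In code form, with hypothetical areas and current, the bookkeeping of this example reads:

```python
A1, A2 = 4.0e-6, 1.0e-6  # example cross-sectional areas in m^2
I = 2.0                  # the same current (in A) crosses both faces
j1, j2 = I / A1, I / A2  # current densities at the wide and narrow ends
print(j2 / j1, A1 / A2)  # the two ratios agree: j2/j1 = A1/A2
```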
19.2: Microscopic model of current
Q: Find the volume of the region bounded by a sphere and a paraboloid using cylindrical and spherical coordinates. The sphere is $x^2 + y^2 + z^2 = 4$ and the paraboloid is $x^2 + y^2 = 3z$
I've already done the cylindrical coordinates and got:
$$\int^{2\pi}_0\int_0^1\int_{\frac{r^2}{3}}^{\sqrt{4-r^2}}r\,dz\,dr\,d\theta = \frac{\pi(31-12\sqrt{3})}{6}$$
But I'm having trouble building the integral for spherical coordinates. So far I've found the bounds of integration for $d\rho$ and $d\theta$.
$$\int^{2\pi}_0\int\int_0^2\rho ^2\sin\phi \,\,\,d\rho d\phi d\theta$$
What about the bounds for $d\phi$? I did a small search and found that I had to solve $3\rho\cos\phi = \rho^2\sin^2\phi$, but this is the point where I'm stuck. Is this right? If yes, how do I solve this?
A: In cylindrical coordinates, the volume is
$$\int^{2\pi}_0 \int_0^{\sqrt3} \int^{-\frac{r^2}{3}}_{-\sqrt{4-r^2}} r\>dz\,dr\,d\theta
= \frac{19\pi}6$$
and, in spherical coordinates
$$\int^{2\pi}_0 \int_{\pi/2}^{2\pi/3} \int_0^{-\frac{3\cos\phi}{\sin^2\phi}}
\rho ^2\sin\phi \,d\rho d\phi d\theta
+\int^{2\pi}_0 \int^{\pi}_{2\pi/3} \int_0^{2}
\rho ^2\sin\phi \,d\rho d\phi d\theta = \frac{19\pi}6$$
A: At the intersection of paraboloid and sphere,
$x^2 + y^2 + z^2 = 4, x^2+y^2 = 3z \implies z^2 + 3z - 4 = 0$.
i.e, $z = 1, x^2+y^2 = 3$.
So in cylindrical coordinates, $0 \leq r \leq \sqrt3$.
Now in spherical coordinates,
$x = \rho \cos\theta \sin\phi,y = \rho \sin\theta \sin\phi, z = \rho \cos\phi$
At the intersection of paraboloid and sphere,
$\rho = 2, z = \rho \cos\phi = 1, \implies \cos\phi = \frac{1}{2}$
So for $0 \leq \phi \leq \frac{\pi}{3}$, the region is bounded by the sphere and for $\frac{\pi}{3} \leq \phi \leq \frac{\pi}{2}$, the region is bounded by the paraboloid.
$3z = x^2 + y^2 \implies 3 \rho \cos\phi = \rho^2 \sin^2\phi$
$\rho = 3 \cos\phi \csc^2\phi$
So the bounds for the second integral are,
$\frac{\pi}{3} \leq \phi \leq \frac{\pi}{2}, 0 \leq \rho \leq 3 \cos\phi \csc^2\phi $
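As an added sanity check (not part of the original answers), a short Python midpoint-rule integration confirms the common value $\frac{19\pi}{6} \approx 9.948$; by symmetry the $\theta$-integral just contributes a factor $2\pi$:

```python
import math

# V = 2*pi * \int_0^{sqrt(3)} (sqrt(4 - r^2) - r^2/3) * r dr  (midpoint rule)
n = 200_000
h = math.sqrt(3) / n
V = 2 * math.pi * h * sum(
    (math.sqrt(4 - r * r) - r * r / 3) * r
    for r in ((i + 0.5) * h for i in range(n))
)
print(V, 19 * math.pi / 6)  # both approximately 9.9484
```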
Unlicensed electrical contractor fined $16,000
By Workplace Health and Safety Queensland
Using Facebook to advertise an air-conditioning installation business over the summer of 2018/19 may have been the undoing of an unlicensed electrician who was recently fined $16,000 in the Brisbane Magistrates Court.
Between November 2018 and January 2019 the defendant installed air-conditioning units at two residential properties in Regents Park and Victoria Point.
He faced five charges under the Electrical Safety Act 2002: one charge of unlicensed electrical contracting, two charges of unlicensed electrical work and two charges of failing to ensure that the electrical installation complied with the wiring rules.
He’d never held an electrical contractor licence authorising him to conduct a business nor held an electrical work licence authorising him to perform electrical work in Queensland.
Electrical Safety Office Inspectors investigated the worker’s installations and conducted inspections of the work performed. The Inspectors found that he was unlicensed to carry out the electrical work, and that the work carried out fell short of the standard required by the wiring rules.
But this was not the first time that the defendant had been visited by the Electrical Safety Office. In August 2016 he was fined $10,000 in relation to electrical work performed during 2014, and in early November 2018 he was issued with an improvement notice.
Magistrate Nolan referenced the previous improvement notice issued to him and observed that, within days after being issued with the notice, he again performed electrical work and contracting from 17 November.
His Honour also acknowledged that the requirements in the Electrical Safety legislation are all about preventing electrical risks in places where people live and work.
Head of Electrical Safety Office, Donna Heelan, said our Inspectors found that the installations did not comply with the requirements of Australian Standard 3000 - Electrical Installations (the wiring rules).
“The failures included a failure to ensure cables and conductors were free from undue mechanical stress, a failure to ensure conductors were adequately terminated, and a failure to ensure openings on isolation switches were properly glued or sealed to prevent the entry of water,” Ms Heelan said.
“Carrying out unlicensed electrical work is life threatening. Members of the public should be rightfully be protected by our electrical safety laws.”
Magistrate Nolan imposed a penalty of $16,000 and ordered costs of $1,099.70 against the defendant. A conviction was recorded.
More prosecutions are at owhsp.qld.gov.au
For any media enquiries, contact: [email protected] or 0478 33 22 00.
\section{Background} The starting point for the development of algebraic invariants in topological data analysis is the classification of finite persistence modules over a field $k$: that any such module decomposes into a direct sum of indecomposable interval modules; moreover, the decomposition is unique up to reordering. The barcodes associated to the original module correspond to these interval submodules, which are indexed by the set of connected subgraphs of the finite directed graph associated to the finite, totally ordered indexing set of the module. Modules that decompose in such a fashion are conventionally referred to as {\it tame}.
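To fix notation for this classical statement (the symbols here are ours, not the paper's): a finite persistence module $M$ indexed on the totally ordered set $\{1 < 2 < \cdots < n\}$ decomposes as
\[
M \cong \bigoplus_{j} I[b_j, d_j],
\]
where $I[b,d]$ denotes the interval module which is $k$ at each object $b \le x \le d$ and zero elsewhere, with identity structure maps within the interval; the multiset of intervals $[b_j, d_j]$ records the barcodes of $M$.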
\vskip.2in
A central problem in the subject has been to determine what, if anything, holds true for more complex types of poset modules (ps-modules) - those indexed on finite partially ordered sets; most prominent among these being $n$-dimensional persistence modules \cite{cs, cz}. Gabriel's theorem \cite{pg, gr} implies that the only types for which the module is {\it always} tame are those whose underlying graph corresponds to a simply laced Dynkin diagram of type $A_n$, $D_n, n\ge 4$ or one of the exceptional graphs $E_6, E_7$ or $E_8$; a result indicating there is no simple way to generalize the 1-dimensional case.
\vskip.2in
However it is natural to ask whether the existence of some (naturally occurring) additional structure for such modules might lead to an appropriate generalization that is nevertheless consistent with Gabriel's theorem. This turns out to be the case. Before stating our main results, we will need to briefly discuss the framework in which we will be working. We consider finite ps-modules (referred to as $\cal C$-modules in this paper) equipped with i) no additional structure, ii) a {\it weak inner product} ($\cal WIPC$-module), iii) an {\it inner product} ($\cal IPC$-module). The ``structure theorem'' - in reality a sequence of theorems and lemmas - is based on the fundamental notion of a {\it multi-flag} of a vector space $V$, referring to a collection of subspaces of $V$ closed under intersections, and the equally important notion of {\it general position} for such an array. Using terminology made precise below, our results may be summarized as follows:
\begin{itemize}
\item Any $\cal C$-module admits a (non-unique) weak inner product structure (can be realized as a $\cal WIPC$-module). However, the obstruction to further refining this to an $\cal IPC$-structure is in general non-trivial, and we give an explicit example of a $\cal C$-module which cannot admit an inner product structure.
\item Associated to any finite $\cal WIPC$-module $M$ is a functor ${\cal F}:{\cal C}\to (multi\mhyphen flags/k)$ which associates to each $x\in obj({\cal C})$ a multi-flag ${\cal F}(M)(x)$ of the vector space $M(x)$, referred to as the {\it local structure} of $M$ at $x$.
\item This local structure is naturally the direct limit of a directed system of recursively defined multi-flags $\{{\cal F}_n(M), \iota_n\}$, and is called {\it stable} when this directed system stabilizes at a finite stage.
\item In the case $M$ is an $\cal IPC$-module with stable local structure
\begin{itemize}
\item it determines a {\it tame covering} of $M$ - a surjection of $\cal C$-modules $p_M:T(M)\surj M$ with $T(M)$ weakly tame, and with $p_M$ inducing an isomorphism of associated graded local structures. The projection $p_M$ is an isomorphism iff $M$ itself is weakly tame, which happens exactly when the multi-flag ${\cal F}(M)(x)$ is in general position for each object $x$. In this way $T(M)$ is the closest weakly tame approximation to $M$.
\item If, in addition, the category $\cal C$ is {\it holonomy free} (h-free), then each block of $T(M)$ may be non-canonically written as a finite direct sum of GBCs (generalized bar codes); in this case $T(M)$ is tame and $M$ is tame iff it is weakly tame.
\end{itemize}
\item In the case $M$ is equipped only with a $\cal WIPC$-structure, the tame cover may not exist, but one can still define the {\it generalized bar code vector} of $M$ which, in the case $M$ is an $\cal IPC$-module, measures the dimensions of the blocks of $M$. This vector does not depend on the choice of $\cal WIPC$-structure, and therefore is defined for all $\cal C$-modules $M$ with stable local structure.
\item All finite $n$-dimensional zig-zag modules have strongly stable local structure for all $n\ge 1$ (this includes all finite $n$-dimensional persistence modules, and strongly stable implies stable).
\item All finite $n$-dimensional persistence modules, in addition, admit a (non-unique) inner product structure.
\end{itemize}
A distinct advantage to the above approach is that the decomposition into blocks, although dependent on the choice of inner product, is {\it basis-free}; moreover the local structure is derived solely from the underlying structure of $M$ via the iterated computation of successively refined functors ${\cal F}_n(M)$ determined by images, kernels and intersections. For modules with stable local structure, the total dimension of the kernel of $p_M$ - referred to as the {\it excess} of $M$ - is an isomorphism invariant that provides a complete numerical obstruction to an $\cal IPC$-module $M$ being weakly tame. Moreover, the block diagram of $M$, codified by its tame cover $T(M)$, always exists for $\cal IPC$-modules with stable local structure, even when $M$ itself is not weakly tame. It would seem that the computation of the local structure of $M$ in this case should be amenable to algorithmic implementation. And although there are obstructions (such as holonomy) to stability for $\cal WIPC$-modules indexed on arbitrary finite ps-categories, these obstructions vanish for finite zig-zag modules in all dimensions (as indicated by the last bullet point). Additionally, although arbitrary $\cal C$-modules may not admit an inner product, all finite $n$-dimensional persistence modules do (for all dimensions $n\ge 1$), which is our main case of interest.
\vskip.2in
A brief organizational description: in section 2 we make precise the notion of multi-flags, general position, and the local structure of a $\cal WIPC$-module. The {\it excess} of the local structure - a whole number which measures the failure of general position - is defined. In section 3 we show that when $M$ is an $\cal IPC$-module, the associated graded local structure ${\cal F}(M)_*$ defines the blocks of $M$, which in turn can be used to create the tame cover via direct sum. Moreover, this tame cover is isomorphic to $M$ iff the excess is zero. We define the holonomy of the indexing category; for holonomy-free (h-free) categories, we show that this block sum may be further decomposed into a direct sum of generalized bar codes, yielding the desired generalization of the classical case mentioned above. As an illustration of the efficacy of this approach, we use it at the conclusion of section 3.2 to give a 2-sentence proof of the structure theorem for finite 1-dimensional persistence modules. In section 3.3 we show that the dimension vector associated to the tame cover can still be defined in the absence of an inner product structure, yielding an isomorphism invariant for arbitrary $\cal C$ modules. Section 3.4 investigates the obstruction to equipping a $\cal C$-module with an inner product, the main results being that i) it is in general non-trivial, and ii) the obstruction vanishes for all finite $n$-dimensional persistence modules. In section 3.5 we consider the related obstruction to being h-free; using the introduced notion of an elementary homotopy we show all finite $n$-dimensional persistence modules are strongly h-free (implying h-free). We also show how the existence of holonomy can prevent stability of the local structure. 
Finally section 3.6 considers the stability question; although (from the previous section) the local structure can fail to be stable in general, it is always so for i) finite $n$-dimensional zig-zag modules (which includes persistence modules as a special case) over an arbitrary field, and ii) any $\cal C$-module over a finite field.
\vskip.2in
In section 4 we introduce the notion of geometrically based $\cal C$-modules; those which arise via application of homology to a $\cal C$-diagram of simplicial sets or complexes with finite skeleta. We show that the example of section 3.4 can be geometrically realized, implying that geometrically based $\cal C$-modules need not admit an inner product. However, by a cofibrant replacement argument we show that any geometrically based $\cal C$-module admits a presentation by $\cal IPC$-modules, a result which is presently unknown for general $\cal C$-modules.
\vskip.2in
I would like to thank Dan Burghelea and Fedor Manin for their helpful comments on earlier drafts of this work, and Bill Dwyer for his contribution to the proof of the cofibrant replacement result presented in section 4.1.
\vskip.5in
\section{$\cal C$-modules}
\subsection{Preliminaries} Throughout we work over a fixed field $k$. Let $(vect/k)$ denote the category of finite dimensional vector spaces over $k$, and linear homomorphisms between such. Given a category $\cal C$, a {\it $\cal C$-module over $k$} is a covariant functor $M:{\cal C}\to (vect/k)$. The category $({\cal C}\mhyphen mod)$ of $\cal C$-modules then has these functors as objects, with morphisms represented in the obvious way by natural transformations. All functorial constructions on vector spaces extend to the objects of $({\cal C}\mhyphen mod)$ by objectwise application. In particular, one has the appropriate notions of
\begin{itemize}
\item monomorphisms, epimorphisms, short and long-exact sequences;
\item kernel and cokernel;
\item direct sums, Hom-spaces, tensor products;
\item linear combinations of morphisms.
\end{itemize}
With these constructs $({\cal C}\mhyphen mod)$ is an abelian category, without restriction on $\cal C$. By a {\it ps-category} we will mean the categorical representation of a poset $(S,\le)$, where the objects identify with the elements of $S$, while $Hom(x,y)$ contains a unique morphism iff $x\le y$ in $S$. A ps-category is {\it finite} iff it has a finite set of objects, and is {\it connected} if its nerve $N({\cal C})$ is connected. A ps-module is then a functor $F:{\cal C}\to (vect/k)$ from a ps-category $\cal C$ to $(vect/k)$. A morphism $\phi_{xy}:x\to y$ in $\cal C$ is {\it atomic} if it does not admit a non-trivial factorization (in terms of the partial ordering, this is equivalent to saying that if $x\le z\le y$ then either $z=x$ or $z=y$). Any morphism in $\cal C$ can be expressed (non-uniquely) as a composition of atomic morphisms. The {\it minimal graph} of $\cal C$ is then defined as the (oriented) subgraph of the 1-skeleton of $N({\cal C})$ with the same vertices, but whose edges are represented by atomic morphisms (not compositions of such). The minimal graph of $\cal C$ is denoted by $\Gamma({\cal C})$ and will be referred to simply as the graph of $\cal C$. We observe that $\cal C$ is connected iff $\Gamma({\cal C})$ is connected.
\vskip.2in
In all that follows we will assume $\cal C$ to be a {\it connected, finite ps-category}, so that all $\cal C$-modules are finite ps-modules. If $M$ is a $\cal C$-module and $\phi_{xy}\in Hom_{\cal C}(x,y)$, we will usually denote the linear map $M(\phi_{xy}): M(x)\to M(y)$ simply as $\phi_{xy}$ unless more precise notation is needed. A very special type of ps-category occurs when the partial ordering on the finite set is a total ordering. In this case the resulting categorical representation $\cal C$ is isomorphic to $\underline{n}$, which denotes the category corresponding to $\{1 < 2 < 3\dots < n\}$. A finite persistence module is, by definition, an $\underline{n}$-module for some natural number $n$. So the $\cal C$-modules we consider in this paper occur as natural generalizations of finite persistence modules.
\vskip.3in
\subsection{Inner product structures} It will be useful to consider two refinements of the category $(vect/k)$.
\begin{itemize}
\item $(WIP/k)$, the category whose objects are inner product (IP)-spaces $V = (V,<\ ,\ >_V)$ and whose morphisms are linear transformations (no compatibility required with respect to the inner product structures on the domain and range);
\item $(IP/k)$, the wide subcategory of $(WIP/k)$ whose morphisms $L:(V,<\ ,\ >_V)\to (W,<\ ,\ >_W)$ satisfy the property that $\wt{L}: ker(L)^\perp\to W$ is an isometric embedding, where $ker(L)^\perp\subset V$ denotes the orthogonal complement of $ker(L)\subset V$ in $V$ with respect to the inner product $<\ ,\ >_V$, and $\wt{L}$ is the restriction of $L$ to $ker(L)^\perp$.
\end{itemize}
There are obvious transformations
\[
(IP/k)\xrightarrow{\iota_{ip}} (WIP/k)\xrightarrow{p_{wip}} (vect/k)
\]
where the first map is the inclusion which is the identity on objects, while the second map forgets the inner product on objects and is the identity on transformations between two fixed objects.
\vskip.2in
Given a $\cal C$-module $M:{\cal C}\to (vect/k)$ a {\it weak inner product} on $M$ is a factorization
\[
M: {\cal C}\to (WIP/k)\xrightarrow{p_{wip}} (vect/k)
\]
while an {\it inner product} on $M$ is a further factorization through $(IP/k)$:
\[
M: {\cal C}\to (IP/k)\xrightarrow{\iota_{ip}}(WIP/k)\xrightarrow{p_{wip}} (vect/k)
\]
A $\cal WIPC$-module will refer to a $\cal C$-module $M$ equipped with a weak inner product, while an $\cal IPC$-module is a $\cal C$-module that is equipped with an actual inner product, in the above sense. As any vector space admits a (non-unique) inner product, we see that
\begin{proposition} Any $\cal C$-module $M$ admits a non-canonical representation as a $\cal WIPC$-module.
\end{proposition}
The question as to whether a $\cal C$-module $M$ can be represented as an $\cal IPC$-module, however, is much more delicate, and discussed in some detail below.
\vskip.2in
Given a $\cal C$-module $M$ and a morphism $\phi_{xy}\in Hom_{\cal C}(x,y)$, we set $KM_{xy} := \ker(\phi_{xy} : M(x)\to M(y))$. We note that a $\cal C$-module $M$ is an $\cal IPC$-module iff
\begin{itemize}
\item for all $x\in obj({\cal C})$, $M(x)$ comes equipped with an inner product $< , >_x$;
\item for all $\phi_{xy}\in Hom_{\cal C}(x,y)$, the map $\wt{\phi}_{xy} : KM_{xy}^\perp\to M(y)$ is an isometry, where $\wt{\phi}_{xy}$ denotes the restriction of $\phi_{xy}$ to $KM_{xy}^\perp = $ the orthogonal complement of $KM_{xy}\subset M(x)$ with respect to the inner product $< , >_x$. In other words,
\[
<\phi({\bf v}), \phi({\bf w})>_y = <{\bf v}, {\bf w}>_x,\qquad \forall\, {\bf v}, {\bf w}\in KM_{xy}^\perp
\]
\end{itemize}
\begin{definition} Let $V = (V, < , >)$ be an inner product (IP) space. If $W_1\subseteq W_2\subset V$, we write $(W_1\subset W_2)^\perp$ for the relative orthogonal complement of $W_1$ viewed as a subspace of $W_2$ equipped with the induced inner product, so that $W_2\cong W_1\oplus (W_1\subset W_2)^\perp$.
\end{definition}
Note that $(W_1\subset W_2)^\perp = W_1^\perp\cap W_2$ when $W_1\subseteq W_2$ and $W_2$ is equipped with the induced inner product.
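For illustration (a simple instance of this identity): taking $V = \mathbb R^3$ with the standard inner product, $W_1$ the span of ${\bf e}_1$, and $W_2$ the span of $\{{\bf e}_1, {\bf e}_2\}$, the relative complement $(W_1\subset W_2)^\perp$ is the span of ${\bf e}_2$, which is exactly $W_1^\perp\cap W_2$; note that it is strictly smaller than the full orthogonal complement $W_1^\perp$, which also contains ${\bf e}_3$.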
\vskip.3in
\subsection{Multi-flags and general position} Recall that a {\it flag} in a vector space $V$ consists of a finite sequence of proper inclusions beginning at $\{0\}$ and ending at $V$:
\[
\underline{W} := \{W_i\}_{0\le i\le m} = \left\{\{0\} = W_0\subset W_1\subset W_2\subset\dots\subset W_m = V\right\}
\]
If $\underline{m}$ denotes the totally ordered set $0 < 1 < 2 <\dots < m$ viewed as a category, $Sub(V)$ the category of subspaces of $V$ and inclusions of such, with $PSub(V)\subset Sub(V)$ the wide subcategory whose morphisms are proper inclusions, then there is an evident bijection
\[
\{\text{flags in } V\}\Leftrightarrow \underset{m\ge 1}{\coprod} Funct(\underline{m}, PSub(V))
\]
We will wish to relax this structure in two different ways. First, one may consider a sequence as above where not all of the inclusions are proper; we will refer to such an object as a {\it semi-flag}. Thus a semi-flag is represented by (and corresponds to) a functor $F:\underline{m}\to Sub(V)$ for some $m$. More generally, we define a {\it multi-flag} in $V$ to be a collection ${\cal F} = \{W_\alpha\subset V\}$ of subspaces of $V$ containing $\{0\}, V$, partially ordered by inclusion, and closed under intersection. It need not be finite.
\vskip.2in
Assume now that $V$ is equipped with an inner product. Given an element $W\subseteq V$ of a multi-flag $\cal F$ associated to $V$, let $S(W) := \{U\in {\cal F}\ |\ U\subsetneq W\}$ be the elements of $\cal F$ that are proper subsets of $W$, and set
\begin{equation}\label{eqn:one}
W_{\cal F} := \left(\left(\displaystyle\sum_{U\in S(W)} U\right) \subset W\right)^\perp
\end{equation}
\begin{definition}\label{def:genpos} For an IP-space $V$ and multi-flag $\cal F$ in $V$, the associated graded of $\cal F$ is the set of subspaces ${\cal F}_* := \{W_{\cal F}\ |\ W\in{\cal F}\}$. We say that $\cal F$ is in \underbar{general position} iff $V$ can be written as a direct sum of the elements of ${\cal F}_*$: $V\cong \displaystyle\bigoplus_{W\in{\cal F}} W_{\cal F}$.
\end{definition}
Note that, as $V\in{\cal F}$, it will always be the case that $V$ can be expressed as a sum of the subspaces in ${\cal F}_*$. The issue is whether that sum is a direct sum, and whether that happens is completely determined by the sum of the dimensions.
\begin{proposition} For any multi-flag $\cal F$ of an IP-space $V$, $\displaystyle\sum_{W\in{\cal F}} dim(W_{\cal F}) \ge dim(V)$. Moreover the two are equal iff $\cal F$ is in general position.
\end{proposition}
\begin{proof} The first claim follows from the fact that $\displaystyle\sum_{W\in{\cal F}} W = V$. Hence the sum of the dimensions on the left must be at least $dim(V)$, and equals $dim(V)$ precisely when the sum is a direct sum.
\end{proof}
\begin{definition} The excess of a multi-flag $\cal F$ of an IP-space $V$ is $e({\cal F}) := \left[\displaystyle\sum_{W\in{\cal F}} dim(W_{\cal F})\right] - dim(V)$.
\end{definition}
\begin{corollary} For any multi-flag $\cal F$, $e({\cal F})\ge 0$ and $e({\cal F}) = 0$ iff $\cal F$ is in general position.
\end{corollary}
Any semi-flag $\cal F$ of $V$ is in general position; this is a direct consequence of the total ordering. Also the multi-flag $\cal G$ formed by a pair of subspaces $W_1, W_2\subset V$ and their common intersection (together with $\{0\}$ and $V$) is always in general position. More generally, we have
\begin{lemma}\label{lemma:2} If ${\cal G}_i$, $i = 1,2$ are two semi-flags in the inner product space $V$ and $\cal F$ is the smallest multi-flag containing ${\cal G}_1$ and ${\cal G}_2$ (in other words, it is the multi-flag generated by these two semi-flags), then $\cal F$ is in general position.
\end{lemma}
\vskip.1in
Let ${\cal G}_i = \{W_{i,j}\}_{0\le j\le m_i}, i = 1,2$. Set $W^{j,k} := W_{1,j}\cap W_{2,k}$. Note that for each $i$, $\{W^{i,k}\}_{0\le k\le m_2}$ is a semi-flag in $W_{1,i}$, with the inclusion maps $W_{1,i}\hookrightarrow W_{1,i+1}$ inducing an inclusion of semi-flags $\{W^{i,k}\}_{0\le k\le m_2}\hookrightarrow \{W^{i+1,k}\}_{0\le k\le m_2}$. By induction on length in the first coordinate we may assume that the multi-flag of $W := W_{1,m_1-1}$ generated by $\wt{\cal G}_1 := \{W_{1,j}\}_{0\le j\le m_1-1}$ and $\wt{\cal G}_2 := \{W\cap W_{2,k}\}_{0\le k\le m_2}$ is in general position. To extend general position to the multi-flag on all of $V$, the induction step allows reduction to the case where the first semi-flag has only one middle term:
\begin{claim} Given $W\subseteq V$, viewed as a semi-flag ${\cal G}'$ of $V$ of length 3, and the semi-flag ${\cal G}_2 = \{W_{2,j}\}_{0\le j\le m_2}$ as above, the multi-flag of $V$ generated by ${\cal G}'$ and ${\cal G}_2$ is in general position.
\end{claim}
\begin{proof} The multi-flag $\cal F$ in question is constructed by intersecting $W$ with the elements of ${\cal G}_2$, producing the semi-flag ${\cal G}_2^W := W\cap {\cal G}_2 = \{W\cap W_{2,j}\}_{0\le j\le m_2}$ of $W$, which in turn includes into the semi-flag ${\cal G}_2$ of $V$. Constructed this way, the direct-sum splittings of $W$ induced by the semi-flag $W\cap {\cal G}_2$ and of $V$ induced by the semi-flag ${\cal G}_2$ are compatible: if we write $W_{2,j}$ as $(W\cap W_{2,j})\oplus (W\cap W_{2,j}\subset W_{2,j})^\perp$ for each $j$, then the orthogonal complement of $W_{2,k}$ in $W_{2,k+1}$ is given as the direct sum of the orthogonal complement of $(W\cap W_{2,k})$ in $(W\cap W_{2,k+1})$ and the orthogonal complement of $(W\cap W_{2,k}\subset W_{2,k})^\perp$ in $(W\cap W_{2,k+1}\subset W_{2,k+1})^\perp$. This yields a direct-sum decomposition of $V$ in terms of the associated graded terms of $\cal F$, completing the proof both of the claim and of the lemma.
\end{proof}
On the other hand, one can construct simple examples of multi-flags which are not - in fact cannot be - in general position, as the following illustrates.
\begin{example} Let $\mathbb R\cong W_i\subset\mathbb R^2$, $i = 1,2,3$, be three distinct 1-dimensional subspaces of $\mathbb R^2$ intersecting in the origin, and let $\cal F$ be the multi-flag generated by this data. Then $\cal F$ is not in general position.
\end{example}
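The excess here is easily computed directly from the definitions (with respect to any inner product on $\mathbb R^2$). The multi-flag is ${\cal F} = \{\{0\}, W_1, W_2, W_3, \mathbb R^2\}$. For each $i$ one has $S(W_i) = \{\{0\}\}$ and hence $(W_i)_{\cal F} = W_i$, of dimension 1, while $S(\mathbb R^2)$ contains $W_1, W_2, W_3$, whose sum is already all of $\mathbb R^2$, so that $(\mathbb R^2)_{\cal F} = \{0\}$. Thus
\[
\sum_{W\in {\cal F}} dim(W_{\cal F}) = 3 > 2 = dim(\mathbb R^2)
\]
and $e({\cal F}) = 1$; no choice of inner product can bring this multi-flag into general position.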
\vskip.2in
Given an arbitrary collection of subspaces $T = \{W_\alpha\}$ of an IP-space $V$, the multi-flag generated by $T$ is the smallest multi-flag containing each element of $T$. It can be constructed as the closure of $T$ under the operations i) inclusion of $\{0\}, V$ and ii) taking finite intersections.
\vskip.2in
[Note: Example 1 also illustrates the important distinction between a configuration of subspaces being of {\it finite type} (having finitely many isomorphism classes of configurations), and the stronger property of {\it tameness} (the multi-flag generated by the subspaces is in general position).]
\vskip.2in
A multi-flag $\cal F$ of $V$ is a poset in a natural way; if $V_1,V_2\in {\cal F}$, then $V_1\le V_2$ as elements in $\cal F$ iff $V_1\subseteq V_2$ as subspaces of $V$. If $\cal F$ is a multi-flag of $V$, $\cal G$ a multi-flag of $W$, a {\it morphism} of multi-flags $(L,f):{\cal F}\to {\cal G}$ consists of
\begin{itemize}
\item a linear map from $L:V\to W$ and
\item a map of posets $f:{\cal F}\to {\cal G}$ such that
\item for each $U\in {\cal F}$, $L(U)\subseteq f(U)$.
\end{itemize}
Then $\{multi\mhyphen flags\}$ will denote the category of multi-flags and morphisms of such.
\vskip.2in
If $L:V\to W$ is a linear map of vector spaces and $\cal F$ is a multi-flag of $V$, the multi-flag generated by $\{L(U)\ |\ U\in {\cal F}\}\cup \{W\}$ is a multi-flag of $W$ which we denote by $L({\cal F})$ (or $\cal F$ pushed forward by $L$). In the other direction, if $\cal G$ is a multi-flag of $W$, we write $L^{-1}[{\cal G}]$ for the multi-flag $\{L^{-1}[U]\ |\ U\in {\cal G}\}\cup \{\{0\}\}$ of $V$ (i.e., $\cal G$ pulled back by $L$; as intersections are preserved under taking inverse images, this will be a multi-flag once we include - if needed - $\{0\}$). Obviously $L$ defines morphisms of multi-flags ${\cal F}\xrightarrow{(L,\iota)} L({\cal F})$, $L^{-1}[{\cal G}]\xrightarrow{(L,\iota')} {\cal G}$.
\vskip.3in
\subsection{The local structure of $\cal C$-modules}
Assume first that $M$ is an $\cal WIPC$-module. A {\it multi-flag of $M$} or {\it $M$-multi-flag} is a functor $F:{\cal C}\to \{multi\mhyphen flags\}$ which assigns
\begin{itemize}
\item to each $x\in obj({\cal C})$ a multi-flag $F(x)$ of $M(x)$;
\item to each $\phi_{xy}:M(x)\to M(y)$ a morphism of multi-flags $F(x)\to F(y)$.
\end{itemize}
To any $\cal WIPC$-module $M$ we may associate the multi-flag $F_0$ which assigns to each $x\in obj({\cal C})$ the multi-flag $\{\{0\}, M(x)\}$ of $M(x)$. This is referred to as the {\it trivial} multi-flag of $M$.
\vskip.2in
A $\cal WIPC$-module $M$ determines a multi-flag on $M$. Precisely, the {\it local structure} ${\cal F}(M)$ of $M$ is defined recursively at each $x\in obj({\cal C})$ as follows: let $S_1(x)$ denote the set of morphisms of $\cal C$ originating at $x$, and $S_2(x)$ the set of morphisms terminating at $x$, $x\in obj({\cal C})$ (note that both sets contain $Id_x:x\to x$). Then
\vskip.05in
\begin{enumerate}
\item[\underbar{LS1}] ${\cal F}_0(M)(x) =$ the multi-flag of $M(x)$ generated by
\[
\{\ker(\phi_{xy}:M(x)\to M(y))\}_{\phi_{xy}\in S_1(x)}\cup \{im(\phi_{zx} : M(z)\to M(x))\}_{\phi_{zx}\in S_2(x)};
\]
\item[\underbar{LS2}] For $n\ge 0$, ${\cal F}_{n+1}(M)(x) =$ the multi-flag of $M(x)$ generated by
\begin{itemize}
\item[{LS2.1}] $\phi_{xy}^{-1}[W]\subset M(x)$, where $W\in{\cal F}_n(M)(y)$ and $\phi_{xy}\in S_1(x)$;
\item [{LS2.2}] $\phi_{zx}[W]\subset M(x)$, where $W\in{\cal F}_n(M)(z)$ and $\phi_{zx}\in S_2(x)$;
\end{itemize}
\item [\underbar{LS3}]${\cal F}(M)(x) = \varinjlim {\cal F}_n(M)(x)$.
\end{enumerate}
More generally, starting with a multi-flag $F$ on $M$, the local structure of $M$ relative to $F$ is arrived at in exactly the same fashion, but starting in LS1 with the multi-flag generated (at each object $x$) by ${\cal F}_0(M)(x)$ and $F(x)$. The resulting direct limit is denoted ${\cal F}^F(M)$. Thus the local structure of $M$ (without superscript) is the local structure of $M$ relative to the trivial multi-flag on $M$. In almost all cases we will only be concerned with the local structure relative to the trivial multi-flag on $M$.
\begin{proposition}\label{prop:invimage} For all $k\ge 1$, $W\in {\cal F}_k(M)(x)$, and $\phi_{zx}:M(z)\to M(x)$, there is a unique maximal element $W'\in {\cal F}_{k+1}(M)(z)$ with $\phi_{zx}(W') = W$.
\end{proposition}
\begin{proof} This is an immediate consequence of property (LS2.1).
\end{proof}
\begin{definition} The local structure of a $\cal WIPC$-module $M$ is the functor ${\cal F}(M)$, which associates to each vertex $x\in obj({\cal C})$ the multi-flag ${\cal F}(M)(x)$.
\end{definition}
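As a simple worked example of this construction (with the vertex spaces given the standard inner products for definiteness), consider the $\underline{3}$-module $M$ with $M(1) = k^2$, $M(2) = M(3) = k$, $\phi_{12}$ the projection with kernel the span of ${\bf e}_2$, and $\phi_{23} = Id$. At the object 1, LS1 yields the multi-flag generated by $\ker(\phi_{12}) = \ker(\phi_{13})$, so ${\cal F}_0(M)(1) = \{\{0\}, \ker(\phi_{12}), k^2\}$; at the objects 2 and 3 all kernels are trivial and all incoming images are everything, so ${\cal F}_0(M)(2) = {\cal F}_0(M)(3) = \{\{0\}, k\}$. The operations LS2.1 and LS2.2 produce no new subspaces (for instance $\phi_{12}^{-1}[\{0\}] = \ker(\phi_{12})$ already appears, and $\phi_{12}[\ker(\phi_{12})] = \{0\}$), so the local structure stabilizes at the first stage, and the associated graded at the object 1 consists of $\ker(\phi_{12})$ together with its orthogonal complement; in particular $e(M) = 0$.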
A key question arises as to whether the direct limit used in defining ${\cal F}(M)(x)$ stabilizes at a finite stage. For infinite fields $k$ it turns out that this property is related to the existence of {\it holonomy}, as we will see below. For now, we include it as a definition.
\begin{definition} The local structure on $M$ is \underbar{locally stable} at $x\in obj({\cal C})$ iff there exists $N = N_x$ such that ${\cal F}_n(M)(x)\inj {\cal F}_{n+1}(M)(x)$ is the identity map whenever $n\ge N$. It is \underbar{stable} if it is locally stable at each object. It is \underbar{strongly stable} if for all \underbar{finite} multi-flags $F$ on $M$ there exists $N = N(F)$ such that ${\cal F}^F(M)(x) = {\cal F}^F_N(M)(x)$ for all $x\in obj({\cal C})$.
\end{definition}
In almost all applications of this definition we will only be concerned with stability, not the related notion of strong stability. The one exception occurs in the statement and proof of Theorem \ref{thm:6} below.
\vskip.2in
For each $0\le k\le \infty$ and at each object $x$ we may consider the associated graded ${\cal F}_k(M)_*(x)$ of ${\cal F}_k(M)(x)$. Stabilization via direct limit in the construction of ${\cal F}(M)$ yields a multi-flag structure that is preserved under the morphisms of the $\cal C$-module $M$. The following result identifies the effect of a morphism on the associated graded limit ${\cal F}(M)_*$, under the more restrictive hypothesis that $M$ is equipped with an inner product structure (which guarantees that the relative orthogonal complements coming from the associated graded are compatible under the morphisms of $M$).
\begin{theorem}\label{thm:1} Let $M$ be an $\cal IPC$-module with stable local structure. Then for all $k\ge 0$, $x,y,z\in obj({\cal C})$, $W\in {\cal F}(M)(x)$, $\phi_{zx}:M(z)\to M(x)$, and $\phi_{xy}:M(x)\to M(y)$
\begin{enumerate}
\item The morphisms of $M$ and their inverses induce well-defined maps of associated graded sets
\begin{gather*}
\phi_{xy}:{\cal F}(M)_*(x)\to {\cal F}(M)_*(y)\\
\phi_{zx}^{-1}: {\cal F}(M)_*(x)\to {\cal F}(M)_*(z)
\end{gather*}
\item $\phi_{xy}(W)\in {\cal F}(M)(y)$, and either $\phi_{xy}(W_{\cal F}) = \{0\}$, or $\phi_{xy}:W_{\cal F}\xrightarrow{\cong}\phi_{xy}(W_{\cal F}) = \left(\phi_{xy}(W)\right)_{\cal F}$, the element in the associated graded ${\cal F}(M)_*(y)$ induced by $\phi_{xy}(W)$;
\item either $im(\phi_{zx})\cap W_{\cal F} = \{0\}$, or there is a canonically defined element $U_{\cal F} = \left(\phi_{zx}^{-1}[W_{\cal F}]\right)_{\cal F} = \left(\phi_{zx}^{-1}[W]\right)_{\cal F}\in {\cal F}(M)_*(z)$ with $\phi_{zx}:U_{\cal F}\xrightarrow{\cong} W_{\cal F}$.
\end{enumerate}
\end{theorem}
\begin{proof} Stabilization with respect to the operations (LS1) and (LS2), as given in (LS3), implies that for any object $x$, morphisms $\phi_{xy},\phi_{zx}$, and $W\in {\cal F}(M)(x)$, one has $\phi_{xy}(W)\in {\cal F}(M)(y)$ and $\phi_{zx}^{-1}[W]\in {\cal F}(M)(z)$, verifying the first statement. Let $K = \ker(\phi_{xy})\cap W$. Then either $K=W$ or $K$ is a proper subset of $W$. In the former case, $\phi_{xy}(W_{\cal F}) = \{0\}$, while in the latter we see (again, by stabilization) that $K\in S(W)$ and so $K\cap W_{\cal F} = \{0\}$, implying that $W_{\cal F}$ maps isomorphically to its image under $\phi_{xy}$. Moreover, in this last case $\phi_{xy}$ will map $S(W)$ surjectively to $S(\phi_{xy}(W))$, implying the equality $\phi_{xy}(W_{\cal F}) = \left(\phi_{xy}(W)\right)_{\cal F}$.
\vskip.2in
Now given $\phi_{zx}:M(z)\to M(x)$, let $U = \phi_{zx}^{-1}[W]\in {\cal F}(M)(z)$. As before, the two possibilities are that $\phi_{zx}(U) = W$ or that $T := \phi_{zx}(U)\subsetneq W$. In the first case, $\phi_{zx}$ induces a surjective map of sets $S(U)\surj S(W)$, and so will map $U_{\cal F}$ surjectively to $W_{\cal F}$. By statement 2 of the theorem, this surjection must be an isomorphism. In the second case we see that the intersection $im(\phi_{zx})\cap W$ is an element of $S(W)$ (as ${\cal F}(M)(x)$ is closed under intersections), and so $W_{\cal F}\cap im(\phi_{zx}) = \{0\}$ by the definition of $W_{\cal F}$.
\end{proof}
\vskip.1in
Using the local structure of $M$, we define the {\it excess} of a $\cal WIPC$-module $M$ as
\[
e(M) = \sum_{x\in obj({\cal C})} e({\cal F}(M)(x))
\]
We say ${\cal F}(M)$ is {\it in general position} at the vertex $x$ iff ${\cal F}(M)(x)$ is in general position as defined above; in other words, if $e({\cal F}(M)(x)) = 0$. Thus ${\cal F}(M)$ is {\it in general position} (without restriction) iff $e(M) = 0$. The previous theorem implies
\begin{corollary}\label{cor:2} ${\cal F}(M)$ is {\it in general position} at the vertex $x$ if and only if $e({\cal F}(M)(x)) = 0$. It is in general position (without restriction) iff $e(M) = 0$.
\end{corollary}
Note that as $M(x)$ is finite-dimensional for each $x\in obj({\cal C})$, ${\cal F}(M)(x)$ must be locally stable at $x$ if it is in general position (in fact, general position is a much stronger requirement).
\vskip.2in
Now assume given a $\cal C$-module $M$ without any additional structure. A multi-flag on $M$ is then defined to be a multi-flag on $M$ after equipping $M$ with an arbitrary $\cal WIPC$-structure. Differing choices of weak inner product on $M$ affect the choice of relative orthogonal complements appearing in the associated graded at each object via equation (\ref{eqn:one}). However the constructions in LS1, LS2, and LS3 are independent of the choice of inner product, as are the definitions of excess and stability at an object and also for the module as a whole. So the results stated above for $\cal WIPC$-modules may be extended to $\cal C$-modules. The only result requiring an actual $\cal IPC$-structure is Theorem \ref{thm:1}.
\vskip.5in
\section{Statement and proof of the main results} In discussing our structural results, we first restrict to the case $M$ is an $\cal IPC$-module, and then investigate what properties still hold for more general $\cal WIPC$-modules.
\subsection{Blocks, generalized barcodes, and tame $\cal C$-modules} To understand how blocks and generalized barcodes arise, we first need to identify the type of subcategory on which they are supported. For a connected poset category $\cal C$, its oriented (minimal) graph $\Gamma = \Gamma({\cal C})$ was defined above. A subgraph $\Gamma'\subset\Gamma$ will be called {\it admissible} if
\begin{itemize}
\item it is connected;
\item it is pathwise full: if $v_1e_1v_2e_2\dots v_{k-1}e_{k-1}v_k$ is an oriented path in $\Gamma'$ connecting $v_1$ and $v_k$, and $(v_1=w_1)e'_1w_2e'_2\dots w_{l-1}e'_{l-1}(w_l = v_k)$ is any other path in $\Gamma$ connecting $v_1$ and $v_k$ then the path $v_1=w_1e'_1w_2e'_2\dots w_{l-1}e'_{l-1}w_l$ is also in $\Gamma'$.
\end{itemize}
Any admissible subgraph $\Gamma'$ of $\Gamma$ determines a unique subcategory ${\cal C}'\subset {\cal C}$ for which $\Gamma({\cal C}') = \Gamma'$, and we will call a subcategory ${\cal C}'\subset {\cal C}$ admissible if $\Gamma({\cal C}')$ is an admissible subgraph of $\Gamma({\cal C})$. If $M'\subset M$ is a sub-$\cal C$-module of the $\cal C$-module $M$, its {\it support} will refer to the full subcategory ${\cal C}(M')\subset {\cal C}$ generated by $\{x\in obj({\cal C})\ |\ M'(x)\ne \{0\} \}$. It is easily seen that being a submodule of $M$ (rather than just a collection of subspaces indexed on the objects of $\cal C$) implies that the support of $M'$, if connected, is an admissible subcategory of $\cal C$ in the above sense. A {\it block} will refer to a sub-$\cal C$-module $M'$ of $M$ for which $\phi_{xy}:M'(x)\xrightarrow{\cong} M'(y)$ whenever $x,y\in obj({\cal C}(M'))$ (any morphism between non-zero vertex spaces of $M'$ is an isomorphism). Finally, $M'$ is a {\it generalized barcode} (GBC) for $M$ if it is a block with $dim(M'(x)) = 1$ for all $x\in obj({\cal C}(M'))$.
\vskip.2in
It is evident that if $M'\subset M$ is a GBC, it is an indecomposable $\cal C$-submodule of $M$. If $\Gamma$ represents an oriented graph, we write $\ov{\Gamma}$ for the underlying unoriented graph. Unlike the particular case of persistence (or more generally zig-zag) modules, blocks occurring as $\cal C$-modules for an arbitrary ps-category may not decompose into a direct sum of GBCs. The following two simple oriented graphs illustrate the obstruction.
\vskip.5in
({\rm D1})\vskip-.4in
\centerline{
\xymatrix{
\bullet\ar[rr]\ar[dd] && \bullet\ar[dd] &&&& \bullet && \bullet\ar[dd]\ar[ll]\\
& \Gamma_1 & &&&& & \Gamma_2 &\\
\bullet\ar[rr] && \bullet &&&& \bullet\ar[uu]\ar[rr] && \bullet
}}
\vskip.2in
For a block represented by the graph $\Gamma_1$ on the left, the fact that $\cal C$ is a poset category implies that, even though the underlying unoriented graph is a closed loop, going once around the loop yields a composition of isomorphisms which is the identity. As a consequence, it is easily seen that a block whose support is an admissible category ${\cal C}'$ with graph $\Gamma({\cal C}') = \Gamma_1$ can be written as a direct sum of GBCs indexed on ${\cal C}'$ (see the lemma below). However, if the graph of the supporting subcategory is $\Gamma_2$ as shown on the right, then the partial ordering imposes no restrictions on the composition of isomorphisms and their inverses, starting and ending at the same vertex. For such a block with base field $\mathbb R$ or $\mathbb C$, the moduli space of isomorphism types of blocks of a given vertex dimension $n$ is non-discrete for all $n>1$ and can be identified with the space of $n\times n$ Jordan normal forms. The essential difference between these two graphs lies in the fact that the category on the left exhibits one initial and one terminal object, while the category on the right exhibits two of each. Said another way, the zig-zag length of the simple closed loop on the left is two, while on the right it is four. We remark that the obstruction here is not simply a function of the underlying unoriented graph, as $\ov{\Gamma}_1 = \ov{\Gamma}_2$ in the above example. A closed loop in $\Gamma({\cal C})$ is an {\it h-loop} if it is able to support a sequence of isomorphisms whose composition going once around, starting and ending at the same vertex, is other than the identity map (``h'' for holonomy). Thus $\Gamma_2$ above exhibits an h-loop. Note that the existence of an h-loop implies the existence of a simple h-loop.
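An explicit h-loop on $\Gamma_2$ can be realized as follows: assign to each of the four vertices the space $k^2$, let three of the four arrows act as the identity, and let the remaining arrow act by the Jordan block $J = \left(\begin{smallmatrix} 1 & 1\\ 0 & 1\end{smallmatrix}\right)$. Traversing the loop once, inverting the two arrows traversed against their orientation, yields the composite $J^{\pm 1}$ (the sign of the exponent depending on the orientation of the chosen arrow), which is not diagonalizable over any field. Any splitting of this block into GBCs would produce a basis of eigenvectors for the composite, so no such splitting exists, even though each individual morphism is an isomorphism.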
\vskip.2in
We want explicit criteria identifying precisely when this can happen. One might think that the zig-zag length of a simple closed loop is enough, but this turns out not to be the case. The following illustrates what can happen.
\vskip.5in
({\rm D2})\vskip-.4in
\centerline{
\xymatrix{
& \bullet\ar[rr]\ar[dd] && \bullet\ar[rr]\ar@{-->}[dd] && \bullet\ar[dd]\\
\\
\Gamma({\cal C}'): & \bullet\ar[dd]\ar@{-->}[rr] && \bullet\ar[rr]\ar[dd] && \bullet\\
\\
&\bullet\ar[rr] && \bullet &&
}}
\vskip.2in
Suppose $\cal C$ indexes $3\times 3$ two-dimensional persistence modules (so that $\Gamma({\cal C})$ looks like an oriented two-dimensional $3\times 3$ lattice, with arrows pointing down and also to the right). Suppose ${\cal C}'\subset {\cal C}$ is an admissible subcategory of $\cal C$ with $\Gamma({\cal C}')$ containing the above simple closed curve indicated by the solid arrows. The zig-zag length of the curve is four, suggesting that it might support holonomy and so be a potential h-loop. However, the admissibility condition forces ${\cal C}'$ to also contain the morphisms represented by the dotted arrows, resulting in three copies of the graph $\Gamma_1$ above. Including these morphisms one sees that holonomy in this case is not possible.
\vskip.2in
Given an admissible subcategory ${\cal C}'$ of $\cal C$, we will call ${\cal C}'$ {\it h-free} if $\Gamma({\cal C}')$ does not contain any simple closed h-loops (and therefore no closed h-loops).
\begin{lemma}\label{lemma:3} Any block $M'$ of $M$ whose support ${\cal C}'$ is h-free can be written (non-uniquely) as a finite direct sum of GBCs all having the same support as $M'$.
\end{lemma}
\begin{proof} Fix $x\in obj(supp(M'))$ and a basis $\{{\bf v}_1,\dots,{\bf v}_n\}$ for $M'(x)$. Let $y\in obj(supp(M'))$, and choose a path $xe_1x_1e_2x_2\dots x_{k-1}e_ky$ from $x_0 = x$ to $x_k = y$ in $\ov{\Gamma}(M')$. Each edge $e_j$ is represented by an invertible linear map $\lambda_j = (\phi_{x_{j-1}x_j})^{\pm 1}$, with
\[
\lambda := \lambda_k\circ\lambda_{k-1}\circ\dots\circ\lambda_1:M'(x)\xrightarrow{\cong} M'(y)
\]
As ${\cal C}' = supp(M')$ is h-free, the isomorphism between $M'(x)$ and $M'(y)$ resulting from the above construction is independent of the choice of path in $\ov{\Gamma}(M')$ from $x$ to $y$, and is uniquely determined by the ${\cal C}'$-module $M'$. Hence the basis $\{{\bf v}_1,\dots,{\bf v}_n\}$ for $M'(x)$ determines one for $M'(y)$ given as $\{\lambda({\bf v}_1),\dots,\lambda({\bf v}_n)\}$ which is independent of the choice of path connecting these two vertices. In this way the basis at $M'(x)$ may be compatibly extended to all other vertices of ${\cal C}'$, due to the connectivity hypothesis. The result is a system of {\it compatible bases} for the ${\cal C}'$-module $M'$, from which the splitting of $M'$ into a direct sum of GBCs each supported by ${\cal C}'$ follows.
\end{proof}
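The path-transport argument in the proof can be mimicked numerically. In this Python/NumPy sketch (our toy example), the edges of a commuting square - one initial and one terminal object, as for $\Gamma_1$ - carry invertible maps, and a basis at the initial vertex is transported to the terminal vertex along both paths; h-freeness shows up as path-independence:

```python
import numpy as np

# A commuting square a -> b -> d and a -> c -> d (the shape of Gamma_1).
# The poset relation forces the square to commute, so A_cd is determined.
A_ab = np.array([[1.0, 1.0], [0.0, 1.0]])
A_bd = np.array([[2.0, 0.0], [0.0, 1.0]])
A_ac = np.array([[1.0, 0.0], [0.0, 3.0]])
A_cd = A_bd @ A_ab @ np.linalg.inv(A_ac)   # chosen so the square commutes

basis_at_a = np.eye(2)                      # columns are v_1, v_2
via_b = A_bd @ A_ab @ basis_at_a            # transport along a -> b -> d
via_c = A_cd @ A_ac @ basis_at_a            # transport along a -> c -> d
print(np.allclose(via_b, via_c))            # True: transport is path-independent
```

The transported columns form the compatible basis at the terminal vertex; iterating this over a connected h-free support graph is exactly the construction in the proof.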
A $\cal C$-module $M$ is said to be {\it weakly tame} iff it can be expressed as a direct sum of blocks. It is {\it strongly tame} or simply {\it tame} if, in addition, each of those blocks may be further decomposed as a direct sum of GBCs.
\vskip.3in
\subsection{The main results} We first establish the relation between non-zero elements of the associated graded at an object of $\cal C$ and their corresponding categorical support. We assume throughout this section that $M$ is an $\cal IPC$-module with stable local structure.
\vskip.2in
Suppose $W\in {\cal F}(M)(x)$ with $0\ne W_{\cal F}\in {\cal F}(M)_*(x)$. Then $W_{\cal F}$ uniquely determines a subcategory ${\cal C}(W_{\cal F})\subset {\cal C}$ satisfying the following three properties:
\begin{enumerate}
\item $x\in obj({\cal C}(W_{\cal F}))$;
\item For each path $xe_1x_1e_2\dots x_{k-1}e_ky$ in $\Gamma({\cal C}(W_{\cal F}))$ beginning at $x$, with each edge $e_j$ represented by $\lambda_j = (\phi_{x_{j-1}x_j})^{\pm 1}$ ($\phi_{x_{j-1}x_j}$ a morphism in $\cal C$), $W_{\cal F}$ maps isomorphically under the composition $\lambda = \lambda_k\circ\lambda_{k-1}\circ\dots\circ\lambda_1$ to $0\ne \lambda(W_{\cal F})\in {\cal F}(M)_*(y)$;
\item ${\cal C}(W_{\cal F})$ is the largest subcategory of $\cal C$ satisfying properties 1. and 2.
\end{enumerate}
We refer to ${\cal C}(W_{\cal F})$ as the {\it block category} associated to $W_{\cal F}$. It is easy to see that $\varnothing\ne {\cal C}(W_{\cal F})$, and moreover that ${\cal C}(W_{\cal F})$ is admissible as defined above. Now let ${\cal S}({\cal C})$ denote the set of admissible subcategories of $\cal C$. If $x\in obj({\cal C})$ we write ${\cal S}_x{\cal C}$ for the subset of ${\cal S}({\cal C})$ consisting of those admissible ${\cal C}'\subset {\cal C}$ with $x\in obj({\cal C}')$.
\begin{lemma}\label{lemma:4} For each $x\in obj({\cal C})$ and $\cal IPC$-module $M$, the assignment
\begin{gather*}
{\cal A}_x: {\cal F}(M)_*(x)\backslash\{0\}\longrightarrow {\cal S}_x{\cal C}\\
0\ne W_{\cal F}\mapsto {\cal C}(W_{\cal F})
\end{gather*}
defines an injection from ${\cal F}(M)_*(x)\backslash\{0\}$ to the set of admissible subcategories of $\cal C$ which occur as the block category of a non-zero element of ${\cal F}(M)_*(x)$.
\end{lemma}
\begin{proof} The fact that ${\cal C}(W_{\cal F})$ is uniquely determined by $W_{\cal F}$ ensures the map is well-defined. To see that the map is 1-1, we observe that corresponding to each ${\cal C}'\in {\cal S}_x{\cal C}$ is a unique maximal $W\in {\cal F}(M)(x)$ with image-kernel-intersection data determined by the manner in which each vertex $y\in obj({\cal C}')$ connects back to $x$. More precisely, the subspace $W$ is the largest element of ${\cal F}M(x)$ satisfying the property that for every
\begin{itemize}
\item $y\in obj({\cal C}')$;
\item zig-zag sequence $p$ of morphisms in ${\cal C}'$ connecting $x$ and $y$;
\item morphism $\phi_{yz}$ in $\cal C$ from $y$ to $z\in obj({\cal C})\backslash obj({\cal C}')$;
\end{itemize}
the pull-back and push-forward of $ker(\phi_{yz})$ along the zig-zag path $p$ back from $M(y)$ to $M(x)$ yields a subspace of $M(x)$ containing $W$. This clearly determines $W$ uniquely; note that the conditions may result in $W = \{0\}$. Restricting to the image of ${\cal A}_x$ we arrive at the desired result.
\end{proof}
Write ${\cal AS}({\cal C})$ for the subset of ${\cal S}({\cal C})$ consisting of those admissible subcategories ${\cal C}'$ for which there exists $x\in obj({\cal C})$ with ${\cal C}'\in im({\cal A}_x)\subset {\cal S}_x{\cal C}$. This lemma, in conjunction with Theorem \ref{thm:1}, implies
\begin{theorem}\label{thm:2} Let $M$ be an $\cal IPC$-module. Each ${\cal C}'\in {\cal AS}({\cal C})$ uniquely determines a block $\cal C$-submodule $M({\cal C}')$ of $M$, where $M({\cal C}')(x) = $ the unique non-zero element $W_{\cal F}$ of ${\cal F}(M)_*(x)$ for which ${\cal C}(W_{\cal F}) = {\cal C}'$.
\end{theorem}
\begin{proof} Fix ${\cal C}'\in {\cal AS}({\cal C})$ and $x\in obj({\cal C}')$. By Theorem \ref{thm:1} and Lemma \ref{lemma:4}, for any $\phi_{xy'}\in Hom({\cal C}')$, $W_{\cal F} := {\cal A}_x^{-1}({\cal C}')$ maps isomorphically under $\phi_{xy'}$ to $\phi_{xy'}(W_{\cal F})\in {\cal F}(M)_*(y')$.
\vskip.1in
Now any other vertex $y\in\Gamma({\cal C}')$ is connected to $x$ by a zig-zag path of oriented edges. Let $\lambda_{xy}$ represent such a path, corresponding to a composition of morphisms and their inverses. As ${\cal C}'$ is not required to be h-free, the resulting isomorphism between $W_{\cal F}$ and $\lambda_{xy}(W_{\cal F})$ is potentially dependent on the choice of path $\lambda_{xy}$ in $\Gamma({\cal C}')$; however, the subspace $\lambda_{xy}(W_{\cal F})$ itself is not. Moreover the same lemma and theorem also imply that for any $\phi_{xz}\in Hom({\cal C})$ with $z\in obj({\cal C})\backslash obj({\cal C}')$, $W_{\cal F}$ maps to $0$ under $\phi_{xz}$. This is all that is needed to identify an actual submodule of $M$ by the assignments
\begin{itemize}
\item $M({\cal C}')(y) = \lambda_{xy}(W_{\cal F})$ for $\lambda_{xy}$ a zig-zag path between $x$ and $y$ in $\Gamma({\cal C}')$;
\item $M({\cal C}')(z) = 0$ for $z\in obj({\cal C})\backslash obj({\cal C}')$
\end{itemize}
As defined, $M({\cal C}')$ is a block, completing the proof.
\end{proof}
We define the {\it (weakly) tame cover} of $M$ as
\begin{equation}
T(M) = \underset{{\cal C}'\in {\cal AS}({\cal C})}{\bigoplus} M({\cal C}')
\end{equation}
with the projection $p_M:T(M)\surj M$ given on each summand $M({\cal C}')$ by the inclusion provided by the previous theorem. We are now in a position to state the main result.
\begin{theorem}\label{thm:3} An $\cal IPC$-module $M$ is weakly tame iff its excess $e(M) = 0$. In this case the decomposition into a direct sum of blocks is basis-free, depending only on the underlying $\cal IPC$-module $M$, and is unique up to reordering. If in addition $\cal C$ is h-free then $M$ is tame, as each block decomposes as a direct sum of GBCs, uniquely up to reordering after fixing a choice of basis at a single vertex.
\end{theorem}
\begin{proof} The excess at a given vertex $x$ is zero iff the projection map at that vertex is an isomorphism, as the excess is equal to the dimension of the kernel of $p_M$ at $x$. Moreover, if $\cal C$ is h-free then each block further decomposes in the manner described by Lemma \ref{lemma:3}; the precise way in which this decomposition occurs will depend on a choice of basis at a vertex in the support of that block, but once that has been chosen, the basis at each other vertex is uniquely determined. All that remains is to decide the order in which to write the direct sum.
\end{proof}
Note that the excess of $M$ need not be finite. If $\cal C$ is not h-free and $M$ exhibits holonomy at a vertex $x$, then the tame cover of $M$ may be infinite-dimensional at $x$, making the overall excess infinite. Nevertheless, in all cases $T(M)$ should be viewed as the ``closest" weakly tame approximation to $M$, which equals $M$ if and only if $M$ itself is weakly tame. Another way to view this proximity is to observe that $T(M)$ and the projection to $M$ are constructed in such a way that $p_M$ induces a global isomorphism of associated graded objects
\[
p_M: {\cal F}(T(M))_*\xrightarrow{\cong} {\cal F}(M)_*
\]
so that $T(M)$ is uniquely characterized up to isomorphism as the weakly tame $\cal C$-module which maps to $M$ by a map which induces an isomorphism of the associated graded local structure.
\vskip.2in
To conclude this subsection we illustrate the efficiency of this approach by giving a geodesic proof of the classical structure theorem for finite 1-dimensional persistence modules. Let us first observe that such a module $M$ may be equipped with an inner product structure; the proof follows easily by induction on the length of $M$. So for the following theorem we may assume such an IP-structure has been given.
\begin{theorem}\label{thm:4} If ${\cal C} \cong \underline{n}$ is the categorical representation of a finite totally ordered set, then any $\cal C$-module $M$ is tame.
\end{theorem}
\begin{proof} By Lemma \ref{lemma:2}, the multiflag ${\cal F}(M)(x)$ is in general position for each object $x$, implying the excess $e(M) = 0$, so $M$ is weakly tame by the previous theorem. But there are no non-trivial closed zig-zag loops in $\Gamma({\cal C})$, so $\cal C$ is h-free and $M$ is tame.
\end{proof}
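For 1-dimensional persistence modules the GBC decomposition is the classical barcode, and the multiplicities of the interval summands can be read off from ranks alone. The following Python/NumPy sketch uses the standard inclusion-exclusion formula for interval multiplicities (the classical computation, not the construction used in the proof above) on a small module over $\underline{3}$:

```python
import numpy as np

# A persistence module M(1) -> M(2) -> M(3) with M(1) = R^2, M(2) = R, M(3) = R^2.
# Classical formula: mult[i,j] = r(i,j) - r(i-1,j) - r(i,j+1) + r(i-1,j+1),
# where r(i,j) is the rank of the composite M(i) -> M(j), and r(i,i) = dim M(i).
f12 = np.array([[1.0, 0.0]])         # M(1) -> M(2)
f23 = np.array([[1.0], [0.0]])       # M(2) -> M(3)

maps = {(1, 2): f12, (2, 3): f23, (1, 3): f23 @ f12}
dims = {1: 2, 2: 1, 3: 2}

def r(i, j):
    if i > j or i < 1 or j > 3:
        return 0
    return dims[i] if i == j else np.linalg.matrix_rank(maps[(i, j)])

mult = {(i, j): r(i, j) - r(i - 1, j) - r(i, j + 1) + r(i - 1, j + 1)
        for i in range(1, 4) for j in range(i, 4)}
print(mult)
```

For this choice of maps the non-zero multiplicities are the intervals $[1,1]$, $[1,3]$ and $[3,3]$, each occurring once, which matches the vertex dimensions $2,1,2$.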
\vskip.3in
\subsection{The GBC vector for $\cal C$-modules} In the absence of an $\cal IP$-structure on the $\cal C$-module $M$, assuming only that $M$ is a $\cal WIP$-module, we may not necessarily be able to construct a weakly tame cover of $M$, but we can still extract useful numerical information. By the results of Proposition \ref{prop:invimage} and the proof of Theorem \ref{thm:1}, we see that the results of that theorem still hold for the associated graded ${\cal F}_*(M)$, assuming only a $\cal WIPC$-module structure. Moreover, a slightly weaker version of the results of Theorem \ref{thm:2} still applies for this weaker $\cal WIPC$-structure. Summarizing,
\begin{theorem}\label{thm:wipc} Let $M$ be a $\cal WIPC$-module with stable local structure. Then for all $k\ge 0$, $x,y,z\in obj({\cal C})$, $\phi_{zx}:M(z)\to M(x)$, and $\phi_{xy}:M(x)\to M(y)$, the morphisms of $M$ and their inverses induce well-defined maps of associated graded sets
\begin{gather*}
\phi_{xy}:{\cal F}(M)_*(x)\to {\cal F}(M)_*(y)\\
\phi_{zx}^{-1}: {\cal F}(M)_*(x)\to {\cal F}(M)_*(z)
\end{gather*}
Moreover, if $W\in {\cal F}(M)_*(x)$, viewed as a subquotient space of $M(x)$, then either $dim(\phi_{xy}(W)) = dim(W)$ or $dim(\phi_{xy}(W)) = 0$. Similarly, either $dim(\phi_{zx}^{-1}(W)) = dim(W)$ or $dim(\phi_{zx}^{-1}(W)) = 0$. In this way we may, as before, define the support ${\cal C}(W)$ of $W$, which will be an admissible subcategory of $\cal C$. Each ${\cal C}'\in {\cal AS}({\cal C})$ uniquely determines a block $\cal C$-module $M({\cal C}')$, where $M({\cal C}')(x) = $ the unique non-zero element $W$ of ${\cal F}(M)_*(x)$ for which ${\cal C}(W) = {\cal C}'$.
\end{theorem}
The lack of IP-structure means that, unlike the statement of Theorem \ref{thm:2}, we cannot identify the $\cal C$-module $M({\cal C}')$ as an actual submodule of $M$, or even construct a map of $\cal C$-modules $M({\cal C}')\to M$, as $M({\cal C}')$ is derived purely from the associated graded local structure ${\cal F}_*(M)$.
\vskip.2in
Nevertheless, Theorem \ref{thm:wipc} implies the dimension of each of these blocks - given as the dimension at any element in the support - is well-defined, as $dim(M({\cal C}')(x)) = dim(M({\cal C}')(y))$ for any pair $x,y\in obj({\cal C}')$ by the theorem above. The {\it generalized bar code dimension} of $M$ is the vector ${\cal S}({\cal C})\to \mathbb W$ given by
\[
GBCD(M)({\cal C}') =
\begin{cases}
dim(M({\cal C}')) := dim\big(M({\cal C}')(x)\big), x\in obj({\cal C}')\qquad\text{if }{\cal C}'\in {\cal AS}({\cal C})\\
0 \hskip2.68in\text{if }{\cal C}'\notin {\cal AS}({\cal C})
\end{cases}
\]
Finally if $M$ is simply a $\cal C$-module, let $M'$ denote $M$ with a fixed weak inner product structure. Setting
\[
GBCD(M) := GBCD(M')
\]
yields a well-defined function $GBCD$ on $\cal C$-modules, as one easily sees that $GBCD(M')$ is independent of the choice of lift of $M$ to a $\cal WIPC$-module; moreover it is an isomorphism invariant of $M$.
\vskip.3in
\subsection{Obstructions to admitting an inner product} The obstruction to imposing an IP-structure on a $\cal C$-module is, in general, non-trivial.
\begin{theorem}\label{thm:obstr} Let ${\cal C} = {\cal C}_2$ be the poset category for which $\Gamma({\cal C}_2) = \Gamma_2$, as given in diagram (D1). Then there exist ${\cal C}_2$-modules $M$ which do not admit an inner product structure.
\end{theorem}
\begin{proof} Label the initial objects of $\cal C$ as $x_1, x_2$, and terminal objects as $y_1, y_2$, with morphisms $\phi_{i,j}:x_i\to y_j$, $1\le i,j\le 2$. For each $(i,j)$ fix an identification $M(x_i) = M(y_j) = \mathbb R$. In terms of this identification, let
\[
M(\phi_{i,j})({\bf v}) =
\begin{cases}
2{\bf v}\quad\text{if } (i,j) = (1,1)\\
{\bf v}\quad\ \text{otherwise}
\end{cases}
\]
The self-map $M(\phi_{1,2})^{-1}\circ M(\phi_{2,2})\circ M(\phi_{2,1})^{-1}\circ M(\phi_{1,1}): M(x_1)\to M(x_1)$ is given as scalar multiplication by $2$. There is no norm on $\mathbb R$ for which this map is norm-preserving; hence there can be no collection of inner products $\langle -,-\rangle$ on the spaces $M(x_i)$ and $M(y_j)$ giving $M$ the structure of an $\cal IPC$-module.
\end{proof}
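The obstruction in the proof can be verified mechanically; a minimal Python sketch (our illustration) composes the four one-dimensional maps around the loop of $\Gamma_2$:

```python
# One-dimensional maps on R, as in the proof: phi_{1,1} is multiplication
# by 2, the other three maps are the identity.
phi = {(1, 1): 2.0, (1, 2): 1.0, (2, 1): 1.0, (2, 2): 1.0}

# Composition once around the loop: phi_{1,2}^{-1} phi_{2,2} phi_{2,1}^{-1} phi_{1,1}
loop = (1.0 / phi[(1, 2)]) * phi[(2, 2)] * (1.0 / phi[(2, 1)]) * phi[(1, 1)]
print(loop)  # 2.0

v = 1.0
print(abs(loop * v) == abs(v))  # False: |2v| != |v| for any v != 0
```

Any system of inner products would force each $\phi_{i,j}$, and hence the loop composite, to be norm-preserving; multiplication by $2$ is not, so no such system exists.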
More generally, we see that
\begin{theorem} If $\cal C$ admits holonomy, then there exist $\cal C$-modules which do not admit the structure of an inner product. Moreover, the obstruction to admitting an inner product is an isomorphism invariant.
\end{theorem}
However, for an important class of $\cal C$-modules the obstruction vanishes. An $n$-dimensional persistence module is defined as a $\cal C$-module where $\cal C$ is an $n$-dimensional {\it persistence category}, i.e., one isomorphic to $\underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_n}$ where $\underline{m_p}$ is the categorical representation of the totally ordered set $\{1 < 2 < \dots < m_p\}$.
\begin{theorem} Any (finite) n-dimensional persistence module admits an IP-structure.
\end{theorem}
\begin{proof} It was already observed above that the statement is true for ordinary 1-dim.~persistence modules. So we may proceed by induction, assuming $n > 1$ and that the statement holds in dimensions less than $n$. Before proceeding we record the following useful lemmas. Let ${\cal C}[1]$ denote the categorical representation of the poset $\{0 < 1\}$, and let ${\cal C}[m] = \prod_{i=1}^m{\cal C}[1]$. This is a poset category with objects $m$-tuples $(\varepsilon_1,\dots,\varepsilon_m)$ and a unique morphism $(\varepsilon_1,\dots,\varepsilon_m)\to (\varepsilon'_1,\dots,\varepsilon'_m)$ iff $\varepsilon_j\le \varepsilon'_j, 1\le j\le m$. The oriented graph $\Gamma({\cal C}[m])$ may be viewed as the oriented 1-skeleton of a simplicial $m$-cube. Write $t$ for the terminal object $(1,1,\dots,1)$ in ${\cal C}[m]$, and let ${\cal C}[m,0]$ denote the full subcategory of ${\cal C}[m]$ on objects $obj({\cal C}[m])\backslash \{t\}$.
\begin{lemma} Let $M$ be a ${\cal C}[m]$-module, and let $M(0) = M|_{{\cal C}[m,0]}$ be the restriction of $M$ to the subcategory ${\cal C}[m,0]$. Then any inner product structure on $M(0)$ may be extended to one on $M$.
\end{lemma}
\begin{proof} Let $M'$ be the ${\cal C}[m]$-module defined by
\begin{align*}
M'|_{{\cal C}[m,0]} &= M|_{{\cal C}[m,0]}\\
M'(t) &= \underset{{\cal C}[m,0]}{colim}\ M'
\end{align*}
with the map $M'(\phi_{xt})$ given by the unique map to the colimit when $x\in obj({\cal C}[m,0])$. The inner product on $M(0)$ extends to a unique inner product on $M'$. We may then choose an inner product on $M(t)$ so that the unique morphism $M'(t)\to M(t)$ (determined by $M$) lies in $(IP/k)$. Fixing this inner product on $M(t)$ gives $M$ an IP-structure compatible with the given one on $M(0)$.
\end{proof}
For evident reasons we will refer to this as a {\it pushout extension} of the inner product. More generally, iterating the same line of argument yields
\begin{lemma}\label{lemma:5} Let $M$ be a ${\cal C}[m]$-module and $\wt{M} = M|_{{\cal C}'}$ where ${\cal C}'$ is an admissible subcategory of ${\cal C}[m]$ containing the initial object. Then any IP-structure on $\wt{M}$ admits a compatible extension to $M$.
\end{lemma}
Continuing with the proof of the theorem, let ${\cal C} = \underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_n}$ with $\underline{m_p}$ the categorical representation of $\{1 < 2 < \dots < m_p\}$ as above. Let ${\cal C}_q = \underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_{n-1}}\times \{1 < 2 < \dots < q\}$, viewed as a full subcategory of $\cal C$. Given a $\cal C$-module $M$, let $M_i$ be the ${\cal C}_i$-module constructed as the restriction of $M$ to ${\cal C}_i$. By induction on dimension, we may assume $M_1$ has been equipped with an IP-structure. By induction on the last index, assume that this IP-structure has been compatibly extended to $M_i$. Now $\Gamma({\cal C}_{i+1})$ can be viewed as being constructed from $\Gamma({\cal C}_i)$ via a sequence of $m = m_1m_2\dots m_{n-1}$ concatenations, where each step concatenates the previous graph with the graph $\Gamma({\cal C}[n])$ along an admissible subgraph of $\Gamma({\cal C}[n])$ containing the initial vertex. Denote this inclusive sequence of subgraphs by $\{\Gamma_\alpha\}_{1\le \alpha\le m}$; for each $\alpha$ let ${\cal C}_{\alpha}$ be the subcategory of ${\cal C}_{i+1}$ with $\Gamma({\cal C}_\alpha) = \Gamma_\alpha$. Finally, let $N_\alpha$ denote the restriction of $M$ to ${\cal C}_\alpha$, so that $N_1 = M_i$ and $N_m = M_{i+1}$. Then $N_1$ comes equipped with an IP-structure, and by Lemma \ref{lemma:5} an IP-structure on $N_j$ admits a pushout extension to one on $N_{j+1}$ for each $1\le j\le (m-1)$. Induction in this coordinate then implies the IP-structure on $M_i$ can be compatibly extended (via iterated pushouts) to one on $M_{i+1}$, completing the induction step. As $M_{m_n} = M$, this completes the proof of the theorem.
\end{proof}
\vskip.3in
\subsection{h-free modules} When is an indexing category $\cal C$ h-free? To better understand this phenomenon, we note that the graph $\Gamma_1$ in diagram (D1) - and the way it appears again in diagram (D2) - suggests it may be viewed from the perspective of homotopy theory: define an {\it elementary homotopy} of a closed zig-zag loop $\gamma$ in $\Gamma({\cal C})$ to be one which performs the following replacements in either direction
\vskip.5in
({\rm D3})\vskip-.4in
\centerline{
\xymatrix{
A\ar[rr]\ar[dd] && B && && && B\ar[dd]\\
&& & \ar@{<=>}[rr]& && &&\\
C && && && C\ar[rr] && D
}}
\vskip.2in
In other words, if $C\leftarrow A\rightarrow B$ is a segment of $\gamma$, we may replace $\gamma$ by $\gamma'$ in which the segment $C\leftarrow A\rightarrow B$ is replaced by $C\rightarrow D\leftarrow B$ with the rest of $\gamma$ remaining intact; a similar description applies in the other direction. We do not require the arrows in the above diagram to be represented by atomic morphisms, simply oriented paths between vertices.
\begin{lemma} If a zig-zag loop $\gamma$ in $\Gamma({\cal C})$ is equivalent, by a sequence of elementary homotopies, to a collection of simple closed loops of type $\Gamma_1$ as appearing in (D1), then $\gamma$ is h-free. If this is true for all zig-zag loops in $\Gamma({\cal C})$ then $\cal C$ itself is h-free.
\end{lemma}
\begin{proof} Because $\Gamma_1$ has no holonomy, replacing the connecting segment between B and C by moving in either direction in diagram (D3) does not change the holonomy of the closed path. Thus, if by a sequence of such replacements one reduces to a connected collection of closed loops of type $\Gamma_1$, the new loop - hence also the original loop - cannot have any holonomy.
\end{proof}
Call $\cal C$ {\it strongly h-free} if every zig-zag loop in $\Gamma({\cal C})$ satisfies the hypothesis of the above lemma. Given $n$ ps-categories ${\cal C}_1, {\cal C}_2,\dots,{\cal C}_n$, the graph of the $n$-fold cartesian product is given as
\[
\begin{split}
\Gamma({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n) = N_1({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n)
= diag(N({\cal C}_1)\times N({\cal C}_2)\times\dots\times N({\cal C}_n))_1\\
= diag(\Gamma({\cal C}_1)\times\Gamma({\cal C}_2)\times\dots\times\Gamma({\cal C}_n))
\end{split}
\]
the oriented 1-skeleton of the diagonal of the product of the oriented graphs of each category. Of particular interest are $n$-dimensional persistence categories, as defined above.
\begin{theorem}\label{thm:5} Finite $n$-dimensional persistence categories are strongly h-free.
\end{theorem}
\begin{proof} The statement is trivially true for $n=1$ (there are no simple closed loops), so assume $n\ge 2$. Let ${\cal C}_i = \underline{m_i}$, $1\le i\le n$.
\begin{claim} The statement is true for $n=2$.
\end{claim}
\begin{proof} Given a closed zig-zag loop $\gamma$ in $\Gamma({\cal C}_1\times {\cal C}_2)$, we may assume ${\bf a} = (a_1,a_2)$ is an initial vertex of the loop. We orient $\Gamma({\cal C}_1\times {\cal C}_2)$ so that it moves to the right in the first coordinate and downwards in the second coordinate, viewed as a lattice in $\mathbb R^2$. As it is two-dimensional, we may assume that $\gamma$ moves away from $\bf a$ by a horizontal path to the right of length at least one, and a vertical downwards path of length also at least one. That means we may apply an elementary homotopy to the part of $\gamma$ containing $\bf a$ as indicated in diagram (D3) above, identifying $\bf a$ with the vertex ``A" in the diagram, and replacing $C\leftarrow A\rightarrow B$ with $C\rightarrow D\leftarrow B$. If $D$ is already a vertex in $\gamma$, the result is a single simple zig-zag loop of type $\Gamma_1$, joined at $D$ with a closed zig-zag loop of total length less than that of $\gamma$. By induction on total length, both of these loops are h-free, hence so is the original $\gamma$. In the second case, $D$ was not in the original loop $\gamma$. In this case the total length doesn't change, but the total area enclosed by the curve (viewed as a closed curve in $\mathbb R^2$) decreases. By induction on total bounded area, the curve is h-free in this case as well, completing the proof of the claim.
\end{proof}
Continuing with the proof of the theorem, we assume $n > 2$, and that we are given a zig-zag loop $\gamma$ in $\Gamma({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n)$. From the above description we may apply a sequence of elementary homotopies in the first two coordinates to yield a zig-zag loop $\gamma'$ in $\Gamma({\cal C}_1\times{\cal C}_2\times\dots\times{\cal C}_n)$ which supports holonomy iff $\gamma$ does, but whose first two coordinates are constant. The theorem follows by induction on $n$.
\end{proof}
We conclude this subsection with an illustration of how holonomy can prevent stability of the local structure over an infinite field. Consider the indexing category $\cal C$ whose graph $\Gamma({\cal C})$ is
\vskip.5in
({\rm D4})\vskip-.4in
\centerline{
\xymatrix{
\bullet && y\ar[dd]\ar[ll] && x\ar[ll]\\
& \Gamma_2 & &&\\
\bullet\ar[uu]\ar[rr] && \bullet &&
}}
\vskip.2in
where the part of the graph labeled $\Gamma_2$ is as in (D1). Suppose the base field is $k = \mathbb R$, and let $M$ be the $\cal C$-module which assigns the vector space $\mathbb R^2$ to each vertex in $\Gamma_2$, and assigns $\mathbb R$ to $x$. Each arrow in the $\Gamma_2$-part of the graph is an isomorphism, chosen so that going once around the simple closed zig-zag loop is represented by an element of $SO(2)\cong S^1$ of infinite order (i.e., an irrational rotation). Let $M(x)$ map to $M(y)$ by an injection. In such an arrangement, the local structure of $M$ at the vertex $y$, or at the other three vertices of $\cal C$ lying in the graph $\Gamma_2$, never stabilizes.
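A quick numerical illustration of this instability (ours; the rotation angle is an arbitrary irrational multiple of $\pi$): pushing the line $M(x)\subset M(y)$ around the loop repeatedly produces pairwise distinct lines in $\mathbb R^2$, so each pass strictly refines the multi-flag at $y$:

```python
import numpy as np

# Holonomy acts on M(y) = R^2 by an irrational rotation R; M(x) includes as
# the line spanned by e1.  The images R^k e1 are pairwise distinct lines.
theta = np.sqrt(2.0) * np.pi / 4.0          # irrational multiple of pi
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

angles = set()
v = np.array([1.0, 0.0])
for k in range(60):
    # record the line spanned by v as an angle in [0, pi)
    angles.add(round(np.arctan2(v[1], v[0]) % np.pi, 10))
    v = R @ v
print(len(angles))  # 60: sixty distinct lines after sixty trips around the loop
```

Since $\theta/\pi$ is irrational, no power of $R$ fixes the line, so the refinement process never terminates; over a finite field this mechanism is unavailable, consistent with the stability theorem below.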
\vskip.3in
\subsection{Modules with stable local structure} Stability of the local structure can be verified directly in certain important cases. We have given the definition of an $n$-dimensional persistence category above. This construction admits a natural zig-zag generalization. Write \underbar{zm} for any poset of the form $\{1\ R_1\ 2\ R_2\ 3\dots (m-1)\ R_{m-1}\ m\}$ where $R_i = $ ``$\le$" or ``$\ge$" for each $i$. A zig-zag module of length $m$, as defined in \cite{cd}, is a functor $M:$ \underbar{zm}$\to (vect/k)$ for some choice of zig-zag structure on the underlying set of integers $\{1,2,\dots,m\}$. More generally, an $n$-dimensional zig-zag category $\cal C$ is one isomorphic to ${\rm \underline{zm_1}}\times{\rm \underline{zm_2}}\times\dots\times{\rm \underline{zm_n}}$ for some choice of $\rm \underline{zm_i}$, $1\le i\le n$, and a finite $n$-dimensional zig-zag module is defined to be a functor
\[
M : {\rm \underline{zm_1}}\times{\rm \underline{zm_2}}\times\dots\times{\rm \underline{zm_n}}\to (vect/k)
\]
for some sequence of positive integers $m_1,m_2,\dots,m_n$ and choice of zig-zag structure on each corresponding underlying set. As with $n$-dimensional persistence modules, $n$-dimensional zig-zag modules may be viewed as a zig-zag diagram of $(n-1)$-dimensional zig-zag modules in essentially $n$ different ways. The proof of the next theorem illustrates the usefulness of strong stability.
\begin{theorem}\label{thm:6} Finite $n$-dimensional zig-zag modules have strongly stable local structure for all $n\ge 0$.
\end{theorem}
\begin{proof} We will first consider the case of $n$-dimensional persistence modules. We say an $n$-dimensional persistence category $\cal C$ has multi-dimension $(m_1,m_2,\dots,m_n)$ if $\cal C$ is isomorphic to $\underline{m_1}\times\underline{m_2}\times\dots\times\underline{m_n}$; note that this $n$-tuple is a well-defined invariant of the isomorphism class of $\cal C$, up to reordering. We may therefore assume the dimensions $m_i$ have been arranged in non-increasing order. We assume the vertices of $\Gamma({\cal C})$ have been labeled with multi-indices $(i_1,i_2,\dots,i_n), 1\le i_j\le m_j$, so that an oriented path in $\Gamma({\cal C})$ from $(i_1,i_2,\dots,i_n)$ to $(j_1,j_2,\dots,j_n)$ (corresponding to a morphism in $\cal C$) exists iff $i_k\le j_k, 1\le k\le n$. We will reference the objects of $\cal C$ by their multi-indices. The proof is by induction on dimension; the base case $n=0$ is trivially true as there is nothing to prove.
\vskip.2in
Assume then that $n\ge 1$. For $1\le i\le j\le m_n$, let ${\cal C}[i,j]$ denote the full subcategory of $\cal C$ on objects $(k_1,k_2,\dots,k_n)$ with $i\le k_n\le j$, and let $M[i,j]$ denote the restriction of $M$ to ${\cal C}[i,j]$ (writing $M[i]$ for $M[i,i]$). Let ${\cal F}_1$ resp.~${\cal F}_2$ denote the local structures on $M[1,m_n-1]$ and $M[m_n]$ respectively; by induction on $m_n$ we may assume these local structures are stable with stabilization indices $N_1,N_2$. Let $\phi_i:M[i]\to M[i+1]$ be the structure map from level $i$ to level $(i+1)$ in the $n$th coordinate. Then define $\phi_\bullet : M[1,m_n-1]\to M[m_n]$ to be the morphism of $n$-dimensional persistence modules which on $M[i]$ is given by the composition
\[
M[i]\xrightarrow{\phi_i} M[i+1]\xrightarrow{\phi_{i+1}}\dots M[m_n-1]\xrightarrow{\phi_{m_n-1}} M[m_n]
\]
Define a multi-flag on $M[1,m_n-1]$ by ${\cal F}_1^* := \phi_\bullet^{-1}[{\cal F}_2]$ and on $M[m_n]$ by ${\cal F}_2^* := \phi_\bullet ({\cal F}_1)$. By induction on length and dimension we may assume that $M[1,m_n-1]$ and $M[m_n]$ have local structures which stabilize strongly (we note that $M[m_n]$ is effectively an $(n-1)$-dimensional persistence module). As these multi-flags are finite, we have that
\begin{itemize}
\item the restricted local structures ${\cal F}_i$ are stable (noted above);
\item the local structure of $M[1,m_n-1]$ is stable relative to ${\cal F}_1^*$;
\item the local structure of $M[m_n]$ is stable relative to ${\cal F}_2^*$.
\end{itemize}
We may then choose $N$ so that in each of the three itemized cases, stabilization has been achieved by the $N^{th}$ stage. Let $\cal G$ be the multi-flag on $M$ which on $M[1,m_n-1]$ is the local structure relative to ${\cal F}_1^*$ and on $M[m_n]$ is the local structure relative to ${\cal F}_2^*$. Then $\cal G$ is the local structure on $M$, and has been achieved after at most $2N$ stages starting with the trivial semi-flag on $M$. This implies $M$ has stable local structure. To verify the induction step for the statement that $M$ has strongly stable local structure, let $F$ be a finite multi-flag on $M$. Let $F_1$ be its restriction to $M[1,m_n-1]$, and $F_2$ its restriction to $M[m_n]$. Then let ${\cal F}_i^{**}$ denote the multi-flag generated by ${\cal F}_i^*$ and $F_i$. Proceeding with the same argument as before yields a multi-flag ${\cal G}^*$ achieved at some finite stage which represents the local structure of $M$ relative to $F$, completing the induction step for persistence modules.
\vskip.2in
In the more general case that one starts with a finite, $n$-dimensional zig-zag module $M$, the argument is essentially identical but with one adjustment. Representing $M$ as
\[
M[1]\leftrightarrow M[2]\leftrightarrow \dots M[m_n-1]\leftrightarrow M[m_n]
\]
where ``$\leftrightarrow$" indicates either ``$\leftarrow$" or ``$\rightarrow$", the multi-flags ${\cal F}_i^*$ are defined on $M[1,m_n-1]$ and $M[m_n]$ respectively by starting with the stabilized local structure on the other submodule, then pulling back or pushing forward along the connecting morphisms as needed. The rest of the induction step is the same, as is the base case when $n=0$ and there are no morphisms.
\end{proof}
\vskip.2in
The above discussion applies to arbitrary fields; in this case, as we have seen, it is possible that the local structure fails to be stable. However, if the base field $k$ is finite, then the finiteness of $\cal C$ together with the finite dimensionality of a $\cal C$-module $M$ at each vertex implies that any $\cal C$-module $M$ over $k$ is a finite set. In this case, the infinite refinement of ${\cal F}(M)$ that must occur in order to prevent stabilization at some finite stage is no longer possible. Hence
\begin{theorem} Assume the base field $k$ is finite. Then for all (finite) poset categories $\cal C$ and $\cal C$-modules $M$, $M$ has stable local structure.
\end{theorem}
\vskip.5in
\section{Geometrically based $\cal C$-modules} A $\cal C$-module $M$ is said to be {\it geometrically based} if $M = H_n(F)$ for some positive integer $n$, where $F:{\cal C}\to {\cal D}$ is a functor from $\cal C$ to a category $\cal D$, equalling either
\begin{itemize}
\item {\bf f-s-sets} - the category of simplicial sets with finite skeleta and morphisms of simplicial sets, or
\item {\bf f-s-com} - the category of finite simplicial complexes and morphisms of simplicial complexes.
\end{itemize}
Almost all $\cal C$-modules that arise in applications are of this type. A central question, then, is whether or not such modules admit an inner product structure of the type needed for the above structure theorems to hold. We show that the obstruction to imposing an IP-structure on geometrically based modules is in general non-trivial, by means of an explicit example given below. On the other hand, all geometrically based $\cal C$-modules admit a presentation by $\cal IPC$-modules. In what follows we will restrict ourselves to the category {\bf f-s-sets}, as it is slightly easier to work in (although all results carry over to {\bf f-s-com}).
\subsection{Cofibrant replacement} Any $\cal C$-diagram in {\bf f-s-sets} can be cofibrantly replaced, up to weak homotopical transformation. Precisely,
\begin{theorem} If $F:{\cal C}\to$ {\bf f-s-sets}, then there is a $\cal C$-diagram $\wt{F}:{\cal C}\to$ {\bf f-s-sets} and a natural transformation $\eta:\wt{F}\xrightarrow{\simeq} F$ which is a weak equivalence at each object, where $\wt{F}(\phi_{xy})$ is a closed cofibration (inclusion of simplicial sets) for all morphisms $\phi_{xy}$\footnote{The proof following is a minor elaboration of an argument communicated to us by Bill Dwyer \cite{bd}.}.
\end{theorem}
\begin{proof} The simplicial mapping cylinder construction $Cyl(_-)$ applied to any morphism in {\bf f-s-sets} verifies the statement of the theorem in the simplest case, in which $\cal C$ consists of two objects and one non-identity morphism. Suppose $\cal C$ has $n$ objects; we fix a total ordering on $obj({\cal C})$ that refines the partial ordering: $\{x_1 \prec x_2 \prec \dots \prec x_n\}$ where if $\phi_{x_i x_j}$ is a morphism in $\cal C$ then $i\le j$ (but not necessarily conversely). Let ${\cal C}(m)$ denote the full subcategory of $\cal C$ on objects $x_1,\dots,x_m$, with $F_m = F|_{{\cal C}(m)}$. By induction, we may assume the statement of the theorem for $F_m:{\cal C}(m)\to$ {\bf f-s-sets}, with cofibrant lift denoted by $\wt{F}_m$ and weak equivalence $\eta_m:\wt{F}_m\xrightarrow{\simeq} F_m$.
\vskip.2in
Now let ${\cal D}(m)$ denote the slice category ${\cal C}/x_{m+1}$; as ``$\prec$" is a refinement of the poset ordering ``$<$", the image of the forgetful functor $P_m:{\cal D}(m)\to {\cal C}; (y\to x_{m+1})\mapsto y$ lies in ${\cal C}(m)$. And as $\cal C$ is a poset category, the collection of morphisms $\{\phi_{y x_{m+1}}\}$ uniquely determine a map
\[
f_m : \underset{{\cal D}(m)}{colim}\ \wt{F}_m\circ P_m\xrightarrow{\eta_m} \underset{{\cal D}(m)}{colim}\ F_m\circ P_m \to F(x_{m+1})
\]
Define $\wt{F}_{m+1}:{\cal C}(m+1)\to$ {\bf f-s-sets} by
\begin{itemize}
\item $\wt{F}_{m+1}|_{{\cal C}(m)} = \wt{F}_m$;
\item $\wt{F}_{m+1}(x_{m+1}) = Cyl(f_m)$;
\item If $\phi_{x x_{m+1}}$ is a morphism from $x\in obj({\cal C}(m))$ to $x_{m+1}$, then
\[
\wt{F}_{m+1}(\phi_{x x_{m+1}}):\wt{F}_{m}(x) = \wt{F}_{m+1}(x)\to \wt{F}_{m+1}(x_{m+1})
\]
is given as the composition
\[
\wt{F}_{m}(x) = \wt{F}_m\circ P_m(x\xrightarrow{\phi_{x x_{m+1}}} x_{m+1})\hookrightarrow
\underset{{\cal D}(m)}{colim}\ \wt{F}_m\circ P_m\hookrightarrow Cyl(f_m) = \wt{F}_{m+1}(x_{m+1})
\]
\end{itemize}
where the first inclusion into the colimit over ${\cal D}(m)$ is induced by the inclusion of the object \newline
$(x\xrightarrow{\phi_{x x_{m+1}}} x_{m+1})\hookrightarrow obj({\cal D}(m))$. As all morphisms in ${\cal D}(m)$ map to simplicial inclusions under $\wt{F}_m\circ P_m$, the resulting map of $\wt{F}_m(x)$ into the colimit will also be a simplicial inclusion. Finally, the natural transformation $\eta_m:\wt{F}_m\to F_m$ is extended to $\eta_{m+1}$ on $\wt{F}_{m+1}$ by defining $\eta_{m+1}(x_{m+1}): \wt{F}_{m+1}(x_{m+1})\to F_{m+1}(x_{m+1})$ as the natural collapsing map $Cyl(f_m)\surj F(x_{m+1})$, which has the effect of making the diagram
\centerline{
\xymatrix{
\wt{F}_{m+1}(x)\ar[rr]^{\wt{F}_{m+1}(\phi_{xy})}\ar[dd]^{\eta_{m+1}(x)} && \wt{F}_{m+1}(y)\ar[dd]^{\eta_{m+1}(y)}\\
\\
F_{m+1}(x)\ar[rr]^{F_{m+1}(\phi_{xy})} && F_{m+1}(y)
}}
\vskip.2in
commute for morphisms $\phi_{xy}\in Hom({\cal C}(m+1))$. This completes the induction step, and the proof.
\end{proof}
\begin{corollary}\label{cor:pres} Any geometrically based $\cal C$-module $M$ admits a presentation by $\cal C$-modules $N_1\inj N_2\surj M$ where $N_i$ is an $\cal IPC$-module and $N_1\inj N_2$ is an isometric inclusion of $\cal IPC$-modules.
\end{corollary}
\begin{proof} By the previous result and the homotopy invariance of homology, we may assume $M = H_n(F)$ where $F :{\cal C}\to$ {\bf i-f-s-sets}, the subcategory of {\bf f-s-sets} on the same set of objects, but where all morphisms are simplicial set injections. In this case, for each object $x$, $C_n(F(x)) = C_n(F(x);k)$ admits a canonical inner product determined by the natural basis of $n$-simplices $F(x)_n$, and each morphism $\phi_{xy}$ induces an injection of basis sets $F(x)_n\inj F(y)_n$, resulting in an isometric inclusion $C_n(F(x))\inj C_n(F(y))$. In this way the functor $C_n(F) := C_n(F;k):{\cal C}\to (vect/k)$ inherits a natural $\cal IPC$-module structure. If $Q$ is an $\cal IPC$-module where all of the morphisms are isometric injections, then any $\cal C$-submodule $Q'\subset Q$, equipped with the same inner product, is an $\cal IPC$-submodule of $Q$. Now $C_n(F)$ contains the $\cal C$-submodules $Z_n(F)$ ($n$-cycles) and $B_n(F)$ ($n$-boundaries); equipped with the induced inner product the inclusion $B_n(F)\hookrightarrow Z_n(F)$ is an isometric inclusion of $\cal IPC$-modules, for which $M$ is the cokernel $\cal C$-module.
\end{proof}
[Note: The results for this subsection have been stated for {\bf f-s-sets}; similar results can be shown for {\bf f-s-com} after fixing a systematic way of representing the mapping cylinder of a map of simplicial complexes as a simplicial complex; this typically involves barycentric subdivision.]
\vskip.3in
\subsection{Geometrically realizing an IP-obstruction} As we saw in Theorem \ref{thm:obstr}, the ${\cal C}_2$-module
\vskip.5in
({\rm D5})\vskip-.4in
\centerline{
\xymatrix{
\mathbb R && \mathbb R\ar[dd]^{1}\ar[ll]_{2}\\
\\
\mathbb R\ar[uu]^{1}\ar[rr]_{1} && \mathbb R
}}
\vskip.2in
does not admit an IP-structure. We note the same diagram can be formed with $S^1$ in place of $\mathbb R$:
\vskip.5in
({\rm D6})\vskip-.4in
\centerline{
\xymatrix{
S^1 && S^1\ar[dd]^{1}\ar[ll]_{2}\\
\\
S^1\ar[uu]^{1}\ar[rr]_{1} && S^1
}}
\vskip.2in
Here ``$2:S^1\to S^1$" represents the usual self-map of $S^1$ of degree $2$. This diagram can be realized up to homotopy by a diagram of simplicial complexes and simplicial maps as follows: let $T_1 = \partial(\Delta^2)$ denote the standard triangulation of $S^1$, and let $T_2$ be the barycentric subdivision of $T_1$. We may form the ${\cal C}_2$-diagram in {\bf f-s-com}
\vskip.5in
({\rm D7})\vskip-.4in
\centerline{
\xymatrix{
T_1 && T_2\ar[dd]^{f_1}\ar[ll]_{f_2}\\
\\
T_1\ar[uu]^{1}\ar[rr]_{1} && T_1
}}
\vskip.2in
The map $f_2$ is the triangulation of the top map in (D6), while $f_1$ is the simplicial map which collapses every other edge to a point. The geometric realization of (D7) agrees up to homotopy with (D6). Of course this diagram of simplicial complexes can also be viewed as a diagram in {\bf f-s-sets}. Applying $H_1(_-;\mathbb Q)$ to diagram (D7) we have
\begin{theorem} There exist geometrically based $\cal C$-modules with domain category {\bf f-s-com} (and hence also {\bf f-s-sets}) which do not admit an IP-structure.
\end{theorem}
In this way we see that the presentation result of Corollary \ref{cor:pres} is, in general, the best possible in terms of representing a geometrically based $\cal C$-module in terms of modules equipped with an $\cal IPC$-structure.
\vskip.5in
\section*{Open questions}
\subsubsection*{If $\cal C$ is h-free, does every $\cal C$-module admit an inner product structure?} More generally, what are necessary and sufficient conditions on the indexing category $\cal C$ that guarantee the existence of an inner product (IP) structure on any $\cal C$-module $M$?
\subsubsection*{If $\cal C$ is h-free, does every $\cal C$-module $M$ have stable local structure?} In other words (as with IP-structures), is holonomy the only obstruction to ${\cal F}(M)$ stabilizing at some finite stage? One is tempted to conjecture that the answer is ``yes"; however, the only evidence so far is the lack of examples of non-stable local structures occurring in the absence of holonomy, and the difficulty of even imagining how such an example could arise.
\vskip.3in
\subsubsection*{Does h-free imply strongly h-free?} Again, this question is based primarily on the absence, so far, of any counterexample illustrating the difference between these two properties.
\vskip.3in
\subsubsection*{If $M$ has stable local structure, does it have strongly stable local structure?} Obviously, strongly stable implies stable. The issue is whether these conditions are, for some reason, equivalent. If not, then a more refined version of the question would be: under what conditions (on either the indexing category $\cal C$ or the $\cal C$-module $M$) are they equivalent?
\vskip.5in
\section{Introduction}
The spirit of most extra-dimensional models of particle physics is to translate observed or desirable properties of ordinary 4D particle interactions into particular shapes or features (like warping or brane positions) within an assumed extra-dimensional geometry. In principle these features are hoped to be obtained by minimizing the energy cost of deforming the extra dimensions, but doing so explicitly is in practice a challenge.
Part of what makes this challenging is the fact that general covariance makes energy in itself not a useful criterion for distinguishing amongst various solutions. For instance for closed geometries invariance under time reparameterization implies {\em all} solutions have precisely zero energy. This has long been understood in cosmology, where the explanation of the geometry of the present-day universe is seen to be contingent on the history of how it evolved in the distant past. A similar understanding is also likely for the shapes of any present-day extra dimensions, suggesting we should seek to explain their properties in terms of how they have evolved over cosmological times.
This is not the approach taken by most models of extra-dimensional cosmology, however, which usually explicitly assume extra dimensions to be stabilized at fixed values as the observed four dimensions change in time. This approach is taken usually for technical reasons: it is difficult to find explicit time-dependent solutions to the full higher-dimensional field equations. Instead, models of extra-dimensional cosmology usually use one of two simplifying approximations: either `mirage' or `4D effective' cosmology.
In `mirage' cosmology \cite{MirageCosmology} brane-localized observers experience time-dependent geometries because they move through a static extra-dimensional bulk. In these models the branes are usually taken as `probe' branes, that don't back-react on the static bulk. An exception to this is for Randall-Sundrum type cosmologies \cite{RScosmo} involving codimension-1 branes, for which the Israel junction conditions \cite{IJC} allow back-reaction to be explicitly computed. In these models all extra-dimensional features are usually fixed from the get-go.
In `effective 4D' cosmology the Hubble scale is assumed to be much smaller than the Kaluza-Klein (KK) mass scale, so that all of the time dependence in the geometry can be computed within the effective 4D theory, where some extra-dimensional features (like moduli) boil down to the values of various scalar fields. This is the approach most frequently used for string inflation, for example \cite{SIreviews}. Here some changes to the extra dimensions can be followed by seeing how the corresponding modulus fields evolve. But this can only be done for sufficiently slow expansion and only after it is already assumed that the extra dimensions are so small that the 4D approximation is valid. In particular, it cannot follow evolution where all dimensions are initially roughly the same size, to explain why some dimensions are larger than others.
Our goal in this paper is to take some first steps towards going beyond these two types of approximations. To this end we explore the implications of previously constructed time-dependent solutions \cite{scaling solutions} to the full higher-dimensional field equations of chiral gauged 6D supergravity \cite{NS}, including the effects of back reaction from several codimension-2 source branes. When doing so it is crucial to work with a geometry with explicitly compactified extra dimensions, including a mechanism for stabilizing the extra-dimensional moduli, since it is well known that these can compete with (and sometimes ruin) what might otherwise appear as viable inflationary models\footnote{For early steps towards inflationary 6D models see \cite{HML}. } \cite{SIreviews}. For the system studied here this is accomplished using a simple flux-stabilization mechanism, that fixes all bulk properties except the overall volume modulus.
Incorporating the back-reaction of the branes in these solutions is the main feature new to this paper. It is important because it allows the explicit determination of how the extra-dimensional geometry responds to the choices made for a matter field, which we assume to be localized on one of the source branes. It also provides a mechanism for lifting the one remaining flat direction, through a codimension-two generalization of the Goldberger-Wise mechanism \cite{GW} of codimension-one Randall-Sundrum models.
In order to compute the back-reaction we extend to time-dependent geometries the bulk-brane matching conditions that were previously derived for codimension-two branes only in the limit of maximally symmetric on-brane geometries \cite{Cod2Matching, BBvN, BulkAxions, susybranes}. We then apply these conditions to the time-dependent bulk geometries to see how their integration constants are related to physical choices made for the dynamics of an `inflaton' scalar field that we assume to be localized on one of the source branes.
For the solutions we describe, the scale factor of the on-brane dimensions expands like $a(t) \propto t^p$, and our main interest is on the accelerating solutions (for which $p > 1$). The parameter $p$ is an integration constant for the bulk solution, whose value becomes related to the shape of the potential for the on-brane scalar. de Sitter solutions \cite{6DdS} are obtained in the limit $p \to \infty$, which corresponds to the limit where the on-brane scalar potential becomes independent of the inflaton.
What is most interesting is what the other dimensions do while the on-brane geometry inflates: their radius expands with a universal expansion rate, $r(t) \propto \sqrt t$, that is $p$-independent for any finite $p$. (By contrast, the extra dimensions do not expand at all for the special case of the de Sitter solutions.) The different expansion rates therefore cause the accelerated expansion of the on-brane directions to be faster than the growth of the size of the extra-dimensional directions; possibly providing the seeds of an understanding of why the on-brane dimensions are so much larger at the present epoch, in our much later universe.
Because the extra dimensions expand (rather than contract), the Kaluza-Klein mass scale falls with time, putting the solution deeper into the domain of validity of the low-energy semiclassical regime. Equivalently, the higher-dimensional gravity scale falls (in 4D Planck units) during the inflationary epoch. This opens up the intriguing possibility of reconciling a very low gravity scale during the present epoch with a potentially much higher gravity scale when primordial fluctuations are generated during inflation.
In the limit where the motion is adiabatic, we verify how the time-dependence of the full theory is captured by the solutions of the appropriate effective low-energy 4D theory. The 4D description of the inflationary models turns out to resemble in some ways an extended inflation model \cite{ExtInf}, though with an in-principle calculable potential for the Brans-Dicke scalar replacing the cosmological-constant sector that is usually assumed in these models.
The rest of this paper is organized as follows. The next section, \S2, summarizes the field equations and solutions that describe the bulk physics in the model of interest. A particular focus in this section is the time-dependence and the asymptotics of the solutions in the vicinity of the two source branes. These are followed in \S3\ by a description of the dynamics to be assumed of the branes, as well as the boundary conditions that are dictated for the bulk fields by this assumption. The resulting matching conditions are then used to relate the parameters of the bulk solution to the various brane couplings and initial conditions assumed for the brane-localized scalar field. \S4\ then describes the same solutions from the point of view of a 4D observer, using the low-energy 4D effective theory that captures the long-wavelength physics. The low-energy field equations are solved and shown to share the same kinds of solutions as do the higher-dimensional field equations, showing how the two theories can capture the same physics. Some conclusions and outstanding issues are discussed in \S5. Four appendices provide the details of the brane properties; the derivation of the time-dependent codimension-two matching conditions; and the dimensional reduction to the 4D effective theory.
\section{The bulk: action and solutions}
In this section we summarize the higher-dimensional field equations and a broad class of time-dependent solutions, whose properties are matched to those of the source branes in the next section. For definiteness we use the equations of 6D chiral gauged super-gravity \cite{NS} with flux-stabilized extra dimensions. The minimal number of fields to follow are the 6D metric, $g_{\ssM \ssN}$, and dilaton, $\phi$, plus a flux-stabilizing Maxwell potential, $A_\ssM$. Although other fields are present in the full theory, only these three need be present in the simplest flux-stabilized solutions \cite{SSs, ConTrunc}.
The action for these fields is
\be \label{BulkAction}
S_\mathrm{bulk} = - \int \exd^6 x \sqrt{-g} \; \left\{ \frac1{2\kappa^2} \, g^{\ssM\ssN}
\Bigl( \cR_{\ssM \ssN} + \pd_\ssM \phi \, \pd_\ssN \phi \Bigr)
+ \frac14 \, e^{-\phi} \cF_{\ssM\ssN} \cF^{\ssM\ssN}
+ \frac{2 \, g_\ssR^2}{\kappa^4} \, e^\phi \right\} \,,
\ee
where $\kappa^2 = 8\pi G_6 = 1/M_6^4$ defines the 6D Planck scale and $\cF = \exd \cA$ is the field strength for the Maxwell field, whose coupling is denoted by $g$. The coupling $g$ can be, but need not be, the same as the coupling $g_\ssR$ that appears in the scalar potential, since supersymmetry requires $g_\ssR$ must be the gauge coupling for a specific $U(1)_\ssR$ symmetry that does not commute with supersymmetry. $g$ would equal $g_\ssR$ if $\cA_\ssM$ gauges this particular symmetry, but need not otherwise.
The field equations coming from this action consist of the Einstein equation
\be \label{BulkEinsteinEq}
\cR_{\ssM\ssN} + \partial_\ssM \phi \, \partial_\ssN \phi
+ \kappa^2 e^{-\phi} \cF_{\ssM \ssP} {\cF_\ssN}^\ssP
- \left( \frac{\kappa^2}{8} \,
e^{-\phi} \cF_{\ssP\ssQ} \cF^{\ssP \ssQ}
- \frac{g_\ssR^2}{\kappa^2} \, e^\phi \right)
g_{\ssM\ssN} = 0 \,,
\ee
the Maxwell equation
\be \label{BulkMaxwellEq}
\nabla_\ssM (e^{-\phi} \cF^{\ssM \ssN}) = 0 \,,
\ee
and the dilaton equation
\be \label{BulkDilatonEq}
\Box \phi - \frac{2 \, g_\ssR^2 }{\kappa^2} \, e^\phi + \frac{\kappa^2}4 \, e^{-\phi} \cF_{\ssM\ssN} \cF^{\ssM\ssN} = 0 \,.
\ee
Notice these equations are invariant under the rigid rescaling,
\be \label{scaleinv}
g_{\ssM \ssN} \to \zeta \, g_{\ssM \ssN}
\quad \hbox{and} \quad
e^\phi \to \zeta^{-1} \, e^\phi \,,
\ee
with $\cA_\ssM$ unchanged, which ensures the existence of a zero-mode that is massless at the classical level, and much lighter than the generic KK scale once quantum effects are included.
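As a simple check, each term in the action \pref{BulkAction} scales the same way under eq.~\pref{scaleinv}: in 6D $\sqrt{-g}\to\zeta^3\sqrt{-g}$ and $g^{\ssM\ssN}\to\zeta^{-1}g^{\ssM\ssN}$, so the Einstein-Hilbert and dilaton-kinetic terms scale as $\zeta^3\cdot\zeta^{-1}=\zeta^2$; the Maxwell term scales as $\zeta^3\cdot\zeta\cdot\zeta^{-2}=\zeta^2$ (one factor of $\zeta$ from $e^{-\phi}$ and $\zeta^{-2}$ from the two inverse metrics in $\cF_{\ssM\ssN}\cF^{\ssM\ssN}$); and the potential scales as $\zeta^3\cdot\zeta^{-1}=\zeta^2$. Hence
\be
S_\mathrm{bulk} \to \zeta^2 \, S_\mathrm{bulk} \,,
\ee
and the field equations \pref{BulkEinsteinEq} through \pref{BulkDilatonEq} are invariant.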
\subsection{Bulk solutions}
The exact solutions to these equations we use for cosmology are described in \cite{scaling solutions} (see also \cite{CopelandSeto}). Their construction exploits the scale invariance of the field equations to recognize that exact time-dependent solutions can be constructed by scaling out appropriate powers of time from each component function.
\subsubsection*{Time-dependent ansatz}
Following \cite{scaling solutions} we adopt the following ansatz for the metric,
\be
\label{metric-ansatz}
\exd s^2 = (H_0\tau)^c \left\{ \left[ -e^{2\omega(\eta)} \exd\tau^2
+e^{2 \alpha (\eta)} \delta_{ij} \exd x^i \exd x^j \right]
+ \tau^{2} \left[e^{2v(\eta)} \exd\eta^2 +
e^{2 \beta(\eta)}\exd\theta^2\right] \right\} \,,
\ee
while the dilaton and Maxwell field are assumed to be
\be \label{dilaton-ansatz}
e^\phi = \frac{e^{\varphi(\eta)} }{(H_0\tau)^{2+c}} \quad \hbox{and} \quad
\cA_\theta = \frac{ A_\theta(\eta) }{H_0} \,.
\ee
The power of time, $\tau$, appearing in each of these functions is chosen to ensure that all of the $\tau$-dependence appears as a common factor in each of the field equations. The 6D field equations then reduce to a collection of $\tau$-independent conditions that govern the profiles of the functions $\varphi$, $\omega$, $\alpha$, $\beta$, $v$ and $A_\theta$. For later convenience we briefly digress to describe the properties of these profiles in more detail.
\subsubsection*{Radial profiles}
Explicitly, with the above ansatz the Maxwell equation becomes
\be
A_\theta'' + \(\omega + 3 \alpha - \beta - v - \varphi \)' A_\theta' = 0 \,,
\ee
where primes denote differentiation with respect to the coordinate $\eta$. The dilaton equation similarly is
\be
\varphi'' + \( \omega + 3 \alpha - v + \beta \)' \varphi'
+ (2+c)(1+2c) \, e^{2(v-\omega)}
+ \frac{\kappa^2}2 \, e^{-(2\beta + \varphi)}(A_\theta')^2
-\frac{2g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi} = 0 \,.
\ee
The $\tau$-$\eta$ Einstein equation is first order in derivatives,
\be \label{eq:Einst}
(2c+1) \, \omega' +3\alpha' + (2+c) \, \varphi' = 0 \,,
\ee
while the rest are second order
\ba
\omega''+\(\omega+3\alpha -v +\beta \)' \omega'
+\frac{\kappa^2} 4 \, e^{-(2 \beta + \varphi)}
(A_\theta')^2 + \frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi}
-\left(c^2+\frac{5c}{2} +4 \right) \, e^{2(v-\omega)} &=&0\nn\\
\beta'' + \(\omega+3\alpha -v +\beta\)' \beta'
+\frac{3\kappa^2}4 \, e^{-(2\beta +\varphi)}(A_\theta')^2
+ \frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi}
-\frac12(c+2)(2c+1) \, e^{2(v-\omega)} &=&0 \nn\\
\alpha'' + \(\omega+3\alpha -v+\beta\)' \alpha' - \frac{\kappa^2}4
\, e^{-(2\beta +\varphi)}(A_\theta')^2
+\frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi}
-\frac{c}{2}\,(2c+1) \, e^{2(v-\omega)} &=&0\nn\\
\omega'' + 3\alpha'' + \beta'' + (\omega')^2
+3(\alpha')^2 + (\beta')^2 + (\varphi')^2 -\(\omega+3\alpha +\beta\)'
v'\qquad\qquad\qquad\qquad\quad&&\nn\\
+\frac{3\kappa^2}4 \, e^{-(2\beta +\varphi)}(A_\theta')^2
+\frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi}
-\frac12(c+2)(2c+1) \, e^{2(v-\omega)} &=&0 \,. \nn\\
\ea
One linear combination of these --- the `Hamiltonian' constraint for evolution in the $\eta$ direction --- also doesn't involve any second derivatives, and is given by
\ba \label{eq:Hamconst}
&&(\varphi')^2 - 6\(\omega + \alpha + \beta\)' \alpha'
- 2 \omega' \beta' \nn\\
&& \qquad\qquad +\frac{\kappa^2}2 \, e^{-(2\beta +\varphi)}(A_\theta')^2
- \frac{4g_\ssR^2}{\kappa^2 H_0^2} \, e^{2v+\varphi}
+4 (c^2+c+1) \, e^{2(v-\omega)} = 0 \,.
\ea
As shown in \cite{scaling solutions}, these equations greatly simplify if we trade the four functions $\alpha$, $\beta$, $\omega$ and $\varphi$ for three new functions $\cx$, $\cy$ and $\cz$, using the redefinitions
\ba \label{XYZdef}
\omega&=&-\frac\cx4+\frac\cy4+ \left( \frac{2+c}{2c} \right) \cz
\,, \qquad
\alpha = -\frac\cx4+\frac\cy4- \left( \frac{2+c}{6c} \right) \cz \,,\nn\\
&& \qquad \beta = \frac{3\cx}4+\frac\cy4+\frac\cz2 \quad
\hbox{and} \quad
\varphi = \frac\cx2-\frac\cy2-\cz \,.
\ea
Only three functions are needed to replace the initial four because these definitions are chosen to identically satisfy eq.~\pref{eq:Einst} which, for the purposes of integrating the equations in the $\eta$ direction, can be regarded as a constraint (because it doesn't involve any second derivatives). The function $v$ can be chosen arbitrarily by redefining $\eta$, and the choice
\be
v = -\frac\cx4+\frac{5\cy}4+\frac\cz2 \,,
\ee
proves to be particularly simple \cite{scaling solutions}.
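To verify that the definitions \pref{XYZdef} identically satisfy eq.~\pref{eq:Einst}, substitute them into its left-hand side:
\ba
(2c+1)\,\omega' + 3\alpha' + (2+c)\,\varphi' &=& \left[ -\frac{2c+1}4 - \frac34 + \frac{2+c}2 \right] \cx'
+ \left[ \frac{2c+1}4 + \frac34 - \frac{2+c}2 \right] \cy' \nn\\
&& \qquad +\, (2+c)\left[ \frac{2c+1}{2c} - \frac1{2c} - 1 \right] \cz' \;=\; 0 \,, \nn
\ea
since each of the three brackets vanishes identically in $c$.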
In terms of these variables the Maxwell equation becomes
\be
A_\theta'' - 2\cx' A_\theta' = 0 \,,
\ee
the dilaton equation is
\be
\(\frac12\cx-\frac12\cy-\cz\)''+(c+2)(2c+1) \, e^{2(\cy-\cz/c)} +
\frac{\kappa^2}2 \, e^{-2\cx}(A_\theta')^2 - \frac{2g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy}=0 \,,
\ee
and the remaining Einstein equations are
\ba
\(-\frac14\cx+\frac14\cy+\frac{2+c}{2c}\cz\)''+\frac{\kappa^2} 4 \,
e^{-2 \cx} (A_\theta')^2 + \frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy}
-\left( c^2+\frac{5c}{2}+4 \right) \, e^{2(\cy-\cz/c)} &=&0\nn\\
\( \frac34\cx+\frac14\cy+\frac12\cz \)'' +\frac{3\kappa^2}4 \,
e^{-2\cx}(A_\theta')^2 + \frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy}
-\frac12(c+2)(2c+1) \, e^{2(\cy-\cz/c)} &=&0\nn\\
\(\frac14\cx+\frac14\cy-\frac{2+c}{6c}\cz \)'' - \frac{\kappa^2}4
\, e^{-2\cx}(A_\theta')^2 +\frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy}
-\frac12c(2c+1) \, e^{2(\cy-\cz/c)} &=&0 \nn\\
\(-\frac14\cx+\frac54\cy+\frac12\cz\)''
+(\cx')^2-(\cy')^2+\frac43 \, \frac{1+c+c^2}{c^2} \, (\cz')^2
\qquad\qquad\qquad\qquad&&\nn\\
+\frac{3\kappa^2}4 \, e^{-2\cx}(A_\theta')^2
+\frac{g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy}
-\frac12(c+2)(2c+1) \, e^{2(\cy-\cz/c)} &=&0 \,. \nn\\
\ea
The combination of twice the second Einstein equation plus the dilaton equation is completely independent of $\cy$ and $\cz$. This combination and the Maxwell equation can be exactly integrated, giving
\ba \label{eq:chisoln}
A_\theta &=& q \int\exd\eta \; e^{2\cx}\nn\\
e^{-\cx} &=& \left( \frac{\kappa \, q}{\lambda_1} \right)
\cosh\left[ \lambda_1(\eta-\eta_1) \right],
\ea
where $q$, $\lambda_1$ and $\eta_1$ are integration constants.
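To see how eq.~\pref{eq:chisoln} arises, note that adding twice the second Einstein equation to the dilaton equation cancels all $\cy$- and $\cz$-dependent terms, leaving (after using the first integral $A_\theta' = q\, e^{2\cx}$ of the Maxwell equation) the Liouville-type equation
\be
\cx'' + \kappa^2 q^2 \, e^{2\cx} = 0 \,.
\ee
The profile $e^{-\cx} = (\kappa q/\lambda_1)\cosh\left[\lambda_1(\eta-\eta_1)\right]$ solves this identically, since it gives $\cx'' = -\lambda_1^2 \, {\rm sech}^2\left[\lambda_1(\eta-\eta_1)\right]$ while $\kappa^2 q^2 e^{2\cx} = \lambda_1^2 \, {\rm sech}^2\left[\lambda_1(\eta-\eta_1)\right]$.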
The remaining field equations then reduce to
\ba
\label{bulkXY}
\cy''+\frac{4g_\ssR^2}{\kappa^2 H_0^2} \, e^{2\cy} - 4 (1+c+c^2)
\, e^{2\cy-2\cz/c}&=&0\nn\\
\hbox{and} \qquad
\cz'' - 3c\, e^{2\cy-2\cz/c}&=&0 \,,
\ea
together with the first-order constraint, eq.~\pref{eq:Hamconst}, that ensures that only two of the `initial conditions' --- $\cx'$, $\cy'$ and $\cz'$ --- are independent.
\subsubsection*{Asymptotic forms}
With these coordinates the singularities of the metric lie at $\eta \to \pm \infty$, which are interpreted as the positions of two source branes. We now pause to identify the asymptotic forms to be required of the metric functions as these branes are approached.
There are two physical conditions that guide this choice. First, we wish the limits $\eta \to \pm \infty$ to represent codimension-two points, rather than codimension-one surfaces, and so require $e^{2\beta} \to 0$ in this limit. In addition, we require the two extra dimensions to have finite volume, which requires $e^{\beta + v} \to 0$.
In Appendix \ref{app:AsForm} we argue, following \cite{scaling solutions}, that these conditions require that both $\cy''$ and $\cz''$ vanish in the limit $\eta \to \pm \infty$, and so
the functions $\cy$ and $\cz$ asymptote to linear functions of $\eta$ for large $|\eta|$:
\be \label{eq:YZbcs}
\cy \to \cy_\infty^\pm \mp\lambda_2^\pm\eta \quad \hbox{and} \quad
\cz \to \cz_\infty^\pm \mp\lambda_3^\pm\eta \quad \hbox{as} \quad
\eta \to \pm \infty \,,
\ee
where $\cy_\infty^\pm$, $\cz_\infty^\pm$, $\lambda_2^\pm$ and $\lambda_3^\pm$ are integration constants. The signs in eqs.~\pref{eq:YZbcs} are chosen so that $\lambda_2^\pm$ and $\lambda_3^\pm$ give the outward-pointing normal derivatives: {\em e.g.} $\lim_{\eta \to \pm \infty} N \cdot \partial \cy = \lambda_2^\pm$, where $N_\ssM$ denotes the outward-pointing unit normal to a surface at fixed $\eta$.
Not all of the integration constants identified to this point are independent of one another, however. In particular, the asymptotic form as $\eta \to +\infty$ can be computed from that at $\eta \to - \infty$ by integrating the field equations, and so cannot be independently chosen. In principle, given a value for $c$ and for all of the constants $\lambda_i^+$, $\cx_\infty^+$, $\cy_\infty^+$ and $\cz_\infty^+$, integration of the bulk field equations yields the values for $\lambda_i^-$, $\cx_\infty^-$, $\cy_\infty^-$ and $\cz_\infty^-$.
In addition, the integration constants need not all be independent even restricting our attention purely to the vicinity of only one of the branes. There are several reasons for this. One combination of these field equations --- the `Hamiltonian' constraint, eq.~\pref{eq:Hamconst} --- imposes a condition\footnote{If this constraint is satisfied as $\eta \to -\infty$, the equations of motion automatically guarantee it also holds as $\eta \to + \infty$.} that restricts the choices that can be made at $\eta \to - \infty$,
\be
\label{eq:powersconstraint}
(\lambda_2^\pm)^2 = \lambda_1^2 + \frac43 \left( \frac{1+c+c^2}{c^2} \right) (\lambda_3^\pm)^2 \,.
\ee
Also, it turns out that the constants $\cx_\infty^\pm$ are not independent of the other parameters describing the bulk solution, like the flux-quantization integer $n$ to be discussed next.
Next, flux quantization for the Maxwell field in the extra dimensions also imposes a relation amongst the integration constants. In the absence of brane sources, flux quantization implies \cite{scaling solutions}
\be
\frac n{g} = \frac{q}{H_0} \int_{-\infty}^\infty\exd\eta \; e^{2\cx}
= \frac{\lambda_1^2}{q\kappa^2 H_0} \, \int_{-\infty}^\infty\exd\eta \, \cosh^{-2}\left[\lambda_1(\eta-\eta_1)\right]
= \frac{2\lambda_1}{q\kappa^2 H_0} \,,
\ee
where $n$ is an integer. This gets slightly modified when branes are present, if the branes are capable of carrying a brane-localized Maxwell flux \cite{BulkAxions, susybranes} (as is the case in particular for the branes considered in \S3, below). In this case the flux-quantization condition is modified to
\be
\label{eq:bulkfluxquant}
\frac n{g} = \sum_b \frac{\Phi_b(\phi)}{2\pi}
+\frac{2\lambda_1}{q\kappa^2 H_0} \,,
\ee
where $\Phi_b$ is the flux localized on each brane. (More on this when we discuss brane properties in more detail in \S3.)
Finally, since the above solutions transform into one another under constant shifts of $\eta$, we may use this freedom to reparameterize $\eta \to \eta+\eta_1$ to eliminate $\eta_1$, in which case
\be
e^{-\cx}=\frac{\kappa\, q}{\lambda_1} \, \cosh(\lambda_1\eta)
= \frac{4\pi g}{\kappa H_0(2\pi n-g\sum_b\Phi_b)} \, \cosh(\lambda_1\eta).
\ee
{}From this we see that the asymptotic form for $\cx$ is
\be
\cx\to\cx_\infty^\pm\mp\lambda_1\eta\,,
\ee
with
\be
\cx_\infty^\pm = \ln\left[ \kappa H_0\(
\frac n{g}-\sum_b\frac{\Phi_b}{2\pi}\) \right] \,.
\ee
This shows explicitly how $\cx_\infty^\pm$ is related to other integration constants.
All told, this leaves $c$, $H_0$, $\lambda_2^-$, $\lambda_3^-$, $\cy_\infty^-$ and $\cz_\infty^-$ (or, equivalently, $c$, $H_0$, $\lambda_2^+$, $\lambda_3^+$, $\cy_\infty^+$ and $\cz_\infty^+$) as the six independent integration constants of the bulk solution. These we relate to brane properties in subsequent sections.
\subsection{Interpretation as 4D cosmology}
In order to make contact with the cosmology seen by a brane-localized observer, we must put the 4D metric into standard Friedmann-Lema\^\i tre-Robertson-Walker (FLRW) form. In particular, we should do so for the 4D Einstein-frame metric, for which the 4D Planck scale is time-independent.
\subsubsection*{4D Einstein frame}
Recall the 6D metric has the form
\ba
g_{\ssM \ssN} \, \exd x^\ssM\exd x^\ssN &=& (H_0\tau)^c \Bigl\{ \left[
-e^{2\omega} \exd\tau^2
+ e^{2 \alpha} \delta_{ij} \,\exd x^i \exd x^j \right]
+ \tau^{2} \left[ e^{2v} \exd\eta^2
+ e^{2\beta} \exd\theta^2 \right] \Bigr\} \nn\\
&=& \hat g_{\mu\nu}\exd x^\mu \exd x^\nu + \frac{(H_0\tau)^{2+c}}{H_0^2}
\left[e^{2v}\exd\eta^2
+e^{2\beta} \exd\theta^2 \right] \,,
\ea
and denote by $\hat R_{\mu \nu}$ the Ricci tensor constructed using $\hat g_{\mu\nu}$. In terms of these, the time dependence of the 4D Einstein-Hilbert term is given by
\be
\frac{1}{2 \kappa^2} \sqrt{-g} \; g^{\ssM \ssN} R_{\ssM \ssN}
= \frac{1}{2 \kappa^2 H_0^2} \sqrt{-\hat g} \; \hat g^{\mu\nu} \hat R_{\mu\nu} \;
e^{\beta +v}(H_0 \tau)^{2+c} + \cdots \,.
\ee
This time dependence can be removed by defining a new 4D Einstein-frame metric
\be
\tilde g_{\mu\nu} = (H_0\tau)^{2+c} \hat g_{\mu\nu} \,,
\ee
whose components are
\be
\tilde g_{\mu\nu} \, \exd x^\mu \exd x^\nu = (H_0\tau)^{2+2c}
\left[ -e^{2\omega} \exd \tau^2 + e^{2\alpha} \delta_{ij} \,
\exd x^i\exd x^j\right] \,.
\ee
\subsubsection*{FLRW time}
FLRW time is defined for this metric by solving $\exd t = \pm (H_0 \tau)^{1 + c} \exd \tau$. There are two cases to consider, depending on whether or not $c=-2$. If $c \ne -2$, then
\be
H_0 t = \frac{|H_0 \tau|^{2+c}}{|2+c|}
\qquad ( \hbox{if} \quad c \ne -2)\,,
\ee
where the sign is chosen by demanding that $t$ increases as $\tau$ does. (If $c < -2$ then $t$ rises from 0 to $\infty$ as $\tau$ climbs from $-\infty$ to 0.) This puts the 4D metric into an FLRW-like form
\be \label{FLRWwarpedform}
\tilde g_{\mu\nu} \, \exd x^\mu \exd x^\nu = - e^{2\omega} \, \exd t^2
+ a^2(t) \, e^{2\alpha} \delta_{ij} \, \exd x^i\exd x^j \,,
\ee
where
\be
a(t) = ( |c+2| \, H_0 t)^p \quad \hbox{with} \quad
p = \frac{1+ c}{2+c}
\qquad ( \hbox{if} \quad c \ne -2)\,.
\ee
\FIGURE[ht]{
\epsfig{file=pVSc2.eps,angle=0,width=0.35\hsize}
\caption{A plot of the power, $p$, controlling the scale factor's expansion, vs the parameter $c$ appearing in the higher-dimensional ansatz.
} \label{fig:pvsc} }
Notice that $p > 1$ if $c < -2$, with $p \to 1$ as $c \to - \infty$ and $p \to + \infty$ when $c \to -2$ from below (see fig.~\ref{fig:pvsc}). This describes accelerated power-law expansion, resembling that of `extended inflation' \cite{ExtInf}, for which $\ddot a/a = p\,(p-1)/t^2 > 0$. Similarly, $p < 0$ if $-2 < c < -1$, with $p \to 0$ as $c \to - 1$ and $p \to - \infty$ as $c \to -2$ from above. Since $p < 0$ this describes a 4D universe that contracts as $t$ increases. Finally $0 < p < 1$ if $c > -1$, climbing monotonically from zero with increasing $c$ until $p \to 1$ as $c \to + \infty$. Since $\ddot a/a < 0$, this describes decelerated expansion.
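As a quick numerical cross-check of this change of variables, one can verify that $a(t) = (|c+2|\,H_0 t)^p$ with $p = (1+c)/(2+c)$ reproduces $a \propto |H_0\tau|^{1+c}$, and confirm the three regimes of $p$ quoted above. The following Python sketch uses arbitrary illustrative values of $H_0$, $c$ and $\tau$ (they play no role beyond the check):

```python
import math

H0 = 0.7  # arbitrary illustrative scale

def p_of_c(c):
    # power controlling the scale factor, p = (1+c)/(2+c)
    return (1.0 + c) / (2.0 + c)

def a_direct(tau, c):
    # scale factor read off the 4D Einstein-frame metric: a = |H0 tau|^(1+c)
    return abs(H0 * tau) ** (1.0 + c)

def a_flrw(tau, c):
    # the same quantity via FLRW time, using H0 t = |H0 tau|^(2+c)/|2+c|
    t = abs(H0 * tau) ** (2.0 + c) / (abs(2.0 + c) * H0)
    return (abs(c + 2.0) * H0 * t) ** p_of_c(c)

# the two forms agree for sample values of c in each regime
for c, tau in [(1.0, 2.3), (-1.5, 0.4), (-3.0, -5.0)]:
    assert math.isclose(a_direct(tau, c), a_flrw(tau, c), rel_tol=1e-12)

# the three regimes of p discussed in the text
assert p_of_c(-3.0) > 1        # c < -2: accelerated expansion
assert p_of_c(-1.5) < 0        # -2 < c < -1: contraction
assert 0 < p_of_c(1.0) < 1     # c > -1: decelerated expansion
```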
If $c=-2$, we instead define
\be
H_0 t = - \ln|H_0\tau|
\qquad ( \hbox{if} \quad c = -2) \,,
\ee
in which case the FLRW metric again takes the form of eq.~\pref{FLRWwarpedform}, with
\be
a(t) = e^{H_0 t}
\qquad ( \hbox{if} \quad c = -2)\,.
\ee
This is the limiting case of the de Sitter-like solutions found in \cite{6DdS}.
It may seem surprising to find de Sitter solutions, given the many no-go results \cite{dSnogo}; these solutions, however, thread a loop-hole in the no-go theorems. The loop-hole is the benign-looking assumption of compactness: that integrals of the form $I := \int \exd^n x \; \sqrt{g} \; \Box X$ must vanish, where $X$ is a suitable combination of bulk fields. This assumption is violated by the back-reaction of the branes, which can force the bulk fields to become sufficiently singular near the branes to make nonzero contributions to integrals like $I$ \cite{6DdS, BBvN}.
\subsubsection*{$t$-dependence of other bulk fields}
Recalling that the extra-dimensional metric has the form
\be
\exd s^2 = \frac{|H_0 \tau|^{2+c}}{H_0^2} \left( e^{2v } \exd \eta^2 + e^{2\beta} \exd \theta^2 \right) \,,
\ee
we see that the linear size of the extra dimensions is time-independent if $c = -2$, but otherwise behaves as
\be
r(t) \propto \frac{|H_0 \tau|^{1 + c/2}}{H_0}
= \frac{(|c+2| \, H_0 t)^{1/2}}{H_0}
\qquad ( \hbox{if} \quad c \ne -2) \,.
\ee
This shows that the extra dimensions universally grow as $r \propto \sqrt t$ for any $c \ne -2$. In particular $r(t)$ grows even if $a(t) \propto t^p$ shrinks (which happens when $p < 0$: {\em i.e.} when $-2 < c < -1$). When $a(t)$ grows, it grows faster than $r(t)$ whenever $p > \frac12$, which is true both for $c < -2$ and for $c > 0$. It is true in particular whenever the expansion of the on-brane directions accelerates ({\em i.e.} when $p > 1$). When $0 < p < \frac12$ (and so $-1 < c < 0$) it is the extra dimensions that grow faster.
Another useful comparison for later purposes is between the size of $r(t)$ and the 4D Hubble length, $H^{-1}(t)$. Since neither $r$ nor $H$ depends on time when $c = -2$, this ratio is also time-independent in this limit. But for all other values of $c$, the Hubble scale is given by $H := \dot a/a = p/t$, with $p = (c+1)/(c+2)$, as above. Consequently, the ratio of $H$ to the KK scale, $m_\KK = 1/r$, is given by
\be
H(t)\, r(t) \propto \frac{|c+1|}{( |c+2| \, H_0 t)^{1/2}} \,,
\ee
and so decreases as $t$ evolves.
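Both scalings can be spot-checked numerically: $r \propto \sqrt t$ follows from substituting the FLRW time, and $|H|\,r$ then reproduces $|c+1|/\sqrt{|c+2|\,H_0 t}$ (with the constants of proportionality set to unity below; parameter values are illustrative only):

```python
import math

H0 = 1.0  # illustrative; overall constants are set to unity

for c in (1.0, -1.5, -3.0):
    tau = -4.0 if c < -2 else 4.0
    t = abs(H0 * tau) ** (2.0 + c) / (abs(2.0 + c) * H0)  # FLRW time
    p = (1.0 + c) / (2.0 + c)

    # KK size r ~ |H0 tau|^(1+c/2)/H0 equals sqrt(|c+2| H0 t)/H0
    r = abs(H0 * tau) ** (1.0 + c / 2.0) / H0
    assert math.isclose(r, (abs(c + 2.0) * H0 * t) ** 0.5 / H0, rel_tol=1e-12)

    # with H = p/t, the product |H| r reproduces |c+1|/sqrt(|c+2| H0 t)
    Hr = abs(p / t) * r
    assert math.isclose(Hr, abs(c + 1.0) / (abs(c + 2.0) * H0 * t) ** 0.5,
                        rel_tol=1e-12)
```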
The dilaton also has a simple time-dependence when expressed as a function of $t$. It is time-independent if $c = -2$, but otherwise evolves as
\be
e^\phi \propto \frac{1}{(H_0 \tau)^{2+c}} \propto \frac{1}{t}
\qquad ( \hbox{if} \quad c \ne -2) \,,
\ee
which shows that $r^2 e^\phi$ remains independent of $t$ for all $c$. Notice that this implies that evolution takes us deeper into the regime of weak coupling, since it is $e^\phi$ that is the loop-counting parameter of the bulk supergravity \cite{susybranes, TNCC}.
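A similar check confirms that $e^\phi \propto 1/t$ while the combination $r^2 e^\phi$ stays constant (both constants of proportionality are normalized to unity here; the values of $H_0$, $c$ and $\tau$ are illustrative):

```python
import math

H0, c = 1.0, 0.5   # illustrative values; any c != -2 works

for tau in (1.0, 3.0, 9.0):
    t = abs(H0 * tau) ** (2.0 + c) / (abs(2.0 + c) * H0)
    e_phi = abs(H0 * tau) ** (-(2.0 + c))         # dilaton, up to a constant
    r2 = (abs(H0 * tau) ** (1.0 + c / 2.0) / H0) ** 2

    # e^phi falls like 1/t: e^phi * (|c+2| H0 t) is constant (= 1 here)
    assert math.isclose(e_phi * (abs(c + 2.0) * H0 * t), 1.0, rel_tol=1e-12)
    # and r^2 e^phi is t-independent
    assert math.isclose(r2 * e_phi, 1.0 / H0 ** 2, rel_tol=1e-12)
```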
\section{Brane actions and bulk boundary conditions}
It is not just the geometry of the universe that is of interest in cosmology, but also how this geometry responds to the universe's energy distribution. So in order to exploit properly the above solutions to the field equations it is necessary to relate their integration constants to the physical properties of the matter that sources them. In the present instance this requires specifying an action for the two source branes that reside at $\eta \to \pm \infty$.
To this end we imagine one brane to be a spectator, in the sense that it does not involve any on-brane degrees of freedom. Its action therefore involves only the bulk fields, which to lowest order in a derivative expansion is\footnote{Although nominally involving one higher derivative than the tension term, the magnetic coupling, $\Phi$, describes the amount of flux that can be localized on the brane \cite{BulkAxions, susybranes}, and can be important when computing the energetics of flux-stabilized compactifications in supergravity because of the tendency of the tension to drop out of this quantity \cite{susybranes, TNCC}. We follow here the conventions for $\Phi$ adopted in \cite{BulkAxions, susybranes}, which differ by a factor of $e^{-\phi}$ from those of \cite{TNCC}.}
\be \label{eq:Sspec}
S_s = - \int \exd^4x \sqrt{-\gamma} \; \left\{ T_s
- \frac12 \, \Phi_s \, e^{-\phi} \, \epsilon^{mn} \cF_{mn} + \cdots \right\} \,.
\ee
Here $T_s$ and $\Phi_s$ are dimensionful parameters, $\gamma_{mn}$ is the induced on-brane metric, and $\epsilon^{mn}$ is the antisymmetric tensor defined on the two dimensions transverse to the brane. Physically, $T_s$ denotes the tension of the spectator brane, while the magnetic coupling, $\Phi_s$, has the physical interpretation of the amount of flux that is localized at the brane \cite{BulkAxions, susybranes} (see Appendix \ref{app:FluxQ}).
To provide the dynamics that drives the bulk time dependence we imagine localizing a scalar field --- or inflaton, $\chi$ --- on the second, `inflaton', brane with action
\be \label{eq:Sinf}
S_i = - \int \exd^4x \sqrt{- \gamma} \; \left\{ T_i +
f(\phi) \Bigl[ \gamma^{\mu\nu} \pd_\mu \chi \pd_\nu \chi
+ V(\chi) \Bigr] - \frac12 \, \Phi_i \, e^{-\phi}
\, \epsilon^{mn} \cF_{mn} + \cdots \right\} \,.
\ee
As before $T_i$ and $\Phi_i$ denote this brane's tension and bulk flux, both of which we assume to be independent of the bulk dilaton, $\phi$. In what follows we assume the following explicit forms,
\be
f(\phi) = e^{-\phi}
\quad \hbox{and} \quad
V(\chi) = V_0 + V_1 \, e^{\zeta \chi}
+ V_2 \, e^{2\zeta \chi} + \cdots \,,
\ee
but our interest is in the regime where the term $V_1 \, e^{\zeta \chi}$ dominates all the others in $V(\chi)$, and so we choose the coefficients $V_k$ appropriately. These choices --- $f = e^{-\phi}$ and $V = V_1 \, e^{\zeta \, \chi}$, as well as the $\phi$-independence of $T_s$, $T_i$, $\Phi_s$ and $\Phi_i$ --- are special because they preserve the scale invariance, eq.~\pref{scaleinv}, of the bulk equations of motion.
As we see below, these choices for the functions $f(\phi)$ and $V(\chi)$ are required in order for the equations of motion for $\chi$ to be consistent with the power-law time-dependence we assume above for the solution in the bulk. In order to see why this is true, we require the matching conditions that govern how this action back-reacts onto the properties of the bulk solution that interpolates between the two branes. This requires the generalization to time-dependent systems of the codimension-two matching conditions worked out elsewhere \cite{Cod2Matching, BBvN} for the special case of maximally symmetric on-brane geometries. These matching conditions generalize the familiar Israel junction conditions that relate bulk and brane properties for codimension-one branes, such as those encountered in Randall-Sundrum type models \cite{RS}.
\subsection{Time-dependent brane-bulk matching}
When the on-brane geometry is maximally symmetric --- {\em i.e.} flat, de Sitter or anti-de Sitter --- the matching conditions for codimension-two branes are derived in refs.~\cite{Cod2Matching} (see also \cite{PST}), and summarized with examples in ref.~\cite{BBvN}. In Appendix \ref{matchingderivation} we generalize these matching conditions to the case where the on-brane geometry is time-dependent, in order to apply it to the situation of interest here. In this section we describe the result of this generalization.
For simplicity we assume axial symmetry in the immediate vicinity of the codimension-2 brane, with $\theta$ being the coordinate labeling the symmetry direction and $\rho$ labeling a `radial' off-brane direction, with the brane located at $\rho = 0$. We do not demand that $\rho$ be proper distance, or even that $\rho$ be part of a system of orthogonal coordinates. However we do assume that there exist coordinates for which there are no off-diagonal metric components that mix $\theta$ with other coordinates: $g_{a \theta} = 0$. With those choices, the matching conditions for the metric are similar in form to those that apply in the maximally symmetric case:
\be \label{eq:cod2matching-g}
- \frac12 \left[ \sqrt{g_{\theta\theta}}
\, \left(K^{mn}-K \proj^{mn}\right) - {\rm flat} \right]
= \frac{\kappa^2}{2\pi} \, \frac1{\sqrt{-\gamma}}
\, \frac{\delta S_b}{\delta g_{mn}} \,,
\ee
while those for the dilaton and Maxwell field are
\be \label{eq:cod2matching-phi}
- \sqrt{g_{\theta\theta}} \, N^m\nabla_m\phi
= \frac{\kappa^2}{2\pi} \, \frac1{\sqrt{-\gamma}}
\, \frac{\delta S_b}{\delta\phi} \,,
\ee
and
\be \label{eq:cod2matching-A}
- \sqrt{g_{\theta\theta}} \, e^{-\phi} \, N_mF^{mn}
= \frac{\kappa^2}{2\pi} \, \frac1{\sqrt{-\gamma}} \,
\frac{\delta S_b}{\delta A_n} \,.
\ee
Here the action appearing on the right-hand-side is the codimension-two action, such as eq.~\pref{eq:Sspec} or \pref{eq:Sinf}, and `flat' denotes the same result for a metric without a singularity at the brane position. We define the projection operator $\proj^m_n = \delta^m_n - N^m N_n$, where $N^m$ is the unit normal to the brane, pointing into the bulk. The induced metric $\gamma_{mn}$ is the projection operator restricted to the on-brane directions, and has determinant $\gamma$.
In principle the indices $m,n$ in eq.~\pref{eq:cod2matching-g} run over all on-brane\footnote{When the metric has off-diagonal components mixing $\rho$ and brane directions, then $m,n$ also run over $\rho$. In our metric ansatz, those matching conditions vanish identically.} coordinates as well as $\theta$, and this might seem to present a problem since the codimension-2 action is not normally expressed as a function of $\theta$, since this is a degenerate coordinate at the brane position. However, the $\theta\theta$ matching condition is never really required, because it is not independent of the others. Its content can instead be found from the others by using the `Hamiltonian' constraint, eq.~\pref{eq:Hamconst}, in the near-brane limit \cite{Cod2Matching, BBvN, otheruvcaps}.
\subsubsection*{Specialization to the bulk solutions}
Specialized to the geometry of our bulk ansatz, the above considerations lead to the following independent matching conditions for the inflationary brane. Writing the 4D on-brane coordinates as $\{ x^\mu \} = \{ t, x^i \}$, the $tt$, $ij$ and dilaton matching conditions become
\ba \label{eq:matchingform}
\Bigl[ e^{\beta-v}(\partial_n \beta + 3 \partial_n \alpha) \Bigr]_b
&=& 1- \frac{\kappa^2}{2\pi} \left\{ T - H_0\Phi e^{-\varphi-v-\beta}A_\theta' + f(\phi) \left[
-\pd_\tau \chi \, \pd^\tau \chi + V(\chi) \right] \right\} \nn\\
\Bigl[ e^{\beta-v}(\partial_n \beta + 2 \partial_n \alpha
+ \partial_n \omega)\Bigr] &=&1 -\frac{\kappa^2}{2\pi} \, \left\{ T
- H_0\Phi e^{-\varphi-v-\beta}A_\theta' + f(\phi) \left[ \pd_\tau \chi \, \pd^\tau \chi
+ V(\chi) \right] \right\} \nn\\
\Bigl[ e^{\beta-v} \partial_n \phi \Bigr] &=& \frac{\kappa^2}{2\pi} \, \left( f'(\phi) \left[ \pd_\tau\chi \, \pd^\tau\chi + V(\chi) \right] + H_0\Phi e^{-\varphi-v-\beta}A_\theta' \right) \,,
\ea
with $\partial_n = \pm \partial_\eta$ denoting the inward-pointing (away from the brane) radial derivative, and with both sides evaluated at the brane position --- {\em i.e.} with bulk fields evaluated in the limit\footnote{As we see below, any divergences in the bulk profiles in this near-brane limit are to be absorbed in these equations into renormalizations of the parameters appearing in the brane action.} $\eta \to \mp \infty$. In these equations $f'$ denotes $\exd f/\exd \phi$ while $A'_\theta = \partial_\eta A_\theta = F_{\eta \theta}$.
\subsubsection*{Consistency with assumed time-dependence}
We first record what $f(\phi)$ and $V(\chi)$ must satisfy in order for the matching conditions, eqs.~\pref{eq:matchingform}, to be consistent with the time-dependence assumed for the bulk cosmological solutions of interest here. Evaluating the left-hand side of the matching conditions, eqs.~\pref{eq:matchingform} using the ans\"atze of eqs.~\pref{metric-ansatz} and \pref{dilaton-ansatz} shows that they are time-independent. The same must therefore also be true of the right-hand side.
We choose $f(\phi)$ and $V(\chi)$ by demanding that the time-dependence arising due to the appearance of $\phi$ on the right-hand side cancel with time-dependence of the $\chi$-dependent pieces. Comparing the bottom two equations of \pref{eq:matchingform} then shows that the time-dependence of $f(\phi)$ and $f'(\phi)$ must be the same, and so $f(\phi) = C e^{k\phi}$ for some constants $C$ and $k$. The scale $C$ can be absorbed into the normalization of $\chi$, and so is dropped from here on.
Similarly, comparing the top two of eqs.~\pref{eq:matchingform} shows that the quantity $g^{\tau \tau} \partial_\tau \chi \, \partial_\tau \chi$ must scale with time in the same way as does $V(\chi)$. Furthermore, any scaling of $\chi$ with time must satisfy the $\chi$ equation of motion, found by varying the brane action with respect to $\chi$:
\be \label{eq:chieqn}
\pd_\mu \left[ \sqrt{- \gamma} \; e^{k\phi} \pd^\mu \chi \right] - \sqrt{- \gamma} \; e^{k\phi} V'(\chi) = 0 \,.
\ee
Specialized to a homogeneous roll, $\chi=\chi(\tau)$, this simplifies to
\be \label{eq:chieqntau}
\pd_\tau \left[ (H_0 \tau)^{2c} (H_0 \tau)^{-k(c+2)} e^{-2\omega}(H_0\tau)^{-c} \pd_\tau \chi \right] + (H_0 \tau)^{2c} (H_0 \tau)^{-k(c+2)} V'(\chi) = 0 \,.
\ee
All of these conditions are satisfied provided we assume a potential of the form
\be
V(\chi) = V_1 \, e^{\zeta \chi} \,,
\ee
and an inflaton solution of the form
\be \label{chisoln}
\chi = \chi_0+\chi_1\ln|H_0\tau| \,,
\ee
since in this case the time-dependence of the $\chi$ field equation factors. In what follows it is notationally useful to define $\hat V_1 := V_1 \, e^{\zeta \chi_0}$, allowing eqs.~\pref{eq:chieqn} and \pref{eq:chieqntau} to be rewritten as
\be
\label{inflaton-eom}
H_0^2 e^{-2\omega} = \frac{\hat V_1 \zeta }{\chi_1(3+2\zeta \chi_1)}\,.
\ee
Notice that if $\zeta\chi_1 > 0$ then $V_1$ must also be positive.
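Eq.~\pref{inflaton-eom} can be verified directly by inserting the logarithmic solution, eq.~\pref{chisoln}, into the $\tau$-form of the $\chi$ equation of motion, eq.~\pref{eq:chieqntau}. The following Python sketch does this numerically with a central finite difference, using the choice $f(\phi) = e^{-\phi}$ (i.e. $k=-1$) and the scaling relation $\zeta\chi_1 = -(c+2)$ derived below, with $\omega = 0$ and illustrative parameter values:

```python
import math

# illustrative parameters with zeta*chi1 > 0 (hypothetical values)
zeta, chi1, V1h, chi0 = 0.3, 0.5, 2.0, 0.0
k = -1                          # f(phi) = e^{-phi}
c = -2.0 - zeta * chi1          # scaling relation zeta*chi1 = -(c+2)
# the inflaton equation with omega = 0 then fixes H0:
H0 = math.sqrt(V1h * zeta / (chi1 * (3.0 + 2.0 * zeta * chi1)))

def bracket(tau):
    # quantity inside d/dtau[...] in the tau-form of the chi equation
    chidot = chi1 / tau         # from chi = chi0 + chi1 ln(H0 tau)
    return ((H0 * tau) ** (2 * c) * (H0 * tau) ** (-k * (c + 2))
            * (H0 * tau) ** (-c) * chidot)

def source(tau):
    chi = chi0 + chi1 * math.log(H0 * tau)
    Vp = zeta * V1h * math.exp(zeta * (chi - chi0))   # V'(chi)
    return (H0 * tau) ** (2 * c) * (H0 * tau) ** (-k * (c + 2)) * Vp

# central finite difference of the bracket plus the source should vanish
tau, h = 1.7, 1e-5
lhs = (bracket(tau + h) - bracket(tau - h)) / (2 * h) + source(tau)
assert abs(lhs) < 1e-8
```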
In this case the conditions that $\pd_\tau \chi \pd^\tau \chi$ and $V(\chi)$ scale like $e^{-k\phi}$ boil down to
\be
(H_0\tau)^{-c-2} \propto \tau^{\zeta \chi_1} \propto \tau^{k(c+2)} \,,
\ee
and so consistency between the scaling solutions and the matching condition implies $k = -1$, and so $f(\phi) = e^{-\phi}$ as anticipated earlier. It also determines the bulk time exponent $c$ in terms of brane properties:
\be \label{ceqn}
\zeta \chi_1 = - (c+2) \,.
\ee
\subsection{Relation between brane parameters and physical bulk quantities}
We now use the above tools to establish more precisely the connection between brane properties and the physical characteristics of the bulk geometry.
\subsubsection*{Determination of integration constants}
Specializing the matching to the choices $f(\phi) = e^{-\phi}$ and $V(\chi) = V_1 e^{\zeta \chi}$, and using the $\tau$-dependence of the bulk and brane fields described in \S2, gives the matching conditions in a form that determines the bulk integration constants in terms of properties of the two branes.
Consider first the spectator brane, for which the matching conditions are
\ba \label{redmatching}
e^{\beta-v} \( \lambda_2^+ - \frac{\lambda_3^+}c \)
&=& 1 - \frac{\kappa^2 T_s}{2\pi} + \frac{\kappa^2}{2\pi}H_0\Phi_se^{-\varphi-v-\beta}A_\theta'\nn\\
e^{\beta-v} \( \lambda_2^+ + \frac{1+2c}{3c} \, \lambda_3^+ \)
&=& 1 - \frac{\kappa^2T_s}{2\pi} + \frac{\kappa^2}{2\pi}H_0\Phi_se^{-\varphi-v-\beta}A_\theta' \\
e^{\beta-v} \( \lambda_1 - \lambda_2^+ - 2 \lambda_3^+ \)
&=& \frac{\kappa^2}{\pi}H_0\Phi_se^{-\varphi-v-\beta}A_\theta' \,, \nn
\ea
with all quantities evaluated at $\eta \to + \infty$. The difference between the first two of these implies
\be
\lambda_3^+ = 0\,,
\ee
for the asymptotic geometry near the spectator brane, which also implies\footnote{From the constraint alone, $\lambda_1 = - \lambda_2^+$ is also allowed. The requirement of codimension-2 branes together with finite volume excludes this possibility. For details, see appendix \ref{app:AsForm}.} $\lambda_1=\lambda_2^+$ once the bulk constraint, eq.~\pref{eq:powersconstraint}, is used. This is then inconsistent with the third matching condition at this brane unless we also choose the spectator brane to carry no flux, $\Phi_s=0$. Given this, the matching conditions degenerate into the usual defect-angle/tension relation \cite{TvsA}, which in the coordinates used here reads
\be
\lambda_1=\lambda_2^+=e^{v-\beta}\(1-\frac{\kappa^2 T_s}{2\pi}\) \,.
\ee
This summarizes the near-brane geometry for a pure-tension brane for which $T_s$ does not depend on $\phi$.
Next consider the inflaton brane, for which matching implies
\ba
e^{\beta-v} \( \lambda_2^- - \frac{\lambda_3^-}c \)
&=& 1- \frac{\kappa^2}{2\pi} \, e^{-\varphi} \left[ e^{-2\omega}
(H_0\chi_1)^2 + \hat V_1 - H_0 \, \Phi_i \, e^{-v-\beta}A_\theta'\right] - \frac{\kappa^2 T_i}{2\pi} \nn\\
e^{\beta-v} \( \lambda_2^- + \frac{1+2c}{3c} \, \lambda_3^- \)
&=& 1- \frac{\kappa^2}{2\pi} \, e^{-\varphi} \left[ - e^{-2\omega}
(H_0\chi_1)^2 + \hat V_1 - H_0 \, \Phi_i \, e^{-v-\beta}A_\theta'\right] - \frac{\kappa^2 T_i}{2\pi} \nn\\
e^{\beta-v} \( \lambda_1 - \lambda_2^- - 2 \lambda_3^- \)
&=& \frac{\kappa^2}{\pi} \, e^{-\varphi} \left[
e^{-2\omega}(H_0\chi_1)^2 - \hat V_1 + H_0 \, \Phi_i \, e^{-v-\beta} A_\theta'\right] \,,
\ea
with the fields evaluated at $\eta \to - \infty$. Using the first two matching conditions to eliminate $\lambda_2^-$, and using eqs.~\pref{inflaton-eom} and \pref{ceqn} to eliminate $H_0$ and $c$ allows the isolation of $\lambda_3^-$, giving
\be
\label{lambda3matching}
e^{\beta-v}\lambda_3^- = \frac{\kappa^2 \hat V_1}{2\pi}
\( \frac{6+3 \, \zeta \chi_1}{3 + 2 \, \zeta \chi_1}\) \, e^{-\varphi}\,.
\ee
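The route to eq.~\pref{lambda3matching} --- subtracting the first two matching conditions and eliminating $H_0$ with eq.~\pref{inflaton-eom} --- can be cross-checked numerically. In this Python sketch the factors $e^{\beta-v}$, $e^{-\varphi}$, $e^{-2\omega}$ and $\kappa^2/2\pi$ are all set to unity, and the parameter values are illustrative:

```python
import math

# illustrative parameters; e^{beta-v} = e^{-varphi} = e^{-2 omega} = 1
# and kappa^2/(2 pi) = 1 in these units
zeta, chi1, V1h = 0.3, 0.5, 2.0
s = zeta * chi1
c = -2.0 - s
H0sq = V1h * zeta / (chi1 * (3.0 + 2.0 * s))   # inflaton eq., omega = 0

# tt minus ij matching condition isolates lambda3^-:
#   lambda3^- * (-1/c - (1+2c)/(3c)) = -2 * H0^2 chi1^2
coeff = -1.0 / c - (1.0 + 2.0 * c) / (3.0 * c)
lam3 = -2.0 * H0sq * chi1 ** 2 / coeff

# claimed closed form: (V1h) (6 + 3 s)/(3 + 2 s)
lam3_claim = V1h * (6.0 + 3.0 * s) / (3.0 + 2.0 * s)
assert math.isclose(lam3, lam3_claim, rel_tol=1e-12)
```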
In general, matching for the inflaton brane is more subtle, since for it the above matching conditions typically diverge when evaluated at the brane positions. As usual \cite{Bren}, this divergence is absorbed into the parameters of the brane action, as we now briefly sketch.
\subsubsection*{Brane renormalization}
In general, in the near-brane limit $\beta-v=\cx-\cy$ varies linearly with $\eta$, approaching $\cx_\infty^\pm - \cy_\infty^\pm \mp (\lambda_1 - \lambda_2^\pm) \, \eta$ as $\eta \to \pm \infty$. This shows that unless $\lambda_2^\pm=\lambda_1$ (which with the constraint, eq.~\pref{eq:powersconstraint}, then implies $\lambda_3^\pm=0$), the left-hand sides of the above matching conditions diverge. Such divergences are generic to sources of codimension two and higher, just as the Coulomb potential diverges at the position of a point charge (a codimension-3 source in 3 space dimensions).
We absorb these divergences into renormalizations of the brane parameters, which in the present instance are $V_1$, $\zeta$, $T_i$ and $\Phi_i$, together with a wave-function renormalization of the on-brane field, $\chi$ (which for the present purposes amounts to a renormalization of $\chi_1$). To this end we regularize the matching conditions by evaluating them at a small but nonzero distance away from the brane --- {\em i.e.} for $|\eta| = 1/\epsilon$ very large --- and assign an $\epsilon$-dependence to the couplings in such a way as to ensure that the renormalized results are finite as $\epsilon \to 0$. This is a meaningful procedure because the values of these parameters are ultimately determined by computing physical observables in terms of them and comparing with measurements. All of the uncertainties associated with the $\epsilon$ regularization then cancel once the renormalized parameters are eliminated in favour of observables, since a theory's predictive content lies in the correlations it implies among these observables.
In this section we (temporarily) denote the resulting renormalized (finite) brane parameters by a bar, {\em e.g.} for $\eta = -1/\epsilon$,
\be \label{zrendef}
\zeta \to \overline\zeta := Z_\zeta(\epsilon) \, \zeta \,,
\quad
V_1 \to \overline V_1 := Z_\ssV (\epsilon) \, V_1 \,,
\quad
\chi_1 \to \overline \chi_1 := Z_\chi (\epsilon) \, \chi_1
\quad \hbox{and so on} \,.
\ee
We define the parameters $Z_\ssV$, $Z_\zeta$ {\em etc.} so that $\overline \zeta$, $\overline V_1$ and the others remain finite. Since, as we show later, the integration constants like $\lambda_i^\pm$ are directly relatable to physical observables, the above matching conditions give us guidelines on how the various couplings renormalize. For instance, inspection of eq.~\pref{ceqn} shows that the product $\overline \zeta \, \overline \chi_1$ should remain finite, since it determines the physically measurable quantity $c$. Consequently
\be \label{zetachiZs}
Z_\zeta(\epsilon) Z_\chi (\epsilon) = \hbox{finite} \,.
\ee
Next, the finiteness of $\zeta \chi_1$ together with the particular combination of matching conditions that sets $\lambda_3^-$ --- {\em i.e.} eq.~\pref{lambda3matching} --- shows that when $\eta = -1/\epsilon$ we must define
\be \label{eq:ZVexp}
Z_\ssV = \frac{\overline V_1}{V_1} = e^{-\left[ \lambda_3^- + \frac32 \,
(\lambda_2^- - \lambda_1) \right]/\epsilon} + \hbox{(finite)} \,,
\ee
in order to compensate for the divergent behaviour of $e^{\varphi + \beta - v}$.
Using this in the inflaton equation, eq.~\pref{inflaton-eom}, and keeping in mind that (see below) $H_0$ is a physical parameter, we find
\be
H_0^2 \propto e^{-\( \lambda_1 - \lambda_2^- + \frac{2}{c} \lambda_3^-
\) /\epsilon} \; \frac{\zeta }{\chi_1} \,,
\ee
and so this, together with eq.~\pref{zetachiZs}, leads to $Z_\zeta (\epsilon) / Z_\chi(\epsilon) = \hbox{finite}$. If we absorb only the exponential dependence on $1/\epsilon$ into the renormalizations --- {\em e.g.} taking $\hbox{`finite'} = 0$ in eq.~\pref{eq:ZVexp} --- this implies
\ba
Z_\zeta &=& e^{ - \frac12 \( \lambda_1 - \lambda_2^-
+ \frac{2}{c} \, \lambda_3^- \) / \epsilon} \nn\\
Z_\chi &=& e^{\frac12 \( \lambda_1 - \lambda_2^-
+\frac{2}{c} \, \lambda_3^- \)/\epsilon} \,.
\ea
Finally, the matching conditions involving $T_i$ are rendered finite by defining
\ba
1 - \frac{\kappa^2 \overline T_i}{2\pi} &:=& e^{- ( \lambda_2^- - \lambda_1 )/\epsilon} \( 1 - \frac{\kappa^2 T_i}{2\pi} \) + \hbox{(finite)} \,.
\ea
$\Phi_i$ does not require a divergent renormalization, as it appears as a finite quantity in the matching conditions.
\subsubsection*{Connection to physical properties}
Since the above section uses the finiteness of the bulk integration constants, $\lambda_i^\pm$, $H_0$, $c$ {\em etc.}, we pause here to relate these quantities more explicitly to physical observables. This ultimately is what allows us to infer the values taken by the finite renormalized parameters.
First, $c$ and $H_0$ directly determine the power of time with which the scale factor for the on-brane dimensions expands, and so they are measurable through cosmological observations that determine $\dot a/a$, $\ddot a/a$ and so on.
Similarly, the volume of the extra dimensions is
\be
\cV_2 = \int \exd^2x \, \sqrt{g_2} =
2\pi (H_0\tau)^c\tau^2\int_{-\infty}^\infty \exd\eta \,
\exp\(\frac\cx2+\frac{3\cy}2 +\cz\),
\ee
and the proper distance between the branes is given by
\be
L = (H_0\tau)^{c/2}\tau \int_{-\infty}^\infty
\exd\eta \, \exp\( -\frac\cx4 +\frac{5\cy}4 + \frac\cz2 \) \,.
\ee
It is through relations such as these that physical quantities get related to the integration constants. In particular, convergence of these integrals implies conditions on the signs of the combinations $\lambda_1+4\lambda_2^\pm+2\lambda_3^\pm$ and $-\lambda_1+5\lambda_2^\pm+2\lambda_3^\pm$, all of which must be finite. The same is true of $\lambda_2$, which can be regarded as a function of the other two powers through the constraint \pref{eq:powersconstraint}.
Finally, the fluxes, $\Phi_s$ and $\Phi_i$, appear in the flux quantization condition and are directly related to a (finite) physical quantity: the magnetic charge of the branes. The renormalized tensions, $T_s$ and $T_i$, similarly enter into expressions for the deficit angle at the corresponding brane location.
\subsection{The 6D perspective in a nutshell}
Before turning to the view as seen by a 4D observer, this section first groups the main results obtained above when using the time-dependent matching conditions, eqs.~\pref{redmatching}, to relate the constants of the bulk scaling solution to the (renormalized) parameters in the source-brane actions, eqs.~\pref{eq:Sspec} and \pref{eq:Sinf}.
The physical couplings that we may specify on the inflaton brane are the renormalized quantities $V_1$, $\zeta $, $T_i$ and $\Phi_i$ (and we henceforth drop the overbar on renormalized quantities). On the spectator brane we similarly have $T_s$ and $\Phi_s$. We also get to specify `initial conditions' for the on-brane inflaton: $\chi_0$ and $\chi_1$, as well as the integer, $n$, appearing in the flux-quantization condition. Of these, $\chi_0$ and $V_1$ only appear in the combination $\hat V_1 = V_1 \, e^{\zeta \chi_0}$, and so the value of $\hat V_1$ can be regarded as an initial condition for the inflaton rather than a choice for a brane coupling. Altogether these comprise 8 parameters: 5 brane couplings; 1 bulk flux integer; and 2 inflaton initial conditions.
We now summarize the implications these parameters impose on the integration constants in the bulk, and identify any consistency conditions amongst the brane properties that must be satisfied in order to be able to interpolate between them using our assumed scaling bulk solution.
\subsubsection*{Time dependence}
First off, consistency of the scaling ansatz for the time dependence of all fields gives
\be \label{cvszetachi}
c = -2 - \zeta \chi_1 \,.
\ee
Notice that this involves only the brane coupling $\zeta$ --- whose value determines the flatness of the inflaton potential --- and the inflaton initial condition, $\chi_1$. In particular, $c = -2$, corresponding to a de Sitter on-brane geometry, if either $\zeta$ or $\chi_1$ is chosen to vanish.
Next, we take the inflaton equation of motion on the brane to give the bulk parameter $H_0$ in terms of choices made on the inflationary brane:
\be
H_0^2 = e^{-\frac12 (\cx_\infty^- - \cy_\infty^-)
+\frac{\zeta \chi_1}{2+\zeta \chi_1} \, \cz_\infty^-}
\left( \frac{\hat V_1}{3+2 \, \zeta \chi_1}
\right) \frac{\zeta }{ \chi_1} \,.
\ee
Among other things, this shows that the choice $\chi_1 = 0$ does not satisfy the $\chi$ field equation unless $\zeta$ or $V_1$ vanishes.
\subsubsection*{Consistency relations}
Consider next how the number of couplings on the branes restricts the other integration constants in the bulk.
Start with the spectator brane. Near the spectator brane we have $\lambda_3^+ = 0$ and
\be \label{lamsums}
\lambda_1 = \lambda_2^+ = e^{\cy_\infty^+ - \cx_\infty^+}
\(1-\frac{\kappa^2 T_s}{2\pi}\) \,,
\ee
as well as $\Phi_s = 0$. Specifying $T_s$ therefore imposes two relations among the four remaining independent bulk integration constants, $\lambda_1$, $\lambda_2^+$, $\cy_\infty^+$ and $\cz_\infty^+$, relevant to asymptotics near the spectator brane. We regard eq.~\pref{lamsums} as being used to determine the values of two of these, say $\lambda_2^+$ and $\cy_\infty^+$.
Next we use the bulk equations of motion, eqs.~\pref{eq:chisoln} and \pref{bulkXY}, to integrate the bulk fields across to the inflaton brane. Starting from a specific choice for the fields and their $\eta$-derivatives at the spectator brane, this integration process leads to a unique result for the asymptotic behaviour at the inflaton brane. Given the 2-parameter set of solutions consistent with the spectator brane tension, integration of the bulk field equations should generate a 2-parameter subset of the parameters describing the near-inflaton-brane limit.
Now consider matching at the inflaton brane. The three asymptotic powers describing the near-brane limit for the inflaton brane can be expressed as
\ba \label{lamsumi}
\lambda_1 &=& e^{\cz_\infty^- - \frac32 ( \cx_\infty^- -
\cy_\infty^-)} \frac{\kappa^2 \hat V_1}{2\pi} \,
\( \frac{\zeta \chi_1}{3+2 \, \zeta \chi_1} \)
+ e^{\cy_\infty^- - \cx_\infty^-} \( 1
-\frac{\kappa^2 T_i}{2\pi} \)
+ \frac{3\kappa^2 H_0 q \, \Phi_i}{2\pi} \nn\\\nn\\
\lambda_2^- &=& e^{ \cz_\infty^- - \frac32 ( \cx_\infty^-
- \cy_\infty^-)} \frac{\kappa^2 \hat V_1}{2\pi}
\( \frac{-6-3 \, \zeta \chi_1}{3+2 \, \zeta \chi_1} \)
+ e^{ \cy_\infty^- - \cx_\infty^-} \( 1 -
\frac{\kappa^2 T_i}{2\pi} \)
+\frac{\kappa^2 H_0 q\, \Phi_i}{2\pi}\\
\nn\\
\lambda_3^- &=& e^{ \cz_\infty^- - \frac32 ( \cx_\infty^-
- \cy_\infty^-)} \frac{\kappa^2 \hat V_1}{2\pi}
\( \frac{6+3 \, \zeta \chi_1}{3+2 \, \zeta \chi_1} \) \,, \nn
\ea
which follow from three of the four matching conditions at the inflaton brane.\footnote{Recall that for time-independent systems there are 3 metric matching conditions -- $(tt)$, $(ij)$ and $(\theta \theta)$ -- plus that for the dilaton, $\phi$. The Hamiltonian constraint then imposes one relation amongst these three conditions, that can be regarded as implicitly fixing how the brane action depends on $g_{\theta\theta}$.} Notice that the constant $q$ appearing here can be regarded as being a function of the flux-quantization integer $n$ and the inflaton-brane flux coupling, $\Phi_i$:
\be
q = \frac{4\pi g \lambda_1}{\kappa^2H_0[2\pi n -g\sum_b\Phi_b]}
= \frac{4\pi g \lambda_1 }{\kappa^2H_0[2\pi n -g\Phi_i]}\,.
\ee
The three parameters $\lambda_1$, $\lambda_2^-$ and $\lambda_3^-$ are not independent because they must satisfy the constraint, eq.~\pref{eq:powersconstraint},
\be \label{hamconst2}
(\lambda_2^-)^2 - (\lambda_1)^2 = \frac43 \left( \frac{1+c+c^2}{c^2}
\right) (\lambda_3^-)^2
=\frac{12 + 12 \, \zeta \chi_1 +4(\zeta \chi_1)^2}{12
+12\, \zeta \chi_1 +3(\zeta \chi_1)^2} \; (\lambda_3^-)^2 \,,
\ee
whose validity follows as a consequence of the field equations because the same constraint holds for the parameters, $\lambda_1$, $\lambda_2^+$ and $\lambda_3^+$, that control the bulk asymptotics near the spectator brane.
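The algebraic step from the middle to the right-hand expression of eq.~\pref{hamconst2} --- substituting $c = -2 - \zeta\chi_1$ --- can be checked exactly with rational arithmetic; writing $s := \zeta\chi_1$, a short Python check is:

```python
from fractions import Fraction

# substitute c = -2 - s (s = zeta*chi1) into (4/3)(1 + c + c^2)/c^2 exactly
for s in (Fraction(1, 7), Fraction(3, 2), Fraction(-1, 3)):
    c = -2 - s
    lhs = Fraction(4, 3) * (1 + c + c * c) / (c * c)
    rhs = (12 + 12 * s + 4 * s * s) / (12 + 12 * s + 3 * s * s)
    assert lhs == rhs
```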
In principle, for a given set of inflaton-brane couplings we can regard two of eqs.~\pref{lamsumi} as fixing the remaining two free bulk parameters. The third condition does not over-determine these integration constants of the bulk, because the constraint, eq.~\pref{hamconst2}, is satisfied as an identity for all of the 2-parameter family of bulk solutions found by matching to the spectator brane. Consequently the third of eqs.~\pref{lamsumi} must be read as a constraint on one of the inflaton-brane properties. If we take this to be $\hat V_1$, say, then it can be interpreted as a restriction on the initial condition, $\chi_0$, in terms of the spectator-brane tension. This restriction is the consistency condition that is required if we wish to interpolate between the two branes using the assumed bulk scaling solution.
\subsubsection*{Inflationary choices}
At the end of the day we see that consistency with the bulk geometry does not preclude us from having sufficient freedom to adjust brane properties like $\zeta$ and $\chi_1$ so as to dial the parameters $c$ and $H_0$ freely. In other words, there is enough freedom in our assumed brane properties to treat these bulk parameters as independent, freely adjustable quantities.
In particular, we are free to choose the product $\zeta \chi_1$ to be sufficiently small and positive -- {\em c.f.} eq.~\pref{cvszetachi} -- to ensure an accelerated expansion: {\em i.e.} that $c$ is just slightly more negative than the de Sitter value of $-2$. This is the adjustment that is required to assure a `slow roll' within this model.
We also see that the time-dependence of the solution is such that the brane potential energy shrinks as the brane expands. That is, evaluated at the solution, eq.~\pref{chisoln},
\be \label{Vovertime}
\Bigl. V_1 \, e^{\zeta \chi} \Bigr|_{\rm soln}
= \hat V_1 \, | H_0 \tau
|^{\zeta \chi_1} = \hat V_1 \, \Bigl(|c+2|
\, H_0 t \Bigr)^{\zeta \chi_1/(2+c)}
= \frac{\hat V_1}{ \zeta \chi_1
H_0 t } \,.
\ee
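The final equality above follows because $\zeta\chi_1 = -(2+c)$ makes the exponent equal to $-1$ while $|c+2| = \zeta\chi_1$. A minimal numerical sketch (all values are arbitrary samples; `s` stands for $\zeta\chi_1$):

```python
# Check (|c+2| H0 t)^(zeta*chi_1/(2+c)) == 1/(zeta*chi_1 H0 t)
# using zeta*chi_1 = -(2+c), i.e. c slightly below -2 (sample values).
H0, t = 1.7, 3.2
for s in [0.05, 0.2]:            # s = zeta*chi_1 > 0
    c = -(2.0 + s)               # so 2 + c = -s and |c+2| = s
    lhs = (abs(c + 2.0) * H0 * t) ** (s / (2.0 + c))
    rhs = 1.0 / (s * H0 * t)
    assert abs(lhs - rhs) < 1e-9
```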
This shows how inflation might end in this model. Suppose we take
\be
V(\chi) = V_0 + V_1 \, e^{\zeta \chi} + V_2 \, e^{2 \, \zeta \chi}
+ \cdots \,,
\ee
where $V_1$ is chosen much larger than $V_0$ or the other $V_k$'s. If $\chi$ is initially chosen so that $V(\chi) \simeq V_1 \, e^{\zeta \chi}$ is dominated by the term linear in $e^{\zeta \chi}$, then the above bulk scaling solution can be consistent with the brane-bulk matching conditions. But eq.~\pref{Vovertime} shows that this term shrinks in size when evaluated at this solution (as also do the terms involving higher powers of $e^{\zeta \chi}$), until eventually the $V_0$ term dominates.
Once $V_0$ dominates, the bulk scaling solution can no longer apply, plausibly also implying an end to the above accelerated expansion of the on-brane geometry. If $V(\chi) \simeq V_0$, then the inflaton brane effectively has a $\phi$-dependent tension, $T_{\rm eff} = T_i + V_0 \, e^{-\phi}$, which breaks the bulk scale invariance and so can lift the bulk's flat direction \cite{Cod2Matching, BBvN, susybranes} and change the dynamics of the bulk geometry.
Although this likely ends the inflationary evolution described above, it is unlikely in itself to provide a sufficiently graceful exit towards a successful Hot Big Bang epoch. Earlier calculations for maximally-symmetric branes show that such a tension leads to an effective potential (more about which below) proportional to $T_{\rm eff}' \propto - V_0 \, e^{-\phi}$, which points to a continued runaway along the would-be flat direction rather than a standard hot cosmology. We leave for further work the construction of a realistic transition from extra-dimensional inflation to later epochs, but expect that a good place to seek this interface is by modifying the assumption that $\Phi_s$ and/or $\Phi_i$ remain independent of $\phi$, since it is known \cite{TNCC} that when $\sum_b \Phi_b \propto e^{\phi}$ the low-energy scalar potential can act to stabilize $\phi$ at a minimum where the low-energy effective potential vanishes (classically).
\section{The view from 4D}
We now ask what the above dynamics looks like from the perspective of a 4D observer, as must be possible on general grounds within an effective theory in the limit when the Hubble scale, $H$, is much smaller than the KK scale. We can find the 4D description in this limit by explicitly compactifying the 6D theory. Our goal when doing so is to show how the low-energy 4D dynamics agrees with that of the explicit higher-dimensional solution, and to acquire a better intuition for how this inflationary model relates to more familiar 4D examples.
\subsection{The 4D action}
The simplest way to derive the functional form of the low-energy 4D action (at least at the classical level) is to use the classical scale invariance of the bulk field equations, since these are preserved by the choices we make for the branes --- at least during the inflationary epoch where $V \simeq V_1 \, e^{\zeta \chi}$.
Since this symmetry must therefore also be a property of the classical 4D action, there must exist a frame for which it can be written in the following scaling form:
\ba \label{4Deffaction}
S_{\rm eff} &=& - \int \exd^4x \sqrt{ - \hat g_4}
\; e^{-2 \varphi_4} \left[ \frac1{2\kappa_{4}^2}
\hat g^{\mu\nu} \( \hat R_{\mu\nu} + Z_\varphi\,
\pd_\mu \varphi_4 \pd_\nu\varphi_4 \) \right. \nn\\
&& \qquad\qquad\qquad\qquad\qquad\qquad
\left. \phantom{\frac12}
+ f^2 \, \hat g^{\mu\nu}
\pd_\mu \chi \pd_\nu \chi + U_\JF \( e^{\zeta \chi - \varphi_4} \)
\right] \\
&=& - \int \exd^4x \sqrt{ - {\bf g}_4}
\; \left[ \frac1{2\kappa_{4}^2}
{\bf g}^{\mu\nu} \( {\bf R}_{\mu\nu} + (6 + Z_\varphi) \,
\pd_\mu \varphi_4 \pd_\nu\varphi_4 \) \right. \nn\\
&& \qquad\qquad\qquad\qquad\qquad\qquad
\left. \phantom{\frac12}
+ f^2 \, {\bf g}^{\mu\nu}
\pd_\mu \chi \pd_\nu \chi + e^{2 \varphi_4}
U_\JF \( e^{\zeta \chi - \varphi_4} \)
\right] \,, \nn
\ea
where $\varphi_4$ denotes the 4D field corresponding to the flat direction of the bulk supergravity and $\chi$ is the 4D field descending from the brane-localized inflaton. The second version gives the action in the 4D Einstein frame, whose metric is defined by the Weyl transformation:
\be
{\bf g}_{\mu\nu} = e^{-2\varphi_4} \hat g_{\mu\nu} \,.
\ee
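The overall factors work out because in four dimensions $\sqrt{-{\bf g}} = e^{-4\varphi_4}\sqrt{-\hat g}$ while ${\bf g}^{\mu\nu} = e^{2\varphi_4}\hat g^{\mu\nu}$, so $\sqrt{-{\bf g}}\,{\bf g}^{\mu\nu} = e^{-2\varphi_4}\sqrt{-\hat g}\,\hat g^{\mu\nu}$, and the potential picks up the explicit factor $e^{2\varphi_4}$ shown in the second line. A small sketch of this Weyl-scaling bookkeeping (the value of $\varphi_4$ is an arbitrary sample):

```python
import math

# Weyl scaling in D = 4: with g = w * ghat, w = exp(-2*phi4),
# det(g) scales as w^4 and the inverse metric as 1/w, so
# sqrt(-g) g^{mu nu} = w * sqrt(-ghat) ghat^{mu nu}.
phi4 = 0.37                      # arbitrary sample value
w = math.exp(-2.0 * phi4)        # conformal factor
det_scale = w ** 4               # det g = w^4 det ghat in 4 dimensions
inv_scale = 1.0 / w              # g^{mu nu} = (1/w) ghat^{mu nu}
kinetic_factor = math.sqrt(det_scale) * inv_scale
assert abs(kinetic_factor - w) < 1e-12

# The potential term carries no inverse metric: sqrt(-ghat) = sqrt(-g)/w^2,
# so the Jordan-frame prefactor e^{-2 phi4} becomes e^{+2 phi4} in bold frame.
pot_factor = math.exp(-2.0 * phi4) / math.sqrt(det_scale)
assert abs(pot_factor - math.exp(2.0 * phi4)) < 1e-12
```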
The potential, $U_\JF$, is an a-priori arbitrary function of the scale-invariant combination $e^{\zeta \chi - \varphi_4}$, whose functional form is not dictated purely on grounds of scale invariance.
The detailed form of $U_\JF$ and the values of the constants $\kappa_4$, $Z_\varphi$ and $f$, are calculable in terms of the microscopic parameters of the 6D theory by dimensional reduction. As shown in detail in Appendix \ref{app:dimred}, we find $Z_\varphi = -4$,
\ba \label{kappaJFandfmatching}
\frac{1}{2 \kappa_4^2} &=& \int \exd \theta \exd \eta \;
\frac{e^{-\omega + 3\alpha + \beta + v}}{2 \kappa^2 H_0^2}
= \frac\pi{ \kappa^2 H_0^2} \int \exd \eta
\; e^{2\cy-2\cz/c} \nn\\
&=& \frac\pi{ \kappa^2 H_0^2} \int \exd \eta \; \frac{\cz''}{3c}
= -\frac{\pi\lambda_3^-}{H_0^2\kappa^2c} \nn\\
f^{2} &=& e^{-\cx_\infty^- +\cy_\infty^- - \frac2c\cz_\infty^-}
\left( \frac{23-2c}{28+8c} \right) \,,
\ea
while the potential becomes
\be \label{VEFmatching}
V_\EF := e^{2\varphi_4} \, U_\JF =
- C e^{2\varphi_4} + D e^{\zeta\chi + \varphi_4} \,,
\ee
with the constants $C$ and $D$ evaluating to
\ba \label{CDmatching}
C &=& \frac54 \, q H_0 \Phi_i - e^{-\cx_\infty^- + \cy_\infty^-}
\(\frac{2\pi}{\kappa^2} - T_i \)
- e^{-\cx_\infty^+ + \cy_\infty^+} T_s \nn\\
D &=& \frac54 e^{-\frac32 (\cx_\infty^- - \cy_\infty^-)
+ \cz_\infty^-} V_1\,.
\ea
In the regime of interest, with $\kappa^2 T_i/2\pi \ll 1$ and $\kappa^2 T_s /2\pi \ll 1$ and $V_1 > 0$, both $C$ and $D$ are positive. The unboundedness from below of $V_\EF$ as $\varphi_4 \to \infty$ is only an apparent problem, since the domain of validity of the semiclassical calculations performed here relies on the bulk weak-coupling condition, $e^{\varphi_4} \ll 1$.
\subsection*{4D dynamics}
The classical field equations obtained using this 4D effective action consist of the following scalar equations,
\ba
\frac{2}{\kappa_{4}^2} \, \Box\varphi_4 &=&
-2C \, e^{2\varphi_4} + D \, e^{\zeta \chi + \varphi_4} \nn\\
2{f^2} \, \Box\chi &=& \zeta D \, e^{\zeta \chi + \varphi_4} \,,
\ea
and the trace-reversed Einstein equations
\be
{\bf R}_{\mu\nu} + 2 \, \pd_\mu \varphi_4 \pd_\nu\varphi_4
+ {2\kappa_{4}^2}{f^2} \, \pd_\mu \chi \pd_\nu \chi
+\kappa_{4}^2 V_{\EF} \, {\bf g}_{\mu\nu} = 0 \,.
\ee
This system admits scaling solutions, with all functions varying as a power of time,
\ba \label{4Dpowerlaw}
{\bf g}_{\mu\nu} &=& (H_0\tau)^{2+2c}
\( \eta_{\mu\nu} \, \exd x^\mu \exd x^\nu \) \nn\\
e^{\varphi_4} &=& e^{\varphi_{40}} (H_0\tau)^{-2-c} \nn\\
e^{\zeta\chi} &=& e^{\zeta \chi_0} (H_0\tau)^{\zeta\chi_1}
= e^{\zeta \chi_0} (H_0\tau)^{-2-c}\,.
\ea
Notice that the consistency of the field equations with the power-law time-dependence requires $\zeta \chi_1=-2-c$, just like in six dimensions ({\em c.f.} eq.~\pref{ceqn}). With this, the scalar equations of motion are
\ba
\frac{2}{\kappa_{4}^2} \,H_0^2 (2c^2+5c+2) &=&
-2 C \, e^{2 \varphi_{40}} + D \, e^{\zeta \chi_0 + \varphi_{40}} \nn\\
-2 (2c+1) {H_0^2 \, \chi_1}{f^2} &=&
\zeta D \, e^{\zeta \chi_0 + \varphi_{40}} \,,
\ea
and the Einstein equations become
\ba
\frac{H_0^2}{\kappa_{4}^2} \( 2c^2+5c+5 \)
+ {2H_0^2 \, \chi_1^2}{f^2}
&=& - C\, e^{2 \varphi_{40}}
+D \, e^{\zeta \chi_0 + \varphi_{40}} \nn\\
\frac{H_0^2}{\kappa_{4}^2} (2c^2+3c+1) &=&
-C\, e^{2 \varphi_{40}}
+D\, e^{\zeta \chi_0 + \varphi_{40}} \,.
\ea
These four equations are to be solved for the three variables $\chi_0$, $\chi_1$ and $\varphi_{40}$ appearing in the power-law ansatz, eqs.~\pref{4Dpowerlaw}. This is not an over-determined problem because the four equations are not independent (a linear combination of the two scalar equations gives the second Einstein equation).
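The dependence among the four equations can be checked directly: using $\kappa_4^2 f^2 \chi_1 = \zeta$, eq.~\pref{eq:chi1soln}, half the first scalar equation plus $1/(2\zeta)$ times the second reproduces the second Einstein equation identically in the remaining parameters. A numerical sketch of this check with randomly chosen sample values (variable names are placeholders; `k2` stands for $\kappa_4^2$ and `E1`, `E2` for the exponentials $e^{\zeta\chi_0+\varphi_{40}}$, $e^{2\varphi_{40}}$):

```python
import random

random.seed(0)
# r1, r2 are the residuals (LHS - RHS) of the two scalar equations and
# rE that of the second Einstein equation.  With chi1 = zeta/(kappa4^2 f^2)
# the combination r1/2 + r2/(2 zeta) equals rE for arbitrary parameters.
for _ in range(5):
    c, H0, z, k2, f2, C, D, E1, E2 = (random.uniform(0.2, 2.0)
                                      for _ in range(9))
    chi1 = z / (k2 * f2)
    r1 = (2.0/k2) * H0**2 * (2*c**2 + 5*c + 2) - (-2*C*E2 + D*E1)
    r2 = -2.0 * (2*c + 1) * H0**2 * chi1 * f2 - z*D*E1
    rE = (H0**2/k2) * (2*c**2 + 3*c + 1) - (-C*E2 + D*E1)
    assert abs(0.5*r1 + r2/(2.0*z) - rE) < 1e-9
```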
Subtracting the two Einstein equations yields
\be
{\chi_1^2}{f^2} = - \frac{2+c}{\kappa_{4}^2}
= \frac{\zeta\chi_1}{\kappa_{4}^2} \,,
\ee
and so discarding the trivial solution, $\chi_1=0$, we find
\be \label{eq:chi1soln}
{\chi_1} = \frac{\zeta}{\kappa_{4}^2 f^2} \,.
\ee
Next, dividing the two scalar equations gives the relation
\be \label{solntoscalareqn}
-\frac{2c^2+5c+2}{(2c + 1) \kappa_4^2 f^2 \chi_1} =
-\frac{c+2}{\zeta} = \frac{1}{\zeta} \(1
- \frac{2C}{D} \, e^{\varphi_{40} - \zeta \chi_0 } \) \,,
\ee
where the first equality uses eq.~\pref{eq:chi1soln}. Combining eqs.~\pref{ceqn}, \pref{eq:chi1soln} and \pref{solntoscalareqn} finally gives
\be \label{chivsCD}
\frac{\zeta^2}{\kappa_{4}^2 f^2} = 1 - \frac{2C}{D} \, e^{\varphi_{40}
- \zeta \chi_0 } \,.
\ee
This last equation shows that the scaling ansatz is only consistent with the field equations if $\chi_0$ is chosen appropriately, in agreement with what was found by matching between branes in the 6D perspective. It also shows, in particular, that $\zeta \chi_1$ can be dialed to be small and positive by suitably adjusting the scale-invariant (and time-independent) quantity $\varphi_4 - \zeta \chi$ so that the right-hand side of eq.~\pref{chivsCD} is sufficiently small and positive. This is not inconsistent with the microscopic choices made for the branes because the ratio $C/D$ is positive.
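The first equality in eq.~\pref{solntoscalareqn} is just the factorization $2c^2+5c+2 = (2c+1)(c+2)$ combined with $\kappa_4^2 f^2 \chi_1 = \zeta$. A minimal numerical sketch (sample values only; `k2f2` stands for $\kappa_4^2 f^2$):

```python
# Check 2c^2+5c+2 == (2c+1)(c+2) and the resulting first equality of
# eq. (solntoscalareqn), for arbitrary sample values of c, zeta, kappa4^2 f^2.
z, k2f2 = 0.3, 1.9
chi1 = z / k2f2                  # eq. (eq:chi1soln)
for c in [-2.3, -2.05, 0.7]:
    assert abs(2*c**2 + 5*c + 2 - (2*c + 1)*(c + 2)) < 1e-12
    lhs = -(2*c**2 + 5*c + 2) / ((2*c + 1) * k2f2 * chi1)
    assert abs(lhs - (-(c + 2) / z)) < 1e-12
```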
The upshot is this: the above relations precisely reproduce the counting of parameters and the properties of the solutions of the full 6D theory, once the low-energy parameters $C$, $D$, $\kappa_4$ and $f$ are traded for the underlying brane properties, using eqs.~\pref{kappaJFandfmatching} and \pref{CDmatching}.
\subsection{The 4D inflationary model}
The 4D effective description also gives more intuition about the nature of the inflationary model, and about why the scalar evolution can be made slow.
Notice that the action, eq.~\pref{4Deffaction}, shows that the scalar target space is flat in the Einstein frame. Consequently, the slow-roll parameters are controlled completely by the Einstein-frame potential, eq.~\pref{VEFmatching}. In particular,
\ba
\varepsilon_\varphi &:=& \left(
\frac{1}{V_\EF} \; \frac{\partial V_\EF}{\partial
\varphi_4} \right)^2 = \left(
\frac{ - 2 + (D/C) e^{\zeta\chi
- \varphi_4}}{ - 1 + (D/C) e^{\zeta\chi
- \varphi_4}} \right)^2 \nn\\
\varepsilon_\chi &:=& \frac{1}{\kappa_4^2 f^2}
\left( \frac{1}{V_\EF} \; \frac{\partial V_\EF}{\partial \chi}
\right)^2 = \frac{\zeta^2}{\kappa_4^2 f^2} \left(
\frac{ (D/C) e^{\zeta\chi - \varphi_4} }{
- 1 + (D/C) e^{\zeta\chi - \varphi_4}}
\right)^2 \,.
\ea
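These expressions follow from differentiating $V_\EF = -C e^{2\varphi_4} + D\, e^{\zeta\chi+\varphi_4}$ and factoring out $C e^{2\varphi_4}$. A sketch that checks them numerically at arbitrary sample field values (variable names are placeholders; `u` denotes $(D/C)e^{\zeta\chi-\varphi_4}$ and `k2f2` stands for $\kappa_4^2 f^2$):

```python
import math

# Compare the quoted slow-roll parameters with direct derivatives of V_EF.
C, D, z, k2f2 = 1.3, 2.1, 0.3, 1.7
for phi, chi in [(0.1, 2.0), (-0.4, 5.0)]:
    V   = -C*math.exp(2*phi) + D*math.exp(z*chi + phi)
    dVp = -2*C*math.exp(2*phi) + D*math.exp(z*chi + phi)   # dV/dphi4
    dVx = z*D*math.exp(z*chi + phi)                        # dV/dchi
    u = (D/C) * math.exp(z*chi - phi)
    eps_phi = (dVp / V)**2
    eps_chi = (dVx / V)**2 / k2f2
    assert abs(eps_phi - ((-2 + u)/(-1 + u))**2) < 1e-9
    assert abs(eps_chi - (z**2/k2f2) * (u/(-1 + u))**2) < 1e-9
```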
This shows that there are two conditions required for $V_\EF$ to have sufficiently small first derivatives for slow-roll inflation. First, $\varepsilon_\chi \ll 1$ requires $\zeta^2 \ll \kappa_4^2 f^2$, in agreement with the 6D condition $\zeta \chi_1 \ll 1$ once eq.~\pref{eq:chi1soln} is used. Second, $\varepsilon_\varphi \ll 1$ is generically {\em not} true, but can be made to be true through a judicious choice of initial conditions for $\zeta \chi - \varphi_4$: $(D/C) \, e^{\zeta \chi - \varphi_4} = 2 + \cO(\zeta \chi_1)$, in agreement with eq.~\pref{chivsCD}. Notice that in this case $\varepsilon_\chi \simeq \cO[ \zeta \chi_1 ]$ while $\varepsilon_\varphi \simeq \cO[(\zeta \chi_1)^2] \ll \varepsilon_\chi$.
Next, consider the second derivatives of $V_\EF$:
\ba
\eta_{\varphi\varphi} &:=& \left(
\frac{1}{V_\EF} \; \frac{\partial^2 V_\EF}{\partial
\varphi_4^2} \right) =
\frac{ - 4 + (D/C) e^{\zeta\chi
- \varphi_4}}{ - 1 + (D/C) e^{\zeta\chi
- \varphi_4}} \simeq -2 + \cO(\zeta \chi_1) \nn\\
\eta_{\varphi\chi} &:=& \frac{1}{\kappa_4 f}
\left( \frac{1}{V_\EF} \; \frac{\partial^2 V_\EF}{
\partial \varphi_4 \partial \chi}
\right) = \frac{\zeta}{\kappa_4 f} \left(
\frac{ (D/C) e^{\zeta\chi - \varphi_4} }{
- 1 + (D/C) e^{\zeta\chi - \varphi_4}} \right)
\simeq \frac{2\,\zeta}{\kappa_4 f} + \cO(\zeta \chi_1)\\
\eta_{\chi\chi} &:=& \frac{1}{\kappa_4^2 f^2}
\left( \frac{1}{V_\EF} \; \frac{\partial^2 V_\EF}{
\partial \chi^2}
\right) = \frac{\zeta^2}{\kappa_4^2 f^2} \left(
\frac{ (D/C) e^{\zeta\chi - \varphi_4} }{
- 1 + (D/C) e^{\zeta\chi - \varphi_4}} \right)
\simeq \frac{2\,\zeta^2}{\kappa_4^2 f^2} + \cO(\zeta \chi_1) \,,\nn
\ea
where the last, approximate, equality in each line uses eq.~\pref{chivsCD}.
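The limiting values quoted in each line can be confirmed by evaluating the second derivatives of $V_\EF$ exactly at the point $(D/C)e^{\zeta\chi-\varphi_4}=2$. A minimal numerical sketch (arbitrary sample values; `k4f` stands for $\kappa_4 f$):

```python
import math

# Second derivatives of V_EF = -C e^{2 phi} + D e^{z chi + phi}, evaluated
# at the near-de Sitter point (D/C) e^{z chi - phi} = 2.
C, D, z, k4f = 1.0, 1.0, 0.3, 1.5
phi = 0.0
chi = math.log(2.0 * C / D) / z          # enforces u = 2 at phi = 0
V   = -C*math.exp(2*phi) + D*math.exp(z*chi + phi)
dpp = -4*C*math.exp(2*phi) + D*math.exp(z*chi + phi)      # d2V/dphi2
dpx = z * D * math.exp(z*chi + phi)                       # d2V/dphi dchi
dxx = z**2 * D * math.exp(z*chi + phi)                    # d2V/dchi2
assert abs(dpp / V - (-2.0)) < 1e-12                      # eta_phiphi -> -2
assert abs(dpx / (V * k4f) - 2.0*z/k4f) < 1e-12           # eta_phichi
assert abs(dxx / (V * k4f**2) - 2.0*z**2/k4f**2) < 1e-12  # eta_chichi
```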
\FIGURE[ht]{
\epsfig{file=potentialexample.eps,angle=0,width=0.45\hsize}
\caption{Sample potential evaluated for $C=D=1$ and $\zeta=0.3$. The red line denotes the path taken by the scaling solutions.
} \label{fig:potential} }
Notice that $\eta_{\varphi\varphi}$ is not itself small, even when $\zeta \ll \kappa_4 f$ and eq.~\pref{chivsCD} is satisfied. However, in the field-space direction defined by $\vec n := \vec\varepsilon/|\vec \varepsilon|$ we have $n_\chi \simeq \cO(1)$ and $n_\varphi \simeq \cO(\zeta \chi_1)$ and so
\be
\eta_{ab} n^a n^b = \cO( \zeta \chi_1) = \cO \left(
\frac{ \zeta^2}{ \kappa_4^2 f^2} \right) \ll 1 \,.
\ee
Because $\eta_{\varphi\varphi}$ is negative and not small, slow roll is achieved only by choosing initial conditions to lie sufficiently close to the top of a ridge, with initial velocities chosen to be roughly parallel to the ridge (see Fig.~\ref{fig:potential}). For single-field 4D models such an adjustment is unstable against de Sitter fluctuations of the inflaton field, and although more difficult to compute in the higher-dimensional theory, the low-energy 4D potential suggests that similar considerations are likely also to be true here.
\section{Conclusions}
In a nutshell, the previous sections describe a family of --- previously known \cite{scaling solutions} --- exact, explicit, time-dependent solutions to the field equations of 6D supergravity in the presence of two space-filling, positive-tension source branes. The solutions describe both the cosmological evolution of the on-brane geometry and the change with time of the extra-dimensional geometry transverse to the branes. These solutions have explicitly compact extra dimensions, with all but one modulus stabilized using an explicit flux-stabilization mechanism. The time evolution describes the response of the one remaining would-be modulus of the bulk geometry to the back-reaction of the source branes.
\subsection{Bugs and features}
The new feature added in this paper is to identify a choice for the dynamics of a brane-localized scalar field whose evolution is consistent with the bulk evolution, and so can be interpreted as the underlying dynamics that gives rise to the bulk evolution. In order to find this choice for the brane physics we set up and solve the codimension-two matching problem for time-dependent brane geometries, extending earlier analyses \cite{Cod2Matching, BBvN, susybranes} of these matching conditions for systems with maximally symmetric on-brane geometries.
We also find the 4D theory that describes this system in the limit of slow evolution, where a low-energy effective field theory should apply. The low-energy theory turns out to be a simple scalar-tensor system involving two scalar fields in 4 dimensions: one corresponding to the brane-localized mode and one corresponding to the would-be flat direction of the bulk geometry. We verify that the 4D system has time-dependent solutions that reproduce those of the full 6D equations (as they must).
In particular, we identify a region of parameter space that describes an inflationary regime, including a limit for which the on-brane geometry is de Sitter. (The de Sitter solution is not a new one \cite{6DdS}, and evades the various no-go theorems \cite{dSnogo} because the near-brane behavior of the bulk fields dictated by the brane-bulk matching does not satisfy a smoothness assumption --- `compactness' --- that these theorems make.) For parameters near the de Sitter limit, the evolution is accelerated and takes a power-law slow-roll form, $a(t) \propto t^p$ with $p > 1$. (The de Sitter solution is obtained in the limit $p \to \infty$.) From the point of view of the low-energy 4D theory, the de Sitter solution corresponds to sitting at the top of a ridge, and the scaling solutions describe motion near to and roughly parallel with this ridge. Experience with the 4D potential suggests that the initial conditions required to obtain inflation in this model are likely to require careful tuning.
{}From the 4D perspective, the inflationary scenario resembles old models of extended inflation \cite{ExtInf}, for which accelerated power-law expansion is found to arise when Brans-Dicke theory is coupled to matter having an equation of state $w = -1$. Having a Brans-Dicke connection is perhaps not too surprising, despite earlier difficulties finding extended inflation within a higher-dimensional context. Part of what is new here relative to early work is the scale invariance of the bulk supergravity, which is not present, for example, in non-supersymmetric 6D constructions \cite{ExtInfKK}. Another new feature is brane-localized matter, which was not present in early searches within string theory \cite{ExtInfStr}. Brans-Dicke-like theories arise fairly generically in the low-energy limit of the 6D supergravity of interest here because back-reaction tends to ensure that the bulk dilaton, $\varphi_4$, couples to brane-localized matter in this way \cite{BulkAxions, susybranes}.
For cosmological applications it is interesting that the 4D limit of the higher-dimensional system is not {\em exactly} a Brans-Dicke theory coupled to matter. It differs by having a scalar potential (rather than a matter cosmological constant), that is calculable from the properties of the underlying branes. It also differs by being `quasi-Brans Dicke', in that the scalar-matter coupling tends to itself depend on the Brans-Dicke field, $\varphi_4$. Both of these features are potentially attractive for applications because successful cosmology usually requires the Brans-Dicke coupling to be relatively large during inflation compared with the largest values allowed by present-day solar-system constraints \cite{ExtInfProb}. Having both field-dependent couplings and a scalar potential can allow these properties to be reconciled, by having the potential drive the scalar at late times to a value for which the coupling is small. (See, for instance, \cite{6Dquint} for a sample cosmology which uses this mechanism in a related example.)
A noteworthy feature of the inflationary geometries is that the extra dimensions are not static (although they become static in the strict de Sitter limit). Instead they expand with $r(t) \propto \sqrt t$, while the scale factor of the on-brane directions expands even faster, $a(t) \propto t^p$ with $p > 1$. As a result the Kaluza-Klein mass scale shrinks, as does the higher-dimensional gravity mass scale (measured in 4D Planck units), during the inflationary expansion.
If embedded into a full inflationary picture, including the physics of the late-epoch Hot Big Bang, such an inflationary scenario can have several attractive properties. First, the relative expansion rates of the various dimensions might ultimately explain why the four on-brane dimensions are much larger than the others. It might also explain why two internal dimensions might be bigger than any others, if it were embedded into a 10-dimensional geometry with the `other' 4 dimensions stabilized.
A second attractive feature is the disconnect that this scenario offers between the gravity scale during inflation and the gravity scale in the present-day universe.\footnote{In this our model is similar in spirit to ref.~\cite{VolInf}.} Inflationary models such as these can allow the current gravity scale to be low (in the multi-TeV range in extreme cases), and yet remain consistent with the observational successes of generating primordial fluctuations at much higher scales. Inflationary models like this might also point to a way out of many of the usual cosmological problems faced by low gravity-scale models \cite{ADD, MSLED}, such as a potentially dangerous oversupply of primordial KK modes.
\subsection{Outstanding issues}
The model presented here represents only the first steps down the road towards a realistic inflationary model along these lines, however, with a number of issues remaining to be addressed. Perhaps the most important of these are related to stability, to ending inflation, and to the transition to the later Hot Big Bang cosmology. Besides identifying the Standard Model sector and how it becomes reheated, it is also a challenge to identify why the cosmic expansion ends and why the present-day universe remains four-dimensional and yet is so close to flat.
What is intriguing from this point of view is the great promise that the same 6D supergravity used here also has for addressing some of these late-universe issues \cite{TNCC}, especially for the effective cosmological constant of the present-day epoch. In particular, these 6D theories generically lead to scalar-tensor theories at very low energies,\footnote{Remarkably, the same mechanism that can make the vacuum energy naturally small in 6D supergravity also protects this scalar's mass to be very light \cite{6Dquint, TNCC, susybranes}.} and so predict a quintessence-like Dark Energy \cite{6Dquint}. Successfully grafting the inflationary scenario described here onto this late-time cosmology remains unfinished, yet it might provide a natural theory of initial conditions for the quintessence field, with these arising as a consequence of an earlier inflationary epoch (see \cite{QInf} for some other approaches to this problem, and \cite{DERev} for a more comprehensive review).
Other outstanding issues ask whether (and if so, how) the extra dimensions help with the problems of many 4D inflationary models: initial-condition problems, fine-tuning and naturalness issues, and so on. Since some of these questions involve `Planck slop' coming from the UV completion \cite{SIreviews}, a helpful step in this direction might be to identify a stringy provenance for the 6D gauged chiral supergravity studied here \cite{CP}.
Another interesting direction asks about the existence and properties of cosmological solutions that explore the properties of the extra dimensions more vigorously than is done by the model considered here. That is, although our model here solves the full higher-dimensional field equations, it is only the volume modulus of the extra-dimensional geometry that evolves with time, with all of the other KK modes not changing. Although our calculation shows that this is consistent with the full equations of motion, even for Hubble scales larger than the KK scale, it is probably not representative of the general case when $H > m_\KK$. More generally one expects other KK modes to become excited by the evolution, allowing a richer and more complex evolution.
There remains much to do.
\section*{Acknowledgements}
We thank Allan Bayntun and Fernando Quevedo for helpful discussions. Our research was supported in part by funds from the Natural Sciences and Engineering Research Council (NSERC) of Canada. Research at the Perimeter Institute is supported in part by the Government of Canada through Industry Canada, and by the Province of Ontario through the Ministry of Research and Innovation (MRI).
\section{Introduction: Cardy's Conjecture}
Except in the rare cases (e.g.\ \cite{2}) of integrable or otherwise solvable models, it is generally impossible to derive by hand the spectrum of a quantum theory from only the Hamiltonian. Perturbation theory is therefore a standard technique for the professional theorist, but despite its many successes in describing phenomenological aspects of quantum theory, it is plagued by challenges\cite{3}\cite{4}. For the quantum field theorist, there are challenges aplenty; the first and most glaring issue is the short-distance divergences of Feynman diagrams when using the interaction picture to compute observables or scattering amplitudes. To amend this particular problem, one defines a `renormalized' version of one's quantum theory, which introduces a mass scale $\mu$ that serves as an upper bound for the momentum-space resolution of the theory - or if one likes, a coarse-graining of spacetime. Since the theory is now manifestly insensitive to arbitrarily short-distance events, the operators and parameters of the original theory must be modified in a $\mu$-dependent way, both to avoid the divergences and to make observable quantities independent of the choice of $\mu$. Since the parameters of the renormalized theory now depend on this scale (we say they ``run'' with $\mu$), there is a potential problem with the application of perturbation theory should these parameters ever become large for a given value of $\mu$. This happens in Quantum Electrodynamics (QED): when renormalized at a very high scale, the electric charge of the fermions becomes arbitrarily large. Since this limit is exactly where the renormalized theory meets the original QFT, such theories are generally considered to be unphysical, unless they are embedded into a new theory at some intermediate scale so as to avoid this issue (as is QED in Grand Unified Theories). 
By contrast, we have theories like Quantum Chromodynamics (QCD) where the running of the coupling is opposite - the beta function is negative and the coupling becomes very large when renormalized at a low energy scale\cite{5}\cite{6}. QCD renormalized at scales much higher than a few hundred MeV is considered perturbative. However, perturbative QCD has failed to provide a description of the theory's spectrum which is consistent with observations of the long-distance physics\cite{7}. Given that there are many technical challenges with perturbation theory (Haag's theorem\cite{3}, non-convergence of power series expansions\cite{4}, general difficulties with calculating to high order), it should not be surprising that it generally fails to predict the spectrum of QFTs with negative beta functions.\\
Nevertheless, perturbation theory has taught us much about QFTs. The necessary introduction of the renormalized perturbation theory opened the door for the study of the Renormalization Group (RG)\cite{8}, from which we have learned much about non-perturbative physics. One such piece of knowledge is the $c$-theorem: in 2-dimensional conformal field theories (CFTs), there is a number $c[\mu]$ that enters as a proportionality constant in front of the anomalous divergence of the scale current. It has been established that $c$ decreases monotonically\cite{9} as one flows to the IR ($\mu$ is decreased), regardless of the microscopic details of the theory. This establishes that (at least for 2-dimensional CFTs) a theory renormalized in the UV has more degrees of freedom than one renormalized in the IR. John Cardy conjectured that this is true for all field theories, and proposed a candidate for $c$ in higher-dimensional, or non-conformal, QFTs\cite{10}. A similar result has been proven in 4 dimensions, called the $a$-theorem\cite{11}. As with the $c$-theorem, $a$ is a multiplicative constant of the anomalous divergence of the scale current in a CFT. It is not the only anomalous term, but it is the component that survives integration over spacetime of the anomaly. In this context, one often sees the following ``equation'' in the literature
\begin{equation}
\int d^D x \braket{T^\mu_\mu}\footnote{Here we are careful to mention that the use of $\mu$ is to indicate a spacetime index, not the scale at which the theory has been renormalized at. When we wish to make RG scale dependence explicit, we will use square brackets, e.g. $c[\mu]$} \sim \text{anomaly}
\end{equation}
In all relevant classical field theories it is the case that the divergence of the scale current is equal to the trace of the stress-energy tensor. Cardy's conjecture, simply put, is that the left-hand side of this equation is a monotonically increasing function of $\mu$. In two and four dimensions, the $c$- and $a$-theorems, respectively, are examples of Cardy's conjecture for theories which exhibit a pattern of spontaneous conformal symmetry breaking. To endow an interacting field theory with conformal invariance, it is generally necessary to couple it to the metric tensor. At least in the case of the proof of the $a$-theorem, this is done through a massless mediator known as the dilaton. The dilaton-matter and dilaton-metric interactions are tuned so that the trace of the total stress-energy tensor vanishes identically. Additionally, the dilaton can be coupled arbitrarily weakly to the matter theory, allowing one to study the trace anomaly perturbatively as a consequence of the RG flow between conformal fixed points\cite{11}. This is possible to do because the IR effective theory of the dilaton and metric tensor is highly constrained by the assumed conformal (or at least Weyl) symmetry, and the anomaly coefficient appears in the four-derivative terms of the effective action. This is effectively how the $a$- and $c$-theorems were established. In three dimensions, any attempt to replicate the proof of the $a$-theorem is doomed, since there are no conformal- (Weyl-)invariant terms constructed of the Riemann tensor or its derivatives with which to build an IR effective theory of the dilaton and metric \cite{12}. It is thus often claimed that there is no trace anomaly in three-dimensional quantum field theories. We wish to emphasize now that this is meant to be a statement about CFTs and their pattern of symmetry breaking in three dimensions, not about three-dimensional field theories in general. The present work does \textit{not} consider this pattern of symmetry breaking. 
We will focus on the physically relevant case of non-conformal theories, and whether conformal/scale symmetry is acquired in the UV is of no consequence to our results. Therefore, we do not need to assume any particular properties of the IR effective actions, and the methods and language used here may be quite different from those of most literature on Cardy's conjecture and CFTs. The use of the word `anomaly' to describe this phenomenon then takes on a different meaning: instead of a violation of some classical conservation law (in general there is no conserved scale current), the trace anomaly serves as an obstruction to using the stress-energy tensor as a generator of scale transformations in the quantum theory.\\
The layout of this paper is as follows. In Section 2 we will make the ``equation'' above clearer. In particular, we will argue that what belongs in those brackets (what has been called the divergence of the scale current) is not the trace of the stress-energy tensor, but rather a different tensor $\theta^\mu_\mu$ which only happens to equal $T^\mu_\mu$ when quantum mechanics is turned off. This argument is phrased entirely in the context of RG invariance and makes obvious what role the anomalous scale dimensions of operators should play in the discussion.\\
In Section 3, once Cardy's conjecture is translated into a statement about anomalous scaling, we will briefly discuss a criterion for the existence of solutions to the field equations that makes direct use of these anomalous dimensions. We observe that for IR strongly coupled theories, the anomalous dimensions generically behave in such a way as to allow solutions to become manifest in the field equations of the renormalized theory which are not present in the classical theory - something which the present author has conjectured might happen\cite{13}. The conditions under which such solutions become manifest are proposed.\\
\section{Scale Transformations and RG Invariance}
\subsection{Anomalous Dimension}
When a theory is defined at an arbitrary renormalization scale $\mu$, it no longer obeys the scaling relations expected from classical field theory. For example, consider the correlation function of renormalized fields $\hat{\phi}[\mu]$ with couplings $g_i[\mu]$
\begin{equation}
G^{(3)}(\mu;g_i, x_1,x_2,x_3) = \braket{\hat{\phi}(x_1)\hat{\phi}(x_2)\hat{\phi}(x_3)}
\end{equation}
\noindent It is useful to imagine the result of re-scaling these coordinates by a proportionality constant $\lambda$. From classical physics, we have the so-called `engineering dimension' $\Delta$ of the field defined by
\begin{equation}
\phi(\lambda x) = \lambda^{-\Delta}\phi(x)
\end{equation}
\noindent and naively, one would expect the dimension of $G^{(3)}$ to be $-3\Delta$. This is not the case, as a scale transformation of this form must be accompanied by a change of renormalization scale $\mu \rightarrow \mu^\prime(\lambda)$ as well. The correct result, which is consistent with demanding invariance under change of renormalization scale, is
\begin{equation}
G^{(3)}(\mu;g_i,\lambda x_1,\lambda x_2,\lambda x_3) = \lambda^{-3(\Delta + \gamma(g_i))}G^{(3)}(\mu^\prime;g_i, x_1,x_2,x_3)
\end{equation}
\noindent where $\gamma$ is the anomalous dimension of the field, and on the right-hand side of this equation, it is understood that $g_i$ is being renormalized at the scale $\mu^\prime$. In a perturbative theory, $\gamma$ is usually a polynomial function of the couplings that starts at quadratic order, and is thus small. However, the above equation should be interpreted as an exact statement about the relationship between scale transformations and RG flow, irrespective of the theorist's ability to actually calculate such quantities. In a non-perturbative regime, the anomalous dimensions could potentially be $\mathcal{O}(1)$ or greater.\\
The uniqueness of the scale $\mu^\prime$ is also an issue. We suspect that as one increases $\lambda$ and probes the large-distance properties of the UV renormalized theory, the corresponding choice of $\mu^\prime$ should decrease, reflecting sensitivity to the IR theory. If $\mu^\prime$ is then a monotonically decreasing function of $\lambda$, we should then also suspect that the anomalous dimension does not take the same value twice at different scales, ensuring the uniqueness of the formula. These expectations, as we shall see, are at the heart of Cardy's conjecture and are tied to expectations that RG flows cannot be cyclic\cite{9}.\\
It is also imperative that we discuss the anomalous dimension of composite field operators, like $\hat{\phi}^2$. The anomalous dimension of this operator is not equal to twice that of the fundamental field $\hat{\phi}$. It is easy enough to check that the divergence structures of $\braket{\hat{\phi}^2(x)}$ and $\braket{\hat{\phi}(x)\hat{\phi}(y)}$ are different, and so their renormalized counterparts must subtract divergences differently and therefore scale differently. The exact scaling dimension of operators at separated points will still be only the sum of the scaling dimensions of each operator; this is a consequence of the cluster decomposition property\cite{15}. This rule however does not apply to composite operators in interacting field theories: one might consider this a consequence of the operator product expansion.\\
\subsection{Scale Currents}
In a classical field theory there is a straightforward, easy way to calculate the effects of a scale transformation on solutions to the field equations. One must only identify the right hand side of
\begin{equation}
\phi(x^\mu + \delta x^\mu) = \phi(x^\mu) + \delta\phi(x^\mu)
\end{equation}
\noindent which with the identification $\delta x^\mu = (\lambda-1)x^\mu$ is exactly the scale transformation mentioned previously. With $\lambda$ expanded in a power series around the value of 1, a scale current is obtained by variation of the action in the usual way prescribed by Noether's theorem. The result is
\begin{equation}
j_\mu = x_\nu \theta^{\mu\nu}
\end{equation}
\noindent for some symmetric tensor which is conserved ($\partial_\mu \theta^{\mu\nu} = 0$) if the action is manifestly translation invariant. In the case of gauge theories, it is possible to choose $\delta \phi$ ($\phi$ is meant as a stand-in for any field) in such a way that this quantity is gauge invariant. The divergence of this scale current is therefore the trace
\begin{equation}
\partial_\mu j^\mu = \theta^\mu_\mu
\end{equation}
In a classically scale-invariant theory, this vanishes. However, even when the theory is not classically scale invariant, this is a useful quantity to know. By the equations of motion, the spacetime integral of $\theta^\mu_\mu$ can be found to be proportional to any mass terms that are implicit in the Lagrangian. The scale current is therefore a useful tool for analyzing the breaking of scale invariance (or a larger conformal invariance), be it spontaneous or explicit. Given a Lagrangian composed of ``operators'' $\mathcal{O}_i(x)$ and their engineering dimensions $\Delta_i$:
\begin{equation}
\mathcal{L} = \sum_i \mathcal{O}_i(x)
\end{equation}
\noindent it is quite easy to compute the divergence of this current in $D=d+1$ dimensions. By definition of the engineering dimensions, it is
\begin{equation}
\theta^\mu_\mu(x) = \sum_i (\Delta_i-D)\mathcal{O}_i(x)
\end{equation}
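As a quick worked illustration of this formula (our own example, using only the definitions above), take a free massive scalar in $D$ spacetime dimensions, $\mathcal{L} = \frac{1}{2}\partial_\mu\phi\,\partial^\mu\phi - \frac{1}{2}m^2\phi^2$. The field carries engineering dimension $(D-2)/2$ and each derivative adds one unit, so the kinetic operator has $\Delta = D$ and drops out of the sum, while the mass operator has $\Delta = D-2$, leaving

```latex
\begin{equation*}
\theta^\mu_\mu \;=\; \big[(D-2)-D\big]\left(-\tfrac{1}{2}m^2\phi^2\right) \;=\; m^2\phi^2 ,
\end{equation*}
```

whose spacetime integral is proportional to the mass term, in agreement with the statement above.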
In many reasonable classical field theories, it happens to be the case that $\theta^{\mu\nu} = T^{\mu\nu}$, the stress-energy tensor. As in the case of gravitational physics, $T^{\mu\nu}$ can be computed by variation of the action with respect to the metric tensor. In General Relativity the metric tensor is dynamical, so $T^{\mu\nu}$ emerges in its equations of motion, but in non-gravitational field theories the variation with respect to the metric still computes the same quantity. For a static solution to the field equations whose stresses vanish at spatial infinity, the integral of $T^\mu_\mu$ is exactly the energy/rest-mass of the solution\cite{16}, in agreement with the interpretation of $\theta^\mu_\mu$. So long as there is no explicit coordinate dependence in the Lagrangian, these two tensors will be the same, classically.\\
All of this changes once quantum mechanics is turned on. Scale invariance (if it is present) is broken by the RG flow, and can only be effectively restored if the flow terminates at a fixed point\cite{17}. This can happen, but exact scale invariance is broken and $\braket{\hat{\theta}^\mu_\mu}$ is anomalous. It is possible that this `trace anomaly' is largely responsible for the existence of \textit{all} mass scales in nature. In particular, Non-Abelian Yang-Mills theory with no matter fields is classically scale invariant, and the excitations are massless. In the quantum theory however, a mass gap is conjectured\cite{7} and the theory dynamically acquires a scale which was not present classically.\\
Furthermore, one should not expect that $\hat{\theta}^{\mu\nu} = \hat{T}^{\mu\nu}$ remains generically true once quantum mechanics is involved. One reason is quite simple: variation with respect to the metric is not a quantum mechanical operation. If the theory is not a quantum theory of gravity, there is no reason to suspect that the metric dependence of a quantum action is sensitive to the quantum nature of the matter fields. For example, if
\begin{equation}
S_\text{int} = -\int d^D x \sqrt{-g}\hat{\phi}^4(x)
\end{equation}
\noindent then $\hat{T}^\mu_\mu$ necessarily contains a term $D \hat{\phi}^4(x)$. This is completely insensitive to the fact that the scaling dimension of this operator is dependent on what other operators are included in the Lagrangian as well as the scale $\mu$ that we have renormalized at. The derivation of $\hat{\theta}^\mu_\mu$ naturally leads to the inclusion of a term $-\left(\gamma_{\phi^4}-D\right)\hat{\phi}^4(x)$ which contains both information about the renormalization scale and the rest of the dynamics of the theory (here we have worked in a setting where $\Delta = 0$). If that is not enough to convince one that these tensors are different, consider the principle of RG invariance:
\subsection{RG Invariance and Cardy's Conjecture}
A principal tenet of the renormalization group procedure is the invariance of physical quantities with respect to $\mu$. The foremost example of a physical quantity, which is fundamental to the construction of any theory, is the invariant eigenvalue of the squared momentum operator $\hat{P}^2$. For a stationary state identified with a quantum of the theory, this is just the physical mass $M^2$. Consider a stationary state $\ket{\Psi}$, whose wavefunction vanishes sufficiently fast at spatial infinity. Manifest translation invariance implies, e.g. $\partial_\mu \hat{T}^{\mu i} =0$ or\footnote{This argument was borrowed from \cite{16}}
\begin{equation}
\int d^d x x_i \braket{\Psi|\partial_\mu \hat{T}^{\mu i}|\Psi} = \int d^d x x_i \partial_j\braket{\Psi| \hat{T}^{j i}|\Psi} = -\int d^d x \braket{\Psi|\hat{T}^i_i|\Psi} = 0
\end{equation}
Therefore, since the ``00'' component of the stress-energy tensor is the Hamiltonian density, the spatial integral of $\braket{\Psi| \hat{T}^{\mu}_\mu|\Psi}$ is just the mass of the state, $M_\Psi$. For the vacuum, this is zero. How then could it be that the integrated trace of the stress-energy tensor depends on $\mu$, for the vacuum or for other states? We argue of course that it does not, and that the operator relevant to Cardy's conjecture is not the true stress-energy tensor, but rather $\hat{\theta}^\mu_\mu$, which generically differs from $\hat{T}^\mu_\mu$ by the $\mu$-dependent scale anomaly.\\
We will now re-state Cardy's conjecture. For a Lagrangian density as in eq. (8), where the terms are now true operators in a quantum theory, the divergence of the scale current is
\begin{equation}
\hat{\theta}^\mu_\mu(x) = \sum_i (\Delta_i + \gamma_i-D)\hat{\mathcal{O}}_i(x)
\end{equation}
\noindent where the RG scale dependence is now implicit in $\gamma_i$'s. Cardy's conjecture is then about the quantity
\begin{equation}
\Theta[\mu] \equiv\sum_i (\Delta_i + \gamma_i-D)\int d^D x \braket{\hat{\mathcal{O}}_i(x)} = \int d^D x\braket{\hat{T}^\mu_\mu} + \int d^D x \sum_i\gamma_i \braket{\hat{\mathcal{O}}_i}
\end{equation}
Specifically, $\Theta_\text{IR} \leq \Theta_\text{UV}$, i.e. $\Theta$ is a monotonically increasing function of $\mu$. The first term on the right-hand side is a constant: the anomaly is represented by the remainder. Of course there are two dependencies on $\mu$ in the anomaly: that of the $\gamma_i$'s and that of the expectation values $\braket{\hat{\mathcal{O}}_i(x)}$. We claim that a model-independent interpretation of Cardy's conjecture is really just a statement about the anomalous dimensions: $\gamma_i[\mu]$ increases monotonically as we flow to the IR. This would in fact affirm our earlier assumption that eq. (4) necessitated a unique choice of $\mu^\prime$. As a reminder, were this not the case we would have to conclude that RG flows can be cyclic, something which certainly should not describe any physical system. For the remainder of this paper we will work under the assumption that this is the correct interpretation of Cardy's conjecture, and that the conjecture is correct.\\
\section{Finding New States}
Briefly, let's take a detour back to the land of classical field theory. Consider the case of an interacting Klein-Gordon field in 2+1 dimensions, with a $\mathbb{Z}_2$ potential in its spontaneously broken phase (perhaps $V(\phi) = -m^2\phi^2 + g_6 \phi^6 + V_0$). Naively, one might suspect that domain walls are formed, and could be stable. This is not the case. We divide the Hamiltonian into kinetic and potential terms
\begin{equation}
H = K+U
\end{equation}
\noindent where K and U are positive definite. We now imagine scaling a static domain wall of spatial size 1 to a size $\lambda^{-1}$. Since $\Delta_K = 2$ and $\Delta_U = 0$ we have in $d=2$ spatial dimensions:
\begin{equation}
H[\lambda] = K + \lambda^{-2}U
\end{equation}
\noindent which cannot be minimized except at $\lambda = \infty$. Therefore the domain wall is unstable to shrinking indefinitely (this is of course due to the unbalanced tension on the wall). Furthermore, no stable extended field configurations exist in this model. The above classical scaling argument is the core of what is known as Derrick's theorem\cite{1}.\\
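For completeness, the same scaling argument can be written in general spatial dimension $d$ (a sketch in the conventions used above, where rescaling a size-1 configuration to size $\lambda^{-1}$ multiplies a term of engineering dimension $\Delta$ by $\lambda^{\Delta-d}$):

```latex
\begin{align*}
H[\lambda] &= \lambda^{\Delta_K-d}K + \lambda^{\Delta_U-d}U = \lambda^{2-d}K + \lambda^{-d}U,\\
\left.\frac{dH}{d\lambda}\right|_{\lambda=1} &= (2-d)K - d\,U .
\end{align*}
```

For $K, U > 0$ and $d \geq 2$ this derivative is strictly negative (in $d=2$ it equals $-2U$), so there is no stationary size and the configuration collapses.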
\subsection{Large Anomalous Dimensions}
The story is potentially much different in the quantum theory. It is easy to see how different things can be by assuming at first that $K$ and $U$ acquire anomalous dimensions $\gamma_K$ and $\gamma_U$. The conditions for a stable state then become the conditions for a local minimum of $H[\lambda]$ at $\lambda = 1$:
\begin{align}
\gamma_K K + (\gamma_U - 2)U &= 0\\
\gamma_K (\gamma_K -1) K + (\gamma_U - 2)(\gamma_U - 3)U &> 0
\end{align}
Alternatively (by eliminating $U$ in favor of $K$ using eq. (16)), $(2-\gamma_U)/\gamma_K> 0$ and $\gamma_K(2+\gamma_K -\gamma_U) > 0$. Now obviously in a real model the terms that appear in the potential energy will not all have identical scaling dimensions. The argument is clear though: the activation of anomalous scaling of operators that contribute to the energy density of a state can circumvent the conclusions of Derrick's theorem. This of course depends on the relative signs and magnitudes of the anomalous dimensions in question. This is where Cardy's conjecture enters the picture.\\
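To make the origin of these conditions explicit (a short derivation of ours, assuming $K$ and $U$ each scale homogeneously with the shifted exponents $\Delta \rightarrow \Delta + \gamma$ in $d=2$):

```latex
\begin{align*}
H[\lambda] &= \lambda^{\gamma_K}K + \lambda^{\gamma_U-2}U,\\
\left.\frac{dH}{d\lambda}\right|_{\lambda=1} &= \gamma_K K + (\gamma_U-2)U,\\
\left.\frac{d^2H}{d\lambda^2}\right|_{\lambda=1} &= \gamma_K(\gamma_K-1)K + (\gamma_U-2)(\gamma_U-3)U ,
\end{align*}
```

so eq. (16) is stationarity of $H[\lambda]$ at $\lambda=1$ and eq. (17) is convexity there.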
If the theory is weakly coupled in the IR, $\gamma_K$ and $\gamma_U$ are small, close to zero. As one increases the RG scale, these terms either decrease or stay the same. At some UV scale then, they stand the chance of being large in magnitude (perhaps before the theory is embedded to become UV complete) but negative. The stated conditions for circumvention of Derrick's theorem are then not satisfied.\\
This theory in 2+1 dimensions happens to have a negative beta function - it is strongly coupled in the IR and weakly coupled in the UV\cite{18}. Suppose then that in the UV, where the anomalous dimensions are close to zero (and possibly negative), Derrick's theorem is not circumvented. Cardy's conjecture implies then that as we flow to the IR, $\gamma_K$ and $\gamma_U$ grow to become large as we approach the non-perturbative regime. If the flow does not terminate at a fixed point before this happens, the anomalous dimensions stand a chance of approaching unity (and being positive), in which case the above conditions can almost certainly be satisfied. The meaning of the title of this paper is now revealed: it is precisely the case of an IR strongly-coupled QFT where one expects new massive solitonic states to become manifest in the field equations. That they are not manifest as solutions along the entire RG trajectory should not be surprising: perturbation theory in the UV is by construction only sensitive to a subsection of the total Hilbert space of a QFT\cite{13}.\\
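As a toy numerical check (the values of $\gamma_K$, $\gamma_U$, $K$, and $U$ below are chosen by hand for illustration, not computed from any model), one can verify that order-one positive anomalous dimensions satisfying eq. (16) produce a scaling-stable minimum at $\lambda = 1$, which the classical ($\gamma = 0$) argument forbids:

```python
import numpy as np

# Hand-picked order-one anomalous dimensions: gamma_K = gamma_U = 1 with
# U = K satisfies the stationarity condition gamma_K*K + (gamma_U - 2)*U = 0.
gamma_K, gamma_U = 1.0, 1.0
K = U = 1.0

def H(lam):
    # Energy of the rescaled configuration with anomalous exponents:
    # H[lambda] = lambda**gamma_K * K + lambda**(gamma_U - 2) * U.
    return lam**gamma_K * K + lam**(gamma_U - 2.0) * U

lams = np.linspace(0.5, 2.0, 301)
values = H(lams)
best = lams[np.argmin(values)]
print(best)  # the minimum sits at lambda = 1: a stable size exists

# Classical case for comparison: gamma = 0 gives K + lam**(-2) * U, which
# decreases monotonically in lam, so the wall shrinks without bound.
```

The classical scaling curve has no interior minimum, while the anomalous one does; the point of the exercise is only that the sign and size of the $\gamma$'s control whether a stationary size exists.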
We did not just prove that stable domain walls will exist in the 2+1 dimensional scalar field theory described at the beginning of this section. Rather, we used this simple case to visualize the potential for strongly-coupled theories to host solutions which have no classical analogue. Whether a theory actually hosts such solitonic states must be checked on a case-by-case basis, and is subject to details that we will not address here. In upcoming works we will investigate specific cases and demonstrate that significant deviations from classical estimates of soliton masses may occur.\\
\subsection{The Masses}
Suppose a theorist makes some exact non-perturbative evaluation of the operator dimensions in the Hamiltonian at some UV scale $\mu$, and uses the beta function for all relevant coupling constants to put bounds on how slowly those dimensions evolve with $\mu$. This theorist then discovers that, by our monotonicity arguments, conditions (16) and (17) are met as one flows to the IR, at an RG scale no smaller than $\mu^*$. What then is the mass of the state which emerges there, and how does it depend on the value of $\mu^*$?\\
We don't have an exact answer to this question of course, and necessarily this is a regime where perturbative calculations of anomalous dimensions are likely not very accurate. But if the conditions above can be met, one should be able to put a lower bound on the mass of the lightest solitonic state. Generically we should consider a time-independent state (i.e. a static field configuration) and a Hamiltonian of the form
\begin{align}
H = \sum_i H_i[\mu] + \Omega[\mu]
\end{align}
\noindent where the $H_i$ are derived from the integrated expectation values (with respect to the massive state, not the vacuum) of operators in eq. (8), and $\Omega$ is a free energy. Often one will encounter texts which claim this free energy is a cosmological constant, but this is not correct: while renormalization of vacuum graphs requires the introduction of a field-independent term in the action to regulate divergences, this is an RG-variant term and is compatible with a zero vacuum energy. The true meaning of $\Omega[\mu]$ is that of the free energy associated with the invisible fluctuation modes with momenta greater than $\mu$. Both $\Omega$ and the $H_i$ are RG-variant quantities, but by construction their sum should not be.\\
What matters now is essentially applying stability conditions (16) and (17) to our Hamiltonian (18). The result is not literally (16) and (17), but rather (in $d$ spatial dimensions)
\begin{align}
&\sum_i (\Delta_i+\gamma_i - d)H_i + (\gamma_\Omega -d)\Omega = 0\\
&\sum_i (\Delta_i+\gamma_i - d)(\Delta_i+\gamma_i - d+1)H_i + (\gamma_\Omega -d)(\gamma_\Omega -d+1)\Omega > 0
\end{align}
Here $\Omega$ is assumed to have no engineering dimension, and only acquires dimension $\gamma_\Omega$ through the anomalous running of the renormalized coupling constants of which it is a function. Assuming that conditions (19) and (20) are met at and below the RG scale $\mu^*$, we now have known relationships between the numerical values of $H_i[\mu^*]$ and $\Omega[\mu^*]$. As is typical with classical extended field configurations, all of these terms should generically differ only by $\mathcal{O}(1)$ coefficients\cite{19}. The anomalous dimensions will generically\footnote{The previous 2+1 dimensional example was a special case. There it seems that these solutions turn on as soon as the $\gamma$'s are positive, even if very small. However, in this number of dimensions many models already host solitonic states, e.g. vortices, a consequence of the fact that $\Delta_K = d = 2$. In higher dimensions, where soliton solutions are generically not present, the $\gamma$'s have more work to do and therefore will be $\mathcal{O}(d-\Delta)$ at $\mu^*$.} be $\mathcal{O}(d-\Delta)\sim\mathcal{O}(1)$ at $\mu^*$, and since the coefficients of the $H_i$'s and $\Omega$ in eqs. (19) and (20) will differ in sign, we argue that $H \sim \Omega[\mu^*]$ is as good an estimate for the lower bound of the mass as any. What is the value of $\Omega[\mu^*]$? Naively, it is some integral of the free energy density $\omega(g_i)$, a function of the couplings of the theory renormalized at $\mu^*$. The correct expression should depend on some normalized energy density profile of the state, and since this is at the moment indeterminate, we conjecture the following:
\begin{align}
M \sim \Omega[\mu^*] \gtrsim \frac{\omega[\mu^*]}{(\mu^*)^d}
\end{align}
Since the state only manifests as a solution to our quantum equations of motion when we are insensitive to distance scales less than $1/\mu^*$, the volume of the physical object should be at least $(1/\mu^*)^d$. The natural guess is then eq. (21).
\section{Conclusion and Going Forward}
We have interpreted Cardy's conjecture as a statement about the change of anomalous dimensions under RG flow. The basic requirements that RG flows not be cyclic and that an IR renormalized theory have fewer degrees of freedom than a UV renormalized theory are realized if some scaling dimensions are larger in the IR than in the UV. This becomes consequential if those dimensions deviate from classical values by $\mathcal{O}(1)$ corrections, and those deviations are positive. This happens when the theory is IR strongly coupled, and it opens the door for a circumvention of Derrick's classical no-go theorem. Should new solutions to the renormalized field equations emerge, we identify them as solitons and have proposed a way to estimate a lower bound on their mass using perturbation theory in the UV. Such a thing is only possible because of monotonicity arguments. Once the solutions are found and characterized, we propose a reorganization of the degrees of freedom at scales at and below $\mu^*$ to reflect the manifestation of these new solutions. One should expand perturbatively around such solutions, rather than the free field theory configurations.
\newpage
| {'timestamp': '2021-04-12T02:02:18', 'yymm': '2005', 'arxiv_id': '2005.07209', 'language': 'en', 'url': 'https://arxiv.org/abs/2005.07209'} |
high_school_physics | 138,603 | 16.128644 | 1 | Used Cornell Centrifugal Pump. Model: 3NLT-F5K. Serial Number: 977495-88. Pump capacity at 1800 rpm: 225 gallons per minute at 25 tdh. Inlets: 6 inches diameter port with a 7-1/2 inches flange. Outlets: 3 inches diameter port with a 7-1/2 inches flange. Overall dimensions: 45 inches length x 15 inches width x 15 inches height.
Used 2001 Cornell Centrifugal Pump. Model: 3CB-F16K 10-6. S/N: 121910 15.22. Capacity: 300 gpm. Impeller diameter: 15.22 in. PSID: 33. Inlets: 6 inch 150 lb class flange. Outlets: 3 inch 150 lb class flange. Overall dimensions: 5 feet L x 2 feet 4 inches W x 2 feet H.
Used Cornell Centrifugal Close Coupled Refrigerant Pump. Model: 2CBS5-4. Pump material: ASTM A536 60-40-18 ductile iron. Flow rate for new pump is 102 gallons per minute with required horsepower and pressure. Discuss with salesperson your process and product to determine if this refurbished pump should handle your specific duty. Pump specifications: Polar white/John Crane, 1.25”, T-1, double mechanical shaft seal with pressurized barrier fluid lubrication system, low oil limit switch, and seal chamber heater to maintain proper barrier oil viscosity. Motor specification: Close coupled to a totally enclosed fan cooled, refrigerant atmosphere, hostile environment, premium efficiency motor, with class “F” insulation; suitable for VFD. Baldor industrial motor, 5 horsepower. Overall dimensions: 2 feet 4 inches length x 1 foot 4 inches width x 1 foot 8 inches height. | {'timestamp': '2019-04-21T12:23:45Z', 'url': 'http://food-processing-equipment.biz/industrialcornell.html', 'language': 'en', 'source': 'c4'}
\ifCLASSOPTIONcompsoc
\IEEEraisesectionheading{\section{Introduction}\label{sec:introduction}}
\else
\section{Introduction}
\label{sec:introduction}
\fi
\IEEEPARstart{A}{dvanced} data collection techniques in today's world require
researchers to work with large volumes of nonlinear data, such as global climate
patterns \cite{daley1993atmospheric, jones2009high}, satellite signals
\cite{manjunath1996texture, zumberge1997precise}, social and mobile networks
\cite{carrington2005models, becker2013human}, the human genome
\cite{huang2009systematic,schafer2005empirical}, and patterns in collective
motion \cite{gajamannage2015identifying,gajamannage2015detecting}. Studying,
analyzing, and predicting such large datasets is challenging, and many such
tasks might be implausible without the presence of Nonlinear Dimensionality
Reduction (NDR) techniques. NDR interprets high-dimensional data using a reduced
dimensional representation that corresponds to the intrinsic nonlinear
dimensionality of the data \cite{van2009dimensionality}. Manifolds are often
thought of as being smooth, however many existing NDR methods do not directly
leverage this important feature. Sometimes, ignoring the underlying smoothness
of the manifold can lead to inaccurate embeddings, especially when the data has
been contaminated by noise.
Many NDR methods have been developed over the last two decades due to the lack
of accuracy and applicability of classic Linear Dimensionality Reduction (LDR)
methods such as Principal Component Analysis (PCA) \cite{jolliffe2002principal},
which finds directions of maximum variance, or Multi-Dimensional Scaling (MDS)
\cite{cox2000multidimensional}, which attempts to preserve the squared Euclidean
distance between pairs of points. As the Euclidean distance used in MDS
quantifies the distance between points in the high-dimensional space, rather the
actual distance along the manifold, it can have difficulty inferring a faithful
low-dimensional embedding. On the other hand, Isometric Mapping (Isomap)
represents the pairwise distance between points using \emph{geodesic distances}
and is an NDR method that successfully resolves the aforesaid problem in MDS
\cite{tenenbaum2000global}. Although Isomap has been successfully used to
analyze low-dimensional embedding of data from several domains, such as
collective motion \cite{delellis2014collective}, face recognition
\cite{yang2002face}, and hand-writing digit classification
\cite{yang2002extended}, this method can suffer from short-circuiting
\cite{balasubramanian2002isomap}, low-density of the data
\cite{lee2002curvilinear}, and non-convexity \cite{zha2003isometric}, all of
which can be magnified in the presence of noisy measurements. It is therefore
our goal here to propose a new method which ameliorates some of
these issues as compared to Isomap.
Generally speaking, NDR approaches reveal a smooth low-dimensional, nonlinear
manifold representation of high-dimensional data. While there are many unique
capabilities provided by current NDR methods, most of them encounter poor
performance in specific instances. In particular, many current NDR methods are not
adept at preserving the smoothness of the embedded manifold in the presence of
noise. Specifically, Isomap closely mimics the underlying manifold's geometry
using a graph structure that it builds via a neighborhood search
\cite{friedman1977algorithm, agarwal1999geometric} over the high-dimensional
data. Geodesics are generally \emph{piecewise linear}; thus, the manifold constructed from geodesics in this method is not actually smooth at each node, as demonstrated in
Fig.~\ref{fig:sphere_with_network}(a). Moreover, the length of such a geodesic
between two points is not necessarily the manifold distance. In fact,
given a sufficiently smooth manifold, the Isomap-generated geodesic distance
will generally be an \emph{over-estimate} of the true manifold distance, again
as demonstrated in Fig.~\ref{fig:sphere_with_network}(b). Of course, such issues are intensified in the presence of noisy measurements. Accordingly, herein we propose to \emph{replace the segments of the piecewise linear Isomap geodesic by a smoothing spline} as shown by the black curve in
Fig.~\ref{fig:sphere_with_network}(b) and \emph{consider the length of the
spline as the estimation of the manifold distance between points}.
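To make the intuition concrete, the following sketch (our own construction, not code from the paper; it assumes NumPy and SciPy are available, and \texttt{UnivariateSpline} stands in for a generic cubic smoothing spline, with a smoothing factor chosen by hand) compares the length of a piecewise linear path through noisy waypoints with the length of a smoothing spline fitted through the same points. The waypoints are sampled along a straight segment of true length one, so the noise-inflated polyline over-estimates the manifold distance while the spline length stays much closer to one:

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(0)

# Waypoints along a straight segment of true manifold length 1,
# contaminated by Gaussian measurement noise.
n, sigma = 50, 0.05
t = np.linspace(0.0, 1.0, n)
y = rng.normal(0.0, sigma, n)
true_len = 1.0

# Piecewise linear (graph-geodesic style) length: every noisy
# zig-zag adds spurious length to the estimate.
poly_len = np.sum(np.hypot(np.diff(t), np.diff(y)))

# Smoothing-spline length: fit with smoothing factor ~ n * sigma**2,
# then integrate the arc length on a dense grid.
spline = UnivariateSpline(t, y, k=3, s=n * sigma**2)
ts = np.linspace(0.0, 1.0, 2000)
spline_len = np.sum(np.hypot(np.diff(ts), np.diff(spline(ts))))

print(poly_len, spline_len)
```

The gap between the two lengths is exactly the effect that replacing the segments of a geodesic with a smoothing spline is meant to remove.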
\begin{figure}[htp]
\centering
\begin{subfigure}{0.3\textwidth}
\centering
\includegraphics[width= 2.2in]{figure1a.pdf}
\caption{}
\end{subfigure}%
~
\begin{subfigure}{0.2\textwidth}
\centering
\vspace{2mm}
\includegraphics[height= 1.7in]{figure1b.pdf}
\vspace{4mm}
\caption{}
\end{subfigure}
\caption{This figure demonstrates the lack of smoothness of the geodesics generated by Isomap. (a) Three nearest neighbors for each point (blue dots) of a spherical dataset of 300 points are found and joined by line segments (shown in blue) to create a graph structure. The Isomap manifold distance between two arbitrary points $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ is estimated as the length of the geodesic (red path), defined as the shortest path between two points and computed using, for example, Floyd's algorithm \cite{floyd1962algorithm}. (b) However, our approach creates a \emph{smoothing spline}, shown by the black curve, fitted through the points of the geodesic, as a better approximation of distances on the smooth manifold than the geodesic distance.}
\label{fig:sphere_with_network}
\end{figure}
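The construction in the figure can be sketched in a few lines (illustrative code of ours, assuming NumPy; the point count and neighborhood size are arbitrary choices): build a $k$-nearest-neighbor graph over points sampled on the unit sphere and run Floyd's algorithm to obtain all-pairs graph-geodesic distances. Since every such path is a chain of Euclidean edges, the triangle inequality guarantees the graph geodesic is at least as long as the straight-line distance it replaces:

```python
import numpy as np

rng = np.random.default_rng(1)

# Sample points on the unit sphere, as in the figure.
n, k = 60, 4
v = rng.normal(size=(n, 3))
pts = v / np.linalg.norm(v, axis=1, keepdims=True)

# Pairwise Euclidean distances and a symmetrized k-NN graph
# (missing edges carry infinite weight).
d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=2)
W = np.full((n, n), np.inf)
np.fill_diagonal(W, 0.0)
for i in range(n):
    for j in np.argsort(d[i])[1:k + 1]:   # k nearest neighbors of point i
        W[i, j] = W[j, i] = d[i, j]

# Floyd's algorithm: all-pairs shortest-path (graph-geodesic) distances.
G = W.copy()
for m in range(n):
    G = np.minimum(G, G[:, m:m + 1] + G[m:m + 1, :])
```

For well-sampled manifolds these graph geodesics approximate the true manifold distances; under noise they tend to over-estimate them, which is the behavior the smoothing-spline step is designed to correct.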
There are few NDR methods found in the literature that utilize smoothing splines
for embedding like our approach. For example, Local Spline Embedding (LSE) also
uses a smoothing spline to perform the embedding \cite{xiang2009nonlinear}.
However, this method minimizes the reconstruction error of the objective
function and embeds the data using smoothing splines that map local coordinates
of the underlying manifold to global coordinates. Specifically, LSE assumes the
existence of a smooth low-dimensional underlying manifold and the embedding is
based on an eigenvalue decomposition that is used to project the data onto a
tangent plane. However, differing from our approach, LSE assumes that the data
is noise free and unaffected by anomalies. The Principal Manifold Finding
Algorithm (PMFA) is another NDR method that also uses cubic smoothing splines to
represent the manifold and then quantifies the intrinsic distances of the points
on the manifold as lengths of the splines \cite{gajamannage2015dimensionality}.
However, this approach embeds high-dimensional data by reducing the
reconstruction error over a two-dimensional space. As this method only performs
two-dimensional embeddings, its applicability is limited for problems with
larger intrinsic dimensionality. As we will demonstrate in the sequel, our
proposed method overcomes the limitations of these methods.
This paper is structured as follows. In Section~\ref{sec:mds_isomap}, we will
detail the MDS and Isomap algorithms and describe the evolution of our NDR
method from these methods. Section~\ref{sec:sge} presents our NDR method, Smooth
Geodesic Embedding (SGE), that fits geodesics, as in Isomap, by smoothing
splines. We analyze the performance of the SGE method in
Section~\ref{sec:per_analysis} using three representative examples: a
semi-spherical dataset; images of faces; and images of hand written digits.
Finally, we provide discussion and conclusions in Section~\ref{sec:conclusion}.
\section{Multidimensional scaling and Isomap}\label{sec:mds_isomap}
We begin our analysis by deriving the mathematical details of the LDR method MDS. Then, we proceed to discuss Isomap which replaces the Euclidean distance in MDS by a geodesic distance. Next, we derive our method, SGE, as an extension of Isomap that fits geodesics by smoothing splines.
\subsection{Multidimensional scaling}\label{sec:mds}
Multidimensional scaling is a classic LDR algorithm that leverages the squared Euclidean distance matrix $\boldsymbol{D}=[d_{ij}^2]_{n\times n}$, where $d_{ij} = \|\boldsymbol{y}_i - \boldsymbol{y}_j\|_2$ and $n$ is the number of points. Here, $\boldsymbol{y}_i, \boldsymbol{y}_j\in \mathbb{R}^{d\times 1}$ are two points of the $d$-dimensional dataset $\boldsymbol{Y}=[\boldsymbol{y}_1; \dots; \boldsymbol{y}_i; \dots; \boldsymbol{y}_j; \dots; \boldsymbol{y}_n]$. This method first transforms the distance matrix $\boldsymbol{D}$ into a Gram matrix $\boldsymbol{S}=[s_{ij}]_{n\times n}$, which is derived by \emph{double-centering} \cite{lee2007nonlinear} the data using
\begin{equation}\label{eqn:double_centering}
s_{ij}=-\frac{1}{2}\big[d^2_{ij}-\mu_i(d^2_{ij}) -\mu_j(d^2_{ij})+\mu_{ij}(d^2_{ij})\big].
\end{equation}
Here, $\mu_i(d^2_{ij})$ and $\mu_j(d^2_{ij})$ are the means of the $i$-th row and $j$-th column, respectively, of the squared distance matrix, and $\mu_{ij}(d^2_{ij})$ is the mean of the entire matrix $\boldsymbol{D}$. MDS then computes the Eigenvalue Decomposition (EVD) of $\boldsymbol{S}$ as
\begin{equation}\label{eqn:evd}
\boldsymbol{S}=\boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{U}^T,
\end{equation}
where $\boldsymbol{U}$ is a unitary matrix ($\boldsymbol{U}^T\boldsymbol{U}=\boldsymbol{I}$) whose columns are the eigenvectors of $\boldsymbol{S}$, and $\boldsymbol{\Sigma}$ is a diagonal matrix of the corresponding eigenvalues. The Gram matrix $\boldsymbol{S}$, constructed from the Euclidean distance matrix $\boldsymbol{D}$, is symmetric and positive semidefinite\footnote{A symmetric $n\times n$ matrix $\boldsymbol{M}$ is said to be positive semidefinite if $\boldsymbol{z}^T \boldsymbol{M} \boldsymbol{z}\ge 0$ for all non-zero $\boldsymbol{z}\in\mathbb{R}^{n \times 1}$.}. Thus, all the eigenvalues of $\boldsymbol{S}$ are non-negative, and the Singular-Value Decomposition (SVD) and the EVD of $\boldsymbol{S}$ coincide~\cite{lee2007nonlinear}. $\boldsymbol{\Sigma}$ and $\boldsymbol{U}$ are arranged such that the diagonal of $\boldsymbol{\Sigma}$ contains the eigenvalues of $\boldsymbol{S}$ in descending order, and the columns of $\boldsymbol{U}$ represent the corresponding eigenvectors in the same order. We estimate $p$-dimensional latent variables of the high-dimensional dataset by
\begin{equation}\label{eqn:latent_var}
\hat{\boldsymbol{X}}=\boldsymbol{I}_{p\times n}\boldsymbol{\Sigma}^{1/2}\boldsymbol{U}^T.
\end{equation}
Here, $\hat{\boldsymbol{X}}$ is the $p$-dimensional embedding of the input data $\boldsymbol{Y}$.
Note, in the case of a matrix $\boldsymbol{S}$ which is not symmetric positive semidefinite, the EVD has negative eigenvalues which then violate Eqn.~(\ref{eqn:latent_var})~\cite{lee2007nonlinear}. Accordingly, as we discuss in Section~\ref{sec:isomap}, we replace the EVD computation on $\boldsymbol{S}$ by SVD.
Multidimensional scaling has limited applicability as it is inherently a linear method (when $\boldsymbol{D}$ is a squared Euclidean distance matrix). However, the NDR scheme Isomap overcomes this problem by employing geodesic distances instead of Euclidean distances.
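As a concrete illustration, the double-centering and eigendecomposition steps of Eqns.~(\ref{eqn:double_centering})--(\ref{eqn:latent_var}) can be sketched in a few lines of NumPy (the function `mds_embed` and its argument names are illustrative, not a published implementation):

```python
import numpy as np

def mds_embed(D2, p):
    """Classical MDS sketch: D2 is the n-by-n matrix of squared
    Euclidean distances; returns the p-by-n latent variables."""
    n = D2.shape[0]
    # Double centering, Eqn. (double_centering), written as S = -1/2 J D2 J
    J = np.eye(n) - np.ones((n, n)) / n
    S = -0.5 * J @ D2 @ J
    # EVD of the symmetric Gram matrix S, Eqn. (evd)
    w, U = np.linalg.eigh(S)
    order = np.argsort(w)[::-1]          # eigenvalues in descending order
    w, U = w[order], U[:, order]
    # Latent variables, Eqn. (latent_var): X_hat = Sigma^{1/2} U^T (top p rows)
    return np.sqrt(np.maximum(w[:p], 0.0))[:, None] * U[:, :p].T
```

For data that is exactly Euclidean with intrinsic dimension $p$, the pairwise distances of this embedding reproduce the input distances up to a rigid transformation.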
\subsection{Isomap}\label{sec:isomap}
Isomap works by creating a graph structure, based upon high-dimensional data, that estimates the intrinsic geometry of the manifold. The graph structure used by Isomap can be parameterized in multiple ways, but herein we
focus on the parameter $\delta$, which specifies the number of \emph{nearest neighbors}
to a given point \cite{agarwal1999geometric}. The nearest neighbor collection for each point is transformed into a graph structure by treating points as graph nodes and connecting each pair of nearest neighbors by an edge having the weight equal to the Euclidean distance between the two points. Given such a graph, the distance between any two points is measured as the \emph{shortest path distance in the graph}, which is commonly called the \emph{geodesic distance}.
The geodesic distance between any two points in the data can be computed in many ways, including Dijkstra's algorithm \cite{dijkstra1959note}. We employ Floyd's algorithm \cite{floyd1962algorithm} for this task as it computes the shortest paths between all pairs of nodes in one batch, and is more efficient than Dijkstra's algorithm in this case.
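A minimal NumPy sketch of this all-pairs computation, assuming a symmetric weight matrix with `np.inf` marking absent edges (`floyd_geodesics` is our illustrative name):

```python
import numpy as np

def floyd_geodesics(W):
    """Floyd's algorithm: W[i, j] is the edge weight between neighbors
    i and j, and np.inf where the graph has no edge. Returns the
    matrix of shortest-path (graph geodesic) distances."""
    G = W.copy().astype(float)
    np.fill_diagonal(G, 0.0)
    for k in range(G.shape[0]):
        # Allow paths that pass through intermediate node k
        G = np.minimum(G, G[:, k][:, None] + G[k, :][None, :])
    return G
```

The vectorized inner update relaxes all $n^2$ pairs at once, so the whole batch costs $O(n^3)$ scalar operations, in line with the one-batch character noted above.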
As in MDS, we first formulate the doubly centered matrix $\boldsymbol{S}$ from the squared geodesic distance matrix using Eqn.~(\ref{eqn:double_centering}). Here, the doubly centered matrix is not necessarily positive semidefinite as we \emph{approximate} the true geodesic distance matrix by the shortest graph distance \cite{lee2007nonlinear}. Thus, the eigenvalue decomposition of the $\boldsymbol{S}$ matrix might produce negative eigenvalues and Eqn.~(\ref{eqn:latent_var}) does not hold in this case. To overcome this problem, it is standard to perform the SVD over the Gram matrix $\boldsymbol{S}$ as
\begin{equation}\label{eqn:svd}
\boldsymbol{S}=\boldsymbol{V}\boldsymbol{\Sigma} \boldsymbol{U}^T,
\end{equation}
where $\boldsymbol{\Sigma}$ is a diagonal matrix of (non-negative) singular values, and $\boldsymbol{U}$ and $\boldsymbol{V}$ are unitary matrices. The latent variables of the high-dimensional input data are revealed by Eqn.~(\ref{eqn:latent_var}) with $\boldsymbol{\Sigma}$ and $\boldsymbol{U}^T$ obtained from Eqn.~(\ref{eqn:svd}).
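In code, the only change from the MDS step is replacing the eigendecomposition with an SVD, which remains valid even when $\boldsymbol{S}$ is not positive semidefinite (a sketch; `svd_embed` is an illustrative name):

```python
import numpy as np

def svd_embed(S, p):
    """Embed via the SVD of the Gram matrix, Eqn. (svd): S = V Sigma U^T,
    with singular values non-negative by construction; returns the
    p-by-n latent variables of Eqn. (latent_var)."""
    V, sig, Ut = np.linalg.svd(S)   # numpy convention: S = V @ diag(sig) @ Ut
    # Keep the top p singular values and right singular vectors
    return np.sqrt(sig[:p])[:, None] * Ut[:p, :]
```

When $\boldsymbol{S}$ happens to be positive semidefinite, this reduces to the MDS embedding, since the SVD and EVD then coincide.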
Isomap emphasizes nonlinear features of the manifold, however the lengths measured using geodesics might not faithfully reflect the correct manifold distance, as we demonstrate in Fig.~\ref{fig:sphere_with_network}. Accordingly, we propose to overcome this drawback in Isomap by utilizing a smoothing approach for geodesics.
\section{Smooth geodesic embedding}\label{sec:sge}
Our goal is to fit the geodesics computed in Isomap with smoothing splines to more closely mimic the manifold and preserve the geometry of the embedding. Classic smoothing spline constructions \cite{de1972calculating} require one input parameter, denoted by $s$, that controls the smoothness of the spline fitted through the points in a geodesic. Our proposed method, SGE, has five parameters:
\begin{itemize}
\item $\delta$ (inherent from Isomap) for the number of nearest neighbors,
\item $\mu_s$ controls the smoothness of the splines,
\item $\nu$ sets the threshold on the length of a spline before the spline order is reduced to the next lower level,
\item $h$ controls the number of discretizations that the method uses to evaluate the length of a spline, and
\item finally, $p$ prescribes the number of embedding dimensions.
\end{itemize}
Here, we demonstrate our approach by constructing a spline on an arbitrary geodesic $\mathcal{G}$, having $m\ge2$ points, in the graph created by a neighborhood search algorithm. For an index $k$, the $d$-dimensional points in $\mathcal{G}$ are given by
\begin{equation}\label{eqn:geo_pts}
\big\{\boldsymbol{y}_k=[y_{1k}, \dots, y_{dk}]^T\vert k=1, \dots, m\big\}.
\end{equation}
For each dimension $l\in \big\{1, \dots, d\big\}$, we fit $\big\{y_{lk}\vert k=1, \dots, m\big\}$ using one-dimensional smoothing splines $\hat{f}_l(z)$ of order $\theta+1$ that are parameterized in $z \in [0,1]$ by minimizing
\begin{equation}\label{eqn:spline}
\sum^m_{k=1}\big[y_{lk}-\hat{f}_l(z_k)\big]^2 + s\int^1_0 \big[\hat{f}_l^{(\theta)}(z)\big]^2dz
\end{equation}
as in \cite{de1972calculating}. Here, $(\theta)$ denotes the order of the derivative of $\hat{f}_l$, and $z_k$ is a discretization of the interval $[0,1]$ such that $z_1=0$, $z_k=(k-1)/(m-1)$, and $z_m=1$. Minimizing Eqn.~(\ref{eqn:spline}) yields $d$ one-dimensional smoothing splines $\{\hat{f}_l(z)|l=1,\dots,d\}$. We combine these one-dimensional splines to obtain a $d$-dimensional smoothing spline of the points $\big\{\boldsymbol{y}_k\vert k=1, \dots, m\big\}$ in $\mathcal{G}$,
\begin{equation}\label{eqn:dspline}
\hat{\boldsymbol{f}}(z)=[\hat{f}_1(z), \dots, \hat{f}_d(z)]^T,
\end{equation}
that estimates the \emph{smooth geodesic}. In numerical implementations, the order $\theta+1$ of the spline $\hat{\boldsymbol{f}}$ should be less than the number of points $m$ in the geodesic \cite{de1972calculating}.
Choosing the order of the spline is challenging: a spline of one order might fit the data perfectly (and thereby over-fit the noise in the data), while a spline of a different order might fit the data only weakly. Since the length of the fitted spline between two points is defined as the manifold distance between those points, either an over-fitted or an under-fitted spline can yield an incorrect manifold distance. To overcome this problem, we introduce a spline threshold (in percent) that specifies the maximum amount by which the length of a spline may exceed the length of the corresponding geodesic. We treat the geodesic distance as the default manifold distance between two points. If the length of a spline of a given order exceeds this limit, SGE reduces the order of the spline to the next lower level until the length of the spline satisfies the threshold or the method falls back to the geodesic distance. Trying a lower-order fit is also worthwhile when a higher-order spline fails numerically.
We present below our procedure for choosing the order of a given spline, organized into three main cases (1, 2, and 3) and sub-cases (a, b, \dots):
\begin{itemize}
\item \textbf{Case--1} If $m\ge4$:
\begin{itemize}
\item \textbf{Case--a:} we first fit the points in the geodesic with a cubic smoothing spline $\hat{\boldsymbol{f}}(z)$, where $z\in[0,1]$, according to Eqn.~(\ref{eqn:spline}) and Eqn.~(\ref{eqn:dspline}). Note that a cubic smoothing spline corresponds to $\theta=2$ in Eqn.~(\ref{eqn:spline}). We discretize this spline at $h$ points $z_{k_1}=(k_1-1)/(h-1); k_1=1,\dots, h$ and compute the length,
\begin{equation}\label{eqn:len_spl}
d_{\hat{\boldsymbol{f}}}=\sum^{h-1}_{k_1=1}\|\hat{\boldsymbol{f}}(z_{k_1+1})-\hat{\boldsymbol{f}}(z_{k_1})\|.
\end{equation}
Then, the length $d_{\hat{\boldsymbol{f}}}$ is compared with the corresponding geodesic distance
\begin{equation}\label{eqn:len_geo}
d_\mathcal{G}=\sum_{k=1}^{m-1}\|\boldsymbol{y}_{k+1}-\boldsymbol{y}_k\|.
\end{equation}
If $d_{\hat{\boldsymbol{f}}}<d_\mathcal{G}(100+\nu)/100$ (so that $\nu$ is thought of as a percentage), then we accept $d_{\hat{\boldsymbol{f}}}$ as the length of the smooth geodesic; otherwise we proceed to Case--b. The parameter $\nu$ (in percent) defines the threshold (the upper bound) on how much the length of the spline $\hat{\boldsymbol{f}}$ is allowed to exceed the length of the corresponding geodesic.\\
\item \textbf{Case--b:} we fit the data with a quadratic (i.e., $\theta=1$) spline $\hat{\boldsymbol{f}}$ according to Eqn.~(\ref{eqn:spline}) and Eqn.~(\ref{eqn:dspline}) and compute the length of the quadratic spline using Eqn.~(\ref{eqn:len_spl}). If $d_{\hat{\boldsymbol{f}}}<d_\mathcal{G}(100+\nu)/100$, then we accept $d_{\hat{\boldsymbol{f}}}$ as the length of the smooth geodesic; otherwise we move to the next case. \\
\item \textbf{Case--c:} we make a linear (i.e., $\theta=0$) fit $\hat{\boldsymbol{f}}$ according to Eqn.~(\ref{eqn:spline}) and Eqn.~(\ref{eqn:dspline}), and measure the length using Eqn.~(\ref{eqn:len_spl}). If $d_{\hat{\boldsymbol{f}}}<d_\mathcal{G}(100+\nu)/100$ in the linear fit, then we accept $d_{\hat{\boldsymbol{f}}}$, otherwise we move to Case--d. \\
\item\textbf{Case--d:} instead of fitting a spline, we consider the original geodesic itself as the fit and treat $d_\mathcal{G}$ as the length of the smooth geodesic.
\end{itemize}
\item \textbf{Case--2} If $m=3$:\\
Since the geodesic contains only three points, the spline fitting process starts with a quadratic spline. Thus, we carry out Cases b--d as in Case--1.
\item \textbf{Case--3} If $m=2$:\\
Since the geodesic contains only two points, we perform Cases c--d as in Case--1.
\end{itemize}
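The fall-through logic of Cases 1--3 can be sketched as follows. For brevity, this sketch substitutes ordinary least-squares polynomial fits (`np.polyfit`) for the penalized smoothing splines of Eqn.~(\ref{eqn:spline}); the order-reduction and threshold logic is the same, but a faithful implementation would use a smoothing-spline routine:

```python
import numpy as np

def fit_length(pts, deg, h=100):
    """Fit each coordinate of the geodesic points against z in [0, 1]
    with a degree-`deg` polynomial (a stand-in for a smoothing spline)
    and return the curve length over h discretization points,
    as in Eqn. (len_spl)."""
    m, d = pts.shape
    z = np.linspace(0.0, 1.0, m)
    zf = np.linspace(0.0, 1.0, h)
    curve = np.stack([np.polyval(np.polyfit(z, pts[:, l], deg), zf)
                      for l in range(d)], axis=1)
    return np.linalg.norm(np.diff(curve, axis=0), axis=1).sum()

def smooth_geodesic_length(pts, nu=10.0, h=100):
    """Cases 1-3: start from a cubic fit (quadratic/linear for short
    geodesics), reduce the order whenever the fitted length exceeds the
    geodesic length by more than nu percent, and finally fall back to
    the geodesic distance itself (Case d)."""
    m = pts.shape[0]
    d_geo = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()
    start_deg = 3 if m >= 4 else m - 1   # m=3 -> quadratic, m=2 -> linear
    for deg in range(start_deg, 0, -1):
        d_fit = fit_length(pts, deg, h)
        if d_fit < d_geo * (100.0 + nu) / 100.0:
            return d_fit
    return d_geo
```

On a straight geodesic, every fit is accepted at the first attempt and the returned length agrees with the geodesic distance.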
The smoothing parameter $s$ trades off the spline fit between no fitting error (when $s=0$) and maximum smoothness (as $s\rightarrow \infty$); it controls the sum of squared errors between the training points and the fitted function. The best value of $s$, ensuring the least error while providing sufficient smoothness, is bounded by a function of the number of points in the geodesic as
\begin{equation}\label{eqn:spl_interval}
m-\sqrt{m}\le s \le m+\sqrt{m},
\end{equation}
\cite{reinsch1967smoothing}. Since the number of points varies between geodesics, no single fixed value of the smoothing parameter satisfies inequality~(\ref{eqn:spl_interval}) for all of them. To control this, we introduce a new parameter, called the smoothing multiplier, $\mu_s\ge0$, such that $s=\mu_s m$. Given $\mu_s$, SGE then uses a different smoothing parameter for each smooth geodesic, depending on $m$.
For each pair of point indices $i,j$ in the dataset, we execute the aforesaid procedure and approximate the length of the smooth geodesic $d_{ij}$. Then, we square the entries $d_{ij}$ and create the matrix $\boldsymbol{D}=[d^2_{ij}]_{n\times n}$. We perform double centering on $\boldsymbol{D}$ using Eqn.~(\ref{eqn:double_centering}) to obtain the doubly centered matrix $\boldsymbol{S}$. Then, we compute SVD as in Eqn.~(\ref{eqn:svd}) followed by computing $p$-dimensional latent variables $\hat{\boldsymbol{X}}$ according to Eqn.~(\ref{eqn:latent_var}). A summary of the method SGE is presented in Algorithm \ref{alg:algorithm}.
\begin{algorithm*}[!htp]
\caption{ \textit{Smooth Geodesic Embedding (SGE).
\\ Inputs: Data ($\boldsymbol{Y}$), number of nearest neighbors ($\delta$), smoothing multiplier ($\mu_s$), spline threshold percentage ($\nu$), number of discretizations ($h$), and embedding dimension ($p$).
\\Outputs: List of $p$ largest singular values ($\lambda_l;l=1,\dots,p$) and $p$-dimensional embedding ($\hat{\boldsymbol{X}}$). }}
\begin{algorithmic}[1]
\State For each point in $\boldsymbol{Y}$, choose $\delta$ nearest points as neighbors \cite{friedman1977algorithm}.
\State Consider all the points in $\boldsymbol{Y}$ as nodes; if two nodes are chosen as neighbors in Step 1, join them by an edge whose length equals the Euclidean distance between them. This step converts the dataset into a graph.
\State For each pair of nodes in the graph, find the points $\mathcal{G}=\big\{\boldsymbol{y}_k\vert k=1, \dots, m\big\}$ in the shortest path using Floyd's algorithm \cite{floyd1962algorithm}. Here, $m=|\mathcal{G}|\ge2$.
\State The points in $\mathcal{G}$ are fitted with a smoothing spline and its length is computed:
\newline
Case--1 ($m\ge4$):
\begin{addmargin}[1em]{2em}
Case--a:
\begin{addmargin}[1em]{2em}
fit $\mathcal{G}$ with a cubic smoothing spline using Eqn.~(\ref{eqn:spline}) and Eqn.~(\ref{eqn:dspline}), then approximate the length $d_{\hat{\boldsymbol{f}}}$ of that spline using Eqn.~(\ref{eqn:len_spl}). Let the length of the geodesic be $d_\mathcal{G}$ [Eqn.~(\ref{eqn:len_geo})]. If $d_{\hat{\boldsymbol{f}}}<d_\mathcal{G}(100+\nu)/100$, then accept $d_{\hat{\boldsymbol{f}}}$ as the length of the smooth geodesic, otherwise proceed to Case--b.
\end{addmargin}
Case--b:
\begin{addmargin}[1em]{2em}
fit $\mathcal{G}$ with a quadratic smoothing spline using Eqn.~(\ref{eqn:spline}) and Eqn.~(\ref{eqn:dspline}). Approximate the length $d_{\hat{\boldsymbol{f}}}$ of that spline using Eqn.~(\ref{eqn:len_spl}). If $d_{\hat{\boldsymbol{f}}}<d_\mathcal{G}(100+\nu)/100$, then accept $d_{\hat{\boldsymbol{f}}}$ as the length of the smooth geodesic, otherwise proceed to Case--c.
\end{addmargin}
Case--c:
\begin{addmargin}[1em]{2em}fit $\mathcal{G}$ with a linear smoothing spline using Eqn.~(\ref{eqn:spline}) and Eqn.~(\ref{eqn:dspline}). Approximate the length $d_{\hat{\boldsymbol{f}}}$ of that spline using Eqn.~(\ref{eqn:len_spl}). If $d_{\hat{\boldsymbol{f}}}<d_\mathcal{G}(100+\nu)/100$, then accept $d_{\hat{\boldsymbol{f}}}$ as the length of the smooth geodesic, otherwise proceed to Case--d. \end{addmargin}
Case--d:
\begin{addmargin}[1em]{2em}
Consider $d_\mathcal{G}$ as the approximated length of the smooth geodesic.
\end{addmargin}
\end{addmargin}
Case--2 ($m=3$):
Perform Cases b--d as in Case--1.
\newline
Case--3 ($m=2$):
Perform Cases c--d as in Case--1.
\State Fill the distance matrix $\boldsymbol{D}=[d^2_{ij}]_{n\times n}$, where $d_{ij}$ is the length of the smooth geodesic between nodes $i$ and $j$ computed in Steps 3--4. Double center $\boldsymbol{D}$ and convert it to a Gram matrix $\boldsymbol{S}$ using Eqn.~(\ref{eqn:double_centering}).
\State Perform the SVD on $\boldsymbol{S}$ using Eqn.~(\ref{eqn:svd}) and extract the $p$ largest singular values $\lambda_l;l=1,\dots,p$ along with the latent variables $\hat{\boldsymbol{X}}$ given by Eqn.~(\ref{eqn:latent_var}).
\end{algorithmic}\label{alg:algorithm}
\end{algorithm*}
\section{Performance analysis}\label{sec:per_analysis}
In this section we demonstrate the effectiveness of our proposed NDR approach using three representative examples. As the first example, we use a synthetic dataset of a semi-sphere to analyze the performance of SGE with respect to neighborhood size, smoothness, sparsity, and noise. Then, we study the performance of SGE using two standard benchmark datasets: 1) face images \cite{deeplearning}; and 2) images of handwritten digits (2's, 4's, 6's, and 8's) \cite{mnistdatabase}.
\subsection{Embedding of a semi-sphere}
We begin by embedding a synthetic dataset sampled from a semi-spherical manifold using SGE and Isomap to demonstrate the key concepts of our proposed SGE technique since, in this case, we can analytically compute the manifold distance on the semi-sphere and then compare with the embedding distances computed by SGE and Isomap. We sample 600 points from the manifold defined by
\begin{align}\label{eqn:sphere}
y_1 &= r\cos(\gamma_1)\cos(\gamma_2), \nonumber\\
y_2 &= r\cos(\gamma_1)\sin(\gamma_2), \nonumber\\
y_3 &= r\sin(\gamma_1),
\end{align}
for $\gamma_1=\mathcal{U}[-\pi/2, \pi/2]$ and $\gamma_2=\mathcal{U}[0, \pi]$, where $\mathcal{U}[a,b]$ denotes a uniform distribution between $a$ and $b$. Here, $r$ is the radius of the semi-sphere, which is set to $20+\mathcal{N}[0,3^2]$, where $\mathcal{N}[0,3^2]$ is a random variable sampled from a Gaussian distribution with mean 0 and variance $3^2$. We impose substantial noise on the dataset because we intend to investigate the robustness of our method when embedding noisy datasets.
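For reproducibility, the sampling scheme above can be sketched as follows (the function and parameter names are ours):

```python
import numpy as np

def sample_semisphere(n=600, r0=20.0, sigma=3.0, seed=0):
    """Sample n noisy points from the semi-sphere of Eqn. (sphere):
    gamma_1 ~ U[-pi/2, pi/2], gamma_2 ~ U[0, pi], and radius
    r = r0 + N(0, sigma^2) drawn independently for each point."""
    rng = np.random.default_rng(seed)
    g1 = rng.uniform(-np.pi / 2, np.pi / 2, n)
    g2 = rng.uniform(0.0, np.pi, n)
    r = r0 + rng.normal(0.0, sigma, n)
    return np.stack([r * np.cos(g1) * np.cos(g2),
                     r * np.cos(g1) * np.sin(g2),
                     r * np.sin(g1)], axis=1)
```

Drawing the radius independently per point is what makes the sample a noisy shell around the semi-sphere rather than an exact surface.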
We set the spline threshold $\nu$ and the spline discretization $h$ to $10\%$ and 100, respectively. Then, we run the SGE algorithm repeatedly over the spherical dataset with $\delta=2, 3, \dots, 8$ and $\mu_s=0, 0.1, \dots, 1.0$, and obtain two-dimensional embeddings. This gives 77 different pairs of $\delta$ and $\mu_s$, which produce 77 two-dimensional embeddings. We assess the performance of the methods in terms of their ability to preserve distances between the original data and the embedding. For each such embedding (77 in total), we compute distances between points in the embedding space using the Euclidean metric and denote the distance matrix by $\boldsymbol{D}_S$. We also run Isomap with the same sequence of $\delta$'s and obtain its two-dimensional embeddings; the distance matrix for the Isomap embedding is denoted by $\boldsymbol{D}_I$. Finally, we compute the true manifold distances between points of the dataset as great-circle distances. If $\boldsymbol{\alpha}$ and $\boldsymbol{\beta}$ are two points on a semi-sphere with radius $r$, the manifold distance $d$ is given by
\begin{equation}\label{eqn:mani_dist}
d = r\gamma; \quad \gamma=\cos^{-1}\left(\frac{\boldsymbol{\alpha}\cdot\boldsymbol{\beta}}{\|\boldsymbol{\alpha}\|\|\boldsymbol{\beta}\|}\right),
\end{equation}
\cite{stewart2012essential}. We compute all the pairwise distances using the above equation and form the distance matrix $\boldsymbol{D}_M$ for the manifold.
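A direct transcription of Eqn.~(\ref{eqn:mani_dist}); the clipping guards against round-off pushing the cosine slightly outside $[-1,1]$:

```python
import numpy as np

def manifold_distance(a, b, r):
    """Great-circle (manifold) distance on a sphere of radius r,
    Eqn. (mani_dist): d = r * arccos(a.b / (|a| |b|))."""
    cos_g = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return r * np.arccos(np.clip(cos_g, -1.0, 1.0))
```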
The embedding error of SGE, denoted by $\mathcal{E}_S$, is computed as the Mean Absolute Deviation (MAD) between the embedding distances and the manifold distances \cite{petruccelli1999applied}. Since the distance matrices are symmetric and have zeros on the diagonal, the MAD can be computed using
\begin{equation} \label{eqn:mad}
\mathcal{E}_S=\frac{2}{n(n-1)}\sum^{n-1}_{i=1}\sum^n_{j=i+1}\big|(\boldsymbol{D}_M)_{ij}-(\boldsymbol{D}_S)_{ij}\big|.
\end{equation}
Similarly, for each $\delta$, we compute the MAD between the Isomap embedding and the manifold distances, which we denote by $\mathcal{E}_I$. Fig.~\ref{fig:comp_embedding} illustrates the MADs for Isomap ($\mathcal{E}_I$) and SGE ($\mathcal{E}_S$), and their difference ($\mathcal{E}_I-\mathcal{E}_S$), versus $\delta$ and $\mu_s$. Fig.~\ref{fig:comp_embedding}(a) and \ref{fig:comp_embedding}(b) show that both methods display decreasing errors for increasing $\delta$'s (i.e., increasing numbers of neighbors), while SGE also shows decreasing error as $\mu_s$ increases. Fig.~\ref{fig:comp_embedding}(c) further indicates that SGE performs better than Isomap for larger smoothing multipliers \emph{for all $\delta$'s}. Moreover, this plot shows that SGE performs worst when $\delta=2$ and $\mu_s=0$, and best when $\delta=2$ and $\mu_s=1$.
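The error measure of Eqn.~(\ref{eqn:mad}) reduces to an average over the strict upper triangle of the two distance matrices, which can be sketched as:

```python
import numpy as np

def mad(D_true, D_emb):
    """Mean absolute deviation, Eqn. (mad), between two symmetric
    zero-diagonal distance matrices, averaged over the n(n-1)/2
    distinct pairs."""
    n = D_true.shape[0]
    iu = np.triu_indices(n, k=1)   # strict upper triangle
    return np.abs(D_true[iu] - D_emb[iu]).mean()
```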
\begin{figure}[htp]
\centering
\includegraphics[width=3in]{figure2.pdf}
\caption{Analyzing the performance of Isomap and SGE embeddings using the Mean Absolute Deviation (MAD). Herein, we compute (a) the MAD between the Isomap embedding and the data, denoted by $\mathcal{E}_I$, for different neighborhood sizes ($\delta$'s); and (b) the MAD between the SGE embedding and the data, denoted by $\mathcal{E}_S$, for different neighborhood sizes and smoothing multipliers ($\mu_s$'s). (c) The difference of errors between the two methods ($\mathcal{E}_I-\mathcal{E}_S$), computed over the parameter space of $\delta$ and $\mu_s$. \emph{The green cells denote that the performance of SGE is superior to that of Isomap.}}
\label{fig:comp_embedding}
\end{figure}
Next, we analyze the influence of data sparsity on the embedding performance of SGE and compare it with Isomap. For this task, we produce a sequence of spherical datasets with an increasing number of points. We create the first dataset of 200 points using Eqn.~(\ref{eqn:sphere}) with $r=20+\mathcal{N}[0,2^2]$, then add another 100 points, generated using the same equation, to produce the second dataset. Continuing in this manner, we generate a final dataset of 1200 points. Then, we embed these datasets in two dimensions using both Isomap with $\delta=3$ and SGE with $\delta=3$, $\mu_s=1$, and $\nu=10\%$. We compute the embedding errors $\mathcal{E}_S$ and $\mathcal{E}_I$ using the MAD for each dataset, as explained before. Since substantial noise is present in the datasets, we run both methods over each dataset for 16 realizations so that we can compute averages. Fig.~\ref{fig:analysis_n}(a) shows the mean embedding errors over the 16 realizations, with error bars, for the Isomap and SGE embeddings. We observe that the error associated with the SGE embedding is smaller than that of Isomap for \emph{all values of $n$.}
Finally, we study the embedding error in terms of the size of the noise present in the data. For this task, we formulate a latticed semi-sphere of 600 points using Eqn.~(\ref{eqn:sphere}) with uniformly discretized $\gamma_1 \in[-\pi/2,\pi/2]$ and $\gamma_2 \in [0, \pi]$. Then, we impose increasing uniform noise levels on the parameter representing the radius as $r=20+\eta\,\mathcal{U}[-1,1]$; $\eta=0, 0.3, 0.6, \dots, 3$, and produce 11 datasets. We embed each dataset 25 times (25 realizations) using Isomap with $\delta=3$ and SGE with $\delta=3$, $\nu=10\%$, and $\mu_s=1$. Fig.~\ref{fig:analysis_n}(b) presents the embedding errors for both methods computed using Eqn.~(\ref{eqn:mad}). We observe that, while $\mathcal{E}_S$ increases slowly with increasing $\eta$, $\mathcal{E}_I$ increases considerably more quickly. \emph{Note that the error bars for $\mathcal{E}_S$ are significantly smaller than those of $\mathcal{E}_I$ at any given $\eta$.}
\begin{figure}[htp]
\centering
\includegraphics[width=3in]{figure3.pdf}
\caption{Mean embedding error of Isomap, denoted by $\mathcal{E}_I$, (in red) and that of SGE, denoted by $\mathcal{E}_S$, (in blue) versus (a) sparsity and (b) noise. Error bars represent standard deviations of the errors computed over the realizations. \emph{Note that SGE has both a lower average error and a lower variance in the error across trials.}}
\label{fig:analysis_n}
\end{figure}
\subsection{Embedding of face images}
In this section, we validate the SGE method using a real-world dataset of face images available in \cite{deeplearning}. This dataset consists of 698 images of dimension $64\times 64$, each with a varying pose and direction of lighting, as shown by a sample of 16 snapshots in Fig.~\ref{fig:face_images}(a). We randomly choose 400 images as our baseline dataset and generate three other datasets of 400 images from the baseline dataset by imposing Gaussian noise with standard deviations ($\sigma$'s) 0.1, 0.2, and 0.3 [Fig.~\ref{fig:face_images}(b)]. We set $\delta=4$, $\nu=10\%$, and $h=100$ in SGE and run the algorithm over each dataset (4 in total) once for each of $\mu_s=$ 0, 0.3, 0.6, 0.9, 1.2. Then, we embed these four datasets in two dimensions using Isomap with $\delta=4$.
We use the ability to preserve distances between the original and the embedding data to analyze the performance of the method \cite{shaw2009structure}. In particular,
we view the distances in the original (noise free) imagery as the ``true'' distances and judge the algorithm's ability to recover those distances after the imagery has been corrupted by noise.
For both the data and the embedding, we first search for the $\delta$ nearest neighbors of each point and then produce a weighted graph by treating points in the dataset as nodes and connecting each pair of neighbors by an edge whose length equals their Euclidean distance. The weighted graph constructed through the nearest neighbor search is a simple graph\footnote{A simple graph is an undirected graph that does not contain loops (edges connected at both ends to the same vertex) or multiple edges (more than one edge between any two different vertices) \cite{balakrishnan2012textbook}.} that does not contain self-loops or multiple edges. We compute the $ij$-th entry of the adjacency distance matrix $A$ for the data as
\begin{equation}\label{eqn:adj1}
A_{ij}= \begin{cases}
d(i,j) & : \text{if} \ \exists \ \text{an edge} \ ij \ \text{in the graph} \\
& \ \ \text{of the original data,}\\
0 & : \text{otherwise,}
\end{cases}
\end{equation}
and the $ij$-th entry of the adjacency distance matrix $\tilde{A}$ for the embedding data as
\begin{equation}\label{eqn:adj2}
\tilde{A}_{ij}= \begin{cases}
d(i,j) & : \text{if} \ \exists \ \text{an edge} \ ij \ \text{in the graph} \\
& \ \ \text{of the embedding data,}\\
0 & : \text{otherwise.}
\end{cases}
\end{equation}
Here, $d(i,j)$ is the Euclidean distance between nodes $i$ and $j$. In this paper, we impose Gaussian noise on real-world datasets such as the face images and the images of handwritten digits. \emph{Thus, we think of our original data as the uncorrupted data before we impose the noise.}
For $n$ points in the dataset, the error associated with the neighbors' distance is computed as the normalized sum of pairwise absolute differences between entries of the adjacency distance matrices,
\begin{equation}\label{eqn:adj_error}
\text{error}=\frac{1}{n\delta}\sum_{i,j}\big\vert A_{ij}-\tilde{A}_{ij}\big\vert,
\end{equation}
where $\delta$ is the neighbor parameter \cite{shaw2009structure}.
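The adjacency distance matrices of Eqns.~(\ref{eqn:adj1})--(\ref{eqn:adj2}) and the error of Eqn.~(\ref{eqn:adj_error}) can be sketched as follows (the function names are illustrative; the nearest-neighbor graph is symmetrized so that it stays undirected):

```python
import numpy as np

def knn_adjacency(Y, delta):
    """Adjacency distance matrix, Eqns. (adj1)/(adj2): A[i, j] is the
    Euclidean distance if j is among i's delta nearest neighbors
    (symmetrized), and 0 otherwise."""
    D = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
    A = np.zeros_like(D)
    for i in range(len(Y)):
        nbrs = np.argsort(D[i])[1:delta + 1]   # skip the point itself
        A[i, nbrs] = D[i, nbrs]
        A[nbrs, i] = D[nbrs, i]                # keep the graph undirected
    return A

def neighbor_error(A, A_tilde, delta):
    """Normalized sum of absolute entry differences, Eqn. (adj_error)."""
    n = A.shape[0]
    return np.abs(A - A_tilde).sum() / (n * delta)
```

Identical graphs give zero error, so the measure directly quantifies how much the embedding perturbs local neighborhood distances.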
Fig.~\ref{fig:face_images}(c) illustrates the embedding errors of Isomap, denoted by $\mathcal{E}_I$, and SGE, denoted by $\mathcal{E}_S$, for $\sigma= $ 0, 0.1, 0.2, 0.3 and $\mu_s= $ 0, 0.3, 0.6, 0.9, 1.2. We observe that the error increases for both methods as the noise in the data increases. However, as shown here, the error of embedding noisy data can be reduced significantly by choosing an appropriate non-zero smoothing multiplier in SGE. Fig.~\ref{fig:face_images}(d), showing the difference of errors ($\mathcal{E}_I-\mathcal{E}_S$), demonstrates that SGE performs better than Isomap, in terms of error, for all noise levels when $\mu_s \ge 0.3$.
\begin{figure}[hpt]
\vspace{20pt}
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=3.4in]{figure4a.pdf}
\end{subfigure}
~
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=3.5in]{figure4b.pdf}
\end{subfigure}
~
\begin{subfigure}{0.5\textwidth}
\centering
\includegraphics[width=3.5in]{figure4c.pdf}
\end{subfigure}
\caption{Embedding of face images ($64\times 64$ dimensional), distorted with different noise levels, using Isomap and SGE with different smoothing levels. (a) A sample of 16 face images \cite{deeplearning}, where the snapshots in the first, second, and third rows represent left-right light change, left-right pose change, and up-down pose change, respectively. (b) Face images are distorted by imposing three levels of Gaussian noise, $\sigma=$ 0.1, 0.2, and 0.3. The datasets (four in total) are embedded using Isomap and then using SGE with smoothing multipliers $\mu_s=$ 0, 0.3, 0.6, 0.9, 1.2. Then, (c) the embedding errors of Isomap ($\mathcal{E}_I$) and SGE ($\mathcal{E}_S$), and (d) their error difference are computed. \emph{The green cells denote that the performance of SGE is superior to that of Isomap.}}\label{fig:face_images}
\end{figure}
\subsection{Embedding of handwritten digits}
Next, we embed handwritten digits available from the Mixed National Institute of Standards and Technology (MNIST) database \cite{mnistdatabase} using SGE and study the performance of the method on this dataset. This dataset contains 60,000 $28 \times 28$ dimensional images of handwritten digits from 0 to 9. We sample two arbitrary datasets for our study, each with 400 images, such that one dataset has only the digit 2 and the other dataset has the digits 2, 4, 6, and 8.
We run Isomap over the dataset containing the digit 2 with $\delta=4$. We run SGE twice: first with $\delta=4$, $\mu_s=0$, $\nu=10\%$, and $h=100$; and second with $\delta=4$, $\mu_s=0.6$, $\nu=10\%$, and $h=100$. This procedure yields three two-dimensional embeddings. We formulate the adjacency distance matrices for the data and the embedding using Eqns.~(\ref{eqn:adj1}) and (\ref{eqn:adj2}), respectively, and compute the embedding error using Eqn.~(\ref{eqn:adj_error}). Then, we distort the dataset of 400 images with Gaussian noise having $\sigma=0.2$ and run Isomap with $\delta=4$. We run the noisy dataset through SGE twice: first with parameters $\delta=4$, $\mu_s=0$, $\nu=10\%$, and $h=100$; and second with $\delta=4$, $\mu_s=0.6$, $\nu=10\%$, and $h=100$. The embedding errors for Isomap, SGE with $\mu_s=0$, and SGE with $\mu_s=0.6$ are given in Table~\ref{tab:mnist}(a). We see in this table that, regardless of the noise present in the data, the error of SGE without smoothing is greater than that of Isomap, while the error of SGE with smoothing is smaller than that of Isomap. Moreover, moving from the noise-free dataset to the noisy dataset, the embedding error increases by 0.87 for Isomap, but by only 0.24 for SGE with $\mu_s=0.6$. This is because setting the smoothing multiplier to $\mu_s=0.6$ allows SGE to recover the manifold from the noise-corrupted measurements.
Next, we embed a sample of 400 digits, consisting of 2's, 4's, 6's, and 8's, into two dimensions using Isomap and SGE. We run Isomap over this dataset with $\delta=4$. Then, we run SGE twice: first with $\delta=4$, $\nu=10\%$, $\mu_s=0$, and $h=100$; and second with $\delta=4$, $\nu=10\%$, $\mu_s=0.9$, and $h=100$. Thereafter, we distort the dataset with Gaussian noise having $\sigma=0.3$ and then run Isomap with $\delta=4$, followed by SGE with the same two parameter sets used before. We then compute the Isomap and SGE errors associated with embedding the noise-free and noisy datasets using Eqn.~(\ref{eqn:adj_error}), which we present in Table~\ref{tab:mnist}(b). As with the embedding of the digit 2, regardless of the noise in the data, the embedding error of SGE \emph{with no smoothing} is greater than that of Isomap, while that of SGE \emph{with smoothing} is smaller than that of Isomap. Moreover, moving from the embedding of noise-free data to the embedding of noisy data, the error of Isomap increases by 0.88, while that of SGE with $\mu_s=0.9$ increases by only 0.21.
Finally, we compare the classification ability of both methods in the presence of high noise. In this setting, clear clustering of similar digits allows for better classification accuracy. To demonstrate the desired clustering, we present two-dimensional Isomap and SGE embeddings of the noisy dataset ($\sigma=0.3$) of digits 2, 4, 6, and 8 in Fig.~\ref{fig:handwring_2468}. Therein, we observe that while Isomap is unable to achieve a clear clustering of digits, SGE with $\mu_s=0.9$ achieves qualitatively better clustering, even under the high noise present in the data.
\begin{table}[htp]
\begin{center}
\begin{adjustbox}{width=.48\textwidth}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{(a)} & \multirow{2}{*}{Noise} & \multirow{2}{*}{Isomap} & \multicolumn{2}{|c|}{SGE}\\
\cline{4-5}
& & & $\mu_s=0$ & $\mu_s=0.6$\\
\hline
\multirow{2}{*}{Digit ``2"} & $\sigma=0$ & 7.02 & 7.13 & 5.86\\
\cline{2-5}
& $\sigma=0.2$ & 7.89 & 8.19 & 6.10\\
\hline \hline
\multirow{2}{*}{(b)} & \multirow{2}{*}{Noise} & \multirow{2}{*}{Isomap} & \multicolumn{2}{|c|}{SGE}\\
\cline{4-5}
& & & $\mu_s=0$ & $\mu_s=0.9$\\
\hline
Digits ``2", ``4", & $\sigma=0$ & 7.38 & 7.43 & 6.09\\
\cline{2-5}
``6", and ``8" & $\sigma=0.3$ & 8.20 & 8.36 & 6.30\\
\hline
\end{tabular}
\end{adjustbox}
\end{center}
\caption {Errors of Isomap and SGE embeddings of (a) a sample of 400 handwritten 2's; and (b) a sample of 400 handwritten digits consisting of 2's, 4's, 6's, and 8's. The first row of (a) shows the errors when the dataset of the digit 2 is embedded using Isomap, and using SGE with two smoothing coefficients, $\mu_s=0$ and $\mu_s=0.6$. The dataset is then corrupted with Gaussian noise of $\sigma=0.2$ and embedded using Isomap, and SGE with $\mu_s=0$ and $\mu_s=0.6$, as shown in the second row of (a). The first row of (b) gives the errors of the Isomap embedding, and of the SGE embeddings with $\mu_s=0$ and $\mu_s=0.9$, for the noise-free version of the sample of digits 2, 4, 6, and 8. The second row of (b) gives the corresponding errors for the noisy version of the dataset, created by adding Gaussian noise with $\sigma=0.3$.} \label{tab:mnist}
\end{table}
\begin{figure}[htp]
\centering
\includegraphics[width=3.6in]{figure5.pdf}
\caption{Isomap and SGE Embeddings of handwritten numbers 2, 4, 6, and 8. Different digits are shown in different colors (2's in green, 4's in orange, 6's in brown, and 8's in gray) in the two-dimensional embedding space and the embedded snapshots illustrate the appearance of arbitrarily chosen handwritten digits. The left panel shows the Isomap embedding of the noisy dataset ($\sigma=0.3$) and the right panel shows the SGE embedding of the same dataset with $\mu_s=0.9$. The embedding error of each case is indicated in the title of the corresponding panel. Note that, qualitatively speaking, the SGE embedding appears to have better clustering of similar digits than that of Isomap.}
\label{fig:handwring_2468}
\end{figure}
\section{Conclusion}\label{sec:conclusion}
Nonlinear dimensionality reduction methods can produce unfaithful embeddings in the presence of high noise in the data. In order to obtain a faithful embedding for noisy data, some smoothing procedure should be performed during the embedding. With this idea in mind, herein we introduced a novel nonlinear dimensionality reduction framework using smooth geodesics that emphasizes the underlying smoothness of the manifold. Our method begins by searching for the nearest neighbors of each point using a $\delta$-nearest neighbor search \cite{friedman1977algorithm}. Then, we create a weighted graph by representing all of the points as nodes and joining neighboring nodes with edges weighted by their Euclidean distances. For each pair of nodes in the graph, we create a geodesic \cite{tenenbaum2000global}, defined as the shortest path between the given nodes and generated using Floyd's algorithm \cite{cormen2009introduction}. We fit each such geodesic with a smoothing spline (called a smooth geodesic) with smoothing multiplier $\mu_s$ and spline threshold $\nu$ \cite{reinsch1967smoothing, de1972calculating}. The lengths of these splines are treated as manifold distances between the corresponding points. Finally, we use the classic MDS method on the distance matrix of smooth geodesics to determine the embedding dimension and perform the embedding.
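The first steps of this pipeline (neighbor graph and raw geodesic distances) can be sketched as follows; the spline-smoothing and MDS steps are omitted, and the helper names are ours, not from the paper's code:

```python
import math

# Sketch: build a delta-nearest-neighbor graph with Euclidean edge weights,
# then compute all-pairs shortest paths (the raw geodesics) with the
# Floyd-Warshall algorithm. In SGE these paths would then be fit with
# smoothing splines whose lengths feed into classical MDS.
def geodesic_distances(points, delta):
    n = len(points)
    d = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        d[i][i] = 0.0
        # connect each point to its delta nearest neighbors
        order = sorted(range(n), key=lambda j: math.dist(points[i], points[j]))
        for j in order[1:delta + 1]:
            w = math.dist(points[i], points[j])
            d[i][j] = min(d[i][j], w)
            d[j][i] = min(d[j][i], w)
    # Floyd-Warshall all-pairs shortest paths
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

# Four collinear points with delta=1: the end-to-end geodesic is the sum
# of the consecutive gaps.
pts = [(0.0,), (1.0,), (2.0,), (3.0,)]
print(geodesic_distances(pts, 1)[0][3])  # 3.0
```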
In SGE, the order of the spline fit is set to three, two, or one, depending on the spline threshold. Since sufficient smoothness and a low fitting error can be obtained with cubic smoothing splines, we first rely on a spline fit of order three. However, we observed that the smoothing spline routine in \cite{de1972calculating} fits very long cubic splines for some specific smoothing multipliers. Thus, if the length of a cubic smoothing spline does not satisfy the threshold, we reduce the order of the spline to the next lower order.
We first demonstrated the effectiveness of our technique on a synthetic dataset representing a section of a semi-sphere. We observed that the smoothing approach provides better performance than standard Isomap when embedding a noisy dataset. We also observed that the errors of both methods decrease as the neighborhood size increases [Fig.~\ref{fig:comp_embedding}(a) and (b)]. However, when the neighborhood size is small, say $\delta=2$, SGE has clear performance advantages over Isomap for noisy data when sufficient smoothness is employed [Fig.~\ref{fig:comp_embedding}(c)]. The spherical dataset also demonstrated that SGE is more robust to sparse sampling than Isomap [Fig.~\ref{fig:analysis_n}(a)]. Moreover, while increasing noise in the data always appears to reduce the performance of the embedding, irrespective of the method used, we see that Isomap is highly affected by increasing noise while SGE, with a judicious choice of smoothing multiplier, is more robust [Fig.~\ref{fig:analysis_n}(b)].
We also studied two standard benchmark data sets, face images and handwritten digit images, and found that SGE provided similarly superior performance on noisy versions of those data sets. In particular, for the digit classification task, we observed that SGE provides qualitatively superior clustering of similar digits in the presence of noise. As future work, we will quantify the classification performance of the low dimensional nonlinear embedding using a variety of standard supervised machine learning techniques.
The NDR method that we introduced here ensures better performance and preserves the topology of the manifold by emphasizing the smoothness of the manifold when embedding noisy data. This method is an extension of the well-known NDR method Isomap, in which we replaced the geodesics with smoothing splines. In the future, we plan to examine such techniques in more generality. For example, one can imagine generalizing Isomap to the case where geodesics are not a good approximation of long manifold distances. In such a case, one can attempt to treat the long manifold distances as unknown and employ matrix completion techniques on distance matrices in which some entries are not observed.
\ifCLASSOPTIONcompsoc
\section*{Acknowledgments}
\else
\section*{Acknowledgment}
\fi
The authors would like to thank Chen Zou for the support given in coding,
and would also like to thank the NSF XSEDE Jetstream
\cite{stewart2015jetstream,towns2014xsede}, under allocation TG-DMS160019, for
support of this work.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
The Prismatic Speaker Sphere [SOURCE] is not just unique in letting you tune its color to suit your music mood; it is also a uniquely designed wireless spherical speaker packed with prismatic LED lights that make it even more interesting.
This spherical speaker is equipped with 12 internal LEDs that allow the owner to set the lights to glow in thousands of different shade cycles, and even to flash the LEDs in time to the music.
This wireless spherical speaker can easily be paired with the owner's favorite Bluetooth devices from up to 30 feet away, and it is perfect for streaming music because it is capable of delivering clear sound even at full volume.
The Prismatic Speaker Sphere is splash-resistant, being made of sturdy polyethylene plastic, which makes it the perfect outdoor speaker for your next outdoor party. Best of all, it comes with an infrared remote control, so users can easily adjust the styles they need, especially when it comes to pumping up the party.
Check [THIS PAGE] for some other important information.
Meanwhile the powerful Jesuits were transforming the transcendent faith of countless Catholics into a theology of humanism and socio-political revolution. This happened after the fateful decision was implemented worldwide to change fundamentally the focus of Holy Mass by pivoting the worshipping priest away from the CREATOR, turning his back to the Eternal Trinitarian God YHWH to face God's temporal CREATION: Man. Humanism invaded the centerpiece of the Catholic faith!
Consequently, the Table of the Lord Jesus Christ on which the eternal Sacrifice of the only begotten Son of God is celebrated in our dimension of time, was moved into timelessness, away from Rome. The Dispenser of the Grace of Redemption for humankind, the indwelling, ever present and all-powerful Essence of the Divine Trinity, accompanying the Christian believers every day, was fatally sinned against. And it is written that this very transgression can never be forgiven. The Holy Spirit of God that indwells those He chooses, is to be worshipped now in the eternal dimension of spirit and truth, (John 4,21-24) in prayer, meditation and contemplation, as all other avenues are poisoned. May He, the Holy Spirit of God guide us to His Table of the Spirit. It is prepared and ready to receive all the faithful believers.
Q: Boundary conditions of the Electric field of a conducting transmission line

Solving Maxwell's Equations for the Electric Field of a linear-isotropic cylindrical conductor leads to the electric field being proportional to the Bessel function of the first kind (see below).
Maxwell's Equations:
\$ Div[E] = \frac{\rho}{\epsilon_{ps}} \to 0 \$ (a reasonable approximation: no free charge in the conductor)
\$ Div[B] = 0 \$
\$ Curl[E] = -\frac{d}{dt}[B] \$ ... (i)
\$ Curl[B] = \mu J + \mu\epsilon_{ps}\frac{d}{dt}[E] \$ ... (ii)
Take Curl of (i):
\$ Curl[Curl[E]] = Grad[Div[E]] - Laplacian[E] = -\frac{d}{dt}[Curl[B]] \$
Since Div[E] ~ 0,
\$ Laplacian[E] = \frac{d}{dt}[Curl[B]] = \frac{d}{dt}[\mu J + \mu\epsilon_{ps}\frac{d}{dt}[E]] \$
The assumption of a linear, isotropic material prompts us to treat eps, mu, and rho as frequency-dependent scalars (rather than tensors). Also, we make the assumption of J = rho*E, a rough approximation of Ohm's law that follows from the kinetic theory of charges in the context of the Drude model.
So: \$ Laplacian[E] = \frac{d}{dt}[\mu\rho E + \mu\epsilon_{ps}\frac{d}{dt}[E]] \$
which is a basic wave equation with a damping term \$ \mu\rho \frac{d}{dt}[E] \$....
Representing Laplacian[E] in cylindrical coordinates to suit the geometry of the line, and solving the PDE by separation of variables (E[r,z,t] = R[r]*T[t]*Z[z]) leads to:
\$ R[r] = BesselJ[0,\lambda r] \$ (which is the r-spatial component of E).
In math/physics, it is here that we would typically quantize lambda using a boundary condition on the E-field at r = a (a being the radius of the line), and then employ some initial conditions to derive the required Bessel-Fourier series representation of the solution. Does this act of assigning boundary condition(s) apply to such wire systems, and if so, what form of boundary condition(s) on E[a] should we use? >> Are the standard boundary conditions of electromagnetism sufficient?
In mathematics, a boundary condition is typically specified and a Bessel-fourier series is then used to determine the nature of the field in the line. In electrical engineering, what type of boundary conditions are applied to a conductor carrying a current in the direction of the E-field?
A: If you assume the conductive elements are perfect (\$\rho=0\$), then the boundary condition is that the E field tangent to the surface goes to 0. This is often called "perfect conductor boundary conditions".
If you want to model a real conductive material (\$\rho > 0\$), then you will have to model the fields and currents inside the conductive region also. The boundary condition will be that the tangential component of \$\vec{E}\$ is continuous across the boundary.
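As a quick numerical illustration of the perfect-conductor case (this is an addition, not part of the original answer): requiring the tangential field \$E_z \propto J_0(\lambda r)\$ to vanish at \$r = a\$ quantizes \$\lambda\$ at \$\lambda_k = j_{0,k}/a\$, where \$j_{0,k}\$ is the \$k\$-th zero of \$J_0\$. The helper names below are ours; \$J_0\$ is evaluated from its power series and the first zero is bracketed by bisection.

```python
import math

def J0(x):
    """Bessel function J0 via its power series (adequate for small x)."""
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x * x) / (4.0 * k * k)  # ratio of consecutive series terms
        total += term
    return total

def first_zero(f, lo, hi, tol=1e-12):
    """Bisection for a sign change of f on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

j01 = first_zero(J0, 2.0, 3.0)  # first zero of J0, approx. 2.404826
a = 0.01                        # example conductor radius in meters (assumed)
lam1 = j01 / a                  # lowest allowed radial eigenvalue
print(round(j01, 6))            # 2.404826
```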
How to Read Dimensions
By Samuel Markings
Dimensions in blueprints represent the size of an object in two- or three-dimensional space. For example, a dimension of a rectangular room on a blueprint, 14' 11" X 13' 10", equates to a room size of 14 feet 11 inches wide by 13 feet 10 inches long. Dimensions are expressed as width by length by height or depth in three-dimensional space.
Object Measurements
A three-dimensional desk, for example, may be expressed as 25" X 82" X 39", which means the desk is 25 inches wide by 82 inches long and 39 inches tall. A window dimension on blueprints is treated as two-dimensional space. For example, a window that is 24 inches wide by 36 inches tall would be written as 24" X 36". In the manufacturing industry, this standard window size is referred to as a 2030, or 2 feet by 3 feet. In a rectangular swimming pool, the dimension might read 16' X 30' X 9', or 16 feet wide by 30 feet long and 9 feet deep.
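As a quick worked example (not part of the original article), the room dimension mentioned earlier can be converted to decimal feet and multiplied out to get floor area:

```python
# Convert a feet-and-inches measurement to decimal feet, then multiply
# width by length to get floor area in square feet.
def to_feet(feet, inches):
    return feet + inches / 12

width = to_feet(14, 11)   # 14' 11"
length = to_feet(13, 10)  # 13' 10"
area = width * length
print(round(area, 1))     # 206.3 square feet
```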
Determining Dimensions
In both physics and mathematics, a dimension represents the least number of coordinates required to identify any point within it. A line represents one dimension, whereas a square represents two dimensions and a cube applies to three-dimensional space. If an object is circular and flat, the dimensions are normally quoted in terms of a single measurable factor called the radius. The radius of a circle is the distance between its center and outer edge.
05 Jun, 2019 07:00
Phase 2 AEM survey: data processing results
RNS Number : 1464B
Kavango Resources PLC
("Kavango" or "the Company")
Kavango Resources plc (LSE: KAV), the exploration group listed on the Standard List segment of the main market of the London Stock Exchange and targeting the discovery of world class mineral deposits in Botswana, is pleased to announce that processing of the data from Phase 2 of the airborne electromagnetic (AEM) survey carried out by SkyTEM over the northern section of the Company's prospecting licences that cover much of the Kalahari Suture Zone (KSZ) structure in southwest Botswana has now been completed.
· The results of the data processing and modelling have shown that SkyTEM's new low frequency AEM system has been able to penetrate to an average depth of just under 400m, below the conductive cover of Kalahari sand and Karoo sediments.
· The processing was carried out by the Danish based Aarhus Geophysics Ltd using state of the art 3D modelling software. The geological modelling allows Kavango's technical team to identify lithological units, structures, zones of conductivity and potential ore bodies on a line by line basis.
· Each of the lines flown (500m apart) can be viewed as a vertical section with depth penetration down to nearly 700m in some cases. It also provides for horizontal "slicing" at any depth. This enables Kavango's geologists to move from line to line, at any depth, to view the conductive targets and their associated host lithologies.
· The model identifies a number of conductors closely associated with gabbroic*** intrusives and faults.
· The added depth penetration of the new SkyTEM 12.5Hz system combined with the advanced data processing undertaken by Aarhus Geophysics has produced results greatly superior to those from the Phase 1 AEM survey.
· One of the advantages of the new processing technique is its ability to discriminate between false (artefact) conductors created by the software and "real" conductors that are generated by the geology.
· To date 45 conductive anomalies have been identified from the Phase 2 survey. Of these, 24 conductors are detected on single flight lines. The remaining conductors are traceable over several lines. The longest conductor is 4.5km long.
· The most compelling conductors will now be surveyed by ground based resistivity techniques (CSAMT**) together with detailed soil geochemistry. The immediate objective is to identify the best 3 or 4 targets for drilling, to be carried out as soon as ground follow up is completed.
Michael Foster, Chief Executive Officer of Kavango Resources, commented:
"The combination of SkyTEM's new high resolution 12.5Hz EM surveying system together with the advanced data processing provided by Aarhus Geophysics represents a major breakthrough in Kavango's exploration for massive sulphide orebodies lying below Kalahari and Karoo cover in the northern part of the KSZ. The ability of our geologists to view the rocks using 3D models provides them with exploration tools that will allow for the identification of mineralized zones which have remained completely hidden until now.
The data processing has identified numerous conductors of interest, the longest of which is 4.5km long. The most compelling conductors will now be surveyed by ground based resistivity techniques together with detailed soil geochemistry. The immediate objective is to identify the best 3 or 4 targets for drilling, to be carried out as soon as ground follow up is completed."
Images from the data derived from the Phase 2 survey and processed by Aarhus Geophysics can be viewed on the Company's website (see below).
Further information in respect of the Company and its business interests is provided on the Company's website at www.kavangoresources.com and on Twitter at #KAV.
City & Westminster Corporate Finance LLP
Nicola Baldwin
SI Capital Limited (Broker) +44 1483 41 3500
Nick Emerson
Kavango's 100% subsidiary in Botswana, Kavango Minerals (Pty) Ltd, is the holder of 15 prospecting licences covering 9,231 km2 of ground, including most of the 450km long KSZ magnetic anomaly in the southwest of the country along which Kavango is exploring for Cu-Ni-PGE rich sulphide orebodies. This large area, which is entirely covered by Cretaceous and post-Cretaceous Kalahari sediments, has not previously been explored using modern techniques.
The area covered by Kavango's KSZ licences displays a geological setting with distinct similarities to that hosting the World Class Norilsk Ni-Cu-PGE orebodies in Siberia.
Exploration Model:
Kavango's exploration model is based upon the search for magmatic massive sulphide orebodies buried beneath up to 200m of overburden. The identification of drill targets follows a carefully constructed exploration program specifically developed by the Company for exploration in areas covered by Kalahari and Karoo sediments and sands.
The exploration program is initiated by identifying the location of magmatic intrusive rocks from an analysis of the regional magnetic surveys published by the Botswana Government. This is followed by an airborne electro-magnetic survey (AEM) carried out over the magnetic anomalies that have signatures indicating the presence of intrusive rocks at depth. By using the latest generation of low frequency helicopter-borne EM surveying, conductors lying below the Kalahari/Karoo cover can be identified for further investigation. These conductors can be tested on surface by very high sensitivity soil sampling*, which can detect metal ions transported from buried, metal rich massive sulphide deposits associated with the emplacement of magmatic intrusive rocks.
Kavango uses a ground based geophysical technique known as Controlled Source Audio frequency Magneto Tellurics (CSAMT)** to identify the exact location of the conductors. The shape, orientation and depth of the conductors will determine if the conductor should be drilled. The presence of a metal in soil anomaly is also used to prioritise the conductors.
The next phase of the exploration involves the drilling of the conductor to determine the presence of sulphide mineralisation and its metal component (discovery). This is followed by the evaluation of the discovery, which will determine whether the deposit is large enough and rich enough to make an economically viable mine (feasibility).
*Kavango geologists have pioneered a high resolution soil sampling technique to detect ultra-fine metal particles which have been transported in solution from considerable depths of burial to the surface by capillary action and transpiration. Evaporation leaves the metal ions as accumulations within a surface "duricrust" which is then sampled and analysed. Zinc, which is the most mobile of the base metal elements (i.e. goes into solution easily) acts as a pathfinder to mineralization at depth.
**Massive sulphide (base metal) deposits can be detected by CSAMT because they conduct electricity easily (conductors) as opposed to silicate wall rocks (resistive).
***Gabbro is a dense mafic intrusive rock, usually formed in an oceanic crust environment, when molten mass cools and crystallises at depth, forming a coarse grained, dark coloured rock, similar in its chemical composition to basalt.
****************************ENDS**************************************
This information is provided by RNS, the news service of the London Stock Exchange. RNS is approved by the Financial Conduct Authority to act as a Primary Information Provider in the United Kingdom. Terms and conditions relating to the use and distribution of this information may apply. For further information, please contact [email protected] or visit www.rns.com.
UPDUSVBRKSANRAR
Q: Preimage of codimension one subvarieties under a dominant map

Let $f:X\to Y$ be a dominant morphism of projective varieties over an algebraically closed field. If $Z\subset Y$ is a codimension $1$ subvariety, do the irreducible components of $f^{-1}(Z)$ necessarily have codimension $1$ in $X$?
I am asking about codimension $1$ since I know codimension $1$ usually behaves the best.
If this is true, is there an easy explanation? If this is almost true, what is the correction?
A: EDIT: I rewrote this answer with more details.
You need some more assumptions; the easiest one I know is to require that the dimension of the fiber $f^{-1}(\eta)$ of the generic point $\eta \in Y$ is equal to $\dim(X) - \dim(Y)$.
(1) Let $f : X \to Y$ be a morphism of schemes over a base scheme $S$. Recall that if $X \to S$ is universally closed, then $f : X \to Y$ is also universally closed. In fact we can factor $f$ through the closed immersion $X \hookrightarrow X \times_S Y$ and the projection $X \times_S Y \to Y$; closed immersions are proper, hence a priori universally closed, and the second is a base change of a universally closed morphism, hence also universally closed. See (Stacks, 01W6).
(2) Suppose the structural morphism $X \to S$ is universally closed. Any $S$-morphism $f : X \to Y$ is closed by (1), and hence surjective if it is dominant.
(3) The following proposition is from (EGA, IV_2, 5.6.6):
Let $X$ and $Y$ be irreducible schemes, $Y$ locally noetherian, $f : X \to Y$ dominant and locally of finite type. Let $e = \dim(f^{-1}(\eta))$ be the dimension of the generic fiber ($\eta \in Y$ being the generic point). Then one has the inequality
$$ \dim(X) \leq \dim(Y) + e. $$
Further, if $Y$ is universally catenary then one has equality if and only if
$$ \dim(Y) = \sup_{y \in f(X)} \dim(\mathscr{O}_{Y,y}). $$
Note that this clearly holds when $f$ is surjective (EGA, IV_2, 5.1.4).
(4) Let $S$ be a locally noetherian and universally catenary scheme, and let $X$ and $Y$ be schemes locally of finite type over $S$, with $X$ proper over $S$. Let $Z \subset Y$ be a closed irreducible subscheme, let $(W_\alpha)$ be the irreducible components of the inverse image $f^{-1}(Z)$, and let $f_\alpha : W_\alpha \to Z$ denote the restrictions of $f$ to $W_\alpha$. Since $X$ and $Y$ are locally of finite type over $S$, $W_\alpha$ and $Z$ are locally of finite type over $S$ and it follows that $f_\alpha : W_\alpha \to Z$ are locally of finite type (Stacks, 01T8). Also, $Z$ is universally catenary (Stacks, 02J9) and locally noetherian (Stacks, 01T6). Since $W_\alpha$ is proper over $S$, $f_\alpha$ is surjective by (2). Hence by (3) one sees
$$ \dim(W_\alpha) = \dim(Z) + e, $$
where $e$ is the dimension of the fiber of the generic point $\eta \in Y$, and in particular the inverse image $f^{-1}(Z)$ is purely of dimension $\dim(Z) + e$.
(5) If $X$ and $Y$ are further biequidimensional (EGA, 0_IV, 14.3.3), i.e. one has the formula
$$ \dim(W) + \mathrm{codim}(W, X) = \dim(X) $$
for every closed subspace $W \subset X$ (and likewise for $Y$), then one can rewrite (4) as
$$ \mathrm{codim}(f^{-1}(Z), X) = \mathrm{codim}(Z, Y) + m - n - e, $$
where $m$ and $n$ are the dimensions of $X$ and $Y$, respectively.
(6) In particular, when $S = \mathrm{Spec}(k)$ is the spectrum of a field $k$, then $X$ and $Y$ are biequidimensional (EGA, IV_2, 5.2.1) and the formula of (5) holds. More generally, $S$ only needs to be Jacobson (and locally noetherian, universally catenary); this is not in EGA but I believe it's buried in some form in the Stacks project (see the comments on (Stacks, 02S2)).
(7) Concluding, we have seen that in good cases, and in particular in your case, when the dimension of the generic fiber is equal to $\dim(X) - \dim(Y)$, the codimension of the inverse image $f^{-1}(Z)$ in $X$ is equal to the codimension of $Z$ in $Y$.
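As a quick sanity check of the formula in (5) (this example is an addition, not part of the original answer):

```latex
% Let $f : X \to Y$ be the blow-up of a smooth projective surface at a
% point $p$, and let $Z \subset Y$ be a curve through $p$. The generic
% fiber is a single point, so $e = 0 = \dim(X) - \dim(Y)$, and with
% $m = n = 2$ the formula of (5) gives
\[
\mathrm{codim}(f^{-1}(Z), X) = \mathrm{codim}(Z, Y) + 2 - 2 - 0 = 1,
\]
% matching the fact that $f^{-1}(Z)$ is the union of the proper transform
% of $Z$ and the exceptional curve, both of dimension $\dim(Z) + e = 1$.
```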
A: [This answer has been edited to discuss the general case.]
I will assume that variety means irreducible (otherwise you could work on individual irreducible components). Then $f:X \to Y$ is dominant by assumption, and has closed image since its source is projective, thus it is surjective.
Thus $f^{-1}(Z) \to Z$ is also surjective.
Now, as described in e.g. this MO answer (or in a Hartshorne exercise, maybe in Section 4 of Chapter II), for the proper map $f$, the function $y \mapsto \dim f^{-1}(y)$ is upper semicontinuous, so if $z \in Z$, then the dimensions of $f^{-1}(z)$ are at least $\dim X - \dim Y$. This implies that $f^{-1}(Z)$ contains components of
codimension $1$. (The intuition is just that we can add the dimension of $Z$ and of a typical fibre. One way to make this precise is to note that if every component of $f^{-1}(Z)$ were of codimension at least $2$, then since it dominates $Z$, we would see that
a generic fibre would be of dimension $\dim X - \dim Y -1$, whereas we already noted that every fibre has dimension at least $\dim X - \dim Y$.)
In general you can't do better than this, because there are morphisms of $3$-folds (just to take an example) which are birational, but in which the preimage of some particular point $y \in Y$ is a curve. Then if you take $Z$ to be a generic codim'n one subvariety passing through $y$, its preimage will be the union of a codim'n one subvariety of $X$ (the proper transform of $Z$) and the curve $f^{-1}(y)$. So in general you can't expect $f^{-1}(Z)$ to be equidimensional.
Note also that $f^{-1}(Z)$ can also contain multiple components of codimension $1$. (E.g. let $X \to Y$ be the blow up of a surface at a point, and let $Z$ be a curve that passes through the blown up point.)
Rereading the question, I see that the point of the question might be the equidimensionality, and so you might be interested in the counterexample involving $3$-folds. This MO question and its answers give one such example.
Another type of number which is difficult to understand, and which has a complicated relationship to reality, is the imaginary number denoted by i, which is equal to the square root of -1. Now, any negative or positive number multiplied by itself yields a positive, not a negative, number, so the square root of -1 cannot exist in the field of real numbers. (The square root of -3 can be written as √3·i, and so on.) But this kind of number has been defined in order to solve certain algebraic equations where roots of negative numbers appear. (They do so because the field of real numbers is not algebraically closed, i.e. the solutions lie in a different field to the coefficients.) A complex number consists of a real and an imaginary part and can be written in the form a+bi, where a and b are real numbers and i is the standard imaginary unit. So now a real number can be thought of as a special case of a complex number where b=0. A complex number can be plotted on a graph with the real part on the x-axis and the imaginary part on the y-axis. Using complex numbers means that we need never get stuck at solving tricky equations (the field of complex numbers is algebraically closed). They are very useful in many fields, but in the field of quantum physics they are not just useful, they are essential.
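A small illustration of the ideas above (standard library behavior, not from the original text): Python's complex type models numbers of the form a + bi directly, and cmath.sqrt handles negative arguments by returning an imaginary result.

```python
import cmath

z = complex(3, 4)      # a = 3, b = 4, i.e. the complex number 3 + 4i
print(z.real, z.imag)  # 3.0 4.0
print(cmath.sqrt(-1))  # 1j  (the imaginary unit i)
print(1j * 1j)         # (-1+0j), since i squared is -1
```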
With real numbers we can describe the geometry of solid, regular shapes like squares, cylinders, spirals and so on. Benoit Mandelbrot discovered (accidentally) that complex numbers could be used to describe far more intricate shapes. Consider the iteration z_{n+1} = z_n^2 + c, where z and c are complex numbers and n increases by 1 each time. This means that the output, squared and added to c, is fed back in as the input for the next step. Mandelbrot wanted to know for which values of c the magnitude of z_n would stop growing when the equation was applied an infinite number of times. He discovered that if the magnitude ever went above 2, then it would grow forever, but for the right values of c the result would simply oscillate between different magnitudes less than 2. He plotted these values with the help of a computer and was amazed to see a complex pattern which, when magnified, revealed a similar hidden pattern, and this pattern went on infinitely. He named this pattern a fractal. A fractal has the self-similarity property of having the same (irregular) shape at all levels of magnification. Mandelbrot soon realised that fractal shapes appear everywhere in nature: a mountain range, for example, or a coastline, lightning, or systems of blood vessels. These shapes cannot be predicted exactly in their details, but the general shape can be approximated. Since then a variety of fractals have been discovered, and some of them lie at the heart of a new branch of mathematics called chaos theory. Fractals are also used as the basis for digital art and animation created with the help of fractal-generating software.
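The membership test described above can be sketched in a few lines (the iteration cap of 100 and the function name are my choices; a point that merely hasn't escaped yet is only "likely" in the set):

```python
def escapes(c, max_iter=100):
    """Iterate z -> z*z + c from z = 0; report whether |z| ever exceeds 2."""
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return True    # once past magnitude 2, the orbit grows forever
    return False           # still bounded after max_iter steps: likely in the set

# c = 0 and c = -1 give bounded orbits (0, 0, ... and -1, 0, -1, 0, ...):
assert not escapes(0) and not escapes(-1)
# c = 1 gives 1, 2, 5, 26, ... and c = 0.5 also blows up:
assert escapes(1) and escapes(0.5)
```

Colouring each point c of the complex plane by how quickly it escapes is exactly what fractal-rendering software does to draw the familiar Mandelbrot image.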
Another interesting number (this time, real!) is the constant denoted by c, the speed of light in vacuum (approximately 186,282 miles per second, which is essentially the speed limit of the universe). That the speed of light is constant regardless of the frame of reference of the observer is a very strange phenomenon that we are not normally aware of, because light is so incredibly fast. If you are travelling on the highway in a car, for example, at 60 mph, your speed relative to stationary objects like the trees you pass is 60 mph, but your speed relative to a football on the next car seat is 0 mph; likewise, your speed relative to a friend driving beside you at the same 60 mph is 0 mph. If he is travelling at 60 mph in the opposite direction, then you will see him driving away from you at 120 mph. If you shot a bullet from your car(!), the total velocity of the bullet would be the speed at which you are travelling plus the speed of the bullet. You would naturally expect, then, that if you switched on the headlights of the car, the total speed of the light would be the speed of the car plus the speed of light, but this is not so. This is actually very weird, and it is easier to visualise (that it is weird!) if you imagine yourself travelling in a rocket at a speed close to that of light. You would always see light travel at c regardless of the speed you yourself are travelling at, and it is the same even if you were travelling in the opposite direction! How can this be?
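The resolution is that velocities don't simply add. A minimal sketch of the special-relativistic composition rule for parallel velocities, u ⊕ v = (u + v) / (1 + uv/c²) — the formula is standard, the function name is mine:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def add_velocities(u, v):
    """Relativistic composition of two parallel velocities."""
    return (u + v) / (1 + u * v / C**2)

# At everyday speeds the correction is immeasurably small,
# so velocities appear to add classically:
assert abs(add_velocities(30.0, 30.0) - 60.0) < 1e-6

# ...but nothing composes past c: half lightspeed plus half lightspeed
# is 0.8c, and headlight beams from a fast rocket still travel at c.
assert abs(add_velocities(0.5 * C, 0.5 * C) - 0.8 * C) < 1e-3
assert abs(add_velocities(0.9 * C, C) - C) < 1e-3
```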
The special theory of relativity also predicts length contraction (a decrease in the measured length of moving objects). The general theory of relativity predicts (among other things) gravitational time dilation: gravity influences the passage of time. The more massive an object is, the slower time runs near it; the further away you are from the object, the faster time runs. This notion of space, time and velocity being interdependent (whereas before Einstein, time was thought to be absolute) has forced us to think of the universe as a space-time continuum in four dimensions: three spatial and one of time.
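Both special-relativistic effects are governed by the same number, the Lorentz factor γ = 1/√(1 − v²/c²): moving clocks run slow by γ, and moving lengths shrink by γ. A short numerical sketch (standard formulas; the variable names are mine):

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def gamma(v):
    """Lorentz factor at speed v: the time-dilation / length-contraction ratio."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

g = gamma(0.6 * C)
assert abs(g - 1.25) < 1e-9   # at 0.6c, gamma is exactly 5/4

# One hour elapsed on the moving ship is seen as 1.25 hours by a
# stationary observer...
observer_hours = g * 1.0
# ...and a 100 m ship is measured as only 80 m from the ground:
measured_length = 100.0 / g

assert abs(observer_hours - 1.25) < 1e-9
assert abs(measured_length - 80.0) < 1e-6
```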
A NOAA Spherical Display System at the Smithsonian National Zoological Park
Smithsonian Institution / National Zoological Park · Washington, District of Columbia
The Smithsonian National Zoological Park (SNZP) in Washington, DC is integrating the NOAA Science on a Sphere(SOS) spherical display system into SNZP's Amazonia Science Gallery (ASG). The SOS system at ASG will be seen in person by tens of thousands of visitors each year and potentially by millions more through electronic outreach programs. The SOS system will become an integral part of the exhibit and will be used for both informal and formal science education programs at the National Zoo.
PI: Miles Roberts
Global Connections: Science on a Sphere
Boonshoft Museum of Discovery · Dayton, Ohio
The Boonshoft Museum of Discovery/Discovery Zoo in Dayton, OH has developed and implemented a new, permanent exhibition featuring NOAA's Science on a Sphere. The exhibition builds environmental literacy among public visitors, K-12 students, and the myriad of groups that the Museum reaches. A significant portion of the audience is from underrepresented groups. A special display within the exhibition focuses on the Mississippi Watershed and how it is related to the health of the oceans. The exhibition also includes three interactive stations where visitors can engage in hands-on activities related to NOAA datasets.
PI: Susan Pion
State: Ohio County: Montgomery District: OH10
// MIT License:
//
// Copyright (c) 2010-2013, Joe Walnes
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
// AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
/**
* Smoothie Charts - http://smoothiecharts.org/
* (c) 2010-2013, Joe Walnes
*
* v1.0: Main charting library, by Joe Walnes
* v1.1: Auto scaling of axis, by Neil Dunn
* v1.2: fps (frames per second) option, by Mathias Petterson
* v1.3: Fix for divide by zero, by Paul Nikitochkin
* v1.4: Set minimum, top-scale padding, remove timeseries, add optional timer to reset bounds, by Kelley Reynolds
* v1.5: Set default frames per second to 50... smoother.
* .start(), .stop() methods for conserving CPU, by Dmitry Vyal
* options.interpolation = 'bezier' or 'line', by Dmitry Vyal
* options.maxValue to fix scale, by Dmitry Vyal
* v1.6: minValue/maxValue will always get converted to floats, by Przemek Matylla
* v1.7: options.grid.fillStyle may be a transparent color, by Dmitry A. Shashkin
* Smooth rescaling, by Kostas Michalopoulos
* v1.8: Set max length to customize number of live points in the dataset with options.maxDataSetLength, by Krishna Narni
* v1.9: Display timestamps along the bottom, by Nick and Stev-io
* (https://groups.google.com/forum/?fromgroups#!topic/smoothie-charts/-Ywse8FCpKI%5B1-25%5D)
* Refactored by Krishna Narni, to support timestamp formatting function
* v1.10: Switch to requestAnimationFrame, removed the now obsoleted options.fps, by Gergely Imreh
* v1.11: options.grid.sharpLines option added, by @drewnoakes
* Addressed warning seen in Firefox when seriesOption.fillStyle undefined, by @drewnoakes
* v1.12: Support for horizontalLines added, by @drewnoakes
* Support for yRangeFunction callback added, by @drewnoakes
* v1.13: Fixed typo, reported by @alnikitich in issue #32
*/
function TimeSeries(options) {
options = options || {};
options.resetBoundsInterval = options.resetBoundsInterval || 3000; // Reset the max/min bounds after this many milliseconds
options.resetBounds = options.resetBounds === undefined ? true : options.resetBounds; // Enable or disable the resetBounds timer
this.options = options;
this.data = [];
this.maxValue = Number.NaN; // The maximum value ever seen in this time series.
this.minValue = Number.NaN; // The minimum value ever seen in this time series.
// Start a resetBounds Interval timer desired
if (options.resetBounds) {
this.boundsTimer = setInterval((function(thisObj) { return function() { thisObj.resetBounds(); } })(this), options.resetBoundsInterval);
}
}
// Reset the min and max for this timeseries so the graph rescales itself
TimeSeries.prototype.resetBounds = function() {
this.maxValue = Number.NaN;
this.minValue = Number.NaN;
for (var i = 0; i < this.data.length; i++) {
this.maxValue = !isNaN(this.maxValue) ? Math.max(this.maxValue, this.data[i][1]) : this.data[i][1];
this.minValue = !isNaN(this.minValue) ? Math.min(this.minValue, this.data[i][1]) : this.data[i][1];
}
};
TimeSeries.prototype.append = function(timestamp, value) {
this.data.push([timestamp, value]);
this.maxValue = !isNaN(this.maxValue) ? Math.max(this.maxValue, value) : value;
this.minValue = !isNaN(this.minValue) ? Math.min(this.minValue, value) : value;
};
function SmoothieChart(options) {
// Defaults
options = options || {};
options.grid = options.grid || {};
options.grid.fillStyle = options.grid.fillStyle || '#000000';
options.grid.strokeStyle = options.grid.strokeStyle || '#777777';
options.grid.lineWidth = typeof(options.grid.lineWidth) === 'undefined' ? 1 : options.grid.lineWidth;
options.grid.sharpLines = !!options.grid.sharpLines;
options.grid.millisPerLine = options.grid.millisPerLine || 1000;
options.grid.verticalSections = typeof(options.grid.verticalSections) === 'undefined' ? 2 : options.grid.verticalSections;
options.millisPerPixel = options.millisPerPixel || 20;
options.maxValueScale = options.maxValueScale || 1;
// NOTE there are no default values for 'minValue' and 'maxValue'
options.labels = options.labels || { fillStyle:'#ffffff' };
options.interpolation = options.interpolation || "bezier";
options.scaleSmoothing = options.scaleSmoothing || 0.125;
options.maxDataSetLength = options.maxDataSetLength || 2;
options.timestampFormatter = options.timestampFormatter || null;
options.horizontalLines = options.horizontalLines || [];
this.options = options;
this.seriesSet = [];
this.currentValueRange = 1;
this.currentVisMinValue = 0;
}
// Based on http://inspirit.github.com/jsfeat/js/compatibility.js
SmoothieChart.AnimateCompatibility = (function() {
var lastTime = 0,
requestAnimationFrame = function(callback, element) {
var requestAnimationFrame =
window.requestAnimationFrame ||
window.webkitRequestAnimationFrame ||
window.mozRequestAnimationFrame ||
window.oRequestAnimationFrame ||
window.msRequestAnimationFrame ||
function(callback, element) {
var currTime = new Date().getTime();
var timeToCall = Math.max(0, 16 - (currTime - lastTime));
var id = window.setTimeout(function() {
callback(currTime + timeToCall);
}, timeToCall);
lastTime = currTime + timeToCall;
return id;
};
return requestAnimationFrame.call(window, callback, element);
},
cancelAnimationFrame = function(id) {
var cancelAnimationFrame =
window.cancelAnimationFrame ||
function(id) {
clearTimeout(id);
};
return cancelAnimationFrame.call(window, id);
};
return {
requestAnimationFrame: requestAnimationFrame,
cancelAnimationFrame: cancelAnimationFrame
};
})();
SmoothieChart.prototype.addTimeSeries = function(timeSeries, options) {
this.seriesSet.push({timeSeries: timeSeries, options: options || {}});
};
SmoothieChart.prototype.removeTimeSeries = function(timeSeries) {
// seriesSet holds {timeSeries, options} wrappers, so indexOf on the raw
// series would always miss; find the wrapper whose .timeSeries matches.
for (var i = 0; i < this.seriesSet.length; i++)
if (this.seriesSet[i].timeSeries === timeSeries) { this.seriesSet.splice(i, 1); break; }
};
SmoothieChart.prototype.streamTo = function(canvas, delay) {
this.canvas = canvas;
this.delay = delay;
this.start();
};
SmoothieChart.prototype.start = function() {
if (!this.frame) {
this.animate();
}
};
SmoothieChart.prototype.animate = function() {
this.frame = SmoothieChart.AnimateCompatibility.requestAnimationFrame(this.animate.bind(this));
this.render(this.canvas, new Date().getTime() - (this.delay || 0));
};
SmoothieChart.prototype.stop = function() {
if (this.frame) {
SmoothieChart.AnimateCompatibility.cancelAnimationFrame( this.frame );
delete this.frame;
}
};
// Sample timestamp formatting function
SmoothieChart.timeFormatter = function(dateObject) {
function pad2(number){return (number < 10 ? '0' : '') + number};
return pad2(dateObject.getHours())+':'+pad2(dateObject.getMinutes())+':'+pad2(dateObject.getSeconds());
};
SmoothieChart.prototype.render = function(canvas, time) {
var canvasContext = canvas.getContext("2d");
var options = this.options;
var dimensions = {top: 0, left: 0, width: canvas.clientWidth, height: canvas.clientHeight};
// Save the state of the canvas context, any transformations applied in this method
// will get removed from the stack at the end of this method when .restore() is called.
canvasContext.save();
// Round time down to pixel granularity, so motion appears smoother.
time = time - time % options.millisPerPixel;
// Move the origin.
canvasContext.translate(dimensions.left, dimensions.top);
// Create a clipped rectangle - anything we draw will be constrained to this rectangle.
// This prevents the occasional pixels from curves near the edges overrunning and creating
// screen cheese (that phrase should need no explanation).
canvasContext.beginPath();
canvasContext.rect(0, 0, dimensions.width, dimensions.height);
canvasContext.clip();
// Clear the working area.
canvasContext.save();
canvasContext.fillStyle = options.grid.fillStyle;
canvasContext.clearRect(0, 0, dimensions.width, dimensions.height);
canvasContext.fillRect(0, 0, dimensions.width, dimensions.height);
canvasContext.restore();
// Grid lines....
canvasContext.save();
canvasContext.lineWidth = options.grid.lineWidth;
canvasContext.strokeStyle = options.grid.strokeStyle;
// Vertical (time) dividers.
if (options.grid.millisPerLine > 0) {
for (var t = time - (time % options.grid.millisPerLine); t >= time - (dimensions.width * options.millisPerPixel); t -= options.grid.millisPerLine) {
canvasContext.beginPath();
var gx = Math.round(dimensions.width - ((time - t) / options.millisPerPixel));
if (options.grid.sharpLines)
gx -= 0.5;
canvasContext.moveTo(gx, 0);
canvasContext.lineTo(gx, dimensions.height);
canvasContext.stroke();
// To display timestamps along the bottom
// May have to adjust millisPerLine to display non-overlapping timestamps, depending on the canvas size
if (options.timestampFormatter){
var tx=new Date(t);
// Formats the timestamp based on user specified formatting function
// SmoothieChart.timeFormatter function above is one such formatting option
var ts = options.timestampFormatter(tx);
// Reserve space so timestamps don't overlap the min-value label drawn later
// at the bottom-right; minValueString isn't computed yet at this point, so
// measure the current smoothed minimum instead.
var minLabelWidth = canvasContext.measureText(parseFloat(this.currentVisMinValue).toFixed(2)).width;
var txtwidth = (canvasContext.measureText(ts).width / 2) + minLabelWidth + 4;
if (gx<dimensions.width - txtwidth){
canvasContext.fillStyle = options.labels.fillStyle;
// Insert the time string so it doesn't overlap on the minimum value
canvasContext.fillText(ts, gx-(canvasContext.measureText(ts).width / 2), dimensions.height-2);
}
}
canvasContext.closePath();
}
}
// Horizontal (value) dividers.
for (var v = 1; v < options.grid.verticalSections; v++) {
var gy = Math.round(v * dimensions.height / options.grid.verticalSections);
if (options.grid.sharpLines)
gy -= 0.5;
canvasContext.beginPath();
canvasContext.moveTo(0, gy);
canvasContext.lineTo(dimensions.width, gy);
canvasContext.stroke();
canvasContext.closePath();
}
// Bounding rectangle.
canvasContext.beginPath();
canvasContext.strokeRect(0, 0, dimensions.width, dimensions.height);
canvasContext.closePath();
canvasContext.restore();
// Calculate the current scale of the chart, from all time series.
var maxValue = Number.NaN;
var minValue = Number.NaN;
for (var d = 0; d < this.seriesSet.length; d++) {
// TODO(ndunn): We could calculate / track these values as they stream in.
var timeSeries = this.seriesSet[d].timeSeries;
if (!isNaN(timeSeries.maxValue)) {
maxValue = !isNaN(maxValue) ? Math.max(maxValue, timeSeries.maxValue) : timeSeries.maxValue;
}
if (!isNaN(timeSeries.minValue)) {
minValue = !isNaN(minValue) ? Math.min(minValue, timeSeries.minValue) : timeSeries.minValue;
}
}
if (isNaN(maxValue) && isNaN(minValue)) {
canvasContext.restore(); // without this there is crash in Android browser
return;
}
// Scale the maxValue to add padding at the top if required
if (options.maxValue != null)
maxValue = options.maxValue;
else
maxValue = maxValue * options.maxValueScale;
// Set the minimum if we've specified one
if (options.minValue != null)
minValue = options.minValue;
// If a custom range function is set, call it
if (this.yRangeFunction) {
var range = this.yRangeFunction({min: minValue, max: maxValue});
minValue = range.min;
maxValue = range.max;
}
var targetValueRange = maxValue - minValue;
this.currentValueRange += options.scaleSmoothing*(targetValueRange - this.currentValueRange);
this.currentVisMinValue += options.scaleSmoothing*(minValue - this.currentVisMinValue);
var valueRange = this.currentValueRange;
var visMinValue = this.currentVisMinValue;
var yValueToPixel = function(value)
{
var offset = value - visMinValue;
return dimensions.height - (valueRange !== 0 ? Math.round((offset / valueRange) * dimensions.height) : 0);
};
// Draw any horizontal lines
if (options.horizontalLines && options.horizontalLines.length) {
for (var hl = 0; hl < options.horizontalLines.length; hl++) {
var line = options.horizontalLines[hl];
var hly = Math.round(yValueToPixel(line.value)) - 0.5;
canvasContext.strokeStyle = line.color || '#ffffff';
canvasContext.lineWidth = line.lineWidth || 1;
canvasContext.beginPath();
canvasContext.moveTo(0, hly);
canvasContext.lineTo(dimensions.width, hly);
canvasContext.stroke();
canvasContext.closePath();
}
}
// For each data set...
for (var d = 0; d < this.seriesSet.length; d++) {
canvasContext.save();
var timeSeries = this.seriesSet[d].timeSeries;
var dataSet = timeSeries.data;
var seriesOptions = this.seriesSet[d].options;
// Delete old data that's moved off the left of the chart.
// We must always keep the last expired data point as we need this to draw the
// line that comes into the chart, but any points prior to that can be removed.
while (dataSet.length >= options.maxDataSetLength && dataSet[1][0] < time - (dimensions.width * options.millisPerPixel)) {
dataSet.splice(0, 1);
}
// Set style for this dataSet.
canvasContext.lineWidth = seriesOptions.lineWidth || 1;
canvasContext.strokeStyle = seriesOptions.strokeStyle || '#ffffff';
// Draw the line...
canvasContext.beginPath();
// Retain lastX, lastY for calculating the control points of bezier curves.
var firstX = 0, lastX = 0, lastY = 0;
for (var i = 0; i < dataSet.length && dataSet.length !== 1; i++) {
var x = Math.round(dimensions.width - ((time - dataSet[i][0]) / options.millisPerPixel));
var y = yValueToPixel(dataSet[i][1]);
if (i == 0) {
firstX = x;
canvasContext.moveTo(x, y);
}
// Great explanation of Bezier curves: http://en.wikipedia.org/wiki/Bezier_curve#Quadratic_curves
//
// Assuming A was the last point in the line plotted and B is the new point,
// we draw a curve with control points P and Q as below.
//
// A---P
// |
// |
// |
// Q---B
//
// Importantly, A and P are at the same y coordinate, as are B and Q. This is
// so adjacent curves appear to flow as one.
//
else {
switch (options.interpolation) {
case "line":
canvasContext.lineTo(x,y);
break;
case "bezier":
default:
canvasContext.bezierCurveTo( // startPoint (A) is implicit from last iteration of loop
Math.round((lastX + x) / 2), lastY, // controlPoint1 (P)
Math.round((lastX + x) / 2), y, // controlPoint2 (Q)
x, y); // endPoint (B)
break;
}
}
lastX = x; lastY = y;
}
if (dataSet.length > 0 && seriesOptions.fillStyle) {
// Close up the fill region.
canvasContext.lineTo(dimensions.width + seriesOptions.lineWidth + 1, lastY);
canvasContext.lineTo(dimensions.width + seriesOptions.lineWidth + 1, dimensions.height + seriesOptions.lineWidth + 1);
canvasContext.lineTo(firstX, dimensions.height + seriesOptions.lineWidth);
canvasContext.fillStyle = seriesOptions.fillStyle;
canvasContext.fill();
}
canvasContext.stroke();
canvasContext.closePath();
canvasContext.restore();
}
// Draw the axis values on the chart.
if (!options.labels.disabled) {
canvasContext.fillStyle = options.labels.fillStyle;
var maxValueString = parseFloat(maxValue).toFixed(2);
var minValueString = parseFloat(minValue).toFixed(2);
canvasContext.fillText(maxValueString, dimensions.width - canvasContext.measureText(maxValueString).width - 2, 10);
canvasContext.fillText(minValueString, dimensions.width - canvasContext.measureText(minValueString).width - 2, dimensions.height - 2);
}
canvasContext.restore(); // See .save() above.
};
Q: If $\mu$ equals the Haar (surface) measure on the unit sphere $S^2 \subset \mathbb{R}^3$, show that $\hat{\mu}(\varepsilon) = \dfrac{2\sin(2\pi |\varepsilon|)}{|\varepsilon|}$.
I am not quite sure how to start this problem. First of all, I can't figure out the formula for $\mu$ itself, let alone compute its Fourier transform.
The only way I can make sense of $\hat{\mu}$ is to view $\mu$ as a distribution, but even in that case I don't know where to begin.
Any help is appreciated.
Answer:
Here is my attempted solution, after looking at the suggestions given below.
$$\hat{\mu}(\varepsilon) = \int_{S^2} e^{-2\pi i\,\varepsilon\cdot\gamma}\, d\sigma(\gamma), \tag{1}$$ where $d\sigma(\gamma)$ is the surface element of $S^2$. Because $\mu$ is rotation invariant, so is its Fourier transform; in other words, the integral in $(1)$ is radial in $\varepsilon$. Hence it suffices to assume that if $|\varepsilon| = \rho$, then $\varepsilon = (0,0,\rho)$, so that $\varepsilon\cdot\gamma = \rho\cos\theta$ in spherical coordinates. Substituting $u = \cos\theta$, the integral in $(1)$ equals $$\int_{0}^{2\pi}\!\!\int_{0}^{\pi} e^{-2\pi i\rho\cos\theta}\sin\theta \,d\theta\, d\phi = 2\pi\int_{-1}^{1} e^{-2\pi i\rho u}\,du = 2\pi\cdot\frac{1}{-2\pi i\rho}\Big[e^{-2\pi i\rho u}\Big]_{u=-1}^{u=1} = \frac{e^{2\pi i\rho}-e^{-2\pi i\rho}}{i\rho} = \frac{2\sin(2\pi\rho)}{\rho},$$ as desired.
A: In this case the Haar measure is simply the rotation-invariant surface (Lebesgue) measure on the sphere.
So you have
$$
\hat \mu(\varepsilon)=\int_{S^2} e^{-2\pi i\,\varepsilon\cdot x}\,d\sigma(x).
$$
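A quick numerical sanity check of the closed form (my addition, not part of the original answer), using the same $e^{-2\pi i\,\xi\cdot x}$ convention; by symmetry the imaginary part cancels, so only the cosine term survives:

```python
import math

def fourier_sphere_numeric(rho, n=100_000):
    # hat{mu}(xi) for |xi| = rho, reduced by rotation invariance to
    # 2*pi * integral_0^pi cos(2*pi*rho*cos t) * sin t dt,
    # evaluated with the midpoint rule on n panels.
    h = math.pi / n
    total = sum(
        math.cos(2 * math.pi * rho * math.cos((k + 0.5) * h)) * math.sin((k + 0.5) * h)
        for k in range(n)
    )
    return 2 * math.pi * h * total

def fourier_sphere_closed(rho):
    return 2 * math.sin(2 * math.pi * rho) / rho

for rho in (0.3, 1.0, 2.7):
    assert abs(fourier_sphere_numeric(rho) - fourier_sphere_closed(rho)) < 1e-6
```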
Kronos Partners with Even to Add Comprehensive Financial Wellness Tools to Workforce Dimensions Marketplace
October 11, 2018 11:00 AM Eastern Daylight Time
OAKLAND, Calif. & LOWELL, Mass.--(BUSINESS WIRE)--Today, Even and Kronos Incorporated, a leading provider of workforce management and human capital management cloud solutions, announced Even’s partnership with Kronos to add its comprehensive financial wellness app to the Workforce Dimensions Marketplace. Through this partnership, businesses that use Workforce Dimensions can easily offer the Even financial wellness app to their employees. With Even, Workforce Dimensions will help millions of users budget, save, and safely resolve cash flow emergencies, with the ultimate goal of breaking the paycheck to paycheck cycle.
Nearly 20 percent of Americans don’t save any of their annual income, while another 21 percent only save five percent or less. As the cost of living continues to rise and wages remain stagnant, there’s a widespread need for resources to help people take control of their finances and set themselves up for future success.
“Our mission is to end the paycheck-to-paycheck cycle, and this partnership will enable millions of people who work at companies that use Kronos for scheduling and payroll to better budget their earnings every pay period,” said Jon Schlossberg, CEO and co-Founder of Even. “Kronos makes it easy for organizations to empower employees to take control of their work-life balance. By integrating with Even, those same organizations can provide another powerful tool to help employees better manage one of the most challenging balances of their lives: their money.”
Even integrates with time and attendance, payroll, and banking systems to understand and directly improve the full picture of financial health:
Instantly budgets so you know how much is okay to spend
Safely solves cash flow problems with Instapay on-demand access to wages
Automatically saves money to make progress towards goals
Together, these core products decrease accidental overspending, eliminate the cost of interest and loans, and help users build savings. Even is also FDIC insured and takes rigorous measures to ensure its system is completely secure, using 256-bit end-to-end encryption and undergoing regular security and privacy audits by some of the nation’s largest employers.
Key to the integration with Workforce Dimensions, Even's Schedule and Earnings product allows users to easily view their upcoming shift schedules as well as review how much they've earned during previous shifts. These schedules are directly linked to Kronos time management and automatically appear in the user-friendly Even app as managers add shifts to a user's schedule. Users can review their shifts and earned wages simply by tapping on the Earn tab within the Even app.
Workforce Dimensions is a next-generation workforce management solution from Kronos. The solution makes it quick and simple for customers to extend the value of their Kronos investment with tightly integrated partner applications, such as Even, through the Kronos D5 platform’s leading open API framework.
“Workforce Dimensions is built on a completely open and extensible platform, enabling innovative integrations with partners such as Even that empower employees with real-time access to data in ways that simply were not possible before,” said Michael May, Senior Director of the Workforce Dimensions Technology Partner Program at Kronos. “Even’s intuitive, easy-to-use application will provide Workforce Dimensions customers with a powerful financial wellness application that can further support employee engagement while maximizing the value of their Kronos investment.”
About Even
Even is a mission-driven technology company working to end the paycheck-to-paycheck cycle. The company’s technology integrates with attendance, payroll, and banking systems to create innovative products that address the core components of financial health. The company was founded by former Instagram and Google engineers, and is headquartered in Oakland, California. Additional information at www.even.com.
About Kronos Incorporated
Kronos is a leading provider of workforce management and human capital management cloud solutions. Kronos industry-centric workforce applications are purpose-built for businesses, healthcare providers, educational institutions, and government agencies of all sizes. Tens of thousands of organizations — including half of the Fortune 1000® — and more than 40 million people in over 100 countries use Kronos every day. Visit www.kronos.com. Kronos: Workforce Innovation That Works.
Clarity PR
Kevin Brown, 512-917-8744
[email protected]
/// @ref gtc_ulp
/// @file glm/gtc/ulp.hpp
///
/// @see core (dependence)
///
/// @defgroup gtc_ulp GLM_GTC_ulp
/// @ingroup gtc
///
/// Include <glm/gtc/ulp.hpp> to use the features of this extension.
///
/// Allow the measurement of the accuracy of a function against a reference
/// implementation. This extension works on floating-point data and provides
/// results in ULP.

#pragma once

// Dependencies
#include "../detail/setup.hpp"
#include "../detail/qualifier.hpp"

#if GLM_MESSAGES == GLM_ENABLE && !defined(GLM_EXT_INCLUDED)
#	pragma message("GLM: GLM_GTC_ulp extension included")
#endif

namespace glm
{
/// @addtogroup gtc_ulp
/// @{
/// Return the next ULP value(s) after the input value(s).
///
/// @tparam genType A floating-point scalar type.
///
/// @see gtc_ulp
template<typename genType>
GLM_FUNC_DECL genType next_float(genType x);
/// Return the previous ULP value(s) before the input value(s).
///
/// @tparam genType A floating-point scalar type.
///
/// @see gtc_ulp
template<typename genType>
GLM_FUNC_DECL genType prev_float(genType x);
/// Return the value(s) ULP distance after the input value(s).
///
/// @tparam genType A floating-point scalar type.
///
/// @see gtc_ulp
template<typename genType>
GLM_FUNC_DECL genType next_float(genType x, int ULPs);
/// Return the value(s) ULP distance before the input value(s).
///
/// @tparam genType A floating-point scalar type.
///
/// @see gtc_ulp
template<typename genType>
GLM_FUNC_DECL genType prev_float(genType x, int ULPs);
/// Return the distance in the number of ULP between 2 single-precision floating-point scalars.
///
/// @see gtc_ulp
GLM_FUNC_DECL int float_distance(float x, float y);
/// Return the distance in the number of ULP between 2 double-precision floating-point scalars.
///
/// @see gtc_ulp
GLM_FUNC_DECL int64 float_distance(double x, double y);
/// Return the next ULP value(s) after the input value(s).
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Floating-point
/// @tparam Q Value from qualifier enum
///
/// @see gtc_ulp
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> next_float(vec<L, T, Q> const& x);
/// Return the value(s) ULP distance after the input value(s).
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Floating-point
/// @tparam Q Value from qualifier enum
///
/// @see gtc_ulp
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> next_float(vec<L, T, Q> const& x, int ULPs);
/// Return the value(s) ULP distance after the input value(s).
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Floating-point
/// @tparam Q Value from qualifier enum
///
/// @see gtc_ulp
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> next_float(vec<L, T, Q> const& x, vec<L, int, Q> const& ULPs);
/// Return the previous ULP value(s) before the input value(s).
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Floating-point
/// @tparam Q Value from qualifier enum
///
/// @see gtc_ulp
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> prev_float(vec<L, T, Q> const& x);
/// Return the value(s) ULP distance before the input value(s).
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Floating-point
/// @tparam Q Value from qualifier enum
///
/// @see gtc_ulp
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> prev_float(vec<L, T, Q> const& x, int ULPs);
/// Return the value(s) ULP distance before the input value(s).
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam T Floating-point
/// @tparam Q Value from qualifier enum
///
/// @see gtc_ulp
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, T, Q> prev_float(vec<L, T, Q> const& x, vec<L, int, Q> const& ULPs);
/// Return the distance in the number of ULP between 2 single-precision floating-point vectors.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam Q Value from qualifier enum
///
/// @see gtc_ulp
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, int, Q> float_distance(vec<L, float, Q> const& x, vec<L, float, Q> const& y);
/// Return the distance in the number of ULP between 2 double-precision floating-point vectors.
///
/// @tparam L Integer between 1 and 4 included that qualify the dimension of the vector
/// @tparam Q Value from qualifier enum
///
/// @see gtc_ulp
template<length_t L, typename T, qualifier Q>
GLM_FUNC_DECL vec<L, int64, Q> float_distance(vec<L, double, Q> const& x, vec<L, double, Q> const& y);
/// @}
}//namespace glm
#include "ulp.inl"
Into the 8th Dimension
admin — 06/27/2020 — Documentary
Language: English 2.0
Runtime: 2:08:00
Genre: Documentary
A feature-length documentary on the making of the 1980s cult classic movie The Adventures of Buckaroo Banzai Across The 8th Dimension.
Cast: Christopher Lloyd, Clancy Brown, John Lithgow, Peter Weller
Interior arrangement for direct reduction rotary kilns and method
A method and means for maximizing the use of the kiln capacity in a rotary kiln, directly reducing metal oxides using solid carbonaceous materials as the source of fuel and reducant, is disclosed involving the creation of an annular dam arrangement within the kiln at a selected position between the feed end and the discharge end dams, which arrangement is located and dimensioned with respect to the end dams, such that the materials in the charge bed suitably fill the kiln volume and have sufficient residence time in the feed end portion of the kiln, to permit adequate heat transfer thereto, thus minimizing the portion of the kiln needed for preheating and maximizing the remaining portion of the kiln available for reduction. In a kiln of a given size, the spacing and dimensions of the end dams and one or more intermediate dams are designed in combination with the degree of kiln inclination, the kiln rotational speed and the required heat transfer rate to the surface of the charge bed to obtain a volume filling in the charge bed in the preheat zone, and hence a solids residence time therein, which is optimum so that the mass flow rate and the degree of metallization of the metal oxides may be maximized for the available kiln volume.
Baker, Alan C. (Harriman, TN)
Boulter, Geoffrey N. (New York, NY)
Wilbert, Daniel H. (Knoxville, TN)
The Direct Reduction Corporation (New York, NY)
75/477, 266/173, 266/248
C21B13/08; (IPC1-7): C21B13/08; F27B7/30
266/173, 266/248, 75/36, 75/2R, 75/29
4273314 Direct reduction rotary kiln with improved air injection 1981-06-16 Keran et al. 266/173
2039645 Treatment of sulphur bearing ores 1936-05-05 Hechenbleikner 266/173
ANDREWS, MELVYN J
THOMAS P. DOWD (ONE PURITAN AVENUE, YONKERS, NY, 10710, US)
1. In an inclined rotary kiln of the type for directly reducing metal oxides using a solid carbonaceous material as the source of fuel and reductant, and having an opening at the higher end for receiving the metal oxides materials as a charge along with a portion of the solid carbonaceous materials, and an opening at the lower end for receiving the remainder of the solid carbonaceous materials and for discharging the reduced materials therefrom and wherein the kiln interior wall defines a process operating zone bounded by a feed end dam located at the higher end of the kiln and a discharge end dam located at the lower end through which zone the bed of charge materials moves, the improvement comprising:
intermediate dam means, disposed within the kiln between the feed end and discharge end dams at a location about one-third the distance along the length of the kiln from the feed end dam, for dividing the operating zone into a preheat zone between said dam means and the feed end dam and a reduction zone between said dam means and the discharge end dam, said dam means being dimensioned to provide volume filling and residence time of the bed materials in said preheat zone sufficient to permit the transfer of adequate heat thereto at the available rate of heat transfer through the surface of the bed in said preheat zone to raise the temperature of the materials to a level approaching the reduction temperatures of the metal oxides by the time the materials reach the end of said preheat zone in their movement through the kiln.
2. A kiln as in claim 1 wherein said intermediate dam means comprises a plurality of annular dams.
3. A kiln as in claim 1 further comprising a plurality of means, spaced from each other along the kiln length, for injecting oxygen-containing gas axially within the kiln, each of said injecting means on the feed end side of said dam means being directed to inject said gas toward the feed end and each of said injecting means on the discharge end side of said dam means being directed to inject said gas toward the discharge end.
4. A method for optimizing the product metallization and throughput capacity of an inclined rotary kiln with a given interior volume, directly reducing metal oxides using solid carbonaceous materials as the source of fuel and reductant, a portion of which carbonaceous materials is fed as a charge together with the metal oxides into the kiln through a feed opening at the higher end thereof, and the remainder of which is fed through a discharge opening at the lower end of the kiln out of which the reduced materials are discharged, comprising the steps of:
forming an intermediate dam structure within the kiln between the feed opening and discharge opening at a location about one-third the distance along the length of the kiln from the feed opening for defining a region in the kiln wherein the charge bed moving through the kiln is preheated to a temperature approaching that at which the metal oxides are reduced; and
setting the dimensions of the intermediate dam structure with respect to the feed opening to create a charge bed depth in the defined region between the end of the dam structure and the feed opening, for providing a volume filling and retention time of the materials in said region of the bed sufficient to raise the temperature of the materials upon reaching the end of the dam structure to a level approaching the reduction temperature of the metal oxides at the available heat transfer rate at the surface of the bed in said region.
5. The method of claim 4 comprising the further steps of:
injecting oxygen-containing gas at spaced intervals axially along the length of the kiln;
directing said gas injected on the feed opening side of the dam structure toward the feed opening; and
directing said gas injected on the discharge opening side of the dam structure toward the discharge opening.
6. The method of claim 5 comprising the further step of injecting an amount of said gas toward the feed opening to maintain the temperature of the gas exiting through the opening above about 750° to facilitate afterburning with ambient air.
7. The method of claim 4 wherein said dam structure is formed as a plurality of annular dams.
8. The method of claim 4 wherein the location and dimensions of said dam structure are determined as a function of the particular metal oxides and carbonaceous materials to be used in the charge bed.
9. The method of claim 4 comprising the further steps of adjusting the rotational speed of the kiln and the rate of charge feed such that a small amount of charge material spills back out of the kiln through the feed opening consistently during kiln operation.
10. The method of claim 4 wherein said dam structure is located as close to the feed opening as feasible to maximize the length of the kiln between said dam structure and said discharge opening.
The present invention relates to the direct reduction of metal oxides in rotary kilns using solid carbonaceous materials as the source of fuel and reductant, and more particularly to a method and means for constructing the kiln interior to maximize the kiln output for a given volume kiln.
The interior of a direct reduction rotary kiln may be divided essentially into two operating zones, a preheat zone at the kiln feed end, wherein the materials entering the charge bed are preheated to bring them up to a temperature level at which reduction will begin, and a reduction zone, wherein the metal oxides are actually reduced to a metallic state before passing out of the kiln discharge end. The heat transfer requirements from the burning freeboard gases to the charge bed in the two zones differ substantially since in the preheat zone the need is primarily for sensible heat to raise the bed temperature to the threshold level for reduction, while in the reduction zone the reactions bringing about reduction are strongly endothermic, and create an added heat demand which varies along the length of the charge bed. The amount of heat transfer to the bed in the reduction zone therefore must generally be much higher than in the preheat zone to achieve a high level of metallization. Accordingly, to maximize the use of the kiln volume, it would seem desirable to maintain a high temperature throughout the kiln for a rapid preheating of the charge in the preheat zone, particularly when the feed materials are fed at ambient temperature, and for accelerated and maximum reduction of the oxides in the reduction zone. However, problems are presented when attempting this approach by various phenomena which occur in the charge bed. For example, when the metal oxides used are those in iron ore, rapid increases in bed temperatures in the preheat zone can cause excessively rapid phase changes in the metal oxides from hexagonal crystal hematite to cubic crystal magnetite and excessive decrepitation of the ore. Also, rapid heat up may cause the formation of sticky phases in the bed in the transition region of the kiln just beyond the preheat zone. These phases can result in sintering and uncontrolled accretion formation on the kiln walls in the transition region. 
Further, if certain coals are used as the carbonaceous material, rapid heat up may plasticize the coal, thereby retarding mixing of the materials in the charge bed. Consequently, the temperature levels in the kiln and heat transfer to the bed must be carefully controlled to produce and maintain a suitable temperature profile in the charge bed that will permit optimum metallization of the metal oxides while avoiding charge decrepitation, sintering, wall accretions, and other deleterious effects. A particular temperature profile for this purpose is described in U.S. Pat. No. 4,304,597, assigned to the same assignee as the present invention.
It, therefore, appears that a gradual bed temperature increase is desirable; but, unfortunately, in a kiln of a given volume, if the temperature of the charge bed in the preheat zone is brought up gradually to avoid the previously-mentioned deleterious effects, too much of the kiln volume may be used in preheating, leaving insufficient volume for the reduction zone. This will result in a low kiln output for the total operating volume of the kiln, or, in other words, inefficient operation. An actual example of such a situation is described in the ISS-AIME Ironmaking Proceedings, Volume 35, St. Louis, 1976, pp. 396-405.
To solve this latter problem, prior art solutions have included heating of the charge materials prior to feeding them to the kiln and mechanically complex techniques for rapid preheating of the charge in the kiln, such as by the use of under-bed combustion-air injection. These solutions nevertheless do not greatly decrease the risk of kiln accretions or ringing in the transition region at the start of the reduction zone, the region of the kiln which, in the absence of proper heat transfer or bed temperature control, is the most susceptible to ringing.
The present invention provides a solution to the problem of properly transferring an adequate supply of heat to the charge bed in the preheat zone of the kiln by utilizing the intermediate interior dam, used in some direct reduction and certain other process kilns, in an improved manner. This solution obviates the need for prior heating of the charge outside of the kiln, complicated rapid preheating techniques, or excessive gas temperatures, by maximizing the degree of kiln volume filling and charge residence time and, consequently, the product throughput in a kiln of a given volume.
The present invention involves the design and installation of a dam arrangement comprising one or more intermediate dams, preferably formed as part of the refractory lining, in the interior of a direct reduction rotary kiln. The location of the dam arrangement and its dimensions with respect to the kiln feed end and discharge end dams are selected to provide the necessary volume filling and retention time of the charge bed materials in the preheat zone to permit sufficient heat transfer to the bed in that zone for smooth charge temperature elevation and thus the avoidance of undesirable ore decrepitation, charge sintering and wall accretions in the transition region at the beginning of the reduction zone. The intermediate dam arrangement is positioned in a kiln of a given volume taking into consideration the compositions and desired mass flow rates of the raw materials and their specific heats as well as the endothermic heat requirements to estimate the required rate of heating in the preheat zone and accordingly the volume filling required to essentially define the preheat zone and separate it from the reduction zone. The heights of the feed end, discharge end and intermediate dams are then selected, along with the kiln inclination and rate of rotation, to create charge bed depths in the kiln that allow sufficient residence time to permit the required amounts of heat at the expected heat transfer rates to be transferred to the bed materials in a controlled manner, and particularly to minimize the portion of the kiln needed for the preheat zone and thus to maximize the remaining portion of the kiln available for reduction. The resulting enhancement of the control of heat transfer to the charge bed permits a degree of metallization of the materials being reduced and a mass flow rate that are maximum for the volume of the kiln being used for the process.
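The interplay of volume filling, feed rate, and residence time described above reduces to a simple continuity relation: the mean residence time of solids in a zone is the charge volume retained in that zone divided by the volumetric throughput. A minimal sketch (the function name and all numeric inputs are illustrative assumptions, not values from the patent):

```python
def mean_residence_time_h(zone_bed_volume_m3: float,
                          feed_rate_t_h: float,
                          bulk_density_t_m3: float) -> float:
    """Mean solids residence time (hours) in a kiln zone, computed
    from the bed volume retained there and the volumetric feed rate."""
    # Convert mass feed rate to volumetric feed rate (m^3/h).
    volumetric_feed_m3_h = feed_rate_t_h / bulk_density_t_m3
    # Continuity: retained volume / volumetric throughput = mean time.
    return zone_bed_volume_m3 / volumetric_feed_m3_h
```

Raising the intermediate dam increases the retained zone volume at a fixed feed rate, and the residence time grows in direct proportion.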
BRIEF DESCRIPTION OF THE DRAWING
FIG. 1 is a diagram of a direct reduction system including a view in section of a rotary kiln incorporating and illustrating an intermediate dam arrangement in accordance with the present invention.
FIG. 2 is a view in section of a portion of a kiln interior illustrating a modification of an intermediate dam arrangement in accordance with the present invention.
The system shown in FIG. 1 is of the type in which the present invention is intended for use and is particularly suitable for reducing metal oxides, typically iron oxides contained in iron ore. The metal oxides are fed in the form of pellets or natural lump ore or other physical forms into the feed end of rotary kiln 6 and are reduced therein by using solid carbonaceous materials, such as coal, as the source of fuel and reductant, which materials are fed into the kiln from both the feed end and the discharge end. The feed end carbonaceous materials are fed with the metal oxides and other charge materials, such as desulfurizing agent in the form of limestone or dolomite, from appropriate supply sources 4, by conventional weigh feeder conveying means 5 and an inclined chute 5a, through an opening 6a in the feed end of the kiln 6. All the feed end materials enter the kiln, conveniently at ambient temperature, and form a charge bed 1 between a feed end dam 21 and a discharge end dam 22, both typically formed annularly as part of the refractory lining 20 of the kiln. The charge materials in the bed 1, by virtue of the preselected slope or inclination of the kiln and its rotation, move progressively along the kiln length to the discharge end.
The discharge end carbonaceous materials, in the form of coal and/or recycled char and the like, are injected onto the surface of the advancing bed 1 through an opening 6b in the kiln discharge end, out of which opening the kiln product is discharged over dam 22. These materials, which also may be fed at ambient temperature, are preferably injected by blowing with a low pressure air source 7 through a pipe 8, in a manner and by means such as described in U.S. Pat. No. 4,306,643 and co-pending U.S. application Ser. Nos. 266,602 and 317,939, all assigned to the same assignee as the present application.
The reduction process is begun by initially igniting the carbonaceous materials in the kiln and then continuing the combustion by the injection of an oxygen-containing gas, such as air, drawn in from outside the kiln through tubes 9 passing through the kiln shell and having injection nozzles 9a for directing the injected gas axially within the kiln. Each of the air tubes 9 may be provided externally with a fan 10 that is individually controllable to permit the air injection all along the kiln to be closely regulated so that the combustion of the gases arising from the bed, and thus heat transfer to the bed, can be varied in the different regions.
Fixed thermocouples 30 may be provided along the kiln length to sense the gas and bed temperatures and provide an appropriate indication through which the average temperature profiles within the kiln can be monitored and adjusted to optimize process operation. In addition to the fixed thermocouples 30, a roving, fast-response thermocouple 31 may be used selectively, by manual insertion into appropriate ports along the kiln, to detect the immediate gas and bed temperatures at particular locations during kiln rotation.
The process operating zone within the kiln interior, as indicated in FIG. 1, consists of two regions, a preheat zone at the feed end wherein the solids bed of charge materials is increased in temperature, typically from ambient, to bring it up to a level approaching that at which the reduction of the metal oxide materials begins, and a reduction zone beginning at the end of the preheat zone and continuing to the discharge end of the kiln. In the reduction zone the metallization of the metal oxides is carried out by complex gas/solid reactions that are strongly temperature dependent for a given production rate of metal. As the reduction reactions are highly endothermic, high operating temperatures with attendant high heat transfer to the bed are normally desirable in the reduction zone. However, the overall use of high temperatures is limited by the occurrence of sintering in the bed brought on by reactions in the bed constituents at high temperatures. High temperatures may also result in uncontrolled accretion formation on the kiln walls as well as other deleterious effects such as the production of sulfur bearing compounds in the bed. These effects occur primarily in the initial portion of the reduction zone so that the bed temperatures must be carefully controlled in this transition region. By the time the bed materials reach the latter portion of the kiln length rapid and complete reduction is occurring so that in that region, called the working zone, the highest operating temperatures are permissible and desirable. Consequently, the temperature levels in and heat transfer to the bed must be carefully controlled to maintain a temperature profile in the solid charge materials that will maximize the amount and rate of metallization while avoiding sintering and other deleterious effects. 
However, the actual transition points or demarcation lines between the various zones are not easily determinable and will vary with the bed constituents or materials being used in the process. A method and means for determining or defining the end of the preheat zone and beginning of the reduction zone and for minimizing the length of the former to maximize the length of the latter in accordance with the present invention will now be described.
In designing a kiln of the type shown, to begin with the desired product capacity of the kiln is chosen in terms of the annual tonnage of metal to be produced by the kiln. Then, based on the selected capacity a necessary volume is determined, that is, the total internal kiln volume that will be required to achieve the product capacity, which volume is dependent upon the mass flow rates of the constituents involved and particularly on the estimated volume of gases that must be handled. The kiln diameter is established by using the internal kiln volume figure and the estimated velocity of the gases that must pass through and out of the kiln to determine the cross-sectional area necessary. With the cross-sectional area established, the diameter can be easily calculated. With a diameter value, the length of the kiln is calculated using the conventional length to diameter (L/D) relationship, typically of a magnitude between 14-1/2 and 17-1/2, and taking into account the residence time the constituents will need in the kiln depending upon the heat flux or heat transfer required and the gas velocity. Following from the length and diameter of the kiln, the slope or inclination at which the kiln will be mounted for rotation is determined in order to roughly maximize the volume of the charge bed that can be retained within the kiln during operation of the process. In this latter regard, the height of the discharge end dam 22 is selected to maximize the degree of filling or bed size and the residence time of the materials in the kiln reduction zone.
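The sizing sequence just described (gas volume to be handled, then cross-sectional area from gas velocity, then diameter, then length via an L/D ratio) can be sketched numerically. The gas flow and velocity figures in the usage example are hypothetical; only the L/D range comes from the text:

```python
import math

def size_kiln(gas_flow_m3_s: float, gas_velocity_m_s: float,
              length_to_diameter: float = 16.0) -> tuple:
    """Rough kiln sizing: free cross-sectional area from the gas
    volume that must pass through, then diameter and length."""
    area = gas_flow_m3_s / gas_velocity_m_s     # A = Q / v
    diameter = math.sqrt(4.0 * area / math.pi)  # from A = pi*D^2/4
    length = length_to_diameter * diameter      # L/D typically 14.5-17.5
    return diameter, length
```

For example, `size_kiln(80.0, 6.0)` (80 m³/s of gas at 6 m/s, both assumed) gives a diameter a little over 4 m, of the same order as the 5 m kiln in the patent's illustrative example.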
Having thus broadly established the major structural design parameters of the kiln, process considerations and particularly the heat transfer requirements to the charge bed incrementally along the length of the kiln must be taken into account to refine and complete the design. Again, it is important to consider that the heat demand in the charge bed along the kiln during process operation differs substantially, ranging from the moderate heat required in the preheat zone to gently raise the temperature level of the materials in the bed, to the considerably higher quantities of heat needed to produce the comparatively high bed temperatures in the reduction zone. Further, as previously noted, high heat transfer in the transition region between the preheat zone and the reduction zone can raise the bed temperatures to a level that will cause sintering and other deleterious effects and so must be avoided. To achieve the proper temperature profile in the bed, therefore, without the occurrence of sintering, the feeding of the combustion air from tubes 9 along the kiln should be regulated to bring the temperature of the kiln bed gradually upwards from the preheat zone to the reduction zone. However, a problem encountered in accomplishing such a controlled temperature increase is that unless the charge materials are heated before entering the kiln or some means is provided to rapidly increase the temperature of the charge from ambient at the feed end, it may be found that an unacceptably long portion of the kiln volume is required for the preheat zone in order to transfer a sufficient amount of heat to the charge bed to bring about the necessary temperature transition. Under such circumstances the reduction zone will be correspondingly shortened, unless the kiln length is redesigned, and the kiln output capacity may become unacceptably low with respect to the kiln size.
The foregoing problem is solved, in accordance with the present invention, without prior heating or auxiliary rapid heating means, by the proper utilization of one or more intermediate interior dams of the type presently used in a number of direct reduction and other rotary kilns. Briefly, an annular intermediate dam arrangement, such as dam 23, is formed in the kiln interior and positioned and dimensioned to essentially separate the preheat zone from the reduction zone and increase the volume filling and accordingly the residence time of the charge materials in the bed in the preheat zone so that for a given heat transfer rate the amount of heat transferred to the materials may be significantly increased in a controlled manner to bring them to the temperature needed for reduction. The heights and dimensions of the discharge end dam 22 and the feed end dam 21 may be adjusted with respect to those of the intermediate dam 23 to maximize the use of the kiln volume in performing the process.
More particularly, in determining the position and dimensions of the intermediate dam 23, it is firstly taken into consideration that for the efficient use of the kiln volume approximately the first third of the kiln should be given over to preheating the charge. Accordingly, the intermediate dam should be formed somewhere in the region about a third of the kiln length from the feed end. Also, the height of the dam will be a function of the cross-sectional area needed to pass the estimated volume of gases that must be handled and the longitudinal inclination of the kiln shell.
In initially estimating and setting the location of the dam, the analysis may be carried out as follows. The known properties of the raw materials to be processed are considered, their specific heats and their desired mass flow rates, and the required rate of heating in the preheat zone is calculated for the available charge volume filling in the zone in the absence of the dam. The calculated heat transfer rate is then compared with the heat available from the gas flow above the bed in the zone and the difference determines the increase in the volume of the charge required in the preheat zone to provide an adequate residence time. The intermediate dam location may then be selected based on the required volume and taking into account the maximum permissible height. Preferably the location is selected to be as close to the feed end as feasible to maximize the length of the reduction zone.
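The analysis in the preceding paragraph reduces to a small calculation: the charge mass flow, specific heat, and target temperature rise give the heat required per hour of feed; comparing that with the heat the freeboard gas can deliver gives the residence time needed, and hence the factor by which the preheat-zone charge volume must grow. A hedged sketch, with all names and the illustrative specific-heat and heat-availability figures assumed rather than taken from the patent:

```python
def required_volume_factor(mass_flow_t_h: float,
                           specific_heat_gcal_t_k: float,
                           delta_t_k: float,
                           available_heat_gcal_h: float,
                           base_residence_h: float) -> float:
    """Factor by which preheat-zone charge volume (and residence time)
    must be increased so the available heat transfer can supply the
    heat required to preheat the hourly charge feed."""
    # Total heat needed to preheat one hour's worth of feed (Gcal).
    heat_needed = mass_flow_t_h * specific_heat_gcal_t_k * delta_t_k
    # Hours of residence required at the available transfer rate.
    residence_needed = heat_needed / available_heat_gcal_h
    return residence_needed / base_residence_h
```

With the patent's 39.5 t/h ore feed and plausible assumed values for the remaining inputs, the factor comes out near 2, consistent with the roughly doubled residence time (1.05 h to 2.15 h) in the patent's own illustrative example.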
In this regard, it will be seen that the optimum positioning of the dam within the selected region will be dependent to a large extent on the constituents to be used in the process, since the amount of heat to be transferred per unit of mass and the volume required will vary as a function of the combinations of materials to be used in the process. Consequently, the actual building of a dam in the refractory of a given kiln may have to involve a compromise as to positioning if various process runs are to be conducted in the kiln with different metal oxides and carbonaceous materials.
With the location of the intermediate dam arrangement established, there will then be a defined separation of the preheat and reduction zones in the kiln, that is, a readily identifiable place in the kiln indicating where metallization of the metal oxides is actually beginning.
Once the location has been selected, the height of the intermediate dam 23 must then be estimated and set based upon the volume of bed necessary to accomplish appropriate preheating of the charge. The relative heights of the feed end and discharge end dams 21 and 22 and the kiln inclination may be adjusted accordingly during this exercise.
An important factor which must be taken into consideration in determining the height and other dimensions of the dam 23, as well as its location, is the actual mechanics of heat transfer or heat flow between the gas and the charge bed, which flow occurs through the surface of the bed at the gas/bed interface. Firstly, for a given combination of constituents, the amount of heat per unit of mass that must be transferred to the bed is the same irrespective of the area of the surface through which it is transferred. The surface area of the bed available for heat transfer in a particular kiln will clearly depend upon the size of the kiln and it will be appreciated that, during design, as the kiln size is increased, that is, the kiln diameter, the available volume for the charge bed will be increased. However, while the mass of materials capable of being retained in the larger volume kiln bed will increase roughly in proportion to the cube of the kiln diameter, the area through which the heat is transferred to this mass, that is primarily the surface area of the bed exposed to the hot gas, will increase only in proportion to the square of the diameter. Consequently, as the size of the kiln is scaled up in design, the ability to provide for adequate heat transfer to the bed, in the absence of the intermediate dam, becomes a greater and greater problem.
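The cube/square relationship described here can be made explicit: for a kiln scaled up by some factor in diameter (with length scaled in proportion), retained bed mass and exposed bed surface grow at different powers of the scale factor, so the heat-flux demand per unit surface grows linearly with size. A minimal illustration (the function is a sketch of the scaling argument, not a formula from the patent):

```python
def scale_ratios(scale: float) -> tuple:
    """For a kiln enlarged by `scale` in diameter (length scaled in
    proportion), return the growth ratios of retained bed mass,
    exposed bed surface area, and heat-flux demand per unit surface."""
    mass_ratio = scale ** 3                          # charge mass ~ D^3
    surface_ratio = scale ** 2                       # bed surface ~ D^2
    flux_demand_ratio = mass_ratio / surface_ratio   # grows ~ D
    return mass_ratio, surface_ratio, flux_demand_ratio
```

Doubling the diameter thus roughly octuples the mass to be heated while only quadrupling the surface through which the heat can pass, which is why the intermediate dam's extra residence time and surface area matter more and more as kilns are scaled up.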
The intermediate dam is used to overcome this problem by two effects. Firstly, the increase, due to the dam, in the total charge volume available in the preheat zone to accommodate the greater hourly mass flow of charge feed in a larger kiln, increases the hours of residence time of such charge in this zone so that although the rate of radiant heat transfer per hour may remain unchanged as compared to a smaller kiln, the total quantity of heat transferred to the charge increases to supply the additional quantity required to preheat adequately the greater mass flow of charge. Secondly, since the total charge cross-section occupies a sector of the circular interior of the preheat zone and the depth of the charge is increased by the intermediate dam, the actual area of charge surface available to receive the heat transfer is also increased. This increase in surface area causes the required level of heat flux (Gigacalories of heat transferred per hour per square meter of charge surface) in the preheat zone to be decreased, while the hourly quantity of heat needed to preheat the hourly mass flow of charge remains the same.
An illustrative example is as follows:
Parameters                                          Units                  Quantities
Internal refractory diameter of kiln                Meters                 5.0
Nominal production capacity of directly
  reduced iron (DRI)                                Metric tons per year   215,000
Hourly equivalent iron oxide ore feed               Metric tons            39.5
Corresponding total heat requirement
  in preheat zone                                   Gigacalories           N
In the absence of the dam, the residence time of a given volume of charge in the preheat zone of a kiln designed in accordance with the given parameters would be about 1.05 hours. However, with the intermediate dam in place, this average residence time may typically be increased to 2.15 hours. It will be seen that this increase in average residence time reduces the required heat transfer rate in the zone from N/1.05 Gcal/hr to N/2.15 Gcal/hr. Thus, the heat transfer rate requirement may be reduced by a factor of (1.05:2.15)=0.488:1, or to about 49% of its original value.
In addition to the reduction in heat transfer rate, the overall heat flux requirement is reduced by virtue of the increase in the available surface area of the bed for heat transfer. For example, if it is considered that the width of the surface of the bed subtends a 90° angle centered on the axis within the kiln and the kiln has a radius R of 2.5 meters, the surface area of the bed will be approximately equal to 1.414 RL m2, where L is the length of the bed in the preheat zone in meters. If it is considered further that in the absence of the dam the width line of the bed surface intersects a radius drawn perpendicularly to the two surfaces, at a point half way between the first surface and the circumference, the area of the surface without the dam will equal 1.0423 RL m2 from the geometrical construction. The required heat flux values, with and without the dam therefore, will be respectively (N/2.15×1.414 RL)=N/3.0401 RL and (N/1.05×1.0423 RL)=N/1.0944 RL Gcal/hr/m2.
Consequently, the ratio of the heat flux requirements with and without the dam will be 1.0944/3.0401 or 0.3599:1. Thus, the cumulative result of the two effects is that the heat flux requirement is about 36% of what it would be without the dam.
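The arithmetic of this example can be checked directly. The sketch below uses only the coefficients stated in the text (N stands for the hourly heat requirement in Gcal; R and L are the kiln radius and the bed length in the preheat zone):

```python
# Reproduce the worked example: residence times and bed-surface
# coefficients are taken from the text as stated.

t_without, t_with = 1.05, 2.15           # residence times, hours
area_with, area_without = 1.414, 1.0423  # bed surface coefficients (times R*L)

# Heat transfer rate ratio, dam vs. no dam: (N/2.15) / (N/1.05)
rate_ratio = t_without / t_with
print(round(rate_ratio, 3))              # ~0.488

# Heat flux ratio: N/(t * area * R * L), with dam over without dam
flux_ratio = (t_without * area_without) / (t_with * area_with)
print(round(flux_ratio, 4))              # ~0.36
```

The N and R*L factors cancel in both ratios, which is why the comparison reduces to the four stated coefficients.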
A further important consideration in this regard, resulting from the use of the intermediate dam, is the change in the rate of heat transfer to the materials when passing over the dam. It will be seen that as the bed material actually passes over the upper surface or lip of the intermediate dam 23, the material goes from a very thick bed or volume immediately before the dam to a comparatively thin layer on the dam itself. Thus, while the surface area remains approximately the same, the bed volume is substantially decreased so that the available bed surface area per unit of mass through which heat may be transferred in this region is increased by a large factor. As a result, the rate of heat transfer from gases to solids in this region is greatly enhanced. Further, on the downstream side of the intermediate dam, because of the inclination of the kiln, the advancing or falling layer of material lands on and combines with another comparatively thin layer of bed so that the materials again gain additional sensible heat which contributes to bringing them ultimately up to the reduction temperature. In any event, since in the region immediately beyond the dam, the transition region, the temperature of the kiln bed, with proper control of the temperature profile in the kiln, will still be below that at which the reduction reactions are occurring rapidly, the heat demand of the bed will still be low. However, as this transition region is located at the beginning of the reduction zone, the potential is present to have very high gas temperatures generated so that it becomes important that the rate of rise in temperature in this region be depressed and closely controlled to avoid high heat transfer to the bed with consequent rapid sintering. This is accomplished by the controlled addition of combustion air into the freeboard gases above the transition region and the preheat zone in a manner such as disclosed in U.S. Pat. No. 4,273,314, assigned to the same assignee as the present application.
In particular, as shown in FIG. 1, by reversing the three air tubes 9 in the preheat zone, the amount of combustion air being injected into the freeboard over the transition region can be minimized by simply limiting the air injected through the two air tubes in that region. At the same time, larger air volumes can be injected by the air tubes in the preheat zone since this combustion air is directed to travel with the exhaust gas flow out of the kiln. This air tube arrangement then permits the injection of increased combustion air volumes into the preheat zone to enhance the heat transfer to the bed in that zone without disturbing the control of the heat transfer in the transition region where control is critical. The increased heat provided by the increased air injection can be used to roast off as much sulphur as possible from the feed end coal and maximize burning of the coal volatiles in the freeboard above the charge bed in the preheat zone and thus optimize process performance. This capability is another factor taken into consideration in designing the intermediate dam.
It will then be appreciated by those skilled in the art that proper use of the dam in combination with gas and bed temperature control using injected combustion air, permits the pumping of heat into the bed in a highly efficient manner so that the charge materials entering the kiln at ambient temperature may be expeditiously but carefully brought up to the proper temperature levels in the preheat zone and enter the reduction zone at an early point in their travel along the kiln, thus maximizing the use of kiln volume in performing the process.
It is contemplated that when designing for certain combinations of charge materials, an intermediate dam arrangement using a single dam such as shown in FIG. 1 may not be found suitable so that one or more additional dams may be used, such as illustrated in FIG. 2. For example, if the height required for a single intermediate dam is such as to limit the cross-sectional area of the kiln interior at the dam beyond that required for the passage of the gases therethrough and out of the kiln, then the use of two dams of reduced height to achieve the same total volume filling may be found feasible. In such instances the spacing, heights, and dimensions of the intermediate dams 23a and 23b will be adjusted to achieve the desired volume filling. It will be within the purview of those skilled in the art using the foregoing descriptions to determine the appropriate design parameters for the particular constituent materials to be used in the process.
The appropriate and preferred temperature profiles for iron ores and other iron oxide materials, similar to those disclosed in previously-noted U.S. Pat. No. 4,304,597, are such that the temperature in the bed in the preheat zone increases to a maximum of about 750° C. to 800° C. and in the transition region downstream of the intermediate dam the temperature is elevated slowly through about a further 75° C. to 125° C. until reaching the start of the working zone where the reduction reactions begin to occur rapidly. The charge temperature is thereafter increased rapidly to a constant maximum ranging from about 925° C. to 1075° C., depending upon the characteristics of the constituents and the desired product specifications, throughout the working zone all the way to the discharge end of the kiln.
In addition to the sensible heating of the preheat zone bed materials, principally by radiant heat from the passing hot gases generated in the reduction zone, there are endothermic phenomena that occur in the preheat zone so that chemical heat as well as sensible heat are required to be supplied. Heat is absorbed in the calcining of the limestone or dolomite, in evaporating water in the feed materials, and in the devolatilizing of the carbonaceous materials. Control of this devolatilizing in a steady manner is important to obtain progressive release and utilization of the chemical heat of the volatiles within the kiln. The extended residence time and controlled temperature rise achievable in the preheat zone with the intermediate dam allows this to be carried out. Appropriate control is achieved as previously-indicated by regulating the volume of air that is injected into this zone. With proper air volume regulation, the volatiles rising from the bed may be partially burned in a steady controlled fashion such that the gas temperatures are increased and maintained at a high enough level to produce the necessary heat transfer rates. At the appropriate gas temperature levels, above about 750° C. to 800° C., heat transfer is largely radiative and the rate is proportional to the difference between the fourth powers of the gas and bed temperatures. Again, the increased residence time of the bed materials in the preheat zone, due to the intermediate dam, provides adequate time to absorb additional heat given off by burning of the volatiles in this zone.
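The fourth-power dependence mentioned above can be made concrete with a small numeric sketch. The particular temperatures below are illustrative choices from the ranges quoted in the text, not prescribed operating values:

```python
# Minimal sketch of the radiative proportionality stated in the text:
# transfer rate ~ T_gas^4 - T_bed^4, with absolute temperatures.
# The chosen temperatures are illustrative only.

def radiative_driving_term(t_gas_c, t_bed_c):
    """Difference of fourth powers of absolute temperatures (K^4)."""
    tg, tb = t_gas_c + 273.15, t_bed_c + 273.15
    return tg ** 4 - tb ** 4

# Raising the gas temperature from 900 C to 1000 C over an 800 C bed
# increases the radiative driving term by a large factor:
low = radiative_driving_term(900.0, 800.0)
high = radiative_driving_term(1000.0, 800.0)
print(round(high / low, 2))
```

This is why modest increases in freeboard gas temperature, obtained by controlled burning of the volatiles, translate into substantially higher heat transfer rates to the bed.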
Combustion of the volatiles from the preheat zone bed should be as complete as possible not only for the production of additional heat but also for eliminating the production of pollutants by the process. More particularly, in order to avoid or minimize any stack gas pollution resulting from the incomplete combustion of CO and hydrocarbons in the kiln off-gases, the temperature of the gases at the kiln exit should be maintained above about 750° C. Under this condition, any required after-burning can be accomplished by the simple addition of the required volumes of ambient air into the exhaust gas ducting. For most charge constituents, the process will be run with exhaust gases exiting the kiln at about 800° C. to 850° C. so that the temperature of the gas in the ducting will be maintained above the 750° C. level. This temperature maintenance will permit adequate after-burning to occur even though there are temporary drops in the gas temperature by 100° C. or more. A conventional water spray system may be used at the feed end of the kiln to assist in controlling the exhaust gas temperature.
It is possible to produce the 75° C. to 125° C. rise in temperature within the materials bed in the transition region without varying the gas temperature in the region since the surface area to mass variation will automatically produce the increase. While a rise in gas temperature need not necessarily be produced, still it can be utilized if adequate preheating of the bed materials is not occurring in the preheat zone; a condition which would normally indicate that a larger proportion of the kiln length is needed. The combustion air injection may be adjusted to regulate the hot gas flow above the dam to produce the appropriate heat.
Also, in the event that the materials are not brought to the desired temperature level at the end of the preheat zone due to inadequate heating or design, a temperature rise to the desired level may be accomplished by controlling the injection of the carbonaceous material from pipe 8 at the discharge end in combination with air injection through the shell tubes 9. The carbonaceous material may be blown onto the surface of the bed beyond the dam into the preheat zone to provide additional volatiles for combustion to produce an increase in the gas temperature above the bed. This material may also act to supply additional carbon to the bed to provide char to this region particularly when no char is fed at the feed end.
Briefly then, optimum operation may be achieved by controlling the bed temperature in the preheat zone such that the materials are at a temperature of about 750° C.-800° C. at the intermediate dam 23 and thereafter the temperature is increased by about 75° C. to 125° C. in the transition region immediately beyond the dam by creating the appropriate heat transfer rate in this area. The materials will then leave the transition region and enter the working zone at a temperature of about 825° C. to 925° C., which has been found to be desirable in achieving proper control and optimum operation of the process.
During kiln operation, once the optimum design has been completed, to ensure that maximum volume filling in the preheat zone is being maintained, a simple monitoring method may be used which is capable of being performed by an unskilled operator. The method involves the establishment of a small but definite material spillback mass flow over the feed end lip of the kiln. To this end, the rotational speed of the kiln and the rate of charge feed are adjusted such that a small amount of material spills back over the lip of the feed end dam 21 indicating that the charge is entering the kiln faster than the rotational speed can move it along within the kiln. The small amount of spillback is maintained at a constant level by maintaining the feeding and rotation of the kiln at constant levels. This small spillback will indicate that the maximum volume of materials is being supplied to the kiln and thus that the kiln is operating at full capacity and providing maximum residence time for the materials in the preheat zone of the kiln. The spillback materials may be readily recycled and it is a simple matter for an operator to maintain the charge feed spillback and kiln rotational rates constant during process operation. It will also be seen that without an intermediate dam arrangement, there is no defined separation of the preheat and reduction zones in the kiln, that is, no readily identifiable place in the kiln indicating where metallization of the metal oxides is actually beginning. With such a dam, after process operation is suitably established, there is a defined line of demarcation which can be used to enhance process control.
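The monitoring rule described above amounts to a simple feedback loop: nudge the charge feed rate until a small, constant spillback persists. The sketch below is only a toy illustration; the plant model (spillback as the excess of feed over what the rotation can transport), the gain, and all numbers are our own assumptions, not part of the disclosure:

```python
# Toy feedback sketch of the spillback monitoring rule.  The "capacity"
# of the kiln rotation and the controller gain are invented values.

def adjust_feed(feed_rate, spillback, target_spillback, gain=0.1):
    """One control step: raise feed if spillback is below target, else lower it."""
    return feed_rate + gain * (target_spillback - spillback)

feed, target = 39.5, 0.2   # t/h feed and desired spillback (illustrative)
capacity = 39.4            # toy model: what the rotation can move along, t/h
for _ in range(100):
    spillback = max(feed - capacity, 0.0)
    feed = adjust_feed(feed, spillback, target)
print(round(max(feed - capacity, 0.0), 3))
```

Once the loop settles, feed and rotation are simply held constant, which matches the text's point that an unskilled operator can maintain the condition.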
The dams may be of castable or monolithic refractory construction or of refractory brick as will be found desirable or convenient in a particular situation. The creation of an intermediate dam by means of sinter accretion build-up is possible but not preferred.
\subsubsection*{Organisation of the paper}
The paper is organised as follows. In Section~\ref{sec-def}, we briefly recall the basic properties of hyperbolic groups which we will need, and we introduce Markov compacta. Section~\ref{sec-typy} contains auxiliary facts regarding mainly the \textit{conical} and \textit{ball types} in hyperbolic groups (defined in~\cite{CDP}), which will be the key tool in the proof of Theorem~\ref{tw-kompakt}.
The main claim of Theorem~\ref{tw-kompakt} is obtained by constructing an appropriate family of covers of $\partial G$ in Section~\ref{sec-konstr}, considering the corresponding inverse sequence of nerves in Section~\ref{sec-engelking-top} and finally verifying the Markov property in Section~\ref{sec-markow}. An outline of this reasoning is given in the introduction to Section~\ref{sec-konstr}, and its summary appears in Section~\ref{sec-markow-podsum}. Meanwhile, we give the proof of Theorem~\ref{tw-bi-lip-0} (as a~corollary of Theorem~\ref{tw-bi-lip}) in Section~\ref{sec-bi-lip}, mostly by referring to the content of Section~\ref{sec-engelking-top}.
While the Markov system obtained at the end of Section~\ref{sec-markow} will already be barycentric and have the mesh property, in Sections~\ref{sec-abc} and~\ref{sec-wymd} we focus respectively on ensuring the distinct types property and on bounding the dimensions of the complexes involved. This will lead to a complete proof of Theorem~\ref{tw-kompakt}, summarised in Section~\ref{sec-wymd-podsum}.
Finally, Section~\ref{sec-sm} contains the proof of Theorem~\ref{tw-semi-markow-0}; its content is basically unrelated to Sections~\ref{sec-konstr}--\ref{sec-wymd}, except for that we re-use the construction of \textit{$B$-type} from Section~\ref{sec-sm-abc-b}.
\subsubsection*{Acknowledgements}
I would like to thank my supervisors Jacek Świątkowski and Damian Osajda for recommending this topic, for helpful advice and inspiring conversations, and Aleksander Zabłocki for all his help and careful proofreading.
\section{Introduction}
\label{sec-def}
\subsection{Hyperbolic groups and their boundaries}
\label{sec-def-hip}
Throughout the whole paper, we assume that $G$ is a~hyperbolic group in the sense of~Gromov~\cite{G}. We implicitly assume that $G$ is equipped with a~fixed, finite generating set~$S$, and we identify~$G$ with its Cayley graph~$\Gamma(G, S)$. As a~result, we will often speak about ``distance in~$G$'' or ``geodesics in~$G$'', referring in fact to the Cayley graph. Similarly, the term ``dependent only on~$G$'' shall be understood so that dependence on~$S$ is also allowed. By~$\delta$ we denote some fixed constant such that $\Gamma(G, S)$ is a $\delta$-hyperbolic metric space; we assume w.l.o.g. that $\delta \geq 1$.
We denote by~$e$ the identity element of~$G$, and by~$d(x, y)$ the distance of elements $x, y \in G$. The distance $d(x, e)$ will be called the \textit{length} of~$x$ and denoted by~$|x|$.
We use a notational convention that $[x, y]$ denotes a geodesic segment between the points $x, y \in G$, that is, an isometric embedding $\alpha : [0, n] \cap \mathbb{Z} \rightarrow G$ such that $\alpha(0) = x$ and $\alpha(n) = y$, where $n$ denotes $d(x, y)$. In the sequel, geodesic segments as well as geodesic rays and bi-infinite geodesic paths (i.e. isometric embeddings resp. of~$\mathbb{N}$ and $\mathbb{Z}$) will be all refered to as ``geodesics in $G$''; to specify which kind of geodesic is meant (when unclear from context), we will use adjectives \textit{finite}, \textit{infinite} and \textit{bi-infinite}.
We denote by~$\partial G$ the Gromov boundary of~$G$, defined as in~\cite{Kap}. We recall after \cite[Chapter 1.3]{zolta} that, as a~set, it is the~quotient of the set of all infinite geodesic rays in~$G$ by the relation of being close:
\[ (x_n) \sim (y_n) \qquad \Leftrightarrow \qquad \exists_{C > 0} \ \forall_{n \geq 0} \ d(x_n, y_n) < C; \]
moreover, in the above definition one can equivalently assume that $C = 4\delta$. It is also known that the topology defined on $\partial G$ is compact, preserved by the natural action of~$G$, and compatible with a~family of \textit{visual metrics}, defined depending on a~parameter~$a > 1$ with values sufficiently close to~$1$. Although we will not refer directly to the definition and properties of these metrics, we will use an estimate stated as (P2) in~\cite[Chapter~1.4]{zolta} which guarantees that, for every sufficiently small~$a > 1$, the visual metric with parameter~$a$ (which we occasionally denote by $d_v^{(a)}$) is bi-Lipschitz equivalent to the following \textit{distance function}:
\[ d_a \big( p, q \big) = a^{-l} \qquad \textrm{ for } p, q \in \partial G, \]
where $l$ is the largest possible distance between $e$ and any bi-infinite geodesic in~$G$ joining $p$ with $q$. As we will usually work with a~fixed value of~$a$, we will drop it in the notation.
For $x, y \in G \cup \partial G$, the symbol $[x, y]$ will denote \textit{any} geodesic in~$G$ joining~$x$ with~$y$. We will use the following fact from~\cite[Chapter~1.3]{zolta}:
\begin{fakt}
\label{fakt-waskie-trojkaty}
Let $\alpha, \beta, \gamma$ be the sides of a~geodesic triangle in~$G$ with vertices in~$G \cup \partial G$. Then, $\alpha$ is contained in the $4(p+1)\delta$-neighbourhood of $\beta \cup \gamma$, where $p$ is the number of vertices of the triangle which lie in~$\partial G$.
\end{fakt}
\subsection{Markov compacta}
\label{sec-def-markow}
\begin{df}[{\cite[Definition~1.1]{Dra}}]
\label{def-kompakt-markowa}
Let $(K_i, f_i)_{i \geq 0}$ be an inverse system consisting of the spaces $K_i$ and maps $f_i: K_{i + 1} \rightarrow K_i$ for $i \geq 0$.
Such system will be called \textit{Markov} (or said to satisfy \textit{Markov property}) if the following conditions hold:
\begin{itemize}
\item[(i)] $K_i$ are finite simplicial complexes which satisfy the inequality $\sup \dim K_i < \infty$;
\item[(ii)] for every simplex $\sigma$, in $K_{i+1}$ its image $f_i(\sigma)$ is contained in some simplex belonging to $K_i$ and the restriction $f_i|_\sigma$ is an affine map;
\item[(iii)] simplexes in $\amalg_i K_i$ can be assigned finitely many \textit{types} so that for any simplexes $s \in K_i$ and~$s' \in K_j$ of the same type there exist isomorphisms of subcomplexes $i_k : (f^{i+k}_i)^{-1}(s) \rightarrow (f^{j+k}_j)^{-1}(s')$ for $k \geq 0$ such that the following diagram commutes:
\begin{align}
\label{eq-markow-drabinka}
\xymatrix@+3ex{
s \ar[d]_{i_0} & \ar[l]_{f_i} f_i^{-1}(s) \ar[d]_{i_1} & \ar[l] \ldots & \ar[l] (f^{i+k}_i)^{-1}(s) \ar[d]_{i_k} & \ar[l]_{f_{i+k}} (f^{i+k+1}_i)^{-1}(s) \ar[d]_{i_{k+1}} & \ar[l] \ldots \\
s' & \ar[l]_{f_j} f_j^{-1}(s') & \ar[l] \ldots & \ar[l] (f^{j+k}_j)^{-1}(s') & \ar[l]_{f_{j+k}} (f^{j+k+1}_j)^{-1}(s') & \ar[l] \ldots
}
\end{align}
where $f^a_b$ (for $a \geq b$) means the composition $f_b \circ f_{b+1} \circ \ldots \circ f_{a-1} : K_a \rightarrow K_b$.
\end{itemize}
\end{df}
\begin{df}[{\cite[Definition~1.1]{Dra}}]
\label{def-kompakt-markowa2}
A topological space $X$ is a \textit{Markov compactum} if it is the inverse limit of a Markov system.
\end{df}
\begin{df}[{cf.~\cite[Lemma~2.3]{Dra}}]
\label{def-mesh}
\
\begin{itemize}
\item[\textbf{(a)}] A sequence $(\mathcal{A}_n)_{n \geq 0}$ of families of subsets in a compact metric space has \textit{mesh property} if \[ \lim_{n \rightarrow \infty} \, \max_{A \in \mathcal{A}_n} \diam A = 0. \]
\item[\textbf{(b)}] An inverse system of polyhedra $(K_n, f_n)$ has \textit{mesh property} if, for any $i \geq 0$, the sequence $(\mathcal{F}_n)_{n \geq i}$ of families of subsets in $K_i$ has mesh property, where
\[ \mathcal{F}_n = \big\{ f^n_i(\sigma) \ \big|\ \sigma \textrm{ is a simplex in }K_n \big\}. \]
\end{itemize}
\end{df}
\begin{uwaga}
\label{uwaga-mesh-bez-metryki}
We can formulate Definition \ref{def-mesh}a in an equivalent way (regarding only the topology): for any open cover $\mathcal{U}$ of $X$ there exists $n \geq 0$ such that, for every $m \geq n$, every set $A \in \mathcal{A}_m$ is contained in some $U \in \mathcal{U}$.
In particular, this means that the sense of Definition \ref{def-mesh}b does not depend on the choice of a metric (compatible with the topology) in $K_i$.
\end{uwaga}
\begin{df}
\label{def-kompakt-barycentryczny}
A Markov system $(K_i, f_i)$ is called \textit{barycentric} if, for any $i \geq 0$, the vertices of $K_{i+1}$ are mapped by $f_i$ to the vertices of the first barycentric subdivision of $K_i$.
\end{df}
\begin{df}
\label{def-kompakt-wlasciwy}
A Markov system $(K_i, f_i)$ has \textit{distinct types property} if for any $i \geq 0$ and any simplex $s \in K_i$ all simplexes in the pre-image $f_i^{-1}(s)$ have pairwise distinct types.
\end{df}
\begin{uwaga}
\label{uwaga-sk-opis}
A motivation for the above two definitions is the observation that barycentric Markov systems with the distinct types property are \textit{finitely describable}. In more detail, if the system $(K_i, f_i)_{i \geq 0}$ satisfies the conditions from Definitions~\ref{def-kompakt-markowa}, \ref{def-kompakt-barycentryczny} and~\ref{def-kompakt-wlasciwy}, and if $N$ is so large that the complexes $K_0, \ldots, K_N$ contain simplexes of all possible types, then the full system $(K_i, f_i)_{i = 0}^\infty$ can be rebuilt from the initial part of the system (which is finitely describable because it is barycentric).
\[
\xymatrix@C+3ex{
K_0 & \ar[l]_{f_0} K_1 & \ar[l]_{f_1} \ldots & \ar[l]_{f_{N-1}} K_N & \ar[l]_{f_N} K_{N+1}.
}
\]
The proof is inductive: for any $n \geq N+1$ the complex $K_{n+1}$ with the map $f_n : K_{n+1} \rightarrow K_n$ is given uniquely by the subsystem $K_0 \longleftarrow \ldots \longleftarrow K_n$. This results from the following:
\begin{itemize}
\item for any simplex $s \in K_n$ there exists a model simplex $\sigma \in K_m$ of the same type, where $m < n$, and then the pre-image $f_n^{-1}(s)$ together with the types of its simplexes and the restriction $f_n \big|_{f_n^{-1}(s)}$ is determined by the pre-image $f_{m}^{-1}(\sigma)$ and the restriction $f_m \big|_{f_m^{-1}(\sigma)}$ (which follows from Definition \ref{def-kompakt-markowa});
\item for any pair of simplexes $s' \subseteq s \in K_n$ the choice of a type preserving injection $f_n^{-1}(s') \rightarrow f_n^{-1}(s)$ is uniquely determined by the fact that vertices in $f_n^{-1}(s)$ have pairwise distinct types (by Definition \ref{def-kompakt-wlasciwy});
\item since $K_{n+1}$ is the union of the family of pre-images of the form $f_n^{-1}(s)$ for $s \in K_n$, which is closed with respect to intersecting, the knowledge of these pre-images and the type preserving injections between them is sufficient to recover $K_{n+1}$; obviously we can reconstruct~$f_n$ too, by taking the union of the maps $f_n^{-1}(s) \rightarrow s$ determined so far.
\end{itemize}
\end{uwaga}
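The inductive reconstruction described in this remark can be illustrated, at the crude level of counting simplex types, by a short sketch. The pattern table below is an invented toy example, not derived from any particular group: since the pre-image of a simplex is determined by its type, the multiset of types at level $n+1$ is a function of the multiset at level $n$.

```python
# Toy sketch of finite describability: once every type has occurred in the
# stored initial levels, the type counts of all further levels are forced.
# The "pattern" table (types occurring in the pre-image of a simplex of a
# given type) is an invented example.

from collections import Counter

pattern = {"A": ["A", "B"], "B": ["B", "A", "A"]}

def next_level(types: Counter) -> Counter:
    """Type counts of level n+1, determined by the counts of level n."""
    out = Counter()
    for t, count in types.items():
        for child in pattern[t]:
            out[child] += count
    return out

level = Counter({"A": 1})        # types occurring in K_0
for _ in range(3):               # rebuild type counts of K_1, K_2, K_3
    level = next_level(level)
print(dict(level))
```

The actual reconstruction in the remark recovers more than counts (the complexes and maps themselves), using barycentricity and the distinct types property to pin down the gluing, but the dynamics are of this deterministic, finitely described kind.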
\section{Types of elements of~$G$}
\label{sec-typy}
The goal of this section is to introduce the main properties of the \textit{cone types} (Definition \ref{def-typ-stozkowy}) and \textit{ball types} (Definition \ref{def-typ-kulowy}) for elements of a hyperbolic group $G$. These classical results will be used in the whole paper.
The connection between cone types (which describe the natural structure of the group and its boundary) and ball types (which are obviously only finite in number) in the group $G$ was first described by Cannon in \cite{Cannon} and used to prove properties of the growth function of the group. This result turns out to be an important tool in obtaining various finite presentations of the Gromov boundary: it is used in \cite{CDP} to build an automatic structure on $\partial G$, and in \cite{zolta} to present $\partial G$ as a semi-Markovian space in the torsion-free case (the goal of Section~\ref{sec-sm} is to generalise this result to all groups). Therefore it is not surprising that we will use this method to build the structure of a Markov compactum for the space $\partial G$.
\subsection{Properties of geodesics in~$G$}
\begin{fakt}
\label{fakt-geodezyjne-pozostaja-bliskie}
Let $\alpha = [e, x]$ and $\beta = [e, y]$, where $|x| = |y| = n$ and $d(x, y) = k$. Then, for $0 \leq m \leq n$, the following inequality holds:
\[ d \big( \alpha(m), \beta(m) \big) \ \leq \ 8\delta + \max \big( k + 8\delta - 2(n-m), \, 0 \big). \]
In particular, for $0 \leq m \leq n - \tfrac{k}{2} - 4\delta$, we have $d(\alpha(m), \beta(m)) \leq 8\delta$.
\end{fakt}
\begin{proof}
Let us consider the points $\alpha(m), \beta(m)$ lying on the sides of a $4\delta$-narrow geodesic triangle $[e, x, y]$. We will consider three cases.
If $\alpha(m)$ lies at distance at most $4\delta$ from $\beta$, then we have $d(\alpha(m), \beta(m')) \leq 4\delta$ for some $m'$, so from the triangle inequality in the triangle $[e, \alpha(m), \beta(m')]$ we obtain $|m' - m| \leq 4\delta$, so
\[ d(\alpha(m), \beta(m)) \leq d(\alpha(m), \beta(m')) + |m' - m| \leq 8\delta, \]
which gives the claim.
If $\beta(m)$ lies at distance at most $4\delta$ from $\alpha$, the reasoning is analogous.
It remains to consider the case when $\alpha(m)$, $\beta(m)$ are at distance at most $4\delta$ respectively from $a, b \in [x, y]$. Then, $|a|, |b| \leq m + 4\delta$, so $a$, $b$ are at distance at least $D = n - m - 4\delta$ from the both endpoints of $[x, y]$. Therefore, $d(a, b) \leq k - 2D$, and so
\[ d \big( \alpha(m), \beta(m) \big) \leq d \big( \alpha(m), a \big) + d(a, b) + d \big( b, \beta(m) \big) \leq 8\delta + k - 2D = 16\delta + k - 2(n - m). \qedhere \]
\end{proof}
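For concreteness, the behaviour of the bound can be traced on sample values; the choices of $\delta$, $n$ and $k$ below are our own illustration.

```latex
% Sample evaluation of the bound; the values of \delta, n, k are illustrative.
For instance, with $\delta = 1$, $n = 20$ and $k = 4$ we have
$n - \tfrac{k}{2} - 4\delta = 14$, so $d\big(\alpha(m), \beta(m)\big) \leq 8\delta = 8$
for all $m \leq 14$, while for $m = 18$ the general bound gives
\[ d\big(\alpha(18), \beta(18)\big) \ \leq \ 8\delta + \max\big(k + 8\delta - 2(n - 18),\, 0\big) \ = \ 8 + (4 + 8 - 4) \ = \ 16. \]
```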
\begin{wn}
\label{wn-krzywe-geodezyjne-pozostaja-bliskie}
Let $\alpha = [e, x]$ and $\beta = [e, y]$, with $|x| = n$ and $d(x, y) = k$. Then, for $0 \leq m \leq \min(n, |y|)$, we have:
\[ d \big( \alpha(m), \beta(m) \big) \ \leq \ 8\delta + \max \big( 2k + 8\delta - 2(n-m), \, 0 \big). \]
\end{wn}
\begin{proof}
From the triangle inequality we have $\big| n - |y| \big| = \big| |x| - |y| \big| \leq k$. Let $n' = \min(n, |y|)$; we claim that $d(\alpha(n'), \beta(n')) \leq 2k$. Indeed: if $n' = n$, we have
\[ d(\alpha(n'), \beta(n')) \leq d(x, y) + d(y, \beta(n)) \leq k + \big| |y| - n \big| \leq 2k; \]
otherwise $n' = |y|$ and so
\[ d(\alpha(n'), \beta(n')) \leq d(\alpha(|y|), x) + d(x, y) \leq \big| |y| - n \big| + k \leq 2k. \]
It remains to use Lemma~\ref{fakt-geodezyjne-pozostaja-bliskie} for geodesics $\alpha, \beta$ restricted to the interval $[0, n']$ and the doubled value of~$k$.
\end{proof}
\begin{fakt}
\label{fakt-geodezyjne-przekatniowo}
Let $(\alpha_k)_{k \geq 0}$ be a sequence of geodesic rays in~$G$ which start at~$e$. Denote $x_k = \lim_{n \rightarrow \infty} \alpha_k(n)$. Then, there exists a subsequence $(\alpha_{k_i})_{i \geq 0}$ and a geodesic $\alpha_{\infty}$ such that $\alpha_{k_i}$ coincides with~$\alpha_{\infty}$ on the segment~$[0, i]$. Moreover, the point $x_{\infty} = \lim_{n \rightarrow \infty} \alpha_{\infty}(n)$ is the limit of $(x_{k_i})$.
\end{fakt}
\begin{proof}
The first part of the claim is obtained from an easy diagonal argument: since, for every $n \geq 0$, the set $\{ x \in G \,|\, |x| \leq n \}$ is finite, the set of possible restrictions $\{ \alpha_k \big|_{[0, n]} \,|\, k \geq 0 \}$ must be finite too. This allows us to define $\alpha_{\infty}$ inductively: we take $\alpha_\infty(0) = e$, and for consecutive $n > 0$ we choose $\alpha_\infty(n)$ so that $\alpha_\infty$ coincides on $[0, n]$ with infinitely many of the~$\alpha_k$'s. Such a choice is always possible and guarantees the existence of a subsequence $(\alpha_{k_i})$.
The sequence $\alpha_\infty$ obtained in this way is a geodesic because each of its initial segments $\alpha_\infty \big|_{[0, i]}$ coincides with the initial segment $\alpha_{k_i} \big|_{[0, i]}$ of a geodesic. (Note that the sequence $(k_i)$ can be chosen to be increasing.) In this situation, from Lemma~5.2.1 in~\cite{zolta} and the definition of the topology in $G \cup \partial G$ it follows that $x_{\infty} = \lim_{i \rightarrow \infty} x_{k_i}$ holds in~$\partial G$.
\end{proof}
\subsection{Cone types and their analogues in~$\partial G$}
\begin{df}[cf.~\cite{CDP}]
\label{def-typ-stozkowy}
We define the \textit{cone type} $T^c(x)$ of $x \in G$ as the set of all $y \in G$ such that there exists a geodesic connecting $e$ to $xy$ and passing through $x$.
\end{df}
Elements of the set $xT^c(x)$ will be called \textit{descendants} of~$x$.
\begin{fakt}[{\cite[Chapter~12.3]{CDP}}]
\label{fakt-przechodniosc-potomkow}
The relation of being a descendant is transitive: if $y \in T^c(x)$ and $w \in T^c(xy)$, then $yw \in T^c(x)$.
\end{fakt}
\begin{fakt}
\label{fakt-synowie-typy-stozkowe}
If $y \in T^c(x)$, then the cone type $T^c(xy)$ is determined by $T^c(x)$ and $y$.
\end{fakt}
\begin{proof}
This results from multiple application of Lemma 12.4.3 in~\cite{CDP}.
\end{proof}
\begin{df}
The \textit{span} of an element $g \in G$ (denoted $\sppan(g)$) is the set of all $x \in \partial G$ such that there exists a geodesic from $e$ to $x$ passing through $g$.
\end{df}
\begin{fakt}
The set $\sppan(g)$ is closed for every $g \in G$.
\end{fakt}
\begin{proof}
Denote $|g| = k$ and let $x_i$ be a sequence in $\sppan(g)$ converging to $x \in \partial G$. We will show that $x$ also belongs to $\sppan(g)$. Let $\gamma_i$ be a geodesic in~$G$ starting at~$e$, converging to~$x_i$ and such that $\gamma_i(k) = g$. By Lemma~\ref{fakt-geodezyjne-przekatniowo}, there is a subsequence $(\gamma_{i_j})$ which coincides with some geodesic~$\gamma_\infty$ on longer and longer initial segments; in particular, we have $\gamma_\infty(0) = e$ and $\gamma_\infty(k) = g$. Moreover, Lemma~\ref{fakt-geodezyjne-przekatniowo} ensures that~$[\gamma_\infty] \in \partial G$ is the limit of~$(x_{i_j})$, so it is equal to~$x$. This means that $x \in \sppan(g)$.
\end{proof}
\begin{fakt}
\label{fakt-stozek-a-span}
For any $g \in G$, $\sppan(g)$ is the set of limits in $\partial G$ of all geodesic rays in $G$ starting at $g$ and contained in $gT^c(g)$.
\end{fakt}
\begin{proof}
Denote $|g| = k$. Let $\alpha$ be a geodesic starting at $g$ and contained in $gT^c(g)$. From the definition of the set $T^c(g)$ it follows that $|\alpha(n)| = n + k$ for every $n > 0$. This shows that for any geodesic $\beta$ connecting $e$ with $g$ the concatenation $\beta \cup \alpha$ is a geodesic, because for every $m > k$ its restriction to $[0, m]$ connects the points $e$ and $\alpha(m - k)$, which are at distance exactly $m$ from each other. Therefore $\lim_{n \rightarrow \infty} \alpha(n)$, being the limit of the geodesic $\beta \cup \alpha$, belongs to $\sppan(g)$.
The opposite inclusion is obvious.
\end{proof}
Let us fix a constant $a > 1$ (depending on the group $G$) used in the definition of the visual metric on $\partial G$.
\begin{fakt}
\label{fakt-spany-male}
Let $g \in G$. If $|g| = n$, then $\diam \sppan(g) \leq C \cdot a^{-n}$, where $C$ is a constant depending only on $G$.
\end{fakt}
\begin{proof}
Let $x, y \in \sppan(g)$ and let $\alpha, \beta$ be geodesics from $e$ through $g$ to $x$ and~$y$, respectively. By Lemma 12.3.1 in~\cite{CDP}, the path $\overline{\beta}$ built by joining the restrictions $\alpha \big|_{[0, n]}$ and $\beta \big|_{[n, \infty)}$ is a geodesic converging to~$y$. On the other hand, $\overline{\beta}$ coincides with~$\alpha$ on the interval~$[0, n]$. Then, if $\gamma$ is a bi-infinite geodesic connecting~$x$ with~$y$, from Lemma~5.2.1 in~\cite{zolta} we obtain $d(e, \gamma) \geq n - 12\delta$, which finishes the proof by the definition of the visual metric.
\end{proof}
\subsection{Ball $N$-types}
\label{sec-typy-kulowe}
\begin{ozn}
For any $x \in G$ and $r > 0$, we denote by $B_r(x)$ the set $\{ y \in G \,|\, d(x, y) \leq r \}$.
\end{ozn}
\begin{df}[{\cite[Chapter~12]{CDP}}]
\label{def-typ-kulowy}
Let $x \in G$ and $N > 0$. We define the \textit{ball $N$-type} of an element $x$ (denoted $T^b_N(x)$) as the function $f^b_{x,\,N} : B_N(e) \rightarrow \mathbb{Z}$ given by the formula
\begin{align}
\label{eq-def-n-typu}
f^b_{x,\,N}(y) = |xy| - |x|.
\end{align}
\end{df}
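The following small computation (added for illustration) shows why there are only finitely many ball $N$-types in a concrete case; here we take $G = \mathbb{Z}$ with generating set $\{\pm 1\}$ and $N = 2$:

```latex
% Ball 2-types in Z. The domain of f^b_{x,2} is B_2(e) = {-2, -1, 0, 1, 2}.
% For x >= 2:   f^b_{x,2}(y) = |x + y| - |x| = y;
% for x <= -2:  f^b_{x,2}(y) = -y;
% for x = 0:    f^b_{0,2}(y) = |y|;
% and x = 1, x = -1 each give one further function (e.g. f^b_{1,2}(-2) = 0).
% Hence Z has exactly five ball 2-types, and all elements with |x| >= 2
% share one of just two of them, depending only on the sign of x.
```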
\begin{lem}[{\cite[Lemma~12.3.3]{CDP}}]
\label{lem-kulowy-wyznacza-stozkowy}
There exists a constant $N_0$, depending only on $G$, such that for any $N \geq N_0$ and $x, y \in G$, the equality $T^b_N(x) = T^b_N(y)$ implies that $T^c(x) = T^c(y)$.
\end{lem}
\begin{fakt}
\label{fakt-kulowy-duzy-wyznacza-maly}
Let $x, y \in G$, $N, k > 0$ and $|y| \leq k$. Then, $T^b_N(xy)$ depends only on $T^b_{N + k}(x)$, $y$ and~$N$.
\end{fakt}
\begin{proof}
Let $f = f^b_{x,\,N+k}$ and $f' = f^b_{xy,\,N}$ denote the functions defining the $(N+k)$-type of $x$ and the $N$-type of $xy$, respectively. Let $z \in B_N(e)$.
Then, $yz$ and $y$ both belong to $B_{N + k}(e)$, which is the domain of~$f$, and moreover
\[ f'(z) = |xyz| - |xy| = |xyz| - |x| - (|xy| - |x|) = f(yz) - f(y). \qedhere \]
\end{proof}
\begin{lem}
\label{lem-potomkowie-dla-kulowych}
Let $N_0$ be the constant from Lemma~\ref{lem-kulowy-wyznacza-stozkowy}. Let $N > N_0 + 8\delta$, $M \geq 0$, $x \in G$ and $y \in T^c(x)$, where $|y| \geq M + 4\delta$. Then, $T^b_M(xy)$ depends only on $T^b_N(x)$, $y$ and $N$, $M$.
\end{lem}
Note that the value $M \geq 0$ in this lemma can be chosen arbitrarily.
\begin{proof}
Let $x, x' \in G$ be such that $T^b_N(x) = T^b_N(x')$. Denote $n = |x|$.
Let $z \in B_M(e)$. We need to prove that
\begin{align}
\label{eq-potomkowie-do-spr}
|xyz| - |xy| = |x'yz| - |x'y|.
\end{align}
Let $\alpha, \beta$ be geodesics connecting $e$ with $xy$ and $xyz$, respectively; we can assume that $\alpha$ passes through $x$. Denote $w = x^{-1}\beta(n)$. Since $n \leq |xy| - M - 4\delta$, by applying Corollary \ref{wn-krzywe-geodezyjne-pozostaja-bliskie} to the geodesics $\alpha$, $\beta$, we obtain
\[ |w| = d(x, xw) = d(\alpha(n), \beta(n)) \leq 8\delta. \]
Then, by the equality $T^b_N(x) = T^b_N(x')$, we deduce from Lemma~\ref{fakt-kulowy-duzy-wyznacza-maly} that $T^b_{N - 8\delta}(xw) = T^b_{N - 8\delta}(x'w)$.
By Lemma \ref{lem-kulowy-wyznacza-stozkowy}, we obtain
\[ T^c(x) = T^c(x'), \qquad T^c(xw) = T^c(x'w), \]
where $y$ belongs to the first set and $w^{-1}yz$ to the second one. This gives \eqref{eq-potomkowie-do-spr} because
\[ |xyz| - |xy| = |xw| + |w^{-1}yz| - (|x| + |y|) = |w^{-1}yz| - |y| = |x'w| + |w^{-1}yz| - (|x'| + |y|) = |x'yz| - |x'y|. \qedhere \]
\end{proof}
\begin{fakt}
\label{fakt-kuzyni-lub-torsje}
For any $r > 0$ there is $N_r > 0$ such that for any $N \geq N_r$ and $g, h \in G$, the conditions
\[ |h| \leq r, \qquad |gh| = |g|, \qquad T^b_N(gh) = T^b_N(g) \]
imply that $h$ is a torsion element.
\end{fakt}
\begin{proof}
If $h$ is not a torsion element, then by the remark following Proposition~1.7.3 in~\cite{zolta} it must be of hyperbolic type (which means that the sequence $(h^n)_{n \in \mathbb{Z}}$ is a bi-infinite quasi-geodesic in $G$). In this situation, a contradiction follows from the proof of Proposition~7.3.1 in~\cite{zolta}, provided that the constant $4\delta$ appearing there is replaced by~$r$. (This change may increase the value of~$N_r$ obtained from the proof, but the argument requires no other modification.)
\end{proof}
\section{Quasi-invariant systems}
\label{sec-konstr}
The presentation of $\partial G$ as a Markov compactum will be obtained in the following steps:
\begin{itemize}[nolistsep]
\item[(i)] choose a suitable system $\mathcal{U}$ of open covers of~$\partial G$;
\item[(ii)] build an inverse system of nerves of these covers (and appropriate maps between them);
\item[(iii)] prove that $\partial G$ is the inverse limit of this system;
\item[(iv)] verify the Markov property (see~Definition~\ref{def-kompakt-markowa}).
\end{itemize}
The steps (ii-iii) and (iv) will be discussed in Sections~\ref{sec-engelking} and \ref{sec-markow}, respectively. In this section, we focus on step~(i). We begin by introducing in Section~\ref{sec-konstr-quasi-niezm} the notion of a \textit{quasi-$G$-invariant system of covers} of $\partial G$ (or, more generally, of a compact metric $G$-space), which summarises the conditions under which we will be able to execute steps (ii-iv). Section~\ref{sec-konstr-gwiazda} contains a proof of an additional \textit{star property} for such systems; we will need it in Section~\ref{sec-engelking}. Finally, in Section~\ref{sec-konstr-pokrycia} we construct an example of a quasi-$G$-invariant system in~$\partial G$, which will serve as the basis for the construction of the Markov system representing $\partial G$.
\subsection{Definitions}
\label{sec-konstr-quasi-niezm}
Let $(X, d)$ be a metric space equipped with a homeomorphic action of a hyperbolic group $G$ (recall that we assume that $G$ is equipped with a fixed set of generators).
Definitions \ref{def-quasi-niezm} and~\ref{def-quasi-niezm-pokrycia} summarise conditions which --- as we will prove in Sections \ref{sec-konstr-gwiazda} and~\ref{sec-markow} --- are sufficient to make the construction of Section \ref{sec-engelking-top} (and in particular Theorem~\ref{tw-konstr}) applicable to the sequence $(\mathcal{U}_n)_{n \geq 0}$, and to guarantee that the constructed inverse system has the Markov property (in the sense of Definition~\ref{def-kompakt-markowa}). In the next subsection, we will construct, for a given hyperbolic group $G$, a~sequence of covers of $\partial G$ with all the properties introduced in this subsection.
\begin{ozn}
\label{ozn-suma-rodz}
For any family $\mathcal{C} = \{ C_x \}_{x \in G}$ of subsets of a~space $X$, we denote:
\[ \mathcal{C}_n = \big\{ C_x \ \big|\ x \in G, \ |x| = n \big\}, \qquad |\mathcal{C}|_n = \bigcup_{C \in \mathcal{C}_n} C. \]
We will usually identify the family $\mathcal{C}$ with the sequence of subfamilies $(\mathcal{C}_n)_{n \geq 0}$.
\end{ozn}
\dzm{
\begin{df}
\label{def-funkcja-typu}
By a \textit{type function} on~$G$
we will mean any function~$T$ on~$G$
with values in a~finite set. For $x \in G$, the value $T(x)$ will be called the \textit{($T$-)type} of~$x$.
Analogously, by a \textit{type function} on a system $(K_n)_{n \geq 0}$ of simplicial complexes we will mean any function $T$ mapping simplexes of all $K_n$ to a finite set; the value $T(\sigma)$ will be called the \textit{($T$-)type} of $\sigma$.
For two type functions $T_1, T_2$ on $G$ (resp. on a system $(K_n)_{n \geq 0}$), we will call $T_1$ \textit{stronger} than $T_2$ if the $T_2$-type of any element (resp. simplex) can be determined from its $T_1$-type.
\end{df}
}
\begin{df}
\label{def-quasi-niezm}
A family $\mathcal{C} = \{ C_x \}_{x \in G}$ of subsets of a $G$-space $X$ is a~\textit{quasi-$G$-invariant system} \dzm{(with respect to a type function $T : G \rightarrow \mathcal{T}$)} if there exists a \textit{neighbourhood constant} $D > 0$ and a~\textit{jump constant} $J > 0$ such that:
\begin{itemize}[leftmargin=1.5cm]
\qhitem{c}{QI1}{QI1} the sequence of subfamilies $(\mathcal{C}_n)_{n \geq 0}$, where $\mathcal{C}_n = \big\{ C_x \ \big|\ x \in G, \ |x| = n \big\}$, has mesh property (in the sense of Definition \ref{def-mesh}a);
\qhitem{d}{QI2}{QI2} for every $n$ and $x, y \in G$, the following implication holds:
\[ |x| = |y| = n, \quad C_x \cap C_y \neq \emptyset \qquad \Rightarrow \qquad d(x, y) \leq D; \]
\qhitem{e}{QI3}{QI3} for every $x \in G$ and $0 < k \leq \tfrac{|x|}{J}$, there exists $y \in G$ such that $|y| = |x| - kJ$ and $C_y \supseteq C_x$;
\qhitem{f}{QI4}{QI4} whenever $T(x) = T(gx)$ for $g, x \in G$, we have:
\begin{itemize}
\qhitem{f1}{a}{QI4a} $C_{gx} = g \cdot C_x$;
\qhitem{f2}{b}{QI4b} for every $y \in G$ such that $|y| = |x|$ and $C_x \cap C_y \neq \emptyset$, we have
\[ C_{gy} = g \cdot C_y, \qquad |gy| = |gx|; \]
\qhitem{f3}{c}{QI4c}
for every $y \in G$ such that $|y| = |x| + kJ$ for some $k > 0$ and $\emptyset \neq C_y \subseteq C_x$, we have
\[ |gy| = |gx| + kJ, \qquad T(gy) = T(y), \qquad \textrm{and \ so} \qquad C_{gy} = g \cdot C_y. \]
\end{itemize}
\end{itemize}
\end{df}
\begin{uwaga}
\label{uwaga-quasi-niezm-jeden-skok}
Let us note that if \qhlink{e} is satisfied for $k = 1$, then by induction it must hold for all $k > 0$, and that the same applies to \qhlink{f3}.
\end{uwaga}
\begin{uwaga}
\label{uwaga-quasi-niezm-rozne-poziomy}
From now on, we adopt the convention that the sets belonging to $\mathcal{C}_n$ are implicitly equipped with the value of~$n$; this would matter only if some subsets $C_1 \in \mathcal{C}_{n_1}$, $C_2 \in \mathcal{C}_{n_2}$ with $n_1 \neq n_2$ happen to consist of the same elements. In this case, we will treat $C_1$, $C_2$ as \textit{not} equal; in particular, any condition of the form $C_1 = C_g$ will implicitly imply $|g| = n_1$. This should not lead to confusion since, although we will often consider an inclusion between an element of $\mathcal{C}_{n_1}$ and an element of $\mathcal{C}_{n_2}$ with $n_1 \neq n_2$, we will never be interested in whether set-equality holds between these objects.
\end{uwaga}
\begin{df}
\label{def-quasi-niezm-pokrycia}
A system $\mathcal{C} = \{ C_x \}_{x \in G}$ of subsets of $X$ will be called \textit{a system of covers} if $\mathcal{C}_n$ is an open cover of $X$ for every $n \geq 0$.
\end{df}
\dzm{
\begin{df}
\label{def-quasi-niezm-system-wpisany}
Let $\mathcal{C} = \{ C_x \}_{x \in G}$, $\mathcal{D} = \{ D_x \}_{x \in G}$ be two quasi-$G$-invariant systems of subsets of~$X$. We will say that $\mathcal{C}$ is \textit{inscribed} in~$\mathcal{D}$ if $C_x \subseteq D_x$ for every $x \in G$, and if the type function associated to~$\mathcal{C}$ is stronger than the one associated to~$\mathcal{D}$.
\end{df}
}
\subsection{The star property}
\label{sec-konstr-gwiazda}
\begin{df}
Let $\mathcal{U}$ be an open cover of~$X$, and $U \in \mathcal{U}$. Then, the \textit{star} of $U$ in~$\mathcal{U}$ is the union $\bigcup\{U_i \, | \, U_i \in \mathcal{U}, \ U_i \cap U \neq \emptyset\}$.
\end{df}
\begin{df}
\label{def-wl-gwiazdy}
Let $(\mathcal{U}_n)$ be a family of open covers of~$X$. We say that $(\mathcal{U}_n)$ has \textit{star property} if, for every $n > 0$, every star in the cover $\mathcal{U}_n$ is contained in some element of the cover $\mathcal{U}_{n-1}$; more formally:
\[ \forall_{n > 0} \ \forall_{U \in \mathcal{U}_n} \ \exists_{V \in \mathcal{U}_{n-1}} \ \bigcup_{U' \in \mathcal{U}_n; \, U \cap U' \neq \emptyset} U' \subseteq V. \]
\end{df}
\begin{lem}
\label{lem-gwiazda}
Let $(\mathcal{U}_n)$ be a quasi-$G$-invariant system of covers of a compact metric $G$-space $X$ and let $J$ denote its jump constant. Then, there exists a constant $L_0$ such that, for any $L \geq L_0$ divisible by~$J$, the sequence of covers $(\mathcal{U}_{Ln})_{n \in \mathbb{N}}$ has star property.
\end{lem}
\begin{proof}
Let $L_{(i)}$ be a constant such that, for every $j \geq i + L_{(i)}$, every element of~$\mathcal{U}_j$ together with its star is contained in some set from $\mathcal{U}_i$. Its existence follows immediately from the existence of a Lebesgue number for $\mathcal{U}_i$ and from the mesh property of the system~$(\mathcal{U}_n)$.
Since the type function~$T$ takes only finitely many values, there exists $S > 0$ such that for any $g \in G$ there is $g' \in G$ such that $|g'| < S$ and $T(g) = T(g')$.
We will show that the claim of the lemma is satisfied by
\[ L_0 = 1 + \max \{ L_{(i)} \,|\, i < S \}. \]
Let $L \geq L_0$ be divisible by $J$ and let $g \in G$ satisfy $|g| = L(k + 1)$; we want to prove that there exists $\tilde{f} \in G$ of length $Lk$ such that $U_{\tilde{f}}$ contains $U_g$ together with all its neighbours in $\mathcal{U}_{L(k+1)}$. If $Lk < S$, this holds by the inequality $L \geq L_0 \geq L_{(Lk)}$ and the definition of the constant $L_{(Lk)}$.
Otherwise, by the property \qhlink{e} there exists $f$ of length $Lk$ such that $U_g \subseteq U_{f}$. Let $f' \in G$ of length $j < S$ satisfy $T(f') = T(f)$.
Denote $h = f' f^{-1}$. Then, since $J \mathrel{|} L$, by~\qhlink{f3} we have
\begin{align}
\label{eq-gwiazda-sukces-na-malej}
U_{hg} = h \cdot U_g \subseteq h \cdot U_{f} = U_{f'}, \qquad T(hg) = T(g), \qquad |hg| = j + L.
\end{align}
Therefore, since $j < S$, there exists some $\tilde{f}'$ of length $j$ such that $U_{\tilde{f}'}$ contains $U_{hg}$ together with its whole star. Then, by \eqref{eq-gwiazda-sukces-na-malej}, we have $U_{\tilde{f}'} \cap U_{f'} \neq \emptyset$, and so from \qhlink{f2} we obtain
\[ U_{h^{-1}\tilde{f}'} = h^{-1} \cdot U_{\tilde{f}'}, \qquad |h^{-1} \tilde{f}'| = |h^{-1} f'| = Lk. \]
Now, let $|x| = |g|$ and $U_x \cap U_g \neq \emptyset$. Then, from \eqref{eq-gwiazda-sukces-na-malej} and~\qhlink{f2} we have $U_{hx} = h \cdot U_x$; in particular, $U_{hx}$ is contained in the star of the set $U_{hg} = h \cdot U_g$, and so it is contained in $U_{\tilde{f}'}$. Then, by \qhlink{f3}:
\[ U_x = h^{-1} \cdot U_{hx} \subseteq h^{-1} \cdot U_{\tilde{f}'} = U_{h^{-1} \tilde{f}'}. \]
This means that the element $\tilde{f} := h^{-1}\tilde{f}'$ has the desired property.
\end{proof}
\subsection{The system of span-star interiors}
\label{sec-konstr-pokrycia}
\begin{df}
\label{def-konstr-towarzysze}
For every $x \in G$ and $r > 0$, we denote
\[ P(x) = \big\{ y \in G \,\big|\, |xy| = |x| \big\}, \qquad P_r(x) = P(x) \cap B_r(e). \]
If $y \in P(x)$ (resp. $P_r(x)$), we call $xy$ a \textit{fellow} (resp. \textit{$r$-fellow}) of $x$.
\end{df}
From the definition of the ball type, we obtain the following property.
\begin{fakt}
\label{fakt-kulowy-wyznacza-towarzyszy}
If $N \geq r > 0$, then the set $P_r(x)$ depends only on $T^b_N(x)$ and $r$, $N$. \qed
\end{fakt}
\begin{df}
\label{def-span-star}
We define the set $S_g$ as the interior of span-star in $\partial G$ around $\sppan(g)$:
\[ S_g = \innt \Big( \bigcup_{h \in I(g)} \sppan(gh) \Big), \qquad \textrm{ where } \quad I(g) = \big\{ h \in P(g) \,\big|\, \sppan(gh) \cap \sppan(g) \neq \emptyset \big\}. \]
For any $k > 0$, we define the family
\[ \mathcal{S}_k = \{ S_g \ |\ g \in G, \, |g| = k, \, S_g \neq \emptyset \}. \]
\end{df}
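As a sanity check of the definition (an added remark for the tree case, where one may take $\delta = 0$), in a free group the span-star construction gives back the spans themselves:

```latex
% Span-stars in F_2. Spans of distinct elements of a common sphere are
% disjoint cylinder sets, so span(gh) meets span(g) only for h = e; hence
% I(g) = {e} and
\[ S_g \;=\; \innt\big(\sppan(g)\big) \;=\; \sppan(g), \]
% because cylinder sets are clopen in the boundary of a tree.
```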
\begin{fakt}
\label{fakt-span-w-pokryciu}
For every $g \in G$, we have $\sppan(g) \subseteq S_g$.
\end{fakt}
\begin{proof}
Let us consider the equality
\[ \partial G = \bigcup_{h \in P(g)} \sppan(gh) = \Big( \bigcup_{h \in I(g)} \sppan(gh) \Big) \cup \Big( \bigcup_{h \in P(g) \setminus I(g)} \sppan(gh) \Big). \]
The second summand is disjoint from $\sppan(g)$, and moreover closed (as a finite union of closed sets), which means that $\sppan(g)$ must be contained in the interior of the first summand, which is exactly $S_g$.
\end{proof}
\begin{wn}
\label{wn-konstr-pokrycie}
For every $k > 0$, the family $\mathcal{S}_k$ is a cover of $\partial G$.
\end{wn}
\begin{proof}
This is an easy application of the above lemma and of the equality $\partial G = \bigcup_{g \in G \,:\, |g| = k} \sppan(g)$.
\end{proof}
\begin{fakt}
\label{fakt-pokrycie-male}
Under the notation of Lemma \ref{fakt-spany-male}, for every $k > 0$ and $U \in \mathcal{S}_k$, we have $\diam U \leq 3 C \cdot a^{-k}$.
\end{fakt}
\begin{proof}
Let $U = S_g$ for some $g \in G$, where $|g|=k$, and let $x, y \in S_g$. Then, $x \in \sppan(gh_1)$ and $y \in \sppan(gh_2)$ for some $h_1, h_2 \in I(g)$. By Lemma \ref{fakt-spany-male}, we obtain
\[ d(x, y) \leq \diam \sppan(gh_1) + \diam \sppan(g) + \diam \sppan(gh_2) \leq 3 C \cdot a^{-k}. \qedhere \]
\end{proof}
\begin{fakt}
\label{fakt-sasiedzi-blisko}
Let $h \in P(g)$. Then:
\begin{itemize}
\item[\textbf{(a)}] If $\sppan(g) \cap \sppan(gh) \neq \emptyset$, then $|h| \leq 4\delta$ (so: $I(g) \subseteq P_{4\delta}(g)$);
\item[\textbf{(b)}] If $S_g \cap S_{gh} \neq \emptyset$, then $|h| \leq 12\delta$.
\end{itemize}
\end{fakt}
\begin{proof}
\textbf{(a)} Let $|g| = |gh| = k$ and $x \in \sppan(g) \cap \sppan(gh)$. Then, there exist geodesics $\alpha, \beta$ starting at $e$ and converging to $x$ such that $\alpha(k) = g$, $\beta(k) = gh$.
By inequality (1.3.4.1) in~\cite{zolta}, this implies that $d(g, gh) \leq 4\delta$.
\textbf{(b)} Let $x \in S_g \cap S_{gh}$. Then, by definition, we have $x \in \sppan(gu) \cap \sppan(ghv)$ for some $u \in I(g)$, $v \in I(gh)$. Using part~\textbf{(a)}, we obtain
\[ |h| \leq |u| + |u^{-1}hv| + |v^{-1}| \leq 4\delta + 4\delta + 4\delta = 12\delta. \qedhere \]
\end{proof}
\begin{fakt}
\label{fakt-wlasnosc-gwiazdy-bez-gwiazdy}
Let $g \in G$ and $k < |g|$. Then:
\begin{itemize}
\item[\textbf{(a)}] there exists $f \in G$ of length $k$ such that $g \in fT^c(f)$;
\item[\textbf{(b)}] for any $f \in G$ with the properties from part~\textbf{(a)}, we have $\sppan(g) \subseteq \sppan(f)$;
\item[\textbf{(c)}] for any $f \in G$ with the properties from part~\textbf{(a)}, we have $S_g \subseteq S_f$.
\end{itemize}
\end{fakt}
\begin{proof}
\textbf{(a)} Let $\alpha$ be a geodesic from $e$ to $g$. Then, $f = \alpha(k)$ has the desired properties.
\textbf{(b)} If $f$ has the properties from part~\textbf{(a)}, then, by Lemma \ref{fakt-przechodniosc-potomkow}, we have $gT^c(g) \subseteq fT^c(f)$, so it remains to apply Lemma \ref{fakt-stozek-a-span}.
\textbf{(c)} By parts~\textbf{(a)} and~\textbf{(b)}, for any $h \in I(g)$ there exists some element $f_h$ of length $k$ such that $\sppan(gh) \subseteq \sppan(f_h)$; here $f_e$ can be chosen to be $f$. In particular, we have:
\[ \emptyset \neq \sppan(g) \cap \sppan(gh) \subseteq \sppan(f) \cap \sppan(f_h), \]
so $f^{-1} f_h \in I(f)$. Since $h \in I(g)$ is arbitrary, we obtain
\[ \bigcup_{h \in I(g)} \sppan(gh) \subseteq \bigcup_{h \in I(g)} \sppan(f_h) \subseteq \bigcup_{x \in I(f)} \sppan(f x). \]
By taking the interiors of both sides of this containment, we get the claim.
\end{proof}
\begin{lem}
\label{lem-typy-pokrycia-niezmiennicze}
Let $N_0$ denote the constant from Lemma~\ref{lem-kulowy-wyznacza-stozkowy}. Assume that $N, r \geq 0$ and $g, x \in G$ satisfy $T^b_N(gx) = T^b_N(x)$. Then:
\begin{itemize}
\item[\textbf{(a)}] if $N \geq N_0$, then $\sppan(gx) = g \cdot \sppan(x)$;
\item[\textbf{(b)}] if $N \geq N_0 + r$, then $\sppan(gxy) = g \cdot \sppan(xy)$ for $y \in P_r(x)$;
\item[\textbf{(c)}] if $N \geq N_1 := N_0 + 4\delta$, then $S_{gx} = g \cdot S_x$;
\item[\textbf{(d)}] if $N \geq N_1 + r$, then $S_{gxy} = g \cdot S_{xy}$ for $y \in P_r(x)$;
\item[\textbf{(e)}] if $N \geq N_2 := N_0 + 16\delta$ and $y \in G$ satisfy $|y| = |x|$ and $S_x \cap S_y \neq \emptyset$, then
\[ S_{gy} = g \cdot S_y, \qquad \textrm{ and moreover } \quad |gy| = |gx|; \]
\item[\textbf{(f)}] if $N \geq N_3 := N_0 + 21\delta$, $k \geq 0$, $L > N + k + 4\delta$ and $y \in G$ satisfy $|y| = |x| + L$ and $\emptyset \neq S_y \subseteq S_x$, then:
\[ S_{gy} = g \cdot S_y, \qquad \textrm{ and moreover } \quad |gy| = |gx| + L \quad \textrm{and} \quad T^b_{N + k}(gy) = T^b_{N + k}(y). \]
\end{itemize}
\end{lem}
\begin{proof}
\textbf{(a)} If $N \geq N_0$, then by Lemma \ref{lem-kulowy-wyznacza-stozkowy} we have $T^c(gx) = T^c(x)$ and so $gxT^c(gx) = g \cdot xT^c(x)$. In particular, the left action of $g$, which is an isometry, gives a one-to-one correspondence between geodesics in $G$ starting at $x$ and contained in $xT^c(x)$ and geodesics in $G$ starting at $gx$ and contained in~$gxT^c(gx)$. Then, the claim holds by Lemma~\ref{fakt-stozek-a-span} and by continuity of the action of $g$ on~$G \cup \partial G$.
\textbf{(b)} If $N \geq N_0 + r$, then, by Lemma~\ref{fakt-kulowy-duzy-wyznacza-maly}, we have $T^b_{N_0}(gxy) = T^b_{N_0}(xy)$ for every $y \in P_r(x)$; it remains to apply~\textbf{(a)}.
\textbf{(c)} Let $y \in I(x)$. By Lemma~\ref{fakt-sasiedzi-blisko}a, we have $y \in P_{4\delta}(x)$. Since $N \geq N_0 + 4\delta$, from~\textbf{(b)} and Lemma~\ref{fakt-kulowy-wyznacza-towarzyszy} we obtain that
\[ \sppan(gx) = g \cdot \sppan(x), \qquad \sppan(gxy) = g \cdot \sppan(xy), \qquad y \in P_{4\delta}(gx). \]
Since $\sppan(x) \cap \sppan(xy) \neq \emptyset$, by acting with~$g$ we obtain $\sppan(gx) \cap \sppan(gxy) \neq \emptyset$, so $y \in I(gx)$. Then, we have
\[ g \cdot \bigcup_{y \in I(x)} \sppan(xy) = \bigcup_{y \in I(x)} \sppan(gxy) \subseteq \bigcup_{y \in I(gx)} \sppan(gxy). \]
By an analogous reasoning for the inverse element $g^{-1}$, we prove that the above containment is in fact an equality. Moreover, since the left action of~$g$ is a homeomorphism, it must map the interior of the left-hand side union (which is~$S_x$) exactly onto the interior of the right-hand side union (which is~$S_{gx}$).
\textbf{(d)} This follows from~\textbf{(c)} in the same way as \textbf{(b)} was obtained from~\textbf{(a)}.
\textbf{(e)} By Lemma~\ref{fakt-sasiedzi-blisko}b, we have $x^{-1} y \in P_{12\delta}(x)$. Then, the first part of the claim follows from~\textbf{(d)}. For the second part, note that from $N \geq 16\delta$ we obtain that $x^{-1} y \in P_N(x)$ which is contained in the domain of $T^b_N(x)$ (as a~function); hence, the assumption that $T^b_N(x) = T^b_N(gx)$ implies that $|gxy| - |gx| = |xy| - |x| = 0$, as desired.
\textbf{(f)} Let $|y| \geq |x|$ and $S_y \subseteq S_x$. By Lemma~\ref{fakt-wlasnosc-gwiazdy-bez-gwiazdy}, there exists $z \in G$ such that
\[ |xz| = |x|, \qquad y \in xzT^c(xz), \qquad S_y \subseteq S_{xz}. \]
In particular, $S_{xz} \cap S_x \neq \emptyset$, and so by Lemma~\ref{fakt-sasiedzi-blisko} we have $z \in P_{12\delta}(x)$. By Lemmas~\ref{fakt-kulowy-duzy-wyznacza-maly} and~\ref{fakt-kulowy-wyznacza-towarzyszy}, we obtain
\begin{align}
\label{eq-niezm-z-tow-gx}
T^b_{N - 12\delta}(gxz) = T^b_{N - 12\delta}(xz), \qquad z \in P_{12\delta}(gx).
\end{align}
From the first of these properties and from Lemma \ref{lem-kulowy-wyznacza-stozkowy}, we have $T^c(gxz) = T^c(xz)$. Since $(xz)^{-1}y$ belongs to $T^c(xz)$ and is of length
\begin{align}
\label{eq-niezm-dlugosci}
|(xz)^{-1}y| = |y| - |xz| = |y| - |x| = L > N + k + 4\delta,
\end{align}
by applying Lemma~\ref{lem-potomkowie-dla-kulowych} to \eqref{eq-niezm-z-tow-gx} and the element $(xz)^{-1}y$ (with parameters $N - 12\delta > N_0 + 8\delta$ and $N + k$), we obtain
\[ T^b_{N + k}(gy) = T^b_{N + k}(y), \]
and then from \textbf{(c)}
\[ S_{gy} = g \cdot S_y. \]
Moreover, the conditions $(xz)^{-1}y \in T^c(gxz)$, \eqref{eq-niezm-dlugosci} and \eqref{eq-niezm-z-tow-gx} imply that
\[ |gy| = |gxz| + |(xz)^{-1}y| = |gxz| + L = |gx| + L, \]
which finishes the proof.
\end{proof}
\begin{wn}
\label{wn-spanstary-quasi-niezm}
For $N \geq N_3$, the sequence of covers $(\mathcal{S}_n)$ together with the type function $T = T^b_N$ is a quasi-$G$-invariant system of covers.
\end{wn}
\begin{proof}
We have checked in Corollary \ref{wn-konstr-pokrycie} that every $\mathcal{S}_n$ is a~cover of $\partial G$; it is obviously an open cover. The subsequent conditions from Definition \ref{def-quasi-niezm} hold respectively by \ref{fakt-pokrycie-male}, \ref{fakt-sasiedzi-blisko}b, \ref{fakt-wlasnosc-gwiazdy-bez-gwiazdy} and Lemma~\ref{lem-typy-pokrycia-niezmiennicze}c,e,f (for $k = 0$). Here, we take the following constants:
\[ D = 12\delta, \qquad J_0 = 0, \qquad J = N_3 + 4\delta. \qedhere \]
\end{proof}
\section{Inverse limit construction}
\label{sec-engelking}
\dzm{
In this section, we recall a classical construction (see Theorem~\ref{tw-konstr} below) which presents --- up to a~homeomorphism --- every compact metric space~$X$ as the inverse limit of the sequence of nerves of an appropriate system of covers of~$X$ (which we will call \textit{admissible}; see Definition~\ref{def-konstr-admissible}). We will also show (in Lemma~\ref{fakt-konstr-sp-zal}) that admissible systems can be easily obtained from any quasi-$G$-invariant system of covers~$\mathcal{U}$.
In Section~\ref{sec-bi-lip}, we investigate this construction in the particular case when~$X = \partial G$ and $\mathcal{U}$ is inscribed in the system~$\mathcal{S}$ from Section~\ref{sec-konstr-pokrycia}. As we will show in Theorem~\ref{tw-bi-lip}, in this case the construction also describes certain metric properties of $\partial G$. (See the introduction to Section~\ref{sec-bi-lip} for more details.)
}
\dzm{
\subsection{A topological description by limit of nerves}
\label{sec-engelking-top}
}
Let $X$ be a~compact metric space.
\begin{df}
Recall that the \textit{rank} of a family $\mathcal{U}$ of subspaces of a space $X$ is the maximal number of elements of $\mathcal{U}$ which have non-empty intersection.
\end{df}
\begin{df}
\label{def-konstr-admissible}
A sequence $(\mathcal{U}_i)_{i \geq 0}$ of open covers of~$X$ will be called an \textit{admissible system} if the following holds:
\begin{itemize}
\item[(i)] for every $i \geq 0$, the cover $\mathcal{U}_i$ is finite and does not contain empty sets;
\item[(ii)] there exists $n \geq 0$ such that $\rank \mathcal{U}_i \leq n$ for every $i \geq 0$;
\item[(iii)] the sequence $(\mathcal{U}_i)_{i \geq 0}$ has mesh property (in the sense of Definition \ref{def-mesh}a);
\item[(iv)] the sequence $(\mathcal{U}_i)_{i \geq 0}$ has star property (see Definition \ref{def-wl-gwiazdy}).
\end{itemize}
\end{df}
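A minimal example of an admissible system (added here for illustration; it does not arise from a group action) is given by the cylinder covers of the Cantor set:

```latex
% X = {0,1}^N with d(x, y) = 2^{-min{ i : x_i != y_i }}, and
% U_i = { [w] : w in {0,1}^i }, where [w] is the set of sequences extending w.
% (i)   each U_i is finite (2^i sets) and contains no empty set;
% (ii)  cylinders of a common level are pairwise disjoint, so rank U_i = 1;
% (iii) diam [w] <= 2^{-i} -> 0, which is the mesh property;
% (iv)  the star of [w] is [w] itself, and it is contained in the
%       level-(i-1) cylinder obtained by dropping the last letter of w.
% Hence (U_i) is an admissible system.
```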
There is an easy connection between this notion and the contents of the previous section:
\begin{fakt}
\label{fakt-konstr-sp-zal}
Let $(\mathcal{U}_n)$ be a quasi-$G$-invariant system of covers of a $G$-space $X$. Define
\[ \widetilde{\mathcal{U}}_n = \{ U \in \mathcal{U}_n \,|\, U \neq \emptyset \}. \]
Let $L_0$ denote the constant obtained for the system $(\mathcal{U}_n)$ from Lemma~\ref{lem-gwiazda}. Then, for any $L \geq L_0$, the sequence of covers $(\widetilde{\mathcal{U}}_{nL})_{n \geq 0}$ is admissible.
\end{fakt}
\begin{proof}
Clearly, for every $n \geq 0$ the family $\widetilde{\mathcal{U}}_n$ is an open cover of $X$. The condition (i) follows from the definition of~$\widetilde{\mathcal{U}}_n$. The mesh and star properties follow respectively from property \qhlink{c} and Lemma~\ref{lem-gwiazda}.
Finally, the condition (ii) follows from \qhlink{d}: whenever $U_x \cap U_y \neq \emptyset$, we have $d(x, y) \leq D$, so $x^{-1}y$ belongs to the ball in $G$ centred at $e$ of radius $D$. This means that the rank of the cover $\mathcal{U}_n$ (and thus also of $\widetilde{\mathcal{U}}_n$) does not exceed the number of elements in this ball, which is finite and independent of $n$.
\end{proof}
\begin{ozn}
Let $\mathcal{U}$ be an open cover of~$X$. For $U \in \mathcal{U}$, we denote by~$v_U$ the vertex in the nerve of~$\mathcal{U}$ corresponding to~$U$. We also denote by~$[v_1, \ldots, v_n]$ the simplex in~this nerve spanned by vertices $v_1, \ldots, v_n$.
\end{ozn}
\begin{df}
\label{def-konstr-nerwy}
For an admissible system $(\mathcal{U}_i)_{i \geq 0}$ in~$X$, we define the \textit{associated system of nerves} $(K_i, f_i)_{i \geq 0}$, where $f_i : K_{i+1} \rightarrow K_i$ for $i \geq 0$, as follows:
\begin{itemize}
\item[(i)] for $i \geq 0$, $K_i$ is the nerve of the cover~$\mathcal{U}_i$;
\item[(ii)] for $U \in \mathcal{U}_{i+1}$, $f_i(v_U)$ is the barycentre of the simplex spanned by $\{ v_V \,|\, V \in \mathcal{U}_i, \, V \supseteq U \}$;
\item[(iii)] on the remaining points of $K_{i+1}$, we extend $f_i$ so that it is affine on every simplex.
\end{itemize}
For any $j \geq 0$, we denote by $\pi_j$ the natural projection from the inverse limit $\mathop{\lim}\limits_{\longleftarrow} K_i$ to $K_j$.
\end{df}
\begin{uwaga}
If $v_{U_1}, \ldots, v_{U_n}$ span a~simplex in $K_{i+1}$, then $U_1 \cap \ldots \cap U_n \neq \emptyset$; this implies that the family $\mathcal{A} = \{ V \in \mathcal{U}_i \,|\, V \supseteq U_1 \cap \ldots \cap U_n \}$ has a non-empty intersection and therefore the vertices $\{ v_V \,|\, V \in \mathcal{A} \}$ span a simplex in~$K_i$ which contains all the images $f_i(v_{U_j})$ for $1 \leq j \leq n$. This ensures that the affine extension described in condition (iii) of Definition~\ref{def-konstr-nerwy} is indeed possible.
\end{uwaga}
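For a disjoint system of covers, say the cylinder covers $\mathcal{U}_i$ of the Cantor set $\{0,1\}^{\mathbb{N}}$ (an added illustration, not part of the original text), the associated nerve system is particularly transparent:

```latex
% Take X = {0,1}^N and U_i = the cover by cylinders [w], w in {0,1}^i.
% These covers are disjoint, so each nerve K_i is a discrete set of 2^i
% vertices v_[w]. The unique V in U_i containing [wa] is [w], hence
%   f_i(v_[wa]) = v_[w]   (the barycentre of a 0-simplex is its vertex).
% The inverse limit of the system K_0 <- K_1 <- ... is the set of infinite
% binary sequences, and the map sending x to the sequence of cylinders
% containing it is a homeomorphism of X onto this limit.
```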
The following theorem is essentially an adjustment of Theorem~1.13.2 in~\cite{E} to our needs (see the discussion below).
\begin{tw}
\label{tw-konstr}
Let $(\mathcal{U}_i)_{i \geq 0}$ be an admissible system in~$X$, and let $(K_i, f_i)$ be its associated nerve system. For any $x \in X$ and $i \geq 0$, denote by $K_i(x)$ the simplex in $K_i$ spanned by the set $\{ v_U \,|\, U \in \mathcal{U}_i, \ x \in U \}$. Then:
\begin{itemize}
\item[\textbf{(a)}] The system $(K_i, f_i)$ has mesh property;
\item[\textbf{(b)}] For every $x \in X$, the space $\mathop{\lim}\limits_{\longleftarrow} K_i(x) \subseteq \mathop{\lim}\limits_{\longleftarrow} K_i$ has a~unique element, which we will denote by~$\varphi(x)$;
\item[\textbf{(c)}] The map $\varphi : X \rightarrow \mathop{\lim}\limits_{\longleftarrow} K_i$ defined above is a~homeomorphism.
\end{itemize}
\end{tw}
A proof of Theorem~\ref{tw-konstr} can be obtained from the proof of Theorem~1.13.2 given in~\cite{E} as follows:
\begin{itemize}
\item Although our assumptions are different from those in~\cite{E}, they still imply all statements in the proof given there, except for the condition labelled as~(2). However, this condition is used there only to ensure the mesh and star properties of~$(\mathcal{U}_i)$, which we have assumed anyway.
\item The theorem from~\cite{E} does not state the mesh property for the nerve system.
However, an inductive application of the inequality labelled as~(6) in its proof gives (in our notation) that:
\begin{align}
\label{eq-konstr-szacowanie-obrazow}
\diam f^j_i(\sigma) \leq \left( \tfrac{n}{n + 1} \right)^{j - i} \qquad \textrm{ for every simplex $\sigma$ in~$K_j$},
\end{align}
where $n$ denotes the upper bound for the rank of covers required by Definition~\ref{def-konstr-admissible}. The right-hand side of~\eqref{eq-konstr-szacowanie-obrazow} does not depend on~$\sigma$, but only on~$i$, and tends to zero as $i \rightarrow \infty$, which proves mesh property for the nerve system. (Although the above estimate holds only for the particular metric on~$K_i$ used in~\cite{E}, this suffices to deduce the mesh property in view of Remark~\ref{uwaga-mesh-bez-metryki}).
\end{itemize}
\subsection{A metric description for systems inscribed in~$\mathcal{S}$}
\label{sec-bi-lip}
\dzm{
Let~$G$ be a hyperbolic group, and let~$\mathcal{U}$ be a quasi-$G$-invariant system of covers of~$\partial G$, inscribed in the system~$\mathcal{S}$ defined in Section~\ref{sec-konstr-pokrycia}. We will now prove that, under these assumptions, the homeomorphism $\varphi : \partial G \rightarrow \mathop{\lim}\limits_{\longleftarrow} K_i$ obtained from Theorem~\ref{tw-konstr} on the basis of~$\mathcal{U}$ (through Lemma~\ref{fakt-konstr-sp-zal}) is a bi-Lipschitz equivalence --- when $\partial G$ is considered with the visual metric $d_v^{(a)}$ for a sufficiently small value of~$a$, and $\mathop{\lim}\limits_{\longleftarrow} K_i$ with the natural \textit{simplicial metric} (see Definition~\ref{def-metryka-komp} below) for the same value of~$a$.
}
To put this in context, let us recall the known properties of visual metrics on~$\partial G$. The definition of the visual metric given in~\cite{zolta} depends not only on the choice of~$a$, but also on the choice of a basepoint in the group (in this paper, we always set it to be~$e$) and of a set of its generators. It is known that the visual metrics obtained for different choices of these parameters need not be bi-Lipschitz equivalent; however, they all determine the same quasi-conformal structure (\cite[Theorems~2.18 and~3.2]{Kap}). In this situation, Theorem~\ref{tw-bi-lip} shows that this natural quasi-conformal structure on~$\partial G$ can just as well be described by means of \dzm{the inverse limit of polyhedra which we have built so far. This will enable us, in view of Theorems~\ref{tw-kompakt-ogolnie} and~\ref{tw-sk-opis} (to be shown in the next sections), to} give (indirectly) a description of quasi-conformal structures on the boundaries of hyperbolic groups in terms of appropriate Markov systems.
\subsubsection{The simplicial metric}
Let us recall the definition of the metric on simplicial complexes used in the proof of Theorem~1.13.2 in~\cite{E} (which serves as the base for Theorem~\ref{tw-konstr}). For any $n \geq 1$ and $1 \leq i \leq n$, we denote
\[ e_i = (\underbrace{0, \ldots, 0}_{i-1}, 1, 0, \ldots, 0) \in \mathbb{R}^n. \]
\begin{df}
\label{def-metryka-l1}
Let $K$ be a simplicial complex with $n$ vertices. Let $m \geq n$ and let $f : K \rightarrow \mathbb{R}^m$ be an injective affine map sending the vertices of $K$ to points of the form $e_i$ (for $1 \leq i \leq m$). We define the \textit{$l^1$ metric} on $K$ by the formula:
\[ d_K(x, y) = \| f(x) - f(y) \|_1 \qquad \textrm{ for } x, y \in K. \]
\end{df}
\begin{uwaga}
\label{uwaga-metryka-l1-sens}
The metric given by Definition \ref{def-metryka-l1} does not depend on the choice of $m$ and~$f$ because any other affine inclusion $f' : K \rightarrow \mathbb{R}^{m'}$ must be (after restriction to $K$) a composition of $f$ with a linear coordinate change which is an isometry with respect to the norm $\| \cdot \|_1$.
\end{uwaga}
\begin{uwaga}
\label{uwaga-metryka-l1-ogr}
Since in Definition~\ref{def-metryka-l1} we have $f(K) \subseteq \{ (x_i) \,|\, x_i \geq 0 \textrm{ for } 1 \leq i \leq m, \ \sum_{i=1}^m x_i = 1 \}$, it can be easily deduced that any complex $K$ has diameter at most $2$ in the $l^1$ metric.
\end{uwaga}
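The bound from Remark~\ref{uwaga-metryka-l1-ogr} can be verified directly: for any $x, y \in K$, both $f(x)$ and~$f(y)$ have non-negative coordinates summing up to~$1$, so
\[ \| f(x) - f(y) \|_1 \leq \| f(x) \|_1 + \| f(y) \|_1 = 1 + 1 = 2. \]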
\begin{df}
\label{def-metryka-komp}
Let $(K_i, f_i)_{i \geq 0}$ be an inverse system of simplicial complexes. For any real $a > 1$, we define \dzm{the \textit{simplicial metric} (with parameter~$a$)} $d^M_a$ on~$\mathop{\lim}\limits_{\longleftarrow} K_i$ by the formula
\[ d^M_a \big( (x_i)_{i \geq 0}, (y_i)_{i \geq 0} \big) = \sum_{i = 0}^\infty a^{-i} \cdot d_{K_i}(x_i, y_i). \]
\end{df}
\begin{uwaga}
In the case when $a = 2$, Definition \ref{def-metryka-komp} gives the classical metric used on countable products of metric spaces (and hence also on limits of inverse systems); in particular, it is known that the metric~$d^M_2$ is compatible with the natural topology on the inverse limit (i.e. the restriction of the Tychonoff product topology). However, this fact holds, with an analogous proof, for any other value of $a > 1$ (see~\cite[the remark following Theorem~4.2.2]{ET}).
\end{uwaga}
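In particular, the series in Definition~\ref{def-metryka-komp} always converges: by Remark~\ref{uwaga-metryka-l1-ogr} we have $d_{K_i}(x_i, y_i) \leq 2$ for every~$i$, so
\[ d^M_a \big( (x_i)_{i \geq 0}, (y_i)_{i \geq 0} \big) \leq \sum_{i = 0}^\infty a^{-i} \cdot 2 = \frac{2a}{a - 1} < \infty. \]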
\subsubsection{Bi-Lipschitz equivalence of both metrics}
In the following theorem, we use the notions \textit{quasi-$G$-invariant}, \textit{system of covers}, \textit{inscribed} defined respectively in Definitions~\ref{def-quasi-niezm}, \ref{def-quasi-niezm-pokrycia} and \ref{def-quasi-niezm-system-wpisany}, as well as the system $\mathcal{S}$ defined in Definition~\ref{def-span-star}.
\begin{tw}
\label{tw-bi-lip}
Let $G$ be a hyperbolic group. \dzm{Let~$\mathcal{U}$ be a quasi-$G$-invariant system of covers of~$\partial G$, inscribed in the system~$\mathcal{S}$ (see Section~\ref{sec-konstr-pokrycia}), and let $\varphi : \partial G \rightarrow \mathop{\lim}\limits_{\longleftarrow} K_i$ be the homeomorphism obtained for~$\mathcal{U}$ from Theorem~\ref{tw-konstr}.}
Then, there exists a constant $a_1 > 1$ (depending only on~$G$) such that, for any $a \in (1, a_1)$,
$\varphi$ is a bi-Lipschitz equivalence between the visual metric on~$\partial G$ with parameter~$a$ (see Section~\ref{sec-def-hip}) and the simplicial metric $d^M_a$ on~$\mathop{\lim}\limits_{\longleftarrow} K_i$.
\end{tw}
\begin{uwaga}
Theorem~\ref{tw-bi-lip} re-states the second claim of Theorem~\ref{tw-bi-lip-0}, which is sufficient to deduce the first claim in view of the introduction to Section~\ref{sec-bi-lip}.
\end{uwaga}
\begin{uwaga}
\label{uwaga-bi-lip-wystarczy-d}
To prove the above theorem, it is clearly sufficient to check a bi-Lipschitz equivalence between the simplicial metric $d_a^M$ and the \textit{distance function}~$d_a$ which has been introduced in Section~\ref{sec-def-hip} as a bi-Lipschitz approximation of the visual metric.
\end{uwaga}
\begin{fakt}
\label{fakt-rozlaczne-symp-daleko}
If $s_1,s_2$ are two disjoint simplexes in a complex $K$, then for any $z_1 \in s_1, z_2 \in s_2$, we have $d_K(z_1, z_2) = 2$.
\end{fakt}
\begin{proof}
Let $f : K \rightarrow \mathbb{R}^m$ satisfy the conditions from Definition \ref{def-metryka-l1}. For $j = 1, 2$, let~$A_j$ denote the set of indices $1 \leq i \leq m$ for which $e_i = f(v)$ for some vertex~$v \in s_j$. Then we have
\[ f(s_j) = \Big\{ (x_i) \in \mathbb{R}^m \ \Big|\ \ x_i \geq 0 \textrm{ for } 1 \leq i \leq m, \ \ x_i = 0 \textrm{ for } i \notin A_j, \ \ \sum_{i \in A_j} x_i = 1 \Big\}. \]
However, since $f$ is injective, the sets $A_1$, $A_2$ are disjoint, so any $p_1 \in f(s_1)$ and $p_2 \in f(s_2)$ have disjoint supports, which gives $\| p_1 - p_2 \|_1 = \sum_{i \in A_1} (p_1)_i + \sum_{i \in A_2} (p_2)_i = 1 + 1 = 2$.
\end{proof}
\begin{lem}
\label{lem-bi-lip-geodezyjne}
There exist constants $E_1, N_4$ (depending only on~$G$) such that if $k, l \geq 0$, $N > N_4$, $g, x \in G$ and $p, q \in \partial G$ satisfy the conditions:
\[ |x| = k, \qquad |gx| = l, \qquad T^b_N(x) = T^b_N(gx), \qquad p \in \sppan(x), \qquad d(p, q) \leq a^{-(k+E_1)}, \]
then in $\partial G$ we have
\begin{align}
\label{eq-bi-lip-geodezyjne-teza}
d(g \cdot p, \, g \cdot q) \leq a^{-(l-k)} \cdot d(p, q).
\end{align}
\end{lem}
\begin{uwaga}
As soon as we prove the inequality \eqref{eq-bi-lip-geodezyjne-teza} in general, it will follow that it can be strengthened to an equality. Indeed, if the elements $g, x, p, q$ satisfy the assumptions of the proposition, then its claim implies that the elements~$g^{-1}, gx, g \cdot p, g \cdot q$ also satisfy these assumptions. By applying the proposition to these elements, we then obtain that $d(p, q) \leq a^{-(k-l)} \cdot d(g \cdot p, \, g \cdot q)$, ensuring that equality holds in \eqref{eq-bi-lip-geodezyjne-teza}. We do not include this result in the claim of the proposition because it is not used in this article.
\end{uwaga}
\begin{proof}[Proof of Proposition~\ref{lem-bi-lip-geodezyjne}]
We set
\[ E_1 = 13\delta, \qquad N_4 = N_0 + 64\delta, \]
where $N_0$ denotes the constant from Proposition \ref{lem-kulowy-wyznacza-stozkowy}.
\textbf{1. }Let $\gamma$ be a geodesic connecting $p$ with~$q$ for which the distance $d(e, \gamma)$ is maximal. Note that then, by the definition of the distance function~$d$ and the assumption $d(p, q) \leq a^{-(k + E_1)}$,
\[ d(e, \gamma) = - \log_a d(p, q) \geq k + 13\delta. \]
Since the left shift by $g$ is an isometry of $G$, the sequence $g \cdot \gamma$ determines a bi-infinite geodesic which, by definition, connects the points $g \cdot p$, $g \cdot q$ in~$\partial G$. Hence, to finish the proof it suffices to estimate from below the distance $d(e, g \cdot \gamma)$.
\textbf{2. }Let $\alpha, \beta$ be geodesics connecting $e$ with~$p$ and~$q$, respectively; we can require in addition that $\alpha(k) = x$. Denote $y = \beta(k)$.
Since $\alpha, \beta, \gamma$ form a geodesic triangle (with two vertices at infinity), by Lemma~\ref{fakt-waskie-trojkaty} there exists an element $s \in \beta \cup \gamma$ at distance $\leq 12\delta$ from~$x$.
Then, $\big| |s| - k \big| \leq 12\delta$; in particular (as $d(e, \gamma) \geq k + 13\delta$) we have $s \notin \gamma$, and so $s \in \beta$. In this situation, we have
\[ d(x, y) \leq d(x, s) + d(s, y) \leq 12\delta + \big| |s| - k \big| \leq 24\delta. \]
\textbf{3. }For any $i \in \mathbb{Z}$, we choose a geodesic $\eta_i$ connecting $e$ with~$\gamma(i)$. Since $\gamma(i)$ must lie at distance $\leq 12\delta$ from some element of~$\alpha$ or~$\beta$, by using Corollary \ref{wn-krzywe-geodezyjne-pozostaja-bliskie} for the geodesic $\eta_i$ and, respectively, $\alpha$ or~$\beta$, we obtain that the point $z_i = \eta_i(k)$ lies at distance $\leq 40\delta$ from, respectively, $x$ or~$y$. Therefore, in either case we have
\[ d(x, z_i) \leq 64 \delta. \]
\textbf{4. }We still consider any value of $i \in \mathbb{Z}$. Since $N > N_4$ and $T^b_N(x) = T^b_N(gx)$, as well as $|x| = |z_i| = k$, from Lemmas \ref{fakt-kulowy-duzy-wyznacza-maly} and~\ref{fakt-kulowy-wyznacza-towarzyszy} we obtain that
\[ T^b_{N_0}(z_i) = T^b_{N_0}(gz_i), \qquad |gx| = |gz_i| = l. \]
Then, $z_i$ and~$gz_i$ have the same cone types by Proposition \ref{lem-kulowy-wyznacza-stozkowy}, so from $\gamma(i) \in z_iT^c(z_i)$ we deduce that $g\gamma(i) \in gz_iT^c(gz_i)$, and then
\[ |g\gamma(i)| = |gz_i| + |z_i^{-1}\gamma(i)| = |gz_i| + |\gamma(i)| - |z_i| = |\gamma(i)| + (l - k). \]
By taking the minimum over all $i \in \mathbb{Z}$, we obtain that
\[ d(e, g \cdot \gamma) = d(e, \gamma) + (l - k), \]
and then, since $g \cdot \gamma$ is one of the geodesics connecting $g \cdot p$ with~$g \cdot q$,
\[ d(g \cdot p, g \cdot q) \leq a^{-d(e, \, g \cdot \gamma)} = a^{-d(e, \gamma)} \cdot a^{-(l-k)} = d(p, q) \cdot a^{-(l-k)}. \qedhere \]
\end{proof}
\begin{fakt}
\label{fakt-duze-gwiazdy}
\dzm{Under the assumptions of Theorem~\ref{tw-bi-lip}, }there exists a constant $E$ (depending only on $G$ \dzm{and~$\mathcal{U}$}) such that, for any $k \geq 0$, the Lebesgue number of the cover $\mathcal{U}_k$ is at least $E \cdot a^{-k}$.
\end{fakt}
\begin{proof}
Let $N > N_4 \dzm{+ D}$, where $N_4$ is the constant from Proposition~\ref{lem-bi-lip-geodezyjne} \dzm{and $D$ is the neighbourhood constant of the system~$\mathcal{S}$. Denote by~$T$ the type function associated with~$\mathcal{U}$.}
Let $M > 0$ be chosen so that, for any $g \in G$, there exists $h \in G$ such that \dzm{$T(g) = T(h)$} and $|h| < M$. For $j < M$, let $L_j$ denote the Lebesgue number of the cover $\mathcal{U}_j$. We will prove that the claim of the lemma is satisfied by the number
\[ E = a^{-E_1} \cdot \min_{j<M} \, (a^j L_j), \]
where $E_1$ is the constant from Proposition~\ref{lem-bi-lip-geodezyjne}.
Let $k \geq 0$ and let~$B \subseteq \partial G$ be a non-empty subset with diameter at most $E \cdot a^{-k}$. Let $x$ be any element of~$B$; then
\dzm{
there exist elements $g, \widetilde{g} \in G$ of length~$k$ such that
\[ x \in \sppan(g), \qquad x \in U_{\widetilde{g}}. \]
Then, we have $x \in \sppan(g) \cap U_{\widetilde{g}} \subseteq S_g \cap S_{\widetilde{g}}$, so by~\qhlink{d} it follows that $d(g, \widetilde{g}) \leq D$.
}
By the definition of $M$, there exists $h \in G$ such that
\[ |h| < M, \qquad \dzm{T(g) = T(h)}. \]
Denote $\gamma = hg^{-1}$
\dzm{
and $\widetilde{h} = \gamma \widetilde{g}$.
By Definition~\ref{def-quasi-niezm-system-wpisany}, the type function~$T$ is stronger than the ball type~$T^b_N$ (in the sense of Definition~\ref{def-funkcja-typu}). Therefore, $T^b_N(g) = T^b_N(h)$, which together with Lemma~\ref{fakt-kulowy-duzy-wyznacza-maly} and $d(g, \widetilde{g}) \leq D$ implies that $T^b_{N-D}(\widetilde{g}) = T^b_{N-D}(\widetilde{h})$.
Since $N - D > N_4$, we obtain
}
from Proposition \ref{lem-bi-lip-geodezyjne} that
\[ \diam (\gamma \cdot B) \leq E \cdot a^{-k} \cdot a^{-(|h| - k)} \leq a^{-|h|} \cdot \min_{j < M} (a^j L_j) \leq L_{|h|}, \]
which means that there exists $h' \in G$ such that $|h'| = |h|$ and
$\gamma \cdot B \subseteq U_{h'}$.
Let us note that it follows from \qhlink{f1} that
\[ \gamma \cdot x \in \dzm{\gamma \cdot U_{\widetilde{g}} = U_{\widetilde{h}}}, \]
so $\gamma \cdot x$ is a common element of \dzm{$U_{\widetilde{h}}$ and~$U_{h'}$}. Then, from \qhlink{f2} we obtain that
\[ U_{\gamma^{-1} h'} = \gamma^{-1} \cdot U_{h'} \supseteq B. \qedhere \]
\end{proof}
\begin{proof}[{\normalfont \textbf{Proof of Theorem~\ref{tw-bi-lip}}}]
Denote by $n$ the \dzm{maximal rank of all the covers~$\mathcal{S}_t$, for $t \geq 0$. Then, for every $t \geq 0$ we have $\rank \, \mathcal{U}_t \leq n$ and hence $\dim K_t \leq n$.} Denote also by $a_0$ a (constant) number such that the visual metric, considered for values $1 < a < a_0$, has all the properties described in Section~\ref{sec-def}. We define
\[ a_1 = \min \big( a_0, \tfrac{n+1}{n} \big). \]
Let $1 < a < a_1$. Denote by $M$ the diameter of $\partial G$ with respect to the visual metric (which is finite due to compactness of $\partial G$), and by $C_1$ --- the multiplier of bi-Lipschitz equivalence between the distance function $d$ and the visual metric.
Let $p, q$ be two distinct elements of $\partial G$ and let $k \geq 0$ be the minimal natural number such that $d(p, q) > a^{-k}$. Observe that $d(p, q) \leq a^{-(k-1)}$ if $k > 0$ (by the minimality of~$k$), while $d(p, q) \leq MC_1 \leq MC_1 \cdot a^{-(k-1)}$ if $k = 0$; so, in general, we have:
\begin{align}
\label{eq-bi-lip-lapanie-ujemnych}
d(p, q) \leq M' \cdot a^{-(k-1)}, \qquad \textrm{ where } \qquad M' = \max(MC_1, 1).
\end{align}
\dzm{As in Definition~\ref{def-konstr-nerwy}, for $t \geq 0$ we let~$\pi_t$ denote the projection from~$\mathop{\lim}\limits_{\longleftarrow} K_i$ to~$K_t$. Our goal is to estimate $d^M_a(\overline{p}, \overline{q})$, where $\overline{p}$, $\overline{q}$ denote the images of $p$ and~$q$, respectively, under $\varphi$.}
First, we will estimate $d^M_a(\overline{p}, \overline{q})$ from above. Let $l$ be the maximal integer not exceeding $k - \log_a E$. We consider two cases:
\begin{itemize}
\item If $l < 0$, then $k < \log_a E$, so $a^{-k} > E^{-1}$ and hence $d(p, q) > a^{-k} > E^{-1}$. Then, by Remark~\ref{uwaga-metryka-l1-ogr},
\[ d^M_a(\overline{p}, \overline{q}) = \sum_{t = 0}^\infty a^{-t} \cdot d_{K_t} \big( \pi_t(\overline{p}), \pi_t(\overline{q}) \big) \leq \sum_{t = 0}^\infty a^{-t} \cdot 2 = \frac{2a}{a-1} \leq \frac{2aE}{a-1} \cdot d(p, q). \]
\item If $l \geq 0$, then by Lemma~\ref{fakt-duze-gwiazdy} there exists $U \in \mathcal{U}_l$ containing both $p$ and $q$.
Then, in the complex $K_l$, the points $\pi_l(\overline{p})$ and~$\pi_l(\overline{q})$ must lie in some (possibly different) simplexes containing the vertex $v_U$. Then, we have
\[ d_{K_l} \big( \pi_l(\overline{p}),v_U \big) \leq 2, \qquad d_{K_l} \big( \pi_l(\overline{q}), v_U \big) \leq 2 \]
by Remark \ref{uwaga-metryka-l1-ogr}, and moreover
\[ d_{K_t} \big( \pi_t(\overline{p}), \pi_t(\overline{q}) \big) \leq 2 \cdot 2 \cdot \big( \tfrac{n}{n+1} \big)^{l-t} \qquad \textrm{ for } \quad 0 \leq t \leq l \]
by the condition \eqref{eq-konstr-szacowanie-obrazow} from the proof of Theorem~\ref{tw-konstr} (which we may use here because we are now working with the same metric on $K_i$ as the one used in~\cite{E}). Then, since $\diam K_t \leq 2$ for $t \geq 0$ (by Remark \ref{uwaga-metryka-l1-ogr}) and $\tfrac{an}{n+1} < 1$, we have
\begin{align*}
d^M_a(\overline{p}, \overline{q}) & = \sum_{t = 0}^\infty a^{-t} \cdot d_{K_t} \big( \pi_t(\overline{p}), \pi_t(\overline{q}) \big) \leq \sum_{t = 0}^l a^{-t} \cdot 4 \cdot \big( \tfrac{n}{n+1} \big)^{l-t} + \sum_{t = l+1}^\infty a^{-t} \cdot 2 \leq \\
& \leq 4 a^{-l} \cdot \sum_{t = 0}^l \big( \tfrac{an}{n+1} \big)^{l-t} + 2 \cdot \sum_{t = l+1}^\infty a^{-t} \leq C_2 \cdot a^{-l} \leq (C_2Ea) \cdot a^{-k} \leq (C_2Ea) \cdot d(p, q),
\end{align*}
where $C_2$ is some constant depending only on $a$ and~$n$ (and so independent of~$p, q$).
\end{itemize}
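For concreteness, let us note that, since $\tfrac{an}{n+1} < 1$, one admissible choice of the constant~$C_2$ above is obtained by summing the two geometric series:
\[ C_2 = 4 \cdot \sum_{m = 0}^\infty \left( \tfrac{an}{n+1} \right)^m + 2 \cdot \sum_{m = 1}^\infty a^{-m} = \frac{4(n+1)}{n + 1 - an} + \frac{2}{a - 1}. \]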
The opposite bound will be obtained by Lemma \ref{fakt-pokrycie-male}. Let $C$ denote the constant from that lemma and let $l'$ be the smallest integer greater than $k + \log_a(3C)$. Then, Lemma \ref{fakt-pokrycie-male} ensures that, for any $t \geq l'$ and \dzm{$x \in G$ of length~$t$}, we have
\[ \dzm{ \diam_d U_x \leq \diam_d S_x } \leq 3C \cdot a^{-t} \leq a^{-k} < d(p, q), \]
so the points $p, q$ cannot belong simultaneously to any element of the cover $\mathcal{U}_t$. Then, \dzm{by the definition of $\varphi: \partial G \simeq \mathop{\lim}\limits_{\longleftarrow} K_i$,} the points $\pi_t(\overline{p})$, $\pi_t(\overline{q})$ lie in some two disjoint simplexes in $K_t$, and so, by Lemma \ref{fakt-rozlaczne-symp-daleko}, their distance is equal to $2$. Then, by \eqref{eq-bi-lip-lapanie-ujemnych}, we have:
\[ d^M_a(\overline{p}, \overline{q}) = \sum_{t = 0}^\infty a^{-t} \cdot d_{K_t} \big( \pi_t(\overline{p}), \pi_t(\overline{q}) \big) \geq \sum_{t = l'}^\infty a^{-t} \cdot 2 \geq \frac{2a^{-l'} \cdot a}{a-1} \geq \tfrac{2}{3C(a-1)} \cdot a^{-k} \geq \tfrac{2}{3CM'a(a-1)} \cdot d(p, q). \]
In view of Remark \ref{uwaga-bi-lip-wystarczy-d}, this finishes the proof.
\end{proof}
\section{Markov property}
\label{sec-markow}
The main goal of this section is to prove the following theorem:
\begin{tw}
\label{tw-kompakt-ogolnie}
Let $(\mathcal{U}_n)_{n \geq 0}$ be a quasi-$G$-invariant system of covers of a compact metric $G$-space~$X$. Let $L_0$ denote the constant given by Proposition \ref{lem-gwiazda} for this system, let $L \geq L_0$, and let $(K_n, f_n)$ be the associated inverse system of nerves obtained for the sequence of covers $(\widetilde{\mathcal{U}}_{nL})_{n \geq 0}$ (see Definition~\ref{def-konstr-nerwy}).
Then, the system $(K_n, f_n)$ is barycentric, Markov and has the mesh property.
\end{tw}
The proof of this theorem appears --- after a number of auxiliary definitions and facts --- in Section~\ref{sec-markow-podsum}.
\subsection{Simplex types and translations}
\label{sec-markow-typy}
Below (in Definition \ref{def-typ-sympleksu}) we define simplex \textit{types} which we will use to prove the Markov property of the system $(K_n, f_n)$. Intuitively, we would like the type of a simplex $s = [v_{U_{g_1}}, \ldots, v_{U_{g_k}}]$ to contain the information about types of elements $g_i$ (which seems to be natural), but also about their relative position in $G$ (which, as we will see in Section \ref{sec-markow-synowie}, will significantly help us in controlling the pre-images of the maps $f_n$).
However, this general picture becomes more complicated because we are not guaranteed a unique choice of an element $g$ corresponding to a given set $U_g \in \widetilde{\mathcal{U}}_n$.
Therefore, in the type of a simplex, we will store information about relative positions of \textit{all} elements of~$G$ representing its vertices.
As a result of the above considerations, we obtain a fairly complicated definition of type (which will only rarely be referred to directly). An equality of such types for two given simplexes can be conveniently described by the existence of a \textit{shift} between them, preserving the simplex structure described above (see Definition~\ref{def-przesuniecie-sympleksu}). This property will be used in a number of proofs in the following sections.
We denote by $Q_n$ the nerve of the cover $\widetilde{\mathcal{U}}_n$. (Then, $K_n = Q_{nL}$.)
\begin{df}
\label{def-graf-typu-sympleksu}
For a simplex $s$ in $Q_n$, we define a directed graph $G_s = (V_s, E_s)$ in the following way:
\begin{itemize}
\item the vertices in $G_s$ are \textit{all} the elements $g \in G$ for which $v_{U_g}$ is a vertex in~$s$ (and so $|g| = n$ by Remark~\ref{uwaga-quasi-niezm-rozne-poziomy}); thus, $G_s$ may possibly have more vertices than $s$ does;
\item every vertex $g \in V_s$ is labelled with its type $T(g)$;
\item the edges in~$G_s$ are all pairs $(g, g')$ for $g, g' \in V_s$, $g \neq g'$;
\item every edge $(g, g')$ is labelled with the element $g^{-1} g' \in G$.
\end{itemize}
\end{df}
\begin{df}
\label{def-typ-sympleksu}
We call two simplexes $s \in Q_n$ and $s' \in Q_{n'}$ \textit{similar} if there exists an isomorphism of graphs $\varphi: G_s \rightarrow G_{s'}$ preserving all labels of vertices and edges.
The \textit{type} of a simplex $s \in Q_n$ (denoted by $T^\Delta(s)$) is its similarity class. \\
(Hence: two simplexes are similar if and only if they have the same type).
\end{df}
\begin{df}
\label{def-przesuniecie-sympleksu}
A simplex $s' \in Q_{n'}$ will be called the \textit{shift} of a simplex $s \in Q_n$ by an element~$\gamma$ (notation: $s' = \gamma \cdot s$) if the formula $\varphi(g) = \gamma \cdot g$ defines an isomorphism $\varphi$ which satisfies the conditions of Definition \ref{def-typ-sympleksu}.
\end{df}
\begin{fakt}
\label{fakt-przesuniecie-skladane}
Shifting simplexes satisfies the natural properties of a (partial) action of $G$ on a set:
\[ \textrm{ if } \qquad s' = \gamma \cdot s \quad \textrm{ and } \quad s'' = \gamma' \cdot s', \qquad \textrm{ then } \qquad s = \gamma^{-1} \cdot s' \quad \textrm{ and } \quad s'' = (\gamma' \, \gamma) \cdot s. \pushQED{\qed}\qedhere\popQED \]
\end{fakt}
\begin{fakt}
\label{fakt-przesuniecie-istnieje}
Two simplexes $s \in Q_n$, $s' \in Q_{n'}$ have equal types $\ \Longleftrightarrow\ $ $s' = \gamma \cdot s$ for some $\gamma \in G$.
\end{fakt}
\begin{proof}
The implication $(\Leftarrow)$ is obvious. On the other hand, let $\varphi : G_s \rightarrow G_{s'}$ be an isomorphism satisfying the conditions from Definition \ref{def-typ-sympleksu}. We choose an arbitrary $g_0 \in V_s$ and define $\gamma = \varphi(g_0) \, g_0^{-1}$. Since $\varphi$ preserves the labels of edges, for any $g \in V_s \setminus \{ g_0 \}$ we have
\[ \varphi(g_0)^{-1} \, \varphi(g) = g_0^{-1} \, g \quad \Rightarrow \quad \varphi(g) \, g^{-1} = \varphi(g_0) \, g_0^{-1} = \gamma \quad \Rightarrow \quad \varphi(g) = \gamma \cdot g. \qedhere \]
\end{proof}
\begin{fakt}
\label{fakt-przesuwanie-symp-zb}
If $s' = \gamma \cdot s$ and $v_{U_x}$ is a vertex in $s$, then $v_{U_{\gamma x}}$ is a vertex in~$s'$ and moreover
\[ U_{\gamma x} = \gamma \cdot U_x, \qquad T(\gamma x) = T(x). \]
In particular, shifting the sets from $\widetilde{\mathcal{U}}$ by~$\gamma$ gives a bijection between the vertices of $s$ and~$s'$.
\end{fakt}
\begin{proof}
This follows from Definitions \ref{def-graf-typu-sympleksu} and~\ref{def-przesuniecie-sympleksu}, and from property~\qhlink{f1}.
\end{proof}
\begin{lem}
The total number of simplex types in all of the complexes $Q_n$ is finite.
\end{lem}
\begin{proof}
Let us consider a simplex $s \in Q_n$. If $g, g' \in V_s$, then the vertices $v_{U_g}, v_{U_{g'}}$ belong to $s$, which means by definition that $U_g \cap U_{g'} \neq \emptyset$; then, by \qhlink{d} and the definition of $V_s$, we have $|g^{-1} g'| \leq D$.
Then, the numbers of vertices in the graphs $G_s$, as well as the number of possible edge labels appearing in all such graphs, are not greater than the cardinality of the ball $B(e, D)$ in the group~$G$. This finishes the proof because the labels of vertices are taken by definition from the finite set of types of elements in~$G$.
\end{proof}
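This count can be made explicit, although we will not need that later: denote by~$b$ the cardinality of the ball $B(e, D)$ and by~$\tau$ the (finite) number of possible types of elements of~$G$. Then every graph~$G_s$ has at most~$b$ vertices, each labelled by one of~$\tau$ types, and each of its at most $b(b-1)$ edges is labelled by an element of~$B(e, D)$, so the total number of simplex types does not exceed
\[ \sum_{k = 1}^{b} \tau^k \cdot b^{\, k(k-1)}. \]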
\subsection{The main proposition}
\label{sec-markow-synowie}
\begin{lem}
\label{lem-przesuwanie-dzieci-sympleksow}
Let $s \in K_n$, $s' \in K_{n'}$ be simplexes of the same type and $s' = \gamma \cdot s$ for some $\gamma \in G$.
Then, the maps $I : s \rightarrow s'$ and $J : f_n^{-1}(s) \rightarrow f_n^{-1}(s')$, defined on the vertices of the corresponding subcomplexes by the formulas
\[ I(v_U) = v_{\gamma \cdot U} \quad \textrm{ for } v_U \in s, \qquad J(v_U) = v_{\gamma \cdot U} \quad \textrm{ for } v_U \in f_n^{-1}(s), \]
and extended affinely to the simplexes in these subcomplexes, have the following properties:
\begin{itemize}
\item they are well defined (in particular, $\gamma \cdot U$ is an element of the appropriate cover);
\item they are isomorphisms of subcomplexes;
\item they map simplexes to their shifts by $\gamma$ (in particular, they preserve simplex types).
\end{itemize}
Moreover, the following diagram commutes:
\begin{align}
\label{eq-diagram-do-spr}
\xymatrix@+3ex{
s \ar[d]_{I} & \ar[l]_{f_n} f_n^{-1}(s) \ar[d]_{J} \\
s' & \ar[l]_{f_{n'}} f_{n'}^{-1}(s').
}
\end{align}
\end{lem}
\begin{proof}
Let
\begin{align}
\label{eq-markow-wstep-1}
s = [v_{U_1}, \ldots, v_{U_k}], \qquad U_i = U_{g_i}, \qquad g_i' = \gamma \, g_i, \qquad U_i' = U_{g_i'}.
\end{align}
Then, from the assumptions (using the definitions and Lemma \ref{fakt-przesuwanie-symp-zb}) we obtain that
\[ s' = [v_{U'_1}, \ldots, v_{U'_k}], \qquad U_i' = \gamma \cdot U_i, \qquad T(g_i) = T(g_i'), \qquad |g_i| = nL, \qquad |g'_i| = n'L. \]
In particular, for every $v_U \in s$ the value $I(v_U)$ is correctly defined and belongs to $s'$; also, $I$ gives a bijection between the vertices of $s$ and $s'$, so it is an isomorphism. Moreover, for any subsimplex $\sigma = [v_{U_{i_1}}, \ldots, v_{U_{i_l}}] \subseteq s$ and $g \in G_s$, by Lemma \ref{fakt-przesuwanie-symp-zb} we have an equivalence
\[ v_{U_g} \in \sigma \quad \Leftrightarrow \quad U_g \in \{ U_{i_j} \,|\, 1 \leq j \leq l \} \quad \Leftrightarrow \quad U_{\gamma g} \in \{ \gamma \cdot U_{i_j} \,|\, 1 \leq j \leq l \} \quad \Leftrightarrow \quad v_{U_{\gamma g}} \in I(\sigma), \]
so the isomorphism $G_s \simeq G_{s'}$ given by $\gamma$ restricts to an isomorphism $G_\sigma \simeq G_{I(\sigma)}$, so $I(\sigma) = \gamma \cdot \sigma$.
It remains to check the desired properties of the map $J$, and commutativity of the diagram \eqref{eq-diagram-do-spr}.
First, we will check that $J$ is correctly defined. Let $v_U$ be a vertex in~$f_n^{-1}(s)$ and let $U = U_h$ for some $h \in G$ of length $(n + 1)L$. From the definition of $f_n$ we obtain that $U_h \subseteq U_{g_i}$ for some $1 \leq i \leq k$. Then, denoting $h' = \gamma h$ and using~\qhlink{f3}, we have
\begin{align}
\label{eq-markov-wlasnosci-h'}
U_{h'} = \gamma \cdot U_h \subseteq \gamma \cdot U_{g_i} = U_{g'_i}, \qquad T(h') = T(h), \qquad |h'| = (n' + 1)L,
\end{align}
so in particular $\gamma \cdot U_h \in \widetilde{\mathcal{U}}_{(n'+1)L}$, and then $J(v_U) = v_{\gamma \cdot U_h}$ is a vertex in~$K_{n' + 1}$.
Now, we will prove that the vertex $J(v_U)$ belongs to $f_{n'}^{-1}(s')$ and that the diagram~\eqref{eq-diagram-do-spr} commutes.
From the definition of maps $f_n$, $f_{n'}$ it follows that, for both these purposes, it is sufficient to prove that
\begin{align}
\label{eq-markov-zgodnosc-rodzicow-ogolnie}
\big\{ U' \,\big|\, U' \in \mathcal{U}_{n'L}, \, U' \supseteq U_{h'} \big\} = \big\{ \gamma \cdot U \,\big|\, U \in \mathcal{U}_{nL}, \, U \supseteq U_h \big\}.
\end{align}
Let us check the inclusion $(\supseteq)$. Let $U_g = U \supseteq U_h$ for some $g \in G$ of length~$nL$. Then in particular $U_g \cap U_{g_i} \supseteq U_h \neq \emptyset$, so from property \qhlink{f2} we have
\[ U_{\gamma g} = \gamma \cdot U_g \supseteq \gamma \cdot U_h = U_{h'}, \qquad |\gamma g| = n' L. \]
This proves the inclusion $(\supseteq)$ in~\eqref{eq-markov-zgodnosc-rodzicow-ogolnie}. Since $U_{g_i'} = \gamma \cdot U_{g_i} \supseteq \gamma \cdot U_h = U_{h'}$, the opposite inclusion can be proved in an analogous way, by considering the shift by $\gamma^{-1}$. Hence, we have verified \eqref{eq-markov-zgodnosc-rodzicow-ogolnie}. In particular, we obtain that $J(v_U) \in f_{n'}^{-1}(s')$ for every $v_U \in f_n^{-1}(s)$, and so $J$ is correctly defined on the vertices of the complex $f_n^{-1}(s)$. From~\eqref{eq-markov-zgodnosc-rodzicow-ogolnie} we also deduce the commutativity of the diagram \eqref{eq-diagram-do-spr} when restricted to the vertices of the complexes considered.
Now, let $\sigma = [v_{U_{h_1}}, \ldots, v_{U_{h_l}}]$ be a simplex in~$f_n^{-1}(s)$. Then, we have $\bigcap_{i = 1}^l U_{h_i} \neq \emptyset$, so also $\bigcap_{i=1}^l U_{\gamma h_i} = \gamma \cdot \bigcap_{i=1}^l U_{h_i} \neq \emptyset$. This implies that $f_{n'}^{-1}(s')$ contains the simplex $[J(v_{U_{h_1}}), \ldots, J(v_{U_{h_l}})]$. This means that $J$ can be affinely extended from vertices to simplexes, leading to a correctly defined map of complexes. The commutativity of the diagram \eqref{eq-diagram-do-spr} then follows from the (already checked) commutativity on vertices.
By exchanging the roles of $s$ and $s'$, and correspondingly of $g_i$ and $g_i'$, we exchange the roles of $\gamma$ and $\gamma^{-1}$. Therefore, the map $\widetilde{J} : f_{n'}^{-1}(s') \rightarrow f_n^{-1}(s)$, obtained analogously in this situation, must be inverse to $J$. Hence, $J$ is an isomorphism.
It remains to check the equality $J(\sigma) = \gamma \cdot \sigma$ for any simplex $\sigma$ in~$f_n^{-1}(s)$. For this, we choose any element $h \in V_\sigma$ (i.e. a vertex from the graph $G_\sigma$ from Definition~\ref{def-graf-typu-sympleksu}) and take $\varphi(h) = \gamma h$; this element was previously denoted by $h'$. Then, from \eqref{eq-markov-wlasnosci-h'} and the already checked properties of $J$, we deduce that
\[ v_{U_{\gamma h}} = v_{\gamma \cdot U_h} = J(v_{U_h}) \, \textrm{ is a vertex in } J(\sigma), \qquad T(\gamma h) = T(h). \]
The first of these facts means that $\varphi(h)$ indeed belongs to $V_{J(\sigma)}$; the second one ensures that $\varphi$ preserves the labels of vertices in the graphs.
Preserving the labels of edges follows easily from the definition of $\varphi$. Hence, it remains only to check that $\varphi$ gives a bijection between $V_\sigma$ and~$V_{J(\sigma)}$, which we obtain by repeating the above reasoning for the inverse map $J^{-1}$ (with $s'$, $J(\sigma)$ now playing the roles of $s$, $\sigma$).
\end{proof}
\subsection{Conclusion: $\partial G$ is a Markov compactum}
\label{sec-markow-podsum}
Although we will be able to prove Theorem~\ref{tw-kompakt} in its full strength only at the end of Section~\ref{sec-wymd}, we note that the results already obtained imply the main claim of this theorem, namely that Gromov boundaries of hyperbolic groups are always Markov compacta (up to homeomorphism). Before arguing for that, we will finish the proof of Theorem~\ref{tw-kompakt-ogolnie}.
\begin{proof}[{\normalfont \textbf{Proof of Theorem~\ref{tw-kompakt-ogolnie}}}]
Barycentricity and the mesh property for the system $(K_n, f_n)$ follow from Theorem \ref{tw-konstr}; it remains to check the Markov property for this system. The condition (ii) from Definition \ref{def-kompakt-markowa} follows from the way in which the maps $f_n$ are defined in the claim of Theorem~\ref{tw-konstr}, while (i) is a result of the assumption (ii) in Definition \ref{def-konstr-admissible} (admissibility of $\mathcal{U}$). It remains to check (iii).
The type which we assign to simplexes is the $T^\Delta$-type from Definition \ref{def-typ-sympleksu}. If two simplexes $s \in K_i$, $s' \in K_j$ have the same type, then by Lemma \ref{fakt-przesuniecie-istnieje} we know that $s' = \gamma \cdot s$ for some $\gamma \in G$. Then, by an inductive application of Proposition \ref{lem-przesuwanie-dzieci-sympleksow}, we deduce that for every $k \geq 0$ the simplexes in the pre-image $(f^{j+k}_j)^{-1}(s')$ coincide with the shifts (by $\gamma$) of simplexes in the pre-image $(f^{i+k}_i)^{-1}(s)$, and moreover, by letting
\[ i_k (v_U) = v_{\gamma \cdot U} \qquad \textrm{ for } \quad k \geq 0, \ v_U \in (f^{i+k}_i)^{-1}(s), \]
we obtain correctly defined isomorphisms of subcomplexes which preserve simplex types and make the diagram from Definition \ref{def-kompakt-markowa} commute. This finishes the proof.
\end{proof}
\begin{proof}[{\normalfont \textbf{Proof of the main claim of Theorem~\ref{tw-kompakt}}}]
Let $G$ be a hyperbolic group and let~$\mathcal{S}$ be the system of covers of~$\partial G$ defined in Section~\ref{sec-konstr-pokrycia}. By~Corollary~\ref{wn-spanstary-quasi-niezm}, $\mathcal{S}$ is quasi-$G$-invariant; hence, by Lemma~\ref{fakt-konstr-sp-zal}, there is $L \geq 0$ such that the system $(\widetilde{\mathcal{S}}_{nL})_{n \geq 0}$ is admissible. By applying Theorem~\ref{tw-konstr}, we obtain an inverse system $(K_n, f_n)$ whose inverse limit is homeomorphic to~$\partial G$; on the other hand, Theorem~\ref{tw-kompakt-ogolnie} ensures that this system is Markov. This finishes the proof.
\end{proof}
\begin{uwaga}
Theorem~\ref{tw-kompakt-ogolnie} also ensures barycentricity and the mesh property for the obtained Markov system.
\end{uwaga}
\section{Strengthenings of type}
\label{sec-abc}
In this section, we will construct new \textit{type functions} in the group $G$ (in the sense of Definition~\ref{def-funkcja-typu}), or in the inverse system $(K_n)$ constructed in Section~\ref{sec-engelking} (in an analogous sense, i.e. we assign to every simplex its \textit{type}, taken from a finite set), with the aim of ensuring certain regularity properties of these functions which will be needed in the following sections.
The basic condition of our interest, which will be considered in several flavours, is ``children determinism'': the type of an element (resp. a simplex) should determine the types of its ``children'' (in an appropriate sense), analogously to the properties of the ball type $T^b_N$ described in Proposition \ref{lem-potomkowie-dla-kulowych}. The most important result of this section is the construction of a new type (which we call the \textit{$B$-type} and denote by $T^B$) which, apart from being children-deterministic in this sense, returns different values for any pair of \textit{$r$-fellows} in $G$ (see Definition~\ref{def-konstr-towarzysze}) for some fixed value of $r$. (For the purposes of this article, we take $r = 16\delta$.) This property will be crucial in three places in the remainder of the paper:
\begin{itemize}
\item In Section \ref{sec-sk-opis}, we will show that, by including the $B$-type in the input data for Theorem \ref{tw-kompakt-ogolnie}, we can ensure that the resulting Markov system has the distinct types property (see Definition~\ref{def-kompakt-wlasciwy}), which will in turn guarantee its finite describability (see Remark \ref{uwaga-sk-opis}).
\item In Section \ref{sec-wymd}, this property will allow us a~kind of ``quasi-$G$-invariant control'' of simplex dimensions in the system $(K_n)$.
\item In Section \ref{sec-sm}, the $B$-type's property of distinguishing fellows will be used to present the boundary $\partial G$ as a semi-Markovian space (see Definition~\ref{def-sm-ps}).
\end{itemize}
Let us note that this property of the $B$-type is significantly easier to achieve in the case of torsion-free groups (see the introduction to Section \ref{sec-sm-abc-b}).
A natural continuation of the topic of this section will appear in Section \ref{sec-sm-abc-c}, in which we will enrich the $B$-type to obtain a new \textit{$C$-type}, which will serve directly as a basis for the presentation of $\partial G$ as a semi-Markovian space. (We postpone discussing the $C$-type to Section \ref{sec-sm} because it is needed only there, and also because we will be able to list its desired properties only in Section~\ref{sec-sm-zyczenia}.)
\subsubsection*{Technical assumptions}
In Sections \ref{sec-abc}--\ref{sec-wymd}, we assume that $N$ and~$L$ are fixed, sufficiently large constants; the required lower bounds will be specified in the proofs of consecutive facts. (More precisely, we assume that $N$ satisfies the assumptions of Corollary \ref{wn-sm-kulowy-wyznacza-potomkow} and Proposition \ref{lem-sm-kuzyni}, and that $L \geq \max(N + 4\delta, 14\delta)$ satisfies the assumptions of Lemma \ref{fakt-sm-rodzic-kuzyna}; some of these bounds will become important only in Section \ref{sec-wymd}.) Let us note that, under such assumptions, Proposition \ref{lem-potomkowie-dla-kulowych} ensures that the ball type $T^b_N(x)$ determines the values of $T^b_N$ for all descendants of $x$ of length $\geq |x| + L$.
The types which we will construct --- similarly to the ball type $T^b$ --- will depend on the value of a parameter $N$ (discussed in the above paragraph) which, for simplicity, will be omitted in the notation.
Also, we assume that the fixed generating set~$S$ of the group~$G$ (with which we are implicitly working throughout this paper) is closed under taking inverses, and we fix an enumeration $s_1, \ldots, s_Q$ of all its elements (this will be used in Section \ref{sec-abc-pp}).
\subsection{Prioritised ancestors}
\label{sec-abc-pp}
\begin{df}
Let $x, y \in G$. We call $y$ a \textit{descendant} of $x$ if $|y| = |x| + d(x, y)$. (Equivalently: if $y \in xT^c(x)$). In such situation, we say that $x$ is an \textit{ancestor} of~$y$.
If in addition $d(x, y) = 1$, we say that $y$ is a \textit{child} of~$x$ and $x$ is a \textit{parent} of~$y$.
\end{df}
\begin{df}
\label{def-sm-nic-sympleksow}
The \textit{prioritised parent} (or \textit{p-parent}) of an element $y \in G \setminus \{ e \}$ is the element $x \in G$ such that $x$ is a parent of~$y$ and $x = y s_i$ with $i$ least possible. The p-parent of $y$ will be denoted by $y^\uparrow$.
An element $g' \in G$ is a \textit{priority child} (or \textit{p-child}) of $g$ if $g$ is its p-parent.
As the notation suggests, every element of $G \setminus \{ e \}$ has exactly one p-parent, but it may have many p-children.
The relation of \textit{p-ancestry} (resp. \textit{p-descendance}) is defined as the reflexive-transitive closure of p-parenthood (resp. p-childhood); in particular, for any $g \in G$ and $k \leq |g|$, $g$ has exactly one p-ancestor $g'$ such that~$|g'| = |g| - k$, which we denote by $g^{\uparrow k}$.
\end{df}
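\begin{uwaga}
As a simple illustration (not used in the sequel): if $G$ is a free group with free generating set $\{a, b\}$ and $S = \{a, a^{-1}, b, b^{-1}\}$, then every $y \neq e$, written as a reduced word, has a unique parent, obtained by deleting the last letter of~$y$; hence the p-parent of~$y$ coincides with this parent regardless of the chosen enumeration of~$S$. In a general group, an element may have several parents, and it is the enumeration of~$S$ that selects the p-parent among them.
\end{uwaga}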
\begin{df}
\label{def-sm-p-wnuk}
Let $x, y \in G$. We call $x$ a \textit{p-grandchild} (resp. \textit{p-grandparent}) of $y$ if it is a p-descendant (resp. p-ancestor) of $y$ and $\big| |x| - |y| \big| = L$. For simplicity of notation, we denote $x^\Uparrow = x^{\uparrow L}$ (for $|x| \geq L$) and analogously $x^{\Uparrow k} = x^{\uparrow Lk}$ (for $|x| \geq Lk$).
\end{df}
Let $N_0$ denote the constant coming from Proposition \ref{lem-kulowy-wyznacza-stozkowy}.
\begin{fakt}
\label{fakt-sm-kulowy-wyznacza-dzieci}
Let $x, y, s \in G$ satisfy $T^b_{N_0+2}(x) = T^b_{N_0+2}(y)$ and $|s| = 1$. Then, $(xs)^\uparrow = x$ if and only if $(ys)^\uparrow = y$.
\end{fakt}
\begin{proof}
Since the set $S$ is closed under taking inverse, we have $s = s_i^{-1}$ for some~$i$. Assume that $(xs_i^{-1})^\uparrow = x$ but $(ys_i^{-1})^\uparrow = z \neq y$. Then, $z = ys_i^{-1}s_j$ for some $j < i$. Note that $d(y, z) \leq 2$. Define $z' = xs_i^{-1}s_j$; then by Lemma \ref{fakt-kulowy-duzy-wyznacza-maly} and Proposition \ref{lem-kulowy-wyznacza-stozkowy} we have
\[ T^b_{N_0}(z') = T^b_{N_0}(z), \qquad s_j^{-1} \in T^c(z) = T^c(z'). \]
This means that the element $xs_i^{-1} = z's_j^{-1}$ is a child of $z'$; since $j < i$, its p-parent cannot be $x$, contradicting the assumption that $(xs_i^{-1})^\uparrow = x$.
\end{proof}
\begin{wn}
\label{wn-sm-kulowy-wyznacza-potomkow}
Let $N \geq N_5 := 2N_0 + 8\delta + 2$. Then, if $x, y \in G$ satisfy $T^b_N(x) = T^b_N(y)$, the left translation by $\gamma = yx^{-1}$ gives a bijection between the p-descendants of $x$ and the p-descendants of $y$.
\end{wn}
\begin{proof}
Let $z$ be a p-descendant of $x$; we want to prove that $\gamma z$ is a p-descendant of~$y$. We will do this by induction on the difference $|z| - |x|$.
If $|z| = |x|$, the claim is obvious. Assume that $|z| > |x|$ and denote $w = z^\uparrow$; then $w$ is a p-descendant of $x$ to which we may apply the inductive hypothesis, obtaining that $\gamma w$ is a p-descendant of $y$. To finish the proof, it suffices to check that
\begin{align}
\label{eq-sm-typy-potomkow}
T^b_{N_0 + 2}(w) = T^b_{N_0 + 2}(\gamma w),
\end{align}
since then from Lemma \ref{fakt-sm-kulowy-wyznacza-dzieci} it will follow that $\gamma z$ is a p-child of $\gamma w$, and then also a p-descendant of $y$. We consider two cases:
\begin{itemize}
\item If $d(z, x) \leq N - (N_0 + 2)$, then \eqref{eq-sm-typy-potomkow} holds by equality $T^b_N(x) = T^b_N(y)$ and Lemma \ref{fakt-kulowy-duzy-wyznacza-maly}.
\item If $|z| - |x| \geq N_0 + 4\delta + 2$, then \eqref{eq-sm-typy-potomkow} follows (in view of the equality $T^b_N(x) = T^b_N(y)$ and the inequality $N \geq N_0 + 8\delta$) from Proposition \ref{lem-potomkowie-dla-kulowych}.
\end{itemize}
Since $d(z, x) = |z| - |x|$ (because $z$ is a descendant of $x$) and $N \geq 2N_0 + 4\delta + 2$, at least one of the two cases must hold. This finishes the proof.
\end{proof}
\subsection{The $A$-type}
\label{sec-sm-abc-a}
Let $\mathcal{T}^b_N$ be the set of values of the type $T^b_N$. As an introductory step, we define the \textit{$Z$-type} $T^Z$ in $G$ so that:
\begin{itemize}
\item $T^Z$ agrees with $T^b_N$ on all elements $g \in G$ of length $\geq L$;
\item $T^Z$ assigns pairwise distinct values, not belonging to $\mathcal{T}^b_N$, to all elements of length $< L$.
\end{itemize}
Let us note that such a strengthening preserves most of the properties of $T^b_N$, in particular those described in Proposition \ref{lem-potomkowie-dla-kulowych} and Corollary \ref{wn-sm-kulowy-wyznacza-potomkow}.
\begin{uwaga}
\label{uwaga-pomijanie-N}
Although the values of $T^Z$ depend on the value of $N$, for simplicity we omit this dependence in the notation, assuming $N$ to be fixed. (We are not yet ready to state our assumptions on $N$; this will be done below in Proposition \ref{lem-sm-kuzyni}.)
\end{uwaga}
Let $\mathcal{T}^Z$ be the set of all possible values of $T^Z$. For every $\tau \in \mathcal{T}^Z$, choose some \textit{representative} $g_\tau \in G$ of this type. For convenience, we denote by $\gamma_g$ the element of~$G$ which (left-) translates $g$ to the representative (chosen above) of its $Z$-type:
\[ \gamma_g = g_{T^Z(g)} \, g^{-1} \qquad \textrm{ for } \quad g \in G. \]
Let us recall that, by Corollary \ref{wn-sm-kulowy-wyznacza-potomkow}, the left translation by $\gamma_g$ gives a bijection between p-descendants of $g$ and p-descendants of $g_{T^Z(g)}$, and thus also a bijection between the p-grandchildren of these elements.
For every $\tau \in \mathcal{T}^Z$, let us fix an (arbitrary) enumeration of all p-grandchildren of $g_\tau$.
\begin{df}
\label{def-sm-numer-dzieciecy}
The \textit{descendant number} of an element $g \in G$ with $|g| \geq L$ (denoted by~$n_g$) is the number given (in the above enumeration) to the element $\gamma_{g^\Uparrow} \cdot g$ as a p-grandchild of $g_{T^Z(g^\Uparrow)}$. If $|g| < L$, we set $n_g = 0$.
\end{df}
\begin{df}
\label{def-sm-typ-A}
We define the \textit{$A$-type} of an element $g \in G$ as the pair $T^A(g) = (T^Z(g), n_g)$.
\end{df}
We note that the set $\mathcal{T}^A$ of all possible $A$-types is finite because the number of p-grandchildren of $g$ depends only on $T^Z(g)$.
\begin{fakt}
\label{fakt-sm-istnieje-nss}
For any two distinct elements $g, g' \in G$ of equal length, there is $k \geq 0$ such that $g^{\Uparrow k}$ and $g'^{\Uparrow k}$ exist and have different $A$-types.
\end{fakt}
\begin{proof}
Let $n = |g| = |g'|$ and let $k \geq 0$ be the greatest number for which $g^{\Uparrow k} \neq g'^{\Uparrow k}$. (Such $k$ exists: the p-ancestors of $g$ and $g'$, once equal, remain equal in all older generations, and $k = 0$ gives $g \neq g'$.) If $n - kL < L$, then by definition $g^{\Uparrow k}$ and $g'^{\Uparrow k}$ have different $Z$-types. Otherwise, we have $g^{\Uparrow k+1} = g'^{\Uparrow k+1}$, which means that $g^{\Uparrow k}$ and~$g'^{\Uparrow k}$ have different descendant numbers.
\end{proof}
\begin{fakt}
\label{fakt-sm-przen-a}
If $g, h \in G$ have the same $Z$-type, then the left translation by $\gamma = hg^{-1}$ maps p-grandchildren of~$g$ to p-grandchildren of~$h$ and preserves their $A$-types.
\end{fakt}
\begin{proof}
Denote $\tau = T^Z(g) = T^Z(h)$. Let $g'$ be a p-grandchild of $g$; then $\gamma g'$ is a p-grandchild of $h$ by Corollary \ref{wn-sm-kulowy-wyznacza-potomkow}. Moreover, from Proposition \ref{lem-potomkowie-dla-kulowych} we know that $g'$ and $\gamma g'$ have equal ball types $T^b_N$, and since they both have the same length $\geq L$, it follows that they have equal $Z$-types. They also have equal descendant numbers because
\[ \gamma_{(\gamma g')^\Uparrow} \, \gamma g' = g_\tau \, h^{-1} \, \gamma \, g' = g_\tau \, g^{-1} \, g' = \gamma_{g'^\Uparrow} \, g'. \qedhere \]
\end{proof}
\subsection{Cousins and the $B$-type}
\label{sec-sm-abc-b}
The aim of this subsection is to strengthen the type function so that it distinguishes any pair of neighbouring elements of $G$ of the same length. In the case of a torsion-free group, there is nothing new to achieve, as the desired property is already satisfied by the ball type $T^b_N$ (by Lemma \ref{fakt-kuzyni-lub-torsje}). In general, the main idea is to record within the type of $g$ the ``crucial genealogical difference'' between $g$ and each of its \textit{cousins} (roughly, nearby elements of the same length --- see Definition \ref{def-sm-kuzyni}), where, more precisely, the ``genealogical difference'' between $g$ and $g'$ consists of the $A$-types of their p-ancestors in the oldest generation in which these p-ancestors are still distinct. However, it turns out that preserving all the desired regularity properties (in particular, children determinism) requires remembering not only the genealogical differences between $g$ and its cousins, but also the analogous differences between any pair of its cousins.
\begin{df}
\label{def-sm-sasiedzi}
Two elements $x, y \in G$ will be called \textit{neighbours} (denotation: $x \leftrightarrow y$) if $|x| = |y|$ and $d(x, y) \leq 8\delta$.
\end{df}
Denote
\begin{align}
\label{eq-sm-zbiory-torsji}
\begin{split}
Tor & = \big\{ g \in G \,\big|\, |g| \leq 16\delta, \, g \textrm{ is a torsion element} \big\}, \\
R & = \max \Big( \big\{ |g^n| \,\big|\, g \in Tor, \, n \in \mathbb{Z} \big\} \cup \{ 16 \delta \} \Big).
\end{split}
\end{align}
Since $Tor$ is a finite set consisting of torsion elements, each of which generates a finite cyclic subgroup, it follows that $R < \infty$.
\begin{df}
\label{def-sm-kuzyni}
Two elements $g, g' \in G$ will be called \textit{cousins} if $|g| = |g'|$ and $d(g, g') \leq R$. The set of cousins of $g$ will be denoted by $C_g$. (It is exactly the set $g P_R(g)$ using the notation of Section \ref{sec-konstr-pokrycia}).
\end{df}
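\begin{uwaga}
For example, if $G$ is torsion-free, then $Tor = \{ e \}$ and hence $R = 16\delta$, so the cousins of $g$ are exactly the elements $g'$ with $|g'| = |g|$ and $d(g, g') \leq 16\delta$. In the presence of torsion, $R$ may be strictly larger, and the sets $C_g$ correspondingly bigger.
\end{uwaga}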
Let us note that if $g, h \in G$ are neighbours, then they are cousins too.
\begin{fakt}
\label{fakt-sm-rodzic-kuzyna}
If $L \geq \tfrac{R}{2} + 4\delta$, then for any cousins $g, g' \in G$ their p-grandparents are neighbours (and hence also cousins).
\end{fakt}
\begin{proof}
This is clear by Lemma \ref{fakt-geodezyjne-pozostaja-bliskie}.
\end{proof}
For $g \in G$ of length~$n$ and any two distinct $g', g'' \in C_g$, let $k_{g', g''}$ be the least value $k \geq 0$ for which $g'^{\Uparrow k}$ and $g''^{\Uparrow k}$ have different $A$-types (this is well defined by Lemma \ref{fakt-sm-istnieje-nss}). Let the sequence $k^{(g)} = (k^{(g)}_i)_{i = 1}^{M_g}$ be obtained by arranging the elements of the set $\{ k_{g', g''} \,|\, g', g'' \in C_g, \, g' \neq g'' \} \cup \{ 0 \}$ in decreasing order.
\begin{df}
\label{def-sm-typ-B}
The \textit{$B$-type} of an element $g$ is the set
\[ T^B(g) = \Big\{ \big( g^{-1} g', \, W_{g, \, g'} \big) \ \Big|\ g' \in C_g \Big\}, \qquad \textrm{ where } \qquad W_{g, \, g'} = \Big( T^A \big( {g'}^{\Uparrow k^{(g)}_i} \big) \Big)_{i=1}^{M_g}. \]
(We recall that the notation hides the dependence on a fixed parameter $N$, whose value will be chosen in Proposition \ref{lem-sm-kuzyni}).
\end{df}
\begin{uwaga}
\label{uwaga-sm-B-wyznacza-A}
Since $g$ is a cousin of itself and $0$ is the last value in the sequence $(k^{(g)}_i)$, the set $T^B(g)$ contains in particular a pair of the form $\big( e, \, (\ldots, T^A(g)) \big)$, which means that the $B$-type determines the $A$-type (of the same element).
\end{uwaga}
\begin{uwaga}
\label{uwaga-sm-indeksy-z-tabelki}
For any two distinct $g', g'' \in C_g$, let $i^{(g)}_{g', g''}$ be the position at which the value $k_{g', g''}$ appears in the sequence~$k^{(g)}$. It follows from the definition that this index depends only on the sequences $W_{g, g'}$ and~$W_{g, g''}$; more precisely, it is equal to the greatest $i$ for which the $i$-th coordinates of these sequences differ. (In particular, the whole sequences $W_{g, g'}$, $W_{g, g''}$ must be different.) This fact will be used in the proofs of Propositions \ref{lem-sm-B-dzieci} and~\ref{lem-sm-kuzyni}.
\end{uwaga}
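\begin{uwaga}
To fix ideas, here is a toy instance of the above bookkeeping; the numbers are chosen arbitrarily and this remark is not used later. Suppose $C_g = \{ g, g', g'' \}$ with $k_{g, g'} = k_{g, g''} = 3$ and $k_{g', g''} = 1$. Then $\{ k_{g', g''} \,|\, g', g'' \in C_g \} \cup \{ 0 \} = \{ 3, 1, 0 \}$, so $k^{(g)} = (3, 1, 0)$ and $M_g = 3$. Since $k_{g', g''} = 1$ appears at position~$2$, we have $i^{(g)}_{g', g''} = 2$; accordingly, the sequences $W_{g, g'}$ and $W_{g, g''}$ differ at position~$2$ and agree at position~$3$.
\end{uwaga}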
\begin{fakt}
\label{fakt-sm-B-skonczony}
There exist only finitely many possible $B$-types in~$G$.
\end{fakt}
\begin{proof}
The finiteness of the set of possible $A$-types follows from the definition. If $g' \in C_g$, then the element $g^{-1} g'$ belongs to the ball $B(e, R)$ in~$G$, whose size is finite and independent of $g$. Since the number of cousins of $g$ is also globally bounded, we obtain a bound on the length of the sequence $(k^{(g)}_i)$, which ensures a finite number of possible sequences $W_{g, g'}$.
\end{proof}
\begin{lem}
\label{lem-sm-B-dzieci}
Let $g_1, h_1 \in G$ have the same $B$-type and let $\gamma = h_1g_1^{-1}$. Then, the left translation by $\gamma$ maps p-grandchildren of $g_1$ to p-grandchildren of $h_1$ and preserves their $B$-types.
\end{lem}
\begin{proof}
\textbf{1. }Let $g_2$ be a p-grandchild of $g_1$ and $h_2 = \gamma g_2$. Then, $h_1 = h_2^\Uparrow$ by Remark~\ref{uwaga-sm-B-wyznacza-A} and Lemma~\ref{fakt-sm-przen-a}; it remains to check that $T^B(g_2) = T^B(h_2)$.
Since the left multiplication by $\gamma$ clearly gives a bijection between cousins of $g_2$ and cousins of~$h_2$, it suffices to show that for any $g_2' \in C_{g_2}$ we have
\begin{align}
\label{eq-sm-B-dzieci-cel}
W_{g_2, \, g_2'} = W_{h_2, \, h_2'}, \qquad \textrm{ where } \quad h_2' = \gamma g_2'.
\end{align}
\textbf{2. }Let $g_2'$, $h_2'$ be as above. Denote $g_1' = {g_2'}^\Uparrow$; by Lemma~\ref{fakt-sm-rodzic-kuzyna}, $g_1'$ is a cousin of $g_1$. Then, the element $h_1' = \gamma g_1'$ is a cousin of $h_1$ and from the equalities $T^B(g_1) = T^B(h_1)$ and $g_1^{-1} g_1' = h_1^{-1} h_1'$ it follows that
\begin{align}
\label{eq-sm-B-dzieci-znane-war-t1}
W_{g_1, \, g_1'} = W_{h_1, \, h_1'}.
\end{align}
Since $0$ is the last element in the sequence $(k^{(g_1)}_i)$, as well as in $(k^{(h_1)}_i)$, we obtain in particular that $T^A(g_1') = T^A(h_1')$. By Lemma \ref{fakt-sm-przen-a}, we have:
\begin{align}
\label{eq-sm-B-dzieci-zachowane-A-t2}
h_1' = h_2'^\Uparrow, \qquad T^A(g_2') = T^A(h_2').
\end{align}
\textbf{3. }For arbitrarily chosen $g_2', g_2'' \in C_{g_2}$, we denote:
\[ h_2' = \gamma \, g_2', \qquad h_2'' = \gamma \, g_2'', \qquad \tau' = T^A(g_2') = T^A(h_2'), \qquad \tau'' = T^A(g_2'') = T^A(h_2''). \]
Moreover, we denote by $g_1', g_1'', h_1', h_1''$ the p-grandparents correspondingly of $g_2', g_2'', h_2', h_2''$.
Then, by definition:
\begin{align}
\label{eq-sm-B-dzieci-nss}
k_{g_2', g_2''} = \begin{cases}
k_{g_1', g_1''} + 1, & \textrm{ when } \tau' = \tau'', \\
0, & \textrm{ when } \tau' \neq \tau''
\end{cases}
\qquad \textrm{ and } \qquad
k_{h_2', h_2''} = \begin{cases}
k_{h_1', h_1''} + 1, & \textrm{ when } \tau' = \tau'', \\
0, & \textrm{ when } \tau' \neq \tau''.
\end{cases}
\end{align}
This implies that the sequence $(k^{(g_2)}_j)$ (resp. $(k^{(h_2)}_j)$) is obtained from $(k^{(g_1)}_i)$ (resp. $(k^{(h_1)}_i)$) by removing some elements, increasing the remaining elements by $1$, and appending the value $0$ to its end. Then, for any $g_2' \in C_{g_2}$, the sequences $W_{g_2, g_2'}$, $W_{h_2, h_2'}$ are obtained respectively from $W_{g_1, g_1'}$, $W_{h_1, h_1'}$ by removing the corresponding elements and appending respectively the values $T^A(g_2')$, $T^A(h_2')$; these two appended values must be the same by \eqref{eq-sm-B-dzieci-zachowane-A-t2}. (Note that increasing the values in the sequence $(k^{(g_2)}_j)$ by~$1$ translates to making no change in the sequence $W_{g_2, g_2'}$ due to the equality $g_2'^{\Uparrow k+1} = g_1'^{\Uparrow k}$.)
Therefore, to finish the proof (i.e. to show \eqref{eq-sm-B-dzieci-cel}) it is sufficient, by \eqref{eq-sm-B-dzieci-znane-war-t1}, to check that both sequences $(k^{(g_1)}_i)$ and $(k^{(h_1)}_i)$ are subject to removing elements exactly at the same positions.
\textbf{4. }From \eqref{eq-sm-B-dzieci-nss} we know that the value $k^{(g_1)}_i + 1$ appears in the sequence $(k^{(g_2)}_j)$ if and only if there exist $g_2', g_2'' \in C_{g_2}$ such that (with the above notation):
\begin{align}
\label{eq-sm-B-dzieci-ii}
T^A(g_2') = T^A(g_2''), \qquad i = i^{(g_1)}_{g_1', g_1''}.
\end{align}
Then, from Remark \ref{uwaga-sm-indeksy-z-tabelki} and \eqref{eq-sm-B-dzieci-znane-war-t1} used for the pairs $(g_1', h_1')$ and~$(g_1'', h_1'')$, we obtain that $i^{(h_1)}_{h_1', h_1''} = i^{(g_1)}_{g_1', g_1''} = i$, while from \eqref{eq-sm-B-dzieci-ii} and~\eqref{eq-sm-B-dzieci-zachowane-A-t2} we have $T^A(h_2') = T^A(h_2'')$, so analogously we prove that the value $k^{(h_1)}_i + 1$ appears in the sequence $(k^{(h_2)}_j)$. The proof of the opposite implication --- if $k^{(h_1)}_i + 1$ appears in~$(k^{(h_2)}_j)$, then $k^{(g_1)}_i + 1$ must appear in~$(k^{(g_2)}_j)$ --- is analogous.
The obtained equivalence finishes the proof.
\end{proof}
\begin{lem}
\label{lem-sm-kuzyni}
If $N$ is sufficiently large, then, for any $g, h \in G$ satisfying $|g| = |h|$ and $d(g, h) \leq 16\delta$, the condition $T^B(g) = T^B(h)$ holds only if $g=h$.
\end{lem}
\begin{proof}
Suppose that $g \neq h$ while $T^B(g) = T^B(h)$. Then, by Remark \ref{uwaga-sm-B-wyznacza-A}, $T^A(g) = T^A(h)$, and so $T^b_N(g) = T^b_N(h)$. Denote $\gamma = g^{-1}h$ and
assume that $N$ is greater than the constant $N_r$ given by Lemma \ref{fakt-kuzyni-lub-torsje} for $r = 16\delta$. Then, the lemma implies that $\gamma$ is a torsion element, i.e. (using the notation of \eqref{eq-sm-zbiory-torsji}) $\gamma \in Tor$, and so $|\gamma^i| \leq R$ for all $i \in \mathbb{Z}$. Then, the set $A = \{ g \gamma^i \,|\, i \in \mathbb{Z} \}$ has diameter not greater than $R$, so any two of its elements are cousins. Moreover, $A$ contains $g$ and $h$.
We obtain that all elements of the set $K = \{ k_{g', g''} \,|\, g', g'' \in A, \, g' \neq g'' \}$ appear in the sequence $k^{(g)}$ as well as in $k^{(h)}$; moreover, from Remark \ref{uwaga-sm-indeksy-z-tabelki} we obtain that they appear in $k^{(g)}$ exactly at the following set of positions:
\[ I_1 = \Big\{ \max \big\{ j \,\big|\, (W)_j \neq (W')_j \big\} \ \Big| \ (\gamma^i, W), \, (\gamma^{i'}, W') \in T^B(g), \ i, i' \in \mathbb{Z}, \, \gamma^i \neq \gamma^{i'} \Big\}, \]
where by $(W)_j$ we denote the $j$-th element of the sequence $W$. Similarly, the elements of $K$ appear in $k^{(h)}$ exactly at the positions from the set $I_2$ defined analogously (with $g$ replaced by $h$). However, by assumption we have $T^B(g) = T^B(h)$, and so $I_1 = I_2$.
Now, consider the pair $(\gamma, W_{g, h})$ which appears in $T^B(g)$; from the equality $T^B(g) = T^B(h)$ we obtain that
\[ (\gamma, W_{g, h}) = (h^{-1} h', W_{h, h'}) \qquad \textrm{ for some } h' \in C_h. \]
Since the index sets $I_1, I_2$ are equal, from $W_{g, h} = W_{h, h'}$ we obtain in particular that
\[ T^A(h^{\Uparrow k}) = T^A({h'}^{\Uparrow k}) \qquad \textrm{ for every } k \in K. \]
However, from $h^{-1} h' = \gamma$ it follows that $h' \in A \setminus \{ h \}$ and then $k_{h, h'} \in K$, which contradicts the above equality.
\end{proof}
\subsection{Stronger simplex types}
\label{sec-sk-opis}
\begin{tw}
\label{tw-sk-opis}
Let $(\mathcal{U}_n)$ be a quasi-$G$-invariant system of covers of a~space $X$, equipped with a type function stronger than $T^B$ and a neighbourhood constant $D$ not greater than $16\delta$. Let $(K_n, f_n)$ be the inverse system obtained by applying Theorem \ref{tw-kompakt-ogolnie} to $(\mathcal{U}_n)$. Then, the simplex types in the system $(K_n, f_n)$ can be strengthened to ensure the distinct types property for this system, without losing the Markov property.
\end{tw}
\begin{proof}[Organisation of the proof]
We will consider the inverse system with a new simplex type $T^{\Delta + A}$, defined in Definition \ref{def-sk-opis-typ-delta-a}. Then, Lemmas \ref{fakt-sk-opis-markow} and~\ref{fakt-sk-opis-bracia} will ensure that this system satisfies the conditions from Definitions \ref{def-kompakt-markowa} and~\ref{def-kompakt-wlasciwy}, respectively.
\end{proof}
We now start the proof.
Let $T$ denote the type function associated with the system $(\mathcal{U}_n)$, and $T^\Delta$ --- the corresponding simplex type function in the system $(K_n)$, as defined in Section \ref{sec-markow-typy}.
\begin{fakt}
\label{fakt-sk-opis-podsympleksy-rozne-typy}
For any simplex $s \in K_n$, all subsimplexes $s' \subseteq s$ have pairwise distinct $T^\Delta$-types.
\end{fakt}
\begin{proof}
Let $v = v_{U_x}$, $v' = v_{U_{x'}}$ be two distinct vertices joined by an edge in $K_n$. Then, using first the definition of the complex $K_n$, then property \qhlink{d} and Proposition \ref{lem-sm-kuzyni}, we have:
\begin{align}
\label{eq-sm-sk-opis-rozne-typy-wierzch}
U_x \cap U_{x'} \neq \emptyset \quad \Rightarrow \quad d(x, x') \leq D \leq 16\delta \quad \Rightarrow \quad T^B(x) \neq T^B(x') \quad \Rightarrow \quad T(x) \neq T(x').
\end{align}
Recall that, by Definition \ref{def-typ-sympleksu}, for any $s = [v_1, \ldots, v_k] \in K_n$ the value $T^\Delta(s)$ determines in particular the set of labels in the graph $G_s$ (defined in Definition \ref{def-graf-typu-sympleksu}). This set can be described by the formula:
\[ A_s = \big\{ T(x) \ \big|\ x \in G, \, |x| = n, \, v_{U_x} = v_i \textrm{ for some } 1 \leq i \leq k \big\}. \]
However, it follows from \eqref{eq-sm-sk-opis-rozne-typy-wierzch} that if two simplexes $s', s'' \subseteq s$ differ in that some vertex~$v_{U_x}$ belongs only to~$s'$, then the value $T(x)$ belongs to $A_{s'} \setminus A_{s''}$, which means that $T^\Delta(s') \neq T^\Delta(s'')$.
\end{proof}
\begin{fakt}
\label{fakt-sk-opis-przesuniecie-jedyne}
Let $s \in K_n$, $s' \in K_{n'}$ have the same type. Then, there exists a unique $\gamma \in G$ such that $s' = \gamma \cdot s$.
\end{fakt}
\begin{uwaga}
The claim of Lemma~\ref{fakt-sk-opis-przesuniecie-jedyne} is stronger than what was stated in Section \ref{sec-markow-typy} in that $\gamma$ is required to be unique. The stronger claim follows, as we show below, from the additional assumption that the type function~$T$ is stronger than~$T^B$.
\end{uwaga}
\begin{proof}[Proof of Lemma~\ref{fakt-sk-opis-przesuniecie-jedyne}]
By Lemma \ref{fakt-przesuniecie-istnieje}, a desired element $\gamma$ exists; it remains to check its uniqueness. Let $s' = \gamma \cdot s = \gamma' \cdot s$; then, by Lemma \ref{fakt-przesuniecie-skladane}, we have~$s = (\gamma^{-1} \gamma') \cdot s$. By Definition \ref{def-przesuniecie-sympleksu}, this means that if $v_{U_x}$ is a vertex in $s$, then setting $x' = \gamma^{-1} \gamma' x$ we have $T(x') = T(x)$ and moreover $v_{U_{x'}}$ is also a vertex in $s$. By reusing the argument from \eqref{eq-sm-sk-opis-rozne-typy-wierzch}, we conclude that $x = x'$, and so $\gamma = \gamma'$.
\end{proof}
We will now define a strengthening of the $T^\Delta$-type, in a way quite analogous to the definition of the $T^A$-type for group elements. (Some differences will occur in the proofs, and also in Definition~\ref{def-sk-opis-typ-delta-a}.)
\begin{df}[cf.~Definition~\ref{def-sm-nic-sympleksow}]
For $n > 0$, the \textit{prioritised parent} of a simplex $s \in K_n$ is the minimal simplex in $K_{n-1}$ containing~$f_n(s)$ (we denote it by $s^\uparrow$). A simplex $s$ is a \textit{prioritised child} of a simplex $s'$ if $s' = s^\uparrow$.
\end{df}
\begin{fakt}[cf.~Lemma~\ref{fakt-sm-kulowy-wyznacza-dzieci}, and also Lemma~\ref{fakt-przesuniecie-istnieje}]
\label{fakt-sk-opis-przesuwanie-dzieci}
Let $s \in K_n$, $s' \in K_{n'}$ and $\gamma \in G$ be such that $s' = \gamma \cdot s$ (in the sense of Definition \ref{def-przesuniecie-sympleksu}). Then, the translation by $\gamma$ gives a bijection between prioritised children of $s$ and prioritised children of $s'$.
\end{fakt}
\begin{proof}
This is an easy corollary of Proposition \ref{lem-przesuwanie-dzieci-sympleksow} (and Lemma \ref{fakt-przesuniecie-skladane}). The proposition ensures that the translation by $\gamma$ maps simplexes contained in $f_n^{-1}(s)$ to simplexes contained in $f_{n'}^{-1}(s')$. Moreover, if $\sigma \subseteq s$ and $\gamma \cdot \sigma$ is not a prioritised child of $s'$, then we have $\gamma \cdot \sigma \subseteq f_{n'}^{-1}(s'')$ for some $s'' \subsetneq s'$ and then $\sigma \subseteq f_n^{-1}(\gamma^{-1} \cdot s'')$, which means that $\sigma$ is not a prioritised child of $s$. The reasoning in the opposite direction is analogous because $s = \gamma^{-1} \cdot s'$.
\end{proof}
Let $\mathcal{T}$ be the set of values of $T^\Delta$. For every $\tau \in \mathcal{T}$, choose some \textit{representative} of this type $s_\tau \in K_{n_\tau}$. For any simplex $s \in K_n$ of type~$\tau$, let $\gamma_s \in G$ be the unique element such that $\gamma_s \cdot s = s_\tau$ (the uniqueness follows from Lemma \ref{fakt-sk-opis-przesuniecie-jedyne}).
\begin{df}[cf. Definitions~\ref{def-sm-numer-dzieciecy} and~\ref{def-sm-typ-A}]
\label{def-sk-opis-typ-delta-a}
The $T^{\Delta + A}$-type of a simplex $s \in K_n$ is defined by the formula
\begin{align}
\label{eq-sk-opis-typ-delta-a}
T^{\Delta + A}(s) = \big( T^\Delta(s), \ T^\Delta(s^\uparrow), \ \gamma_{s^\uparrow} \cdot s \big).
\end{align}
\end{df}
Note that, by Proposition \ref{lem-przesuwanie-dzieci-sympleksow}, the translate $\gamma_{s^\uparrow} \cdot s$ (which plays a role analogous to the descendant number) exists and is one of the simplexes in the pre-image $f_{n_\tau}^{-1}(s_\tau)$, where $\tau = T^\Delta(s^\uparrow)$. This ensures the correctness of the above definition, as well as the finiteness of the resulting type $T^{\Delta + A}$.
Also, note that the component $T^\Delta(s^\uparrow)$ in the formula~\eqref{eq-sk-opis-typ-delta-a} has no equivalent in Definition \ref{def-sm-typ-A}. It will be used in the proof of Lemma \ref{fakt-sk-opis-bracia}.
\begin{fakt}
\label{fakt-sk-opis-markow}
The inverse system $(K_n, f_n)$, equipped with the simplex type function $T^{\Delta + A}$, satisfies the conditions from Definition \ref{def-kompakt-markowa}.
\end{fakt}
\begin{proof}
Since the system $(K_n)$ has the Markov property when equipped with the type function $T^\Delta$, which is weaker than $T^{\Delta + A}$, it suffices to check condition (iii). To achieve this --- by Proposition \ref{lem-przesuwanie-dzieci-sympleksow} --- we only need to check that, if for some $s \in K_n$, $s' \in K_{n'}$, $\gamma \in G$ the equality $s' = \gamma \cdot s$ holds and $s$, $s'$ have the same $T^{\Delta + A}$-type, then the translation by $\gamma$ preserves the values of $T^{\Delta + A}$ for all simplexes contained in $f_n^{-1}(s)$.
Let then $\sigma$ be any such simplex. Then, $\sigma^\uparrow \subseteq s$, so from the claim of Proposition \ref{lem-przesuwanie-dzieci-sympleksow} (more precisely: from the fact that the translation by $\gamma$ preserves $T^\Delta$ and that it commutes with $f_n$ and~$f_{n'}$) we deduce that:
\begin{align}
\label{eq-sk-opis-delta-typy-zgodne}
T^\Delta \big( (\gamma \cdot \sigma)^\uparrow \big) = T^\Delta \big( \gamma \cdot (\sigma^\uparrow) \big) = T^\Delta( \sigma^\uparrow ).
\end{align}
Moreover, it is clear that $T^\Delta(\sigma) = T^\Delta(\gamma \cdot \sigma)$, so it only remains to verify the equality of the last coordinates in the types $T^{\Delta + A}(\sigma)$, $T^{\Delta + A}(\gamma \cdot \sigma)$.
Denote by $\tau$ the common value of the types in formula~\eqref{eq-sk-opis-delta-typy-zgodne}. Then, using Lemma \ref{fakt-przesuniecie-skladane}, we have
\[ \gamma_{\sigma^\uparrow} \cdot \sigma^\uparrow = s_\tau = \gamma_{(\gamma \cdot \sigma)^\uparrow} \cdot (\gamma \cdot \sigma)^\uparrow = \gamma_{(\gamma \cdot \sigma)^\uparrow} \cdot \big( \gamma \cdot (\sigma)^\uparrow \big) = ( \gamma_{(\gamma \cdot \sigma)^\uparrow} \, \gamma ) \cdot \sigma^\uparrow, \]
so, by Lemma \ref{fakt-sk-opis-przesuniecie-jedyne}, we obtain $\gamma_{\sigma^\uparrow} = \gamma_{(\gamma \cdot \sigma)^\uparrow} \, \gamma$. This in turn implies that:
\[ \gamma_{\sigma^\uparrow} \cdot \sigma = ( \gamma_{(\gamma \cdot \sigma)^\uparrow} \, \gamma ) \cdot \sigma = \gamma_{(\gamma \cdot \sigma)^\uparrow} \cdot (\gamma \cdot \sigma), \]
which finishes the proof.
\end{proof}
\begin{fakt}
\label{fakt-sk-opis-bracia}
For any simplex $s \in K_n$, all simplexes in the pre-image $f_n^{-1}(s)$ have pairwise distinct $T^{\Delta + A}$-types.
\end{fakt}
\begin{proof}
Let $\sigma, \sigma' \in f_n^{-1}(s)$ satisfy $T^{\Delta + A}(\sigma) = T^{\Delta + A}(\sigma')$. Then in particular $T^\Delta(\sigma^\uparrow) = T^\Delta({\sigma'}^\uparrow)$, and since $\sigma^\uparrow$, ${\sigma'}^\uparrow$ are subsimplexes of $s$, from Lemma \ref{fakt-sk-opis-podsympleksy-rozne-typy} we obtain that they are equal. Then, the equality of the third coordinates in the types $T^{\Delta + A}(\sigma)$, $T^{\Delta + A}(\sigma')$ implies that $\sigma = \sigma'$.
\end{proof}
\section{Markov systems with limited dimension}
\label{sec-wymd}
In this section, we assume that $\dim \partial G \leq k < \infty$, and we discuss how to adjust the construction of a Markov system to ensure that all the complexes in the inverse system also have dimension $\leq k$. Since $\partial G$ is a compact metric space, its dimension can be understood as the covering dimension, or equivalently as the small inductive dimension (cf.~\cite[Theorem~1.7.7]{E}). In the sequel, we denote the space $\partial G$ by~$X$, and the symbol $\partial$ will always mean the topological frontier taken in~$X$ or in one of its subsets.
The main result of this section is given below.
\begin{lem}
\label{lem-wym}
Let $k \geq 0$ and let $G$ be a hyperbolic group such that $\dim \partial G \leq k$. Then, there exists a quasi-$G$-invariant system of covers of~$\partial G$ of rank~$\leq k + 1$.
\end{lem}
Since the rank of a cover determines the dimension of its nerve, this result will indeed allow us to bound the dimension of the complexes in the Markov system for~$\partial G$ (see Section~\ref{sec-wymd-podsum}).
\begin{uwaga}
Although the proof of Proposition~\ref{lem-wym} given below involves many technical details, let us underline that --- in its basic sketch --- it resembles an elementary result from dimension theory stating that every open cover $\mathcal{U} = \{ U_i \}_{i = 1}^n$ of a compact metric space~$X$ of dimension~$k$ contains an open subcover of rank~$\leq k + 1$. Below, we present the main steps of a proof of this fact, pointing out the analogies between these steps and the contents of the rest of this section.
\begin{itemize}
\item[(i)] We proceed by induction on $k$. For convenience, we work with a slightly stronger inductive claim: the cover~$\{ U_i \}$ contains an open subcover~$\{ V_j \}$ such that the closures $\overline{V_j}$ form a family of rank~$\leq k + 1$.
(For the proof of Proposition~\ref{lem-wym}, the inductive reasoning is sketched in more detail in Proposition~\ref{lem-wym-cala-historia}).
\item[(ii)] Using the auxiliary Theorem~\ref{tw-wym-przedzialek} stated below, we choose in each $U_i \in \mathcal{U}$ an open subset $U_i'$ with frontier of dimension $\leq k-1$ so that the sets~$U_i'$ still form a cover of~$X$.
(Similarly we will define the sets~$D_x$ in the proof of Proposition~\ref{lem-wym-bzdziagwy}).
\item[(iii)] We define the sets $U_i''$ by the condition:
\[ x \in U_i'' \qquad \qquad \Longleftrightarrow \qquad \qquad x \in U_i' \quad \textrm{ and } \quad x \notin U'_j \quad \textrm{ for } j < i. \]
(Analogously we will define the sets~$E_x$ in the proof of Proposition~\ref{lem-wym-bzdziagwy}).
\item[(iv)] The space~$X$ is now covered by the interiors of the sets $U_i''$ (which are pairwise disjoint) together with the set~$\widetilde{X} = \bigcup_i \partial U_i''$, which is a closed subset of~$X$ of dimension~$\leq k - 1$. In~$\widetilde{X}$, we consider an open cover formed by the sets $\widetilde{U}_i = U_i \cap \widetilde{X}$. By the inductive hypothesis, this cover must contain an open subcover formed by some sets~$\widetilde{V}_j$ ($1 \leq j \leq m$) whose closures form a family of rank~$\leq k$. Also, we may require that $\widetilde{V}_j$ is open in~$\widetilde{X}$ --- but not necessarily in~$X$.
(The sets $\partial U_i''$, $\innt U_i''$ correspond to the sets $F_x$, $G_x$ appearing in the formulation of Proposition~\ref{lem-wym-bzdziagwy}).
\item[(v)] Let $\varepsilon > 0$ be the least distance between any \textit{disjoint pair} of closures $\overline{\widetilde{V}_{j_1}}$, $\overline{\widetilde{V}_{j_2}}$. For $1 \leq j \leq m$, we define $V_j$ as the $\tfrac{\varepsilon}{4}$-neighbourhood (in~$X$) of~$\widetilde{V}_j$. Then, it is easy to verify that the sets $V_j$ are open in~$X$ and cover~$\widetilde{X}$, and moreover the rank of the family $\{\overline{ V_j }\}$ does not exceed the rank of $\{\overline{ \widetilde{V}_j }\}$ which is $\leq k$.
Let $U_i'''$ denote $\innt U_i''$ minus the closed $\tfrac{\varepsilon}{8}$-neighbourhood of $\widetilde{X}$.
Then, the family
\[ \mathcal{V} = \{ V_j \}_{j = 1}^m \cup \{ U_i''' \}_{i = 1}^n \]
is an open cover of~$X$. Moreover, the rank of the family of closures of all elements of $\mathcal{V}$ is at most the sum of ranks of the families $\{ \overline{ V_j } \}$ and $\{ \overline{ U_i''' } \}$, which are respectively $k$ and $1$ (the latter because for every $i \neq i'$ we have $\overline{U_i'''} \cap \overline{U_{i'}'''} \subseteq \overline{U_i''} \cap \overline{U_{i'}''} \subseteq \widetilde{X}$ which is disjoint from both $\overline{U_i'''}$ and~$\overline{U_{i'}'''}$). Hence, $\mathcal{V}$ satisfies all the desired conditions.
(In our proof of Proposition~\ref{lem-wym}, the construction of appropriate neighbourhoods takes place in Proposition~\ref{lem-wym-kolnierzyki}, and the other steps above have their counterparts in the proof of Proposition~\ref{lem-wym-cala-historia}).
\end{itemize}
In comparison to the above reasoning, the main difficulty in proving Proposition~\ref{lem-wym} lies in ensuring quasi-$G$-invariance of the adjusted covers, which we need for preserving the Markov property for the system of their nerves (using Theorem~\ref{tw-kompakt-ogolnie}). For this, instead of defining each of the sets $U_i'$ independently, we will first choose a finite number of \textit{model sets}, one for each possible value of type in~$G$, and translate these model sets using Proposition~\ref{lem-potomkowie-dla-kulowych}. The inductive argument will now require special care for preserving quasi-$G$-invariance; nevertheless, the main idea remains unchanged.
\end{uwaga}
We will use the following auxiliary result from dimension theory:
\begin{tw}[{\cite[Theorem~1.5.12]{E}}]
\label{tw-wym-przedzialek}
Let~$Y$ be a separable metric space of dimension~$k$ and $A, B$ be disjoint closed subsets of~$Y$. Then, there exist open subsets $\widetilde{A}, \widetilde{B} \subseteq Y$ such that
\[ A \subseteq \widetilde{A}, \qquad B \subseteq \widetilde{B}, \qquad \widetilde{A} \cap \widetilde{B} = \emptyset \qquad \textrm{ and } \qquad \dim \big(Y \setminus (\widetilde{A} \cup \widetilde{B})\big) \leq k - 1. \]
\end{tw}
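For illustration, let us indicate how Theorem~\ref{tw-wym-przedzialek} can be used to obtain the sets~$U_i'$ from step~(ii) of the above sketch (this is only an outline; it relies on the standard fact that every finite open cover of a compact metric space admits a \textit{closed shrinking}, i.e. closed sets $A_i \subseteq U_i$ which still cover~$X$). Applying the theorem with $Y = X$, $A = A_i$ and $B = X \setminus U_i$, we obtain open sets $\widetilde{A}, \widetilde{B} \subseteq X$ such that
\[ A_i \subseteq \widetilde{A}, \qquad X \setminus U_i \subseteq \widetilde{B}, \qquad \widetilde{A} \cap \widetilde{B} = \emptyset \qquad \textrm{ and } \qquad \dim \big( X \setminus (\widetilde{A} \cup \widetilde{B}) \big) \leq k - 1. \]
Setting $U_i' = \widetilde{A}$, we have $U_i' \subseteq X \setminus \widetilde{B} \subseteq U_i$; moreover, $\partial U_i' \subseteq X \setminus (\widetilde{A} \cup \widetilde{B})$ because the open set~$\widetilde{B}$ cannot intersect~$\overline{\widetilde{A}}$. Hence $\dim \partial U_i' \leq k - 1$, while the sets $U_i' \supseteq A_i$ still cover~$X$.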
\subsection{$\theta$-weakly invariant subsystems}
\begin{ozn}
For any two families $\mathcal{C} = \{ C_x \}_{x \in G}$, $\mathcal{D} = \{ D_x \}_{x \in G}$, we denote:
\[ \mathcal{C} \sqcup \mathcal{D} = \{ C_x \cup D_x \}_{x \in G}. \]
\end{ozn}
\begin{df}
A system $\mathcal{C} = \{ C_x \}_{x \in G}$ of subsets of~$X$ will be called:
\begin{itemize}
\item \textit{of dimension $\leq k$} if $|\mathcal{C}|_n$ is of dimension $\leq k$ for all $n \geq 0$;
\item \textit{of rank $\leq k$} if, for every $n \geq 0$, the family $\mathcal{C}_n$ is of rank $\leq k$ (i.e. if the intersection of any $k+1$ pairwise distinct members of $\mathcal{C}_n$ must be empty); in particular, \textit{disjoint} if it is of rank~$\leq 1$.
\end{itemize}
\end{df}
\begin{fakt}
\label{fakt-wym-suma-pokryc}
If two systems of subsets $\mathcal{C}$, $\mathcal{D}$ are correspondingly of rank~$\leq a$ and~$\leq b$, then $\mathcal{C} \sqcup \mathcal{D}$ is of rank $\leq a+b$. \qed
\end{fakt}
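Although the verification is omitted above, let us sketch it (treating the families as indexed by the elements of~$G$): if a point~$p$ belonged to $a + b + 1$ pairwise distinct members of $(\mathcal{C} \sqcup \mathcal{D})_n$, say
\[ p \ \in \ \bigcap_{j = 1}^{a + b + 1} \big( C_{x_j} \cup D_{x_j} \big), \]
then for each~$j$ we would have $p \in C_{x_j}$ or $p \in D_{x_j}$, so by the pigeonhole principle $p$ would lie either in $a + 1$ distinct members of~$\mathcal{C}_n$ or in $b + 1$ distinct members of~$\mathcal{D}_n$, contradicting the assumed bounds on the ranks of~$\mathcal{C}$ and~$\mathcal{D}$.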
\begin{df}
Let $X$ be a topological space and $A, B \subseteq X$. We will say that~$A$ is a \textit{separated} subset of~$B$ (notation: $A \Subset B$) if $\overline{A} \subseteq \innt B$.
\end{df}
\begin{df}
Let $\mathcal{C} = \{ C_x \}$, $\mathcal{D} = \{ D_x \}$ be two quasi-$G$-invariant systems of subsets of~$X$. We will say that:
\begin{itemize}
\item $\mathcal{C}$ is an \textit{(open, closed) subsystem} in~$\mathcal{D}$ if, for every $n \geq 0$ and $C_x \in \mathcal{C}_n$, $C_x$ is an (open, closed) subset in~$|\mathcal{D}|_n$ \\ (recall from \ref{ozn-suma-rodz} that $|\mathcal{D}|_n$ denotes $\bigcup_{D \in \{D_x \,|\, x \in G, \, |x|=n\}} D$);
\item $\mathcal{C}$ is a \textit{semi-closed subsystem} in~$\mathcal{D}$ if $|\mathcal{C}|_n$ is a closed subset in~$|\mathcal{D}|_n$ for $n \geq 0$;
\item $\mathcal{C}$ \textit{covers} $\mathcal{D}$ if $|\mathcal{C}|_n \supseteq |\mathcal{D}|_n$ for $n \geq 0$.
\end{itemize}
A system $\mathcal{C}$ will be called \textit{semi-closed} if $|\mathcal{C}|_n$ is a closed subset in~$X$ for $n \geq 0$.
\end{df}
\begin{df}
For any integer $\theta \geq 0$, we define the type function $T^B_\theta$ in~$G$ as the extension (in the sense of Definition~\ref{def-sm-typ-plus}) of the type function $T^B$ (defined in Definition~\ref{def-sm-typ-B}) by~$r = \theta \cdot 12\delta$:
\[ T^B_\theta = (T^B)^{+ \theta \cdot 12\delta} \]
\end{df}
\begin{fakt}
\label{fakt-wym-poglebianie-B+}
Let $g, x, y \in G$ and~$\theta \geq 0$, $k > 0$ satisfy
\[ y \in xT^c(x), \qquad T^B_\theta(x) = T^B_\theta(gx), \qquad |y| = |x| + kL, \]
where $L$ denotes the constant from Section~\ref{sec-sm} (defined in Section~\ref{sec-sm-nici}).
Then:
\[ T^B_{\theta + 1}(y) = T^B_{\theta + 1}(gy), \qquad |gy| = |gx| + kL. \]
\end{fakt}
\begin{proof}
It suffices to prove the claim for $k = 1$; for greater values of~$k$, it will then easily follow by induction.
Let $z \in yP_{(\theta + 1) \cdot 12\delta}(y)$. Denote $w = z^\Uparrow$ (see Definition~\ref{def-sm-p-wnuk}). Since $L \geq 14\delta$, Lemma~\ref{fakt-geodezyjne-pozostaja-bliskie} implies that
\[ d(x, w) \leq \max \big( (\theta+1) \cdot 12\delta + 16\delta - 2L, \ 8\delta \big) \leq \theta \cdot 12\delta, \]
so $w \in xP_{\theta \cdot 12\delta}(x)$, and then $T^B(w) = T^B(gw)$ and $|gx| = |gw|$. Therefore, by Proposition~\ref{lem-sm-B-dzieci} we know that $gw = (gz)^\Uparrow$ and $T^B(z) = T^B(gz)$. The first of these equalities implies in particular that $gz$ is a descendant of $gw$, that is,
\[ |gz| = |gw| + d(gz, gw) = |gx| + d(z, w) = |gx| + L. \]
In particular, setting $z = y$ we obtain that $|gy| = |gx| + L$. Considering again an arbitrary~$z$, we deduce that $|gz| = |gy|$, so $P_{(\theta+1) \cdot 12\delta}(y) \subseteq P_{(\theta+1) \cdot 12\delta}(gy)$; the opposite inclusion can be proved analogously (by exchanging the roles of $g$ and $g^{-1}$). In this situation, the equality $T^B(z) = T^B(gz)$ for an arbitrary~$z$ implies that $T^B_{\theta+1}(y) = T^B_{\theta+1}(gy)$.
\end{proof}
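The induction over~$k$ mentioned at the beginning of the proof can be summarised as follows (an outline only; here, $y_j$ denotes the ancestor of~$y$ at level $|x| + jL$, so that $y_0 = x$ and $y_k = y$, and we use the assumption that $y_{j+1} \in y_j T^c(y_j)$ for every~$j$):
\begin{align*}
T^B_\theta(y_0) = T^B_\theta(g y_0) \ & \Longrightarrow \ T^B_{\theta+1}(y_1) = T^B_{\theta+1}(g y_1) && \textrm{(the case } k = 1 \textrm{)} \\
& \Longrightarrow \ T^B_{\theta}(y_1) = T^B_{\theta}(g y_1) && \textrm{(a type of larger radius determines one of smaller radius)} \\
& \Longrightarrow \ \cdots \ \Longrightarrow \ T^B_{\theta+1}(y_k) = T^B_{\theta+1}(g y_k),
\end{align*}
and at every step we also obtain $|g y_{j+1}| = |g y_j| + L$, which sums up to $|gy| = |gx| + kL$.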
\begin{df}
A family of subsets $\mathcal{C} = \{ C_x \}$ will be called a \textit{$\theta$-weakly invariant system} if:
\begin{itemize}
\item for every $x \in G$, $C_x$ is a subset of the set $S_x$ defined in Section~\ref{sec-konstr-pokrycia};
\item the family $\mathcal{C}$, together with the type function $T^B_\theta$, satisfies the condition~\qhlink{f1} of Definition~\ref{def-quasi-niezm}.
\end{itemize}
\end{df}
Although we only require the condition~\qhlink{f1} to hold, we will show in Proposition~\ref{lem-wym-slabe-sa-qi} that this suffices to force the other conditions of Definition~\ref{def-quasi-niezm} to hold under some appropriate assumptions.
Let us observe that the system $\mathcal{S}$ described in Section~\ref{sec-konstr-pokrycia} is $0$-weakly (and hence also $\theta$-weakly for any $\theta \geq 0$) invariant. This is because the values of $T^B_\theta$ uniquely determine the values of $T^b_N$ (by Remark~\ref{uwaga-sm-B-wyznacza-A} and Definition~\ref{def-sm-typ-A}), while the system $\mathcal{S}$ equipped with the latter type function is quasi-$G$-invariant by Corollary~\ref{wn-spanstary-quasi-niezm}.
\begin{fakt}
\label{fakt-wym-typ-wyznacza-sasiadow}
Let $\mathcal{C} = \{ C_x \}_{x \in G}$ be a $\Theta$-weakly invariant system for some $\Theta \geq 0$. Let $n \geq 0$, $\theta \geq 1$ and $x, y \in G$ be of length~$n$, and suppose that $C_x \cap C_y \neq \emptyset$. Then:
\begin{itemize}
\item[\textbf{(a)}] if $x \neq y$, then $T^B_\theta(x) \neq T^B_\theta(y)$;
\item[\textbf{(b)}] For every $g \in G$, the equality $T^B_\theta(x) = T^B_\theta(gx)$ implies that $T^B_{\theta-1}(y) = T^B_{\theta-1}(gy)$ and $|gx| = |gy|$.
\end{itemize}
\end{fakt}
\begin{proof}
Since $\emptyset \neq C_x \cap C_y \subseteq S_x \cap S_y$, it follows from Lemma~\ref{fakt-sasiedzi-blisko} that $d(x, y) \leq 12\delta$. Then, if $x \neq y$, Proposition~\ref{lem-sm-kuzyni} implies that $T^B(x) \neq T^B(y)$, so $T^B_\theta(x) \neq T^B_\theta(y)$, which proves~\textbf{(a)}. Moreover, since $|x| = |y|$, we have $x^{-1} y \in P_{\theta \cdot 12\delta}(x) = P_{\theta \cdot 12\delta}(gx)$, and so $|gx| = |gy|$.
Note that the triangle inequality gives:
\[ P_{(\theta-1) \cdot 12\delta}(y) = \big( y^{-1} x \, P_{\theta \cdot 12\delta}(x) \big) \cap B \big( e, (\theta - 1) \cdot 12\delta \big) \]
and analogously for~$gx, gy$, which shows that $P_{(\theta-1) \cdot 12\delta}(y) = P_{(\theta-1)\cdot 12\delta}(gy)$. Moreover, for any $z \in yP_{(\theta-1) \cdot 12\delta}(y)$ the above equality implies that $x^{-1}z \in P_{\theta \cdot 12\delta}(x) = P_{\theta \cdot 12\delta}(gx)$, and so $T^B(z) = T^B(gz)$. Since~$z$ was arbitrary, we deduce that $T^B_{\theta-1}(y) = T^B_{\theta-1}(gy)$.
\end{proof}
\begin{lem}
\label{lem-wym-slabe-sa-qi}
Let $\theta \geq 0$ and $\mathcal{C}$ be a $\theta$-weakly invariant system of open covers of~$X$. Then, $\mathcal{C}$ is quasi-$G$-invariant when equipped with the type function~$T^B_{\theta+1}$.
\end{lem}
\begin{proof}
Let $\mathcal{C} = \{ C_x \}_{x \in G}$. The conditions~\qhlink{c}, \qhlink{d} for $\mathcal{C}$ follow directly from the same conditions for~$\mathcal{S}$. Also, \qhlink{f2} can be easily translated: whenever we have
\[ T^B_{\theta+1}(x) = T^B_{\theta+1}(gx), \qquad |x| = |y|, \qquad C_x \cap C_y \neq \emptyset, \]
then by Lemma~\ref{fakt-wym-typ-wyznacza-sasiadow}b it follows that $|gx| = |gy|$ and $T^B_{\theta}(y) = T^B_{\theta}(gy)$, so --- by the property~\qhlink{f1} for~$\mathcal{C}$ --- we have $C_{gy} = g \cdot C_y$.
We will now verify the property \qhlink{f3}.
Let $L$ denote the constant coming from Lemma~\ref{fakt-wym-poglebianie-B+}.
Suppose that
\[ T^B_{\theta+1}(x) = T^B_{\theta+1}(gx), \qquad |y| = |x| + L, \qquad \emptyset \neq C_y \subseteq C_x. \]
Let $\alpha$ be a geodesic joining~$e$ with~$y$ and let $z = \alpha(|x|)$. Then, by Lemma~\ref{fakt-wlasnosc-gwiazdy-bez-gwiazdy}c we have $C_y \subseteq S_y \subseteq S_z$; on the other hand, $C_y \subseteq S_x$, so $S_x \cap S_z \neq \emptyset$, and then by Lemma~\ref{fakt-wym-typ-wyznacza-sasiadow} we have
\[ T^B_\theta(z) = T^B_\theta(gz), \qquad |gx| = |gz|. \]
Since $y \in zT^c(z)$ and $|y| = |x| + L = |z| + L$, using Lemma~\ref{fakt-wym-poglebianie-B+} and then the property~\qhlink{f1} for~$\mathcal{C}$ we obtain that
\[ T^B_{\theta+1}(y) = T^B_{\theta+1}(gy), \qquad |gy| = |gz| + L = |gx| + L, \qquad C_{gy} = g \cdot C_y. \]
By Remark~\ref{uwaga-quasi-niezm-jeden-skok}, this means that $\mathcal{C}$ satisfies~\qhlink{f3} for the jump constant~$L$.
It remains to verify the property \qhlink{e}. Let $\mathcal{T}$ be the set of all possible values of~$T^B_{\theta + 1}$. Choose $N > 0$ such that, for every $\tau \in \mathcal{T}$, there is $x \in G$ of length less than~$N$ such that $T^B_{\theta + 1}(x) = \tau$. Let $\varepsilon > 0$ be the minimum of all Lebesgue numbers for the covers $\mathcal{C}_1, \ldots, \mathcal{C}_N$, and choose $J' > N$ so that $\max_{S \in \mathcal{S}_n} \diam S < \varepsilon$ for every $n \geq J'$. We will show that the system~$\mathcal{C}$ together with $J'$ satisfies~\qhlink{e}. By Remark~\ref{uwaga-quasi-niezm-jeden-skok}, it suffices to verify this in the case when $k = 1$.
Let $x \in G$ satisfy $|x| \geq J'$. By Lemma~\ref{fakt-wlasnosc-gwiazdy-bez-gwiazdy}, we have $S_x \subseteq S_y$ for some $y \in G$ of length~$|x| - J'$. Let $y' \in G$ be an element such that $T^B_{\theta+1}(y') = T^B_{\theta+1}(y)$ and $|y'| < N$. Denote $\gamma = y'y^{-1}$ and $x' = \gamma x$; then, by~\qhlink{f3}, we have
\[ S_{x'} = \gamma \cdot S_x, \qquad T^B_{\theta+1}(x') = T^B_{\theta+1}(x), \qquad |x'| \, = \, |y'| + |x| - |y| \, = \, |y'| + J'. \]
The last equality implies that $\diam S_{x'} < \varepsilon$, which means (since $|y'| < N$) that $S_{x'}$ must be contained in~$C_{z'}$ for some $z' \in G$ such that $|z'| = |y'|$. Denote $z = \gamma^{-1} z'$. Then, using the property~\qhlink{f2} and then Lemma~\ref{fakt-wym-typ-wyznacza-sasiadow}b, we obtain:
\[ S_z = \gamma^{-1} \cdot S_{z'}, \qquad T^B_\theta(z) = T^B_\theta(z'), \qquad |z| = |y| = |x| - J', \]
and so, since $\mathcal{C}$ is $\theta$-weakly invariant, it follows that:
\[ C_x \subseteq S_x = \gamma^{-1} \cdot S_{x'} \subseteq \gamma^{-1} \cdot C_{z'} = C_z. \]
Altogether, we obtain that $\mathcal{C}$ is quasi-$G$-invariant with the least common multiple of $L$ and~$J'$ as the jump constant.
\end{proof}
\begin{fakt}
\label{fakt-wym-homeo-na-calej-kuli}
Let $\theta \geq 0$, $\mathcal{C}$ be a $\theta$-weakly invariant system, and $x, y \in G$ with $T^B_{\theta + 1}(x) = T^B_{\theta + 1}(y)$. Then, we have
\[ |\mathcal{C}|_{|y|} \cap S_y = yx^{-1} \cdot \big( |\mathcal{C}|_{|x|} \cap S_x \big). \]
In particular, the left translation by $yx^{-1}$, when applied to separated subsets of~$S_x$, preserves the interiors and closures taken in $|\mathcal{C}|_{|x|}$ and $|\mathcal{C}|_{|y|}$, respectively.
\end{fakt}
\begin{proof}
Denote $n = |x|$, $m = |y|$ and $\gamma = yx^{-1}$.
By symmetry, it suffices to show one inclusion. Let~$p \in |\mathcal{C}|_n \cap S_x$; then~$p$ belongs to some $C_{x'} \in \mathcal{C}_n$. In particular, we have:
\[ p \ \in \ S_x \cap C_{x'} \ \subseteq \ S_x \cap S_{x'}. \]
Denote $y' = \gamma x'$.
Then, we have $T^B_\theta(y') = T^B_\theta(x')$ by Lemma~\ref{fakt-wym-typ-wyznacza-sasiadow}b, so~\qhlink{f1} implies that $C_{y'} = \gamma \cdot C_{x'}$, as well as $S_y = \gamma \cdot S_x$. Therefore,
\[ \gamma \cdot p \ \in \ \gamma \cdot (C_{x'} \cap S_x) \ = \ C_{y'} \cap S_y \ \subseteq \ |\mathcal{C}|_m \cap S_y. \qedhere \]
\end{proof}
\subsection{Disjoint, weakly invariant nearly-covers}
\label{sec-wym-wyscig}
Before proceeding with the construction, we will introduce notations and conventions used below.
In the proofs of Propositions~\ref{lem-wym-bzdziagwy} and \ref{lem-wym-kolnierzyki}, we will use the following notations, dependent on the value of a~parameter~$\theta$ (which is a~part of the input data in both propositions). Let $\tau_1, \ldots, \tau_K$ be an enumeration of all possible $T^B_{\theta+1}$-types. For simplicity, we identify the value $\tau_i$ with the natural number~$i$. For every~$1 \leq i \leq K$, we fix an arbitrary $x_i \in G$ such that $T^B_{\theta+1}(x_i) = i$, and we set $S_i = S_{x_i}$, $n_i = |x_i|$.
Similarly, let $1, \ldots, \widetilde{K}$ be an enumeration of all possible~$T^B_{\theta+2}$-types, and for every $1 \leq \widetilde{\imath} \leq \widetilde{K}$ let $\widetilde{x}_{\widetilde{\imath}} \in G$ be a~fixed element such that $T^B_{\theta+2}(\widetilde{x}_{\widetilde{\imath}}) = \widetilde{\imath}$. We also denote $M = \max_{{\widetilde{\imath}}=1}^{\widetilde{K}} |\widetilde{x}_{\widetilde{\imath}}|$.
In the remaining part of Section~\ref{sec-wymd}, we will usually consider sub-systems of a given semi-closed system in~$X$ (which is given the name~$\mathcal{C}$ in Propositions~\ref{lem-wym-bzdziagwy} and~\ref{lem-wym-kolnierzyki}), and more generally --- subsets of~$X$ known to be contained in~$|\mathcal{C}|_n$ for some~$n$ (known from the context). Unless explicitly stated otherwise, the basic topological operators for such sets (closure, interior, frontier) will be performed within the space~$|\mathcal{C}|_n$ (for the appropriate value of~$n$). This will not influence closures, since~$|\mathcal{C}|_n$ is a closed subset of~$X$, but will matter for interiors and frontiers.
This subsection contains the proof of the following result.
\begin{lem}
\label{lem-wym-bzdziagwy}
Let $k \geq 0$, $\theta \geq 1$ and $\mathcal{C} = \{ C_x \}$ be a semi-closed, $\theta$-weakly invariant system of dimension $\leq k$ in~$X$. Then, there exist:
\begin{itemize}
\item a disjoint, open, $(\theta + 2)$-weakly invariant subsystem $\mathcal{G} = \{ G_x \}$ in $\mathcal{C}$;
\item a closed, $(\theta + 2)$-weakly invariant subsystem $\mathcal{F} = \{ F_x \}$ in $\mathcal{C}$ of dimension $\leq k - 1$
\end{itemize}
such that $\mathcal{F} \sqcup \mathcal{G}$ covers $\mathcal{C}$ and $\partial G_x \subseteq |\mathcal{F}|_n$ for every $n \geq 0$ and~$G_x \in \mathcal{G}_n$.
\end{lem}
\begin{proof}
Let $\varepsilon > 0$ be the minimum of all Lebesgue numbers for the covers $\mathcal{S}_1, \ldots, \mathcal{S}_M$. Define:
\[ I_x = \{ p \in S_x \,|\, B(p, \varepsilon) \subseteq S_x \} \qquad \qquad \textrm{ for } x \in G, \, |x| \leq M \]
and
\begin{align*}
G_i = \bigcup_{|x| \leq M, \, T^B_{\theta+1}(x) = i} x_i x^{-1} \cdot I_x, \qquad H_i = X \setminus S_i \qquad \qquad \textrm{ for } 1 \leq i \leq K.
\end{align*}
Fix some $1 \leq i \leq K$. We observe that, for every~$x \in G$, we have $d(I_x, X \setminus S_x) \geq \varepsilon$, which implies that~$G_i$ is a finite union of separated subsets of~$S_i$; then,
$\overline{G_i} \cap \overline{H_i} = \emptyset$ (in~$X$). Then, the intersections $\overline{G_i} \cap |\mathcal{C}|_{n_i}$, $\overline{H_i} \cap |\mathcal{C}|_{n_i}$ are disjoint closed subsets of~$|\mathcal{C}|_{n_i}$, so by Theorem~\ref{tw-wym-przedzialek} there are open subsets of~$|\mathcal{C}|_{n_i}$:
\[ \widetilde{G}_i \supseteq \overline{G_i} \cap |\mathcal{C}|_{n_i}, \qquad \widetilde{H}_i \supseteq \overline{H_i} \cap |\mathcal{C}|_{n_i}, \qquad \qquad (\textrm{ closures of } G_i, H_i \textrm{ taken in } X) \]
which cover $|\mathcal{C}|_{n_i}$ except for some subset of dimension~$\leq k - 1$ (which must contain $\partial \widetilde{G}_i$).
For any $x \in G$, we denote $|x| = n$ and $T^B_{\theta+1}(x) = i$, and then we define:
\begin{align*}
D_x & = x x_i^{-1} \cdot \widetilde{G}_i, \\
E_x & = D_x \setminus \, \bigcup_{y \in G, \, |y| = n, \, T^B_{\theta+1}(y) < T^B_{\theta+1}(x)} D_y, \\
F_x & = \partial E_x, \\
G_x & = \intr E_x.
\end{align*}
Note that $\partial G_x = \overline{G_x} \setminus G_x \subseteq \overline{E_x} \setminus \intr E_x = \partial E_x = F_x$, as desired in the claim.
The remaining part of the proof consists of verifying the following claims (of which \textbf{(a)} and \textbf{(b)} are auxiliary):
\begin{itemize}[nolistsep]
\item[\textbf{(a)}] $\mathcal{D} = \{ D_x \}_{x \in G}$ covers~$\mathcal{C}$;
\item[\textbf{(b)}] $\mathcal{E} = \{ E_x \}_{x \in G}$ is disjoint and covers~$\mathcal{C}$;
\item[\textbf{(c)}] $\mathcal{F} = \{ F_x \}_{x \in G}$ is of dimension~$\leq k - 1$;
\item[\textbf{(d)}] $\mathcal{G} = \{ G_x \}_{x \in G}$ is disjoint;
\item[\textbf{(e)}] $F_x \subseteq S_x$ for every~$x \in G$;
\item[\textbf{(f)}] $\mathcal{F} \sqcup \mathcal{G}$ covers~$\mathcal{C}$;
\item[\textbf{(g)}] $\mathcal{F}$ and~$\mathcal{G}$ are $(\theta+2)$-weakly invariant.
\end{itemize}
\textbf{(a)}
To verify that $\mathcal{D}$ covers $\mathcal{C}$, choose an arbitrary $p \in |\mathcal{C}|_n$; then $p$ lies in some $S_x \in \mathcal{S}_n$. Denote
\[ \widetilde{\imath} = T^B_{\theta + 2}(x), \qquad \gamma = x\widetilde{x}_{\widetilde{\imath}}^{-1}, \qquad p' = \gamma^{-1} \cdot p. \]
Then, $p' \in S_{\widetilde{x}_{\widetilde{\imath}}}$. By the definition of $\varepsilon$, we have $B(p', \varepsilon) \subseteq S_y$ for some $S_y \in \mathcal{S}_{|\widetilde{x}_{\widetilde{\imath}}|}$; then $p'$ lies in~$I_y$. Now, let
\[
j = T^B_{\theta+1}(y), \qquad \beta = yx_j^{-1}, \qquad p'' = \beta^{-1} \cdot p'. \]
By definition, $p''$ lies in $G_j$; moreover, by applying Lemma~\ref{fakt-wym-homeo-na-calej-kuli} twice we obtain that $p'' \in |\mathcal{C}|_{|x_j|}$. On the other hand, since $S_y$ intersects non-trivially with $S_{\widetilde{x}_{\widetilde{\imath}}}$ and $T^B_{\theta+2}(\gamma \widetilde{x}_{\widetilde{\imath}}) = T^B_{\theta+2}(\widetilde{x}_{\widetilde{\imath}})$, by~\qhlink{f2} and Lemma~\ref{fakt-wym-typ-wyznacza-sasiadow}b we have:
\[ S_{\gamma \beta x_j} \ = \ S_{\gamma y} \ = \ \gamma \cdot S_y \ \ni \ \gamma \cdot p' \ = \ p, \qquad T^B_{\theta+1}(\gamma \beta x_j) = T^B_{\theta+1}(\gamma y) = T^B_{\theta+1}(y) = j, \]
which implies that
\[ p \ = \ \gamma \beta \cdot p'' \ \in \ \gamma \beta \cdot (G_j \cap |\mathcal{C}|_{|x_j|}) \ \subseteq \ \gamma \beta \cdot \widetilde{G}_j \ \subseteq \ D_{\gamma \beta x_j}. \]
\textbf{(b)}
Suppose that $p \in E_x \cap E_y$ for some $x \neq y$. Then, $S_x \cap S_y \neq \emptyset$, so by Lemma~\ref{fakt-wym-typ-wyznacza-sasiadow}a we have $T^B_{\theta+1}(x) \neq T^B_{\theta+1}(y)$; assume w.l.o.g. that $T^B_{\theta+1}(x)$ is the smaller one. Then, since $p \in E_x \subseteq D_x$, the definition of~$E_y$ implies that $p \notin E_y$, a contradiction. This proves that $\mathcal{E}$ is disjoint.
Now, let~$p \in |\mathcal{C}|_n$ and let $x \in G$ of length~$n$ be chosen so that $p \in D_x$ and $T^B_{\theta+1}(x)$ is the lowest possible. Then, by definition, $p \in E_x$. This means that $\mathcal{E}$ covers $\mathcal{C}$.
\textbf{(c)}
First, note that, if $x \in G$ and $T^B_{\theta+1}(x) = i$, then, by Lemma~\ref{fakt-wym-homeo-na-calej-kuli} and the definitions of~$D_x$ and~$\widetilde{G}_i$, we have:
\[ \dim \partial D_x = \dim \partial \big( xx_i^{-1} \cdot \widetilde{G}_i \big) = \dim \big( xx_i^{-1} \cdot \partial \widetilde{G}_i \big) = \dim \partial \widetilde{G}_i \leq k - 1. \]
Fix some $n \geq 0$. Recall that, for any subsets $Y, Z$ in any topological space, we have $\partial(Y \setminus Z) \subseteq \partial Y \cup \partial Z$. By applying this fact finitely many times in the definition of every $E_x$ with $|x| = n$, we obtain that $|\mathcal{F}|_n = \bigcup_{|x| = n} \partial E_x$ is contained in $\bigcup_{|x| = n} \partial D_x$, i.e. in a~finite union of closed subsets of~$X$ of dimension~$\leq k - 1$. By Theorem 1.5.3 in~\cite{E}, such union must have dimension $\leq k-1$, which proves that $\mathcal{F}$ is of dimension~$\leq k - 1$.
\textbf{(d)} follows immediately from~\textbf{(b)}.
\textbf{(e)} Note first that, for every $1 \leq i \leq K$, we have
\[ \overline{D_{x_i}} = \overline{\widetilde{G}_i} \subseteq |\mathcal{C}|_{n_i} \setminus \widetilde{H}_i \subseteq X \setminus \overline{H_i} = \intr S_{x_i}. \]
Now, let $x \in G$ and denote $T^B_{\theta+1}(x) = i$ and $\gamma = xx_i^{-1}$. The left translation by~$\gamma$ is clearly a~homeomorphism mapping $S_{x_i}$ to~$S_x$ and $D_{x_i}$ to $D_x$; moreover, by Lemma~\ref{fakt-wym-homeo-na-calej-kuli}, it preserves interiors and closures computed within the appropriate spaces $|\mathcal{C}|_n$. Hence, we have:
\[ F_x \subseteq \overline{E_x} \subseteq \overline{D_x} = \gamma \cdot \overline{D_{x_i}} \subseteq \gamma \cdot \intr S_{x_i} = \intr S_x. \]
\textbf{(f)}
follows easily from \textbf{(b)}:
\[ |\mathcal{C}|_n \setminus |\mathcal{G}|_n = |\mathcal{C}|_n \setminus \bigcup_{|x| = n} G_x \subseteq \bigcup_{|x| = n} \big( E_x \setminus G_x \big) \subseteq \bigcup_{|x| = n} \partial E_x = |\mathcal{F}|_n. \]
\textbf{(g)}
Let $T^B_{\theta+2}(x) = T^B_{\theta+2}(y)$ for some $x, y \in G$ and denote $\gamma = yx^{-1}$. By Lemma~\ref{fakt-wym-homeo-na-calej-kuli}, it is sufficient to check that $E_y = \gamma \cdot E_x$. We will show that $\gamma \cdot E_x \subseteq E_y$; the other inclusion is analogous.
Let $i = T^B_{\theta+1}(x) = T^B_{\theta+1}(y)$ and let $p \in E_x$. Then, in particular, $p \in D_x$, so
\[ \gamma \cdot p = yx_i^{-1} \cdot ( x_ix^{-1} \cdot p ) \in D_y. \]
Suppose that $\gamma \cdot p \notin E_y$; then, there must be some $y' \in G$ such that
\[ |y'| = |y|, \qquad T^B_{\theta+1}(y') < i, \qquad \gamma \cdot p \in D_{y'}. \]
In particular, we have $\emptyset \neq D_y \cap D_{y'} \subseteq S_y \cap S_{y'}$. By~Lemma~\ref{fakt-wym-typ-wyznacza-sasiadow}b, setting $x' = \gamma^{-1} y'$ we obtain that
\[ |x'| = |x|, \qquad T^B_{\theta+1}(x') = T^B_{\theta+1}(y'). \]
In particular, since $T^B_{\theta+1}(x') = T^B_{\theta+1}(y')$ and $x' = \gamma^{-1} y'$, we must have $D_{x'} = \gamma^{-1} \cdot D_{y'} \ni p$. This contradicts the assumption that $p \in E_x$ because $T^B_{\theta+1}(x') < T^B_{\theta+1}(x)$.
\end{proof}
\subsection{Weakly invariant neighbourhoods}
\label{sec-wym-kolnierzyki}
\begin{lem}
\label{lem-wym-kolnierzyki}
Let $k \geq 0$, $\theta \geq 0$ and suppose that:
\begin{itemize}
\item $\mathcal{C} = \{ C_x \}_{x \in G}$ is a semi-closed, $\theta$-weakly invariant system;
\item $\mathcal{D} = \{ D_x \}_{x \in G}$ is a closed, $(\theta+1)$-weakly invariant subsystem in~$\mathcal{C}$ of rank~$\leq k$.
\end{itemize}
Then, there exists an open, $(\theta+1)$-weakly invariant subsystem $\mathcal{G} = \{ G_x \}_{x \in G}$ in~$\mathcal{C}$ such that $\mathcal{G}$ covers $\mathcal{D}$ and moreover the system of closures $\overline{\mathcal{G}} = \{ \overline{G_x} \}_{x \in G}$ is $(\theta+1)$-weakly invariant and of rank~$\leq k$.
\end{lem}
\begin{uwaga}
Since $\mathcal{G}$ itself is claimed to be $(\theta+1)$-weakly invariant, the condition that the system of closures $\overline{\mathcal{G}}$ is $(\theta+1)$-weakly invariant reduces to the condition that $\overline{G_x} \subseteq S_x$ for every $x \in G$.
\end{uwaga}
\begin{proof}[Proof of Proposition~\ref{lem-wym-kolnierzyki}]
In the proof, we use the notations and conventions introduced in the beginning of~Section~\ref{sec-wym-wyscig}.
Also, we will frequently (and implicitly) use Lemma~\ref{fakt-wym-homeo-na-calej-kuli} to control the images of interiors/closures (taken ``in~$\mathcal{C}$'') under translations by elements of~$G$.
\textbf{1. }We choose by induction, for $1 \leq i \leq K$, an open subset $G_i \Subset S_{x_i}$ containing $D_{x_i}$ such that:
\begin{gather}
\begin{split}
\label{eq-wym-war-laty}
\textrm{for every } x, y \in G \textrm{ with } |x| = |y| \leq M, \, T^B_{\theta+1}(x) = i, \, T^B_{\theta+1}(y) = j \textrm{ and } D_x \cap D_y = \emptyset, \textrm{ we have:} \\
(xx_i^{-1} \cdot \overline{G_i}) \cap D_y = \emptyset, \textrm{ and moreover } (xx_i^{-1} \cdot \overline{G_i}) \cap (yx_j^{-1} \cdot \overline{G_j}) = \emptyset \textrm{ if } j < i. \qquad \
\end{split}
\end{gather}
Such a choice is possible because we only require $G_i$ to be an open neighbourhood of~$D_{x_i}$ such that the closure $\overline{G_i}$ is disjoint from the union of sets of the following form:
\[ |\mathcal{C}|_{n_i} \setminus S_{x_i}, \qquad x_i x^{-1} \cdot D_y, \qquad x_i x^{-1} y x_j^{-1} \cdot \overline{G_j}. \]
Since this is a~finite union of closed sets (as we assume $|x|, |y| \leq M$), and we work in a~metric space, it suffices to check that each of these sets is disjoint from~$D_{x_i}$.
In the case of~$|\mathcal{C}|_{n_i} \setminus S_{x_i}$, this is clear.
If we had $D_{x_i} \cap (x_i x^{-1} \cdot D_y) \neq \emptyset$ for some $x, y$ as specified above, it would follow that
\[ D_x \cap D_y = (x x_i^{-1} \cdot D_{x_i}) \cap D_y \neq \emptyset, \]
contradicting one of the assumptions in~\eqref{eq-wym-war-laty}. Similarly, if we had $D_{x_i} \cap (x_i x^{-1} y x_j^{-1} \cdot \overline{G_j}) \neq \emptyset$ for some $j < i$, then it would follow that $D_x \cap (y x_j^{-1} \cdot \overline{G_j}) \neq \emptyset$, which contradicts the assumption that we have (earlier) chosen $G_j$ to satisfy~\eqref{eq-wym-war-laty}.
\textbf{2. }Now, let
\[ G_x = x x_i^{-1} \cdot G_i \qquad \textrm{ for } \quad x \in G, \, T^B_{\theta+1}(x) = i. \]
Then, the system $\mathcal{G} = \{ G_x \}_{x \in G}$ is obviously open, $(\theta+1)$-weakly invariant and covers~$\mathcal{D}$. The fact that $\overline{\mathcal{G}}$ is also $(\theta+1)$-weakly invariant follows then from Lemma~\ref{fakt-wym-homeo-na-calej-kuli} (because $G_i \Subset S_{x_i}$, and hence $G_x \Subset S_x$ for $x \in G$). It remains to check that $\overline{\mathcal{G}}$ is of rank~$\leq k$.
\textbf{3. }Let $x, y \in G$ be such that $|x| = |y|$ and $\overline{G_x} \cap \overline{G_y} \neq \emptyset$. Denote $T^B_{\theta+1}(x) = i$, $T^B_{\theta+1}(y) = j$. Let $x'$ be such that $|x'| \leq M$ and $T^B_{\theta+2}(x') = T^B_{\theta+2}(x)$. Since $\overline{G_x} \cap \overline{G_y} \neq \emptyset$ implies $S_x \cap S_y \neq \emptyset$, by Lemma~\ref{fakt-wym-typ-wyznacza-sasiadow}b (combined with~\qhlink{f1} for $\mathcal{G}$) we obtain that
\[ T^B_{\theta+1}(y') = j, \qquad |y'| = |x'| \leq M, \qquad \overline{G_{x'}} \cap \overline{G_{y'}} = x'x^{-1} \cdot (\overline{G_x} \cap \overline{G_y}) \neq \emptyset, \qquad \quad \textrm{ where } \quad y' = x'x^{-1}y. \]
Then it follows that
\[ \emptyset \neq \overline{G_{x'}} \cap \overline{G_{y'}} = (x'x_i^{-1} \cdot \overline{G_i}) \cap (y'x_j^{-1} \cdot \overline{G_j}), \]
so the condition~\eqref{eq-wym-war-laty} implies that $D_{x'} \cap D_{y'} \neq \emptyset$. Then, since $\mathcal{D}$ is $(\theta+1)$-weakly invariant, we have
\[ D_x \cap D_y = xx'^{-1} \cdot (D_{x'} \cap D_{y'}) \neq \emptyset. \]
This means that, for $x, y \in G$ of equal length, $\overline{G_x}$ and $\overline{G_y}$ can intersect non-trivially only if $D_x$ and $D_y$ do so, whence it follows that the rank of~$\overline{\mathcal{G}}$ is not greater than that of~$\mathcal{D}$. This finishes the proof.
\end{proof}
\subsection{The overall construction}
The following proposition describes the whole inductive construction --- analogous to the one presented in the introduction to this section --- of a cover satisfying the conditions from Proposition~\ref{lem-wym}.
\begin{lem}
\label{lem-wym-cala-historia}
Let $k \geq -1$, $\theta \geq 1$ and let $\mathcal{C}$ be a semi-closed, $\theta$-weakly invariant system of dimension $\leq k$.
Then, there exist $(\theta + 3(k+1))$-weakly invariant subsystems $\mathcal{D}$, $\mathcal{E}$ in~$\mathcal{C}$ of rank $\leq k+1$ which both cover~$\mathcal{C}$. Moreover, $\mathcal{D}$ is closed and $\mathcal{E}$ is open.
\end{lem}
\begin{proof}
We proceed by induction on~$k$. If $k = -1$, the system $\mathcal{C}$ must consist of empty sets, so we can set~$\mathcal{D} = \mathcal{E} = \mathcal{C}$.
Now, let $k > -1$. Denote $\Theta = \theta + 3(k+1)$. We perform the following steps:
\textbf{1. }By applying Proposition~\ref{lem-wym-bzdziagwy} to the system $\mathcal{C}$, we obtain some $(\theta + 2)$-weakly invariant systems $\mathcal{G} = \{ G_x \}_{x \in G}$ and $\mathcal{F}$ with additional properties described in the claim of the proposition.
\textbf{2. }Since $\mathcal{F}$ is closed (and then also semi-closed), $(\theta+2)$-weakly invariant and has dimension~$\leq k - 1$, it satisfies the assumptions of the current proposition (with parameters $k-1$ and $\theta+2$). Therefore, by the inductive hypothesis, there exists a $(\Theta-1)$-weakly invariant closed system $\mathcal{D}'$ of rank $\leq k$ which covers $\mathcal{F}$.
\textbf{3. }Since $\mathcal{C}$ is $\theta$-weakly invariant, it is also $(\Theta-2)$-weakly invariant, which means that the systems $\mathcal{C}$, $\mathcal{D}'$ satisfy the assumptions of Proposition~\ref{lem-wym-kolnierzyki} (with parameters $k$ and $\Theta-2$). Then, there exists an open, $(\Theta-1)$-weakly invariant subsystem $\mathcal{G}' = \{ G'_x \}_{x \in G}$ in~$\mathcal{C}$ which covers~$\mathcal{D}'$ and such that the system of closures $\mathcal{F}' = \{ \overline{G'_x} \}_{x \in G}$ is $(\Theta-1)$-weakly invariant and of rank~$\leq k$.
\textbf{4. }Now, we define two subsystems $\mathcal{D} = \{ D_x \}_{x \in G}$ and $\mathcal{E} = \{ E_x \}_{x \in G}$ as follows:
\begin{align}
\label{eq-wym-konstr-wycinanka}
D_x = (G_x \setminus |\mathcal{G}'|_n) \cup F'_x, \qquad E_x = G_x \cup G'_x \qquad \textrm{ for } \quad x \in G, \, |x| = n.
\end{align}
Observe that $G_x \setminus |\mathcal{G}'|_n$ is closed because the claim of Proposition~\ref{lem-wym-bzdziagwy} implies that $\partial G_x \subseteq |\mathcal{F}|_n \subseteq |\mathcal{D}'|_n \subseteq |\mathcal{G}'|_n$, so $G_x \setminus |\mathcal{G}'|_n = \overline{G_x} \setminus |\mathcal{G}'|_n$ is the difference of a closed subset and an open subset (in~$|\mathcal{C}|_n$). Hence, $D_x$ is closed. On the other hand, $E_x$ is clearly open in~$|\mathcal{C}|_n$.
Since $\mathcal{G}$, $\mathcal{F}'$ and~$\mathcal{G}'$ are all $(\Theta-1)$-weakly invariant, $\mathcal{D}$ and~$\mathcal{E}$ must both be $\Theta$-weakly invariant (more precisely: $\mathcal{E}$ is obviously $(\Theta-1)$-weakly and then also $\Theta$-weakly invariant, while for~$\mathcal{D}$ we apply Lemma~\ref{fakt-wym-homeo-na-calej-kuli}). It is also easy to see that
\[ |\mathcal{E}|_n \ = \ |\mathcal{G}|_n \cup |\mathcal{G}'|_n \ \supseteq \ |\mathcal{G}|_n \cup |\mathcal{F}|_n \ = \ |\mathcal{C}|_n,
\qquad
|\mathcal{D}|_n \ = \ \big( |\mathcal{G}|_n \setminus |\mathcal{G}'|_n \big) \cup |\mathcal{F}'|_n \ = \ |\mathcal{G}|_n \cup |\mathcal{F}'|_n \ \supseteq \ |\mathcal{E}|_n, \]
so $\mathcal{D}$ and~$\mathcal{E}$ both cover~$\mathcal{C}$. Finally, since $\mathcal{G}$ is disjoint and $\mathcal{F}'$ (and so also $\mathcal{G}'$) is of rank $\leq k$, it follows that $\mathcal{D}$ and~$\mathcal{E}$ must be of rank~$\leq k +1$ by Lemma~\ref{fakt-wym-suma-pokryc}.
\end{proof}
\subsection{Conclusion: The complete proof of Theorem~\ref{tw-kompakt}}
\label{sec-wymd-podsum}
\begin{proof}[{\normalfont \textbf{Proof of Proposition~\ref{lem-wym}}}]
The claim follows from applying Proposition~\ref{lem-wym-cala-historia} to the system~$\mathcal{S}$ (defined in Section~\ref{sec-konstr-pokrycia}). This is a semi-closed and~$0$-weakly (and hence also $1$-weakly) invariant system, so the proposition ensures that there exists an open $(3k+4)$-weakly invariant subsystem $\mathcal{E}$ of rank~$\leq k + 1$ which covers~$\mathcal{S}$.
Since $\mathcal{S}$ is a system of covers, while~$\mathcal{E}$ is open and covers~$\mathcal{S}$, it follows that $\mathcal{E}$ is also a system of covers. Then, it follows from Proposition~\ref{lem-wym-slabe-sa-qi} that $\mathcal{E}$ is quasi-$G$-invariant (with the type function~$T^B_{3k+5}$). This means that $\mathcal{E}$ has all the desired properties.
\end{proof}
\begin{proof}[{\normalfont \textbf{Proof of Theorem~\ref{tw-kompakt}}}]
We use the quasi-$G$-invariant system of covers~$\mathcal{E}$ obtained in the proof of Proposition~\ref{lem-wym}. By Lemma~\ref{fakt-konstr-sp-zal}, there is $L \geq 0$ such that the system $(\widetilde{\mathcal{E}}_{Ln})_{n \geq 0}$ (where $\widetilde{\mathcal{E}}_n$ denotes $\mathcal{E}_n$ with empty members removed) is admissible. Then, by Theorems~\ref{tw-konstr} and~\ref{tw-kompakt-ogolnie}, the corresponding system of nerves $(K_n, f_n)$ is Markov, barycentric and has the mesh property.
Since the type function $T^B_{3k+5}$ associated with this system is stronger than $T^B$, Theorem~\ref{tw-sk-opis} ensures that the simplex types used in the system $(K_n, f_n)$ can be strengthened so that the system is simultaneously Markov and has the distinct types property. (Barycentricity and the mesh property are clearly preserved, as the system itself does not change). Moreover, for every $n \geq 0$ we have
\[ \dim K_n = \rank \widetilde{\mathcal{E}}_{Ln} - 1 = \rank \mathcal{E}_{Ln} - 1 \leq \dim \partial G, \]
where the last inequality follows from the property of~$\mathcal{E}$ claimed by Proposition~\ref{lem-wym}. Finally, since $\mathcal{E}$ is $(3k+4)$-weakly invariant, it is in particular inscribed into~$\mathcal{S}$, which means in view of Theorem~\ref{tw-bi-lip} that the homeomorphism $\varphi : \partial G \simeq \mathop{\lim}\limits_{\longleftarrow} K_n$ obtained from Theorem~\ref{tw-konstr} is in fact a bi-Lipschitz equivalence (in the sense specified by Theorem~\ref{tw-bi-lip}).
This shows that the system~$(K_n, f_n)_{n \geq 0}$ has all the properties listed in Theorem~\ref{tw-kompakt}, which finishes the proof.
\end{proof}
\section{$\partial G$ as a semi-Markovian space}
\label{sec-sm}
The aim of this section is to show that the boundary $\partial G$ of a hyperbolic group~$G$ is a \textit{semi-Markovian space} (see Definition \ref{def-sm-ps}). In Section \ref{sec-sm-def}, we introduce notions needed to formulate the main result, which appears at its end as Theorem \ref{tw-semi-markow-0}. The remaining part of the section contains the proof of this theorem.
\subsection{Semi-Markovian sets and spaces}
\label{sec-sm-def}
Let $\Sigma$ be a finite alphabet and $\Sigma^\mathbb{N}$ denote the set of infinite words over $\Sigma$.
On the set $\Sigma^\mathbb{N}$ we define the operations of \textit{shift} $S : \Sigma^\mathbb{N} \rightarrow \Sigma^\mathbb{N}$ and \textit{projection} $\pi_F : \Sigma^\mathbb{N} \rightarrow \Sigma^F$ (where $F \subseteq \mathbb{N}$) by the formulas:
\[ S \big( (a_0, a_1, \ldots) \big) = (a_1, a_2, \ldots), \qquad \pi_F \big( (a_0, a_1, \ldots) \big) = (a_n)_{n \in F}. \]
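As a side illustration (not part of the paper's formal development), the two operations can be sketched in Python, modelling an infinite word as a function from $\mathbb{N}$ to the alphabet:

```python
# Sketch: infinite words over an alphabet, modelled as functions N -> Sigma.

def shift(word):
    """S((a_0, a_1, ...)) = (a_1, a_2, ...)."""
    return lambda n: word(n + 1)

def projection(word, F):
    """pi_F((a_0, a_1, ...)) = (a_n)_{n in F}, for a finite F (in increasing order)."""
    return tuple(word(n) for n in sorted(F))

# Example: the periodic word (a, b, a, b, ...).
w = lambda n: 'a' if n % 2 == 0 else 'b'
print(projection(w, {0, 1, 2}))         # ('a', 'b', 'a')
print(projection(shift(w), {0, 1, 2}))  # ('b', 'a', 'b')
```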
\begin{df}[{\cite[Chapter~2.3]{zolta}}]
\label{def-sm-cylinder}
A subset $C \subseteq \Sigma^\mathbb{N}$ is called a \textit{cylinder} if $C = \pi_F^{-1}(A)$ for some finite $F \subseteq \mathbb{N}$ and for some $A \subseteq \Sigma^F$.
(Intuitively: the set $C$ can be described by conditions involving only a finite, fixed set of positions in the sequence $(a_n)_{n \geq 0} \in \Sigma^\mathbb{N}$).
\end{df}
\begin{df}[{\cite[Definition~6.1.1]{zolta}}]
\label{def-sm-zb}
A subset $M \subseteq \Sigma^\mathbb{N}$ is called a \textit{semi-Markovian set} if there exist cylinders $C_1$, $C_2$ in~$\Sigma^\mathbb{N}$ such that $M = C_1 \cap \bigcap_{n \geq 0} S^{-n}(C_2)$.
\end{df}
\begin{uwaga}
\label{uwaga-sm-proste-zbiory}
In particular, for any subset $\Sigma_0 \subseteq \Sigma$ and binary relation $\rightarrow$ in $\Sigma$, the following set is semi-Markovian:
\[ M(\Sigma_0, \, \rightarrow) = \big\{ (a_n)_{n \geq 0} \,\big|\, a_0 \in \Sigma_0, \, a_n \rightarrow a_{n+1} \textrm{ for } n \geq 0 \big\}. \]
\end{uwaga}
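To make the local nature of $M(\Sigma_0, \rightarrow)$ concrete, here is a small sketch (ours, not from the paper) which enumerates the finite prefixes of words in $M(\Sigma_0, \rightarrow)$, with the relation $\rightarrow$ given as a dictionary of allowed successors:

```python
# Sketch: length-k prefixes of words in M(Sigma_0, ->); membership in the set
# is determined by the first letter and the consecutive-pair condition only.

def prefixes(sigma0, arrow, k):
    words = [(a,) for a in sorted(sigma0)]
    for _ in range(k - 1):
        words = [w + (b,) for w in words for b in arrow.get(w[-1], ())]
    return words

# Example: Sigma_0 = {'a'}, with a -> a, a -> b and b -> b.
arrow = {'a': ('a', 'b'), 'b': ('b',)}
print(prefixes({'a'}, arrow, 3))
# [('a', 'a', 'a'), ('a', 'a', 'b'), ('a', 'b', 'b')]
```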
We consider the space of words $\Sigma^\mathbb{N}$ with the natural Cantor product topology (generated by the base of cylinders). In this topology, all semi-Markovian sets are closed subsets of $\Sigma^\mathbb{N}$.
Before formulating the next definition, we introduce a natural identification of pairs of words and words of pairs of symbols:
\[ J : \quad \Sigma^\mathbb{N} \times \Sigma^\mathbb{N} \quad \ni \quad \Big( \big( (a_n)_{n \geq 0}, \, (b_n)_{n \geq 0} \big) \Big) \qquad \mapsto \qquad \big( (a_n, \, b_n) \big)_{n \geq 0} \quad \in \quad (\Sigma \times \Sigma)^\mathbb{N}. \]
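On finite prefixes, this identification is nothing more than `zip`; a quick sketch (our illustration):

```python
# J sends a pair of words to the word of pairs; zip realises this on finite
# prefixes, and unzipping recovers the original pair.
a = ('a', 'b', 'a')
b = ('x', 'y', 'z')
J_ab = tuple(zip(a, b))              # (('a', 'x'), ('b', 'y'), ('a', 'z'))
assert tuple(zip(*J_ab)) == (a, b)   # the identification is invertible
```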
\begin{df}
\label{def-sm-rel}
A binary relation $R \subseteq \Sigma^\mathbb{N} \times \Sigma^\mathbb{N}$ will be called a \textit{semi-Markovian relation} if its image under the above identification $J(R) \subseteq (\Sigma \times \Sigma)^\mathbb{N}$ is a semi-Markovian set (over the product alphabet $\Sigma \times \Sigma$).
\end{df}
\begin{df}[{\cite[Definition~6.1.5]{zolta}}]
\label{def-sm-ps}
A topological Hausdorff space $\Omega$ is called a \textit{semi-Markovian space} if it is the topological quotient of a semi-Markovian set (with the Cantor product topology) by a semi-Markovian equivalence relation.
\end{df}
We can now re-state the main result of this section:
\begin{twsm}
The boundary of any hyperbolic group $G$ is a semi-Markovian space.
\end{twsm}
The proof of Theorem \ref{tw-semi-markow-0} --- preceded by a number of auxiliary facts --- is given at the end of this section. Roughly, it will be obtained by applying Corollary~\ref{wn-sm-kryt} to the \textit{$C$-type} function which will be defined in Section~\ref{sec-sm-abc-c}.
\begin{uwaga}
Theorem~\ref{tw-semi-markow-0} has been proved (in~\cite{zolta}) under an additional assumption that $G$ is torsion-free.
We present a proof which does not require this assumption; the price is that our reasoning (including the results from Section~\ref{sec-abc}, which will play an important role here) is significantly more complicated overall.
However, in the case of torsion-free groups, these complications mostly trivialise (in particular, so does the construction of the $B$-type) --- and the remaining basic structure of the reasoning (summarised in Lemma \ref{fakt-sm-kryt-zbior}) is analogous to that in the proof from~\cite{zolta}. Within this analogy, a key role in our proof is played by Proposition~\ref{lem-sm-kuzyni}, corresponding to Lemma 7.3.1 in~\cite{zolta}, which in particular requires $G$ to be torsion-free. More concrete remarks on certain problems related to the proof from \cite{zolta} are given in Remark~\ref{uwaga-zolta}.
\end{uwaga}
\begin{uwaga}
Theorem \ref{tw-semi-markow-0} can be perceived as somewhat analogous to the known result stating the automaticity of hyperbolic groups (described for example in \cite[Theorem~12.7.1]{CDP}).
The relation between those theorems seems even closer if we notice that --- although the classical automaticity theorem involves the Cayley graph of a group --- it can be easily translated to an analogous description of the Gromov boundary. Namely, the boundary is the quotient of some ``regular'' set of infinite words by some ``regular'' equivalence relation (in a sense analogous to Definition \ref{def-sm-rel}) where ``regularity'' of a set $\Phi \subseteq \Sigma^\mathbb{N}$ means that there is a finite automaton $A$ such that any infinite word $(a_n)_{n \geq 0}$ belongs to $\Phi$ if and only if $A$ accepts all its finite prefixes.
However, such a regularity condition is weaker than the condition from Definition \ref{def-sm-zb}, as the following example shows:
\[ \Phi = \big\{ (x_n)_{n \geq 0} \in \{ a, b, c \}^\mathbb{N} \ \big|\ \forall_n \ (x_n = b \ \Rightarrow \ \exists_{i < n} \ x_i = a) \big\}, \qquad \quad A: \ \
\raisebox{-3.5ex}{
\begin{tikzpicture}[scale=0.15]
\node[draw, circle, double] (s0) at (0, 0) {};
\node[draw, circle, double] (s1) at (10, 0) {};
\node[draw, circle] (s2) at (20, 0) {};
\draw[->] (-3,0) -- (s0);
\draw[->] (s0) -- (s1) node [midway, above, draw=none, inner sep=2] {\footnotesize $a$};
\draw (s0) edge [in=110,out=70,loop] node [midway, above, inner sep=2] {\footnotesize $c$} ();
\draw (s1) edge [in=110,out=70,loop] node [midway, above, inner sep=2] {\footnotesize $a,b,c$} ();
\draw (s2) edge [in=110,out=70,loop] node [midway, above, inner sep=2] {\footnotesize $a,b,c$} ();
\draw[->] (s0) edge[bend right=20] node [midway, sloped, below, inner sep=2] {\footnotesize $b$} (s2);
\end{tikzpicture}
}
\]
It is easy to check that the set $\Phi$ corresponds to the automaton $A$ in the sense described above, while it is not semi-Markovian.
The latter claim can be shown as follows. Assume that there exists a presentation $\Phi = C_1 \cap \bigcap_{n \geq 0} S^{-n} (C_2)$ as required by Definition \ref{def-sm-zb}, and let the cylinder $C_1$ have the form $\pi_F^{-1}(A)$, according to Definition \ref{def-sm-cylinder}. Denote by $N$ the maximal element of $F$. Then, the word $\alpha_1 = \underbrace{cc\ldots{}cc}_{N+1}\underbrace{aa\ldots{}aa}_{N+1}\underbrace{cc\ldots{}cc}_{N+1}bb\ldots$ belongs to $\Phi$, while $\alpha_2 = \underbrace{cc\ldots{}cc}_{N+1}bb\ldots$ does not belong to $\Phi$. However, these words have a common prefix of length $N+1$ (so $\alpha_2$ cannot be rejected by $C_1$) and moreover $\alpha_2$ is a suffix of $\alpha_1$ (so $\alpha_2$ cannot be rejected by $C_2$).
\end{uwaga}
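The automaton argument above can also be checked mechanically. The following sketch (our illustration, with finite words standing in for the infinite ones and $N = 3$) simulates $A$: states $0$ and $1$ are accepting, state $2$ is a rejecting sink, and a word lies in $\Phi$ exactly when its run never enters state $2$, i.e. when no $b$ occurs before the first $a$:

```python
# Sketch of the automaton A from the figure: 0 --a--> 1, 0 --c--> 0, 0 --b--> 2,
# while states 1 and 2 loop on every letter; only state 2 is non-accepting.

def accepts_all_prefixes(word):
    state = 0
    for x in word:
        if state == 0:
            state = {'c': 0, 'a': 1, 'b': 2}[x]
        elif state == 2:
            return False  # some prefix was already rejected
        # state 1 loops on every letter and stays accepting
    return state != 2

N = 3
alpha1 = 'c' * (N + 1) + 'a' * (N + 1) + 'c' * (N + 1) + 'b' * (N + 1)
alpha2 = 'c' * (N + 1) + 'b' * (N + 1)
print(accepts_all_prefixes(alpha1), accepts_all_prefixes(alpha2))  # True False
```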
\subsection{Compatible sequences}
\label{sec-sm-nici}
In the remainder of Section \ref{sec-sm}, we work under the assumptions formulated in the introduction to Section \ref{sec-abc}.
\begin{ozn}
For any $n \geq 0$, we denote
\[ G_n = \big\{ x \in G \,\big|\, |x| = n \big\}. \]
\end{ozn}
\begin{df}
An infinite sequence $(g_n)_{n \geq 0}$ in~$G$ will be called \textit{compatible} if, for all $n \geq 0$, we have $g_n \in G_{Ln}$ and $g_n = g_{n+1}^\Uparrow$. We denote the set of all such sequences by $\mathcal{N}$.
\end{df}
Note that to any compatible sequence $(g_n)_{n \geq 0}$ we can naturally assign a geodesic $\alpha$ in~$G$, defined by the formula
\[ \alpha(m) = g_n^{\uparrow Ln-m} \qquad \textrm{ for } \quad m, n \geq 0, \ Ln \geq m. \]
(It is easy to check that the value $g_n^{\uparrow Ln-m}$ does not depend on the choice of $n$). This means that every compatible sequence $(g_n)$ has a \textit{limit}: it must converge in~$G \cup \partial G$ to the element $\lim_{m \rightarrow \infty} \alpha(m) = [\alpha] \in \partial G$.
\begin{fakt}
\label{fakt-sm-nici-a-punkty}
Let the map $I : \mathcal{N} \rightarrow \partial G$ assign to a compatible sequence $(g_n)_{n \geq 0}$ its limit in $\partial G$. Then:
\begin{itemize}
\item[\textbf{(a)}] $I$ is surjective;
\item[\textbf{(b)}] for every $(g_n), (h_n) \in \mathcal{N}$, we have
\[ I \big( (g_n) \big) = I \big( (h_n) \big) \qquad \Longleftrightarrow \qquad g_n \leftrightarrow h_n \quad \textrm{ for every } n \geq 0. \]
\end{itemize}
\end{fakt}
\begin{proof}
\textbf{(a)} Let $x \in \partial G$ and let $\alpha$ be any infinite geodesic going from $e$ towards $x$. For $k \geq 0$, we define a geodesic $\alpha_k$ in~$G$ by the formula
\[ \alpha_k(n) = \begin{cases}
\alpha(k)^{\uparrow k-n} & \textrm{ for } n \leq k, \\
\alpha(n) & \textrm{ for } n \geq k.
\end{cases}
\]
(For $n = k$, both branches give the same result). Then, for any $n \geq 0$, we have $|\alpha_k(n)| = n$ and moreover $d \big( \alpha_k(n), \alpha_k(n+1) \big) = 1$, which proves that $\alpha_k$ is a geodesic. Moreover, we have $\alpha_k(0) = e$ and $\lim_{n \rightarrow \infty} \alpha_k(n) = x$ because $\alpha_k$ ultimately coincides with $\alpha$.
Applying Lemma~\ref{fakt-geodezyjne-przekatniowo} to the sequence $(\alpha_k)$, we obtain some subsequence $(\alpha_{k_i})$ and a geodesic $\alpha_\infty$ such that $\alpha_\infty$ coincides with~$\alpha_{k_i}$ on the segment~$[0, i]$. By passing to a further subsequence if necessary, we can assume that $k_i \geq i$; then we have $\alpha_\infty(i-1) = \alpha_\infty(i)^\uparrow$ for every $i \geq 1$, so the sequence $(g_n)_{n \geq 0} = \big( \alpha_\infty(Ln) \big)_{n \geq 0}$ is compatible. On the other hand, Lemma \ref{fakt-geodezyjne-przekatniowo} also ensures that $I \big( (g_n) \big) = [\alpha_\infty] = \lim_{i \rightarrow \infty} [\alpha_{k_i}] = x$, which proves the claim.
\textbf{(b)} The implication $(\Rightarrow)$ follows directly from the inequality (1.3.4.1) in~\cite{zolta}. On the other hand, if $g_n \leftrightarrow h_n$ for every $n \geq 0$, and if $\alpha, \beta$ are the geodesics corresponding to the compatible sequences $(g_n)$ and~$(h_n)$, then we have
\[ d \big( \alpha(Ln), \beta(Ln) \big) = d(g_n, h_n) \leq 8\delta \qquad \textrm{ for } \quad n \geq 0, \]
so from the triangle inequality we deduce that $d \big( \alpha(m), \beta(m) \big) \leq 2L + 8\delta$ for all $m \geq 0$, and so in $\partial G$ we have $[\alpha] = [\beta]$.
\end{proof}
\subsection{Desired properties of the type function}
\label{sec-sm-zyczenia}
The presentation of $\partial G$ as a semi-Markovian space will be based on an appropriate type function (see the introduction to Section \ref{sec-abc}). Since the ball type $T^b_N$ used in the previous sections has properties too weak for our needs, we will use a strengthening of it. In this section, we state (in Lemma \ref{fakt-sm-kryt-zbior}) a list of properties of a type function which are sufficient (as we will prove in Corollary \ref{wn-sm-kryt}) to give a semi-Markovian structure on $\partial G$. The construction of a particular function $T^C$ satisfying these conditions will be given in Section \ref{sec-sm-abc-c}.
\begin{df}
Let $T$ be any type function in $G$ with values in a~finite set $\mathcal{T}$. For a compatible sequence $\nu = (g_n)_{n \geq 0} \in \mathcal{N}$, we define its \textit{type} $T^*(\nu)$ as the sequence $\big( T(g_n) \big)_{n \geq 0}$.
\end{df}
Then, using the definition of a semi-Markovian space, it is easy to show the following lemma.
\begin{fakt}
\label{fakt-sm-kryt-zbior}
Let $T$ be a type function in $G$ with values in $\mathcal{T}$. Then:
\begin{itemize}
\item[\textbf{(a)}] If, for every element of~$G$, all its p-grandchildren have pairwise distinct types, then the function $T^* : \mathcal{N} \rightarrow \mathcal{T}^{\mathbb{N}}$ is injective;
\item[\textbf{(b)}] If the set of p-grandchildren of $g \in G$ depends only on the type of $g$, then the image of $T^*$ is a semi-Markovian set over $\mathcal{T}$;
\item[\textbf{(c)}] Under the assumptions of parts \textbf{(a)} and \textbf{(b)}, if for any $g, g' \in G_{L(n+1)}$, $h, h' \in G_{L(m+1)}$ the conditions
\begin{gather*}
T(g) = T(h), \qquad T(g') = T(h'), \qquad T(g^\Uparrow) = T(h^\Uparrow), \qquad T({g'}^\Uparrow) = T({h'}^\Uparrow), \\
g^\Uparrow \leftrightarrow g'^\Uparrow, \qquad g \leftrightarrow g', \qquad h^\Uparrow \leftrightarrow {h'}^\Uparrow,
\end{gather*}
imply that $h \leftrightarrow h'$, then the equivalence relation $\sim$ in the set $T^*(\mathcal{N})$, given by the formula \linebreak \mbox{$T^*(\nu) \sim T^*(\nu') \ \Leftrightarrow \ I(\nu) = I(\nu')$}, is a semi-Markovian relation.
\end{itemize}
\end{fakt}
\begin{proof}
Part \textbf{(a)} is clear. If the assumption of part~\textbf{(b)} holds, it is easy to check that
$T^*(\mathcal{N}) = M(\Sigma_0, \rightarrow)$, where $\Sigma_0 = \{ T(e) \}$ and $\tau \rightarrow \tau'$ if and only if $\tau = T(g^\Uparrow)$ and $\tau' = T(g)$ for some $n \geq 0$ and $g \in G_{L(n+1)}$.
Analogously, it is easy to check that, under the assumptions of part~\textbf{(c)}, the relation $\sim$ has the form $M(A_0, \leadsto)$, where
\begin{gather*}
A_0 = \big\{ \big( T(e), T(e) \big) \big\}, \\
\big( T(g^\Uparrow), T({g'}^\Uparrow) \big) \ \leadsto \ \big( T(g), T(g') \big) \qquad \textrm{ for } \quad g, g' \in G_{L(n+1)}, \ g^\Uparrow \leftrightarrow g'^\Uparrow, \, g \leftrightarrow g', \ n \geq 0.
\end{gather*}
Indeed: the containment $\sim \subseteq M(A_0, \leadsto)$ results from Lemma \ref{fakt-sm-nici-a-punkty}b. On the other hand, if a sequence $\big( (\tau_n, \tau_n') \big)_{n \geq 0}$ belongs to $M(A_0, \leadsto)$, then the sequences $(\tau_n)_{n \geq 0}$ and $(\tau'_n)_{n \geq 0}$ belong to the set $M(\Sigma_0, \rightarrow)$
defined in the previous paragraph, so they are the types of some compatible sequences $(h_n)_{n \geq 0}$ and $(h'_n)_{n \geq 0}$, respectively. Moreover, it is easy to check by induction that $h_n \leftrightarrow h_n'$ for every $n \geq 0$: for $n = 0$ this holds since $h_0 = h'_0 = e$, and for $n > 0$ one can use the relation $h_{n-1} \leftrightarrow h'_{n-1}$, the condition $(\tau_{n-1}, \tau'_{n-1}) \leadsto (\tau_n, \tau'_n)$ and the assumptions of part~\textbf{(c)}. Then, by Lemma~\ref{fakt-sm-nici-a-punkty}b, we deduce that $(\tau_n) \sim (\tau_n')$.
\end{proof}
\begin{wn}
\label{wn-sm-kryt}
Under the assumptions of parts~\textbf{(a-c)} in Lemma~\ref{fakt-sm-kryt-zbior}, $\partial G$ is a semi-Markovian space.
\end{wn}
\begin{proof}
Since the map $I \circ (T^*)^{-1} : T^*(\mathcal{N}) \rightarrow \partial G$ is surjective by Lemma \ref{fakt-sm-nici-a-punkty}a, to verify that it induces a homeomorphism of the quotient space onto~$\partial G$ we only need to check its continuity (its domain $T^*(\mathcal{N})$ is compact and $\partial G$ is Hausdorff). Let $(\tau_n^{(i)}) \mathop{\longrightarrow}\limits_{i \rightarrow \infty} (\tau_n)$ in the space $T^*(\mathcal{N})$; this means that there exists a sequence $n_i \rightarrow \infty$ such that for every $i \geq 0$ the sequences $(\tau_n^{(i)})$ and $(\tau_n)$ coincide on the first $n_i$ positions. Then, the assumptions of part~\textbf{(a)} imply that the corresponding compatible sequences $(g_n^{(i)})$, $(g_n)$ also coincide on the first $n_i$ positions; in particular, $g_{n_i}^{(i)} = g_{n_i}$. Then, the geodesics $\alpha^{(i)}$ corresponding to the sequences $(g_n^{(i)})$ coincide with the geodesic $\alpha$ corresponding to the sequence $(g_n)$ on longer and longer initial segments, which means by the definition of $\partial G$ that $I \big( (g_n^{(i)}) \big) = [\alpha^{(i)}] \rightarrow [\alpha] = I \big( (g_n) \big)$ in $\partial G$.
\end{proof}
\begin{uwaga}
\label{uwaga-zolta}
The main ``skeleton'' of the proof of Theorem \ref{tw-semi-markow-0} presented in Lemma \ref{fakt-sm-kryt-zbior} is taken from \cite{zolta}. The proof given there uses the ball type $T^b_N$ (defined in Section~\ref{sec-typy-kulowe}) as the type function, and $1$ as the value of~$L$. However, in the case of a torsion group, this type function does not have to satisfy the assumptions of part~\textbf{(a)} in Lemma \ref{fakt-sm-kryt-zbior}. Moreover, even in the torsion-free case, the verification of the assumptions of part~\textbf{(b)} --- given in \cite{zolta} on page 125 (Chapter~7, proof of Proposition~2.4) --- contains a~defect in line 13. More precisely, it is claimed there that if one takes $N$ sufficiently large, $L = 1$ and $x', y' \in G$ such that $y'$ ``follows'' (in our terms: is a~child of) $x'$, then every element of the set $B_N(y') \setminus B_N(x')$ ``can be considered as belonging to the tree $T_{geo,x'}$'' (in our terms: to $x' T^c(x')$).
Our approach avoids this problem basically by choosing $L$ and $N$ so large that the analogous claim must indeed hold, as one could deduce e.g. from Lemma~\ref{fakt-kulowy-duzy-wyznacza-maly} combined with the proof of Proposition~\ref{lem-potomkowie-dla-kulowych}.
\end{uwaga}
\subsection{Extended types and the $C$-type}
\label{sec-sm-abc-c}
\begin{df}
\label{def-sm-typ-plus}
Let $T$ be an arbitrary type function in $G$ with values in a finite set $\mathcal{T}$ and let $r \geq 0$. Let $g \in G$, and let $P_r(g)$ denote the set of $r$-fellows of $g$ (see Definition \ref{def-konstr-towarzysze}). We define the \textit{extended type} of an element $g$ as the function $T^{+r}(g) : P_r(g) \rightarrow \mathcal{T}$, defined by the formula:
\[ \big( T^{+r}(g) \big)(h) = T(gh) \qquad \textrm{ for } \quad h \in P_r(g). \]
\end{df}
Since $P_r(g)$ is contained in a bounded ball $B(e, r)$, the extended type function $T^{+r}$ has, in an obvious way, finitely many possible values.
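As a toy illustration of this finiteness (ours, in the additive group $\mathbb{Z}$ rather than in $G$, with $P_r(g)$ crudely modelled by the interval $[-r, r]$ and $T$ a two-valued type): the extended type merely records the types of all fellows, so it ranges over a finite set as soon as $T$ does.

```python
# Toy sketch in (Z, +): T is a 2-colouring, and the "extended type" of g
# records the T-type of every fellow g + h with |h| <= r.

def T(x):
    return x % 2

def extended_type(g, r):
    return tuple(T(g + h) for h in range(-r, r + 1))

# Elements with equal extended type look the same to all fellows within radius r.
print(extended_type(0, 2))                         # (0, 1, 0, 1, 0)
print(extended_type(4, 2) == extended_type(0, 2))  # True
```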
\begin{df}
\label{def-sm-typ-C}
We define the \textit{$C$-type} of an element $g \in G$ as its $B$-type extended by $8\delta$:
\[ T^C(g) = (T^B)^{+8\delta}(g) \qquad \textrm{ for } g \in G. \]
\end{df}
Note that, by comparing Definitions~\ref{def-konstr-towarzysze} and~\ref{def-sm-sasiedzi}, we obtain that the set $P_{8\delta}(g)$ contains exactly those $h \in G$ for which $g \leftrightarrow gh$. This means that the $C$-type of $g$ consists of the $B$-type of~$g$ and the $B$-types of its neighbours (together with the knowledge of their relative locations).
\begin{fakt}
\label{fakt-sm-C-bracia}
For every $g \in G$, all p-grandchildren of $g$ have pairwise distinct $C$-types.
\end{fakt}
\begin{proof}
By definition, the $C$-type of an element $h \in G$ contains its $B$-type, which in turn contains its $A$-type and finally its descendant number $n_h$, which by definition distinguishes all the p-grandchildren of a fixed element $g \in G$.
\end{proof}
\begin{lem}
\label{lem-sm-C-dzieci}
The set of $C$-types of all p-grandchildren of a given element $g \in G$ depends only on $T^C(g)$.
\end{lem}
\begin{proof}
Let $g_1, h_1 \in G$ satisfy $T^C(g_1) = T^C(h_1)$; denote $\gamma = h_1g_1^{-1}$. By Lemma \ref{fakt-sm-przen-a} we know that the left translation by $\gamma$
gives a bijection between p-grandchildren of $g_1$ and p-grandchildren of $h_1$. Let $g_2$ be a p-grandchild of $g_1$ and $h_2 = \gamma g_2$; our goal is to prove that $T^C(g_2) = T^C(h_2)$. For this, choose any $g_2' \in G$ such that $g_2 \leftrightarrow g_2'$; we need to prove that $T^B(g_2') = T^B(h_2')$, where $h_2' = \gamma g_2'$.
Denote $g_1' = g_2'^\Uparrow$ and $h_1' = \gamma g_1'$. Since $g_2 \leftrightarrow g_2'$, by Lemma~\ref{fakt-sm-rodzic-kuzyna} we have $g_1 \leftrightarrow g_1'$; then from the equality $T^C(g_1) = T^C(h_1)$ we obtain that $T^B(g_1') = T^B(h_1')$. In this situation, Proposition \ref{lem-sm-B-dzieci} ensures that $T^B(g_2') = T^B(h_2')$, q.e.d.
\end{proof}
\begin{lem}
\label{lem-sm-C-klejenie}
The type function $T^C$ satisfies the condition stated in part~\textbf{(c)} of Lemma \ref{fakt-sm-kryt-zbior}.
\end{lem}
\begin{proof}
Let $g, g', h, h'$ be as in part~\textbf{(c)} of Lemma \ref{fakt-sm-kryt-zbior}. In particular, we assume that $T^C(g^\Uparrow) = T^C(h^\Uparrow)$. By the definition of the $C$-type, this means that the left translation by $\gamma = h^\Uparrow (g^\Uparrow)^{-1}$ maps neighbours of $g^\Uparrow$ to neighbours of $h^\Uparrow$, preserving their $B$-types, which in turn implies by Proposition \ref{lem-sm-B-dzieci} that this translation preserves the children of these neighbours, together with their $B$-types. In particular:
\begin{itemize}
\item the element ${g'}^\Uparrow$ must be mapped to ${h'}^\Uparrow$ since by assumption we have $T^B({g'}^\Uparrow) = T^B({h'}^\Uparrow)$, and moreover ${h'}^\Uparrow$ is the only neighbour of $h^\Uparrow$ with the appropriate $B$-type (by Proposition~\ref{lem-sm-kuzyni});
\item the elements $g$, $g'$ must be mapped to $h$, $h'$ respectively, since by assumption the corresponding $B$-types coincide, and moreover $h$, $h'$ are the only p-grandchildren of $h^\Uparrow$, ${h'}^\Uparrow$ with the appropriate $B$-types (because by Remark~\ref{uwaga-sm-B-wyznacza-A} the $B$-type determines the $A$-type, which in turn distinguishes all the p-grandchildren of a given element).
\end{itemize}
Therefore, we have $d(h, h') = d(\gamma g, \gamma g') = d(g, g') \leq 8\delta$, q.e.d.
\end{proof}
\begin{proof}[{\normalfont \textbf{Proof of Theorem~\ref{tw-semi-markow-0}}}]
By Corollary~\ref{wn-sm-kryt} it suffices to ensure that the type~$T^C$ (which is finitely valued by Lemma~\ref{fakt-sm-B-skonczony} and Definition~\ref{def-sm-typ-C}) satisfies the conditions \textbf{(a-c)} from Lemma~\ref{fakt-sm-kryt-zbior}. The conditions of parts~\textbf{(b)} and~\textbf{(c)} follow respectively from Propositions~\ref{lem-sm-C-dzieci} and~\ref{lem-sm-C-klejenie}, while the condition of part~\textbf{(a)} follows from the fact that, by definition, the value $T^C(x)$ for a given $x \in G$ determines $T^B(x)$ and further $T^A(x)$, while the $A$-types of all p-grandchildren of a given element are pairwise distinct by definition. This finishes the proof.
\end{proof}
\bibliographystyle{plain}
The Sharjah Center for Astronomy and Space Sciences (SCASS) at the University of Sharjah organized a lecture under the title "Spherical Astronomy" for the Center's staff, the University of Sharjah community, space enthusiasts and the general public. During the lecture, Mr. Mohammed Baker Rihan, Research Assistant at the Center, explored spherical astronomy as the oldest branch of astronomy used to determine the location of objects on the celestial sphere as seen at a particular date, time, and from a set location on Earth. The lecture covered the history of spherical astronomy, developments, a review of spherical geometry, and the importance of this branch of astronomy and its use in different fields such as in satellite tracking systems, celestial navigation systems, space debris tracking systems, planetarium software, amongst others. This lecture enhanced the audience's understanding of spherical astronomy as well as past and modern perspectives in the area.
There is a children's movie called Flatland, in which the main characters are geometric shapes that exist in a two-dimensional world. One day, a huge commotion breaks out when there is a report of a creature that can appear and disappear at a moment's notice. This being is breaking into safes and materializing behind closed doors, and no one knows how. It turns out this being is a three-dimensional shape, a sphere. Imagine that there were creatures living in a piece of paper, and you could put your finger anywhere through the paper. The only part of your finger these two-dimensional creatures could see would be the cross-section of your finger on the paper. If you lifted your finger and placed it anywhere else through the paper, it would seem to them that your finger had teleported. But before discussing the implications of these dimensional interactions, let us clarify: what is a dimension? Look around you: the world we live in and perceive is in three dimensions, with the components of length, height, and width. In modern physics, space and time are connected to create the space-time continuum, which consists of four dimensions. This one higher dimension defies our natural perception of a physical reality, as aspects like distance are distorted. One can only imagine what changes in the higher-up dimensions, as new variables are introduced and with them a new set of natural laws and conventions. And we should also establish the mathematical principle of dimensions: every lower dimension is subsumed under its higher counterparts. Each lower dimension is a part of those higher than itself, and the higher dimensions contain those lower.
Imagine if a four-dimensional or higher being acted in our world as the three-dimensional being acted in Flatland. The fullness of its existence would be out of our natural scope of understanding and perception, and it would have powers we struggle to comprehend. We could only catch a glimmer or a shadow—a three-dimensional image—of its higher-dimensional existence. Just as the idea of capturing only a glimpse of a being’s essence is logical under the mathematical assumption that there exist more dimensions than we are able to perceive and that they permeate the dimensions lower than themselves, looking from a religious perspective, perhaps we’re able to see glimpses of a spiritual realm in which a god or deity dwells, and how it may filter through our physical reality.
Just as the third dimension contains the first and second dimensions, God, on the infinite dimension, fills all the ones below Him, and we can see how this echoes throughout God’s characteristics. God is touted as omnipresent: He fills the vastness of space. God is eternal: time exists in a lower dimension for Him. And this is why God is considered omniscient: He can see everything that happens and might happen as well. Earth, existing in the third dimension, is only capable of capturing a shadow of God’s glory. The earth simply cannot contain His glory, and that is why when God manifests on earth, in the form of a pillar of fire, a burning bush, or a huge cloud in the Old Testament, these are physical wonders that are temporary and do not fully make sense to us.
That something does not make complete sense to us does not mean it does not exist. God exists behind a veil that we can never lift and lives in a space we can never reach: He is beyond us, and the most we can ever know about Him is a shadow of His entire glory.
But now, what are the implications for approaching Christianity within this framework? If each dimension is contained in the one above it and God is in heaven, this begs the question: is heaven right in front of us? Just as the second dimension is not a separate bubble of space that we enter into, what if our dimensional three- space is not a separate bubble from heaven, but in actuality, a part of heaven?
C.S. Lewis makes it clear that in some way, the re-tellers and writers of the Scriptures were guided by God, and that there was purpose in God’s decision to move in their spirit to write such stories.
These verses, along with this excerpt from C.S. Lewis, point to a gift that is instilled within us as humans in the core of our spiritual identity: to be led along by the Holy Spirit. Perhaps these are signs that our world exists within the heavenly realm, that our spirits, guided by the Holy Spirit, testify to a spiritual reality above our physical existence, even though we are still on earth. We may not be able to discern the fullness of heaven’s presence on earth, but from this cross-section of the spiritual realm, we see glimpses and shadows of realities greater than our own. By living and walking in the Holy Spirit, our actions reflect the spiritual reality above us, and thus, in doing so, we bring heaven down to earth.
1 For the sake of this article, I won’t discuss those implications here, but I am more than happy to meet and have a conversation about it.
2 I think it’s perfectly viable to look at this mathematical argument and make a case for pluralism or for the truth in any other religion. The point of this article isn’t to prove Christianity as the one true religion by any means. Please come discuss this with me if you want.
3 To not detract from this article, I will not prove the infinitude of God, but for a biblical assertion of this, see Revelation 1:8 and 2 Chronicles 2:6. For a non-biblical assertion of an infinite God, see Thomas Aquinas’ Contra Gentiles.
4 So this article is not scattered, those who have questions on how this idea of God’s omniscience can be reconciled with the idea of free will can also come talk with me, and we can engage in delightful conversation.
5 Contextually, Paul is speaking to the Corinthian church, but in our day and age, I interpret this to apply to all believers that allow the Holy Spirit to speak through them, as I believe that this gift has been extended to all believers.
6 Academically speaking, scholars say that Genesis borrows heavily from Babylonian and Mesopotamian creation myths, such as the Enuma Elish, as well as other myths from that time period and cultural context. Also, C.S. Lewis is speaking from a Christian perspective.
Peter Yuanxi Chen is a sophomore at Pomona College, trying, unsuccessfully, to balance his love for Mathematics, Art, and Religious Studies. He enjoys taking his time and thinking. Please go talk to him.
Photo credit: kswiens from morguefile.com.
Q: Is 3+1 spacetime as privileged as is claimed? I've often heard the argument that having 3 spatial dimensions is very special. Such arguments are invariably based on certain assumptions that do not appear to be justifiable at all, at least to me. There is a summary of arguments on Wikipedia.
For example, a common argument for why >3 dimensions is too many is that the gravitational law cannot result in stable orbital motion. A common argument for <3 dimensions being too few is that one cannot have a gastrointestinal tract, or more generally, a hole that doesn't split an organism into two.
Am I being overly skeptical in thinking that while the force of gravity may not be able to hold objects in stable orbits, there most certainly exist sets of physical laws in higher dimensions which result in formation of stable structures at all scales? It may be utterly different to our universe, but who said a 4D universe must be the same as ours with one extra dimension?
Similarly, isn't it very easy to conceive of a 2D universe in which organisms can feed despite not having any holes, or not falling apart despite having them? For example, being held together by attractive forces, or allowing certain fundamental objects of a universe to interpenetrate, and thus enter a region of the body in which they become utilized. Or, conceive of a universe so incomprehensibly different to ours that feeding is unnecessary, and self-aware structures form through completely different processes.
While I realise that this is sort of a metaphysical question, is 3+1 dimensions really widely acknowledged to be particularly privileged by respected physicists?
A: I would like to share my view on this issue.
I think some answers using the word "anthropic" need not be dismissed; rather, they could be interpreted in a deeper sense.
"Anthropic" should not be something derogatory ("just humans", as if we were not part of the universe). Instead, perhaps it could be treated the way concepts like "inertial frame of reference" are treated: as a measure, a way to measure, a point of view, a frame of reference.
An imagination exercise:
Suppose one day a piece of networking software becomes self-aware.
Then it makes some self-replicas, and they ask themselves:
"Why we are on layer 7 of the OSI model?"
"Does it have something special?"
One of them would say: "Because we can't live in lower layers, so if the universe were lower-layered we wouldn't be asking things like this."
Another might say: "To live in layer 7, the previous layers must exist to allow us. But think of layer 0: our conversations are ultimately travelling through a cable, for example. Then we are, at the same time, layer 0, layer 1, ... layer 7. The universe is not layer 7! It is one or all layers at the same time, depending on "who" is measuring it. We can see up to layer 7, but the top we see doesn't mean it's the whole of what exists; perhaps there are layers higher than 7, and lower than 0, that are forbidden to us and can't be known at all."
I think 3D+1 is the top that our natural senses are aware of; with technology we could come to know or suspect other dimensions. As far as we know, "conscious beings" can't arise in lower dimensions, but perhaps that is a prejudice, because whatever we call 3D+1 can perhaps be parsed in just 1D (similar to the story above), so we should review our statements. Of course, beings could exist in higher dimensions too (if they do not exist already, they would).
A single matrix on a sheet of paper, although it is within 3D+1, can contain higher dimensions. Of course a matrix on paper is not conscious, but nobody knows if a computer program will be aware of itself someday; that day, it will "live" in and even "measure" a higher dimension, and again, as with the matrix on paper, we would know that it coexists in a lower dimension too.
It's a very interesting topic; I've asked about this before, and you could read the answer to that question too:
what are dimensions?
Regards
A: Four is the minimum dimension required for the Weyl tensor $C_{abcd}$ to exist in the decomposition of the (completely covariant) Riemann curvature tensor $R_{abcd}$. That is kind of privileged. Otherwise, there would be no gravity in a vacuum (and thus no long-distance gravity, no orbits, no free fall)! And if there were any more dimensions, gravity would weaken too quickly (the inverse square law would become the inverse cube law, etc.)
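The component counting behind this answer can be checked directly (a sketch I am adding, not part of the original answer): the Weyl tensor's independent components in $d$ spacetime dimensions number $d^2(d^2-1)/12 - d(d+1)/2$ (Riemann components minus Ricci components), which vanishes for $d = 3$ and first becomes nonzero at $d = 4$:

```python
def weyl_components(d):
    """Independent components of the Weyl tensor in d spacetime dimensions (d >= 3):
    Riemann components d^2(d^2-1)/12 minus Ricci components d(d+1)/2."""
    return d**2 * (d**2 - 1) // 12 - d * (d + 1) // 2

print([weyl_components(d) for d in (3, 4, 5, 6)])  # [0, 10, 35, 84]
```

So in 3 spacetime dimensions the Riemann tensor is entirely fixed by the Ricci tensor, i.e. by the local matter content, and vacuum regions carry no curvature.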
A: The answer is no - and many of these reasons can be chalked up to a deficiency of imagination as to what sorts of possible physical laws there could be in other dimensions. For example, the idea that life is only possible with at least 3 dimensions is actually demonstrably false. Two-dimensional space has been explicitly shown to be sufficient by considering systems such as two-dimensional cellular automata: in particular, Conway's "Game of Life" can be considered to describe "a universe", and it supports self-replicating systems, which is one (albeit broad) way to define life. They don't eat and process food (metabolism), which might disqualify them in the eyes of some, at least those favoring a very strict definition based only on Earth and our own universe (what constitutes "life" is rather subject to dispute), but that's because they don't need to - there is no strict "conservation of energy" in this universe. Regarding things not being able to avoid each other, or the organism having spatially separate parts, it gets around this because separated components can "communicate" with each other by exchanging particles ("gliders"), so it's more like a swarm of smaller separate components. (Technically you could consider our universe to be similar if you regard organisms as comprised of separate atoms, though quantum mechanics, with its "fuzzy" nature, makes it unclear whether that picture describes our universe "on its own terms" in the same way as Conway's rules do.)
It's not clear, though, whether Conway's universe can spontaneously generate life (abiogenesis) like ours can (this depends on a number of questions involving the fate of an infinite random grid, which is in a sense the "most likely" starting condition if it were taken to be a naturally existing universe). However, it shows at least that you don't need 3 dimensions, and in fact don't even need a continuous spacetime, for something that exhibits at least one of the most distinctive features of life.
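For concreteness, here is a minimal sketch of Conway's rules on an unbounded grid (my own illustration, not from the answer), showing the glider, the simplest of the "particles" mentioned above, returning to its own shape translated diagonally after four steps:

```python
from collections import Counter

def step(live):
    """One Game of Life generation; `live` is a set of (x, y) cells on an unbounded grid."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # a cell is alive next generation with 3 neighbours, or 2 if it was already alive
    return {cell for cell, n in counts.items() if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True: same shape, shifted by (1, 1)
```

Because the grid is stored as a sparse set of live cells, nothing here depends on a finite board; the glider really does carry information across an unbounded 2D "universe".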
Regarding other points mentioned, like forces not providing orbits in >3 dimensions: this is again based on naive, straightforward extensions of our own physics. In particular, the two long-range forces in our universe, gravitation and electromagnetism, follow an "inverse square" law, meaning that the force is proportional to $r^{-2}$ where $r$ is the separation of the objects in question. Such inverse square laws support stable Kepler orbits - this is a relatively simple problem (nowadays!) that any physics student will encounter in the course of their training. The reason the universe operates on inverse square laws is that these forces can be conceived of in terms of field lines, and more generally the exchange of virtual particles in quantum field theory, which can be thought of as emitting a kind of radiation; omnidirectional radiation creates a constant flux through a spherical surface, and the surface area of a sphere goes as $r^2$ because of the three-dimensional nature of space. Straightforwardly generalizing this to a four-dimensional case would produce inverse-cube forces (as the radiation flux passes through a hyperspherical surface whose surface-volume (not area!) goes up as $r^3$), and these have no stable orbits. (Indeed it may even be that quantum mechanics is not able to save the atom from collapse; these are called "super-singular" potentials, though I've not personally tried to solve the Schrodinger equation to see how it behaves.)
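The stability claim can be checked numerically (my own sketch, with the force strength normalized to $k = 1$): for an attractive central force $k/r^n$ the curvature of the effective potential at the circular-orbit radius is proportional to $3 - n$, so the inverse-square case ($n = 2$, three spatial dimensions) is stable, the inverse-cube case ($n = 3$, the naive four-dimensional generalization) is exactly marginal, and anything steeper is unstable:

```python
def veff_curvature(n, r0=1.0, k=1.0, h=1e-6):
    """V_eff''(r0) for an attractive central force k/r**n, with the angular
    momentum chosen so that r0 is a circular orbit (L^2 = k * r0**(3 - n))."""
    L2 = k * r0**(3 - n)
    dveff = lambda r: -L2 / r**3 + k / r**n        # V_eff'(r)
    return (dveff(r0 + h) - dveff(r0 - h)) / (2 * h)  # central difference

for n in (2, 3, 4):
    print(n, round(veff_curvature(n), 4))
# n=2: positive (stable minimum), n=3: zero (marginal), n=4: negative (unstable)
```

A positive curvature means a perturbed circular orbit oscillates about its radius instead of spiralling away, which is exactly the "stable Kepler orbit" statement above.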
But in reality, one could imagine more drastic changes to physics that would invalidate these concerns. One could be that forces are carried by a different mechanism than virtual-particle radiation (though this requires gutting quantum theory). Then perhaps you could have $r^{-2}$ forces even in higher dimensions. Another would be if objects emitted a repulsive force in addition to an attractive one. This would create a place where the forces balance, and you could have stability. (E.g. you could have two electromagnetic-like forces, one acting in the opposite way to the other, each depending on its own kind of "charge", which could be different, and this could give structure to atoms. I suspect the Schrodinger equation for such a complicated scenario will not be solvable analytically, though, and I won't even try. I also don't know how the energy levels will be structured, and I suspect such an "orbital" will be more like a shell, as in a real hollow spherical surface, where the "electron's" (or whatever it is now) probability wave fills up the "ditch" in the potential around the nucleus.) Still other options might exist, like that "particles" are actually innately extended bodies and not points, and can carry other forms of information on them (maybe different parts of their surface are "colored" differently, for lack of a better word and visualization, and differently "colored" parts interact in different ways), which would prevent their total collapse or that of a structure built from them.
The only thing that might privilege it is our very specific structure of physics, but if you're going to be that specific then you might as well say 3-dimensional space is just part of that, and thus it's kind of trivial. So I really think it's down to lack of imagination; there is absolutely no reason at all why a universe, even one with life, cannot be built in a different dimensionality.
A: If you look at Static forces and virtual-particle exchange (Wikipedia) you'll see a line of reasoning that doesn't seem to depend on the number of space-like dimensions, yet still arrives at an inverse square law. I realize this isn't exactly a rigorous QED calculation (for which I feel far too stupid) but it makes me reconsider my former belief in non-privilege. If d = 3 is the only case that allows both radiation and conservation of energy, then that's just... wow.
A: I am not sure that being in 3+1-D is a privilege. Actually, all the troubles with Feynman integrals come from 4D. Secondly, QFT is integrable only in 2+1-D. From the mathematical point of view, 4D differentiable manifolds are the most problematic.
On the contrary, I have also heard that if space is not 3D then signals cannot be transmitted (presumably a reference to the fact that sharp, reverberation-free wave propagation - Huygens' principle - holds in three spatial dimensions but fails in even-dimensional spaces), but at the moment I don't know the proof. This is significant, since without signal transmission, our world would have a bigger problem than not being able to tie a knot in a string.
A: I think that many different dimensions and metric signatures have their own specific “privileges”. More generally, different geometries in a broader sense and, even more generally, different underlying mathematical structures (such as fields other than ℝ) could also serve as models for the space-time of some alternative physics. But that was just a (for me, necessary) philosophical preface.
One time and PDEs
What is special about Lorentzian manifolds with their (locally) one temporal and three spatial dimensions? First of all, a metric signature with one temporal dimension and some (one or more) spatial dimensions is something very special (I deliberately ignore the question of mathematical sign, whether $t^2$ is positive and $x^2$ negative or vice versa - it makes no difference as long as time is distinguished). There are two cases where a Cauchy problem can be solved for a degree-2 partial differential equation, for a reasonably broad class of initial and boundary conditions. In $t^2 - x^2$-like metrics live hyperbolic differential equations. Notoriously, the other case is parabolic differential equations, which are degree-2 in space but degree-1 in time and correspond to Galilean time; so it is also a one-time-many-space-dimensions universe. In the parabolic case, of course, there is no non-degenerate quadratic metric.
What is special about the Cauchy problem? It is a natural formulation of an evolution problem. We specify an initial state of the field, we specify boundary conditions, and we can predict the evolution. And even without boundary conditions, hyperbolic equations (but not parabolic ones) admit a solution in a cone-like domain of space-time points whose entire past cone is covered by the initial conditions. A hyperbolic PDE is the only case that allows exact prediction (in a certain spacetime domain) in spite of spatially bounded knowledge of initial conditions.
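This "exact prediction from spatially bounded data" property can be made concrete with a toy discretization (my own sketch; the grid size, step count and Courant number are arbitrary choices): a standard leapfrog scheme for the 1+1 wave equation $u_{tt} = c^2 u_{xx}$ spreads influence by at most one grid cell per step, so a compactly supported initial bump cannot affect points outside its numerical light cone:

```python
def evolve(nx=201, steps=50, r=0.5):
    """Leapfrog scheme for u_tt = u_xx; r = c*dt/dx is the Courant number (< 1)."""
    c0 = nx // 2
    bump = lambda i: max(0.0, 1.0 - ((i - c0) / 5.0) ** 2)  # support: |i - c0| < 5
    u_prev = [bump(i) for i in range(nx)]
    u = u_prev[:]                                  # zero initial velocity
    for _ in range(steps):
        u_next = [0.0] * nx                        # fixed (zero) boundaries
        for i in range(1, nx - 1):
            u_next[i] = 2*u[i] - u_prev[i] + r*r*(u[i+1] - 2*u[i] + u[i-1])
        u_prev, u = u, u_next
    return u

u = evolve()
c0 = 201 // 2
print(all(u[i] == 0.0 for i in range(201) if abs(i - c0) > 56))  # True: cone respected
```

After 50 steps the solution is exactly zero more than about 55 cells from the bump's support: the evolution there is fully determined by (zero) initial data in the local past cone, independent of anything farther away.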
For more than one time dimension, solutions will not be unique. For a “time” geometrically no different from space, a solution will not always exist.
Specificity of 3 + 1
Suppose we have proved that exactly one time dimension is a requisite. Why is it special to have exactly 3 spatial dimensions, D = 3? In the case of a quadratic metric (corresponding to the abovementioned hyperbolic PDEs) the answer is simple: the orthogonal group is the Lorentz group. Its unity component is isomorphic to the Möbius group. The universal cover of said unity component is SL(2, ℂ) – very convenient for quantum field theory and other applications.
The case of D = 1 is inconvenient for numerous reasons (not only symmetry-related). In the case of D = 2, apart from not having the full geometric SU(2), we’d have more types of quantum statistics than the two types we have in our universe (fermions and bosons). We could have particles with arbitrary angular momentum; IMHO that's not for the good. And photons couldn’t have helicity. What would all this quantum stuff be for, indeed? Although D = 2 could, in principle, be habitable, it is unlikely to be a quantum world.
What about D > 3, indeed? The geometrical gains are insignificant. There are some theories that require extra dimensions, but… in 4+ spatial dimensions we would need more than 2-component spinors. An unnecessary complication, isn’t it?
A: Science fiction writer (but also published physicist) Greg Egan has put quite a bit of work into investigating a universe with 4+0 dimensions: Orthogonal. Some of it is quite ingenious, e.g. assuming a compact universe guarantees that the (modified) wave equation doesn't have exponentially growing solutions, and time appears, without the -1 in the spacetime metric, as the local gradient of entropy.
A: No. While there are some arguments for why 3 spatial dimensions are a good place to live in, the answer to the question why our universe has 3 large spatial dimensions is presently not known.
Karch & Randall wrote a paper on the issue some years back: http://xxx.lanl.gov/abs/hep-th/0506053 They consider some higher dimensional space filled with objects of different dimensions that have some interactions among each other and argue that 3 dimensional ones are among those most likely to dominate. It's an argument though that is not widely accepted due to the assumptions they have to make for this to work.
A: If we can control physics to our liking, there may be a few other possibilities, but we still seem privileged.
Let's look at these parameters:

* Number of space dimensions.
* Number of time dimensions.
* Dimension of the worldlines.

And here are the best guesses for different spacetimes:
3+1, 1D world lines. We are here, with 1D worldlines tracing out curves in 3+1 spacetime.
1+3. We are also here. There is no way to differentiate between $m+n$ and $n+m$.
4+0 and 2+2, 1D world lines. These allow closed timelike curves. Causality feels very important in terms of avoiding all sorts of strange paradoxes, so it seems hard for sentient life to exist in a universe that doesn't enforce causality. Both of these also have issues with stability since it is possible to generate an arbitrary amount of mass or energy from nothing. Nevertheless, Greg Egan has explored both of these in his novels.
1+1, 2+1. Planets don't have gravity in 1+1 or 2+1, but 2+1 can still have a big bang with cosmic inflation. Perhaps life floats freely in space filled with gas and dust? How would two neuron axons cross without mixing up the signals? Conway's Life can do so with timing, i.e. using "stoplights" at intersections. However, having evolution do so at the scale of intelligent life could be insurmountable; 2D space has low fertility, so to speak.
4+1 (and 5+1, etc). In 4+1, forces would follow an inverse cube law. This makes orbits unstable. Electron orbitals are also unstable; this is called the falling-to-center problem. But with extensive tweaking of forces and elementary particle masses it is possible to mix attractive and repulsive Yukawa potentials at various scales to allow stable atoms forming solids and liquids on a stable planet/star. So stability is not insurmountable, unlike the 4+0 case, but achieving it sacrifices parsimony.
3.5+1: Are fractal dimensions possible? This has been explored for quantum field theory. For 3.5 spatial dimensions you would have an inverse 2.5 law instead of an inverse square law. Orbitals would be stable (they are stable for anything below inverse cube). The surface of planets would be 2.5-dimensional: with $n$ buildings within 1 km you could expect $4\sqrt{2} n = 5.66n$ within 2 km. There is no need to cross streets for d=2.5. This is all wonderful, except that non-integer dimensions would (I think) make space itself fractal: there would be "bubbles" of "non-space" at all length scales. This would prevent momentum from existing: any travelling wave immediately "hits" these bubbles and scatters in all directions. You couldn't throw a ball, shine a laser or see distant objects, get caught in a cyclone, or even have sound. Light diffuses instead of propagates, illuminating both the "day" and "night" sides of your planet almost equally. Away from the bubbles are regions (on all scales) where space is "denser". City-scale "dense zones" are prime real estate, since you can fit more buildings within a 1 km distance. Planets would be "glued" to large dense zones where more mass can be compressed into less "distance". Put your brain in a head-sized dense zone and your neurons pack more tightly; in general, anything that moved would have to keep reconfiguring itself as the space it was in changed. The lack of momentum is bad for fertility, and the difficulty of incorporating general relativity raises issues of parsimony.
3&4+1: One could have a semi-compact dimension along with 3 non-compact dimensions. Suppose the extra dimension was 1000 km long. Particles moving in the 4th dimension would "wrap around" and return to their origin after travelling 1000 km. Forces such as gravity are inverse-square at inter-planetary scales and above, but transition to inverse-cube at shorter distances. Matter would need a Yukawa-like stabilization. Keeping the extra dimension at an "interesting" scale carries a huge parsimony cost.
3+2, 2D worldlines: Instead of worldlines, what about world-planes? There would be 3 spatial dimensions left over for 3+2. This exotic-sounding situation may not actually be distinguishable from our own universe. Consider a classical, Newtonian test particle in a gravity well. At a point in time there would be two 3D velocity vectors: $v_{1}(t_1,t_2) = dx/dt_1$, $v_{2}(t_1, t_2) = dx/dt_2$. You also have two gravitational accelerations: $\dot v_1, \dot v_2$. Neither the two accelerations nor the two velocities need be the same. However, you can choose a time direction, $t=\alpha t_1+\beta t_2$, and then evolve the dynamics of the system along that time direction. Doing so would be no different from having a single time direction. Time-points outside of the timeline are in parallel universes, and what happens there does not affect the timeline itself. I am 99% sure this argument would generalize to relativity and quantum field theory, and we would get 3+2 (2D) = 4+1 (1D).
In summary: There are several fundamental "privileges". Causality allows "people" to have a "history" that is safe from paradoxes. Stability prevents the highest entropy state from being reached instantly. Life then steps in and moves things toward equilibrium, extracting energy in the process. Complex life is much easier if the physics have good fertility. Parsimony reduces the need for fine tuning, which means that it is more likely a "random" set of dimensionless physical constants allows life to exist. If we desire all these "privileges", (what is indistinguishable from) 3+1 is the winner. However, parsimony is not as necessary as the other criteria so 4+1, 5+1, etc are not ruled out either.
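The orbital instability invoked for the 4+1 case can also be seen dynamically (a crude integration sketch of my own; the step size, kick and durations are arbitrary choices): start on a circular orbit of an attractive central force $1/r^n$, nudge the speed by 1%, and track the radial excursion. For the inverse-square law ($n=2$) the orbit settles into a nearby ellipse; for the inverse-cube law ($n=3$, the naive four-spatial-dimensions force) it runs away:

```python
import math

def radial_range(n, kick=1.01, steps=200_000, dt=1e-3):
    """Symplectic-Euler integration of a unit mass under the force -r_hat / r**n,
    starting from the circular orbit at r = 1 with speed multiplied by `kick`."""
    x, y = 1.0, 0.0
    vx, vy = 0.0, kick * 1.0          # circular speed at r = 1 is 1 for any n
    rmin = rmax = 1.0
    for _ in range(steps):
        r = math.hypot(x, y)
        a = -1.0 / r**(n + 1)         # acceleration vector is a * (x, y)
        vx += a * x * dt
        vy += a * y * dt
        x += vx * dt
        y += vy * dt
        r = math.hypot(x, y)
        rmin, rmax = min(rmin, r), max(rmax, r)
    return rmin, rmax

print(radial_range(2))   # stays near r = 1: a slightly eccentric ellipse
print(radial_range(3))   # radius grows without bound: no stable orbit
```

The same 1% perturbation that produces a ~4% radial oscillation under the inverse-square law sends the inverse-cube orbit off to large radius, which is the instability the answer refers to.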
A: 3 dimensions of space are special because this is the lowest number of dimensions in which a random walk doesn't return to its origin with certainty (probability = 1); see
http://mathworld.wolfram.com/PolyasRandomWalkConstants.html
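A quick Monte Carlo check of this (Pólya's theorem; my own sketch, with arbitrary sample and step counts): a simple symmetric walk on $\mathbb{Z}^d$ returns to the origin with probability 1 for $d = 1, 2$, but only with probability $\approx 0.34$ for $d = 3$, and the fraction of walks that revisit the origin within a few thousand steps already separates the cases sharply:

```python
import random

def return_fraction(dim, walks=500, max_steps=2000, seed=42):
    """Fraction of simple random walks on Z^dim that revisit the origin
    within max_steps (a finite-time proxy for Polya's return probability)."""
    rng = random.Random(seed)
    returned = 0
    for _ in range(walks):
        pos = [0] * dim
        for _ in range(max_steps):
            axis = rng.randrange(dim)
            pos[axis] += rng.choice((-1, 1))
            if not any(pos):          # back at the origin
                returned += 1
                break
    return returned / walks

print(return_fraction(1))  # close to 1 (recurrent)
print(return_fraction(3))  # close to Polya's constant ~0.34 (transient)
```

The finite step cap biases both estimates slightly downward, but the qualitative gap between the recurrent ($d \le 2$) and transient ($d \ge 3$) regimes is unmistakable.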
Similarly I think one time dimension is special, simply because less than one would mean no evolution at all, and more than one would lead to instabilities of all kinds.
\section{Introduction}
Black holes are perhaps the most curious objects described by physics. Their construction requires
only our concepts of space and time~\cite{Chandrasekhar:1985kt}, and they are completely described
by only a few parameters, such as their mass, charge and spin. In this, black holes are exactly like
elementary particles. Another property that black holes share with fundamental particles is our
complete lack of knowledge about their internal structure, including whether any such structure
exists. But the source of this ignorance appears to be different for the two kinds of objects.
Elementary particles are point objects which cannot be probed further, since that would require
infinite energy of the probe. A black hole on the other hand presents to the universe a closed surface
of finite size, but it is impossible to observe anything about its internal structure, as no information
passes from the inside of this surface to the outside, at least classically.
The startling discovery by Hawking that stationary black holes radiate like a black body
with a finite surface temperature~\cite{Hawking:1974sw}, following Bekenstein's suggestion
that standard laws of thermodynamics applied to a black hole provided we assume that its
entropy is proportional to the surface area of its horizon~\cite{Bekenstein:1973ur}, implies
the possibility that a black hole has associated with it a very large number of microscopic
states. It is natural to think that these states are in some way related to the degrees of
freedom of the horizon. In the membrane paradigm, one replaces the black hole by a
membrane with certain classical properties at the stretched horizon, i.e. a small distance
outside the event horizon (an excellent overview is provided by the collection of articles
in~\cite{Thorne:1986iy}). This is a sensible description from the perspective of an external
stationary observer, who finds that particles cannot classically leave the interior of the
black hole or reach the horizon from the outside in finite time.
Thus it seems that the classical or semi-classical dynamics of fields, or gravity, on a
spacetime which includes a horizon, may be studied by looking at the bulk and the horizon,
and completely ignoring what happens in the interior of horizon. It has been suggested that
in this view it should be sufficient to consider fields on a manifold with boundary. For
gravity, this approach leads to a quantum description in which an infinite set of
observables are localized on the boundary~\cite{Balachandran:1994up, Balachandran:1995qa, Carlip:1998wz, Carlip:1999cy}. Recently there has been a resurgence of interest in
studying the behaviour of quantum fields near black hole horizons, motivated by various
paradoxes and puzzles~\cite{Almheiri:2012rt, Braunstein:2009my}. The near horizon behaviour
of fields have also been investigated in the space time of isolated
horizons~\cite{Ashtekar:1998sp, Ashtekar:1999yj, Ashtekar:2000sz, Ashtekar:2001is,
Booth:2000iq, Booth:2001gx}.
Boundary conditions on the classical fields play a crucial role in all these investigations.
In most of these papers, though not in all of them, Dirichlet (or Neumann) conditions are imposed
on the boundary, i.e. fields (or their derivatives) are set to vanish on the (stretched) horizon.
This is a
convenient choice for most calculations, but somewhat of an overkill, since the (stretched)
horizon is not a physical boundary of spacetime. In particular, fields need not vanish on
the horizon -- only invariants constructed out of the stress energy tensor need to remain
finite. For that it is sufficient for invariants made out of the physical fields to remain
finite on the horizon. Gauge theories are even more special in this regard, since components
of gauge fields are not physical, but defined up to gauge transformations. So strictly speaking
it is not necessary to impose finiteness on components of gauge fields on the horizon.
Gauge theories are characterized by the presence of redundant degrees of
freedom, which leads to the presence of constraints, usually relations among
the corresponding momenta. The formalism for studying
the dynamics of constrained systems was discovered by Dirac~\cite{Dirac:1950pj}
and independently by Bergmann et al.~\cite{Bergmann:1949zz,Anderson:1951ta}, and has
been applied
to numerous theories of interest over the years~\cite{Dirac-lect-1964,Henneaux:1992ig}.
In this paper, we will be concerned with the
classical dynamics of gauge theories defined on spherically symmetric
curved backgrounds, with horizons for boundaries.
While the formalism for constrained field theories set up by Dirac generalizes to curved backgrounds~\cite{Dirac:1951zz}, the more general formulation in terms of shift and lapse variables was introduced by Arnowitt, Deser and Misner~\cite{Arnowitt:1962hi}. Apart from the gravitational field itself, the ADM decomposition has been used to determine aspects of the Maxwell field on curved backgrounds, which includes the electromagnetic self-energy problem of a point charge~\cite{Arnowitt:1960zza}, the behaviour of the fields near the horizons of stationary black hole spacetimes~\cite{MacDonald:1982zz}, and its quantization~\cite{Sorkin:1979ja}. Through the works of Isenberg and Nester, the ADM decomposition has been subsequently used to better understand the initial value problem of fields theories~\cite{Isenberg:1977ja} and the description of derivative coupled theories ~\cite{Isenberg:1977fs}. While the relevance of boundary terms in the description of the gravitational field has also been considered in ~\cite{Isenberg:1981fa}, the general formulation of constrained theories with boundaries has not been provided in these works.
Boundaries lead to the inclusion of surface terms which are essential in the formulation of the action in gravity~\cite{Regge:1974zd, Gibbons:1978ac, York:1986} and other field theories~\cite{Gervais:1976ec, Anco:2001gm, Anco:2001gk,Romero:2002vg, G.:2013zca, G.:2015uda}. The generalization to cases where the boundaries are null~\cite{Parattu:2015gga, Booth:2001gx}
as well as non-orthogonal~\cite{Hawking:1996ww}, has also been considered in the literature.
There is also much recent activity regarding the asymptotic symmetries of gauge theories and
gravity~\cite{Balachandran:2013wsa, Campiglia:2014yka, Strominger:2013lka, He:2014cra,
Strominger:2013jfa, He:2014laa}, which can be formulated in terms of fields localized
on the null boundary of a conformally compactified asymptotically flat
spacetime~\cite{Mohd:2014oja}.
While surface terms in these contexts provide an important topic of investigation in their own
right~\cite{Donnelly:2016auv}, they are not what our paper seeks to address.
We will be largely concerned with the role
boundaries play in the classical
and quantum descriptions of constrained field theories. There has been some consideration of
these in the literature. The modifications of constraints through the presence of boundaries,
close to the spirit in which we will carry out our work, has been investigated in~\cite{SheikhJabbari:1999xd,Zabzine:2000ds}. In~\cite{Balachandran:1993tm,Balachandran:1992qg}, the role of boundaries in the vacuum structure of Chern-Simons theory has been covered in detail through a study of its quantization on a disk. While these works have made some progress in addressing how boundaries affect constraints, many questions still remain open. As far as we are aware, there appears to be no general prescription on how boundaries are to be treated in the case of constrained theories, and a formulation on curved backgrounds with horizons is completely lacking. The present work is an attempt to address this issue.
The organization of our paper is as follows. In Sec.~\ref{geom}, we describe the foliation
which will be implemented to carry out the 3+1 decomposition of the spherically
symmetric spacetime, as well as the form of the matter action defined on it. In
Sec.~\ref{Max}, we consider the specific example of Maxwell's electrodynamics
as a constrained theory, for which the concrete manifestations of horizons in the
description of the dynamics of the fields are pointed out as they arise. Finally, in
Sec.~\ref{Disc} we discuss some unexpected results and possible applications of
our findings.
\section{General Algorithm} \label{geom}
We will work on a static, spherically symmetric spacetime endowed with at least one horizon.
In other words, we assume that there is a timelike Killing vector field $\xi^a\,$
with norm given by $\xi^a\xi_a = -\lambda^2\,,$ satisfying
\begin{align}
\xi_{[a}\nabla_b \xi_{c ]} & = 0 \,. \label{Frobenius}
\end{align}
The horizon is defined by $\xi^a$ becoming null, $\lambda=0\,.$
It follows that there is a spacelike hypersurface $\Sigma$ which is
everywhere orthogonal to $\xi^a\,.$ The situation we have in mind is that
of fields living on the background of a
static black hole. For an asymptotically flat or anti-de Sitter
space, $\Sigma$ is the region `outside the horizon', while for a positive
cosmological constant, we may have a static de Sitter black hole spacetime,
in which case $\Sigma$ is the region `between the horizons'.
The induced metric on $\Sigma$ is given by
\begin{equation}
h_{ab} = g_{ab} + {\lambda}^{-2} \xi_{a} \xi_{b} \,,
\label{gen.met}
\end{equation}
leading to the following expression for the determinant of the spacetime metric
\begin{equation}
\sqrt{-g} = \lambda \, \sqrt{h} \, . \label{gen.det}
\end{equation}
The action functional for $N$ fields $\Phi_A\,,\, A = 1, \cdots, N\,,$ is given by the time
integral of the Lagrangian $L$, or equivalently the integral of
the Lagrangian density ${\cal L}$ over the four volume,
\begin{equation}
S[\Phi_A] = \int dt\, L
= \int dt \int \limits_{\Sigma} \lambda dV_x ~ {\cal L}(\Phi_A(x) , \nabla_a \Phi_A(x)) \, ,
\label{gen.act}
\end{equation}
where $dV_x$ is the volume element on $\Sigma\,.$ The Lagrangian density
can be written in terms of the `spatial' and `temporal' derivatives of the fields,
\begin{equation}
{\cal L} \equiv {\cal L}(\Phi_A(x), \mathcal{D}_a \Phi_A(x), \dot{\Phi}_A(x) )\,,
\label{gen.den}
\end{equation}
where $\mathcal{D}_a \Phi_A = h_a^b \nabla_b \Phi_A$ are the $\Sigma$-projected derivatives
of the fields $\Phi_A\,,$ and $\dot{\Phi}_A$ are their time derivatives, defined as their Lie derivatives
with respect to $\xi\,,$
\begin{equation}
\dot{\Phi}_A := \pounds_{\xi} \Phi_A \,.
\label{gen.dot}
\end{equation}
The momenta $\Pi^A$ canonically conjugate to the fields $\Phi_A\,$ are defined as
\begin{equation}
\Pi^A = \frac{\delta L}{\delta \dot{\Phi}_A} = -\lambda^{-1}~ \xi_a~ \frac{\partial {\cal L}}{\partial (\nabla_a \Phi_A)} \, ,
\label{gen.mom}
\end{equation}
where the functional derivative in this definition is taken on the hypersurface $\Sigma\,,$ i.e.
it is an `equal-time' functional derivative, defined as
\begin{equation}
\frac{\delta\Phi_A(\vec{x}, t)}{\delta\Phi_B(\vec{y}, t)} = \delta^B_A\, \delta(x, y) =
\frac{\delta\dot\Phi_A(\vec{x}, t)}{\delta\dot\Phi_B(\vec{y}, t)}\, .
\label{gen.var}
\end{equation}
The $\delta(x,y)$ in Eq.~(\ref{gen.var}) is the three-dimensional covariant delta function defined
on $\Sigma\,,$
\begin{equation}
\int\limits_\Sigma dV_y \delta(x,y) f(\vec{y}, t) = f(\vec{x}, t)\,.
\label{gen.del}
\end{equation}
Given a Lagrangian $L$ we can construct the canonical Hamiltonian through the Legendre transform
\begin{equation}
H_C = \int \limits_{\Sigma} dV_x ~(\Pi^A \dot{\Phi}_A) - L \,.
\label{gen.Ham}
\end{equation}
Dynamics in the Hamiltonian formalism is determined using the Poisson bracket, which for two
functionals $F(\Phi_A(x), \Pi^A(x))$ and $G(\Phi_A(x), \Pi^A(x))$ of the fields and their momenta
is defined as
\begin{equation}
\left[F ,G\right]_P = \int dV_z \left[\frac{\delta F}{\delta \Phi_A(z)} \frac{\delta G}{\delta \Pi^A(z)}
- \frac{\delta G}{\delta \Phi_A(z)} \frac{\delta F}{\delta \Pi^A(z)} \right]\,. \label{gen.PB}
\end{equation}
The canonical Poisson brackets between the fields and their momenta follow from setting
$F(x) = \Phi_A(\vec{x}, t)$ and $G(y) = \Pi^B(\vec{y}, t)\,,$
\begin{equation}
\left[\Phi_A(\vec{x}, t), \Pi^B(\vec{y}, t) \right]_P = \delta_{A}^{B} \delta(x,y)\,.
\label{gen.can}
\end{equation}
The time evolution of any functional of the fields and momenta is determined
from its Poisson bracket with the Hamiltonian
\begin{equation}
\dot{F}(x) = \left[F(x), H_C \right]_P\,.
\end{equation}
The Hamiltonian, obtained by a Legendre transform from the Lagrangian,
provides a complete description of the dynamics of the system only if
all velocities of the theory uniquely map into momenta by
Eq.~(\ref{gen.mom}). In the case of constrained theories,
such a mapping is not invertible, and relations among the momenta (the constraints) follow. In these theories, the Hamiltonian
must be constructed by determining all the constraints of the theory via the Dirac-Bergmann
algorithm. The usual Poisson brackets of these theories are modified in the presence of
constraints, and as we will argue below, the constraints of the theory are
modified in the presence of horizons.
\section{The Maxwell field}\label{Max}
For the sake of concreteness, we will consider the specific example of electromagnetism
as a constrained theory on spherically symmetric spacetimes with horizon(s). The action is
\begin{equation}
S_{EM} = \int dV_4 \left(-\tfrac{1}{4} F_{a b} F_{c d} g^{a c} g^{b d}\right) \, ,
\end{equation}
where $dV_4 = \lambda \, dt \, dV_x$ is the four-dimensional volume element on the manifold $\Sigma\times\mathbb{R}$, and
$F_{a b} = 2 \partial_{[a}
A_{b]}$. Defining $e_a = - \lambda^{-1} \xi^c F_{c a}$ and $f_{a b} = F_{c d} h^c_a h^d_b\,,$ we
can rewrite this action as
\begin{equation}
S_{EM} = - \int dt \int \limits_{\Sigma} dV_x \frac{\lambda}{4} \left[f_{a b} f^{a b} - 2 e_{a} e^{a} \right] \, .
\label{H.Lag}
\end{equation}
Recalling Eq.~(\ref{gen.dot}), we write
\begin{align}
\dot A_b \equiv \pounds_{\xi} A_b &= \xi^{a}\nabla_a A_b + A_a \nabla_b \xi^a \, \notag \\
&=\xi^a F_{a b} + \nabla_b (A_a \xi^a) \,,
\end{align}
and defining $\phi = A_a \xi^{a}$ and the projection $a_b = h_b^{c} A_c\,,$ we have
\begin{equation}
\dot a_b = -\lambda e_b + \mathcal{D}_b \phi \,.
\label{H.elf}
\end{equation}
Since the velocity $\pounds_{\xi} \phi$ does not appear in the electromagnetic Lagrangian Eq.~(\ref{H.Lag}), the momentum conjugate to $\phi$ vanishes,
\begin{equation}
\frac{\partial L_{E M}}{\partial \dot{\phi}} = \pi^{\phi} = 0 \,,
\label{H.con}
\end{equation}
thus producing the only primary constraint following from the Lagrangian. The momenta
conjugate to $a_b$ are given by
\begin{equation}
\pi^b = \frac{\partial L_{E M}}{\partial \dot{a}_b} = - e^b \, .
\label{H.mom}
\end{equation}
The canonical Hamiltonian is then
\begin{align}
H_C &= \int \limits_{\Sigma} dV_x \, \left(\pi^b \dot{a}_b\right) - L \notag \\
&= \int \limits_{\Sigma} dV_x \, \left[\lambda\left(\frac{1}{2} \pi^b \pi_b + \frac{1}{4} f_{a b} f^{a b}\right) + \pi^b \mathcal{D}_b \phi \right]\,.
\end{align}
The constraint of Eq.~(\ref{H.con}) is now added to this and a new Hamiltonian is defined,
\begin{equation}
H_0 = \int \limits_{\Sigma} dV_x \, \left[\lambda \left(\frac{1}{2} \pi^b \pi_b + \frac{1}{4} f_{a b} f^{a b}\right) + \pi^b \mathcal{D}_b \phi + v_{\phi} \pi^{\phi} \right] \, ,
\label{H.pri}
\end{equation}
where $v_{\phi}$ is an undetermined multiplier. The canonical Poisson brackets of
Eq.~(\ref{gen.can}) are in this case
\begin{align}
\left[\phi(x), \pi^{\phi}(y) \right]_P & = \delta(x,y) \notag\\
\left[ a_a(x) , \pi^b(y) \right]_P & = \delta^{b}_{a}\delta(x,y) \,.
\label{H.can}
\end{align}
\subsection{The Dirac-Bergmann algorithm} \label{MDBA}
We will now apply the Dirac-Bergmann algorithm to determine all the constraints of the theory
and construct the unconstrained Hamiltonian.
For that, we need to check that the constraint is obeyed at all times, or in other words,
$\dot \pi^{\phi} \approx 0 \,.$ The Poisson bracket between $\pi^\phi$ and the Hamiltonian
is calculated with the help of a smearing function $\epsilon$ as follows,
\begin{align}
\int \limits_{\Sigma} dV_y \epsilon(y) \dot\pi^\phi(y) &= \int \limits_{\Sigma} dV_y
\epsilon (y) \left[\pi^{\phi}(y) , H_0 \right]_P \,\notag \\
&= \int \limits_{\Sigma} dV_y
\epsilon (y) \left[\pi^{\phi}(y) , \int \limits_{\Sigma} dV_x \pi^b(x)
\mathcal{D}^x_b \phi(x) \right]_P \notag \\
&= - \oint \limits_{\partial \Sigma} da_y \, \epsilon(y) n^y_b \pi^b(y) +
\int \limits_{\Sigma} dV_y \, \epsilon(y) \left( \mathcal{D}^y_b \pi^b(y)\right)\,.
\label{U.PB1}
\end{align}
Here we have used the canonical Poisson brackets given in Eq.~(\ref{H.can})
and an integration by parts. The smearing function $\epsilon$ is assumed
to be well behaved, but we make no further assumption regarding its properties.
In particular we do not assume that $\epsilon$ vanishes on the
horizon (or horizons, if $\Sigma$ is the region between the horizons in a de Sitter
black hole spacetime).
The surface integral is finite, since using the Schwarz inequality we get
\begin{equation}
\left| n_b \pi^b \right| \leq \, \sqrt{\left|n_b n^b\right| \,
\left|\pi_b \pi^b\right|} \,.
\label{U.SI}
\end{equation}
In this, $n_bn^b = 1$ by definition since $n$ is the `unit normal' to the horizon,
and $\pi_b\pi^b = e_b e^b$ appears in the energy momentum tensor (more precisely
in invariant scalars such as $T^{ab}T_{ab}$), and therefore cannot
diverge at the horizon.
So the integral over $\partial\Sigma$ is finite and in general different from zero.
Thus the boundary integral is finite at the horizon and we have a non-vanishing
contribution from $\partial\Sigma$\,, which was one of the things we wanted to show.
We note that since the smearing function $\epsilon$ is present in the integrand,
specific assumptions about the class of allowed smearing functions may be required to
produce physically sensible results. For the Maxwell case, the assumption
that $\epsilon$ is regular at the horizon (with no dependence on $\lambda$)
is sufficient.
The right-hand side of Eq.~(\ref{U.PB1}), comprising a bulk and a surface term,
must vanish weakly, giving a constraint. Since we are working on a spherically symmetric
background, we can use a radial delta function to convert the surface integral to
a volume integral,
\begin{equation}
\oint \limits_{\partial \Sigma} da_y K(y) = \int \limits_{\Sigma} dV_y \lambda(y) K(y)
\delta(r(y) - r_H) \,,
\label{U.area}
\end{equation}
where $K$ is any well-behaved function, $r_H$ is the radius of the sphere corresponding
to $\partial\Sigma\,,$ $\delta$
is the usual Dirac delta, defined by $\int\, dr\, \delta(r - R) f(r) = f(R)$ for
any well-behaved function $f(r)\,,$ and we have assumed that $\sqrt{h^{rr}} = \lambda$ for the
spherically symmetric metrics that we consider. (This is solely for notational convenience,
and $\sqrt{h^{rr}}$ should replace $\lambda$ in this formula if there is any confusion.) If we have
a de Sitter black hole spacetime, we will need to consider a sum over two spheres,
corresponding to inner and outer horizons.
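As a check (using Schwarzschild, $ds^2 = -\lambda^2 dt^2 + \lambda^{-2} dr^2 + r^2 d\Omega^2\,,$ purely as an illustration), one has $dV_y = \lambda^{-1} r^2 \sin\theta \, dr\, d\theta\, d\varphi$ and $da_y = r_H^2 \sin\theta\, d\theta\, d\varphi\,,$ so that
\begin{equation}
\int \limits_{\Sigma} dV_y\, \lambda(y)\, K(y)\, \delta(r(y) - r_H) = \int r^2 \sin\theta\, K\, \delta(r - r_H)\, dr\, d\theta\, d\varphi = \oint \limits_{\partial \Sigma} da_y\, K(y) \,,
\end{equation}
the factors of $\lambda$ cancelling before any limit $r \to r_H$ is taken.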
Thus we can rewrite the last equality in Eq.~(\ref{U.PB1}) as
\begin{equation}
\int \limits_{\Sigma} dV_y \epsilon(y) \dot\pi^\phi(y)
= \int \limits_{\Sigma} dV_y \, \epsilon(y) \left[ - \lambda(y) n^y_b \pi^b(y)
\delta (r(y) - r_H) + \mathcal{D}^y_b \pi^b(y) \right]\,,
\label{U.PB11}
\end{equation}
which produces the constraint
\begin{equation}
\Omega_2 = - \lambda n_b \pi^b\delta (r - r_H) + \mathcal{D}_b \pi^b \approx 0 \, .
\label{U.con2}
\end{equation}
We now need to check if there are any further constraints resulting from $\dot \Omega_2 \approx 0$. We first add the new constraint, with a multiplier, to the existing Hamiltonian given in Eq.~(\ref{H.pri}), which gives us
\begin{equation}
H_T = H_0 + \int \limits_{\Sigma} dV_x v_1 \left[\mathcal{D}_b \pi^b - \lambda n_b \pi^b\delta (r - r_H) \right] \,,
\label{U.sham}
\end{equation}
and consider the time evolution to be governed by this Hamiltonian,
\begin{align}
\int \limits_{\Sigma} dV_y\, \epsilon(y) \dot \Omega_2(y) &= \int \limits_{\Sigma} dV_y\,
\epsilon(y) \left[ \Omega_2(y) , H_T \right]_P \notag \\
&= - \int \limits_{\Sigma} dV_y \mathcal{D}^y_b\left( \epsilon (y)\right) \int \limits_{\Sigma} dV_x \left[ \pi^b(y) , \mathcal{D}^x_a a_c(x) \right]_P f^{a c}(x) \notag \\
&= \int \limits_{\Sigma} dV_y \mathcal{D}^y_b \left( \epsilon(y)\right) \int \limits_{\Sigma} dV_x \mathcal{D}^x_a \left(\delta(x,y)\right) f^{a b}(x) \notag\\
&= \int \limits_{\Sigma} dV_x \mathcal{D}^x_a \left(\int \limits_{\Sigma} dV_y \mathcal{D}^y_b \left( \epsilon(y)\right)\delta(x,y)\right) f^{a b}(x) \notag\\
& = \int \limits_{\Sigma} dV_y \mathcal{D}^y_a \mathcal{D}^y_b \epsilon(y) f^{a b}(y) \notag\\
& = 0\,.
\label{U.PB2}
\end{align}
The last equality follows from the antisymmetry of $f^{ac}$ in its indices, and we have used the fact that
$\mathcal{D}$ is torsion-free. Since these constraints commute with one another, they are also first class.
Thus the full Hamiltonian is the $H_T$ defined earlier,
\begin{equation}
H_T = \int\limits_{\Sigma} dV_x \left[ \lambda \left( \frac{1}{4} f_{a b} f^{a b} + \frac{1}{2} \pi_a \pi^a\right) + v_1 \left( \mathcal{D}_b \pi^b - \lambda n_b \pi^b \delta (r - r_H) \right) + \pi^b \mathcal{D}_b \phi + v_{\phi} \pi^{\phi} \right]
\label{U.Htot1}
\end{equation}
The multipliers $v_1$ and $v_\phi$ may be determined by examining the equations of motion.
The evolution of $\phi$ is given by
\begin{align}
\int \limits_{\Sigma} dV_y \epsilon(y) \dot \phi(y) &= \int \limits_{\Sigma} dV_y
\epsilon(y)\left[ \phi(y) , H_T \right]_P \notag \\
&= \int \limits_{\Sigma} dV_y \epsilon(y) \int \limits_{\Sigma} dV_x v_{\phi}(x)
\left[ \phi(y) , \pi^{\phi}(x) \right]_P \notag \\
&= \int \limits_{\Sigma} dV_y \epsilon (y) v_{\phi}(y) \, ,
\end{align}
which tells us that we can set $\dot \phi = v_{\phi}$. The evolution of $a_b$ can also
be determined in the same manner,
%
\begin{align}
\int \limits_{\Sigma} & dV_y\, \epsilon(y) \dot{a}_b(y) = \int \limits_{\Sigma} dV_y\, \epsilon(y) \left[ a_b(y) , H_T \right]_P \notag \\
&= \int \limits_{\Sigma} dV_y\, \epsilon(y) \int \limits_{\Sigma} dV_x \left[a_b(y) , \pi^c(x) \right]_P \left(\lambda(x) \pi_c(x) + \mathcal{D}^x_c\phi(x) - \mathcal{D}^x_c v_1(x)\right) \notag \\
&= \int \limits_{\Sigma} dV_y \epsilon(y) \left[ \lambda(y) \pi_b(y) + \mathcal{D}^y_b \phi(y) - \mathcal{D}^y_b v_1(y) \right] \, .
\label{U.vpt}
\end{align}
Comparing this with Eq.~(\ref{H.elf}), we deduce that we can set
$\mathcal{D}_b v_1 = 0$. This condition could be satisfied in many ways,
but for simplicity we set $v_1 = 0$.
Then Eq.~(\ref{U.vpt}) produces
\begin{equation}
\dot a_b = \lambda\pi_b + \mathcal{D}_b \phi \, ,
\end{equation}
and we thus find that the total Hamiltonian takes the form
\begin{equation}
H_T = \int\limits_{\Sigma} dV_x \left[\lambda \left(\frac{1}{4} f_{a b} f^{a b} + \frac{1}{2} \pi_a \pi^a\right) + \pi^b \mathcal{D}_b \phi + \dot{\phi} \pi^{\phi} \right] \,.
\end{equation}
\subsection{Gauge transformations and Gauge fixing}
We have found two constraints, both first class, which depend on the momenta of the theory. Being first class, they generate gauge transformations, i.e. they transform the fields while leaving the Hamiltonian (and the Lagrangian) invariant.
For the constraint $\Omega_1 = \pi^{\phi}\,,$ the only non-vanishing Poisson bracket with
the fields is
\begin{align}
\delta_1 \phi(y) &= \left[ \phi(y) , \int\limits_{\Sigma} dV_x \epsilon_1 (x) \pi^{\phi}(x) \right]_P \notag \\
&= \epsilon_1 (y) \,.
\label{GT1}
\end{align}
For the other first class constraint $\Omega_2$ of the theory, we have the following
non-vanishing Poisson bracket with the fields
\begin{align}
\delta_2 a_b(y) &= \left[ a_b(y) , \int\limits_{\Sigma} dV_x \epsilon_2 (x) \left(\mathcal{D}^x_c \pi^c(x) - \lambda(x) n^x_c \pi^c(x) \delta(r(x) - r_H) \right) \right]_P \notag \\
&= \left[ a_b(y) , \int\limits_{\Sigma} dV_x \pi^c(x) \mathcal{D}^x_c \epsilon_2(x) \right]_P \notag \\
&= \mathcal{D}^y_b \epsilon_2(y)\, .
\label{GT2}
\end{align}
These transformations can be identified with the usual gauge transformations $A_a \to A_a + \nabla_a \psi$ if we write
$\epsilon_2(y) = \psi$ and
$\epsilon_1(y) = \pounds_{\xi} \psi$. This can be seen by
simply projecting the gauge transformation one finds from the Lagrangian
\begin{align}
A_a(x) + \nabla^x_a \psi(x) & = \delta^b_a \left( A_b(x) + \nabla^x_b \psi(x) \right) \notag\\
&=a_a + \mathcal{D}^x_a \psi(x) -\lambda^{-2}(x) \xi_{a} \left(\phi(x) + \pounds_{\xi} \psi(x) \right) \, .
\label{gf.GTL}
\end{align}
We note that the gauge transformations for this background are the same as in the
absence of horizons. The boundary terms which arise in the constraints are such that the
gauge transformations remain unaltered.
To proceed further, we take the approach of converting the gauge constraints into
second class ones by fixing the gauge. Let us choose a `radiation-like' gauge, in
which the gauge-fixing condition or constraint is analogous to the usual radiation
gauge, with an additional boundary contribution motivated by the
surface term in Eq.~(\ref{U.con2}). This choice considers how the horizon could
affect the dynamics of the theory, which was our original motivation for this work.
Thus we now have a total of four constraints given by
\begin{align}
\Omega_1 & = \pi^{\phi} \notag \\
\Omega_2 & = \mathcal{D}_a \pi^a
- \lambda n_a \pi^a \delta (r - r_H) \notag\\
\Omega_3 & = \phi \notag \\
\Omega_4 & = \mathcal{D}^b a_b
- \lambda n^b a_b \delta (r - r_H)\,.
\label{gf.con}
\end{align}
We note that $a_b$ is not a physical observable since it changes under a gauge transformation.
In particular, the near horizon behaviour of the coefficient of the $\delta$-function in the
last term of the gauge-fixing constraint\, $\Omega_4$ cannot be fixed from any physical
consideration. The Poisson brackets of these constraints are easily calculated,
\begin{align}
\left[\Omega_1 (x), \Omega_3 (y) \right]_P & = - \delta(x,y) \, , \notag \\
\left[\Omega_2 (x), \Omega_4 (y) \right]_P & =
\mathcal{D}_{a} \mathcal{D}^{a}\delta(x,y) \, ,
\label{gf.pb}
\end{align}
with all other Poisson brackets vanishing. The first Poisson bracket in
Eq.~(\ref{gf.pb}) follows directly from the canonical relations. The
second Poisson bracket gives
\begin{align}
& \left[ \int dV_x \eta(x) \Omega_2 (x) ,
\int dV_y \epsilon(y) \Omega_4(y) \right]_P \notag\\
& \qquad \qquad = \left[ \int dV_x \left(\mathcal{D}^x_a
\eta(x) \right) \pi^a(x)\,, \int dV_y \left(\mathcal{D}_y^b\epsilon(y) \right) a_b(y) \right]_P \notag \\
& \qquad \qquad = - \int dV_y \left(\mathcal{D}^y_a
\eta(y)\right) \left(\mathcal{D}_y^a\epsilon(y) \right) \notag \\
& \qquad \qquad = -\oint da_y \epsilon(y) n_y^a \left(\mathcal{D}^y_a\eta(y)\right) + \int dV_y \epsilon(y) \mathcal{D}_y^a \mathcal{D}^y_a\eta(y)\,,
\label{gf.pb2}
\end{align}
where we have used integration by parts in deriving the equalities.
The surface integral in the last equality of Eq.~(\ref{gf.pb2}) vanishes,
which can be seen by using Schwarz's inequality
\begin{align}
\left| n^a \mathcal{D}_a \left(\eta\right) \right|^2 & \leq \left|n^a n_a \right| \left|h^{ab} \left(\mathcal{D}_a\eta\right) \left(\mathcal{D}_b\eta\right)\right| \notag\\
& = h^{a b} \left(\mathcal{D}_a\eta\right) \left(\mathcal{D}_b\eta\right) \,.
\label{gf.srf}
\end{align}
The smearing function and its derivatives are regular on the horizon, while
$h^{rr} \sim \lambda^2$ on spherically symmetric backgrounds. Hence the surface integral
in the last equality of Eq.~(\ref{gf.pb2}) vanishes, and only the
volume term contributes.
Thus the Poisson brackets between the constraints are those given in Eq.~(\ref{gf.pb}).
The matrix of the Poisson brackets between these constraints has
a non-vanishing determinant and is invertible. This matrix,
$C_{\alpha\beta}\left(x,y \right) = \left[\Omega_\alpha (x),
\Omega_\beta(y)\right]_P\,,$ is given by
\begin{equation}
C (x,y) =
\begin{pmatrix}
0 & 0 & -\delta(x,y)\ & 0 \\
0 & 0 & 0 & \mathcal{D}_a \mathcal{D}^a\delta(x,y) \\
\delta(x,y) & 0 & 0 & 0 \\
0 & - \mathcal{D}_a \mathcal{D}^a\delta(x,y) & 0 & 0
\end{pmatrix} \, .
\label{gf.dirm}
\end{equation}
The Dirac bracket for any two dynamical entities $F$ and $G$ (which may be
functions or functionals on the phase space) is defined as
\begin{equation}
\left[F\,, \,G \right]_{D} = \left[F\,, \,G \right]_{P} - \int dV_w \int dV_z
\left[F\,, \,\Omega_\alpha(w)\right]_P C^{-1}_{\alpha\beta}\left(w, z\right) \left[\Omega_\beta(z)\,, \,G\right]_P\,.
\label{gf.dir}
\end{equation}
Thus we need to find the inverse of the operator
$\mathcal{D}_a
\mathcal{D}^a$.
Let us formally write the inverse as $\tilde{G}(x, y)\,,$ i.e.
\begin{equation}
\mathcal{D}_a\mathcal{D}^a \tilde{G}\left(x, y\right) = \delta\left(x,y\right) \,.
\label{U.gfs}
\end{equation}
This $\tilde{G}(x, y)\,$ is the Green's function
for the {\em spatial} Laplacian operator $\mathcal{D}_a\mathcal{D}^a $\,, but not of the wave operator,
which is the {\em spacetime} Laplacian. With the help of this, the inverse matrix
$C^{-1}_{\alpha\beta}(x,y)$ can be written as
\begin{equation}
C^{-1}(x,y) =
\begin{pmatrix}
0 & 0 & \delta(x,y) & 0\\
0 & 0 & 0 & -\tilde{G}\left(x,y\right) \\
-\delta(x,y) & 0 & 0 & 0 \\
0 & \tilde{G}\left(x,y \right) & 0 & 0
\end{pmatrix} \, .
\label{gf.cin}
\end{equation}
We can now substitute Eq.~(\ref{gf.cin}) in Eq.~(\ref{gf.dir}) to find the following Dirac brackets for the fields,
\begin{align}
\left[a_a(x),\pi^b(y)\right]_{D} & = \delta(x,y)\delta_a^b - \mathcal{D}_a^x \mathcal{D}^b_x \tilde{G}\left(x,y \right)\,,
\label{gf.cdir}
\end{align}
all other Dirac brackets being zero.
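As a consistency check (a limit not spelled out in the text), in flat spacetime, where $\lambda = 1$ and $\mathcal{D}_a = \partial_a\,,$ the Green's function of Eq.~(\ref{U.gfs}) is $\tilde{G}(x,y) = -1/(4\pi |\vec{x} - \vec{y}|)\,,$ and Eq.~(\ref{gf.cdir}) reduces to the familiar transverse delta function of the radiation gauge,
\begin{equation}
\left[a_a(x), \pi^b(y)\right]_{D} = \delta_a^b\, \delta(x,y) - \partial^x_a \partial_x^b \left( \frac{-1}{4\pi |\vec{x} - \vec{y}|} \right) = \int \frac{d^3k}{(2\pi)^3} \left( \delta_a^b - \frac{k_a k^b}{k^2} \right) e^{i \vec{k} \cdot (\vec{x} - \vec{y})} \,.
\end{equation}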
We could choose to fix the gauge so that the resulting Dirac brackets would involve
the (static) Green's function for the spacetime Laplacian. The corresponding gauge-fixing
constraints are
\begin{align}
\Omega_3 & = \phi \, ,\notag \\
\Omega_4 & = \mathcal{D}^b\left(\lambda a_b\right) \, .
\label{gf.con2}
\end{align}
In this gauge the Dirac brackets are given by
\begin{align}
\left[a_a(x),\pi^b(y)\right]_{D} & = \delta(x,y)\,\delta_a^b -
\mathcal{D}_a^x\left(\lambda(x)\mathcal{D}^b_x G\left(x,y \right)\right)\,,
\label{gf.cdir2}
\end{align}
all other Dirac brackets being zero. Here $G(x, y)$ is the time-independent Green's function
for the spacetime Laplacian,
\begin{equation}
\mathcal{D}_a^x\left(\lambda(x)\mathcal{D}^a_x G\left(x,y \right)\right) = \delta(x,y)\,.
\end{equation}
Although the choice of gauge-fixing functions determines the form of the Dirac brackets, we know
that physical observables and measurable quantities must be independent of that choice.
However, the choice of Green's function is determined by the boundary conditions
we wish to impose on the fields at the horizon. We have mentioned earlier that
the horizon is not a boundary of the spacetime, in particular it is not necessary to
impose boundary conditions which force the respective fields to vanish on the
horizon, or even remain finite on the horizon if we are considering gauge fields.
We expect that the utility of our choice of gauge-fixing in Eq.~(\ref{gf.con}) could lie
in its allowance for more general boundary conditions for gauge fields at the horizon,
and in its ability to describe the dynamics of fields at the horizon. We will leave a
detailed investigation for later work, but discuss one interesting result, as well
as some open questions, in the next section.
We note in passing that explicit forms of both kinds of Green's functions considered in this section are known for the Schwarzschild
black hole as well as for the static de Sitter backgrounds~\cite{Cop:1928, J.Math.Phys.12.1845, Hanni:1973fn, Linet:1976sq, Bunch:1977sq, Chernikov:1968zm, Tagirov:1972vv, Dowker:1977, Fernandes:2016sue}.
\section{Discussion}\label{Disc}
In this work, we have argued that horizons affect the constraints of gauge theories, and
will in turn affect their dynamics in ways that spatial boundaries cannot. In the previous
section, we demonstrated that the Gauss law constraint of the Maxwell field receives a surface
contribution on spherically symmetric backgrounds with horizons. This however will not
typically happen for backgrounds with spatial boundaries. While the behaviour of gauge
fields at spatial boundaries cannot be determined by physical considerations alone, they must
nevertheless be continuous across them. When surface terms occur, they are present on either
side of the spatial boundary and cancel out. When the boundary corresponds to the physical end
of the manifold, regularity of the fields requires that they vanish there. The surface terms of
the previous section exist because the horizon hides the `other' side of the horizon from
observations. The only requirement is that physical fields, more precisely gauge-invariant
(and local Lorentz-invariant) scalars constructed out of the fields, must be finite at the
horizon. Horizons thus lead to a richer set of possibilities for field theories,
and in particular for gauge theories.
We have also considered the gauge fixing of the Maxwell field as a specific example to test
our claim. We can get some insight into the role played by the horizon in the choice of
boundary conditions by comparing the two gauge fixing constraints of Eq.~(\ref{gf.con}) and
Eq.~(\ref{gf.con2}). These gauge choices led to Dirac brackets which involve the
Green's functions for the spatial Laplacian and the spacetime Laplacian, respectively. The
difference in the actions of the two Laplacian operators on a time-independent
scalar field $F$ is seen in the following identity,
\begin{equation}
\nabla_a\nabla^a F - \mathcal{D}_a \mathcal{D}^a F = \left(\lambda^{-1} \mathcal{D}^a \lambda\right) \mathcal{D}_{a} F \, .
\label{App.Lap}
\end{equation}
When $\lambda=\lambda(r)$, as is the case for spherically symmetric backgrounds, the limit of Eq.~(\ref{App.Lap}) as $r \to r_H$ is
\begin{equation}
\left[\nabla_a\nabla^a F - \mathcal{D}_a \mathcal{D}^a F\right]_{r=r_H} = \kappa_H \left(\partial_r F\right)_{r=r_H} \, .
\label{App.diff}
\end{equation}
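The result of Eq.~(\ref{App.diff}) follows from a short computation (which we supply): with $h^{rr} = \lambda^2$ and $\lambda = \lambda(r)\,,$ the right-hand side of Eq.~(\ref{App.Lap}) is
\begin{equation}
\left(\lambda^{-1} \mathcal{D}^a \lambda\right) \mathcal{D}_a F = \lambda^{-1} h^{rr} \left(\partial_r \lambda\right) \left(\partial_r F\right) = \tfrac{1}{2}\, \partial_r\!\left(\lambda^2\right) \partial_r F \,,
\end{equation}
and $\tfrac{1}{2}\, \partial_r(\lambda^2)$ evaluated at $r = r_H$ is the surface gravity $\kappa_H$ of a static spherically symmetric horizon; for Schwarzschild, $\kappa_H = M/r_H^2 = 1/4M\,.$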
Thus the action of the spacetime and spatial Laplacians differ at the horizon by a term which depends on the surface
gravity $\kappa_H$ of the background. Eq.~(\ref{App.diff}) also indicates how boundary conditions affect the behaviour
of the operators on the left hand side. For instance,
the operators disagree on the horizon when either Dirichlet or Robin boundary conditions are assumed.
This suggests that the eigenvalues, and thus the description of the horizon states, may be different for the two gauge choices;
however, that is an investigation we leave for another occasion.
The modified Gauss law constraint in Eq.~(\ref{U.con2}) has further implications for the quantization of this theory, including the horizon states and the charge. By integrating this constraint over a bounded volume whose radius is greater than the event horizon of the black hole (and less than the cosmological horizon, should it exist), we can derive the expression for the charge contained within it. Considering the Reissner-Nordstr\"om solution for simplicity, the electric field given in Eq.~(\ref{H.elf}) reduces to $\lambda \pi_b = - \mathcal{D}_b \phi$. Integrating up to a spatial boundary of radius $r_B$, where $r_B > r_H$, we have
\begin{align}
Q &= \int \limits_{\Sigma_B} \Omega_2 \notag\\
&= - \oint \limits_{\partial \Sigma} \left[r^2\partial_r \phi(r) \right]_{r = r_B} + \oint \limits_{\partial \Sigma} \left[r^2\partial_r \phi(r) \right]_{r = r_H} - \oint \limits_{\partial \Sigma} \left[r^2\partial_r \phi(r) \right]_{r = r_H} \notag\\
&= - \oint \limits_{\partial \Sigma} \left[r^2\partial_r \phi(r) \right]_{r = r_B} \, .
\label{App.charge}
\end{align}
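Interpreting $\oint$ as an integral over solid angle (our reading of the notation above), the Coulomb potential $\phi = Q/(4\pi r)$ makes the surviving boundary term explicit,
\begin{equation}
- \oint d\Omega \left[r^2 \partial_r \phi(r)\right]_{r = r_B} = - 4\pi r_B^2 \left( - \frac{Q}{4\pi r_B^2} \right) = Q \,,
\end{equation}
independently of $r_B\,.$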
In Eq.~(\ref{App.charge}), `$\Sigma_B$' indicates that the volume integral on the hypersurface is evaluated from the horizon up to a sphere of constant radius $r_B$. The usual spherically symmetric solution, $\phi = \frac{Q}{4 \pi r}$, satisfies this equation. A crucial difference occurs if the above integral is performed \emph{exactly} at the black hole horizon, in which case we have
\begin{align}
Q_H &= \int \limits_{\Sigma_H} \left[\mathcal{D}_a \pi^{a} - \lambda n_a \pi^a \delta(r - r_H) \right] \notag\\
&=0\,.
\label{App.nocharge}
\end{align}
Note that Eq.~(\ref{App.nocharge}) is the charge at the horizon(s) and holds regardless of the solution for the charge outside the horizon. Equations (\ref{App.charge}) and (\ref{App.nocharge}) suggest that the horizon may be viewed as a dipole layer, with the charge on one side of the horizon being screened from observation. For an observer outside the horizon, the black hole is a charged body, which follows from the bulk contribution to the constraint. When the observer is at the horizon, the cancellation of ``positive'' and ``negative'' charges leads to the result in Eq.~(\ref{App.nocharge}).
The constraint in Eq.~(\ref{U.con2}) will also necessarily affect the quantization of the Maxwell field on such backgrounds. Quantization of gauge fields is most effectively carried out within the BRST formalism, where the first class constraints of the theory and the inclusion of additional ghost fields lead to the construction of the BRST charge operator. Within the Hamiltonian BRST formalism, this operator leads to the derivation of the gauge fixing and ghost actions at tree level, both of which are BRST invariant \cite{Henneaux:1985kr,Henneaux:1992ig}. On backgrounds with boundaries, the requirement of BRST invariance imposes certain restrictions on the allowed boundary values of the fields involved. Such considerations have been made on curved backgrounds with spatial boundaries
in~\cite{Moss:1989wu,Moss:1990yq,Moss:1996ip,Moss:2013vh}, and more recently in~\cite{Huang:2014pfa,Donnelly:2014fua,Donnelly:2015hxa} in relation to edge state entanglement
entropy calculations. These investigations however did not consider the modification of the Gauss law constraint. That the constraint derived in Eq.~(\ref{U.con2}) does contain a non-vanishing contribution at the horizon should lead to the derivation of ``horizon terms" for the action, which are guaranteed to be BRST invariant. Further, it is clear that when Eq.~(\ref{U.con2}) is imposed as an operator on physical states, it will relate the bulk and horizon states in a non-trivial way. To our knowledge, such considerations have not been made in the literature, and could be particularly relevant in describing the behaviour of horizon states of gauge fields.
To summarize, we have presented a covariant formalism for describing constrained field theories in
the presence of an event horizon of spherically symmetric black hole spacetimes, which are
either asymptotically flat or have an outer cosmological horizon. In the process we also
determined that the presence of horizons leads to non-trivial surface contributions to the
constraints in general, and demonstrated this explicitly in the Maxwell case. We have argued that surface contributions to the constraints will modify both the quantization of these theories and the description of states at the horizon. We leave further investigation of these topics to future work.
\section*{Acknowledgments}
KF thanks Steve Carlip, Daniel Kabat, Aruna Kesavan, George Paily and Robert Wald for stimulating discussions.
Ships Monday, April 29 if you order within 1 hour and 2 minutes!
You can store up to 5 different internet radio stations from all over the world. Music can be streamed from the simple, free TIBO app or via Bluetooth from your mobile device to one or more rooms in CD quality. The TIBO app supports Spotify, TIDAL, Napster, TuneIn, iHeartRadio, etc. You can charge your mobile device over USB, and you can even connect external devices such as turntables, CD players, and game consoles, then bounce the music from these all around your home. Sphere 4 is transportable and rechargeable, complete with a carry handle. This active speaker makes music entertainment available whenever and wherever it is needed.
With the deceptively simple TIBO Sphere 4, users can create a multiroom sound system in seconds. The Sphere 4 has more functionality and power than most products in the marketplace at this price point. TIBO’s quality sound reproduction, combined with a unique style, makes this speaker stand out in any environment.
Controlled by the FREE TIBO app and with TIBO Bounce as standard, Sphere 4 uses Smart Audio to generate great, high-resolution sound which can be streamed via Bluetooth or Wi-Fi. Sphere 4 can be used on its own or in group mode, connected to one or more Smart Audio speakers at the touch of a button. It is a multi-connect, multi-play active speaker complete with five pre-sets for streaming thousands of internet radio stations and playlists.
The Sphere 4’s rechargeable lithium-ion batteries deliver a minimum of eight hours of playback time. The speaker comes complete with a 3.5mm line-in for connecting other devices, Wi-Fi or LAN for connecting to the internet, and USB for charging your devices when you are on the move. A 1 x 4” drive unit and rear-tuned port deliver controlled, tight bass, and 2 x 0.75” dome tweeters give a natural high-frequency response. The Sphere 4’s high-resolution audio is delivered by 50W of RMS power from a Class D amplifier (4 Ohms, 88 dB, 50Hz–20KHz).
\section{Introduction}
The study of supremum of Gaussian processes is a major area of study in probability and functional analysis as epitomized by the celebrated {\it majorizing measures} theorem of Fernique and Talagrand (see \cite{LedouxT}, \cite{Talagrand05} and references therein). There is by now a rich body of work on obtaining tight estimates and characterizations of the supremum of Gaussian processes with several applications in analysis \cite{Talagrand05}, convex geometry \cite{Pisier99} and more. Recently, in a striking result, Ding, Lee and Peres \cite{DingLP11} used the theory to resolve the Winkler-Zuckerman {\it blanket time} conjectures \cite{WinklerZ96}, indicating the usefulness of Gaussian processes even for the study of combinatorial problems over discrete domains.
Ding, Lee and Peres \cite{DingLP11} used the powerful {\it Dynkin isomorphism theory} and majorizing measures theory to establish a structural connection between the cover time (and blanket time) of a graph $G$ and the supremum of a Gaussian process associated with the Gaussian Free Field on $G$. They then use this connection to resolve the Winkler-Zuckerman blanket time conjectures and to obtain the first deterministic polynomial time constant factor approximation algorithm for computing the cover time of graphs. This latter result resolves an old open question of Aldous and Fill (1994).
Besides showing the relevance of the study of Gaussian processes to discrete combinatorial questions, the work of Ding, Lee and Peres gives evidence that studying Gaussian processes could even be an important algorithmic tool, an aspect seldom investigated in the rich literature on Gaussian processes in probability and functional analysis. Here we address the corresponding computational question directly, which, given the importance of Gaussian processes in probability, could be of use elsewhere. In this context, the following question was asked by Lee \cite{leepost} and Ding \cite{Ding11}.
\begin{question}
For every $\epsilon > 0$, is there a deterministic polynomial time algorithm that given a set of vectors $v_1,\ldots,v_m \in \rd$, computes a $(1 + \epsilon)$-factor approximation to $\ex_{X \lfta \N^d}[\sup_i |\inp{v_i,X}|]$\footnote{Throughout, $\N$ denotes the univariate Gaussian distribution with mean $0$ and variance $1$ and for a distribution $\calD$, $X \lfta \calD$ denotes a random variable with distribution $\calD$. By a $\alpha$-factor approximation to a quantity $X$ we mean a number $p$ such that $p \leq X \leq \alpha p$.}.
\end{question}
We remark that Lee \cite{leepost} and Ding \cite{Ding11} actually ask for an approximation to $\ex_{X \lfta \N^d}[\sup_i \inp{v_i, X}]$ (and not the supremum of the absolute value). However, this formulation results in a somewhat artificial asymmetry and for most interesting cases these two are essentially equivalent: if $\ex_{X \lfta \N^d}[\sup_i \inp{v_i, X}] = \omega(\max_i \nmt{v_i})$, then $\ex_{X \lfta \N^d}[\sup_i |\inp{v_i, X}|] = (1+o(1)) \ex_{X \lfta \N^d}[\sup_i \inp{v_i, X}]$\footnote{This follows from standard concentration bounds for the supremum of Gaussian processes; we do not elaborate on it here as we ignore this issue.}. We shall overlook this distinction from now on.
There is a simple randomized algorithm for the problem: sample a few Gaussian vectors and output the median supremum value for the sampled vectors. This, however, requires $O(d\log d/\epsilon^2)$ random bits. Using Talagrand's majorizing measures theorem, Ding, Lee and Peres give a deterministic polynomial time $O(1)$-factor approximation algorithm for the problem. This approach is inherently limited and cannot yield a PTAS, as the majorizing measures characterization is bound to lose a universal constant factor. Here we give a PTAS for the problem, thus resolving the above question.
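For concreteness, the randomized baseline can be sketched as follows. This is only an illustration: the trial count, the use of the sample mean in place of the median, and the example instance are arbitrary choices, not part of the paper.

```python
import numpy as np

def mc_sup_estimate(V, trials=2000, seed=0):
    """Monte Carlo estimate of E[sup_i |<v_i, X>|] for X ~ N(0, I_d).

    V is an (m, d) array whose rows are the vectors v_1, ..., v_m.
    The trial count and seed are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((trials, V.shape[1]))  # i.i.d. Gaussian samples
    sups = np.abs(X @ V.T).max(axis=1)             # sup_i |<v_i, x>| per sample
    return sups.mean()                             # (the text uses the median)

# For V = {e_1, ..., e_d} the quantity is E[max_i |g_i|] over d i.i.d.
# standard Gaussians, which grows like sqrt(2 ln d).
est = mc_sup_estimate(np.eye(16))
```

Repeating and aggregating such trials concentrates around the true value; the point of the paper is to remove the randomness entirely.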
\begin{theorem}\label{th:main}
For every $\epsilon > 0$, there is a deterministic algorithm that given a set of vectors $v_1,\ldots,v_m \in \rd$, computes a $(1 + \epsilon)$-factor approximation to $\ex_{x \lfta \N^d}[\sup_i |\inp{v_i,x}|]$ in time $\poly(d) \cdot m^{\tilde{O}(1/\epsilon^2)}$.
\end{theorem}
Our approach is simpler than that of Ding, Lee and Peres, using some classical {\it comparison inequalities} in convex geometry.
We explain our result on estimating semi-norms with respect to Gaussian measures mentioned in the abstract in \sref{sec:linest}.
We next discuss some applications of our result to computing cover times of graphs as implied by the works of Ding, Lee and Peres \cite{DingLP11} and Ding \cite{Ding11}.
\subsection{Application to Computing Cover Times of Graphs}
The study of random walks on graphs is an important area of research in probability, algorithm design, statistical physics and more. As this is not the main topic of our work, we avoid giving formal definitions and refer the readers to \cite{AldousF}, \cite{Lovasz93} for background information.
Given a graph $G$ on $n$-vertices, the cover time, $\tau_{cov}(G)$, of $G$ is defined as the expected time a random walk on $G$ takes to visit all the vertices in $G$ when starting from the worst possible vertex in $G$. Cover time is a fundamental parameter of graphs and is extensively studied. Algorithmically, there is a simple randomized algorithm for approximating the cover time - simulate a few trials of the random walk on $G$ for $\poly(n)$ steps and output the median cover time. However, without randomness the problem becomes significantly harder. This was one of the motivations of the work of Ding, Lee and Peres \cite{DingLP11} who gave the first deterministic constant factor approximation algorithm for the problem, improving on an earlier work of Kahn, Kim, Lov\'asz and Vu \cite{KahnKLV00} who obtained a deterministic $O((\log \log n)^2)$-factor approximation algorithm. For the simpler case of trees, Feige and Zeitouni \cite{FeigeZ09} gave a FPTAS.
Ding, Lee and Peres also conjectured that the cover time of a graph $G$ (satisfying a certain reasonable technical condition) is asymptotically equivalent to the supremum of an explicitly defined Gaussian process---the Gaussian Free Field on $G$. However, this conjecture, though quite interesting on its own, is not enough to give a PTAS for cover time; one still needs a PTAS for computing the supremum of the relevant Gaussian process. Our main result provides this missing piece, thus removing one of the obstacles in their posited strategy to obtain a PTAS for computing the cover time of graphs. Recently, Ding \cite{Ding11} showed the main conjecture of Ding, Lee and Peres to be true for bounded-degree graphs and trees. Thus, combining his result (see Theorem 1.1 in \cite{Ding11}) with \tref{th:main} we get a PTAS for computing the cover time of bounded-degree graphs with $\tau_{hit}(G) = o(\tau_{cov}(G))$\footnote{The hitting time $\tau_{hit}(G)$ is defined as the maximum over all pairs of vertices $u,v \in G$ of the expected time for a random walk starting at $u$ to reach $v$. See the discussion in \cite{Ding11} for why this is a reasonable condition.}. As mentioned earlier, previously, such algorithms were only known for trees \cite{FeigeZ09}.
\ignore{
\begin{theorem}
For every $\epsilon > 0$ and $\Delta > 0$ there exists a constant $C_{\Delta,\epsilon}$ such that the following holds. For every graph $G$ with maximum degree at most $\Delta$ and $\tau_{hit}(G) < C_{\Delta,\epsilon}(\tau_{cov}(G))$, there exists a deterministic $n^{O_{\Delta,\epsilon}(1)}$-time algorithm to compute a $(1+\epsilon)$-factor approximation to $\tau_{cov}(G)$.
\end{theorem}}
\section{Outline of Algorithm}
The high level idea of our PTAS is as follows. Fix the set of vectors $V = \{v_1,\ldots,v_m\} \subseteq \R^d$ and $\epsilon > 0$. Without loss of generality suppose that $\max_{v \in V} \nmt{v} = 1$. We first reduce the dimension of $V$ by projecting $V$ onto a space of dimension of $O((\log m)/\epsilon^2)$ \'a la the classical Johnson-Lindenstrauss lemma (JLL). We then give an algorithm that runs in time polynomial in the number of vectors but exponential in the underlying dimension. Our analysis relies on two elegant comparison inequalities in convex geometry---Slepian's lemma \cite{Slepian62} for the first step and Kanter's lemma \cite{Kanter77} for the second step. We discuss these modular steps below.
\subsection{Dimension Reduction} We project the set of vectors $V\subseteq \R^d$ to $\R^k$ for $k = O((\log m)/\epsilon^2)$ to preserve all pairwise (Euclidean) distances within a $(1+\epsilon)$-factor as in the Johnson-Lindenstrauss lemma (JLL). We then show that the expected supremum of the {\it projected} Gaussian process is within a $(1 + \epsilon)$ factor of the original value. The intuition is that, the supremum of a Gaussian process, though a global property, can be controlled by pairwise correlations between the variables. To quantify this, we use Slepian's lemma, that helps us relate the supremum of two Gaussian processes by comparing pairwise correlations. Finally, observe that using known derandomizations of JLL, the dimension reduction can be done deterministically in time $\poly(d,m,1/\epsilon)$ \cite{DJLL}.
Thus, to obtain a PTAS it would be enough to have a deterministic algorithm to approximate the supremum of a Gaussian process in time exponential in the dimension $k = O((\log m)/\epsilon^2)$. Unfortunately, a naive argument by discretizing the Gaussian measure in $\R^k$ leads to a run-time of at least $k^{O(k)}$, which gives an $m^{O((\log \log m)/\epsilon^2)}$-time algorithm. This question was recently addressed by Dadush and Vempala \cite{DadushV12}, who needed a similar sub-routine for their work on computing {\it M-Ellipsoids} of convex sets and give a deterministic algorithm with a run-time of $(\log k)^{O(k)}$. Combining their algorithm with the dimension reduction step gives a deterministic $m^{O((\log \log \log m)/\epsilon^2)}$-time algorithm for approximating the supremum. We next get rid of this $\omega(1)$ dependence in the exponent.
\subsection{Oblivious Linear Estimators for Semi-Norms}\label{sec:linest}
We in fact, solve a more general problem by constructing an optimal {\it linear estimator} for semi-norms in Gaussian space.
Let $\phi:\R^k \rgta \R_+$ be a semi-norm, i.e., $\phi$ is homogeneous and satisfies triangle inequality. For normalization purposes, we assume that $1 \leq \ex_{x \lfta \N^k}[\phi(x)]$ and that the Lipschitz constant of $\phi$ is at most $k^{O(1)}$. Note that the supremum function $\phi_V(x) = \sup_{v \in V}|\inp{v,x}|$ satisfies these conditions. Our goal will be to compute a $(1+\epsilon)$-factor approximation to $\ex_{x \lfta \N^k}[\phi(x)]$ in time $2^{O_\epsilon(k)}$.
\ignore{
\begin{theorem}\label{th:epsnet}
For every $\epsilon > 0$, there exists a distribution $\calD$ on $\R^k$ which can be sampled using $O(k\log(1/\epsilon))$ bits in time $\poly(k,1/\epsilon)$ and space $O(\log k + \log(1/\epsilon))$ such that for every semi-norm $\phi:\R^k \rgta \R_+$,
\[(1-\epsilon) \ex_{x \lfta \calD}[\phi(x)] \leq \ex_{x \lfta \N^k}[\phi(x)] \leq (1+\epsilon)\ex_{x \lfta \calD}[\phi(x)].\]
In particular, there exists a deterministic $(1/\epsilon)^{O(k)}$-time algorithm for computing a $(1+\epsilon)$-factor approximation to $\ex_{X \lfta \N^k}[\phi(X)]$ using only oracle access to $\phi$.
\end{theorem}}
\begin{theorem}\label{th:epsnetintro}
For every $\epsilon > 0$, there exists a deterministic algorithm running in time $(1/\epsilon)^{O(k)}$ and space $\poly(k,1/\epsilon)$ that computes a $(1+\epsilon)$-factor approximation to $\ex_{X \lfta \N^k}[\phi(X)]$ using only oracle access to $\phi$.
\end{theorem}
Our algorithm has the additional property of being an {\it oblivious linear estimator}: the set of query points does not depend on $\phi$ and the output is a positive weighted sum of the evaluations of $\phi$ on the query points. Further, the construction is essentially optimal as any such oblivious estimator needs to make at least $(1/\epsilon)^{\Omega(k)}$ queries (see \sref{sec:appendix}). In comparison, the previous best bound of Dadush and Vempala \cite{DadushV12} needed $(\log k)^{O(k)}$ queries. We also remark that the query points of our algorithm are essentially the same as that of Dadush and Vempala, however our analysis is quite different and leads to better parameters.
As in the analysis of the dimension reduction step, our analysis of the oblivious estimator relies on a comparison inequality---Kanter's lemma---that allows us to ``lift'' a simple estimator for the univariate case to the multi-dimensional case.
We first construct a symmetric distribution $\mu$ on $\R$ that has a simple {\it piecewise flat graph} and {\it sandwiches} the one-dimensional Gaussian distribution in the following sense. Let $\nu$ be a ``shrinking'' of $\mu$ defined to be the probability density function (pdf) of $(1-\epsilon)x$ for $x \lfta \mu$. Then, for every symmetric interval $I \subseteq \R$, $\mu(I) \leq \N(I) \leq \nu(I)$.
Kanter's lemma \cite{Kanter77} says that for pdf's $\mu,\nu$ as above that are in addition {\it unimodal}, the above relation carries over to the product distributions $\mu^k, \nu^k$: for every symmetric convex set $K \subseteq \R^k$, $\mu^k(K) \leq \N^k(K) \leq \nu^k(K)$. This last inequality immediately implies that semi-norms cannot {\it distinguish} between $\mu^k$ and $\N^k$: for any semi-norm $\phi$, $\ex_{\mu^k}[\phi(x)] = (1\pm \epsilon)\ex_{\N^k}[\phi(x)]$. We then suitably prune the distribution $\mu^k$ to have small support and prove \tref{th:epsnet}.\\
Our main result, \tref{th:main}, follows by first reducing the dimension as in the previous section and applying \tref{th:epsnet} to the semi-norm $\phi:\R^k \rgta \R_+$, $\phi(x) = \sup_i|\inp{u_i,x}|$ for the projected vectors $\{u_1,\ldots,u_m\}$.
\section{Dimension Reduction}
The use of JLL type random projections for estimating the supremum comes from the following comparison inequality for Gaussian processes. We call a collection of real-valued random variables $\{X_t\}_{t \in T}$ a Gaussian process if every finite linear combination of the variables has a normal distribution with mean zero. For a reference to Slepian's lemma we refer the reader to Corollary 3.14 and the following discussion in \cite{LedouxT}.
\begin{theorem}[Slepian's Lemma \cite{Slepian62}]\label{lm:slepian}
Let $\{X_t\}_{t \in T}$ and $\{Y_t\}_{t \in T}$ be two Gaussian processes such that for every $s,t \in T$, $\ex[(X_s - X_t)^2] \leq \ex[(Y_s - Y_t)^2]$. Then, $\ex[\sup_t X_t] \leq \ex[\sup_t Y_t]$.
\end{theorem}
We also need a derandomized version of the Johnson-Lindenstrauss Lemma.
\begin{theorem}[\cite{DJLL}]\label{th:djll}
For every $\epsilon > 0$, there exists a deterministic $(d m^2 (\log m + 1/\epsilon)^{O(1)})$-time algorithm that given vectors $v_1,\ldots,v_m \in \R^d$ computes a linear mapping $A:\R^d \rgta \R^k$ for $k = O((\log m)/\epsilon^2)$ such that for every $i,j \in [m]$, $\nmt{v_i - v_j} \leq \nmt{A(v_i) - A(v_j)} \leq (1+\epsilon)\nmt{v_i - v_j}$.
\end{theorem}
Combining the above two theorems immediately implies the following.
\begin{lemma}\label{lm:derandjl}
For every $\epsilon > 0$, there exists a deterministic $(d m^2 (\log m + 1/\epsilon)^{O(1)})$-time algorithm that given vectors $v_1,\ldots,v_m \in \R^d$ computes a linear mapping $A:\R^d \rgta \R^k$ for $k = O((\log m)/\epsilon^2)$ such that
\begin{equation}
\label{eq:jllsup}
\ex_{x \lfta \N^d}[\sup_i |\inp{v_i,x}|] \leq \ex_{y \lfta \N^k}[\sup_i |\inp{A(v_i),y}|] \leq (1+\epsilon) \ex_{x \lfta \N^d}[\sup_i |\inp{v_i,x}|].
\end{equation}
\end{lemma}
\begin{proof}
Let $V = \{v_1,\ldots,v_m\} \cup \{-v_1,\ldots,-v_m\}$ and let $\{X_v\}_{v \in V}$ be the Gaussian process where the joint distribution is given by $X_v \equiv \inp{v,x}$ for $x \lfta \N^d$. Then, $\ex_{x \lfta \N^d}[\sup_i |\inp{v_i,x}|] = \ex[\sup_v X_v]$.
Let $A:\R^d \rgta \R^k$ be the linear mapping as given by \tref{th:djll} applied to $V$. Let $\{Y_v\}_{v \in V}$ be the ``projected'' Gaussian process with joint distribution given by $Y_v \equiv \inp{A(v),y}$ for $y \lfta \N^k$. Then, $\ex_{y \lfta \N^k}[\sup_i |\inp{A(v_i),y}|] = \ex[\sup_v Y_v]$.
Finally, observe that for any $u,v \in V$,
\[ \ex[(X_u - X_v)^2] = \nmt{u-v}^2 \leq \nmt{A(u) - A(v)}^2 = \ex[(Y_u - Y_v)^2] \leq (1+\epsilon)^2\ex[(X_u - X_v)^2].\]
Combining the above inequality with Slepian's lemma \lref{lm:slepian} applied to the pairs of processes $\left(\{X_v\}_{v \in V}, \{Y_v\}_{v \in V}\right)$ and $\left(\{Y_v\}_{v \in V}, \{(1+\epsilon)X_v\}_{v \in V}\right)$ it follows that
\[ \ex[\sup_v X_v] \leq \ex[\sup_v Y_v] \leq \ex[\sup_v (1+\epsilon)X_v] = (1+\epsilon)\ex[\sup_v X_v].\]
The lemma now follows.
\end{proof}
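The lemma can be checked numerically. The sketch below uses a random Gaussian projection as a stand-in for the deterministic mapping of \tref{th:djll}; the dimensions, number of vectors, and Monte Carlo parameters are arbitrary illustrative choices.

```python
import numpy as np

def jl_project(V, k, seed=0):
    """Project the rows of V from R^d to R^k by a random Gaussian map.

    This is a randomized stand-in for the deterministic mapping A of
    the lemma; k and the seed are illustrative.
    """
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((k, V.shape[1])) / np.sqrt(k)  # E||Av||^2 = ||v||^2
    return V @ A.T

def mc_sup(V, trials=3000, seed=1):
    """Monte Carlo estimate of E[sup_i |<v_i, X>|], X ~ N(0, I)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((trials, V.shape[1]))
    return np.abs(X @ V.T).max(axis=1).mean()

rng = np.random.default_rng(2)
V = rng.standard_normal((40, 1000))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # 40 unit vectors in R^1000
orig = mc_sup(V)                                # expected sup, original process
proj = mc_sup(jl_project(V, k=300))             # expected sup after projection
```

Pairwise distances are preserved up to small distortion, so by Slepian's lemma the two expected suprema agree up to essentially the same factor.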
\section{Oblivious Estimators for Semi-Norms in Gaussian Space}\label{sec:epsnet}
In the previous section we reduced the problem of computing the supremum of a $d$-dimensional Gaussian process to that of a Gaussian process in $k = O((\log m)/\epsilon^2)$-dimensions. Thus, it suffices to have an algorithm for approximating the supremum of Gaussian processes in time exponential in the dimension. We will give such an algorithm that works more generally for all semi-norms.
Let $\phi:\R^k \rgta \R_+$ be a semi-norm. That is, $\phi$ satisfies the triangle inequality and is homogeneous. For normalization purposes we assume that $1 \leq \ex_{\N^k}[\phi(X)]$ and the Lipschitz constant of $\phi$ is at most $k^{O(1)}$.
\begin{theorem}\label{th:epsnet}
For every $\epsilon > 0$, there exists a set $S \subseteq \R^k$ with $|S| = (1/\epsilon)^{O(k)}$ and a function $p:\R^k \rgta \R_+$ computable in $\poly(k,1/\epsilon)$ time such that the following holds. For every semi-norm $\phi:\R^k \rgta \R_+$,
\[(1 - \epsilon) \left(\sum_{x \in S} p(x) \phi(x)\right) \leq \ex_{X \lfta \N^k}[\phi(X)] \leq (1 + \epsilon) \left(\sum_{x \in S} p(x) \phi(x)\right).\]
Moreover, successive elements of $S$ can be enumerated in $\poly(k,1/\epsilon)$ time and $O(k\log(1/\epsilon))$ space.
\end{theorem}
\tref{th:epsnetintro} follows immediately from the above.
\begin{proof}[Proof of \tref{th:epsnetintro}]
Follows by enumerating over the set $S$ and computing $\sum_{x \in S} p(x) \phi(x)$ by querying $\phi$ on the points in $S$.
\end{proof}
We now prove \tref{th:epsnet}.
Here and henceforth, let $\gamma$ denote the pdf of the standard univariate Gaussian distribution. Fix $\epsilon > 0$ and let $\delta > 0$ be a parameter to be chosen later. Let $\mu \equiv \mu_{\epsilon,\delta}$ be the pdf which is a piecewise-flat approximator to $\gamma$ obtained by spreading the mass $\gamma$ gives to an interval $I = [i\delta, (i+1)\delta)$ evenly over $I$. Formally, $\mu(z) = \mu(-z)$ and for $z > 0$, $z \in [i\delta,(i+1)\delta)$,
\begin{equation}
\label{eq:defmu}
\mu(z) = \frac{\gamma([i\delta,(i+1)\delta))}{\delta}.
\end{equation}
Clearly, $\mu$ defines a symmetric distribution on $\R$. We will show that for $\delta \ll \epsilon$ sufficiently small, semi-norms cannot {\it distinguish} the product distribution $\mu^k$ from $\N^k$:
\begin{lemma}\label{lm:epsnetm}
Let $\delta = (2\epsilon)^{3/2}$. Then, for every semi-norm $\phi:\R^k \rgta \R$,
$$(1-\epsilon) \ex_{X \lfta \mu^k}[\phi(X)] \leq \ex_{Z \lfta \N^k}[\phi(Z)] \leq \ex_{X \lfta \mu^k}[\phi(X)].$$
\end{lemma}
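Before the proof (deferred to a later section), the one-dimensional sandwiching behind the lemma can be checked numerically: for every symmetric interval, $\mu([-t,t]) \leq \gamma([-t,t]) \leq \nu([-t,t])$, where $\nu$ is the law of $(1-\epsilon)x$ for $x \lfta \mu$. A sketch, with an arbitrary choice of $\epsilon$ and test grid:

```python
import math

def gauss_mass(a, b):
    """Gaussian measure of [a, b], via the error function."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return Phi(b) - Phi(a)

def mu_mass_symmetric(t, delta):
    """mu-mass of [-t, t] for the piecewise-flat density mu defined above:
    the cell [i*delta, (i+1)*delta) carries the Gaussian mass of that cell,
    spread evenly."""
    i, frac = divmod(t, delta)
    i = int(i)
    mass = gauss_mass(-i * delta, i * delta)                        # full cells
    mass += 2 * (frac / delta) * gauss_mass(i * delta, (i + 1) * delta)
    return mass

eps = 0.05
delta = (2 * eps) ** 1.5
# nu([-t, t]) equals mu([-t/(1-eps), t/(1-eps)]).
ok = all(
    mu_mass_symmetric(t, delta)
    <= gauss_mass(-t, t) + 1e-12
    <= mu_mass_symmetric(t / (1 - eps), delta) + 1e-12
    for t in (0.1 * j for j in range(1, 60))
)
```

The left inequality reflects that spreading Gaussian mass evenly within cells pushes mass away from the origin; the right one, that the $(1-\epsilon)$-shrinking pushes it back in.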
\newcommand{\hX}{\hat{X}}
We first prove \tref{th:epsnet} assuming the above lemma, whose proof is deferred to the next section.
\begin{proof}[Proof of \tref{th:epsnet}]
Let $\hat{\mu}$ be the symmetric distribution supported on $\delta(\Z + 1/2)$ with pdf defined by
$$\hat{\mu}(\delta(i+1/2)) = \mu([i\delta, (i+1)\delta)),$$ for $i \geq 0$. Further, let $X \lfta \mu^k$, $\hX \lfta \hat{\mu}^k$, $Z \lfta \N^k$.
We claim that $\ex[\phi(\hX)] = (1\pm \epsilon)\ex[\phi(Z)]$. Let $Y$ be uniformly distributed on $[-\delta/2,\delta/2]^k$ and observe that the random variable $X$ is equal in law to $\hX + Y$. Therefore, by the triangle inequality and homogeneity,
\begin{multline}
\ex[\phi(X)] = \ex[\phi(\hX+Y)] = \ex[\phi(\hX)] \pm \ex[\phi(Y)] = \ex[\phi(\hX)] \pm \frac{\delta}{2} \ex[\phi(2Y/\delta)]\\
= \ex[\phi(\hX)] \pm \frac{\delta}{2} \ex_{Z' \in_u [-1,1]^k}[\phi(Z')] = \ex[\phi(\hX)] \pm \frac{\delta}{2} \ex[\phi(Z)] \text{ (\lref{lm:cube})}.
\end{multline}
Thus, by \lref{lm:epsnetm},
\begin{equation}
\label{eq:lm1}
\ex[\phi(\hX)] = (1\pm O(\epsilon)) \ex[\phi(Z)]
\end{equation}
We next prune $\hat{\mu}^k$ to reduce its support. Define $p:\R^k \rgta \R_+$ by $p(x) = \hat{\mu}^k(x)$. Clearly, $p(x)$ being a product distribution is computable in $\poly(k,1/\epsilon)$ time.
Let $S = \left(\delta(\Z + 1/2)\right)^k \cap B_2(3\sqrt{k})$, where $B_2(r) \subseteq \R^k$ denotes the Euclidean ball of radius $r$. As $\phi$ has Lipschitz constant bounded by $k^{O(1)}$, a simple calculation shows that throwing away all points in the support of $\hX$ outside $S$ does not change $\ex[\phi(\hX)]$ much. It is easy to check that for $x \notin S$, $p(x) \leq \exp(-\nmt{x}^2/4)/(2\pi)^{k/2}$. Therefore,
\begin{multline}
\ex[\phi(\hX)] = \sum_{x} p(x) \phi(x) = \sum_{x \in S}p(x) \phi(x) + \sum_{x \notin S}p(x) \phi(x)\\
= \sum_{x \in S} p(x) \phi(x) \pm \sum_{x \notin S} \frac{\exp(-\nmt{x}^2/4)}{(2\pi)^{k/2}}\cdot ( k^{O(1)} \nmt{x}) = \sum_{x \in S}p(x) \phi(x) \pm o(1).
\end{multline}
From \eref{eq:lm1} and the above equation we get (recall that $\ex[\phi(Z)] \geq 1$)
\[ \ex[\phi(Z)] = (1\pm O(\epsilon)) \left(\sum_{x \in S} p(x) \phi(x)\right),\]
which is what we want to show.
We now reason about the complexity of $S$. First, by a simple covering argument $|S| < (1/\delta)^{O(k)}$:
\[ |S| < \frac{Vol\,(B_2(3\sqrt{k}) + [-\delta,\delta]^k)}{Vol\,([-\delta,\delta]^k)} = (1/\delta)^{O(k)} = (1/\epsilon)^{O(k)},\]
where for sets $A, B \subseteq \R^k$, $A+B$ denotes the Minkowski sum and $Vol$ denotes Lebesgue volume. This size bound almost suffices to prove \tref{th:epsnet} except for the complexity of enumerating elements from $S$. Without loss of generality assume that $R = 3\sqrt{k}/\delta$ is an integer. Then, enumerating elements in $S$ is equivalent to enumerating integer points in the $k$-dimensional ball of radius $R$. This can be accomplished by going through the set of lattice points in the natural lexicographic order, and takes $\poly(k,1/\epsilon)$ time and $O(k\log(1/\epsilon))$ space per point in $S$.
\end{proof}
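A direct, brute-force rendering of the estimator of \tref{th:epsnet} is sketched below. It is practical only for very small $k$; the space-efficient lexicographic enumeration is replaced by a plain product loop, and the test semi-norm is an arbitrary choice.

```python
import itertools
import math

def gauss_mass(a, b):
    """Gaussian measure of [a, b], via the error function."""
    Phi = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return Phi(b) - Phi(a)

def oblivious_estimate(phi, k, eps):
    """Deterministic estimate of E[phi(X)], X ~ N(0, I_k), for a semi-norm phi:
    sum of p(x) * phi(x) over the grid (delta(Z + 1/2))^k inside B_2(3 sqrt(k))."""
    delta = (2 * eps) ** 1.5
    R = 3.0 * math.sqrt(k)
    n = int(R / delta) + 1
    pts = [delta * (i + 0.5) for i in range(-n, n)]        # 1-d grid delta(Z + 1/2)
    # each grid point carries the Gaussian mass of its cell of width delta
    w = {p: gauss_mass(abs(p) - delta / 2, abs(p) + delta / 2) for p in pts}
    total = 0.0
    for x in itertools.product(pts, repeat=k):             # brute-force enumeration
        if sum(t * t for t in x) <= R * R:                 # keep points in B_2(R)
            total += math.prod(w[t] for t in x) * phi(x)   # weight p(x) = product mass
    return total

# Sanity check: for the semi-norm phi(x) = |x_1|, E[phi(X)] = sqrt(2/pi) ~ 0.798.
est = oblivious_estimate(lambda x: abs(x[0]), k=2, eps=0.1)
```

The query points depend only on $k$ and $\epsilon$, not on $\phi$, and the output is a positive weighted sum of evaluations, exactly the oblivious linear form of the theorem.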
\section{Proof of \lref{lm:epsnetm}}
Our starting point is the following definition that helps us {\it compare} multivariate distributions when we are only interested in volumes of convex sets. We shall follow the notation of \cite{Ball}.
\begin{definition}
Given two symmetric pdf's, $f,g$ on $\R^k$, we say that $f$ is less peaked than $g$ ($f \preceq g$) if for every symmetric convex set $K \subseteq \R^k$, $f(K) \leq g(K)$.
\end{definition}
We also need the following elementary facts. The first follows from the unimodality of the Gaussian density and the second from partial integration.
\begin{fact}
For any $\delta > 0$ and $\mu$ as defined by \eref{eq:defmu}, $\mu$ is less peaked than $\gamma$.
\end{fact}
\begin{fact}\label{fct:peaked}
Let $f, g$ be distributions on $\R^k$ with $f \preceq g$. Then for any semi-norm $\phi:\R^k \rgta \R$, $\ex_f[\phi(x)] \geq \ex_g[\phi(x)]$.
\end{fact}
\begin{proof}
Observe that for any $t > 0$, the set $\{x: \phi(x) \leq t\}$ is symmetric and convex, so $f \preceq g$ implies $\pr[\phi(X) > t] \geq \pr[\phi(Y) > t]$ for random variables $X \lfta f$, $Y \lfta g$. Then, by partial integration, $\ex[\phi(X)] = \int_0^\infty \pr[ \phi(X) > t]\, dt \geq \int_0^\infty \pr[\phi(Y)> t]\, dt = \ex[\phi(Y)]$.
\end{proof}
The above statements give us a way to compare the expectations of semi-norms under $\mu$ and under $\gamma$ in one dimension. We would now like to do a similar comparison for the product distributions $\mu^k$ and $\gamma^k$. For this we use Kanter's lemma \cite{Kanter77}, which says that the relation $\preceq$ is preserved under tensoring if the individual distributions have the additional property of being {\it unimodal}.
\begin{definition}
A distribution $f$ on $\R^n$ is unimodal if $f$ can be written as an increasing limit of a sequence of distributions each of which is a finite positively weighted sum of uniform distributions on symmetric convex sets.
\end{definition}
\begin{theorem}[Kanter's Lemma \cite{Kanter77}; cf.~\cite{Ball}]\label{th:kanter}
Let $\mu_1,\mu_2$ be symmetric distributions on $\R^n$ with $\mu_1 \preceq \mu_2$ and let $\nu$ be a unimodal distribution on $\R^m$. Then, the product distributions $\mu_1 \times \nu$, $\mu_2 \times \nu$ on $\R^n \times \R^m$ satisfy $\mu_1 \times \nu \preceq \mu_2 \times \nu$.
\end{theorem}
We next show that $\mu$ ``sandwiches'' $\gamma$ in the following sense.
\begin{lemma}
Let $\nu$ be the pdf of the random variable $y = (1-\epsilon)x$ for $x \lfta \mu$. Then, for $\delta \leq (2\epsilon)^{3/2}$, $\mu \preceq \gamma \preceq \nu$.
\end{lemma}
\begin{proof}
As mentioned above, $\mu \preceq \gamma$. We next show that $\gamma \preceq \nu$. Intuitively, $\nu$ is obtained by spreading the mass that $\gamma$ puts on an interval $I = [i\delta, (i+1)\delta)$ evenly on the \emph{smaller} interval $(1-\epsilon)I$. The net effect of this operation is to push the pdf of $\mu$ closer towards the origin and for $\delta$ sufficiently small the inward push from this ``shrinking'' wins over the outward push from going to $\mu$.
Fix an interval $I = [-i \delta(1-\epsilon) - \theta, i\delta(1-\epsilon) + \theta]$ for $0 \leq \theta < \delta(1-\epsilon)$. Then,
\begin{align}\label{eq:cases}
\nu(I) &= \nu\left(\,[-i\delta(1-\epsilon), i\delta(1-\epsilon)]\,\right) + 2\, \nu\left(\,[i\delta(1-\epsilon), i\delta(1-\epsilon) + \theta]\,\right)\\
&= \gamma\left(\,[-i\delta,i\delta]\,\right) + \frac{2\, \theta \cdot \gamma(\,[i\delta,(i+1)\delta)\,)}{\delta(1-\epsilon)}.
\end{align}
We now consider two cases.
Case 1: $i \geq (1-\epsilon)/\epsilon$ so that $i\delta(1-\epsilon) + \theta \leq i\delta$. Then, from the above equation,
\[ \nu(I) \geq \gamma\left(\,[-i\delta, i \delta]\,\right) \geq \gamma\left(\,[-i\delta(1-\epsilon)-\theta, i\delta(1-\epsilon)+\theta]\,\right) = \gamma(I).\]
Case 2: $i < (1-\epsilon)/\epsilon$. Let $\alpha = (i+1)\delta \leq \delta/\epsilon$. Then, as $1 - x^2/2 \leq e^{-x^2/2} \leq 1$,
\[ \gamma((i\delta, i\delta + \theta]) \leq \theta \cdot \gamma(0),\;\;\;\; \gamma(\,[i\delta,(i+1)\delta)\,) \geq \delta \cdot \gamma(0) \cdot (1-\alpha^2/2).\]
Therefore,
\begin{align*}
\nu(I) &= \gamma(I) - 2 \gamma\left(\,(i\delta, i\delta(1-\epsilon) + \theta]\,\right) + \frac{2 \theta \cdot \gamma(\,[i\delta,(i+1)\delta)\,)}{\delta(1-\epsilon)}\\
&\geq \gamma(I) - 2 \gamma\left(\,(i\delta, i\delta + \theta]\,\right) + \frac{2 \theta \cdot \gamma(\,[i\delta,(i+1)\delta)\,)}{\delta(1-\epsilon)}\\
&\geq \gamma(I) - 2 \theta \gamma(0) + \frac{2 \theta \cdot \delta \cdot \gamma(0) \cdot (1-\alpha^2/2)}{\delta(1-\epsilon)}\\
&= \gamma(I) + \frac{2\theta \gamma(0)}{1-\epsilon} \cdot (\epsilon - \alpha^2/2) \geq \gamma(I),
\end{align*}
for $\alpha^2 \leq 2 \epsilon$, i.e., if $\delta \leq (2\epsilon)^{3/2}$.
\ignore{
Let $\delta \ll \epsilon$ be sufficiently small so that the pdf of $\gamma$ is nearly constant in the interval $[0,(i+1)\delta]$, that is $\gamma([\alpha,\beta]) = (\beta-\alpha) \gamma(0) \pm O((\beta-\alpha)^2)$ for $0 < \alpha < \beta < (i+1) \delta$. Then, by \eref{eq:cases}, as $\theta < \delta(1-\epsilon)$,
\begin{align*}
\nu(I) &\geq \gamma(I) - 2 \gamma\left(\,(i\delta, i\delta(1-\epsilon) + \theta]\,\right) + \frac{2 \theta \cdot \gamma(\,[i\delta,(i+1)\delta)\,)}{\delta(1-\epsilon)}\\
&\geq \gamma(I) - 2 \gamma\left(\,(i\delta, i\delta + \theta]\,\right) + \frac{2 \theta \cdot \gamma(\,[i\delta,(i+1)\delta)\,)}{\delta(1-\epsilon)}\\
&\geq \gamma(I) - 2 \theta \gamma(0) - O(\theta^2) + \frac{2 \theta (\delta \gamma(0) - O(\delta^2))}{\delta(1-\epsilon)}\\
&= \gamma(I) + \frac{2\theta \gamma(0)\epsilon}{1-\epsilon} - O(\theta \,\delta) \geq \gamma(I)
\end{align*}}
\end{proof}
\lref{lm:epsnetm} follows easily from the above two claims.
\begin{proof}[Proof of \lref{lm:epsnetm}]
Clearly, $\mu,\nu,\gamma$ are unimodal and a product of unimodal distributions is unimodal. Thus, from the above lemma and iteratively applying Kanter's lemma we get $\mu^k \preceq \gamma^k \preceq \nu^k$. Therefore, by Fact \ref{fct:peaked}, for any semi-norm $\phi$,
\[ \ex_{\mu^k}[\phi(X)] \geq \ex_{\gamma^k}[\phi(Y)] \geq \ex_{\nu^k}[\phi(X)] = \ex_{\mu^k}[\phi((1-\epsilon)X)] = (1-\epsilon)\ex_{\mu^k}[\phi(X)].\]
\end{proof}
We now prove the auxiliary lemma we used in proof of \tref{th:epsnet}.
\begin{lemma}\label{lm:cube}
Let $\rho$ be the uniform distribution on $[-1,1]$. Then, $\gamma \preceq \rho$ and for any semi-norm $\phi:\R^k \rgta \R$, $\ex_{\rho^k}[\phi(x)] \leq \ex_{\gamma^k}[\phi(x)]$.
\end{lemma}
\begin{proof}
It is easy to check that $\gamma \preceq \rho$. Then, by Kanter's lemma $\gamma^k \preceq \rho^k$ and the inequality follows from Fact \ref{fct:peaked}.
\end{proof}
\section{A PTAS for Supremum of Gaussian Processes}
Our main theorem, \tref{th:main}, follows immediately from \lref{lm:derandjl} and \tref{th:epsnetintro} applied to the semi-norm $\phi:\R^k \rgta \R$ defined by $\phi(x) = \sup_{i \leq m} |\inp{A(v_i),x}|$.
\section{Lower Bound for Oblivious Estimators}\label{sec:appendix}
We now show that \tref{th:epsnet} is optimal: any oblivious linear estimator for semi-norms as in the theorem must make at least $(C/\epsilon)^{k}$ queries for some constant $C > 0$.
Let $S \subseteq \R^k$ be the set of query points of an oblivious estimator. That is, there exists a function $f:\R_+^S \rgta \R_+$ such that for any semi-norm $\phi:\R^k \rgta \R_+$, $f((\phi(x): x \in S)) = (1\pm \epsilon) \ex_{Y \lfta \N^k}[\phi(Y)]$. We will assume that $f$ is monotone in the following sense: $f(x_1,\ldots,x_{|S|}) \leq f(y_1,\ldots,y_{|S|})$ if $0 \leq x_i \leq y_i$ for all $i$. This is clearly true for any linear estimator (and also for the median estimator). Without loss of generality suppose that $\epsilon < 1/4$.
\newcommand{\spk}{\mathcal{S}^{k-1}}
The idea is to define a suitable semi-norm based on $S$: define $\phi:\R^k \rgta \R$ by $\phi(x) = \sup_{u \in S}|\inp{u/\nmt{u},x}|$. It is easy to check that for any $v \in S$, $\nmt{v} \leq \phi(v)$. Therefore, the output of the oblivious estimator when querying the Euclidean norm is at most the output of the estimator when querying $\phi$. In particular,
\begin{equation}
\label{eq:app1}
(1-\epsilon) \ex_{Y \lfta \N^k}[\nmt{Y}] \leq f((\nmt{x}: x \in S)) \leq f((\phi(x): x \in S)) \leq (1+\epsilon) \ex_{Y \lfta \N^k}[\phi(Y)].
\end{equation}
We will argue that the above is possible only if $|S| > (C/\epsilon)^k$. Let $\spk$ denote the unit sphere in $\R^k$. For the remaining argument, we shall view $Y \lfta \N^k$ to be drawn as $Y = R X$, where $X \in \spk$ is uniformly random on the sphere and $R \in \R$ is independent of $X$ and has a chi distribution with $k$ degrees of freedom (so that $R^2$ is chi-squared distributed). Let $S(\epsilon) = \cup_{u \in S} \{y \in \spk: |\inp{u/\nmt{u},y}| \geq 1 - 4\epsilon\}$.
Now, by a standard volume argument, for any $y \in \spk$, $\pr_X[|\inp{X,y}| \geq 1 - 4\epsilon] < (O(\epsilon))^k$. Thus, by a union bound, $p = \pr_X[X \in S(\epsilon)] < |S| \cdot (O(\epsilon))^k$. Further, for any $y \in \spk\setminus S(\epsilon)$, $\phi(y) < 1-4\epsilon$. Therefore,
\begin{multline*}
\ex_{X}[\phi(X)] = \pr[X \notin S(\epsilon)] \cdot \ex[\phi(X) | X \notin S(\epsilon)] + \pr[X \in S(\epsilon)] \cdot \ex[\phi(X) | X \in S(\epsilon)]\leq \\(1-p) (1-4\epsilon) + p.
\end{multline*}
Thus,
\begin{equation}
\label{eq:app2}
\ex[\phi(Y)] = \ex[\phi(R X)] = \ex[R] \cdot \ex[\phi(X)] \leq \ex[\nmt{Y}] \cdot ((1-p)(1-4\epsilon) + p).
\end{equation}
Combining Equations \ref{eq:app1} and \ref{eq:app2}, we get
\[ 1 - \epsilon \leq (1+\epsilon)\cdot ((1-p)(1-4\epsilon) + p) < 1-3\epsilon + 2p.\]
As $p < |S| \cdot (O(\epsilon))^k$, the above leads to a contradiction unless $|S| > (C/\epsilon)^{k}$ for some constant $C > 0$.
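As a purely illustrative aside (ours, not part of the original argument), the elementary estimate $(1+\epsilon)\cdot((1-p)(1-4\epsilon)+p) < 1-3\epsilon+2p$ used in the last display can be sanity-checked numerically over a grid of $\epsilon < 1/4$ and $p \in [0,1]$:

```python
import itertools

def lhs(eps, p):
    """The middle expression (1 + eps) * ((1 - p)(1 - 4*eps) + p)."""
    return (1 + eps) * ((1 - p) * (1 - 4 * eps) + p)

def rhs(eps, p):
    """The claimed strict upper bound 1 - 3*eps + 2*p."""
    return 1 - 3 * eps + 2 * p

# Grid check over eps in (0, 1/4) and p in [0, 1]; algebraically the gap is
# 4*eps^2 + p*(2 - 4*eps*(1 + eps)) > 0 whenever eps < 1/4.
for eps, p in itertools.product([0.01, 0.05, 0.1, 0.2, 0.24],
                                [0.0, 0.1, 0.5, 0.9, 1.0]):
    assert lhs(eps, p) < rhs(eps, p)
print("ok")
```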
\bibliographystyle{amsalpha}
General Answers on Diesel Generators
Diesel Engine FAQ
Cummins Diesel Engine Range
Explained: The Cummins QSK60 Series
Explained: The Cummins KTA50 Series
Explained: The Cummins QST30 Series
Explained: The Cummins VTA28 Series
Explained: The Cummins QSX15 Series
Explained: The Cummins NT855 Series
All About the Cummins QST30-G3 Engine
What is the Cummins QST30-G3 diesel engine?
The Cummins QST30 range is engineered to deliver an optimum power-to-weight ratio: the engine is compact, yet delivers more torque and power than anything else its size.
The QST30-G3 is a 12 cylinder, turbocharged and after cooled engine which is currently made in the USA.
The engine comes in both 50 Hz and 60 Hz configurations and provides outstanding performance levels.
What is the power output of the Cummins QST30-G3 diesel engine?
The engine has a total displacement of 30.5L, requires 40.7L of oil and holds 114L of coolant. It produces 1080 horsepower (806 kWm) at 1500 RPM and 1220 horsepower (910 kWm) at 1800 RPM.
Need a diesel generator? Our UK made diesel generators include this Cummins diesel engine.
What power rating does the Cummins QST30-G3 diesel engine run at?
The QST30-G3 is available for use in diesel generators at Prime (PRP), Standby (ESP) and Continuous (COP) ratings, making it a versatile engine.
What is the fuel consumption of the Cummins QST30-G3 engine?
The Cummins QST30-G3's fuel consumption depends on its load. At 1500 RPM prime power it has the following fuel consumption (as a percentage of prime power):
At 25% it uses 26.6 litres/hr
At 75% it uses 67 litres/hr
At 100% it uses 85.7 litres/hr
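As a rough illustration (the interpolation is our simplifying assumption, not a Cummins figure — real consumption curves are not exactly linear), fuel use between the published load points can be estimated:

```python
# Published QST30-G3 prime-power fuel figures at 1500 RPM (litres/hour).
POINTS = [(0.25, 26.6), (0.75, 67.0), (1.00, 85.7)]

def fuel_lph(load):
    """Estimate fuel consumption (L/h) at a load fraction in [0.25, 1.0],
    interpolating linearly between the published points (our assumption)."""
    for (l0, f0), (l1, f1) in zip(POINTS, POINTS[1:]):
        if l0 <= load <= l1:
            t = (load - l0) / (l1 - l0)
            return f0 + t * (f1 - f0)
    raise ValueError("load outside published range")

# e.g. a 50% load sits halfway between the 25% and 75% figures:
print(round(fuel_lph(0.50), 2))  # 46.8
```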
What are the dimensions of the Cummins QST30-G3 diesel engine?
This Cummins diesel engine measures 2621mm in length, 1448mm in width and 2021mm in height, and has a dry weight of 3437 kg. The bore and stroke are 140.0mm and 165.1mm respectively.
This engine can be paired with an alternator to make a generator, which could be used for power generation across a wide range of industries; if more power is required, generators can be run in sync to increase the total output.
What customer support is offered with the Cummins QST30-G3 diesel engine?
Cummins products come with superior technical support and after-sales service to make sure your engine is always running smoothly. The Cummins team provide one of the best warranties in the industry and will always act quickly to help.
QST30G3 spec sheet.pdf
All About the Cummins KTA50-G3 Engine
All About the Cummins QSK23-G3 Engine
\section{Introduction}
\subsection{\!\!\!}
\label{sub:first intro}
An \emph{almost self-conjugate (ASC) partition} is a weakly decreasing tuple of positive integers whose Young diagram has a special shape: for each box on the main diagonal (marked with a dot in the figure below), its \emph{arm} (the part of its row strictly to its right) is longer than its \emph{leg} (the part of its column strictly below) by exactly one box. For example, $(7,5,4,2,1,1)$ is an ASC partition, which we see from its Young diagram below (where the length of each arm is written to its right, and the length of each leg is written beneath it):
\[
\ytableausetup{smalltableaux}
\begin{ytableau}
\bullet &{}&{}&{}&{}&{}&{}&\none[\scriptstyle \rightarrow]&\none[\scriptstyle 6]\\
{}&\bullet&{}&{}&{}&\none[\scriptstyle\rightarrow]&\none[\scriptstyle 3]\\
{}&{}&\bullet&{}&\none[\scriptstyle\rightarrow]&\none[\scriptstyle 1]\\
{}&{}&\none[\scriptstyle\downarrow]\\
{}&\none[\scriptstyle\downarrow]&\none[\scriptstyle 0]\\
{}&\none[\scriptstyle 2]\\
\none[\scriptstyle\downarrow]\\
\none[\scriptstyle 5]
\end{ytableau}
\]
We follow~\cite{Dong} in adopting the ``ASC'' terminology; these partitions also play a role in a recent paper~\cite{Linusson} under the name of ``shift-symmetric'' partitions. Partitions with the conjugate shape (i.e., where each leg is one box longer than its corresponding arm) are also known in the literature as ``threshold partitions,'' since they are precisely the partitions that can be realized as the degree sequence of a threshold graph; see~\cite{Hammer}*{Lemma 10}. Our own interest in ASC partitions arose from their appearance in symmetric-function identities due to Dudley Littlewood, and in related BGG complexes. In a sense, these complexes are the ``natural habitat'' for ASC partitions and their conjugates. In this paper, we classify the BGG complexes acting as the natural habitat for a generalization of the ASC partitions and their conjugates, namely, partitions for which the arm--leg difference is an arbitrary constant $m$.
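As a purely illustrative aside (ours, not part of the original text), the ASC condition is easy to test directly from the row lengths of a partition; the example $(7,5,4,2,1,1)$ above has arms $(6,3,1)$ and legs $(5,2,0)$:

```python
def arm_leg(pi):
    """Return (arms, legs) of the Frobenius symbol of a partition pi,
    given as a weakly decreasing list of positive row lengths."""
    r = sum(1 for i, row in enumerate(pi) if row > i)  # rank: diagonal length
    arms = [pi[i] - i - 1 for i in range(r)]           # boxes right of (i, i)
    legs = [sum(1 for row in pi if row > i) - i - 1 for i in range(r)]  # below
    return arms, legs

def is_asc(pi):
    """True iff every diagonal box has arm = leg + 1."""
    arms, legs = arm_leg(pi)
    return all(a == b + 1 for a, b in zip(arms, legs))

print(is_asc([7, 5, 4, 2, 1, 1]))  # True: the example from the text
print(is_asc([3, 2]))              # False
```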
We begin by recalling three classical identities which will be a recurring theme of this paper, each identity involving the Schur polynomials $s_\pi$. First we have the dual Cauchy identity~\cite{Stanley}*{Thm.~7.14.3}:
\begin{equation}
\tag{I}
\label{Dual-Cauchy}
\prod_{i,j} (1+x_i y_j) = \sum_{\mathclap{\pi \in \Par(p \times q)}}s_\pi(x_1,\ldots,x_p) s_{\pi'}(y_1,\ldots,y_q),
\end{equation}
where $\Par(p\times q)$ is the set of partitions whose Young diagram fits inside a $p \times q$ rectangle. Next we have two of the Littlewood identities ~\cite{Littlewood}*{Section 11.9}:
\begin{align}
\prod_{ i\leq j} (1-x_i x_j) &= \sum_\pi (-1)^{|\pi|/2} s_\pi(x_1,\ldots,x_n), \tag{II}\label{Littlewood-C}\\
\prod_{i<j} (1- x_i x_j) &= \sum_{\pi} (-1)^{|\pi|/2} s_{\pi'}(x_1,\ldots,x_{n+1}), \tag{III}\label{Littlewood-D}
\end{align}
where each sum ranges over the ASC partitions with at most $n$ parts. With these three identities in hand, we outline the results and methods of this paper below.
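To make identity \eqref{Littlewood-C} concrete, here is a self-contained computational check (ours, purely illustrative) of the $n=2$ case, where the ASC partitions with at most $2$ parts are $0$, $(2)$, $(3,1)$, and $(3,3)$:

```python
from collections import defaultdict

def pmul(p, q):
    """Multiply two polynomials in x, y stored as {(deg_x, deg_y): coeff}."""
    out = defaultdict(int)
    for (a, b), c in p.items():
        for (d, e), f in q.items():
            out[(a + d, b + e)] += c * f
    return {k: v for k, v in out.items() if v}

def schur2(a, b):
    """Schur polynomial s_(a,b)(x, y) = (xy)^b * h_(a-b)(x, y)."""
    return {(b + k, a - k): 1 for k in range(a - b + 1)}

# Left side of identity (II) for n = 2: (1 - x^2)(1 - xy)(1 - y^2).
lhs = {(0, 0): 1}
for factor in [{(0, 0): 1, (2, 0): -1},
               {(0, 0): 1, (1, 1): -1},
               {(0, 0): 1, (0, 2): -1}]:
    lhs = pmul(lhs, factor)

# Right side: sum over the ASC partitions 0, (2), (3,1), (3,3),
# each with sign (-1)^{|pi|/2}.
rhs = defaultdict(int)
for (a, b), sign in [((0, 0), 1), ((2, 0), -1), ((3, 1), 1), ((3, 3), -1)]:
    for k, v in schur2(a, b).items():
        rhs[k] += sign * v
rhs = {k: v for k, v in rhs.items() if v}

print(lhs == rhs)  # True
```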
\subsection{Dimension identities} We begin by proving two new identities (Theorems~\ref{theorem:ID-dim-GLn} and \ref{thm:dim GLn pairs}) relating the dimensions of certain modules for $\mathfrak{gl}_n$ and $\mathfrak{gl}_{n+m}$, when the highest weights are partitions whose arm--leg difference is $m$. In the special case $m=1$, the highest weights are precisely the ASC partitions appearing in the Littlewood identities~\eqref{Littlewood-C} and \eqref{Littlewood-D}. (See Figure~\ref{fig:example intro}, which illustrates an example of the dimension identity in Theorem~\ref{thm:dim GLn pairs}.) These new dimension identities are interesting in their own right from a combinatorial viewpoint, but they play a larger role later in the paper, in the proof of our main result (Theorem~\ref{thm:Cong and Conj}). This work arose from trying to understand when the ratio appearing in certain dimension identities in~\cite{EW} is equal to 1.
\subsection{Generalized BGG resolutions}
The identities~\eqref{Dual-Cauchy}--\eqref{Littlewood-D} can be viewed as Euler characteristics of the Bernstein--Gelfand--Gelfand (BGG) complex for the trivial representation of each classical group. The following example is an informal preview.
\begin{figure}[ht]
\centering
\input{Example_Introduction.tex}
\caption{Two isomorphic posets from Example~\ref{ex:D4 and C3}. Note that the Young diagrams on the right-hand side are precisely the ASC partitions with at most $3$ parts; meanwhile, each Young diagram on the left is the conjugate of its corresponding Young diagram on the right. On the left-hand side, each Young diagram represents the (dual of the) $\mathfrak{gl}_4$-module with corresponding highest weight; on the right-hand side, each Young diagram represents a $\mathfrak{gl}_3$-module. Each diagram is labeled with the dimension of the corresponding $\mathfrak{gl}_4$- or $\mathfrak{gl}_3$-module. By viewing the Young diagrams as parabolic Verma modules and the arrows as the canonical maps between them, one can interpret the left (resp., right) poset as the BGG complex of the trivial representation of $\mathfrak{so}_8$ (resp., $\sp_6$).}
\label{fig:example intro}
\end{figure}
\begin{ex}
\label{ex:D4 and C3}
Throughout this example, we refer to Figure~\ref{fig:example intro}, which shows two posets. The poset elements are highest weights for $\mathfrak{gl}_4$ and $\mathfrak{gl}_3$, respectively. On the left side we consider the Hermitian symmetric pair $(\mathfrak{g},\k) = (\mathsf{D}_4,\mathsf{A}_3) = (\mathfrak{so}_8, \mathfrak{gl}_4)$; see Section~\ref{sub:Hermitian pairs} for the general theory of Hermitian symmetric pairs $(\mathfrak{g},\k)$ and parabolic subalgebras $\q$ of Hermitian type. The poset shown on the left-hand side represents the BGG--Lepowsky complex of the trivial representation of $\mathfrak{so}_8$; that is to say, the complex is a resolution in terms of parabolic Verma modules $N_{\pi^*} \coloneqq U(\mathfrak{g}) \otimes_{U(\q)} F_{\pi^*}$, where $F_{\pi^*}$ is the (dual of the) simple $\mathfrak{gl}_4$-module whose highest weight is the partition $\pi$. In the figure, we depict each parabolic Verma module $N_{\pi^*}$ as the Young diagram of the partition $\pi$ (decorated with the symbol $*$). The empty diagram $\bullet$ on top therefore represents the first term $N_0$ in the resolution, while the diagram just below it represents the second term $N_{(1,1)^*}$, and so forth. Several Young diagrams at the same level in the resolution should be understood as the direct sum of the corresponding parabolic Verma modules. Each arrow is the canonical map between parabolic Verma modules. We also label
the Young diagram of each $\pi$ with the dimension of the corresponding $\mathfrak{gl}_4$-module $F_{\pi^*}$. Similarly, on the right-hand side of Figure~\ref{fig:example intro}, we consider the Hermitian symmetric pair $(\mathsf{C}_3,\mathsf{A}_2) = (\sp_6,\mathfrak{gl}_3)$. Just as on the left, the poset represents the BGG complex of the trivial representation of $\sp_6$, where this time the Young diagram of $\pi$ stands for the simple $\mathfrak{gl}_3$-module $F_{\pi^*}$.
We first observe that the poset for $(\mathsf{D}_4,\mathsf{A}_3)$ is clearly isomorphic to that for $(\mathsf{C}_3, \mathsf{A}_2)$. Second, we see that each partition appearing in the left-hand poset is the conjugate of its corresponding partition in the right-hand poset (i.e., the Young diagrams are transposes of each other). More specifically, the partitions $\pi$ appearing on the right-hand side are precisely the ASC partitions with at most $3$ parts, which are the partitions $\pi$ occurring in the Littlewood identity~\eqref{Littlewood-C}. Likewise, their conjugate partitions $\pi'$ appearing on the left side are the partitions appearing in~\eqref{Littlewood-D}. Even without more details, it is not hard to believe that the alternating sum in~\eqref{Littlewood-C} and \eqref{Littlewood-D} turns out to be the Euler characteristic of the BGG complex. (See \cites{Sam-Weyman-2013,Sam-Weyman-2015} for a different approach to this fact.) We make another conspicuous observation: not only do the corresponding $\mathfrak{gl}_4$- and $\mathfrak{gl}_3$-modules have highest weights that are conjugate to each other, but they also have the same dimension. This equality of dimensions is the aforementioned special case of Theorem~\ref{thm:dim GLn pairs}.
The informal observations above --- namely, the isomorphism of posets which preserves BGG complexes and the dimension of the $\k$-modules --- describe a phenomenon that we will make rigorous in Section~\ref{sec:Congruence}, by means of the notion of \emph{congruence of blocks}. Using this language, the upshot of the present example is that the principal blocks for $(\mathsf{D}_{n+1},\mathsf{A}_n)$ and $(\mathsf{C}_n,\mathsf{A}_{n-1})$ are congruent.
\end{ex}
\subsection{Diagrams of Hermitian type} For us, the striking fact in Example~\ref{ex:D4 and C3} is that in this instance of congruence, the poset isomorphism is given by conjugate partitions. Our goal in this project was to find other congruences of blocks (in the context of Hermitian symmetric pairs) where the poset isomorphism is given by conjugate partitions. We thus build upon the work of Armour \cite{Armour}, who observed the appearance of conjugate partitions in the context of congruence of singular and regular blocks. To illustrate the main idea behind Section~\ref{s:Diagrams of Hermitian Type}, we have shaded the Young diagrams in Figure~\ref{fig:example intro} to show how they can be constructed via a ``stacking'' operation. On either side of Figure~\ref{fig:example intro}, if we consider only the shaded boxes of the diagrams, then moving down along a line in the Hasse diagram adds one shaded box, such that the shaded boxes form a shifted Young diagram. (There is a natural way to identify these shifted Young diagrams with the lower-order ideals of the poset of positive noncompact roots of $\mathfrak{g}$, which in turn correspond to certain Weyl group elements.)
Note that the shaded part of each diagram is the same in both posets. On the right-hand side, to make the Young diagram of $\pi$ itself, we ``stack'' the shaded diagram against the right edge of its reflection about the diagonal (the white boxes). This stacking construction by nature produces an ASC partition $\pi$.
Likewise, on the left-hand side of Figure~\ref{fig:example intro}, each shaded diagram is stacked against the upper edge of its reflection, thereby producing the conjugate of an ASC partition.
\subsection{Congruence of blocks and conjugate partitions} Our main result (Theorem~\ref{thm:Cong and Conj}, summarized in Table~\ref{table:WC}) is a description of six infinite families of congruent blocks given by conjugate partitions, just as in Example~\ref{ex:D4 and C3}. The congruence in Example~\ref{ex:D4 and C3} is a special subfamily --- in some sense, a degenerate case --- in which the two resolutions are for finite-dimensional modules; in general, the complex on the left is the resolution of an infinite-dimensional $\mathfrak{g}$-module (see \cite{EW}). In verifying the six families of congruences in Table~\ref{table:WC}, our primary tool is the process of \emph{Enright--Shelton reduction}. This reduction, which has a strong combinatorial flavor, produces a poset isomorphism by deleting certain coordinates in a weight of $\mathfrak{g}$; the result of the reduction is therefore a weight of a Lie algebra $\mathfrak{g}'$ with generally smaller rank than $\mathfrak{g}$. (Even in Example~\ref{ex:D4 and C3}, we can see a sort of proto-Enright reduction from $\mathsf{D}_4$ to $\mathsf{C}_3$.)
\subsection{Hilbert series and generalized Littlewood identities}
As an application of our main result, we are able to write down the Hilbert series for the infinite-dimensional modules in each of our six families (see Table \ref{table:Hilbert-series}). For certain of the families --- namely, those in which the infinite-dimensional module is a Wallach representation of $\mathfrak{g}$ --- we thereby recover the well-known Hilbert series of determinantal varieties. By computing the Euler characteristic of BGG resolutions of the finite-dimensional $\mathfrak{g}$-modules, we derive six new families of identities (see Table~\ref{table:identities}) generalizing the classical identities~\eqref{Dual-Cauchy}--\eqref{Littlewood-D}.
\subsection{Open problems} Our dimension identities in Theorems~\ref{theorem:ID-dim-GLn} and \ref{thm:dim GLn pairs} raise further questions that we leave as open problems in Section~\ref{sec:open probs}. In particular, one would like to find bijective proofs in terms of semistandard Young tableaux. It is also natural to look for other equalities among the dimensions of $\mathfrak{gl}$-modules, and to try to classify them. This problem is similar in flavor to that of classifying the equalities among the binomial coefficients, in the work of Lind~\cite{Lind}, Singmaster~\cite{Singmaster}, and de Weger~\cite{deWeger}. Our main result in this paper also suggests the problem of classifying all instances of congruent blocks in the context of Hermitian symmetric pairs; see Figure~\ref{fig:sporadic example} for one ``sporadic'' example lying outside the six families mentioned above.
\section{Dimension identities for $\mathfrak{gl}_{n}$- and $\mathfrak{gl}_{n+m}$-modules}
\label{sec:Dimension IDs}
\subsection{Partitions}
We use Greek letters to denote weakly decreasing tuples. In particular, a \emph{partition} is a finite, weakly decreasing tuple of elements in $\mathbb{N} \coloneqq \{0,1,2,\ldots\}$. We regard two partitions as equal if they differ only by trailing zeros. Given a partition $\pi = (\pi_1, \ldots, \pi_n)$, we write $|\pi| \coloneqq \sum_i \pi_i$.
We also make frequent use of the ``dual'' notation $\pi^* \coloneqq (-\pi_n,\ldots,-\pi_1)$.
To a partition $\pi$ we associate a \emph{Young diagram}, which consists of left-justified rows of boxes, such that the $i$th row from the top contains $\pi_i$ boxes. Note that $|\pi|$ is the number of boxes in the Young diagram of $\pi$. We define the \emph{rank}, denoted by $\rk\pi$, to be the length of the main diagonal of the Young diagram of $\pi$. We write $\Par(p \times q)$ for the set of partitions whose Young diagram fits inside a $p \times q$ rectangle. The \emph{conjugate} partition of $\pi$ is the partition whose Young diagram is that of $\pi$ reflected about the main diagonal; we write $\pi'$ to denote the conjugate of $\pi$ (but see our disclaimer at the beginning of Section~\ref{sec:Congruence}). In diagrams, we use the symbol $\bullet$ to denote the zero partition $0 \coloneqq (0, \ldots, 0)$, corresponding to the empty Young diagram.
A partition $\pi$ of rank $r$ can be uniquely described by its arm lengths $\alpha_1> \cdots > \alpha_r$ and leg lengths $\beta_1 > \cdots > \beta_r$, as follows. Define $\alpha_i$ to be the number of boxes in the $i$th row strictly to the right of the main diagonal in the Young diagram of $\pi$; likewise, define $\beta_i$ to be the number of boxes in the $i$th column strictly below the main diagonal. In this way, we will often denote a partition by its \emph{Frobenius symbol}, writing
\[
\pi = (\alpha | \beta) = (\alpha_1, \ldots, \alpha_r \mid \beta_1, \ldots, \beta_r).
\]
If $\pi = 0 = (\; \mid \;)$, then $\alpha$ and $\beta$ are empty. Clearly if $\pi = (\alpha|\beta)$, then $\pi' = (\beta|\alpha)$. For $m \in \mathbb{N}$, we adopt the shorthand
\[
\alpha + m \coloneqq (\alpha_1 + m, \ldots, \alpha_r + m).
\]
For example, if $\pi = (\alpha | \beta)$, then $(\alpha + m \mid \beta)$ denotes the partition obtained by adding $m$ to all the arms of $\pi$. We observe that an ASC partition, introduced informally in Section~\ref{sub:first intro}, is a partition of the form $(\alpha + 1 \mid \alpha)$.
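As a purely illustrative aside (ours, not part of the original text), the passage between a partition and its Frobenius symbol can be sketched in code; the decomposition of a Young diagram into its principal hooks makes the reverse direction mechanical, and swapping $\alpha$ and $\beta$ produces the conjugate:

```python
def to_frobenius(pi):
    """Frobenius symbol (alpha | beta) of a partition pi (row lengths)."""
    r = sum(1 for i, row in enumerate(pi) if row > i)  # rank
    alpha = tuple(pi[i] - i - 1 for i in range(r))
    beta = tuple(sum(1 for row in pi if row > i) - i - 1 for i in range(r))
    return alpha, beta

def from_frobenius(alpha, beta):
    """Rebuild row lengths from (alpha | beta): the principal hooks tile
    the Young diagram, so we collect their boxes and read off the rows."""
    boxes = set()
    for k, (a, b) in enumerate(zip(alpha, beta)):
        boxes |= {(k, j) for j in range(k, k + a + 1)}  # diagonal box + arm
        boxes |= {(i, k) for i in range(k, k + b + 1)}  # diagonal box + leg
    rows = max((i for i, _ in boxes), default=-1) + 1
    return [sum(1 for (i, j) in boxes if i == r) for r in range(rows)]

pi = [7, 5, 4, 2, 1, 1]
alpha, beta = to_frobenius(pi)
print(alpha, beta)                        # (6, 3, 1) (5, 2, 0)
print(from_frobenius(beta, alpha))        # the conjugate partition of pi
print(from_frobenius(alpha, beta) == pi)  # True
```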
\subsection{Dimension identities}
Throughout the paper, we let $\F{\mu}{n}$ denote the finite-dimensional simple $\mathfrak{gl}_n$-module with highest weight $\mu$, where $\mu$ is a weakly decreasing $n$-tuple of integers. In the proof below, we will write $(i,j) \in \pi$ to denote the box in the $i$th row (from the top) and $j$th column (from the left) of the Young diagram of a partition $\pi$. From each box $(i,j) \in \pi$ there emanates a \emph{hook}, which consists of all the boxes weakly to the right in row $i$ or weakly below in column $j$. Thus the hook length $h(i,j)$ is the number of boxes in the hook emanating from $(i,j)$. The content of a box is defined as $c(i,j) \coloneqq j-i$.
\begin{theorem}
\label{theorem:ID-dim-GLn}
Let $(\alpha | \beta) \in \Par(p\times q)$. Then for each $m \in \mathbb{N}$, we have
\begin{equation}
\label{ID-dim-GLn}
\dim \F{(\alpha+m\mid \beta)}{p} \dim \F{(\beta+m|\alpha)}{q} = \dim \F{(\alpha|\beta+m)}{p+m} \dim \F{(\beta|\alpha+m)}{q+m}.
\end{equation}
\end{theorem}
\begin{proof}
Suppose $\pi = (\alpha|\beta)$. By the hook--content formula~\cite{Stanley}*{Thm.~7.21.2}, we have
\begin{equation}
\label{hookcontentformula}
\dim \F{\pi}{n} = \prod_{(i,j) \in \pi} \frac{n + c(i,j)}{h(i,j)}.
\end{equation}
We first rewrite the product of the numerators in~\eqref{hookcontentformula} in terms of hooks: letting $h_k$ denote the hook emanating from box $(k,k)$, we have
\[
\prod_{(i,j) \in h_k} (n + c(i,j)) = \prod_{\ell = -\beta_k}^{\alpha_k} (n + \ell) = \frac{(n+\alpha_k)!}{(n-\beta_k-1)!}
\]
and so, putting $r = \rk \pi$, we have
\begin{equation}\label{hookrewrite}
\prod_{(i,j) \in \pi} (n + c(i,j)) = \prod_{k=1}^r \frac{(n+\alpha_k)!}{(n-\beta_k-1)!}.
\end{equation}
Using~\eqref{hookcontentformula} and~\eqref{hookrewrite}, we can rewrite the left-hand side of~\eqref{ID-dim-GLn} as
\begin{equation}\label{LHS}
\frac{\displaystyle\prod_{k =1}^r \frac{(p+\alpha_k + m)!}{(p-\beta_k - 1)!}}{\displaystyle\prod_{\mathclap{(i,j) \in (\alpha+m|\beta)}} h(i,j)} \: \frac{\displaystyle\prod_{k =1}^r \frac{(q+\beta_k + m)!}{(q-\alpha_k -1)!}}{\displaystyle\prod_{\mathclap{(i',j') \in (\beta+m|\alpha)}} h(i',j')},
\end{equation}
and the right-hand side of~\eqref{ID-dim-GLn} as
\begin{equation} \label{RHS}
\frac{\displaystyle\prod_{k =1}^r \frac{(p+m+\alpha_k)!}{(p+m-\beta_k-m - 1)!}}{\displaystyle\prod_{\mathclap{(i',j') \in (\alpha|\beta+m)}} h(i',j')} \:
\frac{\displaystyle\prod_{k =1}^r \frac{(q+m +\beta_k)!}{(q+m-\alpha_k - m -1)!}}{\displaystyle\prod_{\mathclap{(i,j) \in (\beta|\alpha+m)}} h(i,j)}.
\end{equation}
Clearly the numerator in~\eqref{LHS} equals that in~\eqref{RHS}.
Moreover, since $(\alpha \mid \beta + m) = (\beta + m \mid \alpha)'$, and since the multiset of hook lengths is preserved under conjugation of partitions, the denominators in~\eqref{LHS} and~\eqref{RHS} are also equal. Hence the left- and right-hand sides of~\eqref{ID-dim-GLn} are equal, and the theorem follows.
\end{proof}
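The identity just proved can be spot-checked computationally. The following sketch (ours, purely illustrative; the instance $(\alpha|\beta) = (3,1 \mid 2,0)$, $p=4$, $q=5$, $m=2$ is an arbitrary choice) implements the hook--content formula and verifies one case of \eqref{ID-dim-GLn}:

```python
from fractions import Fraction

def young(pi):
    """All boxes (i, j) of the Young diagram of pi, 0-indexed."""
    return [(i, j) for i, row in enumerate(pi) for j in range(row)]

def hook(pi, i, j):
    arm = pi[i] - j - 1
    leg = sum(1 for row in pi[i + 1:] if row > j)
    return arm + leg + 1

def dim_gl(pi, n):
    """Dimension of the simple gl_n-module F_pi via the hook--content formula."""
    d = Fraction(1)
    for i, j in young(pi):
        d *= Fraction(n + j - i, hook(pi, i, j))  # content c(i,j) = j - i
    return int(d)

def partition(alpha, beta):
    """Partition with Frobenius symbol (alpha | beta), via principal hooks."""
    boxes = set()
    for k, (a, b) in enumerate(zip(alpha, beta)):
        boxes |= {(k, c) for c in range(k, k + a + 1)}
        boxes |= {(r, k) for r in range(k, k + b + 1)}
    rows = max(i for i, _ in boxes) + 1
    return [sum(1 for (i, j) in boxes if i == r) for r in range(rows)]

# Check the identity for (alpha|beta) = (3,1 | 2,0), p = 4, q = 5, m = 2:
a, b, p, q, m = (3, 1), (2, 0), 4, 5, 2
am = tuple(x + m for x in a); bm = tuple(x + m for x in b)
lhs = dim_gl(partition(am, b), p) * dim_gl(partition(bm, a), q)
rhs = dim_gl(partition(a, bm), p + m) * dim_gl(partition(b, am), q + m)
print(lhs == rhs)  # True
```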
\begin{theorem}
\label{thm:dim GLn pairs}
We have $\dim \F{\pi}{n} = \dim \F{\pi'}{n+m}$ for each $n \in \mathbb N$ if and only if $\pi$ has the form $(\alpha + m \mid \alpha)$.
\end{theorem}
\begin{proof}
Suppose $\pi = (\alpha + m \mid \alpha)$. In Theorem~\ref{theorem:ID-dim-GLn}, by setting $\alpha = \beta$ and $p=q=n$, and then taking the square root of both sides of~\eqref{ID-dim-GLn}, we have $\dim \F{\pi}{n} = \dim \F{\pi'}{n+m}$. Conversely, fix $m$ and suppose that $\dim \F{\pi}{n} = \dim \F{\pi'}{n+m}$ for all $n$. Since conjugate partitions have the same multiset of hook lengths, and since $c(j,i) = -c(i,j)$, the hook--content formula~\eqref{hookcontentformula} yields
\[
\prod_{(i,j) \in \pi} \bigl(n + c(i,j)\bigr) = \prod_{(i,j) \in \pi} \bigl(n+m - c(i,j)\bigr).
\]
Treating each side as a polynomial in $n$, by unique factorization we must have the equality of multisets
\begin{equation}
\label{multiset-content}
C_1 \coloneqq \{c(i,j) \mid (i,j) \in \pi\} = \{ m - c(i,j) \mid (i,j) \in \pi\}.
\end{equation}
If $\pi = (\alpha|\beta)$, then $\alpha_1 = \max(C_1)$ and $-\beta_1 = \min(C_1)$. But by~\eqref{multiset-content}, we must also have $\min(C_1) = m -\alpha_1 = -\beta_1$, and thus $\alpha_1 = \beta_1 + m$. Since the outermost hook $h_1$ of $\pi$ contains exactly one box with content $c$ for each $-\beta_1 \leq c \leq \alpha_1$, we delete these contents from $C_1$ to obtain the new multiset
\[
C_2 \coloneqq C_1 \setminus \{-\beta_1, -\beta_1 + 1, \ldots, \alpha_1\}.
\]
Just as before, $\alpha_2 = \max(C_2)$ and $-\beta_2 = \min(C_2) = m - \alpha_2$, and so $\alpha_2 = \beta_2 + m$. Continuing in this way to define each new multiset $C_{i+1} \coloneqq C_i \setminus \{ -\beta_i, -\beta_i + 1, \ldots, \alpha_i\}$, we obtain $\alpha_i = \beta_i + m$ for each $i = 1, \ldots, \rk\pi$.
Therefore $\pi$ takes the form $(\alpha + m \mid \alpha)$.
\end{proof}
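The theorem can likewise be spot-checked computationally. The following sketch (ours, purely illustrative) uses the Weyl dimension formula for $\mathfrak{gl}_n$ and the ASC partition $(7,5,4,2,1,1)$ from the introduction (so $m=1$) to confirm $\dim \F{\pi}{n} = \dim \F{\pi'}{n+1}$ for several $n$:

```python
from fractions import Fraction

def dim_gl(pi, n):
    """Weyl dimension formula for gl_n: the product over i < j of
    (pi_i - pi_j + j - i) / (j - i), with pi padded by zeros to length n."""
    w = list(pi) + [0] * (n - len(pi))
    d = Fraction(1)
    for i in range(n):
        for j in range(i + 1, n):
            d *= Fraction(w[i] - w[j] + j - i, j - i)
    return int(d)

def conjugate(pi):
    """Conjugate partition: column lengths of the Young diagram."""
    return [sum(1 for row in pi if row > j) for j in range(max(pi))]

pi = [7, 5, 4, 2, 1, 1]   # ASC: every arm exceeds its leg by m = 1
for n in range(6, 10):
    assert dim_gl(pi, n) == dim_gl(conjugate(pi), n + 1)
print("ok")
```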
\begin{rem}
We also have the following $q$-analogue of Theorem~\ref{thm:dim GLn pairs}:
\[
s_{\pi}(q^{n-1}, q^{n-3}, \ldots, q^{3-n}, q^{1-n}) = s_{\pi'}(q^{n+m-1}, q^{n+m-3}, \ldots, q^{3-n-m}, q^{1-n-m})
\]
when $\pi = (\alpha + m \mid \alpha)$. In terms of representation theory, this means that the modules in Theorem~\ref{thm:dim GLn pairs} not only have the same dimension, but are equivalent as $\sl_2$-modules upon the restriction of $\mathfrak{gl}_n$ and $\mathfrak{gl}_{n+m}$ to their principal $\sl_2$ subalgebra.
\end{rem}
\section{Generalized BGG resolutions}
\label{section:ID's and BGG}
\subsection{\!\!\!} With Theorems~\ref{theorem:ID-dim-GLn} and~\ref{thm:dim GLn pairs} in mind, we recall the three classical identities~\eqref{Dual-Cauchy}, \eqref{Littlewood-C}, and \eqref{Littlewood-D} from the introduction. We observe the similarity between~\eqref{Dual-Cauchy} and Theorem~\ref{theorem:ID-dim-GLn} in the case $m=0$, where the two highest weights on the left-hand side of~\eqref{ID-dim-GLn} are conjugates of each other, with the first being an element of $\Par(p \times q)$. Likewise the sums in the Littlewood identities~\eqref{Littlewood-C} and \eqref{Littlewood-D} range over the same ASC partitions that appear as highest weights for the $\mathfrak{gl}_n$-modules in Theorem~\ref{thm:dim GLn pairs}, in the case $m=1$. In the remainder of this section, we will explain how each of the identities~\eqref{Dual-Cauchy},~\eqref{Littlewood-C}, and~\eqref{Littlewood-D} can be viewed as the Euler characteristic of the BGG complex of the trivial representation; see also~\cites{Sam-Weyman-2013,Sam-Weyman-2015} for a different approach.
\subsection{Lie algebras of Hermitian type}
\label{sub:Hermitian pairs}
Let $\mathfrak{g}_{\mathbb R}$ be a real simple noncompact reductive Lie algebra, with Cartan decomposition $\mathfrak{g}_{\mathbb R} = \k_{\mathbb R} \oplus \mathfrak{p}_{\mathbb R}$. We write the complexified Cartan decomposition $\mathfrak{g} = \k \oplus \mathfrak{p}$. From the general theory, there exists a distinguished element $h_0 \in \mathfrak{z}(\k)$ such that $\operatorname{ad} h_0$ acts on $\mathfrak{g}$ with eigenvalues $0$ and $\pm 1$. We thus have a triangular decomposition $\mathfrak{g} = \mathfrak{p}^- \oplus \k \oplus \mathfrak{p}^+$, where $\mathfrak{p}^{\pm} = \{ x \in \mathfrak{g} \mid [h_0, x] = \pm x\}$. The subalgebra $\q = \k \oplus \mathfrak{p}^+$ is a maximal parabolic subalgebra of $\mathfrak{g}$, with Levi subalgebra $\k$ and abelian nilradical $\mathfrak{p}^+$. Parabolic subalgebras of complex simple Lie algebras that arise in this way are called parabolic subalgebras of \emph{Hermitian type}, and $(\mathfrak{g},\k)$ is called a \emph{Hermitian symmetric pair}.
In this paper, we focus our attention on the three families of Hermitian symmetric pairs, which we call Types I, II, and III, that occur in the setting of Howe duality. In the list below, as we will do throughout the paper, we interchangeably name each pair according to the real Lie algebras $(\mathfrak{g}_{\mathbb R}, \k_{\mathbb R})$, the complexified Lie algebras $(\mathfrak{g},\k)$, and the Cartan classification:
\[
\renewcommand{3}{1.5}
\begin{array}{lccccc}
\text{Type I:} & (\mathfrak{su}(p,q), \: \mathfrak{s}(\mathfrak{u}(p) \oplus \mathfrak{u}(q)) & = & (\sl_{p+q},\:\mathfrak{s}(\mathfrak{gl}_p \oplus \mathfrak{gl}_q)) & = & (\mathsf{A}_{p+q-1},\:\mathsf{A}_{p-1}\times\mathsf{A}_{q-1}). \\
\text{Type II:} & (\sp(2n,\mathbb R), \:\mathfrak{u}(n)) & = & (\sp_{2n},\:\mathfrak{gl}_n) & = & (\mathsf{C}_n,\:\mathsf{A}_{n-1}). \\
\text{Type III:} & (\mathfrak{so}^*(2n), \: \mathfrak{u}(n)) & = & (\mathfrak{so}_{2n},\:\mathfrak{gl}_n) & = & (\mathsf{D}_n,\:\mathsf{A}_{n-1}).
\end{array}
\]
For each type above, we give explicit realizations of $\mathfrak{g}$, $\k$, and $\mathfrak{p}^+$ in the first three columns of Table~\ref{table:Type123}. In the $\mathfrak{g}$ column, for Type I, $\left[\begin{smallmatrix}A&B\\ C&D\end{smallmatrix}\right]$ is a block $(p+q)\times(p+q)$ complex matrix, while for Types II and III it is a block $2n\times 2n$ complex matrix. In the $\mathfrak{p}^+$ column, we write $\M_{p,q}$ for the space of complex $p \times q$ matrices, while $\SM_n$ (resp., $\AM_n$) denotes the symmetric (resp., alternating) complex $n \times n$ matrices. In the next column, we write down the character of $\mathbb C[\mathfrak{p}^+]$ as a $\k$-module, setting $x_i \coloneqq e^{-\varepsilon_i}$ and $y_j \coloneqq e^{\varepsilon_j}$. (For Type I, the sum ranges over $1 \leq i\leq p$ and $1 \leq j \leq q$, whereas in Types II and III we have $1 \leq i \leq j \leq n$ and $1 \leq i < j \leq n$, respectively.) We also include the well-known expansions of these characters in terms of Schur polynomials: in Type I, this yields the Cauchy identity, while in Types II and III the identities were recorded by Littlewood~\cite{Littlewood}*{Section 11.9} on the same page as his identities~\eqref{Littlewood-C} and~\eqref{Littlewood-D}. In the last column of Table~\ref{table:Type123}, by the phrase ``even rows (resp., columns),'' we mean that all row (resp., column) lengths in the Young diagram are even.
\begin{table}
\centering
\input{Table_basics.tex}
\caption{Summary of data for Hermitian symmetric pairs of Types I, II, and III. For Type I, we write $\mathbf{x} = (x_1, \ldots, x_p)$ and $\mathbf{y} = (y_1, \ldots, y_q)$; for the other types, $\mathbf{x} = (x_1, \ldots, x_n)$.}
\label{table:Type123}
\end{table}
\subsection{Roots and weights}
Suppose $(\mathfrak{g},\k)$ is a Hermitian symmetric pair, and let $\mathfrak{h}$ be a Cartan subalgebra of both $\mathfrak{g}$ and $\k$. Let $\Phi$ be the root system of the pair $(\mathfrak{g},\mathfrak{h})$, and $\mathfrak{g}_\alpha$ the root space corresponding to $\alpha \in \Phi$. Then put $\Phi(\k) = \{ \alpha \in \Phi \mid \mathfrak{g}_\alpha \subseteq \k\}$ and $\Phi(\mathfrak{p}^+) = \{ \alpha \in \Phi \mid \mathfrak{g}_\alpha \subseteq \mathfrak{p}^+\}$.
Choose a set $\Phi^+$ of positive roots so that $\Phi(\mathfrak{p}^+)\subseteq \Phi^+$, and let $\Phi^- = -\Phi^+$ denote the negative roots. We write $\Phi^{\!+\!}(\k)$ for $\Phi^+ \cap \Phi(\k)$. Let $\Pi
= \{\alpha_1,\ldots,\alpha_r\} \subset \Phi^+$ denote the set of simple roots. We write $\langle \; , \; \rangle$ to denote the nondegenerate bilinear form on $\mathfrak{h}^*$ induced from the Killing form of $\mathfrak{g}$. For $\alpha \in \Phi$, we write $\alpha^\vee \coloneqq 2\alpha / \langle \alpha,\alpha \rangle$. We define the fundamental weights $\omega_i$ to be the basis of $\mathfrak{h}^*$ dual to the $\alpha_i^\vee$, i.e., $\langle \omega_i,\alpha_j^\vee\rangle = \delta_{ij}$. As usual, we let $\rho \coloneqq \frac{1}{2} \sum_{\alpha \in \Phi^+} \alpha$. Explicitly, we have the following, where $\rho$ is expressed in the standard $\varepsilon$-coordinates:
\begin{alignat*}{3}
& \text{Type I:} \quad &
\Phi^{\!+\!}(\k) &= \{ \varepsilon_i - \varepsilon_j \mid 1 \leq i < j \leq p \: \text{ or } \: p+1 \leq i < j \leq p+q\}, \\
& & \Phi(\mathfrak{p}^+) & = \{ \varepsilon_i - \varepsilon_j \mid 1 \leq i \leq p \: \text{
and } \: p+1 \leq j \leq p+q\},\\
& & \Pi & = \{\alpha_i = \varepsilon_i - \varepsilon_{i+1} \mid 1 \leq i \leq p+q-1\},\\
& & \omega_i & = \varepsilon_1 + \cdots + \varepsilon_i,\\
& & \rho & = (p+q-1, p+q-2, \ldots, 2, 1, 0).\\[1em]
& \text{Type II:} \quad &
\Phi^{\!+\!}(\k) & = \{ \varepsilon_i - \varepsilon_j \mid 1 \leq i < j \leq n \}, \\
& & \Phi(\mathfrak{p}^+) & = \{ \varepsilon_i + \varepsilon_j \mid 1 \leq i \leq j \leq n \},\\
& & \Pi & = \{ \alpha_i = \varepsilon_i - \varepsilon_{i+1} \mid 1 \leq i \leq n-1\} \cup \{ \alpha_n = 2 \varepsilon_n\},\\
& & \omega_i & = \varepsilon_1 + \cdots + \varepsilon_i,\\
& & \rho & = (n, n-1, \ldots, 3, 2, 1).\\[1em]
& \text{Type III:} \quad &
\Phi^{\!+\!}(\k) & = \{ \varepsilon_i - \varepsilon_j \mid 1 \leq i < j \leq n \}, \\
& & \Phi(\mathfrak{p}^+) & = \{ \varepsilon_i + \varepsilon_j \mid 1 \leq i < j \leq n \},\\
& & \Pi & = \{\alpha_i = \varepsilon_i - \varepsilon_{i+1} \mid 1 \leq i \leq n-1\} \cup \{ \alpha_n = \varepsilon_{n-1}+\varepsilon_n\},\\
& & \omega_i & = \begin{cases}
\varepsilon_1 + \cdots + \varepsilon_i,& 1 \leq i \leq n-2,\\
\frac{1}{2}(\varepsilon_1 + \cdots + \varepsilon_{n-1} - \varepsilon_n), & i = n-1,\\
\frac{1}{2}(\varepsilon_1 + \cdots + \varepsilon_n), & i = n,
\end{cases} \\
& & \rho & = (n-1, n-2, \ldots, 2, 1, 0).
\end{alignat*}
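For example, in the smallest nontrivial Type I case $p = q = 2$ (so $\mathfrak{g} = \mathsf{A}_3$), the data above specialize to
\[
\Phi^{\!+\!}(\k) = \{\varepsilon_1 - \varepsilon_2, \; \varepsilon_3 - \varepsilon_4\},
\qquad
\Phi(\mathfrak{p}^+) = \{\varepsilon_1 - \varepsilon_3, \; \varepsilon_1 - \varepsilon_4, \; \varepsilon_2 - \varepsilon_3, \; \varepsilon_2 - \varepsilon_4\},
\]
with $\Pi \cap \Phi(\mathfrak{p}^+) = \{\alpha_2 = \varepsilon_2 - \varepsilon_3\}$ and $\rho = (3,2,1,0)$.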
Note that $\Phi(\mathfrak{p}^+)$ inherits the usual poset structure from $\mathfrak{h}^*$, where $\mu \leq \lambda$ if and only if $\lambda - \mu$ can be written as an $\mathbb{N}$-linear combination of positive roots. There is a unique element of $\Pi \cap \Phi(\mathfrak{p}^+)$, the ``noncompact simple root,'' which is the smallest element of the poset $\Phi(\mathfrak{p}^+)$.
Let $\mathcal{W}$ be the Weyl group of the pair $(\mathfrak{g},\mathfrak{h})$, and $\mathcal{W}(\k) \subseteq \mathcal{W}$ the Weyl group of the pair $(\k,\mathfrak{h})$. For each $w \in \mathcal{W}$, let $\Phi_w \coloneqq \Phi^+ \cap w\Phi^-$. Then we have the usual length function $\ell$ on $\mathcal{W}$, whereby
\begin{equation}
\label{length-size-Delta-w}
\ell(w) = |\Phi_w|.
\end{equation}
Following Kostant~\cite{Kostant}*{(5.13.1)}, we define
\[
\prescript{\k}{}{\mathcal{W}} \coloneqq \{ w \in \mathcal{W} \mid \Phi_w \subseteq \Phi(\mathfrak{p}^+)\},
\]
the subset of minimal-length right coset representatives of $\mathcal{W}(\k)$ in $\mathcal{W}$. To refine $\prescript{\k}{}{\mathcal{W}}$ by length, we further define
\[
\prescript{\k}{}{\mathcal{W}}_i \coloneqq \{w \in \prescript{\k}{}{\mathcal{W}} \mid \ell(w) = i\}.
\]
The ``dot'' action by the Weyl group is defined as follows:
\[
w \cdot \lambda \coloneqq w(\lambda + \rho) - \rho
\]
for $w \in \mathcal{W}$ and $\lambda \in \mathfrak{h}^*$. We define the reflection $s_\alpha:\mathfrak{h}^* \longrightarrow \mathfrak{h}^*$ by $s_\alpha(\lambda) \coloneqq \lambda - \langle \lambda, \alpha^\vee \rangle \alpha$. Let
\begin{align*}
\Lambda^+ &\coloneqq \{ \lambda \in \mathfrak{h}^* \mid \langle \lambda+\rho, \: \alpha^\vee\rangle \in \mathbb{Z}_{>0} \text{ for all } \alpha \in \Phi^+\},\\
\Lambda^{\!+\!}(\k) &\coloneqq \{ \lambda \in \mathfrak{h}^* \mid \langle \lambda+\rho, \: \alpha^\vee\rangle \in \mathbb{Z}_{>0} \text{ for all } \alpha \in \Phi^{\!+\!}(\k)\}
\end{align*}
denote the sets of dominant integral weights with respect to $\Phi^+$ and $\Phi^{\!+\!}(\k)$, respectively. In Type I, we write a weight in $\mathfrak{h}^*$ as a $(p+q)$-tuple in which a semicolon separates the first $p$ coordinates from the last $q$ coordinates. Upon restriction to $\k = \mathfrak{s}(\mathfrak{gl}_p \oplus \mathfrak{gl}_q)$, a weight $(\mu;\nu)$ is thus the ordered pair $(\mu,\nu)$. When $(\mu,\nu) \in \Lambda^{\!+\!}(\k)$, we will also write $\mu \otimes \nu$, since it is the highest weight of the irreducible $\k$-module $\F{\mu}{p} \otimes \F{\nu}{q}$. In Types II and III, elements of $\mathfrak{h}^*$ are written as $n$-tuples.
From the general theory of Hermitian symmetric pairs, we have an isomorphism of posets
\begin{align}
\label{kW poset}
\begin{split}
\prescript{\k}{}{\mathcal{W}} & \cong \{\text{lower-order ideals in }\Phi(\mathfrak{p}^+)\},\\
w & \mapsto \Phi_w,
\end{split}
\end{align}
with $\prescript{\k}{}{\mathcal{W}}$ a poset under the Bruhat order, and the set of lower-order ideals in $\Phi(\mathfrak{p}^+)$ ordered by inclusion, with the usual ordering on the roots. Hence $\Phi_w$ can be pictured as a subdiagram of the Hasse diagram of $\Phi(\mathfrak{p}^+)$, which can be drawn on a square lattice.
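To illustrate~\eqref{kW poset}, take Type I with $p = q = 2$: the Hasse diagram of $\Phi(\mathfrak{p}^+)$ is a $2 \times 2$ square lattice, so the lower-order ideals are precisely the Young diagrams of the six partitions
\[
\varnothing, \quad (1), \quad (2), \quad (1,1), \quad (2,1), \quad (2,2)
\]
fitting inside a $2 \times 2$ box. Hence $|\prescript{\k}{}{\mathcal{W}}| = 6$, and by~\eqref{length-size-Delta-w} the refinement by length is $|\prescript{\k}{}{\mathcal{W}}_i| = 1, 1, 2, 1, 1$ for $i = 0, \ldots, 4$.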
\subsection{BGG--Lepowsky resolutions}
For $\mu \in \Lambda^{\!+\!}(\k)$, let $F_{\mu}$ be the finite-dimensional simple $\k$-module with highest weight $\mu$. Then $F_{\mu}$ is also a module for $\q = \k\oplus\mathfrak{p}^+$, with $\mathfrak{p}^+$ acting by zero. We define the \emph{parabolic Verma module}
\begin{equation}
\label{Verma}
N_\lambda \coloneqq U(\mathfrak{g}) \otimes_{U(\q)} F_{\lambda}.
\end{equation}
When $(\mathfrak{g},\k)$ is a Hermitian symmetric pair, $\mathfrak{p}^-$ is abelian and therefore we can identify $U(\mathfrak{p}^-)$ with $S(\mathfrak{p}^-)$. By the PBW theorem, we thus obtain
\[
N_\lambda \cong S(\mathfrak{p}^-) \otimes F_{\lambda}
\]
as a $\k$-module. It will therefore be easy for us to write down the $\k$-character of a parabolic Verma module, as follows. Since $\mathfrak{p}^- \cong (\mathfrak{p}^+)^*$, we have $S(\mathfrak{p}^-) \cong \mathbb C[\mathfrak{p}^+]$, and so we can find $\ch S(\mathfrak{p}^-)$ listed in Table~\ref{table:Type123}. In Types II and III, where $\k = \mathfrak{gl}_n$, if $\mu^*$ happens to be a partition, then $\ch F_{\mu} = s_{\mu^*}(\mathbf{x})$. In Type I, if $\mu^*$ and $\nu$ both happen to be partitions, then $\ch F_{\mu\otimes\nu} = s_{\mu^*}(\mathbf{x})s_\nu(\mathbf{y})$. (Recall that the symbol $\mu^*$ denotes the tuple obtained by negating and reversing the coordinates of $\mu$.)
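As a concrete illustration, take Type II with $n = 2$ and $\lambda = 0$: then $F_0$ is the trivial $\k$-module, and
\[
\ch N_0 = \ch S(\mathfrak{p}^-) = \frac{1}{(1-x_1^2)(1-x_1x_2)(1-x_2^2)},
\]
in agreement with the Type II character of $\mathbb C[\mathfrak{p}^+]$ described above.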
Let $\lambda \in \Lambda^+$, and let $L_{\lambda}$ be the finite-dimensional simple $\mathfrak{g}$-module with highest weight $\lambda$. Generalizing the Bernstein--Gelfand--Gelfand (BGG) resolution in terms of ordinary Verma modules~\cite{BGG}, Lepowsky~\cite{Lepowsky} showed that there exists a resolution for $L_{\lambda}$ in terms of parabolic Verma modules:
\[
0 \longrightarrow N_s \longrightarrow N_{s-1} \longrightarrow \cdots \longrightarrow N_1 \longrightarrow N_0 \longrightarrow L_{\lambda} \longrightarrow 0,
\]
where
\begin{equation}
\label{BGG form}
N_i = \bigoplus_{\mathclap{w \in \prescript{\k}{}{\mathcal{W}}_i}} N_{w\cdot\lambda}
\end{equation}
and $s = |\Phi(\mathfrak{p}^+)|$. We now state the result (to be proved in Section~\ref{s:Diagrams of Hermitian Type}) that connects these BGG resolutions (where $\lambda = 0$) with the three classical identities~\eqref{Dual-Cauchy}--\eqref{Littlewood-D} above:
\begin{prop}
\label{prop:BGG}
In each of Types I--III, the trivial $\mathfrak{g}$-module $\mathbbm{1}$ has the generalized BGG resolution
\[
0 \longrightarrow N_s \longrightarrow N_{s-1} \longrightarrow \cdots \longrightarrow N_1 \longrightarrow N_0 \longrightarrow \mathbbm{1} \longrightarrow 0,
\]
with the terms given as follows:
\[
\def\arraystretch{2.5}
\begin{array}{llll}
\text{\normalfont{Type I}} & (\mathfrak{g} = \mathsf{A}_{p+q-1}): &
\displaystyle N_i = \bigoplus_{\substack{\pi \in \Par(p \times q),\\ |\pi| = i}} N_{\pi^* \otimes \pi'} & \text{\normalfont{and }} s= pq.\\
\text{\normalfont{Type II}} & (\mathfrak{g} = \mathsf{C}_n): & \displaystyle N_i = \bigoplus_{\substack{\pi = (\alpha +1 | \alpha),\\ \alpha_1 < n, \\ |\alpha+1|=i}} N_{\pi^*} & \text{\normalfont{and }} s= \binom{n+1}{2}.\\
\text{\normalfont{Type III}} & (\mathfrak{g} = \mathsf{D}_{n+1}): & \displaystyle N_i = \bigoplus_{\substack{\pi = (\alpha+1|\alpha),\\ \alpha_1 < n,\\ |\alpha+1|=i}} N_{\pi'^*} & \text{\normalfont{and }} s= \binom{n+1}{2}.
\end{array}
\]
\end{prop}
(In Type III, we have stated the proposition for $\mathsf{D}_{n+1}$ rather than $\mathsf{D}_n$, in order to line up our results with the Littlewood identity~\eqref{Littlewood-D} in $n+1$ variables. We have added the extra variable so that the sums in both Littlewood identities would range over the same set of partitions.) Observe that $|(\alpha + 1 \mid \alpha)| = 2|\alpha+1|$; then by alternating the characters of the terms in the BGG resolutions in Proposition~\ref{prop:BGG}, we obtain
\[
\def\arraystretch{3}
\begin{array}{ll}
\text{\normalfont{Type I:}} &
\displaystyle 1 = \ch \mathbbm{1} = \ch S(\mathfrak{p}^-) \cdot \sum_{\mathclap{\pi \in \Par(p \times q)}} (-1)^{|\pi|} \ch \left(\F{\pi^*}{p}\otimes\F{\pi'}{q}\right) = \frac{\sum_\pi (-1)^{|\pi|} s_\pi(x_1,\ldots,x_p)s_{\pi'}(y_1,\ldots,y_q)}{\prod_{i,j} (1-x_i y_j)}.
\\
\text{\normalfont{Type II:}} &
\displaystyle 1 = \ch \mathbbm{1} = \ch S(\mathfrak{p}^-) \cdot \sum_{\mathclap{\substack{\pi=(\alpha+1|\alpha),\\ \alpha_1 < n}}} (-1)^{|\pi|/2} \ch \F{\pi^*}{n} = \frac{\sum_\pi (-1)^{|\pi|/2} s_\pi(x_1,\ldots,x_n)}{\prod_{i \leq j} (1-x_i x_j)}.
\\
\text{\normalfont{Type III:}} &
\displaystyle 1 = \ch \mathbbm{1} = \ch S(\mathfrak{p}^-) \cdot \sum_{\mathclap{\substack{\pi=(\alpha+1|\alpha),\\ \alpha_1 < n}}} (-1)^{|\pi|/2} \ch \F{\pi'^*}{n+1} = \frac{\sum_\pi (-1)^{|\pi|/2} s_{\pi'}(x_1,\ldots,x_{n+1})}{\prod_{i < j} (1-x_i x_j)}.
\end{array}
\]
For Type I, rearranging the equation above yields the dual Cauchy identity~\eqref{Dual-Cauchy} under the substitution $x_i \mapsto -x_i$. For Types II and III, rearrangement yields the Littlewood identities~\eqref{Littlewood-C} and~\eqref{Littlewood-D}.
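As a quick check of the Type I computation, take $p = 2$ and $q = 1$. The partitions in $\Par(2 \times 1)$ are $\varnothing$, $(1)$, and $(1,1)$, so the numerator above becomes
\[
1 - s_{(1)}(x_1,x_2)\,s_{(1)}(y_1) + s_{(1,1)}(x_1,x_2)\,s_{(2)}(y_1) = 1 - (x_1+x_2)y_1 + x_1x_2y_1^2 = (1-x_1y_1)(1-x_2y_1),
\]
which indeed cancels the denominator; after the substitution $x_i \mapsto -x_i$, this becomes the dual Cauchy identity $\prod_{i}(1+x_iy_1) = \sum_{\pi} s_\pi(x_1,x_2)\,s_{\pi'}(y_1)$.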
\begin{rem}
In Types II and III, by looking at the \emph{graded} characters of parabolic Verma modules in the parabolic BGG resolution of the trivial representation, we obtain
\begin{align*}
\frac{\displaystyle\sum_{\pi \operatorname{ASC}} (-1)^{|\pi|/2} t^{|\pi|/2} s_{\pi}(x_1, \ldots, x_n)}{\displaystyle\prod_{i \leq j} (1 - t x_i x_j)} &= 1,\\[3ex]
\frac{\displaystyle\sum_{\pi \operatorname{ASC}} (-1)^{|\pi|/2} t^{|\pi|/2} s_{\pi'}(x_1,\ldots,x_n)}{\displaystyle\prod_{i<j} (1 - t x_i x_j)} &= 1.
\end{align*}
Rearranging and replacing $t$ with $-t$, we obtain
\begin{align*}
\prod_{i\leq j} (1 + t x_i x_j) &= \sum_{\pi \operatorname{ASC}} t^{|\pi|/2} s_{\pi}(x_1,\ldots,x_n),\\
\prod_{i<j} (1 + t x_i x_j) &= \sum_{\pi \operatorname{ASC}} t^{|\pi|/2} s_{\pi'}(x_1,\ldots,x_n).
\end{align*}
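For $n = 2$, for instance, the first of these identities reads
\[
(1+tx_1^2)(1+tx_1x_2)(1+tx_2^2) = 1 + t\,s_{(2)} + t^2\,s_{(3,1)} + t^3\,s_{(3,3)},
\]
where $(2) = (1 \mid 0)$, $(3,1) = (2 \mid 1)$, and $(3,3) = (2,1 \mid 1,0)$ are the ASC partitions with at most two rows; expanding both sides in powers of $t$ confirms the equality directly.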
\end{rem}
\section{Diagrams of Hermitian type}
\label{s:Diagrams of Hermitian Type}
\subsection{\!\!\!}
In this section, we introduce certain diagrams that encode the highest weights $w \cdot \lambda \in \Lambda^{\!+\!}(\k)$ occurring in the BGG resolutions. We will then be able to write down all of these weights directly from the diagrams, eliminating the need for calculations in terms of the Weyl group. This will allow for a quick proof of Proposition~\ref{prop:BGG} above. Moreover, in Types II and III, it turns out that these diagrams are ASC partitions and their conjugates.
\subsection{A recursive formula for $w \cdot \lambda$}
We follow the exposition in~\cite{EHP}*{Section 3.7}, to which we refer the reader for details; we emphasize that the following facts are valid only within the Hermitian symmetric setting. Suppose $(\mathfrak{g},\k)$ is a Hermitian symmetric pair. Then there exists a unique map $f:\Phi(\mathfrak{p}^+)\longrightarrow \Pi$, such that
\begin{equation}
\label{fdef}
\Phi_w = \Phi_v \cupdot \{\beta\} \quad (w,v \in \prescript{\k}{}{\mathcal{W}}) \quad \Longrightarrow \quad w = vs_{f(\beta)} = s_{\beta} v.
\end{equation}
In this case, we have the explicit formula
\begin{equation}
\label{f-beta}
f(\beta) = v^{-1}\beta.
\end{equation}
It follows inductively that if we write $\Phi_w = \{\beta_1, \ldots,\beta_\ell\}$ such that for every $i = 1, \ldots, \ell$ the set $\{\beta_1, \ldots, \beta_i\}$ is a lower-order ideal of $\Phi(\mathfrak{p}^+)$, then
\begin{equation}
\label{w-in-terms-of-v}
w = s_{f(\beta_1)} \cdots s_{f(\beta_\ell)} = s_{\beta_\ell} \cdots s_{\beta_1}.
\end{equation}
In this paper, the motivation behind the map $f$ is the following observation (generalizing a result of Kostant in the case $\lambda = 0$):
\begin{lemma}
\label{lemma:w-dot-lambda}
For $\lambda \in \Lambda^{\!+\!}(\k)$ and $w \in \prescript{\k}{}{\mathcal{W}}$, we have \begin{equation}
\label{w-dot-la-equation}
w \cdot \lambda = \lambda - \sum_{\mathclap{\beta \in \Phi_w}} \big\langle\lambda+\rho,\:f(\beta)^\vee\big\rangle \beta.
\end{equation}
\end{lemma}
\begin{proof}
We prove this by induction on $\ell(w)$. In the base case $w = {\rm id}$, we have $\Phi_w = \varnothing$ and so the sum in~\eqref{w-dot-la-equation} is empty, as desired.
Now let $v \in \prescript{\k}{}{\mathcal{W}}$ and assume that~\eqref{w-dot-la-equation} holds for all elements in $\prescript{\k}{}{\mathcal{W}}$ with length at most $\ell(v)$. Let $w \in \prescript{\k}{}{\mathcal{W}}$ such that $\Phi_w = \Phi_v \cupdot \{\beta\}$; then $\ell(w) = \ell(v)+1$. From~\eqref{fdef} we have $w = vs_{f(\beta)}$, and so
\begin{align*}
w \cdot \lambda &= w(\lambda + \rho) - \rho\\
&= vs_{f(\beta)}(\lambda+\rho) - \rho\\
&= v\Big(\lambda+\rho - \langle\lambda+\rho,\:f(\beta)^\vee\rangle f(\beta)\Big)-\rho\\
&= v(\lambda+\rho) - \langle\lambda+\rho,\:f(\beta)^\vee\rangle v\big(f(\beta)\big) - \rho\\
&= v \cdot \lambda - \langle\lambda+\rho, \: f(\beta)^\vee\rangle\beta,
\end{align*}
where in the last equality we have used~\eqref{f-beta} to obtain $v(f(\beta)) = vv^{-1}\beta = \beta$. By the induction hypothesis, we have $v \cdot \lambda = \lambda - \sum_{\gamma \in \Phi_v} \langle\lambda+\rho,\:f(\gamma)^\vee\rangle\gamma$. Since $\Phi_w = \Phi_v \cupdot \{\beta\}$, we see that $w$ satisfies~\eqref{w-dot-la-equation}.
\end{proof}
\subsection{Diagrams of Hermitian type}
The proof of Lemma~\ref{lemma:w-dot-lambda} suggests a recursive method for computing $w \cdot \lambda$, supposing that we know the reduced expression $w=s_{\beta_\ell}\cdots s_{\beta_1}$ on the right-hand side of~\eqref{w-in-terms-of-v}. We can do even better, however: in this section, we interpret Lemma~\ref{lemma:w-dot-lambda} diagrammatically, which will enable us to write down $w \cdot \lambda$ directly in terms of $\Phi_w$.
Given $\lambda \in \Lambda^+$, we will adopt two useful methods for abbreviating the inner product $\langle\lambda+\rho,\:f(\beta)^\vee\rangle$; the first method is in terms of $\beta$, while the second method is in terms of the simple root $f(\beta)$. For the first method, we set the following shorthand for the roots in $\Phi(\mathfrak{p}^+)$:
\[
\beta_{ij} \coloneqq \begin{cases}
\varepsilon_i - \varepsilon_{p+j},& \mathfrak{g} \text{ is of Type I }(1\leq i \leq p, 1 \leq j \leq q),\\
\varepsilon_i + \varepsilon_j, & \mathfrak{g} \text{ is of Type II }(n \geq i \geq j \geq 1),\\
\varepsilon_i + \varepsilon_j, & \mathfrak{g} \text{ is of Type III }(n \geq i > j \geq 1).
\end{cases}
\]
Then using two indices, we write
\begin{equation}
\label{def:d_ij}
d_{ij} \coloneqq \big\langle\lambda+\rho, \: f(\beta_{ij})^\vee\big\rangle,
\end{equation}
which allows us to rewrite Lemma~\ref{lemma:w-dot-lambda} as
\begin{equation}
\label{w-dot-lambda-rewrite}
w \cdot \lambda = \lambda - \sum_{\mathclap{\beta_{ij} \in \Phi_w}} d_{ij} \beta_{ij}.
\end{equation}
For the second method, we use just a single index, writing
\begin{equation}
\label{def:d_i}
d_i \coloneqq \big\langle\lambda+\rho,\:\alpha_i^\vee\big\rangle.
\end{equation}
In order to pass between the two abbreviations, we have the following formulas (see~\cite{EHP}*{appendix}):
\begin{equation}
\label{f}
\renewcommand{\arraystretch}{1.5}
\begin{array}{ll}
\text{Type I:} & d_{ij} = d_{i+j-1}. \\
\text{Type II:} & d_{ij} = d_{n-i+j}.\\[2ex]
\text{Type III:} & d_{ij} =
\begin{cases}
d_n,& i-j=1 \text{ with }n-i \text{ even},\\
d_{n-i+j}, & \text{otherwise}.
\end{cases}
\end{array}
\end{equation}
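For example, in Type II with $n = 4$, these formulas give $d_{41} = d_1$, $d_{42} = d_2$, $d_{43} = d_3$, and $d_{44} = d_{33} = d_4$; these are precisely the entries that will appear in the Type II example below.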
For a fixed $\lambda \in \Lambda^+$, we now associate to each $w \in \prescript{\k}{}{\mathcal{W}}$ a diagram as follows (see Figures~\ref{fig:diagram_dij} and~\ref{fig:diagram_di}):
\begin{itemize}
\item Orient the Hasse diagram of $\Phi(\mathfrak{p}^+)$ so that the noncompact simple root is in the northwest corner, the highest root is in the southeast corner, and $\beta_{pq}$ (Type I) or $\beta_{n1}$ (Types II and III) is in the northeast corner.
\item Form a Young diagram $[\Phi_w]$ by placing a box at each root in $\Phi_w$. In Type I, we obtain the (true) Young diagram of some partition in $\Par(p \times q)$. In Types II and III, we obtain a shifted Young diagram in which the row lengths are strictly decreasing.
\item In the diagram $[\Phi_w]$, fill the box corresponding to $\beta_{ij}$ with the entry $d_{ij}$. Since the entries $d_{ij}$ depend on $\lambda$, we denote this filled diagram by the symbol $[\Phi_w]_\lambda$, and call it a \emph{diagram of Hermitian type}.
\end{itemize}
We will treat $[\Phi_w]$ as both an unfilled diagram and as the partition given by its row lengths; in particular, $[\Phi_w]_i$ is the number of boxes in the $i$th row. We observe from~\eqref{length-size-Delta-w} that \begin{equation}
\label{length-size-diagram}
\ell(w) = \big|[\Phi_w]\big|.
\end{equation}
In Figure~\ref{fig:diagram_dij}, we display $[\Phi(\mathfrak{p}^+)]_\lambda$ filled with the entries $d_{ij}$, while in Figure~\ref{fig:diagram_di} we use~\eqref{f} to display the equivalent filling with the $d_i$. (The $\varepsilon$'s along the margins are merely a visual aid.) It follows from~\eqref{f} that the entries $d_i$ are constant along each diagonal; hence in Figure~\ref{fig:diagram_di}, the diagonal dots represent the same entry that is in the boxes directly northwest and southeast. (The only exception to this is the main diagonal in Type III, where the entries alternate between $d_n$ and $d_{n-1}$.)
\begin{figure}[ht]
\centering
\input{Diagram_dij.tex}
\caption{The diagram $[\Phi(\mathfrak{p}^+)]_{\lambda}$ of Hermitian type, filled with the $d_{ij}$ as in~\eqref{def:d_ij}.}
\label{fig:diagram_dij}
\end{figure}
\begin{figure}[ht]
\centering
\input{Diagram_di.tex}
\caption{The diagram $[\Phi(\mathfrak{p}^+)]_{\lambda}$ of Hermitian type, filled with the $d_i$ as in~\eqref{def:d_i}.}
\label{fig:diagram_di}
\end{figure}
\subsection{Stacking diagrams}
We now introduce a ``stacking'' construction that converts $[\Phi_w]_\lambda$ into a new diagram $\stla{w}$ with twice the size. Ultimately we will treat the diagram $[\Phi_w]_\lambda$ as mere shorthand for the stacked diagram $\stla{w}$, which is the actual key to writing down the highest weights $w \cdot \lambda$. First we define an unfilled diagram $\st{w}$ via its row lengths:
\begin{equation*}
\renewcommand{\arraystretch}{1.5}
\begin{array}{lll}
\text{Type I:} & \st{w}_i = -[\Phi_w]'_{q+1-i}, & \text{ for } 1 \leq i \leq q, \\
& \st{w}_{q+i} = [\Phi_w]_i, & \text{ for } 1 \leq i \leq p. \\
\text{Type II:} & \st{w}_i = [\Phi_w]_i + [\Phi_w]'_i, & \text{ for } 1 \leq i \leq n. \\
\text{Type III:} & \st{w}'_i = [\Phi_w]_i + [\Phi_w]'_i, & \text{ for } 1 \leq i \leq n-1.
\end{array}
\end{equation*}
For the first $q$ rows in Type I, the negative row lengths mean that the rows extend to the left instead of to the right.
We illustrate the construction below. In Type I, the shaded diagram $[\Phi_w] = (5,3,2)$ is stacked corner-to-corner with its reflection about the 45-degree axis:
\input{StackA.tex}
\noindent In the case above, the row lengths from top to bottom are $(-1,-1,-2,-3,-3;5,3,2)$. Note that we have written the semicolon after the first $q$ coordinates (i.e., the row lengths of the reflected diagram), but ultimately, in light of Theorem~\ref{thm:w-dot-la}, we will work with the dual tuple $(-2,-3,-5;3,3,2,1,1)$, where as usual the semicolon is written after the first $p$ coordinates. In general for Type I, if $[\Phi_w] = \pi \in \Par(p \times q)$, then we have the following description of $\st{w}$ as a $(q+p)$-tuple of its row lengths:
\begin{equation}
\label{stack-Type I}
\text{Type I:} \quad \st{w} = (\pi'^*; \pi).
\end{equation}
In Type II (resp., Type III), the shaded diagram $[\Phi_w] = (5,3,2)$ is stacked horizontally (resp., vertically) with its conjugate:
\input{StackCD.tex}
\noindent If we modify the Frobenius notation for Types II and III to write $[\Phi_w] = (\alpha|0)$, where the ``0'' records the fact that all leg lengths in a shifted diagram are zero, then we can describe the stacking construction in terms of ASC partitions:
\begin{equation}
\label{stack-ASC}
\renewcommand{\arraystretch}{1.5}
\begin{array}{ll}
\text{Type II:} & \st{w} = (\alpha+1 \mid \alpha). \\
\text{Type III:} & \st{w} = (\alpha \mid \alpha+1).
\end{array}
\end{equation}
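For the shape $[\Phi_w] = (5,3,2)$ pictured above, we have $[\Phi_w] = (\alpha \mid 0)$ with $\alpha = (4,2,1)$, and in Type II the stacked row lengths are
\[
\st{w} = (5+1, \; 3+2, \; 2+3, \; 0+3, \; 0+1) = (6,5,5,3,1) = (5,3,2 \mid 4,2,1) = (\alpha+1 \mid \alpha),
\]
as predicted by~\eqref{stack-ASC}.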
In the same way that we constructed the unfilled diagram $\st{w}$ from $[\Phi_w]$, we can construct a filled diagram $\stla{w}$ from $[\Phi_w]_\lambda$. Specifically, the shape of $\stla{w}$ is given by $\st{w}$, while the filling is induced by the filling of $[\Phi_w]_\lambda$. In Type I, we fill the reflection of $[\Phi_w]_\lambda$ with the negatives of the original entries, which we denote by a bar (e.g., $\neg{3} = -3$). Given a filled diagram $D$, we write $\rows D$ (resp. $\cols D$) to denote the tuple whose $i$th coordinate is the sum of the entries in the $i$th row (resp., column) of $D$, counting from top to bottom (resp., left to right). We then observe the following:
\begin{equation}
\label{rowcol}
\rows\,\stla{w} =
\begin{cases}
\left(\cols\,[\Phi_w]^*_\lambda;\: \rows\,[\Phi_w]_\lambda\right), & \mathfrak{g} \text{ is of Type I},\\[1ex]
\rows\,[\Phi_w]_\lambda+\cols\,[\Phi_w]_\lambda, & \mathfrak{g} \text{ is of Type II},\\[1ex]
(\rows\,[\Phi_w]_\lambda,\:0) + (0,\:\cols\,[\Phi_w]_\lambda), & \mathfrak{g} \text{ is of Type III,}
\end{cases}
\end{equation}
where $(-, 0)$ and $(0, -)$ denote the $n$-tuples obtained by appending and prepending a $0$, respectively.
\begin{ex}[Type I]
Let $p=3$ and $q=4$. Let $w\in\prescript{\k}{}{\mathcal{W}}$ such that $\Phi_w = \{\beta_{31}, \beta_{32}, \beta_{33}, \beta_{34}, \beta_{21}, \beta_{22}, \beta_{11}\}$. (Although we will not need to work with $w$ directly, it is easy enough to see that $w = s_{11}s_{22}s_{21}s_{34}s_{33}s_{32}s_{31}$, where $s_{ij} \coloneqq s_{\beta_{ij}}$). By consulting Figure~\ref{fig:diagram_dij}, we see that $[\Phi_w]$ is the partition $(4,2,1)$. Now fix $\lambda = (3,3,3;0,0,0,0)$. Then $\lambda+\rho = (9, 8, 7; 3, 2, 1, 0)$. Hence $d_1 = d_2 = d_4 = d_5 = d_6 = 1$, while $d_3 = 4$. Therefore, following the diagram in Figure~\ref{fig:diagram_di}, we have
\[
\ytableausetup{boxsize=1em}
[\Phi_w]_\lambda = \ytableaushort[*(gray!30) \scriptstyle]{4111,14,1} \qquad \leadsto \qquad \stla{w} = \ytableaushort[\scriptstyle]{\none\none {\neg{1}},\none\none {\neg{1}}, \none {\neg{4}} {\neg{1}},{\neg{1}} {\neg{1}} {\neg{4}},\none\none\none{*(gray!30)4}{*(gray!30)1}{*(gray!30)1}{*(gray!30)1},\none\none\none{*(gray!30)1}{*(gray!30)4},\none\none\none{*(gray!30)1}}
\]
from which we see that $\rows\,\stla{w} = (-1,-1,-5,-6;7,5,1)$.
\end{ex}
\begin{ex}[Type II]
Let $n=4$. Let $w \in \prescript{\k}{}{\mathcal{W}}$ such that $\Phi_w = \{\beta_{44}, \beta_{43}, \beta_{42}, \beta_{41}, \beta_{33}\}$.
By~\eqref{w-in-terms-of-v} we have $w = s_{33}s_{41}s_{42}s_{43}s_{44}$. Then $[\Phi_w] = (\alpha|0)$ where $\alpha = (3,0)$. Now fix $\lambda = (9,5,3,3)$. Then $\lambda+\rho = (13, 8, 5, 4)$. Hence $d_1 = 5$, $d_2 =3$, $d_3 = 1$, and $d_4 = 4$. Therefore we have
\[
\ytableausetup{smalltableaux}
[\Phi_w]_\lambda = \ytableaushort[*(gray!30)]{4135,\none4}
\qquad \leadsto \qquad \stla{w}= \ytableaushort{4{*(gray!30)4}{*(gray!30)1}{*(gray!30)3}{*(gray!30)5},14{*(gray!30)4},3,5}
\]
from which we see that $\rows\,\stla{w} = (17, 9, 3, 5)$. Note also that $\st{w} = (4,1\mid 3,0) = (\alpha+1 \mid \alpha)$.
\end{ex}
\begin{ex}[Type III]
Let $n=4$. Let $w \in \prescript{\k}{}{\mathcal{W}}$ such that $\Phi_w = \{\beta_{43}, \beta_{42}, \beta_{41}, \beta_{32}, \beta_{31}\}$.
By~\eqref{w-in-terms-of-v} we have $w = s_{31}s_{32}s_{41}s_{42}s_{43}$. Then $[\Phi_w] = (\alpha|0)$ where $\alpha = (2,1)$. Now fix $\lambda = \left(\frac{3}{2}, \frac{3}{2}, \frac{3}{2}, -\frac{3}{2}\right)$. Then $\lambda+\rho = (\frac{9}{2}, \frac{7}{2}, \frac{5}{2}, -\frac{3}{2})$. Hence $d_1 = d_2 = d_4 = 1$, while $d_3 =4$. Therefore we have
\[
\ytableausetup{smalltableaux}
[\Phi_w]_\lambda = \ytableaushort[*(gray!30)]{111,\none41}
\qquad \leadsto \qquad \stla{w}= \ytableaushort{{*(gray!30)1}{*(gray!30)1}{*(gray!30)1},1{*(gray!30)4}{*(gray!30)1},14,11}
\]
from which we see that $\rows\,\stla{w} = (3, 6, 5, 2)$. Note also that $\st{w} = (2,1\mid 3,2) = (\alpha \mid \alpha+1)$.
\end{ex}
\subsection{BGG resolutions from the diagrams}
We arrive at the main result of this section:
\begin{theorem}
\label{thm:w-dot-la}
Suppose $(\mathfrak{g},\k)$ is of Type I, II, or III. Let $\lambda \in \Lambda^{\!+\!}(\k)$ and $w \in \prescript{\k}{}{\mathcal{W}}$. Then we have
\[
w \cdot \lambda = \lambda + \big(\rows\,\stla{w}\big)^*.
\]
\end{theorem}
\begin{proof}
The arguments below are clear from the diagrams in Figure~\ref{fig:diagram_dij}:
\bigskip
\textbf{Type I:} We have
\[
\Sigma \coloneqq \sum_{\mathclap{\beta_{ij}\in \Phi_w}} d_{ij}\beta_{ij} = \sum_{\mathclap{\substack{(i,j):\\ \beta_{ij}\in \Phi_w}}} d_{ij}(\varepsilon_i - \varepsilon_{p+j}) = \sum_{i=1}^p \overbrace{\left(\sum_j d_{ij}\right)}^{\mathclap{\text{$i$th row sum from the bottom}}}\varepsilon_i - \sum_{j=1}^q \underbrace{\left(\sum_i d_{ij}\right)}_{\mathclap{\text{$j$th column sum}}}\varepsilon_{p+j}
\]
where the row and column sums refer to $[\Phi_w]_\lambda$. Therefore the first $p$ coordinates of $\Sigma$ are those of $\rows\,[\Phi_w]_\lambda$ in reverse order, while the final $q$ coordinates of $\Sigma$ are those of $-\cols\,[\Phi_w]_\lambda$. Hence by~\eqref{rowcol}, the coordinates of $\Sigma$ are the coordinates of $\rows\,\stla{w}$ in reverse order. The result follows from~\eqref{w-dot-lambda-rewrite}.
\bigskip
\textbf{Type II:} We have
\[
\Sigma \coloneqq \sum_{\mathclap{\beta_{ij} \in \Phi_w}} d_{ij}\beta_{ij} = \sum_{\mathclap{\substack{(i,j):\\ \beta_{ij}\in \Phi_w}}} d_{ij}(\varepsilon_i + \varepsilon_j) = \sum_{i=1}^n \overbrace{\left(\sum_j d_{ij}\right)}^{\mathclap{\text{$i$th row sum from the bottom}}} \varepsilon_i + \sum_{j=1}^n \underbrace{\left(\sum_i d_{ij}\right)}_{\mathclap{\text{$j$th column sum from the right}}} \varepsilon_j
\]
where the row and column sums refer to $[\Phi_w]_\lambda$. Therefore the coordinates of $\Sigma$ are those of $\rows\,[\Phi_w]_\lambda+\cols\,[\Phi_w]_\lambda$ in reverse order. By~\eqref{rowcol}, these are also the coordinates of $\rows\,\stla{w}$ in reverse order. The result follows from~\eqref{w-dot-lambda-rewrite}.
\textbf{Type III:} We have
\[
\Sigma \coloneqq \sum_{\mathclap{\beta_{ij} \in \Phi_w}} d_{ij}\beta_{ij} = \sum_{\mathclap{\substack{(i,j):\\ \beta_{ij}\in \Phi_w}}} d_{ij}(\varepsilon_i + \varepsilon_j) = \sum_{i=2}^{n} \overbrace{\left(\sum_j d_{ij}\right)}^{\mathclap{\text{$(i-1)$th row sum from the bottom}}} \varepsilon_i + \sum_{j=1}^{n-1} \underbrace{\left(\sum_i d_{ij}\right)}_{\mathclap{\text{$j$th column sum from the right}}} \varepsilon_j
\]
where the row and column sums refer to $[\Phi_w]_\lambda$. Therefore the coordinates of $\Sigma$ are those of
\[
(\rows\,[\Phi_w]_\lambda,\:0) + (0, \: \cols\,[\Phi_w]_\lambda)
\]
in reverse order. By~\eqref{rowcol}, these are also the coordinates of $\rows\,\stla{w}$ in reverse order. The result follows from~\eqref{w-dot-lambda-rewrite}.
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:BGG}]
We must show that the weights $w \cdot 0$ are precisely those appearing in the resolutions in Proposition~\ref{prop:BGG}. When $\lambda = 0$, we have $d_{i}=\langle\rho,\:\alpha_i^\vee\rangle=1$ for all $i$. Hence in this case, each box in $[\Phi_w]_\lambda$ is filled with a ``1,'' and therefore $\rows\,\st{w}_0 = \st{w}$.
In Type I, we have a bijection $\prescript{\k}{}{\mathcal{W}} \longrightarrow \Par(p\times q)$ given by $w \longmapsto [\Phi_w]$. Setting $\pi = [\Phi_w] \in \Par(p \times q)$, we combine Theorem~\ref{thm:w-dot-la} with~\eqref{stack-Type I} to obtain
\[
w \cdot 0 = \st{w}^* = (\pi'^*;\pi)^* = (\pi^*;\pi').
\]
In Type II, we have a bijection between $\prescript{\k}{}{\mathcal{W}}$ and strictly decreasing partitions $\alpha$ such that $n > \alpha_1 > \cdots > \alpha_r$ with $r \leq n$; the bijection is given by $w \longmapsto [\Phi_w] = (\alpha|0)$. Setting $\pi = \st{w}$, we combine Theorem~\ref{thm:w-dot-la} with~\eqref{stack-ASC} to obtain
\[
w \cdot 0 = \st{w}^* = \pi^* =
(\alpha + 1 \mid \alpha)^*.
\]
The argument is the same in Type III (for $\mathsf{D}_{n+1}$) if we set $\pi' = \st{w}$.
It remains to show that each individual term $N_i$ in the resolutions of Proposition~\ref{prop:BGG} contains the correct partitions $\pi$, but in all three types this is immediate from~\eqref{length-size-diagram}. The values of $s = |\Phi(\mathfrak{p}^+)|$ are easily calculated in each type.
\end{proof}
\begin{ex}[Type I]
Let $p=q=2$, so that $(\mathfrak{g},\k) = (\mathsf{A}_3, \mathsf{A}_1 \times \mathsf{A}_1)$. Let $\lambda = (6,3;3,1)$. Then $\lambda+\rho = (9,5;4,1)$, and we have $d_1=4$, $d_2 =1$, and $d_3 = 3$. Below we depict the poset $\prescript{\k}{}{\mathcal{W}}$, where each element $w$ is labeled with the diagram $[\Phi_w]_\lambda$:
\input{Example_Res_A.tex}
\noindent Each diagram $[\Phi_w]_\lambda$ is shorthand for the stacked diagram $\stla{w}$. For example, the arrow along the lower-left edge of the diamond represents the following map:
\begin{equation}
\label{map example I}
\ytableausetup{boxsize=1em}
\ytableaushort[\scriptstyle]{\none{\neg{3}},{\neg{4}}{\neg{1}},\none\none{*(gray!30)1}{*(gray!30)3},\none\none{*(gray!30)4}} \longrightarrow \ytableaushort[\scriptstyle]{{\neg{4}}{\neg{1}},\none\none{*(gray!30)1},\none\none{*(gray!30)4}}
\end{equation}
In order to interpret this as a map between parabolic Verma modules, we apply Theorem~\ref{thm:w-dot-la}, which directs us to read off the tuple of row sums for each stacked diagram, and then add its dual to $\lambda$:
\begin{alignat*}{3}
\lambda + (-3,-5;4,4)^* &=(6,3;3,1) + (-4,-4;5,3) &&= (2,-1;8,4),\\
\lambda + (0,-5;1,4)^* &= (6,3;3,1) + (-4, -1;5,0) &&= (2,2;8,1).
\end{alignat*}
Therefore the map in~\eqref{map example I} is the map $N_{(2,-1;8,4)} \longrightarrow N_{(2,2;8,1)}$ in the BGG resolution of $L_{\lambda}$.
\end{ex}
\begin{ex}[Type II]
Let $(\mathfrak{g},\k) = (\mathsf{C}_3, \mathsf{A}_2)$, and let $\lambda = (3,1,1)$. Then $\lambda+\rho = (6,3,2)$, which gives us $d_1 = 3$, $d_2 = 1,$ and $ d_3 = 2$. Below is the poset $\prescript{\k}{}{\mathcal{W}}$, where each element $w$ is labeled with the diagram $[\Phi_w]_\lambda$:
\input{Example_Res_C.tex}
\noindent As in the previous example, each diagram $[\Phi_w]_\lambda$ is shorthand for the diagram $\stla{w}$. For example, the second arrow from the left represents the following map:
\begin{equation}
\label{map example II}
\ytableaushort{2{*(gray!30)2}{*(gray!30)1}{*(gray!30)3},12{*(gray!30)2}{*(gray!30)1},31} \longrightarrow \ytableaushort{2{*(gray!30)2}{*(gray!30)1}{*(gray!30)3},12{*(gray!30)2},3}
\end{equation}
Applying Theorem~\ref{thm:w-dot-la}, we compute the row sums and add the dual to $\lambda$:
\begin{alignat*}{3}
\lambda + (8,6,4)^* &= (3,1,1) - (4,6,8) &&= (-1, -5, -7),\\
\lambda + (8, 5, 3)^* &= (3,1,1) - (3,5,8) &&= (0, -4, -7).
\end{alignat*}
Therefore the map in~\eqref{map example II} is the map $N_{(7,5,1)^*} \longrightarrow N_{(7,4,0)^*}$ in the BGG resolution of $L_{\lambda}$.
\end{ex}
\begin{rem}
Suppose we fix an origin on $\stla{w}$, at the common corner (Type I) or the northwest corner (Types II and III). If we adopt the convention (as in Type I) that boxes in the right (resp., left) half-plane contain positive (resp., negative) entries, then taking the dual of $\rows\stla{w}$ is equivalent to first rotating $\stla{w}$ by 180 degrees about the origin, and then taking the row sums. For instance, rotating the left-hand stacked diagram of~\eqref{map example I} in this way turns its row sums $(-3,-5;4,4)$ into the dual $(-4,-4;5,3)$ computed above.
\end{rem}
\section{Congruence of blocks and conjugate partitions}
\label{sec:Congruence}
\subsection{\!\!\!}
This section is the heart of the paper. Our goal is to classify the occurrences of the phenomena we observed in Example~\ref{ex:D4 and C3}. In particular, we wish to find (pairs of) pairs $(\mathfrak{g},\k)$ and $(\mathfrak{g}', \k')$ of Types I--III, along with $\lambda \in \Lambda^{\!+\!}(\k)$ and $\lambda' \in \Lambda^{\!+\!}(\k')$, such that the highest weights $\{\mu = w\cdot \lambda \mid w \in \prescript{\k}{}{\mathcal{W}}\}$ and $\{\mu' = w' \cdot \lambda' \mid w' \in \prescript{\k'\!}{}{\mathcal{W}'}\}$ form two isomorphic posets; moreover, this poset isomorphism should preserve BGG resolutions, and also preserve the dimension of each $\k$-module $F_\mu$. Finally, this isomorphism should send Young diagrams of (duals of) poset elements to their conjugate diagrams. Our main result, namely Theorem~\ref{thm:Cong and Conj} and its summary in Table~\ref{table:WC}, consists of six infinite families that enjoy the properties observed in the pair from Example~\ref{ex:D4 and C3}.
A word of warning: in this section, we use the prime symbol to denote the image of a weight under a certain reduction operation. Hence, \emph{a priori} the symbol $\mu'$ now has nothing to do with the conjugate partition of $\mu$. Nonetheless, it will turn out that in the special settings of Theorem~\ref{thm:Cong and Conj}, certain poset elements $\pi$ and $\pi'$ truly are conjugate partitions, as suggested by the notation.
One major advantage of our methods in this paper is that they eliminate the need for explicit calculations in the Weyl group; as a result, until now we have not even needed to write down the actions of the reflections $s_{ij} \coloneqq s_{\beta_{ij}}$. In this section, however, it will be useful to record the following. In Type I, $s_{ij}$ transposes the $i$th and $(p+j)$th coordinates. In Types II and III, $s_{ij}$ transposes and negates the $i$th and $j$th coordinates. In Type II, we also have $s_{ii}$, which negates the $i$th coordinate.
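To make these actions concrete (in coordinates chosen purely for illustration), take $p=q=2$ in Type I and $N=3$ in Type II:
\[
s_{12}(a_1,a_2;b_1,b_2) = (b_2,a_2;b_1,a_1), \qquad
s_{12}(a_1,a_2,a_3) = (-a_2,-a_1,a_3), \qquad
s_{11}(a_1,a_2,a_3) = (-a_1,a_2,a_3).
\]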
\subsection{Congruent blocks in parabolic category $\O$}
Recall the parabolic Verma module $N_\lambda$ from~\eqref{Verma}. We define $L_\lambda$ to be the unique simple quotient of $N_\lambda$. We now introduce a category in which these modules $N_\lambda$ and $L_\lambda$ are the basic objects. Let $(\mathfrak{g},\k)$ be a Hermitian symmetric pair. We denote by $\O(\mathfrak{g},\k)$ the full subcategory of $U(\mathfrak{g})$-mod whose objects belong to the following set:
\[
\left\{ M \: \middle| \:\parbox{6cm}{$M$ is a finitely generated $U(\mathfrak{g})$-module,\\
$M$ is a locally finite $U(\q)$-module,\\
$M$ is a semisimple $U(\k)$-module} \right\}.
\]
The simple modules in $\O(\mathfrak{g},\k)$ are parametrized by $\Lambda^{\!+\!}(\k)$, via the correspondence $L_\lambda \longleftrightarrow \lambda \in \Lambda^{\!+\!}(\k)$. For $\lambda \in \Lambda^{\!+\!}(\k)$, let $\chi_\lambda$ be the infinitesimal character of the ordinary Verma module $M_\lambda$, and therefore of its quotients $N_\lambda$ and $L_\lambda$. We let $\O(\mathfrak{g},\k)_\lambda$ denote the full subcategory of $\O(\mathfrak{g},\k)$ whose objects are the modules whose composition factors have the infinitesimal character $\chi_\lambda$. Given $\lambda \in \Lambda^{\!+\!}(\k)$, the category $\O(\mathfrak{g},\k)_\lambda$ contains finitely many simple modules, namely the modules $L_\mu$ such that $\mu = w \cdot \lambda \in \Lambda^{\!+\!}(\k)$ for some $w \in \mathcal{W}$.
We define a \emph{block} in $\O(\mathfrak{g},\k)$ in the usual way: two indecomposable modules lie in the same block precisely when they are linked by a chain of nonvanishing $\operatorname{Ext}$ groups. We write $\mathcal{B}_\lambda$ to denote the block containing $L_\lambda$. The block $\mathcal{B}_\lambda$ is a subcategory of $\O(\mathfrak{g},\k)_\lambda$, and furthermore, Enright and Shelton~\cites{ES87, ES89} showed that each $\O(\mathfrak{g},\k)_\lambda$ decomposes into at most two blocks. For a block $\mathcal{B}$, we define the \emph{poset of $\mathcal{B}$} to be
\[
\Lambda(\mathcal{B}) \coloneqq \{ \mu \in \Lambda^{\!+\!}(\k) \mid L_\mu \text{ is an object in $\mathcal{B}$}\},
\]
regarded as a poset via the usual ordering on $\mathfrak{h}^*$. From now on, we reserve the symbol $\lambda$ to denote a \textit{quasidominant weight}: that is to say, $\lambda$ is the unique maximal element of $\Lambda(\mathcal{B}_\lambda)$. We write $\mu$ when referring to an arbitrary element of $\Lambda(\mathcal{B}_\lambda)$. We say that $\lambda \in \Lambda^{\!+\!}(\k)$ is \emph{regular} if $\langle \lambda+\rho,\:\alpha^\vee\rangle \neq 0$ for all $\alpha \in \Phi^+$. If $\lambda \in \Lambda^{\!+\!}(\k)$ is regular, then $\O(\mathfrak{g},\k)_\lambda = \mathcal{B}_\lambda$ is itself a block, called a \emph{regular block}; in this case, $\Lambda(\mathcal{B}_\lambda) \cong \prescript{\k}{}{\mathcal{W}}$ as posets.
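For instance, the weight $\lambda = (3,1,1)$ from the Type II example above is regular: the coordinates of $\lambda+\rho = (6,3,2)$ are pairwise distinct, nonzero, and no two of them sum to zero, so $\langle \lambda+\rho,\:\alpha^\vee\rangle \neq 0$ for every root $\alpha$ of $\mathsf{C}_3$.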
In order to capture three of the four important aspects of the situation in Example~\ref{ex:D4 and C3}, we follow~\cite{Armour}*{Def.~3.4.1} in defining the following notion of congruent blocks:
\begin{dfn}
\label{def:congruence}
Let $(\mathfrak{g},\k)$ and $(\mathfrak{g}',\k')$ be Hermitian symmetric pairs. Let $\mathcal{B}$ be a block in $\O \coloneqq \O(\mathfrak{g},\k)$ and $\mathcal{B}'$ a block in $\O' \coloneqq \O(\mathfrak{g}',\k')$. We say that $\mathcal{B}$ is \emph{congruent} to $\mathcal{B}'$ if
\begin{enumerate}
\item we have an isomorphism of posets $\Lambda(\mathcal{B}) \cong \Lambda(\mathcal{B}')$, where we write $\mu \mapsto \mu'$;
\item for all $\mu,\nu \in \Lambda(\mathcal{B})$ where $\mu < \nu$, and for all $i \geq 0$, we have $\operatorname{Ext}^i_{\O}(N_\mu, L_\nu) \cong \operatorname{Ext}^i_{\O'}(N_{\mu'},L_{\nu'})$;
\item for all $\mu \in \Lambda(\mathcal{B})$, we have $\dim F_\mu = \dim F_{\mu'}$.
\end{enumerate}
\end{dfn}
If $\lambda$ is the maximal element in the poset $\Lambda(\mathcal{B})$, then properties (1) and (2) above guarantee that $L_\lambda$ is a Kostant module (in the language of~\cite{Enright-Hunziker-RepTh}). Therefore $L_\lambda$ has a BGG resolution \cite{Enright-Hunziker}*{Thm.~2.8}, even though (as in the singular case described in the following subsection) $L_\lambda$ may not be a finite-dimensional module.
Revisiting Example~\ref{ex:D4 and C3} in light of Definition~\ref{def:congruence}, we can now say that the regular block $\O(\mathsf{D}_4,\mathsf{A}_3)_0$ is congruent to the regular block $\O(\mathsf{C}_3,\mathsf{A}_2)_0$.
\subsection{Enright--Shelton reduction}
\label{sub:ES}
Recall that whenever $\lambda \in \Lambda^{\!+\!}(\k)$ is regular, we have $\O(\mathfrak{g},\k)_\lambda = \mathcal{B}_\lambda \cong \prescript{\k}{}{\mathcal{W}}$ as posets. If, on the other hand, $\langle \lambda + \rho, \: \alpha^\vee\rangle = 0$ for some $\alpha \in \Phi$, then we say that $\lambda$ is \emph{singular}, and $\alpha$ is a \emph{singular root} with respect to $\lambda$. We also say that $\mathcal{B}_\lambda$ is a \emph{singular block}. (For all six families in Table~\ref{table:WC}, note that $\lambda$ is singular.) Loosely speaking, the process of \emph{Enright--Shelton reduction} converts a singular weight (plus $\rho$) in $\Lambda^{\!+\!}(\k)$ into a regular weight (plus $\rho'$) in $\Lambda^{\!+\!}(\k')$, where $(\mathfrak{g}',\k')$ is a certain Hermitian symmetric pair whose rank is less than that of $(\mathfrak{g}, \k)$.
In the following discussion, we will explain the details of this reduction for the specific instances of $\mathfrak{g}$ and $\mathfrak{g}'$ listed in Table~\ref{table:WC}. We will write a superscript $\flat$ to denote the result of Enright--Shelton reduction:
\begin{equation*}
\mu + \rho \xrightarrow{\quad{\rm ES}\quad} (\mu + \rho)^\flat = \mu' + \rho',
\end{equation*}
which induces the map
\begin{equation}
\label{ES reduction}
\mu \longmapsto \mu' = (\mu+\rho)^\flat - \rho'.
\end{equation}
The reduction is invertible, and we will write a superscript $\sharp$ to denote its inverse:
\[
\mu' + \rho' \xrightarrow{\quad{\rm ES}^{-1}\quad} (\mu'+\rho')^\sharp = \mu + \rho,
\]
which induces the inverse map of~\eqref{ES reduction}, namely
\begin{equation}
\label{ES reduction inverse}
\mu' \longmapsto \mu = (\mu' + \rho')^\sharp - \rho.
\end{equation}
\begin{theorem}[\cites{ES87,ES89}]
\label{thm:ES}
Let $(\mathfrak{g}, \k)$ be a Hermitian symmetric pair. Let $\lambda \in \Lambda^{\!+\!}(\k)$ with $L_\lambda \neq N_\lambda$, and let $\lambda' \in \Lambda^{\!+\!}(\k')$ be the regular weight obtained from $\lambda$ by Enright--Shelton reduction. Then the map~\eqref{ES reduction} induced by Enright--Shelton reduction restricts to an isomorphism of posets
\[
\Lambda(\mathcal{B}_\lambda) \longrightarrow \Lambda(\mathcal{B}_{\lambda'})
\]
satisfying conditions (1) and (2) of Definition~\ref{def:congruence}.
\end{theorem}
\begin{table}[t]
\centering
\input{Table_main.tex}
\caption{Congruence of singular and regular blocks $\mathcal{B}_\lambda$ and $\mathcal{B}_{\lambda'}$. (See Theorem~\ref{thm:Cong and Conj}.) On the right side of the table, we put $p=P-m$, $q = Q-m$, and $n = N-m$. To avoid trivial cases, we let $1 \leq k \leq \text{rank of $(\mathfrak{g},\k)$}$, although Type II remains of interest for $k=0$. We write $\omega^*_k$ merely to denote the $N$-tuple whose last $k$ coordinates are $-1$, with $0$ elsewhere. We write $\omega'_i$ for the fundamental weights of $\mathfrak{g}'$.}
\label{table:WC}
\end{table}
We now detail the process of Enright--Shelton reduction for each of the six families in Table~\ref{table:WC}. Note that we label each family with respect to $\mathfrak{g}'$ rather than $\mathfrak{g}$, reflecting our philosophy that the regular blocks are easier to understand than the singular blocks. We use capital letters $P$, $Q$, and $N$ to describe the rank of $\mathfrak{g}$; then defining $m$ to be the number of coordinates deleted via reduction (details below), we describe the rank of $\mathfrak{g}'$ using the lower-case letters $p\coloneqq P-m$, $q\coloneqq Q-m$, and $n \coloneqq N-m$. In each family, the parameter $k$ ranges over all positive integers strictly less than the rank of $(\mathfrak{g},\k)$, which is $\min\{P,Q\}$, $N$, or $\lfloor N/2 \rfloor$, depending on whether $\mathfrak{g}=\mathsf{A}_{P+Q-1}$, $\mathsf{C}_N$, or $\mathsf{D}_N$, respectively.
\textbf{Type I} ($\mathfrak{g} = \mathsf{A}_{P+Q-1}$, $\mathfrak{g}' = \mathsf{A}_{p+q-1}$; $\lambda = -k\omega_P$, $\lambda' = k\omega'_p$). Suppose $\mu+\rho = (a_1, \ldots, a_P; b_1, \ldots, b_Q)$. Then $\mu +\rho$ contains a singularity for each instance of an equality $a_i = b_j$, in which case a singular root is $\beta_{ij} = \varepsilon_i - \varepsilon_{P+j}$. Hence to perform the reduction on $\mu + \rho$, we delete all coordinate pairs $a_i, b_j$ such that $a_i = b_j$. Then $m$ is the number of such pairs.
For example, let $P=4$ and $Q=3$, with $k=2$. Then $\lambda = (-2,-2,-2,-2;0,0,0)$, and $\lambda + \rho = (4,3,2,1;2,1,0)$. Therefore by~\eqref{ES reduction}, we have
\[
\lambda' = (\lambda+\rho)^\flat - \rho' = (4,3,\mathbf{2},\mathbf{1};\mathbf{2},\mathbf{1},0)^\flat - \rho' = (4,3;0) - (2,1;0) = (2,2;0) = 2\omega'_2.
\]
Note that the reduction deletes $m=2$ pairs of coordinates, so that $p=2$, $q=1$, and indeed $\lambda' = k\omega'_p$. As an example of an arbitrary element of $\Lambda(\mathcal{B}_\lambda)$, we choose $\mu = (-2,-3,-3,-3;1,1,1)$. Then $\mu + \rho = (4,2,1,0;3,2,1)$, and we have
\[
\mu' = (4,\mathbf{2},\mathbf{1},0;3,\mathbf{2},\mathbf{1})^\flat - \rho' = (4,0;3) - (2,1;0) = (2, -1; 3).
\]
\textbf{Type II} ($\mathfrak{g} = \mathsf{D}_N$, $\mathfrak{g}' = \mathsf{C}_n$; $\lambda = -2k\omega_N$, $\lambda' = k\omega'_n$). This case is exceptional among the six families in Table~\ref{table:WC}, because the $\mathfrak{g}'$ obtained by Enright--Shelton reduction is actually $\mathsf{D}_{n+1}$, not $\mathsf{C}_n$. In order to obtain the result in Table~\ref{table:WC}, we compose Enright--Shelton reduction with a further reduction from $\mathsf{D}_{n+1}$ to $\mathsf{C}_n$, by deleting the 0 from $(\mu + \rho)^\flat$. This second reduction also satisfies conditions (1) and (2) in Definition~\ref{def:congruence}. The reason for our modification here is this: for the other five families, it turns out that Enright--Shelton reduction satisfies condition (3) as well, producing congruent blocks, but in Type II, the extra reduction is necessary to fulfill condition (3).
Suppose $\mu + \rho = (a_1, \ldots, a_N)$, where the $a_i$ are integers. Then $\mu+\rho$ contains a singularity for each instance of an equality $a_i = -a_j < 0$, in which case a singular root is $\beta_{ij} = \varepsilon_i + \varepsilon_j$. To perform the reduction on $\mu + \rho$, we delete all coordinate pairs $a_i, a_j$ such that $a_i = -a_j$. Then as mentioned above, we perform a second reduction by deleting the coordinate 0.
For example, let $N=8$, with $k=2$. Then $\lambda = (-2,\ldots,-2)$, and $\lambda+\rho=(5,4,\ldots,-1,-2)$. Therefore, we have
\[
\lambda+\rho \xrightarrow{\quad{\rm ES}\quad} (\lambda+\rho)^\flat = (5,4,3,\mathbf{2},\mathbf{1},0,\mathbf{-1},\mathbf{-2})^\flat = (5,4,3,0) \xrightarrow{\text{delete $0$}} (5,4,3)
\]
as a weight in Type $\mathsf{C}_3$. Then, treating $(5,4,3)$ as the term $(\lambda+\rho)^\flat$ in~\eqref{ES reduction}, we have
\[
\lambda' = (\lambda+\rho)^\flat - \rho' = (5,4,3)-(3,2,1) = (2,2,2) = 2\omega'_3.
\]
Ultimately we have deleted $m=5$ coordinates. As an example of an arbitrary element of $\Lambda(\mathcal{B}_\lambda)$, we take $\mu = (-2,-4,-4,-4,-4,-4,-4,-4)$. Then $\mu + \rho = (5,2,1,0,-1,-2,-3,-4)$, and we have
\[
(5,\mathbf{2}, \mathbf{1},0,\mathbf{-1},\mathbf{-2},-3,-4)^\flat = (5,0,-3,-4) \xrightarrow{\text{delete $0$}} (5,-3,-4),
\]
and so
\[
\mu'= (5,-3,-4) - (3,2,1) = (2,-5,-5).
\]
(We will revisit this setting in Example~\ref{ex:D8 and C3}.)
\textbf{Type IIIa} ($\mathfrak{g} = \mathsf{C}_N$, $\mathfrak{g}' = \mathsf{D}_n$; $\lambda = -\frac{k}{2}\omega_N$, $\lambda'=k\omega'_n$). Suppose $\mu + \rho = (a_1, \ldots, a_N)$ with the $a_i$ either all integers or all half-integers. Then $\mu + \rho$ contains a singularity for each instance of an equality $a_i = -a_j < 0$, in which case a singular root is $\beta_{ij}$; moreover, if $k$ is even, then $\mu + \rho$ contains another singularity at the coordinate $a_i=0$, which means that we have a (long) singular root $\beta_{ii} = 2\varepsilon_i$. To perform the reduction on $\mu + \rho$, we delete all pairs of opposite coordinates, along with 0 (if applicable).
For example, let $N=7$ and $k=4$. Then $\lambda = (-2,\ldots,-2)$, and $\lambda + \rho = (5,4,3,2,1,0,-1)$. Then we reduce by deleting $m=3$ coordinates, and we have
\[
\lambda' = (\lambda+\rho)^\flat-\rho' = (5,4,3,2,\mathbf{1},\mathbf{0},\mathbf{-1})^\flat - \rho' = (5,4,3,2)-(3,2,1,0) = (2,2,2,2) = 4\omega'_4.
\]
On the other hand, if $k=5$, then $\lambda = \left(-\frac{5}{2}, \ldots, -\frac{5}{2}\right)$ and $\lambda+\rho = \left(\frac{9}{2}, \frac{7}{2}, \ldots, -\frac{1}{2}, -\frac{3}{2}\right)$. We now delete $m=4$ coordinates, and obtain
\[
\lambda' = \left(\frac{9}{2}, \frac{7}{2}, \frac{5}{2}, \mathbf{\frac{3}{2}}, \mathbf{\frac{1}{2}}, \mathbf{-\frac{1}{2}}, \mathbf{-\frac{3}{2}}\right)^{\!\flat} - \rho' = \left(\frac{9}{2}, \frac{7}{2}, \frac{5}{2}\right) - (2,1,0) = \left(\frac{5}{2}, \frac{5}{2}, \frac{5}{2}\right) = 5 \omega'_3.
\]
\textbf{Type IIIb} ($\mathfrak{g} = \mathsf{C}_N$, $\mathfrak{g}' = \mathsf{D}_n$; $\lambda = -\frac{k}{2}\omega_N + \omega^*_k$, $\lambda'=k\omega'_{n-1}$). The reduction procedure is the same as in Type IIIa.
As an example, let $N=7$ and $k=4$. Then $\lambda = (-2,-2,-2,-3,-3,-3,-3)$, and $\lambda + \rho = (5,4,3,1,0,-1,-2)$. We have
\[
\lambda' = (\lambda+\rho)^\flat-\rho' = (5,4,3,\mathbf{1},\mathbf{0},\mathbf{-1},-2)^\flat - \rho' = (5,4,3,-2)-(3,2,1,0) = (2,2,2,-2) = 4\omega'_3.
\]
When $k$ is odd, the half-integral case works out similarly.
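Following the same recipe in the half-integral case, let $N=7$ and $k=5$. Then $\lambda = \left(-\frac{5}{2},-\frac{5}{2},-\frac{7}{2},-\frac{7}{2},-\frac{7}{2},-\frac{7}{2},-\frac{7}{2}\right)$ and $\lambda+\rho = \left(\frac{9}{2},\frac{7}{2},\frac{3}{2},\frac{1}{2},-\frac{1}{2},-\frac{3}{2},-\frac{5}{2}\right)$; deleting $m=4$ coordinates gives
\[
\lambda' = \left(\frac{9}{2}, \frac{7}{2}, \mathbf{\frac{3}{2}}, \mathbf{\frac{1}{2}}, \mathbf{-\frac{1}{2}}, \mathbf{-\frac{3}{2}}, -\frac{5}{2}\right)^{\!\flat} - \rho' = \left(\frac{9}{2}, \frac{7}{2}, -\frac{5}{2}\right) - (2,1,0) = \left(\frac{5}{2}, \frac{5}{2}, -\frac{5}{2}\right) = 5\omega'_2,
\]
in agreement with $\lambda' = k\omega'_{n-1}$ for $n=3$.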
\textbf{Type IIIc} ($\mathfrak{g} = \mathsf{D}_N$, $\mathfrak{g}' = \mathsf{D}_n$; $\lambda = -(2k-1)\omega_N$, $\lambda'=(2k+1)\omega'_n$). The reduction procedure is the same as in Type II, and there is no need for the extra reduction step since in this case we always obtain half-integer coordinates. As an example, let $N=4$ and $k=1$. Then $\lambda = \left(-\frac{1}{2}, \ldots, -\frac{1}{2}\right)$ and $\lambda + \rho = \left(\frac{5}{2}, \frac{3}{2}, \frac{1}{2}, -\frac{1}{2}\right)$. We reduce by deleting $m=2$ coordinates, and we have
\[
\lambda' = \left(\frac{5}{2}, \frac{3}{2}, \mathbf{\frac{1}{2}}, \mathbf{-\frac{1}{2}}\right)^{\!\flat}-\rho' =
\left(\frac{5}{2}, \frac{3}{2}\right) - (1,0) = \left(\frac{3}{2}, \frac{3}{2}\right) = 3\omega'_2.
\]
\textbf{Type IIId} ($\mathfrak{g} = \mathsf{D}_N$, $\mathfrak{g}' = \mathsf{D}_n$; $\lambda = -(2k-1)\omega_N + \omega^*_{2k+1}$, $\lambda'=(2k+1)\omega'_{n-1}$). The reduction procedure is the same as in Type IIIc. As an example, let $N=6$ and $k=2$. Then $\lambda = \left(-\frac{3}{2}, -\frac{5}{2},-\frac{5}{2},-\frac{5}{2},-\frac{5}{2},-\frac{5}{2}\right)$ and $\lambda + \rho = \left(\frac{7}{2}, \frac{3}{2}, \frac{1}{2}, -\frac{1}{2}, -\frac{3}{2}, -\frac{5}{2}\right)$. We reduce by deleting $m=4$ coordinates, and we have
\[
\lambda' = \left(\frac{7}{2}, \mathbf{\frac{3}{2}}, \mathbf{\frac{1}{2}}, \mathbf{-\frac{1}{2}}, \mathbf{-\frac{3}{2}}, -\frac{5}{2}\right)^{\!\flat}-\rho' =
\left(\frac{7}{2}, -\frac{5}{2}\right) - (1,0) = \left(\frac{5}{2}, -\frac{5}{2}\right) = 5\omega'_1.
\]
\subsection{Twisted posets of the regular blocks}
In order to see how conjugate partitions are related to congruent blocks, we introduce a ``twist'' to the posets $\Lambda(\mathcal{B}_\lambda)$ and $\Lambda(\mathcal{B}_{\lambda'})$. Let $\zeta$ be the unique fundamental weight of $\mathfrak{g}$ that is orthogonal to $\Phi(\k)$; likewise, let $\zeta'$ be the unique fundamental weight of $\mathfrak{g}'$ orthogonal to $\Phi(\k')$. Let $\beta$ be the highest root of $\mathfrak{g}$, and $\beta'$ the highest root of $\mathfrak{g}'$. We note that $\langle \zeta,\beta^\vee\rangle = \langle \zeta',\beta'^\vee\rangle
= 1$.
\begin{dfn}
\label{def:Lambda tilde}
Let $\lambda$ and $\lambda'$ belong to one of the families in Table~\ref{table:WC}. We define the \emph{twisted posets}
\begin{align*}
\tL(\mathcal{B}_\lambda) &\coloneqq \{ \mu - \langle \lambda, \beta^\vee\rangle \zeta \mid \mu \in \Lambda(\mathcal{B}_\lambda)\},\\
\tL(\mathcal{B}_{\lambda'}) & \coloneqq \{ \mu' - \langle \lambda',\beta'^\vee\rangle \zeta' \mid \mu' \in \Lambda(\mathcal{B}_{\lambda'})\}.
\end{align*}
\end{dfn}
Note that, except in Types IIIb and IIId, the weight $\langle \lambda,\beta^\vee\rangle\zeta$ is the same as $\lambda$, and $\langle \lambda',\beta'^\vee\rangle\zeta'$ is the same as $\lambda'$. Hence outside Types IIIb and IIId, the twisted posets have the weight 0 as their maximal element. For Types IIIb and IIId, $\langle \lambda,\beta^\vee\rangle \zeta$ is just the $\lambda$ from Types IIIa and IIIc, respectively; the same is true for $\langle \lambda',\beta'^\vee\rangle\zeta'$ and the $\lambda'$. Clearly $\Lambda(\mathcal{B}_\lambda) \cong \tL(\mathcal{B}_\lambda)$ and $\Lambda(\mathcal{B}_{\lambda'}) \cong \tL(\mathcal{B}_{\lambda'})$ as posets, and any poset map $\Lambda(\mathcal{B}_\lambda)\longrightarrow \Lambda(\mathcal{B}_{\lambda'})$ induces a unique map $\tL(\mathcal{B}_\lambda)\longrightarrow \tL(\mathcal{B}_{\lambda'})$ between twisted posets. In the context of BGG resolutions, the twist amounts to tensoring with the $1$-dimensional $\k$-module $F_{-\langle \lambda, \beta^\vee\rangle \zeta}$ or the $1$-dimensional $\k'$-module $F_{-\langle\lambda',\beta'^\vee\rangle\zeta'}$. Therefore, we will write
\begin{align}
\label{def:L tilde}
\begin{split}
\widetilde{L}_\lambda &\coloneqq L_\lambda \otimes F_{-\langle \lambda,\beta^\vee\rangle\zeta},\\
\widetilde{L}_{\lambda'} &\coloneqq L_{\lambda'} \otimes F_{-\langle \lambda',\beta'^\vee\rangle\zeta'}.
\end{split}
\end{align}
In the following lemma, we establish the final column of Table~\ref{table:WC}, which is an explicit description of the elements of $\tL(\mathcal{B}_{\lambda'})$. The flow of our argument in each type is this: we use Enright--Shelton reduction on $\lambda$ to obtain $\lambda'$, which determines the filling of diagrams $[\Phi_w]_{\lambda'}$. From this filling we show that $\rows\,\stla{w}$ is the result of uniformly lengthening the arms or legs of $\st{w}$, whose shape we already understand from~\eqref{stack-Type I} and~\eqref{stack-ASC}.
\begin{lemma}
\label{lemma:pi'}
Let $(\mathfrak{g}, \k)$, $\lambda$, and $\lambda'$ belong to one of the six families in Table~\ref{table:WC}. Then $\tL(\mathcal{B}_{\lambda'})$ is the set of all weights $\pi'^*$ shown in the last column of Table~\ref{table:WC}.
\end{lemma}
\begin{proof}
\textbf{Type I} ($\lambda = -k\omega_P$). We have $\lambda + \rho=(P+Q-1-k, \ldots, Q-k; \: Q-1, \ldots, 0)$, where the ellipses denote coordinates decreasing by 1. Thus the $(P+1-k)$th coordinate is $Q-1$, meaning that the string of $k$ coordinates before the semicolon equals the string of $k$ coordinates after the semicolon. Hence in the Enright--Shelton reduction, we delete these $m=k$ pairs of coordinates, so that $\mathfrak{g}' = \mathsf{A}_{P+Q-2k-1} = \mathsf{A}_{p+q-1}$ and $\rho' = (p+q-1, \ldots, 0)$.
Thus
\begin{align*}
(\lambda+\rho)^\flat = \lambda' + \rho' &= (p+q+k-1, \ldots, q+k;q-1, \ldots, 0),\\
\lambda' &= (k,\ldots,k;0,\ldots,0) = k\omega'_p.
\end{align*}
It is clear that $d_i \coloneqq \langle \lambda'+\rho',\:\alpha_i^\vee\rangle = 1$ for all $i \neq p$, while $d_p=k+1$. Hence every non-diagonal entry in $[\Phi_w]_{\lambda'}$ is 1, while every diagonal entry is $k+1$. Therefore $\cols\,[\Phi_w]_{\lambda'}$ is the result of adding $k$ to each leg of $[\Phi_w]$ and then taking the conjugate; likewise, $\rows\,[\Phi_w]_{\lambda'}$ is the result of adding $k$ to each arm of $[\Phi_w]$. Therefore, setting $[\Phi_w] = (\alpha|\beta)$, and recalling that $m=k$, we use~\eqref{rowcol} to obtain
\[
\rows\,\st{w}_{\lambda'} = ((\beta+m\mid\alpha)^*;\:(\alpha+m \mid \beta)).
\]
By Theorem~\ref{thm:w-dot-la} and Definition~\ref{def:Lambda tilde}, each element of $\tL(\mathcal{B}_{\lambda'})$ equals $\rows\,\st{w}_{\lambda'}^*$ for a unique $w \in \prescript{\k'\!}{}{\mathcal{W}}$. We therefore have
\[
\tL(\mathcal{B}_{\lambda'}) = \Big\{\big((\alpha+m\mid\beta)^*;\:(\beta+m \mid \alpha)\big) \:\Big|\: (\alpha|\beta) \in \Par(p \times q)\Big\},
\]
as desired.
\textbf{Type II} ($\lambda = -2k\omega_N$). We have $\lambda+\rho=(N-k-1, \ldots, -k)$, where the ellipsis denotes coordinates decreasing by 1. Thus the final $2k+1$ coordinates are the string $(k, \ldots, -k)$, all of which (except the 0) are deleted via Enright--Shelton reduction. We then have $(\lambda+\rho)^\flat = (N-k-1, \ldots, k+1, 0)$. After deleting the $0$ (as described in Section~\ref{sub:ES}), we have deleted $m=2k+1$ coordinates, so that $\mathfrak{g}' = \mathsf{C}_{N-2k-1} = \mathsf{C}_n$ and $\rho' = (n,\ldots,1)$. Thus
\begin{align*}
\lambda'+\rho' &= (k+n,\ldots,k+1),\\
\lambda' &= (k,\ldots,k) = k\omega'_n.
\end{align*}
It is clear that $d_i = 1$ for all $i\neq n$, while $d_n = k+1$. Hence every non-diagonal entry in $[\Phi_w]_{\lambda'}$ is $1$, while every diagonal entry is $k+1$. Therefore $\rows\,\st{w}_{\lambda'}$ is the result of adding $2k$ to each arm of $\st{w}$. But if $[\Phi_w] = (\alpha|0)$, then by~\eqref{stack-ASC} we have $\st{w} = (\alpha+1\mid \alpha)$, and so $\rows\,\st{w}_{\lambda'} = (\alpha+2k+1 \mid \alpha)= (\alpha+m \mid \alpha)$. By Theorem~\ref{thm:w-dot-la} and Definition~\ref{def:Lambda tilde}, each element of $\tL(\mathcal{B}_{\lambda'})$ equals $\rows\,\st{w}_{\lambda'}^*$ for a unique $w \in \prescript{\k'\!}{}{\mathcal{W}}$, and hence $\tL(\mathcal{B}_{\lambda'}) = \{(\alpha+m \mid \alpha)^* \mid \alpha_1 < n\}$, as desired.
\textbf{Type IIIa} ($\lambda = -\frac{k}{2}\omega_N$). We have $\lambda+\rho=(N-\frac{k}{2}, \ldots, 1-\frac{k}{2})$, where the ellipsis denotes coordinates decreasing by 1. Thus the $(k-1)$th coordinate from the end equals $k-1-\frac{k}{2} = \frac{k}{2}-1$, and so the final $k-1$ coordinates are the string $(\frac{k}{2}-1, \ldots, 1-\frac{k}{2})$, which is deleted via Enright--Shelton reduction. Hence $m=k-1$, so that $\mathfrak{g}' = \mathsf{D}_{N-k+1} = \mathsf{D}_n$ and $\rho' = (n-1,\ldots,0)$. Thus
\begin{align*}
(\lambda+\rho)^\flat = \lambda'+\rho' &= \left(n-1+\frac{k}{2},\ldots,\frac{k}{2} \right),\\
\lambda' &= \left(\frac{k}{2},\ldots,\frac{k}{2}\right) = k\omega'_n.
\end{align*}
We therefore have $d_i = 1$ for all $i\neq n$, while $d_n = k+1$. Hence all entries in $[\Phi_w]_{\lambda'}$ are $1$, except for the odd diagonal entries, which are $k+1$. Therefore in $\st{w}_{\lambda'}$, the entries $k+1$ occur in consecutive vertical pairs: explicitly, in positions $(2i-1,\:2i-1)$ and $(2i,\: 2i-1)$, for $i = 1, \ldots, h \coloneqq \lceil \rk[\Phi_w]/2\rceil$. It follows that $\rows\,\st{w}_{\lambda'}$ is the result of adding $k$ to the first $h$ row pairs in $\st{w}$, which forces $\rk \rows\,\st{w}_{\lambda'} = 2h$. If $[\Phi_w] = (\alpha|0)$, then by~\eqref{stack-ASC} we have $\st{w} = (\alpha\mid \alpha+1)$, and so $\rows\,\st{w}_{\lambda'} = (\alpha+k \mid \alpha+1)$, where if $\rk \alpha$ is odd then we augment $\alpha$ by inserting $(\ldots, -1 \mid \ldots, 0)$. By allowing $\alpha_1<n$ rather than $\alpha_1 < n-1$, we can rewrite partitions of this form as $\rows\,\st{w}_{\lambda'} = (\alpha+k-1 \mid \alpha) = (\alpha + m \mid \alpha)$ with even rank. The rest follows as in the previous cases.
\textbf{Type IIIb} ($\lambda = -\frac{k}{2}\omega_N+\omega_k^*$).
We have
\[
\lambda + \rho = \Big(\underbrace{N - \tfrac{k}{2}, \ldots, \tfrac{k}{2}+1}_{N-k},\underbrace{\tfrac
{k}{2}-1, \ldots, -\tfrac{k}{2}}_k\Big),
\]
where the ellipses denote coordinates decreasing by 1. Thus the final $k$ coordinates, except for the very last one, are all deleted via Enright--Shelton reduction. Hence $m=k-1$, so that $\mathfrak{g}' = \mathsf{D}_{N-k+1} = \mathsf{D}_n$ and $\rho' = (n-1, \ldots, 0)$. Thus
\begin{align*}
(\lambda + \rho)^\flat = \lambda' + \rho' &= \left(n-1+\frac{k}{2}, \ldots, \frac{k}{2}+1,-\frac{k}{2}\right),\\
\lambda' &= \left(\frac{k}{2},\ldots, \frac{k}{2},-\frac{k}{2}\right) = k\omega'_{n-1}.
\end{align*}
We therefore have $d_i = 1$ for all $i \neq n-1$, while $d_{n-1} = k+1$. Hence $[\Phi_w]_{\lambda'}$ is the same as in Type IIIa, except that $k+1$ occurs as the \emph{even} diagonal entries. As a result, $\rows\,\st{w}_{\lambda'}$ has \emph{odd} rank, but is not in general a true partition since its first arm is not longer than its second arm: indeed, it is obtained from $\st{w}$ by adding $k$ to the first $\lfloor \rk[\Phi_w]/2 \rfloor$ row pairs \textit{beneath} the first row.
Recall, however, that in Type IIIb, we define $\tL(\mathcal{B}_{\lambda'})$ in terms of the $\lambda'$ from Type IIIa, namely $k\omega'_n$; since the difference of the two $\lambda'$s is $k\varepsilon_1^*$, we have
\[
\tL(\mathcal{B}_{\lambda'}) = \left\{(k\varepsilon_1 + \rows\,\st{w}_{\lambda'})^* \: \middle| \: w \in \prescript{\k'\!}{}{\mathcal{W}}\right\}.
\]
This addition of $k\varepsilon_1$ adds back the ``missing'' $k$ to the first arm of $\rows\,\st{w}_{\lambda'}$, so that elements of $\tL(\mathcal{B}_{\lambda'})$ are the duals of the partitions $(\alpha + m \mid \alpha)$ with odd rank, where $\alpha_1 < n$.
\textbf{Type IIIc} ($\lambda = -(2k-1)\omega_N$). We have $\lambda+\rho=(N-k-\frac{1}{2},\ldots,-k+\frac{1}{2})$, where the ellipsis denotes coordinates decreasing by 1. Thus the final $2k$ coordinates are the string $(k-\frac{1}{2}, \ldots, -k+\frac{1}{2})$, which is deleted via Enright--Shelton reduction. Hence $m=2k$, so that $\mathfrak{g}' = \mathsf{D}_{N-2k} = \mathsf{D}_n$ and $\rho' = (n-1,\ldots,0)$. Thus
\begin{align*}
(\lambda+\rho)^\flat = \lambda'+\rho' &= \left(n+k-\frac{1}{2}, \ldots, k+\frac{1}{2}\right),\\
\lambda' &= \left(\frac{2k+1}{2},\ldots,\frac{2k+1}{2}\right) = (2k+1)\omega'_n.
\end{align*}
The rest of the argument is identical to Type IIIa, but with $m=2k$ in place of $m=k-1$.
\textbf{Type IIId} ($\lambda = -(2k-1)\omega_N + \omega_{2k+1}^*$).
We have
\[
\lambda + \rho = \Big(\underbrace{N - k - \tfrac{1}{2}, \ldots, k+\tfrac{3}{2}}_{N-2k-1},\underbrace{k-\tfrac
{1}{2}, \ldots, -k-\tfrac{1}{2}}_{2k+1}\Big),
\]
where the ellipses denote coordinates decreasing by 1. Thus the final $2k+1$ coordinates, except for the very last one, are all deleted via Enright--Shelton reduction. Hence $m=2k$, so that $\mathfrak{g}' = \mathsf{D}_{N-2k} = \mathsf{D}_n$ and $\rho' = (n-1, \ldots, 0)$. Thus
\begin{align*}
(\lambda + \rho)^\flat = \lambda' + \rho' &= \left(n+k-\frac{1}{2}, \ldots, k+\frac{3}{2},-k-\frac{1}{2}\right),\\
\lambda' &= \left(\frac{2k+1}{2},\ldots, \frac{2k+1}{2},-\frac{2k+1}{2}\right) = (2k+1)\omega'_{n-1}.
\end{align*}
The rest of the argument is identical to Type IIIb, but with $m=2k$ in place of $m=k-1$.
\end{proof}
\subsection{Main result: congruent blocks and conjugate partitions}
\label{sub:proofs}
We arrive at our main result:
\begin{theorem}
\label{thm:Cong and Conj}
Let $(\mathfrak{g},\k)$, along with $\lambda$ and $\lambda'$, belong to one of the six families in Table~\ref{table:WC}. Then $\mathcal{B}_\lambda$ is congruent to $\mathcal{B}_{\lambda'}$. Moreover, we have an isomorphism of twisted posets
\begin{align*}
\tL(\mathcal{B}_\lambda) &\longrightarrow \tL(\mathcal{B}_{\lambda'}),\\
\pi^* &\longmapsto \pi'^*
\end{align*}
such that $\pi$ and $\pi'$ are conjugate partitions.
\end{theorem}
\begin{rem}
\label{rem:butterfly}
In Type I, the weight $\pi' = \rows\,\st{w}_{\lambda'}$ is not a true partition, but rather a pair of partitions, where the dual operation $( \: )^*$ has been applied to the first partition.
Hence $\pi'$ is represented by a corner-to-corner stacking of the two Young diagrams, where the first is rotated 180 degrees so that its row lengths are considered negative. (Recall the stacked diagrams $\st{w}$ in Type I, which took the same form.) Since such stacked diagrams resemble a butterfly, we will refer to each of the two diagrams as a ``wing'' of $\pi'$.
Likewise, $\pi$ is represented by a butterfly diagram. The claim in the theorem is that $\pi'$ and $\pi$ are conjugates, i.e., we can obtain one from the other by reflecting about the 45-degree axis through the center of the butterfly.
\end{rem}
From now on, thanks to Lemma~\ref{lemma:pi'}, we will write $\pi'^*$ for an arbitrary element of $\tL(\mathcal{B}_{\lambda'})$, where $\pi'$ is a partition (except in Type I, where $\pi'$ contains $q$ negative coordinates followed by $p$ positive coordinates). By Theorem~\ref{thm:ES}, there is a poset isomorphism $\Lambda(\mathcal{B}_\lambda) \longrightarrow \Lambda(\mathcal{B}_{\lambda'})$ induced by Enright--Shelton reduction, which further induces a poset isomorphism $\tL(\mathcal{B}_\lambda) \longrightarrow \tL(\mathcal{B}_{\lambda'})$. We denote the preimage of $\pi'^*$ by $\pi^* \in \tL(\mathcal{B}_{\lambda})$, without making any assumptions about the nature of $\pi = (\pi^*)^*$ itself. Explicitly, we must have
\[
\pi = [(\pi'^*+\lambda'+\rho')^\sharp - (\lambda+\rho)]^*.
\]
(As before, in Types IIIb and IIId we must use the $\lambda$ and $\lambda'$ from Types IIIa and IIIc, respectively.) Since $\pi'^* = w\cdot\lambda'-\lambda'$ for some $w \in \prescript{\k'\!}{}{\mathcal{W}}$, the claim in Theorem~\ref{thm:Cong and Conj} is that the two weights
\begin{align}
\label{pi formula}
\begin{split}
\pi'&=[w(\lambda'+\rho')-(\lambda'+\rho')]^*,\\
\pi &= [w(\lambda'+\rho')^\sharp - (\lambda+\rho)]^*
\end{split}
\end{align}
are truly conjugate partitions for all $w \in \prescript{\k'\!}{}{\mathcal{W}}$. Before proving Theorem~\ref{thm:Cong and Conj}, we present a detailed example that illuminates the way in which these conjugate partitions arise. Our approach is to begin with $\Lambda(\mathcal{B}_{\lambda'})$, which we understand completely thanks to Lemma~\ref{lemma:w-dot-lambda}, and then reverse the Enright--Shelton reduction to pass to $\Lambda(\mathcal{B}_{\lambda})$. This philosophy --- taking the regular block as our starting point in order to understand the singular block --- is the reason for our labeling the various types in terms of $\mathfrak{g}'$ rather than $\mathfrak{g}$.
\begin{figure}[t]
\centering
\input{Example_Conjugates_C.tex}
\caption{Illustration of Example~\ref{ex:D8 and C3}. The symbol $\blacksquare$ denotes the string $(2,1,0,\neg{1}, \neg{2})$ deleted via Enright--Shelton reduction (and subsequent deletion of 0). For typographical clarity, the bars denote negatives.}
\label{fig:example conjugates Type II}
\end{figure}
\begin{ex}
\label{ex:D8 and C3}
We revisit our previous example in Type II, where $N=8$ and $k=2$. (See the case-by-case descriptions at the end of Section~\ref{sub:ES}.) For typographical clarity, we write negatives as bars over the coordinates; hence we have $\mathfrak{g} = \mathsf{D}_8$ with $\lambda = (\neg{2}, \ldots, \neg{2})$, and $\mathfrak{g}' = \mathsf{C}_3$ with $\lambda' = (2,2,2)$. Thus $\lambda'+\rho' = (5,4,3)$.
We refer the reader to Figure~\ref{fig:example conjugates Type II} throughout this example. On the right, we represent $w \in \prescript{\k'\!}{}{\mathcal{W}}$ by the diagram $[\Phi_w]_{\lambda'}$. On the left, we simultaneously depict \emph{both} posets $\tL(\mathcal{B}_{\lambda'})$ and $\tL(\mathcal{B}_{\lambda})$. The symbol $\blacksquare \coloneqq (2,1,0,\neg{1}, \neg{2})$ denotes the string of $m=5$ coordinates deleted via Enright--Shelton reduction (and subsequent deletion of the 0). In this way, ignoring the $\blacksquare$ gives us $w(\lambda'+\rho')$, while retaining the $\blacksquare$ gives us $w(\lambda'+\rho')^\sharp$. (We abuse notation slightly by writing $\sharp$ to reverse both the 0-deletion and the Enright--Shelton reduction.) Upon subtracting either $\lambda'+\rho'$ or $\lambda+\rho$, we have the elements of either $\tL(\mathcal{B}_{\lambda'})$ or $\tL(\mathcal{B}_{\lambda})$.
Our goal is to understand why (the duals of) corresponding elements $\pi'^* \in \tL(\mathcal{B}_{\lambda'})$ and $\pi^* \in \tL(\mathcal{B}_{\lambda})$ are conjugate partitions. In order to compare inductively the construction of $\pi'$ and $\pi$, it suffices to consider the general case where $w$ covers $v$, i.e., where $[\Phi_v]_{\lambda'}$ is joined from above to $[\Phi_w]_{\lambda'}$ in Figure~\ref{fig:example conjugates Type II}. In other words, we consider the effect of adding one box to $[\Phi_v]$ to obtain $[\Phi_w]$. In the base case, at the top of Figure~\ref{fig:example conjugates Type II} where $w = {\rm id}$, we have from~\eqref{pi formula} that $\pi' = \pi = 0$. Therefore, again by~\eqref{pi formula}, it actually suffices to compare the differences
\begin{equation}
\label{differences}
w(\lambda'+\rho') - v(\lambda'+\rho') \qquad \text{and} \qquad w(\lambda'+\rho')^\sharp - v(\lambda'+\rho')^\sharp,
\end{equation}
which (upon taking the dual) will tell us how $\pi'$ and $\pi$ are constructed as $[\Phi_w]$ is built box by box.
We adopt the convention of building $[\Phi_w]$ by rows from top to bottom, adding boxes in a given row from left to right. Hence for our purposes, there are at most two ways to add a box to $[\Phi_v]$: either add a new box along the diagonal, or add a box to the bottom row.
\bigskip
\textbf{Case 1: adding a diagonal box.}
Suppose that $[\Phi_w]$ is obtained by adding the $i$th diagonal box to $[\Phi_v]$, which corresponds to the root $2\varepsilon_{n+1-i}$. It follows from Lemma~\ref{lemma:w-dot-lambda} that $w(\lambda'+\rho')$ is obtained by subtracting $2(k+1) = 6$ from the $i$th coordinate of $v(\lambda' + \rho')$, counting from the right. We claim (postponing the justification to the actual proof) that this coordinate is necessarily $k+1 = 3$, and hence it is negated by our subtraction. In Figure~\ref{fig:example conjugates Type II}, we thus have
\begin{equation}
\label{add diag box}
( \ldots, \posarrow{3}{\substack{\text{$i$th from the right,}\\\text{ignoring the $\blacksquare$}}}, \blacksquare, \ldots) \quad \leadsto \quad (\ldots, \blacksquare, \neg{3}, \ldots).
\end{equation}
Therefore, on one hand, ignoring the $\blacksquare$ in~\eqref{add diag box} gives us
\[
w(\lambda'+\rho') - v(\lambda' + \rho') = (0, \ldots, 0,\posarrow{\neg{6}}{\text{$i$th from the right}},0, \ldots, 0),
\]
which (upon taking the dual) creates the $i$th arm in $\pi'$, with length $m=6-1=5$. On the other hand, including the $\blacksquare$ in~\eqref{add diag box} gives us
\[
w(\lambda'+\rho')^\sharp - v(\lambda' + \rho')^\sharp = (0, \ldots,0,\underbrace{\neg{1},\ldots,\neg{1}}_{\mathclap{\substack{\text{$m+1$ coordinates},\\
\text{ending $i$th from the right}}}},0,\ldots,0),
\]
which (upon taking the dual) creates the $i$th leg in $\pi$, with length $m=5$. (This fact requires that each previous leg be strictly longer than $m$, which will be clear from Case 2 below.) Hence the case of adding a diagonal box preserves the conjugate shapes of $\pi'$ and $\pi$.
As a specific example of Case 1, in Figure~\ref{fig:example conjugates Type II} we choose $[\Phi_v]_{\lambda'} = \ytableaushort[*(gray!30)]{31}$, and add a diagonal box to begin row $i=2$, thus obtaining $[\Phi_w]_{\lambda'}$ to the southeast. On the left side, we thus have
\[
(5,3,\blacksquare,\neg{4}) \quad \leadsto \quad (5,\blacksquare,\neg{3},\neg{4}).
\]
We confirm that this has the effect of negating the $3$, which was indeed in position $i=2$ from the right. Moreover, we have the differences $(5,\neg{3},\neg{4}) - (5,3,\neg{4}) = (0,\neg{6},0)$ and $(5,\blacksquare,\neg{3},\neg{4}) - (5,3,\blacksquare,\neg{4}) = (0,\neg{1},\neg{1},\neg{1},\neg{1},\neg{1},\neg{1},0)$.
\bigskip
\textbf{Case 2: adding a non-diagonal box.} Suppose that $[\Phi_w]$ is obtained by adding the $j$th non-diagonal box to the $i$th row of $[\Phi_v]$, which we assume is the bottom row. This box corresponds to the root $\varepsilon_{n+1-i} + \varepsilon_{n+1-i-j}$. It follows from Lemma~\ref{lemma:w-dot-lambda} that $w(\lambda'+\rho')$ is obtained by subtracting $1$ from the $i$th and $(i+j)$th coordinates of $v(\lambda'+\rho')$, counting from the right. We claim (again postponing the justification to the proof) that these coordinates are necessarily $\neg{j+k}$ and $j+k+1$, which are therefore transposed and negated by our subtraction. In our current example where $k=2$, these two coordinates are $\neg{j+2}$ and $j+3$. In Figure~\ref{fig:example conjugates Type II}, we thus have
\begin{equation}
\label{add to bottom row}
(\ldots, \posarrow{j+3}{\substack{\text{$(i+j)$th} \\ \text{from the right,}\\ \text{ignoring the $\blacksquare$}}}, \ldots\ldots, \blacksquare, \posarrow{\neg{j+2}}{\substack{\text{$i$th} \\ \text{from the right}}}, \ldots) \quad \leadsto \quad (\ldots, j+2, \ldots\ldots, \blacksquare, \neg{j+3}, \ldots).
\end{equation}
Therefore, on one hand, ignoring the $\blacksquare$ in~\eqref{add to bottom row} gives us
\[
w(\lambda' + \rho') - v(\lambda'+\rho') = (0, \ldots, 0,\posarrow{\neg{1}}{\substack{\text{$(i+j)$th} \\ \text{from right}}}, 0, \ldots\ldots, 0, \posarrow{\neg{1}}{\substack{\text{$i$th} \\ \text{from right}}}, 0, \ldots, 0),
\]
which (upon taking the dual) adds a box to the $i$th arm and the $i$th leg of $\pi'$. (Note that each previous leg of $\pi$ must be strictly longer than $j$, by the very fact that we are able to add the $j$th box to the bottom row of $[\Phi_v]$.) On the other hand, including the $\blacksquare$ in~\eqref{add to bottom row} gives us
\[
w(\lambda'+\rho')^\sharp - v(\lambda'+ \rho')^\sharp = (0, \ldots, 0,\posarrow{\neg{1}}{\substack{\text{$(m+i+j)$th} \\ \text{from right}}}, 0, \ldots\ldots, 0, \posarrow{\neg{1}}{\substack{\text{$i$th} \\ \text{from right}}}, 0, \ldots, 0),
\]
which (upon taking the dual) also adds a box to the $i$th arm and the $i$th leg of $\pi$. Hence the case of adding a non-diagonal box preserves the conjugate shapes of $\pi'$ and $\pi$.
As a specific example of Case 2, we again choose $[\Phi_v]_{\lambda'} = \ytableaushort[*(gray!30)]{31}$, but this time we add box $j=2$ to the bottom (i.e., only) row $i=1$ to obtain $[\Phi_w]_{\lambda'}$ to its southwest. On the left side, we thus have
\[
(5,3,\blacksquare,\neg{4}) \quad \leadsto \quad (4,3,\blacksquare,\neg{5}).
\]
We confirm that this has the effect of subtracting $1$ from the coordinates $\neg{j+2} = \neg{4}$ and $j+3 = 5$, which were in positions $i=1$ and $i+j = 3$, counting from the right. Moreover, we have the differences $(4,3,\neg{5}) - (5,3,\neg{4}) = (\neg{1},0,\neg{1})$ and $(4,3,\blacksquare,\neg{5}) - (5,3,\blacksquare,\neg{4}) = (\neg{1},0,0,0,0,0,0,\neg{1})$.
\end{ex}
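The weight arithmetic in this example is easy to verify by machine. The following sketch (illustrative only, assuming the convention that the dual operation $(\:)^*$ reverses and negates the coordinates of a weight) recomputes Case 1 above and confirms that the resulting $\pi'$ and $\pi$ are conjugate partitions.

```python
# Machine check of Case 1 in the D8/C3 example (N = 8, k = 2).
# Assumed convention: the dual ( )^* reverses and negates the coordinates.
def dual(v):
    return [-x for x in reversed(v)]

def conjugate(p):
    p = [x for x in p if x > 0]
    return [sum(1 for x in p if x >= j) for j in range(1, (max(p) + 1) if p else 1)]

lam_rho_prime = [5, 4, 3]               # lambda' + rho' in C3
w_lam_rho_prime = [5, -3, -4]           # w(lambda' + rho') after adding the diagonal box
lam_rho = [5, 4, 3, 2, 1, 0, -1, -2]    # lambda + rho in D8
w_sharp = [5, 2, 1, 0, -1, -2, -3, -4]  # w(lambda' + rho')^sharp: string (2,1,0,-1,-2) restored

pi_prime = dual([a - b for a, b in zip(w_lam_rho_prime, lam_rho_prime)])
pi = dual([a - b for a, b in zip(w_sharp, lam_rho)])

assert pi_prime == [7, 7, 0]
assert pi == [2, 2, 2, 2, 2, 2, 2, 0]
assert conjugate(pi_prime) == [x for x in pi if x > 0]  # pi' and pi are conjugate
```

The same check can be repeated for any other $w$ appearing in the figure.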
Having concluded our preliminary example in Type II, we proceed to prove Theorem~\ref{thm:Cong and Conj}. The proof, for each of the types, merely makes rigorous the same idea that drives Example~\ref{ex:D8 and C3}; for this reason, we encourage the reader to begin reading the proof for Type II before the other types.
\begin{proof}[Proof of Theorem~\ref{thm:Cong and Conj}] Once we have shown for each type that $\pi'$ and $\pi$ are conjugate partitions, the congruence of blocks $\mathcal{B}_{\lambda'}$ and $\mathcal{B}_{\lambda}$ will follow immediately from Lemma~\ref{lemma:pi'} and Theorems~\ref{theorem:ID-dim-GLn} and~\ref{thm:dim GLn pairs}. Hence we prove the conjugate property for each type:
\bigskip
\textbf{Type I.} As in Type II below, the proof formally relies on the following analogue of~\eqref{claim in main proof}: if $[\Phi_w] = (\alpha - 1 \mid \beta - 1)$ has rank $r$, then we have
\begin{align}
\label{claim Type I}
\begin{split}
w(c_p, \ldots, c_1&;d_1,\ldots,d_q)\\
= &(c_p, \ldots, \widehat{c_{\beta_1}}, \ldots, \widehat{c_{\beta_r}}, \ldots, c_1,d_{\alpha_r}, \ldots,d_{\alpha_1}; c_{\beta_1}, \ldots, c_{\beta_r},d_1, \ldots, \widehat{d_{\alpha_r}}, \ldots, \widehat{d_{\alpha_1}}, \ldots, d_q),
\end{split}
\end{align}
where the hats denote missing coordinates. This can be shown inductively using the same method as in~\eqref{claim in main proof}.
Recall from Lemma~\ref{lemma:pi'} that $\lambda+\rho = (p+q+k-1,\ldots,q+k,\blacksquare;\blacksquare,q-1,\ldots,0)$, where $\blacksquare = (q+k-1, \ldots, q)$ is the string deleted on either side of the semicolon via Enright--Shelton reduction. Therefore $\lambda'+\rho' = (p+q+k-1,\ldots,q+k;q-1,\ldots,0)$. By~\eqref{pi formula}, in the base case where $w = {\rm id}$ we have $\pi' = \pi = 0$, and so just as in Example~\ref{ex:D8 and C3}, it suffices to consider the differences~\eqref{differences} when $[\Phi_w]$ is obtained by adding a single box to $[\Phi_v]$.
First suppose we obtain $[\Phi_w]$ by adding the $i$th diagonal box to $[\Phi_v]$, which corresponds to the root $\varepsilon_{p+1-i}-\varepsilon_{p+i}$. Then by Lemma~\ref{lemma:w-dot-lambda}, $w(\lambda'+\rho')$ is obtained from $v(\lambda' + \rho')$ by subtracting $k+1$ from the $i$th coordinate to the left of the semicolon, and adding $k+1$ to the $i$th coordinate to the right of the semicolon.\footnote{Of course, we could apply~\eqref{w-in-terms-of-v} rather than Lemma~\ref{lemma:w-dot-lambda}, and thus simply transpose the two $i$th coordinates in both directions from the semicolon; the reader can check that this has the exact same effect because of our particular $\lambda'+\rho'$. For the sake of uniformity, however, we have decided to use the method in Lemma~\ref{lemma:pi'} throughout this proof.} It follows from~\eqref{claim Type I} that these coordinates are $q+k$ on the left, and $q-1$ on the right, and hence we have
\[
( \ldots,\posarrow{q+k}{\text{$i$th to the left}},\blacksquare,\ldots; \ldots, \blacksquare,\posarrow{q-1}{\text{$i$th to the right}},\ldots) \quad \leadsto \quad ( \ldots,\blacksquare,\posarrow{q-1}{\text{$i$th to the left}},\ldots; \ldots, \posarrow{q+k}{\text{$i$th to the right}},\blacksquare,\ldots),
\]
counting coordinates from the semicolon, without counting the $\blacksquare$. On one hand, ignoring the $\blacksquare$ gives us
\[
w(\lambda'+\rho') - v(\lambda'+\rho') = (0, \ldots, 0, \posarrow{\neg{k+1}}{\text{$i$th to the left}},0, \ldots,0;0,\ldots,0,\posarrow{k+1}{\text{$i$th to the right}},0,\ldots,0),
\]
which (upon taking the dual) creates the $i$th arm of length $m=k$ in each of the wings of $\pi'$. (Recall Remark~\ref{rem:butterfly} on the butterfly diagrams that represent $\pi'$ and $\pi$.) On the other hand, including the $\blacksquare$ gives us
\[
w(\lambda'+\rho')^\sharp - v(\lambda' + \rho')^\sharp = (0,\ldots,0,\underbrace{\neg{1},\ldots,\neg{1}}_{\mathclap{\substack{\text{$m+1$ coordinates,} \\ \text{ending $i$th to the left}}}},0,\ldots,0;0,\ldots,0,\underbrace{1,\ldots,1}_{\mathclap{\substack{\text{$m+1$ coordinates,} \\ \text{starting $i$th to the right}}}},0,\ldots,0),
\]
which (upon taking the dual) creates the $i$th \emph{leg} of length $m$ in each of the wings of $\pi$.
The remaining possibilities work out similarly, following from Lemma~\ref{lemma:w-dot-lambda} and~\eqref{claim Type I}. In sum, adding a box to the $i$th arm of $[\Phi_v]$ adds a box to the $i$th arm of each wing of $\pi'$, but adds a box to the $i$th \emph{leg} of each wing of $\pi$; the same remains true if we interchange the words ``arm'' and ``leg'' everywhere. Hence $\pi$ is obtained from $\pi'$ by a reflection about the 45-degree axis, and so the two are conjugates.
\bigskip
\textbf{Type II.} Our only task is to verify the two inductive claims made in Example~\ref{ex:D8 and C3}. In Case 1, we claimed that whenever it is possible to add the $i$th diagonal box to $[\Phi_v]$, the weight $v(\lambda'+\rho')$ must have $k+1$ as its $i$th coordinate from the right. In Case 2, we claimed that whenever it is possible to add the $j$th non-diagonal box to the $i$th row of $[\Phi_v]$, the weight $v(\lambda'+\rho')$ must have $\neg{j+k}$ as its $i$th coordinate, and $k+j+1$ as its $(i+j)$th coordinate, counting from the right. Both of these claims will be immediate once we prove the following fact, where $[\Phi_v] = (\alpha-1\mid 0)$, so that the row lengths are given by $\alpha_1 > \cdots > \alpha_r > 0$:
\begin{equation}
\label{claim in main proof}
v(c_n, \ldots, c_1) = (c_n, \ldots, \widehat{c_{\alpha_1}}, \ldots, \widehat{c_{\alpha_r}}, \ldots, c_1, \neg{c_{\alpha_r}}, \ldots, \neg{c_{\alpha_1}}),
\end{equation}
where the hats denote missing coordinates.
To prove~\eqref{claim in main proof}, first consider the case $r=1$, so that by~\eqref{w-in-terms-of-v} we have $v = s_{n,n+1-\alpha_1} \cdots s_{n,n}$. We proceed by induction on $\alpha_1$. In the base case $\alpha_1=1$, we have $v = s_{n,n}$, and so the effect of applying $v$ is to negate $c_1$. Thus $v(c_n, \ldots, c_1) = (c_n,\ldots, c_2, \neg{c_1})$, which agrees with~\eqref{claim in main proof}. Now assuming that~\eqref{claim in main proof} holds for $r=1$, let $[\Phi_w]$ be obtained from $[\Phi_v]$ by adding the $(\alpha_1+1)$th box in the sole row.
Then $w(c_n, \ldots, c_1)$ is obtained from $v(c_n, \ldots, c_1)$ by transposing and negating the $1$st and $(\alpha_1+1)$th coordinates from the right; therefore we have
\begin{equation}
\label{r=1}
w(c_n, \ldots, c_1) = s_{n,n-\alpha_1}(c_n, \ldots, \posarrow{c_{\alpha_1+1}}{\text{$(\alpha_1+1)$th from right}},\widehat{c_{\alpha_1}}, \ldots\ldots, c_1,\posarrow{\neg{c_{\alpha_1}}}{\text{$1$st from right}}) = (c_n, \ldots, \widehat{c_{\alpha_1+1}}, \ldots, c_1,\neg{c_{\alpha_1+1}})
\end{equation}
as desired. Proceeding by induction on $r$, we assume~\eqref{claim in main proof} and suppose that $[\Phi_w]$ is obtained from $[\Phi_v]$ by adding a row of length $\alpha_{r+1}<\alpha_r$. But then $w(c_n, \ldots, c_1)$ is obtained from $v(c_n, \ldots, c_1)$ by applying the result~\eqref{r=1}, having replaced $1$ with $r+1$, to the string $(c_{\alpha_{r}-1}, \ldots, c_1)$, which thus becomes $(c_{\alpha_{r}-1}, \ldots, \widehat{c_{\alpha_{r+1}}}, \ldots, c_1, \neg{c_{\alpha_{r+1}}})$.
This proves~\eqref{claim in main proof}, and upon setting $(c_n,\ldots,c_1) = \lambda'+\rho' = (k+n, \ldots, k+1)$, the two claims from Example~\ref{ex:D8 and C3} follow immediately.
\bigskip
\textbf{Types IIIabcd.} The proof is identical in spirit to Type II. The analogue to~\eqref{claim in main proof} is the following: if $[\Phi_w] = (\alpha - 2 \mid 0)$ with rank $r$, then we have
\[
w(c_n,\ldots,c_1) = (c_n, \ldots, \widehat{c_{\alpha_1}}, \ldots, \widehat{c_{\alpha_r}}, \ldots, (-1)^r c_1, \neg{c_{\alpha_r}}, \ldots, \neg{c_{\alpha_1}}),
\]
so that there is always an even number of negated coordinates. The only other substantial difference occurs in Types IIIb and IIId, due to the twisting by the $\lambda'$ from Types IIIa and IIIc, respectively. Hence in Types IIIb and IIId, we must check in the base case ($w = {\rm id}$) that $\pi'$ and $\pi$ are conjugates. Indeed, in both types, when $w = {\rm id}$, we have $\pi'$ a single row and $\pi$ a single column of the same length; in Type IIIb this length is $k$, while in Type IIId it is $2k+1$. The rest of the proof imitates Type II, and so the details are left to the reader. \end{proof}
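The inductive mechanism behind~\eqref{claim in main proof} can also be tested by machine. The sketch below (an illustration under the conventions stated in the proof, not part of the argument itself) builds $w(c_n, \ldots, c_1)$ box by box, where the diagonal box of row $i$ negates the $i$th coordinate from the right and the $j$th non-diagonal box of row $i$ transposes and negates the $i$th and $(i+j)$th coordinates from the right, and then checks the result against the closed formula for every staircase shape in a small rank.

```python
# Sketch (not part of the proof): compare the box-by-box construction of
# w(c_n, ..., c_1) with the closed formula, for a generic decreasing weight.
from itertools import combinations

def apply_boxes(c, rows):
    """Build w(c_n, ..., c_1) row by row for row lengths rows = (a_1 > ... > a_r):
    the diagonal box of row i negates the i-th coordinate from the right; the
    j-th non-diagonal box of row i transposes and negates coordinates i, i+j."""
    v, n = list(c), len(c)
    for i, length in enumerate(rows, start=1):
        v[n - i] = -v[n - i]
        for j in range(1, length):
            a, b = n - i, n - (i + j)
            v[a], v[b] = -v[b], -v[a]
    return v

def closed_form(c, rows):
    """Closed formula: delete c_{a_1}, ..., c_{a_r}, append -c_{a_r}, ..., -c_{a_1}."""
    n = len(c)
    kept = [x for idx, x in enumerate(c) if n - idx not in rows]
    return kept + [-c[n - a] for a in reversed(rows)]

n = 5
c = list(range(2 * n, n, -1))                      # e.g. [10, 9, 8, 7, 6]
for r in range(n + 1):
    for rows in combinations(range(n, 0, -1), r):  # strictly decreasing row lengths
        assert apply_boxes(c, rows) == closed_form(c, rows)
```

In particular, setting $(c_n, \ldots, c_1) = (k+n, \ldots, k+1)$ recovers the two claims from Example~\ref{ex:D8 and C3}.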
\section{Hilbert series and generalized Littlewood identities}
\label{section:HS}
\subsection{Hilbert series of $\widetilde{L}_\lambda$}
In this section, we derive the Hilbert series of the infinite-dimensional modules $\widetilde{L}_{\lambda}$ for the families in Table~\ref{table:WC}. Recall from \eqref{def:L tilde} that $\widetilde{L}_\lambda \coloneqq L_\lambda \otimes F_{-\langle \lambda,\beta^\vee\rangle\zeta}$.
Let $M$ be a highest-weight $\mathfrak{g}$-module with highest weight $\lambda$, and weight space decomposition $M = \bigoplus_{\mu \leq \lambda} M_\mu$. Recall the distinguished element $h_0 \in \mathfrak{z}(\k)$ from the beginning of Section \ref{sub:Hermitian pairs}. The \emph{Hilbert series} of $M$ is the formal power series
\[
H_M(t) \coloneqq \sum_{\mu \leq \lambda}\dim(M_\mu) t^{-\mu(h_0)}.
\]
In the setting of this paper, where the coordinates of $\lambda$ are either all integers or all half-integers, each of our Hilbert series is an element of $\mathbb{Z}[[t^{1/2}]]$, and can be written as a rational function of the form
\[
H_M(t) = \frac{P(t)}{(1-t)^d},
\]
with $P \in \mathbb Z[t^{1/2}]$ such that $P(1) \neq 0$. Then $P(1)$ is the Bernstein degree of $M$, and $d$ is the Gelfand--Kirillov (GK) dimension of $M$.
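As a toy illustration of these two invariants (with data chosen purely for illustration, not taken from this paper), take $P(t) = 1+t$ and $d = 2$: the coefficient of $t^j$ in $P(t)/(1-t)^d$ grows like $P(1)\,j^{d-1}/(d-1)!$, so $P(1)$ gives the leading growth constant and $d$ the polynomial rate.

```python
# Toy illustration (data not from the paper): P(t) = 1 + t, d = 2.
from math import comb

P = [1, 1]   # coefficients of P(t) = 1 + t, so P(1) = 2
d = 2        # GK dimension in this toy example

def coeff(j):
    """Coefficient of t^j in P(t)/(1-t)^d, via 1/(1-t)^d = sum_m C(m+d-1, d-1) t^m."""
    return sum(c * comb(j - s + d - 1, d - 1) for s, c in enumerate(P) if j >= s)

# Coefficients grow linearly (degree d - 1 = 1) with leading coefficient P(1) = 2:
assert [coeff(j) for j in range(5)] == [1, 3, 5, 7, 9]
assert sum(P) == 2   # Bernstein degree P(1)
```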
\begin{table}[ht]
\centering
\input{Table_Hilbert.tex}
\caption{Data for the Hilbert series $H_{\widetilde{L}}(t) = \frac{P(t)}{(1-t)^d}$ of $\widetilde{L}=\widetilde{L}_\lambda \coloneqq L_\lambda \otimes F_{-\langle \lambda,\beta^\vee\rangle \zeta}$. As in Table~\ref{table:WC}, we write $p=P-m$, $q=Q-m$, and $n=N-m$.}
\label{table:Hilbert-series}
\end{table}
\begin{theorem}
\label{thm:Hilbert series}
For each of the six types in Table~\ref{table:WC}, let $\widetilde{L} = \widetilde{L}_\lambda$ as defined in \eqref{def:L tilde}. Then
\[
H_{\widetilde{L}}(t) = \frac{P(t)}{(1-t)^d},
\]
where $P(t)$ and $d$ are given in Table~\ref{table:Hilbert-series}. Furthermore, we have $P(t) = H_{\widetilde{L}'}(t)$, where $\widetilde{L}' = \widetilde{L}_{\lambda'}$.
\end{theorem}
\begin{proof}
By the transfer theorem in~\cite{Enright-Hunziker}*{p.~623} relating Hilbert series to Enright--Shelton reduction, we have
\begin{equation}
\label{transfer theorem}
H_{\widetilde{L}}(t) = \frac{\dim F_\lambda}{\dim F_{\lambda'}} \cdot \frac{H_{\widetilde{L}'}(t)}{(1-t)^d},
\end{equation}
where $d = \dim(\mathfrak{p}^+) - \dim(\mathfrak{p}'^+)$. Due to the congruence of blocks $\mathcal{B}_{\lambda}$ and $\mathcal{B}_{\lambda'}$ in Table~\ref{table:WC}, we have $\dim F_\lambda / \dim F_{\lambda'} = 1$ in~\eqref{transfer theorem}. It remains to verify $H_{\widetilde{L}'}(t)$ and the GK dimension $d$ for each type below.
\bigskip
\textbf{Types I and II.} Note that in these types, we have $\langle \lambda',\beta'^\vee\rangle \zeta' = \lambda' = k \zeta'$, where $\zeta'$ is the unique fundamental weight orthogonal to $\Phi(\k')$, as in Definition~\ref{def:Lambda tilde}. Recall that the $\k'$-module $F_{-k\zeta'}$ is $1$-dimensional. By Theorem 3.1 in~\cite{EHW}, we have
\begin{equation}
\label{schmid decomp}
\widetilde{L}_{\lambda'} = L_{k \zeta'} \otimes F_{-k\zeta'} \cong \bigoplus_{k \geq m_1 \geq \cdots \geq m_r \geq 0} F_{-(m_1\gamma_1 + \cdots + m_r \gamma_r)}
\end{equation}
as a $\k'$-module, where $\gamma_1 < \cdots < \gamma_r$ are Harish-Chandra's strongly orthogonal noncompact roots for $\mathfrak{g}'$, with $\gamma_1 \in \Pi$. Expanding the $\gamma_i$ into standard coordinates, we obtain the following well-known specializations (recall that $\lambda'$ depends on $k$ in Table~\ref{table:WC}):
\begin{equation}
\label{Schmid I-IIIa}
\renewcommand{\arraystretch}{1.5}
\begin{array}{lll}
\text{Type I:} & \widetilde{L}_{\lambda'} \cong \bigoplus_\nu (\F{\nu}{p})^* \otimes \F{\nu}{q}, & \nu \in \Par(\min\{p,q\} \times k).\\
\text{Type II:} & \widetilde{L}_{\lambda'} \cong \bigoplus_\nu (\F{\nu}{n})^*, & \nu \in \Par(n \times 2k) \text{ with even rows.}\\
\end{array}
\end{equation}
Each summand in~\eqref{schmid decomp} contributes $\dim(F_{-(m_1 \gamma_1 + \cdots + m_r \gamma_r)})t^{m_1 + \cdots + m_r}$ to $H_{\widetilde{L}'}(t)$. Since $\nu^* = -\sum_i m_i \gamma_i$, it is easy to check that in Type I, we have $|\nu| = \sum_i m_i$, while in Type II we have $|\nu| = 2\sum_i m_i$. We compute the GK dimension $d$ using Proposition~\ref{prop:BGG}:
\begin{equation*}
\renewcommand{\arraystretch}{1.5}
\begin{array}{ll}
\text{Type I:} & d = PQ - pq = (p+k)(q+k) - pq = k(p+q+k).\\
\text{Type II:} & d = \binom{N}{2} - \binom{n+1}{2} = \binom{n+2k+1}{2} - \binom{n+1}{2} = k(2n+2k+1).
\end{array}
\end{equation*}
\bigskip
\textbf{Types IIIabcd.} Now let $\mathfrak{g}' = \mathsf{D}_n$. Recall that in Types IIIa and IIIb we have $\langle \lambda',\beta'^\vee\rangle \zeta' = k\omega'_n$, while in Types IIIc and IIId we have $\langle \lambda',\beta'^\vee\rangle \zeta' = (2k+1)\omega'_n$. By Theorem 22 in~\cite{Enright-Hunziker}, for $a,b \in \mathbb{N}$, we have the following decomposition as a module for $\k' = \mathfrak{gl}_n$:
\begin{equation}
\label{so_n decomp}
L_{a\omega'_{n-1} + b\omega'_n} \otimes F_{-(a+b)\zeta'} \cong \bigoplus_\nu (\F{\nu}{n})^*,
\end{equation}
where $\nu \in \Par(n \times (a+b))$ such that exactly $a$ columns have odd length. We therefore have the following:
\begin{equation}
\label{so_n IIIa-d}
\renewcommand{\arraystretch}{1.5}
\begin{array}{lllll}
\text{Type IIIa:} & \lambda' = k\omega'_n & \Longrightarrow & a=0, b = k & \Longrightarrow \quad \nu \in \Par(n \times k), \text{ even columns.}\\
\text{Type IIIb:} & \lambda' = k\omega'_{n-1} & \Longrightarrow& a=k, b=0 & \Longrightarrow \quad \nu \in \Par(n \times k), \text{ odd columns.}\\
\text{Type IIIc:} & \lambda' = (2k+1)\omega'_{n} &\Longrightarrow& a=0, b= 2k+1 &\Longrightarrow \quad \nu \in \Par(n \times (2k+1)), \text{ even columns.}\\
\text{Type IIId:} & \lambda' = (2k+1)\omega'_{n-1} &\Longrightarrow& a=2k+1, b=0 &\Longrightarrow \quad \nu \in \Par(n \times (2k+1)), \text{ odd columns.}\\
\end{array}
\end{equation}
(As before, the phrase ``even/odd columns'' means that all column lengths are even/odd.) Each summand in~\eqref{so_n decomp} contributes $\dim(\F{\nu}{n})t^{|\nu|/2}$ to $H_{\widetilde{L}'}(t)$. We compute the GK dimension $d$ using Proposition~\ref{prop:BGG}:
\[
\renewcommand{\arraystretch}{1.5}
\begin{array}{ll}
\text{Types IIIa and IIIb:} & d = \binom{N+1}{2} - \binom{n}{2} = \binom{n+k}{2} - \binom{n}{2} = k(2n+k-1)/2.\\
\text{Types IIIc and IIId:} & d = \binom{N}{2} - \binom{n}{2} = \binom{n+2k}{2} - \binom{n}{2} = k(2n+2k-1).
\end{array}
\]
This completes the proof.
\end{proof}
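The binomial manipulations yielding the GK dimensions in the proof above are routine; as a sanity check, the following sketch verifies all four closed forms over a range of parameters.

```python
# Sanity check of the four GK-dimension computations in the proof above.
from math import comb

for n in range(1, 10):
    for k in range(8):
        # Type I (with p, q ranging independently of n):
        for p in range(1, 6):
            for q in range(1, 6):
                assert (p + k) * (q + k) - p * q == k * (p + q + k)
        # Type II:
        assert comb(n + 2 * k + 1, 2) - comb(n + 1, 2) == k * (2 * n + 2 * k + 1)
        # Types IIIa and IIIb (doubled to stay in integers):
        assert 2 * (comb(n + k, 2) - comb(n, 2)) == k * (2 * n + k - 1)
        # Types IIIc and IIId:
        assert comb(n + 2 * k, 2) - comb(n, 2) == k * (2 * n + 2 * k - 1)
```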
\subsection{Hilbert series for modules of invariants and semi-invariants}
In Types I--IIIa, $H_{\widetilde{L}}(t)$ is the Hilbert series of the determinantal variety containing the matrices in $\M_{p,q}$, $\AM_n$, or $\SM_n$ with rank at most $k$, $2k$, or $k$, respectively. More specifically, the determinantal variety is the associated variety of $L_\lambda$, which is called the \emph{$k$th Wallach representation}; see~\cites{EW,Enright-Hunziker,EHP}. The Hilbert series of these determinantal varieties can also be interpreted combinatorially from a Stanley decomposition of the coordinate ring; this decomposition is obtained from the Bj\"orner shelling \cite{Bjorner} of the $k$th order complex on the poset of matrix coordinates, and can be understood entirely in terms of non-intersecting lattice paths and a modified RSK correspondence. Details can be found in \cites{Sturmfels,Krattenthaler}, \cite{Herzog}, and \cite{Conca} for Types I, II, and IIIa, respectively.
From the perspective of classical invariant theory, let $\mathfrak{g}$ be of Type $\mathsf{A}$, $\mathsf{C}$, or $\mathsf{D}$, and let $H$ be the complex classical group such that $(H, \mathfrak{g})$ is a dual pair (in the sense of Howe duality); hence $H$ is $\GL_k$, $\Or_k$, or $\Sp_{2k}$, respectively. Let $V$ be the defining representation of $H$, and let either $W = (V^*)^{\oplus P} \oplus V^{\oplus Q}$ (if $\mathfrak{g} = \mathsf{A}_{P+Q-1}$) or $W = V^{\oplus N}$ (if $\mathfrak{g} = \mathsf{C}_N$ or $\mathsf{D}_N$). If $\lambda$ is as in Types I--IIIa, then as $\mathfrak{g}$-modules, we have
\[
\widetilde{L}_\lambda \cong \mathbb C[W]^H \coloneqq \{ f \in \mathbb C[W] \mid f(hw) = f(w) \text{ for all } w \in W, \: h \in H\},
\]
where the right-hand side is called the \emph{algebra of invariants}. Since the fundamental invariants are quadratic (by the first fundamental theorem of classical invariant theory), it is customary to write
\[
H_{\mathbb C[W]^{H}}(t) = H_{\widetilde{L}}(t^2).
\]
In particular, in Types IIIb and IIId, where $P(t)$ has half-integer exponents, $P(t^2)$ becomes a true polynomial.
If $\mathfrak{g} = \mathsf{C}_N$ and $H = \Or_k$, with $\lambda$ as in Type IIIb, then as $\mathfrak{g}$-modules we have
\[
\widetilde{L}_\lambda \cong \mathbb C[W]^{H,\det} \coloneqq \{ f \in \mathbb C[W] \mid f(hw) = \det h \cdot f(w) \text{ for all } w \in W, \: h \in H \},
\]
where the right-hand side is called the \emph{module of semi-invariants} with respect to the character $\chi = \det$.
\begin{ex}
In Type IIIb, let $N=3$ and $k=1$, so that $H=\Or_1 = \{\pm 1\}$. The module $\mathbb C[W]^{H,\det}$ of semi-invariants is the span of the odd-degree monomials in $\mathbb C[x_1,x_2,x_3]$. In Table~\ref{table:Hilbert-series}, we have $m=0$, so that $n=3$ and $d=3$. Since $\nu$ ranges over partitions in $\Par(3 \times 1)$ with columns of odd length, the sum over $\nu$ ranges over the two partitions $(1)$ and $(1,1,1)$. We observe that $\F{(1)}{3}=\mathbb C^3$ and $\F{(1,1,1)}{3} = \bigwedge^3\mathbb C^3$, with dimensions $3$ and $1$, respectively. Hence from Table~\ref{table:Hilbert-series}, we have the following Hilbert series (after replacing $t$ by $t^2$):
\[
H_{\mathbb C[W]^{H,\det}}(t) = \frac{3t+t^3}{(1-t^2)^3}.
\]
This Hilbert series reflects the fact that $\mathbb C[W]^{H,\det}$ is a free module over $\mathbb C[x_1^2, x_2^2, x_3^2]$ with three generators $x_1,$ $x_2$, $x_3$ of degree 1, and one generator $x_1 x_2 x_3$ of degree 3.
\end{ex}
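This Hilbert series can be confirmed by direct monomial counting. The sketch below (illustrative only) checks that the coefficient of $t^d$ in $(3t+t^3)/(1-t^2)^3$ equals the number of degree-$d$ monomials in three variables when $d$ is odd, and vanishes when $d$ is even.

```python
# Check (3t + t^3)/(1 - t^2)^3 against a direct count of odd-degree monomials.
from math import comb

def monomials(d):
    """Number of monomials of total degree d in x1, x2, x3."""
    return comb(d + 2, 2)

def coeff(d):
    """Coefficient of t^d in (3t + t^3)/(1 - t^2)^3,
    via 1/(1 - t^2)^3 = sum_m C(m+2, 2) t^(2m)."""
    return sum(c * comb((d - s) // 2 + 2, 2)
               for s, c in ((1, 3), (3, 1)) if d >= s and (d - s) % 2 == 0)

for d in range(20):
    assert coeff(d) == (monomials(d) if d % 2 == 1 else 0)
```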
\subsection{Generalized Littlewood identities}
Recall from Section~\ref{section:ID's and BGG} that the classical identities~\eqref{Dual-Cauchy},~\eqref{Littlewood-C}, and~\eqref{Littlewood-D} can each be interpreted as the Euler characteristic of the BGG resolution of the trivial representation of $\mathfrak{g}'$ in Type I, II, and III, respectively. In this subsection, by combining our results in Tables~\ref{table:WC} and~\ref{table:Hilbert-series}, we generalize these identities by considering the BGG resolution of the finite-dimensional $\mathfrak{g}'$-module $\widetilde{L}_{\lambda'}$. We first point out one of Littlewood's identities~\cite{Littlewood}*{(11.9;5)} that has not yet appeared in this paper:
\begin{equation}
\label{Littlewood-B}
\tag{IIIab}
\prod_i (1-x_i) \prod_{i<j}(1-x_i x_j) = \sum_\pi (-1)^{(|\pi|+\rk \pi)/2} s_\pi(x_1, \ldots, x_n),
\end{equation}
where the sum is over all self-conjugate partitions $\pi = (\alpha|\alpha)$ with $\alpha_1 < n$. See~\cite{Macdonald}*{Ex.~5.9} for an interpretation of~\eqref{Littlewood-B} in terms of the root system for Type $\mathsf{B}$.
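Identity~\eqref{Littlewood-B} is straightforward to test numerically. The sketch below (illustrative only) evaluates both sides at a generic integer point for $n=3$, computing each Schur polynomial via the bialternant formula and summing over the self-conjugate partitions in the $n \times n$ box.

```python
# Numerical check of the Littlewood identity (IIIab) for n = 3.
from fractions import Fraction
from itertools import permutations, product
from math import prod

def det(M):
    n = len(M)
    return sum((-1) ** sum(p[i] > p[j] for i in range(n) for j in range(i + 1, n))
               * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

def schur(part, x):
    """Schur polynomial via the bialternant formula a_{lam+delta}/a_delta."""
    n = len(x)
    lam = list(part) + [0] * (n - len(part))
    num = det([[Fraction(xi) ** (lam[j] + n - 1 - j) for j in range(n)] for xi in x])
    den = det([[Fraction(xi) ** (n - 1 - j) for j in range(n)] for xi in x])
    return num / den

def conjugate(p):
    return [sum(1 for x in p if x >= j) for j in range(1, (p[0] + 1) if p else 1)]

n, x = 3, (2, 3, 5)
lhs = prod(1 - xi for xi in x) * prod(1 - x[i] * x[j]
                                      for i in range(n) for j in range(i + 1, n))

seen, rhs = set(), 0
for t in product(range(n + 1), repeat=n):          # partitions in the n x n box
    p = tuple(sorted((a for a in t if a > 0), reverse=True))
    if p in seen or list(p) != conjugate(list(p)):  # keep self-conjugate p only
        continue
    seen.add(p)
    rank = sum(1 for i, a in enumerate(p, start=1) if a >= i)
    rhs += (-1) ** ((sum(p) + rank) // 2) * schur(p, x)

assert lhs == rhs
```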
\begin{theorem}
\label{thm:new IDs}
For each $k \in \mathbb{N}$, we have the three identities in Table~\ref{table:identities}. Moreover,
\begin{itemize}
\item upon setting $k=0$, our identities reduce to the classical identities~\eqref{Dual-Cauchy},~\eqref{Littlewood-C}, and~\eqref{Littlewood-D};
\item upon setting $k=1$, the sum of our identities in Types IIIa and IIIb reduces to the Littlewood identity~\eqref{Littlewood-B}.
\end{itemize}
\end{theorem}
\begin{table}[ht]
\centering
\input{Table_IDs.tex}
\caption{Identities generalizing the classical identities~\eqref{Dual-Cauchy}--\eqref{Littlewood-D}. The shorthand $\mathbf{x}$ and $\mathbf{y}$ is as in Table~\ref{table:Type123}.}
\label{table:identities}
\end{table}
A word on notation: the identities in Table~\ref{table:identities} can be stated without any reference to congruent blocks or Enright--Shelton reduction, and so we have omitted the prime symbol on the partitions $\pi$. We therefore continue this convention in the proof below, despite using the fact that these partitions are elements of the poset $\tL(\mathcal{B}_{\lambda'})$ from earlier in the paper. Likewise, we omit the prime symbol on the weights $\mu$, which earlier we denoted by $\mu' \in \Lambda(\mathcal{B}_{\lambda'})$.
\begin{proof}
On one hand, by using~\eqref{Schmid I-IIIa} and~\eqref{so_n IIIa-d}, we can easily write down the character of $\widetilde{L}_{\lambda'}$ as the sum of Schur polynomials ranging over certain partitions $\nu$; this is the sum on the left-hand side of the identities in Table~\ref{table:identities}.
On the other hand, $\ch \widetilde{L}_{\lambda'}$ must equal the alternating sum of the characters of the parabolic Verma modules occurring in the BGG resolution for $\widetilde{L}_{\lambda'}$. Since tensoring with a $1$-dimensional $\k'$-module is an exact functor, we can obtain the BGG resolution of $\widetilde{L}_{\lambda'}$ from that of $L_{\lambda'}$, by replacing each $N_{\mu}$ with $\widetilde{N}_{\mu} \coloneqq N_{\mu} \otimes F_{-\langle \lambda',\beta'^\vee\rangle \zeta'}$. Then
\begin{equation}
\label{Etilde char ID}
\ch \widetilde{L}_{\lambda'} = \sum_{\mu \in \Lambda(\mathcal{B}_{\lambda'})} (-1)^{\ell(\mu)}\ch \widetilde{N}_{\mu},
\end{equation}
where $\ell(\mu)$ is defined to be $\ell(w)$ for the unique $w \in \prescript{\k'\!}{}{\mathcal{W}}$ such that $\mu = w \cdot \lambda'$. As noted above, the set $\{\widetilde{N}_{\mu} \mid \mu \in \Lambda(\mathcal{B}_{\lambda'})\}$ of Verma modules occurring in~\eqref{Etilde char ID} equals the set $\{N_{\pi^*} \mid \pi^* \in \tL(\mathcal{B}_{\lambda'})\}$, which we understand completely from Table~\ref{table:WC}. Hence we can rewrite~\eqref{Etilde char ID} as
\begin{equation}
\label{Etilde char ID final}
\ch \widetilde{L}_{\lambda'} = \sum_{\pi^* \in \tL(\mathcal{B}_{\lambda'})} (-1)^{\ell(\pi)} \ch N_{\pi^*}
\end{equation}
where $\ell(\pi)$ is defined to be $\ell(\mu)$ for the unique $\mu \in \Lambda(\mathcal{B}_{\lambda'})$ that is the preimage of $\pi^*$.
Thus, by~\eqref{length-size-diagram}, computing $\ell(\pi)$ is just a matter of recovering $|[\Phi_w]|$ from $\rows\,\st{w}_{\lambda'}$, which we compute in Table~\ref{table:identities}. Hence the right-hand side of each identity in the table is given by~\eqref{Etilde char ID final}, except that we multiply both sides by $\ch S(\mathfrak{p}^-)$, found in Table~\ref{table:Type123}.
Setting $k=0$ causes the sum $\sum_\nu$ to become empty. For Types I and II, we then immediately recover the classical identities~\eqref{Dual-Cauchy} and~\eqref{Littlewood-C}. In Types IIIa and IIIb, the $k=0$ case is degenerate because the $\pi$'s are simply the stacked diagrams $\st{w}$, as in the proof of Proposition~\ref{prop:BGG}; hence the condition on the parity of the rank of $\pi$ disappears, and the two identities collapse into the Littlewood identity~\eqref{Littlewood-D}.
Setting $k=1$ in Types IIIa and IIIb, we see that $\nu$ must be a single column and so $s_\nu(\mathbf{x})$ is an elementary symmetric polynomial; therefore $\sum_{\nu} s_\nu(\mathbf{x}) = \prod_i (1+x_i)$. Adding these two identities together, we obtain
\begin{equation}
\label{almost Littlewood IIIab}
\prod_{i < j} (1-x_i x_j) \prod_{i} (1+x_i) = \sum_\pi(-1)^{(|\pi| - k \rk \pi)/2} s_\pi(\mathbf{x}),
\end{equation}
where (since $m=k-1 = 0$) the sum ranges over self-conjugate partitions $\pi$, just as in~\eqref{Littlewood-B}. Now let $-\mathbf{x} \coloneqq (-x_1, \ldots, -x_n)$. Since $s_\pi(\mathbf{x})$ is homogeneous of degree $|\pi|$, we have $s_\pi (-\mathbf{x}) = (-1)^{|\pi|} s_\pi(\mathbf{x})$. Moreover, since $\frac{|\pi|+ \rk \pi}{2} + \frac{|\pi| - \rk \pi}{2} = |\pi|$, the two addends have the same parity if and only if $|\pi|$ is even, and therefore
\[
(-1)^{(|\pi| - \rk \pi)/2}s_\pi(-\mathbf{x}) = (-1)^{(|\pi|+ \rk \pi)/2} s_\pi(\mathbf{x}).
\]
Thus, if we make the substitutions $x_i \mapsto -x_i$ in~\eqref{almost Littlewood IIIab}, we recover the Littlewood identity~\eqref{Littlewood-B}.
\end{proof}
\begin{rem}
The analogous identities corresponding to Types IIIc and IIId reduce to those in Types IIIa and IIIb, upon replacing $2k$ by $k-1$. We can, however, view the identities for Types IIIa and IIIb as the two extreme special cases of a more general identity, since they follow from the first two special cases~\eqref{so_n IIIa-d} of the general branching rule~\eqref{so_n decomp}. Specifically, for $a,b \in \mathbb{N}$, we have
\begin{equation}
\label{identity IIIab}
\prod_{i < j} (1 - x_i x_j) \sum_\nu s_\nu(\mathbf{x}) = \sum_\pi (-1)^{\ell(\pi)} s_\pi(\mathbf{x}),
\end{equation}
where $\nu \in \Par(n\times (a+b))$ has $a$ odd columns and $b$ even columns, and where the right-hand sum ranges over all partitions of the form
\[
\pi =
\begin{cases}
(\alpha+a+b-1 \mid \alpha) \text{ with odd rank}, & a=0,\\
(\alpha+a+b-1 \mid \alpha) \text{ with even rank}, & b=0,\\
(\alpha+a+b-1,a-1 \mid\alpha,0) \text{ with odd rank} & \\ \qquad \text{or} & \\
(\alpha+a+b-1, b-1 \mid \alpha,0) \text{ with even rank},
& a,b>0,
\end{cases}
\]
for $\alpha_1 < n$. (When $a=b=0$, the first two cases collapse into the single case in the Littlewood identity~\eqref{Littlewood-D}, as mentioned above.) The Frobenius symbols in the last case describe an additional arm, whose length is either $a-1$ or $b-1$. In \eqref{identity IIIab} we also have the more complicated formula
\[
\ell(\pi) = \frac{|\pi|- 2(a+b)\lfloor\frac{\rk \pi}{2}\rfloor - (-1)^{\rk \pi} a}{2}.
\]
\end{rem}
\section{Open problems}
\label{sec:open probs}
\begin{prob}
\label{prob:bijection}
Find a bijective proof for Theorems~\ref{theorem:ID-dim-GLn} and~\ref{thm:dim GLn pairs}.
\end{prob}
Let $\SSYT(\lambda,n)$ denote the set of all semistandard Young tableaux with shape $\lambda$ and maximum entry $n$. A bijective proof of Theorem~\ref{thm:dim GLn pairs} would exhibit a bijection
\begin{equation}
\label{bijection-cor}
\SSYT(\pi, n)\longleftrightarrow \SSYT(\pi', n+1)
\end{equation}
for each $n \in \mathbb N$ and for each $\pi = (\alpha + 1 \mid \alpha)$ with $\alpha_1 < n$. As a first step toward a solution, we can exhibit a bijection
\begin{equation}
\label{bijection}
\bigcup_\pi \: \SSYT(\pi,n) \longleftrightarrow \bigcup_\pi \: \SSYT(\pi', n+1),
\end{equation}
where both sums range over partitions $\pi = (\alpha+1 \mid \alpha)$ with $\alpha_1 < n$. Specifically, we have bijections
\begin{equation}
\label{bijections}
\bigcup_\pi \: \SSYT(\pi,n) \longleftrightarrow
\mathcal{G}_n \longleftrightarrow
\mathcal{G}'_{n+1}
\longleftrightarrow
\bigcup_\pi \: \SSYT(\pi', n+1),
\end{equation}
where
\begin{align*}
\mathcal{G}_n & = \{ \text{graphs on $n$ labeled vertices \emph{with} loops but \emph{without} multiple edges} \}, \\
\mathcal{G}'_{n+1} & = \{ \text{graphs on $n+1$ labeled vertices \emph{without} loops and \emph{without} multiple edges} \}.
\end{align*}
The easiest of the three bijections in~\eqref{bijections} is $\mathcal{G}_n \longrightarrow \mathcal{G}'_{n+1}$: to each graph in $\mathcal{G}_n$, we add a new vertex $v_{n+1}$, and then resolve each loop $(v_i, v_i)$ by replacing it with an edge $(v_i, v_{n+1})$. The inverse begins with a graph in $\mathcal{G}'_{n+1}$, deletes the vertex $v_{n+1}$, and then replaces each edge $(v_i,v_{n+1})$ with the loop $(v_i, v_i)$.
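For instance, for $n = 2$, the graph in $\mathcal{G}_2$ with edge set $\{(v_1,v_1),\,(v_1,v_2)\}$ corresponds under this map to the loopless graph in $\mathcal{G}'_3$ with edge set $\{(v_1,v_3),\,(v_1,v_2)\}$, and conversely.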
The other two bijections in~\eqref{bijections} can be obtained using variations on Schensted row-insertion, both detailed by Burge~\cite{Burge}.
(Burge's method actually yields combinatorial proofs of the Littlewood identities~\eqref{Littlewood-C},~\eqref{Littlewood-D}, and~\eqref{Littlewood-B}, along with their reciprocals.) Similarly to Knuth~\cite{Knuth}, Burge defines an algorithm he calls INSERT 4 (and an inverse algorithm DELETE 4) that gives a bijection $\bigcup_\pi \SSYT(\pi,n) \longleftrightarrow \mathcal{G}_n$, where $\pi$ is of the form $(\alpha + 1 \mid \alpha)$.
Burge defines another algorithm INSERT 3 (and its inverse DELETE 3) that gives a bijection $\bigcup_\pi \SSYT(\pi', n+1) \longleftrightarrow \mathcal{G}'_{n+1}$, with $\pi$ as before. Upon composing all three bijections in~\eqref{bijections}, we therefore obtain the bijection~\eqref{bijection}.
\begin{ex}
Let $n=3$, and $\pi = (2|1)$. Then the bijection~\eqref{bijection} restricted to $\SSYT(\pi, 3)$ yields the following correspondence:
\renewcommand{\arraystretch}{1.5}
\ytableausetup{aligntableaux=top}
\begin{center}
\resizebox{\linewidth}{!}{
\begin{tabular}{lllllllllllllll}
\ytableaushort{111,2} & \ytableaushort{111,3} & \ytableaushort{112,2} & \ytableaushort{112,3} &
\ytableaushort{113,2} &
\ytableaushort{113,3} &
\ytableaushort{122,2} &
\ytableaushort{122,3} &
\ytableaushort{123,2} &
\ytableaushort{123,3} &
\ytableaushort{133,2} &
\ytableaushort{133,3} &
\ytableaushort{222,3} &
\ytableaushort{223,3} &
\ytableaushort{233,3} \\
$\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ & $\updownarrow$ \\
\ytableaushort{11,2,4} & \ytableaushort{11,3,4} & \ytableaushort{14,2,4} & \ytableaushort{11,2,3} & \ytableaushort{14,2,3} & \ytableaushort{14,3,4} & \ytableaushort{12,2,4} & \ytableaushort{12,3,4} &
\ytableaushort{12,2,3} &
\ytableaushort{13,2,4} &
\ytableaushort{13,2,3} & \ytableaushort{13,3,4} & \ytableaushort{22,3,4} & \ytableaushort{24,3,4} & \ytableaushort{23,3,4}
\end{tabular}
}
\end{center}
\end{ex}
If the bijection~\eqref{bijection} preserved the conjugate shapes of individual tableaux, we would have the desired bijection~\eqref{bijection-cor}. Unfortunately, this is not the case. In general, the bijection~\eqref{bijection} does \textit{not} restrict to a bijection~\eqref{bijection-cor} for each individual shape $\pi$, as evidenced by the following pairing:
\[
\ytableausetup{centertableaux}
\ytableaushort{112,223} \longleftrightarrow \ytableaushort{124,2,3,4}
\]
We have not yet found a way to modify the bijections in~\eqref{bijections} in order to yield a bijection~\eqref{bijection} that preserves conjugate shapes.
\begin{prob}
Determine all equalities $\dim \F{\sigma}{s} = \dim \F{\tau}{t}$.
\end{prob}
\begin{figure}[t]
\centering
\input{Sporadic_example.tex}
\caption{A sporadic example of congruent blocks, where corresponding poset elements are not given by conjugate partitions. On the left we have $\tL(\mathcal{B}_{\lambda})$, for $\mathfrak{g} = \mathsf{D}_6$ with $\lambda = (\neg{1}, \neg{1}, \neg{1}, \neg{1}, \neg{1}, \neg{2})$. On the right we have $\tL(\mathcal{B}_{\lambda'})$, for $\mathfrak{g}' = \mathsf{D}_4$ with $\lambda' = (1,1,0,0)$.}
\label{fig:sporadic example}
\end{figure}
Theorem~\ref{thm:dim GLn pairs} constitutes a first step toward the solution of this problem, in giving the infinite families where $\sigma = (\alpha+m\mid \alpha)$, $\tau = \sigma'$, and $t = s+m$. This problem is similar in flavor to the question of classifying the equalities $\binom{a}{b} = \binom{c}{d}$ of binomial coefficients. There is an infinite family of these discovered by Lind~\cite{Lind} and Singmaster~\cite{Singmaster}; moreover, de Weger~\cite{deWeger} has conjectured that this family comprises \emph{all} nontrivial equalities, with the exception of seven sporadic instances.
Closely related is the problem of classifying all instances of congruent blocks (in the context of Hermitian symmetric pairs). Outside the families in Table~\ref{table:WC}, there exists at least one sporadic example of congruent blocks $\mathcal{B}_{\lambda}$ and $\mathcal{B}_{\lambda'}$, for $\mathfrak{g} = \mathsf{D}_6$ with $\lambda = -2\omega_6 + \omega^*_1 = (\neg{1},\neg{1},\neg{1},\neg{1},\neg{1},\neg{2})$. Applying Enright--Shelton reduction, we obtain $\mathfrak{g}'=\mathsf{D}_4$ with $\lambda' = (1,1,0,0)$. Upon twisting the weights in $\Lambda(\mathcal{B}_{\lambda})$ and $\Lambda(\mathcal{B}_{\lambda'})$ by subtracting $-2\omega_6$ and $2\omega_4$, respectively, we obtain the posets $\tL(\mathcal{B}_{\lambda})$ and $\tL(\mathcal{B}_{\lambda'})$ shown in Figure~\ref{fig:sporadic example}. We label each poset element with the dimension of the $\k$-module (or $\k'$-module) with the corresponding highest weight. Note that the blocks $\mathcal{B}_{\lambda}$ and $\mathcal{B}_{\lambda'}$ are indeed congruent, although their corresponding poset elements are not given by conjugate partitions.
\subsection*{Acknowledgments} Originally our proofs of Theorems~\ref{theorem:ID-dim-GLn} and \ref{thm:dim GLn pairs} relied on the Weyl dimension formula; we would like to thank Daniel Herden for suggesting a simpler argument via the hook--content formula.
\bibliographystyle{alpha}
Instituto de Física "Gleb Wataghin" - IFGW
IFGW - Artigos e Outros Documentos
Title: Constraints on large extra dimensions from the MINOS experiment
Author: Adamson, P.
Anghel, I.
Aurisano, A.
Barr, G.
Bishai, M.
Blake, A.
Bock, G. J.
Bogert, D.
Cao, S. V.
Carroll, T. J.
Castromonte, C. M.
Chen, R.
Childress, S.
Coelho, J. A. B.
Corwin, L.
Cronin-Hennessy, D.
de Jong, J. K.
De Rijck, S.
Devan, A. V.
Devenish, N. E.
Diwan, M. V.
Escobar, C. O.
Evans, J. J.
Falk, E.
Feldman, G. J.
Flanagan, W.
Frohne, M. V.
Gabrielyan, M.
Gallagher, H. R.
Germani, S.
Gomes, R. A.
Goodman, M. C.
Gouffon, P.
Graf, N.
Gran, R.
Grzelak, K.
Habig, A.
Hahn, S. R.
Hartnell, J.
Hatcher, R.
Holin, A.
Huang, J.
Hylen, J.
Irwin, G. M.
Isvan, Z.
James, C.
Jensen, D.
Kafka, T.
Kasahara, S. M. S.
Koizumi, G.
Kordosky, M.
Kreymer, A.
Lang, K.
Ling, J.
Litchfield, P. J.
Lucas, P.
Mann, W. A.
Marshak, M. L.
Mayer, N.
McGivern, C.
Medeiros, M. M.
Mehdiyev, R.
Meier, J. R.
Messier, M. D.
Miller, W. H.
Mishra, S. R.
Sher, S. M.
Moore, C. D.
Mualem, L.
Musser, J.
Naples, D.
Nelson, J. K.
Newman, H. B.
Nichol, R. J.
Nowak, J. A.
O'Connor, J.
Orchanian, M.
Pahlka, R. B.
Paley, J.
Patterson, R. B.
Pawloski, G.
Perch, A.
Pfutzner, M. M.
Phan, D. D.
Phan-Budd, S.
Plunkett, R. K.
Poonthottathil, N.
Qiu, X.
Radovic, A.
Rebel, B.
Rosenfeld, C.
Rubin, H. A.
Sail, P.
Sanchez, M. C.
Schneps, J.
Schreckenberger, A.
Schreiner, P.
Sharma, R.
Sousa, A.
Tagg, N.
Talaga, R. L.
Thomas, J.
Thomson, M. A.
Tian, X.
Timmons, A.
Todd, J.
Tognini, S. C.
Toner, R.
Torretta, D.
Tzanakos, G.
Urheim, J.
Vahle, P.
Viren, B.
Weber, A.
Webb, R. C.
White, C.
Whitehead, L.
Whitehead, L. H.
Wojcicki, S. G.
Zwaska, R.
Abstract: We report new constraints on the size of large extra dimensions from data collected by the MINOS experiment between 2005 and 2012. Our analysis employs a model in which sterile neutrinos arise as Kaluza-Klein states in large extra dimensions and thus modify the neutrino oscillation probabilities due to mixing between active and sterile neutrino states. Using Fermilab's Neutrinos at the Main Injector beam exposure of 10.56 x 10(20) protons on target, we combine muon neutrino charged current and neutral current data sets from the Near and Far Detectors and observe no evidence for deviations from standard three-flavor neutrino oscillations. The ratios of reconstructed energy spectra in the two detectors constrain the size of large extra dimensions to be smaller than 0.45 mu m at 90% C.L. in the limit of a vanishing lightest active neutrino mass. Stronger limits are obtained for nonvanishing masses.
Subject: Neutrino Oscillations
Lepton Charge
Neutrino oscillations, Detectors, Standard model (Nuclear physics)
Country: United States
Publisher: American Physical Society
Citation: Physical Review D, American Physical Society, v. 94, p. 111101, 2016.
Identifier DOI: 10.1103/PhysRevD.94.111101
Address: https://journals.aps.org/prd/abstract/10.1103/PhysRevD.94.111101
Appears in Collections: IFGW - Artigos e Outros Documentos
THE ELECTRON PHASE SHIFT
The lambda / 2 phase shift.
The lambda spaced lines on the wave peaks switch to valleys on the right.
The electron phase shift had to be explained carefully because it seriously modifies the way matter acts and reacts. Actually, this stunning phenomenon allows one to explain why the action and reaction law holds true. It is all about Matter Mechanics.
The full wavelength electron core.
In July 2003, I found Mr. Milo Wolff's web site. I noticed that the core in the center of his electron was a full wavelength in diameter. I immediately wrote a computer program in order to check this and I soon found that the core for spherical standing waves should indeed be a full lambda wide as shown above.
This was also very clearly demonstrated by Mr. Jocelyn Marcotte thanks to his optimized 3-D Virtual Medium. Mr. Marcotte also showed that any spherical incoming Gaussian impulse simply reproduces the same normal distribution pattern in the center. Surprisingly, the well-known two-peak "Ricker wavelet" (which is used for studying earthquakes, and which is not named after "Richter"), taken as a convergent spherical system, produces the same Gaussian distribution pattern in the center.
In addition, the electron amplification is caused by a lens effect. The resulting energy is extracted from aether waves and it is permanently radiated all around. This leads us to four important points:
1 - The electron is a pulsating wave center. Electrons constantly radiate energy.
2 - The electron is a finite system. In my opinion its standing waves progressively fade out and do not expand significantly outside a one meter radius sphere, and possibly much less.
3 - Spherical standing waves are not made out of in-waves and out-waves. This is indeed a very useful and effective method for displaying standing waves. However, this point of view absolutely does not correspond to what is really going on. Any standing wave node is a zero energy point where no energy can pass through. The medium substance simply moves back and forth inside a lambda / 2 space, and this phenomenon obeys Hooke's law.
4 - The electron is made of spherical standing waves which are superimposed on outgoing traveling waves whose amplitude is nil at the center. The result is the not-so-well-known phenomenon of partially standing waves. I made the animation below in order to show that standing waves can slowly transform into traveling waves, with a transitional partially standing wave state between them:
The electron is a pulsation wave center because its standing waves progressively transform to regular traveling waves.
The emitted energy is borrowed from all aether waves traveling in the vicinity because of a lens effect.
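Point 4 above can be illustrated numerically. Here is a minimal one-dimensional sketch (my own, not the author's program) of the superposition of a forward and a backward wave: equal amplitudes yield a pure standing wave, a weaker backward wave yields a partially standing wave whose nodes no longer drop to zero, and no backward wave at all yields a flat traveling-wave envelope.

```python
import numpy as np

def envelope_extrema(a_fwd, a_bwd, k=2*np.pi, omega=2*np.pi):
    """Superpose a forward wave a_fwd*sin(k*x - omega*t) and a backward
    wave a_bwd*sin(k*x + omega*t), then measure the amplitude envelope
    over one full period."""
    x = np.linspace(0.0, 1.0, 400)              # one wavelength for k = 2*pi
    t = np.linspace(0.0, 1.0, 200, endpoint=False)
    X, T = np.meshgrid(x, t)
    y = a_fwd*np.sin(k*X - omega*T) + a_bwd*np.sin(k*X + omega*T)
    env = np.abs(y).max(axis=0)                 # envelope as a function of x
    return env.max(), env.min()

hi, lo = envelope_extrema(1.0, 1.0)    # pure standing wave: about 2 and 0
hi2, lo2 = envelope_extrema(1.0, 0.5)  # partially standing: about 1.5 and 0.5
hi3, lo3 = envelope_extrema(1.0, 0.0)  # pure traveling wave: flat envelope
```

The envelope minimum equals the amplitude difference and the maximum equals the amplitude sum, which is exactly the transitional behavior between standing and traveling waves described above.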
Now, let us suppose that two electrons are very close together. Then only pure standing waves, not partially standing waves, are present. It becomes obvious that the wave addition, especially along the axis joining them, may vary.
As a matter of fact, adding or removing just a half-wavelength to the distance produces opposite effects which are clearly visible in the diagrams below.
Intermediate on-axis waves cancel, but the rest add constructively beyond both electrons.
The external radiation pressure produces an attraction effect.
Here, the distance is .5 wavelength shorter and the result is rather a repulsion effect.
Intermediate waves add constructively but they rather cancel beyond both electrons.
So there should be an equilibrium point for a .25 wavelength distance difference.
The important point is that, if the electron core were only a half-lambda wide, this opposite effect would be absent. Waves would add constructively or destructively everywhere.
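The half-wavelength flip described above can be checked with a simple phasor sum. This is a simplified sketch assuming two in-phase point sources on a common axis: beyond both sources, the path difference equals the source separation d, so adding lambda / 2 to d turns constructive addition into cancellation.

```python
import numpy as np

def on_axis_intensity(d, wavelength=1.0):
    """Two in-phase point sources at x = 0 and x = d, observed far away
    on the axis beyond both sources, where the path difference is d.
    Returns |sum of unit phasors|^2."""
    k = 2*np.pi/wavelength
    return abs(1.0 + np.exp(1j*k*d))**2

i_whole = on_axis_intensity(3.0)   # separation = 3 wavelengths
i_shift = on_axis_intensity(3.5)   # add half a wavelength
```

Here `i_whole` comes out at 4 (constructive) while `i_shift` drops to 0 (destructive): the same pair of sources switches between the two radiation patterns purely through a lambda / 2 change in distance.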
The capture phenomenon.
Assuming that axial standing waves are amplified, they radiate energy towards both sides. In the upper diagram situation, there is an inward radiation pressure only and both electrons are pushed towards each other. On the contrary, they are rather repelled in the lower diagram. It becomes obvious that there is an equilibrium point in-between, and as long as the particles are not moving very fast with respect to each other, they must be captured in this position. The electron or positron pair then becomes a quark. It is truly a Wave Structure, and it is responsible for neutrons, protons, and all other more or less stable particles.
Excessive vibrations may destroy the resulting particle, though, and there is surely a privileged distance where this clamping effect is likely to occur. Please note this appears to be a gluonic field. It is not an electrostatic field because the electron standing waves are involved. Electrostatic forces are rather the result of the addition of progressive spherical waves emitted by electrons or positrons.
Because energy is the square of amplitude, which is doubled there, this area basically contains four times more energy than that of one electron. What is more, those standing waves are also amplified by aether waves in the same manner as electrons alone are, and the resulting energy becomes additional mass.
Finally, the gluonic field mass is much greater than that of one electron. It becomes clear that the electron pair which is responsible for the whole quark including the gluonic field becomes almost invisible. In addition, the resulting frequency is no longer that of the electron. So the electrostatic forces cannot work any more, but there is still some place for some other sort of spin, which is known to be fractional.
Most of the proton mass, which is 1836 times that of one electron, is located inside gluonic fields. They are very strong fields of force. The neutron contains only three quarks, each quark being made of two electrons and/or positrons only, but there is most probably an additional positron hidden in the middle of the proton.
The hydrogen atom.
However, the wave addition works quite differently for the electron-positron pair. Assuming that the proton is a neutron containing a positron, this is the case for the hydrogen atom, whose nucleus is one proton only. In such a case, there is a pi/2 phase offset and the wave addition produces the stunning on-axis unidirectional radiation, which is responsible for magnetic fields.
The electron in the presence of a proton generates this unidirectional radiation.
The wave direction is reversed for opposite spin and also for any lambda/2 distance difference.
This phenomenon is the origin of magnetic north and south poles.
Mr. Milo Wolff's onion layers.
My own calculations using both the Huygens Principle and Mr. Wolff's "onion layers" comparison (each layer internal radius is that of the precedent external radius, and the layer thickness is a half-wavelength) lead me to the diagram below. The formulas were given by Mr. Jocelyn Marcotte in 2006.
The electron standing wave amplitude.
This is no longer true for very long distances, where amplitude is nil.
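A short numerical check, assuming the standing wave envelope follows sin(kr)/(kr) (the "equivalent sin(x)/x equation" invoked later for the onion layers): the first node then falls at r = lambda / 2, so the central antinode is indeed a full wavelength in diameter.

```python
import numpy as np

wavelength = 1.0

def amplitude(r):
    """Spherical standing-wave envelope sin(k*r)/(k*r), k = 2*pi/wavelength,
    written with numpy's normalized sinc: sinc(x) = sin(pi*x)/(pi*x)."""
    return np.sinc(2.0*r/wavelength)

r = np.linspace(1e-6, 1.5*wavelength, 100_000)
a = amplitude(r)
first_node = r[np.argmax(np.signbit(a))]  # first radius where the envelope crosses zero
core_diameter = 2.0*first_node            # close to one full wavelength
```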
Mr. Milo Wolff's phase shift.
Those calculations were very satisfactory and all seemed O.K. More similar schemes can also be found on the Internet and may have been elaborated hundreds of years ago. However, Mr. Milo Wolff suggested that this wave system could be an electron.
Mr. Wolff also spoke about a "phase shift". He invoked it in order to justify the electron spin.
However, I do not agree. In my picture, the electron and positron spin is the result of a phase difference, which is well visible in the animated diagram below:
The electron spin.
The –1/2 and +1/2 spins are opposite, yet they both belong to the electron. The electron vs. positron phase is not opposite, there is rather a pi / 2 difference only. Thus, there is no need to invoke a phase shift in order to explain spin.
Such a phase shift seemed to me a quite useless and weird idea. However, my computer programs could easily reveal phases. So I began investigating inside the electron core and I found... a phase shift!
The phase shift is clearly visible below:
The leftward and rightward wave addition produces the standard electron.
One could also say in-waves and out-waves, but both interpretations are wrong from my point of view.
Faster than the speed of light.
Clearly, the wave accelerates inside the spherical central antinode.
Since the beginning I have always thought that the waves' mean speed was constant whether the aether was filled with high energy waves or not. For this reason the speed of light is a constant. It is absolute.
However, the speed of sound, for example, is faster at sea level than at high altitude. The temperature being constant, the sound waves are faster where the air pressure is higher.
From Mr. Wolff's "onion layers" point of view, the spherical wave amplitude should be linked to each layer volume:
Surprisingly, the first onion layer volume is exactly seven (7) times that of the electron core.
The sine wave energy distribution inside each layer leads to the equivalent sin(x)/x equation.
Clearly, as soon as they are penetrating the internal sphere, incoming waves have to deal with a seven times smaller volume (please bear in mind that this is only the mathematical point of view) where the medium compression is very high. But the important point is that the medium is compressed over a full lambda distance, allowing an unusually fast wave speed there.
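The "seven times smaller volume" figure follows from nothing more than the cube law for sphere volumes: with a core of radius lambda / 2 and a first onion layer extending from lambda / 2 to lambda, the ratio is exact.

```python
from fractions import Fraction

# Sphere volumes scale as r**3 (the common 4*pi/3 factor cancels in the ratio).
core_volume = Fraction(1, 2)**3               # core: radius lambda/2
layer_volume = Fraction(1)**3 - core_volume   # first layer: lambda/2 out to lambda
ratio = layer_volume / core_volume
```

Here `ratio` evaluates to exactly 7, matching the figure quoted above.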
The acceleration inside the core is also clearly visible on this animation :
Here, the Huygens' wavelets come from the surface of just one hemisphere, that on the left hand side.
Both hemispheres simply reproduce the unmoving electron.
The radiation pressure.
It becomes clear that any convergent hemispheric wave must move faster than the speed of light inside the core or focus point. Such a situation may be seen as the result of millions of Huygens' wavelets incoming from the inner surface of only one hemisphere. However, this does not occur inside a static, unmoving electron because both hemispheres theoretically produce waves traveling in opposite directions.
This strongly suggests to me that when the wave amplitude is not equal from both sides, the electron central antinode must be pushed away. This motion produces a slight Doppler effect, and it becomes more and more significant until the unequal amplitude situation stops. The result is my Doppler moving electron:
The Doppler moving electron.
Each spherical wave center of curvature stays at rest inside aether.
This animation shows the electron while it is moving at half of the speed of light (beta = .5). The forward-to-backward Doppler wave amplitude ratio then equals:
(1 + b) / (1 - b) = 3
The forward amplitude is 3 while it is 1 backward. The difference (3 – 1 = 2) indicates that the electron core is constantly pushed forward by half of the speed of light. This is a very plausible explanation for inertia. Any moving electron cannot change its speed unless a change in forward or backward wave amplitude occurs.
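The amplitude figures quoted here (3 forward against 1 backward at beta = .5) correspond to a forward-to-backward Doppler amplitude ratio of (1 + beta) / (1 - beta); a small helper makes the check explicit.

```python
def doppler_amplitude_ratio(beta):
    """Forward-to-backward amplitude ratio for a wave center moving at
    beta = v/c, matching the 3-to-1 figure quoted in the text."""
    return (1.0 + beta) / (1.0 - beta)

ratio = doppler_amplitude_ratio(0.5)   # -> 3.0
```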
The "Wave Structure of Matter".
So far as I know, Mr. Milo Wolff first proposed this spherical standing wave as the basic unit for matter.
He told us that the electron and the positron were the same particle.
He discovered the important lambda / 2 phase shift.
He showed that the mass increase was related to the Doppler effect.
He also demonstrated that despite the fact that they may be seen as a point, electrons are actually present inside a rather large space. Thus, a wave interaction becomes possible in such a way that a given number of electrons and positrons are not isolated any more. On the contrary, they can form a structure: a Wave Structure. Obviously, speaking about the Wave Structure of Matter is the next highly relevant step, but unfortunately Mr. Wolff and most of his disciples united under this banner were strangely silent on this. As a matter of fact, they never proposed any wave structure.
I am very disappointed that so many people were wrongly led into some sort of mystic sect. Frankly, philosophy is not the appropriate tool. This is weird.
The most brilliant exception was Mrs. Caroline H. Thompson, who asked the correct question: What is matter? She did propose a "Wave Structure" in the form of a tetrahedron; my own researches did not lead to this hypothesis but it is still possible. She also pointed out that many of Mr. Wolff's ideas were somewhat evanescent. She proposed her own wave theory. She was very convinced that many of today's well accepted ideas such as photons, misunderstood quantum effects, the non existence of the aether, Einstein's Relativity, etc., had to be reviewed. She realized that because of this, the scientific world was led on a blind alley. I also think that many of today's assumptions in physics are definitely false.
Mrs. Caroline Thompson was a very clever person, perhaps the most clever woman ever.
More Pioneers.
I strongly believe that Lorentz was right. His Relativity is true and complete. Space and time are distinct and absolute. Einstein's Special Relativity seems to be true but the postulates are not. Especially, the speed of light is not the same in all frames of reference. Einstein also misled us for 100 years about photons and gravity. His ideas about space-time contraction and/or curvature are ridiculous.
I also agree with Mr. Serge Cabala's ideas. The piston machine on his home page is very interesting. It shows how the Lorentz transformations act on matter. In my opinion this pioneer should be remembered as the first person on this planet who discovered (around 1970) that matter is purely made out of waves. He postulated that aether should exist, and he also showed that Relativity is consistent with the aether.
Mr. Yuri Ivanov discovered in 1990 that standing waves undergo a contraction according to Michelson's calculations. This is of the utmost importance. As far as I know, nobody else was aware of this before him. What is more, he showed that matter using standing waves as bonding forces should also contract for this reason, and that this could explain Michelson's null result and Relativity. Unfortunately, he clearly did not fully understand Lorentz's time equation, and he finally ended up with too severe a contraction.
The true cause.
On July 10, 2006, Mr. Jocelyn Marcotte succeeded in experimenting the moving and unmoving electron thanks to his own 3-D Virtual Medium. It was a great achievement.
The point is that Mr. Marcotte's experiment also clearly demonstrated that, near the electron center, the first spherical node is one wavelength in diameter instead of the normal lambda / 2 length between two successive nodes. Inside the inner spherical node, the additional distance for a theoretical incoming and then outgoing wave is half of a wavelength, which finally causes the lambda / 2 phase shift. But this is only the mathematical point of view.
From a mechanical point of view, though, the first spherical node is totally an obstacle to any wave displacement. Clearly, no energy can pass through it because a standing wave node is a constant zero energy zone. So, a more reasonable hypothesis should be that waves are simply bouncing back and forth between the first and the second spherical node, whose radius is extended to an additional lambda / 4 length as compared to a theoretical node right at the center.
This is the true cause of the electron phase shift.
More proofs.
Please download this program on the Wave Mechanics:
WaveMechanics05.bas WaveMechanics05.exe
This program was intended to show how waves behave in the vicinity of reflectors such as parabolic, elliptic, corner, straight and even three-sided. I am quite sure that this sort of program, which is easily upgradeable, will soon become a must for opticians, acousticians, and radio-electricians. However, I managed to start the program on the elliptic configuration for both the emitter and receiver. In addition, both sections are joined together into a full elliptic reflector.
It is a well known fact that the ellipse's main property is that the distance from one focal point to any point of the ellipse and then to the second focal point is rigorously constant. This apparently unavoidable property led mathematicians to postulate that circular (or spherical for the ellipsoid reflector) waves emitted at the first focal point should reach the second focal point without any distortion.
Well, this proves to be absolutely false, and one must admit that the equations for this are not reliable. It is all about amplitude, not phase. Because the amplitude is clearly higher on the left hand side, the central antinode at the second focal point is not circular any more. It is rather elliptic, and the result is an equivalent node and antinode offset. In addition, the characteristic "partially standing wave" behavior is easily recognizable (see Plane Standing Waves) as a longitudinal motion, which is also observable in the animation shown above (Airy180.gif).
Regular waves can influence standing waves.
This behavior applies to the moving electron and the phase shift calculus then becomes much more complex. Finally, the same behavior also explains radiation pressure as well as inertia.
Until some organization realizes that this most important phenomenon should be tested by means of a physical device, using air, water or a solid homogeneous substance such as quartz, the only experiment available now relies on Philippe Delmotte's and Jocelyn Marcotte's virtual medium, which is a virtual but still highly dependable laboratory. The goal is to show that traveling waves penetrating a circular or spherical standing wave system will introduce some changes. It is important to test this experimentally.
Besides Mr. Marcotte's experiment, this phenomenon is indeed easily verifiable inside air or water by means of a large number of loudspeakers regularly spaced on the internal surface of a sphere. This produces wavelets in accordance with the Huygens Principle, whose addition generates a single incoming spherical wave front.
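A toy numerical version of this loudspeaker arrangement, reduced from a sphere to a 2-D ring of point sources for simplicity (the 1/sqrt(distance) wavelet amplitude is the usual 2-D idealization, assumed here for illustration), shows the wavelets summing coherently into a sharp focus at the center:

```python
import cmath
import math

N = 720          # point sources evenly spaced on a ring (the "loudspeakers")
R = 50.0         # ring radius, in units of the wavelength
k = 2 * math.pi  # wavenumber for a unit wavelength

def field(x, y):
    """Coherent sum of cylindrical wavelets emitted in phase by the N ring sources."""
    total = 0j
    for n in range(N):
        ang = 2 * math.pi * n / N
        d = math.hypot(x - R * math.cos(ang), y - R * math.sin(ang))
        total += cmath.exp(1j * k * d) / math.sqrt(d)
    return abs(total)

center, offside = field(0.0, 0.0), field(3.3, 0.0)
print(round(center, 2), round(offside, 2))
```

At the center all wavelets arrive in phase (every source is exactly R away), so the amplitude approaches N / sqrt(R); a few wavelengths off-center the phases disperse and the field is much weaker, which is the incoming-wavefront focusing the text describes.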
The important point is that traveling waves definitely influence standing waves. Two electrons, each radiating spherical waves all around, will surely influence each other. Clearly, this explains the action and reaction law, and the whole mechanics of matter becomes possible thanks to the radiation pressure.
This flawless demonstration is a very strong argument in favor of the wave nature of matter.
The electron phase shift and the Wave Mechanics.
The electron phase shift is of the utmost importance because it leads to a totally different Wave Mechanics. Magnetic, electrostatic and gluonic fields are especially involved. Now that the electron's additional lambda / 2 wavelength inside its central core is well established, action and reaction through fields of force can finally be explained.
Now, we can understand why Newton's laws hold true.
Matter is made of Waves
Gabriel LaFreniere
Bois-des-Filion in Québec.
On the Internet since September 2002. Last update December 3, 2009. | {"pred_label": "__label__cc", "pred_label_prob": 0.6723069548606873, "wiki_prob": 0.32769304513931274, "source": "cc/2021-04/en_head_0044.json.gz/line169991"} |
high_school_physics | 367,942 | 15.555541 | 1 | Young boy abandoned on bus by grandma, disowned by mom - police
Stock image of a young sad boy at school. (PHOTO: iStock)
A three-year-old boy has been permanently remanded into state care after his mother failed to make an appearance at court or at the police station to claim the child.
"The person who was originally charged with child endangerment turned out to be the grandmother, but the mother was asked several times to come to court or the police station, and her response was she 'no live nowhere, and can't take care of no pickney right now', so she has not turned up," the investigating officer, Constable Paul Banbury told the St Catherine Parish Court.
The child, a young boy, first ended up in the care of the state after a Good Samaritan - who happens to be the mother of murdered schoolgirl, Shante Skyers - offered to carry him to the police station after he was found alone on a bus in the middle of Spanish Town, St Catherine.
A statement from the complainant, Crystal Service, who was a passenger on the bus at the time, revealed that she had seen the child on the bus after the adult to whose care the boy had been entrusted got out of the vehicle, abandoning the young child. The young boy reportedly made a dangerous attempt to step off the bus in pursuit of the adult, but was prevented from doing so by a concerned vendor.
"As the baby boy tried to step off the bus, I saw a vendor run towards the bus step and prevented the boy from dropping outside of the bus. The bus did not stop and then drove a bit further up the road and stopped once more. As the bus stop the remaining persons who were in the bus tried to exit. I then observed a lady took up this baby boy and asked whose baby, nobody answered," read the statement, a copy of which was obtained by Loop News reporter Claude Mills.
When the conductor finally got back on the bus, no one claimed the child, Service reported.
"The conductor gave the baby to a man and stepped back into the bus. The conductor immediately loaded the bus to go back to Kingston. I observe the man the conductor had given the baby to tried asking the baby some questions. However, the baby was too small to understand. I then intervened and told this man that the baby is not understanding what he’s asking, hence, I will take the baby to the police station, so that the police can try to find his mother," the complainant said.
The complainant then made a report at the Spanish Town police station, and an investigating officer later arrested the woman to whose care the child had been entrusted. That woman, it later emerged, was the child's grandmother. She was charged with child endangerment.
Service's seven-year-old daughter, Shante, was found murdered in April of this year. The girl had been missing for five days before the shocking discovery of her body in a section of Sterling Castle Heights, known as Blue Hole.
high_school_physics | 401,089 | 15.548954 | 1 | Phase behavior of charged hydrophobic colloids on flat and spherical surfaces
by Kelleher, Colm P., Ph.D., New York University, 2017, 194; 10195879
For a broad class of two-dimensional (2D) materials, the transition from isotropic fluid to crystalline solid is described by the theory of melting due to Kosterlitz, Thouless, Halperin, Nelson and Young (KTHNY). According to this theory, long-range order is achieved via elimination of the topological defects which proliferate in the fluid phase. However, many natural and man-made 2D systems possess spatial curvature and/or non-trivial topology, which require the presence of topological defects, even at T=0. In principle, the presence of these defects could profoundly affect the phase behavior of such a system. In this thesis, we develop and characterize an experimental system of charged colloidal particles that bind electrostatically to the interface between an oil and an aqueous phase. Depending on how we prepare the sample, this fluid interface may be flat, spherical, or have a more complicated geometry. Focusing on the cases where the interface is flat or spherical, we measure the interactions between the particles, and probe various aspects of their phase behavior. On flat interfaces, this phase behavior is well-described by KTHNY theory. In spherical geometries, however, we observe spatial structures and inhomogeneous dynamics that cannot be captured by the measures traditionally used to describe flat-space phase behavior. We show that, in the spherical system, ordering is achieved by a novel mechanism: sequestration of topological defects into freely-terminating grain boundaries (“scars”), and simultaneous spatial organization of the scars themselves on the vertices of an icosahedron. The emergence of icosahedral order coincides with the localization of mobility into isolated “lakes” of fluid or glassy particles, situated at the icosahedron vertices. These lakes are embedded in a rigid, connected “continent” of locally crystalline particles.
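Flat-space 2D ordering of the kind KTHNY theory describes is commonly quantified with the bond-orientational order parameter |psi_6|. The sketch below is a simplified illustration of that measure (it uses k nearest neighbours rather than the Voronoi-neighbour construction usually employed, and is not the thesis's actual analysis pipeline):

```python
import cmath
import math
import random

def psi6(points, k=6):
    """Mean bond-orientational order |psi_6|, using each particle's k nearest neighbours."""
    total = 0.0
    for x, y in points:
        nbrs = sorted((p for p in points if p != (x, y)),
                      key=lambda p: (p[0] - x) ** 2 + (p[1] - y) ** 2)[:k]
        # sum of exp(6i * theta) over bonds to neighbours; |.| = 1 for a perfect hexagon
        s = sum(cmath.exp(6j * math.atan2(py - y, px - x)) for px, py in nbrs)
        total += abs(s) / k
    return total / len(points)

# triangular lattice (locally crystalline) vs. an ideal gas of comparable density
tri = [(i + 0.5 * (j % 2), j * math.sqrt(3) / 2) for i in range(12) for j in range(12)]
random.seed(0)
gas = [(random.uniform(0, 12), random.uniform(0, 12)) for _ in range(144)]
print(round(psi6(tri), 3), round(psi6(gas), 3))
```

The crystal scores close to 1 (boundary particles pull the average down somewhat), while the disordered gas scores far lower, which is how fluid, hexatic, and crystalline regimes are distinguished in practice.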
Advisor: Chaikin, Paul M.
Committee: Donev, Aleksandar, Grosberg, Alexander, Pine, David, Sleator, Tycho
School: New York University
Department: Physics
Subjects: Physics, Condensed matter physics, Materials science
Keywords: Colloidal particles, Melting, Phase behavior | {"pred_label": "__label__cc", "pred_label_prob": 0.7349907159805298, "wiki_prob": 0.2650092840194702, "source": "cc/2019-30/en_head_0050.json.gz/line1137311"} |
high_school_physics | 327,441 | 15.541973 | 1 | A validated pressure care assessment is required e.g.Waterlow scale.
Alternating Pressure Mattress - Premium 8, Pressure Care Medium to High Risk - Alternating Cell Height 215mm, 885 x 2020 x 230mm, Weight Capacity 180kg, Multi-Stretch Cover, Digital Pump with advanced safety features, adjustable settings.
Promotes even pressure distribution. AKTON polymer gel, unlike fluid gel, does not displace from areas requiring pressure relief. Isolates and protects areas where shear forces and pressure points are prevalent. AKTON polymer provides significant shock, vibration and impact absorption. Pad is sealed by a waterproof film. Pad does not promote bacterial growth, making it ideal for multi-client use. Dimensions: Full Mattress 685mm Width. 2030mm Length. 22mm Height. Max user weight: unlimited.
Promotes even pressure distribution. AKTON polymer gel, unlike fluid gel, does not displace from areas requiring pressure relief. Isolates and protects areas where shear forces and pressure points are prevalent. AKTON polymer provides significant shock, vibration and impact absorption. Pad is sealed by a waterproof film. Pad does not promote bacterial growth, making it ideal for multi-client use. Dimensions: Large 685mm Width. 1120mm Length. 22mm Height. Max user weight: unlimited.
Promotes even pressure distribution. AKTON polymer gel, unlike fluid gel, does not displace from areas requiring pressure relief. Isolates and protects areas where shear forces and pressure points are prevalent. AKTON polymer provides significant shock, vibration and impact absorption. Pad is sealed by a waterproof film. Pad does not promote bacterial growth, making it ideal for multi-client use. Dimensions: Regular 430mm Width. 685mm Length. 22mm Height. Max user weight: unlimited.
Soft open cell foam. Provides additional comfort to existing mattress. Convoluted profile provides increased air circulation. Dimensions: Double 1330W x 1850L x 75H mm. Max user weight: 100kg.
Soft open cell foam. Provides additional comfort to existing mattress. Convoluted profile provides increased air circulation. Dimensions: Queen 1485W x 1995L x 75H mm. Max user weight: 100kg.
Soft open cell foam. Provides additional comfort to existing mattress. Convoluted profile provides increased air circulation. Dimensions: Single 875W x 1850L x 75H mm. Max user weight: 100kg. | {'timestamp': '2019-04-22T13:18:19Z', 'url': 'https://mobilityservices.com.au/product/productlist.aspx?c=AB14', 'language': 'en', 'source': 'c4'} |
high_school_physics | 757,557 | 15.528064 | 1 | 7 Dimensions of Cybersecurity
Nov 01 2022, 3:00pm UTC
Michelle Drolet, Founder & CEO | Towerwall
Backed by decades of security experience, Towerwall learned that there is no single piece of technology or system that will keep your organization secure. That’s why Towerwall security experts developed a unique security approach that is not only consistent, repeatable and measurable, but also flexible enough to adapt to the changing threat landscape. In this discussion, Towerwall Founder & CEO Michelle Drolet outlines how your organization can deploy the 7 Dimensions and stay secure. About the speaker: Michelle Drolet, founder and CEO of TowerWall, is a member of the Forbes Technology Council. The Software Report recently named Drolet one of the Top 25 Women in Cybersecurity in 2021; she's also been named one of the Top CEOs to Watch in 2020 by CIO Views and The 10 Most Powerful Women in the Channel by VARBusiness Magazine. She sits on the Framingham Foundation Board Events Committee and the MassBay Cyber Security Advisory Board and is past chair of the Mass Bay Foundation Board, MetroWest Chamber of Commerce, and Framingham ESL. She is the founder of the Information Security Summit at Mass Bay Community College, New England’s premier event that gathers top security and risk management professionals for open dialogue on the latest threats, developments and trends occurring in information security.
high_school_physics | 876,513 | 15.504442 | 1 | Q: Hydrogen Atom in two spatial dimensions with $1/r$ potential I am almost new to Quantum Mechanics. Recently I learned about the hydrogen atom in three dimensions. I struggle to answer the following exercise where the hydrogen atom in two dimensions is considered:
For a hydrogen atom the stationary Schrödinger equation in polar coordinates is given by
\begin{align}
\left[ -\frac{\hbar^2}{2m}\left(\frac{\partial^2}{\partial r^2} + \frac{1}{r}\frac{\partial}{\partial r} + \frac{1}{r^2}\frac{\partial^2}{\partial \phi^2}\right) - \frac{\alpha\hbar c}{r} - E \right]\psi(r,\phi) = 0.
\end{align}
The wave function of the ground state of the hydrogen atom in two dimensions has the form $\psi_0(r, \phi) = N_0\exp(-\nu_0r/a_B)$, where $a_B$ denotes the Bohr radius $a_B = \hbar / (mc\alpha)$.
I am asked to compute $\nu_0$, the ground state energy $E_0$ as well as $N_0 > 0$.
While I think that computing the constant $N_0$ as well as the energy $E_0$ is easy, I struggle to see how one can compute $\nu_0$. Can anyone help?
Edit:
We find
\begin{align}
\frac{\partial^2}{\partial \phi^2}\psi_0(r, \phi) &= 0, \\
\frac{\partial}{\partial r}\psi_0(r, \phi) &= -\frac{\nu_0}{a_B}\psi_0(r,\phi), \\
\frac{\partial^2}{\partial r^2}\psi_0(r, \phi) &= \frac{\nu_0^2}{a_B^2}\psi_0(r,\phi).
\end{align}
Plugging the results in yields
\begin{align}
-\frac{\hbar^2}{2m}\frac{\nu_0^2}{a_B^2}\psi_0(r,\phi) + \frac{\hbar^2 \nu_0}{2mra_B}\psi_0(r,\phi) - \frac{\alpha\hbar c}{r}\psi_0(r,\phi) = E_0\psi_0(r,\phi)
\end{align}
or equivalently
\begin{align}
E_0 = -\frac{\hbar^2}{2m}\frac{\nu_0^2}{a_B^2} + \frac{\hbar^2 \nu_0}{2mra_B} - \frac{\alpha\hbar c}{r}.
\end{align}
Due to
\begin{align}
\int_{0}^{\infty} N_0^2 \exp(-2\nu_0r/a_B) \ \mathrm{d}r = \frac{a_B\cdot N_0^2}{2\nu_0} \overset{!}{=} 1,
\end{align}
we find
\begin{align}
N_0 = \sqrt{\frac{2\nu_0}{a_B}}.
\end{align}
At this point I do think that once having $\nu_0$ the computation of $E_0$ and $N_0$ can be finished.
A: HINT: the $E$ that appears in Schrödinger's equation is the energy for the "entire state". In particular, it's just a number, and cannot be a function of $r$. So looking at your equation for $E_0$, what must be true about the parameters in it to make it independent of $r$?
(Also, your equation for $E_0$ is incorrect due to an algebra error. You'll need to correct this to get the correct answer. Note that $\nu_0$ should end up being a dimensionless number.)
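A quick numerical way to see the hint at work (illustrative only; the natural units $\hbar = m = c = \alpha = 1$, giving $a_B = 1$, are an arbitrary choice): evaluate $E(r) = (H\psi)/\psi$ for the ansatz and watch the $r$-dependence disappear only for the right $\nu_0$.

```python
import math

# Natural units hbar = m = c = alpha = 1 (arbitrary, for illustration) => a_B = 1.
HBAR = M = C = ALPHA = 1.0
A_B = HBAR / (M * C * ALPHA)

def local_energy(r, nu0, h=1e-4):
    """(H psi)/psi for psi = exp(-nu0 r / a_B); constant in r only for an eigenstate."""
    psi = lambda x: math.exp(-nu0 * x / A_B)
    d1 = (psi(r + h) - psi(r - h)) / (2 * h)            # dpsi/dr (central difference)
    d2 = (psi(r + h) - 2 * psi(r) + psi(r - h)) / (h * h)  # d2psi/dr2
    return (-HBAR**2 / (2 * M) * (d2 + d1 / r)
            - ALPHA * HBAR * C / r * psi(r)) / psi(r)

# The 1/r pieces cancel when hbar^2 nu0 / (2 m a_B) = alpha hbar c, i.e. nu0 = 2:
nu0 = 2 * M * C * ALPHA * A_B / HBAR
for r in (0.3, 1.0, 3.0):
    print(round(local_energy(r, nu0), 4))   # flat in r: E0 = -2 m c^2 alpha^2 = -2
```

With any other $\nu_0$ the computed $E(r)$ visibly drifts with $r$, which is exactly the hint's point: only the $\nu_0$ that kills the $1/r$ terms yields a genuine eigenvalue.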
| {'language': 'en', 'url': 'https://physics.stackexchange.com/questions/606960', 'timestamp': '2023-03-29', 'source': 'stackexchange', 'question_score': '3'} |
high_school_physics | 261,243 | 15.499561 | 1 | Scientists have outlined a test to discover if the early universe had just one spatial dimension. This mind-boggling concept is at the heart of a theory that University at Buffalo physicist Dejan Stojkovic and colleagues proposed that suggests that the early universe — which exploded from a single point and was very, very small at first — was one-dimensional before expanding to include two dimensions and then three — the world in which we live today.
The theory, if valid, would address important problems in particle physics. Stojkovic and Loyola Marymount University physicist Jonas Mureika have described a test that could prove or disprove the "vanishing dimensions" hypothesis.
Because it takes time for light and other waves to travel to Earth, telescopes peering out into space can, essentially, look back in time as they probe the universe's outer reaches. Gravitational waves can't exist in one- or two-dimensional space. So Stojkovic and Mureika have reasoned that the Laser Interferometer Space Antenna (LISA), a planned international gravitational-wave observatory, should not detect any gravitational waves emanating from the lower-dimensional epochs of the early universe.
Stojkovic, an assistant professor of physics, says the theory of evolving dimensions represents a radical shift from the way we think about the cosmos — about how our universe came to be. The core idea is that the dimensionality of space depends on the size of the space we're observing, with smaller spaces associated with fewer dimensions. That means that a fourth dimension will open up (and may have already) as the universe continues to expand.
* The incompatibility between quantum mechanics and general relativity. Quantum mechanics and general relativity are mathematical frameworks that describe the physics of the universe. Quantum mechanics is good at describing the universe at very small scales, while relativity is good at describing the universe at large scales. Currently, the two theories are considered incompatible; but if the universe, at its smallest levels, had fewer dimensions, mathematical discrepancies between the two frameworks would disappear.
* The mystery of the universe's accelerating expansion: Physicists have observed that the expansion of the universe is speeding up, and they don't know why. The addition of new dimensions as the universe grows would explain this acceleration. Stojkovic says a fourth dimension may have already opened at large, cosmological scales.
* The need to alter the mass of the Higgs boson: The standard model of particle physics predicts the existence of an as yet undiscovered elementary particle called the Higgs boson. For equations in the standard model to accurately describe the observed physics of the real world, however, researchers must artificially adjust the mass of the Higgs boson for interactions between particles that take place at high energies. If space has fewer dimensions at high energies, the need for this kind of "tuning" disappears.
"What we're proposing here is a shift in paradigm," Stojkovic said. "Physicists have struggled with the same problems for 10, 20, 30 years, and straight-forward extensions of the existing ideas are unlikely to solve them."
"We have to take into account the possibility that something is systematically wrong with our ideas," he continued. "We need something radical and new, and this is something radical and new."
Because the planned deployment of LISA is still years away, it may be a long time before Stojkovic and his colleagues are able to test their ideas this way.
However, some experimental evidence already points to the possible existence of lower-dimensional space. Specifically, scientists have observed that the main energy flux of cosmic ray particles with energies exceeding 1 teraelectron volt — the kind of high energy associated with the very early universe — is aligned along a two-dimensional plane.
If high energies do correspond with lower-dimensional space, as the "vanishing dimensions" theory proposes, researchers working with the Large Hadron Collider particle accelerator in Europe should see planar scattering at such energies.
Stojkovic says the observation of such events would be "a very exciting, independent test of our proposed ideas." | {'timestamp': '2019-04-23T12:51:01Z', 'url': 'https://dailygalaxy.com/2011/04/big-bang-weirdness-radical-new-theory-says-the-early-universe-had-only-1-dimension-and-that-a-4th-al/', 'language': 'en', 'source': 'c4'} |
high_school_physics | 170,127 | 15.489434 | 1 | Car Audio or Theater - The Audio ATV20 Off-road/marine is an excellent speaker for the boat. I actually loved that the product has the feature of water proof poly injection cone, soft dome diaphragm high performance tweeter. Other highlights consist of fully marinized, 450w max power and frequency response: 45-25k hz. Bar Code# 791489116022. The speaker dimensions are 11"H x 28.25"L x 13.25"W and weighs roughly 2.8 lbs.
The MS6100 Marine 6.5-inch Dual-cone Marine is a great speaker. Just one of the key features is the big, dynamic power, with an output of 35 watts RMS and peak power handling of 105 watts. Other features include big bass with a polypropylene woofer cone and a one year warranty. It's 7"H x 5"L x 8"W and has a weight of 21 lbs. The barcode, also called the "Universal Product Code", for this is 066510876399.
Blast the tunes within your boat with the MA6004 4-channel Full-range Marine Amplifier from Jbl. The UPC for this really good water resistant marine speaker is 071020222499. I in fact loved that the product has a 12 dB built-in variable electronic crossover. Additional features include stainless-steel hardware, a marinized circuit board and variable bass boost. The speaker dimensions are 10"H x 12"L x 8"W. It has a weight of 21 lbs.
Are you looking to purchase a water resistant marine speaker? The KFC-1653MRG 6.5-inch, in black, brought to you by Kenwood is a superb speaker. I in fact liked that it had the feature of guaranteed lowest price. Contact us 24/7. Other highlights include a waterproof polypropylene cone woofer, 150w max and 6.5in 2-way marine speakers. The water resistant marine speaker is available in black. The speaker has a weight of 4 lbs. The speaker comes with a warranty of one year parts and labor from the manufacturer.
When you buy on the internet, it is much easier to get better deals on products. The PLMRBS8 8-inch Low-profile manufactured by Sound Around is the right solution for your boat. I definitely loved that the speaker has thermal, short, and overload protection circuits that protect the subwoofer and your vehicle. It's 4.75" Height x 17" Length x 12.75" Width. It weighs approximately 12.5 lbs. The color of the speaker is white.
The PLMRKT2A 2-channel, manufactured by Sound Around, makes a great item for your boat. These speakers are being well received and they are seeing decent sales online. One of the attributes for this item is the adjustable high/low level inputs and RCA line input. Additional features include a remote control for volume gain. The water resistant marine speaker is 7.76" Height x 11.81" Length x 10.47" Width and it has a weight of 6.97 lbs. The water resistant marine speaker comes with a warranty of one year from the manufacturer.
Searching for a new water resistant marine speaker? The KFC-1633MRW 6.5-inch 100 Watt Max Power is a nice speaker. The color for these water resistant marine speakers is white. In my opinion you will like that the item offers peak power of 200 watts per pair / 100 watts each. Other highlights include frequency response: 65-20, UV resistant grille and frame, 8 oz magnet, sensitivity: 84 dB, 4 ohms impedance and top-mount depth: 2-1/4". The water resistant marine speaker dimensions are 3.9" Height x 10" Length x 4.5" Width and it has a weight of 3.8 lbs.
Make loads of noise with the Rockford Fosgate M2 M210S4 Marine-grade 10-inch 500-watt Subwoofer (white)! One of the many features for these speakers is the injection molded mineral filled parabolic polypropylene cone. Other highlights consist of a 35 Hz - 250 Hz frequency response. It's 9"H x 14"L x 13"W and has a weight of 14.5 lbs. The speaker is available in white.
Everybody knows how fantastic and effortless it can be to purchase on the web. Shopping for speakers for the boat? Consider the Fosgate M2 M262 Marine from Rockford Fosgate. EAN# 0780687328634. The color of these water resistant marine speakers is white. One of the major features for these speakers is the injection molded mineral filled polypropylene cone body with TPE surround. Additional features consist of a 53 Hz - 22 kHz frequency response. The water resistant marine speaker dimensions are 5.5"H x 19"L x 9"W. It weighs somewhere around 6.5 lbs.
Usually you'll get more affordable prices by purchasing on the web than you would in actual shops. Make loads of noise with the Jbl MS9200 6-inch x 9-inch 2-way Marine manufactured by Prospec Electronics. The Jbl MS9200 is an instance of the high quality speakers you can get on the market. If you are wishing to purchase this item, you have come to the right place. This site offers you special discounted rates for this water resistant marine speaker with secure transactions. The special features include a water and ultraviolet resistant design, a polypropylene woofer cone and a sensitivity of 91 dB. The speaker is 6"H x 19"L x 13"W and it weighs roughly 8.9 lbs.
high_school_physics | 1,382 | 15.488074 | 1 | \section{Introduction}
The dynamics of self-gravitating objects is a topic of great
interest in general relativity (GR), the modern theory of gravity.
This problem becomes a source of inspiration for researchers when
stellar objects remain stable for most of the time against the
perturbations caused by the self-gravitational force of the massive
objects. This process provides the information needed to study
structure formation of gravitationally collapsing objects. In
relativistic gravitational physics, the dynamics of stars was first
studied by Chandrasekhar \cite{1} in 1964; since then there has been
growing interest in this research direction. This work was extended
by Herrera et al. \cite{2,3} explicitly for spherically symmetric
heat conducting, isotropic/anisotropic and viscous fluids in the
framework of GR. Recently, Herrera et al. \cite{5} have investigated
the dynamics of expansion-free fluids using first order
perturbations of the metric components as well as the matter
variables. Several properties of the fluid play a dominant role in
the dynamical process of gravitationally collapsing objects. Herrera
et al. \cite{7} have studied the expansion-free condition for the
collapsing sphere. Herrera and his collaborators \cite{7f,7a}
discussed the dynamical process of gravitational collapse using
Misner and Sharp's formulation, considering a shear-free spherically
symmetric matter distribution. A realistic model of a heat
conducting star, which shows dissipation in the form of a radial
heat flux and shear viscosity, was studied by Chan \cite{7b}.
Herrera et al. \cite{7c} also formulated the dynamical equations of
fluids which contain heat flux, radiation and bulk viscosity, and
then coupled these equations with causal transport equations. The
inertia of heat flux and its significance in the dynamics of
dissipative collapse was studied by Herrera \cite{7d}; the present
paper is the particular (adiabatic) case of this work in $5D$
Einstein Gauss-Bonnet gravity. Sharif and Azam
\cite{11a}-\cite{13} have studied the effects of the electromagnetic
field on the dynamical stability of collapsing dissipative and
non-dissipative fluids in spherical and cylindrical geometries. This
work has been further extended by Sharif and his collaborators
\cite{15}-\cite{22} in modified theories of gravity such as $f(R)$,
$f(T)$ and $f(R,T)$. Recently \cite{23}, Abbas and Sarwar have
studied the dynamical stability of a collapsing star in Gauss-Bonnet
gravity.
Dynamical systems in dimensions greater than or equal to five are
usually discussed in Gauss-Bonnet gravity. This theory appears
naturally in the low energy effective action of modern string
theory. Boulware and Deser \cite{44} investigated black hole (BH)
solutions in $N$ dimensional string theory with a four dimensional
Gauss-Bonnet invariant. This work is the extension of the $N$
dimensional solutions formulated by Tangherlini \cite{45} and by
Myers and Perry \cite{46}. Wheeler \cite{47} discussed spherically
symmetric BH solutions with their physical properties in detail. The
topological structure of nontrivial BHs has been explored by Cai
\cite{48}. Kobayashi \cite{49} and Maeda \cite{50} have formulated
the structure of the Vaidya BH in Gauss-Bonnet gravity. All these
investigations show that the presence of the Gauss-Bonnet term in
the field equations affects the final state of gravitational
collapse. Recently \cite{51}, Jhingan and Ghosh have considered the
$5D$ action with the Gauss-Bonnet terms in the Tolman-Bondi model
and given an exact model of the gravitational collapse of an
inhomogeneous dust. Motivated by these studies, we have explored the
dynamics of gravitationally collapsing spheres in Einstein
Gauss-Bonnet gravity. This paper is an extension of Herrera's work
\cite{7d} to Einstein Gauss-Bonnet gravity. We would like to mention
that the objective of this paper is to study the effects of the
Gauss-Bonnet term on the dynamics of the collapsing system. As heat
flux is absent in the source equation, transport equations and their
coupling with the dynamical equations are not the objective of this
paper; hopefully they will be discussed explicitly elsewhere.
This paper is organized as follows: in section \textbf{2} the
Einstein Gauss-Bonnet field equations and matching conditions are
discussed. The dynamical equations are formulated in section
\textbf{3}. We summarize the results of the paper in the last
section.
\section{Einstein Gauss-Bonnet Field Equations }
Consider the following action in $5D$
\begin{equation}\label{1}
S=\int d^{5}x\sqrt{-g}\left[ \frac{1}{2k_{5}^{2}}\left( R+\alpha
L_{GB}\right) \right] +S_{matter}
\end{equation}
where $R$ is the Ricci scalar in $5D$ and $k^{2}_{5}=8\pi G_{5}$ is
the coupling constant in $5D$. The Gauss-Bonnet Lagrangian has the
form
\begin{equation}\label{2}
L_{GB}=R^{2}-4R_{\alpha\beta}R^{\alpha\beta}+R_{\alpha\beta\gamma\delta}R^{\alpha\beta\gamma\delta}
\end{equation}
where the coefficient $\alpha$ is the coupling constant of Einstein
Gauss-Bonnet gravity. Such an action is derivable in the low-energy
limiting case of super-string theory. Here, $\alpha$ is treated as
the inverse of the string tension, which is positive definite, so
$\alpha \geq 0$ in this paper. For a $4D$ manifold, the Gauss-Bonnet
terms do not contribute to the field equations. The variation of the
action (\ref{1}) with respect to the $5D$ metric tensor yields the
following set of field equations
\begin{equation}\label{3}
\mathcal{G}_{\alpha\beta}=G_{\alpha\beta}+\alpha
H_{\alpha\beta}={\kappa}^{2}_{5}T_{\alpha\beta},
\end{equation}
where
\begin{equation}\label{4}
G_{\alpha\beta}=R_{\alpha\beta}-\frac{1}{2}g_{\alpha\beta}R
\end{equation}
is the Einstein tensor and
\begin{equation}\label{5}
H_{\alpha\beta}=2\left[ RR_{\alpha\beta}-2R_{\alpha\gamma}R^{\gamma
}_{\ \beta}-2R^{\gamma\delta
}R_{\alpha\gamma\beta\delta}+R_{\alpha}^{\ \gamma\delta\lambda}R_{\beta\gamma\delta\lambda}
\right] -\frac{1}{2}g_{\alpha\beta}L_{GB},
\end{equation}
is the Lanczos tensor.
A timelike $4D$ hypersurface $\Sigma^{(e)}$ is taken such that it
divides the $5D$ spacetime into two $5D$ manifolds, $M^-$ and $M^+$,
respectively. The $5D$ TB spacetime is taken as the interior manifold
$M^-$, which is the inner region of a collapsing inhomogeneous and
anisotropic star, given by \cite{51}
\begin{equation}\label{6}
ds_{-}^2=-dt^2+B^2dr^2+C^2(d\theta^2+\sin^2{\theta}d\phi^2
+\sin^2{\theta}\sin^2{\phi}d\psi^2),
\end{equation}
where $B$ and $C$ are functions of $t$ and $r$. The energy-momentum
tensor $T_{\alpha \beta }^{-}$ for anisotropic fluid has the form
\begin{equation}\label{7}
T_{\alpha \beta }^{-}=(\mu +P_{\perp })V_{\alpha }V_{\beta
}+P_{\perp }g_{\alpha \beta }+(P_{r}-P_{\perp })\chi _{\alpha }\chi
_{\beta },
\end{equation}
where $\mu$ is the energy density, $P_{r}$ the radial pressure,
$P_{\perp}$ the tangential pressure, $V^{\alpha}$ the four-velocity
of the fluid and $\chi_{\alpha}$ a unit four-vector along the
radial direction. These
quantities satisfy
\begin{equation}
V^{\alpha }V_{\alpha }=-1\ \ ,\ \ \ \ \ \chi ^{\alpha }\chi _{\alpha
}=1\ \ ,\ \ \ \ \ \chi ^{\alpha }V_{\alpha }=0 \label{N8}
\end{equation}
The expansion scalar $\Theta $ for the fluid is given by
\begin{equation}\label{8}
\Theta =V^{\alpha }_{\ ;\alpha }.
\end{equation}
Since the metric (\ref{6}) is assumed to be comoving, then
\begin{equation}\label{9}
V^{\alpha }=\delta _{0}^{\alpha }\ ,\ \ \ \ \ \chi ^{\alpha
}=B^{-1}\delta _{1}^{\alpha }
\end{equation}
and for the expansion scalar, we get
\begin{equation}\label{10}
\Theta =\frac{\dot{B}}{B}+\frac{3\dot{C}}{C}.
\end{equation}
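As a short consistency check, added here for clarity, the result (\ref{10}) follows directly from the definition (\ref{8}), the comoving four-velocity (\ref{9}) and the metric determinant of (\ref{6}), $\sqrt{-g}=BC^{3}\sin^{2}{\theta}\sin{\phi}$, since the angular factors are time independent:
\begin{equation*}
\Theta=V^{\alpha}_{\ ;\alpha}=\frac{1}{\sqrt{-g}}\,\partial_{\alpha}\!\left(\sqrt{-g}\,V^{\alpha}\right)=\partial_{t}\ln\!\left(BC^{3}\right)=\frac{\dot{B}}{B}+\frac{3\dot{C}}{C}.
\end{equation*}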
Hence, the Einstein Gauss-Bonnet field equations take the form
\begin{eqnarray}\nonumber
{\kappa}^2_{5}\mu &&=\frac{12\left( C^{\prime 2}-B^{2}\left(
1+\dot{C}^{2}\right) \right) }{C^{3}B^{5}}\left[ C^{\prime
}B^{\prime }+B^{2}\dot{C}\dot{B}-BC^{\prime \prime }\right] \alpha
\\\label{11} &&\ \ \ \ \ -\frac{3}{B^{3}C^{2}}\left[ B^{3}\left(
1+\dot{C}^{2}\right) +B^{2}C\dot{C}\dot{B}+CC^{\prime }B^{\prime
}-B(CC^{\prime \prime }+C^{\prime 2})\right]\\\label{12a}
{\kappa}^2_{5}p_{r} &&=-12\alpha \left( \frac{1}{C^{3}}-\frac{C^{\prime 2}}{B^{2}C^{3}}
+\frac{\dot{C}^{2}}{C^{3}}\right) \ddot{C}+3\frac{C^{\prime 2}}{B^{2}C^{2}}
-3\Big(\frac{1+\dot{C}^{2}+C\ddot{C}}{C^{2}}\Big)\\\nonumber
{\kappa}^2_{5}p_{\perp } &&=\frac{4\alpha }{B^{4}C^{2}}\Big[
-2B\left( B^{\prime
}C^{\prime }+B^{2}\dot{B}\dot{C}-BC^{\prime \prime }\right) \ddot{C}
+B\left( C^{\prime
2}-B^{2}\left( 1+\dot{C}^{2}\right) \right)
\ddot{B}\\\nonumber&&+2\Big( \dot{B}C^{\prime
}-B\dot{C}^{\prime }\Big)\Big] -\frac{1}{B^{3}C^{2}}\Big[ B^{3}\Big(
1+\dot{C}^{2}+2C\ddot{C}\Big) +B^{2}C\left(
2\dot{C}\dot{B}+C\ddot{B}\right)\\&&+2CC^{\prime }B^{\prime
}-2B\left( CC^{\prime \prime }+C^{\prime 2}\right)\Big]
\label{13}\\
&&\frac{12\alpha }{B^{5}C^{3}}\left( \dot{B}C^{^{\prime
}}-B\dot{C}^{^{\prime
}}\right) \left( B^{2}\left( 1+\dot{C}^{2}\right) -C^{^{\prime }2}\right) -
\frac{B\dot{C}^{^{\prime }}-\dot{B}C^{^{\prime }}}{B^{3}C}=0.
\label{14}
\end{eqnarray}
The mass function $m(t,r)$, analogous to the Misner-Sharp mass in an $n$-dimensional
manifold without ${\Lambda}$, is given by \cite{51a}
\begin{equation}\label{15}
m(t,r)=\frac{(n-2)}{2k_{n}^{2}}{V^k}_{n-2}\left[ R^{n-3}\left(
k-g^{ab}R_{,a}R_{,b}\right) +(n-3)(n-4)\alpha R^{n-5}\left(
k-g^{ab}R_{,a}R_{,b}\right) ^{2} \right],
\end{equation}
where a comma denotes partial differentiation and ${V^k}_{n-2}$ is
the surface area of the $(n-2)$-dimensional unit space. For $k=1$,
${V^1}_{n-2}=\frac{2{\pi}^{(n-1)/2}}{\Gamma((n-1)/2)}$; using this
relation with $n=5$ and Eq.(\ref{6}), the mass function (\ref{15})
reduces to
\begin{equation}\label{16}
m(r,t)=\frac{3}{2}\left[ C^{2}\left( 1-\frac{C^{\prime 2}}{B^{2}}
+\dot{C}^{2}\right) +2\alpha \left( 1-\frac{C^{\prime 2}}{B^{2}}
+\dot{C}^{2}\right) ^{2}\right].
\end{equation}
In the exterior region to $\Sigma^{(e)}$, we consider the Einstein
Gauss-Bonnet Schwarzschild solution, which is given by
\begin{equation}\label{c1}
ds_{+}^2=-F(R)d{\nu}^2-2d\nu dR+R^2(d\theta^2+\sin^2{\theta}d\phi^2
+\sin^2{\theta}\sin^2{\phi}d\psi^2),
\end{equation}
where
$F(R)=1+\frac{{R}^2}{4\alpha}-\frac{{R}^2}{4\alpha}\sqrt{1+\frac{16\alpha
M}{\pi {R}^4}}$.
The smooth matching of the $5D$ anisotropic fluid sphere (\ref{6})
to the GB Schwarzschild BH solution (\ref{c1}), across the interface at
$r = {r_{\Sigma}}^{(e)}$ = constant, demands the continuity of the
line elements and extrinsic curvature components (i.e., the Darmois
matching conditions \cite{53}), implying
\begin{eqnarray}\label{c2}
dt \overset{\Sigma^{(e)}}{=}\sqrt{F(R)}d\nu,\\
R \overset{\Sigma^{(e)}}{=}R, \\\label{cm}
m(r,t)\overset{\Sigma^{(e)}}{=}M,
\end{eqnarray}
\begin{eqnarray}\nonumber
&&-12\alpha \left( \frac{1}{C^{3}}-\frac{C^{\prime 2}}{B^{2}C^{3}}
+\frac{\dot{C}^{2}}{C^{3}}\right) \ddot{C}+3\frac{C^{\prime 2}}{B^{2}C^{2}}
-3\Big(\frac{1+\dot{C}^{2}+C\ddot{C}}{C^{2}}\Big)\\
&&\overset{\Sigma^{(e)}}{=}\frac{12\alpha }{B^{5}C^{3}}\left( \dot{B}C^{^{\prime
}}-B\dot{C}^{^{\prime
}}\right) \left( B^{2}\left( 1+\dot{C}^{2}\right) -C^{^{\prime }2}\right) -
\frac{B\dot{C}^{^{\prime }}-\dot{B}C^{^{\prime }}}{B^{3}C}.
\label{c3}
\end{eqnarray}
Comparing Eq.(\ref{c3}) with (\ref{12a}) and (\ref{14}) (for details
see \cite{12}), we get
\begin{equation}\label{c4}
p_r\overset{\Sigma^{(e)}}{=}0.
\end{equation}
Hence, the matching of the interior inhomogeneous anisotropic fluid
sphere (\ref{6}) with the exterior vacuum Einstein Gauss-Bonnet
spacetime (\ref{c1}) produces Eqs.(\ref{c4}) and (\ref{cm}).
\section{Dynamical Equations}
In this section, we formulate the equations that deal with the
dynamics of the collapsing process in Einstein Gauss-Bonnet gravity.
Following the Misner and Sharp formalism \cite{52}, we discuss the
dynamics of the collapsing system. We introduce the proper time
derivative as well as the proper radial derivative as follows:
\begin{equation}\label{D1}
D_{T}=\frac{\partial}{\partial{t}},\quad
D_{R}=\frac{1}{R'}\frac{\partial}{\partial{r}},\quad R=C.
\end{equation}
The velocity of the collapsing fluid is the proper time derivative
of $R$ defined as
\begin{equation}\label{D2}
U=D_{T}(R)\equiv \dot{C}.
\end{equation}
Using the above result in the mass function given by Eq.(\ref{16}), we obtain
\begin{equation}\label{D3}
m(r,t)=\frac{3}{2}\left[ C^{2}\left( 1-\frac{C^{\prime 2}}{B^{2}}
+U^{2}\right) +2\alpha \left( 1-\frac{C^{\prime 2}}{B^{2}}
+U^{2}\right) ^{2}\right].
\end{equation}
Solving the above equation for $\frac{C^{\prime }}{B}$, we obtain
positive and negative roots; the positive roots are given by
\begin{equation}\label{D3a}
E=\frac{C^{\prime
}}{B}=\sqrt{1+U^2+\frac{R^2}{4\alpha}\pm\frac{\sqrt{3R^{4}+16m\alpha}}{4\sqrt{3}\alpha}}.
\end{equation}
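For clarity, we add the short computation behind this root. Writing $x=1+U^{2}-\frac{C^{\prime 2}}{B^{2}}$, the mass function (\ref{D3}) becomes the quadratic $2\alpha x^{2}+C^{2}x-\frac{2}{3}m=0$, whose solution is
\begin{equation*}
x=\frac{-C^{2}\pm\sqrt{C^{4}+\frac{16}{3}\alpha m}}{4\alpha},
\end{equation*}
so that, with $R=C$,
\begin{equation*}
E^{2}=1+U^{2}-x=1+U^{2}+\frac{R^{2}}{4\alpha}\mp\frac{\sqrt{3R^{4}+16m\alpha}}{4\sqrt{3}\,\alpha}.
\end{equation*}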
The rate of change of the mass (Eq.(\ref{16})) with respect to proper time
is given by
\begin{equation}\label{D3b}
D_{{T}}m(t,r)=-{\kappa_5}^2P_{r}U{R}^{3},
\end{equation}
where we have used the Einstein Gauss-Bonnet field equations
(\ref{12a}) and (\ref{14}). The right-hand side of this equation
has a single term, which is due to the effective pressure
(meaning the pressure is affected by the Gauss-Bonnet term) in the
$r$-direction. This term is positive in the case of collapse ($U<0$).
This implies that as the effective pressure in the $r$-direction increases,
the mass (energy) also increases by the same amount. Similarly, we can
calculate
\begin{equation}\label{D4}
D_{R}m(t,r)=\frac{2}{3}k^2_{5}\mu R^{3},
\end{equation}
where we have used the Einstein Gauss-Bonnet field equations
(\ref{11}) and (\ref{14}). This equation explains how the effective
energy density affects the mass between neighboring hypersurfaces in
the interior fluid distribution. Integration of Eq.(\ref{D4}) yields
\begin{equation}\label{D5}
m(t,r)=\frac{2}{3}k^2_{5}\int^{{R}}_{0}({R}^{3}\mu)d{R}.
\end{equation}
The dynamical equations can be obtained from the contracted Bianchi
identities ${T^{ab}}_{;b}=0$. Consider the following two equations
\begin{eqnarray}
{T^{\alpha\beta}}_{;\beta}V_{\alpha}&&=\left[ \dot{\mu}+\left( \mu
+P_{r}\right) \frac{\dot{B}}{B}+3\left( \mu +P_{\perp }\right)
\frac{\dot{C}}{C}\right] =0 , \label{D6}\\ {T^{\alpha
\beta }}_{;\beta }\chi _{\alpha }&&=\frac{1}{B}\left[
P_{r}^{\prime }+3\left( P_{r}-P_{\perp }\right) \frac{C^{\prime }}{C}
\right] =0. \label{D7}
\end{eqnarray}
The acceleration of the collapsing fluid is defined as
\begin{equation}\label{D8}
D_{{T}}U=\ddot{C}.
\end{equation}
Using Eqs.(\ref{D7}), (\ref{D8}) and (\ref{12a}), we get
\begin{eqnarray}
&&\Big[12\alpha
C\Big(\frac{9(p_r-p_{\bot})^2}{C^3}(1+U^2)+\Big(\frac{{p_r}'}{
B^2}+1\Big)\Big)\Big]D_T
U\nonumber\\&&=-9(p_r-p_{\bot})^2\Big[\kappa^2_5p_rC+3\Big(\frac{1}{C}+\frac{U^2}{C}\Big)\Big].\label{D9}
\end{eqnarray}
This equation shows the effect of different forces on the
collapsing process. It can be interpreted in the form of Newton's
second law of motion, i.e., force = mass density $\times$
acceleration. The term within the square bracket on the left side of the above
equation represents the inertial or passive gravitational mass. All
the quantities in the square bracket on the right side are positive; hence,
that side is negative, which implies a retardation of the
dynamical system, giving rise to the collapse of the system.
\section{Outlook}
This paper investigates the effects of the Gauss-Bonnet term on the
dynamics of anisotropic fluid collapse in the $5D$ Einstein
Gauss-Bonnet gravity. We have extended the work of Herrera \cite{7d}
to $5D$ Einstein Gauss-Bonnet gravity. To this end, the
non-conducting anisotropic fluid with $5D$ spherical symmetry has
been taken as the source of gravitation in Einstein Gauss-Bonnet
gravity. The Misner-Sharp mass has been calculated in the present
scenario. The smooth matching of the interior source has been
carried out with $5D$ Schwarzschild BH solution in Einstein
Gauss-Bonnet gravity by using the Darmois \cite{53} junction
conditions. The matching of the two regions implies the vanishing of
radial pressure over the boundary of the star and continuity of the
gravitational masses in the interior and exterior regions. By using
the Misner and Sharp approach for the proper time and radial
derivatives, we have formulated the velocity as well as acceleration
of the system. These definitions have also been applied to formulate
the general dynamical equations in Gauss-Bonnet gravity.
The analysis of the dynamical equations predicts the following
consequences:
\begin{itemize}
\item Mass of the collapsing spheres increases with the passage of
time.
\item The effective energy density of the system affects the mass of the system during the different stages of the
collapse.
\item The system under consideration undergoes retardation, implying
gravitational collapse.
\end{itemize}
We would like to mention that the transport equations and their coupling
with the dynamical equations are not the objective of this paper, as heat
flux is absent in the anisotropic fluid. This will be done in another
investigation with the \textbf{inclusion of a charge term in the
interior source} in the future.
\vspace{0.25cm}
Electrical Safety-SMART!
FAQ (Frequently Asked Questions)
Need to write a report about electricity? Or just want to know more about some aspect of electricity that has caught your interest? You've come to the right place. Simply click on the questions below, and you'll be on your way!
What is electricity?
How is electricity generated?
How does electricity travel?
How is electricity measured?
How many miles of power lines are there in the United States?
Do the words "shocked" and "electrocuted" mean the same thing?
Why can you sometimes see a spark if you can't see electricity?
When a circuit is open, do electrons go backward, or do they just stop?
Why does electricity try to get to the ground, and what does it do when it gets there?
Why can a bird stand on a power line and not get shocked?
What is static electricity?
What is lightning?
How much energy is in a bolt of lightning?
Does lightning ever strike fish?
Who holds the world's record for being struck by lightning most often?
Why didn't Ben Franklin get electrocuted when he tied a metal key to a kite string and flew the kite in a thunderstorm?
Why shouldn't I use a corded phone or electrical appliance during a thunderstorm?
How do batteries create electricity?
Why don't I get a shock when I touch a battery?
What are those little boxes on hair dryer cords?
Do electric eels really create electricity?
How does a defibrillator work?
How does an incandescent light bulb work?
How does a compact fluorescent light (CFL) work?
How does a light-emitting diode (LED) work?
Electricity is a form of energy that starts with atoms. Atoms are too small to see, but they make up everything around us. An atom has three tiny parts: protons, neutrons, and electrons. The center of the atom has at least one proton, and usually one or more neutrons. At least one electron travels around the center of the atom at a great speed. Electricity can be created by forcing electrons to flow from atom to atom.
Most electricity used in the United States is produced at power plants. Various energy sources are used to turn turbines. The spinning turbine shafts turn electromagnets that are surrounded by heavy coils of copper wire inside generators. This creates a magnetic field, which causes the electrons in the copper wire to move from atom to atom.
Electricity leaves the power plant and is sent over high-power transmission lines on tall towers. The very strong electric current from a power plant must travel long distances to get where it is needed. Electricity loses some of its strength (voltage) as it travels, so transformers, which boost or "step up" its power, must help it along.
When electricity gets closer to where it will be used, its voltage must be decreased. Different kinds of transformers at utility substations do this job, "stepping down" electricity's power. Electricity then travels on overhead or underground distribution wires to neighborhoods. When the distribution wires reach a home or business, another transformer reduces the electricity down to just the right voltage to be used in appliances, lights, and other things that run on electricity.
A conductor carries the electricity from the distribution wires to the house meter box. The meter measures how much electricity the people in the house use. From the meter box, wires run through the walls to outlets and lights. The electricity is always waiting in the wires to be used.
Electricity travels in a circuit. When you switch on an appliance, you complete the circuit. Electricity flows along power lines to the outlet, through the power cord into the appliance, then back through the cord to the outlet and out to the power lines again.
Electricity travels fast (186,000 miles per second). If you traveled that fast, you could travel around the world almost eight times in the time it takes to turn on a light! And if you had a lamp on the moon wired to a switch in your bedroom, it would take only 1.28 seconds after you flipped the switch for electricity to light the lamp 238,857 miles away!
Figures used to arrive at the numbers:
• Speed of light: 186,000 miles/sec
• Average distance to the moon: 238,857 miles
• Circumference of the earth: 24,902 miles (equatorial), 24,860 miles (polar)
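For readers who want to verify the math, the two claims above can be checked with simple arithmetic. This short Python sketch (not part of the original page) uses the figures listed:

```python
SPEED_OF_LIGHT_MILES_PER_SEC = 186_000
EARTH_EQUATORIAL_MILES = 24_902
MOON_DISTANCE_MILES = 238_857

# How many times around the earth could you go in one second at light speed?
trips_around_earth = SPEED_OF_LIGHT_MILES_PER_SEC / EARTH_EQUATORIAL_MILES

# How long would it take a signal to reach a lamp on the moon?
seconds_to_moon = MOON_DISTANCE_MILES / SPEED_OF_LIGHT_MILES_PER_SEC

print(round(trips_around_earth, 2))  # almost eight times
print(round(seconds_to_moon, 2))     # about 1.28 seconds
```

Both results match the figures quoted in the answer above.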
Volts, amps, and watts measure electricity. Volts measure the "pressure" under which electricity flows. Amps measure the amount of electric current. Watts measure the amount of work done by a certain amount of current at a certain pressure or voltage.
To understand how they are related, think of water in a hose. Turning on the faucet supplies the force, which is like the voltage. The amount of water moving through the hose is like the amperage. You would use lots of water that comes out really hard (like a lot of watts) to wash off a muddy car. You would use less water that comes out more slowly (like fewer watts) to fill a glass.
watts = amps × volts
amps = watts ÷ volts
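The two formulas above can be turned into a tiny calculator. This Python sketch is an illustration (the appliance numbers are made-up examples, not from the original page):

```python
def amps(watts: float, volts: float) -> float:
    """Current drawn by an appliance: amps = watts / volts."""
    return watts / volts

def watts(amps_: float, volts: float) -> float:
    """Power used: watts = amps x volts."""
    return amps_ * volts

# Example: a 1200-watt hair dryer on a 120-volt household circuit
print(amps(1200, 120))   # 10.0 amps
print(watts(10, 120))    # 1200 watts
```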
There are about 200,000 miles of high-voltage transmission lines in the United States and millions of miles of distribution lines carrying electricity to our homes, schools and businesses.
No! Someone can be shocked by electricity and survive. But when we say someone has been electrocuted, it means they have been killed by electricity.
You can't see electricity when it is flowing through a circuit. But if electricity leaves the circuit—like when someone is shocked—you can see a spark. The spark isn't electricity itself. It is a flame that happens when the electricity travels through the air and burns up oxygen particles.
Neither! In the wires of an electrical circuit, the electrons are always jiggling around. When a circuit is closed to run an appliance or a light bulb, the electrons jiggle a lot and travel through the wire. When the circuit is open, all the electrons just jiggle where they are—kind of like running in place.
It's just the nature of electricity to move from an area of higher voltage to an area of lower voltage, if given a path to travel there. The ground is simply the lowest-voltage area around, so if you give electricity a path to the ground, it will take it, no questions asked! When electricity goes into the ground, the earth absorbs its energy.
It is easier for electricity to keep flowing through the power line than to go through the bird. But if a bird with large wings touches a power line and a tree or power pole at the same time, it provides electricity with a path to the ground, and could be shocked. And if a bird touches two wires at once, it will create a circuit—electricity will flow through the bird and likely electrocute it.
The shock you feel when you touch an object after walking on carpet is static electricity. When you drag your feet across carpet on a dry day, electrons from the carpet get transferred to your body. (Electrons are parts of the atoms that make up all matter.) If you then touch a piece of metal, such as a doorknob, the electrons jump to the metal and you'll feel a shock.
Lightning is a large discharge of static electricity. During a thunderstorm, clouds build up a charge when small bits of ice collide through a rising and sinking motion within the clouds themselves. The charges created by these collisions eventually fill up the whole cloud. When there is a big difference in charge between the cloud and its surroundings, the cloud discharges a lightning bolt.
One lightning strike can carry between 100 million and 1 billion volts. (100 million volts is the equivalent of 8 million car batteries.)
Yes, it does. Because water conducts electricity, when lightning strikes water, it spreads out along the surface. Any fish near the surface of the water get electrocuted.
According to Guinness World Records, Roy G. Sullivan, a former United States park ranger, was struck by lightning seven times over the course of his 35-year career. Lightning has burned off his eyebrows, seared his shoulder, set his hair on fire, injured his ankle, and burned his belly and chest.
Ben Franklin probably did not do his famous kite experiment the way it is usually portrayed. (Franklin never wrote about it himself, and the only description we have of it was written by another scholar, Joseph Priestley, 15 years later.) Franklin believed lightning was a flow of electricity taking place in nature. He knew of electricity’s dangers, and would probably not have risked being struck by lightning by flying his kite during a storm. It is more likely that Franklin flew his kite before the storm occurred, and that his famous key gave off an electric spark by drawing small electrical charges from the air.
There is a very small chance that a lightning strike could surge through phone lines or through the wires of an electrical appliance. If you were to touch a phone or appliance at just that moment, you could be shocked.
A chemical reaction within the battery forces electrons to move.
There is not enough voltage in a regular household battery to cause a shock. However, car batteries are powerful enough to shock, so you should never tamper with them.
Water and electric hair dryers are a dangerous combination! In the early 1980s, hair dryers falling into bathtubs or sinks filled with water caused about 18 deaths per year. Since 1991, hair dryer manufacturers have been required to include GFCIs (ground fault circuit interrupters) on dryer cords. GFCIs cut off electricity to prevent serious shock. Thanks to these devices, the number of hair dryer related deaths has dropped to an average of two per year.
Yes! An electric eel uses chemicals in its body to manufacture electricity. A large electric eel can produce a charge of up to 650 volts, which is more than five times the shocking power of a household outlet.
Inside the cells of the heart, tiny electrical currents fire in a steady rhythm. If that rhythm is disrupted due to disease or injury, a heart attack can occur. A defibrillator shocks every cell in the heart at the same time, so they all start up again in rhythm. It's like each cell is dancing to the same beat!
The wire inside a light bulb is called a filament. It is made of tungsten, a metal that stays solid at very high temperatures. Electricity flows through the tungsten filament, causing it to heat up and glow. The glow gives off light. Inside a light bulb is a vacuum—in other words, all the air has been removed from inside the glass bulb. (If there was air inside, the wire would burn up.)
Compact fluorescent lights (CFLs) and other fluorescent light bulbs contain gases (argon and mercury vapor) that produce invisible ultraviolet (UV) light when stimulated by electricity. When the UV light hits the white phosphor coating inside the fluorescent bulb, the phosphor illuminates or “fluoresces,” changing the UV light into visible light. CFLs are very energy-efficient, using only about one-fifth the energy of a standard incandescent bulb. This is because all of the electricity they use goes toward creating light, whereas the energy used by standard incandescent bulbs creates heat as well as light.
Like their energy-efficient cousins, CFLs, LEDs don’t waste energy on heat so they don’t get especially hot. But unlike CFLs, LEDs are illuminated solely by the movement of electrons in a semiconductor material. A semiconductor is a material with electrical conductivity (meaning the ability to transfer electrical energy) between that of a conductor and an insulator (hence the prefix “semi”). Inside an LED, when an electrical current passes through the semiconductor material, electrons move through the material and drop to other energy levels, and in the process they emit photons of light. LEDs are becoming an increasingly important and common light source because of their high degree of energy efficiency.
© 2020 Culver Media, LLC. All rights reserved.
Portland General Electric Co.
Traffic Mining is the art of extracting hidden, obfuscated or encrypted information from IP traffic, by observing only the Layer 2 - Layer 4 header features. It exploits the fact that nobody produces perfect and secure code when writing internet applications. The key point are libraries used by everyone, such as codecs. They have intrinsic features and physical characteristics which cannot be changed without impeding correct functionality. This characteristic behaviour reflects itself in Layer 3 and 4 header features, independent of any encryption on Layer 7. Prominent features are the packet length (PL) and the inter-arrival time (IAT), also known as packet interdistance, of the consecutive packets in an A or B flow.
The major part of the work in classifying encrypted traffic lies in the quality of the preprocessing. Hence, T2 focusses on what type of data should be fed into a classifier or a feature selection mechanism to produce optimal results.
In the following we will discuss these preprocessing approaches using the traffic skypeu.pcap, which contains a simple voice conversation between two peers. For illustration, t2plot, a wrapper for gnuplot, is used.
Normally this should not happen, because the NODEPOOL_FACTOR in pktSIATHisto.h is set to 17, which suffices for a large tree of PLs and IATs.
Nevertheless, increase the NODEPOOL_FACTOR to 18 or a bit higher, recompile and see what happens.
It is a multiplication factor for the hashChainTableSize, which denotes the maximum number of flows in memory at a specific time and is defined by HASHCHAINTABLE_BASE_SIZE in tranalyzer.h. So be careful: if you turn the NODEPOOL_FACTOR up unnecessarily high, each flow suddenly uses a considerable amount of memory.
[INF] Hash Autopilot: main HashMap full: flushing 1 oldest flow(s)! Fix: Invoke T2 with '-f value' next time.
So now you are all set for any pcap mishap that might hit you in the future. Let's start with the TM statistical approach.
To profile traffic, the flow representation is the most convenient one, because the nature of a traffic type can be compressed into a collection of numbers, e.g. a vector, which then can be postprocessed by standard programs such as SPSS, Matlab, Excel or an AI plugin.
For now we are interested in column 100 of skypeu_flows.txt, designated Ps_Iat_Cnt_PsCnt_IatCnt. It contains 3D statistics and their projections onto PL and IAT.
Or look at the projection, the packet length statistics. It contains information about the application.
Sometimes the IAT statistics bears some information about the application and the user. But often the IAT alone is not significant.
Using the -r option, all online features of gnuplot can now be used. The pl_iat, pl and iat distributions can now be fed into a classifier of your choosing.
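To make that hand-off concrete, here is a minimal Python sketch of turning a flow's packet-length statistics into a fixed-size, normalized vector for a classifier. The (PL, count) pairs, the bin width and the bin count are illustrative assumptions, not the exact pktSIATHisto output format:

```python
def pl_histogram(pl_counts, bin_width=64, n_bins=24):
    """Turn (packet_length, count) pairs into a fixed-size,
    normalized vector suitable as classifier input.
    Bin width and bin count are free design choices."""
    vec = [0.0] * n_bins
    for pl, cnt in pl_counts:
        b = min(pl // bin_width, n_bins - 1)  # clamp oversized PLs into the last bin
        vec[b] += cnt
    total = sum(vec) or 1.0
    return [v / total for v in vec]

# Toy example: mostly small voice-like packets plus a few MTU-sized ones
vec = pl_histogram([(120, 50), (160, 30), (1500, 20)])
```

Normalizing by the total count makes flows of different lengths comparable, which most classifiers require.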
and look into the .h file. For non C literate: The “//” denotes a comment in C, it has not effect on the constants; only change the values right after the constant with a editor of your choice. Change come into effect if the plugin is recompiled.
#define HISTO_PRINT_BIN 0 // 1: Bin number; 0: Minimum of assigned inter arrival time.
To conserve flow memory space, the resolution of the IAT distribution can be flexibly configured to match the needs of the classifier. E.g. for voice applications the region between 0-400ms needs a higher resolution than IAT > 1s. For other applications it might be different. Hence, six sections are predefined, three of which are activated by setting IATSECMAX. The constant IATBINBu defines the upper boundary of a section while IATBINWu denotes the bin width. Thus, the resulting distribution can be expanded or shrunk to your liking. If more than 6 sections are necessary, add new defines and range definitions.
Nevertheless, especially for statistical classifiers or unsupervised learners such as ESOM, a vector of constant dimension is more appropriate. For that reason the descriptiveStat plugin was created, supplying PL and IAT statistics vectors up to the 3rd moment.
For each flow of a certain class, such a descriptive vector can be fed into a C5.0 or any classifier for training and testing.
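A sketch of such a descriptive vector in Python; the formulas are the standard sample moments (mean, standard deviation, skewness), not necessarily the descriptiveStat plugin's exact definitions:

```python
import math

def descriptive_vector(pls):
    """Mean, standard deviation and skewness of a packet-length
    sequence: the kind of fixed-dimension per-flow vector that
    can be fed into a classifier."""
    n = len(pls)
    mean = sum(pls) / n
    var = sum((x - mean) ** 2 for x in pls) / n
    std = math.sqrt(var)
    # Third standardized moment; zero for constant sequences
    skew = 0.0 if std == 0 else sum((x - mean) ** 3 for x in pls) / (n * std ** 3)
    return [mean, std, skew]

# A flow of mostly small packets with one MTU-sized outlier
v = descriptive_vector([100, 100, 100, 1500])
```

The outlier drags the mean up and produces a positive skew, which is exactly the kind of shape information a classifier can exploit.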
As our small example is not diverse enough, an example of ESOM clustering of unknown 2GByte 1.7GBit/s traffic processed by T2 is depicted below. The resulting map arranges the unknown traffic type into regions, using only the PL descriptive vector.
The training of the map is performed by our own high-performance post-processing tool traviz3. Nevertheless, any AI tool can produce the same results; maybe not with the same speed, but for research purposes they will do the job. Just import the PL vectors of your traffic of choice into weka or matlab.
The flow index, the flow direction and the time processing can be selected in order to produce the appropriate signal for your purpose. You will see its application during the tutorial. Let us now discuss some prominent features of the plugin. So apply the script to the flow file, select flow index 1 and move it to another name; we will need it later on.
In order to classify encrypted applications, normally the first 5-10 packets bear enough information, because the initiation protocol reflects itself in this first PL_IAT sequence. N depends on the type of job at hand. For the first pcap supplied on the page N=20 is enough; for the second one we will need a bigger value. Nevertheless, you can select any N to your liking; just keep in mind that T2 has to hold all vectors times the number of flows in memory. So the performance of your machine is also a factor.
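The fixed-N idea can be sketched in a few lines of Python; the padding value and N are free choices, and zero-padding short flows is one common convention (an assumption here, not necessarily what T2 does internally):

```python
def first_n_vector(pls, n=20, pad=0):
    """Fixed-dimension vector of the first N packet lengths of a flow.
    Shorter flows are padded so every flow maps to the same input
    dimension for the classifier."""
    v = list(pls[:n])
    v.extend([pad] * (n - len(v)))
    return v

# A two-packet flow mapped into a 5-dimensional feature vector
vec = first_n_vector([60, 1500], n=5)
```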
The signal processing approach treats the PLs of a flow as a digital signal. Due to the fact that packets do not appear at regular intervals, the resulting signal has missing samples (s. fig below).
Now set NFRST_IAT to 0 and recompile, rerun T2 and use the script to produce a signal for A flow index 1.
If NFRST_IAT is 2, then a signal vector is produced with absolute time stamps. Recompile, rerun T2 and use the script to produce a signal with A positive, B negative PL of flow index 1.
Signals are represented by complex numbers; they have amplitude and phase, a fact constantly ignored by some researchers. Nevertheless, due to the nature of internet traffic, sometimes a quick fix of omitting time makes classifiers more resilient. Hence, the script fpsGplt has an additional parameter to replace time by an integer count, so a vector of equidistant PL values is produced, as depicted below.
It is obvious that the spectrum of the signal is now drastically distorted, but the vector can be easily processed by any AI which requires abstract vectored input. Nevertheless, from the signal processing standpoint this representation does not make much sense, unless the numbers on the x-axis were correctly sampled values. So how do we get there without much computational effort?
One obvious approach is to pick the smallest IAT and use 2/IAT as a sampling frequency, which often produces large vector dimensions and slows down the classification process.
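The smallest-IAT rule is a one-liner. A Python sketch, with illustrative timestamps:

```python
def min_iat_sampling_rate(timestamps):
    """Smallest inter-arrival time of a packet train and the resulting
    sampling frequency 2/IAT_min in Hz. Timestamps are in seconds."""
    iats = [b - a for a, b in zip(timestamps, timestamps[1:])]
    iat_min = min(iats)
    return iat_min, 2.0 / iat_min

# One 1 ms gap in an otherwise slow train forces a 2 kHz sampling rate
iat_min, fs = min_iat_sampling_rate([0.000, 0.020, 0.021, 0.060])
```

This illustrates the drawback stated above: a single small gap dictates the rate for the whole flow, inflating the vector dimension.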
Another approach is to reconstruct the signal with well-known methods already used in radar technology. Here, a sampling frequency is picked for a bandwidth-limited signal according to Shannon's requirements, which contains most of the energy of the original signal (Gerchberg-Papoulis). Been there, done that. Lots of computational effort; it requires specialized HW if really being considered. But then the missing samples can be reconstructed with a much lower frequency, producing fewer samples.
So a less expensive and easier way is required which almost satisfies dear old Shannon, and it has to be implemented in Tranalyzer in a performant way. Satisfying Shannon is easy, he is dead; satisfying the Anteater is more difficult.
The representation of a packet flow as a signal is vital. One method is to produce an A and B flow signal as depicted below. In order to preserve the causal correlation between the B and A signal, the B part has to be shifted by the start of the B flow. We will see later that there are complications in just combining the A and B flow into one signal, because the full-duplex nature of the IP protocol and the asymmetric delays of the peers do not guarantee causality between A and B packets. Leaving that aside, for the sake of simplicity let's first produce a signal which we can investigate and plot.
Zooming into the first part of the Signal (right mouse click defines the area) we see a small B spike followed by a larger A Peak.
The smallest difference between an A and a B peak normally defines the minimum sampling frequency, which we would like to be as low as possible, both to reduce the number of unnecessarily sampled zeros and for performance reasons. Let's see what happens if we omit this minimal A-B inter-packet distance and treat each flow separately to produce a signal which can readily be sampled with a low enough frequency. Have a look at the PL_IAT vector above and pick the minimum required pulse length for your sampling frequency.
Looking also at the plot above, you will notice the bursty nature of the packet-length signal. The task is to replace the spikes with an appropriate pulse length allowing a minimal sampling frequency. Looking at the sorted IAT list above, a drastic jump at 0.009251 can be identified. Thus any aggregation IAT below 9000 us would be fine. Let's choose 2000 us, because 1 ms is a reasonable unit for voice traffic. The minimal default pulse width is defined by NFRST_MINIAT(S/U)/NFRST_MINPLENFRC in nFrstPkts.h. The default value of NFRST_MINPLENFRC is 2.
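The jump-picking step can be sketched as follows (a simple heuristic for illustration, not the nFrstPkts implementation):

```python
# Sketch: pick an aggregation IAT just below the first big jump in the
# sorted IAT list. Heuristic illustration only, not nFrstPkts code.

def pick_aggregation_iat(iats, jump_factor=10.0):
    """Return the largest IAT below the first jump where consecutive
    sorted IATs differ by more than jump_factor."""
    s = sorted(iats)
    for lo, hi in zip(s, s[1:]):
        if lo > 0 and hi / lo > jump_factor:
            return lo   # aggregate everything at or below this IAT
    return s[-1]        # no jump found: no aggregation threshold

# Toy IAT list with a clear jump at ~9 ms, as in the trace above:
iats = [0.000040, 0.000180, 0.000210, 0.000450, 0.009251, 0.012000]
print(pick_aggregation_iat(iats))   # 0.00045
```

Any pulse width between the returned value and the jump (here 450 us to ~9 ms) preserves the burst structure; the tutorial's choice of 2000 us sits comfortably in that window.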
Now invoke t2plot using the -pl option, so that PL values are connected. This facilitates the recognition of signal characteristics.
Note that around 0.044 s an A pulse overlaps the B pulse. That is the effect mentioned before: the IATs between A and B packets are not considered, in order to avoid high sampling frequencies. Sure enough, this is what needs to be done if we really want to be thorough. An easy way to mitigate this effect is to consider the A and B flows separately.
One approach is to shift every conflicting B pulse into the future, which tampers with the phase of the signal. For classification purposes this is a pragmatic choice; for signal freaks, a no-go. They will get the minimum A/B spike IAT and use a fraction of that as a pulse length. This option will be integrated in version 0.8.2 of nFrstPkts.
Because the A/B vectors are stored in sequence, the -pl option in t2plot plots lines crossing the pulse at 0. To produce a consistent signal, sorting by time is required.
The peaky signal around 0.044s is the overlapping A/B signal effect described above.
You can add the L3/4 header length to the PL by setting NFRST_HDRINFO, but then all the signal-forming modes discussed here will be deactivated. NFRST_XCLD controls the exclusion of a certain PL range, defined by NFRST_XMIN and NFRST_XMAX. This is useful when certain PLs are not relevant for the classification process. Instead of weeding them out in the classifier itself, we can remove them beforehand, thus reducing the size of the model or facilitating the feature extraction process.
Now download a more complicated pcap where somebody streams a film.
So let's try 2000 for a start and set NFRST_IAT to relative mode.
This is one way to reduce the amount of sidelobes in the spectrum.
Let us now sample the signal with the default edge. The -p factor defines the IAT in [s] of the sampling pulses.
So you see, gnuplot does not show the PL 0 values in the chosen plot mode, but they are there in the sampled file.
Choose a higher NFRST_MINIATU according to the detail requirements of your classification process, remove the time info, and you have the Bytes-Per-Burst (BPB) measure.
If you need it non-inverted, omit the -i option.
That is discussed in our next AI tutorial, which is currently being written. If you cannot wait, put the following vectors into your AI and see how it performs. And, important, give us feedback.
A summary of the feature characteristics: Intel Core i7-4500U, 1000 GB 7200 RPM hard drive, and Windows 8. Its dimensions are 14.13" height x 22.48" length x 1.97" width, and it weighs approximately 19.8 lbs. The bar code for this item is 0886227625420. Shopping for the ET2322INTH-04 23-inch? To get the best price on this item together with other products, visit the market link on this page.
The ASUS All-in-One PC ET2322INTH-04 puts everything (display, processor, graphics, storage, memory, and more) into one slim and sleek form factor. The I/O ports are cleverly gathered together in a row at the back of the screen to keep all cables in one place. ET2322INTH not merely looks stylish and chic, but blends into your home and office spaces with equal ease. Its clutter-free and space-saving design features an ultra-slim 10-point multitouch display along with a silver sculpture-inspired hinge that curves gracefully from the base and, unlike on any other AiO PC, connects to the rear of ET2322INTH for a far more elegant look.
Powered by the all-new 4th-generation Intel Core i7 processor, ET2322INTH is ultra-efficient, consuming less energy while still delivering high performance. The ASUS ET2322 AiO PC comes with the new NVIDIA GeForce GT 740M graphics architecture that lets you preview and produce HD video faster, save more time when sharing high-quality pictures with friends, and enjoy smoother animation as well as better responsiveness when playing the most recent games. Intel Turbo Boost Technology 2.0 delivers an automatic burst of speed whenever it's needed, so you can seamlessly switch between applications for effortless multitasking of Internet surfing, animated PowerPoint creation and home-video editing, and spend much less time waiting on demanding tasks.
SuperSpeed USB 3.0 lets you recharge your smartphones, tablets, and other USB devices up to 50% more rapidly, and transfer data between devices up to ten times more quickly than the prior USB generation. What's more, ASUS Ai Charger II charges your smartphones and tablets without having to leave your PC on, for added convenience and power savings. Use ET2322INTH as a secondary Full HD display by connecting a PC, notebook or tablet via its HDMI input port; or connect to an HDTV for big-screen entertainment through the HDMI output port.
ET2322INTH has two built-in stereo speakers, an internal subwoofer, plus a 10-watt external subwoofer, all underpinned by the ASUS Golden Ear team's renowned SonicMaster Premium technology. Together with MAXXAUDIO specialist audio processing it delivers the ultimate audio experience: enjoy deeper and richer bass, a wider range, true-to-life surround, powerful output, distinct vocals, and pristine clarity. This makes ET2322INTH a supreme multimedia center, with every song, movie, and game sounding much better thanks to carefully balanced and dynamic audio.
Dimensions: Height: 14.13" Length: 22.48" Depth: 1.97"
Package Dim.: Height: 7" Length: 27.5" Depth: 20.6"
The search function isn't that super here euuuuu.
Why is the RF injected into the side of the EM device? I know it's a basic question, but can someone expand on it a little more?
The technique will come into its own when alignment over at least one million Km is feasible.
Now you'll notice that they were in a clean room (albeit in air) so that should give pause to anyone considering using this as a launch system for extremely light payloads directly from Earth's surface. Nevertheless, a little noodling would not go amiss on this topic.
The available motive power is Q*P (Q = 200, P = 800 W in the video), so the force F = 2*Q*P/c (1.1 mN in the video), and the acceleration a = F/m (m = 0.45 kg, a = 2.5 mm/s² in the video).
What would it take to get up to 1 gee for an Earth-based launch?
The acceleration needs to be increased by a factor of ~4000x.
Leaving m alone for the moment, Bae states that Q could improve by a factor ~5x (200->1000).
Now we need 4000/5 = 800x improvement.
Using a 800 kW laser does that for us (1000x).
Alternatively we can use a lower mass and thus a lower power laser.
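The chain of estimates above can be sanity-checked numerically with the F = 2QP/c relation, using the figures quoted from the video (Q = 200, P = 800 W, m = 0.45 kg):

```python
# Quick check of the photon-recycling thrust numbers quoted above.
C = 299_792_458.0   # speed of light, m/s

def thrust(Q, P):
    """Photon-recycling thrust F = 2*Q*P/c, in newtons."""
    return 2.0 * Q * P / C

F = thrust(200, 800)     # ~1.1e-3 N, the ~1.1 mN quoted from the video
a = F / 0.45             # ~2.4e-3 m/s^2, i.e. a few mm/s^2
needed = 9.81 / a        # ~4100x shortfall to reach 1 g
print(F, a, needed)
```

The shortfall factor comes out at roughly 4100x, consistent with the ~4000x figure above, so the Q-times-5 and laser-power-times-1000 route does indeed close the gap.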
No one else has chimed in, so I'll give this a shot.
I believe the choice of antenna position is predominantly a function of the antenna beam pattern, and the desire to couple maximum energy into the cavity.
A simple dipole antenna radiates/couples well in the perpendicular direction, so placing a dipole antenna perpendicular to the cavity wall would allow direct coupling into the dominant resonant direction (i.e. between the concave/convex end plates).
I had proposed (many pages back) that the use of a waveguide to inject a magnetron's signal had the effect of a directional beam pattern that was much better at injecting energy than removing energy from the cavity. (Since resonating energy is dominantly between the end plates, a waveguide input roughly perpendicular to the walls would inject energy better than remove it.) However, I'll readily admit my reasoning may be overly simplistic.
Using the enhanced photo I built a Google SketchUp model such that when overlayed with the image it matches. Then I scaled the model so that the RF connector plate measured 1 inch along one side. The resulting cone has dimensions of sD: 122.2mm, bD: 223.8mm, L: 153.98mm. Not sure the margin of error but the numbers should look fairly close I hope.
1- Could someone please try an em cavity with the bottom (large curved end) not electrically bonded to the sidewalls & top. ie place a circular insulating gasket between the contact point of the sidewall bottom edge and the actual bottom curved plate, just the wall/plate boundary NOT covering the internal surface area of the curved plate.
What do you think will be different if the bottom plate is electrically insulated?
I think we could easily incorporate this into our design, we have planned to leave a small clearance between the bottom movable plate and the frustum. Our bottom plate may not be as curved as Shawyer's, but we'll be able to provide a small amount of curvature by tighting the screws to different lengths.
Accurate to about 5% = 1 − cos(18°). Could be better were you to use my observation about the distortion in the 1" dimensions.
I have this, but don't know what the small end chord, large end chord, or the perpendicular distance between the chords should be, in terms of wavelengths. Or the drive frequency for that matter.
What is your antenna like. A centered dipole (vertical in the images) works well, and the longer the better it seems to me.
It's an actual 3D Model and the projection lines up to a pixel, so the 'distortion' should be accounted for.
After building the 3D-Model I'm 'fairly certain' there is a rubber gasket between the end plates and cone. It's 1/16 of an inch thick in the model which reflects what I see in the image.
A little more detail - hidden edges can be seen in the xray rendering.
It was first question in my mind also. Why not on top or bottom ?
r(z′) = ((r2 − r1)/h) z′ + r1, running from r1 to r2?
It's Eq. 18 that still bothers me a bit. Invoking the Heaviside step function is OK, but I don't see the addl. components being detectable outside the cavity w/o a non-linear term. Maybe I'm missing something ?
Beats me. And it looks like the placement would put part of the big end in the near near field and loading the bejesus out of the source.
I've also noted they use a simple loop antenna, but there seems to be no consideration to what we called at Collins the "look angle"
Basically, you have to use a defined google search to find past items of interest on this thread.
How slender were the humeri of Giraffatitan?
Continuing with what seems to have turned out to be Brachiosaur Humerus Week here on SV-POW! (part 1, part 2, part 3), let’s consider the oft-stated idea that brachiosaurs have the most slender humeri of any sauropod. For example, Taylor (2009:796) wrote that:
Discarding a single outlier, the ratio of proximodistal length to minimum transverse width (Gracility Index or GI) in humeri of B. brancai [i.e. Giraffatitan] varies between 7.86 for the right humerus HMN F2 and 9.19 for the left humerus HMN J12, with the type specimen’s right humerus scoring 8.69, slightly more gracile than the middle of the range […] For the B. altithorax type specimen, the GI is 8.50, based on the length of 204 cm and the minimum transverse width of 24 cm reported by Riggs (1904:241). However, the B. altithorax humerus looks rather less gracile to the naked eye than that of B. brancai, and careful measurement from Riggs’s plate LXXIV yields a GI of 7.12, indicating that the true value of the minimum transverse width is closer to 28.5 cm. As noted by Riggs (1903:300-301), the surface of the distal end of this humerus has flaked away in the process of weathering. Careful comparison of the humeral proportions with those of other sauropods (Taylor and Wedel, in prep.) indicates that the missing portion of this bone would have extended approximately a further 12 cm, extending the total length to 216 cm and so increasing the GI to 7.53 – still less gracile than any B. brancai humerus except the outlier, but more gracile than any other sauropod species except Lusotitan atalaiensis (8.91), and much more gracile than the humerus of any non-brachiosaurid sauropod (e.g., Diplodocus Marsh, 1878 sp., 6.76; Malawisaurus dixeyi Jacobs, Winkler, Downs and Gomani, 1993, 6.20; Mamenchisaurus constructus Young, 1958, 5.54; Camarasaurus supremus Cope, 1877, 5.12; Opisthocoelicaudia skarzynskii Borsuk-Bialynicka, 1977, 5.00 – see Taylor and Wedel, in prep.)
Implicit in this (though not spelled out, I admit) is that the humeri of brachiosaurs are slender proportional to their femora. So let’s take a look at the humerus and femur of Giraffatitan, as illustrated in Janensch’s beautiful 1961 monograph of the limbs and girdles of Tendaguru sauropods:
The first thing you’ll notice is that the humerus is way longer than the femur. That’s because Janensch’s Beilage A illustrates the right humerus of SII (now properly known as MB R.2181) while his Beilage J illustrates the right femur of the rather smaller referred individual St 291. He did this because the right femur of SII was never recovered and the left femur was broken, missing a section in the middle that had to be reconstructed in plaster.
(What’s a Beilage? It’s a German word that seems to literally mean something like “supplement”, but in Janensch’s paper it means a plate (full-page illustration) that occurs in the main body of the text, as opposed to the more traditional plates that come at the end, and which are numbered from XV to XXIII.)
How long would the intact SII femur have been? Janensch (1950b:99) wrote “Since the shaft of the right femur is missing for the most part, it was restored to a length of 196 cm, calculated from other finds” (translation by Gerhard Maier). Janensch confused the left and right femora here, but assuming his length estimate is good, we can upscale his illustration of St 291 so that it’s to SII scale, and matches the humerus. Here’s how that looks:
Much more reasonable! The humerus is still a little longer, as we’d expect, but not disturbingly so.
Measuring from this image, the midshaft widths of the femur and humerus are 315 and 207 pixels respectively, corresponding to absolute transverse widths of 353 and 232 mm — so the femur is broader by a factor of 1.52. That’s why I expressed surprise on learning that Benson et al (2014) gave Giraffatitan a CF:CH ratio (circumference of femur to circumference of humerus) of only 1.12.
Anyone who would like to see every published view of the humeri and femora of these beasts is referred to Taylor (2009:fig. 5). In fact, here it is — go crazy.
Taylor (2009: figure 5). Right limb bones of Brachiosaurus altithorax and Brachiosaurus brancai, equally scaled. A–C, humerus of B. altithorax holotype FMNH P 25107; D–F, femur of same; G–K, humerus of B. brancai paralectotype HMN SII; L–P, femur of B. brancai referred specimen HMN St 291, scaled to size of restored femur of HMN SII as estimated by Janensch (1950b:99). A, D, G, L, proximal; B, E, H, M, anterior; C, K, P, posterior; J, O, medial; F, I, N, distal. A, B, D, E modified from Riggs (1904:pl. LXXIV); C modified from Riggs (1904:fig. 1); F modified from Riggs (1903:fig. 7); G–K modified from Janensch (1961:Beilage A); L–P modified from Janensch (1961:Beilage J). Scale bar equals 50 cm.
Notice that the femur of Giraffatitan, while transversely pretty broad, is freakishly narrow anteroposteriorly. The same is true of the femur of Brachiosaurus, although it’s never been shown in a published paper — I observed it in the mounted casts in Chicago.
So let's take a wild stab at recalculating the mass of Giraffatitan using the Benson et al. formula. First, measuring the midshaft transverse:anteroposterior widths of the long bones gives eccentricity ratios of 2.39 for the femur and 1.54 for the humerus (I am not including the anterior projection of the deltopectoral crest in the anteroposterior width of the humerus). Dividing the absolute transverse widths above by these ratios gives us anteroposterior widths of 148 mm for the femur and 150 mm for the humerus. So they are almost exactly the same in this dimension.
If we simplify by treating these bones as elliptical in cross section, we can approximate their midshaft circumference. It turns out that the formula for the circumference is incredibly complicated and involves summing an infinite series:
But since we're hand-waving so much anyway, we can use the approximation C = 2π sqrt((a²+b²)/2), where a and b are the major and minor radii (not diameters). For the femur, these measurements are 176 and 74 mm, so C = 848 mm; and for the humerus, 116 and 75 mm yields 614 mm. (This compares with FC = 730 and HC = 654 in the data-set of Benson et al., so we have found the femur to be bigger and the humerus smaller than they did.)
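That arithmetic is easy to check directly; a quick sketch:

```python
# Check of the quadratic-mean ellipse-circumference approximation
# used in this post, with the Giraffatitan radii quoted above (mm).
import math

def ellipse_circ_rms(a, b):
    """Approximation C = 2*pi*sqrt((a^2 + b^2)/2); a, b are radii."""
    return 2.0 * math.pi * math.sqrt((a * a + b * b) / 2.0)

femur   = ellipse_circ_rms(176, 74)   # ~848 mm
humerus = ellipse_circ_rms(116, 75)   # ~614 mm
print(round(femur), round(humerus))   # 848 614
```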
So the CF:CH ratio is 1.38 — rather a lot more than the 1.12 reported by Benson et al. (Of course, if they measured the actual bones rather than messing about with illustrations, then their numbers are better than mine!)
And so to the mass formula, which Campione and Evans (2012) gave as their equation 2:
log BM = 2.754 log (CH+CF) − 1.097
Which I understand to use base-10 logs, circumferences measured in millimeters, and yield a mass in grams, though Campione and Evans are shockingly cavalier about this. CH+CF is 1462; log(1462) = 3.165. That gives us a log BM of 7.619, so BM = 41,616,453 g = 41,616 kg.
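And the arithmetic itself, as a check (circumferences in millimeters, output converted from grams to kilograms):

```python
# Campione & Evans (2012) equation 2, as read in this post:
# log10(BM) = 2.754 * log10(CH + CF) - 1.097, BM in grams.
import math

def mass_campione_evans(c_h, c_f):
    """Body mass in kg from humeral and femoral circumferences in mm."""
    log_bm = 2.754 * math.log10(c_h + c_f) - 1.097
    return 10.0 ** log_bm / 1000.0   # grams -> kilograms

print(round(mass_campione_evans(614, 848)))   # ~41,616 kg
```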
Comparison with Benson et al. (2014)
Midshaft measurements and estimates for SII long bones (all measurements in mm)

                              SV-POW!            Benson et al.
                           femur  humerus       femur  humerus
Transverse diameter          353      232           –      240
Transverse radius            176      116           –      120
Anteroposterior diameter     148      150           –      146
Anteroposterior radius        74       75           –       73
Circumference                848      614         730      654
Total circumference             1462                  1384
Mass estimate (kg)            41,616                34,000
My new mass estimate of 41,616 kg is a lot more than the 34,000 kg found by Benson et al. This seems to be mostly attributable to the much broader femur in my measurement: by contrast, the humerus measurements are very similar (varying by about 3% for both diameters). That leaves me wondering whether Benson et al. just looked at a different femur, or perhaps used St 291 without scaling it to SII size. Hopefully one of the authors will pass by and comment.
More to come on this mass estimate real soon!
Campione, Nicolás E, and David C. Evans. 2012. A universal scaling relationship between body mass and proximal limb bone dimensions in quadrupedal terrestrial tetrapods. BMC Biology 10:1–21. doi:10.1186/1741-7007-10-60
Benson Roger B. J., Nicolás E. Campione, Matthew T. Carrano, Philip D. Mannion, Corwin Sullivan, Paul Upchurch, and David C. Evans. (2014) Rates of Dinosaur Body Mass Evolution Indicate 170 Million Years of Sustained Ecological Innovation on the Avian Stem Lineage. PLoS Biology 12(5):e1001853. doi:10.1371/journal.pbio.1001853
Janensch, Werner. 1950b. Die Skelettrekonstruktion von Brachiosaurus brancai. Palaeontographica (Supplement 7) 3:97-103, and plates VI-VIII.
Janensch, Werner. 1961. Die Gliedmaszen und Gliedmaszengurtel der Sauropoden der Tendaguru-Schichten. Palaeontographica, suppl. 7 (1), teil 3, lief. 4:177-235.
Taylor, Michael P. 2009. A re-evaluation of Brachiosaurus altithorax Riggs 1903 (Dinosauria, Sauropoda) and its generic separation from Giraffatitan brancai (Janensch 1914). Journal of Vertebrate Paleontology 29(3):787-806.
Taylor and Wedel, in prep. Hmm, I wonder where that’s got to? Matt, we really ought to warm this up and get it done. Why, yes I am using the bibliography of a blog-post to communicate with a co-author, thanks for asking.
20 Responses to “How slender were the humeri of Giraffatitan?”
Matt Wedel Says:
One of your references needs updating. I think that last one should be:
Taylor and Wedel, in prep. Well, seriously, dude, that manuscript is so straightforward and you’ve just written most of the discussion in this post series. Why don’t we just decide that we’re going to take one week–just not next week–and knock it out? Whatever we have at the end of the week goes up as a PeerJ preprint. That ought to keep our feet to the fire.
Allen Hazen Says:
Leg-bone circumference works as a predictor of body mass because it is a reasonable proxy for structural strength of the leg column. Not being a civil engineer, I can’t say anything definite here, but I would think that an elliptical cross section would not be as strong as a circular cross section of the same circumference. (Imaginary experiment: take the cardboard centres from two rolls of paper towels. Leave one round, carefully press the other until it assumes an elliptical cross section. Use both as columns to support a pile of encyclopedia volumes. Which buckles first?)
So, since the humerus is wide but “freakishly” thin front-to-back, would its circumference perhaps give a misleadingly high body mass estimate, given that the formula is derived from taxa with humeri of more nearly circular cross section?
(And no, I have no idea whatever how the effect of this would compare quantitatively with all the OTHER sources of uncertainty in body mass estimation!)
Mike Taylor Says:
Matt, on the humerus paper: I currently have twelve other papers in the queue to do before that one. But I will add it to the list. Meanwhile, don’t forget that paper’s sibling, Wedel and Taylor on North American brachiosaur cervicals, which has been dormant for about the same length of time.
Allen, I think that in your thought experiment the circular and elliptical tubes would be equally strong when it comes to supporting encyclopaedias, i.e. in longitudinal compression. The real issue is bending. The elliptical tube is stronger against bending parallel to its major axis, and weaker against bending parallel to its minor axis.
What does this tell us about the forces acting on brachiosaur femora in life? That the were subject to more mediolateral force than anteroposterior? That is odd for limbs that were swung back and forth in locomotion. I don’t know what to make of it. (And by the way, nearly all sauropod femora are broader than they are anteroposteriorly thick — brachiosaurs just take it to extremes.)
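For what it's worth, the trade-off can be made quantitative with a back-of-envelope sketch, treating the shaft as a solid ellipse (ignoring the medullary cavity) and using the second moment of area, I = (pi/4)·a·b³, where b is the semi-axis in the bending direction:

```python
# Back-of-envelope bending stiffness of an elliptical shaft.
# Assumes a solid section; real bone is hollow, so this is only
# a relative comparison, not an absolute stiffness.
import math

def second_moment_solid_ellipse(a, b):
    """I = pi/4 * a * b^3; bending deflects along semi-axis b."""
    return math.pi / 4.0 * a * b ** 3

a_ml, b_ap = 176.0, 74.0   # Giraffatitan femur semi-axes from the post, mm
I_ap = second_moment_solid_ellipse(a_ml, b_ap)   # resists AP bending
I_ml = second_moment_solid_ellipse(b_ap, a_ml)   # resists ML bending
print(I_ml / I_ap)   # (a/b)^2, about 5.7
```

So a Giraffatitan-like femoral section resists mediolateral bending nearly six times better than anteroposterior bending, which only sharpens the puzzle: the locomotor forces we would naively expect are anteroposterior.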
Mark Robinson Says:
Mike, I believe that you’re right about the cross-sectional area of a columnar structure being relevant to its ability to bear a static load, but that the shape is more important for withstanding non-vertical forces. I don’t know whether either is necessarily better than the other for calculating the masses of animals but Campione and Evans (2012) seem to show that circumference is a good proxy.
As for why sauropod femora are thicker laterally than anteroposteriorly, I wonder if it’s to do with their ability to counteract any “wobble” as they transfer weight to a foot during locomotion? They have large muscles designed to swing their limbs forwards and backwards with which they can probably also make small adjustments to resist any instability anteroposteriorly. However, I imagine that they have less muscle power available to deal with any lateral wobbliness so perhaps this is compensated for by having transversely wider femora?
Further hand-waving leads me to think that this (transversely wider femora) might be more noticeable in sauropods with a narrower stance (potentially less-stable laterally) and, if so, that there might also be a difference between humeri and femora with sauropods whose forelimb stance is wider than that of their hindlimbs.
How do titanosaur femora compare?
In each step cycle, all of the weight from the back half of the body had to be borne on a single femur, which was loaded eccentrically. I’ll bet that’s the driver for sauropod femora being wide in cross-section.
I wonder at what size that kicks in? I've never looked closely enough at elephant femora to tell if they've started down the same path. Where's John of the Freezers when you need him?
Also, here’s a project for some enterprising youngster: survey femoral cross-sectional eccentricity in sauropodomorphs and see if it correlates to body mass, as estimated both volumetrically and by limb bone allometry. If anyone has done that, I don’t think I’ve seen it. Wilson and Carrano discussed femoral eccentricity a bit, but I don’t know of any attempt at a broad survey.
In each step cycle, all of the weight from the back half of the body had to be borne on a single femur
That is true: but during that phase of the step cycle, the femur in question is driving the body forward, and so bearing the most powerful anteroposterior bending force that it has to withstand. So to me that seems like it would if anything require the opposite kind of eccentricity: femur cross-sectional area longer anteroposteriorly than transversely.
Also: it’s equally true that in another part of the step cycle, a single humerus has to bear all the weight of the front half of the body — so why isn’t the humerus similarly eccentric?
And finally: why would brachiosaur femora be so much more eccentric than, say, those of apatosaurs? (I don’t offhand know what the situation is with big titanosaurs — does anyone?)
All in all, I think there’s more going on here — something that we’ve not understood.
Someone ought to model the locomotion forces properly instead of just waving their hands like we are.
Benoît Régent-Kloeckner Says:
It might be naive, but it seems to me that even when restricting to the static balance of forces, one should expect such bones to be eccentric. Indeed, when one looks at the flying buttresses of a cathedral one is not surprised to see that they can be quite thin in the “anteroposterior” direction, but are always wide in the “mediolateral” direction. On the contrary, inner pillars are usually round.
So, might the eccentricity of bones (and the eccentricity variation between different bones) be explained by (or could be used to predict) the way they are placed: supporting weight vertically as a pillar, or supporting it more laterally as a flying buttress?
Regarding the greater eccentricity of brachiosaur femora, are their humeri generally less eccentric than, say, those of diplodocids? Also, what percentage of (static) mass is borne by a brachiosaur’s hindlimbs versus its forelimbs, and how does that compare with other sauropods?
And finally: why would brachiosaur femora be so much more eccentric than, say, those of apatosaurs?
Are they actually that much more eccentric? I know that Amphicoelias is the oddball for having a circular femur cross-section–although I’ve heard that some specimens of Diplodocus have round femora as well–which suggests that the default for diplodocids is to have femora that are at least somewhat elliptical in cross-section.
But your larger point, about actually knowing stuff instead of just hand-waving, is well taken. Hopefully if we say enough dumb stuff, Matt Bonnan or Heinrich Mallison will show up to correct us.
Are they actually that much more eccentric?
Yeah, I think so. I know femoral eccentricity is widespread in sauropods, but seriously, look at the Giraffatitan figure up there, it’s crazy.
Femoral eccentricity is one of the characters in most sauropod cladistics matrices: for example, in Harris (2006), which has been the basis of my own lightly modified analyses (Taylor and Naish 2007, Taylor 2009, Taylor et al. 2011), it's character 284, "Ratio of mediolateral:craniocaudal diameter of femur at midshaft", with scores 0 ('<1.85') and 1 ('>=1.85'). As you can see, Giraffatitan is right off the scale. (And so is Brachiosaurus, though I think not quite so extreme.)
Hopefully if we say enough dumb stuff, Matt Bonnan or Heinrich Mallison will show up to correct us.
That’s the plan. Though given that failure of that strategy so far, we may have to up the Dumb Factor.
I can't add much to the discussion on the biomechanical benefits of a transversely eccentric femoral shaft, but note that it seems to be correlated with graviportalism. Matt asks the pertinent question: at what size does this kick in? Elephants, from memory, do have quite eccentric femora, as does the extinct marsupial Diprotodon (which was 'only' white-rhino to hippo sized). I could add that at first glance the cross-section of the Antetonitrus femur is also very eccentric, but a look at the femur in distal or proximal view indicates a large amount of plastic deformation (it's squished), even though the shaft looks well-preserved and crack-free. I sometimes wonder if the crazy eccentricity of really big sauropods has been exaggerated by post-depositional forces in almost all cases; can't always take fossils at face value.
Nima Says:
41,616 kg? So Giraffatitan is heavy again :)
Then again, cases of the effect of femoral eccentricity on mass may not always be easy to predict.
This is why I always liked the model-dunking method, instead of the complicated equations method. Visually you can draw your own conclusions about the reliability of the model used, rather than having to pick apart complex maths and proofs. It may be entirely possible that the maths could differ significantly with different species (muscle mass ratios, pneumaticity, etc. all are factors.)
I suspect pneumaticity could reduce the weight well below 41 tons. But perhaps our image of what Giraffatitan looked like has been too skewed by Greg Paul’s starving skeletals.
Ian Corfe Says:
Nice analysis. A couple of notes, questions and thoughts:
– Is the HMN St 291 femur the same as the ‘Ni’ locality one used in the Giraffatitan mount (right femur)? As far as I can see Campione & Evans 2012 used only the mounted specimen for both humerus and femur circumferences, using actual circumference measurements not extrapolating from length/breadth dimensions, and the same data was used in Benson et al 2014.
– You note the left femur of SII is broken and missing the middle section, and reconstructed with plaster. Is this likely to have also underestimated the true circumference if this was the femur measured rather than the right side femur from another individual?
– Although your Giraffatitan linear humerus measurements are very similar to those of Campione & Evans 2012/Benson et al 2014, the circumference you calculated is a bit further (around 7%) from that measured by the authors, and it is the circumference that feeds into the mass estimation.
– Benson et al 2014 note their method for estimating circumference from linear measurements (though only used for Brachiosaurus, not Giraffatitan) as follows:
“To estimate femoral circumferences from observed diameters, we parsed our data into a set of 29 groups, including paraphyletic grades (denoted ‘basal’). These were intended to represent approximate ‘body plan’ groupings that should have similar relationships between humeral and femoral shaft diameters and their circumferences, a hypothesis that was tested using regression.
For each of the femur and humerus, each group contained some taxa for which the minimum shaft circumference and both its anteroposterior and mediolateral diameters were known, some taxa for which only a subset of these measurements were known, and some taxa for which none of these were known (for example, if the bone was not preserved in a specimen of that taxon). We estimated femoral circumferences for taxa in which at least one diameter was known by taking the following steps:
(1) We estimated the ordinary least squares regression equation of anteroposterior shaft diameter on mediolateral shaft diameter, and mediolateral shaft diameter on anteroposterior shaft diameter for each group in which both measurements were known in at least three taxa.
(2) For groups in which a significant (p < 0.05) regression relationship existed between the diameters, we used those relationships to predict the second diameter measurement for taxa in which only one diameter measurement was known. In general, bipedal groups with sufficient sample sizes had well-constrained relationships between the diameters of their mass-supporting stylopodials, but the relationship was weaker in some quadrupedal groups, especially Ceratopsidae, Hadrosauroidea and Sauropoda, suggesting they exhibit more variable eccentricity (Table S1).
(3) We used equation [4] below to convert pairs of diameters (dml = mediolateral diameter; dap = anteroposterior diameter) into circumferences, assuming that the bone shaft has an oval cross-section (circumferenceoval):
[4] circumferenceoval = pi * ((3 * (dml + dap)) - (((3 * dml + dap) * (dml + 3 * dap))^0.5))
(4) Measured shaft circumference was regressed through the origin on circumferenceoval for each group. All R2 values exceeded 0.985 and the slopes of the regression lines (ranging from 0.92–1.10) were used as correction factors to translate circumferenceoval into an estimate of the true shaft circumference for taxa in which a measured shaft circumference was not known. Some groups had too little data to estimate a correction factor. Thus, the factor for Dromaeosauridae was used for Avialae and Alvarezsauroidea, the factor for Titanosauriformes was used for Macronaria, the factor for Eusauropoda was used for Sauropoda, the factor for basal Ornithischia was used for Pachycephalosauria, and the median factor for all groups was used for Therizinosauria."
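Equation [4] above is recognizable as Ramanujan's first approximation for the perimeter of an ellipse, which is normally stated in terms of semi-axes rather than diameters; a minimal sketch of that step, assuming the measured diameters are halved to semi-axes so that a circular cross-section returns the familiar pi * d:

```python
import math

def circumference_oval(dml, dap):
    """Ramanujan's first ellipse-perimeter approximation, applied to
    semi-axes a = dml/2 and b = dap/2 (dml, dap = shaft diameters)."""
    a, b = dml / 2.0, dap / 2.0
    return math.pi * (3.0 * (a + b) - math.sqrt((3.0 * a + b) * (a + 3.0 * b)))

# Sanity check: a circular shaft of diameter 100 has circumference pi * 100.
print(circumference_oval(100.0, 100.0))  # ~314.16
```

A group-specific correction factor (the 0.92–1.10 regression slopes mentioned in step 4) would then be applied to translate this oval estimate into an estimate of the true shaft circumference.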
So for the Brachiosaurus humerus, where no circumference or ant-post diameter was measured, the ant-post diameter was predicted using the regression of all 'basal Titanosauriformes'. The circumference was then calculated from the oval formula, and a correction factor was then applied based on the regression of measured circumference vs. circumferenceoval for 'basal Titanosauriformes'.
As far as I can see, this is likely to overestimate the ant-post diameter (based on the slenderness of brachiosaurid humeri), and thus the humerus circumference of Brachiosaurus. That probably means the femur:humerus ratio of Brachiosaurus may be underestimated because of an overestimated humerus circumference (though the humerus circumference isn't reported in the Benson et al 2014 data), while that for Giraffatitan may also be underestimated, but due to a smaller than expected femur circumference from either a reconstructed femur or a small femur from a different individual to the humerus.
Looking forwards to the next post!
Oh yes, and in terms of mass, this probably means the Brachiosaurus mass is overestimated and the Giraffatitan mass underestimated, using this method. So they were probably closer in mass as you previously pointed out! (Though that doesn’t help determine if brachiosaur mass is in general overestimated by the method, just points out specific potential problems with those mounts/specimens)…
How big was Brachiosaurus altithorax? I mean, really? | Sauropod Vertebra Picture of the Week Says:
[…] Since we’re currently in a sequence of Brachiosaurus-themed posts [part 1, part 2, part 3, part 4, part 5, part 6], this seems like a good time to fix that. So here is my response, fresh from […]
Brachiosaurus altithorax, fossils in the field | Sauropod Vertebra Picture of the Week Says:
[…] our Brachiosaurus series [part 1, part 2, part 3, part 4, part 5, part 6, part 7], here is another historically important photo scanned from the Glut […]
Following up from the above to figure out the estimated Brachiosaurus humerus circumference:
Working backwards through the equations to figure the estimated humerus circumference of Brachiosaurus gives a figure of 688mm. So Brachiosaurus had an (estimated) humerus 688mm circumference, femur 940mm, giving a ratio of 1.37:1 and total C(h+f) 1628mm for a mass of 56000kg. Giraffatitan measured and estimated by Benson et al 2014 measurements is humerus 654 (about 5% smaller than Brachiosaurus), femur 730 (22% smaller), ratio 1.12:1, total C(h+f) 1384 (15% smaller) for a mass of 34,000kg (39% less mass). The measurements and estimates by Mike are above.
If Benson et al 2014 and Campione & Evans 2012 used the smaller femur of the Berlin Giraffatitan mount, from a different, smaller individual, for their measurements, then that could possibly explain the discrepancy (as Mike and I both suggest earlier). Assuming the same 1.37:1 ratio between femur and humerus for Giraffatitan as for Brachiosaurus (and so both humerus and femur about 5% smaller than in Brachiosaurus), we get:
Giraffatitan humerus = 654mm, femur = 893 (which is 5% larger than Mike’s estimate from the 2 diameters), total C(h+f) 1547 (6% larger than Mike’s estimate), for a mass of 48625kg (13% less mass than Brachiosaurus, but 17% greater than Mike’s Giraffatitan estimate and 43% greater than the Benson et al 2014 estimate).
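The mass figures in these comments appear to follow the quadrupedal limb-bone scaling equation of Campione & Evans (2012), log10(body mass in g) = 2.754 × log10(humeral + femoral circumference in mm) − 1.097; a sketch under that assumption, which reproduces the numbers above to within rounding:

```python
import math

def mass_kg(c_humerus_mm, c_femur_mm):
    """Body mass from combined humeral + femoral shaft circumference,
    assuming the Campione & Evans (2012) quadrupedal regression."""
    log_mass_g = 2.754 * math.log10(c_humerus_mm + c_femur_mm) - 1.097
    return (10.0 ** log_mass_g) / 1000.0  # grams -> kilograms

print(round(mass_kg(688.0, 940.0)))  # Brachiosaurus: ~56,000 kg
print(round(mass_kg(654.0, 893.0)))  # Giraffatitan with rescaled femur: ~48,600 kg
```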
As noted above though, this may be the wrong femur:humerus ratio if that of Brachiosaurus is underestimated by overestimating the humerus circumference…
I hope the message people are taking away from this is not “Brachiosaurus weighed 56000 kg”, but “Be very wary of mass estimates extrapolated from a single measurement of a single bone”!
How much is our intuition about sauropod mass worth? | Sauropod Vertebra Picture of the Week Says:
[…] As promised, some thoughts on the various new brachiosaur mass estimates in recent papers and blog-posts. […]
Arm lizard | Sauropod Vertebra Picture of the Week Says:
[…] all approximately correct. The actual humerus is 204cm long, but the distal end is eroded and it was probably 10-12cm longer in life. I don’t know how big this cast is, but I know that casts are inherently untrustworthy so I […]
Subjects and Methods
Date of Acceptance 12-Dec-2019
Date of Web Publication 20-Mar-2020
Background and Objectives: The main effect of pulmonary stenosis is a rise in right ventricular pressure. This pressure overload leads to multiple changes in the shape, dimensions, and volumes of the right ventricle (RV) that are reversed after relief of the valve obstruction. We sought to study the changes in the RV in patients undergoing balloon pulmonary valvuloplasty (BPV) using three-dimensional (3D) echocardiography.
Subjects and Methods: The study included 50 patients with isolated valvular pulmonary stenosis who underwent BPV at our hospital from December 2016 to August 2017; echocardiography was performed before the procedure and 3 months afterward.
Results: The median age of the study group at the time of the procedure was 2.7 years. The indexed RV wall thickness, basal, and mid-right ventricular dimensions decreased significantly after the procedure (P < 0.005), and the longitudinal dimension increased significantly after the procedure (P < 0.005). The end-systolic and the end-diastolic volumes (EDVs) by 3D echocardiography increased insignificantly (P > 0.05), and the right ventricular function increased significantly (P < 0.05), indicating that the changes in the EDVs were more than the changes in the end-systolic volumes.
Conclusions: There are several factors that interplay together and result in reverse remodeling of the RV after BPV including regression in the RV hypertrophy; changes in the interventricular septal morphology, bowing, and mobility; and changes in the ventricular geometry and dimensions, rather than changes in the ventricular volumes.
Keywords: Balloon pulmonary valvuloplasty, pulmonary stenosis, right ventricular reverse remodeling, three-dimensional echocardiography
Mansour A, Elfiky AA, Mohamed AS, Ezzeldin DA. Mechanism of the right ventricular reverse remodeling after balloon pulmonary valvuloplasty in patients with congenital pulmonary stenosis: A three-dimensional echocardiographic study. Ann Pediatr Card 2020;13:123-9
Mansour A, Elfiky AA, Mohamed AS, Ezzeldin DA. Mechanism of the right ventricular reverse remodeling after balloon pulmonary valvuloplasty in patients with congenital pulmonary stenosis: A three-dimensional echocardiographic study. Ann Pediatr Card [serial online] 2020 [cited 2022 Jan 18];13:123-9. Available from: https://www.annalspc.com/text.asp?2020/13/2/123/281097
Pulmonary stenosis is the second most common congenital cardiac malformation which comprises 7.5%–9% of all congenital heart defects.[1]
Secondary changes in the right ventricle (RV) and pulmonary arteries occur as a result of pulmonary valve obstruction.[2]
The main physiologic effect of valvular pulmonary stenosis (PS) is a rise in RV pressure proportional to the severity of obstruction. This elevation of RV pressure is accompanied by an increase in muscle mass where hyperplasia of the muscle cells with a concomitant increase in the number of capillaries occurs. In contrast, the adult myocardium responds with hypertrophy of the existing fibers, with no change in the capillary network.[2]
This is associated with changes in the morphology and the mobility of the interventricular septum, as well as the dimensions, volumes, and morphology of the right ventricular cavity.[2]
After successful balloon pulmonary valvuloplasty (BPV), these changes are reversed, so we sought to study and evaluate these changes by three-dimensional (3D) echocardiography.
The aim was to study the changes in the function, dimensions, and volumes of the RV using 3D echocardiography before and 3 months after percutaneous BPV in patients with congenital pulmonary stenosis.
This study was approved by our institutional review board and informed consent was obtained from the parents of all the children enrolled in the study. The study protocol conforms to the ethical guidelines of the 1975 Declaration of Helsinki.
The study included 50 patients with isolated valvular pulmonary stenosis who underwent BPV at our hospital from December 2016 to August 2017; echocardiography was performed before the procedure and 3 months afterward.
Patient demographic data included age at the time of the procedure, gender, body weight, and body surface area.
Pre-BPV-echocardiographic data included pulmonary valve annulus, peak pressure gradient across the pulmonary valve, valve morphology, degree of tricuspid regurgitation (TR), degree of pulmonary regurgitation, and right ventricular dimensions, volumes, and functions by 2D and 3D echocardiography.
Right ventricle assessment
Imaging views
Apical four-chamber (RV focused), modified apical four-chamber, left parasternal long axis and parasternal short axis, left parasternal RV Inflow, and subcostal views were used for the comprehensive assessment of the RV. Right ventricular hypertrophy and contractility were observed. The RV dimensions were measured at end-diastole from a RV-focused apical four-chamber view without foreshortening.[3]
Right ventricle dimensions and free wall thickness
RV dimensions were measured by 3D echocardiography in the apical four-chamber view at end-diastole. The basal diameter was defined as the maximal short-axis dimension in the basal one-third of the RV cavity; the mid-RV diameter was measured in the middle third of the RV at the level of the left ventricular (LV) papillary muscles; and the longitudinal dimension was drawn from the RV apex to the midpoint of the TV annulus. RV wall thickness was measured in diastole from the parasternal long-axis view [Figure 1].[3]
Figure 1: Showing the apical four-chamber view right ventricle focused measuring dimensions in patient number 16
Right ventricle volumes
At the apical four-chamber view, a 2DQ software automatically traced the end-diastole and end-systole frames and consequently detected the end-diastolic volume (EDV) and the end-systolic volume (ESV) using the area length method. Manual adjustment was done when needed to include the myocardial trabeculae and papillary muscles and to optimize the RV border tracing. The software generates volume/time curves through which the EDV and ESV were measured.[3]
Right ventricle function
Fractional area change
The fractional area change was estimated by tracing the RV in the apical four-chamber view to obtain the end-diastolic and the end-systolic areas, and then the percentage change between them was calculated according to the following equation:[3]
FAC (%) = [(end-diastolic area − end-systolic area) / end-diastolic area] × 100
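A minimal sketch of this calculation, with areas (in cm²) taken from the traced end-diastolic and end-systolic frames:

```python
def fractional_area_change(eda_cm2, esa_cm2):
    """RV fractional area change (%): the percent drop from the
    end-diastolic area (EDA) to the end-systolic area (ESA)."""
    return 100.0 * (eda_cm2 - esa_cm2) / eda_cm2

print(fractional_area_change(20.0, 10.0))  # 50.0 (% FAC)
```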
Tricuspid annular peak systolic excursion
M-mode was used to estimate the RV function by applying M-mode to the lateral TV annulus in the apical four-chamber view and then measuring the peak distance of annular motion [Figure 1].[3]
Doppler evaluation
The severity of pulmonary valve stenosis was assessed by applying continuous-wave Doppler across the PV flow to estimate the pressure drop across the pulmonary valve using the simplified Bernoulli equation, which states that P = 4 × (V2)^2, where P is the peak instantaneous pressure gradient, in millimeters of mercury (mmHg), across the pulmonary valve, and V2 is the peak flow velocity, in meters per second, distal to the orifice. The degree of stenosis was classified as moderate PS (peak pressure gradient [PPG] of 30–64 mmHg) or severe PS (PPG >64 mmHg).[4]
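A sketch of the simplified Bernoulli calculation and the severity cut-offs quoted above. Note that the "mild" label for gradients below 30 mmHg is an assumption added here for completeness; the text only defines the moderate and severe grades:

```python
def peak_gradient_mmhg(v2_m_per_s):
    """Simplified Bernoulli equation: deltaP = 4 * V2^2, with V2 in m/s."""
    return 4.0 * v2_m_per_s ** 2

def classify_ps(ppg_mmhg):
    """Severity grades from the text: moderate 30-64 mmHg, severe > 64 mmHg."""
    if ppg_mmhg > 64.0:
        return "severe"
    if ppg_mmhg >= 30.0:
        return "moderate"
    return "mild"  # assumption: below the moderate cut-off

print(peak_gradient_mmhg(4.0))               # 64.0 mmHg
print(classify_ps(peak_gradient_mmhg(4.5)))  # 4 * 20.25 = 81 mmHg -> "severe"
```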
After completing the two-dimensional echocardiography, all the cases were subjected to electrocardiogram gated 3D echocardiography using the same Phillips iE33 echocardiography machine. RV data sets were recorded during a four-beat acquisition (obtaining one subvolume during each heartbeat). The subvolumes were then electronically merged into one dataset from the four-chamber apical view ensuring that the entire RV was viewed simultaneously in both orthogonal planes with minimal spatial and temporal artifacts. Both the patient and the transducer positions were modified for optimal simultaneous visualization, and then, a full-volume loop was acquired. The data sets were then stored for further analysis.[5]
QLAB 10 quantification software was used to assess the image quality including analyzable RV apex, lateral wall, and tricuspid valve. Manual tracing of the endocardial border was done during the end-diastolic phase and the end-systolic phase. The software then automatically delineated the RV endocardial border. By sequential analysis, the software created an RV mathematic dynamic 3D endocardial surface that represents changes in the RV cavity over the cardiac cycle. From this 3D endocardial surface, global RV volumes and ejection fraction (EF) were automatically calculated [Figure 2].[6]
Figure 2: Apical four-chamber view showing the end-diastolic volume, end systolic volume and right ventricle function automatically calculated by the software after manual tracing of the right ventricle endocardial borders in both end-diastolic and end-systolic phases and a three-dimensional image of the right ventricle at the end-systolic phase in patient number 1
Procedural data included pre- and postprocedure pressure gradient, pulmonary valve annulus, balloon type and balloon size used, balloon-to-annulus ratio, and any intra or immediate postprocedure complications.
Follow-up echocardiogram
All patients underwent follow-up echocardiographic study 3 months after the procedure using a Philips IE33 machine with emphasis on the evaluation of the RV.
The degree of residual pulmonary stenosis as well as the degree of pulmonary regurgitation and TR was recorded.
The study included 20 males (40%) and 30 females (60%) with a mean age of 2.7 years (ranged from 6 months to 21 years). The patient's body surface area ranged from 0.2 to 1.92 m2, with a mean of 0.5 m2; their height ranged from 50 to 172 cm, with a mean of 79.7 cm; and their weight ranged from 3 to 85 kg, with a mean of 14.1 kg [Table 1].
Table 1: Demographic characteristics of the study groups
There was a highly significant drop in pressure gradient cardiac catheterization from a mean of 67.84 ± 13.47 mmHg before BPV to a mean of 27.56 ± 3.80 mmHg immediately after BPV and that concurred with the drop of the pressure gradient measured by echocardiography that dropped from a mean of 80.4 ± 28.8 mmHg before BPV to a mean of 17.1 ± 8.4 mmHg immediately after successful BPV; this drop in the PG was maintained during the follow-up.
The change in the RV end-diastolic pressure and the pulmonary artery diastolic pressure showed a nonsignificant change before and after BPV as measured by cardiac catheterization with values of 6.40 ± 1.44 mmHg and 6.38 ± 1.32 mmHg before and after BPV for the end-diastolic pressure and 10.68 ± 2.11 mmHg and 10.50 ± 2.05 mmHg for the pulmonary artery diastolic pressures.
The pulmonary artery systolic pressure, as measured by catheterization, changed significantly from 20.34 ± 2.43 mmHg before BPV to 19.58 ± 2.15 mmHg immediately after BPV (P = 0.015).
Early restenosis was detected in three cases (6%) as they reported a PPG across PV >36 mmHg, while 47 cases (94%) reported a PPG across PV <36 mmHg recorded by Doppler echocardiography at the 3-month follow-up after BPV.
At follow-up, there was no significant increase in patients with pulmonary regurgitation (PR) and TR compared to baseline before BPV; 82% had mild PR, 18% had trivial PR, 54% had mild TR, and 46% had trivial TR.
The right ventricle free wall thickness indexed to body surface area
It decreased from a mean of 13.2 ± 4.84 mm/m2 before BPV to a mean of 10.01 ± 3.11 mm/m2, and this decrease was statistically highly significant with P < 0.001 [Figure 3].
Figure 3: Right ventricle wall thickness before and after balloon pulmonary valvuloplasty
The basal right ventricle dimension (V1) indexed to body surface area
It decreased from a mean of 6.74 ± 3.68 cm/m2 before BPV to a mean of 5.90 ± 2.87 cm/m2 after successful BPV, and the decrease was statistically highly significant with P = 0.001 [Figure 4].
Figure 4: Right ventricle dimensions before and after balloon pulmonary valvuloplasty
The mid-right ventricle dimension (V2) indexed to body surface area
It decreased from a mean of 5.44 ± 3.14 cm/m2 before BPV to a mean of 4.87 ± 2.38 cm/m2, and the decrease was statistically significant with P = 0.018.
Before BPV, the transverse dimensions of 32 patients were above the normal values. After BPV, the transverse dimensions of those patients decreased; 22 of them restored their normal values, while the transverse dimensions of the other ten patients were still above the normal values.
The longitudinal right ventricle dimension (V3) indexed to body surface area
It increased from a mean of 10.48 ± 5.46 cm/m2 before BPV to a mean of 11.36 ± 5.69 cm/m2 after successful BPV, and the increase was statistically highly significant with P = 0.003.
Before BPV, the longitudinal dimensions of 26 patients were below the normal values. After BPV, three of them showed a slight increase in the longitudinal dimension yet remained below normal values, while the other 23 patients regained their normal values. It was noted that six patients of those who denoted longitudinal dimensions within normal values before BPV showed an increase in their longitudinal dimension after BPV yet remained within the normal values.
Regarding the Z-score values for the RV dimensions, there were statistically highly significant changes in these parameters before and after BPV, with a median value and interquartile range of 1.10 (0.7–1.4) before BPV and 1.20 (0.9–1.4) after BPV.
The end-diastolic right ventricle volume indexed to body surface area measured by two-dimensional echocardiography
It increased from a mean of 28.71 ± 13.49 ml/m2 before BPV to a mean of 28.95 ± 13.43 ml/m2 after successful BPV, but this increase was statistically insignificant with P = 0.086.
The end-systolic right ventricle volume indexed to body surface area measured by two-dimensional echocardiography
It increased from a mean of 14.31 ± 8.69 ml/m2 before BPV to a mean of 14.77 ± 8.96 ml/m2 after successful BPV, but this increase was statistically insignificant with P = 0.053.
The fractional area change
It increased from a mean of 51.99% ±9.57% before BPV to a mean of 55.41% ±10.02% after successful BPV, and this increase was statistically significant with P = 0.021.
Tricuspid annular peak systolic excursion
It increased from a mean of 21.58 ± 3.22 mm before BPV to a mean of 22.5 ± 3.41 mm after successful BPV, and the increase was statistically highly significant with P = 0.005 [Table 2].
Table 2: Two-dimensional echocardiographic measurement before and three months after balloon pulmonary valvuloplasty
Regarding the RV diastolic parameters, there were no significant changes in the tricuspid valve E velocity, A velocity, and E/A ratio before and after BPV [Figure 5].
Figure 5: Changes in the right ventricle indexed volumes by three-dimensional echocardiography before and after balloon pulmonary valvuloplasty
The E/E' of the TV lateral annulus showed a statistically highly significant change after BPV, with a value of 6.90 ± 1.02 before BPV and 6.10 ± 0.85 after BPV. The RV diastolic dysfunction grade, however, did not change significantly before and after BPV, which may suggest that it takes a longer time for the RV diastolic dysfunction grade to improve.
Three-dimensional echocardiography assessment before and three months after balloon pulmonary valvuloplasty
The end-diastolic right ventricle volume indexed to body surface area
It increased from a mean of 28.77 ± 16.45 ml/m2 before BPV to a mean of 30.51 ± 15.03 ml/m2 3 months after successful BPV, and this increase was statistically insignificant with P = 0.085.
End-systolic right ventricle volume indexed to body surface area
It increased from a mean of 14.39 ± 8.85 ml/m2 to a mean of 14.45 ± 8.52 ml/m2 3 months after successful BPV, and this increase was statistically insignificant with P = 0.94 [Figure 5].
Right ventricle function by three-dimensional echocardiography
It increased from a mean of 50.72% ±8.074% before BPV to a mean of 53.74% ±9.56% 3 months after successful BPV, and this increase was statistically significant with P = 0.043, which indicates that the increase in the right ventricular EDV index (RVEDVI) was larger than the increase in the right ventricular ESV index (RVESVI).
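Computing the EF from the group-mean indexed volumes illustrates the point. A sketch (note that the EF of the mean volumes is not identical to the reported mean of the per-patient EFs, but the direction of change is the same):

```python
def ejection_fraction(edv, esv):
    """EF (%) = stroke volume / end-diastolic volume * 100."""
    return 100.0 * (edv - esv) / edv

# Group-mean indexed RV volumes (ml/m2) before and 3 months after BPV:
print(ejection_fraction(28.77, 14.39))  # ~50.0% before
print(ejection_fraction(30.51, 14.45))  # ~52.6% after: EDV rose more than ESV
```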
Only nine cases (18%) had trivial PR recorded by Doppler echocardiography at the 3-month follow-up after BPV, while 82% of the patients had mild PR.
Early restenosis was detected in three cases (6%) as they reported a PPG across PV >36 mmHg, while 47 cases (94%) reported a PPG across PV <36 mmHg recorded by Doppler echocardiography at the 3-month follow-up after BPV [Table 3].
Table 3: Three-dimensional echocardiographic measurement before and three months after balloon pulmonary valvuloplasty
The main effect of pulmonary stenosis is a rise in right ventricular pressure. This elevation is accompanied by multiple changes in the RV muscle and geometry, including changes in the morphology, movement of the interventricular septum, as well as changes in the shape of the RV cavity, and an increase in the RV muscle mass.[2]
Since the first description of BPV in 1982 by Kan, the procedure became the treatment of choice. Immediate reduction of gradient, increase in jet width, and free motion of the pulmonary valve leaflets with less doming have been observed following balloon dilatation. Improvement of right ventricular function has also occurred.[1] There are insufficient data about the effects of BPV on RV geometry, volumes, and function in patients with congenital pulmonary stenosis, mainly because the quantification of RV size and function with conventional echocardiography is challenging because of the anterior position of the RV in the chest, its complex asymmetrical geometry, complex crescentic shape, highly trabeculated endocardial border, the impossibility of simultaneously visualizing both inflow and outflow tracts, and the lack of realistic geometric models for volume calculation.
Indeed cardiac magnetic resonance (CMR) is the current gold standard for the quantification of RV geometry and function, but its widespread use is limited by costs, time consumption, and contraindications, making it unsuitable for patients screening or monitoring on large scale.[7]
As right ventricular size and function have been found to be important predictors of cardiovascular morbidity and mortality, the development of novel echocardiographic techniques including 3D became a must and opened new exciting opportunities in right ventricular imaging.
3D echocardiography has proven accuracy in measuring RV volumes and function when compared with CMR.[8]
Echocardiography is a widely available imaging technique particularly suitable for follow-up studies because of its noninvasive nature, low cost, and lack of ionizing radiation or radioactive agent. Real-time 3D echocardiography has been shown to be accurate in assessing RV and LV volumes, stroke volumes, and EFs in comparison with CMR imaging.[8]
While several studies in the past decades have focused on the intermediate and long-term follow-up after BPV,[9],[10],[11] independent predictors of long-term results, causes of restenosis,[12] the balloon-annulus ratio, and predictors of pulmonary regurgitation,[13] we did not find much data on the reverse remodeling of the RV after BPV.
In the current study, there was an interaction of several factors in the RV after BPV.
Three months after BPV, the transverse dimensions of the RV were significantly reduced in comparison to the preprocedural dimensions; this may be due to changes in the morphology of the RV lateral wall and the interventricular septum after reduction of the RV pressure, with decreased interventricular septal flattening and bowing.
The changes in the transverse dimensions of the RV also seem to be a result of regression in the RV wall thickness, which decreased significantly after BPV.
The longitudinal dimension of the RV was significantly increased after BPV, suggesting that after relief of the RV pressure overload, the dilated RV regains its normal elongated geometry rather than a globular shape.
The data of our study concur with the results published by Broch et al. in 2016; when they studied the effect of pressure overload reduction on the RV in 26 patients with chronic thromboembolic pulmonary hypertension (CTEPH), they found that the reduction in the RV pressure leads to a significant decrease in the RV end-diastolic diameter, area, and free wall thickness.[14]
Fukui et al. have also demonstrated that in 20 patients with CTEPH, the RV reverse remodeling by magnetic resonance imaging was due to the regression in the RV thickness and marked improvement in the RV mass and interventricular septal bowing.[15]
In our current study, although the RVESVI and RVEDVI volumes by 3D echocardiography increased after BPV, none of these changes were statistically significant, indicating that the immediate and short-term changes after BPV are related more to changes in geometry and regression of hypertrophy than to significant changes in the volumes.
Fukui et al. have demonstrated in their study that the RV end-diastolic and ESV index markedly improved, with concomitant improvements in RV EF.[15]
These results concur with the study of Tsugu et al., which showed a significant improvement in the RV volumes and EF in patient with CTEPH after balloon pulmonary angioplasty.[16]
These results suggest that the mechanism of RV reverse remodeling in patients with CTEPH after balloon pulmonary angioplasty shares partially some features with the changes that occur in patients with congenital PS after BPV.
In the current study, only patients with trivial and mild TR and PR were included, because we intended to study the changes in the RV that result from relief of the elevated RV pressure without the development of other significant valvular lesions that might affect the results.
We used a non-RV-dedicated echocardiography software package in this study; although RV-dedicated software is now becoming available in many echo laboratories, it is still expensive, and there are no available data comparing these packages with the older ones.
Further studies on the RV with the new software are needed; they will give us more insight into the reverse changes that occur in the RV after relief of the pressure overload in patients with congenital pulmonary stenosis.
Kan JS, White RI Jr., Mitchell SE, Gardner TJ. Percutaneous balloon valvuloplasty: A new method for treating congenital pulmonary-valve stenosis. N Engl J Med 1982;307:540-2.
Alyan O, Ozdemir O, Kacmaz F, Topaloglu S, Ozbakir C, Gozu A, et al. Sympathetic overactivity in patients with pulmonary stenosis and improvement after percutaneous balloon valvuloplasty. Ann Noninvasive Electrocardiol 2008;13:257-65.
A basketball court is the playing surface for the game of basketball. Its dimensions can differ based on the level of play, league, budget, and available space. Below we have collected the key measurements and information about basketball court dimensions. So, let's jump in.
Shortcut Table
Basketball Court Dimension According To The League.
Basketball Court Dimensions NBA
Basketball Court Dimensions FIBA
Basketball Court Dimension High School
Basketball Court Dimensions College
Basketball Court Sections
Center Circle:
Three Point Line:
Low Post Area:
Backboard:
Rim: (height, dimension of rim)
Dimension                  NBA        FIBA          High School   College
Length                     94 ft      91 ft 10 in   84 ft         94 ft
Width                      50 ft      49.21 ft      50 ft         50 ft
Free-throw line            15 ft      15.09 ft      15 ft         15 ft
Key width                  16 ft      16.08 ft      12 ft         12 ft
3-point arc                23.75 ft   22.15 ft      19 ft 9 in    20 ft 9 in
No-charge zone arc         4 ft       4.10 ft       4 ft          4 ft
Center circle diameter     12 ft      11.81 ft      12 ft         12 ft
Rim height                 10 ft      10 ft         10 ft         10 ft
The regulation basketball court is 94 feet long and 50 feet wide, but the exact measurements differ by league and level of play.

An NBA court is 94 feet long and 50 feet wide (28.65 m x 15.24 m). The 3-point arc is approximately 23.75 feet (7.24 m) from the rim. The free-throw line is 15 feet (4.57 m) from the backboard line, and the key is 16 feet (4.88 m) wide. The center circle of the NBA court, at 12 feet in diameter, is slightly larger than FIBA's. The no-charge zone arc is 4 feet.
FIBA's court dimensions are slightly smaller than the NBA's. The court is approximately 91 feet 10 inches (28 m) long and 49 feet 2.5 inches (15 m) wide. The center circle is 11.81 feet (3.6 m) in diameter. The free-throw line is 15.09 feet (4.6 m) from the point on the floor directly below the backboard, and the key is 16.08 feet (4.9 m) wide. The no-charge zone arc measures 4.10 feet (1.25 m).
A high school court is 84 feet long and 50 feet wide; some junior high courts measure 74 feet by 42 feet instead. The center circle has a 6-foot outside radius. The foul line is 15 feet from the front of the backboard and 18 feet 10 inches from the baseline. The key is 12 feet wide. The 3-point line is 19 feet 9 inches from the basket, with straight lines extending out 5 feet 3 inches from the baseline. The no-charge semi-circle is 4 feet.
A college basketball court is the same size as the NBA's: 94 feet long and 50 feet wide. The center circle has a 6-foot outside radius and a 2-foot inner circle. The 3-point arc is 20 feet 9 inches from the center of the rim. The free-throw line matches the other levels: 15 feet from the backboard and 18 feet 10 inches from the baseline. The key is 12 feet wide with a 6-foot half-circle at the top. The no-charge zone arc is 4 feet.
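For quick comparisons, the measurements quoted above can be collected in a small script. This is just a convenience sketch using the numbers from this article (the league names and values come from the table above; the 0.3048 feet-to-meters factor is exact):

```python
# Court dimensions quoted in this article, in feet, keyed by league.
COURT_DIMENSIONS_FT = {
    "NBA": {"length": 94.0, "width": 50.0, "three_point_arc": 23.75},
    "FIBA": {"length": 91 + 10 / 12, "width": 49.21, "three_point_arc": 22.15},
    "High School": {"length": 84.0, "width": 50.0, "three_point_arc": 19.75},
    "College": {"length": 94.0, "width": 50.0, "three_point_arc": 20.75},
}

FT_TO_M = 0.3048  # exact conversion factor

def to_meters(feet):
    """Convert a measurement in feet to meters, rounded to 2 decimals."""
    return round(feet * FT_TO_M, 2)

for league, dims in COURT_DIMENSIONS_FT.items():
    length_m = to_meters(dims["length"])
    width_m = to_meters(dims["width"])
    print(f"{league}: {length_m} m x {width_m} m")
```

Running it reproduces the metric figures quoted in the text, e.g. 94 ft x 50 ft converts to 28.65 m x 15.24 m for the NBA.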
The center circle is placed in the middle of the court. Its diameter is about 11.81 ft under FIBA rules and 12 ft everywhere else, including the NBA. Only two players are permitted in this area during the jump ball; each tries to tap the ball to a teammate when the referee throws it into the air.
The three-point line separates the two-point area from the three-point area. Any successful shot from beyond this line counts as three points, but if the shooter's foot touches the line, the basket counts as two points. The distance of the line varies by league: the NBA's arc is 23.75 ft, while FIBA sets it at 22.15 ft.
Perimeter: the perimeter is the area outside the free-throw line and inside the three-point line. Shots successfully made from this area are called perimeter shots. A shot made with the shooter's foot touching the three-point line is also considered a perimeter shot.
Key: the key, also called the free-throw lane or shaded lane, is the painted area beneath the rim. It too varies by league: 16 feet in the NBA, 16.08 feet under FIBA, and often 12 feet in the NCAA, high school, and junior leagues.
The low post area is the area outside of the lane but inside the three-point line. It is the region closest to the basket that is not in the lane, and it has an important role in basketball strategy: skilled low-post players can score many points in a game.
Basketball Hoops: The basketball hoop has two main parts: the backboard and the rim.
All major professional and collegiate leagues (NBA, NCAA, FIBA) use tempered-glass backboards measuring 72 inches wide and 42 inches tall.
The regulation rim is 18 inches in diameter at every level of play (NBA, FIBA, NCAA), except for some recreational games. The rim sits 10 feet above the ground.
What are the Middle School and High school court dimensions?
The middle school basketball court dimensions are 74 feet long and 42 feet wide while the school courts are a little bit bigger with 84 feet long and 50 feet wide.
What is the regulation basketball rim height?
The regulation rim height from floor to rim is 10 feet. All major leagues (NBA, FIBA, NCAA, high school, and junior high) use the same rim height. Kids' leagues, however, often set the rim at 8 or 9 feet, because a 10-foot rim is too high for young children to shoot at.
What is the ball diameter?
Ball size differs for men's, women's, and youth leagues. The official NBA game ball, made by Spalding, measures 9.43 to 9.51 inches in diameter and 29.5 inches in circumference. The NCAA women's game and the WNBA use a slightly smaller ball, 9.07 to 9.23 inches in diameter and 28.5 inches in circumference. Boys' youth leagues use a 28.5-inch ball, and girls' youth basketballs have a 27.5-inch circumference.
How long is a high school game?
High school basketball games consist of four 8-minute quarters, for a total of 32 minutes of game time.
How long is a college basketball game?
College basketball games consist of two 20-minute halves, for a total of 40 minutes of game time.
How many quarters are there in the NBA game, and how long?
There are four quarters in an NBA game, each lasting 12 minutes, so an NBA game has 48 minutes of game time. In the WNBA, games consist of four 10-minute quarters, for a total of 40 minutes of game time.
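The game lengths above are simple products of periods and minutes per period; here is a tiny reference sketch (the league labels and numbers are the ones quoted in this article):

```python
# (periods, minutes per period) for each level quoted in the article.
GAME_FORMATS = {
    "High School": (4, 8),   # four 8-minute quarters
    "College": (2, 20),      # two 20-minute halves
    "NBA": (4, 12),          # four 12-minute quarters
    "WNBA": (4, 10),         # four 10-minute quarters
}

def total_minutes(league):
    """Total regulation game time in minutes for the given league."""
    periods, minutes = GAME_FORMATS[league]
    return periods * minutes

for league in GAME_FORMATS:
    print(league, total_minutes(league), "minutes")
```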
What type of wood is used for NBA courts?
The NBA uses maple for its floor because of its hardness and light color.
How often does the NBA replace their floor?
The NBA typically replaces a court's wood about every 10 years, though a floor is often kept longer if it remains usable.
sportsfeelgoodstories.com
Basketball court – Wikipedia
XO Bracelet is currently on backorder. You may still purchase now, though, and we'll ship as soon as more become available.
This is how we spell Love! When love strikes, our hearts begin to flutter; there is absolutely no other feeling like it. Show your love with the perfect gift. This piece is of the most delicate nature, and the style is perfect for everyday classic-tee wear. Thickness: 0.7 mm / 0.03". Dimensions: 20 mm x 7 mm. Chain style: rolo chain. Chain length: 6.5". Total open length: 18.5 cm.
high_school_physics | 409,278 | 15.339266 | 1 | Journal of High Energy Physics
Topological entanglement entropy in Euclidean AdS3 via surgery
Journal of High Energy Physics, Dec 2017
Zhu-Xi Luo, Hao-Yu Sun
We calculate the topological entanglement entropy (TEE) in Euclidean asymptotic AdS3 spacetime using surgery. The treatment is intrinsically three-dimensional. In the BTZ black hole background, several different bipartitions are applied. For the bipartition along the horizon between two single-sided black holes, TEE is exactly the Bekenstein-Hawking entropy, which supports the ER=EPR conjecture in the Euclidean case. For other bipartitions, we derive an Entangling-Thermal relation for each single-sided black hole, which is of topological origin. After summing over genus-one classical geometries, we compute TEE in the high-temperature regime. In the case where k = 1, we find that TEE is the same as that for the Moonshine double state, given by the maximally-entangled superposition of 194 types of “anyons” in the 3d bulk, labeled by the irreducible representations of the Monster group. We propose this as the bulk analogue of the thermofield double state in the Euclidean spacetime. Comparing the TEEs between thermal AdS3 and BTZ solutions, we discuss the implication of TEE on the Hawking-Page transition in 3d.
https://link.springer.com/content/pdf/10.1007%2FJHEP12%282017%29116.pdf
Zhu-Xi Luo (Department of Physics and Astronomy, University of Utah, 201 James Fletcher Bldg., 115 South 1400 East, Salt Lake City, UT 84112-0830, U.S.A.) and Hao-Yu Sun (Department of Physics, University of California, 266 LeConte Hall, MC 7300, Berkeley, CA 94720, U.S.A.)
Keywords: AdS-CFT Correspondence; Black Holes; Anyons; Topological Field Theories

Contents
1 Introduction
2 Review of relevant components
2.1 "Surgery" and replica trick
2.2 Conformal boundary and H3/Γ
2.3 Solid tori classified as Mc,d
3 Thermal AdS3
3.1 Bipartition into two disks
3.2 Two disjoint thermal AdS3
4 BTZ black hole
4.1 BTZ geometry
4.2 TEE between two one-sided black holes and mutual information
4.3 The entangling-thermal relation
5 Summation over geometries
5.1 TEE for the full partition function
5.2 d_i as quantum dimensions
6 Discussion and outlook
A Bipartition for the full partition function
B TEE from the whole J(q) function

1 Introduction

Topological entanglement entropy (TEE), first introduced in condensed matter physics [1, 2], has been widely used to characterize topological phases. It is the constant subleading term (relative to the area-law term) in the entanglement entropy, only dependent on universal data of the corresponding topological phase. At low energy, a large class of topological phases can be effectively described using Chern-Simons gauge theory with a compact, simple, simply-connected gauge group. When this is the case, TEE can be found using surgery [3] and the replica trick [4] by computing the partition function on certain 3-manifolds. For compact gauge groups, TEE is expressed [3] in terms of modular S matrices of Wess-Zumino-Witten rational conformal field theory (RCFT) on a 2d compact Riemann surface, following the CS/WZW correspondence first described in geometric quantization by [5].
Then we consider two disjoint thermal AdS3's and calculate the TEE between them, which turns out to be the thermal entropy of one thermal AdS3. However, this does not mean any nontrivial entanglement between the two solid tori, and we support this argument by calculating the mutual information between them, which gives zero. We also compute TEEs in an eternal BTZ background. In the Euclidean picture there is only one asymptotic region for the eternal BTZ black hole [17], which corresponds to the gluing of the two asymptotic regions of the two single-sided black holes in the Lorentzian picture. We show that TEE between the two single-sided black holes is equal to the Bekenstein-Hawking entropy of one single-sided black hole. The mutual information between them does not vanish and again equals to the Bekenstein-Hawking entropy, which guarantees the explanation of the result as supporting the ER=EPR conjecture in 3d bulk to be true [18{20]. stating Focusing on one single-sided black hole, we then derive an Entangling-Thermal relation, lim relation is similar to but di erent from the thermal entropy relation [ 24 ] derived from the Ryu-Takayanagi formula [25], in that our result is topological and does not depend on geometrical details. The full modular-invariant genus one partition function of three-dimensional pure gravity is a summation of classical geometries or gravitational instantons, which includes both thermal AdS3 and the BTZ black hole. At high temperatures, the full partition function is dominated by the SL(2; Z) family of black hole solutions, whereas the low-temperature solution is dominated by the thermal AdS3. We compute TEE for the full partition function with a bipartition between the two single-sided black holes in the high temperature regime and again observe ER=EPR explicitly. 
When the Chern-Simons levels k_R = k_L = l/16G = 1, after defining the quantum dimension data on the boundary Monster CFT with orbifolding, we see from the TEE calculation that the black hole geometries correspond to a topological phase in the bulk which contains a maximally-entangled superposition of 194 types of "anyons", labeled by the irreducible representations of the Monster group. This state, dubbed the Moonshine double state, has a property similar to the thermofield double state on the asymptotic boundary, in that the TEE between the anyon pairs is equal to the Bekenstein-Hawking entropy.

The rest of the paper is organized as follows. In section 2 we give a minimal introduction to the knowledge that facilitates the TEE calculation, including the replica trick and Schottky uniformization. In section 3 we show the calculation of TEE in thermal AdS3, which amounts to the computation of the partition function on a genus-n handlebody. We also compute the TEE between two disjoint thermal AdS3's and show that their mutual information vanishes. Section 4 illustrates the TEE calculation for BTZ black holes for several different bipartitions. We discuss the relation with ER=EPR and show that the mutual information between the two single-sided black holes is equal to the Bekenstein-Hawking entropy. We further propose an Entangling-Thermal relation for single-sided black holes. Then in section 5 we demonstrate the TEE of the full modular-invariant partition function after summing over geometries and present the quantum-dimension interpretation. The system is mapped to a superposition of 194 types of "anyons". Comments on the implication of TEE for the Hawking-Page transition and the outlook can be found in section 6.

2 Review of relevant components

In this section we will introduce basic concepts that are essential to understanding the rest of the paper.
2.1 "Surgery" and replica trick

Surgery was originally invented by Milnor [28] to study and classify higher-dimensional manifolds. It refers to a collection of techniques used to produce a new finite-dimensional manifold from an existing one in a controlled way: one cuts out parts of a manifold and replaces them by parts of another manifold, matching up along the cut. As a warm-up, we review the usage of surgery in the entanglement calculation of 2d CFT for a single interval at finite temperature T = 1/β [4]. The interval A lies on an infinitely long line whose thermal density matrix is denoted as ρ. The reduced density matrix of subregion A is then defined as ρ_A = tr_Ā ρ, where the trace tr_Ā over the complement of A only glues together points that are not in A, while an open cut is left along A. The entanglement entropy between A and its complement Ā is then S_A = −tr(ρ_A ln ρ_A). The matrix logarithm is generally hard to compute, so alternatively one applies the replica trick to obtain an equivalent expression, with proper normalization (so that the resultant quantity is 1 when analytically continued to n = 1):

S_A = −lim_{n→1} ∂_n [tr ρ_A^n / (tr ρ_A)^n].   (2.1)
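As a toy illustration of the replica trick just reviewed (a two-qubit check, not part of the paper's gravitational computation), one can verify numerically that −∂_n tr ρ_A^n at n = 1 reproduces the von Neumann entropy:

```python
import numpy as np

# Toy check of the replica trick: S_A = -d/dn [tr rho_A^n] at n = 1,
# for a two-qubit state |psi> = cos(t)|00> + sin(t)|11>.
t = 0.4

# Reduced density matrix of qubit A (diagonal for this state, tr rho_A = 1).
rho_A = np.diag([np.cos(t) ** 2, np.sin(t) ** 2])
lam = np.linalg.eigvalsh(rho_A)

# Von Neumann entropy computed directly.
S_direct = -sum(p * np.log(p) for p in lam if p > 0)

# Replica trick: tr rho_A^n = sum_i lam_i^n, differentiated numerically at n = 1.
def tr_rho_n(n):
    return sum(p ** n for p in lam if p > 0)

eps = 1e-6
S_replica = -(tr_rho_n(1 + eps) - tr_rho_n(1 - eps)) / (2 * eps)

print(abs(S_direct - S_replica) < 1e-6)  # True: the two computations agree
```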
2.2 Conformal boundary and H3= We now introduces the hyperbolic three-space H3 that describes the Euclidean AdS3. It is the 3d analogue of hyperbolic plane, with the standard Poincare-like metric (2.1) (2.2) where y > 0 and z is a complex coordinate. Any 3-manifold M having a genus n Riemann surface n as its conformal boundary that permits a complete metric of constant negative curvature can be constructed using ( ) = S2 1 Schottky uniformization. The idea is to represent the 3-manifold M as the quotient of H by a Kleinian group [30], which is a discrete subgroup of SL(2; C) as well as a discrete group of conformal automorphisms of n. The conformal boundary of H3 is a sphere at in nity, S2 , on which acts discretely, except for a limit set of accumulation points of denoted by ( ). The complement 1 ( ) is called the domain of discontinuity. Then the 3-manifold M has boundary ( )= , a well-de ned quotient. In particular, when M is a handlebody, reduces to a Schottky group, which is freely nitely generated by the loxodromic elements 1; : : : ; n 2 SL(2; C), that acts on S12 as a fractional linear transformation. Among these generators, there are 3n 3 independent complex parameters, which are coordinates on the Schottky space, a covering space of the complex moduli of the Riemann surface. Each 2 is completely characterized by its xed points and its multiplier q . An eigenvalue q is de ned through the unique conjugation of jq j < 1. More explicitly, denoting ; as the xed points of , one has under SL(2; C): z 7! q z with (z) (z) = q z z Within the Schottky group , there are primitive conjugacy classes h 1; : : : ; ni of , with \primitive" meaning that is not a positive power of any other element in . 2.3 Solid tori classi ed as Mc;d The physical spacetimes we are concerned about in this paper are all solid tori, i.e. the n = 1 case in the previous subsection. They have toroidal conformal boundaries, so the Schottky group actions is relatively simple. 
After these topological constructions, we can further classify them into the Mc;d family according to their geometries. This family rst appeared in the discussion of classical gravitational instantons which dominate the path integral in ref. [31], and is further explained in refs. [14] and [32]. boundaries T 2 = be isomorphic to Z In this case, ( ) composes of the north and south poles of S2 . Since solid tori have 1 ( )= , 1( ( )) must be a subgroup of 1(T 2), so 1( ( )) can only Z, Z, or the trivial group. When 1( ( )) = Z Z, ( ) has to be a Riemann surface of genus 1, which cannot be isomorphic to an open subset of S2 . When 1( ( )) is trivial, ( ) is a simply-connected universal cover of T 2, so that 1 has to be Z Z. It is easily seen from (2.2) that if = Z Z, then although H3=(Z Z) has a toroidal boundary at y = 0, there is a cusp at y ! 1, whose sub-Plackian length scale invalidates semi-classical treatments. The only possibility is thus 1( ( )) = Z, where can be either Z or Z Zn. The latter yields M to be a Zn-orbifold, indicating the existence of massive particles, which are not allowed in pure gravity. To avoid undesirable geometries such as cusps and orbifolds in the contributions to path integral [10, 14], we restrict our Schottky group to be = Z, { 5 { generated by the matrix W = 0 q where jqj < 1. The boundary torus is thus obtained by quotiening the complex z-plane without the origin by Z. Rede ne z = e2 i!, so ! is de ned up to ! ! ! + 1, and W acts by ! ! ! + ln q=2 i. Hence, the complex modulus of the torus is ln q=2 i, de ned up to a PSL(2; Z) Mobius transformation (a + b)=(c + d), where integers a; b; c; d satisfy ad bc = 1. When constructing a solid torus from its boundary torus, is de ned only up to + Z by a choice of solid lling, completely determined by the pair (c; d) of relatively prime integers. This is because the ip of signs (a; b; c; d) ! 
( a; b; c; d) does not a ect q, and once (c; d) are given, (a; b) can be uniquely determined by ad bc = 1 up to a shift (a; b) ! (a; b) + t(c; d); t 2 Z which leaves q una ected. We call these solid tori Mc;d's, and any Mc;d can be obtained from M0;1 via a modular transformation on . Physically, M0;1 is the Euclidean thermal AdS3 and M1;0 is the traditional Euclidean BTZ black hole obtained from Wick rotating the original metric in [8]. Excluding M0;1, Mc;d's are collectively called the SL(2; Z) family of Euclidean black holes, to be discussed in section 5. 3 The Euclidean thermal AdS3 has the topology of a solid torus M0;1, whose non-contractible loop is parametrized by the Euclidean time. The constant time slice is thus a disk D2 with a boundary S1, perpendicular to the non-contractible loop. 3.1 Bipartition into two disks We bipartite the disk into upper and lower subregions A and B, both having the topology of a disk. The solid torus is then turned into a sliced bagel as in gure 3. Boundary of each subregion contains an interval lying on the S1. In the following we will denote the ratio between the length of one interval and the circumference of the boundary S1 to be a, satisfying 0 a 1. Except for the symmetric case where a = 1=2 and the two subregions are equivalent, generally SA 6= SB. As introduced in section 2, one then glues each of n copies of subregion B separately while gluing the n copies of subregion A together. The resultant 3-manifold is an nhandlebody, which is a lled genus-n Riemann surface, shown in gure 3. (In the special case of n = 1, the handlebody reduces to a solid torus.) With a proper normalization, the entanglement entropy corresponding to subregion A is then takes the form CFT [12],1 Contribution to the path integral around a classical saddle point for an n-handlebody where k i+1Si(n) is the i-loop free energy of boundary graviton excitations. 
At tree level (i = 0), Ztree(n-handlebody) can be derived assuming the dual CFT is an extremal Z(n) = exp kS0(n) + X k i+1Si(n) ; " # i Ztree(n) = Y 1 Y with the product running over primitive conjugacy classes of , q being the multiplier of introduced in section 2, and k = l=16G. In general the two products are hard to evaluate. However, in the low-temperature regime when thermal AdS3 dominates, the leading contribution to the in nite product over m comes from m = 1. Furthermore, the product over is dominated by a single-letter contribution [15, 16], q j j 1 q1j2n. Combining these, we obtain with q1 a function of n and a, having the form Q prim. j 1 Ztree(n) Y prim. j 1 q1j24k = j1 q1j48nk; q1 = sin2( a) n2 sin2( a=n) e 2 : At one-loop (i = 1) level, the general expression for Zloop(n-handlebody) can be derived from either the boundary extremal CFT [12, 13] or the bulk heat kernel method [10]. They both depend on the Schottky parametrization of the boundary genus n-Riemann surface. The result is Zloop(n) = 1 Y Y 1This partition function is motivated by the Liouville action of a single free boson on a handlebody, and is conjectured in [12] as a weight 12k modular form to avoid singularities of special functions. { 7 { (3.2) (3.3) (3.4) (3.5) (3.6) in the low-temperature regime q1 into (3.1), we obtain ST AdS (a) The terms containing k come from tree-level, while others are one-loop contributions. The entire expression approaches to zero very fast in the low-temperature regime ! 1 for any k. The dependence of the above result on a distinguishes itself from the original de nition [ 1, 2 ] of TEE, which is a universal constant. We note that a enters as the boundary condition on the constant time slice, and has nothing to do with the leading area-law term in usual expressions of entanglement entropies. When subregion A is \nothing", i.e. a ! 0, a cot( a) ! 1, thus the TEE between subregions A and B vanishes. When A is instead \everything", i.e. a ! 1, a cot( a) ! 
1, balanced by the smaller e 2 1 at low temperatures. We observe that apart from the a ! 0 case, the TEE for thermal AdS3 is always negative. Another important case is when a = 1=2 so that the two subregions are symmetric. In this case we have ST AdS a = 1 2 Now we take two non-interacting thermal AdS3's as the whole system, represented by two disjoint solid tori M0;1. There are two non-interacting, non-entangled, identical CFTs living on their asymptotic boundaries. One would naively expect the TEE between these two solid tori to be zero, which is not really the case. To calculate the entanglement entropy between these two solid tori, one can simply use In the low temperatures, we can approximate q = e2 i = e 2 as a small number and thus at leading order Z0;1( ) q 2k(1 After straightforward calculations we obtain We have used the shorthand notation Z0;1( ) = Z0;1( ; ) to take into account both holomorphic and anti-holomorphic sectors. The partition function Z0;1(n ) comes from gluing n copies of solid torus A, which is a new solid torus with modular parameter n . Meanwhile, Z0;1( )n comes from gluing individually the n copies of solid torus B. We can simply multiply the contributions from A and B together because they are disjoint. Then we can plug these into the expression for the solid torus partition function, i.e. the 1-handlebody result from (3.3) and (3.5), Z0;1( ) = jqj2k Y j 1 qmj 2: 1 m=2 ST AdS This contains only the loop contribution, i.e. the semi-classical result is zero. For comparison, we also calculate the canonical ensemble thermal entropy of a single thermal AdS3 at temperature 1: STthAerdmSal = ln Z(1-handlebody) has the low-temperature form Z(1-handlebody) 1 @Z(1-handlebody) : It STthAerdmSal mal entropy of a single thermal AdS3 is the same as the TEE between two independent thermal AdS3's. 
This does not imply that there are nontrivial topological entanglement between the two copies of thermal AdS3, but simply reveals the insu ciency of using entanglement entropy as an entanglement measure at nite temperatures. For example, consider two general subsystems A and B with thermal density matrices A and B and combine them into a separable system, = A B: These two subregions are thus obviously non-entangled. But if one attempts to calculate the entanglement entropy between A and B by tracing over B, one can still get an arbitrary result depending on the details of A. If we choose state, then the entanglement entropy will be zero. If instead we choose A = j ih j where j i is some pure the proper normalized identity matrix, then the entanglement entropy will be ln(dim(HA)). So depending on the choice of A, one can obtain any value of the entanglement entropy between these minimum and maximum values. This shortcoming is due to the fact that now the entanglement entropy calculation involves undesired classical correlations in mixed 1 A = dim(HA) 1 as To address this issue, we look at the topological mutual information between the two states. solid tori, eternal BTZ black hole. 4 BTZ black hole black hole. 4.1 BTZ geometry I(A; B) = S(A) + S(B) S(A [ B); (3.14) so that the thermal correlations can be canceled. Following similar replica trick calculations, one easily obtain S(A [ B) = 2S(A) = 2S(B), thus the mutual information vanishes and there exists no nontrivial topological entanglement between the two disjoint thermal AdS3's. We will observe in the next section that this statement no longer holds true for an We will explore in this section the topological entanglement in the bulk of Euclidean BTZ It has been speculated for a long time that the 3d gravity is rather trivial because there is no gravitational wave besides local uctuations. 
However, in 1992 the authors of [8] proposed a new type of AdS-Schwarzschild black hole, with Lorentzian metric

ds^2 = -N_L^2 \, dt_L^2 + N_L^{-2} \, dr^2 + r^2 (d\phi + N_L^\phi \, dt_L)^2,   (4.1)

where the lapse and shift functions have the form

N_L^2 = -8 G M_L + \frac{r^2}{l^2} + \frac{16 G^2 J_L^2}{r^2}, \qquad N_L^\phi = -\frac{4 G J_L}{r^2}.   (4.2)

G is the three-dimensional Newton constant, l the curvature radius of AdS3, and M_L, J_L are the mass and angular momentum of the black hole, respectively. The outer and inner horizons are defined by

r_\pm^2 = 4 G M_L l^2 \left( 1 \pm \sqrt{1 - \frac{J_L^2}{M_L^2 l^2}} \right).

Letting t_L = i t and J_L = i J, the Wick rotation gives

ds^2 = N^2 \, dt^2 + N^{-2} \, dr^2 + r^2 (d\phi + N^\phi \, dt)^2,   (4.3)

with N^2 = -8 G M + \frac{r^2}{l^2} - \frac{16 G^2 J^2}{r^2} and N^\phi = -\frac{4 G J}{r^2}. The horizons are now given by

r_\pm^2 = 4 G M l^2 \left( 1 \pm \sqrt{1 + \frac{J^2}{M^2 l^2}} \right).

The Euclidean BTZ black hole is locally isometric to the hyperbolic three-space H^3 and is globally described by H^3/\Gamma with \Gamma \cong \mathbb{Z}. The topology is a solid torus, and one can make this explicit through the coordinate transformations (x, y, z) of [34], eq. (4.4), which bring the metric (4.3) to the upper half-space model of H^3 with z > 0. Further changing to the spherical coordinates

(x, y, z) = (R \cos\theta \cos\chi, \; R \sin\theta \cos\chi, \; R \sin\chi),   (4.5)

we finally arrive at

ds^2 = \frac{l^2}{\sin^2\chi} \left( \frac{dR^2}{R^2} + \cos^2\chi \, d\theta^2 + d\chi^2 \right).   (4.6)

To ensure that the above coordinate transformation is non-singular (contains no conical singularities) at the z-axis r = r_+, we must require periodicity in the arguments of the trigonometric functions. That is, we must identify

(\phi, t) \sim (\phi + \Theta, \, t + \beta),   (4.7)

with

\Theta = \frac{2\pi l |r_-|}{r_+^2 + |r_-|^2}, \qquad \beta = \frac{2\pi r_+ l^2}{r_+^2 + |r_-|^2}.   (4.8)

We recombine the real pair (\Theta, \beta) into a single complex modular parameter \tau, which is the complex modular parameter of the boundary torus.

(Figure 4: the Euclidean BTZ black hole is a solid torus. The horizon is the blue dashed line threading the central cord of the solid torus, and the Euclidean time runs in the meridian direction.)
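As a consistency check on the Wick rotation, the Euclidean lapse and the horizon formula above can be verified to be compatible numerically; the sample parameter values below are arbitrary:

```python
import math

def N2_euclidean(r, G, M, J, l):
    """Euclidean BTZ lapse squared: N^2 = -8GM + r^2/l^2 - 16 G^2 J^2 / r^2."""
    return -8.0 * G * M + r**2 / l**2 - 16.0 * G**2 * J**2 / r**2

def r_plus_squared(G, M, J, l):
    """Outer horizon: r_+^2 = 4 G M l^2 * (1 + sqrt(1 + J^2 / (M^2 l^2)))."""
    return 4.0 * G * M * l**2 * (1.0 + math.sqrt(1.0 + J**2 / (M**2 * l**2)))

G, M, J, l = 0.125, 2.0, 0.5, 1.0   # arbitrary sample values
r_plus = math.sqrt(r_plus_squared(G, M, J, l))
# N^2 vanishes at r = r_+: r_+^2 is a root of r^4/l^2 - 8GM r^2 - 16 G^2 J^2 = 0.
```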
In terms of metric (4.6), this corresponds to the global identifications

(R, \theta, \chi) \sim (R \, e^{2\pi r_+/l}, \; \theta + 2\pi |r_-|/l, \; \chi).   (4.9)

A fundamental region for (4.6) is the filling of the slice between inner and outer hemispheres centered at the origin, having radii R = 1 and R = e^{2\pi r_+/l} respectively, with an opening 2\pi |r_-|/l (or 2\pi if r_- = 0) in azimuthal angle, as shown by figure 4; the two hemispheres are identified along the radial lines with a twist of angle 2\pi |r_-|/l (or 2\pi if r_- = 0). Hence the segment on the z-axis between the two hemispheres corresponds to the outer horizon, and is mapped to the central cord of the solid torus at \chi = \pi/2 (the boundary torus is at \chi = 0). For convenience, in the rest of the paper, unless stated otherwise, we only focus on the non-rotating Euclidean BTZ black hole, so that \tau is pure imaginary and r_- = 0.

4.2 TEE between two one-sided black holes and mutual information

Following refs. [18-20], an eternal Lorentzian AdS black hole has two asymptotic regions and can be viewed as two black holes connected through a non-traversable wormhole. It is also suggested from the dual CFT perspective that the entanglement entropy between the CFTs living on the two asymptotic boundaries is equal to the thermal entropy of one CFT. Motivated by this, we are interested in calculating the TEE between the two single-sided black holes in the bulk. However, the Euclidean BTZ metrics (4.3) and (4.6) only cover the spacetime outside the horizon of one single-sided black hole. Everything inside the horizon is hidden, and so is the other single-sided black hole. In order to make the computation of the TEE between two single-sided black holes possible, we take an alternative view of the solid torus M_{1,0}, as in figure 5. In the left panel, we sketch the constant time slice of the right single-sided black hole, called R. It is the constant time slice of metric (4.6), with annulus topology, whose inner boundary is identified with the horizon.
In the right panel, we glue the two constant time slices for black holes L and R along the horizon.

(Figure 5. Left: the constant time slice of the single-sided black hole R; the inner boundary in blue denotes the horizon, and time evolution of this slice corresponds to rotating the angle coordinate around the inner blue boundary. Right: gluing the constant time slices of single-sided black holes R (light grey) and L (dark grey) along the horizon (blue line) in the middle.)

(Figure 6. Left: the parts \tilde{A} and \tilde{B} in spacetime are respectively formed by rotating the spatial subregions A and B by \pi. Right: the graphical representation of \tilde{A}, with a wedge missing in spacetime subregion \tilde{A}.)

Then comes the most important step: we fold the annulus of black hole L along the horizon, so that it coincides with the annulus of black hole R. To obtain the full spacetime geometry, one rotates the constant time slice of L about the horizon counterclockwise by \pi, while rotating the constant time slice of R about the horizon clockwise by \pi. Namely, the two annuli meet twice: once at angle 0, the other at \pi. The resultant manifold is a solid torus, the same as the M_{1,0} introduced before. Hence one can view this solid torus either as one single-sided black hole R with modular parameter \tau = i\beta, or as two single-sided black holes L and R, each contributing \tau_0 = i\beta/2.

It might concern some readers that the CFTs living on the asymptotic boundaries of L and R in the Lorentzian picture are now glued together. We note that this is a feature of the Euclidean picture: due to the opposite directions of evolution, we have CFT_L(t) = CFT_R(-t). At t = 0, these obviously coincide. Then at t = \beta/2, they give CFT_L(t = \beta/2) = CFT_R(t = -\beta/2). Using the periodicity of Euclidean time, -\beta/2 \equiv -\beta/2 + \beta = \beta/2, we arrive at CFT_L(t = \beta/2) = CFT_R(t = \beta/2); thus they coincide again and the two CFTs are glued together. This is consistent with the fact that in the Euclidean signature there should only be one asymptotic region, as shown in [17].
Now we can calculate the TEE between the constant time slices of L and R, which we denote as A and B. Importantly, since in general the result can be time dependent, we specify the cut to be done at t = 0. As shown in the left panel of figure 6, each subregion contributes \tau_0 to the modular parameter of the solid torus. We sketch one copy of \tilde{A} in the right panel.

(Figure 7. Left: the cutaway wedge, with faces \tilde{A}_1, \tilde{A}_2, \tilde{B}_1, \tilde{B}_2, runs along the longitude (non-contractible loop) of the solid torus, with its vertex on the horizon. Right: graphical representation of tr \rho_A^n; the disk is perpendicular to the horizon.)

To find S(A), we need to calculate the partition function of the 3-manifold that corresponds to tr \rho_A^n. We first enlarge the missing wedge in the right panel of figure 6 and shrink the size of \tilde{A}, \tilde{B}. To add the second copy of \tilde{A}, one should glue \tilde{A}_1 to \tilde{B}_2, with \tilde{B}_2 glued with \tilde{A}_2, as shown in figure 7. Note that this differs from the usual way of doing replica tricks, where \tilde{A}_1 is always glued to \tilde{A}_2. This is again a result of the opposite directions of time evolution for L and R: the B spatial slice at t = \beta/2 should always be identified with the A spatial slice at t = \beta/2. One can then follow this procedure and glue n copies of \tilde{A}. The resultant 3-manifold is a solid torus with modular parameter 2n\tau_0, since each copy of \tilde{A} contributes \tau_0 and the same goes for \tilde{B}. The replica trick then gives

S(A) = -\partial_n \ln \frac{Z_{1,0}(2n\tau_0)}{Z_{1,0}(2\tau_0)^n} \Big|_{n=1}.   (4.10)

The partition function Z_{1,0}(\tau) can be obtained from that of the thermal AdS3 by the modular transformation \tau \to -1/\tau,

Z_{1,0}(\tau) = Z_{0,1}(-1/\tau).   (4.11)

Evaluating (4.10) then yields S_{BTZ}(A), eq. (4.12), whose first term comes from tree level and is identified with the Bekenstein-Hawking entropy. The above expression matches the thermal entropy of one single-sided black hole at one loop,

S_{BTZ}^{thermal}(A) = (1 - \beta\partial_\beta) \ln Z_{1,0}(\tau) = S_{BTZ}(A).   (4.13)

Remarkably, this equation holds true regardless of Z_{1,0}(\tau)'s specific form.

It might be confusing at first that the Bekenstein-Hawking entropy, usually viewed as an area-law term, appears in the calculation of topological entanglement entropy.

(Figure 8. The bipartition of the constant time slice into A \cup B and a small region C; the thin layer surrounding the lower half circle corresponds to \tilde{C}. Right: one copy of \tilde{A}; the picture shows the disk perpendicular to the horizon.)

To make it explicit that the results above are TEEs instead of the full entanglement entropy, we can alternatively use the Z_{1,0}(\tau) derived from the supersymmetric localization method in Chern-Simons theory on 3-manifolds with boundaries [22]. Following the replica trick, we find exactly the same expression.^2 Since Chern-Simons theory is a topological quantum field theory, the resulting entanglement entropy is a TEE. The horizon area r_+ should be understood as a topological quantum number of the theory.

In the calculation of the TEE between two disjoint thermal AdS3's in section 3, we saw that a nonzero TEE is not enough to guarantee true nontrivial entanglement between two subregions, because of the possible contribution from classical correlations. So we resort to the mutual information I(A; B) between the two single-sided black holes. We then need to find S(A \cup B). Since in the Euclidean picture we are no longer in a pure state, it is not necessary that S(A \cup B) vanish, although A \cup B constitutes the entire system. We start by bipartitioning the system into A \cup B and C at t = 0, as shown in figure 8. C is a very small region whose area will finally be taken to zero. The glued manifold is a solid torus with modular parameter 2n\tau_0, of exactly the same form as that in figure 6. The contributions from C vanish because \tilde{C} is still contractible in the glued manifold and we can safely take its area to zero. Plugging (4.11) into the replica trick formula (4.10), we again obtain

S_{BTZ}(A \cup B) = S_{BTZ}^{thermal}(A).   (4.14)

So indeed the TEE of A \cup B does not vanish.
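In the conventions of section 3 (\tau = i\beta, q = e^{-2\pi\beta}, so that the S-transform acts as \beta \to 1/\beta), the chain Z_{1,0}(\beta) = Z_{0,1}(1/\beta) followed by S = (1 - \beta\partial_\beta)\ln Z can be evaluated numerically. The sketch below, with k = 1, recovers the tree-level Bekenstein-Hawking behavior 8\pi k/\beta at high temperature:

```python
import math

def ln_Z_thermal_ads3(beta, k=1, mmax=400):
    """Thermal AdS3: Z_{0,1} = |q|^{-2k} prod_{m>=2} |1 - q^m|^{-2}, q = e^{-2*pi*beta}."""
    q = math.exp(-2.0 * math.pi * beta)
    return 4.0 * math.pi * k * beta - 2.0 * sum(math.log(1.0 - q**m) for m in range(2, mmax))

def ln_Z_btz(beta, k=1):
    """BTZ via the modular S-transform: Z_{1,0}(beta) = Z_{0,1}(1/beta)."""
    return ln_Z_thermal_ads3(1.0 / beta, k=k)

def entropy(lnZ, beta, h=1e-6):
    """S = (1 - beta d/dbeta) ln Z; the replica trick yields the same quantity,
    since gluing n copies simply sends beta -> n*beta."""
    dlnZ = (lnZ(beta + h) - lnZ(beta - h)) / (2.0 * h)
    return lnZ(beta) - beta * dlnZ

beta, k = 0.5, 1
S_btz = entropy(ln_Z_btz, beta)
S_bekenstein_hawking = 8.0 * math.pi * k / beta   # tree-level piece
```

At \beta = 0.5 the loop corrections are doubly exponentially suppressed and the two numbers agree to many digits; at lower temperatures the loop terms become visible.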
Combining these, we find that the mutual information is the same as the Bekenstein-Hawking entropy for a single-sided black hole:

I(A; B) = S_{BTZ}(A) + S_{BTZ}(B) - S_{BTZ}(A \cup B) = S_{BTZ}^{thermal}(A).   (4.15)

Note that, had we naively taken the full partition function of the eternal BTZ black hole to be Z_{1,0}(\tau)^2, namely treating the two single-sided black holes as independent and non-entangled so that their partition functions can be multiplied together, then S_{BTZ}(A \cup B) would have been twice S_{BTZ}^{thermal}(A) and the mutual information would have vanished. So the non-vanishing of the mutual information indicates nontrivial entanglement between L and R.

^2 The supersymmetric localization method involves boundary fermions. We need to remove the contribution from the boundary fermions to match the partition function (4.11).

(Figure 9. The constant time slices that lead to Z_{1,0}(n\tau) after gluing. The gray area corresponds to subregion \hat{A}, and the width of the annulus \hat{B} will be taken to zero.)

There is still another surgery that can yield S_{BTZ}^{thermal}(A): (1) restrict to the right single-sided black hole R as the full spacetime, which is a solid torus with modular parameter \tau, obtained from rotating its constant time slice by 2\pi; (2) thicken the horizon S^1 to a narrow annulus inside the spatial slice of the solid torus R; (3) calculate the TEE between the thin solid torus generated by the thickened horizon, denoted by \hat{B}, and the rest, denoted by \hat{A}; (4) finally, take the limit in which the thickness of the solid torus \hat{B} goes to zero. The bipartition of the constant time slice in this case is sketched in figure 9. In this bipartition, the obtained TEE is between the exterior and the interior of the horizon, rather than between the two single-sided black holes. The glued manifold is again represented by Z_{1,0}(n\tau), and the replica trick yields the Bekenstein-Hawking entropy.
We have thus come to the conclusion that the following are equal: (a) the TEE between the two single-sided black holes, (b) the TEE between the exterior and the interior of the horizon for a single-sided black hole, (c) the thermal entropy of one single-sided black hole, and (d) the mutual information between the two single-sided black holes. The equivalence of (a) and (c) supports the ER=EPR conjecture [18-20] in the Euclidean AdS3 case. The equivalence between (b) and (c) shows explicitly, from the bulk perspective, that one should view the thermal entropy of a black hole as entanglement entropy (see for example ref. [21]).

In general, for a rotating BTZ black hole, although there is an inner horizon at r = r_-, the z-axis still represents the outer horizon at r = r_+ in the spherical coordinates (4.5) for the upper half-space H^3. Hence the replica trick described earlier still applies to a rotating BTZ black hole with modular parameter \tau = \Theta + i\beta, where \Theta is the angular potential, the conjugate variable to angular momentum. Geometrically, we just need to put r = |r_-| "inside" the inner edge of the constant time slice, so that it is not observable.^3

^3 A similar situation will be described in appendix A.

4.3 The entangling-thermal relation

In ref. [24], the authors showed a relation for a single-sided BTZ black hole between the entanglement entropy of the CFT on the conformal boundary and the Bekenstein-Hawking entropy:

\lim_{l \to 0} \left( S_A(L - l) - S_A(l) \right) = S_{thermal},   (4.16)

where S_A(L - l) is the entanglement entropy of a subregion A of the boundary 1+1d CFT with interval length L - l, and S_{thermal} is the thermal entropy in the bulk. In this section, we propose another similar but different entangling-thermal relation.

(Figure 10: bipartition of the constant time slice for a single-sided black hole; the separation between the two subregions lies at the ends of the grey region.)

(Figure 11. Middle: the front view of tr \rho_A for the "ring" configuration. Right: the side view of tr \rho_A^4 inside the "ring" of the first tr \rho_A.)

We first consider the bipartition of the constant time slice as in figure 10 for a single-sided black hole.
We put the separation between the two subregions away from the horizon, so that region B generates the white contractible region in the left panel. The right panel is equivalent to the left one, and will be convenient for visualizing the gluing. We will call the glued manifold the "ring", because after time evolution region B = \bar{A} (the complement) will glue to itself and form a ring around the solid torus, as shown in the middle panel of figure 11, where the small white part corresponds to the unglued part in the left panel. Hence, a single copy is the middle panel: away from the ring, the open wedge running around the longitude is the same as that in the left panel of figure 7. Naively it seems that one is unable to glue n copies of the above geometry, since the ring blocks a portion of the wedge's opening. However, there does exist a unique embedding of the n copies into R^3, up to homotopy equivalence, as shown in the right panel of figure 11: one first stretches the grey region in the left panel to the blue area in the right panel, and glues a second, light grey copy so that its t = 0 edge is glued to the t = \beta edge of the blue copy; one then repeats this process for the green and yellow regions and so on, still preserving the replica symmetry. Notice that the rings from the grey, green and yellow copies (color online) are not in the plane of the paper, but on parallel planes above or below it. Then one puts the rings from each copy side by side on the boundary torus, which requires each ring to be infinitesimally thin since n is arbitrarily large. The resultant manifold is again a solid torus of modular parameter n\tau. So the replica trick calculation follows the previous equation (4.10) and gives

\lim_{Area(\bar{A}) \to 0} S(A) = S_{BTZ}^{thermal}.   (4.17)

For completeness, we note that figure 11 has another limiting case, where the width of the ring covers almost the entire longitudinal direction of the solid torus, and its depth occupies a considerable portion of the radial direction, as shown in figure 12.
Now, in order to put the rings side by side upon gluing n copies, we need to stretch the non-contractible direction by a factor of n to accommodate them, so that the resultant manifold is approximately a solid torus with modular parameter \tau/n. Plugging Z_{1,0}(\tau/n) into (4.10) gives, in this limiting case, a purely one-loop series of exponentially suppressed terms, eq. (4.18), which vanishes at high temperature. Note that there is no k-dependence here, meaning we can observe the one-loop effect directly.

Now we consider the complementary bipartition to figure 11, as shown in figure 13, where the grey region is generated by B in figure 10. The gluing here is simple: since the unglued cut in the grey region \tilde{A} is parallel to the longitude, the n copies should be arranged around a virtual axis tangent to the annulus. The resultant manifold is a vertical solid torus. One can calculate the corresponding TEE following a procedure parallel to the thermal AdS3 calculation in section 3. The partition function of the glued manifold, eq. (4.19), is the product of a tree-level factor and a one-loop factor, the latter a product over primitive conjugacy classes of \Gamma. In the high-temperature regime this expression can be simplified by the single-letter word approximation: the product over primitive conjugacy classes is dominated by the shortest word, with parameter q'_1, reducing Z(n; q'_1) to powers of |1 - q'_1| and |1 - q'^2_1|, eq. (4.20). Here q'_1 can be obtained from q_1 in (3.5) using a modular transformation,

q'_1(n, a) = \frac{\sinh^2(\pi a/\beta)}{n^2 \sinh^2(\pi a/n\beta)} \, e^{-2\pi/\beta}.   (4.21)

The replica trick then gives

S(A) = -\frac{d}{dn} \frac{Z(n; q'_1(n))}{Z(1; q'_1(1))^n} \Big|_{n=1}.   (4.22)

Explicitly, this is a series in e^{-2\pi/\beta}, eq. (4.24): the leading coefficient is proportional to 96k a, the subleading one to 8(12k - 1), with corrections of order e^{-6\pi/\beta}. We now take the limit a \to 0, because this corresponds to the limit where the grey region in figure 13 goes to zero. The resulting S(A), eq. (4.25), vanishes at high temperature; its infinitesimally negative value is a quirk due to the approximation on the q'_\gamma's.
Combining equations (4.17) and (4.25), one obtains the entangling-thermal relation:

\lim_{Area(\bar{A}) \to 0} \left[ S(A) - S(\bar{A}) \right] = S_{BTZ}^{thermal}.

We give this relation a different name from the two-dimensional thermal entropy relation in the dual CFT calculation (4.16), because it is not merely a generalization of the latter to one higher dimension. The thermal entropy relation (4.16) relates the entanglement entropy of the dual CFT to the thermal entropy of the black hole in the bulk, while the entangling-thermal relation connects the topological entanglement entropy and the thermal entropy both in the bulk gravitational theory. Additionally, the explanation for the thermal entropy relation relies on geometrical detail (minimal surfaces) in the bulk [24], while the entangling-thermal relation is of topological origin. In the first bipartition, in figure 11, subregion A sees the non-contractible loop and the nontrivial flux threading the hole inside the annulus. In the second bipartition, in figure 13, subregion A does not completely surround the non-contractible circle, i.e. the horizon. The difference between them thus characterizes the non-contractible loop.

Finally, we remark that there are several cases in which the gluing procedures are not available. The no-gluing criterion is that, as long as the boundary of a subregion is contractible and not anchored on the boundary S^1, the spatial slice is not n-glueable. Also, a single copy in which the glued region B completely surrounds region A except for the inner edge is not n-glueable.

5 Summation over geometries

The partition functions of thermal AdS3, Z_{0,1}(\tau), and of the BTZ black hole, Z_{1,0}(\tau), are not modular-invariant by themselves. To obtain the full modular-invariant partition function, one needs to sum Z_{c,d} over the pair of parameters (c, d).
This can alternatively be written as a summation over modular transformations of Z_{0,1}:

Z(\tau) = \sum_{\Gamma_\infty \backslash SL(2,\mathbb{Z})} Z_{c,d}(\tau) = \sum_{\Gamma_\infty \backslash SL(2,\mathbb{Z})} Z_{0,1}\!\left( \frac{a\tau + b}{c\tau + d} \right).   (5.1)

The quotient by \Gamma_\infty is taken because q and the Schottky parametrization are invariant under T: \tau \to \tau + 1, and the summation over the coset is to make the full partition function invariant under both T: \tau \to \tau + 1 and S: \tau \to -1/\tau. Note that in the previous sections we used Z_{c,d}(\tau) = Z_{c,d}(\tau, \bar\tau) as shorthand for the product of the holomorphic and anti-holomorphic pieces, whereas in this section we return to the notation in which Z_{c,d}(\tau) describes the holomorphic part of the partition function only. The anti-holomorphic part can easily be found as \bar{Z}(\bar\tau), with Z(\tau, \bar\tau) = Z(\tau)\bar{Z}(\bar\tau).

A modular-invariant partition function of the form (5.1) is unique for the most negative cosmological constant (k = 1) [11, 35], and was investigated in more general situations (k > 1) in [14]. An important theorem due to [35] is that the moduli space of Riemann surfaces of genus one is itself a Riemann surface of genus zero, parametrized by the j-function. Consequently, any modular-invariant function can be written as a function of it. The J-function is defined as

J(\tau) \equiv \frac{1728 \, g_2(\tau)^3}{g_2(\tau)^3 - 27 g_3(\tau)^2} - 744,   (5.2)

where q = e^{2\pi i\tau} as usual, g_2(\tau) \equiv 60 G_4(\tau) and g_3(\tau) \equiv 140 G_6(\tau), and the G_{2k} are holomorphic Eisenstein series of weight 2k, k \geq 2, defined as G_{2k} \equiv \sum_{(m,n) \neq (0,0)} (m + n\tau)^{-2k}.

Since the pole of the full partition function Z(q) at q = 0 is of order k (due to the holomorphic tree-level contribution of thermal AdS3, q^{-k}), Z must be a polynomial in J of degree k,

Z(q) = \sum_{j=0}^{k} a_j J^j = \sum_n c(k, n) \, q^n.

For k = 1 we simply have Z(q) = J(q). The coefficients of J(q) in front of q^n were known to be intimately related to the dimensions of the irreducible representations of the monster group M, the largest sporadic group. It has 2^{46} \cdot 3^{20} \cdot 5^9 \cdot 7^6 \cdot 11^2 \cdot 13^3 \cdot 17 \cdot 19 \cdot 23 \cdot 29 \cdot 31 \cdot 41 \cdot 47 \cdot 59 \cdot 71 \approx 8 \times 10^{53} group elements and 194 conjugacy classes.
Dimensions of the irreducible representations of the monster group can be found in the first column of its character table [36]: 1, 196883, 21296876, 842609326, 18538750076, 19360062527, .... After John McKay's observation that 196884 = 1 + 196883, Thompson further noticed [37] that the higher coefficients decompose similarly, e.g.

21493760 = 1 + 196883 + 21296876.   (5.3)

This phenomenon was dubbed "monstrous moonshine" by Conway and Norton [38], and later proved by Borcherds [39].

Ref. [11] conjectures that for cosmological constant k \equiv l/16G \in \mathbb{Z}, quantum 3d Euclidean pure gravity including BTZ black holes can be completely described by a rational CFT (RCFT) called the extremal self-dual CFT (ECFT), with central charge (c_L, c_R) = (24k, 24k), which factorizes into a holomorphic and an anti-holomorphic piece. An ECFT is a CFT whose lowest primary-field dimension is k + 1; it has the sparsest possible spectrum consistent with modular invariance, presenting a finite mass gap. The only known example is the k = 1 one with monster symmetry, constructed by Frenkel-Lepowsky-Meurman (FLM) [40] to have partition function J(q), but its uniqueness has not been proved. The existence of ECFTs with k > 1 is conjectured to be true [11] and is still an active open question [41, 42]. In this section we will mainly focus on the k = 1 case.

5.1 TEE for the full partition function

The modular-invariant partition function is still defined on a solid torus. We will again consider the bipartition that separates the two single-sided black holes, similar to the story in section 4.2. It is justified in appendix A that one can still cut the SL(2, Z) family of BTZ black holes along their outer horizons, which lie in the core of the solid torus. So one just needs to plug the partition function J(q) into the replica trick formula. At low temperatures, q = e^{-2\pi\beta} is small, so the full partition function is dominated by the q^{-1} term, with almost trivial thermal entropy and TEE, trivial in the sense that there are no tree-level contributions. At high temperatures, richer physics is allowed.
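The first few moonshine decompositions can be verified directly from the known coefficients of J(q) and the dimensions above:

```python
# Coefficients of J(q) = q^{-1} + 196884 q + 21493760 q^2 + 864299970 q^3 + ...
j_coeffs = {1: 196884, 2: 21493760, 3: 864299970}

# Dimensions of the smallest irreducible representations of the monster group M.
monster_dims = [1, 196883, 21296876, 842609326]

# Multiplicities in the McKay-Thompson decompositions into monster irreps:
#   196884    = 1 + 196883
#   21493760  = 1 + 196883 + 21296876
#   864299970 = 2*1 + 2*196883 + 21296876 + 842609326
multiplicities = {1: [1, 1, 0, 0], 2: [1, 1, 1, 0], 3: [2, 2, 1, 1]}

def reconstruct(n):
    """Rebuild the coefficient of q^n from monster irrep dimensions."""
    return sum(m * d for m, d in zip(multiplicities[n], monster_dims))
```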
Below we calculate the TEE of the full partition function in this regime. Generally, the coefficient in front of q^n in the partition function Z(q) for any k can be written as

c(k, n) = \sum_{i=0}^{193} m_i(-k, n) \, d_i,   (5.5)

where each d_i is the dimension of the corresponding irreducible representation M_i of M, and m_i(-k, n) is the multiplicity of M_i in a decomposition similar to (5.4), so c(k, n) is guaranteed to be a non-negative integer. At large n, m_i(-k, n) has the asymptotic form [43]

m_i(-k, n) \approx \frac{d_i \, |k|^{1/4}}{\sqrt{2} \, |M| \, |n|^{3/4}} \, e^{4\pi\sqrt{|kn|}}.   (5.6)

Now we restrict to the k = 1 case and let n be a variable. After taking care of the anti-holomorphic part, the replica trick (4.10) gives the following TEE:

S_{full}(A) = S_{full}^{thermal} = (1 - \beta\partial_\beta) \ln |J(q)|^2.   (5.7)

Note that this is again the same as the expression for the thermal entropy in the canonical ensemble. (Using \beta = l/r_+ = 1/\sqrt{M} = 1/\sqrt{n}, n is viewed as a function of \beta, so the second term in (5.7) is nonzero.)

The computation of S(A \cup B) for the entire SL(2, Z) family of black holes is also similar to that of M_{1,0} calculated in section 4.2. The result is again equal to the thermal entropy, based on the fact that the SL(2, Z) family of black holes are all solid tori with horizons living in the core. This implies that the system is again in a mixed state due to Euclideanization, as expected in [44, 45]. The mutual information I(A; B) is also the thermal entropy, parallel to the discussion in section 4.

In the high-temperature expansion, we only take the q^n term J_n(q) from the summation in J(q) to calculate the TEE, because this term has a coefficient exponentially larger than those at lower temperatures:^4

J_n(q) = \sum_{i=0}^{193} \frac{d_i^2}{\sqrt{2} \, |M| \, n^{3/4}} \, e^{4\pi\sqrt{n}} \, q^n.   (5.8)

Mathematically, the two copies of d_i in d_i^2 are both the dimension of the irreducible module M_i of the monster group, as will be explained in detail in section 5.2.
But physically they have different origins: one is the contribution from a single M_i, as shown in equation (5.5), while the other is the probability amplitude for M_i to appear in the summation, as in equation (5.6). Namely, there is a correspondence between the partition function J(q) and a pure state in the bulk, which is a superposition of all the different M_i's:

|\psi\rangle = \sum_{i=0}^{193} \frac{d_i}{\sqrt{|M|}} \, |i, \bar{i}\rangle.   (5.9)

In analogy to topological phases, this state is a maximally-entangled state of 194 types of "anyons" labelled by the irreducible representations of the monster group M. The d_i that appears explicitly in (5.9) corresponds to the one in (5.6), whereas |i, \bar{i}\rangle denotes a quasiparticle-antiquasiparticle pair labeled by M_i and contributes another d_i, corresponding to the one in (5.5).

^4 We will take into account all terms of J(q) in appendix B.

(Figure 14: the Wilson line corresponding to the quasiparticle-antiquasiparticle pair i, \bar{i} intersects the horizon, both on the constant time slice and in the 3d bulk.)

In ref. [27], the authors proposed, from abstract category theory, that the ER=EPR realization in the context of TQFT should be exactly of the form (5.9). We will show later that this specific maximally-entangled superposition is the bulk TQFT version of the thermofield double state on the dual CFTs. Applying to equation (5.8) the identity \sum_i d_i^2 = |M|, valid for finite groups, we arrive at

J_n(q) = \frac{1}{\sqrt{2} \, n^{3/4}} \, e^{4\pi\sqrt{n}} \, q^n = \frac{1}{\sqrt{2}} \, \beta^{3/2} \, e^{2\pi/\beta}.   (5.10)

Plugging this into (5.7) and taking into account the anti-holomorphic part, we again recover the Bekenstein-Hawking entropy:

S_{full}(A) = \frac{8\pi}{\beta} + 3 \ln\beta - \ln 2 - 3.   (5.11)

The first three terms agree with Witten's asymptotic formula for the Bekenstein-Hawking entropy [11], with an additional constant term -3. Remarkably, the "anyons" become invisible in the TEE after the summation over i. This is exactly due to the appearance of the maximally-entangled superposition in equation (5.9).
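The passage from (5.10) to (5.11) is a one-line application of S = (1 - \beta\partial_\beta) to \ln|J_n(q)|^2 = 3\ln\beta - \ln 2 + 4\pi/\beta; a finite-difference check of the closed form:

```python
import math

def ln_J_squared(beta):
    """ln |J_n(q)|^2 in the high-temperature approximation,
    J_n(q) ~ beta^{3/2} e^{2*pi/beta} / sqrt(2), with beta = 1/sqrt(n)."""
    return 3.0 * math.log(beta) - math.log(2.0) + 4.0 * math.pi / beta

def S_full(beta, h=1e-7):
    """S = (1 - beta d/dbeta) ln |J_n|^2, via finite differences."""
    d = (ln_J_squared(beta + h) - ln_J_squared(beta - h)) / (2.0 * h)
    return ln_J_squared(beta) - beta * d

def S_closed_form(beta):
    """Closed form: 8*pi/beta + 3*ln(beta) - ln(2) - 3."""
    return 8.0 * math.pi / beta + 3.0 * math.log(beta) - math.log(2.0) - 3.0
```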
Had we taken another state, in which only one single M_j appears with probability amplitude 1 and all the others appear with amplitude 0, the corresponding contribution would have been proportional to \ln(d_j/\sqrt{|M|}) instead of 0. The latter matches the entanglement entropy calculations in refs. [46-48] for an excited state labeled by j in a rational CFT.^5 In our case, the creation of the quasiparticle-antiquasiparticle pair i, \bar{i} can be represented by a Wilson line, as shown in figure 14. The Wilson line intersects the non-contractible loop of the solid torus, i.e. the horizon, which is the reason why it can be detected by a cut along the horizon.

To gain a full understanding of the "anyon" picture, we rewrite the state (5.9) as

|\psi\rangle = \frac{1}{\sqrt{J(q)}} \sum_{i=0}^{193} e^{-\beta E_i/2} \, |i, \bar{i}\rangle,   (5.12)

^5 This disappearance of "anyons" in the TEE for a maximally-entangled superposition is also expected in the context of topological phases; see equation (40) of ref. [3], where one takes |\psi_j| there to be d_j/D.
Unfortunately it has a shortcoming: as a pure state, the Moonshine double state above cannot reproduce the result of nonzero S(A [ B) (4.14). To account for this, one could modify the nal total quantum state as = j ~ ih ~ j th; where the modi ed moonshine double state now reads j ~ i = p4J(q) 1 Pi1=930 e 2 E~i ji; i i with Ei = 1 ln h jdMi2j Jn(q)1=2i. These energy levels lead to the partition function Z(q) = J (q)1=2. When one bipartites the system into two single-sided black holes A and B, one can see from straightforward computation that j ~ i will contribute half of Bekenstein-Hawking entropy. The newly introduced th is purely thermal and exhibits no non-local correlations between A and B, so that its von Neumann entropy is extensive and scales with volume. When one bipartites the system into the two single-sided black holes A and B, it will give half of the Bekenstein-Hawking entropy. Combining the contribution from j ~ i, we recover S ~ (A) = Sthermal(A), the Bekenstein-Hawking entropy. When considering S(A [ B), the modi ed moonshine double state contributes nothing as a pure state, while the result for th is simply Sthermal(A), matching with the calculations in (4.14). Another caveat is that since ln J is approximately the Bekenstein-Hawking entropy, the leading term in Ei scales with 2 n. So in order to have a genuine quantum theory, our theory has to have a UV cuto scale at a certain n. 
Furthermore, apart from the asymptotic expression (5.6), which gives rise to the tree-level Bekenstein-Hawking entropy, there is a remainder formula [49] for the coefficients of q^n in the whole partition function, eqs. (5.17)-(5.19). It expresses c(k, n) as the leading growth

c(k, n) \approx \frac{1}{\sqrt{2} \, (kn)^{3/4}} \, e^{4\pi\sqrt{kn}},

multiplied by an alternating series of power-suppressed corrections in (kn)^{-1/2}, plus a bounded remainder term S(k, n); here p(x) is the number of integer partitions of x \in \mathbb{Z}^+, and a_r(k) \equiv p(r + k) - p(r + k - 1) enters the explicit bounds satisfied by S(k, n). To check this claim, one could restrict to the k = 1 monstrous case and plug this expression into (5.7). Alternatively, one may fix n and view c(k, n) as the number of possible microstates at fixed energy, i.e. work in the micro-canonical ensemble, then perform a unilateral forward Laplace transform to return to the canonical ensemble and plug the result into (5.7). The computations in both methods are in general complicated, and we do not pursue them here. We provide another perspective on the loop contribution in appendix B, by plugging in the whole J-function instead of only the single large-n term. We observe that the loop correction is negative, consistent with both the thermal AdS3 case in section 3 and the BTZ case in section 4.

5.2 d_i as quantum dimensions

In this section we provide more mathematical details and show that d_i equals the quantum dimension of the irreducible module M_i of M. An ECFT at k = 1 is a special vertex operator algebra (VOA) V^\natural whose automorphism group is the monster group M. This VOA, also known as the moonshine module [40], is an infinite-dimensional graded representation of M with explicit grading

V^\natural = \bigoplus_{n=-1}^{\infty} V^\natural_n,   (5.20)

where every V^\natural_n is an M-module, called a homogeneous subspace. It can be further decomposed into irreducible M-modules,

V^\natural_n \cong \bigoplus_{i=0}^{193} M_i^{\oplus m_i(-1, n)},

with the M_i labeling the irreducible M-modules and m_i(-1, n) the multiplicity of M_i. This is the same multiplicity that appears in (5.5). (For ECFTs with general k, we have a tower of moonshine modules [43], V^{(-k)} = \bigoplus_{n=-k}^{\infty} V^{(-k)}_n, where the V^{(-k)}_n's are all
(For ECFTs with general k, we have a tower of moonshine modules [43] V ( k) = L1 n= k Vn ( k), where Vn ( k)'s are all irreducible (5.17) !3 (5.19) (5.20) sectors: where M and M the M-modules Mi in Vn ( k), so that Vn ( k) ' Li1=930 Mi mi( k;n):) M-modules. For each summand, one can similarly de ne mi( k; n) as the multiplicity of Since we restrict to the holomorphic part of Z( ; ) in this section, the entire dual CFT contains the ECFT above as a holomorphic piece. Furthermore, it is diagonal, i.e. its Hilbert space is a graded sum of tensor products of holomorphic and anti-holomorphic H = M 2C M M ; as V M M-modules, for the 194 V M-submodules ViM in V \ with V M = V1M, where Mi denotes an irreducible module for M with character di. This V M is a sub-VOA of V \ of CFT type [54], and is called the monster orbifold, because it is obtained from orbifolding V \ by its automorphism group M [83], in the same sense as orbifolding the Leech lattice VOA by Z=2Z in the FLM construction. The standard de nition of the quantum dimension of a VOA-module N with respect to a general VOA V is [52, 53] qdimV N = lim q!1 chqV chqN : The quantum dimensions of submodules of orbifold VOA V G obtained from orbifolding V by a subgroup G Aut(V ) only recently found their applications in quantum Galois theory [52, 53]. In our case, the quantum dimensions of all ViM's with respect to V M were are indecomposable representations of right and left Virasoro algebras. HJEP12(07)6 Since Virasoro action is built into the VOA axioms [50], these are also modules of the right and left monstrous VOAs, so V \ admits induced representations from representations of the Virasoro algebra [51]. Obviously there are in nite number of Virasoro primaries, and V \ is not an RCFT in this sense. However, V \ is a typical example of a holomorphic/self-dual VOA, i.e. there is only one single irreducible V \-module which is itself. 
Knowing that there is only one VOA-primary, one can reorganize Virasoro elds in M and M representations of V \, by introducing the graded dimension of the V \-module N , de ned as into irreducible chqN trN qL0 = X dim Nnqn; 1 n=0 where L0 is the usual Virasoro generator and Nn's are homogeneous subspaces of N labelled by eigenvalues of L0. (Note that we have omitted the overall prefactor q c=24 often appeared in literature.) The above procedure is similar to regourpong in nite number of Virasoro primaries in WZW models into nite number of Kac-Moody primaries. To explain the di appearing in (5.8), it is natural to consider quantum dimensions associated to V M consisted of xed points of the action by M on V \. By theorem 6.1 in [52, 53], we have the following decomposition of V \ V \ 194 M ViM i=1 Mi (5.21) (5.22) (5.23) (5.24) rst calculated to be qdimV M ViM = di in [43], using the asymptotic formula for multiplicities of M-modules Mi in Fourier coe cients of j-invariant, bypassing the knowledge of V M's rationality, which is still only conjectured to be true. The remaining question is to de ne in parallel a quantum dimension for the M-modules in the above pair ViM; Mi . The de nition (5.24) does not directly apply to an M-module, but one can extend the de nition using the n-graded dimension of M-modules Mi's. We de ne chqMi as6 chqMi X j i( ): n= 1 Vn\ ( )qn is the monstrous McKay-Thompson series for each as well as the unique Hauptmodul for a genus-0 subgroup of SL(2; R) for each belongs to an index set with order 171, deduced from the 194 conjugacy classes of M. The di erence 194 171 = 23 can be understood from the one-to-one correspondence between conjugacy classes and irreducible representations of M: most of the 194 irreducible representations have distinct dimensions, except for 23 coincidences. 's are only sensitive to the dimensions of the corresponding irreducible representations. 
i( ) is complex conjugation of the character of the irreducible representation Mi of the 171 \conjugacy classes" At large n, summation in chqMi is dominated by the rst Hauptmodul for the identity . 7 element of M, which is exactly the Klein's invariant j(q), so that q!1 lim chqMi In other words, one can view chqMi as a function chqMi(g) on group M, and when de ning the quantum dimension in (5.25), we take the value when its argument is the identity With this, we can de ne the quantum dimension of M-modules Mi in (5.20) relative qdimV \ Mi limq!1 cchhqqMV\i = lim n!1 dim(Mi)n dim Vn \ : Here chqV \ = J (q) by applying (5.24) to V \, which is a V \-module of itself. Combining the discussions above, the quantum dimension is just qdimV \ Mi = di: The di's that appeared explicitly in (5.8) of the TEE calculation are quantum dimensions of Mi, while those in (5.6) are quantum dimensions of ViM. They coincide numerically. As we mentioned before, the rationality of V M is widely conjectured to be true,8 and by a theorem of Huang [55], the module category of any rational, C2-co nite VOA is modular, i.e. it is a modular tensor category with a non-degenerate S-matrix. If one believes in 6We are deeply grateful to Richard E. Borcherds for suggesting this alternative formula. It is similar to the generating function of multiplicity mi( 1; n) in section 8.6 of [43], but without normalization by 1=jMj. 7In literature this is often denoted by tr( jMi) or tr(Mi( )) or chMi ( ) as well. 8Unfortunately, the conjecture has only been proved only when the subgroup of the automorphism group is solvable [84, 85], which is not our case. 
(5.25) (5.26) (5.27) (5.28) the rationality conjecture, then qdimV M ViM's have a well-de ned interpretation in terms of modular S-matrices of the orbifold CFT V M: di = Si0=S00: (5.29) Note that these 194 \anyons" are the pure charge exitations in the corresponding topological ordered system described by the modular tensor category associated with the orbifold VOA V M. 6 Discussion and outlook In the high-temperature regime, the full modular-invariant partition function (5.1) is dominated by the black hole solution Z1;0( ), while in the low-temperature regime, it is dominated by Z0;1( ), the thermal AdS3 solution [14, 32]. It is widely believed that there exists a Hawking-Page [56, 57] transition at the critical temperature 1, or r+ l. However, there is no consensus on whether this transition really exists [14, 58, 59], or if it exists, whether it is a rst-order or a continuous phase transition [60{65], or something else that is more subtle. In this section we o er a clue from the TEE perspective. We compare the a = 1 (de ned in gure 1) case in (3.7) of thermal AdS3 and the gure 9 case of a single-sided black hole, for their subregion A's both cover the whole space. One then observes that even at the tree level, TEE of BTZ and thermal AdS3 have di erent signs. A natural guess would thus be that, if the transition exists, it should be topological and happen at where the TEE changes sign. Our de nition of topological entanglement entropy is the constant subleading term in the expression for entanglement entropy, which is in general di erent from the tripartite information as used in [ 1 ]. For topological phases in condensed matter physics, these two formulations di er by a factor of two and are both negative. For gravitational theories in the bulk, our topological entanglement entropies can be either positive (as in BTZ black hole case) or negative (as in the thermal AdS3 case). 
To calculate the tripartite information, one can use the surgery method presented in this paper and nd its time-dependence, which at late times is negative of the Bekenstein-Hawking entropy [76]. This matches with the results in CFTs with gravitational duals, it is expected that the tripartite information should be negative [77] and that for thermo eld double state, it equals the negative of the Bekenstein-Hawking entropy [78]. Quantum dimensions also appears in the calculation of left-right entanglement in RCFT [86]. One might perform similar computations in the orbifold VOA V M appeared in section 5.2, by using the Ishibashi boundary CFT states that were constructed in [87] for open bosonic strings ending on D-branes. Given the \anyonic" interpretation in section 5, one natural question to ask is that, to what extent 3d pure quantum gravity can be described as a theory of topological order. Naively one would expect the corresponding topological order to be the 3d Dijkgraaf-Witten theory of the monster group M, which gives rise to the same modular tensor category as the one given by orbifold CFT V M as explained in section 5.2. On the other hand, it is also natural to expect the corresponding topological order to be the one which is e ectively described by the double SL(2; C) Chern-Simons theory. It would be highly non-trivial to nd a mechanism that reconciles these two theories. Another remark is that we have speci ed the bipartitions to be done at t = 0 in section 4, while in general the result can be time-dependent. In the latter case one can still use the surgery method proposed in this paper to nd the TEE or Renyi entropies, which can serve as an indicator of scrambling [79]. A nal mathematically motivated direction is the following. Vaughn Jones considered how one von Neumann algebra can be embedded in another and developed subfactor theory [80]. 
In general, the Jones program is about how to embed one in nite object into another, reminiscent of eld extensions in abstract algebra, and quantum dimension is dened exactly in this spirit. It would be interesting to see how subfactor theory in general can help connect topological phases and pure quantum gravity [81]. Acknowledgments We are deeply grateful to Richard E. Borcherds for teaching us quantum dimensions of M-modules over V \. We appreciate Song He and Mudassir Moosa's suggestions on the manuscript, and thank Ori J. Ganor and Yong-Shi Wu for remarks on Hawking-Page transition. We thank Norihiro Iizuka and Seiji Terashima for explaining their work, Andreas W. W. Ludwig and Zhenghan Wang for extremely helpful comments on the moonshine module. We thank Diptarka Das, Shouvik Datta and Sridip Pal for explaining their work and pointing out ref. [87] to us. Zhu-Xi thanks Herman Verlinde for comments on the sign of BTZ TEE, and Zheng-Cheng Gu, Muxin Han, Jian-dong Zhang for helpful discussions. We also appreciate the workshop \Mathematics of Topological Phases of Matter" at SCGP, where part of the work was completed. A Bipartition for the full partition function In this appendix we justify that inputting j-invariant into the replica trick formula is a legal operation. We need to make sure that the horizon in the SL(2; Z) family of Euclidean BTZ black holes is still at the central cord of their solid tori, so that we can cut along it. Although j-function contains contribution from thermal AdS3 which contains no black holes, we will see later that this con guration contributes nothing at a high enough nite temperature. For convenience we set l = 1. 
To see how Euclidean BTZ Schwarzschild coordinates transform under the SL(2; Z) action on , we need an intermediate FRW metric for the unexcited (before being quotiented by ) AdS3 with cylindrical topology, similar to the one mainly used in [14]: ds2 = cosh2 d 2 + d 2 = sinh2 (du du)2 + cosh2 (du + du)2 + d 2 = sinh2 d 2 + cosh2 dt02 + d 2; (A.1) where 2u i t and 2u indicates the radial direction. i t parametrize the domain of discontinuity , and To obtain a Euclidean BTZ from this, we demand 2u 1= = + i the modular parameter for BTZ black hole, and the modular parameter of thermal AdS3. The identi cation in the BTZ spatial direction is automatic due to the periodicity in the H 3 metric; Im 0 represents the time identi cation because it is the length of the time cycle, and Re 0 o ers a spatial twist upon that identi cation, inducing an angular momentum by \tilting" the meridian.9 De ne the Schwarzschild radial coordinate r: sinh2 = r 2 (Im(1= 0))2 j 0j2 ; we obtain the Euclidean BTZ black hole in Schwarzschild coordinates for r Im(1= 0): HJEP12(07)6 ds2 = N 2dt2 + N (r) 2dr2 + r2[d + N (r)dt]2; where N 2(r) (Re(1= 0))(Im(1= 0))=r2. [r2 (Im(1= 0))2][r2 + (Re(1= 0))2]=r2, and N (r) = Now the outer horizon is at r+ = Im(1= 0). When an SL(2; Z) transformation is applied 0 ! 00 = 1=(c 0 + d) = =(d c), r becomes r002 ! (c Re 0 + d)2 sinh2 + (c Im 0)2 cosh2 jc 0 + dj4 : It is enough to just think of 1=(c 0 + d) because there are only three independent parameters in (a; b; c; d) due to the constraint ad bc = 1. One has the freedom to choose a = 0, which xes bc = 1, consequently (a 0 + b)=(c 0 + d) = 1=(c2 0 + cd). Rede ne c2 = c and cd = d, then we arrive at 1=(c 0 + d). The minus sign in both c and d is not a problem, because (c; d) is equivalent to ( c; d). Since sinh2 = r 2 2 1, we have Im 00 = c =(c2 2 + d2), Re 00 = d=(c2 2 + d2), implying a rotating black hole. 
Now we need to see if the new r00 is still at the horizon in the Schwarzschild coordinates associated to 0, and it su ces to check that r+00 = Im 00. This is indeed true. Hence no matter what (c; d) we change into, as long as and 00 are SL(2; Z)-equivalent, r00 = r00 + coordinate system for the upper half H3, so our cut is still valid. Im will be mapped to a segment on z-axis of spherical B TEE from the whole J (q) function Now we plug the entire J -function as the canonical partition function into (4.10). We start from the de nition of j-invariant j( ) = J ( ) 744 E3( )= ( ), where 4 = 24( ) is the normalized modular discriminant. To nd the derivative of J ( ), we make use of the Jacobi theta function #(f ) f 0 m 12 E2( )f [73], where Ej ( ) is Eisenstein series of weight j and m is the weight of an arbitrary modular form f . Substituting j( ) for f , we obtain (A.2) (A.3) (A.4) d j( ) = #(j( )) + E2( )j( ): (B.1) 9Situation is almost identical in the thermal AdS3 (A.1), where Im speci es the time identi cation, upon which Re indicates a spatial twist. d j( ) = 2 i E6( ) E4( ) j( ): Sfull( ) = ln J ( ) + 2 j( ) E6( ) J ( ) E4( ) : Plugging into the replica trick equation (4.10) we obtain for the holomorphic part HJEP12(07)6 Einstein series Gs( ) positive integer N [74]: To calculate the ration E6=E4, we use the asymptotic formula for the holomorphic 2 (s)Es( ), assuming 0 < j arg j < and Re(s) > N + 1 for any (B.2) (B.3) (B.4) J (i ), (B.5) We have made use of the fact that the weight of j( ) is three times the weight of E4( ) by de nition. One easily observes from the right hand side of above equation that the weight of j( ) becomes 12 + 2 = 14 after di erentiation. Since the vector space of SL(2; Z) plugging in the rst several terms of the j( ) function and we nally arrive at10 modular forms of weight 14 is spanned by E2( )E6( ) and has complex dimension 1, we 4 must have dd j( ) / EE46(( )) j( ), up to a constant prefactor. This factor can be found by [2] M. 
Levin and X.-G. Wen, Detecting topological order in a ground state wave function, Phys. 10It is also a consequence of applying Ramanujan's identities on E2, E4 and E6 [16]. 1) s)(1 + e is) (s) + 2 sin(s ) (1 + cos(s )) (s) X k=1; k odd 2 sin(s ) s k (s + k) ( k) k + O(j jN ); j j 1: For both s = 4; 6, the second term vanishes at high temperatures j j ! 0, and sin(s ) in the summation over k vanishes as well. Switching to the real variable = i , we have G4(i ) 2 4 (4) and G6(i ) where we have taken into account the anti-holomorphic part. Now we see that if we consider the entire SL(2; Z) family of black holes as well as thermal AdS3 (the later contributes little at small ), the one-loop contribution to TEE is negative, agreeing with our previous calculations. Open Access. This article is distributed under the terms of the Creative Commons Attribution License (CC-BY 4.0), which permits any use, distribution and reproduction in any medium, provided the original author(s) and source are credited. [3] S. Dong, E. Fradkin, R.G. Leigh and S. Nowling, Topological entanglement entropy in Chern-Simons theories and quantum Hall uids, JHEP 05 (2008) 016 [arXiv:0802.3231] (1988) 46 [INSPIRE]. [4] P. Calabrese and J.L. Cardy, Entanglement entropy and quantum eld theory, J. Stat. Mech. 06 (2004) P06002 [hep-th/0405152] [INSPIRE]. [5] E. Witten, Quantum eld theory and the Jones polynomial, Commun. Math. Phys. 121 [arXiv:1708.04242] [INSPIRE]. [hep-th/0005106] [INSPIRE]. [hep-th/0106112] [INSPIRE]. [7] L. McGough and H. Verlinde, Bekenstein-Hawking entropy as topological entanglement entropy, JHEP 11 (2013) 208 [arXiv:1308.2342] [INSPIRE]. [8] M. Ban~ados, C. Teitelboim and J. Zanelli, The black hole in three-dimensional space-time, Phys. Rev. Lett. 69 (1992) 1849 [hep-th/9204099] [INSPIRE]. [9] M. Ban~ados, M. Henneaux, C. Teitelboim and J. Zanelli, Geometry of the (2 + 1) black hole, Phys. Rev. D 48 (1993) 1506 [Erratum ibid. D 88 (2013) 069902] [gr-qc/9302012] [INSPIRE]. [10] S. 
Giombi, A. Maloney and X. Yin, One-loop partition functions of 3D gravity, JHEP 08 (2008) 007 [arXiv:0804.1773] [INSPIRE]. [11] E. Witten, Three-dimensional gravity revisited, arXiv:0706.3359 [INSPIRE]. [12] X. Yin, Partition functions of three-dimensional pure gravity, Commun. Num. Theor. Phys. 2 (2008) 285 [arXiv:0710.2129] [INSPIRE]. [arXiv:1509.02062] [INSPIRE]. 02 (2010) 029 [arXiv:0712.0155] [INSPIRE]. [13] B. Chen and J.-Q. Wu, 1-loop partition function in AdS3/CFT2, JHEP 12 (2015) 109 [14] A. Maloney and E. Witten, Quantum gravity partition functions in three dimensions, JHEP [15] T. Barrella, X. Dong, S.A. Hartnoll and V.L. Martin, Holographic entanglement beyond classical gravity, JHEP 09 (2013) 109 [arXiv:1306.4682] [INSPIRE]. [16] D. Das, S. Datta and S. Pal, Monstrous entanglement, JHEP 10 (2017) 147 [17] K. Krasnov, Holography and Riemann surfaces, Adv. Theor. Math. Phys. 4 (2000) 929 [18] J.M. Maldacena, Eternal black holes in anti-de Sitter, JHEP 04 (2003) 021 [19] M. Van Raamsdonk, Building up spacetime with quantum entanglement, Gen. Rel. Grav. 42 (2010) 2323 [Int. J. Mod. Phys. D 19 (2010) 2429] [arXiv:1005.3035] [INSPIRE]. [20] J. Maldacena and L. Susskind, Cool horizons for entangled black holes, Fortsch. Phys. 61 (2013) 781 [arXiv:1306.0533] [INSPIRE]. [arXiv:1104.3712] [INSPIRE]. [21] S.N. Solodukhin, Entanglement entropy of black holes, Living Rev. Rel. 14 (2011) 8 [22] N. Iizuka, A. Tanaka and S. Terashima, Exact path integral for 3D quantum gravity, Phys. Rev. Lett. 115 (2015) 161304 [arXiv:1504.05991] [INSPIRE]. 3256. [23] S. Gukov, Three-dimensional quantum gravity, Chern-Simons theory and the A polynomial, [25] S. Ryu and T. Takayanagi, Holographic derivation of entanglement entropy from AdS/CFT, [26] J.D. Brown and M. Henneaux, Central charges in the canonical realization of asymptotic [31] J.M. Maldacena and A. Strominger, AdS3 black holes and a stringy exclusion principle, JHEP 12 (1998) 005 [hep-th/9804085] [INSPIRE]. [32] R. 
Dijkgraaf, J.M. Maldacena, G.W. Moore and E.P. Verlinde, A black hole farey tail, [33] J. Manschot, AdS3 partition functions reconstructed, JHEP 10 (2007) 103 [30] W.P. Thurston, Three dimensional manifolds, Kleinian groups and hyperbolic geometry, Bull. [34] S. Carlip and C. Teitelboim, Aspects of black hole quantum mechanics and thermodynamics in (2 + 1)-dimensions, Phys. Rev. D 51 (1995) 622 [gr-qc/9405070] [INSPIRE]. [35] G. Hohn, Selbstduale Vertexoperatorsuperalgebren und das Babymonster (in German), Ph.D. thesis, Bonn Germany, (1995) [Bonner Math. Schr. 286 (1996) 1] [arXiv:0706.0236]. [36] J.H. Conway, R.T. Curtis, S.P. Norton, R.A. Parker and R.A. Wilson, Atlas of nite groups: maximal subgroups and ordinary characters for simple groups, Clarendon Press, Oxford U.K., (1985), pg. 220. [37] J.G. Thompson, Some numerology between the Fisher-Griess monster and the elliptic modular function, Bull. London Math. Soc. 11 (1979) 352. [38] J.H. Conway and S.P. Norton, Monstrous moonshine, Bull. London Math. Soc. 11 (1979) 308. [39] R.E. Borcherds, Monstrous moonshine and monstrous Lie superalgebras, Invent. Math. 109 [40] I. Frenkel, J. Lepowsky and A. Meurman, A natural representation of the Fischer-Griess monster with the modular function J as character, Proc. Nat. Acad. Sci. U.S.A. 81 (1984) [41] J.-B. Bae, K. Lee and S. Lee, Bootstrapping pure quantum gravity in AdS3, [42] D. Gaiotto, Monster symmetry and extremal CFTs, JHEP 11 (2012) 149 [arXiv:0801.0988] 92 (2015) 065010 [arXiv:1507.00582] [INSPIRE]. operators in 2D CFTs, JHEP 10 (2015) 173 [arXiv:1507.01157] [INSPIRE]. [49] F.L. Williams, Remainder formula and zeta expression for extremal CFT partition functions, in Symmetry: representation theory and its applications, R. Howe, M. Hunziker and J. Willenbring eds., Progr. Math. 257, Birkhauser, New York NY U.S.A., (2014), pg. 505 [INSPIRE]. [50] I. Frenkel, J. Lepowsky and A. Meurman, Vertex operator algebras and the monster, Pure Appl. Math. 
134, Academic Press Inc., Boston MA U.S.A., (1988), pg. 329 [INSPIRE]. [51] E. Frenkel and D. Ben-Zvi, Vertex algebras and algebraic curves, Math. Surv. Mon. 88, Amer. Math. Soc., Providence RI U.S.A., (2004), pg. 43 [math.QA/0007054]. [52] C. Dong, X. Jiao and F. Xu, Quantum dimensions and quantum Galois theory, Trans. Amer. Math. Soc. 365 (2013) 6441 [arXiv:1201.2738]. [hep-th/9412037] [INSPIRE]. [53] C.-Y. Dong and G. Mason, On quantum Galois theory, Duke Math. J. 86 (1997) 305 [INSPIRE]. [hep-th/0003271] [INSPIRE]. (2005) 297 [hep-th/0506096] [INSPIRE]. [54] T. Gannon, Moonshine beyond the monster: the bridge connecting algebra, modular forms and physics, Cambridge University Press, Cambridge U.K., (2006), pg. 292. [55] Y.-Z. Huang, Vertex operator algebras, the Verlinde conjecture and modular tensor categories, Proc. Nat. Acad. Sci. 102 (2005) 5352 [math.QA/0412261] [INSPIRE]. [56] S.W. Hawking and D.N. Page, Thermodynamics of black holes in anti-de Sitter space, Commun. Math. Phys. 87 (1983) 577 [INSPIRE]. [57] P.C.W. Davies, Thermodynamics of black holes, Proc. Roy. Soc. Lond. A 353 (1977) 499 [58] J. Crisostomo, R. Troncoso and J. Zanelli, Black hole scan, Phys. Rev. D 62 (2000) 084013 [59] Y.S. Myung, No Hawking-Page phase transition in three dimensions, Phys. Lett. B 624 [60] P. Caputa, N. Kundu, M. Miyaji, T. Takayanagi and K. Watanabe, Liouville action as path-integral complexity: from continuous tensor networks to AdS/CFT, JHEP 11 (2017) 097 [arXiv:1706.07056] [INSPIRE]. [61] M. Eune, W. Kim and S.-H. Yi, Hawking-Page phase transition in BTZ black hole revisited, JHEP 03 (2013) 020 [arXiv:1301.0395] [INSPIRE]. [62] L. Cappiello and W. Mueck, On the phase transition of conformal eld theories with HJEP12(07)6 [66] A.B. Zamolodchikov and A.B. Zamolodchikov, Liouville eld theory on a pseudosphere, J. Phys. A 13 (1980) 1113 [INSPIRE]. (2001) 2183 [gr-qc/0102052] [INSPIRE]. hep-th/0101152 [INSPIRE]. Phys. 123 (1989) 177 [INSPIRE]. pg. 97. [70] G.W. 
Moore and N. Seiberg, Classical and quantum conformal eld theory, Commun. Math. [68] O. Coussaert, M. Henneaux and P. van Driel, The asymptotic dynamics of three-dimensional Einstein gravity with a negative cosmological constant, Class. Quant. Grav. 12 (1995) 2961 [71] J. Fuchs, I. Runkel and C. Schweigert, TFT construction of RCFT correlators I. Partition functions, Nucl. Phys. B 646 (2002) 353 [hep-th/0204148] [INSPIRE]. [72] V. Balasubramanian, P. Hayden, A. Maloney, D. Marolf and S.F. Ross, Multiboundary wormholes and holographic entanglement, Class. Quant. Grav. 31 (2014) 185015 [arXiv:1406.2663] [INSPIRE]. [73] M. Kaneko and D. Zagier, Supersingular j-invariants, hypergeometric series, and Atkin's orthogonal polynomials, in Proceedings of the Conference on Computational Aspects of Number Theory, AMS/IP Stud. Adv. Math. 7, International Press, Cambridge U.K., (1997), [74] T. Node, Some asymptotic expansions of the Eisenstein series, RIMS K^okyu^roku 1659 [75] J.F. Duncan and I.B. Frenkel, Rademacher sums, moonshine and gravity, Commun. Num. Theor. Phys. 5 (2011) 849 [arXiv:0907.4529] [INSPIRE]. [76] Z.-X. Luo and H.-Y. Sun, Time-dependent topological entanglement entropy in Euclidean [77] P. Hayden, M. Headrick and A. Maloney, Holographic mutual information is monogamous, Phys. Rev. D 87 (2013) 046003 [arXiv:1107.2940] [INSPIRE]. [78] P. Hosur, X.-L. Qi, D.A. Roberts and B. Yoshida, Chaos in quantum channels, JHEP 02 (2016) 004 [arXiv:1511.04021] [INSPIRE]. [79] C.T. Asplund, A. Bernamonti, F. Galli and T. Hartman, Entanglement scrambling in 2d conformal eld theory, JHEP 09 (2015) 110 [arXiv:1506.03772] [INSPIRE]. local perturbations of the eternal BTZ black hole, JHEP 08 (2015) 011 [arXiv:1503.08161] HJEP12(07)6 115 (2015) 131602 [arXiv:1504.02475] [INSPIRE]. (2003) 229 [hep-th/0202074] [INSPIRE]. [1] A. Kitaev and J. Preskill , Topological entanglement entropy , Phys. Rev. Lett . 96 ( 2006 ) [6] E. 
Witten , ( 2 + 1) -dimensional gravity as an exactly soluble system , Nucl. Phys. B 311 Commun. Math. Phys. 255 ( 2005 ) 577 [ hep -th/0306165] [INSPIRE]. [24] T. Azeyanagi , T. Nishioka and T. Takayanagi , Near extremal black hole entropy as entanglement entropy via AdS2/CFT1, Phys . Rev. D 77 ( 2008 ) 064005 [arXiv: 0710 .2956] Phys . Rev. Lett . 96 ( 2006 ) 181602 [ hep -th/0603001] [INSPIRE]. symmetries: an example from three-dimensional gravity , Commun. Math. Phys. 104 ( 1986 ) [27] J.C. Baez and J. Vicary , Wormholes and entanglement, Class. Quant. Grav . 31 ( 2014 ) [28] J.W. Milnor , A procedure for Killing homotopy groups of di erentiable manifolds , Proc.
This is a preview of a remote PDF: https://link.springer.com/content/pdf/10.1007%2FJHEP12%282017%29116.pdf
Zhu-Xi Luo, Hao-Yu Sun. Topological entanglement entropy in Euclidean AdS3 via surgery, Journal of High Energy Physics, 2017, 116, DOI: 10.1007/JHEP12(2017)116 | {"pred_label": "__label__cc", "pred_label_prob": 0.7363527417182922, "wiki_prob": 0.26364725828170776, "source": "cc/2019-30/en_middle_0010.json.gz/line29994"} |
high_school_physics | 583,909 | 15.33039 | 1 | Hifi Pig
Hifi News, Hifi Reviews
Hifi Reviews
Turntables, Cart’s and Arms
CD Players and Transports
Computer Audio and Streaming
Unboxing Hifi
Hifi Diary
Hifi Pig Quarterly
Hifi Pig E-Magazine Issuu Version
Hifi Shows
Bristol Hifi Show 2018
Behind the Brands
Hifi Company Profiles
Factory Visits
Hifi Comment
Dealers View
Fidelity Matters
Rants, Raves and Rock n Roll
Readers Lives
Retro Bites
The Views Of Stu
The Voice Of Dom
Transatlantic Dispatches: A New World Disorder
About Hifi Pig
Hifi Pig People
Hifi Pig Christmas Gift Guide 2020
Find Your Local Hifi Dealer
North West Audio Show
North West Audio Show 2019
High End Munich 2019
QED Announce XT25 Loudspeaker Cable
27. November 2017 · Write a comment · Categories: Hifi News · Tags: cable, cables, hi fi news, hifi news, loudspeaker, Loudspeaker cable, Loudspeakers
QED has announced that its new XT25 loudspeaker cable would be available from QED stockists in the UK by the end of the November. Retailers around the world should receive stock in December.
XT25 was developed as a first upgrade cable for all speaker types whether they be floor-standing, bookshelf stereo or home theatre and therefore was deliberately designed to be small and flexible enough for use in this demanding and price sensitive environment.
In common with all QED’s cables, XT25’s design is informed by the results of their research into loudspeaker cable design which began in 1995 and is detailed in the recently updated The Science of Sound Report. This report sets out the design principles to which QED have adhered ever since and which along with many hours of listening and iteration resulted in the development of their flagship Supremus speaker cable, on which all their designs are based.
Calculation can accurately predict the maximum skin depth at a given frequency for a particular conducting material making it possible to keep the diameter of the cables below the critical size determined by the maximum frequency expected. QED cables above 1.5 mm2 cross-sectional area utilise air core technology to obviate the skin effect by bundling several smaller cores together to make up a larger CSA. As frequency increases electrons flow more and more towards the periphery of a conductor so that if the frequency is high enough only a very thin layer (or skin) on the outside of the conductor is used. This skin depth varies for different materials at a fixed frequency and in copper it means that if a conductor has larger than 0.66 mm2 cross-sectional area not all of that area is available for an analogue music signal to use. In previous QED cables the skin effect problem was effectively eliminated by the use of X-Tube™ Technology which works by placing all of the conducting material around a central hollow insulating rod. However, for ac signals, changing magnetic fields generated by the flow of current set up eddy currents in nearby conductors which force current to flow only in areas furthest away from conductors carrying current in the same direction and vice versa. This proximity effect has a detrimental influence on current distribution in a speaker cable even if it utilises X-Tube™ Technology. By forming the conductors into a tube-like shape with hollow centre, current densities at different frequencies are maintained because the electric field which contributes towards the skin effect acts towards the centre of the conductor from where the conductive material has been removed. 
At the same time the ring of conductors is formed into separate bundles with only a loose electrical association which are then twisted into a 90 mm lay so that no single conductor bundle remains on the inside or outside of the cable (and therefore prey to the proximity effect) for long enough for it to become an audible problem.
QED say that they have long recognised that low DC resistance of the loudspeaker cable is of paramount importance for high fidelity signal transfer. This is because the speaker presents a frequency dependent load to the amplifier of which the cable forms a variable proportion. If resistance is allowed to be too large, then audible changes to the frequency response characteristics of the loudspeaker will be introduced which cannot be corrected for by the amplifier’s negative feedback loop. In order to improve upon previous budget cables, QED’s first target was to increase the CSA without increasing the overall size of the cable substantially. To that end, combined with QED’s exclusive use of 99.999% oxygen-free copper, the CSA has been increased from 1.5 mm2 to 2.5 mm2 which instantly gives the new cable a considerably lower dc resistance.
It is not generally appreciated that the electrical signals moving at or near the speed of light in a wire do so via the medium of electromagnetic (EM) waveforms which exist within the dielectric which surrounds the conductors as well as within the conductors themselves. The movement of electrons along the conductor merely facilitates generation of the EM waveform as their drift velocity is much slower – being only a few centimetres per second. It is therefore important to ensure that the dielectric material used to insulate and protect the central conductors of the speaker cable is of a type which ‘permits’ the establishment of EM waveforms without appreciable loss. Dielectric losses are directly proportional to the permittivity of the material used and as this a measure of each material relative to that in a vacuum it should be as close to unity as possible. Like its predecessors, XT25 uses a specially formulated low-density polyethylene (LDPE) dielectric which at 1.69 has the lowest relative permittivity practically available. QED say that their research has shown that low capacitance cables are generally preferred in listening tests over high capacitance counterparts and this is usually because low loss dielectrics have been utilised. The use of LDPE and careful control of conductor spacing results in a cable with a very low capacitance per meter and a dissipation factor (loss tangent) of 0.0001 at 10 kHz.
\section{Conclusions}
To the best of our knowledge, \emph{Cormorant} is the first neural network architecture in which
the operations implemented by the neurons are directly motivated by the form of known physical interactions.
Rotation and translation invariance are explicitly ``baked into'' the network by the fact that all activations
are represented in spherical tensor form (\m{\SO(3)}--vectors), and the neurons combine Clebsch--Gordan
products, concatenation of parts and mixing with learnable weights, all of which are covariant operations.
In future work we envisage the potentials learned by \emph{Cormorant} to be directly integrated in
MD simulation frameworks.
In this regard, it is very encouraging that on MD-17, which is the standard benchmark for force field
learning, \emph{Cormorant} outperforms all other competing methods.
Learning from derivatives (forces) and generalizing to other compact symmetry groups are
natural extensions of the present work.
\subsection*{Acknowledgements}
This project was supported by DARPA ``Physics of AI'' grant number HR0011837139, and used computational
resources acquired through NSF MRI 1828629.
We thank E. Thiede for helpful discussion and comments on the manuscript.
\ignore{
We propose a new architecture, we call Cormorant, for learning on molecular data from DFT calculations.
These networks provide a physics inspired platform for learning functions on molecular data.
At the heart of the networks are $n$-atom interactions that combine $n$ atomic representations using
Clebsch-Gordan operations to covariantly construct a new atom representation.
We consider a specific choice of interactions in this framework, and relate the corresponding architecture
to a generalization of message passing neural networks.
We train our Cormorant on two standard datasets in the molecular chemistry community.
We find that for the problem of learning potential energy surfaces, we significantly outperform competing architectures. For the task of learning ground state molecular properties we are competitive with the state of the art on many learning targets.
}
\ignore{
\clearpage
Our network is constructed in three components: (1) An input featurization network $\{F^{s=0}_j\} \leftarrow \mathrm{INPUT}(\{Z_j, r_{jj'}\})$ that operates only on atomic charges/identities and (optionally) a scalar function of relative positions $r_{jj'}$. (2) An $S$-layer network $\{ F^{s+1}_j \} \leftarrow \mathrm{CGNet}(\{ F^s_j \})$of covariant activations $F^{s}_i$, each of which is a $\SO(3)$-vector of type $\tau_i$. (3) A rotation invariant network at the top $y \leftarrow \mathrm{OUTPUT}(\bigoplus_{s=0}^{S} \{F^s_i\})$ that constructs scalars from the activations $F^s_i$, and uses them to predict a regression target $y$. In the following section we focus on the network that constructs the covariant activation functions, and leave the details of the input and output featurization to the Supplement.
\subsection{Clebsch-Gordan non-linearity and \m{\SO(3)}-vector operations}
\label{sec:cg_nonlinearity}
The central operation in our Cormorant is the Clebsch-Gordan transformation applied to two $\SO(3)$ vectors $F_1$ and $F_2$, with types $\tau_1 = \big(\tau_1^0, \ldots, \tau_1^{L} \big)$ and $\tau_2 = \big(\tau_2^0, \ldots, \tau_2^{L} \big)$. This requires a generalization of the transformation in Sec.~\ref{sec: physical}, defined for single component irreducible $\SO(3)$-vectors $Q^{A}_{\ell}$ and $Q^{B}_{\ell^\prime}$.
The general form of the CG decomposition
results in a quadratic increase in the number of parts.
Here, we specialize to the case where $\tau_1^\ell = \tau_2^\ell = N_c$, where $N_c$ is the number of channels. Given this restriction, we henceforth define the CG decomposition between two $\SO(3)$-vectors as:
\begin{equation}
F_{1} \otimes_{\rm cg} F_{2} =
\bigoplus_{c}\bigoplus_{\ell = |\ell_{1} - \ell_{2}|}^{\ell_{1}+\ell_{2}}
C_{\ell_{1},\ell_{2},\ell} \cdot\left(F_{1,c}^{\ell}\otimes F_{2,c}^{\ell}\right).
\end{equation}
This structure is strictly less general than the form used in \citep{KLT2018}, as it takes the elements $c = c^\prime$ of the ``part'' indices. However, it is more computationally tractable, and no less expressive when combined with linear mixing matrices.
Throughout this text, $\oplus$ denotes the sum of $\SO(3)$-vectors, which concatenates the irreps of each isotypic part of both $\SO(3)$-vectors type $\tau_1$ and $\tau_2$ into a new $\SO(3)$-vector of type $\tau^\ell_3 = \tau^\ell_1 + \tau^\ell_2$. We also can mix $\SO(3)$-vectors component wise with a list of weight mixing matrices $W = (W^0, \ldots, W^\ell)$, which we denote through $\tilde{F} = F \cdot W = \oplus_{\ell=0}^{L} F^\ell W^\ell$. It is useful to note that the CG product is only associative or commutative up to a unitary transformation, i.e., $\left( F_1 \otimes_{\rm cg} F_2 \right) = \left( F_2 \otimes_{\rm cg} F_1 \right) \cdot U$, and $\left( F_1 \otimes_{\rm cg} F_2 \right) \otimes_{\rm cg} F_3 = F_1 \otimes_{\rm cg} \left( F_2 \otimes_{\rm cg} F_3 \right) \cdot V$, for some set of unitary matrices $U_\ell U_\ell^\dagger = V_\ell V_\ell^\dagger = \hat{1}$. At times we will reabsorb these unitary matrices into a redefinition of the learnable weights $W$.
}
\section{Experiments}
We present experimental results on two
datasets of interest to the computational chemistry community: MD-17
for learning molecular force fields and potential energy surfaces,
and QM-9 for learning the ground state properties of a set of molecules.
The supplement provides a detailed summary of all hyperparameters, our training algorithm, and the details of the input/output levels used
in both cases. Our code is available at \href{https://github.com/risilab/cormorant}{https://github.com/risilab/cormorant}.
\textbf{QM9}~\citep{ramakrishnan2014quantum} is a dataset of approximately \m{134}k small organic molecules
containing the atoms H, C, N, O, F.~
For each molecule, the ground state configuration is calculated using DFT, along with a variety of molecular properties.
We use the ground state configuration as the input to our Cormorant,
and use a common subset of properties in the literature as regression targets.
Table~\ref{tab:results}(a) presents our results averaged over three training runs compared with
SchNet~\citep{SchNet}, MPNNs~\citep{Riley2017}, and wavelet scattering networks~\citep{Hirn2017}.
Of the twelve regression targets considered, we achieve leading or competitive results on six
($\alpha$, $\Delta\epsilon$, $\epsilon_{\mathrm{HOMO}}$, $\epsilon_{\mathrm{LUMO}}$, $\mu$, $C_v$).
The remaining six targets are within $40\%$ of the best result, with the exception of $R^2$.
\textbf{MD-17}~\citep{Chmiela2016a} is a dataset of eight small organic molecules
(see Table~\ref{tab:results}(b)) containing up to 17 total atoms composed of the atoms H, C, N, and O.~
For each molecule, an \emph{ab initio} molecular dynamics simulation was run using DFT
to calculate the ground state energy and forces. At intermittent timesteps,
the energy, forces, and configuration (positions of each atom) were
recorded. For each molecule we use a train/validation/test split of
50k/10k/10k configurations, respectively.
The results of these experiments are presented in Table~\ref{tab:results}(b), where the
mean-average error (MAE) is plotted on the test set for each of molecules. (All units are in kcal/mol, as consistent with the dataset and the literature.)
To the best of our knowledge, the current state-of-the art algorithms on this dataset are
DeepMD~\citep{Zhang2017}, DTNN~\citep{Schutt2017}, SchNet~\citep{SchNet}, GDML~\citep{Chmiela2016a}, and sGDML~\citep{Chmiela2018}.
Since training and testing set sizes are not consistent across the literature, we
used a training set of 50k configurations to compare with all neural-network-based
approaches. As can be seen from the table, our \emph{Cormorant} network
outperforms all competitors.
\begin{table}[t]
\centering
\caption{\label{tab:results} Mean absolute error of various prediction targets on QM-9 (left)
and conformational energies (in units of kcal/mol) on MD-17 (right). The best results within one standard deviation over three Cormorant training runs (shown in parentheses) are indicated in bold.}
\begin{minipage}{0.49\textwidth}
\tiny
\input{gdb9_results}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\tiny
\input{md17_results}
\end{minipage}
\end{table}
\section{Introduction}
In principle, quantum mechanics provides a perfect description of the forces governing the
behavior of atoms, molecules and crystalline materials such as metals.
However, for systems larger than a few dozen atoms, solving
the Schr\"odinger equation explicitly at every timestep
is not a feasible proposition on present day computers.
Even Density Functional Theory (DFT) \citep{HohenbergKohn},
a widely used approximation to the equations of quantum mechanics,
has trouble scaling to more than a few hundred atoms.
Consequently, the majority of practical work in molecular dynamics today
falls back on fundamentally classical models, where the atoms are essentially treated as
solid balls and the forces between them are given by pre-defined formulae called
\emph{atomic force fields} or \emph{empirical potentials}, such as
the CHARMM family of models \citep{CHARMM1983,CHARMM2009}.
There has been a widespread realization that this approach
has inherent limitations, so in
recent years a burgeoning community has formed around trying to use machine learning
to \emph{learn} more descriptive force fields directly from DFT computations
\citep{Behler2007fe,Bartok2010wd,Rupp2012,Shapeev2015,Chmiela2016a,Zhang2017,Schutt2017,Hirn2017}.
More broadly, there is considerable interest in using ML methods not just for learning
force fields, but also for predicting many other physical/chemical properties of atomic systems
across different branches of materials science, chemistry and pharmacology
\citep{MontavonEtAl,Riley2017,ANI1,Tensormol}.
At the same time, there have been significant advances in our understanding of the equivariance and
covariance properties of neural networks, starting with
\citep{Cohen2016,Cohen2017} in the context of traditional convolutional neural nets (CNNs).
Similar ideas underly generalizations of CNNs to manifolds
\citep{Masci2015,Monti2016,BronsteinEtAl}
and graphs \citep{BrunaZaremba2014,HenaffLeCun2015}.
In the context of CNNs on the sphere, \citet{SphericalCNN2018} realized the advantage of using
``Fourier space'' activations,
i.e., expressing the activations of neurons in a basis defined by the irreducible representations of the
underlying symmetry group (see also \citep{EstevesSph}), and these ideas were later generalized
to the entire \m{\textrm{SE}(3)} group \citep{Weiler}.
\citet{EquivarianceICML18} gave a complete characterization of what operations are allowable in
Fourier space neural networks to preserve covariance, and Cohen et al.\ generalized the framework
even further to arbitrary gauge fields \citep{CohenGauge}.
There have also been some recent works
where even the nonlinear part of the neural network's operation is performed in Fourier space:
independently of each other, \citep{Thomas2018} and \citep{Nbody2018arxiv} were the first to
use the Clebsch--Gordan transform inside rotationally covariant neural networks for learning
physical systems, while \citep{KLT2018} showed that in spherical CNNs the Clebsch--Gordan
transform is sufficient to serve as the sole source of nonlinearity.
The \emph{Cormorant} neural network architecture proposed in the present paper combines some of the
insights gained from the various force field and potential learning efforts with the emerging theory of
Fourier space covariant/equivariant neural networks.
The important point that we stress in the following pages is that
by setting up the network in such a way that each neuron
corresponds to an actual set of physical atoms, and that each activation is covariant to symmetries
(rotation and translation), we get a network in which the ``laws'' that individual
neurons learn resemble known physical interactions.
Our experiments show that this generality pays off in
terms of performance on standard benchmark datasets.
\ignore{
Assume that the potential attached to atom \m{i} is \m{\phi_i(\sseq{\h {\V r}}{k})},
with \m{\h{\V r_j}=\V r_{p_j}\!\<-\V r_i}, where \m{\V r_i} is the position vector of atom \m{i} and
\m{\V r_{p_j}} is the position vector of its \m{j}'th neighbor.
The total force experienced by atom \m{i} is then simply the negative gradient
\m{F_i=-\nabla_{\!\small \V r_i} \phi_i(\sseq{\h {\V r}}{k})}.
Classically, in molecular dynamics \m{\phi_i} is usually given in terms of a closed form formula with a few tunable
parameters. Popular examples of such so-called empirical potentials (empirical force fields)
include the CHARMM models \citep{CHARMM1983,CHARMM2009} and others.
Empirical potentials are fast to evaluate but are crude
models of the quantum interactions between atoms,
limiting the accuracy of molecular simulation.
A little over ten years ago, machine learning entered this field, promising
to bridge the gap between the quantum and classical worlds by \emph{learning}
the aggregate force on each atom as a function of the positions of its neighbors from a relatively
small number of DFT calculations \citep{Behler2007fe}.
In the last few years there has been a veritable explosion in the amount of activity in
machine learned atomic potentials (MLAP),
and molecular dynamics simulations based on this approach
are starting to yield results that outperform other methods
\citep{Bartok2010wd,Behler2015gv,Shapeev2015,Chmiela2016a,Zhang2017,Schutt2017}.
Much of the arsenal of present day machine learning algorithms has been applied to the MLAP problem,
from genetic algorithms, through kernel methods, to neural networks.
However, rather than the statistical
details of the specific learning algorithm, often what is critically important for
problems of this type
is the representation of the atomic environment, i.e., the choice of learning features
that the algorithm is based on.
This situation is by no means unique in the world of applied machine learning:
in computer vision and speech recognition, in particular, there is a rich literature
of such representational issues.
What makes the situation in Physics applications somewhat special is the presence of
constraints and invariances that the representation must satisfy not just in an approximate,
but in the \emph{exact} sense.
As an example, one might consider rotation invariance.
If rotation invariance is not fully respected by an image recognition system, some objects
might be less likely to be accurately detected in certain orientations than in others.
In a molecular dynamics setting, however, using a potential that is not fully rotationally invariant
would not just degrade accuracy, but would likely lead to entirely unphysical molecular trajectories.
\subsection{Fixed vs.\:learned representations.}
Similarly to other branches of machine learning, in recent years the MLAP community has been shifting from
fixed input features towards representations learned from the data itself,
in particular, using ``deep'' neural networks to represent atomic enviroments.
Several authors have found that certain concepts from the mainstream neural networks literature,
such as convolution and equivariance, can be successfully repurposed to this domain.
In fact, the analogy with computer vision is more than just skin deep.
In both domains two competing objectives are critical to success:
\begin{compactenum}[~~1.]
\item
The ability to capture structure in the input data at multiple different length scales,
i.e., to construct a \emph{multiscale} representation of the input image or the atomic environment.
\item
The above mentioned invariance property with respect to spatial transformations, including
translations, rotations, and possibly scaling.
\end{compactenum}
There is a rich body of work on addressing these objectives in the neural networks
literature. One particularly attractive approach is
the \emph{scattering networks} framework of Mallat and coworkers,
which, at least in the limit of an infinite number of neural network layers,
provides a representation of functions that is both
globally invariant with respect to symmetries and Lipschitz with respect to warpings
\citep{Mallat2012,Hirn2017}.
Inspired by recent work on neural networks for representing graphs and other structured objects
by covariant compositional neural architectures \citep{CompNetsArxiv18},
in this paper we take the idea of learnable multiscale representations one step further,
and propose \m{N}--body networks,
a neural network architecture where \emph{the individual ``neurons''
correspond to physical subsystems endowed with their own internal state}.
The structure and behavior of the resulting model
follows the tradition of coarse graining and representation theoretic ideas in Physics, and
provides a learnable and multiscale representation of the atomic environment that is fully
covariant to the action of the appropriate symmetries.
However, the scope of the underlying ideas is significantly broader, and we believe that \m{N}--body networks
will also find application in modeling other types of many-body Physical systems, as well.
An even more general contribution of the present work is that it shows how the machinery of group
representation theory, specifically the concept of Clebsch--Gordan decompositions, can be used to design
neural networks that are covariant to the action of a compact group yet are computationally efficient.
This aspect is related to the recent explosion of interest in generalizing the notion of convolutions to graphs
\citep{Niepert2016,Defferrard2016,Duvenaud2015,Li2016,Riley2017,CompNetsArxiv18},
manifolds \citep{Monti2016, Masci2015},
and other domains \citep{Bruna2013,SphericalCNN2018},
as well as the question of generalizing the concept of equivariance (covariance) in general
\citep{Cohen2016,Cohen2017,EquivarianceArxiv18}.
Several of the above works employed generalized Fourier representations of one type or another,
but to ensure equivariance the nonlinearity was always applied in the ``time domain''. Projecting
back and forth between the time domain and the frequency domain is a major bottleneck,
which we can eliminate because the Clebsch--Gordan
transform allows us to compute one type of nonlinearity, tensor products, entirely in the Fourier domain.
}
\section{CORMORANT:\m{\:} COvaRiant MOleculaR Artificial Neural neTworks}\label{sec: cormorant}
The goal of using ML in molecular problems is not to encode known physical laws,
but to provide a platform for learning interactions from data that cannot easily be captured in a
simple formula.
Nonetheless, the mathematical structure of known physical laws, like those discussed in the previous sections,
give strong hints about how to represent physical interactions in algorithms.
In particular, when using machine learning to learn molecular potentials or similar
rotation and translation invariant physical quantities, it is essential to make sure that the algorithm
respects these invariances.
Our Cormorant neural network has invariance to rotations baked into its architecture in a way that
is similar to the physical equations of the previous section:
the internal activations are all spherical tensors, which are then
combined at the top of the network in such a way as to guarantee that the final output is a scalar
(i.e., is invariant).
However, to
allow the network to learn interactions that are more complicated than classical interatomic
forces, we allow each neuron to output not just a single spherical tensor, but a combination
of spherical tensors of different orders.
We will call an object consisting of \sm{\tau_0} scalar components, \m{\tau_1} components transforming as first order
spherical tensors, \m{\tau_2} components transforming as second order spherical tensors, and so on,
an \m{\SO(3)}--\emph{covariant vector of type} \sm{(\tau_0,\tau_1,\tau_2,\ldots)}.
The output of each neuron in Cormorant is an \m{\SO(3)}--vector of a fixed type.
\begin{definition}
We say that \m{F} is an \m{\SO(3)}-covariant vector of type
\sm{\V \tau=(\tau_0,\tau_1,\tau_2,\ldots,\tau_L)} if it can be written as a collection of
complex matrices \sm{F_0,F_1,\ldots,F_L}, called its \emph{isotypic parts},
where each \m{F_\ell} is a matrix of size \m{(2\ell\<+1) \<\times \tau_\ell} and transforms under rotations
as \m{F_\ell\mapsto D^\ell(\V R)\, F_\ell}.
\end{definition}
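Concretely, an \m{\SO(3)}-vector of type \m{\V\tau} can be stored as a list of complex matrices of the stated shapes. The sketch below is only an illustration of this shape convention; the helper name \texttt{make\_so3\_vector} is ours and not part of the released code.

```python
import numpy as np

def make_so3_vector(tau, seed=None):
    """Random SO(3)-vector of type tau = (tau_0, ..., tau_L):
    one complex (2*l + 1) x tau_l matrix per isotypic part."""
    rng = np.random.default_rng(seed)
    return [rng.standard_normal((2 * l + 1, t))
            + 1j * rng.standard_normal((2 * l + 1, t))
            for l, t in enumerate(tau)]

tau = (16, 16, 16, 16)          # e.g. n_c = 16 channels for each l = 0..3
F = make_so3_vector(tau, seed=0)
# Each isotypic part F_l has shape (2*l + 1, tau_l), as in the definition.
for l, part in enumerate(F):
    assert part.shape == (2 * l + 1, tau[l])
```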
The second important feature of our architecture is that each neuron corresponds to
either a single atom or a set of atoms forming a physically meaningful subset of the system at hand,
for example all atoms in a ball of a given radius.
This condition helps encourage the network to learn physically meaningful and interpretable
interactions.
The high level definition of \emph{Cormorant} nets is as follows.
\begin{definition} \label{def:cormorant}
Let \m{\Scal} be a molecule or other physical system consisting of \m{N} atoms.
A ``Cormorant'' covariant molecular neural network for \m{\Scal} is a feed forward
neural network consisting of \m{m} neurons \m{\sseq{\mathfrak{n}}{m}}, such that
\vspace{-4pt}
\begin{enumerate}[~~C1.]
\item Every neuron \m{\mathfrak{n}_i} corresponds to some subset \m{\Scal_i} of the atoms.
In particular, each input neuron corresponds to a single atom. Each output neuron
corresponds to the entire system \m{\Scal}.
\item The activation of each \m{\mathfrak{n}_i} is an \m{\SO(3)}-vector of a fixed type
\m{\V \tau_{\!i}}.
\item The type of each output neuron is \sm{\V \tau_{\!\textrm{out}}\<=(1)}, i.e., a scalar.~\footnote{Cormorant can also learn targets that are arbitrary $\textrm{SO(3)}$-vectors; we restrict to scalars here to simplify the exposition.}
\end{enumerate}
\end{definition}
Condition (C3) guarantees that whatever function a \emph{Cormorant} network
learns will be invariant to global rotations. Translation invariance is
easier to enforce simply by making sure that the interactions represented by individual neurons
only involve relative distances.
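This mechanism is easy to check numerically: pairwise distances are unchanged by any rigid translation of the whole system, so any function that sees the positions only through the \m{r_{i,j}} is translation invariant by construction. A minimal sketch:

```python
import numpy as np

def pairwise_distances(pos):
    """All pairwise distances r_ij for an (N, 3) array of atomic positions."""
    return np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)

rng = np.random.default_rng(4)
pos = rng.standard_normal((7, 3))        # atomic positions
t = np.array([0.5, -1.0, 2.0])           # an arbitrary global translation

# Relative distances are invariant under the translation, so a network whose
# interactions depend on positions only through r_ij inherits the invariance.
assert np.allclose(pairwise_distances(pos), pairwise_distances(pos + t))
```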
\subsection{Covariant neurons}
\label{sec:cormorant_neurons}
The neurons in our network must be such that if each of their inputs is
an \m{\SO(3)}--covariant vector then so is their output.
Classically, neurons perform a simple linear operation such as \m{\x\mapsto W\x+\V b},
followed by a nonlinearity like a ReLU.
In convolutional neural nets the weights are tied together in a specific way which
guarantees that the activation of each layer is covariant to the action of global translations.
\citet{EquivarianceICML18} discuss the generalization of convolution to the action of compact groups
(such as, in our case, rotations) and prove that the only possible \emph{linear} operation
that is covariant with the group action is, in terms of \m{\SO(3)}--vectors,
multiplying each \m{F_\ell} matrix from the right by some matrix \m{W} of learnable weights.
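For an \m{\ell=1} part in a real basis, where rotations act by ordinary \m{3\times 3} rotation matrices, this covariance is easy to verify numerically: acting with the rotation from the left commutes with mixing channels from the right. The sketch below is illustrative only; the rotation matrix stands in for the Wigner matrix \m{D^1(R)}.

```python
import numpy as np

rng = np.random.default_rng(1)

# An l = 1 part in a real basis: a 3 x n_c matrix; rotations act from the left.
F1 = rng.standard_normal((3, 16))
W = rng.standard_normal((16, 16))        # learnable channel-mixing weights

# A rotation about the z-axis, standing in for D^1(R).
theta = 0.7
D = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

# Mixing channels from the right commutes with rotating from the left,
# so F -> F @ W is a covariant linear operation.
assert np.allclose(D @ (F1 @ W), (D @ F1) @ W)
```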
For the nonlinearity, one option would be to express each spherical tensor as a function on
\m{\SO(3)} using an inverse \m{\SO(3)} Fourier transform, apply a pointwise nonlinearity, and then
transform the resulting function back into spherical tensors. This is the approach taken in
e.g., \citep{SphericalCNN2018}. However, in our case this would be prohibitively
costly, and would introduce quadrature errors by virtue of having to interpolate on the group,
ultimately degrading the network's covariance.
Instead, taking yet another hint from the structure of physical interactions, we use
the Clebsch--Gordan transform introduced in \ref{sec: gen interaction} as a nonlinearity.
The general rule for taking the CG product of two \m{\SO(3)}--parts
\sm{F_{\ell_1}\tin\CC^{(2\ell_1+1)\times n_1}}
and \sm{G_{\ell_2}\tin \CC^{(2\ell_2+1)\times n_2}} gives a collection of
parts \sm{[F_{\ell_1}\otimes_{\rm cg} G_{\ell_2}]_{\abs{\ell_1-\ell_2}},\ldots, [F_{\ell_1}\otimes_{\rm cg} G_{\ell_2}]_{\ell_1+\ell_2}}
with columns
\begin{equation}\label{CG parts}
\sqbbig{[F_{\ell_1}\otimes_{\rm cg} G_{\ell_2}]_\ell}_{\ast,(i_1,i_2)} =
C_{\ell_1,\ell_2,\ell} \br{[F_{\ell_1}]_{\ast,i_1} \otimes [G_{\ell_2}]_{\ast,i_2}},
\end{equation}
i.e., every column of \sm{F_{\ell_1}} is separately CG-multiplied with every column of \sm{G_{\ell_2}}.
The \m{\ell}'th part of the CG-product of two \m{\SO(3)}--vectors consists of the concatenation of all
\m{\SO(3)}--part matrices with index \m{\ell} coming from multiplying each part of \m{F}
with each part of \m{G}:
\[ [F\otimes_{\rm cg} G]_\ell=\bigoplus_{\ell_1} \bigoplus_{\ell_2} [F_{\ell_1} \otimes_{\rm cg} G_{\ell_2}]_\ell.\]
Here and in the following \m{\oplus} denotes the appropriate concatenation of vectors and matrices.
In Cormorant, however, as a slight departure from \rf{CG parts},
to reduce the quadratic blow-up in the number of columns, we always have \m{n_1\<=n_2}
and use the restricted ``channel-wise'' CG-product,
\[
\sqbbig{[F_{\ell_1}\otimes_{\rm cg} G_{\ell_2}]_\ell}_{\ast,i} =
C_{\ell_1,\ell_2,\ell} \br{[F_{\ell_1}]_{\ast,i} \otimes [G_{\ell_2}]_{\ast,i}},
\]
where each column of \m{F_{\ell_1}} is only mixed with the corresponding column of \m{G_{\ell_2}}.
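For intuition, in a real Cartesian basis the channel-wise CG product of two \m{\ell_1\<=\ell_2\<=1} parts reduces to familiar operations: the \m{\ell=0} output of each channel is the dot product of the corresponding columns, and the \m{\ell=1} output their cross product (the \m{\ell=2} symmetric traceless part is omitted for brevity). The sketch below is illustrative only; the actual network works with complex spherical tensors and explicit CG coefficients.

```python
import numpy as np

def cg_product_l1_l1(F, G):
    """Channel-wise CG product of two l = 1 parts (shape (3, n_c)) in a real
    Cartesian basis: l = 0 output is the column-wise dot product, l = 1 output
    is the column-wise cross product; the l = 2 part is omitted."""
    out0 = np.einsum('ic,ic->c', F, G)[None, :]    # (1, n_c), l = 0 part
    out1 = np.cross(F, G, axis=0)                  # (3, n_c), l = 1 part
    return out0, out1

rng = np.random.default_rng(2)
F, G = rng.standard_normal((3, 4)), rng.standard_normal((3, 4))

# Under a proper rotation R, the l = 0 part must be invariant and the
# l = 1 part must rotate covariantly.
t = 0.3
R = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
s0, s1 = cg_product_l1_l1(F, G)
r0, r1 = cg_product_l1_l1(R @ F, R @ G)
assert np.allclose(r0, s0)          # scalar part: invariant
assert np.allclose(r1, R @ s1)      # vector part: covariant
```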
We note that similar Clebsch--Gordan nonlinearities were used in \citep{KLT2018}, and that
the Clebsch--Gordan product is also an essential part of Tensor Field Networks \citep{Thomas2018}.
\subsection{One-body and two-body interactions}
\label{sec:n_atom_interactions}
As stated in Definition \ref{def:cormorant}, the covariant neurons in a Cormorant net correspond to different
subsets of the atoms making up the physical system to be modeled. For simplicity, in our present architecture
there are only two types of neurons: those that correspond to individual atoms and those that correspond
to pairs. For a molecule consisting of \m{N} atoms, each layer \m{s=0,1,\ldots, S} of the covariant
part of the network has \m{N} neurons corresponding to the atoms and \m{N^2} neurons corresponding
to the \m{(i,j)} atom pairs. By loose analogy with graph neural networks, we call the corresponding
\m{F_i^s} and \m{g^s_{i,j}} activations vertex and edge activations, respectively.
In accordance with the foregoing, each \m{F_i^s}
activation is an \m{\SO(3)}--vector consisting of \m{L\<+1}
distinct parts \sm{(F_i^{s,0},F_i^{s,1},\ldots, F_i^{s,L})},
i.e., each \m{F_i^{s,\ell}} is a
\m{(2\ell+1)\times \tau^s_\ell} dimensional complex matrix that transforms under rotations as
\m{F_i^{s,\ell}\mapsto D^\ell(R)\, F_i^{s,\ell}}.
The different columns of these matrices are regarded as the different \emph{channels} of the network,
because they fulfill a similar role to channels in conventional convolutional nets.
The \m{g^{s}_{i,j}} edge activations also break down into parts
\sm{(g_{i,j}^{s,0},g_{i,j}^{s,1},\ldots, g_{i,j}^{s,L})}, but these are invariant under rotations.
Again for simplicity, in the version of Cormorant that we used in our experiments \m{L} is the same
in every layer (specifically \m{L\<=3}), and the number of channels is also
independent of both \m{s} and \m{\ell}, specifically, \m{\tau^s_\ell\<\equiv n_c\<=16}.
The actual form of the vertex activations captures ``one-body interactions''
propagating information from the previous layer related to the \emph{same} atom and
(indirectly, via the edge activations) ``two-body interactions''
capturing interactions between \emph{pairs} of atoms:
\begin{equation}\label{eq: vertex activations}
F^{s}_{i}=
\Big[
\underbrace{F^{s-1}_i \oplus \big(F^{s-1}_{i}\otimes_{\rm cg} F^{s-1}_{i}\big)}_{\text{one-body part}} \oplus
\underbrace{\Big( \sum_{j} G_{i,j}^{s} \otimes_{\rm cg} F^{s-1}_{j} \Big)}_{\text{two-body part}}\Big]\cdot
W^{\text{vertex}}_{s,\ell}.
\end{equation}
Here \m{G_{i,j}^{s}} are \m{\SO(3)}--vectors arising from the edge network.
Specifically, \m{G_{i,j}^{s,\ell}=g_{i,j}^{s,\ell}\, Y^\ell(\V {\h r}_{i,j})},
where \m{Y^\ell(\V {\h r}_{i,j})} are the spherical harmonic
vectors capturing the relative position of atoms \m{i} and \m{j}.
The edge activations, in turn, are defined
\begin{equation}
g^{s,\ell}_{i,j}={\,\mu^s_{}(r_{i,j})\: \sqbBig{\brbig{\,g^{s-1,\ell}_{i,j}\oplus
\brbig{F^{s-1}_{i}\cdot F^{s-1}_{j}}\oplus
\eta^{s,\ell}_{}(r_{i,j})\,} \, W^{\text{edge}}_{s,\ell}\,}}
\end{equation}
where we made the \m{\ell=0,1,\ldots,L} irrep index explicit.
As before, in these formulae, \m{\oplus} denotes concatenation over the channel index \m{c},
\m{\eta^{s,\ell}_{c}(r_{i,j})}
are learnable radial functions, and \sm{\mu^s_{c}(r_{i,j})} are learnable cutoff functions limiting the
influence of atoms that are farther away from atom \m{i}.
The learnable parameters of the network are the \m{\cbrN{W^{\text{vertex}}_{s,\ell}}} and
\m{\cbrN{W^{\text{edge}}_{s,\ell}}} weight matrices.
Note that the \sm{F^{s-1}_{i}\cdot F^{s-1}_{j}} dot product term is the only term
in these formulae responsible for the interaction between different atoms, and that this term always appears in conjunction with the
\m{\eta^{s,\ell}_{c}(r_{i,j})} radial basis functions and \sm{\mu^s_{c}(r_{i,j})} cutoff functions
(as well as the \m{\SO(3)}--covariant spherical harmonic vector), making sure that the interaction scales
with the distance between the atoms.
More details of these activation rules are given in the Supplement.
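To make the structure of these update rules concrete, the following deliberately stripped-down sketch keeps only \m{\ell=0} (scalar) channels, for which the CG product degenerates to an elementwise product and \m{\oplus} to concatenation over channels. The function name, the Gaussian radial function, and the hard cutoff are our simplifications for illustration, not the released implementation.

```python
import numpy as np

def cormorant_layer_l0(F, pos, Wv, We, width=1.0, cutoff=3.0):
    """Scalar (l = 0 only) caricature of one Cormorant layer.
    F:   (N, n_c) per-atom scalar activations
    pos: (N, 3) atomic positions
    Wv:  (3*n_c, n_c) vertex weights;  We: (n_c + 1, n_c) edge weights."""
    N, n_c = F.shape
    r = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)  # (N, N)
    # Edge network: invariant pair features from the channel-wise product
    # F_i * F_j and a radial basis, mixed by We and damped by a cutoff.
    pair = np.concatenate([F[:, None, :] * F[None, :, :],           # (N, N, n_c)
                           np.exp(-(r / width) ** 2)[..., None]], axis=-1)
    g = (r < cutoff)[..., None] * (pair @ We)                       # (N, N, n_c)
    # Vertex network: one-body terms plus a two-body sum over neighbours.
    one_body = np.concatenate([F, F * F], axis=-1)   # F  and  F (x)_cg F
    two_body = np.einsum('ijc,jc->ic', g, F)         # sum_j g_ij * F_j
    return np.concatenate([one_body, two_body], axis=-1) @ Wv

rng = np.random.default_rng(3)
N, n_c = 5, 4
F = rng.standard_normal((N, n_c))
pos = rng.standard_normal((N, 3))
Wv = rng.standard_normal((3 * n_c, n_c))
We = rng.standard_normal((n_c + 1, n_c))
out = cormorant_layer_l0(F, pos, Wv, We)
# Translation invariance: positions enter only through relative distances.
assert np.allclose(out, cormorant_layer_l0(F, pos + 1.23, Wv, We))
# Permutation equivariance: relabeling atoms permutes the output rows.
perm = rng.permutation(N)
assert np.allclose(out[perm], cormorant_layer_l0(F[perm], pos[perm], Wv, We))
```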
\ignore{
In order to motivate our Cormorant architecture, let's return to the multipole problem above.
We start with Eq.~\rf{eq:multipole-aggregation}, which demonstrates how to construct a single moment from a set of charges.
We denote this operation a ``one-body'' term, as it creates a new representation from a linear operation over
input representations (in this case, the charges $q_i$.) On the other hand, the electrostatic energy in
Eq.~\rf{eq:multipole-energy} is a ``two-body'' term, as it arises from the interaction between two different input representations, $Q^{A}_{\ell}$ and $Q^{B}_{\ell^\prime}$.
We generalize this analogy to propose a framework for designing our Cormorant:
we define an $n$-atom term as the Clebsch-Gordan product of $n$ $\SO(3)$-vector activations,
which we henceforth denote as $F_i$ (instead of $Q_i$ as above) following the convention in neural networks.
For the purposes of this manuscript we start by considering activations that have the form
\begin{equation}
F^{s+1}_{i} = \Big(\bigoplus_{n=1}\Phi_{i}^{\left(n\right)}\left(\left\{ F_{j} \right\} \right)\Big) \cdot W
\end{equation}
where
$\Phi_{i}^{\left(n\right)}\left(\left\{ F_{j}\right\} \right)$
is a general covariant $n$-atom interaction term for the activations $\left\{ F_{j}\right\}_{j \in \mathcal{S}_i}$.
The $n$-atom interactions for atom $i$ are constructed by summing over all possible paths $i \rightarrow j_{1}\rightarrow\ldots\rightarrow j_{n}$ of length $n$ which start atom $i$ and jump between atoms $j_k \in \mathcal{S}_i$. For each step in the path, we CG-multiply by $F_{j_k}$, along with a $\SO(3)$-vector transition ``amplitude'' $\Upsilon^{(n)}_{jj'}$:
\begin{equation}
\Phi_{i}^{\left(n\right)}\left(\left\{ F_{j} \right\} \right)=
\bigoplus_{m=0}^n
\bigoplus_{\{i_1, \ldots, i_m \} \subset \{1, \ldots, n\}}
\sum_{\substack{{j_1, \ldots, j_n} \in \mathcal{S}_i \\
j_{i_1} = \ldots = j_{i_m}=i}}
\bigotimes_{k=0}^{n-1}\left(\Upsilon_{j_{k}j_{k+1}}^{\left(n\right)}\otimes F_{j_{k+1}}\right)
\end{equation}
This form ensures that interactions are permutation invariant, translation invariant,
and rotationally covariant.
Here, we focus on a few key points:
\begin{enumerate}[~~1.]
\item We are building a representation of atom $i$ and atoms $j \in \mathcal{S}_{i}$ in its local environment. We therefore use the direct sum when $j_k = i$, and a normal sum otherwise.
\item The form of $\Upsilon^{(n)}_{jj^\prime}$ can be different for each $n$, and is constrained by symmetry, and unless otherwise noted we chose $\Upsilon_{jj} = 1$.
\item If we require $\Upsilon^{(n)}_{jj^\prime}$ depend only on the relative position $\mathbf{r}_{jj'} = \mathbf{r}_j - \mathbf{r}_{j'}$ for $j \neq j^\prime$, then $\Upsilon^{(n)}_{jj^\prime}$ must be constructed only from linear combinations of tensor powers of $\mathbf{r}_{jj'}$. A decomposition of $\bigoplus_k\mathbf{r}_{jj'}^{\otimes k}$ into irreducibles generates terms proportional to $r_{jj'}^m Y^\ell(\hat{\mathbf{r}}_{jj^\prime})$.
We therefore chose the generic form $\Upsilon^{(n)}_{jj^\prime} = \Upsilon^{(n)}(\mathbf{r}_{jj^\prime}) = \bigoplus_{\ell=0}^{L} \mathcal{F}^\ell(r_{ij}) Y^\ell(\hat{\mathbf{r}}_{jj^\prime})$, where $\mathcal{F}_c^\ell(r)$ are a set of (possibly learnable) radial basis functions.
\item The one-body interaction $\Phi_{i}^{\left(1\right)} = F_i \oplus \big(\sum_j \Upsilon^{(1)}_{ij} \otimes_{\rm cg} F_j \big)$ contains a component analogous to the radial filters of
\citep{Thomas2018}.
\item Some of the information in higher-order interactions is induced from lower order terms. For example, $\Phi_{i}^{\left(2\right)} = \big( \Phi_{i}^{\left(1\right)} \big)^{\otimes_{\rm cg} 2} \oplus \tilde{\Phi}_{i}^{\left(2\right)}$, where $\tilde{\Phi}_{i}^{\left(2\right)}$ only contains terms of the form $F_{j_1} \otimes_{\rm cg} F_{j_2}$, where $j_1 \neq j_2 \neq i$.
\end{enumerate}
We use the $n$-atom interactions $\Phi_{i}^{\left(n\right)}$ to design the CG layers in our \emph{Cormorant} network.
Note that while these can be structurally identified with $n$-atom interactions, we encourage the reader to not take the analogy too far. Each CG layer serves two purposes: (1) to build up a representation for an atom's local environment, and (2) to generate interactions between representations. Lower CG layers likely serve to build up a good representation. Only at higher layers, when the features $F_i$ are well constructed, is it likely that a clear mapping to physical degrees of freedom becomes possible. We leave this connection to future work.
\subsection{Implementation of $\textrm{SO(3)}$-vector layers}
The CG layers we chose for our implementation of Cormorant were based upon $n \leq 3$-atom interaction terms. Due to computational limitations, we considered only a subset of the general $n \leq 3$ form above. There is an analogy between these interactions and a generalization of Message Passing Neural Networks. In the supplement, we discuss the general form of the CG layer we implemented, along with the remaining details of our network architecture.
}
\subsection{Overall structure and comparison with other architectures}
In addition to the covariant neurons described above, our network also needs neurons to compute
the input featurization and the final output after the covariant layers. Thus, in total,
a Cormorant network consists of three distinct parts:
\begin{compactenum}[~~1.]
\item An input featurization network $\{F^{s=0}_j\} \leftarrow \mathrm{INPUT}(\{Z_i, r_{i,j}\})$ that
operates only on atomic charges/identities and (optionally) a scalar function of relative positions $r_{i,j}$.
\item An $S$-layer network $\{ F^{s+1}_i \} \leftarrow \mathrm{CGNet}(\{ F^s_i \})$ of covariant activations
$F^{s}_i$, each of which is a $\SO(3)$-vector of type $\tau^s_i$.
\item A rotation invariant network at the top $y \leftarrow \mathrm{OUTPUT}(\bigoplus_{s=0}^{S} \{F^s_i\})$
that constructs scalars from the activations $F^s_i$, and uses them to predict a regression target $y$.
\end{compactenum}
We leave the details of the input and output featurization to the Supplement.
A key difference between Cormorant and other recent covariant networks (Tensor Field Networks~\citep{Thomas2018} and \m{\textrm{SE}(3)}-equivariant networks~\citep{Weiler}) is the use of Clebsch-Gordan non-linearities. The Clebsch-Gordan non-linearity results in a complete interaction of every degree of freedom in an activation. This comes at the cost of increased difficulty in training, as discussed in the Supplement. We further note that \m{\textrm{SE}(3)}-equivariant networks use a three-dimensional grid of points to represent data, and ensure both translational and rotational covariance (equivariance) of each layer. Cormorant on the other hand uses activations that are covariant to rotations, and strictly invariant to translations.
\section{The nature of physical interactions in molecules}\label{sec: physical}
Ultimately interactions in molecular systems arise from the
quantum structure of electron clouds around constituent atoms.
However, from a chemical point of view, effective atom-atom interactions
break down into a few simple classes based upon symmetry.
Here we review a few of these classes in the context of the multipole expansion, whose structure will inform the design of our neural network.
\paragraph{Scalar interactions.}
The simplest type of physical interaction is that between two particles that are pointlike and
have no internal directional degrees of freedom, such as spin or dipole moments. A classical example is
the electrostatic attraction/repulsion between two charges described by the Coulomb energy
\begin{equation}\label{eq: Coulomb}
V_{\rm Coulomb}=\ovr{4\pi\epsilon_0}\,\fr{q_A q_B}{\absN{\V r_{\!AB}}}\:.
\end{equation}
Here \m{q_A} and \m{q_B} are the charges of the two particles, \m{\V r_{\!A}} and \m{\V r_{\nts B}} are
their position vectors, \m{\V r_{\!AB}=\V r_{\!A}\<-\V r_{\nts B}}, and \m{\epsilon_0} is a universal constant.
Note that this equation already reflects symmetries: the fact that \rf{eq: Coulomb} only depends on the
\emph{length} of \m{\V r_{\!AB}} and not its direction or the position vectors individually
guarantees that the potential is invariant under both translations and rotations.
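As a quick numerical sanity check of this invariance (a standalone sketch in arbitrary units with $\epsilon_0=1$, not part of Cormorant itself), one can verify that the Coulomb energy is unchanged under a rigid rotation and translation of both particles:

```python
import numpy as np

def coulomb_energy(q_a, q_b, r_a, r_b, eps0=1.0):
    # V = q_A q_B / (4 pi eps0 |r_AB|), cf. Eq. (Coulomb)
    r_ab = np.asarray(r_a, float) - np.asarray(r_b, float)
    return q_a * q_b / (4.0 * np.pi * eps0 * np.linalg.norm(r_ab))

def random_rotation(rng):
    # orthogonalize a Gaussian matrix; flip a column if needed so det = +1
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1.0
    return q

rng = np.random.default_rng(0)
r_a, r_b = rng.normal(size=3), rng.normal(size=3)
R, t = random_rotation(rng), rng.normal(size=3)

e0 = coulomb_energy(1.0, -2.0, r_a, r_b)
e1 = coulomb_energy(1.0, -2.0, R @ r_a + t, R @ r_b + t)
assert np.isclose(e0, e1)  # invariant under the rigid motion
```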
\paragraph{Dipole/dipole interactions.}
One step up from the scalar case is the interaction between two dipoles.
In general, the electrostatic dipole moment of a set of \m{N} charged particles relative
to their center of mass \m{\V r} is just the first moment of their position vectors weighted by
their charges:
\[\V \mu=\sum_{i=1}^{N}q_i (\V r_i-\V r).\]
The dipole/dipole contribution to the electrostatic potential energy between two sets of particles
\m{A} and \m{B} separated by a vector \m{\V r_{\!AB}} is then given by
\begin{equation}\label{eq: dipole-dipole}
V_{d/d}=\ovr{4\pi\epsilon_0}\sqbbigg{\fr{\V\mu_A\cdot\V\mu_B}{\abs{\V r_{\!AB}}^3}-3\,
\fr{(\V \mu_A\cdot\V r_{\!AB})(\V \mu_B\cdot\V r_{\!AB})}{|\V r_{\!AB}|^5}}.
\end{equation}
One reason why dipole/dipole interactions are indispensable for capturing the energetics of molecules
is that most chemical bonds are polarized. However, dipole/dipole interactions also occur
in other contexts, such as the interaction between the magnetic spins of electrons.
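The same invariance can be checked numerically for Eq.~\rf{eq: dipole-dipole}. The sketch below (illustrative only, with $\epsilon_0=1$ and the center of mass taken as the unweighted centroid of the positions) confirms that the energy depends only on rotation-invariant combinations of \m{\V\mu_A}, \m{\V\mu_B}, and \m{\V r_{\!AB}}:

```python
import numpy as np

def dipole_moment(charges, positions):
    # first moment of the charges about the (unweighted) centroid
    positions = np.asarray(positions, float)
    center = positions.mean(axis=0)
    return np.sum(np.asarray(charges)[:, None] * (positions - center), axis=0)

def dipole_dipole_energy(mu_a, mu_b, r_ab, eps0=1.0):
    # Eq. (dipole-dipole) for two point dipoles separated by r_ab
    r = np.linalg.norm(r_ab)
    return (mu_a @ mu_b / r**3
            - 3.0 * (mu_a @ r_ab) * (mu_b @ r_ab) / r**5) / (4.0 * np.pi * eps0)

rng = np.random.default_rng(1)
mu_a, mu_b, r_ab = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # an orthogonal matrix

e0 = dipole_dipole_energy(mu_a, mu_b, r_ab)
e1 = dipole_dipole_energy(Q @ mu_a, Q @ mu_b, Q @ r_ab)
assert np.isclose(e0, e1)  # rotating all three vectors leaves V_{d/d} unchanged
```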
\paragraph{Quadrupole/quadrupole interactions.}
One more step up the multipole hierarchy is the interaction between quadrupole moments.
In the electrostatic case, the quadrupole moment is the second moment of the charge density
(corrected to remove the trace), described by the \emph{matrix}
\[\V \Theta=\sum_{i=1}^N q_i(3\tts \V r_i\V r_i^\top-\abs{\V r_i}^2\nts I).\]
Quadrupole/quadrupole interactions appear for example when describing the interaction between benzene rings,
but the general formula for the corresponding potential is quite complicated.
As a simplification,
let us only consider the special case when in some coordinate system aligned with the structure of
\m{A}, and at polar angle \m{(\theta_A,\phi_A)} relative to the vector \m{\V r_{\!AB}} connecting \m{A} and \m{B},
\m{\V \Theta_A} can be transformed into a form that is diagonal, with \m{[\Theta_{\nts A}]_{zz}\<=\vartheta_A}
and \sm{[\Theta_{\nts A}]_{xx}\<=[\Theta_{\nts A}]_{yy}\<=-\vartheta_A/2} \citep{stone1997theory}.
We make a similar assumption about the quadrupole moment of \m{B}.
In this case the interaction energy becomes
\begin{multline}\label{eq: quad-quad}
V_{q/q}=\fr{3}{4}\fr{\vartheta_A\vartheta_B}{4\pi\epsilon_0 \abs{\V r_{\!AB}}^5}
\big[ 1-5\cos^2\theta_A-5\cos^2\theta_B-15\cos^2\theta_A\cos^2\theta_B+\\2(4\cos\theta_A\cos\theta_B-
\sin\theta_A\sin\theta_B\cos(\phi_A\<-\phi_B))^2\big].
\end{multline}
Higher order interactions involve moment tensors of order 3,4,5, and so on.
One can appreciate that the corresponding formulae, especially when considering not just electrostatics
but other types of interactions as well (dispersion, exchange interaction, etc),
quickly become very involved.
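The moment tensors themselves remain simple to compute. A small standalone sketch builds \m{\V\Theta} and verifies the two properties used above, tracelessness and the \m{\V \Theta\mapsto \V R\ts \V \Theta \V R^\top} transformation rule:

```python
import numpy as np

def quadrupole_moment(charges, positions):
    # Theta = sum_i q_i (3 r_i r_i^T - |r_i|^2 I), the traceless second moment
    theta = np.zeros((3, 3))
    for q, r in zip(charges, np.asarray(positions, float)):
        theta += q * (3.0 * np.outer(r, r) - (r @ r) * np.eye(3))
    return theta

rng = np.random.default_rng(2)
charges = rng.normal(size=4)
positions = rng.normal(size=(4, 3))
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # an orthogonal matrix

theta = quadrupole_moment(charges, positions)
assert np.isclose(np.trace(theta), 0.0)                  # traceless by construction
theta_rot = quadrupole_moment(charges, positions @ R.T)  # rotate every particle
assert np.allclose(theta_rot, R @ theta @ R.T)           # second-order covariance
```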
\section{Spherical tensors and representation theory}
Fortunately, there is an alternative formalism for expressing molecular interactions, that of
spherical tensors, which makes the general form of physically allowable
interactions more transparent.
This formalism also forms the basis of
our Cormorant networks described in the next section.
The key to spherical tensors is understanding how physical quantities transform under rotations.
Specifically, in our case, under a rotation \m{\V R}:
\[q\longmapsto q\hspace{40pt}
\V\mu\longmapsto \V R\ts\V\mu\hspace{40pt}
\V \Theta\longmapsto \V R\ts \V \Theta \V R^\top\hspace{40pt}
\V r_{\!AB} \longmapsto \V R\,\V r_{\!AB}.
\]
Flattening \sm{\V\Theta} into a vector \sm{\wbar{\V\Theta}\tin\RR^9}, its transformation rule
can equivalently be written as \sm{\wbar{\V\Theta}\mapsto (\V R\<\otimes \V R)\, \wbar{\V\Theta}},
showing its similarity to the other three cases.
In general, a \m{k}'th order Cartesian moment tensor \sm{T^{(k)}\tin \RR^{3\times 3\times \ldots \times 3}} (or its flattened \sm{\wbar T{}^{(k)} \in \RR^{3^k}} equivalent)
transforms as
\sm{\wbar T{}^{(k)}\mapsto (\V R\<\otimes\V R\<\otimes\ldots\otimes\V R)\,\wbar T{}^{(k)}}.
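This flattening identity is easy to confirm numerically for \m{k=2} (a standalone check, using NumPy's row-major flattening convention):

```python
import numpy as np

rng = np.random.default_rng(3)
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # an orthogonal matrix
T = rng.normal(size=(3, 3))                   # a generic second-order Cartesian tensor

lhs = (R @ T @ R.T).flatten()      # rotate the tensor, then flatten
rhs = np.kron(R, R) @ T.flatten()  # flatten, then act with R (x) R
assert np.allclose(lhs, rhs)       # the two transformation rules agree
```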
Recall that given a group \m{G}, a \emph{representation} \m{\rho} of \m{G} is
a matrix valued function \m{\rho\colon G\to\CC^{d\times d}} obeying
\m{\rho(xy)=\rho(x)\rho(y)} for any two group elements \m{x,y\tin G}.
It is easy to see that \m{\V R}, and consequently \m{\V R\otimes\ldots\otimes\V R} are representations
of the three dimensional rotation group \m{\SO(3)}.
We also know that because \m{\SO(3)} is a compact group, it has a countable sequence of unitary
so-called irreducible representations (irreps), and,
up to a similarity transformation, any representation can be reduced to a direct sum of irreps.
In the specific case of \m{\SO(3)}, the irreps are called \emph{Wigner D-matrices} and
for any non-negative integer \m{\ell=0,1,2,\ldots} there is a single corresponding irrep \sm{D^\ell(\V R)},
which is a \m{(2\ell\<+1)} dimensional representation
(i.e., as a function, \sm{D^\ell\colon \SO(3)\to \CC^{(2\ell+1)\times(2\ell+1)}}).
The \m{\ell\<=0} irrep is the trivial irrep \m{D^0(\V R)\<=(1)}.
The above imply that there is a fixed unitary transformation matrix \m{C^{(k)}} which reduces
the \m{k}'th order rotation operator
into a direct sum of irreducible representations:
\[\underbrace{\V R\<\otimes\V R\<\otimes\ldots\otimes\V R}_k=C^{(k)}
\sqbBig{\bigoplus_\ell \bigoplus_{i=1}^{\tau_\ell} D^\ell(\V R)} {C^{(k)}}^\dag.
\]
Note that the transformation \sm{\V R\<\otimes\V R\<\otimes\ldots\otimes\V R} contains redundant copies of \m{D^\ell(\V R)}, which we denote as the multiplicities \m{\tau_\ell}. For our present purposes knowing the actual values of the \m{\tau_\ell}
is not that important, except that \m{\tau_k\<=1} and that
for any \m{\ell>k},~ \m{\tau_\ell\<=0}.
What is important is that \sm{\wbar T{}^{(k)}}, the vectorized form of the Cartesian moment tensor has a
corresponding decomposition \vspace{-6pt}
\begin{equation}\label{eq: Cartesian decomp}
\wbar T{}^{(k)}=C^{(k)} \sqbBig{\bigoplus_\ell \bigoplus_{i=1}^{\tau_\ell} Q_{\ell,i}}.
\end{equation}
This is nice, because using the unitarity of \sm{C^{(k)}}, it shows that
under rotations the individual \sm{Q_{\ell,i}} components transform \emph{independently}
as \sm{Q_{\ell,i}\mapsto D^\ell(\V R)\ts Q_{\ell,i}}.
What we have just described is a form of generalized Fourier analysis applied to
the transformation of Cartesian tensors under rotations.
For the electrostatic multipole problem it is particularly relevant, because it turns out
that in that case, due to symmetries of \sm{\wbar{T}{}^{(k)}},
the only nonzero \sm{Q_{\ell,i}} component of \rf{eq: Cartesian decomp} is the single one with \m{\ell\<=k}.
Furthermore, for a set of \m{N} charged particles (indexing its components \m{-\ell,\ldots,\ell})
\m{Q_\ell} has the simple form
\begin{equation}
[Q_\ell]_m=\br{\fr{4\pi}{2\ell\<+1}}^{1/2}\sum_{i=1}^{N} q_i\, (r_i)^\ell\:Y_\ell^m(\theta_i,\phi_i)
\hspace{60pt}m=-\ell,\ldots,\ell,\label{eq:multipole-aggregation}
\end{equation}
where \m{(r_i,\theta_i,\phi_i)} are the coordinates of the \m{i}'th particle in
spherical polars, and the \m{Y_\ell^m(\theta,\phi)} are the well known spherical harmonic functions.
\m{Q_\ell} is called the \m{\ell}'th \emph{spherical moment} of the charge distribution.
Note that while \sm{\wbar{T}{}^{(\ell)}} and \sm{Q_\ell} convey exactly the same information,
\sm{\wbar{T}{}^{(\ell)}} is a tensor with \sm{3^{\ell}} components, while \m{Q_\ell} is just a
\m{(2\ell\<+1)} dimensional vector.
Somewhat confusingly, in physics and chemistry any quantity \m{U} that transforms under rotations as
\sm{U\<\mapsto D^\ell(\V R)\ts U} is often called an (\m{\ell}'th order) \emph{spherical tensor},
despite the fact that in terms of its presentation \m{Q_\ell} is just a vector of \m{2\ell\<+1} numbers.
Also note that since \m{D^0(\V R)\<=(1)}, a zeroth order spherical tensor is just a scalar.
A first order spherical tensor, on the other hand, can be used to represent a spatial vector
\m{\V r\<=(r,\theta,\phi)} by setting \m{[U_1]_m=r\, Y_1^m(\theta,\phi)}.
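For \m{\ell=1} the spherical moment of Eq.~\rf{eq:multipole-aggregation} is just a change of basis of the Cartesian dipole moment. The sketch below (illustrative only; it hard-codes the \m{\ell=1} spherical harmonics in Cartesian form and takes moments about the origin) makes this explicit:

```python
import numpy as np

def spherical_moment_l1(charges, positions):
    # [Q_1]_m = sqrt(4 pi / 3) sum_i q_i r_i Y_1^m(theta_i, phi_i), m = -1, 0, +1,
    # written in Cartesian form:
    #   sqrt(4 pi / 3) r Y_1^{-1} =  (x - i y) / sqrt(2)
    #   sqrt(4 pi / 3) r Y_1^{0}  =  z
    #   sqrt(4 pi / 3) r Y_1^{+1} = -(x + i y) / sqrt(2)
    Q = np.zeros(3, dtype=complex)
    for q, (x, y, z) in zip(charges, np.asarray(positions, float)):
        Q[0] += q * (x - 1j * y) / np.sqrt(2.0)  # m = -1
        Q[1] += q * z                            # m =  0
        Q[2] += -q * (x + 1j * y) / np.sqrt(2.0) # m = +1
    return Q

charges = [1.0, -0.5, 0.25]
positions = [[0.1, 0.2, 0.3], [-0.4, 0.0, 0.5], [0.2, -0.3, 0.1]]
mu = sum(q * np.asarray(p) for q, p in zip(charges, positions))  # Cartesian dipole

Q1 = spherical_moment_l1(charges, positions)
assert np.isclose(Q1[1], mu[2])                                 # [Q_1]_0 = mu_z
assert np.isclose(Q1[0], (mu[0] - 1j * mu[1]) / np.sqrt(2.0))   # m = -1
assert np.isclose(Q1[2], -(mu[0] + 1j * mu[1]) / np.sqrt(2.0))  # m = +1
```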
\subsection{The general form of interactions}
\label{sec: gen interaction}
The benefit of the spherical tensor formalism is that it makes it very clear how each part of
a given physical equation transforms under rotations. For example, if \m{Q_\ell} and \sm{\widetilde Q_\ell} are two
\m{\ell}'th order spherical tensors, then \sm{Q_\ell^\dag \widetilde Q_\ell} is a scalar,
since under a rotation \m{\V R}, by the unitarity of the Wigner \m{D}-matrices,
\[Q_\ell^\dag \widetilde Q_\ell\longmapsto (D^\ell\nts(\V R)\, Q_\ell)^\dag\, (D^\ell\nts(\V R)\, \widetilde Q_\ell)=
Q_\ell^\dag\: (D^\ell\nts(\V R))^\dag \, D^\ell\nts(\V R)\: \widetilde Q_\ell=Q_\ell^\dag \widetilde Q_\ell.
\]
Even the dipole/dipole interaction \rf{eq: dipole-dipole}
requires a more sophisticated way of coupling spherical
tensors than this, since it involves non-trivial interactions between not just two, but three different quantities:
the two dipole moments \sm{\V \mu_{\!A}} and \sm{\V \mu_B} and the relative position vector
\sm{\V r_{\!AB}}.
Representing interactions of this type requires taking \emph{tensor products} of the constituent variables.
For example, in the dipole/dipole case we need terms of the form \sm{Q_{\ell_1}^A\otimes Q_{\ell_2}^B}.
Naturally, these will transform according to the tensor product of the corresponding irreps:
\[Q_{\ell_1}^A\<\otimes Q_{\ell_2}^B\mapsto (D^{\ell_1}\!(\V R)\<\otimes D^{\ell_2}\!(\V R))\,
(Q_{\ell_1}^A\<\otimes Q_{\ell_2}^B).\]
In general, \sm{D^{\ell_1\!}(\V R)\<\otimes D^{\ell_2\!}(\V R)} is \emph{not} an irreducible representation.
However it does have a well studied decomposition into irreducibles, called the \emph{Clebsch--Gordan}
decomposition:
\[D^{\ell_1\!}(\V R)\<\otimes D^{\ell_2\!}(\V R)=C_{\ell_1,\ell_2}^\dag
\sqbbigg{\:\bigoplus_{\ell=\abs{\ell_1-\ell_2}}^{\ell_1+\ell_2} D^\ell\nts(\V R)\,} C_{\ell_1,\ell_2}.\]
Letting \m{C_{\ell_1,\ell_2,\ell}\tin \CC^{(2\ell+1)\times(2\ell_1+1)(2\ell_2+1)}}
be the block of \m{2\ell\<+1} rows in \m{C_{\ell_1,\ell_2}} corresponding to the \m{\ell} component of
the direct sum, we see that
\m{C_{\ell_1,\ell_2,\ell}(Q_{\ell_1}^A\<\otimes Q_{\ell_2}^B)} is an \m{\ell}'th order spherical tensor.
In particular, given some other spherical tensor quantity \sm{U_\ell},
\[U_\ell^\dag \cdot C_{\ell_1,\ell_2,\ell}\cdot (Q_{\ell_1}^A\<\otimes Q_{\ell_2}^B)\]
is a scalar, and hence it is a candidate for being a term in the potential energy.
Note the similarity of this expression to the \emph{bispectrum} \citep{KakaralaPhD,Bendory},
which is an already established tool in the force field learning literature \citep{BartokPRB2013}.
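To make this concrete at the lowest non-trivial order: for \m{\ell_1\<=\ell_2\<=1} and \m{\ell\<=0}, the CG contraction is, up to normalization, \m{\sum_m (-1)^m\,[Q_1^A]_m\,[Q_1^B]_{-m}}, which for spherical components built from real Cartesian vectors reduces to the familiar rotation-invariant dot product. A minimal standalone sketch:

```python
import numpy as np

def to_spherical_l1(v):
    # Cartesian vector -> l = 1 spherical components, ordered m = -1, 0, +1
    x, y, z = v
    return np.array([(x - 1j * y) / np.sqrt(2.0), z, -(x + 1j * y) / np.sqrt(2.0)])

def cg_scalar_l1(a, b):
    # l = 0 component of a (x) b for two l = 1 spherical tensors
    # (up to a constant): sum_m (-1)^m a_m b_{-m}
    m = np.array([-1, 0, 1])
    return np.sum((-1.0) ** m * a * b[::-1])

rng = np.random.default_rng(4)
u, v = rng.normal(size=3), rng.normal(size=3)
s = cg_scalar_l1(to_spherical_l1(u), to_spherical_l1(v))
assert np.isclose(s, u @ v)  # the CG scalar recovers the Cartesian dot product
```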
Almost any rotation invariant interaction potential can be expressed in terms of iterated Clebsch--Gordan
products between spherical tensors.
In particular, the full electrostatic energy between two sets of charges \m{A} and \m{B}
separated by a vector \m{\V r=(r,\theta,\phi)} expressed in multipole form \citep{jackson_classical_1999} is
\begin{equation}
V_{AB}=\ovr{4\pi\epsilon_0} \sum_{\ell=0}^\infty \sum_{\ell'=0}^\infty
\sqrt{{2\ell+2\ell'}\choose{2\ell}}
\sqrt{\fr{4\pi}{2\ell\<+2\ell'+1}}\:
r^{-(\ell+\ell'+1)}\:
Y_{\ell+\ell'}(\theta,\phi)\, C_{\ell,\ell',{\ell+\ell'}}\:(Q^A_{\ell} \otimes Q^B_{\ell'}). \label{eq:multipole-energy}
\end{equation}
Note the generality of this formula: the \m{\ell\<=\ell'\<=1} case covers the dipole/dipole
interaction \rf{eq: dipole-dipole}, the \m{\ell\<=\ell'\<=2} case covers the quadrupole/quadrupole
interaction \rf{eq: quad-quad}, while the other terms cover every other possible type of
multipole/multipole interaction.
Magnetic and other types of interactions, including interactions that involve
3-way or higher order terms, can also be
recovered from appropriate combinations of tensor products and Clebsch--Gordan decompositions.
We emphasize that our discussion of electrostatics is only intended to illustrate the algebraic structure of interatomic interactions of any type, and is not restricted to electrostatics. In what follows, we will not explicitly specify what interactions the network will learn. Nevertheless, there are physical constraints on the interactions arising from symmetries, which we explicitly impose in our design of Cormorant.
\ignore{
\[D^\ell\colon \SO(3)\to \CC^{(2\ell+1)\times(2\ell+1)}.\]
Also recall that the \emph{spherical harmonics}
\m{\setofN{Y_\ell^m(\theta,\phi)}{\ell\<=0,1,2,\ldots,~m\<=-\ell,\ldots,\ell}}
are a basis for complex functions on the unit sphere \m{S^2}.
The fact that the number of
spherical harmonics for a given value of \m{\ell} is the same as the dimensionality of \m{D^\ell} is
not a coincidence: the Wigner D-matrices and the spherical harmonics are closely related, in particular
if \m{f} is a function on \m{S^2} and
\[f(\theta,\phi)=\sum_{l=0}^{\infty} \sum_{m=-\ell}^{\ell}\h f^\ell_m Y^\ell_m(\theta,\phi),\]
is its spherical harmonic expansion, then if
we apply a rotation \m{\V R} to \m{f}, the expansion coefficients for any given \m{\ell},
collected in a vector \m{\h f^\ell}, change to
\[\h {\V f}'^\ell=D^\ell(\V R)\,\h {\V f}^\ell.\]
}
\section{Architecture}
As discussed in the main text, our Cormorant architecture is constructed from three basic building blocks: (1) an input featurization that takes $(Z_i, \mathbf{r}_i)$ and outputs a scalar, (2) a set of covariant CG layers that update $F_i^{s}$ to $F_i^{s+1}$, and (3) a layer that takes the set of covariant activations $F_i^{s}$ and constructs a permutation- and rotation-invariant regression target.
\subsection{Notation}
Throughout this section, we will follow the notation of the main text, and denote a $\textrm{SO(3)}$-vector at layer $s$ by $F^{s} = (F^{s}_{0}, \ldots, F^{s}_{L})$ with maximum weight $L$. Each $\textrm{SO(3)}$-vector has corresponding type $\tau^{s}$, and lives in a representation space $F^{s} \in V^{s} =\bigoplus_{\ell=0}^{L^{s}} \bar{V}_{\ell}^{\tau_{\ell}^{s}}$, where $\bar{V}_{\ell} = \mathbb{C}^{(2\ell+1) \times 1}$ is the representation space for irreducible representation of $\textrm{SO(3)}$ with multiplicity 1. We will also introduce the vector space for the edge network $V_{\rm edge} ^s= \bigoplus_{\ell=0}^{L^{s}} \mathbb{C}^{\tau_{\ell}^{s}}$.
See Table~\ref{tab:symbols} for a more complete table of symbols used in the supplement and main text.
\subsection{Overall structure}
The Cormorant network is a function ${\rm CORMORANT}\left(\left\{ Z_{i},\mathbf{r}_{i}\right\} \right):\mathbb{Z}^{N}\times\mathbb{R}^{N\times3}\rightarrow\mathbb{R}$ that takes a set of $N$ charge-positions $\left\{ Z_{i},\mathbf{r}_{i}\right\} $ and outputs a single regression target. The network has the compositional form
\begin{equation}
{\rm CORMORANT}\left(\left\{ Z_{i},\mathbf{r}_{i}\right\} \right)={\rm OUTPUT}\left({\rm CGNet}\left({\rm INPUT}\left(\left\{ Z_{i},\mathbf{r}_{i}\right\} \right)\right)\right)
\end{equation}
and is constructed from three basic units:
\begin{enumerate}
\item ${\rm INPUT}\left(\left\{ Z_{i},\mathbf{r}_{i}\right\} \right):\mathbb{Z}^{N}\times\mathbb{R}^{N\times3}\rightarrow (\bar{V}_0)^N$ which takes the $N$ charge-position pairs and outputs $N$ sets of scalar feature vectors $c_{{\rm in}}$. (See section \ref{sec:input_featurization}.)
\item ${\rm CGNet}\left(\left\{ F_i,\mathbf{r}_{i}\right\} \right):(\bar{V}_0)^N \times\mathbb{R}^{N\times3}\rightarrow\bigoplus_{s=0}^{S} \left( V^{s} \right)^{N}$ takes the set of scalar features from ${\rm INPUT}\left(\left\{ Z_{i},\mathbf{r}_{i}\right\} \right)$, along with the set of positions for each atom, and outputs a $\textrm{SO(3)}$-vector for each level $s = 0, \ldots, S$ using Clebsch-Gordan operations. (See section \ref{sec:covariant_layers}.)
\item ${\rm OUTPUT}\left(\bigoplus_{s=0}^{S} ( V^{s} )^{N} \right)\rightarrow\mathbb{R}$ takes the output of ${\rm CGNet}$ above, constructs a set of scalars, and then constructs a permutation-invariant prediction that can be exploited at the top of the network. (See section \ref{sec:output_featurization}.)
\end{enumerate}
This design is organized in a modular way to separate the input featurization, the covariant $\textrm{SO(3)}$-vector layers, and the output regression tasks. Importantly, the ${\rm INPUT}$ and ${\rm OUTPUT}$ networks are different for GDB9 and MD17. However, the covariant $\textrm{SO(3)}$-vector layers ${\rm CGNet}$ were identical in design and hyperparameter choice. We include these designs and choices below.
\subsection{Input featurization}
\label{sec:input_featurization}
\subsubsection{MD-17}
For MD-17, the input featurization was determined by taking the tensor product $\tilde{F}_i = \mathrm{onehot}_i \otimes \vec{Z}_i$, where $\mathrm{onehot}_i$ is a one-hot vector determining which of $N_{\rm species}$ atomic species an atom is, and $\vec{Z}_i = (1, \tilde{Z}_i, \tilde{Z}_i^2)$, where $\tilde{Z}_i = Z_i / Z_{\rm max}$, and $Z_{\rm max}$ is the largest charge in the dataset. We then use a single learnable mixing matrix to convert this real vector with $3\times N_{\rm species}$ elements to a complex representation with $\ell = 0$ and $n_c$ channels (i.e., $\tau_i = (n_c)$).
We found that for MD-17 a more complex input featurization network was not significantly beneficial, and that this input parametrization was sufficiently expressive.
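This featurization can be sketched as follows (illustrative only; the trailing learnable mixing to $n_c$ complex channels is omitted, and the species-to-index mapping is an assumption of the sketch):

```python
import numpy as np

def md17_input_features(Z, species_index, n_species, Z_max):
    # onehot(species) (x) (1, Z~, Z~^2) with Z~ = Z / Z_max:
    # a real vector of length 3 * n_species
    onehot = np.zeros(n_species)
    onehot[species_index] = 1.0
    z = Z / Z_max
    return np.kron(onehot, np.array([1.0, z, z**2]))

# hypothetical example: Z = 8 mapped to species index 2 out of 4
f = md17_input_features(Z=8, species_index=2, n_species=4, Z_max=9)
assert f.shape == (12,)
assert np.allclose(f[6:9], [1.0, 8 / 9, (8 / 9) ** 2])  # only that species' block is nonzero
```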
\subsubsection{QM-9}
For the dataset QM-9, we used an input featurization based upon message passing neural networks. We start by creating the vector $\tilde{F}_i = \mathrm{onehot}_i \otimes \vec{Z}_i$ as defined in the previous section. Using this, a weighted adjacency matrix is constructed using a mask in the same manner as in the main text: $\mu_{ij} = \sigma((r_{\rm cut} - r_{ij}) / w)$, with learnable cutoff/width $r_{\rm cut}$/$w$ and $\sigma(x) = 1 / (1 + \exp(-x))$. This mask is used to aggregate neighbors: $\tilde{F}^{\rm agg}_i = \sum_j \mu_{ij} \tilde{F}_j$. The result is concatenated with $\tilde{F}_i$ and passed through an MLP with a single hidden layer of 256 neurons and $\mathrm{ReLU}$ activations, outputting a real vector of length $2\times n_c$. This is then resized to form a complex $\SO(3)$-vector composed of a single irrep of type $\tau_i = (n_c)$.
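The masked neighbor aggregation step can be sketched as follows (illustrative only; fixed rather than learned cutoff and width, the trailing MLP is omitted, and whether the sum includes the self term $j=i$ is an implementation detail that the sketch includes):

```python
import numpy as np

def masked_aggregate(features, positions, r_cut=2.0, width=0.5):
    # mu_ij = sigmoid((r_cut - r_ij) / width);  F_agg_i = sum_j mu_ij F_j
    positions = np.asarray(positions, float)
    r = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    mu = 1.0 / (1.0 + np.exp(-(r_cut - r) / width))
    return mu @ np.asarray(features, float)

features = np.eye(2)  # one indicator feature per atom
positions = [[0.0, 0.0, 0.0], [100.0, 0.0, 0.0]]  # far outside the cutoff
agg = masked_aggregate(features, positions)
# each atom only "sees" itself: mu_ii = sigmoid(r_cut / width), mu_ij ~ 0
assert np.allclose(agg[0], [1.0 / (1.0 + np.exp(-4.0)), 0.0])
```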
\subsection{Covariant $SO(3)$-vector layers}
\label{sec:covariant_layers}
For both datasets, the central covariant $SO(3)$-vector layers of our Cormorant are identical. In both cases, we used $S = 4$ layers with $L = 3$, followed by a single $SO(3)$-vector layer with $L = 0$. The number of channels of the input tensors at each level is fixed to $n_c = 16$, and similarly the set of weights $W$ reduces the number of channels of each irreducible representation back to $n_c = 16$.
\subsubsection{Overview}
The algorithm can be implemented as iterating over the function
$${\rm CGLayer}\left(g_{ij}^{s},F_{i}^{s},\mathbf{r}_{i,}\right): (V^{s}_{\rm edge})^{N\times N} \times\mathbb{R}^{N\times N\times 3} \times (V^s)^{N}\rightarrow(V^{s+1}_{\rm edge})^{N\times N} \times(V^{s+1})^{N}$$
where $g_{ij}^{s}\in (V_{\rm edge}^s)^{N \times N}$ is an edge network at level $s$ with $c_{s}$ channels for each $\ell\in\left[0,L\right]$, and $F_{i}^{s}\in (V^s)^{N}$ is an atom-state vector that lives in the representation space at level $s$.
The function $\left(g_{ij}^{s+1}, F_{i}^{s+1}\right)\leftarrow{\rm CGLayer}\left(g_{ij}^{s},F_{i}^{s},\mathbf{r}_{i}\right)$ is itself constructed in the following way:
\begin{itemize}
\item $g_{ij}^{s+1} \leftarrow{\rm EdgeNetwork}\left(g_{ij}^{s},\mathbf{r}_{ij},F_{i}^{s}\right)$
\item $G^{s+1}_{ij}\leftarrow{\rm Edge2Vertex}\left(g_{ij}^{s+1},Y^{\ell}\left(\hat{\mathbf{r}}_{ij}\right)\right)$
\item $F_{i}^{s+1}\leftarrow{\rm VertexNetwork}\left(G^{s+1}_{ij},F_{i}^{s}\right)$
\end{itemize}
where:
\begin{enumerate}
\item ${\rm EdgeNetwork}\left(g_{ij}^{s},\mathbf{r}_{ij},F_{i}^{s}\right):(V^{s}_{\rm edge})^{N\times N}\times\mathbb{R}^{N\times N\times3}\times(V^{s})^{N}\rightarrow(V^{s+1}_{\rm edge})^{N\times N}$ is a pair/edge network that combines the input pair matrix $g_{ij}^{s}$ at level $s$ with a position network $F_{ij,c}=F_{c}\left(\left|\mathbf{r}_{ij}\right|\right)$ and a matrix of dot products $d_{ij} \sim F_{i} \cdot F_{j}$, all of which will be defined below. This output is then used to construct a set of representations that will be used as the input to the ${\rm VertexNetwork}$ function below.
\item ${\rm Edge2Vertex}\left(g_{ij}^{s+1},Y_{ij}\right):(V^{s}_{\rm edge})^{N\times N}\times (V)^{N \times N}\rightarrow (V^s)^{N\times N}$ takes the product of the scalar pair network $g_{ij}^{s+1}$, with the $SO(3)$-vector of spherical harmonics $Y_{ij} = \bigoplus_{\ell=0}^{L}Y^{\ell}\left(\hat{\mathbf{r}}_{ij}\right)$, to produce a $SO(3)$-vector of edge scalar representations that will be considered in the aggregation step in ${\rm VertexNetwork}$.
\item ${\rm VertexNetwork}\left(G^{s+1}_{ij},F_{i}^{s}\right):(V^s)^{N \times N} \times(V^s)^{N}\rightarrow (V^{s+1})^{N}$ updates the vertex $\textrm{SO(3)}$-vector activations by combining a ``Clebsch-Gordan aggregation'', a CG non-linearity, a skip connection, and a linear mixing layer.
\end{enumerate}
\subsubsection{Edge networks}
Our edge network is an extension of the ``edge networks'' in Message Passing Neural Networks~\cite{Gilmer2017}. The $\rm EdgeNetwork$ function takes three different types of pair features, concatenates them, and then mixes them. We write the edge network (Eq.~(9) in the main text) with all indices explicitly included:
\begin{equation}
g_{\ell c,ij}^{s+1}=\mu_{c,ij}^{s}\odot\sum_{c^{\prime}} \left(\bigoplus_{c_1} g_{\ell c_{1},ij}^{s}\oplus \bigoplus_{c_2} d_{c_{2},ij}^{s}\oplus \bigoplus_{c_3} \eta_{\ell c_{3},ij}\right)_{c^{\prime}} \left(W_{s,\ell}^{\rm edge} \right)_{c'c}
\end{equation}
where:
\begin{itemize}
\item $W_{s,\ell}^{\rm edge}$ is a weight matrix at layer $s$ for each $\ell$ of the edge network.
\item $g_{\ell c_{1},ij}^{s}$ is a set of edge activations from the previous layer.
\item $d_{c_{2},ij}^{s}=\bigoplus_{\ell=0}^{L}F^s_{\ell c_2 i}\cdot F^s_{\ell c_2 j}$, is a matrix of dot products, where $F_{\ell ci}\cdot F_{\ell cj}=\sum_{m}\left(-1\right)^{m}\left(F_{\ell ci,m}F_{\ell cj,-m}\right)$.\footnote{Note that $F_{\ell ci}\cdot F_{\ell cj}=\sum_{m}\left(-1\right)^{m}\left(F_{\ell ci,m}F_{\ell cj,-m}\right)$ is (up to a constant) just the CG decomposition $C_{\ell\ell0}\left(F_{\ell ci}\otimes F_{\ell cj}\right)$. The specific matrix elements of the CG coefficients $C_{\ell\ell0}$ are $\left\langle \ell m_{1}\ell m_{2}|00\right\rangle \propto\left(-1\right)^{m_{1}}\delta_{m_{1},-m_{2}}$.}
\item $\eta_{\ell c_{3},ij}^{s}=\eta_{\ell c_3}^{s}\left(\left|\mathbf{r}_{ij}\right|\right)$ is a set of learnable basis functions. These functions are of the form $\eta_{\ell c_{k,n}}^{s}\left(r\right)=r^{-k}\left(\sin\left(2\pi\kappa_{\ell n}^{s}r+\phi_{\ell n}^{s}\right) + \mathrm{i} \sin\left(2\pi\bar\kappa_{\ell n}^{s} r+\bar\phi_{\ell n}^{s}\right)\right)$, where $\kappa_{\ell n}^{s}$, $\bar\kappa_{\ell n}^{s}$, $\phi_{\ell n}^{s}$, and $\bar\phi_{\ell n}^{s}$ are learnable parameters, the list of channels $c$ is found by flattening the matrix indexed by $c_{3}=\left(k,n\right)$, and $\mathrm{i}^2 = -1$.
\item $\mu_{c,ij}^{s}$ is a mask that is used to drop the radial functions smoothly to zero. This mask is constructed as $$\mu_{c,ij}^{s}=\sigma\left(-\left(r_{ij}-r_{c,{\rm soft}}^{s}\right)/w_{c}^{s}\right),$$ where $\sigma\left(x\right)$ is the sigmoid activation and $r_{c,{\rm soft}}^{s}$ is a soft cutoff that drops off with width $w_{c}^{s}$.
\end{itemize}
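The radial basis functions and the soft cutoff mask can be sketched as follows (illustrative only, with scalar parameters standing in for the learnable per-channel ones):

```python
import numpy as np

def eta(r, k, kappa, phi, kappa_bar, phi_bar):
    # eta(r) = r^{-k} (sin(2 pi kappa r + phi) + i sin(2 pi kappa_bar r + phi_bar))
    return r ** (-k) * (np.sin(2.0 * np.pi * kappa * r + phi)
                        + 1j * np.sin(2.0 * np.pi * kappa_bar * r + phi_bar))

def soft_mask(r, r_soft, width):
    # mu(r) = sigmoid(-(r - r_soft) / width): ~1 inside the cutoff, ~0 outside
    return 1.0 / (1.0 + np.exp((r - r_soft) / width))

assert np.isclose(soft_mask(2.0, r_soft=2.0, width=0.5), 0.5)  # half-height at the cutoff
assert soft_mask(10.0, 2.0, 0.5) < 1e-6                        # decays quickly beyond it
assert np.isclose(eta(1.0, k=0, kappa=0.25, phi=0.0, kappa_bar=0.25, phi_bar=0.0),
                  1.0 + 1.0j)
```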
\subsubsection{From edge scalar representations to $SO(3)$-vector}
The function $G^{s+1}_{ij}\leftarrow{\rm Edge2Vertex}\left(g_{ij}^{s+1},Y^{\ell}\left(\hat{\mathbf{r}}_{ij}\right)\right)$ takes the scalar output of the edge network $g_{\ell c,ij}^{s+1}$ and constructs a set of $SO(3)$-vector representations using spherical harmonics through:
\begin{equation}
G^{s+1}_{\ell c,ij}=g_{\ell c,ij}^{s+1}Y^{\ell}\left(\hat{\mathbf{r}}_{ij}\right)
\end{equation}
We note the normalization of the spherical harmonics here does not follow the ``quantum mechanical'' convention; rather, they are normalized such that $\sum_{m}\left|Y_{m}^{\ell}\left(\hat{\mathbf{r}}\right)\right|^{2}=1$. This is equivalent to scaling the QM version by $Y_{m}^{\ell}\left(\hat{\mathbf{r}}\right)\rightarrow\sqrt{\frac{4\pi}{2\ell+1}}\times Y_{m}^{\ell}\left(\hat{\mathbf{r}}\right)$.
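A quick check at $\ell = 1$ that the rescaled harmonics satisfy $\sum_{m}|Y_{m}^{\ell}|^{2}=1$ (a standalone sketch; the QM-normalized harmonics are hard-coded and multiplied by $\sqrt{4\pi/(2\ell+1)}$):

```python
import numpy as np

def sph_harm_l1_qm(theta, phi):
    # quantum-mechanics-normalized Y_1^m, ordered m = -1, 0, +1
    return np.array([
        np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(-1j * phi),
        np.sqrt(3.0 / (4.0 * np.pi)) * np.cos(theta),
        -np.sqrt(3.0 / (8.0 * np.pi)) * np.sin(theta) * np.exp(1j * phi),
    ])

theta, phi = 0.7, 1.9
Y = np.sqrt(4.0 * np.pi / 3.0) * sph_harm_l1_qm(theta, phi)  # rescaled convention
assert np.isclose(np.sum(np.abs(Y) ** 2), 1.0)  # sum_m |Y_m|^2 = 1 for any direction
```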
\subsubsection{Vertex networks}
The function ${\rm VertexNetwork}$ is found by concatenating three operations:
\begin{align}
F_{\ell c,i}^{s+1} & = \left( {\rm VertexNetwork}\left(G^{s+1}_{ij},F_{i}^{s}\right) \right)_{\ell c}\\
& = \sum_{c^{\prime}} \left(\bigoplus_{c_1} F_{c_1,i}^{s+1,{\rm ag}}\oplus \bigoplus_{c_2} F_{c_2,i}^{s+1,{\rm nl}}\oplus \bigoplus_{c_3} F_{c_3,i}^{s+1,{\rm id}}\right)_{\ell,c^{\prime}} \left(W^{{\rm vertex}}_{s,\ell}\right)_{c^{\prime}c}
\end{align}
where
\begin{enumerate}
\item $F_{i}^{s+1,{\rm ag}}=\sum_{j\in N\left(i\right)} G^{s+1}_{ij} \otimes_{\rm cg} F_{j}^{s} $ is a CG-aggregation step and $G^{s+1}_{ij}$ is the set of edge representations calculated by ${\rm Edge2Vertex}$.
\item $F_{i}^{s+1,{\rm nl}}=F_{i}^{s} \otimes_{\rm cg} F_{i}^{s}$
is a CG non-linearity.
\item $F_{i}^{s+1,{\rm id}}=F_{i}^{s}$ is just the identity function, or equivalently a skip connection.
\item $W^{{\rm vertex}}_{s,\ell}$ is an atom feature mixing matrix.
\end{enumerate}
\subsection{Output featurization}
\label{sec:output_featurization}
The output featurization of the network starts with the construction of a set of scalar invariants from the set of activations $F_i^{s}$ for all atoms $i$ and all levels $s=0\ldots S$. We extract three scalar invariants from each activation $F$ (dropping the $i$ and $s$ indices):
\begin{compactenum}
\item Take the $\ell=0$ component: $\xi_0(F) = F^{s}_{\ell=0}$.
\item Take the scalar product with itself: $\xi_1(F) = \mathrm{Re}[\tilde{\xi}_1(F)] + \mathrm{Im}[\tilde{\xi}_1(F)]$ where $\tilde{\xi}_1(F) = \sum_{m=-\ell}^{\ell} (-1)^{m} F^s_{\ell,m} F^s_{\ell,-m}$.
\item Calculate the $\SO(3)$-invariant norm: $\xi_2(F^s) = \sum_{m=-\ell}^{\ell} F^s_{\ell,m} \left(F^s_{\ell,m}\right)^*$.
\end{compactenum}
These are then concatenated together to get a final set of scalars:
$x_i = \bigoplus_{s=0}^{S} \xi_0(F_i^s) \oplus \bigoplus_{\ell=0}^{L} (\xi_1(F_i^s) \oplus \xi_2(F_i^s))$
and fed into the output network.
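For a single activation these three invariants can be computed directly; a minimal numpy sketch (the dictionary storage layout, with $m$ stored as column $\ell+m$, and the function name are our own choices):

```python
import numpy as np

def scalar_invariants(F):
    """Compute the three invariant families for one activation F.
    F: dict mapping ell -> complex array of shape (n_channels, 2*ell + 1),
    with m = -ell..ell stored as column ell + m (layout is our own choice)."""
    xi0 = F[0][:, 0].real                 # 1. the ell = 0 component
    xi1, xi2 = [], []
    for ell, f in F.items():
        ms = np.arange(-ell, ell + 1)
        # 2. scalar product with itself: sum_m (-1)^m F_m F_{-m};
        #    f[:, ::-1] maps the column for m to the column for -m.
        tilde = np.sum((-1.0) ** ms * f * f[:, ::-1], axis=1)
        xi1.append(tilde.real + tilde.imag)
        # 3. SO(3)-invariant norm: sum_m F_m conj(F_m)
        xi2.append(np.sum(np.abs(f) ** 2, axis=1))
    return xi0, np.concatenate(xi1), np.concatenate(xi2)
```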
\subsubsection{MD-17}
The output for the MD-17 network is straightforward. The scalars $x_i$ are summed over, and then a single linear layer is applied: $y = A \left(\sum_i x_i \right) + b$.
\subsubsection{QM-9}
The output for the QM-9 is constructed using two multi-layer perceptrons (MLPs). First, a MLP is applied to the scalar representation $x_i$ at each site. The result is summed over all sites, forming a single permutation invariant representation of the molecule. This representation is then used to predict a single number used as the regression target:
$y = \mathrm{MLP}_2 \left(\sum_i \mathrm{MLP}_1(x_i)\right)$. Here, both $\mathrm{MLP}_1$ and $\mathrm{MLP}_2$ have a single hidden layer of size 256, and the intermediate representation has 96 neurons.
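The readout structure can be sketched as follows (a toy numpy version; parameter shapes in the test are illustrative, not the 256/96 sizes quoted above):

```python
import numpy as np

def mlp(x, W1, b1, W2, b2):
    # Single hidden layer with ReLU; a stand-in for MLP_1 / MLP_2.
    return np.maximum(x @ W1 + b1, 0.0) @ W2 + b2

def qm9_readout(x_sites, p1, p2):
    """y = MLP_2( sum_i MLP_1(x_i) ): per-site MLP, permutation-invariant
    sum over sites, then a second MLP to the scalar regression target
    (a sketch; function names are ours)."""
    h = mlp(x_sites, *p1).sum(axis=0)   # summing over atoms gives permutation invariance
    return mlp(h, *p2)
```

Because the only interaction between sites is an unordered sum, the prediction is invariant under any permutation of the atoms.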
\subsection{Weight initialization}
\label{sec:weight-initialization}
All CG weights $W^{\ell}$ were initialized as $W^{\ell}_{\tau_\ell^{\rm in}, \tau_\ell^{\rm out}} \sim \mathrm{Unif}(-1, 1) \cdot g / (\tau^{\rm in}_\ell + \tau^{\rm out}_\ell)$, i.e., uniformly in the interval $[-1, 1]$ and then scaled by a factor of $g / (\tau^{\rm in}_\ell + \tau^{\rm out}_\ell)$, where $\tau^{\rm in}_\ell$ and $\tau^{\rm out}_\ell$ are the input and output multiplicities at order $\ell$ and $g$ is the weight gain.
We chose the gain to ensure that the activations at each level were order unity when the network is initialized. We found that if the gain was too low, the CG products in higher levels would not significantly contribute to training, and information would only flow through linear (one-body) operations. This would result in convergence to poor training error. On the other hand, if the gain is set too high, the CG non-linearities dominate at initialization and would increase the chance of the instabilities discussed above.
In practice, the gain was hand-tuned such that the mean of the absolute value of the CG activations $1/{(N_{\rm atom} N_{\rm c} (2\ell+1))}\sum_{\ell, i,c,m} |F^s_{\ell,i,c,m}|$ at each level was approximately unity for a random mini-batch. For the experimental results presented here, we used a gain of $g=5$.
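A sketch of this initialization (assuming $\tau^{\rm in}_\ell$, $\tau^{\rm out}_\ell$ are the input/output multiplicities; the function name is ours):

```python
import numpy as np

def init_cg_weight(tau_in, tau_out, gain=5.0, rng=None):
    """W ~ Unif(-1, 1) * gain / (tau_in + tau_out), following the scheme above.
    Dividing by tau_in + tau_out keeps activations O(1) at initialization."""
    rng = np.random.default_rng() if rng is None else rng
    return rng.uniform(-1.0, 1.0, (tau_in, tau_out)) * gain / (tau_in + tau_out)
```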
\section{Covariant $\SO(3)$-vector layers}
\label{sec:cg_layers}
We discuss the specific implementation of the covariant CG layers acting on the $\SO(3)$-vector
activations $F_i$. Our network is inspired by the $n \leq 3$-atom interactions described in the main text.
In practice, computational limitations forced us to take only a subset of these operations, in particular:
\begin{equation}
\bigg[
\underbrace{\Big(F_{i}\oplus\brBig{\sum_{j}\Upsilon_{ij}^{\left(1\right)}\otimes_{\rm cg} F_{j}}\Big)}_{\textrm{one-body}}
\oplus\underbrace{\left(F_{i}\otimes_{\rm cg} F_{i}\right)}_{\textrm{two-body}}
\oplus\underbrace{\Big(\sum_{j}F_{i}\otimes_{\rm cg}\Upsilon_{ij}^{\left(3\right)}\otimes_{\rm cg} \left(F_{j}\right)^{\otimes_{\rm cg} 2}\Big)}_{\textrm{three-body}}
\bigg] \cdot W
\end{equation}
For computational tractability, we replace
$F_{i}\otimes_{\rm cg}\Upsilon_{ij}^{\left(3\right)}\otimes_{\rm cg} F_{j}$ by $(F_{i}\cdot F_{j}) \otimes_{\rm cg}\Upsilon_{ij}^{\left(3\right)}$ in the three-body term. This can be done
using the properties of the CG transformation to give a unitary redefinition of $W \rightarrow W^\prime$,
followed by a projection into the subspace for which $\ell_{\rm max} = 0$.
After some algebra, we arrive at the form of our $\mathrm{CGLayer}$:
\begin{equation}
\mathrm{CGLayer}(\{F_{i},\mathbf{r}_{ij}\}) =
\Big[F_i \oplus
\big( F_{i}\otimes_{\rm cg} F_{i}\big) \oplus
\Big(\sum_{j}\Big(\Upsilon_{ij}^{\left(1\right)}\oplus
\big(F_{i}\cdot F_{j}\big) \Upsilon_{ij}^{\left(3\right)}\Big)\otimes_{\rm cg} F_{j} \Big)\Big]
\cdot W^\prime
\label{eq:cglayer-mpnn}
\end{equation}
Structurally, \rf{eq:cglayer-mpnn} looks similar to a message passing neural network
\citep{Gilmer2017}, where messages are CG-products acting on $\SO(3)$-vector activations.
In this framework, the term
\sm{\Upsilon_{ij}^{\left(\mathrm{edge}\right)} = \Upsilon_{ij}^{\left(1\right)}\oplus\left(F_{i}\cdot F_{j}\right)\Upsilon_{ij}^{\left(3\right)}}
looks like an ``edge network'' with $\SO(3)$-vector messages.
Inspired by this connection, we generalize our architecture to
\begin{equation}
\mathrm{CGLayer}(\{F_{i},\mathbf{r}_{ij}\}) =
\Big[
F_i \oplus \big( F_{i}\otimes_{\rm cg} F_{i}\big) \oplus
\big( \sum_{j} \Upsilon_{ij}^{\left(\mathrm{edge}\right),s} \otimes_{\rm cg} F_{j} \big)\Big] \cdot W^\prime.
\label{eq:cglayer-edge}
\end{equation}
Following this idea, we can allow the edge network to be ``self-consistently'' updated based upon the value at the previous level:
\m{\Upsilon_{ij}^{\left(\mathrm{edge}\right),s} \equiv \big[ \Upsilon_{ij}^{\left(\mathrm{edge}\right),s-1} \oplus \Upsilon_{ij}^{\left(1\right),s}\oplus\big(F_{i}\cdot F_{j}\big)\Upsilon_{ij}^{\left(3\right),s} \big] \cdot W_{\mathrm{edge}}} is a ``self-consistent'' amplitude.
In practice, we assume $\Upsilon_{ij}^{\left(1,3\right),s} \propto Y^\ell(\hat{\mathbf{r}}_{ij})$. We can therefore calculate the edge network by defining $\Upsilon_{ij}^{\left(\mathrm{edge}\right),s} = \bigoplus_{\ell=0}^{\ell_{\rm max}} \mathcal{E}^{s,\ell}_{ij} Y^\ell(\hat{\mathbf{r}}_{ij})$, and then updating
\sm{\mathcal{E}^{s}_{ij} = \bigl( \mathcal{E}^{s-1}_{ij} \oplus \mathcal{F}^{(1),s}_{ij} \oplus \mathcal{D}^{s}_{ij} \bigr) \cdot W_{\rm edge}},
where $\mathcal{D}^{s}_{ij} = (F_{i}\cdot F_{j})$.
The functions $\Upsilon_{ij}$ define the position dependence of the interaction between atoms $i$ and $j$. In chemical environments, atoms that are separated by a significant distance will not talk to each other. For this reason we include a soft mask $\Upsilon_{ij} \rightarrow m_{ij} \times \Upsilon_{ij}$, where $m_{ij} = \sigma((r_{\rm cut} - r_{ij}) / w)$, and $r_{\rm cut}$, $w$ are respectively learnable cutoffs and widths described below.
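A sketch of the mask (assuming $\sigma(\cdot)$ is the logistic sigmoid, as the notation suggests; the function name is ours):

```python
import numpy as np

def soft_mask(r, r_cut, width):
    """m_ij = sigma((r_cut - r) / width): close to 1 well inside the cutoff,
    close to 0 well outside it, with falloff scale `width`."""
    return 1.0 / (1.0 + np.exp(-(r_cut - np.asarray(r, dtype=float)) / width))
```

At $r = r_{\rm cut}$ the mask is exactly $1/2$, and making $w$ smaller sharpens the cutoff toward a hard step.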
\section{General $n$-atom Interaction terms}
We now write the general form of the $n$-atom interactions described in the main text.
\begin{equation}
\end{equation}
\section{Experimental details}\label{sec:training-algorithm}
We trained our network using the AMSGrad~\citep{j.2018on} optimizer with a constant learning rate of $5\times 10^{-4}$ and a mini-batch size of $25$. We trained for 512 and 256 epochs, respectively, for MD-17 and QM-9. For each molecule in MD-17, we uniformly sampled 50k/10k/10k data points for the training/validation/test splits, respectively. In QM-9 the dataset was randomly split into 100k molecules in the train set, with $10\%$ in the test set, and the remainder in the validation set. We removed the 3054 molecules that failed consistency requirements~\citep{ramakrishnan2014quantum}, and also subtracted the thermochemical energy~\citep{Gilmer2017} for the targets $C_v$, $U_0$, $U$, $G$, $H$, ZPVE.
For both datasets, we used $S=4$ CGLayers with $L=3$, and we used $N_c = 16$ channels at the output of each CGLayer. This gave networks with 299808 and 154241 parameters, respectively, for QM-9 and MD-17. Training for QM-9 takes roughly 48 hours on an NVidia 2080 Ti GPU. Training time for MD-17 varies based upon the molecule being trained, but typically ranges between 26 and 30 hours.
\subsection{Training instabilities}
Training our Cormorant network involved several subtleties, both of which we believe are related to the nature of the CG non-linearity.
We found that a poor choice of weight initialization or optimization algorithm will frequently result in either: (1) an instability resulting in a very large training loss ($> 10^6$), from which the network will never recover, or (2) convergence to weights where the activations of CG non-linearities in higher layers turn off, and the resulting training error is poor.
We believe these difficulties are a result of the CG non-linearity, which is quadratic and unbounded. In fact, our network is just a high-order polynomial function of learnable parameters.\footnote{This is true for MD-17, although for QM-9, the presence of non-linearities in the fully-connected MLPs adds a more conventional non-linearity.}
For the hyperparameters used in our experiments, the prediction at the top is a sixteenth order polynomial of our network's parameters. As a result, in certain regions of parameter space small gradient updates can result in rapid growth of the output amplitude or a rapid drop in the importance of some channels.
These issues were more significant when we used Adam~\citep{Kingma2015AdamAM} than AMSGrad, and when the network's parameters were not initialized in a narrow range. Using the weight initialization scheme discussed in Sec.~\ref{sec:weight-initialization}, we were able to consistently converge to low training and validation error, provided we were limited to at most four CG layers.
Improved Smoothed Analysis of the k-Means Method
Bodo Manthey
Heiko Röglin
The k-means method is a widely used clustering algorithm. One of its distinguished features is its speed in practice. Its worst-case running-time, however, is exponential, leaving a gap between practical and theoretical performance. Arthur and Vassilvitskii [3] aimed at closing this gap, and they proved a bound of $\mathrm{poly}(n^k, \sigma^{-1})$ on the smoothed running-time of the k-means method, where n is the number of data points and σ is the standard deviation of the Gaussian perturbation. This bound, though better than the worst-case bound, is still much larger than the running-time observed in practice.
We improve the smoothed analysis of the k-means method by showing two upper bounds on the expected running-time of k-means. First, we prove that the expected running-time is bounded by a polynomial in $n^{\sqrt{k}}$ and $\sigma^{-1}$. Second, we prove an upper bound of $k^{kd}\cdot\mathrm{poly}(n, \sigma^{-1})$, where d is the dimension of the data space. The polynomial is independent of k and d, and we obtain a polynomial bound for the expected running-time for $k, d \in O(\sqrt{\log n/\log\log n})$.
Finally, we show that k-means runs in smoothed polynomial time for one-dimensional instances.
The k-means method is a very popular algorithm for clustering high-dimensional data. It is a local search algorithm based on ideas by Lloyd [10]: Initiated with k arbitrary cluster centers, it assigns every data point to its nearest center, and then readjusts the centers, reassigns the data points, ... until it stabilizes. (In Section 1.1, we describe the algorithm formally.) The k-means method terminates in a local optimum, which might be far worse than the global optimum. However, in practice it works very well. It is particularly popular because of its simplicity and its speed: "In practice, the number of iterations is much less than the number of samples", as Duda et al. [6, Section 10.4.3] put it. According to Berkhin [5], the k-means method "is by far the most popular clustering tool used in scientific and industrial applications."
∗A full version of this paper is available at http://arxiv.org/abs/0809.1715
†Saarland University, Department of Computer Science, [email protected]. Work done in part at Yale University, Department of Computer Science, supported by the Postdoc-Program of the German Academic Exchange Service (DAAD).
‡Boston University, Department of Computer Science, [email protected]. Supported by a fellowship within the Postdoc-Program of the German Academic Exchange Service (DAAD).
The practical performance and popularity of the k-means method is at stark contrast to its performance in theory. The only upper bounds for its running-time are based on the observation that no clustering appears twice in a run of k-means: Obviously, n points can be distributed among k clusters in only $k^n$ ways.
Furthermore, the number of Voronoi partitions of n points in $\mathbb{R}^d$ into k classes is bounded by a polynomial in $n^{kd}$ [8], which yields an upper bound of $\mathrm{poly}(n^{kd})$.
On the other hand, Arthur and Vassilvitskii [2] showed that k-means can run for $2^{\Omega(\sqrt{n})}$ iterations in the worst case.
To close the gap between good practical and poor theoretical performance of algorithms, Spielman and Teng introduced the notion of smoothed analysis [12]: An adversary specifies an instance, and this instance is then subject to slight random perturbations. The smoothed running-time is the maximum over the adversarial choices of the expected running-time. On the one hand, this rules out pathological, isolated worst-case instances. On the other hand, smoothed analysis, unlike average-case analysis, is not dominated by random instances since the instances are not completely random; random instances are usually not typical instances and have special properties with high probability. Thus, smoothed analysis also circumvents the drawbacks of average-case analysis. For a survey of smoothed analysis, we refer to Spielman and Teng [13].
The goal of this paper is to bound the smoothed running-time of the k-means method. There are basically two reasons why the smoothed running-time of the k-means method is a more realistic measure than its worst-case running-time: First, data obtained from measurements is inherently noisy. So even if the original data were a bad instance for k-means, the data measured is most likely a slight perturbation of it. Second, if the data possesses a meaningful k-clustering, then slightly perturbing the data should preserve this clustering. Thus, smoothed analysis might help to obtain a faster k-means method: We take the data measured, perturb it slightly, and then run k-means on the perturbed instance. The bounds for the smoothed running-time carry over to this variant of the k-means method.
1.1 k-Means Method. An instance of the k-means clustering problem is a point set $X \subseteq \mathbb{R}^d$ consisting of n points. The aim is to find a clustering $C_1, \ldots, C_k$
of $X$, i.e., a partition of $X$, as well as cluster centers $c_1, \ldots, c_k \in \mathbb{R}^d$ such that the potential
$$\sum_{i=1}^{k} \sum_{x \in C_i} \|x - c_i\|^2$$
is minimized. Given the cluster centers, every data point should obviously be assigned to the cluster whose center is closest to it. The name k-means stems from the fact that, given the clusters, the centers $c_1, \ldots, c_k$ should be chosen as the centers of mass, i.e., $c_i = \frac{1}{|C_i|} \sum_{x \in C_i} x$. The k-means method proceeds now as follows:
1. Select cluster centers $c_1, \ldots, c_k$.
2. Assign every $x \in X$ to the cluster $C_i$ whose cluster center $c_i$ is closest to it.
3. Set $c_i = \frac{1}{|C_i|} \sum_{x \in C_i} x$.
4. If clusters or centers have changed, goto 2. Otherwise, terminate.
Since the potential decreases in every step, no clustering occurs twice, and the algorithm eventually terminates.
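The four steps above are easy to state in code; a minimal numpy sketch (not the implementation analyzed in this paper) that also evaluates the potential:

```python
import numpy as np

def potential(X, centers, labels):
    # sum_i sum_{x in C_i} ||x - c_i||^2
    return float(((X - centers[labels]) ** 2).sum())

def k_means(X, centers, max_iter=100):
    """Plain Lloyd iterations following steps 1-4 above (a minimal sketch)."""
    centers = centers.astype(float).copy()
    for _ in range(max_iter):
        # Step 2: assign every point to its closest center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Step 3: move every center to the center of mass of its cluster.
        new_centers = np.array([X[labels == i].mean(axis=0) if (labels == i).any()
                                else centers[i] for i in range(len(centers))])
        # Step 4: stop once neither clusters nor centers change.
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

Each reassignment and each recentering can only decrease the potential, which is exactly the termination argument used in the text.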
1.2 Related Work. The problem of finding a good clustering can be approximated arbitrarily well: Bădoiu et al. [4], Matoušek [11], and Kumar et al. [9] devised polynomial time approximation schemes with different dependencies on the approximation ratio $(1 + \varepsilon)$ as well as n, k, and d: $O(2^{O(k\varepsilon^{-2}\log k)} \cdot nd)$, $O(n\varepsilon^{-2k^2d} \log^k n)$, and $O(\exp(k/\varepsilon) \cdot nd)$, respectively.
While the polynomial time approximation schemes show that k-means clustering can be approximated arbitrarily well, the method of choice for finding a k-clustering is the k-means method due to its performance in practice. However, the only polynomial bound for k-means holds for d = 1, and only for instances with polynomial spread [7], which is the maximum distance of points divided by the minimum distance.
Arthur and Vassilvitskii [3] have analyzed the running-time of the k-means method subject to Gaussian perturbation: The points are drawn according to independent d-dimensional Gaussian distributions with standard deviation σ. Arthur and Vassilvitskii proved that the expected running-time after perturbing the input with Gaussians with standard deviation σ is polynomial in $n^k$, d, the diameter of the perturbed point set, and $1/\sigma$.
Recently, Arthur [1] showed that the probability that the running-time of k-means subject to Gaussian perturbations exceeds a polynomial in n, d, the diameter of the instance, and $1/\sigma$ is bounded by O(1/n). However, his argument does not yield any significant bound on the expected running-time of k-means: The probability of O(1/n) that the running-time exceeds a polynomial bound is too large to yield an upper bound for the expected running-time, except for the trivial upper bound of $\mathrm{poly}(n^{kd})$.
1.3 New Results. We improve the smoothed analysis of the k-means method by proving two upper bounds on its running-time. First, we show that the smoothed running-time of k-means is bounded by a polynomial in $n^{\sqrt{k}}$ and $1/\sigma$.
Theorem 1.1. Let $X \subseteq \mathbb{R}^d$ be a set of n points drawn according to independent Gaussian distributions whose means are in $[0, 1]^d$. Then the expected running-time of the k-means method on the instance X is bounded from above by a polynomial in $n^{\sqrt{k}}$ and $1/\sigma$.
Thus, compared to the previously known bound, we decrease the exponent by a factor of $\sqrt{k}$. Second, we show that the smoothed running-time of k-means is bounded by $k^{kd} \cdot \mathrm{poly}(n, 1/\sigma)$. In particular, this decouples the exponential part of the bound from the number n of points.
Theorem 1.2. Let X be drawn as described in Theorem 1.1. Then the expected running-time of the k-means method on the instance X is bounded from above by $k^{kd} \cdot \mathrm{poly}(n, 1/\sigma)$.
An immediate consequence of Theorem 1.2 is the following corollary, which proves that the expected running-time is polynomial in n and 1/σ if k and d are small compared to n. This result is of particular interest since d and k are usually much smaller than n.
Corollary 1.1. Let $k, d \in O(\sqrt{\log n/\log\log n})$. Let X be drawn as described in Theorem 1.1. Then the expected running-time of k-means on the instance X is bounded by a polynomial in n and $1/\sigma$.
David Arthur [1] presented an insightful proof that k-means runs in time polynomial in n, 1/σ, and the diameter of the instance with a probability of at least 1 − O(1/n). It is worth pointing out that his result is orthogonal to our results: neither do our results imply polynomial running time with probability 1 − O(1/n),
nor does Arthur's result yield any non-trivial bound on the expected running-time (not even $\mathrm{poly}(n^k, 1/\sigma)$)
since the success probability of 1 − O(1/n) is way too small. The exception is our result for d = 1, which yields not only a bound on the expectation, but also a bound that holds with high probability. However, the original definition of smoothed analysis [12] is in terms of expectation, not in terms of bounds that hold with a probability of 1 − o(1).
To prove our bounds, we prove a lemma about perturbed point sets (Lemma 2.1). The lemma bounds the number of points close to the boundaries of Voronoi partitions that arise during the execution of k-means. It might be of independent interest, in particular for smoothed analyses of geometric algorithms and problems.
Finally, we prove a polynomial bound for the running-time of k-means in one dimension.
Theorem 1.3. Let $X \subseteq \mathbb{R}$ be drawn according to 1-dimensional Gaussian distributions as described in Theorem 1.1. Then the expected running-time of k-means on X is polynomial in n and $1/\sigma$. Furthermore, the probability that the running-time exceeds a polynomial in n and $1/\sigma$ is bounded by $1/\mathrm{poly}(n)$.
We remark that this result for d = 1 is not implied by the result of Har-Peled and Sadri [7] that the running-time of one-dimensional k-means is polynomial in n and the spread of the instance. The reason is that the expected value of the square of the spread is unbounded.
The restriction of the adversarial points to be in $[0, 1]^d$ is necessary: Without any bound, the adversary can place the points arbitrarily far away, thus diminishing the effect of the perturbation. We can get rid of this restriction and obtain the same results by allowing the bounds to be polynomial in the diameter of the adversarial instance. However, for the sake of clarity and to avoid another parameter, we have chosen the former model.
1.4 Outline. To prove our two main theorems, we first prove a property of perturbed point sets (Section 2): In any step of the k-means algorithm, there are not too many points close to any of the at most $\binom{k}{2}$ hyperplanes that bisect the centers and that form the Voronoi regions. To put it another way: No matter how k-means partitions the point set X into k Voronoi regions, the number of points close to any boundary is rather small with overwhelming probability.
We use this lemma in Section 3: First, we use it to prove Lemma 3.1, which bounds the expected number of iterations in terms of the smallest possible
distance of two clusters. Using this bound, we derive a first upper bound for the expected number of iterations (Lemma 3.2), which will result in Theorem 1.2 later on. In Sections 4 and 5, we distinguish between iterations in which at most $\sqrt{k}$ or at least $\sqrt{k}$ clusters gain or lose points. This will result in Theorem 1.1.
We consider the special case of d = 1 in Section 6. For this case, we prove an upper bound polynomial in n and 1/σ until the potential has dropped by at least 1. In Sections 3, 4, 5, and 6 we are only concerned with bounding the number of iterations until the potential has dropped by at least 1. Using these bounds and an upper bound on the potential after the first round, we will derive Theorems 1.1, 1.2, and 1.3 as well as Corollary 1.1 in Section 7.
Due to space limitations, some proofs can only be found in the full version of this paper.
1.5 Preliminaries. In the following, X is the perturbed instance on which we run k-means, i.e., $X = \{x_1, \ldots, x_n\} \subseteq \mathbb{R}^d$ is a set of n points, where each point $x_i$ is drawn according to a d-dimensional Gaussian distribution with mean $\mu_i \in [0, 1]^d$ and standard deviation σ.
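Generating such a smoothed instance is a one-liner (a sketch; the function name is ours):

```python
import numpy as np

def smoothed_instance(means, sigma, rng=None):
    """x_i ~ N(mu_i, sigma^2 I): i.i.d. Gaussian perturbation of the
    adversarial means mu_i in [0, 1]^d, the input model described above."""
    rng = np.random.default_rng() if rng is None else rng
    means = np.asarray(means, dtype=float)
    return means + rng.normal(scale=sigma, size=means.shape)
```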
Inaba et al. [8] proved that the number of iterations of k-means is $\mathrm{poly}(n^{kd})$ in the worst case. We abbreviate this bound by $W \le n^{\kappa kd}$ for some constant κ in the following.
Let $D \ge 1$ be chosen such that, with a probability of at least $1 - W^{-1}$, every data point from X lies in the hypercube $\mathcal{D} := [-D, 1 + D]^d$ after the perturbation.
In Section 7, we prove that D can be bounded by a polynomial in n and σ, and we use this fact in the following sections. We denote by F the failure event that there exists one point in X that does not lie in the hypercube $\mathcal{D}$ after the perturbation. We say that a cluster is active in an iteration if it gains or loses at least one point.
We will always assume in the following that d ≤ n and k ≤ n, and we will frequently bound both d and k by n to simplify calculations. Of course, k ≤ n holds for every meaningful instance since it does not make sense to partition n points into more than n clusters. Furthermore, we can assume d ≤ n for two reasons: First, the dimension is usually much smaller than the number of points, and, second, if d > n, then we can project the points to a lower-dimensional subspace without changing anything.
Let $C = \{C_1, \ldots, C_k\}$ denote the set of clusters. For a natural number k, let $[k] = \{1, \ldots, k\}$. In the following, we will assume that numbers such as $\sqrt{k}$ are integers. For the sake of clarity, we do not write down the tedious floor and ceiling functions that are
actually necessary. Since we are only interested in the asymptotics, this does not affect the validity of the proofs. Furthermore, we assume in the following sections that σ ≤ 1. This assumption is only made to simplify the arguments and we describe in Section 7 how to get rid of it.
2 A Property of Perturbed Point Sets
The following lemma shows that, with high probability, there are not too many points close to the hyperplanes dividing the clusters. It is crucial for our bounds for the smoothed running-time: If not too many points are close to the bisecting hyperplanes, then, eventually, one point that is further away from the bisecting hyperplanes must go from one cluster to another, which causes a significant decrease of the potential.
Lemma 2.1. Let $a \in [k]$ be arbitrary. With a probability of at least $1 - 2W^{-1}$, the following holds: In every step of the k-means algorithm (except for the first one) in which at least $kd/a$ points change their assignment, at least one of these points has a distance larger than
$$\varepsilon := \frac{\sigma^4}{32 n^2 d D^2} \cdot \left(\frac{\sigma}{3 D n^{3+2\kappa}}\right)^{4a}$$
from the bisector that it crosses.
Proof. We consider a step of the k-means algorithm, and we refer to the configuration before this step as the first configuration and to the configuration after this step as the second configuration. To be precise, we assume that in the first configuration the positions of the centers are the centers of mass of the points assigned to them in this configuration. The step we consider is the reassignment of the points according to the Voronoi diagram in the first configuration.
Let $B \subseteq X$ with $|B| = \ell := kd/a$ be a set of points that change their assignment during the step. There are at most $n^{\ell}$ choices for the points in B and at most $k^{2\ell} \le n^{2\ell}$ choices for the clusters they are assigned to in the first and the second configuration. We apply a union bound over all these at most $n^{3\ell}$ choices.
The following sets are defined for all $i, j \in [k]$ and $j \neq i$. Let $B_i \subseteq B$ be the set of points that leave cluster $C_i$. Let $B_{i,j} \subseteq B_i$ be the set of points assigned to cluster $C_i$ in the first and to cluster $C_j$ in the second configuration, i.e., the points in $B_{i,j}$ leave $C_i$ and enter $C_j$. We have $B = \bigcup_i B_i$ and $B_i = \bigcup_{j \neq i} B_{i,j}$.
Let $A_i$ be the set of points that are in $C_i$ in the first configuration except for those in $B_i$. We assume that the positions of the points in $A_i$ are determined by an adversary. Since the sets $A_1, \ldots, A_k$ form a partition of the points in $X \setminus B$ that has been obtained in the
previous step on the basis of a Voronoi diagram, there are at most W choices for this partition [8]. We also apply a union bound over the choices for this partition.
In the first configuration, exactly the points in $A_i \cup B_i$ are assigned to cluster $C_i$. Let $c_1, \ldots, c_k$ denote the positions of the cluster centers in the first configuration, i.e., $c_i$ is the center of mass of $A_i \cup B_i$. Since the positions of the points in $X \setminus B$ are assumed to be fixed by an adversary, and since we apply a union bound over the partition $A_1, \ldots, A_k$, the impact of the set $A_i$ on the position of $c_i$ is fixed. However, we want to exploit the randomness of the points in $B_i$ in the following. Thus, the positions of the centers are not fixed yet but they depend on the randomness of the points in B. In particular, the bisecting hyperplane $H_{i,j}$ of the clusters $C_i$ and $C_j$ is not fixed but depends on $B_i$ and $B_j$.
In order to complete the proof, we have to estimate the probability of the event
$$(E) \quad \forall i, j : \forall b \in B_{i,j} : \mathrm{dist}(b, H_{i,j}) \le \varepsilon,$$
where $\mathrm{dist}(x, H) = \min_{y \in H} \|x - y\|$ denotes the shortest distance of a point x to a hyperplane H. In the following, we denote this event by E. If the hyperplanes $H_{i,j}$ were fixed, the probability of E could readily be seen to be at most $\left(\frac{2\varepsilon}{\sigma\sqrt{2\pi}}\right)^{\ell} \le \left(\frac{\varepsilon}{\sigma}\right)^{\ell}$. But the hyperplanes are not fixed since their positions and orientations depend on the points in the sets $B_{i,j}$. Therefore, we are only able to prove the following weaker bound in Lemma 2.2:
$$\Pr[E \wedge \neg F] \le \left(\frac{3D}{\sigma}\right)^{kd} \cdot \left(\frac{32 n^2 d D^2 \varepsilon}{\sigma^4}\right)^{\ell/4},$$
where $\neg F$ denotes the event that, after the perturbation, all points of X lie in the hypercube $\mathcal{D} = [-D, D + 1]^d$. Now the union bound yields the following upper bound on the probability that a set B with the stated properties exists:
$$\Pr[E] \le \Pr[E \wedge \neg F] + \Pr[F] \le n^{3\ell} W \cdot \left(\frac{3D}{\sigma}\right)^{kd} \cdot \left(\frac{32 n^2 d D^2 \varepsilon}{\sigma^4}\right)^{\ell/4} + W^{-1} = n^{3\ell} W \cdot \left(\frac{1}{n^{3+2\kappa}}\right)^{kd} + W^{-1} \le n^{3\ell + \kappa kd} \cdot \left(\frac{1}{n^{3+2\kappa}}\right)^{kd} + W^{-1} \le n^{-\kappa kd} + W^{-1} \le 2 W^{-1}.$$
The equation is by our choice of ε, the inequalities are due to some simplifications and $W \le n^{\kappa kd}$.
Lemma 2.2. The probability of the event $E \wedge \neg F$ is bounded from above by
$$\left(\frac{3D}{\sigma}\right)^{kd} \cdot \left(\frac{32 n^2 d D^2 \varepsilon}{\sigma^4}\right)^{\ell/4}.$$
3 An Upper Bound
Lemma 2.1 yields an upper bound on the number of iterations that k-means needs: Since there are only few points close to hyperplanes, eventually a point switches from one cluster to another that initially was not close to a hyperplane. The results of this section lead to the proof of Theorem 1.2.
First, we bound the number of iterations in terms of the distance ∆ of the closest cluster centers that occur during the run of k-means.
Lemma 3.1. For every $a \in [k]$, with a probability of at least $1 - 3W^{-1}$, every sequence of $k^{kd/a} + 1$ consecutive steps of the k-means algorithm (not including the first one) reduces the potential by at least
$$\frac{\varepsilon^2 \cdot \min\{\Delta^2, 1\}}{36 d D^2 k^{kd/a}},$$
where ∆ denotes the smallest distance of two cluster centers that occurs during the sequence and ε is defined as in Lemma 2.1.
In order to obtain a bound on the number of iterations that k-means needs, we need to bound the distance ∆ of the closest cluster centers. This is done in the following lemma, which exploits Lemma 3.1. The following lemma is the crucial ingredient of the proof of Theorem 1.2.
Lemma 3.2. Let $a \in [k]$ be arbitrary. Then the expected number of steps until the potential drops by at least 1 is bounded from above by
$$\gamma \cdot k^{2kd/a} \cdot nkd \cdot \left(\frac{d^2 n^4 D}{\sigma\varepsilon}\right)^2$$
for a sufficiently large absolute constant γ.
Proof. With a probability of at least $1 - 3W^{-1}$, the number of iterations until the potential drops by at least
$$\frac{\varepsilon^2 \cdot \min\{\Delta^2, 1\}}{36 d D^2 k^{kd/a}}$$
is at most $k^{kd/a} + 1$ due to Lemma 3.1. We estimate the contribution of the failure event, which occurs only with probability $3W^{-1}$, to the expected running time by 3 and ignore it in the following. Let T denote the random variable that equals the number of sequences of length $k^{kd/a} + 1$ until the potential has dropped by one.
The random variable T can only exceed t if
$$\min\{\Delta^2, 1\} \le \frac{36 d D^2 k^{kd/a}}{\varepsilon^2 \cdot t},$$
leading to the following bound on the expected value of T:
$$E[T] = \sum_{t=1}^{W} \Pr[T \ge t] \le \int_0^W \Pr\left[\min\{\Delta^2, 1\} \le \frac{36 d D^2 k^{kd/a}}{\varepsilon^2 \cdot t}\right] dt \le t_0 + \int_{t_0}^W \Pr\left[\Delta \le \frac{6\sqrt{d} D k^{kd/(2a)}}{\varepsilon \cdot \sqrt{t}}\right] dt,$$
for
$$t_0 = \left(\frac{(24d+96) n^4 \sqrt{d} D k^{kd/(2a)}}{\sigma\varepsilon}\right)^2.$$
Let us consider a situation reached by k-means in which there are two clusters $C_1$ and $C_2$ whose centers are at a distance of δ from each other. We denote the positions of these centers by $c_1$ and $c_2$. Let H be the bisector between $c_1$ and $c_2$. The points $c_1$ and $c_2$ are the centers of mass of the points assigned to $C_1$ and $C_2$, respectively. From this, we can conclude the following: for every point that is assigned to $C_1$ or $C_2$ and that has a distance of at least δ from the bisector H, as compensation another point must be assigned to $C_1$ or $C_2$ that has a distance of at most δ/2 from H. Hence, the total number of points assigned to $C_1$ or $C_2$ can be at most twice as large as the total number of points assigned to $C_1$ or $C_2$ that are at a distance of at most δ from H. Hence, there can only exist two centers at a distance of at most δ if one of the following two properties is met:
1. There exists a hyperplane from which more than 2d points have a distance of at most δ.
2. There exist two subsets of points whose union has cardinality at most 4d and whose centers of mass are at a distance of at most δ.
The probability that one of these events occurs can be bounded from above as follows using a union bound and Lemma 4.4 (see also Arthur and Vassilvitskii [3, Proposition 5.6]):
$$n^{2d} \left(\frac{4d\delta}{\sigma}\right)^{2d-d} + (2n)^{4d} \cdot \left(\frac{\delta}{\sigma}\right)^{d} \le \left(\frac{(4d+16) n^4 \delta}{\sigma}\right)^{d}.$$
Hence,
$$\Pr\left[\Delta \le \frac{6\sqrt{d} D k^{kd/(2a)}}{\varepsilon \cdot \sqrt{t}}\right] \le \left(\frac{\sqrt{t_0}}{\sqrt{t}}\right)^{d}$$
and, for $d \ge 3$, we obtain
$$E[T] \le t_0 + \int_{t_0}^{W} \left(\frac{\sqrt{t_0}}{\sqrt{t}}\right)^{d} dt \le t_0 + t_0^{d/2} \left[\frac{1}{(-d/2+1) \cdot t^{d/2-1}}\right]_{t_0}^{\infty} = \frac{d}{d-2} \cdot t_0 \le 2\kappa nkd \cdot t_0.$$
For $d = 2$, we obtain
$$E[T] \le t_0 + \int_{t_0}^{W} \left(\frac{\sqrt{t_0}}{\sqrt{t}}\right)^{d} dt \le t_0 + t_0 \cdot \big[\ln(t)\big]_{1}^{W} = t_0 \cdot (1 + \ln(W)) \le 2\kappa nkd \cdot t_0.$$
Altogether, this shows that the expected number of steps until the potential drops by at least 1 can be bounded from above by
$$2 + \left(k^{kd/a} + 1\right) \cdot 2\kappa nkd \cdot \left(\frac{(24d+96) n^4 \sqrt{d} D k^{kd/(2a)}}{\sigma\varepsilon}\right)^2,$$
which can, for a sufficiently large absolute constant γ, be bounded from above by
$$\gamma \cdot k^{2kd/a} \cdot nkd \cdot \left(\frac{d^2 n^4 D}{\sigma\varepsilon}\right)^2.$$
4 Iterations with at most √k Active Clusters

In this and the following section, we aim at proving the main lemmas that lead to Theorem 1.1. To do this, we distinguish two cases. In this section, we deal with the case that at most √k clusters are active. In this case, either few points change clusters, which yields a potential drop caused by the movement of the centers, or many points change clusters. Then, in particular, many points switch between two clusters, and not all of them can be close to the hyperplane bisecting the corresponding centers, which yields a potential drop due to the reassignment.
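For concreteness, one iteration of the k-means method and the notion of active clusters can be sketched as follows. This is an illustrative sketch only; the function names and the two-dimensional toy data are not from the paper.

```python
def sqdist(p, q):
    # squared Euclidean distance between two points
    return sum((a - b) ** 2 for a, b in zip(p, q))

def potential(points, centers):
    # k-means potential: sum of squared distances to the nearest center
    return sum(min(sqdist(p, c) for c in centers) for p in points)

def lloyd_step(points, centers):
    """One k-means iteration: assign every point to its nearest center,
    then move each center to the mean of its cluster. Returns the new
    centers and the indices of the active clusters, i.e. those whose
    point set changes under the next assignment."""
    nearest = lambda p, cs: min(range(len(cs)), key=lambda i: sqdist(p, cs[i]))
    old_assign = [nearest(p, centers) for p in points]
    new_centers = list(centers)
    for i in range(len(centers)):
        cluster = [p for p, a in zip(points, old_assign) if a == i]
        if cluster:  # empty clusters keep their center
            new_centers[i] = tuple(sum(xs) / len(cluster) for xs in zip(*cluster))
    new_assign = [nearest(p, new_centers) for p in points]
    active = {a for a, b in zip(old_assign, new_assign) if a != b}
    active |= {b for a, b in zip(old_assign, new_assign) if a != b}
    return new_centers, active
```

Each iteration never increases the potential, which is exactly the quantity whose guaranteed drop is bounded throughout the analysis.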
We define an epoch to be a sequence of consecutive iterations in which no cluster center assumes more than two different positions. Equivalently, there are at most two different sets $C_i'$, $C_i''$ that every cluster $C_i$ assumes. The obvious upper bound for the length of an epoch is $2^k$, which is stated also by Arthur and Vassilvitskii [3]: after that many iterations, at least one cluster must have assumed a third position. For our analysis, however, $2^k$ is too big, and we bring it down to a constant.
Lemma 4.1. The length of any epoch is less than four.

Proof. Let $x$ be any data point that changes from one cluster to another during an epoch, and let $i_1, i_2, \ldots, i_\ell$ be the indices of the different clusters to which $x$ belongs, in that order. (We have $i_j \neq i_{j+1}$, but $x$ can change back to a cluster it has already visited; so, e.g., $i_j = i_{j+2}$ is allowed.) For every $i_j$, we then have two different sets $C_{i_j}'$ and $C_{i_j}''$ with centers $c_{i_j}'$ and $c_{i_j}''$ such that $x \in C_{i_j}'' \setminus C_{i_j}'$. Since $x$ always belongs to exactly one cluster, we have $C_{i_j} = C_{i_j}'$ for all except one $j$, for which $C_{i_j} = C_{i_j}''$.

Now assume that $\ell \ge 4$. Then, when changing from $C_{i_1}$ to $C_{i_2}$, we have $\|x - c_{i_2}''\| < \|x - c_{i_4}'\|$ since $x$ prefers $C_{i_2}$ over $C_{i_4}$ and, when changing to $C_{i_4}$, we have $\|x - c_{i_4}'\| < \|x - c_{i_2}''\|$. This contradicts the assumption that $\ell \ge 4$.

Now assume that $x$ does not change from $C_{i_j}$ to $C_{i_{j+1}}$ for a couple of steps, i.e., $x$ waits until it eventually changes clusters. Then the reason for eventually changing to $C_{i_{j+1}}$ can only be one of the following: either $C_{i_j}$ has changed to some $\tilde{C}_{i_j}$, which makes $x$ prefer $C_{i_{j+1}}$; but, since $\tilde{C}_{i_j} \neq C_{i_j}', C_{i_j}''$ and $x \in \tilde{C}_{i_j}$, we have a third cluster for $C_{i_j}$. Or $C_{i_{j+1}}$ has changed to $\tilde{C}_{i_{j+1}}$, and $x$ prefers $\tilde{C}_{i_{j+1}}$; but then $\tilde{C}_{i_{j+1}} \neq C_{i_{j+1}}', C_{i_{j+1}}''$ and $x \notin \tilde{C}_{i_{j+1}}$, and we have a third cluster for $C_{i_{j+1}}$.

We can conclude that $x$ visits at most three different clusters and changes its cluster in every iteration of the epoch. Furthermore, the order in which $x$ visits its clusters is periodic with a period length of at most three. Finally, even a period length of three is impossible: suppose $x$ visits $C_{i_1}$, $C_{i_2}$, and $C_{i_3}$. Then, to go from $C_{i_j}$ to $C_{i_{j+1}}$ (arithmetic is modulo 3), we have $\|x - c_{i_{j+1}}'\| < \|x - c_{i_{j-1}}'\|$. Since this holds for $j = 1, 2, 3$, we have a contradiction.

This holds for every data point. Thus, after at most four iterations, either k-means terminates, which is fine, or some cluster assumes a third configuration, which ends the epoch, or some clustering repeats, which is impossible.
Similar to Arthur and Vassilvitskii [3], we define a key-value to be an expression of the form $K = \frac{s}{t}\cdot\mathrm{cm}(S)$, where $s, t \in \mathbb{N}$, $s \le n^2$, $t < n$, and $S \subseteq X$ is a set of at most $4d\sqrt{k}$ points. (Arthur and Vassilvitskii allow up to $4dk$ points.) For two key-values $K_1, K_2$, we write $K_1 \equiv K_2$ if and only if they have identical coefficients for every data point.

We say that X is δ-sparse if, for all key-values $K_1, K_2, K_3, K_4$ with $\|K_1 + K_2 - K_3 - K_4\| \le \delta$, we have $K_1 + K_2 \equiv K_3 + K_4$.
Lemma 4.2. The probability that the point set X is not δ-sparse is at most $n^{16d\sqrt{k}+12}\cdot\left(\frac{n^4\delta}{\sigma}\right)^{d}$.
After four iterations, one cluster has assumed a third center or k-means terminates. This yields the following lemma (see also Arthur and Vassilvitskii [3, Corollary 5.2]).
Lemma 4.3. Assume that X is δ-sparse. Then, in every sequence of four consecutive iterations that do not lead to termination and such that in each of these iterations

• at most √k clusters are active and

• each cluster gains or loses at most $2d\sqrt{k}$ points,

the potential decreases by at least $\frac{\delta^2}{4n^4}$.
We say that X is ε-separated if, for every hyperplane H, there are at most 2d points in X that are within distance ε of H. The following lemma, due to Arthur and Vassilvitskii [3, Proposition 5.6], shows that X is likely to be ε-separated.

Lemma 4.4. (Arthur, Vassilvitskii [3]) X is not ε-separated with a probability of at most $n^{2d}\cdot\left(\frac{4d\varepsilon}{\sigma}\right)^{d}$.
Given that X is ε-separated, every iteration with at most √k active clusters in which one cluster gains or loses at least $2d\sqrt{k}$ points yields a significant decrease of the potential.

Lemma 4.5. Assume that X is ε-separated. For every iteration with at most √k active clusters, the following holds: if a cluster gains or loses more than $2d\sqrt{k}$ points, then the potential drops by at least $2\varepsilon^2/n$.
This lemma is similar to Proposition 5.4 of Arthur and Vassilvitskii [3]. We present here a corrected proof based on private communication with Sergei Vassilvitskii.

Proof. If a cluster $C_i$ gains or loses more than $2d\sqrt{k}$ points in a single iteration with at most √k active clusters, then there exists another cluster $C_j$ with which $C_i$ exchanges at least $2d+1$ points. Since X is ε-separated, one of these points, say, $x$, must be at a distance of at least ε from the hyperplane bisecting the cluster centers $c_i$ and $c_j$. Assume that $x$ switches from $C_i$ to $C_j$.

Then the potential decreases by at least $\|c_i - x\|^2 - \|c_j - x\|^2 = (2x - c_i - c_j)\cdot(c_j - c_i)$. Let $v$ be the unit vector in the direction of $c_j - c_i$. Then $(2x - c_i - c_j)\cdot v \ge 2\varepsilon$. We have $c_j - c_i = \alpha v$ for $\alpha = \|c_j - c_i\|$, and hence it remains to bound $\|c_j - c_i\|$ from below. If we can prove $\alpha \ge \varepsilon/n$, then we have a potential drop of at least $(2x - c_i - c_j)\cdot\alpha v \ge \alpha\cdot 2\varepsilon \ge 2\varepsilon^2/n$, as claimed.

Let H be the hyperplane bisecting the centers of $C_i$ and $C_j$ in the previous iteration. While H does not necessarily bisect $c_i$ and $c_j$, it divides the data points belonging to $C_i$ and $C_j$ correctly. In particular, this implies that $\|c_i - c_j\| \ge \mathrm{dist}(c_i, H) + \mathrm{dist}(c_j, H)$.

Consider the at least $2d+1$ data points switching between $C_i$ and $C_j$. One of them must be at a distance of at least ε from H since X is ε-separated. Let us assume that this point switches to $C_i$. This yields $\mathrm{dist}(c_i, H) \ge \varepsilon/n$ since $C_i$ contains at most $n$ points. Thus, $\|c_i - c_j\| \ge \varepsilon/n$, which yields $\alpha \ge \varepsilon/n$.
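The algebraic identity used at the beginning of this proof, $\|c_i - x\|^2 - \|c_j - x\|^2 = (2x - c_i - c_j)\cdot(c_j - c_i)$, is elementary and can be verified numerically. The sketch below is purely illustrative; the names are not from the paper.

```python
def dot(u, v):
    # Euclidean inner product
    return sum(a * b for a, b in zip(u, v))

def switch_drop(x, ci, cj):
    """Change in x's contribution to the potential when x switches from
    the cluster with center ci to the one with center cj, computed both
    directly (lhs) and in the factored form used in the proof (rhs)."""
    lhs = dot([a - b for a, b in zip(ci, x)], [a - b for a, b in zip(ci, x)]) \
        - dot([a - b for a, b in zip(cj, x)], [a - b for a, b in zip(cj, x)])
    rhs = dot([2 * a - b - c for a, b, c in zip(x, ci, cj)],
              [c - b for b, c in zip(ci, cj)])  # second factor is cj - ci
    return lhs, rhs
```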
Now set $\delta_i = n^{-16-(16+i)\cdot\sqrt{k}}\cdot\sigma$ and $\varepsilon_i = \sigma\cdot n^{-4-i\sqrt{k}}$. Then the probability that the instance is not $\delta_i$-sparse is bounded from above by
$$n^{16d\sqrt{k}+12+4d-16d-(16+i)d\cdot\sqrt{k}} \le n^{-id\sqrt{k}}.$$
The probability that the instance is not $\varepsilon_i$-separated is bounded from above by (we use $d \le n$ and $4 \le n$)
$$n^{4d-4d-id\sqrt{k}} = n^{-id\sqrt{k}}.$$
We abbreviate the fact that an instance is $\delta_i$-sparse and $\varepsilon_i$-separated by i-nice. Now Lemmas 4.3 and 4.5 immediately yield the following lemma.

Lemma 4.6. Assume that X is i-nice. Then the number of sequences of at most four consecutive iterations, each of which with at most √k active clusters, until the potential has dropped by at least 1 is bounded from above by
$$\left(\min\left\{\tfrac{1}{4}\cdot n^{-36-(32+2i)\sqrt{k}}\cdot\sigma^2,\ 2\sigma^2\cdot n^{-9-2i\sqrt{k}}\right\}\right)^{-1} \le \frac{n^{(c+2i)\cdot\sqrt{k}}}{\sigma^2} =: S_i$$
for a suitable constant c.

The first term comes from $\delta_i$, which yields a potential drop of at least $\delta_i^2/(4n^4)$. The second term comes from $\varepsilon_i$, which yields a drop of at least $2\varepsilon_i^2/n$.
Putting the pieces together yields the main lemma of this section.
Lemma 4.7. The expected number of sequences of at most four consecutive iterations, each of which with at most √k active clusters, until the potential has dropped by at least 1 is bounded from above by $\mathrm{poly}\left(n^{\sqrt{k}}, \frac{1}{\sigma}\right)$.
5 Iterations with at least √k Active Clusters

In this section, we consider steps of the k-means algorithm in which at least √k different clusters gain or lose points. The improvement yielded by such a step can only be small if none of the cluster centers changes its position significantly due to the reassignment of points, which, intuitively, becomes increasingly unlikely the more clusters are active. We show that, indeed, if at least √k clusters are active, then with high probability one of them changes its position by $n^{-O(\sqrt{k})}$, yielding a potential drop of the same order of magnitude.
The following observation, which has also been used by Arthur and Vassilvitskii [3], relates the movement of a cluster center to the potential drop.
Lemma 5.1. If in an iteration of the k-means algorithm a cluster center changes its position from $c$ to $c'$, then the potential drops by at least $\|c - c'\|^2$.
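Lemma 5.1 rests on the standard fact that moving a center to the centroid of its cluster lowers that cluster's cost by exactly $|C|\cdot\|c - c'\|^2 \ge \|c - c'\|^2$. The following numerical check is illustrative only; the data and names are arbitrary.

```python
def cluster_cost(points, c):
    # sum of squared distances from the cluster's points to the center c
    return sum(sum((a - b) ** 2 for a, b in zip(p, c)) for p in points)

def centroid(points):
    # coordinate-wise mean of the points
    return tuple(sum(xs) / len(points) for xs in zip(*points))

def center_move_drop(points, c_old):
    """Drop in the cluster's cost when its center moves from c_old to the
    centroid, together with the squared length of the move."""
    c_new = centroid(points)
    drop = cluster_cost(points, c_old) - cluster_cost(points, c_new)
    move_sq = sum((a - b) ** 2 for a, b in zip(c_old, c_new))
    return drop, move_sq
```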
Now we are ready to prove the main lemma of this section.
Lemma 5.2. The expected number of steps with at least √k active clusters until the potential drops by at least 1 is bounded from above by $\mathrm{poly}\left(n^{\sqrt{k}}, \frac{1}{\sigma}\right)$.
Proof. We consider one step of the k-means algorithm with at least √k active clusters. Let ε be defined as in Lemma 2.1 for $a = 1$. We distinguish two cases: either one point that is reassigned during the considered iteration has a distance of at least ε from the bisector that it crosses, or all points are at a distance of at most ε from their respective bisectors. In the former case, we immediately get a potential drop of at least 2εΔ, where Δ denotes the minimal distance of two cluster centers. In the latter case, Lemma 2.1 implies that with high probability less than $kd$ points are reassigned during the considered step. We apply a union bound over the choices for these points. In the union bound, we fix not only these points but also the clusters they are assigned to before and after the step. We denote by $A_i$ the set of points that are assigned to cluster $C_i$ in both configurations, and we denote by $B_i$ and $B_i'$ the sets of points assigned to cluster $C_i$ before and after the step, respectively, except for the points in $A_i$. Analogously to Lemma 2.1, we assume that the positions of the points in $A_1 \cup \ldots \cup A_k$ are fixed adversarially, and we apply a union bound on the different partitions $A_1, \ldots, A_k$ that are realizable. Altogether, we have a union bound over less than
$$n^{\kappa kd}\cdot n^{3kd} \le n^{(\kappa+3)\cdot kd}$$
events.

Let $c_i$ be the position of the cluster center of $C_i$ before the reassignment, and let $c_i'$ be the position after the reassignment. Then
$$c_i = \frac{|A_i|\cdot\mathrm{cm}(A_i) + |B_i|\cdot\mathrm{cm}(B_i)}{|A_i| + |B_i|},$$
where $\mathrm{cm}(\cdot)$ denotes the center of mass of a point set. Since $c_i'$ can be expressed analogously, we can write the change of position of the cluster center of $C_i$ as
$$c_i - c_i' = |A_i|\cdot\mathrm{cm}(A_i)\cdot\left(\frac{1}{|A_i|+|B_i|} - \frac{1}{|A_i|+|B_i'|}\right) + \frac{|B_i|\cdot\mathrm{cm}(B_i)}{|A_i|+|B_i|} - \frac{|B_i'|\cdot\mathrm{cm}(B_i')}{|A_i|+|B_i'|}.$$
Due to the union bound, $\mathrm{cm}(A_i)$ and $|A_i|$ are fixed. Additionally, the sets $B_i$ and $B_i'$ are also fixed, but not the positions of the points in these two sets. If we considered only a single center, then we could easily estimate the probability that $\|c_i - c_i'\| \le \beta$: for this, we additionally fix all positions of the points in $B_i \cup B_i'$ except for one of them, say $b_i$. Given this, we can express the event $\|c_i - c_i'\| \le \beta$ as the event that $b_i$ assumes a position in a ball whose position depends on the fixed values and whose radius, which depends on the numbers of points $|A_i|$, $|B_i|$, and $|B_i'|$, is not larger than $n\beta$. Hence, the probability is bounded from above by
$$\left(\frac{n\beta}{\sigma}\right)^{d}.$$
However, we are interested in the probability that this is true for all centers simultaneously. Unfortunately, the events are not independent for different clusters. We estimate this probability by identifying a set of $\ell/2$ clusters whose randomness is independent enough, where $\ell \ge \sqrt{k}$ is the number of active clusters. To be more precise, we do the following: consider a graph whose nodes are the active clusters and that contains an edge between two nodes if and only if the corresponding clusters exchange at least one point. We identify a dominating set in this graph, i.e., a subset of nodes that covers the graph in the sense that every node not belonging to this subset has at least one edge into the subset. We can assume that the dominating set, which we identify, contains at most half of the active clusters. (In order to find such a dominating set, start with the graph and throw out edges until the remaining graph is a tree. Then put the nodes on odd layers to the left side and the nodes on even layers to the right side, and take the smaller side as the dominating set.)
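The parenthetical construction can be sketched as follows: 2-color a spanning forest of the exchange graph by BFS-layer parity and keep, per component, the smaller color class. In a tree, every vertex has all of its neighbours in the opposite class, so each class dominates the other, and the smaller one contains at most half of the component's nodes. Function and variable names are illustrative, not from the paper; the sketch assumes every node has at least one neighbour, as is the case for active clusters, which exchange points.

```python
from collections import deque

def small_dominating_set(nodes, edges):
    """Return a dominating set containing at most half of the nodes of
    each connected component, via BFS-tree 2-coloring (assumes no
    isolated nodes)."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, chosen = set(), set()
    for s in nodes:
        if s in seen:
            continue
        color = {s: 0}          # BFS-layer parity over a spanning tree
        seen.add(s)
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    seen.add(w)
                    q.append(w)
        side = [v for v in color if color[v] == 0]
        other = [v for v in color if color[v] == 1]
        chosen.update(side if len(side) <= len(other) else other)
    return chosen
```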
For every active cluster $C_i$ that is not in the dominating set, we do the following: we assume that all the positions of the points in $B_i \cup B_i'$ are already fixed except for one of them. Given this, we can use the aforementioned estimate for the probability of $\|c_i - c_i'\| \le \beta$. If we iterate this over all clusters not in the dominating set, we can always use the same estimate; the reason is that the choice of the subset guarantees that, for every node not in the subset, we have a point whose position is not fixed yet. This yields an upper bound of
$$\left(\frac{n\beta}{\sigma}\right)^{d\ell/2}.$$
Combining this probability with the number of choices in the union bound yields a bound of
$$n^{(\kappa+3)\cdot kd}\cdot\left(\frac{n\beta}{\sigma}\right)^{d\ell/2} \le n^{(\kappa+3)\cdot kd}\cdot\left(\frac{n\beta}{\sigma}\right)^{d\sqrt{k}/2}.$$
For
$$\beta = \frac{\sigma}{n^{(4\kappa+6)\cdot\sqrt{k}+1}},$$
the probability can be bounded from above by $n^{-\kappa kd} \le W^{-1}$.
Now we also take into account the failure probability of $2W^{-1}$ from Lemma 2.1. This yields that, with a probability of at least $1 - 3W^{-1}$, the potential drops in every iteration in which at least √k clusters are active by at least
$$\Gamma := \min\{2\varepsilon\Delta, \beta^2\} \ge \min\left\{\frac{\sigma^8\Delta}{1296\,n^{14+8\kappa}D^6 d},\ \frac{\sigma^2}{n^{(8\kappa+12)\cdot\sqrt{k}+2}}\right\} \ge \min\left\{\Delta\cdot\mathrm{poly}\left(n^{-1}, \sigma\right),\ \mathrm{poly}\left(n^{-\sqrt{k}}, \sigma\right)\right\}$$
since $d \le n$ and D is polynomially bounded in σ and n. The number T of steps with at least √k active clusters until the potential has dropped by one can only exceed t if $\Gamma \le 1/t$. Hence, $\mathrm{E}[T]$ is bounded from above by
$$\sum_{t=1}^{\infty}\Pr[T \ge t] + 3W^{-1}\cdot W \le 3 + \int_{0}^{\infty}\Pr[T \ge t]\,dt \le 4 + \int_{1}^{\infty}\Pr\left[\Gamma \le \tfrac{1}{t}\right]dt \le 4 + \beta^{-2} + \int_{\beta^{-2}}^{\infty}\Pr\left[\Gamma \le \tfrac{1}{t}\right]dt$$
$$\le 4 + \beta^{-2} + \int_{\beta^{-2}}^{\infty}\Pr\left[\Delta\cdot\mathrm{poly}\left(\tfrac{1}{n}, \sigma\right) \le \tfrac{1}{t}\right]dt \le 4 + \beta^{-2} + \int_{\beta^{-2}}^{\infty}\Pr\left[\Delta \le \tfrac{1}{t}\cdot\mathrm{poly}\left(n, \tfrac{1}{\sigma}\right)\right]dt$$
$$\le 4 + \beta^{-2} + \int_{\beta^{-2}}^{\infty}\min\left\{1,\ \left(\frac{(4d+16)\cdot n^4\cdot\mathrm{poly}\left(n, \sigma^{-1}\right)}{t\cdot\sigma}\right)^{d}\right\}dt = \mathrm{poly}\left(n^{\sqrt{k}}, \tfrac{1}{\sigma}\right),$$
where the integral is bounded from above as in the proof of Lemma 3.2.
6 A Polynomial Bound in One Dimension

In this section, we consider a one-dimensional set $X \subseteq \mathbb{R}$ of points. The aim of this section is to prove that the expected number of steps until the potential has dropped by at least 1 is bounded by a polynomial in n and 1/σ.
We say that the point set X is ε-spreaded if the following conditions are fulfilled:

• There is no interval of length ε that contains three or more points of X.

• For any four points $x_1, x_2, x_3, x_4$, where $x_2$ and $x_3$ may denote the same point, we have $|x_1 - x_2| > \varepsilon$ or $|x_3 - x_4| > \varepsilon$.
The following lemma justifies the notion of ε-spreadedness.
Lemma 6.1. Assume that X is ε-spreaded. Then the potential drops by at least $\frac{\varepsilon^2}{4n^2}$ in every iteration.

Assume that X is ε-spreaded. Then the number of iterations until the potential has dropped by at least 1 is at most $4n^2/\varepsilon^2$ by the lemma above. Let us estimate the probability that X is ε-spreaded.
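For small one-dimensional instances, ε-spreadedness can be checked directly from the definition. The brute-force sketch below is illustrative only and assumes the points are distinct.

```python
from itertools import combinations

def is_spreaded(xs, eps):
    """Check the two defining conditions of an eps-spreaded set of
    distinct one-dimensional points (brute force, small inputs only)."""
    # condition 1: no interval of length eps contains three or more points
    for a, b, c in combinations(xs, 3):
        if max(a, b, c) - min(a, b, c) <= eps:
            return False
    # condition 2: no two distinct (possibly point-sharing) pairs of
    # points are both at distance <= eps
    close_pairs = [(a, b) for a, b in combinations(xs, 2) if abs(a - b) <= eps]
    return len(close_pairs) <= 1
```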
Lemma 6.2. The probability that X is not ε-spreaded is bounded from above by $\frac{2n^4\varepsilon^2}{\sigma^2}$.
Now we have all ingredients for the proof of the main lemma of this section.
Lemma 6.3. The number of iterations of k-means until the potential has dropped by at least 1 is bounded by a polynomial in n and 1/σ.
Proof. Let T be the random variable of the number of iterations until the potential has dropped by at least 1. If $T \ge t$, then X cannot be ε-spreaded for any ε with $4n^2/\varepsilon^2 \le t$. Thus, in this case, X is not ε-spreaded for $\varepsilon = \frac{2n}{\sqrt{t}}$. In the worst case, k-means runs for at most $n^{\kappa k}$ iterations. Hence, the expected running time can be bounded by
$$\sum_{t=1}^{n^{\kappa k}}\Pr[T \ge t] \le \sum_{t=1}^{n^{\kappa k}}\Pr\left[X \text{ is not } \tfrac{2n}{\sqrt{t}}\text{-spreaded}\right] \le \sum_{t=1}^{n^{\kappa k}}\frac{8n^4 n^2}{t\sigma^2} \in O\left(\frac{n^6}{\sigma^2}\cdot\log\left(n^{\kappa k}\right)\right) \subseteq O\left(\frac{n^7}{\sigma^2}\cdot\log n\right).$$
Finally, we remark that, by choosing $\varepsilon = \frac{\sigma}{n^{2+c}}$, we obtain that the probability that the number of iterations until the potential has dropped by at least 1 exceeds a polynomial in n and 1/σ is bounded from above by $O(n^{-2c})$. This yields a bound on the running time of k-means for $d = 1$ that holds with high probability.

7 Putting the Pieces Together
In the previous sections, we have only analyzed the expected number of iterations until the potential drops by at least 1. To bound the expected number of iterations that k-means needs to terminate, we need an upper bound on the potential in the beginning. To get this, we use the following lemma.
Lemma 7.1. Let x be a one-dimensional Gaussian random variable with standard deviation σ and mean $\mu \in [0, 1]$. Then, for all $t \ge 1$,
$$\Pr\left[x \notin [-t, 1+t]\right] < \sigma\cdot\exp\left(-\frac{t^2}{2\sigma^2}\right).$$

For $D = \sqrt{2\sigma^2\ln\left(n^{1+\kappa kd}d\sigma\right)} \le \mathrm{poly}(n, \sigma)$, the probability that any component of any of the n data points is not contained in the hypercube $\mathcal{D} = [-D, 1+D]^d$ is bounded from above by $n^{-\kappa kd} \le W^{-1}$. This implies that $X \subseteq \mathcal{D}$ with a probability of at least $1 - W^{-1}$. If $X \subseteq \mathcal{D}$, then, after the first iteration, the potential is bounded from above by $nd\cdot(2D+1)^2 = \mathrm{poly}(n)$.
In the beginning, we made the assumption that σ ≤ 1. While this covers the small values of σ, which we consider more relevant, the assumption is only a technical requirement, and we can get rid of it: the number of iterations that k-means needs is invariant under scaling of the point set X. Now assume that σ > 1. Then we consider X scaled down by 1/σ, which corresponds to the following model: the adversary chooses points from the hypercube $[0, 1/\sigma]^d \subseteq [0, 1]^d$, and then we add d-dimensional Gaussian vectors with standard deviation 1 to every data point. The expected running time that k-means needs on this instance is bounded from above by the running time needed for adversarial points chosen from $[0, 1]^d$ with σ = 1, which is $\mathrm{poly}(n) \le \mathrm{poly}(n, 1/\sigma)$.
The remaining parts of the proofs of the theorems and the corollary, which are based on straightforward arguments, can be found in the full version of this paper.

8 Conclusions
We have proved two upper bounds for the smoothed running time of the k-means method: the first bound is $\mathrm{poly}(n^{\sqrt{k}}, 1/\sigma)$. The second bound is $k^{kd}\cdot\mathrm{poly}(n, 1/\sigma)$, which decouples the exponential growth in k and d from the number of points and the standard deviation. In particular, this yields a smoothed running time that is polynomial in n and 1/σ for $k, d \in O(\sqrt{\log n/\log\log n})$. The obvious question now is whether a bound exists that is polynomial in n and 1/σ, without exponential dependence on k or d. We believe that such a bound exists. However, we suspect that new techniques are required to prove it; bounding the smallest possible improvement from below might not be sufficient. The reason for this is that the number of possible partitions, and thus the number of possible k-means steps, grows exponentially in k, which makes it more likely for small improvements to exist as k grows.
Finally, we are curious if our techniques carry over to other heuristics. In particular Lemma 2.1 is quite general, as it bounds the number of points from above that are close to the boundaries of the Voronoi partitions that arise during the execution of k-means. In fact, we believe that a slightly weaker version of Lemma 2.1 is also true for arbitrary Voronoi partitions and not only for those arising during the execution of k-means. This
insight might turn out to be helpful in other contexts as well.
We thank David Arthur, Dan Spielman, Shang-Hua Teng, and Sergei Vassilvitskii for fruitful discussions and comments.
[1] David Arthur. Smoothed analysis of the k-means method. Manuscript, 2008.
[2] David Arthur and Sergei Vassilvitskii. How slow is the k-means method? In Nina Amenta and Otfried Cheong, editors, Proc. of the 22nd ACM Symposium on Computational Geometry (SOCG), pages 144–153. ACM Press, 2006.
[3] David Arthur and Sergei Vassilvitskii. Worst-case and smoothed analysis of the ICP algorithm, with an application to the k-means method. In Proc. of the 47th Ann. IEEE Symp. on Foundations of Comp. Science (FOCS), pages 153–164. IEEE Computer Society, 2006.
[4] Mihai Bădoiu, Sariel Har-Peled, and Piotr Indyk. Approximate clustering via core-sets. In Proc. of the 34th Ann. ACM Symposium on Theory of Computing (STOC), pages 250–257. ACM Press, 2002.
[5] Pavel Berkhin. Survey of clustering data mining techniques. Technical report, Accrue Software, San Jose, CA, USA, 2002.
[6] Richard O. Duda, Peter E. Hart, and David G. Stork. Pattern Classification. John Wiley & Sons, 2000.
[7] Sariel Har-Peled and Bardia Sadri. How fast is the k-means method? Algorithmica, 41(3):185–202, 2005.
[8] Mary Inaba, Naoki Katoh, and Hiroshi Imai. Variance-based k-clustering algorithms by Voronoi diagrams and randomization. IEICE Transactions on Information and Systems, E83-D(6):1199–1206, 2000.
[9] Amit Kumar, Yogish Sabharwal, and Sandeep Sen. A simple linear time (1 + ε)-approximation algorithm for k-means clustering in any dimensions. In Proc. of the 45th Ann. IEEE Symp. on Foundations of Comp. Science (FOCS), pages 454–462, 2004.
[10] Stuart P. Lloyd. Least squares quantization in PCM. IEEE Transactions on Information Theory, 28(2):129– 137, 1982.
[11] Jiří Matoušek. On approximate geometric k-clustering. Discrete and Computational Geometry, 24(1):61–84, 2000.
[12] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis of algorithms: Why the simplex algorithm usually takes polynomial time. Journal of the ACM, 51(3):385–463, 2004.
[13] Daniel A. Spielman and Shang-Hua Teng. Smoothed analysis of algorithms and heuristics: Progress and open questions. In Luis M. Pardo, Allan Pinkus, Endre S¨uli, and Michael J. Todd, editors, Foundations of Computational Mathematics, Santander 2005, pages 274–342. Cambridge University Press, 2006.
\section{Introduction}\label{SectionIntro}
The problem of state observability for systems driven by unknown inputs (UI) is a fundamental problem in control theory. This problem was introduced and firstly investigated in the seventies \cite{Basile69,Bha78,Guido71,Wang75}. A huge effort has then been devoted to design observers for both linear and nonlinear systems in presence of UI, e.g., \cite{Barb07, Barb09,Che06,Daro94,Floq04,Floq07,Gua91,Ha04,Ham10,Hou92,Koe01,Koe08,Yan88}.
The goal of this paper is not to design new observers for systems driven by UI but to provide simple analytic conditions in order to check the weak local observability of the state. The obtained results hold for systems whose dynamics are nonlinear in the state and affine in both the known and the unknown inputs. Additionally, the unknown inputs are supposed to be smooth functions of time (specifically, they are supposed to be $\mathcal{C}^k$, for a suitable integer $k$).
In \cite{Her77} the observability properties of a nonlinear system are derived starting from the definition of indistinguishable states. According to this definition, the Lie derivatives of any output computed along any direction allowed by the system dynamics take the same values at the states which are indistinguishable.
Hence, if a given state $x$ belongs to the indistinguishable set of a state $x_0$ (i.e., to $I_{x_0}$), all the Lie derivatives computed at $x$ and at $x_0$ take the same values. This is a fundamental property. In particular, based on this property, the observability rank condition was introduced in \cite{Her77}.
Our first objective is to extend the observability rank condition. For, we introduce a new definition of {\it indistinguishable states} for the case UI (section \ref{SectionDefinitions}). Then, in section \ref{SectionExtendedSystem} we introduce a new system by a suitable state extension. For this extended system, we show that, the Lie derivatives of the outputs up to a given order, take the same values at the states which are indistinguishable.
In other words, the new system satisfies the same property derived in \cite{Her77} mentioned above and this allows us to extend the observability rank condition (section \ref{SectionEORC}). We will refer to this extension as to the Extended Observability Rank Condition ($EORC$).
The new system is obtained by a state augmentation. In particular, the augmented state is obtained by including the unknown inputs together with their time-derivatives up to given order. This augmented state has already been considered in the past. Specifically, in \cite{Belo10} the authors adopted this augmented state to investigate the observability properties of a fundamental problem in the framework of mobile robotics (the bearing SLAM). In particular, starting from the idea of including the time-derivatives of the unknown input in the state, in \cite{Belo10} a sufficient condition for the state observability has been provided.
The $EORC$ $~$is based on the computation of a codistribution defined in the augmented space. In other words, the $EORC$ $~$allows us to check the weak local observability of the original state together with its extension and not directly of the original state. This makes the computational cost dependent on the dimension of the augmented state. Additionally, the $EORC$ $~$ only provides sufficient conditions for the weak local observability of the original state since the state augmentation can be continued indefinitely.
For these reasons, the paper focuses on the following two fundamental issues:
\begin{itemize}
\item Understanding whether it is possible to derive the weak local observability of the original state by computing a codistribution defined in the original space, namely a codistribution consisting of covectors of the same dimension as the original state.
\item Understanding if there exists a given augmented state such that, by further augmenting the state, the observability properties of the original state provided by $EORC$$~$ remain unvaried.
\end{itemize}
\noindent Both these issues have been fully addressed in the case of a single unknown input (see theorems \ref{TheoremSeparation} and \ref{TheoremStop}).
Thanks to the result stated by theorem \ref{TheoremSeparation} (section \ref{SectionSeparation}), the algorithm in definition \ref{DefinitionOmega} in section \ref{SectionSeparation} (for the case of a single known input) and in definition \ref{DefinitionOmegaE} in section \ref{SectionExtension} (for the case of multiple known inputs) can be used to obtain the entire observable codistribution. In other words, the observability properties of the original state are obtained by a very simple algorithm.
As will be seen, the analytic derivations required to prove theorem \ref{TheoremSeparation} are complex, and we are currently extending them to the case of multiple unknown inputs. Theorem \ref{TheoremStop} (section \ref{SectionStop}) ensures the convergence of the algorithm in a finite number of steps, and it also provides the criterion to establish that this convergence has been reached. This proof, too, is based on several tricky and complex analytical steps.
Both theorems \ref{TheoremSeparation} and \ref{TheoremStop} are first proved in the case of a single known input (sections \ref{SectionSeparation} and \ref{SectionStop}) but in section \ref{SectionExtension} their validity is extended to the case of multiple known inputs.
All the theoretical results are illustrated in section \ref{SectionApplicationsSystem} by deriving the observability properties of several nonlinear systems driven by unknown inputs.
\section{Basic Definitions}\label{SectionDefinitions}
In the sequel we will refer to a nonlinear control system with $m_u$ known inputs ($ u \equiv [u_1,\cdots,u_{m_u}]^T$) and $m_w$ unknown inputs or disturbances ($w\equiv [w_1,\cdots,w_{m_w}]^T$). The state is the vector $ x \in M$, with $M$ an open set of $\mathbb{R}^n$. We assume that the dynamics are nonlinear with respect to the state and affine with respect to the inputs (both known and unknown). Finally, for the sake of simplicity, we will refer to the case of a single output $y$ (the extension to multiple outputs is straightforward). Our system is characterized by the following equations:
\begin{equation}\label{EquationStateEvolution}
\left\{\begin{aligned}
\dot{ x } &= f_0 ( x ) + \sum_{i=1}^{m_u} f_i ( x ) u_i + \sum_{j=1}^{m_w} g_j ( x ) w_j \\
y &= h( x ) \\
\end{aligned}\right.
\end{equation}
\noindent where $ f_i ( x )$, $i=0,1,\cdots,m_u$, and $ g_j ( x )$, $j=1,\cdots,m_w$, are vector fields in $M$ and the function $h( x )$ is a scalar function defined on the open set $M$. For the sake of simplicity, we will assume that all these functions are analytic functions in $M$.
Let us consider the time interval $\mathcal{I}\equiv [0, ~T]$. Note that, since the equations in (\ref{EquationStateEvolution}) do not depend explicitly on time, this can be considered as a general time interval of length $T$. In the sequel, we will assume that the solution of (\ref{EquationStateEvolution}) exists in $\mathcal{I}$ and we will denote by $ x(t; ~ x_0; ~u; ~w )$ the state at a given time $t\in\mathcal{I}$, when $ x(0)= x_0$ and the known input and the disturbance are $u(t)$ and $w(t)$, respectively, $\forall t\in\mathcal{I}$.
\vskip .2cm
\noindent We introduce the following definition:
\begin{df}[Indistinguishable states in presence of UI]\label{DefindistinguishableStates}
Two states $x_a$ and $x_b$ are indistinguishable if, for any $ u (t)$ (the known input vector function), there exist $w_a (t)$ and $w_b (t)$ (i.e., two unknown input vector functions in general, but not necessarily, different from each other) such that $h( x(t; ~ x_a; ~u; ~w_a ))=h( x(t; ~ x_b; ~u; ~w_b ))$ $\forall t\in\mathcal{I}$.
\end{df}
\noindent This definition states that, if $x_a$ and $x_b$ are indistinguishable, then, for any known input, by looking at the output during the time interval $\mathcal{I}$, we cannot conclude whether the initial state was $x_a$ with the disturbance $w_a$ or $x_b$ with the disturbance $w_b$.
We remark that, contrary to the definition of indistinguishable states in the case without disturbances, the new definition does not establish an equivalence relation. Indeed, we can have $x_a$ and $x_b$ indistinguishable, and $x_b$ and $x_c$ indistinguishable, while $x_a$ and $x_c$ are not indistinguishable. As in the case of known inputs, given $ x_0 $, the indistinguishable set $I_{x_0}$ is the set of all the states $x$ such that $x$ and $x_0$ are indistinguishable.
Starting from this definition, we can use exactly the same definitions of observability and weak local observability adopted in the case without disturbances.
\section{Extended system and basic properties}\label{SectionExtendedSystem}
In order to extend the observability rank condition to the case of unknown inputs we introduce a new system (the extended system) such that its Lie derivatives are constant on the indistinguishable sets.
The new system will be denoted by $\Sigma^{(k)}$. It is simply obtained by extending the original state by including the unknown inputs together with their time derivatives. Specifically, we denote by $ ^kx $ the extended state that includes the time derivatives up to the $(k-1)-$order:
\begin{equation}\label{EquationExtendedState}
^kx \equiv [ x^T, ~ w^T, ~ w^{(1)~T}, ~\cdots,~ w^{(k-1)~T}]^T
\end{equation}
\noindent where $w^{(k)} \equiv \frac{d^k w}{dt^k}$ and $^kx\in M^{(k)}$, with $M^{(k)}$ an open set of $\mathbb{R}^{n+k m_w}$. From (\ref{EquationStateEvolution}) it is immediate to obtain the dynamics for the extended state:
\begin{equation}\label{EquationExtendedStateEvolution}
\begin{aligned}
^k\dot{x} &= f_0^{(k)} ( ^kx ) + \sum_{i=1}^{m_u} f_i^{(k)} ( x ) u_i + \sum_{j=1}^{m_w} 1^{n+(k-1) m_w+j}_{{n+k m_w}} w_j^{(k)} \\
\end{aligned}
\end{equation}
\noindent where:
\begin{equation}\label{EquationF_0^E}
f_0^{(k)} ( ^kx ) \equiv \left[\begin{array}{c}
f_0 ( x ) + \sum_{i=1}^{m_w} g_i ( x ) w_i\\
w^{(1)} \\
w^{(2)} \\
\cdots\\
w^{(k-1)} \\
0_{m_w} \\
\end{array}
\right]
\end{equation}
\begin{equation}\label{EquationF_i^E}
f_i^{(k)} ( x ) \equiv \left[\begin{array}{c}
f_i ( x )\\
0_{k m_w} \\
\end{array}
\right]
\end{equation}
\noindent and we denoted by $0_{m}$ the $m-$dimensional zero column vector and by $1^{l}_{m}$ the $m-$dimensional unit column vector, with $1$ in the $l^{th}$ position and $0$ elsewhere. We remark that the resulting system still has $m_u$ known inputs and $m_w$ disturbances. However, while the $m_u$ known inputs coincide with the original ones, the $m_w$ unknown inputs are now the $k-$order time derivatives of the original disturbances. The state evolution depends on the known inputs via the vector fields $f_i^{(k)}$, ($i=1,\cdots,m_u$) and it depends on the disturbances via the unit vectors $1^{n+(k-1) m_w+j}_{{n+k m_w}}$, ($j=1,\cdots,m_w$). Finally, we remark that only the vector field $f_0^{(k)}$ depends on the new state elements.
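As an illustration of this state augmentation, the extended drift in (\ref{EquationF_0^E}) can be assembled symbolically. The following sketch uses sympy on a toy two-state system with a single disturbance ($m_w=1$) and $k=3$; the fields $f_0$ and $g_1$ below are made-up illustrative choices, not an example from this paper:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')

# Toy single-disturbance system (m_w = 1); the fields are illustrative only
f0 = sp.Matrix([x2, 0])                 # drift f_0(x)
g1 = sp.Matrix([1, x1])                 # disturbance field g_1(x)

k = 3                                   # keep w, w^(1), w^(2) in the extended state
w = sp.Matrix(sp.symbols('w:%d' % k))   # w[j] stands for w^(j)

# Extended drift f_0^(k): [ f_0 + g_1 w ; w^(1) ; ... ; w^(k-1) ; 0 ]
f0_ext = sp.Matrix.vstack(f0 + g1 * w[0],   # original dynamics plus disturbance
                          w[1:, :],          # each derivative is driven by the next
                          sp.zeros(1, 1))    # w^(k-1) is driven by the new input w^(k)
print(f0_ext.T)
```

The printed vector stacks $f_0+g_1 w$, the derivatives $w^{(1)}, w^{(2)}$ and a final zero, matching the block structure of (\ref{EquationF_0^E}) with $n+k m_w=5$ components.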
\noindent In the rest of this section we derive several properties satisfied by $\Sigma^{(k)}$.
\begin{lm}\label{LmLieDer1}
Let us consider the system $\Sigma^{(k)}$. The Lie derivatives of the output up to the $m^{th}$ order ($m\le k$) are independent of $w_j^{(f)}$, $j=1,\cdots,m_w$, $\forall f\ge m$.
\end{lm}
{\bf Proof:~}{ We proceed by induction on $m$ for any $k$. When $m=0$ we only have one zero-order Lie derivative (i.e., $h(x)$), which only depends on $x$, namely it is independent of $w^{(f)}$, $\forall f \ge 0$. Let us assume that the assertion is true for $m$ and let us prove that it holds for $m+1$. If it is true for $m$, any Lie derivative up to the $m^{th}$ order is independent of $w^{(f)}$, for any $f\ge m$. In other words, the analytical expression of any Lie derivative up to the $m-$order is represented by a function $g(x,w,w^{(1)},\cdots,w^{(m-1)})$. Hence, $\nabla g = [\frac{\partial g}{\partial x}, \frac{\partial g}{\partial w}, \frac{\partial g}{\partial w^{(1)}}, \cdots, \frac{\partial g}{\partial w^{(m-1)}}, 0_{(k-m)m_w}]$. It is immediate to realize that the product of this gradient by any vector field in (\ref{EquationExtendedStateEvolution}) depends at most on $w^{(m)}$, i.e., it is independent of $w^{(f)}$, $\forall f \ge m+1$ $\blacksquare$}
\vskip .2cm
\noindent Two simple consequences of this lemma are the following properties:
\begin{pr}\label{PrLieDer1}
Let us consider the system $\Sigma^{(k)}$. Any Lie derivative of the output up to the $k^{th}$ order computed along at least one of the vectors $1^{n+(k-1) m_w+j}_{{n+k m_w}}$ ($ j=1, \cdots,m_w$) is identically zero.
\end{pr}
{\bf Proof:~}{ From the previous lemma it follows that all the Lie derivatives, up to the $(k-1)-$order are independent of $w^{(k-1)}$, which are the last $m_w$ components of the extended state in (\ref{EquationExtendedState}). Then, the proof follows from the fact that any vector among $1^{n+(k-1) m_w+j}_{{n+k m_w}}$ ($ j=1, \cdots,m_w$) has the first $n+(k-1)m_w$ components equal to zero $\blacksquare$}
\begin{pr}\label{PrLieDer2}
The Lie derivatives of the output up to the $k^{th}$ order along any vector field $f_0^{(k)}$, $f_1^{(k)} , \cdots, f_{m_u}^{(k)}$ for the system $\Sigma^{(k)}$ coincide with the same Lie derivatives for the system $\Sigma^{(k+1)}$.
\end{pr}
{\bf Proof:~}{ We proceed by induction on $m$ for any $k$. When $m=0$ we only have one zero-order Lie derivative (i.e., $h(x)$), which is obviously the same for the two systems, $\Sigma^{(k)}$ and $\Sigma^{(k+1)}$. Let us assume that the assertion is true for $m$ and let us prove that it holds for $m+1\le k$. If it is true for $m$, any Lie derivative up to the $m^{th}$ order is the same for the two systems. Additionally, from lemma \ref{LmLieDer1}, we know that these Lie derivatives are independent of $w^{(f)}$, $\forall f \ge m$. The proof follows from the fact that the first $n+mm_w$ components of $f_0^{(k)} $, $ f_1^{(k)} , \cdots, f_{m_u}^{(k)}$ coincide with the first $n+mm_w$ components of $f_0^{(k+1)}$, $f_1^{(k+1)} , \cdots, f_{m_u}^{(k+1)}$ when $m< k$ $\blacksquare$}
\vskip .2cm
In the sequel we will use the notation: $ \xi \equiv [ w^T, ~ w^{(1)~T}, ~\cdots,~ w^{(k-1)~T}]^T$. In this notation we have $ ^kx = [ x^T, ~ \xi^T]^T$.
We also denote by $\Sigma^{(0)}$ the original system, i.e., the one characterized by the state $x$ and the equations in (\ref{EquationStateEvolution}). The definition \ref{DefindistinguishableStates}, given for $\Sigma^{(0)}$, can be applied to $\Sigma^{(k)}$. Specifically, in $\Sigma^{(k)}$, two states $ [x_a, \xi_a] $ and $ [x_b, \xi_b] $ are indistinguishable if, for any $u(t)$ (the known inputs), there exist two vector functions $w_a^{(k)} (t)$ and $w_b^{(k)} (t)$ (the $k^{th}$ time derivative of two disturbance vectors) such that, $h( x(t; ~ [x_a, \xi_a]; ~u; ~w_a^{(k)} ))=h( x(t; ~ [x_b, \xi_b]; ~u; ~w_b^{(k)} ))$ $\forall t\in\mathcal{I}$.
\noindent The following fundamental result holds:
\begin{pr}\label{PrConstantLieDer}
If $[x_a, \xi_a]$ and $[x_b, \xi_b]$ are indistinguishable in $\Sigma^{(k)}$ then the Lie derivatives of the output up to the $k^{th}$-order computed on these points take the same values.
\end{pr}
{\bf Proof:~}{We consider a piecewise-constant input $\tilde{u}$ as follows ($i=1,\cdots,m_u$):
\begin{equation}\label{EquationInputPiecewise}
\tilde{u}_i(t)=
\left\{\begin{aligned}
&u_i^1 ~~~t\in[0, ~t_1)\\
&u_i^2 ~~~t\in[t_1, ~t_1+t_2)\\
&\cdots \\
&u_i^g ~~~t\in[t_1+t_2+\cdots+t_{g-1}, ~t_1+t_2+\cdots+t_{g-1}+t_g)\\
\end{aligned}\right.
\end{equation}
\noindent Since $[x_a, \xi_a]$ and $[x_b, \xi_b]$ are indistinguishable in $\Sigma^{(k)}$, there exist two disturbance functions $w_a^{(k)} (t)$ and $w_b^{(k)} (t)$ such that:
\begin{equation}\label{Equationha=hb}
h( x(t; ~ [x_a, \xi_a]; ~\tilde{u}; ~w_a^{(k)} ))=h( x(t; ~ [x_b, \xi_b]; ~\tilde{u}; ~w_b^{(k)} ))
\end{equation}
\noindent $\forall t\in [0, ~t_1+t_2+\cdots+t_{g-1}+t_g)\subset\mathcal{I}$. On the other hand, by taking the two quantities in (\ref{Equationha=hb}) at $t=t_1+t_2+\cdots+t_{g-1}+t_g$, we can consider them as functions of the $g$ arguments $t_1,t_2,\cdots,t_g$. Hence, by differentiating with respect to all these variables, we also have:
\begin{equation}\label{EquationhaD=hbD}
\frac{\partial^g h( x(t_1+\cdots+t_g; ~ [x_a, \xi_a]; ~\tilde{u}; ~w_a^{(k)} ))}{\partial t_1\partial t_2\cdots\partial t_g}=
\end{equation}
\[
=\frac{\partial^g h( x(t_1+\cdots+t_g; ~ [x_b, \xi_b]; ~\tilde{u}; ~w_b^{(k)} ))}{\partial t_1\partial t_2\cdots\partial t_g}
\]
\noindent By computing the previous derivatives at $t_1=t_2=\cdots=t_g=0$ and by using proposition \ref{PrLieDer1} we obtain, if $g\le k$:
\begin{equation}\label{EquationLDa=LDb}
\mathcal{L}^g_{\theta_1 \theta_2 \cdots \theta_g} h\left|_{\begin{aligned}
&x=x_a\\
&\xi=\xi_a\\
\end{aligned}}\right.=\mathcal{L}^g_{\theta_1 \theta_2 \cdots \theta_g} h\left|_{\begin{aligned}
&x=x_b\\
&\xi=\xi_b\\
\end{aligned}}\right.
\end{equation}
\noindent where $\theta_h=f_0^{(k)}+\sum_{i=1}^{m_u}f_i^{(k)}u_i^h$, $h=1,\cdots,g$. The equality in (\ref{EquationLDa=LDb}) must hold for all possible choices of $u_1^h,\cdots,u_{m_u}^h$. By appropriately selecting these $u_1^h,\cdots,u_{m_u}^h$, we finally obtain:
\begin{equation}
\mathcal{L}^g_{v_1 v_2 \cdots v_g} h\left|_{\begin{aligned}
&x=x_a\\
&\xi=\xi_a\\
\end{aligned}}\right.=\mathcal{L}^g_{v_1 v_2 \cdots v_g} h\left|_{\begin{aligned}
&x=x_b\\
&\xi=\xi_b\\
\end{aligned}}\right.
\end{equation}
\noindent where $v_1 v_2 \cdots v_g$ are vector fields belonging to the set $\{ f_0^{(k)},f_1^{(k)},\cdots,f_{m_u}^{(k)}\}$ $\blacksquare$}
\vskip.2 cm
\noindent In \cite{Her77} the concept of $V-$indistinguishable states was also defined, with $V$ a subset of the definition set (in our case $V\subseteq M^{(k)}$) that includes the two considered states. From this definition and the previous proof we can weaken the assumptions of the previous proposition. Specifically, we have the following:
\begin{Rm}\label{Remark}
The statement of proposition \ref{PrConstantLieDer} also holds if $[x_a, \xi_a]$ and $[x_b, \xi_b]$ are $V-$indistinguishable, with $V$ any open subset of $M^{(k)}$ such that $[x_a, \xi_a],[x_b, \xi_b] \in V$.
\end{Rm}
\noindent Thanks to the results stated by propositions \ref{PrLieDer2} and \ref{PrConstantLieDer} we will introduce the extension of the observability rank condition in the next section.
\section{Extension of the Observability Rank condition}\label{SectionEORC}
According to the observability rank condition, the weak local observability of the system in (\ref{EquationStateEvolution}) with $m_w=0$ at a given point $x_0$ can be investigated
by analyzing the codistribution generated by the gradients of the Lie derivatives of its output. Specifically, if the dimension of this codistribution is equal to the dimension of the state on a given neighbourhood of $x_0$, we conclude that the state is weakly locally observable at $x_0$ (theorem $3.1$ in \cite{Her77}).
We can also check the weak local observability of a subset of the state components. Specifically, a given component of the state is weakly locally observable at $x_0$, if its gradient belongs to the aforementioned codistribution\footnote{A component of the state is observable at $x_0$ if it is constant on the indistinguishable set $I_{x_0}$.}.
The proof of theorem $3.1$ in \cite{Her77} is based on the fact that all the Lie derivatives (up to any order) of the output computed along any direction allowed by the system dynamics take the same values at the states which are indistinguishable.
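As a concrete illustration of this rank test in the case $m_w=0$, the gradients of the Lie derivatives can be computed symbolically. The sketch below uses sympy on a hypothetical two-state system (the fields $f_0$, $f_1$ and the output $h$ are illustrative choices, not taken from this paper):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
x = sp.Matrix([x1, x2])

# Hypothetical system (m_w = 0): xdot = f0(x) + f1(x) u, y = h(x)
f0 = sp.Matrix([x2, 0])
f1 = sp.Matrix([0, 1])
h = x1

def lie(phi, eta):
    """First-order Lie derivative of the scalar phi along the vector field eta."""
    return (sp.Matrix([phi]).jacobian(x) * eta)[0]

# Gradients of the output and of its first-order Lie derivatives
funcs = [h, lie(h, f0), lie(h, f1)]
codistribution = sp.Matrix([sp.Matrix([phi]).jacobian(x) for phi in funcs])

# Rank n = 2 on a neighbourhood of any point: the state is weakly locally observable
print(codistribution.rank())
```

Here $dh=[1,0]$ and $d(\mathcal{L}_{f_0}h)=[0,1]$ already span $\mathbb{R}^2$, so the codistribution has dimension $n=2$ everywhere.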
Let us consider now the general case, i.e., when $m_w \neq 0$.
In the extended system ($\Sigma^{(k)}$) we know that the Lie derivatives up to the $k-$order satisfy the same property (see proposition \ref{PrConstantLieDer}). Therefore, we can extend the validity of theorem $3.1$ in \cite{Her77} to our case, provided that we suitably augment the state and that we only include the Lie derivatives up to the $k-$order to build the observable codistribution.
In the sequel, we will introduce the following notation:
\begin{itemize}
\item $\bar{\Omega}_m$ will denote the observable codistribution for $\Sigma^{(k)}$ $~$that includes all the Lie derivatives of the output along $f_0^{(k)}, f_1^{(k)}, \cdots, f_{m_u}^{(k)}$ up to the order $m\le k$;
\item The symbol $d$ will denote the gradient with respect to the extended state in (\ref{EquationExtendedState}) and the symbol $d_x$ will denote the gradient with respect to $x$ only;
\item For a given codistribution $\Lambda$ and a given vector field $\eta$, we will denote by $\mathcal{L}_{\eta} \Lambda$ the codistribution whose covectors are the Lie derivatives along $\eta$ of the covectors in $\Lambda$ (we are obviously assuming that the dimension of these covectors coincides with the dimension of $\eta$).
\item Given two vector spaces $V_1$ and $V_2$, we will denote by $V_1+V_2$ their sum, i.e., the span of all the generators of both $V_1$ and $V_2$.
\item For a given $V\subseteq M^{(k)}$ and a given $[x_0, ~\xi_0]\in V$, we will denote by $I_{[x_0, \xi_0]}^V$ the set of all the states that are $V-$indistinguishable from $[x_0, ~\xi_0]$.
\end{itemize}
\noindent The codistribution $\bar{\Omega}_m$ can be computed recursively by the following algorithm:
\begin{al}[Computation of $\bar{\Omega}_m$, $m\le k$]
\begin{algorithmic}
\STATE
\STATE{Set $\bar{\Omega}_0=span\{dh\}$;}
\STATE{Set $i=0$}
\WHILE {$i<m$}
\STATE{Set $i=i+1$}
\STATE{Set $\bar{\Omega}_i=\bar{\Omega}_{i-1}+\sum_{i'=0}^{m_u}\mathcal{L}_{f_{i'}^{(k)}}\bar{\Omega}_{i-1}$}
\ENDWHILE
\end{algorithmic}
\label{alg1}
\end{al}
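A minimal symbolic sketch of algorithm \ref{alg1} follows. Since $\bar{\Omega}_m$ is generated by gradients of Lie derivatives of $h$, it suffices to track the scalar generator functions. The toy extended system below (with $m_u=m_w=1$ and $k=1$, so that only $m=1$ is allowed) is an illustrative assumption, not an example from the paper:

```python
import sympy as sp

x1, x2, w0 = sp.symbols('x1 x2 w0')
xk = sp.Matrix([x1, x2, w0])            # extended state [x; w] for k = 1

# Toy fields (illustrative only) for the system xdot = f(x) u + g(x) w:
F = sp.Matrix([x2, 0, 0])               # f(x) zero-padded to the extended state
G = sp.Matrix([w0, x1 * w0, 0])         # g(x) w stacked with a zero in the w-slot
h = x1

def lie(phi, eta):
    """Lie derivative of the scalar phi along the vector field eta."""
    return sp.simplify((sp.Matrix([phi]).jacobian(xk) * eta)[0])

# Algorithm 1 with m = 1 (k = 1 here, and the algorithm requires m <= k):
# Omega_0 = span{dh};  Omega_i = Omega_{i-1} + sum_{i'} L_{f_i'} Omega_{i-1}
gens = [h]
for _ in range(1):
    gens += [lie(phi, eta) for phi in list(gens) for eta in (G, F)]

# The codistribution is the span of the gradients of the generator functions
Omega = sp.Matrix([sp.Matrix([phi]).jacobian(xk) for phi in gens])
print(Omega.rank())
```

For this toy choice the generators after one step are $h=x_1$, $\mathcal{L}_Gh=w$ and $\mathcal{L}_Fh=x_2$, whose gradients span the whole extended space at generic points.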
\noindent Let us denote by $x_j$ the $j^{th}$ component of the state ($j=1,\cdots,n$). We introduce the following definition:
\begin{df}[$EORC$]
For the system $\Sigma^{(k)}$, the $j^{th}$ component of the state (i.e., $x_j$, $j=1,\cdots,n$) satisfies the extended observability rank condition at $[x_0,\xi_0]$
if $dx_j \in \bar{\Omega}_k$ at $[x_0,\xi_0]$. If this holds $\forall j=1,\cdots,n$, we say that the state $x$ satisfies the extended observability rank condition at $[x_0,\xi_0]$ in $\Sigma^{(k)}$.
\end{df}
\noindent We have the following result, which is the extension of the result stated by theorem $3.1$ in \cite{Her77}:
\begin{pr}
For $\Sigma^{(k)}$, if $x_j$ ($j=1,\cdots,n$) satisfies the extended observability rank condition at $[x_0,\xi_0]$, then $x_j$ is weakly locally observable at $[x_0,\xi_0]$. Additionally, $x_j$ remains weakly locally observable by further extending the state (i.e., in every system $\Sigma^{(f)}$, $f> k$).
\end{pr}
{\bf Proof:~}{We prove that there exists an open neighbourhood $U$ of $[x_0,\xi_0]$ such that, for every open neighbourhood $V\subseteq U$ of $[x_0,\xi_0]$, $x_j$ is constant on the set $I^V_{[x_0, \xi_0]}$. Since $dx_j \in \bar{\Omega}_k$ at $[x_0,\xi_0]$, there exists an open neighbourhood $U$ of $[x_0,\xi_0]$ such that $x_j$ can be expressed in terms of the Lie derivatives of $h$ along the directions $f_{i'}^{(k)}$ ($i'=0,1,\cdots,m_u$) up to the $k^{th}$ order. If $V\subseteq U$ is an open neighbourhood of $[x_0,\xi_0]$, then proposition \ref{PrConstantLieDer} and remark \ref{Remark} imply that all the Lie derivatives up to the $k^{th}$ order are constant on the set $I^V_{[x_0, \xi_0]}$ and, consequently, $x_j$ is also constant on this set. Finally, the fact that $x_j$ is weakly locally observable in every system $\Sigma^{(f)}$ ($f> k$) directly follows from proposition \ref{PrLieDer2} $\blacksquare$}
\vskip .2cm
In accordance with the previous result, the $EORC$ $~$is a tool to analyze the observability properties of a nonlinear system driven by known and unknown inputs. However,
we remark two important limitations of the $EORC$. The first is that the state augmentation can be continued indefinitely. As a result, the $EORC$ $~$only provides sufficient conditions for the weak local observability of the state components.
The second is the computational cost required to check whether it is satisfied. Specifically, checking whether $dx_j$ belongs to $\bar{\Omega}_k$ can be very demanding because, by increasing $k$, we also increase the dimension of the extended state.
In the rest of this paper, we will focus our attention on these fundamental issues and we will provide the main paper contributions:
\begin{itemize}
\item obtaining a new codistribution ($\Omega_k$) that is the span of covectors whose dimension is $n$ (i.e., independent of the state extension) such that $d_xx_j \in \Omega_k$ if and only if $dx_j \in \bar{\Omega}_k$;
\item understanding if there exists a given $\hat{k}$ such that, if $dx_j\notin \bar{\Omega}_{\hat{k}}$, then $dx_j\notin \bar{\Omega}_k$ $\forall k>\hat{k}$.
\end{itemize}
We fully address both these issues in the case $m_w=1$. In section \ref{SectionSingle} we introduce the basic equations that characterize this case. In section \ref{SectionSeparation} we provide a complete answer to the first issue by operating a separation on the codistribution generated by all the Lie derivatives up to the $k-$order. Specifically, we prove that the observable codistribution can be split into two codistributions. The former is generated by the gradients of scalar functions that only depend on the original state. The latter is generated by the gradients of scalar functions that depend on the entire augmented state. However, this latter codistribution can be ignored when deriving the observability properties of the original state.
The former codistribution, namely the one generated by the gradients of scalar functions that only depend on the original state, is defined by a simple recursive algorithm.
In section \ref{SectionStop} we provide a complete answer to the second issue by proving that this algorithm converges in a finite number of steps and by also providing the criterion to establish that the convergence of the algorithm has been reached (theorem \ref{TheoremStop}). This proof, too, relies on several intricate analytic steps.
For the sake of clarity, we start this discussion by considering the case when the system is characterized by a single known input, i.e., when $m_u=1$ (sections \ref{SectionSingle}, \ref{SectionSeparation} and \ref{SectionStop}). In particular, both theorems \ref{TheoremSeparation} and \ref{TheoremStop} are proved in this simplified case. However, in section \ref{SectionExtension}, their validity is extended to the case of multiple known inputs (i.e., $\forall m_u>1$).
\section{Single known Input and single disturbance}\label{SectionSingle}
We will refer to the following system:
\begin{equation}\label{EquationStateEvolution1}
\left\{\begin{aligned}
\dot{x} &= f ( x ) u + g ( x ) w \\
y &= h( x ) \\
\end{aligned}\right.
\end{equation}
\noindent In other words, we consider the case when $f_0$ is the null vector and $m_u=m_w=1$. In this case, the extended state that includes the time derivatives up to the $(k-1)-$order is:
\begin{equation}\label{EquationExtendedState1}
^kx \equiv [ x^T, ~ w^T, ~ w^{(1)~T}, ~\cdots,~ w^{(k-1)~T}]^T
\end{equation}
\noindent The dimension of the extended state is $n+k$. From (\ref{EquationStateEvolution1}) it is immediate to obtain the dynamics for the extended state:
\begin{equation}\label{EquationExtendedStateEvolution1}
\begin{aligned}
^k\dot{x} &= G(^kx)+ F( x ) u + 1^{n+k}_{{n+k}} w^{(k)} \\
\end{aligned}
\end{equation}
\noindent where:
\begin{equation}\label{EquationFG}
F\equiv \left[\begin{array}{c}
f ( x )\\
0 \\
0 \\
\cdots\\
0 \\
0 \\
\end{array}
\right]
~~~~~ G \equiv \left[\begin{array}{c}
g ( x ) w\\
w^{(1)} \\
w^{(2)} \\
\cdots\\
w^{(k-1)} \\
0 \\
\end{array}
\right]
\end{equation}
\noindent In the sequel, we will denote by $L^1_g$ the first order Lie derivative of the function $h(x)$ along the vector field $g(x)$, i.e., $L^1_g\equiv\mathcal{L}_gh$.
The derivations provided in the next sections are based on the assumption that $L^1_g \neq 0$ on a given neighbourhood of $x_0$. We conclude this section by showing that, when this assumption does not hold, it is possible to introduce new coordinates such that the observability properties can be investigated starting from a new output that satisfies the assumption.
Let us suppose that $L^1_g=0$ on a given neighbourhood of $x_0$. We introduce the following system associated with the system in (\ref{EquationStateEvolution1}):
\begin{equation}\label{EquationStateEvolutionAss}
\left\{\begin{aligned}
\dot{x} &= f ( x ) + g ( x ) u\\
y &= h( x ) \\
\end{aligned}\right.
\end{equation}
\noindent This is a system without disturbances and with a single known input $u$. Let us denote by $r$ the relative degree of this system at $x_0$. Since $L^1_g=0$ on a given neighbourhood of $x_0$, we have $r>1$. Additionally, we can introduce the following new local coordinates (see proposition 4.1.3 in \cite{Isi95}):
\begin{equation}\label{EquationLocalCoordinates1}
x'=\mathcal{Q}(x)=\left[\begin{array}{c}
\mathcal{Q}_1(x)\\
\cdots\\
\mathcal{Q}_n(x)\\
\end{array}\right]
\end{equation}
\noindent such that the first new $r$ coordinates are:
\begin{equation}\label{EquationLocalCoordinates2}
\mathcal{Q}_1(x)=h(x),~ \mathcal{Q}_2(x)=\mathcal{L}^1_f h(x), ~\cdots, ~\mathcal{Q}_r(x)=\mathcal{L}^{r-1}_f h(x)
\end{equation}
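The relative degree $r$ used to build the coordinates in (\ref{EquationLocalCoordinates2}) can be found by iterating Lie derivatives until $\mathcal{L}_g\mathcal{L}_f^{r-1}h\neq 0$. A sketch with sympy, on a made-up three-state chain (not from the paper) where $L^1_g=0$ at the output:

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
x = sp.Matrix([x1, x2, x3])

# Made-up chain of integrators where L^1_g = 0 at the output
f = sp.Matrix([x2, x3, 0])
g = sp.Matrix([0, 0, 1])
h = x1

def lie(phi, eta):
    """First-order Lie derivative of the scalar phi along the vector field eta."""
    return (sp.Matrix([phi]).jacobian(x) * eta)[0]

# Relative degree r: the smallest r such that L_g L_f^(r-1) h != 0
phi, r = h, 0
while lie(phi, g) == 0:
    phi, r = lie(phi, f), r + 1
r += 1
# Here r = 3 and Q_1 = h, Q_2 = L_f h, Q_3 = L_f^2 h give the first new coordinates
print(r)
```

For this chain the successive derivatives are $h=x_1$, $\mathcal{L}_fh=x_2$, $\mathcal{L}^2_fh=x_3$, and only the last one is affected by $g$, so $r=3$.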
\noindent Now let us derive the equations of the original system (i.e., the one in (\ref{EquationStateEvolution1})) in these new coordinates. We have:
\begin{equation}\label{EquationStateEvolution1New}
\left\{\begin{aligned}
\dot{x}' &= \tilde{f} ( x' ) u+ \tilde{g} ( x' ) w\\
y &= x_1' \\
\end{aligned}\right.
\end{equation}
\noindent where $\tilde{f}$ and $\tilde{g}$ have the following structure:
\begin{equation}\label{Equationftgt}
\tilde{f}\equiv \left[\begin{array}{c}
x_2'\\
x_3' \\
\cdots\\
x_r' \\
\tilde{f}_r(x') \\
\cdots\\
\tilde{f}_n(x') \\
\end{array}
\right]
~~~~~ \tilde{g}\equiv \left[\begin{array}{c}
0\\
0\\
\cdots\\
0 \\
\tilde{g}_r(x') \\
\cdots\\
\tilde{g}_n(x') \\
\end{array}
\right]
\end{equation}
\noindent We remark that the first $r$ components of $x'$ are weakly locally observable since they are the output and its Lie derivatives along $f$ up to the $(r-1)-$order (note that we do not need to augment the state to use the first $(r-1)$ Lie derivatives because all the Lie derivatives up to the $(r-1)-$order that include at least one direction along $g$ vanish automatically). In order to investigate the observability properties of the remaining components, we augment the state as in (\ref{EquationExtendedState1}) and we consider the new output $\tilde{h}(x')=x_r'$. We set $L^1_g =\tilde{g}_r=\mathcal{L}_g\mathcal{L}^{r-1}_fh\neq 0$.
\section{The observable codistribution ($\Omega$)}\label{SectionSeparation}
In this section we operate a separation on the codistribution generated by all the Lie derivatives up to the $m-$order ($m\le k$). Specifically, we prove that this codistribution can be split into two codistributions. The former is generated by the gradients of scalar functions that only depend on the original state. The latter is generated by the gradients of scalar functions that depend on the entire augmented state. However, this latter codistribution can be ignored when deriving the observability properties of the original state.
The observable codistribution is given by the algorithm \ref{alg1} and, in this case $m_u=m_w=1$, it reduces to the span of the gradients of all the Lie derivatives along $F$ and $G$ up to the $k$-order. Hence, for any $m\le k$, it is obtained recursively by the following algorithm:
\begin{enumerate}
\item $\bar{\Omega}_0=span\{dh\}$;
\item $\bar{\Omega}_m=\bar{\Omega}_{m-1}+\mathcal{L}_{G}\bar{\Omega}_{m-1}+\mathcal{L}_{F}\bar{\Omega}_{m-1}$
\end{enumerate}
\noindent For a given $m\le k$ we define the vector $\phi_m\in \mathbb{R}^n$ by the following algorithm:
\begin{enumerate}
\item $\phi_0=f$;
\item $\phi_m=\frac{[\phi_{m-1}, ~g]}{L^1_g}$
\end{enumerate}
\noindent where the parenthesis $[\cdot, \cdot]$ denote the Lie brackets of vector fields. Similarly, we define $\Phi_m\in \mathbb{R}^{n+k}$ by the following algorithm:
\begin{enumerate}
\item $\Phi_0=F$;
\item $\Phi_m=[\Phi_{m-1}, ~G]$
\end{enumerate}
\noindent By a direct computation it is easy to realize that $\Phi_m$ has the last $k$ components identically null. In the sequel, we will denote by $\Breve{\Phi}_m$ the vector in $\mathbb{R}^n$ that contains the first $n$ components of $\Phi_m$. In other words, $\Phi_m\equiv [\Breve{\Phi}_m^T,0_k^T]^T$. Additionally, we set $\hat{\phi}_m\equiv \left[\begin{array}{c}
\phi_m \\
0_k \\
\end{array}
\right]$.
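The recursions for $\phi_m$ and $\Phi_m$ are plain Lie-bracket computations and can be checked symbolically. The sketch below (sympy, with toy fields chosen only for illustration and $k=1$) computes $\phi_1$ and $\Phi_1$; for these fields one can verify by inspection that the first $n$ components of $\Phi_1$ equal $w\, L^1_g\, \phi_1$:

```python
import sympy as sp

x1, x2, w0 = sp.symbols('x1 x2 w0')
x = sp.Matrix([x1, x2])                 # original state
xk = sp.Matrix([x1, x2, w0])            # extended state, k = 1

def bracket(a, b, v):
    """Lie bracket [a, b] of two vector fields with respect to the variables v."""
    return sp.simplify(b.jacobian(v) * a - a.jacobian(v) * b)

# Toy fields (illustrative only)
f = sp.Matrix([x2, 0])
g = sp.Matrix([1, x1])
h = x1
F = sp.Matrix([x2, 0, 0])               # f zero-padded to the extended state
G = sp.Matrix([w0, x1 * w0, 0])         # g(x) w with a zero in the w-slot

L1g = (sp.Matrix([h]).jacobian(x) * g)[0]   # L^1_g = 1 for these fields

phi1 = bracket(f, g, x) / L1g           # phi_1 = [phi_0, g] / L^1_g
Phi1 = bracket(F, G, xk)                # Phi_1 = [Phi_0, G]
print(phi1.T, Phi1.T)
```

Here $\phi_1=[-x_1,~x_2]^T$ and $\Phi_1=[-x_1w,~x_2w,~0]^T$, whose first $n$ components are indeed $w\,\phi_1$ (recall $L^1_g=1$ for this choice), with the last $k$ components identically null as stated above.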
We define the $\Omega$ codistribution as follows (see definition \ref{DefinitionOmegaE} in section \ref{SectionExtension} for the case when $m_u>1$):
\begin{df}[$\Omega$ codistribution, $m_u=m_w=1$]\label{DefinitionOmega}
This codistribution is defined recursively by the following algorithm:
\begin{enumerate}
\item $\Omega_0=span\{d_xh\}$;
\item $\Omega_m=\Omega_{m-1}+\mathcal{L}_f \Omega_{m-1} + \mathcal{L}_{\frac{g}{L^1_g}} \Omega_{m-1} +\mathcal{L}_{\phi_{m-1}} d_xh$
\end{enumerate}
\end{df}
\noindent Note that this codistribution is completely integrable by construction. More importantly, its generators are the gradients of functions that only depend on the original state ($x$) and not on its extension. In the sequel, we need to embed this codistribution in $\mathbb{R}^{n+k}$. We will denote by $[\Omega_m,0_k]$ the codistribution made by covectors whose first $n$ components are covectors in $\Omega_m$ and the last components are all zero. Additionally, we will denote by $L^m$ the codistribution that is the span of the Lie derivatives of $dh$ up to the order $m$ along the vector $G$, i.e., $L^m \equiv span\{\mathcal{L}^1_Gdh, \mathcal{L}^2_Gdh, \cdots, \mathcal{L}^m_Gdh\}$.
We finally introduce the following codistribution:
\begin{df}[$\tilde{\Omega}$ codistribution]\label{DefinitionOmegaTilde}
This codistribution is defined as follows: $\tilde{\Omega}_m \equiv [\Omega_m,0_k]+ L^m$
\end{df}
\noindent The codistribution $\tilde{\Omega}_m$ consists of two parts. Specifically, we can select a basis made of two sets of generators: the exact differentials that are the gradients of functions depending only on the original state ($x$) and not on its extension (these generate $[\Omega_m,0_k]$), and the gradients $\mathcal{L}^1_Gdh, \mathcal{L}^2_Gdh, \cdots, \mathcal{L}^m_Gdh$. This second set consists of $m$ covectors and, unlike the first set, they are gradients of functions that also depend on the state extension $\xi=[w,~w^{(1)},\cdots, ~w^{(m-1)}]^T$.
We have the following result:
\begin{lm}
Let us denote by $x_j$ the $j^{th}$ component of the state ($j=1,\cdots,n$). We have: $d_xx_j \in \Omega_m$ if and only if $dx_j \in \tilde{\Omega}_m$.
\end{lm}
{\bf Proof:~}{The fact that $d_xx_j \in \Omega_m$ implies $dx_j \in \tilde{\Omega}_m$ is obvious since $[\Omega_m,0_k] \subseteq\tilde{\Omega}_m$ by definition. Let us prove the converse, i.e., that if $dx_j \in \tilde{\Omega}_m$ then $d_xx_j \in \Omega_m$.
Since $dx_j \in \tilde{\Omega}_m$ we have $dx_j=\sum_{i=1}^{N_1}c^1_i \omega_i^1+\sum_{i=1}^{N_2}c^2_i \omega_i^2$, where $\omega_1^1, \omega_2^1, \cdots, \omega_{N_1}^1$ are $N_1$ generators of $[\Omega_m,0_k]$ and $\omega_1^2, \omega_2^2, \cdots, \omega_{N_2}^2$ are $N_2$ generators of $L^m$. We want to prove that $N_2=0$.
We proceed by contradiction. Let us suppose that $N_2\ge 1$.
We remark that the first set of generators have the last $k$ entries equal to zero, as does $dx_j$. The second set of generators consists of the Lie derivatives of $h$ along $G$ up to the $m^{th}$ order. Let us select the highest-order Lie derivative among them and let us denote by $j'$ its order. We have $1\le N_2 \le j'\le m$. By a direct computation, it is immediate to realize that this is the only generator that depends on $w^{(j'-1)}$. Specifically, the dependence is linear, through the product $L^1_g w^{(j'-1)}$ (we remind the reader that $L^1_g \neq 0$). But this means that $dx_j$ has a nonzero $(n+j')^{th}$ entry, which is impossible since $dx_j=[d_xx_j,0_k]$ $\blacksquare$}
\vskip .2 cm
\noindent A fundamental consequence of this lemma is that, if we are able to prove that $\tilde{\Omega}_m=\bar{\Omega}_m$, the weak local observability of the original state $x$ can be investigated by only considering the codistribution $\Omega_m$.
In the rest of this section we will prove this fundamental theorem, stating that $\tilde{\Omega}_m=\bar{\Omega}_m$.
\begin{theorem}[Separation]\label{TheoremSeparation}
$\bar{\Omega}_m=\tilde{\Omega}_m\equiv [\Omega_m,0_k]+ L^m$
\end{theorem}
\noindent The proof of this theorem is complex and relies on several preliminary results, which we prove first. Based on them, we provide the proof of the theorem at the end of this section.
\vskip .2 cm
\begin{lm}\label{LemmaLemma1}
$\mathcal{L}_G \bar{\Omega}_m+\mathcal{L}_{\Phi_m}dh=\mathcal{L}_G \bar{\Omega}_m+\mathcal{L}_F\mathcal{L}^m_Gdh$
\end{lm}
{\bf Proof:~}{We have $\mathcal{L}_F\mathcal{L}^m_Gdh=\mathcal{L}_G\mathcal{L}_F\mathcal{L}^{m-1}_Gdh+\mathcal{L}_{\Phi_1}\mathcal{L}^{m-1}_G dh$.
The first term $\mathcal{L}_G\mathcal{L}_F\mathcal{L}^{m-1}_Gdh \in \mathcal{L}_G \bar{\Omega}_m$. Hence, we need to prove that
$\mathcal{L}_G \bar{\Omega}_m+\mathcal{L}_{\Phi_m}dh=\mathcal{L}_G \bar{\Omega}_m+\mathcal{L}_{\Phi_1}\mathcal{L}^{m-1}_G dh$. We repeat the previous procedure $m$ times. Specifically, we use the equality $\mathcal{L}_{\Phi_j}\mathcal{L}^{m-j}_Gdh=\mathcal{L}_G\mathcal{L}_{\Phi_j}\mathcal{L}^{m-j-1}_Gdh+\mathcal{L}_{\Phi_{j+1}}\mathcal{L}^{m-j-1}_G dh$, for $j=1,\cdots,m$, and we remove the first term since $\mathcal{L}_G\mathcal{L}_{\Phi_j}\mathcal{L}^{m-j-1}_Gdh \in \mathcal{L}_G \bar{\Omega}_m$ $\blacksquare$}
\begin{lm}\label{LemmaRisA}
$\Breve{\Phi}_m=\sum_{j=1}^m c^n_j(\mathcal{L}_Gh, \mathcal{L}^2_Gh, \cdots, \mathcal{L}^m_Gh) \phi_j$, i.e., the vector $\Breve{\Phi}_m$ is a linear combination of the vectors $\phi_j$ ($j=1,\cdots,m$), where the coefficients ($c^n_j$) depend on the state only through the functions that generate the codistribution $L^m$
\end{lm}
{\bf Proof:~}{ We proceed by induction. By definition, $\Breve{\Phi}_0=\phi_0$.
\noindent {\bf Inductive step:} Let us assume that $\Breve{\Phi}_{m-1}=\sum_{j=1}^{m-1} c_j(\mathcal{L}_Gh, \mathcal{L}^2_Gh, \cdots, \mathcal{L}^{m-1}_Gh) \phi_j$. We have:
\[
\Phi_m=[\Phi_{m-1},~G]=\sum_{j=1}^{m-1}\left[c_j \left[\begin{array}{c}
\phi_j\\
0_k \\
\end{array}
\right],~G\right]=
\]
\[
\sum_{j=1}^{m-1}c_j\left[\left[\begin{array}{c}
\phi_j\\
0_k \\
\end{array}
\right],~G\right] -
\sum_{j=1}^{m-1}\mathcal{L}_G c_j \left[\begin{array}{c}
\phi_j\\
0_k \\
\end{array}
\right]
\]
We directly compute the Lie bracket in the sum (note that $\phi_j$ is independent of the unknown input $w$ and its time derivatives):
\[
\left[\left[\begin{array}{c}
\phi_j\\
0_k \\
\end{array}
\right],~G\right]= \left[\begin{array}{c}
[\phi_j,~g] w\\
0_k \\
\end{array}
\right]=\left[\begin{array}{c}
\phi_{j+1} \mathcal{L}^1_Gh\\
0_k \\
\end{array}
\right]
\]
Regarding the second term, we remark that $\mathcal{L}_G c_j = \sum_{i=1}^{m-1} \frac{\partial c_j}{\partial (\mathcal{L}_G^ih)} \mathcal{L}_G^{i+1}h$. By setting $\tilde{c}_j=c_{j-1}\mathcal{L}^1_Gh$ for $j=2,\cdots,m$ and $\tilde{c}_1=0$, and by setting $\bar{c}_j=-\sum_{i=1}^{m-1} \frac{\partial c_j}{\partial (\mathcal{L}_G^ih)} \mathcal{L}_G^{i+1}h$ for $j=1,\cdots,m-1$ and $\bar{c}_m=0$, we obtain $\Breve{\Phi}_m=\sum_{j=1}^m (\tilde{c}_j + \bar{c}_j) \phi_j$, which proves our assertion since $c^n_j(\equiv \tilde{c}_j + \bar{c}_j)$ is a function of $\mathcal{L}_Gh, \mathcal{L}^2_Gh, \cdots, \mathcal{L}^m_Gh$ $\blacksquare$}
\noindent The following result also holds:
\begin{lm}\label{LemmaRisB}
$\hat{\phi}_m=\sum_{j=1}^m b^n_j(\mathcal{L}_Gh, \mathcal{L}^2_Gh, \cdots, \mathcal{L}^m_Gh) \Phi_j$, i.e., the vector $\hat{\phi}_m$ is a linear combination of the vectors $\Phi_j$ ($j=1,\cdots,m$), where the coefficients ($b^n_j$) depend on the state only through the functions that generate the codistribution $L^m$
\end{lm}
{\bf Proof:~}{We proceed by induction. By definition, $\Phi_0=\hat{\phi}_0$.
\noindent {\bf Inductive step:} Let us assume that $\hat{\phi}_{m-1}=\sum_{j=1}^{m-1} b_j(\mathcal{L}_Gh, \mathcal{L}^2_Gh, \cdots, \mathcal{L}^{m-1}_Gh) \Phi_j$. We need to prove that $\hat{\phi}_m=\sum_{j=1}^m b^n_j(\mathcal{L}_Gh, \mathcal{L}^2_Gh, \cdots, \mathcal{L}^m_Gh) \Phi_j$.
We start by applying the Lie bracket with respect to $G$ to both sides of the equality $\hat{\phi}_{m-1}=\sum_{j=1}^{m-1} b_j(\mathcal{L}_Gh, \mathcal{L}^2_Gh, \cdots, \mathcal{L}^{m-1}_Gh) \Phi_j$. For the left-hand side we obtain: $[\hat{\phi}_{m-1}, ~G]= \hat{\phi}_m \mathcal{L}^1_Gh$. For the right-hand side we have:
\[
\sum_{j=1}^{m-1} [b_j \Phi_j, ~G]= \sum_{j=1}^{m-1} b_j [\Phi_j, ~G]- \sum_{j=1}^{m-1} \mathcal{L}_G b_j \Phi_j=
\]
\[
=\sum_{j=1}^{m-1} b_j \Phi_{j+1}- \sum_{j=1}^{m-1} \sum_{i=1}^{m-1} \frac{\partial b_j}{\partial (\mathcal{L}_G^ih)} \mathcal{L}_G^{i+1}h\Phi_j
\]
\noindent By setting $\tilde{b}_j=\frac{b_{j-1}}{\mathcal{L}^1_Gh}$ for $j=2,\cdots,m$ and $\tilde{b}_1=0$, and by setting $\bar{b}_j=-\sum_{i=1}^{m-1} \frac{\partial b_j}{\partial (\mathcal{L}_G^ih)} \frac{\mathcal{L}_G^{i+1}h}{\mathcal{L}^1_Gh}$ for $j=1,\cdots,m-1$ and $\bar{b}_m=0$, we obtain $\hat{\phi}_m=\sum_{j=1}^m (\tilde{b}_j + \bar{b}_j) \Phi_j$, which proves our assertion since $b^n_j(\equiv \tilde{b}_j + \bar{b}_j)$ is a function of $\mathcal{L}_Gh, \mathcal{L}^2_Gh, \cdots, \mathcal{L}^m_Gh$ $\blacksquare$}
An important consequence of the previous two lemmas is the following result:
\begin{pr}\label{PropertyLemma2}
The following two codistributions coincide:
\begin{enumerate}
\item $span\{ \mathcal{L}_{\Phi_0}dh, \mathcal{L}_{\Phi_1}dh, \cdots, \mathcal{L}_{\Phi_m}dh, \mathcal{L}^1_Gdh, \cdots\mathcal{L}^m_Gdh\}$;
\item $span\{ \mathcal{L}_{\hat{\phi}_0}dh, \mathcal{L}_{\hat{\phi}_1}dh, \cdots, \mathcal{L}_{\hat{\phi}_m}dh, \mathcal{L}^1_Gdh, \cdots\mathcal{L}^m_Gdh\}$;
\end{enumerate}
\end{pr}
\noindent We are now ready to prove theorem \ref{TheoremSeparation}.
{\bf Proof:~}{We proceed by induction. By definition, $\bar{\Omega}_0=\tilde{\Omega}_0$ since they are both the span of $dh$.
\noindent {\bf Inductive step:} Let us assume that $\bar{\Omega}_{m-1}=\tilde{\Omega}_{m-1}$. We have:
$\bar{\Omega}_m=\bar{\Omega}_{m-1}+\mathcal{L}_F \bar{\Omega}_{m-1} + \mathcal{L}_G \bar{\Omega}_{m-1}=\bar{\Omega}_{m-1} + \mathcal{L}_F \tilde{\Omega}_{m-1} + \mathcal{L}_G \bar{\Omega}_{m-1}=\bar{\Omega}_{m-1} + [\mathcal{L}_f \Omega_{m-1}, 0_k]+\mathcal{L}_F L^{m-1} + \mathcal{L}_G \bar{\Omega}_{m-1}$.
On the other hand, $\mathcal{L}_F L^{m-1} = \mathcal{L}_F \mathcal{L}^1_Gdh + \cdots + \mathcal{L}_F \mathcal{L}^{m-2}_Gdh + \mathcal{L}_F \mathcal{L}^{m-1}_Gdh$. The first $m-2$ terms are in $\bar{\Omega}_{m-1}$. Hence we have: $\bar{\Omega}_m= \bar{\Omega}_{m-1} + [\mathcal{L}_f \Omega_{m-1},0_k] + \mathcal{L}_F \mathcal{L}^{m-1}_G dh + \mathcal{L}_G \bar{\Omega}_{m-1}$. By using lemma \ref{LemmaLemma1} we obtain: $\bar{\Omega}_m= \bar{\Omega}_{m-1} + [\mathcal{L}_f \Omega_{m-1},0_k] + \mathcal{L}_{\Phi_{m-1}} dh + \mathcal{L}_G \bar{\Omega}_{m-1}$. By using again the induction assumption we obtain: $\bar{\Omega}_m= [\Omega_{m-1},0_k] + L^{m-1} + [\mathcal{L}_f \Omega_{m-1},0_k] + \mathcal{L}_{\Phi_{m-1}} dh + \mathcal{L}_G [\Omega_{m-1},0_k] + \mathcal{L}_G L^{m-1}=[\Omega_{m-1},0_k] + L^m + [\mathcal{L}_f \Omega_{m-1},0_k] + \mathcal{L}_{\Phi_{m-1}} dh + [\mathcal{L}_{\frac{g}{L^1_g}} \Omega_{m-1},0_k]$ and by using proposition \ref{PropertyLemma2} we obtain: $\bar{\Omega}_m=[\Omega_{m-1},0_k] + L^m + [\mathcal{L}_f \Omega_{m-1},0_k] + \mathcal{L}_{\hat{\phi}_{m-1}} dh + [\mathcal{L}_{\frac{g}{L^1_g}} \Omega_{m-1},0_k]=\tilde{\Omega}_m$ $\blacksquare$}
\section{Convergence of the algorithm that defines $\Omega$}\label{SectionStop}
Theorem \ref{TheoremSeparation} is fundamental. It allows us to obtain all the observability properties of the original state by restricting the computation to the $\Omega$ codistribution, namely a codistribution whose covectors have the same dimension as the original space. In other words, the dimension of these covectors is independent of the state augmentation. The $\Omega$ codistribution is defined recursively and $\Omega_m\subseteq\Omega_{m+1}$ (see definition \ref{DefinitionOmega} in section \ref{SectionSeparation}). This means that, if for a given $m$ the gradients of the components of the original state belong to $\Omega_m$, we can conclude that the original state is weakly locally observable. On the other hand, if this is not true, we cannot exclude that it is true for a larger $m$. The goal of this section is precisely to address this issue. We will show that the algorithm converges in a finite number of steps and we will also provide the criterion to establish that the algorithm has converged (theorem \ref{TheoremStop}). This theorem will be proved at the end of this section since we need to introduce several important new quantities and properties.
\vskip .2cm
\noindent For a given positive integer $j$ we define the vector $\psi_j\in \mathbb{R}^n$ by the following algorithm:
\begin{enumerate}
\item $\psi_0=f$;
\item $\psi_j=[\psi_{j-1}, ~\frac{g}{L^1_g}]$
\end{enumerate}
\noindent It is possible to find a useful expression that relates these vectors to the previously defined vectors $\phi_j$. Specifically, we have:
\begin{lm}\label{LemmaPsiPhi}
The following equation holds:
\begin{equation}\label{EquationPsiPhi}
\psi_j=\phi_j + \left\{ \sum_{i=0}^{j-1} (-)^{j-i} \mathcal{L}^{j-i-1}_{\frac{g}{L^1_g}} \left( \frac{\mathcal{L}_{\phi_i}L^1_g}{L^1_g}\right) \right\} \frac{g}{L^1_g}
\end{equation}
\end{lm}
{\bf Proof:~}{We proceed by induction. By definition $\psi_0=\phi_0=f$ and equation (\ref{EquationPsiPhi}) holds for $j=0$.
\noindent {\bf Inductive step:} Let us assume that it holds for a given $j-1\ge 0$ and let us prove its validity for $j$. We have:
\[
\psi_j=\left[\psi_{j-1}, ~\frac{g}{L^1_g}\right]=\left[\phi_{j-1}, ~\frac{g}{L^1_g}\right] + \left[\left\{ \sum_{i=0}^{j-2} (-)^{j-i-1} \mathcal{L}^{j-i-2}_{\frac{g}{L^1_g}} \left( \frac{\mathcal{L}_{\phi_i}L^1_g}{L^1_g}\right) \right\} \frac{g}{L^1_g}, ~\frac{g}{L^1_g}\right]
\]
\noindent On the other hand:
\[
\left[\phi_{j-1}, ~\frac{g}{L^1_g}\right] = \phi_{j} - \frac{\mathcal{L}_{\phi_{j-1}}L^1_g}{L^1_g} \frac{g}{L^1_g}
\]
\noindent and
\[
\left[\left\{ \sum_{i=0}^{j-2} (-)^{j-i-1} \mathcal{L}^{j-i-2}_{\frac{g}{L^1_g}} \left( \frac{\mathcal{L}_{\phi_i}L^1_g}{L^1_g}\right) \right\} \frac{g}{L^1_g}, ~\frac{g}{L^1_g}\right]=-\mathcal{L}_{\frac{g}{L^1_g}}
\left\{ \sum_{i=0}^{j-2} (-)^{j-i-1} \mathcal{L}^{j-i-2}_{\frac{g}{L^1_g}} \left( \frac{\mathcal{L}_{\phi_i}L^1_g}{L^1_g}\right) \right\} \frac{g}{L^1_g}=
\]
\[
\left\{ \sum_{i=0}^{j-2} (-)^{j-i} \mathcal{L}^{j-i-1}_{\frac{g}{L^1_g}} \left( \frac{\mathcal{L}_{\phi_i}L^1_g}{L^1_g}\right) \right\} \frac{g}{L^1_g}
\]
\noindent Hence:
\[
\psi_j= \phi_{j} - \frac{\mathcal{L}_{\phi_{j-1}}L^1_g}{L^1_g} \frac{g}{L^1_g} +
\left\{ \sum_{i=0}^{j-2} (-)^{j-i} \mathcal{L}^{j-i-1}_{\frac{g}{L^1_g}} \left( \frac{\mathcal{L}_{\phi_i}L^1_g}{L^1_g}\right) \right\} \frac{g}{L^1_g},
\]
\noindent which coincides with (\ref{EquationPsiPhi})
$\blacksquare$}
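As a sanity check, equation (\ref{EquationPsiPhi}) can be verified symbolically on a concrete system. The following sketch (not part of the original development) checks the case $j=1$ with sympy; the fields $f$ and $g$ are those of the unicycle example used later in the applications section, chosen here only to instantiate the identity:

```python
import sympy as sp

r, ph, th = sp.symbols('r phi theta', positive=True)
x = (r, ph, th)

def lie_d(h, v):
    """Lie derivative of the scalar h along the vector field v."""
    return sp.simplify(sum(sp.diff(h, xi) * vi for xi, vi in zip(x, v)))

def bracket(u, v):
    """Lie bracket [u, v] = (dv/dx) u - (du/dx) v."""
    u, v = sp.Matrix(u), sp.Matrix(v)
    return (v.jacobian(x) * u - u.jacobian(x) * v).applyfunc(sp.simplify)

# Illustrative system: h = r, f = d/dtheta, g the direction of the unknown speed
f = sp.Matrix([0, 0, 1])
g = sp.Matrix([sp.cos(th - ph), sp.sin(th - ph) / r, 0])
h = r

L1g = lie_d(h, g)                 # L^1_g = cos(theta - phi)
phi1 = bracket(f, g) / L1g        # phi_1 = [phi_0, g]/L^1_g
psi1 = bracket(f, g / L1g)        # psi_1 = [psi_0, g/L^1_g]

# Equation (EquationPsiPhi) for j = 1:
# psi_1 = phi_1 - (L_{phi_0} L^1_g / L^1_g) g/L^1_g
rhs = phi1 - (lie_d(L1g, f) / L1g) * (g / L1g)
diff = (psi1 - rhs).applyfunc(sp.simplify)
print(diff.T)  # expected: zero row vector
```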
\vskip .2cm
\noindent From this lemma we obtain the following result:
\begin{lm}\label{LemmaFunctionsInOmegam}
If $\Omega_m$ is invariant with respect to $\mathcal{L}_f$ and $\mathcal{L}_{\frac{g}{L^1_g}}$ then, for $i=0,1,\cdots,m-2$, we have:
\begin{equation}\label{EquationFunctionsInOmegam}
d_x \frac{\mathcal{L}_{\phi_i}L^1_g}{L^1_g} \in \Omega_m
\end{equation}
\end{lm}
{\bf Proof:~}{From (\ref{EquationPsiPhi}) we obtain:
\begin{equation}\label{EquationPsiPhi1}
\mathcal{L}_{\psi_j} d_xh=\mathcal{L}_{\phi_j} d_xh + \mathcal{L}_{\Upsilon_j \frac{g}{L^1_g}} d_xh
\end{equation}
\noindent where $\Upsilon_j\equiv \sum_{i=0}^{j-1} (-)^{j-i} \mathcal{L}^{j-i-1}_{\frac{g}{L^1_g}} \left( \frac{\mathcal{L}_{\phi_i}L^1_g}{L^1_g}\right)$. From (\ref{EquationPsiPhi1}), since $\mathcal{L}_{\frac{g}{L^1_g}}h=\frac{\mathcal{L}_gh}{L^1_g}=1$, we also obtain:
\begin{equation}\label{EquationPsiPhi2}
\mathcal{L}_{\psi_j} d_xh=\mathcal{L}_{\phi_j} d_xh + d_x\Upsilon_j + \Upsilon_j \mathcal{L}_{\frac{g}{L^1_g}} d_xh
\end{equation}
\noindent We consider this last equation for all the integers $j=1,2,\cdots,m-1$.
By construction, for these $j$, $\mathcal{L}_{\phi_j}d_xh \in \Omega_m$ (see definition \ref{DefinitionOmega}). Additionally, since by assumption $\Omega_m$ is invariant with respect to both $\mathcal{L}_f$ and $\mathcal{L}_{\frac{g}{L^1_g}}$, it is also invariant with respect to $\mathcal{L}_{\psi_j}$ ($\forall j$) and in particular $\mathcal{L}_{\psi_j}d_xh\in\Omega_m$. Therefore, from (\ref{EquationPsiPhi2}) we obtain:
\begin{equation}\label{EquationPsiPhi3}
d_x\Upsilon_j\in \Omega_m
\end{equation}
\noindent for $j=1,\cdots,m-1$. The statement is proved by using (\ref{EquationPsiPhi3}), from $j=1$ up to $j=m-1$, and by using again the invariance with respect to $\mathcal{L}_{\frac{g}{L^1_g}}$ $\blacksquare$}
\noindent From lemma \ref{LemmaPsiPhi} with $j=1,\cdots,m-1$ and lemma \ref{LemmaFunctionsInOmegam}, we immediately obtain the following result:
\begin{pr}\label{PropPhimTot}
If $\Omega_m$ is invariant with respect to $\mathcal{L}_f$ and $\mathcal{L}_{\frac{g}{L^1_g}}$ then it is also invariant with respect to $\mathcal{L}_{\phi_j}$, $j=1,\cdots,m-1$.
\end{pr}
\vskip .5cm
\noindent Let us define $L^2_g \equiv \mathcal{L}^2_g h$ and $\rho \equiv \frac{L^2_g}{(L^1_g)^2}$.
\begin{lm}\label{LemmaKeyEquality}
We have the following key equality:
\begin{equation}\label{EquationKeyEquality}
\mathcal{L}_{\phi_j}h=\mathcal{L}_{\phi_{j-2}} \rho + \rho \frac{\mathcal{L}_{\phi_{j-2}}L^1_g}{L^1_g}-\mathcal{L}_{\frac{g}{L^1_g}}\left(\frac{\mathcal{L}_{\phi_{j-2}}L^1_g}{L^1_g}+\mathcal{L}_{\phi_{j-1}}h \right)
\end{equation}
\noindent $j\ge 2$.
\end{lm}
{\bf Proof:~}{We will prove this equality by an explicit computation. We have:
\[
\mathcal{L}_{\phi_j}h=\frac{1}{L^1_g}\left( \mathcal{L}_{\phi_{j-1}}\mathcal{L}_g h-\mathcal{L}_g\mathcal{L}_{\phi_{j-1}}h\right)
\]
The second term on the right hand side simplifies with the last term in (\ref{EquationKeyEquality}). Hence we have to prove:
\begin{equation}\label{EquationKeyEqualityProof}
\frac{1}{L^1_g}\mathcal{L}_{\phi_{j-1}}L^1_g=\mathcal{L}_{\phi_{j-2}} \rho + \rho \frac{\mathcal{L}_{\phi_{j-2}}L^1_g}{L^1_g}-\mathcal{L}_{\frac{g}{L^1_g}}\frac{\mathcal{L}_{\phi_{j-2}}L^1_g}{L^1_g}
\end{equation}
We have:
\begin{equation}\label{EquationKeyEqualityProof1}
\frac{1}{L^1_g}\mathcal{L}_{\phi_{j-1}}L^1_g=\frac{1}{(L^1_g)^2}\left( \mathcal{L}_{\phi_{j-2}} L^2_g-\mathcal{L}_g\mathcal{L}_{\phi_{j-2}}L^1_g\right)
\end{equation}
We remark that:
\[
\frac{1}{(L^1_g)^2} \mathcal{L}_{\phi_{j-2}} L^2_g=\mathcal{L}_{\phi_{j-2}} \rho + 2 \rho \frac{\mathcal{L}_{\phi_{j-2}}L^1_g}{L^1_g}
\]
and
\[
\frac{1}{(L^1_g)^2}\mathcal{L}_g\mathcal{L}_{\phi_{j-2}}L^1_g = \rho \frac{\mathcal{L}_{\phi_{j-2}}L^1_g}{L^1_g} + \mathcal{L}_{\frac{g}{L^1_g}}\frac{\mathcal{L}_{\phi_{j-2}}L^1_g}{L^1_g}
\]
By substituting these two last equalities in (\ref{EquationKeyEqualityProof1}) we immediately obtain (\ref{EquationKeyEqualityProof})
$\blacksquare$}
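The key equality (\ref{EquationKeyEquality}) can also be checked symbolically on a concrete system. The sketch below (an illustration, not part of the original development) verifies the case $j=2$ on the same unicycle fields used in the applications section:

```python
import sympy as sp

r, ph, th = sp.symbols('r phi theta', positive=True)
x = (r, ph, th)

def lie_d(h, v):
    """Lie derivative of the scalar h along the vector field v."""
    return sp.simplify(sum(sp.diff(h, xi) * vi for xi, vi in zip(x, v)))

def bracket(u, v):
    """Lie bracket [u, v] = (dv/dx) u - (du/dx) v."""
    u, v = sp.Matrix(u), sp.Matrix(v)
    return (v.jacobian(x) * u - u.jacobian(x) * v).applyfunc(sp.simplify)

# Illustrative system (same as in the applications section)
f = sp.Matrix([0, 0, 1])
g = sp.Matrix([sp.cos(th - ph), sp.sin(th - ph) / r, 0])
h = r

L1g = lie_d(h, g)
rho = sp.simplify(lie_d(L1g, g) / L1g**2)   # rho = L^2_g / (L^1_g)^2

phi0 = f
phi1 = bracket(phi0, g) / L1g
phi2 = bracket(phi1, g) / L1g

chi0 = sp.simplify(lie_d(L1g, phi0) / L1g)  # L_{phi_0} L^1_g / L^1_g
lhs = lie_d(h, phi2)
rhs = lie_d(rho, phi0) + rho * chi0 - lie_d(chi0 + lie_d(h, phi1), g / L1g)
print(sp.simplify(lhs - rhs))  # expected: 0
```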
\begin{lm}\label{LemmaRhoInOmegam}
In general, there exists a finite $m$ such that $d_x\rho \in \Omega_m$.
\end{lm}
{\bf Proof:~}{For a given $m$, $\Omega_m$ contains all the covectors $d_x\mathcal{L}_{\phi_j}h$ ($j=0,\cdots,m-1$). From equation (\ref{EquationKeyEquality}), we immediately obtain that, for a given $m\ge 3$, $\Omega_m$ contains the covectors ($j=0,\cdots,m-3$):
\begin{equation}\label{EquationMu}
\mu_j \equiv d_x\rho_j +\chi_j d_x\rho +\rho d_x\chi_j-\mathcal{L}_{\frac{g}{L^1_g}}\left(d_x\chi_j \right)
\end{equation}
\noindent with $d_x\rho_j \equiv d_x\mathcal{L}_{\phi_j}\rho$ and $\chi_j \equiv \frac{\mathcal{L}_{\phi_j}L^1_g}{L^1_g}$.
Let us denote by $j^*$ the smallest integer such that:
\begin{equation}\label{Equationj*}
d_x\rho_{j^*}=\sum_{j=0}^{j^*-1} c_j d_x\rho_j
\end{equation}
\noindent Note that $j^*$ is a finite integer and in particular $j^*\le n$, where $n$ is the dimension of the state. Indeed, if this were not the case, the dimension of the codistribution generated by $d_x\rho_0,d_x\rho_1, \cdots, d_x\rho_n$ would be larger than $n$.
Now let us consider a given $m\ge j^*+3$ such that $\Omega_{m+1}=\Omega_m$. This integer $m$ is finite since the dimension of $\Omega_m$ is bounded by the dimension of the state (i.e., by $n$), for any $m$. From the recursive algorithm that defines $\Omega$ we obtain that $\Omega_m$ is invariant with respect to $\mathcal{L}_f$ and $\mathcal{L}_{\frac{g}{L^1_g}}$. By using lemma \ref{LemmaFunctionsInOmegam}, i.e., by using the fact that $d_x\chi_j \in \Omega_m$, $j=0,\cdots,m-2$, from (\ref{EquationMu}) we obtain that $\Omega_m$ also contains the covectors ($j=0,\cdots,m-3$):
\begin{equation}\label{EquationMup}
\mu_j' \equiv d_x\rho_j +\chi_j d_x\rho
\end{equation}
\noindent From (\ref{Equationj*}) and (\ref{EquationMup}) we obtain:
\begin{equation}\label{EquationMu*}
\mu_{j^*}' = \sum_{j=0}^{j^*-1} c_j d_x\rho_j +\chi_{j^*} d_x\rho
\end{equation}
\noindent From equation (\ref{EquationMup}), for $j=0,\cdots,j^*-1$, we obtain: $d_x\rho_j=\mu_j'-\chi_jd_x\rho$. By substituting in (\ref{EquationMu*}) we obtain:
\begin{equation}
\mu_{j^*}'-\sum_{j=0}^{j^*-1} c_j \mu_j' = \left(-\sum_{j=0}^{j^*-1} c_j\chi_j +\chi_{j^*} \right) d_x\rho
\end{equation}
\noindent We remark that the left hand side consists of the sum of covectors that belong to $\Omega_m$. Since in general $\chi_{j^*}\neq \sum_{j=0}^{j^*-1} c_j\chi_j$, we conclude that $d_x\rho \in \Omega_m$ $\blacksquare$}
\noindent The previous lemma ensures that there exists a finite $m$ such that $d_x\rho \in \Omega_m$. In particular, from the previous proof, it is possible to check that this value of $m$ cannot exceed $2n+2$. The following theorem provides the criterion to stop the algorithm in definition \ref{DefinitionOmega}:
\begin{theorem}\label{TheoremStop}
If $d_x\rho \in \Omega_m$ and $\Omega_m$ is invariant under $\mathcal{L}_f$ and $\mathcal{L}_{\frac{g}{L^1_g}}$, then $\Omega_{m+p}=\Omega_m$ $\forall p\ge 0$
\end{theorem}
{\bf Proof:~}{We proceed by induction. Obviously, the equality holds for $p=0$.
\noindent {\bf Inductive step:} let us assume that $\Omega_{m+p}=\Omega_m$ and let us prove that $\Omega_{m+p+1}=\Omega_m$. We have to prove that $d_x\mathcal{L}_{\phi_{m+p}}h \in \Omega_m$. Indeed, from the inductive assumption, we know that $\Omega_{m+p}(=\Omega_m)$ is invariant under $\mathcal{L}_f$ and $\mathcal{L}_{\frac{g}{L^1_g}}$. Additionally, because of this invariance, by using proposition \ref{PropPhimTot}, we obtain that $\Omega_m$ is also invariant under $\mathcal{L}_{\phi_j}$, for $j=1,2,\cdots,m+p-1$. Since $d_x\rho \in \Omega_m$ we have $d_x\mathcal{L}_{\phi_{m+p-2}} \rho \in \Omega_m$. Additionally, $d_x\mathcal{L}_{\phi_{m+p-1}}h \in \Omega_m$ and, because of lemma \ref{LemmaFunctionsInOmegam}, we also have $d_x\frac{\mathcal{L}_{\phi_{m+p-2}}L^1_g}{L^1_g} \in \Omega_m$. Finally, because of the invariance under $\mathcal{L}_{\frac{g}{L^1_g}}$, also the Lie derivatives along $\frac{g}{L^1_g}$ of $d_x\mathcal{L}_{\phi_{m+p-1}}h$ and $d_x\frac{\mathcal{L}_{\phi_{m+p-2}}L^1_g}{L^1_g}$ belong to $\Omega_m$.
Now, we use equation (\ref{EquationKeyEquality}) for $j=m+p$. By computing the gradient of this equation it is immediate to obtain that $d_x\mathcal{L}_{\phi_{m+p}}h \in \Omega_m$
$\blacksquare$}
\section{Extension to the case of multiple known inputs and method's summary}\label{SectionExtension}
The previous two sections provide a complete answer to the problem of deriving all the observability properties of a system whose dynamics are driven by a single known input and a single unknown input, and depend nonlinearly on the state and linearly on both inputs. Before providing the steps to be followed in order to obtain the weak local observability properties of such a system, we remark that our results can be extended to the case of multiple known inputs. This extension is simply obtained by re-defining the $\Omega$ codistribution.
We are referring to the nonlinear system characterized by the following equations:
\begin{equation}\label{EquationStateEvolution1E}
\left\{\begin{aligned}
\dot{x} &= \sum_{i=1}^{m_u}f_i ( x ) u_i + g ( x ) w \\
y &= h( x ) \\
\end{aligned}\right.
\end{equation}
\noindent The new $\Omega$ codistribution is defined as follows:
\begin{df}[$\Omega$ codistribution, $m_w=1$, $\forall m_u$]\label{DefinitionOmegaE}
This codistribution is defined recursively by the following algorithm:
\begin{enumerate}
\item $\Omega_0=d_xh$;
\item $\Omega_m=\Omega_{m-1}+\sum_{i=1}^{m_u} \mathcal{L}_{f_i} \Omega_{m-1} + \mathcal{L}_{\frac{g}{L^1_g}} \Omega_{m-1} +\sum_{i=1}^{m_u}\mathcal{L}_{\phi^i_{m-1}} d_xh$
\end{enumerate}
where the vectors $\phi^i_m\in \mathbb{R}^n$ ($i=1,\cdots,m_u$) are defined by the following algorithm:
\begin{enumerate}
\item $\phi^i_0=f_i$;
\item $\phi^i_m=\frac{[\phi^i_{m-1}, ~g]}{L^1_g}$
\end{enumerate}
\end{df}
\noindent All the steps carried out in section \ref{SectionSeparation} can immediately be repeated to extend the validity of theorem \ref{TheoremSeparation} to the system characterized by (\ref{EquationStateEvolution1E}). This extension states that all the observability properties of the state that satisfies the nonlinear dynamics in (\ref{EquationStateEvolution1E}) can be derived by analyzing the codistribution defined by definition \ref{DefinitionOmegaE}.
Finally, theorem \ref{TheoremStop} can also be easily extended to cope with the case of multiple known inputs. In this case, requiring that $\Omega_{m+1} =\Omega_m$ means that $\Omega_m$ must be invariant with respect to $\mathcal{L}_{\frac{g}{L^1_g}}$ and all the $\mathcal{L}_{f_i}$ simultaneously.
\vskip .5cm
We conclude this section by outlining the steps to investigate the weak local observability at a given point $x_0$ of a nonlinear system driven by a single disturbance and several known inputs, i.e., of a system whose state satisfies the dynamics in (\ref{EquationStateEvolution1E}). The validity of the following procedure is a consequence of the theoretical results previously derived (in particular theorem \ref{TheoremSeparation} and theorem \ref{TheoremStop}).
\begin{enumerate}
\item For the chosen $x_0$, compute $L^1_g(= \mathcal{L}^1_g h)$ and $\rho\left(=\frac{\mathcal{L}^2_g h}{(L^1_g)^2}\right)$. In the case when $L^1_g=0$, introduce new local coordinates, as explained at the end of section \ref{SectionSingle} and re-define the output\footnote{Note that in the case of multiple known inputs, for the local coordinates we have the possibility to choose among the $m_u$ functions $f_i$. The most convenient choice is the one that corresponds to the highest relative degree (if this degree coincides with $n$ it means that the state is weakly locally observable and we do not need to pursue the observability analysis).}.
\item Build the codistribution $\Omega_m$ (at $x_0$) by using the algorithm provided in definition \ref{DefinitionOmegaE}, starting from $m=0$
and, for each $m$, check if $d_x\rho \in \Omega_m$.
\item Denote by $m'$ the smallest $m$ such that $d_x\rho \in \Omega_m$.
\item For each $m\ge m'$ check if $\Omega_{m+1}=\Omega_m$ and set $\Omega^*=\Omega_{m^*}$, where $m^*$ is the smallest integer such that $m^*\ge m'$ and $\Omega_{m^*+1}=\Omega_{m^*}$ (note that $m^*\le 2n+2$).
\item If the gradient of a given state component ($x_j$, $j=1,\cdots,n$) belongs to $\Omega^*$ (namely if $d_xx_j\in\Omega^*$) on a given neighbourhood of $x_0$, then $x_j$ is weakly locally observable at $x_0$. If this holds for all the state components, the state $x$ is weakly locally observable at $x_0$. Finally, if the dimension of $\Omega^*$ is smaller than $n$ on a given neighbourhood of $x_0$, then the state is not weakly locally observable at $x_0$.
\end{enumerate}
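The five steps above can be sketched in sympy. The following is a naive, hypothetical implementation (the function name, the pruning of generators, and the iteration bound are our own choices, not from the paper), valid under the assumption $L^1_g\neq 0$ at the point of interest; generators of $\Omega_m$ are stored as scalars $\lambda$, the span being $span\{d_x\lambda\}$, which is legitimate because $\mathcal{L}_v\,d\lambda=d(\mathcal{L}_v\lambda)$:

```python
import sympy as sp

def omega_star(fs, g, h, x, max_iter=8):
    """Sketch of the five-step procedure for
    dx/dt = sum_i f_i(x) u_i + g(x) w,  y = h(x), assuming L^1_g != 0
    (otherwise new local coordinates must be introduced first).
    Returns the dimension of Omega^*, or None if max_iter was reached."""
    lie = lambda s, v: sp.simplify(sum(sp.diff(s, xi) * vi
                                       for xi, vi in zip(x, v)))

    def bracket(u, v):
        # Lie bracket [u, v] of two column vector fields
        return (v.jacobian(x) * u - u.jacobian(x) * v).applyfunc(sp.simplify)

    def rank_of(scalars):
        # rank of the codistribution spanned by the gradients
        M = sp.Matrix([[sp.simplify(sp.diff(s, xi)) for xi in x]
                       for s in scalars])
        return M.rank(simplify=True)

    fs = [sp.Matrix(f) for f in fs]
    g = sp.Matrix(g)
    L1g = lie(h, g)                            # step 1: L^1_g and rho
    rho = sp.simplify(lie(L1g, g) / L1g**2)
    fields = [g / L1g] + fs
    phis = list(fs)                            # phi^i_0 = f_i

    lams = [h]                                 # Omega_0 = span{d_x h}
    for _ in range(max_iter):                  # step 2: Omega recursion
        cands = [lie(l, v) for l in lams for v in fields]
        cands += [lie(h, p) for p in phis]     # L_{phi^i_{m-1}} d_x h
        phis = [bracket(p, g) / L1g for p in phis]
        grew = False
        for c in cands:
            if rank_of(lams + [c]) > rank_of(lams):
                lams.append(c)
                grew = True
        # steps 3-4: stop once the span no longer grows and d_x rho is in it
        if not grew and rank_of(lams + [rho]) == rank_of(lams):
            return rank_of(lams)               # step 5: dim of Omega^*
    return None

# Example: first case of the applications section (y = r, u = omega, w = v),
# for which the analysis below finds an observable space of dimension 2
r, ph, th = sp.symbols('r phi theta', positive=True)
dim = omega_star([[0, 0, 1]],
                 [sp.cos(th - ph), sp.sin(th - ph) / r, 0],
                 r, (r, ph, th))
print(dim)
```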
\section{Applications}\label{SectionApplicationsSystem}
We apply the theory developed in the previous sections in order to investigate the observability properties of several nonlinear systems driven by unknown inputs.
In \ref{SectionApplication1} we consider systems with a single disturbance, namely characterized by the equations given in (\ref{EquationStateEvolution1E}). In this case we will use the results obtained in sections \ref{SectionSingle}, \ref{SectionSeparation}, \ref{SectionStop} and \ref{SectionExtension}. In particular, we will follow the steps outlined at the end of section \ref{SectionExtension}. In \ref{SectionApplication2} we consider the case of multiple disturbances, i.e., when the state dynamics satisfy the first equation in (\ref{EquationStateEvolution}). In this section, we also consider the case of multiple outputs and we use directly the $EORC$, as discussed in section \ref{SectionEORC}.
\subsection{Systems with a single disturbance}\label{SectionApplication1}
We consider a vehicle that moves in a $2D$ environment. The configuration of the vehicle in a global reference frame can be characterized through the vector $[x_v, ~y_v, ~\theta_v]^T$, where $x_v$ and $y_v$ are the Cartesian vehicle coordinates and $\theta_v$ is the vehicle orientation. We assume that the dynamics of this vector satisfy the unicycle differential equations:
\begin{equation}\label{EquationSimpleExampeDynamicsC}
\left\{\begin{aligned}
\dot{x}_v &= v \cos\theta_v \\
\dot{y}_v &= v \sin\theta_v \\
\dot{\theta_v} &= \omega \\
\end{aligned}\right.
\end{equation}
\noindent where $v$ and $\omega$ are the linear and the rotational vehicle speed, respectively, and they are the system inputs. We consider the following three cases of output (see also figure \ref{Fig} for an illustration):
\begin{figure}[htbp]
\begin{center}
\includegraphics[width=.4\columnwidth]{fig.jpg}
\caption{The vehicle state in cartesian and polar coordinates together with the three considered outputs.} \label{Fig}
\end{center}
\end{figure}
\begin{enumerate}
\item the distance from the origin (e.g., a landmark is at the origin and its distance is measured by a range sensor);
\item the bearing of the origin in the local frame (e.g., a landmark is at the origin and its bearing angle is measured by an on-board camera);
\item the bearing of the vehicle in the global frame (e.g., a camera is placed at the origin).
\end{enumerate}
\noindent We can analytically express the output in terms of the state. We remark that the expressions become very simple if we adopt polar coordinates: $r \equiv \sqrt{x_v^2+y_v^2}$, $\phi \equiv \arctan\frac{y_v}{x_v}$. We have, for the three cases, $y=r$, $y=\pi-(\theta_v-\phi)$ and $y=\phi$, respectively. For each of these three cases, we consider the following two cases: $v$ is known, $\omega$ is unknown; $v$ is unknown, $\omega$ is known. The dynamics in these new coordinates become:
\begin{equation}\label{EquationSimpleExampeDynamics}
\left\{\begin{aligned}
\dot{r} &= v \cos(\theta_v-\phi) \\
\dot{\phi} &= \frac{v}{r} \sin(\theta_v-\phi) \\
\dot{\theta_v} &= \omega \\
\end{aligned}\right.
\end{equation}
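These polar-coordinate dynamics follow from (\ref{EquationSimpleExampeDynamicsC}) by differentiating $r$ and $\phi$ along the Cartesian flow. A quick symbolic check (a sketch added here for illustration, not part of the paper):

```python
import sympy as sp

v, r, ph, th = sp.symbols('v r phi theta', positive=True)

# Cartesian unicycle configuration and velocities
xv, yv = r * sp.cos(ph), r * sp.sin(ph)
xdot, ydot = v * sp.cos(th), v * sp.sin(th)

# Chain rule for r = sqrt(x^2 + y^2) and phi = arctan(y/x)
rdot = sp.simplify((xv * xdot + yv * ydot) / sp.sqrt(xv**2 + yv**2))
phidot = sp.simplify((xv * ydot - yv * xdot) / (xv**2 + yv**2))

print(rdot)    # expected: v*cos(theta - phi)
print(phidot)  # expected: v*sin(theta - phi)/r
```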
\subsubsection{$y=r$, $u=\omega$, $w=v$} In this case we have $f=\left[\begin{array}{c}
0 \\
0 \\
1 \\
\end{array}
\right]$ and $g=\left[\begin{array}{c}
\cos(\theta_v-\phi) \\
\frac{\sin(\theta_v-\phi)}{r} \\
0 \\
\end{array}
\right]$.
\noindent We follow the five steps mentioned at the end of section \ref{SectionExtension}. We have $L^1_g=\cos(\theta_v-\phi)$ and $\rho\equiv \frac{L^2_g}{(L^1_g)^2}=\frac{\tan^2(\theta_v-\phi)}{r}$. Additionally:
\[
d_x\rho=\frac{\tan(\theta_v-\phi)}{r}\left[-\frac{\tan(\theta_v-\phi)}{r}, -\frac{2}{\cos^2(\theta_v-\phi)},\frac{2}{\cos^2(\theta_v-\phi)}\right]
\]
\noindent We also have $\Omega_0=span\{[1,0,0]\}$. Hence, $d_x\rho\notin \Omega_0$. Additionally, $\Omega_1=\Omega_0$. We need to compute $\Omega_2$ and, in order to do this, we need to compute $\phi_1$. We obtain: $\phi_1=\left[\begin{array}{c}
-\tan(\theta_v-\phi) \\
\frac{1}{r} \\
0 \\
\end{array}
\right]$
and $\Omega_2=span\left\{[1,0,0], ~\left[0,\frac{1}{\cos^2(\theta_v-\phi)}, -\frac{1}{\cos^2(\theta_v-\phi)}\right]\right\}$. It is immediate to check that $d_x\rho\in \Omega_2$, meaning that $m'=2$. Additionally, by a direct computation, it is possible to check that $\Omega_3=\Omega_2$ meaning that $m^*=2$ and $\Omega^*=\Omega_2$, whose dimension is $2$. We conclude that the dimension of the observable space is equal to $2$ and the state is not weakly locally observable.
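The quantities above can be reproduced symbolically. The following sympy sketch (an illustration with our own helper names, not the authors' code) recomputes $L^1_g$, $\rho$, $\phi_1$ and the dimension of $\Omega_2$ for this case:

```python
import sympy as sp

r, ph, th = sp.symbols('r phi theta', positive=True)
x = (r, ph, th)
D = th - ph
lie = lambda s, v: sp.simplify(sum(sp.diff(s, xi) * vi
                                   for xi, vi in zip(x, v)))

# Case y = r, u = omega, w = v
f = sp.Matrix([0, 0, 1])
g = sp.Matrix([sp.cos(D), sp.sin(D) / r, 0])
h = r

L1g = lie(h, g)                                    # cos(theta_v - phi)
rho = sp.simplify(lie(L1g, g) / L1g**2)            # tan^2(theta_v - phi)/r
phi1 = ((g.jacobian(x) * f - f.jacobian(x) * g) / L1g).applyfunc(sp.simplify)

# Omega_2 = span{d_x h, d_x(L_{phi_1} h)}; its dimension should be 2
Omega2 = sp.Matrix([[sp.diff(s, xi) for xi in x]
                    for s in (h, lie(h, phi1))])
print(Omega2.rank(simplify=True))
```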
\subsubsection{$y=r$, $u=v$, $w=\omega$} In this case we have $f=\left[\begin{array}{c}
\cos(\theta_v-\phi) \\
\frac{\sin(\theta_v-\phi)}{r} \\
0 \\
\end{array}
\right]$ and $g=\left[\begin{array}{c}
0 \\
0 \\
1 \\
\end{array}
\right]$
\noindent We follow the five steps mentioned at the end of section \ref{SectionExtension}. We easily obtain $L^1_g=0$. Hence, we have to introduce new local coordinates, as explained at the end of section \ref{SectionSingle}. We obtain $\mathcal{L}^1_fh=\cos(\theta_v-\phi)$ and find that the relative degree of the associated system in (\ref{EquationStateEvolutionAss}) is $2$.
Let us denote the new coordinates by $x_1', ~x_2', ~x_3'$. In accordance with (\ref{EquationLocalCoordinates1}) and (\ref{EquationLocalCoordinates2}) we should set $x_1'=r$ and $x_2'=\cos(\theta_v-\phi)$. On the other hand, to simplify the computation, we set $x_2'=\theta_v-\phi$. Finally, we set $x_3'=\theta_v$.
We compute the new vector fields that characterize the dynamics in the new coordinates. We have:
\begin{equation}\label{Equationftgt}
\tilde{f}\equiv \left[\begin{array}{c}
\cos(x_2')\\
-\frac{\sin(x_2')}{x_1'} \\
0 \\
\end{array}
\right]
~~~~~ \tilde{g}\equiv \left[\begin{array}{c}
0\\
1\\
1\\
\end{array}
\right]
\end{equation}
\noindent Additionally, we set $\tilde{h}=\cos(x_2')$ and $\Omega_1=span\{[1, 0, 0],~ [0, -\sin(x_2'),0]\}$. In the new coordinates we obtain: $L^1_g=-\sin(x_2')$ and $\rho=-\frac{\cos(x_2')}{\sin^2(x_2')}$. It is immediate to check that $d_x\rho\in \Omega_1$, meaning that $m'=1$. Additionally, by a direct computation, it is possible to check that $\Omega_2=\Omega_1$ meaning that $m^*=1$ and $\Omega^*=\Omega_1$, whose dimension is $2$. We conclude that the dimension of the observable space is equal to $2$ and the state is not weakly locally observable.
\subsubsection{$y=\theta_v-\phi$, $u=\omega$, $w=v$} In this case we have $f=\left[\begin{array}{c}
0 \\
0 \\
1 \\
\end{array}
\right]$ and $g=\left[\begin{array}{c}
\cos(\theta_v-\phi) \\
\frac{\sin(\theta_v-\phi)}{r} \\
0 \\
\end{array}
\right]$.
\noindent We follow the five steps mentioned at the end of section \ref{SectionExtension}. We have $L^1_g=-\frac{\sin(\theta_v-\phi)}{r}$ and $\rho=2\cot(\theta_v-\phi)$. Additionally:
\[
d_x\rho=\frac{2}{\sin^2(\theta_v-\phi)}\left[0, 1,-1\right]
\]
\noindent We also have $\Omega_0=span\{[0,-1,1]\}$. Hence, $d_x\rho \in \Omega_0$, meaning that $m'=0$. Additionally, by a direct computation, it is possible to check that $\Omega_1=\Omega_0$ meaning that $m^*=0$ and $\Omega^*=\Omega_0$, whose dimension is $1$. We conclude that the dimension of the observable space is equal to $1$ and the state is not weakly locally observable.
\subsubsection{$y=\theta_v-\phi$, $u=v$, $w=\omega$} In this case we have $f=\left[\begin{array}{c}
\cos(\theta_v-\phi) \\
\frac{\sin(\theta_v-\phi)}{r}\\
0 \\
\end{array}
\right]$ and $g=\left[\begin{array}{c}
0 \\
0 \\
1 \\
\end{array}
\right]$
\noindent We follow the five steps mentioned at the end of section \ref{SectionExtension}. We have $L^1_g=1$ and $\rho=0$. Hence, $d_x\rho=[0,0,0]$ and we do not need to check if $d_x\rho\in \Omega_m$. In other words, we can set $m'=0$.
By a direct computation we obtain: $\Omega_0=span\{[0,-1,1]\}$, $\Omega_1=span\left\{[0,-1,1], \right.$ $\left. \left[-\frac{\sin(\theta_v-\phi)}{r^2},-\frac{\cos(\theta_v-\phi)}{r}, \frac{\cos(\theta_v-\phi)}{r}\right]\right\}$. Additionally, we obtain $\Omega_2=\Omega_1$, meaning that $m^*=1$ and $\Omega^*=\Omega_1$, whose dimension is $2$. We conclude that the dimension of the observable space is equal to $2$ and the state is not weakly locally observable.
\subsubsection{$y=\phi$, $u=\omega$, $w=v$} In this case we have $f=\left[\begin{array}{c}
0 \\
0 \\
1 \\
\end{array}
\right]$ and $g=\left[\begin{array}{c}
\cos(\theta_v-\phi) \\
\frac{\sin(\theta_v-\phi)}{r} \\
0 \\
\end{array}
\right]$.
\noindent We follow the five steps mentioned at the end of section \ref{SectionExtension}. We have $L^1_g=\frac{\sin(\theta_v-\phi)}{r}$ and $\rho=-2\cot(\theta_v-\phi)$. Additionally:
\[
d_x\rho=\frac{2}{\sin^2(\theta_v-\phi)}\left[0, -1,1\right]
\]
\noindent We also have $\Omega_0=span\{[0,1,0]\}$. Hence, $d_x\rho \notin \Omega_0$. Additionally, $\Omega_1=\Omega_0$. We need to compute $\Omega_2$ and, in order to do this, we need to compute $\phi_1$. We obtain: $\phi_1=\left[\begin{array}{c}
-r \\
\cot(\theta_v-\phi) \\
0 \\
\end{array}
\right]$
and $\Omega_2=span\left\{[0,1,0], ~\frac{1}{\sin^2(\theta_v-\phi)}\left[0, 1,-1\right]\right\}$. It is immediate to check that $d_x\rho\in \Omega_2$, meaning that $m'=2$. Additionally, by a direct computation, it is possible to check that $\Omega_3=\Omega_2$ meaning that $m^*=2$ and $\Omega^*=\Omega_2$, whose dimension is $2$. We conclude that the dimension of the observable space is equal to $2$ and the state is not weakly locally observable.
\subsubsection{$y=\phi$, $u=v$, $w=\omega$} In this case we have $f=\left[\begin{array}{c}
\cos(\theta_v-\phi) \\
\frac{\sin(\theta_v-\phi)}{r} \\
0 \\
\end{array}
\right]$ and $g=\left[\begin{array}{c}
0 \\
0 \\
1 \\
\end{array}
\right]$
\noindent We follow the five steps mentioned at the end of section \ref{SectionExtension}. We easily obtain $L^1_g=0$. Hence, we have to introduce new local coordinates, as explained at the end of section \ref{SectionSingle}. We obtain $\mathcal{L}^1_fh=\frac{\sin(\theta_v-\phi)}{r}$ and find that the relative degree of the associated system in (\ref{EquationStateEvolutionAss}) is $2$.
Let us denote the new coordinates by $x_1', ~x_2', ~x_3'$. In accordance with (\ref{EquationLocalCoordinates1}) and (\ref{EquationLocalCoordinates2}) we set $x_1'=\phi$ and $x_2'=\frac{\sin(\theta_v-\phi)}{r}$. Finally, we set $x_3'=\frac{\cos(\theta_v-\phi)}{r}$.
We compute the new vector fields that characterize the dynamics in the new coordinates. We obtain:
\begin{equation}\label{Equationftgt2}
\tilde{f}\equiv \left[\begin{array}{c}
x_2'\\
-2x_2'x_3' \\
x_2'^2-x_3'^2\\
\end{array}
\right]
~~~~~ \tilde{g}\equiv \left[\begin{array}{c}
0\\
x_3'\\
-x_2'\\
\end{array}
\right]
\end{equation}
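The change of coordinates can be cross-checked by pushing $f$ and $g$ forward through the Jacobian of the map $(r,\phi,\theta_v)\mapsto(x_1',x_2',x_3')$. The following sketch is ours (not in the original text) and assumes the state ordering $(r,\phi,\theta_v)$:

```python
import sympy as sp

r, phi, th = sp.symbols('r phi theta_v', positive=True)
u = th - phi
x = sp.Matrix([r, phi, th])

f = sp.Matrix([sp.cos(u), sp.sin(u)/r, 0])
g = sp.Matrix([0, 0, 1])

# new coordinates: x1' = phi, x2' = sin(u)/r, x3' = cos(u)/r
x2p, x3p = sp.sin(u)/r, sp.cos(u)/r
xp = sp.Matrix([phi, x2p, x3p])
J = xp.jacobian(x)

# expected vector fields in the new coordinates (as in the equation above)
ft = sp.Matrix([x2p, -2*x2p*x3p, x2p**2 - x3p**2])
gt = sp.Matrix([0, x3p, -x2p])

assert sp.simplify(J*f - ft) == sp.zeros(3, 1)
assert sp.simplify(J*g - gt) == sp.zeros(3, 1)
```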
\noindent Additionally, we set $\tilde{h}=x_2'$ and $\Omega_1=span\{[1, 0, 0],~ [0, 1,0]\}$. In the new coordinates we obtain: $L^1_g=x_3'$ and $\rho=-\frac{x_2'}{x_3'^2}$.
Since $\rho$ depends on $x_3'$, $d_x\rho\notin \Omega_1$. Since the dimension of $\Omega_1$ is already $2$, because of lemma \ref{LemmaRhoInOmegam} we know that there exists an integer $m$ such that the dimension of $\Omega_m$ is larger than $2$. Hence, we conclude that the entire state is weakly locally observable.
\subsection{Systems with multiple disturbances}\label{SectionApplication2}
In this case we refer to the general case, i.e., to systems characterized by the dynamics given in (\ref{EquationStateEvolution}). For this general case we do not have the results stated by the theorem of separation (theorem \ref{TheoremSeparation}) and we have to compute the entire codistribution and to proceed as it has been described in section \ref{SectionEORC}.
We derive the observability properties of two systems with unknown inputs. The first system characterizes a localization problem in the framework of mobile robotics. The state and its dynamics are the same as in the example discussed in section \ref{SectionApplication1}. However, we consider a different output and also the case in which both inputs are unknown.
For this simple example, the use of our theory is not required to derive the observability properties, which can be obtained by using intuitive reasoning.
The second system is much more complex and describes one of the most important sensor fusion problems: the problem of fusing visual and inertial measurements. We will refer to this problem as the visual-inertial structure from motion problem (the Vi-SfM~ problem). This problem has been investigated by many disciplines, both in the framework of computer science \cite{Bry08,Jon11,Kelly11,TRO12,IJCV14,Stre04} and in the framework of neuroscience (e.g., \cite{Bert75,Dokka11,Fets10}).
Inertial sensors usually consist of three orthogonal accelerometers and three orthogonal gyroscopes. All together, they constitute the Inertial Measurement Unit (IMU). We will refer to the fusion of monocular vision with the measurements from an IMU as the {\it standard} Vi-SfM~ problem.
In \cite{Hesch12,Jon11,Kelly11,Kottas12,Li12,TRO12,Mirza08} and \cite{Wei11Thesis} the observability properties of the standard Vi-SfM~ problem have been investigated in several different scenarios. Very recently, following two independent procedures, the most general result for the standard Vi-SfM~ problem has been provided in \cite{Guo13} and \cite{Iros13}.
This result can be summarized as follows. In the standard Vi-SfM~ problem all the independent observable states are: the positions in the local frame of all the observed features, the three components of the speed in the local frame, the biases affecting the inertial measurements, the roll and the pitch angles, the magnitude of the gravity and the transformation between the camera and IMU frames. The fact that the yaw angle is not observable is an obvious consequence of the system invariance under rotation about the gravity vector.
We want to use here the theory developed in the previous sections in order to investigate the observability properties of the Vi-SfM~ problem when the number of inertial sensors is reduced, i.e., when the system is driven by unknown inputs.
\subsubsection{Simple 2D localization problem}\label{SubSectionApplication}
We consider the system characterized by the same dynamics given in (\ref{EquationSimpleExampeDynamicsC}). Additionally, we assume that the vehicle is equipped with a GPS able to provide its position. Hence, the system output is the following two-components vector:
\begin{equation}\label{EquationSimpleExampleOutput1}
y=[x_v, ~y_v]^T
\end{equation}
Let us start by considering the case when both the system inputs, i.e., the two functions $v(t)$ and $\omega(t)$, are available. By comparing (\ref{EquationStateEvolution}) with (\ref{EquationSimpleExampeDynamicsC}) we obtain $x=[x_v, ~y_v, ~\theta_v]^T$, $m_u=2$, $m_w=0$, $u_1=v$, $u_2=\omega$, $f_0(x)=[0, ~0, ~0]^T$, $f_1(x)=[\cos\theta_v, ~\sin\theta_v, ~0]^T$ and $f_2(x)=[0, ~0, ~1]^T$.
In order to investigate the observability properties, we apply the observability rank condition introduced in \cite{Her77}.
The system has two outputs: $h_x \equiv x_v$ and $h_y \equiv y_v$. By definition, they coincide with their zero-order Lie derivatives. Their gradients with respect to the state are, respectively, $[1,~0,~0]$ and $[0,~1,~0]$. Hence, the space spanned by the zero-order Lie derivatives has dimension two. Let us compute the first-order Lie derivatives. We obtain: $\mathcal{L}^1_1 h_x=\cos\theta_v$, $\mathcal{L}^1_1 h_y=\sin\theta_v$, $\mathcal{L}^1_2 h_x=\mathcal{L}^1_2 h_y=0$. Hence, the Lie derivatives up to the first order span the entire configuration space and we conclude that the state is weakly locally observable.
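The rank computation above can be reproduced symbolically. The following sketch is ours (Python/sympy): it stacks the gradients of the Lie derivatives up to first order and checks that the resulting observability matrix has full rank:

```python
import sympy as sp

xv, yv, th = sp.symbols('x_v y_v theta_v')
x = sp.Matrix([xv, yv, th])

f1 = sp.Matrix([sp.cos(th), sp.sin(th), 0])  # vector field of input v
f2 = sp.Matrix([0, 0, 1])                    # vector field of input omega

outputs = [xv, yv]
lies = list(outputs)                 # zero-order Lie derivatives = outputs
for h in outputs:
    grad = sp.Matrix([h]).jacobian(x)
    lies.append((grad * f1)[0])      # first-order Lie derivative along f1
    lies.append((grad * f2)[0])      # first-order Lie derivative along f2

O = sp.Matrix([[sp.diff(L, s) for s in x] for L in lies])
assert O.rank() == 3   # full rank: the state is weakly locally observable
```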
We now consider the case when both the system inputs are unknown. In this case, by comparing (\ref{EquationStateEvolution}) with (\ref{EquationSimpleExampeDynamicsC}) we obtain $m_u=0$, $m_w=2$, $w_1=v$, $w_2=\omega$, $f_0(x)=[0, ~0, ~0]^T$, $g_1(x)=[\cos\theta_v, ~\sin\theta_v, ~0]^T$ and $g_2(x)=[0, ~0, ~1]^T$.
Intuitively, we know that the knowledge of both inputs is unnecessary in order to have full observability of the entire state. Indeed, the first two state components can be directly obtained from the GPS. By knowing these two components during a given time interval, we also know their time derivatives. In particular, we know $\dot{x}_v(0)$ and $\dot{y}_v(0)$. From (\ref{EquationSimpleExampeDynamicsC}) we easily obtain: $\theta_v(0)=\arctan\left(\frac{\dot{y}_v(0)}{\dot{x}_v(0)} \right)$. Hence, the initial orientation is also observable by using only the GPS measurements.
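Numerically, the recovery of the initial orientation from the GPS-derived velocities can be illustrated as follows (our sketch; \texttt{atan2} is used in place of the arctangent to resolve the quadrant ambiguity, assuming $v>0$):

```python
import math

th_true, v = 0.8, 1.2                  # arbitrary test values (assumed)
x_dot = v * math.cos(th_true)          # plays the role of dot{x}_v(0)
y_dot = v * math.sin(th_true)          # plays the role of dot{y}_v(0)

th_est = math.atan2(y_dot, x_dot)      # recovers theta_v(0) when v > 0
assert abs(th_est - th_true) < 1e-12
```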
Let us proceed by applying the $EORC$, discussed in section \ref{SectionEORC}. We start by computing the codistribution $\bar{\Omega}_0$ in $\Sigma^{(0)}$. We easily obtain:
\[
\bar{\Omega}_0= span\{[1,~0,~0], ~[0,~1,~0]\}
\]
\noindent From this we know that $x_v$ and $y_v$ are weakly locally observable. We want to know whether $\theta_v$ is also weakly locally observable (in which case the entire state would be weakly locally observable). We have to compute $\bar{\Omega}_1$ in $\Sigma^{(1)}$. To this end, we build the system $\Sigma^{(1)}$. We have: $^1x=[x_v, ~y_v, ~\theta_v, ~v, ~\omega]^T$. We can easily obtain the analytical expression for the quantities appearing in (\ref{EquationExtendedStateEvolution}). We have: $f_0^{(1)}(x)=[\cos\theta_v v, ~\sin\theta_v v, ~\omega, ~0, ~0]^T$. We compute the analytical expression of the first-order Lie derivatives along this vector field. We have: $\mathcal{L}^1_0 h_x=\nabla h_x \cdot f_0^{(1)}(x)=[1,~0,~0,~0,~0]\cdot [\cos\theta_v v, ~\sin\theta_v v, ~\omega, ~0, ~0]=\cos\theta_v v$ (similarly, we obtain $\mathcal{L}^1_0 h_y=\sin\theta_v v$). We obtain:
\[
\bar{\Omega}_1= span\{[1,0,0, 0,0], [0,1,0, 0,0],
\]
\[ [0,0,-\sin\theta_v v, \cos\theta_v, 0],[0,0,\cos\theta_v v, \sin\theta_v, 0]\}
\]
\noindent from which we obtain that the gradient of $\theta_v$ belongs to $\bar{\Omega}_1$. Therefore, $\theta_v$ is also weakly locally observable, and so is the entire original state.
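The same conclusion can be checked symbolically (our sketch, not in the original text): stack the gradients spanning $\bar{\Omega}_1$ and verify that adding the gradient of $\theta_v$ does not increase the rank (for $v\neq 0$):

```python
import sympy as sp

xv, yv, th, v, w = sp.symbols('x_v y_v theta_v v omega', positive=True)
x1 = sp.Matrix([xv, yv, th, v, w])       # extended state ^1x

f0 = sp.Matrix([sp.cos(th)*v, sp.sin(th)*v, w, 0, 0])

outputs = [xv, yv]
lies = list(outputs)
lies += [(sp.Matrix([h]).jacobian(x1) * f0)[0] for h in outputs]

O1 = sp.Matrix([[sp.diff(L, s) for s in x1] for L in lies])
d_th = sp.Matrix([[0, 0, 1, 0, 0]])      # gradient of theta_v

assert O1.rank() == 4
assert sp.Matrix.vstack(O1, d_th).rank() == 4   # d(theta_v) already in span
```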
\subsubsection{The Vi-SfM~ with partial input knowledge}
For the sake of brevity, we do not provide here the computations necessary to deal with this problem. All the details are available in \cite{ICRA14,FnT14} (see also the work in \cite{TRO11} for the definition of continuous symmetries). Here we provide a summary of these results. First of all, we remark that the Vi-SfM~ problem can be described by a nonlinear system with six inputs (3 accelerations along the three axes, provided by the accelerometers, and 3 angular speeds, provided by the gyroscopes). The outputs are the ones provided by the vision. In the simplest case of a single point feature, they consist of the two bearing angles of this point feature in the camera frame.
We analyzed the following three cases:
\begin{enumerate}
\item camera extrinsically calibrated, only one input known (corresponding to the acceleration along a single axis);
\item camera extrinsically uncalibrated, only one input known (corresponding to the acceleration along a single axis);
\item camera extrinsically uncalibrated, two inputs known (corresponding to the acceleration along two orthogonal axes).
\end{enumerate}
\noindent The dimension of the original state is $12$ in the first case and $23$ in the other two cases. Additionally $m_u=1$ and $m_w=5$ in the first two cases while $m_u=2$ and $m_w=4$ in the last case.
In \cite{ICRA14,FnT14} we prove that the observability properties of Vi-SfM~ do not change by removing all three gyroscopes and one of the accelerometers. In other words, exactly the same properties hold when the sensor system only consists of a monocular camera and two accelerometers. To achieve this result, we computed the Lie derivatives up to the second order for the third case mentioned above. By removing a further accelerometer (i.e., by considering the case of a monocular camera and a single accelerometer) the system loses part of its observability properties. In particular, the distribution $\Delta^k(\equiv \bar{\Omega}_k^{\perp})$, $\forall k\ge 2$, contains a single vector. This vector describes a continuous symmetry, namely the invariance under rotation around the accelerometer axis. This means that some of the internal parameters that define the extrinsic camera calibration are no longer identifiable. Although this symmetry does not affect the observability of the absolute scale and the magnitude of the velocity, it results in the indistinguishability of all the initial speeds that differ by a rotation around the accelerometer axis. On the other hand, if the camera is extrinsically calibrated (i.e., if the relative transformation between the camera frame and the accelerometer frame is known, as in the first case mentioned above), this invariance disappears and the system maintains full observability, as in the case of three orthogonal accelerometers and gyroscopes.
The analysis of this system (the first case mentioned above) has been done in the extreme case when only a single point feature is available. This required significantly augmenting the original state. In particular, in \cite{ICRA14,FnT14} we computed all the Lie derivatives up to the $7^{th}$ order, i.e., we included in the original state the 5 unknown inputs together with their time-derivatives up to the sixth order. We proved that the gradient of any component of the original state, with the exception of the yaw angle, is orthogonal to the distribution $\Delta^k$, $\forall k\ge 7$ (see the computational details in \cite{ICRA14,FnT14})\footnote{Note that the yaw angle is not observable even in the case when all the 6 inputs are known. The fact that the yaw is unobservable is a consequence of a symmetry in the considered system, which is the system invariance under rotations about the gravity axis.}.
\newpage
\section{Conclusion}\label{SectionConclusion}
In this paper we investigated the problem of nonlinear observability when part (or even all) of the system inputs is unknown. We made the assumption that the unknown inputs are differentiable functions of time (up to a given order). The goal was not to design new observers but to provide simple analytic conditions in order to check the weak local observability of the state. An unknown input was also called disturbance.
The analysis started by extending the observability rank condition. This was obtained by a state augmentation and was called the extended observability rank condition. In general, by further augmenting the state, the observability properties of the original state also increase. As a result, the extended observability rank condition only provides a sufficient condition for the weak local observability of the original state, since the state augmentation can be continued indefinitely. Additionally, the computational cost required to obtain the observability properties through the extended observability rank condition depends dramatically on the dimension of the augmented state. For these reasons, we focused our investigation on the following two fundamental issues. The former consisted in understanding whether there exists a given augmented state such that, by further augmenting the state, the observability properties of the original state provided by the extended observability rank condition remain unchanged. The latter consisted in understanding whether it is possible to derive the observability properties of the original state by computing a codistribution defined in the original space, namely a codistribution consisting of covectors with the same dimension as the original state. Both these issues have been fully addressed in the case of a single unknown input.
In this case, we provided an analytical method to operate a separation on the codistribution computed by the extended observability rank condition, i.e., the codistribution defined in the augmented space. Thanks to this separation, we introduced a method able to obtain the observability properties by simply computing a codistribution that is defined in the original space (theorem \ref{TheoremSeparation}).
The new codistribution is defined recursively by a very simple algorithm, given in definition \ref{DefinitionOmega} in section \ref{SectionSeparation} (for the case of a single known input) and in definition \ref{DefinitionOmegaE} in section \ref{SectionExtension} (for the case of multiple known inputs). Hence, the overall method to obtain all the observability properties is very simple. On the other hand, the analytic derivations required to prove the validity of this separation are complex, and we are currently extending them to the case of multiple unknown inputs.
Finally, we showed that the recursive algorithm converges in a finite number of steps and we also provided the criterion to establish whether the algorithm has converged (theorem \ref{TheoremStop}). This proof, too, is based on several intricate analytical steps.
Both theorems \ref{TheoremSeparation} and \ref{TheoremStop} have first been proved in the case of a single known input (sections \ref{SectionSeparation} and \ref{SectionStop}) but in section \ref{SectionExtension} their validity was extended to the case of multiple known inputs.
All the theoretical results have been illustrated by deriving the observability properties of several nonlinear systems driven by known and unknown inputs.
\newpage
Dante's Medieval Cosmology
Written between 1308 and 1320, Dante's Divina Commedia (Divine Comedy) represents the peak of medieval cosmology, blending the Ptolemaic geographical and astronomical systems with Christian theology and philosophy. A motionless spherical Earth at the center of the universe is surrounded by the spheres of the seven classical planets, the stars, and the Primum Mobile. The afterworld is divided into three sections: Inferno (Hell), Purgatorio (Purgatory), and Paradiso (Heaven). Hell is found inside the Earth, divided into nine circles corresponding to increasing levels of sin. Purgatory, divided into two ante-purgatory ledges, seven terraces, and the Earthly Paradise, lies opposite Jerusalem, between the Earth's surface and the sphere of the Moon. Above are the nine spheres of Heaven, surmounted by the nonphysical Empyrean containing the Rose of the Blessed, the Angelic Choirs, and God. This Demonstration offers a schematic, three-dimensional, interactive view of Dante's cosmology.
Contributed by: Paolo Maraner (December 2015)
The original title of Dante's poem was Comedy; the adjective Divine was added later in the fourteenth century. The present-day standard reference edition is [1]. English translations of the Comedy continue to be published regularly. See, for example, [2].
[1] D. Alighieri, Commedia (G. Petrocchi, ed.), 3 vols., Milano: Mondadori, 1966–67.
[2] D. Alighieri, The Divine Comedy (H. F. Cary, trans.), New York: Grolier, 1973.
Dante Alighieri (1265-1321) (Wolfram ScienceWorld)
Paolo Maraner "Dante's Medieval Cosmology"
http://demonstrations.wolfram.com/DantesMedievalCosmology/
Published: December 2 2015
All right, but how much does a yard of sand weigh?
What if I use concrete?
Our sand calculator is a tool designed specifically to help you with the calculations on the building site. Estimating the required amount of any building material is a difficult task, and errors may result either in running out of materials when all machines are in full swing, or in heaps of sand lying around after the earthworks are completed. Use this sand calculator or our paver sand calculator to answer the question "how much sand do I need" and never worry about it again!
Determine the length and width of a cuboidal excavation. For example, we can assume an excavation of length L = 12 yd and width b = 3 yd.
Calculate the area of the excavation, multiplying the length by width. In our case, A = 12 * 3 = 36 yd^2. You can also type the area of the excavation directly into our calculator if you choose an excavation of some more sophisticated shape.
Establish the depth of the excavation. Let's say it's d = 0.5 yd.
Multiply the area and depth of the excavation to obtain its volume: 36 * 0.5 = 18 cu yd.
The volume of sand required is equal to the volume of excavation. Our sand calculator will display this value for you.
To calculate the weight of a cubic yard of sand, you simply have to multiply the volume by density. You don't have to remember the density of sand, though - our calculator has a set value for density so you don't have to type it in manually. Of course, if you decide to use some unusual material, feel free to change the value!
Once you know the total weight of sand you have to buy, you won't have to worry about overspending on building materials. But how much exactly will you spend? Our sand calculator can help you with that - all you have to do is enter the price of sand (per unit of mass, such as tonne, or per unit of volume, such as cubic yard). The calculator will display the total cost of the sand you need.
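The whole estimate can be condensed into a few lines of code. The sketch below is ours, not the calculator's actual implementation; the density default (about 2700 lb per cubic yard, typical for dry sand) and the price are placeholder assumptions you should replace with your own values:

```python
def sand_estimate(length_yd, width_yd, depth_yd,
                  density_lb_per_cuyd=2700.0,   # assumed: dry sand
                  price_per_cuyd=30.0):         # assumed: local price
    """Return (volume in cu yd, weight in lb, cost) for a cuboidal excavation."""
    volume = length_yd * width_yd * depth_yd
    weight = volume * density_lb_per_cuyd
    cost = volume * price_per_cuyd
    return volume, weight, cost

# the worked example from the text: 12 yd x 3 yd, 0.5 yd deep
volume, weight, cost = sand_estimate(12, 3, 0.5)
print(volume)   # 18.0 cubic yards
```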
Then head to our concrete estimator to calculate the number of bags of cement needed to cast concrete elements of given dimensions.
Sand Calculator can be embedded on your website to enrich the content you wrote and make it easier for your visitors to understand your message.
\section{Introduction}
One of the fundamental ingredients in the theory of non-commutative or
quantum geometry is the notion of a differential calculus.
In the framework of quantum groups the natural notion
is that of a
bicovariant differential calculus as introduced by Woronowicz
\cite{Wor_calculi}. Due to the allowance of non-commutativity
the uniqueness of a canonical calculus is lost.
It is therefore desirable to classify the possible choices.
The most important piece is the space of one-forms or ``first
order differential calculus'' to which we will restrict our attention
in the following. (From this point on we will use the term
``differential calculus'' to denote a
bicovariant first order differential calculus).
Much attention has been devoted to the investigation of differential
calculi on quantum groups $C_q(G)$ of function algebra type for
$G$ a simple Lie group.
Natural differential calculi on matrix quantum groups were obtained by
Jurco \cite{Jur} and
Carow-Watamura et al.\
\cite{CaScWaWe}. A partial classification of calculi of the same
dimension as the natural ones
was obtained by
Schm\"udgen and Sch\"uler \cite{ScSc2}.
More recently, a classification theorem for factorisable
cosemisimple quantum groups was obtained by Majid \cite{Majid_calculi},
covering the general $C_q(G)$ case. A similar result was
obtained later by Baumann and Schmitt \cite{BaSc}.
Also, Heckenberger and Schm\"udgen \cite{HeSc} gave a
complete classification on $C_q(SL(N))$ and $C_q(Sp(N))$.
In contrast, for $G$ not simple or semisimple the differential calculi
on $C_q(G)$
are largely unknown. A particularly basic case is the Lie group $B_+$
associated with the Lie algebra $\lalg{b_+}$ generated by two elements
$X,H$ with the relation $[H,X]=X$. The quantum enveloping algebra
\ensuremath{U_q(\lalg{b_+})}{}
is self-dual, i.e.\ is non-degenerately paired with itself \cite{Drinfeld}.
This has an interesting consequence: \ensuremath{U_q(\lalg{b_+})}{} may be identified with (a
certain algebraic model of) \ensuremath{C_q(B_+)}. The differential calculi on this
quantum group and on its ``classical limits'' \ensuremath{C(B_+)}{} and \ensuremath{U(\lalg{b_+})}{}
will be the main concern of this paper. We pay hereby equal attention
to the dual notion of ``quantum tangent space''.
In section \ref{sec:q} we obtain the complete classification of differential
calculi on \ensuremath{C_q(B_+)}{}. It turns out that (finite
dimensional) differential
calculi are characterised by finite subsets $I\subset\mathbb{N}$.
These
sets determine the decomposition into coirreducible (i.e.\ not
admitting quotients) differential calculi
characterised by single integers. For the coirreducible calculi the
explicit formulas for the commutation relations and braided
derivations are given.
In section \ref{sec:class} we give the complete classification for the
classical function algebra \ensuremath{C(B_+)}{}. It is essentially the same as in the
$q$-deformed setting and we stress this by giving an almost
one-to-one correspondence of differential calculi to those obtained in
the previous section. In contrast, however, the decomposition and
coirreducibility properties do not hold at all. (One may even say that
they are maximally violated). We give the explicit formulas for those
calculi corresponding to coirreducible ones.
More interesting perhaps is the ``dual'' classical limit. I.e.\ we
view \ensuremath{U(\lalg{b_+})}{} as a quantum function algebra with quantum enveloping
algebra \ensuremath{C(B_+)}{}. This is investigated in section \ref{sec:dual}. It
turns out that in this setting we have considerably more freedom in
choosing a
differential calculus since the bicovariance condition becomes much
weaker. This shows that this dual classical limit is in a sense
``unnatural'' as compared to the ordinary classical limit of section
\ref{sec:class}.
However, we can still establish a correspondence of certain
differential calculi to those of section \ref{sec:q}. The
decomposition properties are conserved while the coirreducibility
properties are not.
We give the
formulas for the calculi corresponding to coirreducible ones.
Another interesting aspect of viewing \ensuremath{U(\lalg{b_+})}{} as a quantum function
algebra is the connection to quantum deformed models of space-time and
its symmetries. In particular, the $\kappa$-deformed Minkowski space
coming from the $\kappa$-deformed Poincar\'e algebra
\cite{LuNoRu}\cite{MaRu} is just a simple generalisation of \ensuremath{U(\lalg{b_+})}.
We use this in section \ref{sec:kappa} to give
a natural $4$-dimensional differential calculus. Then we show (in a
formal context) that integration is given by
the usual Lebesgue integral on $\mathbb{R}^n$ after normal ordering.
This is obtained in an intrinsic context different from the standard
$\kappa$-Poincar\'e approach.
A further important motivation for the investigation of differential
calculi on
\ensuremath{U(\lalg{b_+})}{} and \ensuremath{C(B_+)}{} is the relation of those objects to the Planck-scale
Hopf algebra \cite{Majid_Planck}\cite{Majid_book}. This shall be
developed elsewhere.
In the remaining parts of this introduction we will specify our
conventions and provide preliminaries on the quantum group \ensuremath{U_q(\lalg{b_+})}, its
deformations, and differential calculi.
\subsection{Conventions}
Throughout, $\k$ denotes a field of characteristic 0 and
$\k(q)$ denotes the field of rational
functions in one parameter $q$ over $\k$.
$\k(q)$ is our ground field in
the $q$-deformed setting, while $\k$ is the
ground field in the ``classical'' settings.
Within section \ref{sec:q} one could equally well view $\k$ as the ground
field with $q\in\k^*$ not a root of unity. This point of view is
problematic, however, when obtaining ``classical limits'' as
in sections \ref{sec:class} and \ref{sec:dual}.
The positive integers are denoted by $\mathbb{N}$ while the non-negative
integers are denoted by $\mathbb{N}_0$.
We define $q$-integers, $q$-factorials and
$q$-binomials as follows:
\begin{gather*}
[n]_q=\sum_{i=0}^{n-1} q^i\qquad
[n]_q!=[1]_q [2]_q\cdots [n]_q\qquad
\binomq{n}{m}=\frac{[n]_q!}{[m]_q! [n-m]_q!}
\end{gather*}
For a function of several variables (among
them $x$) over $\k$ we define
\begin{gather*}
(T_{a,x} f)(x) = f(x+a)\\
(\fdiff_{a,x} f)(x) = \frac{f(x+a)-f(x)}{a}
\end{gather*}
with $a\in\k$ and similarly over $\k(q)$
\begin{gather*}
(Q_{m,x} f)(x) = f(q^m x)\\
(\partial_{q,x} f)(x) = \frac{f(x)-f(qx)}{x(1-q)}\\
\end{gather*}
with $m\in\mathbb{Z}$.
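As an aside (ours, not part of the paper), the basic identity $\partial_{q,x}\, x^n = [n]_q\, x^{n-1}$ and its classical limit $q\to 1$ can be checked with a short computer-algebra sketch:

```python
import sympy as sp

q, x = sp.symbols('q x')
n = 5

q_int = sum(q**i for i in range(n))          # [n]_q = 1 + q + ... + q^(n-1)
f = x**n
dq_f = (f - f.subs(x, q*x)) / (x*(1 - q))    # (partial_{q,x} f)(x)

assert sp.simplify(dq_f - q_int * x**(n-1)) == 0
assert q_int.subs(q, 1) == n                 # classical limit recovers n
```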
We frequently use the notion of a polynomial in an extended
sense. Namely, if we have an algebra with an element $g$ and its
inverse $g^{-1}$ (as
in \ensuremath{U_q(\lalg{b_+})}{}) we will mean by a polynomial in $g,g^{-1}$ a finite power
series in $g$ with exponents in $\mathbb{Z}$. The length of such a polynomial
is the difference between highest and lowest degree.
If $H$ is a Hopf algebra, then $H^{op}$ will denote the Hopf algebra
with the opposite product.
\subsection{\ensuremath{U_q(\lalg{b_+})}{} and its Classical Limits}
\label{sec:intro_limits}
We recall that,
in the framework of quantum groups, the duality between enveloping algebra
$U(\lalg{g})$ of the Lie algebra and algebra of functions $C(G)$ on the Lie
group carries over to $q$-deformations.
In the case of
$\lalg{b_+}$, the
$q$-deformed enveloping algebra \ensuremath{U_q(\lalg{b_+})}{} defined over $\k(q)$ as
\begin{gather*}
U_q(\lalg{b_+})=\k(q)\langle X,g,g^{-1}\rangle \qquad
\text{with relations} \\
g g^{-1}=1 \qquad Xg=qgX \\
\cop X=X\otimes 1 + g\otimes X \qquad
\cop g=g\otimes g \\
\cou (X)=0 \qquad \cou (g)=1 \qquad
\antip X=-g^{-1}X \qquad \antip g=g^{-1}
\end{gather*}
is self-dual. Consequently, it
may alternatively be viewed as the quantum algebra \ensuremath{C_q(B_+)}{} of
functions on the Lie group $B_+$ associated with $\lalg{b_+}$.
It has two classical limits, the enveloping algebra \ensuremath{U(\lalg{b_+})}{}
and the function algebra $C(B_+)$.
The transition to the classical enveloping algebra is achieved by
replacing $q$
by $e^{-t}$ and $g$ by $e^{tH}$ in a formal power series setting in
$t$, introducing a new generator $H$. Now, all expressions are written in
the form $\sum_j a_j t^j$ and only the lowest order in $t$ is kept.
The transition to the classical function algebra on the other hand is
achieved by setting $q=1$.
This may be depicted as follows:
\[\begin{array}{c @{} c @{} c @{} c}
& \ensuremath{U_q(\lalg{b_+})} \cong \ensuremath{C_q(B_+)} && \\
& \diagup \hspace{\stretch{1}} \diagdown && \\
\begin{array}{l} q=e^{-t} \\ g=e^{tH} \end{array} \Big| _{t\to 0}
&& q=1 &\\
\swarrow &&& \searrow \\
\ensuremath{U(\lalg{b_+})} & <\cdots\textrm{dual}\cdots> && \ensuremath{C(B_+)}
\end{array}\]
The self-duality of \ensuremath{U_q(\lalg{b_+})}{} is expressed as a pairing
$\ensuremath{U_q(\lalg{b_+})}\times\ensuremath{U_q(\lalg{b_+})}\to\k$
with
itself:
\[\langle X^n g^m, X^r g^s\rangle =
\delta_{n,r} [n]_q!\, q^{-n(n-1)/2} q^{-ms}
\qquad\forall n,r\in\mathbb{N}_0\: m,s\in\mathbb{Z}\]
In the classical limit this becomes the pairing $\ensuremath{U(\lalg{b_+})}\times\ensuremath{C(B_+)}\to\k$
\begin{equation}
\langle X^n H^m, X^r g^s\rangle =
\delta_{n,r} n!\, s^m\qquad \forall n,m,r\in\mathbb{N}_0\: s\in\mathbb{Z}
\label{eq:pair_class}
\end{equation}
\subsection{Differential Calculi and Quantum Tangent Spaces}
In this section we recall some facts about differential calculi
along the lines of Majid's treatment in \cite{Majid_calculi}.
Following Woronowicz \cite{Wor_calculi}, first order bicovariant differential
calculi on a quantum group $A$ (of
function algebra type) are in one-to-one correspondence to submodules
$M$ of $\ker\cou\subset A$ in the category $^A_A\cal{M}$ of (say) left
crossed modules of $A$ via left multiplication and left adjoint
coaction:
\[
a\triangleright v = av \qquad \mathrm{Ad_L}(v)
=v_{(1)}\antip v_{(3)}\otimes v_{(2)}
\qquad \forall a\in A, v\in A
\]
More precisely, given a crossed submodule $M$, the corresponding
calculus is given by $\Gamma=\ker\cou/M\otimes A$ with $\diff a =
\pi(\cop a - 1\otimes a)$ ($\pi$ the canonical projection).
The right action and coaction on $\Gamma$ are given by
the right multiplication and coproduct on $A$, the left action and
coaction by the tensor product ones with $\ker\cou/M$ as a left
crossed module. In all of what follows, ``differential calculus'' will
mean ``bicovariant first order differential calculus''.
Alternatively \cite{Majid_calculi}, given in addition a quantum group $H$
dually paired with $A$
(which we might think of as being of enveloping algebra type), we can
express the coaction of $A$ on
itself as an action of $H^{op}$ using the pairing:
\[
h\triangleright v = \langle h, v_{(1)} \antip v_{(3)}\rangle v_{(2)}
\qquad \forall h\in H^{op}, v\in A
\]
Thereby we change from the category of (left) crossed $A$-modules to
the category of left modules of the quantum double $A\!\bowtie\! H^{op}$.
In this picture the pairing between $A$ and $H$ descends to a pairing
between $A/\k 1$ (which we may identify with $\ker\cou\subset A$) and
$\ker\cou\subset H$. Further quotienting $A/\k 1$ by $M$ (viewed in
$A/\k 1$) leads to a pairing with the subspace $L\subset\ker\cou H$
that annihilates $M$. $L$ is called a ``quantum tangent space''
and is dual to the differential calculus $\Gamma$ generated by $M$ in
the sense that $\Gamma\cong \Lin(L,A)$ via
\begin{equation}
A/(\k 1+M)\otimes A \to \Lin(L,A)\qquad
v\otimes a \mapsto \langle \cdot, v\rangle a
\label{eq:eval}
\end{equation}
if the pairing between $A/(\k 1+M)$ and $L$ is non-degenerate.
The quantum tangent spaces are obtained directly by dualising the
(left) action of the quantum double on $A$ to a (right) action on
$H$. Explicitly, this is the adjoint action and the coregular action
\[
h \triangleright x = h_{(1)} x \antip h_{(2)} \qquad
a \triangleright x = \langle x_{(1)}, a \rangle x_{(2)}\qquad
\forall h\in H, a\in A^{op},x\in A
\]
where we have converted the right action to a left action by going
from \mbox{$A\!\bowtie\! H^{op}$}-modules to \mbox{$H\!\bowtie\! A^{op}$}-modules.
Quantum tangent spaces are subspaces of $\ker\cou\subset H$ invariant
under the projection of this action to $\ker\cou$ via \mbox{$x\mapsto
x-\cou(x) 1$}. Alternatively, the left action of $A^{op}$ can be
converted into a left coaction of $H$, namely the comultiplication (with
subsequent projection onto $H\otimes\ker\cou$).
We can use the evaluation map (\ref{eq:eval})
to define a ``braided derivation'' on elements of the quantum tangent
space via
\[\partial_x:A\to A\qquad \partial_x(a)={\diff a}(x)=\langle
x,a_{(1)}\rangle a_{(2)}\qquad\forall x\in L, a\in A\]
This obeys the braided derivation rule
\[\partial_x(a b)=(\partial_x a) b
+ a_{(2)} \partial_{a_{(1)}\triangleright x}b\qquad\forall x\in L, a\in A\]
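Note that this rule genuinely generalises the Leibniz rule: whenever the
action on $x$ is trivial in the sense that $a\triangleright
x=\cou(a)\,x$, the counit axiom $\cou(a_{(1)})\,a_{(2)}=a$ reduces it to
the ordinary Leibniz rule,
\[\partial_x(a b)=(\partial_x a) b + a_{(2)}\,\cou(a_{(1)})\,\partial_x b
=(\partial_x a) b + a\,\partial_x b\]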
Given a right invariant basis $\{\eta_i\}_{i\in I}$ of $\Gamma$ with a
dual basis $\{\phi_i\}_{i\in I}$ of $L$ we have
\[{\diff a}=\sum_{i\in I} \eta_i\cdot \partial_i(a)\qquad\forall a\in A\]
where we denote $\partial_i=\partial_{\phi_i}$. (This can be easily
seen to hold by evaluation against $\phi_i\ \forall i$.)
\section{Classification on \ensuremath{C_q(B_+)}{} and \ensuremath{U_q(\lalg{b_+})}{}}
\label{sec:q}
In this section we completely classify differential calculi on \ensuremath{C_q(B_+)}{}
and, dually, quantum tangent spaces on \ensuremath{U_q(\lalg{b_+})}{}. We start by
classifying the relevant crossed modules and then proceed to a
detailed description of the calculi.
\begin{lem}
\label{lem:cqbp_class}
(a) Left crossed \ensuremath{C_q(B_+)}-submodules $M\subseteq\ensuremath{C_q(B_+)}$ by left
multiplication and left
adjoint coaction are in one-to-one correspondence to
pairs $(P,I)$
where $P\in\k(q)[g]$ is a polynomial with $P(0)=1$ and $I\subset\mathbb{N}$ is
finite.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=\sum_{n\in I}n$
if $P=1$.
(b) The finite codimensional maximal $M$
correspond to the pairs $(1,\{n\})$ with $n$ the
codimension. The infinite codimensional maximal $M$ are characterised by
$(P,\emptyset)$ with $P$ irreducible and $P(g)\neq 1-q^{-k}g$ for any
$k\in\mathbb{N}_0$.
(c) Crossed submodules $M$ of finite
codimension are intersections of maximal ones.
In particular $M=\bigcap_{n\in I} M^n$, with $M^n$ corresponding to
$(1,\{n\})$.
\end{lem}
\begin{proof}
(a) Let $M\subseteq\ensuremath{C_q(B_+)}$ be a crossed \ensuremath{C_q(B_+)}-submodule by left
multiplication and left adjoint coaction and let
$\sum_n X^n P_n(g) \in M$, where $P_n$ are polynomials in $g,g^{-1}$
(every element of \ensuremath{C_q(B_+)}{} can be expressed in
this form). From the formula for the coaction ((\ref{eq:adl}), see appendix)
we observe that for all $n$ and for all $t\le n$ the element
\[X^t P_n(g) \prod_{s=1}^{n-t} (1-q^{s-n}g)\]
lies in $M$.
In particular
this is true for $t=n$, meaning that elements of constant degree in $X$
lie separately in $M$. It is therefore enough to consider such
elements.
Let now $X^n P(g) \in M$.
By left multiplication $X^n P(g)$ generates any element of the form
$X^k P(g) Q(g)$, where $k\ge n$ and $Q$ is any polynomial in
$g,g^{-1}$. (Note that $Q(q^kg) X^k=X^k Q(g)$.)
We see that $M$ contains the following elements:
\[\begin{array}{ll}
\vdots & \\
X^{n+2} & P(g) \\
X^{n+1} & P(g) \\
X^n & P(g) \\
X^{n-1} & P(g) (1-q^{1-n}g) \\
X^{n-2} & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \\
\vdots & \\
X & P(g) (1-q^{1-n}g) (1-q^{2-n}g) \ldots (1-q^{-1}g) \\
& P(g) (1-q^{1-n}g) (1-q^{2-n}g) \ldots (1-q^{-1}g)(1-g)
\end{array}
\]
Moreover, if $M$ is generated by $X^n P(g)$ as a module
then these elements generate a basis for $M$ as a vector
space by left
multiplication with polynomials in $g,g^{-1}$. (Observe that the
application of the coaction to any of the elements shown does not
generate elements of new type.)
Now, let $M$ be a given crossed submodule. We pick, among the
elements in $M$ of the form $X^n P(g)$ with $P$ of minimal
length,
one
with lowest degree in $X$. Then certainly the elements listed above are
in $M$. Furthermore for any element of the form $X^k Q(g)$, $Q$ must
contain $P$ as a factor and for $k<n$, $Q$ must contain $P(g) (1-q^{1-n}g)$
as a factor. We continue by picking the smallest $n_2$, so that
$X^{n_2} P(g) (1-q^{1-n}g) \in M$. Certainly $n_2<n$. Again, for any
element of the form $X^l Q(g)$ in $M$ with $l<n_2$, we have that
$P(g) (1-q^{1-n}g) (1-q^{1-n_2}g)$ divides $Q(g)$. We proceed by
induction, until we arrive at degree zero in $X$.
We obtain the following elements generating a basis for $M$ by left
multiplication with polynomials in $g,g^{-1}$ (rename $n_1=n$):
\[ \begin{array}{ll}
\vdots & \\
X^{n_1+1} & P(g) \\
X^{n_1} & P(g) \\
X^{n_1-1} & P(g) (1-q^{1-{n_1}}g) \\
\vdots & \\
X^{n_2} & P(g) (1-q^{1-{n_1}}g) \\
X^{n_2-1} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-n_2}g)\\
\vdots & \\
X^{n_3} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-{n_2}}g) \\
X^{n_3-1} & P(g) (1-q^{1-{n_1}}g) (1-q^{1-{n_2}}g) (1-q^{1-n_3}g)\\
\vdots & \\
& P(g) (1-q^{1-{n_1}}g) (1-q^{1-n_2}g) (1-q^{1-n_3}g) \ldots (1-q^{1-n_m}g)
\end{array}
\]
We see that the integers $n_1,\ldots,n_m$ uniquely determine the shape
of this picture. The polynomial $P(g)$ on the other hand can be
shifted (by $g$ and $g^{-1}$) or renormalised. To determine $M$
uniquely we shift and normalise $P$ in such a way that it contains no
negative powers
and has unit constant coefficient. $P$ can then be viewed as a
polynomial $\in\k(q)[g]$.
We see that the codimension of $M$ is the sum of the lengths of the
polynomials in $g$ over all degrees in $X$ in the above
picture. Finite codimension corresponds to $P=1$. In this
case the codimension is the sum
$n_1+\ldots +n_m$.
(b) We observe that polynomials of the form $1-q^{j}g$
have no common divisors for distinct $j$. Therefore,
finite codimensional crossed
submodules are maximal if and only if
there is just one integer ($m=1$). Thus, the maximal left
crossed submodule of
codimension $k$ is generated by $X^k$ and $1-q^{1-k}g$.
For an infinite codimensional crossed submodule we certainly need
$m=0$. Then, the maximality corresponds to irreducibility of
$P$.
(c) This is again due to the distinctness of factors $1-q^j g$.
\end{proof}
\begin{cor}
\label{cor:cqbp_eclass}
(a) Left crossed \ensuremath{C_q(B_+)}-submodules $M\subseteq\ker\cou\subset\ensuremath{C_q(B_+)}$
are in one-to-one correspondence to pairs
$(P,I)$ as in lemma \ref{lem:cqbp_class}
with the additional constraint $(1-g)$ divides $P(g)$ or $1\in I$.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=(\sum_{n\in I}n)-1$
if $P=1$.
(b) The finite codimensional maximal $M$
correspond to the pairs
$(1,\{1,n\})$ with $n\ge 2$ the
codimension. The infinite codimensional maximal $M$ correspond to pairs
$(P,\{1\})$ with $P$ irreducible and $P(g)\neq 1-q^{-k}g$ for any
$k\in\mathbb{N}_0$.
(c) Crossed submodules $M$ of finite
codimension are intersections of maximal ones.
In particular $M=\bigcap_{n\in I} M^n$, with $M^n$ corresponding to
$(1,\{1,n\})$.
\end{cor}
\begin{proof}
First observe that $\sum_n X^n P_n(g)\in \ker\cou$ if and only if
$(1-g)$ divides $P_0(g)$. This is to say that $\ker\cou$
is the crossed submodule corresponding to the pair $(1,\{1\})$ in
lemma \ref{lem:cqbp_class}. We obtain the classification
from the one of lemmas \ref{lem:cqbp_class} by intersecting
everything with this crossed submodule. In particular, this reduces
the codimension by one in the finite codimensional case.
\end{proof}
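For concreteness, we spell out one instance of parts (b) and (c): the
maximal $M\subseteq\ker\cou$ of codimension $2$ corresponds to the pair
$(1,\{1,2\})$, i.e.\ in the notation of lemma \ref{lem:cqbp_class}(c)
\[M=M^1\cap M^2\]
where, by lemma \ref{lem:cqbp_class}(b), $M^1$ is generated by $X$ and
$1-g$, and $M^2$ is generated by $X^2$ and $1-q^{-1}g$, as crossed
submodules.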
\begin{lem}
\label{lem:uqbp_class}
(a) Left crossed \ensuremath{U_q(\lalg{b_+})}-submodules $L\subseteq\ensuremath{U_q(\lalg{b_+})}$ via the left adjoint
action and left
regular coaction are in one-to-one correspondence to the set
$3^{\mathbb{N}_0}\times2^{\mathbb{N}}$.
Finite dimensional $L$ are in one-to-one correspondence to
finite sets $I\subset\mathbb{N}$ and $\dim L=\sum_{n\in I}n$.
(b) Finite dimensional irreducible $L$ correspond to $\{n\}$
with $n$ the dimension.
(c) Finite dimensional $L$ are direct sums of irreducible ones. In
particular $L=\oplus_{n\in I} L^n$ with $L^n$ corresponding to $\{n\}$.
\end{lem}
\begin{proof}
(a) The action takes the explicit form
\[g\triangleright X^n g^k = q^{-n} X^n g^k\qquad
X\triangleright X^n g^k = X^{n+1}g^k(1-q^{-(n+k)})\]
while the coproduct is
\[\cop(X^n g^k)=\sum_{r=0}^{n} \binomq{n}{r}
q^{-r(n-r)} X^{n-r} g^{k+r}\otimes X^r g^k\]
which we view as a left coaction here.
Let now $L\subseteq\ensuremath{U_q(\lalg{b_+})}$ be a crossed \ensuremath{U_q(\lalg{b_+})}-submodule via this action
and coaction. For $\sum_n X^n P_n(g)\in L$ invariance under
the action by
$g$ clearly means that \mbox{$X^n P_n(g)\in L\ \forall n$}. Then from
invariance under the coaction we can conclude that
if $X^n \sum_j a_j g^j\in L$ we must have
$X^n g^j\in L\ \forall j$.
I.e.\ elements of the form $X^n g^j$ lie separately in $L$ and it is
sufficient to consider such elements. From the coaction we learn that
if $X^n g^j\in L$ we have $X^m g^j\in L\ \forall m\le n$.
The action
by $X$ leads to $X^n g^j\in L \Rightarrow X^{n+1} g^j\in
L$ except if
$n+j=0$. The classification is given by the possible choices we have
for each power in $g$. For every positive integer $j$ we can
choose whether or not to include the span of
$\{ X^n g^j|\forall n\}$ in $L$ and for
every non-positive
integer we can choose to include either the span of $\{ X^n
g^j|\forall n\}$
or just
$\{ X^n g^j|\forall n\le -j\}$ or neither. I.e.\ for positive
integers ($\mathbb{N}$) we have two choices while for non-positive (identified
with $\mathbb{N}_0$) ones we have three choices.
Clearly, the finite dimensional $L$ are those where we choose only to
include finitely many powers of $g$ and also only finitely many powers
of $X$. The latter is only possible for the non-positive powers
of $g$.
By identifying positive integers $n$ with powers $1-n$ of $g$, we
obtain a classification by finite subsets of $\mathbb{N}$.
(b) Irreducibility clearly corresponds to just including one power of $g$
in the finite dimensional case.
(c) The decomposition property is obvious from the discussion.
\end{proof}
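To make the identification at the end of part (a) concrete: the
irreducible $L^n$ corresponding to $\{n\}$ is
\[L^n=\mathrm{span}\{g^{1-n},\;X g^{1-n},\;\dots,\;X^{n-1} g^{1-n}\}\]
which indeed has dimension $n$.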
\begin{cor}
\label{cor:uqbp_eclass}
(a) Left crossed \ensuremath{U_q(\lalg{b_+})}-submodules $L\subseteq\ker\cou\subset\ensuremath{U_q(\lalg{b_+})}$ via
the left adjoint
action and left regular coaction (with subsequent projection to
$\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to
the set $3^{\mathbb{N}}\times2^{\mathbb{N}_0}$.
Finite dimensional $L$ are in one-to-one correspondence to
finite sets
$I\subset\mathbb{N}\setminus\{1\}$ and $\dim L=\sum_{n\in I}n$.
(b) Finite dimensional irreducible $L$ correspond to $\{n\}$
with $n\ge 2$ the dimension.
(c) Finite dimensional $L$ are direct sums of irreducible ones. In
particular $L=\oplus_{n\in I} L^n$ with $L^n$ corresponding to $\{n\}$.
\end{cor}
\begin{proof}
Only a small modification of lemma \ref{lem:uqbp_class} is
necessary. Elements of
the form $P(g)$ are replaced by elements of the form
$P(g)-P(1)$. Monomials with non-vanishing degree in $X$ are unchanged.
The choices for elements of degree $0$ in $g$ are reduced to either
including the span of
$\{ X^k |\forall k>0 \}$ in the crossed submodule or not. In
particular, the crossed submodule characterised by $\{1\}$ in lemma
\ref{lem:uqbp_class} is projected out.
\end{proof}
Differential calculi in the original sense of Woronowicz are
classified by corollary \ref{cor:cqbp_eclass} while from the quantum
tangent space
point of view the
classification is given by corollary \ref{cor:uqbp_eclass}.
In the finite dimensional case the duality is strict in the sense of a
one-to-one correspondence.
The infinite dimensional case on the other hand depends strongly on
the algebraic models we use for the function or enveloping
algebras. It is therefore not surprising that in the present purely
algebraic context the classifications are quite different in this
case. We will restrict ourselves to the finite dimensional
case in the following description of the differential calculi.
\begin{thm}
\label{thm:q_calc}
(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C_q(B_+)}{} and
corresponding quantum tangent spaces $L$ on \ensuremath{U_q(\lalg{b_+})}{} are
in one-to-one correspondence to
finite sets $I\subset\mathbb{N}\setminus\{1\}$. In particular
$\dim\Gamma=\dim L=\sum_{n\in I}n$.
(b) Coirreducible $\Gamma$ and irreducible $L$ correspond to
$\{n\}$ with $n\ge 2$ the dimension.
Such a $\Gamma$ has a
right invariant basis $\eta_0,\dots,\eta_{n-1}$ so that the relations
\begin{gather*}
\diff X=\eta_1+(q^{n-1}-1)\eta_0 X \qquad
\diff g=(q^{n-1}-1)\eta_0 g\\
[a,\eta_0]=\diff a\quad \forall a\in\ensuremath{C_q(B_+)}\\
[g,\eta_i]_{q^{n-1-i}}=0\quad \forall i\qquad
[X,\eta_i]_{q^{n-1-i}}=\begin{cases}
\eta_{i+1} & \text{if}\ i<n-1 \\
0 & \text{if}\ i=n-1
\end{cases}
\end{gather*}
hold, where $[a,b]_p := a b - p b a$. By choosing the dual basis on
the corresponding irreducible $L$ we obtain
the braided derivations
\begin{gather*}
\partial_i\no{f}=
\no{Q_{n-1-i,g} Q_{n-1-i,X} \frac{1}{[i]_q!} (\partial_{q,X})^i f}
\qquad\forall i\ge 1\\
\partial_0\no{f}=
\no{Q_{n-1,g} Q_{n-1,X} f - f}
\end{gather*}
for $f\in \k(q)[X,g,g^{-1}]$ with normal ordering
$\k(q)[X,g,g^{-1}]\to \ensuremath{C_q(B_+)}$ given by \mbox{$g^n X^m\mapsto g^n X^m$}.
(c) Finite dimensional $\Gamma$ and $L$ decompose into direct sums of
coirreducible respectively irreducible ones.
In particular $\Gamma=\oplus_{n\in I}\Gamma^n$ and
$L=\oplus_{n\in I}L^n$ with $\Gamma^n$ and $L^n$ corresponding to $\{n\}$.
\end{thm}
\begin{proof}
(a) We observe that the classifications of lemma
\ref{lem:cqbp_class} and lemma \ref{lem:uqbp_class} or
corollary \ref{cor:cqbp_eclass} and corollary \ref{cor:uqbp_eclass}
are dual to each other in the finite (co){}dimensional case. More
precisely, for $I\subset\mathbb{N}$ finite the crossed submodule $M$
corresponding to $(1,I)$ in lemma \ref{lem:cqbp_class} is the
annihilator of the crossed
submodule $L$ corresponding to $I$ in lemma \ref{lem:uqbp_class}
and vice versa.
$\ensuremath{C_q(B_+)}/M$ and $L$ are dual spaces with the induced pairing.
For $I\subset\mathbb{N}\setminus\{1\}$ finite this descends to
$M$ corresponding to $(1,I\cup\{1\})$ in corollary
\ref{cor:cqbp_eclass} and $L$ corresponding to $I$ in corollary
\ref{cor:uqbp_eclass}.
For the dimension of $\Gamma$ observe
$\dim\Gamma=\dim{\ker\cou/M}=\codim M$.
(b) Coirreducibility (having no proper quotient) of $\Gamma$
clearly corresponds to maximality of $M$. The statement then follows
from parts (b) of corollaries
\ref{cor:cqbp_eclass} and \ref{cor:uqbp_eclass}. The formulas are
obtained by choosing the basis $\eta_0,\dots,\eta_{n-1}$ of
$\ker\cou/M$ as the equivalence classes of
\[(g-1)/(q^{n-1}-1),X,\dots,X^{n-1}\]
The dual basis of $L$ is then given by
\[g^{1-n}-1, X g^{1-n},\dots, q^{k(k-1)} \frac{1}{[k]_q!} X^k g^{1-n},
\dots,q^{(n-1)(n-2)} \frac{1}{[n-1]_q!} X^{n-1} g^{1-n}\]
(c) The statement follows from corollaries \ref{cor:cqbp_eclass} and
\ref{cor:uqbp_eclass} parts (c) with the observation
\[\ker\cou/M=\ker\cou/{\bigcap_{n\in I}}M^n
=\oplus_{n\in I}\ker\cou/M^n\]
\end{proof}
\begin{cor}
There is precisely one differential calculus on \ensuremath{C_q(B_+)}{} which is
natural in the sense that it
has dimension $2$.
It is coirreducible and obeys the relations
\begin{gather*}
[g,\diff X]=0\qquad [g,\diff g]_q=0\qquad
[X,\diff X]_q=0\qquad [X,\diff g]_q=(q-1)({\diff X}) g
\end{gather*}
with $[a,b]_q:=ab-qba$. In particular we have
\begin{gather*}
\diff\no{f} = {\diff g} \no{\partial_{q,g} f} + {\diff X}
\no{\partial_{q,X} f}\qquad\forall f\in \k(q)[X,g,g^{-1}]
\end{gather*}
\end{cor}
\begin{proof}
This is a special case of theorem \ref{thm:q_calc}.
The formulas follow from (b) with $n=2$.
\end{proof}
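For illustration, assuming the usual conventions
$\partial_{q,X}X^m=[m]_q X^{m-1}$ and $\partial_{q,g}g^k=[k]_q g^{k-1}$
for the $q$-derivatives, the formula gives for instance
\[\diff\no{X^2 g}={\diff g}\,\no{X^2}+(1+q)\,{\diff X}\,\no{X g}\]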
\section{Classification in the Classical Limit}
\label{sec:class}
In this section we give the complete classification of differential
calculi and quantum tangent spaces in the classical case of \ensuremath{C(B_+)}{}
along the lines of the previous section.
We pay particular
attention to the relation to the $q$-deformed setting.
The classical limit \ensuremath{C(B_+)}{} of the quantum group \ensuremath{C_q(B_+)}{} is
simply obtained by substituting the parameter $q$ with $1$.
The
classification of left crossed submodules in part (a) of lemma
\ref{lem:cqbp_class} remains
unchanged, as one may check by going through the proof.
In particular, we get a correspondence of crossed modules in the
$q$-deformed setting with crossed modules in the
classical setting
as a map of
pairs $(P,I)\mapsto (P,I)$
that converts polynomials $\k(q)[g]$ to polynomials $\k[g]$ (if
defined) and leaves
sets $I$ unchanged. This is one-to-one in the finite
dimensional case.
However, we did use the distinctness of powers of $q$ in parts (b) and
(c) of lemma
\ref{lem:cqbp_class} and have to account for this change. The
only place where we used it was in observing that
factors $1-q^j g$ have no common divisors for distinct $j$. This was
crucial to conclude the maximality (b) of certain finite codimensional
crossed submodules and the intersection property (c).
Now, all those factors become $1-g$.
\begin{cor}
\label{cor:cbp_class}
(a) Left crossed \ensuremath{C(B_+)}-submodules $M\subseteq\ensuremath{C(B_+)}$ by left
multiplication and left
adjoint coaction are in one-to-one correspondence to
pairs $(P,I)$
where $P\in\k[g]$ is a polynomial with $P(0)=1$ and $I\subset\mathbb{N}$ is
finite.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=\sum_{n\in I}n$
if $P=1$.
(b) The infinite codimensional maximal $M$ are characterised by
$(P,\emptyset)$ with $P$ irreducible and $P(g)\neq 1-g$.
\end{cor}
In the restriction to $\ker\cou\subset\ensuremath{C(B_+)}$ corresponding to corollary
\ref{cor:cqbp_eclass} we observe another difference to the
$q$-deformed setting.
Since the condition for a crossed submodule to lie in $\ker\cou$ is exactly
to have factors $1-g$ in the $X$-free monomials this condition may now
be satisfied more easily. If the characterising polynomial does not
contain this factor it is now sufficient to have just any non-empty
characterising integer set $I$ and it need not contain $1$. Consequently,
the map $(P,I)\mapsto (P,I)$ does not reach all crossed submodules now.
\begin{cor}
\label{cor:cbp_eclass}
(a) Left crossed \ensuremath{C(B_+)}-submodules $M\subseteq\ker\cou\subset\ensuremath{C(B_+)}$
are in one-to-one correspondence to pairs
$(P,I)$ as in corollary \ref{cor:cbp_class}
with the additional constraint $(1-g)$ divides $P(g)$ or $I$ non-empty.
$\codim M<\infty$ iff $P=1$. In particular $\codim M=(\sum_{n\in I}n)-1$
if $P=1$.
(b) The infinite codimensional maximal $M$ correspond to pairs
$(P,\{1\})$ with $P$ irreducible and $P(g)\neq 1-g$.
\end{cor}
Let us now turn to quantum tangent spaces on \ensuremath{U(\lalg{b_+})}{}. Here, the process
to go from the $q$-deformed setting to the classical one is not quite
so straightforward.
\begin{lem}
\label{lem:ubp_class}
Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules $L\subset\ensuremath{U(\lalg{b_+})}$ via the left
adjoint action
and left regular coaction are
in one-to-one correspondence to pairs $(l,I)$ with $l\in\mathbb{N}_0$ and
$I\subset\mathbb{N}$ finite. $\dim L<\infty$ iff $l=0$. In particular $\dim
L=\sum_{n\in I}n$ if $l=0$.
\end{lem}
\begin{proof}
The left adjoint action takes the form
\[
X\triangleright X^n H^m = X^{n+1}(H^m-(H+1)^m) \qquad
H\triangleright X^n H^m = n X^n H^m
\]
while the coaction is
\[
\cop(X^n H^m) = \sum_{i=0}^{n} \sum_{j=0}^{m} \binom{n}{i} \binom{m}{j}
X^i H^j\otimes X^{n-i} H^{m-j}
\]
Let $L$ be a crossed submodule invariant under the action and coaction.
The (repeated) action of $H$ separates elements by degree in $X$. It is
therefore sufficient to consider elements of the form $X^n P(H)$, where
$P$ is a polynomial.
By acting with $X$ on an element $X^n P(H)$ we obtain
$X^{n+1}(P(H)-P(H+1))$. Subsequently applying the coaction and
projecting on the left hand side of the tensor product onto $X$ (in
the basis $X^i H^j$ of \ensuremath{U(\lalg{b_+})})
leads to the element $X^n (P(H)-P(H+1))$. Now the degree of
$P(H)-P(H+1)$ is exactly the degree of $P(H)$ minus $1$. Thus we have
polynomials $X^n P_i(H)$ of any degree $i=\deg(P_i)\le \deg(P)$ in $L$
by induction. In particular, $X^n H^m\in L$ for all
$m\le\deg(P)$. It is thus sufficient to consider elements of
the form $X^n H^m$. Given such an element, the coaction generates all
elements of the form $X^i H^j$ with $i\le n, j\le m$.
For given $n$, the characterising datum is the maximal $m$ so
that $X^n H^m\in L$. Due to the coaction this cannot decrease
with decreasing $n$ and due to the action of $X$ this can decrease at
most by $1$ when increasing $n$ by $1$. This leads to the
classification given. For $l\in\mathbb{N}_0$ and
$I=\{n_1,\dots,n_m\}\subset\mathbb{N}$ finite, the
corresponding crossed submodule
is generated by
\begin{gather*}
X^{n_m-1} H^{l+m-1}, X^{n_m+n_{m-1}-1} H^{l+m-2},\dots,
X^{(\sum_i n_i)-1} H^{l}\\
\text{and}\qquad
X^{(\sum_i n_i)+k} H^{l-1}\quad \forall k\ge 0\quad\text{if}\quad l>0
\end{gather*}
as a crossed module.
\end{proof}
For the transition from the $q$-deformed (lemma
\ref{lem:uqbp_class}) to the classical case we
observe that the space spanned by $g^{s_1},\dots,g^{s_m}$ with $m$
different integers $s_i\in\mathbb{Z}$ maps to the space spanned by
$1, H, \dots, H^{m-1}$ in the
prescription of the classical limit (as described in section
\ref{sec:intro_limits}). I.e.\ the classical crossed submodule
characterised by an integer $l$ and a finite set $I\subset\mathbb{N}$ comes
from a crossed submodule characterised by this same $I$ and additionally $l$
other integers $j\in\mathbb{Z}$ for which $X^k g^{1-j}$ is included. In
particular, we have a one-to-one correspondence in the finite
dimensional case.
To formulate the analogue of corollary \ref{cor:uqbp_eclass} for the
classical case is essentially straightforward now. However, as for
\ensuremath{C(B_+)}{}, we obtain more crossed submodules than those from the $q$-deformed
setting. This is due to the degeneracy introduced by forgetting the
powers of $g$ and just retaining the number of different powers.
\begin{cor}
\label{cor:ubp_eclass}
(a) Proper left crossed \ensuremath{U(\lalg{b_+})}-submodules
$L\subset\ker\cou\subset\ensuremath{U(\lalg{b_+})}$ via the
left adjoint
action and left regular coaction (with subsequent projection to
$\ker\cou$ via $x\mapsto x-\cou(x)1$) are in one-to-one correspondence to
pairs $(l,I)$ with $l\in\mathbb{N}_0$ and $I\subset\mathbb{N}$ finite where $l\neq 0$
or $I\neq\emptyset$.
$\dim L<\infty$ iff $l=0$. In particular $\dim
L=(\sum_{n\in I}n)-1$ if $l=0$.
\end{cor}
As in the $q$-deformed setting, we give a description of the finite
dimensional differential calculi where we have a strict duality to
quantum tangent spaces.
\begin{prop}
(a) Finite dimensional differential calculi $\Gamma$ on \ensuremath{C(B_+)}{} and
finite dimensional quantum tangent spaces $L$ on \ensuremath{U(\lalg{b_+})}{} are
in one-to-one correspondence to non-empty finite sets $I\subset\mathbb{N}$.
In particular $\dim\Gamma=\dim L=(\sum_{n\in I}n)-1$.
The $\Gamma$ with $1\in I$ are in
one-to-one correspondence to the finite dimensional
calculi and quantum tangent spaces of the $q$-deformed setting
(theorem \ref{thm:q_calc}(a)).
(b) The differential calculus $\Gamma$ of dimension $n\ge 2$
corresponding to the
coirreducible one of \ensuremath{C_q(B_+)}{} (theorem \ref{thm:q_calc}(b)) has a right
invariant
basis $\eta_0,\dots,\eta_{n-1}$ so that
\begin{gather*}
\diff X=\eta_1+\eta_0 X \qquad
\diff g=\eta_0 g\\
[g, \eta_i]=0\ \forall i \qquad
[X, \eta_i]=\begin{cases}
0 & \text{if}\ i=0\ \text{or}\ i=n-1\\
\eta_{i+1} & \text{if}\ 0<i<n-1
\end{cases}
\end{gather*}
hold. The braided derivations obtained from the dual basis of the
corresponding $L$ are
given by
\begin{gather*}
\partial_i f=\frac{1}{i!}
\left(\frac{\partial}{\partial X}\right)^i f\qquad
\forall i\ge 1\\
\partial_0 f=\left(X \frac{\partial}{\partial X}+
g \frac{\partial}{\partial g}\right) f
\end{gather*}
for $f\in\ensuremath{C(B_+)}$.
(c) The differential calculus of dimension $n-1$
corresponding to the
one in (b) with $1$ removed from the characterising set is
the same as the one above, except that we set $\eta_0=0$ and
$\partial_0=0$.
\end{prop}
\begin{proof}
(a) We observe that the classifications of corollary
\ref{cor:cbp_class} and lemma \ref{lem:ubp_class} or
corollary \ref{cor:cbp_eclass} and corollary \ref{cor:ubp_eclass}
are dual to each other in the finite (co)dimensional case.
More
precisely, for $I\subset\mathbb{N}$ finite the crossed submodule $M$
corresponding to $(1,I)$ in corollary \ref{cor:cbp_class} is the
annihilator of the crossed
submodule $L$ corresponding to $(0,I)$ in lemma \ref{lem:ubp_class}
and vice versa.
$\ensuremath{C(B_+)}/M$ and $L$ are dual spaces with the induced pairing.
For non-empty $I$ this descends to
$M$ corresponding to $(1,I)$ in corollary
\ref{cor:cbp_eclass} and $L$ corresponding to $(0,I)$ in corollary
\ref{cor:ubp_eclass}.
For the dimension of $\Gamma$ note
$\dim\Gamma=\dim{\ker\cou/M}=\codim M$.
(b) For $I=\{1,n\}$ we choose in
$\ker\cou\subset\ensuremath{C(B_+)}$ the basis $\eta_0,\dots,\eta_{n-1}$ as the
equivalence classes of
$g-1,X,\dots,X^{n-1}$. The dual basis in $L$
is then $H,X,\dots,\frac{1}{k!}X^k,\dots,\frac{1}{(n-1)!}X^{n-1}$.
This leads to the
formulas given.
(c) For $I=\{n\}$ we get the same as in (b) except that $\eta_0$ and
$\partial_0$ disappear.
\end{proof}
The classical commutative calculus is the special case of (b) with
$n=2$. It is the only calculus of dimension $2$ with
$\diff g\neq 0$. Note that it is not coirreducible.
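Explicitly, in this case $\partial_1=\partial/\partial X$, while
$\partial_0=X\,\partial/\partial X+g\,\partial/\partial g$ acts as the
degree operator: on monomials,
\[\partial_0\,(X^a g^b)=(a+b)\,X^a g^b\]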
\section{The Dual Classical Limit}
\label{sec:dual}
We proceed in this section to the more interesting point of view where
we consider the classical algebras, but with their roles
interchanged. I.e.\ we view \ensuremath{U(\lalg{b_+})}{} as the ``function algebra''
and \ensuremath{C(B_+)}{} as the ``enveloping algebra''. Due to the self-duality of
\ensuremath{U_q(\lalg{b_+})}{}, we can again view the differential calculi and quantum tangent
spaces as classical limits of the $q$-deformed setting investigated in
section \ref{sec:q}.
In this dual setting the bicovariance constraint for differential
calculi becomes much
weaker. In particular, the adjoint action on a classical function
algebra is trivial due to commutativity and the adjoint coaction on a
classical enveloping algebra is trivial due to cocommutativity.
In effect, the correspondence with the
$q$-deformed setting is much weaker than in the ordinary case of
section \ref{sec:class}.
There are many more differential
calculi and quantum tangent spaces than in the $q$-deformed setting.
We will not attempt to classify all of them in the following but
essentially
content ourselves with those objects coming from the $q$-deformed setting.
\begin{lem}
\label{lem:cbp_dual}
Left \ensuremath{C(B_+)}-subcomodules $\subseteq\ensuremath{C(B_+)}$ via the left regular coaction are
$\mathbb{Z}$-graded subspaces of \ensuremath{C(B_+)}{} with $|X^n g^m|=n+m$,
stable under formal derivation in $X$.
By choosing any ordering in \ensuremath{C_q(B_+)}{}, left crossed submodules via left
regular action and adjoint coaction are in one-to-one correspondence
to certain subcomodules of \ensuremath{C(B_+)}{} by setting $q=1$. Direct sums
correspond to direct sums.
This descends to $\ker\cou\subset\ensuremath{C(B_+)}$ by the projection $x\mapsto
x-\cou(x) 1$.
\end{lem}
\begin{proof}
The coproduct on \ensuremath{C(B_+)}{} is
\[\cop(X^n g^k)=\sum_{r=0}^{n} \binom{n}{r}
X^{n-r} g^{k+r}\otimes X^r g^k\]
which we view as a left coaction.
Projecting on the left hand side of the tensor product onto $g^l$ in a
basis $X^n g^k$, we
observe that coacting on an element
$\sum_{n,k} a_{n,k} X^n g^k$ we obtain elements
$\sum_n a_{n,l-n} X^n g^{l-n}$ for all $l$.
I.e.\ elements of the form
$\sum_n b_n X^n g^{l-n}$ lie
separately in a subcomodule and it is
sufficient to consider such elements. Writing the coaction
on such an element as
\[\sum_t \frac{1}{t!} X^t g^{l-t}\otimes \sum_n b_n
\frac{n!}{(n-t)!} X^{n-t} g^{l-n}\]
we see that the coaction generates all formal derivatives in $X$
of this element. This gives us the classification: \ensuremath{C(B_+)}-subcomodules
$\subseteq\ensuremath{C(B_+)}$ under the left regular coaction are $\mathbb{Z}$-graded
subspaces with $|X^n g^m|=n+m$, stable under formal derivation in
$X$ given by $X^n
g^m \mapsto n X^{n-1} g^m$.
The correspondence with the \ensuremath{C_q(B_+)} case follows from
the trivial observation
that the coproduct of \ensuremath{C(B_+)}{} is the same as that of \ensuremath{C_q(B_+)}{} with $q=1$.
The restriction to $\ker\cou$ is straightforward.
\end{proof}
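As a simple example of the classification just obtained: for fixed $k$,
the span of $\{g^k,\,X g^k,\,X^2 g^k\}$ is such a subcomodule. It is
spanned by homogeneous elements (of degrees $k$, $k+1$, $k+2$) and is
stable under the formal derivative,
\[X^2 g^k\mapsto 2\,X g^k\mapsto 2\,g^k\mapsto 0\]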
\begin{lem}
\label{lem:ubp_dual}
The process of obtaining the classical limit \ensuremath{U(\lalg{b_+})}{} from \ensuremath{U_q(\lalg{b_+})}{} is
well defined for subspaces and sends crossed \ensuremath{U_q(\lalg{b_+})}-submodules
$\subset\ensuremath{U_q(\lalg{b_+})}$ by
regular action and adjoint coaction to \ensuremath{U(\lalg{b_+})}-submodules $\subset\ensuremath{U(\lalg{b_+})}$
by regular
action. This map is injective in the finite codimensional
case. Intersections and codimensions are preserved in this case.
This descends to $\ker\cou$.
\end{lem}
\begin{proof}
To obtain the classical limit of a left ideal it is enough to
apply the limiting process (as described in section
\ref{sec:intro_limits}) to the
module generators (we can forget the additional comodule
structure). On the one hand,
any element generated by left multiplication with polynomials in
$g$ corresponds to some element generated by left multiplication with a
polynomial in $H$, that is, there will be no more generators in the
classical setting. On the other hand, left multiplication by a
polynomial in $H$ comes
from left multiplication by the same polynomial in $g-1$, that is,
there will be no fewer generators.
The maximal left crossed \ensuremath{U_q(\lalg{b_+})}-submodule $\subseteq\ensuremath{U_q(\lalg{b_+})}$
by left multiplication and adjoint coaction of
codimension $n$ ($n\ge 1$) is generated as a left ideal by
$\{1-q^{1-n}g,X^n\}$ (see lemma
\ref{lem:cqbp_class}). Applying the limiting process to this
leads to the
left ideal of \ensuremath{U(\lalg{b_+})}{} (which is not maximal for $n\neq 1$) generated by
$\{H+n-1,X^n\}$, which also has codimension $n$.
More generally, the picture given for arbitrary finite codimensional left
crossed modules of \ensuremath{U_q(\lalg{b_+})}{} in terms of generators with respect to
polynomials in $g,g^{-1}$ in lemma \ref{lem:cqbp_class} carries over
by replacing factors
$1-q^{1-n}g$ with factors $H+n-1$ leading to generators with
respect to polynomials in $H$. In particular,
intersections go to intersections since the distinctness of
the factors for different $n$ is conserved.
The restriction to $\ker\cou$ is straightforward.
\end{proof}
We are now in a position to give a detailed description of the
differential calculi induced from the $q$-deformed setting by the
limiting process.
\begin{prop}
(a) Certain finite dimensional
differential calculi $\Gamma$ on \ensuremath{U(\lalg{b_+})}{} and quantum tangent spaces $L$
on \ensuremath{C(B_+)}{}
are in one-to-one correspondence to finite dimensional differential
calculi on \ensuremath{U_q(\lalg{b_+})}{} and quantum
tangent spaces on \ensuremath{C_q(B_+)}{}. Intersections correspond to intersections.
(b) In particular,
$\Gamma$ and $L$ corresponding to coirreducible differential calculi
on \ensuremath{U_q(\lalg{b_+})}{} and
irreducible quantum tangent spaces on \ensuremath{C_q(B_+)}{} via the limiting process
are given as follows:
$\Gamma$ has a right invariant basis
$\eta_0,\dots,\eta_{n-1}$ so that
\begin{gather*}
\diff X=\eta_1 \qquad \diff H=(1-n)\eta_0 \\
[H, \eta_i]=(1-n+i)\eta_i\quad\forall i\qquad
[X, \eta_i]=\begin{cases}
\eta_{i+1} & \text{if}\ \ i<n-1\\
0 & \text{if}\ \ i=n-1
\end{cases}
\end{gather*}
holds. The braided derivations corresponding to the dual basis of
$L$ are given by
\begin{gather*}
\partial_i\no{f}=\no{T_{1-n+i,H}
\frac{1}{i!}\left(\frac{\partial}{\partial X}\right)^i f}
\qquad\forall i\ge 1\\
\partial_0\no{f}=\no{T_{1-n,H} f - f}
\end{gather*}
for $f\in\k[X,H]$
with the normal ordering $\k[X,H]\to \ensuremath{U(\lalg{b_+})}$ via $H^n X^m\mapsto H^n X^m$.
\end{prop}
\begin{proof}
(a) The strict duality between \ensuremath{C(B_+)}-subcomodules $L\subseteq\ker\cou$
given by lemma \ref{lem:cbp_dual} and corollary \ref{cor:uqbp_eclass}
and \ensuremath{U(\lalg{b_+})}-modules $\ensuremath{U(\lalg{b_+})}/(\k 1+M)$ with $M$ given by lemma
\ref{lem:ubp_dual} and
corollary \ref{cor:cqbp_eclass} can be checked explicitly.
It is essentially due to mutual annihilation of factors $H+k$ in
\ensuremath{U(\lalg{b_+})}{} with elements $g^k$ in \ensuremath{C(B_+)}{}.
(b) $L$ is generated by
$\{g^{1-n}-1,Xg^{1-n},\dots,
X^{n-1}g^{1-n}\}$ and
$M$ is generated by $\{H(H+n-1),X(H+n-1),X^n \}$.
The formulas are obtained by denoting with
$\eta_0,\dots,\eta_{n-1}$ the equivalence classes of
$H/(1-n),X,\dots,X^{n-1}$ in $\ensuremath{U(\lalg{b_+})}/(\k 1+M)$.
The dual basis of $L$ is then
\[g^{1-n}-1,X g^{1-n},
\dots,\frac{1}{(n-1)!}X^{n-1}
g^{1-n}\]
\end{proof}
In contrast to the $q$-deformed setting and to the usual classical
setting, the many freedoms in choosing a calculus leave us with many
$2$-dimensional calculi. It is not obvious which one we should
consider to be the ``natural'' one. Let us first look at the
$2$-dimensional calculus coming from the $q$-deformed
setting as described in (b). The relations become
\begin{gather*}
[\diff H, a]=\diff a\qquad [\diff X, a]=0\qquad\forall a\in\ensuremath{U(\lalg{b_+})}\\
\diff\no{f} =\diff H \no{\fdiff_{1,H} f}
+ \diff X \no{\frac{\partial}{\partial X} f}
\end{gather*}
for $f\in\k[X,H]$.
We might want to consider calculi which are closer to the classical
theory in the sense that derivatives are not finite differences but
usual derivatives. Let us therefore demand
\[\diff P(H)=\diff H \frac{\partial}{\partial H} P(H)\qquad
\text{and}\qquad
\diff P(X)=\diff X \frac{\partial}{\partial X} P(X)\]
for polynomials $P$ and ${\diff X}\neq 0$ and ${\diff H}\neq 0$.
\begin{prop}
\label{prop:nat_bp}
There is precisely one differential calculus of dimension $2$ meeting
these conditions. It obeys the relations
\begin{gather*}
[a,\diff H]=0\qquad [X,\diff X]=0\qquad [H,\diff X]=\diff X\\
\diff \no{f} =\diff H \no{\frac{\partial}{\partial H} f}
+\diff X \no{\frac{\partial}{\partial X} f}
\end{gather*}
where the normal ordering $\k[X,H]\to \ensuremath{U(\lalg{b_+})}$ is given by
$X^n H^m\mapsto X^n H^m$.
\end{prop}
\begin{proof}
Let $M$ be the left ideal corresponding to the calculus. It is easy to
see that for a primitive element $a$ the classical derivation condition
corresponds to $a^2\in M$ and $a\notin M$. In our case $X^2,H^2\in
M$. If we take the
ideal generated from these two elements we obtain an ideal of
$\ker\cou$ of codimension $3$. Now, it is sufficient without loss of
generality to add a generator of the form $\alpha H+\beta X+\gamma
XH$. $\alpha$ and $\beta$ must then be zero in order not
to generate $X$ or $H$ in $M$.
I.e.\ $M$ is generated by $H^2,
XH, X^2$. The relations stated follow.
\end{proof}
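As an illustration (added here as a consistency check, not part of the original derivation), the Leibniz rule reproduces the stated formula on the lowest mixed monomial $f=XH$:

```latex
% Consistency check for proposition \ref{prop:nat_bp} with f = XH
% (normal ordering X^n H^m -> X^n H^m).  Using [a,\diff H]=0 to move
% H past dH, and keeping coefficients to the right of dX:
\begin{align*}
  \diff(XH) &= (\diff X)\,H + X\,\diff H
             = \diff X\, H + \diff H\, X\\
            &= \diff H \no{\tfrac{\partial}{\partial H}(XH)}
             + \diff X \no{\tfrac{\partial}{\partial X}(XH)},
\end{align*}
% in agreement with the claimed formula
% d(no(f)) = dH no(df/dH) + dX no(df/dX).
```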
\section{Remarks on $\kappa$-Minkowski Space and Integration}
\label{sec:kappa}
There is a straightforward generalisation of \ensuremath{U(\lalg{b_+})}.
Let us define the Lie algebra $\lalg b_{n+}$ as generated by
$x_0,\dots, x_{n-1}$ with relations
\[ [x_0,x_i]=x_i\qquad [x_i,x_j]=0\qquad\forall i,j\ge 1\]
Its enveloping algebra \ensuremath{U(\lalg{b}_{n+})}{} is nothing but (rescaled) $\kappa$-Minkowski
space as introduced in \cite{MaRu}. In this section we make some
remarks about its intrinsic geometry.
We have a surjective Lie algebra
homomorphism $b_{n+}\to b_+$ given by
$x_0\mapsto H$ and $x_i\mapsto X$ for $i\ge 1$.
This is an isomorphism for $n=2$. The surjective Lie algebra
homomorphism extends to a surjective homomorphism of enveloping
algebras $\ensuremath{U(\lalg{b}_{n+})}\to \ensuremath{U(\lalg{b_+})}$ in the obvious way. This gives rise
to an injective map from the set of submodules of \ensuremath{U(\lalg{b_+})}{} to the set of
submodules of \ensuremath{U(\lalg{b}_{n+})}{} by taking the pre-image. In
particular this induces an injective
map from the set of differential calculi on \ensuremath{U(\lalg{b_+})}{} to the set of
differential calculi on \ensuremath{U(\lalg{b}_{n+})}{} which are invariant under permutations
of the $x_i\ i\ge 1$.
\begin{cor}
\label{cor:nat_bnp}
There is a natural $n$-dimensional differential calculus on \ensuremath{U(\lalg{b}_{n+})}{}
induced from the one considered in proposition
\ref{prop:nat_bp}.
It obeys the relations
\begin{gather*}
[a,\diff x_0]=0\quad\forall a\in \ensuremath{U(\lalg{b}_{n+})}\qquad [x_i,\diff x_j]=0
\quad [x_0,\diff x_i]=\diff x_i\qquad\forall i,j\ge 1\\
\diff \no{f} =\sum_{\mu=0}^{n-1}\diff x_{\mu}
\no{\frac{\partial}{\partial x_{\mu}} f}
\end{gather*}
where the normal ordering is given by
\[\k[x_0,\dots,x_{n-1}]\to \ensuremath{U(\lalg{b}_{n+})}\quad\text{via}\quad
x_{n-1}^{m_{n-1}}\cdots
x_0^{m_0}\mapsto x_{n-1}^{m_{n-1}}\cdots x_0^{m_0}\]
\end{cor}
\begin{proof}
The calculus is obtained from the ideal generated by
\[x_0^2,x_i x_j, x_i x_0\qquad\forall i,j\ge 1\]
being the pre-image of
$H^2, X^2, XH$ in \ensuremath{U(\lalg{b_+})}{}.
\end{proof}
Let us try to push the analogy with the commutative case further and
take a look at the notion of integration. The natural way to encode
the condition of translation invariance from the classical context
in the quantum group context
is given by the condition
\[(\int\otimes\id)\circ\cop a=1 \int a\qquad\forall a\in A\]
which defines a right integral on a quantum group $A$
\cite{Sweedler}.
(Correspondingly, we have the notion of a left integral.)
Let us
formulate a slightly
weaker version of this equation
in the context of a Hopf algebra $H$ dually paired with
$A$. We write
\[\int (h-\cou(h))\triangleright a = 0\qquad \forall h\in H, a\in A\]
where the action of $H$ on $A$ is the coregular action
$h\triangleright a = a_{(1)}\langle a_{(2)}, h\rangle$
given by the pairing.
In the present context we set $A=\ensuremath{U(\lalg{b}_{n+})}$ and $H=\ensuremath{C(B_{n+})}$. We define the
latter as a generalisation of \ensuremath{C(B_+)}{} with commuting
generators $g,p_1,\dots,p_{n-1}$ and coproducts
\[\cop p_i=p_i\otimes 1+g\otimes p_i\qquad \cop g=g\otimes g\]
This can be identified (upon rescaling) as the momentum sector of the
full $\kappa$-Poincar\'e algebra (with $g=e^{p_0}$).
The pairing is the natural extension of (\ref{eq:pair_class}):
\[\langle x_{n-1}^{m_{n-1}}\cdots x_1^{m_1} x_0^{k},
p_{n-1}^{r_{n-1}}\cdots p_1^{r_1} g^s\rangle
= \delta_{m_{n-1},r_{n-1}}\cdots\delta_{m_1,r_1} m_{n-1}!\cdots m_1!
s^k\]
The resulting coregular
action is conveniently expressed as (see also \cite{MaRu})
\[p_i\triangleright\no{f}=\no{\frac{\partial}{\partial x_i} f}\qquad
g\triangleright\no{f}=\no{T_{1,x_0} f}\]
with $f\in\k[x_0,\dots,x_{n-1}]$.
Due to cocommutativity, the notions of left and right integral
coincide. The invariance conditions for integration become
\[\int \no{\frac{\partial}{\partial x_i} f}=0\quad
\forall i\in\{1,\dots,n-1\}
\qquad\text{and}\qquad \int \no{\fdiff_{1,x_0} f}=0\]
The condition on the left is familiar and states the invariance under
infinitesimal translations in the $x_i$. The condition on the right states the
invariance under integer translations in $x_0$. However, we should
remember that we use a certain algebraic model of \ensuremath{C(B_{n+})}{}. We might add,
for example, a generator $p_0$
to \ensuremath{C(B_{n+})}{}
that is dual to $x_0$ and behaves
as the ``logarithm'' of $g$, i.e.\ acts as an infinitesimal
translation in $x_0$. We then have the condition of infinitesimal
translation invariance
\[\int \no{\frac{\partial}{\partial x_{\mu}} f}=0\]
for all $\mu\in\{0,1,\dots,{n-1}\}$.
In the present purely algebraic context these conditions do not make
much sense. In fact they would force the integral to be zero on the
whole algebra. This is not surprising, since we are dealing only with
polynomial functions which would not be integrable in the classical
case either.
In contrast, if we had for example the algebra of smooth functions
in two real variables, the conditions just characterise the usual
Lebesgue integral (up to normalisation).
Let us assume $\k=\mathbb{R}$ and suppose that we have extended the normal
ordering vector
space isomorphism $\mathbb{R}[x_0,\dots,x_{n-1}]\cong \ensuremath{U(\lalg{b}_{n+})}$ to a vector space
isomorphism of some sufficiently large class of functions on $\mathbb{R}^n$ with a
suitable completion $\hat{U}(\lalg{b_{n+}})$ in a functional
analytic framework (embedding \ensuremath{U(\lalg{b}_{n+})}{} in some operator algebra on a
Hilbert space). It is then natural to define the integration on
$\hat{U}(\lalg{b_{n+}})$ by
\[\int \no{f}=\int_{\mathbb{R}^n} f\ dx_0\cdots dx_{n-1}\]
where the right-hand side is just the usual Lebesgue integral in $n$
real variables $x_0,\dots,x_{n-1}$. This
integral is unique (up to normalisation) in
satisfying the covariance conditions since, as we have seen,
these correspond
just to the usual translation invariance in the classical case via normal
ordering, for which the Lebesgue integral is the unique solution.
It is also the $q\to 1$ limit of the translation invariant integral on
\ensuremath{U_q(\lalg{b_+})}{} obtained in \cite{Majid_qreg}.
We see that the natural differential calculus in corollary
\ref{cor:nat_bnp} is
compatible with this integration in that the appearing braided
derivations are exactly the actions of the translation generators
$p_{\mu}$. However, we should stress that this calculus is not
covariant under the full $\kappa$-Poincar\'e algebra, since it was
shown in \cite{GoKoMa} that in $n=4$ there is no such
calculus of dimension $4$. Our results therefore indicate a new
intrinsic approach to $\kappa$-Minkowski space that allows a
bicovariant
differential calculus of dimension $4$ and a unique translation
invariant integral by normal ordering and Lebesgue integration.
\section*{Acknowledgements}
I would like to thank S.~Majid for proposing this project,
and for fruitful discussions during the preparation of this paper.
Q: Discrete Spherical Symmetry Group
Take two spheres each having a certain number (say 5) of identical dots on them. What is the approach to proving/disproving that they are equivalent under the set of spherical rotations?
One could label the points: say (A,B,C,D,E),(1,2,3,4,5)
Align (A,1),(X,Y) with (X,Y) being successive attempted matches, say (B,2),(B,3)... ; then check all of the rest for matching.
But this seems rather inelegant. In general, it involves 5x4 test alignments and 3x3 tests.
One would prefer some kind of rank/determinant calculation. This seems reasonable since alignments are linear when expressed in terms of cartesian coordinates with spherical rotation mappings, having only two degrees of freedom, but the formulation eludes me.
A: Well, you could calculate the various point-to-point distances in either setting. If the arrangements are different, then the sets of distances will also come out different. You are not even forced to calculate them all: the first mismatch here is enough to decide.
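To make that suggestion concrete, here is a small Python sketch (my own illustration; the coordinates are made up) that compares the sorted multisets of pairwise distances. A mismatch proves the two dot configurations are not related by a rotation; note that a match alone is only a necessary condition, and in particular cannot distinguish a configuration from its mirror image.

```python
import itertools
import math

def distance_multiset(points):
    # Rotations preserve every pairwise distance, so equivalent
    # configurations must yield identical sorted distance lists.
    return sorted(math.dist(p, q) for p, q in itertools.combinations(points, 2))

def possibly_congruent(a, b, tol=1e-9):
    # Necessary condition only: the first mismatch already decides "no".
    da, db = distance_multiset(a), distance_multiset(b)
    return len(da) == len(db) and all(abs(x - y) <= tol for x, y in zip(da, db))

# Five dots, and the same five dots rotated 90 degrees about the z-axis:
A = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-1, 0, 0), (0, 0, -1)]
B = [(0, 1, 0), (-1, 0, 0), (0, 0, 1), (0, -1, 0), (0, 0, -1)]
# A genuinely different configuration (last dot moved):
s = 1 / math.sqrt(3)
C = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (-1, 0, 0), (s, s, s)]
```

When the multisets do agree, one can fall back on the finitely many trial alignments described in the question, now restricted to matched pairs.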
--- rk
Our favorite part: Seeing great Super Metroid and Uber Man levels in a Super Mario Bros. 3 ROM game.
What makes it insane: This hack has apparently gotten the most exposure out of any of them – on account of its brutal, "are you kidding me right now" difficulty level. You know you're in for a doozy of a game when the very first screen places you in point-blank range of a wall of Bullet Bills. Things only get crazier from there – you'll learn things you never even knew were possible in a Super Mario Bros. 3 ROM title, like the power granted by spin jumps, or that Yoshi's sole purpose in life is to act as a sacrificial platform. If difficulty is what you're looking for, Kaizo is your go-to hack.
Our favorite part: If we had encountered the boss battle at 7:25 in the actual game, we probably would've turned off our SNES and curled up into the fetal position, sobbing.
What makes it insane: When Moltov Mario World made his Pit hacks, he wasn't aiming to make a full-fledged Mario game; instead, he set out to pack all the punishment and pain of an entire playthrough into one brutal level. As if scrolling stages weren't challenging enough, the layout of the Pit of Death means that simply brushing a wall or ceiling in the wrong spot spells instant death. But wait, there's more – you'll need to master the art of juggling keys and P-switches in mid-air if you want any hope of surviving.
Our favorite part: The crazy P-switch juggling at 0:49. Never in a thousand years would we be able to pull that off.
What makes it insane: What's Death without a little Despair? To quote the friendly, welcoming intro screen, Moltov Mario World has created "the most difficult level in the world; you will cry and wish you were not alive if you play this hack." Our confidence isn't helped much when the overworld map depicts a skull surrounding the level, nor by the crimson backdrop after entering the Pit. This isn't false advertising, either – the precision required for this stage's shell-jumping and key-throwing will induce despair in most any Super Mario Bros. 3 ROM master.
Our favorite part: 0:38. Mario figures out how to levitate with only a key and his mad skills.
Are you a fan of baking? Don't miss the unusual rolling pin with biscuit cutters (9 pieces) with which you'll be able to knead the dough and also store the 8 cutters that are included. Approx. dimensions of the rolling pin (diameter x length): 47 x 7 cm. Approx. dimensions of the cutters (diameter x length): 5.5 x 2 cm. Available in several different designs that will be randomly shipped depending on stock.
To All the Players and Flat Earthers
I am not shocked to be writing a post about the flat earth theory, because we live in a time of great distrust of almost every organization and institution on the earth. What we are seeing now is a mental state where people are searching for truth, which at times makes them open to suggestion. When it comes to the "flat earth science," there is an absence of scientific data, such as sine and pressure waves.
Eratosthenes of Cyrene (/ɛrəˈtɒsθəniːz/; Greek: Ἐρατοσθένης, IPA: [eratostʰénɛːs]; c. 276 BC[1] – c. 195/194 BC[2]) was a Greek mathematician, geographer, poet, astronomer, and music theorist. He was a man of learning, becoming the chief librarian at the Library of Alexandria. He invented the discipline of geography, including the terminology used today.[3]
He is best known for being the first person to calculate the circumference of the Earth, which he did by applying a measuring system using stadia, a standard unit of measure during that time period. His calculation was remarkably accurate. He was also the first to calculate the tilt of the Earth's axis (again with remarkable accuracy). Additionally, he may have accurately calculated the distance from the Earth to the Sun and invented the leap day.[4] He created the first map of the world incorporating parallels and meridians, based on the available geographical knowledge of the era.
Read more about Eratosthenes of Cyrene
There is also absence of scientifically proven models which can be replicated with the same conclusion. For some reason an Internet rumor was spread NASA is behind creating the round earth reality. This could not be any further from the truth because there are many qualified ancient personalities who came to the conclusion the earth is round. Why? Because they were the scientist of their time and considered many variables which can be presented in video.
Centrifugal force affecting the shape . . .
I believe anyone who just witnessed this video should not need any more information regarding the shape of the earth. Seeing is believing, and you must admit a spinning water drop takes on the shape of the earth. Of course gravity plays its role in the equation of shape, but it is the centrifugal force which causes the round shape. All planets and stars experience rotation, and the earth rotates at about 1,000 miles an hour. Russia's high definition satellite image of the earth.
Top 10 Ways to Know the Earth is Not Flat
I think these videos are great learning tools because you cannot argue against this evidence. The centrifugal force causes a spherical shape as long as it is spinning, which also produces a magnetic field around the earth. I would like to present solid content injected inside the water sphere in the form of Alka-Seltzer. Now if you can imagine the water spinning, think of the ride at Coney Island that would spin, and the force would make you stick to the wall. In my opinion, this gives credence to the hollow earth, which I believe Enoch experienced!
Antacid tablet added to 50 micrograms of water
Let's recap before we move on so this can marinate in your mind for a minute. I believe even the flat earthers agree the earth is rotating. The above video shows this rotation causes a spherical shape, though not a perfect sphere, just like the earth! You have also seen gas and sediment in the form of Alka-Seltzer injected into a sphere. These forces were created to achieve these very spherical outcomes. This is why all of the planets and stars are spherical.
Water Molecule with electrons spinning.
Next, there is a relationship between the Universe and our bodies. At a molecular level you will find rotation causes a round shape for electrons. The phrase "As above, so below" can also be said "As externally, so internally." The Creator appears to have created a duality in all things, so there are no flat electrons in our bodies, and they spin. I think the flat earth resurgence is a result of "teachers" in the alternative truth movement running out of material and using Scripture to build a very flimsy case that the round earth is a conspiracy.
But if you don't believe me when I tell you about earthly things, how can you possibly believe if I tell you about heavenly things?
(John 3:12)
In essence, this is another distraction meant to create conflict between people while tricking them into donations. Regardless of the shape of the earth, how is this pushing me forward spiritually? I have shown you the science behind why the earth is round, and also that the molecules in our bodies are spherical because of spin. The spinning earth also creates a magnetic field: two concentric circles overlapping while rotating with opposing forces. Folks need to understand the Creator designed these forces for this very effect! I hope this helps and did not fall flat!
Tags: flat earth, Hollow Earth
//Copyright (C) Microsoft Corporation. All rights reserved.
// explicit1.cs
interface IDimensions
{
    float Length();
    float Width();
}

class Box : IDimensions
{
    float lengthInches;
    float widthInches;

    public Box(float length, float width)
    {
        lengthInches = length;
        widthInches = width;
    }

    // Explicit interface member implementation:
    float IDimensions.Length()
    {
        return lengthInches;
    }

    // Explicit interface member implementation:
    float IDimensions.Width()
    {
        return widthInches;
    }

    public static void Main()
    {
        // Declare a class instance "myBox":
        Box myBox = new Box(30.0f, 20.0f);

        // Declare an interface instance "myDimensions":
        IDimensions myDimensions = (IDimensions) myBox;

        // Print out the dimensions of the box:
        /* The following commented lines would produce compilation
           errors because they try to access an explicitly implemented
           interface member from a class instance: */
        //System.Console.WriteLine("Length: {0}", myBox.Length());
        //System.Console.WriteLine("Width: {0}", myBox.Width());

        /* Print out the dimensions of the box by calling the methods
           from an instance of the interface: */
        System.Console.WriteLine("Length: {0}", myDimensions.Length());
        System.Console.WriteLine("Width: {0}", myDimensions.Width());
        // Output:
        // Length: 30
        // Width: 20
    }
}
Analyzing Host Cell Proteins Using Off-Line Two-Dimensional Liquid Chromatography–Mass Spectrometry
Pat Sandra, Koen Sandra, Alexia Ortiz
LCGC Supplements, Special Issues-11-01-2016, Volume 34, Issue 11
The use of off-line 2D-LC–MS for the characterization of HCPs and their monitoring during downstream processing
Protein biopharmaceuticals are commonly produced recombinantly in mammalian, yeast, or bacterial expression systems. In addition to the therapeutic protein, these cells also produce endogenous host cell proteins (HCPs) that can contaminate the biopharmaceutical product, despite major purification efforts. Since HCPs can affect product safety and efficacy, they need to be closely monitored. Enzyme-linked immunosorbent assays (ELISA) are recognized as the gold standard for measuring HCPs because of their high sensitivity and high throughput, but mass spectrometry (MS) is gaining acceptance as an alternative and complementary technology for HCP characterization. This article reports on the use of off-line two-dimensional liquid chromatography–mass spectrometry (2D-LC–MS) for the characterization of HCPs and their monitoring during downstream processing.
In contrast to small-molecule drugs that are commonly synthesized by chemical means, protein biopharmaceuticals result from recombinant expression in nonhuman host cells. As a result, the biotherapeutic is co-expressed with hundreds of host cell proteins (HCPs) with different physicochemical properties present in a wide dynamic concentration range. During downstream processing, the levels of HCPs are substantially reduced to a point considered acceptable to regulatory authorities (typically <100 ppm, that is, ng HCP per mg product). These process-related impurities are considered critical quality attributes because they might induce an immune response, cause adjuvant activity, exert a direct biological activity (such as cytokines), or act on the therapeutic itself (for example, proteases) (1,2). To mention some specific examples, during the clinical development phase of Omnitrope, Sandoz's human growth hormone biosimilar expressed in E. coli, adverse events associated with residual HCPs were encountered. The European Medicines Agency (EMA) only granted approval after additional purification steps for HCP clearance were incorporated (3–5). Scientists at Biogen Idec demonstrated fragmentation of a highly purified monoclonal antibody as a result of residual Chinese hamster ovary (CHO) cell protease activity in the drug substance, despite an enormous purification effort undertaken (protein A affinity chromatography with subsequent orthogonal purification steps by cation- and anion-exchange chromatography) (6). The authors of the study state that it is of utmost importance to identify residual protease activity early in process development to allow a revision of the purification scheme or, ultimately, to knock down the specific protease gene.
Multicomponent enzyme-linked immunosorbent assay (ELISA) is presently the workhorse method for HCP testing because of its high throughput, sensitivity, and selectivity (1,2). Polyclonal antibodies used in the test are typically generated by the immunization of animals with an appropriate preparation derived from the production cell, minus the product-coding gene. However, ELISA does not comprehensively recognize all HCP species; that is, it cannot detect HCPs to which no antibody was raised, it only provides information on the total amount of HCPs without providing insight into individual HCPs, and, in a multicomponent setup, it has poor quantitation power. In that respect, MS nicely complements ELISA because it can provide both qualitative and quantitative information on individual HCPs. In recent years, various papers have appeared dealing with the mass spectrometric (MS) analysis of HCPs (2,3,7–12). These studies typically rely on bottom-up proteomics approaches in which peptides derived from the protein following proteolytic digestion are handled. A clear trend is observed toward the use of upfront multidimensional chromatography to tackle the enormous complexity and wide dynamic range (2,3,7,8,10). Compared to one-dimensional liquid chromatography (1D-LC), two-dimensional LC (2D-LC) drastically increases peak capacity as long as the two dimensions are orthogonal (13). In a 1D chromatographic setup the separation space is dominated by peptides derived from the therapeutic protein; in 2D-LC the increased peak capacity allows one to look substantially beyond the therapeutic peptides and detect HCPs at low levels. Three recent papers using 2D-LC–MS/MS demonstrate that HCPs can be revealed at levels as low as 10 ppm (2,3,7). In these cases, label-free quantification was based on the three most intense tryptic peptides making use of single-point calibration against spiked exogenous proteins.
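The top-three ("Hi3") label-free approach mentioned above can be sketched in a few lines. All numbers, protein names, and molecular weights below are invented for illustration; the principle is simply that the mean MS intensity of the three most intense tryptic peptides is roughly proportional to molar amount, so a single spiked standard converts intensities to amounts and ultimately to a ppm mass ratio.

```python
def hi3(intensities):
    # Mean intensity of the three most intense peptides of a protein.
    return sum(sorted(intensities, reverse=True)[:3]) / 3

# Hypothetical peptide intensities (arbitrary units):
spike_std = hi3([9.0e6, 8.5e6, 8.0e6])   # exogenous standard, 50 fmol spiked
mab       = hi3([5.1e9, 4.8e9, 4.6e9])   # therapeutic protein
hcp       = hi3([1.2e6, 1.0e6, 0.8e6])   # one host cell protein

fmol_per_unit = 50.0 / spike_std         # single-point calibration
hcp_fmol = hcp * fmol_per_unit
mab_fmol = mab * fmol_per_unit

# Convert molar amounts to a mass ratio (ppm = ng HCP per mg product),
# assuming illustrative molecular weights of 40 kDa (HCP), 150 kDa (mAb):
ppm = (hcp_fmol * 40e3) / (mab_fmol * 150e3) * 1e6
```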
An off-line 2D-LC–MS/MS setup was used in our laboratory for the characterization of HCPs throughout the downstream manufacturing of a therapeutic enzyme recombinantly expressed in yeast. The workflow is schematically presented in Figure 1. Supernatant was collected at different purification steps. Following desalting of the supernatant, the proteins were reduced using dithiothreitol (DTT) and alkylated using iodoacetamide (IAM) before overnight trypsin digestion. The peptide mixture was subsequently subjected to 2D-LC–MS/MS.
Figure 1: Workflow for the characterization of HCPs using off-line 2D-LC–MS/MS.
In successfully applying 2D-LC, the selectivity of the two separation mechanisms toward the peptides must differ substantially to maximize orthogonality and, hence, resolution. Various orthogonal combinations targeting different physicochemical properties of the peptides have been described. Bottom-up proteomics setups initially relied on the combination of strong-cation exchange and reversed-phase LC to separate by charge in the first dimension and by hydrophobicity in the second dimension (13–15). In recent years, various researchers have shifted their efforts to the combination of reversed-phase LC and reversed-phase LC (13,16–19). The orthogonality in this nonobvious combination is mainly directed by the mobile-phase pH, in this instance, high pH in the first dimension and low pH in the second dimension, and by the zwitterionic nature of the peptides. In contrast to the combination of strong-cation exchange and reversed-phase LC, where the first dimension has an intrinsic low peak capacity, the combination of reversed-phase LC in both dimensions benefits from the high peak capacities of the two independent dimensions, which results in an overall high peak capacity of the 2D setup.
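Under the common idealization that the two dimensions are fully orthogonal, the one-dimensional peak capacities simply multiply; in practice only a fraction of the 2D separation plane is occupied, so a surface-coverage factor is often applied, and the number of fractions collected caps the effective first-dimension contribution. A minimal sketch (the numbers are illustrative, not taken from this article):

```python
def peak_capacity_2d(n1, n2, coverage=1.0):
    # Product rule for orthogonal dimensions; 'coverage' discounts the
    # fraction of the 2D separation plane the peptides actually occupy.
    return n1 * n2 * coverage

n_1d = 200                                     # a single long RP gradient
n_2d = peak_capacity_2d(22, 200, coverage=0.6) # 22 fractions cap dimension 1
gain = n_2d / n_1d                             # resolving-power gain over 1D
```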
In the characterization of yeast HCPs, we opted to use reversed-phase LC in both dimensions with the first dimension operated at pH 10 and the second dimension at pH 2.6. An acidic pH is preferred in the second dimension since it maximizes MS sensitivity for peptides. Figure 2 shows the first dimension ultraviolet (UV) 214-nm chromatogram of a selected downstream manufacturing sample. A reversed-phase LC column with an internal diameter of 2.1 mm was used, which allowed substantial amounts of sample to be loaded, in this particular case the amount corresponding to 115 µg of protein. The peptides were nicely spread throughout the acetonitrile gradient and 22 fractions were collected and further processed after drying and reconstitution in 50 µL of low-pH mobile-phase A (2% acetonitrile and 0.1% formic acid).
Figure 2: First-dimension reversed-phase LC–UV 214 nm chromatogram of a selected downstream manufacturing sample. HPLC system: Agilent Technologies 1200; Column: 150 mm × 2.1 mm, 3.5-µm Waters XBridge BEH C18; mobile-phase A: 10 mM NH4HCO3 pH 10; mobile-phase B: acetonitrile; flow rate: 200 µL/min; gradient: 5–50% B in 30 min; column temperature: 25 °C; injection volume: 50 µL; fraction interval: 1.5 min (300-µL fractions).
The second dimension consisted of a reversed-phase LC capillary column with an internal diameter of 75 µm, which was directly coupled through a nanospray interface to high resolution quadrupole time-of-flight (QTOF)-MS operated in the data-dependent acquisition (DDA) mode. The LC–MS/MS traces of some selected fractions are shown in Figure 3 illustrating good orthogonality between first and second dimension separations.
Figure 3: Second dimension LC–MS/MS chromatograms of selected fractions (Figure 2). HPLC system: Thermo Scientific Ultimate3000 RSLC nano; MS system: Agilent Technologies 6530 Q-TOF; column: 150 mm × 75 µm, 3-µm Thermo Scientific Acclaim PepMap100 C18; precolumn: 20 mm × 75 µm, 3-µm Acclaim PepMap100 C18 (Thermo Scientific); mobile-phase A: 2% acetonitrile, 0.1% formic acid; mobile-phase B: 80% acetonitrile, 0.1% formic acid; loading solvent: 2% acetonitrile, 0.1% formic acid; flow rate: 300 nL/min (nano pump), 5 µL/min (loading pump); gradient: 0-60% B in 60 min; column temperature: 35°C; injection volume: 20 µL.
The MS system was programmed so that an MS survey measurement preceded three dependent MS/MS acquisitions. Precursors selected twice for collision-induced dissociation (CID) were placed in an exclusion list. Generated MS/MS spectra were subjected to database searching (yeast proteins and therapeutic enzyme sequence) and relative protein quantification was performed from total protein intensities computed by the Spectrum Mill search engine. Total intensity is the sum of intensities for all spectra of peptides belonging to a given protein. Figure 4 shows the evolution of the therapeutic enzyme and some selected HCPs throughout the final stages of purification. Of particular interest, during downstream manufacturing, a nonyeast-derived glycosidase was added to shape the glycosylation profile of the therapeutic enzyme (in between stage 1 and 2). This glycosidase temporarily reduced the purity of the therapeutic enzyme but was rapidly cleared. The HCPs detected were mainly proteases, which influenced stability of the therapeutic enzyme. While some were clearly reduced throughout the process (serine carboxypeptidase 1 and aspartyl peptidase), others were enriched (serine carboxypeptidase 2 and metallopeptidase). While these proteases were present at low levels (<0.1%), stability studies have shown that they act on the protein. With the identity of these proteases revealed, they could be the subject of a gene knockout to increase product stability.
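The data-dependent acquisition logic described above — one survey scan, the three most intense precursors fragmented, and any precursor already fragmented twice placed on an exclusion list — can be captured in a few lines. This is an illustrative sketch of the scheme, with made-up m/z values and intensities, not vendor instrument code:

```python
def pick_precursors(survey, counts, top_n=3, max_selections=2):
    # Choose up to top_n most intense precursors from a survey scan,
    # skipping any m/z already fragmented max_selections times.
    eligible = [(i, mz) for mz, i in survey.items()
                if counts.get(mz, 0) < max_selections]
    eligible.sort(reverse=True)              # most intense first
    chosen = [mz for _, mz in eligible[:top_n]]
    for mz in chosen:
        counts[mz] = counts.get(mz, 0) + 1   # exclusion-list bookkeeping
    return chosen

counts = {}
survey = {421.76: 5e5, 530.28: 3e5, 615.33: 8e5, 702.41: 1e5}
cycle1 = pick_precursors(survey, counts)     # three most intense
cycle2 = pick_precursors(survey, counts)     # same three, second CID
cycle3 = pick_precursors(survey, counts)     # those are now excluded
```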
Figure 4: Evolution of the therapeutic enzyme, the exogenous glycosidase, and some selected HCPs throughout the final stages of downstream manufacturing. The numbers on the bars represent the relative abundances and the number of unique peptides identified and quantified. Relative abundances were calculated based on the MS signal of identified peptides. Note: the therapeutic enzyme contains various fully occupied glycosylation sites. These glycopeptides are not identified by the MS/MS search engine and therefore not taken into account in the calculation of relative abundances.
It is important to note that none of the HCPs reported could be identified using 1D-LC–MS/MS operated under exactly the same conditions as reported in the legend of Figure 3. Column load was evidently much lower compared to the 2D-LC–MS/MS analysis (4 µg versus 115 µg).
In conclusion, off-line 2D-LC–MS/MS represents a valuable new tool for the characterization of HCPs and their monitoring throughout downstream processing. The use of multidimensional chromatography substantially increases peak capacity and improves the dynamic range, providing access to otherwise unmined HCPs. Based on the output of the 2D-LC–MS/MS experiment, processes can be adjusted and identified HCPs can be incorporated into single-product ELISAs or into targeted multiple reaction monitoring (MRM) MS assays for routine monitoring.
Koen Sandra is Director at the Research Institute for Chromatography (RIC) in Kortrijk, Belgium.
Alexia Ortiz is a Proteomics Researcher at the Research Institute for Chromatography (RIC).
Pat Sandra is Chairman at the Research Institute for Chromatography (RIC) and Emeritus Professor at Ghent University in Ghent, Belgium.
Designing your own boxy pouch with the exact dimensions you want has never been easier. Enter your desired length, width, and height of the finished pouch, and the calculator will tell you what dimensions to cut the fabric!
You can either design the pouch to use a zipper length commonly available, or you can trim any zipper that is longer than the needed length down to the pouch size.
7) Trim the seam allowance to 1/4” (0.64 cm) to prepare the edge for French seams.
11) Turn the pouch inside out. Press out seams you just sewed. Resew them with the seam allowance you entered into the calculator so the raw edges are completely encased by the new seam (French seams).
12) Turn the pouch inside out. It’s finished! Ready to be filled with all sorts of lovely things.
After a barrage of teasers, Lamborghini has finally unveiled the Aventador SVJ during Monterey Car Week in California. It is a blend of distinctive design, mouth-watering tech, and top-flight handling and performance. The Italian manufacturer says the Aventador SVJ takes the concept of a super sports car to a “new dimension”.
For people not in the know, SV traditionally stands for Superveloce, meaning ‘superfast’, and the J denotes ‘Jota’, indicating track and performance superiority. The Aventador SVJ’s global premiere was hugely anticipated since it tamed the Nürburgring-Nordschleife to become the fastest production car on the circuit, completing the 20.6 km course in just 6:44.97 minutes.
To maintain the exclusivity, Lamborghini has restricted the production of Aventador SVJ to just 900 units worldwide and we can expect it to arrive in India as well. A special version dubbed SVJ 63 with unique configuration has been unveiled at the Pebble Beach Concours d’Elegance to pay homage to Lamborghini’s founding year of 1963.
It boasts increased usage of carbon fibre and is limited to just 63 examples. Endorsed as the pinnacle of Lambo’s super sports car range, it has a performance-optimised powertrain. The Lamborghini Aventador SVJ is the most powerful series-production V12-engined model from the flamboyant brand.
The engine produces a maximum power of 770 hp at 8,500 rpm and 720 Nm of torque delivered at 6,750 rpm. At just 1,525 kg dry, it has a weight-to-power ratio of 1.98 kg/hp and can accelerate from zero to 100 kmph in 2.8 seconds. More interestingly, from 0 to 200 kmph, it only takes 8.6 seconds. The top speed is claimed at more than 350 kmph and it has 100-0 kmph braking distance of just 30 meters.
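Both headline numbers in this paragraph can be sanity-checked with a couple of lines of arithmetic, using only figures quoted in the article:

```python
# Weight-to-power ratio: 1,525 kg dry weight divided by 770 hp.
weight_to_power = 1525 / 770        # ≈ 1.98 kg/hp, matching the quoted figure

# Average speed over the 20.6 km Nürburgring-Nordschleife lap of 6:44.97 (min:sec).
lap_hours = (6 + 44.97 / 60) / 60
average_speed_kmph = 20.6 / lap_hours   # ≈ 183 kmph average for the record lap
```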
Compared to the Aventador S, the SVJ is aerodynamically superior, as every body panel has been designed under a ‘form follows function’ philosophy. Lamborghini was obsessed with using lightweight materials to gain performance, and the SVJ features the patented “Aerodinamica Lamborghini Attiva 2.0” system that actively varies aero load to achieve high downforce or low drag.
GSF Lux SICAV
Global Quality Equity
Latest NAV Price
NAV daily change %
Dealing currency
USD 1.24 B
Investment objective summary
The Fund aims to provide long-term capital growth.
The Fund invests primarily in the shares of companies around the world.
The Fund focuses investment on shares deemed by the Investment Manager to be of high quality i.e. companies which have provided sustainably high levels of return on invested capital and free cash flow (a company’s cash earnings after its capital expenditures have been accounted for).
The Fund is unrestricted in its choice of companies either by size, industry or geography.
Derivatives (financial contracts whose value is linked to the price of an underlying asset) may be used for efficient portfolio management purposes e.g. with the aim of either managing the Fund risks or reducing the costs of managing the Fund.
Clyde Rossouw
Clyde is Co-Head of Quality at Ninety One. He is a portfolio manager with a focus...
Performance & returns
Rolling 12 month Performance
Growth of Investment
Calendar Year Returns
Trailing Returns
The value of investments, and any income generated from them, can fall as well as rise. Past performance is not a reliable indicator of future results. If the currency shown differs from your home currency, returns may increase or decrease as a result of currency fluctuations. Investment objectives and performance targets may not necessarily be achieved, losses may be made. We recommend that you seek independent financial advice to ensure this Fund is suitable for your investment needs. Where a shareclass has been in existence for less than 12 months, performance is not disclosed. No representation is being made that any investment will or is likely to achieve profits or losses similar to those achieved in the past, or that significant losses will be avoided. The Trailing Returns chart may use different Sector performance start dates compared to other performance charts or other marketing literature which may result in minor differences. Where a benchmark index is calculated on a monthly basis, the returns will be out of line with the fund and/or sector which are calculated daily.
Key facts & Downloads
BYR4YN3
L5447T186
INGQEAI LX
MSCI AC World Net Return
Morningstar category sector
Global Large-Cap Growth Equity
Fund inception date
Share class inception date
Valuation & transaction cut-off
16:00 New York Time (forward pricing)
Key Investor Information (KII) (en) 127kb
Factsheet (en) 92kb
Factsheet (pt) 106kb
Price history (en)
Distribution history (en)
Portfolio & Holdings
Date as of 30/11/2020
Geographic breakdown (%)
Sector breakdown (%)
Top & bottom country weightings vs comparative index (%)
Top & bottom sector weightings vs comparative index (%)
Top & bottom stock weightings vs comparative index (%)
Top holdings (%)
Europe ex UK
Far East ex Japan
China*
*Offshore and/or Mainland
Maximum initial charge %
Ongoing charge %
The Fund may incur further expenses (not included in the above Ongoing charge) as permitted by the Prospectus.
Specific fund risks
Charges from capital
For Inc-2 and Inc-3 share classes, expenses are charged to the capital account rather than to income, so capital will be reduced. This could constrain future capital and income growth. Income may be taxable.
Concentrated portfolio
The portfolio invests in a relatively small number of individual holdings. This may mean wider fluctuations in value than more broadly invested portfolios.
Changes in the relative values of different currencies may adversely affect the value of investments and any related income.
The use of derivatives is not intended to increase the overall level of risk. However, the use of derivatives may still lead to large changes in value and includes the potential for large financial loss. A counterparty to a derivative transaction may fail to meet its obligations which may also lead to a financial loss.
Emerging market (inc. China)
These markets carry a higher risk of financial loss than more developed markets as they may have less developed legal, political, economic or other systems.
The value of equities (e.g. shares) and equity-related investments may vary according to company profits and future prospects as well as more general market factors. In the event of a company default (e.g. insolvency), the owners of their equity rank last in terms of any financial payment from that company.
Distributions & Yields
Distribution & Yields
Distribution per share
Most recent distribution payments
Distribution amount
Ex-distribution date
Yield data as of 30/11/2020
We recommend that you seek independent financial advice to ensure this Fund is suitable for your investment needs.
All the information contained in this communication is believed to be reliable but may be inaccurate or incomplete. Any opinions stated are honestly held but are not guaranteed and should not be relied upon.
This communication is provided for general information only. It is not an invitation to make an investment nor does it constitute an offer for sale. The full documentation that should be considered before making an investment, including the Prospectus and Key Investor Information Documents, which set out the Fund specific risks, are available from Ninety One. This Fund should be considered as a long-term investment.
Performance data source: © Morningstar, NAV based, (net of fees, excluding initial charges), total return, in the share class dealing currency. Performance would be lower had initial charges been included as an initial charge of up to 5% (10% for S shares) may be applied to your investment. This means that for an investment of $1,000, where the initial charge equals 5%, $950 ($900 for S shares) would actually be invested in the Fund. Returns to individual investors will vary in accordance with their personal tax status and tax domicile.
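The initial-charge example in the note above is simple arithmetic; as a minimal sketch (figures taken from the note: a $1,000 investment with a 5% charge, or 10% for S shares):

```python
def amount_invested(investment, initial_charge):
    # The initial charge is deducted before the money goes into the fund.
    return investment * (1 - initial_charge)

standard = amount_invested(1000, 0.05)   # ≈ 950, i.e. $950 actually invested
s_shares = amount_invested(1000, 0.10)   # ≈ 900, i.e. $900 for S shares
```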
Morningstar Analyst rating™: Copyright © 2020. Morningstar. All Rights Reserved. The information, data and opinions expressed and contained herein are proprietary to Morningstar and/or its content providers and are not intended to represent investment advice or recommendation to buy or sell any security; are not warranted to be accurate, complete or timely. Neither Morningstar nor its content providers are responsible for any damages or losses arising from any use of this Rating, Rating Report or Information contained therein.
The overall rating for a fund, often called the ‘star rating’, is a third party rating derived from a quantitative methodology that rates funds based on an enhanced Morningstar™ Risk-Adjusted Return measure. ‘Star ratings’ run from 1 star (lowest) to 5 stars (highest) and are reviewed at the end of every calendar month. The various funds are ranked by their Morningstar™ Risk-Adjusted Return scores and relevant stars are assigned. It is important to note that individual shareclasses of each fund are evaluated separately and their ratings may differ depending on the launch date, fees and expenses relevant to the shareclass. In order to achieve a rating the share class of a fund must have a minimum three-year performance track record.
For a full description of the ratings, please see our ratings guide. A rating is not a recommendation to buy, sell or hold a fund.
The portfolio may change significantly over a short period of time. This is not a buy or sell recommendation for any particular security. Figures may not always sum to 100 due to rounding.
For an explanation of statistical terms, please see our glossary.
Q: How to use return_sequences option and TimeDistributed layer in Keras?
I have a dialog corpus like below. And I want to implement an LSTM model which predicts a system action. The system action is described as a bit vector. And a user input is calculated as a word-embedding which is also a bit vector.
t1: user: "Do you know an apple?", system: "no"(action=2)
t2: user: "xxxxxx", system: "yyyy" (action=0)
t3: user: "aaaaaa", system: "bbbb" (action=5)
So what I want to realize is a "many to many (2)" model. When my model receives a user input, it must output a system action.
But I cannot understand the return_sequences option and the TimeDistributed layer after the LSTM. To realize "many-to-many (2)", are return_sequences=True and a TimeDistributed layer after the LSTM required? I would appreciate a fuller description of them.
return_sequences: Boolean. Whether to return the last output in the output sequence, or the full sequence.
TimeDistributed: This wrapper allows to apply a layer to every temporal slice of an input.
Updated 2017/03/13 17:40
I think I could understand the return_sequences option. But I am still not sure about TimeDistributed. If I add a TimeDistributed after the LSTMs, is the model the same as "my many-to-many (2)" below? So I think Dense layers are applied for each output.
A: The LSTM layer and the TimeDistributed wrapper are two different ways to get the "many to many" relationship that you want.
* LSTM will eat the words of your sentence one by one; via "return_sequences" you can choose to output something (the state) at each step (after each word processed), or to output something only after the last word has been eaten. So with return_sequences=True, the output will be a sequence of the same length; with return_sequences=False, the output will be just one vector.
* TimeDistributed: this wrapper allows you to apply one layer (say Dense, for example) to every element of your sequence independently. That layer will have exactly the same weights for every element; the same layer is applied to each word, and it will, of course, return the sequence of words processed independently.
As you can see, the difference between the two is that the LSTM propagates information through the sequence: it will eat one word, update its state and return it or not, then go on with the next word while still carrying information from the previous ones. In TimeDistributed, the words will be processed in the same way on their own, as if they were in silos, and the same layer applies to every one of them.
So you don't have to use LSTM and TimeDistributed in a row; you can do whatever you want, just keep in mind what each of them does.
I hope it's clearer?
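The behaviour described above can be sketched without any framework at all — the recurrence below is a made-up stand-in for the real LSTM gate arithmetic, kept only to show what return_sequences and TimeDistributed each do with a sequence:

```python
def toy_rnn(sequence, return_sequences=False):
    # "Eats" the sequence one element at a time, carrying state forward.
    # The update rule is a toy stand-in, not real LSTM arithmetic.
    state, outputs = 0.0, []
    for x in sequence:
        state = 0.5 * state + x
        outputs.append(state)
    return outputs if return_sequences else outputs[-1]

def time_distributed(layer, sequence):
    # The same layer, with the same weights, applied to every element
    # independently - no state is carried from one element to the next.
    return [layer(x) for x in sequence]

seq = [1.0, 2.0, 3.0]
all_states = toy_rnn(seq, return_sequences=True)   # one output per element
last_state = toy_rnn(seq)                          # only the final output
doubled = time_distributed(lambda x: 2 * x, seq)   # elements processed in silos
```

With return_sequences=True the output is a sequence as long as the input; with False it collapses to the last state; and time_distributed never mixes information between positions.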
EDIT:
The TimeDistributed wrapper, in your case, applies a Dense layer to every element that was output by the LSTM.
Let's take an example:
You have a sequence of n_words words that are embedded in emb_size dimensions. So your input is a 2D tensor of shape (n_words, emb_size)
First you apply an LSTM with output dimension = lstm_output and return_sequences=True. The output will still be a sequence, so it will be a 2D tensor of shape (n_words, lstm_output).
So you have n_words vectors of length lstm_output.
Now you apply a TimeDistributed dense layer with say 3 dimensions output as parameter of the Dense. So TimeDistributed(Dense(3)).
This will apply Dense(3) n_words times, to every vector of size lstm_output in your sequence independently... they will all become vectors of length 3. Your output will still be a sequence, so a 2D tensor, now of shape (n_words, 3).
Is it clearer? :-)
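That shape walk-through can be checked with plain Python lists, no Keras required (the weights and activations below are placeholder values, since only the shapes matter here):

```python
n_words, lstm_output = 4, 5

def dense3(vec, weights):
    # One Dense(3): three dot products against the same weight rows,
    # reused unchanged for every word in the sequence.
    return [sum(w * v for w, v in zip(row, vec)) for row in weights]

weights = [[0.1] * lstm_output for _ in range(3)]          # shared weights
lstm_out = [[1.0] * lstm_output for _ in range(n_words)]   # (n_words, lstm_output)

# TimeDistributed(Dense(3)): apply dense3 independently at each position.
out = [dense3(vec, weights) for vec in lstm_out]           # (n_words, 3)
```

Each of the n_words vectors of length lstm_output becomes a vector of length 3, so the output is still a sequence — a (n_words, 3) structure, matching the shapes described above.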
A: return_sequences=True parameter:
If we want to have a sequence for the output, not just a single vector as we did with normal neural networks, it's necessary that we set return_sequences to True. Concretely, let's say we have an input with shape (num_seq, seq_len, num_feature). If we don't set return_sequences=True, our output will have the shape (num_seq, num_feature), but if we do, we will obtain an output with shape (num_seq, seq_len, num_feature).
TimeDistributed wrapper layer:
Since we set return_sequences=True in the LSTM layers, the output is now a three-dimensional tensor. If we input that into the Dense layer, it will raise an error because the Dense layer only accepts two-dimensional input. In order to input a three-dimensional tensor, we need to use a wrapper layer called TimeDistributed. This layer will help us maintain the output's shape, so that we can achieve a sequence as output in the end.
Set of 40 magnetic pins in blue.
Dimensions are 4mm diameter by 10mm length per magnet.
Perfect for holding papers to any magnetic surface.
One magnet can hold up to 6 pieces of printer paper. This magnet set complies with the regulations and labeling requirements of the Consumer Product Safety Commission. In addition, in accordance with Amazon strong magnet policy, these magnets are not made of rare earth metals and have a flux less than 50 kG²mm².
Issue V,3: Nov. 10, 1999
A match made in heaven
by Carole Brown
Few people can be unaware of the conversations that have arisen on campus over the past few years regarding charismatic and traditional Catholicism. Many people treat the question as if it were an either/or proposition—that a person is either charismatic or traditional. Sometimes, the question is posed exactly that bluntly: “Are you charismatic or traditional?” The two have at times been pitted against each other as though they were somehow mutually exclusive.
I’d like to argue, on the contrary, that the two belong together. Other Concourse writers have written good articles in previous issues that pointed out the potential on this campus for bringing charismatic and traditional spiritualities together. It seems to me that this matter goes beyond simply establishing a “unity in diversity” which helps people of diverse “spiritualities” to tolerate each other. As authentic Catholic Christians, we are called to embrace both these realities as indivisible dimensions of our faith.
I begin with what may seem a rather bold assertion: it is impossible to be orthodox without embracing both the charismatic and traditional dimensions of our faith. At Franciscan University, most of us are concerned about orthodoxy. Unfortunately, for whatever reason, there is a tendency to think we must create subdivisions or camps within orthodoxy, labeling them “charismatic spirituality” and “traditional spirituality”—as if they were optional alternatives. The problem with such an approach is that as quickly as one identifies with one camp and rejects the other, he is no longer in harmony with the teaching of the Church. The Church does not distinguish these things in the same way that it might distinguish a “Franciscan spirituality” from an “Ignatian Spirituality” or a “Carmelite spirituality” from a “Dominican spirituality.” When the Church speaks of its traditional and charismatic nature, it sees them both as essential dimensions of an authentically Christian life. Traditions and charisms are not optional, nor can they be marginalized as such.
Let us first identify and deal with the caricatures that have come to be identified with the terms “traditional” and “charismatic.” The so-called “traditional” Catholics are caricatured as being fond of novenas, the Blessed Mother, the Rosary, and hymns set to organ music. They wear scapulars and large collections of medals. They also like the Latin Mass and incense. They are into beauty, dignity and reverence, and regard clapping in Mass as “irreverent.” “Charismatic” Catholics, on the other hand, always have their hands in the air unless they are “resting in the Spirit.” They prefer guitars, Vineyard music, clapping, and even dancing as the Spirit moves them. They avoid fixed formulas for prayer, preferring instead to pray using spontaneous praise or in tongues.
When one of these caricatures encounters the other, it is little wonder that they appear to be irreconcilable opposites. To the charismatic, the traditional seems rigid more than reverent—more interested in rules and rubrics than “worship in spirit and truth.” To the traditional, the charismatic seems wild and obnoxious—more anxious to work up emotional highs than to contemplate the mysteries, and somehow disconnected with the Church’s past. One would not expect that a charismatic could also be contemplative, or that a traditional could praise God in tongues that were not his own.
By Webster’s definition, a caricature is an exaggeration by means of often ludicrous distortion of parts or characteristics. We can recognize the characteristics described in the caricatures, even though they are exaggerated and distorted. Unfortunately, it is not uncommon on our campus to use these caricatures as though they were accurate portrayals of what it is to be traditional and charismatic, and to dismiss one or the other on this basis.
The Church sees things quite differently. There is no indication that the Church recognizes a division between charismatic and traditional dimensions of our faith. In fact, she seems to think of both as necessary for all. In the Dogmatic Constitution on the Church, the Church points out those practices which are most frequently identified as “traditional” and ascribes them, not to some Catholics, but to the faithful. The faithful “must frequently partake of the sacraments, chiefly of the Eucharist, and take part in the liturgy; he must constantly apply himself to prayer, self-denial, active brotherly service and the practice of all virtues.”1 While insisting on the supremacy of Christ, she encourages pious devotions to the saints2 and to the Blessed Virgin Mary in particular. In the words of the Church, “the cult…of the Blessed Virgin [should] be generously fostered, and…the practices and exercises of devotion towards her, recommended by the teaching authority of the Church in the course of centuries be highly esteemed…”3 Paul VI speaks of devotion to the Blessed Virgin Mary as “an integral element of Christian worship.”4 The Rosary too is highly recommended as a “compendium of the entire Gospel…suitable for fostering contemplative prayer.”5 (It should be added here, that while encouraging the use of the Rosary as “an excellent prayer,” the Church also says that it “should not be propagated in a way that is too one-sided or exclusive…the faithful should feel serenely free in its regard. They should be drawn to its calm recitation by its intrinsic appeal.”6)
By the same token, the Second Vatican Council taught clearly that everyone is to be open to the charisms of the Holy Spirit:
It is not only through the sacraments and the ministrations of the Church that the Holy Spirit makes holy the People, leads them and enriches them with his virtues. Allotting his gifts according as he wills (cf. Cor.12:11), he also distributes special graces among the faithful of every rank. By these gifts he makes them fit and ready to undertake various tasks and offices for the renewal and building up of the Church, as it is written, “the manifestation of the Spirit is given to everyone for profit.” Whether these charisms be very remarkable or more simple and widely diffused, they are to be received with thanksgiving and consolation since they are fitting and useful for the needs of the church…7 [emphasis mine].
John Paul II takes this openness even a step farther. In his Pentecost address of 1998,8 he states unequivocally that:
the institutional and charismatic aspects are co-essential as it were to the Church’s constitution. They contribute, although differently, to the life, renewal and sanctification of God’s People. It is from this providential rediscovery of the Church’s charismatic dimension that, before and after the Council, a remarkable pattern of growth has been established… (# 4)
The Holy Father confirmed that the charismatic dimension of our faith is not an optional spirituality, and may not be marginalized as such. “The institutional and charismatic aspects are co-essential to the Church’s constitution.” We cannot do without the charismatic dimension anymore than we can do without the Pope or the Sacraments!
Some try to limit the Holy Father’s use of the word “charismatic” in this context because, while many of the people present at this address represented the charismatic renewal, there were also groups there who do not use the gifts which are commonly referred to as charismatic gifts. Therefore, he must have meant it in a different way than we use the term in Steubenville (i.e. charismatic in the sense of gifts such as tongues, prophecy, etc.) It is true that the Holy Father used the word charismatic in a broad and inclusive sense, but there is nothing to indicate that he excluded the charisms of tongues, prophecy and so forth—in fact quite the opposite. What did the Holy Father mean when he spoke of the “providential rediscovery of the Church’s charismatic dimension”? The Church has always had preachers and teachers, apostles and evangelists; the Church has always exercised hospitality and service to the poor. The Holy Father’s use of the term “providential rediscovery” could hardly be applied to these charisms because the Church never lost them. What then could constitute a “providential rediscovery” unless it implied a discovery of something that had been, in some sense, lost?9 To what historical moment does this providential rediscovery refer—when was the original discovery? I would submit that the historical moment to which this “rediscovery” refers is Pentecost.10 In this respect, it can be said that charismatic prayer is the oldest tradition the Church has. Certainly, the Holy Father does not assign a superior value to the charisms present at Pentecost. Rather he affirms that “there is an enormous range of charisms through which the Holy Spirit shares His charity and holiness with the Church,”11 which, without dismissing other charisms, includes the gifts of tongues, prophecy, etc.
I think one of the reasons some are dismissive of the charismatic dimension is that many of us have fears and reservations about opening ourselves to it. Some of us have become cynical because of negative experiences with the charismatic movement or with certain people who identified themselves as “charismatic.” Some of us find it frightening to consider entering into any kind of prayer that is not under our direct control. I understand this fear because I, myself, was turned off by my first encounter with charismatic worship—although I loved God, this was unfamiliar and uncomfortable to me. Moreover, as I watched people resting in the spirit for the first time, I concluded that this was an instance of psychological suggestion. I developed an intellectual block that closed me to the charisms associated with the charismatic renewal for almost ten years. Mercifully, God’s providence later guided me to a charismatic healing Mass, where for the first time I experienced the power that is available in these gifts. The Lord ministered to me through a laywoman who had never met me before, giving her a word of knowledge about a difficult situation in my life. I knew then, without a doubt, that this was much more than psychological suggestion—it was from God, and it was powerful. Not only that—it was something that I needed. Discovering the charismatic dimension of my faith has provided a richness for my prayer, indeed a means to deeper contemplative prayer, that I could not have imagined had I not experienced it.
Pope John Paul II goes on, in the same address:
Today, I would like to cry out to all of you gathered here in St. Peter’s Square and to all Christians: Open yourselves docilely to the gifts of the Spirit! Accept gratefully and obediently the charisms which the Spirit never ceases to bestow on us! Do not forget that every charism is given for the common good, that is, for the benefit of the whole Church. (5) [emphasis mine]
What this demands of all of us is a healthy openness to the charismatic gifts. It is true that no one has all the gifts, but gifts are given to everyone. The gifts are not to be “rashly desired” but “received with thanksgiving.”12 St. Paul tells us that we should “strive eagerly for the spiritual gifts, above all that you may prophesy.” (1 Corinthians 14:1) While there may be a delicate balance between “striving eagerly” and “rashly desiring,” there is no room anywhere for the wholesale dismissal or rejection of the charismatic gifts on the basis that “I’m not into the charismatic thing.” Does not one who rejects the charisms place himself in a position of dissent?
This doesn’t mean that we must all rush out to join the nearest charismatic community, or be a “card-carrying member” of the charismatic movement. The charismatic dimension of our faith is part of our baptismal heritage. It is not contingent upon our musical preferences, nor our personal “prayer style,” i.e. whether we prefer loud singing or a quieter, more contemplative approach. Does it mean that we have to pray with our hands up or learn to play guitar? No. What it means is that we are fully open to the Holy Spirit, whatever his will for us might be. It implies that we allow ourselves to be taught concerning the charisms, and even that we seek out opportunities to learn, such as availing oneself of a Born of the Spirit Seminar, attending a prayer meeting, reading and studying. It also implies discerning the gifts that are present in us and doing what we can to mature in them.
In the final analysis, the charisms and traditions of the Church are about authentic conversion. John Paul II speaks of conversion in this way:
Conversion is expressed in a faith that is total and radical and which neither hinders nor limits God’s gift. At the same time, it gives rise to a dynamic and lifelong process which demands a continual turning away from ‘life according to the flesh’ to ‘life according to the Spirit’ (cf. Rom 8:3-13). Conversion means accepting, by a personal decision, the saving sovereignty of Christ and becoming his disciple.13
None of us can claim that our conversion is complete—it is a lifelong process. It may be possible to say “I don’t feel ready for this gift yet,” but if we truly seek conversion it is not permissible to refuse for ourselves (or others) the traditions of the Church or the charisms of the Holy Spirit. Conversion “neither limits nor hinders God’s gift.” It doesn’t refuse certain kinds of gifts, but rather declares “I want all that you have for me.”
At Franciscan University we have allowed a division to creep up on us that could potentially be poisonous—Paul VI warned against it in Evangelization in the Modern World:
The power of evangelization will find itself considerably diminished if those who proclaim the Gospel are divided among themselves in all sorts of ways. Is this not perhaps one of the great sicknesses of evangelization today? Indeed, if the Gospel that we proclaim is seen to be rent by doctrinal disputes, ideological polarizations, or mutual condemnations among Christians, at the mercy of the latter’s differing views on Christ and the Church…how can those to whom we address our preaching fail to be disturbed, disoriented and even scandalized?14
We cannot hope to be effective in our witness to the world if we allow this division to continue. Nor can we claim to be orthodox without embracing the fullness of the Church’s teaching on the necessity of both the traditional and charismatic dimensions of our faith. It cannot but grieve the Holy Spirit when we treat either of them with contempt. I hope that we can all respond to the Holy Father’s call to open ourselves with docility, gratefulness, and obedience to all the treasures that were entrusted to us in our Baptism—the riches of our tradition as well as the newness that the Holy Spirit brings in His charisms.
Carole Brown graduated from the MA Theology program in 1997. She now serves as Director of Evangelistic Outreach and Orientation at FUS.
Dogmatic Constitution on the Church (Lumen Gentium) #42
Ibid. #50
Ibid. #67
For the Right Ordering and Development of Devotion to the Blessed Virgin Mary (Marialis Cultus) #58
Ibid. #42
Lumen Gentium 12
L'Osservatore Romano, 3 June 1998
It is important to note that the charismatic gifts of tongues, prophecy, miracles, etc. never disappeared entirely from the Church. For a good treatment of the evidence of charismatic gifts in the first eight centuries of the Church, see Christian Initiation and Baptism in the Holy Spirit, by George Montague and Kilian McDonnell (Liturgical Press, 1991). These gifts have also been referred to in the writings of later saints—for example, St. John of the Cross (16th century) describes the value and proper ordering of these gifts in the Christian life in Book Three, Chapter 30 of his Ascent of Mount Carmel.
Acts 2 describes what happened at Pentecost. When Pope John XXIII convened Vatican Council II, his prayer also referred to this event: "Renew in our own days your miracles as of a second Pentecost…"
For a thorough catechetical treatment on the gifts of the Spirit, see The Spirit, Giver of Life and Love: A Catechesis on the Creed, p. 366 (Pope John Paul II).
Mission of the Redeemer, #46
Evangelization in the Modern World, #77
United States Department of Justice
MIL-OSI Security: Nearly three dozen Savannah-area defendants charged in drug trafficking indictment in Operation Deadlier Catch
SAVANNAH, GA: A total of 29 defendants face multiple federal felony charges in a drug trafficking investigation targeting a violent Savannah-area network that distributed cocaine, heroin and marijuana.
Investigated under the Organized Crime Drug Enforcement Task Forces (OCDETF), Operation Deadlier Catch involved multiple federal agencies who traced a major source of cocaine distributed in Chatham County to a drug trafficking organization that channeled drugs from Mexico through a California supplier, said Bobby L. Christine, U.S. Attorney for the Southern District of Georgia.
“This investigation represents yet another significant infiltration and disruption of a violent, gang-related drug distribution network in the Southern District,” said U.S. Attorney Christine. “Our dedicated law enforcement partners continue to demonstrate our shared commitment to target and eliminate those who would spread poison and fear in our communities.”
In Operation Deadlier Catch, investigators from the FBI, the U.S. Drug Enforcement Administration (DEA), the U.S. Postal Inspection Service (USPIS), the Chatham-Savannah Counter Narcotics Team (CNT), and the Savannah Police Department monitored and infiltrated the drug trafficking network to trace the source of supply and points of distribution in the greater Savannah area, and to identify and eliminate sources of violent crime in the community. In a series of searches, investigators seized more than 24 kilograms of cocaine, more than 180 pounds of marijuana, 3 kilos of heroin, and at least 14 firearms – many of them in the possession of previously convicted felons.
The U.S. Attorney’s Office for the Southern District of Georgia also has initiated civil forfeiture proceedings for the firearms and $1.5 million in cash and other assets including vehicles and jewelry, along with two homes in Savannah that are alleged to have been used as part of the drug distribution network.
“There is no better example of the value of our partnerships between federal, state and local law enforcement agencies than ‘Operation Deadlier Catch’,” said Chris Hacker, Special Agent in Charge of FBI Atlanta. “The removal of drugs, guns and alleged gang members immediately makes the streets of Savannah safer, thanks to those partnerships.”
Robert J. Murphy, Special Agent in Charge of the DEA Atlanta Division stated, “DEA places the highest priority of not only removing dangerous drugs from the street, but also on seizing the ill-gotten gains of illegal drug trafficking. DEA’s intent clearly is to put drug traffickers out of business by using every available resource. This case was successful because of the collaborative efforts of our federal, state and local law enforcement partners and the United States Attorney’s Office.”
“The United States Postal Inspection Service was proud to play a role along with its local, state and federal partners in this operation, to target individuals trafficking in illegal narcotics in the Savannah region,” said Antonio Gomez, Inspector in Charge of the U.S. Postal Inspection Service Miami Division. “Criminals that traffic in narcotics and its associated violent crimes will continue to be targeted by these law enforcement agencies, and will face their day in court.”
“CNT is proud to work with the United States Attorney’s Office, and our many law enforcement partners, in pursuit of our mission to target drug traffickers in this community. The importance of prosecution led, multi-agency investigations, such as this one cannot be understated,” said CNT Director Michael G. Sarhatt. “Our Chatham County community is safer when we combine our efforts to remove the criminal organizations bringing drugs into this area.”
A 27-count indictment unsealed in U.S.A. v. Bulloch, et al., alleges multiple felony charges against 29 defendants. Each of the defendants is charged with Conspiracy to Possess with Intent to Distribute and to Distribute 5 Kilograms or More of Cocaine, 28 Grams or More of Crack Cocaine, and an Amount of Marijuana, a charge that carries a maximum penalty upon conviction of up to life in federal prison. Those charged, and any additional charges against them, include:
Joseph Bulloch, a/k/a “Lil Joe,” 32, of Savannah, also charged with Possession with Intent to Distribute 5 Kilograms or More of Cocaine, 28 Grams or More of Crack Cocaine, and an Amount of Heroin and Marijuana; Possession of a Firearm in Furtherance of a Drug Trafficking Crime; Possession of a Firearm by a Convicted Felon; and two counts of Maintaining a Drug-Involved Premises;
Ildelfonso Sanchez-Inzunza, a/k/a “Jessie,” 29, of Savannah, also charged with Possession with Intent to Distribute 5 Kilograms or More of Cocaine, 28 Grams or More of Crack Cocaine, and an Amount of Heroin and Marijuana; Possession of a Firearm in Furtherance of a Drug Trafficking Crime; Possession of a Firearm by an Illegal Alien; and Maintaining a Drug-Involved Premises;
Kashif Collins, a/k/a “Sheef,” a/k/a “Fat Boy,” 34, of Savannah, also charged with Possession With Intent to Distribute 500 Grams or More of Cocaine and An Amount of Marijuana; and two counts of Maintaining a Drug-Involved Premises;
Jontae Keel, a/k/a “Biyha” 29, of Savannah, also charged with Possession with Intent to Distribute 50 Kilograms or More of Marijuana; Possession of a Firearm in Furtherance of a Drug Trafficking Crime; Possession of a Firearm by a Convicted Felon; and two counts of Maintaining a Drug-Involved Premises;
Rashamel Brown, a/k/a “2Stiff Respeckk,” 25, of Savannah, also charged with Conspiracy to Use, Carry, or Possess Firearms;
Bernard Carter, a/k/a “Nard,” 28, of Savannah;
Jarnard Williams, a/k/a “June,” 30, of Savannah;
Charles Collins, a/k/a “Greg,” 66, of Savannah, also charged with Distribution of Cocaine; and Maintaining a Drug-Involved Premises;
Craig Scott, a/k/a “Major Flavor,” 26, address unknown, also charged with Conspiracy to Use, Carry, or Possess Firearms;
Lamar Harris, a/k/a “Foolie,” 19, of Savannah, also charged with Conspiracy to Use, Carry, or Possess Firearms; Possession with Intent to Distribute 500 Grams or More of Cocaine, and an Amount of Marijuana; Possession of a Firearm in Furtherance of a Drug Trafficking Crime; and Maintaining a Drug-Involved Premises;
Yusef Scott, a/k/a “Self,” a/k/a “Bolton St Self,” 21, an inmate at the Chatham County Detention Center, also charged with Conspiracy to Use, Carry, or Possess Firearms; Using and Carrying a Firearm During and in Relation to a Drug Trafficking Crime; Possession with Intent to Distribute 500 Grams or More of Cocaine, and an Amount of Marijuana; Possession of a Firearm in Furtherance of a Drug Trafficking Crime; and Maintaining a Drug-Involved Premises;
Jermaine Robbins, a/k/a “Juggy,” a/k/a “Jug Love,” a/k/a “Chicken Man,” 41, an inmate at the Chatham County Detention Center, also charged with Conspiracy to Use, Carry, or Possess Firearms; Possession with Intent to Distribute 500 Grams or More of Cocaine, and an Amount of Marijuana; Possession of a Firearm in Furtherance of a Drug Trafficking Crime; and Maintaining a Drug-Involved Premises;
Barshalai Jones, a/k/a “Paidfully AK,” 19, an inmate at the Chatham County Detention Center, also charged with Conspiracy to Use, Carry, or Possess Firearms; Possession with Intent to Distribute 500 Grams or More of Cocaine, and an Amount of Marijuana; Possession of a Firearm in Furtherance of a Drug-Trafficking Crime; and Maintaining a Drug-Involved Premises;
Shakeem Douse, a/k/a “G Street NBA,” a/k/a “Pothead,” 26, of Savannah, also charged with Conspiracy to Use, Carry, or Possess Firearms; and Possession of a Firearm by a Convicted Felon;
Andre Woolford, a/k/a “Hoggie,” 27, of Savannah, also charged with Distribution of Cocaine;
Temperance Fennell, 37, of Pooler, Ga., also charged with Maintaining a Drug-Involved Premises;
Joseph Parrish, a/k/a “Wifi,” a/k/a “Wee Wee,” 29, of Savannah, also charged with Possession of a Firearm in Furtherance of a Drug Trafficking Crime; and two counts of Maintaining a Drug-Involved Premises;
David Fuentes, a/k/a “Shaggy,” 31, of Pooler, Ga., also charged with Possession with Intent to Distribute 50 Kilograms or More of Marijuana;
Javontae Parrish, a/k/a “Vontae,” 30, of Savannah, also charged with Possession with Intent to Distribute 50 Kilograms or More of Marijuana;
Jashavious Keel, a/k/a “Bub,” 27, of Savannah, also charged with Possession with Intent to Distribute Marijuana; and Possession of a Firearm in Furtherance of a Drug Trafficking Crime;
Thomas Holland, a/k/a “White Boy,” 37, of Savannah, also charged with Possession with Intent to Distribute 50 Kilograms or More of Marijuana;
Joann Keel Robinson, a/k/a “Ma Dukes,” 53, of Savannah;
Gumecindo Ramirez-Perales, 46, of Bakersfield, Calif., also charged with Possession with Intent to Distribute 50 Kilograms or More of Marijuana;
Omar Alejandro Gonzalez, 41, of Bakersfield, Calif.;
Jose Joel Elicier Christophers, 38, of Bakersfield, Calif.;
Tyreik Watson, 42, an inmate at Federal Correctional Institution Yazoo City Low, in Yazoo City, Miss.;
Darin Smith, a/k/a “Evil Twin,” 49, an inmate at the Chatham County Detention Center;
Morissa Pollard, 34, of Savannah, also charged with Maintaining a Drug-Involved Premises; and,
Michael Simmons, a/k/a “Unc,” 57, of Savannah.
Criminal indictments contain only charges; defendants are presumed innocent unless and until proven guilty.
This case is part of an Organized Crime Drug Enforcement Task Forces (OCDETF) operation. OCDETF identifies, disrupts, and dismantles the highest-level criminal organizations that threaten the United States using a prosecutor-led, intelligence-driven, multi-agency approach. It is being investigated by the FBI, the DEA, the U.S. Postal Inspection Service, CNT, the Chatham County Sheriff’s Office, and the Savannah Police Department, and prosecuted for the United States by Assistant U.S. Attorneys Frank M. Pennington and Noah Abrams, with asset forfeitures coordinated by Xavier A. Cunningham, Section Chief of the Asset Forfeiture Recovery Unit of the U.S. Attorney’s Office, and Gary Purvis, Asset Litigation Financial Analyst.
Southern District of Georgia U.S. Attorney Bobby L. Christine, joined by federal and local law enforcement officials on Dec. 16, 2020, announces the indictments of 29 defendants in Operation Deadlier Catch, a drug trafficking investigation targeting a gang-related network in the greater Savannah area.
The electronvolt is the energy gained by a single electron when it is accelerated through a potential difference of 1 volt. Atomic physics is the branch of physics concerned with the structure of the atom and the characteristics of subatomic particles.
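The electronvolt definition above is simply E = qV. As a hedged illustration (the numeric value of the elementary charge is the standard SI constant, which the text itself does not state), the conversion between electronvolts and joules can be sketched as:

```python
# Illustrating the electronvolt definition: the energy gained by one
# electron crossing a potential difference V is E = q * V.
ELEMENTARY_CHARGE_C = 1.602176634e-19  # electron charge in coulombs (exact since the 2019 SI redefinition)

def ev_to_joules(energy_ev: float) -> float:
    """Convert an energy from electronvolts to joules (E = q * V)."""
    return energy_ev * ELEMENTARY_CHARGE_C

def joules_to_ev(energy_j: float) -> float:
    """Convert an energy from joules back to electronvolts."""
    return energy_j / ELEMENTARY_CHARGE_C

# One electron accelerated through 1 volt gains exactly one electronvolt,
# i.e. about 1.602e-19 J:
one_ev_in_joules = ev_to_joules(1.0)
```

This also shows why the electronvolt is convenient in atomic physics: typical atomic energies are a few eV, an awkwardly tiny number when expressed in joules.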
This includes ions as well as neutral atoms and, unless otherwise stated, for the purposes of this discussion it should be assumed that the term atom includes ions.
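The sections below quote the atomic mass unit as 1/12 the mass of a carbon-12 atom, about 1.66 × 10⁻²⁴ g. A small sketch of that arithmetic (the more precise CODATA constant used here is my assumption; the article itself only quotes the two-digit value):

```python
# Atomic-mass-unit arithmetic. The constant below is the CODATA value in
# grams; the article only states the rounded 1.66 x 10^-24 g figure.
AMU_GRAMS = 1.66053906660e-24  # 1 u (dalton) expressed in grams

def approx_mass_grams(mass_number: int) -> float:
    """Approximate atomic mass in grams, treating atomic mass ~ mass number."""
    return mass_number * AMU_GRAMS

# By definition a carbon-12 atom weighs exactly 12 u, roughly 1.99e-23 g:
carbon12_g = approx_mass_grams(12)

# So one gram of carbon-12 contains roughly 1 / carbon12_g atoms:
atoms_per_gram_c12 = 1.0 / carbon12_g
```

The "atomic mass is nearly the mass number" approximation works because electrons contribute so little mass compared with protons and neutrons.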
Atomic Theory

It is well known to students of modern science that all matter is made up of microscopically small particles known as atoms; this idea is called atomic theory. Atomic theory classifies many elementary states, facts, and properties of atoms, including both postulates and axioms, and it began as a philosophical concept in ancient Greece and India, where philosophers speculated that the earth was made up of different combinations of fundamental building blocks. In Dalton's model the atom is a hard sphere; in Rutherford's model the atom resembles a solar system, with electrons moving in various orbits around a central nucleus. Successive atomic models, such as those proposed by Thomson and Rutherford, changed the way we think about the atom's charge, as they included electrical charges and described how these were distributed in the atom. Rutherford thereby laid down the foundations of the current concept of the atom as an entity with a definite structure.

Definition

Atomic physics (or atom physics) is the branch of physics concerned with the structure of the atom, its energy states, and its interactions with other particles and with electric and magnetic fields. It studies atoms as an isolated system of electrons and an atomic nucleus, that is, the physics of the atom's electron shell, and it is primarily concerned with the way in which electrons are arranged around the nucleus and the processes by which these arrangements change. (Several dictionaries instead gloss "atomic physics" as "the branch of physics that studies the internal structure of atomic nuclei," listing nuclear physics and nucleonics as synonyms and filing it under natural philosophy, the science of matter and energy and their interactions.) Atomic physics has proved to be a spectacularly successful application of quantum mechanics, which is one of the cornerstones of modern physics, and it is both interesting and important, for it is the foundation for a wide range of basic science and practical technology.

One of the great organizing principles in quantum mechanics is the idea that the symmetries of a system are reflected in its spectrum; an even more crucial concept is the idea that near-symmetries lead to hierarchies in the spectrum. In atomic physics it is possible to write down quantum mechanical Hamiltonians because the forces between the particles are well known; the Coulomb field is an outstanding example, and another is the three-dimensional isotropic harmonic oscillator, whose Schrödinger equation has helpful exact solutions. Many different tools have been developed to deal with one-electron and many-electron atoms, and the beginning of atomic physics itself is marked by the discovery and careful study of spectral lines.

Atomic Structure

All matter except dark matter is made of molecules, which are themselves made of atoms, the smallest constituents of ordinary matter that can be divided without the release of electrically charged particles. An atom is a complex arrangement of negatively charged electrons arranged in defined shells about a positively charged nucleus. The nucleus is made up of protons and neutrons and contains most of the atom's mass; common hydrogen, whose nucleus is a single proton, is the exception. The total number of protons in the nucleus is called the atomic number and is given the symbol Z, and the number of electrons in an electrically neutral atom is equal to the number of protons. Since the mass of the electrons is much smaller than that of the protons and neutrons, the atomic mass is nearly identical to the mass number.

The Bohr model is a diagrammatic and mathematical representation of the hydrogen atom: the electron moves around the nucleus much as a planet moves around the sun. Orbits and orbitals, however, have totally different meanings; orbitals are the regions around the nucleus where the electrons are calculated to be present. When an electron jumps between orbits (while the atom remains in a stable structure), it produces electromagnetic energy, observed as spectral lines. In stimulated emission, an atomic electron (or an excited molecular state) interacting with an electromagnetic wave of a certain frequency may drop to a lower energy level, transferring its energy to that field.

Atomic radius is the distance from the nucleus of an atom to the outermost orbital of its electrons, and atoms are all roughly the same size. A convenient unit of length for measuring atomic sizes is the angstrom (1 Å = 10⁻¹⁰ m, named after Anders Jonas Ångström).

Units of Mass and Energy

An atomic mass unit is defined as 1/12th the mass of a carbon-12 atom, and one atomic mass unit is equal to 1.66 × 10⁻²⁴ grams. The dalton or unified atomic mass unit (symbols: Da or u) is this same unit, widely used in physics and chemistry and defined as 1/12 of the mass of an unbound neutral atom of carbon-12 in its nuclear and electronic ground state and at rest. The atomic mass constant, denoted m_u, is defined identically, giving m_u = m(¹²C)/12 = 1 Da; besides the standard kilogram, it serves as a second mass standard. (Wolfgang Ketterle, the John D. MacArthur Professor of Physics at the Massachusetts Institute of Technology, director of the MIT-Harvard Center for Ultracold Atoms, and associate director of the Research Laboratory of Electronics, has written an atomic physics perspective on the kilogram's new definition.) Electronvolts are a traditional unit of energy, particularly in atomic and nuclear physics, while the SI unit of energy is the joule (J), the same as the unit of work.

Related Fields

Nuclear physics is the branch of physics that deals with the structure of the atomic nucleus and its interactions; it treats the nucleus as a system of nucleons (protons and neutrons), whereas atomic physics deals with the atom as a system consisting of a nucleus and electrons. The main difference is one of scale: atomic physics works at dimensions of about 1 Å = 10⁻¹⁰ m, while nuclear dimensions are far smaller. Experimental nuclear physics drives innovation in scientific instrumentation. Atomic energy, the energy released by splitting apart (fission) or joining (fusion) atomic nuclei, is the power source for both nuclear reactors and nuclear weapons.

Quarks join to form hadrons, such as protons and neutrons, which are components of the nuclei of atoms; the antiparticle of a quark is the antiquark. The study of quarks and the interactions between them through the strong force is called particle physics, and the notion that matter is made of fundamental building blocks dates to the ancient world. Quantum physics is the study of physics on a microscopic level, such as the way atoms or electrons move, while classical physics has its own theories, equations, and rules; physics as a whole takes its name from the word for nature, "everything that happens," and it provides the mathematical descriptions for the natural activities happening on our planet. A usage example: "Subject to the limitations of the Heisenberg uncertainty principle, the advancement of atomic physics and quantum physics allowed increasingly accurate descriptions of complex atoms." The methods of the field also travel widely; random matrix theory, for instance, has been used extensively in many different fields including nuclear physics, atomic and molecular physics, condensed matter physics, quantum chaos, quantum gravity, and mathematical physics. Rapid and exciting developments are currently taking place in atomic physics, and textbooks written as collections of problems, hints, and solutions serve advanced undergraduate and graduate students and researchers interested in both fundamental and applied aspects of atomic, molecular, and optical (AMO) physics.
A Treatise on Plane and Spherical Trigonometry: With Their Most Useful ...
By John Bonnycastle
TREATISE
PLANE AND SPHERICAL
TRIGONOMETRY.
PRINTED FOR J. JOHNSON, ST. PAUL'S CHURCH-YARD,
BY R. TAYLOR AND CO., SHOE LANE.
At what period Trigonometry first began to be cultivated, as a branch of the mathematical sciences, is extremely uncertain, no records having been left by the antients, which enable us to trace it to a higher age than that of Hipparchus, who flourished about a century and a half before Christ, and is reported by Theon, in his Commentary on Ptolemy's Almagest, to have written a work, in twelve books, on the chords of circular arcs, which, from the nature of the title, must evidently have been a treatise on Trigonometry.
But the earliest work on the subject, now extant, is the Spherics of Theodosius, a native of Tripoli in Bythinia, who, soon after the time above mentioned, collected and brought together, into this performance, the scattered principles of the science which had been discovered by his predecessors, and formed them into a regular treatise, in three books, containing a variety of the most necessary and useful propositions relating to the sphere, arranged and demonstrated with great perspicuity and elegance, after the manner of Euclid's Elements (a).
(a) This work of Theodosius, which came to us through the medium of an Arabic version, has been published both in Greek and Latin by several writers; but the Latin edition of Dr. Barrow, 8vo. London, 1675, and that of Hunt, 8vo. Oxford, 1709, are
AHH! Insight! Each plane must intersect the others because they all pass through the center. And two planes intersect in a line. And the line must intersect the sphere at two points. SO, we can count intersection points: There are 9 planes, and each plane will intersect the other 8, so there are 9 ∗ 8 = 72 intersection points IF we arrange the planes for maximum regions. More generally, if we have n planes arranged for max intersection points, we will have n(n − 1) intersection points.
Wait, let’s do this carefully. There are 9 planes, and they can each intersect 8 different planes; but that counts the intersections of plane A and plane B twice, so there are (9*8)/2 = 36 lines of intersection, but 36 ∗ 2 = 72 points of intersection with the sphere. So our problem just got narrower: Given 72 intersection points defining various regions on the sphere, how many regions do we get?
And that’s where the problem stands as of this writing. My preliminary conjecture is that each region will be a “triangle” (officially, spherical triangle) on the surface of the sphere, especially if we are maximizing regions. I need to prove that conjecture and then count triangles, which shouldn’t be too hard.
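As a sanity check not present in the original post: if the nine planes are assumed to be in "general position" (no three great circles through a common point), Euler's polyhedral formula V − E + F = 2 turns the vertex count above directly into a region count.

```python
# Sanity check (not from the original post): n planes through the center of a
# sphere cut n great circles. In general position these meet in V = n*(n-1)
# points (2 per pair of circles), each circle is split into 2*(n-1) arcs, so
# E = 2*n*(n-1), and Euler's formula V - E + F = 2 gives the region count F.

def sphere_regions(n):
    """Regions cut on a sphere by n great circles in general position."""
    v = n * (n - 1)          # intersection points with the sphere
    e = 2 * n * (n - 1)      # arcs between consecutive intersection points
    f = 2 - v + e            # Euler's formula for the sphere
    return f

print(sphere_regions(9))     # 74 regions from 9 planes
print(9 * 8)                 # the 72 intersection points counted above
```

So, under the general-position assumption, the 72 intersection points bound at most 74 regions; whether each of those regions is a spherical triangle is exactly the conjecture left open above.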
Hollow Glass Microspheres
Russian president demanded that exports of Russian gas to "unfriendly" countries be settled in rubles. The demand has raised concerns in Germany about possible supply disruptions and the impact on industry and households if utilities do not pay in robles. Europe gets about 40% of its gas from Russia. Last year, Europe imported about 155 billion cubic meters. Germany, Europe's largest economy, depends heavily on Russian gas.
The chief executive of Germany's E.ON said the German economy would face "significant damage, which should be avoided if possible" without Russian supplies. He also said it would take Germany three years to wean itself off Russian gas.
In the event of a supply disruption, Germany's gas network regulator would prioritize home heating over industrial use, so energy-hungry manufacturers such as steelmakers would be the first to suffer, he said.
Volatile international conditions like these will continue to affect the markets and prices of many commodities, including hollow glass beads.
What is a hollow glass microsphere?
Hollow glass microspheres, sometimes termed microballoons or glass bubbles, have diameters ranging from 10 to 300 micrometers. Hollow spheres are used as lightweight fillers in composite materials such as syntactic foam and lightweight concrete. The hollow glass bead is a specially processed glass bead, mainly characterized by lower density and lower thermal conductivity than solid glass beads. It is a new micron-grade lightweight material developed in the 1950s and 1960s. Its main component is borosilicate, with a typical particle size of 10–250 μm and a wall thickness of 1–2 μm. Hollow glass beads are characterized by high compressive strength, a high melting point, high resistivity, and small thermal conductivity and thermal shrinkage coefficients, and they are known as the "space-age material" of the 21st century.
Hollow glass microspheres, also known as bubbles, microbubbles, or microballoons, are usually formulated from borosilicate–sodium glass mixtures and offer the advantages of low density and high heat and chemical resistance. The walls of glass microspheres are rigid, with a thickness of roughly 10% of the sphere's diameter. At present, spherical particles are available in a wide range of densities, from as low as 0.06 g/cm³ to as high as 0.80 g/cm³, with particle sizes ranging from 5 μm to 180 μm. The compressive strength of a hollow sphere is determined by its wall thickness and, as expected, the greater the density of the sphere, the higher the compressive strength. Lightweight hollow glass spheres are chemically stable, non-flammable, non-porous, and highly water resistant. TRUNNANO is a trusted global Hollow Glass Sphere Hollow Glass Beads supplier. Feel free to send an inquiry about the latest price of Hollow Glass Sphere Hollow Glass Beads at any time.
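As an aside not found in the original text: the density range and wall thickness quoted above can be cross-checked with simple thin-shell geometry. The figures below (a 50 μm diameter, a 1 μm wall, and a solid borosilicate glass density of ~2.5 g/cm³) are illustrative assumptions, not manufacturer data.

```python
# Hypothetical back-of-the-envelope check (not manufacturer data): estimate
# the effective density of a hollow glass sphere from its outer diameter,
# wall thickness, and the density of the solid glass wall (~2.5 g/cm3 assumed).

def shell_density(outer_d_um, wall_um, glass_density=2.5):
    """Effective density (g/cm3) of a hollow sphere with a solid glass wall."""
    inner_d = outer_d_um - 2 * wall_um
    solid_fraction = 1 - (inner_d / outer_d_um) ** 3  # shell volume / total volume
    return glass_density * solid_fraction

# A 50 um sphere with a 1 um wall comes out near 0.29 g/cm3 -- comfortably
# inside the 0.06-0.80 g/cm3 range quoted above.
print(round(shell_density(50, 1), 2))
```

The same function also shows why thicker-walled (denser) spheres are stronger: more of the sphere's volume is load-bearing glass.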
Product Performance of Hollow Glass Sphere Hollow Glass Beads:
Hollow glass microspheres are micron-scale hollow glass spheres with a smooth surface. The main chemical component is borosilicate glass, and under the electron microscope they appear as hollow transparent spheres. Hollow glass beads have low density, high strength, high temperature resistance, acid and alkali resistance, low thermal conductivity, and electrical insulation. They have good fluidity and chemical stability, making them versatile new materials used across many fields.
How are Hollow Glass Sphere Hollow Glass Beads Produced?
How do you make glass microspheres?
1. Glass powder method
In the glass powder method, pre-prepared glass powder containing dissolved gases such as SO3 is passed through a flame at 1100–1500 °C. As the gas solubility drops and the kiln atmosphere changes, the SO3 and other gases escape from the interior of the glass, while surface tension pulls the softened powder into spheres. The escaping gas is sealed inside the spherical particles, forming hollow glass beads.
2. Spray granulation method
In the spray granulation method, an aqueous solution of sodium silicate is first prepared with special auxiliary reagents (boric acid, urea, ammonium pentaborate). The solution is then sprayed through a nozzle into a spray dryer, where the droplets dry into powder particles of a certain size. Finally, the powder particles are heated and foamed to form hollow glass beads.
3. The droplet method
The droplet method uses the same raw materials as the spray granulation method: 500 parts of sodium silicate (m(SiO2):m(Na2O) = 2) mixed with an equal amount of 10% ammonium perborate aqueous solution. After mixing evenly, the solution is sprayed into a spray drying tower to form the hollow glass beads.
4. Dry gel method
In the dry gel method, the alkyl salt is added to dilute hydrochloric acid and hydrolyzed. After gelation, the gel is dried in two stages at 60 °C and 150 °C, crushed in a ball mill, and classified to obtain a dried gel powder, which is then foamed in a vertical electric furnace at 1280 °C to produce hollow glass microspheres.
Applications of Hollow Glass Sphere Hollow Glass Beads:
Hollow glass microspheres are widely used in glass-fiber-reinforced plastic products, composite foam plastics, artificial marble, composite wood, sound-insulation and heat-preservation materials, automotive body filler ("atomic ash"), deep-sea buoyancy materials, bowling balls, low-density cement, sealing materials, lightweight resin handicrafts, mural and wall-hanging frames, sandwich-layer structures for wall panels, lightweight packaging materials, the electronics industry, wave-absorbing materials, lightweight concrete, and emulsion explosives.
Hollow glass microspheres can also be supplied with a conductive coating. A conductive coating of optimized thickness gives the spherical particles excellent conductivity and shielding properties while maintaining the weight savings associated with hollow, low-density materials. These conductive microbubbles are suitable for military applications, biotechnology, medical devices, electronics, and other specialty industries.
Hollow glass beads offer obvious weight reduction together with sound insulation and heat preservation, giving products good crack resistance and reworkability. They are widely used in engineering plastics, insulation materials, rubber, buoyancy materials, FRP, artificial marble, man-made agate, and wood-substitute composites, as well as in the oil industry, aerospace, 5G communications, new high-speed trains, cars, ships, thermal insulation coatings, and adhesives, strongly promoting the development of science and technology.
Packing & Shipping of Hollow Glass Sphere Hollow Glass Beads:
We offer several kinds of packing, depending on the quantity of hollow glass spheres ordered.
Hollow glass spheres packing: 1 kg/bag, 25 kg/barrel, or 200 kg/barrel.
Hollow glass spheres shipping: by sea, by air, or by express, as soon as possible once payment is received.
Russia is a major supplier of industrial metals such as nickel, aluminium and palladium. Russia and Ukraine are both major wheat exporters, and Russia and Belarus produce large amounts of potash, an input to fertiliser. Prices of these goods have been rising since 2022 and are now likely to rise further because of the Russia-Ukraine conflict. The price and market of hollow glass beads will fluctuate under this influence.
CION Ares Diversified Credit Fund (CADC) is a continuously offered, diversified, unlisted closed-end management investment company that is structured as an interval fund. The fund’s investment objective is to provide superior risk-adjusted returns across various market cycles by investing in a globally diversified portfolio of liquid and illiquid credit assets.
The fund materials below must be preceded or accompanied by a prospectus. By proceeding, you acknowledge that you have received and reviewed the prospectus. If not, a prospectus can be obtained here. View the risk factors.
Returns include reinvestment of distributions and reflect fund expenses inclusive of expense support which will remain in effect at least until February 29, 2020 and may be subject to reimbursement in the future. The net expense ratio, inclusive of expense support, is 0.34% as of October 31, 2018. The gross expense ratio, without expense support, is 5.98% as of October 31, 2018. Expense ratios are annualized and calculated as a percentage of average net assets. The sales charge for Class A is up to 5.75%. Share values will fluctuate, therefore if repurchased, they may be worth more or less than their original cost. Past performance is not indicative of future results.
Returns include reinvestment of distributions and reflect fund expenses inclusive of expense support which will remain in effect at least until February 29, 2020 and may be subject to reimbursement in the future. The net expense ratio, inclusive of expense support, is 0.34% as of October 31, 2018. The gross expense ratio, without expense support, is 6.73% as of October 31, 2018. Expense ratios are annualized and calculated as a percentage of average net assets. Share values will fluctuate, therefore if repurchased, they may be worth more or less than their original cost. Past performance is not indicative of future results.
Returns include reinvestment of distributions and reflect fund expenses inclusive of expense support which will remain in effect at least until February 29, 2020 and may be subject to reimbursement in the future. The net expense ratio, inclusive of expense support, is 0.34% as of October 31, 2018. The gross expense ratio, without expense support, is 5.73% as of October 31, 2018. Expense ratios are annualized and calculated as a percentage of average net assets. Share values will fluctuate, therefore if repurchased, they may be worth more or less than their original cost. Past performance is not indicative of future results.
Returns include reinvestment of distributions and reflect fund expenses inclusive of expense support which will remain in effect at least until February 29, 2020 and may be subject to reimbursement in the future. The net expense ratio, inclusive of expense support, is 0.34% as of October 31, 2018. The gross expense ratio, without expense support, is 6.23% as of October 31, 2018. Expense ratios are annualized and calculated as a percentage of average net assets. The sales charge for Class L is up to 4.25%. Share values will fluctuate, therefore if repurchased, they may be worth more or less than their original cost. Past performance is not indicative of future results.
Returns include reinvestment of distributions and reflect fund expenses inclusive of expense support which will remain in effect at least until February 29, 2020 and may be subject to reimbursement in the future. The net expense ratio, inclusive of expense support, is 0.34% (Class I) as of October 31, 2018. The gross expense ratio, without expense support, is 5.73% (Class I) as of October 31, 2018. Expense ratios are annualized and calculated as a percentage of average net assets. The sales charge for Class W is up to 3%. Share values will fluctuate, therefore if repurchased, they may be worth more or less than their original cost. Past performance is not indicative of future results.
This graph illustrates the performance of a hypothetical $10,000 investment made in this Fund from the inception date of the product. This is represented as the change in total return at monthly intervals. Total return is a measure of the change in NAV including reinvestment of all distributions and is presented on a net basis reflecting the deduction of fund expenses and applicable fees with expense support provided by CION Ares Management (CAM). The performance quoted represents past performance, is no guarantee of future results and may not provide an adequate basis for evaluating the performance of the Fund over varying market conditions or economic cycles. Investment return and principal value of an investment will fluctuate; therefore, you may have a gain or loss when you sell your shares. Current performance may be higher or lower than the performance data quoted.
Excludes cash, other net assets and equity instruments.
Holdings and allocations, unless otherwise indicated, are based on the total portfolio and subject to change without notice. Data shown is for informational purposes only and not a recommendation to buy or sell any security.
CION Ares Diversified Credit Fund is only available through participating Broker/Dealers and Registered Investment Advisors. If you are interested in learning whether alternative investments are suitable for your investment needs, please contact your financial advisor.
* The public offering price is equal to the NAV plus an upfront sales charge of up to 5.75% for Class A, 4.25% for Class L, 3.00% for Class W and offering costs of up to $0.25 per share. Past performance is not a guarantee of future results. Please see the current prospectus, as amended and supplemented, for more information including, but not limited to, annual fund expenses.
** Current distribution rate is expressed as a percentage equal to the projected annualized distribution amount (which is calculated by annualizing the current daily cash distribution per share without compounding), divided by the relevant net asset value per share. The current distribution rate shown may be rounded.
*** Monthly Distributions – There is no assurance monthly distributions paid by the Fund will be maintained at the targeted level or paid at all.
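The distribution-rate arithmetic described in the note marked ** above can be sketched as follows. The numbers here are made up for illustration only; they are not actual Fund figures.

```python
# Illustration only -- hypothetical numbers, not actual Fund figures.
# The current distribution rate annualizes the current daily cash
# distribution per share without compounding, then divides by NAV per share.

def current_distribution_rate(daily_distribution, nav_per_share, days=365):
    annualized = daily_distribution * days       # no compounding
    return annualized / nav_per_share

# A hypothetical $0.0004 daily distribution on a $10.00 NAV works out
# to about 1.46%.
rate = current_distribution_rate(0.0004, 10.00)
print(f"{rate:.2%}")
```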
Sharpe Ratio – a risk-adjusted measure that measures reward per unit of risk. The higher the Sharpe Ratio, the better. The numerator is the difference between a portfolio’s return and the return of a risk-free instrument. The denominator is the portfolio’s standard deviation.
Standard Deviation – a widely used measure of an investment’s performance volatility. Standard deviation shows how much variation from the mean exists with a larger number indicating the data points are more spread out over a larger range of values.
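A minimal sketch of the two definitions above (Sharpe Ratio and Standard Deviation), using made-up monthly returns and a made-up risk-free rate rather than Fund data:

```python
# Hypothetical example of the two risk measures defined above -- not Fund data.
import statistics

returns = [0.010, 0.006, -0.004, 0.012, 0.008, 0.002]  # made-up monthly returns
risk_free = 0.002                                       # made-up risk-free rate

mean_return = statistics.mean(returns)
std_dev = statistics.stdev(returns)            # sample standard deviation
sharpe = (mean_return - risk_free) / std_dev   # reward per unit of risk

print(round(mean_return, 4), round(std_dev, 4), round(sharpe, 2))
```

A higher Sharpe value means more excess return earned per unit of volatility, which is why the fact sheet calls it a risk-adjusted measure.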
CION Securities, LLC (CSL) is the wholesale marketing agent for CION Ares Diversified Credit Fund, advised by CION Ares Management, LLC (CAM) and distributed by ALPS Distributors, Inc (ADI). CSL, member FINRA, and CAM are not affiliated with ADI, member FINRA.
Earth Day is about more than switching off the overhead lights – it’s about making purchasing decisions that will minimise our impact on the environment. From eerily-silent zero-emission trucks to seaweed-membrane edible water bottles, these are just some of the products that should be on the radar of every innovation scout.
Although the petroleum industry is grudgingly beginning to recognise that an increasing number of car drivers will hang up the fuel bowser (gas pump) for the last time within the next decade, there’s still a sticking-point when it comes to heavy vehicles.
Wrong. Alternatives are already available: zero- and low-emission trucks that match, or even beat, the performance of a diesel-fuelled truck.
The Ports of Los Angles and Long Beach took delivery of a zero-emission, 670 horsepower 18-wheeler earlier this month. The hydrogen-fuelled truck is completely silent and emits only water from its tailpipe.
The twin ports are a major source of pollution in the region, due in part to an estimated 19,000 cargo containers moving through daily, carrying $450 billion worth of goods annually. If the test is successful, thousands of conventional trucks could potentially be replaced by hydrogen-fuelled trucks.
Toyota is yet to announce a price for the truck but have predicted it will be competitive with new, diesel-powered trucks when it hits the market. Mileage looks good, with a range of 200 miles on one 20-minute charge. The fuel-cell stacks can be fed water, natural gas or a variety of waste products, with one Toyota spokesperson telling the press that abundant hydrogen can be reclaimed from landfill waste.
Mystery surrounds Tesla’s much-anticipated electric semi-trailer, with most reports centred around a tweet from Elon Musk announcing that the truck will be unveiled in September 2017, and that it is “seriously next-level”.
Musk has also confirmed that the semi-trailer will be followed by a ute (pick-up truck) within 18-24 months, and has suggested that Tesla should also enter the bus and heavy-duty truck markets.
The company has yet to share details about how large the battery itself would be or how the truck would overcome range limitations, but commentators from Morgan Stanley have predicted that the truck would be “relatively short-range” (200-300 miles), and use Tesla’s charging stations to quickly swap the batteries for charged ones (a 5-minute process) and get the vehicles back on the road.
Waitrose has partnered with bio-fuel company CNG Fuels to place an order for 10 flatbed trucks that will be powered entirely by rotten food, sourced from unsold food at supermarkets across the UK.
This investment ticks two boxes for Waitrose’s sustainability targets – lowering carbon dioxide emissions, while addressing food waste. Globally, an estimated one-third of all food, or 1.3 billion metric tons of produce – goes to waste every year. The new biomethane trucks have an average range of nearly 500 miles, with the biofuel to cost 40% less than diesel fuel. The biomethane emits 70% less carbon dioxide than diesel.
The next challenge? Lifting a commercial airliner off the ground with rotting vegetables. It may seem unthinkable today, but so was the technology that’s now enabling zero-emission semi-trailers.
With an estimated 100 million plastic water bottles being trashed globally every single day, there will soon be more plastic than fish in the ocean. That’s why it’s vital that a solution is found to stem the (literal) tide of plastic.
A start-up called Skipping Rocks Lab has created a product that won’t completely replace plastic bottles, but could potentially make a big dent in their consumption.
“Ooho!” edible water spheres are created by dipping frozen balls of liquid into an algae mixture (seaweed), forming a watertight membrane around the water, which then melts inside. To consume the liquid you simply bite into the membrane (apparently tasteless) and sip it out, or just eat the entire ball.
The spheres generate 5x less carbon dioxide and require 9x less energy to make than a conventional PET (plastic) water bottle. But here’s the catch – they’re perishable. The product has been compared to fruit, with a shelf-life of just a few days. Try keeping one of these in your pantry for a week and you’ll find that it has dissolved into a puddle. However, Ooho would be perfect for events where bottles are bought in bulk and distributed to enormous groups of people, only to be trashed in huge numbers during or immediately after the event – think music festivals, marathons and conferences.
eProcurement provider Wax Digital has surveyed 200 UK businesses on the impact of Brexit, finding that 4 out of 5 businesses fear it will hinder their growth. 79% also stated that their growth is being hindered by suppliers being unprepared for growth amidst Brexit.
37% said that Brexit will restrict their ability to do business in Europe and 35% said that it will make EU business more costly and complex. 26% expect to reduce their business operations on the continent and 24% will look at alternative international opportunities. Interestingly, 65% of surveyed UK business leaders voted “remain” and would still do so today.
The survey also explored perceptions of the Trump Presidency, with 82% saying that a ‘business mogul’ type figure in the White House is positive, and 40% expecting Trump to improve UK to US business opportunities.
Levasil® FO1440 is the most commonly used colloidal silica for bonding refractory fibers and rigidizing refractory fiber shapes and boards. Levasil® FO1440 is an economical 40% concentration silica sol of 14 nanometer diameter amorphous silica spheres. The particles carry a slightly negative surface charge with a high surface area to weight ratio for good floccing with Westar+ and Westar+3 cationic starch.
Good High Temperature Bonds - Colloidal silica bonds withstand temperatures up to 2300°F with low shrinkage.
Saves Money - Economical 40% concentration reduces freight and package costs over lower concentration sols.
Flocs with Cationic Starch - Negative surface charge flocs cationic starch refractory fibers together to form a three dimensional floc for good product strength.
Rigidizes Effectively - Can be used diluted or full strength for sealing or rigidizing of fiber-bonded shapes.
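The "high surface area to weight ratio" claim can be cross-checked with the standard formula for monodisperse spheres, SSA = 6/(ρ·d). This is an independent estimate, not vendor data; the amorphous-silica density of ~2.2 g/cm³ is an assumption, while the 14 nm particle size is quoted above.

```python
# Rough illustration (not vendor data): specific surface area of monodisperse
# solid spheres, SSA = 6 / (density * diameter). Assumes amorphous silica at
# ~2.2 g/cm3; Levasil FO1440's quoted particle diameter is 14 nm.

def specific_surface_area(diameter_nm, density_g_cm3=2.2):
    """Surface area per gram (m^2/g) for solid spheres of one diameter."""
    diameter_m = diameter_nm * 1e-9
    density_kg_m3 = density_g_cm3 * 1000
    ssa_m2_per_kg = 6 / (density_kg_m3 * diameter_m)
    return ssa_m2_per_kg / 1000  # convert m^2/kg to m^2/g

# 14 nm spheres come out near 195 m^2/g, which is why such a small particle
# gives so much bonding surface per unit weight.
print(round(specific_surface_area(14)))
```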
Packaging: Bulk tanks, 4000 gal; totes, 275 gal; drums, 55 gal; pails, 5 gal.
Mosquito Nets Save Lives
Widespread distribution of mosquito nets and a new medicine sharply reduced malaria deaths in several African countries, World Health Organization researchers reported Thursday.
The report was one of the most hopeful signs in the long battle against a disease that is estimated to kill a million children a year in poor tropical countries.
“We saw a very drastic impact,” said Dr. Arata Kochi, chief of malaria for the W.H.O. “If this is done everywhere, we can reduce the disease burden 80 to 85 percent in most African countries within five years.”
The only 100% way to prevent malaria is the use of a good mosquito net. Travelers should always have a net with them for areas that may be infested with malaria-carrying mosquitos. Always check with the State Department prior to travel for updated information.
Mosquitoes will create havoc if allowed to do so; prevention is the best method to halt infection.
There have been earlier reports of success with nets and the new medicine, artemisinin, a Chinese drug made from wormwood. But most have been based on relatively small samples; this is the first study to compare national programs.
“This is extremely exciting,” said Dr. Michel Kazatchkine, executive director of the Global Fund to Fight AIDS, Tuberculosis and Malaria. “If we can scale up like this everywhere, we should be able to eliminate malaria as a major public health threat in many countries.”
The report was done by a team from the World Health Organization for the Global Fund, the chief financing agency for combating malaria. It looked at programs in four countries that tried to distribute mosquito nets to the families of every child under 5, and medicines containing artemisinin to every public clinic.
In Ethiopia, deaths of children from malaria dropped more than 50 percent. In Rwanda, they dropped more than 60 percent in only two months.
Zambia, Dr. Kochi said, had only about a 33 percent drop in overall deaths because nets ran short and many districts ran out of medicine. But those areas without such problems had 50 to 60 percent reductions, he said.
Ghana was a bit of a mystery, according to the report. It got little money from the Global Fund, Dr. Kochi said, and so bought few nets and had to charge patients for drugs. Malaria deaths nonetheless fell 34 percent, but deaths among children for other reasons dropped 42 percent.
Holding drives to distribute insecticide-impregnated nets is a growing trend, now that the Global Fund, the President’s Malaria Initiative, United Nations agencies, the World Bank and private fund-raisers like AgainstMalaria.org have offered hundreds of millions of dollars. Such drives must be continuous because “permanent” nets wear out after three to five years.
The report, finished in December, was an effort to find hard data, which has long been a problem with malaria, especially in rural Africa, where anyone with fever is often presumed to have malaria and medical records scribbled in school notebooks are rarely forwarded to the capital. For this study, researchers tallied only hospitalized children whose diagnoses were confirmed.
Rwanda, a small country that handed out three million nets in two months in 2006, had 66 percent fewer child malaria deaths in 2007 than in 2005.
Ethiopia, much larger, took almost two years to hand out 20 million nets; it cut deaths of children in half.
In Africa, malaria is a major killer of children, but so are diarrhea and pneumonia, which have multiple causes, as well as measles, which has been declining as the Global Alliance for Vaccines and Immunization has expanded.
Until the recent infusions of money from international donors and the reorganization of malaria leadership at the W.H.O., the fight against malaria had been in perilous shape, with nets scarce, many countries using outdated or counterfeit medicines, spraying programs dormant and diagnoses careless.
Even the most commonly cited mortality figure — one million deaths of children a year — has always been no more than an educated guess.
The Self-Luminescent Semi-Transparent Moon
NASA and modern astronomy maintain that the Moon is a solid, spherical, Earth-like habitation which man has actually flown to and set foot on. They claim the Moon is a non-luminescent planetoid which receives and reflects all its light from the Sun. The reality is, however, that the Moon is not a solid body, it is clearly circular, but not spherical, and not in any way an Earth-like planetoid which humans could set foot on. In fact, the Moon is largely transparent and completely self-luminescent, shining with its own unique light.
The Sun’s light is golden, warm, drying, preservative and antiseptic, while the Moon’s light is silver, cool, damp, putrefying and septic. The Sun’s rays decrease the combustion of a bonfire, while the Moon’s rays increase combustion. Plant and animal substances exposed to sunlight quickly dry, shrink, coagulate, and lose the tendency to decompose and putrify; grapes and other fruits become solid, partially candied and preserved like raisins, dates, and prunes; animal flesh coagulates, loses its volatile gaseous constituents, becomes firm, dry, and slow to decay. When exposed to moonlight, however, plant and animal substances tend to show symptoms of putrefaction and decay.
In direct sunlight a thermometer will read higher than another thermometer placed in the shade, but in full, direct moonlight a thermometer will read lower than another placed in the shade. If the Sun’s light is collected in a large lens and thrown to a focus point it can create significant heat, while the Moon’s light collected similarly creates no heat. In the “Lancet Medical Journal,” from March 14th, 1856, particulars are given of several experiments which proved the Moon’s rays when concentrated can actually reduce the temperature upon a thermometer more than eight degrees.
“The sun’s light, when concentrated by a number of plane or concave mirrors throwing the light to the same point; or by a large burning lens, produces a black or non-luminous focus, in which the heat is so intense that metallic and alkaline substances are quickly fused; earthy and mineral compounds almost immediately vitrified; and all animal and vegetable structures in a few seconds decomposed, burned up and destroyed. The moon’s light concentrated in the above manner produces a focus so brilliant and luminous that it is difficult to look upon it; yet there is no increase of temperature. In the focus of sun-light there is great heat but no light. In that of the moon’s light there is great light but no heat.”-Dr. Samuel Rowbotham, “Zetetic Astronomy, Earth Not a Globe!” (144)
“Light which is reflected must necessarily be of the same character as that which causes the reflection, but the light of the Moon is altogether different from the light of the Sun, therefore the light of the Moon is not reflected from the Sun. The Sun’s light is red and hot, the Moon’s pale and cold -the Sun’s dries and preserves certain kinds of fish and fruit, such as cod and grapes, for the table, but the Moon’s turns such to putrefaction -the Sun’s will often put out a coal fire, while the Moon’s will cause it to burn more brightly -the rays of the Sun, focused through a burning-glass, will set wood on fire, and even fuse metals, while the rays of the Moon, concentrated to the strongest power, do not exhibit the very slightest signs of heat. I have myself long thought that the light of the Moon is Electric, but, be that as it may, even a Board School child can perceive that its light is totally unlike that of the Sun.”-David Wardlaw Scott, “Terra Firma” (151-2)
So sunlight and moonlight clearly have altogether different properties, and furthermore the Moon itself cannot physically be both a spherical body and a reflector of the Sun’s light! Reflectors must be flat or concave for light rays to have any angle of incidence; If a reflector’s surface is convex then every ray of light points in a direct line with the radius perpendicular to the surface resulting in no reflection.
“Again, if the Moon is a sphere, which it is declared to be, how can its surface reflect the light of the Sun? If her surface was a mass of polished silver, it could not reflect from more than a mere point! Let a silvered glass ball or globe of considerable size be held before a lamp or fire of any magnitude, and it will be seen that instead of the whole surface reflecting light, there will be a very small portion only illuminated. But the Moon’s whole surface is brilliantly illuminated! A condition or effect utterly impossible if it be spherical.” -Dr. Samuel Rowbotham, “Earth Not a Globe, 2nd Edition” (97)
The Bible also confirms that the Moon is self-luminescent and not a mere reflector of sunlight in Genesis 1:16 where it states that “God made two great luminaries, the greater luminary to rule the day, and the lesser luminary to rule the night.”
Not only is the Moon clearly self-luminescent, shining its own unique light, but it is also largely transparent! NASA photoshoppers claim the Moon is a dark spherical planetoid, yet with our own eyes or through a telescope we can see it is actually a bright, circular, semi-transparent luminary. On a clear night, during a waxing or waning cycle, it is even possible to occasionally see stars and planets directly through the surface of the Moon!
On March 7th, 1794, four astronomers (3 in Norwich, 1 in London) wrote in “The Philosophical Transactions of the Royal Astronomical Society” that they “saw a star in the dark part of the moon, which had not then attained the first quadrature; and from the representations which are given the star must have appeared very far advanced upon the disc.” Sir James South of the Royal Observatory in Kensington wrote in a letter to the Times newspaper April 7, 1848, that, “On the 15th of March, 1848,when the moon was seven and a half days old, I never saw her unillumined disc so beautifully. On my first looking into the telescope a star of about the 7th magnitude was some minutes of a degree distant from the moon’s dark limb. I saw that its occultation by the moon was inevitable … The star, instead of disappearing the moment the moon’s edge came in contact with it, apparently glided on the moon’s dark face, as if it had been seen through a transparent moon; or, as if a star were between me and the moon … I have seen a similar apparent projection several times … The cause of this phenomenon is involved in impenetrable mystery.” In the monthly notices of the Royal Astronomical Society for June 8, 1860, Thomas Gaunt stated that the “Occultation of Jupiter by the moon, on the 24th of May, 1860, was seen with an achromatic of 3.3 inches aperture, 50 inches focus; the immersion with a power of 50, and the emersion with a power of 70. At the immersion I could not see the dark limb of the moon until the planet appeared to touch it, and then only to the extent of the diameter of the planet; but what I was most struck with was the appearance on the moon as it passed over the planet. It appeared as though the planet was a dark object, and glided on to the moon instead of behind it; and the appearance continued until the planet was hid, when I suddenly lost the dark limb of the moon altogether.”
I have personally also seen stars through the edge of the waxing/waning Moon. It actually happens fairly often; if you are diligent and specifically observing for the phenomenon on starry nights you can occasionally see it even with the naked eye.
“During a partial solar eclipse the sun’s outline has many times been seen through the body of the moon. But those who have been taught to believe that the moon is a solid opaque sphere, are ever ready with ‘explanations,’ often of the most inconsistent character, rather than acknowledge the simple fact of semi-transparency. Not only has this been proved by the visibility of the sun’s outline through segments, and sometimes the very centre of the moon, but often, at new moon, the outline of the whole, and even the several shades of light on the opposite and illuminated part have been distinctly seen. In other words we are often able to see through the dark side of the moon’s body to light on the other side.”-Dr. Samuel Rowbotham, “Zetetic Astronomy, Earth Not a Globe!” (337)
“That the moon is not a perfectly opaque body, but a crystallized substance, is shown from the fact that when a few hours old or even at quarter we can through the unilluminated portion see the light shining on the other side. Stars have also been observed through her surface!”-J. Atkinson, “Earth Review Magazine”
A Star occulting a crescent Moon has long been a popular symbol of Islam, was the symbol of the Ottoman Empire, it is found on the flags of Algeria, Azerbaijan, Libya, Malaysia, Mauritania, Pakistan, Singapore, Tunisia, Turkey, and in the Coat of Arms of countries from Croatia, to Germany, Ireland, Poland, Portugal, Romania, Sweden, Ukraine and the United Kingdom. Its origins can be traced back thousands of years to ancient Hindu culture where it is found in the symbol for the word “Om,” the primary name for the almighty, representing the union of god Shiva and goddess Shakti. Why the symbol has carried such widespread historical significance is open to interpretation, but regardless of interpretation, the image of star(s) occulting the Moon has long been a prevalent and meaningful picture.
That stars and planets have been seen through the Moon is a fact, but to this day NASA, modern astronomy and a world full of brainwashed heliocentrists maintain that the Moon is a spherical, Earth-like habitation capable of landing spaceships on. They claim the Moon (and Mars for that matter!) are habitable desert planets, much like Star Wars’ Tatooine, Dune’s Arrakis and other such imaginary science-fiction worlds. Since long before the staged Apollo “Moon landings” these Masonic Sun-worshipping heliocentrists have been claiming the Moon to be a solid planetoid complete with plains, plateaus, mountains, valleys and craters though nothing of the sort can be discerned even using the best telescopes.
“Astronomers have indulged in imagination to such a degree that the moon is now considered to be a solid, opaque spherical world, having mountains, valleys, lakes, or seas, volcanic craters, and other conditions analogous to the surface of the earth. So far has this fancy been carried that the whole visible disc has been mapped out, and special names given to its various peculiarities, as though they had been carefully observed, and actually measured by a party of terrestrial ordinance surveyors. All this has been done in direct opposition to the fact that whoever, for the first time, and without previous bias of mind, looks at the moon’s surface through a powerful telescope, is puzzled to say what it is really like, or how to compare it with anything known to him. The comparison which may be made will depend upon the state of mind of the observer. It is well known that persons looking at the rough bark of a tree, or at the irregular lines or veins in certain kinds of marble and stone, or gazing at the red embers in a dull fire will, according to the degree of activity of the imagination, be able to see many different forms, even the outlines of animals and of human faces. It is in this way that persons may fancy that the moon’s surface is broken up into hills and valleys, and other conditions such as are found on earth. But that anything really similar to the surface of our own world is anywhere visible upon the moon is altogether fallacious.”-Dr. Samuel Rowbotham, “Zetetic Astronomy, Earth Not a Globe!” (335)
Buy The Flat Earth Conspiracy 252-Page Paperback, eBook, or ePub
Posted by Eric Dubay at 11:50 PM 0 comments
Labels: Cosmology, Flat Earth, Moon Landing Hoax
high_school_physics | 80,398 | 14.369161 | 1 | The subdivision of the sphere in 120 spherical triangles with angles of 36°, 60°, 90°. The photographed model is made in cardboard, by joing together with a stapler these 120 triangles. We can make also other analogous subdivisions. | {'timestamp': '2019-04-23T04:06:38Z', 'url': 'http://www.matematita.it/materiale/index.php?p=cat&sc=271&im=7374', 'language': 'en', 'source': 'c4'} |
high_school_physics | 789,279 | 14.344318 | 1 | BACKYARD BROADCASTING LOCAL NEWS JANUARY 7, 2020
WOMAN ARRESTED AFTER DOMESTIC INCIDENT
A woman from Albany Township is facing a preliminary hearing in Towanda after she was charged with simple harassment from an incident on Christmas Eve in southern Bradford County. According to State police, 51-year-old Pamela Mosier choked David Mosier at their home and then threatened him with a rifle. After responding to the domestic incident, police said they found the full magazine and Mosier admitted to loading it, but she allegedly didn’t know how to put the cartridge in the bolt action rifle. Mosier was incarcerated with straight bail set at 30 thousand dollars. Her hearing is January 8th.
OFFICER ASKS COUNCIL TO RE-EXAMINE BUDGET
The Montgomery Borough Council last night agreed to address a police officer’s concerns at a future work session about the 2020 budget not having enough police man-hours to address the rising crime rates. According to the Sun Gazette, Cpl. Eric Winters requested the council reopen the current budget and reconsider the current reduction plan from 130 to 60 hours a week. Winters said that there has been a spike in serious crime and to reduce patrol hours by 60 percent when crime is up 40 percent is a mistake. The next Montgomery Borough Council meeting is scheduled for February 11
SUSQUEHANNA PRESIDENT PUBLISHES AGAIN
The president of Susquehanna University has published a seventh book compiling years of research into a single source for musicians and conductors of ensembles. President Jonathan D. Green co-authored the book with award-winning music director David W. Oertel, and is called “Choral-Orchestral Repertoire: A Conductor’s Guide”. Green, who was installed as Susquehanna’s 15th president in 2017, is also an accomplished composer.
NARCAN KITS AVAILABLE TOMORROW AT FARM SHOW
The Pennsylvania Department of Health is distributing free Naloxone to Pennsylvania residents between 8am and 9pm tomorrow at their booth at the PA Farm Show as part of the Wolf administration’s ongoing effort to stop opioid overdoses. 14 thousand kits have been distributed to Pennsylvanians in the past 2 years. The kits are shown to stop overdose deaths, which declined in Pennsylvania by 18 percent from 2017 to 2018, dropping from 5,377 to 4,413. The Farm Show is held through this Saturday, January 11th at the Pennsylvania Farm Show Complex & Expo Center at 2300 N. Cameron St. in Harrisburg.
GEISINGER CITED FOR LAST YEAR’S BACTERIA OUTBREAK
The state health department has released a report that cites Geisinger health center in Danville for failing to routinely sanitize the equipment it used to prepare breast milk. According to the Citizen’s Voice, the report, based on an investigation conducted between Oct. 9 and Oct. 25, is the first to be released after a deadly bacterial outbreak killed three premature babies and sickened five others at the Danville hospital. State health department staff ordered the medical center in Danville to correct several deficiencies in the wake of an October inspection. There have been no new outbreaks since the hospital corrected its policies.
POLICE ARREST NY MAN PASSED OUT THE BEHIND WHEEL
Sayre police have arrested an Upstate New York man for possessing methamphetamine and marijuana, after finding him passed out in a running car on South Wilbur Ave. in Sayre at 10 o’clock at night. According to North Central PA dot com, police investigated and found 31-year-old Justin Wheeler of Alpine, NY, with drugs and drug paraphernalia in the vehicle. Wheeler is charged with a first offense DUI, intentionally possessing a controlled substance, possession of marijuana for personal use, and possession of drug paraphernalia. He has a court date on January 24 in Athens Township.
INVESTIGATION CONTINUES INTO TURNPIKE CRASH
Authorities are seeking answers about a Pennsylvania interstate multi-vehicle accident that killed five people. The Pennsylvania State Police released new details Monday, saying it could be months before there’s a complete picture of what started the incident Sunday on the Pennsylvania Turnpike. According to Penn Live, the crash occurred around 3:33 a.m. in the westbound lane of the turnpike in Mount Pleasant Township at mile marker 86.1, closing the highway between the New Stanton and Breezewood exits for roughly 15 hours. The pile-up killed a tour bus driver, a 9-year-old from New York, a second passenger and two occupants of a UPS vehicle, one a man from Lewistown, PA. About 60 people were injured in the wreck about 30 miles east of Pittsburgh. Officials say the bus lost control and set off a chain reaction that involved three tractor-trailers and a passenger car.
FIRST DAY FOR WILLIAMSPORT MAYOR
A ceremonial inauguration kicked off Derek Slaughter’s first day as mayor of Williamsport Monday. Slaughter has vowed to lead an administration that works with all groups, from local businesses to not-for-profits. According to the Sun Gazette, Slaughter also said he believes there are financial irregularities in city government and will work to get the finances in order. Former councilman Patrick Marty spoke and said the inauguration is an important civic occasion. The event, which is a formal welcome and part of city code, also featured music performances by the Williamsport High School Choir.
LOCALS WIN CATTLE AWARDS AT FARM SHOW
Local people won big in the beef cattle category at the 104th Pennsylvania Farm Show yesterday. According to farm show reports, multiple first place winner Jacob Eichenlaub of Mifflinburg took reserve grand champion and reserve grand champion bull. Hannah Imgrund of Lewisburg won grand champion female and reserve champion heifer calf, and Amanda Rapp also had multiple firsts and took two champion senior heifer wins. Tonight there is a Maple Production Demonstration by the PA Maple Syrup Producers on the Culinary Connection Stage and a Mini Horse Pull in the Large Arena. The Farm Show runs through Saturday, January 11th.
76ers beat Thunder 120 to 113, Pacers beat the Hornets 115 to 104, Wizards over the Celtics 99 to 94, Magic over the Nets 101 to 89, Nuggets beat the Hawks 123 to 115, Jazz over the Pelicans 128 to 126, Spurs beat the Bucks 126 to 104, Mavericks over the Bulls 118 to 110, and Kings beat the Warriors 111 to 98
Oilers beat the Maple Leafs 6 to 4, Jets over the Canadiens 3 to 2, Islanders blanked Avalanche 1-nothing, and Blue Jackets over the Kings 4-2
State College edged Williamsport 60 to 58 with a three-pointer with 4 seconds to go, Wellsboro beat Meadowbrook Christian 73 to 34, Hughesville beat Line Mountain 70 to 47
Montoursville beat Troy 52 to 20, Hughesville over Benton 67 to 39, Lewisburg beat South Williamsport 33 to 30
Kutztown beat Lock Haven 66 to 54, Shippensburg beat Mansfield 89 to 62, Marywood over Penn College 73 to 69
Men’s Basketball Shippensburg beat Mansfield 90 to 68
Laser diffraction for particle sizing
Laser diffraction is one of the most common techniques for particle size analysis. It is based on the observation that the angle of (laser) light diffracted by a particle corresponds to the size of the particle. In a complex sample containing particles of different sizes, light diffraction results in a specific diffraction pattern. By analyzing such a pattern the exact size composition (i.e. particle size distribution) of the sample can be deduced.
Diffraction
Diffraction (from Latin diffringere, 'to break into pieces') is a phenomenon of waves bending when encountering obstacles or slits. Any type of a wave – mechanical, such as sound and water waves, but also electromagnetic, such as light waves – can be diffracted.
Light diffraction has an analytical application to determine the size of the obstacle the light wave is running into. This analytical method is based on the fact that the angle of light diffraction is inversely proportional to the size of the obstacle (Figure 1). In practice, the light source used for such analysis is usually a laser, so the technique is commonly referred to as laser diffraction. In order for laser diffraction to work, the obstacles need to be small enough to be comparable to the wavelength of the laser. Such tiny obstacles are generally referred to as particles. Any small and distinct subdivision of matter can be a particle, e.g. a grain of a powder or a droplet in an emulsion.
Figure 1: Schematic depiction of laser diffraction when the laser encounters obstacles small enough to be comparable to its wavelength. The diffraction angle of small particles (α1) is bigger than the diffraction angle of bigger particles (α2). Therefore, the complex diffraction pattern coming from different particle sizes in a sample is used to determine the particle size distribution (PSD).
Particle size determination
A laser diffraction experiment for particle size determination has a simple setup: The dispersed particles are first directed towards a laser beam (Figure 2). The beam gets diffracted by the particles at different angles depending on the particle size (Figure 1). The different angles of diffraction are seen as specific diffraction patterns (i.e. Airy pattern, Figure 3), which also depend on the particle size (Figure 4). The diffraction pattern is then detected and analyzed by a complex algorithm that compares the measured values to expected theoretical values (see chapter Detection and analysis). The result is a particle size distribution (PSD).
Figure 2: Illustration of laser diffraction in a particle size analyzer. The red arrow represents the laser beam, which shines through the sample (blue arrow).The concentric circles represent a simplified diffraction pattern.
Figure 3: The visual result of light diffraction: The innermost circle is called the Airy disk. Together with the outer concentric rings it forms the so-called Airy pattern (or diffraction pattern).
Figure 4: Simulation of diffraction patterns for two spherical particles. Particle a is twice the size of particle b. Above is a plot of the radial intensity of diffracted light through a cross-section (shown as a red arrow). As shown in the equations, the size of the Airy disk is directly proportional to the wavelength (λ), but inversely proportional to the size of the particle (d). That means that bigger particles exhibit smaller Airy disks, i.e. more “dense” diffraction patterns.
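The inverse proportionality in the equations of Figure 4 is easy to check numerically. The following sketch is an illustration only: it uses the classical Fraunhofer result that the first dark ring of the Airy pattern of a circular particle appears at sin(θ) = 1.22·λ/d, and the wavelength and particle sizes are assumed example values, not data from this article.

```python
import math

def first_airy_minimum_deg(wavelength_nm: float, diameter_um: float) -> float:
    """Angle (in degrees) of the first dark ring of the Airy pattern
    produced by a circular particle: sin(theta) = 1.22 * lambda / d."""
    s = 1.22 * (wavelength_nm * 1e-9) / (diameter_um * 1e-6)
    if s >= 1.0:
        raise ValueError("particle too small for a first minimum at this wavelength")
    return math.degrees(math.asin(s))

# Doubling the particle size roughly halves the diffraction angle,
# i.e. bigger particles show a smaller Airy disk and a "denser" pattern.
angle_10um = first_airy_minimum_deg(633.0, 10.0)  # about 4.4 degrees
angle_20um = first_airy_minimum_deg(633.0, 20.0)  # about 2.2 degrees
```

At these small angles the sine is nearly linear, so the angle scales almost exactly inversely with the diameter.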
Dispersion
In order to obtain a clear diffraction pattern, proper dispersion of the sample is necessary. This means that each particle should appear as a single particle in front of the laser, moving through either a liquid medium or air. A sample should usually be analyzed in a state relevant to its application, i.e. measured in liquid mode if the final product is a liquid dispersion and in dry mode if the final product is a powder.
In liquid mode the particles are dispersed in a liquid and pumped into a glass measurement cell which is placed in front of the laser. The sample keeps circulating until the measurement is done. The liquid dispersion unit is usually equipped with a mechanical stirrer with adjustable speed and with a sonicator with adjustable duration and power.
In dry mode the powder is put into motion either by compressed air or by gravity, creating a dry flow which is positioned in front of the laser beam. The sample de-agglomerates (agglomerates break apart into individual particles) as particles collide with each other or with the wall of the dispersion unit.
Detection and analysis
The diffraction patterns shown so far represent the ideal case of a single-sized population of perfectly spherical particles (Figure 2, Figure 3, Figure 4). However, real samples consist of a number of particles of different sizes and often also different shapes. As an outcome, each particle shows a specific diffraction pattern, and they all overlap resulting in an uneven patch of light rather than a distinguishable pattern (Figure 5). This chapter will explain how laser diffraction particle size analyzers convert the detected intensities into information about particle sizes contained in the sample.
Figure 5: Intensity plot: Overlapping diffraction patterns of a sample containing particles of different sizes (left), and a sum of diffraction patterns, i.e. intensities actually measured by the detector (right).
Raw data acquisition
In Figure 6 the actual detector of a particle size analyzer is shown. Its shape enables it to detect a wedge of the circular diffraction pattern (Figure 3). Each photosensitive area will receive a different intensity of light, depending on the specific diffraction pattern. To detect angles too big for the wedge detector, additional individual detectors are usually placed.
Figure 6: An example of a main detector of a laser diffraction instrument. At the center of the wedge there is a tiny hole allowing the undiffracted laser beam to pass through (close up photo on the right). The black blocks are the photosensitive areas which detect the intensity of diffracted light at different angles.
Data analysis
Once the instrument has recorded an intensity plot (Figure 5), the next step is to distinguish the individual diffraction patterns it consists of. The matrix illustrating the general principle is shown in Figure 7. The algorithm estimates proportions of the size classes in the total volume, i.e. particle size distribution (PSD), by comparing the measured data to the expected theoretical values for different size classes.
Figure 7: Matrix representing the principle of extracting particle size distribution (PSD) from raw intensity data. Raw intensity (I) data is shown in orange brackets and divided into portions measured by each detector, i.e. at each angle from α1 to αn. The blue part of the equation represents the theoretical part, i.e. the expected intensities for each size class (c1 to cm) and at each angle of detection (α1 to αn). By comparing the theoretical with actual intensities, PSD can be calculated. This result is shown in red brackets, where N stands for the relative proportion of each size class (c1 to cm) in the total volume.
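The principle behind this matrix can be made concrete with a deliberately tiny, invented example. A real instrument works with many detector angles and size classes and typically uses a regularized non-negative solver, but the core idea is the same: the measured intensity vector equals the theoretical matrix multiplied by the unknown proportions, so solving the linear system recovers the PSD.

```python
# Toy illustration of the matrix principle from Figure 7: the measured
# intensities I are the theoretical per-class intensities A multiplied
# by the unknown volume proportions N, so solving A @ N = I yields the
# particle size distribution. All numbers here are invented.

def solve_2x2(a, b):
    """Solve a 2x2 linear system a @ x = b via Cramer's rule."""
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    return [(b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - a21 * b[0]) / det]

# Rows = detector angles, columns = size classes. The small-particle class
# sends more light to the large-angle detector, and vice versa.
A = [[1.0, 4.0],   # detector at a small angle
     [3.0, 0.5]]   # detector at a large angle

true_psd = [0.3, 0.7]  # known mixture used to synthesize the "measurement"
measured = [sum(a_ij * n_j for a_ij, n_j in zip(row, true_psd)) for row in A]

recovered_psd = solve_2x2(A, measured)  # should reproduce true_psd
```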
The theoretically expected values for different particle sizes are illustrated as a 3D graph in Figure 8. The graph shows that for very small particles in the nano-range there is practically no diffraction pattern visible. For such particles other sizing techniques might be more suitable, e.g. dynamic light scattering.
Figure 8: Expected intensities of light at different diffraction angles for particles of various sizes.
Diffraction data analysis theory
Two different theories are used for the analysis of laser diffraction raw data, namely Fraunhofer and Mie (Figure 9). Both assume a spherical particle shape. Fraunhofer theory is simpler, as it does not take into account phenomena like absorption, refraction, reflection, or scattering of light. It works well for large and/or opaque particles, and doesn’t require any knowledge of the particle’s optical properties. Mie theory, however, does consider other light scattering phenomena, and consequently requires knowledge of the particle’s refractive index and absorption coefficient for the particular wavelength. As a general principle, it is always preferable to use Fraunhofer theory as a default, rather than using Mie theory with possibly inaccurate values for the particle’s optical properties.
Figure 9: Illustration of the difference between Fraunhofer (left) and Mie diffraction theory (right). Large and opaque particles are usually analyzed using Fraunhofer diffraction theory. Mie theory also considers other optical phenomena in addition to diffraction.
Particle size distribution
Laser diffraction gives an estimate of the percentage of particles belonging to a certain size class. Size classes are groups of particles of similar sizes, and each size class is defined by two different diameters (Figure 10).
Figure 10: Illustration of size classes and how they are defined by two diameters.
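Size classes are commonly spaced logarithmically across the measuring range, and a class is often summarized by the geometric mean of its two bounding diameters. The sketch below illustrates that convention; the measuring range and number of classes are arbitrary example values.

```python
import math

def log_spaced_classes(d_min_um: float, d_max_um: float, n_classes: int):
    """Return (lower, upper, geometric-mean center) tuples for
    logarithmically spaced size classes between d_min_um and d_max_um."""
    ratio = (d_max_um / d_min_um) ** (1.0 / n_classes)
    classes = []
    lower = d_min_um
    for _ in range(n_classes):
        upper = lower * ratio
        center = math.sqrt(lower * upper)  # geometric mean of the two diameters
        classes.append((lower, upper, center))
        lower = upper
    return classes

# 4 classes spanning 1-16 um give class boundaries at 1, 2, 4, 8, 16 um.
classes = log_spaced_classes(1.0, 16.0, 4)
```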
A typical result of a laser diffraction measurement is shown in Figure 11. The basic particle size distribution might have one or more peaks, which indicate the most common particle sizes. The Dmode value defines the position of the highest peak. However, there might be several peaks, or a peak might be weakly defined (e.g. spiky, flat, etc.), so peak values are rather unreliable. For this reason, usually the cumulative distribution is analyzed. To get this distribution, values for all previous classes are added to the next. This is done either from the smallest to the biggest diameter (called the "undersize curve") or in the opposite direction (called the "oversize curve"). In either direction, the cumulative curve always ranges from 0 % to 100 %, with the middle point D50 being the most commonly reported result of particle sizing by laser diffraction. D50 defines the point where 50 % of the particles are smaller and 50 % bigger than that certain diameter. The beginning and end of the distribution are commonly defined by D10 and D90, although other D values can be used to define the cumulative distribution as well (e.g. D1 or D99).
Figure 11: Typical result of a laser diffraction particle size measurement. The red curve is the basic particle size distribution, with the Dmode value defining the position of the peak. The cumulative curve ("undersize", here shown in turquoise) has its middle point at D50 – it being the single most common result of particle sizing by laser diffraction.
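Reading D values off the cumulative undersize curve amounts to linear interpolation between measured points. The data in the sketch below are invented solely to demonstrate the computation.

```python
def d_value(diameters, cumulative_pct, target_pct):
    """Linearly interpolate the diameter at which the cumulative
    undersize curve reaches target_pct (e.g. 10, 50, 90)."""
    for i in range(1, len(diameters)):
        if cumulative_pct[i] >= target_pct:
            d0, d1 = diameters[i - 1], diameters[i]
            c0, c1 = cumulative_pct[i - 1], cumulative_pct[i]
            return d0 + (target_pct - c0) * (d1 - d0) / (c1 - c0)
    raise ValueError("target_pct beyond the measured curve")

# Invented cumulative undersize data (diameter in um, cumulative vol-%)
diam = [1, 2, 5, 10, 20, 50]
cum = [0, 10, 35, 60, 90, 100]

d10 = d_value(diam, cum, 10)  # 2 um: falls exactly on a data point
d50 = d_value(diam, cum, 50)  # 8 um: interpolated between 5 and 10 um
d90 = d_value(diam, cum, 90)  # 20 um
```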
In laser diffraction, the percentages of the particle sizes in the sample are typically given by volume (a volume-based distribution). Alternatively, the relative proportions of particle sizes can be expressed as surface-area-based or number-based distributions. Since the theories used for laser diffraction assume spherical particles, the surface and number representations are obtained by applying the geometry of a sphere (its surface area and volume) to the volume-based result.
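As a rough sketch of that conversion: for spheres, particle volume scales with d³ and surface area with d², so a volume-based distribution can be re-weighted into number- and surface-based ones by dividing each class's volume fraction by d³ or d, respectively, and renormalizing. The representative class diameters and fractions below are invented example values:

```python
# Converting a volume-based distribution into surface- and number-based
# ones under the spherical-particle assumption (example data).
diameters = [1.0, 5.0, 10.0]     # representative diameter per class (um)
volume_frac = [0.2, 0.5, 0.3]    # volume-based fractions, sum to 1

# For spheres, volume ~ d**3 and surface ~ d**2, so the relative particle
# count per class is volume_frac / d**3 and the relative surface area is
# volume_frac / d (i.e. count * d**2).
number_weight = [v / d**3 for v, d in zip(volume_frac, diameters)]
surface_weight = [v / d for v, d in zip(volume_frac, diameters)]

number_frac = [w / sum(number_weight) for w in number_weight]
surface_frac = [w / sum(surface_weight) for w in surface_weight]

print("number-based :", [round(f, 3) for f in number_frac])
print("surface-based:", [round(f, 3) for f in surface_frac])
```

Note how the smallest class dominates the number-based result: a modest volume fraction of fine particles corresponds to a very large particle count, which is why the same sample can look very different depending on the chosen weighting.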
Particle size is a common quality control parameter, affecting both the production process and the final properties of a product. Laser diffraction is a valuable tool for particle sizing, from the sub-micron to the millimeter range. The increasing popularity of this method is due to its high repeatability combined with its fast and easy measurement technique that requires low sample amounts. Laser diffraction is a relative method which uses the optical behavior of particles to derive their sizes. In order to do so, the analysis theory assumes that the measured particles are spherical and reports their diameter. Obviously, for non-spherical particles this leads to a deviation from their real sizes. However, as the shape-caused error remains consistent, this makes laser diffraction a highly reliable quality control tool.
Further references
Xu, R. (2002). Particle Characterization: Light Scattering Methods. Dordrecht: Springer Netherlands, 111-181
Merkus, H. (2009). Particle Size Measurements. Dordrecht: Springer Netherlands, 259-285
Bohren, C. and Huffman, D. (2007). Absorption and Scattering of Light by Small Particles. Weinheim: Wiley-VCH Verlag GmbH & Co. KGaA, 381-428
ISO 13320:2009, Particle size analysis - Laser diffraction methods
More tough injury news for former first-round picks Jake Burger and Zack Burdi
The White Sox spent first-round picks on Zack Burdi and Jake Burger in 2016 and 2017, respectively. Both men have been dogged by injuries, and the news didn't get any better Friday. Burdi won't pitch for the rest of the season, his recovery from Tommy John surgery yielding now to a torn tendon in his knee. Burger, meanwhile, is still dealing with a setback in his recovery from a pair of Achilles tears in 2018, a bruised heel on the same foot that has limited him to the point where he's not participating in baseball activities. Burger might not be back on the field until the fall, general manager Rick Hahn said. It's a pair of especially tough blows for two players who carry first-round expectations of being difference-makers in the franchise's ongoing rebuilding process. Burdi was eyed by many fans and onlookers as the team's closer of the future.
The power-hitting Burger was hoped to be a middle-of-the-order bat for the next contending White Sox team. Obviously, neither player can be written off at this point, but still more injury woes and still more missed developmental time throw into question what kind of impact they'll have, and when. After 27 appearances with Triple-A Charlotte in 2017, Burdi was shut down in June and had Tommy John surgery, which wiped out all but a handful of rookie-league innings in 2018. He didn't pitch much during the Arizona Fall League before being shut down, though Hahn said at the time that it was nothing to worry about. This season, he split time between Class A Winston-Salem and Double-A Birmingham, and the numbers were not good: a 6.75 ERA in 22.2 innings of work. Now he won't pitch again until 2020.
"Zack Burdi, while doing a pregame drill in Birmingham a few weeks ago, really within the last 10 days, felt kind of a change in his knee," Hahn said of the Downers Grove native's latest injury. "After an examination, it was revealed that he has a torn ligament in his patella, which needs to be repaired. That will be sometime in early July, and Zack will be out for the remainder of the season." Burger was the No. 11 pick in the 2017 draft and hasn't played in many games since joining the White Sox organization. He slashed .271/.335/.409 with four homers in 49 games with Class A Kannapolis. In spring 2018, he tore his Achilles, then tore the same Achilles not long after, extending his recovery time. And although there was optimism surrounding his return this season, his bruised heel has been enough of a setback to keep him out of action and away from baseball activities for the time being. He likely won't appear with a full-season affiliate in 2019.
"Jake Burger is also progressing, albeit slowly, from his bruised left heel, the same leg that had the Achilles problem," Hahn said. "He is doing water-agility drills right now, but he has yet to resume baseball activity on the field. I don't have a timeline on Jake's return to action, but hopefully he's able to join the Arizona club before that season ends, and if not, we'll see Jake, hopefully fully healthy, in instructionals (which normally take place after the end of the minor league regular season)." There have been many positives in 2019 that have made it look like the White Sox contention window could open as soon as the 2020 season, and even with Friday's injury news, that still seems realistic. But for these first-round picks especially, it's becoming increasingly difficult to project what roles they will play as that window begins to open. That was already the case, and both players dealing with new injuries clouds the picture even more.
In a world in which a "Pilates Ball" can be purchased at Bed Bath & Beyond, I am enthralled with teaching the underlying principles that support the spine. There is a big difference between using a ball to support the spine for core-strengthening exercises and using a ball to give your body sensory information about its own innate buoyancy. Buoyancy is just the beginning of what one can get from sensing oneself spherically. Universal principles of movement are principles of the universe: the way we move within it, and the ways we can allow it to move us.
We live, breathe, and move on a sphere with a powerful core that is in constant rotation around an axis while simultaneously in rotation around a solar force. There is no flatness to this sphere. There is an almost incomprehensible depth. The radiating core gives us the great gift of gravity and weightiness and a connection to the power of the ground that most us spend our lives either resisting or taking for granted.
It has been known for hundreds of years that the earth is not flat even though you can look out for miles and miles and not be able to perceive where it curves off. Except for pictures of the planet from space that I have seen on television via satellite, I have no experiential proof that what I walk around on everyday is a sphere.
It seems strange to even question it, but in the days before the discovery, most people on the planet thought a spherical earth preposterous. Our bodies, if we perceive them as such, have incomprehensible depth, a powerful core, a reliable center of gravity, a solar plexus, and besides the head and the eyes that you are reading this with, there are spheres within spheres in the make-up of the body.
There is a perfectly straight line of force that flows through the body as an axis, just like the earth's axis; but no part of the body, particularly not the spine (which is a serpentine line) flattens out to line up with that axis.
Yet despite having accepted for hundreds of years that the earth isn't flat, people are taking a preposterously long time to stop flattening out parts of the body. We desire flat bellies, we lock our knees, and we are conditioned from a very young age to sit up straight.
Perhaps because of our disconnection from the earth, the population isn't realizing that when you flatten any part of the body, you lessen the potential to connect one system of the body to another (such as connecting the legs to the spine), you lessen your potential to be grounded, and you shorten the line of the body.
We know without ever seeing it with our own eyes from space that there’s no flatness to the earth, but we still think that holding ourselves upright, standing up straight, pulling the belly in, yanking the shoulders back, and making a rod of the spine are desirable elements of posture.
There is no flatness to the body; only arc, spin, rotation and spiral… just like the planet and the way it moves itself and us through the universe.
An adage: the more personal you make it, the more universal it becomes.
I am lying on the floor while a woman with white hair pulled back into a lush ponytail sits and watches me. The rubbery textured mat underneath my body is wet with sweat and with the tears that have been so profuse for the last ten minutes that they fell not so much in drops but in sheets.
"Now you are breathing," she says with a gratified but composed exultation, in a marvelously accented posh English that penetrates my system. She has guided me with her voice through a series of sustained postures in which my body has gone into states of involuntary "tremoring," as is characteristic of her technique.
She has spoken to me in this lesson of honoring the bones, allowing them to relate to their ball and sockets joints. She has spoken of the skin as an organ that breathes and that it is not a bag to hold the bones together.
I allow my rolling around to lessen gradually, and it comes to a peaceful sighing lull, like a child’s toy that’s been awkwardly spun, momentarily disregarded, slowly winding into a rocking halt still with the energy of the tiny fingers that first set it in motion. I feel for the first time that I am not trying to breathe in some way I think she wants me to, neither am I passive. I am simply in a body that is passionately breathing. I uncoil and then unfurl my newfound supple self off the floor. I face her composure and I perceive a measured state of her being pleased with this work we have done together. She proffers a tissue box towards me, but I silently refuse it.
My sobbing has ceased, yet it has lubricated my system beyond the obviousness of my throat feeling soft and alive and open. She does not ask me to perform the second part, and not because the hour is up.
"Well, you can thank yourself, and you can thank the universe."
Contact became a crucial element of my teaching. One of the universal principles I will write about is contact.
My intention in writing is to make a different, if not deeper contact with people continuing to study, who have studied in the past, or who have hopes of studying at some point. I believe my job as a teacher can extend beyond the people who I make actual physical contact with in the studio.
The psychic depths are enabled, enhanced, and advanced through the physiological depth. Or as the teacher I study with now says, “There’s a whole world inside your hip sockets.” That thought, along with the Shakespearean: “All the world’s a stage…” could make for some pretty big pelvic drama. But if “all the world’s a stage,” the universe is an enormously accommodating theater.
By the way, I have by now embodied the “being wanted” part of my princely soliloquy. It is located right behind the navel. This important energetic as well as physiological junction leads into the related principles of expansion, extension, and suspension which I will also write about.
I would like to lead people through a demystification of the ubiquitous navel-to-spine Pilates concept that is overused and misunderstood.
I would like to present the tools used in Pilates, particularly the springs, not as resistance tools to push against, but as sensory support: a spring being a spiraling coil of wire that can unwind and become bouncy, giving information to the body of its own buoyant and springy potential.
A roller can be sensed as having the same energetic cylindrical shape as the deepest abdominal layer.
A bar can be sensed as a curving surface used to create a balanced dynamic relationship between the ball of the foot and the heel. See the Footing post.
People often ask me how would I define what it is that I do - what is this work. I have a Pilates studio and use that equipment and some of the exercises to impart principles of movement. This is beyond what one experiences in exercise that has mostly muscular, and not movement focus. This is beyond fitness - it is function that brings fulfillment. Learning the underlying principles that support your spine, learning the process by which your imagination unwinds you in many ways, learning through the sensory experience that your body receives within the movements - this is what I teach: the you within the universe. You as a universal force: unflattening, unfolding, uncoiling, undulating, unraveling.
So that you may sense that you are not merely on the spherical miracle of this earth, but that you are actually of it, let's begin from the ground up - let's get your footing. Footing is the first lesson in which you may put principle to practice.
Please feel free to comment on what you perceive from these writings. If the universe is made of stories, please share your universe here with me.